Natural algorithms and the sciences

Today, I am passing through New York City on my way to Princeton’s Center for Computational Intractability for a workshop on Natural Algorithms and the Sciences (NA&S). The two-day meeting will cover everything from molecular algorithms for learning and experiments on artificial cells to bounded rationality in decision-making and the effects of network topology on the evolution of collective migration. In other words, right at home for TheEGG blog. The full mission statement:

The workshop will bring together researchers from computer science, mathematics, physics, biology, and engineering to explore interactions among algorithms, dynamical systems, statistical physics, and complexity theory (in all senses of the term).

“Computer science” is a cruel historical misnomer for a field of study that doesn’t focus on just science, but combines three different approaches: engineering, science, and mathematics. Natural algorithms (or “natural computing”, more generally) are similarly diverse, with three general themes roughly following the engineering-science-math distinction:

  1. Taking inspiration from natural processes to design metaheuristics for solving difficult engineering optimization problems. In my experience, this is the most common approach and contains ideas like genetic algorithms, neural networks, and swarm optimization.
  2. Implementing physical computations in non-electronic media. This includes ideas like molecular or DNA computing, and the experimental part of quantum computing (or combinations of both!).
  3. Using ideas of information processing and computation to reason about natural and social phenomena or the theories that describe them. To me, this seems like the least well developed and most exciting view. It corresponds to the algorithmic lens and includes fields like the theory of quantum computing and algorithmic game theory/economics.
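To make the first theme concrete, here is a minimal genetic-algorithm sketch. The function names, parameters, and the crude “fittest half survives” selection rule are all my own illustrative choices, not taken from any particular library or paper; it evolves bit strings toward a simple fitness function:

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=30, generations=100, mutation_rate=0.02):
    """Minimal GA: truncation selection, one-point crossover, point mutation."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]            # fittest half survives unchanged
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randint(1, n_bits - 1)  # one-point crossover
            child = a[:cut] + b[cut:]
            # flip each bit independently with probability mutation_rate
            child = [bit ^ (random.random() < mutation_rate) for bit in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Toy fitness: count of ones ("one-max"), maximized by the all-ones string.
best = genetic_algorithm(fitness=sum)
print(sum(best))
```

Swapping in a harder fitness function is the whole engineering appeal: the metaheuristic only ever queries `fitness`, so it needs no analytic knowledge of the landscape it is searching.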

The history of this field can be traced back to Alan Turing’s work on B-type neural networks, artificial intelligence, morphogenesis, and artificial life. However, the real foundation and popularization probably came with Holland’s introduction of genetic algorithms[1] (the same Holland who ran some of the first simulations of ethnocentrism). At first, these metaheuristics were primarily employed by computer scientists and engineers to solve optimization problems, with occasional (but often inconsequential) wanderings into applications in the sciences. It was the popularizing work of researchers like Robert Axelrod (1984) that brought genetic algorithms to masses of social scientists. The first direct mention of “natural algorithms” in the sense used here that I could find is Dorigo’s 1992 thesis “Optimization, learning and natural algorithms”. Now, simulations using variants of the metaheuristics developed by the first stream are a common sight everywhere from neuroscience to economics to political science; a strange realization of the third stream of natural computing[2].

Unfortunately, the field largely remained stuck at simulations, and forgot its roots in mathematics and analytic models. This is the curse of computing: it is simpler to run quick simulations than to develop the analytic tools to understand the underlying phenomena. The senior co-organizer of the NA&S workshop, Bernard Chazelle, is starting to address this issue by introducing a theory of natural algorithms and an algorithmic calculus (Chazelle, 2012). He considers two types of processes, and illustrates them with the Brownian diffusion of pollen particles suspended in water:

  1. Eulerian approach: a differential equation for the concentration of particles at each point in space and time. The strength of this approach is that it allows us to rely on the well-developed theory of calculus. However, we must commit to infinitesimal changes and assume that every point abides by identical laws. In evolutionary game theory, replicator dynamics are the prototypical example of this approach, and the Ohtsuki-Nowak transform is an example of a slightly more involved application.
  2. Lagrangian approach: track the movement of individual pollen particles and water molecules in accordance with Newton’s laws. If the simulation becomes intractable, then simplify with mean-field or ergodic approximations; for instance, by modeling a single pollen particle with random nudges from the ambient bath. In evolutionary game theory, this approach corresponds to agent-based modeling, and suffers from the lack of a powerful theory, like calculus, to guide our understanding.
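The contrast between the two approaches can be made concrete with a toy sketch (my own example, not from Chazelle’s paper): a Hawk-Dove game with payoffs chosen so the interior equilibrium sits at a hawk fraction of 1/2. The Eulerian route integrates the replicator equation for the hawk fraction; the Lagrangian route tracks a finite population of agents under a simple imitation rule. The payoff values, population size, and imitation rule are all my own illustrative choices:

```python
import random

# Hawk-Dove payoffs (V=2, C=4); replicator equilibrium at hawk fraction V/C = 0.5.
PAYOFF = {('H', 'H'): -1.0, ('H', 'D'): 2.0, ('D', 'H'): 0.0, ('D', 'D'): 1.0}

def replicator(x0=0.9, dt=0.01, steps=5000):
    """Eulerian view: Euler-integrate the replicator ODE for the hawk fraction x."""
    x = x0
    for _ in range(steps):
        f_hawk = x * PAYOFF[('H', 'H')] + (1 - x) * PAYOFF[('H', 'D')]
        f_dove = x * PAYOFF[('D', 'H')] + (1 - x) * PAYOFF[('D', 'D')]
        x += dt * x * (1 - x) * (f_hawk - f_dove)
    return x

def agent_based(n=1000, rounds=20000):
    """Lagrangian view: finite population, each agent scores one random game,
    then the worse-off of a random pair imitates the better-off."""
    pop = ['H'] * (n * 9 // 10) + ['D'] * (n // 10)  # start at 90% hawks
    for _ in range(rounds):
        i, j = random.sample(range(n), 2)
        pi = PAYOFF[(pop[i], pop[random.randrange(n)])]
        pj = PAYOFF[(pop[j], pop[random.randrange(n)])]
        if pj > pi:
            pop[i] = pop[j]
        elif pi > pj:
            pop[j] = pop[i]
    return pop.count('H') / n

print(replicator())    # converges to the interior equilibrium 0.5
print(agent_based())   # fluctuates around 0.5
```

The deterministic trajectory lands on the equilibrium exactly; the agent-based run only hovers near it, and understanding how far it strays (and why) is precisely where the missing theory is felt.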

The lack of theory for understanding agent-based models means that we must usually rely on running individual simulations for each parameter setting we are interested in. This makes it difficult to understand how the assumptions of our model affect the large-scale behavior of our results, and how relaxations or modifications of those assumptions will change future models. Chazelle (2012) addresses this lack of theory by introducing a general modeling framework of influence systems and developing an algorithmic calculus to better understand them. Other participants at the NA&S workshop will present similar developments in their fields, and I will report more in future posts!

Notes and References

  1. There is a parallel development in artificial intelligence and cognitive science with the use of production systems and neural networks. But that discussion is more involved, so I might discuss it in a separate blog post.
  2. I call this realization “strange” because many of the approaches don’t start with thinking about how the underlying science functions, but instead borrow the engineers’ tools wholesale and assume that the very rough analogy drawn by the original computer scientists who developed the algorithms is an accurate enough reflection of reality to draw scientific conclusions. In evolution and ecology, this is most painfully obvious when researchers adopt the simple “fittest fraction of agents survive” as their selection rule, “random pairing and recombination with point-wise mutations” as their variation rule, and their paper’s conclusion as their fitness function. In cognitive science, this happens when psychologists train artificial neural networks on simplifications of human tasks and then try to draw conclusions about neural structures of real brains, even when their models are built in only vague (and not scientifically reasonable) analogy to real neural dynamics.

Axelrod, R. (1984). The Evolution of Cooperation. Basic Books.

Chazelle, B. (2012). Natural algorithms and influence systems. Communications of the ACM, 55(12). DOI: 10.1145/2380656.2380679

Dorigo, M. (1992). Optimization, learning and natural algorithms. Ph.D. thesis, Politecnico di Milano, Italy.

Holland, J. H. (1975). Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence. University of Michigan Press.

About Artem Kaznatcheev
From the Department of Computer Science at Oxford University and Department of Translational Hematology & Oncology Research at Cleveland Clinic, I marvel at the world through algorithmic lenses. My mind is drawn to evolutionary dynamics, theoretical computer science, mathematical oncology, computational learning theory, and philosophy of science. Previously I was at the Department of Integrated Mathematical Oncology at Moffitt Cancer Center, and the School of Computer Science and Department of Psychology at McGill University. In a past life, I worried about quantum queries at the Institute for Quantum Computing and Department of Combinatorics & Optimization at University of Waterloo and as a visitor to the Centre for Quantum Technologies at National University of Singapore. Meander with me on Google+ and Twitter.
