Natural algorithms and the sciences
May 19, 2013
Today, I am passing through New York City on my way to Princeton’s Center for Computational Intractability for a workshop on Natural Algorithms and the Sciences (NA&S). The two-day meeting will cover everything from molecular algorithms for learning and experiments on artificial cells to bounded rationality in decision-making and the effects of network topology on the evolution of collective migration. In other words, right at home for TheEGG blog. The full mission statement:
The workshop will bring together researchers from computer science, mathematics, physics, biology, and engineering to explore interactions among algorithms, dynamical systems, statistical physics, and complexity theory (in all senses of the term).
“Computer science” is a cruel historic misnomer for a field of study that is not just science, but combines three different approaches: engineering, science, and mathematics. Natural algorithms (or “natural computing”, more generally) are similarly diverse, with three general themes roughly following the engineering-science-math distinctions:
- Taking inspiration from natural processes to design metaheuristics for solving difficult engineering optimization problems. In my experience, this is the most common approach and contains ideas like: genetic algorithms, neural networks, and swarm optimization.
- Implementing physical computations in non-electronic media. This includes ideas like molecular or DNA computing and the experimental side of quantum computing (or combinations of both!).
- Using ideas of information processing and computation to reason about natural and social phenomena or the theories that describe them. To me, this seems like the least well developed and most exciting view. It corresponds to the algorithmic lens and includes fields like theory of quantum computing, and algorithmic game-theory/economics.
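To make the first theme concrete, here is a minimal genetic-algorithm sketch in Python for the toy “one-max” problem (maximize the number of 1-bits in a binary string). The truncation selection, one-point crossover, and mutation rate below are illustrative choices, not a canonical recipe:

```python
import random

random.seed(0)

def fitness(genome):
    """One-max: count the 1-bits in a binary genome."""
    return sum(genome)

def evolve(pop_size=20, genome_len=16, generations=50, mutation_rate=0.05):
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half unchanged (truncation selection).
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        # Variation: random pairing, one-point crossover, point-wise mutations.
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, genome_len)
            child = [g ^ (random.random() < mutation_rate)
                     for g in a[:cut] + b[cut:]]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print("best fitness:", fitness(best))
```

Because the fitter half survives unchanged each generation, the best fitness never decreases; on a problem this easy the population quickly climbs toward the all-ones genome.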
The history of this field can be traced back to Alan Turing’s work on B-type neural networks, artificial intelligence, morphogenesis, and artificial life. However, the real foundation and popularization probably came with Holland’s (1975) introduction of genetic algorithms (the same Holland that ran some of the first simulations of ethnocentrism). At first, these metaheuristics were primarily employed by computer scientists and engineers to solve optimization problems, with occasional (but often inconsequential) wanderings into applications in the sciences. It was the popularizing work of researchers like Robert Axelrod (1984) that brought genetic algorithms to masses of social scientists. The first direct mention of “natural algorithms” in the sense used here that I could find is Dorigo’s 1992 thesis “Optimization, learning and natural algorithms”. Now, simulations using variants of the metaheuristics developed by the first stream are a common sight everywhere from neuroscience to economics to political science; a strange realization of the third stream of natural computing.
Unfortunately, the field has largely remained stuck at simulations and forgotten its roots in mathematics and analytic models. This is the curse of computing: it is simpler to run quick simulations than to develop the analytic tools for understanding the underlying phenomena. The senior co-organizer of the NA&S workshop, Bernard Chazelle, is starting to address this issue by introducing a theory of natural algorithms and an algorithmic calculus (Chazelle, 2012). He considers two types of processes, and illustrates them with the Brownian diffusion of pollen particles suspended in water:
- Eulerian approach: write down a differential equation for the concentration of particles at each point in space and time. The strength of this approach is that it allows us to rely on the well-developed theory of calculus. However, we must commit to infinitesimal changes and assume that every point abides by identical laws. In evolutionary game theory, replicator dynamics are the prototypical example of this approach, and the Ohtsuki-Nowak transform is an example of a slightly more involved application.
- Lagrangian approach: track the movement of individual pollen particles and water molecules in accordance with Newton’s laws. If the simulation becomes intractable, then simplify with mean-field or ergodic approximations; for instance, by modeling a single pollen particle receiving random nudges from the ambient bath. In evolutionary game theory, this approach corresponds to agent-based modeling, and it suffers from the lack of a powerful guiding theory like calculus.
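The contrast can be sketched on the same game from both viewpoints. Below is a Python toy example on a Hawk-Dove game with payoffs chosen so the mixed equilibrium sits at a hawk frequency of 1/2; the Euler step size, population size, and Fermi imitation rule are illustrative assumptions, not part of Chazelle’s framework:

```python
import math
import random

random.seed(1)

# Hawk-Dove payoffs with V = 2, C = 4: mixed equilibrium at x* = 1/2.
payoff = ((-1.0, 2.0), (0.0, 1.0))  # rows: hawk, dove

def replicator(x, payoff, dt=0.01, steps=2000):
    """Eulerian view: integrate dx/dt = x(1 - x)(f_H - f_D) by Euler steps."""
    (a, b), (c, d) = payoff
    for _ in range(steps):
        fH = a * x + b * (1 - x)
        fD = c * x + d * (1 - x)
        x += dt * x * (1 - x) * (fH - fD)
    return x

def agent_based(x0, payoff, n=500, rounds=20000):
    """Lagrangian view: individual agents imitate under a Fermi rule."""
    (a, b), (c, d) = payoff
    hawks = int(n * x0)
    for _ in range(rounds):
        x = hawks / n
        fH = a * x + b * (1 - x)
        fD = c * x + d * (1 - x)
        focal_hawk = random.random() < x   # focal agent drawn by frequency
        model_hawk = random.random() < x   # random role model, likewise
        if focal_hawk == model_hawk:
            continue
        gap = (fH - fD) if model_hawk else (fD - fH)
        # Fermi rule: imitate with sigmoid probability in the payoff gap.
        if random.random() < 1 / (1 + math.exp(-gap)):
            hawks += 1 if model_hawk else -1
    return hawks / n

print(replicator(0.9, payoff))   # converges to the equilibrium at 0.5
print(agent_based(0.9, payoff))  # drifts toward 0.5, with finite-size noise
```

The deterministic trajectory lands on the equilibrium exactly; the agent-based run only fluctuates around it, and nothing in the simulation itself tells us how those fluctuations scale with population size — that is the kind of question a theory of natural algorithms is meant to answer.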
The lack of theory for understanding agent-based models means that we must usually rely on running individual simulations for each parameter setting we are interested in. This makes it difficult to understand how the assumptions of our model affect the large-scale behavior of our results, and how relaxations or modifications of those assumptions will change future models. Chazelle (2012) addresses this lack of theory by introducing a general modeling framework of influence systems and developing an algorithmic calculus to better understand them. Other participants at the NA&S workshop will present similar developments in their fields, and I will report more in future posts!
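Influence systems are, roughly, dynamical systems whose communication network is itself a function of the agents’ current states. A standard example from this literature is the Hegselmann-Krause bounded-confidence model of opinion dynamics; the Python sketch below (with an illustrative confidence radius of 0.25) shows how the interaction graph rewires itself at every step:

```python
def hk_step(x, r=0.25):
    """One synchronous Hegselmann-Krause step: each agent moves to the
    mean opinion of all agents within its confidence radius r (itself
    included). The neighbor set, i.e. the communication graph, is
    recomputed from the current state at every step."""
    new = []
    for xi in x:
        neighbors = [xj for xj in x if abs(xj - xi) <= r]
        new.append(sum(neighbors) / len(neighbors))
    return new

opinions = [i / 10 for i in range(11)]  # 11 opinions spread over [0, 1]
for _ in range(50):
    opinions = hk_step(opinions)
print(opinions)  # opinions settle into a small number of clusters
```

The update rule is trivially simple, yet because the network changes with the state, standard fixed-network tools do not apply directly; taming exactly this kind of state-dependent feedback is what the algorithmic calculus targets.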
Notes and References
- There is a parallel development in artificial intelligence and cognitive science with the use of production-systems and neural networks. But that discussion is more involved, so I might discuss it in a separate blog post.
- I call this realization “strange” because many of the approaches don’t start with thinking about how the underlying science functions, but instead borrow the engineers’ tools wholesale and assume that the very rough analogy made by the computer scientists who originally developed the algorithms is an accurate enough reflection of reality to draw scientific conclusions. In evolution and ecology, this is most painfully obvious when researchers adopt the simple “fittest fraction of agents survive” as their selection rule, “random pairing and recombination with point-wise mutations” as their variation rule, and their paper’s conclusion as their fitness function. In cognitive science, this happens when psychologists train artificial neural networks on simplifications of human tasks and then try to draw conclusions about the neural structures of real brains, even when their models are built in only vague (and not scientifically grounded) analogy to real neural dynamics.
Axelrod, R. (1984). The Evolution of Cooperation. Basic Books.
Chazelle, B. (2012). Natural algorithms and influence systems. Communications of the ACM, 55(12). DOI: 10.1145/2380656.2380679
Dorigo, M. (1992). Optimization, learning and natural algorithms. Ph. D. Thesis, Politecnico di Milano, Italy.
Holland, J.H. (1975). Adaptation in natural and artificial systems: an introductory analysis with applications to biology, control, and artificial intelligence. University of Michigan Press.