Algorithmic view of historicity and separation of scales in biology

A Science publication is one of the best ways to launch your career, especially if it is based on your undergraduate work, part of which you carried out with makeshift equipment in your dorm! That is the story of Thomas M.S. Chang, who in 1956 started experiments (partially carried out in his residence room in McGill's Douglas Hall) that led to the creation of the first artificial cell (Chang, 1964). In the words of a 1989 New Scientist article, this was an "elegantly simple and intellectually ambitious" idea that "has grown into a dynamic field of biomedical research and development": a field that promises to connect biology and computer science by physically realizing John von Neumann's dream of a self-replicating machine.

Although not the first presentation of the day, Albert Libchaber's after-lunch talk shared progress on this quest with the participants of the 2nd Natural Algorithms and the Sciences workshop. Libchaber described his group's bottom-up approach to artificial biology, working toward the goal of experimentally realizing a minimal self-replicating cell. He showed how his group is able to create a self-organizing lipid bilayer through a process no more complicated than mixing your salad dressing. These simple artificial cells were able to persist for many hours, and could even be seeded with plasmid DNA (although the group is not yet equipped to look at the transfer of plasmids between membranes). Most stunning for me was how they could induce replication through fission in these simple membranes.

For Libchaber, metabolism is a competition between two time scales: (1) the growth of cell surface area by attachment of new fatty acids, and (2) the growth of intracellular volume by absorption of water. When the fatty acids and water are in perfect balance, the artificial cell can grow symmetrically as a sphere to up to 4 times its size. However, if extra fatty acids are introduced to the outer solution, then the surface area grows too quickly for the absorption of water to catch up. This forces the membrane into a chaotic adaptive configuration of low elastic energy consisting of an array of vesicles connected by thin tubules. Occasionally, stochastic fluctuations cause these tubules to sever and the cell divides. This is similar to the reproductive process of L-form bacteria, although Libchaber was quick to stress that this didn't make his systems alive, but merely an "artificial cell in an artificial world". He was effectively the designer: controlling the environment to exogenously introduce a higher concentration of fatty acids in the outside solution; by contrast, L-form bacteria achieve this on their own by endogenously producing more fatty acids.
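
To make the competition of time scales concrete, here is a toy numerical sketch (my own illustration, not Libchaber's model): the membrane area grows at one rate, the enclosed volume chases the volume that a sphere of that area could hold at another, and when area outpaces volume the "reduced volume" collapses, signalling that the membrane must fold into vesicles and tubules.

```python
import math

# Toy sketch of two competing time scales (my own illustration, not the group's
# actual model): surface area A grows by fatty-acid attachment, volume V grows
# by water absorption, each with its own rate.

def reduced_volume(k_area, k_vol, t_max=20.0, dt=0.01):
    A = 4 * math.pi               # start from a unit sphere
    V = 4 * math.pi / 3
    for _ in range(int(t_max / dt)):
        A += dt * k_area * A      # membrane growth, set by fatty-acid supply
        V_sphere = (A / (4 * math.pi)) ** 1.5 * (4 * math.pi / 3)  # volume a sphere of area A can hold
        V += dt * k_vol * (V_sphere - V)   # osmotic inflow chases the spherical volume
    return V / V_sphere           # 1.0 means a perfect sphere; << 1 means excess membrane

# Balanced growth stays nearly spherical; excess fatty acids leave the membrane
# with far more area than its contents need, so it must fold into a low elastic
# energy array of vesicles and tubules.
print(f"balanced:           {reduced_volume(k_area=0.05, k_vol=1.0):.2f}")
print(f"excess fatty acids: {reduced_volume(k_area=0.5, k_vol=0.05):.2f}")
```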

In the first talk of the day, Leslie Valiant used ecorithms to study the question of a 'designer' and adaptive phenomena more generally. For Valiant, biology is a specialization of computer science, and the emergence of complex circuitry without a designer is evolution in the former and a specific type of machine learning in the latter. By modeling evolution as a restriction of general PAC-learning, Valiant (2009) can overcome biology's agnosticism about how long it takes to evolve from A to B. This allows us to strengthen Dobzhansky's "nothing in biology makes sense except in the light of evolution" by putting restrictions on the evolvable. As we try to explain some physiological or cognitive process in animals, we can restrict our set of hypotheses by considering only those that are evolvable.
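
To give a feel for the flavour of this framework, here is a toy sketch in the spirit of (but much cruder than) Valiant's formal model: the hidden "ideal function" is a monotone conjunction, a hypothesis is scored only by its sampled agreement with that ideal, and the only moves allowed are single mutations that empirically improve the score. The specifics (attribute count, sample size, tolerance) are my own choices for illustration.

```python
import random

# Toy sketch in the spirit of Valiant's evolvability (not his formal model): the
# hidden 'ideal function' is a monotone conjunction, hypotheses are scored only
# by sampled agreement with it, and the only moves are mutations that measurably
# improve performance.

random.seed(1)
n = 20                                     # number of boolean attributes
ideal = set(random.sample(range(n), 5))    # the ideal function: a hidden conjunction

def conjunction(attrs, x):                 # evaluate a monotone conjunction on example x
    return all(x[i] for i in attrs)

def performance(h, samples=2000):          # empirical agreement with the ideal function
    agree = 0
    for _ in range(samples):
        x = [random.random() < 0.5 for _ in range(n)]
        agree += conjunction(h, x) == conjunction(ideal, x)
    return agree / samples

hypothesis, score = set(), 0.0             # start from the empty (always-true) conjunction
for generation in range(200):
    i = random.randrange(n)                # mutation: toggle one attribute in or out
    candidate = hypothesis ^ {i}
    candidate_score = performance(candidate)
    if candidate_score > score + 0.01:     # keep only clearly beneficial mutations
        hypothesis, score = candidate, candidate_score

print(f"recovered {len(hypothesis & ideal)} of {len(ideal)} ideal attributes; "
      f"agreement with the ideal function: {score:.2f}")
```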

Unfortunately, the connection to machine learning is not without its downsides. In a typical machine learning application, there is a target function that the algorithm is trying to approximate. In the case of evolvability, Valiant is forced to define an "ideal function" that, for every possible circumstance, specifies the best action in the given environment. Although this function is unknown and can be thought of as encoded in the structure of the physical world, it still brings us dangerously close to a teleological view of evolution and restricts us to static fitness landscapes. Finally, the fact that the target function (and thus the environment) does not depend on the learning algorithm means that the dynamics are fundamentally limited in their historicity and cannot capture everything we are interested in about biology. Valiant's model is only effective for short-term micro-evolution, where a relatively static fitness landscape can be reasonably assumed.

After a short coffee break, Simon Levin started his talk by looking at evolution and questioning the static fitness landscape assumption. In general, the environment isn't constant the way it is in the early NK model he developed in Kauffman & Levin (1987). Instead of a landscape of rigid hills-and-valleys, we should think of it as a waterbed: the agents' movement through the landscape deforms it and creates new and different peaks as the environment co-evolves with the agents. This forces us to contrast the frequency-independent and game-theoretic approaches to evolution.
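
For readers who haven't met the NK model, here is a minimal sketch of an adaptive walk on a rugged landscape (my own illustration of the Kauffman & Levin setup; N, K, and the random fitness tables are arbitrary choices): each of N loci contributes a fitness component that depends on its own allele and the alleles of K other loci, and the walker flips one locus at a time, keeping only improvements.

```python
import random

# Minimal NK-landscape adaptive walk (my own illustration; parameters are
# arbitrary): each locus contributes a random fitness component that depends on
# its own allele and the alleles of K other loci, making the landscape rugged.

random.seed(0)
N, K = 12, 3
neighbours = [random.sample([j for j in range(N) if j != i], K) for i in range(N)]
contribution = {}                      # lazily tabulated random fitness components

def fitness(genome):
    total = 0.0
    for i in range(N):
        key = (i, genome[i]) + tuple(genome[j] for j in neighbours[i])
        if key not in contribution:
            contribution[key] = random.random()
        total += contribution[key]
    return total / N

genome = [random.randint(0, 1) for _ in range(N)]
best = fitness(genome)
for _ in range(500):                   # adaptive walk: flip one locus, keep improvements
    i = random.randrange(N)
    genome[i] ^= 1
    candidate = fitness(genome)
    if candidate > best:
        best = candidate
    else:
        genome[i] ^= 1                 # revert the deleterious flip

print(f"local peak reached with fitness {best:.3f}")
```

In the co-evolutionary picture Levin emphasized, the fitness contributions themselves would shift as other agents move, deforming the peaks under the walker's feet like a waterbed.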

Of course, questions of biological complexity extend beyond evolution and are a strong presence in development and cognition. Connecting biology back to computer science, Levin highlighted Turing's foundational work on morphogenesis: explaining how global symmetry-breaking occurs in the development of a spherically-symmetrical embryo into a non-spherically-symmetrical organism while obeying local symmetry-preserving rules like diffusion. Turing (1952) showed that with two types of morphogens (activators and inhibitors) diffusing at different rates, small stochastic fluctuations in their concentrations could be amplified into large static (or dynamic) global patterns such as zebra stripes or leopard spots. Since diffusion rates are a way to set time scales, this separation of time scales leading to symmetry breaking foreshadowed Libchaber's definition of metabolism and example of artificial cell division later in the day.
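
Here is a minimal one-dimensional sketch of the mechanism (my own choice of equations and parameters: a Schnakenberg-type reaction-diffusion system, not Turing's original example). Because the second species diffuses ten times faster than the first, small random fluctuations around the uniform steady state grow into a stationary array of peaks.

```python
import numpy as np

# Toy 1-D Turing pattern using a Schnakenberg-type reaction-diffusion system
# (my own choice of equations and parameters, not Turing's original example).
# The substrate v diffuses 10x faster than the autocatalytic species u, so small
# random fluctuations around the uniform steady state grow into stationary peaks.

rng = np.random.default_rng(0)
n, dt, steps = 100, 0.02, 20000            # ring of 100 cells, explicit Euler steps
a, b, d = 0.1, 0.9, 10.0                   # kinetic parameters and diffusion ratio

u = (a + b) + 0.02 * rng.standard_normal(n)           # autocatalytic 'activator'
v = b / (a + b) ** 2 + 0.02 * rng.standard_normal(n)  # fast-diffusing 'inhibitor'/substrate

def lap(w):                                # periodic 1-D Laplacian with unit spacing
    return np.roll(w, 1) - 2 * w + np.roll(w, -1)

for _ in range(steps):
    uuv = u * u * v
    u += dt * (a - u + uuv + lap(u))
    v += dt * (b - uuv + d * lap(v))

peaks = np.sum((u > np.roll(u, 1)) & (u > np.roll(u, -1)) & (u > u.mean()))
print(f"stationary activator peaks on the ring: {peaks}")
```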

Levin closed with cognition by focusing on bounded rationality in three settings: the ultimatum game, hyperbolic discounting, and collective behavior. Bounded rationality contrasts with the standard Homo economicus by explicitly accounting for uncertainty, the cost of gathering information, and computational complexity restrictions on cognitive processing. In the ultimatum game, the agent has to balance a culture-dependent fairness norm against rational game-theoretic reasoning. These norms are heuristics (in the Kahneman sense), and their shape is guided by a general theory of meta-games. In hyperbolic discounting, the non-constant discount rate leads to intertemporal inconsistencies in the agent, and to phenomena like self-enforcing behavior to overcome them. Levin contrasted the proximate and ultimate explanations for discounting: a proximate cause can be something as simple as an agent averaging the influence of many brain regions that each follow scale-free geometric discounting, while the ultimate cause is evolutionary: a selective pressure to account for uncertainty and the cost of foreseeing the future. This proximate cause of averaging is not a trivial phenomenon: whether you are looking at the collective behavior of brain regions, birds, or caribou, the result can be very rich dynamics.
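
The proximate "averaging" explanation has a neat back-of-the-envelope version (my own worked example, not Levin's exact argument): if each brain region discounts the future consistently as exp(-r*t), but the rates r are spread out, say exponentially distributed with mean k, then the average discount factor is exactly the hyperbolic 1/(1 + k*t), and a hyperbolic discounter is intertemporally inconsistent even though every individual region is consistent.

```python
import numpy as np

# Worked check that averaging consistent exponential discounters gives the
# inconsistent hyperbolic form (my own illustration of the proximate
# explanation): each 'brain region' discounts as exp(-r*t) with its own rate r;
# if the rates are exponentially distributed with mean k, the population average
# is 1 / (1 + k*t).

rng = np.random.default_rng(0)
k = 0.5
rates = rng.exponential(scale=k, size=100_000)   # one discount rate per region

for t in [1, 5, 10, 20]:
    averaged = np.exp(-rates * t).mean()         # average over the regions
    hyperbolic = 1.0 / (1.0 + k * t)             # closed-form hyperbolic discount
    print(f"t = {t:2d}: averaged = {averaged:.3f}, hyperbolic = {hyperbolic:.3f}")
```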

In the sixth talk of the day, Naomi Ehrich Leonard picked up where Levin left off and focused on the evolution of leadership in collective migration. Collective migration is a social learning dilemma: agents need to decide if they will innovate through costly measurements of the true direction they need to travel, or simply imitate the behavior of others. Pais & Leonard (2013) studied the conditions under which the convergent stable strategy for their population was an evolutionarily stable strategy with a homogeneous level of investment in innovation by all agents, and when it bifurcated into a heterogeneous population of leaders and followers. They overcame the curse of computing by pushing their analytic treatment as far as possible before resorting to smart simulations. In particular, they separated the time scales of migration (and thus, the determination of fitness) and of evolutionary change in levels of investment in leadership. They solved the migratory part of their model analytically, arriving at a function capturing the mean and variance of fitness for leaders and followers in an unstructured herd. They could then use these fitness calculations to optimize their evolutionary simulation, and continued to investigate simpler cases (such as a herd of two) analytically. In their results, they were able to draw sharp boundaries in the cognitive cost of leadership between homogeneous investment, unstable bifurcations, stable bifurcations, and total abandonment of innovation. Further, if a population was pushed to abandon innovation, then they observed a historicity effect: the cost of leadership would need to be lowered further than the stable bifurcation value before investment in innovation could be restored.
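
To illustrate the structure of that trick, and nothing more, here is a caricature of the two-time-scale setup (my own toy, not Pais & Leonard's model): the fast migration dynamics are replaced by an assumed closed-form fitness for a given profile of investments, and only the slow evolution of investment in costly information is simulated. With the simple saturating benefit assumed here the population settles on a homogeneous investment level; the leader/follower bifurcation requires the richer mean-and-variance fitness structure derived in their paper.

```python
import numpy as np

# Structural caricature of separating migration (fast) from evolution (slow);
# my own toy, not Pais & Leonard's model. The fast time scale is collapsed into
# an assumed closed-form fitness; only the slow dynamics of investment evolve.

rng = np.random.default_rng(1)
N, cost, coupling = 50, 0.3, 1.0

def fitness(own, others_mean):
    # Assumed closed form standing in for the solved migration dynamics:
    # accuracy saturates in the information gathered by the agent itself plus
    # what it gleans from the rest of the herd; measurement is individually costly.
    info = own + coupling * others_mean
    return info / (1.0 + info) - cost * own

investment = rng.uniform(0, 1, N)
for _ in range(20000):                        # slow time scale: mutation + selection
    j = rng.integers(N)
    mutant = float(np.clip(investment[j] + 0.05 * rng.standard_normal(), 0, 1))
    others = (investment.sum() - investment[j]) / (N - 1)
    if fitness(mutant, others) > fitness(investment[j], others):
        investment[j] = mutant                # keep only beneficial mutations

print(f"mean investment {investment.mean():.2f}, std {investment.std():.2f}")
```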

In his opening remarks, Bernard Chazelle presented the general framework of influence systems (Chazelle, 2012) and defined natural algorithms as "a new language for algorithmic dynamics". The collective migration models studied by Leonard and Levin are a special case of these influence systems. However, Chazelle (much like Valiant with evolution) did not build his framework to answer specific questions like bifurcations between leaders and followers, but to build analytic tools for studying agent-based models and to explore the separation of time scales. He used diffusion to illustrate his point:

  1. Micro-level: a diffusion system is a collection of individual agents (water molecules and the diffusing substance, such as a piece of dust), with each agent following simple local deterministic rules (Newton's laws).
  2. Meso-level: diffusion can be modeled as the stochastic process of Brownian motion where most molecules are treated as a random bath, and a focal agent (say a piece of dust on the water’s surface) is tracked explicitly. Note that the stochasticity arises from our choice of abstraction.
  3. Macro-level: diffusion becomes a deterministic partial differential equation that describes the density of the diffusing agents at each point in space and time (the short numerical sketch below compares this macro-level description to the meso-level one).

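Here is a quick numerical illustration of that meso-to-macro relationship (my own sketch): track many independent random walkers, the Brownian-motion picture, and compare their empirical density to the Gaussian solution of the macro-level diffusion equation.

```python
import numpy as np

# Sketch of the meso-to-macro step (my own illustration): many independent
# random walkers (the Brownian-motion, meso-level picture) produce a density
# that matches the Gaussian solution of the macro-level diffusion equation
# u_t = D * u_xx started from a point source.

rng = np.random.default_rng(0)
n_walkers, n_steps, dt, D = 100_000, 100, 0.01, 0.5

x = np.zeros(n_walkers)                      # all walkers start at the origin
for _ in range(n_steps):                     # meso level: independent Gaussian increments
    x += np.sqrt(2 * D * dt) * rng.standard_normal(n_walkers)

t = n_steps * dt                             # elapsed time
centers = np.linspace(-5, 5, 21)             # bin centers, width 0.5
empirical, _ = np.histogram(x, bins=np.linspace(-5.25, 5.25, 22), density=True)
pde = np.exp(-centers**2 / (4 * D * t)) / np.sqrt(4 * np.pi * D * t)
print(f"largest gap between walker density and PDE solution: {np.abs(empirical - pde).max():.3f}")
```
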
In physics, this separation of scales is handled through renormalization. In agent-based models, the lack of symmetry makes the process much more involved.
Like Valiant, Chazelle draws a parallel to computer science by viewing the algorithmic renormalization needed to identify the relevant separations of scale in agent-based models as a compiler for dynamical systems. The renormalization used in physics, by comparison, is like compiling a simple program by hand. For Chazelle, "biology = physics + history", and history is the greatest symmetry breaker. However, we can't remain reliant on mere simulations for exploring these history-dependent systems; we need to develop a full algorithmic calculus to grow our understanding.
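
To make "influence system" concrete, here is a minimal implementation of bounded-confidence opinion dynamics in the style of Hegselmann-Krause (my own toy; the parameters and update rule are illustrative rather than taken from Chazelle's paper): each agent repeatedly averages the positions of the agents currently within a fixed distance of it, so the communication network is itself a function of the state. That feedback is exactly what makes such systems history-dependent and hard to renormalize by hand.

```python
import numpy as np

# Minimal bounded-confidence dynamics in the style of Hegselmann-Krause (my own
# illustration of the kind of agent-based model Chazelle's framework targets):
# each agent averages the states of agents currently within distance eps of it,
# so the communication graph changes with the state it helps produce.

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 30)                  # agent opinions / positions
eps = 0.15                                 # influence radius

for step in range(60):
    neighbours = np.abs(x[:, None] - x[None, :]) <= eps   # state-dependent graph
    x = (neighbours * x[None, :]).sum(axis=1) / neighbours.sum(axis=1)

clusters = np.unique(np.round(x, 3))
print(f"opinions converged to {len(clusters)} clusters: {clusters}")
```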

Note: this is the first blogpost of a series summarizing the 2013 workshop on Natural Algorithms and the Sciences at Princeton’s Center for Computational Intractability. My order of presentation does not follow the speaker schedule since I am highlighting the talks based on my own synthesis. But don’t worry, I will mention all talks! The other four presentations from May 20th will be in the next post.

References

Chang, T. M. (1964). Semipermeable microcapsules. Science, 146(3643): 524-525.

Chazelle, B. (2012). Natural algorithms and influence systems. Communications of the ACM, 55(12).

Kauffman, S., & Levin, S. (1987). Towards a general theory of adaptive walks on rugged landscapes. Journal of Theoretical Biology, 128(1): 11-45.

Pais, D., & Leonard, N. E. (2013). Adaptive network dynamics and evolution of leadership in collective migration. Physica D: Nonlinear Phenomena. DOI: 10.1016/j.physd.2013.04.014

Turing, A. M. (1952). The chemical basis of morphogenesis. Philosophical Transactions of the Royal Society of London B, 237(641): 37-72.

Valiant, L. G. (2009). Evolvability. Journal of the ACM, 56(1): 3.

18 Responses to Algorithmic view of historicity and separation of scales in biology

  1. Pingback: Computer science on prediction and the edge of chaos | Theory, Evolution, and Games Group

  2. Pingback: Distributed computation in foraging desert ants | Theory, Evolution, and Games Group

  3. Pingback: Mathematical models of running cockroaches and scale-invariance in cells | Theory, Evolution, and Games Group

  4. Pingback: Microscopic computing in cells and with self-assembling DNA tiles | Theory, Evolution, and Games Group

  5. Pingback: Machine learning and prediction without understanding | Theory, Evolution, and Games Group

  6. Pingback: Toward an algorithmic theory of biology | Theory, Evolution, and Games Group

  7. Pingback: Cooperation and the evolution of intelligence | Theory, Evolution, and Games Group

  8. Pingback: Micro-vs-macro evolution is a purely methodological distinction | Theory, Evolution, and Games Group

  9. Pingback: Monoids, weighted automata and algorithmic philosophy of science | Theory, Evolution, and Games Group

  10. Pingback: Stats 101: an update on readership | Theory, Evolution, and Games Group

  11. Pingback: Computational complexity of evolutionary equilibria | Theory, Evolution, and Games Group

  12. Pingback: Programming language for chemistry | Theory, Evolution, and Games Group

  13. Pingback: Baldwin effect and overcoming the rationality fetish | Theory, Evolution, and Games Group

  14. Pingback: Cataloging a year of blogging: the algorithmic world | Theory, Evolution, and Games Group

  15. Pingback: Phenotypic plasticity, learning, and evolution | Theory, Evolution, and Games Group

  16. Pingback: Cooperation, enzymes, and the origin of life | Theory, Evolution, and Games Group

  17. Pingback: Five fascinating science books to read | Hihid News

  18. Pingback: Fusion and sex in protocells & the start of evolution | Theory, Evolution, and Games Group

Leave a comment

This site uses Akismet to reduce spam. Learn how your comment data is processed.