Interdisciplinitis: Do entropic forces cause adaptive behavior?

Reinventing the square wheel. Art by Mark Fiore of San Francisco Chronicle.

Physicists are notorious for infecting other disciplines. Sometimes this can be extremely rewarding, but most of the time it is silly. I’ve already featured an example where one of the founders of algorithmic information theory completely missed the point of Darwinism; researchers working in statistical mechanics and information theory seem particularly susceptible to interdisciplinitis. The disease is not new: it formed an abscess shortly after Shannon (1948) founded information theory. The clarity of Shannon’s work invited metaphorical connections between entropy and pretty much anything, and researchers were quick to swell around the idea, publishing countless papers on “Information theory of X”, where X is your favorite field deemed in need of a more thorough mathematical grounding.

Ten years later, Elias (1958) drained the pus with surgically precise rhetoric:

The first paper has the generic title “Information Theory, Photosynthesis and Religion” (title courtesy of D. A. Huffman), and is written by an engineer or physicist. It discusses the surprisingly close relationship between the vocabulary and conceptual framework of information theory and that of psychology (or genetics, or linguistics, or psychiatry, or business organization). It is pointed out that the concepts of structure, pattern, entropy, noise, transmitter, receiver, and code are (when properly interpreted) central to both. Having placed the discipline of psychology for the first time on a sound scientific base, the author modestly leaves the filling in of the outline to the psychologists. He has, of course, read up on the field in preparation for writing the paper, and has a firm grasp of the essentials, but he has been anxious not to clutter his mind with such details as the state of knowledge in the field, what the central problems are, how they are being attacked, et cetera, et cetera, et cetera

I highly recommend reading the whole editorial; it is only one page long and a delight of scientific sarcasm. Unfortunately — as any medical professional will tell you — draining the abscess treats only the symptoms, and without a regimen of antibiotics it is difficult to resolve the underlying cause of interdisciplinitis. Occasionally the symptoms flare up, with the most recent flare-up appearing two days ago in the prestigious Physical Review Letters.

Wissner-Gross & Freer (2013) try to push the relationship between intelligence and entropy maximization further by suggesting that the human cognitive niche is explained by causal entropic forces. An entropic force is an apparent macroscopic force that depends on how you define the correspondence between microscopic and macroscopic states. Suppose that you have an ergodic system; in other words, every microscopic state is equally likely (or you have a well-behaved distribution over them) and the system transitions between microscopic states at random such that its long-term behavior mimics the state distribution (i.e. the ensemble-average and time-average distributions are the same). If you define a macroscopic variable such that some values of the variable correspond to more microscopic states than others, then when you talk about the system at the macroscopic level it will seem like a force is pushing the system towards the macroscopic states with larger microscopic support. This force is called entropic because it is proportional to the entropy gradient.
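
In symbols (my paraphrase of their definitions, not a direct quote), an ordinary entropic force on a macrostate X is F = T \nabla_X S(X), while the causal entropic force replaces the entropy of the current macrostate with the entropy S_c of the distribution over paths of duration \tau that start from it:

F(X_0, \tau) = T_c \nabla_X S_c(X, \tau) \big|_{X = X_0},

where T_c is a constant that sets the strength of the force.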

Instead of defining their microstates as configurations of their system, the authors focus on possible paths the system can follow for time \tau into the future. The macroscopic states are then the initial configurations of those paths. They calculate the force corresponding to this micro-macro split and use it as a real force acting on the macrosystem. The result is a dynamics that tends towards configurations where the system has the most freedom for future paths; the physics way of saying that “intelligence is keeping your options open”.

In most cases, directly invoking the entropic force as a real force would be unreasonable, but the authors use a cognitive justification. Suppose that the agent runs a Monte Carlo simulation of paths out to a time horizon \tau and then moves in accordance with the expected results of its simulation; the agent’s motion would then be guided by the entropic force. The authors study the behavior of such an agent in four models: a particle in a box, an inverted pendulum, a tool use puzzle, and a “social cooperation” puzzle. Unfortunately, these tasks are enough to both falsify the authors’ theory and show that they do not understand the sort of questions behavioral scientists are asking.
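
To make the cognitive story concrete, here is a toy sketch of such an agent for the particle-in-a-box setting. It is my own Python illustration, not the authors’ implementation, and every name and parameter in it is invented: the agent samples random futures of length \tau from each candidate move and picks the move whose sampled futures are most diverse.

import random
from collections import Counter
from math import log

N, TAU, SAMPLES = 21, 10, 200  # box size, planning horizon, sampled paths per estimate

def random_path_endpoint(x, tau):
    # Follow a random +/-1 walk for tau steps, clamping at the walls of the box.
    for _ in range(tau):
        x = min(N - 1, max(0, x + random.choice((-1, 1))))
    return x

def path_entropy(x):
    # Monte Carlo estimate of the entropy of the distribution over future endpoints.
    counts = Counter(random_path_endpoint(x, TAU) for _ in range(SAMPLES))
    return -sum((c / SAMPLES) * log(c / SAMPLES) for c in counts.values())

def causal_entropic_step(x):
    # Move to whichever neighbouring position has the most diverse sampled futures.
    candidates = {max(0, x - 1), x, min(N - 1, x + 1)}
    return max(candidates, key=path_entropy)

x = 2  # start near a wall
for _ in range(50):
    x = causal_entropic_step(x)
print(x)  # the walker should end up hovering near the middle of the box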

If you are locked in a small empty (maybe padded, after reading this blog too much) room for an extended amount of time, where would you choose to sit? I suspect most people would sit in the corner or near one of the walls, where they can rest; that is where I would sit. However, if adaptive behavior followed Wissner-Gross & Freer (2013) then, like the particle in their first model, you would be expected to remain in the middle of the room. More generally, you could modify any of the authors’ tasks by having the experimenter remove two random objects from the agents’ environment whenever they complete the task of securing a goal object. If these objects are manipulable by the agents, then the authors would predict that the agents would not complete their task, regardless of what the objects are, since there are more future paths with the option to manipulate two objects instead of one. Of course, in a real setting, whether the agents prefer the extra objects to the goal would depend on what those objects are (food versus neutral). None of this is built into the theory, so it is hard to take this as the claimed general theory of adaptive behavior. Of course, it could be that the authors leave “the filling in of the outline to the psychologists”.

Do their experiments address any questions psychologists are actually interested in? This is clearest with their social cooperation task, which is meant to be an idealization of the following task that we can see bonobos accomplishing (first minute of the video):

Yay, bonobos! Is the salient feature of this task that the apes figure out how to get the reward? No, it is actually that bonobos will cooperate in getting the reward regardless of whether it is in the central bin (to be shared between them) or in the side bins (for each to grab their own). Chimpanzees, however, will work together only if the food is in separate bins and not if it is available in the central bin to be split. In the Wissner-Gross & Freer (2013) approach, both conditions would result in the same behavior. The authors are throwing away the relevant details of the task, and keeping the ones that psychologists don’t care about.

The paper seems to be an obtuse way of saying that “agents prefer to maximize their future possibilities”. This is definitely true in some cases, but false in others, and it is not news to psychologists. Further, the authors’ abstraction misses the features psychologists care about while stressing irrelevant ones. It is a prime example of interdisciplinitis, and it raises the main question: how can we avoid making the same mistake?

Since I am a computer scientist (and to some extent, physicist) working on interdisciplinary questions, this is particularly important for me. How can I be a good connector of disciplines? The first step seems to be publishing in journals relevant to the domain of the questions being asked, instead of the domain from which the tools originate. Although mathematical tools tend to be more developed in physics than in biology or psychology, the ones used in Wissner-Gross & Freer (2013) are not beyond what you would see in the Journal of Mathematical Psychology. Mathematical psychologists tend to be well versed in the basics of information theory, since it is important for understanding Bayesian inference and machine learning. As such, entropic forces could easily be presented to them in much the same way as I presented them in this post.

By publishing in a journal specific to the field you are trying to make an impact on, you get feedback on whether you are addressing the right questions for your target field, instead of simply whether others in your own field (i.e. other physicists) think you are addressing the right questions. If your results get accepted, then you also have more impact, since they appear in a journal that your target audience reads instead of one your own field focuses on. Lastly, it is a show of respect for the existing work done in your target field. Since the goal is to set up a fruitful collaboration between disciplines, it is important to avoid E.O. Wilson’s mistake of treating researchers in other fields as expendable or irrelevant.

References

Elias, P. (1958). Two famous papers. IRE Transactions on Information Theory, 4(3): 99.

Shannon, Claude E. (1948). A Mathematical Theory of Communication. Bell System Technical Journal. 27(3): 379–423.

Wissner-Gross, A.D., & Freer, C.E. (2013). Causal entropic forces. Physical Review Letters, 110(16): 168702. DOI: 10.1103/PhysRevLett.110.168702

Will the droids take academic jobs?

As a researcher, one of the biggest challenges I face is keeping up with the scientific literature. This is further exacerbated by working in several disciplines, without a more senior advisor or formal training in most of them. The Evolutionary Game Theory Reading Group, and later this blog, started as an attempt to help me discover and keep up with the reading. Every researcher has a different approach: some use the traditional process of reading the tables of contents and abstracts of selective journals, others rely on colleagues, students, and social media to keep them up to date. Fundamentally, these methods are surprisingly similar, and it is up to the individual to find what is best for them. I rely on blogs, G+, Google Scholar author alerts, extensive forward-citation searches, surveys, and most recently: Google Scholar updates.

Screenshot of paper suggestions from my Google Scholar updates.

The updates are a computer filtering system that uses my publications to gauge my interests and then suggests new papers as they enter Google’s database. Due to my limited publication history, the AI doesn’t have much to go on, and is rather hit or miss. Some papers, like the Requejo & Camacho reference in the screenshot above, have led me to find useful papers on ecological games and hurry my own work on environmental austerity. Other papers, like Burgmuller’s, are completely irrelevant. However, this recommendation system will improve with time: I will publish more papers to inform it, and the algorithms it uses will advance.

Part of that advancement comes from scientists optimizing their own lit-review process. Three days ago, Davis, Wiegers et al. (2013) published such an advancement in PLoS One. The authors are part of the team behind the Comparative Toxicogenomics Database that (emphasis mine):

promotes understanding about the effects of environmental chemicals on human health by integrating data from curated scientific literature to describe chemical interactions with genes and proteins, and associations between diseases and chemicals, and diseases and genes/proteins.

This curation requires their experts to go through thousands of articles in order to update the database. Unfortunately, not every article is relevant and there are simply too many articles to curate everything. As such, the team needed a way to automatically rank which articles are most likely to be useful and thus worth curating. They developed a system that uses their database plus some hand-made rules to text-mine articles and assign them a document relevance score (DRS). The authors looked at a corpus of 14,904 articles, with 1,020 of them having been considered before and thus serving as a calibration set. To test their algorithm’s effectiveness, 3,583 articles were sampled at random from the remaining 13,884 and sent to biocurators for processing. The DRS correlated well with the probability of curation: 85% of articles with high DRS were curated versus only 15% of articles with low DRS. This outperformed PubMed’s ranking of articles, which resulted in a curation rate of roughly 60% regardless of rank.
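
To give a flavor of what such a ranking involves, here is a toy illustration in Python. It is not the actual CTD pipeline; the vocabulary, weights, and abstracts below are invented. The idea is simply to score each abstract by weighted counts of curated terms and sort the corpus by that score.

# Hypothetical weights standing in for the curated database terms and hand-made rules.
TERM_WEIGHTS = {"cadmium": 3.0, "cyp1a1": 2.5, "carcinoma": 1.5, "exposure": 1.0}

def document_relevance_score(abstract):
    # Sum the weights of every recognized term in the abstract.
    return sum(TERM_WEIGHTS.get(word, 0.0) for word in abstract.lower().split())

corpus = {
    "article-1": "Cadmium exposure alters CYP1A1 expression in liver carcinoma",
    "article-2": "A survey of medieval manuscript binding techniques",
}

# Rank the corpus so that curators see the likely-relevant articles first.
ranked = sorted(corpus, key=lambda name: document_relevance_score(corpus[name]), reverse=True)
print(ranked)  # ['article-1', 'article-2']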

With the swell of scientific publishing, I think machines are well on their way to replacing selective journals, graduate students, and social media for finding relevant literature. Throw in that computers already make decent journalists; that you can go to SCIgen to make your own AI-authored paper that is ‘good’ enough to be accepted at an IEEE conference; and that your laptop can write mathematical proofs good enough to fool humans. Now you have the ingredients to remind academics that they are at risk, just like everybody else, of losing their jobs to computers. Still, it is tempting to take comfort from technological optimists like Andrew McAfee and believe that the droids will only reduce mundane and arduous tasks. It is nicer to believe that there will always be a place for human creativity and innovation:

For now, the AI systems are primarily improving my workflow and making research easier and more fun to do. But the future is difficult to predict, and I am naturally a pessimist. I like to say that I look at the world through algorithmic lenses. By that, I mean that I apply ideas from theoretical computer science to better understand natural or social phenomena. Maybe I should adopt a more literal meaning; at this rate, “looking at the world through algorithmic lenses” might simply mean that the one doing the looking will be a computer, not me.

Davis, A., Wiegers, T., Johnson, R., Lay, J., Lennon-Hopkins, K., Saraceni-Richards, C., Sciaky, D., Murphy, C., & Mattingly, C. (2013). Text mining effectively scores and ranks the literature for improving chemical-gene-disease curation at the Comparative Toxicogenomics Database. PLoS ONE, 8(4): e58201. DOI: 10.1371/journal.pone.0058201

Continuing our exploration of group selection

This Tuesday, I gave the second of two presentations for the EGT Reading group, both focused on the theory of group selection. Though I am currently working outside of academia, it has been a pleasure to pursue my interests in ecology, and our group discussions have proven to be both enjoyable and challenging.

The first presentation [pdf] is a review of a 2011 paper by Marshall. It argues that the models underlying inclusive fitness theory (IFT) and group selection are formally equivalent. However, as I tried to show during the presentation, this formal equivalence only holds for one specific type of group selection: group selection as the partitioning of selection into a between-group and a within-group component. It no longer holds when we consider the more restrictive definition of group selection as “natural selection on groups” in strict analogy to individual selection (this, incidentally, is the definition of group selection I gave in my last blog post).
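
For reference, the partition I have in mind is the multilevel Price equation, written here in my own notation and under the simplifying assumption of faithful individual-level transmission:

\bar{w} \, \Delta \bar{z} = \mathrm{Cov}_g(\bar{w}_g, \bar{z}_g) + \mathrm{E}_g[\mathrm{Cov}_{i \in g}(w_i, z_i)],

where the first (between-group) term is what gets identified with group selection and the second (within-group) term with individual selection.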

Marshall, J.A.R. (2011). Group selection and kin selection: formally equivalent approaches. Trends in Ecology & Evolution, 26(7): 325-332.

The second presentation [pdf] is a review of a paper by Paulsson (2002). That paper presents an interesting case of multi-level (group) selection, where the “individuals” are plasmids – self-replicating gene clusters in the cytoplasm of prokaryotes – and the “groups” are the plasmid-hosting cells. It’s a nice illustration of the basic dilemma that drives group selection. Inside a cell, plasmids that replicate faster have an advantage over their cell-mates; but cells in which plasmids replicate too fast grow more slowly. Thus, at the level of individuals selfishness is favored, while at the level of groups altruism is favored. Paulsson’s paper explains the mechanisms of plasmid replication control; sketches models of intra- and inter-cellular selection gradients; and explains how conflicts between individual- and group-selection are resolved by plasmids. He also considers a third level of selection on lineages, but both Artem and I were confused about what exactly Paulsson meant.

Paulsson, J. (2002). Multileveled selection on plasmid replication. Genetics, 161(4): 1373-1384.

Mathematical Turing test: Readable proofs from your computer

We have previously discussed the finicky task of defining intelligence, but surely being able to do math qualifies? Even if the importance of mathematics in science is questioned by people as notable as E.O. Wilson, surely nobody questions that it is an intelligent activity? Mathematical reasoning is not necessary for intelligence, but surely it is sufficient?

Note that by mathematics, I don’t mean number crunching or carrying out a rote computation. I mean the bread and butter of what mathematicians do: proving theorems and solving general problems. As an example, consider the following theorem about metric spaces:

Let X be a complete metric space and let A be a closed subset of X. Then A is complete.

Can you prove this theorem? Would you call someone that can — intelligent? Take a moment to come up with a proof.

Egalitarians’ dilemma and the cognitive cost of ethnocentrism

Ethnocentrism (or contingent altruism) can be viewed as one of many mechanisms for enabling cooperation. The agents are augmented with a hereditary tag, and the strategy space is extended from just cooperation/defection to behaviour that can be contingent on whether the dyad share or differ in their tag. The tags and strategies are not inherently correlated, but can develop local correlations due to the system dynamics. This can expand the range of environments in which cooperation can be maintained, but an assortment-biasing mechanism is needed to fuel the initial emergence of cooperation (Kaznatcheev & Shultz, 2011). The resulting cooperation is extended only towards the in-group, while the out-group continues to be treated with the cold rationality of defection.

Suppose that circles are the in-group and squares the out-group. The four possible strategies and their minimal representations as finite state machines are given.

The four possible strategies are depicted above, from top to bottom: humanitarian, ethnocentric, traitorous, and selfish. Humanitarians and selfish agents do not condition their behavior on the tag of their partner, and so do not require the cognitive ability to categorize. Although this ability is simple, it can still merit a rich analysis (see Beer, 2003) by students of minimal cognition. By associating a small fitness cost k with categorization, we can study how much ethnocentric (and traitorous) agents are willing to pay for their greater cognitive abilities. This cost directly changes the default probability to reproduce (\text{ptr}), with humanitarians and selfish agents having \text{ptr} = 0.11 and ethnocentric and traitorous agents having \text{ptr} = 0.11 - k. During each cycle, the \text{ptr} is further modified by the game interactions, with each cooperative action costing the actor c = 0.01 and providing a benefit b (that varies depending on the simulation parameters) to the partner. For a more detailed presentation of the simulation and default parameters, or just to follow along on your computer, I made my code publicly available on GitHub. Pardon its roughness; the brunt of it is legacy code from when I first built this model in 2009 for Kaznatcheev (2010).
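
To make the bookkeeping concrete, here is a toy sketch in Python of how a single pairwise interaction modifies the two agents’ reproduction probabilities. This is just an illustration for readers who do not want to dig through the code on GitHub, and the b, c, and k values below are arbitrary examples.

B, C, K = 0.025, 0.01, 0.005  # benefit, cost of cooperation, cost of cognition

def base_ptr(strategy):
    # Humanitarians and selfish agents do not categorize, so they skip the
    # cognitive cost; ethnocentric and traitorous agents pay k.
    return 0.11 if strategy in ("humanitarian", "selfish") else 0.11 - K

def cooperates(strategy, same_tag):
    # Whether an agent with the given strategy cooperates with this partner.
    return {
        "humanitarian": True,
        "ethnocentric": same_tag,
        "traitorous": not same_tag,
        "selfish": False,
    }[strategy]

def interact(ptr_focal, ptr_partner, strat_focal, strat_partner, same_tag):
    # Each cooperative action costs the actor c and gives the partner b.
    if cooperates(strat_focal, same_tag):
        ptr_focal -= C
        ptr_partner += B
    if cooperates(strat_partner, same_tag):
        ptr_partner -= C
        ptr_focal += B
    return ptr_focal, ptr_partner

# Example: an ethnocentric agent meets a humanitarian with a different tag.
print(interact(base_ptr("ethnocentric"), base_ptr("humanitarian"),
               "ethnocentric", "humanitarian", same_tag=False))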

Number of agents by strategy versus evolutionary cycle. The lines represent the number of agents of each strategy: blue — humanitarian; green — ethnocentric; yellow — traitorous; red — selfish. The width of the line corresponds to standard error from averaging 30 independent runs. The two figures correspond to different costs of cognition. The left is k = 0.002 and is typical of runs before the cognitive cost phase transition. The right is k = 0.007 and is typical of runs after the cognitive cost phase transition. Figure is adapted from Kaznatcheev (2010).

The dynamics for low k are about the same as in the standard model without a cognitive cost, as can be seen in the left figure above. However, as k increases there is a transition to a regime where humanitarians start to dominate the population, as in the right figure above. To study this, I ran simulations with a fixed b/c ratio while increasing k from 0.001 to 0.02 in steps of 0.001. You can run your own with the command bcRun(2.5,0.001*(1:20)); some results are presented below, though your results might differ slightly due to the stochastic nature of the simulation.

Proportion of humanitarians (blue), ethnocentrics (red), and cooperative interactions (black) versus cognitive cost for b/c = 2.5. Dots are averages from evolutionary cycles 9000 to 10000 of 10 independent runs. The lines are best-fit sigmoids and the dotted lines mark the steepest point; I take this as the point of the cognitive cost phase transition. Data generated with bcRun(2.5,0.001*(1:20)) and visualized with bcPlot(2.5,0.001*(1:20),[],1).

Each data point is the average from the last 1000 cycles of 10 independent simulations. The points suggest a phase transition from a regime of few humanitarians (blue), many ethnocentrics (red), and very high cooperation (black) to one of many humanitarians, few ethnocentrics, and slightly less cooperation. To get a better handle on exactly where the phase transition is, I fit sigmoids to the data using fitSigmoid.m. The best-fit curves are shown as solid lines; I defined the point of phase transition as the steepest (or inflection) point on the curve and plotted them with dashed lines for reference. I am not sure if this is the best approach to quantifying the point of phase transition, since the choice of sigmoid function is arbitrary and based only on the qualitative feel of the data. It might be better to fit a simpler function like a step function, or a more complicated function from which a critical exponent can be estimated. Do you know a better way to identify the phase transition? At the very least, I have to properly measure the error on the averaged data points and propagate it through the fit to get error bounds on the sigmoid parameters, and make sure that — within statistical certainty — all 3 curves have their phase transition at the same point.
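
For readers who want to try this kind of estimate themselves, here is the basic idea as a Python sketch. It is a stand-in for illustration, not the fitSigmoid.m used for the figures, and the data points below are invented: fit a four-parameter sigmoid and read off its midpoint as the steepest point.

import numpy as np
from scipy.optimize import curve_fit

def sigmoid(x, height, midpoint, width, floor):
    # Four-parameter logistic curve; its steepest point is at x = midpoint.
    return height / (1.0 + np.exp(-(x - midpoint) / width)) + floor

# Invented data: proportion of humanitarians versus cognitive cost k.
k = np.linspace(0.001, 0.020, 20)
prop_humanitarian = np.array([0.05, 0.05, 0.06, 0.07, 0.10, 0.15, 0.25, 0.40,
                              0.55, 0.68, 0.76, 0.81, 0.84, 0.86, 0.87, 0.88,
                              0.88, 0.89, 0.89, 0.89])

params, _ = curve_fit(sigmoid, k, prop_humanitarian, p0=[0.85, 0.009, 0.002, 0.05])
height, midpoint, width, floor = params
print("estimated phase transition at k =", round(midpoint, 4))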

The most interesting feature of the phase transition is the effect on cooperation. The world becomes more equitable: agents that treat the out-group differently from the in-group (ethnocentrics) are replaced by agents that treat everyone with equal good-will and cooperation (humanitarians). However, the overall proportion of cooperative interactions decreases — it seems that humanitarians are less effective at suppressing selfish agents. This is consistent with the free-rider suppression hypothesis that Shultz et al. (2009) believed to be implausible. The result is the egalitarians’ dilemma: by promoting equality among agents, the world becomes less cooperative. Should one favour equality, and thus individual fairness, over the good of the whole population? If we expand our moral circle to eliminate out-groups, will that lead to less cooperation?

In the prisoners’ dilemma, we are inclined to favor the social good over the individual. Even though it is rational for the individual to defect (securing a higher payoff for themselves than cooperating), we believe it is better for both parties to cooperate (securing a better social payoff than mutual defection). But in the egalitarians’ dilemma we are inclined to favour the individualistic strategy (fairness for each) over the social good (higher average levels of cooperative interactions). We see a similar effect in the ultimatum game: humans reject unfair offers even though that results in neither player receiving a payoff (worse for both). In some ways, we can think of the egalitarians’ dilemma as the population analogue of the ultimatum game; should humanity favor fairness over higher total cooperation?

I hinted at some of these questions in Kaznatcheev (2010), but I restricted myself to just b/c = 2.5. From this limited data, I concluded that since the phase transition happens for k less than any other parameter in the model, it must be the case that agents are not willing to invest many resources into developing larger brains capable of categorical perception just to benefit from an ethnocentric strategy. Ethnocentrism and categorical perception would not have co-evolved; the basic cognitive abilities would have to be in place by some other means (or be incredibly cheap) before tag-based strategies could emerge.

Points of phase transition

Value of k at phase transition versus b/c ratio. In blue is the transition in the proportion of humanitarians, in red the proportion of ethnocentrics, and in black the proportion of cooperative interactions. Each data point is made from a parameter estimate done using a sigmoid best fit to 200 independent simulations over 20 values of k at a resolution of 0.001.

Here I explored the parameter space further by repeating the above procedure while varying the b/c ratio: I changed b from 0.02 to 0.035 in increments of 0.0025 while keeping c fixed at 0.01. Unsurprisingly, the transitions for the proportions of ethnocentrics and humanitarians are indistinguishable, but without a proper analysis it is not clear if the transition from high to low cooperation always coincides with them. For b/c > 2.75, agents are willing to invest more than c before the phase transition to all humanitarians; this invalidates my earlier reasoning. Agents are unwilling to invest many resources in larger brains capable of categorical perception only in competitive environments (low b/c). As b increases, the agents are willing to invest more in their perception to avoid giving this large benefit to the out-group. This seems consistent with the explicit out-group hostility that Kaznatcheev (2010b) observed in the harmony game. However, apart from simply presenting the data, I can’t make much more sense of this figure. Do you have any interpretations? Can we learn something from the seemingly linear relationship? Does the slope (if we plot k versus b then it is about 0.5) tell us anything? Would you still conclude that co-evolution of tag-based cooperation and categorical perception is unlikely?

References

Beer, Randall D. (2003). The Dynamics of Active Categorical Perception in an Evolved Model Agent. Adaptive Behavior. 11(4): 209-243.

Kaznatcheev, A. (2010). The cognitive cost of ethnocentrism. Proceedings of the 32nd Annual Conference of the Cognitive Science Society.

Kaznatcheev, A. (2010b). Robustness of ethnocentrism to changes in inter-personal interactions. Complex Adaptive Systems – AAAI Fall Symposium.

Kaznatcheev, A., & Shultz, T. R. (2011). Ethnocentrism maintains cooperation, but keeping one’s children close fuels it. Proceedings of the 33rd Annual Conference of the Cognitive Science Society. 3174-3179.

Shultz, T. R., Hartshorn, M., & Kaznatcheev, A. (2009). Why is ethnocentrism more common than humanitarianism? Proceedings of the 31st Annual Conference of the Cognitive Science Society. 2100-2105.

Programming playground: Cells as (quantum) computers?

Nearly a year ago, the previous post in this series introduced a way for programmers to play around with biology: a model that simulated the dynamics of a whole cell at an unprecedented level of detail. But what if you want to play with the real thing? Can you program a living cell? Can you compute with molecular biology?

Coccolithophore

Could this single-celled photosynthetic alga be your next computer?

Biology-inspired computation can probably be traced as far back as Turing’s (1948) introduction of B-type neural networks. However, the molecular biology approach is much more recent, with Adleman (1994) proposing DNA computing, and Păun (2000) introducing membrane computing with P-systems. These models caused a stir when they appeared, due to the ease of misrepresenting their computational power. If you allow the cells or membranes to keep reproducing at an exponential rate for an arbitrarily long time, then these systems can solve NP-complete problems quickly; in fact, it is not hard to show that this model would allow you to solve PSPACE-complete problems. Of course, in any reasonable setting, your cells can only grow at an exponential rate until they reach the carrying capacity of the environment you are growing them in. If you take this into account, then efficient DNA and membrane computing are no more powerful than the usual definition of efficient computation — polynomial time on a Turing machine.
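
To spell out the arithmetic behind that caveat (a back-of-the-envelope addition of mine, not a calculation from either paper): after t doublings an unconstrained culture holds

N(t) = N_0 \, 2^{t}

cells, so all 2^n candidate solutions to an n-variable problem can be represented after only n doublings. That is brute-force search in linear biological time but exponential biomass; once the population saturates at a carrying capacity K, the usable parallelism is capped at K and the apparent speed-up disappears.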

The stirred (i.e. well-mixed) nature of membrane and (early approaches to) DNA computing provides substantial constraints on empirical realizations and on the scalability of bio-computing. In these early models, regulatory molecules are reused in the self-mixing environment of the cell, and gates correspond to chemical reactions. As such, gates are temporary, and the information-carrying molecule must change at every step of the computation to avoid being confused with residue from the previous step. This made implementing some gates, such as XNOR — output 1 only if both inputs are the same — experimentally impossible (Tamsir, 2011): how would you tell which input is which, and how would the gate know it has received both inputs and not just an abnormally high concentration of the first?

To overcome this, Bonnet et al. (2013) designed a cellular computation model that more closely resembles the von Neumann architecture of the device you are reading this post on. In particular, they introduced a cellular analog of the transistor — the transcriptor. The whimsical name comes from the biological process they hijacked for computation: instead of electric current flowing through copper wires, the researchers looked at the “transcriptional current” of RNA polymerase along DNA “wires”. Only if a control signal is present does the transcriptor allow RNA polymerase to flow through it; otherwise it blocks it, just like an electric transistor. By putting several transcriptors together, and choosing their control signals, Bonnet et al. (2013) can implement any logic gate (including the previously unrealized XNOR) just as an electrical engineer would with transistors. What matters most for connecting to quantum computing is the ability to reliably amplify logical signals: with amplifying gates like AND, OR, and XOR, the authors were able to produce more than a 3-fold increase in control signal. For further details on the transcriptor, listen to Drew Endy explain his group’s work:
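
The compositional idea is the same as in relay or transistor logic. Here is an abstract toy in Python, my own simplification rather than Bonnet et al.’s actual genetic constructs: treat each transcriptor as a switch that passes or blocks polymerase depending on an input signal, and build gates out of series and parallel arrangements of such switches.

def series(*segments):
    # Polymerase gets through only if every segment passes (AND-like composition).
    return lambda a, b: all(seg(a, b) for seg in segments)

def parallel(*segments):
    # Polymerase gets through if any one path passes (OR-like composition).
    return lambda a, b: any(seg(a, b) for seg in segments)

# Switches gated by the input signals a and b, or by their absence.
pass_if_a = lambda a, b: a
pass_if_b = lambda a, b: b
pass_if_not_a = lambda a, b: not a
pass_if_not_b = lambda a, b: not b

AND = series(pass_if_a, pass_if_b)
OR = parallel(pass_if_a, pass_if_b)
XOR = parallel(series(pass_if_a, pass_if_not_b), series(pass_if_not_a, pass_if_b))
XNOR = parallel(series(pass_if_a, pass_if_b), series(pass_if_not_a, pass_if_not_b))

for a in (False, True):
    for b in (False, True):
        print(a, b, AND(a, b), OR(a, b), XOR(a, b), XNOR(a, b))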

Taking inspiration from biology is not restricted to classical computation. Vlatko Vedral provides a great summary of bio-inspired quantum computing: start from the top down, figure out how biology uses quantum effects at room temperature, and try to harness them for computation. The first step here is to find a non-trivial example of quantum effects in use by a biological system. Conveniently, Engel et al. (2007) showed that photosynthesis provides such an example.

During photosynthesis, an incident photon becomes an ‘exciton’ that has to quickly walk through a maze of interconnected chlorophyll molecules to find a site where its energy can be used to phosphorylate used-up ADP into energy-carrying ATP. Unfortunately, if the exciton follows a classical random walk (i.e. spreads out in proportion to the square root of time) then it cannot reach a binding site before decaying. How does biology solve this? The exciton follows a quantum walk! (Rebentrost et al., 2009)
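
To see the difference in spreading rates, here is a small numerical sketch in Python that I am adding purely for illustration; it is a continuous-time walk on a line, not a model of the chlorophyll network. The classical walk’s standard deviation grows like the square root of time, while the quantum walk’s grows linearly.

import numpy as np
from scipy.linalg import expm

n = 201                       # sites on a line; the walker starts in the middle
center = n // 2
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)  # adjacency matrix
L = A - np.diag(A.sum(axis=1))                                # generator of the classical walk
positions = np.arange(n) - center

def spread(p):
    # Standard deviation of position under the distribution p.
    return np.sqrt(np.sum(p * positions**2) - np.sum(p * positions)**2)

for t in (5.0, 10.0, 20.0):
    p0 = np.zeros(n)
    p0[center] = 1.0

    # Classical: the probability vector evolves under exp(t L).
    p_classical = expm(t * L) @ p0

    # Quantum: the amplitude vector evolves under exp(-i A t); probabilities are |psi|^2.
    psi = expm(-1j * t * A) @ p0.astype(complex)
    p_quantum = np.abs(psi) ** 2

    print(t, spread(p_classical), spread(p_quantum))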

It is cool to know that we can observe a quantum walk, but can that be useful for computation? My former supervisor Andrew Childs (2009; see also Childs et al., 2013) is noted for showing that if we have control over the Hamiltonian defining our quantum walk, then we can use the walk to do universal computation. Controlling the Hamiltonian generating a quantum walk is analogous to designing a graph for a classical walk. Theoretical work is still needed to bridge Rebentrost et al. and Childs, since (as Joe Fitzsimons pointed out on G+) the biological quantum walk is not coherent, and the decoherence that is present might doom any attempt at universal computation. The last ingredient that is needed is a classical controller.

Since the graph we need will depend on the specific problem instance we are trying to solve, we will need a classical computer to control the construction of the graph. This is where I hope synthetic biology results like Bonnet et al. (2013) will be useful. The transcriptors could be used as the classical control with which a problem instance is translated into a specific structure of chlorophyll molecules, on which a quantum walk is carried out to do the hard part of the computation. The weak quantum signal from this walk can then be measured by the transcriptor-based controller and amplified into a signal that the experimenter can observe at the level of the behavior (say, fluorescence) of the cell. Of course, this requires a ridiculous amount of fundamental work on both quantum computing and bio-engineering. However, could the future of scalable quantum computers be in the noisy world of biology, instead of the sterility of superconductors, photon benches, or ion traps?

References

Adleman, L. M. (1994). Molecular computation of solutions to combinatorial problems. Science, 266(5187), 1021-1023.

Bonnet, J., Yin, P., Ortiz, M.E., Subsoontorn, P., & Endy, D. (2013). Amplifying genetic logic gates. Science. PMID: 23539178

Childs, A. M. (2009). Universal computation by quantum walk. Physical review letters, 102(18), 180501. [ArXiv pdf]

Childs, A. M., Gosset, D., & Webb, Z. (2013). Universal Computation by Multiparticle Quantum Walk. Science, 339(6121), 791-794. [ArXiv pdf]

Engel GS, Calhoun TR, Read EL, Ahn TK, Mancal T, Cheng YC et al. (2007). Evidence for wavelike energy transfer through quantum coherence in photosynthetic systems. Nature 446 (7137): 782–6.

Păun, G. (2000). Computing with membranes. Journal of Computer and System Sciences, 61(1), 108-143.

Rebentrost, P., Mohseni, M., Kassal, I., Lloyd, S., & Aspuru-Guzik, A. (2009). Environment-assisted quantum transport. New Journal of Physics, 11(3), 033003. [ArXiv pdf]

Tamsir, A., Tabor, J.J., & Voigt, C.A. (2011). Robust multicellular computing using genetically encoded NOR gates and chemical ‘wires’. Nature, 469(7329): 212-215.