Evolution is a special kind of (machine) learning

Theoretical computer science has a long history of peering through the algorithmic lens at the brain, mind, and learning. In fact, I would argue that the field was born from the epistemological question of what our minds can learn of mathematical truth through formal proofs. The perspective became more scientific with McCulloch & Pitts’ (1943) introduction of finite state machines as models of neural networks, and with Turing’s B-type neural networks, which paved the way for our modern treatment of artificial intelligence and machine learning. The connections to biology, unfortunately, are less pronounced. Turing ventured into the field with his important work on morphogenesis, and I believe that he could have contributed to the study of evolution but did not get the chance. This work was followed up with the use of computers in biology, and with heuristic ideas from evolution entering computer science in the form of genetic algorithms. However, these areas remained non-mathematical, with very few provable statements or non-heuristic reasoning. The task of making strong connections between theoretical computer science and evolutionary biology has been left to our generation.

Although the militia of cstheorists reflecting on biology is small, Leslie Valiant is their standard-bearer for the steady march of theoretical computer science into both learning and evolution. Due in part to his efforts, artificial intelligence and machine learning are such well developed fields that their theory branch has its own name and conferences: computational learning theory (CoLT). Much of CoLT rests on Valiant’s (1984) introduction of probably-approximately correct (PAC) learning, which — in spite of its name — is one of the most formal and careful ways to understand learnability. The importance of this model cannot be overstated, and it earned Valiant (among many other distinctions) the 2010 Turing Award (i.e. the Nobel prize of computer science). Most importantly, his attention was not confined to pure cstheory: he took his algorithmic insights into biology, specifically computational neuroscience (see Valiant (1994; 2006) for examples), to understand human thought and learning.
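For readers who have not run into PAC learning before, the definition is short enough to state in full. The phrasing below is mine, not a quote from Valiant (1984):

```latex
% Standard statement of PAC learnability; my phrasing, not Valiant's original wording.
\textbf{Definition.} A concept class $C$ over an instance space $X$ is \emph{PAC-learnable} if there is
an algorithm $A$ and a polynomial $p$ such that for every target $c \in C$, every distribution $D$ on
$X$, and every $\varepsilon, \delta \in (0,1)$: given $m \geq p(1/\varepsilon, 1/\delta, \mathrm{size}(c))$
labelled examples $(x, c(x))$ with $x$ drawn independently from $D$, the algorithm outputs a hypothesis
$h$ such that
\[
  \Pr\Big[ \Pr_{x \sim D}\big[ h(x) \neq c(x) \big] \leq \varepsilon \Big] \geq 1 - \delta,
\]
where the outer probability is over the random sample (and any coin flips of $A$). The ``probably'' is
the $1 - \delta$ and the ``approximately correct'' is the $\varepsilon$.
```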

Like any good thinker reflecting on biology, Valiant understands the importance of Dobzhansky’s observation that “nothing in biology makes sense except in the light of evolution”. Even for the algorithmic lens it helps to have this illumination. Any understanding of a learning organ like the brain is incomplete without an examination of the evolutionary dynamics that shaped it. In the mid-2000s, Valiant embarked on the quest of formalizing some of the insights cstheory can offer evolution, culminating in his PAC-based model of evolvability (Valiant, 2009). Although this paper is one of the most frequently cited on TheEGG, I’ve waited until today to give it a dedicated post.

Machine learning and prediction without understanding

Big data is the buzzword du jour, permeating everything from machine learning to Hadoop-powered distributed computing, from giant scientific projects to individual social science studies, and from careful statistics to the witchcraft of web-analytics. As we are overcome by petabytes of data and as more of it becomes public, it is tempting for a would-be theorist to simply run machine learning and big-data algorithms on these data sets and take the computer’s conclusions as understanding. I think this risks overshadowing more traditional approaches to theory and the feedback between theory and experiment.

Natural algorithms and the sciences

Today, I am passing through New York City on my way to Princeton’s Center for Computational Intractability for a workshop on Natural Algorithms and the Sciences (NA&S). The two-day meeting will cover everything from molecular algorithms for learning and experiments on artificial cells to bounded rationality in decision-making and the effects of network topology on the evolution of collective migration. In other words, right at home for TheEGG blog. The full mission statement:

The workshop will bring together researchers from computer science, mathematics, physics, biology, and engineering to explore interactions among algorithms, dynamical systems, statistical physics, and complexity theory (in all senses of the term).


Will the droids take academic jobs?

As a researcher, one of the biggest challenges I face is keeping up with the scientific literature. This is further exacerbated by working in several disciplines, and without a more senior advisor or formal training in most of them. The Evolutionary Game Theory Reading Group, and later this blog, started as an attempt to help me discover and keep up with the reading. Every researcher has a different approach: some use the traditional process of reading the table of contents and abstracts of selective journals, others rely on colleagues, students, and social media to keep them up to date. Fundamentally, these methods are surprisingly similar, and it is up to the individual to find what is best for them. I rely on blogs, G+, Google Scholar author alerts, extensive forward-citation searches, surveys, and most recently: Google Scholar updates.

[Screenshot: Google Scholar updates suggestions]

The updates are a computer filtering system that uses my publications to gauge my interests and then suggests new papers as they enter Google’s database. Due to my limited publication history, the AI doesn’t have much to go on, and is rather hit or miss. Some papers, like the Requejo & Camacho reference in the screenshot above, have led me to find useful papers on ecological games and hurry my own work on environmental austerity. Other papers, like Burgmuller’s, are completely irrelevant. However, this recommendation system will improve with time: I will publish more papers to inform it, and the algorithms it uses will advance.
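I don’t know what Google’s pipeline actually looks like under the hood, but the general flavour of this kind of content-based filtering is easy to sketch. Here is a minimal Python illustration (not Google Scholar’s algorithm): build a TF-IDF profile from the abstracts of my own papers and rank incoming abstracts by cosine similarity to that profile. The abstracts are made up for the example.

```python
# Minimal content-based recommender sketch (illustrative only; not Google Scholar's algorithm).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical abstracts of my own publications (the "interest profile").
my_papers = [
    "evolutionary game theory of cooperation on random graphs",
    "computational complexity of evolutionary equilibria",
]

# Hypothetical abstracts of newly indexed papers to be ranked.
new_papers = [
    "environmental austerity and the evolutionary game dynamics of cooperation",
    "birdsong repertoires and sexual selection in warblers",
]

vectorizer = TfidfVectorizer(stop_words="english")
profile = vectorizer.fit_transform(my_papers)     # my interests
candidates = vectorizer.transform(new_papers)     # papers to score

# Score each candidate by its best similarity to any of my papers, then rank.
scores = cosine_similarity(candidates, profile).max(axis=1)
for score, abstract in sorted(zip(scores, new_papers), reverse=True):
    print(f"{score:.2f}  {abstract}")
```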

Part of that advancement comes from scientists optimizing their own lit-review process. Three days ago, Davis, Wiegers et al. (2013) published such an advancement in PLoS ONE. The authors are part of the team behind the Comparative Toxicogenomics Database, which (emphasis mine):

promotes understanding about the effects of environmental chemicals on human health by integrating data from curated scientific literature to describe chemical interactions with genes and proteins, and associations between diseases and chemicals, and diseases and genes/proteins.

This curation requires their experts to go through thousands of articles in order to update the database. Unfortunately, not every article is relevant, and there are simply too many articles to curate everything. As such, the team needed a way to automatically sort which articles are most likely to be useful and thus curated. They developed a system that uses their database plus some hand-made rules to text-mine articles and assign each a document relevance score (DRS). The authors looked at a corpus of 14,904 articles, 1,020 of which had been considered before and thus served as a calibration set. To test the algorithm’s effectiveness, 3,583 articles were sampled at random from the remaining 13,884 and sent to biocurators for processing. The DRS correlated well with the probability of curation: an 85% curation rate for articles with high DRS and only 15% for those with low DRS. This outperformed PubMed’s ranking, which yielded a curation rate of roughly 60% regardless of rank.
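The actual scoring rules in Davis et al. (2013) are richer than I can do justice to here, but the triage idea itself is simple enough to sketch. The toy Python below illustrates only the general approach, not their DRS formula: weight each article by hits against a (hypothetical) vocabulary of chemicals, genes, and diseases, rank by the total, and send only the high-scoring articles to curators.

```python
# Toy document-relevance triage (illustrative; not the actual DRS formula from Davis et al., 2013).
import re

# Hypothetical term weights derived from a curated vocabulary.
TERM_WEIGHTS = {
    "arsenic": 3.0, "cadmium": 3.0,           # chemicals
    "tp53": 2.0, "cyp1a1": 2.0,               # genes
    "carcinoma": 1.5, "nephrotoxicity": 1.5,  # diseases
}

def relevance_score(abstract: str) -> float:
    """Sum the weights of vocabulary terms that appear in the abstract."""
    words = re.findall(r"[a-z0-9]+", abstract.lower())
    return sum(TERM_WEIGHTS.get(w, 0.0) for w in words)

def triage(abstracts, threshold=3.0):
    """Rank abstracts by score and flag the ones worth sending to curators."""
    scored = sorted(((relevance_score(a), a) for a in abstracts), reverse=True)
    return [(score, a, score >= threshold) for score, a in scored]

articles = [
    "Cadmium exposure alters TP53 expression in renal carcinoma cell lines.",
    "A survey of hiking trails in the Canadian Rockies.",
]
for score, abstract, send_to_curator in triage(articles):
    print(f"{score:4.1f}  curate={send_to_curator}  {abstract}")
```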

With the swell of scientific publishing, I think machines are well on their way to replacing selective journals, graduate students, and social media for finding relevant literature. Throw in that computers already make decent journalists; that you can go to SCIgen to generate your own AI-authored paper that is ‘good’ enough to be accepted at an IEEE conference; and that your laptop can write mathematical proofs good enough to fool humans. Now you have the ingredients to remind academics that they are at risk, just like everybody else, of losing their jobs to computers. Still, it is tempting to take comfort from technological optimists like Andrew McAfee and believe that the droids will only take over mundane and arduous tasks. It is nicer to believe that there will always be a place for human creativity and innovation:

For now, the AI systems are primarily improving my workflow and making research easier and more fun to do. But the future is difficult to predict, and I am naturally a pessimist. I like to say that I look at the world through algorithmic lenses. By that, I mean that I apply ideas from theoretical computer science to better understand natural or social phenomena. Maybe I should adopt a more literal meaning; at this rate, “looking at the world through algorithmic lenses” might simply mean that the one doing the looking will be a computer, not me.

Davis, A., Wiegers, T., Johnson, R., Lay, J., Lennon-Hopkins, K., Saraceni-Richards, C., Sciaky, D., Murphy, C., & Mattingly, C. (2013). Text mining effectively scores and ranks the literature for improving chemical-gene-disease curation at the Comparative Toxicogenomics Database. PLoS ONE, 8(4). DOI: 10.1371/journal.pone.0058201

Mathematical Turing test: Readable proofs from your computer

We have previously discussed the finicky task of defining intelligence, but surely being able to do math qualifies? Even if the importance of mathematics in science is questioned by people as notable as E.O. Wilson, surely nobody questions it as an intelligent activity? Mathematical reasoning is not necessary for intelligence, but surely it is sufficient?

Note that by mathematics, I don’t mean number crunching or carrying out a rote computation. I mean the bread and butter of what mathematicians do: proving theorems and solving general problems. As an example, consider the following theorem about metric spaces:

Let X be a complete metric space and let A be a closed subset of X. Then A is complete.

Can you prove this theorem? Would you call someone who can — intelligent? Take a moment to come up with a proof.
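For those who want to check their answer against one standard human proof, here is a sketch (my phrasing, not a machine-generated proof):

```latex
% One standard human proof sketch; not a machine-generated proof.
\textbf{Proof sketch.} Let $(a_n)$ be a Cauchy sequence in $A$. Since $A \subseteq X$, the sequence is
also Cauchy in $X$, and completeness of $X$ gives a point $x \in X$ with $a_n \to x$. Every term $a_n$
lies in $A$, so $x$ belongs to the closure of $A$; because $A$ is closed, $\overline{A} = A$ and hence
$x \in A$. Thus every Cauchy sequence in $A$ converges to a point of $A$, i.e.\ $A$ is complete. $\blacksquare$
```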

Games, culture, and the Turing test (Part I)

Intelligence is one of the most loaded terms that I encounter. A common association is the popular psychometric definition — IQ. For many psychologists, this definition is too restrictive and the g factor is preferred for getting at the ‘core’ of intelligence tests. Even geneticists have latched on to g for looking at the heritability of intelligence, inadvertently helping us see that g might be too general a measure. Still, for some, these tests are not general enough, since they miss the emotional aspects of being human, and tests of emotional intelligence have been developed. Unfortunately, the bar for intelligence is a moving one, whether it is the Flynn effect in IQ or, more commonly, the constant redefinition of ‘intelligence’.

Does being good at memorizing make one intelligent? Maybe in the 1800s, but not when my laptop can load Google. Does being good at chess make one intelligent? Maybe before Deep Blue beat Kasparov, but not when my laptop can run a chess program that beats grand-masters. Does being good at Jeopardy make one intelligent? Maybe before IBM Watson easily defeated Jennings and Rutter. The common trend here seems to be that as soon as computers outperform humans on a given act, that act and associated skills are no longer considered central to intelligence. As such, if you believe that talking about an intelligent machine is reasonable then you want to agree on an operational benchmark of intelligence that won’t change as you develop your artificial intelligence. Alan Turing did exactly this and launched the field of AI.

I’ve stressed Turing’s greatest achievement as assembling an algorithmic lens and turning it on the world around him, and previously highlighted its application to biology. In popular culture, he is probably best known for the application of the algorithmic lens to the mind — the Turing test (Turing, 1950). The test has three participants: a judge, a human, and a machine. The judge uses an instant messaging program to chat with the human and the machine, without knowing which is which. At the end of a discussion (which can be about anything the judge desires), she has to determine which is the human and which is the machine. If judges cannot distinguish the machine more than 50% of the time, then it is said to pass the test. For Turing, this meant that the machine could “think”, and for many AI researchers this is equated with intelligence.

Hit Turing right in the test-ees

You might have noticed a certain arbitrariness in the chosen mode of communication between judge and candidates. Text-based chat seems to be a very general mode, but is general always better? Instead, we could just as easily define a psychometric Turing test by restricting the judge to only giving IQ tests. Strannegård and co-authors did this by designing a program that could be tested on the mathematical sequences part of IQ tests (Strannegård, Amirghasemi, & Ulfsbäcker, 2013) and Raven’s progressive matrices (Strannegård, Cirillo, & Ström, 2012). The authors’ anthropomorphic method could match humans on either task (IQ of 100) and, on the mathematical sequences, greatly outperform most humans if desired (IQ of 140+). In other words, a machine can pass the psychometric Turing test, and if IQ is a valid measure of intelligence then your laptop is probably smarter than you.
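I have not reimplemented Strannegård et al.’s anthropomorphic method, but the flavour of the number-sequence task is easy to convey. The Python sketch below is a much cruder baseline than theirs: it extrapolates a sequence by taking finite differences until they become constant, which already handles the arithmetic and polynomial patterns common on IQ tests.

```python
# Crude number-sequence extrapolation via finite differences.
# This is NOT Strannegard et al.'s anthropomorphic method -- just a baseline illustration.

def next_term(seq, max_depth=4):
    """Predict the next term of a sequence whose k-th finite differences are constant (k <= max_depth)."""
    levels = [list(seq)]
    while len(levels) <= max_depth:
        prev = levels[-1]
        if len(set(prev)) == 1:     # differences became constant: pattern found
            break
        if len(prev) < 2:
            raise ValueError("sequence too short for this depth")
        levels.append([b - a for a, b in zip(prev, prev[1:])])
    else:
        raise ValueError("no constant-difference pattern within max_depth")
    # Fold the difference table back up to extend the original sequence by one term.
    total = 0
    for level in reversed(levels):
        total += level[-1]
    return total

print(next_term([2, 4, 6, 8]))       # arithmetic progression: 10
print(next_term([1, 4, 9, 16, 25]))  # perfect squares: 36
```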

Of course, there is no reason to stop there; we can restrict the mode of communication further. A natural continuation is to switch to the domain of game theory. The judge sets a two-player game for the human and computer to play. To decide which player is human, the judge only has access to the history of actions the players chose. This is the economic Turing test suggested by Boris Bukh and shared by Ariel Procaccia. The test can be viewed as part of the program of linking intelligence and rationality.

Procaccia raises the good point that in this game it is not clear whether it is more difficult to program the computer or to be the judge. Before the work of Tversky & Kahneman (1974), a judge would not even know how to distinguish a human from a rational player. Forty years later, I still don’t know of a reliable survey or meta-analysis of well-controlled experiments on human behavior in the restricted case of one-shot perfect-information games. But we do know that judge-designed payoffs are not the only source of variation in human strategies, and I even suggest the subjective-rationality framework as a way to use evolutionary game theory to study these deviations from objective rationality. Understanding these departures is far from a settled question for psychologists and behavioral economists. In many ways, the programmer in the economic Turing test is a job description for a researcher in computational behavioral economics, and the judge is an experimental psychologist. Both tasks are incredibly difficult.
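To make the judge’s problem concrete, here is a toy version of the kind of evidence such a judge might look for; the players, threshold, and judging rule below are my own invention for illustration, not Bukh’s or Procaccia’s formulation. In a repeated ultimatum game, a payoff-maximizing responder accepts any positive offer, while a fairness-biased responder rejects offers below some threshold, so the rejection history is what gives the ‘human’ away.

```python
# Toy "economic Turing test" in the ultimatum game.
# The players, threshold, and judging rule are invented for illustration,
# not Bukh's or Procaccia's formulation -- only its flavour.

def rational_responder(offer: float) -> bool:
    """A payoff maximizer accepts any positive offer."""
    return offer > 0

def fairness_biased_responder(offer: float, threshold: float = 0.3) -> bool:
    """A human-like responder rejects offers below a (hypothetical) fairness threshold."""
    return offer >= threshold

def judge(history) -> str:
    """Guess 'machine' if every positive offer was accepted, 'human' otherwise."""
    rejected_positive = any(offer > 0 and not accepted for offer, accepted in history)
    return "human" if rejected_positive else "machine"

offers = [0.10, 0.45, 0.20, 0.35, 0.05]   # share of the pot offered to the responder
history_machine = [(o, rational_responder(o)) for o in offers]
history_human = [(o, fairness_biased_responder(o)) for o in offers]

print(judge(history_machine))  # machine
print(judge(history_human))    # human
```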

For me, the key limitation of the economic (and, similarly, the standard) Turing test is not the difficulty of judging. The fundamental flaw is the assumption that game behavior is a human universal. Much like the unreasonable assumption of objective rationality, we cannot simply assume uniformity in the heuristics and biases that shape human decision making. Before we take anything as general or universal, we have to show its consistency not only across the participants we chose, but also across different demographics and cultures. Unfortunately, much of game behavior (for instance, the irrational concept of fairness) is not consistent across cultures, even if it is largely consistent within a single culture. What a typical Western university student considers a reasonable offer in the ultimatum game is not typical for a member of the Hadza of Tanzania or the Lamelara of Indonesia (Henrich et al., 2001). Game behavior is not a human universal, but is highly dependent on culture. We will discuss this dependence in part II of this series, and explore what it means for the Turing test and evolutionary game theory.

Until next time, I leave you with some questions that I wish I knew the answers to: Can we ever define intelligence? Can intelligence be operationalized? Do universals that are central to intelligence exist? Is intelligence a cultural construct? If there are intelligence universals, then how should we modify the mode of interface used by the Turing test to focus only on them?

This post continues with a review of Henrich et al. (2001) in Part II.

References

Henrich, J., Boyd, R., Bowles, S., Camerer, C., Fehr, E., Gintis, H., & McElreath, R. (2001). In Search of Homo Economicus: Behavioral Experiments in 15 Small-Scale Societies. American Economic Review, 91(2), 73-78.

Strannegård, C., Amirghasemi, M., & Ulfsbäcker, S. (2013). An anthropomorphic method for number sequence problems. Cognitive Systems Research, 22-23, 27-34. DOI: 10.1016/j.cogsys.2012.05.003

Strannegård, C., Cirillo, S., & Ström, V. (2012). An anthropomorphic method for progressive matrix problems. Cognitive Systems Research.

Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433-460.

Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124-1131.