Social algorithms and the Weapons of Math Destruction

Cathy O'Neil holding her new book: Weapons of Math Destruction at a Barnes & Noble in NYC.

In reference to intelligent robots taking over the world, Andrew Ng once said: “I don’t work on preventing AI from turning evil for the same reason that I don’t work on combating overpopulation on the planet Mars.” Sure, it will be an important issue to think about when the time comes, but for now there is no productive way to think seriously about it. Today there are more concrete problems to worry about and more basic questions that need to be answered. These problems don’t involve superintelligent tin-men, killer robots, or sentient machine overlords. Focusing on distant speculation obscures the fact that algorithms — and not necessarily very intelligent ones — already reign over our lives. And for many this reign is far from benevolent.

I owe much of my knowledge about the (negative) effects of algorithms on society to the writings of Cathy O’Neil. I highly recommend her blog mathbabe.org. A couple of months ago, she shared the proofs of her book Weapons of Math Destruction with me, and given that the book came out last week, I wanted to share some of my impressions. In this post, I want to summarize what makes a social algorithm into a weapon of math destruction, and share the example of predictive policing.

Read more of this post

Argument is the midwife of ideas (and other metaphors)

In their classic book Metaphors We Live By, George Lakoff and Mark Johnson argue — very convincingly, and as I’ve reviewed before — that “[m]etaphor is one of our most important tools for trying to comprehend partially what cannot be comprehended totally” and that these conceptual metaphors are central to shaping our understanding of and interaction with the world we are embedded in. Based on the authors’ grounding in linguistics, part of their case proceeds by offering examples of, by my count, over 58 different metaphors and metonymies in our everyday language; and given their book’s intentions, they chose a particularly pertinent first case: ARGUMENT IS WAR.[1]

They show this metaphor in action through some examples of common usage (pg. 4):

Your claims are indefensible.
He attacked every weak point in my argument.
His criticisms were right on target.
I demolished his argument.
I’ve never won an argument with him.
You disagree? Okay, shoot!
If you use that strategy, he’ll wipe you out.
He shot down all my arguments.

Notice that even the xkcd I borrowed for visual reinforcement is titled ‘Duty Calls’, an expression usually associated with a departure for war. With our awareness drawn to this militaristic structure, Lakoff and Johnson encourage the reader to ask themselves: how would discussions look if instead of structuring arguments adversarially, we structured them after a cooperative activity like dance?[2]

Read more of this post

Computational kindness and the revelation principle

In EWD1300, Edsger W. Dijkstra wrote:

even if you have only 60 readers, it pays to spend an hour if by doing so you can save your average reader a minute.

He wrote this as the justification for the mathematical notations that he introduced and as an ode to the art of definition. But any writer should heed this aphorism.[1] Recently, I finished reading Algorithms to Live By by Brian Christian and Tom Griffiths.[2] In the conclusion of their book, they gave a unifying name to the sentiment that Dijkstra expresses above: computational kindness.

As computer scientists, we recognise that computation is costly. Processing time is a limited resource. Whenever we interact with others, we are sharing in a joint computational process, and we need to be mindful of when we are not carrying our part of the processing burden. Or worse yet, when we are needlessly increasing that burden and imposing it on our interlocutor. If you are computationally kind then you will be respectful of the cognitive problems that you force others to solve.

I think this is a great observation by Christian and Griffiths. In this post, I want to share with you some examples of how certain systems — at the level of the individual, small group, and society — are computationally kind. And how some are cruel. I will draw on examples from their book, and some of my own. They will include language, bus stops, and the revelation principle in algorithmic game theory.
Read more of this post

A year in books: Neanderthals to the National Cancer Act to now

A tradition I started a couple of years ago is to read at least one non-fiction book per month and then to share my thoughts on the reading at the start of the following year. Last year, my dozen books were mostly on philosophy, psychology, and political economy. My brief comments on them ended up running to a lengthy 3200 words. This time the list has expanded to around 19 books, so I will divide the summaries into thematic sets. For the first theme, I will start with a subject that is new for my idle reading: cancer.

As a new researcher in mathematical oncology — and even though I am located in a cancer hospital — my experience with cancer has been mostly confined to the remote distance of replicator dynamics. So above all else these three books — Nelson’s (2013) Anarchy in the Organism, Mukherjee’s (2010) The Emperor of All Maladies, and Leaf’s (2014) The Truth in Small Doses — have provided me with insights into the personal experiences of the patient and doctor.

I hope that based on these reviews and the ones to follow, you can suggest more books for me to read in 2016. Better yet, maybe my comments will help you choose your next book. Much of what I read in 2015 came from suggestions made by my friends and readers, as well as articles, blogs, and reviews I’ve stumbled across.[1] In fact, each of these cancer books was picked for me by someone else.

If you’ve been to a restaurant with me then you know that I hate choosing between close-to-equivalent options. To avoid such discomfort, I outsourced the choosing of my February book to G+ and Nelson’s Anarchy in the Organism beat out Problems of the Self by a narrow margin to claim a spot on the reading list. As I was finishing up Nelson’s book — which I will review last in this post — David Basanta dropped off The Emperor of All Maladies on my desk. So I continued my reading on cancer. Finally, Leaf’s book came towards the end of the year based on a recommendation from Jacob Scott. It helped reinvigorate me after a summer away from the Moffitt Cancer Center.
Read more of this post

A year in books: philosophy, psychology, and political economy

If you follow the Julian calendar — which I do when I need a two week extension on overdue work — then today is the first day of 2015.

Happy Old New Year!

This also means that this is my last day to be timely with yet another year-in-review post; although I guess I could also celebrate the Lunar New Year on February 19th. Last year, I made a resolution to read one not-directly-work-related book a month, and only satisfied it in an amortized analysis; I am repeating the resolution this year. Since I only needed two posts to catalog the practical and philosophical articles on TheEGG, I will try something new with this one: a list and mini-review of the books I read last year to meet my resolution. I hope that based on this, you can suggest some books for me to read in 2015; or maybe my comments will help you choose your next book to read. I know that articles and blogs I’ve stumbled across have helped guide my selection. If you want to support TheEGG directly and help me select the books that I will read this year then consider donating something from TheEGG wishlist.

Read more of this post

Models and metaphors we live by

George Lakoff and Mark Johnson’s Metaphors We Live By is a classic that has had a huge influence on parts of linguistics and cognitive science, and some influence — although less so, in my opinion — on philosophy. It is structured around the thought that “[m]etaphor is one of our most important tools for trying to comprehend partially what cannot be comprehended totally”.

The authors spend the first part of the book giving a very convincing argument that “even our deepest and most abiding concepts — time, events, causation, morality, and mind itself — are understood and reasoned about via multiple metaphors.” These conceptual metaphors structure our reality, and are fundamentally grounded in our sensory-motor experience. For them, metaphors are not just aspects of speech but windows into our mind and conceptual system:

Our ordinary conceptual system, in terms of which we both think and act, is fundamentally metaphorical in nature. … Our concepts structure what we perceive, how we get around the world, and how we relate to others. Our conceptual system thus plays a central role in defining our everyday realities. … Since communication is based on the same conceptual system that we use in thinking and acting, language is an important source of evidence for what that system is like.

I found the book incredibly insightful, and in broad agreement with many of my recent thoughts on the philosophies of mind and science. After taking a few flights to finish the book, I wanted to take a moment to provide a mini-review. The hope is to convince you to make time to read this short volume.
Read more of this post

Is Chaitin proving Darwin with metabiology?

Cover of Proving Darwin

Algorithmic information theory (AIT) allows us to study the inherent structure of objects, and qualify some as ‘random’ without reference to a generating distribution. The theory originated when Ray Solomonoff (1960), Andrey Kolmogorov (1965), and Gregory Chaitin (1966) looked at probability, statistics, and information through the algorithmic lens. Now the theory has become a central part of theoretical computer science, and a tool with which we can approach other disciplines. Chaitin uses it to formalize biology.

In 2009, he originated the new field of metabiology, a computation theoretic approach to evolution (Chaitin, 2009). Two months ago, Chaitin published his introduction and defense of the budding field: Proving Darwin: Making Biology Mathematical. His goal is to distill the essence of evolution, formalize it, and provide a mathematical proof that it ‘works’. I am very sympathetic to this goal.

Chaitin’s conviction that evolution can be formalized stems from his deeply Platonic view of the world. Since evolution is so beautiful and ubiquitous, there must be a pure perfect form of it. We have to look past the unnecessary details and extract the essence. For Chaitin this means ignoring everything except the genetic code. The physical form of the organisms is merely a vessel and tool for translating genes into fitness. Although this approach might seem frightening at first, it is not foreign to biologists; Chaitin is simply taking the gene-centered view of evolution.

As for the genetic code, Chaitin takes the second word very seriously; genes are simply software to be transformed into fitness by the hardware that is the physics of the organisms’ environment. The only things that inhabit this formal world are self-delimiting programs — a technical term from AIT meaning that no extension of a valid program is valid. This allows us to define a probability measure over programs written as finite binary strings, which will be necessary in the technical section.

Physics simply runs the programs. If the program halts then the natural number it outputs is the program’s fitness. In other words, we have a perfectly static environment. If you were interested in ecology or evolutionary game theory, then Chaitin just threw you out with the bath water. If you were interested in modeling, and wanted to have something computable define your fitness, then tough luck. Finally, in a fundamental biological theory, I would expect fitness to be something we measure when looking at the organisms, not a fundamental quantity inherent in the model. In biology, a creature simply reproduces or doesn’t, survives or doesn’t; fitness is something the observer defines when reasoning about the organisms. Why does Chaitin not derive fitness from more fundamental properties like reproduction and survival?

In Chaitin’s approach there is no reproduction; there is only one organism mutating through time. If you are interested in population biology or speciation, then this model has nothing to offer you. The mutations are not point-mutations, but what Chaitin calls algorithmic mutations. An algorithmic mutation combines the acts of mutating and selecting into one step: it is an n-bit program that takes the current organism A and outputs a new organism B of higher fitness (note that it needs an oracle call for the Halting problem to do so). The probability that A is replaced by B is then 2^{-n}. There is no way to decouple the selection step from the mutation step in an algorithmic mutation, although this is not clear without the technical details, which I will postpone until a future post. Chaitin’s model does not have random mutations; it has randomized directed mutations. Fitness as a basic assumption, a static environment, and directed mutations make this a teleological model — a biologist’s nightmare.
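To make the fused mutation-selection step concrete, here is a minimal Python caricature of the single-organism model. All the specifics are my own stand-ins: the halting oracle is not computable, so the ‘organism’ is reduced to a bare integer fitness, and the algorithmic measure over n-bit mutation programs is replaced by a geometric draw on n.

```python
import random

def metabiology_toy(steps=1000, seed=1):
    """Toy caricature of Chaitin's single-organism model.

    The real model draws n-bit programs with probability 2^{-n} and uses a
    halting oracle to keep only fitter mutants. Here (an illustrative
    stand-in) a mutation of 'program length' n proposes a fitness jump of
    up to 2**n, and is kept only if it improves fitness -- so mutation and
    selection are fused into a single step.
    """
    rng = random.Random(seed)
    fitness = 0
    history = [fitness]
    for _ in range(steps):
        # Draw the mutation's length n with probability ~2^{-n}
        # (a geometric distribution stands in for the algorithmic measure).
        n = 1
        while rng.random() < 0.5:
            n += 1
        candidate = fitness + rng.randrange(2 ** n)  # directed proposal
        if candidate > fitness:                      # selection fused in
            fitness = candidate
        history.append(fitness)
    return history

trajectory = metabiology_toy()
# Fitness can never decrease: there is no drift, only directed improvement.
assert all(b >= a for a, b in zip(trajectory, trajectory[1:]))
```

The monotone trajectory is the point: with selection baked into the mutation operator, the model can only climb, which is exactly why it reads as teleological.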

What does Chaitin achieve? His primary result is to show biological creativity, which in this model means a constant (and fast) increase in fitness. His secondary result is to delineate between three types of design: blind search, evolution, and intelligent design. He shows that to arrive at an organism that has the maximum fitness of any n-bit organism (this is the number BB(n) — the nth busy beaver number), blind search requires on the order of 2^n steps, evolution requires between n^2 and n^3, and intelligent design (which selects the best algorithmic mutation at each step) requires n steps. These are interesting questions, but what do they have to do with Darwin?
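Chaitin’s fitness function (the busy beaver) is uncomputable, but the qualitative gap between the search regimes can be illustrated on a toy problem of my own choosing: matching a target bitstring. Blind sampling scales like 2^n, while cumulative selection of single-bit improvements needs only on the order of n log n proposals — a sketch of the distinction, not Chaitin’s actual construction.

```python
import random

def blind_search(target, rng):
    """Sample uniform random bitstrings until the target is hit."""
    n = len(target)
    steps = 0
    while True:
        steps += 1
        guess = [rng.randint(0, 1) for _ in range(n)]
        if guess == target:
            return steps

def cumulative_selection(target, rng):
    """Keep single-bit flips that do not lower the match score."""
    n = len(target)
    current = [rng.randint(0, 1) for _ in range(n)]
    score = lambda s: sum(a == b for a, b in zip(s, target))
    steps = 0
    while current != target:
        steps += 1
        mutant = current[:]
        mutant[rng.randrange(n)] ^= 1   # flip one random bit
        if score(mutant) >= score(current):
            current = mutant
    return steps

rng = random.Random(0)
target = [1] * 12
# On average blind search needs ~2^12 samples here; cumulative
# selection needs far fewer, since partial progress is retained.
blind = blind_search(target, rng)
cumulative = cumulative_selection(target, rng)
```

The contrast is the familiar one from evolutionary arguments: retaining partial solutions turns an exponential search into a polynomial one.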

Does Chaitin prove Darwin?

We are finally at the central question of this post. To answer this, we need to understand what Darwin achieved. The best approach is to look at Mayr’s (1982) five facts and three inferences that define Darwin’s natural selection:

  • Fact 1: Population would increase exponentially if all agents got to reproduce.
    Metabiology: A single agent that doesn’t reproduce
  • Fact 2: Population is stable except for occasional fluctuations.
    Metabiology: There is always one agent, thus stable
  • Fact 3: Resources are limited and relatively constant.
    Metabiology: Resources are not defined.
  • Inference 1: There is a fierce competition for survival with only a small fraction of the progeny of each generation making it to the next.
    Metabiology: Every successful mutation makes it to the next generation.
  • Fact 4: No two agents are exactly the same.
    Metabiology: There is only one agent.
  • Fact 5: Much of this variation is heritable.
    Metabiology: Nothing is heritable; a new mutant has nothing to do with the previous agent except having a higher fitness.
  • Inference 2: Survival depends in part on the heredity of the agent.
    Metabiology: A mutant is created/survives only if more fit than the focal agent.
  • Inference 3: Over generations this produces continual gradual change
    Metabiology: Agent constantly improves in fitness

The only thing to add to the above list is the method for generating variation: random mutation. As we saw before, metabiology uses directed mutation. From the above, it mostly seems like Chaitin and Darwin were concerned about different things. Chaitin doesn’t prove Darwin.

However, I don’t think Chaitin’s exercise was fruitless. It is important to try to formalize the basic essence of evolution, and to prove theorems about it. But Chaitin needs to remember what made his development of algorithmic information theory so successful: AIT was able to address existing questions of interest in novel ways. So the lesson of this post is to concentrate on the questions biologists want to answer (or have answered already) when building a formal model. Make sure that your formal approach can at least express some of the questions a biologist would want to ask.

References

Chaitin, G. (1966). On the Length of Programs for Computing Finite Binary Sequences. J. Association for Computing Machinery 13(4): 547–569.

Chaitin, G. (2009). Evolution of mutating software. EATCS Bulletin 97: 157–164.

Kolmogorov, A. (1965). Three approaches to the definition of the quantity of information. Problems of Information Transmission 1: 3–11

Mayr, E. (1982). The Growth of Biological Thought. Harvard University Press. ISBN 0-674-36446-5

Solomonoff, R. (1960). A Preliminary Report on a General Theory of Inductive Inference. Technical Report ZTB-138, Zator Company, Cambridge, Mass.

Evolutionary dynamics by M.A. Nowak

Martin A. Nowak does some of the best work in evolutionary game theory. I enjoy many of his papers, and his general approach to questions in EGT. Thus, I was very happy when I first heard about his introductory book “Evolutionary Dynamics: Exploring the Equations of Life”. After reading the book, I prepared a reading list for the members of the LNSC. I recommend Nowak’s book as a gentle introduction to the field and include my reading list as a review.

Reading list

Here is what I would recommend to read from the book, along with a brief summary:

Cover of Evolutionary Dynamics by Martin A. Nowak

Chapter 2 (2.0 – 2.3) “What evolution is”

  • Covers the basics of evolution: reproduction (2.1), selection (2.2), and mutation (2.3)
  • Introduces the basic equations and phase space. Explains what the simplex is.
  • Section 2.4 is fun, but does not do justice to sexual evolution (and is never referenced again), so it can be omitted.

Chapter 4 (4.0 – 4.8) “Evolutionary games”

  • This chapter is essential: it covers the basics of EGT.
  • It is important to stress section 4.5 where the replicator equation is introduced.
  • Section 4.9 was fun for me. When I was first starting in EGT, I was unaware of many of the classical results and reproved the equivalence of replicator dynamics and Lotka–Volterra, so seeing it was nostalgic. However, in general this section can be omitted.
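Since section 4.5 is the heart of the chapter, here is a minimal numerical sketch of the replicator equation it introduces, applied to the classic Hawk–Dove game (the payoffs are my choice for illustration, not taken from the book): with benefit V = 2 and cost C = 3, the dynamics should settle at a hawk frequency of V/C = 2/3.

```python
import numpy as np

# Hawk-Dove payoffs with V = 2, C = 3; the interior fixed point
# sits at a hawk frequency of V/C = 2/3.
V, C = 2.0, 3.0
A = np.array([[(V - C) / 2, V],
              [0.0,         V / 2]])

def replicator_step(x, A, dt=0.01):
    """One Euler step of dx_i/dt = x_i * ((A x)_i - x^T A x)."""
    f = A @ x          # fitness of each strategy
    phi = x @ f        # mean population fitness
    return x + dt * x * (f - phi)

x = np.array([0.1, 0.9])        # start with 10% hawks
for _ in range(5000):
    x = replicator_step(x, A)
# x has now converged close to (2/3, 1/3).
```

Note that the Euler step preserves the simplex exactly (the update’s components sum to zero), so no renormalisation is needed.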

Chapter 9 (all) “Spatial games”

  • This is the approach that really founded the idea of doing games on a lattice, and is important for everyone to read.

Chapter 10 (Optional) “HIV infection”

  • Although not exactly game-theory related, it does show the power of differential equations in biology.
  • Given a bit of thought, it is possible to relate the ideas to game theory. In particular, the asymmetric effect between strains of HIV and their suppressors can be thought of as a game. However, Nowak does not mention that in the book.
  • Mostly, it is a part of Nowak’s research that really hit the spot for theory building in biology.

Chapter 11 (Optional) “Evolution of virulence”

  • Much like chapter 10, much of this chapter can be interpreted as games, but Nowak does not make the connection explicit.

Chapter 13 (13.3 – 13.6) “Language evolution”

  • The opening sections (13.0 – 13.2) are rather weak. It is better if students have their own background in basic linguistics. I don’t think Nowak gives a very careful or rigorous introduction; he drops results on the reader without really making us appreciate them. For instance, he throws down Gödel’s theorem for no real reason and without doing it any justice.
  • The later sections are useful. In particular, they show a really fun application of EGT to language coherence, etc.
  • Some of the conclusions are not particularly breathtaking, but it is interesting to see them reached from evolutionary arguments. It gives you much more insight into both EGT and language.
  • Nowak introduces the replicator-mutator equation here. This is a very general and important framework in EGT.
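The replicator-mutator equation adds a mutation matrix Q (with Q_{ji} the chance that a reproducing j-player produces an i-player) on top of the replicator dynamics. A minimal sketch, with a payoff matrix and mutation rate of my own choosing: even a strictly dominated strategy persists at a mutation-selection balance instead of going extinct.

```python
import numpy as np

def replicator_mutator_step(x, A, Q, dt=0.01):
    """One Euler step of dx_i/dt = sum_j x_j f_j Q_{ji} - phi * x_i."""
    f = A @ x                    # strategy fitnesses
    phi = x @ f                  # mean fitness
    return x + dt * (Q.T @ (x * f) - phi * x)

# Strategy 0 strictly dominates strategy 1 (constant fitnesses 2 vs 1).
A = np.array([[2.0, 2.0],
              [1.0, 1.0]])
mu = 0.1                         # mutation rate (my choice)
Q = np.array([[1 - mu, mu],
              [mu, 1 - mu]])

x = np.array([0.5, 0.5])
for _ in range(5000):
    x = replicator_mutator_step(x, A, Q)
# The dominated strategy survives at a nonzero frequency: mutation
# constantly reintroduces it faster than selection removes it.
```

Setting mu = 0 recovers the plain replicator equation, where strategy 1 would be driven to extinction.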

The ‘further reading’ section is one of the best parts of this book. There are a lot of great sources mentioned there.

Sections to omit

As for the omitted chapters and sections:

Chapter 3 is extremely tangential. The quasi-species equation is briefly mentioned again in chapter 13 (when the replicator-mutator equation is introduced) but chapter 13 can easily be understood and enjoyed without chapter 3. Quasi-species work is extremely interesting and promising, but Nowak’s exposition of it is lacking. Chapter 3 is not really worth reading, and might cause more confusion than good.

I omitted chapter 5 even though it is a whole chapter about the prisoner’s dilemma, because only direct reciprocity (iterated PD) is studied there. Direct reciprocity is pretty well understood as a mechanism for creating cooperation, hence there is not too much excitement here. The chapter does discuss some of the difficulties of iterated PD, but this is not useful if you are interested in one-shot games.

For people that are more biologically inclined, it might be good to read chapters 6, 7, and 8. These three chapters deal with finite populations in the way that biologists work with them (i.e. via Moran processes). They are very important for understanding the biological parts of EGT, especially neutral drift and fixation probabilities. In a social context, though, there will be no fixation or neutral drift, since a non-negligible mutation rate should always be included (and there can be no fixation with a mutation rate).

The reason biologists can ignore mutation rates between strategies and social scientists can’t is, I think, quite simple. The chance of one strategy mutating into another (via genetic changes) is close to nil, and hence can be ignored on any reasonable time scale (most mutations will be neutral or will kill you; a random mutation changing something as substantial as a strategy is nearly impossible). However, in social evolution, a person can always change their strategy, or at least can do so much more easily than in biological evolution. Hence, the Moran processes for social evolution will have no absorbing states, and thus those three chapters on finite populations (which are of interest mainly because of their absorbing states) become irrelevant. Still, for people that read a lot of biology papers it is important to go through those chapters.
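To make the role of absorbing states concrete, here is a short simulation sketch (my own construction, not from the book) of the birth-death Moran process: a single mutant with relative fitness r in a population of size N either fixes or goes extinct, and the simulated fixation frequency can be checked against the standard formula ρ = (1 − 1/r)/(1 − 1/r^N), which reduces to 1/N for a neutral mutant.

```python
import random

def moran_fixation(N, r, trials, seed=0):
    """Estimate the fixation probability of a single mutant of relative
    fitness r in a birth-death Moran process on N individuals."""
    rng = random.Random(seed)
    fixed = 0
    for _ in range(trials):
        i = 1                                 # current number of mutants
        while 0 < i < N:                      # i = 0 and i = N absorb
            # Birth chosen proportional to fitness; death chosen uniformly.
            birth_is_mutant = rng.random() < (r * i) / (r * i + (N - i))
            death_is_mutant = rng.random() < i / N
            if birth_is_mutant and not death_is_mutant:
                i += 1
            elif death_is_mutant and not birth_is_mutant:
                i -= 1
        fixed += (i == N)
    return fixed / trials

# A neutral mutant (r = 1) should fix with probability about 1/N.
estimate = moran_fixation(N=10, r=1.0, trials=2000)
```

The `while 0 < i < N` loop is exactly the absorbing-state structure discussed above: once a non-negligible mutation rate is added, that loop never terminates, and the fixation question dissolves.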

I found the chapter on evolutionary dynamics of cancer (chapter 12) lacking. It has no really interesting results and a lot of cancer-specific setup. I don’t think Nowak has found a good way to apply evolutionary dynamics to cancer yet. Personally, I prefer the approach of Axelrod and coauthors. Most of this chapter is heavy math (which Nowak usually replaces by heuristics) to conclude pretty obvious results.

Conclusion and open problems

As I stated earlier, I think the book is a great introduction. However, I don’t think it equips you with the skills to start doing research. I am not sure how easy the book is to use as a textbook, but Nowak uses it in both his undergraduate and graduate course on mathematical biology.

Do you think this is a good introduction to the field? Is there another text you prefer? Would you like to take or teach a course from this book? Would you like to see a specific chapter reviewed in more depth?