Software monocultures, imperialism, and weapons of math destruction

This past Friday, Facebook reported that they suffered a security breach that affected at least 50 million users. ‘Security breach’ is a bit of newspeak meant to hint at active malice and attribute fault outside the company. But as far as I understand it — and I am no expert on this — it was just a series of three bugs in Facebook’s “View As” feature that together allowed people to get the access tokens of whoever they searched for. This is, of course, bad for your Facebook account. The part of this story that really fascinated me, however, is how it affected other sites: that access token would let somebody access not only your Facebook account but also any other website where you use Facebook’s Single Sign On feature.

This means that a bug that some engineers missed at Facebook compromised the security of users on completely unrelated sites like, say, StackExchange (SE) or Disqus — or any site that you can log into using your Facebook account.
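To make the knock-on effect concrete, here is a minimal sketch (in Python, with invented names and tokens, not Facebook’s actual API) of how single sign-on looks from the relying site’s side: the site never checks a password of its own, it just trusts whatever identity the provider’s token vouches for, so a token obtained through a bug is indistinguishable from a legitimate one.

```python
# Minimal sketch of single sign-on from the relying site's perspective.
# All names and tokens here are invented for illustration.

# Stand-in for the identity provider's (e.g. Facebook's) issued tokens.
PROVIDER_TOKENS = {
    "token-abc123": {"user_id": "alice", "scopes": ["email", "public_profile"]},
}

def provider_validate(access_token):
    """Stand-in for the provider's token-introspection endpoint."""
    return PROVIDER_TOKENS.get(access_token)

def relying_site_login(access_token):
    """A third-party site (say, a forum) logging someone in via SSO.

    The site cannot tell a legitimately issued token from one obtained
    through a bug on the provider's side; both validate identically.
    """
    identity = provider_validate(access_token)
    if identity is None:
        return None
    # The site now grants a local session with whatever privileges this
    # user already holds here (moderator rights, private data, ...).
    return {"site_user": identity["user_id"],
            "session": "local-session-for-" + identity["user_id"]}

# Anyone holding the leaked token gets exactly the session Alice would.
print(relying_site_login("token-abc123"))
print(relying_site_login("token-forged"))  # unknown token: login denied (None)
```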

A case of software monoculture — a nice metaphor I was introduced to by Jonathan Zittrain.

This could easily have knock-on effects for security. For example, I am one of the moderators for the Theoretical Computer Science SE and also the Psychology and Neuroscience SE. Due to this, I have the potential to access certain non-public information of SE users like their IP addresses and hidden contact details. I can also send communications that look much more official, alongside expected abilities like bans and suspensions. Obviously, part of my responsibility as a moderator is to only use these abilities for proper reasons. But if I had used Facebook — disclosure: I don’t use Facebook — for my SE login, then a potential hacker could have gained access to these abilities and attempted phishing or other attacks even on SE users who don’t use Facebook.

In other words, the people in charge of security at SE have to worry not only about their own code but also about Facebook (and Google, Yahoo!, and other OpenID providers).

Of course, Facebook is not necessarily the worst case of software monoculture or knock-on effects that security experts have to worry about. Exploits in operating systems, browsers, servers, and standard software packages (especially security ones) can be even more devastating to the software ecology.

And exploits of aspects of social media other than login can have effects that are more subtle than security breaches.

The underlying issue is a lack of diversity in tools and platforms. A case of having all our eggs in one basket. Of minimizing individual risk — by using the best available or most convenient system — at the cost of increasing systemic risk — because everyone else uses the same system.

We see the same issues in human projects outside of software. Compare this to the explanations of the 2008 financial crisis that focused on individual vs systemic risk.

But my favourite example is the banana.

In this post, I’ll sketch the analogy between software monoculture and agricultural monoculture. In particular, I want to focus on a common element between the two domains: the scale of imperial corporations. It is this scale that turns mathematical models into weapons of math destruction. Finally, I’ll close with some questions about whether this analogy can be turned into tool transfer: can ecology and evolution help us understand and manage software monoculture?

Today, if you go to your local store and pick up some bananas, they will most likely be Cavendish bananas. These bananas were first mass-produced in 1903, but remained extremely unpopular until the 1960s. During this time, the dominant banana was the Gros Michel. The Cavendish was overall an inferior fruit: it bruised more easily, was harder to ship, and was much less creamy and delicious than the Gros Michel. Thus, the various realms of the European colonial empires primarily grew the Gros Michel. In fact, given the imperial urge to standardize, they grew almost identical strains of Gros Michel. As you might notice when you eat a banana, there are no seeds in this fruit; its flower also produces no viable pollen. Commercial bananas are reproduced asexually, making whole plantations into monocultures of genetically (nearly) identical bananas. This was good for consumers and easy for imperial corporations, but also good for the fungus Fusarium oxysporum.

When this fungus colonizes the roots of a banana plant, it causes Panama disease. This disease kills the plant and thus stops the production of bananas. It is resistant to fungicides. And it usually appears asymptomatically in the shoots of banana plants. Since these shoots are used by farmers to reproduce the banana, they end up planting diseased plants. Finally, the fungus can persist asymptomatically in certain common weeds and in the soil itself, so even if the whole field is replanted from a healthy individual, the plants will likely still get sick.

By the 1960s, Panama disease had wiped out almost all production of the Gros Michel. It was one of the most destructive plant diseases of modern times. This systemic failure led us to the inferior Cavendish banana, which seemed to be unaffected by Fusarium oxysporum. Unfortunately, the Cavendish is now starting to succumb to Panama disease as well. Let’s hope that the agriculture industry has learned something in the last 70 years.

Hopefully the analogy I’m making with software is clear. The banana is our favourite piece of software and the fungus is an unexpected bug or exploit. But the most important part is not an analogy at all; it is literally the same: imperial corporations.

The monoculture of bananas was not an inherent, biological property of the banana itself. The rampant monoculture was an effect of the socio-biological system of imperial agriculture. The biological details of the banana plant’s asexual reproduction and the aggressive, treatment-resistant fungus were important. But they are only half the story. The other half was the social and economic systems that reproduced the Gros Michel banana plant and spread it all over the world.

We have a similar thing to worry about with software, especially the social algorithms that underlie weapons of math destruction.

For me, a social algorithm is a mathematical model, usually embedded in computer code, that often takes the form of a person-scoring system. The scores are used to determine a person’s access to, or denial of, certain resources. Usually, these algorithms are not (completely) hard-coded but trained on population-level (and individual-level) data. Examples range from recommendation engines and social network feeds to credit scores and predictive policing.
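As a toy illustration of what I mean, and nothing more: the sketch below hard-codes a linear person-scoring rule with made-up features, weights, and threshold. In a real system those weights would be learned from population-level data, which is exactly where historical bias can enter.

```python
# Toy person-scoring 'social algorithm'. Features, weights, and threshold
# are invented for illustration; a real system would learn the weights
# from population-level data (e.g. via logistic regression).

def score(person, weights, bias):
    """Linear person-scoring model: higher means more 'deserving' of the resource."""
    return sum(weights[k] * person.get(k, 0.0) for k in weights) + bias

def decide(person, weights, bias, threshold=0.5):
    """Grant or deny a resource (a loan, a feed slot, bail, ...) based on the score."""
    return "grant" if score(person, weights, bias) >= threshold else "deny"

weights = {"income": 0.6, "years_at_address": 0.3, "prior_defaults": -0.9}
bias = 0.1

applicant = {"income": 0.4, "years_at_address": 0.8, "prior_defaults": 1.0}
print(score(applicant, weights, bias))   # roughly -0.32, well below the 0.5 threshold
print(decide(applicant, weights, bias))  # -> 'deny'
```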

There isn’t anything inherently wrong with a social algorithm (at least not with all social algorithms). Just like there isn’t anything inherently wrong with an asexually reproducing banana plant or a banana-killing fungus. The issue comes when this technology is combined with an imperial social organization. In that case, a social algorithm can become a weapon of math destruction. For Cathy O’Neil, the three main characteristics of a weapon of math destruction are: (1) scale, (2) opacity, and (3) vicious self-reinforcement. And here, I want to focus on scale.

One of the main mantras of Silicon Valley seems to be scale. The goal isn’t just to build a new technology, but to build a technology that will scale to the whole world. Build a single system that will reach everyone. It is this imperial ambition that pushes us towards software monoculture.

Once a monoculture is established, even a small bug or bias in the system can have huge repercussions. The reason that exploits on Facebook, YouTube, or Twitter matter isn’t because they are particularly powerful — but because they reach so many. Combine this with opacity: I usually don’t know how Facebook or Twitter order my timeline, or how YouTube recommends my next video. Finally, throw in some vicious self-reinforcement: I stay longer on the social networking sites and engage with a more homogeneous type of content.

Suddenly, even a small bias can start to have big effects.
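Here is a deliberately crude toy model (my own invention, not any platform’s actual ranking algorithm) of that combination: a small structural bias in what gets promoted, fed back through engagement, steadily shifts the composition of a feed away from its balanced starting point.

```python
# Toy feedback-loop model: not any platform's real ranking, just an
# illustration of a small bias compounding under self-reinforcement.

def run_feedback_loop(bias=0.01, rounds=50):
    """Track the fraction of a feed made up of type-A content over time."""
    share_a = 0.5  # start with a perfectly balanced feed
    for _ in range(rounds):
        # Ranking step: type-A content gets a small structural boost.
        promoted_a = min(1.0, share_a + bias)
        # Engagement step: users engage with what they are shown, and that
        # engagement feeds back into the next round's ranking.
        share_a = 0.9 * share_a + 0.1 * promoted_a + bias * share_a * (1 - share_a)
    return share_a

print(run_feedback_loop(bias=0.0))   # no bias: the feed stays at (essentially) 0.5
print(run_feedback_loop(bias=0.01))  # a 1% bias: the type-A share drifts noticeably above 0.5
```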

To make things even more difficult: for the most important systems and processes, the repercussions can be difficult to spot. In the case of the Gros Michel, the effect was obvious: crop yield dropped until there were no more commercially viable Gros Michel bananas. The farmer could directly see how the technological bug — a fungus attacking an asexually reproduced plant — was destroying their system. In the case of the Facebook security bug, the repercussions are relatively obvious: potential unauthorized access to 50 million accounts. The link to technology takes a little digging: finding the three bugs. But how do you approach measuring the repercussions of misinformation on social media? Or echo chambers?

Solutions aren’t always obvious either. Simply having more companies, for example, doesn’t help if everyone follows the same methods. That was the point about systemic failure in banking networks. In social media, you can see this with Facebook, Twitter, and YouTube all providing equally vicious reinforcement schedules for their users.

I don’t know how to approach many of these questions, but I wonder if there are useful insights that I could offer.

If we stick to the metaphor of ‘software monoculture’ then it suggests that evolution and ecology might offer a useful lens. Do you, dear reader, think that there is more than a metaphorical connection? Can evolution and ecology help us understand software monoculture? I’ve encountered the work of Stephanie Forrest on the evolution of software, and Russell Dinnage just pointed me to a study of biodiversity in the Linux ecosystem. I’m also familiar with the import of ecological models into finance for studying the ecology of banks. But what else should I be reading?

About Artem Kaznatcheev
From the Department of Computer Science at Oxford University and Department of Translational Hematology & Oncology Research at Cleveland Clinic, I marvel at the world through algorithmic lenses. My mind is drawn to evolutionary dynamics, theoretical computer science, mathematical oncology, computational learning theory, and philosophy of science. Previously I was at the Department of Integrated Mathematical Oncology at Moffitt Cancer Center, and the School of Computer Science and Department of Psychology at McGill University. In a past life, I worried about quantum queries at the Institute for Quantum Computing and Department of Combinatorics & Optimization at University of Waterloo and as a visitor to the Centre for Quantum Technologies at National University of Singapore. Meander with me on Google+ and Twitter.

7 Responses to Software monocultures, imperialism, and weapons of math destruction

  1. Hi Artem, really keen on seeing more posts of yours on the idea of software monocultures. Now, the analogy I see here is less to do with agriculture and monoculture and more with connectivity. The fact that you can use the same credentials to join different systems allows for more connectivity. If all these systems were so similar to each other that you could use the same tools to break them from the inside, I think that this would be closer to the damage one expects from monoculture.
    To put it a different way: if I have a pest in my garden that is ravaging all of the flowers inside it, and it then manages to get access to your garden, you should be upset regardless of the outcome, but you should be more upset if you also happen to have the same types of plants that I do.

    • Thanks David. I’m not sure if I have any follow-up thoughts on this yet. But I am sure some will develop.

      I meant the Facebook security breach mostly as an example and motivation. The more interesting cases for me are social algorithms and the effects of systemic risk vs individual risk on society (rather than just narrowly on privacy or security). If our tools and how we’re assessed shape our world and our work, then what happens when a single kind of tool or assessment becomes a monoculture? What if that tool has a bug, or the assessment is game-able in a counter-productive way; or worse, what if it was built with nefarious goals in mind?

      I think that the concepts of connectivity and monoculture are interrelated. Or maybe one subsumes the other? In particular, a monoculture requires connectivity. If two genetically identical populations are not connected in any way (i.e. pathogens can’t flow between them, they don’t share resources, etc.) then it doesn’t matter that they are genetically identical, since they are two separate populations with no meta structure. I would not call these two separate populations a monoculture. I’d only consider them a monoculture once they are connected and there is a risk of systemic failure.

      Of course, as your garden example points out, we could have high connectivity without monoculture. For example, maybe my garden is downstream/downwind of yours, so every ‘pest’ that affects your garden will then affect mine. But our two gardens are so drastically different that they always respond in exactly opposite ways to the ‘pest’. If pest A kills your flowers then pest A makes mine twice as beautiful. If pest B kills my flowers then it makes yours twice as beautiful. I wouldn’t consider this a monoculture.

      Finally, I guess, it is important to note that there can be a monoculture with respect to some traits and not others. For example, let’s look at a cartoon version of politics on Twitter. Given any piece of news, the right and left might have opposite emotional reactions to it (in the same way our flowers did), so it isn’t an emotional monoculture. But then those two sides can feed off each other, increasing polarization or radicalization within each group, so there is a ‘silo-fication’ monoculture. [CGP Grey had a nice video about this, and I have half-a-post written about that; maybe I should revisit it and see if it connects to monoculture.]

      Although maybe ‘silo-fication’ is a population-level property that doesn’t make sense — since the whole point of silos is to break connections. I guess I’ve awkwardly stumbled around a circle. But maybe it was a strange staircase (hopefully not Escherian) if we view the left and right gardens as agents (now this seems to be awkwardly connecting to my token-type thoughts and to extensions of your ideas on species in cancer… too many circles).

  2. Pingback: Methods and morals for mathematical modeling | Theory, Evolution, and Games Group

  3. Pingback: Software monocultures, imperialism, and weapons of math destruction – you without end

  4. Pingback: Cataloging a year of social blogging | Theory, Evolution, and Games Group

  5. Pingback: Twitter vs blogs and science advertising vs discussion | Theory, Evolution, and Games Group

  6. Pingback: The science and engineering of biological computation: from process to software to DNA-based neural networks | Theory, Evolution, and Games Group
