Software monocultures, imperialism, and weapons of math destruction

This past Friday, Facebook reported that they suffered a security breach that affected at least 50 million users. ‘Security breach’ is a bit of newspeak that is meant to hint at active malice and attribute fault outside the company. But as far as I understand it — and I am no expert on this — it was just a series of three bugs in Facebook’s “View As” feature that together allowed people to get the access tokens of whoever they searched for. This is, of course, bad for your Facebook account. The part of this story that really fascinated me, however, is how this affected other sites. Because that access token would let somebody access not only your Facebook account but also any other website where you use Facebook’s Single Sign On feature.

This means that a bug that some engineers missed at Facebook compromised the security of users on completely unrelated sites like, say, StackExchange (SE) or Disqus — or any site that you can log into using your Facebook account.
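The knock-on structure of Single Sign On can be made concrete with a minimal sketch. All names here are hypothetical and the real OAuth flow involves signed tokens, scopes, and expiry, but the point survives simplification: the relying site delegates authentication entirely to the provider, so a token leaked by a provider bug compromises the user everywhere.

```python
# Minimal sketch of why a Single Sign On breach propagates (hypothetical
# names; real flows use signed, scoped, expiring tokens).

# The identity provider (e.g. Facebook) maps access tokens to users.
provider_tokens = {"token-abc": "alice"}

def provider_verify(token):
    """The provider's check: is this token valid, and for whom?"""
    return provider_tokens.get(token)

def relying_site_login(token):
    """A relying site (e.g. StackExchange) delegates authentication to the
    provider -- it never holds a password of its own for this user."""
    user = provider_verify(token)
    if user is None:
        raise PermissionError("invalid token")
    return f"logged in as {user}"

# A bug at the provider that leaks "token-abc" compromises Alice's account
# on every relying site, with no bug in the relying site's own code.
print(relying_site_login("token-abc"))  # logged in as alice
```

Note that `relying_site_login` contains no flaw of its own; its security is only as strong as `provider_verify`.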

A case of software monoculture — a nice metaphor I was introduced to by Jonathan Zittrain.

This could easily have knock-on effects for security. For example, I am one of the moderators for the Theoretical Computer Science SE and also the Psychology and Neuroscience SE. Due to this, I have the potential to access certain non-public information of SE users like their IP addresses and hidden contact details. I can also send communications that look much more official, alongside expected abilities like bans, suspensions, etc. Obviously, part of my responsibility as a moderator is to only use these abilities for proper reasons. But if I had used Facebook — disclosure: I don’t use Facebook — for my SE login then a potential hacker could get access to these abilities and then attempt phishing or other attacks even on SE users who don’t use Facebook.

In other words, the people in charge of security at SE have to worry not only about their own code but also about Facebook (and Google, Yahoo!, and other OpenID providers).

Of course, Facebook is not necessarily the worst case of software monoculture or knock-on effects that security experts have to worry about. Exploits in operating systems, browsers, servers, and standard software packages (especially security ones) can be even more devastating to the software ecology.

And exploits of aspects of social media other than login can have more subtle effects than security breaches.

The underlying issue is a lack of diversity in tools and platforms. A case of having all our eggs in one basket. Of minimizing individual risk — by using the best available or most convenient system — at the cost of increasing systemic risk — because everyone else uses the same system.

We see the same issues in human projects outside of software. Compare this to the explanations of the 2008 financial crisis that focused on individual vs systemic risk.

But my favourite example is the banana.

In this post, I’ll sketch the analogy between software monoculture and agricultural monoculture. In particular, I want to focus on a common element between the two domains: the scale of imperial corporations. It is this scale that turns mathematical models into weapons of math destruction. Finally, I’ll close with some questions on whether this analogy can be turned into tool transfer: can ecology and evolution help us understand and manage software monoculture?

Read more of this post


Overcoming folk-physics: the case of projectile motion for Aristotle, John Philoponus, Ibn-Sina & Galileo

A few years ago, I wrote about the importance of pairing tools and problems in science. Not selecting the best tool for the job, but adjusting both your problem and your method to form the best pair. There, I made the distinction between endogenous and exogenous questions. A question is endogenous to a field if it is motivated by the existing tools developed for the field or slight extensions of them. A question is exogenous if it is motivated by frameworks or concerns external to the field. Usually, such an external motivating framework is accepted uncritically, with the most common culprits being the unarticulated ‘intuitive’ and ‘natural’ folk theories forced on us by our everyday experiences.

Sometimes a great amount of scientific or technological progress can be had from overcoming our reliance on a folk-theory. A classic example of this would be the development of inertia and momentum in physics. In this post, I want to sketch a genealogy of this transition to make the notion of endogenous vs exogenous questions a bit more precise.

How was the folk-physics of projectile motion abandoned?

In the process, I’ll get to touch briefly on two more recent threads on TheEGG: The elimination of the ontological division between artificial and natural motion (that was essential groundwork for Darwin’s later elimination of the division between artificial and natural processes) and the extraction and formalization of the tacit knowledge underlying a craft.
Read more of this post

Techne and Programming as Analytic Philosophy

This week, as I was assembling furniture — my closest approach to a traditional craft — I was listening to Peter Adamson interviewing his twin brother Glenn Adamson about craft and material intelligence. Given that this interview was on the History of Philosophy (without any gaps) podcast, at some point, the brothers steered the conversation to Plato. In particular, to Plato’s high regard for craft or — in its Greek form — techne.

For Peter, Plato “treats techne, or craft, as a paradigm for knowledge. And a lot of the time in the Socratic dialogues, you get the impression that what Socrates is suggesting is that we need to find a craft or techne for virtue or ethics — like living in the world — that is more or less like the techne that say the carpenter has.” Through this, the Adamson twins proposed a view of craft and philosophy as two sides of the same coin.

Except, unlike the carpenter and her apprentice, Plato has Socrates trying to force his interlocutors to formulate their knowledge in propositional terms and not just live it. It is on this point that I differ from Peter Adamson.

The good person practices the craft of ethics: of shaping their own life and particular circumstances into the good life. Their wood is their own existence and their chair is the good life. The philosopher, however, aims to make the implicit (or semi-implicit) knowledge of the good person into explicit terms. To uncover and specify the underlying rules and regularities. And the modern philosopher applies these same principles to other domains, not just ethics. Thus, if I had to give an incomplete definition for this post: philosophy is the art of turning implicit knowledge into propositional form. Analytic philosophy aims for that propositional form to be formal.

But this is also what programmers do.

In this post, I want to convince you that it is fruitful to think of programming as analytic philosophy. In the process, we’ll have to discuss craft and the history of its decline. Of why people (wrongly) think that a professor is ‘better’ than a carpenter.
Read more of this post

Separating theory from nonsense via communication norms, not Truth

Earlier this week on Twitter, Brian Skinner wrote an interesting thread on how to distinguish good theory from crackpottery. He started with a trait that both theorists and crackpots share: we have an “irrational self-confidence” — a belief that just by thinking we “can arrive at previously-unrealized truths about the world”. From this starting point, the two diverge in their use of evidence. A crackpot relies primarily on positive evidence: he thinks hard about a problem, arrives at a theory that feels right, and then publicizes the result.

A theorist, on the other prong, incorporates negative evidence: she ponders hard about a problem, arrives at a theory that feels right and then proceeds to try to disprove that theory. She reads the existing literature and looks at the competing theories, takes time to understand them and compare them against her own. If any disagree with hers then she figures out why those theories are wrong. She pushes her theory to the extremes, looks at its limiting cases and checks them for agreement with existing knowledge. Only after her theory comes out unscathed from all these challenges does she publicize it.

For Skinner, this second prong is the definition of scholarship. In practice, coming up with a correct theory is mostly a painful process of discarding many of your own wrong attempts. A good theorist is thorough, methodical, and skeptical of their own ideas.

The terminology of crackpottery vs scholarship is probably overly harsh, as Skinner acknowledges. And in practice, somebody might be a good theorist in one domain but a crackpot elsewhere. As Malkym Lesdrae points out, there are many accomplished academics who are also crackpot theorists: “Most often it’s about things outside their field of specialty”. Thus, this ideal self-skepticism might be domain specific.

It is also a destructive ideal.

In other words, I disagreed with Skinner on the best way to separate good theory from nonsense. Mostly on the framing. Skinner crystalized our disagreement in a tweet: whereas he views self-skepticism as an obligation to the Truth, I view a similar sort of self-reflective behavior as a social obligation. I am committed to this latter view because I want to make sense of things like heuristic models, where truth is secondary to other modelling concerns. Where truth is not the most useful yardstick for checking the usefulness of a model. Where you hear Box’s slogan: “all models are wrong, but some are useful.”

Given the brief summary of Skinner’s view above — and please, Brian, correct me in the comments if I misrepresented your position — I want to use the rest of this post to sketch what I mean by self-reflective behavior as a social obligation.
Read more of this post

On the Falsehood of Philosophy: a skeptic’s pastiche of Schopenhauer

Unless falsehood is the direct and immediate object of philosophy, our efforts must entirely fail of their aim.[1] It is absurd to look upon the enormous amount of wrong that abounds everywhere in philosophy, and originates in the words and writings of the greatest thinkers themselves, as serving no purpose at all and the result of mere error. Each separate mistake, as it topples an intricate system of thought, seems, no doubt, to be something exceptional; but mistake in general is the rule.

I know of no greater absurdity than that propounded by the jury of Whig historians in declaring failure to be negative in its character. Failure is just what is positive; it feeds its own generating process. Plato is particularly concerned to defend failure as negative. To idealize a world of Forms and eternal Truths. Absurdly, he seeks to strengthen his position by dialogue with a man who knew but one thing, he knew nothing. For Socrates recognized that it is success which is negative; in other words, truth and fact imply some discussion silenced, some process of inquiry brought to an end. If we have truth then there is no need for gadflies.

When the gadfly bites: the best consolation for mistake or wrong of any kind will be the thought of past great minds who erred still more than yourself. This is a form of consolation open for all time. But what an awful fate this means for philosophy as a whole!

Read more of this post

Hobbes on knowledge & computer simulations of evolution

Earlier this week, I was at the Second Joint Congress on Evolutionary Biology (Evol2018). It was overwhelming, but very educational.

Many of the talks were about very specific evolutionary mechanisms in very specific model organisms. This diversity of questions and approaches to answers reminded me of the importance of bouquets of heuristic models in biology. But what made this particularly overwhelming for me as a non-biologist was the lack of a unifying formal framework to make sense of what was happening. Without the encyclopedic knowledge of a good naturalist, I had a very difficult time linking topics to each other. I was experiencing the pluralistic nature of biology. This was stressed by Laura Nuño De La Rosa’s slide that contrasts the pluralism of biology with the theory reduction of physics.

That’s right, to highlight the pluralism, there were great talks from philosophers of biology alongside all the experimental and theoretical biology at Evol2018.

As I’ve discussed before, I think that theoretical computer science can provide the unifying formal framework that biology needs. In particular, the cstheory approach to reductions is the more robust (compared to physics) notion of ‘theory reduction’ that a pluralistic discipline like evolutionary biology could benefit from. However, I still don’t have any idea of how such a formal framework would look in practice. Hence, throughout Evol2018 I needed refuge from the overwhelming overstimulation of organisms and mechanisms that were foreign to me.

One of the places I sought refuge was in talks on computational studies. There, I heard speakers emphasize several times that they weren’t “just simulating evolution” but that their programs were evolution (or evolving) in a computer. Not only were they looking at evolution in a computer, but this model organism gave them an advantage over other systems because of its transparency: they could track every lineage, every offspring, every mutation, and every random event. Plus, computation is cheaper and easier than culturing E. coli, brewing yeast, or raising fruit flies. And just like those model organisms, computational models could test evolutionary hypotheses and generate new ones.
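The transparency claim can be illustrated with a minimal sketch of evolution in a computer. Everything here is invented for illustration (the fitness function, population size, and mutation rate are arbitrary, not from any real study), but it shows the key advantage: every birth, every mutation, and every parent-child link is recorded, which no wet-lab system offers so cheaply.

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

# A minimal sketch of "evolution in a computer": a fixed-size asexual
# population of bitstring genomes with full bookkeeping. All parameters
# are illustrative.

def fitness(genome):
    return 1 + sum(genome)  # more 1-bits, higher fitness (arbitrary choice)

def evolve(pop_size=20, genome_len=8, generations=50, mu=0.05):
    pop = [[0] * genome_len for _ in range(pop_size)]
    lineage = []  # (generation, parent index, child index, mutated site or None)
    for gen in range(generations):
        weights = [fitness(g) for g in pop]
        new_pop = []
        for child in range(pop_size):
            # fitness-proportional selection of a parent
            parent = random.choices(range(pop_size), weights=weights)[0]
            genome = pop[parent][:]
            site = None
            if random.random() < mu:
                site = random.randrange(genome_len)
                genome[site] ^= 1  # flip one bit
            lineage.append((gen, parent, child, site))  # record everything
            new_pop.append(genome)
        pop = new_pop
    return pop, lineage

pop, lineage = evolve()
# Unlike E. coli or fruit flies, the full history is transparent:
print(len(lineage), "birth events recorded")
```

Every random event here could likewise be logged, so an experimenter can replay or interrogate the entire evolutionary history after the fact.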

This defensive emphasis surprised me. It suggested that these researchers have often been questioned on the usefulness of their simulations for the study of evolution.

In this post, I want to reflect on some reasons for such questioning.

Read more of this post

Labyrinth: Fitness landscapes as mazes, not mountains

Tonight, I am passing through Toulouse on my way to Montpellier for the 2nd Joint Congress on Evolutionary Biology. If you are also attending then find me on 21 August at poster P-0861 on level 2 to learn about computational complexity as an ultimate constraint on evolution.

During the flight over, I was thinking about fitness landscapes. Unsurprising — I know. A particular point that I try to make about fitness landscapes in my work is that we should imagine them as mazes, not as mountain ranges. Recently, Raoul Wadham reminded me that I haven’t written about the maze metaphor on the blog. So now is a good time to write on labyrinths.

On page 356 of The roles of mutation, inbreeding, crossbreeding, and selection in evolution, Sewall Wright tells us that evolution proceeds on a fitness landscape. We are to imagine these landscapes as mountain ranges, and natural selection as a walk uphill. What follows — signed by Dr. Jorge Lednem Beagle, former navigator of the fitness maze — throws unexpected light on this perspective. The first two pages of the record are missing.

Read more of this post

Looking for species in cancer but finding strategies and players

Sometime before 6 August 2014, David Basanta and Tamir Epstein were discussing the increasing focus of mathematical oncology on tumour heterogeneity. An obstacle for this focus is a good definition of heterogeneity. One path around this obstacle is to take definitions from other fields like ecology — maybe species diversity. But this path is not straightforward: we usually — with some notable and interesting examples — view cancer cells as primarily asexual and the species concept is for sexual organisms. Hence, the specific question that concerned David and Tamir: is there a concept of species that applies to cancer?

I want to consider a couple of candidate answers to this question. None of these answers will be a satisfactory definition for species in cancer. But I think the exercise is useful for understanding evolutionary game theory. With the first attempt to define species, we’ll end up using the game assay to operationalize strategies. With the second attempt, we’ll use the struggle for existence to define players. Both will be sketches that I will need to complete more carefully if there is interest.

Read more of this post

Darwin as an early algorithmic biologist

In his autobiography, Darwin remarked on mathematics as an extra sense that helped mathematicians see truths that were inaccessible to him. He wrote:

During the three years which I spent at Cambridge… I attempted mathematics… but got on very slowly. The work was repugnant to me, chiefly from my not being able to see any meaning in the early steps in algebra. This impatience was very foolish, and in after years I have deeply regretted that I did not proceed far enough at least to understand something of the great leading principles of mathematics, for [people] thus endowed seem to have an extra sense. But I do not believe that I should ever have succeeded beyond a very low grade. … in my last year I worked with some earnestness for my final degree of B.A., and brushed up … a little Algebra and Euclid, which later gave me much pleasure, as it did at school.

Today, this remark has become a banner to rally mathematicians interested in biology. We use it to convince ourselves that by knowing mathematics, we have something to contribute to biology. In fact, the early mathematical biologists were able to personify the practical power of this extra sense in Gregor Mendel. From R.A. Fisher onward — including today — mathematicians have presented Mendel as one of their own. It is standard to attribute Mendel’s salvation of natural selection to his combinatorial insight into the laws of inheritance — to his alternative to Darwin’s non-mathematical blending inheritance.

But I don’t think we need wait for the rediscovery of Mendel to see fundamental mathematical insights shaping evolution. I think that Darwin did have mathematical vision, but just lacked the algorithmic lenses to focus it. In this post I want to give examples of how some of Darwin’s classic ideas can be read as anticipating important aspects of algorithmic biology. In particular, seeing the importance of asymptotic analysis and the role of algorithms in nature.
Read more of this post

Proximal vs ultimate constraints on evolution

For a mathematician — like John D. Cook, for example — objectives and constraints are duals of each other. But sometimes the objectives are easier to see than the constraints. This is certainly the case for evolution. Here, most students would point you to fitness as the objective to be maximized. And at least at a heuristic level — under a sufficiently nuanced definition of fitness — biologists would agree. So let’s take the objective as known.

This leaves us with the harder to see constraints.

Ever since the microscope, biologists have been expert at studying the hard to see. So, of course — as an editor at Proceedings of the Royal Society: B reminded me — they have looked at constraints on evolution. In particular, departures from an expected evolutionary equilibrium are where biologists see constraints on evolution. An evolutionary constraint is anything that prevents a population from being at a fitness peak.
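One simple way a population can fail to reach a fitness peak can be sketched in a few lines. The landscape below is entirely made up for illustration: a greedy (strong-selection, weak-mutation style) adaptive walk that only accepts fitter one-mutant neighbours can stall at a local peak well below the global one.

```python
# A toy illustration of an evolutionary constraint: a greedy adaptive walk
# on an invented 3-bit fitness landscape gets stuck at a local peak.

fitness = {
    "000": 1, "001": 2, "010": 2, "100": 5,   # "100" is a local peak
    "011": 3, "101": 4, "110": 4, "111": 9,   # "111" is the global peak
}

def neighbours(g):
    """All genotypes one point-mutation away from g."""
    return [g[:i] + ("1" if c == "0" else "0") + g[i+1:] for i, c in enumerate(g)]

def adaptive_walk(start):
    """Repeatedly move to the fittest one-mutant neighbour until none is fitter."""
    g = start
    while True:
        best = max(neighbours(g), key=fitness.get)
        if fitness[best] <= fitness[g]:
            return g  # no fitter neighbour: a (possibly only local) peak
        g = best

print(adaptive_walk("000"))  # ends at "100", not the global peak "111"
```

From "000" the walk climbs straight to "100" and stops, because every single mutation from "100" is deleterious even though "111" is fitter overall; the valley between the peaks acts as the constraint.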

Winding path in a hard semi-smooth landscape

In this post, I want to follow a bit of a winding path. First, I’ll appeal to Mayr’s ultimate-proximate distinction as a motivation for why biologists care about evolutionary constraints. Second, I’ll introduce the constraints on evolution that have been already studied, and argue that these are mostly proximal constraints. Third, I’ll introduce the notion of ultimate constraints and interpret my work on the computational complexity of evolutionary equilibria as an ultimate constraint. Finally, I’ll point at a particularly important consequence of the computational constraint of evolution: the possibility of open-ended evolution.

In a way, this post can be read as an overview of the change in focus between Kaznatcheev (2013) and (2018).
Read more of this post