Hiding behind chaos and error in the double pendulum

If you want a visual intuition for just how unpredictable chaotic dynamics can be, then the go-to toy model is the double pendulum. There are lots of great simulations (and some physical implementations) of the double pendulum online. Recently, /u/abraxasknister posted such a simulation on the /r/physics subreddit and quickly attracted a lot of attention.

In their simulation, /u/abraxasknister has a fixed center (black dot) that the first mass (red dot) is attached to (by an invisible rigid massless bar). The second mass (blue dot) is then attached to the first mass (also by an invisible rigid massless bar). They then release these two masses from rest at some initial height and watch what happens.

The resulting dynamics are at right.

It is certainly unpredictable and complicated. Chaotic? Most importantly, it is obviously wrong.

But because the double pendulum is a famous chaotic system, some people did not want to acknowledge that there is an obvious mistake. They wanted to hide behind chaos: they claimed that for a complex system, we cannot possibly have intuitions about how the system should behave.

In this post, I want to discuss the error of hiding behind chaos, and how the distinction between microdynamics and global properties lets us catch /u/abraxasknister’s mistake.
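To see what such a global-property check looks like in practice, here is a minimal sketch (my own, not /u/abraxasknister’s code) that integrates the textbook point-mass double pendulum and monitors the one quantity the microdynamics must respect no matter how chaotic they get: total mechanical energy. The masses, lengths, and release angles below are arbitrary assumptions.

# A minimal sketch (not /u/abraxasknister's code): integrate the standard
# point-mass double pendulum and track the global invariant (total energy)
# that any correct simulation released from rest must conserve.
# Masses, lengths, and release angles are arbitrary assumptions.
import numpy as np
from scipy.integrate import solve_ivp

g, m1, m2, l1, l2 = 9.81, 1.0, 1.0, 1.0, 1.0

def rhs(t, y):
    th1, w1, th2, w2 = y
    d = th1 - th2
    den = 2*m1 + m2 - m2*np.cos(2*d)
    dw1 = (-g*(2*m1 + m2)*np.sin(th1) - m2*g*np.sin(th1 - 2*th2)
           - 2*np.sin(d)*m2*(w2**2*l2 + w1**2*l1*np.cos(d))) / (l1*den)
    dw2 = (2*np.sin(d)*(w1**2*l1*(m1 + m2) + g*(m1 + m2)*np.cos(th1)
           + w2**2*l2*m2*np.cos(d))) / (l2*den)
    return [w1, dw1, w2, dw2]

def energy(y):
    th1, w1, th2, w2 = y
    kinetic = (0.5*m1*(l1*w1)**2
               + 0.5*m2*((l1*w1)**2 + (l2*w2)**2 + 2*l1*l2*w1*w2*np.cos(th1 - th2)))
    potential = -(m1 + m2)*g*l1*np.cos(th1) - m2*g*l2*np.cos(th2)
    return kinetic + potential

y0 = [2.0, 0.0, 2.5, 0.0]   # released from rest: both angular velocities are zero
sol = solve_ivp(rhs, (0, 20), y0, rtol=1e-10, atol=1e-10)
E = np.array([energy(y) for y in sol.y.T])
print("max energy drift:", np.max(np.abs(E - E[0])))  # stays tiny for a faithful solver

However wild the trajectory looks, neither mass can ever rise above the height set by that initial energy. That is exactly the kind of intuition that no appeal to chaos can wave away.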
Read more of this post

Software monocultures, imperialism, and weapons of math destruction

This past Friday, Facebook reported that they suffered a security breach that affected at least 50 million users. ‘Security breach’ is a bit of newspeak that is meant to hint at active malice and attribute fault outside the company. But as far as I understand it — and I am no expert on this — it was just a series of three bugs in Facebook’s “View As” feature that together allowed people to get the access tokens of whoever they searched for. This is, of course, bad for your Facebook account. The part of this story that really fascinated me, however, is how this affected other sites. Because that access token would let somebody access not only your Facebook account but also any other website where you use Facebook’s Single Sign On feature.

This means that a bug that some engineers missed at Facebook compromised the security of users on completely unrelated sites like, say, StackExchange (SE) or Disqus — or any site that you can log into using your Facebook account.

A case of software monoculture — a nice metaphor I was introduced to by Jonathan Zittrain.

This could easily have knock-on effects for security. For example, I am one of the moderators for the Theoretical Computer Science SE and also the Psychology and Neuroscience SE. Due to this, I have the potential to access certain non-public information of SE users like their IP addresses and hidden contact details. I can also send communications that look much more official, alongside expected abilities like bans, suspensions, etc. Obviously, part of my responsibility as a moderator is to only use these abilities for proper reasons. But if I had used Facebook — disclosure: I don’t use Facebook — for my SE login then a potential hacker could get access to these abilities and then attempt phishing or other attacks even on SE users that don’t use Facebook.

In other words, the people in charge of security at SE have to worry not only about their own code but also about Facebook (and Google, Yahoo!, and other OpenID providers).
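To make that dependence concrete, here is a deliberately simplified sketch of what single sign-on looks like from the relying site’s side. The identity-provider URL, response fields, and session helper are hypothetical placeholders of my own, not Facebook’s or SE’s actual API.

# Schematic single sign-on at a relying site. The provider URL and response
# fields are hypothetical placeholders, not any real provider's API.
import requests

IDP_TOKEN_INFO = "https://identity-provider.example/token_info"  # hypothetical

def start_session_for(user_id):
    # Stand-in for the relying site's own session machinery.
    return {"session_for": user_id}

def login_with_sso(access_token):
    # The relying site never sees a password: it trusts whatever identity
    # the provider attaches to this token.
    resp = requests.get(IDP_TOKEN_INFO, params={"token": access_token})
    resp.raise_for_status()
    identity = resp.json()            # e.g. {"user_id": "...", "name": "..."}
    return start_session_for(identity["user_id"])

# Anyone holding a valid access_token, including an attacker who harvested it
# through a bug at the provider, gets a session here, even though this site's
# own login code is bug-free.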

Of course, Facebook is not necessarily the worst case of software monoculture or knock-on effects that security experts have to worry about. Exploits in operating systems, browsers, servers, and standard software packages (especially security ones) can be even more devastating to the software ecology.

And exploits of aspects of social media other than login can have subtler effects than security breaches.

The underlying issue is a lack of diversity in tools and platforms. A case of having all our eggs in one basket. Of minimizing individual risk — by using the best available or most convenient system — at the cost of increasing systemic risk — because everyone else uses the same system.
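A toy calculation makes the trade-off explicit (all the numbers below are made-up assumptions): putting every user on the single safest platform minimizes each individual’s chance of compromise, but it also makes compromises perfectly correlated, so the chance that most users are hit at the same time goes up.

# Toy illustration of individual vs systemic risk; all probabilities are assumptions.
import random

random.seed(0)

def simulate(platform_probs, shares, trials=100_000):
    """shares[i] = fraction of users on platform i. Returns the mean fraction of
    users compromised per year and the probability that over half are hit at once."""
    mean_frac, catastrophes = 0.0, 0
    for _ in range(trials):
        frac = sum(s for p, s in zip(platform_probs, shares) if random.random() < p)
        mean_frac += frac
        catastrophes += frac > 0.5
    return mean_frac / trials, catastrophes / trials

# Monoculture: everyone on the single best platform (1% exploit chance per year).
print("monoculture:", simulate([0.01], [1.0]))
# Diversity: users spread over 10 platforms, most individually a bit worse (3%).
print("diverse:    ", simulate([0.01] + [0.03] * 9, [0.1] * 10))
# Individual risk roughly triples under diversity, but the chance of a
# system-wide compromise drops from about 1% to essentially zero.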

We see the same issues in human projects outside of software. Compare this to the explanations of the 2008 financial crisis that focused on individual vs systemic risk.

But my favourite example is the banana.

In this post, I’ll sketch the analogy between software monoculture and agricultural monoculture. In particular, I want to focus on a common element between the two domains: the scale of imperial corporations. It is this scale that turns mathematical models into weapons of math destruction. Finally, I’ll close with some questions on whether this analogy can be turned into tool transfer: can ecology and evolution help us understand and manage software monoculture?

Read more of this post

Labyrinth: Fitness landscapes as mazes, not mountains

Tonight, I am passing through Toulouse on my way to Montpellier for the 2nd Joint Congress on Evolutionary Biology. If you are also attending then find me on 21 August at poster P-0861 on level 2 to learn about computational complexity as an ultimate constraint on evolution.

During the flight over, I was thinking about fitness landscapes. Unsurprising — I know. A particular point that I try to make about fitness landscapes in my work is that we should imagine them as mazes, not as mountain ranges. Recently, Raoul Wadham reminded me that I haven’t written about the maze metaphor on the blog. So now is a good time to write on labyrinths.

On page 356 of The roles of mutation, inbreeding, crossbreeding, and selection in evolution, Sewall Wright tells us that evolution proceeds on a fitness landscape. We are to imagine these landscapes as mountain ranges, and natural selection as a walk uphill. What follows — signed by Dr. Jorge Lednem Beagle, former navigator of the fitness maze — throws unexpected light on this perspective. The first two pages of the record are missing.
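To make the metaphor concrete, here is a toy landscape of my own construction (not Wright’s, and not from the record that follows): the global peak is only two mutations away from the starting genotype, yet the only uphill route that selection can follow is a winding corridor of six single-mutation steps.

# Toy landscape where selection walks a corridor, not a hillside (my own
# illustrative construction). Fitness increases only along a winding path of
# single-mutation steps; every genotype off the path is a fitness-0 dead end.
path = ["0000", "0001", "0011", "0111", "1111", "1110", "1100"]
fitness = {g: i for i, g in enumerate(path)}   # off-path genotypes default to 0

def neighbours(g):
    # All genotypes one point-mutation away.
    return [g[:i] + ("1" if g[i] == "0" else "0") + g[i+1:] for i in range(len(g))]

def adaptive_walk(g):
    # Keep moving to a fitter one-mutation neighbour until none exists.
    steps = [g]
    while True:
        fitter = [n for n in neighbours(g) if fitness.get(n, 0) > fitness.get(g, 0)]
        if not fitter:
            return steps
        g = fitter[0]
        steps.append(g)

walk = adaptive_walk("0000")
hamming = sum(a != b for a, b in zip(walk[0], walk[-1]))
print(f"{len(walk) - 1} uphill steps to reach a peak only {hamming} mutations away")
# Six uphill steps to a peak just two mutations away: a maze, not a mountain.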

Read more of this post

QBIOX: Distinguishing mathematical from verbal models in biology

There is a network at Oxford known as QBIOX that aims to connect researchers in the quantitative biosciences. They try to foster collaborations across the university and organize symposia where people from various departments can share their quantitative approaches to biology. Yesterday was my second or third time attending, and I wanted to share a brief overview of the three talks by Philip Maini, Edward Morrissey, and Heather Harrington. In the process, we’ll get to look at slime molds, colon crypts, neural crests, and glycolysis. And see modeling approaches ranging from ODEs to hybrid automata to Stan to algebraic systems biology. All of this will be in contrast to verbal theories.
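As a generic illustration of that contrast (my own example, not taken from any of the talks): the verbal claim that growth slows as a population approaches what its environment can support becomes, once written as an ODE, something you can integrate, fit, and falsify. The growth rate and carrying capacity below are arbitrary assumptions.

# Generic illustration (not from the QBIOX talks): the verbal claim above,
# written as the logistic ODE dN/dt = r*N*(1 - N/K) and integrated numerically.
# The growth rate r and carrying capacity K are arbitrary assumptions.
import numpy as np
from scipy.integrate import solve_ivp

r, K = 0.5, 1000.0

def logistic(t, N):
    return [r * N[0] * (1 - N[0] / K)]

sol = solve_ivp(logistic, (0, 30), [10.0], t_eval=np.linspace(0, 30, 7))
for t, N in zip(sol.t, sol.y[0]):
    print(f"t = {t:5.1f}   N = {N:7.1f}")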

Philip Maini started the evening off — and set the theme for my post — with a direct question as the title of his talk.

Does mathematics have anything to do with biology?

Read more of this post

Hackathons and a brief history of mathematical oncology

It was Friday — two in the morning. And I was busy fine-tuning a model in Mathematica and editing slides for our presentation. My team and I had been running on coffee and snacks all week. Most of us had met each other for the first time on Monday, got an inkling of the problem space we’d be working on, brainstormed, and hacked together a number of equations and a few chunks of code to prototype a solution. In seven hours, we would have to submit our presentation to the judges. Fifty thousand dollars in start-up funding was on the line.

A classic hackathon, except for one key difference: my team wasn’t just the usual mathematicians, programmers, computer & physical scientists. Some of the key members were biologists and clinicians specializing in blood cancers. And we weren’t prototyping a new app. We were trying to predict the risk of relapse for patients with chronic myeloid leukemia who had stopped receiving imatinib. This was 2013 and I was at the 3rd annual integrated mathematical oncology workshop. It was one of my first exposures to using mathematical and computational tools to study cancer: the field of mathematical oncology.
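For a flavour of the kind of back-of-the-envelope reasoning such a team starts from (a toy of my own, not the model we actually built that week): suppose treatment leaves behind a small, Poisson-distributed number of leukemic cells. The risk of relapse is then the chance that at least one cell survives, and the time to clinical relapse follows from how quickly that residue regrows off imatinib.

# A toy of my own, not our team's model: relapse after stopping therapy, assuming
# a Poisson-distributed residual leukemic population and exponential regrowth.
# All parameter values below are illustrative assumptions.
import numpy as np

mean_residual = 2.0          # assumed mean number of surviving leukemic cells
growth_rate = 0.05           # assumed regrowth rate per day off imatinib
detection_size = 1e9         # assumed cell count at clinically detectable relapse

p_relapse = 1 - np.exp(-mean_residual)                              # P(at least one survivor)
t_relapse = np.log(detection_size / mean_residual) / growth_rate    # days, if any survive

print(f"P(relapse)      ~ {p_relapse:.2f}")
print(f"time to relapse ~ {t_relapse:.0f} days (given at least one surviving cell)")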

As you can tell from other posts on TheEGG, I’ve continued thinking about and working on mathematical oncology. The workshops have also continued. The 7th annual IMO workshop — focused on stroma this year — is starting right now. If you’re not in Tampa then you can follow #MoffittIMO on Twitter.

Since I’m not attending in person this year, I thought I’d provide a broad overview based on an article I wrote for Oxford Computer Science’s InSPIRED Research (see pg. 20-1 of this pdf for the original) and a paper by Helen Byrne (2010).

Read more of this post

Poor reasons for preprints & post-publication peer-review

Last week, I revived the blog with some reflections on open science. In particular, I went into the case for preprints and the problem with the academic publishing system. This week, I want to continue this thread by examining three common arguments for preprints: speed, feedback, and public access. I think that these arguments are often motivated in the wrong way. In their standard presentation, they are bad arguments for a good idea. By pointing out these perceived shortcomings, I hope that we can develop more convincing arguments for preprints. Or maybe methods of publication that are even better than the current approach to preprints.

These thoughts are not completely formed, and I am eager to refine them in follow-up posts. As it stands, this is more of a hastily written rant.

Read more of this post

Preprints and a problem with academic publishing

This is the 250th post on the Theory, Evolution, and Games Group blog. And although my posting pace has slowed in recent months, I see this as a milestone along the continuing road of open science. So I want to take this post as an opportunity to make some comments on open science.

To get this far, I’ve relied on a lot of help and encouragement. Both directly from all the wonderful guest posts and comments, and indirectly from general recognition. Most recently, this has taken the form of the Canadian blogging and science outreach network Science Borealis recognizing us as one of the top 12 science blogs in Canada.

Given this connection, it is natural to also view me as an ally of other movements associated with open science, like (1) preprints and (2) post-publication peer-review (PPPR). To some extent, I do support both of these activities. First, I regularly post my papers to ArXiv & BioRxiv. Just in the two preceding months, I’ve put out a paper on the complexity of evolutionary equilibria and joint work on how fibroblasts and alectinib switch the games that cancers play. Another will follow later this month based on our project during the 2016 IMO Workshop. And I’ve been doing this for a while: the first draft of my evolutionary equilibria paper, for example, is older than BioRxiv — which only launched in November 2013, more than 20 years after physicists, mathematicians, and computer scientists started using ArXiv.

Second, some might think of my blog posts as PPPRs: occasionally, I try to write detailed comments on preprints and published papers, such as my post on fusion and sex in protocells commenting on a preprint by Sam Sinai, Jason Olejarz, and their colleagues. Finally, I am impressed and heartened by the now iconic graphic on the growth of preprints in biology.

But that doesn’t mean I find these ideas to be beyond criticism, and — more importantly — it doesn’t mean that there aren’t poor reasons for supporting preprints and PPPR.

Recently, I’ve seen a number of articles and tweets written on this topic, both for and against (or neutral toward) preprints and for PPPR. Even Nature is telling us to embrace preprints. In the coming series of posts, I want to share some of my reflections on the case for preprints, and also argue that there isn’t anything all that revolutionary or transformative in them. If we want progress then we should instead think in terms of working papers. And as for post-publication peer review — instead, we should promote a culture of commentaries, glosses, and literature review/synthesis.

Currently, we do not publish papers to share ideas. We have ideas just to publish papers. And we need to change this aspect of academic culture.

In this post, I will sketch some of the problems with academic publishing. Problems that I think any model of sharing results will have to address.

Read more of this post

Fusion and sex in protocells & the start of evolution

In 1864, five years after reading Darwin’s On the Origin of Species, Pyotr Kropotkin — the anarchist prince of mutual aid — was leading a geographic survey expedition aboard a dog-sleigh — a distinctly Siberian variant of the HMS Beagle. In the harsh Manchurian climate, Kropotkin did not see competition ‘red in tooth and claw’, but a flourishing of cooperation as animals banded together to survive their environment. From this, he built a theory of mutual aid as a driving factor of evolution. Among his countless observations, he noted that no matter how selfish an animal was, it still had to come together with others of its species, at least to reproduce. In this, he saw both sex and cooperation as primary evolutionary forces.

Now, Martin A. Nowak has taken up the challenge of establishing cooperation as a central driver of evolution. With his colleagues, he has tracked the problem from myriad angles, and it is not surprising that recently he has turned to sex. In a paper released at the start of this month, Sam Sinai, Jason Olejarz, Iulia A. Neagu, & Nowak (2016) argue that sex is primary. We need sex just to kick-start the evolution of a primordial cell.

In this post, I want to sketch Sinai et al.’s (2016) main argument, discuss prior work on the primacy of sex, a similar model by Wilf & Ewens, the puzzle over emergence of higher levels of organization, and the difference between the protocell fusion studied by Sinai et al. (2016) and sex as it is normally understood. My goal is to introduce this fascinating new field that Sinai et al. (2016) are opening to you, dear reader; to provide them with some feedback on their preprint; and, to sketch some preliminary ideas for future extensions of their work.

Read more of this post

Don’t take Pokemon Go for dead: a model of product growth

In the last month, some people wrote about the decay in active users for Pokemon Go after its first month, in a tone that presents the game as likely a mere fad – with articles on 538, cinemablend and Bloomberg, for example. “Have you deleted Pokémon Go yet?” was even trending on Twitter. Although it is certainly possible that this ends up being an accurate description of the game, I posit that such conclusions are rushed. To make this case, I examine some systemic reasons that would make the Pokemon Go numbers for August inevitably lower than those for July, without necessarily implying that the game is doomed to dwindle into irrelevance.

Students in Waterloo playing Pokemon Go. Photo courtesy of Maylin Cui.

Others have made similar points before – see this article and the end of this one for example. However, in the spirit of TheEGG, and unlike what most of the press articles can afford to do, we’ll bring some mathematical modeling into our arguments.
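As a preview of the style of argument, here is a minimal adoption-and-churn sketch with made-up numbers (not the model developed in the full post): active users must fall after the first month once nearly everyone who was ever going to try the game already has, even if retention stays perfectly healthy.

# Minimal adoption-and-churn sketch (made-up parameters, not the post's model):
# daily sign-ups draw from a finite pool of people who will ever try the game,
# while a fixed fraction of active players quits each day.
POOL = 100e6    # assumed number of people who will ever try the game
ADOPT = 0.08    # assumed fraction of the remaining pool that joins each day
CHURN = 0.02    # assumed fraction of active players who quit each day

untried, active = POOL, 0.0
for day in range(1, 91):
    new = ADOPT * untried
    untried -= new
    active += new - CHURN * active
    if day % 30 == 0:
        print(f"day {day:2d}: {active / 1e6:5.1f}M active")
# Active users peak in the first month and then decline, even though the average
# player keeps playing for 1/CHURN = 50 days throughout.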
Read more of this post

Evolutionary dynamics of acid and VEGF production in tumours

Today was my presentation day at ECMTB/SMB 2016. I spoke in David Basanta’s mini-symposium on the games that cancer cells play and presented a poster during the poster session. The mini-symposium started with a brief intro from David, and had 25-minute talks from Jacob Scott, myself, Alexander Anderson, and John Nagy. David, Jake, Sandy, and John are some of the top mathematical oncologists and really drew a crowd, so I felt privileged to have the opportunity to address it. It was also just fun to see lots of familiar faces in the same place.

A crowded room by the end of Sandy’s presentation.

My talk was focused on two projects. The first part was the advertised “Evolutionary dynamics of acid and VEGF production in tumours” that I’ve been working on with Robert Vander Velde, Jake, and David. The second part — and my poster later in the day — was the additional “(+ measuring games in non-small cell lung cancer)” based on work with Jeffrey Peacock, Andriy Marusyk, and Jake. You can download my slides here (also the poster), but they are probably hard to make sense of without the accompanying talk. I had intended to have a preprint out on this prior to today, but it will follow next week instead. Since there are already many blog posts about the double goods project on TheEGG, in this post I will organize them into a single annotated linkdex.
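For readers who haven’t followed the earlier double goods posts, the basic machinery is replicator dynamics over a game’s payoff matrix. The sketch below is a generic two-strategy example with made-up payoffs, not the actual double goods game between acid-producing and VEGF-producing cells, but it shows the kind of frequency dynamics the linked posts analyze.

# Generic replicator-dynamics sketch (made-up 2x2 payoffs, not the double goods
# game): the fraction x of one cell type grows when its expected payoff exceeds
# the population average.
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[2.0, 1.0],    # assumed payoffs: A[i, j] = payoff to strategy i
              [3.0, 0.5]])   # when interacting with strategy j

def replicator(t, state):
    x = state[0]                     # fraction playing strategy 0
    p = np.array([x, 1 - x])
    payoffs = A @ p
    return [x * (payoffs[0] - p @ payoffs)]

sol = solve_ivp(replicator, (0, 40), [0.1], t_eval=np.linspace(0, 40, 9))
for t, x in zip(sol.t, sol.y[0]):
    print(f"t = {t:4.1f}   fraction of type 0 = {x:.3f}")
# With these made-up payoffs the two types stably coexist at x = 1/3.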

Read more of this post