Bourbaki vs the Russian method as a lens on heuristic models

There are many approaches to teaching higher maths, but two popular ones, often held in contrast to each other, are the Bourbaki and Russian methods. The Bourbaki method is named after Nicolas Bourbaki, a fictional mathematician — a nom de plume used by a group of mostly French mathematicians in the middle of the 20th century — who is responsible for an extremely abstract and axiomatic treatment of much of modern mathematics in the encyclopedic work Éléments de mathématique. As a pedagogical method, it is very formalist and consists of building up the clearest and most general possible definitions for the student. Discussion of specific, concrete, and intuitive mathematical objects is avoided, or reserved for homework exercises. Instead, the focus is on very general axioms that can apply to many specific structures of interest.

The Russian method, in contrast, stresses specific examples and applications. The instructor gives specific, concrete, and intuitive mathematical objects and structures — say the integers — as a pedagogical example of the abstract concept at hand — maybe rings, in this case. The student is given other specific instances of these general abstract objects as assignments — maybe some matrices, if we are looking at rings — and through exposure to many specific examples is expected to extract the formal axiomatic structure with which Bourbaki would have started. For the Russian, this overarching formalism becomes largely an afterthought; an exercise left to the reader.
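For a concrete taste of the Russian-method exercise, here is a minimal sketch in Python (the function names and the choice of axioms checked are my own illustration): it spot-checks a handful of ring axioms on two specific structures, the integers and 2×2 integer matrices, the sort of probing from which a student is expected to extract the general definition.

```python
from itertools import product

def check_ring_axioms(samples, add, mul, zero):
    """Spot-check some ring axioms on a few sample elements.

    Not a proof -- just the Russian-method exercise of probing
    specific structures until the shared axioms become apparent.
    """
    for a, b, c in product(samples, repeat=3):
        assert add(a, b) == add(b, a)                          # addition commutes
        assert add(add(a, b), c) == add(a, add(b, c))          # addition associates
        assert add(a, zero) == a                               # additive identity
        assert mul(mul(a, b), c) == mul(a, mul(b, c))          # multiplication associates
        assert mul(a, add(b, c)) == add(mul(a, b), mul(a, c))  # left distributivity
    return True

# Instance 1: the integers.
check_ring_axioms([-2, -1, 0, 1, 3],
                  lambda a, b: a + b, lambda a, b: a * b, 0)

# Instance 2: 2x2 integer matrices, encoded as tuples of rows.
def m_add(x, y):
    return tuple(tuple(x[i][j] + y[i][j] for j in range(2)) for i in range(2))

def m_mul(x, y):
    return tuple(tuple(sum(x[i][k] * y[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

m_zero = ((0, 0), (0, 0))
mats = [m_zero, ((1, 0), (0, 1)), ((1, 2), (3, 4)), ((0, 1), (1, 0))]
check_ring_axioms(mats, m_add, m_mul, m_zero)
```

Note that multiplicative commutativity is deliberately not on the list: the matrix instance would fail it, and noticing which candidate axioms survive all the examples is exactly the kind of discovery the method banks on.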

As with many comparisons in education, neither method is strictly “better”. Nor should the names be taken as representative of the people that advocate for or are exposed to each method. For example, I am Russian but I feel like I learnt the majority of my maths following the Bourbaki method and was very satisfied with it. In fact, I am not sure where the ‘Russian’ in the name comes from, although I suspect it is due to the polemical attack on Bourbaki by V.I. Arnol’d, a famous Russian mathematician of the second half of the 20th century. Although I do not endorse Arnol’d’s attack, I do share his fondness for Poincaré and his stress on the importance of intuition in mathematics. As you can guess from the title, in this article I will be stressing the Russian method as important to the philosophy of science and metamodeling.

I won’t be talking about science education, but about science itself. As I’ve stressed before, I think it a fool’s errand to provide a definition or categorization of the scientific method; it is particularly self-defeating here. But for the following, I will take the perspective that the scientific community, especially the theoretical branches that I work in, is engaged in the act of educating itself about the structure of reality. Reading a paper is like a lesson: I get to learn from what others have discovered. Doing research is like a worksheet: I try my hand at some concrete problems and learn something. Writing a paper is formalizing what I learned into a lesson for others. And, of course, as we try to teach, we end up learning more, so the act of writing often transforms what we learned in our ‘worksheet’.


The Noble Eightfold Path to Mathematical Biology

Twitter is not a place for nuance. It is a place for short, pithy statements. But if you follow the right people, those short statements can be very insightful. In these rare cases, a tweet can be like a kōan: a starting place for thought and meditation. Today I want to reflect on such a thoughtful tweet from Rob Noble outlining his template for doing good work in mathematical biology. This reflection is inspired by the discussions we had on my recent post on mathtimidation by analytic solution vs curse of computing by simulation.

So, with slight modification and expansion from Rob’s original — and in keeping with the opening theme — let me present The Noble Eightfold Path to Mathematical Biology:

  1. Right Intention: Identify a problem or mysterious effect in biology;
  2. Right View: Study the existing mathematical and mental models for this or similar problems;
  3. Right Effort: Create model based on the biology;
  4. Right Conduct: Check that the output of the model matches data;
  5. Right Speech: Humbly write up;
  6. Right Mindfulness: Analyse why model works;
  7. Right Livelihood: Based on 6, create simplest, most general useful model;
  8. Right Samadhi: Rewrite focussing on 6 & 7.

The hardest, most valuable work begins at step 6.

The only problem is that people often stop at step 5, and sometimes skip step 2 and even step 3.

This suggests that the model is more prescriptive than descriptive: an aspiration for good scholarship in mathematical biology.

In the rest of the post, I want to reflect on whether it is the right aspiration. And also add some detail to the steps.


Minimal models for explaining unbounded increase in fitness

On a prior version of my paper on computational complexity as an ultimate constraint, Hemachander Subramanian made a good comment and question:

Nice analysis Artem! If we think of the fitness as a function of genes, interactions between two genes, and interactions between three genes and so on, your analysis using epistasis takes into account only the interactions (second order and more). The presence or absence of the genes themselves (first order) can change the landscape itself, though. Evolution might be able to play the game of standing still as the landscape around it changes until a species is “stabilized” by finding itself in a peak. The question is would traversing these time-dependent landscapes for optima is still uncomputable?

And although I responded to his comment in the bioRxiv Disqus thread, it seems that comments are version locked and so you cannot see Hema’s comment anymore on the newest version. As such, I wanted to share my response on the blog and expand a bit on it.

Mostly this will be an incomplete argument for why biologists should care about worst-case analysis. I’ll have to expand on it more in the future.
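Hema’s decomposition of fitness into first-order gene effects plus second-order epistatic interactions can be sketched in a few lines of Python. This is a toy illustration with hypothetical names and random coefficients, not the model from the paper; it only shows how toggling a single gene’s presence shifts the effective landscape felt by the remaining genes.

```python
import random

def fitness(genotype, effects, epistasis):
    """Fitness of a binary genotype: first-order effects of present
    genes plus second-order (pairwise epistatic) interaction terms."""
    n = len(genotype)
    w = sum(effects[i] for i in range(n) if genotype[i])
    w += sum(epistasis[i][j]
             for i in range(n) for j in range(i + 1, n)
             if genotype[i] and genotype[j])
    return w

random.seed(1)
n = 5
effects = [random.gauss(0, 1) for _ in range(n)]     # first-order terms
epistasis = [[random.gauss(0, 1) for _ in range(n)]  # second-order terms
             for _ in range(n)]

# Hema's point in miniature: toggling the presence of gene 0 (a
# first-order change) also switches on every epistatic term that gene 0
# participates in, so the landscape felt by the remaining genes shifts.
without = (0, 1, 0, 1, 0)
with_g0 = (1, 1, 0, 1, 0)
delta = fitness(with_g0, effects, epistasis) - fitness(without, effects, epistasis)
# delta = effects[0] + epistasis[0][1] + epistasis[0][3], not effects[0] alone
```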


The wei wu wei of evolutionary oncology

The world was disordered, rains would come and the rivers would flood. No one knew when. When it rained, plants would grow, but no one knew which were fit to eat and which were poisonous. Sickness was rife. Life was precarious.

The philosopher-king Yu dredged the rivers, cleaned them so they would flow into the sea. Only then were the people of the Middle Kingdom able to grow the five grains to obtain food.

Generations later, Bai Gui — the prime minister of Wei — boasted to Mengzi: “my management of the water is superior to that of Yu.”

Mengzi responded: “You are wrong. Yu’s method was based on the way of the water. It is why Yu used the four seas as receptacles. But you are using the neighbouring states as receptacles. When water goes contrary to its course, we call it overflowing. Overflowing means flooding water, something that a humane man detests… As for Yu moving the waters, he moved them without interference.”

Although Yu made changes to the environment by digging channels, he did so after understanding how the water flowed and moved naturally. He did so with knowledge of the Way. Yu’s management of water was superior to Bai Gui’s because Yu’s approach was in accordance with the Way. This is what evolutionary oncology seeks to achieve with cancer treatment. By understanding how the dynamics of somatic evolution drive tumour growth, we hope to change the selective pressures in accordance with this knowledge to manage or cure the disease.

Mathtimidation by analytic solution vs curse of computing by simulation

Recently, I was chatting with Patrick Ellsworth about the merits of simulation vs analytic solutions in evolutionary game theory. As you might expect from my old posts on the curse of computing, and my enjoyment of classifying games into dynamic regimes, I started with my typical argument against simulations. However, as I searched for a positive argument for analytic solutions of games, I realized that I didn’t have a good one. Instead, I arrived at another negative argument — this time against analytic solutions of heuristic models.

Hopefully this curmudgeoning comes as no surprise by now.

But it did leave me in a rather confused state.

Given that TheEGG is meant as a place to share such confusions, I want to use this post to set the stage for the simulation vs analytic debate in EGT and then rehearse my arguments. I hope that, dear reader, you will then help resolve the confusion.

First, for context, I’ll share my own journey from simulations to analytic approaches. You can see a visual sketch of it above. Second, I’ll present an argument against simulations — at least as I framed that argument around the time I arrived at Moffitt. Third, I’ll present the new argument against analytic approaches. At the end — as is often the case — there will be no resolution.


Methods and morals for mathematical modeling

About a year ago, Vincent Cannataro emailed me asking about any resources that I might have on the philosophy and etiquette of mathematical modeling and inference. As regular readers of TheEGG know, this topic fascinates me. But as I was writing a reply to Vincent, I realized that I don’t have a single post that could serve as an entry point to my musings on the topic. Instead, I ended up sending him an annotated list of eleven links and a couple of book recommendations. As I scrambled for a post for this week, I realized that such an analytic linkdex should exist on TheEGG. So, in case others have interests similar to Vincent and me, I thought that it might be good to put together in one place some of the resources about metamodeling and related philosophy available on this blog.

This is not an exhaustive list, but it might still be relatively exhausting to read.

I’ve expanded slightly past the original 11 links (to 14) to highlight some more recent posts. The free association of the posts is structured slightly, with three sections: (1) classifying mathematical models, (2) pros and cons of computational models, and (3) ethics of models.


Software monocultures, imperialism, and weapons of math destruction

This past Friday, Facebook reported that they suffered a security breach that affected at least 50 million users. ‘Security breach’ is a bit of newspeak that is meant to hint at active malice and attribute fault outside the company. But as far as I understand it — and I am no expert on this — it was just a series of three bugs in Facebook’s “View As” feature that together allowed people to get the access tokens of whoever they searched for. This is, of course, bad for your Facebook account. The part of this story that really fascinated me, however, is how this affected other sites. Because that access token would let somebody access not only your Facebook account but also any other website where you use Facebook’s Single Sign On feature.

This means that a bug that some engineers missed at Facebook compromised the security of users on completely unrelated sites like, say, StackExchange (SE) or Disqus — or any site that you can log into using your Facebook account.

A case of software monoculture — a nice metaphor I was introduced to by Jonathan Zittrain.

This could easily have knock-on effects for security. For example, I am one of the moderators for the Theoretical Computer Science SE and also the Psychology and Neuroscience SE. Due to this, I have the potential to access certain non-public information of SE users like their IP addresses and hidden contact details. I can also send communications that look much more official, alongside expected abilities like bans, suspensions, etc. Obviously, part of my responsibility as a moderator is to only use these abilities for proper reasons. But if I had used Facebook — disclosure: I don’t use Facebook — for my SE login then a potential hacker could get access to these abilities and then attempt phishing or other attacks even on SE users that don’t use Facebook.

In other words, the people in charge of security at SE have to worry not only about their own code but also Facebook (and Google, Yahoo!, and other OpenIDs).

Of course, Facebook is not necessarily the worst case of software monoculture or knock-on effects that security experts have to worry about. Exploits in operating systems, browsers, servers, and standard software packages (especially security ones) can be even more devastating to the software ecology.

And exploits of aspects of social media other than login can have more subtle effects than security.

The underlying issue is a lack of diversity in tools and platforms. A case of having all our eggs in one basket. Of minimizing individual risk — by using the best available or most convenient system — at the cost of increasing systemic risk — because everyone else uses the same system.
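A toy Monte Carlo sketch (numbers entirely hypothetical) makes the trade-off concrete: putting every user on the single safest platform minimizes each user’s expected risk but makes any compromise a total one, while spreading users across platforms raises the average risk yet all but eliminates the chance of a majority being hit at once.

```python
import random

def simulate(probs, assignment, trials=5000, seed=0):
    """Monte Carlo: each platform is independently breached with its own
    probability; every user on a breached platform is compromised.

    Returns (mean fraction of users compromised,
             probability that more than half are compromised at once)."""
    rng = random.Random(seed)
    n_users = len(assignment)
    total = tail = 0
    for _ in range(trials):
        breached = [rng.random() < p for p in probs]
        hit = sum(breached[platform] for platform in assignment)
        total += hit / n_users
        tail += hit > n_users / 2
    return total / trials, tail / trials

probs = [0.01, 0.02, 0.03, 0.04]       # platform 0 is the 'best' system
monoculture = [0] * 100                # everyone picks the best platform
diverse = [i % 4 for i in range(100)]  # users spread across all four

mono_mean, mono_tail = simulate(probs, monoculture)
div_mean, div_tail = simulate(probs, diverse)
# Monoculture: lower average risk per user, but failures are all-or-nothing.
# Diversity: higher average risk, but a majority-compromise event is rare.
```

With these made-up numbers, the monoculture’s mean risk comes out lower (roughly 0.01 versus 0.025) while its majority-compromise probability comes out far higher (roughly 0.01 versus nearly zero): individual risk down, systemic risk up.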

We see the same issues in human projects outside of software. Compare this to the explanations of the 2008 financial crisis that focused on individual vs systemic risk.

But my favourite example is the banana.

In this post, I’ll sketch the analogy between software monoculture and agricultural monoculture. In particular, I want to focus on a common element between the two domains: the scale of imperial corporations. It is this scale that turns mathematical models into weapons of math destruction. Finally, I’ll close with some questions on whether this analogy can be turned into tool transfer: can ecology and evolution help us understand and manage software monoculture?


Overcoming folk-physics: the case of projectile motion for Aristotle, John Philoponus, Ibn-Sina & Galileo

A few years ago, I wrote about the importance of pairing tools and problems in science. Not selecting the best tool for the job, but adjusting both your problem and your method to form the best pair. There, I made the distinction between endogenous and exogenous questions. A question is endogenous to a field if it is motivated by the existing tools developed for the field or slight extensions of them. A question is exogenous if motivated by frameworks or concerns external to the field. Usually, such an external motivating framework is accepted uncritically with the most common culprits being the unarticulated ‘intuitive’ and ‘natural’ folk theories forced on us by our everyday experiences.

Sometimes a great amount of scientific or technological progress can be had from overcoming our reliance on a folk-theory. A classic example of this would be the development of inertia and momentum in physics. In this post, I want to sketch a genealogy of this transition to make the notion of endogenous vs exogenous questions a bit more precise.

How was the folk-physics of projectile motion abandoned?

In the process, I’ll get to touch briefly on two more recent threads on TheEGG: The elimination of the ontological division between artificial and natural motion (that was essential groundwork for Darwin’s later elimination of the division between artificial and natural processes) and the extraction and formalization of the tacit knowledge underlying a craft.

Techne and Programming as Analytic Philosophy

This week, as I was assembling furniture — my closest approach to a traditional craft — I was listening to Peter Adamson interviewing his twin brother Glenn Adamson about craft and material intelligence. Given that this interview was on the history of philosophy (without any gaps) podcast, at some point, the brothers steered the conversation to Plato. In particular, to Plato’s high regard for craft or — in its Greek form — techne.

For Peter, Plato “treats techne, or craft, as a paradigm for knowledge. And a lot of the time in the Socratic dialogues, you get the impression that what Socrates is suggesting is that we need to find a craft or techne for virtue or ethics — like living in the world — that is more or less like the techne that, say, the carpenter has.” Through this, the Adamson twins proposed a view of craft and philosophy as two sides of the same coin.

Except, unlike the carpenter and her apprentice, Plato has Socrates trying to force his interlocutors to formulate their knowledge in propositional terms and not just live it. It is on this point that I differ from Peter Adamson.

The good person practices the craft of ethics: of shaping their own life and particular circumstances into the good life. Their wood is their own existence and their chair is the good life. The philosopher, however, aims to make the implicit (or semi-implicit) knowledge of the good person into explicit terms. To uncover and specify the underlying rules and regularities. And the modern philosopher applies these same principles to other domains, not just ethics. Thus, if I had to give an incomplete definition for this post: philosophy is the art of turning implicit knowledge into propositional form. Analytic philosophy aims for that propositional form to be formal.

But this is also what programmers do.

In this post, I want to convince you that it is fruitful to think of programming as analytic philosophy. In the process, we’ll have to discuss craft and the history of its decline. Of why people (wrongly) think that a professor is ‘better’ than a carpenter.
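As a minimal, admittedly contrived sketch of that move from implicit to propositional: an experienced carpenter ‘just feels’ whether a peg fits snugly, and writing the judgment as a program forces an explicit rule. The tolerance band below is entirely made up.

```python
def is_snug_fit(hole_mm, peg_mm):
    """An experienced carpenter 'just knows' whether a peg will fit
    snugly. Writing the judgment down forces the implicit rule into
    explicit, propositional form: here, a made-up tolerance band."""
    clearance = hole_mm - peg_mm
    return 0.1 <= clearance <= 0.5   # snug but not forced (hypothetical numbers)

assert is_snug_fit(10.3, 10.0)       # comfortable clearance
assert not is_snug_fit(10.0, 10.0)   # too tight to insert
assert not is_snug_fit(12.0, 10.0)   # too loose to hold
```

The interesting work, of course, is not the two constants but the arguing over whether they capture what the carpenter actually knows; that arguing is the philosophy.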

Separating theory from nonsense via communication norms, not Truth

Earlier this week on Twitter, Brian Skinner wrote an interesting thread on how to distinguish good theory from crackpottery. He started with a trait that both theorists and crackpots share: we have an “irrational self-confidence” — a belief that just by thinking we “can arrive at previously-unrealized truths about the world”. From this starting point, the two diverge in their use of evidence. A crackpot relies primarily on positive evidence: he thinks hard about a problem, arrives at a theory that feels right, and then publicizes the result.

A theorist, on the other prong, incorporates negative evidence: she ponders hard about a problem, arrives at a theory that feels right and then proceeds to try to disprove that theory. She reads the existing literature and looks at the competing theories, takes time to understand them and compare them against her own. If any disagree with hers then she figures out why those theories are wrong. She pushes her theory to the extremes, looks at its limiting cases and checks them for agreement with existing knowledge. Only after her theory comes out unscathed from all these challenges does she publicize it.

For Skinner, this second prong is the definition of scholarship. In practice, coming up with a correct theory is mostly a painful process of discarding many of your own wrong attempts. A good theorist is thorough, methodical, and skeptical of their own ideas.

The terminology of crackpottery vs scholarship is probably overly harsh, as Skinner acknowledges. And in practice, somebody might be a good theorist in one domain but a crackpot elsewhere. As Malkym Lesdrae points out, there are many accomplished academics who are also crackpot theorists: “Most often it’s about things outside their field of specialty”. Thus, this ideal self-skepticism might be domain specific.

It is also a destructive ideal.

In other words, I disagreed with Skinner on the best way to separate good theory from nonsense. Mostly on the framing. Skinner crystalized our disagreement in a tweet: whereas he views self-skepticism as an obligation to the Truth, I view a similar sort of self-reflective behavior as a social obligation. I am committed to this latter view because I want to make sense of things like heuristic models, where truth is secondary to other modelling concerns. Where truth is not the most useful yardstick for checking the usefulness of a model. Where you hear Box’s slogan: “all models are wrong, but some are useful.”

Given the brief summary of Skinner’s view above — and please, Brian, correct me in the comments if I misrepresented your position — I want to use the rest of this post to sketch what I mean by self-reflective behavior as a social obligation.