Separating theory from nonsense via communication norms, not Truth

Earlier this week on Twitter, Brian Skinner wrote an interesting thread on how to distinguish good theory from crackpottery. He started with a trait that both theorists and crackpots share: we have an “irrational self-confidence” — a belief that just by thinking we “can arrive at previously-unrealized truths about the world”. From this starting point, the two diverge in their use of evidence. A crackpot relies primarily on positive evidence: he thinks hard about a problem, arrives at a theory that feels right, and then publicizes the result.

A theorist, on the other hand, incorporates negative evidence: she thinks hard about a problem, arrives at a theory that feels right, and then proceeds to try to disprove that theory. She reads the existing literature and looks at the competing theories, takes time to understand them and compare them against her own. If any disagree with hers, then she figures out why those theories are wrong. She pushes her theory to the extremes, looks at its limiting cases, and checks them for agreement with existing knowledge. Only after her theory comes out unscathed from all these challenges does she publicize it.

For Skinner, this second approach is the definition of scholarship. In practice, coming up with a correct theory is mostly a painful process of discarding many of your own wrong attempts. A good theorist is thorough, methodical, and skeptical of their own ideas.

The terminology of crackpottery vs scholarship is probably overly harsh, as Skinner acknowledges. And in practice, somebody might be a good theorist in one domain but a crackpot elsewhere. As Malkym Lesdrae points out, there are many accomplished academics who are also crackpot theorists: “Most often it’s about things outside their field of specialty”. Thus, this ideal self-skepticism might be domain-specific.

It is also a destructive ideal.

In other words, I disagreed with Skinner on the best way to separate good theory from nonsense, mostly on the framing. Skinner crystallized our disagreement in a tweet: whereas he views self-skepticism as an obligation to the Truth, I view a similar sort of self-reflective behavior as a social obligation. I am committed to this latter view because I want to make sense of things like heuristic models, where truth is secondary to other modelling concerns. Where truth is not the most useful yardstick for checking the usefulness of a model. Where you hear Box’s slogan: “all models are wrong, but some are useful.”

Given the brief summary of Skinner’s view above — and please, Brian, correct me in the comments if I misrepresented your position — I want to use the rest of this post to sketch what I mean by self-reflective behavior as a social obligation.
Read more of this post

Unity of knowing and doing in education and society

Traditionally, knowledge is separated from activity and passed down from teacher to student as disembodied information. For John Dewey, this tradition reinforces the false dichotomy between knowing and doing. A dichotomy that is socially destructive and philosophically erroneous.

I largely agree with the above. The best experiences of learning I’ve had were through self-guided discovery, driven by wanting to solve a problem. This is, for example, one of the best ways to learn to program, or math, or a language, or writing, or nearly anything else. But in what way is this ‘doing’? Usually, ‘doing’ has a corporeal physicality to it. Thinking happens while you sit at your desk: in fact, you might as well be disembodied. Doing happens elsewhere and requires your body.

In this post, I want to briefly discuss the knowing-doing dichotomy. In particular, I’ll stress the importance of the social embodiment, rather than the physical embodiment, of ‘doing’. I’ll close with some vague speculations on the origins of this dichotomy and a dangling thread about how this might connect to the origins of science.

Read more of this post

As a scientist, don’t speak to the public. Listen to the public.

There is a lot of advice written out there for aspiring science writers and bloggers. And as someone who writes science and about science, I read through this at times. The most common trend I see in this advice is to make your writing personal and to tell a story, with all the drama and plot-twists of a good page-turner. This is solid advice for good writing, advice that we shouldn’t restrict to writing about science but should extend to writing the articles that are science. That would make reading and writing as a scientist (two of our biggest activities) much less boring. Yet we don’t do this. More importantly, we put up with reading hundreds of poorly written, boring papers.

So if scientists put up with awful writing, why do we have to write better for the public? I think that the answer to this reveals something very important about the role of science in society: who science serves and who it doesn’t. This affects how we should be thinking about activities like ‘science outreach’.

In this post, I want to put together some thoughts that have been going through my mind on funding, science and society. These are mostly half-baked and I am eager to be corrected. More importantly, I am hoping that this encourages you, dear reader, to share any thoughts that this discussion sparks.

Read more of this post

Poor reasons for preprints & post-publication peer-review

Last week, I revived the blog with some reflections on open science. In particular, I went into the case for preprints and the problem with the academic publishing system. This week, I want to continue this thread by examining three common arguments for preprints: speed, feedback, and public access. I think that these arguments are often motivated in the wrong way. In their standard presentation, they are bad arguments for a good idea. By pointing out these perceived shortcomings, I hope that we can develop more convincing arguments for preprints. Or maybe methods of publication that are even better than the current approach to preprints.

These thoughts are not completely formed, and I am eager to refine them in follow-up posts. As it stands, this is more of a hastily written rant.

Read more of this post

Preprints and a problem with academic publishing

This is the 250th post on the Theory, Evolution, and Games Group blog. And although my posting pace has slowed in recent months, I see this as a milestone along the continuing road of open science. I want to take this post as an opportunity to make some comments on open science.

To get this far, I’ve relied on a lot of help and encouragement. Both directly from all the wonderful guest posts and comments, and indirectly from general recognition. Most recently, this has taken the form of the Canadian blogging and science outreach network Science Borealis recognizing us as one of the top 12 science blogs in Canada.

Given this connection, it is natural to also view me as an ally of other movements associated with open science, like (1) preprints and (2) post-publication peer review (PPPR). To some extent, I do support both of these activities. First, I regularly post my papers to the arXiv & bioRxiv. Just in the two preceding months, I’ve put out a paper on the complexity of evolutionary equilibria and joint work on how fibroblasts and alectinib switch the games that cancers play. Another will follow later this month based on our project during the 2016 IMO Workshop. And I’ve been doing this for a while: the first draft of my evolutionary equilibria paper, for example, is older than bioRxiv, which only launched in November 2013, more than 20 years after physicists, mathematicians, and computer scientists started using the arXiv.

Second, some might think of my blog posts as PPPRs. For example, occasionally I try to write detailed comments on preprints and published papers, such as my post on fusion and sex in proto-cells commenting on a preprint by Sam Sinai, Jason Olejarz and their colleagues. Finally, I am impressed and heartened by the now-iconic graphic on the growth of preprints in biology.

But that doesn’t mean I find these ideas to be beyond criticism, and — more importantly — it doesn’t mean that there aren’t poor reasons for supporting preprints and PPPR.

Recently, I’ve seen a number of articles and tweets written on this topic, both for and against (or neutral toward) preprints and PPPR. Even Nature is telling us to embrace preprints. In the coming series of posts, I want to share some of my reflections on the case for preprints, and also argue that there isn’t anything all that revolutionary or transformative in them. If we want progress then we should instead think in terms of working papers. And as for post-publication peer review: instead, we should promote a culture of commentaries, glosses, and literature review/synthesis.

Currently, we do not publish papers to share ideas. We have ideas just to publish papers. And we need to change this aspect of academic culture.

In this post, I will sketch some of the problems with academic publishing. Problems that I think any model of sharing results will have to address.

Read more of this post

Cataloging a year of blogging: complexity in evolution, general models, and philosophy

Last month, with just hours to spare in January, I shared a linkdex of the 14 cancer-related posts from TheEGG in 2016. Now, as February runs out, it’s time to reflect on the 15 non-cancer-specific posts from last year. Although, as we’ll see, some of them are still related to mathematical oncology. With a nice number like 15, I feel that I am obliged to divide them into three categories of five articles each. Which does make for a stretch in narrowing down themes.

The three themes were: (1) complexity, supply driven evolution, and abiogenesis, (2) general models and their features, (3) algorithmic philosophy and the social good.

And yes, two months have passed and all I’ve posted to the blog are two 2016-in-review posts. Even those were rushed and misshapen. But I promise there is more and better coming; hopefully with a regular schedule.

Read more of this post

Social algorithms and the Weapons of Math Destruction

Cathy O’Neil holding her new book, Weapons of Math Destruction, at a Barnes & Noble in New York City.

In reference to intelligent robots taking over the world, Andrew Ng once said: “I don’t work on preventing AI from turning evil for the same reason that I don’t work on combating overpopulation on the planet Mars.” Sure, it will be an important issue to think about when the time comes. But for now, there is no productive way to think seriously about it. Today there are more concrete problems to worry about and more basic questions that need to be answered. More importantly, there are already problems to deal with. Problems that don’t involve superintelligent tin-men, killer robots, or sentient machine overlords. Focusing on distant speculation obscures the fact that algorithms — and not necessarily very intelligent ones — already reign over our lives. And for many this reign is far from benevolent.

I owe much of my knowledge about the (negative) effects of algorithms on society to the writings of Cathy O’Neil. I highly recommend her blog mathbabe.org. A couple of months ago, she shared the proofs of her book Weapons of Math Destruction with me, and given that the book came out last week, I wanted to share some of my impressions. In this post, I want to summarize what makes a social algorithm into a weapon of math destruction, and share the example of predictive policing.

Read more of this post

Computational kindness and the revelation principle

In EWD1300, Edsger W. Dijkstra wrote:

even if you have only 60 readers, it pays to spend an hour if by doing so you can save your average reader a minute.

He wrote this as the justification for the mathematical notations that he introduced and as an ode to the art of definition. But any writer should heed this aphorism.[1] Recently, I finished reading Algorithms to Live By by Brian Christian and Tom Griffiths.[2] In the conclusion of their book, they gave a unifying name to the sentiment that Dijkstra expresses above: computational kindness.

As computer scientists, we recognise that computation is costly. Processing time is a limited resource. Whenever we interact with others, we are sharing in a joint computational process, and we need to be mindful of when we are not carrying our part of the processing burden. Or worse yet, when we are needlessly increasing that burden and imposing it on our interlocutor. If you are computationally kind then you will be respectful of the cognitive problems that you force others to solve.

I think this is a great observation by Christian and Griffiths. In this post, I want to share with you some examples of how certain systems — at the level of the individual, small group, and society — are computationally kind. And how some are cruel. I will draw on examples from their book, and some of my own. They will include language, bus stops, and the revelation principle in algorithmic game theory.
Read more of this post

Systemic change, effective altruism and philanthropy

Keep your coins. I want change.

The topics of effective altruism and social (in)justice have weighed heavy on my mind for several years. I’ve even touched on the latter occasionally on TheEGG, but usually in specific domains closer to my expertise, such as in my post on the ethics of big data. Recently, I started reading more thoroughly about effective altruism. I had known about the movement[1] for some time, but had conflicting feelings towards it. My mind is still in disarray on the topic, but I thought I would share an analytic linkdex of some texts that have caught my attention. This is motivated by a hope to get some guidance from you, dear reader. Below are three videos, two articles, two book reviews and one paper alongside my summaries and comments. The methods range from philosophy to comedy and from critical theory to social psychology. I reach no conclusions.

Read more of this post

Diversity and persistence of group tags under replicator dynamics

Every day I walk to the Stabile Research Building to drink espresso and sit in my cozy — although oversaturated with screens — office. Oh, and to chat about research with great people like Arturo Araujo, David Basanta, Jill Gallaher, Jacob Scott, Robert Vander Velde and other Moffitters. This walk to the office takes about 30 minutes each way, so I spend it listening to podcasts. For the past few weeks, upon recommendation from a friend, I’ve started listening to the archive of the Very Bad Wizards. This is a casual — although oversaturated with rude jokes — conversation between David Pizarro and Tamler Sommers on various aspects of the psychology and philosophy of morality. They aim at an atmosphere of two researchers chatting at the bar, although their conversation is over Skype and drinks. It is similar to the atmosphere that I want to promote here at TheEGG. Except they are funny.

While walking this Wednesday, I listened to episode 39 of Very Bad Wizards. Here the duo opens with Wilson & Haidt’s TIME quiz meant to quantify to what extent you are liberal or conservative.[1] They are 63% liberal.[2]

To do the quiz, you are asked to rate 12 statements (well, 11 and one question about browsers) on a six-point Likert scale from strongly disagree to strongly agree. Here are the three that caught my attention:

  1. If I heard that a new restaurant in my neighborhood blended the cuisines of two very different cultures, that would make me want to try it.
  2. My government should treat lives of its citizens as being much more valuable than lives in other countries.[3]
  3. I wish the world did not have nations or borders and we were all part of one big group.[4]

Do you strongly agree? Strongly disagree? What was your overall place on the liberal-conservative scale?
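
For concreteness, here is a minimal sketch of how a quiz like this could turn Likert responses into a single percentage. The scoring rule, the choice of which items are reverse-coded, and the example responses below are all my own assumptions for illustration; TIME's actual weighting is not reproduced here.

```python
# Hypothetical scoring for a six-point Likert quiz: each response is coded
# 0 = strongly disagree ... 5 = strongly agree. Reverse-coded items count
# agreement toward the conservative end. This is not TIME's actual weighting,
# just an illustration of how such a percentage could be computed.

def liberal_percentage(responses, reverse_coded):
    scored = [5 - r if flip else r for r, flip in zip(responses, reverse_coded)]
    return 100 * sum(scored) / (5 * len(scored))

# Example with the three statements quoted above, taking items 1 and 3 as
# liberal-coded and item 2 as conservative-coded (again, an assumption).
responses = [5, 1, 4]                  # strongly agree, disagree, agree
reverse_coded = [False, True, False]
print(f"{liberal_percentage(responses, reverse_coded):.0f}% liberal")  # 87% liberal
```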

Regardless of your answers, the statements probably remind you of an important aspect of your daily experience. The world is divided into a diversity of groups, and they coexist in a tension between their arbitrary, often artificial, nature and the important meaning that they hold to both their own members and others. Often this division is accompanied by ethnocentrism — a favoring of the in-group at the expense of, or sometimes with direct hostility toward, the out-group — that seems difficult to circumvent through simply expanding our moral in-group. These statements also confront you with the image of what a world without group lines might look like; would it be more cooperative or would it succumb to the egalitarian dilemma?[5]

As you know, dear reader, here at TheEGG we’ve grappled with some of these questions. Mostly by playing with the Hammond & Axelrod model of ethnocentrism (2006; also see: Hartshorn, Kaznatcheev & Shultz, 2012). Recently, Jansson’s (2015) extension of my early work on the robustness of ethnocentrism (Kaznatcheev, 2010) has motivated me to continue this thread. A couple of weeks ago I sketched how to reduce the dimensionality of the replicator equations governing tag-based games. Today, I will use this representation to look at how properties of the game affect the persistence and diversity of tags.
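
For readers who have not seen them before, here is a minimal sketch of what replicator dynamics look like in code. The two-strategy payoff matrix and the Euler integration below are illustrative assumptions of mine, not the tag-based model discussed above, which lives in a much larger strategy space.

```python
import numpy as np

# Illustrative two-strategy payoff matrix (a Prisoner's Dilemma flavour);
# the tag-based games in the post have a larger strategy space.
A = np.array([[3.0, 0.0],
              [5.0, 1.0]])

def replicator_step(x, A, dt=0.01):
    """One Euler step of dx_i/dt = x_i * ((A x)_i - x . A x)."""
    fitness = A @ x          # fitness of each strategy against the population
    average = x @ fitness    # population-average fitness
    return x + dt * x * (fitness - average)

x = np.array([0.5, 0.5])     # start from an even mix of the two strategies
for _ in range(10_000):
    x = replicator_step(x, A)
print(x)                     # which strategy persists at (approximate) equilibrium
```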
Read more of this post