Randomness, necessity, and non-determinism

If we want to talk philosophy then it is necessary to mention Aristotle. Or is it just a likely beginning? For Aristotle, there were three types of events: certain, probable, and unknowable. Unfortunately for science, Aristotle considered the results of games of chance to be unknowable, and probability theory started, eighteen centuries later, with the analysis of games of chance. This doomed much of science to an awkward fetishisation of probability, an undue love of certainty, and an unreasonable quantification of the unknowable. This state of affairs stems from our fear of admitting when we are ignorant, which is a strange condition given that many scientists would agree with Feynman’s assessment that one of the main features of science is acknowledging our ignorance.

Unfortunately, we throw away our ability to admit ignorance when we assign probabilities to everything, especially in settings where there is no reason to postulate an underlying stochastic generating process, or any way to do reliable repeated measurements. “Foul!” you cry, “Bayesians talk about beliefs, and we can hold beliefs about single events. You are just taking the dated frequentist stance.” Well, to avoid the nonsense of the frequentist vs. Bayesian debate, let me take literally the old adage “put your money where your mouth is” and use the fundamental theorem of asset pricing to define probability. I’ll show an example of a market we can’t price, and ask how theoretical computer science can resolve our problem with non-determinism.

Following Tim Johnson’s presentation, consider a good that is currently worth X_0: should you buy it or short it, given that sometime in the future it will be worth either X_D or X_U (where X_D \leq X_U)? Well, if X_U < X_0 then you should definitely short it, and if X_D > X_0 then you should definitely buy it. Easy arbitrage and free money! Otherwise, there is some q \in [0,1] such that X_0 = (1 - q)X_D + qX_U. We can think of q as the market’s belief that the X_U state of the world will be reached, and if you believe that X_U is more likely to be reached than q then you value the good more than the market does, and so you should buy it. In other words, prices express in very concrete terms our beliefs about the future.
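
To make the arithmetic concrete, here is a minimal Python sketch of the above: it inverts X_0 = (1 - q)X_D + qX_U to recover the implied belief q, and flags the two arbitrage cases. The function names and the example numbers are my own illustration, not from Johnson’s presentation.

```python
def implied_belief(x0, x_down, x_up):
    """Belief q implied by the current price x0, solving
    x0 = (1 - q) * x_down + q * x_up for q."""
    if x_up <= x_down:
        raise ValueError("need x_down < x_up for a non-trivial bet")
    if x_up < x0 or x_down > x0:
        raise ValueError("arbitrage: price lies outside [x_down, x_up]")
    return (x0 - x_down) / (x_up - x_down)

def decide(x0, x_down, x_up, my_belief):
    """Buy if my belief in the up-state exceeds the market's, short if it is lower."""
    q = implied_belief(x0, x_down, x_up)
    return "buy" if my_belief > q else "short" if my_belief < q else "hold"

# Example: a good priced at $5 that will be worth either $1 or $10.
print(implied_belief(5, 1, 10))         # market belief q = 4/9 ≈ 0.444
print(decide(5, 1, 10, my_belief=0.6))  # you are more optimistic than the market: "buy"
```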

Let’s consider the following game. There are an infinite number of goods and a computer-savvy, all-knowing, but sinister Oracle that gives everyone the following offer: “You can come to me and name a fair price to buy any of the goods. If you buy the n-th good and the n-th Turing machine (in some fixed ordering) halts on the empty string then I will give you $10, otherwise I will give you $1.” How should you price these goods? All the machines are deterministic, so each good has to be priced at either $1 or $10. However, there is no algorithm that can solve the halting problem, so there is no way for you to know whether an arbitrary good will evaluate to $1 or $10. There is also no way to justify a belief in a given machine halting or not. You can’t even say something like “well, machines tend to halt with probability p”, since that probability is also non-computable. This is where computer science offers a simple answer: we can’t price some of these goods, and we will leave their prices as undetermined.
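
To see why no pricing procedure can exist, here is a hedged Python sketch of the reduction; the function `price` is a hypothetical placeholder of my own, not something that can actually be implemented. If we could correctly price every good, we would immediately have a decider for the halting problem.

```python
def price(n):
    """Hypothetical: return the fair price ($1 or $10) of the n-th good.
    No total, correct implementation of this function can exist."""
    raise NotImplementedError("would require solving the halting problem")

def halts_on_empty_string(n):
    """If price() existed, it would decide halting for the n-th Turing machine
    in the fixed ordering: the good pays $10 exactly when the machine halts."""
    return price(n) == 10
```

Reading the reduction backwards is the whole point: since no implementation of `halts_on_empty_string` can exist, neither can `price`, so at least some of the prices must be left undetermined.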

This mindset is not restricted to weird pricing games, but is a standard part of computer science. When a computer scientist generates random numbers in their code, they are comfortable calculating probabilities, since that randomness is under their control and it makes sense to average over a known underlying distribution. However, when a user provides an input to the algorithm, the computer scientist has no way to know what that input will be and cannot even assume any sort of distribution over inputs. It is simply not the computer scientist’s place to do so; maybe a psychologist could have a good model of computer users and say “this is the probability distribution users have over inputs”, but without this extra knowledge the computer scientist has to assume the worst. This is why worst-case analysis is used: it guarantees that, no matter how little we know about the user, we can still say something about the behavior of our algorithm. Why don’t scientists use similar ideas to judge the behavior of their theories?
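
As an illustration of the split between randomness we control and inputs we don’t, here is a minimal Python sketch (my own example, not from the original post): quicksort with a fixed pivot rule can be forced into quadratically many comparisons by an adversarial input, while a randomized pivot gives an expected O(n log n) guarantee for every input, without assuming any distribution over what the user supplies.

```python
import random

def quicksort(xs, randomized=True):
    """Sort a list. The only randomness is the pivot choice, which is under
    the algorithm designer's control; the input is not."""
    if len(xs) <= 1:
        return xs
    pivot = random.choice(xs) if randomized else xs[0]
    less    = [x for x in xs if x < pivot]
    equal   = [x for x in xs if x == pivot]
    greater = [x for x in xs if x > pivot]
    return quicksort(less, randomized) + equal + quicksort(greater, randomized)

# An adversarial (already sorted) input: the fixed-pivot version peels off one
# element per level and does quadratically many comparisons; the randomized
# version does about n log n in expectation on every input.
adversarial = list(range(500))
assert quicksort(adversarial, randomized=False) == adversarial
assert quicksort(adversarial, randomized=True) == adversarial
```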

In biology, for instance, there is no good empirical knowledge of what real fitness landscapes look like, so why do we allow theorists to feed arbitrary statistical assumptions into their theories? If we want to propose a mechanism for evolution, why not look at what guarantees our theory gives in the worst case? Alternatively, if we don’t know the micro-dynamical mechanism but have reasonable expectations about macroscopic resource use (this can be a qualitative belief like “evolution reaches equilibrium in a reasonable amount of time on a static fitness landscape”), then why not use computational complexity to put constraints on what types of fitness landscapes are reasonable? What are the properties of our model that necessitate the properties we can observe? Before the development of the rich theory of computational complexity, there was no real way for science to deal with truly undetermined events; now we can. Let’s use it, and reserve probability for when we are justified in having beliefs.
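
To make the worst-case mindset concrete in this setting, here is a small Python sketch; it is my own toy illustration with hypothetical helper functions, not a model from the literature. It runs a fittest-mutant adaptive walk over binary genotypes and measures how many mutational steps are needed to reach a local fitness peak. Asking for a guarantee on this step count over every landscape in a model class and every starting genotype, rather than averaging over a convenient statistical ensemble, is the kind of worst-case question the paragraph above is gesturing at.

```python
import itertools, random

L = 10  # number of loci; genotypes are bit-tuples of length L

def random_landscape(seed=0):
    """Toy stand-in for a fitness landscape: an arbitrary fitness value for each
    of the 2^L genotypes. Any other model class could be plugged in here."""
    rng = random.Random(seed)
    return {g: rng.random() for g in itertools.product((0, 1), repeat=L)}

def adaptive_walk(fitness, start):
    """Fittest-mutant walk: repeatedly move to the best one-mutant neighbor
    until no neighbor is fitter. Returns the number of steps taken."""
    g, steps = start, 0
    while True:
        neighbors = [g[:i] + (1 - g[i],) + g[i+1:] for i in range(L)]
        best = max(neighbors, key=fitness.get)
        if fitness[best] <= fitness[g]:
            return steps  # local peak reached
        g, steps = best, steps + 1

fitness = random_landscape()
worst = max(adaptive_walk(fitness, g) for g in fitness)  # worst case over starting genotypes
print(f"longest adaptive walk on this landscape: {worst} steps")
```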
