# Three types of mathematical models

Whenever asked to label myself, I am overcome by existential dread: what am I? A mathematician? A computer scientist? A modeler? A crazy man with a blog? Each has its own connotations and describes aspects of my approach to thought, but none (except maybe the last) represents my mindset accurately. I have experienced mathematical modeling in three very different settings during my research and education: theoretical computer science, physics, and modeling in the social and biological sciences. In the process, I’ve concluded that there are at least three fundamentally different kinds of modeling, and three different levels of presenting them. This is probably not exhaustive, but I have searched for some time and could not find extensions; maybe you can suggest some. Since this post is motivated by names, let’s name the three types of models as abstractions, heuristics, and insilications, and the three presentations as analytic, algorithmic, and computational.

### Insilications

In physics, we are used to mathematical models that correspond closely to reality. All of the unknown or system-dependent parameters are related to things we can measure, and the model is then used to compute the dynamics and predict the future values of these parameters. Sometimes, as in the case of statistical or quantum mechanics, these predictions are probabilistic (for different reasons in the two theories) but are expected to agree with reality after many independent measurements. I call these models insilications: models that translate measurements of ‘empirical reality’ into predictions about future results of similar measurements, ‘replicating’ the relevant parts of reality. We usually learn these models presented in analytic terms, as a series of mathematical equations that we can solve explicitly. A standard example would be solving for the motion of a cannonball using Newtonian mechanics; a more complicated example would be solving for the spectrum of the hydrogen atom using quantum mechanics. Both are exercises I had to do at various stages of my education.

The reason I chose the in silico computer-inspired name for this sort of model is that — in non-classroom settings — they are usually solved numerically or by simulation. I call this the computational presentation. Classic examples would be a civil engineer using a piece of software to calculate the stresses on a bridge design, or NASA simulating the trajectory of a spacecraft destined for Mars. Examples with more research-level physics would be simulations and subsequent statistical inferences from the detectors at the Large Hadron Collider (see here for a biological example). These models are simulated on computers, but we understand the equations going into the simulation, how they interact with each other, and how the computer calculates the relevant parameters so well that we might as well think of them as in silico recreations of the ‘real’ world. If there are discrepancies from empirical measurements, then the theory these models are built within usually has a way of quantifying and accounting for such errors. If a systematic disagreement is found between a well-implemented insilication and empirical measurements, then this can be used to bring into question or even falsify the theory underlying the model. I think these are the models that most people think theorists concern themselves with.
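As a minimal sketch of the analytic and computational presentations side by side (the numbers are made up, and air resistance is ignored), the cannonball example can be solved both ways:

```python
import math

def analytic_range(v0, angle_deg, g=9.81):
    """Closed-form range of a projectile launched from flat ground (no drag)."""
    theta = math.radians(angle_deg)
    return v0 ** 2 * math.sin(2 * theta) / g

def simulated_range(v0, angle_deg, g=9.81, dt=1e-4):
    """The same model solved numerically by forward-Euler time stepping."""
    theta = math.radians(angle_deg)
    x, y = 0.0, 0.0
    vx, vy = v0 * math.cos(theta), v0 * math.sin(theta)
    while True:
        x += vx * dt
        y += vy * dt
        vy -= g * dt
        if y <= 0.0:  # the cannonball has landed
            return x

print(analytic_range(30.0, 45.0))   # ≈ 91.74 m
print(simulated_range(30.0, 45.0))  # agrees to within ~0.01 m
```

Both presentations encode the same insilication; the small discrepancy between them is a property of the numerical method, not of the theory.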

### Heuristics

In reality, though, most theorists outside of engineering and the hard physical sciences (and even some within them, like cosmologists) work on heuristic models. When George Box wrote that “all models are wrong, but some are useful”, I think this is the type of model he was talking about. It is standard to lie, cheat, and steal when you build this sort of model. The assumptions need not be empirically testable (or even remotely true, at times), and statistics and calculations can be used with varying degrees of accuracy or rigor. Often, these models aren’t useful in spite of being false, but because they are false. A theorist builds up a collection of such models (or fables) that they can use as theoretical case studies and as a way to express their ideas. It also allows for a way to turn verbal theories into more formal ones that can be tested for basic consistency. However, this drastic contrast in the basic goals of modeling is why people like Noah Smith, who are more comfortable with insilications, become uncomfortable with heuristics.

As with insilications, heuristics can be presented in several ways. In neo-classical economics, the standard presentation is analytic: our assumptions are represented as particular equations or axioms that might or might not be true, or even empirically testable. Conclusions are then drawn from these assumptions by solving systems of equations or analyzing their qualitative dynamics. Sometimes this analysis can be done in general terms, with some steps abstracted and replaceable by ‘any algorithm’ or ‘any model’, as I do in a biological setting with my recent work on evolutionary equilibria — this is the algorithmic perspective. By contrast, in fields like complex adaptive systems (or complexity economics, if we want the contrast with neo-classical), a computational perspective is used. Here, heuristic models are simulated on computers and conclusions are drawn from the results of these experiments. To me, this is very dangerous, because unlike the analytic or algorithmic treatment of heuristics, it does not even provide a definitive link between assumptions and conclusions. Computational heuristics suffer from the curse of computing, and although they can be used for rhetorical purposes, it is not clear that the theorist studying them gains any understanding. In the words of theoretical physicist Eugene Wigner:

It is nice to know that the computer understands the problem. But I would like to understand it too.
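A computational heuristic can be startlingly small. The toy Schelling-style model below (a sketch with made-up parameters, not any particular published model) puts two types of agents on a ring and lets unhappy agents swap places; segregation emerges, but the link between assumption and conclusion lives only in the experiment:

```python
import random

def same_frac(ring, i):
    """Fraction of agent i's two ring neighbours that share its type."""
    n = len(ring)
    return (int(ring[(i - 1) % n] == ring[i]) + int(ring[(i + 1) % n] == ring[i])) / 2

def run_schelling(n=200, threshold=0.5, sweeps=200, seed=0):
    """Unhappy agents (too few same-type neighbours) swap with other
    unhappy agents. Returns mean same-type neighbour fraction before/after."""
    rng = random.Random(seed)
    ring = [0] * (n // 2) + [1] * (n // 2)
    rng.shuffle(ring)
    before = sum(same_frac(ring, i) for i in range(n)) / n
    for _ in range(sweeps * n):
        i, j = rng.randrange(n), rng.randrange(n)
        if same_frac(ring, i) < threshold and same_frac(ring, j) < threshold:
            ring[i], ring[j] = ring[j], ring[i]
    after = sum(same_frac(ring, i) for i in range(n)) / n
    return before, after

before, after = run_schelling()
print(before, after)  # segregation typically rises above the well-mixed baseline
```

The output is suggestive, but nothing in the run itself tells us which assumption did the work; that is exactly the worry about the computational presentation of heuristics.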

### Abstractions

Unlike computational insilications, computational heuristics do not provide reliable predictions, and thus their outputs are useless unless they generate understanding. By contrast, the best way to gain understanding, in my opinion, is through the third type of model — abstractions. These are the models most common in mathematics and theoretical computer science. They have some overlap with analytic heuristics, except that they are done more rigorously, and with the goal not of collecting a bouquet of useful analogies or case studies but of making general statements. An abstraction is a model set up so that, given any valid instantiation of its premises, the conclusions necessarily follow. These models are not built to illustrate a point, but as tools to analyze any theory. The classical example is Turing machines and other models of computation: if your theory has certain qualitative features then it is necessarily Turing complete, and from this we can conclude — for example — that some general questions about your theory are not answerable. Abstractions are most useful as a way of classifying other models, or of drawing concrete connections between heuristics or insilications in different fields. As far as I know, there is no real way to study abstractions through the computational perspective; results are shown analytically (say, in mathematical physics) or algorithmically (say, in theoretical computer science).
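To make the Turing-machine example concrete, here is a minimal sketch (one possible formalization, not the canonical one) of what makes it an abstraction: the simulator accepts any transition table satisfying the premises, so whatever we prove about it holds for every valid instantiation:

```python
def run_tm(transitions, tape, state="start", blank="_", max_steps=10_000):
    """Simulate a one-tape Turing machine. `transitions` maps
    (state, symbol) -> (new_state, written_symbol, move) with move in
    {-1, +1}; the machine halts on any missing transition."""
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        symbol = cells.get(head, blank)
        if (state, symbol) not in transitions:
            break  # halt
        state, cells[head], move = transitions[(state, symbol)]
        head += move
    return state, "".join(cells[i] for i in sorted(cells)).strip(blank)

# One instantiation: a machine over {0, 1} that flips every bit, then halts.
flipper = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
}
print(run_tm(flipper, "0110"))  # ('start', '1001')
```

The `flipper` table is incidental; the point of the abstraction is that the undecidability results quantify over all tables you could plug in.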

I only concentrated on mathematical models in this post, and ignored two important types of models: mental and physical. The first is usually expressed as intuitions or verbal theories. The second is popular in biology as model organisms such as the rat or E. coli, where we can study theories about animals on which we cannot ethically (or practically) perform experiments. Did I miss any other important types of modeling? What is your preferred type and presentation of models?

From the Department of Computer Science at Oxford University and Department of Translational Hematology & Oncology Research at Cleveland Clinic, I marvel at the world through algorithmic lenses. My mind is drawn to evolutionary dynamics, theoretical computer science, mathematical oncology, computational learning theory, and philosophy of science. Previously I was at the Department of Integrated Mathematical Oncology at Moffitt Cancer Center, and the School of Computer Science and Department of Psychology at McGill University. In a past life, I worried about quantum queries at the Institute for Quantum Computing and Department of Combinatorics & Optimization at University of Waterloo and as a visitor to the Centre for Quantum Technologies at National University of Singapore. Meander with me on Google+ and Twitter.

### 59 Responses to Three types of mathematical models

1. Jon Awbrey says:

In talking of models we often find denizens of different disciplines talking at cross purposes to one another. Logicians use the word to describe what we might call logical models, saying that a model is whatever satisfies a theory, anything that a theory holds true of, and this is the sense they use in the logical subject of model theory. Almost everyone else uses the word to describe what we might call analogical models, analogues being things that share enough properties with other things that learning about Thing 2 (the analogue system) can teach us about Thing 1 (the object system). It is actually not difficult to integrate the dual senses of the word model into a coherent picture of the whole situation, namely, the triadic relationship among objects, analogues, and theories.

• I didn’t want to discuss models in the model theoretic setting, because I don’t think it accurately reflects (as you pointed out) how most scientists use the word ‘model’. However, there are definitely some strong connections that can be made; my favorite is the view (popular in biology) of a theory as a collection of models. However, in a lot of settings (especially outside of fields with cleanly presented theories), a given model can actually be viewed as an instantiation of many different theories, and I am not sure what that would mean from the logical model point of view.

The triadic relationship tickles me in all the right ways, since it reminds me of Curry-Howard-Lambek correspondence. How can we best map it? Objects are programs, theories are proofs, and analogies are morphisms? However, my inner Kantian screams out in opposition: we don’t have access to the ‘objects’ of reality, so all we ever do is make analogies between different theories!

• Jon Awbrey says:

We evidently have 3 or 4 years worth of things to talk about here. But I’m presently in the middle of some very tedious work and will have to keep my nose to the grindstone for fear of never working up the fortitude to face it again. So for now I’ll just link to some very rough notes and hope for a chance to give them a proper set-up later. (Full disclosure — I come at almost everything from a Peircean perspective.)

2. ishanuc says:

I would add abductions as a fourth distinct category of mathematical models. This is the basic archetype of the modeling carried out in machine learning, where the idea is to fix a hypothesis class (a modeling framework if you will; for example, neural nets, probabilistic automata, ARMAX models), and then derive an instantiation using algorithmic reduction of physical observations. This is often confused with inductive reasoning. However, I see this clearly as a case of abduction (look up Wikipedia for a basic disambiguation of the two concepts). Anyway, there is a component of “abstraction” here, but it does not quite fit the category. A neural net (I hate neural nets for various reasons, by the way) is not quite like a Turing machine; there is the abstract definition of the formalism, but an instantiation always comes from data. Also, I claim that such models are not heuristic; indeed, George Box was probably not talking about data-driven modeling when he made his comment. Learned models attempt to capture what is maximally learnable (or they ought to do something to that effect), and are not products of brilliant insights from physicists or engineers. For a rigorous study of the notion of learnability, I suggest Valiant’s new book (Probably Approximately Correct), which is either out or will be on the stands soon. For examples of abductions with probabilistic automata, see the work of Crutchfield, Shalizi, and maybe even this.

It is interesting to note that representations for this category also have some trouble fitting one of the three labels. It seems that any representation must be simultaneously computational and algorithmic: the hypothesis class is defined algorithmically, whereas instantiation requires a computational procedure.
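In miniature, the abduction I have in mind looks like this (an illustrative sketch with made-up data): fix the hypothesis class {y = a·x + b} in advance, and let an algorithmic reduction of the observations pick the instance:

```python
def fit_linear(xs, ys):
    """Ordinary least squares over the hypothesis class {y = a*x + b}:
    the class is fixed in advance; the data selects the instantiation."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]   # noisy observations of roughly y = 2x + 1
a, b = fit_linear(xs, ys)
print(a, b)  # close to (2, 1)
```

The same scheme scales up to richer classes (neural nets, probabilistic automata); only the class and the reduction change.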

• You make a good point with abductions; as you know (but I will include for the benefit of other readers), I’ve discussed this sort of modeling before in a skeptical light. My bad for forgetting about it in this setting; but maybe I should create a separate category for statistical modeling as opposed to mathematical modeling? I’ve read Valiant’s new book, but I am not sure if it is a good introduction to this topic. I’ve been meaning to review it for a while, but haven’t had a chance yet.

Although abduction is a distinct fourth type of model, I disagree that the machine-learning style of abduction does not fall within the three categories of presentation. I think that data-driven ML falls cleanly into the computational perspective, since it is often the best example of making predictions without really ‘understanding’ the model. The algorithmic aspect of specifying the hypothesis class is actually a red herring, I think. This algorithmic part is only really studied explicitly in theoretical treatments of machine learning (say, in Valiant’s PAC model), but those are models to study machine learning algorithms or paradigms, and they usually fall very cleanly into the algorithmic presentation of abstractions. These analyses of the algorithms are usually distinct from the practical applications of those algorithms.

Thanks for the comment! We are up to four types and three presentations. I will check out the links you provided more closely.

• ishanuc says:

Ah yes, we did have a brief discussion to that effect. I do think that you tend to undersell machine learning a bit. This particular objection, that ML only “predicts” and does not “understand”, is held and professed by many, even within the community. I reject it on the grounds that the notion of “understanding” central to this objection is ill-defined. Also, the idea that ML is concerned with just prediction is not correct; a good deal of work is being done on the distillation of generative descriptions of data, and it is hard to see why that would not qualify as (or is in any way different from) the classical approach to scientific inquiry.

I hate using quotes; feels like using the fallacious “appeal to authority” argument. Nevertheless, on this occasion, I would recruit Von Neumann to bolster my case:

“The sciences do not try to explain, they hardly even try to interpret, they mainly make models. By a model is meant a mathematical construct which, with the addition of certain verbal interpretations, describes observed phenomena. The justification of such a mathematical construct is solely and precisely that it is expected to work – that is, correctly to describe phenomena from a reasonably wide area.”

— John von Neumann

‘Method in the Physical Sciences’, in The Unity of Knowledge, edited by L. Leary (1955), 158. Reprinted in John Von Neumann, F. Bródy (ed.) and Tibor Vámos (ed.), The Neumann Compendium (2000), 628.

Note that the above remark was made in 1955, when machine learning did not really exist (or was in its infancy). Therefore the comment does indeed refer to the classical scientific approach, professed to somehow bring this magical quality of “understanding” to the table. Von Neumann disagrees.

> “The algorithmic aspect of specifying the hypothesis class, is actually a red-herring, I think.”

It is not a “red herring”. The particular hypothesis class dictates what kind of data reduction (learning algorithm) one can or must use, and also dictates the degree of learnability. For example, if one uses probabilistic automata as the hypothesis class, and then uses Shalizi’s learning algorithm CSSR, then only a strict subset of ergodic, stationary stochastic processes can be learnt. However, better learning algorithms are possible (for example this), which expand the learnable category while still being restricted to ergodic stationary processes.
However, to prove that the model instance the algorithm spits out is indeed a good one (say, in the PAC sense), one would need to use the formal specification of the general hypothesis class (and not the instance produced from a particular dataset).
With a more complex hypothesis class, one may be able to learn more general phenomena; indeed, restricted classes of non-stationary processes are learnable, but not with probabilistic automata. The learning algorithms would need to change accordingly.

• I agree that in the post in question, I am unreasonably critical of machine learning. There are other posts where I am a bit more generous. My views are not completely settled on the topic: I definitely enjoy machine learning (it is one of the reasons why I study it), but I am also unhappy with all the “big data” hype around on the internet. I feel that it is part of my ‘responsibility’ as a blogger to dispel some of the excess hype, and I apologize if I’ve made my criticisms in a way disrespectful to experts. In the case of your work, I enjoy it very much, and think it provides an awesome balance of interesting theory that can be related to actual implementations of ML algorithms. In that regard, I am thankful that my original post prompted you to respond so that we could have these fun exchanges!

Thank you for the von Neumann quote. He is one of my favorite mathematicians, and I was not familiar with this statement.

I will continue to disagree on the algorithmic aspect of applied machine learning. It is definitely the case that the applied machine learning expert needs to be aware of algorithmic results proved by theorists (and sometimes this is the same person). However, in the act of applying a machine learning algorithm to a problem, the scientist does not produce algorithmic results, but selects, with the aid of known results, which computational technique to use. This is similar to how a person writing a simulation will choose their simulation paradigm: will he solve the PDEs numerically? Do approximate integration with Monte Carlo methods? Or will he build an agent-based model? Maybe a mixed model? Just like in machine learning, each approach can have a deep analytic or algorithmic theory that explains certain powers and restrictions on what a given computational technique can achieve. However, the scientist doing the simulation is still doing a computational study, even though they are aware of the algorithmic (or analytic) considerations.
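For instance (a toy illustration, not drawn from any particular paper), the same quantity can be computed with either a deterministic or a Monte Carlo technique; choosing between them is a computational decision, even though each method comes with its own analytic theory of error:

```python
import math
import random

def quadrature(f, a, b, n=10_000):
    """Deterministic midpoint rule; error shrinks like O(1/n^2)."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

def monte_carlo(f, a, b, n=10_000, seed=0):
    """Stochastic estimate of the same integral; error shrinks like O(1/sqrt(n))."""
    rng = random.Random(seed)
    return (b - a) * sum(f(rng.uniform(a, b)) for _ in range(n)) / n

# Both target the integral of sin(x) over [0, pi], which is exactly 2.
print(quadrature(math.sin, 0.0, math.pi))
print(monte_carlo(math.sin, 0.0, math.pi))
```

Knowing those error theories guides the selection, but running either estimator is still a computational act, not an algorithmic result.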

• ishanuc says:

apparently neither of us sleeps…

anyway, I completely agree with your take on “big-data” (this conversation never happened).
Anytime “the industry” takes note of some neat result, puts a catchy name on it, and sells the crap out of it, we have this kind of unfortunate phenomenon unfolding. The key problem is that the field then attracts the “professional practitioner”, who couldn’t care less about the underlying theoretical elegance, and runs around with a hammer in hand, with the quaint energy of a sugar-high infant, looking for nails.

I would also agree that using different learning frameworks, at least in the context in which it is generally understood, seems analogous to choosing a computational tool. But, this also, I would submit, is a result of what-everybody-knows-and-is-clearly-true-about-ML. It does not need to be so.

In a rather interesting paper in Science in 2009, Schmidt and Lipson showed how natural laws can be distilled from data alone. The ML scheme they used involves no new math; it’s basically symbolic regression. However, the point was that from just a camera watching a swinging pendulum, they were able to come up with fundamental invariance principles in physics, such as the conservation of momentum and energy, and even identify the correct Lagrangian, all without knowing anything a priori about physics or about the system being studied. That, I think, is an “algorithmic” result, in the sense that it’s a result about general physical systems, obtained, as the authors claim, via “automation of scientific inquiry”. Interestingly, in the same issue of Science, King published his results on ADAM, a computer system that does experiments on yeast, analyzes the results (in the real world), figures out what experiments to do next, does them, and finally elucidates new scientific results on yeast biochemical pathways. Now, whether this can be realized in general is open to debate, and would bring us rapidly to epistemological questions about the mind.

What I am trying to get at, is that ML does not HAVE to be about mundane predictions of a time series or whatever. Indeed ML may lead us to understanding what “understanding” means; maybe even take a stab at the hard AI problems, and shed light on the very nature of conscious thought. Some of us are at least thinking on this.

I think I am waaay off-topic here. But then what are blogs for.

• Brian Calvert says:

Your last paragraph really gets at how I approach machine learning and statistics and all that good stuff. ML is just another thinking system, and we can study it to understand how we humans think, and how thinking proceeds in general. That it’s insanely useful is, to me, just a nice extra.

I didn’t know you had a background in physics, why the shift to biology and evolution? Did you get bored of models that actually correspond to reality?

• Didn’t you notice the quotation marks I put around ‘reality’? I am just too much of a post-positivist to take reality seriously. Of course, this is a relatively new development for me, and I was probably a logical positivist when I started college.

When I started undergrad, I wanted to answer the Big Questions, and high school had left me with the misrepresentation of computer science as a purely technological field, so I was going to study physics and political science. I quickly realized that, in practice, neither of those fields really bothers with deep questions anymore, and also learnt that theoretical computer science was full of wonderful math. I started concentrating on computer science and physics, and specifically on fields where I thought deep questions remained, like quantum computing. Here, all of the best work was being done by computer scientists, and so I developed a taste for the algorithmic lens. As I started moving toward serious research in quantum computing, the math remained beautiful but the big questions disappeared as I got bogged down in the trenches.

Since I wanted the big questions and had some experience with evolutionary game theory from my undergrad publications, it was natural to shift the focus of my algorithmic lens to evolution.

Now you know my life story.

When you talk to a lawyer or business person about why they do what they do most give an answer that involves a relatively recent introduction to the field. You have to go to the arts or sciences to hear about a lifelong passion.

Just sayin’, there’s a blog post in your life story. One I would enjoy.

• Once (if?) I get more things accomplished as a scientist, then I will try to write something like that. For now, though, it seems a bit presumptuous to impose that on my readers. As such, I will continue to tease you with little personal tidbits interspersed throughout the posts. You know what they say: “only show a little bit of skin if you want them coming back for more”.

However, I will try to write up something about post-positivism, since I really want to sort that stuff out in my head and the best way I’ve found to do that is by writing it.

• Brian Calvert says:

I sympathize with your development from realist to post-realist. I agree that in the 20th century, the highest form of thought was concerned with the problems of realism. I’m most interested in this – the relation between theory and world, and what it means for a theory to be good or correct. That g+ post of yours that you link to is one of the most explicit and wonderful discussions of it I’ve seen in a while, even though it was a #rant. My thoughts are mostly on the type of stuff that, as you mention, Quine, Kuhn, & co. work on. I’m most interested in trying to puzzle out how we can come up with some good definitions of “truth” that manage to fix the problems of positivism and logical empiricism without falling back into post-modernism and solipsism. I’m a philosopher at heart, though, and so lately I’ve been thinking a lot about theory of value and large scale political-philosophic trends, and how that relates to truth and some other stuff, mostly Nietzsche and thermodynamics. I’ve also been trying to shore up my math skills, which are super lacking compared to my verbal faculties. Anyways, now that I’ve found this blog, I have a bunch more reading to do.

• Thank you! I’ve wanted to write about post-positivism more explicitly on the blog, and your encouragement definitely gives me more confidence. I will try to put something together in the near future.

I know that Popper struggled with “truth” for a long time, but eventually came to some sort of acceptance of it. Are you familiar with that work? It is definitely one of the biggest struggles for me right now. I am happy with truth only in the form of ‘valid’ and applied to mathematics (which for me captures all of our ‘mental reality’), but I don’t know how to talk about anything like truth in empirical reality or our direct (or technology-mediated) experience of it.

Do you run any sort of blog where I can read more about your thoughts? If not, then I look forward to hearing more from you in the comments! Welcome to the blog, most of it is not very philosophical, but this post and this tag might be to your liking.

• Brian Calvert says:

I’m glad to hear that! I’ll be sure to check it out when you get around to it. There are way too few thinkers concerned with correcting the problems of positivism, which is not good since its flaws cause a lot of problems in the practice of science.

And I read your Algorithmic Philosophy post yesterday when I was unable to tear myself away from the blog. I’m vaguely familiar with operationalism and very interested in it, so it was really interesting to read your take on it; my worry (or hope? depends on my mood) is that, in the binary case, f: \Sigma^* \rightarrow \{0,1\} just becomes verificationism: the world is the set of verifiable statements. That’s problematic for a few different reasons (e.g., can’t really distinguish between (theory-free) observation terms and theoretical terms); and while verification is certainly wonderful and interesting for a different set of reasons (e.g., a verification-method and “information” look a whole lot like each other), I don’t think it ends up being a satisfactory solution to the problems of positivism. That being said, it’s still a fascinating concept and often a useful tool, and I was super intrigued by your formalization of operationalism, even though a bit of the math flew over my head.

And I’m familiar with quite a bit of Popper, but I didn’t know that he dealt with truth explicitly; I’d love it if you could refer me to some of this work. I know that he was big on demarcation and falsification though, and I guess I always sort of assumed that he had an implicit theory of truth, of which the rest of his work were parts; I think that most of metaphysics works like this, i.e., the problems of positivistic theories are reflected in the correspondence theory of truth.

My main interest right now is in updating correspondence theory in a way that preserves the spirit of “truth” without making the flawed assumptions that cause problems for correspondence, say, in pessimistic meta-induction (which is, of course, not pessimistic at all, and is simply what we call “learning”, but hey, whatever). I think the main problem is the presumption of prearranged harmony between the logical form of belief and world: a belief is true if the objects and properties in the belief have a one-to-one mapping with the objects and properties in the world. This assumption – that the actual ontology of the world is known prior to investigation – is what causes meta-induction to be so paradoxical. Well, sometimes I’m more inclined to point to Wittgenstein, who says in TLP that the world is intelligible in virtue of sharing general logical form with propositions–this assumption of shared “meta-structure” is problematic too, since objecthood as a concept breaks down when we push real hard. I think the best way to summarize my stance here is that the world doesn’t have a factual or propositional or conceptual structure–that’s just Aristotelian essentialism.

Anyways, to solve the above problems, I’m trying to articulate a theory of “truth” which lets us preserve the spirit of correspondence (i.e., without slipping into solipsism like the post-modernists). I think the best strategy right now is to stop trying to define truth in a way that bridges the epistemological and the ontological, which seems to always be done; even probabilism assumes correspondence, it’s just uncertain about which proposition is the corresponding one. I actually think the best first step is to move to something like coherentism – a belief is “true” if it fits with the rest of what the believing agent believes; this is, I think, in a sense very similar to validity in mathematics. The problem with coherentism is that, historically, coherentists are non-realists or idealists or solipsists of some form, which I’m super not about. But I’m sympathetic to coherentism because, as an empirical fact, humans have believed a huge range of incompatible beliefs, and everyone pretty much believes that their beliefs are true. So, in practice people say something is true when it fits with the rest of their belief-network. But we need an additional criterion to get us back to correspondence and away from solipsism; that probably ends up looking something very much like Popper’s falsification: keep looking for anomalous data so that you can update your model of the world. Hopefully, this all will fit very closely with Bayes in (idealized?) theory and practice, but who knows.

I’m also working on an ontological treatment of truth, but (a) it’s still a little underdeveloped, and (b) I don’t want to talk BOTH of your ears off. In brief, it revolves around the causal entanglement of brain and environment, and the physical encoding of information. I’m probably a bit more interested in this, philosophically, since it ties us back in with the vocabulary of state-transformation and causal networks, which I’m big on.

You’ll definitely have to tell me more about your view on truth in mathematics, and how you think it differs from truth in empirics. In case you couldn’t tell, I’m definitely much more of an intuitive-verbal thinker than a logical-symbolic thinker. So, unfortunately, I’ve sort of been ignoring how to fit math into my picture of truth; I’ve sort of just assumed some form of coherentism or, as you say, validity will end up working out, but I need to do much more work here.

And I wish I had a blog I could direct you to. I keep meaning to start one, but I lack the motivation and energy. Plus, the perfectionist in me cringes imagining putting a “finished” product out where someone could criticize it. Silly, I know, but here we are. If you’ve made it all the way to the end, thanks for reading! I’m sure I’ll continue to poke around and comment on interesting posts.

• I am replying to your reply on my comment at this nesting, so that there’s space for you to comment (since I think there is a comment depth maximum).

And I read your Algorithmic Philosophy post yesterday when I was unable to tear myself away from the blog.

Thank you! Let’s move the discussion about operationalism over to that post. Otherwise the comments become too non-linear and other people that might be interested in algorithmic philosophy won’t be able to read your comments :(.

I actually think the best first step is to move to something like coherentism – a belief is “true” if it fits with the rest of what the believing agent believes; this is, I think, in a sense very similar to validity in mathematics. The problem with coherentism is that, historically, coherentists are non-realists or idealists or solipsists of some form, which I’m super not about. But I’m sympathetic to coherentism because, as an empirical fact, humans have believed a huge range of incompatible beliefs, and everyone pretty much believes that their beliefs are true.

Sounds a lot like Quine’s web of belief. I am personally leaning very much towards non-realism, mostly because of quantum mechanics, since it lets me throw away determinism while keeping locality. Finally, I think the first and last sentences of your quote contradict each other. People often hold incompatible beliefs, which invalidates a trivial reading of coherentism.

However, with the (no big surprise from me) algorithmic lens, we can bring back a workable theory. In particular, even though some beliefs might be contradictory, it could very well be computationally intractable to notice this, which would allow a person to hold those beliefs without stress. If you do show them a certificate verifying that the beliefs are incompatible, then this will create cognitive dissonance, which will continue to cause discomfort until the ideas are reconciled or one of them is abandoned.
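This asymmetry can be made concrete with a toy sketch (my own illustration, not part of the original exchange): encode beliefs as propositional clauses, so that consistency of a belief set is exactly Boolean satisfiability. Detecting a hidden contradiction in general requires search over exponentially many truth assignments, but a small unsatisfiable subset handed to you as a certificate is cheap to re-check, since it involves only a few variables.

```python
from itertools import product

# Toy model: beliefs as clauses in conjunctive normal form.
# Each clause is a set of literals; a positive int k means "variable k
# is true", negative means false. A belief set is consistent iff some
# truth assignment satisfies every clause (i.e., the set is SAT).

def consistent(clauses, n_vars):
    """Brute-force SAT check: O(2^n), intractable as n grows."""
    for bits in product([False, True], repeat=n_vars):
        assign = {i + 1: bits[i] for i in range(n_vars)}
        if all(any(assign[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

# A belief network hiding a contradiction among many innocuous beliefs:
#   x1, (not x1 or x2), (not x2)  -- jointly unsatisfiable.
beliefs = [{1}, {-1, 2}, {-2}, {3}, {-1, 3}]

# Noticing the contradiction in general means searching all assignments...
assert not consistent(beliefs, n_vars=3)

# ...but a certificate -- a small incompatible subset someone points out --
# only requires re-checking those few clauses over their few variables.
certificate = [{1}, {-1, 2}, {-2}]
assert not consistent(certificate, n_vars=2)
```

The design choice here mirrors the NP story: finding an inconsistency is hard in the worst case, while verifying a short proof of inconsistency is easy, which is why the "certificate" in the comment above is what triggers the dissonance.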

You’ll definitely have to tell me more about your view on truth in mathematics, and how you think it differs from truth in empirics.

I’ve actually come to believe in an awkward form of dualism here. Hopefully not as silly as most mathematicians’, but still underdeveloped. In particular, I believe that any argument we can use to show that an “external empirical world exists” can also be used (almost unmodified) to show that an “external mental/mathematical world exists” in the same sense of the word. Once we establish that, one can argue that our knowledge of the mathematical world is much more certain, since our apparatus for perceiving/creating it (i.e. the mind) seems to be better adjusted than our apparatus for perceiving/creating the empirical world. I will need to treat this more carefully, but it was largely inspired by reading Schrödinger’s What is Life? followed by Mind and Matter. The first provided a great standard empiricist treatment of the world, while the second moved to a very eastern “connected single consciousness” perspective, without seeming unreasonably silly. As such, I’ve always wanted to more formally reconcile these two views as two perspectives on reality, where each can capture some of the properties of the other but never completely.

On a marginally related note: I think philosophy of math is the best playground for doing philosophy. If you can convincingly answer the big questions in philosophy of math, I feel like it is easy to port those answers over to answer the important questions in philosophy of mind, metaphysics, and even ethics. The last one was non-obvious to me (and I often belittled ethics as silly, back in my logical positivist days), but reading Tim Johnson’s blog has suggested that headway can be made here, too.

And I wish I had a blog I could direct you to. I keep meaning to start one, but I lack the motivation and energy.

I see that you seem to have interests connecting ideas from machine learning and philosophy. If you want to write about some of these then I am always happy to have guest posters on this blog. As you can see, we already have a lot of authors, but the overwhelming majority of the posts still come from me.

4. Good post! I’ve actually written a textbook in Swedish about modelling from a general scientific perspective (see the link below). It’s pretty basic from a philosophical point of view, but the message we want to get across is that the word model has a lot of different meanings in different disciplines. A translation into English is in the works and will hopefully be out next year.

http://p-gerlee.blogspot.se/2012/08/scientific-models.html

5. Pingback: The Benefits of Being Unrealistic
