A Theorist’s Apology

Gadfly of Science

Almost four months have snuck by in silence, a drastic change from the weekly updates earlier in the year. However, dear reader, I have not abandoned TheEGG; I have just fallen off the metaphorical horse and it has taken some time to get back on my feet. While I was in the mud, I thought about what it is that I do and how to label it. I decided the best label is “theorist”, not a critical theorist, nor theoretical cognitive scientist, nor theoretical biologist, not even a theoretical computer scientist. Just a theorist. No domain necessary.

The problem with a non-standard label is that it requires justification, hence this post. I want to use the next two thousand words to return to writing and help unify my vision for TheEGG. In the process, I will comment on the relevance of philosophy to science, and the theorist’s integration of scientific domains with mathematics and the philosophy of science. The post will be a bit more personal and ramble more than usual, and I am sorry for that. I need this moment to recall how to ride the blogging horse.

In his Apology, Plato’s Socrates compares himself to a gadfly biting at the lazy horse that is Athens and agitating the polis toward questioning the soundness of their understanding of Virtue, Knowledge, and the Good. Although most of the Socratic dialogues end at an impasse where neither Socrates nor his interlocutor arrives at an answer to the question under discussion, the discussion is not in vain. At the very least the activity exposes to the interlocutor the limits of their understanding. More importantly, the act of discussion — even when inconclusive — helps us learn more about the question. We don’t always need an affirmative answer to discover something; it is possible to learn from failure. The stated and ideal goal of these dialectics is, of course, Truth, but the willingness to discuss and the frequent lack of decisive conclusion (as well as Plato’s literary qualities) remind us of the importance of perspective. Finally, through dialogue Socrates not only interprets the world in various ways, he acts to change it by engaging directly with the polis. It is hard for me to understand how Marx could have arrived at his 11th thesis, given that one of the founders of western philosophy serves as such a tempting counter-example.

A theorist is the gadfly of science. A theorist, in the sense I am trying to justify here, aims to engage with large swaths of science, uncover their assumptions and limits of understanding, and encourage the sharing of theoretical tools between different domains of inquiry. In some ways, this is similar to the goals of a philosopher of science and probably why posts on the philosophy of science appear so often on TheEGG. There is a distinction, though, in that a theorist is less concerned with describing (and definitely not prescribing) how science progresses or what broad methods it uses than with engaging directly with scientists on technical matters in venues and a language familiar to them. Maybe a theorist is part applied philosopher of science and part applied mathematician with a short attention span; the distinctions can get blurry sometimes.

Of course, no one is without philosophical baggage, and a theorist is no different. In this case, a certain neo-Kantian view of science is useful for justifying why the theorist expects certain models to be applicable in different domains. Everything, including “facts” and “observations”, is theory-laden (in Popper’s sense of the word), and these theories are shaped not only by the domain that they seek to describe, but also by our cognitive milieu. Thus, I expect to see similar models appear in different domains not (just) because the domains share something in common, but because there are limits and regularities in how we think about and describe things. We are all relying on models, but for many these are vague and intuitive mental models. Although our natural aversion to cognitive dissonance encourages us to prefer holding onto logically compatible models, sometimes paradox is hard to spot. Extracting, expliciting, and explicating these mental models allows us to bring the power of logical analysis and mathematics to bear on our search for paradox and to clarify our underlying assumptions.

This elucidation of assumptions and search for contradiction might seem derivative of and secondary to the building of new models and conducting of experiments. At times, it can even seem pedantic and destructive. Although I sometimes embrace the label of professional troll (a modern variant on the gadfly, and maybe a more fitting title for Socrates, given the descriptions of his appearance), I think these common prejudices are misplaced. The most striking advances in science, and Thought more generally, have come from identifying hidden and implicit assumptions: assumptions that, when questioned, transformed not only our theories but the very nature of the relevant domain’s critical discourse. For me, the most salient examples are non-Euclidean geometry, evolution, Cantor’s theory of the infinite, special relativity, incompleteness and computability, and the foundations of quantum mechanics.

The expliciting and formalization of mental models helps us to communicate better. Science has never been, and is most definitely not today, an isolationist, individual endeavour; it is a social process and demands that scientists be able to share their ideas with their peers. In such communicative settings, Zipf pointed out that ambiguity arises from the conflict in energy investment between the speaker and listener (note that there are some important assumptions buried here, too). The speaker wishes for total ambiguity — one sound to mean everything — leaving the difficulty of disambiguation to the listener. The listener, on the other hand, wishes for a totally unambiguous language, so that the difficulty of choosing words is left to the speaker. In this regard, theorists are a listener’s friend: a theorist helps disambiguate the meaning of our private mental models. It also means that a theorist has to communicate in ways that bridge domains while maintaining mutual intelligibility with as many of the thinkers involved as possible.
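Zipf’s tension can be caricatured with a toy cost model (this is my own sketch, not Zipf’s actual formulation; the logarithmic costs and the 1024-meaning lexicon are arbitrary illustrative choices): with a vocabulary of n words covering m meanings, the speaker’s lexical effort grows with the vocabulary size, while the listener’s disambiguation effort grows with how many meanings each word is spread over.

```python
import math

N_MEANINGS = 1024  # arbitrary number of meanings to be expressed

def speaker_cost(n_words):
    # Effort of producing and recalling distinct words grows with vocabulary size.
    return math.log2(n_words)

def listener_cost(n_words, n_meanings=N_MEANINGS):
    # Each word is ambiguous among n_meanings / n_words candidate meanings;
    # disambiguation effort grows with that ambiguity.
    return math.log2(n_meanings / n_words)

def total_cost(n_words, n_meanings=N_MEANINGS):
    return speaker_cost(n_words) + listener_cost(n_words, n_meanings)

# One sound for everything: trivial for the speaker, maximal work for the listener.
print(speaker_cost(1), listener_cost(1))        # 0.0 10.0
# One word per meaning: trivial disambiguation, maximal lexical effort.
print(speaker_cost(1024), listener_cost(1024))  # 10.0 0.0
```

In this caricature the total effort is constant, so the two extremes only shift the burden between the parties; any richer model (and Zipf’s own argument) would need assumptions about how the conflict is actually resolved.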

One of the best parts of mathematics, or — to avoid the “is math a language?” discussion — the mathematicians’ approach to discourse, is the clarity with which it recreates the ideas of your mind in the minds of others. Its effectiveness is so great that mathematicians can ponder together such highly complicated ideas that they start to feel the ideas as external to themselves. This is why a popular image among mathematicians is not that we are creating arbitrary mathematical models in our minds but discovering and exploring Platonic vistas. This high fidelity makes the formalization of mental models that theorists undertake an essential part of serving as connectors. By expressing ideas in a precise and highly communicable language, we can recruit more minds to work together instead of in isolated silos. However, mathematics can also be dangerous in its alienation. As Fawcett & Higginson (2012) pointed out, each additional equation per page in a biological paper cuts your citations by roughly a third. This means that a theorist must resist the urge to intimidate or self-gratify through math and instead stick to the simplest tools (and clearest accompanying prose) that will get the job done. Sometimes this might mean simply simplifying the work of others.

Again, this might seem secondary to the “real” work of experimentalists and domain experts creating mental models, and of applied mathematicians solving formal models. But this view would miss half of mathematics: mathematics is not just about proving theorems; it is also about coming up with good definitions. Good definitions can often allow you to give intuitive and simple solutions to the hardest problems. The art of definitions is seldom taught, but I would argue that it is often the more creative of the two sides of mathematics. Coming up with good definitions and formalizations requires having a foot in both the highly informal world of domain experts and the more formal world of applied mathematics. I think that a theorist is a definitions expert.

As the intuitive and philosophical roots of new fields ossify and a preferred terminology and formalization sets in, it becomes easier to forget the importance of translating between the intuitive and the formal. Theoretical computer science faces this problem. Computer science students — or maybe programmers more specifically — are among the few that are explicitly taught — instead of hoping that they pick it up on their own — how to translate the intuitive into the formal. After all, what is programming other than translating our intuitive goals, desires, or thoughts on procedure into the most formal and portable of languages? Yet the theoretical branch of the field has established its preferred interface of tools and terminology so well that many great theorists start to forget the central role of modeling the intuitive. In the early days of the cstheory StackExchange, we even had a discussion on whether how-to-model-this questions should be allowed. Thankfully, Scott Aaronson reminded us of the importance of modeling:

A huge part of our job description as theoretical computer scientists is finding formal ways to model informally-specified notions! (What is it that Turing did when he defined Turing machines in the first place?) For that reason, I don’t think it’s possible to define “modeling questions” as outside the scope of TCS, without more-or-less eviscerating the subject.

Aaronson is a theoretical computer scientist who holds on to the philosophical roots of the field. He recognizes the importance of formalizing the intuitive not just for technological ends but also to gain insight into many of the timeless problems of philosophy. Although he works primarily at the intersection of computational complexity and quantum computing, he has also published insightful thoughts on economics, chemistry, classic problems of philosophy, the ‘complexity’ or ‘interestingness’ of physical systems, and free will. He supports philosophy more than one now expects from personalities close to physics, but I don’t think he goes far enough to be my ideal theorist. The deal breaker for me is his endorsement of an exclusive focus on ‘ground truth’ and dismissal of hermeneutics and dialectic.

In some fields, most notably physics, the formalizations of the domain’s mental models all share a single ontology and are expressed in a common language so well that it becomes easy to mistake the map for the territory. This lets us forget how our prescientific prejudices can blind us to our assumptions. In particular, it becomes easy to forget the problem of underdetermination and assume that your field’s ontology is unique and applicable far outside its original domain. The working ontology gets mistaken for the ‘ground truth’ and philosophically interesting positions are dismissed out of hand. This often stops a discourse before it starts, or devolves into two sides talking past each other. I like to call this condition interdisciplinitis. When it is combined with direct condescension toward the alternative views, it can become scientism.

I can see the roots of this in my own education; my technical background spanned numerous courses in computer science, physics, and math. During that time I was not taught or encouraged to dig into another thinker’s ontology, grant them as much of their system as possible simply for the sake of argument, and then critique from within their framework — on common ground. Even during the brief allusions to formalism in mathematics, we never went as far as working in obviously arbitrary or mutually inconsistent axiomatic systems. The closest I came to that was when I played around with building my own axiomatization of set theory, but even then — in my arrogant naivety — I thought I was searching for a Platonic ground truth, and that somehow I would do better in that quest than Frege, Russell-Whitehead, von Neumann–Bernays–Gödel, or Zermelo–Fraenkel. To some extent an education in the hard sciences felt like being initiated into a sacred cult with privileged access to the Ground Truth, and it felt great; I felt that I could proclaim, in agreement with Sheldon Cooper: “I’m a physicist. I have a working knowledge of the entire universe and everything it contains”.

Only in my philosophy electives was I forced to assess the strength of people’s arguments from within their own frameworks. Even when I strongly disagreed with the basic premises of their ontology (as I often did in the case of philosophy of mind, for example), it was often enlightening to engage with their argument on common ground. I could learn some subtle aspects of the question under discussion by examining it from these different perspectives. Some of the lessons could then be reapplied through analogy from within frameworks that I was more comfortable with or believed to be closer to the ‘ground truth’. It was my only real experience with dialectic and the aporia in which it often ends.

I think that a theorist has to strive for a generous dialectic when entering a new domain. A theorist has to go through the hermeneutics of learning the standard approach and frameworks, relevant history, preferred terminology, and arbitrary quirks of domain-specific discourse. A theorist should not try to overhaul these foundations completely, but just critique from common ground or introduce one new element within the framework. Only if there are multiple perspectives in play should you use your intuition for ‘ground truth’ to pick the one that seems most likely. Even in these cases, though, it might be better to pick a framework that is not the most comfortable for you, since it will teach you more and have more space for your flavor of ideas. Of course, uncovering the foundations and structure of certain frameworks of thought, and exploring their histories and interconnections with other frameworks, is also rewarding outside the context of individual problems. It is beautiful to see the unity and contrast of different perspectives on the world.

Although this post has given you no reason for my extended absence, I hope it has let you see a bit more order in the mess of topics that TheEGG meanders through. I also promise to resume regular blogging, and assure you that I will try my best not to indulge as much in the navel-gazing that saturated this article. However, concerns over length have cut me a bit short, so I will still have to save expliciting the obvious allusion to G.H. Hardy for next time.

Stay tuned!


About Artem Kaznatcheev
From the Department of Computer Science at Oxford University and Department of Translational Hematology & Oncology Research at Cleveland Clinic, I marvel at the world through algorithmic lenses. My mind is drawn to evolutionary dynamics, theoretical computer science, mathematical oncology, computational learning theory, and philosophy of science. Previously I was at the Department of Integrated Mathematical Oncology at Moffitt Cancer Center, and the School of Computer Science and Department of Psychology at McGill University. In a past life, I worried about quantum queries at the Institute for Quantum Computing and Department of Combinatorics & Optimization at University of Waterloo and as a visitor to the Centre for Quantum Technologies at National University of Singapore. Meander with me on Google+ and Twitter.

26 Responses to A Theorist’s Apology

  1. Abel Molina says:

    Very nice to see this post! Not sure about how other readers feel, but at least here, no need for apology for the style or not writing for a while, I think : ) Really like the point that the benefit of dialectics is often not reaching a satisfactory answer to a given question, but getting to look at the concepts that underlie it.

    Something also interesting to consider is that even if courses in physics/math don’t follow the multiple-framework approach, with a possible activity being to go inside a framework and criticize/contribute from within, some of the research does feel that way, especially in the parts closer to theoretical physics… maybe some previous training in the approach would help with having more clarity about what exactly one is doing when undertaking that kind of research…

    • Thanks!

      I agree with you in that I shortchanged the amount of multiple-frameworks that are used in physics/math/CS. I did this mostly for ease of narrative and length.

      I agree with you that there is a lot more multiple-frameworks when we get closer to the research edge for theory (as one would expect, since things are a bit more ‘up in the air’ there). Even without going to the research edge, I feel like you can see plenty of frameworks if you study the foundations of quantum mechanics, but I don’t feel like that is usually taught in undergrad. You also get smaller glimpses of multiple perspectives (although largely within the same framework) even in undergrad, with the distinction between the Schrödinger, Heisenberg, and Feynman views of how to talk about change in quantum mechanics. But, at least in me, they did not instill the importance of uncovering frameworks and perspectives nearly as much as philosophy courses did.

      In the case of computer science, I definitely cheated a bit in my description (took me a while to decide not to add an extra paragraph) because I did learn a lot about frameworks and perspective in my Logic and Computation, and Theory of Programming Languages courses. In fact, I think Logic and Computation did much more to shake my belief in the non-arbitrariness of logic than any philosophy course ever did. Of course, in theory of programming languages, you are basically beaten over the head with countless ways to look at the Church-Turing thesis, and no particular perspective is ‘better’ than the rest; they just emphasize different aspects and bring meaning to different concepts. Since then, I’ve even seen some cute blog posts about using programming language research to better understand philosophy of science.

      However, even in these CS classes, I was not taught or encouraged to ‘uncover’ the perspective of the previous authors. Instead it was very explicitly spelled out for me, usually by the original authors themselves, given how formal the field is. So for that skill, I still needed philosophy courses.

  2. Sergio Graziosi says:

    I see no reason to apologise for the style and content of this post. Looks like something you needed to put in writing (to settle the internal dialogue, I’d guess) and it is spectacularly useful in providing a key to understanding what knits together the wide range of posts in TheEGG, as well as the underlying aim of each post’s contents.
    I don’t know what slowed your blogging, but I know what slowed mine: lack of clarity and/or of novel thoughts that I felt were worth sharing. Reading between the lines it looks possible that you had many ideas spinning and they just needed some time to settle down, but I may be projecting my own inner world without a real justification.

    On the topic: what you write makes 100% sense to me, and required less-than-usual sheer effort to feel that I got what each sentence was supposed to mean; who knows if this feeling is justified [why it’s an apology escapes me, though].
    There is one part that seems to fit a little bit awkwardly, though, and expressed on your terms is about “confusing the map with the territory” or, in my terms, about the essentialism fallacy (and importantly, on where the essentialism fallacy does not apply). A theorist (in your words) engages in the art of definitions, while doing so finds both new ways to make such definitions useful as well as new understandings of their limits. I see an obvious regularity that applies to symbolic reasoning (the, hem, ‘essence’ of every modelling effort), that is implicit in your post and I was wondering if making it explicit might help. Let’s see if I can manage.

    In one way, you are conceptualising the process of making ever better maps, and this requires mapping the maps so as to recycle usefulness across the board. For me, the interesting part is about error propagation. There is no doubt that when we talk about a real map, which refers to real features of actual places, the map is always somewhat wrong: in this case, the essentialist fallacy applies in the strongest sense. Symbolic reasoning applied to the real world has to leave something out and relies on the (guaranteed to be wrong?*) assumption that a symbol can completely capture the relevant essence of real entities. However, at the other extreme, symbolic reasoning applied to completely abstract concepts can (theoretically) be error free, and is the reason why mathematicians can talk and work together without introducing communication mistakes. In the middle stands the realm of normal human interaction, where we employ, communicate and manipulate symbols that refer to neither purely real nor entirely conceptual entities (my favourite example is road signs, which include a symbolic essence as well as physical properties, and you can’t have a road sign without both). So, as we move from the entirely physical to the conceptual, the error gets smaller and smaller; I find this observation surprising and counter-intuitive.

    In the context of your post, this last observation neatly explains why the work of (your kind of) theorist is useful: conceptualising reduces the error and makes it possible to find patterns that apply to separate domains. The caveat is equally obvious: once you re-descend inside a particular domain, you are re-inserting some error due to the essentialism fallacy. Hence, the work of a theorist can be seen as trying to find new definitions that reduce the deductive (from general to particular) error generation**, and/or identifying and minimising the domain-specific sources of error that are introduced when applying a theory.
    At least, that’s how I translate your thoughts in my own “intuitive model”!

    *Being a doubt-junkie, I am starting to put a question mark on this assumption. Gives me a slight vertigo, but so far I seem to cope.
    **In my work environment, we are obsessed with the problem of induction, and I find it refreshing to start complaining about “deductive error generation” ;-)

    • I don’t know what slowed your blogging, but I know what slowed mine: lack of clarity and/or of novel thoughts that I felt were worth sharing. Reading between the lines it looks possible that you had many ideas spinning and they just needed some time to settle down

      Maybe, I actually worked on several posts in draft form in the background. I think there are 49 drafts in various states of completion in the dashboard right now, and several more ideas sketched on paper. The issue for me was really just finishing the posts. It was not some sort of perfectionism that seems to interfere with a lot of people (I don’t seem to have a problem with that, I am happy to put out half-baked ideas) but just an irrational road-block. It felt very good to finish this post and get past that road-block.

      Symbolic reasoning applied to the real world has to leave something out and relies on the (guaranteed to be wrong?*) assumption that a symbol can completely capture the relevant essence of real entities. However, at the other extreme, symbolic reasoning applied to completely abstract concepts, can (theoretically) be error free, and is the reason why mathematicians can talk and work together without introducing communication mistakes. … So, as the error propagates from the entirely physical to the conceptual it gets smaller and smaller; I find this observation surprising and counter-intuitive.

      I think you are in good company in this sentiment; I feel that Whitehead was expressing something very similar in his process philosophy. He also seems to have thought that symbolic reasoning on abstractions was necessary, and that this symbolic reasoning allowed us to keep a grip on error. At the same time, all these systems by their very nature have to ignore something, and the ignorance can grow worse through ossification and entrenchment of the framework. He wrote: “Too many apples from the tree of systematized knowledge lead to the fall of progress.”

      For him, this seems to mean that there are two completely different and necessary modes of thought that co-exist when it comes to science. One is working within the system, and one is demolishing and building new systems. In this way, I think he was anticipating Kuhn and Feyerabend, but I might be overstating the connection (since I have just started reading into Whitehead).

      However, with Feyerabend in mind, I do hope that I wasn’t “conceptualising the process of making ever better maps” too explicitly. I don’t think that the process of overhauling systems can be meaningfully systematized, and even if parts of it can be, I am not sure that we would want to. I guess I did assert that I think the dialectic is a better approach to these overhauls, but I think that is sufficiently vague to not be restrictive.

      Finally, I want to finish with an essentialism tangent. I am not sure if you use reddit, but if you do then you might be interested. Dawkins’ article on the retirement of essentialism (the one you open your post with) was recently shared on the philosophy subreddit and garnered some interesting discussion. I haven’t been involved with it, but given your interest in the topic, I will try to do some closer readings so that I can have a useful opinion.

  3. vznvzn says:

    hey dude welcome back. was indeed wondering what happened to your copious blogs. “falling off the horse”? sounds dramatic. no explanation? hey its ok to have some personal stuff even in a scientific blog yaknow. no apology for a dormant blog is necessary. blogging is a public service based on volunteering. however, was a bit amazed at your earlier output, it is quite prolific & ambitious. keep in mind there is no optimum rate of posts. even a few a year can make a decent blog. keep up the good work man

    • Thank you.

      I don’t really view my blogging as a public service, I think of it more as just serving myself. Blogging helps with the housekeeping of my thoughts, and without it my mind has become a mess. It feels great to get back into writing.

      • vznvzn says:

        you admire but also sometimes diverge/ strongly disagree with scott aaronson? wow/ lol join the club. he’s quite brilliant but can also be quite polarizing. liked very much his allusion to the Turing machine wrt modelling. his philosophical writing is quite wideranging, articulate and erudite & serious TCS researchers willing to dabble in philosophy are very rare. anyway for one at least think it would be interesting if you expanded your soundbite about the disagreements into something more detailed.

  4. Hey, welcome back. You’re still making good sense to me. As someone who started in CS, moved to robotics engineering, and now is working a lot on writing, both creative and philosophical, I really appreciate your point of view. I was particularly struck by your communication-theory point about human communication: that the speaker has an incentive to be ambiguous, offloading work onto the listener by not actually pinning down what they are saying.

    Haven’t read your link about the uses of ambiguity yet, but I’ve been learning that in the creative world (I think also in politics and religion) the right level of ambiguity is something that you seek. In writing this is often justified by compression (not saying what is unnecessary), but I think there is also power in letting different people fill in the gaps in different ways. That way you speak to a larger audience. This is clearest in poetry or song lyrics, where people frequently construct powerful personal meanings that are different from the artist’s original inspiration.

    • Thank you!

      The link on use of ambiguity just goes to a question I asked on the linguistics StackExchange and it spells out the same sort of point as you did: sometimes people use ambiguity because they don’t want to communicate unambiguously. As you said, it can be used to reach a wider audience or for a number of other effects.

      However, I was wondering if there are ways to side-step this a little bit by being cheeky with what counts as the intent of communication. For example, the vagueness in art could be just that the intended state to recreate in others was a state of feeling. Or for a politician, the intended state was a state of voting-for-me. This might be being too fast and loose with meaning, though.

      I also wonder if the right level of ambiguity can be used to facilitate more engagement. That could be a very useful thing to know for the completely practical process of writing papers and blog posts.

      • Managing how much to say about what, and therefore the level of ambiguity, is very much at the heart of any kind of writing in natural language. This is one reason why page limits continue to have value in academic publication, even though the additional cost to the publisher is now negligible. I’d agree that both creative writing and political speech are not about transferring facts that could potentially be clearly articulated. It seems that in politics it is mostly to say “I’m your kind of guy, I know your concerns and what to do about them. I’m competent too.” I know very little about the craft of politics, but I’ve been writing a lot of fiction and also listening to podcasts and reading books about writing. A major category of advice to writers is in the direction of trying to get the writer to say less, and (in my inference) leave more ambiguity. We are told to “resist the urge to explain” why characters do things, or why events happen. The main task for a fiction writer is to get the reader to emotionally engage with the character as a real person, pulling them into the story and making them want to know what happens on the next page. It’s my theory that one of the reasons why characters that are not too fully fleshed out are actually desirable is that it makes them easier for the reader to fit onto themselves or someone else that they know.

        Another theory about the role of ambiguity is that readers like the feeling of solving a mystery, and don’t really want to be handed everything all neatly tied up. Common editing advice related to this is to throw away the first chapter and jump in after the action has already started. This story trick applies just as clearly to nonfiction writing. Science writers often build interest through a bit of suspense, introducing the mystery, and telling us a bit about the characters, then expanding out to a (necessarily simplified) form of the resolution. Knowing where to stop must be one of the big challenges of science writing.

  5. Glad you’re back. Things were getting boring.

    There was something in there that I was going to quibble with… but somehow I’ve forgotten and can’t seem to find the offending paragraph. Something about theorists? Hm. I’ll be back.

  6. Reblogged this on CancerEvo and commented:
    More than an apology, a defense of theory and “theorists”.
    After a long hiatus, Artem comes back with a post where he refocuses his scientific interests. A theorist, a gadfly that questions our assumptions and makes us aware of our blind spots. Now, gadflies might not be pleasant but, in science, they are a necessary good.

  7. Artem, seems like a lot of people were missing your posts. Glad to see that you are back. Gadflies are something we don’t always appreciate, yet they are fundamental to science.

  8. SamL says:

    I really enjoyed this post, and endorse the spirit of it wholeheartedly. I believe I’d like to think of myself as a ‘theorist’ too. Out of interest, have you encountered any Richard Rorty? I feel that his book Philosophy and the Mirror of Nature may be right up your alley.

    I found this point fascinating:

    “In such communicative settings, Zipf pointed out that ambiguity arises from the conflict in energy investment between the speaker and listener (note that there are some important assumptions buried here, too). The speaker wishes for a totally ambiguity — one sound to mean everything — and leave the difficulty of disambiguation for the listener. The listener, on the other hand, wishes for a totally unambiguous language, so that the speaker is left with the difficulty of words.”

    It strikes me also that we might offer a more sinister analysis — as well as vagueness requiring less energy it may also have rhetorical pay-offs: the more precisely I state my position the more alternatives I rule out as attributable to me, thereby increasing the possibility of my being wrong in the eyes of the audience. Could we tease out a game theory of rhetoric here in scenarios in which the winning ticket goes to whoever persuades the audience they’ve spoken the most truthfully? (Put like that it actually sounds like a fairly meagre and familiar point, but I’ve never thought about it in explicitly game-theoretical terms before.)


    • I haven’t read Rorty, but I have encountered his bridging of the analytic-continental divide on philosophy.SE. From a quick wikipedia plunge, I definitely like this quote:

      Truth cannot be out there — cannot exist independently of the human mind — because sentences cannot so exist, or be out there. The world is out there, but descriptions of the world are not. Only descriptions of the world can be true or false. The world on its own unaided by the describing activities of humans cannot.

      I want to endorse this stance and also the pragmatism that shaped Rorty fully, but the little mathematician inside me rebels. However, maybe Rorty’s ideas on culture are of a sufficiently rich flavour that I can hide my preferred philosophies of math inside them. I will definitely give him a read, unfortunately the growth of my reading list significantly outpaces my reading speed!

      Your point on using ambiguity for positive aims is well received, and it is exactly the sentiment I expressed in this linguistics.SE question. Rob MacLachlan and I started discussing it in a but more detail further up in the comments, maybe you want to jump in on that discussion so that Rob gets pinged to join too?

      I definitely think that making some (evolutionary ?) game theoretic models would be fun. Keven Poulin — a student I supervised at some point in the past — has started thinking about this in the context of the tension between cooperation and deception, expanding on some idle musing I once had on perception and deception (I also have an arms-race model for this buried somewhere in my notes, and I have been promising Keven that I would write it up, but just like my reading list outpaces my reading, my promise-to-write list outpaces my writing). It would be great to figure out how to work vagueness into such models. However, I really don’t want to take on the herculean task of figuring out what is already known on this topic.

  9. Pingback: Transcendental idealism and Post’s variant of the Church-Turing thesis | Theory, Evolution, and Games Group

  10. Pingback: Philosophy of Science and and analytic index for Feyerabend | Theory, Evolution, and Games Group

  11. Pingback: Critical thinking and philosophy | Theory, Evolution, and Games Group

  12. Pingback: Cataloging a year of blogging: the philosophical turn | Theory, Evolution, and Games Group

  13. Pingback: Seeing edge effects in tumour histology | Theory, Evolution, and Games Group

  14. Pingback: Pairing tools and problems: a lesson from the methods of mathematics and the Entscheidungsproblem | Theory, Evolution, and Games Group

  15. Pingback: An update | Theory, Evolution, and Games Group

  16. Pingback: Passive vs. active reading and personalization | Theory, Evolution, and Games Group

  17. Pingback: Cataloging a year of blogging | Theory, Evolution, and Games Group

  18. Pingback: Argument is the midwife of ideas (and other metaphors) | Theory, Evolution, and Games Group

Leave a Reply

Fill in your details below or click an icon to log in:

WordPress.com Logo

You are commenting using your WordPress.com account. Log Out / Change )

Twitter picture

You are commenting using your Twitter account. Log Out / Change )

Facebook photo

You are commenting using your Facebook account. Log Out / Change )

Google+ photo

You are commenting using your Google+ account. Log Out / Change )

Connecting to %s