Personification and pseudoscience

If you study the philosophy of science — and sometimes even if you just study science — then at some point you might get the urge to figure out what you mean when you say ‘science’. Can you distinguish the scientific from the non-scientific or the pseudoscientific? If you can, then how? Does science have a defining method? If it does, then does following the steps of that method guarantee science, or are some cases just rhetorical performances? If you cannot distinguish science from pseudoscience, then why do some fields seem clearly scientific and others clearly non-scientific? If you believe that these questions have simple answers, then I would wager that you have not thought carefully enough about them.

Karl Popper did think very carefully about these questions, and in the process introduced the problem of demarcation:

The problem of finding a criterion which would enable us to distinguish between the empirical sciences on the one hand, and mathematics and logic as well as ‘metaphysical’ systems on the other

Popper believed that his falsification criterion solved (or was an important step toward solving) this problem. Unfortunately, due to Popper’s discussion of Freud and Marx as examples of the non-scientific, many now misread the demarcation problem as a quest to separate epistemologically justifiable science from epistemologically unjustifiable pseudoscience, with a moral judgement of Good attached to the former and Bad to the latter. Toward this goal, I don’t think falsifiability makes much headway. In this (mis)reading, falsifiability excludes too many reasonable perspectives, like mathematics or even non-mathematical beliefs such as Gandy’s variant of the Church-Turing thesis, while including much in-principle-testable pseudoscience. Hence — on this version of the demarcation problem — I would side with Feyerabend and argue that a clear separation between science and pseudoscience is impossible.

However, this does not mean that I don’t find certain traditions of thought to be pseudoscientific. In fact, I think there is a lot to be learned from thinking about features of pseudoscience. A particular question that struck me as interesting was: What makes people easily subscribe to pseudoscientific theories? Why are some kinds of pseudoscience so much easier or more tempting to believe than science? I think that answering these questions can teach us something not only about culture and the human mind, but also about how to do good science. Here, I will repost (with some expansions) my answer to this question.

There are two great TED talks that together help shed some light on this question:

  1. David Deutsch (2005) “A new way to explain explanation”, and
  2. Richard Dawkins (2009) “Why the universe seems so strange”

At a fundamental level, science is a narrative focused on explanation, only sometimes using that explanation to make predictions. This sentiment is not without controversy, and I think that many (physicists, especially) would echo Javier Rodriguez Laguna’s opposition:

If science is not required to provide predictions beyond the observed data, it’s no different from myth and religion … Prediction is mandatory, a nice story is not. A nice story to help understand is strongly recommended, sure. But science can proceed without. … Without the ability to make predictions, science is just another narrative.

Unfortunately, this prediction-centric view of science is not in accord with my own experience of doing science — although it might be a better fit for engineering or some parts of experimental science. In my experience, almost every theorist has as their goal to explain something. Only after they develop their theory do they realize that, as a side-effect, it made some new predictions. These predictions can be very sexy and hence we prefer to remember them, but they are not the motivators for the theory builder. Understanding is the motivator. As a case study, I would suggest either Maxwell on light or Dirac’s work on anti-particles. Both were motivated by trying to understand or better explain something that was to a large extent already known; only as a side-effect did they generate predictions that happened to be useful.

Thus, to most people, science is useless unless they understand the story it tells. The problem with modern science is that to have a good grasp of its explanatory power, you need a lot of (often difficult) background. As you gain this background, you develop what Feynman would call the most fundamental skill in science: always questioning, being able to say “I don’t know”, and being able to hold contrasting ideas together. Although I would argue that most scientists’ reliance on probability and statistics is still based on a myth of the quantifiability of certainty. If you don’t invest in acquiring a scientific background, most of science seems like witchcraft passed down by ivory-tower academics in funny gowns and hats.

What pseudoscience (or even Feynman’s cargo-cult science) provides is explanations that require less background, purport to be more certain, have something for everyone (the Forer effect), and reassure you that “there is an answer”. If you look at much of pseudoscience (or ancient myth) more closely, you will notice that it tends to personify its subject matter much more than science does (my favorite example is the homunculus fallacy). It uses this personification to give agency, intent, and meaning to its explanations.

The great advantage of these human stories is that our minds are optimized for them. If you subscribe to Dunbar’s Social Brain Hypothesis (see this post for comparisons to some alternatives), then one of the main things evolution produced is a mind built to understand social structure and other people. When an agent does not adhere to its role, violates our theory-of-mind, and behaves erratically, without discernible intent or meaning, it is dangerous to us and our society; it causes us great discomfort. When you hijack the social mind to explain parts of nature further and further afield, you try to build the same sort of characters.

When you have to say “I don’t know” or “I don’t understand” about this character, it creates discomfort. Pseudoscience thrives on this by giving an arbitrary, simple, shallow, and easy-to-change explanation. Since most laypeople never pursue this explanation far enough to notice its contradictions, and since it shapes their observations (through Popper’s theory-laden observation and through confirmation bias), they never build up enough cognitive dissonance to overcome the positive feeling of having an understandable ‘explanation’.

Unfortunately, just like much of pseudoscience, science is a story and therein lies the biggest difficulty of demarcation. But this can also be a source of strength, since it lets us share insights between literature (and its analysis) and science.



22 Responses to Personification and pseudoscience

  1. Jon Awbrey says:

    Instead of a “papal” demarcation between true science and false science, Peirce distilled a stepwise refinement in the “methods of fixing belief”, all of them seeking the same end, to salve the irritation of doubt and avert its dangers, but differing in their chances of success.

    See this post for a comment and further links.

  2. I usually put that question in the context of personal development. The correct way of asking it, for me, would be “When does your curiosity become scientific?”. In the grand scheme of things, it doesn’t matter to me whether my views or ways of acquiring understanding are scientific or not. The term I use to describe the criterion to which I pay attention is “productive thinking”. It can be defined as the kind of thinking that provides the “spark of understanding”. I believe there is no short-cut around pseudo-scientific explanations. For instance, with human behavior as my major interest, I went through several religious, philosophical and other interpretation systems built around several high-order metaphors. If it did the thing for me, in all honesty I clung to it, but each time there came a moment when the ability of those systems to provide insight died out, because I *did not stop thinking*. I am usually skeptical of people who “believe” in “science” without going through a process of developing a way to appreciate the fact that it is the actual front-runner in the game of understanding, a cutting-edge epistemic device.

    In this line of thought, I am not sure what to make of the demarcation problem. It seems to be redundant to the process, because the discovered criterion would make sense to the ones already scientifically inclined, but it would make no major difference in their thinking. It stays an important theoretical question though.

    Also, I agree that the predictive ability of a theory is not the main motivation, while understanding (explanation) is. But that’s due to the fact that I don’t care about my understanding being labelled as scientific. Predictive ability is often more tangible or easier to quantify than the spark of understanding. And saying that scientific curiosity is driven by a desire to explain things, though true, does not yet pose the demarcation problem. The main conflict here is between individual motivation and science as practiced by society – the two need different criteria, which interact in complex ways. The demarcation problem stems from the social plane – “How to qualify what is seen by that individual as providing an explanation as scientific”. I think that criterion is subject to consensus and to change as science develops, and it allows for anything that is considered good practice by experts in the field.

    • I think your third paragraph largely answers the dilemma of your second paragraph. It seems to me that the demarcation problem is primarily of political interest. As a society, we give a certain power and authority to science, and sometimes it is not clear if we should grant that authority to some new field. In the end, it seems like a lot of this power-granting turns into a historical and cultural process, and I think that makes scientists (and philosophers of science) uncomfortable because they like to uphold the myth of objectivity when it comes to science.

      • Yep, I can see that now! I’ve been typing thoughts out one by one, I guess. It seems to me that people like Dawkins, engaged in the public understanding of science, are doing a very healthy thing in that sense. Science has to be represented as an accessible narrative (inescapably oversimplified, overgeneralized or even biased) to gain the acceptance of the general public and support from a democratic-ish state, and creating this sort of narrative, or engaging in a discussion with various pseudo-scientists, might have a kind of therapeutic effect against this objectivity myth, because everyone claims objectivity. In a weird way, that’s what helps put scientific inquiry into perspective.

  3. Jon Awbrey says:

    We do indeed have to watch out for the Fallacy Of Misplaced Agency (FOMA) — I know I got the acronym from Vonnegut but I can’t remember if the fallacy itself is listed by Whitehead or not — closely related to the Fundamental Attribution Bias (FAB) in psychology. I have seen it rearing its head most recently among insufficiently tutored readers of Peirce who have fallen into the slipshod semiotics of saying that “signs do this and that”, as if they were word-magically disconnected from the duly interpretive context of any given sign relation.

    But we have to be careful about making personification too pejorative a term, as it is a primitive form of Hypostatic Abstraction (HA), all too easily also mocked, which Peirce thought basic to mathematical reasoning. The question here is not whether HA is good or bad in itself, but how to use it wisely.

    Hypostatic Abstraction

    • I didn’t mean to paint personification as pejorative. I actually think that it is a pretty good way to understand things. For example, if you watch the language of experts talking about science or math, they will often personify their concepts of study. I have no issue with this, and do it all the time myself. The danger of personification is actually its extreme power. Personification is so good for understanding that sometimes it can give a feeling of understanding even when nothing has been understood.

  4. Sergio Graziosi says:

    I agree with everything you say, but have some nitpicking to offer, mostly to make sure I make good use of your “seeds for thought”.
    First, I’ll distil some of the things you say in the following way:
    “human beings feel like they understand something when (and possibly only when) they are able to express their understanding in a story-like fashion”.
    For example, you could teach me how to calculate the pH of a buffer solution by telling me what formulas to use and how; with this “knowledge”, I’ll be able to perform the task reasonably well, but I will not have any feeling that I understand what influences pH (that’s what Dennett calls “competence without understanding”).
    To feel like I do “understand” such a system, I need to be able to express the mechanisms in words; where appropriate, I would also be able to show how such mechanisms can be described numerically, and how we can calculate what is going to happen.
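
    To make this concrete: here is a minimal sketch of what that sort of recipe-following could look like, assuming the standard Henderson-Hasselbalch approximation for a weak-acid buffer (the comment itself does not name a particular formula, so take this only as an illustration):

        import math

        def buffer_ph(pKa, acid_conc, base_conc):
            """Henderson-Hasselbalch approximation: pH = pKa + log10([base]/[acid])."""
            return pKa + math.log10(base_conc / acid_conc)

        # An acetic-acid buffer (pKa ~ 4.76) with 0.10 M acid and 0.15 M conjugate base
        print(round(buffer_ph(4.76, 0.10, 0.15), 2))  # ~4.94

    Plugging numbers into such a function gives the right answer without any feeling for why the base-to-acid ratio is the quantity that matters; this is exactly the “competence without understanding” at issue.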

    How this relates to what we understand of cognition is probably the subject for a full post, but I think what I’m saying should be fairly uncontroversial: predictive ability feels dry/fragile/ungrounded if we don’t have an associated narrative to match it.

    The other way to approach this subject is what you are doing in this post: observing that scientists are not usually particularly interested in making predictions; they want understanding, and they need a narrative for that.
    The way I conceptualise (storify?) this observation is (unsurprisingly) as a cognitive bias: we judge the soundness of theories mostly on the basis of how coherent the narrative seems to be.
    This is heuristically sound (as with the vast majority of biases), but limited, so in practice, when I judge my own insights, I try to consciously concentrate on the other side, and ask myself: does this new theory of mine allow me to make new predictions? If not, the conclusion should be (and it’s hard to impose it on myself) that the new idea is still nothing but a nice fable, and either needs more thought (to find what predictions it can produce after developing it some more) or its value is limited to how easy it is to communicate and remember (in case it can be used to produce reliable predictions that can also be reached via pre-existing theories).
    So, my overarching, self-imposed, heuristic is:
    – It’s OK to focus on what feels like “understanding”, to start with.
    – Once you reach that feeling, look for how this new understanding helps you to predict what is going to happen.
    – If it doesn’t help, try to question your feeling: it’s likely that you are fooling yourself and have found no new understanding at all.

    In short: I completely agree that the driving force of much of science is our need to feel that we are generating new understandings. But also: I do think that we should strive to overcome our predispositions and keep using prediction (and therefore falsifiability) as an important indicator. Or, if you want: I’ve put so much effort into giving predictions their role as an important indicator, and now I’m unwilling to let go…
    Exceptions of course do apply: for example, a new theory may be very useful because it unifies previously unrelated theories, allows for new generalisations, simplifies the calculations, or is just easier to understand, remember and transmit (is “more intuitive”)…

    I hope some of the above makes sense to you!

    • “human beings feel like they understand something when (and possibly only when) they are able to express their understanding in a story-like fashion”.

      I am not sure if I agree with this paraphrase. I might have agreed to it in part back in 2012 when I wrote the answer that makes up the second half of this post, but even then it would have been with some qualifications. However, let’s start with the converse:

      Expressing their understanding in a story-like fashion makes it easier for human beings to feel like they understood something.

      This statement I would agree with, and I think it is this statement that motivates us to present much of science (and everything else) as a narrative.

      As for what I actually mean by ‘understanding’ — that is a very difficult question. I was initially going to write a disclaimer in my post that this was a loaded word and that a deep examination might reveal that it is not all that different from a kind of ‘prediction’. However, I chose not to, partially out of rush (although there was a big gap between my most recent posts, this was actually because I’ve had no time to sit down and type) and partially because it would take too long to go into. If I even understand ‘understanding’ at all.

      Right now, I would like to link understanding to (conscious ?) interfaces between our modes of interaction with the world (inspired in some ways by Metaphors We Live By). Under such an approach — especially if I exclude the brackets — riding a bike might be a type of understanding of bikes, albeit a ‘physical’ understanding. Your pH example would also be a type of understanding — a technological one. But these do all feel different in some way from ‘academic understanding’ or ‘intellectual understanding’; so maybe to have those we would have to have one of the modes be our reasoning?

      I will have to spend more time thinking about this. As you point out, it would probably require its own post.

      I want to pick on this pair of sentences a bit:

      The way I conceptualise (storify?) this observation is (unsurprisingly) as a cognitive bias: we judge the soundness of theories mostly on the basis of how coherent the narrative seems to be.
      This is heuristically sound (as with the vast majority of biases), but limited, so in practice, when I judge my own insights, I try to consciously concentrate on the other side, and ask myself: does this new theory of mine allow me to make new predictions?

      Note that when you say that something is a “bias” and “heuristically sound”, you are asserting that there is an external judge which you can use to judge the accuracy or level of approximation. This judge is — as you discuss at the end — prediction, but then you kind of undermined the understanding perspective by grounding it in prediction. I think a lot of people are comfortable doing this, but I want to try to think about understanding without this grounding. I want to build an understanding-centric understanding of understanding, instead of a prediction-centric rationalization of understanding. Maybe this is impossible (for me). Maybe it is silly.

      Or, if you want: I’ve put so much effort into giving predictions their role as an important indicator, and now I’m unwilling to let go…

      The reason I want to stray away from this view is two-fold. One is that it seems like the ‘cultural norm’ in my social circles for grounding understanding, and it is always good to stray from that norm. The second is that I want to take very seriously the notion of theory-laden observation. If you take judging the veracity of your predictions (and the observations that verify/falsify them) as dependent on your existing understanding (or parts of it), then suddenly prediction seems like a less obviously sound grounding. This makes me want to go the route of throwing away the ‘disembodied intellect’ that is simply affected by and acts on the world, and instead try to think of it as situated in the world (and without resorting to physicalism).

      • Sergio Graziosi says:

        Ah! And there I was thinking that my intellectual ambition is too big for my own good. Trying to “build an understanding-centric understanding of understanding, instead of a prediction-centric rationalization of understanding” looks ambitious even to my eyes, but I can only bow to your courage: by all means, do try!

        I agree that my first paraphrase is too sweeping and that it does require too many qualifiers to hold some water, so Yes, let’s get rid of it.

        Note that when you say that something is a “bias” and “heuristically sound”, you are asserting that there is an external judge which you can use to judge the accuracy or level of approximation. This judge is — as you discuss at the end — prediction, but then you kind of undermined the understanding perspective by grounding it in prediction.

        You are exactly right. In the sense that I do undermine the “understanding perspective”; call me orthodox but I’m still unwilling to turn my back on prediction. This introduces an element of circularity, as you surely have noted, but I can’t find a way to disentangle it: if I’m describing my views in words, and/or when I’m consciously dissecting them, I can only do so in words, so I’m already grounding everything in a form of narrative, even if I then proclaim that the ultimate judge is empirical verification of prediction.
        I like to take it as a plus, and claim: See? There is a role for both narrative and predictive tools in the quest for “understanding”. And by doing so I expect to be submerged by derisive counter-arguments, but at this time I don’t have anything better to offer.
        So in a sense, I’m with you in going against the accepted norm, because to hard-core empiricists my position will sound as dangerously post-modernist (and yes, I get some pleasure out of it).

        As for the real issue here, on the notion of theory-laden observation, I have nothing more than a hunch to report. I hope (I hesitate to put a stronger word such as “think” in here) that the following approach will eventually work out:
        The idea is to have an evolutionary account of cognition. The starting point is that our cognitive tools have been shaped by natural selection: they are tools that help us breed successfully. We are not born as blank slates, but have plenty of priors that are there specifically because they help the persistence of our genes (no news so far). I hope that we’ll find that one of these priors is about empirical verification (peppered with a hint of dualism, I suspect). If I’m right, I think/hope that it means that in the end we are bound to judge ideas/theories on the basis of their applicability simply because we are designed to work that way. The fact that we can try to do otherwise is a by-product of our enhanced cognitive ability (our ability to abstract, which is very useful in many situations). But in practice, when real life calls, we always fall back to the basic, prescribed general theory: there is one reality, and knowledge that works is knowledge that helps us navigate it.
        In other words, I’m suggesting/hoping that we are born with an inescapable theory pre-implanted, and personally feel no need to fight against it; I’d rather accept it and embrace it. (I feel very Zen today)
        In your terms, it means that some of my pre-existing understanding is inescapable and necessary because it is grounded on some million years of natural selection. It does surely influence how I judge the veracity of all my predictions, but that’s OK, for it is grounded directly on the physical world in a way that precedes my very existence.

        Does any of the above make sense? It’s the first time I’ve attempted writing these thoughts, so it may well look like nonsensical gibberish to anyone but me…

    • I am replying to your latest comment, but at this level of nesting so that we have more room for continued discussion. You write:

      The idea is to have an evolutionary account of cognition. The starting point is that our cognitive tools have been shaped by natural selection: they are tools that help us breed successfully. We are not born as blank slates, but have plenty of priors that are there specifically because they help the persistence of our genes (no news so far). I hope that we’ll find that one of these priors is about empirical verification (peppered with a hint of dualism, I suspect). If I’m right, I think/hope that it means that in the end we are bound to judge ideas/theories on the basis of their applicability simply because we are designed to work that way. The fact that we can try to do otherwise is a by-product of our enhanced cognitive ability (our ability to abstract, which is very useful in many situations). … I’m suggesting/hoping that we are born with an inescapable theory pre-implanted, and personally feel no need to fight against it

      I obviously enjoy explanations in terms of evolutionary biology and evolved universal biases; as you noted in our private communication, this is some of the driving spirit behind the evolution of useful delusions work (although, whenever we look at these models, it is important to be careful about the distinction between the cultural and the universal). However, I think that if you are going to build a whole theory of knowledge, it is not the best starting point, or at least should not be your only starting point, since it implicitly accepts some form of physicalism. As a general all-encompassing metaphysic, I think that physicalism is impoverished and can often be oppressive. Thus, for the sake of plurality of methods, I suggest Kant.

      In his transcendental idealism, Kant arrives at a grounding of all our perceptions in fundamental, inescapable biases similar to what you are seeking (I tried to provide an introduction to TI with a focused application to the Church-Turing thesis, which I believe is one of our inescapable structures of perception; you might find better introductions elsewhere). He does this without the need to turn to evolution (since he wrote nearly 100 years before the Origin of Species) or any other aspect of physicalism. To me this seems like a more fulfilling way to achieve the grounding you seek, with an evolutionary story grafted on as just a physicalist rationalization, but not the main (or only) reason for our belief.

  5. This comment is on both Sergio’s and Artem’s posts here.

    I think that Sergio’s evolutionary explanation and the notion of “prior” is a bit overstretched due to the exclusion from the analysis of the stages of neurodevelopment in the course of life, and therefore of the shaping done by social interaction, starting with parents and then broader culture. And while empirical verification can be seen as a conceptualization of causality, which surely is a basis not only of evolution but of all development, dualism is a very feeble necessity. It is similar to the believer/non-believer type of thing – you can argue that ToM, which is *always* at work, inevitably applies to non-agents and abstract notions, which produces the basis for religious belief, but in an explicitly atheistic culture this activity can be framed differently, without spawning new agents into being. A theory is an interpretation and is therefore never pre-implanted; only certain basic mechanisms are. Precisely because my intuitions have always been monistic, I, on the other hand, disagree with Artem on two points. Firstly, evolutionary explanations don’t imply physicalism, they imply monism, which does not postulate any particular kind of ontology, but only that there’s only one nature of things. Secondly, my understanding is that Kant’s “Kritik der reinen Vernunft” shows the limits of introspection and cannot be further developed, while the interplay between evolution, genetics, epigenetics, neurodevelopment and culture, and possibly chaos, introduces a novel toolbox, which was unavailable to Kant and is *far* superior in explaining the relevant phenomena (Kant’s philosophy introduced several notions that are void and impractical). Try selling transcendental idealism to a modern-day psychiatrist and see what happens. Artem, how do you deal with the fact that we know that awareness arose in the process of evolution of nervous systems? How is it not reasonable to try to explain mind in terms of brain processes, when from both clinical and everyday perspectives we can see how this approach improves our daily practice? Think of the success of CBT. Or think of effective teaching techniques that utilize understanding of neurotransmitter systems?

    • I think that Sergio’s evolutionary explanation and the notion of “prior” is a bit overstretched due to the exclusion from the analysis of the stages of neurodevelopment in the course of life, and therefore of the shaping done by social interaction, starting with parents and then broader culture.

      This is a good point. My favourite example of how culture shapes our manifest reality is the effect of our language on how we perceive/categorize colours and phonemes. See these papers (and my question on cogsci.SE):

      Gilbert, A. L., Regier, T., Kay, P., & Ivry, R. B. (2006). Whorf hypothesis is supported in the right visual field but not the left. Proceedings of the National Academy of Sciences of the United States of America, 103(2), 489-494.

      Regier, T., & Kay, P. (2009). Language, thought, and color: Whorf was half right. Trends in cognitive sciences, 13(10), 439-446.

      evolutionary explanations don’t imply physicalism, they imply monism, which does not postulate any particular kind of ontology, but only that there’s only one nature of things

      I am not completely sure what you are saying here, but I have yet to see a monism that has satisfied both my philosophy of science and my philosophy of math. However, postulating a single ineffable world and worrying about multiple and only partially-coherent perspectives on it has satisfied me to some extent (see this comment for more).

      my understanding is that Kant’s “Kritik der reinen Vernunft” shows the limits of introspection and cannot be further developed, while the interplay between evolution, genetics, epigenetics, neurodevelopment and culture, and possibly chaos, introduces a novel toolbox, which was unavailable to Kant and is *far* superior in explaining the relevant phenomena

      In my reading, he did show limits of introspection (which I think still largely hold as limits of introspection; of course, we have better empirical inspection now), but not that they cannot be further developed. I think that Kant would be little shaken by current evolution, (epi)genetics, and neuro; these fields are great technological developments, but they have not developed to the point where they have direct bearing on the things that he worried about. Chaos is a completely vacuous field, and shouldn’t affect anybody’s philosophy (although it and the related concept of fractals seem to be super popular).

      I think the current focus on the differences of manifest reality between cultures (unfortunately, I’ve written very little about this) would give Kant some pause, and he would have to adjust some of his perspectives. The only modern ‘scientific’ development that I am familiar with which I think would really affect Kant is Gödel’s theorem and incomputability. As I’ve described before, this would allow him to go back to the more dualistic stance of his Inaugural Dissertation if he so wishes.

      Artem, how do you deal with the fact that we know that awareness arose in the process of evolution of nervous systems? How is it not reasonable to try to explain mind in terms of brain processes, when from both clinical and everyday perspectives we can see how this approach improves our daily practice? Think of the success of CBT. Or think of effective teaching techniques that utilize understanding of neurotransmitter systems?

      For some reason, it seems that people equate saying “there are other perspectives” with saying “I don’t want to use the physicalist perspective”. I have no problems with current results in neuro and psychology (well, that’s not completely true, I think people tend to overstate their conclusions a bit, but that happens all over science), I just don’t think it is an adequate grounding for a philosophy. However, it is a great rationalization of a philosophy; for example, if you can’t rationalize your philosophy from the perspective of science (i.e. if it blatantly contradicts experience without justification) then your philosophy is wrong in my books. But this is very different from accepting the current scientific understanding as an ontology on which to build a philosophy.

      I can believe that my mind can be viewed as “an excretion” of my brain and that my brain is an artifact of evolution, development, and behavior, but also entertain another perspective that views my mind as primary and the brain and evolution as stories I tell to make sense of my sense-data. Anywhere where these two perspectives come into sharp tension without justification will then serve as a point for philosophical inquiry.

      • I think as an answer to this comment I’ll have to compile a post on transcendental idealism in its relation to modern thought and your views of it, one that will touch upon the mind-body problem as well. As my musings are governed by what I call “productive thinking” (I’ll have to define this), I believe I’m able to show that transcendental idealism is not productive, and that’s due to the Kantian system having feet of clay, several types of clay actually. It is going to require me to take a week off, because I’ll have to re-read Kant for this… I’ll let you know when and if it comes to something conclusive!

        By the way, I find it awesome that your blog makes me formulate in English some of my thoughts that I thought through and have language for in Russian. Now I also have Peirce and Whitehead on my reading list.

  6. Sergio Graziosi says:

    Responding to both Artem and Alexander…
    Artem: I’m sympathetic with your doubts about physicalism; it does certainly feel limited/limiting, but my hope is that it doesn’t need to be so. What I’m aiming for is an overarching epistemology (I have a problem with ontologies, I’ll have to write a post on that soon…) that allows for:
    1) different local epistemologies that are suited for the subject at hand,
    2) the, at least theoretical, possibility to, ahem, reduce each and every legitimate epistemology to physical terms.

    The link being an understanding of cognition: every possible epistemology is constrained by how our brains work, and more specifically, by what our brains (or any computational device) can’t do. For example, brains can’t build and use a deterministic and predictive model of chaotic (non-periodic) systems, because they lack the computational power to use it in practice. Yes, you can outsource the computations to some limited extent, and we do it all the time, but it generally boils down to accepting some compromises on the desired level of precision/uncertainty. In short, all (really? shall we say “most”?) understanding of physical reality can be described as a heuristic model (and I can’t believe how much QM helps in exemplifying this point). Therefore it’s entirely appropriate to use separate epistemologies, even when exploring the same domain: some will be precise in one dimension, some in another. If we could hold and manipulate infinite amounts of information in our brains, we wouldn’t need different (and possibly incompatible) approaches, but we are limited, and therefore we do need to pick and choose the appropriate shortcuts.
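
    As a toy illustration of that computational limit (my own sketch; the logistic map below is just one standard example of a chaotic system, not something discussed above): two trajectories that start one part in a million apart become completely uncorrelated within a few dozen steps, so any finite-precision model loses long-range predictive power.

        # Logistic map: x_{n+1} = r * x_n * (1 - x_n); chaotic for r = 4.
        def logistic_trajectory(x0, r=4.0, steps=50):
            xs = [x0]
            for _ in range(steps):
                xs.append(r * xs[-1] * (1 - xs[-1]))
            return xs

        a = logistic_trajectory(0.400000)
        b = logistic_trajectory(0.400001)  # initial condition off by one part in a million
        # By roughly the 30th iterate the two trajectories bear no resemblance to each other,
        # so rounding in measurement or memory destroys deterministic long-range prediction.
        print(abs(a[30] - b[30]))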

    For all this to work, I need to be able to (theoretically) reduce any epistemology to something physical, and this is why I’ve tried to claim that we can bridge the gap between Shannon’s IT and good old physics.
    I don’t know if this preliminary step was successful because I haven’t got much feedback so far (a bad sign!). If it does work, then I would need to claim that it is possible to reduce the cognitive/neural instantiation of any epistemology to a description of physical phenomena. Not an easy feat, as it requires at least “explaining away” the hard problem of consciousness…

    Alexander: of course my explanation is overstretched. In fact, what I’ve written here is only a provisional sketch of the explanation that I hope to be able to build in the next *years*…
    And yes, developmental and cultural factors will have to enter the picture. However, for me the idea of being born with some priors is solid: for humans, the priors will typically be very abstract, but they still need to exist (otherwise our behaviours would not tend to preserve our genes).
    The hint of dualism that I’ve mentioned boils down to this: we have a tendency to explain what happens in terms of agency. This is a prior (abstract enough for you two?) that most of us are born with (maybe less relevant for autistic people?) and it is a tendency that nudges us towards dualism, nothing more than that. I wasn’t suggesting that we have to include some form of dualism, only that the typical human will find dualism somewhat attractive.

    But all this is premature, and by a long stretch: I haven’t even finished building my scaffolding and my argument doesn’t really exist; I’m barely mentioning where I hope I’m going, and I’m pretty sure I will change my mind along the way (I hope so, at least).

    • Sergio, your answer to Artem’s comment about building multiple epistemologies is very close to what I described in my comment on Artem’s post about physicalism. After having some time to think about it after reading this discussion, I would also add that, in the same way that identity can be considered the only strictly ontological statement, the basis of all epistemology is registering differences, or the discerning ability. In that sense, Kantian categories seem to me to be generated by a single process, which somehow eluded his thinking. I have not yet read your post about information, but I’m very curious. I’ll get back to you when I’ve read it!

      Concerning your notion of prior, maybe you would be interested in reading my guest post about neuroscience and the evolution of ethics on Artem’s blog soon (I’ve already compiled it, but the exact time of posting is not yet clear). I hope it might help you balance your own way of locating mental phenomena on the nature vs. nurture scale through several examples of different-level mechanisms influencing moral decision making.

      • Sergio Graziosi says:

        Alexander: I wouldn’t be too excited about my take on information; it’s a preliminary attempt, something that certainly requires refinement as I learn more.

        identity can be considered the only strictly ontological statement

        Yes! And that is my problem with ontologies: when applied to physical reality, all statements of identity are, at the very least, potentially false. More strongly, one could say that statements of identity can only be “approximately true”. So, when you talk about ontologies, you are actually describing an epistemology, or “how a given approach approximates identity”.

        I am certainly interested in reading your take on the evolution of ethics; I’ve given the subject a lot of thought and published a good deal of posts on it as well. It’s exactly my cup of tea!

    • The link being an understanding of cognition: every possible epistemology is constrained by how our brains work, and more specifically, by what our brains (or any computational device) can’t do. For example, brains can’t build and use a deterministic and predictive model of chaotic (non-periodic) systems, because they lack the computational power to use it in practice.

      I couldn’t possibly disagree more with the above statement while still completely agreeing. If you had simply written the following, then I would have given you a hug of gratitude:

      The link being an understanding of cognition: every possible epistemology is constrained by how our minds work, and more specifically, by what our minds can’t do.

      Instead, you had to go and resort to physicalism, again. This time Alex can’t even pick on me with the whole ‘this is just monism, not full-fledged physicalism’ argument, since the mind-brain identity is like the most important point of physicalism (and one of the only settings where physicalism is debated). However, I wanted to pick on your connection to computation, which strikes so close to home for me.

      What you did there was to embrace the popular physicalist version of the Church-Turing thesis. From that perspective, we are bounded by the computable because the physical world only does the computable. This seems to me much more arrogant than the more Kantian perspective, which says that what is understandable to us (or in my relaxation: communicable between us) is computable, and thus all of the phenomenal world for us (and our technology) appears computable regardless of how the thing-in-itself is.

      Finally, to touch on the passing comment you make about chaos, maybe these two posts will be of interest: Computer science on prediction and the edge of chaos, and Limits of prediction: stochasticity, chaos, and computation. The moral of those posts is that chaos is overrated.

      For all this to work, I need to be able to (theoretically) reduce any epistemology to something physical, and this is why I’ve tried to claim that we can bridge the gap between Shannon’s IT and good old physics.

      I don’t think I’ve read this post of yours yet. However, I would like to warn you (though I am guessing you already know) that the connection between information theory and good old physics is extremely well studied. The reason why both fields share the concept of ‘entropy’ is not accidental, and the reason that quantum information theory (what I used to specialize in, to some extent) is studied by both physicists and (theoretical) computer scientists is premeditated. In fact, there is a pretty big movement now to recast physics as the science of information (instead of the dated conception of matter and energy) — I recommend Seth Lloyd as intro reading.

      But maybe I shouldn’t comment more on this here without first reading your post.

      • Sergio Graziosi says:

        Artem,
        I was making the case of physicalism, so it’s probably a good thing that we are managing to disagree and still exchange ideas. I worry that we’re drifting on a tangent so I’ll try to keep this short.
        I know about your Kantian perspective and it doesn’t strike me as unreasonable, not at all, but it doesn’t feel useful to me. If I were coming from a maths-centric background, I’m sure I’d be more sympathetic. As a biologist, it leaves me with an “OK, it’s certainly possible, but how does it help?” taste. Alexander summarises my position in his latest reply much better than I could.
        I’ve read your posts on chaos; it’s a buzz-word that makes people sound deep when they mention it, and that’s how I understand the “it’s overrated” claim. But I think you mean something stronger, only I don’t really know what.
        On my take on Information: of course I know that better people are (and have been) seriously working on it! I was expecting you to jump all over me when I published it, as I’m not an expert and the likelihood that I’ve written something stupid is high, even for my standards ;-). Will be very glad to hear your thoughts, no matter how harsh (the same applies to Alexander).

        Last comment: it’s amazing how you’ve managed to make us exchange ideas as if we were a bunch of friends chatting in a quiet pub, and I’ve never even met you guys!

  7. Abel Molina says:

    Heh, this reminds me of these posts about bounded rationality: http://www.scottaaronson.com/blog/?p=232, http://www.scottaaronson.com/writings/selfdelusion.html

    It’s noticeably harder to read the comments than the post: harder to weed out the assumptions, and there is less context for what the words are supposed to mean. I remember some kind of philosopher social network where people could mark on their profile the general ideas they associate with (e.g. monism vs dualism, idealism vs materialism); it would be nice to see something like that easily associated with comments on philosophical forums, to see where the views come from…

    Funny stuff too with the demarcation problem and social status. My own sociopolitical instincts take control of the situation (as they do for most people) and impel me to spit at everything and run away, but it’s a topic that can certainly have some real-world consequences, at least in some parts of the world, whether I like it or not…

