Defining empathy, sympathy, and compassion

When discussing the evolution of cooperation, questions about empathy, sympathy, and compassion are often close to mind. In my computational work, I used to operationalize-away these emotive concepts and replace them with a simple number like the proportion of cooperative interactions. This is all well and good if I want to confine myself to a behaviorist perspective, but my colleagues and I have been trying to move to a richer cognitive science viewpoint on cooperation. This has confronted me with the need to think seriously about empathy, sympathy, and compassion. In particular, Paul Bloom’s article against empathy and a Reddit discussion on the usefulness of empathy as a word have reminded me that my understanding of the topic is not very clear or critical. As such, I was hoping to use this opportunity to write down definitions for these three concepts and, at the end of the post, sketch a brief idea of how to approach some of them with evolutionary modeling. My hope is that you, dear reader, will point out any confusion or disagreement that lingers.

Cognitive and emotional empathy

To start a definition, we need to distinguish between three things I can do with respect to an emotion: (1) I can feel an emotion, (2) I can identify an emotion, and (3) I can act on an emotion. The first and third points apply only to my own experience, while the second can apply either to introspection of my own emotions or to the use of theory-of-mind to understand the emotions of others. From the point of view of the self, I am not sure if (1) and (2) can be completely disentangled, since I don’t understand how one can feel an emotion without being able to identify it to at least some extent. However, I have definitely experienced cases where I only had a vague idea of the emotion I was feeling, and only after introspection and thought was I able to better identify it. I think that a classical literary example of this is the tension between love and lust.

However, it is in the application of (2) to the other that we can find our first definition. Cognitive empathy, sometimes also called perspective taking, is the ability to identify the emotions of others. This can be inferred either from the actions of others or from their circumstances. In the case of action, for example, if I see somebody sobbing then in most circumstances I have reason to infer that they are feeling sad. In the case of circumstance, for example, if I see somebody experience an injury then I will infer that they feel pain even if they don’t act on their pain by screaming or grabbing the injured body part. In a pure abstraction, cognitive empathy is an intellectual and not an emotive activity.

However, in practice, when I identify the emotions of others, I also tend to feel some (usually attenuated) version of those emotions. This is emotional empathy: feeling the same emotion that another person is experiencing. The intensity of this mirrored experience varies from person to person, and its complete absence is often considered a disorder. Of course, too strong an emotional empathy can be debilitating, too.

To extract the essence of empathy, I would like to restate the definitions in a more analytic fashion that might appeal to cognitive scientists. Empathy is the ability to represent a model of another’s emotional state in your own mind. My empathy is emotional if the representation is an emotional one: I feel what the other is feeling. The empathy is cognitive if my representation is a symbolic one: I understand what the other is feeling and can use it for further cognitive processing.

Emotional and cognitive empathy are seldom independent from each other, and can relate to each other in any causal direction. For example, if you tell me that a person is in pain then you have already extracted his emotional state and packaged it for me. I have a cognitive understanding of the situation without an emotional experience. However, if that person is close to me (or if I am particularly emotionally empathic), I might then proceed to actually feel an attenuated pain from my imagination of their predicament. In this case, cognitive empathy caused emotional empathy. On the other hand, a very emotionally empathic person (like Hannah the psychotherapist from Bloom’s article) might first experience the emotion of the person they are interacting with and then proceed to identify that emotion in themselves. In this case, emotional empathy caused cognitive empathy.

Sympathy

So far, I don’t think that I have said anything controversial, because the definitions of emotional and cognitive empathy are relatively well characterized. Sympathy, however, seems to be a much more slippery concept: many people use ‘sympathy’ and ‘empathy’ interchangeably. I think the cause is the historical novelty of the word empathy; it comes as a translation of the German Einfühlung, which was coined by Robert Vischer in the late 19th century in the context of psychoanalysis and aesthetics. The Greek etymology of ‘empathy’ does not go back to Plato or Aristotle; the historic roots are only apparent because of E.B. Titchener’s creative translation of ‘Einfühlung’. As such, many historically significant authors used ‘sympathy’ where it might have been better to use ‘empathy’. The most significant case might be Adam Smith’s use of sympathy and fellow-feeling.

However, sympathy and empathy are not the same thing. For example, if I sympathize with your pain, it doesn’t mean that I feel your pain so much as that I feel pity or sorrow for you. Of course, empathy is a component: I have to identify that you are feeling pain in order to then feel pity, but I don’t think it is the essence of the word. I also find it more useful to have non-overlapping definitions when possible, even if the concepts are seldom apart in practice. My favorite definition of sympathy comes from user musitard on /r/philosophy:

Sympathy is the ability to select appropriate emotional responses for the apparent emotional states of others.

In other words, sympathy is not about feeling the same thing that somebody else is feeling, but an appropriate emotion to complement theirs. Sometimes the appropriate response might be to produce the same feeling, in which case the concept is indistinguishable from empathy, but in general the response could be different, as with the sadness-pity example. As with empathy, we could also distinguish between cognitive and emotional sympathy. Unfortunately, unlike empathy, this definition sneaks in a very loaded term: appropriate. This forces us to situate the word in a broader cultural context or moral philosophy, as was the case with Adam Smith’s usage. I don’t want to focus on pinning down what exactly appropriate means and will stick to Justice Potter Stewart’s “I know it when I see it” attitude. For now, my take-away is that empathy is the reflection of feeling, while sympathy is a more culture- or ethics-dependent creation of a potentially different feeling in the self in response to a perception of feeling in the other.

Compassion

Recall that I distinguished three things one can do with respect to emotion, but never discussed the third: acting on an emotion. For that, we have to depart from the contemplative words of Greek etymology and move to the can-do words of the Romans: compassion. Although the root of the word still refers to emotion, it’s usually associated with a distress that drives one to action. That is why we show compassion, instead of simply feeling compassion. As such, I would like to equate compassion to the active form of sympathy: selecting the appropriate action in response to the apparent emotional state of another. As with sympathy, ‘appropriate’ is a loaded word, but in the more active case of compassion the Golden Rule seems like a good heuristic: treat others as you would like to be treated.

As in the previous case, empathy, sympathy, and compassion are usually intertwined. In a typical setting, you first need to empathize with somebody and identify their emotion, feel sympathy toward them, and then realize that emotion in an act of compassion. However, we can also think of some mild cases of compassion where empathy and sympathy are not prerequisites. The stereotypical Canadian ‘sorry’ is an act that comes to mind. If you bump into me on the street, I will usually apologize instinctively. This is not because I feel the mild inconvenience I caused you (empathy), and not even because I actually feel sorry for your mild inconvenience (sympathy), but simply because I was conditioned by my culture to apologize.

With this terminology, I can better understand Paul Bloom’s Against Empathy. Simply feeling somebody else’s misfortune is not enough to alleviate it, and in some cases feeling somebody else’s pain can cloud your judgement and impair your ability to act in their best interest. Politically, a strong empathic drive can be exploited with charged individual anecdotes and overrule the compassion to act in the interest of many. Although we can trace some of our pro-social drive to our ancient ancestors, today it might be worthwhile to be more mindful of the delicate relationship between empathy, sympathy, and compassion.

Game theory, rationality, and the eigeneinfühlung

With the careful terminology out of the way, how can we proceed with the pragmatic task of computational or evolutionary studies of pro-social behavior? I think the cultural baggage of ‘appropriate’ makes a thorough theory of sympathy and compassion out of reach for now, at least for me. However, I think we can start to operationalize empathy in our objective-subjective rationality framework. In fact, Paul Bloom has basically done it for me with his discussion of Hannah the psychotherapist:

Hannah’s concern for other people doesn’t derive from particular appreciation or respect for them; her concern is indiscriminate and applies to strangers as well as friends. She also does not endorse a guiding principle based on compassion and kindness. Rather, Hannah is compelled by hyperarousal — her drive is unstoppable. Her experience is the opposite of selfishness but just as extreme. A selfish person might go through life indifferent to the pleasure and pain of others — ninety-nine for him and one for everyone else — while in Hannah’s case, the feelings of others are always in her head — ninety-nine for everyone else and one for her.

Here Bloom is explaining how we can understand Hannah’s acts of compassion as stemming not from some altruistic drive, but from the need to alleviate her own overwhelming emotional empathy. The difference from the selfish rationalist is just that her feelings mostly take into account the state of others, instead of herself. In a game-theoretic setting, suppose that if Alice does action A and Bob does action B then the objective payoff assigned by the environment is F(A;B) for Alice and F(B;A) for Bob. If we want to incorporate empathy, then we can introduce a variable s in [0,1], and have Alice’s subjective payoff as G(A;B) = sF(A;B) + (1 – s)F(B;A). Alice can then act to maximize her subjective payoff G. This means that s is a measure of ‘selfishness’, with s = 0.99 corresponding to Bloom’s selfish person, and s = 0.01 to Hannah. We can then study the effect of the evolution of empathy on cooperation in the same way as Marcel, Tom, and I looked at the evolutionary effect of delusions.
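To make this concrete, here is a minimal sketch (my own illustration, not one of our actual models) of how the subjective payoff G could drive behavior in a Prisoner’s Dilemma; the specific payoff values are assumptions for illustration only:

```python
# A sketch of the empathy-weighted subjective payoff
# G(A;B) = s*F(A;B) + (1 - s)*F(B;A) in a Prisoner's Dilemma.
# The payoff values below are illustrative assumptions, not from the post.

# Objective payoff F[(my_action, their_action)]; 'C' = cooperate, 'D' = defect.
F = {('C', 'C'): 3, ('C', 'D'): 0,
     ('D', 'C'): 5, ('D', 'D'): 1}

def subjective_payoff(A, B, s):
    """Alice's subjective payoff G(A;B) with selfishness s in [0, 1]."""
    return s * F[(A, B)] + (1 - s) * F[(B, A)]

def best_response(B, s):
    """The action that maximizes Alice's subjective payoff against Bob's B."""
    return max(['C', 'D'], key=lambda A: subjective_payoff(A, B, s))

# Bloom's selfish person (s = 0.99) defects against a cooperator,
# while Hannah (s = 0.01) cooperates: she mostly feels Bob's payoff.
print(best_response('C', s=0.99))  # D
print(best_response('C', s=0.01))  # C
```

Letting s evolve in a population, and seeing when low-s (highly empathetic) types are favored, would mirror the approach we used for the evolution of delusions.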

You might argue that the above doesn’t really capture Bob’s feelings fully. Alice is optimizing a balance between her objective payoff and Bob’s objective payoff, but what if Bob is doing the same? Then his subjective payoff won’t actually match the objective one (as it doesn’t for Alice with s < 1), and if Alice is truly empathetic, she should take that into account. This leads to a feedback loop that is typical of game-theoretic reasoning and recursive theories of mind. Scott Aaronson toyed with this in the more general case of ethics, and playfully called his solution eigenmorality after the idea of eigenvalues and eigenvectors from linear algebra. To take a page from his book, I should call the recursive solution of Alice’s feelings taking into account Bob’s feelings taking into account Alice’s feelings taking into account… eigeneinfühlung.
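In the simplest two-player case, this fixed point can actually be written down. If each player’s subjective payoff weighs the other’s subjective (rather than objective) payoff, the recursion G_A = sF(A;B) + (1 − s)G_B converges to a closed form. A back-of-the-envelope sketch of my own, with illustrative numbers:

```python
def eigen_iterate(F_AB, F_BA, s, depth=50):
    """Iterate the mutual-empathy recursion
       G_A = s*F_AB + (1 - s)*G_B,  G_B = s*F_BA + (1 - s)*G_A,
    starting from the objective payoffs. Converges for s > 0 since
    each round shrinks the dependence on the other by a factor (1 - s)."""
    G_A, G_B = F_AB, F_BA
    for _ in range(depth):
        G_A, G_B = s * F_AB + (1 - s) * G_B, s * F_BA + (1 - s) * G_A
    return G_A, G_B

def eigen_closed_form(F_AB, F_BA, s):
    """Fixed point by substitution: G_A*(1 - (1-s)^2) = s*F_AB + s*(1-s)*F_BA,
    which simplifies to G_A = (F_AB + (1-s)*F_BA) / (2 - s)."""
    return (F_AB + (1 - s) * F_BA) / (2 - s)

# With F(A;B) = 5, F(B;A) = 0, and s = 1/2, the infinite regress of
# fellow-feeling settles at G_A = 10/3 instead of the one-step value 2.5.
G_A, _ = eigen_iterate(5, 0, 0.5)
print(round(G_A, 3), round(eigen_closed_form(5, 0, 0.5), 3))  # 3.333 3.333
```

The neologism may be tongue-in-cheek, but the fixed point itself is perfectly tame: the regress is a contraction whenever s > 0.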

Of course, the barely-pronounceable neologism is to be said with tongue-in-cheek. But I do have some serious questions for you, dear reader: what did I miss in my discussion of empathy, sympathy, and compassion? Do you know of computational studies of these phenomena? How do they operationalize these difficult concepts? How would you distinguish between empathy, sympathy, and compassion in the context of a simulation?

About Artem Kaznatcheev
From the Department of Computer Science at Oxford University and Department of Translational Hematology & Oncology Research at Cleveland Clinic, I marvel at the world through algorithmic lenses. My mind is drawn to evolutionary dynamics, theoretical computer science, mathematical oncology, computational learning theory, and philosophy of science. Previously I was at the Department of Integrated Mathematical Oncology at Moffitt Cancer Center, and the School of Computer Science and Department of Psychology at McGill University. In a past life, I worried about quantum queries at the Institute for Quantum Computing and Department of Combinatorics & Optimization at University of Waterloo and as a visitor to the Centre for Quantum Technologies at National University of Singapore. Meander with me on Google+ and Twitter.

27 Responses to Defining empathy, sympathy, and compassion

  1. Hi there! Great post, on which I have a couple of thoughts to share.

    1) The first concern is with the emotions being identifiable. There is this notion of emotional literacy and emotional vocabulary, which shows that people differ substantially in their ability to identify their emotions. There is also an emergent field of the history of emotions, which studies how proper emotional responses migrate from culture to culture, or from one social stratum to another, over the course of time. There is obviously a connection between knowing a name for your emotion, being articulate at describing your emotional states, and being able to feel it. But I would argue that the main issue here is not with the language, but with picking up on, and being able to react to, environmental cues for feeling it. Thus, cognitive empathy is connected to emotional literacy, but also to the sheer amount of communication experience and personality type.

    There are also many subtle emotions that are hard to identify. Your examples of sobbing and injury are very illustrative, but imagine a conversation at a dinner table. There is no time to identify emotions, only a certain social flow, which also appears to be the context in which we arguably benefit the most from our social skills. When reading poetry or listening to music there is a whole bunch of subtle and complex emotions of virtually intractable origin, usually alongside the main dominating emotion.

    There is also this language to describe less identifiable states, like “overwhelming”, “surge of emotion”, or just “I can’t find right words for it”.

    2) As far as sympathy goes, I have consulted an ordinary dictionary, and I find the Oxford Dictionary definition for sympathy quite elucidating:

    1.
    a. A relationship or an affinity between people or things in which whatever affects one correspondingly affects the other.
    b. Mutual understanding or affection arising from this relationship or affinity.

    So, with that ‘whatever’ they explicitly state that there is always a mediating agent involved in sympathy which makes it identifiable. It reminds me of a quote from Saint-Exupéry: ‘Love does not consist of gazing at each other, but in looking outward together in the same direction.’

    3) The last one is with the game-theoretic formula. I don’t know much about game theory, but the most obvious thing to do with the feedback loop is to cut it at a certain step by a separate optimizing procedure, which in real life also depends on experience, as when a person decides whether to ponder those emotions further or to leave them as is. Neither Alice nor Bob has enough time to account for deep recursion, and at a certain point some imperfect representation of the optimal strategy should be used in decision making.

    • Thank you for your awesome comment! Sorry that it has taken me so long to respond, I’ve been away at a conference and swamped.

      Point (1) is awesome. I didn’t know that there is a history of emotions literature. Could you point me to something to read? I am very interested in how our representation of the world has changed over time, be it scientific, or emotional.

      Point (2): I struggled with the dictionary when I was working on this post. I think sympathy is by far the most controversial of my definitions, and I should probably reflect on it more.

      Point (3): yes, I am familiar with the real-life effects of cutting the feedback loop at 3 to 4 steps. It is also mentioned in more detail in a comment further down; it is standard fare in game theory, with the more typical discussion being of an agent trying to mentally adjust for the fact that the other agent is also adjusting their strategy. However, building that in isn’t really an option for an evolutionary model, because you would want to instead explain why we truncate at 3 or 4 levels of nesting. However, the whole eigeneinfühlung part was just for giggles, and I am not thinking too seriously about it. Maybe I should.

      • Well, I came to know about this recent field of study through a collection of articles in Russian, called Российская империя чувств: http://www.ozon.ru/context/detail/id/5199285/, but the wikipedia article actually consists of a bibliography, so that’s a place to start: http://en.wikipedia.org/wiki/History_of_emotions#Literature

        I haven’t been reading any of those books. When I encountered the approach I was immediately impressed and incorporated it in my thinking when reading general history, memoirs or fiction.

        • This is very interesting. Are the articles in the book translations or were they written in Russian? I am a bit confused given that the editors are German. I’d like to read it, but I read much faster in English than in Russian. Also, I assume I can’t buy the book on Amazon?

          • Some of them are indeed translations from either German or English, and the ones translated have in part already been published somewhere else (I’ll be able to provide you with some references a bit later), but reworked for this edition. The majority of articles are by Russian authors. Yeap, I’m afraid it’s not on Amazon.

  2. I think you need to think about it in terms of the brain; neuroscience has a lot to say about the substrates & evolution of emotions. Cognitive empathy is learned, mostly by the cingulate cortex & neocortex. Emotional empathy involves subcortical areas: brainstem, hypothalamus, amygdala, insular cortex. It’s a combination of innate mechanisms & conditioning. A good introduction is “The Archaeology of Mind: Neuroevolutionary Origins of Human Emotions”: http://www.amazon.com/Archaeology-Mind-Neuroevolutionary-Interpersonal-Neurobiology/.
    Brief summary: it’s a lot more complex & kludgy than you think.
    My, more theoretical, take on this: http://cognitive-focus.blogspot.com/2012/06/motivation-evolution-of-value.html.

    • Let me step in here. It looks like you and Artem are coming from different backgrounds and want to operationalize the same phenomenon for different models. The neuroscience approach, pushed to as much detail as possible, probably finds its limits somewhere at the level of quantum chemistry computations, and would be useful for modeling brain activity. And Artem is trying to formulate minimalistic, but useful, models of social interactions in an evolutionary context.

      • I don’t think you can understand “society” & “social interaction” unless you understand individuals.
        Game theory alone is pretty superficial & can’t give you either empathy or conditioning.

        • As Alexander pointed out, you are just dismissing my whole approach out of hand. I am not asking people to stop doing the psychology or neuroscience of empathy, etc. That field is very exciting, but it is not something I work in or plan on working in. I would encourage you to try to look at things from multiple perspectives.

          However, your comment can be turned into something more constructive than a simple dismissal. As you point out, a lot of emotional empathy involves areas of the brain that are considered ‘evolutionarily old’; this can give us hypotheses about the timing of when various types of empathy/sympathy/etc. entered our lineage. That would be interesting to me. The specifics of how the brain processes emotion: less so.

          But that is just me. If you love the neuroscience perspective then by all means pursue it, we need as many perspectives as we can have to provide feedback to each other. Hopefully constructive feedback.

          • Sorry for being so dismissive, but you specifically stated that you want to go beyond arbitrary numbers & try to understand the nature of these emotions from a CogSci perspective. That is, you want to understand how they work in a human / mammalian brain. Because there is no such thing as empathy per se. It’s a combination of two things:

            1) Instinctual bonding response, driven by unconditional stimuli (olfactory, auditory, visual). Panksepp calls it a CARE circuit, plus relevant elements of four other social drives. As all other basic drives, it starts from brainstem.
            What might be of interest for you, he speculates that empathy originated from sexual drives. Sort of like Freud, but for Panksepp it’s all in the past, they are sufficiently distinct now. Combined with stimuli & responses conditioned by these instincts, this is purely “emotional” empathy.

            2) “Selfish” motives (both innate & conditioned) triggered by recognition of similarity of others to oneself. I don’t think this sort of empathy is conditioned, rather a purely “cognitive” association between areas that represent “self” (insular & posteromedial cortices) & those that represent others (probably prefrontal cortex).

            Neither of these things, or their origin, is currently feasible to simulate on a computer. So, you are stuck with guesses, hopefully slightly more educated.

            BTW, neuroscience is just a hobby for me. I am *at least* as much of a theory guy as you are. But this subject calls for specifics.

        • This is in response to your response to me; I am just nesting it at this level so that we have more space for discussion.

          I appreciate your push-back. As long as we aren’t dismissing each other out-of-hand, I think we can have a useful dialogue.

          I am not sure where I said that I “want to go beyond arbitrary numbers”. I am actually a pretty big fan of heuristic models. As for the CogSci perspective, I didn’t mean that I want to switch to that perspective completely, since it is already well developed in its own disciplines of neuroscience, cognitive science, and psychology. Instead, I want to take evolutionary game theory (and its associated literature and methods) as a starting point, and incorporate more cognitive elements into it. From a biologist’s perspective, you might say that we are just considering slightly more complicated genotype-phenotype maps than the typical identity mapping used in EGT. I am well aware that the models I build are wrong, but I do find them useful. The hope is to go one step at a time and add complexity in a manageable way that allows us to maintain some understanding.

          Of course, neuroscience can be very informative to this process. Tom has written about this a bit, maybe you will find something closer to your interests in that post?

          You write:

          Neither of these things, or their origin, is currently feasible to simulate on a computer.

          Just to clarify, I don’t aim to simulate empathy, nor do I think that such simulations are all that useful; of course, people working in AI might disagree with me. In fact, I tend to have rather negative views of simulation and a lot of computational modeling. However, this is focused on the individual level.

          I think that a lot of problems can get easier at the level of populations, the classic example being statistical mechanics. We can discuss the social or adaptive significance of empathy without understanding all the mechanisms at the individual level. Of course, all the criticisms of evolutionary psychology apply in this context.

          • > I appreciate your push-back.

            And I appreciate your tolerance of my insolence.

            > Tom has written about this a bit, maybe you will find something closer to your interests in that post?

            No, this is too macro. I am not interested in modeling / predicting behavior in specific situations. Rather, I am interested in the evo-devo of motivation as a clue to the “meaning of life” thing. Mapping motives to brain regions helps to understand their origin. If you (deeply) realize that the original purpose is no longer relevant, it helps you to override those primitive urges. Also, my own motivation is rather unusual, so I am curious if that’s a dead end or ahead of its time. Definitely doesn’t look like an atavism.

            > We can discuss the social or adaptive significance of empathy without understanding all the mechanisms at the individual level. Of course, all the criticisms of evolutionary psychology apply in this context.

            Right, there are tons of criticism, no point in adding to it here. I’ll just say that without understanding these mechanisms, you can’t predict how that fluid & composite empathy will change with the situation.

  3. neonemu says:

    Anyone else even slightly creeped out by the angry eyes in the author’s profile photo? Holy hell.

  4. Ad Nausica says:

    Great topic. At many times over the last decade or so I’ve thought about going back to school to do another PhD just to do research in this sort of area, but only if I could get Steven Pinker as my adviser.

    I only have a few random comments at this point because I haven’t had time to digest it all and think about it deeply.

    1. On the difference between cognitive and emotional empathy, you might look at the field of mirror neurons (http://en.wikipedia.org/wiki/Mirror_neuron). There’s a nice TED Talk by Vilayanur Ramachandran (http://en.wikipedia.org/wiki/Mirror_neuron), a neuroscientist in this area, and also his great book, The Tell-Tale Brain (http://en.wikipedia.org/wiki/The_Tell-Tale_Brain), and an associated Science Network talk on it (http://thesciencenetwork.org/programs/the-science-reader/the-tell-tale-brain-a-neuroscientist-s-quest-for-what-makes-us-human).

    In that context, I first disagreed with your statement, “In the case of circumstance, for example, if I see somebody experience an injury then I will infer that they feel pain even if they don’t act on their pain by screaming or grabbing the injured body part. In a pure abstraction, cognitive empathy is an intellectual and not emotive activity.”

    In fact we do have emotional responses to that. However, on reading further on your distinction of cognitive vs emotional empathy, I understand you mean this as an example, not a given response. If we witness somebody get injured we can cognitively infer their pain (cognitive empathy) and/or actually emotionally feel their pain (emotional empathy). Feeling their pain can come either directly via mirror neurons, without the need for cognitive empathy, or it can come indirectly as a result of our cognitive empathy causing our emotional empathy, or a combination. I fear the problem here will be in separating the two routes to emotional empathy, even if we are able to separate cognitive and emotional empathy in experiments or models.

    2. “but an appropriate emotion to compliment theirs”
    I think you mean “complement”. Common error.

    3. On the topic of modeling and infinite regression within game theory, I suggest looking at the 2/3rds game (http://en.wikipedia.org/wiki/Guess_2/3_of_the_average), aka the 70% game, whereby people have to guess a number between 0 and 100, aiming for 2/3rds of the average of everybody else’s guesses. The rational answer is zero because of the infinite regress, but the winning answer is usually in the 25-35 range, or roughly 3-4 iterations of estimating what other people will do. That is, nobody should guess above 67 because that would require them to believe everybody else guessed 100; but everybody else knows that too and so will guess no more than 67, so I should guess 44 (2/3*67). But they’ll do the same thinking, so I should guess 30 (2/3*44), ad infinitum.

    This combines two issues. You can guess a non-zero number if you think some of the other guessers are irrational, or you can guess a non-zero number if you think that they’ll guess a non-zero number because they think other guessers are irrational, or because they think that others are rationally guessing that others are guessing that others based on others being irrational, ad infinitum. Hence you can rationally guess a non-zero number based on a belief that everybody else is completely rational as well. It’s an evaluation of an infinite regression of beliefs of other people’s beliefs, and it typically seems to settle at about 3-4 iterations of the regression. I would argue this is a form of superrationality (http://en.wikipedia.org/wiki/Superrationality).
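The level-by-level regress above is easy to sketch (a toy level-k model; taking the upper bound of 100 as the level-0 starting point is an assumption for illustration):

```python
def level_k_guess(k, start=100.0):
    """A toy level-k model of the guess-2/3 game: each level of
    reasoning best-responds to the previous one by multiplying by 2/3.
    Starting from the upper bound of 100 is an illustrative assumption."""
    g = start
    for _ in range(k):
        g *= 2 / 3
    return g

# Three to four levels of regress land in the empirically observed
# 25-35 range; the infinite regress converges to the rational answer, 0.
print([round(level_k_guess(k), 1) for k in range(5)])
# [100.0, 66.7, 44.4, 29.6, 19.8]
```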

    Hence, to think about Bob and Alice’s subjective payoff you may model the regression based on experimental evidence that psychologically we tend toward 3-4 regressions and stop. You might also look at the differential results of simulating what happens when you stop the regressions after various numbers of steps: 1 (as you’ve done here), 2, 3, 4, 5, etc. If the difference in outcome quickly becomes vanishingly small, that might justify only a few regressions. To analyze in terms of natural selection, you might also look at the computational costs versus the cost of the difference in outcomes. For example, if there is a huge difference in behaviour based on 5 versus 10 iterations (one outcome gets you killed, the other is a slight embarrassment) and little computational cost, we’d tend to go further. If that difference diminishes quickly then there is no evolutionary value in going further than a few iterations.

    4. Ideally I would think probability needs to be included in the model. That is, you’ve modeled Alice’s subjective payoff as G(A;B) = sF(A;B) + (1-s)F(B;A). You’ve identified, I think, that really it should include something like G(A;B) = sF(A;B) + (1-s)G(B;A) = sF(A;B) + (1-s)[sF(B;A) + (1-s)G(A;B)], and hence the feedback loop where G(A;B) is a function of G(A;B). But, shouldn’t the subjective function be based on the *estimation* of G(B;A)? That is, let G'(B;A) = pG(B;A) and then Alice’s subjective payoff is G(A;B) = sF(A;B) + (1-s)G'(B;A). That is, Alice’s payoff is not based on Bob’s subjective payoff, but rather Alice’s estimate of Bob’s probable subjective payoff, which includes within it Alice’s estimate of Bob’s estimate of Alice’s subjective payoff.
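A crude sketch of this estimation idea (collapsing p to a single scalar weight is purely for illustration; all parameter values are assumptions):

```python
def estimated_eigen(F_AB, F_BA, s, p, depth=100):
    """Fixed point of G_A = s*F_AB + (1 - s)*p*G_B (and symmetrically
    for Bob), where p in [0, 1] is a crude scalar stand-in for Alice's
    imperfect estimate G' = p*G of Bob's subjective payoff.
    All parameter values here are illustrative assumptions."""
    G_A, G_B = F_AB, F_BA
    for _ in range(depth):
        G_A, G_B = s * F_AB + (1 - s) * p * G_B, s * F_BA + (1 - s) * p * G_A
    return G_A

# With a perfect estimate (p = 1) this recovers the plain mutual-empathy
# fixed point; a noisier estimate (p < 1) pulls Alice back toward her
# own objective payoff.
print(round(estimated_eigen(5, 0, 0.5, 1.0), 3))  # 3.333
print(round(estimated_eigen(5, 0, 0.5, 0.8), 3))  # 2.976
```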

    As a concrete example, over the years I have come to understand (I think) my mother’s model of how I think, and I can identify where her model of me is inaccurate. But being me, “can identify” means it is my subjective error model of her model of me. It makes my brain hurt to say it this way, but I think differently from the way that I think that she thinks that I think.

    So when I speak to her I sometimes think about how she will interpret my wording — not just based on my model of her, but based on my model of what she’ll think my intent is. I sometimes adjust the wording I use to compensate for (my model of) her model error of me.

    I don’t see anywhere in your model that you allow for that estimation parameter. You only have a weighting parameter, s, based on the relative importance Alice gives on her own objective payoff versus Bob’s payoff. Bob’s actual payoff can be quite different from Alice’s estimate of Bob’s payoff.

    Put another way, in your model Alice can’t really be wrong about Bob. We often do things thinking this is what the other would want, when it is something that we don’t personally want (in the objective sense, s = 1) and it turns out that it isn’t what they actually wanted either. It is simply what we thought they wanted, F’ or G’. I could be wrong about this in the particular application of payoffs, but I think it needs to be in there. People do feel good about doing things for others that they think they want, when nobody actually wanted it.

    5. I would model self-calibration via some accuracy measure. That is, we also estimate G-G’ based on some feedback and adjust G’ over time. My model of my mother has changed over time, but so has my model of my mother’s model of me. My models of everybody change over time. Certainly my model of my wife’s subjective payoffs has gotten better over time, as has hers of me.

    I think this adaptive calibration is what we often refer to as feeling “connected”; that is, we feel that the other person “gets me”.

    • Thanks for the thoughtful comment! A lot of interesting stuff in it. I’ll try to respond point by point:

      [1] For me, “mirror neurons” seems like an explanation that is too simple. I am always skeptical of such easy answers in empirical fields, especially neuroscience. However, if we are just looking for heuristics then I am happy with simple ones. In such cases, I prefer to avoid neuroscience if possible, but we do have an interesting discussion about it earlier in the comments; maybe you want to add something there.

      [2] Thanks, fixed. I tend to type these posts up too quickly.

      [3] Yes, the same sort of infinite regress I discussed in terms of empathy is standard stuff when discussing rationality in general. As you pointed out, humans tend to settle down after 3-4 nestings, but building this in kind of defeats the purpose of the analysis. The approach you discuss in the last section of point [3] is better, and I feel like I’ve seen it done before for studying the theory of mind. Also, I mostly meant that last section and its silly term eigeneinfühlung in jest.

      I’ve written briefly about super-rationality before and I think it is a cute rhetorical device, but not overly useful for game theory. It is also a very silly name, building on ‘rational’ as a value judgement instead of the arbitrary definition it happens to be.

      [4] Estimating parameters is something that can also be added, and I’ve blogged about it for our subjective-objective rationality model. However, I am usually not a fan of adding extra complexity into models unless I feel I really need it for something. This also goes for point [5], I think it is very easy to come up with progressively more complicated models, but I would prefer to understand the simpler cases first.

  5. Pingback: Week 4 Essay | MMPS - 381MC Specialiste Research

  6. Pingback: Do you need to care to be caring? Sympathy, Empathy, Compassion, and Caring in Healthcare

  7. Pingback: Cataloging a year of blogging: the philosophical turn | Theory, Evolution, and Games Group

  8. Pingback: An update | Theory, Evolution, and Games Group

  9. Mahesha 'M' Goleby says:

    I arrived at your blog researching empathy & nearly walked away after a few lines, thinking “what’s this nerd on about” [I’m a nerd too] but became engrossed and enjoyed reading the lot, including comments. I so miss teaching at uni – the staff conversations!!! Good one, Artem

  10. gttj says:

    Great posting, excellent discussion! From an ethical, moral standpoint, empathy, sympathy and compassion could be evoked to “justify” moral wrongdoing as well, if one takes moral relativism seriously, esp. metaethical versions (I am thinking of Jesse Prinz’s third book of his trilogy). That being said, I think one can argue that empathy, sympathy, and compassion are ultimately evoked for descriptive analysis rather than normative grounds of decision-making processes.

  11. Ralph Chaney says:

    I appreciate your approach to this.
    Please look over what Carl Rogers had to say about this operationally, in the expression of empathy and its effects. I know nothing of game theory but the “effects” of some act must be a part of it. Rogers is huge in this area of human interaction. He also refers to Gendlin.
    Would you please email a reply to me about this? I would be thankful.

  12. Pingback: Systemic change, effective altruism and philanthropy | Theory, Evolution, and Games Group

  13. Bungy Heart says:

    I know this is an old post now, but just wanted to comment on two things you’ve said:

    “Emotional and cognitive empathy are seldom independent from each other”, and

    “From the point of the self, I am not sure if (1) and (2) can be completely disentangled, since I don’t understand how one can feel an emotion without being able to identify it to at least some extent.”

    I realise you’re not approaching this from the field of autism spectrum disorders, but it’s worth being aware of the significant divide between cognitive and emotional empathy experienced by autists, and the high levels of alexithymia within this community compared with the neurotypical majority.

    Regarding alexithymia, which is currently believed to be present in around 50% of autistic and 10% of allistic people, there is a clear indication that the identification of emotions is, in fact, separate from the experience of them, and not just in being able to find the words to express them. Meanwhile, the historical perception of autistic people as having no or reduced empathy is now understood to be associated with reduced cognitive and expressive empathy, but is combined with a generally heightened emotional empathy.

    Not trying to make a point against anything you’ve said, just something to consider in your ponderings.
