Systemic change, effective altruism and philanthropy

Keep your coins. I want change.

The topics of effective altruism and social (in)justice have weighed heavily on my mind for several years. I’ve even touched on the latter occasionally on TheEGG, but usually in specific domains closer to my expertise, such as in my post on the ethics of big data. Recently, I started reading more thoroughly about effective altruism. I had known about the movement[1] for some time, but had conflicting feelings towards it. My mind is still in disarray on the topic, but I thought I would share an analytic linkdex of some texts that have caught my attention. This is motivated by a hope to get some guidance from you, dear reader. Below are three videos, two articles, two book reviews, and one paper alongside my summaries and comments. The methods range from philosophy to comedy and from critical theory to social psychology. I reach no conclusions.

Read more of this post

Emotional contagion and rational argument in philosophical texts

Last week I returned to blogging with some reflections on reading and the written word more generally. Originally, I was aiming to write a response to Roger Schank’s stance that “reading is no way to learn”, but I wandered off on too many tangents for a single post or for a coherent argument. The tangent that I left for this post is the role of emotion and personality in philosophical texts.

In my last entry, I focused on the medium-independent aspects of Schank’s argument, and identified two dimensions along which a piece of media and our engagement with it can vary: (1) passive consumption versus active participation, and (2) the level of personalization. The first continuum has a clearly better end on the side of more active engagement: if we are comparing mediums, then we should prefer ones that foster more active engagement from the participants. The second dimension is more ambiguous: sometimes a more general piece of media is better than a bespoke piece. What is better becomes particularly ambiguous when being forced to adapt a general approach to your special circumstances encourages more active engagement.

In this post, I will shift focus from comparing mediums to a particular aspect of text and arguments: emotional engagement. Of course, this also shows up in other mediums, but my goal this time is not to argue across mediums.

Read more of this post

An approach towards ethics: neuroscience and development

For me personally, it has always been a struggle, while reading through all the philosophical and religious literature I have a long-standing interest in, to verbalize my intuitive concept of morals in any satisfactory way. Luckily for me, once I started reading up on modern psychology and neuroscience, I found out that there are empirical models, based on clustering of the abundant concepts, that correlate well with both our cultured intuitions and our knowledge of brain functioning. These models are for the study of ethics what the Big Five traits are for personality theories or what the Cattell-Horn-Carroll theory is for cognitive abilities. In this post I’m going to provide an account of research at the level of explanation of human morals that I find most elucidating: that of neuroscience and psychology. The following is not meant as a comprehensive review, but a sample of what I consider the most useful explanatory tools. The last section touches briefly upon the genetic and endocrinological components of human morals, but it is nothing more than a mention. Also, I’ve decided to omit citations in quotes, because I don’t want to include research I am personally unfamiliar with in the list of references.

A good place to start is Jonathan Haidt’s TED talk:

Read more of this post

An approach towards ethics: primate sociality

Moral decision-making is one of the major torrents in human behavior. It often overrides other ways of making judgments; it generates conflicting sets of cultural values and is reinforced by them. Such conflicts might even occur in the head of some unfortunate individual, which makes the process really creative. On the other hand, ethical behavior is the necessary social glue and the way people prioritize prosocial practices.

In the comments to his G+ post about Michael Sandel’s Justice course, Artem Kaznatcheev invited me to have a take on moral judgment and social emotions based on what I have gathered through my readings over the last couple of years. I’m by no means an expert in any of the fields that I touch upon in the following considerations, but, due to my interest in the behavioral sciences, I’ve been purposefully struggling with the topic, trying to come up with a lucid framework to think about the subject. Not everything I write here is backed up very well by research, mainly because I step a little beyond it and try to see what might come next, but I’ll do my best to keep my own general understanding distinct from the concepts prevailing in the studies I have encountered. This is not an essay on ethics per se, but rather an account of where I am now in understanding how moral sentiments work. One remark: for the purposes of this text, I understand behavior broadly, e.g. thinking is a behavior.

Read more of this post

Defining empathy, sympathy, and compassion

When discussing the evolution of cooperation, questions about empathy, sympathy, and compassion are often close to mind. In my computational work, I used to operationalize away these emotive concepts and replace them with a simple number like the proportion of cooperative interactions. This is all well and good if I want to confine myself to a behaviorist perspective, but my colleagues and I have been trying to move to a richer cognitive science viewpoint on cooperation. This has confronted me with the need to think seriously about empathy, sympathy, and compassion. In particular, Paul Bloom‘s article against empathy, and a Reddit discussion on the usefulness of empathy as a word, have reminded me that my understanding of the topic is not very clear or critical. As such, I was hoping to use this opportunity to write down definitions for these three concepts and, at the end of the post, sketch a brief idea of how to approach some of them with evolutionary modeling. My hope is that you, dear reader, would point out any confusion or disagreement that lingers.
Read more of this post

Weapons of math destruction and the ethics of Big Data

I don’t know about you, dear reader, but during my formal education I was never taught ethics or social consciousness. I even remember sitting around with my engineering friends who had to take a class in ethics and laughing at the irrelevance and futility of it. To this day, I have a strained relationship with ethics as a branch of philosophy. However, despite this villainous background, I ended up spending a lot of time thinking about cooperation, empathy, and social justice. With time and experience, I started to climb out of the Dunning-Kruger hole and realize how little I understood about being a useful member of society.

One of the important lessons I’ve learnt is that models and algorithms are not neutral; they come with important ethical considerations that we as computer scientists, physicists, and mathematicians are often ill-equipped to see. For exploring the consequences of this in the context of the ever-present ‘big data’, Cathy O’Neil’s blog and alter ego mathbabe have been extremely important. This morning I had the opportunity to meet Cathy for coffee near her secret lair on the edge of Lower Manhattan. From this writing lair, she is working on her new book Weapons of Math Destruction and “arguing that mathematical modeling has become a pervasive and destructive force in society—in finance, education, medicine, politics, and the workplace—and showing how current models exacerbate inequality and endanger democracy and how we might rein them in”.

I can’t wait to read it!

In case you are impatient like me, I wanted to use this post to share a selection of Cathy’s articles along with my brief summaries for your browsing enjoyment:
Read more of this post

Where is this empathy that we are so proud of?

This post will be a momentary departure from the usual theme of mathematical and computational modeling that has defined this blog. I apologize for discussing non-scientific news and pushing this medium outside its comfortable academic niche. This post has also been edited after its original posting to reflect on a comment discussion.

The regular reader might notice that some of the models I try to examine concern themselves with topics like evolution of cooperation, compassion, and empathy. In fact, in my last lecture half of the questions I posed were about compassion:

2. What does Wright say compassion is from a biological point of view? Do you think this is a reasonable definition?
3. Can a rational agent be compassionate? Is understanding the indirect benefits (to yourself or your genes) that your actions produce essential for compassion?
5. Can compassion or cooperation evolve in an inviscid environment? What about a spatially structured one?
7. What is a zero-sum game? Does a non-zero-sum relationship guarantee that compassion will emerge?

As cynical as I am, and as cold as the stereotype of a mathematician is, I have always functioned under the assumption that humans are empathetic, caring, compassionate beings. I did not ask if that is a reasonable axiom, but instead was more interested in questions like: what were the earliest compassionate ancestors of modern humans? But as I reflect on misdeeds in my own personal life (don’t worry, the blog won’t get that personal) and stories in the news… I can’t help but wonder: Is compassion just a story we tell ourselves to sleep at night? Armchair hypothesizing on the beauty of our own morality? Where is this empathy that we are so proud of?

Yesterday afternoon, a deranged 29-year-old man pushed 58-year-old Ki Suk Han onto the tracks in a Midtown, NYC subway station. There was not enough time for the train to come to a halt before Han was caught between it and the platform. The husband and father of one was pronounced dead on arrival at Roosevelt Hospital. Mayor Bloomberg commented: “It’s one of those great tragedies, it’s a blot on all of us, and if you could do anything to stop it, you would.”

What happened on the platform? One of the grisly developments of the story (and unfortunately the reason it has become big) is a ghastly photo by NY Post photographer R. Umar Abbasi. I have not reproduced the photo here out of respect for Han: it captures the moment before his death, as he tried to scramble back onto the platform with the train bearing down upon him. The Post ran the image as its cover, with the caption: “Doomed: pushed on the subway track, this man is about to die”.

As you can tell from this post, I am not a journalist and it is not my place to question the Post’s editorial decisions. I am also not writing to join the attacks on photographer Abbasi. Actually, I think that (with the available evidence) Twitter and the media are already putting too much blame and guilt on him. My concern is with everybody on the platform.

At that time on Monday, I had just returned to Montreal from a trip to New York City. In my time in the city (and my earlier years of visits and living there) I have almost never seen a platform that deserted. Empty. Abandoned. There is nobody trying to reach over the edge to help Han; there is nobody running towards him. All you can see in the photo is a crowd of people standing nearly 10 meters down the platform simply looking. Watching. Paralyzed.

Can we continue to think that we are a compassionate species? Can we claim that we feel empathy? Is our groupthink so overbearing that it can paralyze us from helping a man in distress? Would I have had the strength to run over and try to help? Even knowing the physics of the situation, low likelihood of my success, and high chance of personal injury? Or would I have just remained motionless? As if robbed of my agency. What about you?

As pointed out by Joe Fitzsimons, the situation is not that simple:

When something like this happens right out of the blue, it takes most people a considerable amount of time to parse what has happened. Not through a lack of empathy, but through a mixture of incomplete information (presumably most people only look *after* something has happened), a lack of context, and so on. People instinctively hesitate. That’s why we train those people we expect to have to deal with such situations (for example soldiers, police, fire fighters, etc.) so that they don’t lock-up. Without training or prior experience, people do lock up, simply not react at all, or make the situation worse by doing something stupid. It’s nothing to do with a lack of empathy.

In such a situation, the confusion and surprise can be paralyzing; I have definitely felt this before, and in much less stressful situations. In such a context, it is meaningless to discuss empathy, since the stress robs humans of their agency. This is the setting that seems most likely to me. The question becomes: what can we do to prevent ourselves from such paralysis? Is it reasonable to define empathy on such short timescales? Is compassion meaningful without the agency to act on it? How would I feel in a situation where I was witnessing a horror that I could not will my body to prevent? Would I feel robbed of the sense of agency that defines my humanity? What about you?

I was not there. My questions should be secondary to the testimony of witnesses. I cannot reflect on (or even understand) the mental state of the people caught up in this awful situation. It is reported that, after the incident, Dr. Laura Kaplan (who did not see it occur) and a security guard rushed over to try to administer CPR, but “there was no pulse, never, no reflexes.”

My heart goes out to the widow and child of Ki Suk Han. Although they will never read these words, I hope that there is still some cosmic comfort in them. In no way do I want to assign guilt to the bystanders on the platform; I am not in a position where I can glean understanding. The only thing that is clear to me is that the assumption of humans as empathetic, compassionate, and caring beings is much more complicated than I had believed. Empathy needs to be operationalized and carefully studied. Its connection to agency, fear, and rationality needs to be carefully examined. We need to understand what makes humans seem so inhumane in such tragic situations.

Can we expand our moral circle towards an empathic civilization?

The Royal Society for the encouragement of Arts, Manufactures and Commerce (or RSA for short) hosts numerous speakers on ideas and actions for a 21st century enlightenment. They upload many of these talks to their YouTube channel, to which I am a recent subscriber. I particularly enjoy their series of RSA Animate segments, where an artist draws a beautiful whiteboard sketch of the talk as it is being presented (usually filled with lots of visual puns and extra commentary). As a way to introduce our readers to RSA Animate, I thought I would share the talk entitled “The Empathic Civilisation” by Jeremy Rifkin:

Rifkin highlights three aspects of empathy: (1) the activity of mirror neurons, (2) the soft-wiring of humans for cooperative and empathic traits, and (3) the expansion of the empathic circle from blood ties to nation-states to (hopefully) the whole biosphere. There is a reason that I chose the word “circle” for the third point, and that is because it reminds me of Peter Singer‘s The Expanding Circle (if you want more videos, then there is an interesting interview about the expanding circle). In 1981, Singer postulated that the extended range of cooperation and altruism is driven by an expanding moral (or empathic) circle. He sees no reason for this drive to cease before we reach a moral circle that includes all humans or even the biosphere; this would go past ethnic, religious, and racial lines.

One of my few hammers is evolutionary game theory, and points (2) and (3) become obvious nails. The soft-wiring towards empathy can be looked at in the framework of objective and subjective rationality that Marcel and I are developing; I might address this point in a later post. For now, I want to focus on this idea of the expanding circle since these ideas relate closely to our work on the evolution of ethnocentrism.

Moral circles do not expand in simple tag-based models

If I recall correctly, it was Laksh Puri who first brought Singer’s ideas to my attention. Laksh thought that our computational models for the evolution of ethnocentrism could be adapted to study the evolutionary basis of morality as proposed by Singer. In 2008, I modified some of our code to test this idea. In particular, I took the Hammond and Axelrod (2006) model of the evolution of ethnocentrism and built in an idea of super- (as I called them) or hierarchical (as Tom and Laksh called them) tags.

In the standard model, agents are endowed with an arbitrary integer which acts as a tag, and two strategies: one for the in-group (same tag), and one for the out-group (different tag). I was working with a 6-tag population: we can think of these tags as the integers 0, 1, 2, 3, 4, and 5. To expand a circle means to consider more people as part of your in-group; to me this sounded like coarse-grained tag perception. To implement this I did the easiest thing possible, and introduced an extra coarseness or mod parameter (1, 2, 3, or 6) which corresponded to how finely agents could distinguish tags. In particular, a mod-6 agent could distinguish all tags, and thus if he was a tag-0 agent, he would know that tag-1, tag-2, tag-3, tag-4, and tag-5 agents were all different from him. A mod-3 agent, on the other hand, would test for tag equality using modular arithmetic with modulus 3. Thus a tag-0 mod-3 agent would think that both tag-0 and tag-3 agents are part of his in-group (since $3 \equiv 0 \pmod{3}$); similarly for mod-2. For mod-1, everyone would look like the in-group. Therefore, a mod-1 ethnocentric is equivalent to a humanitarian, a mod-1 traitor is equivalent to a selfish agent, and both were counted as such in the simulation results.
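To make the perception rule concrete, here is a minimal Python sketch of the mod-based in-group test described above. This is an illustration, not the original simulation code; the class and attribute names are mine.

```python
from dataclasses import dataclass


@dataclass
class Agent:
    tag: int             # one of 0..5 in the 6-tag population
    mod: int             # coarseness of perception: 1, 2, 3, or 6
    ingroup_coop: bool   # strategy towards the perceived in-group
    outgroup_coop: bool  # strategy towards the perceived out-group

    def perceives_as_ingroup(self, other: "Agent") -> bool:
        # A mod-m agent compares tags modulo m: a mod-6 agent distinguishes
        # all six tags, while a mod-1 agent sees everyone as in-group.
        return self.tag % self.mod == other.tag % self.mod

    def cooperates_with(self, other: "Agent") -> bool:
        if self.perceives_as_ingroup(other):
            return self.ingroup_coop
        return self.outgroup_coop
```

In this sketch, the four strategies of the standard model correspond to the four settings of (ingroup_coop, outgroup_coop): humanitarian (True, True), ethnocentric (True, False), traitor (False, True), and selfish (False, False). As noted above, a tag-0 mod-3 agent treats a tag-3 agent as in-group because 3 % 3 == 0 % 3, and a mod-1 ethnocentric behaves exactly like a humanitarian.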

As the section title suggests, the simulation results did not support the idea of an expanding circle. The more coarse-grained tags did not fare well compared to the mod-6 agents and their ability for fine distinction. The interaction was the prisoner’s dilemma, and I ran 30 simulations for each of two conditions (allowing supertags or not) and five different $\frac{b}{c}$ ratios: 1.5, 2, 2.5, 3, and 4. I present here the results for 1.5 and 4. Unfortunately, I did not bother to plot the error bars like I usually do, but they were relatively tight.
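For readers who want the interaction spelled out, below is a hedged sketch of a single pairwise encounter, reading the prisoner’s dilemma as the usual cost-benefit (donation) game so that only the $\frac{b}{c}$ ratio matters. It reuses the hypothetical cooperates_with method from the sketch above; it is not the original simulation code, and the payoff bookkeeping there may differ in detail. The absolute values of b and c below are placeholders, since the text only fixes their ratio.

```python
def interact(x, y, b=1.5, c=1.0, coop_log=None):
    """One prisoner's dilemma round between neighbours x and y.

    Each agent independently decides whether to cooperate, i.e. pay a cost c
    so that the other receives a benefit b (with b > c). Returns the payoff
    changes (for x, for y).
    """
    payoff = {id(x): 0.0, id(y): 0.0}
    for donor, recipient in ((x, y), (y, x)):
        cooperated = donor.cooperates_with(recipient)
        if coop_log is not None:
            coop_log.append(cooperated)  # recorded for the cooperation proportion discussed below
        if cooperated:
            payoff[id(donor)] -= c
            payoff[id(recipient)] += b
    return payoff[id(x)], payoff[id(y)]
```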

Plots of results for supertag simulations

On the left we have the proportion of strategies, with humanitarians in blue, ethnocentrics in green, traitors in yellow, and selfish agents in red. From top to bottom, we have no supertags and b/c = 1.5; supertags, b/c = 1.5; no supertags, b/c = 4.0; and supertags, b/c = 4.0.
On the right we have the breakdown by mod of the ethnocentric agents in the supertag cases. In red is mod-6, green is mod-3, and blue is the most coarse-grained mod-2. From top to bottom: supertags with b/c = 1.5; and supertags with b/c = 4.0.
All results are averages from 30 independent runs.

In both the low and high $\frac{b}{c}$ conditions, most of the ethnocentric agents tend towards being the most discriminating possible: mod-6 (the red lines in the right-hand figures). For $\frac{b}{c} = 1.5$ it is also clear that the population with supertags is not nearly as effective at suppressing selfish agents. In the $\frac{b}{c} = 4.0$ case, it seems that supertags sustain a higher level of humanitarian agents, but it is not clear to me that this trend would remain if I ran the simulations longer. The real test is to see if there is more cooperation.

Proportion of cooperative interactions

Plots of the proportion of cooperative interactions. On the left is the b/c = 1.5 case and on the right is the b/c = 4.0 case. For both plots, the blue line is the condition with supertags, and the black is without. Results are averages from 30 runs. Error bars are omitted, but in both conditions the black line is higher than the blue by a statistically significant margin.

Here, in both conditions there are fewer cooperative interactions when supertags are allowed. The marginal increases in coarse-graining and in the number of humanitarians do not result in a more cooperative world. I have observed the same trade-off between fairness (more humanitarians) and cooperative interactions when looking at cognitive cost (Kaznatcheev, 2010). For this sort of simulation, this might very well be a general phenomenon: increases in fairness produce decreases in cooperation. My central point is that we do not see a strong expansion of our in-group circle. Further, even if we do see such an increase, it might come at the cost of cooperation. It seems that evolution favors the most fine-grained perception of tags; there is no strong drive to expand our circle towards an empathic civilization.
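For completeness, here is a small sketch of how the summary statistic in the figure above could be computed: the proportion of cooperative interactions per evolutionary cycle, averaged over independent runs, together with the kind of standard error that would supply the omitted error bars. Again, this is an illustration rather than the original analysis code, building on the hypothetical coop_log from the interaction sketch earlier.

```python
import numpy as np


def cooperation_proportion(coop_log):
    """Fraction of (one-sided) interactions in one cycle that were cooperative."""
    return sum(coop_log) / len(coop_log) if coop_log else 0.0


def summarize_runs(per_run_series):
    """per_run_series: array of shape (n_runs, n_cycles) of per-cycle proportions.

    Returns the across-run mean and standard error for each cycle.
    """
    data = np.asarray(per_run_series, dtype=float)
    mean = data.mean(axis=0)
    sem = data.std(axis=0, ddof=1) / np.sqrt(data.shape[0])
    return mean, sem
```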

References

Hammond, R., & Axelrod, R. (2006). The evolution of ethnocentrism. Journal of Conflict Resolution, 50(6), 926-936. DOI: 10.1177/0022002706293470

Kaznatcheev, A. (2010). The cognitive cost of ethnocentrism. In S. Ohlsson & R. Catrambone (Eds.), Proceedings of the 32nd Annual Conference of the Cognitive Science Society. (pdf)

The evolution of compassion by Robert Wright at TED

An enjoyable video from Robert Wright about the evolution of compassion:

How would you model the evolution of compassion? How would your model differ from standard models of the evolution of cooperation? Does a model of compassion necessarily require agents with modeled minds/emotions to feel compassion, or can we address it purely operationally, like cooperation?