Computational kindness and the revelation principle

In EWD1300, Edsger W. Dijkstra wrote:

even if you have only 60 readers, it pays to spend an hour if by doing so you can save your average reader a minute.

He wrote this as the justification for the mathematical notations that he introduced and as an ode to the art of definition. But any writer should heed this aphorism.[1] Recently, I finished reading Algorithms to Live By by Brian Christian and Tom Griffiths.[2] In the conclusion of their book, they gave a unifying name to the sentiment that Dijkstra expresses above: computational kindness.

As computer scientists, we recognise that computation is costly. Processing time is a limited resource. Whenever we interact with others, we are sharing in a joint computational process, and we need to be mindful of when we are not carrying our part of the processing burden. Or worse yet, when we are needlessly increasing that burden and imposing it on our interlocutor. If you are computationally kind then you will be respectful of the cognitive problems that you force others to solve.

I think this is a great observation by Christian and Griffiths. In this post, I want to share with you some examples of how certain systems — at the level of the individual, small group, and society — are computationally kind. And how some are cruel. I will draw on examples from their book, and some of my own. They will include language, bus stops, and the revelation principle in algorithmic game theory.

George Kingsley Zipf outlines how considerations of computational kindness are embedded in our very language through a conflict between the speaker and listener in how to invest cognitive energies. A speaker hopes to have all concepts expressed by a single simple sound and leave the difficulty of disambiguation to the listener. The listener, on the other hand, wishes for a totally unambiguous language, so the difficulty of picking the right words is on the speaker, and the listener doesn’t need to spend energy on disambiguation.[3] The former task, of using a longer and more elaborate sequence of sounds to express our meaning, is clearly easier than the latter task of picking out the most consistent of countless interpretations of a terse utterance. Thus, the fact that our words are not extremely overloaded is a testament to our computational kindness towards listeners.

But this doesn’t mean that computational kindness ends at unambiguous words. We can also be more or less kind in what we choose to say. Some of these considerations are at odds with typical concerns of politeness. Suppose that my friend asks where I want to go for dinner tonight. If I reply with "anything’s fine by me, whatever you would like" then under conventional wisdom, I am being polite. I am offering her the freedom to determine dinner plans. However, anybody who has said this in practice (or worse yet, heard it as a response) knows that this is hardly kind. Now, my friend not only has to select something from her extensive list of preferences, she also has to simulate my preferences to judge which of the options is best at optimising both of our utilities. It is often more difficult for my friend to simulate my preferences than it is for me to examine them for myself.[4] Thus, my response not only forces her to do all the cognitive work, it also increases the total amount of work to be done. The kinder response would be for me to provide a (small) set of concrete options that won’t require her to simulate my preferences.

We can also take this idea from interpersonal settings to a design principle. Suppose that I encounter a bus stop in a new city. Since I don’t know the frequency of the bus schedule, as I wait at the stop I will have to constantly do three things: (1) remain vigilant, staring into the distance for the bus, (2) update my beliefs about the expected waiting time based on how long I’ve already waited,[5] and (3) check this estimate against my utility for prompt arrival and the cost of an Uber. If instead the city posts a digital sign counting down to the next arrival, then I won’t have to stare into the distance, I won’t have to recalculate my estimate of the arrival time, and I will only have to check my utility function once.[6] Not a huge difference in cognitive load for a single person, but a difference that can quickly add up over the transit population.
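
To make step (2) concrete, here is a minimal sketch (in Python) of the kind of Bayesian update the waiting rider is implicitly running. The candidate headways and the uniform prior over them are my own toy assumptions for illustration, not something taken from Christian & Griffiths:

# A rider arrives at a uniformly random moment within an unknown bus headway T,
# so their total wait is Uniform(0, T). After `waited` minutes with no bus,
# they update a uniform prior over a few candidate headways.

def expected_remaining_wait(waited, headways=(5, 10, 20, 40)):
    """Posterior expected minutes until the bus, given `waited` minutes so far."""
    # Likelihood of still waiting after `waited` minutes if the headway is T:
    # P(total wait > waited | T) = (T - waited) / T, or 0 if T has been exceeded.
    weights = {T: max(0.0, (T - waited) / T) for T in headways}
    total = sum(weights.values())
    if total == 0:
        return float("inf")  # every candidate headway has already been exceeded
    posterior = {T: w / total for T, w in weights.items()}
    # Given headway T and a wait beyond `waited`, the remaining wait is
    # Uniform(0, T - waited), with conditional expectation (T - waited) / 2.
    return sum(p * (T - waited) / 2 for T, p in posterior.items())

for w in (0, 5, 15, 30):
    print(f"waited {w:2d} min -> expect about {expected_remaining_wait(w):.1f} more")

Notice that for this prior the expected remaining wait need not shrink as the minutes tick by: a long fruitless wait is evidence for a long headway. That is exactly the recalculation the countdown sign spares every rider.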

Finally, let us consider an explicitly computational example from mechanism design: an auction.[7] In a classic blind auction, a single item is up for auction and each bidder writes down their bid privately on a piece of paper. The auctioneer then looks at all the pieces of paper, giving the item to the bidder with the highest bid and charging them whatever they bid.

Let’s analyse this auction more closely with just two bidders: Alice and Bob. Alice values the item at a and Bob at b. Suppose that in this particular case, a > b. If Alice simply bids her valuation a then she will win, but not gain any value. On the other hand, if she could guess Bob’s bid and make her own bid a penny over b then she’d still win the item and make a profit of a – b. Of course, Bob doesn’t know ahead of time that he is going to lose the auction, so he will be going through a similar internal simulation of what Alice might do. In the end, both will have to spend a lot of effort simulating the other and in most cases will not bid their true valuation. They won’t employ a truthful bidding strategy. In fact, the first-price sealed-bid auction has no dominant bidding strategy.
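
Alice’s predicament is easy to see in a toy calculation. The sketch below (my own, with made-up numbers) computes her best bid in the first-price auction for a few guesses about Bob’s bid; since her best response shifts with each guess, she cannot avoid simulating him:

# First-price rule: the highest bid wins and pays its own bid.
# Alice values the item at a = 100; her payoff from bidding x is (a - x) if she
# wins, and 0 otherwise. Her best bid depends entirely on what she thinks Bob bids.

a = 100  # Alice's valuation

def alice_payoff(bid, bob_bid):
    return a - bid if bid > bob_bid else 0

for bob_bid in (20, 60, 90):
    best = max(range(a + 1), key=lambda x: alice_payoff(x, bob_bid))
    print(f"if Bob bids {bob_bid}, Alice's best bid is {best} "
          f"(profit {alice_payoff(best, bob_bid)})")

# Bidding her true value a always yields zero profit, and the profitable bid
# moves with Bob's bid: there is no single dominant bidding strategy.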

If the item at stake is extremely valuable, you could imagine that Alice would hire a consultant to do the difficult cognitive task of simulating what Bob might bid. Of course, Bob will do the same. What happens if they hire the same consultant? Well, then he’d see both valuations and just tell Alice to bid a penny over b (or vice-versa with Bob, if b > a). And if they hire different consultants then those consultants will have the difficult task of simulating each other to find this perfect advice. But if you are the auctioneer, why force the participants to waste money hiring consultants? Instead, we can just incorporate the trusted advisor into the bidding process. This is the idea behind the revelation principle. It is a way to transform any game (with some dominant strategy) into one where the truthful strategy is dominant.

In the case of the sealed-bid auction, the auctioneer could simply award the item to the top bidder, but make them pay the second-highest bid. This effectively rolls up the advisor (to whom Alice told her true valuation so that he’d tell her what to bid) into the auction mechanism itself. It saves Alice and Bob the effort of simulating each other (or paying someone else to do that) and allows them to just bid truthfully. This is known as the Vickrey auction — the computationally kind sealed-bid auction. When Google holds auctions for their ads or the US government for swaths of radio frequencies, they use Vickrey-style auctions and are thus kind to their clients. Instead of wasting the bidders’ resources by forcing them to simulate each other, the auctioneer can switch to a well-designed process and save everybody this needless effort.
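
We can also check the kindness of the Vickrey auction directly. The brute-force sketch below (again with arbitrary numbers of my own choosing) confirms that, whatever the opposing bid turns out to be, bidding your true valuation does at least as well as any deviation, so there is nothing to gain from simulating the other bidder:

# Second-price (Vickrey) rule: the higher bid wins, but the winner pays the
# other bid. Check that bidding your true value weakly dominates every deviation.

def vickrey_payoff(value, bid, other_bid):
    return value - other_bid if bid > other_bid else 0

value = 100
truthful_is_dominant = all(
    vickrey_payoff(value, value, other) >= vickrey_payoff(value, deviation, other)
    for other in range(201)
    for deviation in range(201)
)
print(truthful_is_dominant)  # True: no deviation ever beats bidding your value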

What do you think of computational kindness, dear reader? What are some other examples where the world around us can be improved by being more mindful of the computational problems that we pose each other? As an ethical principle, does computational kindness add something beyond the typical concerns of the moral philosopher? I think that opens the door for algorithmic philosophy to make a contribution not only to metaphysics, epistemology, and philosophy of science but also to (meta-)ethics.[8]

Notes

  1. Opening with an endorsement of succinctness puts me in the awkward position of giving you, dear reader, a tempting metric to judge me by. A metric that I often fail to satisfy. The introduction of extensive footnotes in my posts has been part of my way to mitigate my wordiness. You can read the main text, and skip these long notes.

    As a secondary defence, however: terseness is not the only way to be computationally kind. In fact, it can be unkind when parsing the terse code would require running needless loops of thinking.

    Finally, even the unkindness of ambiguity-in-presentation is not necessarily bad. For example, the author might be trying to encourage active reading, to facilitate a fuller understanding.

  2. For full disclosure: this book was mailed to me by a marketing manager for Henry Holt and Company, with the hope that I would review it on this blog. Overall, I enjoyed the book, and there are many to whom I would recommend it. I will write a full review soon highlighting its strengths and shortcomings, and giving a fuller flavour of the content that it covers.
  3. Zipf’s tension — and many of the other interpersonal examples in this post — is premised on the assumption that the point of communication is to convey information unambiguously. An assumption that can be questioned.
  4. This is not to imply that this hypothetical interchange is worse for the pair taken as a whole. By having to regularly simulate each other’s preferences, and then test our simulations for accuracy — do I frown at her decision or am I excited about it? — we can build intimacy and understanding of each other.

    I am also not implying that the only motivation for an unkind response is trying to conserve my own processing effort. Consider, for example, the Ben Franklin effect: "He that has once done you a kindness will be more ready to do you another than he whom you yourself have obliged." Building on the previous point, interlocutors following the unkind strategy in alternation might help fortify a relationship.

  5. Christian & Griffiths consider how we should update our expected waiting time for such cases in their Chapter 6: Bayes’s Rule.
  6. I think that computational kindness also helps me understand why I find subways to be less stressful than buses. Not only is the schedule more consistent, but the train will only stop at stations and will stop at every station. Thus, I don’t need to remain constantly vigilant about pulling the stop request, and can tune in only when the vehicle slows down to see if this is my stop.
  7. Christian & Griffiths consider this and other examples in their Chapter 11: Game Theory.
  8. For practical ethics, computer science already provides us with important considerations. For example, as I discussed in weapons of math destruction and the ethics of big data. But the discussion in that earlier post uses computer science in a very different way than here. My closest prior post in this direction might be on hiding lies in complexity, which — with hindsight — I could re-interpret as an example of computational cruelty in finance.

About Artem Kaznatcheev
From the Department of Computer Science at Oxford University and Department of Translational Hematology & Oncology Research at Cleveland Clinic, I marvel at the world through algorithmic lenses. My mind is drawn to evolutionary dynamics, theoretical computer science, mathematical oncology, computational learning theory, and philosophy of science. Previously I was at the Department of Integrated Mathematical Oncology at Moffitt Cancer Center, and the School of Computer Science and Department of Psychology at McGill University. In a past life, I worried about quantum queries at the Institute for Quantum Computing and Department of Combinatorics & Optimization at University of Waterloo and as a visitor to the Centre for Quantum Technologies at National University of Singapore. Meander with me on Google+ and Twitter.

10 Responses to Computational kindness and the revelation principle

  1. bchaller says:

    I don’t agree with the restaurant choice example, really. If I say "I don’t care, you choose", I literally mean that I don’t care, and that anything the other person chooses is OK with me. They do not need to simulate my preferences, because I have none. They also save the work of trying to intersect my preferences with theirs; they can do the computationally easier task of simply evaluating their own preferences. And finally, our restaurant choice is more likely to be optimal, because they will not be constrained to choosing within a limited set of options provided by me. The "I don’t care, you choose" response is only unkind if it is not honest – if the person responds to a choice by saying "oh, no, I don’t like that" or by being grumpy or whatever. Then the person is just being passive-aggressive – which is unkind in more ways than just computationally!

    And by the way, Pinker has some useful things to say about this sort of thing in his recent book on how to write well.

    • Thanks for the feedback. Those are definitely two possible use-cases. But I am not sure how typical they are. Let’s focus on the honest one. So you are saying: there is a condition where my friend has strong preferences, and yet says “I don’t care, you choose” and in my response, since I have weak preferences, I pass and thus everybody wins. But then isn’t my friend being passive-aggressive in the opening? Since she has preferences?

      The more typical setting in my experience is that neither one of us has strong preferences. And both of us are "ok with anything". Otherwise, she would have just opened with "do you want to get sushi tonight?" But a decision must still be reached, and even though we have weak to nonexistent preferences, some choices are still (slightly) better than others for both of us.

      To go with my own experience further, let me turn the tables around so that I am asking “where do you want to go tonight?” and my friend responds with “I don’t care, whatever you like.” At this point, since I had weak to non-existent preferences, I will proceed to simulate her preferences and pick the first place that she would usually like, without consulting my own preferences all that much.

      I think this opens an interesting avenue on the difficulty of knowing oneself. The reason I end up simulating my friend’s preferences rather than my own is actually because it is easier. When my own preferences are weak, I find them to be particularly hard to evaluate. So I guess in that case, no unkindness is being done by my friend. Although maybe I should have simulated her preferences to start with and just opened with a suggestion that I thought she’d want.

      I don’t think too much rests on this particular example though; we could definitely substitute something else in. Are the other examples more convincing for you?

      For Pinker’s book: is that The Sense of Style: The Thinking Person’s Guide to Writing in the 21st Century? Do you recommend it? Do you feel like you benefited from reading it?

      • bchaller says:

        Hi Artem. As you say, lots of different use-cases here. I agree that if someone has strong preferences but says “I don’t care, you choose”, they are being passive-aggressive. Well, passive at least; the “aggressive” part would then come in if they actually complained, got grouchy, etc., when the other person didn’t read their mind. :-> Many people seem to simply be passive, and even to take pleasure in submitting to another person’s wishes rather than expressing their own; if there are no hurt feelings, then this is not passive-aggressive, and not necessarily a bad thing.

        Another way to spin it is this. Suppose I have preferences, but they are not very strong. I could open by stating my preferences; you seem to find that desirable. But that also puts a burden on the other person, if they also have preferences. What if they don’t like the places I suggested? They might feel pressured to sublimate their preferences, even though my preferences were in fact weak. Even if I state explicitly that my preferences are weak, they might worry that they are really stronger than I am saying – which means they are having to go through the work of trying to simulate my true preferences even though I stated them explicitly.

        Indeed, it gets worse. Suppose I try to guess my friend’s preferences, and thus say “let’s have dinner at Viva Taqueria!” My friend usually likes Viva, but she doesn’t feel like it tonight. Now I have put her into a very complicated situation indeed, because she has to guess whether I have proposed Viva because I really want to go there myself, or have proposed it simply because I think it is where she wants to go. She has to guess at whether I know that she likes Viva, and if so, whether that motivated my suggestion or not. So she now has to not only try to simulate me, she even has to try to simulate my simulation of her! How much simpler would it have been if I had just said “I don’t really care; where do you want to go for dinner?”

        Or – rewinding a bit – they might counter with a different proposal, which I don’t like, and we might end up in conflict. We might end up eating at a place that neither of us likes, just because the negotiation got too complicated.

        So what I’m arguing is that sometimes passivity might lead to the best available outcome, and also that sometimes trying to be explicit and prevent the other person from having to perform hard cognitive work can actually put them in the position of doing *more* cognitive work. Social interactions are not as cut-and-dried as you are painting them, I think.

        Regarding the "difficulty of knowing oneself", you say "the reason I end up simulating my friend’s preferences rather than my own is actually because it is easier". I’m not sure this is right. If my preferences are weak, then it is more important to me that my friend be happy with where we go, than that we follow my weak preferences. Making my friend happy makes me happy. I think this is the actual reason for a lot of the "I don’t care, you choose" dancing – we are trying to place our friend’s preferences above our own because it makes us happy to see our friend happy, but our friend has the same preference in reverse.

        I think it’s a good example actually, because reflecting on it clearly shows all the social and emotional complexity behind these sorts of interactions. It’s a familiar situation that we all confront almost daily, and yet it is full of nuance.

        As to Pinker, yes, that’s the book. I’m only about a third of the way in so far, but I’m both enjoying it and finding it instructive with respect to my own writing. Pinker is one of my favorite authors; if you haven’t read him, you’re in for a treat. (And in that case, this book might not be the best place to start; I might recommend The Blank Slate, or The Language Instinct. But this book is the one that seems related to your discussion here of trying to avoid placing cognitive burdens upon others.)

      • Joel Malard says:

        A friend with strong preferences and who tells you "I don’t care, you choose" may be saying "surprise me", not so much passive-aggressive as inviting you to express yourself more strongly. In that case, "I don’t care" is an act of kindness.

  2. Ridiculon says:

    An example that comes up in conversation somewhat rarely is the conversation starter "Guess what?". If the speaker actually has the expectation that you guess, it is computationally exhausting enough that many people will just refuse to engage.

    • That is a good example. A possible critique might be that people don’t actually expect you to guess in such cases, but are just announcing “I am about to share something that I am excited about!”. Similar to how “how are you?” in typical US conversation is often not meant as an actual inquiry, but just a ritualized greeting where you respond with “good, and you?”.

  3. greenjeff says:

    1) if there isn’t a way to be computationally kinder on the scale of trade with the human species a la bitcoin, I would be surprised.

    2) In practice this kindness requires that you have a model of what kind of machine the people you are being kind to are (especially if they have limited memory).

