Lower bounds by negative adversary method

Are some questions harder than others?

Last week I quantified hardness of answering a question with a quantum computer as the quantum query complexity. I promised that this model would allow us to develop techniques for proving lower bounds. In fact, in this model there are two popular tools: the polynomial method, and the (negative) adversary method. In this week’s post, I’d like to highlight the latter.

Individual versus systemic risk in asset allocation

Proponents of free markets often believe in an “invisible hand” that guides an economic system without external controls like government regulation: if all market participants act purely out of self-interest, a highly efficient economic equilibrium should emerge. In the paper “Individual versus systemic risk and the Regulator’s Dilemma”, Beale et al. (2011) use agent-based simulations to show that a system of financial institutions, each attempting to minimize its own risk of failure, may not minimize the risk of failure of the system as a whole. The authors also suggest several ways to constrain the financial institutions in order to lower the risk of failure for the financial system. Their suggestion responds directly to the regulatory challenges of the recent financial crisis, in which the failures of a few institutions endangered the financial system and even the global economy.

[Figure: finance regulation. It’s easy to get tangled up trying to regulate banks.]

To illustrate the tension between individual and system optimality, the paper makes simple assumptions about the financial system and its participants. In a world of N independent banks and M assets, each bank invests its resources across the M assets from time 0 to time 1. The returns on the M assets are assumed to be independent and identically distributed, following a Student’s t-distribution with 1.5 degrees of freedom. A bank fails if its loss exceeds a certain threshold. Under these assumptions, each bank’s optimal allocation (the one minimizing its own chance of failure) is to invest equally in every asset.
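To make the individual side of this concrete, here is a minimal Monte Carlo sketch (not the authors’ code) that estimates a single bank’s failure probability under an equal-weight allocation versus putting everything in one asset. The number of assets and the failure threshold are illustrative values of my choosing, not the paper’s calibration.

```python
import numpy as np

rng = np.random.default_rng(0)

M = 10                  # number of assets (illustrative)
trials = 200_000
threshold = -3.0        # a bank fails if its portfolio return falls below this (illustrative)

# i.i.d. asset returns from a Student's t-distribution with 1.5 degrees of freedom
returns = rng.standard_t(df=1.5, size=(trials, M))

def failure_prob(weights):
    """Monte Carlo estimate of the probability that the portfolio return drops below the threshold."""
    portfolio = returns @ weights
    return (portfolio < threshold).mean()

equal_weights = np.full(M, 1 / M)     # spread evenly over all assets
single_asset = np.eye(M)[0]           # everything in one asset

print("P(fail | equal weights):", failure_prob(equal_weights))
print("P(fail | single asset): ", failure_prob(single_asset))
```

With these heavy-tailed returns the diversified portfolio still comes out ahead for the individual bank, which is the intuition behind the individually optimal equal allocation.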

However, a regulator is concerned with the failure of the financial system rather than the failure of any individual bank. To incorporate this idea, the paper gives the regulator a cost function c = k^s, where k is the number of failed banks. This cost function is the only coupling between banks in the model. If s > 1, each additional bank failure “costs” the system more (the marginal cost s \cdot k^{s-1} is increasing in k when s > 1). As s increases above 1, the systemically optimal allocation of the banks deviates further and further from the individually optimal allocation. When s = 2, the systemically optimal allocation is for each bank to invest entirely in a single asset, a drastic contrast to the individually optimal allocation of investing equally in each asset. In this situation, the safest investment allocation for the system is the riskiest investment allocation for the individual bank.
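The divergence between the two optima can be illustrated with a similar sketch. Below, the regulator’s expected cost E[k^s] is estimated for two extreme allocation schemes: every bank diversifying identically (each bank’s individual optimum) versus each bank concentrating in its own asset. Again, the sizes, threshold, and value of s are my illustrative choices rather than the paper’s parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

N, M = 10, 10           # banks and assets (illustrative)
trials = 20_000
threshold = -3.0        # illustrative failure threshold on a bank's portfolio return
s = 2.0                 # regulator's cost exponent: k failures cost k**s

def expected_cost(allocations):
    """allocations: N x M matrix of portfolio weights, one row per bank."""
    total = 0.0
    for _ in range(trials):
        r = rng.standard_t(df=1.5, size=M)         # one draw of the M asset returns
        k = np.sum(allocations @ r < threshold)    # number of banks that fail
        total += k ** s
    return total / trials

all_diversified = np.full((N, M), 1 / M)   # every bank holds the individually optimal portfolio
one_asset_each = np.eye(N, M)              # each bank concentrates in a different asset

print("E[k^s], all banks diversified:", expected_cost(all_diversified))
print("E[k^s], one asset per bank:   ", expected_cost(one_asset_each))
```

When all banks hold the same diversified portfolio they fail together, so k jumps straight from 0 to N; with one asset per bank the failures are independent, which keeps E[k^s] lower once s > 1.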

While the idea demonstrated above is interesting, the procedure is unnecessarily complex. The assumption that asset returns follow a Student’s t-distribution with 1.5 degrees of freedom is too crude to capture the behaviour of real financial assets, yet the distribution lacks the simplicity of Bernoulli or Gaussian distributions needed to arrive at analytical solutions (see Artem’s question on toy models of asset returns for more discussion). Consider bonds: the principal and coupon payments are either paid to the bondholder in full or only partially recovered in the event of a default, so bond returns look nothing like a t-distribution. Other common assets such as mortgages and consumer loans are not t-distributed either. The t-distribution therefore does not come close to capturing the probabilistic nature of many major financial assets, and it offers no additional accuracy over the simpler Gaussian or Bernoulli assumptions. Those simpler assumptions, on the other hand, are at least capable of yielding analytical solutions without tedious simulations.

The authors define two parameters, D and G, in an attempt to constrain the banks toward systemically optimal allocations. D denotes the average distance between the asset allocations of each pair of banks, and G denotes the distance between the banks’ average allocation and the individually optimal allocation. As s increases above 1, bank allocations with a higher D and a near-zero G are best for the system. To show the robustness of these two parameters, the authors varied other parameters such as the number of assets, the number of banks, the distributions of the assets, the correlation between the assets, and the form of the regulator’s cost function; in each case they found lower systemic risk when enforcing a near-zero G and a higher D. This result implies that each bank should concentrate in its own niche of financial assets, while the aggregate system still holds the optimal allocation.
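For concreteness, D and G might be computed along the following lines; the Euclidean distance is my illustrative choice, not necessarily the paper’s exact metric.

```python
import numpy as np

def diversity_parameters(allocations, individual_opt):
    """allocations: N x M matrix of bank portfolio weights.
    Returns (D, G): D is the average distance between each pair of banks' allocations,
    G is the distance between the banks' average allocation and the individually optimal one.
    Euclidean distance is used here as an illustrative choice."""
    N = allocations.shape[0]
    pair_distances = [np.linalg.norm(allocations[i] - allocations[j])
                      for i in range(N) for j in range(i + 1, N)]
    D = float(np.mean(pair_distances))
    G = float(np.linalg.norm(allocations.mean(axis=0) - individual_opt))
    return D, G

N, M = 10, 10
equal = np.full(M, 1 / M)
print(diversity_parameters(np.full((N, M), 1 / M), equal))  # D = 0, G = 0: identical diversified banks
print(diversity_parameters(np.eye(N, M), equal))            # high D, G = 0: each bank in its own niche
```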

Based on the paper, it may appear that the systemic risk of failure can be reduced by controlling for the parameters D and G (though without analytical solutions). Such controls would have to be enforced by a “regulator” with perfect information about the exact probabilistic nature of the financial products and about the individually optimal allocations. Moreover, this regulator would need unlimited political power to enforce its envisioned allocations. This is far from reality. Financial products differ in size and riskiness, and new ones are created continuously. Regulators such as the Department of the Treasury, the SEC, and the Federal Reserve also have very limited political power; for example, they were not legally allowed to rescue Lehman Brothers, whose failure led to the subsequent global credit and economic crisis.

The entire paper can be boiled down to one simple idea: optimal actions for individuals might not be optimal for the system, but if an all-powerful regulator forces individuals to act optimally for the system, the system will be more stable. This should come as no surprise to regular readers of this blog, since evolutionary game theory confronts this exact dilemma when looking for cooperation. The main result is rather trivial, but it opens the door to more realistic simulations. One idea would be to remove or weaken the “regulator” and instead add incentives for banks to act in a more systemically optimal way. It would be interesting to see how banks behave under such circumstances and whether their actions can lead to a systemically optimal equilibrium.

One key aspect of systemic failure is missing from this picture: it is not simply the simultaneous failure of many assets. Two features of bank operations matter here. First, banks take funding from clients and invest it in riskier assets; this only works as long as confidence in the bank’s strength prevents unexpected withdrawals, or the bank can sell those assets to pay back its clients. Second, banks obtain short-term loans from each other by putting up assets as collateral. These links couple the banks much more strongly than a simple regulator cost function, creating a banking ecosystem.

The strength of these inter-bank connections depends on the value of the collateral. When some banks suffer catastrophic losses on subprime loans, confidence in those banks is shaken and the value of their assets starts to fall. A bank’s clients may begin to withdraw their money, forcing the bank to sell assets to meet the demands and further depressing asset prices, sometimes to the point of a fire sale. Other banks then demand more, and higher-quality, collateral because of the depressed prices from the sell-off. The bank’s high-quality assets and cash become strained, prompting further worry about its health and still more withdrawals and collateral demands. Lack of confidence in one bank’s survival then spreads to the banks that have lent to it, triggering a fresh wave of withdrawals and collateral demands. Even healthy banks can be ruined in a matter of days by such a widespread panic.

As a result, inter-bank dealings are instrumental in any systemic failure. Beale et al. (2011) intentionally sidestepped this inter-bank link to derive their result purely from the perspective of asset failure. But the inter-bank link was the most important factor in the mass failure of the financial system: it is because of this link that the failure of one asset class (subprime mortgages) managed to nearly bring down the financial systems of the entire developed world. Not addressing the inter-bank link is simply not addressing the financial crisis at all.

Beale, N., Rand, D.G., Battey, H., Croxson, K., May, R.M., & Nowak, M.A. (2011). Individual versus systemic risk and the Regulator’s Dilemma. Proceedings of the National Academy of Sciences, 108(31), 12647-12652.

Habitual selfish agents and rationality

Many game-theoretical studies of cooperation have focused on the trade-off between selfish behavior that is optimal for the individual and cooperative behavior that is optimal for the whole. Because this trade-off is inherent to the prisoner’s dilemma, that game has been the preferred tool for studying mechanisms that encourage cooperation among selfish agents. Davies et al. [1] adopt a different framework, however: a coordination/anti-coordination game. Note that (anti-)coordination games are not outside the general class of cooperate-defect games that Artem discussed previously; they fall into the X and Y games he covered at the end of that post.

To get a flavor of Davies et al.’s game, imagine that there is a social network of scientists, each of whom is either (symmetrically) compatible or incompatible with each of their peers. Compatibility results in good collaborations, while incompatibility leads to shoddy research that benefits no one. Collaborations occur every night in one of two bars. Initially, scientists pick a bar randomly. Every night after that, each scientist decides in random order whether to stick with the bar they visited the previous night, or whether to switch bars to meet a greater number of compatible researchers. Because payoffs are symmetric, a stable configuration is quickly reached where no one wants to switch bars. While this arrangement represents a local maximum, in that each scientist makes the selfishly rational choice to stay, it generally fails to maximize the group’s total productivity. Thus, much as in the prisoner’s dilemma, rational agents have no motivation to unilaterally switch bars in the hopes that others follow suit, even though such selfishness ultimately results in lower payoffs for the whole group.
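To make these dynamics concrete, here is a toy sketch of the bar game (my own reading of the setup, not the authors’ code): each pair of scientists carries a symmetric weight of +1 (compatible) or -1 (incompatible), an agent’s true utility is taken to be the summed weight of peers currently in its bar, and agents greedily keep or switch bars. Different random starts typically settle into local optima of differing quality.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 40
# Symmetric compatibilities: +1 (compatible) or -1 (incompatible), zero diagonal.
W = rng.choice([-1, 1], size=(N, N))
W = np.triu(W, 1)
W = W + W.T

def true_utility(i, bars, weights):
    """Summed weight of agent i's peers in the same bar (one reading of the payoff)."""
    return weights[i] @ (bars == bars[i])

def settle(bars, weights, sweeps=100):
    """Agents update in random order, switching bars only when it strictly helps them."""
    bars = bars.copy()
    for _ in range(sweeps):
        moved = False
        for i in rng.permutation(N):
            stay = true_utility(i, bars, weights)
            bars[i] = 1 - bars[i]                    # tentative switch
            if true_utility(i, bars, weights) <= stay:
                bars[i] = 1 - bars[i]                # not worth it: switch back
            else:
                moved = True
        if not moved:
            break                                    # stable: no one wants to move
    return bars

def global_utility(bars):
    same = bars[:, None] == bars[None, :]
    return (W * same).sum() / 2                      # each pair counted once

# Several random restarts usually end in local optima of different quality.
for _ in range(5):
    final = settle(rng.integers(0, 2, size=N), W)
    print(global_utility(final))
```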

Davies et al. investigate how to tweak each agent’s local strategy (i.e., no knowledge of/concern for the global utility) such that agents remain selfishly rational in some sense, yet still optimize the global utility function. Their answer comes in the form of habituation: agents develop a preference for whichever agents they are currently paired with. Agents, rather than basing their behavior on true utility (a function of the number of agents they are coordinated with), instead base their behavior on perceived utility (a sum of their true utility and their preference for the agents they are interacting with). Because this enables agents to occasionally sacrifice some of their true utility in order to pursue a higher perceived utility (i.e., to switch to a bar with an initially lower payoff), the population is ultimately able to reliably converge to the global maximum. In brief, habituation acts by reinforcing local maxima which, in cases where the global function is a superposition of numerous pairwise interactions, tend to overlap with the global maximum, making it a strong and reliably reachable attractor. (See slides by Peter Helfer, who presented this paper at the EGT Reading Group 38, for a nice overview.)
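Building on the sketch above, here is one loose way to add habituation; the update rule, learning rate, and cap are my assumptions rather than the paper’s exact mechanism. Each agent accumulates a bounded preference for peers it currently shares a bar with, decisions are made on perceived weights (true weights plus preferences), and the learned preferences persist across repeated episodes of random restarts, so the true global utility of the settled configurations can be tracked across episodes.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 40
W = rng.choice([-1, 1], size=(N, N))
W = np.triu(W, 1)
W = W + W.T                               # true compatibilities

H = np.zeros((N, N))                      # habituation preferences (learned biases)
RATE, CAP = 0.02, 1.0                     # illustrative learning rate and bias cap

def settle(bars, weights, sweeps=100):
    """Greedy best response on the given (perceived) weights until no one moves."""
    bars = bars.copy()
    for _ in range(sweeps):
        moved = False
        for i in rng.permutation(N):
            same = bars == bars[i]
            if weights[i] @ ~same > weights[i] @ same:   # switching bars looks strictly better
                bars[i] = 1 - bars[i]
                moved = True
        if not moved:
            break
    return bars

def global_utility(bars):
    same = bars[:, None] == bars[None, :]
    return (W * same).sum() / 2

# Repeated episodes: settle on perceived weights (W + H), habituate toward the
# current pairing, then reset the bars; the preferences carry over between episodes.
for episode in range(30):
    bars = settle(rng.integers(0, 2, size=N), W + H)
    same = bars[:, None] == bars[None, :]
    H = np.clip(H + RATE * np.where(same, 1.0, -1.0), -CAP, CAP)
    np.fill_diagonal(H, 0)
    print(episode, global_utility(bars))
```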

The model’s workings and results aside, there are interesting questions to ask as to what habituation represents in this context. Artem has previously posted about our interest in the dissociation between objective and subjective rationality. At the heart of this issue is the question of whether we can explain certain examples of seemingly irrational behavior within the framework of selfish rationality. To do so, we can appeal to the notion of bounded rationality: agents are not omniscient and must rely on imperfect information to make their decisions. If the imperfection lies in what payoffs the agents think they are getting (i.e., what game they think they are playing), then we may have a straightforward way of accounting for deviations from objective rationality: the behavior is a result of rationality being applied subjectively, by a deluded agent.

For a loose analogy, consider the case of the missionary. From the point of view of an atheist, a Christian travelling to Africa and doing volunteer work there seems highly altruistic, as he is incurring a high cost and deriving few tangible benefits. At first blush, such behavior seems difficult to explain on the basis of selfish rationality. But consider things from the missionary’s point of view. If the reward for selflessness is an eternity in paradise and the alternative is an eternity of damnation, then becoming a missionary yields the best possible payoff. In short, if the world (or game) is what the missionary believes it to be, then the behavior is selfishly rational after all, rather than purely altruistic.

Davies et al. make a similar point. They observe that, though habituation constitutes a deviation from objective rationality, these agents are still playing rationally—they just happen to be playing a different game from the real one. In other words, behaviors are still selected to maximize utility; it’s just that their perceived utility (as defined by the agent) deviates from their true utility (as defined by the game). To return to the notion of bounded rationality, agent “irrationality” here is explainable as a deficiency in the agent’s knowledge of the game, rather than in its decision-making capacities.

So how do we make sense of habituation? While Davies et al. justify it as a simple biasing mechanism worthy of investigation in adaptive networks, it is not altogether clear what this implies for psychology. On one hand, if such cognitive biases are broadly effective at optimizing cooperative behavior, then it is plausible that they evolved, or are commonly adopted through learning. On the other hand, it is difficult to judge whether such a bias is useful in cooperative behavior beyond the present context. Here, it solves the specific problem of the population’s search for an optimal arrangement stalling in local maxima. This is a crucial point, as the goal here (optimizing global payoffs in this coordination/anti-coordination game) is more a problem of constraint satisfaction than of replacing selfish behavior with cooperation. To sum up, we should not approach habituation and its consequences as if it solves the same problem that we face in the prisoner’s dilemma.

References

  1. Davies, A., Watson, R., Mills, R., Buckley, C., & Noble, J. (2011). “If you can’t be with the one you love, love the one you’re with”: How individual habituation of agent interactions improves global utility. Artificial Life, 17(3), 167-181. DOI: 10.1162/artl_a_00030