Mathematics in finance and hiding lies in complexity
October 13, 2013
Few believe that Fermat actually had a correct proof, because the conjecture remained open for over 350 years, and when Andrew Wiles resolved it in 1993, his proof drew on some of the deepest mathematical ideas and techniques of the 20th century. Wiles had encountered the ‘theorem’ as a ten-year-old, but quickly realized that he was not equipped to tackle it. Although the question stayed in the back of his mind, it was only after Gerhard Frey, Jean-Pierre Serre, and Ken Ribet built a bridge between Fermat’s conjecture and the mainstream study of elliptic curves in 1982-1986, showing the mathematical community that the Taniyama–Shimura conjecture for semistable elliptic curves would imply Fermat’s Last Theorem, that Wiles had the audacity to change his research direction, skirt his teaching responsibilities at Princeton, and invest himself completely in finding a proof.
Had Wiles not resolved the conjecture in 1993, Fermat’s Last Theorem would likely have been made one of the Millennium Prize Problems. In 1998, Landon T. Clay — an armchair mathematician who majored in English at Harvard and went on to become a money manager, CEO of Eaton Vance Investment Managers, and one of the richest people in Boston — underwrote the million-dollar prizes and several other awards when he founded the Clay Mathematics Institute. Unfortunately, the public largely knows the prizes for the size of their purse, and not for the fundamental importance of the problems.
Clay’s most recent philanthropy was a sizable contribution to the construction of the Andrew Wiles math building at the University of Oxford. At the opening ceremony, Wiles bemoaned the abuse of mathematics during the financial crisis, saying that “one has to be aware now that mathematics can be misused and that we have to protect its good name.”
This was probably a reference to the widespread use of complex derivatives, and to the use of models like VaR to hide risk in the long tails of outcome distributions. Of course, no one can seriously blame mathematics alone for this — although the financial firms have done their best to throw quants under the bus. Mathematicians provide tools, and it is up to the users of those tools to turn them to good or evil. However, that doesn’t excuse mathematicians from ethical consideration in building these tools. It is a matter of figuring out: are we building guns or screwdrivers? Sure, in the wrong hands either can be used to injure or kill a person, but it is clear that one of the tools is designed solely for that purpose while the other is intended as a constructive implement.
For defenders of economics and finance, the popular story is that complex derivatives like collateralized debt obligations (CDOs) and credit default swaps (CDSs) allow participants to “complete the market” and reduce the effects of asymmetric information (DeMarzo, 2005). In particular, the information-empowered seller can find buyers for the information-insensitive part of the asset’s cash flow and retain the information-sensitive part. Detractors of finance point out that — in practice — pricing (or rating the risk of) a CDO is not robust even to very modest imprecision in evaluating the underlying risks (including systemic risk; see Coval et al., 2009). It is these mispricings of derivatives that most analyses place at the center of the recent financial crisis (Brunnermeier, 2009; Coval et al., 2009).
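The pooling-and-tranching story can be made concrete with a toy cash-flow split. This is a minimal sketch, not DeMarzo's actual model; the pool size and the senior face value of 7 are made-up numbers for illustration:

```python
def tranche(pool_cash_flow, senior_face):
    """Split a pooled cash flow into senior and junior tranches.

    The senior tranche is paid first, up to its face value; the junior
    tranche absorbs any shortfall. As long as total defaults stay small
    enough, the senior payout does not depend on *which* assets
    defaulted -- this is the 'information-insensitive' piece that an
    informed seller can sell off.
    """
    senior = min(pool_cash_flow, senior_face)
    junior = pool_cash_flow - senior
    return senior, junior

# A pool of 10 assets, each paying 1 on success and 0 on default;
# the senior tranche has face value 7.
for defaults in range(4):
    senior, junior = tranche(10 - defaults, senior_face=7)
    print(defaults, senior, junior)
# With 0-3 defaults the senior tranche always pays 7 in full; only the
# junior tranche's payout varies with the number of defaults.
```

A buyer of the senior tranche need not know which assets are lemons, only that at most 3 of the 10 will default; this is the sense in which that slice is insensitive to the seller's private information.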
So, are CDOs/CDSs a screwdriver turned murder weapon in the hands of greedy, unethical bankers, or a loaded handgun in the hands of an unknowing child? The algorithmic lens suggests that it might be the latter. Arora et al. (2011) showed that for computationally bounded market participants, DeMarzo’s (2005) perfect-rationality analysis does not hold, and derivatives can actually amplify (instead of reduce) the cost of asymmetric information. The way common complex derivatives are set up allows sellers to cherry-pick the packaged assets in such a way that the buyer cannot detect the hidden risk. Think of this as analogous to how Amazon can multiply two large prime numbers to form a public key, while you or credit-card thieves can’t factor that key to crack the RSA encryption. Except in this case, the thieves are doing the encoding and the honest party is tasked with the cracking, and the hard problem is finding the densest subgraph instead of factoring. In other words, complex derivatives are set up in such a way that it is easy to hide dishonesty behind their complexity.
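Here is a toy numerical sketch of the cherry-picking trick. The sizes, the 10% lemon rate, and the uniform planting scheme are illustrative assumptions, not Arora et al.'s exact construction:

```python
import random

random.seed(0)

N_ASSETS, N_LEMONS = 1000, 100        # seller knows 10% of assets are junk
N_DERIVS, PER_DERIV = 100, 30         # each derivative references 30 assets
N_TRAPPED, LEMONS_PLANTED = 10, 7     # seller stuffs 7 lemons into 10 derivatives

lemons = list(range(N_LEMONS))        # known to the seller, not the buyer
good = list(range(N_LEMONS, N_ASSETS))

derivatives = []
for d in range(N_DERIVS):
    if d < N_TRAPPED:
        # Cherry-picking: overweight lemons in a few chosen derivatives.
        picks = random.sample(lemons, LEMONS_PLANTED) + \
                random.sample(good, PER_DERIV - LEMONS_PLANTED)
    else:
        # Honest pooling: sample uniformly from all assets.
        picks = random.sample(range(N_ASSETS), PER_DERIV)
    derivatives.append(picks)

counts = [sum(a < N_LEMONS for a in deriv) for deriv in derivatives]
print("lemons in each booby-trapped derivative:", counts[:N_TRAPPED])
print("honest derivatives, mean lemons:", sum(counts[N_TRAPPED:]) / 90)
print("honest derivatives, max lemons: ", max(counts[N_TRAPPED:]))
```

Each trapped derivative holds 7 lemons against an honest mean of about 3, but an honest derivative can easily hold 7 lemons by chance, so no single derivative looks anomalous. What gives the seller away is the correlated placement of lemons across derivatives, and finding that pattern in the asset-derivative graph is essentially the planted densest-subgraph problem, which is believed to be computationally intractable.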
In this case, the blame lies not with the buyers’ pricing models and algorithms, nor with the market’s inherent information asymmetry. The fundamental laws of computation stop buyers from performing any better. The blame lies with the tool: complex derivatives are inherently unfair, or at least too easy to use for unfair ends. By using CDOs/CDSs, we give more power to people who have better access to information. Thus, we widen an already existing power gap, something that many (but obviously not all) would consider unethical. Without taking computational complexity into account, this is impossible to see; but even with the algorithmic lens to inform us, it is ultimately an ethical decision that needs to be made. Unfortunately, it seems that the ethics of bankers and mathematicians are fundamentally unaligned, and this makes it particularly difficult for well-meaning mathematicians to imagine how their models will be misused.
Notes and References
- Fermat’s claim, famously scribbled in the margin of his copy of Diophantus’ Arithmetica, reads:
it is impossible to separate a cube into two cubes, or a fourth power into two fourth powers, or in general, any power higher than the second, into two like powers. I have discovered a truly marvelous proof of this, which this margin is too narrow to contain.
As a testament to its difficulty, when David Hilbert was asked why he did not attempt to prove Fermat’s conjecture, he responded:
Before beginning I should have to put in three years of intensive study, and I haven’t that much time to squander on a probable failure.
However, that didn’t stop him from believing that all Diophantine equations could be solved by a mechanistic procedure, which he conjectured as his tenth problem at the 1900 International Congress of Mathematicians. In 1970, Yuri Matiyasevich resolved this conjecture in the negative, showing that deciding whether an arbitrary Diophantine equation has an integer solution is undecidable. This gave a computational justification for the difficulty of particular instances or families of equations like the one in Fermat’s theorem.
- Andrew Wiles’ words summarizing his experience searching for the proof capture the fear, wonder, and locally-pointless but globally-necessary meandering that overwhelms one when working on mathematics:
You enter the first room of the mansion and it’s completely dark. You stumble around bumping into the furniture but gradually you learn where each piece of furniture is. Finally, after six months or so, you find the light switch, you turn it on, and suddenly it’s all illuminated. You can see exactly where you were. Then you move into the next room and spend another six months in the dark. So each of these breakthroughs, while sometimes they’re momentary, sometimes over a period of a day or two, they are the culmination of, and couldn’t exist without, the many months of stumbling around in the dark that precede them.
- The Poincaré conjecture — every simply connected, closed 3-manifold is homeomorphic to the 3-sphere — is the only Millennium Problem that has been solved to date. To highlight the difference between the public’s obsession with the monetary incentives and a mathematician’s drive, Grigori Perelman — the mathematician who solved the Poincaré conjecture — declined both the Fields Medal and the Millennium Prize. For him, it was a matter of fairness to previous mathematicians (such as Richard Hamilton, who developed the Ricci flow technique that Perelman perfected) who had dedicated themselves to the problem. Unable to remain complacent with the perceived ethical degradation of the mathematical community, Perelman left his academic job, severed all ties with former colleagues, and (some believe) spends his time practicing wall stacking in sleazy Riichi Mahjong dives in Saint Petersburg, where he lives with his mother.
- In their paper, Arora et al. (2011) analyze a simplified model of derivatives, but similar tricks would be even easier in the derivatives actually used on Wall St. One of their open problems is whether a more standard or harder computational problem can be used to strengthen the negative results by looking at more realistic models of derivatives. See their FAQ for more information.
- Note that this ability to deceive is not purely a consequence of a lack of information on the part of the buyers. In particular, what DeMarzo (2005) showed is that a computationally unbounded buyer cannot be tricked even though they lack some information (just as public-key cryptography can’t work if the attackers are computationally unconstrained). Much like the evolutionary results of Livnat & Pippenger (2008), these systematic mistakes on the part of the buyer stem not from a lack of information but from the inability to do arbitrary computations.
- I suspect that finance is not the only place where this is a concern. Most fields where a large portion of the participants are not mathematically literate are open to exploitation by the seeming objectivity of mathematical models. I see this all the time with computational models in the social sciences and (to a lesser extent) biology, where modelers know how to hide their opinions or biases in the researcher degrees of freedom to make their models seem robust to those less familiar with the techniques.
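The note above on computationally unbounded buyers can be made concrete: an idealized buyer could detect tampering by exhaustively checking every group of derivatives for suspicious overlap, but the number of groups explodes combinatorially. A minimal sketch, where the tiny derivative lists and the overlap threshold are made up for illustration:

```python
from itertools import combinations
from math import comb

def suspicious_groups(derivatives, k, threshold):
    """Exhaustively search for k derivatives that share at least
    `threshold` assets -- feasible for an unbounded buyer, but it
    inspects comb(len(derivatives), k) groups."""
    hits = []
    for group in combinations(range(len(derivatives)), k):
        shared = set.intersection(*(set(derivatives[i]) for i in group))
        if len(shared) >= threshold:
            hits.append(group)
    return hits

# Three tiny derivatives; the first two suspiciously share assets 1 and 2.
derivs = [[1, 2, 3], [1, 2, 4], [5, 6, 7]]
print(suspicious_groups(derivs, k=2, threshold=2))   # finds the pair (0, 1)

# But the search space blows up: 100 derivatives in groups of 10 already
# means inspecting roughly 1.7e13 subsets.
print(comb(100, 10))
```

A computationally bounded buyer cannot afford this search, which is the gap the seller exploits.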
Arora, S., Barak, B., Brunnermeier, M., & Ge, R. (2011). Computational complexity and information asymmetry in financial products. Communications of the ACM, 54(5): 101-107. DOI: 10.1145/1941487.1941511
Brunnermeier, M. (2009). Deciphering the liquidity and credit crunch 2007-08. Journal of Economic Perspectives, 23(1): 77-100.
Coval, J., Jurek, J., Stafford, E. (2009). The economics of structured finance. Journal of Economic Perspectives, 23(1): 3-25.
DeMarzo, P. (2005). The pooling and tranching of securities: A model of informed intermediation. Review of Financial Studies, 18(1): 1-35.
Livnat, A., & Pippenger, N. (2008). Systematic mistakes are likely in bounded optimal decision-making systems. Journal of Theoretical Biology, 250(3): 410-423.