What is the algorithmic lens?

If you are a regular reader then you are probably familiar with my constant mention of the algorithmic lens. I insist that we must apply it to everything: biology, cognitive science, ecology, economics, evolution, finance, philosophy, probability, and social science. If you’re still reading this then you have incredible powers to resist clicking links. Also, you are probably mathematically inclined and appreciate the art of good definitions. As such, you must be incredibly irked by my lack of any attempt to explicitly define the algorithmic lens. You are not the only one, dear reader; there have been many times when I have been put on the spot and asked to define this ‘algorithmic lens’. My typical response has been the blank stare; not the “oh my, why don’t you already know this?” stare, but the “oh my, I don’t actually know how to define this” stare. Like the artist, continental philosopher, or literary critic, I know how ‘algorithmic lens’ makes me feel and what it means to me, but I just can’t provide a binding definition. Sometimes I even think it is best left underspecified, but I won’t let that thought stop me from attempting a definition.

If you are not a regular reader then — chances are — you are one of the 33 thousand visitors that have come here from Reddit. If you don’t know what Reddit is then you must watch CGPGrey’s explanation before you can continue:

My interaction with Reddit started out of vanity, the simple ego stroking of driving large amounts of traffic to this blog or other projects (like StackExchange; I originally joined Reddit to help advertise that I am involved in CogSci.SE). The gamification of seeing the visitor counter creep up helped me find the motivation to write posts more often, and the dismal performance of many of my early shares guided me in what to do and not to do. Like many users, however, I quickly discovered that links are just the tip of the Reddit iceberg. The greatest part is the community that builds around various subreddits and the discussions in the comments. One of these discussions is a great starting point for defining the algorithmic lens. The following is the definition by user @quattro from an r/bioinformatics discussion of my recent post on what theoretical computer science can offer biology:

While we would all love better algorithms to calculate estimations for our models and handle larger amounts of data, this post is discussing how can we view biology through the algorithmic lens. That is to say, how can evolution itself accomplish certain computational challenges?

The bread and butter of theoretical computer science is the ability to take an abstract problem and show that it requires some resources [be it time, or memory, or communication] by reducing a known/characterized problem to it.

THIS is what is meant by “What can theoretical cs offer biology”. Not necessarily “better algorithms”, but “what is required resource-wise for nature to accomplish these things”?

This really highlights what sets computational complexity apart from most other disciplines of thought. Science and engineering can usually tell us what can happen or how things can happen, but they rarely ever give us negative statements. With the exception of some examples in physics (like conservation of energy and the laws of thermodynamics ruling out perpetual motion machines), science almost never says “this can’t happen”, especially in cases where the mechanisms that could potentially achieve the impossible goal are not well characterized. Computational complexity, on the other hand, takes a macrodynamic goal (to solve some problem) and a broad macrodynamic resource like time, space, or energy, and then tries to characterize what any microdynamic process (algorithm) can achieve with those resources. Since any scientific theory is going to have to be mechanistic (else we would not view it as an explanation), and algorithms capture all mechanistic procedures, this places no restrictions on the conceivable processes. Algorithmic generality is especially useful in settings like biology and ecology, where historicity and experimentally inaccessible time-scales limit the detail to which we can know microdynamic processes, or in economics and the social sciences, where theorizing about a process affects the system being studied, sometimes in an adversarial way. This ability and desire to deal rigorously with questions at such a general level is what sets the algorithmic lens apart from typical science.

The second strength of @quattro’s definition is the implicit demarcation between the algorithmic and computational lens. The computational lens is boring, at its shallowest it is just the use of computing machines to automate tasks in science (sometimes at the cost of understanding), and at its deepest it just continues the historic trend of drawing analogies between our current highest technology and the unknown. Saying that the brain, evolution, or the universe is like a computer is seldom explanatory and often misleading. Realizing that any scientific theory is a language and corresponding model of computation, and thus tools from the analysis of algorithms and computational complexity can be used to turn its heuristic models into abstractions, is much more exciting. The former is the computational view and the latter is the algorithmic lens.

Of course, I can’t claim title over the algorithmic lens. The concept has been advocated before me, and I view Christos Papadimitriou’s talk and Julia Kempe’s course as inspirations for my views. I also realize that my current attempt is not inclusive of all the tools and thoughts of theoretical computer science, and I need to extend the definition from just Theory A to include both Theory A and Theory B. How does the algorithmic lens make you feel? Is my definition off? How would you define it?

About Artem Kaznatcheev
From the Department of Computer Science at Oxford University and Department of Translational Hematology & Oncology Research at Cleveland Clinic, I marvel at the world through algorithmic lenses. My mind is drawn to evolutionary dynamics, theoretical computer science, mathematical oncology, computational learning theory, and philosophy of science. Previously I was at the Department of Integrated Mathematical Oncology at Moffitt Cancer Center, and the School of Computer Science and Department of Psychology at McGill University. In a past life, I worried about quantum queries at the Institute for Quantum Computing and Department of Combinatorics & Optimization at University of Waterloo and as a visitor to the Centre for Quantum Technologies at National University of Singapore. Meander with me on Google+ and Twitter.

17 Responses to What is the algorithmic lens?

  1. John Baez says:

    Just to stir up trouble, I’ll argue:

    1) Science is very good at providing us with negative statements. If you drop a rock, it won’t fall up. There’s no chemical reaction that turns salt into sugar. And so on. Of course, you have to add more fine print to make these precise and eliminate loopholes… but the point is, any statement about what will happen implies infinitely many statements about what won’t happen. This is one way science helps us, by keeping us from wasting time trying to do things that won’t work.

    2) Almost all nontrivial lower bounds on computational complexity remain unproven:


    We haven’t even been able to rule out a linear-time algorithm for multiplying integers:


    So, while computational complexity theory may someday prove lots of things can’t be done with limited resources, much of this promise remains unfulfilled at present.

    • Thank you for stirring up trouble; without trouble my views will never grow. I encourage you to stir up more.

      The example of “drop a rock, it won’t fall up” is of a very different kind to me than “there is no perpetual motion machine”. First, it is basically a repetition of an experiment that has already been conducted to shape the theory of gravity, and so it feels less ‘general’. Further, the more open-ended statement like “an object will fall down” needs to make a lot of assumptions about the objects and algorithms involved: the object is less buoyant than air, its initial velocity is less than Earth’s escape velocity, it doesn’t have jet packs or wings, etc. This actually reveals a very deep understanding of the relevant microproperties of the object compared to the macrodynamic of “will it fall up, or down?”. This feels much less like an impossibility statement than “there is no comparison-based algorithm that sorts with fewer than Ω(n log n) comparisons in the worst case”. I was really thinking of settings like shortcuts in ecology, where we don’t have such direct access to an adequate theory of the microdynamics.
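      To make that sorting bound concrete, here is a quick numerical sketch (my illustration, not part of the original exchange): any comparison-based sort must distinguish all n! input orderings, so some input forces at least ⌈log₂(n!)⌉ = Ω(n log n) comparisons, no matter what the algorithm is. Brute-forcing a comparison-counting merge sort over all permutations shows it respecting this bound:

```python
import math
from itertools import permutations

def merge_sort(xs, count):
    """Sort xs, tallying every comparison in the one-element list `count`."""
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left = merge_sort(xs[:mid], count)
    right = merge_sort(xs[mid:], count)
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        count[0] += 1  # one comparison between elements
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

def worst_case_comparisons(n):
    """Maximum number of comparisons merge sort makes over all inputs of size n."""
    worst = 0
    for p in permutations(range(n)):
        count = [0]
        assert merge_sort(list(p), count) == sorted(p)
        worst = max(worst, count[0])
    return worst

for n in range(2, 7):
    # information-theoretic lower bound: must distinguish n! orderings
    lower = math.ceil(math.log2(math.factorial(n)))
    print(f"n={n}: lower bound {lower}, merge sort worst case {worst_case_comparisons(n)}")
```

      No sorting algorithm, however clever, can beat the ⌈log₂(n!)⌉ column; merge sort’s worst case sits just above it. The point is that the bound quantifies over every conceivable algorithm, not just the ones we have tried.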

      I agree that it is unfortunate that so much of computational complexity must rely on some basic separation assumptions. However, I feel like these separations are ones for which we have enough evidence to start building scientific results on them. Kind of like how physicists were happy to use renormalization tricks before there was any reason to believe that they weren’t mathematical nonsense. Also, its heritage of being born as a branch of mathematics makes the algorithmic lens more explicit about the extra-scientific assumptions it uses, so if something strange does happen with the P vs. NP question, at least it will be clear which theories it affects. Finally, there are resources (query and communication complexity, say) or restricted models of computation (regular languages, say) where we don’t need conditional results to prove theorems. I think the promise is fulfilled enough to be scientifically useful.

      The type of computational model does raise some questions, since the Church-Turing thesis doesn’t help us for complexity classes more subtle than the computable. For example, when are we allowed to use the RSA hardness assumptions while using the algorithmic lens? If the underlying model of computation is classical then we should be fine, but if it’s quantum, then we can’t. However, I don’t think this is a show-stopper either: usually models are expressed in a common framework, and we can figure out from this framework what the right model of computation is. Much like an ecologist can ignore quantum effects when worrying about macroscopic properties of predator-prey ratios.

    • codetaku says:

      Definitions can be very insightful and illuminating. As can rephrasing things in equivalent terms which might have the same denotation but differing connotation. For instance, you say “almost all nontrivial lower bounds on computational complexity remain unproven.” What is a rephrasing of this statement? How about “almost all nontrivial lower bounds on computational complexity have not been expressed in contemporary mathematical terms”?

      If we were to discover a principle which could not (or at least not without tremendous difficulty) be expressed by our existing mathematical systems – what should we do with it? Would Einstein’s ideas of relativity be discarded were they discovered and explained and experimentally verified before complex numbers became widely accepted by the mathematical community? In terms of proof, we have likewise never proven a link between mathematics and reality. We have, as we have with computational complexity ideas, an abundance of experimental evidence of high correlation… but we have no proof.

      Without dipping into the actual veracity of the idea of whether computational complexity deals with true ideas which cannot be expressed by our mathematical systems (a discussion which I don’t believe can have any productive end at present), can you support the idea that we should always withhold the designation of ‘truth’ from anything which cannot be succinctly proven by contemporary mathematical systems? If you acknowledge that things could exist which are ‘outside’ of mathematics, how could one spot them?

      • Thank you for the comment! Unfortunately, I think there are some common misconceptions in your statements that I would like to address. Let me know if I misunderstood something.

        First, your comment leaves me with the impression (especially your reference to Einstein) that you believe that ‘truth’ is used in the same way in mathematics as in science. Verificationism is a vestige of logical positivism — a philosophical stance that I am allergic to — that is unfortunately still common in lay discussions (often it is scientists that are having these lay discussions) on the philosophy of science. However, I don’t know of any serious philosophers of science that hold on to this viewpoint; most modern philosophy of science follows a more skeptical, falsification-oriented approach (or refinements of it). As such, the word ‘true’ in science is nothing like the word ‘true’ in mathematics. Mathematical theorems — unlike scientific statements — are not temporary statements to be overturned by more evidence later.

        Second, your discussion of my use of “unproven” suggests that you think that I equate “true” with “proven”. This is definitely not the case. In mathematics, when someone says “this is unproven” they don’t mean to suggest that “this is untrue”; what they mean to suggest is “I have no way of knowing whether it is true or not, yet” — but that doesn’t stop me from having beliefs. My beliefs could be influenced by empirical considerations, or anything else. Even the much stronger statement of “provable” cannot be equated with “true”, as Gödel famously showed. In fact, there is a small subset of theorists who believe that the P vs NP question is independent of our formal systems; however, this is a minority group, and it is a hard statement to make since it is not clear what axiomatic system best serves as a foundation for cstheory.

        Finally, and this is a much more subtle point, your restatement of “almost all nontrivial lower bounds on computational complexity remain unproven” is not as innocent as it might seem. I definitely believe that it is great to have many different and equivalent definitions for terms, as it is to have many different proofs of a theorem. However, “not expressed in contemporary mathematical terms” is a stronger statement that computer scientists actually study in depth. One of the coolest parts of theoretical computer science is how deeply self-reflective it is. CStheorists have been stuck trying to prove P vs NP for a long time, and in an act of self-reflection have shown that there exist certain barriers that eliminate specific proof techniques that have been used to prove most of the other results in cstheory. This allows computer scientists to make very strong statements like “P vs NP cannot be resolved by our current tools”. Of course, there have been several developments of proof techniques that aren’t natural proofs and don’t relativize or algebrize, so we don’t know if there is a way for these approaches to work or not.

  2. Pingback: Bounded rationality: systematic mistakes and conflicting agents of mind | Theory, Evolution, and Games Group

  3. Pingback: John Maynard Smith introducing games animals play | Theory, Evolution, and Games Group

  4. Pingback: Mathematics in finance and hiding lies in complexity | Theory, Evolution, and Games Group

  5. Pingback: Software through the lens of evolutionary biology | Theory, Evolution, and Games Group

  6. Pingback: Quantum query complexity | Theory, Evolution, and Games Group

  7. Pingback: Cataloging a year of blogging: the algorithmic world | Theory, Evolution, and Games Group

  8. Pingback: Evolution is a special kind of (machine) learning | Theory, Evolution, and Games Group

  9. Boris Borcic says:

    I perceive your “algorithmic lens” is related to something that, in younger years, I long felt was missing: the effort to use in hindsight not the power of computers but the acquired lessons from knowing them. The missing effort to compute the intellectual sum of what we knew before computers and what we’d learned from computers — without further participation of computers.

    • I’m not completely sure what you mean here. The technology we call computers is often used as a metaphor in science, for instance all of cognitive science is based on this metaphor. This is also typical of scientific progress, where people use the leading technology of the day as a metaphor for what they want to understand. Maybe this is why ‘network science’ is a thing now, and magically relevant to everything.

      For me, the algorithmic lens is really meant to be independent of computing machines (so in that way it agrees with your comment). It is just meant as logical analysis that takes processes and dynamics seriously. Philosophically, I meant it as an extension of analytic philosophy or maybe Russell’s logical analysis (if that is more precise). In this regard, I guess we could say that “we learnt” how to worry about computational complexity from computers, but I am not sure if that is what most people would have in mind upon hearing that.

      I’ll have to reflect more on this. I have a post in the works where I contrast “algorithmic” versus “computational” thinking a bit in the context of biology, so maybe that will help clear things up.

  10. Pingback: Computational theories of evolution | Theory, Evolution, and Games Group

  11. Pingback: Five motivations for theoretical computer science | Theory, Evolution, and Games Group

  12. Pingback: tribute/ celebration of the Algorithmic Lens | Turing Machine

  13. Pingback: Quick introduction: the algorithmic lens | Theory, Evolution, and Games Group
