Closing the gap between quantum and deterministic query complexity for easy to certify total functions

Recently, trying to keep up with my weekly post schedule, I’ve been a bit strapped for inspiration. As such, I’ve posted a few times on a major topic from my past life: quantum query complexity. I’ve mostly tried to describe some techniques for (lower) bounding query complexity, like the negative adversary method and span programs. But I’ve never really shown how to use these methods to actually prove interesting bounds.

Since I am again short of a post, I thought that this week I’d share a simple proof of a bound that these techniques make possible. This is based on an old note I wrote on 19 April 2011.

One of the big conjectures in quantum query complexity — at least a half decade ago when I was worrying about this topic — is that quantum queries give you at most a quadratic speed-up over deterministic queries for total functions. In symbols: D(f) = O(Q^2(f)). Since Grover’s algorithm already achieves a quadratic quantum speed-up (for the OR function, i.e. unstructured search), this conjecture basically says: you can’t do better than Grover.

In this post, I’ll prove a baby version of this conjecture.

Let’s call a total Boolean function easy to certify if one side of the function has constant certificate complexity. I’ll prove that for easy-to-certify total functions, D(f) = O(Q^2(f)).
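To make this concrete, here is a minimal brute-force sketch in Python (the helper names and the choice of OR as the example are mine, not from the original note) that computes one-sided certificate complexity for a small total function:

```python
from itertools import combinations, product

def certificate_size(f, x, n):
    """Smallest number of bit positions of x that, once revealed, force the value f(x)."""
    for k in range(n + 1):
        for S in combinations(range(n), k):
            # if every y that agrees with x on S has f(y) == f(x), then S certifies x
            if all(f(y) == f(x)
                   for y in product([0, 1], repeat=n)
                   if all(y[i] == x[i] for i in S)):
                return k
    return n  # never reached for a total function

def one_sided_certificate_complexity(f, n, b):
    """C_b(f): worst-case certificate size over all inputs x with f(x) = b."""
    return max(certificate_size(f, x, n)
               for x in product([0, 1], repeat=n) if f(x) == b)

# OR on n bits is easy to certify on the 1-side: any single 1-bit is a certificate.
n = 4
OR = lambda x: int(any(x))
print(one_sided_certificate_complexity(OR, n, 1))  # 1  (constant, independent of n)
print(one_sided_certificate_complexity(OR, n, 0))  # 4  (a 0-input must reveal every bit)
```

This one-sided asymmetry (constant on one side, growing with n on the other) is exactly what the easy-to-certify condition asks for.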

This is not an important result, but I think it is a cute illustration of standard techniques. And so it doesn’t get lost in my old pdf, I thought I’d finally convert it to a blog post. Think of this as a simple application of the adversary method.

Read more of this post

Span programs as a linear-algebraic representation of functions

I feel like TheEGG has been a bit monotone in the sort of theoretical computer science that I’ve been writing about recently. In part, this has been due to time constraints and the pressure of the weekly posting schedule (it has now been over a year with a post every calendar week); and in part due to my mind being too fixated on algorithmic biology.

So for this week, I want to change things up a bit. I want to discuss some of the math behind a success of cstheory applied to nature: quantum computing. It’s been six years since I blogged about quantum query complexity and the negative adversary method for lower bounding it. And it has been close to eight years since I worked on the topic.

But I did promise to write about span programs — a technique used to reason about query complexity. So in this post, I want to shift gears to quantum computing and discuss span programs. I doubt this is useful for thinking about evolution, but it never hurts to discuss a cool linear-algebraic representation of functions.

I started writing this post for the CSTheory Community Blog. Unfortunately, that blog is largely defunct. So, after 6 years, I decided to post on TheEGG instead.

Please humour me, dear reader.

Read more of this post

Realism and interfaces in philosophy of mind and metaphysics

In an earlier post, I discussed three theories of perception: naive realism, critical realism, and interfaces. To remind you of the terminology: naive realism is the stance that the world is exactly as we perceive it, and critical realism is the stance that perception resembles reality but doesn’t capture all of it. Borrowing an image from Kevin Song: if naive realism is a perfect picture, then critical realism is a blurry one. For a critical realist, our perception is — to move to another metaphor — a map of the territory that is reality; it distorts, omits details, adds some labels, and draws emphasis, but largely preserves the main structure. Interfaces, however, do not preserve structure. Borrowing now from Donald Hoffman: consider your computer desktop: what are the folders? They don’t reflect the complicated sequence of changes in magnetization in a thin film of ferromagnetic material inside a metal box called your hard drive, not even at a coarse-grained level. Nor do they hint at the complicated information processing that changes those magnetic fields into the photons that leave your screen. But they do allow you to have a predictable and intelligible interaction with your computer, something that would be much more difficult with just a magnetized needle and a steady hand. The interface does not resemble reality; it just allows us to act. Although the comments section of the earlier post became rather philosophical, my original intention was to stay in the realm of the current scientific discourse on perception. The distinction between realism and interfaces, however, also has a rich philosophical history — not only in epistemology but also in metaphysics — that I want to highlight with a few examples in this post.
Read more of this post

Falsifiability and Gandy’s variant of the Church-Turing thesis

In 1936, two years after Karl Popper published the first German version of The Logic of Scientific Discovery and introduced falsifiability, Alonzo Church, Alan Turing, and Emil Post each published independent papers on the Entscheidungsproblem, introducing the lambda calculus, Turing machines, and Post-Turing machines as mathematical models of computation. The years after saw many more models, all of which were shown to be equivalent to each other in what they could compute. This was summarized in the Church-Turing thesis: anything that is computable is computable by a Turing machine. An almost universally accepted, but also incredibly vague, statement. Of course, such an important thesis has developed many variants, and exploring or contrasting their formulations can be a very insightful way to understand and contrast different philosophies.

I believe that the original and most foundational version of the thesis is what I called Kleene’s purely mathematical formulation. Delving into this variant allowed us to explore the philosophy of mathematics; Platonism; and the purpose, power, and limitations of proof. However, because of the popularity of physicalism and the authority of science, I doubt that Kleene’s is the most popular variant. Instead, when people think of the Church-Turing thesis, they often think of what is computable in the world around them. I like to associate this variant with Turing’s long-time friend and student — Robin Gandy. I want to explore Gandy’s physical variant of the Church-Turing thesis to better understand the philosophy of science, theory-based conceptions, and the limits of falsifiability. In particular, I want to address what seems to me like the common misconception that the Church-Turing thesis is falsifiable.
Read more of this post

Kooky history of the quantum mind: reviving realism

One of my hobbies in undergrad was to spend time reading and editing Wikipedia. Towards the end of my studies, I started to specialize in going through Wikipedia’s fat-tail, removing articles on non-notable individuals, and trying to counter pseudoscientists, kooks, and cranks. Trying to understand why people subscribe to pseudoscience; how to demarcate real and pseudo-science; and confronting, correcting, or trolling hokum masquerading as science has occupied an unhealthy portion of my time on the internet ever since.

I had a particularly difficult struggle with the quantum mystics of Wikipedia. Some of them have long been active on the internet, and combined with the general confusion around both quantum mechanics and consciousness, they were a very difficult community to expose. An exceptionally hostile member was the usenet celebrity Jack Sarfatti, a proponent of quantum mechanics as a unifying force between science and art and as an explanation of consciousness.
Read more of this post

Lower bounds by negative adversary method

Are some questions harder than others?

Last week I quantified the hardness of answering a question with a quantum computer as the quantum query complexity. I promised that this model would allow us to develop techniques for proving lower bounds. In fact, in this model there are two popular tools: the polynomial method and the (negative) adversary method. In this week’s post, I’d like to highlight the latter.
Read more of this post

Quantum query complexity

You probably noticed a few things about TheEGG: a recent decrease in blog post frequency and an overall focus on the algorithmic lens — especially its view of biology. You might also be surprised by the lack of discussion of quantum information processing: the most successful ongoing application of the algorithmic lens. I actually first became passionate about cstheory as a lens on science when I was studying quantum computing. In undergrad, I played around with representation theory and other fun math to prove things about a tool in quantum information theory known as unitary t-designs. At the start of grad school, I became more algorithmic by focusing on quantum query complexity. To kill two birds with one stone, I thought I would introduce you to query complexity and in doing so restore the more regular posting schedule you’ve been accustomed to. Of course, the easiest way to do this is to recycle my old writing from the now stale cstheory StackExchange blog.
Read more of this post

Evolution explains the fundamental constants of physics

While speaking at TEDxMcGill 2009, Jan Florjanczyk — friend, quantum information researcher, and former schoolmate of mine — provided one of the clearest characterizations of theoretical physics that I’ve had the pleasure of hearing:

Theoretical physics is about tweaking the knobs and dials and assumptions of the laws that govern the universe and then interpolating those laws back to examine how they affect our daily lives, or how they affect the universe that we observe, or even if they are consistent with each other.

I believe that this definition extends beyond physics to all theorists. We are passionate about playing with the stories that define the unobservable characters of our theoretical narratives and watching how our mental creations get along with each other and affect our observable world. With such a general definition of a theorist, it is not surprising that we often see such thinkers cross over disciplinary lines. The most willing to wander outside their field are theoretical physicists; sometimes they have been extremely influential interdisciplinaries and at other times they have suffered from bad cases of interdisciplinitis.

On the other hand, physicists like to say physics is to math as sex is to masturbation.

The physicists’ excursions have been so frequent that it almost seems like a hierarchy of ideas developed — with physics and mathematics “on top”. Since I tend to think of myself as a mathematician (or theoretical computer scientist, but nobody puts us in comics), this view often tempts me, but deep down I realize that the flow of ideas is always bi-directional and no serious field can be dominant over another. To help slow my descent into elitism, it is always important to have this realization reinforced. Thus, I was extremely excited when Jeremy Fox of Dynamic Ecology drew my attention to a recent paper by theoretical zoologist Andy Gardner (in collaboration with physicist J.P. Conlon) on how to use the Price equation of natural selection to model the evolution and adaptation of the entire universe.

Since you will need to know a little bit about the physics of black holes to proceed, I recommend watching Jan’s aforementioned talk. Pay special attention to the three types of black holes he defines, especially the Hubble sphere:

As you probably noticed, our universe isn’t boiling: the knobs and dials of the 30 or so parameters of the Standard Model of particle physics are exquisitely well-tuned. These values seem arbitrary, and even small modifications would produce a universe incapable of producing or sustaining the complexity we observe around us. Physicists’ default explanation of this serendipity is the weak anthropic principle: the only way we would be around to observe the universe and ask “why are the parameters so well tuned?” is if that universe was tuned to allow life. However, this argument is fundamentally unsettling: it lacks any mechanism.

Smolin (1992) addressed this discomfort by suggesting that the fundamental constants of nature were fine-tuned by the process of cosmological natural selection. The idea extends our view of the possible to a multiverse (not to be confused with Deutsch’s idea) that is inhabited by individual universes that differ in their fundamental constants and give birth to offspring universes via the formation of black holes. Universes that are better tuned to produce black holes sire more offspring (i.e. have a higher fitness) and thus are more common in the multiverse.

Although Smolin (2004) worked to formalize this evolutionary process, he could not achieve the ecological validity of Gardner & Conlon (2013). Since I suspect the authors’ paper is a bit tongue-in-cheek, I won’t go into the details of their mathematical model and instead provide some broad strokes. They consider deterministically developing universes (with a stochastic treatment in the appendix), and a one-to-one mapping between the black holes of one generation of universes and the universes of the next generation. Since — as Jan stressed — we can never go inside black holes to measure their parameters, the authors allow for any degree of heritability between parent and offspring universes. At the same time, they consider a control optimization problem whose objective function is to maximize the number of black holes. They then compare the Price dynamics of their evolutionary model to the optimal solution of the optimization problem and show a close correspondence. This correspondence implies that successive generations of universes will seem increasingly designed for the purpose of forming black holes (without the need for a designer, of course).
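For readers who have not seen the Price equation in action, here is a minimal numerical sketch in Python. It is mine, not Gardner & Conlon’s model: the trait, the Gaussian fitness profile, and the transmission noise are invented purely to illustrate the two terms of the equation for a toy population of universes whose fitness is the number of black holes they form.

```python
import numpy as np

# Toy Price-equation bookkeeping for a population of "universes".
# Each universe i has a trait z_i (some fundamental constant) and a fitness
# w_i = number of black holes it forms = number of offspring universes.
rng = np.random.default_rng(0)
z = rng.normal(loc=1.0, scale=0.2, size=1000)       # parent traits
w = np.exp(-(z - 1.3) ** 2 / 0.1)                   # black holes formed (fitness)
dz = rng.normal(loc=0.0, scale=0.01, size=1000)     # parent-offspring trait change

w_bar = w.mean()
selection = np.cov(w, z, bias=True)[0, 1] / w_bar   # Cov(w, z) / mean fitness
transmission = np.mean(w * dz) / w_bar              # E(w * dz) / mean fitness

# Price equation: change in the mean trait over one generation of universes
print("selection term:   ", selection)
print("transmission term:", transmission)
print("change in mean z: ", selection + transmission)
```

The selection term is what pushes the mean trait towards values that produce more black holes; the comparison in Gardner & Conlon (2013) is between dynamics of this form and the optimum of the corresponding control problem.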

You might object: “I’m not a black hole, why is this relevant?” Well, it turns out that universes that are designed for producing black holes are also ones that are capable of sustaining the complexity needed for intelligent observers to emerge (Smolin, 2004). So, although you are not a black hole, the reason you can get excited about studying them is that you are an accidental side-effect of their evolution.

References

Gardner, A., & Conlon, J. (2013). Cosmological natural selection and the purpose of the universe. Complexity. DOI: 10.1002/cplx.21446

Smolin, L. (1992). Did the universe evolve? Classical and Quantum Gravity, 9(1), 173.

Smolin, L. (2004). Cosmological natural selection as the explanation for the complexity of the universe. Physica A: Statistical Mechanics and its Applications, 340(4), 705-713.

Tegmark, M., Aguirre, A., Rees, M. J., & Wilczek, F. (2006). Dimensionless constants, cosmology, and other dark matters. Physical Review D, 73(2), 023505.

Programming playground: Cells as (quantum) computers?

Nearly a year ago, the previous post in this series introduced a way for programmers to play around with biology: a model that simulated the dynamics of a whole cell at unprecedented levels of detail. But what if you want to play with the real thing? Can you program a living cell? Can you compute with molecular biology?

Coccolithophore: could this single-celled photosynthetic alga be your next computer?

Biology-inspired computation can probably be traced as far back as Turing’s (1948) introduction of B-type neural networks. However, the molecular biology approach is much more recent, with Adleman (1994) proposing DNA computing and Păun (2000) introducing membrane computing with P-systems. These models caused a stir when they appeared due to the ease of misrepresenting their computational power. If you allow the cells or membranes to reproduce at an exponential rate for an arbitrarily long time, then these systems can solve NP-complete problems quickly. In fact, it is not hard to show that this model would allow you to solve PSPACE-complete problems. Of course, in any reasonable setting, your cells can only grow at an exponential rate until they reach the carrying capacity of the environment you are growing them in. If you take this into account, then efficient DNA and membrane computing are no more powerful than the usual definition of efficient computation — polynomial time on a Turing machine.
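To see where the apparent exponential power comes from, here is a toy classical sketch in Python (mine, not a biochemistry simulation; subset sum stands in for the Hamiltonian-path instances that Adleman actually used) of the DNA-computing recipe: synthesize one ‘strand’ per candidate solution, then filter the mixture for the strands that satisfy the constraint.

```python
from itertools import product

def subset_sum_dna_style(weights, target):
    """Toy imitation of DNA computing for subset sum: enumerate every candidate
    'strand' (one per subset of the weights), then keep the strands whose
    encoded subset sums to the target."""
    n = len(weights)
    # One strand per subset: 2^n of them. This is the exponential resource that
    # real cells would have to grow, and that carrying capacity caps.
    strands = product([0, 1], repeat=n)
    return [s for s in strands
            if sum(w for w, bit in zip(weights, s) if bit) == target]

print(subset_sum_dna_style([3, 34, 4, 12, 5, 2], 9))
# [(0, 0, 1, 0, 1, 0), (1, 0, 1, 0, 0, 1)]  i.e. the subsets {4, 5} and {3, 4, 2}
```

The filtering step is fast, but the number of strands doubles with every extra element, and that is exactly the growth that carrying capacity cuts off.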

The stirred (i.e. well-mixed) nature of membrane and (early approaches to) DNA computing provides substantial constraints for empirical realizations and the scalability of bio-computing. In these early models, regulatory molecules are reused in the self-mixing environment of the cell, and gates correspond to chemical reactions. As such, gates are temporary, and the information-carrying molecule must change at every step of the computation to avoid being confused with residue from the previous step. This made implementing some gates such as XNOR — output 1 only if both inputs are the same — experimentally impossible (Tamsir, 2011): how would you tell which input is which, and how would the gate know it has received both inputs and not just an abnormally high concentration of the first?

To overcome this, Bonnet et al. (2013) designed a cellular computation model that more closely resembles the von Neumann architecture of the device you are reading this post on. In particular, they introduced a cellular analog of the transistor — the transcriptor. The whimsical name comes from the biological process they hijacked for computation: instead of electric current flowing on copper wires, the researchers looked at the “transcriptional current” of RNA polymerase on DNA “wires”. Only if a control signal is present does the transcriptor allow RNA polymerase to flow through it; otherwise it blocks the polymerase, just like an electric transistor. By putting several transcriptors together, and choosing their control signals, Bonnet et al. (2013) can implement any logic gate (including the previously unrealized XNOR) just as an electrical engineer would with transistors. What matters most for connecting to quantum computing is the ability to reliably amplify logical signals. With amplifying gates like AND, OR, and XOR, the authors were able to produce more than a 3-fold increase in control signal. For further details on the transcriptor, listen to Drew Endy explain his group’s work:
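Before moving on, here is a purely logical illustration in Python (a toy boolean abstraction of my own, not the actual genetic constructs from Bonnet et al., 2013) of how gating a ‘flow’ on a control signal composes into the gates mentioned above:

```python
# A transcriptor abstracted to its logic: polymerase "flow" passes only when
# the control signal is present, just like current through a transistor.
def transcriptor(flow, control):
    return flow and control

def AND(a, b):
    # two transcriptors in series on one DNA "wire"
    return transcriptor(transcriptor(True, a), b)

def OR(a, b):
    # two parallel wires; either one lets polymerase through
    return transcriptor(True, a) or transcriptor(True, b)

def XNOR(a, b):
    # output 1 only if both inputs agree
    return OR(AND(a, b), AND(not a, not b))

for a in (False, True):
    for b in (False, True):
        print(int(a), int(b), "->", int(XNOR(a, b)))
```

The point of the abstraction is composability: once flow-gated-by-control behaves like a transistor, the usual digital-design toolbox carries over.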

Taking inspiration from biology is not restricted to classical computation. Vlatko Vedral provides a great summary of bio-inspired quantum computing: start from the top down, figure out how biology uses quantum effects at room temperature, and try to harness them for computation. The first step here is to find a non-trivial example of quantum effects in use by a biological system. Conveniently, Engel et al. (2007) showed that photosynthesis provides such an example.

During photosynthesis, an incident photon becomes an ‘exciton’ that has to quickly walk through a maze of interconnected chlorophyll molecules to find a site where its energy can be used to phosphorylate used-up ADP into energy-carrying ATP. Unfortunately, if the exciton follows a classical random walk (i.e. spreads out in proportion to the square root of time) then it cannot reach a binding site before decaying. How does biology solve this? The exciton follows a quantum walk! (Rebentrost et al., 2009)
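To see the gap in spreading rates that this argument hinges on, here is a minimal numerical sketch in Python (mine, not from Rebentrost et al., 2009; the line graph and the evolution times are arbitrary choices) comparing a continuous-time classical random walk with a continuous-time quantum walk:

```python
import numpy as np

N = 201                                   # sites on a line; walker starts in the middle
A = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)  # adjacency matrix
L = np.diag(A.sum(axis=1)) - A                                 # graph Laplacian
pos = np.arange(N) - N // 2

def spread(prob):
    """Standard deviation of the position distribution."""
    mean = prob @ pos
    return np.sqrt(prob @ (pos - mean) ** 2)

evalsL, vecsL = np.linalg.eigh(L)   # classical walk: dp/dt = -L p
evalsA, vecsA = np.linalg.eigh(A)   # quantum walk:  dpsi/dt = -i A psi

start = np.zeros(N)
start[N // 2] = 1.0
for t in [5, 10, 20, 40]:
    p_cl = vecsL @ (np.exp(-evalsL * t) * (vecsL.T @ start))
    psi = vecsA @ (np.exp(-1j * evalsA * t) * (vecsA.T @ start))
    print(f"t={t:3d}  classical std={spread(p_cl):6.1f}  quantum std={spread(np.abs(psi) ** 2):6.1f}")
```

The classical spread grows like the square root of t while the quantum spread grows linearly in t, which is the difference that lets the exciton reach a binding site before it decays.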

It is cool to know that we can observe a quantum walk, but can that be useful for computation? My former supervisor Andrew Childs (2009; see also Childs et al., 2013) is noted for showing that if we have control over the Hamiltonian defining our quantum walk, then we can use the walk to do universal computation. Controlling the Hamiltonian generating a quantum walk is analogous to designing a graph for a classical walk. Theoretical work is still needed to bridge Rebentrost et al. and Childs, since (as Joe Fitzsimons pointed out on G+) the biological quantum walk is not coherent, and the decoherence that is present might doom any attempt at universal computation. The last ingredient that is needed is a classical controller.

Since the graph we need will depend on the specific problem instance we are trying to solve, we will need a classical computer to control the construction of the graph. This is where I hope synthetic biology results like Bonnet et al. (2013) will be useful. The transcriptors could be used as the classical control with which a problem instance is translated into a specific structure of chlorophyll molecules on which a quantum walk is carried out to do the hard part of the computation. The weak quantum signal from this walk can then be measured by the transcriptor-based controller and amplified into a signal that the experimenter can observe on the level of the behavior (say, fluorescence) of the cell. Of course, this requires a ridiculous amount of both fundamental work on quantum computing and bio-engineering. However, could the future of scalable quantum computers be in the noisy world of biology, instead of the sterility of superconductors, photon benches, or ion traps?

References

Adleman, L. M. (1994). Molecular computation of solutions to combinatorial problems. Science, 266(5187), 1021-1023.

Bonnet, J., Yin, P., Ortiz, M. E., Subsoontorn, P., & Endy, D. (2013). Amplifying genetic logic gates. Science. PMID: 23539178

Childs, A. M. (2009). Universal computation by quantum walk. Physical Review Letters, 102(18), 180501. [ArXiv pdf]

Childs, A. M., Gosset, D., & Webb, Z. (2013). Universal Computation by Multiparticle Quantum Walk. Science, 339(6121), 791-794. [ArXiv pdf]

Engel, G. S., Calhoun, T. R., Read, E. L., Ahn, T. K., Mancal, T., Cheng, Y. C., et al. (2007). Evidence for wavelike energy transfer through quantum coherence in photosynthetic systems. Nature, 446(7137), 782-786.

Păun, G. (2000). Computing with membranes. Journal of Computer and System Sciences, 61(1), 108-143.

Rebentrost, P., Mohseni, M., Kassal, I., Lloyd, S., & Aspuru-Guzik, A. (2009). Environment-assisted quantum transport. New Journal of Physics, 11(3), 033003. [ArXiv pdf]

Tamsir, A., Tabor, J. J., & Voigt, C. A. (2011). Robust multicellular computing using genetically encoded NOR gates and chemical ‘wires’. Nature, 469(7329), 212-215.