
Job One for Quantum Computers: Boost Artificial Intelligence
From Wired - February 10, 2018

In the early '90s, Elizabeth Behrman, a physics professor at Wichita State University, began working to combine quantum physics with artificial intelligence, in particular the then-maverick technology of neural networks. Most people thought she was mixing oil and water. "I had a heck of a time getting published," she recalled. "The neural-network journals would say, 'What is this quantum mechanics?' and the physics journals would say, 'What is this neural-network garbage?'"

Today the mashup of the two seems the most natural thing in the world. Neural networks and other machine-learning systems have become the most disruptive technology of the 21st century. They out-human humans, beating us not just at tasks most of us were never really good at, such as chess and data-mining, but also at the very types of things our brains evolved for, such as recognizing faces, translating languages and negotiating four-way stops. These systems have been made possible by vast computing power, so it was inevitable that tech companies would seek out computers that were not just bigger, but a new class of machine altogether.

Quantum computers, after decades of research, have nearly enough oomph to perform calculations beyond any other computer on Earth. Their killer app is usually said to be factoring large numbers, which are the key to modern encryption. That's still another decade off, at least. But even today's rudimentary quantum processors are uncannily matched to the needs of machine learning. They manipulate vast arrays of data in a single step, pick out subtle patterns that classical computers are blind to, and don't choke on incomplete or uncertain data. "There is a natural combination between the intrinsic statistical nature of quantum computing and machine learning," said Johannes Otterbach, a physicist at Rigetti Computing, a quantum-computer company in Berkeley, California.

If anything, the pendulum has now swung to the other extreme. Google, Microsoft, IBM and other tech giants are pouring money into quantum machine learning, and a startup incubator at the University of Toronto is devoted to it. "Machine learning is becoming a buzzword," said Jacob Biamonte, a quantum physicist at the Skolkovo Institute of Science and Technology in Moscow. "When you mix that with quantum, it becomes a mega-buzzword."

Yet nothing with the word "quantum" in it is ever quite what it seems. Although you might think a quantum machine-learning system should be powerful, it suffers from a kind of locked-in syndrome. It operates on quantum states, not on human-readable data, and translating between the two can negate its apparent advantages. It's like an iPhone X that, for all its impressive specs, ends up being just as slow as your old phone, because your network is as awful as ever. For a few special cases, physicists can overcome this input-output bottleneck, but whether those cases arise in practical machine-learning tasks is still unknown. "We don't have clear answers yet," said Scott Aaronson, a computer scientist at the University of Texas, Austin, who is always the voice of sobriety when it comes to quantum computing. "People have often been very cavalier about whether these algorithms give a speedup."

Quantum Neurons

The main job of a neural network, be it classical or quantum, is to recognize patterns. Inspired by the human brain, it is a grid of basic computing units, the neurons. Each can be as simple as an on-off device. A neuron monitors the output of multiple other neurons, as if taking a vote, and switches on if enough of them are on. Typically, the neurons are arranged in layers. An initial layer accepts input (such as image pixels), intermediate layers create various combinations of the input (representing structures such as edges and geometric shapes), and a final layer produces output (a high-level description of the image content).
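To make the voting picture concrete, here is a minimal sketch in Python of a layered network of on-off neurons. The weights and threshold are hypothetical, chosen only for illustration, not taken from any real system.

```python
# A minimal sketch of the layered threshold-neuron idea described above.
# Weights and threshold are hypothetical, chosen only to illustrate a
# neuron that "takes a vote" over the outputs of other neurons.
import numpy as np

def neuron(inputs, weights, threshold=0.5):
    """Switch on (return 1) if the weighted vote of the inputs is high enough."""
    return 1 if np.dot(inputs, weights) >= threshold else 0

def layer(inputs, weight_matrix):
    """One layer: every neuron votes over all outputs of the previous layer."""
    return np.array([neuron(inputs, w) for w in weight_matrix])

# Three-layer toy network: 4 "pixels" -> 3 hidden neurons -> 1 output neuron.
rng = np.random.default_rng(0)
pixels = np.array([1, 0, 1, 1])              # input layer: raw data
hidden = layer(pixels, rng.random((3, 4)))   # intermediate layer: combinations
output = layer(hidden, rng.random((1, 3)))   # final layer: high-level answer
print(output)
```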

Crucially, the wiring is not fixed in advance, but adapts in a process of trial and error. The network might be fed images labeled "kitten" or "puppy." For each image, it assigns a label, checks whether it was right, and tweaks the neuronal connections if not. Its guesses are random at first, but get better; after perhaps 10,000 examples, it knows its pets. A serious neural network can have a billion interconnections, all of which need to be tuned.
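A toy version of that trial-and-error loop, in the spirit of the classic perceptron rule (a stand-in here, since the article does not name a specific training method): guess, check the label, and nudge the connections only when the guess was wrong. The data is synthetic.

```python
# A minimal sketch of trial-and-error tuning via the perceptron rule.
# Synthetic data stands in for the kitten/puppy images.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))            # 200 two-feature "images"
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # a hidden labeling rule to learn

w = np.zeros(2)
b = 0.0
for _ in range(20):                      # several passes over the examples
    for xi, yi in zip(X, y):
        guess = int(np.dot(w, xi) + b > 0)
        if guess != yi:                  # tweak connections only on mistakes
            w += (yi - guess) * xi
            b += (yi - guess)

accuracy = np.mean([(np.dot(w, xi) + b > 0) == yi for xi, yi in zip(X, y)])
print(f"training accuracy: {accuracy:.2f}")
```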

On a classical computer, all these interconnections are represented by a ginormous matrix of numbers, and running the network means doing matrix algebra. Conventionally, these matrix operations are outsourced to a specialized chip such as a graphics processing unit. But nothing does matrices like a quantum computer. "Manipulation of large matrices and large vectors are exponentially faster on a quantum computer," said Seth Lloyd, a physicist at the Massachusetts Institute of Technology and a quantum-computing pioneer.
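The layered computation above can be written as exactly the matrix operation the text describes: one matrix-vector product applies every connection in a layer at once. A one-layer sketch, with hypothetical weights:

```python
# The toy layer from earlier, written as a single matrix operation:
# one matrix-vector product applies every connection at once.
import numpy as np

W = np.random.default_rng(0).random((3, 4))   # 3 neurons x 4 inputs
x = np.array([1.0, 0.0, 1.0, 1.0])            # input "pixels"
activations = (W @ x >= 0.5).astype(int)      # whole layer in one matrix op
print(activations)
```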

For this task, quantum computers are able to take advantage of the exponential nature of a quantum system. The vast bulk of a quantum system's information storage capacity resides not in its individual data units (its qubits, the quantum counterpart of classical computer bits) but in the collective properties of those qubits. Two qubits have four joint states: both on, both off, on/off, and off/on. Each has a certain weighting, or amplitude, that can represent a neuron. If you add a third qubit, you can represent eight neurons; a fourth, 16. The capacity of the machine grows exponentially. In effect, the neurons are smeared out over the entire system. When you act on a state of four qubits, you are processing 16 numbers at a stroke, whereas a classical computer would have to go through those numbers one by one.
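A small classical simulation makes the counting explicit: four qubits carry 16 amplitudes, and one operation on the register updates all 16 at once. Classically we pay for each number; quantum hardware would do it in a single physical step. The standard Hadamard gate here is just an example operation.

```python
# Classical simulation of the counting argument: n qubits carry 2**n
# amplitudes, and one operation on the register touches all of them.
import numpy as np

n = 4
state = np.zeros(2**n)          # 16 amplitudes for 4 qubits
state[0] = 1.0                  # start in the all-"off" state

# A Hadamard on every qubit spreads weight over all 16 joint states.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
U = H
for _ in range(n - 1):
    U = np.kron(U, H)           # a 16 x 16 operation on the whole register

state = U @ state               # one "step" updates all 16 numbers
print(len(state), state[:4])    # 16 amplitudes, each now 0.25
```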

Lloyd estimates that 60 qubits would be enough to encode an amount of data equivalent to that produced by humanity in a year, and 300 could carry the classical information content of the observable universe. (The biggest quantum computers at the moment, built by IBM, Intel and Google, have 50-ish qubits.) And that's assuming each amplitude is just a single classical bit. In fact, amplitudes are continuous quantities (and, indeed, complex numbers) and, for a plausible experimental precision, one might store as many as 15 bits, Aaronson said.
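The arithmetic behind those claims is easy to check: the number of amplitudes doubles with each qubit, so 60 qubits give roughly 10^18 of them, and 2^300 dwarfs the commonly cited estimate of about 10^80 atoms in the observable universe.

```python
# Back-of-the-envelope check of the scaling claims above.
amplitudes_60 = 2**60
print(f"{amplitudes_60:.2e}")   # ~1.15e+18 amplitudes for 60 qubits

# 2**300 amplitudes vs. a common ~10**80 estimate for atoms in the
# observable universe: the register outstrips the universe by far.
print(2**300 > 10**80)          # True: 2**300 is roughly 2e90
```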

But a quantum computer's ability to store information compactly doesn't make it faster. You need to be able to use those qubits. In 2008, Lloyd, the physicist Aram Harrow of MIT and Avinatan Hassidim, a computer scientist at Bar-Ilan University in Israel, showed how to do the crucial algebraic operation of inverting a matrix. They broke it down into a sequence of logic operations that can be executed on a quantum computer. Their algorithm works for a huge variety of machine-learning techniques. And it doesn't require nearly as many algorithmic steps as, say, factoring a large number does. A computer could zip through a classification task before noise (the big limiting factor with today's technology) has a chance to foul it up. "You might have a quantum advantage before you have a fully universal, fault-tolerant quantum computer," said Kristan Temme of IBM's Thomas J. Watson Research Center.
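The article doesn't spell out the algorithm's steps; for orientation, here is the classical version of the operation it targets: solving a linear system (equivalently, applying an inverted matrix), the workhorse behind techniques such as least-squares fitting. The data below is synthetic, and the quantum speedup applies only to suitably structured matrices.

```python
# The classical counterpart of the matrix-inversion step: solving a
# linear system, here via the normal equations for least squares.
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(100, 3))    # data matrix (synthetic)
y = rng.normal(size=100)         # targets (synthetic)

# Normal equations: (A^T A) w = A^T y. np.linalg.solve applies the
# inverted matrix implicitly; classically this cost grows quickly with
# matrix size, which is the bottleneck the quantum algorithm attacks.
w = np.linalg.solve(A.T @ A, A.T @ y)
print(w)
```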

Let Nature Solve the Problem

So far, though, machine learning based on quantum matrix algebra has been demonstrated only on machines with just four qubits. Most of the experimental successes of quantum machine learning to date have taken a different approach, in which the quantum system does not merely simulate the network; it is the network. Each qubit stands for one neuron. Though lacking the power of exponentiation, a device like this can avail itself of other features of quantum physics.

The largest such device, with some 2,000 qubits, is the quantum processor manufactured by D-Wave Systems, based near Vancouver, British Columbia. It is not what most people think of as a computer. Instead of starting with some input data, executing a series of operations and displaying the output, it works by finding internal consistency. Each of its qubits is a superconducting electric loop that acts as a tiny electromagnet oriented up, down, or up and down (a superposition). Qubits are wired together by allowing them to interact magnetically.

To run the system, you first impose a horizontal magnetic field, which initializes the qubits to an equal superposition of up and down, the equivalent of a blank slate. There are a couple of ways to enter data. In some cases, you fix a layer of qubits to the desired input values; more often, you incorporate the input into the strength of the interactions. Then you let the qubits interact. Some seek to align in the same direction, some in the opposite direction, and under the influence of the horizontal field, they flip to their preferred orientation. In so doing, they might trigger other qubits to flip. Initially that happens a lot, since so many of them are misaligned. Over time, though, they settle down, and you can turn off the horizontal field to lock them in place. At that point, the qubits are in a pattern of up and down that ensures the output follows from the input.

It's not at all obvious what the final arrangement of qubits will be, and that's the point. The system, just by doing what comes naturally, is solving a problem that an ordinary computer would struggle with. "We don't need an algorithm," explained Hidetoshi Nishimori, a physicist at the Tokyo Institute of Technology who developed the principles on which D-Wave machines operate. "It's completely different from conventional programming. Nature solves the problem."

The qubit-flipping is driven by quantum tunneling, a natural tendency that quantum systems have to seek out their optimal configuration, rather than settle for second best. You could build a classical network that worked on analogous principles, using random jiggling rather than tunneling to get bits to flip, and in some cases it would actually work better. But, interestingly, for the types of problems that arise in machine learning, the quantum network seems to reach the optimum faster.
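That classical analogue is easy to sketch: an Ising-style network of up/down spins that settles by random thermal jiggling (simulated annealing) rather than tunneling. The couplings below are random placeholders; in a real problem they would encode the task.

```python
# A minimal sketch of the classical analogue: up/down spins that settle
# by random thermal "jiggling" (simulated annealing), not tunneling.
import numpy as np

rng = np.random.default_rng(3)
n = 20
J = rng.normal(size=(n, n))
J = (J + J.T) / 2                            # symmetric couplings
np.fill_diagonal(J, 0)
spins = rng.choice([-1, 1], size=n)

def energy(s):
    return -0.5 * s @ J @ s                  # align/anti-align preferences

T = 2.0                                      # temperature: amount of jiggling
for _ in range(5000):
    i = rng.integers(n)
    flipped = spins.copy()
    flipped[i] *= -1
    dE = energy(flipped) - energy(spins)
    if dE < 0 or rng.random() < np.exp(-dE / T):
        spins = flipped                      # accept the flip
    T *= 0.999                               # cool down so the spins lock in

print(spins, energy(spins))
```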

The D-Wave machine has had its detractors. It is extremely noisy and, in its current incarnation, can perform only a limited menu of operations. Machine-learning algorithms, though, are noise-tolerant by their very nature. They're useful precisely because they can make sense of a messy reality, sorting kittens from puppies against a backdrop of red herrings. "Neural networks are famously robust to noise," Behrman said.

In 2009 a team led by Hartmut Neven, a computer scientist at Google who pioneered augmented reality (he co-founded the Google Glass project) and then took up quantum information processing, showed how an early D-Wave machine could do a respectable machine-learning task. They used it as, essentially, a single-layer neural network that sorted images into two classes: car or no car, in a library of 20,000 street scenes. The machine had only 52 working qubits, far too few to take in a whole image. (Remember: the D-Wave machine is of a very different type than the state-of-the-art 50-qubit systems coming online in 2018.) So Neven's team combined the machine with a classical computer, which analyzed various statistical quantities of the images and calculated how sensitive these quantities were to the presence of a car (usually not very, but at least better than a coin flip). Some combination of these quantities could, together, spot a car reliably, but it wasn't obvious which. It was the network's job to find out.

The team assigned a qubit to each quantity. If that qubit settled into a value of 1, it flagged the corresponding quantity as useful; 0 meant "don't bother." The qubits' magnetic interactions encoded the demands of the problem, such as including only the most discriminating quantities, so as to keep the final selection as compact as possible. The result was able to spot a car.
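In miniature, the selection problem looks like this: each binary variable plays the role of a qubit flagging one quantity as useful (1) or not (0), and the objective trades predictive value against redundancy and a compactness penalty. Everything here is synthetic, and a classical brute-force search stands in for the physical annealing.

```python
# A minimal sketch of feature selection as a binary optimization.
# Each bit plays the role of one qubit: 1 = use this quantity, 0 = don't.
from itertools import product
import numpy as np

rng = np.random.default_rng(4)
n_features = 8
# Synthetic stand-ins: how well each quantity alone predicts "car"
# (barely better than a coin flip), plus pairwise redundancy.
usefulness = rng.uniform(0.5, 0.6, size=n_features)
overlap = rng.uniform(0, 0.05, size=(n_features, n_features))
sparsity_penalty = 0.03          # keep the final selection compact

def score(bits):
    b = np.array(bits)
    gain = usefulness @ b
    redundancy = b @ overlap @ b
    return gain - redundancy - sparsity_penalty * b.sum()

# Brute force over all 2**8 selections; the D-Wave machine instead
# encodes the same trade-off in qubit interactions and settles physically.
best = max(product([0, 1], repeat=n_features), key=score)
print(best, round(score(best), 3))
```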

Continue reading at Wired »