[1911.13282v1] Quantum Computation with Machine-Learning-Controlled Quantum Stuff

\begin{abstract}
We describe how one may go about performing quantum computation with arbitrary ``quantum stuff'', as long as it has some basic physical properties. Imagine a long strip of stuff, equipped with regularly spaced wires to provide input settings and to read off outcomes. After showing how the corresponding map from settings to outcomes can be construed as a quantum circuit, we provide a machine learning algorithm to tomographically ``learn'' which settings implement the members of a universal gate set. At optimum, arbitrary quantum gates, and thus arbitrary quantum programs, can be implemented using the stuff.
\end{abstract}
FIG. 1. On the left we see a section of a length of stuff with input and output wires placed at regular intervals. A time-gated sequence of inputs is fed into the input wires and a similar time-gated sequence of outputs is read off the wires. We represent the input/output at position $x$ and time $t$ by a dot as shown. On the right we see the same figure with the dots divided up into octagons and squares. Points lying on the boundary between octagons are assigned to the upper octagon. (Introduction)

FIG. 2. On the left we see an example of a circuit built with gates from our universal gate set. Note that the circuit is closed off from external influences because, at the bottom, input signals are absorbed by the identity measurement and, at the sides, quantum information coming into the circuit is shunted back out. On the right we see how gates can be assigned to octagons. (Gates, circuits, and tessellations)

FIG. 3. Depiction of the simulation algorithm. Circuit Simulation: the user decides on a program (circuit), which via the tessellation assigns a gate label to each spacetime event. Each spatial point is assigned an encoder, mapping these gate labels to inputs to the stuff. Output from the stuff is then processed by the decoder into simulated logical output from the program. The decoder receives the gate label $g$ and the central spatial point of the tessel, $x$, in addition to its depicted input. Training: by choosing circuits from a tomographically complete circuit set, the comparison between predicted and actual output from each can be used as a loss function for the encoder and decoder, such that all circuits are correctly implemented at optimum. This is achieved by representing the encoder and decoder as neural networks and descending their weights towards this optimum, using randomly constructed circuits as input. (Machine Learning Algorithm)

FIG. 4. Schematized optimization of the encoders and decoder. (Machine Learning Algorithm)
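The training scheme of FIG. 3 can be sketched in a few lines. The sketch below is illustrative only, under our own assumptions: the stuff is replaced by a fixed random nonlinear map, the training targets are fabricated stand-ins for the ideal outputs of a tomographically complete circuit set, and only a linear decoder is descended (the paper trains encoder and decoder networks jointly). All dimensions and names (`N_GATES`, `theta_E`, `theta_D`, etc.) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): 4 gate labels,
# 8-dimensional stuff input/output, 2-dimensional logical output.
N_GATES, D, D_LOG = 4, 8, 2

# Black-box stand-in for the stuff: a fixed, unknown nonlinear map.
W_stuff = rng.normal(size=(D, D))
stuff = lambda u: np.tanh(u @ W_stuff)

# Trainable weights: the encoder maps a one-hot gate label to a stuff
# input; the decoder maps (raw stuff output, gate label) to logical output.
theta_E = rng.normal(scale=0.5, size=(N_GATES, D))
theta_D = rng.normal(scale=0.5, size=(D + N_GATES, D_LOG))

# Fabricated training set standing in for a tomographically complete
# circuit set: one-hot gate labels paired with ideal logical outputs.
G = np.eye(N_GATES)[rng.integers(0, N_GATES, size=64)]
Y = rng.normal(size=(64, D_LOG))

X = np.concatenate([stuff(G @ theta_E), G], axis=1)  # decoder inputs

def loss(theta_D):
    # Mean squared error between simulated and ideal logical outputs.
    return np.mean((X @ theta_D - Y) ** 2)

before = loss(theta_D)
for _ in range(200):                     # plain gradient descent on the decoder
    grad = 2 * X.T @ (X @ theta_D - Y) / len(G)
    theta_D -= 0.02 * grad
after = loss(theta_D)                    # loss decreases toward the optimum
```

At the optimum of the full version of this loop, every circuit in the training set is correctly implemented, which is the condition the paper uses to certify the learned gate set.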

FIG. 5. Top Left: each point $(t, x)$ in spacetime is assigned a gate label $g$ by the tessellation. $g$ is constant within each tessel, and takes a uniform ``null'' value outside of a tessel. Bottom Left: each spatial point $x$ is assigned a recurrent neural network (RNN) ``Encoder'', mapping $g$ and a ``memory'' vector $M_n$ to the input to the stuff, along with a new $M_{n+1}$ (note that $n$ parameterizes subsequent RNN calls, not training iterations). The map is governed by ``weights'' $\theta^E_x$, local to $x$ and held fixed except during training. Top Middle, Top Right: the stuff advances through time, receiving encoded input dictated by the gate labels. Its raw outputs $O(t, x)$ at each point in each tessel are collected into a vector, and then fed along with the gate label $g$ to the decoder. The decoder, another neural network with weights $\theta^D$, emits the simulated ``logical output'' of the gate. Bottom Middle, Bottom Right: two strategies to allow the encoder RNNs to collaborate over a region of spacetime. Rasterized memory (Bottom Middle) involves passing $M_n$ in left-right order throughout a tessel. Causal memory (Bottom Right) involves passing it forward within each encoder's future light cone, achieving, per the locality assumption, the same end in a shorter timescale. (Machine Learning Algorithm)

FIG. 6. Circuit simulation using rasterized memory, passed sequentially between each point in each tessel. (RNN fleet implementation of encoder)

FIG. 7. Circuit simulation using causal memory, passed within forward light cones, confined by tessels. (RNN fleet implementation of encoder)
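The two memory-passing strategies of FIGS. 5–7 can be sketched as follows. This is a toy model under our own assumptions: tiny hypothetical dimensions, one linear-plus-tanh encoder per spatial point, and memories arriving from the past light cone merged by simple averaging (the merge rule is our choice, not the paper's).

```python
import numpy as np

rng = np.random.default_rng(1)

D_M, D_G, D_IN = 4, 3, 5   # hypothetical memory / gate-label / input dims

# One encoder per spatial point x, with local weights held fixed outside
# training. Each call maps (gate label g, memory M_n) to the stuff input
# at that point plus an updated memory M_{n+1}.
class Encoder:
    def __init__(self):
        self.W = rng.normal(scale=0.3, size=(D_G + D_M, D_IN + D_M))

    def __call__(self, g, M):
        out = np.tanh(np.concatenate([g, M]) @ self.W)
        return out[:D_IN], out[D_IN:]          # (stuff input, M_{n+1})

encoders = {x: Encoder() for x in range(3)}

# Rasterized memory: one memory vector threads through the tessel's
# points in time-then-left-to-right ("raster") order.
def rasterized(points, g):
    M, inputs = np.zeros(D_M), {}
    for (t, x) in sorted(points):
        inputs[(t, x)], M = encoders[x](g, M)
    return inputs

# Causal memory: each point merges (here: averages) the memories emitted
# in its past light cone within the tessel, so memory spreads across the
# tessel in fewer time steps, per the locality assumption.
def causal(points, g):
    region, mem, inputs = set(points), {}, {}
    for (t, x) in sorted(region):              # all t-1 points come first
        preds = [mem[p] for p in [(t-1, x-1), (t-1, x), (t-1, x+1)]
                 if p in region]
        M = np.mean(preds, axis=0) if preds else np.zeros(D_M)
        inputs[(t, x)], mem[(t, x)] = encoders[x](g, M)
    return inputs

# Usage on one small tessel with a one-hot gate label.
pts = [(t, x) for t in range(2) for x in range(3)]
g = np.eye(D_G)[0]
ras, cau = rasterized(pts, g), causal(pts, g)
```

Both strategies produce one stuff input per spacetime point; they differ only in how long the memory takes to traverse the tessel.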