Quantum computing has seen a significant influx of capital in recent years, with investment increasing from $93.5m in 2015 to $3.2bn in 2021 and VC and private capital making up more than 70% of investments. However, one major challenge for the nascent sector is benchmarking the value of innovations. Without a standardised method of measuring “how well” a quantum computer is performing, there’s a risk of misallocating this capital.
This could undermine quantum’s credibility, with unrealistic expectations fuelling a hype cycle. Such critiques have already been levelled at the sector: Oxford physicist Nikita Gourianov recently warned in the FT of “a highly exaggerated perspective on the promise of quantum computing” and “the formation of a classical bubble”.
But there are some measurable areas that generally correspond to improvements in performance for a quantum computer. In this piece, I’ll cover four benchmarks: gate fidelity, coherence time, scale potential and error correction.
Gate fidelity
The digital circuits we see in conventional processors are built on “logic gates” — effectively simple circuits that each perform a basic logical operation, such as AND or NOT. A quantum logic gate is quantum computing’s equivalent — a basic quantum circuit operating on a small number of qubits.
👉 Read: The different types of quantum computer startups, explained
However, quantum logic gates carry a significant layer of complexity compared to conventional logic gates. Without delving too deeply into the physics: under quantum mechanics, we dispense with the idea that we can exactly predict the value of some property held by a particle. Instead, until we measure it, that particle can occupy a range of values for that property, with some values more probable than others. We call this range of possible values and their probabilities the particle’s “quantum state”.
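To make that slightly more concrete, here is a minimal sketch in Python (using numpy rather than any real quantum SDK) of a single qubit’s quantum state written as two amplitudes, one for each possible measurement outcome:

```python
# A single qubit's quantum state, written as two complex amplitudes:
# one for measuring 0 and one for measuring 1.
import numpy as np

# An equal superposition: both outcomes are possible until we measure
state = np.array([1, 1], dtype=complex) / np.sqrt(2)

# The probability of each outcome is the squared magnitude of its amplitude,
# and the probabilities always sum to 1
probabilities = np.abs(state) ** 2
print(probabilities)  # [0.5 0.5]
```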
This probabilistic behaviour makes it challenging to get a quantum gate to work reliably. In short, greater gate fidelity means more reliable operations by a quantum gate and a greater likelihood of a processing cycle following through on the instructions we give it.
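As a rough illustration of what “fidelity” captures, the sketch below compares an ideal single-qubit gate with a slightly miscalibrated version of it, using one simplified fidelity formula. Real-world benchmarks (such as randomised benchmarking) are considerably more involved; the point is simply that a number closer to 1 means the hardware did closer to what it was asked to do.

```python
# A simplified fidelity proxy: how close is the gate we got to the gate we asked for?
import numpy as np

def rotation_x(angle):
    """A single-qubit rotation about the X axis by `angle` radians."""
    c, s = np.cos(angle / 2), np.sin(angle / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

ideal = rotation_x(np.pi)          # the gate we asked for: a full X rotation
actual = rotation_x(np.pi * 0.98)  # the gate the hardware delivered: 2% under-rotated

d = 2  # dimension of a single-qubit state space
fidelity = np.abs(np.trace(ideal.conj().T @ actual)) ** 2 / d ** 2
print(f"gate fidelity ~ {fidelity:.4f}")  # slightly below 1.0
```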
Coherence time
Imagine you had a very hot piece of metal. While it stays hot, that metal can do a lot of work for you — for example, its heat can be converted into electricity. But over time, interactions with ambient air particles will steal most of that heat energy, to the point where the metal is no longer hot enough to power any work.
Something similar is at play with the “quantumness” of particles. Over time, quantum particles lose their ability to perform useful informational work as they interact with their environment, eventually rendering them useless for a quantum computer.
A quantum particle that can perform useful work is called “coherent”. Quantum computers that can increase the time a particle remains coherent have more room to do useful calculations for us.
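A back-of-the-envelope way to see why this matters: if we assume, purely for illustration, that coherence decays roughly exponentially with some characteristic coherence time (real decay processes vary by hardware), then a longer coherence time leaves far more “quantumness” available after a fixed stretch of computation.

```python
# Illustrative only: a simple exponential decay model of coherence over time,
# comparing two hypothetical coherence times.
import numpy as np

def coherence_remaining(elapsed_us, coherence_time_us):
    """Fraction of coherence left after `elapsed_us` microseconds."""
    return np.exp(-elapsed_us / coherence_time_us)

for t in (50, 200):  # hypothetical coherence times in microseconds
    remaining = coherence_remaining(100, t)
    print(f"coherence time {t} us: after 100 us, {remaining:.2f} of coherence remains")
```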
Scale potential
Some approaches to building quantum computers have more scaling potential than others. For example, processes that create qubits from silicon can borrow heavily from existing semiconductor-industry fabrication techniques while requiring little physical space, giving them greater scale potential both in production and in the number of qubits doing useful work in a square inch of chip.
How will the chosen architecture behave if it’s made 10 times bigger? 100 times bigger? 1,000 times bigger? And is it economical to do so? To realise quantum computing in practice, we need to move beyond a handful of qubits at a time.
Error correction
Another area to benchmark is how we handle quantum error correction. There is always a degree of “background noise” surrounding quantum effects which can interfere with computation, along with the aforementioned loss of a quantum particle’s coherence. Together, these mean that any operation runs the risk of producing an error. For that reason, a quantum computer has to find ways to detect and prevent the propagation of errors, so they don’t undermine a process’s overall output.
👉 Read more: Inside a quantum lab
Achieving error correction requires teams to understand how errors propagate through a system. A team also needs to be able to engineer systems to offset the risks of error and correct where appropriate — whether in faulty quantum gates, the corruption of stored quantum information or faulty measurements.
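To give a flavour of the principle, the sketch below simulates the simplest error-correcting idea, a three-bit repetition code, entirely classically. Genuine quantum error correction is much harder (it has to protect superpositions without directly measuring them), but the detect-and-correct logic is the same: add redundancy, then use it to spot and undo likely errors.

```python
# A classical toy version of error correction: encode each bit three times,
# apply random bit flips, then recover the original value by majority vote.
import random

def encode(bit):
    """One logical bit becomes three physical bits."""
    return [bit, bit, bit]

def apply_noise(bits, flip_probability=0.05):
    """Each bit independently flips with the given probability."""
    return [b ^ (random.random() < flip_probability) for b in bits]

def decode(bits):
    """Majority vote: corrects any single bit flip."""
    return int(sum(bits) >= 2)

trials = 10_000
bare_errors = corrected_errors = 0
for _ in range(trials):
    original = random.randint(0, 1)
    bare_errors += apply_noise([original])[0] != original
    corrected_errors += decode(apply_noise(encode(original))) != original

print(f"error rate, no protection:   {bare_errors / trials:.3f}")
print(f"error rate, repetition code: {corrected_errors / trials:.3f}")
```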
Benchmarking for realistic growth in quantum
This has been a relatively surface-level exploration of benchmarking in quantum, covering just four underlying technical benchmarks. As you might imagine, many other factors come into play when evaluating the viability of a quantum computing startup.
To grapple with the inherent complexity of the space, investors must be willing to dive in and engage with the technology and engineering fundamentals of quantum. Rather than loosely discussing quantum’s potential, investors need to understand the physical and engineering challenges and solutions they’re investing in. Only then can we improve capital allocation, pick the most promising teams and build the credibility of quantum computing.