prof-dr-ir
2 months ago
I am confused, since even factoring 21 is apparently so difficult that it "isn’t yet a good benchmark for tracking the progress of quantum computers." [0]
So the "useful quantum computing" that is "imminent" is not the kind of quantum computing that involves the factorization of nearly prime numbers?
Strilanc
2 months ago
Factoring will be okay for tracking progress later; it's just a bad benchmark now. Factoring benchmarks have little visibility into fault tolerance spinning up, which is the important progress right now. Factoring becoming a reasonable benchmark is strongly related to quantum computing becoming useful.
prof-dr-ir
2 months ago
> Factoring becoming a reasonable benchmark is strongly related to quantum computing becoming useful.
Either this relation is not that strong, or factoring should "imminently" become a reasonable benchmark, or useful quantum computing cannot be "imminent". So which one is it?
I think you are the author of the blogpost I linked to? Did I maybe interpret it too negatively, and was it not meant to suggest that the second option is still quite some time away?
adgjlsfhk1
2 months ago
Under a Moore's law-like scenario, factoring 21 happens only about ~5 years before factoring a 1024-bit number. With all the optimizations, factoring an n-bit number only requires ~n logical qubits, but most of those optimizations only work for large n, so factoring 21 is only ~5 doublings away from factoring a 1024-bit number.
The other problem is that factoring 21 is so easy that it actually makes it harder to prove you've factored it with a functional quantum computer. For big numbers, your program can fail 99% of the time, because if you get the result even once, you've proven the algorithm worked. 21 is small enough that it's hard *not* to factor, so demonstrating that you've factored it with a quantum computer is fairly hard. As a result, I wouldn't be surprised if the first number publicly factored by a quantum computer (using error correction) were in the thousands instead of 21. By using a number that isn't absolutely tiny, it becomes a lot easier to show that the system works.
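To make the verification point concrete: checking a claimed factorization is free, and the classical post-processing in Shor's algorithm is cheap, so for a big N one lucky run out of thousands still proves the machine worked. A minimal Python sketch of that classical step, assuming the quantum part has already handed us a candidate period r:

    from math import gcd

    def factors_from_period(a, r, n):
        """Classical post-processing of Shor's algorithm: given a candidate
        period r of a modulo n, try to extract nontrivial factors of n."""
        if r % 2:
            return None                  # odd period: retry with another a
        x = pow(a, r // 2, n)
        if x == n - 1:
            return None                  # trivial square root: retry
        for f in (gcd(x - 1, n), gcd(x + 1, n)):
            if 1 < f < n:
                return f, n // f
        return None

    # For n = 21, a = 2 the true period is 6 (2^6 = 64 = 1 mod 21):
    print(factors_from_period(2, 6, 21))  # -> (7, 3), trivially checkable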
upofadown
2 months ago
Perhaps? The sort of quantum computers that people are talking about now are not general purpose. So you might be able to make a useful quantum computer that can't run Shor's algorithm.
gsf_emergency_6
2 months ago
Simulating the Hubbard model for superconductors at large scales is significantly more likely to happen sooner than factoring RSA-2048 with Shor’s algorithm.
Google has been working on this for years.
Don't ask me whether they've beaten the top supercomputers; ask Gemini :)
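For reference, the Fermi-Hubbard Hamiltonian in question, in LaTeX notation:

    H = -t \sum_{\langle i,j \rangle, \sigma} ( c_{i\sigma}^{\dagger} c_{j\sigma} + \mathrm{h.c.} ) + U \sum_{i} n_{i\uparrow} n_{i\downarrow}

Electrons hop between neighbouring sites with amplitude t and pay an energy cost U when two of opposite spin occupy the same site. Deceptively short to write down; classically very hard to solve at scale.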
tomalbrc
2 months ago
Gemini hallucinated me a wild answer.
bawolff
2 months ago
I don't think that's correct; the research projects the article is talking about all seem to aim at making general-purpose quantum computers eventually. Obviously they haven't succeeded yet, but general purpose does seem to be what they are talking about.
unparagoned
a month ago
Basically, QCs are so far from ever doing a useful computation that we need other benchmarks to measure progress. We need to be thinking in timelines of our lifetimes, not 5 years.
bawolff
2 months ago
I always find this argument a little silly.
Like, if you were building one of the first normal computers, how big a number you can multiply would be a terrible benchmark, since once you have figured out how to multiply small numbers, it's fairly trivial to multiply big numbers. The challenge is making the computer multiply numbers at all.
This isn't a perfect metaphor, as scaling is harder in a quantum setting, but we are mostly at the stage where we are trying to get these things to work at all. Once we reach the stage where we can factor small numbers reliably, the time it takes to go from smaller numbers to bigger numbers will probably be relatively short.
jvanderbot
2 months ago
From my limited understanding, that's actually the opposite of the truth.
In QC systems, the engineering "difficulty" scales very badly with the number of gates or steps of the algorithm.
It's not like addition, where you can repeat a process in parallel and bam, ALU. From what I understand as a layperson, the size of the inputs is absolutely part of the scaling.
dmurray
2 months ago
But the reason factoring numbers is used as the quantum benchmark is exactly that we have a quantum algorithm for that problem which is meant to scale better than any known algorithm on a classical computer.
So it seems like it takes an exponentially bigger device to factor 21 than 15, and 35 than 21, and so on; but if I understand right, at some point this levels out, and it's only a little harder, relatively speaking, to factor say 10^30 than 10^29.
Why are we so confident this is true given all of the experience so far trying to scale up from factoring 15 to factoring 21?
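To put numbers on the "levels out" intuition: at the logical level, the device size grows only polynomially with the bit length. A back-of-envelope sketch, assuming the often-cited 2n + 3 logical-qubit construction (Beauregard's circuit); real resource estimates also count gate volume and error-correction overhead:

    # Rough logical-qubit count for Shor's algorithm on an n-bit modulus,
    # assuming the 2n + 3 qubit circuit of Beauregard (2002). Ignores gate
    # counts and error-correction overhead, which dominate real estimates.
    def shor_logical_qubits(bits):
        return 2 * bits + 3

    for bits in (4, 5, 6, 1024, 2048):  # 15 is 4 bits; 21 and 35 are 5 and 6
        print(f"{bits:4d}-bit N: ~{shor_logical_qubits(bits)} logical qubits")

So if fault tolerance works as advertised, the machine that factors 10^30 is barely bigger than the one that factors 10^29; what blows up between 15 and 21 today is getting fault tolerance to work at all.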
bawolff
2 months ago
> Why are we so confident this is true given all of the experience so far trying to scale up from factoring 15 to factoring 21?
I don't think we have any "real" experience scaling from 15 to 21, or at least not in the way Shor's algorithm would be implemented in practice on fault-tolerant qubits.
We haven't even done 15 in a "real" way yet. I suspect the time it takes to factor 15 on fault-tolerant qubits will be a lot longer than the time to go from 15 to 21.
pclmulqdq
2 months ago
The algorithm in question is a hypothetical algorithm for a hypothetical computer with certain properties. The properties in question are assumed to be cheap.
In the case of quantum algorithms in BQP, though, one of those properties is the SNR of the analog calculations (which is assumed to be infinite). SNR, as a general principle, is known to scale really poorly.
bawolff
2 months ago
> In the case of quantum algorithms in BQP, though, one of those properties is the SNR of the analog calculations (which is assumed to be infinite). SNR, as a general principle, is known to scale really poorly.
As far as i understand, that isn't an assumption.
The assumption is that the SNR of logical (error-corrected) qubits is near infinite, and that such logical qubits can be constructed from noisy physical qubits.
pclmulqdq
2 months ago
There are several properties that separate real quantum computers from the "BQP machine," including decoherence and SNR. Error-correction of qubits is mainly aimed at decoherence, but I'm not sure it really improves SNR of gates on logical qubits. SNR dictates how precisely you can manipulate the signal (these are a sort of weird kind of analog computer), and the QFTs involved in Shor's algorithm need some very precise rotations of qubits. Noise in the operation creates an error in that rotation angle. If your rotation is bad to begin with, I'm not sure the error correction actually helps.
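To put a number on "very precise rotations": the exact QFT on an n-qubit register uses controlled phase rotations of angle 2π/2^k for k up to n. A quick sketch of how small those get, plus the standard mitigation, Coppersmith's approximate QFT, which drops rotations below roughly 2π/2^O(log n) at negligible total error (the exact cutoff below is an illustrative choice):

    import math

    # Smallest rotation the *exact* QFT would need for a Shor-sized register:
    n = 2048
    print(math.tau * 2.0 ** -n)          # underflows float64 to 0.0

    # Coppersmith's approximate QFT keeps only ~log2(n) rotation sizes:
    k_max = math.ceil(math.log2(n)) + 2  # illustrative cutoff; = 13 here
    print(math.tau * 2.0 ** -k_max)      # ~7.7e-4 rad, smallest kept angle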
seanhunter
2 months ago
> The assumption is that the SNR of logical (error-corrected) qubits is near infinite, and that such logical qubits can be constructed from noisy physical qubits.
This is an argument I've heard before and I don't really understand it[1]. I get that you can make a logical qubit out of physical qubits and build in error correction so the logical qubit has perfect SNR, but surely if, say, the number of physical qubits you need to get the nth logical qubit is O(n^2), then the SNR of the whole system isn't near infinite; it's really bad.
[1] Which may well be because I don't understand quantum mechanics ...
adgjlsfhk1
2 months ago
The really important thing is that logical qubit error decreases exponentially with the amount of error correction. As such, for the ~1000-qubit regime needed for factoring, the amount of error correction ends up being essentially a constant factor (~1000x physical to logical). As long as you can build enough "decent"-quality physical qubits and connect them, you can get near-perfect logical qubits.
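A rough sketch of where that ~1000x comes from, using the standard surface-code rule of thumb (all numbers are illustrative assumptions, not measurements): logical error falls exponentially in the code distance d, while the qubit cost grows only quadratically.

    # Rule-of-thumb surface-code model (e.g. Fowler et al. 2012):
    #   p_L ~ 0.1 * (p / p_th) ** ((d + 1) / 2)
    # with assumed physical error p = 1e-3 and threshold p_th = 1e-2.
    p, p_th = 1e-3, 1e-2

    for d in (3, 7, 11, 15, 21):
        p_logical = 0.1 * (p / p_th) ** ((d + 1) / 2)
        physical = 2 * d * d             # data + measurement qubits, roughly
        print(f"d={d:2d}: p_L ~ {p_logical:.0e}, ~{physical} physical/logical")

At d = 21 this model gives p_L ~ 1e-12 at a cost of ~900 physical qubits per logical qubit, which is where the "essentially a constant ~1000x factor" comes from.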
JanisErdmanis
2 months ago
Now that error correction has been demonstrated, some incremental improvements can be made to make it more repeatable and to give it better characteristics.
The hard problem then remains how to connect those qubits at scale. Using a coaxial cable for each qubit is impractical; some form of multiplexing is needed. This, in turn, causes qubits to decohere while waiting for their control signal.
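A back-of-envelope for why that waiting matters (all numbers here are illustrative assumptions): a qubit idling for a time t_wait against a coherence time T2 accrues roughly 1 - exp(-t_wait / T2) error, so control latency eats directly into the error budget.

    import math

    T2 = 100e-6                          # assumed coherence time: 100 us

    for t_wait in (100e-9, 1e-6, 10e-6): # assumed multiplexing latencies
        err = 1 - math.exp(-t_wait / T2)
        print(f"idle {t_wait * 1e6:5.1f} us -> ~{err:.2%} dephasing error")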
dboreham
2 months ago
This is quite fallacious, and wrong. The first computers were built to immediately solve problems that were already being solved slowly by manual methods. There was never a period where people built computers so slow that they were slower than adding machines and slide rules, just because they seemed cool and might one day be much faster.
bawolff
2 months ago
Charles Babbage started on the difference engine in 1819. It took a very long time after that before computers were useful.
bawolff
2 months ago
Additionally, part of the problem was that metalworking at the time wasn't advanced enough to make the required parts to the necessary precision at a reasonable price, which sounds quite similar to how modern quantum computers sit right at the edge of current engineering technology.
sfpotter
2 months ago
The fact that it does appear to be so difficult to scale things up would suggest that the argument isn't silly.
thrance
2 months ago
Actually yes, how many numbers you can crunch per second and how big they are were among the first benchmarks for actual computers. Also, these prototypes were almost always immediately useful (think of the machine that cracked Enigma).
In comparison, there is no realistic path forward for scaling quantum computers. Anyone serious who is not trying to sell you QC will tell you that quantum systems become exponentially less stable the bigger they are and the longer they live. That is a fundamental physical truth. And since they're still struggling to do anything at all with a quantum computer, don't get your hopes up too much.
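The least controversial core of the stability claim: without error correction, the chance a computation runs cleanly decays exponentially with its size. A toy calculation, assuming an illustrative 0.1% error per operation:

    # With per-operation error p, the chance of zero errors over g
    # operations is ~(1 - p)^g; p = 1e-3 is an assumed, illustrative rate.
    p = 1e-3
    for g in (100, 1_000, 10_000, 1_000_000):
        print(f"{g:>9,} ops: ~{(1 - p) ** g:.1e} chance of a clean run")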
amenhotep
2 months ago
That would be the bombe, which didn't really crunch numbers at all; it was an electromechanical contraption that automated physically stepping through Enigma rotor settings to enumerate which combinations were possible matches.
bawolff
2 months ago
> Anyone serious that is not trying to sell you QC will tell you that quantum systems become exponentially less stable the bigger they are and the longer they live.
If what you are saying is that error rates increase exponentially, such that quantum error correction can never correct more errors than it introduces, I don't think that is a widely accepted position in the field.
fragmede
2 months ago
People who believed that would opt out of being in the field, though. No?