
So quantum computers are a thing now. What can we do with them?



https://ai.googleblog.com/2019/10/quantum-supremacy-using-programmable.html

https://www.nature.com/articles/s41586-019-1666-5

They've completed a benchmark test showing that their quantum processor can solve a certain (admittedly contrived) problem much faster than the fastest classical supercomputer. The benchmark itself isn't really useful for anything beyond proving that the machine works.

I wonder what practical uses this technology will have? I'm sure nobody could really think of many practical applications for early computers either, beyond calculating the trajectories of artillery shells and that sort of thing.

Can it make AI better, I wonder? Could it even lead to sentient AI? Is the AI singularity about to happen?
 
Maybe. I mean, Google's DeepMind has already created AlphaZero, which seems to be the strongest chess player ever. But AlphaZero doesn't rely on quantum computing as far as I know. Perhaps quantum computing could make it even better.
 
The first practical applications will likely be simulation of simple quantum systems. In time this may become really useful for (quantum) chemistry, maybe even protein folding.
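
To give a flavour of what that means, here's a toy sketch of my own (nothing to do with Sycamore): classically, simulating even a tiny quantum system means storing and evolving its full state vector, and that's exactly the thing that blows up exponentially and that a quantum computer would sidestep.

```python
# Toy sketch: exact state-vector evolution of a 2-qubit system. The
# Hamiltonian and parameters are illustrative assumptions of my own.
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Transverse-field Ising Hamiltonian on 2 qubits: H = -J*Z1Z2 - h*(X1 + X2)
J, h = 1.0, 0.5
H = -J * np.kron(Z, Z) - h * (np.kron(X, I2) + np.kron(I2, X))

psi0 = np.array([1, 0, 0, 0], dtype=complex)   # start in |00>
t = 1.0
psi_t = expm(-1j * H * t) @ psi0               # U = exp(-iHt)

print("outcome probabilities at t=1:", np.round(np.abs(psi_t) ** 2, 3))
# The state vector has 2**n entries, so this brute-force approach becomes
# hopeless somewhere around 45-50 qubits even on the biggest supercomputers.
```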

Easily cracking (most) current public-key encryption is at least a decade away and may prove impossible: the quantum computers that should be able to do it can be described fairly easily at a high level, but they may prove impossible to actually build for a very long time.
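
For anyone wondering what "cracking public keys" would actually involve: RSA-style keys rest on the hardness of factoring, and Shor's algorithm factors by finding the period of a^x mod N. Only the period finding needs a quantum computer; here's a purely classical toy version of my own that brute-forces it, which is exactly the step that doesn't scale.

```python
# Toy classical illustration of the number theory behind Shor's algorithm.
# Brute-forcing the period of a^x mod N is exponentially slow for real key
# sizes; a quantum computer would do that one step efficiently.
from math import gcd

def factor_via_period(N, a):
    assert gcd(a, N) == 1
    # Find the smallest r > 0 with a^r == 1 (mod N)  -- the expensive step.
    r, val = 1, a % N
    while val != 1:
        val = (val * a) % N
        r += 1
    if r % 2:
        return None                      # odd period: retry with another a
    x = pow(a, r // 2, N)
    for cand in (gcd(x - 1, N), gcd(x + 1, N)):
        if 1 < cand < N:
            return cand, N // cand
    return None                          # unlucky choice of a

print(factor_via_period(15, 7))          # -> (3, 5), the textbook toy case
```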
 
Maybe the feat was not so great after all.

The Google group reiterates its claim that its 53-qubit computer performed, in 200 seconds, an arcane task that would take 10,000 years for Summit, a supercomputer IBM built for the Department of Energy that is currently the world’s fastest. But IBM appears to have already rebutted Google’s claim. On 21 October, it announced that, by tweaking the way Summit approaches the task, it can do it far faster: in 2.5 days. IBM says the threshold for quantum supremacy—doing something a classical computer can’t—has thus still not been met. The race continues.

https://www.sciencemag.org/news/2019/10/ibm-casts-doubt-googles-claims-quantum-supremacy

But how independent is IBM on this?
 
We can program them to figure out why people begin sentences with 'so'.
 
The next step is homeopathic computing. Take everything out of a quantum computer, leaving an empty box. But the box retains the memory of quantum computing, therefore it solves the problems far faster.
 
But how independent is IBM on this?

Well, IBM is making its own quantum computers so maybe they don't want to be upstaged by Google.

On 21 October, it announced that, by tweaking the way Summit approaches the task, it can do it far faster: in 2.5 days. IBM says the threshold for quantum supremacy—doing something a classical computer can’t—has thus still not been met.

Even still, 200 seconds << 2.5 days, no? 2.5 days is 216,000 seconds, and 216,000 / 200 = 1,080, so it's still over 1,000 times faster.

It's still a pretty good demonstration that the technology works.
 
Scott Aaronson’s latest blogpost, “Quantum Supremacy: the gloves are off” (https://www.scottaaronson.com/blog/?p=4372) covers this very well, I feel. There are a lot of good comments, and responses from Scott.

OK, so let’s carefully spell out what the IBM paper says. They argue that, by commandeering the full attention of Summit at Oak Ridge National Lab, the most powerful supercomputer that currently exists on Earth—one that fills the area of two basketball courts, and that (crucially) has 250 petabytes of hard disk space—one could just barely store the entire quantum state vector of Google’s 53-qubit Sycamore chip in hard disk. And once one had done that, one could simulate the chip in ~2.5 days, more-or-less just by updating the entire state vector by brute force, rather than the 10,000 years that Google had estimated on the basis of my and Lijie Chen’s “Schrödinger-Feynman algorithm” (which can get by with less memory).

The IBM group understandably hasn’t actually done this yet—even though IBM set it up, the world’s #1 supercomputer isn’t just sitting around waiting for jobs! But I see little reason to doubt that their analysis is basically right. I don’t know why the Google team didn’t consider how such near-astronomical hard disk space would change their calculations; probably they wish they had.
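
A back-of-the-envelope check of my own (assuming 16 bytes per complex amplitude, i.e. double precision) on why those 250 petabytes are the crux: a full 53-qubit state vector only just fits, and one more qubit already wouldn't.

```python
# Rough memory estimate for storing a full n-qubit state vector on disk,
# assuming 16 bytes per complex amplitude (my assumption, for illustration).
for n in (53, 54, 55):
    petabytes = (2 ** n) * 16 / 1e15
    print(f"{n} qubits: ~{petabytes:.0f} PB")
# 53 qubits: ~144 PB  (fits in Summit's ~250 PB of disk)
# 54 qubits: ~288 PB  (already doesn't)
# 55 qubits: ~576 PB
```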

I think at this point it's just a matter of semantics and definitions. Sure, with a ginormous supercomputer with a ginormous hard drive, you could perform the same task, albeit 1000 times slower. But the Sycamore chip can and will be improved with more qubits in the future.

I wonder if they are going to keep going with tree names for future generations of the chips?
 
I think at this point it's just a matter of semantics and definitions. Sure, with a ginormous supercomputer with a ginormous hard drive, you could perform the same task, albeit 1000 times slower. But the Sycamore chip can and will be improved with more qubits in the future.

I wonder if they are going to keep going with tree names for future generations of the chips?

Have to keep in mind that we don't know if they can add more qubits, although there doesn't seem to be any theoretical reason that says they won't be able to increase the number of qubits. But it is at a minimum a very tough engineering problem.
 
Have to keep in mind that we don't know if they can add more qubits, although there doesn't seem to be any theoretical reason that says they won't be able to increase the number of qubits. But it is at a minimum a very tough engineering problem.

Well they did say in their blog entry:
We see our 54-qubit Sycamore processor as the first in a series of ever more powerful quantum processors.
 
I wonder about the applicability of the computational time comparison. The comparison I'd like to see is some calculation that's equally abstract for both computers. It doesn't seem quite fair to make the "problem" the evolution of the state of the quantum computer itself.

By analogy, I claim to have developed a sophisticated analog computer that can rapidly calculate the trajectories of large numbers of simultaneously falling colliding solid massive bodies of complex arbitrary shapes. It works by dumping out a bucket of gravel, producing results in a few seconds. Since it would take a supercomputer many hours to compute the resulting trajectories, I've demonstrated grav-grav (gravitational/gravel) supremacy. Or have I?
 
I think at this point it's just a matter of semantics and definitions. Sure, with a ginormous supercomputer with a ginormous hard drive, you could perform the same task, albeit 1000 times slower. But the Sycamore chip can and will be improved with more qubits in the future.

I wonder if they are going to keep going with tree names for future generations of the chips?

There is a good comment, a long way down, on historical contingency: what if the first QC were only 15 or so qubits, and computers were still programmed with paper tape?

It’s certainly widely believed that the Google QC design can be extended to more qubits (60 and 70 are oft mentioned). However, I feel there’s rather too much hope in this ... the physics and engineering that went into Sycamore were heroic, yes the team has some truly brilliant people on it. But the achievement was very hard won, and getting to even 60 qubits will surely be even more heroic. So far, this stuff is NOT scalable.
 
Thanks for the link. How refreshing to read how science actually works rather than the pseudoscience of the likes of Mills and BLP.

I agree.

The actual process can be quite messy and confusing, and good definitions matter. In this particular case, I really like the openness, including Nature’s decision to let the reviewers be known (their choice).

Also, Google isn’t the only player, there are several others, quite independent ... including IBM.
 
There is a good comment, a long way down, on historical contingency: what if the first QC were only 15 or so qubits, and computers were still programmed with paper tape?

It’s certainly widely believed that the Google QC design can be extended to more qubits (60 and 70 are oft mentioned). However, I feel there’s rather too much hope in this ... the physics and engineering that went into Sycamore were heroic, yes the team has some truly brilliant people on it. But the achievement was very hard won, and getting to even 60 qubits will surely be even more heroic. So far, this stuff is NOT scalable.

So, no Moore's law for quantum computers?
 
I wonder about the applicability of the computational time comparison. The comparison I'd like to see is some calculation that's equally abstract for both computers. It doesn't seem quite fair to make the "problem" the evolution of the state of the quantum computer itself.

By analogy, I claim to have developed a sophisticated analog computer that can rapidly calculate the trajectories of large numbers of simultaneously falling colliding solid massive bodies of complex arbitrary shapes. It works by dumping out a bucket of gravel, producing results in a few seconds. Since it would take a supercomputer many hours to compute the resulting trajectories, I've demonstrated grav-grav (gravitational/gravel) supremacy. Or have I?

There are quite a few comments in Scott’s blogpost like this. One relevant one: who needs a QC to simulate protein folding, say, when you can just watch proteins fold? (Simulations of quantum processes are one of the very few applications of QCs that everyone thinks are possible).

Scott, being a computer science theorist, is extremely impressed by the abstract, scaling considerations (polynomial time vs exponential, say). A great many others are more concerned about feasibility and practical applications. :)
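
To put rough numbers on the polynomial-vs-exponential point (toy figures of my own, just to show the shape of the curves):

```python
# Polynomial (n**3) vs exponential (2**n) growth: constant factors stop
# mattering very quickly, which is why complexity theorists care so much.
for n in (10, 20, 30, 40, 50, 60):
    print(f"n={n:2d}   n^3 = {n**3:>9,}   2^n = {2**n:>22,}")
```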
 
