
Merged Artificial Intelligence Research: Supermathematics and Physics

(1) I still lack sufficient GPU resources to do particular tests w.r.t. certain parts involving thought curvature.

(2) If you read the paper, you would find that it is based on something called the quantum Boltzmann machine, and on quantum reinforcement learning.

So yes, the outcome is that I lack both GPU resources and quantum computing resources.

Thanks for trying to help, but you ended up attacking my thread without evidence, like many others here have done.

If you're going to attack, attack with evidence please, and actually take more than 5 minutes to read thought curvature.

I'm not quite sure why you considered that an "attack." If anything it was providing you tools to get around your current hardware limitations or to at least quantify the hardware you WOULD need to get the job done. Knowing what resources you need is a big part of completing a project.

To that end, I suggest you look into some of the grid computing technologies available now:

https://golem.network

http://www.gridcoin.us

Neural network modeling should lend itself nicely to distributed computing. True, it won't be as zippy as if you had your own bank of Bitcoin mining machines repurposed to your needs, but it's far more productive than sitting around complaining about your lack of hardware.

Even better, accessing an existing grid computing architecture will be a lot CHEAPER, making it far easier to get funding or even run a GoFundMe campaign to get the resources needed to put your ideas to the test.

If you take advantage of grid computing you can start a domino effect in your research, proving, or disproving a lot of your hypotheses.

[qimg]https://i.imgur.com/9F3FaB8.gif[/qimg]
 
[IMGw=350]https://i.imgur.com/MrxleHs.jpg[/IMGw]

(1) I still lack sufficient GPU resources to do particular tests w.r.t. certain parts involving thought curvature.

(2) If you read the paper, you would find that it is based on something called the quantum Boltzmann machine, and on quantum reinforcement learning.

So yes, the outcome is that I lack both GPU resources and quantum computing resources.

From the QBM paper: "We show examples of QBM training with and without the bound, using exact diagonalization, and compare the results with classical Boltzmann training." So they actually did the research.

And: "We also discuss the possibility of using quantum annealing processors like D-Wave for QBM training and application." They did not use a quantum computer to do the research.

The authors of the paper did not make excuses and post endlessly about being "attacked". Instead they did the research. You have access to GPU cloud compute resources. You have access to quantum computation models. Why are you here instead of doing the research?
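For readers wondering what "exact diagonalization" involves at the scale discussed here, this is a minimal sketch, assuming NumPy and a small transverse-field Ising Hamiltonian (roughly the kind of energy function a quantum Boltzmann machine works with); the coupling values and helper names are made up for illustration, not taken from the QBM paper.

[code]
# Minimal sketch: exact diagonalization of a small transverse-field Ising
# Hamiltonian, entirely on a classical machine. Couplings are illustrative.
import numpy as np

I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])   # Pauli-X (transverse field)
Z = np.array([[1.0, 0.0], [0.0, -1.0]])  # Pauli-Z

def op_on(site_op, site, n):
    """Embed a single-qubit operator acting on `site` into an n-qubit space."""
    ops = [site_op if k == site else I for k in range(n)]
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

def tfim_hamiltonian(n, J=1.0, gamma=0.5):
    """H = -J * sum_i Z_i Z_{i+1} - gamma * sum_i X_i  (open chain)."""
    dim = 2 ** n
    H = np.zeros((dim, dim))
    for i in range(n - 1):
        H -= J * op_on(Z, i, n) @ op_on(Z, i + 1, n)
    for i in range(n):
        H -= gamma * op_on(X, i, n)
    return H

n = 8                            # 8 qubits -> a 256 x 256 matrix
H = tfim_hamiltonian(n)
energies, _ = np.linalg.eigh(H)  # exact spectrum
beta = 1.0
weights = np.exp(-beta * (energies - energies.min()))
probs = weights / weights.sum()  # Boltzmann distribution over eigenstates
print(f"dim = {H.shape[0]}, ground-state energy = {energies[0]:.4f}")
print(f"P(ground state) at beta={beta}: {probs[0]:.3f}")
[/code]

An 8-qubit dense Hamiltonian is only a 256 x 256 matrix, which is why results at that scale can be produced without any quantum hardware.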
 
Rumors are irrelevant to what you wrote. That was a section stating that it would propose an experiment, and then not proposing one. Thus:
5 October 2017: No experiment at all, proposed or actual, at the given link or PDF!
5 October 2017: A PDF section title lies about a probable experiment: no experiment at all, proposed or actual.

Plus:
6 October 2017: Usual insults about knowledge of machine learning.
Repeat of "Deepmnd atari q architecture" nonsense when the Arcade Learning Environment not built on Atari machines and has no "q" architecture! It would just be sloppy writing if it was not persistent.
5 October 2017: A link to a PDF repeating a delusion of a "Deepmnd atari q architecture".
15 August 2017: Ignorant nonsense about Deepmind
18 August 2017: Repeated "atari q" gibberish when DeepMind is not an Atari machine and has no "q" (does have Q-learning)

You accuse me of being ignorant about machine learning and then ask:
For example, ... why did you then go on to discuss some paper that included pooling?
The answer is that I know about the use of pooling layers in machine learning and so researched whether DeepMind were looking at using pooling layers. I thought that you would be interested in learning more about the Google DeepMind company and so mentioned it in my post:
Into the introduction and:
15 August 2017 ProgrammingGodJordan: Ignorant nonsense about DeepMind.
Deepmind’s atari q architecture encompasses non-pooling convolutions
DeepMind is a "neural network that learns how to play video games in a fashion similar to that of humans". It can play several Atari games. It does not have an architecture related to those Atari games. What DeepMind does have is "a convolutional neural network, with a novel form of Q-learning". I have found 1 Google DeepMind paper about the neural network architecture that explicitly includes pooling layers but not as an implemented architecture element, Exploiting Cyclic Symmetry in Convolutional Neural Networks.

What is missing in the PDF is any references for DeepMind.
The last point is why I had to go looking for DeepMind sources. You did not support the "non-pooling" assertion. You did not even link to the Wikipedia article but then that would have shown everyone that "Deepmnd atari q architecture" was nonsense :p!
 
I'm not quite sure why you considered that an "attack." If anything it was providing you tools to get around your current hardware limitations or to at least quantify the hardware you WOULD need to get the job done. Knowing what resources you need is a big part of completing a project.

To that end, I suggest you look into some of the grid computing technologies available now:

https://golem.network

http://www.gridcoin.us

Neural network modeling should lend itself nicely to distributed computing. True, it won't be as zippy as if you had your own bank of Bitcoin mining machines repurposed to your needs, but it's far more productive than sitting around complaining about your lack of hardware.

Even better, accessing an existing grid computing architecture will be a lot CHEAPER, making it far easier to get funding or even run a GoFundMe campaign to get the resources needed to put your ideas to the test.

If you take advantage of grid computing you can start a domino effect in your research, proving, or disproving a lot of your hypotheses.

[qimg]https://i.imgur.com/9F3FaB8.gif[/qimg]

Yeah, that is a given...
 
Snipped the irrelevant section about thought curvature experiment proposal

You accuse me of being ignorant about machine learning and then ask:

The answer is that I know about the use of pooling layers in machine learning and so researched whether DeepMind were looking at using pooling layers. I thought that you would be interested in learning more about the Google DeepMind company and so mentioned it in my post:

No, I was not interested in pooling w.r.t. deep Q-learning.

I have done pooling elsewhere, as you can see here.


RealityCheck said:
The last point is why I had to go looking for DeepMind sources. You did not support the "non-pooling" assertion. You did not even link to the Wikipedia article but then that would have shown everyone that "Deepmnd atari q architecture" was nonsense :p!

[IMGw=250]https://i.imgur.com/CIpHftz.jpg[/IMGw]

The highlighted portion above is, of course, invalid.

(1) This is why I constantly point out that your words appear to stem from somebody who lacks basic machine learning knowledge.

(2) Notably, DeepMind's deep Q-learning model did not use pooling: in order to learn from the varying positions of objects in latent space, the model avoided pooling and did not impose translation invariance during learning (a sketch illustrating this follows below).

(3) You neglected to copy the rest of the paragraph, which, like (2) above, explained why no pooling was used:

[IMGw=666]https://i.imgur.com/vcpJS4U.png[/IMGw]

(4) Even if you failed to understand the writing style in the thought curvature paper, if you really had the machine learning knowledge you claimed to have, you would likely have discovered DeepMind's non-pooling design by reading the reference material "Playing Atari with Deep Reinforcement Learning", as cited in the thought curvature paper.
 
From the QBM paper: "We show examples of QBM training with and without the bound, using exact diagonalization, and compare the results with classical Boltzmann training." So they actually did the research.

And: "We also discuss the possibility of using quantum annealing processors like D-Wave for QBM training and application." They did not use a quantum computer to do the research.

AIWPmfu.png


(1) The quantum Boltzmann machine experiment was run on a "2000 qubit system", with focus on some of the qubits. (See minute 22:57 in this video.)

(2) I stand by reply 261: I still lack the proper computational resources to do particular experiments.

As an example, this simple residual neural network for heart irregularity detection (which I composed for a Kaggle contest) destroyed my prior desktop Nvidia card. I have a stronger system now [GTX 960, i7-6700, 2 TB HDD, 32 GB RAM], but I use this laptop for workplace stuff, and it can't manage any more large experiments for now.

RussDill said:
The authors of the paper did not make excuses and post endlessly about being "attacked". Instead they did the research. You have access to GPU cloud compute resources. You have access to quantum computation models.

I did research too.
(1) What do you think is taking place in this thought curvature snippet image?
(2) If you understand (1), you would see that research was done.
(3) I don't mind being attacked at all, but if one is to attack me in argument, it must be on the basis of sensible data/evidence.

RussDill said:
Why are you here instead of doing the research?

I am researching, but when I take breaks, I visit here, or elsewhere.
 
(1) The quantum Boltzmann machine experiment was run on a "2000 qubit system", with focus on some of the qubits. (See minute 22:57 in this video.)

That's wonderful that they did the extension of their work that they described in their paper. The point still clearly stands, though, that they did the original work, and published the original paper, without the use of a quantum computer. Given my experience with debugging, I would guess that no one ever runs anything on a D-Wave without working it out mathematically and running it on a simulator first; it would be an intractable task to try to debug it on a D-Wave.

ETA: After watching the video further, it looks like they have run it in a D-Wave simulator, and not yet on actual hardware. "Finally, after a brief introduction to D-Wave quantum annealing processors, I will discuss the *possibility* of using such processors for QBM training and application." Oh, and they used 8 qubits (32 annealing qubits). You are currently fully capable of running 8 qubit simulations. Maybe I missed part of the video.

(2) I stand by reply 261: I still lack the proper computational resources to do particular experiments.

You have access to the same computing resources the people who wrote the paper had access to. If you are lacking in something, it's clearly some other type of resource. Have you even attempted to repeat their results?

it can't manage any more large experiments for now.

Then stop wasting money on local hardware and use cloud compute resources. You'll be surprised at how affordable it is.


I did research too.

Sorry, I forgot. I'm used to talking to computer science academics who have a very specific definition of research that is different from what everyone else means when they say research. I thought explaining it would be sufficient, but I guess I should just stop using that word because it tends to derail the conversation.

The people that wrote the paper did not make excuses about not having resources, they did the experiments to show the merits of their techniques. Why are you wasting time here instead of doing the experiments to show the merits of your techniques?
 
You have access to GPU cloud compute resources. You have access to quantum computation models. Why are you here instead of doing the research?

Well, that would be work. Besides, if he put his ideas to the test he might be proven wrong. He can't be a victim of academic suppression if he does concrete research that proves or discredits his ideas in a repeatable way.
 
That's wonderful that they did the extension of their work that they described in their paper. The point still clearly stands, though, that they did the original work, and published the original paper, without the use of a quantum computer. Given my experience with debugging, I would guess that no one ever runs anything on a D-Wave without working it out mathematically and running it on a simulator first; it would be an intractable task to try to debug it on a D-Wave.

ETA: After watching the video further, it looks like they have run it in a D-Wave simulator, and not yet on actual hardware. "Finally, after a brief introduction to D-Wave quantum annealing processors, I will discuss the *possibility* of using such processors for QBM training and application." Oh, and they used 8 qubits (32 annealing qubits). You are currently fully capable of running 8 qubit simulations. Maybe I missed part of the video.


You have access to the same computing resources the people who wrote the paper had access to. If you are lacking in something, it's clearly some other type of resource. Have you even attempted to repeat their results?

Then stop wasting money on local hardware and use cloud compute resources. You'll be surprised at how affordable it is.

Sorry, I forgot. I'm used to talking to computer science academics who have a very specific definition of research that is different from what everyone else means when they say research. I thought explaining it would be sufficient, but I guess I should just stop using that word because it tends to derail the conversation.

The people that wrote the paper did not make excuses about not having resources, they did the experiments to show the merits of their techniques. Why are you wasting time here instead of doing the experiments to show the merits of your techniques?


(1) Yes, at 8 qubits, requiring roughly 8.533 GB of RAM, some simulations based on the quantum Boltzmann machine are quite doable on my 32 GB machine.

Of course, the 8-qubit usage in both the quantum Boltzmann machine and quantum reinforcement learning papers was for small toy examples that don't deal with the (Super-)Hamiltonian structure required by thought curvature.


(2) The (Super-)Hamiltonian structure required by thought curvature will require a quite scalable scheme, such as some boson sampling aligned regime in particular. The scheme above is approachable on the scale of 42 qubits, or a 44.8 GB RAM configuration for simple tasks/circuits, and I lack access to configurations with 44.8 GB of RAM.

Even if I could squeeze some testing onto my 32 GB system, this would be dangerous, since it is the only system I use to earn the income I live on.

This is dangerous because, from experimentation (see the quote about GPU destruction), I know that training machine learning algorithms takes a large toll on hardware.


(3) The green portion in your quote above is irrelevant. I use my laptop for work purposes and other freelancing stuff, which provides me with the income I live and research on.


(4) Small nitpicks:
(a) The part I struck through in your quote above is redundant; I already mentioned that they focused on a portion of a "2000 qubit system", and linked to a site which provided you with the particular number.

(b) The red portion in your quote above is not true.

I simply don't have access to their level of resources.

A "2000 qubit" machine simulation corresponds to a 2133.33333333 gb ram configuration.
I have 32 gb system, and 32 < 2133.33333333.
 
(1) Yes, at 8 qubits, requiring roughly 8.533 GB of RAM, some simulations based on the quantum Boltzmann machine are quite doable on my 32 GB machine.

Of course, the 8-qubit usage in both the quantum Boltzmann machine and quantum reinforcement learning papers was for small toy examples that don't deal with the (Super-)Hamiltonian structure required by thought curvature.


(2) The (Super-)Hamiltonian structure required by thought curvature will require a quite scalable scheme, such as some boson sampling aligned regime in particular. The scheme above is approachable on the scale of 42 qubits, or a 44.8 GB RAM configuration for simple tasks/circuits, and I lack access to configurations with 44.8 GB of RAM.

Even if I could squeeze some testing onto my 32 GB system, this would be dangerous, since it is the only system I use to earn the income I live on.

This is dangerous because, from experimentation (see the quote about GPU destruction), I know that training machine learning algorithms takes a large toll on hardware.


(3) The green portion in your quote above is irrelevant. I use my laptop for work purposes and other freelancing stuff, which provides me with the income I live and research on.


(4) Small nitpicks:
(a) The part I struck through in your quote above is redundant; I already mentioned that they focused on a portion of a "2000 qubit system", and linked to a site which provided you with the particular number.

(b) The red portion in your quote above is not true.

I simply don't have access to their level of resources.

A "2000 qubit" machine simulation corresponds to a 2133.33 GB RAM configuration.
I have a 32 GB system, and 32 < 2133.33.

How much of that can you get from grid computing options?

What can you do to reduce the overall scope of the test to get a partial proof of concept, a digital pilot study if you will?

Why do you keep talking about the capabilities of your local machine when we're explicitly discussing grid computing options for your tests?
 
A "2000 qubit" machine simulation corresponds to a 2133.33333333 gb ram configuration.

What part of "they used 8 qubits" did not parse for you? The D-wave has 2000 qubits in an annealing configuration, but an annealing machine with 32 qubits would have worked fine. You should be able to duplicate their work and start small scale tests of your own work, you have no excuses for not doing that.

If you aren't able to implement your design because it requires too many qubits, then don't use a quantum design. Use a classical learning model. The quantum design doesn't allow researchers to do anything that classical designs can't; there is just an enormous potential for speed-up if the algorithms can be implemented on a quantum computer.

This would be like someone wanting to factor integers, but claiming that they can't because they don't have a quantum computer to run Shor's algorithm on.

And you have another excuse: "I'm worried my hardware will blow up." I've run stuff hard, very hard, for more than a week at once, including custom hardware. If your GPU failed while being pushed, it would be due to defective hardware, such as an improperly installed cooling fan. It's not a legitimate concern, and your hardware is under warranty anyway. Plus, if you are so worried about your hardware dying and therefore won't use it, why the hell are you spending money on hardware you won't use instead of on cloud compute resources? Just so many endless excuses.
 
BTW, it's very clear that you don't understand the scaling properties of simulated qubits. They do not scale linearly; that's the point. For instance, the largest such simulation, a 45-qubit simulation, needs 500,000 GB of RAM running on more than 8,000 nodes.
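As a rough worked version of that scaling, this sketch assumes a dense state-vector simulator that stores one complex128 amplitude (16 bytes) per basis state; annealing hardware and cleverer simulators have different costs, so the figures are illustrative.

[code]
# Rough sketch: memory for a dense state-vector simulation of n qubits,
# assuming one complex128 amplitude (16 bytes) per basis state.
# The cost is exponential in n, not linear.
def state_vector_bytes(n_qubits: int, bytes_per_amplitude: int = 16) -> int:
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (8, 30, 42, 45):
    gib = state_vector_bytes(n) / 2**30
    print(f"{n:2d} qubits -> {gib:>14,.3f} GiB")

# 8 qubits  ->          0.000 GiB   (about 4 KB)
# 30 qubits ->         16.000 GiB
# 42 qubits ->     65,536.000 GiB   (64 TiB)
# 45 qubits ->    524,288.000 GiB   (roughly the 500,000 GB figure above)
[/code]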

If you were able to prove small scale tests, it's possible you could run your full scale 42 qubit design on such a machine at no cost to you.
 
What part of "they used 8 qubits" did not parse for you? The D-wave has 2000 qubits in an annealing configuration, but an annealing machine with 32 qubits would have worked fine. You should be able to duplicate their work and start small scale tests of your own work, you have no excuses for not doing that.

Your above statement was redundant then, and it remains redundant now.

It was I who provided that data to you: that they used a 2000 qubit system, with focus on some of the qubits. That focus was the 8 qubits you were proud to report from the very source I linked you to.


RussDill said:
If you aren't able to implement your design because it requires too many qubits, then don't use a quantum design. Use a classical learning model. The quantum design doesn't allow researchers to do anything that classical designs can't; there is just an enormous potential for speed-up if the algorithms can be implemented on a quantum computer.

This would be like someone wanting to factor integers, but claiming that they can't because they don't have a quantum computer to run Shor's algorithm on.

And you have another excuse: "I'm worried my hardware will blow up." I've run stuff hard, very hard, for more than a week at once, including custom hardware. If your GPU failed while being pushed, it would be due to defective hardware, such as an improperly installed cooling fan. It's not a legitimate concern, and your hardware is under warranty anyway. Plus, if you are so worried about your hardware dying and therefore won't use it, why the hell are you spending money on hardware you won't use instead of on cloud compute resources? Just so many endless excuses.

Please pay attention to the quote below, quite carefully:

ProgrammingGodJordan said:
(2) The (Super-)Hamiltonian structure required by thought curvature will require a quite scalable scheme, such as some boson sampling aligned regime in particular. The scheme above is approachable on the scale of 42 qubits, or a 44.8 GB RAM configuration for simple tasks/circuits, and I lack access to configurations with 44.8 GB of RAM.

Even if I could squeeze some testing onto my 32 GB system, this would be dangerous, since it is the only system I use to earn the income I live on.

This is dangerous because, from experimentation (see the quote about GPU destruction), I know that training machine learning algorithms takes a large toll on hardware.

You don't seem to get that I don't want to run Hamiltonian simulations as run in the quantum Boltzmann/reinforcement experiments; I want to run (Super-)Hamiltonian experiments on the horizon of this source instead.

As you can see above, this essentially means my system fails to cover the 44.x GB RAM configuration specified by toy examples of the boson-like sampling methods mentioned above.
 
BTW, it's very clear that you don't understand the scaling properties of simulated qubits. They do not scale linearly; that's the point. For instance, the largest such simulation, a 45-qubit simulation, needs 500,000 GB of RAM running on more than 8,000 nodes.

If you were able to prove small scale tests, it's possible you could run your full scale 42 qubit design on such a machine at no cost to you.

Your comment above stemmed from the prior invalid comment you made, regarding the actual space/time complexity required to perform these computations, as I specified in item 2 above.

Footnote:
I can't say I lack knowledge when it comes to the exponential nature of quantum computation.

See this concise mathematical description of quantum computation, of mine: https://www.researchgate.net/public...ion_describing_the_basis_of_quantum_computing
 
Your above statement was redundant then, and it remains redundant now.

It was I who provided that data to you: that they used a 2000 qubit system, with focus on some of the qubits. That focus was the 8 qubits you were proud to report from the very source I linked you to.




Please pay attention to the quote below, quite carefully:



You don't seem to get that I don't want to run Hamiltonian simulations as run in the quantum Boltzmann/reinforcement experiments; I want to run (Super-)Hamiltonian experiments on the horizon of this source instead.

As you can see above, this essentially means my system fails to cover the 44.x GB RAM configuration specified by toy examples of the boson-like sampling methods mentioned above.

What's your game plan?

How do you plan to approach getting answers to the questions your theories pose? So far I see a LOT of excuses, but no plans.

How are you going to get from point A to point B?
 
What's your game plan?

How do you plan to approach getting answers to the questions your theories pose? So far I see a LOT of excuses, but no plans.

How are you going to get from point A to point B?

I don't know what you meant by excuses, but these are currently unavoidable physical limitations brought about by a lack of funds/hardware.

As for how such hardware shall be acquired, I have first been working to complete certain prerequisites in code, before requesting funding externally.
 
I don't know what you meant by excuses, but these are currently unavoidable physical limitations brought about by a lack of funds/hardware.

As for how such hardware shall be acquired, I have first been working to complete certain prerequisites in code, before requesting funding externally.

Have you even researched the use of grid computing? The kind of resources SETI uses for distributed analysis of signal data are available to the masses thanks to various grid computing technologies.

What about smaller proof-of-concept tests that are within your reach, either on your own hardware or through grid computing technologies? Is there some smaller, more readily testable subset of your ideas that can be tested and used in a grant proposal to get funding for more ambitious tests?

You seem paralyzed by an all-or-nothing mentality, refusing to take partial measures if the complete solution isn't readily available. Your refusal to break the problem down into smaller units is functionally equivalent to conceding defeat. There would be a considerable degree of pathos in someone else coming along, stealing your ideas and publishing them as their own having done some of the smaller scale tests you appear to be refusing to even consider.
 
I can't say I lack knowledge when it comes to the exponential nature of quantum computation.

Thinking that quantum computation scales linearly with the number of qubits indicates not just a lack of knowledge in the field, but seems to indicate a complete lack of awareness of the entire point of quantum computation.

ETA: I encourage anyone to read that "paper" (don't worry, it's just a snippet). I really like how for some bizarre reason qubit got translated to "spooky-bit".
 
Your above statement was redundant then, and it remains redundant now.

It was I who provided that data to you: that they used a 2000 qubit system, with focus on some of the qubits. That focus was the 8 qubits you were proud to report from the very source I linked you to.


Listen to the first question after the talk. They only used a portion of the qubits, a small scale test.
 
Thinking that quantum computation scales linearly with the number of qubits indicates not just a lack of knowledge in the field, but seems to indicate a complete lack of awareness of the entire point of quantum computation.

ETA: I encourage anyone to read that "paper" (don't worry, it's just a snippet). I really like how for some bizarre reason qubit got translated to "spooky-bit".

Please try to calm down and read my prior quotes.

Nowhere had I expressed any such linear scaling.

Ironically, in the URL I linked you to with my mathematical description, I clearly describe an "exponential order" process:

[IMGw=540]https://i.imgur.com/9D3ujNv.png[/IMGw]
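For anyone who cannot view the image, the standard statement of that exponential order is simply that an n-qubit register carries 2^n complex amplitudes:

[code]
% An n-qubit state is a superposition over all 2^n basis strings:
\[
  \lvert \psi \rangle \;=\; \sum_{x \in \{0,1\}^n} \alpha_x \,\lvert x \rangle,
  \qquad \sum_{x \in \{0,1\}^n} \lvert \alpha_x \rvert^{2} = 1,
\]
% so a dense classical simulation must track all 2^n amplitudes \alpha_x.
[/code]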
 
