
Merged Artificial Intelligence Research: Supermathematics and Physics

ProgrammingGodJordan said:
It was I who provided the data to you: that they used a 2000-qubit system, with focus on some of the qubits. That focus was the 8 qubits you were proud to report, from the very source I linked you to.
Listen to the first question after the talk. They only used a portion of the qubits, a small-scale test.



I don't know if you're trolling, but I had long stated that they focused on some of the 2000 qubits, and that the portion focused on was the 8 qubits.

This means I am not disagreeing that they used 8 qubits, as I had long stated that they were using a portion.

The very first quote I showed you about the video stated, quite clearly, that they used an 8-qubit system. Why repeat the same thing to me?
 
"Yes, at 8 qubits, requiring roughly 8.533 gb of ram" (~1GB per qubit)

"A "2000 qubit" machine simulation corresponds to a 2133.33333333 gb ram configuration." (~1GB per qubit)

Then why did you scale this in a purely linear fashion?
 
"Yes, at 8 qubits, requiring roughly 8.533 gb of ram" (~1GB per qubit)

"A "2000 qubit" machine simulation corresponds to a 2133.33333333 gb ram configuration." (~1GB per qubit)

Then why did you scale this in a purely linear fashion?

Thanks for pointing out that large error.

Unlike when I wrote the exponential-order paper, I am a bit ill, as I revealed earlier on page 6, and so things are somewhat blurry for me now.

This puts the old 44 GB RAM configuration, for 42 qubits, at 131,072 GB of RAM instead.
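For anyone who wants to check the scaling themselves, here is a minimal sketch of the state-vector memory formula behind the correction (my own illustration, not code from any of the linked papers). The 32 bytes per amplitude is an assumption chosen to reproduce the 131,072 GB figure above; a simulator storing complex128 amplitudes would use 16 bytes and halve every number.

[CODE]
import math

def state_vector_gib(n_qubits: int, bytes_per_amplitude: int = 32) -> float:
    """GiB of RAM for a dense state vector of 2**n_qubits complex amplitudes.

    Memory grows as 2**n_qubits (exponentially), not ~1 GiB per qubit (linearly).
    bytes_per_amplitude = 32 is an assumption that matches the 131,072 GB figure;
    a complex128 simulator would use 16.
    """
    return 2 ** n_qubits * bytes_per_amplitude / 2 ** 30

print(state_vector_gib(8))    # ~7.6e-06 GiB: 8 qubits are trivial to hold in RAM
print(state_vector_gib(42))   # 131072.0 GiB: the corrected figure for 42 qubits
# A dense 2000-qubit state vector is hopeless; even the GiB count has ~595 digits:
print(math.log10(2) * (2000 + 5 - 30))   # ~594.5, i.e. roughly 10**594.5 GiB
[/CODE]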
 
Distinguished scientist on the mistakes pundits make when they predict the future of AI

Rodney Brooks -- eminent computer scientist and roboticist who has served as head of MIT's Computer Science and Artificial Intelligence Laboratory and CTO of iRobot -- has written a scorching, provocative list of the seven most common errors made (or cards palmed) by pundits and other fortune-tellers when they predict the future of AI.

His first insight is that AI is subject to the Gartner Hype Cycle (AKA Amara's Law: "We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run"), which means that a lot of what AI is supposed to be doing in the next couple years (like taking over half of all jobs in 10-20 years) is totally overblown, while the long-term consequences will likely be so profound that the effects on labor markets will be small potatoes.




Next is the unexplained leap from today's specialized, "weak" AIs that do things like recognize faces, to "strong" general AI that can handle the kind of cognitive work that humans are very good at and machines still totally suck at. It's not impossible that we'll make that leap, but anyone predicting it who can't explain where it will come from is just making stuff up.
 

What was the point of the above, especially when the following is occurring?:



The more time passes, the more general smart algorithms become, and the more cognitive tasks they take on:

Deep Learning AI Better Than Your Doctor at Finding Cancer:
https://singularityhub.com/2015/11/...ai-better-than-your-doctor-at-finding-cancer/


Self-taught artificial intelligence beats doctors at predicting heart attacks:
http://www.sciencemag.org/news/2017...igence-beats-doctors-predicting-heart-attacks


Here is a sequence of cognitive fields/tasks where sophisticated artificial neural models exceed humankind:

1) Language translation (eg: Skype, 50+ languages)
2) Legal conflict resolution (eg: 'Watson')
3) Self-driving (eg: 'Otto' self-driving)
4) Disease diagnosis (eg: 'Watson')
5) Medicinal drug prescription (eg: 'Watson')
6) Visual product sorting (eg: 'Amazon Corrigon')
7) Help desk assistance (eg: 'Digital Genius')
8) Mechanical cucumber sorting (eg: 'Makoto's cucumber sorter')
9) Financial analysis (eg: 'SigFig')
10) E-discovery law (eg: 'Social Science Research Network')
11) Anesthesiology (eg: 'SedaSys')
12) Music composition (eg: 'Emily')
13) Go (eg: 'AlphaGo')
n) etc, etc




Can we build AI without losing control over it:
https://www.youtube.com/watch?v=8nt3edWLgIg&feature=youtu.be&t=613

The Rise of the Machines – Why Automation is Different this Time:
https://www.youtube.com/watch?v=WSKi8HfcxEk

Will artificial intelligence take your job?:
https://www.youtube.com/watch?v=P_-wn8ghcoY

Humans need not apply:
https://www.youtube.com/watch?v=7Pq-S557XQU

The wonderful and terrifying implications of computers that can learn:
https://www.youtube.com/watch?v=t4kyRyKyOpo
 

Yes, people tend to overestimate and underestimate.

We should also recall that artificial general intelligence is already here to some degree.

DeepMind's learning algorithms are arguably the strongest AI on the planet, as their AIs are the first approximations of artificial general intelligence.

Here is Demis Hassabis discussing the general algorithms that DeepMind has already made and is already improving:

https://youtu.be/t03xNZ9qY1A?t=164

And here is a little passage for those who might not understand the importance of games (which DeepMind deals with) in machine learning:
https://medium.com/@jordanmicahbenn...her-than-some-real-world-problem-55843c8ebcb9
 
What was the point of the above, especially when the following is occurring?:

It shows how an AI doesn't need much intelligence if you target it right. Fine tuning for the task at hand is going to result in far more productive AI for the time being than trying to achieve general purpose cognitive leaps. The medical AI you mentioned is fairly stupid in a general-purpose sense, but still out-performs the general-purpose intelligence of the doctors.
 
It shows how an AI doesn't need much intelligence if you target it right. Fine tuning for the task at hand is going to result in far more productive AI for the time being than trying to achieve general purpose cognitive leaps. The medical AI you mentioned is fairly stupid in a general-purpose sense, but still out-performs the general-purpose intelligence of the doctors.

Well, of course, AI doesn't need much intelligence to do narrow tasks.

However, we see that with human-level intelligence, a general intelligence, we get general cognitive-task performance.

It then makes sense that we attempt to mirror human-level intelligence, at least insofar as we build general artificial models.

This is where my quote below comes in:

We should also recall that artificial general intelligence is already here to some degree.

DeepMind's learning algorithms are arguably the strongest AI on the planet, as their AIs are the first approximations of artificial general intelligence.

Here is Demis Hassabis discussing the general algorithms that DeepMind has already made and is already improving:

https://youtu.be/t03xNZ9qY1A?t=164

And here is a little passage for those who might not understand the importance of games (which DeepMind deals with) in machine learning:
https://medium.com/@jordanmicahbenn...her-than-some-real-world-problem-55843c8ebcb9

This is why the planet's smartest AI people are attempting to make general artificial intelligence, and probably why Google bought DeepMind, the company behind the general Atari game player, for 500 million pounds.
Another example is Suzanne Gildert, a former quantum-computing specialist and now owner of Kindred AI, aiming to make general intelligence.

Suzanne Gildert left the D-Wave quantum computer company to start her own artificial intelligence lab: https://youtu.be/JBWc09b6LnM?t=1303

[IMGw=650]https://i.imgur.com/5JmNznK.png[/IMGw]
 
Egad! We are in agreement!

I will cherish the moment.

BTW, there's a tag for embedding YouTube videos:


I know, I have used that tag several times.

But it doesn't seem to work for video time stamps...

We are not in agreement though, at least not entirely; as I have demonstrated above, narrow-task learners are not sufficient, as the task space may require general learning.

A quick example: for narrow-task learners, the engineers need to reconfigure their models for each task.

So a big benefit of increasingly general intelligence is a phenomenon called transfer learning, which grants the ability to reuse knowledge from prior tasks in new tasks, minus the massive reconfiguration effort.
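To make the transfer-learning point concrete, here is a rough, hypothetical sketch (my own, not DeepMind's code or anything from the papers above) using PyTorch and torchvision 0.13+: a network pretrained on one task is kept frozen as a feature extractor, and only a small new head is trained for the new task, instead of reconfiguring and retraining the whole model.

[CODE]
import torch
import torch.nn as nn
from torchvision import models

# Start from a model trained on a prior task (ImageNet classification here).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained weights: knowledge from the prior task is kept as-is.
for param in backbone.parameters():
    param.requires_grad = False

# Replace only the final layer for the new task (say, 3 classes in a new domain;
# the class count is a made-up example).
backbone.fc = nn.Linear(backbone.fc.in_features, 3)

# Only the new head is optimized; no massive reconfiguration of the rest.
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One hypothetical training step on new-task data (random tensors as stand-ins).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 3, (8,))
optimizer.zero_grad()
loss = loss_fn(backbone(images), labels)
loss.backward()
optimizer.step()
[/CODE]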
 
I know, I have used that tag several times.

But it doesn't seem to work for video time stamps...

We are not in agreement though, at least not entirely; as I have demonstrated above, narrow-task learners are not sufficient, as the task space may require general learning.

A quick example: for narrow-task learners, the engineers need to reconfigure their models for each task.

So a big benefit of increasingly general intelligence is a phenomenon called transfer learning, which grants the ability to reuse knowledge from prior tasks in new tasks, minus the massive reconfiguration effort.

I'm not seeing where we disagree. There's a reason I put the time qualifier of "for the time being" on my comments about the advantages of focused AI vs general-purpose AI. It's not unlike the difference between general-purpose computers and dedicated systems. Once general-purpose computers were mature and inexpensive enough they started replacing many dedicated systems. If you played Pac-Man in an arcade in the 1980's, it was on a custom built machine where the software and the hardware were intertwined. That hardware would never play another game, because the hardware was built for Pac-Man's code. If you play Pac-Man in an arcade today, it's likely on a general-purpose PC built into a cool looking cabinet, but that hardware could easily run any of a number of other games that use the same controls.
 
ProgrammingGodJordan said:
We are not in agreement though, at least not entirely; as I have demonstrated above, narrow-task learners are not sufficient, as the task space may require general learning.

A quick example: for narrow-task learners, the engineers need to reconfigure their models for each task.

So a big benefit of increasingly general intelligence is a phenomenon called transfer learning, which grants the ability to reuse knowledge from prior tasks in new tasks, minus the massive reconfiguration effort.

I'm not seeing where we disagree. There's a reason I put the time qualifier of "for the time being" on my comments about the advantages of focused AI vs general-purpose AI. It's not unlike the difference between general-purpose computers and dedicated systems. Once general-purpose computers were mature and inexpensive enough they started replacing many dedicated systems. If you played Pac-Man in an arcade in the 1980's, it was on a custom built machine where the software and the hardware were intertwined. That hardware would never play another game, because the hardware was built for Pac-Man's code. If you play Pac-Man in an arcade today, it's likely on a general-purpose PC built into a cool looking cabinet, but that hardware could easily run any of a number of other games that use the same controls.

Notably, we disagree, because I contend that rather than deferring general AI "for the time being", the focus on general AI is warranted right now, due to particular problems that are affecting the field now. (An example is the transfer-learning point I mentioned in your quote of me above.)
 
Notably, we disagree, because I contend that rather than deferring general AI "for the time being", the focus on general AI is warranted right now, due to particular problems that are affecting the field now. (An example is the transfer-learning point I mentioned in your quote of me above.)



Thank you for the clarification.
 
Resorts to repeated insults of my level of knowledge of machine learning

No, I was not interested in pooling w.r.t. deep Q-learning.
Followed by a post with inane coloring and insults, so:
12 October 2017: Resorts to a repeated insult of my level of knowledge of machine learning.
  1. 8 August 2017: Ignorant math word salad on academia.edu (gibberish title and worse contents).
  2. 14 August 2017: Thought Curvature abstract starts with actual gibberish.
  3. 14 August 2017: Thought Curvature abstract that lies about your previous wrong definition.
  4. 14 August 2017: A Curvature abstract ends with ignorant gibberish: "Ergo the paradox axiomatizes".
  5. 16 August 2017: Thought Curvature DeepMind bad scholarship (no citations) and some incoherence
  6. 18 August 2017: Thought Curvature uetorch bad scholarship (no citations) and incoherence
  7. 18 August 2017: Thought Curvature irrelevant "childhood neocortical framework" sentence and missing citation.
  8. 18 August 2017: Thought Curvature "non-invariant fabric" gibberish.
  9. 18 August 2017: Thought Curvature Partial paradox reduction gibberish and missing citations.
  10. 4 October 2017: Looks like an expanded incoherent document starting with title: "Thought Curvature: An underivative hypothesis"
  11. 4 October 2017: "An underivative hypothesis": An abstract of incoherent word salad linking to a PDF of worse gibberish.
  12. 4 October 2017: "Supermathematics ...": The "manifold learning frameworks" link is wrong because the paper does not have any manifold learning frameworks
  13. 4 October 2017: Links to people basically ignoring his ideas in 2 forum threads!
  14. 4 October 2017 ProgrammingGodJordan: It is a lie that I stated that manifold learning frameworks is in the paper.
  15. 4 October 2017 ProgrammingGodJordan: Lists messages from someone mostly ignoring his work!
  16. 5 October 2017: A link to a PDF repeating a delusion of a "Deepmnd atari q architecture".
  17. 5 October 2017: A lie about an "irrelevant one line description of deep q learning" when I quoted a relevant DeepMind Wikipedia article.
  18. 5 October 2017: No experiment at all, proposed or actual at the given link or PDF!
  19. 5 October 2017: A PDF section title lies about a probable experiment; there is no experiment at all, proposed or actual.
  20. 6 October 2017: Insults about knowledge of machine learning when I displayed knowledge by looking for something I knew about (pooling versus non-pooling layers; a brief sketch of that distinction follows this list).
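For readers unfamiliar with that distinction, here is a neutral sketch (my own illustration, not code from either poster's documents) of a small convolutional stack that downsamples with pooling layers versus one that downsamples with strided convolutions and no pooling, the pooling-free style used by the well-known DeepMind DQN Atari network.

[CODE]
import torch.nn as nn

# Downsampling with pooling layers, as in many image classifiers:
with_pooling = nn.Sequential(
    nn.Conv2d(4, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
)

# Downsampling with strided convolutions and no pooling, in the style of the
# DQN Atari network (4 stacked frames in; spatial detail handled by strides):
without_pooling = nn.Sequential(
    nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
)
[/CODE]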
 
(3) You neglected to copy the ...
I linked to the source so that people could read the incoherent nonsense "Causal Neural Paradox (Thought Curvature): Aptly, the transient, naive hypothesis" for themselves. But that nonsense has vanished from academia.edu.
Your link:
As an unofficial AI researcher myself, I am working on AI, as it relates to super-manifolds. (I recently invented something called 'thought curvature',..
My first response:
You posted some ignorant math word salad on academia.edu. Starts with the title ("Causal Neural Paradox (Thought Curvature): Aptly, the transient, naive hypothesis") and gets worse from there.
Now you link to a different source and a mostly different PDF with a less nonsensical title, "Thought Curvature: An underivative hypothesis".
 
