
Merged Artificial Intelligence Research: Supermathematics and Physics



"In the 60s, Marvin Minsky assigned a couple of undergrads to spend the summer programming a computer to use a camera to identify objects in a scene. He figured they'd have the problem solved by the end of the summer. Half a century later, we're still working on it."
 
My take on it is that he's saying that you can create estimates of probability using Bayesian statistics for separate abstract elements. Then you can combine the estimates to form a stronger prediction about the environment for a given intelligent agent. In Bayesian statistics a diffuse prior only provides vague predictions and therefore isn't very useful. His suggestion is that you can combine diffuse priors to make a much stronger prediction.

He seems to be trying to solve the problem of an intelligent agent acting within an environment without sufficient information about the environment. This is an ongoing problem with AI and machine learning. He seems to see this as a pattern matching problem which is why he refers to recurrent neural networks.

None of that is so bad, but after that it pretty much falls apart. He suggests that this mechanism would be useful for awareness and consciousness. He shows a misunderstanding of focus and seems to accept the language-of-thought theory. For example, if you did try to use his structure for awareness you would run into the frame problem. Maybe he isn't aware of it. His notion about focus is ludicrous since it could give you a random, divergent, or convergent process. This is a common problem with bottom-up approaches. To date there has been no supporting evidence for a language of thought other than that it goes along with computational theory.

That's my opinion about it. In other words, he might well be able to make a contribution to Bayesian statistics but I don't see this as advancing AI in the least.

Your opinion is off;

(1) It doesn't appear he is providing a framework that fully describes the structure of awareness; notice section 3, the consideration section, where one quickly finds a suggestion (and no actual, detailed instruction) on how to build what he calls "a toy example" to illustrate the theory he presents.

(2) Based on (1), the remainder of your response describing how his paper "falls apart", is off.

The phrase "toy example", especially in the context above, should show that the paper does not frame (nor intend to frame) any complete solution for awareness.
 

I don't know how your post above relates to the OP, but here are some useful links:

Deep Learning AI Better Than Your Doctor at Finding Cancer:
https://singularityhub.com/2015/11/...ai-better-than-your-doctor-at-finding-cancer/


Self-taught artificial intelligence beats doctors at predicting heart attacks:
http://www.sciencemag.org/news/2017...igence-beats-doctors-predicting-heart-attacks


Here is a sequence of cognitive fields/tasks where sophisticated artificial neural models exceed humans:

1) Language translation (eg: Skype 50+ languages)
2) Legal conflict resolution (eg: 'Watson')
3) Self-driving (eg: 'Otto Self-Driving')
4) Disease diagnosis (eg: 'Watson')
5) Medicinal drug prescription (eg: 'Watson')
6) Visual product sorting (eg: 'Amazon Corrigon')
7) Help desk assistance (eg: 'Digital Genius')
8) Mechanical cucumber sorting (eg: 'Makoto's Cucumber Sorter')
9) Financial analysis (eg: 'SigFig')
10) E-discovery law (eg: 'Social Science Research Network')
11) Anesthesiology (eg: 'SedaSys')
12) Music composition (eg: 'Emily')
13) Go (eg: 'AlphaGo')
n) etc, etc


The Rise of the Machines – Why Automation is Different this Time:
https://www.youtube.com/watch?v=WSKi8HfcxEk

Will artificial intelligence take your job?:
https://www.youtube.com/watch?v=P_-wn8ghcoY

Humans need not apply:
https://www.youtube.com/watch?v=7Pq-S557XQU

The wonderful and terrifying implications of computers that can learn:
https://www.youtube.com/watch?v=t4kyRyKyOpo

And also, a cool xkcd:

 

That comic is an excellent example of why it's not possible for us to be living in a simulation. The fact that the protagonist literally needs an infinite universe isn't a problem faced only by a computer made of rocks and sand.

Either we don't live in a simulation, or computing works differently outside the Matrix

But let's put those quibbles aside and dig into some physics, shall we? Theoretical physicists from Oxford just published Quantized gravitational responses, the sign problem, and quantum complexity in Science Advances, in which they document the complexity of computing the states of the particles that make up the universe. It turns out that the cost of this computation scales exponentially: the amount of computing power needed doubles with each additional particle, which means that "storing information about a couple of hundred electrons would require a computer memory that would physically require more atoms than exist in the universe."
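A back-of-the-envelope sketch of that scaling (the 10^80 atom count and the 16-bytes-per-amplitude figure are my own assumptions, not from the paper): a generic n-particle quantum state needs 2^n complex amplitudes, so a couple of hundred electrons already overwhelm any physically possible memory.

```python
# Sketch: why classically storing a generic quantum state blows up.
# An n-electron spin state needs 2**n complex amplitudes; assume 16
# bytes per complex amplitude, and compare against roughly 10**80
# atoms in the observable universe (even granting one byte per atom).

ATOMS_IN_UNIVERSE = 10**80  # common order-of-magnitude estimate

def amplitudes(n):
    # the state vector length doubles with each additional particle
    return 2**n

n = 280  # "a couple of hundred electrons"
bytes_needed = amplitudes(n) * 16
print(bytes_needed > ATOMS_IN_UNIVERSE)  # True
```

Doubling per particle is exactly what makes the growth exponential rather than polynomial, which is the crux of the quoted memory claim.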

Then there's the data transfer problem.

 
ProgrammingGodJordan: Looks like an expanded incoherent document

Have you updated your document to remove the gibberish that is "Thought Curvature"?
  1. 8 August 2017 ProgrammingGodJordan: Ignorant math word salad on academia.edu (gibberish title and worse contents).
  2. 14 August 2017 ProgrammingGodJordan: Thought Curvature abstract starts with actual gibberish.
  3. 14 August 2017 ProgrammingGodJordan: Thought Curvature abstract that lies about your previous wrong definition.
  4. 14 August 2017 ProgrammingGodJordan: Thought Curvature abstract ends with ignorant gibberish: "Ergo the paradox axiomatizes".
  5. 16 August 2017 ProgrammingGodJordan: Thought Curvature DeepMind bad scholarship (no citations) and some incoherence
  6. 18 August 2017 ProgrammingGodJordan: Thought Curvature uetorch bad scholarship (no citations) and incoherence
  7. 18 August 2017 ProgrammingGodJordan: Thought Curvature irrelevant "childhood neocortical framework" sentence and missing citation.
  8. 18 August 2017 ProgrammingGodJordan: Thought Curvature "non-invariant fabric" gibberish.
  9. 18 August 2017 ProgrammingGodJordan: Thought Curvature Partial paradox reduction gibberish and missing citations.
4 October 2017 ProgrammingGodJordan: Looks like an expanded incoherent document starting with title: "Thought Curvature: An underivative hypothesis"
4 October 2017 ProgrammingGodJordan: "An underivative hypothesis": An abstract of incoherent word salad linking to a PDF of worse gibberish.
Some Markov receptive C∞π(Rnπ) , reasonably permits uniform symbols on the boundary of Rn, betwixt some Uα, of φi; particularly on some input space of form η . (See preliminary encoding).
The link is to an even worse "Supermanifold Hypothesis (via Deep Learning)" PDF with nonsensical abstract of
If any homeomorphic transition in some neighbourhood in an euclidean space Rn yields ϕ(x,θ)Tw for wi, θ ϵ Rn, then reasonably, some homeomorphic transition sequence in some euclidean superspace C∞(Rn) yields ϕ(x,θ,θ)Tw for wi, θ ϵ Rn; θ ϵ some resultant map sequence over θ via ϕ, pertinently, abound some parametric oscillation paradigm, containing Zλ.[12]

Pertinently, Rn → form R0|n applies, on the horizon of the bosonic Riccati.[12]
Other than advertising your word and math salad PDFs, you seem to be:
  • Going on about the trivial fact that babies learn and that their learning processes may be a model for AI learning.
  • Harboring a fantasy that the other posters are ignorant about programming and AI, with posting of irrelevant tutorials.
 
ProgrammingGodJordan : "Supermathematics ...": the first word in the title is a lie

Next is the PDF "Supermathematics and Artificial General Intelligence" which does have a coherent abstract:
I clearly unravel how I came to invent the supermanifold hypothesis in deep learning, (a component in another description called 'thought curvature') in relation to quantum computation.
However:

4 October 2017 ProgrammingGodJordan: "Supermathematics ...": the first word in the title is a lie because supermathematics is not AI.
Supermathematics is the branch of mathematical physics which applies the mathematics of Lie superalgebras to the behaviour of bosons and fermions.
The behavior of bosons and fermions is not machine learning.
 
ProgrammingGodJordan: "Supermathematics ...": Wrong "manifold learning frameworks"

4 October 2017 ProgrammingGodJordan: "Supermathematics ...": The "manifold learning frameworks" link is wrong.
There are no "manifold learning frameworks" in Disentangling factors of variation in deep representations using adversarial training. There are 3 instances of the word manifold, referring to the data. The frameworks are Generative Adversarial Networks (GAN) and Variational Auto-Encoders (VAE), which this paper combines.
 



That’s the problem with folks who get too used to bamboozling people with sciency-sounding gibberish. Eventually they come someplace like this and get their ass handed to them by people who see through them.
 
halleyscomet said:
That’s the problem with folks who get too used to bamboozling people with sciency-sounding gibberish. Eventually they come someplace like this and get their ass handed to them by people who see through them.

@Halleyscomet, RealityCheck had already been shown to lack basic machine learning know-how.

For example, RealityCheck's words indicated that he or she had not been aware of the basic fact that deep learning models can include or exclude pooling, something the typical undergrad machine learning student would discover.

See the scenario here.

Here is a quick spoiler, saved just for this occasion:



ProgrammingGodJordan said:


PART A

It's time to escape that onset of self-denial, Reality Check.

Okay, let us unravel your errors:

(1) Why did you lie and express that 'any point in a supermanifold...is never euclidean', despite contrasting scientific evidence?

(2) Why ignore that you hadn't known that deep learning models, could include or exclude pooling layers?

(3) From your blunder in (2) above, why ignore that atari q did not include pooling for pretty clear reinforcement learning reasons (as I had long expressed in my thought curvature paper)?

(4) Why continuously accuse me of supposedly expressing that 'all super-manifolds were locally euclidean' contrary to contrasting evidence? Why do my words "Supermanifold may encode as "essentially flat euclidean super space" fabric" translate strictly to "Supermanifolds are euclidean" to you?
(accusation source 1, accusation source 2, accusation source 3)





PART B

Why Reality Check was wrong (relating to question 1):


Why Reality Check was wrong, (relating to question 2 and 3):



Into the introduction and:
15 August 2017 ProgrammingGodJordan: Ignorant nonsense about Deepmind.

You are demonstrably wrong, as you will see below.



ProgrammingGodJordan said:
Deepmind’s atari q architecture encompasses non-pooling convolutions
DeepMind is a "neural network that learns how to play video games in a fashion similar to that of humans". It can play several Atari games. It does not have an architecture related to those Atari games. What DeepMind does have is "a convolutional neural network, with a novel form of Q-learning".


What is the relevance of your line above?

Here is a more detailed, intuitive, mathematical description of mine, regarding deepmind's flavour of deep q learning (written in 2016):

https://www.quora.com/Artificial-In...p-Q-networks-DQN-work/answer/Jordan-Bennett-9




I have found one Google DeepMind paper about neural network architecture that explicitly includes pooling layers (though not as an implemented architecture element): Exploiting Cyclic Symmetry in Convolutional Neural Networks.

What is missing in the PDF is any references for DeepMind.

(1)
My thought curvature paper is unavoidably valid in expressing that deepmind did not use pooling layers in the Atari Q model. (See (2) below.)




(2)
Don't you know any machine learning?

Don't you know that convolutional layers can be in a model, without pooling layers?


WHY NO POOLING LAYERS (FOR THIS PARTICULAR SCENARIO)?

In particular, pooling layers enable translation invariance, such that object detection can occur regardless of position in an image. This is why deepmind left them out; without pooling, the model stays sensitive to changes in entities' positions per frame, so the model can reinforce itself by Q-updating.
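The translation-invariance point can be demonstrated with a toy NumPy sketch (my own illustration, not DeepMind's code): max-pooling erases a one-pixel shift of an "object" that a position-sensitive Q-learner would need to see, while the raw (conv-only) feature map preserves it.

```python
import numpy as np

# Toy illustration: 2x2 max-pooling discards small translations.
# A model that must track WHERE objects are per frame (as in Q-learning
# on Atari screens) loses that information once pooling is applied.

def max_pool(x, size=2):
    """Non-overlapping max-pooling over size x size blocks."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

frame = np.zeros((8, 8))
frame[2, 2] = 1.0                    # an "object" at one position
shifted = np.roll(frame, 1, axis=1)  # same object shifted right by 1 pixel

print(np.array_equal(max_pool(frame), max_pool(shifted)))  # True: pooled maps match, the shift is lost
print(np.array_equal(frame, shifted))                      # False: unpooled maps still differ
```

The pooled representations of the two frames are identical, so a downstream Q-update could not distinguish them; the unpooled maps can.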


SOME RESOURCES TO HELP TO PURGE YOUR IGNORANCE:

(a) Deepmind's paper.

(b) If (a) is too abstruse, see this breakdown of why atari q left out pooling layers. (A clear explanation, similar to the 'WHY NO POOLING LAYERS (FOR THIS PARTICULAR SCENARIO)?' section above, and as long written in the thought curvature paper.)




FOOTNOTE:
It is no surprise that deepmind used pooling in another framework. Pooling layers are used in deep learning all the time, and convolutional layers can either include or exclude pooling. (Deep learning basics.)

Why Reality Check was wrong (relating to question 4):



Nowhere had I stated that "all supermanifolds are locally Euclidean".

In fact, my earlier post (which preceded your accusation above) clearly expressed that "Supermanifold may encode as 'essentially flat euclidean super space' fabric".

Nothing above expresses that all supermanifolds are locally euclidean. Why bother to lie?



 
4 October 2017 ProgrammingGodJordan: "Supermathematics ...": The "manifold learning frameworks" link is wrong.
There are no "manifold learning frameworks" in Disentangling factors of variation in deep representations using adversarial training. There are 3 instances of the word manifold, referring to the data. The frameworks are Generative Adversarial Networks (GAN) and Variational Auto-Encoders (VAE), which this paper combines.

What are you on about above?

Are you disagreeing with my prior statement that disentangling factors aligns with manifold learning?
 
Next is the PDF "Supermathematics and Artificial General Intelligence" which does have a coherent abstract:

However:

4 October 2017 ProgrammingGodJordan: "Supermathematics ...": the first word in the title is a lie because supermathematics is not AI.

The behavior of bosons and fermions is not machine learning.

For reality's sake, please look at the thought curvature paper, for more than 5 minutes.

You will notice a source in that paper, concerning Super Symmetry at brain scale.

That has something to do with something called the bosonic riccati.

I explain the details in a github document here (See item 2).
 
Why should this thread deal with an image of what looks like mathematical gibberish?

[image: the mathematical notation in question]


Thought Curvature doesn't appear to be "mathematical gibberish" to apparently smart people from other places on the web.

Examples:

(1) Discussion on science forum:
http://www.scienceforums.net/topic/109496-supermathematics-and-artificial-general-intelligence/

The conversations in the science forum above led to another conversation with a user who had participated in the aforesaid discussion.
Warning: the following image is quite large:
[image: screenshot of that conversation]



(2) Discussion on physics overflow:
https://www.physicsoverflow.org/39603/possible-create-transverse-ising-compatible-hamiltonian


etc

What is it that you don't understand?

Why do you reckon that your words (demonstrating a lack of understanding) necessitate that thought curvature is suddenly supposedly "gibberish"?
 
ProgrammingGodJordan: Quote the cited description of manifold learning frameworks

What are you on about above?
An inability to understand what you read, or maybe even write!
4 October 2017 ProgrammingGodJordan: "Supermathematics ...": The "manifold learning frameworks" link is wrong because the paper does not have any manifold learning frameworks.

But just in case:
4 October 2017 ProgrammingGodJordan: Quote the description of manifold learning frameworks in the paper you cited.
 
ProgrammingGodJordan: Links to people basically ignoring him

Thought Curvature doesn't appear to be "mathematical gibberish" to apparently smart people from other places on the web.
4 October 2017 ProgrammingGodJordan: Rather ignorantly links to people basically ignoring him!
A couple of threads on other forums with a handful of posts or comments.

Mordred for example suggests that you need to study to make any progress.

In the other forum you admit that you do not have a college level of education or training in physics (and thus the required math skills).
Unfortunately, my knowledge is very limited, as I lack at minimum a Bachelors physics degree, or any training in physics, so the method outlined in the super Hamiltonian paper above, was the easiest entry point I could garner of based on evidence observed thus far.
 
4 October 2017 ProgrammingGodJordan: Rather ignorantly links to people basically ignoring him!
A couple of threads on other forums with a handful of posts or comments.

Mordred for example suggests that you need to study to make any progress.

Of what relevance is this to the OP?

As I mentioned in reply 194, Mordred also went on, in a personal inbox conversation, to answer some questions that helped lead to thought curvature's current form.

RealityCheck said:
In the other forum you admit that you do not have a college level of education or training in physics (and thus the required math skills)

Yes, I did. (Recall that it is I that linked you to said forum?)
However, this does not suddenly instantiate that thought curvature is "mathematical gibberish" as you would like to incite.
See the same forum once more.
 
Of what relevance is this to the OP?
Your post is not the OP nor is reply 191.
4 October 2017 ProgrammingGodJordan: Rather ignorantly links to people basically ignoring him!

But you did list Mordred's messages to you so:
4 October 2017 ProgrammingGodJordan: Lists messages from someone mostly ignoring his work!
Mordred describes a tiny bit of the mathematics and physics of QFT. Mordred ignores your work. Mordred mentions one of your citations favorably. He does not mention that this is a year and a half old preprint with no sign of publication. But it is clear that quantum computing should give advantages over classical computing in AI.
 
