• Quick note - the problem with YouTube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems, let me know.

Merged Artificial Intelligence Research: Supermathematics and Physics

16 August 2017 ProgrammingGodJordan: Thought Curvature DeepMind bad scholarship (no citations) and some incoherence
PDF
Deepmind’s atari q architecture encompasses non-pooling convolutions, therein generating object shift sensitivity, whence the model maximizes some reward over said shifts together with separate changing states for each sampled t state; translation non-invariance
I have covered the "atari q" nonsense (DeepMind has no "Atari q" architecture; it plays Atari games using Q-learning). There is the bad scholarship of no supporting citations, and some incoherence. This may be an attempt to say that DeepMind recognizes moving objects, such as sprites in a video game.
 
Reality Check said:
16 August 2017 ProgrammingGodJordan: Thought Curvature DeepMind bad scholarship (no citations) and some incoherence
PDF

I have covered the "atari q" nonsense (DeepMind has no "Atari q" architecture; it plays Atari games using Q-learning). There is the bad scholarship of no supporting citations, and some incoherence. This may be an attempt to say that DeepMind recognizes moving objects, such as sprites in a video game.

Wrong.

It is no fault of mine that you are unable to parse basic English.

Anyway, it was you who expressed nonsense:

ProgrammingGodJordan said:
Deepmind’s atari q architecture encompasses non-pooling convolutions
Reality Check said:
I have found one Google DeepMind paper that explicitly mentions pooling layers, though not as an implemented architecture element: Exploiting Cyclic Symmetry in Convolutional Neural Networks.

What is missing from the PDF is any reference for DeepMind.

You falsely believed that pooling layers were crucial to models with convolutional layers, despite the fact that the Atari Q model did not include any such pooling layer.

The evidence is clearly observable:

[imgw=150]http://i.imgur.com/JYrZOW4.jpg[/imgw]


Moving into the introduction:
15 August 2017 ProgrammingGodJordan: Ignorant nonsense about Deepmind.

You are demonstrably wrong, as you will see below.



DeepMind is a "neural network that learns how to play video games in a fashion similar to that of humans". It can play several Atari games. It does not have an architecture related to those Atari games. What DeepMind does have is "a convolutional neural network, with a novel form of Q-learning".


What is the relevance of your line above?

Here is a more detailed, intuitive mathematical description of mine regarding DeepMind's flavour of deep Q-learning (written in 2016):

https://www.quora.com/Artificial-In...p-Q-networks-DQN-work/answer/Jordan-Bennett-9




Reality Check said:
I have found one Google DeepMind paper that explicitly mentions pooling layers, though not as an implemented architecture element: Exploiting Cyclic Symmetry in Convolutional Neural Networks.

What is missing from the PDF is any reference for DeepMind.

(1)
My Thought Curvature paper is unavoidably valid in expressing that DeepMind did not use pooling layers in the Atari Q model. (See (2) below.)




(2)
Don't you know any machine learning?

Don't you know that convolutional layers can be in a model without pooling layers?
[image: PaUaBx9.png]
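
To make the point concrete, here is a minimal sketch (my own illustration, not DeepMind's released code) of a convolution-only network in the style of the Atari DQN: strided convolutional layers with no pooling anywhere, followed by a fully connected head that outputs one Q-value per action. The layer sizes loosely follow the 2015 DQN paper and are illustrative; the class and variable names are my own.

[code]
# Illustrative sketch of a pooling-free, DQN-style convolutional network.
# Layer sizes loosely follow the 2015 Atari DQN paper; treat them as examples.
import torch
import torch.nn as nn

class PoolFreeQNetwork(nn.Module):
    def __init__(self, n_actions: int, in_channels: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),  # 7x7 assumes 84x84 input frames
            nn.Linear(512, n_actions),              # one Q-value per action
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

q_net = PoolFreeQNetwork(n_actions=6)
q_values = q_net(torch.zeros(1, 4, 84, 84))  # a stack of four 84x84 frames
print(q_values.shape)                        # torch.Size([1, 6])
[/code]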


WHY NO POOLING LAYERS (FOR THIS PARTICULAR SCENARIO)?

In particular, pooling layers enable translation invariance, so that object detection can occur regardless of an object's position in an image. This is why DeepMind left pooling out; the model stays sensitive to changes in entities' positions from frame to frame, so it can reinforce itself by Q-updating on those changes.
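
As a toy illustration of that trade-off (my own sketch, not anything from DeepMind's pipeline), the snippet below shows that an aggressively pooled view of a frame cannot tell that an object has moved, while the unpooled map can; the standard Q-learning target is quoted in the comment only for orientation.

[code]
# Toy illustration: aggressive pooling makes two frames that differ only by an
# object shift look identical, hiding from the standard Q-learning target
#     Q(s, a) <- r + gamma * max_a' Q(s', a')
# exactly the positional change it needs to learn from.
import numpy as np

def global_max_pool(feature_map: np.ndarray) -> float:
    # Extreme case of pooling: keep only the strongest activation anywhere.
    return float(feature_map.max())

frame_a = np.zeros((6, 6))
frame_a[1, 1] = 1.0          # "sprite" at position (1, 1)

frame_b = np.zeros((6, 6))
frame_b[1, 3] = 1.0          # the same "sprite", shifted two pixels to the right

print(global_max_pool(frame_a) == global_max_pool(frame_b))  # True: the pooled view hides the shift
print(np.array_equal(frame_a, frame_b))                      # False: the raw maps still see it
[/code]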


SOME RESOURCES TO HELP TO PURGE YOUR IGNORANCE:

(a) DeepMind's paper.

(b) If (a) is too abstruse, see this breakdown of why the Atari Q model left out pooling layers. (A clear explanation, similar to the 'WHY NO POOLING LAYERS (FOR THIS PARTICULAR SCENARIO)?' section above, and to what has long been written in the Thought Curvature paper.)




FOOTNOTE:
It is no surprise that DeepMind used pooling in another framework. Pooling layers are used in deep learning all the time, and convolutional models can either include or exclude pooling. (Deep learning basics.)
 
[IMGw=180]http://i.imgur.com/MyFzMcl.jpg[/IMGw]


PART A

It's time to escape that onset of self-denial, Reality Check.

Okay, let us unravel your errors:

(1) Why did you lie and express that 'any point in a supermanifold...is never euclidean', despite scientific evidence to the contrary?

(2) Why ignore that you hadn't known that deep learning models could include or exclude pooling layers?

(3) Following your blunder in (2) above, why ignore that the Atari Q model did not include pooling, for pretty clear reinforcement learning reasons (as I had long expressed in my Thought Curvature paper)?

(4) Why continually accuse me of expressing that 'all super-manifolds were locally euclidean', contrary to the evidence? Why do my words "Supermanifold may encode as 'essentially flat euclidean super space' fabric" translate strictly to "Supermanifolds are euclidean" for you?
(accusation source 1, accusation source 2, accusation source 3)





PART B

Why Reality Check was wrong (relating to question 1):


Why Reality Check was wrong (relating to questions 2 and 3):


Why Reality Check was wrong (relating to question 4):



Nowhere had I stated that "all supermanifolds are locally Euclidean".

In fact, my earlier post (which preceded your accusation above) clearly expressed that "Supermanifold may encode as 'essentially flat euclidean super space' fabric".

Nothing above expresses that all supermanifolds are locally euclidean. Why bother to lie?
 

You need to observe, once more, my prior quote:

ProgrammingGodJordan said:
You must observe by now, that supermanifolds may bear euclidean behaviour. (See euclidean supermanifold reference)

Where the above is valid, grassmann algebra need not apply, as long stated.

Otherwise, why bother to ignore the evidence?

How shall ignoring the evidence benefit your education?
 
Irrelevant. Max Tegmark is also a physicist who has not undergone official artificial intelligence training, and yet he has already contributed important work in the field of machine learning.

Tegmark presents consciousness as a mathematical problem, while Witten presents it as a likely forever unsolvable mystery.
I didn't suggest that being a physicist would prevent him from making contributions to AI. I suggested that it wouldn't guarantee that he would. Showing that other physicists have made such contributions would address the first argument, but not the second.

Similarly, people who wear red hats aren't necessarily going to be able to make breakthroughs in AI. Finding a picture of an AI researcher who has made breakthroughs wearing a red hat wouldn't change that fact.




It is unavoidable that he could contribute; manifolds (something Edward works on) apply empirically in machine learning.

One need not be a Nobel prize-winning physicist to observe the above.

I actually think that it's reasonable to think he might be able to make some sort of a contribution, though I wouldn't wager whether it would be large or small. But you haven't addressed the point that his time is finite. He can either spend any particular minute of his time thinking about and working on physics or on AI, but not both. Again, I suspect that he is the best judge of how that time is best spent.
 
I actually think that it's reasonable to think he might be able to make some sort of a contribution, though I wouldn't wager whether it would be large or small. But you haven't addressed the point that his time is finite. He can either spend any particular minute of his time thinking about and working on physics or on AI, but not both. Again, I suspect that he is the best judge of how that time is best spent.

Consider a prior quote of mine, which you may have missed:

ProgrammingGodJordan said:
It is noteworthy that physicists aim to unravel the cosmos' mysteries, and so it is a mystery why Witten would choose not to partake in the active machine learning field, especially given that:

(1) Manifolds apply non-trivially in machine learning.

(2) AI is one of mankind's most profound tools.

(3) AI is already performing Nobel-prize-level tasks very efficiently.

(4) AI may well be mankind's last invention.
 
ProgrammingGodJordan: Thought Curvature uetorch bad scholarship (no citations)

18 August 2017 ProgrammingGodJordan: Thought Curvature uetorch bad scholarship (no citations) and incoherence
PDF
Separately, uetorch, encodes an object trajectory behaviour physics learner, particularly on pooling layers; translation invariance
A mishmash of words not meaning much.
There is a "UETorch" open-source environment: an Unreal Engine plugin that embeds the Torch deep learning framework.
 
ProgrammingGodJordan: Thought Curvature irrelevant "childhood neocortical framework"

18 August 2017 ProgrammingGodJordan: Thought Curvature irrelevant "childhood neocortical framework" sentence and missing citation.
PDF
It is non-abstrusely observable, that the childhood neocortical framework pre-encodes certain causal physical laws in the neurons (Stahl et al), amalgamating in perceptual learning abstractions into non-childhood.
That sentence contains the only occurrence of "Stahl" on the web page displaying the PDF!
I am getting the impression that either English is a second language for the author, or they are stringing together science words and thinking the result makes sense.
 
ProgrammingGodJordan: Thought Curvature "non-invariant fabric" gibberish

18 August 2017 ProgrammingGodJordan: Thought Curvature "non-invariant fabric" gibberish.
PDF
As such, it is perhaps exigent that non-invariant fabric composes in the invariant, therein engendering time-space complex optimal causal, conscious artificial construction. If this confluence is reasonable, is such paradoxical?
Everyone can read that this paragraph is gibberish and invalid English.
A total non sequitur (not "As such" :eye-poppi) into "fabric".
 
ProgrammingGodJordan: Thought Curvature Partial paradox reduction gibberish

18 August 2017 ProgrammingGodJordan: Thought Curvature Partial paradox reduction gibberish and missing citations.
PDF
Partial paradox reduction
Paradoxical strings have been perturbed to reduce in factor variant/invariant manifold interaction paradigms (Bengio et al, Kihyuk et al), that effectively learn to disentangle varying factors.
 
ProgrammingGodJordan: A lie about what I wrote in a post

A crazily formatted post leads to:
18 August 2017 ProgrammingGodJordan: A lie about what I wrote in a post.
I did not write 'any point in a supermanifold...is never euclidean' in my 29th March 2017 post.
Repeating ignorance about supermanifolds does not change that they are not locally Euclidean, as everyone who reads the Wikipedia article you cited understands.
Locally means a small region.
For others:
A point in a supermanifold has non-Euclidean components and so cannot be Euclidean.
Roger Penrose has a few pages on supermanifolds in 'The Road To Reality' and (N.B. from memory) gives the simplest example: Real numbers R with an anti-commuting generator ε "where εε = -εε, whence ε² = 0". For every a and b in R there is a corresponding a + εb. I visualize this as extending R into a very weird plane.
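
For anyone who wants to play with that example, here is a minimal sketch of the one-generator algebra: numbers a + εb over the reals, with ε² = 0. The class and variable names are my own, for illustration, and are not from Penrose.

[code]
# Minimal sketch of the one-generator algebra described above: numbers a + eps*b
# over the reals, where eps*eps = 0. Names are my own, for illustration only.
class GrassmannNumber:
    def __init__(self, a: float, b: float):
        self.a = a  # ordinary commuting ("Euclidean") part
        self.b = b  # coefficient of the anticommuting generator eps

    def __mul__(self, other: "GrassmannNumber") -> "GrassmannNumber":
        # (a + eps*b)(c + eps*d) = ac + eps*(ad + bc), because eps*eps = 0
        return GrassmannNumber(self.a * other.a,
                               self.a * other.b + self.b * other.a)

    def __repr__(self) -> str:
        return f"{self.a} + eps*{self.b}"

eps = GrassmannNumber(0.0, 1.0)
print(eps * eps)                                             # 0.0 + eps*0.0 : eps squares to zero
print(GrassmannNumber(2.0, 3.0) * GrassmannNumber(4.0, 5.0)) # 8.0 + eps*22.0
[/code]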

18 August 2017 ProgrammingGodJordan: A fantasy that I did not know deep learning models could include or exclude pooling layers.
15 August 2017 ProgrammingGodJordan: Ignorant nonsense about Deepmind
DeepMind is a "neural network that learns how to play video games in a fashion similar to that of humans". It can play several Atari games. It does not have an architecture related to those Atari games. What DeepMind does have is "a convolutional neural network, with a novel form of Q-learning". I have found one Google DeepMind paper that explicitly mentions pooling layers, though not as an implemented architecture element: Exploiting Cyclic Symmetry in Convolutional Neural Networks.
I already knew about their use in convolutional neural networks, so I went looking for their possible use by DeepMind.

18 August 2017 ProgrammingGodJordan: Repeated "atari q" gibberish when DeepMind is not an Atari machine and has no "q" architecture (it does use Q-learning).

18 August 2017 ProgrammingGodJordan: "Supermanifold may encode as "essentially flat euclidean super space"" obsession again.
I translate that as ignorance about supermanifolds. It is a lie that I translate that ignorance into "Supermanifolds are euclidean", because you know that I know supermanifolds are not Euclidean.
 
Supermathematics and Artificial General Intelligence / Thought Curvature

[imgw=350]http://i.imgur.com/1qOIvRh.gif[/imgw]


Intriguingly, both the Google DeepMind paper "Early Visual Concept Learning" (September 2016) and my paper, entitled "Thought Curvature" (May 2016):

(1) Consider combining things in machine learning called translation invariant and translation variant paradigms, i.e. disentangling factors of variation (see the sketch after this list);

(2) Do (1) particularly in the regime of reinforcement learning, the causal laws of physics, and manifolds.
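
One naive way to read "combining translation invariant and translation variant paradigms" is sketched below: a pooled, shift-insensitive branch and an unpooled, shift-sensitive branch computed from the same frame and concatenated. This is my own toy illustration under that assumption; it is not the architecture of either "Early Visual Concept Learning" or Thought Curvature.

[code]
# Toy sketch: one pooled (translation invariant) branch and one unpooled
# (translation variant) branch over the same convolutional features,
# concatenated so both "what" and "where" information survive.
import torch
import torch.nn as nn

class TwoBranchEncoder(nn.Module):
    def __init__(self, in_channels: int = 1, channels: int = 16):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, channels, kernel_size=3, padding=1)
        self.pool = nn.AdaptiveMaxPool2d(1)  # invariant branch: "what is present"

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = torch.relu(self.conv(x))
        invariant = self.pool(feats).flatten(1)   # (batch, channels), shift-insensitive
        variant = feats.flatten(1)                # (batch, channels*H*W), keeps positions
        return torch.cat([invariant, variant], dim=1)

encoder = TwoBranchEncoder()
print(encoder(torch.zeros(1, 1, 8, 8)).shape)  # torch.Size([1, 16 + 16*8*8]) == [1, 1040]
[/code]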


FOOTNOTE:
Notably, beyond the DeepMind paper, Thought Curvature describes the (machine learning related) algebra of supermanifolds, instead of mere manifolds.


QUESTION:
Given particular streams of evidence..., is some degree of supermanifold structure a viable path toward mankind's likely last invention, Artificial General Intelligence?


Edited by Agatha: 
Edited as the 'thought curvature' link is dead. Please go to this link: https://www.researchgate.net/publication/316586028_Thought_Curvature_An_underivative_hypothesis










