
Merged Artificial Intelligence Research: Supermathematics and Physics

See thought curvature paper.
I replied to your post where you defined eta with a link to an irrelevant Wikipedia article with a definition used in fluid mechanics.
The Kolmogorov scale eta is a parameter of a fluid:
η = (ν³/ε)^(1/4)
where ν is the kinematic viscosity and ε is the rate of kinetic energy dissipation.
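(For concreteness, a minimal Python sketch of that definition; the numbers below are arbitrary example values, not measurements:)

[code]
# Kolmogorov length scale: eta = (nu^3 / epsilon)^(1/4)
#   nu      - kinematic viscosity              [m^2/s]
#   epsilon - kinetic energy dissipation rate  [m^2/s^3]
def kolmogorov_eta(nu, epsilon):
    return (nu**3 / epsilon) ** 0.25

# Water-like viscosity with a moderate dissipation rate:
print(kolmogorov_eta(1e-6, 1e-3))  # ~1.8e-4 m, i.e. a fraction of a millimetre
[/code]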

But since you brought it up. The original post of gibberish:
"Simply", it consists of manifolds as models for concept representation, in conjunction with policy π - a temporal difference learning paradigm representing distributions over eta.
has led to
1 September 2017 ProgrammingGodJordan: A lie about "distributions over eta" being in his thought curvature PDF.
There is no eta at all in the current PDF!
 
I replied to your post where you defined eta with a link to an irrelevant Wikipedia article with a definition used in fluid mechanics.
The Kolmogorov scale eta is a parameter of a fluid:
η = (ν³/ε)^(1/4)


But since you brought it up. The original post of gibberish:

has led to
1 September 2017 ProgrammingGodJordan: A lie about "distributions over eta" being in his thought curvature PDF.
There is no eta at all in the current PDF!

Eta (η) simply refers to the input space on which the thought curvature structure may absorb/evaluate.





A crazily formatted post leads to:
18 August 2017 ProgrammingGodJordan: A lie about what I wrote in a post.
I did not write 'any point in a supermanifold...is never euclidean' in my 29th March 2017 post.

Locally means a small region.
For others:
A point in a supermanifold has non-Euclidean components and so cannot be Euclidean.
Roger Penrose has a few pages on supermanifolds in 'The Road To Reality' and (N.B. from memory) gives the simplest example: the real numbers R with an anti-commuting generator ε, where εε = −εε, whence ε² = 0. For every a and b in R there is a corresponding a + εb. I visualize this as extending R into a very weird plane.
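(A minimal Python sketch of that example, under the assumption that Penrose's a + εb numbers multiply like the so-called dual numbers; the class name is mine, purely illustrative:)

[code]
# Numbers of the form a + eps*b, where the generator eps squares to zero
# (eps*eps = -eps*eps forces eps^2 = 0), so multiplication is:
# (a + eps*b)(c + eps*d) = ac + eps*(ad + bc)  -- the eps^2 term vanishes.
class Dual:
    def __init__(self, a, b):
        self.a, self.b = a, b

    def __add__(self, other):
        return Dual(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

    def __repr__(self):
        return f"{self.a} + eps*{self.b}"

eps = Dual(0, 1)
print(eps * eps)  # 0 + eps*0 -- the generator squares to zero, as described
[/code]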

18 August 2017 ProgrammingGodJordan: A fantasy that I did not know deep learning models could include or exclude pooling layers.
15 August 2017 ProgrammingGodJordan: Ignorant nonsense about Deepmind.

I already knew about their use in convolutional neural networks, so I went looking for their possible use by DeepMind.

18 August 2017 ProgrammingGodJordan: Repeated "atari q" gibberish when DeepMind is not an Atari machine and has no "q" (it does have Q-learning).

18 August 2017 ProgrammingGodJordan: "Supermanifold may encode as "essentially flat euclidean super space"" obsession again.
I translate that as ignorance about supermanifolds. It is a lie that I translate that ignorance into "supermanifolds are Euclidean", because you know that I know supermanifolds are not Euclidean.


Alright, you have demonstrated that you lack basic machine learning knowledge.


PART A
You had unavoidably mentioned that "the set of points in the neighborhood of any point in a supermanifold is never Euclidean."


PART B
My prior expression "Deepmind's Atari Q architecture" nowhere mentioned that Deepmind (a machine learning company) was an "atari machine".

Here are other typical presentations constituting Deepmind's Atari Q architecture:

(1) https://github.com/kuz/DeepMind-Atari-Deep-Q-Learner

(2) http://ikuz.eu/2015/02/27/google-deepmind-publishes-atari-q-learner-source-code/


PART C
You had long demonstrated that you lacked basic knowledge in machine learning.

WHY?
You had demonstrated that you hadn't known that deep learning models could include or exclude pooling layers.

RECALL:

[imgw=150]http://i.imgur.com/JYrZOW4.jpg[/imgw]


Into the introduction and:
15 August 2017 ProgrammingGodJordan: Ignorant nonsense about Deepmind.

You are demonstrably wrong, as you will see below.



ProgrammingGodJordan said:
Deepmind’s atari q architecture encompasses non-pooling convolutions
DeepMind is a "neural network that learns how to play video games in a fashion similar to that of humans". It can play several Atari games. It does not have an architecture related to those Atari games. What DeepMind does have is "a convolutional neural network, with a novel form of Q-learning".


What is the relevance of your line above?

Here is a more detailed, intuitive, mathematical description of mine regarding Deepmind's flavour of deep Q-learning (written in 2016):

https://www.quora.com/Artificial-In...p-Q-networks-DQN-work/answer/Jordan-Bennett-9
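(For readers who prefer code to prose, here is a minimal tabular sketch of the temporal-difference update at the heart of Q-learning; Deepmind's deep variant replaces the table with a convolutional network, and all names here are illustrative:)

[code]
from collections import defaultdict

# Tabular Q-learning update:
#   Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
alpha, gamma, n_actions = 0.1, 0.99, 4
Q = defaultdict(float)  # Q[(state, action)] -> estimated return

def q_update(s, a, r, s_next):
    td_target = r + gamma * max(Q[(s_next, b)] for b in range(n_actions))
    Q[(s, a)] += alpha * (td_target - Q[(s, a)])

# One illustrative transition: action 2 in state 0 yields reward 1.0, lands in state 1.
q_update(0, 2, 1.0, 1)
print(Q[(0, 2)])  # 0.1 after this first update
[/code]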




I have found one Google DeepMind paper about neural network architecture that explicitly includes pooling layers, though not as an implemented architecture element: Exploiting Cyclic Symmetry in Convolutional Neural Networks.

What is missing in the PDF is any reference for DeepMind.

(1)
My thought curvature paper is unavoidably valid in expressing that Deepmind did not use pooling layers in the Atari Q model. (See (2) below.)




(2)
Don't you know any machine learning?

Don't you know that convolutional layers can be in a model without pooling layers?
[imgw=150]http://i.imgur.com/PaUaBx9.png[/imgw]


WHY NO POOLING LAYERS (FOR THIS PARTICULAR SCENARIO)?

In particular, for example, pooling layers enable translation invariance, such that object detection can occur regardless of position in an image. This is why Deepmind left pooling out: the model must remain sensitive to changes in entities' positions per frame, so that it can reinforce itself by Q-updating.
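(To make this concrete, a minimal PyTorch sketch of an Atari-style Q-network; the layer sizes follow the published DQN paper, but this is an illustration rather than Deepmind's actual code. Note the strided convolutions and the complete absence of pooling layers:)

[code]
import torch
import torch.nn as nn

# Atari-style deep Q-network: strided convolutions only, no pooling layers,
# so positional information survives through to the Q-value head.
class AtariQNetwork(nn.Module):
    def __init__(self, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4),  # 4 stacked 84x84 frames in
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2),
            nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1),
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512),
            nn.ReLU(),
            nn.Linear(512, n_actions),  # one Q-value per action
        )

    def forward(self, x):
        return self.net(x)

# A dummy forward pass: 84x84 inputs shrink to a 7x7 feature map via strides alone.
q_values = AtariQNetwork(n_actions=4)(torch.zeros(1, 4, 84, 84))
print(q_values.shape)  # torch.Size([1, 4])
[/code]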


SOME RESOURCES TO HELP TO PURGE YOUR IGNORANCE:

(a) Deepmind's paper.

(b) If (a) is too abstruse, see this breakdown of why Atari Q left out pooling layers. (A clear explanation, similar to the 'WHY NO POOLING LAYERS (FOR THIS PARTICULAR SCENARIO)?' section above, and as has long been written in the thought curvature paper.)




FOOTNOTE:
It is no surprise that Deepmind used pooling in another framework. Pooling layers are used in deep learning all the time, and convolutions can either include or exclude pooling. (Deep learning basics.)








From prior threads, you had long demonstrated that you lack basic machine learning knowledge.

For example, you had demonstrated that you hadn't known that deep learning models could include or exclude pooling layers.
A reminder:

[imgw=150]http://i.imgur.com/JYrZOW4.jpg[/imgw]



FOOTNOTE:

Of course, even if one lacks official machine learning training (as you clearly demonstrate above), depending on one's field/area of research, one may still contribute.
However, this is not the case for you: all your claims of missing citations are invalid, as is demonstrated in the source.



My area of research is computer science, particularly Artificial Intelligence.

I am not trained in machine learning, university-wise, but I do research anyway.

Then if you have these questions and it's an area you either understand, or are thoroughly motivated to understand, why not answer them by experiment?

If you understand the topic well enough to do the experiments, then do them. If you don't, then you are arguing about a topic from a point of ignorance. Your time would be far better served by learning the topic sufficiently to answer the questions.
 
Then if you have these questions and it's an area you either understand, or are thoroughly motivated to understand, why not answer them by experiment?

If you understand the topic well enough to do the experiments, then do them. If you don't, then you are arguing about a topic from a point of ignorance. Your time would be far better served by learning the topic sufficiently to answer the questions.

There are particular limits that I currently aim to resolve:

(1) I don't have access to Google-level GPUs for the purpose of rapid experimentation.

(2) I don't have the depth of knowledge that a PhD pioneer like Yoshua Bengio possesses, especially given the nature of my university's sub-optimal AI course.



FOOTNOTE:
(i) Despite (2), it is not inconceivable that I can detect regimes that PhD-aligned machine learning people may miss.

For example (unlike state-of-the-art related works), I consider machine learning algebra as it relates to cognitive science. Bengio's works, especially concerning manifolds, do not (yet?) entirely compound cognitive science, for cognitive science entails supersymmetry/supermanifolds, which Bengio's work does not.

(ii) Likewise, state-of-the-art work, such as Deepmind's work on manifolds, does not (yet?) entail cognitive science in its entirety, although Deepmind tends to consider boundaries amidst cognitive science.


(iii) Regardless of (2), though, I have communicated with Bengio in order to compose the thought curvature paper.

As such, although thought curvature does not yet compound encodings that are experimentally observable, it does express valid machine-learning-aligned algebra on the horizon of empirical evidence, on which future work may occur.
EXAMPLES OF COMMUNICATIONS WITH BENGIO:


[imgw=150]http://i.imgur.com/x3RM20F.png[/imgw]








Yes, I do science, and science is true.

So, it can be said that my area of research, like that of many scientists, is "truth", for science is true.








I recently posted, in another of your bovine excrement threads (where you redefined "God" into a useless term), that "Science" had been redefined as "Medieval European Alchemy." I'm now expanding that definition to ALL threads in which you and I are participants.

Adjust your Dunning/Kruger discussion of AI and machine learning accordingly.

How do the Philosopher's Stone and the transmutation of metals fit into your model?
 
I recently posted, in another of your bovine excrement threads (where you redefined "God" into a useless term), that "Science" had been redefined as "Medieval European Alchemy." I'm now expanding that definition to ALL threads in which you and I are participants.

Adjust your Dunning/Kruger discussion of AI and machine learning accordingly.

How do the Philosopher's Stone and the transmutation of metals fit into your model?

I don't detect any sensible data amidst your response.



FOOTNOTE:
Curiously, how does Dunning/Kruger supposedly apply to a being (i.e. myself) who aims to acquire a lot more scientific data?





No, you don't. You have no understanding of what that word means.

Have you anything valid to express, beyond non-evidenced blather?


FOOTNOTE:
I am off to slumber, so I shan't yet have the opportunity to observe a valid response that you may later (or at all?) write here.




I don't detect any sensible data amidst your response.


Now you know how we feel reading your posts.


FOOTNOTE:
Curiously, how does Dunning/Kruger supposedly apply to a being (i.e. myself) who aims to acquire a lot more scientific data?


You have no idea what you're writing about. You don't understand any of the concepts you're onanizing on. You cover up your complete lack of comprehension with arrogance and poor writing but nobody is fooled.

You are accumulating data but not understanding any of it. You are comparable to an illiterate man with a massive library, bragging about how educated he is because of the massive library he cannot read.

You should be seeking understanding, not accumulating more buzzwords to throw into your word salads.
 
Now you know how we feel reading your posts.





You have no idea what you're writing about. You don't understand any of the concepts you're onanizing on. You cover up your complete lack of comprehension with arrogance and poor writing but nobody is fooled.

You are accumulating data but not understanding any of it. You are comparable to an illiterate man with a massive library, bragging about how educated he is because of the massive library he cannot read.

You should be seeking understanding, not accumulating more buzzwords to throw into your word salads.

Instead of blathering on, absent evidence, it is pertinent that you perhaps demonstrate how I supposedly fail to present valid data.





