
Super Artificial Intelligence, a naive approach

.......The terminology and content remind me strongly of that seminal paper by a student of Lobachevsky, "Analytic and Algebraic Topology of Locally Euclidean Metrizations of Infinitely Differentiable Riemannian Manifolds," as referenced by Lehrer (1953). PGJ's "homeomorphic transition sequence in some Euclidean superspace C(Rn)" appears to describe certain infinitely differentiable Riemannian manifolds, so the resemblance, if coincidental, is almost eerie.

Do you know, I was just thinking the exact same thing myself........ :D :p
 
It is slightly beyond the hypothesis stage.

Here is some source code:
https://github.com/JordanMicahBennett/God

This is great. You've copied the code verbatim from this guy's master's thesis, only adding a bunch of opaque comments, and every commit is commented "_commit_."

I don't think you actually know how to program.

Wow. How embarrassing. Did you really mean to post that?


Hey, it adds up to 203 lines of Matlab.

I am appropriately impressed.

The above code does not fall within the manifold interpretation of deep neural networks.

[qimg]http://i.imgur.com/r7kqGOX.png[/qimg]
OMG (no pun intended).

I always enjoy watching someone pretend to do mathematics.
 
Thank you. That makes a hell of a lot more sense of this thread.

From a less technical person, but not a total blithering idiot.

He missed a crucial part:

ProgrammingGodJordan said:
To make that analogy less silly, you could add that I presented real equations to modify that hypothetical vehicle.

This is typical science.
Scientific work isn't always implemented in full.

The math remains regardless.
 
I have not, but that might be my own limitations. In other words, there could be a deep subtle innovation described in the OP that I don't have the art to understand. I can say that if that is the case, it's not entirely my own fault, as whatever ideas are there are poorly communicated. It's as though a random English word were being substituted for every tenth word or so (in an already overly terse presentation); how else to explain phrasings such as:

(emphasis added)

There may indeed be a cogent, prompt, time-space complex optimal manifold construction paradigm, on the order of generic quantity priors/factors, but why are Kihyuk et al so saddened by this? That is not explained.

The terminology and content remind me strongly of that seminal paper by a student of Lobachevsky, "Analytic and Algebraic Topology of Locally Euclidean Metrizations of Infinitely Differentiable Riemannian Manifolds," as referenced by Lehrer (1953). PGJ's "homeomorphic transition sequence in some Euclidean superspace C(Rn)" appears to describe certain infinitely differentiable Riemannian manifolds, so the resemblance, if coincidental, is almost eerie.

(1)
I don't think it is fair for my name to be mentioned in the same breath as Riemann. By comparison to figures like those, my intellect is almost nil.





(2)
With respect to Kihyuk et al, the following might help, especially part D.


ProgrammingGodJordan said:
(B)
The super manifold hypothesis extends the manifold hypothesis in deep learning, to enable learning as fabrics that are more than mere points/differentiable manifold sequences.

A popular problem in typical interpretation/paradigm, is that to learn, models need to be able to transfer knowledge.

My equations may point to a paradigm where that knowledge at basis, is represented as causal laws of interactions of physics units. These units may then compose to form pseudo novel representations of the units, in general reinforcement learning.



(D)
The causal laws of physics are akin to Chris Lu's pseudo code, or something like the 'learning physics intuition from tower blocks' paper.

I first got the idea for super-m by observing DeepMind's Atari Q player (which removed pooling layers to enable translation variance) and the above physics learner (which included pooling, to enable translation invariance).

I wanted a way to reasonably have a model that included both of these properties at once, because humans are observed to both do reinforcement learning, and benefit from learnt causal laws of physics. (pertinently from the baby stage)
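To make the contrast in part (D) concrete for less technical readers: below is a minimal sketch of my own (a toy 1-D feature map in Python/NumPy, not PGJ's code and not an Atari or block-tower model) showing the property at stake. Max pooling makes the summarized output insensitive to a one-step translation, while the raw, unpooled map changes.

[code]
# Minimal illustration (not PGJ's model): pooling vs. no pooling.
# Max pooling gives translation INVARIANCE within a pool window;
# omitting pooling preserves translation VARIANCE (position information).
import numpy as np

def max_pool_1d(x, width=2):
    """Non-overlapping max pooling over a 1-D feature map."""
    trimmed = x[: len(x) - len(x) % width]
    return trimmed.reshape(-1, width).max(axis=1)

features = np.array([0, 0, 5, 0, 0, 0, 0, 0], dtype=float)
shifted = np.roll(features, 1)  # same pattern, moved one step to the right

print(max_pool_1d(features))              # [0. 5. 0. 0.]
print(max_pool_1d(shifted))               # [0. 5. 0. 0.]  -> pooled outputs agree
print(np.array_equal(features, shifted))  # False          -> raw maps differ
[/code]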
 
Hey, it adds up to 203 lines of Matlab.

I am appropriately impressed.


OMG (no pun intended).

I always enjoy watching someone pretend to do mathematics.

(1)
As mentioned here, and in the repository, that thing is pseudo code.

I don't dare claim to be good at math, but I have invented new math in the past, regarding Newtonian calculus.


(2)
Anyway, what is the problem you detect with my 'super-m' math in neural learning below?

ProgrammingGodJordan said:
[qimg]http://i.imgur.com/RA3GJle.png[/qimg]
 
Yes, but I'm asking why you posted what you did as evidence that what you've presented is more than a hypothesis when it is not.

It is precisely because the math appears to be compatible.

That is, I can detect that the components are feasibly composable.

If I could not at all detect whether the components were composable, then it would be appropriate to call it mere hypothesis.
 
Any being of average intelligence can probably see that real math occurs.

Try to refrain from silly analogies.

To make that analogy less silly, you could add that I presented real equations to modify that hypothetical vehicle.


I could add that, but it would not be truthful. I see strings of mathematical notation, which are descriptive of your conjectures. I see no mathematical operations being done or any results of such operations, though, so no mathematics.

And no equations. Not all strings of mathematical symbols are equations.
 
I could add that, but it would not be truthful. I see strings of mathematical notation, which are descriptive of your conjectures. I see no mathematical operations being done or any results of such operations, though, so no mathematics.

And no equations. Not all strings of mathematical symbols are equations.

That's odd.

The phi and transpose symbols, etc., indicate, in particular, functions and operations, while theta, etc., indicates the parameters on which those functions operate, in deep learning/mathematics.

There are many other operations under those few lines: tensor manipulations via norms, component analysis sequences, etc.
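For readers trying to follow this exchange, here is a minimal sketch (mine, illustrating only the standard deep-learning convention, not PGJ's "super-m" construction) of what such symbols conventionally denote: phi names a parameterized function, theta bundles its parameters, and the transpose appears in the affine map inside it.

[code]
# Standard convention only (not PGJ's equations): phi(x; theta) is a
# parameterized function; theta bundles its parameters (weights W, bias b);
# the transpose W^T appears in the affine map.
import numpy as np

def phi(x, theta):
    """phi(x; theta) = tanh(W^T x + b): one layer, nothing more."""
    W, b = theta
    return np.tanh(W.T @ x + b)

rng = np.random.default_rng(0)
theta = (rng.standard_normal((4, 3)), np.zeros(3))  # W is 4x3, b has length 3
x = rng.standard_normal(4)
print(phi(x, theta))  # a 3-dimensional output vector
[/code]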
 
It is precisely because the math appears to be compatible.

That is, I can detect that the components are feasibly composable.

If I could not at all detect whether the components were composable, then it would be appropriate to call it mere hypothesis.

Oh, okay, it's because you don't understand the meaning of the word "hypothesis". Carry on.
 
Oh, okay, it's because you don't understand the meaning of the word "hypothesis". Carry on.

A hypothesis means things done with incomplete evidence.
I detect that the mathematical strings persist, simply absent full implementation, in the equations.

Whether you can detect is another story (although it is detectable, given proper analysis by any average being)...
 
We're some 14 pages into this thread. Have you detected anything of substance in it at all?


To follow up — another unpromising sign is PGJ's habit of responding to a single word in a sentence, out of context, with a complete non-sequitur, much in the manner of an Eliza-type program. For example, I posted:

I'll be interested to see how you express "there might exist some set of transformations that when added to this learning algorithm would turn it into a better learning algorithm" as actual code. You haven't shown this yet, though.


His response, bizarrely, began with:

As far as science goes, life itself is a sequence of transformations.


The context made it clear that I was talking about the "homeomorphic transitions" referred to in his own oft-posted conjecture ("transforms" or "functions" would have carried the same meaning), yet his response was not about mathematics but "life is change" pop philosophy, just because the same word could be plugged in there.

Another example: a few posts ago, I referred very indirectly to Riemannian manifolds (not, note, any specific paper or achievement by Riemann). "Riemannian" is a well-known modifier indicating particular properties that Riemann originally defined. To use the word Riemannian in that sense is no more comparing anyone to Riemann than reporting that someone plugged in a 20-watt light bulb is comparing that person to James Watt. Yet he responded with an irrelevant and excessively self-deprecating declaration implying otherwise:

I don't think it is fair for my name to be mentioned in the same breath as Riemann. By comparison to figures like those, my intellect is almost nil.


Keying weird responses off of single words taken out of context is rarely a sign of an earnest attempt to communicate a sound but technically abstruse idea.
 
To follow up — another unpromising sign is PGJ's habit of responding to a single word in a sentence, out of context, with a complete non-sequitur, much in the manner of an Eliza-type program. ......

A number of times in PGJ's multiple and interchangeable threads posters have posited that we are getting answers from a bot. It's like there is a programme set to trigger whenever a key word is spotted, which fires the pre-written answer. Now and then, PGJ pops back into the thread to keep things real (it is suggested). It is difficult to know which is more incomprehensible: the out-of-context response you talk of, or the long-winded extravagantly-formatted word soup which we know and love.
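For what it's worth, the behaviour being described is trivial to sketch. Below is a toy keyword-trigger bot of my own (not a claim about how any actual poster operates), with its canned replies taken verbatim from responses quoted earlier in this thread.

[code]
# The mechanism the posters describe, in miniature: scan the input for a
# keyword and fire a pre-written reply, regardless of context (Eliza-style).
CANNED_REPLIES = {
    "transformations": "As far as science goes, life itself is a sequence of transformations.",
    "riemannian": "I don't think it is fair for my name to be mentioned in the same breath as Riemann.",
}

def keyword_bot(post: str) -> str:
    """Return the first canned reply whose keyword appears in the post."""
    for keyword, reply in CANNED_REPLIES.items():
        if keyword in post.lower():
            return reply
    return "The math remains regardless."

print(keyword_bot("How will you express those transformations as actual code?"))
[/code]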
 
That's odd.

The phi and transpose symbols, etc., indicate, in particular, functions and operations, while theta, etc., indicates the parameters on which those functions operate, in deep learning/mathematics.

There are many other operations under those few lines: tensor manipulations via norms, component analysis sequences, etc.


It's not at all odd. "Indicating" functions and operations etc. is not the same as reasoning about them or performing them. For instance, if I write:

2350!

…. I've indicated a series of 2,349 multiplication operations, but I have not posted an equation or done any mathematics.
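To make the distinction concrete, here is a minimal sketch (my illustration, not anything from the thread's repository) that actually performs the multiplications the notation "2350!" merely indicates.

[code]
# Writing "2350!" indicates a chain of multiplications; the loop below
# actually performs them (2,349 multiplications for k = 2 .. 2350).
import math

result = 1
for k in range(2, 2351):
    result *= k

print(result == math.factorial(2350))  # True: matches the library computation
print(len(str(result)))                # number of decimal digits in 2350!
[/code]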
 
(2)
Anyway, what is the problem you detect with my 'super-m' math in neural learning below?

ProgrammingGodJordan said:
[QIMG]http://i.imgur.com/RA3GJle.png[/QIMG]
I'd have written "problems", plural, but I'll respond with just one because you wrote the singular.

In your image, you wrote:
"...in some euclidean superspace C(Rn)..."
C is the set of infinitely differentiable functions, which is not usually regarded as a Euclidean space.

Perhaps you meant the topological space obtained by taking an infinite product of the complex numbers with the standard product topology, but that is not usually regarded as a Euclidean space either.

Then there's the question of what you might have meant by writing C(Rn), but (depending on what you meant by C) that might well be regarded as a second problem, so I won't bother to mention it or any other problems I may have detected.
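For reference, and assuming (as the reply above does) that the C in question denotes the space of infinitely differentiable functions, standardly written C^∞: the textbook definition below makes clear why it is not a Euclidean space, since it is an infinite-dimensional function space while Euclidean spaces are finite-dimensional.

[code]
% Textbook definitions (assuming C here means the smooth functions C^\infty):
\[
  C^{\infty}(\mathbb{R}^n)
  = \{\, f : \mathbb{R}^n \to \mathbb{R} \;\mid\;
      \partial^{\alpha} f \ \text{exists and is continuous for every multi-index } \alpha \,\}
\]
% This is an infinite-dimensional vector space, whereas a Euclidean space is
% finite-dimensional:
\[
  \dim_{\mathbb{R}} C^{\infty}(\mathbb{R}^n) = \infty ,
  \qquad
  \dim_{\mathbb{R}} \mathbb{R}^m = m < \infty .
\]
[/code]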
 
