Super Artificial Intelligence, a naive approach

...That is silly, especially when Christopher Lu's work is referenced in the first line of the repository's readme...

Repository.

1. A place where things may be put for safekeeping.
2. A warehouse.
3. A museum.
4. A burial vault; a tomb.
5. One that contains or is a store of something specified: "Bone marrow is also the repository for some leukemias and lymphomas" (Seth Rolbein).
6. One who is entrusted with secrets or confidential information.

1. a place or container in which things can be stored for safety
2. a place where things are kept for exhibition; museum
3. (Commerce) a place where commodities are kept before being sold; warehouse
4. a place of burial; sepulchre
5. a receptacle containing the relics of the dead
6. a person to whom a secret is entrusted; confidant

1. a receptacle or place where things are deposited, stored, or offered for sale.
2. an abundant source or supply.
3. a burial place; sepulcher.
4. a person to whom something is entrusted or confided.
 
I'll be interested to see how you express "there might exist some set of transformations that when added to this learning algorithm would turn it into a better learning algorithm" as actual code. You haven't shown this yet, though.
 

As far as science goes, life itself is a sequence of transformations.

Anyway, it is common enough in deep learning that deep neural nets may be observed to learn via a sequence of transformations (continuous bijections with continuous inverses, i.e. homeomorphisms), under certain constraints/topologies, such as differentiable manifolds.

Even before the manifold interpretation, transformations were commonly applied: every layer that the neural net learns is the result of an activation or transformation (e.g. sigmoid, hyperbolic tangent, etc.).
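
As a merely illustrative sketch of that view (plain NumPy, with made-up layer sizes; nothing here is any particular model), each function call below is one such transformation, and the net is their composition:

Code:
import numpy as np

def layer(x, W, b):
    # One layer of a deep net: an affine map followed by a nonlinear
    # activation (here the hyperbolic tangent).
    return np.tanh(W @ x + b)

rng = np.random.default_rng(0)
x = rng.normal(size=3)                        # an input point in R^3
W1, b1 = rng.normal(size=(3, 3)), rng.normal(size=3)
W2, b2 = rng.normal(size=(3, 3)), rng.normal(size=3)

# The network is a composition (sequence) of such transformations.
y = layer(layer(x, W1, b1), W2, b2)
print(y)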
 
(A)
Simply, it exists as a fundamental portion of the equations.

Some causal laws of physics are a part of the Supermanifold Hypothesis/Thought Curvature equations.


(B)
Lu's work shows that it is possible to query some mesoscale format in real time.


(C)
Also, DeepMind shows that large-scale reinforcement learning is possible.

[IMGw=300]http://i.imgur.com/TRoOnjY.jpg[/IMGw]

(D)
Combining (A), (B) and (C), it is perhaps observable that my fabric is possible/time-space complex optimal.

This has long been mentioned in the work presented.
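
For readers who want a concrete anchor for (C), here is a toy tabular Q-learning loop. It is the standard textbook update rule, not DeepMind's system; the environment and sizes are made up for illustration:

Code:
import numpy as np

# Toy tabular Q-learning on a 4-state chain, using the standard update:
# Q(s, a) += alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))
n_states, n_actions = 4, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.9

def step(s, a):
    # Deterministic toy environment: action 1 moves right, action 0 stays.
    s_next = min(s + a, n_states - 1)
    reward = 1.0 if s_next == n_states - 1 else 0.0
    return s_next, reward

rng = np.random.default_rng(0)
for _ in range(500):
    s = 0
    while s != n_states - 1:
        a = int(rng.integers(n_actions))     # explore uniformly at random
        s_next, r = step(s, a)
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q)  # values grow toward the rewarding end of the chain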

Why did you offer this as evidence if it is not?
 

Pay attention, in particular, to the second-to-last line from reply #520:

ProgrammingGodJordan said:
Combining (A), (B) and (C), it is perhaps observable that my fabric is possible/time-space complex optimal.

The existence of (A), (B) and (C) is not only strong evidence for the Supermanifold Hypothesis; they are also components in the equations.

The equations are not mythical; they comprise real structures/numbers.

In other words, the components appear to be mathematically compatible.
 
Albeit, it appears many beings here are unfamiliar with the scientific process...
No part of the "scientific process" involves copy+pasting code at random from the internet, uploading it under a project named "god," and claiming to understand its place in, well, anything, when you are too afraid to change a single line yourself.

Hey, how godly do you find this code I found elsewhere on the internet?

Code:
try
{
    Assert(Life.Real);   
    Assert(Life.Fantasy);
}
catch(LandSlideException ex)        
{
    #region Reality
    while(true)                     
    {
        character.Eyes.ForEach(eye => eye.Open().Orient(Direction.Sky).See());
        self.Wealth = null;
        self.Sex = Sex.Male;        
        self.Sympathies.Clear();  
                   
        if(self.ComeDifficulty == Difficulty.Easy      
                 && self.GoDifficulty == Difficulty.Easy
                        && self.High < 0.1                      
                              && self.Low < 0.1)                   
        {
 
            switch(wind.Direction)   
            {
                case Direction.North:
                case Direction.East:
                case Direction.South:
                case Direction.West:
                default:
                    self.Matter = false;    
                    piano.Play();          
                    break;
            }
        }
    }
    #endregion
}
 
No part of the "scientific process" involves copy+pasting code at random from the internet, uploading it under a project named "god," and claiming to understand its place in, well, anything, when you are too afraid to change a single line yourself.

Hey, how godly do you find this code I found elsewhere on the internet?

Code:
try
{
    Assert(Life.Real);   
    Assert(Life.Fantasy);
}
catch(LandSlideException ex)        
{
    #region Reality
    while(true)                     
    {
        character.Eyes.ForEach(eye => eye.Open().Orient(Direction.Sky).See(););   
        self.Wealth = null;
        self.Sex = Sex.Male;        
        self.Sympathies.Clear();  
                   
        if(self.ComeDifficulty == Difficulty.Easy      
                 && self.GoDifficulty == Difficulty.Easy
                        && self.High < 0.1                      
                              && self.Low < 0.1)                   
        {
 
            switch(wind.Direction)   
            {
                case Direction.North:
                case Direction.East:
                case Direction.South:
                case Direction.West:
                default:
                    self.Matter = false;    
                    piano.Play();          
                    break;
            }
        }
    }
    #endregion
}

Doesn't really matter to me.
 
Pay attention, in particular, to the second-to-last line from reply #520:

ProgrammingGodJordan said:
Combining (A), (B) and (C), it is perhaps observable that my fabric is possible/time-space complex optimal.

The existence of (A), (B) and (C) is not only strong evidence for the Supermanifold Hypothesis; they are also components in the equations.

The equations are not mythical; they comprise real structures/numbers.

In other words, the components appear to be mathematically compatible.

Yes, that's why something else might or might not be evidence, I'm asking why you posted what you did as evidence when it is not.
 
No part of the "scientific process" involves copy+pasting code at random from the internet, uploading it under a project named "god," and claiming to understand its place in, well, anything, when you are too afraid to change a single line yourself.

Hey, how godly do you find this code I found elsewhere on the internet?

Code:
try
{
    Assert(Life.Real);   
    Assert(Life.Fantasy);
}
catch(LandSlideException ex)        
{
    #region Reality
    while(true)                     
    {
        character.Eyes.ForEach(eye => eye.Open().Orient(Direction.Sky).See());
        self.Wealth = null;
        self.Sex = Sex.Male;        
        self.Sympathies.Clear();  
                   
        if(self.ComeDifficulty == Difficulty.Easy      
                 && self.GoDifficulty == Difficulty.Easy
                        && self.High < 0.1                      
                              && self.Low < 0.1)                   
        {
 
            switch(wind.Direction)   
            {
                case Direction.North:
                case Direction.East:
                case Direction.South:
                case Direction.West:
                default:
                    self.Matter = false;    
                    piano.Play();          
                    break;
            }
        }
    }
    #endregion
}

The above code does not fall within the manifold interpretation of deep neural networks.

[Image: r7kqGOX.png]


Anyway, that code of Chris Lu's is pseudocode.

Separately, that pseudocode describes a potential manifold in deep learning.

That manifold sequence fits into the real structure seen in the image above.
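
To make the "continuous bijection with continuous inverse" reading concrete, here is a small illustrative check in plain NumPy (names and sizes made up): a tanh layer with a square, invertible weight matrix can be inverted exactly, which is what allows a sequence of such layers to be read as a homeomorphic deformation of the data manifold.

Code:
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(3, 3))   # square and (generically) invertible
b = rng.normal(size=3)

def f(x):
    # One layer: affine map, then tanh. With W invertible this is a
    # continuous bijection with a continuous inverse (a homeomorphism).
    return np.tanh(W @ x + b)

def f_inv(y):
    # Exact inverse: undo tanh, then undo the affine map.
    return np.linalg.solve(W, np.arctanh(y) - b)

x = rng.normal(size=3)
print(np.allclose(f_inv(f(x)), x))   # True: the layer is invertible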
 
As far as science goes, life itself is a sequence of transformations.

Anyway, it is common enough in deep learning that deep neural nets may be observed to learn via a sequence of transformations (continuous bijections with continuous inverses, i.e. homeomorphisms), under certain constraints/topologies, such as differentiable manifolds.

Even before the manifold interpretation, transformations were commonly applied: every layer that the neural net learns is the result of an activation or transformation (e.g. sigmoid, hyperbolic tangent, etc.).


Yeah, that's one interpretation of how artificial neural networks work. What does your conjecture add? How would you construct a neural network differently to take advantage of it?

It looks to me like your conjecture amounts to, if you add more layers to a neural network, it can sort/search a higher dimensional space. But that's already known and applied. So what else would your new code do, if you actually bothered to write it?
 

(A)
I have worked with residual neural networks before, and they alone don't appear to be able to solve the transfer learning problem in reinforcement learning.



(B)
The super manifold hypothesis extends the manifold hypothesis in deep learning to enable learning as fabrics that are more than mere points/differentiable manifold sequences.

A popular problem in the typical interpretation/paradigm is that, to learn, models need to be able to transfer knowledge.

My equations may point to a paradigm where that knowledge is, at basis, represented as causal laws of interactions of physics units. These units may then compose to form pseudo-novel representations of the units, in general reinforcement learning.


(C)
This interaction of causal laws could be learnt, and then pseudo-novel abstractions could be learnt over those interactions, for general reinforcement learning.

So, the super-m hypothesis could yield a model that does reinforcement learning over learnt causal laws of physics, pertinently in a single model.



(D)
The causal laws of physics are akin to Chris Lu's pseudocode, or something like the 'learning physics intuition from tower blocks' paper.

I first got the idea for super-m by observing DeepMind's Atari Q-player (which removed pooling layers to enable translation variance) and the above physics learner (which included pooling, to enable translation invariance).

I wanted a way to reasonably have a model that includes both of these properties at once, because humans are observed both to do reinforcement learning and to benefit from learnt causal laws of physics (pertinently, from the baby stage).
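
The pooling point above can be illustrated in a few lines of plain NumPy (a toy 1-D signal and filter, not any particular model): the raw feature map changes when the input pattern shifts, while the pooled response does not.

Code:
import numpy as np

def conv1d(x, k):
    # Plain 1-D valid cross-correlation.
    n = len(x) - len(k) + 1
    return np.array([x[i:i + len(k)] @ k for i in range(n)])

x = np.zeros(12)
x[3:6] = [1.0, 2.0, 1.0]          # a small "object" in the signal
x_shifted = np.roll(x, 2)         # the same object, translated
k = np.array([1.0, 2.0, 1.0])     # a matched filter

fm, fm_s = conv1d(x, k), conv1d(x_shifted, k)
print(np.allclose(fm, fm_s))      # False: raw maps encode position (variance)
print(fm.max() == fm_s.max())     # True: max pooling discards position (invariance)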
 
(A)
I have worked with residual neural networks before, and they alone don't appear to be able to solve the transfer learning problem in reinforcement learning.


Neural networks alone aren't sufficient to solve that problem. Fine. What new functionality, what new/additional data structure or algorithm, do you propose to add to incorporate your conjecture and solve the problem?

You'd need to answer that question in order to write any relevant code.


(B)
The super manifold hypothesis extends the manifold hypothesis in deep learning to enable learning as fabrics that are more than mere points/differentiable manifold sequences.

A popular problem in the typical interpretation/paradigm is that, to learn, models need to be able to transfer knowledge.

My equations may point to a paradigm where that knowledge is, at basis, represented as causal laws of interactions of physics units. These units may then compose to form pseudo-novel representations of the units, in general reinforcement learning.

(C)
This interaction of causal laws could be learnt, and then pseudo-novel abstractions could be learnt over those interactions, for general reinforcement learning.

So, the super-m hypothesis could yield a model that does reinforcement learning over learnt causal laws of physics, pertinently in a single model.

(D)
The causal laws of physics are akin to Chris Lu's pseudocode, or something like the 'learning physics intuition from tower blocks'.


A hypothesis is just symbols on paper (or on a screen). It doesn't do anything unless it's implemented in some functional system. (Conceivably it could be proven mathematically, but a theorem also doesn't do anything unless it's implemented in some functional system, and an unproven conjecture can still be applied in a functional system if it works reliably, so the question of proof is not directly relevant.)

Let's see the model. If it's not more layers in a neural network, then what is it?
 
I already mentioned that the components are mathematically compatible.


Just so that others with less technical backgrounds can understand what's going on here, here's an analogy. PGJ claimed that he can make a car that flies. When critics expressed doubt, he said, "Here, take a look," and showed us an unmodified 2012 Kia Sorento with "With the right modifications this car could fly!" painted on the windshield. When those same critics pointed out that "his" "flying" car was built by someone else and cannot fly, he insisted that a working car is a major necessary component to a car-that-can-fly, so his claim is justifiable. With the above quote, he's further pointing out that if a car could be modified to fly, a 2012 Kia Sorento might possibly be an acceptable car to so modify.
 
Just so that others with less technical backgrounds can understand what's going on here, here's an analogy...

We're some 14 pages into this thread. Have you detected anything of substance in it at all?
 
Neural networks alone aren't sufficient to solve that problem. Fine. What new functionality, what new/additional data structure or algorithm, do you propose to add to incorporate your conjecture and solve the problem?

You'd need to answer that question in order to write any relevant code.



A hypothesis is just symbols on paper (or on a screen). It doesn't do anything unless it's implemented in some functional system. (Conceivably it could be proven mathematically, but a theorem also doesn't do anything unless it's implemented in some functional system, and an unproven conjecture can still be applied in a functional system if it works reliably, so the question of proof is not directly relevant.)

Let's see the model. If it's not more layers in a neural network, then what is it?

A typical hypothesis does not need to include math, but my super-m hypothesis does include a robust mathematical description in deep learning terms.
This is now being turned into code.


Anyway, although my IQ is not high, I tend to use what little I have to try to be creative/solve problems.

As I said, I may or may not have some Python code (probably MXNet) up on the repository later.

This may be a part of the last invention mankind need ever make, so one may guess that the code may not yet be finished.
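
For a sense of what a minimal starting point could look like, here is a purely illustrative MXNet (Gluon) scaffold for a small Q-network; every layer size and name is a placeholder, and none of this is the finished model:

Code:
# Purely illustrative: a minimal MXNet (Gluon) scaffold for a small
# Q-network. All sizes are placeholders, not the proposed model.
import mxnet as mx
from mxnet import nd
from mxnet.gluon import nn

net = nn.Sequential()
net.add(nn.Dense(64, activation='tanh'))  # hidden transformation
net.add(nn.Dense(4))                      # one output per action
net.initialize(mx.init.Xavier())

state = nd.random.normal(shape=(1, 8))    # a dummy 8-dimensional state
q_values = net(state)                     # forward pass
print(q_values)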
 
Just so that others with less technical backgrounds can understand what's going on here, here's an analogy. PGJ claimed that he can make a car that flies. When critics expressed doubt, he said, "Here, take a look," and showed us an unmodified 2012 Kia Sorento with "With the right modifications this car could fly!" painted on the windshield. When those same critics pointed out that "his" "flying" car was built by someone else and cannot fly, he insisted that a working car is a major necessary component to a car-that-can-fly, so his claim is justifiable. With the above quote, he's further pointing out that if a car could be modified to fly, a 2012 Kia Sorento might possibly be an acceptable car to so modify.


Any being of average intelligence can probably see that real math occurs.

Try to refrain from silly analogies.

To make that analogy less silly, you could add that I presented real equations to modify that hypothetical vehicle.
 
Just so that others with less technical backgrounds can understand what's going on here, here's an analogy. PGJ claimed that he can make a car that flies. When critics expressed doubt, he said, "Here, take a look," and showed us an unmodified 2012 Kia Sorento with "With the right modifications this car could fly!" painted on the windshield. When those same critics pointed out that "his" "flying" car was built by someone else and cannot fly, he insisted that a working car is a major necessary component to a car-that-can-fly, so his claim is justifiable. With the above quote, he's further pointing out that if a car could be modified to fly, a 2012 Kia Sorento might possibly be an acceptable car to so modify.

Thank you. That makes a hell of a lot more sense of this thread.

From a less technical person, but not a total blithering idiot.
 
We're some 14 pages into this thread. Have you detected anything of substance in it at all?


I have not, but that might be due to my own limitations. In other words, there could be a deep, subtle innovation described in the OP that I don't have the art to understand. I can say that if that is the case, it's not entirely my own fault, as whatever ideas are there are poorly communicated. It's as though a random English word were being substituted for every tenth word or so (in an already overly terse presentation); how else to explain phrasings such as:

Separately, Kihyuk et al, laments a cogent, prompt, time-space complex optimal manifold construction paradigm, on the order of generic quantity priors/factors.
(emphasis added)

There may indeed be a cogent, prompt, time-space complex optimal manifold construction paradigm, on the order of generic quantity priors/factors, but why are Kihyuk et al so saddened by this? That is not explained.

The terminology and content remind me strongly of that seminal paper by a student of Lobachevsky, "Analytic and Algebraic Topology of Locally Euclidean Metrizations of Infinitely Differentiable Riemannian Manifolds," as referenced by Lehrer (1953). PGJ's "homeomorphic transition sequence in some Euclidean superspace C(R^n)" appears to describe certain infinitely differentiable Riemannian manifolds, so the resemblance, if coincidental, is almost eerie.
 