• Quick note - the problem with YouTube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you still see problems, let me know.

Super Artificial Intelligence, a naive approach

(i) Irrelevant opinion.
(ii) Argument by YouTube video is usually invalid.
(iii) Brain speed is not "optimization".
(iv) Computer speed is not artificial intelligence.
(v) A badly (madly?) titled PDF on the internet is dubious, especially on a site that you have to register to download the PDF. "Causal Neural Paradox (Thought Curvature): Aptly, the transient, naive hypothesis" is nonsense.

The idea seems to be that increases in computer speed (e.g. quantum computing) will magically lead to something called "super artificial intelligence". Aided by an IBM chip?

The above is heavily garbage bound.


(1)
Thought curvature concerns manifolds.
You may have been unaware, but manifolds are a potential way to address severe discrimination (classification) issues in modern machine learning.

Each entity in the problem space is observable as a detangleable manifold.

(2)
I didn't say that computer speed was the only factor.

Instead, I said that computer speed was core to the success of modern machine learning.



(3)
Now, the human brain is efficient at general intelligence, compared to other brain-like constructs.

This efficiency has something to do with how many cycles per frame the underlying material can sustain.



(4)
Machines currently perform about 10^14 artificial synaptic operations per second.

As these machines got more powerful, they performed more and more cognitive tasks, some exceeding human performance.
It doesn't take a genius to see that, given continued advances in power and structure, artificial human-level intelligence is inevitable.
 
The above is heavily garbage bound.
Calling the real world garbage is not wise. In the real world the OP contains:
(i) Irrelevant opinion.
(ii) Argument by YouTube video is usually invalid.
(iii) Brain speed is not "optimization".
(iv) Computer speed is not artificial intelligence.
(v) A badly (madly?) titled PDF on the internet is dubious, especially on a site that you have to register to download the PDF. "Causal Neural Paradox (Thought Curvature): Aptly, the transient, naive hypothesis" is nonsense.

A point I am trying to make is that you have not yet described your "naive approach" after a couple of days and 345 posts.
 
Calling the real world garbage is not wise. In the real world the OP contains:


A point I am trying to make is that you have not yet described your "naive approach" after a couple of days and 345 posts.

That does not cause the GitHub links' contents to suddenly become non-science.

The description of the naive approach is contained in the paper/code, from the original post.
 
It is standard in science and common in real life that a calculation on a range of values does not use the extremes or values outside of that range. The reasonable value to use is a value in the middle of the range. That is usually the average or median value.

You gave no source for "roughly 10^16 to 10^18 synaptic operations per second", so a reasonable value would be 10^17 synaptic operations per second.
Starting from 10^15 synaptic operations per second (10^14 synapses × 10 impulses per second) and doubling every 2 years, reaching the lower limit of the range that you gave takes a little over 8 years, i.e. at least 2025.
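The doubling arithmetic can be sketched as follows. This is a minimal sketch, assuming a starting rate of 10^15 synaptic operations per second (10^14 synapses × 10 impulses per second, per the figures in this thread), a 2017 start year, and one doubling every 2 years; none of these figures are from a verified source.

```python
# Minimal sketch of the Moore's-law doubling argument.
# Assumptions (from the thread, not verified): 10^15 synaptic ops/s today,
# one doubling every 2 years, target = 10^16 ops/s, start year 2017.
def years_to_reach(current, target, doubling_period=2):
    """Years of repeated doubling until `current` meets `target`."""
    years = 0
    while current < target:
        current *= 2
        years += doubling_period
    return years

start_year = 2017        # assumed date of the discussion
current_sops = 10**15    # 10^14 synapses x 10 impulses per second
lower_limit = 10**16     # lower end of the quoted 10^16..10^18 range

years = years_to_reach(current_sops, lower_limit)
print(years, start_year + years)  # 8 2025
```

Under these assumptions the fourth doubling (8 years, i.e. 2025) is the first one to meet the lower limit, which is why 2020 falls short.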

Moore's law may be running up against physical and economic constraints - some experts think that the rate of increase is decreasing.

(A)
Based on my original post's source, you can find an estimate of 10^15 synapses.
At 10^15 synapses and 10 impulses per second, that yields 10^16 synaptic operations per second (sops).
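As a quick arithmetic check of the figure in (A), assuming the quoted values of 10^15 synapses and an average of 10 impulses per second (both are the thread's assumptions, not measured values):

```python
# Synaptic operations per second = synapses x impulses per second.
# Both figures are assumptions taken from the discussion above.
synapses = 10**15
impulses_per_second = 10
sops = synapses * impulses_per_second
print(sops == 10**16)  # True
```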


(B)
I had already discussed that Moore's law is slowing.

Moore's law ends soon after 2020 (hence the 2020 calculation was pertinent).

Regardless, any rate of improvement in machines will probably eventually yield artificial human-level intelligence.
 
Read the OP: If mankind isn't erased (via some catastrophe), on the horizon of Moore's Law, mankind will probably create machines, with human-level brain power (and relevantly, human-like efficiency), by at least 2020.

Based on your numbers and Moore's law, that claim is invalid. If Moore's law slows, then the claim is even more wrong.
Though you may want to clarify what "on the horizon of Moore's Law" means. It sounds like you expect the law to stop in 2020 and the number of transistors per chip to remain constant forever :).
 
Manifolds are a wonderful mathematical concept I first came across when learning General Relativity, and I do know about their use in machine learning.

That's fine.

...but as I mentioned, manifold methods are common enough in machine learning.

Basically, each sample x ~ P may be viewed as lying on some manifold.

The problem then becomes the separation of manifolds, such that unique solutions are discoverable for some dataset/input range, where each matrix representation is a bijective inverse of some following representation, on some continuous sequence of functions.

In other words, we have a detangling problem.

This is at least the sketch at the manifold level.
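The "de-tangling" idea can be illustrated with a generic toy example (my own sketch, not the paper's method): two classes that are entangled in the plane, concentric circles, become linearly separable after a simple nonlinear change of coordinates.

```python
import math
import random

random.seed(0)

# Two entangled classes: points on concentric circles of radius 1 and 3.
points = [(r * math.cos(a), r * math.sin(a), label)
          for label, r in ((0, 1.0), (1, 3.0))
          for a in [random.uniform(0, 2 * math.pi) for _ in range(50)]]

# Nonlinear feature map: squared radius. In this coordinate a single
# threshold (a linear boundary) separates the two manifolds.
def feature(x, y):
    return x * x + y * y

threshold = 4.0  # anywhere between 1^2 and 3^2 works
errors = sum((feature(x, y) > threshold) != bool(label)
             for x, y, label in points)
print(errors)  # 0 -> perfectly separable after the map
```

In the raw (x, y) coordinates no straight line separates the circles; after the map they are trivially separable, which is one concrete reading of "de-tangling" manifolds.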
 

I did the calculations over some range a × 10^15 synapses, with Moore's law at 2 years per doubling cycle.

So, the calculation holds while Moore's law is still alive.

Regardless, superintelligence is probably inevitable, given any rate of improvement in speed and/or programming/architecture.
 
So, the calculation holds while Moore's law is still alive.
The actual calculation shows that it is impossible to achieve your claim of roughly 10^16 to 10^18 synaptic operations per second by 2020.
The OP links to a 2014 IBM SyNAPSE chip with "only" 256 million synapses.
A claim of 10^15 synaptic operations a second (10^14 synapses × 10 impulses) gets to the lower limit of the range that you gave in a little over 8 years (doubling every 2 years), i.e. at least 2025. But your source for this is not a currently working chip - it is a simulation on a massively parallel supercomputer.

The rest of the post may need a Duh! because it is just about inevitable that we will have "a brain in a box" sometime - maybe within the next few decades.
 
The actual calculation shows that it is impossible to achieve your claim of roughly 10^16 to 10^18 synaptic operations per second by 2020.

The rest of the post may need a Duh! because it is just about inevitable that we will have "a brain in a box" sometime - maybe within the next few decades.

I don't see how 10^14 synapses for the current machine level, and human 10^15 synapses = 10^16 sops, fail to yield the year 2020, applying H = K * 2^n.
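Reading "H = K * 2^n" as n Moore's-law doublings (one per 2 years) applied to a base rate K, the required n can be checked directly. The figures below (K = 10^15 sops for machines, H = 10^16 sops for the human estimate) are my reading of the thread's numbers, not established values:

```python
import math

# H = K * 2^n  =>  n = log2(H / K) doublings, at 2 years per doubling.
K = 10**15   # assumed machine rate: 10^14 synapses x 10 impulses/s
H = 10**16   # assumed human rate from the thread

n = math.log2(H / K)   # doublings required
years = 2 * n          # years at one doubling per 2 years
print(round(n, 2), round(years, 1))  # 3.32 6.6
```

So about 3.3 doublings (roughly 6.6 years) separate the two rates under these assumptions; whether that lands on 2020 depends entirely on the assumed start year.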
 
A basic sketch stating what is known does not create super artificial intelligence.
ETA: Even a cartoon does not create super artificial intelligence :D.

Here is a slightly better description, in loose machine-learning terms:

  • Points maintain homeomorphisms, such that for any point p under a transition T on some transformation/translation (pertinently a continuous, inverse function) t, p0 (p before T) is a bijective inverse of p1 (p after T), on t.

  • Following the above, topologies maintain homeomorphisms: for any collection of points W (e.g. a matrix of weights) under some transition T on some transformation/translation sequence (pertinently continuous, inverse functions) s, W0 (W before T) is a bijective inverse of W1 (W after T), on s, where for any representation of W, determinants are non-zero.


  • Now, topological homeomorphisms are maintained until linear separation/de-tangling, if and only if the neural network dimension is sufficient (3 hidden units at minimum, for a 2-dimensional W).

    Otherwise, after maintaining homeomorphism at some point while having insufficient dimension, or insufficient neuron firing per data unit, in non-ambient-isotopic topologies that satisfy NOTE(ii), W shall eventually yield a zero determinant, thus preventing linear separation/de-tangling. At zero determinant, unique solutions for scalar multiplications dissolve, as the matrix becomes non-continuous, or non-invertible.

NOTE(i): The state of being "ENTANGLED" is the point before which some de-tangleable classes are de-tangled/made linearly separable.

NOTE(ii): Unique solutions in matrices are outcomes that resemble DATA SETS; for homeomorphisms (topologies: where non-zero-determinant continuous invertible transformations/translations engender, OR ambient isotopies: where positive/nonsingular determinants, neuron permutations, and a 1 hidden unit minimum occur, i.e. for a 1-dimensional manifold, 4 dimensions are required in the network)
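The determinant condition in the notes can be made concrete with a minimal, generic linear-algebra sketch (my own illustration, not taken from the paper): a square weight matrix W defines a bijective linear map, and hence admits unique solutions, exactly when det(W) != 0.

```python
# 2x2 illustration of the determinant / invertibility condition.
def det2(W):
    return W[0][0] * W[1][1] - W[0][1] * W[1][0]

def inverse2(W):
    d = det2(W)
    if d == 0:
        # Zero determinant: the map collapses a dimension, so it is not
        # bijective and unique solutions "dissolve".
        raise ValueError("singular matrix: not invertible")
    return [[ W[1][1] / d, -W[0][1] / d],
            [-W[1][0] / d,  W[0][0] / d]]

W_good = [[2.0, 1.0], [1.0, 1.0]]  # det = 1 -> bijective
W_bad  = [[1.0, 2.0], [2.0, 4.0]]  # det = 0 -> rows dependent, not bijective
print(det2(W_good), det2(W_bad))   # 1.0 0.0
```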


 
I don't see how 10^14 synapses for current machine level, ...
I added some text to the posts you replied to but I will emphasize this:
10^14 synapses for the current machine level is wrong because no current machine has 10^14 synapses.

Even with this imaginary 10^14 synapse machine, you fail to get to 2020.
10^14 * 10 = 10^15 synaptic operations per second.
Double this to get to 2019: 2 * 10^15 synaptic operations per second.
Double this to get to 2021: 4 * 10^15 synaptic operations per second.
That is less than half of the lower limit of your claim. So the claim is debunked even if we ignore the reasonable interpretation of "roughly" + a range as meaning "somewhere in that range" :eek:!

Even worse, the most likely value in a range of values is in the middle of the range, thus:
It is standard in science and common in real life that a calculation on a range of values does not use the extremes or values outside of that range
 
Here is slightly better description,...:
Random highlighting does not make a better description of anything. What might be a cut and paste from a textbook is a waste of space - link to the source.

ETA: looks more like a cut and paste from here, which hints at out-of-context mathematical word salad from an amateur.
 
