
Super Artificial Intelligence, a naive approach

(i)
Life's meaning probably occurs on the horizon of optimization:

(source: MIT physicist Jeremy England proposes new meaning of life)




(ii)
Today, artificial intelligence exceeds mankind at many human cognitive tasks:

(source: Can we build AI without losing control over it?)

(source: The wonderful and terrifying implications of computers that can learn)





(iii)
The creation of general artificial intelligence is so far, mankind's largely pertinent task, and this involves (i), i.e. optimization.

The human brain computes roughly 10^16 to 10^18 synaptic operations per second.




(iv)
Mankind has already created brain-based models that achieve 10^14 of the above total in (iii).

If mankind isn't erased (via some catastrophe), then on the horizon of Moore's Law, mankind will probably create machines with human-level brain power (and, relevantly, human-like efficiency) by roughly 2020.
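The gap between (iii) and (iv) can be put into numbers. A minimal back-of-envelope sketch (the two-year doubling period and the `years_to_reach` helper are my own illustrative assumptions, not figures from the post):

```python
import math

# Figures quoted in the thread:
current_ops = 1e14                   # claimed throughput of existing brain-based models
brain_low, brain_high = 1e16, 1e18   # estimated human-brain range, synaptic ops/sec

# Assumption (mine, not the poster's): a classic two-year doubling period.
doubling_years = 2.0

def years_to_reach(target, start=current_ops, period=doubling_years):
    """Years of steady exponential doubling needed for `start` to reach `target`."""
    doublings = math.log2(target / start)
    return doublings * period

# 1e14 -> 1e16 is a factor of 100 (~6.6 doublings); 1e14 -> 1e18 is 10,000 (~13.3).
print(f"to 1e16 ops/s: ~{years_to_reach(brain_low):.0f} years")   # ~13 years
print(f"to 1e18 ops/s: ~{years_to_reach(brain_high):.0f} years")  # ~27 years
```

Under these assumptions, closing even the smaller (100x) gap takes on the order of a decade of doublings, which is worth keeping in mind alongside the "by roughly 2020" claim.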




(v)
Using clues from quantum mechanics and modern machine learning, I have composed (and am still composing) a naive fabric, in aims of absorbing some non-trivial basis of intelligence.

Paper + Starting Code (rudimentary): "thought curvature"




(vi)
Criticism is welcome/needed.
 
Sorry to bother you, but are you the inventor of non beliefism!? Can I get a photo with you?
 
ProgrammingGodJordan said:
........
The human brain computes roughly 10^16 to 10^18 synaptic operations per second. ......

Mankind has already created brain based models that achieve 10^14 of the above total..........
So, one ten thousandth the number. Tear up everything we know and re-write the dictionary.

Could you tell us what the error is, with the figures you highlighted?
 
I already did. You are mis-using "pertinent".

You highlighted 10^14 and 10^18 before adding the pertinent comment.

Why did you highlight those figures?

What did you mean by "one ten thousandth the number"? (a comment you made under the figures)

So, what errors do you find in the figures you highlighted and criticized in reply #8?
 
DeepMind's Atari Q architecture encompasses non-pooling convolutions, therein generating object-shift sensitivity, whence the model maximizes some reward over said shifts together with separate changing states for each sampled state t; translation non-invariance. Separately, UETorch encodes an object-trajectory-behaviour physics learner, particularly on pooling layers; translation invariance.

It is non-abstrusely observable that the childhood neocortical framework pre-encodes certain causal physical laws in the neurons (Stahl et al.), amalgamating perceptual-learning abstractions into non-childhood.

As such, it is perhaps exigent that the non-invariant fabric compose with the invariant, therein engendering a time-space-complexity-optimal, causal, conscious artificial construction.

If this confluence is reasonable, is it paradoxical?
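The one concrete claim in the post above, that pooling layers confer translation invariance while non-pooling convolutions remain shift-sensitive, can be illustrated directly. A minimal pure-Python sketch (the toy 1-D `max_pool` helper and the signals are my own illustration, not DeepMind's or UETorch's code): a feature shifted within a pooling window changes the raw map but leaves the pooled map unchanged.

```python
def max_pool(xs, width=2):
    """Non-overlapping 1-D max pooling over windows of `width`."""
    return [max(xs[i:i + width]) for i in range(0, len(xs), width)]

signal  = [0, 0, 5, 0, 0, 0, 0, 0]   # a single activated feature
shifted = [0, 0, 0, 5, 0, 0, 0, 0]   # same feature shifted by one position

print(signal != shifted)                       # True: raw maps differ under the shift
print(max_pool(signal) == max_pool(shifted))   # True: pooled maps coincide
```

Note the invariance is only to shifts that stay inside one pooling window; a shift across a window boundary still changes the pooled map, which is why pooling gives translation *tolerance* rather than full invariance.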

A genuine question: Was this written by AI code? The reason I ask is that several years ago I created a module that generated prose very similar to what we see here. Of course, it was all nonsense, but it was grammatically correct and thus appeared impressive to the casual viewer.
 
A genuine question: Was this written by AI code? The reason I ask is that several years ago I created a module that generated prose very similar to what we see here. Of course, it was all nonsense, but it was grammatically correct and thus appeared impressive to the casual viewer.

I wrote the paper.

Some related code, however crude, exists in relation to the paper.

The topics discussed are probably common fare for undergraduate machine-learning students.
 
