
Super Artificial Intelligence, a naive approach

For example, I didn't apply 10^x synapses on one side and 10^x SOPS (synaptic operations per second) on the other side of the Moore's law equation; I applied synapse values to both sides, as seen in earlier pages of this thread.

Two problems:
The 10 signals per second for humans is an unsubstantiated guess.
The speed for human brains does not have to apply to a computer chip or simulation.
In fact, it doesn't:
....yes, not least because the simulation was "only" running 1542 times slower than real time :rolleyes:
 
(1)

Two problems:
The 10 signals per second for humans is an unsubstantiated guess.

A source was given long ago.

Principles of Neural Science, by Eric R. Kandel and James H. Schwartz, 2nd edition, Elsevier, 1985.

In other words, it is an estimate made by neuroscientists, unlike the estimates made by others on this forum.





(2)

Two problems:
The speed for human brains does not have to apply to a computer chip or simulation.
In fact, it doesn't:

We can compute how much slower a neural chip is in comparison to the brain, since those chips perform cognitive tasks (some of which they perform at levels similar to, or exceeding, human performance).

In other words, contrary to your contradictory statements above ("it doesn't have to" indicates that it may, yet you then essentially wrote that it doesn't at all, i.e. "in fact, it doesn't"), we can literally compare a computer chip's speed with the speed of the human brain.
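
To make that comparison concrete, here is a minimal sketch of how such a slowdown factor can be computed; the 1542x figure is the one quoted elsewhere in this thread, and the function name and example values are purely illustrative.

```python
# Minimal sketch: how a "times slower than real time" figure is obtained.
# The 1542x example value is the figure quoted elsewhere in this thread.

def slowdown_factor(wall_clock_seconds, simulated_biological_seconds):
    """How many times slower than real time the simulation ran."""
    return wall_clock_seconds / simulated_biological_seconds

# e.g. 1542 s of wall-clock time to simulate 1 s of brain activity:
print(slowdown_factor(1542.0, 1.0))  # -> 1542.0
```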

Where do you get the data from, regarding the highlighted sequence above?
 
........
A source was given long ago.

Principles of Neural Science, by Eric R. Kandel and James H. Schwartz, 2nd edition, Elsevier, 1985.

In other words, it is an estimate made by neuroscientists, unlike the estimates made by others on this forum...........

That source actually uses the word "guess".
 
........Where do you get the data from, regarding the highlighted sequence above?

Sorry, but it was your claim in the first place (that the speed of a human brain does apply to a computer chip), so the onus is on you to back this up. You made the positive claim, so you don't get to demand that those saying "you haven't demonstrated that" prove you wrong.
 
Sorry, but it was your claim in the first place (that the speed of a human brain does apply to a computer chip), so the onus is on you to back this up. You made the positive claim, so you don't get to demand that those saying "you haven't demonstrated that" prove you wrong.

Let us break it down:

(1)
I provided sources that describe simulations of synapses.

For example, in the source given, IBM did not simulate ~10^14 apples.

They simulated ~10^14 synapses.


(2)
Now, synapses are applicable to the human brain; the human brain has a number of synapses.

IBM also simulates a number of synapses.

Also, anyone (except some here on this forum) may observe that operations take place in the brain as time passes. So there are synaptic operations per unit of time, related to the synapse numbers above (i.e. speed).

If you combine this with the fact that IBM's machines are doing cognitive tasks, you may finally observe that these machines are comparable.
 
Cognitive tasks at about 1/1500th the speed of an actual human brain. Which casts doubt on your claims that we'll build human level AI by 2020.
 
We can compute how much slower a neural chip is in comparison to the brain, since those chips perform cognitive tasks (some of which they perform at levels similar to, or exceeding, human performance).

You appear to be assuming that the simulations exceed human capability in all aspects of performance. There are many dimensions to performance: accuracy, speed, power consumption, etc.
 
You appear to be assuming that the simulations exceed human capability in all aspects of performance. There are many dimensions to performance: accuracy, speed, power consumption, etc.

Your assumption was invalid, and rather silly.
As stated in IBM's paper, the simulation is not real time.
So the machine brain already falls short in one dimension.

I already mentioned this earlier.
 
Cognitive tasks at about 1/1500th the speed of an actual human brain. Which casts doubt on your claims that we'll build human level AI by 2020.

These predictions sit at the boundary of the available evidence, including technological advancement. (We already see these cognitive machines getting better with time, and they are already causing job displacement, with probably more to come.)

As mentioned in the original post, the prediction is at least 2020, so it may be later.
 
Merely an example.

There are other neuromorphic chips.

The Tianhe-3 may not be as efficient, in size or power consumption, as these neuromorphic chips.

For example, if I recall correctly, when simulating 240 trillion synapses, IBM's SyNAPSE system consumes merely 82.19 kW (compared to human consumption of ~20 W, though still 1542x slower than the human brain), while supercomputers like the K computer, in a similar year range, simulated a mere 10 trillion synapses at 9.9 MW.

Like the K computer, Tianhe-2 ran in the megawatt range, at 17.6 megawatts, although in the quadrillion range, above IBM's trillion range.

This means that architectures like IBM's are more efficient, especially the later versions at 70 milliwatts, capable of 46 billion synaptic operations per second.
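
For what it's worth, here is a minimal sketch that turns the figures quoted above into a rough synapses-per-watt comparison; all the inputs are the "if I recall correctly" numbers from this post, so treat them as illustrative assumptions rather than measured values.

```python
# Rough efficiency comparison using the figures quoted above (all of them
# "if I recall correctly" numbers, so purely illustrative).
systems = {
    "IBM SyNAPSE simulation": {"synapses": 240e12, "power_watts": 82.19e3},
    "K computer simulation":  {"synapses": 10e12,  "power_watts": 9.9e6},
    "Human brain":            {"synapses": 1e14,   "power_watts": 20.0},
}

for name, s in systems.items():
    print(f"{name}: {s['synapses'] / s['power_watts']:.2e} synapses per watt")

# The later-generation chip figure above is quoted in operations, not synapses:
print(f"Later chip: {46e9 / 70e-3:.2e} synaptic operations per second per watt")
```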

70 mW for 46 billion SOPS is quite impressive; are you sure you're not missing an important bit of info from that link?
 
How silly.

A guess is an estimate.

http://www.thesaurus.com/browse/guess

Such ironic foolishness is typical/expected of you.

From your link, so is a shot in the dark.

And a stab. Did you murder the evidence?

One may select which synonym one aims to use.
I had already specified the synonym set.

I would mention context once again, but as you like synonyms I suppose I can use "ambience" (http://www.thesaurus.com/browse/context?s=t), which would make my sentence as nonsensical as yours. Although all the synonyms in this case seem fitting. (http://www.thesaurus.com/browse/nonsensical)
 
You know, normally I'd weigh in on a thread like this, since estimates of simulated neural activity are a hobby of mine, but honestly I don't know what the hell PGJ is talking about either.

If you want some upper-bound numbers:
The cost of brute-forcing neural computation by running a compartment-level simulation (a la the NEURON simulation environment) is about 4 teraflops (10^12) per real-time neuron.
The human brain has about 100 billion (10^11) such neurons.
So to emulate a human brain in real time will take about a yottaflop (10^24) of computational capacity.

The world's best supercomputer clocks in at 100 petaflops (10^17), a factor of 10^7 too slow.
Computers have traditionally increased by an order of magnitude every four years (although there are signs that that may be slowing).
Seven generations at four years per is 28 years from now.
So assuming things hold, we ought to be able to brute force humanlike machine intelligence by 2045.
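
A minimal sketch that just reproduces the arithmetic above (the constants are the ones stated in this post; the four-years-per-order-of-magnitude growth rate is the post's own assumption):

```python
import math

# Reproduce the back-of-the-envelope upper bound described above.
flops_per_neuron = 4e12        # ~4 teraflops per compartment-level neuron
neurons = 1e11                 # ~100 billion neurons in a human brain
required = flops_per_neuron * neurons       # ~4e23 FLOPS, i.e. roughly a yottaflop

best_today = 1e17              # ~100 petaflops for the best current supercomputer
shortfall = required / best_today           # ~4e6, rounded up to ~10^7 in the post

years_per_order_of_magnitude = 4            # the post's growth assumption
years = math.log10(shortfall) * years_per_order_of_magnitude
print(f"shortfall: {shortfall:.1e}x, roughly {years:.0f} more years")  # ~26-28 years
```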

Now, keep in mind this is an upper bound. Each neuron certainly isn't performing 4 teraflops of information processing. It takes that much to properly simulate a neuron because the electrotonic cable properties boil down to some simple but exhausting differential equations which just have to be ground through numerically. In all but a few niche cases you can replace it with a simple delay to represent action potential propagation, and the same goes for other, similar shortcuts, but since we don't know all of the niche cases, we can't yet estimate how much cheaper we can make it.
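
For illustration only: the following is not the multi-compartment cable model the post describes, but a single leaky integrate-and-fire compartment stepped with forward Euler, just to show the shape of "grinding through the differential equation numerically"; all constants are generic textbook-style values, not taken from any particular simulation.

```python
# A single leaky integrate-and-fire compartment, stepped with forward Euler.
# (Far simpler than the cable-equation models discussed above; values are generic.)
dt = 1e-4                # time step: 0.1 ms
tau_m = 0.020            # membrane time constant: 20 ms
v_rest, v_thresh, v_reset = -0.070, -0.054, -0.070   # volts
r_m, i_in = 1e7, 2.0e-9  # membrane resistance (ohms) and injected current (amps)

v = v_rest
spikes = 0
for _ in range(int(1.0 / dt)):                 # simulate 1 s of activity
    dv_dt = (-(v - v_rest) + r_m * i_in) / tau_m
    v += dv_dt * dt                            # forward-Euler update
    if v >= v_thresh:                          # threshold crossing counts as a spike
        spikes += 1
        v = v_reset
print(f"{spikes} spikes in 1 simulated second")
```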

Even then, actual intelligence is likely to be much simpler than that. This kind of whole brain emulation would work, but it'd be like building a plane that flaps its wings just like a bird. Once we understand more about intelligence and how to achieve it (and I'd argue that large-scale emulations of the type I'm describing are ideal for that), I'm sure we'll find much better methods of reaching the same goal.
 
You know, normally I'd weigh in on a thread like this, since estimates of simulated neural activity are a hobby of mine, but honestly I don't know what the hell PGJ is talking about either.

If you want some upper-bound numbers:
The cost of brute-forcing neural computation by running a compartment-level simulation (a la the NEURON simulation environment) is about 4 teraflops (10^12) per real-time neuron.
The human brain has about 100 billion (10^11) such neurons.
So to emulate a human brain in real time will take about a yottaflop (10^24) of computational capacity.

The world's best supercomputer clocks in at 100 petaflops (10^17), a factor of 10^7 too slow.
Computers have traditionally increased by an order of magnitude every four years (although there are signs that that may be slowing).
Seven generations at four years per is 28 years from now.
So assuming things hold, we ought to be able to brute force humanlike machine intelligence by 2045.

Now, keep in mind this is an upper bound. Each neuron certainly isn't performing 4 teraflops of information processing. It takes that much to properly simulate a neuron because the electrotonic cable properties boil down to some simple but exhausting differential equations which just have to be ground through numerically. In all but a few niche cases you can replace it with a simple delay to represent action potential propagation, and the same goes for other, similar shortcuts, but since we don't know all of the niche cases, we can't yet estimate how much cheaper we can make it.

Even then, actual intelligence is likely to be much simpler than that. This kind of whole brain emulation would work, but it'd be like building a plane that flaps its wings just like a bird. Once we understand more about intelligence and how to achieve it (and I'd argue that large-scale emulations of the type I'm describing are ideal for that), I'm sure we'll find much better methods of reaching the same goal.
(A)
I was describing changes in terms of synaptic operations.

I also described long ago that, with crude approximations of neurons, machines are already doing some cognitive tasks as well as, or better than, humans.
With the above in mind, I computed a lower bound of at least 2020, using the standard Moore's law equation in terms of the machine-simulated/human synapse distribution.
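
For readers trying to follow the shape of this argument, here is a minimal sketch of a Moore's-law style extrapolation of the kind described; the baseline year, slowdown factor, and doubling period are all assumptions filled in purely for illustration (the post does not spell them out), and the resulting year is very sensitive to the assumed doubling period.

```python
import math

def realtime_year(baseline_year, slowdown_factor, doubling_period_years):
    """Year at which a simulation running `slowdown_factor`x slower than real
    time would reach real time, if capability doubles every `doubling_period_years`."""
    return baseline_year + math.log2(slowdown_factor) * doubling_period_years

# Illustrative assumptions only: a 2012 baseline running 1542x slower than real time.
print(realtime_year(2012, 1542, 1.0))   # ~2023 if capability doubled every year
print(realtime_year(2012, 1542, 2.0))   # ~2033 with a classic two-year doubling
```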



(B)
What is your lower-bound estimate? (Since the original argument targeted lower bounds anyway.)

Other bodies (e.g. the Tianhe-3 group) plan exascale machines for 2020 too, apart from neuromorphic companies like IBM, which have their own estimated ranges for reaching human-level cycles per unit time.
 
What is your lower-bound estimate? (Since the original argument targeted lower bounds anyway.)
There is nothing to base a lower bound estimate on. If we knew what needed to be done for intelligence, we could make a guess as to the hardware required, but we don't, so we can't. Maybe it'll be easy. Maybe it won't. We're waiting for scientific understanding to break through the problem, and until it does we can't really say. The only thing I'd venture is that abstracted machine intelligence will probably need synapses the same way planes need feathers.

Also, I'd caution you to be less credulous when you hear people making comparisons between electronics and biology. Just because you hear some knucklehead at IBM claim his system has as many synapses as a cat's brain, don't make it true. Convolutional neural networks do have "neural" in the name, but they absolutely are not "crude approximations of neurons," and anyone who says so is talking out of their ass.
 
