When will machines be as smart as humans?

The problem is that the things humans do are really, really hard to do through floating-point operations. Computers may get very fast, but because of their nature, the problems humans deal with on a regular basis explode factorially in computational cost for a computer. That makes them practically impossible to solve by brute force.
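To make that factorial blow-up concrete, here is a minimal sketch (the comparison to a polynomial cost like n³ is just an illustrative baseline, not anything from the discussion above): the number of orderings of n items grows as n!, which quickly dwarfs any merely polynomial cost.

```python
import math

# A brute-force search over all orderings of n items must
# examine n! possibilities; compare that to a polynomial cost.
for n in (5, 10, 15, 20):
    print(n, math.factorial(n), n ** 3)
```

Even at n = 20 the factorial term is already around 2.4 quintillion, while n³ is a mere 8000; this is the sense in which such problems "explode" for computers.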

In the end we have to wonder if our brains evolved the way they did because they're the most efficient tools for the job they were presented with.
 
I do not choose to believe that an immaterial component makes me aware; I don't know what it is. I only know that I am conscious, and no one could ever describe and replicate my subjective experiences, because that's what they are: no one can have access to them except me. And that is the point of this thread, isn't it? A washing machine is already an intelligent machine; it is objective, accurate, predictable, etc.

Well, the original question was when will machines be as "smart" as humans, which is an unfortunately vague term. I think that most of the discussion has centered around whether it's possible for a machine (computers being the obvious best candidates) to exhibit the sort of self-awareness and general intelligence that humans do. I'm not quite sure what your point about subjective experiences is getting at. Obviously a computer can't share your subjective experiences, but on the other hand, neither can I. I think a computer able to simulate the data-processing capacities of the human brain, and to exhibit the sort of apparent self-awareness that people appear to possess, would be a strong candidate for a bona fide artificial intelligence.

I find Dennett's attempts to quine qualia incredibly ridiculous.

Would you care to elaborate on the incredible ridiculousness of Dennett's discussion?
 
If the technological march doesn't stop (which it may), then at some point machines will have to become as smart as us, won't they?

By "smart" do you mean as in clever or intelligent?

If clever - answer = Some time ago.

If intelligent - answer = never
 
The thing is that we (or scientists) are still trying to figure out HOW we become self-aware, or what it means to be conscious.
So how can someone replicate or create a self-aware machine if he still has no clue what it is?
We solved that problem millions of years ago.
 
As for the OP:

Hasn't our best human chess player (I suppose "best" is arguable) lost to a computer?

That was Deep Blue, right?
 
As for the OP:

Hasn't our best human chess player (I suppose "best" is arguable) lost to a computer?

That was Deep Blue, right?
Not quite. The support team for the computer monitored Kasparov's play and developed strategies for the computer to use against him. This is typical of the problem with computer thinking at this time: computer problem solving is specialized, and the more advanced the program, the more pre-programming for problem solving there is.
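The kind of specialized, pre-programmed game playing described above boils down to systematic lookahead plus a hand-crafted evaluation function. Here is a minimal minimax sketch; the toy tree and scores are hypothetical stand-ins for the hand-tuned chess knowledge a team like Deep Blue's would supply:

```python
def minimax(node, depth, maximizing, evaluate, children):
    """Exhaustive game-tree search: the machine's 'thinking' is
    just lookahead plus a pre-programmed evaluation of positions."""
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    if maximizing:
        return max(minimax(k, depth - 1, False, evaluate, children) for k in kids)
    return min(minimax(k, depth - 1, True, evaluate, children) for k in kids)

# Hypothetical toy tree: leaves carry scores assigned in advance.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
scores = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}

best = minimax("root", 2, True,
               lambda n: scores.get(n, 0),
               lambda n: tree.get(n, []))
print(best)  # maximizer picks branch "a": min(3, 5) = 3 beats min(2, 9) = 2
```

All of the "intelligence" here lives in the evaluation function the programmers wrote in advance, which is exactly the point being made about pre-programming.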

What is intelligence?
What is sentience?
Are these things substrate neutral?

I think there will be a change in paradigms as they relate to intelligence. Humans have interesting problem solving techniques that we don't as yet know how to program into a computer. We don't even fully understand how we do many of the things that we do.

Bear in mind we are nowhere near a computer passing the Turing test (see the Loebner Prize). However, we are making incredible advances. Are we about to turn a corner? Could be. Is it inevitable? Intellectually I would have to say yes; intuitively I would say no. We'll see. Don't let it threaten you, though. Human cognition (and, to a small extent, animal cognition) is simply amazing. That we could get a machine to think would be a testament to human thought.
 
I find Dennett's attempts to quine qualia incredibly ridiculous.
Dear me, has Daniel Dennett been quining qualia?

So what is it with you and the letter Q anyway?

To quantify or qualify
quintessence (quasi-physical)
or quibble of such quiddities
is quirky, quaint and quizzical.
You cannot quine a qualium ---
who queries this quodlibet
unquestioned by the quorums
and the queues of the quickwitted
quoth I, is queer and quarrelsome ---
so quell such talk and quit it!

So down with Daniel Dennett!
Damn his dark deceitful mind!
The question's quite quotidian
and quales can't be quined.


You may quote him in quadruple
from his quartos (without equal)
but how to quine a quale must be
sequestered in the sequel.
You may quote me quite unqualified:
those quacks who quine their qualia
like that dummy Daniel Dennett
are all destined for failure.
In despite of Daniel Dennett
I'll be damned if I can see
any way to quine a qualium
qua quale. Q ... ED.

So down with Daniel Dennett
and his devilish designs!
There's no question that a qualium
is not a thing one quines.
 
I was working towards a grad school degree in Artificial Intelligence before I dropped out. Reason? I came to realize (to quote a phrase) that we are climbing trees, trying to reach the moon.

IMO, there is a LONG way to go before we achieve any kind of high-level adaptable intelligence with computers. It pains me to see people quote the number of operations per second, or the number of connections, or the amount of memory, etc., that computers can now achieve, as compared to the human brain, as if those figures had ANYTHING to do with it. All that says is that we've built a possible container for a brain, without any idea how to fill it. Worse, if we could somehow figure out how to fill it, it might run a billion times slower.

It WILL happen, though, if technology keeps advancing. It's hard to imagine there ever being a time when AI programmers throw up their hands and say, "Well, that's the best we can do. It's impossible to make this program any more brainlike." They'll eventually achieve it, but not for a LONG time, if you ask me. Maybe 500 years.
 
Ray Kurzweil has a new book out, and he maintains 25-30 years. He says that our computational capacities will increase at rates even beyond present levels, due to emerging nanotech-based processors.
 
When we have a computer that can do 10 petaflops, which programs are we going to run on it?
Oh, I think the problem is more serious than that; we don't even know what the appropriate architecture is.

I didn't adequately communicate my belief that this problem is more difficult than one of computation at the outset; still, it ought to be interesting to see what emerges from current models as the problem of computational power vanishes. And I think what emerges could well behave intelligently (for some definitions), if not consciously.

That's the nice thing about human brains - they bootstrap themselves quite nicely as long as they are exposed to the culture produced by other human brains.
They do an impressive job of it even in the absence of culture.
 
To go back to the original topic, I think the appropriate answer is: after we get flying cars but before we make a moon colony. What do I win?
 
Then I really don't understand your point. If materialism is true, then human beings are complex machines whose self-awareness is an emergent quality of intricate interactions between a multitude of neurons comprising the brain. Like any algorithmic process, the program run by the human brain is substrate-neutral, such that, if a similar process of data manipulation were set up in another substance (such as a silicon-based computer), the result would be the same. Hence, conscious computers. The fact that this hasn't been achieved yet is no evidence for its impossibility, and, as Belz and I pointed out above, the fact that it has been achieved in biological machines is good evidence that it is, in fact, possible.
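The substrate-neutrality claim above can be illustrated in miniature (this is only an analogy for the much stronger claim about minds, not a proof of it): the same abstract algorithm, run over entirely different kinds of data, produces the same result, because the algorithm is defined by its steps rather than by its medium.

```python
def insertion_sort(items):
    """One abstract procedure; the 'substrate' (ints, strings,
    list, tuple) is irrelevant to what the algorithm computes."""
    out = list(items)
    for i in range(1, len(out)):
        j = i
        while j > 0 and out[j - 1] > out[j]:
            out[j - 1], out[j] = out[j], out[j - 1]
            j -= 1
    return out

print(insertion_sort([3, 1, 2]))        # [1, 2, 3]
print(insertion_sort(("b", "c", "a")))  # ['a', 'b', 'c']
```

The sorting procedure doesn't care what it is running on or over; by the materialist argument above, the same would hold for the "program" run by the brain.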
There are problems with sentient machines, using human beings as a ruler to measure it anyway:

1) Being designed, unlike humans, they would lack a subjective self-directed purpose and relationship to nature.

2) Assuming they could somehow self-efface and have emotional and intellectual qualities similar to those of intelligent, sentient organic life, they would have countless intellectual and existential conflicts.

3) How would you even approach achieving sentience artificially, except as a simulacrum?
 
You like talking nonsense.
What nonsense? You were asking how we could produce self-aware machines if we have no idea how they work. I point out that we have been doing this for millions of years.

You're here, therefore the problem has been solved. Assuming that you are self-aware. I'll give you the benefit of the doubt.
 
I was working towards a grad school degree in Artificial Intelligence before I dropped out. Reason? I came to realize (to quote a phrase) that we are climbing trees, trying to reach the moon.

It WILL happen, though, if technology keeps advancing. It's hard to imagine there ever being a time when AI programmers throw up their hands and say, "Well, that's the best we can do. It's impossible to make this program any more brainlike." They'll eventually achieve it, but not for a LONG time, if you ask me. Maybe 500 years.
I think it depends. My understanding is that AI research has not made any real advances in 20-30 years or so. It's in need of a revolutionary genius to shift the paradigm. When (if) that happens, we can start making more reasonable forecasts. Any current forecasts are so long-term as to be meaningless; the field will almost certainly have had major shifts in thinking over 500 years.
 
I think it depends. My understanding is that AI research has not made any real advances in 20-30 years or so. It's in need of a revolutionary genius to shift the paradigm. When (if) that happens, we can start making more reasonable forecasts. Any current forecasts are so long-term as to be meaningless; the field will almost certainly have had major shifts in thinking over 500 years.


Well, of course it's impossible to come up with any really good number; putting the distance at 500 years is just my way of saying "eventually, yes, but surely no time soon".
 
There are problems with sentient machines, using human beings as a ruler to measure it anyway:

1) Being designed, unlike humans, they would lack a subjective self-directed purpose and relationship to nature.

Unless they were given one, or acquired one.

Assuming they could somehow self-efface and have emotional and intellectual qualities similar to those of intelligent, sentient organic life, they would have countless intellectual and existential conflicts.

Making them all the more like us.

How would you even approach achieving sentience artificially, except as a simulacrum?

I could recommend some good books on the subject. Saying you don't understand how it could be done is no argument that it can't be done.
 
I could recommend some good books on the subject. Saying you don't understand how it could be done is no argument that it can't be done.
Oh, I suppose I could wrap my head around how, but what a trip. I had robots in mind when I wrote that; genetically engineered cyborgs are another thing entirely, though, and a lot easier to imagine existing.
 