When will machines be as smart as humans?

For clarification, this entire thread is based on semantics, and the answers depend on the precision of the definitions we are using.

Yes, it always becomes a problem of semantics. I would like to ask Pixy to provide his definition of consciousness. Maybe it would help us understand what he is talking about.
 
I'm not talking about evolution. I'm talking about people. For as long as we have been self-aware, we have been self-awarely creating self-aware machines.

Again, I am not talking about evolution.
Can a biological machine create a physical machine that behaves exactly as itself? The answer is no.

No. I never said that, or anything remotely resembling that. I don't even know who you have me confused with. I'm the one who points out that p-zombies are conscious.

You are contradicting yourself. If evolution has created survival machines, then that's what we are: a collection of billions of genes that interact and produce sensations, thoughts, etc. But it does not mean that there is an "I" inside our brains, does it? That's what you are saying.

Belem
 
If you wish to argue that point, then we are DNA.
But we're not.

We don't know how to create them. Stipulated. And yet, we do so. Begetting is the means; creation is the result.
Begetting new humans is what humans do. Creating new humans is what human DNA does. And "humans" and "human DNA" are not equivalent.
 
Or, more reasonably, we'd have to recognize our status as biological machines.

Apes have very similar biology - are they human now?

No, you are right on - but then again, we cannot know what the perception of an ape is, and therefore cannot program a machine to mimic it. In programming abstractions, we are confined by our own understanding of them.

The most apparent problem would be that we can't test the abstract program for "bugs" if it's not based on a system we can directly identify with.
 
What part of being condescending is not nice don't you understand?

BBS Warzone remnant. I just don't like it when people misread the words I use.

More nicely: your conditional could be read two ways. First, the way you intended: "If [once] we understand....". Or in the colloquial sense of "if" meaning "since", in which case your sentence would have read "since we understand....", and Q-source's comments were an appropriate follow-up.

Good point. However, what other word could I have used? "In the eventuality"?

Why be rude to somebody just because they misread your sentence?

Because it's fun. Not productive, mind you, but fun. He'll get over it.
 
rharbers said:
I will certainly make it a priority to read Dawkins. Though I doubt that anything would make me accept artificial intelligence as something that is real. A machine may be able to make extremely fast calculations, but that does not make it smart.

As said before: the human mind is a biological machine. Once we understand how it works, we could theoretically replicate it, using the same materials or something synthetic. Ergo, a thinking machine, every bit as sentient as you or I.
 
Emotional intelligence? :confused: You are arguing that machines cannot be intelligent because they don't have endocrine imbalances? What if we simulate that biology?

That doesn't matter anyway. The point is to make them sentient... no... not even. The OP is about "smart" machines. All they have to do is learn, adapt and interact in a way at least comparable to us in efficiency. Of course, I'd go a step further and give them sentience too, as an added bonus. Emotion or not, there's no reason to believe this can't be done. Quite the opposite, in fact.
 
Again, I am not talking about evolution.
Can a biological machine create a physical machine that behaves exactly as itself? The answer is no.
You've never met my brother and my nephew.

You are contradicting yourself. If evolution has created survival machines, then that's what we are: a collection of billions of genes that interact and produce sensations, thoughts, etc. But it does not mean that there is an "I" inside our brains, does it? That's what you are saying.
No, that is not what I am saying. We are machines. We are conscious. There is an I generated by our brains.
 
But we're not.
I agree. But if you want to make that assertion, we are.

Begetting new humans is what humans do. Creating new humans is what human DNA does. And "humans" and "human DNA" are not equivalent.
If you are arguing that DNA is what is creating new humans, then what we are is DNA. The DNA can't do diddly without human effort.

Look at it one way, and DNA is merely the tool we humans use to create the new humans. Look at it the other way, and humans are merely the tool DNA uses to create new DNA. But you don't get to pick one from column A and one from column B.
 
cpolk said:
The point I was getting at (long-windedly, of course) was that abstractions arising from our human condition, such as fear of death, ambition, etc., are what motivate us to use our intelligence.

That may not be true of a computer, and that would not prevent it from being sentient. I don't think emotion and sentience are necessary to one another.

cpolk said:
The computer is doing calculations and going through the motions of playing a game of cards or chess with you, but it does not feel stressed when it is losing. There is no anticipation, no shame, no desire to win, and no camaraderie after the game is over.

A bonus for them, actually. For humans, emotions seem important. Objectively, though, a machine does not need them and is not hampered by their absence.

James Kirk would disagree, of course, and his mere rantings can make ANY intelligent computer explode.

[/geek]
 
I'm sorry, but that doesn't necessarily follow. In fact, we know that human consciousness is not only an information process. On the contrary, human consciousness would not exist without biological processes.

I have to disagree here. A computer is an electronic device, but it carries information which, itself, is not an electronic device. Of course, it's MADE of electronic things... but... damn semantics. You know what I mean.
 
Vagabond said:
They have no intelligence at all, regardless of how many computations they can make. Computers are designed by people to simulate intelligence well enough to fool most people, those who don't know better, into thinking they are intelligent. I doubt this will ever change, regardless of how advanced computer science becomes. I think they will get more and more sophisticated, so it will be harder to tell the difference. But the difference will remain.

I'm not sure you have read the thread so far. I'll repeat: IF we find out how the human brain works and WHY it is conscious, and apply this knowledge to artificial machines such as computers, what's stopping us from building a conscious computer?
 
That may not be true of a computer, and that would not prevent it from being sentient. I don't think emotion and sentience are necessary to one another.

This is where semantics come in heavily.

It is our human condition - limited life span, pain, injury, death, excitement, joy, etc. - that motivates us to use our intelligence. Without these things, the machines would be like the lazy lions at the zoo, just lying around all day without anything really to accomplish.

I don't have any problems admitting that I already get stomped at cards, chess, and even checkers by a simulated opponent. This is in no way comparable to the poker nights I occasionally attend at my buddy's house.

Computers are already capable of doing more than what we use them for - they just lack ambition and reason to do it. (I'm talking about the PC I'm typing on right now.)
 
Yet ambition and reason can be given to these machines. Very simply, in fact.

Think about this: if every computer on Earth were somehow given the Asimovian Three Laws, wouldn't that give them purpose, ambition, and reason to act?
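
Just for fun, here's a minimal sketch of what I mean, in Python. Everything in it is hypothetical (the rule names, the action flags are made up for illustration): it just shows "purpose" wired in as a priority-ordered list of constraints, where a lower law yields to a higher one. Not a claim about how real robots work.

# Toy sketch only - hypothetical names, not a real robotics API.
# Each law is a test an action should pass; earlier laws outrank later ones.
LAWS = [
    ("no_harm_to_humans", lambda a: not a.get("harms_human", False)),
    ("obey_orders",       lambda a: not a.get("disobeys_order", False)),
    ("self_preservation", lambda a: not a.get("harms_self", False)),
]

def worst_violation(action):
    # Index of the highest-priority law the action breaks;
    # len(LAWS) means it breaks none. Lower index = worse action.
    for i, (_, ok) in enumerate(LAWS):
        if not ok(action):
            return i
    return len(LAWS)

def choose(actions):
    # Prefer the action whose worst violation matters least.
    return max(actions, key=worst_violation)

options = [
    {"name": "stand_by",  "harms_human": True},  # inaction lets a human come to harm
    {"name": "intervene", "harms_self": True},   # robot damages itself to prevent it
]
print(choose(options)["name"])  # -> intervene: law 3 yields to law 1

The point being: "ambition" here is nothing mystical, just an ordering over outcomes. Whether that counts as purpose is, again, semantics.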
 
A brain is physical. Consciousness is informational.
A brain, like all matter, carries qualities of both linear matter and nonlinear space. The difference in living things appears to lie in a third variable, a ghost in the machine, arising either from a configuration of matter that our biophysical models do not yet capture or from some unknown type of linear information processing. I see your point, though. Basically, for human intelligence to be fundamentally different, we would need a different biophysical model and different types of linear/nonlinear interaction than what we already have. Otherwise it is just semantics, and ultimately any difference between a 2-stroke engine and the human brain is an illusion.
 
If you are arguing that DNA is what is creating new humans, then what we are is DNA. The DNA can't do diddly without human effort.

Look at it one way, and DNA is merely the tool we humans use to create the new humans. Look at it the other way, and humans are merely the tool DNA uses to create new DNA. But you don't get to pick one from column A and one from column B.
If those are the two choices then I am firmly in column B. But that still doesn't make humans equivalent to human DNA, and it doesn't mean that humans know how to make human DNA.

It's the difference between a computer designer and a computer user. Just because someone knows how to Press Any Key doesn't mean that they know how to take a bucket of sand and make a Pentium out of it. Just because two people know that putting Tab A into Slot B results in a baby nine months later doesn't mean that they know the biomechanics of fertilization and zygote development at a deeper level.
 
If those are the two choices then I am firmly in column B. But that still doesn't make humans equivalent to human DNA, and it doesn't mean that humans know how to make human DNA.

It's the difference between a computer designer and a computer user. Just because someone knows how to Press Any Key doesn't mean that they know how to take a bucket of sand and make a Pentium out of it. Just because two people know that putting Tab A into Slot B results in a baby nine months later doesn't mean that they know the biomechanics of fertilization and zygote development at a deeper level.

I hesitate to complicate this subject further, but there is some speculation that human consciousness is actually an emergent property of the interaction of memes floating around in the hardware of the brain created by the genes (with that process possibly having been influenced by the memes). Susan Blackmore makes an interesting argument for this in The Meme Machine. If this is the case, then creating a computer that replicates human consciousness becomes even trickier. Or not, since memes are ultimately just units of self-replicating information, as the sketch below illustrates.
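
Since I brought it up, here's roughly what "units of self-replicating information" means in practice - a bare-bones Python sketch, nothing from Blackmore herself, just imperfect copying plus selection. The "catchiness" function and the target string are made-up stand-ins.

import random

random.seed(1)  # reproducible toy run

ALPHABET = "abcdefghijklmnopqrstuvwxyz "
TARGET = "all your base"  # arbitrary stand-in for a 'catchy' pattern

def mutate(meme, rate=0.05):
    # Copying is imperfect: each character may be miscopied.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in meme)

def catchiness(meme):
    # Stand-in fitness: how closely the meme matches the favoured pattern.
    return sum(a == b for a, b in zip(meme, TARGET))

def next_generation(pool, size=50):
    # Catchier memes get copied more often, errors and all.
    weights = [catchiness(m) + 1 for m in pool]
    return [mutate(random.choices(pool, weights=weights)[0]) for _ in range(size)]

pool = ["x" * len(TARGET)] * 50
for _ in range(300):
    pool = next_generation(pool)
print(max(pool, key=catchiness))  # drifts toward the favoured pattern

Replication, variation, selection - that's the whole trick. Whether a soup of those running on neurons adds up to an "I" is the open question.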
 
If those are the two choices then I am firmly in column B. But that still doesn't make humans equivalent to human DNA, and it doesn't mean that humans know how to make human DNA.
Yep.

It's the difference between a computer designer and a computer user. Just because someone knows how to Press Any Key doesn't mean that they know how to take a bucket of sand and make a Pentium out of it. Just because two people know that putting Tab A into Slot B results in a baby nine months later doesn't mean that they know the biomechanics of fertilization and zygote development at a deeper level.
Oh, no disagreement there. It's all a question of the semantics of a throw-away line anyway. We didn't know (in any detail) how to build these machines, but we did know that if you set the process in motion, that's what you got. And the volition was on the part of the machines, not the process.
 
