When will machines be as smart as humans?

What nonsense? You were asking how we could produce self-aware machines if we have no idea how they work. I point out that we have been doing this for millions of years.

"we"?, who are "we"?. If you mean evolution then that is true. But you know that we are not talking about evolution.

You're here, therefore the problem has been solved. Assuming that you are self-aware. I'll give you the benefit of the doubt.

Well, according to you, I am not conscious; in fact, there is no "I". The irony of all this is that you may be quite right, but not for the reasons that you argue.


Dr. Adequate: I didn't mean to upset you, really.
 
If the technological march doesn't stop (which it may), then at some point machines will have to become as smart as us, won't they?

Machines will never be as "smart" as humans, in the way you are thinking. Some human will eventually be smart enough to build a machine that can convince some people that it is, though.
 
Machines will never be as "smart" as humans, in the way you are thinking. Some human will eventually be smart enough to build a machine that can convince some people that it is, though.

That's a fairly short-sighted point of view.

I think machines will eventually be smarter than humans in every way. In fact, I think certain machines are already smarter than certain humans, in some ways.
 
We solved that problem millions of years ago.
How so? What self-aware machine have we been creating?

We certainly haven't been creating humans; DNA does that. The closest we come is begetting them, since we don't yet know how to create them.
 
That's a fairly short-sighted point of view.

I think machines will eventually be smarter than humans in every way. In fact, I think certain machines are already smarter than certain humans, in some ways.

Not short-sighted at all. Mathematically, yes, they are. Emotionally, no, they aren't, and they never will be, because they will not have the biological functions that allow the reactions that cause emotions in humans. When they do contain that biology, they will not be robots; they will be humans.
 
Not short-sighted at all. Mathematically, yes, they are. Emotionally, no, they aren't, and they never will be, because they will not have the biological functions that allow the reactions that cause emotions in humans. When they do contain that biology, they will not be robots; they will be humans.

Did you just say machines (in particular, computers) are smarter mathematically than humans? If so you have a poor understanding of mathematics. Computationally they are faster, and less likely to make mistakes in long computations (though the mistakes that can occur are of a different kind than the human sort), but that doesn't constitute intelligence.
 
How so? What self-aware machine have we been creating?
Us.

We certainly haven't been creating humans; DNA does that.
If you wish to argue that point, then we are DNA.

The closest we come is begetting them, since we don't yet know how to create them.
We don't know how to create them. Stipulated. And yet, we do so. Begetting is the means; creation is the result.
 
"we"?, who are "we"?. If you mean evolution then that is true. But you know that we are not talking about evolution.
I'm not talking about evolution. I'm talking about people. For as long as we have been self-aware, we have been self-awarely creating self-aware machines.

Well, according to you, I am not conscious; in fact, there is no "I".
No. I never said that, or anything remotely resembling that. I don't even know who you have me confused with. I'm the one who points out that p-zombies are conscious.
 
Not short-sighted at all. Mathematically, yes, they are. Emotionally, no, they aren't, and they never will be, because they will not have the biological functions that allow the reactions that cause emotions in humans. When they do contain that biology, they will not be robots; they will be humans.
Emotional intelligence? :confused: You are arguing that machines cannot be intelligent because they don't have endocrine imbalances? What if we simulate that biology?
 
Did you just say machines (in particular, computers) are smarter mathematically than humans? If so you have a poor understanding of mathematics. Computationally they are faster, and less likely to make mistakes in long computations (though the mistakes that can occur are of a different kind than the human sort), but that doesn't constitute intelligence.

That depends on whether or not you are equating the ability of intelligence with the condition of being "smart". If your definition is that they are one and the same, then I agree with you, and I would be wrong. If your definition of "smart" is only the knowledge you possess at any given moment, then computers can be "smarter".

Emotional intelligence? :confused: You are arguing that machines cannot be intelligent because they don't have endocrine imbalances? What if we simulate that biology?

To have intelligence, a machine must be able to learn through experience. Experience is shaped through perception; perception through senses. For instance, a machine cannot "understand" something abstract, such as music, without being able to sense it in a manner that is close to the way we do.

If we simulate biology, would we be able to simulate sensory reactions as well? Or would we have to actually replicate the biology? Would a machine be able to appreciate "illness" and "death" and understand them on the level we do without experiencing them first-hand to some degree? Would it be possible to simulate such things, or can they only be replicated?

Would such a machine be considered to be "alive"? If we replicate the biology, wouldn't it no longer be a machine?
 
To have intelligence, a machine must be able to learn through experience.
Hmm. I think that's a definition rather than a necessary conclusion, but it's one that I can accept.

Experience is shaped through perception; perception through senses.
Yes.

For instance, a machine cannot "understand" something abstract, such as music, without being able to sense it in a manner that is close to the way we do.
Sure it can. It's just that its understanding would be different. I'm not saying that machine intelligence would be human; merely that it is possible.
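
To make "different" concrete, here is a toy sketch of my own (purely illustrative, not a claim about how any real system works): a machine "listening" to a tone by reducing the waveform to a few numbers. That is a form of perception, just not ours.

```python
import numpy as np

def perceive(signal, sample_rate):
    """A machine's crude 'perception' of sound: not hearing,
    just reducing the waveform to a few numeric features."""
    spectrum = np.abs(np.fft.rfft(signal))                 # magnitude spectrum
    freqs = np.fft.rfftfreq(len(signal), 1 / sample_rate)
    return {
        "dominant_hz": float(freqs[np.argmax(spectrum)]),  # loudest frequency
        "rms": float(np.sqrt(np.mean(signal ** 2))),       # overall level
    }

# A one-second 440 Hz tone: the machine 'understands' it as numbers.
rate = 44100
t = np.linspace(0, 1, rate, endpoint=False)
print(perceive(np.sin(2 * np.pi * 440 * t), rate))
# -> {'dominant_hz': 440.0, 'rms': 0.707...}
```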

If we simulate biology, would we be able to simulate sensory reactions as well? Or would we have to actually replicate the biology?
There are research projects underway to do both.

Would a machine be able to appreciate "illness" and "death" and understand them on the level we do without experiencing them first-hand to some degree?
Here's the thing: I don't think this relates at all to the question of intelligence. It's a question of knowledge. Not the same at all.

Would it be possible to simulate such things, or can they only be replicated?
Either way is possible.

Would such a machine be considered to be "alive"? If we replicate the biology, wouldn't it no longer be a machine?
I consider humans to be machines, so while I understand the question, I'm not sure my answer would satisfy you.
 
That's a fairly short-sighted point of view.

I think machines will eventually be smarter than humans in every way. In fact, I think certain machines are already smarter than certain humans, in some ways.
I guess you can define "smart" in such a way as to make an argument, but it's not really impressive. Machines at the moment perform precisely in the manner that was predetermined by the designer/programmer. The better (smarter) they are, the more narrowly focused they are. Try getting a chess program to play blackjack without reprogramming it with a complex algorithm covering all the rules and possible hands. There are some very fundamental differences between humans and machines. This doesn't mean that humans won't figure out how to make machines do what we do, just that we haven't gotten there yet.
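
To show what I mean by "predetermined by the designer", here is a toy sketch of my own (not any real chess engine): a generic game search whose entire "skill" is the rules and scoring function a human hands it. Encode Nim and it plays Nim; point it at blackjack and it is useless until a human writes blackjack's rules too.

```python
# Toy sketch: a generic negamax search. All of its "intelligence"
# lives in the designer-supplied moves/apply_move/score functions.

def best_move(state, moves, apply_move, score, depth):
    def value(s, d):
        ms = moves(s)
        if d == 0 or not ms:
            return score(s)   # the designer's judgment, not the machine's
        return max(-value(apply_move(s, m), d - 1) for m in ms)
    return max(moves(state), key=lambda m: -value(apply_move(state, m), depth - 1))

# It plays Nim only because a human encoded Nim's rules for it:
nim_moves = lambda n: [t for t in (1, 2, 3) if t <= n]  # take 1-3 stones
nim_apply = lambda n, take: n - take
nim_score = lambda n: -1 if n == 0 else 0               # facing 0 stones = you lost

print(best_move(5, nim_moves, nim_apply, nim_score, depth=10))  # -> 1 (the winning take)
```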
 
I consider humans to be machines, so while I understand the question, I'm not sure my answer would satisfy you.
The poster asks a valid question. Forget the human-is-machine meme for a moment. That's just semantics. If I replicate the gene sequence of an orange tree and program a computer with the genetic code, will I get oranges? No, not unless I equip the computer with the mechanics to produce oranges, including the mechanics to assemble cells and infuse them with chemicals and water. Oranges are not substrate-neutral. I mean, I'm not going to get oranges from plastic, wire, and steel. Right?

Is human cognition, in part, biological? Of course it is, but how much, and is the biology crucial to self-awareness?

Dismissing the argument as mere semantics doesn't really help much.
 
The poster asks a valid question. Forget the human-is-machine meme for a moment. That's just semantics. If I replicate the gene sequence of an orange tree and program a computer with the genetic code, will I get oranges? No, not unless I equip the computer with the mechanics to produce oranges, including the mechanics to assemble cells and infuse them with chemicals and water. Oranges are not substrate-neutral. I mean, I'm not going to get oranges from plastic, wire, and steel. Right?
That's a completely different question.

Oranges are material objects; consciousness is informational. Information is necessarily substrate-neutral.
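
A toy illustration of what I mean (a sketch of my own, nothing more): the same piece of information can be carried on entirely different substrates and recovered unchanged from any of them.

```python
# One piece of information, three different "substrates".
message = "I think, therefore I am"

as_bytes = message.encode("utf-8")                  # substrate 1: bytes in memory
as_bits = [int(c) for byte in as_bytes              # substrate 2: a list of bits
           for c in format(byte, "08b")]
as_ink = " ".join(str(b) for b in as_bits)          # substrate 3: marks on "paper"

# Recover the identical information from the last substrate.
bits = as_ink.split()
recovered = bytes(int("".join(bits[i:i + 8]), 2)
                  for i in range(0, len(bits), 8)).decode("utf-8")
assert recovered == message                         # nothing was lost in transit
```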

Is human cognition, in part, biological? Of course it is, but how much, and is the biology crucial to self-awareness?
No; and none. The brain is of course entirely biological. Consciousness is generated by the brain. That does not make consciousness biological.

Dismissing the argument as mere semantics doesn't really help much.
The question was about semantics, and it was based on a premise I do not accept. That is why I did not answer.
 
PixyMisa, I like the way you think. ;)

Hmm. I think that's a definition rather than a necessary conclusion, but it's one that I can accept.

It was really more of a generalization. Probably a bit overly so.

Sure it can. It's just that its understanding would be different. I'm not saying that machine intelligence would be human; merely that it is possible.

If we are programming machines to understand perception, we are limited by the fact that we know only human perception. Therefore, we can only program machines to experience things in ways similar to the ways we experience them. I have doubts that we would be able to teach a machine to experience abstracts in any way other than the ways we are able to experience them ourselves.

But that's just my opinion, I'm not set in stone about anything.

Here's the thing: I don't think this relates at all to the question of intelligence. It's a question of knowledge. Not the same at all.

The point I was getting at (long-windedly, of course) was that abstracts arising from our human condition, such as fear of death, ambition, etc., are what motivate us to use our intelligence.

I consider humans to be machines, so while I understand the question, I'm not sure my answer would satisfy you.

Actually, so do I. :D That is why I think that first we will create machines that closely resemble human intelligence, give them the biological aspects needed to understand that abstract perception of human intelligence, and end up just creating a human. After that, we would be doing nothing more than improving on the human design.

This is all my own abstract thinking, of course. This may or may not happen. I'm having a specially cooked meatloaf tomorrow, so I am happy.
 
I guess you can define "smart" in such a way as to make an argument, but it's not really impressive. Machines at the moment perform precisely in the manner that was predetermined by the designer/programmer. The better (smarter) they are, the more narrowly focused they are. Try getting a chess program to play blackjack without reprogramming it with a complex algorithm covering all the rules and possible hands. There are some very fundamental differences between humans and machines. This doesn't mean that humans won't figure out how to make machines do what we do, just that we haven't gotten there yet.


The computer is doing calculations and going through the motions of playing a game of cards or chess with you, but it does not feel stressed when it is losing. There is no anticipation, no shame, no desire to win, and no camaraderie after the game is over.
 
No; and none. The brain is of course entirely biological. Consciousness is generated by the brain. That does not make consciousness biological.

What I am getting at is that the computer won't be able to feel its heart pound with anxiety during a tense situation. Perhaps this would be regarded as a strength rather than a fault? Anyway, in order to approximate that, I think we would have to somehow impart not just the condition of the heart onto the machine, but all of the corresponding biological functions that approximate anxiety.

Otherwise, how can the machine understand what we mean when we say that we are having an anxiety attack?
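
To put some flesh on what "imparting" those functions might mean, here is a crude sketch of my own invention (nothing like real physiology): a simulated stress level that a tense situation drives up, that fades on its own, and that biases the machine's choices the way a pounding heart biases ours.

```python
# Crude sketch: a simulated "anxiety" signal influencing a machine's choices.

class SimulatedAnxiety:
    def __init__(self, decay=0.9):
        self.level = 0.0          # 0 = calm, 1 = full panic
        self.decay = decay        # how quickly calm returns

    def feel(self, threat):
        """Threat (0-1) pushes the level up; quiet moments let it fade."""
        self.level = min(1.0, self.level * self.decay + threat)

    def choose(self, cautious, bold):
        """High anxiety tips the machine toward the cautious option."""
        return cautious if self.level > 0.5 else bold

machine = SimulatedAnxiety()
for threat in (0.1, 0.2, 0.6, 0.7):        # a situation grows tense
    machine.feel(threat)
print(round(machine.level, 2))             # -> 1.0, a pounding "heart"
print(machine.choose("retreat", "attack")) # -> 'retreat'
```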
 
That's a completely different question.

Oranges are material objects; consciousness is informational. Information is necessarily substrate-neutral.
Yes, information is necessarily substrate-neutral. But where did you get the notion that consciousness = information?

No; and none. The brain is of course entirely biological. Consciousness is generated by the brain. That does not make consciousness biological.
I did not imply otherwise. Our consciousness is a result of biological processes. To what extent do those biological processes shape consciousness?

The question was about semantics, and it was based on a premise I do not accept. That is why I did not answer.
No, you are making it about semantics. It is still a valid question. You don't have to accept premises. Rejecting a premise, however, won't invalidate it.
 
