When will machines be as smart as humans?

Unless programmed with the same stupid need that we seem to have for comforting intellectual gap-fillers, no.

I was thinking more of awareness as an emergent consequence of such vast computational power: the machine, as a result, realising who brought about its own awareness.

Sorry, got to log off to go home; will look in later.

Bye for now.
 
Read Richard Dawkins's The Selfish Gene. You are a living machine.

I speak of a machine in its generic sense: as a contrivance to help us do work. Don't get me wrong, but I love science fiction, especially Frank Herbert novels.
 
Then I really don't understand your point. If materialism is true, then human beings are complex machines whose self-awareness is an emergent quality of intricate interactions between a multitude of neurons comprising the brain. Like any algorithmic process, the program run by the human brain is substrate-neutral, such that, if a similar process of data manipulation were set up in another substance (such as a silicon-based computer), the result would be the same. Hence, conscious computers. The fact that this hasn't been achieved yet is no evidence for its impossibility, and, as Belz and I pointed out above, the fact that it has been achieved in biological machines is good evidence that it is, in fact, possible.
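As an aside (my own illustration, not part of the original post), the substrate-neutrality claim can be sketched in code: the same addition algorithm run on the CPU's native integer hardware and on a simulated circuit built entirely out of NAND gates produces identical results. The gate names and bit width here are just illustrative choices:

```python
# Binary addition on two "substrates": native CPU integers, and a
# simulated logic circuit where every gate is built from NAND alone.

def NAND(a, b):
    return 1 - (a & b)

def NOT(a):     return NAND(a, a)
def AND(a, b):  return NOT(NAND(a, b))
def OR(a, b):   return NAND(NOT(a), NOT(b))
def XOR(a, b):  return AND(OR(a, b), NAND(a, b))

def full_adder(a, b, carry):
    # sum bit and carry-out of one binary column
    s = XOR(XOR(a, b), carry)
    c = OR(AND(a, b), AND(carry, XOR(a, b)))
    return s, c

def add_via_gates(x, y, width=8):
    # ripple-carry adder: chain full adders bit by bit
    carry, total = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        total |= s << i
    return total

for x, y in [(3, 4), (25, 17), (100, 55)]:
    assert add_via_gates(x, y) == x + y  # same answer on both substrates
print(add_via_gates(25, 17))  # → 42
```

The point isn't that NAND gates are conscious; it's that the *process* doesn't care what it runs on, which is exactly the premise behind "conscious computers."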
 
Then I really don't understand your point. If materialism is true, then human beings are complex machines whose self-awareness is an emergent quality of intricate interactions between a multitude of neurons comprising the brain. Like any algorithmic process, the program run by the human brain is substrate-neutral, such that, if a similar process of data manipulation were set up in another substance (such as a silicon-based computer), the result would be the same. Hence, conscious computers. The fact that this hasn't been achieved yet is no evidence for its impossibility, and, as Belz and I pointed out above, the fact that it has been achieved in biological machines is good evidence that it is, in fact, possible.

You make an excellent point. So I won't say it's impossible, I'll just say it's improbable. If it takes millions of years for a biological machine to gain consciousness, then I will agree it will take millions of years for our machines.
 
It's not computational power that's the roadblock, though. It's our understanding of what "smart" really means.

So to answer the original question, the only way machines will become as "smart" as us is if we increase our scientific knowledge a great deal. Because what we know now is insufficient to explain intelligence, much less produce it.
Right, and this is why we should probably try to define what we mean by 'smart' first, since it gets tossed around in a lot of different contexts. But I do think having the computational power to create cognitive testbeds will be useful.

rharbers, I think the mistake you're making is in assuming that we have to understand how something works before we can create it. This isn't true; we already have processes that allow us to create tools without an understanding of how they work.
 
rharbers, I think the mistake you're making is in assuming that we have to understand how something works before we can create it. This isn't true; we already have processes that allow us to create tools without an understanding of how they work.

I don't think we create anything. We simply take energy and matter and transform it. As far as saying we create tools without understanding how they work, you will have to name a few. I will agree to accidental discoveries.
 
I don't think we create anything. We simply take energy and matter and transform it.
You'll have to explain to me why it's useful to reduce the problem to this level.

As far as saying we create tools without understanding how they work, you will have to name a few. I will agree to accidental discoveries.
The most obvious example is genetic algorithms. We can create software or a circuit to perform a task with no understanding of the underlying logic.
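To make the genetic-algorithm point concrete (a toy sketch of my own, not something from the thread): an evolutionary loop can "discover" a target bitstring without anyone specifying the solution by hand. The population size, mutation rate, and all-ones target (the classic OneMax toy problem) are illustrative assumptions:

```python
import random

random.seed(0)
TARGET_LEN = 20  # evolve a 20-bit string toward all 1s ("OneMax")

def fitness(bits):
    return sum(bits)

def mutate(bits, rate=0.05):
    # flip each bit with small probability
    return [b ^ 1 if random.random() < rate else b for b in bits]

def crossover(a, b):
    cut = random.randrange(1, TARGET_LEN)
    return a[:cut] + b[cut:]

# start from random genomes; nobody designs the answer
population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
              for _ in range(30)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == TARGET_LEN:
        break
    parents = population[:10]  # selection: keep the fittest unchanged
    population = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(20)
    ]

best = max(population, key=fitness)
print("evolved fitness:", fitness(best), "of", TARGET_LEN)
```

Only the scoring rule is specified; the solution itself emerges from variation and selection, which is the sense in which we can build something that works without writing down its logic.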
 
You'll have to explain to me why it's useful to reduce the problem to this level.


The most obvious example is genetic algorithms. We can create software or a circuit to perform a task with no understanding of the underlying logic.

Ok. You're way over my head now. I only reduce the problem to its basic level due to my limited knowledge. I'm ignorant enough to think that hunks of metal can't reason. That's all.
 
If "Star Trek: The Next Generation" has taught us anything, it's that yes, machines can become as smart as humans. But they'll be socially awkward, boring as conversationalists, irritating as coworkers, and they'll suck the fun out of the room when they lurch in and stare at you with their creepy, soulless yellow eyes. Even Whoopi Goldberg the bartender will dislike them, but since they'll have no social understanding and can't read body language, they'll miss any and all hints that their company isn't wanted. It'll end, inevitably, in someone removing the batteries and stuffing the thing in the back of the coat closet until the next garage sale.
 
If "Star Trek: The Next Generation" has taught us anything, it's that yes, machines can become as smart as humans. But they'll be socially awkward, boring as conversationalists, irritating as coworkers, and they'll suck the fun out of the room when they lurch in and stare at you with their creepy, soulless yellow eyes. Even Whoopi Goldberg the bartender will dislike them, but since they'll have no social understanding and can't read body language, they'll miss any and all hints that their company isn't wanted. It'll end, inevitably, in someone removing the batteries and stuffing the thing in the back of the coat closet until the next garage sale.

You're talking about Data, right?

;)
 
Belz... said:
Nonsense. If biological organisms can be self-aware, there's no reason to believe that we can't achieve the same results with machines.

There is no reason to believe we can.

Why the hell not? If we can understand HOW biological organisms are self-aware, what's stopping us?

Please provide your line of reasoning, if any. Otherwise I might think it's an unsupported assumption of yours.
 
You make an excellent point. So I won't say it's impossible, I'll just say it's improbable. If it takes millions of years for a biological machine to gain consciousness, then I will agree it will take millions of years for our machines.

However, technological evolution is much faster, and this time it IS guided by the hand of an intelligent designer.
 
Why the hell not? If we can understand HOW biological organisms are self-aware, what's stopping us?

Please provide your line of reasoning, if any. Otherwise I might think it's an unsupported assumption of yours.

I have already acquiesced to the possibility. I'm sure I'll never see it in my lifetime.
 
Why the hell not? If we can understand HOW biological organisms are self-aware, what's stopping us?

The thing is that we (or scientists) are still trying to figure out HOW we become self-aware, or what it means to be conscious.
So how can someone replicate or create a self-aware machine if they still have no clue about what it is?
 