When will machines be as smart as humans?

If awareness is an emergent property of complex systems, there is no need for a designer to know how to create awareness. He need only know how to create complexity.
Knowing how to create a decent cup of coffee, on the other hand, is a challenge.
 
Well, if the ability to make a decent cup of coffee is a sign of intelligence, the machines are ready to take over.

Now tea, on the other hand...




My psychic powers tell me someone will be unable to resist completing this HHGTTG allusion.
 
Sorry for the delayed reply; I was out of town over the weekend. I hope it's not too late to address a few of the points in your responses.

I speak of a machine in its generic sense: as a contrivance to help us do work. Don't get me wrong, but I love science fiction, especially Frank Herbert novels.

I already responded to this, but it occurred to me only later that we may have had a miscommunication here. The Selfish Gene is not a science fiction novel; it is a nonfiction book about evolutionary biology by Richard Dawkins, an Oxford zoologist. The main point of the book is that evolution is in most cases best explained in terms of the differential reproduction of genes, not organisms, and that organisms are best viewed as "survival machines" developed by natural selection for the purpose of protecting the genes and facilitating their reproduction. In one of the book's better known passages, Dawkins writes:

Now [the genes] swarm in huge colonies, safe inside gigantic lumbering robots, sealed off from the outside world, communicating with it by tortuous indirect routes, manipulating it by remote control. They are in you and in me; they created us, body and mind; and their preservation is the ultimate rationale for our existence.

Thus, in Dawkins's view, and the view of at least a fair number of modern evolutionary biologists, organisms are literally living machines developed for the purpose of protecting the self-replicating genes that created them; this was the point of my post to which you responded above.

So I won't say it's impossible, I'll just say it's improbable. If it takes millions of years for a biological machine to gain consciousness, then I will agree it will take millions of years for our machines.

You're overlooking the vitally important fact that, unlike biological intelligence, artificial intelligence will be created by an intelligent designer (or, actually, a large number of intelligent designers working toward a common goal) rather than by the blind process of natural selection. The effect of consciousness that took nature billions of years to achieve can certainly be achieved much more quickly by an intelligent agent with the power of foresight. I haven't the expertise to make an informed judgment as to how soon we might expect to achieve bona fide artificial consciousness, but I wouldn't be surprised to see it in our lifetime (assuming that technology progresses at the present rate), and certainly do not expect it to take millions of years.

Ok. You're way over my head now. I only reduce the problem to its basic level due to my limited knowledge. I'm ignorant enough to think that hunks of metal can't reason. That's all.

We're all ignorant of a great many things, but ignorance and sarcasm are not a defense when maintaining a discredited view. Your point here seems to be that the idea of consciousness emerging from the exchange of electrical signals between microchips seems counterintuitive and implausible. Is it any more intuitive or plausible that consciousness could emerge from the exchange of electrical signals between hunks of protein and water? Science has long since established that it does.
 
The human brain is capable of about 10 quadrillion neural operations per second, and a neural operation is simpler than a floating-point operation. When we have a computer that can do 10 petaflops, it will exceed the total computational power of the brain.

The fastest supercomputers at the moment are a couple of orders of magnitude short of that number. I expect that they'll catch up within a decade or two. I also doubt they'll be used to try to reproduce human intelligence.
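For what it's worth, the back-of-envelope arithmetic behind those figures looks like this; a rough sketch in Python, using only the numbers quoted above (estimates, not measurements):

```python
# Rough sanity check on the figures quoted above (estimates, not measurements).
BRAIN_OPS_PER_SEC = 10e15   # "10 quadrillion" neural operations per second = 10^16
PETAFLOPS = 1e15            # 1 petaflop/s = 10^15 floating-point operations per second

brain_equivalent = BRAIN_OPS_PER_SEC / PETAFLOPS
print(f"Brain estimate: about {brain_equivalent:.0f} petaflops")  # -> about 10 petaflops

# "A couple of orders of magnitude short" puts the fastest current machines
# somewhere around 1/100 of that figure:
current_machines = brain_equivalent / 100
print(f"Fastest current machines: roughly {current_machines} petaflops")  # -> 0.1 petaflops
```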
Errr... Leaving your numbers aside for the moment...

When we have a computer that can do 10 petaflops, which programs are we going to run on it?

That's the nice thing about human brains - they bootstrap themselves quite nicely as long as they are exposed to the culture produced by other human brains.

Memes R Us.
 
You make an excellent point. So I won't say it's impossible, I'll just say it's improbable. If it takes millions of years for a biological machine to gain consciousness, then I will agree it will take millions of years for our machines.
Think of how long it took to get to simple single-celled organisms, and think how long it took to go from a programmed loom to the internet. Computers are on a timescale vastly different from our own. Besides which, computer brains won't have to do nearly as much mucking about with how to keep the organism alive and powered.

Nice thing about being parasitic.
 
I think we must realize that all the smart machines we have seen imagined in Sci-Fi etc. are androids. That is, they are imitations of humans. However, real machines are not particularly likely to be androids. Just think of robots. Robots are all around us, but they are not androids. Your automatic laundry machine is a robot, but it does not clomp around on legs and try to talk; it sits quietly in a corner and does what it is built for.

Intelligent computers will not imitate human brains (unless somebody sets out to make them do just that), they will be intelligent machines. What machine intelligence will be like is not really easy to imagine, but basically, I think it will be machines that do what we want them to do, and/or need them to do, instead of what we ask them to do or what the constructor thought we wanted them to do.

Will they want to take over? Why should they? That would not be an intelligent thing to do, and I think they will have some very pragmatic intelligences.

Hans
 
The thing is that we (or scientists) are still trying to figure out HOW we become self-aware, or what it means to be conscious.
So how can someone replicate or create a self-aware machine if he still has no clue what it is?

What part of "IF" didn't you understand?
 
What part of "IF" didn't you understand?
What part of being condescending is not nice don't you understand?

More nicely: your conditional could be read two ways. First, the way you intended: "If [once] we understand....". Or in the colloquial sense of "if" meaning "since", in which case your sentence would have read "since we understand ...." and Q-source's comments were an appropriate follow-up.

Why be rude to somebody just because they misread your sentence?

ETA: I think I may have expressed myself clumsily. Consider this sentence:

"If we can send men to the moon, why can't we do X"

Clearly the speaker fully believes we can send people to the moon.
 
If I were a machine, I would be acting without thought and will.
Oh, you're one of Ian's people.

I'm so sorry... I thought you were just being a bit thick.

Like it or not, you're a machine - a beautiful, splendid machine.

Deal with it.
 
If I were a machine, I would be acting without thought and will.

At this point I think I can only recommend that you read a few books about these issues that might inspire you to revise your opinion. Dawkins's The Selfish Gene offers a very good explanation of organisms as survival machines from the gene's perspective, though it doesn't address the question of artificial intelligence specifically. Dennett's Darwin's Dangerous Idea does talk a bit about artificial intelligence and consciousness as an algorithmic process. I believe that Dennett's Consciousness Explained goes into further detail, but unfortunately I have not yet read that book (I'm hoping to get to it at some point this year).
 
At this point I think I can only recommend that you read a few books about these issues that might inspire you to revise your opinion. Dawkins's The Selfish Gene offers a very good explanation of organisms as survival machines from the gene's perspective, though it doesn't address the question of artificial intelligence specifically. Dennett's Darwin's Dangerous Idea does talk a bit about artificial intelligence and consciousness as an algorithmic process. I believe that Dennett's Consciousness Explained goes into further detail, but unfortunately I have not yet read that book (I'm hoping to get to it at some point this year).

I will certainly make it a priority to read Dawkins. Though I doubt that anything would make me accept artificial intelligence as something that is real. A machine may be able to make extremely fast calculations, but that does not make it smart.

Who's this Ian fellow?
 
In his book, Dawkins does not address how consciousness arises from a bunch of genes. How can this ever offer an answer to the original question about the possibility of creating a machine as intelligent as humans?

Machines can only imitate human behaviour; they would always be limited to the designer's programme. They wouldn't be able to think out of the box.

Of course, in Dennett's fairy world this would be possible. As long as you start by assuming that humans' feelings, perceptions and thought can be replicated because they are nothing more than electrical impulses. If this is so, then why is it taking so long to create a conscious machine?
 
And where is my flying car?

Mercutio: did you read all the reviews of that Kurzweil book on that Amazon page (the reviews on the front page, not all the reviews of the book). One of them makes some pretty serious accusations any skeptic should look deeper into.
 
In his book, Dawkins does not address how consciousness arises from a bunch of genes. How can this ever offer an answer to the original question about the possibility of creating a machine as intelligent as humans?

I wasn't offering Dawkins as an answer to the original question. I was offering Dawkins as a response to rharbers's comment that there is no such thing as a living machine. The Selfish Gene certainly addresses that point.

Of course, in Dennett's fairy world this would be possible. As long as you start by assuming that humans' feelings, perceptions and thought can be replicated because they are nothing more than electrical impulses. If this is so, then why is it taking so long to create a conscious machine?

Do you mean the "fairy world" of materialism -- the assumption that all human behavior can ultimately be reduced to interactions between the physical components of a system? You're right, that is a fundamental assumption. If you choose to believe that there is some intangible, immaterial component that quickens our brain matter to produce consciousness, then you have a reason to doubt that the effect of consciousness could ever be replicated in a machine. Personally I'm convinced that there is no such immaterial soul, but a discussion as to that question would be a thread unto itself.
 
Mercutio: did you read all the reviews of that Kurzweil book on that Amazon page (the reviews on the front page, not all the reviews of the book). One of them makes some pretty serious accusations any skeptic should look deeper into.
I did--and I saw other critiques of his ideas on other sites, critiques by people who are in a position to speak from knowledge.

I don't know enough about computers to comment on his predictions there, but my gut feeling is that it would take longer than he thinks simply to convince people not to be afraid of the technology if it ever actually does come to pass.
 
If you choose to believe that there is some intangible, immaterial component that quickens our brain matter to produce consciousness, then you have a reason to doubt that the effect of consciousness could ever be replicated in a machine. Personally I'm convinced that there is no such immaterial soul, but a discussion as to that question would be a thread unto itself.

I do not choose to believe that an immaterial component makes me aware; I don't know what it is. I only know that I am conscious, and no one could ever describe and replicate my subjective experiences, because that's what they are. No one can have access to them except me. And that is the point of this thread, isn't it? A washing machine is already an intelligent machine: it is objective, accurate, predictable, etc.

I find Dennett's attempts to quine qualia incredibly ridiculous.
 
