When will machines be as smart as humans?

I hesitate to complicate this subject further, but there is some speculation that human consciousness is actually an emergent property of the interaction of memes floating around in the hardware of the brain created by the genes (with that process possibly having been influenced by the memes). Susan Blackmore makes an interesting argument for this in The Meme Machine. If this is the case, then creating a computer that replicates human consciousness becomes even trickier. Or not, since memes are ultimately just units of self-replicating information.
I haven't read the book, but is the suggestion that consciousness is not necessarily something that evolved, but something that hijacked existing hardware?
 
Yet ambition and reason can be given to these machines. Very simply, in fact.

Think about this: if every computer on Earth were somehow given the Asimovian Three Laws, wouldn't that give them purpose, ambition, and reason to act?


The laws are (paraphrasing):

1. Do not harm humans or allow actions by others to harm humans.
2. Obey all humans, as long as you do not violate '1'.
3. Protect your existence, unless it violates rule '1' or '2'.

Aside from the obvious paradoxes that would be impossible to program around (what does "existence" mean, exactly?), this gives the machine no motivation whatsoever to better itself or society.

We can make a rule:

4. Improve on original programming

The problem is, it has to be programmed to do this, rather than just given a rule. What exactly does it mean to "improve on original programming" from the perspective of the robot? We will obviously be programming this machine to make "improvements" that we ourselves try to make from our human perspective.
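To make the point concrete: the priority ordering of the laws is trivial to express in code; everything hard hides inside the definitions of the terms. This is a toy sketch only, and the predicate functions (harms_human, disobeys_order, endangers_self) are invented placeholders, not anything Asimov or anyone else has specified:

```python
# Toy sketch of the Three Laws as priority-ordered checks.
# The three predicates below are hypothetical stubs: actually defining
# "harm", "obedience", or "existence" is exactly the unsolved problem
# discussed above.

def harms_human(action):
    # Placeholder: a real version would need a full model of "harm".
    return action.get("harm", False)

def disobeys_order(action):
    # Placeholder: simplifies the Second Law to a single boolean.
    return action.get("disobeys", False)

def endangers_self(action):
    # Placeholder: what counts as the robot's "existence"?
    return action.get("self_risk", False)

def permitted(action):
    """Return True if an action passes all three checks, in priority order."""
    if harms_human(action):
        return False  # First Law always wins
    if disobeys_order(action):
        return False  # Second Law, subordinate to the First
    if endangers_self(action):
        return False  # Third Law, subordinate to both
    return True

print(permitted({"harm": True}))                     # False
print(permitted({"disobeys": True}))                 # False
print(permitted({"self_risk": False}))               # True
```

The control flow is a few lines; all the difficulty is deferred into the placeholder predicates, which is the point being made above about "improve on original programming".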
 
I'm not sure you read the thread so far. I'll repeat: IF we find out how the human brain works and WHY it is conscious, and apply this knowledge to artificial machines such as computers, what's stopping us from building a conscious computer?
It might help to describe how, not why. Conceptually, the only thing I can imagine in that regard is a genetically enhanced cyborg that could take advantage of various sensory enhancements. A computer would only be a simulation unless you were using biological processes to achieve the result, in which case you would have a genetically enhanced clone, so this is circular. If it walks like a duck, quacks like a duck, and craps all over the sidewalk like a duck - that doesn't mean it is a duck in this case. An interesting question is: what exactly is the interaction within biological processes that drives self-effort in everything from ants to humans?
 
A computer would only be a simulation unless you were using biological processes to achieve the result
This is an assumption that you have not justified. What's so special about meat?

An interesting question is: what exactly is the interaction within biological processes that drives self-effort in everything from ants to humans?
For one, apparent self-effort may simply be an emergent property (or illusion) of a sufficiently complex system, which others in this thread have noted several times. In fact, this seems pretty likely, absent some magical handwaving explanation.
Even if some little biological process is a necessary component for thought, it would be duplicated as part of this theoretical model, and therefore the model should have the necessary components for consciousness.

Unless you think there's something special about the meat itself, such that if you do the exact same thing with silicon, it won't work. In that case, you should read this: http://www.eastoftheweb.com/short-stories/UBooks/TheyMade.shtml
 
The laws are (paraphrasing):

1. Do not harm humans or allow actions by others to harm humans.
2. Obey all humans, as long as you do not violate '1'.
3. Protect your existence, unless it violates rule '1' or '2'.

Aside from the obvious paradoxes that would be impossible to program around (what does "existence" mean, exactly?), this gives the machine no motivation whatsoever to better itself or society.

We can make a rule:

4. Improve on original programming

The problem is, it has to be programmed to do this, rather than just given a rule. What exactly does it mean to "improve on original programming" from the perspective of the robot? We will obviously be programming this machine to make "improvements" that we ourselves try to make from our human perspective.

I think you really haven't thought it through, then. I suggest taking a good look at Asimov's various robot short stories, the Robot Novels, and the Foundation series for the implications of the Three Laws.

As for being 'given a rule', that's not an accurate portrayal of how the Three Laws relate to robots. They are hard-wired into the positronic brain.
 
It might help to describe how, not why. Conceptually, the only thing I can imagine in that regard is a genetically enhanced cyborg that could take advantage of various sensory enhancements. A computer would only be a simulation unless you were using biological processes to achieve the result, in which case you would have a genetically enhanced clone, so this is circular. If it walks like a duck, quacks like a duck, and craps all over the sidewalk like a duck - that doesn't mean it is a duck in this case. An interesting question is: what exactly is the interaction within biological processes that drives self-effort in everything from ants to humans?

You're making a rather faulty assumption that biological processes are somehow special. There's nothing special in the biological process - no 'ghosts in the machine' - that can't be reproduced artificially.
 
It is our human condition - limited life span, pain, injury, death, excitement, joy, etc. - that motivates us to use our intelligence. Without these things, the machines would be like the lazy lions at the zoo, just lying around all day without anything really to accomplish.

Do you have any evidence for this? I do tons of things that motivate me intellectually. And what would a machine be if not intellectual?

Computers are already capable of doing more than what we use them for - they just lack ambition and reason to do it. (I'm talking about the PC I'm typing on right now.)

They're not sentient.
 
As said before: the human mind is a biological machine. Once we understand how it works, we could theoretically replicate it, using the same materials or something synthetic. Ergo, a thinking machine, every bit as sentient as you or I.

As you state yourself, it's all theory. What if we finally understand how the mind works and then realize we are unique?
 
As you state yourself, it's all theory. What if we finally understand how the mind works and then realize we are unique?

That doesn't even make sense as an assumption. Every human is self-aware, and I'm sure a great number of animal species are aware of their own existence as well. That's billions and billions of beings, all sentient. There is nothing unique there. It works, therefore there is a process that makes it possible. All we need is to know that process.

You seem to be claiming that there is a ghost in the machine.
 
I haven't read the book, but is the suggestion that consciousness is not necessarily something that evolved, but something that hijacked existing hardware?

Sort of. Her theory is that brains evolved per the usual Darwinian theory of the need for an on-board computer to make executive decisions in real time while pursuing the genes' ultimate goal of reproduction. But, according to Blackmore, something weird happened when an early hominid developed an uncanny knack for imitating behavior: a second replicator, the meme, was born. Blackmore suggests that from that point, memes played a role in driving biological evolution in directions best suited for the reproduction of memes, and speculates that the rapid expansion of the human brain is due largely to the memes' influence. She further suggests that what we experience as consciousness is in fact nothing more than the interaction of a multitude of memes running around in the hardware of our brains. So, in one sense the memes hijacked the existing hardware, but then they further modified it for their own use.

I haven't the expertise to hold much of an opinion regarding the plausibility of this theory, but the book is fascinating nonetheless.
 
That doesn't even make sense as an assumption. Every human is self-aware, and I'm sure a great number of animal species are aware of their own existence as well. That's billions and billions of beings, all sentient. There is nothing unique there. It works, therefore there is a process that makes it possible. All we need is to know that process.

You seem to be claiming that there is a ghost in the machine.

I mean that the human mind is unique and as a species we are unique. For example, there are millions of stars like our sun, with probably millions of planets like our own; that doesn't mean there is life there like ours, though it is possible. It is just as possible that we are alone, and that would make us unique. I don't know what you mean by "Ghost in the Machine", because I don't believe machines will ever be "aware". Animals may have feelings, but I am skeptical when someone says that they are aware in the sense humans are aware of their existence.
 
I think you really haven't thought it through, then. I suggest taking a good look at Asimov's various robot short stories, the Robot Novels, and the Foundation series for the implications of the Three Laws.

I've watched some Star Trek and I know about DATA. :p

As for being 'given a rule', that's not an accurate portrayal of how the Three Laws relate to robots. They are hard-wired into the positronic brain.

I understand what you are getting at, but what I am saying is that abstract thought must be specifically programmed. Whether the rule is hard-wired or not is irrelevant if the robot is not programmed with the meaning of a term such as "existence" or "harm".

In order to program that into a machine, we must have a clear definition ourselves. That definition will come from our perception, influenced by the human condition. In order to program a machine in our perception, it must be able to experience our perception. In order to experience our perception, it needs our biology - or at least the equivalent thereof.

The question is, can we build a machine that is smarter than humans? That depends on what is considered "smarter". Able to complete tasks quicker? Sure. Capable of ambition and motivation? I doubt it, because our knowledge of such abstracts is based solely on our human condition (being confined to our 'meat'), and that is the only way we will be able to program it.
 
Sort of. Her theory is that brains evolved per the usual Darwinian theory of the need for an on-board computer to make executive decisions in real time while pursuing the genes' ultimate goal of reproduction. But, according to Blackmore, something weird happened when an early hominid developed an uncanny knack for imitating behavior: a second replicator, the meme, was born. Blackmore suggests that from that point, memes played a role in driving biological evolution in directions best suited for the reproduction of memes, and speculates that the rapid expansion of the human brain is due largely to the memes' influence. She further suggests that what we experience as consciousness is in fact nothing more than the interaction of a multitude of memes running around in the hardware of our brains. So, in one sense the memes hijacked the existing hardware, but then they further modified it for their own use.

Your meme!

(sorry :p )
 
I mean that the human mind is unique and as a species we are unique. For example, there are millions of stars like our sun, with probably millions of planets like our own; that doesn't mean there is life there like ours, though it is possible. It is just as possible that we are alone, and that would make us unique. I don't know what you mean by "Ghost in the Machine", because I don't believe machines will ever be "aware". Animals may have feelings, but I am skeptical when someone says that they are aware in the sense humans are aware of their existence.

That's a little foolish. It's like saying that, because you can't sense other people's thoughts, they're not aware.

By "ghost in the machine", I mean the "soul". I don't believe in it. Therefore nothing in the human mind is mystical or "special", and it CAN be reproduced. If self-awareness is purely physical, then there's no reason to presume that we can't replicate it.

You seem to be assuming that the human mind is somehow beyond science and therefore supernatural.
 
Neither is your machine if it cannot experience the abstract.

What is that supposed to mean? Since when is comprehension of abstraction a prerequisite for awareness? The only requisite is being able to be aware of your own existence. We simply don't know how that works, exactly, so saying that this and that is or isn't sentient is mere speculation.

We DO know that awareness exists, however. Why would we assume it can't be replicated, since it's so obvious that human reproduction CREATES awareness in what starts off as a single cell?
 
That's a little foolish. It's like saying that, because you can't sense other people's thoughts, they're not aware.

By "ghost in the machine", I mean the "soul". I don't believe in it. Therefore nothing in the human mind is mystical or "special", and it CAN be reproduced. If self-awareness is purely physical, then there's no reason to presume that we can't replicate it.

You seem to be assuming that the human mind is somehow beyond science and therefore supernatural.

I don't mean any of the above. For some reason I can't make myself understood. My fault. I don't know whether there is a soul or not and I don't think the human mind is beyond science. Quite the contrary. Without the human mind there would be no science. By supernatural, if you mean shamanism, I agree. The supernatural should be understood as "more natural".
 
What is that supposed to mean? Since when is comprehension of abstraction a prerequisite for awareness? The only requisite is being able to be aware of your own existence. We simply don't know how that works, exactly, so saying that this and that is or isn't sentient is mere speculation.

We DO know that awareness exists, however. Why would we assume it can't be replicated, since it's so obvious that human reproduction CREATES awareness in what starts off as a single cell?

Our ability to be aware comes only from our five senses and our brain's interpretations thereof. If we eliminate our senses, we eliminate our awareness. It is only through our experiences with these senses that we can define our awareness, so it is only through these senses that we can program awareness.
 
By "ghost in the machine", I mean the "soul". I don't believe in it. Therefore nothing in the human mind is mystical or "special", and it CAN be reproduced. If self-awareness is purely physical, then there's no reason to presume that we can't replicate it.

As pointed out before, we can "replicate" humans, so that's not the problem. One important question Randfan pointed out is whether or not sentience and intelligence are "substrate neutral". Essentially, it's the hardware problem, and that question seems to be often handwaved in the discussion. I mean, take life for example. There shouldn't be anything "magical" about it, but we still can't make a living cell out of purely lifeless chemicals, from scratch, and I'm not willing to make a prediction on when we'll be able to achieve that, if ever. So the same goes for intelligent machines. You can't expect some singularity to magically occur out of playing with the software and the current type of hardware and architecture. Now if there were some important discovery or complete paradigm shift, either in the concept of intelligence/awareness, in our understanding of the human brain, or in computer architecture/hardware, then maybe we could start thinking about "when" we'll see intelligent (non-human) machines.
 
I don't mean any of the above. For some reason I can't make myself understood. My fault. I don't know whether there is a soul or not and I don't think the human mind is beyond science. Quite the contrary. Without the human mind there would be no science. By supernatural, if you mean shamanism, I agree. The supernatural should be understood as "more natural".

If the mind is all physical, then why do you maintain it cannot be replicated with machines?
 
