When will machines be as smart as humans?

Unfortunately, Searle's Chinese Room problem and Dennett's frame problem both suffer from the same fundamental error: assuming that an artificially intelligent system will work the way our computers do today. Yes, if we give a robot instructions to deal with bombs and batteries, and it encounters a chihuahua, that factor is outside its frame of reference. Luckily for us, most robots are programmed to ignore things that are not in their databases, so the robot could still potentially succeed in its task. However, if the chihuahua interferes, the robot is brought to a standstill.

Yet I can't help but notice how directly this mirrors the behavior of infants and young children. Keep them in familiar settings with familiar objects, and they accomplish their tasks fairly well; introduce a new, confounding variable, and they often freeze, or retreat, or lose all track of what they were doing. Of course, the difference there is that the child, even while having to deal with a previously unknown variable, is learning about (reprogramming for) the new variable, while the robots we have today lack this ability.

But that's the key - 'we have today'.

I'm fascinated by the simple insect robots that have been assembled in the last ten years. These machines have no central processor of the kind most computer scientists would recognize; rather, they have numerous processors all working in tandem, and a single 'goal' - usually to reach a marker or light source. There is no program for operating legs, for direction, for orientation; yet within minutes, these simple machines learn how to walk with six legs, and amazingly, in the same patterns as insects walk. They learn how to circumnavigate obstacles effectively. If there's any real flaw in these things, it's that they don't recall that learning later - and something as simple as adding a memory card with some manner of autosave feature might sort that problem out.
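Purely as an illustration of that 'many simple processors, one shared goal' idea, here's a toy sketch in Python - nothing in it is taken from any real robot; the scoring rule, the numbers and the function names are all invented. Each of six leg 'controllers' tweaks only its own timing and keeps whatever change improves the shared score:

import random

# Toy sketch: six independent leg "processors", each owning only a phase
# offset, collectively stumbling onto a coordinated gait. The scoring
# function below is an invented stand-in for "progress toward the light".

NUM_LEGS = 6

def progress(phases):
    score = 0.0
    for i in range(NUM_LEGS):
        for j in range(i + 1, NUM_LEGS):
            # legs of opposite parity "want" to be half a cycle apart (tripod-like gait)
            desired = 0.5 if (i + j) % 2 else 0.0
            diff = abs((phases[i] - phases[j]) % 1.0 - desired)
            score -= min(diff, 1.0 - diff)   # smaller circular distance = better
    return score

phases = [random.random() for _ in range(NUM_LEGS)]
best = progress(phases)

for _ in range(5000):
    leg = random.randrange(NUM_LEGS)                        # one leg acts on its own
    old = phases[leg]
    phases[leg] = (old + random.uniform(-0.05, 0.05)) % 1.0
    new = progress(phases)
    if new >= best:
        best = new                                          # keep changes that serve the shared goal
    else:
        phases[leg] = old                                   # otherwise undo the tweak

print("learned phase offsets:", [round(p, 2) for p in phases])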

Yes, if we build a computer like the ones we have on our desktops, it's never going to be able to learn and grow; but there's no reason why a sufficiently complex arrangement of simple processors, provided with a simple set of instinctual goals, couldn't learn anything and everything that we learn - after all, what is a brain, if not a collection of simple processors with some instinctual goals hard-wired into them?

I'm not a big fan of 'embodiment theory' - largely because nothing I've read about it actually says anything. It's as if those writing about the theory can't find the right terms to make themselves really understood. They skirt the issues, and assume the modern computer system to be the most advanced we'll ever have. That may be what's taught at universities today, but that doesn't make the theory any more or less right than computationalism or any other cognitive theory.

Whichever theory turns out to be right or wrong, one thing is certain: to assume that something is impossible because it cannot be done today is short-sighted and ignorant. This may sound like I'm singing an old, old chord, but it's every bit as true now as at any other time: people used to say the same things about flight, space travel, artificial hearts, and so on.

Artificial intelligence will happen - and, I predict, probably within the next 200 years. We'll be faced with machines that are as smart as - and probably smarter than - we are. It may even be the next stage of evolution. We can't look at our laptops and our RoboSapiens today and say, 'These things will never be smart. They suck.' Because things always change.
 
Yes, if we build a computer like the ones we have on our desktops, it's never going to be able to learn and grow; but there's no reason why a sufficiently complex arrangement of simple processors, provided with a simple set of instinctual goals, couldn't learn anything and everything that we learn - after all, what is a brain, if not a collection of simple processors with some instinctual goals hard-wired into them?
Will they ever be able to have sex and replicate, and enjoy it at the same time? :D
 
That would be fine. Bear in mind that rivers detect rocks and metal detects heat.

My definition of awareness is the ability to process information. A river does not process information. Metal does not process information. A thermostat regulates temperature by gathering information, processing it, and making changes accordingly. The processing of information leads to an action.

In humans, thought is an action. Making a choice is an action. They are both a processing of information that ultimately leads to a change.
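
To make the thermostat example concrete, here's a minimal sketch in Python (the names, threshold and numbers are made up for illustration):

def thermostat_step(current_temp, target_temp, heater_on, tolerance=0.5):
    # Gather information (the temperature reading), process it against the
    # target, and return an action: the new state of the heater.
    if current_temp < target_temp - tolerance:
        return True        # too cold -> switch the heater on
    if current_temp > target_temp + tolerance:
        return False       # too warm -> switch the heater off
    return heater_on       # within tolerance -> leave things as they are

# The processing of a reading leads to an action (a change of state):
print(thermostat_step(current_temp=17.0, target_temp=20.0, heater_on=False))   # True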
 
Will they ever be able to have sex and replicate, and enjoy it at the same time? :D

Why not? It's an old idea to sci-fi buffs.

As for 'enjoyment' - I think we should all keep in mind that 'pleasure' exists solely as a means of motivation for life forms to behave in ways that propagate species and ensure survival. Think about it - if we felt pain every time we ate food, or felt pleasure when we broke our bones - how long would we survive? Hence - pleasure, as we have it today.

For a machine, pleasure might simply be the fulfillment of its functions. If one of those functions is propagation of the species, then the machine will find this pleasurable.
 
Yes, but wouldn't it be much easier to knock up the girl next door? How many trillions and trillions of dollars of research would we have to put into that?

Yeah, but you have to think ahead. Do you watch the show "Futurama"? Did you see the one with Lucy Liu and the Marilyn Monrobot? Think of the future responsibility we have towards our children!
 
I think the fact that insects show displays of self-preservation also shows a type of self-awareness, in that they are capable, within the limitations of their senses, of interacting with their environment; and in the case of social insects, preservation of the colony, I think, shows self-awareness, if not civil-awareness. It is just as real as it is in humans - however, the NES is just as real as a GameCube. Obviously, one is of a greater capacity.
My suspicion is that the awareness you describe here is closer to the chemical "awareness" that lets a cell in a gastrula know where it is in relation to its process, both in space and time - i.e. largely chemical, largely programmed, but with some capacity to learn. I don't suppose for an instant that an ant is aware of the colony as a concept or as an entity. (I wonder what humans are unaware of?)
That computer was only one example to show that it can be self-referential.
The "self" is the program in its entirety.
Not the computer? So by analogy is it humans who are aware, human minds which are aware or human memes which are aware?
I don't accept qualia; there's no proof. It can be explained more logically by saying that our self-awareness is completely dependent on our five senses and how we use them. The very definition of self-awareness attests to that.
I need to check the meaning of the word. I sense that (as with several others) we are using it differently. I mean simply the experiences of being conscious - the interaction of sense and internal monitoring which tells us we exist and the consistency which tells us we are awake. Clearly, there is proof for that, so you must be meaning something different. So scratch that paragraph.

Are you more human than Helen Keller, who wouldn't have been able to see the computer screen or hear music? Would you say there are varying degrees of awareness, using the example of comparing yourself to what Helen Keller was capable of being aware of?
You use "human" here as an adjective. That calls for a (probably meaningless) value judgement. The lady died in 1968, which makes me more human for the moment.;)

Yes, awareness is a matter of degree, but I think it is a discontinuous series. Humans are astonishingly like chimpanzees physically and genetically and yet only a fool of either species would confuse the two physically or mentally. The difference is huge, though it may (must?) arise from a tiny difference in genetics.


When it started mouthing off and getting smart. :P
The " Sirius Cybernetics" logo is always a bad sign.:eek:
I should know better than to have a glass of wine while scanning a mound of negatives and taking part in a discussion of this nature.:o
ETA: The name "cpolk" has been bothering me for two hours. I finally realised I used to work (about fifteen years ago) with a fellow by the name of Clyde Polk. But would a machine feel this partial memory failure as a sensation of mild unease? I wonder.

http://www.sony.net/SonyInfo/QUALIA/word/index.html
There's something horribly apt about this being the first thing Firefox found when I entered "Qualia".
 
My suspicion is that the awareness you describe here is closer to the chemical "awareness" that lets a cell in a gastrula know where it is in relation to its process, both in space and time - i.e. largely chemical, largely programmed, but with some capacity to learn. I don't suppose for an instant that an ant is aware of the colony as a concept or as an entity. (I wonder what humans are unaware of?)

Again, I am not speaking of the term "colony" in the human sense of civilization. They colonized according to chemicals, but the point is, they are still social insects and they still colonize, and that shouldn't be dismissed as not being aware of something, even if it is merely chemical.

I need to check the meaning of the word. I sense that (as with several others) we are using it differently. I mean simply the experiences of being conscious - the interaction of sense and internal monitoring which tells us we exist and the consistency which tells us we are awake. Clearly, there is proof for that, so you must be meaning something different. So scratch that paragraph.

The last time I had a conversation with someone concerning qualia, they asked me to define experience in units. I had no idea what they were talking about. Qualia, from most people's stance, are experiences that are impossible to describe and can only be experienced; for instance, describing the taste of an apple to someone who has never eaten an apple. The counter-claim is that just because we lack the words in our language, and the necessity to create such words, does not mean it is impossible - it's just easier to say, "Here, taste this," than to add to our vocabulary.

You use "human" here as an adjective. That calls for a (probably meaningless) value judgement. The lady died in 1968, which makes me more human for the moment.;)

I know, you got me. :D I laughed hard when I read this.

Yes, awareness is a matter of degree, but I think it is a discontinuous series. Humans are astonishingly like chimpanzees physically and genetically and yet only a fool of either species would confuse the two physically or mentally. The difference is huge, though it may (must?) arise from a tiny difference in genetics.

I absolutely agree that there are major differences, even with close DNA, and obviously, humans are far above any other life on this planet. If we accept that there are varying degrees of awareness, are we stopping self-awareness at the human level? A lot of people argue that animals are self-aware, inasmuch as their senses will allow them to be.
 
But you agree that the comparison to the color "red" is the same. Do you agree that there are varying levels of awareness?

Varying levels of awareness for whom or what? I couldn't disagree with this. There are different levels of awareness between insects and dogs, between babies and adults, between sleeping people and people who are awake, between people on stimulants and people on downers....
 
Hello again ZD

I'm not a big fan of 'embodiment theory' - largely because nothing I've read about it actually says anything. It's as if those writing about the theory can't find the right terms to make themselves really understood. They skirt the issues, and assume the modern computer system to be the most advanced we'll ever have.

I'm not sure what you've read about it....

Do you know of the example of a Watt governor?

http://en.wikipedia.org/wiki/Centrifugal_governor

Here you have what looks like intelligent behaviour, yet it involves no computation.
 
In the sense of consciousness we are speaking of, it is not the collection of information that is awareness, but the interpretation of that information. This interpretation is what we refer to as abstract. Because we only know the abstracts we ourselves are capable of, we can only program those abstracts.

You're losing me here. I thought consciousness meant self-aware. Where does abstract come into play?
 
The NPCs work on a routine loop - once their routine loop has ended, they will either remain stationary or repeat their loop, depending on their programming. In this example, the character is not a separate entity - it is part of a larger body of code that is programmed to appear to be a 'character' from our perspective as the game-player.

I'm sure you realise that humans are also pre-programmed in a completely deterministic universe, making your argument somewhat vacuous.

I won't talk about "free will", but I will talk about the urge to survive. That line of code, when I eliminate it from the game, does not affect the rest of the game in any abstract way whatsoever. The machine does not get pissed off and decide to cheat; it does not understand what it is to lose, other than in a completely mathematical sense of keeping a score.

So now emotion is the basis of consciousness? Come on. Beetles have emotion, but I wouldn't say they are self-aware.

cpolk said:
Asking Data, in this example, "How can you handle this particular situation so that no one is injured?" is akin to, "What is 2+2?". Much more complicated and involving many more factors, but it still comes down to mathematical logic.

Same with us, but with more variables. Our response might not be logical, but the factors that come into play are.
 
Thus far, computers like the one I'm using now can only do what it is explicitly programmed to do. We, too, have explicit functions - we don't have a choice in our heartbeat, the blood flowing through our veins, the hair growing on our bodies, skin cells replicating, etc. We can choose to use our outside environment to alter those things, but we have to have the motivation to do so.

Still, how do we even know that the computers we are using AREN'T self-aware? I mean, I don't think they are, but hey, since we don't know how consciousness works anyway, we can't look at someone and say, "This guy's self-aware!" So how could we know about, say, Deep Blue? Sure, it's programmed, but does that mean it's not aware?
 
1. Isn't that the definition of self-awareness? Why have two terms with the exact same definition?

2. Is someone who has no senses (no taste, touch, hearing, sight, smell) conscious? Even by your definition? They would be considered in a coma. If consciousness were defined as sense "+", then taking away the senses should still leave the "+". By your definition, it doesn't; this person is not conscious in any sense - literally.

By using my definition, it still does leave the "+", as long as the brain is still maintaining bodily functions of internal organs. That person cannot be self-aware, but their body is at least aware of what is happening internally.

It is no different than the computer running through lines of programming; if the body is aware of what is happening internally during this hypothetical coma, then why should the same definition not be applied to a program being aware of what is happening inside of its sequences?

That is why I use a broader definition of aware, and leave the more specific definition to self-aware.
I don't have a problem using the term "self-aware". I see little point in debating words. Your use of the word "aware" is so broad as to have little meaning. A raindrop is aware of when it hits the ground. If you want to better facilitate communication with the term "self-aware", I won't argue.
 
Hello again ZD



I'm not sure what you've read about it....

Do you know of the example of a Watt governor?

http://en.wikipedia.org/wiki/Centrifugal_governor

Here you have what looks like intelligent behaviour, yet it involves no computation.

How do you get intelligent behavior out of that? I see simple physics action-reactions.

Do you consider your engine to be intelligent, too?

Not at all impressed so far.
 
I think the notion of 'awareness' has to somehow involve some sort of feedback loop - like a program in a computer that monitors and regulates other programs in the computer, as well as itself. Simple perception seems lacking from an 'awareness' POV - there has to be some aspect that acknowledges the perceived event for awareness to occur.

Raindrops and toilet cisterns lack awareness. I think we can say many digital systems are aware, but not necessarily self-aware. My car is aware of when the door is open or when the fuel is too low, but isn't self-aware.
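
As a rough sketch of that distinction (function names and thresholds invented), the little monitor below perceives and acknowledges a couple of states of the car - 'aware' in the weak sense - but nothing in it monitors the monitor itself:

def car_monitor(door_open, fuel_litres, low_fuel_threshold=5.0):
    # Perceive a few facts about the car and acknowledge them with warnings.
    # Awareness of the car, not of the monitoring program itself.
    warnings = []
    if door_open:
        warnings.append("door open")
    if fuel_litres < low_fuel_threshold:
        warnings.append("fuel low")
    return warnings

print(car_monitor(door_open=True, fuel_litres=3.2))   # ['door open', 'fuel low']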
 
Humans do not work like this. Nor does any organic entity.

In embryological development a fertilised cell splits and goes on splitting. The zygote already has a vast amount of stored information, which will only operate correctly if it finds itself in the correct environment, or one very like it.

But now you ARE describing a sort of programming.

Can this physical structure be synthesised? Probably. But it seems improbable to me that it can be synthesised by any manufacturing process yet in existence except the one you suggested yourself early in this thread. Brains must be grown, not built.


I don't see how the process by which something is built makes a difference: if the architecture and functioning are similar, it SHOULD work in a similar way.
 
There is a fundamental difference between the way you teach a child and the way you can teach a computer. With a child, you point to a red tractor and say "red tractor". Eventually the child associates experiences of red and of tractors with the words "red" and "tractor". The only way you could do this with a machine would be if the machine were conscious.

Not sure I agree. If a non-sentient computer is programmed to associate single-word utterances with people pointing to something that it can see with a camera, then it will be able to "learn" this information, even if it's not aware.
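
A toy version of that kind of unaware 'learning', assuming the camera input has already been boiled down to a feature label (all the names here are invented for illustration):

from collections import Counter, defaultdict

# Count how often each spoken word co-occurs with each visual feature.
associations = defaultdict(Counter)

def observe(word, visual_feature):
    associations[word][visual_feature] += 1     # "learning" as mere bookkeeping

def best_guess(word):
    counts = associations[word]
    return counts.most_common(1)[0][0] if counts else None

observe("red", "red-blob")
observe("tractor", "tractor-shape")
observe("red", "red-blob")
print(best_guess("red"))        # red-blob
print(best_guess("tractor"))    # tractor-shape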
 
