When will machines be as smart as humans?

On that note, neither are we! We do not just go out and learn anything, either; different factors in our lives motivate us to learn. Programming in that motivation is going to be the challenge, I believe.
I don't think motivation is the answer. Humans aren't simply motivated. Important but it doesn't answer the question.

Humans are capable of applying learning from one discipline to another. Strategies learned in chess can be applied to Go or checkers or other games. Humans can apply things learned from opening a door with a knob to a door with a latch. Computers, as yet, don't do that.

Don't get me wrong. This is due to our programming. And someday computers may very well have this selfsame programming and will be able to apply disparately learned concepts from one area to another.

They don't now. Humans do.

What does this mean? Nothing beyond the fact that we as humans have yet to understand how exactly humans think, learn and solve problems.
 
But you miss the point. The supercomputer had to be specifically programmed to stack the blocks.
No. It was a general-purpose learning program given an environment containing blocks and a means by which to manipulate them.
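
For illustration only (this is a generic sketch, not the actual program being discussed): a general-purpose learner can be written so that nothing about blocks appears in the learning rule; the environment alone defines the task and the reward.

```python
import random
from collections import defaultdict

class QLearner:
    """Generic tabular learner: knows nothing about blocks, doors, or games.
    Only the environment it is plugged into defines what the task means."""
    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)          # (state, action) value estimates
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        if random.random() < self.epsilon:   # explore occasionally
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def learn(self, state, action, reward, next_state):
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])

# Usage pattern (the environment object here is hypothetical):
#   s = env.observe(); a = agent.act(s)
#   reward, s_next = env.step(a); agent.learn(s, a, reward, s_next)
```

The same loop could drive a blocks world, a maze, or a simple game without changing the learner itself.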

The problem is: to what degree can computers learn? To date they can only learn what they are programmed to learn.
To exactly the same extent that we can only learn what we are programmed to learn, yes.
 
You are talking in the abstract. "Informational" is not even defined.
Wrong again. It is quite precisely defined in mathematics and physics.
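
Assuming the reference is to Shannon's information theory (the usual precise definition in mathematics and physics), the information content of a source is given by:

```latex
% Shannon entropy: average information per symbol from a source X,
% where p(x_i) is the probability of symbol x_i.
H(X) = -\sum_{i} p(x_i) \log_2 p(x_i) \quad \text{(bits per symbol)}
```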

Does the program comprehend the program?
It can do.

Does a book understand the story?
Why do you even ask that question?

This really tells us nothing. Because one identifiable aspect of consciousness is information, you define it as "informational", whatever the hell that means.
Of or pertaining to information.

I don't know what consciousness is. I can't argue from ignorance.
Yet you seem to be doing exactly that.

I only know that defining consciousness as "informational" tells us nothing.
No.

Which again tells us nothing. I get your debating style.
I have explained precisely what the problem is with the original question. Cpolk, who asked the question, understood this immediately. If you don't, the problem is at your end.

I consider animals and humans as machines also. I don't think you understand the question. You can't even define the premise you object to. Saying you object to the question doesn't delineate the premise. The question wasn't composed of a single premise nor does the question hinge only on this premise AIU.
Then your understanding is lacking.

The question asked if a machine stopped being a machine when it attained certain human attributes. I pointed out that I consider humans to be machines, so the question, to me, rests on a false premise. This has already been explained to you.
 
So.

If there are some 10 billion connections in the human brain...I'm not electronically savvy enough to know if there is an electronic equivalent of a neuron, maybe a transistor? Processor? But anyway, the point is, would it matter if these things were meat or metal? If we strung enough of them together and kept feeding instructions into it, wouldn't it be possible to gain a form of consciousness this way? A 'ghost' in the machine, if you will?


Am I making any sense? The idea is in my head, but I'm having trouble expressing it. Suppose you hooked 10 billion processors that mimicked neurons together in such a way that new paths could be made to increase efficiency....

Well, what else would you get but a form of consciousness? A 'ghost'?
 
Inasmuch as a river running downstream is aware of its course. Yes, I agree, but this is a meaningless distinction.
Wrong again. Programs can make decisions based on their own operation, as well as on user input. In fact, this is extremely common.
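
As a trivial, made-up example of that (not from any particular program discussed here): a routine that changes its behaviour based on observations of its own operation rather than on anything the user typed.

```python
import time

def fetch_with_backoff(fetch, retries=5):
    """Hypothetical helper: the program watches how its own attempts go
    (failures and how long they took) and adjusts its next decision."""
    delay = 0.1
    for _ in range(retries):
        start = time.monotonic()
        try:
            return fetch()
        except IOError:
            elapsed = time.monotonic() - start
            # Decision based on the program's own operation:
            # slow failures suggest congestion, so back off more aggressively.
            delay *= 4 if elapsed > 1.0 else 2
            time.sleep(delay)
    raise RuntimeError("gave up after observing repeated failures")
```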

A practical measure of consciousness, but this tells us nothing of consciousness or what it means to process information. A computer processes information; is it "conscious"?
Tell me how you define consciousness, and I will give you a precise answer. If you can't define consciousness, then what are you asking the question for?

As Daniel Dennett defines consciousness, any information processing system, and that includes something as simple as a thermostat, is conscious. Other definitions will say that a simple thermostat is not conscious, but more complex industrial control systems are.
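
To make that concrete, here is roughly all a thermostat does, written as a loop (a generic sketch; read_temp and heater are placeholder functions). Whether this tiny amount of information processing counts as "conscious" is exactly what hangs on the definition chosen.

```python
def thermostat(read_temp, heater, setpoint=20.0, hysteresis=0.5):
    """Minimal information-processing system: one input (temperature),
    one bit of internal state (heating or not), one output (the switch)."""
    heating = False
    while True:
        temp = read_temp()                          # take in information
        if heating and temp > setpoint + hysteresis:
            heating = False                         # decide from input + state
        elif not heating and temp < setpoint - hysteresis:
            heating = True
        heater(heating)                             # act on the decision
```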

My problem is with basing premises and debate solely on semantics, and with accepting or rejecting concepts based simply on the definition of a word.
What other possible basis is there?

In any event, I'm trying to understand consciousness. I know that only biological machines currently possess traits that you and I might associate with consciousness and/or sentience. I also find it rather arrogant to simply declare that consciousness is "informational" and there is no biological component to it.
So - show us the biological component. This isn't a premise, it's a conclusion. No-one has ever found any component of consciousness that is other than informational, except when applying definitions of consciousness that make no sense. Since it's a conclusion from evidence, it can be shown to be wrong by contradictory evidence.
 
I appreciate the smiley, but saying that consciousness is simply abstract tells us nothing. We can consider consciousness abstractly, but the process is quite real.

By acknowledging that it is abstract, we acknowledge that we need to agree on a specific definition simply to have a conversation. Just as if I were to say, "Help me find a beautiful house," I would first have to define "beauty".

The consciousness of a cockroach is just as real as the consciousness of a human, but it is different. The awareness of someone who is deaf, blind, and dumb is just as real as anyone else's, but it is different.

Human consciousness in particular is an abstraction in that it is based on the condition of being human (our five senses and bodily capacity), so when you consider consciousness as a whole, terms can vary greatly. So in order to have a discussion, we need to agree on a definition of consciousness that will be used in the discussion.


Inasmuch as a river running downstream is aware of its course. Yes, I agree, but this is a meaningless distinction.

A river is not aware of its course, because there is no mechanism spurred by a processing of information. In order for it to be aware, there has to be a processing of information. How is this a meaningless distinction?

A practical measure of consciousness, but this tells us nothing of consciousness or what it means to process information. A computer processes information; is it "conscious"?

It is aware - it is conscious - of whether or not the conditions are met to run its next sequence. It is not self-aware (self-conscious). It is only aware of the order of sequence and conditions needed to move from one sequence to another. That is all it is programmed to be aware of.

I'm sorry, but this comes off as patronizing, though I suspect you don't intend to be. I have no problem with reality or the lack of it. I'm not invested in it. I'm happy to let the chips fall wherever they may. Self or no self, free will or no free will, homunculus or illusion, I honestly don't care. I understand your point that definitions won't change reality, but I actually knew this.

I apologize, I did not mean to patronize. Since definitions do not change the reality, then we should be able to have a discussion using any definition of any term, as long as it does not disagree with the reality itself.

My problem is with basing premises and debate solely on semantics, and with accepting or rejecting concepts based simply on the definition of a word.

It only matters when the definition that is asked to be agreed upon does not match reality. To Pixy, adding a condition such as, "Assume the human body is not a biological machine" is like adding, "Assume this ball that is perfectly round is not shaped like a sphere".


I don't know what consciousness is. I know what my experiences are and I can only assume that others share similar experiences including emotion and what I sense as self awareness. I'm not a solipsist but I accept that I could be the only conscious entity. I see no practical benefit to solipsism.

If you don't know what it is, then how can you discuss whether or not it can be programmed into a machine? Again, I don't mean to sound patronizing, but this is where we need definitions. Unless you can offer one of your own, then you will need to accept ours.

In any event, I'm trying to understand consciousness. I know that only biological machines currently possess traits that you and I might associate with consciousness and/or sentience. I also find it rather arrogant to simply declare that consciousness is "informational" and there is no biological component to it.

Consciousness is nothing - it's the letter "C", the letter "O", the letter "N", etc. It's only a word. Its definition is however we use it. Socially, its definition is whatever we agree on it being. Since there isn't even a proper scientific definition, that tells us that the majority of the populace has not come to an agreement. So, in order to have a conversation, we must come to 'some' agreement of the definition, even if the life of that definition is only for the span of the one conversation.

Since it doesn't change reality, the definition need only not conflict with reality. As you can see, this is very much about semantics.
 
If there are some 10 billion connections in the human brain...
More like a trillion, I think.

I'm not electronically savvy enough to know if there is an electronic equivalent of a neuron, maybe a transistor? Processor?
Let's call it a "logic element". It's a common term in general-purpose electronic devices like FPGAs. It's something that has inputs, outputs, logic and memory - without precisely defining how much of each.
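
Something like this, with all the sizes deliberately left open (the example below is arbitrary, just to show the shape of the idea):

```python
class LogicElement:
    """Inputs, outputs, some logic, and some memory - without saying how
    much of each. The 'logic' is any function of the inputs and the state."""
    def __init__(self, logic_fn, state=0):
        self.logic_fn = logic_fn    # combinational part
        self.state = state          # memory part (a single register here)

    def tick(self, *inputs):
        output, self.state = self.logic_fn(self.state, *inputs)
        return output

# e.g. a one-bit accumulator built from the same template:
acc = LogicElement(lambda s, x: ((s + x) % 2, (s + x) % 2))
```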

But anyway, the point is, would it matter if these things were meat or metal? If we strung enough of them together and kept feeding instructions into it, wouldn't it be possible to gain a form of consciousness this way? A 'ghost' in the machine, if you will?
Yes, as far as we can tell.

Take a neuron, for example. It is a signal processor. It takes inputs, does things with them, and produces outputs. We can fairly easily build an electronic replacement for it that works just the same. It would (at the moment) be tricky to build one the same size, but we know it is physically possible. (Transistors have been built in the lab that are just 5 nanometres wide, but we can't mass-produce them yet.)
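
A crude software stand-in for that idea (a textbook leaky integrate-and-fire style unit, heavily simplified, and not a claim about everything a real neuron does):

```python
def make_neuron(weights, threshold):
    """Toy signal processor: weighted inputs accumulate in a 'membrane
    potential'; when it crosses the threshold the unit fires and resets."""
    potential = 0.0
    def step(inputs):
        nonlocal potential
        potential += sum(w * x for w, x in zip(weights, inputs))
        if potential >= threshold:
            potential = 0.0
            return 1          # output spike
        potential *= 0.9      # leak a little between steps
        return 0
    return step

# Three inputs; fires when enough of them are active at the same time.
neuron = make_neuron(weights=[0.5, 0.5, 0.5], threshold=1.0)
```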

So, we replace one neuron with an electronic duplicate. The brain keeps on ticking exactly as before, with our electronic neuron firing in place of its biological predecessor. Then we replace another neuron, and another, and...

Well, what else would you get but a form of consciousness? A 'ghost'?
You'd get the real thing. Not just consciousness, but if you wired up this electronic brain the same way as a human brain, human consciousness.
 
I don't think motivation is the answer. Humans aren't simply motivated. Important but it doesn't answer the question.
Do you speak Chinese? I don't. I have a minimal understanding of Spanish, but during the process of learning the language, I lost motivation and stopped.

Motivation is what causes us to learn or do anything.

Thus far, computers like the one I'm using now can only do what they are explicitly programmed to do. We, too, have explicit functions - we don't have a choice in our heartbeat, the blood flowing through our veins, the hair growing on our bodies, skin cells replicating, etc. We can choose to use our outside environment to alter those things, but we have to have the motivation to do so.
 
You'd get the real thing. Not just consciousness, but if you wired up this electronic brain the same way as a human brain, human consciousness.
However, only if it is given sensory perception peripherals that function similar to a human's. Otherwise, the consciousness would not be human, as human consciousness is created by the combination of our five senses. Although I doubt we can program abstracts that we can't experience, if we can, it could very well be "smarter" than humans, in every sense of the word.
 
When will machines be as smart as humans?

When they can solve the frame problem:

What makes you think that humans can solve the frame problem? At least in the sense that it was presented in your link, the computational and metaphysical issues.
 
More like a trillion, I think.


Let's call it a "logic element". It's a common term in general-purpose electronic devices like FPGAs. It's something that has inputs, outputs, logic and memory - without precisely defining how much of each.


Yes, as far as we can tell.

Take a neuron, for example. It is a signal processor. It takes inputs, does things with them, and produces outputs. We can fairly easily build an electronic replacement for it that works just the same. It would (at the moment) be tricky to build one the same size, but we know it is physically possible. (Transistors have been built in the lab that are just 5 nanometres wide, but we can't mass-produce them yet.)

So, we replace one neuron with an electronic duplicate. The brain keeps on ticking exactly as before, with our electronic neuron firing in place of its biological predecessor. Then we replace another neuron, and another, and...


You'd get the real thing. Not just consciousness, but if you wired up this electronic brain the same way as a human brain, human consciousness.


Hmmmmm.

Well, I'm glad you could make sense of my ramble. This is my suspicion, anyway: that the above scenario will happen one day. When?

Hmmmmmm.

That's a tough call. Wasn't it said that our technological complexity will double every ten years or so? If that is true, then perhaps in our lifetime, maybe in our kids'.

*Ponders*

I hope it is within mine! That'd be awesome, eh?
 
When will machines be as smart as humans?

When they can solve the frame problem:

http://plato.stanford.edu/entries/frame-problem/

i.e. never. :)
Humans can't solve the frame problem either. We just ignore it.

This illustrates, again, the problem of letting philosophers talk about the mind. They work by logic, so they think that minds work by logic. This is simply not the case. If it were, philosophers wouldn't get things wrong all the time.
 
However, only if it is given sensory perception peripherals that function similar to a human's. Otherwise, the consciousness would not be human, as human consciousness is created by the combination of our five senses. Although I doubt we can program abstracts that we can't experience, if we can, it could very well be "smarter" than humans, in every sense of the word.
Yes - ish. :) If it modelled an established adult human consciousness, then it would suffer in the same way as a sensory-deprived adult would. If it modelled a newborn's consciousness, then it would not develop remotely normally, or possibly not at all.
 
Humans can't solve the frame problem either. We just ignore it.

This illustrates, again, the problem of letting philosophers talk about the mind. They work by logic, so they think that minds work by logic. This is simply not the case. If it were, philosophers wouldn't get things wrong all the time.

We don't ignore it, we just simplify it and don't overcomplicate it like the scientists and philosophers do.

They say: Not only must you program everything that does happen during an action, but you must also program everything which does not happen during an action.

I see that as overcomplication to avoid admitting laziness. Instead of programming proper sensory perception, you have to compensate with strenuous extra processing in the programming. In other words, since the "eyes" don't do as much work, the "brain" has to compensate.

Whereas they look at it as though the robot will have to be programmed (they claim) to recognize that painting an object produces a change of color (what does happen) but not of location (what does not happen).

We humans look at it like this: the box is in the same location until it is moved.

Next: the box is not changing color - I am putting paint on the box. When the painting task is complete, I will describe the box as being the new color.

Not so complicated. If we program as we perceive, rather than how we envision a robotic form should perceive (based on what perception, I'd like to know), the answers become self-explanatory.
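
That "it stays the same until something changes it" rule is easy to write down. In the sketch below (an invented toy representation, not anyone's actual robot), the program copies the old world state forward and overwrites only what the action explicitly touches, so nothing ever has to be deduced about what an action does not do:

```python
def apply_action(state, effects):
    """Persistence by default: anything not named in `effects` is assumed
    unchanged, so the non-effects of an action never need to be computed."""
    new_state = dict(state)      # carry the whole world forward as-is...
    new_state.update(effects)    # ...then overwrite only what actually changed
    return new_state

world = {"box_location": "table", "box_colour": "brown"}
world = apply_action(world, {"box_colour": "red"})    # painting changes colour only
assert world["box_location"] == "table"               # location persists for free
```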

We spend the first.. what, three years?.. of our lives getting past the frame problem. It'll probably take a machine a bit longer, until technology catches up to the biology.
 
We spend the first.. what, three years?.. of our lives getting past the frame problem. It'll probably take a machine a bit longer, until technology catches up to the biology.
Have you seen the videos from the Brain, Mind and Consciousness conference? There's an excellent talk on this subject, with a study on how young children form causal inferences. The point - as you said - being that we work by inductive inferences, not by exhaustive deductive proofs. Which was what I meant by saying that we ignore the frame problem. We know that moving things doesn't change their colour - but it could be wrong, in which case we act surprised and form a new inference.
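
That "assume it until you're surprised" pattern can be written as one small belief update (a toy Bayesian sketch, nothing to do with the actual study in the talk):

```python
def bayes_update(prior, p_obs_if_true, p_obs_if_false):
    """One step of Bayes' rule: how much a single observation shifts belief
    in an inference such as 'moving things never changes their colour'."""
    numerator = p_obs_if_true * prior
    return numerator / (numerator + p_obs_if_false * (1 - prior))

belief = 0.99   # strong prior: moving something doesn't recolour it
# A surprising observation: the moved object came back a different colour.
belief = bayes_update(belief, p_obs_if_true=0.01, p_obs_if_false=0.9)
print(round(belief, 2))   # the old inference is revised sharply downward
```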
 
Do you say "never" as in, the only reason humans can make a choice is because we simply choose a way to do it, regardless as to if there is an even better way to do it? If so, I agree with that. If we spent our time analyzing every little detail, we'd never get anything accomplished.

Something like that. Except that in Dennett's example of the robot faced with the novel task of defusing a bomb, the robot never does anything. It can't act at all because it needs to keep looking outside its frame of reference to be able to decide what to do. Humans do not seem to suffer from this problem. It's not just that the robot might not carry out the best action, it's that it would never be able to act at all.

I guess it's like, if we have an idea that might work, we don't always wait until our next idea comes to see if it's better. A computer will wait, though, until every calculation is solved before it'll make an action - but by that time, the situation may have changed, and the computations will have to start all over.

So we'll need a few "illogic" mechanisms as well.
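
One common way to fake that "illogic" (a generic anytime-decision sketch, not any particular system): give the deliberation a time budget and commit to the best option found when the clock runs out.

```python
import time

def decide_anytime(candidates, score, budget_seconds=0.05):
    """Evaluate options until the deadline, then act on the best one seen so
    far instead of waiting for every calculation to finish."""
    deadline = time.monotonic() + budget_seconds
    best, best_score = None, float("-inf")
    for option in candidates:
        if time.monotonic() > deadline:
            break                       # stop thinking, start acting
        s = score(option)
        if s > best_score:
            best, best_score = option, s
    return best
```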

In other words we need to transcend computation?
 
What makes you think that humans can solve the frame problem? At least in the sense that it was presented in your link, the computational and metaphysical issues.

It's not that we can solve it. We just don't seem to suffer from it. The argument goes that the robot is stuck in a "microworld" and can only make decisions based upon information in its representational microworld, but humans don't live in a microworld. We don't deal with a representation of the room and the bomb because we are presented directly with a room and a bomb. The machine is cut off from reality in a way that humans aren't - at least that would be my explanation.
 
Something like that. Except that in Dennett's example of the robot faced with the novel task of defusing a bomb, the robot never does anything. It can't act at all because it needs to keep looking outside its frame of reference to be able to decide what to do. Humans do not seem to suffer from this problem. It's not just that the robot might not carry out the best action, it's that it would never be able to act at all.

A baby human wouldn't do anything, either. A human has to 'learn', so maybe in the case of the robot, we are asking too much too soon?

In other words we need to transcend computation?

Thought is computational.
 
It's not that we can solve it. We just don't seem to suffer from it. The argument goes that the robot is stuck in a "microworld" and can only make decisions based upon information in its representational microworld, but humans don't live in a microworld. We don't deal with a representation of the room and the bomb because we are presented directly with a room and a bomb. The machine is cut off from reality in a way that humans aren't - at least that would be my explanation.

Is the problem, then, that the computer simply has too little information to do what we think it should do? Are we expecting machines to behave in a human-like manner without having the same access to information regarding the outside world (senses) as humans?

I think we do suffer from it, but get over it at an early age. In some respects, we don't get over it - we have an existential awareness of time (in that our existence is confined by it), so we make choices without analyzing all of the available information. Or, rather, as we gain experience we set protocols that allow us to determine what information is relevant to a particular situation and what is not.
 
Is the problem, then, that the computer simply has too little information to do what we think it should do?

Not quite. The problem is that in some situations the computer would not be able to come to a decision unless it was supplied with a complete model of the whole universe. It's not so much that its frame isn't big enough to make the decision, but that in some cases the frame can never be big enough.

http://www.leaderu.com/orgs/arn/odesign/od182/blue182.htm

Philosopher and AI researcher Daniel C. Dennett describes the Frame Problem as how to get a computer to look before it leaps, or, better, to think before it leaps. Ask a computer to perform a task outside of a clearly defined domain (like chess), and one will soon be stopped cold by the Frame Problem. Dennett tells an amusing anecdote of a robot, R1D1, whose task it is to recover its spare battery from a room where there is a time bomb set to detonate soon. R1D1, designed by experts in AI to be an intelligent system, always considers the implications of its actions. This is a great improvement from R1, who, unfortunately, did not consider all the implications of pulling out the battery with the time bomb strapped to it, and met an untimely demise. R1D1 is much improved; a crowning achievement for AI. So (as the story goes) R1D1 must rescue its battery from the time bomb, and, like R1, hits on the command PULLOUT (WAGON,ROOM). Only this time R1D1's superior program begins to consider the implications of such an action. Dennett tells the story:

It had just finished deducing that pulling the wagon out of the room would not change the color of the room's walls, and was embarking on a proof of the further implication that pulling the wagon out would cause its wheels to turn more revolutions than there were wheels on the wagon -- when the bomb exploded.

R1D1, like all computers, is a victim of the Frame Problem. The Frame Problem arises because most tasks deemed intelligent require intuitive, contextually-based knowledge of a situation that cannot be pre-programmed because the possible scenarios arising from them are effectively infinite. A computer programmed to, say, order a sandwich at a restaurant like a person performs flawlessly until the waiter asks a question that presupposes knowledge of things on a broader scale. Suppose the waiter asks the program designed to order food, "How is the weather today?" Since the computer relies on a specified list of responses, any program lacking explicit responses about the weather will fail. (The computer, of course, could just respond "fine", but it would have to know to say "fine" and not say, "I don't know.") Any other interactions that are not straight-away consequences of ordering food (or playing chess, analyzing stock market trends, and so on) will fail as well. Because computers are programmed, they cannot fill in what is not explicitly given in their programs. Yet it is impossible to pre-program all the information that might become necessary in intelligent interactions. Managing even brief conversational exchanges requires an effectively infinite lexicon of facts, that, even if they could be stored, could not be used in real-time.

In the case of R1D1, a further difficulty involving the Frame Problem arises. It is impossible to pre-program all the information that is not necessary to complete an intelligent task. There is an effectively infinite list of implications connected to any action. Only a few are relevant. The ratio of the revolutions of the wheels to their number on the wagon is not relevant to rescuing R1D1's battery. Nor is the paint on the walls. Why not program a computer to just ignore irrelevant implications? This sounds fine until one realizes that a computer busy ignoring infinite numbers of irrelevant implications is not likely to solve a problem in real-time -- the time it takes to get the battery before the bomb explodes. Of course, by simply programming R1D1 to, say, locate and remove the bomb from the battery, one can get the desired result. What is left, however, is not intelligent behavior but a programmed list of instructions for a mindless machine.
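
A back-of-the-envelope way to see the blow-up Dennett describes (all numbers below are invented): if the deliberator checks every fact in its world model against every candidate action, the work scales with the size of the world rather than the size of the task.

```python
# Naive deliberation: "does this action change this fact?" for every fact.
def naive_checks(num_facts, num_actions):
    return num_facts * num_actions

# With a relevance filter, only the facts the task touches get checked.
def filtered_checks(relevant_facts, num_actions):
    return relevant_facts * num_actions

print(naive_checks(num_facts=10**6, num_actions=20))      # 20,000,000 checks
print(filtered_checks(relevant_facts=5, num_actions=20))  # 100 checks
```

The catch, as the passage above says, is that deciding which facts are relevant is itself the expensive part; a machine busy ignoring an effectively infinite list of irrelevant implications is no better off than one considering them.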

Are we expecting machines to behave in a human-like manner without having the same access to information regarding the outside world (senses) as humans?

Some people are.

I think we do suffer from it, but get over it at an early age. In some respects, we don't get over it - we have an existential awareness of time (in that our existence is confined by it), so we make choices without analyzing all of the available information. Or, rather, as we gain experience we set protocols that allow us to determine what information is relevant to a particular situation and what is not.

Yes, we seem to be capable of inferring our way to the answer via some sort of process which transcends computation. We don't get stuck in the infinite regress that Dennett's robot does. Put simply, we can deal with a world of infinite possibilities but no machine can do the same.
 
