When will machines be as smart as humans?

Our ability to be aware comes only from our five senses and our brain's interpretations thereof. If we eliminate our senses, we eliminate our awareness. It is only through our experiences with these senses that we can define our awareness, so it is only through these senses that we can program awareness.

Maybe. But a computer has access to information as well, either from cameras, or directly from the internet. I don't see how this is different.
 
As pointed out before, we can "replicate" humans, so that's not the problem. One important question Randfan pointed out is whether or not sentience and intelligence are "substrate neutral". Essentially, it's the hardware problem, and that question seems to get handwaved away in the discussion. I mean, take life for example. There shouldn't be anything "magical" about it, but we still can't make a living cell out of purely lifeless chemicals, from scratch, and I'm not willing to predict when we'll be able to achieve that, if ever. The same goes for intelligent machines. You can't expect some singularity to magically occur out of playing with the software and the current type of hardware and architecture. Now, if there were some important discovery or complete paradigm shift - either in the concept of intelligence/awareness, in our understanding of the human brain, or in computer architecture/hardware - then maybe we could start thinking about "when" we'll see intelligent (non-human) machines.

I see your point. Basically, we can't manipulate each atom or molecule individually, but I think this is an exaggeration of the problem. After all, we can build things that are made out of atoms without manipulating the atoms themselves. If, for example, the human mind is generated by the combined electromagnetic field of the nervous system, then it may be possible to replicate that, even if we don't build it from "scratch".
 
Maybe. But a computer has access to information as well, either from cameras, or directly from the internet. I don't see how this is different.

In the sense of consciousness we are speaking of, it is not the collection of information that is awareness, but the interpretation of that information. This interpretation is what we refer to as abstract. Because we only know our own capable abstracts, we can only program those abstracts.

I'm going to slip into fantasy for a sec:

If we can meld the human brain to a machine that can let us sense more than we do now, then we would be able to understand senses other than what we currently are able to, and therefore we would (at some point) be able to program according to that. That might be a possibility.
 
This is an assumption that you have not justified. What's so special about meat?
Well, nothing, when it's dead. When it's alive it does funny things like try to harness the forces of nature through science and other personal practices.

Even if some little biological process is a necessary component for thought, it would be duplicated as part of this theoretical model, and therefore the model should have the necessary components for consciousness.
It's more paradoxical than reductionistic, in my view. Others see it as semantical. For example, consider you and I actually debating whether or not we have something resembling "free will". The idea of two entities with entirely deterministic behavior debating determinism is, in and of itself, an absurdity, but an ephemeral one. It's entirely possible that while 95% of our behavior may be predictable, a small percentage may account for the "ghost in the machine".

Unless you think there's something special about the meat itself, such that if you do the exact same thing with silicon, it won't work. In that case, you should read this.
Somewhat thought provoking, but has there been any theoretical proof of naturally arising silicon life or potentially intelligent life arising out of completely different configurations of matter?
 
You're making a rather faulty assumption that biological processes are somehow special. There's nothing special in the biological process - no 'ghosts in the machine' - that can't be reproduced artificially.
What, then, is the main obstacle now to producing C-3PO or Data and having them be a productive member of this forum...
 
In the sense of consciousness we are speaking of, it is not the collection of information that is awareness, but the interpretation of that information. This interpretation is what we refer to as abstract. Because we only know our own capable abstracts, we can only program those abstracts.
This is a good point. Let's pretend we have a character like Data, and we ask him to do something other than make comparisons and connect the dots when asked for advice. Could he do this using simply an advanced form of logic, or would abstraction require quantum computation?
 
This is a good point. Let's pretend we have a character like Data, and we ask him to do something other than make comparisons and connect the dots when asked for advice. Could he do this using simply an advanced form of logic, or would abstraction require quantum computation?

Logic is mathematical. If given the hard details of a scenario and the program to analyze it, a machine can handle advanced logic much faster than humans, just as a calculator can handle equations more quickly than humans.

Asking Data, in this example, "How can you handle this particular situation so that no one is injured?" is akin to, "What is 2+2?". Much more complicated and involving many more factors, but it still comes down to mathematical logic.

If we change the question so that it is semantical in nature, however, Data will not be able to calculate it unless he has an abstractual sense programmed to collect abstractual information. A real-world question faced daily might be something like, "How can you handle this situation without hurting anyone's feelings?"
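To make that contrast concrete, here's a toy sketch (the scenario, names, and numbers are all made up purely for illustration) of why the first question reduces to arithmetic-style logic while the second has nothing to compute:

```python
# Made-up scenario: the "injury" question reduces to checking a hard,
# enumerable predicate over candidate actions - much like evaluating 2 + 2.
candidate_actions = {
    "evacuate":  {"injuries": 0},
    "negotiate": {"injuries": 0},
    "use_force": {"injuries": 3},
}

def no_one_injured(outcome):
    # A purely mathematical test: countable facts, a crisp threshold.
    return outcome["injuries"] == 0

safe_actions = [action for action, outcome in candidate_actions.items()
                if no_one_injured(outcome)]
print(safe_actions)  # ['evacuate', 'negotiate']

# "Without hurting anyone's feelings" has no such predicate here: unless the
# meaning of "hurt feelings" is itself programmed in, there is nothing to calculate.
```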
 
I've watched some Star Trek and I know about DATA. :p

Then you've barely scratched the surface.

I understand what you are getting at, but what I am saying is that abstractual thought must be specifically programmed. Whether the rule is hard-wired or not is irrelevant if the robot is not programmed with the meaning of a term such as "existence" or "harm".

Actually, Asimov confronted many of these concepts in his short stories. In one story, a robot was forced to deal with what constituted 'harm' to humans from a unique perspective - he was a mind reader. In another, the robots decided that humans were inferior, obsolete servants of their computer master. One of the ongoing themes Asimov explored in depth was the concept of a Zeroth Law - against harm to humanity as a whole, as opposed to harm to individual humans.

Quite obviously, they would have to be programmed with parameters dealing with harm, etc... and those parameters would be continuously improved or updated as needed. We'll be lucky if our first Asimovian 'bots can manage to operate a month without hurting someone; 100 years later, they'll be protecting humanity from itself... I think.

In order to program that into a machine, we must have a clear definition ourselves.

Not really. In order to fulfill the intention of the program, we will - but that's going to be hit or miss as we go along.

In order to program a machine in our perception, it must be able to experience our perception. In order to experience our perception, it needs our biology - or at least the equivalent thereof.

I feel you're wrong on both counts, here.

The question is, can we build a machine that is smarter than humans? That depends on what is considered "smarter". Able to complete tasks quicker? Sure. Capable of ambition and motivation? I doubt it, because our knowledge of such abstracts is based solely on our human condition (being confined to our 'meat'), and that is the only way we will be able to program it.

Again, I disagree. After all, we can offer 'motivation' to computer characters in video games, and other simulations of abstraction; and with the advent of quantum computing, or some other form as yet unknown to us, there's no telling what we'll be able to do.
 
Not really.
I feel you're wrong on both counts, here.
Again, I disagree.

We can always agree to disagree. ;)

Again, I disagree. After all, we can offer 'motivation' to computer characters in video games, and other simulations of abstraction; and with the advent of quantum computing, or some other form as yet unknown to us, there's no telling what we'll be able to do.

We control our character in the video game, and the game program is programmed to act accordingly. It does not have a sense of what the game is representing any more than the computer understands the words it is displaying at this moment.
 
We can always agree to disagree. ;)



We control our character in the video game, and the game program is programmed to act accordingly. It does not have a sense of what the game is representing any more than the computer understands the words it is displaying at this moment.

I take it you don't play many games in which individual NPC characters act quite on their own motivations.

Well, the illusion of motivation, of course... but, ultimately, our own 'ambition and motivation' are illusory as well.

But let's not take this into an argument about so-called 'free will'... :D
 
I take it you don't play many games in which individual NPC characters act quite on their own motivations.

Well, the illusion of motivation, of course... but, ultimately, our own 'ambition and motivation' are illusory as well.

But let's not take this into an argument about so-called 'free will'... :D

The NPCs work on a routine loop - once that loop has ended, they either remain stationary or repeat it, depending on their programming. In this example, the character is not a separate entity - it is part of a larger body of code that is programmed to appear to be a 'character' from our perspective as the game-player. That line of code has no more motivation than we imagine it having - no more than a rag doll has to an infant.

I won't talk about "free will", but I will talk about the urge to survive. That line of code, when I eliminate it from the game, does not abstractually affect the rest of the game whatsoever. The machine does not get pissed off and decide to cheat; it does not understand what it is to lose, other than in a completely mathematical sense of keeping a score.
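As a rough sketch of the kind of routine loop I mean (a made-up example, not any particular game's code):

```python
import itertools

# A made-up NPC "routine loop": the character is nothing but a scripted cycle
# of states. Delete the routine and nothing else in the code minds or notices.
patrol_route = ["stand at gate", "walk to market", "greet player", "return to gate"]

def npc_behaviour(route):
    # Repeat the same loop for as long as the game keeps asking for the next step.
    for step in itertools.cycle(route):
        yield step

npc = npc_behaviour(patrol_route)
for _ in range(6):
    print(next(npc))  # cycles back to the start; no urge to survive anywhere
```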
 
I see your point. Basically, we can't manipulate each atom or molecule individually, but I think this is an exaggeration of the problem. After all, we can build things that are made out of atoms without manipulating the atoms themselves. If, for example, the human mind is generated by the combined electromagnetic field of the nervous system, then it may be possible to replicate that, even if we don't build it from "scratch".

But I'm not saying we need to go to the molecular level. I guess my example about life misled you. The idea is that we know quite a lot about the chemistry of life, but we can't reduce biology to the chemical level, so we can't replicate life from lifeless chemicals. In the case of intelligence, it's even more complicated, because we don't have a good grasp of the concepts of intelligence and consciousness in the first place, we don't know whether they are substrate neutral, and, supposing they are, the entire architecture and software of an artificial brain still remains an open problem. Is focusing on the neuron and the brain level really sufficient to create consciousness (and what is the role of the senses and perception of the outside world in all of this)? With so many unanswered questions, there is a lot of ground to cover in the different aspects of the problem before we can even decide whether it is really feasible to make an intelligent machine.
 
When will machines be as smart as humans?

When they can solve the frame problem:

http://plato.stanford.edu/entries/frame-problem/

i.e. never. :)

Do you say "never" as in, the only reason humans can make a choice is because we simply choose a way to do it, regardless as to if there is an even better way to do it? If so, I agree with that. If we spent our time analyzing every little detail, we'd never get anything accomplished.

I guess it's like, if we have an idea that might work, we don't always wait for our next idea to come along and see if it's better. A computer will wait, though, until every calculation is finished before it takes an action - but by that time, the situation may have changed, and the computations will have to start all over.

So we'll need a few "illogic" mechanisms as well.
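Something like this, maybe (a toy sketch with made-up scoring, just to show the shape of such a mechanism): instead of finishing every calculation first, commit to the best option found within a fixed deliberation budget.

```python
import random
import time

def evaluate(option):
    time.sleep(0.01)        # stand-in for an expensive calculation
    return random.random()  # stand-in for how promising the option looks

def decide(options, budget_seconds=0.05):
    # Deliberate only until the deadline, then act on the best idea so far,
    # accepting that an even better option may never get looked at.
    deadline = time.monotonic() + budget_seconds
    best_option, best_score = None, float("-inf")
    for option in options:
        if time.monotonic() >= deadline:
            break  # the world won't wait; act on what we have
        score = evaluate(option)
        if score > best_score:
            best_option, best_score = option, score
    return best_option

print(decide(range(1000)))  # acts after ~0.05 s instead of evaluating all 1000 options
```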
 
A computer is physical. A computer program is informational.
You are talking in the abstract. "Informational" is not even defined. Does the program comprehend the program?

A book is physical. A story is informational.
Does a book understand the story?

A brain is physical. Consciousness is informational.
This really tells us nothing. Because one identifiable aspect of consciousness is information, you define it as "informational", whatever the hell that means.

As I said, point out any property of consciousness that is not informational in nature and I will admit I am wrong.
I don't know what consciousness is. I can't argue from ignorance. I only know that defining consciousness as "informational" tells us nothing.

It's not something you responded to. It's the original question. You've been complaining all along about how I responded to that question.
Which again tells us nothing. I get your debating style.

Since I consider animals (including humans) to be machines, this question is - to me - based on a false premise.
I consider animals and humans to be machines also. I don't think you understand the question. You can't even define the premise you object to. Saying you object to the question doesn't delineate the premise. The question wasn't composed of a single premise, nor does the question hinge only on this premise, as I understand it.
 
This was done successfully decades ago. Sorry.
But you miss the point. The supercomputer had to be specifically programmed to stack the blocks. The question is to what degree computers can learn. To date, they can only learn what they are programmed to learn. A chess program isn't going to pick up Chinese unless it is programmed to do so.
 
You are talking in the abstract. "Informational" is not even defined. Does the program comprehend the program?

It is difficult not to talk in abstractual terms when you're having a conversation about an abstractual term. ;)

It is not self-aware, in the sense that it knows what its function is in our human society, if that's what you mean. In the purely mechanical sense, however, self-awareness does not equate to awareness.

For instance, the program is aware of whether or not the conditions are met for it to run its next string of sequences.
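In code terms, that mechanical kind of "awareness" is nothing more exotic than this (a made-up condition, purely for illustration):

```python
# A made-up condition: the program "is aware" of it only in the sense that
# its next action depends on whether the condition is currently met.
sensor_reading = 72

if sensor_reading > 70:
    next_step = "run cooling sequence"
else:
    next_step = "remain idle"

print(next_step)  # "run cooling sequence" - awareness as a branch, nothing more
```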

This really tells us nothing. Because one identifiable aspect of consciousness is information you define it as "informational", whatever the hell that means. ?

I don't know what consciousness is. I can't argue from ignorance. I only know that defining consciousness as "informational" tells us nothing.

His definition (and my personal one as well) is that "consciousness" is the processing of information. I've already explained the purely mechanical function of that term. An example using our biology would be: If you are capable of processing the information from your senses, you are considered conscious; if you are not able to process the information from your senses (coma, asleep), you are said to be unconscious.

You may consider consciousness differently, and if so, you need to state the difference so that everyone can get on the same definition. Otherwise, you are going to have to accept, at least for the course of this conversation, the definition offered. Don't worry - the reality doesn't change, regardless of how it is defined.
 
The question is to what degree computers can learn. To date, they can only learn what they are programmed to learn. A chess program isn't going to pick up Chinese unless it is programmed to do so.

On that note, neither are we! We do not just go out and learn anything, either. We have different factors in our lives that motivate us to learn. Programming reasons for motivation is going to be the challenge, I believe.
 
It is difficult not to talk in abstractual terms when you're having a conversation about an abstractual term. ;)
I appreciate the smilie but saying that consciousness is simply abstract tells us nothing. We can abstractly consider consciousness but the process is quite real.

It is not self-aware, in the sense that it knows what its function is in our human society, if that's what you mean. In the purely mechanical sense, however, self-awareness does not equate to awareness.

For instance, the program is aware of whether or not the conditions are met for it to run its next string of sequences.
Insomuch as a river running downstream is aware of its course. Yes, I agree, but this is a meaningless distinction.

His definition (and my personal one as well) is that "consciousness" is the processing of information. I've already explained the purely mechanical function of that term. An example using our biology would be: If you are capable of processing the information from your senses, you are considered conscious; if you are not able to process the information from your senses (coma, asleep), you are said to be unconscious.
A practical measure of consciousness but this tells us nothing of consciousness or what it means to process information. A computer processes information, is it "conscious"?

You may consider consciousness differently, and if so, you need to state the difference so that everyone can get on the same definition. Otherwise, you are going to have to accept, at least for the course of this conversation, the definition offered. Don't worry - the reality doesn't change, regardless of how it is defined.
I'm sorry, but this comes off as patronizing, though I suspect you don't intend it to. I have no problem with reality or the lack of it. I'm not invested in it. I'm happy to let the chips fall wherever they may. Self or no self, free will or no free will, homunculus or illusion, I honestly don't care. I understand your point that definitions won't change reality, but I actually knew this.

My problem is one of debate and basing premises and debate solely on semantics and accepting or rejecting concepts based simply on a definition of the word.

I don't know what consciousness is. I know what my experiences are and I can only assume that others share similar experiences including emotion and what I sense as self awareness. I'm not a solipsist but I accept that I could be the only conscious entity. I see no practical benefit to solipsism.

I live my life as though I have free will because I really have no choice. I live my life as though there are other conscious entities because, I really have no choice.

In any event, I'm trying to understand consciousness. I know that only biological machines currently possess traits that you and I might associate with consciousness and/or sentience. I also find it rather arrogant to simply declare that consciousness is "informational" and that there is no biological component to it.
 
