When will machines be as smart as humans?

Yes, information is necessarily substrate-neutral. Where did you get the notion that consciousness = information?
Consciousness is not information, but consciousness is an information process. Hence it is substrate-neutral.

I did not imply otherwise. Our consciousness is a result of biological processes. To what extent do those biological processes shape consciousness?
To what extent do they shape our consciousness? To a fairly significant extent, I would think, because biology shapes both our experiences and our responses. Ultimately, though, it's informational in nature; all possible consciousnesses must share many properties.

No, you are making it about semantics.
Nope. It is fundamentally a question of semantics. I cannot change that.

It is still a valid question. You don't have to accept premises.
If it is based on a premise I do not accept - a premise I believe to be false - then it is not a valid question. I can answer it, but the answer would not be valid either. So what's the point?
 
For clarification, this entire thread is based on semantics, and the answers depend on the precision of the definitions we are using.
 
Consciousness is not information, but consciousness is an information process. Hence it is substrate-neutral.
I'm sorry, but that doesn't necessarily follow. In fact, we know that consciousness - human consciousness - is not only an information process. On the contrary, human consciousness would not exist without biological processes.

To what extent do they shape our consciousness? To a fairly significant extent, I would think, because biology shapes both our experiences and our responses. Ultimately, though, it's informational in nature; all possible consciousnesses must share many properties.
You are back to defining consciousness as information.

Nope. It is fundamentally a question of semantics. I cannot change that.
Then you are just left with gainsaying. You know, you've got a great rhetorical device there. It's a bit obtuse but hey... whatever.

If it is based on a premise I do not accept - a premise I believe to be false - then it is not a valid question.

I can answer it, but the answer would not be valid either. So what's the point?
Yeah, and you can stick your fingers in your ears and hum. I suppose that is an appropriate response to premises you think are invalid. Just declare them so. Hey, don't answer.
 
Um, okay. Ignore my last post too then. :)
Sorry Pixy, by "never mind" I was referring to a rather out-of-character and rude response that I snipped and replaced with "never mind". I didn't mean for you to ignore my previous post. Probably something about my personality and the fact that I'm in a lousy mood. I just never cared for

"your wrong and I'm wright and I'm not going to address the question because your point is wrong".

I'll let the whole thing go. :( It's not worth the headache.
 
I'm sorry, but that doesn't necessarily follow. In fact, we know that consciousness - human consciousness - is not only an information process. On the contrary, human consciousness would not exist without biological processes.
Which doesn't make it anything other than informational. If you could illustrate a property of consciousness that is other than informational in nature, then that would change things.

You are back to defining consciousness as information.
No.

Then you are just left with gainsaying. You know, you've got a great rhetorical device there. It's a bit obtuse but hey... whatever.
No. It is a question of semantics, based on a premise I don't accept - and one that cpolk doesn't accept either; cpolk says as much in a post above.

Yeah, and you can stick your fingers in your ears and hum. I suppose that is an appropriate response to premises you think are invalid. Just declare them so. Hey, don't answer.
No. If you ask me a question, and the question depends on an invalid premise, I cannot provide a valid answer to the question, so instead I will point out the problem with the premise.

If you ask, "What sort of guns did the astronauts use to protect themselves against the giant green mice living in the moon?", then how else am I supposed to reply?
 
For clarification, this entire thread is based on semantics, and the answers depend on the precision of the definitions we are using.
I don't really want to get into a semantic argument about semantics. {sheesh}.

No, the entire thread is not simply based on semantics. On the contrary, to focus on semantics is to miss the point. What is consciousness? I've largely abandoned dualism, and I suspect that humans will fully understand human consciousness sometime in the future. How soon, I really don't know. Further, I suspect that consciousness may very well be substrate-neutral.

However, to date the only "machine" capable of debating consciousness is a biological one. Is that significant? I don't know. To declare that it is would be to argue from ignorance. But I would think that before we dismiss the significance of biology, we ought to understand its role, and perhaps we ought to understand a bit better what exactly consciousness is.

One thing I am certain of is what consciousness isn't. Consciousness isn't simply information. By that definition, a book is conscious.

Don't get me wrong. The words we use are important. I'm not saying that semantics isn't important to the discussion. I'm saying that to simply tie a concept to a word and then argue the word, not the underlying principles, is just silly semantics. It doesn't further the discussion.
 
Which doesn't make it anything other than informational. If you could illustrate a property of consciousness that is other than informational in nature, then that would change things.
My computer is informational in nature. A book is informational in nature. The process of human consciousness isn't.

Again, you are just gainsaying.

...so instead I will point out the problem with the premise.
I'm sure I missed something; which premise, and what is the problem with it?

If you ask, "What sort of guns did the astronauts use to protect themselves against the giant green mice living in the moon?", then how else am I supposed to reply?
I know of no such logically invalid premise that I responded to. What premise was logically invalid?
 
Of course consciousness is informational. All the mind ever does is process information. It takes in various sensory inputs, performs various transformations on them, keeps some of the information in the brain as experience or memory or whatever, and sends the rest off to the rest of the body to tell it to "do" things.

Of course, one could argue (and many have) that abstracting the mind to that level is silly, and that you could abstract just about anything to being information. The difference is that the definition of information (well, one of them) is by definition tied to mental events. A thing is information if someone knows it.

The definition of consciousness is self-awareness, which is nothing more than when a "mind" (an object which acts upon information in the above way) has knowledge of its own existence. This seems rather trivial to implement. In fact, I think I could write a program which would be nominally self-aware, except that it would be far too unintelligent to be of much use to anyone.

Of course, part of the problem is that people don't define the mind as information processing, but rather they just give it the painfully circular definition of "that thing that we do." No true Scotsman has a mind, or something like that. :)
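
To make that concrete, here is a toy sketch of what "nominally self-aware" could look like under that minimal definition - purely a hypothetical illustration in Python (the TinyMind class and its methods are invented for this example), not a claim about how a real mind works:

# A toy "mind" in the sense described above: it takes in information,
# keeps some of it, and - per the minimal definition of self-awareness -
# holds and can report knowledge of its own existence.
class TinyMind:
    def __init__(self):
        self.memory = []                    # retained information ("experience")
        self.self_model = {"exists": True}  # knowledge of its own existence

    def perceive(self, observation):
        # take in an input and keep it
        self.memory.append(observation)

    def report_on_self(self):
        # inspect and report facts about itself as an object in its own model
        return {
            "i_exist": self.self_model["exists"],
            "what_i_am": type(self).__name__,
            "things_i_remember": len(self.memory),
        }

mind = TinyMind()
mind.perceive("the sky is blue")
print(mind.report_on_self())
# -> {'i_exist': True, 'what_i_am': 'TinyMind', 'things_i_remember': 1}

Which, of course, only shows how little the bare definition demands; whether something like that counts as conscious is exactly what is in dispute.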
 
You can take a two-year-old and give them a pile of blocks and they can make a stack or pyramid out of them. A supercomputer can't. Told to make a pyramid, the computer will keep putting the top block on first and dropping it forever, never changing, until a human intercedes. A computer has to be told that somebody who was dead yesterday is still dead today. Its understanding of death is limited to knowing the person isn't available for comment.

They have no intelligence at all, regardless of how many computations they can make. Computers are designed by people to simulate intelligence well enough to fool most people - those who don't know better - into thinking they are intelligent. I doubt this will ever change regardless of how advanced computer science becomes. I think they will get more and more sophisticated, so it will be harder to tell the difference. But the difference will remain.

Doesn't a dog's brain do just as many computations as a human brain does? But where is the intellect? The understanding? These are ineffable.
 
No, the entire thread is not simply based on semantics. On the contrary, to focus on semantics is to miss the point.

I just learned the hard way that it is, indeed, about semantics.

We are trying to discuss something that has not been clearly defined. For example, the thread started by asking whether or not a machine can surpass human intellect, without defining human intellect. We all have our different views, every one of us, about what constitutes human intellect. If our arguments are based on varying definitions and we never agree to a unified definition to use, even for just the sake of this one discussion, then how can we make a determination about the machines? In that sense, this entire thread, like the other one I've been posting in, has started with semantics.
 
:cool:
Is it relevant that I glanced at the thread title and saw "machines as smartas as humans"?


Completely. Great example... just because a machine is programmed to understand the underlying implications of jokes, would it also have the ability to find them humorous?
 
Which then leads to the question of why we find jokes humorous, and then to why we don't find all jokes humorous. I tried to find out once, but all I managed to do was ensure that many jokes would not be funny to me. Not sure if that was progress or not.
 
My computer is informational in nature. A book is informational in nature. The process of human consciousness isn't.
Wrong, all three.

A computer is physical. A computer program is informational.

A book is physical. A story is informational.

A brain is physical. Consciousness is informational.

Again, you are just gainsaying.
As I said, point out any property of consciousness that is not informational in nature and I will admit I am wrong.

Thought? Information processing. Memory? Information. Experience? Information. Awareness? Information processing.

I'm sure I missed something; which premise, and what is the problem with it?

I know of no such logically invalid premise that I responded to. What premise was logically invalid?
It's not something you responded to. It's the original question. You've been complaining all along about how I responded to that question.

Would such a machine be considered to be "alive"? If we replicate the biology, wouldn't it no longer be a machine?
Since I consider animals (including humans) to be machines, this question is - to me - based on a false premise.
 
Which then leads to the question of why we find jokes humorous, and then to why we don't find all jokes humorous. I tried to find out once, but all I managed to do was ensure that many jokes would not be funny to me. Not sure if that was progress or not.
:D

Yeah, a large part of humour is incongruity. The better you understand this, the more you are aware of the various ways of constructing these incongruities, the less funny the jokes are.
 
That depends on whether or not you are equating the capacity for intelligence with the condition of being "smart". If your definition is that they are one and the same, then I agree with you, and I would be wrong. If your definition of "smart" is only the knowledge you possess at any given moment, then computers can be "smarter".



To have intelligence, it must be able to learn through experience. Experience is shaped through perception; perception through senses. For instance, a machine cannot "understand" something abstract, such as music, without being able to sense it in a manner that is close to the way we do.

If we simulate biology, would we be able to simulate sensory reactions as well? Or would we have to actually replicate the biology? Would a machine be able to appreciate "illness" and "death" and understand them on the level we do without experiencing them first-hand to some degree? Would it be possible to simulate such things, or can they only be replicated?

Would such a machine be considered to be "alive"? If we replicate the biology, wouldn't it no longer be a machine?

Or, more reasonably, we'd have to recognize our status as biological machines.

Apes have very similar biology - are they human now?
 
You can take a two-year-old and give them a pile of blocks and they can make a stack or pyramid out of them. A supercomputer can't. Told to make a pyramid, the computer will keep putting the top block on first and dropping it forever, never changing, until a human intercedes. A computer has to be told that somebody who was dead yesterday is still dead today. Its understanding of death is limited to knowing the person isn't available for comment.

They have no intelligence at all, regardless of how many computations they can make. Computers are designed by people to simulate intelligence well enough to fool most people - those who don't know better - into thinking they are intelligent. I doubt this will ever change regardless of how advanced computer science becomes. I think they will get more and more sophisticated, so it will be harder to tell the difference. But the difference will remain.

Doesn't a dog's brain do just as many computations as a human brain does? But where is the intellect? The understanding? These are ineffable.

You completely lack all clues regarding computers, don't you?
 
