When will machines be as smart as humans?

Thought is computational.

That particular hypothesis has been under very severe attack for at least the last fifteen years. It is now widely accepted to have been disproven:

http://www.pdcnet.org/pdf/2Searle.pdf

Paradoxically, cognitive science was founded on a mistake. There is nothing
necessarily fatal about founding an academic subject on a mistake; indeed
many disciplines were founded on mistakes. Chemistry, for example,
was founded on alchemy. However, a persistent adherence to the mistake is at
best inefficient and an obstacle to progress. In the case of cognitive science
the mistake was to suppose that the brain is a digital computer and the mind is
a computer program.
There are a number of ways to demonstrate that this is a mistake but the
simplest is to point out that the implemented computer program is defined
entirely in terms of symbolic or syntactical processes, independent of the
physics of the hardware. The notion “same implemented program” defines an
equivalence class that is specified entirely in terms of formal or syntactical
processes and is independent of the specific physics of this or that hardware
implementation. This principle underlies the famous “multiple realizability”
feature of computer programs. The same program can be realized in an indefinite
range of hardwares. The mind cannot consist in a program or programs,
because the syntactical operations of the program are not by themselves
sufficient to constitute or to guarantee the presence of semantic contents of
actual mental processes. Minds, on the other hand, contain more than symbolic
or syntactical components; they contain actual mental states with semantic
content in the form of thoughts, feelings etc., and these are caused by
quite specific neurobiological processes in the brain. The mind could not consist
in a program because the syntactical processes of the implemented program
do not by themselves have any semantic contents. I demonstrated this
years ago with the so-called Chinese Room Argument.

Searle's attack was just one prong of a multi-pronged attack. Computationalism is effectively dead.

I am currently studying philosophy and cognitive science at one of the universities which founded the discipline of cognitive science in the first place. People come from all over the world to study this subject at my uni, but because of the developments over the past decade the funding for new projects based on computationalist theories of mind has dried up and the school of cognitive science was closed down. It is now just offered as a minor for people studying majors in other subjects. Computationalism is dead. If it's dead at Sussex University, then it's dead. OKAY....there is a small minority still defending it, but they are the oldest of the old school. They are the people who wouldn't have an academic career any more if they accepted it is dead. NONE of the people going through the system now are going to end up being computationalists, precisely because they are now faced with the full force of Searle's arguments, and there really isn't any way to avoid their conclusions when put into an environment where pure dogmatism doesn't get you anywhere and the people you are discussing it with properly understand the issues.

The future of "cognitive science" lies in theories of consciousness stressing "embodiment", a move prompted by another cognitive scientist from Sussex called Andy Clark. He wrote a book called "Being There: Putting Brain, Body, and World Together Again". The title is a reference to Martin Heidegger's concept of Dasein or "there-being", meaning "being in a world". In other words, human minds are not computational at all - they are embodied, thus avoiding the frame problem.
 
Yes, we seem to be capable of inferring our way to the answer via some sort of process which transcends computation. We don't get stuck in the infinite regress that Dennett's robot does. Put simply, we can deal with a world of infinite possibilities, but no machine can do the same.

Isn't it not that we ignore (a negative action), but that we make a choice (a positive action) about what we are trying to do? I made the comment earlier that it will be difficult to program motivation.

Consider an unmotivated student at school, with a test lying on his desk. He'll examine the eraser of his #2 pencil. He'll read the "#2" on his pencil. He'll doodle a character on the side of his test. He'll look up at the wall and analyze the bricks. He'll stare at his desk. He'll stare at a student. He'll chew his eraser. "Pencils down, hand your test forward."

Are there worthwhile similarities in this comparison?
 
That particular hypothesis has been under very severe attack for at least the last fifteen years. It is now widely accepted to have been disproven:

No offense toward you, but this is what I was talking about with the robots. Just as we cannot expect mechanical machines to function like biological humans, we shouldn't expect biology to function like mechanical machinery. We are biological machines, so of course semantics are going to be biological processes. EVERY process performed by the human body is biological.

So, since we are not able to detect how semantics are programmed into the brain, we are to assume that it is not?! I hardly think we know the brain well enough to make that call.

I think that the statement you posted is wrong, because it fails to recognize that our semantics, or abstracts, are dependent on our five senses. While "syntactical processes of the implemented program do not by themselves have any semantic contents", meaning that the brain by itself is not capable of abstraction, they are not taking into account that abstraction comes in response to sensory stimuli.
 
Isn't it not that we ignore (a negative action), but that we make a choice (a positive action) about what we are trying to do? I made the comment earlier that it will be difficult to program motivation.

How do we choose what to ignore and how to act? Why can't the computer be programmed to do the same?

Consider an unmotivated student at school, with a test lying on his desk. He'll examine the eraser of his #2 pencil. He'll read the "#2" on his pencil. He'll doodle a character on the side of his test. He'll look up at the wall and analyze the bricks. He'll stare at his desk. He'll stare at a student. He'll chew his eraser. "Pencils down, hand your test forward."

Are there worthwhile similarities in this comparison?

Not sure.
 
So, we replace one neuron with an electronic duplicate. The brain keeps on ticking exactly as before, with our electronic neuron firing in place of its biological predecessor. Then we replace another neuron, and another, and...
(snip)
...You'd get the real thing. Not just consciousness, but if you wired up this electronic brain the same way as a human brain, human consciousness.

Pixy - this is where your opinion diverges from mine on this question.
A computer is built. Then some sort of BIOS is installed and an OS is added.
On power up, an electrical process loads coded information which controls further operation by setting switches.

Humans do not work like this. Nor does any organic entity.

In embryological development a fertilised cell splits and goes on splitting. The zygote already has a vast amount of stored information, which will only operate correctly if it finds itself in the correct environment, or one very like it.

The physical position of each cell in the various stages of development, as well as the next step it must execute, is in some sense "known" by the cell itself and by its immediate neighbours and by certain other cells which might be on the far side of the embryo at the moment (though they might have been near neighbours at an earlier stage). The "information" that allows this "knowledge" is probably a matter of chemical gradient, but that's a best guess. (Sheldrake's morphic field would be a neater explanation, were there only a shred of evidence for it.)
My point is that this is typical of the situation at any stage of the development of the body, including the nervous system and brain: The whole damn thing is to some extent aware of its own internal organisation. The OS is in there from the start.
Now there's nothing of mysticism in this. The OS is a matter of physics and chemistry. Timing is crucial in development, as is the presence in the environment of the correct chemicals, particularly hormones.
(The meme superlayer is an interesting complication, but my view is it accelerated communication rather than creating awareness itself.)

Can this physical structure be synthesised? Probably. But it seems improbable to me that it can be synthesised by any manufacturing process yet in existence except the one you suggested yourself early in this thread. Brains must be grown, not built.
Every brain we are aware of, including those which display consciousness or something like it, is assembled in this fashion. It seems very probable to me that this process embodies many as yet unknown processes (if you like, the setting of innumerable defaults and switches) which are absolutely critical to the origin of consciousness.

While we can play with Dennett's definition, we all know damn well that a thermostat is not aware. It is responsive to a stimulus, true- as is a rock on the edge of a cliff. It might be considered that something responsive is alive- (a sufficiently responsive automated machine gun would be a formidable enemy) - but is it consciously self aware?
(A simple "no" will do).
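To underline how little is going on in a thermostat, its entire "behaviour" fits in a couple of lines. (A toy sketch of my own, in Python, purely to illustrate the point - not anyone's actual device:)

```python
# The entire "responsiveness" of a thermostat: one comparison, repeated.
def thermostat(current_temp, setpoint=20.0):
    return "heater on" if current_temp < setpoint else "heater off"

print(thermostat(17.5))  # heater on
print(thermostat(22.0))  # heater off

# It responds to a stimulus, as the rock on the cliff edge responds to a
# nudge - but calling that "awareness" stretches the word past usefulness.
```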

Note - I do not deny the possibility of machine intelligence in the sense of the self-defending machine gun. Far from it. I can easily imagine a machine society. (Anthills, of course, come to mind.)
I agree with your earlier comment that AI need not be remotely like human intelligence. We may never actually recognize true AI if we encounter it, seeing only how the hardware reacts to our input. Likewise, the AI may be utterly unaware of us. Which is sad.

I am however, pretty sure that unless a brain is grown, with a multi-level OS already built in at the level of the logic elements, an OS that enables those elements to be aware of each other and their place and function in the whole, nothing remotely like human consciousness will emerge, no matter how closely the assembled hardware resembles human neural hardware.

By the same argument, it's perfectly possible that a machine might be built- maybe out of elastic bands and string- which could accurately reproduce all human behaviour. Vanishingly improbable, but possible. I would just say that I am pretty sure the one mechanical form guaranteed not to give such a result would be one built as an element by element replica of neural architecture, because that architecture itself is meaningless unless arrived at by the same (or a similar) process of development.

Can I support any of this with evidence? Not at all, except the glaringly evident fact that so far as we know, the only things displaying awareness all developed in this manner. Except one. In whom we do not believe.
 
How do we choose what to ignore and how to act? Why can't the computer be programmed to do the same?

In my example, earlier, I used my attempt to learn Spanish. For a while, I was doing fine, until I lost motivation.

I argue that everything we do, and don't do, is a result of our motivation to accomplish the task. That motivation is directly related to our human condition (senses, mortality, etc.) and the resulting desires. How can we program desire in a machine? What makes a machine want to focus on the bomb in the room, rather than the wheel of the cart? What makes the student want to focus on his test during an exam, rather than the wall?

Through this motivation, we prioritize our desires. In my example, the student who chose to chew on his eraser was distracted in the same way that the robot was distracted by the colors in the room; neither of them had a desire to do the task that was expected of them.
 
We are biological machines.....

So say some people. Plenty of other people reject this, and for perfectly sound reasons.

I think that the statement you posted is wrong, because it fails to recognize that our semantics, or abstracts, are dependent on our five senses. While "syntactical processes of the implemented program do not by themselves have any semantic contents", meaning that the brain by itself is not capable of abstraction, they are not taking into account that abstraction comes in response to sensory stimuli.

So are you rejecting Searle's Chinese Room argument? You are welcome to do so, but I think the damage is already done. I think that computationalism with regard to mind is in terminal decline, largely because of Searle - but not exclusively because of Searle.

The problem is that even though a hard core of cognitive scientists are still maintaining that computationalism is a live theory, the vast majority of other interested parties and the vast majority of new students of the philosophy of cognitive science have accepted its failure and moved on. It is that last part that is crucial. Nobody is interested in funding research based on computationalism any more, and nobody wants to do that research any more - it isn't where the action is. Nobody wants to be studying what were cutting-edge theories 20 years ago; they want to study what is cutting-edge theory now, and that is all to do with embodied cognition.

Basically, anyone who is still left trying to defend computationalism is going to get left behind as cognitive science moves on to new research based on the theories which are competing to replace it. The result is that when the current generation of academics at the top of the hierarchy retire they will be replaced by non-computationalists. There aren't any new computationalists to replace them.

"Computationalism is dead: Now What?":

http://www.rpi.edu/~brings/SELPAP/fetz.nash/fetz.nash.html

"The failure of computationalism":

http://www.ecs.soton.ac.uk/~harnad/Papers/Harnad/harnad93.symb.anal.net.searle.html

"Time for a change in cognitive science":

http://www.mdx.ac.uk/www/psychology/torrance/metascience.htm

From an academic point of view, computationalism is a theory without a future.
 
Having learned some German, I ordered a coffee and sandwich. The response was "Etwas sonst?"

I did not recognise the phrase. My response was to whip out the pocket dictionary and (based on phonetics) look it up.

(It means "Anything else?")
Surely this is where you start programming a robot- by teaching it how to acquire more information. It's one way in which you educate a child.
 
Having learned some German, I ordered a coffee and sandwich. The response was "Etwas sonst?"

I did not recognise the phrase. My response was to whip out the pocket dictionary and (based on phonetics) look it up.

(It means "Anything else?")
Surely this is where you start programming a robot- by teaching it how to acquire more information. It's one way in which you educate a child.

There is a fundamental difference between the way you teach a child and the way you can teach a computer. With a child, you point to a red tractor and say "red tractor". Eventually the child associates experiences of red and of tractors with the words "red" and "tractor". The only way you could do this with a machine is if the machine were conscious. If all it is doing is processing information then it simply cannot associate any meaning with the word "red". It is just a label that it is told applies to certain other things, which are also just labels. All the computer can do is shuffle symbols about. All it can do is put "red" and "tractor" together to make "red tractor". It then says "tractor is red", but it never "knows" what any of the symbols actually mean, which is the point of Searle's Chinese Room argument.

So the only way you could teach a robot in this way is if it was actually conscious, but the arguments against computationalism have made it very difficult to claim that computation alone can ever produce consciousness, for the exact reasons given by both Searle and Penrose, both of whom claim to have provided logical proofs against any computational theory of consciousness, both proofs having been widely accepted (even if Penrose's additional claims about microtubules are widely questioned). There hasn't really been a computationalist backlash. There has been much gnashing of teeth and mudslinging (especially at Penrose) but very little concrete response to the arguments.

I am currently studying the philosophical foundations of cognitive science with three students who are on AI or science courses (rather than majoring in philosophy as I am). All three of them have been waiting for the cavalry to come charging over the hill and rescue computationalism, and they are currently coming to terms with the fact that the cavalry isn't coming. They are becoming painfully aware that none of the arguments supposed to rebut Searle are convincing - even for them! They aren't helped by the fact that the German guy who is teaching them AI is a recent convert to the position that computationalism has failed, so they aren't just having to deal with this ◊◊◊◊ in their philosophy class. They are getting the same message from the geek in the informatics department. :D
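To make the symbol-shuffling point concrete, here is a toy sketch of my own (Python, purely an illustration - not anyone's actual system). The program happily produces "tractor is red", but "red" never connects to anything beyond other labels:

```python
# Pure symbol manipulation: labels are linked to labels, never to any
# experience of red. This is all that "teaching by labelling" amounts to
# for a program that only processes symbols.

facts = {}  # maps an object label to a set of property labels

def teach(obj_label, prop_label):
    """Associate one uninterpreted string with another."""
    facts.setdefault(obj_label, set()).add(prop_label)

def describe(obj_label):
    """Recombine the stored labels into new strings."""
    return [f"{obj_label} is {prop}" for prop in sorted(facts.get(obj_label, set()))]

teach("tractor", "red")     # we "point at the red tractor"
print(describe("tractor"))  # -> ['tractor is red']
# The output comes from lookup and string formatting alone; nothing here
# "knows" what red looks like - which is exactly Searle's point.
```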
 
A computer is built. Then some sort of BIOS is installed and an OS is added. On power up, an electrical process loads coded information which controls further operation by setting switches.

Humans do not work like this. Nor does any organic entity.

Can this physical structure be synthesised? Probably. But it seems improbable to me that it can be synthesised by any manufacturing process yet in existence except the one you suggested yourself early in this thread. Brains must be grown, not built.
Every brain we are aware of, including those which display consciousness or something like it, is assembled in this fashion. It seems very probable to me that this process embodies many as yet unknown processes (if you like, the setting of innumerable defaults and switches) which are absolutely critical to the origin of consciousness.

We've put a little thought into this in another thread. I believe you are right about the biological and growth component, but that aside, let's examine how to do this in the most rudimentary sense.

For each sense, we need at least one processor solely dedicated to that sense alone. So, at least one processor for vision, one for sound, touch, etc.

We need a master component that controls all of the functions of the 'organism', such as relating information between itself and the sense processors.

Finally, we need a separate program that receives messages from the master component concerning only those interactions that are necessary for the organism-machine to function in the environment. This is our "consciousness" mechanism, and does not receive any other signals than it needs for environmental interaction.

This, I believe, should result in at least the kind of awareness an insect has.
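Sketched very roughly in code (Python, with placeholder names of my own - a sketch of the layout I have in mind, not a real design), it would look something like this:

```python
# Rough sketch of the proposed layout: one processor per sense, a master
# component that routes everything, and a "consciousness" mechanism that
# only ever sees the messages needed for interacting with the environment.

class SenseProcessor:
    def __init__(self, sense_name):
        self.sense_name = sense_name

    def process(self, raw_signal):
        # Each sense gets its own dedicated processing.
        return {"sense": self.sense_name, "data": raw_signal}

class ConsciousnessMechanism:
    def receive(self, message):
        # Only environment-relevant messages ever arrive here.
        print(f"acting on {message['sense']}: {message['data']}")

class MasterComponent:
    def __init__(self, senses, consciousness):
        self.processors = {s: SenseProcessor(s) for s in senses}
        self.consciousness = consciousness

    def handle(self, sense, raw_signal, relevant_to_environment):
        message = self.processors[sense].process(raw_signal)
        # Internal housekeeping stays down here; only what matters for
        # environmental interaction gets passed upward.
        if relevant_to_environment:
            self.consciousness.receive(message)

organism = MasterComponent(["vision", "sound", "touch"], ConsciousnessMechanism())
organism.handle("vision", "looming shadow", relevant_to_environment=True)
organism.handle("touch", "internal temperature nominal", relevant_to_environment=False)
```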

While we can play with Dennett's definition, we all know damn well that a thermostat is not aware. It is responsive to a stimulus, true- as is a rock on the edge of a cliff. It might be considered that something responsive is alive- (a sufficiently responsive automated machine gun would be a formidable enemy) - but is it consciously self aware?
(A simple "no" will do).

Do you recognize a difference between "aware" and "self-aware"? The computer program that is running our computers is a series of sequences. The program has to be aware of whether or not the conditions have been met to move to the next line of programming, otherwise, it wouldn't do anything. This implies no self-awareness, because unlike us humans, it does not have a section of programming that is designated as the "user".

We have to be careful of how we define it, because by such a strict definition, we are only partially self-aware ourselves. There are many bodily functions that our brain is responsible for that we are not aware of, because it is not sent to our "user" program, as it is not necessary to our interaction with our environment.
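To put the aware/self-aware distinction in concrete terms, here is a toy sketch of my own (Python, purely illustrative). The program "detects" whether the conditions for each step have been met before moving on, but nothing in it is designated as a "user" watching the sequence itself:

```python
# A toy sequencer: it checks whether the condition for each step is met
# before advancing ("aware" of its own state in the minimal sense),
# but nothing in it monitors the sequencer itself - there is no "user".

steps = [
    ("fill kettle", lambda state: state["water"] >= 1.0),
    ("boil water",  lambda state: state["temperature"] >= 100),
    ("pour coffee", lambda state: state["cup_present"]),
]

def run(state):
    for name, condition_met in steps:
        if not condition_met(state):
            return f"stuck at '{name}'"   # condition not met; cannot advance
    return "finished"

print(run({"water": 1.5, "temperature": 100, "cup_present": True}))   # finished
print(run({"water": 0.2, "temperature": 20,  "cup_present": False}))  # stuck at 'fill kettle'
```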

I would just say that I am pretty sure the one mechanical form guaranteed not to give such a result would be one built as an element by element replica of neural architecture, because that architecture itself is meaningless unless arrived at by the same (or a similar) process of development.

You are correct, I believe. :)
 
There is a fundamental difference between the way you teach a child and the way you can teach a computer. With a child, you point to a red tractor and say "red tractor". Eventually the child associates experiences of red and of tractors with the words "red" and "tractor". The only way you could do this with a machine is if the machine were conscious. If all it is doing is processing information then it simply cannot associate any meaning with the word "red". It is just a label that it is told applies to certain other things, which are also just labels. All the computer can do is shuffle symbols about.

That is what I have been saying - in order to teach a robot to understand human function, we have to give it the capacity (in regards to senses) to experience them in ways similar to humans.

Replace the computer in that example with someone who was born blind, and you will have exactly the same analogy.
 
Wrong again. Programs can make decisions based on their own operation, as well as on user input. In fact, this is extremely common.
? I have no idea what this even means. "Programs can make decisions?" I'm not sure you even know what you are talking about. Programs MUST follow their program. Period. Full stop. Nothing else. They don't make decisions. That is an illusion. Perhaps not unlike human decision making.

In any event, programs and rivers both follow logic. Rivers seek the course of least resistance. So do programs. Computers are composed of switches or gates just like the gates in an irrigation canal. By turning some of these switches on we allow current to flow. By turning some of the switches off we create resistance causing the current to stop in that direction. The electrical current in the computer is identical to the water current in the river in that both follow logic. They both seek the course of least resistance. Computer programs are simple Boolean logic circuits. They only do what they are programmed to do based on internal and external variables.

BTW, physical limitations aside I can theoretically create a computer of water and gates to perform any logic that a supercomputer can perform.
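For what it's worth, the "it's all just gates" point can be shown in a few lines (a toy sketch of my own, in Python). Nothing below cares whether a gate is made of transistors or of water and sluices:

```python
# Everything a computer does reduces to gates like these; the substrate
# (transistors, water valves, marbles) is irrelevant to the logic.

def nand(a, b):
    return not (a and b)

# Every other gate can be built from NAND alone:
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor(a, b):  return or_(and_(a, not_(b)), and_(not_(a), b))

# A half-adder - the first step toward arithmetic - is just two of them:
def half_adder(a, b):
    return xor(a, b), and_(a, b)   # (sum bit, carry bit)

print(half_adder(True, True))   # (False, True): 1 + 1 = binary 10
```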

So, you can argue that humans are precisely the same as both the computer and the river, and perhaps we are. But you cannot reasonably argue that a computer program is substantively different from a river as far as making decisions. They are demonstrably the same.

Tell me how you define consciousness, and I will give you a precise answer. If you can't define consciousness, then what are you asking the question for?
Bingo! You equate the ability of humans to ponder and debate what it means to be conscious to the coiled bimetallic strip of metal in a thermostat expanding and contracting in response to changes in temperature. You will only acknowledge a difference if I give you a precise definition. I don't have a precise definition. I only know that there is an appreciable difference between a coiled bimetallic strip of metal that expands and contracts in response to changes in temperature and a human brain that can experience pain, joy and wonder.

Now, human thought might just be the equivalent of billions of a biological version of coiled bimetallic strips but something amazing happens when they are arranged in a specific way. You and I can debate. You and I can want to feel smug at carrying the argument which is why we will debate for potentially many pages. If you see no difference then you are just blind.

As Daniel Dennett defines consciousness, any information processing system, and that includes something as simple as a thermostat, is conscious.
By this definition all dynamic systems are conscious. Trees, rivers, stars, wind, etc., all process information. And I'm willing to accept that they are conscious by some definition of the word. It's not at all a helpful definition as it relates to human consciousness. We cannot at this time get a computer to pass the Turing test, and we don't understand many aspects of human consciousness such as emotion and introspection. I'm quite certain that a thermostat does not have emotion or introspection. Or do you claim that it does? I'm a Dennett fan BTW and I agree with him. But his point is not relevant to this discussion of why humans do things that are markedly different from thermostats.

Other definitions will say that a simple thermostat is not conscious, but more complex industrial control systems are.
Which is simply semantics. A complex industrial control system is hardly different from a thermostat. Both are systems that follow logic. Neither, as far as we know, contemplates the meaning of its own existence. Neither would permanently end its functioning given the choice to do so. Humans do these things. Why? You want to define a word and hinge the entire discussion on that word. That's just a game.

What other possible basis is there?
Discussing and debating logic and not the meaning of words. I don't care what word you use to describe the demonstrable difference between humans and thermostats. Call it XYZ factor for all I care. I'm interested in the facts and logic behind the workings of human thought and not rhetorical devices intended to win debates.

So - show us the biological component. This isn't a premise, it's a conclusion.
I've drawn no conclusions. I've stated that to date we don't know why humans act differently from thermostats. How is it that we think we have free will? How is it that there seems to be a self? Do rivers dream? Why not? Is consciousness simply a dynamic system? Such a definition is so broad as to not tell us anything about the wide range of human sensations, thoughts and emotions.

I have stated time and again that to draw conclusions from ignorance is wrong and I'm not doing that. I'm stating two facts.

1.) We don't currently understand how or why humans experience this illusion of self.

2.) The only machines that experience such an illusion are biological ones.

Question: Is it possible that there is a biological component to consciousness, or some other component that we do not yet understand? Have we really made all of the discoveries that can be made about the basis for sentience, consciousness, self-awareness, emotion, sensation, etc.?

No-one has ever found any component of consciousness that is other than informational, except when applying definitions of consciousness that make no sense. Since it's a conclusion from evidence, it can be shown to be wrong by contradictory evidence.
I make no conclusions. I have asserted time and again that human cognition may just well be nothing more than complex algorithms processing information giving rise to illusions of self and awareness.

If I'm not drawing conclusions then please don't claim that I am.
 
? I have no idea what this even means. "Programs can make decisions?" I'm not sure you even know what you are talking about. Programs MUST follow their program. Period. Full stop. Nothing else. They don't make decisions. That is an illusion. Perhaps not unlike human decision making.

Would you rather use a term such as "detect"? In order to detect, something must be aware of what it is detecting. A computer program must detect whether or not the conditions are met to go to the next sequence of programming. So we must restart the semantical argument, all because we used the word "aware", which is... mystical? Or at least, we want it to be?

So, you can argue that humans are precisely the same as both the computer and the river, and perhaps we are. But you cannot reasonably argue that a computer program is substantively different from a river as far as making decisions. They are demonstrably the same.

It isn't even so much the decision-making as it is the simple fact that a computer program is programmed with an awareness - an ability to detect - where it is in its sequence. It is not the same as humans, because humans are not only aware, but self-aware.

Discussing and debating logic and not the meaning of words. I don't care what word you use to describe the demonstrable difference between humans and thermostats. Call it XYZ factor for all I care. I'm interested in the facts and logic behind the workings of human thought and not rhetorical devices intended to win debates.

How about "aware" vs. "self-aware"? Even by the most basic, mechanical definitions, these two terms make clear distinctions that anyone can understand. Is that acceptable? Isn't it redundant to have one definition for "aware" and the same definition for "self-aware"? Doesn't it make sense that there can be awareness of function (within the design, of course) without the self-referential awareness abilities that you are really pointing at when you use the word "aware"?
 
By acknowledging that it is abstract, we acknowledge that we need to agree on a specific definition simply to have a conversation. Just as if I were to say, "Help me find a beautiful house," I would first have to define "beauty".
Yes, agreed.

The consciousness of a cockroach is just as real as the consciousness of a human, but it is different. The awareness of someone who is deaf, blind, and dumb is just as real as anyone else's, but it is different.
Only by a very broad definition of the word.

Human consciousness alone is abstract in that it is based on the condition of being human (our five senses and bodily capacity), so when you consider consciousness as a whole, terms can vary greatly. So in order to have a discussion, we need to give a definition to consciousness that will be used in the discussion.
I don't have a problem with that. I simply have a problem with those who say cockroaches and humans are the same and there is no significant difference.

A river is not aware of its course, because there is no mechanism spurred by a processing of information. In order for it to be aware, there has to be a processing of information. How so is this a meaningless distinction?
Then you don't really understand how computers and computer programs work. Computer programs create the illusion of doing something different from rivers, but they aren't doing anything different (see above).

It is aware - it is conscious - of whether or not the conditions are met to run its next sequence.
In a broad sense, yes. But then so are a river and any other dynamic system. If you have ever constructed a simple computer using breadboards and transistors you would understand that there is nothing special about the inner workings of a computer to differentiate it from a river. Switches (resistance or no resistance) are set to allow electrical current to flow or not flow. The example Dennett gives (thanks Pixy) is quite appropriate. Logic. That's it. That's all that is going on. Humans set the switches to control the flow of the electrical current based on THEIR (the humans') decisions.

It is only aware of the order of sequence and conditions needed to move from one sequence to another.
No. No more than a river is aware of the order of sequences and conditions needed to move from one sequence to another.

That is all it is programmed to be aware of.
No, that is all that it is programmed to do.

I apologize, I did not mean to patronize. Since definitions do not change the reality, then we should be able to have a discussion using any definition of any term, as long as it does not disagree with the reality itself.
No argument.

If you don't know what it is, then how can you discuss whether or not it can be programmed into a machine? Again, I don't mean to sound patronizing, but this is where we need definitions. Unless you can offer one of your own, then you will need to accept ours.
I'm only interested in whether there is a difference between thermostats, cockroaches, rivers and human thought and consciousness. A broad definition that includes all dynamic systems is not a useful one. I think human consciousness is appreciably different. I don't know why, but I can enumerate the differences. I would say that these differences are reason to see human consciousness as different from rivers, thermostats and cockroaches.

Consciousness is nothing - it's the letter "C", the letter "O", the letter "N", etc. It's only a word. Its definition is however we use it. Socially, its definition is whatever we agree on it being. Since there isn't even a proper scientific definition, that tells us that the majority of the populace has not come to an agreement. So, in order to have a conversation, we must come to 'some' agreement of the definition, even if the life of that definition is only for the span of the one conversation.
I don't necessarily disagree. However, I'm more interested in people acknowledging that there are demonstrable differences between human consciousness and all other dynamic systems that we know of.

1.) We are the only machines or dynamic systems that we know of to have an illusion of self and for this illusion to ponder the meaning of the illusion and to debate with other illusions as to the meaning.

2.) We are biological machines. How does our biology shape and to what extent does it contribute to, if at all, human consciousness?

Since it doesn't change reality, the definition need only not conflict with reality. As you can see, this is very much about semantics.
Not for me. I think we understand what humans can do that thermostats and cockroaches can't. Call that difference whatever you like. Debate the word if you want, but to me it is all a waste of time. Instead focus on the differences between human consciousness and the consciousness of thermostats or rivers or cockroaches.

What is the difference and why? I don't care what the end result is. I only care that there is a difference and I would like to understand it. I'm not going to dismiss casual relationships, because at the moment they remain casual and not causal.
 
We've put a little thought into this in another thread. I believe you are right about the biological and growth component, but that aside, let's examine how to do this in the most rudimentary sense.

For each sense, we need at least one processor solely dedicated to that sense alone. So, at least one processor for vision, one for sound, touch, etc.

We need a master component that controls all of the functions of the 'organism', such as relating information between itself and the sense processors.

Finally, we need a separate program that receives messages from the master component concerning only those interactions that are necessary for the organism-machine to function in the environment. This is our "consciousness" mechanism, and does not receive any other signals than it needs for environmental interaction.

This, I believe, should result in at least the kind of awareness an insect has.
Supposing an insect to be aware, perhaps so. But in your own opinion, is this a model of insect awareness or is it real awareness?
(If you think that's unanswerable, that's a fair answer too)
Do you recognize a difference between "aware" and "self-aware"?
I'm not honestly sure. I'd say not. Something aware is aware of itself being aware. Of course I only have one sample to work from- and that only aware on good days. I'm not convinced insects are aware for example. Responsive (in the thermostat sense) yes, but not aware.
The computer program that is running our computers is a series of sequences. The program has to be aware of whether or not the conditions have been met to move to the next line of programming, otherwise, it wouldn't do anything. This implies no self-awareness, because unlike us humans, it does not have a section of programming that is designated as the "user".
Hm- I feel you are hedging a bit here. If defining a few memory registers as the "self" is all it takes to make a program "self aware" then we reduce the whole question to one of definitions. I would say rather that the program (or computer- and there's another debate) has no self awareness because there is no self to be aware. But it's all definitions again, isn't it?
We have to be careful of how we define it, because by such a strict definition, we are only partially self-aware ourselves. There are many bodily functions that our brain is responsible for that we are not aware of, because it is not sent to our "user" program, as it is not necessary to our interaction with our environment.
:)
Oh I operate in a dwam much of the time. It appears to be adequate. On the other hand, there may be more than one aware subroutine in our heads.

I suppose at the end of the day it comes down to the qualia thing. I sit here, looking at a screen, aware of the fact it has got dark outside, aware of the falling temperature and the fact I'd like a cup of coffee and I just grok the oneness of it all and think I may have a glass of Merlot instead and I think- How likely is it that a thermostat would ever behave like this...?:confused:

ETA- And if one did, how soon would I replace it?
 
Would you rather use a term such as "detect"? In order to detect, something must be aware of what it is detecting. A computer program must detect whether or not the conditions are met to go to the next sequence of programming. So we must restart the semantical argument, all because we used the word "aware", which is... mystical? Or at least, we want it to be?
That would be fine. Bear in mind that rivers detect rocks and metal detects heat.

It isn't even so much the decision-making as it is the simple fact that a computer program is programmed with an awareness - an ability to detect - where it is in its sequence. It is not the same as humans, because humans are not only aware, but self-aware.
I find such terminology misleading. A program acts or doesn't act based on state. Nothing more, nothing less. So long as you acknowledge that all caused events, including water running downstream, are due to awareness, then I'm fine with that, but understand that it does nothing to help us advance the understanding of human awareness.

How about "aware" vs. "self-aware"? Even by the most basic, mechanical definitions, these two terms make clear distinctions that anyone can understand. Is that acceptable? Isn't it redundant to have one definition for "aware" and the same definition for "self-aware"? Doesn't it make sense that there can be awareness of function (within the design, of course) without the self-referential awareness abilities that you are really pointing at when you use the word "aware"?
It's all problematic. I'm happy to use such terms but they are still debatable to anyone who is more interested in semantics than a discussion of the differences between humans and thermostats. I'm on board. We will see what others say.
 
1.) We are the only machines or dynamic systems that we know of to have an illusion of self and for this illusion to ponder the meaning of the illusion and to debate with other illusions as to the meaning.

2.) We are biological machines. How does our biology shape and to what extent does it contribute to, if at all, human consciousness?

1. Isn't that the definition of self-awareness? Why have two terms with the exact same definition?

2. Is someone who has no senses (no taste, touch, hearing, sight, smell) conscious? Even by your definition? They would be considered in a coma. If consciousness were defined as sense "+", then taking away the senses should still leave the "+". By your definition, it doesn't; this person is not conscious in any sense - literally.

By using my definition, it still does leave the "+", as long as the brain is still maintaining bodily functions of internal organs. That person cannot be self-aware, but their body is at least aware of what is happening internally.

It is no different than the computer running through lines of programming; if the body is aware of what is happening internally during this hypothetical coma, then why should the same definition not be applied to a program being aware of what is happening inside of its sequences?

That is why I use a broader definition of aware, and leave the more specific definition to self-aware.
 
Supposing an insect to be aware, perhaps so. But in your own opinion, is this a model of insect awareness or is it real awareness?
(If you think that's unanswerable, that's a fair answer too)

I think the fact that insects show displays of self-preservation also shows a type of self-awareness, in that they are capable, within the limitations of their senses, of interacting with their environment; and in the case of social insects, preservation of the colony, I think, shows self-awareness, if not civil-awareness. It is just as real as humans' - however, the NES is just as real as a GameCube. Obviously, one is of a greater capacity.

That computer was only one example to show that it can be self-referential.

If defining a few memory registers as the "self" is all it takes to make a program "self aware" then we reduce the whole question to one of definitions. I would say rather that the program (or computer- and there's another debate) has no self awareness because there is no self to be aware. But it's all definitions again, isn't it?

The "self" is the program in its entirety.

I suppose at the end of the day it comes down to the qualia thing.

I don't accept qualia; there's no proof. It can be explained more logically by saying that our self-awareness is completely dependent on our five senses and how we use them. The very definition of self-awareness attests to that.

I sit here, looking at a screen, aware of the fact it has got dark outside, aware of the falling temperature and the fact I'd like a cup of coffee and I just grok the oneness of it all and think I may have a glass of Merlot instead and I think- How likely is it that a thermostat would ever behave like this...?:confused:

It can't; it doesn't have the senses needed to see the computer screen, taste coffee or wine, etc.

Are you more human than Helen Keller, who wouldn't have been able to see the computer screen or hear music? Would you say there are varying degrees of awareness, using the example of comparing yourself to what Helen Keller was capable of being aware of?

ETA- And if one did, how soon would I replace it?

When it started mouthing off and getting smart. :P
 
That is what I have been saying - in order to teach a robot to understand human function, we have to give it the capacity (in regards to senses) to experience them in ways similar to humans.

Yes, and the problem is that we can't! In principle, not because we lack the technology.

Replace the computer in that example with someone who was born blind, and you will have exactly the same analogy.

Not quite. Although it is true with respect to the meaning of the word "red". A blind person can nevertheless have an experience of a tractor.
 
Not quite. Although it is true with respect to the meaning of the word "red". A blind person can nevertheless have an experience of a tractor.

But you agree that the comparison to the color "red" is the same. Do you agree that there are varying levels of awareness?
 