
Explain consciousness to the layman.

The weight of the ice is what breaks the branch. The particles in the ice behave the same whether they're part of a patch heavy enough to break the branch or not.

Huh?

If they behave the same, then why does the branch break in one case and not the other?

Magic beans again?
 
To me, the metaphorical phrase "information flowing in [the] brain" has no clear meaning at all.

How about the extremely simple to understand meaning like "whatever is going on in your brain when you are not dead" ?

I would like you to explain why "whatever is going on in your brain when you are not dead" is so fundamentally different from "whatever is going on in a computer when it is not turned off," given that both transistors and neurons, our brain and electronic circuits, are so similar to each other when compared to everything else in the universe.
 
Keep your eyes peeled for the post on why the sim can't replace a brain. I'll explain more fully there. Maybe tomorrow, but I'm out of state this weekend, so maybe not til next week.

I can't wait to read it. Except, oh wait -- nobody ever claimed a sim could replace a brain.
 
If I could extract a dollar from every person who is 100% certain of things purely out of wishful thinking and the desire to alleviate cognitive dissonance, I would be as rich as the Pope and Donald Trump... oh wait... isn't that in fact how they do it?

Are you certain that when you take a step onto the sidewalk that you won't fall through the Earth and go flying into space on the other side?

Is your certainty based on "wishful thinking" and "cognitive dissonance?"
 
Based on what evidence?

Your so-called "information theory" of consciousness is little more than a conjecture based on a mathematical model.

Where are the independently verifiable observations of this machine consciousness that it models?

Well, for starters, you could feed the simulated person video information about the external world, so he would see you. He might comment on how you look or what he sees behind you.

You could speak to him if we fed him audio information, and he could speak back, because it is trivial to generate audio signals from digital information.

If we hooked up the simulation to mechanical arms or sensors he could interact with the world outside the simulation just like a doctor can remotely operate on a patient using touch feedback devices.

So you could pretty much verify that this person was conscious to the same extent you can verify that anyone else is conscious when they are not literally right in front of you. Do you doubt that the rest of the world holds conscious people just because you see them on the nightly news instead of in person?
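The hookup described above is just an input/output loop around the simulation. A minimal sketch, where `SimulatedPerson` and all of its methods are invented names for illustration, not any real simulation API:

```python
class SimulatedPerson:
    """Hypothetical interface to a brain simulation; all names are invented."""

    def __init__(self):
        self.frames_seen = 0
        self.heard = None

    def receive_video(self, frame):
        # Feed visual information about the external world into the simulation.
        self.frames_seen += 1

    def receive_audio(self, samples):
        # Feed auditory information into the simulation.
        self.heard = samples

    def next_utterance(self):
        # Read back whatever the simulated person "says" in response.
        return f"I saw {self.frames_seen} frames and heard {self.heard!r}"


sim = SimulatedPerson()
for frame in [b"frame1", b"frame2"]:
    sim.receive_video(frame)
sim.receive_audio("how do I look?")
print(sim.next_utterance())
```

The point of the sketch is only that the channel is ordinary data in both directions, so verifying the simulated person behaves consciously is the same kind of exercise as verifying it over a video call.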
 
No, it's not.

And if you didn't have the hubris which makes you believe that studying non-conscious machines grants you the expertise to contradict the people who study conscious brains, then you might see that.

Pixy's definition of consciousness would leave you with nowhere to begin if you tried to use it to study actual conscious objects.

It is not useful at all. That's why no one who actually studies consciousness uses it.

I think you're mistaken. It is a very useful tool, but you don't realise what it's useful for.

Clearly, as a tool for understanding consciousness as commonly understood, it's hopeless. If we really believed that it was true, we'd ignore human brains as research tools for consciousness altogether. Human brains are really difficult to experiment on. All kinds of experiments have to be ruled out on ethical grounds. We have to wait for people to suffer horrendous brain injuries to investigate what happens when certain functions are switched off.

If we can convince ourselves that consciousness is something that can be studied in a mechanism, think how much easier life becomes. It isn't necessary to consider humans at all. Just produce ever more complex programs, and if you've described them all as conscious in the first place, then just by running the programs, you've studied consciousness!

But this convenience is not the main thing. The most important thing is what has been, particularly here, the computationalist mantra - humans aren't special. They mustn't be unique, and a property that appears to be confined to humans, or other living things, is just not acceptable.

An honest definition of consciousness would recognise and admit that it has been observed only in humans, surmised in higher animals, and appears to be entirely absent in inanimate matter. That would be the starting point, and a confident materialist would then look for the unique physical properties that are associated with human consciousness and discover how they could be reproduced.
 
I guess you've heard of the Turing test (computers will have achieved intelligence when it becomes impossible to distinguish them from humans in a conversation). Someone once asked Turing how we would know the computer was thinking, and Turing's reply was to ask how he (Turing) could ever know his questioner was thinking.

Well that illustrates the problem with approaching consciousness in terms of "whether one is thinking" instead of "how one behaves."

I love my wife, and I like to think she is conscious, it makes my time with her more enjoyable.

If she happened to be a perfectly programmed sneaky Chinese room android, who just happened to know the exact behavioral response to every possible event she would ever encounter in her existence, such that she appeared conscious to every external observer... who cares?

How do we know we aren't sneaky Chinese room androids programmed to think we are conscious? We don't. Who cares?

Most people who actually do research in these fields, or actually do professional work in these fields, aren't after the magic bean of thought itself. They are after ways to simply get the same behavior they observe in conscious beings. If I can figure out how to get a robot to pass the Turing test, I'm done. There is nothing more to it, as far as I am concerned. I don't really care if it is really thinking or not, any more than I care if I am really thinking or not.

More to the point, if we manage to get hardware technology to the point where we can upload ourselves, and I am running around in a virtual world enjoying myself, I don't really care if westprog is sitting there in the external world trying to tell everyone that I am no longer conscious. Well, I care if he convinces people to turn off the power to the computer running the simulation -- which is probably why I participate in these arguments. The thought of people treating me like ... a computer program ... just because I happen to be ... a computer program ... scares me. But if I feel conscious in the simulation, who could convince me otherwise?
 
Yes, I disagree. (Except, of course, with the bit about not being conscious after you're dead.)

To me, the metaphorical phrase "information flowing in [the] brain" has no clear meaning at all.

It's never been properly defined - in fact, I don't think it's supposed to be defined. It's certainly the case that "information is flowing" in the brain when a person is awake, asleep and dreaming, asleep and not dreaming, or even in a coma. In fact, if we use a physical definition of information flow, information is certainly flowing around the brain after death. Maybe even more information.
 
But this convenience is not the main thing. The most important thing is what has been, particularly here, the computationalist mantra - humans aren't special. They mustn't be unique, and a property that appears to be confined to humans, or other living things, is just not acceptable.

Lemme fix that for you:

But this convenience is not the main thing. The most important thing is what has been, particularly here, the computationalist mantra - humans aren't magical. They mustn't be magical, and a magical property that appears to be confined to humans, or other living things, is just not acceptable.

I certainly agree with the fixed version.
 
Well, for starters, you could feed the simulated person video information about the external world, so he would see you. He might comment on how you look or what he sees behind you.

You could speak to him if we fed him audio information, and he could speak back, because it is trivial to generate audio signals from digital information.

If we hooked up the simulation to mechanical arms or sensors he could interact with the world outside the simulation just like a doctor can remotely operate on a patient using touch feedback devices.

You have made some excellent suggestions for testing the validity of such a simulation as a proxy for human behaviors. You might even be able to tease out some useful predictions concerning consciousness from this. Yet you have not convinced me that even the most detailed and accurate map is ever equivalent to the terrain that it models.

So you could pretty much verify that this person was conscious to the same extent you can verify that anyone else is conscious when they are not literally right in front of you. Do you doubt that the rest of the world holds conscious people just because you see them on the nightly news instead of in person?

Kindly spell out the specific methodology you had in mind for such a verification. I'm not 100% convinced that everyone I converse with is conscious even when they are right in front of me. :-/
 
You always end up with a bootstrapping problem with the "consciousness as illusion" proposition.

One of the biggest issues with philosophy over hundreds of years has been - how do we know that anything exists? It's been something that everyone from Berkeley to Descartes has had to wrestle with. There are a range of views on this - from everything is an illusion to cold hard reality. They are all at least self-consistent.

However, the idea that consciousness is an illusion turns the whole thing backwards. We have to accept that the real world is out there, capable of producing robots and planets and the internet. However, while the world is definitely not an illusion - oh no, no way - our perceptions of the world are an illusion. We aren't really perceiving the world at all. But it's still there.
 
Are you certain that when you take a step onto the sidewalk that you won't fall through the Earth and go flying into space on the other side?

Is your certainty based on "wishful thinking" and "cognitive dissonance?"
Nope, based on verifiable facts from experience, unlike "Read GEB and 'I Am A Strange Loop', SRIP, Church-Turing" = consciousness.
 
It is not a "gripe"..... it is an objection.... look up the words in a dictionary if you do not quite know the difference.

My objection is not to the fact that under his definition it is not remarkable. My objection is to his definition which renders it not remarkable.
Okay, so are you only accepting definitions which render consciousness remarkable?
I put the question to you:
Do you think achieving consciousness in a computer would be a remarkable milestone in the field of AI and other fields of science?
In order to honestly answer this question, we need a scope on what consciousness is. The question literally translates to how difficult it is to achieve consciousness--if consciousness is easy, then no, it would not be a remarkable milestone in the field of AI and other sciences to achieve it. If consciousness were difficult, then yes, it would be a remarkable milestone.

On the other hand, the human mind itself is immensely complex. Not only would it be a remarkable milestone in the field of AI and other fields of science to achieve a similar degree of functionality to the human mind, but there are in practice a lot of very remarkable milestones achieved in engineered computation in the field of AI and other fields of science merely approaching various functions of the human mind--let's call these the "impressive features". Then again, there are also capabilities of engineered computations today that far exceed other functions of the human mind, but which nevertheless aren't all that remarkable.

Now, I can think of a lot of impressive features of the human mind. Human agency, for example, is vastly complex, and extremely impressive. Human perceptual capabilities are astounding--just the idea of a generalized image recognition program on par with human capabilities makes me drool. As I understand it, the state of the art in generalized voice recognition in AI has about a 21% error rate, compared to about 15% for humans (source here is indirect--basically Hinton's newer Google talk). The problem here is that all of these really complicated, impressive features I can think of that the human mind is capable of can be done without conscious involvement whatsoever; likewise, it's also impressive just how banal a thing I can sit here being self-aware of, while conscious.

So while I'm not going to reject the notion that consciousness will wind up being impressive, when I really look at it, it appears that all of the really impressive things I can think of aren't done by it. Maybe they're prerequisites for consciousness somehow, I'm not sure. But I think I know a bit too much about the subject to just swallow, without an argument advanced for it, that consciousness is going to be this enormously complicated cherry on top of these already complicated things.

Now, it might be that it would be cause for quite a party to nail down exactly what this consciousness cherry is, but that's a different matter than the question you asked entirely--which was, how impressive it would be for computers to actually achieve it.

So your answer is in effect, I don't know. Give me an operational definition and I'll give you an opinion.
I am not talking about Pixy's definition…. I am talking about people who work and research in these fields…….
There are a lot of people who work and research in the area of the mind, and there are a lot of fields. Could you narrow it down? Point me to some specific person's operational definition of consciousness.

Alternately, just point me to some operational definition at all that's not specific. For example, a common operational definition of consciousness is what a subject reports on by self report. That's on par with the human mind complexity of perception, judgment, reflection, and so on. Give me a machine that does this and I would summarily be impressed.

OTOH, by this definition, my cat isn't conscious. Poor kitty.
Do you think that achieving conscious computer programs would be a remarkable event or not? What is your opinion?
I might also remind you that nobody is compelled to have a particular opinion.
If I formulate an "operational definition" that enables me to claim that I am bodily flying every time I hop a few feet off the ground, and then based on that "operational definition" claim that people are flying all the time and that it is a very simple technique to do so... would YOU accept my assertions?
Sure. Obviously you're just trying to be silly, but that's what an operational definition is for.

But let's turn this around. Let me propose a particular operational definition of consciousness--consciousness is what you claim you are doing when you feel that you are conscious. That's right, my operational definition for this particular question is entirely, 100% a ball in your court. In return, I only ask that you answer it honestly.

Think of what is necessarily true about consciousness given this operational definition. In particular, whatever consciousness entails, you had better be doing that thing* at each and every moment you're going to claim you are conscious (because that's the deal--if we're going to define consciousness this way, then claiming you are conscious ipso facto means you're doing whatever consciousness entails at all times you are conscious; you're allowed to take breaks only when you slip into unconsciousness). Now given this criterion, can you name one thing besides what PixyMisa included that you are doing when you're conscious?
Don't you think that the "operational definition" should be at the very least accepted by Neuroscientists whose SCIENTIFIC career is to formulate such a definition?
There is no commonly agreed on operational definition of consciousness by neuroscientists.+
And if not then at the very least discuss it with them and show them why they are wrong?
What do you mean by "wrong" in this context? Using the wrong operational definition?

--
*Technically speaking, this doesn't have to be a singular thing, but if it is "must be A, B, or C", then you'd better be conscious when you're doing none of A, nor B, nor C.

+There's also nothing that guarantees that there ever will be. It's feasible that the entire field will ditch all efforts to speak of "consciousness" per se--at least in this context--and would just use an entirely different vocabulary.
 
Well that illustrates the problem with approaching consciousness in terms of "whether one is thinking" instead of "how one behaves."

I love my wife, and I like to think she is conscious, it makes my time with her more enjoyable.

If she happened to be a perfectly programmed sneaky Chinese room android, who just happened to know the exact behavioral response to every possible event she would ever encounter in her existence, such that she appeared conscious to every external observer... who cares?

How do we know we aren't sneaky Chinese room androids programmed to think we are conscious? We don't. Who cares?

Most people who actually do research in these fields, or actually do professional work in these fields, aren't after the magic bean of thought itself. They are after ways to simply get the same behavior they observe in conscious beings. If I can figure out how to get a robot to pass the Turing test, I'm done. There is nothing more to it, as far as I am concerned. I don't really care if it is really thinking or not, any more than I care if I am really thinking or not.

More to the point, if we manage to get hardware technology to the point where we can upload ourselves, and I am running around in a virtual world enjoying myself, I don't really care if westprog is sitting there in the external world trying to tell everyone that I am no longer conscious. Well, I care if he convinces people to turn off the power to the computer running the simulation -- which is probably why I participate in these arguments. The thought of people treating me like ... a computer program ... just because I happen to be ... a computer program ... scares me. But if I feel conscious in the simulation, who could convince me otherwise?

Thanks. I find that helpful. Not sure about the 'who cares' part though. I am interested in my own consciousness and feel it is a reasonable enough inference, even if it cannot be proved, that you (and your wife) have the same experience of the world that I do.

I still wonder why the simulation going on in my head not only enables me to avoid falling down holes and bumping into things, to find food, and to interact with other humans to my advantage (or not!), but also gives me an awareness that these things are happening. Surely I would function just as well without the awareness, like an ant (which I assume, without being able to back it up, has no consciousness) or a computer (ditto).

I once heard an interesting conversation about AI in which one of the participants tried to argue that a thermostat has two thoughts: 'it's too warm' and 'it's too cold'. I suppose a similar argument could be made for a toaster. How do we know a toaster doesn't think 'the toast is done' before popping up?

I don't think a toaster or a thermostat has thoughts, but I cannot say why.
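For what it's worth, the thermostat's entire "mental life" in that argument can be written out in full in a few lines. A minimal sketch (the setpoint value is made up for illustration):

```python
def thermostat(temperature_c: float, setpoint_c: float = 20.0) -> str:
    """Return the thermostat's one 'thought' about the current reading."""
    if temperature_c > setpoint_c:
        return "it's too warm"   # would switch the heater off
    return "it's too cold"       # would switch the heater on


print(thermostat(25.0))  # it's too warm
print(thermostat(15.0))  # it's too cold
```

Whether a single conditional deserves to be called two "thoughts" is, of course, exactly the point under dispute.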
 
If I can figure out how to get a robot to pass the turing test, I'm done. There is nothing more to it, as far as I am concerned. I don't really care if it is really thinking or not, no more than I care if I am really thinking or not.
Well, there is your problem: if you're not thinking about the robot, it won't pass the Turing test. Or maybe that's just your plan. Define thinking as not thinking ;-)
 
Okay, so are you only accepting definitions which render consciousness remarkable?

We start with the obvious fact that consciousness - the word - refers to something to do with the experience of being human. We then continue to the other obvious fact that we don't know, precisely, how it arises.

The SRIP definition has the flaw that it has nothing to do with the first part, and it ignores the second part. Show that definition to a visiting alien, and he would assume that it was something to do with computer science. He would have no reason at all to associate it with human brain function.
 
So your answer is in effect, I don't know. Give me an operational definition and I'll give you an opinion.

I still think Piggy's is the closest to an objective description of what is apparently a subjective phenomenon. Consciousness is a property of a human mind when awake or dreaming, that is absent in deep sleep or other brain states.*

This is not quite a definition, but if you wanted to get across to someone what he meant by consciousness, it would be a reasonable way to do it. I think most laymen would agree that someone is conscious when awake or (probably) dreaming, but not otherwise.

This definition also doesn't start with an explanation. SRIP explains consciousness without defining what it is. Piggy's definition leaves all possible explanations open. It insists neither that consciousness is inherently biological in nature, nor that it is open to computer programs. It leaves the discovery of the difference between the states of the human mind as something to be achieved.

SRIP appears precise - but it leaves almost everything open. What is the "self" that is being referenced? What is "information"? What does it mean to "process information"? What does "referencing a self" mean?

The explanations for all these things aren't achieved by precise definitions - they are achieved by examples from the computer field. It's then asserted that the brain kinda sorta works the same way. There have been a number of flat, referenced assertions by neuroscientists that we don't know the brain works like a digital computer, and some usually unreferenced claims that we do know that it does.

Aside from that, SRIP is fine.
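To make that objection concrete: "self-referencing information processing", read literally, is satisfied by a few lines of code. Everything below (the class and method names) is invented purely for illustration:

```python
class ToySelfModel:
    """A trivial 'self-referencing information processor' (invented name)."""

    def __init__(self):
        self.state = {"steps": 0, "last_input": None}

    def process(self, datum):
        # "Process information": update internal state from input.
        self.state["steps"] += 1
        self.state["last_input"] = datum

    def report_on_self(self):
        # "Reference the self": report on the program's own state.
        return (f"I have processed {self.state['steps']} inputs; "
                f"last was {self.state['last_input']!r}")


m = ToySelfModel()
m.process("hello")
print(m.report_on_self())
```

If the bare wording admits something this trivial, then the real work is being done by unstated restrictions on "self", "information", and "processing" - which is the gap the post above is pointing at.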
 
Thanks. I find that helpful. Not sure about the 'who cares' part though. I am interested in my own consciousness and feel it is a reasonable enough inference, even if it cannot be proved, that you (and your wife) have the same experience of the world that I do.

I think that a concern as to whether your wife (or even husband) is experiencing happiness or sadness is an important part of marriage. I suppose it is also true that many people would settle for their spouse giving a good simulation of happiness, regardless of some theoretical inner state.
 
I still think Piggy's is the closest to an objective description of what is apparently a subjective phenomenon. Consciousness is a property of a human mind when awake or dreaming, that is absent in deep sleep or other brain states.*

This is not quite a definition, but if you wanted to get across to someone what he meant by consciousness, it would be a reasonable way to do it. I think most laymen would agree that someone is conscious when awake or (probably) dreaming, but not otherwise.

This definition also doesn't start with an explanation. SRIP explains consciousness without defining what it is. Piggy's definition leaves all possible explanations open. It insists neither that consciousness is inherently biological in nature, nor that it is open to computer programs. It leaves the discovery of the difference between the states of the human mind as something to be achieved.

SRIP appears precise - but it leaves almost everything open. What is the "self" that is being referenced? What is "information"? What does it mean to "process information"? What does "referencing a self" mean?

The explanations for all these things aren't achieved by precise definitions - they are achieved by examples from the computer field. It's then asserted that the brain kinda sorta works the same way. There have been a number of flat, referenced assertions by neuroscientists that we don't know the brain works like a digital computer, and some usually unreferenced claims that we do know that it does.

Aside from that, SRIP is fine.

The definition you cite reminds me of one a barrister once gave in conference. We were discussing a provision in a lease which obliged the client to operate a 'high class' hotel and the barrister's interpretation of that was 'not low class'. LOL.

Come to think of it, I subsequently advised on another lease provision requiring a residential property to be used in single occupation. Perhaps taking a cue from the earlier example, I said it meant: not multiple occupation, a definition with which the court agreed.

So, consciousness is not unconsciousness. Hmmm. I think we may need a definition of 'definition'.
 
Based on what evidence?

Your so-called "information theory" of consciousness is little more than a conjecture based on a mathematical model.

Where are the independently verifiable observations of this machine consciousness that it models?
Humans.
 