Explain consciousness to the layman.

Status
Not open for further replies.
I agree that this is an abstract discussion. The optimism of the early days of Artificial Intelligence has disappeared now. We didn't get conscious computers after thirty years, or fifty years. We can settle down for a long wait.
Our ability to build conscious computers is dependent on our understanding of how consciousness works. It is probably NOT dependent on computing power.
(There might be a performance threshold we would have to get past, but chances are, we have passed it already.)

The more we learn about how consciousness works, the more conscious-like our computing can become.... and guess what... this trend works! We ARE learning more about how consciousness works, every year; and our computers ARE becoming more intelligent and conscious-like as a result of some of this research.

We can't expect fully conscious computers to sprout overnight, one amazing day. This is a technology that will.... emerge... over time, over various stages.

Your appeal to "optimism disappearing" is irrelevant.
 
You mean a stable attractor?

I think there's a more precise synonym used a lot in computational neuroscience, but damned if I can think of it just now.

Yes, that is part of it -- the way a system maps a large set of inputs to a smaller set of internal states depends on the existence of stable attractors, or local minima in the configuration space, as I call them.

If there are no local minima (stable attractors), then the set of internal "states" would be effectively infinite, or effectively arbitrary. The only way you could say a system was in a particular state is if it corresponds to a precise particle configuration, OR you just arbitrarily label a set of particle configurations as a "state."

However, if you base the definition on minima (attractors), it is different -- it is completely valid to refer to the "state" of a system.

For example, even though there is an infinite number of particle configurations in a cell, there is a discrete number of minima (attractors), by any metric you could measure. If you partition the state space of particle configurations according to which minimum the system will converge upon (or attract to, as it were), you can say the cell is "in state X" any time the current exact configuration would converge on attractor X, all else being equal.
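The idea is easy to sketch in code. Here's a toy example (mine, not from anything above): a one-dimensional energy landscape with exactly two minima, where gradient descent partitions a continuum of starting configurations into just two discrete "states" according to which attractor they fall into. The landscape and labels are of course invented for illustration.

```python
# Toy energy landscape with two local minima (attractors),
# one near x = -1 and one near x = +1:
#   E(x) = (x^2 - 1)^2
def energy(x):
    return (x * x - 1.0) ** 2

def grad(x):
    # dE/dx = 4x(x^2 - 1)
    return 4.0 * x * (x * x - 1.0)

def attractor_of(x, lr=0.01, steps=10000):
    """Follow the gradient downhill until the configuration settles,
    then report which minimum it converged to."""
    for _ in range(steps):
        x -= lr * grad(x)
    # Infinitely many starting points, but only two possible labels:
    return "state A (x = -1)" if x < 0 else "state B (x = +1)"

print(attractor_of(-0.3))  # state A (x = -1)
print(attractor_of(0.7))   # state B (x = +1)
```

Every starting configuration (except the unstable ridge at exactly x = 0) gets one of two labels, which is what makes "the system is in state X" a well-defined claim despite the underlying continuum.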
 
A robot will never be able to experience actual pain sensation because it's not real life. It may get a simulation of it, but not the real thing, in my opinion.

You are certainly welcome to that opinion, but without any evidence or logic to support it, well, it's worth about as much as my opinion that birds don't sleep if you're not watching them.
 
It would be perfectly feasible to provide an objective, precise definition of walking that would allow us to once and for all answer the difficult question "can machines walk?" We could debate which is the real definition, of course - but at least we could take each definition in turn and provide objective answers in each case.

I've noticed that the more I ask for precision and objectivity, the more the real agenda is supposed to be magic and religion. It's a strange kind of theory where precision and objectivity are seen as such a threat. If the theory is really that sound, precision and objectivity will improve it. It's only if it's essentially waffle that it will evaporate away.

What's amazing, but not surprising given our experience, is that yet another thread about consciousness -- a function of the brain which is not performed by any machine that we know of -- is hijacked yet again by people who not only want to discuss machines (which are not conscious) but also simulations run by machines (which cannot be conscious) and who seem to want to have nothing to do with discussions of the brain.

Why?

Well, if they had to knuckle down and discuss the brain, they wouldn't be able to build philosophical castles in the air.

But the fact is, no hypothesis is worth much until and unless it's verified against the physical world.

Einstein, for instance, used a type of modeling to determine that gravity should bend light.

Did everyone cheer and declare it true?

No.

It was only accepted after it was tested by observing the light from stars during an eclipse.

Yet the computational literalists on this thread appear to want no truck at all with confirmation.
 
Actually, when you consider from back near the start of the thread that the only definition of consciousness which didn't rely on subjective "I knows it when I sees it" metrics was "consciousness is whatever the brain does," then yes, a simulation of whatever the brain does will do whatever the brain does. That's kind of the point.

If you're not claiming any special pleading, then you must also believe that a simulation of a power plant will really do, in the real world of objective physical reality, what a power plant really does in the real world.

And we know this is not true.

And I can give you a more precise definition of consciousness, although it's necessarily a functional one since everyone admits we don't yet understand enough about the brain to describe how it's doing what it's doing.

Consciousness is what the brain is doing when you are awake, or asleep and dreaming, which it is not doing when you are asleep and not dreaming or when you are under profound sedation. It generates an experience, which is also a sense of self.

When you are asleep and not dreaming, or under profound sedation, there is no sense of self and experience going on in your body. Your body is not conscious.

When you are reading these words, there is a sense of self and experience going on in your body. Your body is conscious. The brain in your body is doing consciousness.

How it does that, we don't know.
 
Certainly we are not able to guarantee that the functionality of the brain will be adequately represented by a digital computation.

How anyone can claim otherwise is beyond me.

Until we understand what's to be modeled, then while there will still be a lot of modeling we can learn from, we're not going to learn anything fundamental about consciousness directly from the model because we have no way of knowing if it's accurate or not.

There may be any number of possibilities that could be right or wrong and we'll never be able to decide until we figure out exactly what we're looking for.

It's like if I want to know what makes my truck run, I could look at my neighbor's remote control model truck, and I can observe that it outwardly acts just like my truck... it can go forward and backward, turn left and right, back up, stop, speed up and slow down, carry loads, bump into things, and so forth. (A veritable p-truck.)

But this doesn't mean I can look inside the remote control truck and draw conclusions about what's under the hood of my truck.

We're going to figure out how the brain consciouses by studying the brain. With the help of computers, yes, but only biology will (or can) answer this question in a way that's confirmable.
 
Yet the computational literalists on this thread appear to want no truck at all with confirmation.

It's noteworthy that absolutely no objective tests have been proposed to verify the hypothesis. The Turing Test is entirely subjective, and it's obvious that whether or not someone considers a computer able to pass it depends entirely on the person.

It's very odd that it's claimed that consciousness is entirely rule-based, but there's no set of rules for testing consciousness.
 
Because consciousness is a process, not a product. It's whatever the brain does.

It's most certainly not "whatever the brain does".

It is a minority of what the brain does. It is one of the things the brain (sometimes) does (and sometimes does not).

Understanding the difference between these two times is what we're after.

What is the brain doing when it's conscious that it's not doing when it's not conscious?

Computers are a truly indispensable tool if we want to find out what makes that difference. But we cannot find the answers by studying those tools. Nor is it justifiable to claim that the process which produces consciousness is fundamentally similar to the workings of the machines we call computers. Nobody has ever provided sufficient support for such a claim, precisely because it is not possible to do so prior to understanding what the brain is doing.
 
It's noteworthy that absolutely no objective tests have been proposed to verify the hypothesis. The Turing Test is entirely subjective, and it's obvious that whether or not someone considers a computer able to pass it depends entirely on the person.

It's very odd that it's claimed that consciousness is entirely rule-based, but there's no set of rules for testing consciousness.

Nor can anyone enunciate a set of "rules" which the brain is obeying beyond the laws of physics.

And of course those laws are only "rules" in the sense that we have abstracted them... the brain is in no way "applying" or "following" them.

The brain follows precisely the same rules as a calf muscle.

This is obvious to biologists, but confounding to those information theorists who have managed to make the error of mistaking abstractions for reality. (Which is easy to do, because their work requires them to treat abstractions as if they were real.)

So far, I've not had anyone give me a definition of "computation", for instance, which is broad enough to encompass the activity of my brain and of the machine I'm typing this post on, but narrow enough so that it excludes clouds, oceans, and supernovae.

And if they can't do that, then the claim that "the brain is a computer" dissolves into either falsehood or triviality.
 
It's most certainly not "whatever the brain does".

If it were "what the brain does" then it would be impossible to produce consciousness except with a brain.

It is a minority of what the brain does. It is one of the things the brain (sometimes) does (and sometimes does not).

Understanding the difference between these two times is what we're after.

What is the brain doing when it's conscious that it's not doing when it's not conscious?

Computers are a truly indispensable tool if we want to find out what makes that difference. But we cannot find the answers by studying those tools. Nor is it justifiable to claim that the process which produces consciousness is fundamentally similar to the workings of the machines we call computers. Nobody has ever provided sufficient support for such a claim, precisely because it is not possible to do so prior to understanding what the brain is doing.

I certainly don't believe that the claim that the brain is essentially digital in nature has been established with any degree of certainty.
 
If you're not claiming any special pleading, then you must also believe that a simulation of a power plant will really do, in the real world of objective physical reality, what a power plant really does in the real world.

And we know this is not true.


Hey, Piggy!

I've seen you made this argument before and I haven't responded, in part because I haven't felt up to a sustained discussion.

I think that it is possible for a computing system to exhibit conscious behavior.

Whether this comes about through simulation, an unintended consequence, or as an intended consequence of a non-simulation really isn't the issue.

Consciousness is an entirely physical phenomenon, as is power production. I think that consciousness can arise on a variety of computational systems and that simulation of a brain could lead to consciousness/conscious behavior. I agree that simulation of a power plant will not generate electricity in our (simulation-encompassing) world. I think that you are erring, however, in stating that a simulation of a brain could not result in consciousness, or in conscious behavior.
 
Nor can anyone enunciate a set of "rules" which the brain is obeying beyond the laws of physics.

And of course those laws are only "rules" in the sense that we have abstracted them... the brain is in no way "applying" or "following" them.

The brain follows precisely the same rules as a calf muscle.

The valves of the heart operate in a quasi-digital fashion as well, but that doesn't mean that the heart is a digital logic device. It can be considered as one, of course, and the model is viable, but that doesn't deal with its primary function.

It may well be that the brain could be considered as a digital computer, but it doesn't follow that this is the best model, or the most informative one, and particularly that we can construct a digital computer program that will entirely replicate everything that the brain does.


This is obvious to biologists, but confounding to those information theorists who have managed to make the error of mistaking abstractions for reality. (Which is easy to do, because their work requires them to treat abstractions as if they were real.)

So far, I've not had anyone give me a definition of "computation", for instance, which is broad enough to encompass the activity of my brain and of the machine I'm typing this post on, but narrow enough so that it excludes clouds, oceans, and supernovae.

And if they can't do that, then the claim that "the brain is a computer" dissolves into either falsehood or triviality.

"Information" is a word that is thrown around freely without considering its implications. Either it's something that is being exchanged between every atom in existence, or else it's an engineering term that deals with the exchange of data between human beings. The same word is used for two different things as if they were the same.
 
There's one big distraction I think we need to dispense with, and that's all the talk about digital computer simulations being conscious.

There's a lot to learn about the brain through digital simulations (which is to say, representations of the brain) but we don't need to concern ourselves about whether a new instance of consciousness would enter into the world as a result, in much the same way as it does when a baby develops consciousness.

Here's why....

Now, first, I should say that I'm not talking about functional models of a brain, which would of course be conscious.

If I made a functional model of a power plant, for instance, it would actually produce energy that I could measure with an electrician's meter. If I made a functional model of a tornado, it would actually toss around objects that were placed in its vicinity.

In contrast, a digital computer simulation of a tornado might display patterns of light on a screen which, for instance, tell me how fast the wind has to get before a concrete wall collapses. Or it might print patterns of ink on paper which tell me under what conditions the electrical grid fails.

But I don't have to worry about the computer running the simulation getting damaged by wind or electrical surges as a result of running these simulations.

That's because the digital computer simulation is a representation, not a reproduction, of something in the real world.

There is a fundamental framing error made repeatedly by some of the literal computationalists on this board (one which also leads them to falsely diagnose framing errors in others' arguments): citing a "world of the simulation" in which events can be said to actually occur.

But there is no such thing.

Representations are made out of physical stuff of some sort, and are made to trigger human imagination about something else. There's no real tornado, but the patterns on the screen can make me think "the wall collapses at about 250 miles per hour".

And these are the only two realms that we know of -- the physically real world, and the world of representations (imagination) inside our heads.

If we try to talk of what is "real" inside the "world of the simulation", we're making the absurd leap of asserting that such a world objectively exists, by conflating the physical reality of the medium with the imaginary reality of what is represented upon it.

But if representations cause their represented realities to exist, then we need to concern ourselves with the "real world" of Pinocchio and Geppetto, who are very convincingly rendered by the Hildebrandt brothers in my copy of Collodi's book.

The reproduction brain exists in the real physical world, and if we are able to build it, then there will be a new instance of consciousness in the world, just as there is when a baby develops.

The representation of the brain -- whether that's a drawing or a clay model or a set of equations or a computer simulation -- is only "a brain" in our imaginations. My dog, for instance, would have no way of identifying paper or clay or a computer as somehow an animal part.

If anyone wants to propose a "world of the simulation" in which we can talk of things as "real", then they're going to have to describe how it is that such a world comes into existence.

Short of that, can we please drop the pointless talk of what is "real" inside a purely imaginary space?
 
And by the way, why is a question about biology posted in the philosophy forum?
 
I agree that simulation of a power plant will not generate electricity in our (simulation-encompassing) world. I think that you are erring, however, in stating that a simulation of a brain could not result in consciousness, or in conscious behavior.

Well, my friend, you're gonna have to bring the heat, then.

You're going to have to explain why the brain is an exception.
 
There's one big distraction I think we need to dispense with, and that's all the talk about digital computer simulations being conscious.


This is where I think you go off in the wrong direction.

Some of us exhibit conscious behavior.

If another computing system exhibits conscious behavior, will it be conscious? Will it have consciousness?

I don't care whether it is a simulation of a brain or not.

You seem concerned that we would be calling behavior undeserving of the name 'conscious behavior'.

I suspect that we all tend to regard our experience of consciousness as far more special than it really is.

I think that our consciousness and the consciousness that could arise on a computer are identical in nature.

I regard the potential equivalence in behavior as an equivalence in experience; you appear to disagree.
 
"Information" is a word that is thrown around freely without considering its implications. Either it's something that is being exchanged between every atom in existence, or else it's an engineering term that deals with the exchange of data between human beings. The same word is used for two different things as if they were the same.

Those are the two primary definitions I'm familiar with.

Either you're talking about what humans do (although not necessarily exclusively) or you're talking about levels of entropy.

If there are other definitions in play here, that's fine, as long as they make sense.
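To make the second sense concrete, here's a small sketch (my own illustration, not something from the thread) of Shannon's engineering definition of information: entropy measured in bits per symbol. Under this definition, "information" is a property of a message source's statistics, with no appeal to atoms exchanging anything.

```python
import math
from collections import Counter

def shannon_entropy(message):
    """Average information content per symbol, in bits:
    H = -sum(p * log2(p)) over the symbol frequencies."""
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A source using four symbols equally often carries 2 bits per symbol...
print(shannon_entropy("abcd"))  # 2.0

# ...while a perfectly predictable source carries none:
assert shannon_entropy("aaaa") == 0.0
```

Note that this quantity says nothing about meaning; it measures only statistical surprise, which is exactly why equivocating between it and the everyday human sense of "information" causes trouble.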

Another problem is this notion that the brain uses symbols.

That's a bit like saying that my muscles open a door by turning a knob.

That's not actually what my muscles are doing at all. They're contracting and relaxing. "Turning a knob" doesn't help you understand how my hand does what it does.

Yes, I am turning a knob, but that's not a useful description of how my muscles are involved in accomplishing the opening of the door.

Using symbols is something humans do.

If you don't know what the housekeeper's language will be at a place you're renting, you might put a piece of paper on the door with a drawing of a baby lying down with her eyes closed. The housekeeper will know to be quiet, because everyone understands what a sleeping baby means!

But it's an error to say that the housekeeper's brain therefore must manipulate symbols.

It's an interesting idea, but I would challenge anyone to describe how that's supposed to work in terms of the physical activity of the brain.

When people blithely accept assertions like this, without so much as a coherent hypothesis behind them, it's really quite astounding.
 
Well, my friend, you're gonna have to bring the heat, then.

You're going to have to explain why the brain is an exception.


The brain is not an exception.

We exhibit conscious behavior and we experience consciousness.

Another computing system will be able to exhibit conscious behavior. I suggest that it may be able to experience consciousness as well as we do.
 
This is where I think you go off in the wrong direction.

Some of us exhibit conscious behavior.

If another computing system exhibits conscious behavior, will it be conscious? Will it have consciousness?

I don't care whether it is a simulation of a brain or not.

You seem concerned that we would be calling behavior undeserving of the name 'conscious behavior'.

I suspect that we all tend to regard our experience of consciousness as far more special than it really is.

I think that our consciousness and the consciousness that could arise on a computer are identical in nature.

I regard the potential equivalence in behavior as an equivalence in experience; you appear to disagree.

No, you're not getting it.

What I'm saying is that the claim itself is nonsense.

Yeah, it's reasonable to say that conscious machines can be built.

But we don't know enough about consciousness to claim that a computer, on its own, is capable of being conscious... and given the lack of functional and structural equivalence between our brains and the machines we call computers, there's no reason at the moment to believe that it could be.

When people start talking about "programming consciousness" they're off into Wonderland territory, because we don't know nearly enough about how consciousness is accomplished at the moment to make any such statements.
 
The brain is not an exception.

We exhibit conscious behavior and we experience consciousness.

Another computing system will be able to exhibit conscious behavior. I suggest that it may be able to experience consciousness as well as we do.

Well, this brings you back to the claim that "the brain is a computer" which is not supportable. Or, I would say, even coherent.
 