Has consciousness been fully explained?

"The brain stem, cerebral cortex and memory act in unison in the complex mental process that tells us who we are and generates the feelings that are at the heart of being conscious."
I'd say adding that at least begins to approach what being conscious entails; imo it doesn't add enough.

For some, life might represent being conscious; for idealists I suspect nothing exists except consciousness, and the complexity of life is just the first time we can notice (objectively) conscious behaviors.

I'd also suspect Pixy's simulation of the universe at the Planck scale would actually be a universe, not a simulation, assuming we actually know enough about how the parts interact and can make them do so. A unified field theory is still lacking.

ymmv :)
 

Yes, that's the deeper insight that needs discussion at some point, since none of us really know what subatomic particles really *are*. Ultimately, it's all just interactions.
 
The brain evolved for decision making.

One of the mechanisms (if we accept the hypothesis that consciousness is a tool for making decisions) has the effect of generating a sense of felt awareness that starts and stops and hovers with fuzzy borders/center in the brain cavity.

Whatever it is that's doing that -- making decisions and generating this sense of being -- is part of a physical apparatus. The causes of all behaviors of the body are physical.

Consciousness isn't like a tooth. It's like chewing. And to actually chew a piece of broccoli, you need real teeth working in real time. To generate a sense of awareness in a brain cavity, you need a real brain working in real time.

The thing that's like a tooth is whatever it is that's making the brain "make decisions" and simultaneously generate this feeling of awareness (even though the brain can make decisions without it).

We can abstract logic from it, describe it in rules, describe it in terms of information, but regardless of how we transform it intellectually, it's all eventually explainable on a purely physical level, which means the brain -- the lump of matter -- is what's "making decisions" (which is just to say that the body does one thing and not another) and also what's causing my sense of self-in-spacetime to crank up every morning.

To replicate this, we need a model brain.

We cannot make any machine conscious without altering what it's doing physically in 4-D spacetime in some significant way, in a way that replicates whatever the brain's doing when our Sofias crank up.

This cannot be purely "programmed" for the same reason that you can't have a computer control its physical temperature purely by programming.

If you want to replicate the behavior, and actually have a sense of awareness in the tower of your computer, you must have something more than programming, because you cannot get behavior for free.

If the only physical behavior you have is for running the logic and nothing else, and the machine is not doing anything physically different from what it's doing when it's up to other tasks, then you cannot extract this new behavior from it in real spacetime.
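The temperature-control point can be sketched in code (all names here are hypothetical, for illustration only): the same loop logic runs whether its read/write functions are bound to real hardware or to a variable in memory, and only the hardware binding makes anything physical happen. Programming alone supplies the logic, not the physical effect.

```python
# Illustrative sketch (hypothetical names): a control loop is only "about"
# physical temperature if read_temp/set_heater are bound to real hardware.
# Bound to in-memory stand-ins instead, the loop runs the same logic but
# only changes a variable -- nothing physical happens.

def control_loop(read_temp, set_heater, target, steps):
    for _ in range(steps):
        t = read_temp()
        set_heater(t < target)  # heat only while below target

# Pure-software stand-ins: the "temperature" is just a number in memory.
state = {"temp": 15.0, "heater": False}

def read_temp():
    return state["temp"]

def set_heater(on):
    state["heater"] = on
    state["temp"] += 1.0 if on else -0.5  # a toy model, not physics

control_loop(read_temp, set_heater, target=20.0, steps=10)
```

With these stand-ins the "temperature" converges toward the target, but only as a value in a dictionary; swapping in real sensor and actuator bindings is exactly the extra physical ingredient the argument above says programming alone cannot supply.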

You are confused.

Nobody has ever said that a computer could be conscious without some method to gain information about itself in relation to the real world around it. In fact we have been very specific about any conscious system requiring a "self" reference. You can't have a self reference without knowing what self vs. non-self is. Meaning, any conscious computer must know that it is the computer and not the walls of the room it sits in, etc.

What has been said is that if there was a simulation, and the simulation included a system that could distinguish itself from the other systems in the simulation, then that entity within the simulation could be conscious. That doesn't mean the computer running the simulation is conscious, any more than the universe would be conscious because you inhabit it.

An implication of saying that consciousness is a form of SRIP is that the distinction between self and non-self must be supported by the environment the consciousness exists in. This is really all you need to know to see why your arguments are mostly against strawmen.
 
True, but that doesn't change anything. When we refer to a real orange we are referring to an actual collection of particles in the world. When we refer to a simulation of the orange we are either not referring to an actual collection of particles in the world or we are referring to a very different collection of particles (those that comprise the hardware that is carrying out the simulation).

But why does it matter if it is a "very different" collection of particles?

Particles are particles, no? One is just as real as any other.
 
How can one argue against such insanity?

Seriously.

So you are saying that the transistors or magnetic bits of the simulated orange, on some memory device in the computer be it RAM or disk, are not real?

That sounds pretty "insane" to me. What do you think is inside a computer, piggy? A magical black hole?
 
The difference is that you think you can "program" consciousness and have it work -- actually produce a conscious machine -- and from where I sit you're demanding behavior for nothing, 4-D spacetime effect without any direct physical cause.

No.

See the post I just made where I explain that we stick that "self" in "self referential information processing" for a reason.

The "self" thing is kinda important...
 
This thread continues to be worth reading. I don’t have time to post much, but I thought I’d add a few thoughts. Repayment for the food for thought I’ve received from you.

Consciousness is the name we give to what it's like to be us. How can we be mistaken in such a belief? We can be mistaken in thinking that something else is conscious, but we can't be mistaken in thinking we are conscious ourselves.

I agree. We can be mistaken about what ‘consciousness’ consists of though. I think that is what many people are referring to in the ‘consciousness is an illusion’ camp. I think they are correct for the definitions they are using.

What I am not at all convinced about is that an isomorphic action is not the same as the 'real action' which it 'mimics'. I know why we can distinguish between isomorphic outputs for things, but I don't see how we can make such distinctions for actions because actions consist in the interactions of parts. Why does it matter if the 'parts' are 'real' or only 'simulated' (actions themselves)?

Nicely put. I am reminded of a zen koan I read in GEB.

Mind you, the reason we're talking about a Planck scale simulation is that it's a reductio ad absurdum counter to the position of, well, the other half of this thread. We've duplicated the Universe exactly. Now can we have a conscious mind? If you still say no, then you believe in magic. In which case it's time for us to just shrug and walk away.

You say this, but in truth we can’t actually know. I say this as someone who thinks the answer is yes, but it’s an unfalsifiable hypothesis.

So you are saying that the transistors or magnetic bits of the simulated orange, on some memory device in the computer be it RAM or disk, are not real?

That sounds pretty "insane" to me. What do you think is inside a computer, piggy? A magical black hole?

Your argument strikes me as similar to claiming that Harry Potter is real because the books written about him are paper and ink, which are clearly real objects.

Which is a very good ploy, but it also removes the charge that you are talking about a description of a system since you very obviously are not. Possibly a description of a subatomic particle, but everything else from there is just part of the system and not a description of it.

The problem I see with their argument is that they are arguing for what I would consider to be the ‘soul’ of a person. This is my definition of a soul: the unique pattern of relationships that comprise who we are as an individual. Further, because it is a pattern and not a physical thing, it is eternal.

Can we build a machine that has a soul? By my definition of soul, certainly we can and perhaps, already have. I’m not sure if consciousness is even required for this definition, on reflection I think not. Rather, consciousness could be described as a certain class of such patterns.

Just some musings.
 
I thought the claim on your side was consciousness is SRIP.
 
If it's possible we exist inside a simulation, then simulated consciousness is possible.
 
No.

See the post I just made where I explain that we stick that "self" in "self referential information processing" for a reason.

The "self" thing is kinda important...

So consciousness only arises in a SRIP where there's already a self? This is tautologically true. Unless you're claiming that things like thermostats are "selves".
 
Your argument strikes me as similar to claiming that Harry Potter is real because the books written about him are paper and ink, which are clearly real objects.

It is a little different because Harry Potter novels do not do anything by themselves, and when we read them most of the behavior arises in our minds rather than on the novel pages.

A simulated orange, on the other hand, might affect the behavior of some other object in the simulation regardless of whether or not someone is observing it.

But yes, it is similar. What is wrong with that argument? Harry Potter is real, it is a two word phrase, humans label a "name," that is found written in many books. It isn't a real person -- so what? It is still "real" in that it exists in our universe and we can interact with it.
 
I thought the claim on your side was consciousness is SRIP.

Perhaps, but let's make the meaning of such a claim clear.

All anyone on my side has ever said is that if you look at things people consider conscious, and try to define what it is that makes such things qualitatively different from things people don't immediately consider conscious, the only real mathematically supportable conclusion is that conscious things exhibit some type of SRIP.

Everything above and beyond SRIP doesn't seem to be a requisite for consciousness once it is really nailed down. I mean you can take yourself and ask "would I still be conscious if my mind lost the ability to do X" and in every case the answer is "yes" except for self reference. If you, or any other conscious thing, lost that then there would simply be no consciousness.

On the flip side, we can examine whether there is any qualitative difference between examples of SRIP that we don't consider immediately conscious -- like the infamous electronic toaster with many features, or even the programmable thermostat -- and things like squirrels, dogs, monkeys, or people. And the answer is that no, there really is no mathematically describable difference. Yes, people dream, and love, and hate, and have internal dialogue, and Sofias, but those are not qualitative differences since they can all be reduced to just another flavor of SRIP.

So the position people like pixy and I take is to just say consciousness, in a fundamental way, is SRIP and SRIP is consciousness. This confuses naive people because they think "wait are they saying any SRIP can cry when it watches I Am Sam?" and quite obviously no, that is not what the claim means. The claim means that all the stuff we attribute to people is actually something beyond basic consciousness and should be studied as such. Call it "human" consciousness, whatever -- it is definitely *not* just vanilla SRIP, obviously, since a thermostat doesn't fall in love with the female hands that set its temperature.

But people refuse to speak in those terms. They say "well love is an aspect of consciousness." WRONG, because obviously love is not a requisite for consciousness and if you think about it trying to account for the myriad aspects of the human creature in a single unified theory is bound to fail from step one. So pixy, I, and others try to make it clear that hey, the basic consciousness thing is easy, it is SRIP, let's move on and talk about what makes human consciousness different from dog consciousness different from fish consciousness different from toaster consciousness.

Just wanted to make that clear.
 
So consciousness only arises in a SRIP where there's already a self? This is tautologically true. Unless you're claiming that things like thermostats are "selves".

I am speaking of "self" in computer science terms, which only means a reference to the self object, whatever that may be.

And now that I think about it, it is not immediately clear whether a thermostat has a self or not, because like I said self requires knowledge of non-self and I don't know if thermostats can make a distinction.

I will have to think about this -- I may have to withdraw the claim that thermostats are SRIP.
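"Self" in the computer-science sense described above can be sketched in a few lines (illustrative only, with hypothetical names -- nothing here is claimed to be conscious): an object that holds a reference to itself and can classify other objects as self vs. non-self.

```python
# Minimal sketch of "self reference" in the computer-science sense:
# an object holding a reference to the self object, able to distinguish
# self from non-self by identity. Names are hypothetical illustrations.

class Agent:
    def __init__(self, name):
        self.name = name
        self.me = self  # the self reference

    def is_self(self, obj):
        # Identity check: "am I this object?"
        return obj is self.me

agent = Agent("thermostat")
wall = object()  # stands in for "the walls of the room"

print(agent.is_self(agent))  # → True
print(agent.is_self(wall))   # → False
```

The sketch also shows why the thermostat question is open: the self reference here is trivial to write down, but whether a thermostat's internal state constitutes any such self/non-self distinction is exactly the point under dispute.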
 
ETA:
What I am not at all convinced about is that an isomorphic action is not the same as the 'real action' which it 'mimics'. I know why we can distinguish between isomorphic outputs for things, but I don't see how we can make such distinctions for actions because actions consist in the interactions of parts. Why does it matter if the 'parts' are 'real' or only 'simulated' (actions themselves)?

Yes.

I have already made the argument that if you construct an epistemology graph it turns out that the "action" nodes are all equivalent because they are merely relations between "thing" nodes. Why would a relation be different depending on the nature of the things it related? There is no mathematical support for such an idea.

A relation is a relation, plain and simple.
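The epistemology-graph claim above can be illustrated with a toy sketch (entirely hypothetical names): "things" are nodes, "actions" are relations between them, and the relation's representation is identical whether the endpoint nodes are labeled "real" or "simulated".

```python
# Hypothetical sketch of the "epistemology graph" idea: things are nodes,
# actions are relations (edges) between them. Stripping the node labels
# leaves the bare relation, which is the same in both cases.

graph = {
    ("real_orange", "rolls_off", "real_table"),
    ("sim_orange", "rolls_off", "sim_table"),
}

def relation_shape(edge):
    # Discard the endpoints: what remains is the relation itself.
    _, action, _ = edge
    return action

shapes = {relation_shape(e) for e in graph}
print(shapes)  # → {'rolls_off'}
```

Both edges reduce to the same relation, which is the sense in which the "action" nodes are said to be equivalent regardless of the nature of the things they relate.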
 
I am certainly not arguing that a simulated orange is the same as a real orange, so please do not misunderstand me.

Understood. The claim I made that aroused dispute was that the simulation of the orange cannot do everything the real orange can do. If it can, then I don't see a basis for saying it is not the same.

I am arguing here that description is not the right word for the sort of simulation that RD and Pixy are talking about. Description is the right word for simulations like we see in current computer games, but the simulations are fairly unlike the actual objects as they exist in the real world.

What they seem to be talking about, though, is a robust 'world' (which, yes, does not exist in a 'real sense') that is based on the 'real world' down to the atomic level.
(bolding added)

Since this 'world' does not exist in a 'real sense', then it either does not exist at all or it merely exists in our minds as a (perhaps useful) abstraction. The latter is what Piggy and westprog have been saying. I see no third option.

That simulated world is itself an action, not a description. We can use it as a description, but its nature is as action -- steps being carried out within the computer. That is why it has no location, no extension, etc.

Steps being carried out within the computer certainly have location. And that is the only action that takes place in the simulation as far as I can see (other than the mental actions of the people interpreting the simulation's outputs).

Also, I wouldn't call the simulation "an action". In a computer-implemented simulation, for example, if you save it to a hard disk and turn off the system then does the computer not still contain the simulation? And where is the action involved?

Descartes considered the soul to be an immaterial 'thing' because it had the same 'properties' -- no extension, no location, etc.; but the mistake he made was in calling it a substance. It is not a substance but an action. Actions are realized in particular locations, just as simulations (which are actions) are realized in particular locations.

We are used to thinking of 'things' causing actions to occur, but I still do not see the objection to actions also causing actions to occur.

I would define an 'action' merely as a change in the world. As cognizers we can recognize certain changes as being similar or equivalent to each other and from that we categorize actions into different sets. So, 'running' refers to a set of patterns of change that are related in some way. I think you would agree with that definition(?)

Anyway, given that definition, an action must be something that is attributed to a "thing". Now, we could attribute an action to an imaginary thing that we have conceptualized, but the action would be equally imaginary, would it not? There is nothing actually undergoing that change outside of our mind. An action can, of course, cause another action, but each action still must be an action of something.

 
But why does it matter if it is a "very different" collection of particles?

Particles are particles, no? One is just as real as any other.

It doesn't necessarily matter in regard to your point (I may have misunderstood). If you're saying the simulation's particles would be (in the case of a computer simulation) computer particles and you're not saying that the simulation "can do everything the physical system can do" then you can disregard that post.
 
The problem I see with their argument is that they are arguing for what I would consider to be the ‘soul’ of a person. This is my definition of a soul: the unique pattern of relationships that comprise who we are as an individual. Further, because it is a pattern and not a physical thing, it is eternal.

Can we build a machine that has a soul? By my definition of soul, certainly we can and perhaps, already have. I’m not sure if consciousness is even required for this definition, on reflection I think not. Rather, consciousness could be described as a certain class of such patterns.

Just some musings.

By your definition of soul, wouldn't all machines have them?
 
I thought the claim on your side was consciousness is SRIP.

rocketdodger said:
Perhaps, but lets make the meaning of such a claim clear.

All anyone on my side has ever said is that if you look at things people consider conscious, and try to define what it is that makes such things qualitatively different from things people don't immediately consider conscious, the only real mathematically supportable conclusion is that conscious things exhibit some type of SRIP.

I don't know what you mean by "mathematically supportable". What does "exhibit" mean here? Show? Conscious things show SRIP? How? Sometimes they show SRIP. Most of the time, no.

I define a conscious thing as something which experiences consciousness. This is no more circular than the trivial and non-controversial claim that a happy person is someone who's experiencing happiness.

Everything above and beyond SRIP doesn't seem to be a requisite for consciousness once it is really nailed down. I mean you can take yourself and ask "would I still be conscious if my mind lost the ability to do X" and in every case the answer is "yes" except for self reference. If you, or any other conscious thing, lost that then there would simply be no consciousness.

Really? So consciousness can only happen when some kind of self-reference is going on? That is demonstrably false. I am conscious of all sorts of things that have nothing to do with self reference.

On the flip side, we can examine whether there is any qualitative difference between examples of SRIP that we don't consider immediately conscious -- like the infamous electronic toaster with many features, or even the programmable thermostat -- and things like squirrels, dogs, monkeys, or people. And the answer is that no, there really is no mathematically describable difference.

Perhaps because conscious experience has nothing to do with mathematics? There is no equation for "Ouch, I stubbed my toe!".

Yes, people dream, and love, and hate, and have internal dialogue, and Sofias, but those are not qualitative differences since they can all be reduced to just another flavor of SRIP.

Assertion.


So the position people like pixy and I take is to just say consciousness, in a fundamental way, is SRIP and SRIP is consciousness. This confuses naive people because they think "wait are they saying any SRIP can cry when it watches I Am Sam?" and quite obviously no, that is not what the claim means.

It confuses people because it is obviously wrong. Piggy has pointed out many times that much of what the brain does is unconscious, including SRIPs. And things like bacteria have SRIPs, yet to claim E coli is conscious reduces your position to an absurdity. Why would you think anyone would take that seriously?

The claim means that all the stuff we attribute to people is actually something beyond basic consciousness and should be studied as such. Call it "human" consciousness, whatever -- it is definitely *not* just vanilla SRIP, obviously, since a thermostat doesn't fall in love with the female hands that set its temperature.

Who's claiming something is beyond basic consciousness? Consciousness includes sofia, subjective experiences, emotions, etc. These don't go "beyond" basic consciousness. They are consciousness. Putting a qualifier like "basic" in front of "conscious" just muddies the water. Something either has conscious experience or it doesn't.

Under materialism, a thermostat/bacteria/toaster is not conscious because it has no mechanism remotely approaching the complexity of the human brain that could produce consciousness.


But people refuse to speak in those terms. They say "well love is an aspect of consciousness." WRONG, because obviously love is not a requisite for consciousness and if you think about it trying to account for the myriad aspects of the human creature in a single unified theory is bound to fail from step one. So pixy, I, and others try to make it clear that hey, the basic consciousness thing is easy, it is SRIP, lets move on and talk about what makes human consciousness different from dog consciousness different from fish consciousness different from toaster consciousness.

Just wanted to make that clear.

Did you not notice the irony that you're trying to account for the "myriad aspects of consciousness" with a single definition? I think intuitively, you know that SRIP is not a sufficient account of consciousness. If it were, you wouldn't be tying yourself in knots with "SRIP is consciousness" and "consciousness is a form of SRIP" and "thermostats do/don't have SRIPs".

ETA: In the book BLINK, there is an interesting study that demonstrates that people make unconscious inferences long before they consciously figure out what's going on.
http://www.gladwell.com/blink/blink_excerpt1.html
 
Self Referential Information Processing.

From a philosophical perspective the answer to 'what is consciousness?' is no more contentious than the answer to 'what is the self?'. Therefore an answer to the first question which assumes that the second question has already been answered is mere question begging circularity.

Rocketdodger writes:

I am speaking of "self" in computer science terms, which only means a reference to the self object, whatever that may be.

In that case, the SRIP definition of consciousness is a category error and it is no wonder that we are left with useless and absurd references to thermostats.

'Self' and 'consciousness' are terms that seem to be significant but indefinable, in a similar way to 'being' and 'existence'. Of course we can pretend that we have a clear definition of self, only it turns out to be an uninteresting one and not the one that relates to what people are thinking about when they think about 'oneself'.

It seems presumptuous to assume that problems relating to consciousness are solvable using the computer-science understanding of the word "self"; the self was already there before computers were dreamt of. To say that the appropriation of the term by computer science becomes its definition is to fundamentally upturn the reality. It's as if a model of Paris with some working features became Paris, and elements of the city which have not been reproduced were then rejected from the definition of Paris.
 