
My take on why the study of consciousness may indeed not be so simple:

AkuManiMani said:
I'm not offering an explanatory gap filler. All I'm doing is pointing to a real phenomenon [i.e. consciousness], describing features of it, and naming those features.

When we are conscious we have experiences, and those experiences are made up of a wide range and combination of subjective qualities. When we are unconscious there is no experience of any subjective qualities. It just so happens that there's a word already in existence in the English language for such subjective qualities: qualia.
In philosophy-speak at least, yes.

What I'm taking issue with is that many participating in this discussion not only refuse to address the problem that we currently have no scientific explanation of subjective experience; some choose to completely ignore subjectivity altogether and claim they're explaining consciousness.
I don't think anyone is saying that we have a scientific explanation in the sense of a full accounting for everything we call consciousness. It's just that some people think we need some new physics and some people don't.

But I still think you're being a bit too cavalier. Without a precise description of qualia and consciousness and what-not, in terms of exactly which bits of my inner behavior/experience we're talking about, it's too easy to blow it up into a hard problem of consciousness. There may indeed be a hard problem, but I don't think that appeal to complexity or appeal to incredulity is all that compelling.

In other words, our lack of ability to describe what we mean by qualia may be the leading component of its apparent complexity.

~~ Paul
 
Hmm, are they discussing neurons in your textbook? Is neurology not good enough either?

A physical theory manages to define what is going on in a physical system, and to link physical causes to physical effects. It's not just a matter of itemising everything that's happening. Nor is it a matter of selecting one part of what is happening and deciding that it's the critical aspect.
 
In philosophy-speak at least, yes.


I don't think anyone is saying that we have a scientific explanation in the sense of a full accounting for everything we call consciousness. It's just that some people think we need some new physics and some people don't.

If you want to incorporate something into scientific theory that doesn't exist there, then you will find it difficult to do so without either describing it in terms of known physical quantities, or by using new physics.

But I still think you're being a bit too cavalier. Without a precise description of qualia and consciousness and what-not, in terms of exactly which bits of my inner behavior/experience we're talking about, it's too easy to blow it up into a hard problem of consciousness. There may indeed be a hard problem, but I don't think that appeal to complexity or appeal to incredulity is all that compelling.

In other words, our lack of ability to describe what we mean by qualia may be the leading component of its apparent complexity.

~~ Paul

The inability to provide a precise description isn't the cause of thinking that there's a hard problem, it is the hard problem. If it's accepted that there is such a thing as subjective experience, then defining it should be a first step. If you can't define it, the only other option is to deny that it exists at all - which a number of philosophers are indeed attempting.
 
What I'm taking issue with is that there are many participating in this discussion who not only refuse to address the problem that we currently have no scientific explanation of subjective experience, they choose to completely ignore subjectivity altogether and claim they're explaining consciousness.

What I find odd is that the lack of a precise definition of subjective experience is used as an indicator that there isn't a problem. When there wasn't a precise definition of light, say, I don't think that showed that the phenomenon was fully understood. Somewhat the contrary.
 
Not sure I follow you.

I think that one of the chief characteristics of both the brain and of computer programs is that they are not self-referential. Human beings are conscious, to some extent, of a lot of processes in their body. They can feel themselves breathing, test their pulse, look at their body moving. The one thing that they aren't conscious of is the actual process of their brain. The best one can manage is a headache. There's a hole in the head where consciousness of the rest of the body and the world resides.

The same applies to computer programs. The chief characteristic of computer equipment is isolation. Each component is shielded as much as possible from every other component, so it can't affect it.
 
The chief characteristic of computer equipment is isolation. Each component is shielded as much as possible from every other component, so it can't affect it.
Incorrect. The chief characteristic of computer equipment is controlled dependency, not isolation. Components may be shielded, but they're connected via wiring. The object isn't to isolate, it's to control the variables.
 
I think that one of the chief characteristics of both the brain and of computer programs is that they are not self-referential.

Huh ? Aren't they ?

Human beings are conscious, to some extent, of a lot of processes in their body.

I was simply adding to your post that humans aren't aware of ALL their processes.

The same applies to computer programs. The chief characteristic of computer equipment is isolation. Each component is shielded as much as possible from every other component, so it can't affect it.

And yet it does. Often.
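For what it's worth, the self-reference question is easy to test in miniature: a program can inspect its own execution while it runs. A minimal Python sketch (the function name is mine, purely illustrative):

```python
import sys

def introspect():
    """Report on the very call that is executing this function."""
    frame = sys._getframe()          # the stack frame of this call, i.e. the process itself
    return frame.f_code.co_name      # the name of the code currently being run

# The function observes its own running process:
print(introspect())  # prints "introspect"
```

This only shows that programs *can* refer to their own execution; whether brains do anything analogous is the point under dispute.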
 
westprog said:
The inability to provide a precise description isn't the cause of thinking that there's a hard problem, it is the hard problem. If it's accepted that there is such a thing as subjective experience, then defining it should be a first step. If you can't define it, the only other option is to deny that it exists at all - which a number of philosophers are indeed attempting.
That is because they believe that the inability to define it is due to its nonexistence. Or at least the nonexistence of folk-consciousness. Rather like the nonexistence of elan vital.

What I find odd is that the lack of a precise definition of subjective experience is used as an indicator that there isn't a problem. When there wasn't a precise definition of light, say, I don't think that showed that the phenomenon was fully understood. Somewhat the contrary.
On the other hand, the lack of a precise definition of life gave us elan vital. Light is a concrete, easy-to-see sort of thing. Life and consciousness not so much.

~~ Paul
 
A physical theory manages to define what is going on in a physical system, and to link physical causes to physical effects. It's not just a matter of itemising everything that's happening. Nor is it a matter of selecting one part of what is happening and deciding that it's the critical aspect.

So where do you go with that? Sensation, perception, cognition - these are all being studied; are they not physical phenomena? Or is there some distinction you are making?

I am truly not understanding your point.
 
What I find odd is that the lack of a precise definition of subjective experience is used as an indicator that there isn't a problem. When there wasn't a precise definition of light, say, I don't think that showed that the phenomenon was fully understood. Somewhat the contrary.

So what is the study of perceptions?
 
I think that one of the chief characteristics of both the brain and of computer programs is that they are not self-referential. Human beings are conscious, to some extent, of a lot of processes in their body. They can feel themselves breathing, test their pulse, look at their body moving. The one thing that they aren't conscious of is the actual process of their brain. The best one can manage is a headache. There's a hole in the head where consciousness of the rest of the body and the world resides.

The same applies to computer programs. The chief characteristic of computer equipment is isolation. Each component is shielded as much as possible from every other component, so it can't affect it.

So what about associative learning in the patterns of neuronal firing? They may not label themselves but they do learn to fire in response to each other.

That is self referencing in terms of neurons.
 
The runs are different, but why would you say the algorithm is different? The words "same" and "different" are contextual... sometimes five nickels is the same as a quarter. Sometimes five nickels are different than a quarter.

In order to know if we have a real disagreement, I want to know what context you're using to judge if algorithms are different. With that in mind, suppose that I grab a sheet of paper and perform a sieve of Eratosthenes for the numbers 1 through 100 (skip 1, point at 2, cross out every 2 numbers past that, yada yada). And let's compare that to a BASIC program performing a sieve of Eratosthenes on a VIC-20.

I'm definitely not a VIC-20, and I'm not even using BASIC. But we're going to go through the same steps and produce the same result. So the VIC-20 is five nickels, and I'm a quarter. In the context of running an algorithm, are we performing the same algorithm?

If not, suppose that on Wednesday I perform a sieve of Eratosthenes on the numbers 1 through 100. And suppose that on Thursday I perform a sieve of Eratosthenes on the numbers 1 through 100. The brain is very complex, and I'm constantly changing, and I'm pretty sure I'm not doing exactly the same thing Wednesday, in my brain, as I'm doing Thursday. So on Wednesday I am five nickels. And on Thursday I'm a quarter. In the context of running an algorithm, am I performing the same algorithm on Wednesday that I perform on Thursday?

You are and you are not. :)

Clearly, there is a single algorithm that describes what you do on both days. But there is also clearly an algorithm that describes what you do on Wednesday that is very different from the one that describes what you do on Thursday.

The difference is simply the level of detail that is included in the algorithm.

And I am not saying that a certain level of detail is the correct one to use when it comes to consciousness. I am simply saying that in this case, the changes to the step-by-step behavior of the system (which is ... an algorithm) result in steps taken during Run2 being mixed with steps taken during Run3.
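For reference, the sieve being discussed (skip 1, start at 2, cross out multiples) can be written down concretely; at this level of detail, the same short program describes the paper run, the VIC-20 run, and both days' runs. A minimal Python sketch:

```python
def sieve_of_eratosthenes(limit):
    """Return all primes up to limit using the classic sieve."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False        # skip 0 and 1
    for n in range(2, int(limit ** 0.5) + 1):
        if is_prime[n]:
            # cross out every multiple of n, starting at n*n
            for multiple in range(n * n, limit + 1, n):
                is_prime[multiple] = False
    return [n for n in range(limit + 1) if is_prime[n]]

print(sieve_of_eratosthenes(100))  # 2, 3, 5, ... 97
```

Whether two physically different runs of this count as "the same algorithm" is then exactly the level-of-detail question being argued.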
 
I don't see how consciousness can exist in a single probe. Consciousness is the sequence of events represented by all the probes (or at least some subset of them). They aren't communicating, but their states, arrayed throughout space, are the consciousness.

If it is not the case that we need a sequence of states to produce consciousness, then every particle in the universe is conscious.

~~ Paul

But ... every probe represents the final transition in a sequence of states. And that sequence is also independent of every other probe. Didn't you read that post of mine?

The sequence of states leading to instruction X on probe X is simply instructions zero through X-1 that occurred during Run2.

I agree with you that it is strange to think that perhaps a random aggregation of particles might have a structure identical to state X-1, and thus probe X might be conscious for an instant of time as the result of nothing but randomness.

But that is how science works -- you have to accept the counterintuitive if the model holds up. Eventually it becomes intuitive. If the computational model suggests that random systems might be conscious for an instant, here and there, so what? Or, you can simply alter the definition of consciousness to exclude instantaneous instances -- which seems to be how most people think of it anyway (non-instantaneous). No biggie, just a feature of the model.
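To make the probe picture concrete, here is a toy model (all names are mine, a sketch only): a "run" is the list of successive states of a trivial machine, and probe k stores just the state after instruction k, even though that state is fixed by the entire prefix of the run.

```python
def run_machine(steps):
    """A trivial 'machine': each instruction increments a counter."""
    state = 0
    history = [state]            # state 0, before any instruction executes
    for _ in range(steps):
        state += 1               # one instruction: a single state transition
        history.append(state)
    return history

run2 = run_machine(5)            # the original run: states 0 through 5
# Probe k records only the final state of the length-k prefix,
# yet that state is the product of instructions 0 .. k-1 of the run.
probes = {k: run2[k] for k in range(1, len(run2))}
```

On this picture, each probe is "isolated" in the sense of storing a single state, but not independent of the run that generated it.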
 
Each individual copy has no running-ness at all. It is the sequence of copies that embodies running-ness, assuming we are willing to grant that there is any running-ness in the first place.

~~ Paul

I don't understand why the original sequence, up to copy X, plus copy X, does not embody running-ness.

You are treating each copy as if they are isolated from the original sequence -- they aren't. They were generated from the original sequence, so how could they be isolated from it?
 
Dipping my head in here again.

Isn't what some people here are doing just Loki's Wager? It seems like people are trying to make consciousness deliberately impossible to define in order to keep it "ok" to hold wacky beliefs.

It seems to me like the entire HPC is just a big Loki's Wager?
 
rocketdodger said:
But ... every probe represents the final transition in a sequence of states. And that sequence is also independent of every other probe. Didn't you read that post of mine?

The sequence of states leading to instruction X on probe X is simply instructions zero through X-1 that occurred during Run2.
Yes.

I agree with you that it is strange to think that perhaps a random aggregation of particles might have a structure identical to state X-1, and thus probe X might be conscious for an instant of time as the result of nothing but randomness.
I don't think probe X is conscious for an instant of time. Consciousness is embodied in the sequence of the computation, so therefore only in the sequence of probes.

But that is how science works -- you have to accept the counterintuitive if the model holds up. Eventually it becomes intuitive. If the computational model suggests that random systems might be conscious for an instant, here and there, so what? Or, you can simply alter the definition of consciousness to exclude instantaneous instances -- which seems to be how most people think of it anyway (non-instantaneous). No biggie, just a feature of the model.
Even if I assume it's the sequence that's conscious, I still have to put up with the occasional random sequence being conscious. I have no problem with that.

I don't understand why the original sequence, up to copy X, plus copy X, does not embody running-ness.
In the case of the horse, it does. In the case of the probes, certainly some subsequences would embody consciousness. I just don't think individual probes do, just as individual horses don't embody movement.

You are treating each copy as if they are isolated from the original sequence -- they aren't. They were generated from the original sequence, so how could they be isolated from it?
They are isolated but not independent. It's just that an instant of consciousness is no consciousness at all.

This entire argument could be fatally flawed. I'm still thinking about it. Part of the problem is that we're not specifying how an external observer knows that something is conscious, which renders the experiment nonempirical.

~~ Paul
 
I don't think anyone is saying that we have a scientific explanation in the sense of a full accounting for everything we call consciousness.

You're not making such a claim. Other individuals (most notably PixyMisa) have made such claims.

It's just that some people think we need some new physics and some people don't.

If it were just a discussion of whether consciousness is a feature of known physics or of some physics altogether different, I'd not be taking exception. It's the fact that some here seem genuinely unable to distinguish between computer science and physics that's got my gourd inna discord.

But I still think you're being a bit too cavalier. Without a precise description of qualia and consciousness and what-not, in terms of exactly which bits of my inner behavior/experience we're talking about, it's too easy to blow it up into a hard problem of consciousness. There may indeed be a hard problem, but I don't think that appeal to complexity or appeal to incredulity is all that compelling.

It's not so much that I'm incredulous at the explanations being proposed here. It's the fact that the supposed "explanations" deftly sidestep the central issue entirely, while at the same time alleging to have resolved it.

In other words, our lack of ability to describe what we mean by qualia may be the leading component of its apparent complexity.

~~ Paul

The point of my elaborating and philosophizing about qualia is to try to cut through the apparent complexity of the issue and get down to the basics. The higher-order cognitive processes we carry out as humans [such as complex language, planning, art, adaptive learning, etc.] are all offshoots of the more fundamental question of subjective experience. For all we know, a creature as simple as a nematode may be conscious. So, no; I don't think consciousness is necessarily based upon complexity. Below is a general rundown of what I mean by "consciousness" and how I understand the concept:

Minds are the subjects that experience. To the best of our knowledge, they are the product of neural processes [though, conceivably, the mind could be the product of some more general cellular processes]. Minds are where mental properties [like volition, intention, attention, etc.] and mental objects [like memories, "memes", etc.] inhere.

Consciousness, on the other hand, is a state of the mind during which it has the active capacity to be aware of and experience information as subjective qualities. What the subject is aware of, at any given moment, is the focus of their conscious mental activity [i.e. their attention]. Information at the center of one's awareness they are most conscious of, and information more peripheral to the center of awareness they are less conscious of. Information and mental processes completely out of one's awareness are unconscious.

The basic element of conscious information is a quale. These include simple positive [e.g. pleasure] and negative [e.g. pain] subjective qualities, and the broad spectrum of other subjective variations that make up an individual's conscious experience. To put it simply, qualia are the quanta of conscious experience.

Keep in mind that the above is not meant as an explanation of consciousness, but as a schematic description of what can be observed about it via introspection. A sufficient theory of consciousness, IMO, would provide a means of explaining how the above relates to the externally observed physical activity of the brain/body. In principle, such knowledge could then be used to design systems with the physical capacity for consciousness and provide a means of specifying the type and quality of their conscious experience(s).
 
AkuManiMani said:
Minds are the subjects that experience. To the best of our knowledge, they are the product of neural processes [though, conceivably, minds could be the product of some more general cellular processes]. Minds are where mental properties [like volition, intention, attention, etc.] and mental objects [like memories, "memes", etc.] inhere.
I really wish you wouldn't use the term mind. It's so dualistic-ey.

Consciousness, on the other hand, is an active state of the mind during which it has the capacity to be aware of and experience information as subjective qualities. What the subject is aware of, at any given moment, is the focus of their conscious mental activity [i.e. their attention]. Information at the center of one's awareness they are most conscious of, and information more peripheral to the center of awareness they are less conscious of. Information and mental processes completely out of one's awareness are unconscious.

A basic element of conscious information is a quale. These include simple positive [e.g. pleasure] and negative [e.g. pain] subjective qualities, and the broad spectrum of other subjective variations that make up an individual's conscious experience. To put it simply, qualia are the quanta of conscious experience.
Yup, this is similar to a hundred descriptions I've read. I'm not sure what it does for us in terms of trying to figure out how the brain works. Also, how do we know it's not just a social construction?

Anyway, time will tell.

~~ Paul
 
