
My take on why the study of consciousness may not be so simple

How about we try this properly.



Now, please tell us what portion of the above statement you take exception with.

Pain is not data being passed through the nervous system. Pain is caused by data being passed through the nervous system, or alternatively, by certain centres of the brain having certain kinds of activity. That is not a useful or realistic description of what it feels like to be in pain.

Ask anyone with a toothache whether it matters more which centre of the brain is causing the excruciating agony, or the sensation of excruciating agony that he feels. Not on this forum, obviously.
 
Pain is not data being passed through the nervous system. Pain is caused by data being passed through the nervous system, or alternatively, by certain centres of the brain having certain kinds of activity. That is not a useful or realistic description of what it feels like to be in pain.

Ask anyone with a toothache whether it matters more which centre of the brain is causing the excruciating agony, or the sensation of excruciating agony that he feels. Not on this forum, obviously.

I fail to see how you can say the effect matters more than the cause, given that the cause causes the effect and the effect is the effect of the cause.

Furthermore, I defy you to come up with a better "useful or realistic" description of what it feels like to be in pain. The only available description I am aware of is that pain feels like pain.
 
So you are abandoning the test that people actually use constantly during their waking hours, and you are going to replace it with what? The Turing test isn't good enough for you - so how are you going to determine whether something is conscious or not? Is it the Pixy way, by fiat?

I really don't know what you are talking about or what your position is anymore. Your default reply to anyone who says that computers behave as if they are conscious is some snarky remark about how obvious it is that chat bots aren't human. Well so what? Are you saying that in order for something to be conscious, it must first convince you that it is human? What kind of ridiculous non sequitur is that? Why is 'ability to pretend to be a human' a requisite for awareness?
 
Third Eye Open said:
Are you saying that in order for something to be conscious, it must first convince you that it is human? What kind of ridiculous non sequitur is that? Why is 'ability to pretend to be a human' a requisite for awareness?
I am curious, does this also sound ridiculous to you?
That to be human I need to convince you that I am conscious.
 
I am curious, does this also sound ridiculous to you?
That to be human I need to convince you that I am conscious.

That doesn't really follow. I assume humans are conscious if they behave that way.

Someone who is in a vegetative state is obviously not conscious, but they are still human.
 
I am curious, does this also sound ridiculous to you?
That to be human I need to convince you that I am conscious.

The solution may be to stop using the poorly defined term consciousness. Being conscious is loosely defined in medicine as an ability to respond to and interact with stimuli, outside of cases of volition or impairment.
 
I am curious, does this also sound ridiculous to you?
That to be human I need to convince you that I am conscious.

Yep.

This position is simply the result of being backed into multiple corners and making multiple concessions, many of which contradict each other.

To be conscious you need to be a human, except if you are a conscious animal, which may or may not exist, but if you are a human you may or may not be conscious, depending on whether you behave like you are conscious or not, but if a non-human behaves like a conscious human, it probably isn't.

This is what happens when you try to have your cake and eat it too -- the cake turns into a patchwork mess of chewed-up spit-out bites.
 
That doesn't really follow. I assume humans are conscious if they behave that way.

Someone who is in a vegetative state is obviously not conscious, but they are still human.

That is what !kaggen is saying -- the assumption that consciousness is equivalent to human-ness is ridiculous.
 
You're using the words "conscious" and "aware" differently than I am. When you use the terms you simply mean any IP system that's referencing data about its environment [aware] or itself [conscious/self-aware]. When I use those words I'm referring to lucid/semi-lucid states when our minds translate the data, or some portion of it, into qualia. So when I use the word "observe" I don't just mean that system-p is referencing some stream of data, but that the data is producing a subjective experience of some kind within system-p.
What is the difference supposed to be then? Apart from the enormous problems with the very notion of qualia, what is the actual difference? A subjective experience is merely a combination of reference and self-reference; we know this because this is the only successful explanatory model we have that does not contradict reality. To put it more simply, there is nothing else for it to be.

In other words, you simply mean an information processing system that is referencing data about its environment and itself. The notion of qualia is entirely irrelevant (as well as, at best, meaningless).
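The bare "reference plus self-reference" structure described above can be sketched in a few lines, which is itself part of the dispute, since nothing in the sketch obviously amounts to experience. A toy illustration (names like SystemP and env_model are my own, not anyone's actual model):

```python
# Toy sketch of the "reference + self-reference" idea: a system that
# holds data about its environment and also data about its own state.

class SystemP:
    """Minimal information-processing system with two kinds of reference."""

    def __init__(self):
        self.env_model = {}    # reference: data about the environment
        self.self_model = {}   # self-reference: data about its own state

    def sense(self, reading):
        # Reference: record a fact about the environment.
        self.env_model.update(reading)

    def introspect(self):
        # Self-reference: record facts about its own current contents.
        self.self_model["env_facts_known"] = len(self.env_model)
        self.self_model["is_modeling_itself"] = True

p = SystemP()
p.sense({"temperature_c": 21})
p.introspect()
print(p.self_model)  # {'env_facts_known': 1, 'is_modeling_itself': True}
```

Whether such a system thereby has a subjective experience, or merely two dictionaries, is exactly the point being argued.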
 
Sorry, missed this one.


Another day, another quixotic whack at a definition. Here's one only a phenomenologist's mother could love: Awareness is the relation of the subject to the objects of experience for purposes of description (focussed awareness -- intentionality), framing description (contextual awareness -- the gestalt background), reaction (peripheral awareness -- of the 'shape' of one's environment), and monitoring (latent awareness -- alert to 'significant' change).

All very good, but you know the next step -- what do we mean by subject, object, description, monitoring, etc. -- how do we break down those issues?

And how do we do it without lapsing into dualism-speak?

By the way I don't really expect answers to those questions.



It's hard to pin down a subjective process, alright. But don't despair: things aren't as bad as the Titans' start to the season. My sense is perceptions are just processes, stable processes, sometimes discrete, sometimes overlapping, within the process of awareness. When we freeze cognitive processes to talk about them as separate objects, we sometimes forget they're originally embedded processes, I think, muddying up the water a bit.


Excellent point. One that we need to keep in the foreground. :)

Good thing I don't really care about the Titans. I kind of gave up on pro football when the Oilers left Houston (originally a Texan) and put all my allegiances into college ball. Been some lean years with the Longhorns at times, but they're back in the big show this year. Seems eerily familiar, like 2005 is repeating itself.

Shudder.
 
I fail to see how you can say the effect matters more than the cause, given that the cause causes the effect and the effect is the effect of the cause.

Furthermore, I defy you to come up with a better "useful or realistic" description of what it feels like to be in pain. The only available description I am aware of is that pain feels like pain.

Which is enough for most normal people to understand. However, for some odd reason, you don't think that's valid or real unless it's translated into a set of 26 symbols and expressed in terms of a different set of 26 symbols.

The symbols, the referents, were invented in order that we could describe to each other the fact of pain. The pain came first. To regard the definition of pain as being more important than the fact of the pain is to miss the point.
 
I fail to see how you can say the effect matters more than the cause, given that the cause causes the effect and the effect is the effect of the cause.

How can you say that the proximate cause matters more than the cause of the cause, and so on back to the big bang?
 
You're using the words "conscious" and "aware" differently than I am. When you use the terms you simply mean any IP system that's referencing data about its environment [aware] or itself [conscious/self-aware]. When I use those words I'm referring to lucid/semi-lucid states when our minds translate the data, or some portion of it, into qualia. So when I use the word "observe" I don't just mean that system-p is referencing some stream of data, but that the data is producing a subjective experience of some kind within system-p.

Why would you do that? Why would "observe" be synonymous with "subjective experience"? Of course, if you use them that way, thermostats and cameras don't "observe". But then why didn't you say so in the first place?

Because when I use the term "observe" I take for granted that it's referring to mentally focusing one's attention on some sensible object or event. If I expand the meaning of the term any wider [like in the instance of a camera or audio recorder] then any transfer of information from one system to another could be classified as "observation". I just assumed that you understood the word in the same sense. My badnik. Looks like we're going to have to qualify our word usage even more than I thought :-X

Likewise, if the computer were simply mining, processing, and refining raw data, as per the designs of conscious scientists who intend to read and interpret it later, then it's being -used for- the purpose of science but is not -doing science- itself.

Agreed. But a computer is able, in principle, to read data on its own, analyse it and reach conclusions without human intervention. In fact, a bunch of computers could communicate with one another without any human action.

In principle I think such a thing is possible. We have plenty of automated systems now that can manage themselves to some degree without human intervention; but the kicker is that they were intentionally designed by conscious agent(s) to perform the tasks they carry out. The systems themselves, however, are essentially blind to what they're doing.
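The "in principle" point can be made concrete with a hedged sketch (the function name, data, and threshold are all illustrative assumptions): a program that reads data, analyses it, and emits a "conclusion" with no human in the loop. Whether this counts as -doing science- rather than being used for it is exactly what's in dispute.

```python
# Sketch of an unattended analysis chain: raw readings in, a rule-based
# "conclusion" out, no human intervention anywhere in between.

def analyse(readings, threshold=100.0):
    """Reduce raw readings to a summary and a rule-based 'conclusion'."""
    mean = sum(readings) / len(readings)
    conclusion = "anomaly" if mean > threshold else "nominal"
    return {"mean": mean, "conclusion": conclusion}

data = [98.2, 101.7, 99.5, 104.1]
report = analyse(data)
print(report["conclusion"])  # prints "anomaly" (mean 100.875 > threshold)
```

The program was still designed by conscious agents to apply that rule; nothing in it suggests the system knows what an anomaly is.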

In everyday life we must make the distinction between events that are initiated with conscious intent and those that aren't. It's how we distinguish crimes from accidents and natural disasters. If we didn't draw this distinction there would be no contention over whether life evolved blindly via natural processes or was "intelligently designed" by some deliberate conscious agency; there wouldn't be any real difference between the two. The question of intentionality isn't something trivial that can be disregarded, it's absolutely vital to the question of consciousness.

Because while computing an output and formulating a conclusion are both examples of information processing, they are qualitatively different. In the former example there isn't any subjective component to the operation; in the latter case there is.

Your explanation sounds circular to me. Why do you think there is no subjective experience?

Because of what we've learned from our own biology and neurophysiology, we know that just because a system is processing information does not mean that there's a subjective component to it. The overwhelming majority of information processing in our own bodies is completely unconscious [i.e. no subjective component]. This clearly demonstrates that information processing, in and of itself, is not the same as, and does not even necessarily produce, conscious thought.

Sorry for the temper, but I get kinda tired of answering the same question over and over again. I already made it abundantly clear that I am absolutely certain I'm experiencing my experiences. If you're not sure of your own, that's your affair.

It's precisely when you're really certain about something that you should doubt it the most.

How can one hold empirical science to be true and reliable while doubting the very basis of empiricism?

Can every phenomenon be simulated/described by a computation?

Perhaps. You mean a simulation or a model?

I mean both actually. I've a very specific reason for asking that question since it comes right down to the crux of an issue that I'm not sure I've communicated effectively so far. We agree that not every phenomenon is a computation. I assume we also agree that while this may be the case, those phenomena can still be mathematically described or modeled in some way, whether it be by some set of equations or a computer simulation based on those equations.

With that being said, this establishes that there is a difference between a phenomenon and a computational model of said phenomenon. Which brings me back to a point I've been attempting to explain. Below I'm going to try to lay out my line of reasoning:

[P1] - Not all phenomena are computations, although abstracted features of them may be mathematically modeled and simulated on computer systems.

[P2] - Computational simulations are not identical to the systems they're modeling. They only provide abstracted descriptions to aid in our understanding of them.

[P3] - We know from the example of our own biology that processing information [even self-referencing IP] does not necessarily equate with there being a subjective component. Whatever consciousness is, it's a product of the physical conditions of the brain/body.

[P4] - Right now neuroscientists are still hard at work trying to understand what it is about brain activity that produces what we call consciousness and the subjective experiences that accompany it. Until we have such an understanding we have no way of technically designing such a feature into our current technology.

[C] - Given the above, it is not justified to assume that computational simulations of brain function will necessarily produce consciousness.
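The distinction in [P1] and [P2] can be made concrete with a toy model (my example, not from the thread): a simulation of cooling per Newton's law. The program computes numbers that -describe- cooling, but nothing in the machine actually gets cooler, just as, on this argument, a simulation of brain function need not produce what brains produce.

```python
# Euler-step simulation of Newton's law of cooling, dT/dt = -k(T - T_env).
# The computation yields a trajectory that *describes* a cooling object;
# no object anywhere in the machine is cooling.

def simulate_cooling(t0, t_env, k, dt, steps):
    """Return the modeled temperature trajectory over `steps` time steps."""
    temps = [t0]
    for _ in range(steps):
        t = temps[-1]
        temps.append(t + dt * (-k) * (t - t_env))
    return temps

traj = simulate_cooling(t0=90.0, t_env=20.0, k=0.1, dt=1.0, steps=30)
# The trajectory falls toward t_env, as the modeled coffee cup would --
# but no coffee, and no cooling, exists anywhere in the computation.
```

Whether consciousness is like the cooling (a physical process the model only describes) or like the computation itself is, of course, the very question under debate.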

How does my inability to directly access the subjective states of others establish, or even imply, that I've none of my own?

That's not what I said. I said that you have no idea if other humans are conscious (since you said there is no test to determine consciousness) so why do you still reach the conclusion that they are ?

Because it's an instinctive feature of humans to identify with the external behaviors of others and equate those behaviors with our own consciousness [humans with conditions like autism seem to lack this instinct]. But, just like all instincts, they are not necessarily accurate. For example, objects of roughly a particular size and shape can fool many species of birds into treating them like their own eggs, even though they aren't eggs at all. Likewise, human intuitions about what behaviors indicate consciousness can be fooled.

I asked you WHY you are so certain of it. THAT was my question and all you've done is tell me that you are certain.

I'm certain that I'm experiencing my experiences for the same reason I am certain 1+1=2: because it's unequivocally -demonstrated- to me that it is so. In fact, it's not even possible for me to know that 1+1=2 unless I -experience- in my own mind the a priori fact that one and one are equal to two. It's not a matter of faith, or even a matter of inference -- the manifest proof of an experience is the experience itself. You're essentially asking for proof of proof. Why on earth would you need this explained to you? :confused:
 
You're using the words "conscious" and "aware" differently than I am. When you use the terms you simply mean any IP system that's referencing data about its environment [aware] or itself [conscious/self-aware]. When I use those words I'm referring to lucid/semi-lucid states when our minds translate the data, or some portion of it, into qualia. So when I use the word "observe" I don't just mean that system-p is referencing some stream of data, but that the data is producing a subjective experience of some kind within system-p.

What is the difference supposed to be then? Apart from the enormous problems with the very notion of qualia, what is the actual difference? A subjective experience is merely a combination of reference and self-reference; we know this because this is the only successful explanatory model we have that does not contradict reality. To put it more simply, there is nothing else for it to be.

In other words, you simply mean an information processing system that is referencing data about its environment and itself. The notion of qualia is entirely irrelevant (as well as, at best, meaningless).

PixyMisa, I'd love to be able to engage you in some meaningful discussion but, like I told you before, I'm not going to slog thru your ideological baggage in an attempt to do so. At this point, all you've managed to do is thoroughly convince me that you're an intellectually shallow ideologue with no interest in an actual discussion. When you're ready to do so let me know. Until then stop wasting my time.
 
That doesn't really follow. I assume humans are conscious If they behave that way.

Someone who is in a vegetative state is obviously not conscious, but they are still human.


Interesting.
The question then is how do we recognize a human?
Do we need to be a conscious being, or a conscious human, to recognise a human?
Is a corpse still a human?
If so when is a corpse no longer a human?
 
In everyday life we must make the distinction between events that are initiated with conscious intent and those that aren't.
Quick question... what is "conscious intent"?

Do you mean to convey that our conscious minds create the intent, or that we have an intent that we are aware of?

If it's the former, by what means? (And does that really fit our experience? It doesn't fit mine...)

And if it's the latter, could we have intent for things we're not conscious of? And if so, would there really be a difference between said intent and a computer, even if the computer had no "subjectivity" about it?

(FYI, I'm clumsily trying to avoid the word "intention", which means something else).
 
I've been investigating Roger Penrose and I found out that he wrote a book about consciousness entitled "The Emperor's New Mind". In it, he argues that classical mechanics is insufficient for studying and understanding the process of human consciousness, and that quantum mechanics is closer to being a tool for understanding it.

Since the issue of Artificial Intelligence has been argued here before, and dualists have argued that you can't explain the "magical" process of consciousness through science and math, it occurs to me that maybe we haven't been fair to the problem: perhaps our explanations have been too simplistic, and we haven't fully reviewed the ways in which consciousness can be approached (and the way Penrose suggests is quantum mechanics).

(And considering that things behave very differently at the quantum level, it looks like this could be a clue to the apparent "mystery" of the behavior and nature of consciousness)

This is why I invite anyone who can contribute their thoughts and knowledge about how consciousness can better be understood from the Quantum Mechanics point of view.

And that includes people who have read Penrose's book and can give their layman version of what's more or less addressed in it (trying not to spoil the surprises for us too much :D )

Plato said that just as "seeing seeing" and "hearing hearing" are meaningless, so "knowing knowing" is meaningless.

What do you think?
 
Because when I use the term "observe" I take for granted that it's referring to mentally focusing one's attention on some sensible object or event. If I expand the meaning of the term any wider [like in the instance of a camera or audio recorder] then any transfer of information from one system to another could be classified as "observation".

I think that is at the heart of the disagreement. The proponents of the algorithmic theories seem to be claiming that there is somewhere in between - that "observe" can be more general than a person looking at something, but less general than the exchange of information that happens between all forms of matter, all the time.

My primary disagreement is that there doesn't seem to be any rigorous explanation as to why this middle ground exists.
 