
Has consciousness been fully explained?

I'm not disputing that the algorithm is platform-independent. I've just never heard that the algorithm is necessary and sufficient to determine all properties of subjective experience.

I think I'm just making a slightly weaker claim than you are--from a position of profound and possibly permanent ignorance of the qualitative details of another's subjective experience.

I guess I'm positing (as Sydney Shoemaker may have before me?) something like this:

Soft = software
Hard = hardware
Bhvr = conscious behavior (behavior that leads others to think an entity is conscious)
Exp = conscious experience (the private subjective "what it's like" for an entity)

Soft A + Hard A ==> Bhvr A + Exp A

Soft B + Hard A ==> Bhvr B + Exp B

Soft A + Hard B ==> Bhvr A + Exp C

Soft B + Hard B ==> Bhvr B + Exp D

In other words, identical software run on any hardware will always result in identical behavior. (Substrate independence claim from functionalism.)
Experience can be considered as just another behaviour - in which case it is identical for identical software - or as the state or state progression of the software - in which case it is identical for identical software.

If the software is functionally identical, experience is identical. Hardware is irrelevant.

The combination of software and hardware determines a possibly unique subjective experience.
How?

It's not clear to me that subjective experience needs to be fully determined by software.
That's why I keep asking - in various forms - the same question: What else can possibly determine it?

But--this does entail epiphenomenalism, which I intuitively object to, while not having a strong argument against.
Sorry, not clear on that. What exactly entails epiphenomenalism?
 
Well, yes, they're immature. The same is true of many adults! ;)

However, the types of abstract symbolic thought that distinguish adults from children do develop at that age range. Teenagers don't behave like adults, but children under the age of about 10 don't think like adults. If you try to present abstract concepts to a child under that age, often it just doesn't work, and changing the way you represent it doesn't help.

Ok, I see. We were just talking about 2 different aspects of development.

Those very basic conceptual facilities are indeed intact in teens. But their tendency to so frequently evoke "What were you thinking?!" responses from adults comes from a lack of development in other areas of the brain.
 
The argument seems to be between the computationalists (I've referred to them as Strong AI proponents) who claim that the execution of an algorithm is both necessary and sufficient to create the conscious experience.

Precisely. They don't recognize consciousness as a behavior, so they believe it can be pulled off without a mechanism to execute it at the end of the algorithm (which I would contend is itself an abstraction of a physical process anyway), just as we need mechanisms to make us blink, shiver, focus light on the retina, and so forth.
 
:) Pituitary hormones don't help either.

Some of it is also learned behaviors. :)

I figure people seem to 'mature' at about 25, when the hormones kick way down. (Purely anecdotal.)

Theories involving hormones and experience have now been eclipsed by discoveries of developmental differences between teens and adults in key areas of the brain. Specifically, those areas handling impulse control, emotional control, and thinking through long-term consequences.

Evolutionary biologists also theorize that the teen brain may actually be built to take risks, but that hasn't yet been demonstrated, although it's certainly testable. Jury still out.
 
Certainly. There is no requirement for the whole brain to be involved in the consciousness loop - and it isn't. There is no requirement that there only be one consciousness loop in the brain, and there is no requirement that the conscious loop be self-consistent - and, as Ramachandran notes with several examples in that second article, it isn't.

Right. And so the key question is this: What is the difference between the configurations which generate Sofia, and those which do not? Without understanding this, we cannot claim to have understood how consciousness is created.

As I've said before, it leaves us trying to explain how a car runs by saying it has metal parts.
 
It occurred to me last night that people aren't always conscious in the same ways to begin with.

For example, if I am at the theatre and I am really paying attention to the movie (which implies I am not paying much attention to anything else) am I conscious of my current self? Of the environment? Of anything besides the information being conveyed to me by the photons coming from the screen and the sound waves coming from the speakers?

Part of the struggle I think many people have with this issue is that for some reason they lump all aspects of the conscious experience into the same term "consciousness" and really miss the important fact that "consciousness" isn't one thing in humans. They say "oh, a computer can't be conscious because it can't write poetry" or something stupid like that -- well, are you writing poetry when you watch a movie?

Exactly. And there's not even any reason to believe that consciousness is necessary to write poetry in the first place.

The behavioral definitions relying on output almost always fall flat.

On the other hand, consciousness must be doing something, or else evolution would not have produced it. It's resource-intensive, so it's got to be serving some very important function.

Seems to me that it evolved to handle high-level decision-making, and was of course so useful that it was co-opted for other functions as well.

And of course, split brain experiments clearly demonstrate that consciousness is not unitary, as it seems to be from our everyday naive perspective.
 
If you have a better definition of Physical Information then feel free.

If we are discussing the physics of information transfer, then it really doesn't matter what the layman's understanding of the word is. We presumably want to know how the physics works.

Incidentally, the reference was originally given by Pixy. One might almost suppose that he hadn't read it...

So, how is the term used in a technical sense?

How do people who practise physics use it?
 
Ok, I see. We were just talking about 2 different aspects of development.

Those very basic conceptual facilities are indeed intact in teens. But their tendency to so frequently evoke "What were you thinking?!" responses from adults comes from a lack of development in other areas of the brain.

And lack of history, along with the impulse control issue, which many adults never resolve.
 
Not the way I define it. As I said, experience requires self-reference, that is, you not only have to process the data, you have to process the processing of the data.

Okay, then I guess I think about self-reference differently. A Roomba gets information from its environment that an object is in the way, processes the information and tells itself (self-reference) to move around the object. At no point does it ever "process the processing" of the information, so I don't think it is self-aware at all, although I would say it is self-referencing. In fact, I am not aware of any machine that "processes the processing" of information it gets from the environment. Certainly not in any way like a conscious mind does.

Pure epiphenomenalism makes no sense anyway.

I agree.
 
Theories involving hormones and experience have now been eclipsed by discoveries of developmental differences between teens and adults in key areas of the brain. Specifically, those areas handling impulse control, emotional control, and thinking through long-term consequences.

Evolutionary biologists also theorize that the teen brain may actually be built to take risks, but that hasn't yet been demonstrated, although it's certainly testable. Jury still out.

I agree and disagree, because of the complexity of the systems. I will read your links and ponder. I doubt that the sample sizes are that high. :)

Risk taking or just goofing off?

ETA: Did you post links on that?

I would think that 10-12 is really prepubescent,
12-17 adolescent,
17-21~25 finish maturing,

so I am interested in what those studies are.
 
What he's saying is: we already know brains produce consciousness; it's quite clear that to do this the neural network has to loop back and examine its own activity.


Notice that you said "loop back" and "examine its own activity." The "loop back" part is self-reference, the "examine its own activity" part is self-awareness. Again, I think you are confusing these two terms.
 
Experience can be considered as just another behaviour - in which case it is identical for identical software - or as the state or state progression of the software - in which case it is identical for identical software.

Hang on a sec. Experience *can* be considered as just another behavior. But it's not at all clear or obvious that it *should* be. If it's a behavior, it's one unlike any other behavior we know. We can't observe it in others, for one thing.

If we don't consider subjective experience as a behavior, then functionalism makes no claim about it.

If the software is functionally identical, experience is identical. Hardware is irrelevant.

Only if experience is a behavior, like you assumed earlier.


If I get a cataract in one eye, then I see things differently. The change in experience is the product of the software, which does not change, and the hardware, which does.

That's why I keep asking - in various forms - the same question: What else can possibly determine it?
See above.

Sorry, not clear on that. What exactly entails epiphenomenalism?

The account I have outlined entails epiphenomenalism. If the subjective experience *does* change, but the behavior does not, then the experience has no causal power. If it had causal power to change behavior, then either functionalism would not be true (which I highly doubt), or hardware would play no part in subjective experience--which is what you're arguing.

Incidentally, I'm not married to this view, just exploring it.
 
He's not explained it. He's seen the mirror neurons, and he's speculating that they might have something to do with consciousness. If he has a theory, what is it?

He's offering an explanation for self-awareness. It may not be well evidenced yet and it may be wrong, but it is still an attempt at an explanation. When I posted the link originally, I said it was an interesting "hypothesis," which it clearly is. I certainly don't think it is the final answer for self-awareness, not yet at least.
 
Okay, then I guess I think about self-reference differently. A Roomba gets information from its environment that an object is in the way, processes the information and tells itself (self-reference) to move around the object. At no point does it ever "process the processing" of the information, so I don't think it is self-aware at all, although I would say it is self-referencing. In fact, I am not aware of any machine that "processes the processing" of information it gets from the environment. Certainly not in any way like a conscious mind does.



I agree.

Your example is a very weak sort of self-reference.

I believe what Pixy is talking about is when an entity works with an informational model of the world--which also contains a model of the entity itself. This is the sort of self-reference and self-awareness that seems to matter in consciousness.

Also, the idea of "process the processing" is not particularly special. Large database systems can keep logs of the amount of memory and time spent processing queries, and can process those logs to look for certain basic patterns.
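
To make that concrete, here's a toy sketch in Python (hypothetical names, not any real database engine) of what "processing the processing" looks like in ordinary software: the system records how each query was handled, then runs a second pass over its own records.

# A toy sketch, not any real database engine: the system records how each
# query was processed, then does a second pass over those records.
import time

query_log = []  # records of the system's own processing

def run_query(predicate):
    start = time.perf_counter()
    result = [row for row in DATABASE if predicate(row)]  # first-order processing of the data
    elapsed = time.perf_counter() - start
    query_log.append({"rows": len(result), "seconds": elapsed})
    return result

def analyse_log():
    # second-order processing: examining the record of the processing itself
    slow = [entry for entry in query_log if entry["seconds"] > 0.1]
    return {"total_queries": len(query_log), "slow_queries": len(slow)}

DATABASE = [{"id": i, "value": i % 7} for i in range(10_000)]
run_query(lambda row: row["value"] == 3)
print(analyse_log())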
 
Exactly. And there's not even any reason to believe that consciousness is necessary to write poetry in the first place.

The behavioral definitions relying on output almost always fall flat.
Behavioral definitions have to be of the "one from column A, two from column B" sort of thing anyhow. Which is why the 'subjective experience' thing is a dead end.

Either something meets the chosen criteria or it doesn't. It is hard to judge 'subjective experience' from behavioral criteria, even those of self report.
On the other hand, consciousness must be doing something, or else evolution would not have produced it. It's resource-intensive, so it's got to be serving some very important function.

Seems to me that it evolved to handle high-level decision-making, and was of course so useful that it was co-opted for other functions as well.

And of course, split brain experiments clearly demonstrate that consciousness is not unitary, as it seems to be from our everyday naive perspective.

Thank you, I think that consciousness is a rubric for separate events.
 
Okay, then I guess I think about self-reference differently. A Roomba gets information from its environment that an object is in the way, processes the information and tells itself (self-reference) to move around the object.
No, that's not self-reference, that's merely reference.

At no point does it ever "process the processing" of the information, so I don't think it is self-aware at all, although I would say it is self-referencing. In fact, I am not aware of any machine that "processes the processing" of information it gets from the environment. Certainly not in any way like a conscious mind does.
All of this is correct except the suggestion that the Roomba is engaging in self-reference. It's not. If you read the Wikipedia article on reflection it will give you a good idea of how self-reference is used in conventional computing.

Or read Godel, Escher, Bach, of course.
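
For a flavour of what reflection means in everyday computing (purely illustrative, and no claim that this by itself constitutes awareness), here's a minimal Python sketch of a running program inspecting its own structure:

# Purely illustrative: a running program examining its own definition.
import inspect

class Agent:
    def act(self):
        return "acting"

    def describe_self(self):
        # the object refers to itself and lists its own methods at runtime
        methods = [name for name, _ in inspect.getmembers(self, inspect.ismethod)]
        return f"I am a {type(self).__name__} with methods {methods}"

agent = Agent()
print(agent.describe_self())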
 
Notice that you said "loop back" and "examine its own activity." The "loop back" part is self-reference, the "examine its own activity" part is self-awareness. Again, I think you are confusing these two terms.
Not at all.

As I said earlier, a self-referential sentence is not self-aware, because it is not aware.

A program (a running program) is aware. A self-referential program is self-aware.
 
Hang on a sec. Experience *can* be considered as just another behavior. But it's not at all clear or obvious that it *should* be. If it's a behavior, it's one unlike any other behavior we know. We can't observe it in others, for one thing.
We can certainly infer it in others.

If we don't consider subjective experience as a behavior, then functionalism makes no claim about it.
Then it cannot be said to exist in any way at all. I don't think that helps.

Only if experience is a behavior, like you assumed earlier.
If it does something, it's a behaviour, and it will be identical on identical software regardless of the hardware.

If it's just a pattern in the software state, it will be identical on identical software regardless of the hardware.
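
To illustrate the substrate-independence claim being made here (a toy sketch only, with a made-up update rule): a deterministic program's state progression is fixed by the software, so the same trace falls out on any machine that runs it faithfully.

# Toy sketch with a made-up update rule: the state progression is determined
# entirely by the software, so the summary hash is the same on any hardware
# that runs the program correctly.
import hashlib

def state_trace(n):
    state = 0
    trace = []
    for i in range(n):
        state = (state * 31 + i) % 1_000_003
        trace.append(state)
    return trace

digest = hashlib.sha256(str(state_trace(1000)).encode()).hexdigest()
print(digest)  # identical on a Mac, a PC, or any other faithful substrate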

If I get a cataract in one eye, then I see things differently. The change in experience is the product of the software, which does not change, and the hardware, which does.
No! That's a change in the input data.

If you run a calculator program on a Mac and a PC, and you put in 1 + 2 on the Mac and 3 * 4 on the PC, you'll get different answers.

But that provides no insight into the underlying question.

The account I have outlined entails epiphenomenalism. If the subjective experience *does* change, but the behavior does not, then the experience has no causal power.
What would it mean for the subjective experience to change, if it does not change behaviour?

If it had causal power to change behavior, then either functionalism would not be true (which I highly doubt)
How would that follow? In a functionalist view, consciousness is just another function, with its inputs and its outputs.
 