Explain consciousness to the layman.

Sure. I'm not sure this is coherent. It should be phrased the other way--if a full behavioral analysis is possible, and that analysis does not include something that can rightfully be called "subjective experience", then there's no such thing as a subjective experience.

How can we tell if a behavioural analysis is complete unless it explains everything we find in a system?

This is the real problem scenario. If you really do have subjective experiences, then it necessarily follows that the term "subjective experiences" means something; and that requires that we were somehow able to associate that label with some thing. How did we ever manage to do this if there's no causal connection?

I think this goes to the heart of the issue. How can we associate the label "subjective experience" as meaning something real without a causal connection? Well, my answer is that we do. We are perfectly well able, as human beings with subjective experience, to talk about our subjective experiences, without any causal connection.

Of course, there is a necessary causal connection, logically speaking, between subjective experience and the physical effect that produces the experience, and the physical manifestation of that experience. How that connection works, we don't know.

This directly contradicts your claim that you know you have subjective experiences.

Suppose we pick up the machine from before, and find a label on it claiming that it was made with Infografix Technology <TM>. "Humbug!", says your robot researcher. "I cannot believe in what is obviously a ploy by a marketing guy to sell this device." And so the robot researcher opens the machine, starts fiddling with it, and then manages to figure out exactly how the machine works.

"Aha!", says the robot researcher. "Just as I suspected. I now have a complete theory of this machine's inner workings, and nowhere did I ever run across this Infografix Technology thing. I knew there was no such thing!"

Now, I suspect the robot researcher is loony. Even knowing the full workings of the machine, there's no way it can conclude that Infografix Technology does not exist, because the robot researcher forgot to figure out what Infografix Technology even means. The robot could easily be wrong, given that the very workings of the machine it figured out may be Infografix Technology.

And the objective robot will be able to find a definition of Infografix Technology, which it will use its objective rules to evaluate. The robot is perfectly reasonable, according to its own lights.

Back up a bit. Before telling me that we have information that subjective experiences are real, tell me what it means for them to be real.

That's probably a profound philosophical question - but I'll simply say that if our subjective experiences of the universe are not real, we have no basis for thinking that anything is real.


And before we get there, please tell me how we came to conclude that these were the things we should be attaching the label "subjective experience" to, in order to call ourselves worthy Native English speakers.
There's only one world though. The objective robot needs to figure out what the words "subjective experience" refer to. Upon opening up our heads, it figures out what causes us to utter those words. Somewhere in that mess is the key to that objective robot understanding what "subjective experience" means.

But the objective robot has no more information about this than we have, at any given time. We are capable of doing anything that the objective robot can do. We are just as able to figure out what makes people claim subjective experience.

The next step is for that robot to determine whether the thing it discovers "subjective experience" should mean is actually there.

Given this, I'm not so sure I agree with your conclusions. If we are really describing subjective experiences, then there must be something there that we're talking about. This thing must play a critical causal role in our description of it.

That seems likely. Whatever our subjective experience is, it is tied into the physical world, and is affected by it. It is not something separate.

And therefore, the robot researcher should find a correlation between the term "subjective experience" and some thing that really is there, causing us to label it with that term. If the robot does not associate the term "subjective experiences" with the mechanism that causes us to describe them at this point (which should necessarily exist, if we have those things), then the robot is broken. Check the warranty.

"Without having to allow for the reality" is a bit of a bigger claim than you're letting on. It's more like denying that our star maker uses Infografix Technology. Sure, you can say that a theory including Infografix Technology would be superfluous, but you cannot actually deny the reality of that theory unless you know what Infografix Technology refers to.

But in our case, your robot researcher should know exactly what causes us to claim we have subjective experiences. That in itself tells the robot what it is we're referring to.

But again, you are assuming that the objective robot has made a complete analysis of the system and knows exactly how everything works. If and when this happens, then we will make a judgement accordingly. We don't know if perfect knowledge and understanding of the system is possible, even in principle.

However, if the objective robot doesn't have perfect knowledge of the system, he will have to make a judgement based on what he does know.

No, it's not the case. The subjective reality and the objective reality should be the same reality. You are underestimating what it takes in order to make a claim that a thing is not real.
He would only have to incorporate a theory concerning the meaning of "subjective knowledge" relative to the entities he is studying. The causal mechanisms are sufficient for him to incorporate that theory.
You're directly contradicting yourself above. If we know subjective experiences exist, and the robot has access to the same knowledge we have now, then the robot automatically knows subjective experiences exist.

But we've already said that the robot does not have subjective experiences. Hence the only access he has to the concept is via the descriptions we give. He has access to all other knowledge that we have - but not our actual experiences.

It appears to me that there are no objective descriptions of subjective experience that would make sense to an objective robot. All of our descriptions rely on the person interpreting the concept being someone who has subjective experiences himself.

The only access the objective robot has to the reality of subjective experience is the actual statement from human beings that such a thing exists. We can assume that the objective robot can be as patient and undogmatic as we like, but it will still only be able to make a probabilistic judgement based on the knowledge it has.

I know that if the objective robot had perfect understanding of the way the brain works, it would very likely be able to make a definitive judgement on the nature of subjective experience. But until it has perfect knowledge, it would have to make a judgement on the basis of the knowledge it has.
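One way to picture this "probabilistic judgement" is a simple Bayesian update. The sketch below is my own toy illustration, not anything from the thread, and the likelihood numbers are pure assumptions: each independent human report nudges the robot's belief that subjective experience is real, without ever reaching certainty.

[code]
# Toy Bayesian update (my own illustration, with assumed numbers):
# the robot revises its belief that subjective experience is real
# each time an independent human reports having it.

prior = 0.5              # the robot starts undecided
p_report_if_real = 0.9   # assumed: humans usually report it if it is real
p_report_if_not = 0.6    # assumed: they might report it anyway (confusion, lying)

belief = prior
for _ in range(10):      # ten independent reports
    num = p_report_if_real * belief
    belief = num / (num + p_report_if_not * (1 - belief))

print(round(belief, 3))  # about 0.983: strong, but never a certainty
[/code]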
 
It appears to me that there are no objective descriptions of subjective experience that would make sense to an objective robot.

You have it totally backwards.

The objective descriptions *only* make sense to an objective robot. People, with prior conceptions of what subjectivity means, are unable to understand the objective descriptions.

A perfect example -- when I claimed the objective description for the subjective experience of a rock was "what it is like to be a rock," you responded that the idea of a rock having subjective experience is absurd.

The description I gave is perfectly valid. Any objective robot would agree that it is a valid description. It is only because you have in your mind that all subjective experience is somehow the same as your subjective experience that you feel objective descriptions cannot be arrived at.

An objective robot wouldn't care. A valid description is a valid description. That is the whole point of being objective.
 
But that isn't really dualism then.
Perhaps one can see it that way; however, due to our obvious limitations of knowledge and understanding, the actual ontology of existence is beyond our reach. All we can do is speculate about its nature, wherever one draws the line in defining the substance or principle of an ontology. Any monism may be one aspect of a more fundamental dualism, and vice versa. Which way up the coin lands cannot be known: heads for monism, tails for dualism.

If a duality is an aspect of an underlying monism, then by definition each side of that duality can have knowledge of the rules governing the other.
Yes, this may be the case in some way, or there may be an underlying substructure. We cannot know if we have as yet identified the underlying substance, and we may not be able to recognise it when we do detect it, or to recognise the relation between two apparent aspects of a dualism.

But the whole point of spirit/matter dualism, for example, is that knowledge of the spirit is impossible to gain from the perspective of matter alone. In other words, if we made a robot that could think, but didn't have a spirit, it couldn't know anything about the spirit world.
This is not the point; it's one way of viewing it.

I see no reason why a sufficiently advanced robot would not have a spirit (if such a thing exists), or experience subjective knowledge. Given an ontology in which a substance (or substances) exists, all is substance/material of differing grades or spectra. All is technologically/scientifically accessible.

I am not assuming the non-existence of phenomena other than substance here. I am merely limiting myself to substance for this point regarding ontologies. We could exist in a substance monism or a substance dualism and be blind to the actual ontology. The opinion that a monism is the favoured ontology may well be a consequence of our inherent limitations or a fashionable trend.



This is in direct opposition to the fundamental tenet of monism: that there is a *single* underlying set of rules that everything follows. Because if the spirit world and the material world share the same rules, then one doesn't need to have access to the spirit world to learn the rules that the spirit world functions according to.
Why cannot they both share the same rules, a set of rules we are not aware of?

If you want to say that you think spirit/matter duality is possible, and that one day we will understand the full set of monistic rules that govern both, then that is a perfectly valid viewpoint. However it is certainly not dualism -- it is just another way of saying there is stuff that we don't understand yet.
Yes, we can only speculate. Monism or dualism: what's the difference from our perspective, when one considers what actually exists?
 
Why cannot they both share the same rules, a set of rules we are not aware of?

Because that is what dualism is -- two sets of rules, where one cannot be explained in terms of the other. I didn't make up the definition; that is just what it is!!!!!

Look, I think you have a misunderstanding of what everyone means when they say "dualism."

Dualism is the ontological view that there is a set of things that simply *cannot* be framed in terms of the set of fundamental rules that everything else is framed in terms of. For human purposes, that means describable in terms of mathematics and logic.

For example, the spirit. A dualist thinks that *if* there is a spirit or soul (and most of them do) then the rules governing that aspect of existence cannot be reduced to the same mathematical truths that the rules governing the material world can be reduced to. They hold that there is no mathematical equation that can describe the behavior of the spirit or soul.

Plain and simple -- no ifs, ands, or buts about it. Spirit == something that *cannot* be described by mathematics. No exceptions.

If on the other hand you happen to think that the soul or spirit is something that we just don't know HOW to describe in mathematical terms, then that is NOT dualism.
 
The opinion that a monism is the favoured ontology may well be a consequence of our inherent limitations or a fashionable trend.

No, it is not a trend.

If anything, it is a consequence of the limitations of the logic and mathematics that our minds use to model the world.

However, this is a limitation that cannot be overcome. If you are a human being -- or any intelligent entity in our universe that a human being can observe, for that matter -- then you are either a monist or wrong.

Here is a good example: If there is a spirit, how does it interact with our physical bodies?

The only possible answer is that it somehow interacts with our physical bodies. The only possible way anything can interact with anything is if both things share an underlying set of rules. Maybe that means we don't understand everything about particle physics, and somehow a spirit could affect particle behavior. Who knows. But in any possibility, it requires the spirit and particles to share rules. Otherwise there would be no way for the spirit to affect the particles.

There may be some logic that allows for this to happen; however, it is not any logic that a human or any other intelligent entity is capable of using. That language simply does not exist in our physical universe. In fact it *can't* exist in our physical universe, because everything in our physical universe by definition exists according to the known language of mathematics. Mathematics is all-inclusive because it is defined to be all-inclusive. Anything that math cannot describe simply does not exist, according to us.
 
Yeah but that's like saying you only eat bananas, and "oh, well, I consider all edible compounds to be bananas."

Seems like it is simpler to just call yourself a food-eater rather than a banana-eater-where-banana-means-any-type-of-food.
Unless new discoveries had shown that all edible compounds are in fact bananas.
 
How can we tell if a behavioural analysis is complete unless it explains everything we find in a system?
Not sure I understand the question. I said full, not complete. But yeah, I meant explain everything in the system. Specifically, as a litmus test, the robot researcher should have some theory that successfully predicts what we will do, or will say, given our current state and some input experience.
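To make the shape of that litmus test concrete, here is a minimal sketch (my own, not anything proposed in the thread; the function name and the toy "state" are assumptions for illustration) of what such a predictive theory would have to deliver: a mapping from the system's current state plus an input experience to its next behaviour.

[code]
# A toy stand-in (my own sketch) for the "full behavioural theory":
# given a current state and an input experience, predict what the
# system will say next. A real theory would have to get this right
# for every state and every input.

def predict_utterance(state: dict, experience: str) -> str:
    """Toy theory of a trivially simple 'mind'."""
    if experience == "asked: what did you dream about?":
        return "I dreamed of " + state.get("last_dream", "nothing")
    return "..."

print(predict_utterance({"last_dream": "an elephant"},
                        "asked: what did you dream about?"))
# -> I dreamed of an elephant
[/code]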
I think this goes to the heart of the issue. How can we associate the label "subjective experience" as meaning something real without a causal connection? Well, my answer is that we do.
Of course we don't. If I have a dream of an elephant, I can say, "I had a dream of an elephant". If I did not have a dream of an elephant, I can also say, "I had a dream of an elephant". In the latter case I'm probably lying; in the former case I'm reporting an experience.

Something distinguishes the two cases--primarily, whether or not I believe myself to have a dream of an elephant. If all goes right, and I believe I had a dream of an elephant, then my belief is based on my actually having a dream. So in order for me to correctly say that I had that subjective experience of dreaming of an elephant, there had to actually be a dream of an elephant that caused me to believe I had a dream of an elephant.

Now let's discuss how "objective" facts work.

If I see an elephant at the zoo, I can say, "I saw an elephant at the zoo". If I did not see an elephant at the zoo, I can also say, "I saw an elephant at the zoo". In the latter case, I'm probably lying. Again, something distinguishes the two cases--primarily, whether or not I believe I saw an elephant at the zoo. In order for me to correctly say that I saw an elephant at the zoo, there had to have actually been an elephant at the zoo that caused me to believe I saw one.

There is no difference between these two cases, from an epistemic standpoint.

When I say I dreamed of an elephant, I most certainly do posit that there was a thing that actually happened that is aptly described as dreaming of an elephant, and that this somehow caused me to say that I dreamed of an elephant. That is explicitly what I am claiming. If I claim to know I dreamed of an elephant, I'm automatically claiming that the dreaming caused me to believe it.
We are perfectly well able, as human beings with subjective experience, to talk about our subjective experiences, without any causal connection.
Of course we are. I can say that I dreamed of an elephant without actually having dreamed of one. One way this could happen is if I didn't in fact dream of an elephant, but got confused somehow, and thought I dreamed of one. The other way this can happen is if I'm just lying. In either case, the thing that caused me to say it was not dreaming of an elephant.

And there's likely even some more contrived Gettier-style "justified true belief but not quite knowledge" scenario, where we might have had that dream, forgotten about it, been mistaken about it in some particularly strange way, and then later formed a belief having nothing to do with the dream causally, except for coincidentally happening to be about our dreaming of an elephant.

But the only way we can know we have a subjective experience, and for there to actually be one, is for that subjective experience to have caused the belief that we had it.
Of course, there is a necessary causal connection, logically speaking, between subjective experience and the physical effect that produces the experience, and the physical manifestation of that experience. How that connection works, we don't know.
We know that, if the subjective experiences are real, there had better be a difference between claiming we had a dream about an elephant and lying about having a dream about an elephant.
That's probably a profound philosophical question - but I'll simply say that if our subjective experiences of the universe are not real, we have no basis for thinking that anything is real.
I'm not asking it in order to pose a philosophical dilemma. I'm asking because I think the answer lies in a causal connection between an observation and a belief. I think we form beliefs that we dreamed of elephants in the same way that we form beliefs that we see them at the zoo, and that it is fundamentally a causal relationship between an extension and an intension. We may also cognize "false patterns", where intensions incorrectly form without a sensible extension, but the only way we can obtain knowledge, whatever the sort, is for there to be some sort of extension.
But the objective robot has no more information about this than we have, at any given time. We are capable of doing anything that the objective robot can do. We are just as able to figure out what makes people claim subjective experience.
We pull off the trick using empathy. You are, it is my understanding, trying to exclude this capability from the robot. So the robot has to work a bit harder. But if it obtains a full causal theory of how your brain works, then it can relate to the physical mechanisms within that theory to figure out what you mean by subjective experience.
But again, you are assuming that the objective robot has made a complete analysis of the system and knows exactly how everything works. ... We don't know if perfect knowledge and understanding of the system is possible, even in principle.
Why yes, I am. If I assume less we can't talk to it. So work with it. Incidentally, you're assuming that your objective robot can exist without any subjective experiences even in principle; as such, I don't understand what you're trying to demonstrate by pointing this out.
However, if the objective robot doesn't have perfect knowledge of the system, he will have to make a judgement based on what he does know.
Sure. Anything that doesn't explain why there's a star shape on the paper is false.
But we've already said that the robot does not have subjective experiences. Hence the only access he has to the concept is via the descriptions we give. He has access to all other knowledge that we have - but not our actual experiences.
It doesn't matter that the robot has no subjective experiences. What matters are that subjective experiences are real things, and that they cause us to make statements about them in particular ways. There should be an actual difference between my dreaming of an elephant and my not dreaming of an elephant.
It appears to me that there are no objective descriptions of subjective experience that would make sense to an objective robot. All of our descriptions rely on the person interpreting the concept being someone who has subjective experiences himself.
I would surmise that either subjective experiences do not exist, or they have a correlate with a certain property. In the latter case, when we describe them, we may be describing them subjectively, but only because our brain does all of the work of automatically creating something useful for us to analyze on a conscious level, and injects the digested product directly into the concept-analyzing portions of our brain.

The robot should find these correlates before it can make sense of them on the level of analysis you're likely thinking of. But even before then, it should note interesting patterns; people independently seem to agree quite often about how optical illusions appear, for example.
The only access the objective robot has to the reality of subjective experience is the actual statement from human beings that such a thing exists.
That's no small thing. If I claim that there is a category red, that appears sometimes even when "red" light frequencies aren't present, then that's something worthy of raising a robot eyebrow. However, as soon as the robot verifies that a large percentage of the population can independently establish the redness of an area where no "red" light frequencies are present, then something interesting about that category and configuration arises that is just as worthy of demanding explanation as the star our machine creates.
 
What would consciousness look like from inside the machine?

First it would need a database (memory) for storing propositions, some of which are about itself:

I am Robot14.
I am a CPU.
I am seeing my environment.

Then it would need a face detection program that generates a name for a face image.

Then it would need to convert this to a proposition:
You are Phyllis.
You are a, b, c, etc.

Finally it would need some self-consciousness of what it's doing:
I'm seeing Phyllis.

Then you could say it has become self-conscious on some level.
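For concreteness, here is a minimal sketch of the architecture just described. This is my own construction, not the poster's code; detect_face and its canned result are placeholders, not a real face-detection API. It shows the propositional memory, a detection step that names a face, and a second-order proposition recording what the system itself is doing.

[code]
# Minimal sketch of the architecture above (my own construction):
# a database of propositions, a face detector that names a face,
# and a self-conscious proposition about the act of seeing.

propositions = [                 # the database (memory)
    "I am Robot14.",
    "I am a CPU.",
    "I am seeing my environment.",
]

def detect_face(image) -> str:
    """Placeholder for a real face-detection program."""
    return "Phyllis"             # assumed result for this example

def perceive(image) -> None:
    name = detect_face(image)
    propositions.append("You are " + name + ".")     # proposition about the other
    propositions.append("I'm seeing " + name + ".")  # proposition about itself

perceive(image=None)             # no real image needed for the sketch
print(propositions[-1])          # -> I'm seeing Phyllis.
[/code]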

(Embedded video: an example of face detection from inside a robot.)
(Embedded video: an example of a robot doing the above.)
 
The activity of computation is the aspect of the life of the entity from which consciousness derives its physical presence.
Not necessarily; perhaps one could say that this computation is the aspect of the entity from which the conscious awareness of its physical presence emerges. However, the physical presence is already there, so to speak, independent of the conscious awareness.

I think you'll find that all computational activity is entirely physical.
Yes, some in living computers like a brain, others in cold silicon and metal circuit boards.
The clue is in 'activity'. Automata theory may deal with the theory of computation on abstract machines, but any actual computation is inevitably physical.
In the case of automata there is no living connection between the circuit board and the robotic machine. If there is a physical activity, it is purely electrical, not material on any molecular level.

You could show me wrong by suggesting an example of abstract or non-physical computational activity; can you?
By non-physical, are you including electrical charge?
 
Because that is what dualism is -- two sets of rules, where one cannot be explained in terms of the other. I didn't make up the definition; that is just what it is!!!!!

Look, I think you have a misunderstanding of what everyone means when they say "dualism."

Dualism is the ontological view that there is a set of things that simply *cannot* be framed in terms of the set of fundamental rules that everything else is framed in terms of. For human purposes, that means describable in terms of mathematics and logic.

For example, the spirit. A dualist thinks that *if* there is a spirit or soul (and most of them do) then the rules governing that aspect of existence cannot be reduced to the same mathematical truths that the rules governing the material world can be reduced to. They hold that there is no mathematical equation that can describe the behavior of the spirit or soul.

Plain and simple -- no ifs, ands, or buts about it. Spirit == something that *cannot* be described by mathematics. No exceptions.

If on the other hand you happen to think that the soul or spirit is something that we just don't know HOW to describe in mathematical terms, then that is NOT dualism.

Ok already,

Yes, I understand this distinction; I have yet to meet such a dualist. If I were to, I would point out to them that it can only serve as a philosophical tool along with monism. It should not form the basis of a philosophical doctrine, as neither dualism nor monism can be determined in any way when applied to the reality of existence. The only reason I have been given in favour of monism on this forum is parsimony.

To define existence in terms of two absolutely unconnected entities is ridiculous, and smacks of a caricature of the position.
 
I think that the ability of human beings to detect the feelings of other human beings via the slightest of clues is extraordinary. It is sufficiently developed that people have to actively attempt to hide the outward appearance of their feelings in order that they not be detected. People who can't do it are considered to have a serious disorder. Whether you consider "very good" as an accurate description is of course a value judgement.

I certainly agree that humans, like other higher mammals, have subtle and sophisticated means of detecting each other's emotional states when in visual or even vocal contact. However, I was referring to the particular example of symbols on a screen that you chose as your example of humans being 'very good' at detecting other humans' feelings.

For any reasonable values of 'very good', this is a very poor example, because, as I pointed out, it is so frequently mistaken that a set of emotional indicators was developed to make the emotion associated with those symbols explicit.

IOW, I'm simply saying you chose a demonstrably bad example.
 
Yes, some in living computers like a brain, others in cold silicon and metal circuit boards. In the case of automata there is no living connection between the circuit board and the robotic machine. If there is a physical activity, it is purely electrical, not material on any molecular level.
Did you see the video I posted earlier of an adding machine that works on marbles and gravity?
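For readers who missed the video, here is a toy simulation of how such a machine computes. This is my own sketch, assuming the common binary flip-flop design rather than whatever mechanism the video actually showed: each rocker stores one bit, a falling marble flips it, and only a 1-to-0 flip lets the marble roll on as a carry.

[code]
# Toy marble-and-gravity adder (my own sketch, assuming a binary
# flip-flop design): each rocker holds one bit, least significant first.

def drop_marble(rockers):
    for i in range(len(rockers)):
        rockers[i] ^= 1          # the marble flips this rocker
        if rockers[i] == 1:      # flipped 0 -> 1: marble is caught, done
            return
        # flipped 1 -> 0: the marble rolls on, carrying into the next bit

rockers = [0, 0, 0, 0]           # a 4-bit register
for _ in range(6):               # dropping six marbles adds 1 six times
    drop_marble(rockers)

print(rockers)                   # -> [0, 1, 1, 0], i.e. binary 6
[/code]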

By non-physical, are you including electrical charge?
Electrical charge is physical.
 
Ok already,

Yes, I understand this distinction; I have yet to meet such a dualist.
David Chalmers, the man who coined the term "hard problem of consciousness" that Westprog likes to use, is exactly that kind of dualist.

So of course was Rene Descartes.

If I were to, I would point out to them that it can only serve as a philosophical tool along with monism. It should not form the basis of a philosophical doctrine, as neither dualism nor monism can be determined in any way when applied to the reality of existence. The only reason I have been given in favour of monism on this forum is parsimony.
No, you've been given another and much stronger reason: Dualism leads inevitably to logical contradictions.

To define existence in terms of two absolutely unconnected entities is ridiculous, and smacks of a caricature of the position.
Yes, it is ridiculous. But that's what dualism means in philosophy.
 
...perhaps one could say that this computation is the aspect of the entity from which the conscious awareness of its physical presence emerges. However, the physical presence is already there, so to speak, independent of the conscious awareness.
Perhaps one could say that, but it wasn't what we were talking about. You've changed the subject.

The aspect of the life of the entity from which consciousness derives its physical presence is computation, i.e. computation gives rise to the physical presence (process) of consciousness. So yes, conscious awareness of the physical presence of the entity is therefore also a product of computation.

In the case of automata there is no living connection between the circuit board and the robotic machine.
It's possible to make an automaton with living components and/or connections, but why do you think it matters (why mention it)?

If there is a physical activity, it is purely electrical, not material on any molecular level.
Any physical activity involves the material. Electrical activity is no exception. Your phrase 'not material on any molecular level' is gibberish - what are these 'molecular levels'?

By non-physical, are you including electrical charge?
No, electrical charge is physical - have you never had an electric shock? If electric charge were non-physical, our electrical technology wouldn't work.
 
I certainly agree that humans, like other higher mammals, have subtle and sophisticated means of detecting each other's emotional states when in visual or even vocal contact. However, I was referring to the particular example of symbols on a screen that you chose as your example of humans being 'very good' at detecting other humans' feelings.

For any reasonable values of 'very good', this is a very poor example, because, as I pointed out, it is so frequently mistaken that a set of emotional indicators was developed to make the emotion associated with those symbols explicit.

IOW, I'm simply saying you chose a demonstrably bad example.

Clearly people are much better at detecting emotional state from face to face contact. However, I chose the example of symbols on a screen specifically to indicate how little information is needed to transfer emotional state from one person to another. It can be as little as a few hundred bits.
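As a back-of-envelope check of that "few hundred bits" figure (my own arithmetic, at a naive 8 bits per ASCII character):

[code]
# Rough size of a short emotional message, at 8 bits per ASCII character
# (a naive upper bound; compressed text would be smaller still).

message = "I'm so happy for you! :-)"
print(len(message) * 8)   # -> 200 bits for this 25-character message
[/code]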
 
Incidentally, you're assuming that your objective robot can exist without any subjective experiences even in principle; as such, I don't understand what you're trying to demonstrate by pointing this out.

I'm attempting to demonstrate the limitations of the objective approach. The objective robot need not even be a robot - just an investigator determined to ignore subjective experience.
 
I'm attempting to demonstrate the limitations of the objective approach. The objective robot need not even be a robot - just an investigator determined to ignore subjective experience.
But the reason I have the robot having a full theory of the brain is simply because I want to demonstrate that subjective experiences, if they are real, are part of that full theory. If I held that the robot had anything less than a full theory, you could object that subjectivity isn't part of it.

But nothing here precludes that this robot could have a working theory of subjectivity without a full theory. So what sort of limitation are you attempting to demonstrate by this?
 
But the reason I have the robot having a full theory of the brain is simply because I want to demonstrate that subjective experiences, if they are real, are part of that full theory. If I held that the robot had anything less than a full theory, you could object that subjectivity isn't part of it.

I don't dispute that, particularly, but I don't think that speculation about the nature of the full theory of consciousness is necessarily going to be productive because we don't have a full theory of consciousness. I agree that a full theory would include subjective experience. I don't know if such a theory is possible, theoretically or practically.

In any case, what we can learn by imagining that we knew the answer isn't much help when we don't know the answer.

But nothing here precludes that this robot could have a working theory of subjectivity without a full theory. So what sort of limitation are you attempting to demonstrate by this?

I'm trying to show that if an objective robot had a level of knowledge similar to what we have - that is to say, understanding the basic principles on which the brain worked - it would be able to come up with two explanations for the claim that we have subjective experience.

One possibility is that he would accept that subjective experience is real, though undefined, and he would have to try to come up with an explanation for its real existence. He would have to try to define what subjective experience is without having access to it. This would clearly be a very difficult task - possibly entirely impossible.

Alternatively, he could conjecture that as a side effect of the process of the human brain, this claim to non-existent subjective experience appears. To explain this, he doesn't need to deal with undefined quantities. He doesn't need to conjecture about something that doesn't fit into his view of the world. Everything fits nicely. When he delves a little deeper into the workings of the human brain, he expects to see the particular quirk in the workings that explains the claim of subjective experience.

Of course, he won't find such a thing, because we do have subjective experience. However, we only know we have it by having it. If we didn't have it, we wouldn't need to believe it existed, because its existence doesn't explain anything that can't be as easily explained by other means.

I find it interesting that some philosophers have adopted the position of the objective robot, and claim that subjective experience doesn't exist, or that it is an illusion. Because they are wedded to the objective process, they are willing to discard the edge they have over the objective robot, and forget their own existence.
 
New thought:

We seem to assume consciousness is one definitive "thing." It's not one thing, but rather something quite arbitrary.

Let me explain.

The only consciousness we know of is something the brain does. Our brains. But our brains evolved over millions of years in largely arbitrary ways that just happened to help get the genes responsible for its pieces into the next generation. The remnants of most, if not all, of the earlier brains are in our brains like nested Russian dolls.

There was first a worm brain, and around that grew a fish brain. Added to that was an amphibian, then reptile, then mammal, then primate, then our brain. All those are still alive and whirring in our skulls. The brain is a collection of modules, large and small, that handle all kinds of usually helpful (sometimes not so helpful) functions. Every step of the way it was added to by the chaotic process of evolution by mutation and natural selection. (I may have a detail or two off, but the heart of this is sound.)

Reference: Kluge: The Haphazard Evolution of the Human Mind (Marcus, 2008)

(Does the cerebral cortex decide that its version of consciousness is the definitive one? What about its subjective experience of the more ancient inner layers? Is consciousness the same without them? Why would we want to design a "conscious" machine with the subjective experience caused by the inner fish brain?)

Plus, every one of our brains is slightly different. We have different chemistries, we have modules that may not work in some of us, connections that vary in strength and modules that change size due to chemical and social environments (e.g. religion shrinks part of the brain).

My point is that everyone's subjective experience of consciousness is unique. We don't all have the same Cartesian theater. Maybe some of us have Radio City Music Hall, and others have an old B&W TV in the garage. ;) LOL We are collections of modules -- hundreds or thousands of them. Each brain's differing recipe flavors our subjective experience of consciousness with infinite variety.

It's our memories of the competitive interplay of these modules that inform us of the nature of consciousness, so we have to stop thinking that everyone's consciousness is the same. When we discuss it, we are really discussing our own, not some universal ideal. We can't assume that everyone on every side of the consciousness debate has the same internal subjective experience (ISE). Someone with a dull ISE may be more likely to argue for materialism. Someone with an intense ISE may be more likely to argue for dualism. Then the groups ridicule each other, as if we all had the same ISE and the opposing side was a bunch of clueless idiots.

But it's not an either/or, one-dimensional scale. ISEs vary infinitely, from species to species, individual to individual, and from one moment to the next.

When we struggle to define consciousness specifically enough to make a conscious machine, we forget, dammit, that it's not one thing. It's the mess evolution gave us, and the arbitrary variations of its design and nature are daunting.
 
I don't dispute that, particularly, but I don't think that speculation about the nature of the full theory of consciousness is necessarily going to be productive because we don't have a full theory of consciousness.
You have some explaining to do then. In post #1266, you speculated on the nature of the full theory of our plotting machine.
In any case, what we can learn by imagining that we knew the answer isn't much help when we don't know the answer.
I disagree, which is why I'm posting in the first place. But more to the point, you are insulting me and establishing a double standard. You're telling me that I don't get to speculate because the things I'm speculating about do not help us, since we're speculating about things that don't exist. However, you get to speculate about things that don't exist, and your speculations are helpful.
I'm trying to show that if an objective robot had a level of knowledge similar to what we have - that is to say, understanding the basic principles on which the brain worked - it would be able to come up with two explanations for the claim that we have subjective experience.
I have no idea what you're talking about. What knowledge are you giving this objective robot outside of the range of the basic principles on how the brain works?
One possibility is that he would accept that subjective experience is real, though undefined, and he would have to try to come up with an explanation for its real existence.
How would the objective robot here even know there's a word for subjective experience? You're not outlining the hypothetical clearly enough for me to make sense out of it.

You're calling this robot "him" and "he"--is it an android? Does the android have a visual detection apparatus--something we can call eyes? If I pull a penny out from behind a curtain, will it be able to tell it's a penny? If I show the android a photograph of a penny, will it be able to tell that the photograph is not a penny, and doesn't have a penny inside of it?
I find it interesting that some philosophers have adopted the position of the objective robot, and claim that subjective experience doesn't exist, or that it is an illusion. Because they are wedded to the objective process, they are willing to discard the edge they have over the objective robot, and forget their own existence.
Who are you talking to here?
 