
Explain consciousness to the layman.

Translation: "I don't know anything about electricity or biology."

I know enough to see the differences between a computer and a brain.

If one considers that consciousness is not emergent from the activity specifically responsible for the computation in the brain, but from some other biological activity in the brain, then it would not follow that a computer simulation of that same computation would include consciousness of the same kind, because the other biological activity would not be replicated.
 
Thank you!

I was starting to feel hurt that no one was responding with "nonsense" or "you don't know what you're talking about." :rolleyes:

You bring up a couple of thoughts which I find interesting here.

Firstly, our heritage of consciousness: different parts of the brain developed at different periods in our evolution. The most highly developed areas of the brain, those responsible for the degree of consciousness experienced by humanity, have developed relatively recently.

Our ancestors going back a long way developed awareness of their environment right at the beginning. For billions of years they developed and finely tuned bodies subtly adapted to and aware of their environment.

We, in recent millennia, have inherited this legacy and added the icing of self-consciousness on top.

Secondly, that we are all different: we each develop in our own unique way, and the more subjective and abstract activity in our brains may be very different. This may well also stretch to ideology; a mystic may think in a very different way, about very different subjects, from a mathematician in a western university, for example.
 
...Our ancestors going back a long way developed awareness of their environment right at the beginning. For billions of years they developed and finely tuned bodies subtly adapted to and aware of their environment...
So our ancestors, the eukaryotes, were aware of their environment billions of years ago?
The genus Homo has only been around for a little over 2 million years.
 
I know enough to see the differences between a computer and a brain.
So you should be able to answer the following questions:

If one considers that consciousness is not emergent from the activity specifically responsible for the computation in the brain, but from some other biological activity in the brain...
Such as? What significant non-computational biological activity in the brain are you aware of that might be relevant?

...then it would not follow that a computer simulation of that same computation would include consciousness of the same kind, because the other biological activity would not be replicated.
True enough. So all you need now is some evidence for such activity. What biological activity goes on in the brain that isn't computational in nature and might plausibly be relevant to consciousness?
 
So our ancestors, the eukaryotes, were aware of their environment billions of years ago?
The genus Homo has only been around for a little over 2 million years.

I would answer yes, they were, in very simplistic ways. Any organism has to react and interact (in this case chemically) with its environment to survive, so a certain awareness (not in the modern sense of the word) is a prerequisite, I would say.
 
So you should be able to answer the following questions:

Such as? What significant non-computational biological activity in the brain are you aware of that might be relevant?

True enough. So all you need now is some evidence for such activity. What biological activity goes on in the brain that isn't computational in nature and might plausibly be relevant to consciousness?

I want to add something on this one, because I too have long had a problem with the brain being compared to a computer. I find it highly misleading. One example is that we have not yet established that neuronal firing corresponds to an on-and-off state, as in electronics. A transmission might have much more to it than just 0 or 1.

It may have been addressed already in this thread, as I am new here and do not have time to go through it all, but how do you define computational activity?
 
The god of the inherent imprecision.

On a number of topics, but especially this one, there's a faith demonstrated that it's possible to have everything clearly defined in terms of something real, which is itself clearly defined, and so on. This is obviously impossible - at some point, definitions must get blurry, or circular. There is no infinite regress of language.

However, the idea that this proves the existence of god is a new one on me. Personally I just regard it as a limitation of language.
 

We have many ways to tell, with a margin of error, what somebody's state of mind is. This is always subject to the imprecision of language.

Now, we can tell, with a margin of error, and allowing for the imprecision of language, what somebody's state of mind is by examining the brain, as well as all the other things we use to determine it. How is this any fundamentally different?

If we could tell in some exact, unambiguous, certain way what their subjective experience was, this might represent a fundamental breakthrough. As it is, we can tell that the angry man is actually angry. How do we test that our readings of the brain are accurate? By measuring them against the things we already knew in the first place.
 
Who cares about language? What part of "the machine reads your mind and translates it into data one can read" don't you understand?

I do understand it. It's not that complicated. We get someone to look at a red wall and then measure his brain, and notice that the brain is different when he's looking at a red wall compared to a blue wall. What I don't understand is why I'm supposed to be astonished at this. I could already find out what kind of wall he's looking at by asking him.

As to what it feels like to look at one or the other wall - that remains undefined and scientifically inaccessible.
 
You're begging the question.

I don't see how that applies. He's expressing a possibility that the fundamental operation of the brain might not be computational. No other organ of the body is purely computational. It's not an absurdity to consider that the brain might not be either.
 
I want to add something on this one, because I too have long had a problem with the brain being compared to a computer. I find it highly misleading. One example is that we have not yet established that neuronal firing corresponds to an on-and-off state, as in electronics. A transmission might have much more to it than just 0 or 1.

It may have been addressed already in this thread, as I am new here and do not have time to go through it all, but how do you define computational activity?

Oh, we don't do that. Nor is there evidence advanced that the functionality of the brain is computational. The burden of proof is to demonstrate that the functionality of the brain isn't computational. A vague analogy between neurons and bits is all that's needed.

In fact, even computers aren't purely computational, any more than a blackboard or an abacus.
 
"Subjective experiences are not real" is a kind of evaluation of it.

Yes, that's the evaluation that I said that the objective robot would probably provide.

But subjective experiences should have a mechanism, and that mechanism should have a causal relation to your describing them with the term "subjective experience". Our robot can use this as a basis for applying the term "subjective experience" to a meaning. At an absolute worst case, it would include too many things, and this ill-defined extra that you think is critical to subjective experience would remain unknown.

I don't know what you mean by "include too many things". If subjective experience is real*, and the robot concludes that it is not, then it is failing to include something.


But there should also be a reason that the robot can figure out, in the causal connections--at least in theory--for why you say that there is this ill-defined extra. If that reason is, in fact, based on the actual ill-defined extra, then the robot is actually in a better situation of saying what subjective experience really is than you are, even with your "cheat" of actually experiencing it. And if it is "not", then there is no such ill-defined extra.

How can it be in a better position? I can see, just about, in this hypothetical situation where it has found out the reason for the claim of subjective experience, that it might be in as good a position. Even in this highly conjectural situation, where it has managed to entirely track the cause of the claim of subjective experience - it still has done no more than we are perfectly well able to do. If this is possible (and we don't know that it is possible) then we are entirely able to do everything the objective robot does. In what way does the objective robot have an advantage when we have full access to objective reasoning?

I find it interesting that the human possession of subjective experience is characterised as a "cheat". I suppose it is, in the sense that when a native French speaker takes a French exam, he's in an advantaged position compared to the person who's had to learn the language later in life. It doesn't mean that he's going to get the wrong answers in the test.



If you see a third possibility, then please, point it out.
But the tools we use to make judgments are, in practice, applied automatically.

I think it's clear that if the objective robot can entirely track down the reasons for the claim of subjective experience, then he will have quite possibly explained subjective experience. I don't see how this gives us some additional understanding of subjective experience in the absence of such reasons.


You're being a very poor devil's advocate. I think I can guess the reason you're going along these lines--it's because you want to remove the analogy of "looks like a penny", because you think I'm defining subjective experience. That is, in fact, not what I'm doing--I'm playing the devil's advocate, and granting that the robot has no subjective experience, but trying to show you that even in that scenario, the robot winds up having a perfectly good theory of what subjective experience is.

I fail to see how your example of the penny differs in any fundamental respect to any physical interaction between any physical objects. There's nothing subjective about the way a machine evaluates whether a penny is a penny.

What you're trying to do, I'm guessing, is to make the robot's evaluation seem to me to be that much further removed from experiencing, by associating it with things like objective measures. However, in doing so, you are ironically weakening your argument. Here's why.

Suppose your robot does analyze pennies and photographs this way. Well, we do not. We come to the conclusion fairly quickly that the penny behind the curtain is a real penny, but the penny in the photograph is just a photograph of a real penny. Now let's say that the robot hears us make this "magical" claim that we can tell real pennies from photos of pennies without weighing them--just by looking at them. Let's further suppose that the robot doubts us, and performs a simple test of our alleged capabilities using something very similar to the MDC.

Now here's the problem. We pass the test.

Now what is the robot to do with our claims of "magical" capabilities of divining real pennies from photos of pennies? At this point it must conclude that there should be some underlying mechanism that we use that allows us to make this determination, even though it does not know exactly what this mechanism is. Remember the plotting machine making a star? Just like that.

And also similar to this machine, we make a claim that we use Subjective Experience technology <TM> to perform this analysis. Well, the robot doesn't know exactly what Subjective Experience technology is, so it goes about opening our heads and figuring out how we really do it.

But the way we evaluate whether a penny is a penny is exactly the same way as the objective robot does it. We are perfectly well able to objectively identify pennies. We might not know precisely the way that our senses interact with the penny in order to tell that it is a penny, and not a picture of a penny, but we are aware that they act in an objective way.

However, in parallel to the evaluation of the penny, we have a subjective experience associated with the penny. The objective robot does not. He will see us using exactly the same means that he uses to tell whether the penny is real. We assure him that we have the subjective experience of holding and seeing the penny. We can also tell him that this subjective experience is something separate from the objective evaluation of the penny.

When the objective robot can see no difference between the objective evaluation of the penny, and the objective evaluation combined with subjective experience, then why should it consider that subjective experience fulfills any function?

You can guess the rest of the plot... I keep repeating this story.
I'll grant that it doesn't equate if you note that I never said that it equates. I still reach my conclusion that the robot doesn't arrive at a claim that subjective experience doesn't exist (with the caveat that it might, but depends on the robot and the line of inquiry; nevertheless, I still maintain you should check that robot's warranty).

You're missing the point of what I'm trying to do... when I say "at least analogous to" in the latter post, I'm not trying to claim that they "are" subjective experiences in themselves. My point is about how an entity that is minimally capable of doing so associates a word to a meaning--it has to map this word to some set of invariants that applies to the way that the word is used. The analogy here is so tight that there's no reason for the robot to doubt that subjective experiences actually exist--unless you can make some specific claim that differentiates what you surmise that the robot would get "subjective experience" confused with, and what subjective experiences actually are.
And how would the objective robot map the meaning of "subjective experience"? How would he define it?

I believe that it's possible for a human being to tell another human being what he means by subjective experience. I question whether it's possible to provide an association for subjective experience that would be meaningful for an entity that didn't possess it. It is possible that this makes the conjectured objective robot a practical impossibility. I don't insist on this, but I think it at least possible.
And that's what I was looking for when I asked you to make a claim about subjective experiences that you think the robot would disagree with. In the latest reply, you suggested that this was probably impossible. Well, the implication of its being impossible is that the robot would think it does exist, and that it's no different from the kinds of things it thinks it might be.

I think it's impossible in the sense that I don't think we could ever communicate meaningfully with the robot about subjective experience, and that we might not even get past the stage of defining it.

And furthermore, who is to say the robot's guess is wrong? Shouldn't the subjective experiences actually correlate to something real, that is part of the causal chain, assuming they do exist?

We are in the same fortunate position of the French student who knows that "aller" is an irregular verb. We know that either subjective experience is real - or else nothing else can be considered real.

The only way that we have formed the hypothesis of an objective universe is because we find patterns in our subjective experiences. Subjective experience is all we have. If it isn't real, then all bets are off.

Doesn't matter. The whole point is, "subjective experiences are not real" is a type of claim. As I said before, you're severely underestimating what it takes to claim this--even for "objective robots".
As are experiences.


*I continue to regard the "subjective experience is not real" hypothesis as absurd, for reasons I've given.
 
Then you haven't been keeping track of what punshhh is arguing.

ETA: This starts around, roughly, post #625?

Yes, I know about his general claims. I'm not referring to those - I'm referring to the very specific claim that the functionality of the brain is not necessarily computational.
 
I know enough to see the differences between a computer and a brain.

You clearly don't.

If one considers that consciousness is not emergent from the activity specifically responsible for the computation in the brain, but from some other biological activity in the brain, then it would not follow that a computer simulation of that same computation would include consciousness of the same kind, because the other biological activity would not be replicated.

That would be begging the question, however.
 
I do understand it. It's not that complicated. We get someone to look at a red wall and then measure his brain, and notice that the brain is different when he's looking at a red wall compared to a blue wall. What I don't understand is why I'm supposed to be astonished at this. I could already find out what kind of wall he's looking at by asking him.

:rolleyes:

There are states in your brain that you are not aware of. Anyway, we were talking about making the subjective objective. Please don't try to make me believe that you forgot that.
 
If one considers that consciousness is not emergent from the activity specifically responsible for the computation in the brain, but from some other biological activity in the brain, then it would not follow that a computer simulation of that same computation would include consciousness of the same kind, because the other biological activity would not be replicated.
Even if you could identify such an activity, that activity too could be simulated in a computer.

Let's say, just to throw out a random example, that the physical act of swallowing food is necessary for consciousness to emerge. Of course, this particular example is not really likely, but, for argument's sake, I hope we can indulge in it...

...Who says that such an act can't also be modeled and replicated in a virtual manner, inside a computer?


I want to add something on this one, because I too have long had a problem with the brain being compared to a computer. I find it highly misleading. One example is that we have not yet established that neuronal firing corresponds to an on-and-off state, as in electronics. A transmission might have much more to it than just 0 or 1.
Comparing the brain to a computer, like all analogies, is not going to be perfect. It doesn't have to be. As long as the general principles of the argument are able to be communicated, we can still use the comparison to that limited degree.

There is always a danger in taking any analogy too far. If a businessman took the words "That tornado was like a freight train running through the town!" too seriously, he might be inclined to try to ship his products via tornadoes.

In the case of how the brain operates: it might work through 0s and 1s, or it might not. But non-binary values can still be represented as binary digits. Neural networks use decimal numbers for "weights" between neurons, to model the essential elements of how the mind works. Yet those decimal numbers are ultimately stored in memory as a block of 1s and 0s.
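To make that last point concrete, here is a minimal Python sketch (the weight value and names are invented purely for illustration, not taken from any post in this thread) showing how a decimal neural-network weight ends up in memory as a block of 1s and 0s:

```python
# Minimal illustration: a decimal "weight" between two artificial neurons
# is ultimately stored as a pattern of bits (IEEE 754 double precision here).
import struct

weight = 0.7315  # hypothetical connection weight, chosen only for illustration

# Pack the float into its 8-byte binary representation (big-endian double)...
raw_bytes = struct.pack(">d", weight)

# ...then print the underlying block of 1s and 0s.
bits = "".join(f"{byte:08b}" for byte in raw_bytes)
print(f"weight {weight} is stored as: {bits}")
```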
 