
Are You Conscious?

Are you conscious?

  • Of course, what a stupid question: 89 votes (61.8%)
  • Maybe: 40 votes (27.8%)
  • No: 15 votes (10.4%)

  Total voters: 144
What if the simulation is already taking place on a computer? Then the composition is the same. Hadn't you thought of that problem?

The big issue that your view entirely fails to address is what happens when you have a simulated consciousness receive input from a real entity and direct output to a real entity?

In other words, what if we built a little box that simulated all the neural activity in your brain, took out your brain, put in the little box, and hooked up all the incoming and outgoing neurons where the brainstem meets the spinal cord?

Assuming we got the engineering correct, and assuming that externally you behaved exactly the same, what are you then? Are you conscious, or not conscious because your consciousness takes place in a simulation from our frame of reference despite the fact that the rest of you is real?

The thing about these "what ifs" is that they haven't happened. If someone managed to replace a brain with a computer, and produce identical behaviour, and if that computer asserted - as humans do - the presence of consciousness, then that would be evidence of something. But nobody has done this. For all we know, a computer replacing a brain wouldn't work at all. Or it might only work if it were a precise physical copy of the brain.

A hypothetical computer doesn't prove anything. In particular, it doesn't prove that anything omitted from the physical activity of the brain is redundant in creating consciousness.
 
I don't care about Dawkins' opinions. And neither should you.

Dawkins' opinions are relevant only in your characterisation of the people who consider consciousness a Hard Problem. Since he doesn't fit the characterisation, he's a sufficient rebuttal to your point. The fact that he has an opinion doesn't determine the truth or otherwise of the main argument - just of your observation.

Ideally, the people discussing this topic would be less concerned with the failings of the people who disagree with them, and more concerned with addressing the arguments.
 
A point:

I suggested that the answer to the thread question was ‘I am conscious if I know I am’. In other words, I am conscious if I know what the word conscious means (in any one of the six-odd thousand languages that populate our planet….or any other language that might be considered one….by itself). From what UE has argued, there seem to be epistemological and ontological obstacles to assuming that our scientific practices and vocabulary have the capacity to ‘explain’ consciousness (my philosophical education is full of holes so please feel free to correct me on this issue if I have stated it inaccurately). These do not seem to be merely substantial issues, but fundamental ones….perhaps even the fulcrum upon which this issue will be resolved, or not.


An issue:

I don’t think we need to ask whether there is a single ‘yes’ answerer (‘am I conscious’?) who could provide anything remotely resembling a definitive explanation for or description of what consciousness actually is…even to themselves (… which, of course, begs the question of how/why they can answer ‘yes’ to the question [‘maybe’ would be the intelligible answer in this case]…oh well, confusion reigns I guess).

A question:

Ichneumonwasp submitted that if we find out what consciousness is, we find out what we are (the implication being that we neither know what consciousness is nor do we know what we are;….ouch, that gets real personal). Could not that statement be more accurately expressed as ‘find out WHO you are and you find out the ‘truth’ of your consciousness’ (perhaps our ‘consciousness’ is ultimately also some variety of ‘who’ [I know, heresy.....woo and all that])? This is obviously a somewhat vague statement…but that is exactly the point. Our ‘consciousness’ (which I’ll define as some variety of sum-total of who/what we are) seems to be one hell of a lot bigger than we are….or our ability to understand ‘it’ (by substantial orders of magnitude). An explanation of this argument would obviously have to be fairly substantial itself but it is the conclusion which matters. If this conclusion is accurate (which, for the sake of brevity, I’ll assume it to be), what are the implications (I know….entire libraries have been devoted to simpler questions….)?

An observation:

The question then simply becomes….what means do we have to achieve an understanding of consciousness? Which brings me to the reason for this contribution (besides a troubled ego). An observation (no doubt previously observed by others better qualified to do so but since I haven’t seen it explicitly expressed elsewhere I’ll express it here as explicitly as I know how). What is consciousness? What is awareness? What is life? Doubtless the most complex and substantial questions there are. I follow these various discussions with great interest (and, quite often, far more than a little uncertainty). The insights and abilities of the various contributors are not to be trifled with (….forgive me, I am practicing for a role as a sycophant in an upcoming theatrical production).

BBBBBBBBBBBBBUUUUUUUUUUUUUUUUTTTTTTTTTTTTTTTTTT

….an obvious issue seems to beg recognition. We’re not studying some variety of cosmological anomaly at the center of our galaxy, we’re not studying yet another mystery of quantum reality, we’re not studying genetics, or chemistry, or engineering etc. etc. etc. we’re studying….us (and our ability to study …us). Each of us (including each and every contributor to these forums) has some variety of access (I hesitate to call it either complete or direct given the obvious ambiguity expressed throughout all these discussions) to what it is that is trying to be explained. We don’t need a million dollar grant, we don’t need a laboratory, we don’t need an LHC, we don’t need a research team, etc. etc. (…..perhaps a piece of that peyote that Carlos Castaneda used to guzzle might be of value…but that’s a whole other issue). We (arguably) each have exclusive access to the object of our interest as well as (again, arguably) access to all the means required to become ‘acquainted’ with the object of our interest (perhaps we simply need to demand that ‘consciousness’ explain itself….and deal with the schizophrenia later).

The implication:

So why the heck don’t we know what we’re talking about?...and what might occur (or be ‘experienced’) if someone did find out (….as per my first example using Dennett as a guinea pig: does Dennett implode…explode…or become a Dennett deity [what options are there?]). Apart from FUWF (who risked his hide admitting an interest in knowing ‘the truth’….and who presented some rather sci-fi-sounding speculations on what a SRIP-leveraged [as he put it] ‘expanded’ understanding might implicate) nobody seems to want to touch this with a ten-foot pole. I think it’s very likely quite central. What don’t we know…of course…but why don’t we know it (are we biologically dysfunctional…or psychologically dysfunctional…to put it overly simply […ooooohhh, that sounds messy….like….I don’t know what ‘consciousness’ is ’cause I’m an a-hole…..not good])?

….just looking back on this post I am troubled (….a rather sloppy post….yech!). This issue (consciousness) is so….annoying. It requires the consideration and summation of such mindboggling issues. I’m sure it would be easier to consider the ethics of the antichrist than resolve this issue. I need some chicken soup.
 
The question then simply becomes….what means do we have to achieve an understanding of consciousness?

All we really have right now is introspection and neuroscience. We don't yet know enough about consciousness to instantiate it artificially which, IMO, indicates that our scientific understanding of it is extremely primitive.

….an obvious issue seems to beg recognition. We’re not studying some variety of cosmological anomaly at the center of our galaxy, we’re not studying yet another mystery of quantum reality, we’re not studying genetics, or chemistry, or engineering etc. etc. etc. we’re studying….us (and our ability to study …us). Each of us (including each and every contributor to these forums) has some variety of access (I hesitate to call it either complete or direct given the obvious ambiguity expressed throughout all these discussions) to what it is that is trying to be explained. We don’t need a million dollar grant, we don’t need a laboratory, we don’t need an LHC, we don’t need a research team, etc. etc. (…..perhaps a piece of that peyote that Carlos Castaneda used to guzzle might be of value…but that’s a whole other issue). We (arguably) each have exclusive access to the object of our interest as well as (again, arguably) access to all the means required to become ‘acquainted’ with the object of our interest (perhaps we simply need to demand that ‘consciousness’ explain itself….and deal with the schizophrenia later).

Couldn't you have just summed that all up by saying consciousness must be studied introspectively? :confused:

So why the heck don’t we know what we’re talking about?

We do. Well -- MOST of us do. What I would personally like to see accomplished is a means of knowing what consciousness looks like on the 'outside' as well as the 'inside', and an understanding of how it fits in with what we already know of physics.
 
A point:

I suggested that the answer to the thread question was ‘I am conscious if I know I am’. In other words, I am conscious if I know what the word conscious means (in any one of the six-odd thousand languages that populate our planet….or any other language that might be considered one….by itself). From what UE has argued, there seem to be epistemological and ontological obstacles to assuming that our scientific practices and vocabulary have the capacity to ‘explain’ consciousness (my philosophical education is full of holes so please feel free to correct me on this issue if I have stated it inaccurately). These do not seem to be merely substantial issues, but fundamental ones….perhaps even the fulcrum upon which this issue will be resolved, or not.

Glad to know somebody out there can understand me... :)

All good questions and points, but not really addressed to me.
 
Physical composition, genius.

What if the simulation is already taking place on a computer? Then the composition is the same. Hadn't you thought of that problem?

A simulated computer is made of magnetic patterns on a hard disk. A real computer is made of the atoms that make up hardware.

The big issue that your view entirely fails to address is what happens when you have a simulated consciousness receive input from a real entity and direct output to a real entity?

You're assuming that real consciousness is an algorithm.

In other words, what if we built a little box that simulated all the neural activity in your brain, took out your brain, put in the little box, and hooked up all the incoming and outgoing neurons where the brainstem meets the spinal cord?

Assuming we got the engineering correct, and assuming that externally you behaved exactly the same, what are you then? Are you conscious, or not conscious because your consciousness takes place in a simulation from our frame of reference despite the fact that the rest of you is real?

My consciousness would be fizzling out in the dying brain you just removed from my head. Unless your hypothetical black box could produce actual consciousness [as opposed to just simulating it] with all my memory traces in the correct physical format, and the capacity to produce the same type and range of sensation, it wouldn't even be a duplicate of me. I'd just be a murder victim in a ghoulish failed experiment.
 
The thing about these "what ifs" is that they haven't happened. If someone managed to replace a brain with a computer, and produce identical behaviour, and if that computer asserted - as humans do - the presence of consciousness, then that would be evidence of something. But nobody has done this. For all we know, a computer replacing a brain wouldn't work at all. Or it might only work if it were a precise physical copy of the brain.

And this is relevant how?

Once again, you just don't get it. If you have a model, it needs to work in all conceivable consistent scenarios. You can't just say "well, we haven't run into that scenario before, so even though my model makes no sense in that scenario, it isn't important."

That isn't how science works. You can't just throw away hypothetical edge cases because they are hypothetical. If it is mathematically consistent, then you have to account for it.

But I am not surprised that a theistic dualist has a hard time grasping this principle.

A hypothetical computer doesn't prove anything. In particular, it doesn't prove that anything omitted from the physical activity of the brain is redundant in creating consciousness.

I know, I went the other way around. I used the evidence that much can be omitted from the physical activity of the brain while retaining consciousness to deduce that we might replace neurons with simulated ones and functionality would remain the same.

Such evidence is plentiful, if one is a rational person that understands science and mathematics.
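
To make the 'replace neurons with simulated ones' idea concrete, here is a minimal sketch in Python -- not anyone's actual model, just a toy leaky integrate-and-fire unit with made-up parameters -- of what it means for a simulated unit to be judged purely on its input/output behaviour:

```python
# Toy sketch only: a "simulated neuron" reduced to input/output behaviour.
# All parameters are invented for illustration; this is not a biological model.

class SimulatedNeuron:
    def __init__(self, threshold=1.0, leak=0.1, dt=0.001):
        self.v = 0.0                # membrane potential, arbitrary units
        self.threshold = threshold  # firing threshold
        self.leak = leak            # leak rate per second
        self.dt = dt                # timestep in seconds

    def step(self, input_current):
        """Advance one timestep; return True if the unit 'fires'."""
        self.v += (input_current - self.leak * self.v) * self.dt
        if self.v >= self.threshold:
            self.v = 0.0            # reset after a spike
            return True
        return False

# The claim being debated: if the spikes coming out of a box like this matched
# a real neuron's spikes for the same inputs, the surrounding circuit could not
# tell the difference.
neuron = SimulatedNeuron()
spikes = sum(neuron.step(2.0) for _ in range(5000))
print(f"Spikes produced over 5 s of constant input: {spikes}")
```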
 
I have thought about it. I've come to the conclusion that minds are physical objects and not just patterns written on physical objects.

A pattern -- made of physical objects, like everything else in the universe -- is not a physical object?
 
A simulated computer is made of magnetic patterns on a hard disk. A real computer is made of the atoms that make up hardware.

Actually, the simulation takes place in the same hardware, not "on a hard disk."

So I am curious as to why on Earth you think the very same atoms are somehow fundamentally different at the same time, despite being the very same atoms.

You're assuming that real consciousness is an algorithm.

No, I am assuming that in theory we can simulate the behavior of molecules to an arbitrary level of precision.

Consciousness being an algorithm is just a corollary of that.
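
To be concrete about what 'simulate to an arbitrary level of precision' means, here is a toy sketch in Python -- not a molecular dynamics code, just the simplest numerical stand-in -- where shrinking the timestep shrinks the error against the exact answer:

```python
# Toy sketch of "arbitrary precision by simulation": integrate a simple
# harmonic oscillator (x'' = -x) numerically and compare with the exact
# solution x(t) = cos(t). Shrinking the timestep reduces the error; a real
# molecular simulation is vastly more complicated, but the principle is the same.
import math

def simulate(dt, t_end=10.0):
    """Semi-implicit Euler integration starting from x = 1, v = 0."""
    x, v = 1.0, 0.0
    for _ in range(int(round(t_end / dt))):
        v -= x * dt
        x += v * dt
    return x

exact = math.cos(10.0)
for dt in (0.1, 0.01, 0.001):
    print(f"dt = {dt:<6} error = {abs(simulate(dt) - exact):.6f}")
```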

My consciousness would be fizzling out in the dying brain you just removed from my head. Unless your hypothetical black box could produce actual consciousness [as opposed to just simulating it] with all my memory traces in the correct physical format, and the capacity to produce the same type and range of sensation, it wouldn't even be a duplicate of me. I'd just be a murder victim in a ghoulish failed experiment.

But this is beside the point. You are changing the goalposts.

Your original contention was that simulated consciousness isn't real consciousness -- that they are somehow fundamentally different.

You did not say that a simulated consciousness would know this difference. So the assumption is that a simulated consciousness would think it was conscious and just wouldn't know any better.

So, with that assumption -- which you have said nothing thus far to invalidate -- what happens when we replace your brain with the black box that is running a simulated version of your consciousness, that thinks it is you? Is the consciousness still simulated even though its volition affects the real world?
 
The pain is what is produced by the nerves and the brain.

How is that not a behaviour? I notice both you and UE seem incapable of actually answering that.

Dawkins' opinions are relevant only in your characterisation of the people who consider consciousness a Hard Problem. Since he doesn't fit the characterisation, he's a sufficient rebuttal to your point.

Huh? What point?
 
How is that not a behaviour? I notice both you and UE seem incapable of actually answering that.

That's because the question you are asking us is akin to "How come bananas aren't tectonic plates?"

What are we supposed to say?
 
From what UE has argued, there seem to be epistemological and ontological obstacles to assuming that our scientific practices and vocabulary have the capacity to ‘explain’ consciousness
But why are they different from the epistemological and ontological obstacles to assuming that our scientific practices and vocabulary have the capacity to 'explain' anything at all?

Energy, for example.

And is it really possible that there will be some other epistemic system that can explain what science cannot? Do we even have a candidate?
 
That's because the question you are asking us is akin to "How come bananas aren't tectonic plates?"

What are we supposed to say?
Well no - if behaviour is "stuff that happens", then clearly consciousness is "stuff that happens".

The question is more akin to "are bananas and tectonic plates stuff that happens?".
 
Sent to me by FedUpWithFaith, who is currently suspended and may not return at all:

Edited by Darat: 
Posting on behalf of a suspended Member removed.
Richard Rorty turns in his grave...

robin said:
And is it really possible that there will be some other epistemic system that can explain what science cannot? Do we even have a candidate?

See above.
 
A simulated computer is made of magnetic patterns on a hard disk. A real computer is made of the atoms that make up hardware.

Actually, the simulation takes place in the same hardware, not "on a hard disk."

So I am curious as to why on Earth you think the very same atoms are somehow fundamentally different at the same time, despite being the very same atoms.

I can send virtual computers via the internet, LAN, or portable disk because they are software. I cannot do the same with computer hardware.

You're assuming that real consciousness is an algorithm.

No, I am assuming that in theory we can simulate the behavior of molecules to an arbitrary level of precision.

Consciousness being an algorithm is just a corollary of that.

A simulation of a molecule, no matter how accurate, is just a representation. That's what makes it a simulation -- it's not the thing in itself. If one wants a physically efficacious molecule, at some point they are going to have to physically create an actual molecule. The same holds true for consciousness.

My consciousness would be fizzling out in the dying brain you just removed from my head. Unless your hypothetical black box could produce actual consciousness [as opposed to just simulating it] with all my memory traces in the correct physical format, and the capacity to produce the same type and range of sensation, it wouldn't even be a duplicate of me. I'd just be a murder victim in a ghoulish failed experiment.

But this is beside the point. You are changing the goalposts.

Your original contention was that simulated consciousness isn't real consciousness -- that they are somehow fundamentally different.

You did not say that a simulated consciousness would know this difference. So the assumption is that a simulated consciousness would think it was conscious and just wouldn't know any better.

My point is that unless it's actual consciousness, it couldn't be said to KNOW anything. A simulated bucket can't hold water.

So, with that assumption -- which you have said nothing thus far to invalidate -- what happens when we replace your brain with the black box that is running a simulated version of your consciousness, that thinks it is you? Is the consciousness still simulated even though its volition affects the real world?

Your proposal has a couple of fundamental flaws.

One: You presume the ability to produce simulated consciousness without knowledge of what actual consciousness is.

Two: You assume that simulation is the same as actualization.
 
