The Hard Problem of Gravity

So it is impossible to make something that looks conscious, but isn't?

In a thought experiment I proposed earlier, I suggested a way in which you could have something that appears unconscious but actually is conscious. In principle it would be possible to have someone paralyzed from birth who was also unable to receive incoming sensory data to the brain. For all intents and purposes this would be a vegetative and severely impaired individual. Even if one were able to miraculously restore sensory and motor functions to such a person after the critical developmental stages of childhood, they would almost certainly not exhibit even the capacities exhibited by a conversing web-bot. I find it would be hard to argue that such a vegetative person is not conscious in some way, or that they don't experience anything akin to dreams, or what have you.

The physiology of even unconscious people exhibits self-referential intelligent capacities that could shame even the best of current AI systems. Intelligent, self-referential behaviors can and do arise in systems that are clearly unconscious. It's clear that such intelligent capacities, in and of themselves, are not sufficient to produce conscious experience. It seems painfully apparent that consciousness is restricted to a specific range of physiological states in the brain, and we would do well to understand exactly what it is about such states that produces conscious experience. Simply crafting intelligent systems is not sufficient, methinks.
 
When you assign multiple paths for an individual to select from, truly select from, then you are saying that there are a variety of possible outcomes (effects) for the variables (causes) that are processed in the mind. In any other causal system (non-quantum), we only think that there are multiple possible outcomes, because we cannot fully comprehend the variable pool and processes involved. If one were able to fully comprehend all of these factors, the end result would be calculable, and therefore deterministic.
How do you know this to be true? It seems to me that this is an assumption you are making about the nature of reality. Not only is there no evidence that this is true, we have evidence that at its most basic level - i.e. QM - reality is NOT deterministic.

In an example that truly does have more than one possible outcome (not deterministic), you are saying that no amount of knowledge of these variables or processes could result in a calculable outcome.
Yes, at least not calculable beyond a probabilistic description. That is my understanding of QM as well.
This is a violation of causality; however, it is accepted in our thought experiment because we have applied quantum uncertainty to the behavior of this system (the brain).
Yes. That's why I think that QM provides support for the concept of free will.
Your example is a good one, but it is not in any way similar to the probabilistic nature of quantum uncertainty. In your example, the people actually do have a blood type. In quantum mechanical terms, there is no such data. All that exists is the probabilistic distribution. So if this quantum wackiness were to work its way up to the human brain, one would have to agree that there would be no actual selections (positions) to choose from, just as there are no true measurements of position/momentum in QM due to wave collapse.
I'm sorry, but why would that follow? When you open the box, Schrödinger's cat is either dead or alive. Of course, if you wait long enough to open the box, it will almost certainly be dead :p
You have taken all of the useful bits of quantum uncertainty and applied them to the human mind to "make room" for *free will/spooky consciousness*, but you are ignoring the aspects of the same principle that would contradict your idea. You also have not presented a mechanism for how "you" would override this probabilistic brain in a deterministic fashion to "choose" your path.
I don't see any contradictions. The brain is 'you' or at least, a substantial part of what 'you' are. What mechanism, other than your brain, is needed for you to control yourself?
 
Given that the p-zombie stuff is an admission that we cannot tell even if a human is actually conscious or just pretending, I cannot see this as a dealbreaker with regard to consciousness. How would we falsify the hypothesis that humans are conscious, as opposed to merely pretending?

How could a human pretend to be conscious when he/she wasn't actually? Sleepwalking?
 
I'm no programmer, but even I could write a BASIC programme that when asked the string "What are you doing now?" would produce the response "I am producing a response to your question". It's one line of code, and self-referential, but there's no thought, or intelligence. I'd argue that this type of restricted parrot-bot looks as if it is self-referential, but isn't. Perhaps you wouldn't?
I agree.

This is obviously not what SHRDLU is doing.

As I said, if it can answer a question that requires self-reference, then it's conscious. SHRDLU can, therefore it is.
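
To make the distinction concrete, here's a minimal sketch (hypothetical Python rather than BASIC, and nothing like SHRDLU's actual implementation): the first bot parrots a canned string, while the second computes its answer by inspecting its own internal state.

```python
# Hypothetical illustration only; neither bot is SHRDLU's actual code.

def parrot_bot(question):
    """Fixed stimulus, fixed response: looks self-referential, but isn't."""
    if question == "What are you doing now?":
        return "I am producing a response to your question."
    return "I don't understand."

class SelfReferentialBot:
    """Answers by reporting on its own internal state."""
    def __init__(self):
        self.history = []  # internal state the bot can inspect and report on

    def answer(self, question):
        self.history.append(question)
        if question == "What are you doing now?":
            # The reply is computed from the bot's own state, not read from a table.
            return ("I am answering question number %d, which was: %r"
                    % (len(self.history), question))
        return "I don't know."
```

The only difference is that the second bot's reply depends on its own internal state; the parrot-bot's never does.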
 
But it isn't self-evidently true.

It requires a mathematical proof.

If you don't believe me, then contact any mathematician and ask them.

Just because something is extremely easy to understand doesn't mean it is self-evident.

When you wake up in the morning it is self-evident that you are conscious. You don't need a mathematical proof to establish that you are awake and conscious. It is a given empirical fact.

Your aversion to formalizing what you are talking about is what leads you to circular logic in the first place.

See! You think a definition of "to know" should be predicated on consciousness and you are trying to define consciousness in terms of "to know."

I find it remarkable that you can't see how circular this argument is...

What's circular about saying "consciousness exists as a phenomenon; consciousness is a requisite of knowledge"? It's no more circular than saying "mass is a real property; mass is a requisite of weight."


AkuManiMani said:
Containing a collection of information is not the same as knowing [in the sense that I, or the vast majority of other people, use the term]. A book does not 'know' the information it contains any more than a wiki site 'knows' the information provided in its articles.

Well that is your problem right there. If you use fuzzy definitions instead of formal definitions, you will get fuzzy results instead of exact results.

No wonder you don't think consciousness can be explained by what we already know of physics.

We know that consciousness exists as a phenomenon and that each of us experiences this state at various periods of the day; this is a given. What we don't know is what in physics necessitates or governs conscious [i.e. subjective] experience. This is the reason why we are stuck with informal, 'fuzzy' definitions. For reasons that I've already mentioned, it is evident that self-referential intelligence is not, by itself, sufficient for conscious experience.


AkuManiMani said:
I'm of the position that we can gain a better scientific understanding of these processes. I also think that in order to do this, we have to understand the physics of it.

Well, that would require a mathematical model of consciousness.

Which is what you seem to be asserting can't exist currently.

So which is it? Can we model consciousness mathematically, or can't we?

Of course.

Modeling in finer detail the exact physiological processes that give rise to said consciousness will require considerably more than simply stating consciousness as a given. The point of me assigning an "X" variable to consciousness is to serve as a conceptual placeholder until there is such a formal method of modeling what it is, exactly. There is no convincing evidence that we have such a formal system yet. My purpose here is to suggest possible avenues of investigation to determine a means of crafting such a system. My guess is that we need to study the physical process of instances that we do know are conscious [e.g. living brains] and work from there.
 
How would you falsify this hypothesis? Can one create something that seems to be conscious (that is, it actually only appears to be so)?
No - except in the trivial case you mention with your one-line Basic program.

What's the difference between something that is conscious, and something that is designed only to give the appearance of consciousness?
How can you give the appearance of being conscious without being conscious?

To appear conscious, you have to be able to refer to your internal mental processes. To do this - which SHRDLU does - you need access to your internal mental processes. That's consciousness.

I don't know, but it seems this needs some clarification if your assertion (and that's really all you've done so far - assert) is to be more useful.
Why?

You just asserted, with no further justification, that consciousness is self-reference is consciousness.
No, that's not what I said.

I said that consciousness is self-referential information processing.

Now, you may be right, and I think I agree with you to a degree, but this needs more verbosity.
Hofstadter.

It really isn't as self-evident as you are asserting.
No, it's not self-evident. It needs a fair bit of thought. But it is correct.

Is self-reference a sufficient quality for consciousness, or only a necessary one?
Self-referential information processing is necessary and sufficient.

I know you've cited GEB up-thread - would Hofstadter's Achilles and Tortoise stories be conscious?
The stories? Of course not. They're just stories. Not information processing.

I would say - of course not. Which seems to me to imply that something more than self-reference is at play here (or, more clearly, that self-reference and consciousness are not functionally or conceptually exactly synonymous).
Information processing.

Read any of two thousand earlier posts wherein this is explained.

Or GEB, wherein this is explained.
 
No, that's not what I said.

I said that consciousness is self-referential information processing.

"And self-referential information processing is consciousness is self-referential..."

Such processes take place in the brains and bodies of individuals who are, in fact, not conscious. This assertion is flat-out wrong.

Self-referential information processing is necessary and sufficient.

No. It is not.
 
How do you know this to be true? It seems to me that this is an assumption you are making about the nature of reality. Not only is there no evidence that this is true, we have evidence that at its most basic level - i.e. QM - reality is NOT deterministic.

On a quantum scale it is not. The instances where this affects macroscale reality are few and far between, are they not? I was under the impression that nearly all systems above the scale of QM operate in a causal manner. The illusion of "randomness" on these larger scales is based on a lack of information.

If macro reality was deeply affected by QM, on the level that it would cause systems to violate causality, then why only the brain? Why has causality been so fundamental to science? Why aren't rocks and trees and planetary orbits exhibiting probabilistic behavior due to quantum uncertainty? Why haven't I experienced quantum entanglement with my car door?

Yes. That's why I think that QM provides support for the concept of free will.

It provides support for probabilistic behavior on large scales, if we assume that it applies to large scales.

I'm sorry, but why would that follow? When you open the box, Schrödinger's cat is either dead or alive. Of course, if you wait long enough to open the box, it will almost certainly be dead :p

This is a good point. I actually thought about this poor kitty when I was replying to you earlier (maybe because of your avatar). I had assumed that you were thinking of yourself (your free will) as deciding the end state of the cat, not opening the box.

I was trying to explain that your earlier example of probabilistic distribution was not analogous to quantum uncertainty. Schrödinger's cat does not really help your case; if anything, it illustrates what I was attempting to explain. You cannot "choose" the end state of the cat (because it is impossible to determine whether the cat is dead or alive), so even if the opening of the box were somehow not causal, or an act of free will, you wouldn't really be choosing anything at all; you would be throwing yourself on the mercy of quantum uncertainty.

It is also helpful to remember that this thought experiment is paradoxical.

The kitty gets us nowhere, unless you have more to add?

I don't see any contradictions. The brain is 'you' or at least, a substantial part of what 'you' are. What mechanism, other than your brain, is needed for you to control yourself?

None whatsoever. I think that both of us agree that your brain is the mechanism for input processing and the resulting output, we just disagree on how this works.

As I said in the earlier free will thread, both of us are arguing from unfalsifiable hypotheses. I think that the onus is not on me to prove that free will doesn't exist. I feel that the burden of proof lies with you (to prove that it does); however, if you hold free will to be axiomatic, you can shift that burden back onto me.
 
It is also helpful to remember that this thought experiment is paradoxical.

While you're correct as far as causality/randomness, I believe the cat is supposed to be a ridiculous example of how not to interpret QM.

One can even set up quite ridiculous cases. A cat is penned up in a steel chamber, along with the following device (which must be secured against direct interference by the cat): in a Geiger counter there is a tiny bit of radioactive substance, so small, that perhaps in the course of the hour one of the atoms decays, but also, with equal probability, perhaps none; if it happens, the counter tube discharges and through a relay releases a hammer which shatters a small flask of hydrocyanic acid. If one has left this entire system to itself for an hour, one would say that the cat still lives if meanwhile no atom has decayed. The psi-function of the entire system would express this by having in it the living and dead cat (pardon the expression) mixed or smeared out in equal parts.

It is typical of these cases that an indeterminacy originally restricted to the atomic domain becomes transformed into macroscopic indeterminacy, which can then be resolved by direct observation. That prevents us from so naively accepting as valid a "blurred model" for representing reality. In itself it would not embody anything unclear or contradictory. There is a difference between a shaky or out-of-focus photograph and a snapshot of clouds and fog banks.
http://www.tu-harburg.de/rzt/rzt/it/QM/cat.html#sect5
 
[We disagree about] what constitutes an image, apparently.
Let's try the direct approach. What's an image?
Ok, I have cut quite a bit out, because it is a huge post and I don't want to get sidetracked.
That's fine. I think I'll cut my reply to your reply short anyway... there's another project I'd like to start carving into in my spare time.

Maybe I have to start explaining where I'm coming from to start showing where I am getting confused. I'm going to assume that both of us agree to the following assertions. Correct me if I'm wrong for any of them.
  • "Mind" and "brain" refer to distinct things (e.g., if I have a brain, I don't necessarily have a mind--more specifically, for example, a dead brain has no mind).
  • Nevertheless, there's nothing ethereal about the mind. The mind is entirely dependent on the brain.
  • There are such things as mental states.
  • Every construct of the mind can be mapped onto a construct of the brain. For example, if I mentally multiply two three digit numbers, there is something within the brain that is carrying out that operation.
  • Experience is a property of a mind (dead brains don't experience, for example).
  • The "audience", if it can be identified at all, is a property of the mind (e.g., when I die, I'm not going to be there).

Now, assuming the above points are agreed with (if they aren't, ignore the following and just focus on the points of disagreement)... you are making one interesting claim. Your claim is that a Cartesian Theater does not exist.

Now, I'd like to point out first, that in my previous post, I did not in fact do what you thought I did:
your version, whether the blind men and the elephant or the neurological story, has the players as the audience, the audience as players, in an active interaction. I can't disagree with that; while you are calling "the rest of the brain" the audience
This is almost true, but the difference, though small, is very significant to me. I'm talking about things in the working brain, which may or may not be "experience" and "audience", but they work like the Cartesian Theater you describe when you say that this is the part that has experience and that is the part that has the audience. I'm doing this intentionally--because I'd like to get an idea of how what you're saying applies to what you think the brain itself actually does, and I'm doing that so that when you say this:
The cartesian theatre has a show (conscious experience) and an audience (the self);
...and you say there is no such thing, I can tell what you're trying to say isn't physically manifest, as opposed to what's imagined.

Part of what I'm trying to do, in order to decide whether or not we even agree in the first place, is to feel out whether your claims are a priori or a posteriori. If it's the former, I would still be able to believe them, but I'd be much more tempted to dismiss it as a sort of "political position".

And watch how this confusion grows when I apply it to certain other statements:
What I rail against, in the form of the cartesian theatre, is the notion that consciousness is generated, that it is given rise to, that it is somehow a qualitatively different something (it is never certain exactly what, if you have followed this thread).
...so when translating this cartesian theater from mind-speak to brain-mapped speak, I get something I'm certain you couldn't be saying--that consciousness is not generated, that it is not given rise to, and that it's not qualitatively different than something; i.e., it's not generated by the brain, given rise to by the normal workings of a living brain, or qualitatively different than a dead brain.

If mind can be mapped to brain, this translation doesn't make sense. That leaves me confused.
 
yy2bggggs said:
...so when translating this cartesian theater from mind-speak to brain-mapped speak, I get something I'm certain you couldn't be saying--that consciousness is not generated, that it is not given rise to, and that it's not qualitatively different than something; i.e., it's not generated by the brain, given rise to by the normal workings of a living brain, or qualitatively different than a dead brain.

I'm not sure if this is the proper place, since I might be misunderstanding what you two are talking about completely, but here's some food for thought at least.

From the following PLoS Biology article – Exploring the "Global Workspace" of Consciousness:
Richard Robinson said:
The authors point out that consciousness is always “about” something; there may be no “pure” state of consciousness that is independent of the content of our thought, and so an important question is whether these results, concerning a single word the participant could not even name, are generalizable to understanding the ongoing flow of our conscious experience. Further experiments with other kinds of stimuli may reveal which late-stage, widespread brain events are common to all conscious processing, and which are specific to the experiment at hand.


The article is about two things: First, it's about Bernard Baars' theory about consciousness as a global workspace. Baars utilizes the Theatre metaphor quite a bit himself, but it's still just a metaphor. For instance in this article: In the Theatre of Consciousness, where he starts out like this (from the abstract):
Baars said:
One dramatic contrast is between the vast number of unconscious neural processes happening in any given moment, compared to the very narrow bottleneck of conscious capacity. The narrow limits of consciousness have a compensating advantage: consciousness seems to act as a gateway, creating access to essentially any part of the nervous system. Even single neurons can be controlled by way of conscious feedback. Conscious experience creates access to the mental lexicon, to autobiographical memory, and to voluntary control over automatic action routines. Daniel C. Dennett has suggested that consciousness may itself be viewed as that to which ‘we’ have access. (Dennett, 1978) All these facts may be summed up by saying that consciousness creates global access.

How can we understand the evidence? The best answer today is a ‘global workspace architecture’, first developed by cognitive modelling groups led by Alan Newell and Herbert A. Simon. This mental architecture can be described informally as a working theatre. Working theatres are not just ‘Cartesian’ daydreams — they do real things, just like real theatres (Dennett & Kinsbourne, 1992; Newell, 1990). They have a marked resemblance to other current accounts (e.g. Damasio, 1989; Gazzaniga, 1993; Shallice, 1988; Velmans, 1996). In the working theatre, focal consciousness acts as a ‘bright spot’ on the stage, directed there by the selective ‘spotlight’ of attention. The bright spot is further surrounded by a ‘fringe,’ of vital but vaguely conscious events (Mangan, 1993). The entire stage of the theatre corresponds to ‘working memory’, the immediate memory system in which we talk to ourselves, visualize places and people, and plan actions.


Which takes us to the second point of the original article by Robinson: There seems to be some new and exciting evidence for explaining consciousness as a kind of global workspace, which can at least partially illuminate how it all comes together; Converging Intracranial Markers of Conscious Access by Gaillard R, Dehaene S, Adam C, Clémenceau S, Hasboun D, et al. (in PLoS Biology).
 
Given that the p-zombie stuff is an admission that we cannot tell even if a human is actually conscious or just pretending, I cannot see this as a dealbreaker with regard to consciousness. How would we falsify the hypothesis that humans are conscious, as opposed to merely pretending?

Indeed - that's exactly the statement the p-zombie is making, in a way. Perhaps this is not ultimately a falsifiable hypothesis?

I don't know the answer to that - but I don't think it's sufficient to insist, as some here seem to be, that these questions are self-evident; clearly they are not. I'm actually inclined to agree - as I already pointed out - that something that seems conscious is conscious, but I still think such a case needs a little more weight behind it than presented in this thread. There seems to be a part of the case missing - for example, why can consciousness not be faked?
 
Depending on your definition of consciousness, it may not be impossible to do this... but it would be impossible to tell the difference, so you could never know.

You're right that it really comes to definitions (as always). But I don't think you're quite right. Imagine my simple one-line chat-bot, for example: as long as you ask it the right question, it gives the appearance of consciousness (unless you want to define fixed-response-to-fixed-stimulus as "conscious", which to me does not seem sufficient, as that makes litmus paper conscious), but there are questions which reveal the limitations of the system.

I think levels of degree have already been mentioned, and I think this may be important. After all, my chat-bot could be programmed with billions of inputs and billions of formulaic answers, and it might hypothetically take geological time before anyone was able to find the "magic" question (as it were). This machine appears conscious, but it seems to lack something that the human mind has, doesn't it?

This is not to say that machine intelligence is impossible, far from it. Not my case at all - I do not agree with westprog and AMM, for the most part. I'm just trying to pull out some threads of the consciousness = self-reference and appearance-of-consciousness == consciousness positions.
 
I agree.

This is obviously not what SHRDLU is doing.

As I said, if it can answer a question that requires self-reference, then it's conscious. SHRDLU can, therefore it is.

My parrot-bot is self-referential. At least - it appears to be in limited circumstances. So how do we tell the difference? If appearance-of-consciousness is consciousness, why is my parrot-bot not conscious by your definition?

I think you're probably correct, but you need more meat on the bones of this to really make it a convincing position.
 
I agree.

This is obviously not what SHRDLU is doing.

As I said, if it can answer a question that requires self-reference, then it's conscious. SHRDLU can, therefore it is.

Your notion of "conscious" here is drawn from AI, I think, Pixy. With humans it is more complex. There can be a whole heap of self-referencing processing going on completely unconsciously.

To define "consciousness" in AI is easy. To do so in humans is more complex.

Nick
 
No, that's not what I said.

I said that consciousness is self-referential information processing.


Hofstadter.

Again, this may be so in AI, but in humans it is clearly not so straightforward.

The most widely accepted neuronally-based models for human consciousness are the many variants of Global Workspace Theory. Amongst cognitive neuroscientists they clearly predominate. In these models, consciousness is a "global access" state in which certain information is "broadcast" to a wide base of distributed parallel networks - unconscious modules. Thus it can be seen that self-referencing has little to do with what makes information conscious or not.

You are trying to transfer the ultra simplistic notions of consciousness in AI to human consciousness and it simply does not work. Human consciousness (actual phenomenality) is vastly more complex and quite possibly a completely different thing from machine consciousness.

Nick
 
No - except in the trivial case you mention with your one-line Basic program.

Well, that's interesting. What makes it trivial? Hypothetically, we could, as I just mentioned in my reply to Mercutio, extend this simple program to produce fixed responses to billions and billions of inputs. This could pass your "appearance of consciousness" test. But it's still trivial, isn't it? It's unable to imagine novel conceptions (though it may be impossible to discover this fact), despite looking and appearing conscious.

Now - the simple way to break this program would be to ask the same question twice. Would the simple addition of a line of code to prevent duplicate or repetitive answers invoke consciousness?
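
For concreteness, the whole extended program would amount to something like this hypothetical Python sketch (the contents and names are made up; the bookkeeping against repeated questions is the last few lines):

```python
# Hypothetical sketch of the extended lookup-table chat-bot.
responses = {
    "What are you doing now?": "I am producing a response to your question.",
    # ...imagine billions more fixed question/answer pairs here...
}

seen = set()  # questions already asked

def reply(question):
    if question in seen:
        return "You already asked me that."  # the extra line against duplicates
    seen.add(question)
    return responses.get(question, "I don't understand.")
```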

[The rest of your post I have no quibble with, and thank you for your clarifications.]
 
You're right that it really comes to definitions (as always). But I don't think you're quite right. Imagine my simple one-line chat-bot, for example: as long as you ask it the right question, it gives the appearance of consciousness (unless you want to define fixed-response-to-fixed-stimulus as "conscious", which to me does not seem sufficient, as that makes litmus paper conscious), but there are questions which reveal the limitations of the system.

I think levels of degree have already been mentioned, and I think this may be important. After all, my chat-bot could be programmed with billions of inputs and billions of formulaic answers, and it might hypothetically take geological time before anyone was able to find the "magic" question (as it were). This machine appears conscious, but it seems to lack something that the human mind has, doesn't it?

This is not to say that machine intelligence is impossible, far from it. Not my case at all - I do not agree with westprog and AMM, for the most part. I'm just trying to pull out some threads of the consciousness = self-reference and appearance-of-consciousness == consciousness positions.

I'll stick my neck out here.

I don't believe (and it is nothing more than an opinion) that if we had an incredibly accurate computer simulation of a human being (running on hardware something like we have today, i.e. transistors and the like) that responded exactly as I would in its simulated universe, it would be conscious in the same way as I am.

This is not because I have any belief that humans are special or that we have any magical properties, but simply because "the map is not the territory". It's the same for any simulation or computer modeling of anything. For example, we can already create an incredibly accurate computer simulation of a steam engine; we can get responses from that model that will match up exactly with what would happen if we ran a real steam engine in the real world, but of course the model will never pump even a millilitre of real water. For all that the model steam engine has the "appearance" of working as a real steam engine, we know there are fundamental differences between it and the real steam engine.

Now, how this is all falsifiable (from either direction) is a tricky question. But is it one that we need be concerned with? We know from empirical evidence, from people with brain damage caused by physical assaults on the structure of the brain (e.g. injury, Alzheimer's, Parkinson's disease) and from people with congenital brain defects, that consciousness is certainly not as we usually describe it with our language.
 
Well, that's interesting. What makes it trivial?
Its triviality.

Hypothetically, we could, as I just mentioned in my reply to Mercutio, extend this simple program to produce fixed responses to billions and billions of inputs.
No, you can't.

This would pass your "appearance of consciousness" test.
No, it wouldn't.

Stop a moment and think out what the combinatorial possibilities are for even short grammatical English sentences. And then for a short conversation.

Your program would occupy the entirety of the visible Universe before it even got started.
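
A rough back-of-the-envelope sketch makes the point (the vocabulary and length figures below are illustrative assumptions, deliberately conservative):

```python
# Illustrative assumptions only; a real grammar would prune this, but not by enough.
vocabulary = 10_000        # a modest working vocabulary of English words
words_per_sentence = 10    # a short sentence
sentences_per_chat = 5     # a very short conversation

sentences = vocabulary ** words_per_sentence       # 1e40 candidate sentences
conversations = sentences ** sentences_per_chat    # 1e200 candidate conversations
atoms_in_visible_universe = 10 ** 80

print(f"{float(conversations):.0e} conversations vs ~{atoms_in_visible_universe:.0e} atoms")
```

Even with generous pruning for grammar and sense, the lookup table dwarfs the number of atoms available to store it.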

But it's still trivial, isn't it?
And impossible.

It's unable to imagine novel conceptions (though it may be impossible to discover this fact), despite looking and appearing conscious.
And impossible.

Now - the simple way to break this program would be to ask the same question twice. Would the simple addition of a line of code to prevent duplicate or repetitive answers invoke consciousness?
"Invoke" consciousness? What does that mean?

If you have self-referential information processing, you have consciousness. If not, then not. You can't "invoke" consciousness.
 
While you're correct as far as causality/randomness, I believe the cat is supposed to be a ridiculous example of how not to interpret QM.

The idea that the cat is in an indeterminate state until the box is opened is an example of how QM is misinterpreted. The idea that a single quantum event can have potentially unlimited real world consequences is correct, however, and it would be perfectly possible to set up such an experiment.
 