The Hard Problem of Gravity

What would a conscious machine look like?

At least one poster here thinks that SHRDLU is conscious, and that after that the problem was solved and of no interest.

The Turing Test remains popular. Personally, I'd expect a conscious machine to convince us of its consciousness using the same methods that humans use. However, since its experience would be so different to our own, I'm not sure how easy that would be.
 
So, again, it's a matter of function. It's just a matter of making things work the right way.

It's still the case that the only way to find out if it's possible is to do it.

We know that we can create living tissue from non-living matter. We can create human beings in a petri dish. We could possibly create artificial beings who are conscious and still not know how the consciousness got there.
 
Sure. That's what I was alluding to when I mentioned chat-bots. There's a really strong case to be made that a Turing Test Machine really is conscious as opposed to being a good facsimile of being conscious, but it's not self-evident by any means. What is the difference? Is there a difference? Is an emulation of consciousness conscious? Can an emulation of consciousness be created that is not conscious? Why? Why not?

Your talk here about "life is life", or "we know life when we see it" is tautological, and not really that useful IMHO. What we need to talk about is how we'd falsify the consciousness hypothesis - that is, how would we work out if something was conscious, or merely exhibiting a facsimile of consciousness?

This is the inverse of the p-zombie "problem" in a way. Or something like the Chinese Room. What exactly is the difference between consciousness and unconsciousness?

These are difficult conceptual problems, and no amount of bald assertion will win either argument. I'm not entirely convinced on this, despite having done a great deal of reading on the subject, but my instinct is that since a hypothetically perfect chat bot is indistinguishable from a conscious agent, it is therefore conscious. Still, I'd like to see someone propose a way to falsify the hypothesis.
 

The problem is that the way we determine whether other human beings are conscious is very complex and hard to define. It depends hugely on having shared experience. In the absence of such shared experience, it's difficult to see how we could determine consciousness.

A machine taking the Turing test could pretend to have such experiences, like someone manning a help desk in Delhi who tries to convince you he's calling from Manchester. But if the machine is claiming to have experiences it doesn't have, how can that be evidence of consciousness?
 
I don't get the quantum argument in any of these threads. Every time I post in one of the "free will" threads it gets brought up as well.

I find myself arguing constantly about how changing "deterministic" to "probabilistic" doesn't do anything for their argument.

This is a good point that I have been pondering how to respond to. I think the crucial difference comes down to this: random --> probabilistic, but probabilistic -/-> random.

To give a textbook example (and this does come from a textbook), a random variable could be defined as "how many people in a random sample have type O+ blood". This is considered a 'random variable' because before we collect the data, there's no way of knowing exactly how many people in the sample will have that blood type. After we collect the data, we can use the results to describe a probability distribution of the proportion of people who have type O+ blood. However, one's blood type is NOT random. It's specified by the particular genes that were inherited in regard to that trait.
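The distinction can be sketched as a toy Python simulation (the O+ frequency here is an illustrative assumption, not an exact figure): each individual's blood type is fixed in advance, yet the count of O+ donors in a random sample is unknown until the data are collected.

```python
# Toy sketch of the blood-type example: the underlying data are
# deterministic, but the sample statistic is a random variable.
import random

random.seed(0)
P_O_POSITIVE = 0.38  # assumed, illustrative frequency of type O+

# A fixed population: every individual's blood type is already
# determined (by their genes, in the real-world analogy).
population = ["O+" if random.random() < P_O_POSITIVE else "other"
              for _ in range(10_000)]

# The random variable: how many O+ individuals in a sample of 100?
# Unknown before sampling, fixed afterwards.
sample = random.sample(population, 100)
count = sum(1 for t in sample if t == "O+")
print(count)
```

The point the example illustrates: nothing about any individual is random, yet the sample count still has a probability distribution.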

The fact that probabilistic does not necessarily imply random means that if choices follow probabilistic rather than deterministic paths they could be originated by the individual making the selection rather than being either an inevitable response to the environment (determinism) or randomly selected from the various possible choices. Thus, the argument for free will can be supported by a probabilistic approach.
 

Interesting argument.

(1.) We first assume, for the sake of this argument, that Quantum Uncertainty can affect the macroscale functions of the human brain. This is a stretch in my opinion, but I am a layman in this regard, so I must reluctantly let it pass.

(2.) We must then assume that the probabilistic nature of our brain's information processing results in a variety of "end results". This could also be seen as more than one effect in the chain of causality, and thus a violation of causality. I am willing to entertain that there could be more than one possible path, contingent upon our first assumption (1) being correct.

(3.) We must now assume that "you" are somehow the one that "chooses" the path (effect).

I have big problems with (3). The fact that there is more than one possible effect as the end result of the calculation of causes is due to Quantum Uncertainty, which, unlike your blood-type analogy, rests on the idea of wave-function collapse, not on a lack of data about a random person's blood type. First you are violating causality by letting these effects "go macro" on the human brain, but then you are saying that the individual could "choose" his effect or path. This is a violation of the very principle that you have applied to the brain, because in QM that probability distribution is not due to a lack of data; it is due to there being no data in nature that can describe the quantum position/momentum.

It seems like you want to assume that QM affects thought processing, to help violate causality. This changes "deterministic" into "probabilistic". Then you want to "choose" which one of the probabilistic effects occurs. This changes "probabilistic" to "deterministic, but I was the one that determined it", and not only have you not explained a mechanism behind this, but you are seemingly choosing from a set that is STRICTLY probabilistic in nature, with NO hidden values (like your blood-type example had).

Whew, that made my brain hurt.
 
(1.) We first assume, for the sake of this argument, that Quantum Uncertainty can affect the macroscale functions of the human brain. This is a stretch in my opinion, but I am a layman in this regard, so I must reluctantly let it pass.
My understanding is that quantum effects can and do scale up to macroscopic effects, but as yet there is no evidence that human brains are affected by this. It's pure supposition at this point. AkuManiMani has posted some links supporting this idea previously in this thread.
(2.) We must then assume that the probabilistic nature of our brain's information processing results in a variety of "end results". This could also be seen as more than one effect in the chain of causality, and thus a violation of causality. I am willing to entertain that there could be more than one possible path, contingent upon our first assumption (1) being correct.
I’m not sure I follow you here. What do you mean by ‘more than one effect in the chain of causality’? And why would this be a violation of causality?
(3.) We must now assume that "you" are somehow the one that "chooses" the path (effect).
Allowing the possibility that each of us may choose our path is why this argument gives support to the idea of free will.
I have big problems with (3). The fact that there is more than one possible effect as the end result of the calculation of causes is due to Quantum Uncertainty, which, unlike your blood-type analogy, rests on the idea of wave-function collapse, not on a lack of data about a random person's blood type. First you are violating causality by letting these effects "go macro" on the human brain, but then you are saying that the individual could "choose" his effect or path. This is a violation of the very principle that you have applied to the brain, because in QM that probability distribution is not due to a lack of data; it is due to there being no data in nature that can describe the quantum position/momentum.
I’m sorry, but I’m not following this chain of reasoning at all. Hopefully, once you explain your concerns with (2), this will be cleared up also.
It seems like you want to assume that QM affects thought processing, to help violate causality. This changes "deterministic" into "probabilistic". Then you want to "choose" which one of the probabilistic effects occurs. This changes "probabilistic" to "deterministic but I was the one that determined it",
Yes. That would be my understanding of free will. Each individual may determine the choices he or she makes.
and not only have you not explained a mechanism behind this, but you are seemingly choosing from a set that is STRICTLY probabilistic in nature, with NO hidden values (like your blood-type example had).
An individual always chooses from whatever set of alternatives he or she perceives is available at the time. I'm not sure what you mean by this set being 'STRICTLY probabilistic in nature'. Hidden values do not enter into it AFAIK. If they did, it would become another deterministic argument. My example was just a way of illustrating that probabilistic does not necessarily imply random (perhaps arbitrarily random would be a better choice of words).

Whew, that made my brain hurt.
Fun stuff to think about, eh?
 
My understanding is that quantum effects can and do scale up to macroscopic effects, but as yet there is no evidence that human brains are affected by this. It's pure supposition at this point. AkuManiMani has posted some links supporting this idea previously in this thread.

Yeah, as I said, I am too much of a n00b to try and debunk/verify this. :(

I’m not sure I follow you here. What do you mean by ‘more than one effect in the chain of causality'? And why would this be violation of causality?

When you assign multiple paths for an individual to select from, truly select from, then you are saying that there are a variety of possible outcomes (effects) for the variables (causes) that are processed in the mind. In any other causal system (non-quantum), we only think that there are multiple possible outcomes, because we cannot fully comprehend the variable pool and processes involved. If one were able to fully comprehend all of these factors, the end result would be calculable, and therefore deterministic. In an example that truly does have more than one possible outcome (not deterministic), you are saying that no amount of knowledge of these variables or processes could result in a calculable outcome. This is a violation of causality; however, it is accepted in our thought experiment because we have applied quantum uncertainty to the behavior of this system (the brain).

Yes. That would be my understanding of free will. Each individual may determine the choices he or she makes. An individual always chooses from whatever set of alternatives he or she perceives is available at the time. I'm not sure what you mean by this set being 'STRICTLY probabilistic in nature'. Hidden values do not enter into it AFAIK. If they did, it would become another deterministic argument. My example was just a way of illustrating that probabilistic does not necessarily imply random (perhaps arbitrarily random would be a better choice of words).

Your example is a good one, but it is not in any way similar to the probabilistic nature of quantum uncertainty. In your example, the people actually do have a blood type. In quantum mechanical terms, there is no such data. All that exists is the probability distribution. So if this quantum wackiness were to work its way up to the human brain, one would have to agree that there would be no actual selections (positions) to choose from, just as there are no true measurements of position/momentum in QM due to wave-function collapse.

You have taken all of the useful bits of quantum uncertainty and applied them to the human mind to "make room" for *free will/spooky consciousness*, but you are ignoring the aspects of the same principle that would contradict your idea. You also have not presented a mechanism for how "you" would override this probabilistic brain in a deterministic fashion to "choose" your path.
 
At least one poster here thinks that SHRDLU is conscious
SHRDLU is conscious. Did you read the transcript?

and that after that the problem was solved and of no interest.
No interest to AI researchers, specifically.

The Turing Test remains popular. Personally, I'd expect a conscious machine to convince us of its consciousness using the same methods that humans use.
The Turing Test is not about consciousness, it's about being human. Acting human, using natural language, knowing everything you'd expect an adult human to know, having emotions, all of that baggage. That's vastly more complicated than mere consciousness.
 
Tell me how we would distinguish a conscious machine from, say, a really good chat-bot? Or would a really good chat-bot by definition be conscious? I don't know, but the answers are most certainly not self-evident.
Ask it questions about itself. If it can answer questions that require self-reference, then it's conscious.
 
My understanding is that quantum effects can and do scale up to macroscopic effects, but as yet there is no evidence that human brains are affected by this.
Correct on both points. Nuclear reactors, photomultipliers, that sort of thing, are clear examples of quantum effects scaling up to the macro world.

It's pure supposition at this point.
Yep.

AkuManiMani has posted some links supporting this idea previously in this thread.
Yes, I took a look at them. Four of them were papers on physical chemistry - and of course quantum mechanics is important at that level. The fifth was pure speculation.
 
The problem is that the way we determine whether other human beings are conscious is very complex and hard to define.
No it isn't.

It depends hugely on having shared experience.
No it doesn't.

In the absence of such shared experience, it's difficult to see how we could determine consciousness.
No, it's extremely simple.

A machine taking the Turing test could pretend to have such experiences, like someone manning a help desk in Delhi who tries to convince you he's calling from Manchester.
And has almost nothing to do with the Turing Test.

But if the machine is claiming to have experiences it doesn't have, how can that be evidence of consciousness?
Because all consciousness is, is self-reference. If you can do that, you're conscious. The end. SHRDLU can do it; SHRDLU is conscious. SHRDLU couldn't pass the Turing Test in a million years, but that's completely irrelevant.
 
Ask it questions about itself. If it can answer questions that require self-reference, then it's conscious.

How would you falsify this hypothesis? Can one create something that seems to be conscious (that is, it actually only appears to be so)? What's the difference between something that is conscious, and something that is designed only to give the appearance of consciousness?

I don't know, but it seems this needs some clarification if your assertion (and that's really all you've done so far - assert) is to be more useful. You just asserted, with no further justification, that consciousness is self-reference and self-reference is consciousness. Now, you may be right, and I think I agree with you to a degree, but this needs more elaboration. It really isn't as self-evident as you are asserting. Is self-reference a sufficient quality for consciousness, or only a necessary one?

I know you've cited GEB up-thread - would Hofstadter's Achilles and Tortoise stories be conscious? I would say - of course not. Which seems to me to imply that something more than self-reference is at play here (or, more clearly, that self-reference and consciousness are not functionally or conceptually exactly synonymous).
 
Ask it questions about itself. If it can answer questions that require self-reference, then it's conscious.

I'm no programmer, but even I could write a BASIC programme that, when asked the string "What are you doing now?", would produce the response "I am producing a response to your question". It's one line of code, and self-referential, but there's no thought or intelligence. I'd argue that this type of restricted parrot-bot looks as if it is self-referential, but isn't. Perhaps you wouldn't?
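For concreteness, here is a minimal sketch of that hypothetical parrot-bot (in Python rather than BASIC): a canned lookup with no model of itself behind the superficially self-referential reply.

```python
# A trivial "parrot-bot": it emits a fixed, superficially
# self-referential answer, with no thought or intelligence behind it.
def parrot_bot(question: str) -> str:
    """Return a canned reply that merely looks self-referential."""
    if question == "What are you doing now?":
        return "I am producing a response to your question."
    return "I do not understand."

print(parrot_bot("What are you doing now?"))
# -> I am producing a response to your question.
```

The point of the sketch is the objection itself: the string mentions "I", but nothing in the program refers to its own state, so the appearance of self-reference is doing all the work.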
 
What's the difference between something that is conscious, and something that is designed only to give the appearance of consciousness?

No difference that I can think of. And if there is no difference that matters, then there is no reason not to say both are conscious.
 
No difference that I can think of. And if there is no difference that matters, then there is no reason not to say both are conscious.

So it is impossible to make something that looks conscious, but isn't?
 
How would you falsify this hypothesis? Can one create something that seems to be conscious (that is, it actually only appears to be so)? What's the difference between something that is conscious, and something that is designed only to give the appearance of consciousness?

Given that the p-zombie stuff is an admission that we cannot tell even if a human is actually conscious or just pretending, I cannot see this as a dealbreaker with regard to consciousness. How would we falsify the hypothesis that humans are conscious, as opposed to merely pretending?
 
So it is impossible to make something that looks conscious, but isn't?

I think this is what Pixy is saying. That the things that would make something look conscious are the same things that make it conscious.

I don't know enough about the subject to agree or disagree, but it seems to make sense. How else do we judge consciousness but by behavior?
 
So it is impossible to make something that looks conscious, but isn't?

Depending on your definition of consciousness, it may not be impossible to do this... but it would be impossible to tell the difference, so you could never know.


eta--this is the JREF forum, right? Nobody here seriously considers mind-reading a serious possibility?
 
Something that is possible, in principle, with transistors and software. There are machines that build other machines, and certainly there are programs that emulate evolution by using variation and selection. So, why would this be so different from the biological equivalent?

Actually, as I was responding to you this morning I was considering the possibility of having a self-sufficient machine factory with a wireless network. The factory would be run and maintained entirely by robots of some sort or another, and their actions would be coordinated by a network intelligence. This factory could manufacture many products, including the robots and machines that are part of its day-to-day operations. When a unit malfunctions or is damaged beyond repair, its parts would be recycled.

I suppose such a factory could be considered analogous to a cell; albeit a gigantic, crude, functional equivalent of a cell. Could the factory be considered 'alive'? I suppose in some sense it could. /shrug
 