My take on why the study of consciousness may indeed not be as simple as it seems.

Can I buy a vowel?
I'm thinking along the lines of RNGs.
I did mean "computable to arbitrary precision" rather than "strictly computable"; certainly there are processes that are not strictly computable.
Not sure precision is always meaningful in this context, but I was just being annoyingly pedantic.
 
Randfan, can we recapitulate?

You say that my argument is equivalent to the Chinese Room argument because colloquially "understanding" is more or less equivalent to "consciousness". Is that fair?

I have said that these terms are not equivalent, and in any case the CR argument has a different structure - it hinges on what the operatives do or do not understand, whereas mine does not.

You have asked me whether we ever understand that something exists when we are not conscious.

I have previously said that this is an irrelevant question because:

"X whenever Y" does not mean that X is equivalent to Y.

It also has to be true that "Y whenever X".
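
In symbols (my gloss, for clarity - "X whenever Y" is just $Y \Rightarrow X$):

$$(Y \Rightarrow X) \;\not\Rightarrow\; (X \Leftrightarrow Y), \qquad X \Leftrightarrow Y \;\equiv\; (Y \Rightarrow X) \wedge (X \Rightarrow Y)$$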

Now you are pressing me to answer the question - I am happy to do so, but I wish you would at least acknowledge the reason I didn't answer it in the first place.

However:

I ask you, can you understand something to exist if you are not conscious?
Colloquially speaking? Yes.

Suppose I am about to go in for an operation. I understand my wallet exists when I put it in the valuables locker.

Then I am unconscious for a period and when I wake up I understand that my wallet exists even before I have seen it.

So colloquially I say that I still understand. I don't say that I stopped understanding and then started understanding again.

My intuitive understanding is that understanding persists during periods of unconsciousness.

Now the second part - are there ever moments when you don't understand what you are perceiving?

Are there any "what the hell was that?" moments?

Colloquially does "I perceive the taste of a peach" really mean exactly "I understand the taste of a peach"?
 
I'm not sure your experience could result from paper consciousness. I think the paper-entity would know it was a paper-entity.

I suppose the emulators could also emulate a body and the external world for the paper-entity.

~~ Paul

The Algorithmic theory of consciousness asserts that the entire experience is due to the execution of the algorithm. Interaction with the environment, time, the physical means of executing the algorithm - all are totally irrelevant.

I'm not sure how many theorists actually believe this. It's a very immaterial sort of materialism. There's also a complete absence of explanation in the sense of cause and effect. Consciousness just happens when you run the algorithm. It's an "emergent property". And if you have doubts about this, it means you believe in magic, apparently.
 
Searle's point was to rebut AI. Your entire complaint is silly and absurd - to think Searle's thought experiment had nothing to do with consciousness, when the entire point was to show why the Turing test could never demonstrate consciousness.
I can only deal with what he says, not try to guess what he means.

The objection I made was precisely that he had not clearly defined what he means by "understand".

And also that he commits a non sequitur in concluding that the CR does not understand because the operative does not understand.

Now you are saying that these objections are silly and absurd because Searle has made abundantly clear that he meant understanding as a synonym for consciousness.

I don't agree. I think he has not clearly defined "understand" and neither have you.
So Searle didn't mean consciousness when he said "understand" - though you won't tell us how one could understand without being conscious.

"X whenever Y" does not mean that X is equivalent to Y.

It is also necessary that "Y whenever X".
Clearly Searle is making a case against strong AI. Right?
And he was making it in terms of the concept of "understanding", which he later clarified to relate to intentionality.

He is saying that an AI system does not understand. That it has no intentionality.

He could have clarified to say "I mean consciousness" but he didn't.

In fact he later made a separate argument that consciousness was not computational, because nothing was intrinsically computational.
 
But don't you think that a Turing machine can simulate a coin toss to an arbitrary degree of randomness?

~~ Paul

No. A Turing machine is deterministic.

In practice, real computers are able to produce effectively random numbers (something that is often a requirement) by seeding a generator with external data, such as the current time. But the fact remains - it's part of the basic definition of a Turing machine that the same data ("tape") produces the same result, without fail, every time.

This has the effect that when we choose a Turing machine, we are also choosing all the decisions that it can possibly make.
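
A minimal Python sketch of the point (my own illustration, nothing from the thread): a deterministic machine seeded with the same data produces exactly the same "coin tosses" every run; only changing the input data changes the output.

```python
import random

def coin_tosses(seed, n=10):
    # Same seed -> same internal state -> same sequence, every time.
    rng = random.Random(seed)
    return "".join(rng.choice("HT") for _ in range(n))

print(coin_tosses(42))   # some fixed sequence of H/T
print(coin_tosses(42))   # exactly the same sequence again
print(coin_tosses(7))    # a different seed gives a different sequence
```

The tosses look random, but the machine plus its input fully determine them - which is all "deterministic" means here.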
 
lol.

Alright, let's play this game a different way.

Can you give me any examples of a decision that a human makes that does not satisfy this mathematical definition of decision?

Since you don't have a mathematical definition of "decision", I think that's going to be a problem for a start. A mathematical definition of decidability is something different.
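
For reference, the standard textbook notion (a definition I'm supplying here, not one from the thread): decidability is a property of problems, not of choices. A language $L \subseteq \Sigma^{*}$ is decidable iff

$$\exists M \;\forall x \in \Sigma^{*}:\ M(x) \text{ halts, and } M \text{ accepts } x \iff x \in L,$$

where $M$ ranges over Turing machines. Nothing in that definition says anything about an agent "making a decision".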
 
By the way, when I asked if people thought it was possible that the moment they are experiencing right now could in reality be a billion years of writing numbers on paper, I only received one confident "yes".

What about the others?
 
Human prediction is fallible. BTW: the Loebner Prize is pushing us toward that very thing. The human brain is in a number of ways the most complex puzzle science has studied. It's going to be a while. That it will take some time isn't proof of anything.

It's a strong indicator that we shouldn't think we've solved the problem until we've solved the problem.

I'm now reading The 21st Century Brain, by Steven Rose for an idea of the state of play in neurological research.

That's cool but bear in mind you've provided no justification for your prediction and the experts in the field aren't scratching their heads and throwing in the towel. IBM has committed significant resources. Perhaps you know something they don't.

You seem to be making the assumption that AI research has the aim of producing consciousness. That might occasionally be the case, but I'm pretty sure that the reason IBM is committing significant resources is that they think they can make money out of it.

If you have a mission statement from IBM saying that they intend to crack the secret of human existence, rather than produce more efficient expert systems, then I'd like to see it. I have absolutely no problem with research into Artificial Intelligence. I simply don't believe it is going to produce machines that think or feel at any time in the foreseeable future.
 
So you keep saying, but I have no idea what you mean by "self-aware".

OK, let's go through this again.

Is the computer-programmed car aware of its environment?

Can it be aware of another car in its environment?

You see, I regard the concept of "self-awareness" as being meaningless in the context of inanimate objects. I've yet to see a definition that's in any way rigorous.

Drop a pencil on the floor and it will become "aware" of the floor when it hits it.
 
By the way, when I asked if people thought it was possible that the moment they are experiencing right now could in reality be a billion years of writing numbers on paper, I only received one confident "yes".

What about the others?

Pixy seems to disagree. Though maybe it's the Pixy-bot writing "wrong" after my posts.
 
By the way, when I asked if people thought it was possible that the moment they are experiencing right now could in reality be a billion years of writing numbers on paper, I only received one confident "yes".

What about the others?

Despite my doubts about "replacing neurons with non-biological switches", I don't doubt that what you describe could well be possible. But is it likely? I simply have no way of making such a call, so it falls into the category of "good premise for a science fiction story" (and it has been used as such in at least one novel, Permutation City by Greg Egan), but otherwise I see no utility in the idea.
 
Pixy seems to disagree. Though maybe it's the Pixy-bot writing "wrong" after my posts.
I was Robin's 'one confident "yes"'.

Your statement was, as usual, emphatic, far-reaching, and factually false. (Hint: Algorithms do nothing without data.)
 
I have seen some "easier to understand" ones, but I don't have links. If I remember correctly many reviews of the book go over simpler refutations as well.

That one you posted is incidentally the one I have bookmarked, lol.

Happy anniversary, Dodger. I'm happy for you that the Earth has completed yet another revolution since you were ejected from a uterus :p I just love how humans celebrate meaningless things, don't you?
 
Despite my doubts about "replacing neurons with non-biological switches", I don't doubt that what you describe could well be possible. But is it likely? I simply have no way of making such a call, so it falls into the category of "good premise for a science fiction story" (and it has been used as such in at least one novel, Permutation City by Greg Egan), but otherwise I see no utility in the idea.

The utility of the idea is not in the likelihood that it's actually happening - it's in the concept that if an algorithm is equivalent to the algorithm that is executed in a human brain, then any execution of that algorithm is entirely equivalent to the human experience.

Incidentally, Egan makes the point that not only can the algorithm be executed in any physical way, it can be executed in any order, and it will still be experienced normally.
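
A toy Python sketch of that order-independence claim (my own illustration, with a made-up state-update function - not Egan's code): compute a deterministic state sequence by demanding the states in shuffled order, and every state comes out identical to the in-order run.

```python
import random
from functools import lru_cache

def step(s):
    # Stand-in for one "tick" of the simulated mind: any deterministic update.
    return (s * 6364136223846793005 + 1442695040888963407) % 2**64

@lru_cache(maxsize=None)
def state(t):
    # The state at tick t depends only on the state at tick t - 1.
    return 0xDEADBEEF if t == 0 else step(state(t - 1))

in_order = [state(t) for t in range(100)]

state.cache_clear()
ticks = list(range(100))
random.shuffle(ticks)
out_of_order = {t: state(t) for t in ticks}  # states demanded in arbitrary order

assert all(out_of_order[t] == in_order[t] for t in range(100))
print("Identical states, whatever order they were computed in.")
```

The external schedule changes, but the content of each state is fixed by the algorithm alone - which is the sense in which the experience "inside" is claimed to be unaffected.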


Personally I find the argument mainly useful as a reductio ad absurdum.
 
The utility of the idea is not in the likelihood that it's actually happening - it's in the concept that if an algorithm is equivalent to the algorithm that is executed in a human brain, then any execution of that algorithm is entirely equivalent to the human experience.
The human brain doesn't execute an algorithm. The function of the human brain can, however, be approximated arbitrarily closely by an algorithm.

Incidentally, Egan makes the point that not only can the algorithm be executed in any physical way, it can be executed in any order, and it will still be experienced normally.
It would be experienced out of order as measured against an external clock, but any such clock is by definition not accessible to the experiencer, who therefore has no way of knowing this.

Personally I find the argument mainly useful as a reductio ad absurdum.
So, where exactly is the logical contradiction?
 
