
My take on why the study of consciousness may indeed not be as simple as it seems

You seem to be making the assumption that AI research has the aim of producing consciousness. That might occasionally be the case, but I'm pretty sure that the reason IBM is committing significant resources is because they think they can make money out of it.
Only if their model is correct. It hinges on whether or not they can reverse-engineer the brain.

If you have a mission statement from IBM saying that they intend to crack the secret of human existence, rather than produce more efficient expert systems, then I'd like to see it. I have absolutely no problem with research into Artificial Intelligence. I simply don't believe it is going to produce machines that think or feel at any time in the foreseeable future.
"Human existence"? What?

The Blue Brain Project is the first comprehensive attempt to reverse-engineer the mammalian brain, in order to understand brain function and dysfunction through detailed simulations.
 
westprog said:
But the "program" will have no interaction with its environment. It will simply have data. If the experience of consciousness is identical whether or not there is interaction with the environment, then the environment is irrelevant. There's just data, presented in one format or another.
Why would the experience of consciousness be identical with no environment vs. a simulated environment?

In order for this to be a valid explanation, we have to assume sharp boundaries between the organism and the environment, so that we can extract the data processing component of the brain, snip off all the input channels, and replace them with suitable data inputs. I don't see this as reflecting how human beings actually work.
What do you think the difference is? Let's assume that the simulated human can change the simulated environment appropriately, too.

Certain processes lead to certain internal behaviours? That sounds very close to "just happens" to me.
I don't understand what you mean by "just happens." Can you give an example of something that doesn't "just happen"?

The only explanation as to why we need to assume that there is something called "consciousness" involved with algorithms is that there are human beings who claim to possess it. It explains nothing about the behaviour of algorithms.
I don't understand.

If you change the Turing model, then all the theory relating to how Turing machines work has to be dumped. Turing machines are deterministic. It's a fundamental characteristic.
Yup, you're right. I've created an oracle machine with a random oracle. Let's stick with a Turing machine and not worry about rerunning the same simulation. After all, you can't rerun a human.

But this brings up an interesting question. Do we need an oracle machine with a random oracle to simulate a human? I believe such an oracle is not equivalent to a Turing machine, so it's an interesting question. I suppose the human brain could have some source of true randomness. However, I don't see why this is a problem for the physicalist model of consciousness.
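The distinction can be shown with a toy sketch (Python; the states and transition table are made up for illustration). A Turing machine's step function always maps the same (state, symbol) configuration to the same action, while a machine that consults a random oracle can act differently on identical configurations, which is why a rerun need not reproduce the same trace:

```python
import random

# Hypothetical deterministic transition table:
# (state, symbol) -> (new_state, symbol_to_write, head_move)
DELTA = {
    ("s0", 0): ("s1", 1, 1),
    ("s0", 1): ("s0", 0, 1),
    ("s1", 0): ("halt", 0, 0),
    ("s1", 1): ("s0", 1, 1),
}

def deterministic_step(state, symbol):
    # Ordinary Turing machine: the same configuration
    # always yields the same action, so reruns are identical.
    return DELTA[(state, symbol)]

def oracle_step(state, symbol, oracle=random.random):
    # Machine with a random oracle: identical configurations
    # can branch differently, so a rerun need not match.
    if oracle() < 0.5:
        return DELTA[(state, symbol)]
    return ("s0", symbol, -1)  # hypothetical alternate action

# Rerunning the deterministic machine is reproducible:
assert deterministic_step("s0", 0) == deterministic_step("s0", 0)
```

This is only a sketch of the determinism point, not a claim about how a brain simulation would actually be structured.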

~~ Paul
 
drkitten said:
Not at all. If you simply define the "data" as the problem data plus the date-and-time at which it was run (which is easy enough to put in), you get effective randomness. If you define "data" as the problem data plus the output of a scintillation counter, you get quantum randomness.
In the latter case, I don't think you have a Turing machine anymore; that is, not if it can obtain an arbitrary quantity of random numbers as it executes.

We may get into some cans of worms if we talk about multiple runs of the simulation with modifications to the tape in between. I think we should consider one long run, just like a real human.

~~ Paul
 
Continuing my blabbering: We know that a Turing machine with a one-ended tape is equivalent to one with a no-ended tape. So let's consider a no-ended tape with two infinite directions: one for normal use, and the other holding an infinite sequence of truly random numbers that the machine can consult.

Does that do the trick? Do we have a Turing machine with an infinite source of truly random numbers?
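As a rough sketch of the idea (Python, purely illustrative: a lazily populated dict stands in for the infinite tape, and os.urandom stands in for "true" randomness, which a real computer can only approximate):

```python
import os

class TwoWayTape:
    """Two-way-infinite tape: non-negative cells are the working
    tape, negative cells hold a lazily materialized random bit
    sequence the machine can consult."""

    def __init__(self):
        self.cells = {}  # sparse representation of an infinite tape

    def read(self, i):
        if i not in self.cells:
            if i < 0:
                # Random half of the tape: fill each cell on first
                # access. os.urandom is only a stand-in here; genuinely
                # "true" randomness would need a physical source.
                self.cells[i] = os.urandom(1)[0] & 1
            else:
                self.cells[i] = 0  # working cells start blank
        return self.cells[i]

    def write(self, i, value):
        self.cells[i] = value

tape = TwoWayTape()
bit = tape.read(-1)       # consult the random side
assert bit in (0, 1)
assert tape.read(5) == 0  # working side starts blank
```

Lazily filling cells on first access finesses the question of how an infinite random initialization could ever be completed, which is exactly the issue raised below.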

~~ Paul
 
An infinite tape? How does the tape generate these true random numbers? Or do we simply assume that they exist for the purpose of the thought experiment? Which is fine; I'm just asking so I can wrap my head around it.

BTW: How does a tape that has one end and continues infinitely differ from the no-ended tape? Is one infinitely longer than the other? ;)
 
RandFan said:
An infinite tape? How does the tape generate these true random numbers? Or do we simply assume that they exist for the purpose of the thought experiment? Which is fine; I'm just asking so I can wrap my head around it.
Good question. I have no idea whether an arbitrarily long initialized portion of the tape is legit or not. If not, then I retract my idea. :D

BTW: How does a tape that has one end and continues infinitely differ from the no-ended tape? Is one infinitely longer than the other?
The one-ended tape has a special symbol in its end cell and will not allow the machine to move past it. The no-ended tape has no movement restrictions whatsoever. (Both have countably infinitely many cells, so neither is "longer" than the other.)

Edited to add: If we initialized only a finite number of random numbers, but enough to last for a simulation of a full lifetime, then we finesse the infinite initialization issue.

~~ Paul
 
Cool.

I still don't know if a one-sided infinite tape is infinitely shorter than a no-ended tape, but I'm just having fun. :)
 

Thanks, that's interesting.

Yes, they are doing research which is more blue-sky than I'd thought - but it's not necessary for them to produce something that experiences in order for it to be useful. They can make money if the architecture copied from a brain (initially a rat's) turns out to have unexpected properties with unpredictable applications. As far as pure research goes, if it turns out that consciousness isn't there, then that's interesting too.
 
Thanks, that's interesting.

Yes, they are doing research which is more blue-sky than I'd thought - but it's not necessary for them to produce something that experiences in order for it to be useful. They can make money if the architecture copied from a brain (initially a rat's) turns out to have unexpected properties with unpredictable applications.
Of course, but there is no bottom-line financial objective. I've no doubt that they hope for one, but it would be a bad corporate decision if financial return were the only impetus.

As far as pure research goes, if it turns out that consciousness isn't there, then that's interesting too.
Of course. It's called falsification, and it's an integral part of science. Oftentimes we learn by finding out we are wrong, often in unexpected ways. This could lead to insight into an entirely different paradigm. But they didn't commit the resources in hopes of proving the idea wrong and finding a new paradigm.
 
But that is not a definition.

Dear God that I do not believe in...

Look, see this sentence from the wiki page on decidability?

In logic, the term decidable refers to the existence of an effective method for determining membership in a set of formulas.

That thing called "effective method" is what is informally known as a decision. If you want a mathematical definition of "decision" then just replace "effective method" with "decision." And guess what -- there is a middleman page for "effective method" that, among other things, links to pages about everything else we have been talking about here.

Are you starting to notice a trend? I am. The trend is that every single time anyone provides you a resource that will satisfy your demands for "proof" of something you don't understand, you reply in turn that you don't even understand the resource that would help you understand what you don't understand.

If one extrapolates this pattern, the conclusion is that you really aren't very educated at all in any of the areas of knowledge required to understand this issue and furthermore, given how easy it is to follow links in a web browser, that you don't even want to be educated in these areas.
 
Despite my doubts about "replacing neurons with non-biological switches" I don't doubt that what you describe could well be possible. However, is it likely?
It does not matter whether it is likely; it is a thought experiment, and it is indeed extremely unlikely. All that matters is that you think it possible that this moment you are experiencing right now could in reality be a billion years of putting marks onto paper with pencils.

And of course, if someone then goes and rechecks the deskcheck, recalculating the numbers and writing each one down next to the original - since there is no difference in process, this moment you are experiencing right now could also be that. Yes?

So can I take that as a second confident yes?
 
I was Robin's 'one confident "yes"'.

Your statement was, as usual, emphatic, far-reaching, and factually false.
Your statement was, as usual, emphatic and vague. What is factually false?
(Hint: Algorithms do nothing without data.)
But who said it does not have data?

If we can model the neuronal process, modelling the sensory inputs for half a second of consciousness ought to be easy. The algorithm has all the data for the neuronal states of an adult human at a particular age (which would obviously include memories) and the data for sensory input.

So it will have no problem in producing a half second of actual consciousness, like the sound of a saxophone or the taste of a peach, just as you experience it right now - yes?

You are not backing away from the "yes" are you?
 
So it will have no problem in producing a half second of actual consciousness, like the sound of a saxophone or the taste of a peach, just as you experience it right now - yes?

Which also means the previous half second might have been monks and quills in an entirely different universe, simulating the me in this current simulated universe, the rules of which might be somewhat different from each other.

I tell you, the computational model opens up some monumentally fascinating possibilities. The stuff religion comes up with pales next to reality.
 
This is being obtuse.
Will you stop with the insulting language. It reflects more on you than it does on me.
It's not controversial at all. The point is trivially true. Have you ever heard of a thing called inference?
Well, if the definition he is using of "understand" is so uncontroversial, why not just say what it is? No Socratic questioning. Just a straight-out "by 'understand' Searle means..."

That is all I am asking.
HE IS TRYING TO REBUT AI.
As I have already agreed. Again and again. I think if you are going to call people obtuse you really ought to pay more attention to what I said.

The question is, what is his precise argument.
And you are unable to figure out what the hell was his point about the Chinese Room as it relates to consciousness? Unbelievable.
We both think we have figured out what his point is.

We both disagree. Therefore one of us is wrong. You are assuming that it is not you. And that is irritating.

I think his point is that the person working the CR does not understand Chinese, therefore the CR operates without any understanding of Chinese, in other words the Chinese room has no intentional states.

I am not sure what you think his argument is.
I don't know if you are just yanking my chain or if you took exception at the beginning and now just can't find a way out.
What do you think I am yanking your chain about? Are you saying that I really do know what you or Searle mean by "understand" as it relates to the CR argument?
The Turing test is proposed to ostensibly test for AI.
The Turing test is specifically to answer the question 'Can machines think?'
Searle sets out to rebut the Turing Test.
Searle's purpose is to rebut AI.
As I have already agreed. More than once. But what we are disagreeing about is the precise argument he is using to rebut the proposition that machines can think.
This isn't controversial, and Searle has not stepped forward to complain about the many millions of web pages and sources that link the Chinese Room to consciousness. He has never made a point that his argument is being misapplied. There's no reason to think his argument has been misapplied.
Searle has restated his thesis many times in interviews and other articles and the argument is always about understanding and intentionality.

He is addressing the issue of whether machines can think but leaving completely aside the matter of whether they can feel.
 
The Turing test is specifically to answer the question 'Can machines think?'
Thanks. I know, I know, that seems a cheap rhetorical device, but in fact that is all we are talking about. That's the end-all, be-all of the discussion.

Understanding, consciousness, thinking, sentience, awareness, etc. are not concretely defined terms. We don't really know exactly what these things are. We use these words as labels for abstract concepts that we have only a rough idea of.

He is addressing the issue of whether machines can think but leaving completely aside the matter of whether they can feel.
"Feel"? Who said anything about "feel"?

FTR: I actually do think that to "feel" is important to consciousness AND understanding. Meaning, IMO, very likely has an emotional component.

But that is beside the point. You've tried to make the trivial significant; why, I simply do not know. You've tried to make concepts that have no concrete lines exclusive and discrete. Or at least you are trying to force me to prove that there are no concrete lines there. I think that could be done, but it is so esoteric and so trivial as to be pointless for the purpose of this discussion. It all depends on how you define your terms. Consciousness isn't exclusive of understanding and vice versa. You are trying to create a false dichotomy: either I demonstrate that consciousness and understanding are exactly equivalent, or I accept that they are mutually exclusive.

Sorry, no.
 
