Explain consciousness to the layman.

Status
Not open for further replies.
I hate this red herring about whether a Turing machine is a general purpose computation machine or not. Off topic and irrelevant. Maybe, like qualia, we can sidestep it with a synonym. How about GPCM (general purpose computation machine)?

Thought experiment time!

1) We wire a sufficiently advanced GPCM to a spider after removing its brain, program it to do exactly what the spider's brain did in handling input and output, and confirm that the spider does everything a real spider does in every way.

2) We do the same thing with a human. We get the same result. It even spontaneously sings rhapsodically about how the subjective experience of redness must be somehow immaterial and incomputable, even though it was not specifically programmed to do that.

Can #2 happen? If not, why not? If it happened, would it be conscious?

To elaborate on what I think you are really asking:

It may be possible to merely "program" a GPCM to do exactly what a spider's brain does.

However, it is not possible to do so with anything even remotely as complex as a human brain. To create a GPCM that can do everything the human brain does requires figuring out how the brain works and doing the same kind of stuff in the GPCM.

Meaning, it would not be a simple list of inputs and corresponding outputs (a Chinese Room, to use the proper terminology). A "Chinese Room" can't really exist; it is a thought experiment created by someone who knew nothing about either computer science or neurobiology.

So the question then changes-- instead of "would a GPCM that simply matched all possible inputs to the proper outputs be conscious" it is now "would a GPCM that processes information in much the same way as the human brain be conscious?"

And that question, in my opinion, has a much more obvious answer. For example, if the body responds to your question "are you conscious" because the GPCM processes the auditory input, understands what the words mean, figures out the implications of those meanings using some kind of inference -- which entails some knowledge of self, since the word "you" is included in the question -- and formulates a response, then I would certainly call it conscious in some sense.
 
I'm aware that when people talk about replacing a brain with a GPCM, or computer, or artificial brain, they are thinking in terms of a machine that will actually allow the person to catch the ball. Such a machine would not be a pure computational device - it would be highly interactive. I consider it quite possible that some such device, able to control an actual human body, or a precise simulacrum of one, might be conscious where a pure simulation, running only on computer hardware, might not. This is not the computational view, however.

Such lies.

Let's play the same game we have been playing for years, eh? Whadda ya say?

You say "a program by itself can't catch a ball, it has to be hooked up to a body."

I say "what if the body was part of the program?"

You say "then what about the ball? How does the body catch the ball?"

I say "what if the ball was also part of the program?"

You say .........
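For what it's worth, the "what if the ball was also part of the program?" move can be made concrete with a toy closed-world simulation. Everything below (the function name, the one-dimensional physics, the numbers) is my own illustrative sketch, not anyone's actual proposal:

```python
def simulate_catch(ball_x, ball_vx, hand_x, hand_speed, dt=0.01, t_max=10.0):
    """Advance a one-dimensional world until the hand reaches the ball.

    Returns the catch time in seconds, or None if the hand never gets there.
    """
    t = 0.0
    while t < t_max:
        ball_x += ball_vx * dt  # the ball drifts along
        if hand_x < ball_x:     # the hand chases the ball
            hand_x = min(hand_x + hand_speed * dt, ball_x)
        else:
            hand_x = max(hand_x - hand_speed * dt, ball_x)
        if hand_x == ball_x:
            return t            # caught
        t += dt
    return None
```

The point of the sketch is only that, inside such a closed world, "catching" is just state evolution. Whether such a closed world tells us anything about real catching is, of course, exactly what the thread disputes.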
 
The only reason I harp on it, Mr. Scott, is because certain people play semantic games to avoid giving an honest answer to your question #2.

For example, I suspect a counter-argument could be something like "but a Turing machine can't account for real-time events, so if it was hooked up to the human body it couldn't be a Turing machine. So clearly computation alone isn't responsible for consciousness." That's just another way of stating that a GPCM has some aspects that an idealized Turing machine does not -- well, durr, because a GPCM is real and a Turing machine is not.

A false dichotomy. There are actual computers, and systems running on computers, which use the abstract Turing model. There are computers, and systems running on computers, which use a real-time, interactive model.

The distinction is not between abstract models and real-life implementations. It's between different models, and different implementations of those models. The real-time model is as abstract as the Turing model. For some tasks, such as calculating a list of prime numbers, the Turing model is entirely applicable. For other tasks, such as responding to the press of a key and displaying a corresponding symbol on the screen, a different model is used.

It's entirely wrong to suppose that one takes the pure, abstract model of the Turing machine, and that when it's instantiated in the real world, all the real-time features somehow appear as part of the design process. Real-time modelling is a necessary element in designing real-time systems.

To give a more concrete example - a programming language is an abstract model that we use to describe what we want a computer to do. If we want to perform a computation along the lines of the Turing model, we might use a language like Pascal. When we specify what we want the program to do in Pascal, we do not have a particular timing in mind. The first programs I wrote were typed on punched cards and handed in to a computer operator. Later on, I went to a cubby hole and collected the printed output. That's an example for which the Turing model is entirely appropriate. However, if we wish to open a valve five seconds after a pressure gauge registers a particular value, then we cannot do this in pure Pascal - there are no structures in the language for it. We need to extend the language, which can often be done with function libraries or operating system calls. There are some specialised programming languages which directly include real-time, interactive features, and real-time extensions are available for most modern programming languages.
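The two styles contrasted above can be sketched side by side. The sieve is a batch-style computation whose correctness is independent of timing; the valve routine's specification mentions time itself. Both function names are illustrative, not from any real library:

```python
# Batch, "Turing-style" computation: the answer is correct regardless of
# how long it takes or when it runs.
def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0], sieve[1] = False, False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

# Reactive style: the specification itself mentions time ("five seconds
# after the gauge registers the value"), so correctness is tied to events.
def open_valve_after(gauge_events, threshold, delay_s, act):
    """Scan (timestamp, pressure) readings; schedule act() at the deadline."""
    for ts, pressure in gauge_events:
        if pressure >= threshold:
            act(ts + delay_s)  # deadline derived from the event's own timestamp
            return True
    return False
```

In a real system the second routine would of course run against a live sensor rather than a pre-recorded list, which is precisely the open-ended, interactive element at issue.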

In the old days of computing, programming was taught to many people as if the Turing model was the only thing to be considered. Sometimes that led to a shock when the real-time, real world was encountered, and a device driver had to be written, or a program to control a water works. Nowadays that's less likely to be the case, because most programs are interactive and respond to user input. Asynchronous, event-driven programming is the rule, not the exception.

The use of the correct model to describe behaviour is part of science. If we have a model which doesn't describe or predict how a system behaves, we don't shrug and say "Of course it doesn't! A model is just a fantasy, and this is the real world!". We apply a new model which does describe the behaviour. If this model fits better, we are more inclined to use it. Obviously, the Turing model of computation does not describe the functionality of the human brain and nervous system. A real-time programming model is a much better fit, and provides at least the possibility of a computer brain.

I don't know why there is such an insistence that a deterministic, closed, timing-independent non-interacting model should be applied to a non-deterministic, open, time-dependent interactive system. I suspect it has far more to do with the history of AI than with the most appropriate approach.
 
You're claiming that something has not been shown. Your saying so doesn't make it true.

No, the failure to show evidence makes it true. If you can point to actual evidence for the assertion, please do so. That's real evidence, not Pixy's laughable circular redefinitions, not incredulity that consciousness isn't computational, not the deep feelings in your heart that it must be so. You are making a strong claim. It is necessary to justify it. I do not have to provide proof. You do.

I've shown in some detail, over a number of posts, that the actual behaviour of a human brain is not described by the Turing computational model. If you want to demonstrate the contrary, work away.
 
I've previously given the example of catching a ball as something that can't be done as a computational process, because it is not sufficient to accurately calculate the trajectory of the ball

The history of computing is riddled with corpses of claims of what computers would never do, e.g., Deep Blue defeating a world grandmaster at chess.

You are confusing quantitative with qualitative limitations.

Sufficiently fast computers would catch balls better than any human.

Sufficiently fast computers would be able to sufficiently emulate the human brain (and, btw, surpass it :eek:)

The real question is where the qualitative limitation is to computability of consciousness.
 
You REFUSED to explain yourself.

That's simply not the case. I've given far more detailed, intricate explanations than anyone else on the thread. In fact, I've been forced to present the case for the computational hypothesis because none of its supporters (except for possibly Pixy, whom I don't read) can be bothered to do so. Meanwhile most of the contrary arguments consist of nit-picking chasing after precise definitions and personal attacks.

Any objective observer can see how much of what I post is concerned with brains and computers, and how much of what some other people post is about me. Do the math.
 
The history of computing is riddled with corpses of claims of what computers would never do, e.g., Deep Blue defeating a world grandmaster at chess.

You are confusing quantitative with qualitative limitations.

Sufficiently fast computers would catch balls better than any human.

Sufficiently fast computers would be able to sufficiently emulate the human brain (and, btw, surpass it :eek:)

The real question is where the qualitative limitation is to computability of consciousness.

I am not making claims about computers. I'm making claims about the computational process. I suggest you read in detail my analysis of the difference between the Turing and real-time models.

I'm well aware that computers can perform real-time operations. In order to do so, they have to depart from the pure computational model. You may wonder why this is such an issue. It wouldn't be, if the claim for the computational model were modified.

The reason why this is an issue is that it's claimed that not only a computer mind controlling a human (or human like body) in real time would be conscious, but also a simulation of that system running as a non-interactive program, and that even if it took hundreds of years to simulate five minutes of human experience, nevertheless, the experience would be precisely the same.

The two claims are very different. Of course, in the absence of precise details about what is being claimed, I'm forced to summarise what I'm arguing against. If someone isn't attached to the Turing model of computation being necessary and sufficient for consciousness, then I'm probably not arguing with him at present.
 
Such lies.

I don't understand why it is necessary to discuss a fairly esoteric, abstract issue in a deliberately confrontational, insulting way. I'm fairly sure that you wouldn't talk to somebody face to face in this fashion. If you do so regularly, then I surmise that you'd be a regular recipient of a smack in the mouth.

It does nothing for your arguments, and puts forward the impression of immaturity and instability. It's not a new thing - it's the way you seem to like to dispute. After a while, there's no actual presentation of arguments, just personalised ranting, and I put you away for a few months.

If you want to present this kind of face to the world, I can't stop you doing so, but I suggest that you find some trustworthy person, show him some of these tirades, and ask objectively if that is an appropriate way to engage in civil discourse.

Let's play the same game we have been playing for years, eh? Whadda ya say?

You say "a program by itself can't catch a ball, it has to be hooked up to a body."

I say "what if the body was part of the program?"

You say "then what about the ball? How does the body catch the ball?"

I say "what if the ball was also part of the program?"

You say .........

And then you extend your closed system to encompass everything with which it interacts. Eventually, when you find that you've had to model everything in the universe, you might realise that an open model, which deals with asynchronous inputs, is a far better choice than a closed model which has to be extended to include one giant system encompassing every interaction.
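As a rough sketch of the contrast, here is what the open model looks like in code: the program never enumerates the world in advance, it just reacts to whatever events arrive on an input queue. The names and structure are illustrative only:

```python
from queue import Empty, Queue

def run_open(events, handlers, idle_limit=3):
    """React to whatever arrives on the queue; stop after a quiet spell.

    `events` holds (kind, payload) pairs; `handlers` maps kinds to callables.
    """
    misses, log = 0, []
    while misses < idle_limit:
        try:
            kind, payload = events.get(timeout=0.01)  # wait briefly for input
        except Empty:
            misses += 1  # nothing arrived in time
            continue
        misses = 0
        handler = handlers.get(kind)
        if handler:
            log.append(handler(payload))  # react to the event
    return log
```

Nothing in the loop needs a model of where the events come from - the environment can throw anything at it, at any time, which is the sense in which the model is "open."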
 
westprog said:
There are no* physical restrictions on computation.

This is just an outright lie. Stop lying.

If I were going to make a claim such as that, I'd be somewhat careful about my facts. Now, if you look at my post above, you will note a "*" next to the "no". Odd, that. What does it mean?

Well, if we go back to the original post, we see a footnote, dealing directly with the claim.

westprog said:
*Beyond the simple requirement of enough complexity in the system to reflect the computation. (It seems to be State The Obvious Day every day).

One might have thought that if a claim was so outrageous as to deserve a public accusation of dishonesty, that it would be appropriate to include the claim in full. This was not done. Anyone reading the clip from my post, and the response to it, might have formed the impression that the above assertion was all I had to say on it. Though if RD really wanted to hide the truth, he should have deleted the "*", which kinda gave it away. He'll know better next time.
 
And then you extend your closed system to encompass everything with which it interacts. Eventually, when you find that you've had to model everything in the universe, you might realise that an open model, which deals with asynchronous inputs, is a far better choice than a closed model which has to be extended to include one giant system encompassing every interaction.

That is actually very funny. Thanks for the comic relief.
 
No, the failure to show evidence makes it true.

That circles right back to what I said: you SAY that the evidence fails and we should just accept that even if we do have evidence that you are wrong.

You are making a strong claim.

Which claim? That consciousness is computational? Are you contending that this claim is stronger than saying, for instance, that consciousness is an actual object?

That's simply not the case.

Whenever I ask you why you think it doesn't work all you answer is "because the evidence fails" or something to that effect. That's not an explanation.
 
I've previously given the example of catching a ball as something that can't be done as a computational process, because it is not sufficient to accurately calculate the trajectory of the ball - a signal has to be sent soon enough that a hand can reach out and grab the ball. Clearly, a system that cannot guarantee that it will send the signal in time is not plug compatible. This is why I point out that the Turing model does not describe what the brain actually does.
Entirely false.

Time is just a co-ordinate. If you can encode the other three into the data for the computational model, you can certainly encode time.

If you don't understand, ask. If you don't agree, tough.
 
Time is just a co-ordinate. If you can encode the other three into the data for the computational model, you can certainly encode time.

If you couldn't, even A.I. as simple as that implemented in Pong wouldn't function.

EDIT: Or rather, it would still function, but you could implement it with an explicit time variable in order to calculate how fast the paddle needs to move to be where the ball will be. That would just be more complex than the game needs it to be. It would function pretty much indistinguishably, though - it'd just require a lot of unnecessary lines of code.
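Treating time as just another coordinate, the Pong case might look something like this minimal sketch (toy geometry, no wall bounces, invented names):

```python
def paddle_speed_needed(ball_x, ball_y, vx, vy, paddle_x, paddle_y):
    """Return the vertical speed the paddle needs to meet the ball in time."""
    t_arrival = (paddle_x - ball_x) / vx          # when the ball reaches the paddle's column
    y_at_arrival = ball_y + vy * t_arrival        # where the ball will be at that moment
    return (y_at_arrival - paddle_y) / t_arrival  # speed required to close the gap by then
```

Time appears here purely as a derived quantity inside the computation, which is the point being made: encoding it is no harder than encoding the spatial coordinates.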
 
Yeah. Westprog is basically denying the existence of the entire field of physics. All those formulas scientists, engineers, and technicians all over the world rely on every day? Don't exist.
 
This is a bit of a thread hijack, but I feel a certain amount of the preceding material is necessary for the discussion. In a new thread you'd just have similar people hashing out similar things again before the topic got started anyway.

All of these are very good things to think about. The problem is, human emotion gets in the way, and invariably people stop and cling to irrational positions.
This is going to elaborate on the idea of "convention" I touched on in the atheism-materialism thread. I can link if you'd like, but basically the universe doesn't care in the slightest whether we live or die, or whether the goop in our heads forms brains or bananas.

As self-aware patterns of neural activity, we've evolved a strong instinct for self-preservation. The problem is, we've also evolved the smarts to argue about how we define the self. There are many conventions people use, none of them are perfectly rational in any objective sense (not even yours), nor is any one useful all the time.

A common convention which works 90% of the time is material continuity. If you go brain-dead on the operating table for a minute and come back, it's nice to not be considered a different person. But that has problems because (like you mention) quantum thingummies are restating their spins or whatever every instant, and while that doesn't affect anything so far as we can tell, parts of your brain are in a slow churn of die-off and replacement (other parts just die), and that does. Plus there's the blasted Ship of Theseus paradox that muddies the water further.

Another convention, my preferred one, is pattern continuity. This certainly has problems, as our minds are stopped (which doesn't happen when you sleep, by the way: your mind keeps going, you just don't remember it) much more often than they get replaced. The aforementioned operating table, for instance. I'm still drawn to it, however, by the potential application of software version-control analogies.

Yet a third, which it seems you switched to when ditching the first, is... I don't know what to call it. But as long as there's still someone in the universe who can legitimately call himself you (by some standard; opinions wildly differ here), what happens to this particular you is inconsequential. This is no more rational than the other two: what if, to use the teleporter hypothetical, the person who steps through the In gate is not instantly disassembled but instantly copied, then slowly tortured to death for the amusement of the alien race who runs the teleporter? From your point of view it will be the same - the you leaving the Out gate would remember going through the In gate and nothing more. Would you still be so eager?


I like your second hypothetical situation better, because it stands a good chance of not being so hypothetical in a few decades' time. Assuming there were a method of whole brain emulation which could accurately replicate minds to your satisfaction, but doing so involved tearing the donor brain into tiny little bits of flesh examined under the microscope for their content, would you do it? After all, you've got less than a century left on Earth, but memento mori means little in silico.

I'm curious what the responses of the wider audience in this thread would be.

Personally I'd see the situation as similar to dying so that a close relative might live; a very close relative indeed. I'd do it eventually, but I'd wait until I had a good reason: terminal cancer or Alzheimer's or such.
 
I don't know why there is such an insistence that a deterministic, closed, timing-independent non-interacting model should be applied to a non-deterministic, open, time-dependent interactive system. I suspect it has far more to do with the history of AI than with the most appropriate approach.

The reason is that all computation can be reduced to the sort of stuff a Turing machine can do.

All computer science majors are required to take a class on it, actually. Mine was called "automata."

All computer science majors are supposed to learn certain things in that class: what an automaton is, what finite state automata are, what deterministic and non-deterministic finite state automata are, and in particular how every single possible computation that a human can even imagine is reducible to a series of steps that can be performed on an idealized Turing machine. The model is all-encompassing.
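To make "reducible to a series of steps on an idealized Turing machine" concrete, here is a toy single-tape simulator. The rule format and the little binary-increment machine are my own illustrative choices, not anything standardized:

```python
# Rule table: (state, symbol) -> (symbol_to_write, move, next_state).
# This toy machine increments a binary number written on the tape.
INCREMENT_RULES = {
    ("start", "0"): ("0", "R", "start"),  # scan right over the digits
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),  # hit the blank: turn around
    ("carry", "1"): ("0", "L", "carry"),  # propagate the carry leftwards
    ("carry", "0"): ("1", "R", "halt"),   # absorb the carry
    ("carry", "_"): ("1", "R", "halt"),   # carried past the leftmost digit
}

def run_tm(tape, rules, state="start", halt="halt", blank="_", max_steps=10_000):
    """Run a one-tape Turing machine and return the final tape contents."""
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        write, move, state = rules[(state, cells.get(head, blank))]
        cells[head] = write
        head += 1 if move == "R" else -1
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1)).strip(blank)
```

For example, `run_tm("1011", INCREMENT_RULES)` yields `"1100"` - eleven plus one. The claim in the coursework is that any computation at all can, in principle, be expressed as such a rule table; whether that says anything about consciousness is the question the thread is fighting over.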

Why don't you know all this?

The argument that you can't model a person catching a ball with a Turing machine whose program doesn't include a representation of the ball, because the Turing machine doesn't wear a baseball glove, and so cannot interact with a real ball -- and yes, that is exactly what your argument boils down to, no matter how much you try to blow smoke -- is just stupid. It is stupid because it is so obviously true, and it is stupid because it so obviously doesn't address any of the issues being discussed.
 
That circles right back to what I said: you SAY that the evidence fails and we should just accept that even if we do have evidence that you are wrong.

I think that the people proposing that consciousness is computational should present their evidence that this is so. If you wish to enumerate the posts on this thread where this has been done, I'll refer to them.*

Which claim? That consciousness is computational? Are you contending that this claim is stronger than saying, for instance, that consciousness is an actual object?

I dare say you could come up with stronger claims. The claim that consciousness is computational is undoubtedly stronger than my claim that consciousness is currently unexplained.

Naturally the claims that I believe X,Y and Z, and that I have a secret magical agenda can be made, but I can't help what people claim that I'm really thinking.

Whenever I ask you why you think it doesn't work all you answer is "because the evidence fails" or something to that effect. That's not an explanation.

I've given precise, very detailed analysis of why the computational model is inappropriate for the kind of control systems that are present in the brain. However, I really am not obliged to do so, any more than an atheist is obliged to disprove the existence of god. All I need to do when presented with the claim "Consciousness is computational" is to say "Evidence?". The claim that consciousness is unexplained is the default position. It's also a negative. I don't have to demonstrate it. You have to refute it.

It's relatively simple to disprove my contention that consciousness is unexplained. Just explain it.


*Pixy excepted, of course.
 
I don't understand why it is necessary to discuss a fairly esoteric, abstract issue in a deliberately confrontational, insulting way.

You don't allow it any other way, that's why.


And then you extend your closed system to encompass everything with which it interacts. Eventually, when you find that you've had to model everything in the universe, you might realise that an open model, which deals with asynchronous inputs, is a far better choice than a closed model which has to be extended to include one giant system encompassing every interaction.

Who cares about "better?"

Your contention is that it is impossible. This is simply false.
 