• Quick note - the problem with YouTube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems let me know.

Are You Conscious?

Are you conscious?

  • Of course, what a stupid question

    Votes: 89 61.8%
  • Maybe

    Votes: 40 27.8%
  • No

    Votes: 15 10.4%

  • Total voters
    144
Alright, if you want to be made out to be a fool, I will play along -- let's look at what the article actually says:



First, note that right off the bat there is a disclaimer -- "two interpretations" of M, one of which they don't even discuss, yet it is the only relevant one: the machine "conforms to the physical laws of the actual world." The other is merely "IN A WIDE SENSE THAT ABSTRACTS FROM THE ISSUE OF WHETHER OR NOT THE NOTIONAL MACHINE IN QUESTION COULD EXIST IN THE ACTUAL WORLD." Hmm -- red flags, anyone? And then they state that only "under the latter interpretation, thesis M is false."

Really, westprog? This is the best you can do? You link an article that discusses how the CT thesis (or the informal version M above) breaks down and doesn't work when magic is invoked?

They state that it's an open empirical question whether M is true. IOW, M has not been confirmed. Note that the "only" which I highlighted is not part of the original quote. One has to be careful in reading these things.

And referring to M as if it were equivalent to Church-Turing is simply wrong. Stating that Church-Turing implies things that are implied only by M is simply wrong. Stating that M is demonstrably true is simply wrong.

The article is quite clear in showing how advocates of a particular AI viewpoint have misused Church-Turing. That's why a significant part of the article deals with the ways in which CT has been misinterpreted. One might hope that, now that this has been explained in detail, the persistent claims that CT proves that Turing machines are sufficient to produce consciousness would be abandoned. Of course this won't happen. At least now we have a BS-marker.
 
No, it doesn't. If you don't collect a typed character before the next character is typed, it is lost. That is not order dependence. That is time dependence.

It is however trivial to modify the Turing machine so all output data is collected in a sequential log file.
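
For concreteness, that modification might look something like this (an illustrative Python sketch; the `step` function and tape encoding are invented for the example, not any standard TM formalism):

```python
# Illustrative sketch: run an abstract machine and append every output,
# in order, to a sequential log. Nothing is lost if the consumer only
# reads the log later.

def run_with_log(step, tape, max_steps=10_000):
    """Run the machine, collecting every output symbol in order."""
    log = []                       # the sequential log, kept in memory here
    state = "start"
    for _ in range(max_steps):
        state, tape, output = step(state, tape)
        if output is not None:
            log.append(output)     # order is preserved; no symbol is dropped
        if state == "halt":
            break
    return log

# Toy machine: emit each tape symbol, then halt.
def echo_step(state, tape):
    if not tape:
        return "halt", tape, None
    return "run", tape[1:], tape[0]

print(run_with_log(echo_step, list("abc")))   # ['a', 'b', 'c']
```

The point is only that order dependence comes for free; the log can be drained at any speed.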

I don't know why you are pretending that there is no distinction between Turing- and real-time programming.

I think everybody is well aware of the difference. A Turing machine is an abstract description, much like a differential equation can be an abstract description of an airflow. It is a mathematical tool. A TM doesn't even have a concept of time.

The point is that this difference is not very significant. You can take any Turing machine, and mechanically transform it into a physical representation. If the physical representation is too slow, just make it faster. If it's too fast, make it wait until it synchronizes with an external clock.
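
A minimal sketch of that clock synchronization, assuming a hypothetical `step` callable standing in for one transition of the physical machine (it returns the next state, or None when halted):

```python
import time

# Pace an abstract machine against an external clock: if a step finishes
# early, sleep until the next tick; if it finishes late, the loop simply
# runs behind -- the "too slow" case, which only faster hardware fixes.

def run_in_real_time(step, state, tick_seconds=0.01, max_steps=1000):
    next_tick = time.monotonic()
    for _ in range(max_steps):
        state = step(state)
        if state is None:            # machine halted
            break
        next_tick += tick_seconds
        delay = next_tick - time.monotonic()
        if delay > 0:
            time.sleep(delay)        # too fast: wait for the external clock
    return state
```

Nothing about the machine's transition rule changes; the clock is bolted on around it.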
 
A time-dependent MP3 player can be implemented by a Turing machine, as long as it is modified to output pairs of (timestamp, audio).

To implement a real, physical MP3 player, a hardware device with a proper real-time clock is needed. When the timestamp matches the clock, the audio signal is sent to the speaker.

There's a word there which implies real-time dependence in the computer.

A Turing machine cannot play MP3's. However, computers can because they have clocks built in. Indeed, a clock of some kind is pretty much essential on a computer, because the hardware needs it.
 
A Turing machine cannot play MP3's. However, computers can because they have clocks built in. Indeed, a clock of some kind is pretty much essential on a computer, because the hardware needs it.

Of course a TM cannot play MP3's, in the sense that 'play' means that it produces audible signals.

All a TM can do is convert a representation of an MP3 song into another representation.

Likewise, a TM could be designed where you feed it an MP3 segment containing the spoken words: "what do you think of Dennett's opinion of qualia?", and after a considerable amount of processing, the TM would then produce another MP3 file containing a long spoken reply, where the TM explains it doesn't agree with Dennett, and why. You can copy this MP3 file to your computer, and play it. This process could be repeated a number of times, so you could have a long discussion about the topic.

This way, the "essential clock" is moved to your computer recording and playing the MP3 file, basically reducing the clock to a trivial part of the whole system.

Inside the TM, there would be no physical clock.

If, based on your discussion about qualia with the TM, you should decide that the machine exhibits real consciousness, which part of the system do you think is the most important? The dynamic state of the TM, or the clock in your computer?
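
The division of labour described in this post can be sketched as follows (illustrative Python, with byte strings standing in for MP3 data; the reply logic is a toy placeholder, of course, not a claim about how such a TM would actually work):

```python
# The "TM" is a pure, clock-free transformation from one representation
# to another. All timing lives in the computer that records and plays
# the files; no clock exists inside the transformation itself.

def tm_reply(question_mp3):
    """Clock-free step: input representation -> output representation."""
    question = question_mp3.decode()
    if "Dennett" in question:
        return b"I don't agree with Dennett, because ..."
    return b"Could you rephrase that?"

# The surrounding computer -- the only part with a clock -- just
# shuttles files back and forth:
question = b"what do you think of Dennett's opinion of qualia?"
answer = tm_reply(question)      # may take arbitrarily long; no clock inside
print(answer)                    # b"I don't agree with Dennett, because ..."
```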
 
It is however trivial to modify the Turing machine so all output data is collected in a sequential log file.

Trivially?

I think everybody is well aware of the difference. A Turing machine is an abstract description, much like a differential equation can be an abstract description of an airflow. It is a mathematical tool. A TM doesn't even have a concept of time.

The point is that this difference is not very significant. You can take any Turing machine, and mechanically transform it into a physical representation. If the physical representation is too slow, just make it faster. If it's too fast, make it wait until it synchronizes with an external clock.

There's a reason why I gave that lengthy history of computing. It used to be that most programs were dumped onto the computer and left to run. Then the output would be collected. You really didn't know when something would happen. Jobs would be run one after another.

Taking that Turing model and producing machines that would have multiple interacting processes, that would have guaranteed response times - interrupts, semaphores, process control, data monitoring, task priority - these were big, big problems that took a lot of very smart people a long time to solve. They were problems where the Turing model of the computer was simply not applicable. It wasn't the right way to think about it.

So there were computers that implemented Turing machines that could not run real time programs. Then there were other computers built with additional capacity that could.

What is the right way to think about the human brain? A single tape running through a reader, changing state as it goes? Or thousands of different processes interacting, reacting to real-time data?

It might be possible to create a Turing machine that can model what the brain does. (Though we don't know this). However, we do know that a Turing machine cannot, in principle, replace a brain, because a brain is real-time and a Turing machine is not.
 
What is the right way to think about the human brain? A single tape running through a reader, changing state as it goes? Or thousands of different processes interacting, reacting to real-time data?

Since one is a physical system, and the other a mathematical description of another, it makes no sense to compare them.

Now, if you take a robot, implementing a single tape running through a reader at high enough speeds, augmented with a number of simple sensors and actuators, and compare that to a human, it does make sense.
 
Since one is a physical system, and the other a mathematical description of another, it makes no sense to compare them.

But much of computer "science" involves the actualisation of Turing machines. It's been found that the capabilities of Turing machines match very closely to the capabilities of computers.

It's also the case that computers, programs, operating systems and microprocessors are all designed with an abstract model in mind, and that people program and use those machines according to the abstract model, not according to the actual physical computer. When you interact with JREF, you don't have to think about electricity connecting your computer with the JREF server. You work through a set of abstractions, and that's what makes the complexity manageable.

The reason that we consider Turing machines in the context of the brain is because it's been explicitly claimed that a physical machine that implements a Turing machine needs no other capability to emulate a human brain. Thus it is relevant to show that a human brain cannot, in principle, be replaced by a Turing machine. Even if the physical machine that replaced it were of arbitrary processing speed, it would not have that capability.

Now, if you take a robot, implementing a single tape running through a reader at high enough speeds, augmented with a number of simple sensors and actuators, and compare that to a human, it does make sense.
 
The reason that we consider Turing machines in the context of the brain is because it's been explicitly claimed that a physical machine that implements a Turing machine needs no other capability to emulate a human brain. Thus it is relevant to show that a human brain cannot, in principle, be replaced by a Turing machine. Even if the physical machine that replaced it were of arbitrary processing speed, it would not have that capability.

What about a physical machine, with some additional simple hardware interfaces, like a sound card, and a real-time clock?
 
What about a physical machine, with some additional simple hardware interfaces, like a sound card, and a real-time clock?

Yes, I think there's a lot of scope for investigating exactly what is physically required to emulate a human brain and human consciousness. I'm leaving simulation - which may be irrelevant - and replacement - which presents serious interface issues - on one side for a moment. Perhaps real time computing is sufficient - or neural networks. Perhaps we will need to emulate precisely what occurs in individual neurons, rather than treating them as simple switches.

The trouble is that abandoning the Turing machine model will be a big wrench for some of the AI crowd. Not the pragmatists, but the people who've relied on theory for a long, long time. Throw away the Turing machine and put in something new, and you have to abandon the TM theories.
 
Yes, I think there's a lot of scope for investigating exactly what is physically required to emulate a human brain and human consciousness.

My point is that it is impossible to know what is physically required, unless you can already define consciousness by functional behavior first.

If you don't know if your test subject is a p-zombie, you can't learn anything about the physics.
 
I.E. the physics of the brain is still relevant. It's one thing to rule out whether a given system is conscious or not, it's quite another to know how to reproduce it physically.

Sure, it's important, but there's no reason to only approach it from the bottom up.

Without knowing exactly how consciousness works, you can already try to build computers to emulate it, and judge if they have consciousness based on their behavior.

You're not getting it. A person can build a computer simulation to emulate the effects of gravity but they are not producing gravity, nor does the emulation demonstrate that they even understand what it is. The same goes for any physical entity/process, including consciousness.
 
I.E. the physics of the brain is still relevant. It's one thing to rule out whether a given system is conscious or not, it's quite another to know how to reproduce it physically.

Okay, so are you proposing that non-computable physics - i.e. some sort of physical infinity - is required for consciousness?

No, you cretinous blockhead. How many times must I go over this with you? I'm saying that computing a physical process is not reproducing said process and that consciousness [being a physical process] is not magically conjured up by emulating the computations of the brain.
 
You're not getting it. A person can build a computer simulation to emulate the effects of gravity but they are not producing gravity, nor does the emulation demonstrate that they even understand what it is. The same goes for any physical entity/process, including consciousness.

I'm not talking about a simulation, but making a physical computer, and hooking it up to a sound card, video input, and whatever sensors and actuators you'd like.

After you're done, you can ask this computer about its experience of the color red, and you get a response. Based on these responses, you judge that there's consciousness inside, or not.

Sure, inside the computer is a simulation running. If you show a red ball to its video camera, nothing inside the computer turns red. You could say it's simulating the experience of 'red'. But then again, I'm pretty sure nothing changes color in my brain either, so I'm also running a simulation.
 
You're not getting it. A person can build a computer simulation to emulate the effects of gravity but they are not producing gravity
Category error.
nor does the emulation demonstrate that they even understand what it is.
Red herring.

The same goes for any physical entity/process, including consciousness.
Non-sequitur.
 
My point is that it is impossible to know what is physically required, unless you can already define consciousness by functional behavior first.

If you don't know if your test subject is a p-zombie, you can't learn anything about the physics.

I am generally willing to agree with people who say that we don't know.
 
Of course a TM cannot play MP3's, in the sense that 'play' means that it produces audible signals.

All a TM can do is convert a representation of an MP3 song into another representation.

Likewise, a TM could be designed where you feed it an MP3 segment containing the spoken words: "what do you think of Dennett's opinion of qualia?", and after a considerable amount of processing, the TM would then produce another MP3 file containing a long spoken reply, where the TM explains it doesn't agree with Dennett, and why. You can copy this MP3 file to your computer, and play it. This process could be repeated a number of times, so you could have a long discussion about the topic.

This way, the "essential clock" is moved to your computer recording and playing the MP3 file, basically reducing the clock to a trivial part of the whole system.

Inside the TM, there would be no physical clock.

If, based on your discussion about qualia with the TM, you should decide that the machine exhibits real consciousness, which part of the system do you think is the most important? The dynamic state of the TM, or the clock in your computer?

Oh, the "if" game. If we end up with a Turing machine that exhibits consciousness, we can worry about the if.

What "if" we find out that we can produce consciousness in machines that respond in real-time, but not in machines that respond in a batch fashion, as above? Would that imply that time is an essential element of consciousness?

But until either of these scenarios occurs, we don't know.

What we do know is that the brain is not doing data processing - it's doing process control. We could assume that this is a minor difference, and that it works pretty much like a Turing machine anyway - or we can assume that what it does represents what it is.
 
I'm not talking about a simulation, but making a physical computer, and hooking it up to a sound card, video input, and whatever sensors and actuators you'd like.

After you're done, you can ask this computer about its experience of the color red, and you get a response. Based on these responses, you judge that there's consciousness inside, or not.

Sure, inside the computer is a simulation running. If you show a red ball to its video camera, nothing inside the computer turns red. You could say it's simulating the experience of 'red'. But then again, I'm pretty sure nothing changes color in my brain either, so I'm also running a simulation.
Once you agree that what I know -- which I suspect is more than I know about anything else -- about what we term as consciousness is nothing more than my public behavior, you are where Pixy & RD are. Thermostats, toasters, etc are also then conscious.

From what you know about the single point of consciousness you will ever be privy to, do you agree?

I apologise if I've mis-stated the Pixy/RD/etal position and assume I will be corrected.
 
Trivially?
Yes.

There's a reason why I gave that lengthy history of computing. It used to be that most programs were dumped onto the computer and left to run. Then the output would be collected. You really didn't know when something would happen. Jobs would be run one after another.
To drown us in irrelevancies?

Taking that Turing model and producing machines that would have multiple interacting processes, that would have guaranteed response times - interrupts, semaphores, process control, data monitoring, task priority - these were big, big problems that took a lot of very smart people a long time to solve.
It took a decade. Actually, the span from the introduction of IBM's first transistorised computers, the 7000 series, to the 360, which included everything you list and more, was less than four years.

They were problems where the Turing model of the computer was simply not applicable. It wasn't the right way to think about it.
Wrong.

So there were computers that implemented Turing machines that could not run real time programs. Then there were other computers built with additional capacity that could.
Wrong.

What is the right way to think about the human brain? A single tape running through a reader, changing state as it goes? Or thousands of different processes interacting, reacting to real-time data?
The two were proven to be mathematically equivalent decades ago. So either one, or any of a long list of other perspectives, also proven mathematically equivalent.

It might be possible to create a Turing machine that can model what the brain does.
Short of evidence that brain function depends on physical infinities of some sort (what rocketdodger described as "magic"), it is necessarily true.

(Though we don't know this).
It is, by reason of mathematical proof, the default position.

However, we do know that a Turing machine cannot, in principle, replace a brain, because a brain is real-time and a Turing machine is not.
Category error: Turing machines do not exist.

Red herring: The brain is not real-time. It is merely sufficiently fast for certain purposes. It manages this by a combination of ignoring most of the information presented to it, huge lookup tables that take decades to construct, and guesswork.
 
Once you agree that what I know -- which I suspect is more than I know about anything else -- about what we term as consciousness is nothing more than my public behavior, you are where Pixy & RD are. Thermostats, toasters, etc are also then conscious.

From what you know about the single point of consciousness you will ever be privy to, do you agree?

I apologise if I've mis-stated the Pixy/RD/etal position and assume I will be corrected.
Thermostats are not conscious. No self-reference.
 
You're not getting it. A person can build a computer simulation to emulate the effects of gravity but they are not producing gravity, nor does the emulation demonstrate that they even understand what it is. The same goes for any physical entity/process, including consciousness.

I'm not talking about a simulation, but making a physical computer, and hooking it up to a sound card, video input, and whatever sensors and actuators you'd like.

After you're done, you can ask this computer about its experience of the color red, and you get a response. Based on these responses, you judge that there's consciousness inside, or not.

As I don't like repeating myself:

Going back to the generator analogy:

Let's say that a 16th century tinkerer was introduced to an early 20th century hand-cranked dynamo wired to an incandescent bulb, without any introduction or explanation. With no understanding of the physical principles underlying its design [such as the role of the magnet and electrical coil], he goes on to build a replica that emulates the structure and moving parts perfectly, but it does not generate any electrical power. He has to know what the appropriate materials are in order to build a physically efficacious reproduction. If the tinkerer is thinking of the device purely in the mechanical terms he's familiar with and lacks any concept of electricity [or worse, tacitly rejects any suggestion of a 'mysterious' energy he has no understanding of], then he will be forever stuck in the mud -- his efforts will go nowhere.

AI researchers of today are basically doing the same thing with regard to the brain. Many desire to reproduce a product of brain activity [consciousness] but they don't really have any idea of HOW the brain produces it or even what it is. So they just emulate the brain's computational architecture [since that's something they feel they have a pretty good technical grasp of] and completely disregard any need to understand the underlying physics of the brain. This is a dire mistake.

Sure, inside the computer is a simulation running. If you show a red ball to its video camera, nothing inside the computer turns red. You could say it's simulating the experience of 'red'. But then again, I'm pretty sure nothing changes color in my brain either, so I'm also running a simulation.

Wait-- what? Do you honestly mean to tell me that you believe shining red light into a camera means that the color red is being seen? And just what the heck do you mean "I'm also running a simulation"? Just what are you supposed to be 'simulating', anyway? :confused:
 
