
Are You Conscious?

Are you conscious?

  • Of course, what a stupid question: 89 votes (61.8%)
  • Maybe: 40 votes (27.8%)
  • No: 15 votes (10.4%)

Total voters: 144
No they don't.

Stanford article said:
An error which, unfortunately, is common in modern writing on computability and the brain is to hold that Turing's results somehow entail that the brain, and indeed any biological or physical system whatever, can be simulated by a Turing machine.

So this statement has nothing to do with Pixy's assertion that the brain can be simulated by a Turing machine?
 
Define "work".

It doesn't do what it's supposed to. It doesn't play the music.

It will, however, perform the computation correctly. So in a Turing sense, it will work just fine. The simulated computer will produce the "correct" result, but because it runs hundreds of times slower, it will be unworkable.

So is consciousness time dependent or not? If it is - if it is necessary for consciousness to relate directly to the real world in a time dependent fashion - then it is not a pure Turing phenomenon.

Is it reasonable to wonder whether something that evolved to cope with the real world might not, in fact, work according to an abstraction that doesn't deal with time at all? Or is it conceivable that the time element to the brain is critical for consciousness?

(I don't intend to comment further on Rocketdodger's relativity diversion, which is one of the silliest things I've seen on this, or indeed any other subject).
 
Okay, cool. I think that's as good a response as anyone can give on this subject at the moment.

However, there are some shortcomings to this scheme: the first is that inputs into a system are sensory only if the given system has subjective sensibility.

I am not sure I follow you here -- are you suggesting that unconscious animals do not register or act on sensory input in any fashion?

If the entity in question is not conscious, then its responses to inputs/stimuli are no more sensory than a mousetrap having its trigger tripped. In order to qualify as conscious, the system in question has to experience internal/external stimuli as qualia.

As you've already surmised from my line of argument so far, I think that the computational architecture of the brain serves as the systemic constraint that organizes qualia into an internal model of the world relevant to the subject's survival. However, the actual qualia themselves are a product of the brain's biophysics.

The second is that symbols only take on the force of being symbols if there is a conscious subject associating those symbols with meaning(s).

Sorry, I was using the term "symbol" in a Shannon information theory sense. I probably should have just used the term "output", as the nervous system processes information whether it possesses what we call "consciousness" or not.

That's a major reason for my objection to thinking of consciousness purely in IP [information-processing] terms.

So we're still left with having to explain the whole subjective aspect of the issue [i.e. consciousness]: What is it in physical terms, and what are the sufficient conditions for it?

I think describing it in physical terms will be useful to the same degree that describing any moderately complex computational process (an algorithm for simulated annealing or a forward chaining expert system or whatever) in terms of what is happening at a transistor by transistor level on my laptop is -- OK for reverse engineering if that is all you have, not so useful once we figure out what is happening.

Going back to the generator analogy:

Let's say that a 16th-century tinkerer was introduced to an early 20th-century hand-cranked dynamo wired to an incandescent bulb, without any introduction or explanation. With no understanding of the physical principles underlying its design [such as the role of the magnet and electrical coil], he goes on to build a replica that emulates the structure and moving parts perfectly, but it does not generate any electrical power. He has to know what the appropriate materials are in order to build a physically efficacious reproduction. If the tinkerer is thinking of the device purely in the mechanical terms he's familiar with and lacks any concept of electricity [or worse, tacitly rejects any suggestion of a 'mysterious' energy he has no understanding of] then he will be forever stuck in the mud -- his efforts will go nowhere.

AI researchers of today are basically doing the same thing with regard to the brain. Many desire to reproduce a product of brain activity [consciousness] but they don't really have any idea of HOW the brain produces it, or even what it is. So they just emulate the brain's computational architecture [since that's something they feel they have a pretty good technical grasp of] and completely disregard any need to understand the underlying physics of the brain. This is a dire mistake.

What I'm objecting to is the assertion that's frequently made here that we already have a sufficient answer. We most assuredly don't.

I argue that we have what we need to find a sufficient answer, and that we will not need to describe the answer in terms of the four fundamental forces any more than we have to for any other biological process.

The thing is that we can at least describe those biological processes in physical terms if need be [for instance, we're gaining an ever better understanding of the physics of photosynthesis]. We don't even know how to describe consciousness in biological terms yet.

The problem regarding consciousness is twofold. The obvious issue is epistemic; we don't know how we can identify consciousness unequivocally in entities other than ourselves, nor do we have access to mental states other than our own. The second problem is ontological; we straight-up don't know what consciousness is or how it fits into the larger ontological frame of our physical sciences. These two limitations combine to make consciousness a very tricky scientific and philosophical problem. IMO, just chalking it all up to "computation" and calling it a day is tantamount to rolling over in defeat.

Physically, the difference is shown as the varying frequencies of the brain's EM activity. Each frequency range is correlated with a particular conscious state, or lack thereof.

Are you referring to EEG related stuff? That is a very crude diagnostic indeed.

Unfortunately, EEGs, MEGs, and other brain imaging techniques, combined with the self-reports of human subjects, are the best avenue we have in the scientific study of consciousness. We haven't gleaned nearly enough from this yet to devise means of reproducing consciousness artificially. Without a rigorous scientific theory of consciousness on par with the criteria I listed, synthetic consciousness is just a pipe dream.

Every single one of those biological mechanisms -- [1] membrane potentials, [2] polarization, [3] signal transduction, etc. -- all of them utilize EMF interactions.

Yes, at the level of individual atoms. When we are talking about processes happening at the cellular level we do not care about EMF interactions at all beyond what is necessary to explain the chemistry of what is happening.

But they are EMF interactions all the same and we can afford to abstract them into biological language precisely because we understand the role those interactions play in the 'public' observables of biochemical interaction. We can even isolate the molecules in question and observe them under controlled conditions outside of the context of a living cell. Physically speaking, we know what the cells are doing and, if need be, we can always express the cellular activity in terms of physics.

It's not nearly so simple with consciousness. Not only must we employ cumbersome imaging techniques on -live- subjects, those subjects must also be sapient enough to report their 'private' states. We can abstract those subjective states into psychological language but, unlike with other biological processes, we've not even the barest understanding of it at the level of physics.

Conscious experience is not a functional abstraction of what neural cells do but what they are actually physically producing.

We disagree. I see it as an artifact of the way our nervous system models, learns from, and adapts to our environment. I don't see individual nerve cells producing much beyond metabolic waste products, heat, and the odd depolarization event. The only thing that is interesting about them from the standpoint of consciousness is that their depolarization events can be controlled by other nerve cells, and that they can be connected in huge, ornate networks. Other than that, they suck at being antennae and they are way too hot and dense for quantum effects to start being interesting.

On the contrary, a lot of the cutting edge research in biophysics investigates the role that quantum level interactions play in biological functioning -- photosynthesis just being one of them. Even a quick search on this topic is bound to bring up articles on the subject.

[If you're interested in further reading on the subject I'd recommend a book my bio prof. recommended to me: The Rainbow and the Worm. I found it very tough to go through since much of the material in it is very technically dense, but it's still a fascinating read all the same :) ]

Even so, I'm sure you realize that the only way that we can falsify any claim to creating a conscious system is a scientific theory of consciousness that meets the criteria I listed earlier.

Not exactly -- I think that the only criterion we have right now to establish whether something is conscious is to interact with it and see if it acts like a conscious entity. I realize this is a very crude test, but it is likely to be as good as we can get for a while. I also think that establishing a scientific test based on the physical properties of neurons is the wrong way to go about it -- at the very least, I would focus more on their properties as information processors, and I would look more at how the networks in the brain behave as a whole rather than focusing on individual neurons.

I've no doubt that consciousness is the result of the global activity of the brain and that our cognition is a reflection of its computational architecture. Even so, the fact remains that conscious experience itself is a product of the brain's -physical- activity. We must understand how it reduces to biophysical terms before we can learn to instantiate it in artificial systems.

Unlike Chalmers, I'm not smugly content with thinking of consciousness as an insoluble philosophical conundrum. I think that science can make real inroads in this area. I also think that philosophy should be used as a tool to help us attack this problem, not as a means to rationalize it into an eternal mystery box.

I think that so far philosophy has made a hash of it -- too much thinking about the problem, not enough of it empirical. Bring on the science.

Scientific theorycrafting is an inherently philosophical endeavor. Shoddy philosophical thinking in science can be just as detrimental as lousy experimental design.

Every object inheres information and every process is processing information.

Yeah, quantum mechanics 101. Neurons do not just process information in that trivial sense, though -- they do it by summing their excitatory inputs, subtracting inhibitory inputs, and firing if their input passes a certain threshold. Totally different ball of wax.
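The summing-and-threshold logic described in that paragraph can be sketched in a few lines of Python. This is a deliberately toy model -- the weights and threshold are made up for illustration, and there is no timing, learning, or biophysics -- just the arithmetic of the description above:

```python
# Toy sketch of the summing-and-threshold behavior of a neuron.
# Weights and threshold are illustrative, not from any real model.

def neuron_fires(excitatory, inhibitory, threshold=1.0):
    """Sum excitatory inputs, subtract inhibitory ones, and fire
    if the net input reaches the threshold."""
    net_input = sum(excitatory) - sum(inhibitory)
    return net_input >= threshold

# Sub-threshold input: the cell stays quiet.
print(neuron_fires([0.6, 0.5], [0.3]))   # False (net 0.8 < 1.0)
# Enough excitation: the cell fires.
print(neuron_fires([0.6, 0.7], [0.2]))   # True (net 1.1 >= 1.0)
```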

But understanding -how- those input/outputs translate into sensation requires understanding neural activity in physical terms rather than just the computational.

For example, understanding the computational architecture of output display does not tell one how to build a working computer monitor. In instantiating those features into a hardware system there must be a physical understanding of the materials involved and how they should be integrated. Consciousness is no different.

It's the literal physical flipping of the computer hardware's switching mechanisms. The computer simulation of the power plant is just a representational tool. Like language, it only takes on symbolic significance in the minds of the humans who use the computer.

So all that switch flipping is still just a simulation even though the power plant would stop functioning (possibly catastrophically) if the computer running it crashed or was switched off?

My point is that the simulation is just a representation of the physical plant, not a magical looking-glass world. Literally speaking, it's merely a switching pattern on computer hardware that's integrated into the functioning of the actual power plant. There is no electrical power being generated "inside" the simulation; the power is being generated by the physical equipment of the plant. The computer simulation isn't any more a power plant than the written word "muffin" is an actual pastry.
 
Playing an MP3 file. Do it in X seconds and it's fine. Do it in Y and it doesn't work. I explained this to you already. I showed the difference between computation and what a computer can actually do, and how time dependence affects certain operations to the extent that they will not take place correctly if they don't complete within a given time.

"playing an mp3" is not an operation. A single step in an algorithm is an operation.

So, again -- please provide an example of an operation that can give a different result if completed in Y seconds instead of X seconds.

And we aren't asking about a different resulting algorithm. We are asking you for an operation -- say, the addition of two numbers -- that will be different depending on how long it takes.

Waiting...
 
"playing an mp3" is not an operation. A single step in an algorithm is an operation.

So, again -- please provide an example of an operation that can give a different result if completed in Y seconds instead of X seconds.

And we aren't asking about a different resulting algorithm. We are asking you for an operation -- say, the addition of two numbers -- that will be different depending on how long it takes.

Waiting...

I'm not interested in your formal Turing world if it doesn't explain what goes on in the human brain. Of course Turing operations are not time dependent. I said that already. What is the point of circular reasoning about non-time dependent steps?

For some reason you seem unwilling to address the difference between Turing operations, where as you state, the outcome is not time-dependent, and other things that take place controlled by computers and the human brain, where the outcome is time-dependent.

An algorithm which plays an MP3 is clearly time-dependent, and the state of the entire system will be different after a single operation, depending how long it takes to perform that operation. People who write device drivers or real-time programs have to allow for this.
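To make that point concrete, here is a toy Python model of deadline-driven playback (not a real audio API; all names and numbers are illustrative). The decoding computation is identical in both cases below; only whether each chunk finishes in time differs:

```python
# Toy model (not a real audio API) of why playback is deadline-driven:
# each decoded chunk must arrive before the previous one finishes playing.
def count_underruns(decode_times, chunk_duration):
    """decode_times: seconds spent decoding each chunk.
    Returns how many chunks missed their playback deadline."""
    clock = 0.0        # simulated wall-clock time
    deadline = None    # when the DAC runs out of buffered audio
    underruns = 0
    for t in decode_times:
        clock += t                  # decoding consumes real time
        if deadline is None:
            deadline = clock        # playback starts once the first chunk is ready
        elif clock > deadline:
            underruns += 1          # buffer drained: an audible glitch
            deadline = clock        # playback restarts from here
        deadline += chunk_duration  # this chunk buys one chunk's worth of audio
    return underruns

print(count_underruns([0.01] * 3, 0.02))   # 0: decoding keeps ahead of playback
print(count_underruns([0.05] * 3, 0.02))   # 2: same computation, too slow to work
```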
 
Tell it to Stanford university. They very clearly, in the article I quoted, point out the mistakes in the computational view, explain why they are mistakes, explain what Church-Turing actually says, and so on, etc etc etc. You can continue to bluster about it, but the article is quite clear.

Alright, if you want to be made out to be a fool, I will play along -- let's look at what the article actually says:

Stanford article said:
Thesis M:
Whatever can be calculated by a machine (working on finite data in accordance with a finite program of instructions) is Turing-machine-computable.



Thesis M itself admits of two interpretations, according to whether the phrase "can be generated by a machine" is taken in the narrow, this-worldly, sense of "can be generated by a machine that conforms to the physical laws (if not to the resource constraints) of the actual world", or in a wide sense that abstracts from the issue of whether or not the notional machine in question could exist in the actual world. Under the latter interpretation, thesis M is false. It is straightforward to describe notional machines, or ‘hypercomputers’ ( Copeland and Proudfoot (1999a)) that generate functions not Turing-machine-computable (see e.g. Abramson (1971), Copeland (2000), Copeland and Proudfoot (2000), Stewart (1991)). It is an open empirical question whether or not the narrow this-worldly version of thesis M is true. Speculation that there may be physical processes -- and so, potentially, machine-operations -- whose behaviour conforms to functions not computable by Turing machine stretches back over at least five decades; see, for example, da Costa and Doria (1991), (1994), Doyle (1982), Geroch and Hartle (1986), Hogarth (1994), Kreisel (1967), (1974), (1982), Pour-El and Richards (1979), (1981), Scarpellini (1963), Siegelmann and Sontag (1994), and Stannett (1990). (Copeland and Sylvan (1999) is a survey; see also Copeland and Proudfoot (1999b).)

First, note that right off the bat there is a disclaimer -- "two interpretations" of M. The one they don't even discuss is the only relevant one, the machine that "conforms to the physical laws (if not to the resource constraints) of the actual world", while the other is merely taken "IN A WIDE SENSE THAT ABSTRACTS FROM THE ISSUE OF WHETHER OR NOT THE NOTIONAL MACHINE IN QUESTION COULD EXIST IN THE ACTUAL WORLD". Hmm -- red flags, anyone? And then they state that only "under the latter interpretation, thesis M is false."

Really, westprog? This is the best you can do? You link an article that discusses how the CT thesis (or the informal version M above) breaks down and doesn't work when magic is invoked?
 
I'm not interested in your formal Turing world if it doesn't explain what goes on in the human brain. Of course Turing operations are not time dependent. I said that already. What is the point of circular reasoning about non-time dependent steps?

For some reason you seem unwilling to address the difference between Turing operations, where as you state, the outcome is not time-dependent, and other things that take place controlled by computers and the human brain, where the outcome is time-dependent.

An algorithm which plays an MP3 is clearly time-dependent, and the state of the entire system will be different after a single operation, depending how long it takes to perform that operation. People who write device drivers or real-time programs have to allow for this.

I have clearly explained to you that the only reason real systems are time dependent is due to the fact that algorithms are order dependent -- which is a property of all algorithms, including those run on Turing machines.

Do you disagree with this? Can you provide an example of an operation that is time dependent in such a way that this time dependence doesn't reduce to order dependence?
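The distinction being claimed here can be illustrated with a small Python sketch (a toy example, not a proof): injecting arbitrary delays between the operations of a pure algorithm leaves the result untouched, while changing their order does not.

```python
import time

# Sketch of the claim above: for a pure algorithm, only the ORDER of the
# operations affects the result; inserting arbitrary delays between them
# changes nothing about the output, only the wall-clock time taken.
def run(program, state):
    for op in program:
        state = op(state)
    return state

program = [lambda s: s + 3, lambda s: s * 2]

fast = run(program, 1)                       # (1 + 3) * 2 = 8

# The same operations with a delay injected before each one:
slowed = [lambda s, op=op: (time.sleep(0.001), op(s))[1] for op in program]
slow = run(slowed, 1)

print(fast == slow)                  # True: step duration did not change the result
print(run(list(reversed(program)), 1))  # 5: order, by contrast, does matter
```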
 
An algorithm which plays an MP3 is clearly time-dependent, and the state of the entire system will be different after a single operation, depending how long it takes to perform that operation. People who write device drivers or real-time programs have to allow for this.

A time-dependent MP3 player can be implemented by a Turing machine, as long as it is modified to output pairs of (timestamp, audio).

To implement a real, physical, MP3 player, a hardware device is needed with a proper real-time clock. When the timestamp matches the clock, the audio signal is sent to the speaker.
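That division of labor can be sketched in Python (all names are illustrative): `decode` is the pure, untimed computation emitting (timestamp, sample) pairs, and `play` is the small time-dependent shim that watches the clock.

```python
# Sketch of the split described above: a pure computation produces
# (timestamp, sample) pairs; a trivial clock-watching loop plays them.
def decode(frames, frame_duration=0.026):
    """Pure function of its input: no clock anywhere."""
    return [(i * frame_duration, frame) for i, frame in enumerate(frames)]

def play(schedule, clock, emit):
    """The only time-dependent part: wait for each timestamp, then emit."""
    for timestamp, sample in schedule:
        while clock() < timestamp:
            pass                      # a real driver would sleep, not spin
        emit(sample)

# With real hardware this would be something like (hypothetical names):
#   play(decode(frames), time.monotonic, dac.write)
```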
 
Not a Turing machine at all, in fact.

Yes, it is both more (in the sense that it has input and output) and less (in the sense of not having infinite memory) than a Turing machine. So?

The point I was making was that a Turing machine cannot, in principle, interact with the world in real time. A computer can. It can do things that a Turing machine cannot.
Of course.

This is important when we're thinking about whether consciousness is necessarily equivalent to the operation of a pure Turing machine. Pixy still seems to be asserting that it is - and that Church-Turing proves this. I've provided contrary references.
Pixy seems to be arguing that consciousness can in principle be implemented on a Turing machine (frankly, I don't care about that argument), and also that there is no good reason we cannot implement it on a physical computer of sufficient capacity (which I agree with).

If you think the UTM side of things is implausible, then you must really strongly object to Digital Physics.
 
I've no doubt that consciousness is the result of the global activity of the brain and that our cognition is a reflection of its computational architecture. Even so, the fact remains that conscious experience itself is a product of the brain's -physical- activity. We must understand how it reduces to biophysical terms before we can learn to instantiate it in artificial systems.

How can you ever hope to understand what physical activity is required to produce consciousness ?

Sure, you can examine somebody's brain, but how can you be sure that person is not a p-zombie, due to a genetic defect ?
 
How can you ever hope to understand what physical activity is required to produce consciousness ?

Sure, you can examine somebody's brain, but how can you be sure that person is not a p-zombie, due to a genetic defect ?
The whole concept of a p-zombie is incoherent anyway -- what does it even mean for something to act like a human under every possible test but not have consciousness?
 
The whole concept of a p-zombie is incoherent anyway -- what does it even mean for something to act like a human under every possible test but not have consciousness?

I think the only scenario in which that would be anywhere near plausible would be to have some animatronic device [complete with built-in cameras & sound feed] remotely controlled by a conscious human. A casual passerby might possibly be fooled into believing that the robotic avatar is conscious when it's really just a puppet :D
 
The whole concept of a p-zombie is incoherent anyway -- what does it even mean for something to act like a human under every possible test but not have consciousness?

I agree. But if someone claims that functional behavior alone is not sufficient to determine consciousness, but that a special physical aspect must be present, then this position must also allow undetectable p-zombies.

The problem with that position is that you don't know a priori which physical aspects matter, so you can't say who's a p-zombie and who has real consciousness.

If you can't tell who's a p-zombie, then you can't determine which physical aspects are important.

From this position, progress is impossible.
 
A time-dependent MP3 player can be implemented by a Turing machine, as long as it is modified to output pairs of (timestamp, audio).

To implement a real, physical, MP3 player, a hardware device is needed with a proper real-time clock. When the timestamp matches the clock, the audio signal is sent to the speaker.

It can't be implemented on a Turing machine unless the Turing machine is time dependent. IOW, not a Turing machine.

Of course you can add bells and whistles to a Turing machine to make it do useful things. This makes it no longer a Turing machine. It's just an abstraction to think about computing. It's not an actual machine.
 
Alright, if you want to be made out to be a fool, I will play along -- let's look at what the article actually says:



First, note that right off the bat there is a disclaimer -- "two interpretations" of M. The one they don't even discuss is the only relevant one, the machine that "conforms to the physical laws (if not to the resource constraints) of the actual world", while the other is merely taken "IN A WIDE SENSE THAT ABSTRACTS FROM THE ISSUE OF WHETHER OR NOT THE NOTIONAL MACHINE IN QUESTION COULD EXIST IN THE ACTUAL WORLD". Hmm -- red flags, anyone? And then they state that only "under the latter interpretation, thesis M is false."

Really, westprog? This is the best you can do? You link an article that discusses how the CT thesis (or the informal version M above) breaks down and doesn't work when magic is invoked?

The article is clear and the extract I posted speaks for itself. There is no "informal" version. There is Church-Turing and there are other theories which have been developed from it.

"Informal version", yeah. I recommend that anyone interested in what Church-Turing actually says, and what is actually proven, read the article.
 
I have clearly explained to you that the only reason real systems are time dependent is due to the fact that algorithms are order dependent -- which is a property of all algorithms, including those run on Turing machines.

Do you disagree with this? Can you provide an example of an operation that is time dependent in such a way that this time dependence doesn't reduce to order dependence?

Reading the contents of a device data register.
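As a sketch of why that example is genuinely time dependent: a device data register is updated by the hardware's own clock, independently of the program's instruction stream, so what you read depends on when you read it. (`DeviceRegister` and its interface here are a hypothetical toy model, not a real driver API.)

```python
# Toy model of a device data register: its value advances on the device's
# own clock, whether or not the CPU is looking. 'DeviceRegister' is a
# hypothetical stand-in, not a real driver API.
class DeviceRegister:
    def __init__(self, clock_ticks):
        self._clock_ticks = clock_ticks   # hardware counter, independent of the program

    def read(self):
        # e.g. a free-running 16-bit timer register
        return self._clock_ticks() & 0xFFFF

# Two back-to-back reads, identical in program order, still differ
# because real time passed between them:
ticks = iter([100, 107])                  # hardware counter values over time
reg = DeviceRegister(lambda: next(ticks))
print(reg.read(), reg.read())             # 100 107
```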
 
The article is clear and the extract I posted speaks for itself. There is no "informal" version. There is Church-Turing and there are other theories which have been developed from it.

"Informal version", yeah. I recommend that anyone interested in what Church-Turing actually says, and what is actually proven, read the article.

The extract I posted also speaks for itself -- they explicitly admit that their criticisms of the (mis)use of the CT thesis and thesis M are due to notional machines that do not even comply with the physical laws of the universe.

In other words, since nobody has proven that the mind is constrained by the laws of the universe, it is a fallacy to suppose that the mechanisms of the mind should follow the laws of the universe.

Yeah ...
 
It can't be implemented on a Turing machine unless the Turing machine is time dependent. IOW, not a Turing machine.

Of course you can add bells and whistles to a Turing machine to make it do useful things. This makes it no longer a Turing machine. It's just an abstraction to think about computing. It's not an actual machine.

Obviously, if you want to keep track of a physical clock, you'll need a physical implementation, one way or the other.

The trick is that you can make the time dependent part really small and simple, so nobody can claim that it has consciousness. The Turing machine does all the hard work by producing the (timestamp, value) pairs.
 
I agree. But if someone claims that functional behavior alone is not sufficient to determine consciousness, but that a special physical aspect must be present, then this position must also allow undetectable p-zombies.

The problem with that position is that you don't know a-priori which physical aspects matter, so you can't say who's a p-zombie, and who has real consciousness.

If you can't tell who's a p-zombie, then you can't determine which physical aspects are important.

From this position, progress is impossible.

A couple things:

[1] - The main point in trying to understand what physically constitutes consciousness [i.e. the active capacity for subjective experience] is so that we can know how to produce it in artificial systems. With a scientific theory meeting the criteria I mentioned earlier, we would not only know how to create a conscious entity artificially, we would also be able to specify the quality of its sensations given particular physical inputs.


[2] - Even though we do not know, in physical terms, what consciousness is, as I mentioned earlier I think that there are certain behaviors that non-conscious agencies cannot reproduce, thus ruling out p-zombies in principle. I also suspect that living systems may be required to support consciousness [by "living" I mean an autopoietic system that spontaneously & adaptively maintains itself against thermodynamic equilibrium]. It seems highly unlikely that non-conscious systems could meet the necessary behavioral criteria unless they are at least 'alive' by the above definition.
 
