
Are You Conscious?

Are you conscious?

  • Of course, what a stupid question

    Votes: 89 61.8%
  • Maybe

    Votes: 40 27.8%
  • No

    Votes: 15 10.4%

  • Total voters
    144
I'm afraid I don't understand. I understand that computation, discussed in abstract terms, is time independent, but if a computer adds and multiplies a list of numbers it must do so following a set of rules, and it does so over a particular amount of time. It doesn't follow each rule simultaneously, however fast it carries out the steps; if it did so it wouldn't function. All physical examples of computation are time dependent. I don't see how this makes such physical examples of computation not be computation.

There's a difference between ordering and time dependence. It's part of the definition of a Turing machine that the steps be taken in order. It's not part of the definition that they have any particular interval between them.
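The point above -- that ordering matters but the interval between steps does not -- can be sketched with a toy Turing-machine-style simulator. This is a hypothetical illustration (the particular transition rule, flipping bits until a blank, is arbitrary); the final tape is identical whether or not arbitrary gaps are inserted between steps.

```python
import random
import time

def run(tape, delay=0.0):
    """Toy single-tape machine: flip each bit left to right, halt on blank.
    `delay` inserts an arbitrary pause between steps; it never changes the result."""
    cells = dict(enumerate(tape))  # sparse tape representation
    head, state = 0, "scan"
    while state != "halt":
        symbol = cells.get(head, " ")
        if symbol == " ":
            state = "halt"
        else:
            cells[head] = "1" if symbol == "0" else "0"
            head += 1
        if delay:
            time.sleep(delay * random.random())  # arbitrary gap between steps
    return "".join(cells[i] for i in sorted(cells) if cells[i] != " ")

fast = run("10110")
slow = run("10110", delay=0.01)
assert fast == slow == "01001"  # same steps, same order, same answer
```

Only the order of the steps is fixed by the machine's definition; the wall-clock spacing between them is not.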

How does that follow? The time dependence is not a fundamental feature of any sort of neural processing if we look at it abstractly. It is only important for the real world and real world modelling. For neural processing, things must occur in steps, but how is that different for most information processing?

But I don't necessarily accept that what goes on in the brain is time independent. A brain that ran ten times too fast or too slow would be useless in its interaction with the world. A brain is not an abstract data processing device. In practice, it is time dependent.

So would I, so would I.

You only say that because of your political and religious beliefs.

That is only because we work the Chinese Room argument in the abstract and forget the time factor. The slowness of a person doing the sorting is not the problem. The problem it shows is that simple syntax is not sufficient in and of itself to provide semantics.

But there really is another problem -- one for which it is unfortunate that he chose language as his example. Probably in all languages there are cases where syntax alone cannot be used to determine meaning, so if the room always produces a correct result, that tends to imply that there is semantic content in the room, though none is supplied. The thought experiment cheats a bit.




But we could imagine any number of situations in which information processing would only work properly if information is introduced at a particular step rather than at another step in the process -- which in the real world requires time dependence. That is the only sort of time dependence that neural processing requires. It exists in the real world, but you could potentially calculate all the bits of information independent of time as long as they occurred in the proper spatial arrangement and in the proper order, though I cannot conceptualize such a "thing".

In the case of a computer performing a calculation, we can introduce arbitrary gaps in the processing which will have no effect on the outcome. That's the basis behind the invention of the multi-tasking operating system - save the state and come back to it, and you can't tell the difference. You can't do that with a human brain.
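The save-the-state-and-come-back idea can be illustrated with a Python generator (a toy sketch, not a model of an operating system): the computation's entire state is preserved at each suspension point, so an arbitrarily long pause has no effect on the outcome.

```python
import time

def summer(numbers):
    """A running-total computation written as a generator: its whole state
    (locals plus position in the loop) is saved at every yield."""
    total = 0
    for n in numbers:
        total += n
        yield total  # suspension point: state is preserved here

g = summer([1, 2, 3, 4])
partial = next(g)              # run one step...
time.sleep(0.05)               # ...pause for an arbitrary interval...
results = [partial] + list(g)  # ...then resume; the outcome is unchanged
assert results == [1, 3, 6, 10]
```

The gap between steps is invisible to the computation itself, which is exactly the property being claimed for abstract computation.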

Nothing is a Turing machine, though, is it? A computer is not. Turing machines are abstractions useful in thought experiments. They don't describe actual physical machines that exist in space or work in time. The issue here, I thought, is whether or not a computer can do what the brain does. Saying that neural processing is not a Turing machine is fine, but since neither is a computer I'm not sure what difference that makes. Turing machines are not models of computers but of the abstract notion of computation.

I am not a believer that a Turing machine is anything more than an abstraction that we make up to make sense of computation. I don't think that two different instantiations of Turing machines are the same thing in any real sense. I don't see that because it might be possible to abstract some of the functionality of the human brain as a Turing machine that some other implementation of the same Turing machine would produce consciousness in the same way.

This appears to be what the computational theory claims. Indeed, if Rocketdodger's vision of eternal life in computer program form is to be realised, it must be true. I don't think it's true, and I certainly don't think that it's been proven beyond doubt.

Perhaps it would help me if you would tell me what you are actually arguing against with your above set of statements, because this sounds again like over-generalization of the abstract to the physical world. We know computation occurs in the physical world and that it occurs as an intrinsic property of functioning neurons, so any statement that tells me that computation is necessarily abstract or time-independent is wrong, or at the very least over-generalized.

Well, I don't think this. I don't think that the fact that we can produce a mathematical model of a physical system means that another system for which the same model applies is equivalent, because all mathematical models leave out information.
 
There's a difference between ordering and time dependence. It's part of the definition of a Turing machine that the steps be taken in order. It's not part of the definition that they have any particular interval between them.

Sure, but not really in the abstract, where real world constraints are removed. For neural processing we speak of time dependence because of the peculiar conditions under which this process occurs. If the information is not dealt with at the right time, the information disappears, so it cannot be dealt with at a later time.

I don't understand how that makes it not be computation. Is the summation of neurons and their firing not computation? If it is computation, how does a group of them summating and firing suddenly become not-computation? If it is not computation what do we call it?

If absolute time independence is part of the definition of computation, then there is something wrong with the definition of computation or computation is the wrong word to apply to the nervous system -- as I said before, I don't care what it's called, only what it does.

It certainly seems to me that summation of inputs and integration of this information fits into what we generally mean by using the word computation, or the words information processing, but if these are the wrong words, then what is the replacement?

Or does the 'theory of computation' leave out some forms of computation, so it isn't a complete theory?
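For what it's worth, the summation-and-firing notion discussed above can be sketched as an idealised threshold unit, in the McCulloch-Pitts style. This is an abstraction, not a claim about real neurons, and all the numbers are illustrative:

```python
def fires(inputs, weights, threshold):
    """Idealised neuron: weighted summation of inputs, fire if the total
    reaches threshold. A McCulloch-Pitts abstraction, not biology."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return total >= threshold

# Excitatory (positive) and inhibitory (negative) inputs summate:
assert fires([1, 1, 0], [0.6, 0.6, -1.0], 1.0) is True   # ~1.2 >= 1.0
assert fires([1, 1, 1], [0.6, 0.6, -1.0], 1.0) is False  # ~0.2 < 1.0
```

In this abstract form the unit is plainly "computing", and nothing about it depends on when the summation happens, only on what is summed.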


But I don't necessarily accept that what goes on in the brain is time independent. A brain that ran ten times too fast or too slow would be useless in its interaction with the world. A brain is not an abstract data processing device. In practice, it is time dependent.


Well, yes, in the real world time dependence is necessary. It is only in the abstract that it should make no difference. How it jibes with the world, believe it or not, probably doesn't matter, since it is just a means of trying to move about the world and avoid being eaten and dealing with the Joneses. The brain would learn to wire differently if it received different input. There would have to be some consistency in timing of it vs. the world -- we couldn't just arbitrarily speed it up and down and leave the world at one speed -- for survival, but probably not for consciousness. It would certainly make for a very weird experience if time shifts occurred, though.

But, correct, in the real world the timing of inputs in the brain is critical for its proper function. We can't have long pauses in the processing and 'just get back to it', since in the brain that processing would no longer work. Proper function depends on the coordinated flow of information through neurons, most of them working in parallel networks, in which the information from one network must interact with information in another network at a particular time to produce a particular effect.

We know what happens when the timing is messed up, though this occurs in real life only to a small enough degree for us to see much of anything -- we call it multiple sclerosis. MS (or a variant, a one-time demyelinating hit on the nervous system) can be so bad that consciousness is lost, but that generally leads to death or severe disability.

There is another experience that everyone has had that is thought to be due to timing dysregulation and a 'story' we mold of that experience -- déjà vu.



You only say that because of your political and religious beliefs.

I only say that because I think we should always discuss the ideas, not what we think of the personalities with whom we discuss them.

I say this because I looked at my own responses in the past and decided that if I had to reach for a quick insult it probably meant that I did not have a good argument with which to respond. No good argument, then we shouldn't respond.

As you know I have defended you in the past when I thought you were being unfairly attacked, not because you needed any defending, but because I thought your attackers should reply to arguments and leave the personal stuff on the sidelines.



In the case of a computer performing a calculation, we can introduce arbitrary gaps in the processing which will have no effect on the outcome. That's the basis behind the invention of the multi-tasking operating system - save the state and come back to it, and you can't tell the difference. You can't do that with a human brain.

That is correct, in the real world we cannot do that with a brain. But in a computer system we probably could, because the information doesn't tend to disappear; from what I know it follows in steps, with information being held in a temporary memory store. I don't know enough about the Blue Brain Project to say, but it may well be more like the way nervous tissue acts -- information is not held in memory stores, and the only important issue is the movement through nodes in a network. I hope it is more like the latter if it is really intended to mimic a brain.

Regardless, if we want to stick to a strict definition of computation as being time independent, then that is the wrong word to apply to what nervous tissue does. What is the correct word?



I am not a believer that a Turing machine is anything more than an abstraction that we make up to make sense of computation. I don't think that two different instantiations of Turing machines are the same thing in any real sense. I don't see that because it might be possible to abstract some of the functionality of the human brain as a Turing machine that some other implementation of the same Turing machine would produce consciousness in the same way.

This appears to be what the computational theory claims. Indeed, if Rocketdodger's vision of eternal life in computer program form is to be realised, it must be true. I don't think it's true, and I certainly don't think that it's been proven beyond doubt.


Don't get me wrong, I'm not arguing a stance here other than that I think computers can recreate consciousness somehow and help us to understand it, but that is still just a hypothesis until it is done. Is the underlying issue here whether or not Church-Turing applies to what the brain does? I'm not sure that the time dependence of nervous processing would impact that idea, since wouldn't the time dependence simply be one more aspect of what is computed in the abstract?



Well, I don't think this. I don't think that the fact that we can produce a mathematical model of a physical system means that another system for which the same model applies is equivalent, because all mathematical models leave out information.

Let me make sure we are on the same page here. I seem to hear you saying that what neurons do is not computation. Would that be accurate? If that is so, that's fine with me, though it seems a strange conclusion since we commonly use the word 'compute' to refer to summation. But what word do we apply then? If time dependence rules out computation, then what do we call it?
 
Every now and again I look at one of Pixy's posts to see if he's actually produced anything beyond inserting "wrong" at each paragraph mark - or, in other words, demonstrated consciousness. Clearly he still hasn't got the "argument" thing.

It must have some form of consciousness. It actually seems to think its contributions to the discussion are pithy.
 
Sure, but not really in the abstract, where real world constraints are removed. For neural processing we speak of time dependence because of the peculiar conditions under which this process occurs. If the information is not dealt with at the right time, the information disappears, so it cannot be dealt with at a later time.

I don't understand how that makes it not be computation. Is the summation of neurons and their firing not computation? If it is computation, how does a group of them summating and firing suddenly become not-computation? If it is not computation what do we call it?

If absolute time independence is part of the definition of computation, then there is something wrong with the definition of computation or computation is the wrong word to apply to the nervous system -- as I said before, I don't care what it's called, only what it does.

It certainly seems to me that summation of inputs and integration of this information fits into what we generally mean by using the word computation, or the words information processing, but if these are the wrong words, then what is the replacement?

Or does the 'theory of computation' leave out some forms of computation, so it isn't a complete theory?

I don't think anyone is denying that

  • The brain can be considered as performing computation
  • The brain can be modelled as a Turing machine, at least in principle
  • Any real world implementation of a Turing machine will have timing issues

However, this is not the computational claim. I would summarise the computational view as being

  • Any two implementations of the same Turing machine are functionally equivalent
  • Consciousness results from the implementation of a particular Turing machine
  • Hence the experience of consciousness will be identical on different implementations of the same Turing machine, no matter how it is implemented

Well, yes, in the real world time dependence is necessary. It is only in the abstract that it should make no difference. How it jibes with the world, believe it or not, probably doesn't matter, since it is just a means of trying to move about the world and avoid being eaten and dealing with the Joneses. The brain would learn to wire differently if it received different input. There would have to be some consistency in timing of it vs. the world -- we couldn't just arbitrarily speed it up and down and leave the world at one speed -- for survival, but probably not for consciousness. It would certainly make for a very weird experience if time shifts occurred, though.

But, correct, in the real world the timing of inputs in the brain is critical for its proper function. We can't have long pauses in the processing and 'just get back to it', since in the brain that processing would no longer work. Proper function depends on the coordinated flow of information through neurons, most of them working in parallel networks, in which the information from one network must interact with information in another network at a particular time to produce a particular effect.

We know what happens when the timing is messed up, though this occurs in real life only to a small enough degree for us to see much of anything -- we call it multiple sclerosis. MS (or a variant, a one-time demyelinating hit on the nervous system) can be so bad that consciousness is lost, but that generally leads to death or severe disability.

There is another experience that everyone has had that is thought to be due to timing dysregulation and a 'story' we mold of that experience -- déjà vu.

The question then becomes - is our consciousness independent of our interaction with the world? Is the brain-in-a-vat scenario possible? I think that it's at least a possibility that consciousness requires a direct connection with the real world - and hence timing would be an issue, not in the implementation of the Turing machine, but as another essential element.

We know that timing is a requirement in the brain. What we can't tell for sure is if it is a requirement to make the Turing machine work - and that alone - or if it is in itself a requirement for consciousness.

I only say that because I think we should always discuss the ideas, not what we think of the personalities with whom we discuss them.

I say this because I looked at my own responses in the past and decided that if I had to reach for a quick insult it probably meant that I did not have a good argument with which to respond. No good argument, then we shouldn't respond.

As you know I have defended you in the past when I thought you were being unfairly attacked, not because you needed any defending, but because I thought your attackers should reply to arguments and leave the personal stuff on the sidelines.

You did know that was a joke, I hope...

This particular subject is especially prone to a tendency to dissect the motives of the people putting forward a particular viewpoint. That might be a valid sideline, but it has nothing to do with rebutting the arguments, and it's a well-known logical fallacy. That's why I prefer to engage with the Wasp, since it's possible to concentrate on the issues.

That is correct, in the real world we cannot do that with a brain. But in a computer system we probably could, because the information doesn't tend to disappear; from what I know it follows in steps, with information being held in a temporary memory store. I don't know enough about the Blue Brain Project to say, but it may well be more like the way nervous tissue acts -- information is not held in memory stores, and the only important issue is the movement through nodes in a network. I hope it is more like the latter if it is really intended to mimic a brain.

Regardless, if we want to stick to a strict definition of computation as being time independent, then that is the wrong word to apply to what nervous tissue does. What is the correct word?

Thinking?

Don't get me wrong, I'm not arguing a stance here other than that I think computers can recreate consciousness somehow and help us to understand it, but that is still just a hypothesis until it is done. Is the underlying issue here whether or not Church-Turing applies to what the brain does? I'm not sure that the time dependence of nervous processing would impact that idea, since wouldn't the time dependence simply be one more aspect of what is computed in the abstract?

If consciousness is time dependent, then it cannot, in principle, be instantiated on a Turing machine approach, since Turing machines are not time dependent. It would not be the case that identical Turing machines running on different implementations would produce the same outcome.

Let me make sure we are on the same page here. I seem to hear you saying that what neurons do is not computation. Would that be accurate?

It's trivially true that what neurons do is not only computation. Indeed, any real-world implementation of a Turing machine will do more than computation.

The question of interest is whether the other things that neurons do are relevant in the issue of consciousness, or whether the reductionist approach is correct.

If that is so, that's fine with me, though it seems a strange conclusion since we commonly use the word 'compute' to refer to summation. But what word do we apply then? If time dependence rules out computation, then what do we call it?

It's confusing because a lot of what computers do is of course time-dependent - increasingly so nowadays. Back in the eighties most computers were running programs where time was not an issue, except in terms of getting their results out before five o'clock. Now much of what a typical PC does has considerable time constraints. Sound, video - even typing a character and having it echoed onscreen. But this is not computation in the Turing machine sense, and such activity cannot be performed by pure Turing machines. That shouldn't bother us particularly - Turing machines are an abstraction to help us understand computing. That there are tasks that they can't do - usually where interaction with the real world is involved - shouldn't surprise us.

*out of probably UE, Aku and me - two physicalists and an anti-materialist.
 
No, they have structural differences that give them different physical and chemical properties. Aku and I are open to the possibility that different physical and chemical properties might affect the question of consciousness - since physical and chemical properties affect everything else we encounter. We don't say "Eat that coal - it has carbon in it, just like toast". We recognise that a common fundamental composition is basically irrelevant.

Well that is probably why you seem to be a dualist.

Because a common fundamental composition is at the heart of all of science.

The principle being that if you had a machine that could arrange particles however you wish, and you took a lump of coal and rearranged the relevant particles in the same way as those particles are arranged in toast, you would end up with a legitimate piece of toast.

Do you disagree with that? Do you think there is a difference between coal and toast that cannot be reduced to just a difference in the arrangement of the fundamental particles?
 
It sounds plausible but I'm not sure it will be necessarily true in all cases.

Again -- shades of dualism.

Monism, and all of science, is pretty clear about this. If a process is perfectly understood, it can in principle be duplicated.
 
Time dependence is extremely critical because it affects the fundamental contention of computationalism. If consciousness is time dependent, then it isn't computational - it's physical. That's a possibility for which I've been arguing.

If consciousness is time dependent, then, for example, Rocketdodger's scenario of eternal life in a simulation may well be impossible in principle.



If consciousness is time-dependent, then it's not computable. As you've said, anything computable can be computed on a Turing machine, and Turing machines are not time dependent.



I'd certainly prefer it if my POV wasn't being rebutted by reference to my secret agenda, or explanations of why supporters of the physicalist view were actually anti-materialists.



But if consciousness is time-dependent, then simulations such as the Chinese room, running millions of times slower than a human mind, don't even arise. They will be ruled out on grounds of being too slow.

This is quite a different issue to the physical implementation of a Turing machine, where the different components will have time dependencies in order to make the thing work. A Turing machine is not time dependent.

You seem to have missed my post about algorithms being order dependent, which is equivalent to physical time dependence.

This should be intuitively clear to you since you live in the age of general relativity -- remember that whole "time dilation" thing? Yeah...

So your entire post here is just wrong, because you are wrong about computing and time dependence.
 
I don't think anyone is denying that

  • The brain can be considered as performing computation
  • The brain can be modelled as a Turing machine, at least in principle
  • Any real world implementation of a Turing machine will have timing issues

However, this is not the computational claim. I would summarise the computational view as being

  • Any two implementations of the same Turing machine are functionally equivalent
  • Consciousness results from the implementation of a particular Turing machine
  • Hence the experience of consciousness will be identical on different implementations of the same Turing machine, no matter how it is implemented


Theoretically the second list is possible, but for consciousness I would like to see evidence before I sign on the dotted line. The problem I see, again, is in trying to reproduce what occurs at the synapse. The rest is probably not that difficult, just a matter of knowing what links up where and when and at what frequency in the real world -- a monumental knowledge task but not a big engineering task. Dealing with all the modulations at the synapse, though, is going to be a bear.

Also keep in mind, as I'm sure you do, when discussing Turing machines, we are discussing abstractions, ideals. Ideally we should be able to produce identical experiences. I doubt that we could ever pull that off in the real world, though.



The question then becomes - is our consciousness independent of our interaction with the world? Is the brain-in-a-vat scenario possible? I think that it's at least a possibility that consciousness requires a direct connection with the real world - and hence timing would be an issue, not in the implementation of the Turing machine, but as another essential element.


I think 'consciousness being independent of our interaction with the world' is a different issue. My answer to that would be 'no'.



We know that timing is a requirement in the brain. What we can't tell for sure is if it is a requirement to make the Turing machine work - and that alone - or if it is in itself a requirement for consciousness.

True, but I think it is fair to say that we could probably abstract all the time dependent processing in the brain and represent it in a Turing machine in abstract form in a time independent manner. It would not be easy, but I don't see any definite obstacle.

Consciousness, however, is not an abstraction, so whatever a theoretical Turing machine could or could not do does not necessarily have any bearing on consciousness itself as it is embodied in the real world.


You did know that was a joke, I hope...

Yes, and I did think of replying in kind, but I wanted to drive the point home one more time for those following along. Talk the issues, please.


Thinking?

Don't like it, I'm afraid. For one thing, it means that neurons involved in emotion are thinking. And that neurons are thinking. Doesn't sound or feel right. Computing fits much better than that, I fear.


If consciousness is time dependent, then it cannot, in principle, be instantiated on a Turing machine approach, since Turing machines are not time dependent. It would not be the case that identical Turing machines running on different implementations would produce the same outcome.

But there is no reason to believe that time dependence is a necessary property of consciousness. Sure it is necessary with brains acting in the real world, but again I see no absolute problem with us abstracting the computations occurring in neurons in a time dependent fashion to a Turing machine where they can be implemented in a time independent way.

Turing machines have an endless tape where instructions are always remembered, so interrupting them is never a problem. What is important in neural processing is that steps occur in a particular order and that they integrate in a particular way. It doesn't matter if those steps are interrupted if the 'instructions are remembered'.

Take as a real world example what occurs with absence seizures. Children with absence epilepsy (there are several forms unfortunately, but I'm talking about benign childhood absence here) can have hundreds of seizures a day if untreated. When they have a seizure, they 'check out' for a few seconds and then resume whatever activity.

To demonstrate this we often put them in epilepsy monitoring units (well, I don't because I only see adults) and film them with an EEG running, while asking them to perform some activity -- a favorite is counting. The kids will start to count, have a seizure, and then one of four things typically occurs. They may continue counting from the place where they left off prior to the seizure (say they stopped at 3, they will begin counting again at 4); they may start counting again at one, or at some other random number; they may become confused and forget what they were doing all along; or sometimes they will continue counting at a number they would have reached if they continued at the same pace with which they started as though the seizure never occurred.

Here's a situation in which consciousness is turned off, then returns, and there is no sense that time has passed or that anything is missing. The same is theoretically possible with a Turing machine. I don't see why you couldn't start it and stop it anywhere along the way, as long as the proper relationships are maintained in the processing.
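The stop-anywhere-and-resume point can be sketched with explicit checkpointing. This is a toy illustration of the principle only, not a model of seizures: the counting state is serialized, the live process is discarded, and counting resumes exactly where it left off.

```python
import pickle

# Counting with an explicit checkpoint: save the whole state, "interrupt"
# the process, then restore and continue as if nothing had happened.
state = {"count": 0}
for _ in range(3):
    state["count"] += 1            # count 1, 2, 3

checkpoint = pickle.dumps(state)   # the "instructions are remembered"
del state                          # the interruption: the live state is gone

state = pickle.loads(checkpoint)   # restore; resume counting at 4
for _ in range(2):
    state["count"] += 1
assert state["count"] == 5         # picks up where it left off
```

This mirrors the first outcome described above: resuming from the remembered state with no trace of the gap.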



It's trivially true that what neurons do is not only computation. Indeed, any real-world implementation of a Turing machine will do more than computation.

The question of interest is whether the other things that neurons do are relevant in the issue of consciousness, or whether the reductionist approach is correct.

Yes, that is the question. My bet is on the computational properties, but the computation must obviously be done in a very particular way in the real world.


It's confusing because a lot of what computers do is of course time-dependent - increasingly so nowadays. Back in the eighties most computers were running programs where time was not an issue, except in terms of getting their results out before five o'clock. Now much of what a typical PC does has considerable time constraints. Sound, video - even typing a character and having it echoed onscreen. But this is not computation in the Turing machine sense, and such activity cannot be performed by pure Turing machines. That shouldn't bother us particularly - Turing machines are an abstraction to help us understand computing. That there are tasks that they can't do - usually where interaction with the real world is involved - shouldn't surprise us.


Right, I don't think we should let it bother us either. Again, I don't think that time dependence is an intrinsic property of the computation that neurons do. If someone has an argument as to why time dependence is an intrinsic property and why neural computations cannot be emulated on a Turing machine, I would be interested to hear it.

Time dependence is a real world issue as far as I can tell.
 
Just to make sure I understand exactly where you're coming from:

What do you think "information" is and what do you think it means for a system to process it?
Information is a difference that makes a difference, to toss out a one-liner. In that last paragraph, I was using information as a stand-in for sensory input (coded using whatever coding methods are applicable for the sense in question), that which is stored in memory, and the symbolic representation of output.

How can you simulate something if you don't even know what it is you're trying to simulate?
I think that we will figure it out as we go along: starting with what we currently know about how the various parts of the brain interact, making more and more elaborate models based on what did and did not work before, and drawing conclusions from how those models work and how their components interact will get us there. Evolution produced something conscious; I see no reason that we cannot figure it out and improve on it.

Oh, you mean the actual -perception- of the colors yellow and green? Heh, that's one of the questions I'm asking. It's questions like that that we must have a rigorous answer to if we want to make the leap from Artificial Intelligence to Synthetic Consciousness.
I think we will figure out what is and what is not necessary as we go along, and that insisting that we have a rigorous answer at our current level of understanding is foolish.

No, that was a real question. To make it a little more concrete, what is the stark qualitative difference between the state of my brain while typing this post, while in deep sleep, sleepwalking, or under general anesthesia?

Being as how that's the known physical mechanism that neurons utilize, it seems like a pretty good place to start.
No, the known physical mechanism that neurons utilize is maintaining an ion gradient across their cell membranes, which allows their axons to rapidly depolarize once the inputs to the cells reach a certain threshold, which causes the synaptic vesicles at the ends of the axons to release whatever neurotransmitters they were storing. The neurotransmitters then slot into receptors on the neighboring neurons (or target cells), stimulating or inhibiting them according to the receptor type.

Note that I did not have to invoke QED specifically at all.
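The threshold behaviour described above can be caricatured in a few lines (a leaky integrate-and-fire toy of my own, far cruder than a real neuron; no ion gradients or vesicles here):

```python
# A crude caricature of the threshold mechanism described above:
# inputs accumulate on a leaky "membrane potential" until it crosses
# a threshold, at which point the cell "fires" and resets.

def integrate_and_fire(inputs, threshold=1.0, leak=0.9):
    """Return the input indices at which the potential crosses threshold."""
    potential = 0.0
    spikes = []
    for i, stimulus in enumerate(inputs):
        potential = potential * leak + stimulus  # leaky accumulation
        if potential >= threshold:
            spikes.append(i)  # rapid depolarisation: the cell fires...
            potential = 0.0   # ...and resets toward its resting state
    return spikes

print(integrate_and_fire([0.3, 0.3, 0.3, 0.3, 0.0, 0.6, 0.6]))  # [3, 6]
```

Note that the toy, like the description it caricatures, works entirely at the level of classical thresholds and signals.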

And even so, most of the activity of our nervous system does not produce consciousness. Even our brains produce consciousness only for limited periods of time, and they continue to process information during unconscious states. At this point, it seems to me that the terms "information", "information processing", and "computation" are too abstract and too broad to deal with the specifics of what's actually going on here.
I keep bringing information processing and computation up because they are the most powerful method we have of turning complex abstract models into something that can actually exist and interact with the physical world.

I know you feel that reproducing the general architecture is sufficient but, epistemically, such an approach is directly comparable to the attempts of the Cargo Cults.

Reproducing the general architecture at a sufficient level of detail, yes. I do not think that a comparison to Cargo Cults is warranted, though -- the process of trying to reproduce consciousness using a computational model will at least be both testable and falsifiable, and we will learn a lot even if we fail.

We need more than the superficial level of understanding we have right now.
Yep.

If only it were just an issue of understanding -all- the principles of subjective sensibility; the problem is we understand -none-. We're working in the dark here.
Chalmers fan, eh? I don't think we have a Hard Problem, I think we have lots of hard problems.

Also, why the shift from a superficial understanding to no understanding, and from the term consciousness to subjective sensibility?

The point in having a scientific theory of consciousness is so that we CAN abstract it conceptually. Our theories are abstractions of concrete entities and phenomena. If we do not understand the phenomenon in question, or even know what it is, we cannot achieve technical mastery of it.
I think we understand consciousness very poorly, not that we have no understanding of it. However, that is enough for the scientific method to work.

The -concept- of consciousness is an abstraction. The -phenomenon- of consciousness is not.
If information processing is what the brain is doing, and consciousness is an artifact of the particular way information is being processed, then things are not quite that simple. Granted, there are a couple of big ifs there, but I feel they are justified for now.

In the -physical- powerplant that the -physical- computer is running, of course.
Where in the physical computer? Please be specific.
 
Every now and again I look at one of Pixy's posts to see if he's actually produced anything beyond inserting "wrong" at each paragraph mark - or, in other words, demonstrated consciousness. Clearly he still hasn't got the "argument" thing.
As always, if you don't like being told you're wrong all the time, don't be wrong all the time.

First, you haven't shown that consciousness is time-dependent at all. That's why I described your argument as grasping at red herrings - you are simply trying to find something, anything, which might plausibly (to you) cause a problem for the computational approach, rather than actually analysing the problem.

Second, time-dependence does not make a process non-computable. Your argument is based on a premise which is categorically false. In the abstract, you simply map the time dependence into the computational process. In the abstract, there is no time, so it simply does not matter.

In both the real world and the abstract, order dependence is maintained.

In the real world, you can either maintain that mapping of the time domain to some other abstract domain, or if you need to respond in real time, you build your system so that it responds in real time.

Put plainly, your so-called argument was a bare assertion followed by a non sequitur. A dismissive "wrong" is more than it deserved.
 
Just to make sure I understand exactly where you're coming from:

What do you think "information" is and what do you think it means for a system to process it?

Information is a difference that makes a difference, to toss out a one-liner. In that last paragraph, I was using information as a stand-in for sensory input (coded using whatever coding methods are applicable for the sense in question), that which is stored in memory, and the symbolic representation of output.

Okay, cool. I think that's as good a response as anyone can give on this subject at the moment.

However, there are some shortcomings to this scheme: the first is that inputs into a system are sensory only if the given system has subjective sensibility. The second is that symbols only take on the force of being symbols if there is a conscious subject associating those symbols with meaning(s). So we're still left with having to explain the whole subjective aspect of the issue [i.e. consciousness]: what is it in physical terms, and what are the sufficient conditions for it?

How can you simulate something if you don't even know what it is you're trying to simulate?

I think that we will figure it out as we go along, and that starting with what we currently know about how the various parts of the brain interact and making more and more elaborate models based on what did and did not work before will get us there, along with the conclusions we draw from how those models work and how their components interact. Evolution produced something conscious; I see no reason that we cannot figure it out and improve on it.

I'm fairly confident that we will eventually be able to understand it sufficiently to create conscious technological systems [of course, when/if that happens there will be a whole bevy of ethical concerns that will take the fore of the issue].

Oh, you mean the actual -perception- of the colors yellow and green? Heh, that's one of the questions I'm asking. It's questions like that that we must have a rigorous answer to if we want to make the leap from Artificial Intelligence to Synthetic Consciousness.

I think we will figure out what is and what is not necessary as we go along, and that insisting that we have a rigorous answer at our current level of understanding is foolish.

What I'm objecting to is the assertion that's frequently made here that we already have a sufficient answer. We most assuredly don't.


No, that was a real question. To make it a little more concrete, what is the stark qualitative difference between the state of my brain while typing this post, while in deep sleep, sleepwalking, or under general anesthesia?

As a subject, the difference is more stark than night and day: in one instance [consciousness] you're 'here' and lucid, experiencing a full panoply of subjective states and sensations. On the other end of the spectrum [complete unconsciousness] there is absolutely nothing -- as a subject you're, for all intents and purposes, non-existent.

Physically, the difference is shown as the varying frequencies of the brain's EM activity. Each frequency range is correlated with a particular conscious state, or lack thereof.

Being as how that's the known physical mechanism that neurons utilize, it seems like a pretty good place to start.

No, the known physical mechanism that neurons utilize is maintaining an ion gradient across their cell membranes, which allows their axons to rapidly depolarize once the inputs to the cells reach a certain threshold, which causes the synaptic vesicles at the ends of the axons to release whatever neurotransmitters they were storing. The neurotransmitters then slot into receptors on the neighboring neurons (or target cells), stimulating or inhibiting them according to the receptor type.

Note that I did not have to invoke QED specifically at all.

Every single one of those biological mechanisms -- [1] membrane potentials, [2] polarization, [3] signal transduction, etc. -- utilizes EMF interactions. Conscious experience is not a functional abstraction of what neural cells do but what they are actually physically producing.

And even so, most of the activity of our nervous system does not produce consciousness. Even our brains produce consciousness only for limited periods of time, and they continue to process information during unconscious states. At this point, it seems to me that the terms "information", "information processing", and "computation" are too abstract and too broad to deal with the specifics of what's actually going on here.

I keep bringing information processing and computation up because they are the most powerful method we have of turning complex abstract models into something that can actually exist and interact with the physical world.

The problem here is that many are confusing the model with reality. What makes it all the worse is they don't even recognize that they have no idea what it is they're trying to model.

I know you feel that reproducing the general architecture is sufficient but, epistemically, such an approach is directly comparable to the attempts of the Cargo Cults.

Reproducing the general architecture at a sufficient level of detail, yes. I do not think that a comparison to Cargo Cults is warranted, though -- the process of trying to reproduce consciousness using a computational model will at least be both testable and falsifiable, and we will learn a lot even if we fail.

You seem a good deal more sober minded about this issue than most others here I've conversed with. So far, I haven't seen you making any of the evangelical & overreaching claims made by some [they know who they are]. I'm not discounting the value of information science in this subject area. When and if we do develop conscious systems it will no doubt be a great asset.

Even so, I'm sure you realize that the only way that we can falsify any claim to creating a conscious system is a scientific theory of consciousness that meets the criteria I listed earlier. I don't think many of the participants in this discussion fully appreciate the magnitude of this undertaking and what it will require.

If only it were just an issue of understanding -all- the principles of subjective sensibility; the problem is we understand -none-. We're working in the dark here.

Chalmers fan, eh? I don't think we have a Hard Problem, I think we have lots of hard problems.

Unlike Chalmers, I'm not smugly content with thinking of consciousness as an insoluble philosophical conundrum. I think that science can make real inroads in this area. I also think that philosophy should be used as a tool to help us attack this problem, not as a means to rationalize it into an eternal mystery box.

Also, why the shift from a superficial understanding to no understanding

Because a superficial understanding is not real understanding. The Melanesian natives had a very superficial understanding of what brought supplies to their islands, based on their empirical observations. Valid as their observations were, they had no true insight into what was happening. As far as consciousness is concerned, we're epistemically in -exactly- the same boat.

and from the term consciousness to subjective sensibility?

To emphasize that that is an essential characteristic of consciousness. After over a year and a half of debating this issue, I've found that I can't take for granted that participants here are aware of this fact.


The point in having a scientific theory of consciousness is so that we CAN abstract it conceptually. Our theories are abstractions of concrete entities and phenomena. If we do not understand the phenomenon in question, or even know what it is, we cannot achieve technical mastery of it.

I think we understand consciousness very poorly, not that we have no understanding of it. However, that is enough for the scientific method to work.

Contrary to what some here seem to think, I'm not claiming we can't, I'm pointing out that we don't. Many of the participants here vastly overestimate our level of understanding regarding consciousness. I also don't think the computationalist perspective frames this issue properly. If one approaches a problem with the wrong set of conceptual tools, it makes gaining insight into it much, much more difficult.

The -concept- of consciousness is an abstraction. The -phenomenon- of consciousness is not.

If information processing is what the brain is doing, and consciousness is an artifact of the particular way information is being processed, then things are not quite that simple. Granted, there are a couple of big ifs there, but I feel they are justified for now.

Information inheres in every object, and every process processes information. Pointing out the obvious fact that brains process information tells us nothing useful about consciousness qua consciousness. Information processing is not an explanation, it's just a very general description.

In the -physical- powerplant that the -physical- computer is running, of course.

Where in the physical computer? Please be specific.

It's the literal physical flipping of the computer hardware's switching mechanisms. The computer simulation of the power plant is just a representational tool. Like language, it only takes on symbolic significance in the minds of the humans who use the computer.
 
RD has a point. If Materialism is true, and you were able to replace neurons with functionally equivalent microchips, serotonin with an equivalent artificial chemical, etc., consciousness should be the end result.
Of course. If consciousness did not result then they would not, by definition, have the same function.

The same would be true of Idealism.
 
First, you haven't shown that consciousness is time-dependent at all. That's why I described your argument as grasping at red herrings - you are simply trying to find something, anything, which might plausibly (to you) cause a problem for the computational approach, rather than actually analysing the problem.
The problems with the computational approach are still there:

1. What explanatory power does it have?
2. What empirical confirmation would it have?
3. What mechanism are you proposing that is constantly scanning the entire universe, finding calculations, recognising them as calculations, recognising them as chains of calculations, and producing this unified experience I call my consciousness?

I think the ball is pretty much still in your court for those.
 
The problems with the computational approach are still there:

1. What explanatory power does it have?

Well, that rather depends on what your question is.

2. What empirical confirmation would it have?
What the computational model does is tie observed neural function to the observed results of neural function. That's empirical confirmation. Indeed, it is a profound empirical confirmation.

We have other models proposed: the quantum model, which we know to be impossible; the field model, which we know to be impossible; and various other models which aren't even scientific.

What you should be asking is what would falsify the computational model. And to find that, you would first need to propose an alternate model (preferably one that is not already falsified) and a test that distinguishes between them.

3. What mechanism are you proposing that is constantly scanning the entire universe, finding calculations, recognising them as calculations, recognising them as chains of calculations, and producing this unified experience I call my consciousness?
One of the big advantages of the computational approach is that it allows us to recognise your point 3 as a fantastically absurd strawman.

I think the ball is pretty much still in your court for those.
Back to you, Robin.
 
Well, that rather depends on what your question is.
In other words, no explanatory power that you know of.
What the computational model does is tie observed neural function to the observed results of neural function. That's empirical confirmation. Indeed, it is a profound empirical confirmation.
I am not sure you understand the concept of empirical confirmation.
What you should be asking is what would falsify the computational model. And to find that, you would first need to propose an alternate model (preferably one that is not already falsified) and a test that distinguishes between them.
And you definitely don't understand the concept of falsification.
One of the big advantages of the computational approach is that it allows us to recognise your point 3 as a fantastically absurd strawman.
Why?


We have a set of numerical calculations. We have my conscious experience. You claim one could be responsible for the other. You appear to be missing a step here.
Back to you, Robin.
You haven't even addressed one of my points yet.
 
