Are You Conscious?

Are you conscious?

  • Of course, what a stupid question: 89 votes (61.8%)
  • Maybe: 40 votes (27.8%)
  • No: 15 votes (10.4%)

  Total voters: 144
In other words no explanatory power that you know of.
Well, in other words that do not in any way represent what I said, yes.

I am not sure you understand the concept of empirical confirmation.
Neurons compute.

The brain is made of neurons.

There is this thing called consciousness.

Now, if we consider consciousness to be the result of neural computation, it will, for example, be associated - always - with physical, functional brains.

Oh look, it is!

And you definitely don't understand the concept of falsification.
Okay, try this then: Take a prediction of the computational model, for example, that consciousness will always be associated with physical, functional brains.

Now show me one that isn't.

If you want to complain that this is too general, then as I said, you have to come up with an alternative hypothesis, one that hasn't already been falsified, and a test that distinguishes between the two.

The computational model is very general, and the predictions it makes are very general - and so far as we can test them, obviously true. If you want to be more specific, you'll have to provide an alternate hypothesis.

Because it in no way follows from the computational model.

We have a set of numerical calculations.
Yes.

We have my conscious experience.
Yes.

You claim one could be responsible for the other.
Actually, I point out that consciousness is obviously computational.

You appear to be missing a step here.
Why?

You haven't even addressed one of my points yet.
Yes, I have. The third is of course based on an abject misunderstanding of consciousness and computation, but I've responded to it anyway.
 
But I seem to remember you tried to justify this position by claiming that you could write the algorithm so that the CPU could know, while processing the current instruction, what the previous instruction was.
 
A dismissive "wrong" is more than it deserved.
Sorry, Robin, you don't get to play that card.

Westprog had already brought up that incorrect assertion. It had already been addressed. He paid no attention, and simply repeated it. His argument was, as I pointed out, a bare assertion (and a false one) followed by a non-sequitur. Invalid and unsound.

Now, I'll ask again, wrong in what way?
 
But I seem to remember you tried to justify this position by claiming that you could write the algorithm so that the CPU could know, while processing the current instruction, what the previous instruction was.
Yes. Of course I can.

Or the instruction after. Or any other piece of data in the system. It's not even difficult.

Just do a comparison between a register value and a PC-relative indexed address.

The sequence of operations for a Turing machine would be longer, but it is already proven that it can be done (over 50 years ago). This is hardly controversial.
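
For the curious, here is a minimal C sketch of the underlying point: code that has already executed is just data and can be read back. This is illustrative only; it assumes a mainstream platform where a function pointer can be cast to a data pointer and the text segment is readable, and it is not the exact PC-relative sequence described above.

```c
/* Sketch: execute some code, then read its machine-code bytes back as
 * ordinary data. Not strictly portable C, but works with gcc/clang on
 * typical desktop platforms. */
#include <stdio.h>

int add_one(int x) { return x + 1; }   /* the code we execute, then inspect */

int main(void) {
    int y = add_one(41);                /* execute it first...              */

    /* ...then treat the same code as bytes and read it back.              */
    const unsigned char *code = (const unsigned char *)(void *)add_one;
    printf("result: %d, first bytes of add_one: %02x %02x %02x %02x\n",
           y, code[0], code[1], code[2], code[3]);
    return 0;
}
```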
 
Well, in other words that do not in any way represent what I said, yes.
When I asked you what the explanatory power was, you answered "it depends upon the question you are asking".
Neurons compute.

The brain is made of neurons.

There is this thing called consciousness.

Now, if we consider consciousness to be the result of neural computation, it will, for example, be associated - always - with physical, functional brains.

Oh look, it is!
Hmmm.... So computationalism predicts that we ought to be conscious. And we are conscious. Wow! That is so impressive.

But some neurons compute. And consciousness does not result from those computations.
Okay, try this then: Take a prediction of the computational model, for example, that consciousness will always be associated with physical, functional brains.

Now show me one that isn't.
Well no, you are predicting that consciousness will always be associated with an equivalent algorithm whether or not it is associated with a functional brain.

And again, your "falsification" says - if this theory is false then we are not conscious. Again - not very impressive.
If you want to complain that this is too general, then as I said, you have to come up with an alternative hypothesis, one that hasn't already been falsified, and a test that distinguishes between the two.
Which is why I said that you didn't understand the concept of falsification - which has nothing to do with alternate hypotheses. This "if the other theory is wrong then mine must be, by default, right" is woo, not science.
Because it in no way follows from the computational model.
So are you saying that an equivalent algorithm evaluated on any substrate will not necessarily produce this conscious experience that I have?

But earlier you claimed very specifically that it did.
Robin said:
You claim one could be responsible for the other.
Actually, I point out that consciousness is obviously computational.
Er no, you rather specifically claimed that a set of calculations - even the desk check of a program, even if there were no physical connection between calculations - could result in the moment of consciousness you are experiencing right now.
Yes, I have. The third is of course based on an abject misunderstanding of consciousness and computation, but I've responded to it anyway.
It is based on a perfect understanding of your position - if I had it wrong in any way I gave you more than ample time to correct me. I questioned you quite closely on this.
 
Sorry, Robin, you don't get to play that card.
Nor do you. Unless you are suggesting a double standard should apply.
Now, I'll ask again, wrong in what way?
The CPU has registers and a current instruction; that is all. It does not get to "know" the algorithm; once an instruction has completed and another comes along, the last one is gone.

It never gets to "know" more than its register values and the current instruction.
 
Yes. Of course I can.

Or the instruction after. Or any other piece of data in the system. It's not even difficult.

Just do a comparison between a register value and a PC-relative indexed address.
Specifically - how?

Which register? Which PC-relative indexed address? How? Remember it will have to be in a single CPU cycle otherwise the instruction will not be the last instruction any more.

Will the method work if the last instruction was a "jump" or "return"?
The sequence of operations for a Turing machine would be longer, but it is already proven that it can be done (over 50 years ago). This is hardly controversial.
So on a Turing Machine the reading head can "know" about what happened two steps ago or two steps in advance? Can you provide this proof?

Does it "know" which action was invoked?

Also - if the sequence is "longer" then in what way is it reading the last action? How can it be reading the last action in a sequence longer than one step?
 
Okay, cool. I think that's as good a response as anyone can give on this subject at the moment.

However, there are some shortcomings to this scheme: the first is that inputs into a system are sensory only if the given system has subjective sensibility.

I am not sure I follow you here -- are you suggesting that unconscious animals do not register or act on sensory input in any fashion?

The second is that symbols only take on the force of being symbols if there is a conscious subject associating those symbols with meaning(s).

Sorry, I was using the term "symbol" in a Shannon information theory sense. I probably should have just used the term "output", as the nervous system processes information whether it possesses what we call "consciousness" or not.

So we're still left with having to explain the whole subjective aspect of the issue [i.e. consciousness]: What is it in physical terms, and what are the sufficient conditions for it?
I think describing it in physical terms will be useful to the same degree that describing any moderately complex computational process (an algorithm for simulated annealing or a forward-chaining expert system or whatever) in terms of what is happening at a transistor-by-transistor level on my laptop is -- OK for reverse engineering if that is all you have, not so useful once we figure out what is happening.

I'm fairly confident that we will eventually be able to understand it sufficiently to create conscious technological systems [of course, when/if that happens there will be a whole bevy of ethical concerns that will take the fore of the issue].
Yeah, the ethics will be a big deal. I would hate to upload myself and then have the Supreme Court rule that artificial sapient systems do not have human rights.

What I'm objecting to is the assertion that's frequently made here that we already have a sufficient answer. We most assuredly don't.
I argue that we have what we need to find a sufficient answer, and that we will not need to describe the answer in terms of the four fundamental forces any more than we have to for any other biological process.

Physically, the difference is shown as the varying frequencies of the brain's EM activity. Each frequency range is correlated with a particular conscious state, or lack thereof.
Are you referring to EEG related stuff? That is a very crude diagnostic indeed.

Every single one of those biological mechanisms -- [1] membrane potentials, [2] polarization, [3] signal transduction, etc.. -- all of them, utilize EMF interactions.

Yes, at the level of individual atoms. When we are talking about processes happening at the cellular level we do not care about EMF interactions at all beyond what is necessary to explain the chemistry of what is happening.

Conscious experience is not a functional abstraction of what neural cells do but what they are actually physically producing.
We disagree. I see it as an artifact of the way our nervous system models, learns from, and adapts to our environment. I don't see individual nerve cells producing much beyond metabolic waste products, heat, and the odd depolarization event. The only thing that is interesting about them from the standpoint of consciousness is that their depolarization events can be controlled by other nerve cells, and that they can be connected in huge, ornate networks. Other than that, they suck at being antennae and they are way too hot and dense for quantum effects to start being interesting.

Even so, I'm sure you realize that the only way that we can falsify any claim to creating a conscious system is a scientific theory of consciousness that meets the criteria I listed earlier.
Not exactly -- I think that the only criterion we have to establish if something is conscious or not right now is to interact with it and see if it acts like a conscious entity. I realize this is a very crude test, but it is likely to be as good as we can get for a while. I also think that establishing a scientific test based on the physical properties of neurons is the wrong way to go about it -- at the very least, I would focus more on their properties as information processors, and I would look more at how the networks as a whole in the brain behave rather than focusing on individual neurons.

Unlike Chalmers, I'm not smugly content with thinking of consciousness as an insoluble philosophical conundrum. I think that science can make real inroads in this area. I also think that philosophy should be used as a tool to help us attack this problem, not as a means to rationalize it into an eternal mystery box.
I think that so far philosophy has made a hash of it -- too much thinking about the problem, not enough of it empirical. Bring on the science.

Every object inheres information and every process is processing information.
Yeah, quantum mechanics 101. Neurons do not just process information in that trivial sense, though -- they do it by summing their excitatory inputs, subtracting inhibitory inputs, and firing if their input passes a certain threshold. Totally different ball of wax.
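
As a sketch of that summation-and-threshold behaviour, here is a McCulloch-Pitts-style unit in C. The weights and threshold are invented for illustration; this is not a biophysical model.

```c
#include <stdio.h>

/* Weighted sum of inputs; fire if the total reaches the threshold.
 * Negative weights play the role of inhibitory inputs. */
int fires(const double *in, const double *w, int n, double threshold) {
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += in[i] * w[i];
    return sum >= threshold;
}

int main(void) {
    double in[] = {1.0, 1.0, 1.0};
    double w[]  = {0.6, 0.7, -0.5};   /* two excitatory, one inhibitory */
    printf("fires: %d\n", fires(in, w, 3, 0.5));   /* 0.8 >= 0.5 -> 1  */
    return 0;
}
```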

It's the literal physical flipping of the computer hardware's switching mechanisms. The computer simulation of the power plant is just a representational tool. Like language, it only takes on symbolic significance in the minds of the humans who use the computer.
So all that switch flipping is still just a simulation even though the power plant would stop functioning (possibly catastrophically) if the computer running it crashed or was switched off?
 
In case you did not see my response, Pixy:


!Kaggen said:
PixyMisa said:
The strong interpretation of Goethe's Metamorphosis of Plants demonstrates that the technique of imaginative thinking as a form of introspection can be used to describe objective reality and hypothesize the homologous structures of plant organs. This same technique can be used to study consciousness.
This same technique can indeed be used to study consciousness. However, it gives answers that are now well established to be wrong.
Such as?

You asked Aku what else might be necessary for consciousness apart from computation. I have replied that consciousness is not just understood as a result of reasons, but the reason for results. In other words consciousness not only has necessary reasons, but also contingent results.
Evidence?
Ok, let's assume only computation is required. Computation is repeatable. Then an exact copy of a conscious computer is the same conscious computer. That is what you have been telling us, right? Therefore if I produce an exact copy of a human by cloning, would I get the same person? No; behavior studies show that the historical experience of the original person is a contingent part of his/her consciousness. Therefore even if computation is the necessary part of consciousness, the contingent part of consciousness (being the historical experience of the person) is required to replicate a brain. Is this physically possible? No. Therefore the initial assumption is wrong and computation is not all that is required.

It requires imagination to predict the contingent results of consciousness.
Evidence?
Ask any person who makes a living from predicting the behavior of real people.

You missed the point.
Song writing develops the imagination.
In computers?
No, in those who want to build conscious computers.
 
Specifically - how?
I. Just. Told. You.

Which register?
The one you want to use.

Which PC-relative indexed address?
The one you want to look at.

Using an instruction that performs a comparison between a register value and a PC-relative indexed address.

Remember it will have to be in a single CPU cycle otherwise the instruction will not be the last instruction any more.
No, it has to be a single instruction, not a single cycle. Or you can be specific about what you mean when you say "the last instruction" and then it can be any number of instructions.

Will the method work if the last instruction was a "jump" or "return"?
Yes, of course.

ETA: Sorry. Depends what you mean by the last instruction - if the last instruction executed was a taken jump or return, then no, a simple PC-relative lookup won't work. You'd have to use a different method. But it is computable regardless. This is much simpler than the halting problem, which is not computable.

So on a Turing Machine the reading head can "know" about what happened two steps ago or two steps in advance? Can you provide this proof?
Yes.

As you can see, if it can be done on any one of the computationally equivalent systems, it can be done on all of them - and that class includes modern digital computers (which are finite random access stored program register machines), Turing machines, cellular automata, Lambda calculus, and a number of other concepts.

Does it "know" which action was invoked?
It has access to the program counter; it has access to the program memory; it can perform index+offset memory access, which means that it can easily access the value located at PC-1, i.e. the numeric value of the previously executed instruction (assuming a fixed instruction word).

Given all that, does it "know" which action was invoked? You can ask it, and it can tell you. Does that mean it "knows" which action was invoked? It can perform the same instruction again. Does that mean it "knows" which action was invoked? It can alter its own code to perform that instruction at a different time, under different circumstances. Does that mean it "knows" which action was invoked?

If not, is "know" a meaningful term any more?

Also - if the sequence is "longer" then in what way is it reading the last action?
It's the last action before the sequence. You can create a virtual machine running on top of the Turing machine under which these operations are atomic. Does this make a difference? If so, why?
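
To make the virtual-machine point concrete, here is a toy register machine in C with an invented instruction set (purely illustrative, not a claim about any real CPU). Its CMP_PREV instruction compares a register with the opcode of the previously executed instruction in a single atomic step; tracking a separate prev_pc register instead would handle taken jumps as well.

```c
#include <stdio.h>

enum { OP_NOP, OP_ADD, OP_CMP_PREV, OP_HALT };

int main(void) {
    int program[] = { OP_NOP, OP_ADD, OP_CMP_PREV, OP_HALT };
    int reg = OP_ADD;   /* preloaded with the opcode we expect to see */
    int pc = 0, flag = 0;

    for (;;) {
        int op = program[pc++];
        if (op == OP_NOP || op == OP_ADD) {
            /* no-ops in this sketch */
        } else if (op == OP_CMP_PREV) {
            /* pc already points past the current instruction, so the
             * previously executed instruction sits at pc - 2.         */
            flag = (reg == program[pc - 2]);
        } else {
            break;   /* OP_HALT */
        }
    }
    printf("previous instruction matched register: %s\n",
           flag ? "yes" : "no");
    return 0;
}
```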

How can it be reading the last action in a sequence longer than one step?
How can you write a sentence with more than one word?
 
Sorry, I did miss your post. I've been really busy the last week or so, and haven't had a chance to read everything, much less reply to everything. And that's likely to just get worse in coming weeks. :(

Such as the notion that consciousness is causal - see Libet's experiments there. Though Libet's conclusions haven't been decisively confirmed, the conception of consciousness formed through introspection has been effectively dustbinned.

Ok, let's assume only computation is required. Computation is repeatable.
Yep.

Then an exact copy of a conscious computer is the same conscious computer.
A second instantiation of that consciousness.

That is what you have been telling us, right?
Well, no.

Therefore if I produce an exact copy of a human by cloning, would I get the same person?
Cloning doesn't produce exact copies.

No; behavior studies show that the historical experience of the original person is a contingent part of his/her consciousness.
That's even more irrelevant than the point I thought you were going to make. That is not an exact copy. It's a clone with different memories.

Therefore even if computation is the necessary part of consciousness, the contingent part of consciousness (being the historical experience of the person) is required to replicate a brain.
Wrong. All you have to do is actually copy the brain.

Is this physically possible? No.
No, but it's completely irrelevant too.

Therefore the initial assumption is wrong and computation is not all that is required.
No, your argument is unsound.

Ask any person who makes a living from predicting the behavior of real people.
That's not evidence.

No, in those who want to build conscious computers.
We already have song-writing computers. Does song-writing develop the imagination in those computers?
 
So the information in genes is insufficient to reproduce the consciousness of biological organisms, but algorithms aren't?
Cloning is insufficient for a huge number of reasons. Clones aren't biologically identical, and even if they were they wouldn't have identical consciousnesses, so none of this relates in any way to the computational model.

Algorithms aren't sufficient either. I don't understand why you think they would be.
 
I. Just. Told. You.
No. You. Did. Not.

What. You. Are. Describing. Is. Impossible.

Show. Some. Specific. Code. If. You. Disagree.
(Or, to be precise, you can only do it in cases where you already know what the last instruction is.)
Yes, of course.

ETA: Sorry. Depends what you mean by the last instruction - if the last instruction executed was a taken jump or return, then no, a simple PC-relative lookup won't work. You'd have to use a different method. But it is computable regardless. This is much simpler than the halting problem, which is not computable.
What different method? What will find the last instruction even if it was a return or a jump?
You cite this as an all-purpose proof for every different occasion, don't you?

But it does not even come remotely close to proving what you claim.
 
Nobody is putting forward a contrary view. I've already said so a few posts upthread.

Nobody is saying that the knowledge of the experiences gives one the experience. Knowing about millionaires doesn't make me one. I'm saying that the experience of pain IS pain. You're adding a useless layer to pain.

Come on, Westprog, a little effort, please.

I also notice you haven't answered Dodger's question about rearranging particles.
 
You seem to have missed my post about algorithms being order dependent, which is equivalent to physical time dependence.

This is quite an interesting post, because RocketDodger has previously given indications that he understands some things about computing from a practical point of view. His theory is pretty vague, but he seems to know what programs do, in a general sense.

So he should know - because it's pretty fundamental - that the steps of a computation that can be carried out on a Turing Machine are not time dependent. It doesn't matter if each step takes a nanosecond or a year - the outcome will be the same.

However, this doesn't apply to everything that happens on a computer. Certain programs are time dependent, because they interact with the real world in such a way that if they take too long, they will not produce the same outcome. That's one reason why in the '70s and '80s DEC became a highly successful company - because they produced computers - and more importantly, operating systems - that were capable of producing guaranteed responses in a given time.

That is not to say, of course, that the mainframes being produced in that era - in fact, all computers ever built - didn't have time dependencies. It's impossible to build a computing device in the real world that doesn't have time dependencies. However, the intention of the bulk processing mainframes was to provide a Turing type environment for programs where timing issues would not apply. A COBOL payroll program would not be written with timing issues in mind. Indeed, there would be no COBOL keywords related to timing. The instructions would, as in the Turing model, be ordered, but the programmer would simply submit the program, and await the output, without concerning himself with precisely when any given instruction was executed. Indeed, the function of the multi-tasking operating systems was to conceal issues of timing from the programmer and provide a pure Turing environment.
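
Here is a minimal C sketch of the distinction (function names invented for illustration): the first function behaves like the Turing-style payroll program, its output unaffected by how long each step takes, while the second is real-time in the sense that its result depends on the wall clock.

```c
#include <stdio.h>
#include <time.h>

/* Turing-style: the answer is the same whether a step takes a
 * nanosecond or a year. */
int pure_sum(int n) {
    int s = 0;
    for (int i = 1; i <= n; i++)
        s += i;
    return s;
}

/* Real-time style: the outcome changes with elapsed wall-clock time. */
int met_deadline(time_t start, double budget_secs) {
    return difftime(time(NULL), start) <= budget_secs;
}

int main(void) {
    time_t t0 = time(NULL);
    printf("sum = %d\n", pure_sum(100));            /* always 5050      */
    printf("on time: %d\n", met_deadline(t0, 0.5)); /* depends on when  */
    return 0;
}
```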

In the more practical world of process control and monitoring, this abstraction was not possible. The DEC systems provided the programmer with tools that allowed guaranteed response times. There were also microprocessor based systems and microcontroller based systems. Languages such as FORTRAN were extended with system calls to ensure precise timing. Specialist languages such as RTL/2 were written. I've designed such languages myself.

Programs written in such languages, for such computing systems cannot be modelled on Turing Machines because Turing Machines don't concern themselves with issues of timing. Hence the real-time world was quite vulgar for a while, outside the mainstream of computer theory.

But the problems of real-time programming were also the problems of computing as a whole. The issue of synchronising processes running in parallel was important for operating system design as well as real-time control. And as computers became cheaper, more and more real-time applications arose.

It's now the case that most of the programs run on a normal PC have real-time issues. Play an MP3 and it can't be interrupted for half a second without noticing. Hit a key in Microsoft Word and you expect the corresponding letter to appear on screen. Microsoft Windows has the designs of those DEC real-time operating systems under the hood.

How does this relate to consciousness? Well, partly because the AI people in the seventies were not involved with the real-time programmers. They were using the Turing model, where time was not an issue. So we have theories in which any two implementations of the same Turing machine are functionally equivalent - as in the rhetorical question that Pixy asked Aku. Is a simulation of a computer equivalent to a computer? Well, the programs will produce the same output. But if you want to play a YouTube video, you'll quickly find that the simulation does produce functionally - and qualitatively - different output.

The AI view of computing seems stuck in its world of the 1970s - stacks of punch cards ready for processing, according to the whim of the computer operator - each producing its output when it's finished. All modelled as Turing machines, and every run equivalent if it produces the same output.

This is not what computing is like now, when most people interact continuously with their programs. And it's the interaction with the environment that defines human consciousness. Can a human mind be modelled by a pure Turing machine? Not if it is time dependent.

This should be intuitively clear to you since you live in the age of general relativity -- remember that whole "time dilation" thing? Yeah......

This is like when WW2 bombers used to jettison aluminium strips to avoid radar. Does general relativity and time dilation have anything to do with what we're talking about? No, but create some noise and confusion.

So your entire post here is just wrong, because you are wrong about computing and time dependence.

And when in doubt, channel Pixy.

It would all be simple enough if RD could just bring himself to say that a pure Turing implementation of consciousness might well be lacking in some vital element - viz, time dependence. But conceding a point? That's not going to happen. However, if the Wasp is reading I'd be interested to hear what he thinks.
 
Theoretically the second list is possible, but for consciousness I would like to see evidence before I sign on the dotted line. The problem I see, again, is in trying to reproduce what occurs at the synapse. The rest is probably not that difficult, just a matter of knowing what links up where and when and at what frequency in the real world -- a monumental knowledge task but not a big engineering task. Dealing with all the modulations at the synapse, though is going to be a bear.

Also keep in mind, as I'm sure you do, when discussing Turing machines, we are discussing abstractions, ideals. Ideally we should be able to produce identical experiences. I doubt that we could ever pull that off in the real world, though.

I'm not sure that implementing Turing machines has any relevance to consciousness - partly because they aren't real-world objects, just ways to think about computation.

I think 'consciousness being independent of our interaction with the world' is a different issue. My answer to that would be 'no'.

True, but I think it is fair to say that we could probably abstract all the time dependent processing in the brain and represent it in a Turing machine in abstract form in a time independent manner. It would not be easy, but I don't see any definite obstacle.

You could certainly write a Turing program which represents time as another abstract parameter - but such a program would not be itself time dependent. For example, a COBOL payroll program would count hours, but it would not have its output changed according to how long it took to run. There's a fundamental difference between non-Turing programs that interact with the environment and Turing programs that effectively just read their tape.

Consciousness, however, is not an abstraction, so whatever a theoretical Turing machine could or could not do has no bearing on consciousness itself necessarily as it is embodied in the real world.

I broadly agree with this.

But there is no reason to believe that time dependence is a necessary property of consciousness. Sure it is necessary with brains acting in the real world, but again I see no absolute problem with us abstracting the computations occurring in neurons in a time dependent fashion to a Turing machine where they can be implemented in a time independent way.

Turing machines have an endless tape where instructions are always remembered, so interrupting them is never a problem. What is important in neural processing is that steps occur in a particular order and that they integrate in a particular way. It doesn't matter if those steps are interrupted if the 'instructions are remembered'.

Take as a real world example what occurs with absence seizures. Children with absence epilepsy (there are several forms unfortunately, but I'm talking about benign childhood absence here) can have hundreds of seizures a day if untreated. When they have a seizure, they 'check out' for a few seconds and then resume whatever activity.

To demonstrate this we often put them in epilepsy monitoring units (well, I don't because I only see adults) and film them with an EEG running, while asking them to perform some activity -- a favorite is counting. The kids will start to count, have a seizure, and then one of four things typically occurs. They may continue counting from the place where they left off prior to the seizure (say they stopped at 3, they will begin counting again at 4); they may start counting again at one, or at some other random number; they may become confused and forget what they were doing all along; or sometimes they will continue counting at a number they would have reached if they continued at the same pace with which they started as though the seizure never occurred.

Here's a situation in which consciousness is turned off, then returns, and there is no sense that time has passed or that anything is missing. The same is theoretically possible with a Turing machine. I don't see why you couldn't start it and stop it anywhere along the way, as long as the proper relationships are maintained in the processing.
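
The same point can be put in code. A sketch (a toy machine invented for illustration): because the complete state is explicit, a run interrupted after any step and resumed later ends up identical to an uninterrupted run.

```c
#include <stdio.h>

struct state { int pc; int acc; };          /* the machine's entire state */

void step(struct state *s) { s->acc += s->pc; s->pc += 1; }

int main(void) {
    struct state a = {0, 0}, b = {0, 0};

    for (int i = 0; i < 10; i++) step(&a);  /* uninterrupted run          */

    for (int i = 0; i < 4; i++) step(&b);   /* run four steps...          */
    struct state saved = b;                 /* ...interruption: state kept */
    b = saved;                              /* resume where we stopped    */
    for (int i = 4; i < 10; i++) step(&b);

    printf("runs match: %s\n",
           (a.acc == b.acc && a.pc == b.pc) ? "yes" : "no");
    return 0;
}
```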

That works quite well with counting - but try it with juggling and it won't work as well. Most of the interactions between humans and their environment are extremely time dependent. Is consciousness necessarily time dependent? Perhaps, perhaps not, but I certainly don't think that we can say for certain yet.

Yes, that is the question. My bet is on the computational properties, but the computation must obviously be done in a very particular way in the real world.

I don't mind people placing different bets, as long as it's understood that the results aren't in yet.

Right, I don't think we should let it bother us either. Again, I don't think that time dependence is an intrinsic property of the computation that neurons do. If someone has an argument as to why time dependence is an intrinsic property and why they cannot be emulated on a Turing machine I would be interested to hear it.

Time dependence is a real world issue as far as I can tell.
 
I'm fairly confident that we will eventually be able to understand it sufficiently to create conscious technological systems

I'm not confident that consciousness is understandable in principle. Nor am I confident that if understood, we would be able to create consciousness in any other way than we do at present. However, in practice I favour an approach that assumes that both are possible.

I realise that doubts on this issue are not acceptable to the hardline materialists, but I can live with that.
 
So are you saying that an equivalent algorithm evaluated on any substrate will not necessarily produce this conscious experience that I have?

But earlier you claimed very specifically that it did.

I'm having this experience rather a lot with the computationalists. Sometimes the claim is that the same algorithm/Turing Machine will produce an identical experience - sometimes it isn't.

It's quite critical because if this claim is withdrawn, the excesses of the computational approach aren't as egregious.
 
