Robin
Quote: Wrong in what way?
A dismissive wrong is more than it deserved.
Quote: In other words, no explanatory power that you know of.
Well, in other words that do not in any way represent what I said, yes.
Quote: I am not sure you understand the concept of empirical confirmation.
Neurons compute.
Quote: And you definitely don't understand the concept of falsification.
Okay, try this then: Take a prediction of the computational model, for example, that consciousness will always be associated with physical, functional, brains.
Quote: Why?
Because it in no way follows from the computational model.
Quote: We have a set of numerical calculations.
Yes.
Quote: We have my conscious experience.
Yes.
Quote: You claim one could be responsible for the other.
Actually, I point out that consciousness is obviously computational.
Quote: You appear to be missing a step here.
Why?
Quote: You haven't even addressed one of my points yet.
Yes, I have. The third is of course based on an abject misunderstanding of consciousness and computation, but I've responded to it anyway.
Quote: A dismissive wrong is more than it deserved.
Sorry, Robin, you don't get to play that card.
Quote: But I seem to remember you tried to justify this position by claiming that you could write the algorithm so that the CPU could know what the instruction was before the current instruction it was processing.
Yes. Of course I can.
Quote: Well, in other words that do not in any way represent what I said, yes.
When I asked you what the explanatory power was, you answered "it depends upon the question you are asking".
Quote: Neurons compute. The brain is made of neurons. There is this thing called consciousness. Now, if we consider consciousness to be the result of neural computation, it will, for example, be associated - always - with physical, functional, brains. Oh look, it is!
Hmmm.... So computationalism predicts that we ought to be conscious. And we are conscious. Wow! That is so impressive.
Quote: Okay, try this then: Take a prediction of the computational model, for example, that consciousness will always be associated with physical, functional, brains. Now show me one that isn't.
Well no, you are predicting that consciousness will always be associated with an equivalent algorithm, whether or not it is associated with a functional brain.
Quote: If you want to complain that this is too general, then as I said, you have to come up with an alternative hypothesis, one that hasn't already been falsified, and a test that distinguishes between the two.
Which is why I said that you didn't understand the concept of falsification - which has nothing to do with alternate hypotheses. This "if the other theory is wrong then mine must be, by default, right" is woo, not science.
Quote: Because it in no way follows from the computational model.
So are you saying that an equivalent algorithm evaluated on any substrate will not necessarily produce this conscious experience that I have?
Quote: Robin said: You claim one could be responsible for the other.
Quote: Actually, I point out that consciousness is obviously computational.
Er no, you rather specifically claimed that a set of calculations - even the desk check of a program, even if there were no physical connection between calculations - could result in the moment of consciousness you are experiencing right now.
Quote: Yes, I have. The third is of course based on an abject misunderstanding of consciousness and computation, but I've responded to it anyway.
It is based on a perfect understanding of your position - if I had it wrong in any way I gave you more than ample time to correct me. I questioned you quite closely on this.
Quote: Sorry, Robin, you don't get to play that card.
Nor do you. Unless you are suggesting a double standard should apply.
Quote: Now, I'll ask again, wrong in what way?
The CPU has registers and current instructions; that is all. It does not get to "know" the algorithm; once an instruction has completed and another comes along, the last one has gone.
Quote: Yes. Of course I can. Or the instruction after. Or any other piece of data in the system. It's not even difficult. Just do a comparison between a register value and a PC-relative indexed address.
Specifically - how?
Quote: The sequence of operations for a Turing machine would be longer, but it is already proven that it can be done (over 50 years ago). This is hardly controversial.
So on a Turing Machine the reading head can "know" about what happened two steps ago or two steps in advance? Can you provide this proof?
Okay, cool. I think that's as good a response as anyone can give on this subject at the moment.
However, there are some shortcomings to this scheme: The first is that inputs into a system are sensory only if the given system has subjective sensibility.
The second is that symbols only take on the force of being symbols if there is a conscious subject associating those symbols with meaning(s).
Quote: So we're still left with having to explain the whole subjective aspect of the issue [i.e. consciousness]: What is it in physical terms, and what are the sufficient conditions for it?
I think describing it in physical terms will be useful to the same degree that describing any moderately complex computational process (an algorithm for simulated annealing or a forward chaining expert system or whatever) in terms of what is happening at a transistor-by-transistor level on my laptop is -- OK for reverse engineering if that is all you have, not so useful once we figure out what is happening.
Quote: I'm fairly confident that we will eventually be able to understand it sufficiently to create conscious technological systems [of course, when/if that happens there will be a whole bevy of ethical concerns that will take the fore of the issue].
Yeah, the ethics will be a big deal. I would hate to upload myself and then have the Supreme Court rule that artificial sapient systems do not have human rights.
Quote: What I'm objecting to is the assertion that's frequently made here that we already have a sufficient answer. We most assuredly don't.
I argue that we have what we need to find a sufficient answer, and that we will not need to describe the answer in terms of the four fundamental forces any more than we have to for any other biological process.
Quote: Physically, the difference is shown as the varying frequencies of the brain's EM activity. Each frequency range is correlated with a particular conscious state, or lack thereof. Every single one of those biological mechanisms -- [1] membrane potentials, [2] polarization, [3] signal transduction, etc. -- all of them utilize EMF interactions.
Are you referring to EEG-related stuff? That is a very crude diagnostic indeed.
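For context on the "frequency ranges" at issue: what EEG work actually measures is power in a handful of conventional bands. A minimal sketch of that computation (the band edges are the usual rough conventions, and the signal here is synthetic, invented purely for illustration):

```python
import numpy as np

# Conventional (approximate) EEG frequency bands, in Hz.
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(signal, fs):
    """Relative spectral power per band for one channel sampled at fs Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    total = power[(freqs >= 0.5) & (freqs < 30)].sum()
    return {name: power[(freqs >= lo) & (freqs < hi)].sum() / total
            for name, (lo, hi) in BANDS.items()}

# Synthetic stand-in for an "eyes-closed resting" trace: a dominant
# 10 Hz (alpha-band) rhythm plus noise.
fs = 256
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)
print(band_powers(eeg, fs))  # alpha should dominate
```

Which is exactly the sense in which the reply above calls it crude: a scalp-level power spectrum is a very low-dimensional summary of what billions of neurons are doing.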
Quote: Conscious experience is not a functional abstraction of what neural cells do but what they are actually physically producing.
We disagree. I see it as an artifact of the way our nervous system models, learns from, and adapts to our environment. I don't see individual nerve cells producing much beyond metabolic waste products, heat, and the odd depolarization event. The only thing that is interesting about them from the standpoint of consciousness is that their depolarization events can be controlled by other nerve cells, and that they can be connected in huge, ornate networks. Other than that, they suck at being antennae and they are way too hot and dense for quantum effects to start being interesting.
Quote: Even so, I'm sure you realize that the only way that we can falsify any claim to creating a conscious system is a scientific theory of consciousness that meets the criteria I listed earlier.
Not exactly -- I think that the only criterion we have to establish whether something is conscious right now is to interact with it and see if it acts like a conscious entity. I realize this is a very crude test, but it is likely to be as good as we can get for a while. I also think that establishing a scientific test based on the physical properties of neurons is the wrong way to go about it -- at the very least, I would focus more on their properties as information processors, and I would look more at how the networks as a whole in the brain behave rather than focusing on individual neurons.
Quote: Unlike Chalmers, I'm not smugly content with thinking of consciousness as an insoluble philosophical conundrum. I think that science can make real inroads in this area. I also think that philosophy should be used as a tool to help us attack this problem, not as a means to rationalize it into an eternal mystery box.
I think that so far philosophy has made a hash of it -- too much thinking about the problem, not enough of it empirical. Bring on the science.
Quote: Every object inheres information and every process is processing information.
Yeah, quantum mechanics 101. Neurons do not just process information in that trivial sense, though -- they do it by summing their excitatory inputs, subtracting inhibitory inputs, and firing if their input passes a certain threshold. Totally different ball of wax.
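That summing-and-thresholding description is essentially the classic threshold-unit abstraction of a neuron. A minimal sketch (the weights and threshold are made-up numbers for illustration; real neurons are enormously messier):

```python
def threshold_neuron(excitatory, inhibitory, threshold=1.0):
    """Fire (1) iff summed excitation minus summed inhibition reaches threshold."""
    net_input = sum(excitatory) - sum(inhibitory)
    return int(net_input >= threshold)

# Two excitatory inputs outweigh one inhibitory input:
print(threshold_neuron(excitatory=[0.7, 0.6], inhibitory=[0.2]))  # -> 1
# Inhibition wins here, so the cell stays silent:
print(threshold_neuron(excitatory=[0.4], inhibitory=[0.5]))       # -> 0
```

The interesting behavior comes from wiring many such units into the "huge, ornate networks" mentioned above, not from any single unit.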
Quote: It's the literal physical flipping of the computer hardware's switching mechanisms. The computer simulation of the power plant is just a representational tool. Like language, it only takes on symbolic significance in the minds of the humans who use the computer.
So all that switch flipping is still just a simulation even though the power plant would stop functioning (possibly catastrophically) if the computer running it crashed or was switched off?
Quote: The strong interpretation of Goethe's Metamorphosis of Plants demonstrates that the technique of imaginative thinking as a form of introspection can be used to describe objective reality and hypothesize the homologous structures of plant organs. This same technique can be used to study consciousness.
Quote: PixyMisa said: This same technique can indeed be used to study consciousness. However, it gives answers that are now well established to be wrong.
Quote: !Kaggen said: Such as?
Quote: You asked Aku what else might be necessary for consciousness apart from computation. I have replied that consciousness is not just understood as a result of reasons, but the reason for results. In other words consciousness not only has necessary reasons, but also contingent results.
Quote: Evidence?
Ok, let's assume only computation is required. Computation is repeatable. Then an exact copy of a conscious computer is the same conscious computer. That is what you have been telling us, right? Therefore, if I produce an exact copy of a human by cloning, would I get the same person? No: behavior studies show that the historical experience of the original person is a contingent part of his/her consciousness. Therefore, even if computation is the necessary part of consciousness, the contingent part (the historical experience of the person) is required to replicate a brain. Is this physically possible? No. Therefore the initial assumption is wrong and computation is not all that is required.
Quote: It requires imagination to predict the contingent results of consciousness.
Quote: Evidence?
Ask any person who makes a living from predicting the behavior of real people.
Quote: You missed the point. Song writing develops the imagination.
Quote: In computers?
No, in those who want to build conscious computers.
Quote: Specifically - how?
I. Just. Told. You.
Quote: Which register?
The one you want to use.
Quote: Which PC-relative indexed address?
The one you want to look at.
Quote: How?
Using an instruction that performs a comparison between a register value and a PC-relative indexed address.
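To make that concrete, here is a minimal sketch in Python of a toy fixed-instruction-word machine (the opcodes are invented for illustration, not any real ISA) in which one instruction compares a register against the memory word at PC-1 -- that is, against the instruction executed immediately before it:

```python
def run(mem):
    """mem: a flat list of fixed-width instruction words, (opcode, operand)."""
    pc, r0, flag = 0, None, 0
    while pc < len(mem):
        op, arg = mem[pc]
        if op == "LOADI":                    # r0 <- immediate operand
            r0 = arg
        elif op == "NOP":
            pass
        elif op == "CMP_PREV":               # flag <- (r0 == mem[pc - 1]),
            flag = int(mem[pc - 1] == r0)    # a PC-relative indexed read
        elif op == "HALT":
            break
        pc += 1                              # straight-line code in this sketch
    return flag

prog = [("LOADI", ("NOP", None)),   # r0 <- the encoding of a NOP instruction
        ("NOP", None),              # the instruction we will look back at
        ("CMP_PREV", None),         # "was the last instruction a NOP?" -> yes
        ("HALT", None)]
assert run(prog) == 1
```

Nothing in the sketch requires the CPU to "know" the algorithm; the previous instruction is simply another addressable word of memory.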
Quote: Remember it will have to be in a single CPU cycle otherwise the instruction will not be the last instruction any more. Will the method work if the last instruction was a "jump" or "return"?
No, it has to be a single instruction, not a single cycle. Or you can be specific about what you mean when you say "the last instruction" and then it can be any number of instructions.
Quote: So on a Turing Machine the reading head can "know" about what happened two steps ago or two steps in advance? Can you provide this proof?
Yes.
Quote: Does it "know" which action was invoked?
It has access to the program counter; it has access to the program memory; it can perform index+offset memory access, which means that it can easily access the value located at PC-1, i.e. the numeric value of the previously executed instruction (assuming a fixed instruction word).
Quote: Also - if the sequence is "longer" then in what way is it reading the last action?
It's the last action before the sequence. You can create a virtual machine running on top of the Turing machine under which these operations are atomic. Does this make a difference? If so, why?
Quote: How can it be reading the last action in a sequence longer than one step?
How can you write a sentence with more than one word?
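The "longer sequence of operations" point can be pictured with a head that only has single-cell primitives: retrieving the symbol two cells back is not one primitive, but it is a short fixed routine of primitives with the same net effect. A toy sketch (this class and routine are invented for illustration, not a formal Turing machine):

```python
class Head:
    """A tape head with only single-step primitives: move and read."""
    def __init__(self, tape, pos):
        self.tape, self.pos = tape, pos
    def left(self):
        self.pos -= 1
    def right(self):
        self.pos += 1
    def read(self):
        return self.tape[self.pos]

def read_two_back(head):
    """A multi-step routine: fetch tape[pos - 2], then restore the head."""
    head.left(); head.left()
    symbol = head.read()
    head.right(); head.right()
    return symbol

h = Head(tape=list("abcde"), pos=3)   # head sitting over 'd'
print(read_two_back(h), h.pos)        # -> b 3: the old symbol, head restored
```

On the virtual machine mentioned above, the whole routine would count as one atomic "read two back" operation.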
Quote: Such as?
Such as the notion that consciousness is causal - see Libet's experiments there. Though Libet's conclusions haven't been decisively confirmed, the conception of consciousness formed through introspection has been effectively dustbinned.
Quote: Ok, let's assume only computation is required. Computation is repeatable.
Yep.
Quote: Then an exact copy of a conscious computer is the same conscious computer.
A second instantiation of that consciousness.
Quote: That is what you have been telling us, right?
Well, no.
Quote: Therefore, if I produce an exact copy of a human by cloning, would I get the same person?
Cloning doesn't produce exact copies.
Quote: No: behavior studies show that the historical experience of the original person is a contingent part of his/her consciousness.
That's even more irrelevant than the point I thought you were going to make. That is not an exact copy. It's a clone with different memories.
Quote: Therefore, even if computation is the necessary part of consciousness, the contingent part (the historical experience of the person) is required to replicate a brain.
Wrong. All you have to do is actually copy the brain.
Quote: Is this physically possible? No.
No, but it's completely irrelevant too.
Quote: Therefore the initial assumption is wrong and computation is not all that is required.
No, your argument is unsound.
Quote: Ask any person who makes a living from predicting the behavior of real people.
That's not evidence.
Quote: No, in those who want to build conscious computers.
We already have song-writing computers. Does song-writing develop the imagination in those computers?
Quote: Cloning doesn't produce exact copies.
Quote: So the information in genes is insufficient to reproduce the consciousness of biological organisms, but algorithms aren't?
Cloning is insufficient for a huge number of reasons. Clones aren't biologically identical, and even if they were they wouldn't have identical consciousnesses, so none of this relates in any way to the computational model.
Quote: I. Just. Told. You.
No. You. Did. Not.
Quote: Yes, of course. ETA: Sorry. Depends what you mean by the last instruction - if the last instruction executed was a taken jump or return, then no, a simple PC-relative lookup won't work. You'd have to use a different method. But it is computable regardless. This is much simpler than the halting problem, which is not computable.
What different method? What will find the last instruction even if it was a return or a jump?
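One possible "different method" that works across jumps and returns (a hypothetical toy VM sketched for illustration, not something specified in the thread): thread the previously executed instruction through the machine state instead of inferring it from PC-1:

```python
def run(prog):
    """Toy VM that carries the previously executed instruction in its state."""
    pc, last, answer = 0, None, None
    while pc < len(prog):
        instr = prog[pc]
        op, arg = instr
        if op == "JMP":
            pc = arg                   # after a taken jump, mem[pc - 1] at the
        else:                          # target is NOT the instruction just run
            if op == "WHAT_WAS_LAST":
                answer = last          # ...but the recorded state still is
            pc += 1
        last = instr                   # remember what we just executed
    return answer

prog = [("JMP", 2), ("NOP", None), ("WHAT_WAS_LAST", None)]
print(run(prog))   # -> ('JMP', 2); a PC-relative lookup would wrongly say NOP
```

This is also why it stays computable: the machine only ever needs a bounded piece of extra state, nothing like solving the halting problem.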
You cite this as an all-purpose proof for every different occasion, don't you?
Nobody is putting forward a contrary view. I've already said so a few posts upthread.
Nobody is saying that the knowledge of the experiences gives one the experience. Knowing about millionaires doesn't make me one. I'm saying that the experience of pain IS pain. You're adding a useless layer to pain.
You seem to have missed my post about algorithms being order dependent, which is equivalent to physical time dependence.
This should be intuitively clear to you since you live in the age of general relativity -- remember that whole "time dilation" thing? Yeah......
So your entire post here is just wrong, because you are wrong about computing and time dependence.
Theoretically the second list is possible, but for consciousness I would like to see evidence before I sign on the dotted line. The problem I see, again, is in trying to reproduce what occurs at the synapse. The rest is probably not that difficult, just a matter of knowing what links up where and when and at what frequency in the real world -- a monumental knowledge task but not a big engineering task. Dealing with all the modulations at the synapse, though is going to be a bear.
Also keep in mind, as I'm sure you do, when discussing Turing machines, we are discussing abstractions, ideals. Ideally we should be able to produce identical experiences. I doubt that we could ever pull that off in the real world, though.
I think 'consciousness being independent of our interaction with the world' is a different issue. My answer to that would be 'no'.
True, but I think it is fair to say that we could probably abstract all the time dependent processing in the brain and represent it in a Turing machine in abstract form in a time independent manner. It would not be easy, but I don't see any definite obstacle.
Consciousness, however, is not an abstraction, so whatever a theoretical Turing machine could or could not do has no bearing on consciousness itself necessarily as it is embodied in the real world.
But there is no reason to believe that time dependence is a necessary property of consciousness. Sure it is necessary with brains acting in the real world, but again I see no absolute problem with us abstracting the computations occurring in neurons in a time dependent fashion to a Turing machine where they can be implemented in a time independent way.
Turing machines have an endless tape where instructions are always remembered, so interrupting them is never a problem. What is important in neural processing is that steps occur in a particular order and that they integrate in a particular way. It doesn't matter if those steps are interrupted if the 'instructions are remembered'.
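The "instructions are remembered" point can be made concrete: if the intermediate state of a computation is preserved, the computation can be suspended and resumed anywhere without leaving any trace in the result. A minimal sketch (the stepwise summer is invented for illustration):

```python
def step(state):
    """Advance the computation one step; return False when it is finished."""
    if state["i"] >= len(state["values"]):
        return False
    state["total"] += state["values"][state["i"]]
    state["i"] += 1
    return True

state = {"i": 0, "total": 0, "values": [1, 2, 3, 4]}
step(state); step(state)        # run two steps...
frozen = dict(state)            # ...checkpoint the state (the "tape")...
while step(frozen):             # ...and resume arbitrarily later
    pass
print(frozen["total"])          # -> 10, exactly as an uninterrupted run
```

On this view, the absence-seizure cases below read like a brain that sometimes restores its checkpoint cleanly and sometimes does not.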
Take as a real world example what occurs with absence seizures. Children with absence epilepsy (there are several forms unfortunately, but I'm talking about benign childhood absence here) can have hundreds of seizures a day if untreated. When they have a seizure, they 'check out' for a few seconds and then resume whatever activity.
To demonstrate this we often put them in epilepsy monitoring units (well, I don't because I only see adults) and film them with an EEG running, while asking them to perform some activity -- a favorite is counting. The kids will start to count, have a seizure, and then one of four things typically occurs. They may continue counting from the place where they left off prior to the seizure (say they stopped at 3, they will begin counting again at 4); they may start counting again at one, or at some other random number; they may become confused and forget what they were doing all along; or sometimes they will continue counting at a number they would have reached if they continued at the same pace with which they started as though the seizure never occurred.
Here's a situation in which consciousness is turned off, then returns, and there is no sense that time has passed or that anything is missing. The same is theoretically possible with a Turing machine. I don't see why you couldn't start it and stop it anywhere along the way, as long as the proper relationships are maintained in the processing.
Yes, that is the question. My bet is on the computational properties, but the computation must obviously be done in a very particular way in the real world.
Right, I don't think we should let it bother us either. Again, I don't think that time dependence is an intrinsic property of the computation that neurons do. If someone has an argument as to why time dependence is an intrinsic property and why they cannot be emulated on a Turing machine I would be interested to hear it.
Time dependence is a real world issue as far as I can tell.
Quote: So are you saying that an equivalent algorithm evaluated on any substrate will not necessarily produce this conscious experience that I have?
But earlier you claimed very specifically that it did.