Robot consciousness

I need to retreat on a point. I've been proceeding as if the Church-Turing thesis were proven. It is not; it is assumed to be true.

Turing himself wrote about computability models that aren't Turing-reducible. Hypercomputation generally involves introducing elements like oracles (something that somehow magically knows the right answer), infinities (performing an infinite amount of work in finite time), infinite superpositions in QM, and so on. In short, nothing that appears to be physically realizable. Mark Burgin claims to have a physically realizable super-recursive algorithm, but his results are not accepted in the mathematics community.
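For illustration, here's a minimal sketch of why an oracle is "magic": the wrapper below would decide the halting problem, but only if halts_oracle could somehow be implemented -- and Turing proved no program can implement it. Both function names are hypothetical, invented purely for this sketch.

# Hypothetical oracle: "something that somehow magically knows the right answer."
def halts_oracle(program_source: str, program_input: str) -> bool:
    # Turing's halting-problem proof says no algorithm can fill this body in.
    raise NotImplementedError("no physically realizable implementation known")

def classify(program_source: str, program_input: str) -> str:
    # Given the oracle, the rest is an ordinary Turing-computable wrapper.
    return "halts" if halts_oracle(program_source, program_input) else "runs forever"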

So of course what I've been arguing is not proven without exception. However, so far as I am aware no one has suggested a physically realizable non-Turing computation model. Every suggestion breaks some currently known feature of physics. That is not to say what we know about physics won't change.

But what to do? Argue what is possible based on what we know, or throw that out and argue anything at all? Based on what we know, the brain is computable. New evidence could change that understanding. Hand-waving and personal-incredulity arguments aren't interesting (to me).

I wish DrKitten was here. She's quite knowledgeable about computability, and would have slapped me around for my earlier mistakes.
 
However, so far as I am aware no one has suggested a physically realizable non-Turing computation model. Every suggestion breaks some currently known feature of physics. That is not to say what we know about physics won't change.

I could be wrong, but my understanding is that Penrose does just this, although he goes about it the opposite way, by asserting that our consciousness is a non-Turing model and leaving the reader to figure out just what that model might be.

I think it has something to do with microtubules giving human neurons access to quantum computing resources that would allow us to implement genuine non-deterministic state machines, or to gain whatever other advantages a quantum computer might have.

I wish DrKitten was here. She's quite knowledgeable about computability, and would have slapped me around for my earlier mistakes.

Wait... drkitten is a woman?
 
I need to retreat on a point. I've been proceeding as if the Church-Turing thesis were proven. It is not; it is assumed to be true.

Turing himself wrote about computability models that aren't Turing-reducible. Hypercomputation generally involves introducing elements like oracles (something that somehow magically knows the right answer), infinities (performing an infinite amount of work in finite time), infinite superpositions in QM, and so on. In short, nothing that appears to be physically realizable. Mark Burgin claims to have a physically realizable super-recursive algorithm, but his results are not accepted in the mathematics community.

So of course what I've been arguing is not proven without exception. However, so far as I am aware no one has suggested a physically realizable non-Turing computation model. Every suggestion breaks some currently known feature of physics. That is not to say what we know about physics won't change.

But what to do? Argue what is possible based on what we know, or throw that out and argue anything at all? Based on what we know, the brain is computable. New evidence could change that understanding. Hand-waving and personal-incredulity arguments aren't interesting (to me).

I wish DrKitten was here.

Ask and ye shall receive -- PM and it shall be opened unto you. We're talking about models of computation that are not Turing equivalent? Yes, they exist, there are a lot of them, and no one pays any attention to them because no one can figure out either how to build one or even what we'd use them for.

With that said, all analog computation is inherently non-TM-equivalent due to the butterfly effect. In that sense, every system we build is actually more powerful than a TM, and we spend a tremendous amount of time and effort lobotomizing it to make it "only" as powerful as a TM, under the guise of noise resistance. If you have any continuous quantities at all (and as far as I know, most physicists are still fairly confident that space and time are continuous), then you have the potential to pack an infinite amount of information into a single data point. Since the brain involves timing information, the brain is potentially non-computable.
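A toy sketch of the "infinite information in one data point" idea (my illustration, not part of the original post): any finite bit string can be packed into a single real number, and the only thing stopping us from reading it back is finite precision -- here played by 64-bit floats.

# Pack a bit string into one number in [0, 1): bit i becomes the i-th binary digit.
def encode(bits: str) -> float:
    return sum(int(b) / 2 ** (i + 1) for i, b in enumerate(bits))

# Read the first n binary digits back out by repeated doubling.
def decode(x: float, n: int) -> str:
    out = []
    for _ in range(n):
        x *= 2
        bit = int(x)
        x -= bit
        out.append(str(bit))
    return "".join(out)

msg = "10110011" * 16                 # 128 bits in a "single data point"
x = encode(msg)                       # a 64-bit float keeps only ~52 of them
print(decode(x, len(msg)) == msg)     # False: finite precision ("noise") wins
print(decode(x, 40) == msg[:40])      # True: the leading bits survive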

No one takes this possibility seriously. At least, not while the bars are still open.

Similarly, you can build a non-TM system by including an oracle (essentially, a direct pipeline to God). We may have an oracle neuron in our heads. (I went looking for one a while ago and couldn't find it, but I got bored somewhere around the five hundred billionth neuron and lost interest.) This may, in fact, be how the "soul" magically intervenes in a dualistic framework.

But no one takes this possibility seriously. Even when the bars are closed, and you're standing around the parking lot afterwards, as long as the worm is still in the tequila bottle. In fact, I don't think I've even seen the possibility of an "oracle neuron" proposed in the literature, so if you decide to write this one up for the journals, please cite me.

A more realistic possibility is that chemistry is NOT simply applied QM, and that there are in fact factors that apply at a higher level that the physicists are as yet unaware of. But again this really just boils down to an argument from ignorance.

The most realistic possibility is that the brain is sensitive to other things that we are not aware of and that are not part of the computational framework, and that these things are a key component of consciousness. Here we're on much firmer ground, because we know that neurons are much more environmentally sensitive than TM components. I've been involved in experiments, for example, that involved putting my head in a strong EM field and seeing if I could think clearly -- and the answer, perhaps surprisingly, is "almost so." A hardcore neurocomputationalist would expect that the chemical processes would be unchanged and so there would be no measurable difference. A hardcore "the brain is complicated, so you **** with it and it breaks" type would expect significant failure. Instead, my performance dropped a few but very reliable percent. This tells me that the EM field generated by the brain itself is an important part of how it operates, and so if we just modelled the electrochemical connections of the neurons, we would not be able to make a brain.

This would make the brain at least "apparently" nonalgorithmic, because we weren't tracking all the inputs and outputs of the system when we measured it. Whether the EM field could act as a form of oracle neuron is an open question, in part because no one has asked it....
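For what "a few but very reliable percent" means statistically, here's a toy simulation (the numbers are invented for illustration, not drkitten's data): a small mean drop that recurs across sessions stands out sharply from both "no effect" and "catastrophic failure."

import random, statistics

random.seed(1)
n_sessions = 30
baseline = [random.gauss(100.0, 5.0) for _ in range(n_sessions)]  # scores without the field
in_field = [b - random.gauss(3.0, 1.0) for b in baseline]         # reliable ~3% drop

diffs = [b - f for b, f in zip(baseline, in_field)]
mean_drop = statistics.mean(diffs)
std_err = statistics.stdev(diffs) / n_sessions ** 0.5
print(f"mean drop {mean_drop:.1f}%, t ~ {mean_drop / std_err:.1f}")  # t >> 2: small but reliable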
 
With that said, all analog computation is inherently non-TM-equivalent due to the butterfly effect. In that sense, every system we build is actually more powerful than a TM, and we spend a tremendous amount of time and effort lobotomizing it to make it "only" as powerful as a TM, under the guise of noise resistance. If you have any continuous quantities at all (and as far as I know, most physicists are still fairly confident that space and time are continuous), then you have the potential to pack an infinite amount of information into a single data point. Since the brain involves timing information, the brain is potentially non-computable.
I'm snipping this part out because it is the part that I don't understand.

My thinking was along the lines of Feynman, stating that we really, really know that at the lowest levels the universe is composed of quanta -- discrete packets. To me, this strongly implies that the universe is in fact computable. I know that some forms of hypercomputation assume, say, analog quantities such as real numbers, and this is non-TM, but the claim is that so far this doesn't really represent the physics. I.e., we can talk about the distance between two particles as being represented by a real number, and hence you have infinite data, but the reality of physics strictly limits that due to Heisenberg. However, we have a quote from Hawking (which some physicists argue against): "Although there have been suggestions that space-time may have a discrete structure I see no reason to abandon the continuum theories that have been so successful." Others, such as Wheeler, strongly disagree with him.

So the next question for me becomes: does the sum of that Planck-scale activity, discrete or not, become effectively discrete, and thus computable? Your bar comment suggests the answer is "very probably," if I read you right.
 
This tells me that the EM field generated by the brain itself is an important part of how it operates, and so if we just modelled the electrochemical connections of the neurons, we would not be able to make a brain.

This would make the brain at least "apparently" nonalgorithmic, because we weren't tracking all the inputs and outputs of the system when we measured it. Whether the EM field could act as a form of oracle neuron is an open question, in part because no one has asked it....
Sorry, I wanted to respond to this as well. I don't think any of us are claiming that modelling only the neurons would necessarily be sufficient. There are other factors, not least the chemical soup they swim in. A better wording would be "modelling the brain," which leaves open whatever other influences there are. I don't see any reason to think those influences, known or unknown, would be nonalgorithmic unless it turns out the universe is not discrete, and occurrences at that level act on the brain in a meaningful way.
 
My thinking was along the lines of Feynman, stating that we really, really know that at the lowest levels the universe is composed of quanta -- discrete packets.

Well, Feynman's wrong. :D After all, what does he know -- he's only the fourth or fifth smartest human ever to have lived on this planet!

Seriously, though. The universe seems to be (beyond reasonable doubt) composed of discrete packets of "stuff," but that doesn't mean that the framework of the universe (time and space) is. Some physicists have proposed quantized theories of time and space, but most physicists do not accept them. (Physics is a rough gig, and sometimes you have to propose all sorts of gibberish to get a paper out. That doesn't mean that even the author necessarily believes the gibberish -- even when the gibberish turns out to be true. See "Cat, Schrödinger's.")

I know that some forms of hypercomputation assume, say, analog quantities such as real numbers, and this is non-TM, but the claim is that so far this doesn't really represent the physics. I.e., we can talk about the distance between two particles as being represented by a real number, and hence you have infinite data, but the reality of physics strictly limits that due to Heisenberg.

Er, no. Heisenberg's principle simply introduces probabilistic errors into the system; it doesn't neutralize the representational capacity. Suppose that I place two particles a distance x apart, and another two particles a distance x+e apart, where e is less than the Heisenberg limit for the system. While I can't recover the original distance x, I do know (and Heisenberg doesn't contradict me) that it is probable that I will measure the second pair as being more distant than the first.

This allows for the possibility of repeated measurement for arbitrary precision. I can encode a single point with arbitrary precision, and you can measure it with moderate precision. I can then encode the same datum into a different point (with arbitrary precision) and you re-measure until you've achieved better accuracy in representation than a TM can get.
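Here's a toy version of that repeated-measurement scheme (my illustration; the values are made up): each reading is corrupted by unbiased noise, but the average of n readings narrows in on the encoded value like sigma/sqrt(n), so more re-measurement buys more digits.

import random

hidden = 0.7234981   # the "arbitrary precision" encoded value
sigma = 0.05         # per-measurement (Heisenberg-like) noise

def measure() -> float:
    return hidden + random.gauss(0.0, sigma)

for n in (1, 100, 10_000):
    estimate = sum(measure() for _ in range(n)) / n
    print(f"n={n:6d}  estimate={estimate:.5f}  typical error ~ {sigma / n ** 0.5:.5f}")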

Or you could not bother and assume that the brain is noise-tolerant instead of noise-reliant.

However, we have a quote from Hawking (which some physicists argue against): "Although there have been suggestions that space-time may have a discrete structure I see no reason to abandon the continuum theories that have been so successful." Others, such as Wheeler, strongly disagree with him.

So the next question for me becomes: does the sum of that Planck-scale activity, discrete or not, become effectively discrete, and thus computable? Your bar comment suggests the answer is "very probably," if I read you right.

Yup. Although this kind of stuff is theoretically possible, it would essentially turn most of neuroscience on its head if it were true. Among other things, it would mean that the most important part of brain effects is not the large-scale neural pulses that we can measure, but the tiny microscale things that we can't, which makes mapping techniques like EEGs and PET scans not just misguided but actively wrong. (We're literally judging a brain by the least significant part of its activity....)

Fun to speculate about just before last call, but nothing more.
 
Sorry, I wanted to respond to this as well. I don't think any of us are claiming that modelling only the neurons would necessarily be sufficient.

Well, someone wrote this a couple of pages earlier:


I see that I can't do that in a way that you'll agree with, since you apparently don't think that what neurons do is 'computation'. I'll point out that neuroscientists such as Krasnow et al. are using computational models to produce extremely precise models of neural behavior. [snip]

Second, on the neuron front. We have identified nothing in a neuron's behavior that is not computational. The fact that we can simulate it proves it is computational. This is such a basic point that I think you must have some weird definition of 'computation' that is not actually used in information science. To be clear, by the definition the rest of us are using, a lever is computational. A set of equations is computational. An algorithm on a computer is also computational. "Computational" has nothing to do with silicon chips or computers, except that in practice computers sure do computations quickly. But neurons do computations too.

Anyway, a single neuron in a petri dish responds to inputs as they come.

I wanted to make sure that people understood that simply modelling neurons in a petri dish was not going to be sufficient to build brains.
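For concreteness, the kind of single-neuron simulation at issue in that quoted claim looks roughly like this -- a minimal leaky integrate-and-fire sketch (textbook-style parameters chosen for illustration; real models such as Hodgkin-Huxley are far richer, and nothing here touches networks or the chemical soup):

# Membrane voltage leaks toward rest, integrates input current, and spikes
# (then resets) when it crosses threshold. Units are roughly mV and ms.
def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0, resistance=10.0):
    v, spike_times = v_rest, []
    for step, i_in in enumerate(input_current):
        v += (dt / tau) * (-(v - v_rest) + resistance * i_in)
        if v >= v_thresh:
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

print(simulate_lif([2.0] * 100))   # constant drive -> a regular spike train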
 
I have no idea who that idiot is.

One of my big weaknesses in forum posts is not writing precisely. That argument is supposed to be extended to the networks the neurons make, the neurotransmitters, etc.
 
Thanks, drkitten.

So, among other things, this leaves us wondering whether noise plays an important role in the workings of the brain. And if it does, whether we could model that noise on a computer in sufficient detail to play the same role.

~~ Paul
 
So, among other things, this leaves us wondering whether noise plays an important role in the workings of the brain. And if it does, whether we could model that noise on a computer in sufficient detail to play the same role.

More or less. And my off-hand answer would be "no, it doesn't; it's a hindrance there as elsewhere," which renders the second question irrelevant.

But there's definitely an interesting line of research there if anyone wants to chase it and isn't worried about the possibility [probability] of not getting tenured.
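A toy picture of "noise-tolerant instead of noise-reliant" (my illustration, with made-up values): a thresholded, digital-style unit gives the same answer across a wide band of noise -- the "lobotomizing" trade of analog detail for reliability mentioned earlier in the thread.

import random

# Fraction of trials on which a unit with a fixed threshold fires,
# given a clean input signal plus Gaussian noise of width sigma.
def firing_rate(signal: float, sigma: float, threshold: float = 0.5,
                trials: int = 100_000) -> float:
    fires = sum(signal + random.gauss(0.0, sigma) > threshold for _ in range(trials))
    return fires / trials

for sigma in (0.01, 0.05, 0.10):
    print(f"sigma={sigma:.2f}  firing rate {firing_rate(0.8, sigma):.4f}")  # stays ~1.0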
 
drkitten, not having personally read Penrose's book, isn't this along the lines of what he was proposing? Basically, QM effects making it into the macro level, causing non-algorithmic behavior?

Lucky for him he didn't need to get tenure through that book.
 
drkitten, not having personally read Penrose's book, isn't this along the lines of what he was proposing? Basically, QM effects making it into the macro level, causing non-algorithmic behavior?

Somewhat. He was proposing that the QM behavior dominates at the synapse level -- i.e. whether or not a synapse triggers is not driven by the electrochemical potential, but by the Mind of God.

But he's not really talking about hypercomputation. He's talking more about God actually running a deterministic universe below the level where Heisenberg says He's just playing craps.

Lucky for him he didn't need to get tenure through that book.

Oh, you know it.

I used to be a great admirer of Penrose. Then he stopped doing math.
 
Who's to say that there is no "what it's like" to be the stock market, or a video camera-and-monitor, or a heater-and-thermostat?

The same reason we know that a Barbie doll isn't going to get up and dance the fandango just 'cause it's got legs.

I sometimes run across the "thermostats could be conscious" line from computer folks who haven't read about the brain.

It's bad reasoning.

It's like saying, well, space shuttles move people around, and skateboards move people around, and I could fly off into orbit on a space shuttle, so maybe I could fly off into orbit on a skateboard.

This line of thinking ignores the fact that flying off into orbit requires equipment that the skateboard ain't got.

There's an idea floating around that consciousness is an "emergent property", but that's not accurate. If it's emergent, then it's an emergent feature or function.

Emergent properties are like the whiteness of clouds. Water droplets aren't white. But get a bunch of them in a cloud, and the cloud appears white, even though its constituents are not white. Compare that with a brick wall, which appears whatever color the bricks are.

So there's this notion that consciousness "emerges" simply by virtue of having a bunch of neurons in one place.

But that's not the case. Consciousness is something the brain does, like vision or motor coordination or regulating breathing. It's a specialized function.

We know thermostats aren't conscious for the same reason we know they don't breathe (unless, for some reason, a breathing apparatus is built into them). They don't have the equipment for it.
 
I'm running out of time to keep up with this thread. So, I'll perhaps unfairly only respond to a bit. However, it is the crux.

Okay, certainly it has not been proven, but it follows from everything we know about physics. Yes, physics. Physics is, as far as we know, computational. Certainly QM is -- our predictions and calculations have reached a level of precision that we have never achieved in any other field.

From physics you get to chemistry. Again, chemistry is computational, so far as we can tell. We conclude this in two different ways. First, we observe that we can compute everything that we have seen so far. Second, reductionism. Chemistry reduces to physics, i.e., to QM. Put another way, QM in a macro environment is described as chemistry. And, as we know from Turing, any combination of computable elements is also computable.

QM is computational at a statistical level. There is genuine randomness with regard to specific instances.
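A sketch of that distinction (the probability value is assumed, purely for illustration): the statistics of a quantum measurement are perfectly computable -- you can sample them with a pseudorandom generator -- even though no algorithm tells you which outcome a single run will produce.

import random

p_up = 0.36   # assumed Born-rule probability |amplitude|^2 for "up"

outcomes = ["up" if random.random() < p_up else "down" for _ in range(100_000)]
print(outcomes.count("up") / len(outcomes))   # ~0.36, matching the QM prediction
# ...but nothing in QM (or in this program) predicts outcomes[0] in particular.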
 
"Explanatory" - I don't want to be one of those people who grasp a word out of context, but I think you probably chose this word well.

QM is not a good explanatory model of chemistry. No one uses QM to do chemistry, except in certain circumstances. There are far better models.

Yet, there is no doubt that chemistry is merely the sum behavior of QM.

Just because we can't right now come up with an easy computational model for language in no way means that language is not computational.

This is where my assertion of dualism comes in. You are saying the brain is chemicals and networks, both of which we have extraordinary evidence are computable, and then you say the sum of the parts is not computable. It just doesn't follow without a dualist element.

I'm not saying language is not computational.

But we can't yet say that it is.

Now, here you seem to be saying that there is no randomness at any level of granularity in the physical world. A pretty bold statement. Care to back it up?

Of course chemical reactions are extremely predictable.

But to view neuronal activity as an idealized chemical reaction is unjustified. The brain is a biological system, not a test tube.

Some have even hypothesized that a certain degree of randomness must be inherent in biological systems for evolution to occur. Not only for the basic process, but because any totally non-random critter is bound to get wiped out by competitors and/or predators.

And anyway, if you're going to accuse me of introducing a dualist element, I'd appreciate it if you'd explain what it is, rather than saying I "must be" positing one when I haven't actually posited one. You can say I'm wrong, but please, drop this whole dualist thing.

In any case, how important is this to our thought experiment?

Does it really impact the question of neural speed?
 
Piggy, here you go again, making assertions about a field you know little about. The network of neurons and the information stored in the neurons is the programming. It's a very basic tenet of information theory. Daisies are so bad an analogy to a computational brain that I'm astonished that you are suggesting they are in any way a rebuttal to what I am saying.

Well that's all well and good, but I'm talking about the brain.

And you're over-reaching in your application of information theory.

My point with the daisies was simply that they don't have the equipment to generate consciousness.

Do you really want to dispute that?
 
