
Robot consciousness

Wrong. That's a result from the 1990s, IIRC.


A Turing Machine (of the type used in computers; he did describe others) is a state-based construct, while a Neural Network is based around changing the connections within the machine itself. Our model for consciousness is the human brain, which, as far as anyone can tell, employs both techniques and reorganizes connections on the fly.

It's certainly true that you can solve the connections with a state-based Turing machine, but without sufficient processing power you won't be able to keep up with how they evolve over time. That changes your results for future computation, which has clear implications for the question at hand.
 
A Turing Machine (of the type used in computers; he did describe others) is a state-based construct, while a Neural Network is based around changing the connections within the machine itself.

... which are stateful. Specifically, each neuron has associated with it an activation state and a connection matrix defining the current state of connection weights between it and each other neuron in the model. (Of course, in practical implementations, most weights are zero and therefore not directly represented).
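To make the "stateful" point concrete, here's a minimal sketch (Python, with made-up sizes and a toy plasticity rule, not anyone's actual brain model) in which the connection weights are just more state, updated each step alongside the activations:

```python
# A toy sketch, not anyone's actual brain model: a tiny recurrent network in
# which the "changing connections" are just more state, updated each step
# alongside the activations.
import numpy as np

rng = np.random.default_rng(0)
n = 5                                    # number of neurons (arbitrary)
activation = rng.random(n)               # per-neuron activation state
weights = rng.normal(0.0, 0.1, (n, n))   # connection matrix (dense here; real nets are mostly zeros)

def step(activation, weights, lr=0.01):
    """Propagate activations, then nudge the weights with a toy Hebbian rule.
    'Reorganizing connections on the fly' just means the weight part of the
    state changes too -- the whole thing remains stateful."""
    new_act = np.tanh(weights @ activation)
    new_wts = weights + lr * np.outer(new_act, activation)
    return new_act, new_wts

for _ in range(3):
    activation, weights = step(activation, weights)
```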

Our model for consciousness is the human brain, which, as far as anyone can tell, employs both techniques and reorganizes connections on the fly.

Reorganizing connections on the fly is still stateful.
 
Piggy,

you wouldn't necessarily have to do time dilation. You could buffer the inputs, or you could present the robot/brain with a simulated world. I think everybody already agrees that if you present our robot brain with a green triangle for a brief enough period, it'll struggle to get much of an impression of it. You might try to cope by arranging subsystems so that the ones able to handle faster input get to access the datastream first, so maybe colour detection gets a look-in, but the robot doesn't know it's a triangle. It may be that the robot is implemented in such a way that it doesn't bother to pass on incomplete information that isn't around long enough to be worth worrying about. After all, if the higher robot brain functions were bothered by every fleeting impression, they might become overloaded.
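A toy illustration of that "faster subsystems get the first look" idea (the subsystem names and exposure thresholds below are invented for the example, nothing more):

```python
# Toy sketch: each subsystem needs a minimum exposure time before it can
# report anything; a brief stimulus only registers with the fast ones, and
# the incomplete data is simply dropped rather than passed upward.
SUBSYSTEMS = {                 # hypothetical minimum exposure needed, in ms
    "colour_detection": 20,
    "shape_detection": 120,
    "object_recognition": 400,
}

def perceive(stimulus, exposure_ms):
    return {name: stimulus
            for name, needed_ms in SUBSYSTEMS.items()
            if exposure_ms >= needed_ms}

print(perceive("green triangle", 50))    # only colour_detection gets a look-in
print(perceive("green triangle", 500))   # all three subsystems report
```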
 
Yes, I figured as much. There are dozens of I-know-nothing-of-QM-but-want-to-post-anyway threads already. Why don't you go join one of them instead?


Link? I could probably educate some of the people on quantum theory, having studied it for the last three years during my degree. What's your beef with Penrose?
 
This way of presenting the slowing-down question seems to be a question of the finite information processing capacity of any brain/computer. If we are chucking 1 TB/sec of data at a brain/computer that can only cope with 1 GB/sec, some information is going to go straight in the bin, if it copes at all. At each stage of processing, some information may go in the bin in order to keep up. By slowing the brain/computer down, or speeding up the input, you clearly make the problem worse.
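A sketch of that "straight in the bin" behaviour, using made-up capacity figures just to show the shape of the problem:

```python
# Toy sketch: a processing stage that can only handle `capacity` items per
# tick; everything beyond that goes straight in the bin.  Slowing the
# processor (or speeding up the input) only worsens the ratio.
def process_tick(incoming, capacity):
    kept = incoming[:capacity]
    dropped = len(incoming) - len(kept)
    return kept, dropped

arriving = [f"sample_{i}" for i in range(1000)]       # 1000 items this tick
kept, dropped = process_tick(arriving, capacity=1)    # stage copes with 1
print(len(kept), dropped)                             # 1 999
```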
 
We know from experimentation that not all of the data processed by the brain is made available to consciousness.

I'm not arguing otherwise. I'm saying that consciousness (I'm beginning to hate this word, as it's so loaded) is just more information processing happening in the brain. There is no question that there are different levels of processing.

That does not mean that the types of computers we now have will eventually be able to simulate consciousness by themselves.

No one is claiming they can! Why are you even mentioning this?

Nothing has been said about slowing down the inputs.

What?! I thought this was an integral part of your experiment:

This time, just before we flash the image of the green triangle, we turn on the machine to slow his brain down. We also increase the length of time the green triangle is on the screen accordingly. (I have no idea how long that would be -- perhaps our experimenters have to go into suspended animation during this period, or perhaps their descendants come back to complete it, or maybe robots are doing the test, but we'll ignore that.) Just after the image changes back to a red dot, we turn off the machine and Joe's brain runs at normal speed again.
 
No, not in this case. Eyes are part of the brain, wired right into it. Remember, what's posited is that impulses traverse axons in an average time of 1 second. That includes the visual nerves.
Ok, I misread that part. So perception is slowed along with cognition, but you still let the system interact with another element (the physical world) that isn't slowed down along with the brain/AI/pencil. This of course alters the result of the computation.
That's not the question, though; the original post is about consciousness in general, not about consciousness that has external sensors while being slowed down.
 
This way of presenting the slowing-down question seems to be a question of the finite information processing capacity of any brain/computer. If we are chucking 1 TB/sec of data at a brain/computer that can only cope with 1 GB/sec, some information is going to go straight in the bin, if it copes at all. At each stage of processing, some information may go in the bin in order to keep up. By slowing the brain/computer down, or speeding up the input, you clearly make the problem worse.
Well, that's what timestamps are for. You log timestamped input data and then your algorithms will give you the same answer, regardless of the actual speed of computation (if you did everything right, that is). Of course, this severely limits a system's capability for interaction; feedback into the real world is not possible. Also, this method obviously works only for finite timespans.
This is where simulations come in handy. With a simulator you can run the world the agent lives in arbitrarily slowly and thus won't have the problem of data piling up. If you've got a superfast computer running both the simulation and the agent, the agent might "live an entire life" in only 10 real milliseconds. If your computer is a little slower, the very same computation might take thousands of years. But the computation, and thus the consciousness (if any) of the agent, will be exactly the same in both cases.
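A minimal sketch of the timestamp point (a made-up event log and a deliberately silly delay knob): the result depends only on the logged timestamps, not on how fast the loop itself happens to run:

```python
import time

# Timestamped input log: (simulated_time_sec, value).  The computation reads
# the logged times, so the answer is identical whether the loop runs flat out
# or is artificially slowed to a crawl.
event_log = [(0.0, 1), (0.5, 4), (1.0, 9), (1.5, 16)]

def run_agent(log, artificial_delay=0.0):
    total = 0.0
    for t, value in log:
        time.sleep(artificial_delay)   # how slowly we compute...
        total += value * t             # ...has no effect on what we compute
    return total

assert run_agent(event_log) == run_agent(event_log, artificial_delay=0.01)
print(run_agent(event_log))            # 35.0 either way
```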
 
Yup, being somewhat prudent here, I would say that free will and agency are the two most significant attributes of consciousness.
That's news to me. Apologies for being flippant, but do tell.... What is free will?
 
Well, that's what timestamps are for. You log timestamped input data and then your algorithms will give you the same answer, regardless of the actual speed of computation (if you did everything right, that is). Of course, this severely limits a system's capability for interaction; feedback into the real world is not possible. Also, this method obviously works only for finite timespans.
This is where simulations come in handy. With a simulator you can run the world the agent lives in arbitrarily slowly and thus won't have the problem of data piling up. If you've got a superfast computer running both the simulation and the agent, the agent might "live an entire life" in only 10 real milliseconds. If your computer is a little slower, the very same computation might take thousands of years. But the computation, and thus the consciousness (if any) of the agent, will be exactly the same in both cases.
I agree, though whether the brain uses time stamping, or some other method...
 
That's news to me. Apologies for being flippant, but do tell.... What is free will?


Wiki is your friend :) http://en.wikipedia.org/wiki/Free_will

Here's my take anyway...

The problem that I see with the traditional brain = mind = computer view is that it should mean that when a computer gets up to the processing ability of a human, it should become conscious. I very much doubt that it would. AI proponents frequently make that claim, but there is absolutely no evidence that machines can be conscious in any way, or could be in the future.

The problem with this is that the people who make these claims (that the brain is nothing more than a computer) assume that the neurons in the brain, and their connections, the synapses, work as fundamental units. So, for example, we have roughly ten billion neurons, each with about a thousand or ten thousand connections to other neurons, which gives us about 10^15 operations per second, with each neuron acting as a fundamental unit. The problem that I see with that is that neurons are much, much more complex than a simple switch. For example, consider a single cell like a paramecium: it swims around, it finds food and proteins, and if you suck it into a capillary tube it escapes, and if you do it again it will do it quicker and quicker each time, so it can learn; it can find mates and reproduce; it does all kinds of things. It does not have any neurons whatsoever; it is just one cell.

So if a paramecium can do all these things, why should we think that a neuron, or a synapse, is just a simple on/off switch? The capacity of a neuron seems much greater than that.

Then if you go down to the next level of the cell and ask how it does that: it uses its internal structure, the cytoskeleton, which seems like a structural support but is also the nervous system within each cell, mainly comprised of microtubules, which are hollow cylindrical polymers that seemingly are perfectly designed to be information processing devices at the molecular level. They are the nervous systems within each cell, and the nervous system within each neuron too. These proteins (that's what they are made of) switch much faster than neurons, and there are many, many more of them, ten million within each cell for example, switching within nanoseconds. So if we think of processing going down to that level, there is as much processing going on at that level as there is in the whole brain (according to the AI-type estimates). So if we think that information processing in the brain goes down to the level of microtubules, we roughly increase the information capacity from 10^15 to 10^27, so that pushes the goal way further for the AI people.
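Spelling out the arithmetic those orders of magnitude imply (the per-unit rates below are my own filled-in assumptions, since only rough counts are given above):

```python
# Back-of-envelope using the post's rough counts; the rates are assumptions
# filled in to make the sums come out, not established figures.
neurons           = 1e10   # "roughly ten billion neurons"
synapses_per      = 1e4    # "a thousand or ten thousand connections"
synapse_rate_hz   = 1e1    # assumed ~10 operations/sec per synapse
microtubule_units = 1e7    # "ten million within each cell"
unit_rate_hz      = 1e9    # "switching within nanoseconds"

synapse_level     = neurons * synapses_per * synapse_rate_hz      # ~1e15 ops/sec
microtubule_level = neurons * microtubule_units * unit_rate_hz    # ~1e26 ops/sec

# 1e26 is within an order of magnitude of the 10^27 quoted above; the exact
# figure depends entirely on which counts and rates one assumes.
print(f"{synapse_level:.0e} ops/sec at the synapse level")
print(f"{microtubule_level:.0e} ops/sec at the microtubule level")
```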

The problem with that is that even if we go down to that level and accept that microtubules are the fundamental units of consciousness, that still does not explain why we have experience, why we have emotions and feelings, what philosophers call qualia. That's just more reductionism, but it does not solve the problem. However, when you get down to the smallest level, the quantum level, everything changes and it is not deterministic with definite outcomes.

If the brain is a computer then our lives are deterministic, we are just reacting to things in our environment, meaning we should be completely predictable, just like a computer is. We would be merely helpless spectators watching our lives unfold in front of us.

As stated above, I take a similar view to Penrose et al., that there is something about our minds that is non-computable, something that is beyond the realm of computation. So we know things other than through algorithms, which is sort of related to Gödel's famous theorem (which, to be honest, I don't fully understand). The only thing that can give us this non-computable element in nature is a process that is not deterministic like other areas of science, and the only area of science that is not thought to be definite and deterministic is quantum physics, where mechanistic formalities are replaced by possibilities and indeterministic models.

Sorry for the tangent...
 
Then I guess you don't understand the term. There are a lot of well-known uncomputable problems.

I was using Roger's definition. If I understand him rightly, all physics is computable, therefore everything in the universe is computable. Whether there are abstract problems that aren't, I wouldn't know.
 
No one is claiming they can! Why are you even mentioning this?

I thought that was drkitten's position. I don't mean that the actual machines we have now could do it, but that computers which are TMs could, in theory, do it by themselves (i.e., without additional components that are not TMs) once we figure out how it's done.


What?! I thought this was an integral part of your experiment:

I would call that extending the duration of a constant input, not slowing it down.
 
Well, I've had to join the Dark Side, folks. And it was indeed that analogy-breaker which Philosaur mentioned -- and which I said I kept coming back to, wondering if I was making an error there -- that done me in.

Joe and Jane will report seeing a green triangle.

Our conscious robot, or our human under the influence of the Glacial Axon Machine, will retain consciousness. If they're still using their normal senses, the world will become an incomprehensible confusion. If they have stored memories and so can dream, and they're dreaming during the slowdown, they won't notice anything. Ditto if they're in a constant environment, such as our isolation booth with the screen.

I can still give the thumbnail of the brain process and links to some cool and sometimes bizarre research if anyone cares for it, but maybe that's for another thread.

Thanks for the patience, folks.
 
To boil one point down from the last few pages: a pure TM has no input/output. It's just an infinite tape of ones and zeros and a state machine that moves back and forth changing the bits. Consciousness (of the real world) requires at least an input from the real world.

A TM also does not run in real time. Time itself must be simulated in the TM.
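For anyone who hasn't met one, here is a minimal TM simulator sketch (a toy bit-flipping machine, not any particular published construction). Note that there is no I/O channel and no clock: "time" is nothing but the step counter.

```python
# Minimal Turing machine sketch: a tape, a head, and a transition table.
# There is no input/output channel and no real-time clock; "time" is just
# the number of steps taken.
from collections import defaultdict

# (state, symbol) -> (write, move, next_state); "B" is the blank symbol.
FLIP_BITS = {
    ("scan", "0"): ("1", +1, "scan"),
    ("scan", "1"): ("0", +1, "scan"),
    ("scan", "B"): ("B", 0, "halt"),
}

def run_tm(transitions, tape_input, state="scan", max_steps=10_000):
    tape = defaultdict(lambda: "B", enumerate(tape_input))
    head, steps = 0, 0
    while state != "halt" and steps < max_steps:
        write, move, state = transitions[(state, tape[head])]
        tape[head], head, steps = write, head + move, steps + 1
    return "".join(tape[i] for i in sorted(tape)), steps

print(run_tm(FLIP_BITS, "10110"))   # ('01001B', 6) -- the "output" exists only
                                    # as marks left on the tape after it halts
```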

These limitations however do not preclude a TM from producing consciousness. We need only note that our universe has produced consciousness that apparently does not require input from outside our universe.
 
The problem that I see with the traditional brain = mind = computer view is that it should mean that when a computer gets up to the processing ability of a human, it should become conscious. I very much doubt that it would. AI proponents frequently make that claim, but there is absolutely no evidence that machines can be conscious in any way, or could be in the future.
You are right about the lack of evidence. But as long as the functions of the brain have not been completely worked out, and no attempt has been made with a simulation of the size of a real brain, there is also no evidence to the contrary.

The problem with this is that the people who make these claims (that the brain is nothing more than a computer) assume that the neurons in the brain, and their connections, the synapses, work as fundamental units.
I believe that is for simplicity's sake. Few doubt that the brain is more complex than that, but it is hoped to construct consciousness without having to go all the way in complexity. That may well be false.

So if a paramecium can do all these things, why should we think that a neuron, or a synapse, is just a simple on/off switch? The capacity of a neuron seems much greater than that.
Again, this is for the sake of simplicity. We assume that whatever the neurons do apart from the well-known functions is done in order to keep the neuron alive. At this stage, it seems fairly far-fetched to assume that single neurons are involved in actual decision-making in the brain.

So if we think that information processing in the brain goes down to the level of microtubules, we roughly increase the information capacity from 10^15 to 10^27, so that pushes the goal way further for the AI people.
If you are right, yes. That certainly does not invalidate the idea that consciousness is a Turing Machine, it just makes it more likely to remain out of grasp.

The problem with that is that even if we go down to that level and accept that microtubules are the fundamental units of consciousness, that still does not explain why we have experience, why we have emotions and feelings, what philosophers call qualia.
Well, that is currently being assumed to be a higher-order function.

As stated above, I take a similar view to Penrose et al., that there is something about our minds that is non-computable, something that is beyond the realm of computation. So we know things other than through algorithms, which is sort of related to Gödel's famous theorem (which, to be honest, I don't fully understand). The only thing that can give us this non-computable element in nature is a process that is not deterministic like other areas of science, and the only area of science that is not thought to be definite and deterministic is quantum physics, where mechanistic formalities are replaced by possibilities and indeterministic models.
Penrose's idea that indeterministic models like QM are involved certainly makes "free will" moot, because that would involve true randomness in our decisions, which is not how we experience consciousness, and it is not how we act.
 
We're talking about a conscious robot with a slowed brain in the real world.

Thanks for clarifying where you see a problem. I still disagree, and here's why.

Firstly, there seems to be some ambiguity about what is meant by 'consciousness'. Is it to be aware of one's surroundings, or is it to be self-aware? I've been thinking the second meaning, not the first. I do not think processing of input data is a necessary or sufficient condition for consciousness.

Is my doorbell conscious? It indicates when the door button is pressed. And if you press the door button quickly enough, it will not be aware of it: the electrical impulse to the electromagnet will be insufficient to move the hammer. If I change the dimensions of the hammer, or the strength of the electromagnet, I can slow down the response time such that the bell takes longer and longer to trigger. How is that different from Piggy's argument about slowing the brain? I don't think anyone would consider the doorbell conscious, so processing of input data is not a sufficient condition for consciousness.

We also have good evidence that people lacking sensory input are still conscious. So vast swathes of input data can be lost, and consciousness kept. Of course, there's a difficulty of determining whether a person in a persistent vegetative state (for example) is conscious. That's because they have no output. We don't know if they have any sensory input, and we don't know whether they are still conscious. I see no reason why consciousness would be lost merely by losing sensory input -- one would still be self aware (cogito ergo sum and all).

If one's definition of consciousness includes real-time constraints, then of course slowing the processing platform down is eventually going to fail those constraints. But does that mean consciousness is lost altogether? Or does it just mean the timescale of the consciousness is changed? For example, when I was a child, a week seemed like an age, summer holidays went on forever, and birthdays were anticipated for weeks. Now a week doesn't seem so long, next summer doesn't seem so far away, and birthdays fly by. Am I conscious at a different timescale?

If you slowed the conscious platform down, so it could no longer perceive the diurnal rhythm, would it become conscious at, say, a geologic timescale rather than daily timescale? I don't see why not.

Would we recognize such a consciousness? I suspect not, but that doesn't mean it's not conscious. Or is that another implicit assumption in this discussion: it's not 'would it still be conscious?', but 'would we continue to recognize the consciousness?'
 
The problem that I see with the traditional brain = mind = computer view is that it should mean that when a computer gets up to the processing ability of a human, it should become conscious. I very much doubt that it would. AI proponents frequently make that claim, but there is absolutely no evidence that machines can be conscious in any way, or could be in the future.
I don't think most AI proponents are claiming that if you get a computer with the processing ability of the human brain and run Windows Vista on it, it will be conscious. Perhaps it would run at an acceptable speed, but I digress.

The problem with this is that the people who make these claims (that the brain is nothing more than a computer) assume that the neurons in the brain, and their connections, the synapses, work as fundamental units. So, for example, we have roughly ten billion neurons, each with about a thousand or ten thousand connections to other neurons, which gives us about 10^15 operations per second, with each neuron acting as a fundamental unit. The problem that I see with that is that neurons are much, much more complex than a simple switch. For example, consider a single cell like a paramecium: it swims around, it finds food and proteins, and if you suck it into a capillary tube it escapes, and if you do it again it will do it quicker and quicker each time, so it can learn; it can find mates and reproduce; it does all kinds of things. It does not have any neurons whatsoever; it is just one cell.
My impression was that paramecium learning was an ambiguous area.


So if a paramecium can do all these things, why should we think that a neuron, or a synapse, is just a simple on/off switch? The capacity of a neuron seems much greater than that.
It probably would be pretty complicated if one wanted to build an artificial model of a brain made up of artificial models of neurons.

Then if you go down to the next level of the cell and ask how it does that: it uses its internal structure, the cytoskeleton, which seems like a structural support but is also the nervous system within each cell, mainly comprised of microtubules, which are hollow cylindrical polymers that seemingly are perfectly designed to be information processing devices at the molecular level. They are the nervous systems within each cell, and the nervous system within each neuron too. These proteins (that's what they are made of) switch much faster than neurons, and there are many, many more of them, ten million within each cell for example, switching within nanoseconds. So if we think of processing going down to that level, there is as much processing going on at that level as there is in the whole brain (according to the AI-type estimates). So if we think that information processing in the brain goes down to the level of microtubules, we roughly increase the information capacity from 10^15 to 10^27, so that pushes the goal way further for the AI people.
OK, but we're still talking about a problem of building a big enough, fast enough computer.

The problem with that is that even if we go down to that level and accept that microtubules are the fundamental units of consciousness, that still does not explain why we have experience, why we have emotions and feelings, what philosophers call qualia. That's just more reductionism, but it does not solve the problem.
I agree absolutely.

However, when you get down to the smallest level, the quantum level, everything changes and it is not deterministic with definite outcomes.
I don't understand why this is important, or what this changes in terms of qualia.

If the brain is a computer then our lives are deterministic, we are just reacting to things in our environment, meaning we should be completely predictable, just like a computer is. We would be merely helpless spectators watching our lives unfold in front of us.
Yes, though we might well have the illusion that we had free will.

As stated above, I take a similar view to Penrose et al., that there is something about our minds that is non-computable, something that is beyond the realm of computation. So we know things other than through algorithms, which is sort of related to Gödel's famous theorem (which, to be honest, I don't fully understand). The only thing that can give us this non-computable element in nature is a process that is not deterministic like other areas of science, and the only area of science that is not thought to be definite and deterministic is quantum physics, where mechanistic formalities are replaced by possibilities and indeterministic models.
So there is a random element at the core. Are you saying that our will is in some kind of conscious control of those countless non-deterministic elements? In that case, would we expect to see some kind of against-expectation probability distribution in the quantum behavior of these neurons? What evidence do you have that free will exists, such that you need to go seeking it?
 
