
Robot consciousness

Would the robot be conscious if we ran the computer at a significantly reduced clock speed? What if we single-stepped the program? What would this consciousness be like if we hand-executed the code with pencil and paper?

Those strike me as profound-sounding questions that fall apart into semantics if you look a little more closely.

If your robot is possible at all, then consciousness is nothing more than a sequence of processes (chemical or electrical, in continuous time or not - I don't think that's important) that cause the computer/human to declare "Cogito, ergo sum", behave in certain ways, pass Turing tests, etc.

If so, consciousness isn't a rigidly defined concept (have you ever said "Cogito, ergo sum" out loud?), any more than "identity" or "north" is. And sure, you can always find situations in some gray area where it's hard to say whether something is conscious or not, just as for "identity" and "north" (do you have the same identity if you fall asleep and wake up? have amnesia? get a total organ transplant? what's north of the north pole? what's north of the galactic center?). But since it boils down to your choice of definition for the term, it's not all that interesting in the end.
 
We can't determine whether or not it's possible for a computer to be conscious. We can't even define consciousness in a way that allows us to find out where between Homo sapiens and a sea cucumber it arises from the brain (or if it does at all; not that I'm a dualist).

Asking if it's affected by clock rate is a bit premature.

Still, it's a nice geeky game to play, so if we posit that it's all in the physical neurons I'd have to go with no, as long as an increased clock rate doesn't change the operations of the computer. (Parts overheating and affecting other parts that way.) The thought of a paper and pencil consciousness makes it more understandable to me why some people attempt to escape into dualism though.

As always, there's an appropriate XKCD.
 
Since we have no way of determining the former, we'll have to go with the latter. We have to use some sort of Turing test.


I'm not convinced it's that simple. If the robot can be in non-REM sleep and pass the Turing test for that state, then maybe the robot could be in a vegetative state if the processor were running slowly enough.

~~ Paul
I guarantee that if it passed a Turing test running at 3 GHz, it would still pass it running at 3 Hz, so long as you concealed the slowdown from the person passing judgment. How could it be otherwise? The Turing test is a judgment based purely on the sequence of events output by your robot. Why would it matter at what speed it displayed the sequence of behaviors? In your example, would the sequence of actions performed by the robot be different based on the speed it was running?
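To make that concrete, here's a toy sketch (every name in it is my own invention, not any standard test harness): the judge only ever sees the ordered transcript, so the machine's speed never enters into the verdict.

```python
# Toy sketch of the point above: a Turing-test "judge" sees only the
# ordered transcript of replies, never the speed at which they were produced.
import time

def slow_machine(prompt: str, slowdown: float) -> str:
    time.sleep(slowdown)            # 3 GHz vs 3 Hz: only latency changes
    return f"reply to {prompt!r}"   # the reply itself is speed-independent

def transcript(slowdown: float) -> list[str]:
    prompts = ["hello", "are you conscious?"]
    return [slow_machine(p, slowdown) for p in prompts]

# The judge compares transcripts, not clocks: identical sequences,
# identical verdict, however long we (secretly) waited for each reply.
assert transcript(slowdown=0.0) == transcript(slowdown=0.01)
```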

I don't understand what you mean by the robot being in a vegetative state. That is different from running slowly, surely... as is sleep.

It strikes me that by restricting yourself to the aspects of consciousness that are testable you have excluded the aspects of consciousness that you are interested in.
 
sol invictus said:
If so, consciousness isn't a rigidly defined concept (have you ever said "Cogito, ergo sum" out loud?), any more than "identity" or "north" is. And sure, you can always find situations in some gray area where it's hard to say whether something is conscious or not, just as for "identity" and "north" (do you have the same identity if you fall asleep and wake up? have amnesia? get a total organ transplant? what's north of the north pole? what's north of the galactic center?). But since it boils down to your choice of definition for the term, it's not all that interesting in the end.
Note that in the OP I said "Let's assume that the appropriately organized software is conscious in a sense similar to that of human brains." Unless you want to deny every definition of consciousness, I think the questions in the OP are interesting.

~~ Paul
 
bjornart said:
Still, it's a nice geeky game to play, so if we posit that it's all in the physical neurons I'd have to go with no, as long as an increased clock rate doesn't change the operations of the computer. (Parts overheating and affecting other parts that way.) The thought of a paper and pencil consciousness makes it more understandable to me why some people attempt to escape into dualism though.
I think you're oversimplifying, too.

Imagine that the robot has a circadian clock that is independent of the underlying computer speed. Further imagine that it has neural synchronization/desynchronization clocks that are dependent on the underlying speed. Then it could compare the two clock rates and "know" how fast the neural clock was running. This might affect its state of consciousness.
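A minimal sketch of the mechanism I have in mind, with hypothetical names throughout: the robot counts its "neural" ticks against an independent real-time clock, so a slowdown is detectable from the inside.

```python
# Minimal sketch (hypothetical names): compare a circadian clock that runs
# in real time against a "neural" clock that ticks with the underlying CPU.
import time

def neural_ticks_per_second(tick_duration: float, window: float = 0.1) -> float:
    """Count simulated neural ticks against the independent wall clock."""
    ticks, start = 0, time.monotonic()
    while time.monotonic() - start < window:
        time.sleep(tick_duration)   # one "neural" tick; slower CPU => longer tick
        ticks += 1
    return ticks / window

fast = neural_ticks_per_second(tick_duration=0.001)
slow = neural_ticks_per_second(tick_duration=0.01)
# The robot can "know" it has been slowed down, and could change state on that:
assert fast > slow
```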

~~ Paul
 
shuttlt said:
I guarantee that if it passed a Turing test running at 3 GHz, it would still pass it running at 3 Hz, so long as you concealed the slowdown from the person passing judgment. How could it be otherwise? The Turing test is a judgment based purely on the sequence of events output by your robot. Why would it matter at what speed it displayed the sequence of behaviors? In your example, would the sequence of actions performed by the robot be different based on the speed it was running?
See my previous post.

I don't understand what you mean by the robot being in a vegetative state. That is different from running slowly, surely... as is sleep.
Perhaps if the neural clock speed was too slow relative to the circadian clock, the robot would enter a different state of consciousness.

I agree that if the robot's brain was completely self-contained and completely deterministic, with no clocks other than ones dependent on the underlying computer, and probably certain other restrictions, then clock speed should be irrelevant.

It strikes me that by restricting yourself to the aspects of consciousness that are testable you have excluded the aspects of consciousness that you are interested in.
Only if the really interesting aspects of consciousness are entirely out of reach objectively. The history of neuroscience research suggests that we might not be completely out of luck.

~~ Paul
 
But autonomous circuits may not allow consciousness.
-ENOSENSE

Asynchronous, not autonomous. Besides, our heads are stuffed with asynchronous circuits (albeit based on squishy meat instead of semiconductors), and they support consciousness just fine (unless you are a p-zombie :) ).

We have to distinguish the clock (or lack thereof) of the underlying processor from the clocks (or lack thereof) of the algorithms.

Classifying algorithms based on the number of clock cycles (if applicable) is not very useful most of the time (above a certain threshold) -- that is why we use big O notation instead.

The speed of the underlying processor might have an effect on the algorithms (see next post).

Of course it does -- if the platform the algorithm is running on is not powerful enough, then the algorithm will not work as advertised. Downclocking synchronous logic makes it less powerful by decreasing the number of ops/second it can perform.

You can perform realtime edge detection (10ms/frame or so) on any modern commodity desktop machine. Downclock it far enough, and you no longer can. The algorithm has not broken -- it can still edge detect -- but it will not be able to do so fast enough.
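Concretely (the numbers and names below are invented for illustration), "not fast enough" looks like a missed deadline, not a wrong answer:

```python
# Invented illustration: a realtime loop with a 10 ms/frame budget.
# The algorithm still works when downclocked; it just misses its deadline.
import time

FRAME_BUDGET = 0.010  # 10 ms per frame

def detect_edges(frame, per_pixel_cost: float) -> float:
    time.sleep(per_pixel_cost * len(frame))  # stand-in for the real work
    return 0.0                               # result content is unchanged

def realtime_ok(per_pixel_cost: float) -> bool:
    start = time.monotonic()
    detect_edges(frame=[0] * 100, per_pixel_cost=per_pixel_cost)
    return time.monotonic() - start <= FRAME_BUDGET

print(realtime_ok(per_pixel_cost=0.00001))  # fast machine: True
print(realtime_ok(per_pixel_cost=0.001))    # "downclocked": False, deadline missed
```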

If you deliberately cripple the platform, of course things will break. This is not some deep revelation, though.
 
Unless you want to deny every definition of consciousness, I think the questions in the OP are interesting.

Only insofar as they force us to define "conscious" more sharply.

But even if you consider that an interesting and worthwhile endeavour, consciousness is such a vague concept as it stands now that this seems like trying to run before you can walk. There are so many questions you could ask - are monkeys conscious, are rats, are people when they sleep, are babies or fetuses conscious, etc. - and in my opinion most of those cut closer to the bone and are more useful and interesting to think about than some imaginary robot.

Generally I think all such questions are more or less dead ends, because I don't think consciousness can be sharply defined. It's like life - people used to include all sorts of specific characteristics in the definition of life, like cellular structure, etc. But then we discovered things like viruses and started to think about more general possibilities. The whole thing is a continuum - a fascinating and interesting continuum, with lots of structure - and it's more fun to study the properties of that continuum than it is to try to decide where some arbitrary sharp line should be drawn on it.

Ultimately, the answer to the question "is this alive" or "is this conscious" boils down to the person asking the question - it's an arbitrary category the asker is seeking to impose, when no such sharp distinction necessarily exists.
 
Note that in the OP I said "Let's assume that the appropriately organized software is conscious in a sense similar to that of human brains." Unless you want to deny every definition of consciousness, I think the questions in the OP are interesting.

~~ Paul
This is surely not to deny EVERY definition of consciousness? I know I'm retreading a desperately old argument, but there are surely subjective aspects of consciousness that are impossible to test. I'm not sure how the OP makes sense without them, though. All you're left with is the question of whether, if you underclock a computer enough, it might stop operating as it does at normal speed and produce different output for a given input. How is that an interesting question?
 
nescafe said:
Asynchronous, not autonomous. Besides, our heads are stuffed with asynchronous circuits (albeit based on squishy meat instead of semiconductors), and they support consciousness just fine (unless you are a p-zombie :) ).
Are you sure groups of those circuits don't have to work synchronously in order to produce consciousness?

If you deliberately cripple the platform, of course things will break. This is not some deep revelation, though.
Is adjusting the speed crippling it?

~~ Paul
 
sol invictus said:
Ultimately, the answer to the question "is this alive" or "is this conscious" boils down to the person asking the question - it's an arbitrary category the asker is seeking to impose, when no such sharp distinction necessarily exists.
I agree, which is why I said that we should consider a robot with consciousness like that of a human. I'm not trying to specify what is conscious and what is not, nor am I asking whether monkeys, rabbits, or slugs are conscious.

If there is some Turing test that we feel is indicative of human consciousness, could we build a robot that could pass it, more or less? If you don't think there is a Turing test for humans, then I guess that does put the kibosh on it.

~~ Paul
 
What if we single-stepped the program? What would this consciousness be like if we hand-executed the code with pencil and paper?

If you did that, then it would no longer be operating the way a human brain operates which allows it to be conscious. You would obviate the conditions.
 
shuttlt said:
This is surely not to deny EVERY definition of consciousness? I know I'm retreading a desperately old argument, but there are surely subjective aspects of consciousness that are impossible to test. I'm not sure how the OP makes sense without them, though. All you're left with is the question of whether, if you underclock a computer enough, it might stop operating as it does at normal speed and produce different output for a given input. How is that an interesting question?
Neuroscientists have spent decades trying to understand when human consciousness is active and when it is not. The goal is an objective analysis of a subjective process. Wouldn't all these questions apply to a robot brain, too?

Talking about underclocking the computer does make it sound mundane, but it's more interesting when you wonder what it would be like if you hand-executed the software with pencil and paper. What exactly would be conscious then? If the algorithms are sensitive to clocks of various kinds, then perhaps there would be no consciousness at all.
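To make the pencil-and-paper point vivid, here is a toy machine of my own devising; every step is a table lookup you could do by hand, and the trace is identical however it is executed.

```python
# Toy machine (my own invention): each step is a simple table lookup you
# could execute with pencil and paper. Whatever the program "experiences"
# depends only on this sequence of states, not on whether silicon or a
# pencil produced it.

TRANSITIONS = {("awake", "tick"): "awake",
               ("awake", "dusk"): "asleep",
               ("asleep", "dawn"): "awake"}

def run(state: str, inputs: list[str]) -> list[str]:
    trace = [state]
    for symbol in inputs:
        state = TRANSITIONS[(state, symbol)]
        trace.append(state)
    return trace

# Single-stepped, hand-executed, or run at 3 GHz, the trace is the same:
print(run("awake", ["tick", "dusk", "dawn"]))  # ['awake', 'awake', 'asleep', 'awake']
```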

I understand how vague the definition of consciousness is, but I still find the question interesting.

~~ Paul
 
Piggy said:
If you did that, then it would no longer be operating the way a human brain operates which allows it to be conscious. You would obviate the conditions.
We're talking about a robot brain, which presumably doesn't operate the way a human brain operates. Do you think the method of execution of the algorithms matters? That's the question I'm asking.

~~ Paul
 
Are you sure groups of those circuits don't have to work synchronously in order to produce consciousness?
Asynchronous circuits do not have a master clock (or clocks). Any synchronization is an emergent property of how the system is interconnected, not something imposed by a clock running at a given speed. That is the primary difference -- asynchronous vs. synchronous has little to do with whether the system is or can be synchronized, but everything to do with how it is synchronized.
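If a software analogue helps (a toy of my own, with queues standing in for request/acknowledge wires, not a real hardware model): the two "circuits" below stay in lockstep purely because of how they are connected, with no clock anywhere.

```python
# Toy illustration of emergent synchronization: two "circuits" exchange data
# via a request/acknowledge handshake instead of a shared master clock.
import queue
import threading

def producer(req: queue.Queue, ack: queue.Queue):
    for value in range(3):
        req.put(value)      # assert "request" with the data
        ack.get()           # wait for acknowledge; no clock edge involved
        print(f"producer: {value} acknowledged")

def consumer(req: queue.Queue, ack: queue.Queue):
    for _ in range(3):
        value = req.get()   # latch data whenever it arrives
        print(f"consumer: got {value}")
        ack.put(True)       # acknowledge completion

req, ack = queue.Queue(), queue.Queue()
threads = [threading.Thread(target=producer, args=(req, ack)),
           threading.Thread(target=consumer, args=(req, ack))]
for t in threads: t.start()
for t in threads: t.join()
```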


Is adjusting the speed crippling it?

~~ Paul

The only answer to that is "maybe". :) Would your consciousness be crippled if your flicker fusion rate was 2.5 frames/min instead of 25 or so frames/second?
 
I agree, which is why I said that we should consider a robot with consciousness like that of a human. I'm not trying to specify what is conscious and what is not, nor am I asking whether monkeys, rabbits, or slugs are conscious.

But I think the first statement contradicts the second. You can't say it's "conscious like a human, but not human" without defining or assuming a definition for "conscious".

If there is some Turing test that we feel is indicative of human consciousness, could we build a robot that could pass it, more or less?

I suspect it's possible to build a robot that could pass just about any such individual test. But I also don't think there's any (non-arbitrary) test for consciousness, because I don't think it's a well-defined concept. Unless you simply define it via a test, but then - as with any sharp definition - we could easily answer your question.

For example we could use the Turing test, and then the questions in the OP are trivial to answer (no, no, and no).
 
Does consciousness also relate to autonomy? If you can turn down the clock speed, does the robot actually have consciousness? Surely one of the things that makes us conscious is autonomy, and by keeping the robot's speed under your control you are denying it that.

Steve
 
I love this question of clock rate and consciousness. Let's call "clock rate" "computer speed" to avoid the synchronous/asynchronous confusion. I have pondered this topic before when reading about Ray Kurzweil's technological singularity. Kurzweil argues that Moore's Law of ever-increasing computer speed extrapolates to super-intelligent conscious machines that we can upload our own consciousness to. Kurzweil believes that it will occur in his lifetime.

My argument against Kurzweil's extrapolation is that computer speed is not the only factor determining when, or if, a technological singularity is possible. If, at the time of the predicted singularity, computers are N times faster than today's, then we should be able to construct the same conscious computers today; they would simply arrive at their conscious decisions N times more slowly than at the singularity.

I currently see no signs of computer consciousness today at any speed. What is currently portrayed as A.I. is just big databases with fancy decision tree processing that give the illusion of intelligence. As computers get bigger and faster, the illusion gets bigger and more complex, but I believe that type of system is not headed toward consciousness.

I think consciousness and intelligence are not understood sufficiently at the biological level for a machine consciousness to be developed yet. I do think it might be possible some day to develop a machine intelligence, but I think we do not know enough to make a reasonable prediction of when that might happen.
 
If I ask the robot, "Are you conscious?" and it answers, "Yes.", how do I prove it wrong? :duck:
 
