• Quick note - the problem with YouTube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems let me know.

Robot consciousness

It seems that people are thinking about the purely computational aspects of consciousness without considering the subtleties of sensory input and other interactions with the external world.

Not at all -- the subtleties are included in the computational model.

Hence my point about slowing down the sensory input and other interactions as well.
 
Piggy said:
But one person would not be able to take all the "input" -- which gets real complex b/c in the brain much output=input -- for all the modules (assuming it can even be defined clearly enough for that) and feed them back into the system.
Why? If the robot brain runs on a more-or-less conventional computer, I don't see why it couldn't be hand-simulated.
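
Just to be concrete about what hand-simulation would involve: each step of a conventional machine is a mechanical look-up-and-update, the kind of thing a person with pencil and paper could carry out, only far more slowly. A toy sketch (Python, purely illustrative -- not a model of any actual robot brain):

```python
# Toy sketch: a tiny counter machine whose steps are simple enough to do
# by hand with pencil and paper. Purely illustrative -- the point is only
# that each step is a mechanical table lookup plus a small update.

# Each instruction is (opcode, register, jump_target).
PROGRAM = [
    ("inc", "a", None),    # 0: a += 1
    ("inc", "a", None),    # 1: a += 1
    ("dec_jz", "a", 5),    # 2: if a == 0 jump to 5, else a -= 1
    ("inc", "b", None),    # 3: b += 1
    ("jmp", None, 2),      # 4: jump back to 2
    ("halt", None, None),  # 5: stop
]

def step(state):
    """One mechanical step: look up the current instruction, update registers and pc."""
    op, reg, target = PROGRAM[state["pc"]]
    regs = state["regs"]
    if op == "inc":
        regs[reg] += 1
        state["pc"] += 1
    elif op == "dec_jz":
        if regs[reg] == 0:
            state["pc"] = target
        else:
            regs[reg] -= 1
            state["pc"] += 1
    elif op == "jmp":
        state["pc"] = target
    elif op == "halt":
        state["halted"] = True
    return state

state = {"pc": 0, "regs": {"a": 0, "b": 0}, "halted": False}
while not state["halted"]:
    state = step(state)   # each of these steps could be done by a person
print(state["regs"])      # {'a': 0, 'b': 2}
```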

~~ Paul
 
LissaLysikan said:
Circadian clocks are only relevant to living things that depend on time-of-day due to evolutionary pressures. For an AI they wouldn't matter. All that would matter then is whatever pressures caused the breakthrough to consciousness, and whether it continued in the simulation (the pencil/paper recreation).
You don't think there is any possibility that circadian clocks have anything to do with the way the brain processes information? I don't know, just askin'.

The rate of computations wouldn't matter at all - as long as all parts of the system are working at the same relative rates, absolute time wouldn't matter, and intuitively shouldn't. Why would it matter if it takes four seconds or 1/4 second for part A to respond to part B, if part B is working on the same time scale as part A?
I don't think it should, which is why I'm focused on the possibility that real-time clocks might be involved.
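
To make the relative-rate point concrete, here's a minimal sketch (purely illustrative): two lockstep modules compute exactly the same sequence of states whether a tick takes a millisecond or an hour, because nothing in the computation refers to the wall clock. The real-time-clock question is whether the brain has parts that do refer to it.

```python
import time

def run(ticks, seconds_per_tick):
    """Two toy modules, A and B, each reading the other's previous output.
    The wall-clock delay per tick is a parameter; the computed states are not."""
    a, b = 1, 1
    history = []
    for _ in range(ticks):
        a, b = a + b, a                # each module reads the other's last value
        history.append((a, b))
        time.sleep(seconds_per_tick)   # only the elapsed real time changes
    return history

fast = run(10, 0.0)
slow = run(10, 0.01)   # same relative rates, different absolute speed
assert fast == slow    # identical state sequences either way
```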

~~ Paul
 
shuttIt said:
I know what blindsight is. It's like I'm blind, but if you throw me a ball I can catch it....
Which means that consciousness of vision is separate from vision.

I think you've left wiggle room by including the bolded part of the first sentence. Forget woo souls that can float off and talk to angels. Let's say 'souls' are grounded in brains. What then? Can you imagine a brain with a 'soul' and one without?
Sorry, I shouldn't have used the word disembodied. I'm referring to some sort of consciousness that is separate from our brain, not simply a function of it. If there is no such thing, then p-zombies are incoherent because you cannot have a human brain without human consciousness.

~~ Paul
 
rocketdodger said:
Not at all -- the subtleties are included in the computational model.

Hence my point about slowing down the sensory input and other interactions as well.
Right, sorry, I missed the subtleties of your response. :D

~~ Paul
 
Which means that consciousness of vision is separate from vision.
For sure, one is no longer conscious of seeing... but what one is to deduce from that I'm not sure. It's pretty clear anyway that our consciousness only has access to some of the number crunching that goes on in our brains. By eliminating one of the subsystems that consciousness has access to, have you altered the consciousness, or just broken something it was connected to, perhaps directly, perhaps indirectly?

Sorry, I shouldn't have used the word disembodied. I'm referring to some sort of consciousness that is separate from our brain, not simply a function of it. If there is no such thing, then p-zombies are incoherent because you cannot have a human brain without human consciousness.

~~ Paul
If p-zombies would necessarily be conscious, then p-zombies certainly are incoherent. In most situations I'm happy to assume that this is the case just to keep things moving. Are there better arguments for making this assumption than that it is useful to do so?
 
Why? If the robot brain runs on a more-or-less conventional computer, I don't see why it couldn't be hand-simulated.

If by 'more or less conventional' you mean 'equivalent to a Turing machine', then of course the nature of the platform executing the program doesn't matter. That's what Turing equivalence means.

If by 'more or less conventional' you mean 'includes quantum computing elements', the answer is again yes. Even though quantum computers are more powerful than Turing machines, you can simulate them on a conventional computer. Hence you can simulate them on any Turing-equivalent machine, including a person with pencil and paper.

Now, you may have noticed I said quantum machines are more powerful than Turing machines, but can be simulated with them. How is that so? The answer is that the time taken to do the simulation changes. A quantum computer can compute some things in linear (or even constant) time that take a Turing machine exponential time to do.[*] (There are other details I'm handwaving here.)

So, although in theory you could use pencil and paper to simulate the original assumed machine, it might take you longer than is feasible to simulate anything related to a 'conscious event' in the simulated consciousness.

[*] A dear friend of mine once described the hierarchy of these machines as:
1) A Finite State Machine is a Turing machine that can only move the tape in one direction
2) A Turing machine can move the tape in both directions
3) A quantum computer can move the tape in both directions simultaneously.
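
To put a rough number on the slowdown: the straightforward way to simulate n qubits on a classical machine is to track all 2^n complex amplitudes, so every extra qubit doubles the bookkeeping. A minimal sketch (Python with numpy, illustrative only; cleverer methods exist for special classes of circuits):

```python
import numpy as np

def apply_single_qubit_gate(state, gate, qubit, n_qubits):
    """Apply a 2x2 gate to one qubit of an n-qubit state vector.
    The vector holds 2**n_qubits amplitudes -- that exponential size is
    why brute-force classical (or pencil-and-paper) simulation is slow,
    even though it is always possible in principle."""
    state = state.reshape([2] * n_qubits)
    state = np.tensordot(gate, state, axes=([1], [qubit]))   # apply the gate
    state = np.moveaxis(state, 0, qubit)                     # restore axis order
    return state.reshape(-1)

n = 10                                          # 10 qubits -> 1024 amplitudes
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                                  # start in |00...0>

for q in range(n):                              # put every qubit in superposition
    state = apply_single_qubit_gate(state, H, q, n)

print(np.allclose(np.abs(state) ** 2, 1 / 2 ** n))   # uniform probabilities: True
```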
 
Because we can only do one thing at a time.
I can rub my head and pat my tummy while doing a box step and singing the theme from Alfie all at the same time. I think you must be thinking of Gerald Ford.
 
I bet the Chinese Olympic Committee could build a massively parallel human computer tomorrow.
 
[attached image]


This looks like a multi-person implementation of something kinda-computational.
 
You're saying infallible sequential humans cannot simulate a parallel algorithm?

I disbelieve you.

If you claim that one person can handle everything that's going on in the brain simultaneously, rather than sequentially, then I disbelieve you.

But let's assume that's not what we're talking about, and instead we're talking about, say, a staff of hypothetical infallible humans working at the same time.

Ok, then what are we really talking about?

Let's say that we've got this robot wired up, and all we do is introduce a pause at one moment, when whatever calculations are happening are handed over to our staff of mathematically infallible humans, who take the outputs as inputs, run the calculations, then feed the results back in.

In that case, you're just doing the equivalent of putting the robot on pause.

Is the robot conscious during this pause? Given that nothing is happening in the robot's hardware, how could it be?

So if you, in effect, chain this process so that we're bouncing back and forth between the robot and the infallible staff, does the robot's consciousness flicker in and out? (Note that the robot brain has to perform every other step -- if the staff of humans is passing inputs and outputs among themselves instead, nothing is running on the robot's hardware at all.)

I seriously doubt that you'd get an intermittent consciousness from this setup.
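
For reference, the mechanics of a sequential staff reproducing a 'parallel' update would look something like this (a minimal sketch, purely illustrative): compute each module's next state from a frozen snapshot of the old states, then commit the whole round at once. None of which answers whether anything is conscious while that bookkeeping is going on.

```python
# Minimal sketch: N "parallel" modules updated by one strictly sequential
# worker. Each module's next state depends on its neighbours' *previous*
# states, so we compute against a frozen snapshot and commit all the new
# states at the end of the round -- one module at a time, pencil-and-paper
# style.

def parallel_step(states):
    """One synchronous update of every module, carried out sequentially."""
    snapshot = list(states)                         # freeze the old states
    new_states = []
    for i, s in enumerate(snapshot):                # one module at a time
        left = snapshot[(i - 1) % len(snapshot)]
        right = snapshot[(i + 1) % len(snapshot)]
        new_states.append((left + s + right) % 2)   # toy update rule
    return new_states                               # commit the whole round

states = [0, 1, 0, 0, 1, 0, 1, 0]
for _ in range(4):
    states = parallel_step(states)
print(states)
```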
 
I can rub my head and pat my tummy while doing a box step and singing the theme from Alfie all at the same time. I think you must be thinking of Gerald Ford.

There are interesting studies about what happens in the brain when we multitask, and in the end it appears to break down into very rapid task-switching.
 
There are interesting studies about what happens in the brain when we multitask, and in the end it appears to break down into very rapid task-switching.
In respect to tasks involving the same subsystems I suspect you're right. However, are we counting how it's actually achieved in my head, or just what goes on on the huge pad of paper upon which I'm implementing this? What about James Garfield writing Latin with one hand and Greek with the other? He might have been able to implement some kind of dual core system.
 
In respect to tasks involving the same subsystems I suspect you're right. However, are we counting how it's actually achieved in my head, or just what goes on on the huge pad of paper upon which I'm implementing this? What about James Garfield writing Latin with one hand and Greek with the other? He might have been able to implement some kind of dual core system.

I've always been intrigued by that claim regarding Garfield, and by similar claims regarding Milton composing two poems simultaneously (dictating to his two daughters). I've always been suspicious of them, especially considering what we know now about how resource-intensive writing and reading are.

Was Garfield's ability ever documented? And if so, could he write anything he wanted in Greek and Latin simultaneously, or had he learned to do it with specific passages as a kind of parlor trick?
 
