
Robot consciousness

Could we hook up an old Morse code machine so that the human computer could communicate with the outside world? That way we could get the invoice processing system working again.
Any system will do. In the end, you write ones and zeros into the graphics card memory. I could whip up a little program that takes zeros and ones as input, and dumps the info to the monitor. Away you go.

Of course, this I/O (the screen output) is going to run at the same speed as the algorithm. If I put a really slow clock in your computer running the accounting software, you could wait 3 hours for the screen to update. But update it would.

Piggy, if this is not clear, in C++ you would write cout << 3 to print 3 on the screen. Behind the scenes there is code and hardware that takes the data presented to 'cout' and puts it on the screen. On different hardware, the software and hardware are necessarily different. If you do it on your PC with an NVIDIA card, the code and hardware are specialized for the NVIDIA chipset. On a Cray machine, they're specialized for whatever is hooked up to the Cray. How the '3' gets to the screen is immaterial. So my little program is a completely acceptable replacement for the combination of C library code and OS code that gets that 3 onto the screen.
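Just to make that concrete, here's a rough sketch of the sort of little program I mean - only one way of doing it, with an arbitrary eight-bit, most-significant-bit-first convention chosen purely for illustration. It reads a stream of '0' and '1' characters from standard input, packs each group of eight into a byte, and writes the result to the terminal.

Code:
   #include <iostream>
   #include <string>

   // Read a stream of '0'/'1' characters, pack each group of eight bits
   // (most significant bit first) into a byte, and write it to the screen.
   int main() {
       std::string bits;
       char c;
       while (std::cin.get(c)) {
           if (c != '0' && c != '1') continue;   // ignore spaces, newlines, etc.
           bits.push_back(c);
           if (bits.size() == 8) {
               unsigned char byte = 0;
               for (char b : bits)
                   byte = static_cast<unsigned char>((byte << 1) | (b - '0'));
               std::cout << byte;                // dump the info to the monitor
               bits.clear();
           }
       }
       std::cout << std::flush;
       return 0;
   }

Feed it 00110011 and a '3' appears on the screen, exactly as quickly or as slowly as the bits arrive.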
 
Paul, as for the OP, we know that Turing machine behavior is independent of clock speed because clock speed is not part of the Turing machine specification. Hence, any Turing equivalent is necessarily independent of clock speed.

So, your question boils down to: is the brain TM equivalent? If not, is it still independent of clock speed?

I dunno is the only honest response to both questions. If it turns out the brain is TME, then yes, any paper and pencil implementation would also be conscious. If not, then who can say?
 
Let's assume it's continuous. All I'm trying to do is slow down the computation significantly and then ponder whether consciousness would remain or disappear.

So you're talking about bouncing back and forth between the staff and the robot brain, then?


The software wouldn't "work" because you would be offloading the computation to pencil and paper. But why wouldn't the invoicing process still work fine?

Maybe it would, albeit slowly, if you were doing the bouncing.

But of course it wouldn't work at all if it were all on pen and paper at the home office.


An accounting system. A word processing system. Just about anything, as long as the inputs and outputs come from/end up in the same place.

Again, depends. If you're talking about software that just does pen and paper stuff really fast, ok. If any sort of other feature is involved, such as remote access, which requires the hardware, then no.

For instance, networking software for group meetings, using video and audio hookups.
 
Well, there you go, you have "Hello World" on the screen, don't you?

Not if you don't have a screen.

But I think this has become a moot point now, since we seem to be discussing a system with the robot brain as one component and the staff as a kind of symbiotic component, not an entirely pen and paper system.
 
Any known software in the world.

Here's some real C++ code.
Code:
   int x = 3;
   int y = x + 1;
What is the value of y? (if you don't know C, the "int" just says create an integer variable, so "int x = 3" means create an integer, call it x, and assign it the value 3.)

I don't need to compile this to get that y equals 4. I ran it in my brain, a completely acceptable substrate for this algorithm. A slightly more complex example I could just do on paper. If I were infallible and had enough time to kill, I could do any computer program in existence.

Algorithms are independent of the computing platform. That's the whole point of Turing machines - any of a class of machines that are functionally equivalent are, well, equivalent. And the human brain can compute the results of a Turing machine. It doesn't matter if the execution is done with paper and pencil, punched tape (the canonical Turing machine form), a MIMD machine, an abacus, a hand calculator, a 386 processor, a 486 processor, a MIPS processor, a Cray machine, neurons extracted from a crayfish and rewired into a Turing machine, etc. The end result, y=4, will be the same.
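To make the "any of a class of machines" point concrete, here's a toy table-driven Turing machine sketch in C++ - the states and transition table are my own made-up example, not anything canonical. It adds one to a binary number on the tape, so starting from 011 (three) it halts with 100 (four): the same y = 4, computed by tape-and-table rules you could just as easily work through with paper and pencil.

Code:
   #include <iostream>
   #include <map>
   #include <string>
   #include <utility>

   // A tiny table-driven Turing machine that adds one to a binary number.
   // States: R = scan right to the end of the number, C = carry leftward, H = halt.
   struct Rule { char write; int move; char next; };   // move: +1 right, -1 left, 0 stay

   int main() {
       std::map<std::pair<char, char>, Rule> table = {
           {{'R', '0'}, {'0', +1, 'R'}},   // keep scanning right
           {{'R', '1'}, {'1', +1, 'R'}},
           {{'R', '_'}, {'_', -1, 'C'}},   // hit the blank: turn around and carry
           {{'C', '1'}, {'0', -1, 'C'}},   // 1 + carry = 0, carry continues
           {{'C', '0'}, {'1',  0, 'H'}},   // 0 + carry = 1, done
           {{'C', '_'}, {'1',  0, 'H'}},   // ran off the left end: write the new digit
       };

       std::string tape = "_011_";         // binary 3 with blank padding
       int head = 1;                       // start on the leftmost digit
       char state = 'R';

       while (state != 'H') {
           Rule r = table.at({state, tape[head]});
           tape[head] = r.write;
           head += r.move;
           state = r.next;
       }
       std::cout << tape << std::endl;     // prints _100_ (binary 4)
       return 0;
   }

Run the loop as fast or as slow as you like; the tape still ends up reading _100_.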

As for Paul's question, I didn't notice whether you stipulated that inputs are similarly slowed down, though I think it is assumed. If so, computational theory tells us algorithms are independent of processing speed. If inputs are still real-time, it is perhaps dubious that consciousness would result, since only a tiny fraction of the available real-time data stream could ever be processed. But that depends on a lot of assumptions about the characteristics of the underlying algorithm.

Edit: Piggy, to make this clear, I could compile that real C++ code into 8086 assembly, convert that into Fortran, the Fortran into COBOL, the COBOL into Forth, the Forth into Lisp, the Lisp into Algol, the Algol into Visual Basic, the Visual Basic into APL, and then turn the APL into a Game of Life representation (which can be made into a Turing machine), and then simulate the Game of Life using paper and pencil, and I'd still get "4". Or I could stop anywhere along the way, and either compute the result with pen and paper, or compile the code and run it, and I'd get "4". All versions are computationally equivalent.
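(And in case "a Game of Life representation" sounds exotic: one generation of Life is just the rule below applied to a grid of cells, which you really could carry out on graph paper. The grid size and the glider starting pattern here are arbitrary choices of mine.)

Code:
   #include <iostream>
   #include <vector>

   // One generation of Conway's Game of Life on a small fixed grid.
   // A live cell with 2 or 3 live neighbours survives; a dead cell with
   // exactly 3 live neighbours becomes live; everything else is dead.
   std::vector<std::vector<int>> step(const std::vector<std::vector<int>>& g) {
       int rows = static_cast<int>(g.size()), cols = static_cast<int>(g[0].size());
       std::vector<std::vector<int>> next(rows, std::vector<int>(cols, 0));
       for (int r = 0; r < rows; ++r) {
           for (int c = 0; c < cols; ++c) {
               int n = 0;
               for (int dr = -1; dr <= 1; ++dr)
                   for (int dc = -1; dc <= 1; ++dc) {
                       if (dr == 0 && dc == 0) continue;
                       int rr = r + dr, cc = c + dc;
                       if (rr >= 0 && rr < rows && cc >= 0 && cc < cols)
                           n += g[rr][cc];
                   }
               next[r][c] = (g[r][c] ? (n == 2 || n == 3) : (n == 3)) ? 1 : 0;
           }
       }
       return next;
   }

   int main() {
       // A glider on a 6x6 grid.
       std::vector<std::vector<int>> grid = {
           {0,1,0,0,0,0},
           {0,0,1,0,0,0},
           {1,1,1,0,0,0},
           {0,0,0,0,0,0},
           {0,0,0,0,0,0},
           {0,0,0,0,0,0},
       };
       for (int gen = 0; gen < 4; ++gen) {
           for (const auto& row : grid) {
               for (int cell : row) std::cout << (cell ? '#' : '.');
               std::cout << '\n';
           }
           std::cout << '\n';
           grid = step(grid);
       }
       return 0;
   }

The glider crawls one cell diagonally every four generations, and doing that by hand, cell by cell, is all "simulating the Game of Life using paper and pencil" amounts to.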

Certainly.

So the question -- if we were considering whether consciousness would somehow manifest in a purely on-paper system -- is whether code like that adequately describes (is sufficient to capture) what a conscious robot brain would be doing.

I say no. To get the effect, it will have to be run on the hardware, just as I can't browse the Internet using pen and paper calculations.
 
Piggy, if this is not clear, in C++ you would write cout << 3 to print 3 on the screen. Behind the scenes there is code and hardware that takes the data presented to 'cout' and puts it on the screen. On different hardware, the software and hardware are necessarily different. If you do it on your PC with an NVIDIA card, the code and hardware are specialized for the NVIDIA chipset. On a Cray machine, they're specialized for whatever is hooked up to the Cray. How the '3' gets to the screen is immaterial. So my little program is a completely acceptable replacement for the combination of C library code and OS code that gets that 3 onto the screen.

Well, if we're talking about, in effect, building a pause into the robot brain system, where everything is dumped out, then a segment of the processing is done by hand, then those outputs are dumped in again at the appropriate point as inputs, the question is: Was the robot conscious during that pause?

The answer has to be "No."
 
I think that the OP is posing a question that is independent of whether it is actually possible to create a program in any language that could be considered conscious.

Assuming that we can create such a conscious program, or robot, and that the program can be single-stepped, I would say that across some significant number of steps, the robot would be conscious. In between steps, the robot would be unconscious, in the same way that it is also doubtful whether we ourselves can be called conscious if you choose a time frame below a certain limit.

If the program executes too slowly, I would still think that the robot would be conscious, but we would not be able to recognise it. We can only recognise intelligence or consciousness that is moving at a speed comparable to our own. If there were conscious plants and it took them years to form a thought, we would never find out.
 
Don't be silly, people. Robots and computers will never do anything more than we consciously program them to do. They are entirely a result of our consciousness. And will remain so.
 
Certainly.

So the question -- if we were considering whether consciousness would somehow manifest in a purely on-paper system -- is whether code like that adequately describes (is sufficient to capture) what a conscious robot brain would be doing.

I say no. To get the effect, it will have to be run on the hardware, just as I can't browse the Internet using pen and paper calculations.
But you are conflating things.

If I go down to my garage, set my torque wrench to 175 ft-lbs, and reef on a 3mm aluminum bolt on my engine block, that bolt is going to be broken. If I run a simulation of this on a computer, that simulated bolt is going to be broken. Naturally, my real bolt on my real engine will not be broken by the simulation. But that doesn't mean the simulated bolt wasn't broken. The "hardware" is different, but the result is the same.

Likewise, if you surf the internet using nothing but pencil and paper, then you will be surfing the pencil and paper internet. If all the inputs are, for example, defined in a huge paper book, then you will in fact be surfing that internet. The hardware is different, the data is different, but you are still surfing an internet.
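(Think of that huge paper book as nothing more than a lookup table from addresses to pages. Here's a minimal sketch of the idea; the addresses and page texts are made up purely for illustration.)

Code:
   #include <iostream>
   #include <map>
   #include <string>

   // The "paper internet": a book mapping addresses to pages. Surfing it is
   // just looking up an address and reading out the page, whether a machine
   // does the lookup or a person with a pencil does.
   int main() {
       std::map<std::string, std::string> book = {
           {"home",    "Welcome. Links: weather, news."},
           {"weather", "Sunny, 72F. Links: home."},
           {"news",    "Robot consciousness debated on forum. Links: home."},
       };

       std::string address;
       while (std::cout << "address> " && std::getline(std::cin, address)) {
           auto page = book.find(address);
           if (page != book.end())
               std::cout << page->second << "\n";
           else
               std::cout << "404: not in the book\n";
       }
       return 0;
   }

Whether std::map does the lookup or a person flips through the book, the surfing is the same sequence of lookups.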

Or, to use another old chestnut to get to Paul's question: replace one neuron in your brain with a computer chip that is functionally identical. Are you still conscious? Replace a second neuron. A third... the trillionth, until you are all chips. Did consciousness fade out as the neurons were replaced, or did it remain? I'll assume you say "remain". Okay, now just slow down those chips by 0.00001%, and slow down the inputs by the same amount. Are you still conscious? Slow down by another 0.00001%. Are you still conscious? Slow down until each chip is running at paper/pencil speed. Are you still conscious? Slow down until each chip is running at 1/(number of neurons) of paper/pencil speed. Are you still conscious?

Now, for one of those neurons, substitute a real paper/pencil calculation for the computer chip. Are you still conscious? (Note that the real paper speed is now the same as the chip speed.) Then replace a second chip with a pen/paper calculation. Then a third. Continue until all computer chips are replaced with pencil and paper, where inputs still come from the real world, and outputs still feed back into the human body (yes, kind of impossible, given the body would die and rot away before the paper/pencil calculations were done, but this is a thought experiment - assume infinite lifespan).

At what point in that entire process did consciousness fade away?

I say it never did.
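If it helps to see the "slow it down and nothing functional changes" step in miniature, here's a toy threshold neuron in C++. The weights, threshold, and sleep-based slowdown are arbitrary stand-ins of mine, nothing like a real neuron model, but the point carries: whatever slowdown you pick, the output is bit-for-bit the same; only the wall-clock time changes.

Code:
   #include <chrono>
   #include <iostream>
   #include <thread>
   #include <vector>

   // A toy threshold "neuron": fires (returns 1) if the weighted sum of its
   // inputs exceeds a threshold. The slowdown parameter only stretches time;
   // it has no effect on what the neuron computes.
   int neuron(const std::vector<double>& inputs, const std::vector<double>& weights,
              double threshold, std::chrono::milliseconds slowdown) {
       std::this_thread::sleep_for(slowdown);       // simulate a slower substrate
       double sum = 0.0;
       for (std::size_t i = 0; i < inputs.size(); ++i)
           sum += inputs[i] * weights[i];
       return sum > threshold ? 1 : 0;
   }

   int main() {
       std::vector<double> inputs  = {1.0, 0.0, 1.0};
       std::vector<double> weights = {0.6, 0.9, 0.5};

       int fast = neuron(inputs, weights, 1.0, std::chrono::milliseconds(0));
       int slow = neuron(inputs, weights, 1.0, std::chrono::milliseconds(2000));

       std::cout << "fast substrate: " << fast << "\n"
                 << "slow substrate: " << slow << "\n";   // identical outputs
       return 0;
   }

Both calls print 1; the slow one just takes two seconds longer to say so.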
 
Well, if we're talking about, in effect, building a pause into the robot brain system, where everything is dumped out, then a segment of the processing is done by hand, then those outputs are dumped in again at the appropriate point as inputs, the question is: Was the robot conscious during that pause?

The answer has to be "No."
Who is asking about the pause? We are talking about the entire process. Are you conscious while you sleep? No. It does not follow that you are not conscious while you are awake and processing.

The machine (whatever it is - your brain, a simulated brain, or pen/paper) does not detect that the clock is stopped. For all you know, the whole universe stops a billion times a second and your real brain is being stepped. So what? You are still conscious while the universe is 'running', and you have no way to detect if the universe is stopping or not.
 
Don't be silly, people. Robots and computers will never do anything more than we consciously program them to do. They are entirely a result of our consciousness. And will remain so.
We already use genetic algorithms and genetic programming to get computers to do things we can't figure out on our own. No conscious programming other than setting up the conditions for the algorithms or programs to evolve.
 
In between steps, the robot would be unconscious, in the same way that it is also doubtful whether we ourselves can be called conscious if you choose a time frame below a certain limit.

Good point.

Awareness of anything turns out to be elusive if we try to pin it down to a moment.
 
We already use genetic algorithms and genetic programming to get computers to do things we can't figure out on our own. No conscious programming other than setting up the conditions for the algorithms or programs to evolve.


Yeah, and I use a calculator to do things I can't figure out on my own. And guess what? The calculator was programmed only by someone's conscious input. As is every program.

I can't see how a global search heuristic program differs from any other program. The computers are doing what we consciously tell them to. Nothing more, nothing less.
 
Another thought experiment: assume a multiverse where time runs at different rates, and there is the possibility for interaction between universes.

We live in universe A. Universe B runs 10^50 times faster than we do.

1. For each neuron in your brain, universe B calculates its response to its inputs on paper, and then causes your neuron to output a signal based on that calculation. You, of course, cannot detect this occurrence. Are you still conscious?

2. Assuming you answered 'yes', would the beings in B perceive the paper calculations as being conscious? Probably not, but that is a perception based on relative speeds. But would they be conscious? How would they not be, as they are perfectly replicating everything going on in your nervous system?

3. If not, where is the ghost in the machine in your own body? You have 2 systems, exposed to exactly the same inputs, producing the exact same outputs, yet one is conscious, one is not? Where does the difference lie?
 
Yeah, and I use a calculator to do things I can't figure out on my own. And guess what? The calculator was programmed only by someone's conscious input. As is every program.

I can't see how a global search heuristic program differs from any other program. The computers are doing what we consciously tell them to. Nothing more, nothing less.
I did not say a global search heuristic program. I said a genetic program. Genetic programs are not consciously programmed. Sorry, it's just not true, your assertions to the contrary notwithstanding.
 
Genetic programs are not consciously programmed

That's not true. And I'll eat my leg if you can prove it. Give me the exact code lines that the computer created from its 'consciousness', complete with the reasons the computer (presumably) gave (as conscious entities should be able to) as to why it decided to program this code.
 
That's not true. And I'll eat my leg if you can prove it. Give me the exact code lines that the computer created from its 'consciousness', complete with the reasons the computer (presumably) gave (as any conscious entity would be able to) as to why it decided to program this code.
Goalposts are over here. I stated neither that current genetic programs are 'conscious' nor that they are conscious (not sure what the scare quotes are about).

Genetic programs are not created by any consciousness.

I could be wrong, but I don't think you know what genetic programs are.

Take a set of computer instructions, say Lisp.

Randomly generate Lisp programs.

Use a fitness function, selection, recombination, and mutation to combine the best-performing Lisp programs into a new population.

Repeat until you fully satisfy the fitness function.

Tada, you have a program that was not generated by any conscious entity.
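For a concrete (if drastically simplified) picture of that loop, here's a toy genetic algorithm in C++ rather than genetic programming over Lisp trees - it evolves a character string toward a target instead of evolving programs, but the ingredients are exactly the ones listed above: a random initial population, a fitness function, selection, crossover, and mutation. The target string, population size, and mutation rate are arbitrary choices of mine.

Code:
   #include <algorithm>
   #include <iostream>
   #include <random>
   #include <string>
   #include <vector>

   // Toy genetic algorithm: evolve random strings toward a target string.
   // The same loop (random population -> fitness -> selection -> crossover ->
   // mutation -> repeat) is what genetic programming does with program trees.
   int main() {
       const std::string target = "y equals 4";
       const int popSize = 200;
       const double mutationRate = 0.02;
       const std::string alphabet = "abcdefghijklmnopqrstuvwxyz 0123456789";

       std::mt19937 rng(12345);
       std::uniform_int_distribution<int> pickChar(0, static_cast<int>(alphabet.size()) - 1);
       std::uniform_real_distribution<double> coin(0.0, 1.0);

       auto fitness = [&](const std::string& s) {
           int score = 0;
           for (std::size_t i = 0; i < target.size(); ++i)
               if (s[i] == target[i]) ++score;
           return score;
       };

       // Random initial population: no individual is consciously designed.
       std::vector<std::string> pop(popSize, std::string(target.size(), ' '));
       for (auto& s : pop)
           for (auto& ch : s) ch = alphabet[pickChar(rng)];

       for (int gen = 0; gen < 1000; ++gen) {
           std::sort(pop.begin(), pop.end(),
                     [&](const std::string& a, const std::string& b) {
                         return fitness(a) > fitness(b);
                     });
           if (gen % 50 == 0 || pop[0] == target)
               std::cout << "generation " << gen << ": best = \"" << pop[0] << "\"\n";
           if (pop[0] == target) break;

           // Breed the next generation from the fittest half.
           std::uniform_int_distribution<int> pickParent(0, popSize / 2 - 1);
           std::vector<std::string> next;
           for (int i = 0; i < popSize; ++i) {
               const std::string& mom = pop[pickParent(rng)];
               const std::string& dad = pop[pickParent(rng)];
               std::string child = mom;
               for (std::size_t j = 0; j < child.size(); ++j) {
                   if (coin(rng) < 0.5) child[j] = dad[j];    // crossover
                   if (coin(rng) < mutationRate)              // mutation
                       child[j] = alphabet[pickChar(rng)];
               }
               next.push_back(child);
           }
           pop = std::move(next);
       }
       return 0;
   }

No line of the winning string is consciously written by anyone; the programmer only sets up the conditions and the fitness test.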

This seems to be a derail, so I'll stop here.

Edit: but if you want lines of code produced entirely by a computer using genetic programming, Google is your friend.
 
Likewise, if you surf the internet using nothing but pencil and paper, then you will be surfing the pencil and paper internet. If all the inputs are, for example, defined in a huge paper book, then you will in fact be surfing that internet. The hardware is different, the data is different, but you are still surfing an internet.

But the fact is, there is no such book, there is no paper Internet. That's why I chose that example.

When you have an effect that depends on hardware and software -- consciousness, or Web browsing -- you can't remove the hardware and get that effect.

Consciousness appears to be an effect which depends on coordinated sets of input coming from particular circuitry going into particular modules designed to handle that input.

For example, take the real-life case of Marvin, who suffered from emotional blindness.

He still had many of the physical effects of emotion, but he wasn't aware of any emotion -- he didn't "feel" emotion. He had to make guesses about his emotional states in much the same way that we all guess about others' emotional states, except he had more information (e.g., he could feel the knots in his own stomach if he were nervous).

[Something similar but less extreme happens to me, and yes it's bizarre.]

A stroke had destroyed the bridge that carried feedback from his body's neural network (emotional output from the brain goes to the body first for response, then to a specialized module for awareness) to his emotional processing center.

So if we're dealing with a robot that has a similar setup (and if we're not, then the question about slowing is meaningless unless we describe that setup), then clearly the hardware is necessary.

Let's say we somehow knew everything there was to know about all the inputs to John Doe's brain over his lifetime, and everything that went on inside it.

Then we "replayed" John's neurological mental life by writing it all out on paper, impulse by impulse.

That effort would not somehow generate a disembodied consciousness that re-experienced John Doe's life all over again, because John's brain (the hardware) is gone.

Ok, so let's get back to our robot.

And rather than consider the bouncing scenario, let's simplify it to one "bounce out" where all the outputs are simultaneously dumped onto our staff, who all do the next steps (presumably passing outputs to each other as inputs as necessary) for a certain time or certain number of steps, and then dump everything back into the robot brain at the appropriate points.

The robot cannot have been conscious while the staff was doing its job because its hardware was idle.

And no disembodied consciousness would have been created by virtue of the staff doing its work.

So what happens if we bounce back and forth continually between the robot brain and our staff? Does the robot blink in and out of consciousness?

Maybe.
 
