
On Consciousness

Is consciousness physical or metaphysical?


Maybe you answered my request for clarification and I missed it. Sorry to have to repeat myself.

You are conceding that a duplicate of the brain made of robot neurons that behave exactly like real neurons would indeed be conscious? Experience qualia?

Can we please use the word "machine" instead of "robot"? The term "robot" has highly anthropomorphic overtones that we would do best to steer clear of.

But there's nothing to "concede" -- if you build a model of anything which has a complete set of components which all behave exactly like the components of the original, then it will behave like the original. Doesn't matter if we're talking about a brain or a car or a jellyfish or an alarm clock. That's trivial.
 
Well, yes and no. He has definitely said it, or something like it, more than once. To me, that would end the discussion in agreement. I regard computer consciousness that precisely emulates the brain as a certainty, and one that does not as only a possibility. Without either, I agree with Piggy that we should study the only thing that we know has consciousness, and that is a working brain.

However, almost immediately after conceding that machine consciousness is possible, Piggy will make some statement that it is actually impossible for a computer to be conscious, because consciousness can only exist in wet hardware, or because there is a certain je ne sais quoi that is impossible to achieve in a computer, and this raises everybody's hackles again!

There's a difference between saying a machine can be built to do a thing, and saying that a computer can be programmed to do it (without the addition of specialized hardware).

The only thing I'm objecting to is a "pure programming" solution.
 
Found it:



Well, if you can't do it with ropes, what could you do it with? Silicon chips? Gears, levers, belts and pulleys? Hydraulic logic gates? Relays? Vacuum tubes?

What's the problem with ropes?

Do you consider a rope to be functionally equivalent to any brain tissue?
 
Right, it's often assumed that neurons contain the magic beanery of consciousness. Penrose: access to quantum behavior, Pigliucci: carbon atoms or something, etc.

But right now, neurons are pretty well understood as data processing units. Those can be simulated in a computer, and networks of them can be simulated in a computer.
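
For concreteness, here is about the simplest such simulation: a leaky integrate-and-fire model, a textbook way of abstracting a neuron into a data processing unit. A minimal Python sketch, where all the constants are illustrative choices rather than measured values:

[code]
import numpy as np

def simulate_lif(input_current, dt=0.001, tau=0.02,
                 v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: integrate input, spike at threshold."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # The potential leaks back toward rest while integrating input.
        v += (-(v - v_rest) + i_in) * (dt / tau)
        if v >= v_thresh:          # threshold crossed: spike and reset
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

# Constant suprathreshold drive produces a regular spike train.
print(simulate_lif(np.full(1000, 1.5)))  # ~45 spikes in one simulated second
[/code]

Real neurons are of course far richer than this, but richer models (multi-compartment, ion-channel level) are likewise just more computation.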

Neurons can be abstracted as data processing units.

But in your example of a completely functionally equivalent brain, ALL brain functions -- regardless of anyone's guess about what's important or not -- would have to be replicated. The resulting machine would reproduce the brain's function in real space and time, all the functions of all the tissues, right down to the "noise" and "junk".

Keep in mind that, given consciousness's place in evolutionary history, and given how evolution works, it may well be the "noise" of non-conscious neural processes that's critical to conscious awareness.

Maybe not. But maybe so.

My own expectation is that the so-called noise is indeed critical, but time will tell.
 
If you mean timeliness in a relative manner, I absolutely agree. Everything need not be performed in lockstep as in current computer designs, but some sort of synchronisation is necessary. However, some people, and I think that Piggy is among them (but I am not sure), believe that processing also needs to be performed at the speed at which human brains work. An argument against the rope-and-pulley computer emulation of consciousness has been that it cannot work fast enough. While I concede that we could probably not recognise consciousness if it took hundreds of years to formulate the thought "What a nice millennium", it would be consciousness nonetheless.

This view assumes that the signature waves are not in any way causative, and that therefore their strength and coherence is irrelevant. But we cannot make that assumption at this point.

I know that sounds strange, but then, consciousness is a very strange phenomenon, and the answer to the puzzle may turn out to be surprising.
 
And I know that all software runs on hardware. The claim by the informationalists is that there need be no hardware other than the bare minimum needed to run the logic. That's like saying you can have a display with no monitor; the "logic" or "data" or "information" will take care of the display by itself.

I can see what you mean, but you seem to be caught up in the circular requirement that an 'interpreter' is necessary to validate 'real world' activities, and that the interpreter is a biological consciousness (i.e. a person).

For example, it is perfectly possible to generate and use holograms with software running on a bog-standard computer. A hologram is an interference pattern stored on some accessible medium. That medium can be a photographic plate or a block of RAM. An optical hologram can be queried by shining the reference beam through the plate and measuring the light reflected/refracted; a hologram in RAM can be queried by running a suitable algorithm with a reference dataset on it. The result of querying an optical hologram is a visible, virtual image; the result of querying a RAM hologram is a partial (or full) reconstruction of the original source dataset. A machine could even use optical sensors to create a RAM hologram image and subsequently query that data image to retrieve 3D information to assist it in environment depth perception. The interpreter? The computer.
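
If it helps, here is a toy version of the RAM hologram in Python with numpy: record an interference pattern as a plain intensity array, then recover the object by querying the stored data. The off-axis Fourier geometry and all the sizes here are illustrative choices, nothing more:

[code]
import numpy as np

N = 256
x = np.arange(N)

# Object field: a bright square on a dark background.
obj = np.zeros((N, N), dtype=complex)
obj[96:160, 96:160] = 1.0

# Object wave at the recording plane (Fourier geometry).
O = np.fft.fft2(obj)

# Tilted plane-wave reference; the carrier keeps the reconstruction
# away from the zero-order terms.
carrier = 64 / N  # cycles per pixel
R = np.exp(2j * np.pi * carrier * x) * np.abs(O).mean()

# The "hologram" is a real-valued intensity array -- just data in RAM.
H = np.abs(O + R) ** 2

# Querying the stored pattern: an inverse transform plays the role of
# re-illuminating the plate. The O*conj(R) interference term reproduces
# the square, displaced 64 pixels by the carrier.
recon = np.abs(np.fft.ifft2(H))
print(recon[96:160, 160:224].mean() > 10 * recon[96:160, 0:32].mean())  # True
[/code]

No human looks at anything in that sketch; the same array could just as well feed a depth-estimation routine. The interpreter? The computer.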

Similarly with the idea of a display - a display is just a means to provide access to information for humans to process. Forget the human interpreter for a moment. A computer can use the 'raw' RAM dataset to access (potentially the same) information without the need of a separate display device to radiate that information in visible light. A computer could process that data to control a milling machine, navigate a vehicle, etc. The interpreter? The computer.

Certainly such a machine needs sensors and effectors to achieve anything physical, but that's not a significant problem. I really don't see the relevance of these examples to the consciousness debate.

As for this ongoing debate about whether red exists 'out there' or is purely an internal construct - who reading this thread does not acknowledge that our entire perceptual experience is an internal construct based on pulses of electrochemical activity from a variety of sensors? We construct a kind of map and refer to the map when talking about what's 'out there' because that map is pretty much all we've got in common (and even that's assumed consensus). Our experience of red isn't even a direct mapping of light of a particular frequency range; it is also relative to the overall balance of light frequencies in our visual field. We will perceive the same frequency differently depending on this balance.

Yes, it's all an internal construct - can we move on?
 
...given consciousness's place in evolutionary history, and given how evolution works, ...

1. Thank you all for the stimulating discussion on so interesting a topic.

2. I would be very curious as to what you believe the evolutionary history of consciousness to be. I'm sure there are multiple answers to this question from various posters and I am curious to hear them. Do you believe consciousness to be solely a hominid characteristic? Primate? Mammalian? Vertebrate? Etc.

3. The models of neurons depicted upthread are vastly simpler than any neocortical neuron. Even the small pyramidal cells in layer 3 (http://cercor.oxfordjournals.org/content/11/6/558.long) have roughly a thousand dendritic spines, and axons that may form thousands of contacts on hundreds of target neurons elsewhere.

4. Given #3, and the 25 billion human cerebral cortical neurons, not to mention those of the thalamus, basal ganglia, and other directly connected structures, does our current computing power really exceed that of a human brain? How do you relate the power of Google's data center to the wiring of the brain?
 
....
A quantum computer can perform certain operations faster than a deterministic (Turing-equivalent) computer - potentially orders of magnitude faster. They can't solve problems that are not solvable in principle by a Turing-equivalent computer, though, and a Turing-equivalent computer can simulate the operation of a quantum computer. So even if human consciousness does involve quantum processes (and I agree, there's no evidence that it does), it can still be implemented on any sufficiently powerful general-purpose computer.

Thank you for the correction. I was under the impression that qubits' ability to represent more than one state simultaneously was something a GPC could not emulate.
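
For my own benefit, here is the bookkeeping spelled out: a two-qubit register emulated as a plain vector of amplitudes. The cost of the emulation is exponential memory (2^n amplitudes for n qubits), not impossibility. A toy numpy sketch:

[code]
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.zeros(4, dtype=complex)   # basis order |00>, |01>, |10>, |11>
state[0] = 1.0                       # start in |00>

state = np.kron(H, I) @ state        # Hadamard on the first qubit
state = CNOT @ state                 # entangle: (|00> + |11>)/sqrt(2)
print(np.abs(state) ** 2)            # [0.5 0.  0.  0.5]
[/code]

The "more than one state simultaneously" is just a vector with more than one nonzero entry; the classical machine tracks them all explicitly, which is exactly why it slows down exponentially where the quantum computer doesn't.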


A massively multi-core system works exactly the same as a single-core system, mathematically speaking. There are practical differences, but in principle, anything you can do on one, you can do on the other.

The reason I made that suggestion has to do with my hypothesis that the subjective experience of qualia derives from a multitude of (for lack of a better term) sub-routines accessing relevant associations simultaneously while communicating with each other about what each is doing.
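
That said, the hypothesis doesn't strictly require physical parallelism, which I take to be the point above: a single core can interleave any number of communicating sub-routines. A toy sketch, with the routines and messages invented purely for illustration:

[code]
from collections import deque

def subroutine(name, bus):
    # Each "sub-routine" posts what it is doing to a shared bus and
    # observes what the others have posted.
    for step in range(3):
        bus.append((name, step))
        others = [m for m in bus if m[0] != name]
        yield f"{name} at step {step}, sees {len(others)} events from others"

bus = deque()
routines = [subroutine(n, bus) for n in ("color", "shape", "motion")]

# Round-robin scheduler: strictly sequential hardware, concurrent behaviour.
while routines:
    r = routines.pop(0)
    try:
        print(next(r))
        routines.append(r)
    except StopIteration:
        pass
[/code]

Whether the simultaneity has to be physical for qualia, or merely logical as here, is I suppose the open question.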


There's a difference though between consciousness in itself and a fully functional human mind. The former is a lot simpler than the latter.

Granted. But what's the minimum threshold for consciousness to exist? How much of a human mind is needed?
 
Well, on that, we can build computers pretty much as big as you want. It's just that as technology is still advancing rapidly, it doesn't make sense to build computers bigger than you'll need for the next 3-5 years; any more and you'll only be burning money.

If you consider the Internet as a single integrated system (which it sort of is...) it is already vastly more complex than the human brain. Each of Google's major datacenters is roughly brain-sized. (This is if you consider each neuron to be the equivalent of thousands of transistors.) And of course common CPUs switch at rates orders of magnitude faster than neurons, so a brain-scale computer network is orders of magnitude more powerful than a real brain.

I'd like one of our neurobiologists to correct me if I'm wrong, but my understanding is that any given neuron may be acting simultaneously as a component of many different networks, so that the brain comprises a multitude of superimposed networks, and its processing power is a great deal more than a GPC with an equivalent number of transistors.
 
If you mean timeliness in a relative manner, I absolutely agree. Everything need not be performed in lockstep as in current computer designs, but some sort of synchronisation is necessary. However, some people, and I think that Piggy is among them (but I am not sure), believe that processing also needs to be performed at the speed at which human brains work. An argument against the rope-and-pulley computer emulation of consciousness has been that it cannot work fast enough. While I concede that we could probably not recognise consciousness if it took hundreds of years to formulate the thought "What a nice millennium", it would be consciousness nonetheless.

I disagree. I think consciousness requires at least enough speed to respond to real world conditions in real time.
 
If we don't now perfectly understand the behavior of neurons, then either we will some day, or there's a magic bean. So far, there's no evidence for a magic bean.

I agree. I asked because I'm curious how far along the state of the science is now.



Certainly, but I wanted to be as terse as possible and skip all those details, which are obviously just more data processing, and keep to the essentials.

I disagree. Maybe. I think it's possible that there may be a qualitative difference between different forms of data processing, at least when it comes to the emergent production of the illusion of subjective experience.


We aren't saying we can build a computer-simulated human brain today. The point is that, with computer technology advancing as fast as it is, there does not seem to be any reason we won't some day succeed (unless you postulate a magic bean).

I agree. The "If" question is not at all interesting to me. I'm fascinated by the "how" question.
 
BTW, I've proposed the robot neuron brain as a thought experiment to expose a principle, not as a proposal for a practical way to make a conscious machine.

Electronic circuits and computer programs that behave like neurons have been around for decades.

...and computer programs that simulate electronic circuits have been around for decades. They are essential today for circuit design.

Linkies: Circuit of "robot neuron" and the paper about it.

Here's another one. Google image "neuron schematic" for more!
[qimg]http://www.internationalskeptics.com/forums/imagehosting/67364f7519b89762e.jpg[/qimg]

Are there any conditions under which networks of robot neurons are known to produce integrated brainwave-type signals?
 
I'd like one of our neurobiologists to correct me if I'm wrong, but my understanding is that any given neuron may be acting simultaneously as a component of many different networks, so that the brain comprises a multitude of superimposed networks, and its processing power is a great deal more than a GPC with an equivalent number of transistors.
Sure. A neuron is a lot more complex than a transistor, having multiple inputs and outputs, while on the other hand a transistor is several orders of magnitude faster. So a transistor can also act as a component in multiple overlaid networks, just via temporal overlays rather than spatial ones.

The human brain has about 10^11 neurons. My desktop PC has about 3 x 10^11 transistors, not counting the SSDs, which would increase it to around 4 x 10^12. Neurons switch at less than 1 kHz; transistors switch at rates on the order of 1 GHz, a million times faster. But a lot of the transistors in a typical computer are purely memory, whereas all neurons have logical function as well, so the comparison is not simple.

Still, the point stands: We can easily build a computer with the storage capacity of the brain; with a little more effort, we can build one with the processing capacity of the brain. We could even build one with the parallelism of the brain and the switching rate of a modern computer if we really wanted to. That would be expensive, though.
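
For anyone who wants to poke at the assumptions, the back-of-envelope arithmetic looks like this. Every figure is a rough order-of-magnitude guess, as above, not a measurement:

[code]
neurons = 1e11           # human brain
neuron_rate = 1e3        # Hz, a generous upper bound on firing rate
transistors = 3e11       # a well-equipped desktop PC
transistor_rate = 1e9    # Hz

brain_events = neurons * neuron_rate        # ~1e14 events/s
pc_events = transistors * transistor_rate   # ~3e20 events/s
print(f"raw ratio: {pc_events / brain_events:.0e}")  # ~3e+06

# Caveats cut both ways: a neuron integrates thousands of synaptic
# inputs per "switch", while most of a PC's transistors are idle
# memory cells, so the raw ratio overstates the gap considerably.
[/code]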
 
Are there any conditions under which networks of robot neurons are known to produce integrated brainwave-type signals?
Brainwaves are just electromagnetic noise generated by the switching of large numbers of neurons in phase. You can do that on a computer by simply running a program with a fixed loop. Tune your radio in, and voila, computer waves.
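
The "in phase" part is easy to see numerically: many weak oscillators sum to a large coherent wave only when they are synchronised. A quick numpy illustration, all sizes arbitrary:

[code]
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000)
n = 10_000  # number of "neurons"

# In phase: tiny unit fields add linearly into one big 10 Hz wave.
coherent = n * np.sin(2 * np.pi * 10 * t)

# Random phases: the same fields nearly cancel.
phases = rng.uniform(0, 2 * np.pi, n)
incoherent = np.sin(2 * np.pi * 10 * t[None, :] + phases[:, None]).sum(axis=0)

print(coherent.std(), incoherent.std())  # ~n/sqrt(2) versus ~sqrt(n/2)
[/code]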
 
I disagree. I think consciousness requires at least enough speed to respond to real world conditions in real time.
No, that's too broad.

You can't be conscious of something that happens too fast - or too slowly - for you to be aware of it, but a slow-moving consciousness can deal perfectly well with slow-moving events.
 
What is "real time"? (In the context you are using?)
Right, that's a better question.

I see no reason why a non-human consciousness needs to respond at the same speed as human consciousness. Respond in real-time with respect to some class of external events - yes, that's a reasonable position.

But almost everything in the Universe is either too fast or too slow for us to notice, so the simple argument would be that we're not conscious either.
 
Right, that's a better question.

I see no reason why a non-human consciousness needs to respond at the same speed as human consciousness. Respond in real-time with respect to some class of external events - yes, that's a reasonable position.

But almost everything in the Universe is either too fast or too slow for us to notice, so the simple argument would be that we're not conscious either.

Yes. If you took a brain and made it run at 80% of normal speed, would it be unconscious? A tenth speed? One millionth speed?

It makes no sense at all to implicate speed in the production of consciousness.

The essential question is whether the difference between machine and animal consciousness is qualitative or quantitative. Speed is purely quantitative.
 
Do you consider a rope to be functionally equivalent to any brain tissue?

What's the function of the word "any" there?

A rope computer can, in theory (but not practically of course), perform the functions of any other kind of computer.

We're fighting intuition here, not knowledge, logic, or reasoning.

It's not intuitive that a mechanical computer could be conscious, just like it's not intuitive that you should switch to the other door in the Monty Hall problem when a goat is revealed. That's because of limits of intuition, not limits in the power of mechanical data processing.

The insistence that consciousness has a special quality beyond data processing comes purely from intuition, just like the intuition that the chances of our first choice being the winning choice rise to 50/50 once there are two doors instead of three.

Intuition is not useful for what it didn't evolve for. Our brains didn't evolve to intuitively understand the machinery of consciousness. We need disciplined science, logic, and reliance on evidence, not gut guesses.

Your gut tells you consciousness is not computable? I don't friggin' care. It seems like magic because we're inside it.
 