
On Consciousness

Is consciousness physical or metaphysical?


Take the tale of the Sphex wasp and the evil experimenter. No matter how many times the experimenter moves the wasp's food, the wasp never catches on - it simply doesn't have the circuitry to monitor its own mental processes.

You can catch that sort of thing mechanically, unconsciously, but it is actually simpler and less expensive to do it by adding the feedback loop we call consciousness, because it generalises the problem such that a single process can monitor all such cases.
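Just to make that "one generic monitor" idea concrete, here's a toy Python sketch (entirely made up for illustration, nothing to do with real wasp neurology): the same fixed routine, run once without and once with a single general-purpose seen-it-before monitor.

[CODE]
# Toy sketch only: one generic repetition monitor covering any routine,
# rather than a special-purpose check wired into each one.
from collections import Counter

class SeenItBeforeMonitor:
    """Flags any (situation, action) pair that has been repeated too many times."""
    def __init__(self, tolerance=3):
        self.counts = Counter()
        self.tolerance = tolerance

    def stuck(self, situation, action):
        self.counts[(situation, action)] += 1
        return self.counts[(situation, action)] > self.tolerance

def wasp_routine(monitor=None, max_attempts=10):
    """The wasp's fixed program; without a monitor it just keeps repeating."""
    for attempt in range(1, max_attempts + 1):
        situation = "prey has been moved"             # the experimenter strikes again
        action = "drag prey back, re-inspect burrow"  # the same fixed response
        if monitor and monitor.stuck(situation, action):
            return f"noticed the loop after {attempt} attempts; tried something else"
    return "still mindlessly repeating the routine"

print(wasp_routine())                        # sphexish: no self-monitoring
print(wasp_routine(SeenItBeforeMonitor()))   # the one monitor generalises to all such cases
[/CODE]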

Aha! That's the "seen it before" module Dennett mentions in his Magic of Consciousness lecture, the one that misfires and produces the déjà vu quale. Sphex wasps, I guess, haven't got one.

If you disable only that module, do you completely snuff out consciousness? Or, just anti-sphexishness?

Time stamp 14:55

 
Aha! That's the "seen it before" module Dennett mentions in his Magic of Consciousness lecture, the one that misfires and produces the déjà vu quale. Sphex wasps, I guess, haven't got one.

If you disable only that module, do you completely snuff out consciousness? Or, just anti-sphexishness?


You mean in humans, if I could somehow disable that second module that often misfires, I would no longer be liable to déjà vu? Anti-parkinsonian drugs can do that, but I don't know why (Taiminen and Jääskeläinen 2001). That theory says a dopamine increase in the mesial temporal areas of the brain is responsible, which would explain the noradrenaline kick-in and that other-worldly, sci-fi feeling of awareness.

For the record, regarding the Sphex, the endless repetition is not standard.
 
You mean in humans, if I could somehow disable that second module that often misfires, I would no longer be liable to déjà vu? Anti-parkinsonian drugs can do that, but I don't know why (Taiminen and Jääskeläinen 2001). That theory says a dopamine increase in the mesial temporal areas of the brain is responsible, which would explain the noradrenaline kick-in and that other-worldly, sci-fi feeling of awareness.

For the record, regarding the Sphex, the endless repetition is not standard.

Highlighted: That's the hypothesis.

I've never read an article that said Sphex wasps would repeat a behavior forever, just so many times that they appear to be unconscious robotic zombies.

What's important to me is that anti-sphexishness is emblematic of what we think of as a hallmark of consciousness, and sphexishness of its lack. An artificial intelligence with a well-functioning seen-it-before, anti-sphexishness module would be a breakthrough in exhibiting conscious behavior.
 
No, nobody knows if the "signature" deep brain waves play a causative role, or are merely correlated. I have never asserted that they're known to be causative. (The analogy with the choir was mere analogy, and in any case, there currently is no workable theory of consciousness anywhere.)
I'm taking the liberty of highlighting the weasel words for the peanut gallery. I said that you think brain waves cause consciousness; you're denying that they're known to cause consciousness, which doesn't preclude my argument. Funny how slippery all questions to and from you tend to become.

Indeed, just a bit later:
This view assumes that the signature waves are not in any way causative, and that therefore their strength and coherence is irrelevant. But we cannot make that assumption at this point.
So you're not saying it's causative, but it's causative. Great.

The human brain has about 10^11 neurons. My desktop PC has about 3 × 10^11 transistors, not counting the SSDs, which would increase it to around 4 × 10^12. Neurons switch at less than 1 kHz; transistors switch at rates on the order of 1 GHz, a million times faster. But a lot of the transistors in a typical computer are purely memory, whereas all neurons have logical function as well, so the comparison is not simple.

Still, the point stands: We can easily build a computer with the storage capacity of the brain; with a little more effort, we can build one with the processing capacity of the brain. We could even build one with the parallelism of the brain and the switching rate of a modern computer if we really wanted to. That would be expensive, though.
Ehhhhh. You have to do a lot of handwaving to get down to numbers like those. A better estimate can be had using the NEURON simulator as a basis, specifically Markram's Blue Brain project. That'll include emulation overhead, but it's also cutting corners we're not sure can be cut (like astrocytic involvement at the synapse), so assuming the two balance out we're still looking at a few decades of Moore's Law before whole brain comparisons become feasible.
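Here's the rough arithmetic behind that "few decades" figure, with the inputs spelled out; the simulated-column size and doubling period are ballpark assumptions, not measured values.

[CODE]
# Back-of-envelope only; every input is an order-of-magnitude assumption.
import math

neurons_simulated_today = 1e4   # assume: roughly one cortical column, Blue Brain scale
neurons_in_human_brain  = 1e11  # the figure quoted above
doubling_period_years   = 2.0   # assume: classic Moore's Law doubling time

scale_factor = neurons_in_human_brain / neurons_simulated_today   # ~1e7
doublings    = math.log2(scale_factor)                            # ~23
years        = doublings * doubling_period_years                  # roughly 45-50

print(f"~{scale_factor:.0e}x more capacity needed "
      f"= ~{doublings:.0f} doublings = ~{years:.0f} years at current trends")
[/CODE]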
 
Would turning off anti-sphexishness suspend the production of qualia?

I do suspect that the antisphex/seen-it-before/deja-vu (ASD) module might be responsible for what we call consciousness. Qualia I think might come from that roar of associative connections. However, our perception of qualia may result from the combination of those two, plus a dash of cultural woo.

Do you think it's fair to give Piggy's brain wave theory of consciousness the woo tag? Computers can simulate brain waves with data processing. Why wouldn't brain waves in detailed computer program simulations produce computer consciousness, if they produce it in people?
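For what it's worth, here's a toy illustration of "computers can simulate brain waves with data processing": a handful of coupled oscillators (the classic Kuramoto model) falling into sync and producing a collective rhythm. It's a cartoon of synchronization, not a model of any actual brain.

[CODE]
# Toy only: coupled oscillators drifting into sync, producing a collective rhythm
# analogous to the way synchronized populations produce EEG-style "waves".
import math, random

N, K, dt, steps = 50, 4.0, 0.001, 5000          # oscillators, coupling, step size, 5 s
random.seed(0)
phases = [random.uniform(0, 2 * math.pi) for _ in range(N)]
freqs  = [2 * math.pi * random.gauss(10, 0.2) for _ in range(N)]   # around 10 Hz

for _ in range(steps):
    coupling = [(K / N) * sum(math.sin(pj - pi) for pj in phases) for pi in phases]
    phases = [(p + dt * (w + c)) % (2 * math.pi)
              for p, w, c in zip(phases, freqs, coupling)]

# Order parameter r: 0 = incoherent noise, 1 = one big synchronized "wave".
r = abs(sum(complex(math.cos(p), math.sin(p)) for p in phases)) / N
print(f"synchronization after 5 simulated seconds: r = {r:.2f}")
[/CODE]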
 
I do suspect that the antisphex/seen-it-before/deja-vu (ASD) module might be responsible for what we call consciousness. Qualia I think might come from that roar of associative connections. However, our perception of qualia may result from the combination of those two, plus a dash of cultural woo.

Do you think it's fair to give Piggy's brain wave theory of consciousness the woo tag? Computers can simulate brain waves with data processing. Why wouldn't brain waves in detailed computer program simulations produce computer consciousness, if they produce it in people?

I would not call the possibility that brainwaves might be functional, rather than just noise, "woo" just yet. Perhaps they have some sort of modulating effect, causing various nerve networks to act "in tune" with each other in just the right way. I don't really see a strong argument one way or the other. Or perhaps they somehow act like an "interference beam" does in producing a hologram.

OTOH I don't think there's any doubt that an artificial consciousness can be produced. I don't think it can be produced in a programming-only simulation, because I think consciousness requires interaction with the environment, so input/output hardware has to be included.
 
Ehhhhh. You have to do a lot of handwaving to get down to numbers like those.
The numbers themselves are accurate; the comparison is difficult.

A better estimate can be had using the NEURON simulator as a basis, specifically Markram's Blue Brain project. That'll include emulation overhead, but it's also cutting corners we're not sure can be cut (like astrocytic involvement at the synapse), so assuming the two balance out we're still looking at a few decades of Moore's Law before whole brain comparisons become feasible.
Blue Brain is a molecular-level simulation of neural activity, so naturally it is orders of magnitude more computationally expensive than a direct replication of the brain's functionality. It says nothing directly about the complexity or computational power of the brain vs. feasible computers. It takes a lot more hardware to emulate an Apple II than the Apple II itself ever had.

What it has going for it is that it is a priori a valid and instructive research program; if it doesn't work as expected, that in itself will tell us a great deal.
 
I would not call the possibility that brainwaves might be functional, rather than just noise, "woo" just yet.
No, it's definitely woo. In the modern world we wander in and out of stronger EM fields at similar frequencies all the time - if you work in certain jobs, much stronger fields - and the effect is zero. Zero, that is, until you crank up the field until you are actually inducing electrical currents in the brain. That's orders of magnitude stronger than the original brain waves.

Perhaps they have some sort of modulating effect, causing various nerve networks to act "in tune" with each other in just the right way.
Nope. If that were true, you'd pass out just from walking under a fluorescent light. Or alternately, be physically unable to sleep with the light on. Of course, neither of these is true.

OTOH I don't think there's any doubt that an artificial consciousness can be produced. I don't think it can be produced in a programming-only simulation, because I think consciousness requires interaction with the environment, so input/output hardware has to be included.
We interact with the environment through our nervous system, so that's all you need to simulate. But you do need to have some sort of interaction, because (at the very least) without that there's no way to test whether the simulation is conscious.
 
OTOH I don't think there's any doubt that an artificial consciousness can be produced. I don't think it can be produced in a programming-only simulation, because I think consciousness requires interaction with the environment, so input/output hardware has to be included.
Nobody who knows anything about computers would even think of designing an emulation without I/O. All AI and neural networks until now have been designed with I/O.

I wonder how the idea of a "programming only" simulation ever came up?
 
Would turning off anti-sphexishness suspend the production of qualia?
Hmm. Not necessarily. Anti-sphexishness has two components - detecting the loop, and acting upon that information. So you could have a working detector (and other subjective experiences) but have something broken at the point where you act to break the sphexish loop.
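A minimal sketch of that two-component split, with made-up names purely for illustration: the detector and the "act on it" stage can each be knocked out independently.

[CODE]
# Toy sketch: loop *detection* and loop *breaking* as separate stages.
def loop_detected(history, window=3):
    """'Seen it before' fires when the last few actions are all identical."""
    return len(history) >= window and len(set(history[-window:])) == 1

def next_action(default, history, detector_ok=True, actor_ok=True):
    if detector_ok and loop_detected(history):
        return "try something different" if actor_ok else default
    return default

history = ["drag prey back"] * 5
print(next_action("drag prey back", history, detector_ok=True,  actor_ok=True))   # full anti-sphexishness
print(next_action("drag prey back", history, detector_ok=True,  actor_ok=False))  # detects the loop but keeps looping anyway
print(next_action("drag prey back", history, detector_ok=False, actor_ok=True))   # never even notices
[/CODE]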

If I knew more about neuropathology and psychopathology, I might be able to name a specific syndrome that illustrates this. If anyone knows of such, I'd be very interested.
 
No, it's definitely woo. In the modern world we wander in and out of stronger EM fields at similar frequencies all the time - if you work in certain jobs, much stronger fields - and the effect is zero. Zero, that is, until you crank up the field until you are actually inducing electrical currents in the brain. That's orders of magnitude stronger than the original brain waves.

Does wandering in and out of said EM fields have any effect on the brain's production of brainwaves? Is there technology available that can manipulate brainwave production? If so does it have any effect on conscious experience?


Nope. If that were true, you'd pass out just from walking under a fluorescent light. Or alternately, be physically unable to sleep with the light on. Of course, neither of these is true.

Doesn't that happen to people with some medical conditions?


We interact with the environment through our nervous system, so that's all you need to simulate. But you do need to have some sort of interaction, because (at the very least) without that there's no way to test whether the simulation is conscious.

To me, consciousness is partially defined as a form of interaction with one's environment, so the requirement is tautological.
 
Does wandering in and out of said EM fields have any effect on the brain's production of brainwaves?
Nope.

Is there technology available that can manipulate brainwave production?
Well, plenty of drugs do that. ;) But if you mean directly via EM fields, then yes-ish: It's called transcranial magnetic stimulation.

If so does it have any effect on conscious experience?
Blinded trials suggest that it doesn't. That is, patients cannot distinguish between real TMS and sham TMS at better than chance. (Which is news to me, actually.)

Doesn't that happen to people with some medical conditions?
Um, which one? Certainly no-one is sensitive to typical household electric fields (or typical industrial ones, for that matter). Many people claim to be, but blinded trials indicate that this is nothing more than confirmation bias.

To me, consciousness is partially defined as a form of interaction with one's environment, so the requirement is tautological.
Fair enough. If the alleged consciousness isn't interacting in some way, what would it mean to call it conscious?
 
If the alleged consciousness isn't interacting in some way, what would it mean to call it conscious?

FWIW:

I'd say night dreaming is a non-interactive conscious state in which the brain produces its own sensory input (way upstream from the raw senses, of course) and suppresses output with paralysis (excepting eye movement). In other words, it's interacting with itself. We wouldn't have to wait until it wakes up to interact with us to tell us its dream. We could monitor and confirm its consciousness without "interacting" with it. We also do a lot of rehearsing of actions in our minds (animals do, too) without interacting, and quite consciously.

E.g. a great way for a chess program to learn is to play against itself a gazillion times. No interaction with the world. (It's a common technique in development to substitute IO devices with software simulations.) This is clearly analogous to dreaming. If it had a good functional "seen it before" module, we'd have a hard time arguing that it didn't have a faint sliver of consciousness.
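(For the non-programmers, this is the sort of thing I mean by substituting an I/O device with a software simulation; the class names are invented for the example.)

[CODE]
# Toy sketch: the rest of the program talks to an interface and can't tell whether
# the "sensory input" comes from real hardware or from software generating it itself.
class Camera:
    def read_frame(self):
        raise NotImplementedError("would talk to real hardware here")

class SimulatedCamera(Camera):
    """Software stand-in that generates its own input, like the dreaming brain above."""
    def read_frame(self):
        return [[0, 200, 0, 200] for _ in range(4)]   # a dummy 4x4 test pattern

def count_bright_pixels(sensor, threshold=128):
    return sum(pixel > threshold for row in sensor.read_frame() for pixel in row)

print(count_bright_pixels(SimulatedCamera()))   # downstream code is none the wiser
[/CODE]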

Now we get into a Schrödinger's consciousness realm. As Piggy once apparently argued, a computer running a brain consciousness simulation, if on a planet with no one to observe it, would not be conscious.
 
The numbers themselves are accurate; the comparison is difficult.


Blue Brain is a molecular-level simulation of neural activity, so naturally it is orders of magnitude more computationally expensive than a direct replication of the brain's functionality. It says nothing directly about the complexity or computational power of the brain vs. feasible computers. It takes a lot more hardware to emulate an Apple II than the Apple II itself ever had.

What it has going for it is that it is a priori a valid and instructive research program; if it doesn't work as expected, that in itself will tell us a great deal.
I don't want to get into pedantic semantics, but NEURON's a channel-level simulation, not molecular. A full molecular dynamics sim of a brain would be crazy, with a capital "tell me what your grant reviewers say, this is gonna be good."

And NEURON is the way it is because that's the necessary level of detail to get functional equivalence. Integrate and fire models are not sufficient to replicate behavior we know to be important (for example, dendritic backpropagation), much less whatever model you're thinking of when you compare a neuron to a single transistor.

As I mentioned before, there's certainly emulation overhead that we can eliminate. But there's probably also additional behavior we'll need to add. Until large-scale emulation projects like Blue Brain start revealing simplifications and deficiencies, there's no sense in estimating how much computation these two categories will comprise. To directly compare storage and processing capacity of the brain to today's computers requires a list of assumptions as long as your arm.
 
I don't want to get into pedantic semantics, but NEURON's a channel-level simulation, not molecular. A full molecular dynamics sim of a brain would be crazy, with a capital "tell me what your grant reviewers say, this is gonna be good."
Pedantic semantics is what it's all about. :) And yeah, I must have got my projects mixed up; I know there is no active project trying to do a molecular simulation of an entire brain, but I believe it's being done for functional modules.

And NEURON is the way it is because that's the necessary level of detail to get functional equivalence.
Given what we currently know, yes.

Integrate and fire models are not sufficient to replicate behavior we know to be important (for example, dendritic backpropagation), much less whatever model you're thinking of when you compare a neuron to a single transistor.
Just to be clear: I'm not equating a neuron with a single transistor. I'm saying that the computational power of a single transistor, integrated over time, is of the same order as a neuron, though their immediate behaviours are very different, with the neuron being a few orders of magnitude (at least 3 and probably 4) more complex.
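Laying that arithmetic out explicitly, with the per-event complexity factor as the stated assumption:

[CODE]
# Crude order-of-magnitude arithmetic for the comparison above; nothing here is measured.
neuron_events_per_sec     = 1e3   # "less than 1 kHz"
transistor_events_per_sec = 1e9   # "on the order of 1 GHz"
neuron_complexity_factor  = 1e4   # assume ~3-4 orders of magnitude more work per event

neuron_equiv_ops = neuron_events_per_sec * neuron_complexity_factor   # ~1e7 transistor-equivalent ops/s
ratio = transistor_events_per_sec / neuron_equiv_ops                  # ~1e2
print(f"transistor : neuron throughput ratio is roughly {ratio:.0e} on these figures")
[/CODE]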

As I mentioned before, there's certainly emulation overhead that we can eliminate. But there's probably also additional behavior we'll need to add.
Sure.

Until large-scale emulation projects like Blue Brain start revealing simplifications and deficiencies, there's no sense in estimating how much computation these two categories will comprise. To directly compare storage and processing capacity of the brain to today's computers requires a list of assumptions as long as your arm.
We can look at the amount of bandwidth being consumed, though, and from that infer the amount of processing going on. That gives us an upper bound, though not a lower one.
 