On Consciousness

Is consciousness physical or metaphysical?


When I asked Piggy whether a mechanical brain that was functionally equivalent to an organic brain would be conscious, his answer was clear: it would be.
But that's not what Scott asked; he asked whether a brain made of robot neurons would be conscious.

This is the heart of Piggy's magic bean theory of consciousness: a "functionally equivalent" machine brain, with all the qualifiers he tacked on in the reply to Scott, can be conscious. A machine brain made of robot neurons cannot, since that can be simulated on different hardware, and he's explicitly said a simulation can't be conscious.

The difference between the two? He ain't saying. Until he does - it's a magic bean.
 
Well, I asked Piggy about mechanical brains, so I don't see "robot neurons" as being much different. Anyway, we'll see what he says.
 
Found it:

Yes, a functionally equivalent machine would be conscious.

Can't do it with ropes, however... at least, it sure doesn't seem so, given the latest research.

Well, if you can't do it with ropes, what could you do it with? Silicon chips? Gears, levers, belts and pulleys? Hydraulic logic gates? Relays? Vacuum tubes?

What's the problem with ropes?
 
Well, if you can't do it with ropes, how could you do it? Silicon chips? Gears, levers, belts and pulleys? Hydraulic logic gates? Vacuum tubes? What's the problem with ropes?

I agree. Why would the "hardware" matter, if it's functionally equivalent?
 
Mmno. Sorry. One side makes sense, the other side is constructed entirely of logical fallacies.

Actually, the primary flaw I see in Piggy's position is a faulty premise rather than a logical fallacy. On your side, I'm not sure exactly what it is you're arguing yet, other than that a seeming misrepresentation of Piggy's position is wrong, so I have not yet formed an opinion as to the quality of your logic.
 
...

...Or did you mean "how do brainwaves produce consciousness?" Damned if I know. Piggy doesn't know either. That's an essential element of his argument: if we had any idea at all how it might work, we could test it. But all tests anyone's come up with so far (like not getting your brains scrambled by weak magnetic fields) have returned negative. So his best option is to have the model remain mysterious and unknowable, and just stir up enough mud around every other model that no one questions his holograms and qualia.

Granted I may be misunderstanding Piggy, but I don't believe he's said that brainwaves produce consciousness, only that certain patterns of brainwaves are always coincident with consciousness, as far as anyone knows.


Actually, no he didn't. He answered a similar-sounding question which wasn't asked, and he didn't even answer it with "yes." He left himself multiple ways of backing out of the statement, from magic beaning something aside from neurons that's vital to consciousness (see above re: brainwaves), to claiming that a working model tornado inna box yadda yadda can't be conscious.

I don't know. I just don't see your interpretation in the words that Piggy used. Maybe I'm being naive.


Good! That's a much healthier viewpoint than what you had when I rejoined the conversation - you were practically Piggy's disciple. Question everything! Let no one speak from authority. Listen, and judge for yourself.

This is just the sort of gross overstatement that I had in mind when I accused both sides of flinging straw. At the point that you rejoined the conversation I was asking questions trying to ascertain exactly what Piggy's point of view was. I often do that by attempting to point out and explore bits of common ground, but to suggest that anything I've said in this thread might mark me as a disciple of anyone is way over the top.
 
Actually, the primary flaw I see in Piggy's position is a faulty premise rather than a logical fallacy.
But his arguments to shore up his faulty premise are riddled with logical fallacies.

On your side, I'm not sure exactly what it is you're arguing yet, other than that a seeming misrepresentation of Piggy's position is wrong, so I have not yet formed an opinion as to the quality of your logic.
Our position (generally speaking) is that consciousness is a form of information processing and is thus substrate-neutral and can be programmed on any general-purpose computer.

We also note that even if consciousness requires a squishy biological brain for some reason (which it doesn't), it can still be programmed on any sufficiently large general-purpose computer, because simulating the brain necessarily simulates its function, and if its function includes consciousness, so does the simulation. If it doesn't, you haven't simulated the brain, and we know that brains can be simulated, because any physical system can be simulated.
 
I really want Piggy to explicitly acknowledge that he believes a brain made of robot neurons would be conscious, just to make sure I heard him right.

I suspect he's not confirming this because he knows it's mate in one if he makes that move. I'm giving him a chance to take back the move. I see his hand still on the piece. However, to take back the move he'd need to explain the reason really well, or come up with a novel assertion.

I await his response.

I'm not sure what you want here. I've seen Piggy answer this question quite clearly at least twice. But just for the heck of it, let's pretend that I claim your proposition won't work. Could you demonstrate your mate in one for me please?




But that's not what Scott asked; he asked whether a brain made of robot neurons would be conscious.

This is the heart of Piggy's magic bean theory of consciousness: a "functionally equivalent" machine brain, with all the qualifiers he tacked on in the reply to Scott, can be conscious. A machine brain made of robot neurons cannot, since that can be simulated on different hardware, and he's explicitly said a simulation can't be conscious.

The difference between the two? He ain't saying. Until he does - it's a magic bean.

The proposition which Piggy acceded to was that if one were to replace every neuron (I'm not sure what, if anything, is being done with the non-neuron cells in the brain) with a synthetic one that accepted the same inputs and produced the same outputs in real time, then the resulting brain would still be conscious. Such a set-up can't help but be "functionally equivalent". As far as I can tell, he has not conceded that some other organization of synthetic neurons would also be conscious, but I think he's left that open as a possibility, pending a better understanding of all the functions being carried out by living brains. I'm still confused about just what Piggy's beef is with "simulation" vs. "model", but I think I have a fuzzy inkling of what's going on there, which I am still trying to develop.



But his arguments to shore up his faulty premise are riddled with logical fallacies.

Which fallacies? I'm genuinely curious how you analyze his position in detail, as I think it would help me to understand yours. For my own part, when I come across a faulty premise I rarely bother to check for fallacies later on in an argument; whether they're there or not is irrelevant when the premises are not true. GIGO, and all that.




Our position (generally speaking) is that consciousness is a form of information processing and is thus substrate-neutral and can be programmed on any general-purpose computer.

I'm not sure I agree that any and all forms of information processing can be carried out on any general purpose computer. I'm not at all up to speed on this stuff, but my admittedly meager understanding is that, for instance, quantum computers are theorized to be able to perform tasks that could never be done on an ordinary general purpose computer. I am NOT suggesting that there's anything "quantum" about consciousness, or the brain, but if there's one counter-example to your claim then there may be others.


We also note that even if consciousness requires a squishy biological brain for some reason (which it doesn't), it can still be programmed on any sufficiently large general-purpose computer, because simulating the brain necessarily simulates its function, and if its function includes consciousness, so does the simulation. If it doesn't, you haven't simulated the brain, and we know that brains can be simulated, because any physical system can be simulated.

With the "sufficiently large" qualifier, and perhaps another--massively multi-core architecture--I think I probably agree with you, though I suspect that qualifier may turn out to mean more than many people suspect. I think the brain is doing one heck of a lot of stuff simultaneously, and that simulating everything the brain does in a timely manner may require more robust hardware than we're even close to being able to throw at the problem. I suspect that in order for consciousness to be present in any meaningful sense, that timeliness/simultaneity may be crucial.
 
Neurons? Why the special pleading, right?

Right, it's often assumed that neurons contain the magic beanery of consciousness. Penrose: access to quantum behavior, Pigliucci: carbon atoms or something, etc.

But right now, neurons are pretty well understood as data processing units. Those can be simulated in a computer, and so can networks of them.
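
To give a flavor of what "simulated in a computer" means here: below is a minimal leaky integrate-and-fire sketch in Python. It's a toy, not a claim about the full biochemistry of a real cell; every constant in it is illustrative.

[code]
# Toy leaky integrate-and-fire neuron: an illustration of the
# "neuron as data processing unit" idea. All constants are made up
# for illustration, not fitted to any real cell.

def simulate_lif(input_current, dt=0.001, tau=0.02, v_rest=-0.070,
                 v_thresh=-0.050, v_reset=-0.070, r=1e7):
    """Return spike times (seconds) for a list of input currents (amps)."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Membrane potential leaks toward rest while integrating input.
        v += (-(v - v_rest) + r * i_in) / tau * dt
        if v >= v_thresh:            # threshold crossed: emit a spike...
            spikes.append(step * dt)
            v = v_reset              # ...and reset the membrane potential
    return spikes

# A steady 2.5 nA input for one second yields a regular spike train.
print(simulate_lif([2.5e-9] * 1000))
[/code]

A network is just many of these, with each neuron's spikes feeding into the input currents of others; nothing in that loop requires wetware.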
 
I'm not sure what you want here. I've seen Piggy answer this question quite clearly at least twice. But just for the heck of it, let's pretend that I claim your proposition won't work. Could you demonstrate your mate in one for me please?

It's a bit of a spoiler, but I'll consider the time-expired buzzer to have sounded.

Assertion #1: a brain made of robot neurons would experience qualia consciousness, as Piggy has apparently asserted.

A robot neuron's input-to-output behavior is well enough defined to be duplicated in software, and so is a network of them, built up from the sensory inputs to the motor outputs (we can even average the neuron spikes to output the brain waves indicative of consciousness).

Therefore, such a computer-simulated brain, presented at its visual input with an apple of ordinary appearance, would respond on a voice output device about how lovely the red of the apple was. When asked if it was having a subjective experience of red, it could report that it was, and wonder about the seemingly uncomputable nature of that experience, and perhaps wonder if it was produced by something more than data or information processing. We wouldn't program it to lie. It's a working model of the brain, remember, just in circuit states rather than neuron states, so the impression of the quale would have to fall out. (Unless we postulate a magic bean, in which case assertion #1 has to be revoked.)

Of course, my opponent could just say it's not a checkmate and dump the pieces off the board, but it's the peanut gallery that matters.
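
As for the spike-averaging aside: that step is mechanically trivial. A toy version in Python, with the spike trains generated at random purely for illustration (a real simulation would of course use the network's own spikes):

[code]
import random

# Toy version of "average the neuron spikes to output the brain waves":
# sum many binary spike trains into one population signal. The trains
# here are random and purely illustrative.

random.seed(1)
n_neurons, n_steps = 1000, 500

# Each model neuron fires with 2% probability per time step (made up).
trains = [[1 if random.random() < 0.02 else 0 for _ in range(n_steps)]
          for _ in range(n_neurons)]

# The population average at each step is a crude stand-in for an
# EEG-like field signal.
signal = [sum(t[step] for t in trains) / n_neurons
          for step in range(n_steps)]
print(signal[:10])
[/code]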
 

Thank you. I'm trying to resolve several different lines of thought, and it's very helpful for me in trying to understand all sides here when you can be this clear about even simple questions. I have a couple of follow-up questions, if you don't mind:

Regarding the first point (that a robot neuron's input-to-output behavior is well enough defined to be duplicated in software): how sure are we that a living neuron's input/output behaviour is well enough defined that we know how to produce a suitable robot replacement? Also, what is the range of input/output behaviour that living neurons produce? My understanding is that a neuron is a lot more complicated than a transistor, but I'm not clear on just how much more.

Regarding the second point (averaging the spikes to output the signature brain waves): why do so at all? Is there any research that suggests these signature brainwaves are cause rather than effect/correlation? My own suspicion is that they are effect. I think they are produced as a byproduct whenever the brain starts doing whatever is necessary for the conscious mind to "wake up".

Regarding the apple example, I'll tell you a little of what I'm thinking. It seems to me that a conscious human, presented with an apple, actually "does" quite a lot more than you describe, but most of that is internal processing. A visual cue like that actually activates a multitude of associations and memories of varying strengths. While one neural network processes the apple's appearance, another recalls the flavour of apple, still another craves a slice of pie, and another wonders why the heck that breakfast cereal is called Apple Jax when it doesn't taste like apples, and on and on. All of these different "sub-routines" are talking to each other, telling each other what they're doing and simultaneously vying for the attention of the executive network. As the stronger associations gain this attention, the apple quale resolves, sort of like a cacophony of individual voices unifying into a chorus in harmony.

In short, I think the "phenogram" Piggy is talking about is a sort of illusion produced by the brain "talking to itself" in a lot of different voices in harmony. I also think it's plausible that the minimum hardware required to pull this off may be beyond the scope of the general purpose computers we can even come close to building today.
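
To put the "voices in harmony" picture in something other than prose, here's the sort of cartoon I have in mind. Every association name and strength below is invented; this is a sketch of the idea, nothing more:

[code]
# Cartoon of association "sub-routines" vying for the attention of an
# executive network. Every name and strength below is invented.

associations = {
    "visual: red, round, shiny": 0.9,
    "taste memory: sweet, tart": 0.7,
    "craving: a slice of pie": 0.5,
    "stray thought: Apple Jax naming": 0.1,
}

# The executive attends to the voices above some strength threshold;
# the resolved "quale" is the chorus of whatever made the cut.
threshold = 0.4
chorus = {name: w for name, w in associations.items() if w >= threshold}
focus = max(chorus, key=chorus.get)

print("strongest voice:", focus)
print("contributing voices:", sorted(chorus))
[/code]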
 
Which fallacies? I'm genuinely curious how you analyze his position in detail, as I think it would help me to understand yours. For my own part, when I come across a faulty premise I rarely bother to check for fallacies later on in an argument; whether they're there or not is irrelevant when the premises are not true. GIGO, and all that.
That's true; as long as the premise is unsound, it doesn't directly matter if the argument is valid. I can go back over some of Piggy's posts and identify some of the errors if you like. (Not sure if that will be productive for the thread, though.)

I'm not sure I agree that any and all forms of information processing can be carried out on any general purpose computer. I'm not at all up to speed on this stuff, but my admittedly meager understanding is that, for instance, quantum computers are theorized to be able to perform tasks that could never be done on an ordinary general purpose computer. I am NOT suggesting that there's anything "quantum" about consciousness, or the brain, but if there's one counter-example to your claim then there may be others.
Quantum computers can perform certain operations faster than deterministic (Turing-equivalent) computers - potentially orders of magnitude faster. They can't solve problems that are not solvable in principle by a Turing-equivalent computer, though, and a Turing-equivalent computer can simulate the operation of a quantum computer. So even if human consciousness does involve quantum processes (and I agree, there's no evidence that it does), it can still be implemented on any sufficiently powerful general-purpose computer.
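
To make the simulation point concrete: a quantum state is just a vector of complex amplitudes, and a gate is a matrix, so an ordinary computer can track them exactly. The cost grows exponentially as qubits are added, but that's a matter of speed, not possibility. A one-qubit sketch:

[code]
import math

# Classical simulation of one qubit: the state is two complex
# amplitudes; a gate is a 2x2 matrix applied to that vector.

def apply_gate(gate, state):
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

h = 1 / math.sqrt(2)
hadamard = [[h, h], [h, -h]]    # puts |0> into an equal superposition

state = [1 + 0j, 0 + 0j]        # qubit initialized to |0>
state = apply_gate(hadamard, state)

# Measurement probabilities are the squared amplitude magnitudes.
print([abs(a) ** 2 for a in state])   # -> [0.5, 0.5] (up to rounding)
[/code]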

With the "sufficiently large" qualifier, and perhaps another--massively multi-core architecture--I think I probably agree with you, though I suspect that qualifier may turn out to mean more than many people suspect.
A massively multi-core system works exactly the same as a single-core system, mathematically speaking. There are practical differences, but in principle, anything you can do on one, you can do on the other.
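
A trivial illustration of that equivalence: split a job into per-core-sized chunks and the answer doesn't change. The core count below is just an assumed figure; the point is that parallelism changes who does which piece, not what is computable.

[code]
# The same computation done in one pass and in independent
# "core-sized" chunks; the results are identical, which is the point.

data = list(range(1_000_000))

serial = sum(x * x for x in data)

n_cores = 8   # illustrative core count; divides the data evenly
chunk = len(data) // n_cores
partials = [sum(x * x for x in data[i * chunk:(i + 1) * chunk])
            for i in range(n_cores)]

assert serial == sum(partials)
[/code]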

I think the brain is doing one heck of a lot of stuff simultaneously, and that simulating everything the brain does in a timely manner may require more robust hardware than we're even close to being able to throw at the problem. I suspect that in order for consciousness to be present in any meaningful sense, that timeliness/simultaneity may be crucial.
There's a difference though between consciousness in itself and a fully functional human mind. The former is a lot simpler than the latter.
 
In short, I think the "phenogram" Piggy is talking about is a sort of illusion produced by the brain "talking to itself" in a lot of different voices in harmony. I also think it's plausible that the minimum hardware required to pull this off may be beyond the scope of the general purpose computers we can even come close to building today.
Well, on that, we can build computers pretty much as big as you want. It's just that as technology is still advancing rapidly, it doesn't make sense to build computers bigger than you'll need for the next 3-5 years; any more and you'll only be burning money.

If you consider the Internet as a single integrated system (which it sort of is...) it is already vastly more complex than the human brain. Each of Google's major datacenters is roughly brain-sized. (This is if you consider each neuron to be the equivalent of thousands of transistors.) And of course common CPUs switch at rates orders of magnitude faster than neurons, so a brain-scale computer network is orders of magnitude more powerful than a real brain.
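
The switching-speed part is easy to put rough numbers on, with the caveat that both figures below are round assumptions of mine rather than anything authoritative:

[code]
# Rough switching-speed comparison; both constants are assumptions.

neuron_rate_hz = 100.0   # spikes/s, near the high end for neurons
cpu_clock_hz = 3e9       # a typical modern CPU clock

ratio = cpu_clock_hz / neuron_rate_hz
print(f"CPU cycles per neuron spike interval: {ratio:.0e}")  # ~3e+07
[/code]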
 
I think the brain is doing one heck of a lot of stuff simultaneously, and that simulating everything the brain does in a timely manner may require more robust hardware than we're even close to being able to throw at the problem. I suspect that in order for consciousness to be present in any meaningful sense, that timeliness/simultaneity may be crucial.
If you mean timeliness in a relative manner, I absolutely agree. Everything need not be performed in lockstep as in current computer designs, but some sort of synchronisation is necessary. However, some people, and I think that Piggy is among them (but I am not sure), believe that processing also needs to be performed at the speed at which human brains work. An argument against the rope-and-pulley computer emulation of consciousness has been that it cannot work fast enough. I concede that we could probably not recognise consciousness if it took hundreds of years to formulate the thought "What a nice millennium", but it would be consciousness nonetheless.
 
how sure are we that a living neuron's input/output behaviour is well enough defined that we know how to produce a suitable robot replacement? Also, what is the range of input/output behaviour that living neurons produce? My understanding is that a neuron is a lot more complicated than a transistor, but I'm not clear on just how much more.

If we don't now perfectly understand the behavior of neurons, then either we will some day, or there's a magic bean. So far, there's no evidence for a magic bean.

Is there any research that suggests these signature brainwaves are cause rather than effect/correlation? My own suspicion is that they are effect. I think they are produced as a byproduct whenever the brain starts doing whatever is necessary for the conscious mind to "wake up".

I agree. Just wanted to address a Piggy point on the brainwave thing.

It seems to me that a conscious human, presented with an apple, actually "does" quite a lot more than you describe, but most of that is internal processing.

Certainly, but I wanted to be as terse as possible and skip all those details, which are obviously just more data processing, and keep to the essentials.

I also think it's plausible that the minimum hardware required to pull this off may be beyond the scope of the general purpose computers we can even come close to building today.

We aren't saying we can build a computer-simulated human brain today. The point is that, with computer technology advancing as fast as it is, there does not seem to be any reason we won't some day succeed (unless you postulate a magic bean).
 
I want to refute an objection that treats computer software and its execution as an abstraction. It's not an abstraction. Computers running software are physical machines, like mechanical or chemical machines.

Electronic computing machine states are patterns of static electric charges, and functions are performed by switching electric currents to move charges around the machine. They are physical mechanisms of electricity. A running computer program is not an abstraction. It's a physical machine in action using the controlled movements of electron charges.

(sure, there's other stuff in current computers like magnetic storage, but those are optimizations of economy and don't change the essence of how computing machines do their thing)
 
how sure are we that a living neuron's input/output behaviour is well enough defined that we know how to produce a suitable robot replacement?

BTW, I've proposed the robot neuron brain as a thought experiment to expose a principle, not as a proposal for a practical way to make a conscious machine.

Electronic circuits and computer programs that behave like neurons have been around for decades.

...and computer programs that simulate electronic circuits have been around for decades. They are essential today for circuit design.

Linkies: Circuit of "robot neuron" and the paper about it.

Here's another one. Google image "neuron schematic" for more!
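
On the circuit-simulation point: tools like SPICE work by numerically integrating the circuit's differential equations. A toy version for a single RC stage, with all component values arbitrary:

[code]
# Toy circuit simulation: forward-Euler integration of an RC low-pass
# stage driven by a voltage step. Real simulators (SPICE and friends)
# do this far more carefully; the values here are arbitrary.

r = 10e3      # ohms
c = 1e-6      # farads
dt = 1e-5     # seconds per step
v_in = 5.0    # step input, volts

v_out = 0.0
for step in range(5000):
    # The capacitor voltage relaxes toward the input with time
    # constant R*C.
    v_out += (v_in - v_out) / (r * c) * dt

print(f"v_out after {5000 * dt * 1000:.0f} ms: {v_out:.3f} V")
[/code]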
 
That is a strawman. Please quote where anybody has said this if you want to dispel the accusation.

All computers work with hardware, particularly with input/output systems, but special math processors are also known.

Any system that is trying to emulate the brain will need corresponding hardware for the sensory input and the motor output. Not necessarily 'wet' hardware, but something that can provide the proper input and output.

But we're not discussing sensory input or motor output, but rather the production of the phenomenology.

Is it your position that this requires no specialized hardware, that it simply is part-and-parcel of the "information processing" function of the brain?

If so, that's the position which seems to me to be fatally flawed, for a number of reasons.
 
no u


His pet theory is that the brain waves themselves cause consciousness. Since that's kind of like the idea that a car moves because the engine goes "vroom," he doesn't like saying it outright. He tried doing so earlier in the thread. It didn't work out well for him, although he'd probably say his opponents were unable to make any credible arguments against it and were swooned, swooned I say, by the force of his animal magnetism. The more recent mentions of it have been much more carefully worded.

Point is, having a scientific theory that's too embarrassing to say out loud doesn't leave you with a lot of options.

Among the better ones is elimination: if you can disprove or discredit every alternative hypothesis, yours wins by default. That's been his primary motivation behind all this disagreeableness: he's not looking for the truth, but to simply be an obstinate git until everyone else gives up.

No, nobody knows if the "signature" deep brain waves play a causative role, or are merely correlated. I have never asserted that they're known to be causative. (The analogy with the choir was mere analogy, and in any case, there currently is no workable theory of consciousness anywhere.)

And whenever you want to get around to actually answering the questions you've failed to address for lo these many pages, that will be helpful.
 