My take on why the study of consciousness may not be as simple as it seems

Have we come to a decision as to how one would determine whether a noun is engaged in the verb "think"?

Oh, I think so. If you want to claim chess computers think (or possibly "think"), the onus is on you to back it up. If chess computers can "think", then presumably it's possible that an abacus, a watch, a car alarm, or a Neopet can all think. Is that your position?



A horse-like thing with a horn in the middle of its head? They don't exist, although such a being is not totally preposterous. (Unless, of course, we move beyond morphology into the various mythologically assigned magical properties of unicorns.)

I don't know if they exist or not. Possibly on some planet there is an animal we could legitimately call a unicorn. Or, if we're in a simulation, maybe the programmer(s) will reveal them on an uncharted island in the future. My point is, if you want to claim machines are possibly "thinking" when their voltage potentials change, you might as well go all out and claim other things are possible.



Hmm, no, I still only see a mass of changing voltage potentials. I don't see where this "assign" or "2" thing could be.

As Westprog pointed out, we give it meaning. Do you think the computer has any notion it's playing a game? I'm surprised you bought into the idea of "chess" at all. Chess is simply moving pieces around according to a set of rules. The only way to "play" chess is to recognize those moves and rules constitute a "game" with a "winner" and "loser". Those are human concepts. A chess computer has no idea it's playing a game, nor does it have a desire to win. A human player knows it's a game and has a desire to win. Completely different approaches, wouldn't you say?



No. The act of a human playing chess occurs when nerves in the appropriate limbs fire in such a way that chess pieces can be said to have been moved according to the game of chess. The origin of these nerve signals is in the brain - which is a mass of changing synaptic potentials.

Seems a bit dry. When I think of playing chess, for some reason synaptic potentials don't come to mind. Past games, contemplating a tough board position, awareness of the game, desire to win, love of a challenge... for some reason, that's what comes to my mind (and, I imagine, most anyone else's) when I think of playing chess. Your description fails because it doesn't address any of these.

But you still haven't convinced me that a computer plays chess by performing mathematical operations - all I see is a mass of changing voltage potentials which eventually lead to photons being streamed out of an appropriate display device - with the patterns of photons changing in a way that could be said to have changed according to the game of chess.

A computer doesn't "play" chess at all, does it? It seems like it does to us, but that's because we attach particular meaning to the moves it makes. So what would make you think a computer, if it's simply a mass of changing voltage potentials, could "play" chess?



So the computer is not thinking, its logic gates are just firing? :rolleyes:

Do you think it's thinking? That it can formulate thoughts? Of the following, which are capable of thinking:
Abacus?
ENIAC?
Thermostat?
Calculator?
Cray XT5?

If one wishes to argue vitalism then presumably the only argument as to why anything doesn't have a conscious awareness would be the lack of the elan vital.

Or something is not consciously aware because it lacks consciousness?

Presumably if one could bottle it then I could pour it over a rock and it would be conscious?

If one thought consciousness could exist in a liquid form (which I wouldn't put past a couple of people here).



Where does consciousness come from again? Could you answer that please?

That's why it's a hard problem, isn't it? Did consciousness come first, and everything follow, or does it arise (magically, perhaps) when you arrange enough neurons together in the right way? Randfand doesn't like this question (courtesy of Kaggan), but I think it has some merit:

If two hydrogen and one oxygen give us a substance with the property "wet", how many neurons does it take for consciousness to arise?



Or at least what it is?

I only know what my own consciousness is, as you only know what yours is. It's unique to each of us. I know what it isn't. It isn't... square. It's not... soft. It's not selfreferentialinformationprocessing (or whatever Pixy likes to call it).

Fundamentally, if you just can't say what properties an object is supposed to have before it is conscious, then I really fail to see how any sort of progress could ever be made here.

Back up: how can we ever determine if something IS conscious or not?
 
Oh, I think so. If you want to claim chess computers think (or possibly "think"), the onus is on you to back it up. If chess computers can "think", then presumably it's possible that an abacus, a watch, a car alarm, or a Neopet can all think. Is that your position?

Again it really all depends on what you mean by "think".

I would say that there is a sense in which the computer certainly is "thinking" about chess but we know it is not analogous to how a human would think about chess or anything else.

Again, it really depends on what you want "think" to mean. (After all, "logic" derives from the Greek for thought, and we certainly accept that computers are logical).

I don't know if they exist or not. Possibly on some planet there is an animal we could legitimately call a unicorn.

What's wrong with a little common or garden genetic engineering like what my grandparents used to do?

My point is, if you want to claim machines are possibly "thinking" when their voltage potentials change, you might as well go all out and claim other things are possible.

Well, apparently "I" "think" despite the only physical evidence for that being synaptic potential changes as far as you're concerned - so I really am going to have to demand as reason why you categorise these things differently.

As Westprog pointed out, we give it meaning. Do you think the computer has any notion it's playing a game?

Absolutely not. Why would it? It's not playing the game in the way we play it. That's missing the point of whether or not it could do so.

(Or, as an aside as to what a "game" is, whether chess remains a game if say, the outcome of the game would determine your survival).

Completely different approaches, wouldn't you say?

Or one could argue that since the computer's sole raison d'etre is chess that it takes the whole matter rather more seriously than the human.

I am not, of course: I just want you to explain why, when a human clearly is engaged in some sort of chess-playing algorithm and the computer is clearly engaged in some sort of chess-playing algorithm, you see these as such alien domains just because the human may also be thinking about having a poo, having sex, being hungry, etc...

That a human is a bit more general purpose is missing the entire point of this thread.

Seems a bit dry. When I think of playing chess, for some reason synaptic potentials don't come to mind.

Well, when I'm pumping blood, for some reason the vast network of veins, arteries and capillaries doesn't come to mind; when I'm digesting food, for some reason I'm not thinking about enzymes breaking down complex polypeptides into amino acids.

Why should I be thinking about synaptic potentials just because synaptic potentials allow me to think?

The dryness is irrelevant - if thinking about a tough position, anticipating the victory or appreciating the challenge come about because of the changing synaptic potentials then these experiences are not fundamentally different in quality to the machine switching voltages to come to a chess decision. The difference is that your version comes with a load of non-chess specific baggage.

If, however, you're saying that these things are fundamentally beyond what merely seems to be happening physically when one looks into a brain as one would attach a debugger to deep blue then you are really going to have to say WHY.

So what would make you think a computer, if it's simply a mass of changing voltage potentials, could "play" chess?

Well, apparently for the same reason I'm supposed to accept that you experience "toughness", "awareness", "challenge" if you are just a mass of shifting synaptic potentials.

Which is all you can appear to be to me from the outside, right?

How do I know you think at all? Sure, you give the appearance of such a thing, but that is only because I attach such a meaning to it.

Do you think it's thinking?

Absolutely.

That it can formulate thoughts?

Yep.

Of the following, which are capable of thinking:
Abacus?
ENIAC?
Thermostat?
Calculator?
Cray XT5?

The thermostat could be said to be thinking about temperature in a very shallow way. An abacus is not capable. The other devices have the potential if they are programmed. Simply put, the sophistication of thought would have to be related to the complexity of the programming.

Now, of the following, which are capable of thinking:

Carbon?
Amino acid?
Protein?
Cell?
Nerve?
Cortex?
Brain?

Or something is not consciously aware because it lacks consciousness?

Consciousness is the elan vital I am talking about.

If one thought consciousness could exist in a liquid form (which I wouldn't put past a couple of people here).

What form DOES it exist in, please?

That's why it's a hard problem, isn't it?

It is much harder when one refuses to attempt to say what it is and instead engages in saying what does not have it.

If two hydrogen and one oxygen give us a substance with the property "wet", how many neurons does it take for consciousness to arise?

How many carbon atoms until door?

You can't ask a quantitative question of a qualitative thing.

For example: there is a minimum number of logical steps required to prove a given logical statement. One cannot ask what is the minimal requirement of logic for proof - such a statement is meaningless. You are asking the latter. You need to ask the former - and therefore you need to say what the "statement" is that consciousness is.

The minimum logical requirements for the property of consciousness will come from that.
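
For instance, here is the former kind of question answered for a toy statement (a minimal illustration in Lean 4 syntax; nothing consciousness-specific is implied):

[code]
-- Once the statement is fixed, "how many inference steps?" has an answer.
-- Here the fixed statement has a one-step proof: a single modus ponens.
theorem one_step (p q : Prop) (hp : p) (hpq : p → q) : q := hpq hp
[/code]

The point being: the minimal proof is a property of the statement, which is why the "statement" of consciousness has to come first.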

I only know what my own consciousness is, as you only know what yours is. It's unique to each of us. I know what it isn't. It isn't... square. It's not... soft. It's not selfreferentialinformationprocessing (or whatever Pixy likes to call it).

How do you know it's not any of those things?

Back up: how can we ever determine if something IS conscious or not?

Well apparently we can do so by appealing to whatever we feel about something at any particular time or place. But we certainly cannot do so by trying to pin down what we mean by the words we are using - that would be madness.
 
westprog said:
That's not what Robin, or Aku or Westprog have been saying. We've been saying that there's exactly as much evidence that the brain operates as TM+RNG as there is for pure TM. There is no reason to select one over the other apart from having a lot of handy theory for Turing machines.
There is a lot of evidence that the brain is at least a Turing machine. So if someone wants to propose that it is more powerful, it seems like they should try to present a compelling argument that it needs to be and why.

Given this rather limited assertion (and I will happily let Robin and Aku disassociate themselves from it, of course) I think the burden of proof is on those selecting the single option, and claiming that it is the only possibility.
The burden of proof is on both sides. And I, for one, am not claiming that it is the only possibility. I'm just asking for a compelling reason why it can't be.
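
To make the TM+RNG point concrete, here is a toy sketch (purely illustrative; nothing here models a brain): a machine that consults random bits behaves identically if those bits are recorded and replayed as ordinary input, which is part of why the "extra power" of bolting on an RNG is hard to pin down.

[code]
import random

def machine(bits):
    """A toy 'TM+RNG': a walk that consults one bit per step.
    'bits' may come from a live RNG or a recorded tape - the machine can't tell."""
    state = 0
    for b in bits:
        state += 1 if b else -1
    return state

tape = [random.randint(0, 1) for _ in range(1000)]  # record the random bits once

assert machine(tape) == machine(tape)  # a deterministic replay of the tape is
                                       # indistinguishable from the 'random' run
[/code]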

People use tools for all sorts of things. They make things happen, or make them happen more than they otherwise would.
Which is why running the algorithm might produce consciousness while the static program does not.

That jump cut in 2001 says it all. They could have cut from an abacus to a computer and it would have meant the same thing.
I have absolutely no idea what the "jump cut" is.

~~ Paul
 
You seem to be missing the point here: it is the mathematics that unifies the concepts.

Physical instance of system, X -> Abstract description of system X, M
Abstract system M -> Physical instance of system, Y

Now, I don't believe ANYONE has said that M is conscious. That only leaves discussions about physical systems like X and Y.

Pixy and I are saying M is also conscious, to the extent that it describes whatever aspects of the system are relevant, because of this:

Physical instance of system, X -> Abstract description of system X, Physical system isomorphic to relevant portions of X, M

Abstract system M, Physical system isomorphic to relevant portions of Y -> Physical instance of system, Y.
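
A toy sketch of that pipeline (all names illustrative, and the "system" deliberately trivial): extract the relevant portion of X as inert data M, build Y from M alone, and check that the two run in lockstep.

[code]
# X -> M (describe) -> Y (instantiate), where the "relevant portion"
# of this trivial system is just its state-transition table.

class CounterX:
    """The original 'physical' system: a mod-3 counter."""
    def __init__(self):
        self.state = 0
    def step(self):
        self.state = (self.state + 1) % 3

M = {s: (s + 1) % 3 for s in range(3)}  # the abstract description: inert data

class CounterY:
    """A fresh instantiation built only from the description M."""
    def __init__(self, transitions):
        self.transitions, self.state = transitions, 0
    def step(self):
        self.state = self.transitions[self.state]

x, y = CounterX(), CounterY(M)
for _ in range(7):
    x.step(); y.step()
assert x.state == y.state  # Y is isomorphic to X on the relevant portion
[/code]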

The question is, then, what those "relevant" portions consist of.

Do you disagree?
 
Do you disagree?

Yes.

The abstract system must - by virtue of the definition - be something like an encoding of a physical system. These words are an abstract encoding of spoken words, yet they are not loud; likewise, a description of consciousness is not itself conscious.
 
Yes.

The abstract system must - by virtue of the definition - be something like an encoding of a physical system. These words are an abstract encoding of spoken words, yet they are not loud; likewise, a description of consciousness is not itself conscious.
Right. It's a description of a conscious mind; it's not conscious until you replay or map that description into a physical process. (Of course, there's no other kind of process, but anyway.)

There are all sorts of different ways you can instantiate the description, and if they reproduce the original states they reproduce the original consciousness, but the description itself is just a description.
 
Pixy said that a simulation of photosynthesis actually fixes real carbon? Wow, that would be bizarre. Where did he say that?

Here.

What he said is that it fixes simulated carbon within the simulation. Indeed, this does not help a real plant to thrive.

That's just the thing. There is no carbon "within the simulation".

The argument being made is that consciousness is more like mathematics, in that a careful simulation of the brain would actually produce a simulated consciousness that is equivalent to a real consciousness. That is, consciousness is different from photosynthesis, because it is a purely computational thing. An example of something like it is money: banks simulate the interchange of money with computers, even though no real cash is actually moving around.

So the question is this: Is there some aspect of real consciousness that would escape a careful simulation on a computer? If people think so, it would be cool to get a description of what that aspect might be (something more than "it might be randomness"). Of course I will stipulate that the inability to give an example does not mean that there is no such aspect.
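
To push on the money analogy (a minimal sketch, nothing bank-specific): a balance is nothing over and above a ledger entry, so updating the record just is moving the money.

[code]
# 'Money' here is purely informational: the update IS the transfer.
ledger = {"alice": 100, "bob": 50}

def transfer(src, dst, amount):
    if ledger[src] < amount:
        raise ValueError("insufficient funds")
    ledger[src] -= amount  # no physical cash moves anywhere
    ledger[dst] += amount

transfer("alice", "bob", 30)
assert ledger == {"alice": 70, "bob": 80}
[/code]

The question above then becomes: is consciousness ledger-like, where the information just is the thing, or photosynthesis-like, where the information merely describes it?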

The point I'm making is that the capacity to generate consciousness [i.e. subjective experience] is a physical property of the brain that is medium dependent, in much the same way that electrical conductivity is medium dependent. Essentially I'm arguing that there's a basic underlying physics to consciousness and that it is not simply a computational function. Once we know what the physics of consciousness is, there can be serious discussion about how to create it artificially.

Observation: If you don't agree that all brain functions are entirely mechanistic, then all bets are off.

~~ Paul

What would it even mean for something to be non-mechanistic anyway? Surely if a phenomenon is produced there must be some means by which it occurs, right?
 
AkuManiMani said:
No kidding, Sherlock. However I can't help but notice that you've avoided answering some rather simple questions I've asked you. What gives?

What gives is that I've put you on ignore. I'm sure if you were to say something apposite - or even coherent - it would bubble up in a quote somewhere.

Unfortunately, Paul tends to lose the links when he quotes people. So I looked at this comment and...

AkuManiMani said:
Like self-referencing systems that aren't conscious? :rolleyes:

Looks like I won't need to reply to you any time soon.

Sounds like a lame excuse for not having any cogent responses.

ETA: Oh, and if you ever decide to grow some thicker skin and stop hiding behind the "Ignore" feature, Pixy, perhaps you could address this post as well.
 
Right. It's a description of a conscious mind; it's not conscious until you replay or map that description into a physical process. (Of course, there's no other kind of process, but anyway.)

There are all sorts of different ways you can instantiate the description, and if they reproduce the original states they reproduce the original consciousness, but the description itself is just a description.

But might not the description itself reproduce the original states, since after all we are speaking of information?
 
Again it really all depends on what you mean by "think".

Well, what is your definition? Pick one of the following:

1 : to form or have in the mind
2 : to have as an intention <thought to return early>
3 a : to have as an opinion <think it's so> b : to regard as : consider <think the rule unfair>
4 a : to reflect on : ponder <think the matter over> b : to determine by reflecting <think what to do next>
5 : to call to mind : remember <he never thinks to ask how we do>
6 : to devise by thinking —usually used with up <thought up a plan to escape>
7 : to have as an expectation : anticipate <we didn't think we'd have any trouble>
8 a : to center one's thoughts on <talks and thinks business> b : to form a mental picture of
9 : to subject to the processes of logical thought <think things out>

I don't think, say, a thermostat can do any of those because it doesn't have a mind. But you do. Care to explain?

I would say that there is a sense in which the computer certainly is "thinking" about chess but we know it is not analogous to how a human would think about chess or anything else.

Of course it's not analogous; that was my original point. It's not even close. A person understands they are playing a game. They can evaluate pieces, board position, and countermoves in a way that is impossible for computers to do.

Again, it really depends on what you want "think" to mean. (After all, "logic" derives from the Greek for thought, and we certainly accept that computers are logical).

Think. Um... "to form or have in the mind"? Do you think thermostats have a mind?

And computers are logical? I grant you they behave logically, but that does not entail they are logical. Do they understand modus tollens or modus ponens? Can a computer come up with a logical argument why capital punishment is wrong?

Something logical would understand there are no non-green green things, correct? Do you think a computer understands that?

What's wrong with a little common or garden genetic engineering like what my grandparents used to do?

Nothing. I thought you said unicorns don't exist (in fact, you did say this). Are you now saying they possibly exist?


Well, apparently "I" "think" despite the only physical evidence for that being synaptic potential changes as far as you're concerned - so I really am going to have to demand as reason why you categorise these things differently.

Hmm? It's your claim that computers think when their voltage potentials change. I want you to explain how my digital wristwatch is capable of thought.



Absolutely not. Why would it? It's not playing the game in the way we play it. That's missing the point of whether or not it could do so.

Possibly, it could do so. But we'll never know. At least now you admit a chess computer has no knowledge it's even playing a game. Yet you claim it can "think" (presumably about chess moves, even though it doesn't know what chess is). That's a pretty low threshold for thinking, isn't it? Are you trying to water down the definition so much that anything can "think"?


Or one could argue that since the computer's sole raison d'etre is chess that it takes the whole matter rather more seriously than the human.

Which would be strange, since you admit it doesn't even know it's playing a game. I submit that to "take something seriously", you have to be aware of what the something is!

I am not, of course: I just want you to explain why, when a human clearly is engaged in some sort of chess-playing algorithm and the computer is clearly engaged in some sort of chess-playing algorithm, you see these as such alien domains just because the human may also be thinking about having a poo, having sex, being hungry, etc...

Because the human is aware they're playing a game and the computer is not? And I don't know about you, but when I play chess, I'm not thinking of some "chess playing algorithm". Is that how you play chess?

That a human is a bit more general purpose is missing the entire point of this thread.

A bit more? Really? The computer has no "general purpose" at all. It does not care about winning or losing or missing a good move because it doesn't care at all.



Well, when I'm pumping blood, for some reason the vast network of veins, arteries and capillaries doesn't come to mind; when I'm digesting food, for some reason I'm not thinking about enzymes breaking down complex polypeptides into amino acids.

So when you think of a meal you enjoyed at a good restaurant, you think about complex polypeptides? Sucks to be your date! :)

Why should I be thinking about synaptic potentials just because synaptic potentials allow me to think?

I didn't say you should. In the case of chess, I would imagine you would be thinking about the experiences you've had playing the game. I hope synaptic potentials aren't the first thing that comes to mind when you think about how much you love your spouse/kid/whatever! That's what's wrong with your definition - it misses the essence of what it means for a person to think about something. You cannot take subjective experience out of the equation. It colors everything we do.

The dryness is irrelevant - if thinking about a tough position, anticipating the victory or appreciating the challenge come about because of the changing synaptic potentials then these experiences are not fundamentally different in quality to the machine switching voltages to come to a chess decision. The difference is that your version comes with a load of non-chess specific baggage.

You mean chess-specific experiences and memories? :rolleyes:

If, however, you're saying that these things are fundamentally beyond what merely seems to be happening physically when one looks into a brain as one would attach a debugger to deep blue then you are really going to have to say WHY.

Seems we're back to "mental states are brain states". :boggled:
Read up on Mary's room. Not going to rehash it.



Well, apparently for the same reason I'm supposed to accept that you experience "toughness", "awareness", "challenge" if you are just a mass of shifting synaptic potentials.

Wait, I thought you admitted chess computers can't "play" chess. Are you now saying a chess computer knows it's playing a game (with everything that concept entails)?


Which is all you can appear to be to me from the outside, right?

Right. I behave as if I'm conscious, but you don't know. A life-like robot programmed to order a hamburger at Wendy's may behave like it's hungry. Does that mean it is?

How do I know you think at all? Sure, you give the appearance of such a thing, but that is only because I attach such a meaning to it.

You don't know. You can only assume.



Absolutely.



Yep.

So you really think thermostats can formulate thoughts about temperature (in their little thermostatic minds). This helps your argument immensely :rolleyes:
It's right up there with Pixy's unconscious conscious anesthetized patient. Very believable.



The thermostat could be said to be thinking about temperature in a very shallow way. An abacus is not capable. The other devices have the potential if they are programmed. Simply put, the sophistication of thought would have to be related to the complexity of the programming.

Is a thermostat conscious, or is it a p-thermostat?


Now, of the following, which are capable of thinking:

Carbon?
Amino acid?
Protein?
Cell?
Nerve?
Cortex?
Brain?

None; I'm an idealist. Thought comes first. Everything else is an effect.


Consciousness is the elan vital I am talking about.

I like that. Vital force fits it very well.

What form DOES it exist in, please?

If I knew that, it wouldn't be a hard problem, would it?


It is much harder when one refuses to attempt to say what it is and instead engages in saying what does not have it.

No smelgs are grue.
X is a smelg.
X is not....
See. I have no idea what a smelg is, yet I know it's not grue. Just because we don't know what consciousness is, exactly, doesn't mean we can't rule out what it isn't. I'm pretty sure it's not a '63 Mustang.



How many carbon atoms until door?

Is "door" a property of carbon atoms?

You can't ask a quantitative question of a qualitative thing.

Apparently you can: how many atoms of hydrogen and oxygen does it take to make something that is wet? Three.

For example: there is a minimum number of logical steps required to prove a given logical statement. One cannot ask what is the minimal requirement of logic for proof - such a statement is meaningless. You are asking the latter. You need to ask the former - and therefore you need to say what the "statement" is that consciousness is.

Proof is a property of logic? Anyway, I'm not talking about logic. I'm talking about physical properties. "Wet" is a physical property. Is consciousness a physical property?

The minimum logical requirements for the property of consciousness will come from that.

Why would there be "logical requirements" for consciousness? What would these requirements be? Are there "logical requirements" for wet?



How do you know it's not any of those things?

How do I know my consciousness isn't square? Are you really asking me that? Is yours? :boggled:



Well apparently we can do so by appealing to whatever we feel about something at any particular time or place. But we certainly cannot do so by trying to pin down what we mean by the words we are using - that would be madness.

Trying to pin it down is fine. Asserting that consciousness IS A PROPERTY OF THE BRAIN, END OF STORY, is where I have a problem.
 
But might not the description itself reproduce the original states, since after all we are speaking of information?
It comes down to how we define consciousness. If we define it by behaviours - by how it responds to information - then a static representation of a conscious system isn't itself conscious, because it can't respond to anything. You have to actually instantiate the algorithm in some way.
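
A toy sketch of that distinction (purely illustrative): the same algorithm exists once as inert data and once as a running process, and only the running process can respond to an input.

[code]
# The static description: pure data, incapable of responding to anything.
program = [("add", 2), ("mul", 3), ("add", 1)]

def instantiate(program, x):
    """Running the description as an actual process of state changes."""
    for op, arg in program:
        x = x + arg if op == "add" else x * arg
    return x

assert instantiate(program, 5) == 22  # (5 + 2) * 3 + 1: the run responds to
                                      # the input 5; the bare list never does
[/code]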
 
No. Why you persist in this absurdity is unclear.

That's just the thing. There is no carbon "within the simulation".
Category error. There is carbon in the simulation. It's simulated. That's what "simulation" means.

The point I'm making is that the capacity to generate consciousness [i.e. subjective experience] is a physical property of the brain that is medium dependent, in much the same way that electrical conductivity is medium dependent.
My point is that this claim is both unsupported and mathematically impossible.

Essentially I'm arguing that there's a basic underlying physics to consciousness and that it is not simply a computational function.
My point is that no matter what the underlying physics may be, it can be simulated, and the results are identical.

You can't just keep saying "no it can't". We've established that it can, based on mathematical and physical principles. If you disagree, then you have to disagree with at least one of those mathematical or physical principles.

Which ones, exactly?

Once we know what the physics of consciousness is, there can be serious discussion about how to create it artificially.
No. No matter what the physics are (and indeed, we know perfectly well what the physics are; Penrose is just plain wrong), as long as they are logically consistent they can be simulated, and therefore, so can consciousness.

What would it even mean for something to be non-mechanistic anyway? Surely if a phenomenon is produced there must be some means by which it occurs, right?
Yes. And therefore, it can be simulated.
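
In its most boring form the claim looks like this (a toy sketch; the damped oscillator is a stand-in for whatever the physics turns out to be): given any lawful update rule, you can step it numerically.

[code]
# Forward-Euler stepping of an assumed law of motion.
# Swap in ANY deterministic rule f and the same loop still simulates it.
def f(pos, vel):
    return vel, -pos - 0.1 * vel  # a damped oscillator, purely as a stand-in

pos, vel, dt = 1.0, 0.0, 0.001
for _ in range(10_000):
    dpos, dvel = f(pos, vel)
    pos, vel = pos + dt * dpos, vel + dt * dvel

print(pos, vel)  # the simulated trajectory after 10 simulated seconds
[/code]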
 
So you really think thermostats can formulate thoughts about temperature (in their little thermostatic minds).
They can't not. Anything that doesn't do that is not a thermostat.

It's right up there with Pixy's unconscious conscious anesthetized patient. Very believable.
You also appear to have a problem with language.

A socialist who is bummed because he couldn't attend the big meeting in Copenhagen would be a blue green red. There's no contradiction here, it's just how things work if you don't precisely define your terms.

I'm beginning to believe that it's not the love of money but failure to define your terms that's the root of all evil.
 
But so what?

I am asking exactly how you construct a mathematical expression that demonstrates that the human brain, or any animal brain, is equivalent to a Turing machine.

And remember that we have already established that we cannot rely on it to behave as the mathematics of information processing say it should.

Perhaps I am simplifying arguments to the point of irrelevance here, but is it fair to say the state of consciousness, as expressed and understood here, is leaning more towards Searle than Penrose?

How does the brain/mind bind together millions of disparate neuron activities into an experience of a perceptual whole?
How does the " I " or " Self " or the perceived wholeness of my world emerge from a system consisting of so many billions of neurons? What creates the " oneness " or the " totality " of thought processes ?
What creates individuality and " I " ness or " self "? What creates feelings, free will and creativity ?
What model of the body/brain/consciousness are we considering as valid?

If we wish to say that yes, the interaction of the ion channels (10 million in each neuron, I think) and the resultant oscillation of disparate neurons is a quantum property, then the issue becomes quite straightforward, using the Bose-Einstein condensate model.
If not, then aren't we essentially back at the 'drawing board'?

Searle might say these intellectual acrobatics within the domain of classical science, to find solutions to a problem that may transcend the limits of classical science, cannot yield any valid solution.

Just throwing this out there for dissection.

Thanks.
 
How many carbon atoms until door?

You can't ask a quantitative question of a qualitative thing.

BAM! Exactly!

Conscious experience is qualitatively different from simply computing. It is not sufficient to simply increase the amount or complexity of computation to produce consciousness.

For example: there is a minimum number of logical steps required to prove a given logical statement. One cannot ask what is the minimal requirement of logic for proof - such a statement is meaningless. You are asking the latter. You need to ask the former - and therefore you need to say what the "statement" is that consciousness is.

The minimum logical requirements for the property of consciousness will come from that.

The active capacity for subjective experience is the basic minimum criterion for consciousness. All of the additional features, such as cognition, are elaborations of this basic element.
 
Perhaps I am simplifying arguments to the point of irrelevance here, but is it fair to say the state of consciousness, as expressed and understood here, is leaning more towards Searle than Penrose?

How does the brain/mind bind together millions of disparate neuron activities into an experience of a perceptual whole?
How does the " I " or " Self " or the perceived wholeness of my world emerge from a system consisting of so many billions of neurons? What creates the " oneness " or the " totality " of thought processes ?
What creates individuality and " I " ness or " self "? What creates feelings, free will and creativity ?
Self-referential information processing.
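
As a cartoon of that phrase (a deliberately trivial sketch, not a serious account): a process that, besides tracking the world, also tracks its own tracking.

[code]
# Cartoon of self-referential information processing (illustrative only).
state = {"world": 0.0, "self": {"world": None}}

def step(state, observation):
    state["world"] += 0.5 * (observation - state["world"])  # model the world
    state["self"]["world"] = state["world"]                 # model the modeller
    return state

for obs in [1.0, 0.0, 1.0]:
    state = step(state, obs)

print(state["self"])  # the system reports on its own current state
[/code]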

What model of the body/brain/consciousness are we considering as valid?
Ones that (a) support our observations, (b) do not predict things that we know do not happen, (c) do not contradict established laws of physics, and (d) are logically and mathematically sound. Possibly not in that order.

If we wish to say that yes, the interaction of the ion channels (10 million in each neuron, I think) and the resultant oscillation of disparate neurons is a quantum property, then the issue becomes quite straightforward, using the Bose-Einstein condensate model.
Straightforward, simple, and completely wrong. The brain is not a Bose-Einstein condensate.

If not, then aren't we essentially back at the 'drawing board'?
No.

Searle might say these intellectual acrobatics within the domain of classical science, to find solutions to a problem that may transcend the limits of classical science, cannot yield any valid solution.
Searle might very well say that. Searle would, of course, be wrong.
 
