westprog
Where does consciousness come from again?
Voltage potentials. Or something else.
Where does consciousness come from again?
Have we come to a decision as to how one would determine whether a noun is engaged in the verb think?
A horsey-like thing with a horn in the middle of the head? They don't exist, although such a being is not totally preposterous. (Unless, of course, we move beyond morphology into the various mythologically assigned magical properties of unicorns.)
Hmm, no, I still only see a mass of changing voltage potentials. I don't see where this "assign" or "2" thing could be.
No. The act of a human playing chess occurs when nerves in the appropriate limbs fire in such a way that chess pieces can be said to have been moved according to the game of chess. The origin of these nerve signals is in the brain - which is a mass of changing synaptic potentials.
But you still haven't convinced me that a computer plays chess by performing mathematical operations - all I see is a mass of changing voltage potentials which eventually leads to photons being streamed out of an appropriate display device - with the patterns of photons changing in a way that could be said to have changed according to the game of chess.
So the computer is not thinking, its logic gates are just firing?
If one wishes to argue vitalism then presumably the only argument as to why anything doesn't have a conscious awareness would be the lack of the elan vital.
Presumably if one could bottle it then I could pour it over a rock and it would be conscious?
Where does consciousness come from again? Could you answer that please?
Or at least what it is?
Fundamentally if you just can't say what properties an object is supposed to have before it is conscious then I really fail to see how any sort of progress could ever be made here.
Oh, I think so. If you want to claim chess computers "think" (or possibly "think"), the onus is on you to back it up. If chess computers can "think", then presumably it's possible that an abacus, a watch, a car alarm or a Neopet can all think. Is that your position?
I don't know if they exist or not. Possibly on some planet there is an animal we could legitimately call a unicorn.
My point is, if you want to claim machines are possibly "thinking" when their voltage potentials change, you might as well go all out and claim other things are possible.
As Westprog pointed out, we give it meaning. Do you think the computer has any notion it's playing a game?
Completely different approaches, wouldn't you say?
Seems a bit dry. When I think of playing chess, for some reason synaptic potentials don't come to mind.
So what would make you think a computer, if it's simply a mass of changing voltage potentials, could "play" chess?
Do you think it's thinking?
That it can formulate thoughts?
Of the following, which are capable of thinking:
Abacus?
ENIAC?
Thermostat?
Calculator?
Cray XT5?
Or something is not consciously aware because it lacks consciousness?
If one thought consciousness could exist in a liquid form (which I wouldn't put past a couple of people here).
That's why it's a hard problem, isn't it?
If two hydrogen atoms and one oxygen atom give us a substance with the property "wet", how many neurons does it take for consciousness to arise?
I only know what my own consciousness is, as you only know what yours is. It's unique to all of us. I know what it isn't. It isn't... square. It's not... soft. It's not self-referential information processing (or whatever Pixy likes to call it).
Back up: how can we ever determine if something IS conscious or not?
westprog said: That's not what Robin, or Aku or Westprog have been saying. We've been saying that there's exactly as much evidence that the brain operates as TM+RNG as there is for pure TM. There is no reason to select one over the other apart from having a lot of handy theory for Turing machines.
There is a lot of evidence that the brain is at least a Turing machine. So if someone wants to propose that it is more powerful, it seems like they should try to present a compelling argument that it needs to be and why.
Given this rather limited assertion (and I will happily let Robin and Aku disassociate themselves from it, of course) I think the burden of proof is on those selecting the single option, and claiming that it is the only possibility.
The burden of proof is on both sides. And I, for one, am not claiming that it is the only possibility. I'm just asking for a compelling reason why it can't be.
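To make the TM-versus-TM+RNG distinction concrete, here is a toy sketch in Python (the states, symbols and transition table are invented purely for illustration; nothing here models a brain):

```python
import random

# Toy state machine: the next state depends only on (state, input symbol).
TRANSITIONS = {
    ("idle", 0): "idle",
    ("idle", 1): "active",
    ("active", 0): "active",
    ("active", 1): "idle",
}

def deterministic_step(state, symbol):
    """Pure TM-style update: identical inputs always give identical outputs."""
    return TRANSITIONS[(state, symbol)]

def randomized_step(state, symbol, flip_prob=0.1, rng=random):
    """TM+RNG-style update: the same rule, plus a source of randomness."""
    next_state = TRANSITIONS[(state, symbol)]
    if rng.random() < flip_prob:  # the only difference: an occasional random flip
        next_state = "active" if next_state == "idle" else "idle"
    return next_state
```

The disagreement above is about which of these two kinds of machine the evidence supports, not about how to write them.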
People use tools for all sorts of things. They make things happen, or make them happen more than they otherwise would.
Which is why running the algorithm might produce consciousness while the static program does not.
That jump cut in 2001 says it all. They could have cut from an abacus to a computer and it would have meant the same thing.
I have absolutely no idea what the "jump cut" is.
You seem to be missing the point here: it is the mathematics that unifies the concepts.
Physical instance of system, X -> Abstract description of system X, M
Abstract system M -> Physical instance of system, Y
Now, I don't believe ANYONE has said that M is conscious. That only leaves discussions about physical systems like X and Y.
Do you disagree?
Yes. The abstract system must - by virtue of the definition - be something like an encoding of a physical system. Much like these words are an abstract encoding of spoken words: they are not loud, nor is something that describes what consciousness is conscious itself.
Right. It's a description of a conscious mind; it's not conscious until you replay or map that description into a physical process. (Of course, there's no other kind of process, but anyway.)
Pixy said that a simulation of photosynthesis actually fixes real carbon? Wow, that would be bizarre. Where did he say that?
What he said is that it fixes simulated carbon within the simulation. Indeed, this does not help a real plant to thrive.
The argument being made is that consciousness is more like mathematics, in that a careful simulation of the brain would actually produce a simulated consciousness that is equivalent to a real consciousness. That is, consciousness is different from photosynthesis, because it is a purely computational thing. An example of something like it is money: banks simulate the interchange of money with computers, even though no real cash is actually moving around.
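To make the money analogy concrete, here is a minimal sketch (Python; the account names and amounts are invented) of the sort of bookkeeping a bank's computer does. Nothing physical moves anywhere; updating the numbers just is the transfer:

```python
# Toy bank ledger: the "money" is nothing over and above these numbers.
ledger = {"alice": 100, "bob": 50}

def transfer(ledger, payer, payee, amount):
    """Move 'money' by updating two entries; no physical cash is involved."""
    if ledger[payer] < amount:
        raise ValueError("insufficient funds")
    ledger[payer] -= amount
    ledger[payee] += amount

transfer(ledger, "alice", "bob", 25)
print(ledger)  # {'alice': 75, 'bob': 75}
```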
So the question is this: Is there some aspect of real consciousness that would escape a careful simulation on a computer? If people think so, it would be cool to get a description of what that aspect might be (something more than "it might be randomness"). Of course I will stipulate that the inability to give an example does not mean that there is no such aspect.
Observation: If you don't agree that all brain functions are entirely mechanistic, then all bets are off.
~~ Paul
AkuManiMani said: No kidding, Sherlock. However, I can't help but notice that you've avoided answering some rather simple questions I've asked you. What gives?
What gives is that I've put you on ignore. I'm sure if you were to say something apposite - or even coherent - it would bubble up in a quote somewhere.
Unfortunately, Paul tends to lose the links when he quotes people. So I looked at this comment and...
AkuManiMani said: Like self-referencing systems that aren't conscious?
Looks like I won't need to reply to you any time soon.
Right. It's a description of a conscious mind; it's not conscious until you replay or map that description into a physical process. (Of course, there's no other kind of process, but anyway.)
There are all sorts of different ways you can instantiate the description, and if they reproduce the original states they reproduce the original consciousness, but the description itself is just a description.
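A rough programming analogy for that point (Python; everything here is invented for illustration): a saved state is just data, and only when it is fed back into a running process does it respond to anything again.

```python
import json

def run(state, inputs):
    """A trivial 'process': it responds to inputs by updating its state."""
    for x in inputs:
        state["count"] += x
    return state

live = run({"count": 0}, [1, 2, 3])
description = json.dumps(live)               # a static description: it responds to nothing
revived = run(json.loads(description), [4])  # re-instantiated, the behaviour resumes
print(description, revived)                  # {"count": 6} {'count': 10}
```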
Again it really all depends on what you mean by "think".
I would say that there is a sense in which the computer certainly is "thinking" about chess but we know it is not analogous to how a human would think about chess or anything else.
Again, it really depends on what you want "think" to mean. (After all, "logic" derives from the Greek word for thought, and we certainly accept that computers are logical.)
What's wrong with a little common or garden genetic engineering like what my grandparents used to do?
Well, apparently "I" "think" despite the only physical evidence for that being synaptic potential changes as far as you're concerned - so I really am going to have to demand as reason why you categorise these things differently.
Absolutely not. Why would it? It's not playing the game in the way we play it. That's missing the point of whether or not it could do so.
Or one could argue that since the computer's sole raison d'etre is chess that it takes the whole matter rather more seriously than the human.
I am not, of course: I just want you to explain why, when a human is clearly engaged in some sort of chess-playing algorithm and the computer is clearly engaged in some sort of chess-playing algorithm, you see these as such alien domains just because the human may also be thinking about having a poo, having sex, being hungry, etc.
That a human is a bit more general purpose is missing the entire point of this thread.
Well, when I'm pumping blood, for some reason the vast network of veins, arteries and capillaries doesn't come to mind; when I'm digesting food, for some reason I'm not thinking about enzymes breaking down complex polypeptides into amino acids.
Why should I be thinking about synaptic potentials just because synaptic potentials allow me to think?
The dryness is irrelevant - if thinking about a tough position, anticipating the victory or appreciating the challenge come about because of the changing synaptic potentials then these experiences are not fundamentally different in quality to the machine switching voltages to come to a chess decision. The difference is that your version comes with a load of non-chess specific baggage.
If, however, you're saying that these things are fundamentally beyond what merely seems to be happening physically when one looks into a brain as one would attach a debugger to Deep Blue, then you are really going to have to say WHY.

Well, apparently for the same reason I'm supposed to accept that you experience "toughness", "awareness" and "challenge" if you are just a mass of shifting synaptic potentials.
Which is all you can appear to be to me from the outside right?
How do I know you think at all? Sure, you give the appearance of such a thing, but that is only because I attach such a meaning to it.
Absolutely.
Yep.
The thermostat could be said to be thinking about temperature in a very shallow way. An abacus is not capable. The other devices have the potential if they are programmed. Simply put, the sophistication of thought would have to be related to the complexity of the programming.
Now, of the following, which are capable of thinking:
Carbon?
Amino acid?
Protein?
Cell?
Nerve?
Cortex?
Brain?
Consciousness is the elan vital I am talking about.
What form DOES it exist in please?
It is much harder when one refuses to attempt to say what it is and instead engages in saying what does not have it.
How many carbon atoms until door?
You can't ask a quantitative question of a qualitative thing.
For example: there is a minimum number of logical steps required to prove a given logical statement. One cannot ask what is the minimal requirement of logic for proof in general - such a question is meaningless. You are asking the latter. You need to ask the former - and therefore you need to say what the "statement" is that consciousness is.
The minimum logical requirements for the property of consciousness will come from that.
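As a concrete instance of "fix the statement first, then ask about the steps": here is one specific statement together with a short proof (a Lean sketch, chosen purely as an illustration).

```lean
-- Once a particular statement is fixed, "how many steps does a proof need?"
-- becomes a well-posed question; asked of "proof" in general, it is not.
theorem and_swap (A B : Prop) (h : A ∧ B) : B ∧ A :=
  ⟨h.2, h.1⟩
```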
How do you know it's not any of those things?

Well apparently we can do so by appealing to whatever we feel about something at any particular time or place. But we certainly cannot do so by trying to pin down what we mean by the words we are using - that would be madness.
But might not the description itself reproduce the original states, since after all we are speaking of information?
It comes down to how we define consciousness. If we define it by behaviours - by how it responds to information - then a static representation of a conscious system isn't itself conscious, because it can't respond to anything. You have to actually instantiate the algorithm in some way.
No. Why you persist in this absurdity is unclear.
That's just the thing. There is no carbon "within the simulation".
Category error. There is carbon in the simulation. It's simulated. That's what "simulation" means.
The point I'm making is that the capacity to generate consciousness [i.e. subjective experience] is a physical property of the brain that is medium dependent, in much the same way that electrical conductivity is medium dependent.
My point is that this claim is both unsupported and mathematically impossible.
Essentially I'm arguing that there's a basic underlying physics to consciousness and that it is not simply a computational function.
My point is that no matter what the underlying physics may be, it can be simulated, and the results are identical.
Once we know what the physics of consciousness is, there can be serious discussion about how to create it artificially.
No. No matter what the physics are (and indeed, we know perfectly well what the physics are; Penrose is just plain wrong), as long as they are logically consistent they can be simulated, and therefore, so can consciousness.
What would it even mean for something to be non-mechanistic anyway? Surely if a phenomenon is produced there must be some means by which it occurs, right?
Yes. And therefore, it can be simulated.
Asserting that consciousness IS A PROPERTY OF THE BRAIN / END OF STORY is where I have a problem.
Then you have a problem with reality.
So you really think thermostats can formulate thoughts about temperature (in their little thermostatic minds).
They can't not. Anything that doesn't do that is not a thermostat.
It's right up there with Pixy's unconscious conscious anesthetized patient. Very believable.
You also appear to have a problem with language.
But so what?
I am asking exactly how you construct a mathematical expression that demonstrates that the human brain, or any animal brain, is equivalent to a Turing machine.
And remember that we have already established that we cannot rely on it to behave as the mathematics of information processing say it should.
Perhaps I am simplifying arguments to the point of irrelevance here, but is it fair to say the state of consciousness, as expressed and understood here is leaning more towards Searle than Penrose?
Self-referential information processing.
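For what it's worth, here is a toy sketch (Python; the class and its state are invented for illustration) of the kind of trivial self-reference being debated: a system that processes information about its own processing, without anyone claiming that this alone makes it conscious.

```python
# A toy "self-referential information processor": its state includes a
# model of its own most recent processing, which it also keeps updating.
class SelfMonitor:
    def __init__(self):
        self.state = {"input": None, "self_model": "no history yet"}

    def process(self, data):
        self.state["input"] = data
        # Information about the system's own processing, fed back into it:
        self.state["self_model"] = f"last step: I handled {data!r}"
        return self.state["self_model"]

monitor = SelfMonitor()
print(monitor.process("temperature reading"))  # last step: I handled 'temperature reading'
```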
How does the brain/mind bind together millions of disparate neuron activities into an experience of a perceptual whole?
How does the " I " or " Self " or the perceived wholeness of my world emerge from a system consisting of so many billions of neurons? What creates the " oneness " or the " totality " of thought processes ?
What creates individuality and " I " ness or " self "? What creates feelings, free will and creativity ?
What model of the body/brain/consciousness are we considering as valid?
Ones that (a) support our observations, (b) do not predict things that we know do not happen, (c) do not contradict established laws of physics, and (d) are logically and mathematically sound. Possibly not in that order.
If we wish to say that yes, the interaction of the ion channels (10 million in each neuron, I think) and the resultant oscillation of disparate neurons is a quantum property, then the issue becomes quite straightforward, using the Bose-Einstein condensate model.
Straightforward, simple, and completely wrong. The brain is not a Bose-Einstein condensate.
If not, then aren't we essentially back at the 'drawing board'?
No.
Searle might say these intellectual acrobatics within the domain of classical science to find solutions to a problem that may transcend the limits of classical science cannot yield any valid solution.
Searle might very well say that. Searle would, of course, be wrong.