Has consciousness been fully explained?

Status
Not open for further replies.
It's been suggested that if the universe, or some portion of it, could be perfectly simulated on the Planck scale, that the end result would be in some way "real". Leaving aside the almost certain impossibility of such a proceeding (many orders of magnitude more unlikely than the Chinese room) there's no reason to suppose that such a simulation would be any more "real" than any other simulation. It would simply have more detail. There's no magical point at which simulations spring into life. No matter how detailed or accurate the simulation of magnetism, you'll never get a magnet, in or out of the simulation world.

The inference an unbiased person is supposed to make upon thinking of such a simulation, the inference that you have repeatedly shown that you are unable to make, is that if the Planck scale really represents some kind of a fundamental limit of fidelity in our own universe then oh, maybe there is no way to prove we are not in a simulation to begin with.

Stick your head in the sand all you want Mr. Ostrich, it doesn't change that fact.
 
It might seem obvious that neurons are carrying out summation, but as we've seen in previous failed attempts to assign mathematical concepts a physical reality, either everything is doing computation or nothing is. The absence of a physical theory is the gaping hole in the computationalist viewpoint. Admittedly, the physicalists can do little more than point out the need for a physical theory, but that's a necessary first step toward any sound theory.

Nope.

Already explained how mathematical concepts can be assigned a physical reality.

Try again.
 
I don't doubt neurons do that. The question is, is it computation? No, no more than an abacus shaking in an earthquake is doing computation. Computation [snip] is mind-dependent.

Not really. The behavior we label as computation is very mind independent.

Unless you think all life on Earth would disappear as soon as someone stopped thinking about it. Do you think that?

What we call computation is a behavior exhibited by living things (among other things) that allows them to do stuff that non-living things cannot do.

Sorry if you don't understand it, but the fact remains that bacteria are different from rocks.
 
What we call computation is a behavior exhibited by living things (among other things) that allows them to do stuff that non-living things cannot do.

Non-living things (the "among other things" you tacked onto living things) do stuff that non-living things cannot do? :boggled:

Do you think calculators are living things?
 
Not really. The behavior we label as computation is very mind independent.
Agreed, but is it life independent?

Unless you think all life on Earth would disappear as soon as someone stopped thinking about it. Do you think that?
See question above. SRIP going on, I'd say; implication, some difference between the 'lifeform=self' and the surroundings that aren't 'self'.

What we call computation is a behavior exhibited by living things (among other things) that allows them to do stuff that non-living things cannot do.
Why yes, the answer! But what other things? And do you really think the behavior of those other things fits your definition of computation?

Sorry if you don't understand it, but the fact remains that bacteria are different from rocks.
Wow. Agreement!
 
Perhaps, but let's make the meaning of such a claim clear.

All anyone on my side has ever said is that if you look at things people consider conscious, and try to define what it is that makes such things qualitatively different from things people don't immediately consider conscious, the only real mathematically supportable conclusion is that conscious things exhibit some type of SRIP.

I would like to see the mathematical support for your conclusion.

Everything above and beyond SRIP doesn't seem to be a requisite for consciousness once it is really nailed down. I mean you can take yourself and ask "would I still be conscious if my mind lost the ability to do X" and in every case the answer is "yes" except for self reference. If you, or any other conscious thing, lost that then there would simply be no consciousness.

That is your intuition, but I don't think it's accurate.

You've claimed necessity, but ignored sufficiency. Whether or not it's necessary, it's obviously not sufficient for consciousness (i.e. by your "things people consider conscious").

On the flip side, we can examine whether there is any qualitative difference between examples of SRIP that we don't consider immediately conscious -- like the infamous electronic toaster with many features, or even the programmable thermostat -- and things like squirrels, dogs, monkeys, or people. And the answer is that no, there really is no mathematically describable difference.

You must know you can't support this absurd claim?

Yes, people dream, and love, and hate, and have internal dialogue, and Sofias, but those are not qualitative differences since they can all be reduced to just another flavor of SRIP.

Nor this.

But people refuse to speak in those terms. They say "well, love is an aspect of consciousness." WRONG, because obviously love is not a requisite for consciousness, and if you think about it, trying to account for the myriad aspects of the human creature in a single unified theory is bound to fail from step one. So pixy, I, and others try to make it clear that hey, the basic consciousness thing is easy, it is SRIP; let's move on and talk about what makes human consciousness different from dog consciousness, different from fish consciousness, different from toaster consciousness.

Just wanted to make that clear.

lol @ "toaster consciousness"
 
I don't disagree. But I think it's wrong to label those who disagree as claiming magic. It's only true if you define 'unknown cause' as equal to magic and few people posting here are of that opinion.

I really, really don't want to get into all the name calling going around. My impression of the 'magic' charge is similar to the way I have used it -- magic simply means 'magical thinking'; which essentially amounts to no possible physical interaction. It is thrown around a bit freely I admit.


It does make things rather hard to keep straight. When you are talking about the reality of simulated oranges, I assume you are talking about the same level of reality as fictional characters. If something qualitatively different is meant, I don't know what it is. I see arguing for a mechanical being's being conscious as isomorphic to arguing that people possess souls.

Hmm, let me think about that. My initial feeling is that there is a difference, but unless I can express it clearly there may not be one. A simulated orange 'exists' only as it is implemented in a computer program. That is not precisely the case with a fictional character in a book, but it is the case that such a character exists only in the sense that it is implemented in our brains when we read the work of fiction. One clear difference is that we can look at the simulated orange as separate from us, but not the fictional character: the character exists only in our minds, while the simulated orange has a sense of existing in the running of the program, for as long as the computer functions and the program runs.

Actually, I think neurons do "sums" in exactly the same way that rocks falling in a landslide do.

Again, we are not talking about "doing sums" as in "doing arithmetic" but the more general term -- summation. In other words, adding different inputs together to arrive at a new output. It really doesn't matter if we call it summation or "doing sums" or 'bawana'. The important thing is that it is a form of computation -- of taking inputs, dealing with them according to a set of rules, and arriving at a new output that differs (also according to a set of rules) from the original inputs.

For a typical cortical neuron somewhere on the order of 30 or so inputs are needed to summate in order to reach threshold at the axon hillock. Inhibitory inputs will alter this number.

There is no sense in which anyone has to call it summation for the process at the axon hillock to occur. It's just going to happen, and when it happens it will cause other things to happen. A bunch of falling rocks "adds" to another set of rocks only in an observer's mind, because it doesn't cause another set of changes to occur according to a set of rules. The nervous system is set up to take inputs and actually do something with them, not simply rely on chance occurrences.
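The summation-to-threshold picture described above can be put in toy code. This is a deliberately crude sketch, not a real neuron model: the unit EPSP/IPSP values and the all-or-nothing threshold rule are simplifying assumptions, with only the "~30 EPSPs to reach threshold" figure taken from the post itself.

```python
# Toy sketch of spatial summation at the axon hillock (NOT a real
# neuron model). Each excitatory input (EPSP) nudges the membrane
# toward threshold, each inhibitory input (IPSP) pushes it away, and
# the "neuron" fires only if the summed inputs cross threshold.

EPSP = 1.0          # depolarization per excitatory input (arbitrary units)
IPSP = -1.0         # hyperpolarization per inhibitory input (assumed symmetric)
THRESHOLD = 30.0    # ~30 summed EPSPs needed to fire, per the text

def axon_hillock_fires(n_excitatory: int, n_inhibitory: int) -> bool:
    """Sum the inputs; fire if and only if the total reaches threshold."""
    total = n_excitatory * EPSP + n_inhibitory * IPSP
    return total >= THRESHOLD

print(axon_hillock_fires(30, 0))   # 30 EPSPs reach threshold: fires
print(axon_hillock_fires(30, 5))   # inhibition raises the bar: silent
print(axon_hillock_fires(35, 5))   # more EPSPs compensate: fires
```

The point the sketch makes is the one in the post: the summation happens and causes a downstream effect (firing or not) whether or not any observer labels it "addition".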

I really don't want to have to start questioning people's knowledge or intelligence here, but to compare this to the type of computation that is completely observer dependent -- like calling a set of 3 cans over there and 2 cans right here equal to five cans, or the type of thing that occurs when rocks fall -- is, I'm very afraid, not only wrong, but almost criminal in its wrongness. I know that you know better than this, so I'll simply chalk this up to lack of diligence.


I think that different people in this discussion are primarily concerned about various issues. I'd like to hear what you consider to be the real issues. Thanks.


There are lots of real issues. The one at play now, as far as I can tell, is the question about whether or not a computer simulation might possibly be conscious. While the transporter problem is also very important and very much worth discussion, I simply think it would be too much of a derail to re-introduce it here. That is all I meant.
 
Summation is the operation of combining a sequence of numbers using addition; the result is their sum or total.
http://en.wikipedia.org/wiki/Summation

OK, so what you are saying is that summation, even in a general sense, is arithmetic in base ten?

That's a form of computation and neurons are not doing that. Isn't the term you're using actually "spatial summation"?

Are you saying that spatial summation (it's actually spatial and temporal, but that's another matter entirely) is not a form of summation? What are we really supposed to call it?

No one argues that neurons do addition in the way that humans talk about doing it (so I'm really not sure why you would try to imply that). What neurons do is take inputs and add them together (sometimes subtracting) to arrive at a final output that is coded temporally. The behavior of neurons follows rules that are based in physics and biology, and these rules are not completely chaotic like rocks falling. They are quite controlled and produce a limited number of outputs. Why is that not a form of summation, not a form of calculation? Isn't the idea of calculation that inputs are summed following a set of rules to produce an output? Isn't that the essence of summation?


Because it would not be a simulation of real consciousness. Do you go unconscious when no one's observing you?

Wait a second. Is there real consciousness and not-real consciousness? Could there not simply be different forms of it?

I don't believe for a second that a simulation ceases to be a simulation when no one is looking, so this is a moot point, but your point that it is a problem for RD and Pixy simply falls flat. The simulation is either conscious or not unless you can decide on what constitutes real and not-real consciousness. There could potentially be any number of types of consciousness that do not follow a human pattern.



See above. It would be qualitatively different from real consciousness, because real consciousness is not observer dependent. I'm not sure observer-dependent consciousness is even a coherent concept.

If it isn't a coherent concept, then why did you bring it up? Simulations continue to occur whether or not anyone is looking at them, so now I am totally unsure what your point was. Perhaps you could restate it in a way that would make sense to both you and me, since it doesn't now even seem to make sense to you.


I'm suggesting if it comes down between a recognized authority like Searle and a bunch of anonymous forum posters, the smart money is on the authority. Perhaps someone here has published something as influential as the Chinese Room? Anyone?

Sure and you'd generally be right if you were just going to decide on the person and not look at the argument. Are you telling me that you don't understand the argument or that you don't want to work through the argument? I didn't just say Searle was wrong. I gave you a reason why he was wrong. He is simply wrong. You don't have to believe me -- look at what neurons do. They summate. He's wrong. There is simply no reason to appeal to authority in this sort of situation, so I don't understand why you would want to go that way.


Again, I tell you: summation is the operation of combining a sequence of numbers using addition; the result is their sum or total.

Can you at least see that inputs coming into a neuron are just that? Each EPSP at most CNS synapses is about 1/30 of threshold. They summate at the axon hillock to 1 (threshold) so that the neuron fires. We can call them numbers, or EPSPs, or whatever we want. The point is that something summates to create a total. That total can then do something. It doesn't matter whether you recognize it as a calculation or addition or anything else for it to do what it does.


Adding and subtracting numbers. Isn't that what computation is all about? Do you think numberless computation is possible?

What is a number? I certainly think that computation without numbers is possible. Computers do it all the time. They use electricity that amounts to the same thing as a number. Neurons use ion channels and synapses to do the same thing.
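The claim that computers compute without numbers can be made concrete with a half adder: the circuit only ever switches two-state signals (high or low voltage, modeled here as booleans), and reading its output as binary addition is our interpretation. A minimal sketch:

```python
# A half adder built from logic gates alone. The circuit manipulates
# two-state signals (high/low voltage, modeled as booleans); calling
# the result "a number" is the observer's gloss, not the circuit's.

def half_adder(a: bool, b: bool) -> tuple[bool, bool]:
    """Combine two signals into (sum, carry) using only gate logic."""
    s = a != b        # XOR gate: the "sum" line
    carry = a and b   # AND gate: the "carry" line
    return s, carry

# The gates do the same thing whether or not anyone reads the output
# as binary arithmetic:
print(half_adder(True, True))   # (False, True) -- "1 + 1 = 10" to us
print(half_adder(True, False))  # (True, False)
```

The parallel to the neuron case is that the switching behavior, like synaptic summation, happens and has consequences regardless of whether anyone assigns numerals to it.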


Except we have multiple definitions going on and you're playing fast and loose with them. You don't seem so gung-ho about definitions as you were earlier in the thread.

How am I any less gung-ho about definitions than I was earlier? As I have said all along, I don't care what you want to call the process that neurons do. But what they do is summate inputs. It isn't complicated. If you want to restrict your definition of computation to what humans do with numbers, as I have said all along, go right ahead. But neurons are going to keep summating their inputs no matter what anyone wants to call it.
 
It doesn't seem logically coherent to me to talk of an action that is "within" something, that something being "required for it to occur", yet has no location and is not defined as a change in the real world. Or am I misunderstanding?

That's fine. I was drawing a parallel between the actions that occur within a computer and Descartes' view of the soul. He also saw the soul as occurring within a person, and he also saw the person as required for it to occur in this world.

It was an analogy for the purposes of illustration, but the reality is always more complex than the simple story. Who said anything about a computer program not changing anything in the real world? It runs in a computer and is based in the physics of electricity passing through silicon chips. There are very real changes that occur in the real world.


Also, to say there is no magic involved appears rather meaningless. How do you define magic? A connotation of the word for me is "does not really exist" or "impossible". If we look at the word that way, then it's trivially true to say there's no magic involved. OTOH, if we look at it as simply something that is not understood, known, or explained by physics, then it's trivially false to say there's no magic, since physics is incomplete.

Magic is generally defined as something that occurs independent of the laws of physics. Not the currently known laws of physics, but the laws of physics full stop. It is not logically impossible in the sense that we use that term. It is simply physically impossible.
 
I agree Searle's Chinese room argument was fallacious in the jump from "the man in the room does not understand Chinese" to "therefore a computer with the same capabilities couldn't understand Chinese" (paraphrasing what I think his argument was).

Where I'd agree with him is that functionalist definitions of mental states are not adequate, except from a purely pragmatic standpoint.
 
Back up for a sec.... You spoke of a "robust 'world'" that does not exist in a real sense. The pixels you see on the monitor are in the real world, just like whatever is implementing the simulation is in the real world. The "simulation world" can't both exist and "not exist in a real sense", unless you mean that it exists in someone's imagination. I have no problem saying the simulation exists when it's not being said to exist in some separate "world".

I don't recall saying that it doesn't exist in the real world. It isn't a "thing" in the real world. It is an action. It exists. I can see it.


I don't see how "action" can be defined as anything other than change in the real world (with location) and retain any meaningfulness or usefulness as a term. Nor do I see how any action of the simulation is not simply just an action of the implementation.

Recall you said "[...] its nature is as action -- steps being carried out within the computer. That is why it has no location, no extension, etc." Surely you're not saying steps carried out within a computer have no location?


No, I'm saying that the action itself, as opposed to a 'thing', cannot be localized in space in the same way that a 'thing' can be. It takes place within the confines of a computer, just as Descartes' soul did its thing within the confines of a person. The difference philosophically is that Descartes viewed the soul as composed of a separate immaterial substance that interacted with the material plane through some sort of magic that he never explained. A computer program appears to have no location -- you can't touch anything in the simulation -- because it is an action taking place within the computer, just as our minds are an action. Descartes took the characteristics of an action that he did not understand (he knew virtually nothing about nervous tissue and its function) and created the idea that this action was actually a separate 'thing'.


You hit some keys which sent electrical signals to the computer, which resulted in some pattern of electrical signals within the computer, which resulted in some output. That all happens in the real world. The concept "number" exists in your brain.

(ETA: In the sense that it exists in your brain, it can be said to exist in the real world, of course.)



The controversial part is the claim that an action can exist independently of a physical thing and have non-locality.


Ah, OK, then I have not been clear, which is my fault and I apologize. As I tried to explain above, I used the non-locality only as a reference to the way that Descartes discussed the problem. I am not arguing in any way that there is no relation to the real world. I am not aware of any action that ultimately does not depend on a physical reality. Please excuse my clumsiness if you ever got that idea.

What I was trying to argue previously is that I don't see what the problem is in having an action result in further actions, just as we use abstract concepts to drive other abstractions. In other words, why couldn't the action of one simulated particle join with the actions of another simulated particle to drive a simulation that could produce a conscious simulated person? Of course it is all tied to an underlying physical being -- a computer. I don't see how we could interact with anything not tied to a physical being.

I just don't see why it is the case that there cannot be another level of abstraction between the physical being (computer) and the action we are discussing (consciousness) -- that other level of 'abstraction' being the simulation that makes up the computer world. I'm not sure that I accept without reservation the physicalist argument (though my sympathies still lie there) that action must only occur with physical beings alone. Yes, they must be connected with those physical beings, but what is the reason that the actions initially produced by the physical being cannot result in further action? That is what a simulation does.
 
I agree Searle's Chinese room argument was fallacious in the jump from "the man in the room does not understand Chinese" to "therefore a computer with the same capabilities couldn't understand Chinese" (paraphrasing what I think his argument was).

Where I'd agree with him is that functionalist definitions of mental states are not adequate, except from a purely pragmatic standpoint.


I initially agreed with him, but the more I think about the argument the more I think he is wrong. Part of what I think is the problem is that he leaves so much out of what we mean by 'understanding'. From one point of view, understanding can certainly be defined as 'using the language properly'. I mean, what else do we mean when we say that someone understands a language?

But I think what he really leaves out is the feeling of understanding that we all get when we 'get' what someone is saying. The Chinese Room is designed in such a way so as never to have that feeling. I have tried here before to get folks to talk about what 'feeling' actually means and what 'awareness' actually means so that we can move forward. What if the feeling of getting it right was a part of the Chinese Room? Would it then understand?

Or is the problem that it is very clear that the way the rules are set up in the Chinese Room do not mimic how we solve language problems? Why can't we argue for a better program?
 
That's fine. I was drawing a parallel between the actions that occur within a computer and Descartes' view of the soul. He also saw the soul as occurring within a person, and he also saw the person as required for it to occur in this world.

It was an analogy for the purposes of illustration, but the reality is always more complex than the simple story. Who said anything about a computer program not changing anything in the real world? It runs in a computer and is based in the physics of electricity passing through silicon chips. There are very real changes that occur in the real world.

Well an action that is not an action of something (i.e. that physically exists) would not be a change in the real world. Similarly, an action that does not have locality cannot be a change in the real world. My understanding is that you are claiming a simulation is "an action" that does not have locality and is not an action of a thing (but rather of another action).
 
It might seem obvious that neurons are carrying out summation, but as we've seen in previous failed attempts to assign mathematical concepts a physical reality, either everything is doing computation or nothing is. The absence of a physical theory is the gaping hole in the computationalist viewpoint. Admittedly, the physicalists can do little more than point out the need for a physical theory, but that's a necessary first step toward any sound theory.


No, I'm sorry, but that is just not the case. Computation is not just two things ending up together, two rocks falling together, so 1+1=2. Computation is a process that requires some set of rules that are followed. One of the reasons that we tend only to think of it as observer dependent is that most of the examples we encounter in the world do require an observer to interpret what she sees as computation -- like the example of rocks falling or using an abacus (or a computer for that matter). But that is simply not the case with neurons. They actually perform a function according to a set of rules decided on by natural selection -- they summate inputs to arrive at an output. I'm not at all sure why this is a controversial issue. It's something that just *is*. There isn't any way to argue against the reality of what neurons do.
 