
Has consciousness been fully explained?

Status
Not open for further replies.
Sure, as it relates to their lives; but this is the case here and now as computers relate to the lives of most beings.

Well, exactly. "Beings" being conscious beings, one supposes.

Here's a definition of "information" that might work. I'm winging it, so it's probably got holes in it. "Information" means a physical interaction between a conscious being and its environment that allows the being to form a model of said environment. That includes the nerve impulses and the computer programs, and excludes the rock - as far as I can tell.
 
If a tree falls in the forest with no one around to hear, then it does not make sound. It emits waves, sure, but sound is the interaction of those waves with someone capable of hearing.

I would say that there is blooming, buzzing stuff out there; it becomes information when it interacts with a system capable of decoding it.

I guess this is just a definitional thing, and I have never thought about it much, but that's how I tend to think about it.

This is exactly so. I'm saying that the "system" is a conscious being. I would say a person, but it might be a dog, or a cockroach.

Anyone insisting that it's a person or a computer needs to find the commonality between the person and the computer.
 
I'm going to ignore your posts from now on, because you add nothing to this discussion.

You could throw me in there, because I agree with him completely. I certainly don't regard an unwillingness to look for a definition as being the sign of a strong argument.
 
Well, exactly. "Beings" being conscious beings, one supposes.

Here's a definition of "information" that might work. I'm winging it, so it's probably got holes in it. "Information" means a physical interaction between a conscious being and its environment that allows the being to form a model of said environment. That includes the nerve impulses and the computer programs, and excludes the rock - as far as I can tell.



No; I don't know how you could get that interpretation. I meant 'beings' in its most generic sense. Computers are useless to most beings in the world here and now; we don't need to be radically absent for that situation to be the case.

I think we already have a good definition of information between RD and Blobru that fits pretty much everything that I was trying to say.
 
Your distinction between a model and a simulation is arbitrary. You keep asking this stupid question "is simulated water wet?" Why don't you answer the other question -- is modeled water wet? What does that even mean?
"Modelled" water will be H2O, and yes, wet. What don't you understand about modelling vs simulating?
 
This is exactly so. I'm saying that the "system" is a conscious being. I would say a person, but it might be a dog, or a cockroach.

Anyone insisting that it's a person or a computer needs to find the commonality between the person and the computer.


I don't agree any longer with the statement I made above; that issue does not concern what information is, but rather the refinement of information, or quality of information, or whatever term we wish to apply to the limitation of all the available possible inputs.


I do not think that consciousness needs to be involved. Receptors are not conscious and yet they 'refine' information because they only respond to a small number of the multitudinously available stimuli.
 
What I don't get here is that you keep skimming over the point that a Turing machine + suitable interface doesn't work. Can't work. It won't be a Turing machine any more. It will be a different device altogether.

I know that you keep coming up with ways to avoid this. Yes, a perfect implementation of a Turing machine isn't possible in the real world. So what? The same applies to any design or concept. If we were to use that as an approach, we wouldn't be able to reason about any system.

What makes the concept of the Turing machine useful is that we can make predictions about computations. These predictions are of great practical value. We know that we can launch our Pascal computations into the time-sharing computer, and not worry about implementation details or interaction with the world - and be sure that a program which takes an hour will give exactly the same result as one that takes a millisecond.

This is clearly not the case with the replacement neuron. To talk blithely about coupling is to miss the point that a coupled Turing machine is not a Turing machine, and the reasoning we use about Turing machines no longer applies. A Turing machine is, by definition, a closed, non-interacting system. The people who design computers and operating systems have to go to great lengths to provide environments where programs can operate as if they were Turing machines. In almost every case, the computer and operating system which runs the programs has to use a different model, because the Turing model isn't appropriate for running a computer. I gave a link to a paper describing the issues involved in coping with this.

So when describing a device which can replace a neuron in a human body, the Turing model is simply irrelevant. Turing-style programs are designed to work as closed systems. The neuron is designed to be open, time-dependent, asynchronous, reactive. The Turing model is of no help in understanding or replacing neuron behaviour.
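The closed-versus-open contrast being argued here can be sketched in a few lines of Python. This is my own toy illustration, not anything from the thread; `closed_computation` and `open_computation` are invented names:

```python
import time

def closed_computation(tape):
    """Turing-style: the output depends only on the input 'tape'.
    Run it in an hour or a millisecond -- same result every time."""
    return sum(tape) * 2

def open_computation(tape):
    """Reactive-style: the output also depends on *when* it runs,
    so the usual Turing-machine guarantees no longer apply."""
    return sum(tape) * 2 + int(time.time()) % 60

tape = [1, 2, 3]
# The closed computation is repeatable by construction...
assert closed_computation(tape) == closed_computation(tape)
# ...while open_computation(tape) may differ between calls.
```

The point of the sketch is only that coupling a computation to its environment (here, the clock) is exactly what removes the repeatability that makes Turing-style reasoning work.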

I just realized your problem. You have a fundamental misunderstanding of the nature of mathematics in general, and of computer science in particular.

The fact that no Turing machine actually exists isn't irrelevant. It illustrates that any mathematical description is merely that -- a description.

When Pixy says that a Turing machine can be conscious, he isn't saying that the magical fairy Turing machine in abstract land can be conscious. That is absurd. Abstract land doesn't exist, unless you subscribe to some very loony schools of philosophy. Reality exists.

What he is saying is that any system which satisfies the behavior constraints we call Turing complete -- idealized by the definition of a Turing machine -- can be conscious. That is a very, very, very different claim than what you seem to think he is saying.

This is equivalent to saying "anything that is round -- idealized by the definition of a circle -- can roll." Period. No different. "Round" and "Turing equivalent" are the same kind of thing in human language -- a description, a mathematical description, of a class of behaviors of real stuff, regardless of what one thinks of the ultimate "nature" of that stuff.

"Round" is the class of behaviors that lead to a system interacting with other systems in a way that is mathematically isomorphic -- after much filtering is done by the observer -- to a circle. There is a location within the system that each location on the surface of the system is equidistant from. The idealized description is what we call a circle -- so what. Nothing perfectly round exists. No circle actually exists. Hmmm... kind of like no perfect turing machines exist? Yet you don't complain when people say "anything circular can roll," do you?

Likewise, "turing equivalent" is a class of behaviors that are mathematically isomorphic, after much filtering done by the observer, to a turing machine. <insert formal definition of turing machine>

The thing that allows a round object to roll is how near it is to the idealized circle. But you still need a hill, gravity, etc, or else nothing can roll. But nobody bothers to argue this point because it is utterly stupid, given that when someone says "a circle can roll" everyone knows what they mean -- we have grown up and this concept is so familiar it isn't an issue.

To Pixy and me, who are very familiar with computer science, saying "a Turing machine can do X" is not controversial in the least. We understand that the implicit meaning is "a Turing-equivalent system can do X, probably with the help of lots of stuff that isn't necessarily part of the ideal Turing machine, just like a round object can't roll without the help of many things that have nothing to do with the ideal circle."

Why is that so hard for you to understand? Turing machines don't really exist. Circles don't really exist. When people speak of either they are referencing reality, not fairy land.
 
I do not think that consciousness needs to be involved. Receptors are not conscious and yet they 'refine' information because they only respond to a small number of the multitudinously available stimuli.
Consciousness. So many meanings, so little time. :) ;)
 
"Modelled" water will be H2O, and yes, wet. What don't you understand about modelling vs simulating?

Huh?

Why can't we model water with carbon tetrachloride? It has similar bipolar properties; it can act as a bipolar solvent, can it not?

See, this is what I am talking about. You people say that a mechanical bird is a model bird just because it flies, but disagree that carbon tetrachloride can be a model of water despite the fact that it is a bipolar solvent.

You can't just arbitrarily choose a function and ignore all the others. I mean, you can, but then it is arbitrary.

That's why I claimed the distinction was ... arbitrary.
 
This is exactly so. I'm saying that the "system" is a conscious being. I would say a person, but it might be a dog, or a cockroach.

Anyone insisting that it's a person or a computer needs to find the commonality between the person and the computer.

Westprog, not a single person who claims a computer can be conscious in a vague non-human way would deny that a cockroach could also be conscious in some vague non-human way.

Part of the problem is that you are stuck on this human-like idea of consciousness, and that isn't what the computational model restricts itself to.
 
Not until one has a mathematical definition of X, including both its necessary and sufficient aspects.

You seem to have missed the point.

People say "a circle can roll" all the time.

It is understood that a "downward" force, and an incline, or a forward force and just some surface, all in reality, are needed as well.

If you don't understand a similar thing about Turing machines, then you need to read some textbooks. Because after reading many textbooks, and being a professional programmer, it is crystal clear to me. Turing machines don't really exist. Circles don't really exist. This fact doesn't prevent us from using those terms in language every day.
 
It's like saying that a drawing of Winnie the Pooh can't tell the difference between being a drawing and being a real talking teddy bear.

I know that the drawing comparison is very annoying to the simulationists, because they are thinking of something way more complicated and detailed. But it's entirely appropriate. A drawing is a simulation of the thing it represents, and thinking about drawings is a very good way to realise that the reason a drawing is not the same as the thing it represents is not because it's insufficiently detailed - it's because it's a drawing.
 
Just as a jumping off point, I hope that I can explain why I think folks have been talking past one another. You have laid out the distinction nicely.

I may have RD and Pixy's argument completely wrong, but when they talk about this simulation they are not interested in the representation. The point was not the simulation itself but the act of simulating the universe, if that distinction makes sense. Talking about the simulation was just an easy way of referring to the process that actually occurs in the guts of the computer and is hard for anyone to 'see'.

In other words, I see one group concentrating on the fact that a representation can't do anything. Well, I think everyone agrees on that. I thought they were trying to point to the simulation as a way of helping people to see that earlier objections to this argument were not well-founded (remember this has been going on for a long time). We used to focus exclusively on the pattern in the computer and folks argued that it wouldn't act like real neurons, etc.

Again, maybe I'm wrong about the way they see their argument and this is just my way of seeing the argument, but I thought the emphasis was on simulating -- the actual actions involved -- in order to recreate the pattern of activity in a computer that is identical (or functionally identical) to the pattern of activity in a brain that is conscious.

If we want to call that a model, that's fine with me. I don't see why it matters that much because the underlying argument is the same -- get a computer to do the same thing a brain does, remake the pattern a brain makes when conscious -- and the computer should be conscious unless we think there is something more to consciousness than that pattern of activity.

If consciousness is an action or process (I don't think it is, but I think I'm in the minority on that one), then the simulation can't "do" anything. Or is the claim that the simulation isn't conscious, but the computer is, while it's running the simulation? But then how does software make hardware conscious? This is what seems magical (again, not that I have a problem with magic): a simulation that can't produce any real-world phenomenon (e.g., wet) suddenly is producing a real-world phenomenon: consciousness.
 
I know that the drawing comparison is very annoying to the simulationists, because they are thinking of something way more complicated and detailed. But it's entirely appropriate. A drawing is a simulation of the thing it represents, and thinking about drawings is a very good way to realise that the reason a drawing is not the same as the thing it represents is not because it's insufficiently detailed - it's because it's a drawing.



No, I think they find it irritating because they are talking about a function and not a representation. That is what I have been trying to tell you; and why I think you guys are talking past one another. Sure, it's a lot more complicated, but the central issue is that the representation (the epiphenomenon of a simulation) is most decidedly not the point. It is the way the representation is made -- that in the process of making that representation the action pattern of a conscious brain is replicated (not replicated as a representation of a conscious brain but the actual action pattern of a brain being conscious or 'consciousing').
 
That is the point -- anything can be information, and anything can be said to react to information, or behave according to information, or whatever.

The difference between life and computers and rocks isn't that some have information and some don't, or that one "uses" information and one doesn't, it is the level and depth of sequential behaviors as a result of a given piece of information within the system that is vastly different.

Typically, a rock doesn't go through a cascade of internal state changes that results in some gross external state change as a result of some piece of information.

Typically, life and computers do.

Ah, an actual definition. So, it's a "gross external state change" that is significant. Which is odd, because both computers and brains can assimilate and process enormous amounts of information without any gross external state change whatsoever.
 
No, I think they find it irritating because they are talking about a function and not a representation. That is what I have been trying to tell you; and why I think you guys are talking past one another. Sure, it's a lot more complicated, but the central issue is that the representation (the epiphenomenon of a simulation) is most decidedly not the point. It is the way the representation is made -- that in the process of making that representation the action pattern of a conscious brain is replicated (not replicated as a representation of a conscious brain but the actual action pattern of a brain being conscious or 'consciousing').

We are not talking past one another -- make no mistake.

Everyone knows darn well that the computational side is all about function.

The dispute is over what that function is. Obviously, the religious side wants there to be some function that cannot be matched by an artificial system. Otherwise, their heaven would no longer be so special.

That is really the heart of the argument -- if the necessary function for consciousness can be found in other substrates, other things can be conscious, and the whole religious dogma falls apart. Is there heaven for conscious robots? For simulated beings? I don't want to have to think about that, because it decreases my faith in my worldview, so I am going to take the position that there is some function that cannot be found in other substrates.
 
If consciousness is an action or process (I don't think it is, but I think I'm in the minority on that one), then the simulation can't "do" anything. Or is the claim that the simulation isn't conscious, but the computer is, while it's running the simulation? But then how does software make hardware conscious? This is what seems magical (again, not that I have a problem with magic): a simulation that can't produce any real-world phenomenon (e.g., wet) suddenly is producing a real-world phenomenon: consciousness.

Leaving aside ontological differences...


The claim is that the computer is conscious while running the simulation. The way that the program accomplishes it is by restricting the pattern of gate openings and electron movement to replicate the action of a brain that is conscious. Brains do it 'bottom up' by having a set of defined neurons (and synapses) that are put in place by natural selection, development and learning, allowing only certain types of information processing. We could set up a robot in the same way -- chip for neuron -- or we could have a program provide a top-down way of controlling the movements of electrons to produce the same pattern.

I think the reason it seems magical is because all it does is recreate the pattern of action. A simulation would also recreate the pattern of relationships that constitute 'wet', but there is no matter around to express those relationships. With a computer, though, the movements of the electrons are fairly close to what a brain does -- both essentially consist in types of electrical charges moving about in coordinated ways. In order to see anything conscious in the computer, though, the pattern that is created would need to be expressed in the real world through some type of peripheral.

Theoretically we should be able to do the same thing with 'wet' if we linked all of the info in the simulation to 'peripheral matter' and recreated all the interactions in the simulation; but that is hard to see because the atoms of water in the real world will simply self-organize into 'wet' without any needed input from the simulation. Consciousness needs the organization of these complex interactions even to "be".
 
