
Explain consciousness to the layman.

You start by assuming that consciousness is a computational process, and using a series of deductive steps, you deduce that consciousness is a computational process. Be sure to insist that anyone who notices that the entire edifice is built on circular reasoning "believes in magic".
The exercise of deducing the steps leading to the "computation of consciousness" would not merely be a statement that consciousness is computational. Yes, that would be circular reasoning.

HOWEVER, the real conclusion is The Computational Process Itself. And, THAT will tell us a LOT more about how consciousness works than the simple statement of "oh, it is computational".

You seem to be missing the whole point of why some of us are going through these deductive steps.

Wait, wait.

So, a computer that can see through a colour camera doesn't have qualia, but proto-qualia?
Perhaps I should explain the concept better:

Do we agree that consciousness evolved, naturally, in at least one species of living entities? If so, then that means qualia, or at least our "sense of having qualia", is a product of evolution, as well. I think it is unlikely that "qualia" came into existence in one solid step. It probably had evolutionary steps of its own, and I call those proto-qualia.

(If, for some reason, you do not agree that consciousness could emerge from any natural evolutionary pathway, then you would be unlikely to contribute anything productive to the discussion of natural consciousness.)

Then it wouldn't be a Turing machine. (Or, to be more precise - a necessary element of its functionality would not be part of the Turing model).
I only introduced the robots to demonstrate how our minds tend to shift in these regards. Take the robot spiders away, again, replace them with CG spiders on CG webs, but DO NOT change the algorithm being used to evolve and evaluate the webs.
One person might say that the replication is now back to a simulation.

Another would say that the replication and simulation were actually the same thing, the whole time. It was only the proximate details: Robots or CG spiders, that was different.
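
For concreteness, here is a minimal sketch of that point (all names and numbers below are hypothetical, nothing from this thread): the algorithm that evolves and evaluates the webs is literally the same function in both setups; only the final step that realizes the winning web in the world is swapped between a CG renderer and a robot driver.

```python
# Toy sketch: the evolutionary algorithm never changes; only the step that
# realizes a web in the world (CG or robot) is swapped out.
import random

def fitness(web):
    # Toy evaluation: prefer webs whose strand lengths sum close to 10.0.
    return -abs(sum(web) - 10.0)

def mutate(web):
    return [max(0.0, w + random.gauss(0, 0.1)) for w in web]

def evolve(realize, generations=100, pop_size=20):
    population = [[random.uniform(0, 1) for _ in range(8)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(w) for w in survivors]
    best = max(population, key=fitness)
    realize(best)          # the ONLY step that differs between the two setups
    return best

def render_cg_spider(web):
    print("drawing CG web with strands:", [round(w, 2) for w in web])

def drive_robot_spider(web):
    print("sending strand lengths to robot actuators:", [round(w, 2) for w in web])

# Same algorithm, two different "proximate details":
evolve(render_cg_spider)
evolve(drive_robot_spider)
```

Call the second run a "replication" and the first a "simulation" if you like; nothing in the evolutionary computation itself changed.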
 
There are people (mostly living alone) who think that the characters on their TV are conscious. It's quite possible that devices will be produced that will give the illusion sufficiently well that only people who fully understand their detailed operation will be able to doubt their conscious status.
Yip, ignorance is required in order to sell the conscious-computer trick.
 
The mind goes through a lot of experiences you are not conscious of; that is the subconscious. But I assume that is not what you want from me. You want examples of conscious experiences that would not qualify as qualia. The best examples I have to offer, for now, deal with altered states of consciousness, I'm afraid.

But I don't understand why those sorts of things are not qualia. I guess I am saying that the qualia you are speaking of seem to be arbitrary distinctions.

Let me elaborate on why I think that is bad -- imagine trying to figure out software without knowing that a screen is actually composed of pixels. Imagine how many wrong paths you would be led down if you thought the words you see were actually words, and the pictures actually pictures, and you had no idea how the computer could display such things.

Now imagine how much easier it is once you simply realize everything you see, no matter how smooth, is just anti-aliased rasterized pixels. All of a sudden there is a simple unified method of output from the program.
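
As a toy illustration of that unified output method (a sketch only, with a hand-made glyph): a "word" and a "picture" both end up as the same kind of object, a grid of pixel values.

```python
# Toy sketch: whether we "draw text" or "draw a picture", the program's output
# is the same kind of object in the end - a grid of pixel intensities.
WIDTH, HEIGHT = 16, 5
framebuffer = [[0] * WIDTH for _ in range(HEIGHT)]

# A hand-made 3x5 bitmap glyph for the letter "T".
GLYPH_T = ["###",
           ".#.",
           ".#.",
           ".#.",
           ".#."]

def draw_glyph(glyph, x0):
    for y, row in enumerate(glyph):
        for x, cell in enumerate(row):
            if cell == "#":
                framebuffer[y][x0 + x] = 255   # "text" is just pixels

def draw_box(x0, y0, w, h):
    for y in range(y0, y0 + h):
        for x in range(x0, x0 + w):
            framebuffer[y][x] = 128            # a "picture" is just pixels, too

draw_glyph(GLYPH_T, 1)
draw_box(8, 1, 6, 3)

for row in framebuffer:
    print("".join("#" if v == 255 else "+" if v else "." for v in row))
```

Run it and both the letter and the box come out of the same loop over the same grid of numbers.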

I feel the same way about consciousness -- I think if people try to differentiate between experiences without focusing on what all experiences have in common they invariably make any explanation far more complex than it really is.

What if I told you there is no evidence that any experience is composed of anything more than sensory input and memory of sensory input? Would you agree with that, or dispute that? If you agree with that, don't you think it is much simpler to then approach conscious experience as simply "how the brain spins on sensory input and memory of sensory input," rather than quale this and quale that?
 
You start by assuming that consciousness is a computational process, and using a series of deductive steps, you deduce that consciousness is a computational process. Be sure to insist that anyone who notices that the entire edifice is built on circular reasoning "believes in magic".

I said early on that the focus would shift, as it always does, from arguing about consciousness to characterising the motives of the people who fail to fall into line.

Or, you could just show how any process can be a computational process if framed correctly. That's kind of what all the talk about simulations is about.

That is somewhat easier, since last time I checked the definition of "any" suggests it is all inclusive.
 
I find this highly implausible, chiefly because the idea of brain-as-Turing-machine doesn't really address how the brain works, or what its real function is.

Let me fix that for you:
I find the idea that the mind comes from the behavior of neurons highly implausible, because the idea of brain-as-Neurons doesn't really address how the mind works, or what its real function is.

Just to make sure your dualist agenda is front and center -- wouldn't want anyone to forget that, would we?
 
"Magic" would be insisting that running a computer simulation of a process can produce the same effects as the process itself. No, I don't believe in that kind of magic.

What about running a computer simulation of a computer simulation of a computer simulation?

Seems like that has the same effects as a computer simulation of a computer simulation.

Just sayin
 
I only introduced the robots to demonstrate how our minds tend to shift in these regards. Take the robot spiders away, again, replace them with CG spiders on CG webs, but DO NOT change the algorithm being used to evolve and evaluate the webs.
One person might say that the replication is now back to a simulation.

Another would say that the replication and simulation were actually the same thing, the whole time. It was only the proximate details: Robots or CG spiders, that was different.

If I might introduce yet another term: "emulation" is often used to describe simulations which accurately model the real-world processes that produce the output. Your spiders, in other words. Even if they aren't connected to robot spiders, they could be, and it would work.
 
That isn't my question. Everyone knows what a "turing machine" is.

However, a "turing machine" is an idealized mechanism -- they do not exist in reality.

So I am asking westprog what exactly he/she is referring to, since "turing machines" do not exist except as a fantasy.

A Turing machine is not a fantasy. It's a specification, like motor-bike or television. When somebody says that a motor-bike can't fly, they mean that a device created according to the specification for a motor-bike doesn't possess the means of flight.

It might, of course, be possible to attach wings to a motor-bike and hence allow it to glide through the air for a while. However, that would be to add functionality not in the original specification, and would not refute the statement about the capabilities of a motor-bike.

The capacities of a Turing machine are well-defined. The reason that computer scientists like to talk about Turing machines is that they can reason about them and make deductions about the actual capabilities of actual computers - deductions that have proven to be of genuine predictive value. No, Turing machines are not "fantasies". They are mathematical abstractions.
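
For readers who want the abstraction made concrete, here is a toy sketch of the standard textbook formalism (not anyone's private definition in this thread): a finite table of transition rules acting on a tape, shown here incrementing a binary number. The dict-based tape stands in for the ideal unbounded tape that no physical machine actually has.

```python
# Minimal sketch of the formalism: a Turing machine is just a finite
# transition table acting on a tape.  This one increments a binary number.
def run_turing_machine(rules, tape_str, start_state, halt_state):
    tape = {i: ch for i, ch in enumerate(tape_str)}   # blank cells read as '_'
    head, state = 0, start_state
    while state != halt_state:
        symbol = tape.get(head, '_')
        state, write, move = rules[(state, symbol)]
        tape[head] = write
        head += move
    cells = [tape.get(i, '_') for i in range(min(tape), max(tape) + 1)]
    return "".join(cells).strip('_')

# Transition table for binary increment: scan right, then add one with carry.
RULES = {
    ('scan',  '0'): ('scan',  '0', +1),
    ('scan',  '1'): ('scan',  '1', +1),
    ('scan',  '_'): ('carry', '_', -1),
    ('carry', '1'): ('carry', '0', -1),
    ('carry', '0'): ('halt',  '1',  0),
    ('carry', '_'): ('halt',  '1',  0),
}

print(run_turing_machine(RULES, "1011", 'scan', 'halt'))   # prints 1100
```

Everything the machine does is in that table; anything a physical build does beyond it (drawing power, getting warm, flying when strapped to a motor-bike) is an accident of the implementation, not part of the specification.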

When it is claimed that a Turing machine can perform a particular action, the implication is that a physical implementation of the Turing machine can perform the designated action - for example, calculate the first thousand prime numbers.

If somebody says "A Turing machine can run a flight simulation, but it can't actually fly" then this is shorthand for saying that a device built to the Turing machine specification, capable of all the operations included in that specification, could run such a simulation. It also means that the specification doesn't include any means of actual flight. One could of course attach the device to the back of a winged motor-bike, and allow the Turing machine to fly - but this would not be a matter of implementing the specification, and it would be an accidental property of a particular implementation, not an inevitable quality of all implementations of the Turing machine specification.

Of course, in addition to the specific properties defined for a Turing machine, there must be additional implicit properties which apply to any possible implementation. There would have to be some transfer of energy, for example, however small. Still, this is a very minor element of the operation of such a machine.

When it's claimed that the operation of a Turing machine can, or cannot, produce consciousness, the implication is that any implementation of such a machine will, or will not, do so. It doesn't mean that some conceivable Turing machine has additional, unspecified properties which are a necessary element. If the actual claim is that a Turing machine, with certain other elements, can produce consciousness, then those other elements need to be defined. In such a case, the theories which apply to the Turing machine - such as Church-Turing - will no longer be applicable.

I've gone into this particular red herring in some detail because vagueness of language and imprecision are constant companions in this discussion - and sometimes concision is portrayed as inaccuracy. I won't be treating every derail in the same fashion.
 
What about running a computer simulation of a computer simulation of a computer simulation?

Seems like that has the same effects as a computer simulation of a computer simulation.

Just sayin

A computer simulation of a computer simulation will produce identical results. A computer simulation of anything else won't.
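
A toy illustration of that first sentence (hypothetical code, chosen only for brevity): add two numbers natively, then add them again by simulating the computation as a ripple-carry chain of one-bit full adders. The simulated computation gives exactly the same answer as the direct one, which is the special property being claimed for computations and denied for everything else.

```python
# Toy sketch: computing a sum directly vs. simulating the computation
# (a ripple-carry chain of one-bit full adders).
def full_adder(a, b, carry_in):
    total = a + b + carry_in
    return total % 2, total // 2          # (sum bit, carry out)

def simulated_add(x, y, width=32):
    result, carry = 0, 0
    for i in range(width):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result

for x, y in [(3, 4), (1234, 5678), (99999, 1)]:
    assert x + y == simulated_add(x, y)   # identical results, every time
print("simulation of the computation == the computation itself")
```

Run it and the assertions pass; the simulated adder and the native one never disagree.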
 
Let me fix that for you:


Just to make sure your dualist agenda is front and center -- wouldn't want anyone to forget that, would we?

Yes, that makes it very clear. Exactly as I predicted, the arguments will be ignored and instead, let's concentrate on the sinister agenda of the people who refuse to subscribe to The One True Faith. Ignore what people say, and make up their arguments for them.
 
A Turing machine is not a fantasy.

ORLY? There are places on Earth where you can get infinite memory tape and infinite time to perform computations?

'cause that is what you need to build a Turing machine "to spec," so to speak.

If you would just grow up and use the proper terminology, i.e. "Turing equivalent," then we wouldn't even be having this derail.

I dunno why you have categorically refused to use the proper terminology for years and years in here.
 
Yes, that makes it very clear. Exactly as I predicted, the arguments will be ignored and instead, let's concentrate on the sinister agenda of the people who refuse to subscribe to The One True Faith. Ignore what people say, and make up their arguments for them.

Is it ignoring what you say, though? Really?

Because for you to come in here and claim that you don't accept the idea that X is Turing equivalent, because Turing equivalent operations don't immediately explain the behavior of X, is really misleading in my opinion.

It is quite like saying you don't accept that a baseball is made of atoms because atoms don't immediately explain a home run. Only a snake-oil salesman with an agenda would try to sell that argument to people.

So pardon me for bringing your logic to its natural conclusion. If you consider that "ignoring the arguments" then I don't know how you plan on lasting in any debate.

Now if you instead would say "I don't see how only Turing equivalent operations could lead to a process that behaves like the human mind" then it is a different story. Then the burden would be on other people to actually explain the details, and many people probably could not. But you never say anything like that. Instead you dismiss things outright and then wonder why people accuse you of having an agenda. Well, it is because you dismiss things outright.
 
What if I told you there is no evidence that any experience is composed of anything more than sensory input and memory of sensory input? Would you agree with that, or dispute that?
I can agree that the basic ingredients (or "pixels") of experience are nothing more than sensory input and memory input (and possibly other physiological inputs, but that's another story).

But, even pixels on a computer screen are computed. The instructions determining which pixels show which colors have to be explained. In an actual computer, those would be the instructions in its software.

For consciousness, this would be models of the Self being constructed in various parts of the mind. According to theory.

A full explanation of experience would include both what it is "composed of" and "how it is computed".

If you agree with that, don't you think it is much simpler to then approach conscious experience as simply "how the brain spins on sensory input and memory of sensory input," rather than quale this and quale that?
I am NOT claiming I know how to divide up all of our experiences into what should be "qualia" or not. My argument is simply:
The more we learn about how consciousness works, the more we will realize that experiences cannot be summed up in a single term, even if that term is generally useful for some of those experiences.

I think the words "qualia" and "quale" can be used to describe some of the ways the brain spins sensory and memory inputs.
I think it is important to realize that not all experiences would be computed the same way. Some of them we experience more directly than others.

The quale of pain usually draws stronger attention from us than the quale of the color red.

The emotion of love can yield strong attention, but it is not something that usually qualifies as "qualia". We don't "feel it" like we do pain. But, it's there in our minds, driving us mad.

There could be one prominent way in which we experience several things: colors, music, etc. And, that could be labeled "qualia", and everything that deviates from it would not be qualia, though they would be experienced in other ways. Only time, and lots of study, will tell exactly where to put what.
 
A computer simulation of a computer simulation will produce identical results. A computer simulation of anything else won't.

Well, what about computations performed by your brain?

Meaning, would the results of a computation you did be the same as the results of a computation a computer did? Forget about consciousness, I am speaking of simple arithmetic instructions.

Or would they be fundamentally different for some reason?
 
I can agree that the basic ingredients (or "pixels") of experience are nothing more than sensory input and memory input (and possibly other physiological inputs, but that's another story).

I am going one step further and saying that memory input can be fully reduced to past sensory input. Which implies that all your conceptions of self can also be fully reduced to past sensory input.

There is nothing but sensory input and how it is processed, is what I am claiming.

But, even pixels on a computer screen are computed. The instructions determining which pixels show which colors have to be explained. In an actual computer, those would be the instructions in its software.

Yes, but that is the point -- all the software needs to deal with are bits. That makes it more complex in some ways, but vastly simpler in all other ways. If all neural processing needs to deal with is sensory input (past and present), it has the same effect.

Fundamentally this isn't disputable, of course, because one can just claim that any neural impulse is a "sensory input" and so by definition the brain would then process nothing but sensory input.

However, I think it is more than that. I think if one just thinks hard about the issue, it starts to become apparent that all of our thoughts are merely reflections on past sensory input, and that our emotions are merely modifiers that change how we treat sensory input. And our sense of self is just a collection of reflections on sensory input relating to the physical body we happen to be.

I don't see how sorting reflections on sensory input into different categories, such that one category is a quale and another isn't, necessarily furthers any explanation.
 
I believe you are using one, right now, to post to this very forum! :jaw-dropp

Just to put this to rest -- from the wikipedia entry on Turing Machine:

wikipedia said:
A Turing machine is a device that manipulates symbols on a strip of tape according to a table of rules. Despite its simplicity, a Turing machine can be adapted to simulate the logic of any computer algorithm, and is particularly useful in explaining the functions of a CPU inside a computer.
The "Turing" machine was described by Alan Turing in 1936, who called it an "a(utomatic)-machine". The Turing machine is not intended as a practical computing technology, but rather as a hypothetical device representing a computing machine. Turing machines help computer scientists understand the limits of mechanical computation.

The proper terminology for the thing I am using right now to type this is a Turing equivalent process. It is not strictly a Turing machine, unless we want to just do away with the formalism, in which case this thread is going to rapidly degrade into people talking past each other because there is no agreed upon definition of anything.
 
I only introduced the robots to demonstrate how our minds tend to shift in these regards. Take the robot spiders away, again, replace them with CG spiders on CG webs, but DO NOT change the algorithm being used to evolve and evaluate the webs.
One person might say that the replication is now back to a simulation.

Another would say that the replication and simulation were actually the same thing, the whole time. It was only the proximate details: Robots or CG spiders, that was different.

It should be noted that a machine that can control robot spiders (or indeed CG spiders that operate in real time) is not the same thing as a Turing machine. It's a control mechanism.
 
I am going one step further and saying that memory input can be fully reduced to past sensory input. Which implies that all your conceptions of self can also be fully reduced to past sensory input.
Not all mental inputs originate from the senses. I would use the phrase "physiological input" to more generically cover the basic components of self and experiences and such. But, that's a minor point.

Yes, but that is the point -- all the software needs to deal with are bits. That makes it more complex in some ways, but vastly simpler in all other ways. If all neural processing needs to deal with is sensory input (past and present), it has the same effect.
Reducing experiences to "sensory input" (or "physiological input", as I prefer) the same way we reduce computer images to pixels isn't a very insightful exercise. Yes, it took mankind a looong time to get that far. But, today it's a trivial statement, and it doesn't explain how experiences actually come about.

Dennett would call that "greedy reductionism". It's like reducing a painting to merely a bunch of paint strokes, as an explanation for why people can get all emotional about it.

I think the much more interesting insights are coming from the study of the processing involved in turning those inputs into experiences. So, that influences how I write these posts.

I don't see how sorting reflections on sensory input into different categories, such that one category is a quale and another isn't, necessarily furthers any explanation.
Perhaps my tearing things apart was a bit premature. Tossing categories out won't further any explanation, I'll admit that.

However, as further explanations develop, there will inevitably be more categories formulated in the process. I was offering a word of warning that this is likely going to happen.

Just to put this to rest -- from the wikipedia entry on Turing Machine:
This thread isn't about Turing Machines.

How about if I use the more generic term "Universal Computing Machine", instead? (This term would include typical personal computers.)

I suspect it is possible that consciousness could emerge from a Universal Computing Machine, given the proper algorithm and inputs and such.

It should be noted that a machine that can control robot spiders (or indeed CG spiders that operate in real time) is not the same thing as a Turing machine. It's a control mechanism.
A control mechanism modeled inside a Turing Machine...

...or perhaps you might prefer "Universal Computing Machine", if you'd rather not have me use the term "Turing".
 