
Has consciousness been fully explained?

Yes, Piggy, if you are a simulation, you don't need a real house to live in. Why is that so hard to get?

First, you need to understand why it's not possible for a conscious entity to actually be a simulation.
 
Meaningless? The frame of reference of a simulated consciousness would also be its own reality.

A simulated anything exists only in the mind of the perceiver who is able to understand the symbols as symbols and decode them. It has no independent existence whatsoever. To treat it as if it does is absurd.
 
Consciousness happens in the brain. The brain is made up of neurons, most of which we understand enough to program their behavior. Therefore, we can (or will be able to) program (in theory) the behavior of the brain. Consciousness is a subset of that behavior.

"Grasping happens in the hand. The hand is made up of muscle and bone, most of which we understand enough to program their behavior. Therefore, we can (or will be able to) program (in theory) the behavior of the hand. Grasping is a subset of that behavior."

Do you believe that this means that we can get machines to grasp objects in reality by programming alone?

Sure would save us a lot of money that we now spend on building robotic arms.

Trouble is, it's pure nonsense.

And the truth is, we cannot program a hand, we can only program representations of a hand, which are useless when it comes to doing anything in the real world.

(And truth be told, we can't even program that, because we need monitors or other such equipment to be able to perceive those representations and interpret them as being a hand.)

If we want to create machines that grasp, we have to build the apparatus to make it happen.

By the same token, if we want to create machines that do consciousness, we will have to build the apparatus to make it happen. THERE IS NO PURE PROGRAMMING SOLUTION, NO PURE LOGIC SOLUTION, NO PURE INFORMATION SOLUTION.

ETA: Ok, I'm really gone now because this conversation has descended into idiocy and I have to be at work tomorrow... no, wait, today... <groan!>
 
laca said:
Knowing you are unable to tell whether you are in a perfect simulation... do you live your life differently from somebody who finds the idea absurd?

No, and that was exactly my point. What's yours?


I'd set about determining which possibility were the more likely first off. If chances indicated I were in a simulation I might even move to Hollywood :D

That aside, since we already know that any particular thing such as the beer I'm drinking is either a computer program feature or an actual beer... what actually exists - simulation or external reality - is already compatible with the external-world propositions.

If you argue that my beer is a computer program feature you destroy your own argument. If you argue that my beer is a cold frosty one then you have no argument at all.

A true simulation cannot allow for the possibility of a real external world. Something I couldn't get rocketdodger to come to grips with.
 
You are confusing two very different situations: (A) being a conscious entity who is experiencing a simulated world, and (B) being a conscious entity which is itself a simulation.

A is entirely possible, B is not.

Since consciousness is not material in nature but rather a property, I'm not sure where the difference lies.
 
Here's my line of thought:
  1. We can program all neuronal functions.
  2. The brain is made up of neurons.
  3. From 1. and 2. it follows that all brain functions can be performed by programming.
  4. Consciousness is entirely the "byproduct" of the brain.
  5. From 3. and 4. it follows that consciousness can be performed by programming.
Where is the error?

The first error is at step 1. We cannot program all neuronal functions.

The neurons will not be programmable. Representations of the neurons will be programmable.
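To make "representations of neurons" concrete, here is a minimal sketch of the kind of thing step 1 usually means in practice: a textbook leaky integrate-and-fire model. The equation and the parameter values are standard simplifications, not a claim about everything real neurons do, and the function name is purely illustrative.

Code:
def simulate_lif_neuron(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
                        v_reset=-65.0, v_threshold=-50.0, resistance=10.0):
    """Leaky integrate-and-fire neuron: return the membrane-potential trace
    and spike times for a list of input currents, one value per time step."""
    v = v_rest
    trace, spikes = [], []
    for step, current in enumerate(input_current):
        # Leak toward the resting potential, pushed by the input current.
        v += (-(v - v_rest) + resistance * current) * (dt / tau)
        if v >= v_threshold:       # crossing threshold counts as a "spike"
            spikes.append(step * dt)
            v = v_reset            # reset after firing
        trace.append(v)
    return trace, spikes
# A constant input of 2.0 for 100 steps makes the model fire periodically.
trace, spikes = simulate_lif_neuron([2.0] * 100)
print(len(spikes), "spikes at times:", spikes)

What the program manipulates is a number standing for a membrane potential; whether that counts as "programming a neuron" or only "programming a representation of a neuron" is exactly what is in dispute here.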

ETA: Ok, I'm really gone now because this conversation has descended into idiocy and I have to be at work tomorrow... no, wait, today... <groan!>

What's idiotic is that you conceded the (so far) only objection to my argument and are still trying to proclaim victory by ridicule.
 
I'd set about determining which possibility were the more likely first off. If chances indicated I were in a simulation I might even move to Hollywood :D

You can't determine which is more likely. You must assume it's real, because that is the only thing that's helpful to you. Otherwise, you collapse into solipsism.

That aside, since we already know that any particular thing such as the beer I'm drinking is either a computer program feature or an actual beer... what actually exists - simulation or external reality - is already compatible with the external-world propositions.

If you argue that my beer is a computer program feature you destroy your own argument. If you argue that my beer is a cold frosty one then you have no argument at all.

No, I'm arguing that you cannot know. You must assume it's real. After all, the only way you experience your beer is through your senses. A computer simulation can have beer in it. Even cold frosty ones.

A true simulation cannot allow for the possibility of a real external world. Something I couldn't get rocketdodger to come to grips with.

Why is that? I'm not following.
 
It's just another red herring. If the brain in a vat can tell it's a brain in a vat, then it's not a true brain in a vat.
 
Look, I think -- I hope -- that we can all agree that you can't make a computer get up and walk across the room by having it run a sim of a human body standing up and walking across the room.

And I think/hope we can all agree that programming alone will not make a computer play music or produce a printout or display photographs. To do these things, we need hardware that's designed and built to do them.

So far, we can agree to that.

But when it comes to being conscious, some folks contend that this one behavior is an exception

Not "this one". But "one, including this". I don't think it's special pleading. To coin your own phrase, I hope we can all agree that, IF consciousness is computation, then machine computation could, in theory, produce consciousness. If so, then what is a simulation if not computation ? As I said earlier, simulations don't exist in a vacuum, they pretty much necessarily interact with the "outside" world.

The problem with this claim is that we can describe the real-world behavior of every organ and system in our bodies -- in fact, every cell and molecule -- in the same way.

Only if we reduce "computation" to its most useless definition.

I have always maintained that consciousness is a behavior, a bodily function. I have never even implied that it is a substance. I have no clue where you and Pixy are getting that notion.

It might have something to do with the fact that you said it requires certain configuration and specific fuel.

The analogy is simply to point out that machines running simulations do not somehow begin to exhibit the behaviors of the systems being simulated. If they did, you could power your computer by running a simulation of a hydroelectric station and plugging it into itself.
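To put the analogy in code (a minimal sketch using the standard hydropower formula P = rho * g * Q * H * eta; the function name is purely illustrative):

Code:
RHO_WATER = 1000.0   # density of water, kg/m^3
G = 9.81             # gravitational acceleration, m/s^2
def simulated_power_output(flow_m3_per_s, head_m, efficiency=0.9):
    """Hydropower formula P = rho * g * Q * H * eta, in watts."""
    return RHO_WATER * G * flow_m3_per_s * head_m * efficiency
# A 100 m^3/s, 50 m station "produces" about 44 million here...
print(simulated_power_output(100, 50))  # ...but that is a number on a screen, not 44 MW you can plug anything into.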

I already agreed that the machine itself is not running. But I want to know what you think about the following proposition: the simulated person IS running.

Let me get this straight.... You're asking me for a link to what brain researchers are not thinking and saying?

You claim to know what they're thinking. I'd like some evidence that it's not just your opinion.

Who gives a rat's ass?

The character in the simulation might. In fact, many of them are programmed to avoid getting hurt. I'm not claiming they're actually conscious, mind you, but I'm not sure I agree with your definition of "hurt".

Tell you what, you hire me to fly you to Phoenix, I'll do it in a simulator. Satisfied?

No, that does not answer my question. What about simulated actions? Are they not actions themselves?


You are being unusually dismissive in this thread, Piggy.

The simulated person exists exclusively in your imagination.

No, they "exist" in the simulation. As Iaca pointed out, _we_ might be in a simulation, but it changes nothing of how we value our lives. In a hypothetical complex simulation, the simulated person might do as well.
 
I think I'm done with this merry-go-round.

As soon as I have time, I'm going to start a thread in the science forum for discussion of research on consciousness from a biological perspective, and put my efforts there.

That thread will explicitly NOT be about computer science, information science, philosophy, or imaginary conscious machines. After all, the cranks have plenty of room to roam here in the philo forum.

Take care, y'all.

Gosh, I hate it when I take the time to address someone's arguments in detail and then they leave abruptly.
 
No, it would not, because simulations exist only in the minds of those perceiving them. They have no independent existence whatsoever.

"Independent existence" ? Really, what the hell does that even mean ?

The simulated mind knows nothing.

Again, it doesn't exist in a vacuum. My computer "knows" some things, and any simulated character running on one of its programs may be given access to that knowledge. How does it not know anything, then?

Pixy is correct when he points out, contra what you said in an earlier post, that you consider consciousness to be related to substance rather than behaviour. Otherwise you wouldn't insist that it's the same as any other substance.

The best analogy, as I said, is between consciousness and "running". But for some reason you seem to think that a simulated person doesn't run in the simulation, or at least, if it does, "so what?" So what? So it RUNS, and it defeats your entire argument, is what.
 
Gosh, I hate it when I take the time to address someone's arguments in detail and then they leave abruptly.
It's also nice when someone decides to rule relevant scientific fields out of a scientific discussion, because they say so.

"Independent existence" ? Really, what the hell does that even mean ?
I don't know, but Piggy think that minds generated one way magically have it, and minds generated another way magically don't.

He has not, so far, been able to explain this.
 
Look, I think -- I hope -- that we can all agree that you can't make a computer get up and walk across the room by having it run a sim of a human body standing up and walking across the room.

And I think/hope we can all agree that programming alone will not make a computer play music or produce a printout or display photographs. To do these things, we need hardware that's designed and built to do them.

But when it comes to being conscious, some folks contend that this one behavior is an exception, that it can result purely from programming, and that running a perfect sim of a brain will result in the computer being conscious. (Nevermind that the folks doing just that don't believe any such thing.)

When asked why, the reason given is that consciousness is a behavior resulting from computation, which makes it different.


To make a suitably programmed computer play music, we have to connect it to (or include within the computer) a d/a converter, amplifier, and speaker. We do not have to include or attach a guitar or a symphony orchestra.

To make a suitably programmed computer display a photograph, we have to connect it to an output buffer and a display screen. We do not have to attach a cat, mother-in-law, or mountain range, to display photographs of those things.

To make a suitably programmed computer walk, we do have to attach legs. But we can define what properties and abilities those legs need to have. Mechanical linkages, actuators, and sensors (force and position, typically), and a motive power source are required. Reflexes, balance, and control knobs for pace and speed are not required, because the computer provides those functions. So robot legs can suffice; we do not have to attach a man or a horse.

To make a suitably programmed computer conscious, what do we have to add?

If the answer is "an entire biological brain," that is a lot like requiring a symphony orchestra to be attached to a computer for it to play Beethoven's Ninth, or a live horse for it to walk. Which doesn't mean it can't be the right answer (though it is obviously wrong with regard to playing music or walking), but then the question is why no portion at all of the brain's activity in producing consciousness can be performed instead by the computer. Why not?

If the answer is "something, but we don't know what," then the question is, if you don't know what, what justifies the conclusion that any such "something" exists?

If the answer is "some portion of a biological brain, but we don't know what portion," the same question: what justifies the conclusion that the necessary portion is nonempty?

In all of the above examples, and all others used in this thread (flying an airplane, controlling a power plant, playing chess, etc.) we can state very specifically what additional hardware must be attached to a suitably programmed computer in order to create the corresponding real-world behavior. What additional hardware is required for the real-world behavior of consciousness? If we cannot answer, then why not, and what justifies any constraints or assumptions we place on that non-answer?

The computationalists' answer is clear and specific: no additional hardware is needed (though a minimal amount, e.g. a keyboard or microphone and a text display or audio output, would be needed for us to perceive the conscious behavior). What are the alternative answers?

Respectfully,
Myriad
 
There is a reason that the computationalists keep their theory of consciousness going in R&P. It's called confirmation bias. They have yet to propose a scientific test which might negate their theory.

Or -- and I am just guessing here -- the thread is in R&P because that's where the OP is.
 
I don't think that sympathetic crying is an instance of conscious volition. I was only making the point that the poster had very little understanding of the newborn brain.

I don't think that newborns think like you and I do, just as newborns don't use their muscles the way you and I do.

However, I don't doubt that newborns have muscles or that they're conscious.

That seems like a "just because" argument.

If you don't think newborns think like you and I, then why do you think they are conscious?

What requisites for "consciousness" do newborns satisfy?

This is like the template behavior of participants in this thread. People will ask rational questions, and make rational arguments, and at some point the logical conclusions become distasteful, and certain individuals fall back to parroting these "that is just silly" statements.

I am just asking you -- what do newborns do that make you think they are conscious? Why are those things so different from what programmable thermostats do? Replying with some variation of "this is just silly" is a stupid way to have a discussion, if you even wanted to have a discussion in the first place ...
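For comparison, this is roughly all a programmable thermostat does. A minimal sketch; read_temperature and set_heater are hypothetical stand-ins for whatever sensor and relay interface a real device exposes:

Code:
def thermostat_step(read_temperature, set_heater, target=20.0, hysteresis=0.5):
    """One control cycle: heat on below target - hysteresis,
    off above target + hysteresis, otherwise leave the heater alone."""
    temp = read_temperature()
    if temp < target - hysteresis:
        set_heater(True)
    elif temp > target + hysteresis:
        set_heater(False)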
 
laca said:
I'd set about determining which possibility were the more likely first off. If chances indicated I were in a simulation I might even move to Hollywood :D

You can't determine which is more likely. You must assume it's real, because that is the only thing that's helpful to you. Otherwise, you collapse into solipsism.

That aside, since we already know that any particular thing such as the beer I'm drinking is either a computer program feature or an actual beer... what actually exists - simulation or external reality - is already compatible with the external-world propositions.

If you argue that my beer is a computer program feature you destroy your own argument. If you argue that my beer is a cold frosty one then you have no argument at all.

No, I'm arguing that you cannot know. You must assume it's real. After all, the only way you experience your beer is through your senses. A computer simulation can have beer in it. Even cold frosty ones.

A true simulation cannot allow for the possibility of a real external world. Something I couldn't get rocketdodger to come to grips with.

Why is that? I'm not following.


The question of whether we are in a simulation or the external world necessarily acknowledges the external world. That is a priori knowledge, having nothing to do with the senses.

Since we already know we might be in a simulation or we might be in the external world I ask you: what about that cup of coffee over there... computer program feature of the simulation or an external world cup of coffee?

If you call it a computer program feature you acknowledge that computer program feature within the external world and destroy your own argument. If you call it a cup of coffee you have no argument at all.

Both alternatives take place in the external world.

If you want to make your simulation argument successful it must be incompatible with real external-world propositions. Good luck with that.
 
If the brain in a vat can tell it's a brain in a vat, then it's not a true brain in a vat.


Yes. A brain in a vat could only realize it was a brain in a vat if it had knowledge of the external world... something a brain in a vat could not have.
 