
Explain consciousness to the layman.

Reminds me of the question: "if a tree falls in a forest and no body is around to hear it... does it make a sound?"

The correct answer is no. For a sound to register, ears are required.



I dunno. What does the simulation do when the observer goes home for the night?

So when humans aren't around the world doesn't exist?
 
You could make an analogy to "meaning", though defining that word is more problematic than defining "sound".

But you don't need to make that analogy -- nobody here claims that information can have meaning without an observer to infer that meaning.

In fact you could say that "meaning" is just part of how an observer reacts to information, as in "if observer reacts with behavior X, the information carried meaning Y for the observer."

So it isn't that hard to define, either.
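
As a rough illustration of that operational definition (everything here, including the Observer class and the example signals, is a made-up sketch rather than anyone's actual proposal):

```python
# A minimal sketch of "meaning as observer reaction": the same signal
# carries different meaning for different observers, and no meaning at
# all when there is no observer to react to it.

class Observer:
    def __init__(self, reactions):
        # reactions maps a signal to the behavior it elicits
        self.reactions = reactions

    def react(self, signal):
        # "Meaning" is defined operationally: it is just whatever
        # behavior the signal produces in this observer.
        return self.reactions.get(signal, "no reaction")

english_speaker = Observer({"fire!": "run for the exit"})
toddler = Observer({"fire!": "look around, puzzled"})

print(english_speaker.react("fire!"))  # run for the exit
print(toddler.react("fire!"))          # look around, puzzled
# Without an observer, "fire!" is just a string: no behavior, and on
# this account, no meaning.
```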
 
You were talking about a film. We could toss in tomato salad while we're at it, and clowns, and the proper way to age cheese, but all you're going to do is add distractions. What's wrong with talking about a specific thing? What benefit is there in slapping DVDs into the discussion? It's just an excuse to lose focus.
Yes, it does. In modern video games, there are often pauses in game play in order to present a little snippet of story--a cinematic. We could call this a film as well. Problem is, some of these are actually generated using the same mechanisms as the game play--and this certainly bleeds into the realm of simulation.

So, yeah, it matters.

If you read the above posts, you'll see what is being emphasized. Yes, it does matter that we talk about a particular thing, because you can't be sure an opinion is supposed to even apply until you hear the opinion first. When I want to offer an opinion on DVDs, let's have the subject come up first. Then let's have me offer an opinion on it. Then you can comment.

It doesn't matter what answer comes up. It's simply not going to be possible to claim a qualitative difference between the film and the computer if you can't slot the DVD unambiguously into one area or the other. Clearly the DVD holds the images, in a way--but they are also generated.

Naturally the grey areas are an annoyance, because if there really were a qualitative difference between the film and the simulation, there wouldn't be grey areas.

Until then, comment on what has been said, and stop throwing things in randomly. It's just fogging up the issue. You're only giving the illusion of having a discussion.
Yes, they are. They are stored in miniaturized forms on the film strip. When the film is shown, they are projected on the screen. You can tell exactly which image would get projected at which time by looking at the image on the film strip, and where it appears in the film.

You can do exactly the same with the computation. It might be tedious to do so, but you can read the program, look at the data, and you will be able to say precisely what will be produced at every stage. The output of the computation is inherent in the starting conditions every bit as much as the pictures on the film produce the images being projected. Which is exactly why the DVD presents such a problem. They're all just different ways to generate images using a bit of equipment and some information. If one is dynamic, they're all dynamic. If one uses rules, they all use rules. They are all causal. Causality is a necessary component of consumer electronics, otherwise you wouldn't be able to switch them on.
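
A minimal sketch of that determinism point, with an arbitrary toy update rule standing in for the program (the rule and numbers are illustrative assumptions, nothing more):

```python
# For a fixed rule and fixed starting data, every later state of a
# computation is already implied by the initial conditions, just as
# every projected image is already implied by the frames on the strip.

def step(state):
    # One deterministic update rule (purely illustrative).
    return (state * 3 + 1) % 97

def history(initial, n):
    # "Reading off" the whole run from the starting conditions.
    states = [initial]
    for _ in range(n):
        states.append(step(states[-1]))
    return states

run_a = history(5, 10)
run_b = history(5, 10)
assert run_a == run_b  # same start, same rule -> same "frames", always

# "Frame" 7 of the run is fixed before anything is "played":
print(run_a[7])
```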
 
You could try using 'context' in place of 'world'.

Although any expression you use can be deliberately misrepresented to subvert your meaning by someone determined to avoid your argument.

You could use the word "context" but then talking about what something would look like to one of the inhabitants of this alternative context would clearly make no sense.

If the use of "world" was intended to just refer to a point of view for looking at things, there's been ample opportunity to make that clear.
 
So, it's a really large number. So that's a particular class of configurations.

Your intuition is failing you because you see that really large number, get impressed, and conclude, "surely there must be some conscious thing in there somewhere". But what you're missing is that the fact that some states will produce consciousness, plus the existence of lots of states, doesn't in itself guarantee any likelihood at all that conscious states could realistically pop up by chance. To even ballpark this, you would have to compare something akin to the ratio of minimally conscious states of systems to non-conscious states of systems. And there may be so many more non-conscious states than conscious states that it would be as perverse to expect a chance occurrence of a conscious state as it would be to expect a glass of water with uniformly dissolved ink in it to spontaneously break apart into a clear half and an ink-laden half.
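
A toy version of that ratio argument, with placeholder exponents chosen only to show the shape of the reasoning, not to estimate anything real:

```python
# Even an astronomically large number of states guarantees nothing if
# the fraction of conscious states is smaller still. The exponents
# below are made-up placeholders.

total_states = 10**120          # "a really large number" of configurations
conscious_fraction = 10**-150   # hypothetical fraction that are conscious

expected_conscious = total_states * conscious_fraction
print(expected_conscious)       # 1e-30: effectively never, despite 10**120 states
```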
The computationalist account would require specific kinds of configurations; not merely "complicated" configurations, but configurations that have relations in them behaving in particular ways.

Not even PixyMisa is saying that! The computationalist account requires specific kinds of computations.

But sure, everything is doing computations.

I don't think you realise quite how big that number is. It's big enough that even if only an incredibly small percentage of those computations are of the "right kind", consciousness must be springing into existence all over the place.
 
I think you are misquoting me here. :D

I do expect that human brains will be replicated artificially and be conscious. Just not at the moment.



That is because you have not been watching enough science FICTION programs.

If only you managed to REDEFINE English words according to your wishful thinking and set your standards to "monumentally simplistic" levels, you would be quite convinced that we already have extant conscious machines, and you would be wishing you could befriend them.

It all depends on what the meaning of the word "is" is, and anything can be redefined to make "monumentally simplistic" conclusions sound profound... unfortunately, you must first have subconsciously confounded reality in your consciousness.
 
It's bizarre. Piggy has been totally explicit for a long time about his absolute insistence that artificial minds are possible in principle. He's been very explicit about that. He's a hard-line materialist, AFAIAA. Yet if this conflicts with someone's hard-core belief system, it's just ignored.



If it is not out of an oversight, it must be a deliberate ploy... and since it has been rebutted and denied on several occasions, then

It is still extremely bizarre to hear folks make this accusation...

[snip]
Why y'all keep insisting otherwise is nothing short of baffling.
 
But you don't need to make that analogy -- nobody here claims that information can have meaning without an observer to infer that meaning.

In fact you could say that "meaning" is just part of how an observer reacts to information, as in "if observer reacts with behavior X, the information carried meaning Y for the observer."

So it isn't that hard to define, either.


Show me an algorithm containing Gadamer's Truth and Method.
 
If it is not out of an oversight, it must be a deliberate ploy... and since it has been rebutted and denied on several occasions, then

OK, so why does Piggy keep saying that it is impossible for a simulation to have consciousness? If that is not what he is saying, then what is he saying?

Am I totally misunderstanding what a simulation is? Because to me a simulation in this context is a computer running software that mimics a conscious mind. Where is the distinction between a computer that mimics consciousness and a computer running software that mimics consciousness?

Please remember that I'm not saying that such computers/simulations exist here and now, only that I don't think they are impossible in principle.

Unless I'm totally misunderstanding the whole debate, Piggy seems to be OK with the idea of machines generating their own consciousness, but also saying that it is impossible if they do it by simulating a human mind.

:confused:

Please feel free to go on insulting me if it makes you feel better. :rolleyes:
 
I repeat--the film is a simulation. But the entities within the film are simply portions of images--they do not relate to each other causally. More specifically, if you see a film about a tornado blowing a house away, the image of the tornado on the film doesn't cause the image of the house on the film to blow away. Rather, the sequence of images provides the visual illusion of a tornado moving such that it blows the house away.

There's no obvious causal connection between the frames of a reel of film. However, there's no causal connection between the successive instructions of a computer program either. The system which shows the film or runs the program provides the causal link. In the case of displaying a film, the pictures are displayed in succession according to the rules that apply to producing the illusion of motion from single images. It's a simple causal system, but it is causal.
 
OK, so why does Piggy keep saying that it is impossible for a simulation to have consciousness? If that is not what he is saying, then what is he saying?

Am I totally misunderstanding what a simulation is? Because to me a simulation in this context is a computer running software that mimics a conscious mind. Where is the distinction between a computer that mimics consciousness and a computer running software that mimics consciousness?

Please remember that I'm not saying that such computers/simulations exist here and now, only that I don't think they are impossible in principle.

Unless I'm totally misunderstanding the whole debate, Piggy seems to be OK with the idea of machines generating their own consciousness, but also saying that it is impossible if they do it by simulating a human mind.

It's generally agreed that a computer simulation of X will not actually produce X for any given property or system. This is something generally considered too obvious to need pointing out. A tornado is the example we're currently using, and it hardly needs to be stated that a computer simulation of a tornado is not a tornado.

However, it's insisted that a computer simulation of consciousness will be conscious. This is held to be so obvious that disputing it means that you have to believe in magic.

Piggy's view - which I tend to prefer, though with a little more diffidence - is that in order to produce consciousness, the actual physical processes necessary for consciousness need to be duplicated. Whatever those necessary processes are, they are present in the human brain.
 
There's no obvious causal connection between the frames of a reel of film. However, there's no causal connection between the successive instructions of a computer program either.
But the program does cause data to transform in particular ways, and that's where you find your causal relations. All you're doing here is pointing out, in terms of our marble computer, that putting a rocker here doesn't cause the next rocker to be there. And while correct, it's irrelevant. We're going to have something corresponding to the tornado in the computer, and something corresponding to the house. That's not going to be code--it will be data.
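
A rough sketch of that point, with the tornado and the house as data and a made-up update rule supplying the causal relations (all names and numbers here are illustrative assumptions):

```python
# The tornado and the house are data, and the update rule makes the
# state of one change the state of the other -- unlike successive
# frames of a film, where no image produces the next one.

tornado = {"position": 0.0, "strength": 9.0}
house = {"position": 10.0, "intact": True}

def update(tornado, house, dt=1.0):
    # The tornado's state at one step causally determines the house's
    # state at the next step.
    tornado["position"] += 2.0 * dt
    if abs(tornado["position"] - house["position"]) < 1.0 and tornado["strength"] > 5.0:
        house["intact"] = False

for _ in range(6):
    update(tornado, house)

print(house["intact"])  # False: the simulated tornado wrecked the simulated house
```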
The system which shows the film or runs the program provides the causal link. In the case of displaying a film, the pictures are displayed in succession according to the rules that apply to producing the illusion of motion from single images. It's a simple causal system, but it is causal.
But playing the film still doesn't result in the image of the tornado making the image of the house move.
 
OK, so why does Piggy keep saying that it is impossible for a simulation to have consciousness? If that is not what he is saying, then what is he saying?

Am I totally misunderstanding what a simulation is? Because to me a simulation in this context is a computer running software that mimics a conscious mind. Where is the distinction between a computer that mimics consciousness and a computer running software that mimics consciousness?



Yes... you are... have a look here.


Unless I'm totally misunderstanding the whole debate, Piggy seems to be OK with the idea of machines generating their own consciousness, but also saying that it is impossible if they do it by simulating a human mind.


Yes... you seem to be... I think it is because you are totally misunderstanding what a simulation is.
 
Not quite.

They relate in all sorts of ways that we ignore because they are relevant only to behaviors that we don't particularly care about.

Why is this an issue?

We don't care about the relations that are relevant only to the behaviors that a rock can exhibit.

We do care about the relations that are relevant to the behaviors that things like lifeforms can exhibit.

This is fact -- lifeforms exhibit behaviors that rocks do not. Whether or not those behaviors are significant is another issue, although there are objective statistical reasons for why they are. But that is neither here nor there -- if we want to figure out why a person is conscious, we certainly don't need to look at the relations that go into causing a rock to sit in the sun, because we don't consider rocks conscious. We tend to look at the relations that go into behaviors such as the behaviors that neurons exhibit in a living conscious brain as opposed to the behaviors that neurons exhibit when one is buried 6 feet in the Earth in a coffin.

I don't understand why you find this controversial.

It's an issue because once you start talking about behaviors "we don't particularly care about", then you're dragging the observer back into it.

The fact is, without the observer who knows how to use that machine symbolically, it is, for all intents and purposes, just like a rock.
 
It seems like another attempt to separate out the aspects of consciousness which are difficult and interesting, and to present it as a data processing issue.

Yes, he opens by observing that his cohort doesn't yet know enough to design conscious machines or even to model consciousness.

But then he makes the leap to a claim that this somehow means that biologists don't have a workable definition, either.

In fact, the progress currently being made in neurobiology is evidence that we do have a workable definition for research purposes.

It seems to be standard practice for the computer folks to work inside their own bubble and attribute their problems to other fields which do not share those problems.
 
Thank you for telling me why I'm ignoring something I didn't mention. For your next trick, tell me what I had for dinner.

Translation:

Do realize that you're entirely making up this useless fact about something never mentioned, please. Your desire to extrapolate into your opposition's head is the very thing that causes you to be a poor communicator. Don't presume you can speak for me--especially if you're speaking to me.

ETA: The rest of your posts I'm fine with at the moment. The real test is how others respond to you.

I don't speak for you.

I'm pointing out why your view from the bubble is incomplete and, as a result, incorrect.

But whenever I make references to anything outside your bubble, you think I'm going off-topic.

Nothing I can do about that, I suppose.
 
But what if "the world of the simulation" is made up of images and sounds from the real world, transferred via cameras, microphones etc?

If a simulated person was reacting in real-time to the outside world like a person looking through eyeballs and listening through ears does, what then?

Then nothing.

As soon as you mention "a simulated person" you're talking about an imaginary person.
 
Why? If the computer running the simulation is still switched on, and the software is still processing information, why should it matter if a person is watching?
What if we set up a mirror so it could watch itself?
What if watching itself was part of the simulation? (It seems to me this would be necessary in a simulated consciousness anyway.)

When no one is observing the simulator, it's impossible to say there's any simulation going on.

There's only the machine changing states.

Only when you add an observer to the system who knows how to interpret the output does the "world of the simulation" appear as the world of the simulation.

Think of Olivier playing Hamlet.

Only Olivier is ever on stage.

Hamlet is in the minds of the audience, if they speak English or are familiar with Shakespeare.

Olivier is, for all intents and purposes, a perfect particle-level simulation of Hamlet, and yet even this down-to-the-quark simulation has no Pinocchio point... Olivier never becomes a 13th century Danish prince, no matter how well he acts the part.
 
Even if the simulator is wired up to things outside?

Yes, even then.

Because the only thing which determines the output of that machine is the behavior of the machine.

The machine never, ever stops being a machine and starts being something else, or starts being the machine and something else.

It is always only doing what we can observe and measure it to be doing, which is essentially the same no matter what it is simulating (if anything).

The "world of the simulation", like the world of the graphical user interface, is an imaginary construct on the observer's part.
 