Explain consciousness to the layman.

When you look at your desk, why don't you see the electrons?

So... you're saying that if I looked at the electrons in the CPU, I'd see a truck pull going on? Electrons with big tires crushing quarks or something?
 
You need to go back and read some of Belz's stuff, cause he either said that or confused me enough to think he did.
It is certainly possible to argue that you can't assert that something exists if you can't define it, but that's a question of semantics - if you can't define X, the assertion that X exists is not meaningful.

Is that what you're thinking of? I know that subject has come up more than once.

Certainly no-one is claiming that things don't exist unless they're defined. Well, knowing these forums, someone probably is claiming exactly that. Possibly in the Deeper than primes thread, which contains transfinite densities of nonsense.
 
I didn't say that the simulation is the same as the simulator. I said that the simulation is the same as the process the simulator is carrying out.

There's no difference between the simulator and what the simulator is doing.

You gotta watch out for that back door dualism, my friend.
 
if you can't define X, the assertion that X exists is not meaningful.

That's certainly true.

But the trick that gets played on these threads is to insist that a functional definition is not good enough even though we're only at the observational stage, and that if you can't provide a detailed structural definition, then the thing you're talking about must itself be "vague" or perhaps not exist at all.

Of course, there was a time when the aurora borealis was just "those lights in the sky" but that did not make them "vague" or perhaps non-existent or perhaps the same thing as all the other lights in the sky.
 
Then why don't you call up Tononi and Balduzzi and let them know.

I'm sure they'll rush out and revise their theory to assert that normal brains can indeed separate the experience of red from the experience of a square when they see a red square, and experience those qualities separately from one another.

Hey, happens all the time, right?
The answer is both yes and no.

You can recognise colour without shape and shape without colour; there is no requirement that the two be linked. However, when they are linked, they are linked in interesting and complex ways, as indicated by the Stroop effect or the McCollough effect.

And by the way, if you want to believe a camera is conscious, knock yourself out, but don't expect me to take you seriously.
Depends on the camera, of course. But given cameras today, it is not a trivial question.

I mean, really... do you not understand that your ideas are laughable in the context of cognitive neurobiology? (Which is the study of the brain, you know... that thing that does consciousness... unlike, say, machines... which don't do that.)
You're assuming your conclusion again.

Do you really not see this?
No. I do see that you're mostly wrong. Not entirely, but enough to render your conclusions nonsensical.

No, of course you don't, or you wouldn't keep repeating them... unless maybe you're an info truther exposing the conspiracy of the globalist biology world order to cover up what you know.
Perhaps you could try arguing your point instead?
 
That's certainly true.

But the trick that gets played on these threads is to insist that a functional definition is not good enough even though we're only at the observational stage, and that if you can't provide a detailed structural definition, then the thing you're talking about must itself be "vague" or perhaps not exist at all.
Could you give me a specific example? I don't believe I've ever seen that.
 
I think it was Ramachandran who mentioned the recent evidence contradicting the Global Workspace Model in an interview, but I can't seem to find it right now. If anyone has read / seen / heard the interview I'm thinking of (or wants to correct me otherwise), please post a link.
 
There's no difference between the simulator and what the simulator is doing.
Okay, if you want to take that perspective (it's not wrong, but the language is tricky) then the simulator and the world of the simulation are necessarily one and the same.

You gotta watch out for that back door dualism, my friend.
Heed your own advice, Piggy.
 
I have explained this to you many times, as have others. You're not paying attention.

You consistently assert that a brain is required to interpret the results of a computer. We point out that the brain is a computer; that everything it does can also be done by a computer.

You assert that it is somehow more. You can't say what, or how, or why you think so. You just insist that it is.

That's your magic bean.

You may "point out" that "the brain is a computer", but I might also point out that your aunt is a bicycle.

I see you're back on your Church-Turing kick. Well, can't help you there. You'll probably die with that needle in your arm.

And by the way, no brain is needed to interpret the results of any physical process, including what a computer does. It does what it does, end of story.

Now, on the other hand, if you've decided that what the computer actually does (change voltage states, run a fan, emit heat, play speakers, light up a monitor, spew ink on paper) is supposed to represent something else... then you're damn right you need an interpreter! To say otherwise is patently absurd.

Without an observing mind, there are no representations, just similarities.

And this is where your magic bean comes in -- the one you're holding and accusing me of stealing....

The machine is doing what it's doing, period.

If it's supposed to represent something else, fine... you need a brain in that system to decide what it's supposed to represent, otherwise it represents nothing.

If you insist that it does, then you're holding the magic bean, because you're claiming that the imagined representation exists somewhere outside the mind of the person who decides that A represents B.

You're insisting that the relationship "A represents B" is itself objectively real, and therefore requires no involvement by any brain to decide this. (It's called entification, and you are not by any means alone in engaging in it.)

Hello, magic bean!
 
Could you give me a specific example? I don't believe I've ever seen that.

There was a weak version of it here just recently when someone said that consciousness is just a label we put on things -- in other words, too vague to distinguish.

And I know I've heard Belz demand a structural definition, and complain that the concept is too vague to work with since we don't have one.
 
You can recognise colour without shape and shape without colour; there is no requirement that the two be linked. However, when they are linked, they are linked in interesting and complex ways, as indicated by the Stroop effect or the McCollough effect.

But this is exactly what Tononi and Balduzzi were saying!

If you see a red square, you cannot separate the redness from the squareness and decide to experience them separately. Experience is integrated in a way that is not divisible.

In other words, you can only have one point of view.

Of course, in cases of brain damage you do get weird effects, like people who see motion, but no thing performing the motion, which is impossible for normal brains to imagine.

But still, even for them, they have one point of view, the experience they have is integrated, and they can't simply decide to break it down into component experiences.
 
I think it was Ramachandran who mentioned the recent evidence contradicting the Global Workspace Model in an interview, but I can't seem to find it right now. If anyone has read / seen / heard the interview I'm thinking of (or wants to correct me otherwise), please post a link.

It's got its problems, for sure.

If you do run across that, pass it on, I like reading his stuff.
 
Okay, if you want to take that perspective (it's not wrong, but the language is tricky) then the simulator and the world of the simulation are necessarily one and the same.

Then why are you using two words?
 
Now the first thing to note is that Tononi and Balduzzi – who are among a group of researchers at the leading edge of consciousness studies – call their article “Toward a Theory of Consciousness”.

[snip]

They illustrate this point with the example of a digital camera with detectors that can distinguish among enough alternative states to correspond to some 1 million bits of information.
No they don't.

They very specifically illustrate the point with the camera sensor. The sensor only. That is an entirely different proposition. Indeed, if you read the paper and follow their model, it becomes clear that the camera as a whole, with its image processor and memory, is by their definition quantifiably conscious.
 
They are looking for a detailed operational theory of how consciousness arises in the human brain. That we don't have. But that it's the result of information processing is not even a question at this point.

You need to call up the biologists and let them know. They don't seem to have heard.

I mean, yes, they use the term "information" as a metaphor or abstraction, which can be very useful, but as far as they're concerned, consciousness is a result of the physical activity of the brain.

Pick up "The Cognitive Neurosciences" and browse the section on consciousness, for example... you won't find anyone fooling with an information processing model.
 
No they don't.

They very specifically illustrate the point with the camera sensor. The sensor only. That is an entirely different proposition. Indeed, if you read the paper and follow their model, it becomes clear that the camera as a whole, with its image processor and memory, is by their definition quantifiably conscious.

Ok, I'm done with this conversation.

They explicitly state that it's not.

In fact, the entire point of their example was that, according to their theory, the camera cannot be conscious because of the way it's built.

What is the key difference between you and the camera? [T]he chip is not an integrated entity: since its 1 million photodiodes have no way to interact, each photodiode performs its own local discrimination.... In reality, the chip is just a collection of 1 million photodiodes.... In other words, there is no intrinsic point of view associated with the chip as a whole.

Pixy, I can handle your coy jabs, your absurd conclusions, and even blowing smoke, but making stuff up? No, I'm not wasting my time with that kind of nonsense.
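
As a side note on that quoted passage: here is a minimal sketch, purely illustrative and not Tononi and Balduzzi's actual Φ calculation (the names, sizes, and measure used are my own assumptions), of the narrower point that units which never interact share no information across any partition of the system, while coupled units do:

```python
# Toy illustration only -- NOT Tononi and Balduzzi's phi measure.  It shows the
# narrower point from the quoted passage: if the units never interact, the two
# halves of the "sensor" share zero information, so nothing is lost by treating
# the chip as a mere collection of parts.  All names here are hypothetical.

import random
from collections import Counter
from math import log2

def mutual_information(pairs):
    """Mutual information (bits) between the left and right halves of each sample."""
    n = len(pairs)
    joint = Counter(pairs)
    left = Counter(a for a, _ in pairs)
    right = Counter(b for _, b in pairs)
    mi = 0.0
    for (a, b), count in joint.items():
        p_ab = count / n
        mi += p_ab * log2(p_ab / ((left[a] / n) * (right[b] / n)))
    return mi

random.seed(0)
samples = 100_000

# "Camera sensor": four photodiodes, each firing independently of the others.
independent = [(tuple(random.randint(0, 1) for _ in range(2)),
                tuple(random.randint(0, 1) for _ in range(2)))
               for _ in range(samples)]

# "Integrated" system: the right half is driven by the left half, so the whole
# cannot be described as two independent parts.
coupled = []
for _ in range(samples):
    half = tuple(random.randint(0, 1) for _ in range(2))
    coupled.append((half, half))

print(f"independent photodiodes: {mutual_information(independent):.3f} bits")  # ~0
print(f"coupled units:           {mutual_information(coupled):.3f} bits")      # ~2
```

Roughly, that absence of anything over and above the parts is what the quoted passage means by "no intrinsic point of view associated with the chip as a whole."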
 
It does answer the question, entirely accurately. Rather than give a reference to something else, or express it in terms which are themselves expressed in other terms, it allows you to directly have the experience yourself, and find out what it feels like.

No, it doesn't at ALL. It's like if I asked you how gravity works, and you suggested I drop an anvil on my toe and find out.

You asked "what does it feel like".

Yes and you didn't understand the question, clearly.

that's good, but not as good as direct experience.

That's a cop out. You didn't give me a clear definition because you can't, not because your answer was better.
 
The problem in making a network the subject in a subject-object relationship is... locating the object.

ie... if everything is a first person subject, well, then there is no object.

Likewise, if everything is a third person object, well, then there is no subject.

Humanity cannot find an object and sees no particular point in the subject and yet is precariously perched on the thread of existence perceived between the two.

Objects and subjects exist only as symbols, in the mind, of what is and what it may be.

The touchstone of what we perceive guides us in the subject of the object.
 