The Hard Problem of Gravity

We use words in quite sophisticated ways. When we say "The picture was a man, running" we know that what it really is, is something entirely static. What we mean is that the image creates an impression of running for the viewer. It doesn't say anything about the intrinsic properties of the object in relation to its environment, or quasi-environment. When we say "Hey Joe, come over here and see the pretty doggy running through the forest" we are implying several real objects and all the associated relationships. If we mention Mike's computer, we instantly realise that we are describing a potentially shared experience - which is how we communicate consciousness. The only relationship that matters is between the image and the people who view it - and more profoundly, the shared qualia.

What about a closed circuit video feed of a dog running through a forest?

Oh, and you are wrong -- when one says a simulated person ran through a forest, they don't mean it just created the impression of running in an observer, they mean the mechanics of the relations between the simulated components are the same as the ones in a real running person.
 
I don't think that fits either of the definitions I provided.

Can you explain how it does?

There are quantum computers that depend on the inherent 'switching' capacities of matter. Are you arguing that such systems are not computational?
 
There are quantum computers that depend on the inherent 'switching' capacities of matter. Are you arguing that such systems are not computational?

That isn't an answer, and even if it were, it isn't correct.

Quantum computers aren't "quantum" because they use mere atoms as switches; they are "quantum" because their memory can assume a superposition of states rather than a single state at a time.
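To make the distinction concrete, here is a minimal sketch in Python/NumPy (my own illustration, not anything from this thread): a classical bit occupies exactly one state, while a qubit's state is a vector of amplitudes over both basis states at once.

```python
import numpy as np

# A classical bit is in exactly one of two states at any moment.
classical_bit = 0  # or 1 -- never both

# A qubit's state is a normalized vector of complex amplitudes over
# the basis states |0> and |1>. This one is an equal superposition.
qubit = np.array([1, 1], dtype=complex) / np.sqrt(2)

# Measurement probabilities are the squared amplitude magnitudes.
probs = np.abs(qubit) ** 2
print(probs)  # [0.5 0.5] -- either outcome is equally likely
```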

So... again... can you explain how a single atom satisfies either of the "switch" definitions I provided?

Note that I already know single atoms switch. It isn't that hard to figure out ways atoms can switch. In fact, it can be shown mathematically that since multi-atom switches are made of nothing but atoms, at least one atom in the collection must itself switch.

The converse is not true -- you can't say a rock switches just because the molecules that make it up can switch. You can't say a molecule switches just because the atoms that make it up switch. You can't say an atom switches just because the particles that make it up can switch. That is called a fallacy of composition.

The behavior of "switching" isn't some ill-defined generic behavior that all the matter in the universe exhibits. It is well defined. The fact that neither you nor westprog has come up with a counterexample (yet) is indicative of this.
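Purely as an illustration (the actual definitions were given earlier in the thread and aren't quoted here, so this is my own hypothetical formalization, not theirs): a switch might be modeled as a system with discrete stable states plus an external operation that reliably moves it between them.

```python
# A hypothetical formalization of a "switch" (illustrative only):
# discrete stable states, plus an externally driven, reliable
# transition between them.
class Switch:
    STATES = ("off", "on")

    def __init__(self):
        self.state = "off"  # a switch rests in one definite state

    def toggle(self):
        # An external input deterministically flips the state.
        self.state = "on" if self.state == "off" else "off"

sw = Switch()
sw.toggle()
assert sw.state == "on"
```

On a definition like this, whether a rock, molecule, or atom "switches" turns on whether that thing itself has discrete stable states with a controllable transition, not merely on what its parts do.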
 
There's really not much point in defining what I call a switch. I don't think switches are anything very significant at all in the workings of the universe. I certainly don't think that the operation of switches - under any definition - is a plausible explanation for consciousness. However, if I'm to argue with Rocketdodger about it (and Pixy seems to agree on this point) then I have to argue with what he thinks. No point in arguing over what a switch really is.

Fair enough.

I think dodger's point is that when you have a few billion switches...
 
What I'm saying is that what one consciously experiences at any given time is made up of qualia. Qualia are the subjective correlates of internal and external stimuli. Together they make up the totality of our experience. For instance, the individual tastes, smells, sounds, sensations, emotions, and thoughts reduce to qualia; together they make up your collective experience at any given moment.

But it seems to me, from your description, that they do not represent anything real and are rather labels for sets of private behaviors. In other words, "qualia" is a useless term because we already have other terms to describe this.

It seems that the only basis for your objection to the term is that it sounds too 'soul-like' to you. If you have a more logical reason as to why you believe the term isn't suitable, please share it, 'cause I've yet to hear it.

I've already explained that my own consciousness doesn't feel, to me, that clear-cut. It very often looks fuzzy and unfocused. I'm having trouble finding the exact words to express what I mean in English, mind you. Silly language! :P

That isn't what I'm saying. My point is that emergent properties are collective properties of a system that do not exist [or have no meaning] below a certain reductive level of organization.

Gotcha.

There's no need for you to try so hard to find out my position when I've already explicitly and repeatedly stated it. I'll tell you one last time. I believe that the 'mind' and 'consciousness' can be understood just like any other phenomenon. It's just that, currently, we simply lack sufficient scientific understanding to realistically model or reproduce it.

My problem is that what you claim is your position and what you are arguing otherwise in this thread seem different. I get the impression that you're giving consciousness a special quality that I find unjustified.

I asked you what a quale is. You said it is what constitutes experience, and then you say we can experience qualia.

Okay, so what's the problem?

Well, it's turtles all the way down. Qualia are supposed to be the basic constituents of experience, and yet you can experience them. It's like saying that letters compose words but that letters are composed of letters, too.

That's exactly my point. Once humans have that knowledge we'll be able to seriously devise ways to synthetically create it.

That's funny, I thought you meant that computers were not aware of their own code and therefore were not really self-aware.

Why do you claim that you are aware of it after rather than when?

Because recent neurological studies have shown this.

Us being conscious is the reason why our language has words that attempt to describe it.

Perhaps. Or perhaps us thinking we're conscious is the reason why our language has such words in it. Or maybe we're just inferring our own state of consciousness based on our observation of others, as Mercutio suggests. In fact, we usually don't get anywhere in those terms until we reach a certain age, even if we already know the words.

I'd insult less if individuals would stop being deliberately obtuse.

Since you are not a mind-reader I would suggest you should be more careful about what you think other people think.

I never claimed that unconscious processes are self aware. I said that conscious processes can be self aware, and that such self-awareness is what we call introspection.

I think I've lost the thread of that particular aspect of our discussion. :boggled:
 
Perhaps it has none.

I'm still waiting, by the way: why do you consider consciousness to be algorithmic, and what does the term mean to you in this context?

I don't - but that seems to be the position of Strong AI. It is the performance of an algorithm that produces consciousness. But if you don't think so, let me know.
 
What about a closed circuit video feed of a dog running through a forest?

Oh, and you are wrong -- when one says a simulated person ran through a forest, they don't mean it just created the impression of running in an observer, they mean the mechanics of the relations between the simulated components are the same as the ones in a real running person.

Of course they don't. The image might be a few lines of charcoal. Where are the relationships there? It's a relationship between the image and previously catalogued images in the viewer.
 
Of course they don't. The image might be a few lines of charcoal. Where are the relationships there? It's a relationship between the image and previously catalogued images in the viewer.

The image isn't the simulation.

The image is just a way to observe the simulation.

The relationships exist in the simulation, not in the image.

And you still haven't stated the difference between a video of something running and a video of something in a simulation running.
 
The image isn't the simulation.

The image is just a way to observe the simulation.

The relationships exist in the simulation, not in the image.

What relationships exist in the simulation have very little to do with whether we call what we see "running". There might be a strong network of relationships - there might be none.

And you still haven't stated the difference between a video of something running and a video of something in a simulation running.

That's entirely a matter of how the viewer perceives it. If he thinks that it's a simulation, he will interact with it quite differently. He's quite unlikely to use a remote-control machine gun to kill a real person via a video link, for example.
 
I don't - but that seems to be the position of Strong AI. It is the performance of an algorithm that produces consciousness. But if you don't think so, let me know.

Would you PLEASE answer my question?

I'm still waiting, by the way: why do you consider consciousness to be algorithmic, and what does the term mean to you in this context?
 
Would you PLEASE answer my question?

I'm still waiting, by the way: why do you consider consciousness to be algorithmic, and what does the term mean to you in this context?

I don't consider consciousness to be algorithmic. That appears to be the logical implication of consciousness being part of a computer program - that is, associated with the execution of an algorithm. That is the viewpoint which I am opposing, or at least doubting.

It's possible that Rocketdodger thinks of it in a more physical way. He seems to think it's a matter of switches changing around. Pixy seems to think it's associated with the software. I welcome any clarifications.
 
We use words in quite sophisticated ways. When we say "The picture was a man, running" we know that what it really is, is something entirely static. What we mean is that the image creates an impression of running for the viewer. It doesn't say anything about the intrinsic properties of the object in relation to its environment, or quasi-environment. When we say "Hey Joe, come over here and see the pretty doggy running through the forest" we are implying several real objects and all the associated relationships. If we mention Mike's computer, we instantly realise that we are describing a potentially shared experience - which is how we communicate consciousness. The only relationship that matters is between the image and the people who view it - and more profoundly, the shared qualia.

Sure we use language in many ways. When we say "see the picture of the man running" it is shorthand for "see this representation of a man caught in the act of running" -- we don't mean that there is a man running or that the picture is running. We do mean that we can tell that it represents a man running because we see a particular relation of his body parts in the right context so that we can determine that it is a picture of a man running. The whole idea of running is "that relation of body parts that produces translational motion -- motion of a body with respect to its environment."

Is there some definition of "running" that excludes the simulated dog running through the simulated forest that is not circular, excluding the running dog because you have defined 'running' as pertaining only to things in the 'real world'? I can't think of any. That's just the way verbs are.
 
I don't consider consciousness to be algorithmic. That appears to be the logical implication of consciousness being part of a computer program - that is, associated with the execution of an algorithm. That is the viewpoint which I am opposing, or at least doubting.

Firstly, you asked me why it WOULDN'T be algorithmic. And secondly, you STILL haven't answered my question.
 
Is there some definition of "running" that excludes the simulated dog running through the simulated forest that is not circular, excluding the running dog because you have defined 'running' as pertaining only to things in the 'real world'?
Okay, hopefully so we can get past this point (this thread is crawling, pun intended)... we can come up with such a definition that's not circular. It's easy. Make the definition referential. Let's call this concept "really running".

Then the simulated dog is not "really running". A real dog, having been recorded, "really ran". If we watch a videotape, we can see that the dog "really ran", unless the videotape did not actually record something from the real world--that is, unless it was... let's use the technical term "faked" to refer to that.

I'm not so sure what the problem is with referential definitions, but they are definitely not circular, and I wish people would stop confusing the two.

Regardless, concerning the simulation of running versus "real running", if you simulate running in all relevant ways, you have solved the problem of running. However, it's difficult to know if you really have simulated things in all relevant ways, so it's quite fair to demand you put your money where your mouth is. In this case, the way to do that is to build an actual robot that really runs.

But we've done it! So what are we talking about?
 
Okay, hopefully so we can get past this point (this thread is crawling, pun intended)... we can come up with such a definition that's not circular. It's easy. Make the definition referential. Let's call this concept "really running".

Then the simulated dog is not "really running". A real dog, having been recorded, "really ran". If we watch a videotape, we can see that the dog "really ran", unless the videotape did not actually record something from the real world--that is, unless it was... let's use the technical term "faked" to refer to that.

I'm not so sure what the problem is with referential definitions, but they are definitely not circular, and I wish people would stop confusing the two.

Regardless, concerning the simulation of running versus "real running", if you simulate running in all relevant ways, you have solved the problem of running. However, it's difficult to know if you really have simulated things in all relevant ways, so it's quite fair to demand you put your money where your mouth is. In this case, the way to do that is to build an actual robot that really runs.

But we've done it! So what are we talking about?

The issue isn't whether running in a simulation produced with components from our own frame is identical to running in our own frame. The issue is whether running should be defined so that the frame is relevant or not.

Westprog has explicitly stated that he thinks it should be relevant. He stated that if we are living in a simulation right now, then nothing is real. In other words, we might run, or we might not, depending on the nature of our universe. Another absurd implication of this is that the definition of running necessarily includes references to a whole slew of stuff nobody would ever imagine as part of the definition.

I, on the other hand, think it should not be. Whether we are living in a simulation or not is irrelevant -- running is running.

So what you call "really running" is just "the relations that constitute running, within the same frame as us." But no matter how much you qualify the definition, that core -- "the relations that constitute running" -- remains. So what should that core be called? Should we call it "almost running, just not in the right frame?" I think that is absurd.

What is even more absurd is the notion that these relations are only "running" in a single frame, among all possible frames. According to westprog, the top level frame.
 
What relationships exist in the simulation have very little to do with whether we call what we see "running". There might be a strong network of relationships - there might be none.

The same can be said for stuff in our reality.

And this has nothing to do with the issue.

The issue is whether behavior, in two different frames, can be mathematically the same.

It can. You haven't shown otherwise.

I don't care what people call it, or how people perceive it. Relationships between entities exist independent of people. If you want to call a guy running in a forest something different because it isn't in the same frame as you, that is your prerogative. But that has nothing to do with the underlying relationships between the man and the forest, which can be mathematically identical between reality and simulation.
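A minimal sketch of that claim (hypothetical names and deliberately trivial dynamics, for illustration only): the rule encoding the relation between runner and ground makes no reference to which frame evaluates it, so two frames applying the same rule yield identical relations.

```python
# A toy relation between a runner's position and time. Nothing in
# the rule refers to which "frame" (reality or simulation) runs it.
def step(position, speed, dt):
    return position + speed * dt

# Frame A ("reality") and Frame B ("a simulation") apply the same rule.
frame_a, frame_b = [0.0], [0.0]
for _ in range(10):
    frame_a.append(step(frame_a[-1], speed=5.0, dt=0.1))
    frame_b.append(step(frame_b[-1], speed=5.0, dt=0.1))

# The resulting relations are mathematically identical across frames.
assert frame_a == frame_b
```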

That's entirely a matter of how the viewer perceives it. If he thinks that it's a simulation, he will interact with it quite differently. He's quite unlikely to use a remote-control machine gun to kill a real person via a video link, for example.

Hmmmm, how interesting.

It would seem to me, given your answer here, that you are suggesting reality vs. simulation is observer-dependent.

I find that quite funny, since your stance from the beginning of this thread has been that reality vs. simulation is an absolute.
 
