The Hard Problem of Gravity

But we've done it! So what are we talking about?


It isn't really about running, but about consciousness. Westprog's argument is that something will necessarily be left out of the picture with any simulation, while the rest of us are arguing that verbs are defined relationally, so a perfect simulation of running is running. The "thing" running isn't a part of the real world, but so what?

The same is true of consciousness. Any simulation of it that is accurate in all respects solves the problem of consciousness; if we can do it with a computer then we would have explained it.
 
I don't consider consciousness to be algorithmic. That appears to be the logical implication of consciousness being part of a computer program - that is, associated with the execution of an algorithm. That is the viewpoint which I am opposing, or at least doubting.

It's possible that Rocketdodger thinks of it in a more physical way. He seems to think it's a matter of switches changing around. Pixy seems to think it's associated with the software. I welcome any clarifications.

From wikipedia, "an algorithm is a finite sequence of instructions, an explicit, step-by-step procedure for solving a problem."

What this means is that if you arbitrarily label a physical process (which is redundant, since all processes are physical) as "solving a problem" then it becomes an algorithm.

Hence, if consciousness is "solving a problem" -- and it is, the problem being "how to create consciousness" -- and if the underlying process is finite -- and we know it is, since clearly we are conscious right now -- then consciousness is algorithmic.

The only reason one would say consciousness is not algorithmic is if one thought an unknown physical process that could not be described mathematically -- in particular, one that somehow embodied the infinite without being infinite -- was necessary for consciousness.
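As a minimal illustration of the Wikipedia definition quoted above -- a finite, explicit, step-by-step procedure for solving a problem -- here is the classic textbook example, Euclid's method for the greatest common divisor (the example is mine, not something from the thread):

```python
def gcd(a: int, b: int) -> int:
    """Return the greatest common divisor of a and b."""
    while b != 0:          # each pass is one explicit step
        a, b = b, a % b    # replace the pair with a smaller equivalent pair
    return a               # the remainders strictly shrink, so this is finite

print(gcd(48, 18))  # -> 6
```

Every property in the quoted definition is visible here: the steps are explicit, the sequence is finite, and the procedure solves a stated problem.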
 
It isn't really about running, but about consciousness. Westprog's argument is that something will necessarily be left out of the picture with any simulation, while the rest of us are arguing that verbs are defined relationally, so a perfect simulation of running is running. The "thing" running isn't a part of the real world, but so what?
The "so what" part is fair to argue though. So long as you can draw a distinction, the two are not identical--so long as they aren't identical, there could be differences. Westprog's trying to refer to something that is hard to describe, that he thinks could be absent in the simulation (and it's what, I believe, you are calling feel). I think westprog goes too far when he asserts it's absent in any particular configuration of simulation though (how does he know?).
[Edit: to->too... after I was quoted too, but still]

Rocketdodger's assessment is different from what I imagined westprog was arguing... if he's arguing that point, then we're simply forced to conclude that nothing's real anyway--since essentially nothing is as it appears anyway, there's no meaningful sense to say that things are not being simulated. But that doesn't match my referential assessment, which would claim that things are real if... that dog, right there, is doing that stuff that he's doing (referential). (And I would stand by the referential part, though it's also perpetually useful to speak in relative terms so that we can speak of fantasy, the hypothetical, etc--to describe what Bugs Bunny does when Elmer grabs his gun).

Edit (inserting):
rocketdodger said:
The only reason one would say consciousness is not algorithmic is if one thought an unknown physical process that could not be described mathematically -- in particular, one that somehow embodied the infinite without being infinite -- was necessary for consciousness.
And that I believe to be ridiculous, if such a process is being held to be fundamental (and not all that interesting if it's incidental). Essentially this would be randomness (anything directed would be, insofar as it is directed, algorithmic). And nothing about consciousness is random (I don't merely see red--I see red when I look at red things--these are causal relations).

I've had long discussions with people in the past who were convinced there was a third category besides algorithmic and random, but I've yet to be sold on its feasibility.
 
From wikipedia, "an algorithm is a finite sequence of instructions, an explicit, step-by-step procedure for solving a problem."

What this means is that if you arbitrarily label a physical process (which is redundant, since all processes are physical) as "solving a problem" then it becomes an algorithm.

Hence, if consciousness is "solving a problem" -- and it is, the problem being "how to create consciousness" -- and if the underlying process is finite -- and we know it is, since clearly we are conscious right now -- then consciousness is algorithmic.

The only reason one would say consciousness is not algorithmic is if one thought an unknown physical process that could not be described mathematically -- in particular, one that somehow embodied the infinite without being infinite -- was necessary for consciousness.

Thank you, Dodger. If westprog had his way I would never have understood what he meant.
 
The "so what" part is fair to argue though. So long as you can draw a distinction, the two are not identical--so long as they aren't identical, there could be differences. Westprog's trying to refer to something that is hard to describe, that he thinks could be absent in the simulation (and it's what, I believe, you are calling feel). I think westprog goes too far when he asserts it's absent in any particular configuration of simulation though (how does he know?)


As far as I can tell no one is arguing that the two are identical (I certainly am not). I think we are all very clear on the fact that the frames differ, and that the actors differ. The issue concerns whether or not there is any actual difference in the actions. Your attempted definition refers only, as far as I can tell, to the actors -- one 'real' and one not -- while making no clear distinction in the action itself.

Since actions are defined relationally, if all the parts work in the proper relation, what is the difference in the action itself between the simulation and the 'real dog'? Saying one is right there doing it doesn't really address the issue as far as I can tell. The 'real' running and simulated running still consist in a particular relation of parts.

The reason is that verbs are not real 'things'. Referential issues concern the actors, not the action.


ETA:

Let me expand on this because there may be some confusion -- I covered this earlier with Westprog.

If we could simulate consciousness in all its particulars -- I mean really get it right -- just as we could simulate a dog running in all its particulars -- get gravity in there, muscles with tone, simulated blood vessels pumping blood, etc. -- then I maintain that we would have an explanation for these actions. Westprog argues that we would still be leaving something out so we still wouldn't explain running or consciousness.

I am certainly not arguing that a simulation is the real thing or that a comic book version of running is the same as running. What I am arguing is that if we get all the bits right then we've got the action down -- we understand it at a deep level.

Yes, knowing when we have all the particulars is a real bear -- that's why Turing came up with his test. But the theoretical point is that we can potentially do it. We just have to get the particulars down right.
 
But that doesn't match my referential assessment, which would claim that things are real if... that dog, right there, is doing that stuff that he's doing (referential). (And I would stand by the referential part, though it's also perpetually useful to speak in relative terms so that we can speak of fantasy, the hypothetical, etc--to describe what Bugs Bunny does when Elmer grabs his gun).

I have no problem with this.
 
As far as I can tell no one is arguing that the two are identical (I certainly am not). I think we are all very clear on the fact that the frames differ, and that the actors differ. The issue concerns whether or not there is any actual difference in the actions. Your attempted definition refers only, as far as I can tell, to the actors -- one 'real' and one not -- while making no clear distinction in the action itself.

Since actions are defined relationally, if all the parts work in the proper relation, what is the difference in the action itself between the simulation and the 'real dog'? Saying one is right there doing it doesn't really address the issue as far as I can tell. The 'real' running and simulated running still consist in a particular relation of parts.

The reason is that verbs are not real 'things'. Referential issues concern the actors, not the action.


ETA:

Let me expand on this because there may be some confusion -- I covered this earlier with Westprog.

If we could simulate consciousness in all its particulars -- I mean really get it right -- just as we could simulate a dog running in all its particulars -- get gravity in there, muscles with tone, simulated blood vessels pumping blood, etc. -- then I maintain that we would have an explanation for these actions. Westprog argues that we would still be leaving something out so we still wouldn't explain running or consciousness.

This is the first time that the word "explain" has entered this particular discussion, unless I've missed something.

I certainly agree that it's possible to use simulations to gain understanding of all kinds of phenomena. Indeed, for certain things, a simulation is a better way to gain understanding, because it does discard information.

I am certainly not arguing that a simulation is the real thing or that a comic book version of running is the same as running. What I am arguing is that if we get all the bits right then we've got the action down -- we understand it at a deep level.

The problem with simulating consciousness is knowing which bits aren't important. It's not obvious even with something as simple as running. Ideally, a simulation should discard as much as possible, but not more. But what is critical about it?

We don't even know if consciousness is something which can be simulated.

Yes, knowing when we have all the particulars is a real bear -- that's why Turing came up with his test. But the theoretical point is that we can potentially do it. We just have to get the particulars down right.
 
This is the first time that the word "explain" has entered this particular discussion, unless I've missed something.

I certainly agree that it's possible to use simulations to gain understanding of all kinds of phenomena. Indeed, for certain things, a simulation is a better way to gain understanding, because it does discard information.



The problem with simulating consciousness is knowing which bits aren't important. It's not obvious even with something as simple as running. Ideally, a simulation should discard as much as possible, but not more. But what is critical about it?

We don't even know if consciousness is something which can be simulated.


I looked back and you're right that I didn't mention this as primarily an explanation to you. It was in post 1846, I think (I'll have to look back again) in a reply to yy2bgggs.

I think I did mention to you that the explanation is what we are all after -- explaining consciousness. If we can simulate it properly, then we can explain it.

Whether or not we can actually simulate it is one of the big questions, yes. I think we can but haven't gotten there yet (depending on one's definition of consciousness), but I don't see any theoretical reason why we cannot. We've only just started down this road.

As to what is critical about it -- I thought most had agreed that it was the feeling of things happening. That's why I spent some time trying to piece out what I think that word -- "feelings" -- means. If we can't get close to a definition, then we could never recreate it in a simulation.
 
Hmmmm, how interesting.

It would seem to me, given your answer here, that you suggest reality vs. simulation is observer dependent.

I find that quite funny, since your stance from the beginning of this thread has been that reality vs. simulation is an absolute.

The fact that an observer realises that a simulation is not reality does not mean that it's observer dependent. Real running takes place even if unobserved. Simulated running is meaningless unobserved. Apples and concrete.
 
The fact that an observer realises that a simulation is not reality does not mean that it's observer dependent. Real running takes place even if unobserved. Simulated running is meaningless unobserved. Apples and concrete.

Except that according to you, running is only real if the running is being done by an agent that is able to observe.

CIRCLES ARE FUN!
 
The fact that an observer realises that a simulation is not reality does not mean that it's observer dependent. Real running takes place even if unobserved. Simulated running is meaningless unobserved. Apples and concrete.

So if we are in a simulation right now, it is meaningless to run from a bear if there are no observers outside the simulation monitoring it?
 
But it seems to me, from your description, that they do not represent anything real and are rather labels for sets of private behaviors. In other words "qualia" is a useless term because we already have other terms to describe this.

Well, whether or not one wishes to call them 'private behaviors' they are definitely real in the sense of being actual phenomena. Our private behaviors have physical consequences, such as affecting external behavioral responses [like motion and speaking] or our general physiology [such as stress responses and immunity]. How we perceive the world has a direct effect on how we interact with it. I think it would be wise to consider mental phenomena to be just as real as any other biological process.

I've already explained that my own consciousness doesn't feel, to me, that clear-cut. It very often looks fuzzy and unfocused. I'm having trouble finding the exact words to express what I mean in English, mind you. Silly language! :P

You speak English so well I never would have thought that you weren't a native speaker :)

My problem is that what you claim is your position and what you are arguing otherwise in this thread seem different. I get the impression that you're giving consciousness a special quality that I find unjustified.

I suppose I can understand how you would get that impression. I think it is special in the sense that it is a distinct biological process that we don't fully understand yet. However, I do not think that it is supernatural or beyond scientific understanding.

Well, it's turtles about turtles. Qualia are supposed to be the basic constituents of experience, and yet you can experience them. It's like saying that letters compose words but that letters are composed of letters, too.

Well, the whole idea of qualia is that they are the subjective impressions of sensory information, either internal or external. When we reflect on our own thoughts, sensations, and emotions it necessarily creates self-referential impressions of them. To perceive a sensation as color/sound/taste/etc. is qualia; to think about one's own sensations itself generates other qualitative impressions.

Even though we are not directly aware of the unconscious processes that generate our qualia, we can be aware of the qualia themselves because they are, by definition, what we consciously perceive. I'm not sure how I could clarify it any more than that :-/

AkuManiMani said:
That's exactly my point. Once humans have that knowledge we'll be able to seriously devise ways to synthetically create it.

That's funny, I thought you meant that computers were not aware of their own code and therefore were not really self-aware.

My point is that we have no convincing evidence that current artificial computers are aware. Living brains are a proof of principle that computers CAN be created which are conscious. Once we know exactly what it is about our physiology that creates consciousness, and what it is exactly, we'll have the scientific know-how to create conscious computers artificially.


AkuManiMani said:
Why do you claim that you are aware of it after rather than when?

Because recent neurological studies have shown this.

Ah, okay. I think I know what study you're referring to. It's the one in which subjects were asked to record when they consciously chose to take a particular action; afterwards, it was found that there was a spike in brain activity immediately before the time subjects reported consciously choosing to take action. I don't really think that study necessarily refutes conscious choice. If anything, it appears to demonstrate that we must build initiative before executing our conscious choices.

Of course, if this is not the study you're referring to please feel free to correct me :)


Perhaps. Or perhaps us thinking we're conscious is the reason why our language has such words in it. Or maybe we're just inferring our own state of consciousness based on our observation of others, as Mercutio suggests. In fact, we usually don't get anywhere in those terms until we reach a certain age, even if we already know the words.

That position doesn't really make much sense to me. It seems to imply that we're not really 'conscious' until we're taught to be. Even if I must infer whether other people are conscious, I directly observe that I experience certain stimuli as 'red', 'loud', 'sweet', etc. I even have memories of my childhood from before I acquired fully developed language. I experienced emotions, sensations, and thoughts even before I acquired words to describe them.

AkuManiMani said:
I'd insult less if individuals would stop being deliberately obtuse.

Since you are not a mind-reader I would suggest you should be more careful about what you think other people think.

That may be true, but when I get comments from apparently intelligent people repeatedly making false statements about what has and has not been said I'm forced to assume that they're being deliberately dishonest :-/


AkuManiMani said:
I never claimed that unconscious processes are self aware. I said that conscious processes can be self aware, and that such self-awareness is what we call introspection.

I think I've lost the thread of that particular aspect of our discussion. :boggled:

Meh >_<

I didn't mean to confuse. I guess the simplest way to explain what I mean is to just say that we can't be self-aware unless we are conscious to begin with :D
 
AkuManiMani said:
There are quantum computers that depend on the inherent 'switching' capacities of matter. Are you arguing that such systems are not computational..?

That isn't an answer, and even if it was it isn't correct.

Quantum computers aren't "quantum" because they use mere atoms as switches, they are "quantum" because their memory can assume a superposition of states rather than a single state at once.

That's true; they are 'quantum' because they can have a superposition of switch states. The point is that they -do- switch and that -every- physical interaction is the product of logical ops. In other words, physics is computation.

Note that I already know single atoms switch. It isn't that hard to figure out ways atoms can switch. In fact, it can be shown mathematically that since multi-atom switches are made of nothing but atoms, at least one atom in the collection must itself switch.

The converse is not true -- you can't say a rock switches just because the molecules that make it up can switch. You can't say a molecule switches just because the atoms that make it up switch. You can't say an atom switches just because the particles that make it up can switch. That is called a fallacy of composition.
The behavior of "switching" isn't some ill-defined generic behavior that all the matter in the universe exhibits. It is well defined. The fact that neither you nor westprog have come up with a counterexample (yet) is indicative of this.

A rock itself is not a 'switch' in the sense that you've defined it, but its properties, structure, composition, and interactions with other objects are the products of its constituents, and its environment, switching. The reason why any physical process can be mathematically modeled/simulated, and physical objects can be used as switches at all, is because all physical processes are themselves computational.
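The superposition point made above can be made concrete with a two-amplitude toy model (this sketch is my own illustration of standard single-qubit notation, not anything claimed in the thread): a qubit's state is a pair of complex amplitudes, and a gate acts on that pair as a matrix. A classical bit switches between 0 and 1; a qubit can occupy a weighted blend of both.

```python
import math

def hadamard(state):
    """Apply the Hadamard gate to a single-qubit state (a0, a1)."""
    a0, a1 = state
    s = 1 / math.sqrt(2)
    return (s * (a0 + a1), s * (a0 - a1))

ket0 = (1.0, 0.0)       # the definite state |0>
sup = hadamard(ket0)    # an equal superposition of |0> and |1>

# Measurement probabilities are the squared magnitudes of the amplitudes:
probs = [abs(a) ** 2 for a in sup]
print(probs)            # approximately 0.5 for each outcome
```

The "switching" in the thread's sense still happens on measurement; what superposition adds is the ability to hold both switch states at once, weighted by amplitudes.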
 
Dreams...reality...

Real life is quite different from imagined things. Learning that films and books and computer games aren't real is something most of us learn quite young.

No, the relationship between a running man and his environment, and a simulated running man and his environment are not the same. They have very little in common.

I think we may differ slightly on this issue. It is possible to reproduce a process on another medium, provided the essential characteristics of that process are accurately translated onto that medium.

Running is a kind of motion through an environment of some type. A virtual entity 'running' through a simulated environment is not identical to a human running through actual space-time, but they are analogous in an operational sense.

As far as the question of consciousness goes, I agree that since it has not been physically defined we cannot go about trying to simulate it. However, once it's physically defined we can not only determine what media would be suitable to simulate it, but even reproduce it.
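The "analogous in an operational sense" idea can be sketched in a few lines (a toy model of my own, not anything from the thread): if "running" is defined relationally as an entity repeatedly advancing through some environment, the same relation can hold in a simulated medium.

```python
class Runner:
    """An entity that advances through a (simulated) environment."""

    def __init__(self, position=0.0, stride=1.5):
        self.position = position
        self.stride = stride  # distance advanced per step, arbitrary units

    def step(self):
        """Take one stride: the relational core of 'running'."""
        self.position += self.stride

runner = Runner()
for _ in range(10):
    runner.step()
print(runner.position)  # -> 15.0
```

The actor here is obviously not a dog, but the action -- repeated advancement relative to an environment -- is the same relation the thread is arguing about.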
 
That's true; they are 'quantum' because they can have a superposition of switch states. The point is that they -do- switch and that -every- physical interaction is the product of logical ops. In other words, physics is computation.

If you consider all of physics computation, then the term "computation" ceases to be useful.

What, then, should we call the difference between what was previously labeled computation and what was not?

A rock itself is not a 'switch' in the sense that you've defined it, but its properties, structure, composition, and interactions with other objects are the products of its constituents, and its environment, switching. The reason why any physical process can be mathematically modeled/simulated, and physical objects can be used as switches at all, is because all physical processes are themselves computational.

What, then, is the difference between a calculator and a puddle?

You and westprog just don't "get it." All there is are particles. Period. The only differences are in how the particles behave. We label only certain behaviors as computation. All computation is a behavior, but not all behavior is computation.

You guys can play this word game all you want -- it doesn't change anything. Systems of particles behave differently than other systems of particles. Even if you say they don't, they still do. The behavior of a transistor is different from the behavior of a rock, and nothing you say will ever change that.
 
