The Hard Problem of Gravity

..uh, why?

How does my own feel imply the existence of feel in other entities?

It proves that "feel" exists. You could assume that a physical process created "feel" in you, and not in anyone else, but that still leaves you needing to explain at least one instance of "feel".
 
Not really - since what we call "reality" is all "made up stuff" inside our heads.

That's what leads me to think that the "made up stuff" is of primary importance. If it doesn't have a physical explanation, then there is no physical explanation of anything.
 
Um, there is a difference between an action, which is a relation of parts, and an actual existent which comprises those parts.

Actions may only be defined in functional terms. Consciousness, whatever else you want to say about it, is an action and not a thing.

What is being simulated in a computer are the actual existents. When we simulate running, the "thing" in the simulation is running. We can't define the action -- running -- except in functional terms, as a relation of the parts of the body running and its translational motion (relative to the rest of the environment). That is what 'running' means. The same is true of any simulation of consciousness. If we simulate neuron action, the simulated neurons will not be actual neurons, but the action is real; it is still a relation of the simulated parts.
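
To put that in rough code terms (a toy illustration only, nothing like real neurons): the units below are plainly not neurons, but the interaction between them, one unit's firing feeding another's input, actually obtains among the simulated parts.

    class Unit:
        # A crude stand-in for a neuron: obviously not a neuron.
        def __init__(self, threshold):
            self.threshold = threshold
            self.charge = 0.0

        def receive(self, signal):
            # Accumulate input and report whether the unit "fires".
            self.charge += signal
            return self.charge >= self.threshold

    a = Unit(threshold=1.0)
    b = Unit(threshold=1.0)

    if a.receive(1.2):           # a fires...
        print(b.receive(1.5))    # ...and that firing drives b, which fires too (True)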

This isn't true of only consciousness. It is true of all actions. Consciousness is simply one type of action.

The examples you continue to cite involve the inability of simulations to produce things, actual existents. No one thinks that simulations can do such a thing. Actions, however, are not things.

But a simulation of running on a computer is in no way like running. To show this is true, compare the problem of programming a robot to run, and getting Lara to run. Getting a robot to run is really, really difficult. AFAIAA, it hasn't actually been done yet for bipedal motion. That's because the robot has to really implement the relationships between components that make up "running". A digital simulation, OTOH, does not have to cope with these relationships. The supposed interaction between the parts does not actually happen.
 
Sure, inertial velocity is an action -- it can only be represented in relational terms.

The easiest way to describe it would be to say it is that which is represented in our language by a verb (most of the time).

Say, for instance, that we want to simulate a person calculating. We create the simulation. Whatever we can say about the "person" in the simulation, the calculation still occurs because it can only be defined in terms of its functions, its action.

There is no "real" person in the simulation; but I don't know how we could say that there was no "real" calculation going on.
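
A toy illustration, nothing more, with arbitrary numbers: the "person" below is just a labelled data structure, but the addition it carries out satisfies any functional description of adding two numbers.

    class SimulatedPerson:
        # Just a labelled data structure; no claim that anyone is "in" it.
        def __init__(self, name):
            self.name = name

        def calculate(self, a, b):
            # Defined entirely by what it does: take two numbers, return their sum.
            return a + b

    alice = SimulatedPerson("Alice")
    print(alice.calculate(2, 3))   # 5: no real person, but a real calculation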

I would certainly deny that "real" calculation was going on. If real calculation goes on on a purely functional basis, then calculation takes place every time stones are washed up on a beach.
 
No, I am not. There is no way to know whether the difference between digital and analogue even matters.

Of course there is. Prepare lunch for two people. Give one of them actual food, and give the other one a digital simulation of food.

The test is really quite easy.

Let us assume our universe is a simulation. Can you determine if it is a digital or an analog simulation? No.

Let us assume our universe is not a simulation. Can you determine if physical properties are discrete or not? No.

Let us assume our universe is not a simulation and that physical properties are not discrete -- they are continuous. Can you determine if a change in a given property will result in a measurable behavior within the systems that property affects? Yes. This is all any entity can do.

So really, the only argument of yours that is even logically valid is that the phenomenon of consciousness requires precision that only an analog system can provide.

That is a decent argument, but the evidence we have about the way neurons work -- in particular, the fact that every molecule in the brain is in constant motion relative to all the others -- doesn't do much to support it.

The issue of digital and analogue is not a matter of precision. It's a matter of reality versus fiction. No actual physical process can be emulated digitally. None. This applies whether or not the universe is discrete at the Planck scale. Precision has nothing to do with it. It's possible to emulate all kinds of physical processes with very little precision; digitally, it's only possible to simulate them.

Yet, you consider both to be a consciousness. You just label one with extra words, "meta."

Kind of like how Pixy and I say certain programs are conscious, but not "human"-conscious.

Now, you could argue that the meta-conscious beings of the outer universe don't consider us to be conscious -- that there are probably rocketdodger and westprog analogs arguing about the same issue we are arguing about.

And my response to that -- we know we experience, plain and simple. Who are they to doubt us? If we create a silicon system complex enough to be able to assert the fact of its own experience in a non-trivial way, then who are we to doubt it?

The most important word in that post is "if". No program has asserted the fact of its own experience in a non-trivial way. When it does, I'd like to look at the code to see how it does it. Such a thing would be remarkable. However, I'm not going to speculate about something that isn't going to happen any time soon. Whether such a program would have anything useful to tell us about human consciousness we'd have to see at the time.
 
That's what leads me to think that the "made up stuff" is of primary importance. If it doesn't have a physical explanation, then there is no physical explanation of anything.

I didn't say it didn't have a physical explanation, just that it is not "reality" - it is a reflection of it. One physical system representing another physical system - or that, at least, is the "objective" of the brain. The representation doesn't have to be "true" - i.e. the physical system of the brain attempting to represent the physical system of reality at large doesn't have to do so perfectly. A perfect representation would imply that the systems were isomorphic - which is where the solipsist comes in.

Basically what it comes down to is this: understanding something means you have built a model of the thing that is representative of how it behaves. (And by corollary, failing to understand something means your model is not representative.) This is as true of silicon as it is of carbon.
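
A loose sketch of what I mean, with made-up figures: take "understanding" operationally, as having a model whose predictions roughly track how the thing behaves, representative rather than isomorphic.

    def real_fall(t):
        # Stands in for "reality at large": distance fallen after t seconds.
        return 0.5 * 9.81 * t * t

    def my_model(t):
        # The brain's representation: close enough to be representative,
        # not a perfect (isomorphic) copy.
        return 0.5 * 9.8 * t * t

    for t in (1.0, 2.0, 3.0):
        print(t, real_fall(t), my_model(t))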
 
But a simulation of running on a computer is in no way like running. To show this is true, compare the problem of programming a robot to run, and getting Lara to run. Getting a robot to run is really, really difficult. AFAIAA, it hasn't actually been done yet for bipedal motion. That's because the robot has to really implement the relationships between components that make up "running". A digital simulation, OTOH, does not have to cope with these relationships. The supposed interaction between the parts does not actually happen.
Wrong.
 
It proves that "feel" exists. You could assume that a physical process created "feel" in you, and not in anyone else, but that still leaves you needing to explain at least one instance of "feel".

What's really going to bug you, now, is that "feel" is just self-referential information processing.
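
As a very rough sketch (an illustration I am making up here, not a rigorous definition): a process whose input includes a representation of its own previous processing.

    class SelfMonitor:
        # A toy loop that feeds a description of its own last step
        # back into its next step.
        def __init__(self):
            self.last_report = None

        def process(self, stimulus):
            perception = f"perceived {stimulus}"
            report = (f"this system just {perception}; "
                      f"its previous report was: {self.last_report}")
            self.last_report = report
            return report

    m = SelfMonitor()
    print(m.process("blue"))
    print(m.process("red"))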
 
It proves that "feel" exists. You could assume that a physical process created "feel" in you, and not in anyone else, but that still leaves you needing to explain at least one instance of "feel".

lol.

Let me repeat the question, since you failed to answer it.

How does my own feel imply the existence of feel in other entities?
 
But a simulation of running on a computer is in no way like running. To show this is true, compare the problem of programming a robot to run, and getting Lara to run. Getting a robot to run is really, really difficult. AFAIAA, it hasn't actually been done yet for bipedal motion. That's because the robot has to really implement the relationships between components that make up "running". A digital simulation, OTOH, does not have to cope with these relationships. The supposed interaction between the parts does not actually happen.

Allow me to elaborate on Pixy's "wrong:" you are wrong about everything it is possible to be wrong about.

First, digital simulations have to cope with whatever the rules of the simulation specify:

http://www.youtube.com/watch?v=hOvq3-oG5BM

Second, bipedal robots can run:

http://www.youtube.com/watch?v=Q3C5sc8b3xM

Third, just to get you up to speed on the state of simulation + AI technology (so you at least have a chance to produce a good argument):

http://www.youtube.com/watch?v=87qdmuOesRs&feature=related
 
What do you mean by 'experience'? Generally most people mean that something happens and they feel something along with the occurrence (perception of blue and the feeling of perceiving blue). Do you mean something else by it? It's generally considered that the perception part, while hard, is not unsolvable. It is the feeling part that is supposed to be so hard.

Is it such a hard problem because we don't have proper definitions? That certainly seems to be the case. Anything is hard if you can't talk about it properly.

I don't think there's anything wrong with the vocabulary we have. The problem is in finding sharp definitions. If we had the sharp definitions, we'd be a lot further along. However, I don't see any point in using precise definitions that don't apply.
 
What's really going to bug you, now, is that "feel" is just self-referential information processing.

It's strange - that's been used since the start of the thread - and indeed, probably for years before that - and I've yet to see any kind of rigorous definition of what "self-referential information processing" actually is.
 
It's strange - that's been used since the start of the thread - and indeed, probably for years before that - and I've yet to see any kind of rigorous definition of what "self-referential information processing" actually is.

That's because you haven't been reading the thread although you've been participating in it.
 
That's because you haven't been reading the thread although you've been participating in it.

Evasion noted. You could perfectly well have posted a link to where "self-referential information processing" was unequivocally defined. You could have just reposted the definition.

The problem with the Strong AI idea of information processing is that it takes an arbitrary subset of the physical concept of information, and then uses handwaving to justify the restriction.
 
The issue of digital and analogue is not a matter of precision. It's a matter of reality versus fiction. No actual physical process can be emulated digitally. None. This applies whether or not the universe is discrete at the Planck scale. Precision has nothing to do with it. It's possible to emulate all kinds of physical processes with very little precision; digitally, it's only possible to simulate them.

I don't think you know what you are talking about.

An emulation is nothing more than a simulation at the same level as the entities which interact with it, and to those entities the emulation is the real thing.
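
A throwaway sketch, using an invented toy machine rather than any real architecture: the "program" below can only interact with the machine through its instructions, and at that level the emulated machine is the machine.

    def run(program):
        # A tiny made-up instruction set, emulated in Python.
        acc = 0                       # single accumulator register
        for op, arg in program:
            if op == "LOAD":
                acc = arg
            elif op == "ADD":
                acc += arg
            elif op == "PRINT":
                print(acc)
        return acc

    # From inside, there is no instruction that asks whether the
    # accumulator is "real" hardware or a Python integer.
    run([("LOAD", 2), ("ADD", 3), ("PRINT", None)])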

Suppose we hook all your sensory neurons up to a machine that feeds them input. They are no longer exposed to the real world.

Now, in this new state, you see a car. You get in the car. You can feel it, you can smell it, everything.

Is the car a simulation? An emulation? Real?

The most important word in that post is "if". No program has asserted the fact of its own experience in a non-trivial way. When it does, I'd like to look at the code to see how it does it. Such a thing would be remarkable. However, I'm not going to speculate about something that isn't going to happen any time soon. Whether such a program would have anything useful to tell us about human consciousness we'd have to see at the time.

I am pretty sure they have, just not in English. It is clear to anyone reading this thread that you are 30 years behind the times when it comes to computer science.

Is this on purpose? I would have thought that someone genuinely interested in a subject at least picks up the newspaper now and then.
 
Yes, that's why I said "AFAIAA". Which stands for, IYANA, As Far As I Am Aware. The Asimo has achieved the astonishing speed of 6 kph. Usain Bolt has been estimated to reach nearly 50 kph. So do I think you've invalidated what I said with your shuffling biped? No, not really.

You sound just like a theist.

Of course, when robots run twice as fast as humans, people like you are going to say "well, they run too perfectly -- look at the way Usain Bolt runs -- so no, you haven't invalidated what I said."

What a joke.
 
The problem with the Strong AI idea of information processing is that it takes an arbitrary subset of the physical concept of information, and then uses handwaving to justify the restriction.

No it doesn't.

You already argued with me about that. You lost. But I would be happy to go over it again.

Go ahead and specify what you think this "subset" is and "handwaving" is.
 
But a simulation of running on a computer is in no way like running. To show this is true, compare the problem of programming a robot to run, and getting Lara to run. Getting a robot to run is really, really difficult. AFAIAA, it hasn't actually been done yet for bipedal motion. That's because the robot has to really implement the relationships between components that make up "running". A digital simulation, OTOH, does not have to cope with these relationships. The supposed interaction between the parts does not actually happen.

What? No. You're mistaking levels of complexity for something else. A digital simulation copes with exactly as many relationships as you build it to cope with. A walking simulation made of some virtual bones and pivots and hand-keyed animation won't be enough to let a real robot work. But a walking simulation made of some virtual motors and assorted robot bits, if it's sufficiently complex to model a real robot well and if the physics are good, models reality just fine, and is what they use to help solve actual robotics locomotion problems. Whenever a digital model of a robot is not a precise analog for what happens in the real world, it's because the model of the robot or the model of the physics is not precise.
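
To make that concrete (a deliberately crude sketch with invented numbers): model a single robot joint as a motor-driven pendulum, and the simulation copes with exactly the relationships written into the update step; here that means gravity, inertia and motor torque, and nothing else.

    import math

    # Crude single-joint model: a motor-driven pendulum, integrated with
    # a simple Euler step. All values are invented for illustration.
    g, length, mass, dt = 9.81, 0.5, 2.0, 0.001
    theta, omega = 0.3, 0.0          # joint angle (rad) and angular velocity

    def motor_torque(theta, omega):
        # Toy controller nudging the joint back toward theta = 0.
        return -8.0 * theta - 1.0 * omega

    for _ in range(2000):            # two seconds of simulated time
        torque = motor_torque(theta, omega)
        # Gravity and the motor act on the joint because we modelled them;
        # leave either out and the simulation simply won't "cope" with it.
        alpha = (torque - mass * g * length * math.sin(theta)) / (mass * length ** 2)
        omega += alpha * dt
        theta += omega * dt

    print(round(theta, 4))           # small if the toy controller has settled the joint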
 
