The Hard Problem of Gravity

You sound just like a theist.

Of course, when robots run twice as fast as humans, people like you are going to say "well, they run too perfectly -- look at the way Usain Bolt runs -- so no, you haven't invalidated what I said."

What a joke.

I'm sure that the relatively easy task of making a two-legged robot run properly will eventually be achieved. The point I was making was that simulating running is vastly simpler than emulating running. It's also a totally different thing. People have been simulating running since they drew on cave walls. Naturally this point has to be explained over and over, and will continue to be missed, misconstrued and ignored.

Just where theism comes into the argument, I'm not sure.
 
What? No. You're mistaking levels of complexity for something else. A digital simulation copes with exactly as many relationships as you build it to cope with. A walking simulation made of some virtual bones and pivots and hand-keyed animation won't be enough to let a real robot work. But a walking simulation made of some virtual motors and assorted robot bits, if it's sufficiently complex to model a real robot well and if the physics are good, models reality just fine, and is what they use to help solve actual robotics locomotion problems. Whenever a digital model of a robot is not a precise analog for what happens in the real world, it's because the model of the robot or the model of the physics is not precise.
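
To make that concrete, here's a toy version of the kind of model I mean - the linear inverted pendulum that locomotion researchers actually use as a first approximation of a stance leg. Every constant and function name here is mine, invented for illustration; the point is only that whether the result transfers to a real robot depends on the fidelity of the physics, not on the medium.

Code:
# Linear inverted pendulum: the centre of mass balances over the foot,
# so its horizontal motion obeys x'' = (g / z) * x.
g = 9.81   # gravity, m/s^2
z = 0.9    # assumed constant hip height, m

def simulate_step(x0, v0, dt=0.001, t_max=0.5):
    """Integrate one stance phase with explicit Euler steps.
    x is the horizontal offset of the hips from the foot."""
    x, v, t = x0, v0, 0.0
    while t < t_max:
        a = (g / z) * x   # the model's only physics
        v += a * dt
        x += v * dt
        t += dt
    return x, v

# Start 5 cm behind the foot, moving forward at 0.4 m/s.
x, v = simulate_step(-0.05, 0.4)
print(f"end of step: x = {x:.3f} m, v = {v:.3f} m/s")

If the robot's mass distribution or the ground contact isn't captured by the model, the predictions will be off - which is exactly the "model of the robot or model of the physics" caveat above.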

But the problem, even with something as simple as bipedal motion, is to produce a simulation which includes all the relevant factors. Even a really simple physical system depends on many factors, and sometimes it isn't even clear which ones are relevant. I was talking to an engineer yesterday about the effect of the testing ban on Formula One teams this season. It will greatly restrict their capacity to modify the cars. They can run simulations all they want, but there is always something missing.

But even if the simulation is a really good one, it is still a simulation. What goes on in the computer is not running. What the clumsy robot does so poorly is.
 
Evasion noted. You could perfectly well have posted a link to where "self-referential information processing" was unequivocally defined. You could have just reposted the definition.

The problem with the Strong AI idea of information processing is that it takes an arbitrary subset of the physical concept of information, and then uses handwaving to justify the restriction.

I think it's important here to distinguish between what is "normal" Strong AI, and Pixy's assertions regarding "self-referencing information processing."

AFAIA, no one other than Pixy maintains that consciousness actually is self-referencing information processing, likely for the simple reason that many aspects of consciousness quite obviously aren't. Sensory information isn't. That which enters awareness may be directed to be there by a self-referencing system - a cortico-thalamic loop or similar - but the actual phenomenal awareness itself is not innately self-referencing.

Nick
 
People have been simulating running since they drew on cave walls

"Simulate" is not a synonym of "not simulate".

A drawing is a representation, not a simulation.
 
I'm sure that the relatively easy task of making a two-legged robot run properly will eventually be achieved. The point I was making was that simulating running is vastly simpler than emulating running. It's also a totally different thing. People have been simulating running since they drew on cave walls. Naturally this point has to be explained over and over, and will continue to be missed, misconstrued and ignored.

Just where theism comes into the argument, I'm not sure.

You are throwing around these terms, "simulation" and "emulation," without even knowing what they mean. Can you give me a definition of simulation versus emulation?

Like I said, I don't think you know what you are talking about.
 
I don't think you know what you are talking about.

An emulation is nothing more than a simulation at the same level as the entities which interact with it, and to those entities the emulation is the real thing.

Suppose we hook all your sensory neurons up to a machine that feeds them input. They are no longer exposed to the real world.

Now, in this new state, you see a car. You get in the car. You can feel it, you can smell it, everything.

Is the car a simulation? An emulation? Real?

It's a simulation. It is not real.

Yes, it is possible that we live in a simulation. In which case, we know nothing about the material world whatsoever.

I am pretty sure they have, just not in English. It is clear to anyone reading this thread that you are 30 years behind the times when it comes to computer science.

Is this on purpose? I would have thought that someone genuinely interested in a subject at least picks up the newspaper now and then.

It's pretty clear to me reading this thread that some people involved in AI are so focused on what they are doing that they've missed the fundamental, basic flaws in the whole concept of Strong AI.
 
But again, people keep mentioning the HPC - the Hard Problem of Consciousness - without answering my question: what precisely is the HPC?

David Chalmers said:
"If any problem qualifies as the problem of consciousness it is this one...even when we have explained the performance of all the cognitive facilities and behavioural functions in the vicinity of experience - perceptual discrimination, categorisation, internal access, verbal report - there may still remain a further unanswered question: Why is the performance of these functions accompanied by experience? Why doesn't all this information processing go on 'in the dark,' free of any inner feel?" - Chalmers as quoted by Blackmore in Consciousness: An Introduction

Nick
 
I think it's important here to distinguish between what is "normal" Strong AI, and Pixy's assertions regarding "self-referencing information processing."

AFAIA, no one other than Pixy maintains that consciousness actually is self-referencing information processing, likely for the simple reason that many aspects of consciousness quite obviously aren't. Sensory information isn't. That which enters awareness may be directed to be there by a self-referencing system - cortico-thalamic loops or whatever - but phenomenal awareness itself is not.

Nick

Pixy's position is that everything beyond self-referential information processing is just icing on the cake. That is, there are no qualitative differences between a worm and a human. Just complexity.

I happen to agree with him. So does mathematics. That is why everyone has failed to find an observable qualitative difference -- it doesn't exist.

Thus, we say lots of things are conscious, just like lots of things run and lots of things sleep.

Nobody complains about saying a cheetah "runs" because "they don't move in the same way a human moves." Likewise, nobody complains about saying a dolphin "sleeps" because "their brain state doesn't change the same way a human's does." So why is consciousness different? Why can't we say things are "conscious, but not like a human?"
 
You are throwing around these terms, "simulation" and "emulation," without even knowing what they mean. Can you give me a definition of simulation versus emulation?

Like I said, I don't think you know what you are talking about.

A simulation is an attempt to show the effects of a physical interaction without actually performing it. An emulation is an attempt to show the effects of a physical interaction by duplicating the physical principle.

All computer models of real world events are simulations. Some physical models can also be simulations. Other physical models attempt to actually create the physical effect that is to be investigated.

In the case of physical models, it's often unclear whether the physical effect is really being produced or not. But a digital simulation will never produce the physical effect. No version of Microsoft Flight Simulator will ever actually fly.
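
Computation happens to be the one case where the line is easy to draw in code, because the principle being duplicated is itself logical rather than physical. A toy contrast of my own, not anyone's canonical example:

Code:
# "Simulating" addition: showing the effect without performing the
# principle - here, by looking answers up in a table prepared earlier.
ADD_TABLE = {(a, b): a + b for a in range(8) for b in range(8)}

def simulate_add(a, b):
    return ADD_TABLE[(a, b)]   # no arithmetic happens at call time

# "Emulating" addition: duplicating the principle itself - the same
# ripple-carry boolean logic that real adder hardware implements.
def emulate_add(a, b, bits=8):
    result, carry = 0, 0
    for i in range(bits):
        x, y = (a >> i) & 1, (b >> i) & 1
        result |= (x ^ y ^ carry) << i                 # sum bit
        carry = (x & y) | (x & carry) | (y & carry)    # carry out
    return result

print(simulate_add(2, 3), emulate_add(2, 3))   # both print 5

The emulated adder really does add, because the "physical principle" of addition is boolean logic, which software can instantiate. The principle of flight is aerodynamics, which it can't - hence no version of Flight Simulator will ever fly.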
 
"Simulate" is not a synonym of "not simulate".

A drawing is a representation, not a simulation.

Well, if you can give me a precise definition of "representation" and "simulation" with a sharp boundary point, I'd be glad to see it. It seems to me that it's just a matter of degree.
 
Pixy's position is that everything beyond self-referential information processing is just icing on the cake. That is, there are no qualitative differences between a worm and a human. Just complexity.

Yet, where is there actually an innate sense of "self-referencing" in sensory awareness? Where is it? It's not there.

When we examine our inner world - that of inner dialogue and feelings - one could, IMO, certainly make a strong case that the former is innately self-referencing, and a weaker one that the latter is. But sensory awareness? It's not innately self-referencing at all. Look at the monitor. How is it self-referencing?

I happen to agree with him. So does mathematics. That is why everyone has failed to find an observable qualitative difference -- it doesn't exist.

Thus, we say lots of things are conscious, just like lots of things run and lots of things sleep.

Nobody complains about saying a cheetah "runs" because "they don't move in the same way a human moves." Likewise, nobody complains about saying a dolphin "sleeps" because "their brain state doesn't change the same way a human's does." So why is consciousness different? Why can't we say things are "conscious, but not like a human?"

Well, the comparison between machine consciousness and human consciousness is more the issue, IMO. And the simple, well-acknowledged fact is that we don't yet know for sure whether they're analogous.

There may or may not be an HPC. Personally, I doubt it, but the simple fact is that many brain researchers still acknowledge that we don't yet have the knowledge to make a clear statement either way. Check Baars or Ramachandran for starters here.

Nick
 
It's a simulation. It is not real.

Yes, it is possible that we live in a simulation. In which case, we know nothing about the material world whatsoever.

lol.

So your definition of "real" is predicated on whether or not we live in a simulation?

Turtles all the way down...
 
Well, if you can give me a precise definition of "representation" and "simulation" with a sharp boundary point, I'd be glad to see it. It seems to me that it's just a matter of degree.

A representation of a physical system is a static encoding of it.

A simulation of a physical system is a dynamic execution of it.

Writing the definition of an algorithm on paper won't allow the paper to perform a calculation.
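
A quick way to see the difference, with a toy example of my own:

Code:
# The algorithm as a REPRESENTATION: a static encoding that computes
# nothing. Printing this string is the cave-wall drawing.
SOURCE = """
def collatz_steps(n):
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps
"""

# The algorithm as a SIMULATION in the sense above: the encoding is
# dynamically executed, and a calculation actually takes place.
namespace = {}
exec(SOURCE, namespace)
print(namespace["collatz_steps"](27))   # -> 111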
 
It's statements like the above that make me worry. How can I fight against the whole of mathematics, which apparently agrees with 'dodger and Pixy about everything?

Indeed. It's just so clear that the whole of mathematics is in complete agreement, why do we even bother?

Meanwhile, back in the real world... if we take a modern model of the brain, one of the numerous global workspace theories (GWTs), the question is: how come one of many similar visual data streams is conscious? Pixy's assertions about self-referencing make no sense here. They don't account for the difference.

Nick
 
how come one of many similar visual data streams is conscious?

This misses the point somewhat, I think, about the aggregate nature of the thing being proposed.
 
A representation of a physical system is a static encoding of it.

A simulation of a physical system is a dynamic execution of it.

Writing the definition of an algorithm on paper won't allow the paper to perform a calculation.

I'd be happy enough with those. It's reasonably useful, which is what one wants from definitions.
 
This misses the point somewhat, I think, about the aggregate nature of the thing being proposed.

If there are multiple parallel-networked modules concurrently processing information, as GWT asserts, how does self-referencing dictate which stream is consciously available? And why is this information consciously available whilst myriad similar information is not?
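
So that we're arguing about the same thing, here is a toy sketch of the competition-and-broadcast architecture GWT describes - all names are mine, and nothing here is anyone's actual model:

Code:
import random

def module(name):
    """Each parallel module emits (salience, content) for this moment."""
    return (random.random(), f"{name}: a processed stream")

# Several modules concurrently processing information, as GWT asserts.
streams = [module(m) for m in
           ("early vision", "peripheral vision", "audition", "touch")]

# The most salient stream wins and is broadcast to the "workspace".
salience, content = max(streams)
print("globally available:", content)

All four streams were processed the same way, so the question stands: what, beyond the arbitrary max() above, makes the winning stream consciously available and the others not? Self-referencing doesn't enter into it.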

Nick
 
The most important word in that post is "if". No program has asserted the fact of its own experience in a non-trivial way. When one does, I'd like to look at the code to see how it does it. Such a thing would be remarkable. However, I'm not going to speculate about something that isn't going to happen any time soon. Whether such a program would have anything useful to tell us about human consciousness, we'd have to see at the time.

You may want to take a look at Cyc. While for now the way it "experiences" things is by someone feeding knowledge into it manually, I see absolutely no reason why a different front end could not be used - for example cameras, all kinds of sensors, etc.

Now think of combining it with some kind of artificial neural network, for example to pre-process sensor data it gets.

Don't you think that such a system would be able to communicate on its own, even with humans? That it could tell us about its experiences, and even draw conclusions from those experiences? If not, why not?
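
A rough sketch of the pipeline I have in mind - none of this is Cyc's real interface (that would go through CycL assertions); every name below is a placeholder:

Code:
def read_camera():
    """Placeholder for a real sensor driver."""
    return [[0.0] * 64 for _ in range(64)]   # a fake 64x64 frame

def classify(frame):
    """Placeholder for the neural-net front end that pre-processes
    raw sensor data into symbols the knowledge base can take."""
    return ("sees", "red ball")   # pretend classification

class KnowledgeBase:
    """Stand-in for Cyc: stores assertions, answers simple queries."""
    def __init__(self):
        self.facts = []
    def assert_fact(self, relation, obj):
        self.facts.append((relation, obj))
    def experiences(self):
        return [obj for rel, obj in self.facts if rel == "sees"]

kb = KnowledgeBase()
kb.assert_fact(*classify(read_camera()))   # perception becomes knowledge
print("I have seen:", kb.experiences())    # reporting its "experiences"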

Greetings,

Chris
 
You may want to take a look at Cyc. While for now the way it "experiences" things is by someone feeding knowledge into it manually, I see absolutely no reason why a different front end could not be used - for example cameras, all kinds of sensors, etc.

Now think of combining it with some kind of artificial neural network, for example to pre-process sensor data it gets.

Don't you think that such a system would be able to communicate on its own, even with humans? That it could tell us about its experiences, and even draw conclusions from those experiences? If not, why not?

Greetings,

Chris

As always with AI, I'll believe it when I see it.
 
