Has consciousness been fully explained?

Magic?

Whatever does (or does not?) happen under Planck limits and/or in dimensions 5-10 or 5-11 that affects the reality we can observe.
 
There is a difference between simulation and reality for physical objects, but it is not at all clear that there is a clear distinction between simulation and reality for actions (aside from the obvious truism that a simulation is not the thing itself). That is the fundamental point when discussing consciousness. Consciousness is not an object; it is an action.

The better analogy that we have used repeatedly in these discussions is with running. Simulated running is running -- not in the real world but in the simulated world. The closer the simulation is to actual running, the more informative the process by which we achieve the simulation is for understanding how running works in the real world. Same for consciousness, at least theoretically. The idea is that a simulation can potentially provide a model for how consciousness works in the brain; not that any simulation would be consciousness in the brain. Obviously it cannot be -- it is a simulation, after all. But if the simulation provides enough information, if it is close enough to how the brain works, then we should probably even speak of it "being conscious" within its simulated world.

ETA:
Or more succinctly, as Pixy said, it's a category error, or a framing error. The simulated orange is an orange within the simulation. A real orange is an orange in the real world.

The only way that running in a simulation is a similar thing to running in the real world is in the consciousness of the person observing the simulation, which perceives the existence of similar relationships which objectively do not exist.

A film of a person running will look, to an observer, much like an actual person running. Objectively, it's nothing like that at all - just patterns of light on a wall. To expect the simulation to exhibit all the behaviours of running would be the same category error as viewers in the first cinema who fled the approaching train in panic.
 
For a start, we cannot assume that a robot will exhibit the behaviours of consciousness, and then use that as evidence that a robot can be conscious. So far, there has been no AI that has come remotely close to passing a Turing test. The assumption used to be that by 2001 HAL would be able to talk to us just like a human being over a slightly fuzzy phone line. Nowadays, most people (even those who believe in the computational approach) don't expect robots or computers to exhibit signs of consciousness for the foreseeable future.

Yes, of course. I'm not arguing otherwise.

What Piggy has indicated is that simply producing a machine designed to produce one form of activity doesn't mean that it will exhibit another form of activity.


That is not what I gather from his analogy. There are a limited number of engineering solutions to produce a leg that moves like our legs move. Whether or not a robotic leg contains anything like a muscle doesn't matter as far as telling us how movement can occur, where the stresses on joints are maximal, etc. We can learn a considerable amount about motion from a robot whether or not muscles are used; and the robot can still walk -- assuming that such a thing is possible.

If we don't know (and despite the endless assertions, we still don't know) what, precisely, produces consciousness in the only physical structure in which we know it exists, we can't, therefore, leave out aspects and be sure that the baby hasn't been thrown out with the bathwater. Either the precise mechanism that produces consciousness must be found, or else we cannot assume that it will be there.

To say as a counterargument that the hypothetical robot will exhibit consciousness is meaningless, until a robot can be found that actually does so.

As we have gone over many times, that might be true and it might not. That there are other engineering solutions for walking that do not employ physical muscle tissue implies that there may be other engineering solutions for consciousness that do not employ neurons. If we want to create an artificial human brain out of other material then we will need to keep strictly to the form of the brains that we have. That does not mean that all conscious entities will need to be structured in the same way -- there might be other types of solutions -- but I suspect that there are going to be very limited solutions to this issue (and it is probably true that the more complex a system, the more limited the solutions to reproduce that system become).

I don't know anyone here who would argue that a robot would be conscious without evidence to show that it is. With similar structure and function, however, we should have every reason to believe that it would be.

I still maintain -- and this was the point of that previous exchange -- that it makes no sense to discuss a robot that acts exactly like it is conscious but which is not conscious. How could you tell that is the case? The only reason I 'know' that you are conscious is because of your behavior and vice versa (and because I know my experiences and have every reason to believe the same of you). If we structure a robot to function the same as we do and if it displays all the behaviors of a conscious being, on what grounds could we conclude that it is not conscious?
 
So if I understand you correctly, a simulated brain can only have simulated consciousness in a simulated world, if we are to avoid a category error?
Nope. That's the diametric opposite of my point.

An orange in a simulation has all the behaviours of an orange in our world, but it is not in our world. You - real-world you - can't eat it.

A conscious mind in a simulation has all the behaviours of a conscious mind in our world... And you can communicate with it, because information is substrate-independent.

That's intrinsic to what a simulation is; if information didn't pass between the two worlds, it wouldn't be a simulation, it wouldn't be anything at all.

So:

There's a difference between how we can interact with an orange in a simulation and an orange in our world.

There is no difference between how we can interact with a mind in a simulation and a mind in the real world.

Conflating these two - the orange and the mind - in either direction, for whatever reason - is the category error. Thinking we need to - or even can - interact with a mind in the same way we interact with an orange, is the category error.

Simulated computation is real computation. Simulated art is real art. Simulated minds are real minds. When it comes to information processing, the map is the territory.
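
To make the "simulated computation is real computation" point concrete, here is a minimal sketch (my own illustration, not anything claimed in the posts above): a toy stack machine interpreted in Python. The addition carried out inside the simulated machine is not a picture of addition; it yields the same real result as addition done directly.

```python
# A toy stack machine, interpreted ("simulated") in Python.
# The addition carried out inside the simulated machine is real addition:
# its result is usable outside the simulation.

def run(program):
    """Execute a list of instructions on the simulated stack machine."""
    stack = []
    for op, *args in program:
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        else:
            raise ValueError(f"unknown instruction: {op}")
    return stack.pop()

# 2 + 3 computed inside the simulated machine...
simulated_result = run([("push", 2), ("push", 3), ("add",)])

# ...and computed directly in the "host" world.
direct_result = 2 + 3

assert simulated_result == direct_result == 5
print(simulated_result)  # 5 -- the same real number either way
```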
 
No, you could not eat an orange simulated by a Turing machine, if you are outside the simulation.

But you could have a conversation with a simulated consciousness, you could play games with a simulated consciousness, you could have cyber sex with a simulated consciousness, you could show a simulated consciousness your photos and have it react like a normal person would via a chatroom or webcam, and finally, if you hooked the simulated consciousness up to a body it could control outside the simulation, then you could walk down the street with it to the store and buy a pack of cigarettes.

You have no trouble assuming I am conscious, just because of the way I respond to your communication. If I told you I was a simulated consciousness, what would you say -- oh, sorry, I've changed my mind because you aren't flesh and bone? I don't get it.

We have no basis on which to know any of this. There is simply no reason to assume that we could interact with a simulated consciousness any more than we can interact with a simulated orange.

A simulation is not the thing simulated.
 
The only way that running in a simulation is a similar thing to running in the real world is in the consciousness of the person observing the simulation, which perceives the existence of similar relationships which objectively do not exist.

A film of a person running will look, to an observer, much like an actual person running. Objectively, it's nothing like that at all - just patterns of light on a wall. To expect the simulation to exhibit all the behaviours of running would be the same category error as viewers in the first cinema who fled the approaching train in panic.


Explain please. If a simulation simulates in every detail every quark in the real world, every interaction in the real world, how are the actions in that simulation not like the actions in the real world? Of course they "exist" within a digital frame and so do not interact with the physical world, but the physical world also does not interact with the digital world (except where it can interfere with the manifestation of it through the physical processes that create that digital world).

Running on film is not a simulation in the sense being discussed, so I don't know why you bring it up.
 
I still maintain -- and this was the point of that previous exchange -- that it makes no sense to discuss a robot that acts exactly like it is conscious but which is not conscious. How could you tell that is the case? The only reason I 'know' that you are conscious is because of your behavior and vice versa (and because I know my experiences and have every reason to believe the same of you). If we structure a robot to function the same as we do and if it displays all the behaviors of a conscious being, on what grounds could we conclude that it is not conscious?
Yes, this is the core of it. How is it even meaningful to discuss something that exhibits all the behaviours of X but is not X? For any X, not just consciousness?
 
We have no basis on which to know any of this. There is simply no reason to assume that we could interact with a simulated consciousness any more than we can interact with a simulated orange.

A simulation is not the thing simulated.


Why not? If we were to precisely match all the information in a simulated body to a robot and had the simulated body run, should we not expect the robot in the real world to run?
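
As a purely illustrative sketch of that question (the names and numbers are made up): the same stream of joint angles computed by a toy gait controller can be fed either to a simulated body or to a stand-in for a physical robot, because what crosses the boundary is just information.

```python
import math

def gait_controller(t):
    """Toy gait controller: hip and knee angles (degrees) at time t.
    The same numbers could drive a simulated body or a physical actuator."""
    hip = 20 * math.sin(2 * math.pi * t)
    knee = 35 * abs(math.sin(2 * math.pi * t))
    return hip, knee

def drive_simulated_body(hip, knee):
    # Update the state of a body inside the simulation (placeholder).
    print(f"[sim]   hip={hip:6.1f}  knee={knee:6.1f}")

def drive_physical_robot(hip, knee):
    # In a real system this would send commands to motor controllers;
    # here it only stands in to show that the interface is identical.
    print(f"[robot] hip={hip:6.1f}  knee={knee:6.1f}")

for step in range(5):
    angles = gait_controller(step * 0.1)
    drive_simulated_body(*angles)   # the run inside the simulation
    drive_physical_robot(*angles)   # the same information driving hardware
```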
 
The options seem fairly obvious to me. Either we can't discuss the matter in the first place and so should remain silent; or we can and should try to agree on what we are talking about in the first place.

I think that we are perfectly entitled to discuss the subject, but should accept that precise definitions of what we are talking about do not exist.

(There are wrong and misleading definitions around, of course).
 
I sincerely hope you know why that is a totally inadequate response. I certainly hope that you do not think that anyone in this thread has ever argued that the behavior of any game character even remotely approximates a conscious entity.

I think you may be mistaken there.
 
I think that we are perfectly entitled to discuss the subject, but should accept that precise definitions of what we are talking about do not exist.

(There are wrong and misleading definitions around, of course).


Whether or not precise definitions are possible is a conclusion that can be drawn only at the end stage of the process. We can certainly start with definitions and try to improve and refine them over time. That is what I asked for in the past and got few to no players -- some simply said outright that it couldn't be done, that there were no definitions for the words, no way to break them down into simpler components. I don't accept that without at least making an attempt.
 
Wrong. The particles of the semiconductors in the computer hardware are the particles of the semiconductors in the computer hardware. The orange is the orange.

While it's possible to link the orange displayed on the computer screen to particles in the computer hardware, that's not a very interesting or useful way to look at it, because there's clearly nothing there that behaves in the same way as a real world orange. The similarity between a real orange and the simulated orange occurs in the mind of the person observing the orange on the screen. That is where the simulation exists. Looking for orange-like behaviour within the computer is the kind of thing we outgrow with a sophisticated understanding of the way the world works. We know that it's just an image, and that it doesn't have any property of the orange apart from the property which has been put there on purpose.
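
For reference, a simulated "orange" in a program is typically nothing more than a small record in memory plus whatever update rules the programmer gives it -- whether that deserves to be called an orange is exactly what is in dispute here. A minimal, made-up sketch:

```python
# A simulated "orange" is a record in memory plus the rules we choose to apply.
orange = {"mass_kg": 0.15, "height_m": 2.0, "velocity_m_s": 0.0}

G = 9.81   # gravitational acceleration, m/s^2
DT = 0.05  # timestep, seconds

# Let the simulated orange "fall" until it reaches the simulated ground.
while orange["height_m"] > 0.0:
    orange["velocity_m_s"] += G * DT
    orange["height_m"] -= orange["velocity_m_s"] * DT

orange["height_m"] = 0.0
print(orange)  # the record has changed; no orange in our world has moved
```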
 
Let me see if I understand you correctly -- the computer memory holding the data that we label an "orange" in the simulation is actually not the orange in the simulation?

Where, then, is the orange in the simulation? Magic bean land? And I suppose the monitor is a magical window into magic bean land?

There is no orange. That's something that babies learn at about eighteen months. They can't do anything with the orange.

And since the orange interacts with other simulation constructs, I suppose that must all be in magic bean land as well?

Or are you saying the whole thing is in our imagination?

Precisely. There is no orange. We imagine the orange because it looks like an orange. Try to say where the simulated orange exists and you can't.

Why is that different from a normal orange? If you remove every human in the universe, an orange is no longer an orange. It becomes just a bunch of particles that maybe monkeys (although they are no longer monkeys, since that is a human word as well) like to eat. But the "orange" is gone, forever.

Doesn't make much sense to me. But then the idea of saying some things are "real" and some are "not" has no logical or mathematical basis to begin with.

Back to the "all is illusion". I happen to believe that a real orange can fall in the forest without someone there to see it. That's the difference between the real world and the world of stories and pictures and enormously complicated computer simulations.

It's really quite a useful skill to be able to differentiate between things that exist, and things which are made up. We can imagine that a combination of shadows looks like a face. We shouldn't think that a face is really there.
 
Nope. That's the diametric opposite of my point.

An orange in a simulation has all the behaviours of an orange in our world, but it is not in our world. You - real-world you - can't eat it.

A conscious mind in a simulation has all the behaviours of a conscious mind in our world... And you can communicate with it, because information is substrate-independent.

That's intrinsic to what a simulation is; if information didn't pass between the two worlds, it wouldn't be a simulation, it wouldn't be anything at all.

So:

There's a difference between how we can interact with an orange in a simulation and an orange in our world.

There is no difference between how we can interact with a mind in a simulation and a mind in the real world.

Conflating these two - the orange and the mind - in either direction, for whatever reason - is the category error. Thinking we need to - or even can - interact with a mind in the same way we interact with an orange, is the category error.

Simulated computation is real computation. Simulated art is real art. Simulated minds are real minds. When it comes to information processing, the map is the territory.


Thanks for stating your perspective clearly.

How is information being substrate-independent not dualistic?
 
May I call a time out?


From what I can tell from Westprog's responses you guys are talking past one another as seems to happen in many of these threads. The two sides have very, very different ways of talking about the simulation at play in the examples.
 
Why would a simulated consciousness be conscious in our frame?


Because actions, as actions, are frame-independent. What is frame dependent is the way the actions are carried out -- either in silicon chips or in physical biological entities.

Physical objects are frame dependent, but actions are not because actions are relations between parts and not the parts themselves.
 
Thanks for stating your perspective clearly.

How is information being substrate-independent not dualistic?


Because information is not a "thing"; it exists in interactions of 'things'. The reason that we have dualistic categories and Descartes arrived at his form of dualism is because consciousness has the same 'property'. It is an action, and actions are substrate-independent -- they can be realized in multiple different systems.

We run into this problem because words like 'information' and 'consciousness' are nouns and that makes us think of them as 'things'. They really should be verbs because they are not 'things' but rather actions.
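
A toy illustration of what substrate-independence can mean here (my example, not anyone's in the thread): the same action -- sorting into ascending order -- realized by two very different mechanisms is still the same action, because the action is defined by the relation between input and output, not by the parts that carry it out.

```python
# The "action" of sorting is a relation between input and output;
# it does not depend on which mechanism, or which parts, realize it.

def insertion_sort(items):
    """One realization: repeatedly shifting elements within a list."""
    out = list(items)
    for i in range(1, len(out)):
        j = i
        while j > 0 and out[j - 1] > out[j]:
            out[j - 1], out[j] = out[j], out[j - 1]
            j -= 1
    return out

def counting_sort(items):
    """A very different realization: tallying occurrences of small integers."""
    lo, hi = min(items), max(items)
    counts = [0] * (hi - lo + 1)
    for x in items:
        counts[x - lo] += 1
    return [lo + i for i, c in enumerate(counts) for _ in range(c)]

data = [3, 1, 4, 1, 5, 9, 2, 6]
assert insertion_sort(data) == counting_sort(data) == sorted(data)
print(insertion_sort(data))  # same action, realized by different mechanisms
```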
 