
Explain consciousness to the layman.

That's true, but it doesn't defeat the point I was making.

In a story, the rules themselves need not have anything to do with the tornado. The tornado behaves according to the needs of the plot--unless it's a non sequitur, in which case it could do nearly anything.

In a simulation, the rules have to reflect the relationships that the tornado has with elements of interest, and the simulated tornado always behaves according to the application of those rules.

Unless the book is a textbook about tornadoes, and the simulation is a cool computer game. There's nothing inherent in a computer simulation that makes it more accurate than any other representation of a real-world object. There are a vast number of different forms of simulation.
 
Unless the book is a textbook about tornadoes, and the simulation is a cool computer game. There's nothing inherent in a computer simulation that makes it more accurate than any other representation of a real-world object. There are a vast number of different forms of simulation.

But this isn't true, and we have already spent pages arguing about it:

If a simulation exhibits more behavior isomorphisms to the real thing than some other simulation does, then it is "more accurate."

And -- that is an entirely objective metric, so you can't hand-wave it away with your normal "observer dependent" nonsense.

If you wanted to, you could mathematically evaluate the difference between the behavior of the book and the game and objectively determine which one displays a higher level of behavior isomorphism.
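For instance, here is a toy sketch of how such an evaluation could go (the traces, the numbers, and the choice of an RMS metric are invented purely for illustration; nothing here is real tornado data):

[code]
# Toy sketch: scoring "behavior isomorphism" as agreement with reference data.
# All numbers are invented for illustration; a real comparison would use
# measured tornado behavior and the actual outputs of the two representations.

def rms_error(predicted, reference):
    """Root-mean-square difference between two equal-length behavior traces."""
    assert len(predicted) == len(reference)
    return (sum((p - r) ** 2 for p, r in zip(predicted, reference)) / len(reference)) ** 0.5

# Reference: observed wind speed (m/s) of a real tornado at successive times.
observed = [45.0, 52.0, 61.0, 58.0, 50.0]

# "Textbook" representation: a static figure, so effectively a constant prediction.
textbook_prediction = [55.0] * len(observed)

# "Game" representation: a crude dynamic model that at least tracks the rise and fall.
game_prediction = [44.0, 50.0, 63.0, 60.0, 48.0]

# Lower error = more behavior isomorphism with the real thing, on this metric.
print("textbook RMS error:", rms_error(textbook_prediction, observed))
print("game RMS error:    ", rms_error(game_prediction, observed))
[/code]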
 
No. Their relationship to the things they depict is not imaginary--it is abstract. There's a difference between a simulation and a story, or film, or flipbook--that difference is critical to our calling the thing a simulation.

Namely, the simulation has to actually follow a set of rules. Furthermore, in order for that simulation to be called a simulation of a tornado, the rules have to somehow have something to do with a tornado.
Luckily we have 'tornado rules' at the thermodynamic level, so what gets used are the solutions to partial differential equations, not lower-level (perhaps lowest-level?) four-force particle interactions. I doubt that a tornado can be modelled at four-force complexity.

Now, when we can move our understanding of consciousness to the thermodynamic level, we should have something to work with.
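For a sense of what a rule "at the thermodynamic level" looks like, here is a minimal sketch--a toy one-dimensional advection-diffusion step, not an actual tornado model, with made-up coefficients:

[code]
# Minimal sketch of a "thermodynamic level" rule: an explicit finite-difference
# step for 1-D advection-diffusion. Not a real tornado model - just the style
# of rule: a PDE over bulk quantities rather than per-particle force interactions.

def step(field, u=1.0, kappa=0.1, dx=1.0, dt=0.1):
    """Advance a 1-D scalar field one time step (periodic boundaries)."""
    n = len(field)
    new = [0.0] * n
    for i in range(n):
        left, right = field[i - 1], field[(i + 1) % n]
        advection = -u * (right - left) / (2 * dx)
        diffusion = kappa * (right - 2 * field[i] + left) / dx ** 2
        new[i] = field[i] + dt * (advection + diffusion)
    return new

field = [0.0] * 20
field[10] = 1.0                # a single warm spot
for _ in range(50):
    field = step(field)        # the rule, applied over and over, drives the behavior
print([round(x, 3) for x in field])
[/code]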
 
You're not responding to the post.

You said that it's possible that consciousness requires constant real-time interaction with the real world. Now you're just lecturing about how obvious interaction with the world is. Sure, the brain interacts with the world. But that has nothing to do with the question.

What is the relationship between consciousness and real-time interaction, if not something distinct from sped-up or slowed-down interaction? And what do you mean by the real world, if not something distinct from a virtual world?
At the level of human consciousness, it means you at least try to return serve when playing tennis. At a slower pace, you decide whether to catch the object thrown to you, step aside, run for cover, or throw yourself on the grenade to save the lives of your fellow soldiers.

And steer into the skid without touching the brake.

At the level of lower animals, it means reacting quickly enough to obtain food or sex, or to escape a threat.
 
Unless the book is a textbook about tornadoes, and the simulation is a cool computer game.
No, that's not an exception. The book follows the laws of thermodynamics; it exerts a gravitational pull on the planet Earth. Its pages have particular mechanical properties allowing them to be turned with reasonable force, and the ink tends to stay embedded at a particular location. So there are many rules the book follows, but you're talking about the book's content. I'm afraid that's not generated by the rules followed by the book.

What's written in the book may be the result of rules in another system--in fact, the book may even be about the results of a simulation. But in this case, the rules related to the tornado were applied by the behavior of the simulation. Not the book.

Unless, of course, you find a clever way of using the properties of the book to build a simulation. But in that case, once again, it's no exception.
There's nothing inherent in a computer simulation that makes it more accurate than any other representation of a real-world object.
This phrase is too general to have a meaning.
There are a vast number of different forms of simulation.

But the only things we get to call simulations are the things that actually bother with simulating.
 
Okay, but if that is what you mean, then the only way a simulated entity could interface with a so-called simulated world is by interfacing with the real world.

So I don't understand why there's a point to make here in the first place.

Best I can tell at the moment, all you're saying is that all interactions happen in the real world. Yeah, they do. They happen whether the interactions are in a brain or in a computer program. So how are we to conclude something about the brain that contrasts with the program?

Well, that remains to be seen. If you can point out what you mean by real world in such a way that it actually contrasts with computation, you can make a point about what computation cannot do. But you haven't done that, so long as all you're talking about is that all interactions occur in the real world.

Of necessity, all human interactions happen in real time, in the real world. However, it's been insisted that it's possible for a computer consciousness to react at different rates, to simulated events, and these simulated events can be played at any speed, and the experience of the computer consciousness will be identical if the rate of the events matches the rate of the computation. I'm claiming that this is fundamentally different to the way things work with human consciousness. If, in the sensory deprivation tank, events happen twice as fast, they will be experienced twice as fast. Not so with the computational model.

It might be that consciousness is time-independent, and that while the brain can't run at radically different rates, some artificial consciousness reacting to simulated events might be able to do so. I do claim that this supposition is at the very least unproven.
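To pin the disputed claim down, here is a toy sketch (not anyone's model of consciousness; the "agent" is just a trivial accumulator): a computation whose output depends only on the event sequence it is fed, not on how fast the host machine happens to run it.

[code]
import time

# Toy sketch of the disputed claim: a "simulated agent" driven purely by
# simulated events. Its trace depends only on the events it is fed, not on
# the wall-clock rate at which the host computer executes the loop.

def run(events, wall_clock_delay=0.0):
    """Feed events to a trivial agent; optionally dawdle between steps."""
    state, trace = 0, []
    for e in events:
        state = state + e              # the agent's (trivial) update rule
        trace.append(state)
        time.sleep(wall_clock_delay)   # slows the host, not the simulation
    return trace

events = [1, -2, 3, 0, 5]
print(run(events, wall_clock_delay=0.0))   # run as fast as possible
print(run(events, wall_clock_delay=0.1))   # run much slower in real time
# The two traces are identical: the "simulated experience" doesn't notice.
[/code]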
 
No, that's not an exception. The book follows the laws of thermodynamics; it exerts a gravitational pull on the planet Earth. Its pages have particular mechanical properties allowing them to be turned with reasonable force, and the ink tends to stay embedded at a particular location. So there are many rules the book follows, but you're talking about the book's content. I'm afraid that's not generated by the rules followed by the book.

What's written in the book may be the result of rules in another system--in fact, the book may even be about the results of a simulation. But in this case, the rules related to the tornado were applied by the behavior of the simulation. Not the book.

Unless, of course, you find a clever way of using the properties of the book to build a simulation. But in that case, once again, it's no exception.
This phrase is too general to have a meaning.

But the only things we get to call simulations are the things that actually bother with simulating.

What do you call a simulation of a system, then? What definition excludes the book and the film and the photograph, but includes all the different varieties on the computer?

I agree that a book is not a simulation - but it includes symbols which represent a tornado. How are those symbols less valid than those in the computer?
 
Of necessity, all human interactions happen in real time, in the real world. However, it's been insisted that it's possible for a computer consciousness to react at different rates, to simulated events, and these simulated events can be played at any speed, and the experience of the computer consciousness will be identical if the rate of the events matches the rate of the computation.
Okay, sure. And you actually have a problem with this? It's like saying: if I stretch a ruler out equally to twice its length, and I stretch out a piece of board to twice its dimensions, then the ruler would measure the same thing. Sure, we haven't actually stretched the ruler or the board, but this sounds like a fair claim on the face of it. Why would you have a problem with this? It's simply a scaling.
I'm claiming that this is fundamentally different to the way things work with human consciousness. If, in the sensory deprivation tank, events happen twice as fast, they will be experienced twice as fast. Not so with the computational model.
But this doesn't make any sense. Suppose I simulate a bullet being fired from a gun, and I get a certain set of results. Now I'm curious what would happen if I simulate a bullet being fired twice as fast. Do you imagine that I simply run the simulation twice as fast? Of course not! If I simply run the simulation twice as fast, I'll be simulating the same thing.

But that seems to be the way you're performing the comparison. It just doesn't seem appropriate. If you simulate me being in a sensory deprivation tank with the VR goggles and headset and hand control, your baseline simulation should mimic what I would do in that situation. If you want to compare to what I would do if you speed up the VR world, you don't just run the simulation twice as fast. You change the simulation such that the simulated VR world is twice as fast as the simulated me.

And if you do that, you should get the same result.

I'm confused why this isn't obvious to you.
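Here is a toy way to see the apples-to-apples point (purely illustrative; a counter stands in for the VR world and a sampling loop stands in for the simulated me): running the whole simulation faster changes nothing, while doubling the world's rate relative to the agent changes what the agent experiences.

[code]
# Toy sketch of the apples-to-apples comparison. A "simulated me" samples a
# "simulated VR world" once per agent step. Running the whole program faster
# in wall-clock time changes nothing; doubling the world's rate *relative to
# the agent* changes what the agent actually experiences.

def simulate(world_steps_per_agent_step, n_agent_steps=5):
    world_time = 0
    experienced = []
    for _ in range(n_agent_steps):
        world_time += world_steps_per_agent_step   # the world advances this much
        experienced.append(world_time)             # what the agent samples
    return experienced

print(simulate(1))   # baseline: world and agent in lockstep -> [1, 2, 3, 4, 5]
print(simulate(2))   # world twice as fast relative to the agent -> [2, 4, 6, 8, 10]
# Running simulate(1) on a machine twice as fast would still print [1, 2, 3, 4, 5].
[/code]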
It might be that consciousness is time-independent, and that while the brain can't run at radically different rates, some artificial consciousness reacting to simulated events might be able to do so. I do claim that this supposition is at the very least unproven.
Prove what exactly? If it's a computation it can run at radically different rates.
 
What do you call a simulation of a system, then? What definition excludes the book and the film and the photograph, but includes all the different varieties on the computer?
Well, as I said--the results of a simulation are generated by the application of rules within the system. My book on tornadoes may be based on real tornadoes--in which case the tornado itself followed its own physics. Or it could be based on a model, in which case the model specifies rules of how it would react. And unless you use the book somehow to simulate this model, you can't say that the book is a simulation.
I agree that a book is not a simulation
Okay, then what are you disagreeing with?
but it includes symbols which represent a tornado. How are those symbols less valid than those in the computer?
The question is a red herring. You're asking about the difference between a simulation and a book, film, or photograph.
 
Well, if it's a massive communication breakdown, how do you know what he is saying?
Because the communication breakdown is only in one direction.

What Westprog is saying is clear, obvious, and entirely wrong. But he's not listening to the explanations of why this is so.
 
Well, as I said--the results of a simulation are generated by the application of rules within the system. My book on tornadoes may be based on real tornadoes--in which case the tornado itself followed its own physics. Or it could be based on a model, in which case the model specifies rules of how it would react. And unless you use the book somehow to simulate this model, you can't say that the book is a simulation.
Okay, then what are you disagreeing with?

The question is a red herring. You're asking about the difference between a simulation and a book, film, or photograph.

The point I'm making is that if it's valid to talk about the world in the simulation, it's just as valid (or invalid) to talk about the world inside the book, or the film. If there are properties unique to the simulation, then I haven't yet seen them.
 
Okay, sure. And you actually have a problem with this? It's like saying: if I stretch a ruler out equally to twice its length, and I stretch out a piece of board to twice its dimensions, then the ruler would measure the same thing. Sure, we haven't actually stretched the ruler or the board, but this sounds like a fair claim on the face of it. Why would you have a problem with this? It's simply a scaling.

But this doesn't make any sense. Suppose I simulate a bullet being fired from a gun, and I get a certain set of results. Now I'm curious what would happen if I simulate a bullet being fired twice as fast. Do you imagine that I simply run the simulation twice as fast? Of course not! If I simply run the simulation twice as fast, I'll be simulating the same thing.

But that seems to be the way you're performing the comparison. It just doesn't seem appropriate. If you simulate me being in a sensory deprivation tank with the VR goggles and headset and hand control, your baseline simulation should mimic what I would do in that situation. If you want to compare to what I would do if you speed up the VR world, you don't just run the simulation twice as fast. You change the simulation such that the simulated VR world is twice as fast as the simulated me.

And if you do that, you should get the same result.

I'm confused why this isn't obvious to you.

Prove what exactly? If it's a computation it can run at radically different rates.

Well, that's the point, isn't it? All I'm doing is pointing out differences between a computational process, and the process of a human brain. I agree that it's all obvious.
 
Well, that's the point, isn't it?
Is it? You said there was something fundamentally different, but it seems you're comparing apples to oranges. I wouldn't call that a fundamental difference--I'd just call it a false comparison.
All I'm doing is pointing out differences between a computational process, and the process of a human brain. I agree that it's all obvious.
But I don't see the difference you're pointing out. Compare apples to apples, and the simulated brain works the same way as the real one.

So if there's a fundamental difference, where is it? Is it just in your assumption that computation cannot produce consciousness? If so, why all the hoopla? Why not just end at the statement that you assume computation cannot produce consciousness? You don't get anything new by adding the observation that you can't in practice speed up or slow down a human brain except relativistically.
 
The point I'm making is that if it's valid to talk about the world in the simulation, it's just as valid (or invalid) to talk about the world inside the book, or the film.
Sure, but you failed to make your point. The very fact that it's a simulation makes it a different kind of thing in a relevant way.
If there are properties unique to the simulation, then I haven't yet seen them.
The unique property of the simulation that is relevant is that it is actually bothering to simulate the thing. The rules involved in the system we call the simulation drive what is happening in a causal fashion. That's not always true for a book or a film.
 
Is it? You said there was something fundamentally different, but it seems you're comparing apples to oranges. I wouldn't call that a fundamental difference--I'd just call it a false comparison.
But I don't see the difference you're pointing out. Compare apples to apples, and the simulated brain works the same way as the real one.

So if there's a fundamental difference, where is it? Is it just in your assumption that computation cannot produce consciousness? If so, why all the hoopla? Why not just end at the statement that you assume computation cannot produce consciousness? You don't get anything new by adding the observation that you can't in practice speed up or slow down a human brain except relativistically.

No, that's not the point. The point is not that the human brain can't speed up or slow down - it's that it is inextricably linked to its environment. The point about the computation is that it is a model of a process entirely isolated from its environment. I'm continually amazed that the supporters of the computational model don't have a problem with this, or are willing to handwave it away as something irrelevant. The computational nature of the brain is just to be accepted, and major, significant differences like the interactive nature of the brain can be left aside.
 
...The point is not that the human brain can't speed up or slow down - it's that it is inextricably linked to its environment...

If we developed a complete computer simulation of a human brain, and inserted it into a robot that had our five senses and would talk and move like a person, what evidence would we seek to show that it was (or was not) conscious?
 
No, that's not the point. The point is not that the human brain can't speed up or slow down - it's that it is inextricably linked to its environment.
The human brain is only "inextricably linked" to the pieces of the environment it is "inextricably linked" to; it is isolated from the pieces of the environment it is isolated from. Neutrinos don't have much of an effect on my brain processing, for example.

In nearly any system, there are pieces of the environment that affect it greatly, and pieces that don't have so much of an effect. You talk about this as if the human sensory apparatus glues us into everything going on around us.
The point about the computation is that it is a model of a process entirely isolated from its environment.
Actually, no, that's not "the point" about a computation. The point of a computation is that it is in itself a process. Ideally you would want to isolate the computation from outside processes that may interfere with the results, yes. But the isolation isn't the point of it--what it does is.
I'm continually amazed that the supporters of the computational model don't have a problem with this, or are willing to handwave it away as something irrelevant.
Quite the opposite. A computational model is an environment. The thing you isolate the computational system with is indeed irrelevant, but that means that you can ignore that piece. And there's something left--the environment that is part of the computation.

And that is relevant. Indeed, it is the point. But that is what you are hand waving away as irrelevant.
The computational nature of the brain is just to be accepted, and major, significant differences like the interactive nature of the brain can be left aside.
The simulated brain is interacting with a simulated environment; and both are processes (if they weren't, there'd be no computation--see above). So there are no "major, significant differences" in this regard. Just as my brain is affected by red photons, the simulated brain can be affected by simulated red photons. And just as my brain isn't much affected by neutrinos, the simulation can leave out neutrinos and be fairly accurate.

But all this means is that none of the points you raised are valid. It doesn't automatically mean the computational nature of the brain is just to be accepted. It simply means you didn't make any valid points.
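As a toy sketch of the red-photons/neutrinos point (nothing here is a real brain model; the "brain" is a one-line stand-in): the simulated environment supplies the inputs that causally matter and simply leaves out the ones that don't.

[code]
# Toy sketch of "the environment is part of the computation". The simulated
# environment emits the inputs that causally matter to the simulated brain
# (here, "red photons") and deliberately omits ones that barely matter
# (neutrinos). Nothing here is a real brain model.

def environment(t):
    """Simulated surroundings at time step t: only the causally relevant signals."""
    return {"red_photons": 100 if t % 3 == 0 else 0}   # neutrinos left out entirely

def brain_step(state, inputs):
    """A stand-in 'brain': responds to red photons, knows nothing of neutrinos."""
    return state + (1 if inputs["red_photons"] > 0 else 0)

state = 0
for t in range(9):
    state = brain_step(state, environment(t))
print("simulated brain state after 9 steps:", state)
[/code]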
 
What does that mean? What would qualify as such (a recursive definition is not acceptable)?
"approximate a living mind".

Well, this has been described in this thread under the discussions of intelligence, although the requirement for a mind to be alive has been ignored.

If I ignore the 'alive' bit, then a mind is an individual, coherent entity interfacing with its environment, external and internal, via inputs and outputs, with the ability to learn, reason, and represent its experience and behavior subjectively.


Do you mean self-aware?
"with a sense of being and experience in the physical world".

Well, if I put 'being' to one side, then I do mean consciously self-aware.

Being is my magic bean, and it's where I feel inclined to consider the possibility of ontologies other than physical-matter materialism.
 