
Has consciousness been fully explained?

Status
Not open for further replies.
Somewhere in the patterns of electrons flowing through the right gates, there should be the pattern that coincides with what occurs in my brain as I type these words.

Let's be clear, then. Are you talking about a simulation or a model?
 
Really?

So are you saying that if you've got a machine that can run detailed sims, and you have it run a sim of a hurricane, and then you have it run a sim of a brain, this machine doesn't take on any of the characteristics of the hurricane, but it does take on characteristics of the brain?

If so, how does that work?

If not, how is it at all relevant here?

One of the recurring problems on these threads is the proliferation of badly formed thought experiments.

If a computer running a sim of a hurricane doesn't itself have a wind speed, then a computer running a sim of a brain is not itself conscious.


It depends on what you mean by characteristics of a hurricane and characteristics of a brain. We are discussing the relationships and functions here, not the actual matter in the process, even though there is matter involved in the processes of brain function.

Of course a computer will never recreate the actual wind of a hurricane, but it should be able to reproduce the relationships within a hurricane that account for all the wind and damage. There is no actual matter involved in reproducing the relationships, so we see no wind (unless we translate all of it to a screen and visualize it there), but that is beside the point -- the relationships are there.

With a brain, the issue is much easier to see because computers were designed to do mental activities; that is their function. So, if a computer recreated the atomic relationships responsible for conscious thought we should be able to see much more easily that it is conscious. And if we translate the pattern of electron movements into audible form, we should be able to hear someone speaking conscious thoughts.
 
It is not a proven fact that every kind of machine or system can be simulated by some Turing Machine.

Nor do we know for sure whether the true and complete underlying mathematical description of our universe allows for perfect simulation by a Turing Machine. For example, perhaps our universe requires an uncountable infinity of independent rules to fully describe it.

It may seem unlikely to some that a Turing Machine would not be sufficient, given what we think we currently understand about how the brain works, and given that theoretical physics seems to be heading in the direction of a relatively small and compact set of rules rather than the other way (as far as I know, at least), but this proves nothing.

There have been various references to Turing Machines and the Church-Turing Thesis earlier in this thread, and also frequent references to logic gates, so it certainly seems that everybody agrees that any simulation is restricted to being hosted on nothing more computationally powerful than a Turing Machine.
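For anyone unfamiliar with the formalism: a Turing Machine is just a finite rule table acting on a tape, and a few lines of code can host one, which is roughly what "hosted on nothing more computationally powerful than a Turing Machine" amounts to. This is only an illustrative sketch; the bit-flipping machine and the function names are made up for brevity, not taken from anything in the thread:

```python
# A minimal Turing Machine simulator. The machine is a dictionary of
# transitions: (state, symbol) -> (new_state, symbol_to_write, head_move).
def run_tm(transitions, tape, state='start', blank='_', max_steps=1000):
    """Run a machine on a string tape and return the final tape contents."""
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        symbol = tape.get(head, blank)
        if (state, symbol) not in transitions:
            break  # no applicable rule: the machine halts
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += move
    return ''.join(tape[i] for i in sorted(tape)).strip('_')

# Example machine: walk right, inverting each bit, halt at the first blank.
flip = {
    ('start', '0'): ('start', '1', +1),
    ('start', '1'): ('start', '0', +1),
}
print(run_tm(flip, '10110'))  # -> 01001
```

The point of the sketch is only that "a Turing Machine" is a very modest formal object; the open question in the posts above is whether physics, or a brain, exceeds what any such rule table can capture.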

To me this makes it quite clear that the most you could say for any such simulation is that it might produce consciousness, not that it absolutely will. As such I'm not sure what the "thought experiment" could achieve apart from sorting those on the thread into rough groupings according to their personal beliefs about how they think the universe or consciousness may work.

There's a section in the Wikipedia Digital Physics page quoting Richard Feynman which I'll copy below because I think it's pretty relevant:

Wikipedia/Digital Physics/Richard Feynman said:
Moreover, the universe seems to be able to decide on their values in real time, moment by moment. As Richard Feynman put it:
"It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of space/time is going to do?"
He then answered his own question as follows:
"So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the checker board with all its apparent complexities. But this speculation is of the same nature as those other people make—'I like it,' 'I don't like it'—and it is not good to be prejudiced about these things".

That same wiki page specifically details some of the criticisms of digital physics.
 
Let's be clear, then. Are you talking about a simulation or a model?


We are discussing a simulation, but we tend to get lost, again, in the programming aspects and forget the actual workings of the computer -- which are electrons passing through logic gates.

There really is a physical process occurring in a computer. And it should be possible to create changes in the way that physical process occurs with different types of programs.

I agree with you completely that a program that only does step 1, then step 2, then step 3 couldn't produce anything like consciousness because consciousness is an active process and a program like that would not be an active process. It would be a recipe.

Again, my understanding is that programming is not stuck in those days. Neural networks have been shown to learn -- to change what happens over time based on what occurred earlier.

Technically, it shouldn't matter whether the changes occur within the programming itself or in the way the computer responds directly at the level of the electron gates, as a response to the computer acting according to a set of rules rather than just line code. What matters is a change in computer function over time based on some set of rules that is not preordained in the original code, whether through recursive loops or whatever.
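The kind of learning being described can be sketched in the simplest possible case: a single perceptron. This is a toy illustration, not a model of the brain, and the data are made up; the point is only that the program specifies an update rule, while the function the machine ends up computing is shaped by the examples it sees rather than written out step by step:

```python
# A single perceptron whose weights -- and therefore whose behavior --
# change with experience. The learning rule is fixed, but the resulting
# function is not spelled out anywhere in the original code.

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of ((x1, x2), target) pairs with targets 0 or 1."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = target - out
            # The update rule: the only behavior the program preordains.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn the AND function from examples rather than explicit steps.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
predict = lambda x1, x2: 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in data])  # -> [0, 0, 0, 1]
```

Nothing in the code says "compute AND"; that behavior emerges from the interaction of the fixed rule with the history of inputs, which is the distinction being drawn above between a recipe and an active process.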
 
It is not a proven fact that every kind of machine or system can be simulated by some Turing Machine.

Nor do we know for sure whether the true and complete underlying mathematical description of our universe allows for perfect simulation by a Turing Machine. For example, perhaps our universe requires an uncountable infinity of independent rules to fully describe it.

It may seem unlikely to some that a Turing Machine would not be sufficient, given what we think we currently understand about how the brain works, and given that theoretical physics seems to be heading in the direction of a relatively small and compact set of rules rather than the other way (as far as I know, at least), but this proves nothing.

There have been various references to Turing Machines and the Church-Turing Thesis earlier in this thread, and also frequent references to logic gates, so it certainly seems that everybody agrees that any simulation is restricted to being hosted on nothing more computationally powerful than a Turing Machine.

To me this makes it quite clear that the most you could say for any such simulation is that it might produce consciousness, not that it absolutely will. As such I'm not sure what the "thought experiment" could achieve apart from sorting those on the thread into rough groupings according to their personal beliefs about how they think the universe or consciousness may work.

There's a section in the Wikipedia Digital Physics page quoting Richard Feynman which I'll copy below because I think it's pretty relevant:



That same wiki page specifically details some of the criticisms of digital physics.


But none of that actually matters as regards a simulation of 'the universe', because we have only been using this as a thought experiment to discuss the issue.

We could do the same thing by recreating the local conditions of the earth to a certain degree. It just doesn't matter. What matters in this scenario is the ability of the computer to change its function over time -- I thought that was evident from the description of the thought experiment.

The basic critique of computationalism, however, is quite important and certainly deserves further consideration. If a Turing machine cannot be counted on to produce such systems reliably, then that is a real problem for the argument. I am not qualified to comment on that issue.
 
It depends on what you mean by characteristics of a hurricane and characteristics of a brain. We are discussing the relationships and functions here, not the actual matter in the process, even though there is matter involved in the processes of brain function.

Of course a computer will never recreate the actual wind of a hurricane, but it should be able to reproduce the relationships within a hurricane that account for all the wind and damage. There is no actual matter involved in reproducing the relationships, so we see no wind (unless we translate all of it to a screen and visualize it there), but that is beside the point -- the relationships are there.

No, it is not beside the point.

It is, in fact, the entire point.

Who cares if analogous relationships are represented in a simulation?

It doesn't make the computer have a wind speed, and it doesn't make the computer conscious.

With a brain, the issue is much easier to see because computers were designed to do mental activities; that is their function. So, if a computer recreated the atomic relationships responsible for conscious thought we should be able to see much more easily that it is conscious. And if we translate the pattern of electron movements into audible form, we should be able to hear someone speaking conscious thoughts.

Really?

Showing me a street view of San Francisco is a mental activity?

Processing my purchase of an overcoat is a mental activity?

What are you talking about?

But anyway, it doesn't matter if you do create a computer that's able to simulate a brain right down to the atoms. For the same reason that it doesn't matter if you create a computer that can simulate a hurricane right down to the atoms.

The computer running the simulation still will not have an average wind speed, nor will it be conscious.

The granularity of the simulation makes zero difference when it comes to this fact.
 
We are discussing a simulation, but we tend to get lost, again, in the programming aspects and forget the actual workings of the computer -- which are electrons passing through logic gates.

There really is a physical process occurring in a computer. And it should be possible to create changes in the way that physical process occurs with different types of programs.

I agree with you completely that a program that only does step 1, then step 2, then step 3 couldn't produce anything like consciousness because consciousness is an active process and a program like that would not be an active process. It would be a recipe.

Again, my understanding is that programming is not stuck in those days. Neural networks have been shown to learn -- to change what happens over time based on what occurred earlier.

Technically, it shouldn't matter whether the changes occur within the programming itself or in the way the computer responds directly at the level of the electron gates, as a response to the computer acting according to a set of rules rather than just line code. What matters is a change in computer function over time based on some set of rules that is not preordained in the original code, whether through recursive loops or whatever.

You're still on the wrong track, my friend.

If it's an evolving simulation, it's still a simulation. And a simulation doesn't cause the machine running the simulation to take on qualities or behaviors of the system being simulated, because the machine simply keeps doing what it always did.

If it was conscious beforehand, it will continue to be conscious. If it wasn't, then it won't be.

On the other hand, if you've managed to create an actual evolving physical system, then you've managed to create an evolving model, which can be conscious simply because a model brain can be conscious.

In other words, if you run a simulation of evolving critters that scavenge for virtual food, this does not make the machine running the sim start scavenging for food.

On the other hand, if you produce robots that scavenge for food and learn to get better at it thru experience, then you've produced machines that scavenge for food.

It's really as simple as that.

If you want a conscious machine, you have to build it to be conscious, including whatever physical apparatus is necessary.
 
No, it is not beside the point.

It is, in fact, the entire point.

Who cares if analogous relationships are represented in a simulation?

It doesn't make the computer have a wind speed, and it doesn't make the computer conscious.



Really?

Showing me a street view of San Francisco is a mental activity?

Processing my purchase of an overcoat is a mental activity?


Yes. Carrying out the search processes to show you those things is a mental activity.



But anyway, it doesn't matter if you do create a computer that's able to simulate a brain right down to the atoms. For the same reason that it doesn't matter if you create a computer that can simulate a hurricane right down to the atoms.

The computer running the simulation still will not have an average wind speed, nor will it be conscious.

The granularity of the simulation makes zero difference when it comes to this fact.


But they are not analogous situations. The damage a hurricane does in the real world depends on the existence of a certain type of matter in a particular relationship. The computer cannot reproduce the matter, it can only reproduce the relationships. The same is true of consciousness, except that the kind of matter and type of action that the computer actually does -- moving electrons around -- is much closer to what a brain does, so the relationships should be easier to see. And translation between the electron movement and us actually seeing consciousness should be much easier.

Now, if we had technology far beyond our current capacity and linked all the information from the simulation to the real world -- 'atom' to atom and relationship of 'atom' to atom (where the scare quotes denote the simulated atoms) -- then we should be able to recreate all the forces of a hurricane. There is no way to do that physically that we are aware of, but it should be possible in theory.
 
You're still on the wrong track, my friend.

If it's an evolving simulation, it's still a simulation. And a simulation doesn't cause the machine running the simulation to take on qualities or behaviors of the system being simulated, because the machine simply keeps doing what it always did.

If it was conscious beforehand, it will continue to be conscious. If it wasn't, then it won't be.

On the other hand, if you've managed to create an actual evolving physical system, then you've managed to create an evolving model, which can be conscious simply because a model brain can be conscious.

In other words, if you run a simulation of evolving critters that scavenge for virtual food, this does not make the machine running the sim start scavenging for food.

On the other hand, if you produce robots that scavenge for food and learn to get better at it thru experience, then you've produced machines that scavenge for food.

It's really as simple as that.

If you want a conscious machine, you have to build it to be conscious, including whatever physical apparatus is necessary.



But we are not discussing a simulation of evolution, but a simulation that actually evolves. There is an actual change in the behavior of the machine based on what occurs.

If we ran a program that simulated critters that scavenge for food, those critters would actually evolve if we set up the conditions as I have outlined -- just the rules of the system and basic guidelines for what a critter would be. Those changes would be reflected in the movements of electrons through gates; the relationships that accounted for the changes would still be reflected in those electron movements.

So, we could theoretically match all of those changes in electron behavior to dumb robots with or without feedback loops and see their behavior change over time -- based on the changes in electron movements through gates that enact the simulation.

No one is saying that the machine is going to start doing things in the real world like scavenging for food. It has an output, however, that can be translated so that we can see what the changes it creates amount to. And that output changes over time to meet changes in the 'environment'.
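The "simulation that actually evolves" idea can be made concrete with a deliberately tiny sketch. Everything here is illustrative -- critters are single numbers, the "food" is a fixed location, and the parameters are arbitrary -- but the program only specifies the rules (select, mutate), never the final behavior:

```python
import random

random.seed(0)
FOOD = 7.3  # location of the 'food' in a 1-D world (arbitrary choice)

def fitness(critter):
    """Closer to the food means fitter."""
    return -abs(critter - FOOD)

# Start far from the food; no critter 'knows' where it is.
population = [random.uniform(0, 1) for _ in range(20)]

for generation in range(100):
    # Selection rule: keep the better half.
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # Reproduction rule: offspring differ slightly from their parents.
    offspring = [c + random.gauss(0, 0.5) for c in survivors]
    population = survivors + offspring

best = max(population, key=fitness)
print(best)  # the population has drifted from near 0.5 toward FOOD
```

The machine running this never scavenges for anything; but its output changes over generations in response to the 'environment', driven by rules rather than a preordained script -- which is all the post above is claiming.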
 
Despite the problems with this description, do you not understand why it is irrelevant?

Consciousness is not learning. It is not memory. It is not self-referential information processing. It is not perception. It is not responding to the environment.

All of this can happen with or without the involvement of the systems that handle the phenomenon/behavior of conscious awareness.

If you want to explain consciousness, you have to explain consciousness. There's no way around it.


I am well aware of all of this, and it doesn't matter as far as what I am describing here. Remember my profession after all.

Now the problem with the description, especially where it comes to the programming, may be a real problem.
 
Sorry, guys, for the delay. This thread moves fast so I fear my response may be old news already; however:


Yes, you can certainly treat the questions separately, to see what is and isn't being claimed for consciousness, but they are related. If it's impossible to implement the simulation, then physical reality cannot be simulated, so the question whether beings in a perfect simulation would be conscious becomes moot.



For me, interpretation -- what it means in this context (being conscious; we're familiar with the top-down sort, but bottom-up needs a lot of explanation) -- is the question, the nail on which the whole frame depends: either we hang our pixellated picture of consciousness from it, or drive it into the computationalist coffin (so to speak). :p



sim-blobru sim-spits, sim-scuffs the sim-dirt, sim-thrusts his sim-hands in his sim-overall pockets and takes a sim-sidelong look at sim-nothing; sim-offers sim-Ich_wasp some sim-chaw. :talk025:



That meaning question again. Interaction as imposing meaning may be the key to understanding bottom-up meaning. There is certainly a sense in which interactions impose meaning, as long as they are differentiable. For example, in my simple universe of X & x, if there is a single entity with two states, it has nothing to interact with. But, if we add a second entity, and then assume interactions between the two entities depend on the state of each, meaning begins to emerge.

For example, assume the two entities sometimes interact by colliding, and that the reactions are different according to which state each is in. Symbolically, something like: x+x=X+X ; X+x=x+X ; x+X=x+X ; X+X=x+x+x (the first two collisions swap states; the third does nothing; the fourth swaps states and creates a new entity). Remember in the single entity universe, with no interactions there was no way to distinguish x expanding to X and X shrinking to x. Now, after assigning very basic properties to our two entities, we have a way to distinguish between them. Now if we run a program of this universe, the history of the program, and the switching which underlies it, may occur in a way which is only isomorphic to the simple interaction rules we assumed initially. In a way then, by running the simulation, the history of the simulation imposes bottom-up the interpretation we assumed top-down. And in that way, interaction may function as interpretation, and meaning emerge from complexity.
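The toy universe just described is simple enough to run. The sketch below encodes the four collision rules exactly as stated (x+x=X+X ; X+x=x+X ; x+X=x+X ; X+X=x+x+x); the function names and the random pairing scheme are my own additions for illustration:

```python
import random

# The two-state toy universe: entities are 'x' or 'X', and meaning
# emerges only from how collisions transform them.
RULES = {
    ('x', 'x'): ['X', 'X'],       # both flip up
    ('X', 'x'): ['x', 'X'],       # states swap
    ('x', 'X'): ['x', 'X'],       # nothing happens
    ('X', 'X'): ['x', 'x', 'x'],  # flip down and create a new entity
}

def collide(universe, i, j):
    """Replace entities i and j with the products of their collision."""
    products = RULES[(universe[i], universe[j])]
    rest = [e for k, e in enumerate(universe) if k not in (i, j)]
    return rest + products

random.seed(1)
universe = ['x', 'X']
history = [list(universe)]
for _ in range(5):
    i, j = random.sample(range(len(universe)), 2)
    universe = collide(universe, i, j)
    history.append(list(universe))
print(history)
```

Run it and the history of states is exactly the "bottom-up" record being described: the switching that underlies the run is only isomorphic to the interaction rules we wrote down top-down, yet the distinction between x and X is recoverable from that history alone.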

That's a very quick and dirty discussion of the principle (not sure how well my example works as it's just off the top of my head, but hopefully it fits well enough Wolfram's cellular automaton principle mentioned by Myriad elsewhere); it has some fascinating implications and seems to overlap with logical recursion and even quantum theory (though it's easy to get carried away with superficial resemblance), but I'm out of my depth and late for dinner so I'll leave it there for digestion's sake. :drool::)



Dude,

I'm leaving all the math and programming talk to youse guys as understands it.

I'm just the neuron system and EEG guy here.

But, yes, I think it helps.
 
That is a clearer way of putting it. The problem I have is when you say things like "the water is real in the simulation" or "the person is real in the simulation". Why? I don't consider water to be an action and I don't consider a person to be an action.
Well, there's your problem.

Water and people and minds are defined by their interactions. The difference between water and people on the one hand and minds on the other is that we can interact with a simulated mind in all the same ways that we can interact with a real mind.
 
Yes. Carrying out the search processes to show you those things is a mental activity.

No, it is not.

It is not in the least analogous.

If I ask you to show me a street view of San Francisco, can your brain do it?

If I ask you to process my order with Amazon.com, can your brain do it?

A: No.
 
You're still on the wrong track, my friend.

If it's an evolving simulation, it's still a simulation. And a simulation doesn't cause the machine running the simulation to take on qualities or behaviors of the system being simulated, because the machine simply keeps doing what it always did.

If it was conscious beforehand, it will continue to be conscious. If it wasn't, then it won't be.
Which is pure dualism. So I'll chalk you up for the magic column too?
 
I am well aware of all of this, and it doesn't matter as far as what I am describing here. Remember my profession after all.

Now the problem with the description, especially where it comes to the programming, may be a real problem.

Our professions are irrelevant, and it matters a great deal.
 
Which is pure dualism. So I'll chalk you up for the magic column too?

Oh, Jesus, are you pulling out that dualism card again?

I'm sorry, but I'm going to have to require you to justify it.

It is clearly NOT dualism to state that a computer doesn't radically alter its physical behavior when it runs various simulations.

And it is clearly NOT dualism to observe the fact that computers don't take on the qualities of systems they are digitally simulating.

Therefore, a computer running a sim of a hurricane does not somehow gain an average wind speed, and a computer running a sim of a brain does not somehow become conscious.

There's zero dualism in that.

And that's the case regardless of your unfounded assertion.

If you want to claim dualism, I'm afraid you're going to have to actually back it up.
 
It is not a proven fact that every kind of machine or system can be simulated by some Turing Machine.

Nor do we know for sure whether the true and complete underlying mathematical description of our universe allows for perfect simulation by a Turing Machine. For example, perhaps our universe requires an uncountable infinity of independent rules to fully describe it.
If your argument requires that you assume that everything we know is wrong, you have a real problem.

It may seem unlikely to some that a Turing Machine would not be sufficient, given what we think we currently understand about how the brain works, and given that theoretical physics seems to be heading in the direction of a relatively small and compact set of rules rather than the other way (as far as I know, at least), but this proves nothing.
See above.

There have been various references to Turing Machines and the Church-Turing Thesis earlier in this thread, and also frequent references to logic gates, so it certainly seems that everybody agrees that any simulation is restricted to being hosted on nothing more computationally powerful than a Turing Machine.
Sure, since nothing more powerful has ever been devised that is logically consistent and physically possible.

To me this makes it quite clear that the most you could say for any such simulation is that it might produce consciousness, not that it absolutely will.
Wrong. The simulation will be indistinguishable in all respects from our Universe. There's no maybe. Either you accept that it can produce consciousness, or you believe in magic.

As such I'm not sure what the "thought experiment" could achieve apart from sorting those on the thread into rough groupings according to their personal beliefs about how they think the universe or consciousness may work.
We already know what those personal beliefs are. What I'm asking for is some justification for those beliefs. On the one side - computationalism - we have an understanding of biology and physics. On the other side, so far all we have is logical fallacies. I'm hoping for more than that.
 
But they are not analogous situations. The damage a hurricane does in the real world depends on the existence of a certain type of matter in a particular relationship. The computer cannot reproduce the matter, it can only reproduce the relationships. The same is true of consciousness, except that the kind of matter and type of action that the computer actually does -- moving electrons around -- is much closer to what a brain does, so the relationships should be easier to see. And translation between the electron movement and us actually seeing consciousness should be much easier.

Now, if we had technology far beyond our current capacity and linked all the information from the simulation to the real world -- 'atom' to atom and relationship of 'atom' to atom (where the scare quotes denote the simulated atoms) -- then we should be able to recreate all the forces of a hurricane. There is no way to do that physically that we are aware of, but it should be possible in theory.

They are entirely analogous.

You don't get reproductions of actual behavior in the physical world from digital simulations. Period. (Unless, of course, you rig it up from the get-go to be redundant, as in the example of the simulation which creates a simulated heat equivalent to the heat produced by the machine running the sim, or the example of a computer which runs a sim of itself.)

What a computer is doing physically is not equivalent to what a brain is doing physically.

If you do create a machine that's doing physically what a brain is doing, then you have a model, not a simulation.
 