Has consciousness been fully explained?

That makes total sense provided somebody else taking issue with an aspect of the theoretical model isn't immediately accused of subscribing to the supernatural.

Complicated by the fact we're talking about consciousness, which has intentionality or an "interior" (I believe). The theoretical model doesn't address this at all; it is assumed that if you build the "outside", the "inside" will come. It worked in "Field of Dreams" :)


Agreed.


Not familiar with your use of origin and eternal presence here.

We may be misunderstanding each other. My point was as above a reference to "interiors" and "exteriors". That generally speaking a sufficient "interior" structural development (or adequate interpretation of logically coherent possibilities in the very very general example above) is necessary before a theoretical model all of the forces of nature provided there is a single substance can be effectively constructed.

And that this theoretical model all of the forces of nature doesn't account for the "interior" structural elements which enabled its construction at all.


I think we are saying the same thing.



I agree it's a dead end though I'm curious how to explain it away logically.



Me too. Paul A started a discussion of it a while back, but we haven't really discussed it in any detail for a while. When I have brought up the possibility of dualism being impossible logically and tried to explain it away in the past I have been accused of all sorts of terrible human activity amounting to baby consumption.

IIRC the article that Paul A brought up discussing how we use 'logical possibility' carried on a discussion concerning the logical possibility of iron floating. We can imagine iron floating, though it cannot possibly do so physically. But do we really imagine iron floating or do we imagine something that looks like an iron bar floating, because for iron to float would require that it have a different specific gravity than it does and part of the definition of iron includes that particular specific gravity (holding water constant)?

The upshot was that, perhaps, when we say things like "I can imagine iron floating" it's just word-play.

But I think that would be a huge derail; and I'm not sure I completely accept that argument.
 
There is no way to explain away dualism. It is logically possible. It is simply intellectually unsatisfying/an intellectual dead end.
Some people feel that way about materialism.

First of all, I am not a computationalist; I am arguing against the people who say that the computationalists are wrong because I do not see merit in their arguments.
I see some merit in their arguments.
Even here and now, when we use the word 'real' we are not assured that it describes Ultimate Reality. We simply think that it does.
I agree. In fact, some people aren't even that certain about it.
This discussion has been raging for several years now. We have always defined what occurs in a simulation in functional terms, not ontologically, and have always argued that it is in the function that actions (such as consciousness or flight) are defined.

If consciousness is defined by a pattern of actions then that pattern has an isomorphism with some mathematical construct. To me, that moves it out of the realm of the physical substrate it resides on and into the idealized world of mathematics.

You are speculating that consciousness exists in pattern not substance. Pattern is not a part of the physical world; pattern lives in the realm of mathematics. Then you ask what is the second 'dualistic' substance? mu. Dualists who believe in souls are not speaking about a second substance. They are talking about the non-physical aspects of our consciousness.

A computer simulation that could recreate everything in our world should recreate consciousness. This doesn’t happen at the level of ‘particles’ interacting ‘in the simulation’ but in the electrons moving around through gates. Consciousness is an action – it is a pattern of interacting bits; and if we could recreate the world in a simulation, somewhere in those whizzing electron interactions is a pattern that does the same thing as me writing this on my computer consciously.
I find this a very good argument. However, I don't find it convincing either. We know of no reason that such a thing wouldn't happen, but equally we know of no reason why consciousness should simply arise from such a setup. We simply don't understand it well enough yet.

To argue against this type of scenario one must either contend that we cannot describe/recreate the patterns of the world using math/computation or that there is some other unexplainable component involved in the process.
Or one can simply point out that while it is a fine, logically constructed argument, there is no way of empirically testing it and therefore, it cannot be falsified.

I'm not sure what you are asking since there are many things that we do not know.

Myriad reported the logical possibilities. There is either one substance with its underlying physical laws (and a sufficiently sophisticated machine could replicate those laws and therefore simulate that reality) or there is not. If there is not one substance, then there are two or more. If there are two substances, then we have magic, since that word describes the interaction between completely different substances.
What are actions made of? I disagree that dualism requires two different substances - at least not the way it's used around here. But I understand your point of view as well. If you don't mind allowing such things as souls, ghosts, etc. into a monistic POV, I don't think we disagree about much.
No, I didn't say any of those things.

What I said was, there are only two possibilities: consciousness actually does result entirely from underlying processes describable by physical laws, or it does not. If it does, then a machine sufficiently replicating all those processes would also be conscious. If it does not, then by definition it is magic.

Take your pick, but the latter is the dualistic view.

Respectfully,
Myriad

I think the problem is that I'm not convinced that such a machine is possible. To assume such a thing is possible makes it trivially true. Not much to discuss, because you're assuming that any additional needed dimensions of reality can be incorporated into the machine.

For illustrative purposes, assume that we are part of a simulation -- we call what we see around us 'our world'. A perfect simulation should be able to produce people who would say the same thing -- they would speak of their 'world'.

That we know that their reality is actually caused by something other than what they think wouldn't stop them from speaking of what they see as their world. It is in that sense that I speak of it as well.

Consider the denizens of your simulated world trying to construct a simulation of theirs of the type that Myriad described. Wouldn't their simulation of their reality also be a simulation of our reality? I must quote you from earlier:
Even here and now, when we use the word 'real' we are not assured that it describes Ultimate Reality. We simply think that it does.

Another thought experiment. Imagine an infinite regress of such simulations. How could you ever determine if your own reality was or was not the beginning of it when you built the simulation?

Not necessarily, no it doesn't. If the nature of reality is that they are simply actions in a computer and Ultimate Reality is matter, then their belief that they know what matter is simply is wrong (from the viewpoint of Ultimate Reality), and there is no dualism involved. Someone could, of course, read dualism into it, but that wouldn't make it be dualistic.

The problem with them deciding on dualism would be the same as any of us deciding on dualism -- how can two entirely different types of substance interact? The reason that they are called different substances is because they cannot interact. No one can solve that problem because it doesn't have a solution.

I want to thank you for this conversation. It has led me to some new thoughts on the subject - at least new to me. If consciousness is a pattern or an action (which can be described as a pattern of physical particles in time), it is not composed of any substance, but it is nonetheless 'real' in a way most of us are willing to accept. I think that many descriptions of souls, spirits, etc. are an attempt to describe the substrate-independent pattern that could be considered the essence of a person or thing.
 
What is being denied is that a very particular kind of machine "doing computations" is both sufficient and necessary for consciousness. It's important to realise that it's that claim which is being disputed, and it's a far stronger claim than the above.

Exactly.

What the computationalists claim is that we can get the actual mechanism, the machine itself, to exhibit real-world behavior without any physical cause.

The physical machine would exhibit conscious behavior in the real world simply because of what's happening on an "informational" (i.e., abstract) level.

This, of course, is balderdash.

Or, at least, it is unless they come up with some sort of coherent explanation.

But it would have to be a quite revolutionary one, because we're violating the laws of conservation there.

If there is literally only enough hardware to support running the program, then the machine is electro-physically doing the same sort of thing it's doing at any other time, when it's supposedly not conscious.

This is like saying that we can get a machine to jump by pure programming, even though the machine will be doing the same sort of thing in the physical world as always.

This claim seems to arise from an unfounded opinion that consciousness occurs in the brain as a result of pure information processing (IP).

And yet, so far we've seen no explanation of how this might occur (even from the poster who likes to needle others with demands to "be specific").

The whole thing is a mare's nest from top to bottom.
 
What's also crept in is the idea that a simulation is a "world" in itself, just as real as this one. I find this a very odd concept.

Oh, come on, admit it... it's not just "odd", it's ridiculous.
 
But no-one on your side has put forward a single coherent counterpoint to the computationalist position. Not one. Ever.

Baloney.

First of all, your misinterpretation of Church-Turing has been explained.

Secondly, it's been explained why a computationalist "explanation" of consciousness violates the known laws of physics.

Thirdly, it's been shown that you actually have no computationalist explanation or model of consciousness at all. Unless, of course, you care to produce one now.

This idea of yours has fallen apart at every available opportunity.

If you want to claim that it's true, you're going to have to do much better than that.

And doing better does not include trying to get people to prove you wrong.
 
Then I must consider myself an outlier because I am arguing against the implication that it is impossible for computation to account for consciousness. It is one thing to argue that we need to demonstrate that computation is sufficient to account for consciousness (what !Kaggen seems to be arguing) and another thing to argue that it cannot.

Care to explain how we get observable behavior without a direct physical cause?

Care to explain how the abstract/symbolic slops over into the real physical world?
 
Some people feel that way about materialism.

I see some merit in their arguments.
I agree. In fact, some people aren't even that certain about it.


If consciousness is defined by a pattern of actions then that pattern has an isomorphism with some mathematical construct. To me, that moves it out of the realm of the physical substrate it resides on and into the idealized world of mathematics.

You are speculating that consciousness exists in pattern not substance. Pattern is not a part of the physical world; pattern lives in the realm of mathematics. Then you ask what is the second 'dualistic' substance? mu. Dualists who believe in souls are not speaking about a second substance. They are talking about the non-physical aspects of our consciousness.

I find this a very good argument. However, I don't find it convincing either. We know of no reason that such a thing wouldn't happen, but equally we know of no reason why consciousness should simply arise from such a setup. We simply don't understand it well enough yet.

Or one can simply point out that while it is a fine, logically constructed argument, there is no way of empirically testing it and therefore, it cannot be falsified.

What are actions made of? I disagree that dualism requires two different substances - at least not the way it's used around here. But I understand your point of view as well. If you don't mind allowing such things as souls, ghosts, etc. into a monistic POV, I don't think we disagree about much.


I think the problem is that I'm not convinced that such a machine is possible. To assume such a thing is possible makes it trivially true. Not much to discuss, because you're assuming that any additional needed dimensions of reality can be incorporated into the machine.



Consider the denizens of your simulated world trying to construct a simulation of theirs of the type that Myriad described. Wouldn't their simulation of their reality also be a simulation of our reality? I must quote you from earlier:


Another thought experiment. Imagine an infinite regress of such simulations. How could you ever determine if your own reality was or was not the beginning of it when you built the simulation?



I want to thank you for this conversation. It has led me to some new thoughts on the subject - at least new to me. If consciousness is a pattern or an action (which can be described as a pattern of physical particles in time), it is not composed of any substance, but it is nonetheless 'real' in a way most of us are willing to accept. I think that many descriptions of souls, spirits, etc. are an attempt to describe the substrate-independent pattern that could be considered the essence of a person or thing.


Beth,

I agree it has been a good conversation and you have asked very good questions, but I don't think I would call any of that dualism. Yes, we can define a substance dualism and property dualism, but I don't think it is proper to label actions as another substance or a property. I think they are the reason why we think in terms of dualism, though.

The problem with spirits and ghosts is that all of the actions I am describing (and all actions) only occur because of the numerous constraints and I don't see how spirits or ghosts maintain any sort of pattern. There is always some physical substrate that makes actions possible in every example provided, including the simulation.

Regarding the infinite regress of simulations, I don't see how anyone could tell they were in any simulation; and I can see nothing to stop such an infinite regress.

I also am a bit dubious about the possibility of such a simulation.
 
Care to explain how we get observable behavior without a direct physical cause?

Care to explain how the abstract/symbolic slops over into the real physical world?



Hey Piggy, welcome back, always glad to see you.

Well, there is always an underlying physical substrate -- electrons moving through logic gates.

All that our programming languages do is provide an easy means for us to see how to control the opening and closing of those gates because no one wants to think in zeroes and ones.

I think one of the problems we run into is in thinking too much about the programming language and thinking about it in purely abstract terms. As an abstraction it can't actually do anything. It is only as a top-down way of controlling the logic gates that it does anything -- through the machine language, etc.

We concentrate on the programming language because that is what programmers do and it's hard to see the kinds of things we normally think of as actions in electrons zipping about; but the relationships should be maintained in a perfect kind of simulation.
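To make the gate-level point concrete, here is a rough Python sketch (purely illustrative; no real compiler or ALU works exactly like this): a high-level '5 + 3' can be re-expressed as nothing but the AND, XOR and shift operations that the physical switching is actually performing.

```python
# Toy illustration: reducing a high-level '+' to the bitwise operations
# an adder circuit built from AND/XOR gates carries out.
def gate_add(a: int, b: int, width: int = 8) -> int:
    """Add two non-negative integers using only AND, XOR and shifts."""
    mask = (1 << width) - 1
    while b != 0:
        carry = (a & b) << 1       # the AND gates produce the carry bits
        a = (a ^ b) & mask         # the XOR gates produce the partial sum
        b = carry & mask
    return a

assert gate_add(5, 3) == 8         # same answer as the abstract '5 + 3'
```

The '+' we write at the top is only a convenience; what the machine does is the switching.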
 
Hey Piggy, welcome back, always glad to see you.

Well, there is always an underlying physical substrate -- electrons moving through logic gates.

All that our programming languages do is provide an easy means for us to see how to control the opening and closing of those gates because no one wants to think in zeroes and ones.

I think one of the problems we run into is in thinking too much about the programming language and thinking about it in purely abstract terms. As an abstraction it can't actually do anything. It is only as a top-down way of controlling the logic gates that it does anything -- through the machine language, etc.

We concentrate on the programming language because that is what programmers do and it's hard to see the kinds of things we normally think of as actions in electrons zipping about; but the relationships should be maintained in a perfect kind of simulation.

None of that matters, really. At least, not when it comes to consciousness.

What matters is that the computer is doing the same sort of thing electro-physically whether it's running a sim of a brain (however close to perfect) or whether it's not.

You can't get consciousness by pure programming for the same reason you can't get jumping by pure programming. And for the same reason that a computer can't power itself by running a sim of Hoover dam.

We don't know what exactly the brain is doing to make the phenomenon of conscious awareness stop and start like it does, but there's no reason to doubt that the cause is electro-physical.

Your example of addition is rather deceptive, btw.

We know computers can add. We have no reason to believe they are conscious, or can be conscious by themselves.

So when we get a computer to run a sim that does things that computers already can do, we're simply introducing a redundancy, but it's a situation that conforms entirely with the observation that simulations do not change what the computer is doing in the real world.

If a computer can add, then it can add.

Running the sim doesn't change that.

In other words, in that situation, the rule doesn't change: The sim has no effect on the real-world capacities of the machine.

However, to assert that the actual physical mechanism, the box and wires and chips, will become conscious in the real world as a result of running a sim -- despite the fact that the physical behavior of the apparatus is no different -- this is an absurdity.

(Oh, and sorry about the long absence. Work is kicking my butt. I'm developing a database w/ a fellow from another dept who has a very different philosophy about design than I do -- and this time, I'm not budging.)
 
Ichneumonwasp said:
That makes total sense provided somebody else taking issue with an aspect of the theoretical model isn't immediately accused of subscribing to the supernatural.

Complicated by the fact we're talking about consciousness, which has intentionality or an "interior" (I believe). The theoretical model doesn't address this at all; it is assumed that if you build the "outside", the "inside" will come. It worked in "Field of Dreams" :)


Agreed.


Not familiar with your use of origin and eternal presence here.

We may be misunderstanding each other. My point was as above a reference to "interiors" and "exteriors". That generally speaking a sufficient "interior" structural development (or adequate interpretation of logically coherent possibilities in the very very general example above) is necessary before a theoretical model of all of the forces of nature provided there is a single substance can be effectively constructed.

And that this theoretical model of all of the forces of nature doesn't account for the "interior" structural elements which enabled its construction at all.


I think we are saying the same thing.


"Fixed" my quote. You still agree? :D
 
None of that matters, really. At least, not when it comes to consciousness.

What matters is that the computer is doing the same sort of thing electro-physically whether it's running a sim of a brain (however close to perfect) or whether it's not.

You can't get consciousness by pure programming for the same reason you can't get jumping by pure programming. And for the same reason that a computer can't power itself by running a sim of Hoover dam.

We don't know what exactly the brain is doing to make the phenomenon of conscious awareness stop and start like it does, but there's no reason to doubt that the cause is electro-physical.

Gonna let much of that go, but I don't see why none of it matters when it comes to consciousness, because I think we can agree that whatever consciousness *is*, it occurs because of brain action, so it is an action. Yes, we do not understand what the brain is doing -- that is why we speak in terms of a simulation that recreates what occurs in the real world. If such a machine could exist, it should recreate consciousness among many other things. We speak of all of this occurring in the simulation or because of the programming, but the real truth is that all of it occurs in the electron movements through logic gates that are controlled through the programming.

There is no such thing as pure programming. Programming, keep in mind, is just the level of abstraction at which we work to understand how to control the logic gates. The real work still occurs in the logic gates.


Your example of addition is rather deceptive, btw.

We know computers can add. We have no reason to believe they are conscious, or can be conscious by themselves.

So when we get a computer to run a sim that does things that computers already can do, we're simply introducing a redundancy, but it's a situation that conforms entirely with the observation that simulations do not change what the computer is doing in the real world.


It depends a bit which example of adding to which you refer. If you refer to my reply to Westprog, that was meant only to counter the claim that simulations are not expected to produce real effects. It doesn't matter if it is in a sim or not; it can produce real effects.


In other words, in that situation, the rule doesn't change: The sim has no effect on the real-world capacities of the machine.


I never said that it did for that situation, but it is not the case that the kind of simulation we are discussing has no effect on the real-world functioning of the machine.


However, to assert that the actual physical mechanism, the box and wires and chips, will become conscious in the real world as a result of running a sim -- despite the fact that the physical behavior of the apparatus is no different -- this is an absurdity.

(Oh, and sorry about the long absence. Work is kicking my butt. I'm developing a database w/ a fellow from another dept who has a very different philosophy about design than I do -- and this time, I'm not budging.)


(Sorry about RL, been there done that -- with other players and issues, of course)

I am not saying that, though. It is the action within the machine that is important, just as it is the action within the brain that 'produces' consciousness.

Let me use the other way that I have talked about addition in this thread -- just as we can think of '5' and '3', the computer can do something similar.

In our brain there would be a particular sequence of neuron firing that *is* the number '5' and another, slightly different sequence that *is* the number '3'. We can then add these numbers together in our minds -- and this is another sequence of neuron firing that 'uses' the earlier sequences of neuron firings that 'represents' the numbers -- with all of this being one grand action occurring within our brains. There is no 'reality' to any of the numbers -- they are concepts. Everything, in this whole process is an action.

The same sort of thing happens in the computer simulation. We start with a description of a particle or set of particles and a description of a set of physical principles. What really happens is that the description of the 'particle' is created by a certain set of gates opening or closing. And the 'physical principles' are also enacted by a certain set of gate opening and closing. Now, I haven't the slightest idea how such a thing could be carried out in a computer, but supposedly it can. Everything else that occurs in the simulation is based on the way that 'particles' interact through 'physical principles' to create something that was not originally coded.

Granted, most of the way that we think of how computers function is -- line of code means this behavior -- but there are plenty of programs that learn and change their behavior based on what occurred before; and this sort of change does not occur in the program but in the way electrons move through gates based on rules set up by the original program. I could be wrong, but my understanding is that the behavior of the computer changes based on what occurs earlier at the level of what happens in the machine and not the coding itself. This would be one grand version of things unfolding in a way that could be predicted if we had enough knowledge but none of us could do it; and no one could do it by reading the code. If we assume perfect knowledge, which none of us have, and assume that we could alter the unfolding of the universe with all the contingencies involved, then we should end up with everything that we see in this world with relatively simple coding.

This is not a situation, for this sort of thought experiment, in which everything is coded ahead of time. We don't code for much in this -- only descriptions of the particles and the physical laws that describe their interactions over time -- so it is not like we are using programming to do all the work. But it really shouldn't matter because programming doesn't occur in a vacuum. It actually represents a top-down way of controlling what actually occurs in the logic gates in the way that nature did it from the bottom up.
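For what it's worth, here is the sort of thing I have in mind in miniature (a toy Python sketch with invented constants, nowhere near a real physics simulation): the only things written down are two 'particle' descriptions and one 'law', and the trajectory that results is never coded anywhere.

```python
# Minimal sketch of 'code only the particles and the laws, let the rest unfold'.
# The constants and the force rule are invented for illustration.
particles = [
    {"pos": 0.0,  "vel": 0.0, "mass": 1.0},
    {"pos": 10.0, "vel": 0.0, "mass": 1.0},
]
G, DT = 1.0, 0.01                      # toy 'physical constants'

def step(ps):
    # forces from current positions, then update velocities, then positions
    for i, p in enumerate(ps):
        force = 0.0
        for j, q in enumerate(ps):
            if i != j:
                r = q["pos"] - p["pos"]
                force += G * p["mass"] * q["mass"] * (1 if r > 0 else -1) / max(r * r, 1e-6)
        p["vel"] += (force / p["mass"]) * DT
    for p in ps:
        p["pos"] += p["vel"] * DT

for _ in range(1000):
    step(particles)
print([round(p["pos"], 2) for p in particles])   # behaviour that emerged, not coded
```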

You are quite right to point out that the programming would need to allow for changes in the ways that electrons move through logic gates. If this is not a dynamic process, then we are not talking about consciousness. My understanding is that programming can carry off changes such as this (learning), but this is my absolute weak point in all these discussions because computers and I get along only passingly. I've set up my own home network with home built server and one home built computer, but that's about it. I am not a programmer. I can speak much more effectively about neuron function.
 
Gonna let much of that go, but I don't see why none of it matters when it comes to consciousness, because I think we can agree that whatever consciousness *is*, it occurs because of brain action, so it is an action. Yes, we do not understand what the brain is doing -- that is why we speak in terms of a simulation that recreates what occurs in the real world. If such a machine could exist, it should recreate consciousness among many other things. We speak of all of this occurring in the simulation or because of the programming, but the real truth is that all of it occurs in the electron movements through logic gates that are controlled through the programming.

Ok, but so what?

That can be said of a simulation of anything, right?

And yet no one claims that the actual physical machine should somehow begin to exhibit the behavior of the system being simulated digitally.

There's no reason why consciousness should be an exception.

If we build a model brain, and if it's sufficiently robust, it will be conscious.

But this has nothing to do with the behaviors of machines running simulations.
 
Sorry, guys, for the delay. This thread moves fast so I fear my response may be old news already; however:
Ignoring the potential absurdity of the implementation, a perfect simulation would simulate consciousness, by definition.
This is what I have a problem with. Consciousness is being defined as a simulation, i.e. our reality is a simulation. Then the claim follows that consciousness can therefore be simulated. It's a circular argument.

But that's an implementation issue: a serious hurdle for the claim that reality is, in fact, a simulation; but not for the claim that if it is a simulation, then so is our consciousness.

Exactly, the problem boils down to the relationship between concepts and percepts. We can conceptualize our reality as a simulation, but we also need to perceive it as such before we can claim we have knowledge that a simulation can re-create reality. This is the essence of the Turing Test. It is not a thought experiment.

Yes, you can certainly treat the questions separately, to see what is and isn't being claimed for consciousness, but they are related. If it's impossible to implement the simulation, then physical reality cannot be simulated, so the question whether beings in a perfect simulation would be conscious becomes moot.

Not from the pov of the simulation, no. Everything would be simulated stuff.

I think it's the interaction between the simulating world's physical stuff, the physical switch sequences which create the simulated world, and the simulated world's simulated stuff, which requires a higher-level interpretation on top of the switch sequences, that was causing confusion. I felt (mistakenly, I guess; see discussion to follow) it was being claimed there was a necessary, uninterpreted connexion between some physical switching and specific simulated actions, which would smack of dualism, recalling Descartes' mysterious mediation of matter and soul via the pineal gland.


I'm not sure where the confusion arises. It very likely does arise from that interaction. I tried to make it easier for folks to see what was going on by mentioning the actual physical changes but likely muddled it more thoroughly.

There is no way that a simulation could proceed with no interpretation between what occurs in the simulation and the switches of the computer; it would be meaningless without it. That is why earlier I tried to make the distinction between the bottom-up meaning that we have (through natural selection) and the top-down meaning of a computer system (where we impose meaning by arranging which gates will open and when).

For me, interpretation, what it means in this context (being conscious, we're familiar with the top-down sort, but bottom-up needs a lot of explanation), is the question, the nail on which the whole frame depends: either we hang our pixellated picture of consciousness from it, or drive it into the computationalist coffin (so to speak). :p

Ignoring the potential absurdity of the implementation, a perfect simulation would simulate consciousness, by definition.

Yes. The simulation argument being that if our program describes the right kind of world, finely-grained, properly-ordered, sufficiently complex, entities within that world will be conscious. If we assume that is true, then trivially, it seems to me, the argument follows.

Of course, that reality is, effectively, nothing but descriptions of well-defined changes in definite physical states is a whole nother claim, as it may be impossible to implement (and if it can't be implemented [even in a thought experiment], trivially, it can't be true).

Yep.

sim-blobru sim-spits, sim-scuffs the sim-dirt, sim-thrusts his sim-hands in his sim-overall pockets and takes a sim-sidelong look at sim-nothing; sim-offers sim-Ich_wasp some sim-chaw. :talk025:

Right, that would be a serious, seemingly fatal, implementation issue. As an illustration, consider an extremely simple universe: one entity with two states (fluctuating between big X and small x, say). Obviously, this universe can be represented by a single switch with two states: off & on. Easy enough. The problem is: which is which?

That is, within the world which the program creates, how will each state of the programmed switch be realized? Will "off" = X and "on" = x; or will "off" = x and "on" = X? <corrected> In the simulating world, we have two choices for interpretation. So what's going on in the simulated world? When the switch is "off", does the simulated world consist of big X, or little x? (Or do we create two mirror worlds? Or is bigger and smaller an arbitrary illusion?)

With no necessary connection between physical switching and our competing interpretations of the switching (what it makes sense to us to describe it as), we lose any necessary connection between physically constrained forms of experience [physical interpretations of substance] and substance. The same switches may represent logically equivalent but physically different worlds (in our simple example, one where x is expanding, another where X is shrinking). Maybe that's the case -- sounds a little, though only a little, like many-worlds quantum theory -- but whatever it is, it's not as simple as every physical representation uniquely prescribing a virtual world... voila!
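To pin the ambiguity down, here it is in deliberately silly code form (Python, everything invented): the physical switch history is fixed, but two readings of the same history describe two different simulated worlds.

```python
# One physical switch history, two equally consistent interpretations.
switch_history = [0, 1, 1, 0, 1, 0, 0, 1]        # what the machine physically did

reading_A = {0: "X", 1: "x"}                     # 'off' means big X
reading_B = {0: "x", 1: "X"}                     # 'off' means little x

world_A = [reading_A[s] for s in switch_history]
world_B = [reading_B[s] for s in switch_history]

print(world_A)   # one story about what the simulated entity is doing
print(world_B)   # the mirror-image story, from the very same switching
```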

But that's an implementation issue: a serious hurdle for the claim that reality is, in fact, a simulation; but not for the claim that if it is a simulation, then so is our consciousness.

As I have repeated often, I haven't the slightest idea if we could actually create this sort of simulation. I have no idea how it could be done, if we could interact with such a simulation if we could create it, or even how one would go about programming something that acts like a particle and then how to program the 'physical forces'.

It certainly seems as though it should be possible, though. I haven't heard an argument against it being possible that seems to stand.

There is no question that we must define meaning in such a system, but that is not the same as saying that we must observe its implementation for it to occur or to have meaning. We devise the meaning in the system from the outset. If we were to map the output of a simulation onto the 'real world' we would have to devise how it maps as well.

It has been mentioned repeatedly that this follows the simple dictum that all meaning is observer dependent, and it certainly does. The problem with the 'dictum', however, is that it is not at all clear that all meaning is observer dependent. This depends, of course, on how 'observer' is defined. Meaning is decoded (imposed) from the environment every time a sensory receptor is activated because receptors only respond to certain types of inputs and they provide information that is maintained throughout transmission (location, duration, intensity, modality, etc.). We can define receptors as observers, but they are certainly not conscious observers.

That meaning question again. Interaction as imposing meaning may be the key to understanding bottom-up meaning. There is certainly a sense in which interactions impose meaning, as long as they are differentiable. For example, in my simple universe of X & x, if there is a single entity with two states, it has nothing to interact with. But, if we add a second entity, and then assume interactions between the two entities depend on the state of each, meaning begins to emerge.

For example, assume the two entities sometimes interact by colliding, and that the reactions are different according to which state each is in. Symbolically, something like: x+x=X+X ; X+x=x+X ; x+X=x+X ; X+X=x+x+x (the first two collisions swap states; the third does nothing; the fourth swaps states and creates a new entity). Remember in the single entity universe, with no interactions there was no way to distinguish x expanding to X and X shrinking to x. Now, after assigning very basic properties to our two entities, we have a way to distinguish between them. Now if we run a program of this universe, the history of the program, and the switching which underlies it, may occur in a way which is only isomorphic to the simple interaction rules we assumed initially. In a way then, by running the simulation, the history of the simulation imposes bottom-up the interpretation we assumed top-down. And in that way, interaction may function as interpretation, and meaning emerge from complexity.
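If it helps, here is the toy two-entity universe in runnable form (a throwaway Python sketch; the random collision schedule is my own addition). The only things coded are the four interaction rules; whatever longer-run pattern shows up in the history is produced by the run itself.

```python
import random

# The four collision rules from the example above; the ordered pair means
# 'first entity hits second', so (X, x) and (x, X) are different events.
RULES = {
    ("x", "x"): ("X", "X"),          # swap states
    ("X", "x"): ("x", "X"),          # swap states
    ("x", "X"): ("x", "X"),          # nothing happens
    ("X", "X"): ("x", "x", "x"),     # swap states and create a new entity
}

def run(universe, steps, seed=0):
    rng = random.Random(seed)
    history = [tuple(universe)]
    for _ in range(steps):
        i, j = rng.sample(range(len(universe)), 2)    # pick two entities to collide
        outcome = RULES[(universe[i], universe[j])]
        universe = [e for k, e in enumerate(universe) if k not in (i, j)] + list(outcome)
        history.append(tuple(universe))
    return history

for state in run(["x", "x"], steps=6):
    print(state)
```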

That's a very quick and dirty discussion of the principle (not sure how well my example works as it's just off the top of my head, but hopefully it fits well enough with Wolfram's cellular automaton principle mentioned by Myriad elsewhere); it has some fascinating implications and seems to overlap with logical recursion and even quantum theory (though it's easy to get carried away with superficial resemblance), but I'm out of my depth and late for dinner so I'll leave it there for digestion's sake. :drool::)
 
It depends a bit which example of adding to which you refer. If you refer to my reply to Westprog, that was meant only to counter the claim that simulations are not expected to produce real effects. It doesn't matter if it is in a sim or not; it can produce real effects.

No, it cannot.

Well, unless of course you count things like the machine producing heat as it generates the simulation.

And of course you could design your simulation so that it's a simulation of a system that produces the same amount of heat as a real computer running the simulation. And then you could say "See!"

But that's a contrived triviality.

So is the addition example.

It has nothing at all to say about artificial consciousness.
 
I never said that it did for that situation, but it is not the case that the kind of simulation we are discussing has no effect on the real-world functioning of the machine.

Really?

So are you saying that if you've got a machine that can run detailed sims, and you have it run a sim of a hurricane, and then you have it run a sim of a brain, this machine doesn't take on any of the characteristics of the hurricane, but it does take on characteristics of the brain?

If so, how does that work?

If not, how is it at all relevant here?

One of the recurring problems on these threads is the proliferation of badly formed thought experiments.

If a computer running a sim of a hurricane doesn't itself have a wind speed, then a computer running a sim of a brain is not itself conscious.
 
It is the action within the machine that is important, just as it is the action within the brain that 'produces' consciousness.

The action of what?

Let's keep apples to apples here.

The physical action of the brain produces consciousness -- unless you're prepared to produce a metaphysical explanation. Therefore, the physical action of a model brain (a conscious machine) must also produce consciousness.

If you -- or anyone else -- has a description of how IP/programming can make a physical machine exhibit the behavior of being conscious, well, by all means, lay it out for us.
 
Ok, but so what?

That can be said of a simulation of anything, right?

And yet no one claims that the actual physical machine should somehow begin to exhibit the behavior of the system being simulated digitally.

There's no reason why consciousness should be an exception.

If we build a model brain, and if it's sufficiently robust, it will be conscious.

But this has nothing to do with the behaviors of machines running simulations.


Yes, correct, but it depends on what sort of simulation we are discussing.

For instance, flight has been used as an example recently in this thread. We could devise a simulation of a bird flying with code determining which pixels on a screen make the wings seem to go up and down and other pixels showing that it is moving through space. That would be simulated flight.

But that is not the sort of simulation we have been discussing in which everything is accounted for down to the atomic level. If we program in what amounts to atoms and physical laws and let them 'evolve' over time to the point where we see the entire world before us, then we should expect to see actual consciousness also emerge.

Somewhere in the patterns of electrons flowing through the right gates, there should be the pattern that coincides with what occurs in my brain as I type these words.

This could probably be arranged in many ways, but the easiest to 'see' is the 'emergence of complex behavior with a minimum of coding' scenario that I mentioned earlier.

Theoretically we should be able to do the same sort of thing if we just programmed a 'person' with the ability to learn and change. There would never be any sort of change in the coding itself; rather the change would be in the way that electrons move through those logic gates -- a change in computer behavior that was governed by the way the original code was programmed. My understanding is that such behaviors can be produced by programming now. Isn't that the basis of Paul A's work with altered DNA codes?
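Since, as I said, programming is my weak point, take this with a grain of salt, but here is roughly what I mean by behaviour changing without the code changing (a toy Python sketch; the action names and the 'reward' rule are invented for illustration). The program text never changes; only the stored numbers do, and with them the machine's future behaviour.

```python
import random

# A bare-bones 'learning' loop: the code is fixed, the data it keeps is not.
weights = {"left": 0.0, "right": 0.0}            # the only thing that changes

def choose(rng):
    # pick the action with the highest weight, with a little random tie-breaking
    return max(weights, key=lambda a: weights[a] + 0.1 * rng.random())

def reward(action):
    return 1.0 if action == "right" else 0.0     # the 'environment' prefers right

rng = random.Random(1)
for _ in range(200):
    action = choose(rng)
    weights[action] += 0.1 * (reward(action) - weights[action])   # simple update

print(weights)   # 'right' ends up dominating -- learned from history, not re-coded
```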
 
In our brain there would be a particular sequence of neuron firing that *is* the number '5' and another, slightly different sequence that *is* the number '3'. We can then add these numbers together in our minds -- and this is another sequence of neuron firing that 'uses' the earlier sequences of neuron firings that 'represents' the numbers -- with all of this being one grand action occurring within our brains. There is no 'reality' to any of the numbers -- they are concepts. Everything, in this whole process is an action.

The same sort of thing happens in the computer simulation. We start with a description of a particle or set of particles and a description of a set of physical principles. What really happens is that the description of the 'particle' is created by a certain set of gates opening or closing. And the 'physical principles' are also enacted by a certain set of gate opening and closing. Now, I haven't the slightest idea how such a thing could be carried out in a computer, but supposedly it can. Everything else that occurs in the simulation is based on the way that 'particles' interact through 'physical principles' to create something that was not originally coded.

Granted, most of the way that we think of how computers function is -- line of code means this behavior -- but there are plenty of programs that learn and change their behavior based on what occurred before; and this sort of change does not occur in the program but in the way electrons move through gates based on rules set up by the original program. I could be wrong, but my understanding is that the behavior of the computer changes based on what occurs earlier at the level of what happens in the machine and not the coding itself. This would be one grand version of things unfolding in a way that could be predicted if we had enough knowledge but none of us could do it; and no one could do it by reading the code. If we assume perfect knowledge, which none of us have, and assume that we could alter the unfolding of the universe with all the contingencies involved, then we should end up with everything that we see in this world with relatively simple coding.

This is not a situation, for this sort of thought experiment, in which everything is coded ahead of time. We don't code for much in this -- only descriptions of the particles and the physical laws that describe their interactions over time -- so it is not like we are using programming to do all the work. But it really shouldn't matter because programming doesn't occur in a vacuum. It actually represents a top-down way of controlling what actually occurs in the logic gates in the way that nature did it from the bottom up.

You are quite right to point out that the programming would need to allow for changes in the ways that electrons move through logic gates. If this is not a dynamic process, then we are not talking about consciousness. My understanding is that programming can carry off changes such as this (learning), but this is my absolute weak point in all these discussions because computers and I get along only passingly. I've set up my own home network with home built server and one home built computer, but that's about it. I am not a programmer. I can speak much more effectively about neuron function.

Despite the problems with this description, do you not understand why it is irrelevant?

Consciousness is not learning. It is not memory. It is not self-referential information processing. It is not perception. It is not responding to the environment.

All of this can happen with or without the involvement of the systems that handle the phenomenon/behavior of conscious awareness.

If you want to explain consciousness, you have to explain consciousness. There's no way around it.
 
Yes, correct, but it depends on what sort of simulation we are discussing.

For instance, flight has been used as an example recently in this thread. We could devise a simulation of a bird flying with code determining which pixels on a screen make the wings seem to go up and down and other pixels showing that it is moving through space. That would be simulated flight.

But that is not the sort of simulation we have been discussing in which everything is accounted for down to the atomic level. If we program in what amounts to atoms and physical laws and let them 'evolve' over time to the point where we see the entire world before us, then we should expect to see actual consciousness also emerge.

No, we don't expect to see actual consciousness emerge, no matter how robust the simulation is, as long as it is a digital simulation.

We only expect consciousness to emerge from a model brain, regardless of how similar it is to (or how dissimilar it is from) an organic brain.
 