
Explain consciousness to the layman.

You two are on opposite ends of this. Whereas it may have no preferred description, it is a physical process, and the number of wolves that result is indeed really 7. And yes we can also discuss other aspects of the system, but that's not a discussion of the entities we label wolves.

Given a wolf is what we call it, and that is an entity that we learned to recognize (as opposed to invented), 7 is the only correct answer for how many there are.

Wolves are a tricky example, as they might be conscious of how many of them there are, but we can count all kinds of things in that system, and I don't think that any of them are more physical than any other. Suppose we want to count the pretty wolves, or the dirty ones?
 
More than one, I hope. It's a process, isn't it?

Any real thing can be described as an event, and any real event can be described as a process.

But the question of the unity of consciousness is also an interesting one.

The split brain studies are very illuminating here.

You're probably familiar, but if any lurkers are left who are not, there's a body of research on people who have had to have the connection cut between the two halves of their brain.

Because of the ways our eyes, hands, and mouths are wired to the brain, you can use physical apparatus to communicate with only one hemisphere, while the other has no idea what y'all are discussing, or only finds out when it hears its own body say it.

Only one of these halves is controlling the language centers that can be used to talk to you (although both sides overhear the conversation).

Gazzaniga had one subject whose vocal side was religious, but whose mute side said (or wrote) that it no longer believed in God.

I probably wouldn't either, shut up in my own skull like that, only able to talk to anyone at the occasional lab test. What kind of God would allow that?

Anyway, there appear to be two coherent conscious entities in that skull now.

And there are the cases of people who lose consciousness of one half of their visual field.

The strange thing is, they don't particularly notice this, the way we don't (can't) notice our blind spot.

What they see fills up their entire conscious field of awareness, just like ours does. Looking at a mall from one end and then from the other, they would describe two completely different scenes. The fact that these views are not symmetrical doesn't bother these folks in the least.

If a fellow trips over a footrest, it wasn't because he couldn't see it, it's because he wasn't looking where he was going.

We know that some experience is bound and can't be untangled by consciousness.

But we also know that brain damage can short-circuit the entangling with very counterintuitive results, such as being able to perceive motion without any perception of shape or color or brightness.

But the evidence seems pretty clear at this point that conscious awareness has to involve real-time synchronization of activity across various regions of the brain, and there's a point at which coherence fails and the body stops being conscious.

That can happen suddenly (as with a lot of anesthesia -- I think the recent link up there mentions some anesthesia studies) or gradually, as when you fall asleep, but it always seems to follow a pattern of weak coherence followed by strengthening when consciousness is re-established.
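For what it's worth, the kind of coherence being described here is often quantified as a phase-locking value between signals from different brain regions. A toy sketch, using synthetic sine "signals" at made-up frequencies (10 Hz and 13 Hz) and an arbitrary jitter level, not data from any actual anesthesia study:

```python
import cmath
import math
import random

def plv(phases_a, phases_b):
    """Phase-locking value: magnitude of the mean phase-difference vector.
    1.0 means perfect coherence, 0.0 means none."""
    s = sum(cmath.exp(1j * (a - b)) for a, b in zip(phases_a, phases_b))
    return abs(s) / len(phases_a)

random.seed(0)
t = [i * 0.01 for i in range(1000)]

# Two "regions" oscillating at the same 10 Hz rhythm with a little phase
# jitter: strongly coherent.
a = [2 * math.pi * 10 * x for x in t]
b = [2 * math.pi * 10 * x + random.gauss(0, 0.3) for x in t]

# A third region drifting at its own 13 Hz rhythm: coherence washes out.
c = [2 * math.pi * 13 * x for x in t]

print(plv(a, b))  # close to 1.0
print(plv(a, c))  # close to 0.0
```

The point of the sketch is just that "weak coherence followed by strengthening" is a measurable quantity, not a metaphor.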
 
Well, you can move off into philosophical musings about it, but that's not really necessary in order to do what needs to be done here.
The point of my "philosophical musings" is that "addition" is simply a description, but you're trying to treat it as something special. By describing the same situation in multiple ways, I'm trying to show you that it's just a way to describe things.

But fine. Let's go straight to the heart of it.
Arguing about where groups begin and end is like the argument over the border between a language and dialect or between atmosphere and outer space... a pointless exercise that doesn't help us use these terms in the contexts in which they're significant.
What exactly is significant about it?
Aggregations in systems that don't need to include brain states -- whether that's raindrops filling up a footprint to make a small puddle, or predators closing ranks to make a more formidable fighting force -- are examples of physical addition.
Okay, but this is a bit muddled. It's really important to clear this up because we're actually talking about building brain analogues.

I don't know what you mean by "need to include brain states".

Let's say we have an artificial object--a quarter. It rolls onto the floor next to another quarter. Is that physical addition? What if, instead, I put two quarters into a vending machine. Still physical addition? In this case, note that I intentionally put two quarters in--that certainly requires brain states.

Suppose that I put five quarters in, and press A1 to get a snack. Is that physical addition? Now, how about if instead I put one dollar bill in, and one quarter, and press A1 to get a snack. Still physical addition? If so, do we count the sum as the same as the first physical addition? Note that here, to get the required result, we do need $1.25.
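To make the vending-machine case concrete, here's a toy sketch: the machine never performs symbolic addition, it just accumulates credit until a threshold is crossed. Only the $1.25 price and the coin values come from the example; the function names are invented for illustration.

```python
# Invented names; only the price and denominations come from the example.
PRICE_A1 = 125  # cents

def credit_after(insertions):
    """Total credit (in cents) after a sequence of insertions."""
    total = 0
    for value in insertions:
        total += value
    return total

def dispenses(insertions, price=PRICE_A1):
    """Does pressing A1 produce a snack after these insertions?"""
    return credit_after(insertions) >= price

print(dispenses([25, 25]))   # two quarters: False
print(dispenses([25] * 5))   # five quarters: True
print(dispenses([100, 25]))  # a dollar bill and a quarter: True
```

Note that the machine treats five quarters and a dollar-plus-quarter identically, which is part of what makes the "is this the same physical addition?" question interesting.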
Now let's compare that to what PixyMisa's marble machine does.
...
But if you bring a brain into the system which understands the symbols painted on it and the rules for its operation, then (and only then) do you have symbolic addition, in which the observing brain may be reminded of the number 26 or 52 or 8 rather than 3 when the 3 marbles make their run from the top down into the bowl.
But what's so special about a brain here? It's trivial to add a physical machine that converts these placements to a physical quantity. Are you ruling out brain states just to rule out brain states?
 
An artificial neuron could only work by physical computation.

If it doesn't, it has no means of communicating with the neurons it connects, which operate exclusively by physical computation.

I mean, you could include a symbolic computation in the mix, but why would you?

If you can replace

Neuron -> Neuron -> Neuron

with

Neuron -> Replica -> Neuron

Why bother with

Neuron -> Modulator -> Simulation -> Demodulator -> Neuron?

Probably because that is how it would be implemented.
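For what it's worth, the Neuron -> Modulator -> Simulation -> Demodulator -> Neuron chain is essentially an adapter wrapped around a symbolic component. A minimal sketch, assuming (unrealistically) that the only signal crossing the interface is "spike or no spike" per step; all the class names here are invented:

```python
class Neuron:
    """Stand-in for a physical neuron; here it just relays spikes."""
    def fire(self, spike_in: bool) -> bool:
        return spike_in

class Replica:
    """Drop-in physical replacement: same interface as Neuron."""
    def fire(self, spike_in: bool) -> bool:
        return spike_in

class Simulation:
    """Symbolic component: works on 0/1 symbols, not spikes."""
    def step(self, bit: int) -> int:
        return bit

class SimAdapter:
    """Modulator/demodulator pair letting the simulation sit between
    real neurons."""
    def __init__(self, sim: Simulation):
        self.sim = sim

    def fire(self, spike_in: bool) -> bool:
        bit = 1 if spike_in else 0   # modulate: spike -> symbol
        out = self.sim.step(bit)     # symbolic computation
        return out == 1              # demodulate: symbol -> spike

# Neuron -> Modulator -> Simulation -> Demodulator -> Neuron
chain = [Neuron(), SimAdapter(Simulation()), Neuron()]
signal = True
for unit in chain:
    signal = unit.fire(signal)
print(signal)  # True: the wrapped simulation is transparent to its neighbours
```

Under this (very strong) assumption the adapted simulation and the replica are indistinguishable from the neighbouring neurons' point of view; the dispute in the thread is precisely about whether the assumption holds for real neural signalling.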

Man this thread has just gone downhill...
 
I'm glad you brought this up, because I wasn't aware there was any confusion on this point. However, I can see why my too-narrow use of the word "built" may have caused it.

Brains are built and computers are built. That's true.

Which means that you are 100% correct that programming is "fine-grained building". What the programmer is doing at the keyboard is changing the structure of the machine it's connected to so that some of its parts behave in different ways when electricity runs through them than they did before.

(Fortunately, the programmer doesn't have to know what these physical changes are, or even be aware of them, to perform the operation remotely.)

But it is fine-grained building of a particular type. For instance, building a skyscraper isn't "programming it" in the sense we mean when we say computers are programmed.

Programming the computer involves manipulating it so that it moves in ways which are intended to mimic, in some useful form, a set of ideas in the head of the programmer, perhaps associated with some real-world system or some imaginary system... or perhaps randomly if anyone desires to be reminded of a random number or color or sound at any time in the future.

Computers can be made in many configurations, out of many types of materials, each with its own physical properties, some faster and easier to manage than others.

So the question is this: Can that kind of "building" alone allow us to take the material we're working in and turn it into a conscious object? Or will that end up being like programming a skyscraper into existence... the wrong kind of building?

Well, to answer that, we have to ask "What sort of building is necessary to make consciousness happen?"

The short answer to that, of course, is: We don't know.

There simply are no satisfactory answers right now.

That situation by itself means first that we cannot confirm that a machine which is not conscious can be made conscious by programming alone, without building some other structures designed for the purpose, not of supporting symbolic logic, but of enabling the phenomenon of conscious awareness.

But digging further, the brain is obviously doing some sort of real work to make the phenomenon happen. That's beyond doubt, and nobody in biology questions it that I know of.

If no work were involved, the phenomenon could not occur. Because there are no observable phenomena that have no real causes.

Are the signature waves noise, or are they doing some of the necessary work?

Nobody knows right now, but they're currently our best lead for a component of the work that has to happen somehow, the synchronization of patterns in disparate parts of the brain.

In any case, since we know that consciousness is the product of some sort of physical work of the brain, we're not going to get a human body, or a robot body either, conscious unless we've got some structures that are designed to do whatever minimum work turns out to be necessary.

This is what eliminates the possibility of a pure-programming solution.

It simply means that if the machine isn't already equipped to perform the behavior that makes the phenomenon occur in spacetime, you can't expect the kind of fine-grained building called programming (which is meant to make the computer mimic some other system symbolically, regardless of which of many possible physical actions the computer actually performs) to make it perform that behavior.

It's like saying you can take a machine without a CD player and make it play CDs by programming alone.

Can't be done, because programming doesn't get you the real work you need.

Programming and a laser and the other stuff, then yeah, you can perform the task and get a real phenomenon going, air molecules bouncing around.

Programming and the right hardware -- which will be more than just the hardware required to run the logic, because the laws of physics demand it -- will get you a real event of conscious awareness.

If you want to say, no, the brain or machine representing things to itself can cause conscious awareness, then you're in trouble.

There are several reasons why, but one fatal reason is that the brain represents things to itself via feedback in many known ways which are not involved in conscious awareness, although some are. So this requires us to follow up by asking "What makes the difference between feedback loops involved in conscious experience, and those that aren't?"

That's kinda where we are now.

No offense, but I am convinced now that you honestly think the more sentences you type, the more correct your logic becomes.

It doesn't work that way, Piggy.
 
In any case, since we know that consciousness is the product of some sort of physical work of the brain, we're not going to get a human body, or a robot body either, conscious unless we've got some structures that are designed to do whatever minimum work turns out to be necessary.

This is what eliminates the possibility of a pure-programming solution.
No, it does not rule out a pure-programming solution. What if the minimal work turns out to be the controlled manipulation of symbols which are invariant? That's essentially what we're doing with pure programming.

It simply does not follow. You cannot rule this out by speculating that there must be something essential that the brain does, because this could be that essential thing that the brain does. If you want to rule it out, you need to look at what pure-programming can actually do.
Programming and the right hardware -- which will be more than just the hardware required to run the logic, because the laws of physics demand it -- will get you a real event of conscious awareness.
Which law of physics?
There are several reasons why, but one fatal reason is that the brain represents things to itself via feedback in many known ways which are not involved in conscious awareness, although some are. So this requires us to follow up by asking "What makes the difference between feedback loops involved in conscious experience, and those that aren't?"
This is true, but it's not fatal. You're not taking into account that an environment can be programmed.
 
It's funny, from our point of view, there's a similar point in your approach where the imaginary suddenly becomes real. :)

So let me tackle that.

And yeah, I do agree that in some cases there's no difference between simulation and emulation. In fact, I can describe those cases, I think: it's when no real work needs to be done by the physical system during that portion of the chain (except what might be coincidentally done by the original part and the replacement, such as output of random heat) and when the implementation of the hardware doesn't hinder the functioning of other subsystems.

And the necessary fallout from this is that simulations cannot also be emulations -- or in my lingo, representations cannot also be replicas -- at any point where the system does any real work that the replacement can't also perform.

That's why a representation (including a computer simulation) of a kidney can never replace the entire kidney -- it can't do the physical work. You have to have an actual dialysis machine for that.

That's why Pinocchio can never be a boy as long as he's made out of wood -- it can't do the physical work that a human body does.

Or we can imagine Major Tom in a space suit, out repairing a space probe.

Let's say he can either use his jet pack to float back to his ship, or he can teleport like in Star Trek.

We'll ignore the problematic bits and just stipulate that he can be Wonkavisioned through space with the relative position and type of his particles preserved in the transfer of photons or whatever, and reassembled from that configuration into a corresponding collection of massive particles, a man in a spacesuit.

Now let's imagine that on this day Tom spots a space squid between him and the mother ship. A small one, but big enough to break something important, so he decides to kill it on his way back.

If David back on the ship teleports him, he can't kill the space squid.

But why not? The teleportation is real, the inputs match the outputs... but it's like I said, there is some real work to be done at that point in the chain which cannot be performed by the physical choice we've made for Tom's transportation (whatever it is) unless it kills anything in its beam, which it probably doesn't because then it would be too dangerous to use.

We could even program the transporter to rearrange the information about Tom's physical state along the way, run it through the same transformations the physical Tom would go through if he had killed the squid.

He would emerge with the memory of using his jet pack to return, and killing the squid along the way. Then he'd look out the porthole and see the damn thing still there!

Ok, so getting back to the brain example, if we get down to the level of one neuron, it's likely (but not certain) that there's a medium available which we can use as a replica. And if there's only one action that's important (e.g. a neuron fired or it didn't), then it's hard not to describe that as also being a digital simulation.

But if we try to zoom out to the level of the brain, specifically a conscious brain, then we'd have to assume that there is no physical work at all which is ever important in the functioning of the brain which will not also be produced coincidentally by a machine designed to run computer sims, if we are to believe that it could be replaced by a machine designed to run computer sims.

That's a tall order.

And it becomes an insurmountable one when we consider that consciousness is a phenomenon observable in spacetime, which makes it objectively real -- my body as a physical object is generating a conscious experience right now which is locatable in both time and space (i.e., it ain't happening tomorrow in Paris) -- and the laws of physics demand that all real phenomena require some sort of work.

Therefore we can conclude that a simulation/representation of a brain, rather than a replica of a brain, cannot generate a real instance of conscious awareness -- as happens, for instance, when a baby begins to exhibit that behavior -- because the physical work of the machine is too different from the physical work of a brain to make it happen.

So yes, the point of separation you're talking about is very real.

Yep, just like I thought.

You really need to condense your arguments into just a few sentences, Piggy.
 
Suppose we want to count the pretty wolves, or the dirty ones?
Then we need a criterion for pretty wolves or dirty wolves, and since we don't have a well-defined criterion for this in common, there would no longer be only one correct answer.

But that's not a problem, because it has no bearing on the number of wolves, given our criteria for wolves. It's a non-sequitur.
 
Not quite. A computer simulation cannot be conscious if consciousness requires any physical work on the part of the brain which is not also done by the computer.
Of course.
Given what we know, we can safely conclude this is the case. (If it weren't, it would be difficult to see how consciousness could be postponed until after binding, and so forth.)
Maybe we can safely conclude this is the case. But that's the whole point--we need to safely conclude it. And I have seen no argument from you so far that allows that to happen.
 
Then we need a criterion for pretty wolves or dirty wolves, and since we don't have a well-defined criterion for this in common, there would no longer be only one correct answer.

But that's not a problem, because it has no bearing on the number of wolves, given our criteria for wolves. It's a non-sequitur.

I figured this out by going back to first principles. It has nothing to do with physical location, definition, subjectivity - it all comes down to sets.

It's possible for a set to include any object. The set can be defined by a rule (all white men over six feet tall) or simply by enumeration. The rule is just a shortcut.

The set can include any object, compound or otherwise. The set can include wolves individually, or as a group, or each atom in each wolf.

The "addition" is simply the cardinality of the set.

It is entirely objective, but there are a lot of sets.
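A minimal sketch of that idea: fix a membership rule and the count is entirely objective, but different rules define different sets with different cardinalities. The wolves and their attributes below are invented for illustration.

```python
# Seven invented wolves with made-up attributes.
wolves = [
    {"id": i, "pretty": i % 2 == 0, "dirty": i >= 5}
    for i in range(7)
]

def cardinality(population, rule=lambda member: True):
    """Size of the set defined by a membership rule.
    The rule is just a shortcut for enumerating the set."""
    return len([m for m in population if rule(m)])

print(cardinality(wolves))                         # 7: one rule, one answer
print(cardinality(wolves, lambda w: w["pretty"]))  # 4 under this invented rule
print(cardinality(wolves, lambda w: w["dirty"]))   # 2 under this invented rule
```

Change the rule and you get a different (but equally objective) cardinality, which is the "there are a lot of sets" point.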
 
So far, so good.



Still good.



I'm with you, brother.



Cool.

Alright.

Now assume that we have a magical machine which is capable of the following:

1) It can apply an arbitrary spatial/temporal transformation to any number of particles.

2) It keeps a record of all such transformations that have been applied.

3) It alters the behavior of spacetime in a way that results in the interactions between any two particles remaining normal, as if neither of them have had any spatial/temporal transformations applied to them.

This may be hard to follow, so let me provide an example. You are sitting at your computer typing, and the machine decides to apply a 3 meter spatial translation to all the particles within a 0.3 meter diameter sphere, centered at the middle of your head, in a direction roughly "upwards" relative to you.

However, your perception of your space remains normal, and you remain living, because the machine magically ensures that the particles in the sphere and the particles outside the sphere end up producing the same results when they interact, or rather "would have" interacted were it not for the translation. For example, if a quark inside the sphere would have collided with a proton right outside the sphere, even though it is now 3 meters off target, the machine "fakes" it and applies effects identical to the collision on both the quark and the proton in question. Thus from the perspective of the quark and proton, they really did collide.

Note that the machine doesn't actually alter causation, it merely applies a sort of inverse transformation to any relevant behaviors of the particles so that the results are identical to what they would have been without any funny stuff going on.

Do you agree that such a machine would result in your continued existence in a manner that was totally transparent to you if it applied such a transformation to your head?
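A toy sketch of how such a machine could bookkeep, assuming (purely for illustration) that "interaction" is just a distance check standing in for real physics: it records each transformation and evaluates every interaction in the untransformed coordinates, so outcomes are identical to the no-transformation case.

```python
# Invented helper names; this is a thought-experiment sketch, not physics.

def make_particle(x, y, z):
    return {"pos": (x, y, z), "offset": (0.0, 0.0, 0.0)}

def translate(p, dx, dy, dz):
    """Apply a spatial transformation and record it, as the machine does."""
    x, y, z = p["pos"]
    p["pos"] = (x + dx, y + dy, z + dz)
    p["offset"] = (dx, dy, dz)

def logical_pos(p):
    """Undo the recorded transformation before computing any interaction."""
    return tuple(c - o for c, o in zip(p["pos"], p["offset"]))

def would_collide(p, q, radius=1.0):
    """Interaction test in logical coordinates: unaffected by translations."""
    d2 = sum((a - b) ** 2 for a, b in zip(logical_pos(p), logical_pos(q)))
    return d2 <= radius ** 2

quark = make_particle(0.0, 0.0, 0.0)
proton = make_particle(0.5, 0.0, 0.0)

print(would_collide(quark, proton))  # True: they are close

translate(quark, 0.0, 3.0, 0.0)      # the machine moves the quark 3 m "up"

print(would_collide(quark, proton))  # still True: the interaction is "faked"
```

The inverse transformation in `logical_pos` is the "doesn't actually alter causation" part: the recorded offset is subtracted back out before any interaction is computed.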
 
I figured this out by going back to first principles. It has nothing to do with physical location, definition, subjectivity - it all comes down to sets.

It's possible for a set to include any object. The set can be defined by a rule (all white men over six feet tall) or simply by enumeration. The rule is just a shortcut.

The set can include any object, compound or otherwise. The set can include wolves individually, or as a group, or each atom in each wolf.

The "addition" is simply the cardinality of the set.

It is entirely objective, but there are a lot of sets.

Yes and life is the ability to keep adding .... ;-)
 
You're still begging the question, as to whether something that implements a computation going on in the artificial brain is doing the same thing as the artificial brain. We can't easily test this hypothesis, because we can't plug the computer into a human body, unlike the artificial brain. In the event that we can produce an artificial brain, we can see if the human "works properly".

So we can plug an artificial brain made of multiple processors into a human body, but we can't plug in a single computer in the same way?
 
Not quite. A computer simulation cannot be conscious if consciousness requires any physical work on the part of the brain which is not also done by the computer.
Yes, the magic bean theory of consciousness. Which you cannot support in any way whatsoever - or even coherently define.
 
For those who weren't present for the previous threads, I characterised Piggy's argument as consciousness requiring computation and a magic bean.

I've seen nothing since then to change my mind.
 
Oh, sorry, looks like I deserve my own complaint. :o

You're right, I thought we had gone back to the point in the discussion about swapping the whole brain out. My mistake, sorry.
No problem ;)

On the one hand, we can try to imagine how we would literally wire up such a box.
Leave that for a rainy day...

On the other hand, we can simply assume that we have something set up that accepts all the neural bombardment that V takes in, and spews out all the neural bombardment V emits, from and to the right locations, and not worry about where the apparatus lives.
Yup, this was my view.

(This thing's gonna have a mind-boggling array of inputs and outputs, though.)
A couple of big reels of neural probe wire from Maplins should do it :D
I think it's only ever likely to be a hypothetical on even the smallest brains we think may support consciousness. Smaller brains, maybe.

But because our black box isn't doing any of the physical computation that V actually performed, any brain processes which depend on that work will either fail or go haywire in some way.
It would be interesting to know which processes do depend on it.

If consciousness depends at all on the coherence and synchronization of its "signature" global brain waves, which is to say if they're not just noise like the heat coming out of my computer or the noise coming out of my truck, then it's quite likely our black box just created a conscious "blind spot" which will remove visual activity from consciousness.

And although we don't know yet, my bet is that the brain waves are not noise, since we have few other good candidates for mechanisms to synchronize the activity of disparate brain regions, which we know is happening in conscious experience.
It seems likely - I've seen a number of articles that suggest synchronised waves across large areas of the cortex are a key feature of conscious mental states.

If a black box removes a physical component necessary for bundling across disparate regions, the effects could be quite unpredictable and very weird.
Interesting point.

Thanks for that analysis - just the sort of food for thought I was after.
 
For those who weren't present for the previous threads, I characterised Piggy's argument as consciousness requiring computation and a magic bean.

I've seen nothing since then to change my mind.

The alternative is to make up a definition of a brain activity that can be explained without an empirical understanding of that brain activity.
And no, understanding how neurons work is not understanding how the brain works; it's understanding how neurons work.
We don't understand dolphins just because they are made up of atoms which we do understand. We understand dolphins by studying their behaviour in their environment. You can build models based on the behavior of dolphin atoms all you want; it's pointless. Unless, of course, you define dolphin behavior by what their atoms do, in which case, since we know what atoms do, it's just a matter of finding the algorithm that uses dolphin-atom behavior to predict dolphin behavior.

It is convenient for pretending to know it all but pointless when it comes to empirical evidence.

Piggy has consistently suggested we study the brain in order to inform our definition of the brain activity called consciousness.
You have never expressed the same interest. Instead you claim we know already how consciousness works and have a definition for it.
 
I think that some kind of underlying reality is the basic assumption of materialism. If there isn't some kind of underlying reality, then there is only our own subjective experience, and no reason to assume that any other kind is possible.

Quite, and the underlying reality of materialism is not known. :rolleyes:
 
First, you told me you aren't really a monist anyway.
I am primarily a monist.

Second, obviously a full simulation of the universe would include a "simulation of timespace in order to capture any kind of dimensional experience."
Yes, I gather you are talking hypothetically as I would doubt that such simulations are doable with current technology, or current understanding of what exists or the role played by timespace in that existence.

Remember we cannot determine what exists, all we can determine is what appears to exist.

If we simulate what appears to exist, we will have a simulation which only appears to be real, including any conscious entities therein.
 