
Explain consciousness to the layman.

Status
Not open for further replies.
Right. The interesting thing here, though, is that materialism is consistent with all evidence, and other ontologies are consistent with the evidence precisely insofar as they are consistent with materialism.
Yes of course they would be consistent with what we know about existence.

However materialism is only half an ontology, the critical part which distinguishes it from other ontologies is missing, ie the stuff, the very material of materialism is unknown and hidden behind the fig leaf of being defined by what it does rather than what it is.

As Westprog suggests (my wording) without the underlying substance of existence, materialism is nothing more than idealism pretending to be something else.
 
So we can plug an artificial brain made of multiple processors into a human body, but we can't plug in a single computer in the same way?

You've just substituted "processor" for "artificial neuron". They are totally different things.
 
Why bother with

Neuron -> Modulator -> Simulation -> Demodulator -> Neuron?

It might turn out to be easier to do for a large number of pseudo-neurons sharing a physical processor resource. But plain curiosity seems to me a good enough reason.
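As a rough illustration of what "pseudo-neurons sharing a physical processor" might mean, here is a minimal sketch that time-multiplexes several toy neurons through one loop. The leaky-integrator model and all parameter names here are illustrative assumptions, not anything proposed in the thread:

```python
# Illustrative sketch: many pseudo-neurons sharing one physical processor
# by time-multiplexing. The leaky-integrate-and-fire model is a toy assumption.

def step(potentials, inputs, leak=0.9, threshold=1.0):
    """Advance every pseudo-neuron one tick on a single processor."""
    new_potentials, spikes = [], []
    for v, i in zip(potentials, inputs):
        v = v * leak + i          # integrate input with leaky decay
        if v >= threshold:        # fire and reset
            spikes.append(True)
            v = 0.0
        else:
            spikes.append(False)
        new_potentials.append(v)
    return new_potentials, spikes

# Three pseudo-neurons updated sequentially by one processor.
potentials = [0.0, 0.0, 0.0]
potentials, spikes = step(potentials, [0.5, 1.2, 0.0])
```

The point of the sketch is only that nothing in the update rule cares whether each pseudo-neuron has its own physical substrate or shares one.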
 
Piggy has consistently suggested we study the brain in order to inform our definition of the brain activity called consciousness.
You have never expressed the same interest. Instead you claim we know already how consciousness works and have a definition for it.

First, pixy takes issue with piggy's suggestion that we study the brain in ways that any rational person with a computer science or neuroscience degree would agree are pointless, because of some logically inconsistent mumbo-jumbo about "physicality" or whatever. That is the "magic bean" he refers to.

Second, pixy has stated specifically what *his* definition of "consciousness" is and exactly how to get it, and he is at least logically consistent. That you disagree with the definition he uses has no bearing on the consistency of his arguments.

If you, or any of the other individuals with bizarre illogical worldviews, had bothered to ask intelligent questions of pixy -- such as "how do we visually perceive things in the world?" or "how does our memory work?" or anything else that is specifically pertinent to human minds -- he would give intelligent answers. Instead you people ask the most vague questions possible, "how does consciousness arise?" and then complain when his answer is too simple. Then you complain even more when he says "alright, if you want a more complex answer, please ask the question in a suitably complex way."

So blame yourself.
 
The alternative being to make up a definition of a brain activity that can be explained without an empirical understanding of that brain activity.
No.

And no, understanding how neurons work is not understanding how the brain works, but how neurons work.
Understanding how neurons work is part of understanding how the brain works.

We don't understand dolphins because they are made up of atoms which we do understand.
That is a critical part of our understanding of dolphins.

We understand dolphins by studying their behaviour in their environment.
That is another critical part.

You can build models based on dolphin atoms' behavior all you want.
Yes.

It's pointless.
No.

Unless of course you define dolphin behavior by what their atoms do, and since we know what atoms do, it's just a matter of finding the algorithm that uses dolphin atom behavior to predict dolphin behavior.
Correct. Not just correct, but undeniable.

But complicated.

Which is why we build models at multiple levels of granularity to study different aspects of various systems, including dolphins.

It is convenient for pretending to know it all but pointless when it comes to empirical evidence.
Wrong, and wrong.

No, I correct myself: Dishonest and hopelessly, irredeemably wrong.

Piggy has consistently suggested we study the brain in order to inform our definition of the brain activity called consciousness.
No. Piggy has insisted that there must be a magic bean in the brain in addition to its known function.

He has no evidence for this magic bean's existence. He has no evidence for any function it carries out. He can't define the magic bean or what function it may have. He can't coherently explain why he thinks it is necessary.

He just insists it must be there, just because.

You have never expressed the same interest.
Untrue.

Instead you claim we know already how consciousness works and have a definition for it.
Well, duh.

If you don't have a definition for it, I suggest you find one before using the word.
 
Any real thing can be described as an event, and any real event can be described as a process.
OK, I guess I was thinking of the physics usage.

Anyway, there appear to be two coherent conscious entities in that skull now.
There was an interesting documentary a while ago, interviewing a South American(?) split-brain patient, and showing film of her shortly after the operation, where one hand (controlled by the non-verbal side) would persistently thwart or undo the actions of the other hand; e.g. when she buttoned up her blouse with one hand, the other would follow, unbuttoning it. It occurred to me this might have been the only way the non-verbal half could express its frustration or distress. The interview was conducted a couple of years post-op, when all these conflicting behaviours had died away. I wondered what had transpired in her head. They said that where such dual consciousness is manifest, it usually doesn't last long before the obvious differences go away and some kind of integration seems to happen.

If a fellow trips over a footrest, it wasn't because he couldn't see it, it's because he wasn't looking where he was going.
There are some very weird integration problems, like totally blind people who insist they can see perfectly well, and vice versa. They're not just saying it for effect, either; they believe it.
 
But what's so special about a brain here?
That is the question I've been asking all along. What is special about the brain? Why would we think anything is special about the brain?

All available evidence says it's just a big marble machine. In fact, as far as we know, it's not even possible for it to be more than that.

So whatever the brain can do with the output of the marble machine, a bigger marble machine can do by itself.
 
However materialism is only half an ontology, the critical part which distinguishes it from other ontologies is missing, ie the stuff, the very material of materialism is unknown and hidden behind the fig leaf of being defined by what it does rather than what it is.

A wonderfully descriptive turn of phrase.

"However atheism is only half a religion, the critical part which distinguishes it from other religions is missing, ie the stuff, the very material of religion is unknown and hidden behind the fig leaf of being defined by what happens rather than who caused it."

"However science is only half an epistemology, the critical part which distinguishes it from other epistemologies is missing, ie the stuff, the very material of science is unknown and hidden behind the fig leaf of being defined by what we know rather than what we don't."
 
As Westprog suggests (my wording) without the underlying substance of existence, materialism is nothing more than idealism pretending to be something else.
How come idealism is nothing more than materialism pretending to be something else?

Regardless, I know exactly what "stuff" is. Here's how it works. We take something we know a bit about--it could be anything. An idea, a billiard ball, information, whatever. But it helps that what we pick has a fancy name; if we say "words", then it's just called magic, and that's not so fancy sounding. "Noeticism" by contrast is really fancy; the idea is that you want to feel special and be able to look smart by having the position. Then we pick the one we like to use to conceptualize reality the most, and make a metaphor. Finally, we declare that the thing is not in fact a metaphor, but is how stuff "really is". Apply rationalizations to defend the position, attach to it psychologically, and you have an ontology.

And who said ontology was difficult?
 
For those who weren't present for the previous threads, I characterised Piggy's argument as consciousness requiring computation and a magic bean.

I've seen nothing since then to change my mind.

Gotta have somewhere for the god* part to hide.


*soul, spirit, essence, love, hate, art, or any other thing that makes us "uniquely human".
 
Yes of course they would be consistent with what we know about existence.

However materialism is only half an ontology, the critical part which distinguishes it from other ontologies is missing, ie the stuff, the very material of materialism is unknown and hidden behind the fig leaf of being defined by what it does rather than what it is.

As Westprog suggests (my wording) without the underlying substance of existence, materialism is nothing more than idealism pretending to be something else.

There is nothing underlying the substance of existence, it's merely a phrase you have concocted to hide your misunderstanding of the world around you.
 
Gotta have somewhere for the god* part to hide.


*soul, spirit, essence, love, hate, art, or any other thing that makes us "uniquely human".

It's interesting that Piggy - who AFAIAA has always been an atheist and materialist - has somehow managed to argue for the god he doesn't believe in without recourse to "soul, spirit, essence, love, hate, art or any other thing that makes us uniquely human". Whatever his motives are, it's hardly plausible that they are the same as mine - which is where the ad hom approach falls down.

It's also interesting that no matter how often it's put forward that consciousness is a physical process, this is always reinterpreted as a statement that consciousness is a non-physical process.
 
The point of my "philosophical musings" is that "addition" is simply a description, but you're trying to treat it as something special. By describing the same situation in multiple ways, I'm trying to show you that it's just a way to describe things.

I understand that words are ways to describe things.

So we can leave off that tangent.

The point here is that the "addition" we do in our heads is a form of symbolic aggregation, which is different from real aggregation.

Take a look at the marble machine, and you'll see what I mean. The way in which it aggregates the marbles is distinct from the symbolic aggregation it's performing when used as an information processor. And this fact is important to understand the system.

So yeah, musing about when a wolf walking toward a group actually joins the group is irrelevant philosophical musing, despite the fact that this sort of natural aggregation and disaggregation is precisely the reason why our brains evolved the ability to do math in the first place.
 
So yeah, musing about when a wolf walking toward a group actually joins the group is irrelevant philosophical musing,
Irrelevant to what?

A human brain--or at the very least, a wolf brain, is responsible for judging whether or not the "join" happened. What exactly are you drawing a distinction to? And why is this such a "relevant philosophical musing"?

ETA: I want to know what it is you think is significant about physical addition before moving on to discussing things like momentum, aggregation of entropy, and so on. In particular, your concept of "physical addition" is meant to contrast with "symbolic addition", and I want to know if there's such a thing as symbolic addition that does not involve a brain state. Your inclusion of brain state in your definition of physical addition is already suspect, since the very things you are claiming are physical addition do indeed involve brain states somehow.
 
Your line of reasoning is analogous to reasoning that we cannot build a flying contraption because all of the flying devices we know of are biological.


I think you must have been a farmer or related to farming somehow.

You seem to have a natural proclivity to building straw men and laying down fresh manure.

You really are very adept at making silly analogies...aren't you?

I am going to try to see if maybe I could EXPLAIN to you how asinine your straw man airplane is.... I have stated numerous times that emulating a brain will entail building a NEURAL NETWORK that has a critical mass of complexity and interconnectivity that would bring about the necessary emergent properties of synergetic interaction to realize consciousness.

So I did not say that only biological matter can do it.... the Neural Network is not Biological matter.

So you see how much of a straw man your airplane fallacy is.

If you really wanted to use a more appropriate analogy instead of building straw men you might consider the analogy of a TRANSPORTER...... here is something we can imagine. It even seems to not violate the laws of physics strictly speaking.


But can we make one? Would we one day be able to rip someone apart, convert them into energy, beam it to some other location, and then make it back into mass according to E=mc²? Or maybe analyze the spin and energy states of every last atom and electron in the body, build a database of that information, transmit it to another location, and use the database to recreate the same human from that data? And then, to avoid duplication, disintegrate the original.

Here is a process that we are SURE obeys all the laws of physics and we even KNOW what these laws are.

Have we done it? How soon do you think we would be able to do it? Will we ever be able to do it? Is there perhaps a law of physics that we have not discovered that might in fact PREVENT the possibility? Is there one that might PRECIPITATE IT?

Now that is the kind of thing I am talking about when I DOUBT that building a brain in a silicon chip running a simulation could precipitate a consciousness.


But..... AGAIN.... don't lose sight of the fact that NEITHER YOU NOR I know..... but I am willing to bet on the impossibility of one day having a transporter beaming us around....DESPITE Gene Roddenberry's exquisite imagination to solve a budgeting restriction that prevented him from using shuttles in his films.

Likewise I am betting against a silicon chip becoming sentient….. but I am NOT betting against a critical mass of interconnected silicon chips emulating a brain maybe being able to achieve it.

So…. I advise you that using equivocating straw man fallacies is a foolish thing to do.... maybe you should try to consciously break this instinctive predilection of yours.
 
Your line of reasoning is analogous to reasoning that we cannot build a flying contraption because all of the flying devices we know of are biological.

To be fair, he did explicitly qualify it as a 'hunch':

My HUNCH is based on the fact that AS FAR AS WE KNOW HERE AND NOW (and not in imagined possible other realms and fiction) there is SO FAR no other physical process that has produced consciousness other than CEREBRAL CORTEXES and not even all of them at that.

Having said that, history has shown that such hunches are a poor guide to what is possible.



Yes… I am aware enough of my limitation of knowledge, unlike a few people around here who cannot seem to distinguish between IMAGINATION and knowledge.

Just because you can imagine something does not mean it is true….. a fact that a few people seem to have lost sight of.

I always try my best to maintain CONSCIOUS AWARENESS of what is fact and what is imagination and I CONSCIOUSLY try to keep the difference always in sight.

Have a look at my rebuttal to the asinine airplane straw man fallacy to see a more HISTORICALLY ACCURATE analogy to the situation than the history you seem to be referring to.
 
I have stated numerous times that emulating a brain will entail building a NEURAL NETWORK that has a critical mass of complexity and interconnectivity that would bring about the necessary emergent properties of synergetic interaction to realize consciousness.

But why a physical neural network, instead of a simulated one? Why a neural network instead of a system that processes information in the same manner?
 
So you're just arguing from incredulity, Leumas. "I can't think of any reason why it can't happen, but I'm pretty sure it can't happen."

Likewise I am betting against a silicon chip becoming sentient….. but I am NOT betting against a critical mass of interconnected silicon chips emulating a brain maybe being able to achieve it.
Of course, here it just loses coherence entirely. What if you put the "critical mass" of silicon chips into a single silicon chip? Because that sort of thing can happen very easily.
 
I don't know what you mean by "need to include brain states".

Let's say we have an artificial object--a quarter. It rolls onto the floor next to another quarter. Is that physical addition? What if, instead, I put two quarters into a vending machine. Still physical addition? In this case, note that I intentionally put two quarters in--that certainly requires brain states.

Suppose that I put five quarters in, and press A1 to get a snack. Is that physical addition? Now, how about if instead I put one dollar bill in, and one quarter, and press A1 to get a snack. Still physical addition? If so, do we count the sum as the same as the first physical addition? Note that here, to get the required result, we do need $1.25.
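The coin sums above can be made explicit in a small sketch. The point is that the symbolic total is indifferent to which physical objects carry it; the denomination values are assumed US ones, and the function name is just an illustration:

```python
# Two physically different inputs yield the same symbolic sum.
QUARTER, DOLLAR = 0.25, 1.00

def total(items):
    """Symbolic aggregation: sum the assigned values, not the objects."""
    return round(sum(items), 2)

five_quarters = total([QUARTER] * 5)          # five physical coins
bill_plus_quarter = total([DOLLAR, QUARTER])  # two physical objects
```

Both inputs come out to 1.25, even though as physical aggregations (five coins vs. a bill and a coin) they are quite different.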

But what's so special about a brain here? It's trivial to add a physical machine that converts these placements to a physical quantity. Are you ruling out brain states just to rule out brain states?

Ok, I'll put it back into context for you and we'll move on from there.

Remember, the topic is symbolic information systems, such as adding machines and simulator machines.

What does the difference between symbolic aggregation and physical aggregation have to do with that, and what's up with the role of the brain state?

If two quarters are washed into the same spot in a gutter, or if they're picked up by a hand and placed together, these are both examples of physical aggregations of the objects.

And as your examples show, no brain state is required for this to happen -- it can happen with one, without one, doesn't matter.

But what about our marble machine? What sort of aggregation can it perform, and under what circumstances?

To examine this, we'll use an old schoolbook way of looking at systems.

Imagine a page with two big circles drawn on it, representing two systems. Whatever's inside the circle is necessary for that system, whatever isn't, ain't.

So if I want to represent the system necessary to move a pile of brush to the county composter, I need the brush pile to be in the circle, of course, and the county composter, and some sort of vehicle that can carry the brush, and some sort of passage for the vehicle between the two places.

That's a fair representation of our necessary system.

Notice we leave out the color of the vehicle, because although the vehicle will certainly have some color to it, no particular color is necessary for the system.

Ok, so let's look at the first circle, which only contains the marble machine, nothing else.

If that's all that's in the circle, can it be an information processor?

As it turns out, the answer is "No". Here's why....

Without a brain state to determine the "meanings" of the patterns of paint on the machine, they might as well not be there. You need a brain to assign those meanings and to interpret them.

Without that, all you're left with is an object you can drop marbles through.

You can prove this to yourself by viewing the video with the sound off, and ignoring the patterns of paint -- remember, inside our circle there's nothing that has any means of deriving any meaning from them. (Even if the machine were completely self-aware, this would still be the case.)

Without a brain state somewhere in the circle, the system cannot be a symbolic information processor. The only addition it can do is to channel the marbles at the top into a single group at the bottom.

So let's use our other circle to describe the minimum system needed for this thing to work as an info processor.

Well, we need at least one brain capable of assigning values to different aspects of the machine, determining the rules of operation, and interpreting the results of the machine's behavior. (A programmer and a user or reader.)

It might be one brain or more involved, but at least one is required.

And this is true of any object in the universe that is (or can be) used as an information processor.

By itself, alone in the circle, as an independent unobserved system, it cannot perform that function. To do so, at least one brain has to be involved in the system.

And here's an extremely important consequence of that fact:

The "world of the simulation" -- which is to say, whatever a simulation program is supposed to represent -- cannot be located in the simulator.

Remember, PixyMisa's marble machine is intended to simulate aggregation -- that is, its physical activities are designed to mimic, in a certain way and only if you understand the symbology, the process of grouping things together. In that way, it can answer the question "How many quarters do I have in the house if I've got 3 in my pocket, 32 in my coin collection, and 2 under the sofa cushions?"
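For concreteness, the question the machine is used to answer reduces to one symbolic sum, which can be sketched trivially (the variable names are just labels for the quantities in the post):

```python
# The symbolic sum the marble machine is read as computing.
pocket, collection, sofa = 3, 32, 2
quarters_in_house = pocket + collection + sofa
```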

But by itself, it can't do that.

It can only do that if an observing brain changes its internal states while observing it.

Which leads us to an extremely important fact:

Changes in the state of the "world of the simulation" exist as changes in the state of a brain.

Notice that there is nowhere else for this "world of the simulation" (WOTS) to be.

All we need for the simulator to run is the machine and a brain that understands its symbols and usage.

Changes in the state of the machine cannot be changes in the WOTS because, as we've seen, by themselves they mean nothing... they are only what they are, the movements of the machine.

The only other choice we have is the other object in the system... the brain, or brains.

And indeed, we find that the WOTS must change every time the relevant brain states change.

In short, the world of the simulation literally exists in your imagination, not in the simulator.


This is not philosophy, but physics.

Now I hope you can see the relevance to the question of whether or not a machine simulating (not replicating) a brain can be wired into a robot body and produce a conscious robot.

No, because the brain being simulated doesn't exist in the simulator.

You might as well plug in a machine simulating a sunset over Miami Beach.

But when we examine the system, it is so tempting, when we move along the sensory nerves up to the skull, and get to that computer running the brain sim, to switch our perspective over to the WOTS in our own imaginations and take our eye off the ball, which is simply the mass of moving parts in front of us.

In other words, if you want to know how a part will work in a machine -- and that's what we're doing here, sticking a computer box into a machine -- you can only pay attention to what it's physically doing.

So we could only use our marble machine inside another machine that needed a few marbles at a time. The fact that we can symbolically use that machine to add two-digit numbers doesn't mean it can work in a machine that needs 20 marbles to come out of it.

And the fact that we can "read" the simulator in order to change our own brain states which are the simulated brain... well, it doesn't mean anything at all as far as whether or not a robot body will be conscious if we stick a machine designed to run simulations in its head, rather than a machine designed to be conscious.

The answer is no, it won't.

If you want that robot to be conscious, you've got to design and build a brain that does whatever is physically necessary to make that happen, not designed to run simulations of brains.

That is what I have been saying.

And that is why consciousness cannot be programmed.

For the same reason you can't go and program yourself a new truck or a bigger house or clean laundry.
 