Explain consciousness to the layman.

But you're begging the question. There is an association between that state of the machine and a referent--a particular set of inputs.

No doubt.

If there weren't, the system wouldn't work.

The problem is, it's impossible for the system to know this from its state alone.

Again, that's like saying you can examine a person and know if she's got a twin.

Let's take the example of light bouncing off a human face onto another human face, into the eye, causing cascades of impulses through the retinas and from there into the optic nerves, headed for the thalamus.

The patterns that these impulses take are related to the shape of the object they bounced off of, but they don't "represent" it, and they are not "symbols" of it... they are merely a consequence of it.

As an analogy, imagine that you put a golf ball in a racquetball court and smack the daylights out of it with a driver. It bounces off the wall and ricochets off toward a house and breaks a window. The golf ball takes a wicked hop off the window and lands in the shrubs, while bits of broken glass lie on the carpet inside the house.

Is the pattern of broken glass a "symbol" of the racquetball court wall? Does that pattern "represent" that wall?

Of course not. Or if you want to say it did, you'd have to propose an observer to make the association in his mind.

And that's a crude example, but the same holds for the impulses arriving at the thalamus.

Look around the thalamus... do you see any entity in there who is in a position to "know" that these impulse patterns "represent" something outside the brain? I don't.

We can use the post office metaphor here and say the thalamus "recognizes" the pattern as "being a face" and "routes" the "image of the face" to the proper cortex for special "processing".

But this is pure metaphor.

What's happening here depends on physics, and physics alone. There is no symbolic logic which has any effect at all on this process.

Therefore, to conclude that some symbolic value of anything that's going on is a cause of anything this system is going to do... it's hopeless; it makes no sense.

I'm sorry, but these are the inevitable conclusions of what rocketdoger would call materialist monism.
 
Furthermore, on the flip side, we do not actually, in practice, imagine the association between the symbol and its referent when performing addition. In fact, some people are perfectly able to perform addition without even knowing the association between the symbol and its referent. They just know how to apply a set of rules.

You're off on an interesting but irrelevant tangent.

I dig the Feynman video, but it's not pertinent to the current issue.
 
We learn logic by interacting with physical entities.

Yes, like the example of evolving basic mathematical skills in order to do things like deciding how large our group is and how large the groups of predators and prey are.

But the point about the computations is that only the physical ones are real.

The logical ones are imaginary.
 
Sounds like you have the entities but not the transformations.

Exactly.

But according to your theory, since those transformations are guaranteed to make sense in that physical system, they should be self-evident.

But they're not.

My problem isn't that I can't guess the transformations, but rather that I can guess any number of possible sets of transformations, and I have no way of preferring any one over any other.

Ditto for the marble machine (if you could forget for even ten seconds that you already know what it's "supposed" to be used for).
 
You're talking a different language, perhaps because you're so obsessed with brain states.

I've already told you. I'm not trying to figure out what that builder had in mind. I'm trying to figure out what the states of that machine imply about the inputs. So when I say that the marble in # represents two marbles being added to the column in $, I don't care if nobody on the planet has ever conceived of the machine working that way, including the builder. Because I'm not after brain states--you are. All I'm after is what that marble being in # means.

And it means the same thing whether one marble gets put into # or two into $. Therefore I say that, because it behaves that way -- regardless of anyone's brain state but my own, which serves only as the one figuring out what that marble being in # means -- that marble in # represents two marbles in $.

Look, if you would stop describing systems that require brain states, I'd be the first one to stop discussing them... in fact, that's precisely where I'm trying to get!

And yes, if you go beyond noting the relationships you've noted in the object's physical qualities, and you then start assigning "meanings" to them, either you are trying to figure out what the builder intended it to do, or else you're trying to figure out how you can use it symbolically, because none of those "meanings" can exist without... wait for it... someone outside the machine to decide they exist.

The bolded statement in the quote from you at the top is a contradiction.

These states mean nothing to the machine.

They can only mean anything to... you guessed it... an observing brain.

So I beg you... if you want to stop discussing brain states, please stop describing situations which demand them.

Btw, regarding your logic that there's an inherent two-ness about the # column because putting one marble into the # slot results in the same situation as putting two marbles into the $ slot, you're simply ignoring the other marble, aren't you?

Why?

Because you "know" that the other marble is out of play. There's no way to deduce that from the machine.

We could also note that putting two marbles into the $ channel results in the same output into the "bone pile" as putting one marble in the bone pile -- one marble in the bone pile. So by your logic, the bone pile also must inherently represent "two" somehow.

In fact, putting two marbles through the $ channel results in one marble in the # channel and one marble in the bone pile, which means that according to your logic, the # channel, the $ channel, and the bone pile each have a value of 1.
 
...has nothing to do with the marble machine.

It has everything to do with the marble machine.

Can you look at two coins and determine their relative value only by examining their features?

No, not by that alone.

If everyone agrees that the coins are made out of something that gives them value by virtue of that alone, then the weight of the coins will tell you something about their relative value.

If not, though, then that relationship is irrelevant.

The same is true for all physical systems that involve representation of any sort -- without being told what they are, it is impossible to determine from the system itself alone which relationships have representative value and which don't.

This is just as true of your marble machine as any other.

Objectively, it always takes X number of marbles in and puts X number of marbles out, or else it traps Y number of marbles and puts out X-Y.

It does this every time, and that's all it does.

Unless, of course, you -- and I do mean you -- introduce a brain into the system which knows what the symbols mean and what's important to look at and what's important to ignore.

And I've been waiting for days now for you to stop doing that -- and to stop criticizing me, when I reply, for dealing with the brain you introduced into the mix.
 
Yes, but we were assuming that you can plug in a single neuron. The original point stands. If you can plug in and connect many of these to make a brain, then you can plug in a computer that emulates the entire system in the same way.

No, you can't.

The problem is, if the thing works, it's both a replica and a representation.

But the only part of that which is important here is the fact that it works as a physical replacement. Whether it also works as a representation is irrelevant -- merely coincidental and trivial.

The result of that fact is this....

If you treat the representation as if it were important, you're going to examine that aspect of it -- the one that's irrelevant to why the thing works -- and follow the rules of logical systems, which are different from those of real systems.

For instance, you might conclude that it can operate at any speed and exhibit the same sets of behavior because logical computations can be performed at any speed and the outcome won't change.

But this is not true of physical systems -- you can't make an airplane fly by running its physical computations at any speed, for example.

And the math which describes our world is reversible in time, but reality stubbornly isn't.

Instead, we have to view the replacement neuron as a physical system.

And yes, a collection of these physical parts would make a brain, if they really do act functionally like neurons in all respects.

But this does not mean that a machine which preserved only the logic would do what a brain does.

That's like saying we can make Major Tom appear at the space station with his atoms logically rearranged as if he had killed the space squid, and thereby kill the space squid.

If the brain is replaced by a machine that doesn't get the physical work done, then that work doesn't get done, and the machine isn't conscious in the real world... and there's no other place to be conscious.
 
The use of the brain as a food source is just as real as the brain's activity when it generates consciousness.
You're refusing to think about what I write.

Are you suggesting that the use of the brain as a food source may be key to generating consciousness? Or has a point flown over your head?
 
The accusation:
You find it impossible, not just to accept a view of the marble machine as an object, but even to contemplate such a view.
Your position:
But notice all the information you had to drag in from elsewhere. And notice that this information has to do with how human brains tend to act.
My criterion:
There is an association between that state of the machine and a referent--a particular set of inputs.
So, which of us is not treating the machine as an object? If you look at my criterion for meaning, both highlighted things are part of the object. Look at yours, and you point to the human.

Your ultimate position is that the use of these machines cannot derive meaning. You're currently trying to argue this position by arguing that there exists no context in which meaning can be derived without using a human brain.

I'm giving you the context, and it does not depend on a human brain. But you keep pointing out why it fails because it's not what a particular human brain came up with. How can you not see this as begging the question?

If you want to treat this thing as just an object, then stop treating it as a simulation. It's just a machine. Imagine everyone removed from the picture. The marbles just magically flow into it--"Bob" knows how, who cares. And the states just react. That's what we're looking at.
How do you know that the particular relationships in the machine's construction which you're focusing on are significant to its use, rather than coincidental?
What is your criterion for significance? Mine is simply what the states of the machine imply about the inputs.
And how do you know that other relationships of the machine (say, its height, width, and depth, or the length of the channels) aren't used as indicators to perform further mathematical functions?
They could be, but it doesn't detract from the fact that the states tell us particular things about the inputs.
Or perhaps the purpose of the machine is only to route marbles along those channels and through those baffles,
That sounds like some guy's intent. I'm not appealing to some guy's intent. I'm only telling you what the states imply about the inputs.
and the particular mathematical relationships are coincidental,
The machine simply does a particular thing. Unless you treat this as a simulation with an intended use at some particular point in time (you're ignoring that part too--see the henhouse/checkbook dilemma), there's no context by which to call it "behavior" versus "coincidence". We're just describing what it does.
or are there because this part needs to fit into another part which has those relationships
"Has those relationships" already implies that it has this meaning.
or because the designer was a mathematician and an artist who found the relationships elegant.
That's some guy's intent again.
Of course, if you're exploring all the possibilities, and you find one like "it could be used as an adding machine",
Not what I'm saying, Piggy. It's not that it can be used as an adding machine; this sort of thing means that I'm going to take it, and have a marble represent some thing I'm going to count--perhaps a hen egg. And that's some guy's intent again.

Rather, it's that what it does is actually addition. In particular, the term "addition using a place-value system with radix 2" describes what the machine is doing, in that each state of the machine is equivalent to shoving that many marbles into column $. Not that many hen eggs--that many marbles. (See the sketch at the end of this post.)
and you decide that there's very little chance that a human being would set it up like that accidentally,
You're making up that I decided that. I haven't "decided" any use for the machine. All I did was figure out how it behaves.
and you decide that there's very little else a person might want it to do, then you might think "It's got to be an adding machine."
Straw man.
But the machine itself, if it were self-aware and if it knew everything that could be known about its body and nothing else -- in other words, if we're looking at the circle with just the machine in it, the machine as an isolated system -- the apparatus itself has no way to guess all these things, and so every possible use of the machine is just as good a guess about its body's purpose and function as any other.
This is wrong. I observe a world using my senses, and I interact with the same world. When I reach out and touch a cup, and say "cup", what I mean by "cup" is whatever thing I reached out and touched. I'm relating that concept to a thing I recognize using my sensory inputs, and the thing I can interact with in particular ways.

It's the same thing here, scaled dramatically down. We have a machine whose states are affected a particular way according to its inputs. The set of inputs is a lot smaller, and the machine doesn't do as much interaction. But there's no reason it can't--none, at least, that you have argued for.
So your world, in which our only choices are folks using the machine to add stuff,
Straw man. "Folks using the machine" are people with intents, and that's looking at it as a simulation instead of a machine.
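For concreteness, here is a minimal sketch of the column behaviour described above. The column names ($, #) and the carry-one-marble-forward, drop-one-into-the-bone-pile rule are taken from the posts in this thread; the extra columns and the Python rendering are my own illustration, not a claim about how any particular machine is built.

Code:
# A sketch of the marble machine's column behaviour as described in this thread.
# Assumption: a marble dropped into an empty column stays there; a marble dropped
# into an occupied column empties it, carries one marble to the next column, and
# sends the other to the bone pile. Extra columns ('@', '&') are hypothetical
# padding so a carry always has somewhere to go.

COLUMNS = ['$', '#', '@', '&']          # '$' is the lowest-order column

def drop(state, bone_pile, col=0):
    """Drop one marble into column `col`, propagating any carry."""
    if state[col] == 0:
        state[col] = 1                   # the marble comes to rest here
    else:
        state[col] = 0                   # both marbles leave this column
        bone_pile.append(1)              # one rolls off into the bone pile
        drop(state, bone_pile, col + 1)  # the other carries to the next column
    return state, bone_pile

def run(n_marbles):
    """Feed n_marbles into the '$' column, one at a time."""
    state, bone_pile = [0] * len(COLUMNS), []
    for _ in range(n_marbles):
        drop(state, bone_pile)
    return dict(zip(COLUMNS, state)), len(bone_pile)

print(run(1))   # ({'$': 1, '#': 0, '@': 0, '&': 0}, 0)
print(run(2))   # ({'$': 0, '#': 1, '@': 0, '&': 0}, 1) -- same column state as one marble in '#'
print(run(5))   # column state reads 101 in radix 2: five marbles went in

Whatever anybody intends the thing for, the column state after feeding in n marbles is the radix-2 representation of n. That regularity is all that "the states imply something about the inputs" amounts to here.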
 
Amazing!

Your definition of the "machine" you're looking at... apparently doesn't include the object you're looking at.

To you, the "machine" is an abstraction based on a single possible symbolic use of the object which has been described to you.
The neurons in our head are living cells. They have a DNA sequence with particular genes in it. They make proteins; they undergo respiration. They do lots of things. If you want to argue that every last thing the neuron does is responsible for consciousness, then you can speak of every part of this machine as critical.

Otherwise, you're nit picking about irrelevant details. Stay inside your circles.
 
But the point about the computations is that only the physical ones are real.

The logical ones are imaginary.
No, the distinction is imaginary. If every time I apply modus ponens, it works in the real world, then modus ponens works because it works in the real world. It's not a type of thing that only works for Donald Duck.

Quite the opposite. Only Donald Duck can do things that betray logic.

Your use of the term "imaginary" is questionable. It seems you're just conflating "imaginary" with "abstract".
 
You're talking a different language, ... All I'm after is what that marble being in # means.

And it means the same thing whether one marble gets put into #, or two into $.
Look, if you would stop describing systems that require brain states,
Yes, that's your premise--that meaning requires something in a human brain and cannot be derived from devices like what PixyMisa is suggesting.

But you're supposed to be arguing for this, not assuming it.
And yes, if you go beyond noting the relationships you've noted in the object's physical qualities, and you then start assigning "meanings" to them, either you are trying to figure out what the builder intended it to do,
See above posts. You are to treat this as only a machine, not a simulation. Intent has nothing to do with it.
or else you're trying to figure out how you can use it symbolically,
I'm only figuring out the rules of the machine. The states imply certain things about the inputs. How I can use it is restricted by what it does.
because none of those "meanings" can exist without... wait for it... someone outside the machine to decide they exist.
I did not decide that putting two marbles into slot $ puts the machine into the same state as putting one marble into slot #. I figured out that it did.
The bolded statement in the quote from you at the top is a contradiction.
No, it's just a fact that if you shove two marbles into slot $, the machine goes into the same state as shoving one in slot #.
These states mean nothing to the machine.
Of course not, because the machine is not self aware. But the states do mean something about the inputs.
They can only mean anything to... you guessed it... an observing brain.
No, I didn't guess it. You asserted it. And what do you mean by an observing brain? Anything I say about the system will be done by an observing brain, because that's what I am. If you want to include an observing brain for this reason, then you need to do it honestly across the board, and conclude that the universe would not exist if you were not alive. After all, even if someone shows you evidence to the contrary, you can only see it with your observing brain.

But there's no reason why we can't build a reasoning machine that builds a theory about what this marble machine does, if you want to take a brain out of the picture.
So I beg you... if you want to stop discussing brain states, please stop describing situations which demand them.
Who said I wanted to stop discussing brain states?
Btw, regarding your logic that there's an inherent two-ness about the # column because putting one marble into the # slot results in the same situation as putting two marbles into the $ slot, you're simply ignoring the other marble, aren't you?
No, I'm not ignoring the marbles. I'm attending to the machine states.
Why?

Because you "know" that the other marble is out of play. There's no way to deduce that from the machine.
You're losing sight of what you're trying to do. Our brains generate waste products. Is your theory going to include those in consciousness?
We could also note that putting two marbles into the $ channel results in the same output into the "bone pile" as putting one marble in the bone pile -- one marble in the bone pile.
Yeah, so what's noteworthy about it? Our brains use energy. And whether you eat a tomato, or a tamale, you get the same... how shall I put this... end results.

So what's the point? Are you going to include those end results in your theory of how the brain generates consciousness? Or are you just randomly slinging mud?
So by your logic, the bone pile also must inherently represent "two" somehow.
Straw man.

You're really reaching too.
 
No, the computer would not be made of artificial neurons, but simply emulated artificial neurons. If artificial neurons are implemented by digital electronics or by a computer, then there is nothing a network of such neurons can do that a single processor system with the same inputs and outputs can't also do. Given arbitrarily large storage, no computer has more functionality than any other, and no network of computers has more functionality than any single computer. Computable is computable.

My italics.

Clearly there is something the computer can't do. It can't replace a human brain and control a human body. Therefore there is functionality it lacks. To claim that, because it is computationally equivalent to the network of artificial neurons, it is functionally equivalent is to beg the question of whether brain function is computational.

The fact that the network of artificial neurons could, in theory, replace the human brain while the computer couldn't -- even just as a thought experiment -- casts doubt on the equivalence between functional and computational.
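For reference, here is a minimal sketch of what the quoted equivalence claim amounts to in practice: one processor, stepping simple units in sequence, reproduces the same input-to-output mapping that a parallel network of those units would compute. The unit model (weighted sum plus threshold) and the two-layer wiring are hypothetical stand-ins of my own, not anything proposed in this thread; the open question above is whether reproducing that mapping is the same as doing the physical work a brain does.

Code:
# A sketch of "a single processor can emulate the network of artificial neurons".
# The unit model and the example wiring are illustrative assumptions only.

def step_unit(weights, inputs, threshold=1.0):
    """One artificial unit: fire (1) if the weighted input sum reaches the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

def emulate_network(layers, external_inputs):
    """
    Update every unit of every layer in sequence on one processor.
    `layers` is a list of layers; each layer is a list of weight vectors.
    The returned values are the same input-to-output mapping a physically
    parallel network of these units would produce.
    """
    signal = external_inputs
    for layer in layers:
        signal = [step_unit(weights, signal) for weights in layer]
    return signal

# Example: a tiny two-layer "network", evaluated unit by unit.
layers = [
    [[0.6, 0.6], [1.0, 0.0]],   # first layer: two units reading both external inputs
    [[0.5, 0.7]],               # second layer: one unit reading the first layer's outputs
]
print(emulate_network(layers, [1, 1]))   # -> [1]
print(emulate_network(layers, [0, 1]))   # -> [0]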
 
Wanna bet?

It has a helluva lot more than that.

Try using colored marbles and see how many it has.

Stop ignoring the bone pile and see how many it has.

Any physical system has an effectively infinite number of states. We tend to focus on particular states because they are of more interest to us, but they are not privileged in any objective way.
 
You're refusing to think about what I write.

Are you suggesting that the use of the brain as a food source may be key to generating consciousness? Or has a point flown over your head?

I think it's at least worth considering the possibility that some physical attribute of the brain produces a physical effect of consciousness. I don't think that to consider that possible equates to a belief in magic beans.

The burden of proof is undoubtedly on those who insist that it is only the computational nature of the brain that counts, and that nothing else is of any significance. They need to demonstrate that nothing in the physical configuration of the brain matters.
 
Explain Consciousness to the Layman

Ok, well, I’ll tell you what I know about it, and that should be understandable to the layman, seeing as how I understand it and I’m not a biologist.

I’ll lay out some of what is widely known and some of what is still unknown. In a later post (or posts) I’ll get around to common errors in popular ways of thinking about the brain and consciousness, as well as the question of whether or not you can program a computer to be conscious, and even the implications for free will, but we’ll tackle this first.

Be prepared... this is not a bumper-sticker question, so this post might end up being the length of an article.


What is consciousness?

When we talk about consciousness, we’re talking about something the brain is doing which everyone can observe.

It’s what happens when you wake up in the morning and your body starts having – or, more accurately, performing – certain experiences, for example an experience of the color of the ceiling, or an experience of being hungry, or smelling coffee.

This wasn’t happening before.

Light was bouncing off the ceiling onto your head (let’s suppose you sleep with your eyes open), and your stomach was secreting chemicals, and little molecules from the coffee beans were bouncing around in your nose, and all of that was causing very complicated things to happen in your brain – more on these later – but there wasn’t a feeling of hunger at that spot in the universe before you woke up, or a smell of coffee, or the brownness of that damn spot from the roof leak you still haven’t painted.

You can examine coffee from top to bottom, and you got no way of knowing what it smells like to a duck... depends on the duck’s brain entirely. Same for your brain.

We know that the brain is causing the experience because we can manipulate experiences by manipulating the brain, and we can observe (or receive reports of observations, depending on whether you’re the patient or the doctor) how brain injury and malformation changes experience, and we’re building an increasing body of observation about precisely how this works.

Now that you’re awake, your body is performing one of its bodily functions, which is the very experience you’re “having”.

But to say that “you” are “aware of” an experience is redundant. When an experience is going on, whatever it is, you’re going on too, and when one's not, you ain’t either.

Experience can include a sense of self or not, btw. Optional but not required.

What we need to explain is this:

1. What is the brain doing when experiences are going on which it is not doing when experiences are not going on?

2. Whatever that turns out to be, why does any particular activity of the brain produce any particular experience (the smell of cinnamon, for example) and not some other experience (like the color of a clear sky) or none at all?​

The second question is currently beyond our means to answer.

But we’re making progress on the first one.


What is going on in my body when it’s having an experience, which is not going on when it's not?

Before answering that, we should probably dismiss some obvious candidates which might not be so obvious to most folks.

What about attention? When I’m dead asleep, I’m not paying attention to anything, right?

Turns out you are. Your brain is constantly shifting attention, which is not surprising since the body is extremely vulnerable to predators and injury while dead asleep, so you need attention working in case consciousness needs to be kick-started, for instance.

Not only that: while you’re awake, your brain is shifting visual attention all over the place, yet that attentional mechanism isn’t part of our experience.

What about memory, using and forming memory? What about learning?

Turns out, our brains learn and use memory just fine without needing consciousness to be engaged or involved.

One odd thing about experience (which I’ll explain later when we get to the mechanics of it) is that it can only be so brief. If an event happens too quickly, we can’t be aware of it.

For example, if you look at a pair of lights, and one blinks after the other one, and you make the time difference shorter and shorter, there will come a point at which you will experience the blinks as being simultaneous when they’re not.

And it seems that way precisely because what happened in the meantime simply had nothing to do with your experience, had no effect on it. (The reason why will have to wait.)

But the rest of your brain is affected by what happened in the interval between the two blinks!

We can demonstrate this by showing people slide shows and timing some of the slides so they’re visible only for a span too short for the viewers to have any experience of them.

This experiment has been done thousands of times, with many types of reporting methods from the subjects, and we know that people looking at the “subliminal” slides have no experience of seeing them.

But if you show a group of people some slides of geography or buildings or random geometric shapes, for example, and then ask them to name any animal whose name begins with A, you’re going to get “Ape” as an answer much more frequently from a group who has viewed subliminal slides of gorillas, and “Aardvark” or “Anteater” or “Ant” more often from a group who viewed subliminal slides of aardvarks.

When asked why they came up with that answer, they will all have reasons, but none of those reasons will have anything to do with the activity of their non-conscious brains, which was the actual cause.

We can also make associations between images by pairing them subliminally with others, and these pairings will be learned by the brain in a way that the person will act on later. So you don’t have to be conscious to learn.

What about imagination? Gotta be conscious to imagine, right?

Nope, not that either.

In fact, our “conscious minds” work in tandem with a process of non-conscious imagination all the time.

When we’re imagining something... let’s say we’re imagining a room in the house where we used to live... we’re using the same circuits of our brain that would be engaged if we were standing in such a room and looking at it and listening to it and smelling it.

That’s not all we’re doing, obviously, or we’d be hallucinating the experience, but those circuits are indeed engaged.

As it turns out, our brains are doing the same thing constantly while we’re awake (but apparently not in the same way when we’re dreaming!) even though it forms no part of our experience in the way that conscious imagination does.

At any given moment, your brain is (non-consciously) imagining what it expects to happen next and comparing that with the actual cascade of impulses that does happen. But you are not aware of it, until a mismatch occurs that triggers the involvement of experience.

So when you forget that you hung your coat on the end of the door, and you get up in the night and go to the kitchen in the dark, and scare the living daylights out of yourself, that’s because your brain wasn’t imagining a coat there and the clash between how those circuits were lit up and how the incoming circuits lit up created an experience of HOLY****THERESSOMETHINGLOOMINGTHATMIGHTBEAPERSONWHERENOPERSONSHOULDBE!

And if the elevator doors open and you start to move forward and see there’s no elevator, the clash between the patterns also frightens you, this time because something wasn’t there which was expected, and the empty space was scary.

How this happens is quite complicated and not very well understood, but we do know that the non-conscious imagining and pattern-matching is going on by observing the brain in action, so that you can have the same sort of reaction from something being there or not being there, depending on what you expected. (If you were there to fix the elevator, for example, the empty space would not have scared you.)

So we know that consciousness is not the same as attention, memory, learning, or imagination.


Ok, then what is it?

It turns out that what’s going on is a coordination of activity among different parts of the brain.

This can be detected by interfering magnetically with different areas of the brain when people are deeply asleep and when they are awake.

If you’re awake and this happens, effects of this disturbance start popping up all over the brain in different ways.

If you’re asleep, the magnet affects the area it’s physically near, and that’s all.

This is important because it explains one of the most salient features of experience – it coordinates events that are going on in many different parts of the brain at once.

For instance, you can search YouTube for videos on the McGurk effect, which demonstrates that the sound we hear when someone is speaking can depend on what we see.

If the sound of someone repeating the sound “fa” is played over a video of someone mouthing the sounds “fa” and “ba” in various sequences, what you hear will change back and forth, even if you’ve already heard the soundtrack and know that it’s being replayed.

And there’s the example of folks with synesthesia, who see numbers as having colors, or who taste words, and so forth.

And there are any number of visual illusions that work even when you know how they work, even after you prove to yourself they’re illusions.

By the moment of experience, activity that’s going on in different areas of the brain – the patterns of activity that result from hearing, for example, and those resulting from seeing... or those involved with response to motion and those involved with response to temperature – has been merged into a single pattern which consciousness itself cannot untangle.


Then what’s doing the coordinating?

The answer comes from research done on a group of people who have had to have implants surgically placed deep in their brains for medical reasons.

Some of these folks have generously allowed scientists (and us) to observe what’s happening deep in the brain.

These observations have led to the discovery of “signature” brain waves of consciousness.

They can be observed as a brain falls asleep, dreams, wakes up, is anesthetized, comes out of anesthesia, and so forth, and they can be seen to gain coherence as a person wakes up, and they lose coherence and fail (in observably distinct ways) as a person falls asleep or is anesthetized. They also operate when we dream.

These brain waves cross many different areas of the brain, and the electrical activity of those areas will affect the overall wave.

If these waves are the mechanism by which binding is accomplished – which allows us to see “our friend James” rather than a barrage of sound, color, motion, and heat – that would explain why we are not conscious whenever they’re not up and running and coherent, and simultaneously why the magnetic disturbances don’t spread when we’re not conscious.

It would also explain why experience is “temporally granular”, which is to say why there are “blind spots” in our experience of time (if experience is based upon repetition, a deviation within a single sub-wave cycle would have no impact).

Of course, it’s possible that these waves are “noise” like the sound made by my truck’s engine or the heat coming out of my computer, but if so, we have to ask “Noise made by what... and what is that thing doing?” At the moment, the waves themselves are our best lead.


The conscious choir

Think of it this way....

Your brain is an astonishingly dense and intricate and complex coil of little things that bounce chemicals and electrical charges around.

There’s something like a hundred billion of these little guys in there, humming away.

The shapes that they form in their dense little bundles, when the electrochemical activity is going on, create a very real electric environment inside your skull. Each little shape is humming away, forming its own electric pattern in space.

But space doesn’t particularly care.

You can imagine it like a vast and complex choir of voices humming – be warned, this is just a metaphor! – but with no air in the room.

Without the air, the vibrations don’t leave the choir members, and they remain separate.

Put air in the room, now the vibrations have something to bounce around in, and you get harmonies and fugues and everything else.

The waves cutting across the brain are like air in the room for the humming electrical shapes in your skull.

Now they mix in all sorts of ways to produce new shapes which did not exist before the air was there, and which will cease to exist if the air is removed.


Look around

Stop and look around for a moment.

Look at the color of paint on the wall. Or the color of the leaves of a plant.

Inhale and smell. Listen.

This experience – which, after all, is you, isn’t it? – is somehow performed by those waves in your brain as they are warped by the electrical humming of the various shapes of different parts of your brain.

And, of course, the humming of the choir is affected by feedback from the waves.

But that’s where the greenness of the grass is, and the coldness of the ice, and the sweetness of the smell of the back of your lover’s neck when you wake up in the middle of the night.

And these things – these experiences – are little pockets that go floating and crawling and sliding around in the universe. They are, after all, simply shapes that the universe takes in its vast silent explosion.

Now think about it....

Imagine yourself at a football game, maybe the local pro team, maybe your kid’s school team.

The place is full of these little pockets, where experiences are happening... and nothing in the light bouncing around the place, or the heat, or the crashing molecules that bounce off our ears and go up our noses... nothing about any of that stuff can predict the smell of hot dogs, or the sound of a shoe hitting a ball, or the taste of a drink, or the emotion of watching your team win (or lose).

All of that is something that the human brain does.

What it is to be human is to have the kind of body we have. What it means to be human is to have that body’s kind of experience of the world, which exists entirely within it, because of the way it is shaped.


Questions to be answered

If the brain waves are the air and the electrical shapes formed by brain activity in the various sub-organs are the choir... who’s listening?

(You see, I told you it was only a metaphor!)

We still don’t know why this particular physical arrangement creates experience, much less why it leads to the particular experiences it does.

It’s tempting to think that the mere fact of interaction does the trick by itself, but as we’ve seen from our examples of non-conscious attention, learning, and imagination, interactions of brain patterns are happening all the time – including among patterns we would identify as “representative” of something outside the body or elsewhere in it – and most of them have no effect on experience.

And if the interaction of patterns in a medium is the cause, then why isn’t the real air conscious when it is the vehicle of real vocal harmonies?

So that’s the next step.

Why does having a brain shaped like ours make our body perform the experiences it performs?

And why does a brain shaped like ours perform any experience at all?
 
You're refusing to think about what I write.

Are you suggesting that the use of the brain as a food source may be key to generating consciousness? Or has a point flown over your head?

If you want to talk about the purpose of a brain, I cannot see how the purpose of keeping one animal alive is any more valid than the purpose of keeping another animal alive.

Unless, of course, you want to take into account the personal preferences of the animals involved.

After all, what is the purpose of an ear of corn?
 
My criterion:
There is an association between that state of the machine and a referent--a particular set of inputs.
So, which of us is not treating the machine as an object? If you look at my criterion for meaning, both highlighted things are part of the object. Look at yours, and you point to the human.

Yeah, well, you forgot to highlight the word "association" in your criterion there.

Where does that association exist?

I want a location as an answer, please.

There are "associations" between the marble machine and all sorts of real or imagined systems.

This is why people are able to write books like "The Bible Code", for example. You can imagine all sorts of possible associations for any system.

But how is any object "associated" with any other object in the way you're describing, which is to say as a "referent"?

A: This is only possible if someone somewhere decides what the damn thing is supposed to refer to!

That is why I do indeed include a brain state in my description of any system that includes "meaning", and unapologetically so, because it is required!

You want to include "meaning" in systems with no interpreting brain, and then you go and describe systems that demand an interpreting brain to work the way you want them to -- if you deny it, go back and look at your own examples of how the marble machine was put to "right" use.

Now I'm tired of this baloney, and if you're unable to discuss the machine as a machine and not as a symbol system, or if you want to propose the idiotic notion (and I use that word advisedly) that a symbol system can exist without anybody to decide that it does, you're on your own.

You can have that conversation with yourself.
 
Explain Consciousness to the Layman

Ok, well, I’ll tell you what I know about it, and that should be understandable to the layman, seeing as how I understand it and I’m not a biologist ...


Thanks for your summary of where you are with respect to explaining consciousness. It's obvious that you are putting some serious thought into it. If I understand what you are saying, then my own view is very similar: consciousness seems to be an emergent property of a combination of brain, sensory I/O, and programming, where the brain part consists of memory and of processing the sensory information through the programming, some of which is hard-wired and some of which is adaptive, with parameters that depend on the needs of the overall system.
 
Your ultimate position is that the use of these machines cannot derive meaning. You're currently trying to argue this position by arguing that there exists no context in which meaning can be derived without using a human brain.

I'm giving you the context, and it does not depend on a human brain. But you keep pointing out why it fails because it's not what a particular human brain came up with. How can you not see this as begging the question?

I never said any such thing.

Of course the use of these machines can derive meaning, as you say.

Now, when this happens, which circle are we looking at... the one with just the computer in it, or the one with the computer and a brain in it?

It must happen in the one with the brain in it, because otherwise all we have is some marbles in, and the same number of marbles out.

These relationships you see in the machine are there, but so are a bunch of others which you're ignoring as unimportant... without a brain that knows which ones to pay attention to and why, then no, there is no way to "derive meaning" from the machine.

You are the one who refuses to stop describing scenarios that require brain states!
 