
Explain consciousness to the layman.

Status
Not open for further replies.
Naturally-occurring consciousness might not be the result of logical computations, but neither you nor Westprog, nor anyone else, has given us a good reason why consciousness cannot be programmed. It would be different, but it would still yield the same result.

This requires dualism, I'm afraid. (Which is why Westprog and I are so baffled when the computationalists call us dualists for believing that only physical computations are involved.)

Logical computations are, by definition, imaginary overlays onto physical computations.

If you believe that logical computations can cause real (that is, non-imaginary) events of any kind, then you're saying that the matter and energy which are only sufficient to cause the physical computations are causing them and are also causing a second event.

(In other words, PixyMisa is holding his own dreaded "magic bean".)

This is why the whole "world of the simulation" thing is so ridiculous.

You have to believe that the mass and energy which is only sufficient to make the machine do what it's doing will get you that and a "world of the simulation" which is somehow not imaginary.

(Of course, this problem is solved once you recall that you need a brain to create and read the sim, at which point it's obvious that the "world of the simulation" is, and can only be, a state of someone's imagination, with the matter provided by the brain and the energy provided by the body's metabolism, which removes the need for dualism. This is why you can't swap a sim machine for a brain -- the sim is imaginary, only the physical actions of the machine matter.)

And since logical computations are symbolic, if consciousness is the outcome of logical computations, then consciousness itself is a symbol. (In fact, the brain was recently described on this thread as literally being a symbol system... perhaps to prop up this notion, I don't know.)

But in order for a symbol to exist, there must be some sort of object paired with someone who decides what it means and interprets that meaning. Which means, if consciousness is a symbol &/or the brain is a symbol system, you need a mind somewhere outside of the brain to decide what those symbols are and what they mean.

Back door dualism.

So if you accept that logical computation can generate consciousness, you are diving into a world where dualism is required, no matter how much the computationalists may attempt to deny it.

The biological model avoids that fatal problem by asserting that the brain behaves like everything else in our world (no special pleading, as is required by computationalism) and that all its behaviors have physical causes, end of story.
 
No. However, the speculation about the interaction of brain waves and neural "noise" is just that, speculation. It's certainly testable, if anyone cared to design the test, but it's also certainly not tested, so there's no reason to believe it.
Brain waves are neural noise. It's been tested.

On the other hand, the coordination of the signature brain waves during consciousness, and the parallels of experience during the losing and gaining of consciousness, these are observable.
Sure. But correlation is not causation.

Given that observable coordination, we have to consider the waves some sort of NCC. If they're just "noise" from some other process, then they're noise from a process which is also coordinated w/ states of awareness.
Yes, certainly. But they are just noise.

If the integrated information theory describes something real, then the brain waves interacting with "noise" from the neural structures is an interesting thought to entertain, and it would fit the bill.
Except that it's physically impossible. If you can't discard the physically impossible, you're in serious trouble.

It might be odd to think of your own awareness, all of your experiences, as being caused by something so weak, but when you think about it, there's no reason why that should matter at all. Why not a brain wave as a medium for information from the electric hum of the various shapes of neural tissue?
Because that's not how the brain works.

It's exactly as if you opened up a computer and ignored all the wires and decided that it's the 2.5GHz RF noise that's doing all the work. It's not, and everyone knows it's not.
 
It's the unconsidered assumptions that do the damage. E.g., if something doesn't have a precise language definition, it doesn't exist. It doesn't take much thought to realise that the more fundamental and essential something is, the less likely it is to have a definition, since there are fewer things available to define it.

Another trap is to take the metaphor literally.

This is an especially sticky tar baby if you spend a lot of time with machines that are specifically designed to make imaginary things look real, and/or in a discipline that uses a standard set of metaphors almost all the time (e.g. information theory, mathematics, computer programming).

The results can be very odd, like taking the post office metaphor of the brain literally, so that you actually believe that patterns of neural impulses are "images" of things which are "recognized" by brain structures and "routed" to the appropriate destination.

The error becomes somewhat clearer when you apply that to a coin sorting machine, and claim that the coins are "recognized" and "routed", rather than that they simply follow the laws of physics and fall down whenever they're not supported.

But given our experience with the marble machine, there may be some here who would accept that as a literal explanation, too.

If you try to apply that to a log in a river, however -- the log was "recognized" by the river and "routed" into the appropriate channel, rather than "the log was too big to go down one channel so it went down the other" -- the mistake should be apparent to anyone.
 
You might be surprised that I begin by citing information theorists, but I actually have no problem with information theory... it’s extremely useful... I only have a problem with misapplications of info theory, such as taking the metaphors literally, or attempting to apply theories which describe non-conscious activity to conscious activity as though they were identical.
I'm pretty sure this is begging the question again.
First, we should note that ...
Not under dispute.
It is also accepted that ...
Not under dispute, pending certain definitions of consciousness. But note that "conscious experience" being well defined does not entail that "consciousness" is.
in other words, you’re not conscious just because you have a brain
Of course not. First off, there has to be a you at all, and that requires an instantiation in physical form.
and that consciousness does not merely “emerge” as a result of some critical mass of neurons.
Totally agree, no matter what definition you use that's at least remotely common. Leumus doesn't though.
NCCs are simply the physical states of the brain (what it’s doing) when we are aware of various things...
Interestingly enough, that marble machine also has physical states--physical states which you assert cannot possibly generate consciousness.
I mean, if we were to find out that for every human being, the smell of cinnamon happens when the brain is doing one particular combination of activities, we still might not know, from that fact alone, why that state makes us experience the smell of cinnamon instead of having some other experience.
One thing is absolutely certain. If we can consistently recognize the smell of cinnamon, consistently recognize the smell of sulfur, and reliably distinguish each from the other, then there must be some configuration C and some configuration S, each stable under a particular transformation, with C distinct from S under that transformation. And if any other kind of property C' is consistently associated with cinnamon under a transformation, and any other kind of property S' is consistently associated with sulfur under that transformation, and the transformation provides a difference between C' and S', then C' must correlate to some such C and S' must correlate to some such S.

So if C' is "experiencing the smell of cinnamon", and S' is "experiencing the smell of sulfur", and the experiences are distinct, then C' can only correspond to something that is consistently present when cinnamon is present and absent when sulfur is present.

Therefore, if you have no explanation for an experience, but a symbolic machine can produce a correlate of these states, you cannot rule out that the symbolic machine generates an experience based on its failure to produce a distinction between them.
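The structure of that argument can be sketched as a toy program (all names here are illustrative stand-ins, not anyone's actual model): consistent recognition of two distinct smells forces two distinct internal configurations.

```python
# Toy sketch of the C/S argument: a system that reliably
# distinguishes two stimuli must occupy a distinct internal
# configuration for each. All names are illustrative only.

def configuration(stimulus):
    """Map a stimulus to the internal configuration it induces.
    In a real system this would be a pattern of physical states;
    here it is just a hashable stand-in."""
    table = {"cinnamon": "C", "sulfur": "S"}
    return table[stimulus]

def recognize(config):
    """Read the reported experience off the configuration."""
    return {"C": "smells like cinnamon", "S": "smells like sulfur"}[config]

# Consistency: the same stimulus always induces the same configuration.
assert configuration("cinnamon") == configuration("cinnamon")
# Distinctness: if the experiences differ, the configurations must differ.
assert configuration("cinnamon") != configuration("sulfur")
assert recognize(configuration("cinnamon")) != recognize(configuration("sulfur"))
```

Nothing in the sketch says what the configurations are made of; it only encodes the bare requirement that they be consistent and distinct.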
This, in a nutshell, is the biological model of consciousness.
And nobody here disagrees with this.
In other words, these guys would not agree that you can replace a brain with a simulator machine running a sim of a brain (however detailed) because – as Westprog and I have been trying to explain to many deaf ears – the physical work of the two machines is different.
Non-sequitur. Please explain how this follows from your previous paragraphs.
This is important to establish up front, because – as we’ve seen on this and just about every other thread on the topic – it is so widely misunderstood.
This is the "you're too dumb to just get it" argument.
So I’ll insert a comment from Ned Block on the subject from “Comparing Major Theories of Consciousness” (NB is Block’s):
The competitors to the biological account are profoundly nonbiological, having more of their inspiration in the computer model of the mind of the 1960s and 1970s than in the age of the neuroscience of consciousness of the 21st century. As Dennett confesses, “The recent history of neuroscience can be seen as a series of triumphs for the lovers of detail. Yes, the specific geometry of the connectivity matters; yes, the location of specific neuromodulators and their effects matter; yes, the architecture matters; yes, the fine temporal rhythms of the spiking patterns matter, and so on. Many of the fond hopes of the opportunistic minimalists [a version of computationalism: NB] have been dashed: they had hoped they could leave out various things, and they have learned that no, if you leave out x, or y, or z, you can’t explain how the mind works.”
That paragraph is not saying what you think it is. This is a criticism of simplistic integrate-and-fire models of the neural network of the brain.
Note that temporal rhythms are included here, a feature which disproves the claim that “consciousness is a product of logical computation” because, as we know, logical computations can proceed at any speed and produce the same result – not so with consciousness.
Citation?
Or, as Christof Koch, himself no stranger to info theory, puts it:
Brain scientists are focusing on experimental approaches that shed light on the neural basis of consciousness rather than on... philosophical problems with no clear resolution.
I'm having a bit of problems processing this. Could you explain how your quote of Christof Koch relates to the thing you said above? Please be specific and, where possible, use snippets of exactly what you quoted to show the relations.
I hope that these observations will at least inspire a modicum of caution in those tempted to accept the assertions of folks who believe that consciousness can be understood without studying it directly (e.g. by studying math, or general info theory,
If I didn't know better, I might get the impression that you are trying to suggest that people who study math or general information theory are against studying consciousness directly.
and a great deal of skepticism toward anyone who claims that it is widely accepted that conscious experience has no direct physical cause,
If I didn't know better, I might get the impression that you are trying to suggest that people who believe that physical machines can generate consciousness believe that consciousness has no physical cause.
especially when that is coupled with an assertion that the cause of consciousness is known.
What if they do know?
I assure you, this is a fantasy bordering on delusion.

Anyway, that settled, let’s get back to IIT....
It's not quite settled. I'm going to need more than your assurance.
To begin teasing out these questions, they begin with a simple thought experiment:
...
At this point, you’re probably starting to see the difference between laptops and brains, and why brain waves might be more important than the computational literalists and neurons-only crowd on this forum imagine they could be. And this is coming from information theorists!
If I didn't know better, I might get the impression that you are claiming that it is impossible, computationally speaking, to determine that an entire panel is lit up white as opposed to a single pixel.
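For the record, a check like that is a trivially small computation. A sketch, using a hypothetical 2-D boolean frame (True = lit white):

```python
# Minimal sketch: deciding whether an entire panel is lit white
# versus only a single pixel is a trivial computation.

def all_white(frame):
    """True if every pixel in the frame is lit."""
    return all(all(row) for row in frame)

def single_pixel(frame):
    """True if exactly one pixel in the frame is lit."""
    return sum(px for row in frame for px in row) == 1

panel = [[True, True], [True, True]]
dot = [[False, False], [True, False]]

assert all_white(panel) and not single_pixel(panel)
assert single_pixel(dot) and not all_white(dot)
```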
What they’re implying here is that there can only be a “you”, there can only be an “experience”, if there is some degree of integration of information caused by real physical activity.
So, like a Hopfield network?
In other words, consciousness exists in those places where the physical-temporal integration of information actually creates a point of view!
Like a Hopfield network?

Incidentally, what exactly do you mean by integration producing a point of view, using the specific example of the visual field?
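For readers unfamiliar with the reference: a Hopfield network is a standard example of a system whose global state is physically integrated, since each unit's next state depends on all the others, and the whole settles into one pattern. A minimal sketch (the textbook Hebbian, sign-activation version, not tied to any specific claim in this thread):

```python
import numpy as np

# Minimal Hopfield network: store one pattern in a Hebbian weight
# matrix, then recover it from a corrupted cue. Each unit's update
# depends on the state of every other unit, which is one concrete
# sense of "integration".

pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0)  # no self-connections

state = pattern.copy()
state[:3] *= -1  # corrupt three of the eight units

for _ in range(5):  # synchronous updates until the state settles
    state = np.where(W @ state >= 0, 1, -1)

# The network falls back into the stored global pattern.
assert np.array_equal(state, pattern)
```

The point of invoking it here is only that "every part constrains every other part" is already a well-understood property of ordinary computational systems.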
And although I’m getting a bit ahead of myself, ...
Yes, but I'm not going to go there just yet. Too many other things happening at the moment.
...
Now, one thing that's very important to note here is that Tononi and Balduzzi do not define information as symbols (who would determine their values, or read them?) nor as representations (of what?).
Well, whatever information is, suppose "I smell cinnamon" is one instance of it and "I smell sulfur" is another, and whatever it is makes those two things different. If I can consistently smell cinnamon when I hold cinnamon up to my nose, consistently smell sulfur when I hold sulfur up to my nose, and consistently tell the difference between the two, then there absolutely must exist some abstract yet real physical configuration C that is consistently induced by cinnamon being held near my nose, and some abstract yet real physical configuration S that is consistently induced by sulfur being held up to my nose. C correlates to my smelling cinnamon (i.e., is invariant to my recognition that I smell cinnamon), S correlates to my smelling sulfur (i.e., is invariant to my recognition that I smell sulfur), and C is distinct from S (otherwise I would confuse the smell of cinnamon with the smell of sulfur).

And whatever those things are, why would they not be symbols?
So the integration of information is neither merely “self-referential information processing” (as was mentioned in the article’s opening paragraph, this happens all over the brain) nor is it an interaction of symbols.
If there were no symbols, as defined above, what are you left with that allows you to recognize cinnamon and sulfur and distinguish the two?
...
Moving on to the topic of integration, they note that....
We need to find out how much of the information generated by a system is integrated information – that is, how much information is generated by a single entity, as opposed to a collection of independent parts. The key idea here is to consider the parts of the system independently, ask how much information they generate by themselves, and compare it with the information generated by the system as a whole.
Regular computers can integrate information.
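That whole-versus-parts comparison can even be sketched on a regular computer. The toy below computes multi-information (the sum of the parts' entropies minus the joint entropy), which is only a crude stand-in for Tononi and Balduzzi's actual measure, but it shows the shape of the calculation:

```python
from collections import Counter
from math import log2

# Toy illustration of "information generated by the whole vs. the
# parts": multi-information = sum of marginal entropies - joint entropy.
# This is a crude stand-in for IIT's phi, for flavor only.

def entropy(samples):
    """Shannon entropy (bits) of an empirical distribution."""
    counts = Counter(samples)
    total = len(samples)
    return -sum((c / total) * log2(c / total) for c in counts.values())

def multi_information(samples):
    """How much the two-unit whole exceeds its independent parts."""
    parts = sum(entropy([s[i] for s in samples]) for i in range(2))
    return parts - entropy(samples)

# Two units that always agree: each part alone looks like 1 bit,
# but the whole carries only 1 bit, so 1 + 1 - 1 = 1 bit is shared.
coupled = [(0, 0), (1, 1), (0, 0), (1, 1)]
# Two independent units: the joint entropy equals the sum of the parts.
independent = [(0, 0), (0, 1), (1, 0), (1, 1)]

assert multi_information(coupled) == 1.0
assert multi_information(independent) == 0.0
```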
...
Which leads to this observation (and again, note that current research, including evidence from computer simulations – despite protests to the contrary from some quarters on this forum – contradicts the “consciousness as logical computation” model which demands that it occur at any speed,
Again, citation?
and the “information only” model which dispenses with any direct physical cause):
All physical machines work by having physical causes.
...Now, here I’m jumping off into speculation, but it is productive speculation – that is to say, it is based on actual research on conscious brains and non-conscious brains (not notions derived from philosophy or observation of computers
If I didn't know better, I might get the impression that you are trying to suggest that people who apply principles from philosophy or observations of computers are ignoring research on conscious brains and non-conscious brains.
and other non-conscious entities
What if those people are applying principles from observations of conscious computers?
or pure mathematics
If I didn't know better, I might get the impression that you are trying to suggest that people who apply principles of pure mathematics are ignoring research on conscious brains or non-conscious brains.
or anything like that)
Mhm....
and it can lead to experiments which falsify or confirm the premise.
Sure.
Since we know that evolution works with the parts it’s got, whatever they may be, let us speculate about an early brain, one which is not conscious but is just a set of stimuli and responses like any dumb machine can do.
As opposed to what smart machines can do.
This brain is based on what we normally think of as the architecture of the brain – that is, neurons firing in chains (very complex and tangled chains, but chains nonetheless) – kind of like traffic in a busy city.

It’s difficult to see how this arrangement can produce integration, and therefore conscious experience.
Actually, no, it's not difficult to see how it can produce integration.
But like the wires on a power line (remember my point about the difference between the performance of wires strung vertically versus wires grouped triangularly) these neurons produced a lot of electrical noise.
...
Let’s go back to the choir analogy.
Okay, so, here's a basic introduction to harmonics, by the awesome vihart:

You'll note that in order to produce:
When you do this, the instruments which are now encompassed in the slab of air can make their voices heard, and they produce all sorts of new harmonies and subtones which did not exist in the vacuum.
...those things, you need to have the waves at specific frequencies relative to a carrier frequency. And if you just put a wave there, all it can do is be a wave. To integrate a massive amount of information, a single wave at 1/3 the frequency of a carrier can only possibly mean one thing, if it's issued globally. It's just not distinct from itself; it's impossible to tell a wave at that frequency that's supposed to be smelling cinnamon from a wave at that frequency that's supposed to be smelling sulfur.

There are thousands of smells. And that's just smells.
In other words, there is now more information in the system, and that information is integrated!
The chorus analogy just doesn't do it for me, because I can only imagine a finite number of higher frequencies being generated, which are woefully insufficient for expressing the information we need to integrate; go much lower in frequency and you get into severe time delays (note that it requires whole multiples, so the delay minimally grows by factors of 2). So please explain the actual mechanism by which you think this could work.
In short, this model brings together the most current brain research on consciousness with the most current information theory which has the potential to make consciousness quantifiable!
How?
If this model is accurate (who knows?) then it would mean that the classically neural function of our brains is the realm of non-conscious activity, and the larger-scale electrical activity (which is, of course, directly related to and depending upon the underlying neural activity) is the realm of consciousness.
It's not a model until you flesh it out; in particular, you need to sufficiently flesh it out to provide a feasible mechanism for your claims; for example:
This hypothesis has not been tested, but it sure would explain a lot, and it is consistent with current research.
What exactly would it explain? In particular, please answer this question by specifically invoking the information-carrying properties of the electromagnetic wave that are not available on the classically neural level which specifically contribute to integration of the type not available at the classically neural level.
Before we dive into all of the great things your theory will tell us, I'd like some assurance that your theory is the thing that is telling that to us, and not merely you. So I need a good model.

Otherwise, it's just an emotional placeholder for you; a "just so" theory that explains nothing.

Do you agree that it's fair to ask this?
And might I add, it is the height of arrogance for people who don't much care about brain research to insult others for having the audacity to question their expertise in the matter.
If I didn't know better, I would think that you're trying to suggest that those you are questioning the expertise of do not care about brain research.
 
This requires dualism, I'm afraid. (Which is why Westprog and I are so baffled when the computationalists call us dualists for believing that only physical computations are involved.)
No.

I mean, yes, you and Westprog are baffled, but the computationalist approach requires no dualism. Asserting that it does is, in fact, dualism.

Logical computations are, by definition, imaginary overlays onto physical computations.
Abstract.

If you believe that logical computations can cause real (that is, non-imaginary) events of any kind, then you're saying that the matter and energy which are only sufficient to cause the physical computations are causing them and are also causing a second event.
The claim that there is a "second event" is dualism.

You have to believe that the mass and energy which is only sufficient to make the machine do what it's doing will get you that and a "world of the simulation" which is somehow not imaginary.
You are only confused because you are approaching this from a dualist position. You think there's a physical computational process and a world of the simulation. That's entirely wrong. They are one and the same.

(Of course, this problem is solved once you recall that you need a brain to create and read the sim, at which point it's obvious that the "world of the simulation" is, and can only be, a state of someone's imagination, with the matter provided by the brain and the energy provided by the body's metabolism, which removes the need for dualism. This is why you can't swap a sim machine for a brain -- the sim is imaginary, only the physical actions of the machine matter.)
And there you have your magic bean.
 
Brain waves are neural noise. It's been tested.

I'm not saying they aren't.

But you forget what evolution can do with junk.

And as always, you forget that there is no current explanation of consciousness, so it's not possible that any "test" has demonstrated that brain waves aren't involved.

And you simply don't want to discuss the actual observations of the correlations.

But you're right, correlation isn't causation. All we have at the moment is a correlation, but it's a damn exciting one.

Unless of course you're clinging steadfastly to debunked ideas, or you're so deluded that you believe you have solved the problem of consciousness and are simply being ignored by the Nobel committee out of spite.
 
Another trap is to take the metaphor literally.

This is an especially sticky tar baby if you spend a lot of time with machines that are specifically designed to make imaginary things look real
That's what brains do.

The error becomes somewhat clearer when you apply that to a coin sorting machine, and claim that the coins are "recognized" and "routed", rather than that they simply follow the laws of physics and fall down whenever they're not supported.
That's also what brains do.

You are so mired in dualistic thinking, Piggy, that you see it only where it isn't.
 
You are so mired in dualistic thinking, Piggy, that you see it only where it isn't.

If you want to point out any actual dualism, well, OK.

But simply shouting "Dualism!" from the sidelines is just silly.
 
You are only confused because you are approaching this from a dualist position. You think there's a physical computational process and a world of the simulation. That's entirely wrong. They are one and the same.

If it's true that they're "one and the same", then when I observe the CPU, why don't I see the simulation?

Why must I interpret lights on a screen, or ink on paper, or vibrations from a speaker?

Why is that necessary?

Why can't I just look at the CPU?

The non-dualist approach accounts for this problem by avoiding the claim that the simulation is "one and the same" with the simulator.
 
Airplanes fly differently than birds. But the result is still something in the air that can fly around. Computers simulate flocking behavior differently than natural birds, but the result is still the emergence of a flock of things. Etc.

Therefore, there is also no good reason why you cannot replace a brain with a simulation of a brain, or why a robot can't be conscious at any speed. (Granted, the consciousness wouldn't be very useful if it were too slow to react to stimuli in the real world, but that's beside the point.)

Here's a better analogy: Sail across the ocean in a simulation of a boat.
 
Where, Pixy? Where?

Don't be coy, explain yourself.

Where is the magic bean?
I have explained this to you many times, as have others. You're not paying attention.

You consistently assert that a brain is required to interpret the results of a computer. We point out that the brain is a computer; that everything it does can also be done by a computer.

You assert that it is somehow more. You can't say what, or how, or why you think so. You just insist that it is.

That's your magic bean.
 
I think the word "emerge" is still useful. The challenge is to show HOW it emerges, not that it emerges.

I think it's useful, too.

Despite the quotation marks, I actually wasn't objecting to the term "emerge" but rather to the claim that it emerges simply by dint of a critical mass of neurons, a position which comes up from time to time on these threads.
 
No-one has said that, or anything that can be rationally taken to imply that.

You need to go back and read some of Belz's stuff, because he either said that or confused me enough to think he did.
 
I don't think this matters. The proximate details of how a machine works and how a natural brain works might be very different. But there is no reason why the ultimate results can't be the same thing. There is no reason, that I can see, that would make it impossible to replace one's brain with an adequately engineered artificial machine. (Not that I expect such a machine to be built any time soon.)

If by "artificial machine" you mean a "man-made machine" then yeah.

But if you think any machine part can be replaced by a computer simulation of that part -- except in some weird case where the simulator machine itself did the same work as the missing part -- then like I say, why not sail a computer simulation of a boat across the ocean?
 
If it's true that they're "one and the same", then when I observe the CPU, why don't I see the simulation?
When you look at your desk, why don't you see the electrons?

Why must I interpret lights on a screen, or ink on paper, or vibrations from a speaker?

Why is that necessary?

Why can't I just look at the CPU?
You can, if you use a big enough CPU.



The non-dualist approach accounts for this problem by avoiding the claim that the simulation is "one and the same" with the simulator.
You're confused again. I didn't say that the simulation is the same as the simulator. I said that the simulation is the same as the process the simulator is carrying out.
 
That's completely wrong. Reality is much closer to the reverse of that - the brain is built up of modules each with a specific function, and your awareness is a partial summation of those modules.

Then why don't you call up Tononi and Balduzzi and let them know.

I'm sure they'll rush out and revise their theory to assert that normal brains can indeed separate the experience of red from the experience of a square when they see a red square, and experience those qualities separate from one another.

Hey, happens all the time, right?

And by the way, if you want to believe a camera is conscious, knock yourself out, but don't expect me to take you seriously.

I mean, really... do you not understand that your ideas are laughable in the context of cognitive neurobiology? (Which is the study of the brain, you know... that thing that does consciousness... unlike, say, machines... which don't do that.)

Do you really not see this?

No, of course you don't, or you wouldn't keep repeating them... unless maybe you're an info truther exposing the conspiracy of the globalist biology world order to cover up what you know.
 