Has consciousness been fully explained?

That is my recollection, too.

But, as with my recollection that he claims not to have a Sofia, I may be wrong.

PM can easily clear up the question by simply stating what it is he believes.

There seems to be some controversy over the claim. I'm not sure why. Pixy has repeated it endlessly. I've specifically called it the function of the Turing machine in order to avoid the red herring of possible side effects due to the specific implementation of a Turing machine.

Other posters have been ambivalent about the brain as an organ of the body, but AFAIAA, Pixy has insisted throughout that it has been mathematically proven that a Turing machine can do anything that the brain can do. The implication is therefore that any implementation of a Turing machine could in some unspecified way replace the brain.

I obviously don't think that this is true myself, but there would be no point in arguing with it if it wasn't somebody's genuine opinion.
 
[Bolding mine] I have to take exception to this. The compared statements for which no difference is claimed here, as provided by RD from westprog's post, are:

1) PM's position is that the consciousness of the brain springs from its function as a Turing machine.
2) PM has consistently made the claim that consciousness springs entirely from the function of the brain as a Turing Machine - and nothing else

[Bolding original to RD]
I can accept 1) as a characterization of my own perspective, but not 2). My perception, though I can't know westprog's actual reasoning, is that it comes from a misunderstanding of the nature of emergence. I have previously explained how emergent properties are in principle predictable variables, not some intrinsically new property that magically appears in system ensembles.

RD has been demanding a well-defined distinction between the mechanistic models used to describe aspects of consciousness and consciousness itself, so I'll provide the principles in terms of emergence and show how they are consistent with the definitions of Myriad and others. This also goes toward the limits of the Church-Turing thesis, and why such a thesis can be valid for every subsystem of a system while not being valid for the system itself.

I'll use rogue waves as an example. At the level of individual water molecules you have linear translations through space, with a mean free path set by speed and density. They are simply bouncing off each other. In a standard linear wave the water is not even being transported with the wave; the molecules just bounce back and forth among each other as the wave passes. Normal waves are an effectively linear emergent phenomenon.

Rogue waves are characterized not by a linear compression, but by a nonlinear sum of a bunch of chaotic, randomly interacting smaller waves. Now, in spite of the individual molecules still being characterized by linear spatial translations, we must model the nonlinearities and treat the emergent wavelets as new entities, and then model the interactions of these new emergent entities to produce a completely new nonlinear emergent entity, created from underlying emergent entities, called a rogue wave.

Now the Church-Turing thesis is limited to 'explicitly stated rules', like the collisional rules of the individual water molecules. Yet for the rogue wave we are dealing with a hierarchy of emergent properties, where emergent properties are interacting with emergent properties to produce new emergent properties. It is distinctly nonlinear, and requires modeling inputs that lack the explicit values the Church-Turing thesis assumes. Yet foundationally they remain the product of explicit, linear, mechanistic molecular collisions.
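
To make the 'explicitly stated rules' point concrete, here is a toy sketch (emphatically not an ocean model; every rule, name and number is invented for illustration) of how purely local, explicit collision-style rules at the "molecule" level produce an emergent travelling disturbance that is best described at the ensemble level:

```python
# Toy illustration only: local, explicitly stated collision rules at the
# "molecule" level produce an emergent travelling bump at the ensemble level.
import numpy as np

N_CELLS = 60
STEPS = 20

# Each cell may hold one right-mover and one left-mover (1 = occupied).
right = np.zeros(N_CELLS, dtype=int)
left = np.zeros(N_CELLS, dtype=int)
right[5:10] = 1          # an initial "bump" of right-moving particles
left[45:50] = 1          # and a bump of left-moving particles

def step(right, left):
    """Explicit local rule: every particle hops one cell in its direction.
    When a right-mover and a left-mover meet they 'collide' and swap
    directions, which for identical particles is indistinguishable from
    passing through -- so this hop rule already includes the collision."""
    return np.roll(right, 1), np.roll(left, -1)

for t in range(STEPS):
    right, left = step(right, left)

# The emergent entity we care about is the density profile, not any one particle.
density = right + left
print("occupied cells after", STEPS, "steps:", np.nonzero(density)[0])
```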

So consider the definitions provided by Myriad and others, where the specifics of any given calculation approach in our brains are modified, or evolve, in accordance with our past history and experience with similar calculations. We may even make tactical changes in how we arrive at a calculation midstream, by noticing patterns during the process of that same calculation. This is a distinctly nonlinear approach to calculations, which doesn't 'directly' model well, or at all, in a Turing machine. Yet like the rogue wave, the foundational mechanics can remain an ensemble of Turing machines.
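
As a toy illustration of that "changing tactics midstream" idea, here is a sketch (the function name and the particular shortcut are invented, and it assumes the noticed pattern actually continues) in which a summation watches its own partial results, notices a pattern, and switches strategy part way through:

```python
# Toy sketch of changing tactics midstream: start summing term by term, and
# if a constant difference is noticed in the terms seen so far, switch to the
# closed-form arithmetic-series formula for the remainder.
def adaptive_sum(terms):
    total = 0
    for i, term in enumerate(terms):
        total += term
        # After a few terms, look for a pattern in our own calculation so far.
        if i >= 3:
            diffs = {terms[k + 1] - terms[k] for k in range(i)}
            if len(diffs) == 1:              # constant difference noticed
                d = diffs.pop()
                n_left = len(terms) - (i + 1) # terms not yet summed
                first = terms[i] + d          # next term in the progression
                last = terms[-1]
                # closed form for the remaining arithmetic run
                # (assumes the pattern really does continue to the end)
                total += n_left * (first + last) // 2
                return total
    return total

print(adaptive_sum([3, 7, 11, 15, 19, 23, 27, 31]))   # 136 either way
```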

Thus there is nothing wrong with describing non-intelligent, non-conscious, linear mechanistic machines as 'operational' elements of consciousness. It simply isn't valid to accuse people who speak of these operational elements as elements of consciousness of claiming that these elements are themselves conscious, period. It's a "composition fallacy" to accuse people who are describing composition of a composition fallacy, when the claimed fallacy was your own composition.

Does this qualify as a well-defined distinction, RD? Does anybody have any questions not made clear here?

I must admit that I don't entirely follow the reasoning here. However, let me point out that I used the word "function" advisedly. If there is a supposition that the operation of the brain is equivalent to a Turing machine and certain side effects of the operation of a Turing machine, then that is not the same thing as the brain having purely the function of a Turing machine.
 
I described the simulation before

The simulation is, as I have said, a detailed simulation of a human brain down to the neuron level. It models the physical interactions of the components of the brain, i.e. they behave as physics says they should. It is running on an instruction set computer, so it is unquestionably an algorithm.

But if it is an accurate model of the physical interaction of the components of the brain then it ought to behave as a human does.

If you mean that the programmers have preprogrammed in some behaviours then, no, obviously not possible under the scenario that I proposed. They have programmed in nothing but the physical interactions of the brain components and the sense data.
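
To make "nothing but the physical interactions of the brain components and the sense data" concrete, here is a minimal sketch of one update step such a simulation might contain, with a leaky integrate-and-fire rule standing in for the real biophysics; every parameter, array and name below is invented for illustration and is not a claim about the actual scenario:

```python
# Minimal sketch of one update step in a neuron-level simulation.
# Leaky integrate-and-fire is a stand-in for real biophysics; all
# parameters and names are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
N = 1000                                    # number of model neurons
W = rng.normal(0.0, 0.5, (N, N)) / N        # synaptic weights (the modelled connections)
v = np.zeros(N)                             # membrane potentials
THRESHOLD, LEAK, DT = 1.0, 0.1, 1.0

def step(v, sense_input):
    """Advance the simulated brain by one time step.
    Only the modelled physical interactions and the sense data enter here."""
    spikes = (v >= THRESHOLD).astype(float)  # which neurons fire this step
    v = v * (1.0 - LEAK * DT)                # passive leak toward rest
    v += W @ spikes                          # input from other neurons' spikes
    v += sense_input                         # externally supplied sense data
    v[spikes > 0] = 0.0                      # reset neurons that fired
    return v, spikes

for t in range(100):
    sense = rng.normal(0.0, 0.05, N)         # simulated sensory stream
    v, spikes = step(v, sense)

print("spikes on final step:", int(spikes.sum()))
```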


When a human being standing in front of a tree says "I see a tree" then it's reasonable to suppose some connection between the tree and the person, based on light waves travelling between the two.

If we then exactly simulate the state of the brain using some as yet unknown technology, and we simulate the sensory experience of seeing the tree, then the simulation will claim to see a tree. But we know that there is no tree. We know that the simulation has been fooled. Hence if the simulation claims to have SOFIA, we don't know if it does or not.

Of course, this also leaves the possibility that there actually is no tree, even for us - and we can never know for sure.
 
So rocks can reproduce? Rocks can metabolize? Rocks have complex biochemical control pathways?

And computer chips can reproduce, metabolize, have biochemical control pathways?

What an amazingly stupid corner you have backed yourself into.

I find it genuinely astonishing that you cannot actually comprehend the points I make no matter how carefully and slowly and simply I spell them out.

Place a computer chip, a rock, and a living creature together. All three are different. They possess different properties in the sense that they have different mass, appearance, etc. However, all three of them have a mass, have an appearance, and so on.

You are claiming that the living creature and the computer chip have some property in common, which is not shared by the rock. Why you continually insist on regarding my disagreement with this claim as a statement that I can't distinguish between the chip and the rock, I don't know. It's rather odd.

And just so you know, literally nobody else is even paying attention to our conversation because of how absurd your position is -- so you don't need to keep speaking to an audience. You really are the only person on this forum who honestly thinks an unrefined rock has the same properties as a refined silicon semiconductor.

In fact, you are the only person who thinks that I think that.
 
Although I've only been a graduate student / research assistant for a couple years, this is my field. I'm not trying to be a dick with all my corrections and objections -- I'm trying to clear up misconceptions that tend to be a result of news articles that over-hype or misunderstand scientific findings on occasion. I am wrong sometimes and would like it to be pointed out when I am. If you're right, you're right regardless of my background and occupation. But news stories and TED talks aren't a good source to demonstrate it. Journal articles are a good source.
Ok.

Brain wave game controllers measure brain wave frequencies. The MRI game controller (I think you're talking about "Epoc") uses MRI and isn't a brain wave controller, so I assumed that wasn't what you were talking about. I was actually not aware of Epoc until I looked it up. The idea behind it is very cool.
The "not a brain wave controller" is somewhat confusing as it is the brain waves doing the controlling, but I take this a saying the brain waves aren't doing the controlling. Also note that even brain waves are a byproduct of neural firing, not the firing itself, not unlike the blood flow, or metabolism of sugar marked by blood oxygen changes, used by fMRI. Anyway, back to the Epoc case for instance.

The Emotiv EPOC actually provides developers with 3 implementation methods labeled Expressiv, Affectiv, and Cognitiv. These correspond to facial expressions, emotional states, and thought respectively. Now note, in all 3 the information is obtained from brain waves. The facial expression information does not come from observing facial expressions; it comes from mapping facial expression information from the brain waves. There remains a dictionary approach to interpretations that can be individually defined. If you want to use brain waves in a more open-ended way, use the Cognitiv implementation.
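
For a concrete picture of that "dictionary approach", here is a generic sketch (not the Emotiv SDK; the signal is synthetic and the band-to-command mapping is invented) of how brain-wave band power can be turned into a control command:

```python
# Generic sketch of dictionary-style brain-wave control: extract band power
# from an EEG-like signal and map it to a command. Not the Emotiv SDK.
import numpy as np

FS = 128                                    # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1.0 / FS)             # two seconds of signal
# synthetic "EEG": a strong 10 Hz (alpha) component plus noise
signal = 2.0 * np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

def band_power(x, fs, lo, hi):
    """Average spectral power of x between lo and hi Hz."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, 1.0 / fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return spectrum[mask].mean()

alpha = band_power(signal, FS, 8, 12)
beta = band_power(signal, FS, 13, 30)

# Invented dictionary mapping: whichever band dominates selects the command.
command = "relax / neutral" if alpha > beta else "push / engage"
print(f"alpha={alpha:.1f}, beta={beta:.1f} -> {command}")
```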

My point was not about specific implementations of brain wave monitoring, nor a claim that it represented mind reading in the sense a proclaimed psychic would imply, nor that it was a direct reading of meaning associated with fine-scale neural firing, or even that such techniques were direct, unabstracted readings of neural activity, which they are not. What it does provide is commonalities in neural activity between brains, and enough location and location-progression information to learn about the processes and locations involved in various experiences. We can abstract these out to build control devices based on facial expressions without bothering to monitor facial expressions, as in the EPOC case.

I'm less interested in these control devices, which do in fact involve dictionary outputs at very limited resolutions, than in what can be learned about the brain's operating principles in general. The abstraction between neural firing and the global monitoring techniques does not invalidate the fact that neural activity is being monitored, with various levels of resolution and abstraction, and the only important difference between the various methods is the actual resolution to neural activity. I'll get to the single neuron case next.


Ideally you might want to, but we can't measure the activation of individual neurons, just general brain activity that is assumed based on things like oxygen distribution. Human neurons are (to my knowledge) too small to observe in action.
Single neuron activation:
http://www.nature.com/nature/journal/v451/n7174/full/nature06447.html said:
The sensory impact of individual cortical neurons therefore remains unknown. Here we show that stimulation of single neurons in somatosensory cortex affects behavioural responses in a detection task. We trained rats to respond to microstimulation of barrel cortex at low current intensities. We then initiated short trains of action potentials in single neurons by juxtacellular stimulation. Animals responded significantly more often in single-cell stimulation trials than in catch trials without stimulation.
If you want more info on "juxtacellular stimulation":
http://www-ulpmed.u-strasbg.fr/laec/juxtacellular_techn-UK.html
PDF link on page. It is far more precise than the traditional pointlike electric probe, which in spite of being pointlike tended to involve tiny bundles of neurons. The notion that the operational principles of our brain are structured such that single neurons play a significant role is more than a little suspect anyway. Yet the electric probe experiments can repeatedly produce very precise memories, complex actions, etc., without the subject's control. Even actions as complex as reliably lifting your hand to your mouth, among others.

Well, we don't know, do we? Maybe you do, but I'd have to read the details of the study to know to what extent it was a success or failure.
The picture quality is about as poor as it gets. The implications are far more interesting than what was actually accomplished, and I have grave doubts about how much resolution is even possible in principle.

Back a few posts, it was an explanation of why I doubt highly significant increases in resolution are possible, as I just mentioned above. It's certainly not authoritative, and would require an outline of a far more detailed modeling approach to do these assumptions any real justice.
 
Of course it does. Unless you can point to more than one bodily function that stops when you fall asleep, starts back up when you dream, stops again when the dream is over, and starts again when you wake up.

It's like if I say, "Elanor caught a frog today" and you ask "Who is Elanor?" and I point to a 14 lb black cat lounging on the back of the sofa. I've just shown you what I mean by "Elanor". Would you then ask me for a definition?

Sofia is what is absent when you are asleep and not dreaming. It's your sense of being you.

We all have that. To say otherwise is just silly.

True, I would gently say that SOFIA is mainly the perceptions conflated through the persistence of memory and social reinforcement; it is comprised of many separate events, either confabulated by the brain or conflated.

Just a personal opinion.

The point I would also add is that it would be a multivariate scale: attention, awareness, various forms of cognition and, most importantly from the outside observer's perspective, 'interactiveness'.

Now I always focus on the behaviors (in the modern sense) because of a simple thing: we use the same behaviors to judge consciousness in ourselves that we use to judge it in other people, we just can't access their events.

The problem I have with a unitary definition of consciousness is that people can have various issues with different brain functions, as you have pointed out very well, and when you consider how many of them can exist while a person still meets most of the criteria for 'consciousness', it seems to be telling.

When you consider that people with dementia, Alzheimer's, autism and various other issues with brain function can meet some of the criteria for 'consciousness' but seemingly fail in others, it points to what I consider to be the rubric nature of the phrase 'consciousness'. Then you can add in the whole volitional conundrum, and it just seems to be really a phrase for a bunch of disparate processes.
 
Work with me, Earthborn. That's obviously a metaphor.

The thing I think I can say here is this:

Earthborn, you are correct that the brain does not run prepackaged programs written in a symbolic logic system and then translated into separate acts of the processors.

Yet given the conditioned and associative nature of the neural networks, they do develop habitual patterns of response that are an analog to 'programs'.
 
That is my recollection, too.

But, as with my recollection that he claims not to have a Sofia, I may be wrong.

PM can easily clear up the question by simply stating what it is he believes.

Well that may get back to an extended discussion some years ago of p-zombies and m-zombies. I think the issue is that p-zombies would have the behaviors of SOFIA and therefore be 'conscious'. This comes about from some of the threads with Mercutio.
 
If we then exactly simulate the state of the brain using some as yet unknown technology,
It is a computer running a program. That is not an unknown technology. It just needs to be a very powerful computer. I am not sure why everyone wants to change the example.
and we simulate the sensory experience of seeing the tree, then the simulation will claim to see a tree. But we know that there is no tree. We know that the simulation has been fooled. Hence if the simulation claims to have SOFIA, we don't know if it does or not.
Of course the question was not - "does it have a Sofia?". The question was - would it behave as though it did?

First, we know it is an algorithm.

We are not asking if it has a Sofia because we can examine both possibilities:

1. It has a Sofia. This would mean that an algorithm can have a Sofia, or

2. It does not have a Sofia. But since we know that any behaviour we observe from it must be a function of the modelled interactions of the same brain architecture that we have, its claim to have a Sofia comes from the same mechanism that produces our claim to have a Sofia.

This would imply that our claim to have a Sofia has nothing to do with the fact that we do have a Sofia.

So 1 and 2 both lead to different apparent absurdities.
 
The thing I think I can say here is this:

Earthborn, you are correct that the brain does not run prepackaged programs written in a symbolic logic system and then translated into separate acts of the processors.

Yet given the conditioned and associative nature of the neural networks, they do develop habitual patterns of response that are an analog to 'programs'.
More generally, a discrete-state neural network is provably equivalent to a stored-program computer. The brain isn't exactly a discrete-state neural network, but still.
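
For one direction of that equivalence, here is a minimal sketch (invented weights, purely illustrative): a discrete threshold unit can compute NAND, and NAND is universal for Boolean logic, so networks of such units can realise any combinational logic a stored-program computer uses. Registers and sequencing need feedback loops, which this sketch omits.

```python
# A discrete threshold "neuron" computing NAND, and other gates built from it.
def threshold_unit(weights, bias, inputs):
    """Fires (returns 1) iff the weighted sum of inputs plus bias is >= 0."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if s >= 0 else 0

def nand(a, b):
    # weights/bias chosen so the unit fires unless both inputs are 1
    return threshold_unit([-1, -1], 1.5, [a, b])

# NAND is universal: compose it to get the rest of Boolean logic.
def not_(a):      return nand(a, a)
def and_(a, b):   return not_(nand(a, b))
def or_(a, b):    return nand(not_(a), not_(b))
def xor_(a, b):   return and_(or_(a, b), nand(a, b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "NAND:", nand(a, b), "XOR:", xor_(a, b))
```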
 
It is a computer running a program. That is not an unknown technology. It just needs to be a very powerful computer. I am not sure why everyone wants to change the example.
Because it gives them precisely the answers they wish to avoid.

Westprog in particular is completely unable to get past his reification fallacy; any time you propose a computer model of consciousness he will point out that the modelled sensory data doesn't relate directly to real objects... As though this either (a) mattered or (b) happened in the real world in the first place.

Whether this is an honest failure of his conceptual grasp or some strange sophistry I cannot say, but it's his rock and he's foundered on it.
 
I must admit that I don't entirely follow the reasoning here. However, let me point out that I used the word "function" advisedly. If there is a supposition that the operation of the brain is equivalent to a Turing machine and certain side effects of the operation of a Turing machine, then that is not the same thing as the brain having purely the function of a Turing machine.
I would be interested in what's not so clear about it. I'm not clear on what the advisory on the use of "function" is based on. Though I never personally used that term, it is contained in the claims whose equivalence I objected to.

[The function question]
Mathematically a linear function can be written in the y=mx+b form. Thus nonlinear functions are quite trivial to define. For systems, such as we are discussing, linearity is characterized by the superposition principle, i.e., the output for the sum of two or more inputs is the sum of the outputs for those inputs considered separately. A function merely defines a transform, if any, that takes place between an input and output. For systems, this is merely initial and final states defining how the system evolves.

I described an intensely complex system, at least from a molecular perspective. Yet we know the global system is a product of the molecular system. In fact a conservation law applies to such collisions: Conservation of Linear Momentum in Collisions. I can also resort to authority and point out that classical Solitons are by definition nonlinear waves. Yet these waves are fundamentally subsets of collisions. So in what way did I bastardize the concept of functions or linearity?
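
A minimal check of that superposition characterization, with two invented example "systems" (purely illustrative, not models of anything physical):

```python
# A system is linear iff its response to a sum of inputs equals the sum of
# its responses to the inputs taken separately. Both "systems" are invented.
import numpy as np

def linear_system(x):
    """A smoothing filter: output is a weighted sum of neighbouring samples."""
    return np.convolve(x, [0.25, 0.5, 0.25], mode="same")

def nonlinear_system(x):
    """Same filter followed by a cubic term, which breaks superposition."""
    y = np.convolve(x, [0.25, 0.5, 0.25], mode="same")
    return y + 0.1 * y**3

rng = np.random.default_rng(1)
a, b = rng.normal(size=100), rng.normal(size=100)

for name, system in [("linear", linear_system), ("nonlinear", nonlinear_system)]:
    lhs = system(a + b)                    # response to the summed input
    rhs = system(a) + system(b)            # sum of the individual responses
    print(name, "obeys superposition:", np.allclose(lhs, rhs))
```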

[The brain question]
Suggesting the operation of the brain is "equivalent to" a Turing machine is tantamount to suggesting that a Soliton is "equivalent to" a bunch of collisions. Nobody is making such a suggestion. Only some people are insisting others are making that claim, solely on the basis of their talking about the logical equivalent of collisions involved in producing the logical equivalent of a Soliton.

Does this not make the distinction between an element of consciousness and consciousness itself clear?
 
I am often mistaken; I thought it was EEG in nature as opposed to MRI. I can be, will be, and am often wrong.
I did mess that one up, but wasn't going to call myself out. I justified this on the grounds that the point was about neural activity rather than specific methods of obtaining information about that neural activity. It's absolutely true that all such noninvasive methods have a limited and variable resolution in providing information about the actual neural activity.
 
Cornsil, is there an experimental MRI interactive system, as opposed to EPOC?
http://www.emotiv.com/

EPOC seems to say EEG.

Oh, you're right. Like I said, I'd never heard of Epoc. I just did a search for MRI controller and came up with that somehow.

An interactive MRI system is possible, but I don't know if one exists. I imagine it'd be too expensive to be practical with current technology.
 
I don't know what PM means, but if he means "A Turing machine plus X" then he has to explicitly state that, not say something quite different.

Why? You don't say "a brain plus X."

I already said that the brain has the functionality of a Turing Machine. I also stated that other functionality is absolutely essential.

That other functionality is only present when you have "a brain plus X."

Why the double standard, westprog?
 
I don't really want to try to break down the whole exchange, but for example your idea about the significance of "ability to maintain stability" (paraphrasing) applies to many things, such as rocks. You seemed to take this as the claim that there is no difference between cells and rocks.

I clearly explained why the way a rock remains stable is very different from the way a cell remains stable -- a cell changes its internal state as a result of the environment to a much larger degree than a rock.

Do you not agree with that?

Cells divide, they move on their own, they change their metabolism, they change their shape, they change their structure, they change their chemical composition, etc., all due to small changes in the environment that, were they to occur in the environment of a rock, would produce little change in the rock.

Do you not agree?
 
But could the external behaviour of a human be modelled by a computer program that modelled the interactions of the components of the brain?

Note that people who reply "I don't know" are, by definition, supporters of magic.
 