Explain consciousness to the layman.

Not-alive (at least no one has yet pretended that a simulation of life on a silicon computer -- or pebbles in the sand -- is alive) vs a lifeform.
How is that relevant?

And no one has explained why lifeform consciousness is wholly computational; it certainly has been and is being asserted as fact by you and a few others.
Sure. Because that is what we observe.

If you or anyone else has some contradictory data, go right ahead and present it. "Nuh-uh!" doesn't count.
 
Worse, you offer a theory claiming that if you add electromagnetic induction to these kinds of systems, along with some sort of thing that metaphorically works like harmony--though you don't explain what this thing is--then it explains everything. You also don't explain what exactly it explains, or why the approaches you're rejecting fail to explain it.

That was not a "theory".

I was clear that that bit was "jumping off into the Sea of Speculation" just to demonstrate how IIT might be applied to a brain, but I clearly said it would be "ridiculous" to accept it as true. Which it would.

In fact, it would also be ridiculous to accept some of the basic premises of that scenario as true.

Nobody knows the role (if any) of the signature waves in consciousness, only that they are correlates of consciousness.

Nobody knows if elements of the system such as the shape of electrical noise have any role at all.

And certainly nobody knows if IIT is correct. (My biggest problem with it is that Tononi and Balduzzi offer no evidence that II cannot be measured in any non-conscious system outside the brain.)

And even if some scenario involving brain waves which conforms to IIT turns out to be accurate, that still won't answer the most difficult questions about consciousness.
 
I'm beginning to agree with you on some of the "faith" aspects of computational literalism.

I mean, it doesn't seem to bother the comp.lits that science has moved on and nobody studying the brain is working in that framework.

Or that their claims require violations of the laws of physics.
It's been explained repeatedly why your position violates the laws of physics, something you have completely ignored.

Your claims about the computational approach are based on a blatant and frankly absurd strawman. This has also been explained repeatedly. If you can't understand it, ask.

Or that their claims inevitably lead to a host of absurd conclusions (e.g. a brain made of rope could be conscious, or consciousness could be created by writing out the equations describing the brain's operations).
If you could make a brain out of rope - I don't see how, but if you could - then what exactly would prevent it from being conscious?

As for writing out the equations: Once again, this is just you failing to understand the argument. It's not true, no-one's said that, it's just your confusion talking.

Or that their claims contradict direct observation.
Name one.

Or that many of their views are based on philosophy which hasn't been verified against reality... and apparently, in their opinion, need not be.
Name one.
 
Nobody knows the role (if any) of the signature waves in consciousness, only that they are correlates of consciousness.
Wrong. The answer is none at all.

Nobody knows if elements of the system such as the shape of electrical noise have any role at all.
It's not an element, it's noise.

What role does the sound of a steam train play in its function, Piggy?

And certainly nobody knows if IIT is correct. (My biggest problem with it is that Tononi and Balduzzi offer no evidence that II cannot be measured in any non-conscious system outside the brain.)
Why is that a problem?
 
Yes, there is such a common usage of the word symbol. Are you referring to the common usage when you draw a distinction between physical computation and symbolic computation?

But you're standing in the shop. Nobody tricked you into coming in, or dragged you in--you ran in here screaming something about anthropomorphisms and how nobody in here cares about brain research.

No, I'm not standing in your shop.

The progress being made in consciousness research is being made by neurobiology, and I simply don't find anyone in that field referring to the brain as a "symbol system".

You are not the world, my friend.

And yes, when I'm discussing symbolic computation, it is to contrast the logical values which we imagine physical computations to be representing with the physical computations themselves.

If we do not make this distinction, then we will conflate the behavior of the machine (whether an abacus or a flight simulator) with the imaginary behavior of the system we decide that it represents.
 
But it's not the end of the story: you mentioned a possibility about brain waves, and it has been explained why it wouldn't work. If "we don't know" is the end of it, why mention that possibility? And why retreat to "we don't know" when told that we at least know it's not a possibility?

We'll never find out how it works if we don't speculate about what might be possible based on what we already know.

Even if I were sure that the premises for that scenario were sound (and I can't conclude that they are), it would still only be a hypothesis, and for any given hypothesis (since there can always be many) the chances of it being accurate are less than the chances of it being inaccurate.

There's no doubt in my mind that the scenario I described is probably wrong. But I didn't outline it because I thought it was correct. That was clearly not the point.
 
We'll never find out how it works if we don't speculate about what might be possible based on what we already know.
That doesn't mean you should speculate that the impossible is possible, or that the illogical is logical. And yet, that is what you are doing.
 
Not-alive (at least no one has yet pretended that a simulation of life on a silicon computer -- or pebbles in the sand -- is alive) vs a lifeform.

And no one has explained why lifeform consciousness is wholly computational; it certainly has been and is being asserted as fact by you and a few others.

This is the fundamental argument, which gets confused every now and again with something less restrictive or certain. It's been put forward that the alternatives are that consciousness is entirely computational, or else you believe in magic beans. Nothing else possible.
 
Hopefully I'll have them up sometime tomorrow and Friday... but I'm in a horrible bout of insomnia that's screwing up my blood sugar, which makes me not want to eat, which screws up the blood sugar even more, which makes the insomnia worse, and so on... I've lost 8 pounds in the last 5 days and my brain isn't as sharp as it should be at the moment.

Just like Pixy told us. Carefully designed electronics are better. I doubt whether your PC has insomnia. Go computers!
 
This is the fundamental argument, which gets confused every now and again with something less restrictive or certain. It's been put forward that the alternatives are that consciousness is entirely computational, or else you believe in magic beans. Nothing else possible.
The truth is that two approaches have been put forward: The computational approach, and magic beans. Anyone is free to put forward an alternative to the computational approach that makes sense. No-one has done so.

This is not my problem.
 
"The human brain is the most complex object in the known universe ... complexity makes simple models impractical and accurate models impossible to comprehend," ….Scott Huettel …the Center for Cognitive Neuroscience at Duke University
The first statement is untrue and has been untrue for over a decade. The internet, considered as a system, is vastly more complex than the human brain at this point.

The other two claims are also untrue in the general case. We have models of brain function that work just fine for specific purposes.

...but not anymore, folks. Here at JREF, the eternal mystery, the fathomless dilemma of the human brain has finally been resolved. The simple model has triumphed!
Begging the question.

TWO POUNDS OF MEAT (...oops, sorry... 'warm' meat!)

...sayeth the Pixy! ...the ontological equivalent of the proverbial 'finely-engineered machine of wire and silicon' ('two pounds of warm meat'... 'finely-engineered machine of wire and silicon'... I guess it's fairly obvious where Pixy's bias lies).
You didn't read punshhh's post, did you? The logical fallacy I was responding to?

By the way, Pixy... you forgot that inconsequential bit about the numerous pounds of warm meat that created those finely engineered etc. etc. etc. Incidental, I know.
Not relevant.

So... in one corner, we have Scott Huettel, director of the Human Neuroeconomics Laboratory and associate director of the Brain Imaging Analysis Center at Duke University... claiming that 'the brain is the most complex object in the known universe'.
Which is untrue.

...and in the opposite corner we have that which is known as PixyMisa (who may, in fact, be nothing more than the aforementioned 'finely-engineered machine of wire and silicon'... how are we to know?)
Don't even bother with that nonsense.

who insists the brain is comparable to a couple of Big Macs.
Nope.

Shall we take a poll or shall we assume that one of these two parties requires an education?
Huettel's statement is factually false. My statement is correct and apposite. Both have been taken out of context to hide the fact that you apparently have nothing to contribute.
 
westprog said:
Assuming consciousness has content, where/when/how do the formal symbols generate representational content?

One could say it's the relationships between the symbols - but the relations between the symbols are themselves symbols.


All defined in terms of already available representations, if I understand you.

Just wondering how new representational content can emerge from this... symbols interacting with symbols/formal language/idea-illustrating encodings.
 
Assuming consciousness has content, where/when/how do the formal symbols generate representational content?
Here's a particular model.

(ETA: Again, I'm not pushing a computational model of the brain, or suggesting that this is how the brain works, and so on. I'm just observing that this looks entirely sufficient to me, and the objections I've heard so far are highly unimpressive. This is simply a description of how semantics can be generated at the higher scales that have been claimed to be impossible.)

Now the direct answer to your question sounds pretty unimpressive--almost like wussing out. But it is legitimate.

You don't merely have symbols--you have symbols and a set of transformations. They go together. If I were teaching a math class and started drawing symbols on the board, I might draw a 2 in white chalk. Some time later, on the other board and out of convenience, I might pick up a green piece of chalk and draw another 2; and later still, an 8. Each instance of a 2 (each token) is the same symbol--in this context, the students had better understand that both are 2's. But the 8 is certainly a different symbol. Yet if we took a Martian and put him into the classroom, he might simply see three symbols--perhaps because I'm sloppy, or perhaps simply because he's a Martian--and he would not know whether the two 2's are supposed to be the same symbol or different ones, and might not even know whether the 8 differs from the 2's.

What makes those two tokens the same symbol, at this micro level, is that they behave the same way under every transformation in the set. As for the 8, there exists at least one transformation under which its behavior differs.

Given this, the tokens are partitioned into equivalence classes--symbols--under the transformations. Semantic content can then follow from analyzing the symbols using the transformations: when we produce a new symbol from two existing symbols, the nature of the transformation is to produce a third symbol that refers to a higher-level pattern established by the two.
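Here's a rough Python sketch of that partitioning step (purely illustrative; the tokens and transformations are invented for the example):

```python
# Toy illustration: two tokens are "the same symbol" iff they behave
# identically under every transformation in a given set.

# Invented tokens: (glyph, chalk colour). The colour is incidental.
tokens = [("2", "white"), ("2", "green"), ("8", "white")]

# Invented transformations; each acts only on the glyph's value.
transformations = [
    lambda t: int(t[0]) + 1,  # successor
    lambda t: int(t[0]) * 2,  # doubling
    lambda t: int(t[0]) % 2,  # parity
]

def behaviour(token):
    """A token's behaviour is its output under every transformation."""
    return tuple(f(token) for f in transformations)

# Partition the tokens into equivalence classes (= symbols) by behaviour.
symbols = {}
for token in tokens:
    symbols.setdefault(behaviour(token), []).append(token)

for beh, toks in symbols.items():
    print(f"behaviour {beh}: tokens {toks}")
# Both 2-tokens land in one class; the 8 lands in its own class because
# at least one transformation (successor, doubling) separates it --
# note that parity alone would not.
```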

To get somewhat close to a human, you apply this style of analysis in layers, over and over again, starting from a set of symbols that represent sensory inputs: discovering patterns that are significant; representing large sets of contents; producing internal states (see the video); analyzing and integrating those states; applying symbols to certain "locations" to produce movements; analyzing the results of those movements; discovering patterns in them; learning how those patterns correlate with the "locations" used to produce the movements; building a body map from this; iterating from the body map to how it can manipulate objects; building from that an idea of how objects behave and how to recognize them; having sets of symbols that represent drives; learning to recognize the drives, what satisfies them, and how to apply the body map and model to satisfy them; and so on.
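And a deliberately hand-wavy sketch of that layering idea (again purely illustrative; the layer rule and symbol names are invented):

```python
# Hand-wavy sketch of layered symbol formation: each layer assigns a
# fresh symbol to every distinct pattern of lower-level symbols it sees,
# and reuses that symbol whenever the pattern recurs.

def build_layer(pairs, prefix):
    """Map each distinct pair of lower-level symbols to a new symbol."""
    patterns = {}
    output = []
    for key in pairs:
        if key not in patterns:
            patterns[key] = f"{prefix}{len(patterns)}"  # fresh symbol
        output.append(patterns[key])
    return output

# Layer 0: raw "sensory" symbols (invented for the example).
sensory = [("edge", "edge"), ("edge", "curve"), ("edge", "edge")]

# Layer 1: pairs of sensory symbols become new symbols.
layer1 = build_layer(sensory, "A")
print(layer1)  # ['A0', 'A1', 'A0'] -- the recurring pattern gets the
               # same higher-level symbol each time it appears.

# Layer 2: pairs of layer-1 symbols become new symbols; iterate this
# upward through movements, body maps, drives, and so on.
layer2 = build_layer(list(zip(layer1, layer1[1:])), "B")
print(layer2)  # ['B0', 'B1']
```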
 
The first statement is untrue and has been untrue for over a decade. The internet, considered as a system, is vastly more complex than the human brain at this point.


There you go again with unsubstantiated assertions. Do you have a reference to this from a COMPETENT source?

Also... even if I grant you that... why isn't the internet CONSCIOUS yet? Or is it? And if you assert that it is... then who is the one with the Pixy Dust in his brain now?

I suspect you’re watching way too many science fiction movies.
 
The truth is that two approaches have been put forward: The computational approach, and magic beans. Anyone is free to put forward an alternative to the computational approach that makes sense. No-one has done so.

This is not my problem.



False dichotomy.
 
Here's a particular model. [...]

Thanks for this answer. My simple contrarian reply is: how is this not all merely circular?

There is nothing external to the setup you describe, as far as I can tell.
 
Thanks for this answer. My simple contrarian reply is: how is this not all merely circular?

There is nothing external to the setup you describe, as far as I can tell.
Well, consciousness itself is an internal process, but it doesn't do a whole lot without sensory input. People undergoing sensory deprivation tend to drift in and out of consciousness; the brain seems to need stimulation to keep it fully ticking over.
 
False dichotomy
That's what I just said. Please pay attention.

I'm not saying that the only models for consciousness are computation and magic. I'm saying the only models for consciousness presented in this thread are computation and magic.

If I've missed something, feel free to point it out.
 
(Sorry I didn't see this post earlier.)

I just came across this and thought you might find it interesting.

Hearts Have Their Own Brain and Consciousness

The heart reacting to emotions does not imply that it has its own brain and consciousness. It just means the heart can react to emotions.

Your lungs, spleen, stomach, liver, and bladder, etc. are also impacted by emotions in different measurable ways. Does your spleen have its very own brain and consciousness, too?

I think your sources are reading far too much into unsurprising findings.
 