• Quick note - the problem with YouTube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems, let me know.

Why dualism?

What problem is that? It's not like making the prisoners the guards, which has been done as well.

Well, you have some light (or other indicator) that trips to indicate a fault in the system. The light itself might burn out. So then you need some way to monitor that... and so on. Hence the infinite regress.

However, I suppose there could be a nifty solution you had in mind. Maybe some sort of circular structure to avoid the normal hierarchy? That's what I was asking about.
 
Actually, there is no "infinite regress argument": self-monitoring is a well-established aspect of many automated systems, often initiating corrective or fault-response actions, and preferably just signaling that everything is still OK. While there isn't (at least as far as we know) a construction of a self-narrative in those devices or systems as there is in consciousness, it still demonstrates that there is no inevitable infinite regress in self-monitoring.


I agree, in that I don't think arguments about a specific or implied infinite regress are valid, just that they do come up. That's what's happening when, after looking at a hypothetical design for a conscious system, a critic asks "but where's the part that's aware of all these other parts?"

There is a regress of sorts, but not an infinite one, in Hofstadter's "strange loops" model of self-reflection. The remembered act of mental evaluation of a generated narrative becomes part of the next generated narrative. You can remember something, then remember remembering it, then remember remembering remembering it, and so forth. Hofstadter thinks that consciousness itself is something that "emerges" from such self-referential vortices, while I think it's simpler than that. But in either case, it's sequential, not simultaneous. So it's no more an infinite regress than is the cycling of a steam engine.
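
To make the sequential (rather than simultaneous) character of that loop concrete, here's a toy sketch in Python. The function names, the truncation trick, and the sample "observations" are my own illustration, not anything from Hofstadter:

```python
# A minimal sketch of sequential self-reference: each cycle's narrative folds
# in a remembered trace of the previous cycle's narrative, so the "regress"
# unrolls over time rather than requiring an infinite stack of simultaneous
# monitors.

def summarize(narrative: str, max_len: int = 60) -> str:
    """Stand-in for memory: keep only a truncated trace of the last narrative."""
    return narrative[:max_len]

def run_cycles(observations, initial_narrative="(blank)"):
    narrative = initial_narrative
    history = []
    for obs in observations:
        # Each new narrative describes the current observation *and* the
        # remembered act of having produced the previous narrative.
        narrative = f"I noticed '{obs}' while recalling: [{summarize(narrative)}]"
        history.append(narrative)
    return history

for line in run_cycles(["a red light", "remembering the red light"]):
    print(line)
```

Each pass only looks back one step at a bounded memory, which is why it cycles like a steam engine instead of regressing forever.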

marplots said:
How do they get around the problem of the monitor monitoring itself?


And here we have an example of that kind of argument.

The monitor monitors its recent past actions, and/or certain aspects of its present state. It monitors the narrative of itself that it's generated. It cannot and does not monitor its entire self, or the entire process of generating that narrative.

Ironically, for that very reason, the monitor is convinced that its entire self must be unbounded and nonphysical...
 
Well, you have some light (or other indicator) that trips to indicate a fault in the system. The light itself might burn out. So then you need some way to monitor that... and so on. Hence the infinite regress.


If the light, or more specifically the system that turns it on, is tripped, then the monitor has indeed monitored the error and acted. A burned-out light would be an open circuit and easily detectable. Such system circuits are usually constructed as normally closed, so a sensor or component failure results in an open circuit and an error. A short, on the other hand, would not change as specific system conditions change and would also result in an error. A short that just happens to behave exactly like the sensor is just doing the job of the sensor. Again, no infinite regress.
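
In code form, a toy sketch of that normally-closed logic (the function and its inputs are just illustrative, not any particular circuit):

```python
# The healthy condition actively holds the circuit closed, so a dead sensor,
# broken wire, or burned-out component all read as "open" and get reported as
# a fault - without needing a second monitor to watch the first.

def circuit_state(sensor_ok: bool, wiring_intact: bool) -> str:
    closed = sensor_ok and wiring_intact   # anything failing opens the loop
    return "OK" if closed else "FAULT"

print(circuit_state(sensor_ok=True,  wiring_intact=True))   # OK
print(circuit_state(sensor_ok=False, wiring_intact=True))   # FAULT (sensor died)
print(circuit_state(sensor_ok=True,  wiring_intact=False))  # FAULT (open circuit)
```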



However, I suppose there could be a nifty solution you had in mind. Maybe some sort of circular structure to avoid the normal hierarchy? That's what I was asking about.

The solution to a burned-out light is just to replace the light, nothing particularly nifty. Exactly what "normal hierarchy" are you asking about, or think should be avoided? Often the simplest solution to an error is just to retry. Other errors may result in a correction or re-calibration routine. Such errors are self-recovering unless the retry or re-cal fails, which results in a different error and may cause the unit to take itself offline. Some errors that could be self-recovering you don't want to be, as they may be indicative of other systemic errors not necessarily related to that particular unit itself.
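
A rough sketch of that retry / re-cal / offline escalation, with invented function names and failure rates just to show the shape of it:

```python
# Escalating error handling: retry first, then attempt a recalibration, and
# if that also fails, take the unit offline and report a different error.
# The probabilities stand in for a flaky real-world operation.

import random

def attempt_operation() -> bool:
    return random.random() > 0.3   # pretend the operation sometimes fails

def recalibrate() -> bool:
    return random.random() > 0.5   # pretend recalibration sometimes helps

def run_with_recovery(max_retries: int = 3) -> str:
    for _ in range(max_retries):
        if attempt_operation():
            return "OK"             # simplest recovery: just retry
    if recalibrate() and attempt_operation():
        return "OK after recal"     # next step: correction / re-calibration
    return "UNIT OFFLINE"           # escalated error: take the unit out of service

print(run_with_recovery())
```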

Complex systems can have complex problems, some of which just may not be monitored. However, for integrated automated systems, the possibility of unknown errors or just wacky input from a sub- or super-system is covered by general error responses.

Much like ourselves, automated systems have certain expectations, and when those expectations aren't met it often results in an error or corrective action, at least for the automated system. For ourselves, it depends on the individual.
 
Well, you have some light (or other indicator) that trips to indicate a fault in the system. The light itself might burn out. So then you need some way to monitor that... and so on. Hence the infinite regress.

However, I suppose there could be a nifty solution you had in mind. Maybe some sort of circular structure to avoid the normal hierarchy? That's what I was asking about.


Oh, cool question.

First of all, as certain very annoying current TV commercials say, why monitor a problem if you don't fix it? So, assuming the light isn't something that can always fix itself, you don't want to have just one light that tells you the first light is broken. Let's say you have a hundred of them instead, so if one or more of them burn out you still get the fault signal.

But now the infinite regress is much worse! Instead of an infinite regress of one more light at each step (to show the one before it has tripped) you have an infinite regress of a hundredfold more lights at each step.

But you can use your panel of fault lights more cleverly than that. Instead of all hundred lights always indicating one specific fault, you wire them so that a certain fifty of them light up for that fault. And a different pattern of fifty of the hundred lights light up for a different fault. And so forth; there could be hundreds of different patterns for hundreds of different fault conditions. And all those patterns could still be recognizable and distinguishable from one another even if a random scattering of the lights had burned out. (Just for comparison, with about a hundred lights you can easily display three clear alphanumeric characters using the standard 5x7 dot matrix, which would give you tens of thousands of different three-character error codes using standard characters alone.)

In fact, you can distinguish far more different error states than there are lights in your display, even with some of the lights burned out, so a hundred of the error patterns can indicate burned-out individual lights in the panel.

Of course, you also need an error pattern recognizer to monitor the lights (just as you needed a burned-out-light monitor to monitor the one light). In fact, since one of those can go wrong too, you'd better have a bunch of them, maybe with different ones more sensitive to different categories of error light patterns, but with lots of overlap for redundancy, and you have them all operating at once and comparing their results to one another. Would they need another panel of lights to display the results of their comparisons? Not necessarily; those could be additional patterns on the original panel instead.
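
Here's a rough sketch of that pattern-coding idea in Python; the fault names, the hundred-lamp panel, and the nearest-pattern matching are all just illustrative choices, not a real annunciator design:

```python
# Each fault lights a distinct pattern of ~50 out of 100 lamps, and the
# recognizer picks whichever known pattern is closest (smallest symmetric
# difference) to what is actually lit, so a scattering of burned-out lamps
# doesn't prevent identification.

import random

N_LIGHTS = 100
random.seed(1)

# Assign each fault a random half-on pattern; random patterns are far apart.
faults = {name: frozenset(random.sample(range(N_LIGHTS), 50))
          for name in ["overheat", "low pressure", "sensor open", "lamp failure"]}

def displayed(pattern, burned_out):
    return pattern - burned_out            # burned-out lamps can't light

def recognize(seen):
    # Pick the fault whose pattern best matches what is actually lit.
    return min(faults, key=lambda f: len(faults[f] ^ seen))

burned_out = set(random.sample(range(N_LIGHTS), 10))   # ten dead lamps
for name, pattern in faults.items():
    print(name, "->", recognize(displayed(pattern, burned_out)))
```

With ten dead lamps, the displayed pattern still differs from its own fault pattern by at most ten lights, while it differs from every other fault pattern by around fifty, so the recognizer still gets every one right.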

An engineer might choose a different way, for less complexity and more behavioral predictability (things engineers like). One fault light for the original fault condition, a fault light for the fault light, a fault light for that, and a fault light for that. Four layers. The top layer would be an extra super-durable kind of light just to make sure, and the engineer might further reason that because that fourth light would hardly ever have to go on, it would never burn out in the conceivable lifetime of the whole system.

But evolution can't do it that way. Because if the fourth light hardly ever needs to go on, the genes that encode for it could mutate into non-functionality without noticeably affecting the system's ability to function and reproduce. So the fourth light would disappear over the generations, then perhaps the third... Nope, evolution develops systems more like the panel of lights, where there's lots of overlap and redundancy that makes every part useful but few or no parts individually essential.

So, will such a system, with all its functional overlap and redundancy, work forever? No. Eventually, enough lights will burn out that the error codes can no longer be reliably distinguished, or enough circuits in the recognizers will fail so that the faults are no longer correctly responded to.

That's called aging.
 
I am confused, where is the awareness absent a body, be concrete for this old man....

this is like asking where is there matter absent an object - it doesn't make any sense.

as far as this old man, I find awareness there no matter where I go. :)
 
I agree, in that I don't think arguments about a specific or implied infinite regress are valid, just that they do come up. That's what's happening when, after looking at a hypothetical design for a conscious system, a critic asks "but where's the part that's aware of all these other parts?"

There is a regress of sorts, but not an infinite one, in Hofstadter's "strange loops" model of self-reflection. The remembered act of mental evaluation of a generated narrative becomes part of the next generated narrative. You can remember something, then remember remembering it, then remember remembering remembering it, and so forth. Hofstadter thinks that consciousness itself is something that "emerges" from such self-referential vortices, while I think it's simpler than that. But in either case, it's sequential, not simultaneous. So it's no more an infinite regress than is the cycling of a steam engine.




And here we have an example of that kind of argument.

The monitor monitors its recent past actions, and/or certain aspects of its present state. It monitors the narrative of itself that it's generated. It cannot and does not monitor its entire self, or the entire process of generating that narrative.

Ironically, for that very reason, the monitor is convinced that its entire self must be unbounded and nonphysical...

That's probably the rub of it, Myriad: the perception that the system has to monitor everything about itself, perhaps even simultaneously, and that's just not how self-monitoring systems work. It's a risk-benefit assessment. If the error poses a considerable risk then you don't just have a light come on. While detecting a burned-out light isn't much of a problem, it also may not be much of an issue unless that error (the failed light) poses some risk. Also, certain aspects don't activate and certain risks aren't present until certain tasks are being performed.

For example, a furnace will have some means of flame detection to make sure fuel is burning and not just being pumped into the firebox. However, fuel can't burn unless it is first pumped and ignited, so the flame sensor has a delay at start-up to give the fuel time to be pumped and start burning.
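
As a toy state machine (the timing value is invented, not from any real furnace controller):

```python
# The flame sensor is ignored during a short ignition window at start-up, and
# only after that window does "no flame" count as a fault that shuts off fuel.

IGNITION_WINDOW = 5   # seconds during which "no flame yet" is expected

def furnace_step(t_since_start: float, flame_detected: bool) -> str:
    if flame_detected:
        return "RUN"                    # fuel is burning, keep going
    if t_since_start <= IGNITION_WINDOW:
        return "IGNITING"               # pumping and igniting, flame not expected yet
    return "LOCKOUT"                    # fuel flowing with no flame: shut it down

for t, flame in [(1, False), (3, False), (6, True), (6, False)]:
    print(t, flame, "->", furnace_step(t, flame))
```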
 
And here we have an example of that kind of argument.

The monitor monitors its recent past actions, and/or certain aspects of its present state. It monitors the narrative of itself that it's generated. It cannot and does not monitor its entire self, or the entire process of generating that narrative.

Wouldn't that mean there is no mechanism to detect a false narrative and that errors would propagate? (Which actually sounds a lot like the kinds of errors we do make.)

I don't have any problem with a glitchy system and a failure to monitor - those aren't deal-killers when it comes to explaining consciousness. My dispute is with the idea of a nested system generally and the inherent limitations.
 
That's probably the rub of it, Myriad: the perception that the system has to monitor everything about itself, perhaps even simultaneously, and that's just not how self-monitoring systems work. It's a risk-benefit assessment. If the error poses a considerable risk then you don't just have a light come on. While detecting a burned-out light isn't much of a problem, it also may not be much of an issue unless that error (the failed light) poses some risk. Also, certain aspects don't activate and certain risks aren't present until certain tasks are being performed.

For example, a furnace will have some means of flame detection to make sure fuel is burning and not just being pumped into the firebox. However, fuel can't burn unless it is first pumped and ignited, so the flame sensor has a delay at start-up to give the fuel time to be pumped and start burning.

But then you'd have to monitor the delay function, wouldn't you?

Your first paragraph suggests a solution to me, at least at the "hunch" level. Make it so errors propagate strongly enough that the system will detect them - even though a specific, lower-level error may escape notice. So, for example, my Windows machine fails to boot. I don't know why, but I do know something is wrong. A very loose sort of monitoring that doesn't catch all the errors, but only some avalanche of errors.
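
One real-world version of that loose, avalanche-only monitoring is a watchdog timer: nothing inspects the individual subsystems, but any failure that propagates far enough to stall the main loop stops the heartbeat and gets caught. A toy sketch (the class and timeout values are just illustrative):

```python
# A watchdog never checks *what* went wrong; it only notices that the periodic
# "still alive" signal has stopped arriving, i.e. that some error has
# propagated far enough to stall the main loop.

import time

class Watchdog:
    def __init__(self, timeout: float):
        self.timeout = timeout
        self.last_kick = time.monotonic()

    def kick(self):
        self.last_kick = time.monotonic()       # main loop reports "still alive"

    def tripped(self) -> bool:
        return time.monotonic() - self.last_kick > self.timeout

wd = Watchdog(timeout=0.05)
wd.kick()
time.sleep(0.1)                                 # simulate the main loop hanging
print("system fault detected:", wd.tripped())   # True: the avalanche was caught
```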

For your furnace example, I'd then say, "Well, the error is revealed when the house explodes."
 
Wouldn't that mean there is no mechanism to detect a false narrative and that errors would propagate? (Which actually sounds a lot like the kinds of errors we do make.)

I don't have any problem with a glitchy system and a failure to monitor - those aren't deal-killers when it comes to explaining consciousness. My dispute is with the idea of a nested system generally and the inherent limitations.

Well, just the existence of a mechanism doesn't necessitate its use. Also, the preference for some particular narrative can be quite compelling and perhaps beneficial.

A sense of depersonalization...

https://en.wikipedia.org/wiki/Depersonalization

... is the sense that the narrator isn't creating the narrative, but again, that too is just a narrative.
 
But then you'd have to monitor the delay function, wouldn't you?

No, actually not. If I recall correctly, the delay is controlled by a capacitive discharge. A failing capacitor (or dielectric) doesn't charge, doesn't hold as much of a charge, or just discharges faster. Part of engineering is to use expected failure modes to the benefit of the system.
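
Roughly, and with made-up component values, the fail-safe part is that a degraded capacitor can only make the delay shorter, so the flame check happens sooner rather than later:

```python
# Time for an RC discharge to fall from its starting voltage to a threshold.
# A degraded capacitor (lower effective C, leaky dielectric) shortens the
# delay, which errs on the safe side for a flame-proving interval.

import math

def delay_seconds(C_farads: float, R_ohms: float,
                  v_start: float = 12.0, v_threshold: float = 4.0) -> float:
    """t = R*C*ln(v_start / v_threshold) for an exponential discharge."""
    return R_ohms * C_farads * math.log(v_start / v_threshold)

print("healthy cap :", round(delay_seconds(C_farads=100e-6, R_ohms=47_000), 2), "s")
print("degraded cap:", round(delay_seconds(C_farads=40e-6,  R_ohms=47_000), 2), "s")
```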


Your first paragraph suggests a solution to me, at least at the "hunch" level. Make it so errors propagate strongly enough that the system will detect them - even though a specific, lower-level error may escape notice. So, for example, my Windows machine fails to boot. I don't know why, but I do know something is wrong. A very loose sort of monitoring that doesn't catch all the errors, but only some avalanche of errors.

Well, that's part of it, but the intent of the first paragraph was to show that how things fail becomes part of the monitoring. You know systems will fail, so you want them to fail safely.


For your furnace example, I'd then say, "Well, the error is revealed when the house explodes."

Of course that's a possibility, but you want it to be so improbable that it is unlikely to happen without some deliberate or erroneous intervention (like someone repeatedly pressing the reset button).
 
No, actually not. If I recall correctly, the delay is controlled by a capacitive discharge. A failing capacitor (or dielectric) doesn't charge, doesn't hold as much of a charge, or just discharges faster. Part of engineering is to use expected failure modes to the benefit of the system.

This is similar to what I had in mind, but now I realize it dodges the issue. Because "benefiting the system" is in a different category than monitoring. I still have hopes for a cyclical structure though. Everyone watches the guy before him in the chain. And then the chain loops back on itself.
 
This is similar to what I had in mind, but now I realize it dodges the issue. Because "benefiting the system" is in a different category than monitoring. I still have hopes for a cyclical structure though. Everyone watches the guy before him in the chain. And then the chain loops back on itself.

Right, and a point I was going to make but for some reason just didn't. Does a monitor have to light a light or stop the system if the system just stops itself? Is the system just stopping any less of a monitor function than some other part of the system that looks at it and then stops it?


As to the latter part of your post, that sounds like just a feedback loop: the output, or part of the output, of a system becomes part of its input. That's another critical part of self-monitoring, particularly in complex and automated motion systems.
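
A minimal sketch of that kind of loop, using an invented thermostat-style example (the gain, loss factor, and crude "plant" model are just placeholders):

```python
# Feedback loop: part of the system's output (the measured temperature) is fed
# back and compared with the setpoint to decide the next input (heater power).

def simulate(setpoint=20.0, steps=30, gain=0.8, loss=0.02):
    temp = 10.0
    for _ in range(steps):
        error = setpoint - temp            # feedback: output compared to goal
        heater = max(0.0, gain * error)    # proportional correction
        temp += heater - loss * temp       # crude plant: heating minus losses
    return temp

print(round(simulate(), 2))   # settles close to the 20-degree setpoint
```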


ETA:

https://en.wikipedia.org/wiki/Feedback
 
Actually it is one of the benefits of an independent objective reality that the nature of something doesn't change or depend solely upon how it appears to us.


A mirage in the desert that appears as a body of water does so since it seems to reflect the sky. The nature of the existence (ontology) of that perception is the refraction of light combined with our interpretation of how a body of water appears at a distance.

Hi,
Having been around this block, there will be the material vs. immaterial debate and how you can't tell which is which, so it could be all Mind.

To which I say that the ontology is moot; it appears and acts as though it is material.
 
Hi,
Having been around this block, there will be the material vs. immaterial debate and how you can't tell which is which, so it could be all Mind.

To which I say that the ontology is moot; it appears and acts as though it is material.


Yeah, a block we've both been around, but it's not the ontology that ends up moot in that case; it's the pretense of a distinction one asserts when one can't tell which is which. So dualism, in such a case, just collapses to either materialism or idealism.
 
Exactly. Now why would this be troubling at all if the alternative assumption is no reachable reality?

It's a serious question.


It’s not troubling…except that science is encroaching on all manner of what were previously impenetrable paradigms. What is it that actually occurs prior to quantum reality? What is it that causes quantum reality to behave like quantum reality? Why does ‘reality’ behave as if it follows laws? What is ‘information’? What is ‘consciousness’ and what is the actual ‘truth’ of human experience? What actually is a ‘real’ human being?

…all of these questions explicitly implicate elementary ontology…as well as the processes by which we understand the meaning of such concepts.

So it’s no longer merely academic. It may be necessarily academic since no one currently has the slightest capacity to empirically resolve these questions…but quite obviously…the issues are at the forefront of numerous areas of empirical exploration.

...and of course, as soon as questions about elementary ontology start to get batted around, we're immediately into the big leagues of religion and spirituality and the incomprehensible world of "what's going on here then, amen!"
 
Why did humans develop this way? Pascal Boyer answers that in Religion Explained. Our brains see and sense conscious forces behind everything, like otherwise unexplained rustling in the grass. If our ancestors all stayed put to be devoured by predators instead of making this assumption we wouldn't be here to ask about dualism. And many times those unexplained sounds and movements had no visible animal to account for them. So unseen conscious entities were responsible. And then there were dreams ....

Those creaturely survival skills aren't unique to humans. Animals use sensory awareness to flee. Since some animals hunt humans, why wouldn't the sound itself be an alert, as it would be for any prey animal?

The dream thing is very true. And night/day in itself sets up dualism. I think dreams are probably fragments of material contained in your relatively short-term memory. Cerebrospinal fluid flushes the brain every night and maybe it's sweeping out fragments of recent thoughts and observations that have served their purpose. We then experience them again but very fleetingly, and though they seem to make up a story, it's our so-called conscious mind imposing a narrative.

***

To say no one knows what consciousness is, is pretty much assuming there's such a thing as "consciousness" in the first place. That could be why we struggle to define it. How do we even know it exists?

Descartes' "I think, therefore I am" presupposed an "I" in the first place.

It's a tautology.
 
