proto-consciousness field theory

Do you believe that your own personal consciousness had any effect on the contents of your posts in this thread so far? It seems to me that it would have. If you do not believe so then I would like to understand why.

My honest, and boring, answer is that I really don't know. If it did, then I suspect the interaction was minimal. Remember that I believe consciousness arises from distortions of the conscious field produced by information processing. The information processing occurs right here in this physical world we can see and measure. To what extent that 'concentration' of consciousness works the other way, to affect the information processing, I don't know.

If the replicant did not report consciousness then it is missing something essential and we have failed to replicate conscious experience.

Or it is not missing anything and consciousness is a pure product of the human brain.

If it does report success then whatever consciousness is, we will have captured it in our replication and that would imply that there are no unknown fields or other influences involved in producing consciousness.

To me it would suggest there is a conscious field and that consciousness arises from information processing alone, regardless of the substrate in which this occurs.
 
The good old HPC (the hard problem of consciousness). I used to joke that I was a p-zombie, as I couldn't understand this idea of an "experience of red". Turns out I actually am a p-zombie: I never have an "experience of red" apart from when photons are hitting my retina, with the following cascade of measurable changes in the chemicals in my brain and other tissues. I have no "mind's eye".

You are saying that you cannot intentionally induce imaginary visual experiences. To me that is something very different from not experiencing anything at all. I assume that when you look at a table you can pick out an apple that is sitting on it and tell that it is red.

Is the issue only in voluntary visual imagination? Do your dreams contain images?

Is it only visual? Can you recall songs you have heard? How about smells?
 
You are saying that you cannot intentionally induce imaginary visual experiences. To me that is something very different from not experiencing anything at all. I assume that when you look at a table you can pick out an apple that is sitting on it and tell that it is red.

Is the issue only in voluntary visual imagination? Do your dreams contain images?

Is it only visual? Can you recall songs you have heard? How about smells?
None of the above. For instance, if I am told to think about my mother's face, nothing happens: I don't get any "image". I can't count sheep, and my dreams are not visual, not even my sleep paralysis ones.

I cannot recall music or smells; the idea that you can have those sensations without direct environmental stimulus is alien to me.
 
There are reports that conscious experience does not control our behavior but only becomes aware of it after it has occurred. I would suspect these reports might be behind an earlier claim that neuroscience has established that consciousness is an illusion.

We appear to become aware of our own actions after actually taking them. That awareness can still influence future actions.

Or it's just programmed to report success or to claim consciousness it doesn't actually have.

The experiment being discussed is an attempted faithful reproduction of the actual functioning of a human brain. It would not be pre-programmed any more or less than you are.
 
We appear to become aware of our own actions after actually taking them. That awareness can still influence future actions.
Can that be demonstrated? And even if that awareness does influence our future actions, does it actually do so in a way that can be distinguished from non-aware influence?


The experiment being discussed is an attempted faithful reproduction of the actual functioning of a human brain. It would not be pre-programmed any more or less than you are.
That doesn't really say much, does it?
 
We appear to become aware of our own actions after actually taking them. That awareness can still influence future actions.

When I first heard of the experiments of Libet and subsequent experimenters I concluded that conscious decision had no influence on the brain and therefore no influence on physical action. I believed that it showed consciousness to be purely an observer that takes credit for physical action and has no influence on the 'real' world.

I still believe consciousness takes credit for decisions it doesn't cause (indeed that has been proved beyond doubt in experiment) but I also think that it can affect the physical. In general terms I see consciousness as willpower, with willpower defined as doing something you neither want to do nor have to do.

So in Libet's experiments the subjects didn't meet this criterion. Not only did they feel they needed to make a decision, they also wanted to make a decision, and for that reason they operated unconsciously, performing actions for which their consciousness later took credit.

This is why I was equivocal about the amount of conscious action it takes me to write my posts. I don't have to do it, true, but in the main I want to do it, so the influence of willpower is minimal.

EDIT: To try and clarify, consciousness is ubiquitous but conscious action is very rare.
 
Sure, if the robot's programmed to simply ape consciousness, then, as you say, all it is is a mimic, even if an advanced model.

But I don't see why, given sufficient complexity, a robot cannot break through to 'true' consciousness.

I don't see why not, either, but I also feel like we know too little about what consciousness is, and what causes it, to guess at the probability either way.
 
You've read that correctly, my 'understanding' of "multiple personality disorder" does derive from fiction, and movies, and the odd article in newspapers or magazines or online. :--)

Still, although no doubt different from the movies in the details, multiple personality disorder is still a fact, right? I was wondering if this kind of pathology might be evidence of sorts for diffuse consciousness 'centers'.

No, it's very much in dispute.
https://ww1.cpa-apc.org/Publications/Archives/CJP/2004/september/piper.pdf
https://ww1.cpa-apc.org/Publications/Archives/CJP/2004/october/piper.pdf

https://www.ncbi.nlm.nih.gov/pubmed/23197123
The rise and fall of dissociative identity disorder.

Dissociative identity disorder (DID), once considered rare, was frequently diagnosed during the 1980s and 1990s, after which interest declined. This is the trajectory of a medical fad. DID was based on poorly conceived theories and used potentially damaging treatment methods. The problem continues, given that the DSM-5 includes DID and accords dissociative disorders a separate chapter in its manual.
 
I cannot readily quote (or even recall) my specific 'sources' for thinking this, but it was my understanding that neuroscience has already shown that free will and consciousness are no more than illusions. I agree, that is a discomfiting and disorienting idea, no less so than a theist first considering the implications of atheism.

You seem to disagree with this?

I was fairly sure this was a settled matter, but I can't begin to 'defend' this impression of mine without digging around afresh for sources.

Neither has been proven. Personally, I'd say the evidence and logic strongly indicate that free will is probably an illusion, but the claim that consciousness itself is a mere illusion isn't really backed by any evidence at all. Some people just look at evidence showing how quirky and deceptive conscious experience is and use that to conclude (somehow) that the whole thing is just an illusion.
 
That sounds pretty lame to me. Since you just asserted that you don't think a non-conscious entity can't [sic] do this, I'll just assert that one can. I have no idea why you think this is a good example.


Show me the one that can.
 
My hypothesis is that the processing and remembering and recalling and summarizing it has to do in order to fake narrating its experiences is indistinguishable from actually having experiences. By indistinguishable I don't mean that we can't tell the difference (which is what we started out inherently assuming); I mean there is no difference. Which makes our posited p-zombie conscious, and therefore not a p-zombie.

When it comes to an AI version of a p-zombie, it could be programmed (and some probably will be, very soon) to fake having had real experiences to recount, based on real dialogues and events that have transpired.
 
I have never understood the "consciousness is an illusion" claim. What does it even mean?

 
That's the p-zombie: you can have all the appearance of being conscious, but there are no qualia. In other words, if I say to you "close your eyes and imagine a juicy red apple", you will have the qualia of the experience of a red apple. The robot would just say it is imagining the red apple but would have no qualia of the experience of a red apple; it would be lying, just as I've found out I have been doing all my life, since I have no such qualia. I cannot close my eyes and imagine a red apple, juicy or not. If qualia are a necessary component of consciousness, you have to conclude I am not conscious.

Qualia are not just visual memory. Even people with extreme amnesia, who can't remember three minutes ago, have experiences and thus "have qualia".
 
I have never understood the "consciousness is an illusion" claim. What does it even mean?

I've watched a lot of Daniel Dennett videos where he supposedly "explained" consciousness by "proving" it was an illusion, and I still don't get it, either.

A lot of other people find the arguments less than compelling, too.
 
When it comes to an AI version of a p-zombie, it could be programmed (and some probably will be, very soon) to fake having had real experiences to recount, based on real dialogues and events that have transpired.


Any recording device can "fake having had experiences" by playing back the recording. Such faking is easily detected due to the lack of summarizing.

Imagine a video of a woman cooking in a kitchen, who needs an ingredient that's in a canister on a high shelf out of reach, stands on a chair to reach it, but unexpectedly the chair breaks, causing her to fall, knocking a soup pot over and spilling soup all over the floor.

Do you think there will soon be an AI that can examine that video file, and output "A woman cooking in a kitchen, who needs an ingredient that's in a canister on a high shelf out of reach, stands on a chair to reach it, but unexpectedly the chair breaks, causing her to fall, accidentally knocking a soup pot over and spilling soup all over the floor."?

I don't think so, although I don't think the task is inherently impossible. In the 50+ years since Marvin Minsky assigned visual object recognition as a summer project for some undergraduate students, that problem has been partially cracked; we probably wouldn't have too much trouble getting the AI to recognize the woman, chair, soup pot, and canister in the video frames and track their movements over time. But imagine the processing and real-world knowledge integration needed to recognize that the woman's actions represent an attempt to reach the canister, and that the reason she does so is that she is cooking and it contains a cooking ingredient; and the agent modeling needed to recognize that the woman didn't expect the chair to break and that knocking the soup pot over was therefore unintentional. Imagine the sophistication of the module that decides that the parts of the video where the woman stirs the soup, and where she puts down a measuring cup before moving the chair into position to stand on, are less relevant to the main narrative and therefore should not be mentioned in a brief summary.

That's all just passive observing and analysis, but one can then imagine extending such an AI to include an agent model of itself (just as it already has for the people and animals and moving things it analyzes in videos) and to pursue goals of its own by participating in actual events. (Biological evolution, of course, would never produce a solely passive observer as that would never confer any adaptive advantage, so participating in events would coexist with the cognitive processing from the start.) At that point we'd almost surely have a conscious being, not merely an AI version of a p-zombie.

We might get (although we might have no way to recognize) rudimentary consciousness arising in self-driving vehicles, depending on what capabilities turn out to be needed to make them functional.
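
Just to make the shape of that pipeline concrete, here's a minimal sketch in Python. Everything in it is a hypothetical stand-in (real detection, tracking, and agent modeling are each enormous research problems, not a few lines of code); the point is only to show where detection and tracking, event inference, the intended/unintended judgment, and relevance scoring would sit relative to one another:

```python
# A minimal sketch of the summarization pipeline described above.
# All functions, events, and numbers are invented stand-ins.

from dataclasses import dataclass

@dataclass
class Event:
    description: str   # e.g. "the chair breaks"
    intended: bool     # agent-model judgment: did the actor mean to do this?
    relevance: float   # 0.0-1.0: how central the event is to the main narrative

def detect_and_track(frames):
    """Stand-in for per-frame object detection plus tracking --
    the 'partially cracked' part of the problem."""
    return frames  # pretend these are trajectories of woman, chair, pot, canister

def infer_events(tracks):
    """Stand-in for the knowledge-integration and agent-modeling stage,
    where goals, expectations, and accidents would be inferred."""
    return [
        Event("a woman stirs a soup pot", intended=True, relevance=0.3),
        Event("she stands on a chair to reach a canister", intended=True, relevance=0.8),
        Event("the chair breaks and she falls", intended=False, relevance=1.0),
        Event("a soup pot spills on the floor", intended=False, relevance=0.9),
    ]

def summarize(frames, threshold=0.6):
    """Keep only the events central enough to mention in a brief summary."""
    events = infer_events(detect_and_track(frames))
    clauses = [("unexpectedly " if not e.intended else "") + e.description
               for e in events if e.relevance >= threshold]
    return "; ".join(clauses)

print(summarize(frames=[]))
# she stands on a chair to reach a canister; unexpectedly the chair
# breaks and she falls; unexpectedly a soup pot spills on the floor
```

Of course, infer_events() is where essentially all of the intelligence under discussion is hiding; the relevance filter at the end is the trivial part.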
 
Any recording device can "fake having had experiences" by playing back the recording. Such faking is easily detected due to the lack of summarizing.

Imagine a video of a woman cooking in a kitchen, who needs an ingredient that's in a canister on a high shelf out of reach, stands on a chair to reach it, but unexpectedly the chair breaks, causing her to fall, knocking a soup pot over and spilling soup all over the floor.

Do you think there will soon be an AI that can examine that video file, and output "A woman cooking in a kitchen, who needs an ingredient that's in a canister on a high shelf out of reach, stands on a chair to reach it, but unexpectedly the chair breaks, causing her to fall, accidentally knocking a soup pot over and spilling soup all over the floor."?

I don't think so, although I don't think the task is inherently impossible. In the 50+ years since Marvin Minsky assigned visual object recognition as a summer project for some undergraduate students, that problem has been partially cracked; we probably wouldn't have too much trouble getting the AI to recognize the woman, chair, soup pot, and canister in the video frames and track their movements over time. But imagine the processing and real-world knowledge integration needed to recognize that the woman's actions represent an attempt to reach the canister, and that the reason she does so is that she is cooking and it contains a cooking ingredient; and the agent modeling needed to recognize that the woman didn't expect the chair to break and that knocking the soup pot over was therefore unintentional. Imagine the sophistication of the module that decides that the parts of the video where the woman stirs the soup, and where she puts down a measuring cup before moving the chair into position to stand on, are less relevant to the main narrative and therefore should not be mentioned in a brief summary.

That's all just passive observing and analysis, but one can then imagine extending such an AI to include an agent model of itself (just as it already has for the people and animals and moving things it analyzes in videos) and to pursue goals of its own by participating in actual events. (Biological evolution, of course, would never produce a solely passive observer as that would never confer any adaptive advantage, so participating in events would coexist with the cognitive processing from the start.) At that point we'd almost surely have a conscious being, not merely an AI version of a p-zombie.

We might get (although we might have no way to recognize) rudimentary consciousness arising in self-driving vehicles, depending on what capabilities turn out to be needed to make them functional.

It would be pretty easy to get an AI to identify a human, a chair, getting on the chair, "retrieving something" (vs. changing a lightbulb), etc. It could even easily be programmed to see "falling" in a kitchen and identify it as an "accident", and as something like a 4 on a one-to-five scale of "accident severity/relevance".
Military drone AIs might already have reached about that level of analytical sophistication.
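
For what it's worth, that rule-based version really is the easy part. A hypothetical sketch, with the labels, rules, and one-to-five scale all invented for illustration, might be no more than:

```python
# Hypothetical rule-based scorer for the "accident severity/relevance"
# idea above; the labels and thresholds are invented for illustration.

def accident_severity(labels):
    """Map a set of detected event labels to a 1-5 severity score."""
    score = 1
    if "falling" in labels:
        score = 3                              # a fall is always notable
    if "falling" in labels and "kitchen" in labels:
        score = 4                              # stoves and knives raise the stakes
    if "person_motionless_after_fall" in labels:
        score = 5                              # worst case the rules know about
    return score

print(accident_severity({"kitchen", "falling"}))  # 4
```

What a lookup table like that plainly lacks is any model of why the fall happened, which is where the agent-modeling question comes in.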

I'm not really sure how you'd program a robot to have an agent model of itself, or how to program it to pursue its "own" goals. If you program a drone to fly forward 3 feet and then hover, is that its "own" goal?
 
