proto-consciousness field theory

The good old HPC (the hard problem of consciousness). I used to joke that I was a p-zombie because I couldn't understand this idea of the "experience of red". Turns out I actually am a p-zombie: I never have an "experience of red" apart from when photons are hitting my retina and setting off the ensuing cascade of measurable chemical changes in my brain and other tissues, because I have no "mind's eye".
Do you dream? What's that like?

If you've gotten this far in life without a "mind's eye," then apparently you're doing pretty well. The so-called mind's eye is unreliable, sometimes convincing people that something they vividly "see" never happened.

Many multiple personality stories are widely thought to have arisen from therapists working with suggestible patients. False memories were also an issue in cases of alleged satanic abuse.

ETA: ninja'd
 
Having criticized my consciousness guru ex-friend, I need to make it clear that I am not talking about either Roger Penrose or David Chalmers.
 
I'm not really sure how you'd program a robot to have an agent model of itself, or how to program it to pursue its "own" goals. If you program a drone to fly forward 3 feet and then hover, is that its "own" goal?


I'm reminded of a really primitive robot I saw on some forward-looking program when I was a kid. It wandered around a room seeking out electrical outlets to charge its battery. Apparently if you give a robot enough of those algorithms it becomes conscious. I happen to actually believe that myself: that if AI becomes complex or intelligent enough, it will become conscious. Probably the closest thing I have to a religious belief.
 

Serious question: how do you know some of them aren't already conscious?
 

Fair enough. :) I really have no idea how we'll ever know. Sam Harris said for him it will boil down to waiting for an AI to make a compelling case (that it wasn't pre-programmed to make) that it really is conscious/sentient. Seems about right.
 
It would be pretty easy to get AI to identify a human, a chair, getting on the chair, "retrieving something" (vs. changing a lightbulb), etc. It could even be easily programmed to see "falling" in a kitchen, identify it as an "accident", and rate it as something like a 4 on a one-to-five scale of "accident severity/relevance".


If this would be pretty easy, you should do it straight away. A program that could run on a typical processor, and could monitor a video feed of a room and reliably detect when a person falls, would be worth hundreds of millions to the nursing home and home health care industries. (Reliability doesn't have to be perfect. A certain rate of false positives is tolerable. Think smoke detectors.) I'd invest in your start-up. It would be like free money.
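
To make the gap concrete, here's roughly what the "pretty easy" version amounts to: a minimal sketch, assuming OpenCV, a fixed camera, and thresholds I've simply made up. It flags any large moving blob that's wider than it is tall as a possible fall:

```python
import cv2

# Naive "fall detector": background subtraction plus a bounding-box
# heuristic. The camera index, blob-area threshold, and aspect ratio
# are all hand-picked assumptions.
subtractor = cv2.createBackgroundSubtractorMOG2()
cap = cv2.VideoCapture(0)  # assumed: the room's video feed

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    mask = cv2.medianBlur(mask, 5)  # suppress speckle noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < 5000:  # ignore small blobs
            continue
        x, y, w, h = cv2.boundingRect(c)
        if w > 1.3 * h:  # wide horizontal blob: a person on the floor?
            print("possible fall")

cap.release()
```

A dog lying down, a dropped coat, or someone stretching out on a sofa all trip it, and a fall that ends behind the couch doesn't. That's the distance between a demo and the product I'm describing.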

Heck, if you demonstrated an AI that could monitor a video feed from a swimming pool, and reliably tell the difference between someone jumping or diving in on purpose, and someone falling in, within a year every insurance company in the world would be requiring every pool owner in the world to install one.

Now, just maybe, we're at the point where these AIs would be possible, they just wouldn't be practical for general use because they'd require a supercomputer to run on. (And yes, for some military applications of comparable difficulty that limitation might not matter.) That practical problem becomes a fundamental conceptual problem for the idea of reproducing the human brain's ability to compress raw sensory observations into narrative, as I'll show.

The problem here is, you're looking at the difficulty of a specific case, such as "detect falling in an indoor space and estimate its severity," as if it were representative of the difficulty of the problem in general.

Sure, you could probably configure IBM's Watson to recognize "cooking in a kitchen" and "move a chair" and "get on chair" and "retrieve something" and "read from a book" and "put down a small object" and "fall off something" in a video stream. And perhaps one might not notice at first that the narrative thus created, of a string of detected events lacking any sense of the causal connections between them or inferences therefrom (that the book is a cookbook; that the object reached for was a cooking ingredient) or any editing of the unimportant details (moving the chair), is a terribly poor one compared with what a young child could manage.

But then you input a different video, say of a young child in a snowsuit climbing a snowbank with a snow saucer, sitting down in it, and then being pushed to slide down the slope by a large friendly dog, and your AI wouldn't be able to make any sense of it. You've solved zero percent of the general problem of turning a stream of sensory data into summary narrative. What you have instead is known in the business as a rigged demo.

What practical present-day AIs do to overcome such problems is to exhaust all possibilities, by taking advantage of the incredible speed of present-day processors and by constraining the range of possibilities considered. "Alexa" doesn't really figure out what you're saying; it figures out which of a limited (though large) list of possible commands you're most likely giving it.
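
In code terms, the trick looks something like this toy (the command list and the word-overlap scoring are purely my illustration, not Alexa's actual internals):

```python
# Constrained "understanding": score the utterance against a fixed
# command list and return the best match. The system never has to
# consider any meaning outside this list.
COMMANDS = {
    "set a timer": "TIMER",
    "play some music": "PLAY_MUSIC",
    "turn off the lights": "LIGHTS_OFF",
    "what is the weather": "WEATHER",
}

def match(utterance: str) -> str:
    words = set(utterance.lower().split())
    best = max(COMMANDS, key=lambda cmd: len(words & set(cmd.split())))
    return COMMANDS[best]

print(match("could you turn the lights off please"))  # LIGHTS_OFF
print(match("summarize this novel for me"))  # TIMER: a nonsense best match
```

Note the failure mode: given anything outside its list, it doesn't understand that it doesn't understand; it just returns the least-bad match.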

But that won't work for the problem at hand, because the range of possibilities is too large and the processing too intensive. Remember how you'd probably need a supercomputer to monitor a video feed and reliably detect whether there's a person falling? To simultaneously detect whether there's a person reaching for something would require another supercomputer. To simultaneously detect whether there's a dog pushing a snow saucer would require another one. And so forth. Long before you run out of possibilities, you reach the limits of processing power.

But maybe that's because the input is video, which is data-intensive. Maybe you could use just one supercomputer to identify all the objects in each frame. Then a second one to keep track of continuities (e.g. movements of the same objects) from frame to frame. Then a third to determine "actions" and "events" from the continuities. (The chair continues deforming/breaking; the person begins falling.) Then a fourth to judge causality (the person falls because the chair broke). And so forth. Only the first two layers need to process actual video; they produce coded data such as words, maps, and trajectories that all the subsequent layers would use. Would that help?
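
As a toy of that top layer (everything here is a placeholder of mine, including the crude "adjacent in time means causally linked" rule; it's a sketch of the data flow, not a real causal-inference method):

```python
from dataclasses import dataclass

@dataclass
class Event:
    verb: str    # e.g. "breaks", "falls" -- produced by the layer below
    actor: str   # e.g. "chair", "person"
    frame: int   # when it was detected

def infer_causes(events):
    """Fourth layer: pair temporally adjacent events as cause and effect.
    Real causal judgment needs far more than adjacency; the point is only
    that this layer consumes cheap symbolic records, never pixels."""
    events = sorted(events, key=lambda e: e.frame)
    links = []
    for cause, effect in zip(events, events[1:]):
        if effect.frame - cause.frame <= 5:  # assumed adjacency window
            links.append(f"{effect.actor} {effect.verb} because "
                         f"{cause.actor} {cause.verb}")
    return links

print(infer_causes([Event("breaks", "chair", 41), Event("falls", "person", 43)]))
# ['person falls because chair breaks']
```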

It might, but it's still a formidable computational task. What would Google Inc. pay to buy a startup that had developed an AI that could reliably summarize documents? Documents are already just words, just about the lowest-bandwidth data you can have, but processing meanings is very difficult, which is one of the reasons state-of-the-art AI language translation is poor. Can an AI read a Harry Potter novel and summarize it in a page? Not at present, or anywhere on the horizon. But a fourth grader can.

Sheesh, Myriad, what's the point? It's this. The reason you, and Marvin Minsky in 1966 when he assigned undergrads to solve machine vision in one summer, and just about everyone else, vastly underestimate the difficulty of AI is that you think the world is just there for you to see. You think your eyes are like transparent windows you look out of, at things like chairs and soup pots and "reaching for things" and cause and effect and dogs pushing snow saucers, that are all just there. Minsky's colleagues in 1966 were making good progress in getting computers to play chess well, which they considered one of the most difficult cognitive tasks humans are capable of. How hard could it be, by comparison, to scan a photograph of a room and find the chess board? A three-year-old child or a trained rat can do that. The answer, going by Moore's Law and the approximately 40 years it took to get the latter capability working well, turns out to be: about a million times harder.
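
(The arithmetic behind "a million": roughly 40 years of Moore's-Law doublings, one every two years, gives 2^(40/2) = 2^20 ≈ 1,048,576.)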

The reason is that, although (at least in our empiricist world view) the objects and (arguably) events really are there, we don't perceive them anywhere close to directly. Our brains reconstruct them from the continuously changing blobs of light and color that our retinas transduce, the continuously changing frequency spectra our ears detect, and a few other signals. Our brains sort out not only the objects and positions and movements (a person climbs on a chair and reaches for a canister) but the causes and explanations (she's cooking in a kitchen and needs an ingredient from inside the canister). We don't perceive the enormous computational effort this requires. Or rather, we do, but we don't perceive it as effort. We perceive it, in part, as consciousness.
 
How does this follow from anything you said prior? Why would it be guaranteed to be conscious?


It follows the same way that it follows that a car that can go 200 miles in one hour must have speed. It's guaranteed to be conscious because what it does is what consciousness is.
 


Your last two posts suggest you have no idea what we're talking about.
 
Why must every discussion of consciousness turn into a game of "the difference is the thing I'm defining, and the definition is the difference"?

Every single time we try to have this discussion, this happens. "Conscious" gets used to mean everything from simple sensory inputs and reactions all the way up to something that's not even pretending it isn't a code word for "soul", and pretty much every possible step between the two.

The label is so vague and varied that it is meaningless at this point. It's the "widget" of economics discussions: a placeholder you slot in to make your argument work.

I dropped the Sword of Damocles on that particular Gordian Knot a long time ago by just getting over the term.

Without using the word "conscious", explain what problem we are trying to solve, what variable we are trying to account for, or what missing piece we are trying to find a fit for.
 
I was waiting for Joe to show up with his "Get Offa My Lawn"-ery. lol
 
Just FYI, the author of that really good SciAm article (he's also the author of the book "The End of Science", if you've ever heard of that) has written a whole free ebook on "this stuff" (consciousness / the hard problem / the mind-body problem).

I'm currently reading it. It's here:
https://mindbodyproblems.com/

:)
 

He hits the nail on the head in one paragraph which actually addresses a few posts in this thread. You can't eliminate subjectivity when what you are studying is subjectivity.

Then I thought, Hold on, there’s a paradox here. Science is a method for eliminating subjectivity from our perceptions so we see things as they really are, we achieve objectivity, which philosopher Thomas Nagel calls “the view from nowhere.” But the mind-body problem is different from other scientific problems, because subjectivity is part of the problem. Subjectivity, you might say, is the problem. Maybe we cannot escape our subjectivity when we contemplate consciousness and other mind-related riddles. When it comes to the mind-body problem, maybe there is no view from nowhere.

 

I have often said that the issue should be discussed without using the term "consciousness", just as the claim that free will is an illusion should be set out without using the term "free will".

It really deserves a thread of its own. I would create one but I have an appointment to get tooth picks rammed under my fingernails and I don't want to miss it.

 
Actually, having toothpicks rammed under your fingernails is a lot easier after you have read some Dennett and realised you are not really feeling pain: it is just an illusion, you only seem to be feeling pain.

 
But this is a pretty important step in the right direction...

"The team double-checked their work by looking at fMRI scans of 45 patients in comas or vegetative states, and showed that all of them had the network between these three regions disrupted."

Now, I'm not saying that this is the definitive cause of consciousness. Hell if I know where consciousness comes from. But it points to a possible objective cause of consciousness. Much like the dark energy and dark matter cases, we don't have to understand the ultimate causes of the phenomena to believe that they are objectively present. I really don't see how this is any different.

I think this is definitely a counter-argument to the idea that consciousness is a product of any data-processing, or that any data-processing causes a distortion in the consciousness-field. If that were the case, then damage to one small part of the brain wouldn't cause consciousness to be lost. Then again, the same is true of sleep and anaesthesia - the brain is still functioning while in these states, processing a relatively massive amount of data when compared to most other data processing systems.

At the very least, it implies that only certain kinds of data-processing give rise to consciousness, which doesn't support the idea that consciousness is all-pervasive.
 
That's the p-zombie: you can have all the appearance of being conscious, but there are no qualia. In other words, if I say to you "close your eyes and imagine a juicy red apple", you will have the qualia of the experience of a red apple. The robot would just say it is imagining the red apple but would have no qualia of the experience of a red apple; it would be lying, just as I've found out I have been doing all my life, since I have no such qualia. I cannot close my eyes and imagine a red apple, juicy or not. If qualia are a necessary component of consciousness, you have to conclude I am not conscious.

Here you're talking about qualia in the absence of a stimulus, but an absence of stimulus is not required for qualia to exist. If you feel pain, then you have qualia, regardless of whether or not you can induce pain simply by thinking about it.
 

I think the IIT advocates would counter the point about only certain kinds of data-processing giving rise to consciousness by saying: yes, but the other, non-conscious data processing might still be "proto-consciousness" (which sounds to me like "not-actually-consciousness").
 

As for explaining the problem without using the word "conscious": I think I've defined my usage pretty well. I can't speak for anybody else.
 
