The Hard Problem of Gravity

Then clearly, you are using 'consciousness' as a pseudonym for a much wider range of phenomena than it actually encompasses.
Says who? You still haven't defined the term.

Again, you're simply defining yourself out of a problem instead of working towards solving it.
What problem? You haven't defined the term, so you can't define what the problem is.
 
So, apart from the Nine Month Process, do we now have the operational and technological expertise to build a self-aware person?
(I acknowledge that it wasn't until 10 years of age that I found psychological self-awareness, i.e., I'm here right now and I'm aware of it. And I acknowledge, as before, that this thing called self-consciousness is a "marvelous mirage.")

Can we do Star Trek TNG's Data?
or my "Apathia" (an android character in an unpublished novel).

I realize I'm talking Psychology here.
How do we, how can we get to the Psychology?
(Not counting projecting personhood onto an automaton that has no sense of personhood.) (Yikes! There are sociopaths who almost don't.)

(This is not the "Hard Problem." This is the very messy problem. LOL)
 
So, apart from the Nine Month Process, do we now have the operational and technological expertise to build a self-aware person?
That depends on how you define person.

If you want human-level cognition, then no.

If you want something that can explore and explain its own domain and its own reasoning, then we've had that since the late '60s.

Can we do Star Trek TNG's Data?
Nope, we're not even close to that. The computing power for that would - even today - still fill several warehouses and require its own power station. And our understanding of the details of the structure of such a system is still incomplete.
 
I use the word "apparently" because I'm careful about what I'm claiming.

And yet you assume that this appearance represents truth.

Perhaps you could explain what you rely on to interpret the universe apart from your perceptions.

My personal observations can be wrong. Therefore I rely on science.

Because we do directly experience our own awareness. There's nothing intervening.

Sounds like another turtle. How do you experience experience?

Communicating consciousness is what human beings spend a large amount of their time doing. You should meet some sometime.

Don't play dumb. We're talking about computers.

It's because we've learned ways to tell each other the contents of our minds that we have generally avoided solipsism and tend to assume that other people experience the world in a comparable way to us.

Yes, and yet you deny that the same behaviour in a machine would make us assume the same thing. The only reason you have is that we know why the computer acts this way. Again you are placing humans in a special category.

What is the difference between an experience and a pseudo-experience, or a proto-experience, or a meta-experience, or a quasi-experience? They seem to be exactly the same thing. How can you think you are having an experience without having an experience? It's an oxymoron, like imaginary pain.

What I'm saying is that "experience" may refer to several degrees of the same thing. You seem to imply that it's either human experience or none at all.

Maybe there's a particular philosophical viewpoint which accepts the existence of everything in the universe except solipsists.

:rolleyes:
 
That depends on how you define person.

If you want human-level cognition, then no.

If you want something that can explore and explain its own domain and its own reasoning, then we've had that since the late '60s.


Nope, we're not even close to that. The computing power for that would - even today - still fill several warehouses and require its own power station. And our understanding of the details of the structure of such a system is still incomplete.

Wow, it is amazing how someone asking a reasonable question elicits a reasonable response from you, Pixy.
 
Can we do Star Trek TNG's Data?
or my "Apathia" (an android character in an unpublished novel.)

I disagree with Pixy here. We could make a Data but it would be prohibitively expensive. It might take the entire GNP of a first world country, or more. We do have the technology, though.

I realize I'm talking Psychology here.
How do we, how can we get to the Psychology?

Neural networks are implicit reasoning machines. Added to that, every thought we have is some type of reasoning. These two facts strongly suggest we should start with a very robust reasoning system, hook up a ****load (technical term) of sensors, and imbue it with some kind of basic goals (like evolution has done for us). After that I expect things would just fall into place.

Granted it wouldn't act human unless we impart more human goals and human structure to the system. But I am confident such a system would, for instance, be able to communicate with us of its own accord (albeit in a very alien manner).
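The recipe in the post above (a robust reasoner, a pile of sensors, and some externally imbued goals) can be caricatured as a sense-decide-act loop. This is a toy sketch, not a claim about any real architecture; the world, the sensor, and the goal below are all invented for illustration.

```python
def sense(world):
    # Hypothetical sensor: signed distance from the agent to a target.
    return world["target"] - world["agent"]

def decide(reading):
    # Trivial "reasoner": act to reduce the sensed error.
    # The goal (minimise the error) is imbued from outside,
    # much as evolution imbued goals in us.
    if reading > 0:
        return +1
    if reading < 0:
        return -1
    return 0

def run(world, max_steps=100):
    # The sense-decide-act loop the post gestures at.
    for _ in range(max_steps):
        action = decide(sense(world))
        if action == 0:
            break  # goal satisfied: sensed error is zero
        world["agent"] += action
    return world

final = run({"agent": 0, "target": 7})
# The loop halts once the sensed error reaches zero.
```

Everything interesting about a real system would live inside `decide`; the point here is only the shape of the loop, not its contents.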
 
Physicists don't study consciousness. That's neuroscience. Evolution is a physical theory, but it's not part of physics courses.

Evolution continues to be a useful example on several levels.

When Evolution was purely described in terms of observations, the mechanism was a mystery. It was known that information was passed between one generation and the next, but this was not a sufficient explanation, because information is a meaningless term in physics. In order to demonstrate how evolution actually worked it was necessary to examine how DNA encodes proteins. When it's examined at the molecular level, fuzzy concepts like "information" disappear, and we're left with the subset of physics called chemistry.

If we'd made the mistake of thinking that "information" was a meaningful concept in science, we might have ended up with another mystical, ill-defined non-solution.
 
Good. I'd given it some serious thought even before the OP was posted, and I'll be glad to share what I've so far been able to discern.

In answer to your first question, being conscious of something is analogous to having something in your field of vision. The farther something is into the periphery of your conscious focus, the less conscious you are of it. Of course, this does not necessarily mean that one is literally looking at the subject of their focus; though actually looking at an object is an example of such [interestingly enough, the focusing of conscious attention has been associated with the synchronous firing of multiple neurons rather than the independent firing of individual neurons]. For example, one can focus their conscious attention on the pain of a stubbed toe without actually looking at it. Focusing your conscious attention on one thing can greatly hinder your ability to handle other tasks. It seems that conscious thinking has something analogous to an attention budget; tasks that are given more conscious attention can be consciously dealt with more effectively. Things completely outside of one's conscious field of focus are off the radar, so to speak, and must be dealt with unconsciously, if they are even processed at all.

In instances where one must multitask, it seems that one must have conditioned behaviors to handle tasks more on the periphery of one's attention [or even completely outside of it]. An example of this is a person carrying out a detailed conversation while driving. Such a task is much more difficult for someone who is just learning to drive. But, once they have driving conditioned into their behavioral repertoire, they can relegate driving to the periphery of their conscious attention while assigning a greater portion of their conscious focus to other tasks. More extreme examples of this would be the unconscious functioning of autonomic processes in the body like instincts, reflexes, heartbeat, etc. Of course, one can train oneself to consciously affect some of these auxiliary processes to a limited degree but, by and large, they are outside of one's direct conscious awareness or volition.
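The attention-budget idea above can be made concrete with a toy allocation model. The tasks and weights below are entirely invented; the sketch only illustrates the zero-sum character of conscious focus, where attention freed by conditioning one task becomes available for another.

```python
def allocate(tasks, budget=1.0):
    # Split a fixed attention budget across tasks in proportion to
    # how demanding each one is (all numbers here are made up).
    total = sum(tasks.values())
    return {name: budget * demand / total for name, demand in tasks.items()}

# Novice driver: driving is not yet conditioned, so it dominates the budget.
novice = allocate({"driving": 8, "conversation": 2})

# Practised driver: driving has been pushed to the periphery,
# freeing most of the budget for the conversation.
expert = allocate({"driving": 2, "conversation": 8})
```

The same fixed budget is spent differently in the two cases: the novice's conversation gets the small share that the expert's driving does.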


In regards to your second question, paying attention to one's own consciousness is called introspection. It is an instance of self-referential processing, but again, I must stress that it is distinguished from other instances of computational self-reference in that it is experiential. Self-reference, in and of itself, is quite well defined, but consciousness [and by extension conscious self-reference] remains an undefined function. Currently, the evidence strongly indicates that properly defining this function will depend on a better understanding of the physical processes of the computer solidly known to generate it: the brain.

I think that the field of AI has much to contribute to understanding general cognition but, as a means of explaining and generating actual consciousness, it's putting the cart before the horse. There is going to have to be a lot more progress in the realm of biophysics, and neuroscience in particular, before researchers like you will be able to meaningfully attempt to create conscious machines. Until then, I'm afraid that attempts by AI researchers to recreate consciousness will be shots in the dark :(

Don't jump the gun! You don't need to qualify all your responses with the disclaimer that you don't agree with AI researchers' opinions. Focus on the task at hand.

Back to the task:

You seem to agree that something (including consciousness itself) must be "in focus" in some way for us to be conscious of it.

You also realize that "in focus" isn't equivalent to "in field of view." Your toe example is perfect. There are other examples, such as if I told you to focus on a target in front of you and your mind started wandering, thinking about other stuff.

Now I will ask you the opposite question -- do you think it is ever the case that something is "in focus" (using the very generic meaning of focus that we have agreed upon for this context) yet one is not conscious of it in some way?

I don't mean "in visual field of view," because for example it is obvious that if one is looking at a tree they are not aware of every single leaf yet every single leaf is in their field of view (incidentally Nick227 and Piggie wiggled out of this exercise by claiming they would be aware of every single leaf -- don't be that lame). So try to answer using the same meaning for "focus" as in your earlier response.
 
That depends on how you define person.

If you want human-level cognition, then no.

If you want something that can explore and explain its own domain and its own reasoning, then we've had that since the late '60s.


Nope, we're not even close to that. The computing power for that would - even today - still fill several warehouses and require its own power station. And our understanding of the details of the structure of such a system is still incomplete.

Ah, yes. The clean simplicity of smart systems and artificial intelligence as opposed to the messiness of the psychology of human cognition.

For example, verbal expressions that have set denotations, instead of words like "consciousness" that are loaded with a broad range of meanings and connotations.

But I'm of the opinion that we will in time replicate our messy "consciousness."
But maybe, just maybe, we won't get a neurotic consciousness such as HAL 9000.
Or maybe the possibility of that comes with the territory?
 
I disagree with Pixy here. We could make a Data but it would be prohibitively expensive. It might take the entire GNP of a first world country, or more. We do have the technology, though.
I was thinking of a standalone, walking-around Data. That we can't do.

Well, we could build him 1200 feet tall. That would work. And then he could CRUSH OUR ENEMIES!
 
I disagree with Pixy here. We could make a Data but it would be prohibitively expensive. It might take the entire GNP of a first world country, or more. We do have the technology, though.



Neural networks are implicit reasoning machines. Added to that, every thought we have is some type of reasoning. These two facts strongly suggest we should start with a very robust reasoning system, hook up a ****load (technical term) of sensors, and imbue it with some kind of basic goals (like evolution has done for us). After that I expect things would just fall into place.

Granted it wouldn't act human unless we impart more human goals and human structure to the system. But I am confident such a system would, for instance, be able to communicate with us of its own accord (albeit in a very alien manner).

I'm inclined to think that we will need to evolve, or let evolve, a system that replicates the psychology we associate with being a self-aware person.
Even in our own case, this involves far more than just some smart circuitry.
It involves bodies interacting with a physical and social environment.

You may be right that we have what we need now to set the snowball rolling down the hill until it eventually avalanches.

And yes, it would be "alien," for so much of human cognition involves the endocrine system.
 
And yet you assume that this appearance represents truth.

The alternative is to assume that appearance necessarily represents falsehood. As the evidence stands, human beings appear to be unique in the universe. Should we ignore the evidence and reject that on philosophical grounds?

My personal observations can be wrong. Therefore I rely on science.

I've been insisting on the necessity for science in this area from the start. And by science I don't mean Computer "Science". If CS is a well-defined discipline, it's equal parts mathematics and engineering. Science is physics.

Sounds like another turtle. How do you experience experience?

How can you not? Experience is the end of the line. There's nothing to intervene between you and experience. You are the experience.

Don't play dumb. We're talking about computers.

If we're talking about computers, then let one pass the Turing test.

Yes, and yet you deny that the same behaviour in a machine would make us assume the same thing. The only reason you have is that we know why the computer acts this way. Again you are placing humans in a special category.

I'm distinguishing something we understand fully from something we don't understand at all.

What I'm saying is that "experience" may refer to several degrees of the same thing. You seem to imply that it's either human experience or none at all.

I'm quite open to the possibility that experience may be produced by some physical arrangement which might be duplicated outside of human beings. I think it highly probable that animals have some degree of consciousness. I think it highly improbable that rocks or thermostats have any degree of consciousness at all. However, in the absence of a physical theory, it's simply not possible to make any assumptions.
 
Evolution continues to be a useful example on several levels.

When Evolution was purely described in terms of observations, the mechanism was a mystery. It was known that information was passed between one generation and the next, but this was not a sufficient explanation, because information is a meaningless term in physics.
Bzzt. Epic fail.

In order to demonstrate how evolution actually worked it was necessary to examine how DNA encodes proteins. When it's examined at the molecular level, fuzzy concepts like "information" disappear
Fail.

and we're left with the subset of physics called chemistry.
Chemistry is not a subset of physics as such, but a physical model, an approximation.

If we'd made the mistake of thinking that "information" was a meaningful concept in science, we might have ended up with another mystical, ill-defined non-solution.
Epic fail with one and a half turns into a vat of custard.
 
Ah, yes. The clean simplicity of smart systems and artificial intelligence as opposed to the messiness of the psychology of human cognition.
Oh, artificial intelligence can get plenty messy. :)

For example, verbal expressions that have set denotations, instead of words like "consciousness" that are loaded with a broad range of meanings and connotations.
Actually, even SHRDLU could handle poorly-defined terms. It would make an assumption, and tell you what it thought you were asking.
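The pattern described here (pick a default reading of an ambiguous term, then announce the assumption) is easy to mimic. The two-sense lexicon below is entirely hypothetical and has nothing to do with SHRDLU's actual implementation; it only shows the conversational move.

```python
# Hypothetical lexicon mapping ambiguous words to candidate senses.
SENSES = {
    "block": ["cube", "rectangular solid"],
    "big": ["tall", "wide"],
}

def interpret(word):
    # Fall back to the word itself if our lexicon doesn't list it.
    senses = SENSES.get(word, [word])
    choice = senses[0]  # default to the first listed sense
    note = None
    if len(senses) > 1:
        # Tell the user which reading was assumed, SHRDLU-style.
        note = f'By "{word}", I assume you mean {choice}.'
    return choice, note

choice, note = interpret("big")
# note: 'By "big", I assume you mean tall.'
```

The system commits to one reading rather than stalling on ambiguity, but keeps the assumption visible so the user can correct it.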

But I'm of the opinion that we will in time replicate our messy "consciousness."
But maybe, just maybe, we won't get a neurotic consciousness such as HAL 9000.
Or maybe the possibility of that comes with the territory?
To a degree, I expect it does. With all the complexity of the human mind will come a bunch of interesting failure modes. They will likely not be quite the same; many psychopathologies have specific biological origins that our AI won't replicate.

But we might well see, for example, OCD, when the AI gets stuck in a bad reinforcement loop.
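A bad reinforcement loop of the kind mentioned above can be sketched with a trivial greedy value learner. The actions and rewards are invented, and this is only a loose analogy to OCD; the point is that a reward that never fades lets one action monopolise the loop.

```python
def choose(q):
    # Greedy policy: always take the highest-valued action.
    return max(q, key=q.get)

def update(q, action, reward, lr=0.5):
    # Standard incremental value update toward the observed reward.
    q[action] += lr * (reward - q[action])

# Two actions; "check" keeps paying a reward that never habituates,
# so the greedy loop locks onto it. (All values are made up.)
q = {"check": 0.1, "move_on": 0.0}
history = []
for _ in range(10):
    a = choose(q)
    history.append(a)
    update(q, a, reward=1.0 if a == "check" else 0.2)

# history is ["check"] * 10: a loop with no escape.
```

With no exploration and no reward decay, "move_on" is never sampled, so its value never changes and the loop repeats forever. Real reinforcement learners add exploration or habituation precisely to avoid this failure mode.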
 
I suggest that you, as an adult, have forgotten just how much learning it took for you to be able to speak coherently about your own private behavior. These things, which you claim are the only ones we directly experience, are far more difficult for us to describe than are objects in our environment.

And that has to be inherent. The closer we get to the actual experience, the further we are from common referents.

It's easy for us to point to circles, red, fuzzy, distant or heavy; describing pain is difficult ("is it a stabbing pain, a throbbing pain, a dull pain...?" note that we use terms that are based on things in our environment to describe the pain inside us!). Describing love, more so ("when you say you love me, do you mean the same thing I mean when I say I love you?").

There is quite a bit intervening when we experience our own awareness, but you have forgotten it, since most took place while you were still becoming verbal. To learn "dog", all you had to do was to be able to learn to agree with your verbal community, that this group of objects were dogs, and other things were non-dogs. There were any number of dogs around to point to, and virtually all of your verbal community could see them, just as you could. But... when learning your awareness, your pain, your love, learning any of the feelings, between which and you there is "nothing intervening"... the only way to learn those was from people who had no access to your feelings, and to whose feelings you had no access. So you learned them through the public behaviors, objects, and situations that accompany them. If our feelings are the same, it is because similar bodies (including nervous systems, for you reductionists out there), in similar situations, can be assumed to behave similarly (assumed, not proven).

And while we are learning the names for the feelings, we are learning in parallel the names for the behaviours. We start out drawing a happy face and a sad face, and then we realise that the man with the happy face can in fact be sad, and the man with the sad face can be happy.

All the similarity is in the physical systems; to the extent that one assumes a "consciousness" that arises, separately from the behavioral response to an environmental stimulus, there is no reason to assume that this "consciousness" (in scare quotes to distinguish it from my behavioral definition, which is actually coherent) is the same from person to person. If it is something that is not explainable by the self-referential feedback loops, if it is a mystery, then we have no reason to assume that we are talking about the same "experience" from person to person. The reasons we have to assume things are similar are all included in the feedback loop/behavioral version of awareness.

That is exactly so. The subjective experience can only be conveyed by analogy. And as I have mentioned before, much of human behaviour is an attempt to communicate subjective experience, in spite of the difficulties. But when it comes down to it, while we can agree to call something "red" we have no way at all to know if my red is the same as your red.

Long story short... (too late!), we actually have considerably less confidence in our introspective accounts than in publicly available (I won't call them "objective", since the objective/subjective distinction is a superfluous can of worms) relationships.

The inability to access or even effectively describe our "internal" experience is central to the Hard Problem of Consciousness.
 
Now I will ask you the opposite question -- do you think it is ever the case that something is "in focus" (using the very generic meaning of focus that we have agreed upon for this context) yet one is not conscious of it in some way?

I don't mean "in visual field of view," because for example it is obvious that if one is looking at a tree they are not aware of every single leaf yet every single leaf is in their field of view (incidentally Nick227 and Piggie wiggled out of this exercise by claiming they would be aware of every single leaf -- don't be that lame). So try to answer using the same meaning for "focus" as in your earlier response.
Another one of those "a" words - attention. This is covered quite well in the lecture series.
 
Anything for which we lack a physical theory is a mystery.
But what you're proposing is that it is not only unknown now, it is fundamentally unknowable by your definition.

Compare, for example, the theory of evolution. When it was mooted from observation, Darwin and other scientists had no mechanism to explain how it worked.
...but it was, and is, falsifiable.

It was possible to surmise that information was passed as part of the reproductive process, but the mechanism involved remained unknown until the analysis of the structure of DNA was complete. There were many theories, some completely incorrect.

What's significant is that until the process was understood at every level, it remained a mystery.
No, what was significant is that falsifiable predictions followed from the theory and were vindicated by evidence. Your theory of consciousness cannot possibly produce any predictions that can be measured objectively.

You might believe that consciousness is associated with information processing.
And why not? We can influence consciousness by influencing the information flows in and to the brain. This can be predicted from a physical theory of consciousness, is falsifiable, and has been done.

I don't mind hunches, but lacking the physics, they remain hunches. And one hunch is as good as another until we back it up.
One hunch is not as good as another. Ask Occam.

The problem is that consciousness is associated with a lot of other things apart from information processing. We also lack a physical theory for information processing. (We have a mathematical theory, but that's quite another thing).
I'm no expert, but I'm quite confident that you are wrong. Information processing is applied in physics, and probably not solely a question of trial and error.

I aim to please. I also try to be only 15% brusquer than the individual to whom I reply. Please remind me if I drift into the 20-25% range.
Can do.
 
