The Hard Problem of Gravity

The question about more complex programming systems is this: can this multi-processor, multi-level, extremely complicated programmed intelligence system be emulated, even in principle, as a series of single instructions in one big program?


Probably. Don't precisely see why not.
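
To sketch why not (a toy illustration of my own, not anyone's actual architecture): any number of parallel "processors" can in principle be emulated by interleaving their individual instructions on one sequential machine, something like this:

[code]
# Toy illustration: two "processors" emulated by interleaving their
# instructions one at a time on a single sequential machine.

def processor(name, steps):
    """Each yield represents executing one instruction."""
    for i in range(steps):
        yield f"{name}: instruction {i}"

def run_serially(processors):
    """Round-robin scheduler: one big program, single instructions."""
    trace = []
    while processors:
        for proc in list(processors):
            try:
                trace.append(next(proc))
            except StopIteration:
                processors.remove(proc)
    return trace

if __name__ == "__main__":
    procs = [processor("CPU-A", 3), processor("CPU-B", 3)]
    for line in run_serially(procs):
        print(line)
[/code]

The real system would be unimaginably bigger, but the serialization trick itself is the same.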


If it can, then that is what it is, and we have no reason to believe that such a program would develop into something else simply by complicating it.

Why would anyone think that such a program would develop into something else simply by complicating it? I don't understand your objection, since mere complexity is neither the goal nor a proposed solution to any issue in consciousness research.

Architecturally, complexity is a necessity to emulate our form of consciousness because we think through very complex means. But mere complexity is not the issue. The issue concerns the type of architecture necessary to produce human-like consciousness.

That architecture must contain certain properties including intentionality, motivational systems, emotional systems, language, etc. and involve recursion at higher levels. Mirror neurons are simply neurons that respond to complex (high-level abstract) information and depend on the function of many lower level neurons that 'feed' info into them.

Why in the world could we not simulate such activity on a computer?
 
If one didn't know any better, they'd be led to think that you suffer from megalomania. It would explain your argumentation by fiat and dogmatic inability to recognize when you're demonstrably wrong.
If I'm demonstrably wrong, why are you having such a hard time demonstrating this?

People here already have, numerous times and in numerous ways. You're just psychologically incapable of recognizing it.
Really? Give me an example. One will do.

Note that the example must be (a) coherent, (b) well-defined, (c) operationally defined, (d) something that exists, and (e) something that contradicts my model.

You seem to mostly fail at (b). Sometimes (a).

Conscious experience. If you don't experience it yourself, tough luck.
What is conscious experience? Define it, operationally, show that it exists, and show that it contradicts my model.

Erm...hows about all of it?
Hows about you stop just waving your hands and think about what you're saying? What is this "conscious experience" that you claim contradicts my model? Does it happen? Where does it happen? How does it happen? And how does it contradict my model?
 
Yes. For me, Pixy takes a strong AI definition of consciousness and then proceeds to defend this definition, never really stopping to investigate whether the myriad phenomena of human consciousness really fit with it. Anything which does not agree with his definition must be wrong because, well, just because! It's like arguing with the HAL computer from 2001.

The problem, as myriad commentators acknowledge, is that we don't have a proper agreed definition of the term "consciousness." If you're a computer this is a major drag, of course.

Nick

It's funny you should mention it. The more I observe Pixy speak on this issue, the more he resembles a chat bot. He doesn't venture beyond the rigid confines of his programmed dogma. Any statements or arguments beyond the scope of his programming elicit a stock error message to the effect of "Wrong." or "Irrelevant.". He also seems unable to grasp semantics beyond a superficial level. Perhaps he's merely an experiment being carried out by an MIT post-doc; an AI program that not only does computer programming but argues for the Strong AI position on web forums.

Truly, PixyMisa is an engineering marvel :D
 
Like I mentioned earlier, a newborn baby's body contains all the information required to build a human, but that child is not born with knowledge of biology. The entire body performs all the computational functions, and more, that some here allege to be identical to consciousness. The point I'm making is that these are not, in and of themselves, sufficient to produce conscious experience.


It is important not to confuse the description with the thing itself. That the description is objective does not mean that the thing itself is objective in the way that we typically use that word.

I don't understand why the computational capacity of a computer or a human is not capable of producing conscious experience. Recall that a part of that computational capacity in a human includes emotion and motivational states. We leave those bits out of computers normally because of the way we use them -- computers are tools that perform some of our intellectual labor.

I see no theoretical problem with a computer being provided emotions and/or motivational states, do you?


The mechanisms that ultimately give rise to consciousness are not themselves conscious.

Yeah, on that score I think we all agree. But the liquidity of water is not found in the structure of the water molecule either, so that is no objection.

There is still much to learn with regard to the 'hows' and 'whys' of this whole process. I find the whole attitude exhibited by some here, of declaring that the question is not important [or dogmatically declaring "*I* already solved it"], to be extremely unscientific. If they truly believed that, they should just pack their bags and retire, because they've nothing more to contribute.

Is anyone claiming to have solved all the intricacies of human consciousness and/or human mental activity? Pixy is claiming to have a solution to one definition of the word conscious, which is non-controversial. The problem, as is almost always the case, is that most words have many meanings which share family resemblances, and we squish around amongst all these meanings; so the argument can hide in this meaning for a time and then in that.


Yes, there is much to learn about this whole process and how the brain does it. But it is wrong to think that we have not made some progress in this matter.

If you want to attack this philosophically, then you need to begin by pinning down fairly rigid definitions and then exploring those definitions. That is what Pixy has done, so more power to him I say. There is no progress when folks say, "Ooh, it's unsolvable".

If you are serious about this, then would you like to explore how we use these words in detail? That is just the jumping off point. Boring, yes, but necessary.
 
Is anyone claiming to have solved all the intricacies of human consciousness and/or human mental activity? Pixy is claiming to have a solution to one definition of the word conscious, which is non-controversial. The problem, as is almost always the case, is that most words have many meanings which share family resemblances, and we squish around amongst all these meanings; so the argument can hide in this meaning for a time and then in that.
Exactly right, which is why I keep asking AkuManiMani, Westprog, and Beth to define their terms. For whatever reason, they have quite failed to do so. And so the argument squishes along, never getting anywhere.
 
Okay. Build a system that experiences sounds as colors.

That is easy. Start with a system that experiences colors, then swap the source of visual input with something that encodes auditory information. You could do it to a human, if you had a lot of money and were in certain countries.
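
Here's a toy sketch of that rewiring idea (all the names and the frequency-to-wavelength mapping are mine, purely illustrative): keep the downstream "color" stage untouched and just feed it values derived from sound instead of light.

[code]
# Toy sketch: keep the downstream "color" stage, but feed it values
# derived from sound instead of light. The mapping is arbitrary and
# purely illustrative.

def color_stage(wavelength_nm):
    """Downstream stage that 'experiences' a color label for a wavelength."""
    if wavelength_nm < 490:
        return "blue-ish"
    elif wavelength_nm < 580:
        return "green-ish"
    return "red-ish"

def audio_to_visual_input(frequency_hz):
    """Re-encode an audible frequency (20 Hz to 20 kHz) onto the
    visible range (roughly 380-700 nm) so it can drive the color stage."""
    lo_hz, hi_hz = 20.0, 20000.0
    lo_nm, hi_nm = 380.0, 700.0
    t = (frequency_hz - lo_hz) / (hi_hz - lo_hz)
    return lo_nm + t * (hi_nm - lo_nm)

# A 440 Hz tone now drives the color stage instead of light:
print(color_stage(audio_to_visual_input(440.0)))
[/code]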

Provide me a link explaining the computational model of nausea or vertigo.

I don't know of any. If you find one, let me know, because "emotion" is easily the most difficult issue when it comes to detailing human consciousness. I am particularly interested in how suffering and happiness arise.

However, emotion is still just a detail. Otherwise, exhibiting emotion would be a requirement for consciousness, and it isn't. At least, you haven't said so yet.

How wide is the range of potential subjective experience?

As wide as the range of things that can be computed.

What kinds of computations give rise to each?

The kinds of computation that give rise to each. A tautology, but then subjective experience is nothing but a tautology. System X experiences being system X because it is system X. What else could it be like to be system X?

What you experience is generated by reasoning, which means using existing facts about the world to infer new facts. Neural networks are implicit reasoning machines -- facts come in, new facts go out. Your brain is made of neural networks. If you disagree with any of this, feel free to enumerate the types of thought you are capable of that cannot be described in terms of reasoning -- including your precious "qualia."
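
To make the "facts come in, new facts go out" point concrete, here's a hand-wired toy network (the weights and fact names are my own invention, not a model of any real brain circuitry):

[code]
# Toy illustration of "facts in, new facts out": a hand-wired
# two-layer network that infers new facts from existing ones.

def step(x):
    return 1.0 if x > 0.5 else 0.0

def infer(facts):
    """facts: dict of input facts valued 0.0 or 1.0."""
    cloudy = facts["cloudy"]
    forecast_rain = facts["forecast_rain"]
    # Hidden "fact": it will probably rain if either cue is present.
    likely_rain = step(0.6 * cloudy + 0.6 * forecast_rain)
    # Output "fact" inferred from the hidden one.
    bring_umbrella = step(1.0 * likely_rain)
    return {"likely_rain": likely_rain, "bring_umbrella": bring_umbrella}

print(infer({"cloudy": 1.0, "forecast_rain": 0.0}))
# {'likely_rain': 1.0, 'bring_umbrella': 1.0}
[/code]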

Explain to me how a bat experiences its own echolocation. Do they experience it in a way analogous to our hearing, does it evoke the experience of something akin to a visible map, or do bats experience it in a qualitative way completely alien to human experience?

I don't know the details, because I am not a bat and I haven't looked at a bat's brain code in a debugger.

However, I can confidently say that the bat experiences echolocation the same way you experience anything without being conscious of the experience.

What did your toe feel like when you were driving home from work the other day? Don't remember? Does that mean there was no sensory input from your toe? Or does it mean you experienced sensory input but didn't reason about it and thus weren't actively conscious of it?
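
For what it's worth, here's a toy sketch of that distinction (the channel names are purely illustrative): every channel delivers input on every tick, but only attended channels get reasoned about and become reportable afterwards.

[code]
# Toy sketch: all signals arrive, but only attended channels are
# processed further and become reportable later.

sensory_input = {"vision": "road ahead", "hearing": "radio", "toe": "mild pressure"}
attended = {"vision", "hearing"}

reportable_memory = []
for channel, signal in sensory_input.items():
    # Every channel delivers input...
    if channel in attended:
        # ...but only attended ones are reasoned about.
        reportable_memory.append((channel, signal))

print(reportable_memory)  # the toe's input was received but is not reportable
[/code]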

Is there anyone who's proposed a viable means of recreating the subjective experiences of one animal in another, completely different species?

Such a thing is impossible by definition.

To experience things like a bat you would have to be a bat. To do so means you would no longer be a human, you would be a bat. Which means you would not be a completely different species, you would be the same species.

Hows about recreating those experiences in a present-day AI system?

Yes they are working on a rat brain I think:

http://www.guardian.co.uk/technology/2007/dec/20/research.it

As Pixy has said, simulated biological neural networks aren't very useful right now because our current needs for AI are best served by very deterministic systems that we understand fully and that have completely predictable behavior. That is to say, behavior that a human can sit down and predict in a debugger just by looking at some numbers.

This is slowly changing though.
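
For anyone curious what "simulating biological neurons" means at the very simplest level, here's a generic textbook-style leaky integrate-and-fire neuron (this is not the code from the linked project, just an illustration, and all the parameter values are typical defaults):

[code]
# Generic leaky integrate-and-fire neuron, textbook-style sketch.

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
                 v_reset=-65.0, v_threshold=-50.0, resistance=10.0):
    """Return the list of time steps at which the neuron spikes."""
    v = v_rest
    spikes = []
    for t, current in enumerate(input_current):
        # Membrane potential leaks toward rest and is driven by input.
        dv = (-(v - v_rest) + resistance * current) * (dt / tau)
        v += dv
        if v >= v_threshold:
            spikes.append(t)
            v = v_reset  # fire and reset
    return spikes

# A constant 2.0 nA drive for 200 ms produces a regular spike train:
print(simulate_lif([2.0] * 200))
[/code]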
 
OK, let's start with this one. I think we're making progress, though. I would say that it would be better if we could somehow make a distinction between the mechanism and the observation – in which case we're always only observing something (i.e., observation of). Without the mechanism there's no observation in any case. The mechanism is thus primary; it only allows observation of that to which there is access – mainly other processes or representations thereof, which are then interpreted and re-interpreted according to where access is gained henceforth (in a sort of cascading way). Obviously, access to more stable memory systems is also included.

When you say that the process of observation basically constitutes a conscious process, I cannot help but see a rather odd redundancy in how you define them as being the same thing. Moreover, how can it be "regardless of the mechanism" if there's no observation without it, and hence no conscious process without the mechanism? I would say that it is the mechanism which correlates more closely to "consciousness", and that what is being accessed correlates more closely to the content of the "observation".

Obviously it is thus ultimately a conscious process in toto, but that would primarily be because of the inferred underlying mechanism rather than due to the content of what is observed (which can change and still not make a fundamental difference to the experience – read: inference – of being conscious).

This would also be the reason why it's so darn hard to simply rely on subjective experience when trying to pinpoint what we mean by consciousness. The irony is that from a first-person perspective we are inclined to infer it from the constant change of observed content, whereas from a third-person perspective it would be inferred from looking at the mechanism, regardless of the observed content.

Finally, if the content would not change at all, could we say we are conscious in any meaningful way? How about if it would change, but in a very limited and repetitive fashion?

Very interesting points you've brought up so far, Lupus [keep'em coming :)]. I've been giving it some thought and here's the best I've been able to hash out so far...

[Bear with me, tho. On first reflection the concepts I'm presenting are liable to make one's eyes cross a bit -- I know I get a lil' dizzy thinking about them at times :boggled:]

Earlier [I'm not sure if I linked it on this thread] I brought up an ontological position that the world has 'inside' and 'outside' aspects. The 'outside' aspect is the 'public' observable domain of objects, while the 'inside' would be the 'private' observer's perspective. Each perspective, in some sense, exists 'within' the other. The two aspects aren't metaphysically separate, but are aspects of the complementary whole of reality.

That being said, all subjective experiences would have to be from the 'inside' perspective of a given entity [as per the definition I give in 245]. Of course, not all entities actually have subjective experiences, because not all entities are conscious. Even so, all subjective experiences are necessarily the 'inside' perspectives of conscious entities. All public observations of conscious processes in an entity would be from the 'outside' perspective. The tricky part is actually being able to discern whether the public observation being made is of a conscious entity or not. One can only be absolutely sure of an entity's qualitative experience if, in fact, one is that entity. Consciousness is as much an epistemological problem as it is ontological -- which is why it's so 'hard'.

Fortunately, intersubjective relation is an innate capacity of our species. We appear to be hardwired to empathize with the subjective experiences of others [mirror neurons being a good example of this]. Whatever the underlying nature of consciousness is, it appears that humans at least share a common OS of conscious experience, so to speak. This kind of empathetic capacity is what allows humans to learn language. Words are just labels for common experiences, and syntax is just the formal structure for communicating the subjective meanings carried by words. Since we humans have such an innate capacity to communicate our internal states with other humans [e.g. language, facial expressions, pheromones], the epistemological problem of determining consciousness within our own species isn't nearly as deep as it is in discerning it in non-human entities.

Now, since humans are so adept at communicating their internal states to one another, the ontological problem of consciousness lends itself more readily to external investigation. Using a combination of technology and the reports of human subjects, we can at least get some rough fix on the correlations between 'outside' processes and 'inside' experience. To date, such observations have strongly suggested that the carrier of conscious experience is the EM phenomenon generated by the brain. It's apparent that the best way to approach the ontological problem of consciousness [and by extension, the epistemological problem of determining it in non-humans] would be to better understand the nature of the correlation of conscious experience with the EM processes of the human nervous system. Once we gain a solid understanding of this correlation [the 'hows', 'whys', and 'wherefores'], we can apply that knowledge not only to studying consciousness in other animals but to possible synthetic constructs.
 
BTW, some interesting responses from 'Dodger et al. Don't have enough time to reply to them all, atm, but I'll get to them as soon as I can :)
 
If you want to attack this philosophically, then you need to begin by pinning down fairly rigid definitions and then exploring those definitions. That is what Pixy has done, so more power to him I say. There is no progress when folks say, "Ooh, it's unsolvable".

If you are serious about this, then would you like to explore how we use these words in detail? That is just the jumping off point. Boring, yes, but necessary.

It's the inability to pin these concepts down precisely that is the problem. Of course we can present a well-defined X and call it consciousness, but what's the point of that? It just leaves the big problem on one side.

Saying "Ooh, it's unsolvable" might not be helpful, but it isn't as harmful as saying "Problem? What problem?".
 
If the concept is fuzzy, then we simply cannot discuss it the way we are attempting to here.

My guess is that the concept remains fuzzy precisely because we don't try to pin it down to specifics.

In my brief time discussing this issue we always seem to end up in the relatively fuzzy area of "awareness" and "feelings". These are supposed to be undefinable, but I don't think they are. I tried to get Undercover Elephant/JustGeoff to work towards useful definitions a few years ago, but he wouldn't have any of it.

I grant that it's a difficult thing to do, but that is the only way we can make progress. I fear part of the reason we avoid strict definitions is because of what we are afraid we might find -- that the problem is not so intractable after all. The fuzziness is so much more romantic.
 
It's the inability to pin these concepts down precisely that is the problem. Of course we can present a well-defined X and call it consciousness, but what's the point of that? It just leaves the big problem on one side.

Saying "Ooh, it's unsolvable" might not be helpful, but it isn't as harmful as saying "Problem? What problem?".
How are you ever going to solve a problem if you are not allowed to ask what the problem is?

As far as I know the first step in problem solving is always "define the problem".
 
Ack! I didn't intend to give the impression that I believed consciousness is some kinda substance, or what have you. If I led you to believe that that is what I meant I apologize :covereyes
No, you misunderstand me. I didn't think you were saying that consciousness was a substance; I was saying that when we use certain forms of words, we push ourselves towards saying things we do not intend.

I have since realised that my format is no better: when I say "consciousness happens", I am forcing on myself the assumption that real time is the same as experienced time.
 
It's the inability to pin these concepts down precisely that is the problem.
No.

It's your inability to pin these concepts down that is your problem.

I don't have this problem. Rocketdodger doesn't have this problem. Mercutio certainly doesn't have this problem.

Of course we can present a well-defined X and call it consciousness, but what's the point of that? It just leaves the big problem on one side.
How would you even know? If you can't say what consciousness is, you can't claim that it isn't what I say it is.

Saying "Ooh, it's unsolvable" might not be helpful, but it isn't as harmful as saying "Problem? What problem?".
How do you even know there is a problem?
 
How are you ever going to solve a problem if you are not allowed to ask what the problem is?

As far as I know the first step in problem solving is always "define the problem".

Of course it is. I'm not saying we shouldn't define the problem. I'm saying we haven't defined the problem. It would be a very good thing if we could.

I don't see how defining a different problem, and saying that it's the same thing helps us at all. In fact, it's extremely misleading. To say that if we can duplicate the behaviours associated with consciousness we understand it is fallacious.
 
Of course it is. I'm not saying we shouldn't define the problem. I'm saying we haven't defined the problem.

:confused::confused::confused:

Can you imagine someone saying this in any other context?

Westprog: There is a problem with my car.
Me: Really? I know a bit about cars, what's the problem?
Westprog: I'm not sure yet.
Me: Well, what's going wrong? Is it starting?
Westprog: I can't define the problem, but I know there is one.

Or

Westprog: I am having trouble with a math problem.
Me: What problem? Let me have a look.
Westprog: I haven't encountered it yet, but I'm sure it's somewhere out there.

What does it even mean to say that there is a problem that you can not define? What are the effects of this 'problem' that make it problematic?

How can you even be a part of this argument if you have no idea what you are arguing about or why?
 
Of course it is. I'm not saying we shouldn't define the problem. I'm saying we haven't defined the problem. It would be a very good thing if we could.

I don't see how defining a different problem, and saying that it's the same thing helps us at all. In fact, it's extremely misleading. To say that if we can duplicate the behaviours associated with consciousness we understand it is fallacious.

So if we create a mind that exhibits the behavior of thinking it is conscious, we don't understand consciousness?
 
Of course it is. I'm not saying we shouldn't define the problem. I'm saying we haven't defined the problem. It would be a very good thing if we could.
I've defined the problem.

You apparently disagree with my definition. Since you have no definition of your own, and can't indicate where my definition is lacking, this problem is your problem.

I don't see how defining a different problem, and saying that it's the same thing helps us at all.
Your problem is that since you have no definition, you cannot know that it is a different problem.

In fact, it's extremely misleading.
You cannot know this. Since you are unable to specify your terms, you cannot know anything.

To say that if we can duplicate the behaviours associated with consciousness we understand it is fallacious.
Why?
 
It is important not to confuse the description with the thing itself. That the description is objective does not mean that the thing itself is objective in the way that we typically use that word.

I don't understand why the computational capacity of a computer or a human is not capable of producing conscious experience. Recall that a part of that computational capacity in a human includes emotion and motivational states. We leave those bits out of computers normally because of the way we use them -- computers are tools that perform some of our intellectual labor.

I see no theoretical problem with a computer being provided emotions and/or motivational states, do you?

I agree that, in principle, it should be possible to produce a synthetic conscious entity. My point is that we don't understand it enough to conclusively say that we've actually reproduced such a process already. In fact, there are strong reasons to suspect otherwise.


Is anyone claiming to have solved all the intricacies of human consciousness and/or human mental activity? Pixy is claiming to have a solution to one definition of the word conscious, which is non-controversial. The problem, as is almost always the case, is that most words have many meanings which share family resemblances, and we squish around amongst all these meanings; so the argument can hide in this meaning for a time and then in that.

Yes, there is much to learn about this whole process and how the brain does it. But it is wrong to think that we have not made some progress in this matter.

If you want to attack this philosophically, then you need to begin by pinning down fairly rigid definitions and then exploring those definitions. That is what Pixy has done, so more power to him I say. There is no progress when folks say, "Ooh, it's unsolvable".

I'm not sure how much of the discussion you've had an opportunity to read so far, but I've stressed repeatedly that I do not think that the issue is unsolvable. Gaining more meaningful answers to this issue is of great interest to me.

There is nothing wrong with taking a crack at the issue and venturing a conjecture as to what consciousness is and how to model it. The problem is that the position being put forward by strong AI proponents like Pixy is dogmatically being touted as a sufficient answer to the problem of consciousness when, in reality, it completely sidesteps the issue by simply redefining it. His position is obscenely presumptuous -- especially considering that the model he's proposing is empirically falsified every day.

Pixy has not simply claimed to have solved the intricacies of human consciousness; he's outright stated that such questions are 'Irrelevant' and that the definitions he subscribes to are the sum of the matter. Hell, he's said that not only is the question of consciousness 'uninteresting', but that the systems he has created are better exemplars of it than any human. :rolleyes:

If you are serious about this, then would you like to explore how we use these words in detail? That is just the jumping off point. Boring, yes, but necessary.

I've spent a lot of time giving the actual definition of consciousness and explaining why the current operational definitions of it are not sufficient. If you get the opportunity, I recommend that you read the earlier portions of this discussion.
 
If I'm demonstrably wrong, why are you having such a hard time demonstrating this?

Really? Give me an example. One will do.

Note that the example must be (a) coherent, (b) well-defined, (c) operationally defined, (d) something that exists, and (e) something that contradicts my model.

You seem to mostly fail at (b). Sometimes (a).


What is conscious experience? Define it, operationally, show that it exists, and show that it contradicts my model.

Hows about you stop just waving your hands and think about what you're saying? What is this "conscious experience" that you claim contradicts my model? Does it happen? Where does it happen? How does it happen? And how does it contradict my model?

Good grief! You're just not getting it are you?

I, and others here, have explicitly, repeatedly, coherently, and in detail explained to you what the definition of consciousness is, and why there are currently no sufficient operational definitions of it. You've stated that reflexive computation is sufficient as an explanation and description of consciousness. I've pointed out the simple fact that such processes are fundamental to all biological systems -- including those that are unequivocally unconscious.

Your model fails as an adequate description of consciousness because the criteria for it are met even when individuals are not conscious. Your model is point-effing-gunshot-blank-wrong. It doesn't matter how 'coherent' or 'well defined' the model you're using is because, empirically, it does not fit what it alleges to be describing.

The only handwaving going on here is your dogmatic insistence that, magically, your model is 'consciousness' and all that 'other' stuff [like actual conscious experience] is 'irrelevant' or 'uninteresting', when they are precisely the things at issue here. All of your argumentation thus far is based on a completely irrelevant non sequitur.
 
