
The Hard Problem of Gravity

Hmmm, an interesting theory.

If you are correct, what is to be done? I mean, you are complaining about a problem -- what is your solution?

How should the people at the cutting edge of the search behave differently? What should I, as an A.I. programmer, do to 1) remedy my cowardice and 2) work harder towards the single problem that I want to solve more than anything else in the world?

[...]

Again -- assuming we come to grips with it, how should our behavior change?


I suppose one solution would be to humbly recognize the magnitude of the problem you're trying to solve. If you can't bring yourself to see the problem for what it is, how can you possibly make any meaningful progress?

Acknowledge the possibility that you may be viewing the problem from the wrong angle -- so to speak -- and that some of your basic assumptions may have flaws. Be open to making some conceptual leaps.


Well, we know some things about it. For instance, we know that if it can be understood by us then it must be mathematically describable.


One of the main differences between you and your current creations is that you're able to think beyond any particular formal system. The formal structure is just a post hoc retracing of your steps -- putting symbols and representations to meaning.

I can't tell you exactly how to do it but I am certain that any truly thinking entity is an entity that grasps meaning and not just representations of meaning.

It would seem that the best course of investigation would be to understand how things we know to be conscious actually operate and figure out what exactly makes them conscious. In other words -- look to biology for the model of intelligence. Beyond that... /shrug
 
Okay, you're waaay more complex than a door-knob. So what?

So..? My point was simply to state that you are correct in thinking that I believe that a door-knob, and my mind, are both material.

What if I built a machine that was comparable to you in complexity? Why shouldn't I treat you and my machine the same way?

If that machine is conscious in the same way that a human being is, I do not see why you would not treat it in the same manner that you do a human being. If it was conscious in the same way that a cat is conscious, I would assume that you would treat it like a cat.

The way that you treat anyone/anything is quite subjective, and up to you.

What if it can be demonstrated that not only is my machine more "complex" than you, but it is also more useful for obtaining my own personal goals? Why should I treat you any differently than I would a less useful machine?

If your machine is conscious in the same way that a human being is conscious, I would not expect you to treat me any differently than you would any other person who matters less to you than your more useful friend (the machine).

What if my creation were not only more complex and more useful than you, but I also had to choose between your life and the existence of my machine? Why should I choose your life over my precious machine?

You shouldn't. I wouldn't choose your life over the life of a friend or family member, and I would not expect you to choose me. That wouldn't make sense.
 
I suppose one solution would be to humbly recognize the magnitude of the problem you're trying to solve. If you can't bring yourself to see the problem for what it is, how can you possibly make any meaningful progress?

Acknowledge the possibility that you may be viewing the problem from the wrong angle -- so to speak -- and that some of your basic assumptions may have flaws. Be open to making some conceptual leaps.

OK, that is fine, but like I said... how should our behavior change?

One of the main differences between you and your current creations is that you're able to think beyond any particular formal system.

Unsupported assumption.

The formal structure is just a post hoc retracing of your steps -- putting symbols and representations to meaning.

Unsupported assumption.

I can't tell you exactly how to do it but I am certain that any truly thinking entity is an entity that grasps meaning and not just representations of meaning.

Ah, I see -- you are sure, but you don't know why.

It would seem that the best course of investigation would be to understand how things we know to be conscious actually operate and figure out what exactly makes them conscious. In other words -- look to biology for the model of intelligence. Beyond that... /shrug

Oh. Well, people are already doing that. So thanks, but you are 30 years behind the curve.

Reading your posts, I would have assumed you actually had a suggestion for something we should do different.
 
If that machine is conscious in the same way that a human being is, I do not see why you would not treat it in the same manner that you do a human being. If it was conscious in the same way that a cat is conscious, I would assume that you would treat it like a cat.

The problem is that we don't know how a cat is conscious, or indeed if it is.

We can of course redefine consciousness in behavioural terms, which is a way of simply ignoring the issue.
 
The problem is that we don't know how a cat is conscious, or indeed if it is.

We can of course redefine consciousness in behavioural terms, which is a way of simply ignoring the issue.

With any other definition the only being you can be sure is conscious is yourself.

Defining consciousness in behavioral terms is the only useful way to define it.
 
The problem is that we don't know how a cat is conscious, or indeed if it is.

We can of course redefine consciousness in behavioural terms, which is a way of simply ignoring the issue.

This is not an argument for the HPC; this is an argument for solipsism.
 
In much the same way that our theories are merely conceptual models of reality, and not reality itself, so aku-zombies would be models of consciousness and not necessarily absolute exemplars of it.
Consciousness is a model. A model of consciousness is conscious, the same way that a map of a map is a map.

As I've already stated in post #37, whether or not someone chooses to designate consciousness as 'special' is irrelevant to the fact that we don't fully understand it. Since when is admitting to ignorance tantamount to invoking magic??
Since when has that had anything to do with HPC?

HPC is the assertion that it is impossible to explain consciousness except in dualistic terms.

The HPC [hard problem of consciousness] isn't an affirmative claim to knowledge about consciousness; it's simply a recognition of the fact that we don't fully understand it. Period.
No. Completely wrong.

What annoys me to no end is that, out of sheer intellectual cowardice, intelligent people like rocketdodger go through their entire careers completely ignoring the problem they are supposed to be working towards solving: How does one create a conscious entity?
Most computers are conscious.

We're not going to make any meaningful progress in that regard so long as people at the cutting edge of the search lose their nerve, out of fear of their own ignorance.
Wrong. That problem was solved decades ago.

I'm going to tell you plainly. We don't know how to recreate consciousness -- yet.
Wrong. We've done that. I, personally, have done that.

We don't know the fundamental why and how of qualitative experience -- yet.
See above.
 
Alright, I'm back. Just gotta tend to a few things then I'll try and address some of your responses as I can :)
 
Which is why p-zombies aren't as confounding as people make them out to be. One major feature that separates conscious entities from unconscious ones is the ability to grasp meaning [i.e. understanding].

An AI like Deep Blue could fit the bill as a kind of p-zombie. If, for example, Kasparov were to play against Deep Blue over the internet and was not told he was playing against an AI, he might have been fooled into thinking that DB was an actual person. It is able to defeat human chess masters, but it cannot be said to understand chess any more than a calculator can be said to understand numbers. It only exists because entities which do have the capacity to understand exist and used that understanding to create it. It is, essentially, just an extension of the minds that made it.

They are algorithmic machines that manipulate syntax but semantics -- meaning -- is beyond their scope. Any construct that is only capable of syntax manipulation could count as some degree of p-zombie. Therefore, p-zombies are entities that can successfully simulate certain classes of cognitive function to give the appearance of intelligence, but can be identified by an inability to grasp meaning, which is inherently non-algorithmic.
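
To make the point concrete, here is a minimal sketch of what I mean by pure syntax manipulation (a toy example of my own, not anything resembling Deep Blue's actual program): a tic-tac-toe player that finds perfect moves by brute-force minimax over board symbols. It plays correctly, yet nothing in it grasps what a game, a move, or a win means.

Code:
# A toy "player" that picks optimal tic-tac-toe moves by exhaustive minimax.
# It shuffles symbols according to formal rules; no understanding is involved.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    # Return 'X' or 'O' if a line is completed, else None.
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    # Score positions from X's point of view: +1 win, -1 loss, 0 draw.
    w = winner(board)
    if w is not None:
        return (1 if w == 'X' else -1), None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None  # board full: draw
    best = None
    for m in moves:
        child = board[:m] + player + board[m + 1:]
        score, _ = minimax(child, 'O' if player == 'X' else 'X')
        if (best is None
                or (player == 'X' and score > best[0])
                or (player == 'O' and score < best[0])):
            best = (score, m)
    return best

# The "player" chooses X's opening move on an empty board.
score, move = minimax(' ' * 9, 'X')
print(score, move)  # 0 0 -- perfect play (a forced draw), zero comprehension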

Technically, a robotic toy could count as a p-zombie if it could fool a child, an animal, or anyone else into believing it's conscious. It would be theoretically possible to construct a p-zombie sophisticated enough to possibly fool an adult expert, but the difference between it and the child's toy would be a difference of degree only. I would posit that there can be no such thing as an indiscernible p-zombie.

All p-zombie constructs can, in principle, be tricked by a discerning conscious agent into revealing their illusory nature.

The only real issue left is that of qualia -- or 'seemingness'. It appears to be an intractable problem of determining the qualitative nature of one's subjective palette. Qualitative experience is the basis for all meaning, understanding, and the creative capacity to imagine beyond a formal set of rules to generate new ones. At present, we cannot objectively determine what the 'seemingness' of another entity is from the 'inside' perspective. This is what the core of the "hard problem" really is, and it is what the OP (either intentionally or unintentionally) misses.

Nicely said. Nominated.
 
And you are just good at doing what you do, because your DNA knew how to construct a proper neural network + sensory organs?

The consciousness in Deep Blue would obviously be much more narrow in scope than that of a human. Imagine if your genetic code had instructed that you be built as a person who only knew the best ways to play chess and defeat other chess players. You would still have memories of the current chess game to call upon, and those memories would be processed along with sensory input (the chessboard positions). You would be conscious and interacting on a very crude level.

And if you go down that road far enough... conscious toasters :)
 
This is a pathetic insult, based upon nothing.
Can you show me where the hard problem of consciousness has been defined rigorously, rather than just vague "why is there a what it is like to be me?" questions or "it seems objectively unreasonable that consciousness can arise from physical processing"?
 
Consciousness is a model. A model of consciousness is conscious, the same way that a map of a map is a map.

No, consciousness is a thing in and of itself, which can have maps drawn within/upon it.

Since when has that had anything to do with HPC?

HPC is the assertion that it is impossible to explain consciousness except in dualistic terms.

No. The HPC is explicitly a statement that the issue of consciousness is a difficult problem; the fact that it seems to invoke dualism is just one of the many difficulties that come up in trying to seriously address it.

In another thread I've dealt with some of the ontological problems that come with Cartesian dualism (which you are already acutely aware of) and with the two main monist approaches to the issue (i.e. materialism and idealism). I believe there is another approach that eliminates the shortcomings of each view.

No. Completely wrong.


Most computers are conscious.


Wrong. That problem was solved decades ago.


Wrong. We've done that. I, personally, have done that.


See above.

Wow dude, you're reminding me of a crazed alchemist who goes to his grave believing he's found the elixir of immortality. Or Columbus going to his grave insisting that he found the westward route to Asia. I'm sure you've contributed a lot to the field of AI, but you're really overstating yourself :-/
 
OK, that is fine, but like I said... how should our behavior change?

The way you conceptually approach a problem is a behavior, is it not? Newton and Einstein were both doing science but they conceptually tackled some of the same problems (in this case, gravity) in a different way.

If you had actually paid attention to the arguments I'm bringing forward, you'd have already noticed that I've pointed out at least one major flaw in your current conceptual approach, and a more cogent way of addressing the problem, which you are choosing to disregard.


AkuManiMani said:
One of the main differences between you and your current creations is that you're able to think beyond any particular formal system.

Unsupported assumption.

How so? Have you produced something that has the capacity for imagination?

AkuManiMani said:
The formal structure is just a post hoc retracing of your steps -- putting symbols and representations to meaning.

Unsupported assumption.

How do ya figure? It's no more unsupported than saying the symbol "1" is not identical to the concept of one.

AkuManiMani said:
I can't tell you exactly how to do it but I am certain that any truly thinking entity is an entity that grasps meaning and not just representations of meaning.

Ah, I see -- you are sure, but you don't know why.

Perhaps I should rephrase.

I can't tell you how to invent something that hasn't been invented yet. In this particular instance, I don't know exactly how to solve the problem. What I'm attempting to do is help better define the problem so that it lends itself more to being solved.

From reflecting on my own consciousness I can discern that there is a significant difference between the subjective phenomena in my mind [such as meaning], the symbolic representations of that meaning, and the system of their organization. A word isn't a meaning; it's just a tag to help organize and communicate meaning to other entities who've already assimilated the same system of communication. Syntax is just the formal method of organizing those tags. But it's important to keep in mind that it is not so much the symbols that are being communicated -- they are just the packaging. It's the qualitative meaning.

From what I've been able to tell, current research has made great strides in developing the logical framework through which a conscious agent can work. The skeleton is there, but the flesh, blood, and sinew of meaning appear to be missing.

AkuManiMani said:
It would seem that the best course of investigation would be to understand how things we know to be conscious actually operate and figure out what exactly makes them conscious. In other words -- look to biology for the model of intelligence. Beyond that... /shrug

Oh. Well, people are already doing that. So thanks, but you are 30 years behind the curve.

Reading your posts, I would have assumed you actually had a suggestion for something we should do different.

Ah, tut-tut-tut...

Not so hasty there. I was in a rush to tend to some IRL business and didn't get much chance to articulate more of what I was trying to convey. That was my bad.

I've put forward some ideas in another thread that address a lot of what's at issue here. It's a lot of material, so I'm not going to post it all here, but I will provide some links [posts #244 and #245] and an excerpt of some of what I said:

The main reason 'materialists' object to Cartesian dualism is because it proposes that 'mental' and 'material' are metaphysically separate 'realms'. This kind of separation would preclude any kind of interaction between the two. We know that what we consider 'mental' is affected by the 'physical' (via perception) and the mental can affect the physical (via bodily action). It becomes apparent that, in drawing a metaphysical line between mind and matter, dualism creates a logical paradox. A paradox is a strong indicator that, somewhere, there's a false assumption in one's reasoning. There are different schools of thought that attempt to address and resolve this apparent paradox (one of them is epiphenomenalism, which we've already established is pure bunk).

Materialism has its own resolution to the dualist paradox. Materialism is a monist ontology, which means that it views all things as being fundamentally the same "stuff". How it differs from other monist positions is that it views physical matter as the primary stuff from which all things arise. Its approach to resolving the dualistic paradox is to state that:

-All things are material.

-The material entity most clearly associated by evidence with the mental is the brain.

Therefore, the mental is merely a material phenomenon of the brain.



The Idealists also subscribe to the monist view, but they flip the materialist argument on its head. In their interpretation:

-The only thing that is ever experienced is the mental.

-The only way for us to perceive matter is if it is, in some sense, mental.

Therefore, all matter, including the brain, is merely a mental phenomenon.

"Wait", the dualist might say to both, "Each of you ignores the fact that mental qualities and material objects have fundamentally different characteristics. The only logical solution is to conclude duality".

But, as I pointed out earlier, dualism introduces the interaction paradox, which is a fatal flaw in the theory. The only way to resolve the paradox of dualism and overcome the shortcomings of the materialist/idealist dichotomy is to assume some form of neutral monism. The subjective and the objective must have a common metaphysical basis that is neither mental nor material.

edit: FYI, in link #245 I elaborate a little more on the method of ontological classification I employ.
 
Akumani, Malerin is a dualist. While I strongly oppose his viewpoints on this issue, I at least understand where he sits.

I would like to ask you again: Do you think that there is an immaterial aspect to the mind?
 
Akumani, Malerin is a dualist. While I strongly oppose his viewpoints on this issue, I at least understand where he sits.

I would like to ask you again: Do you think that there is an immaterial aspect to the mind?

Read the links to my other responses. I've already spent a lot of time articulating my thoughts and typing them out. It's there for you to read. [And FYI, no, I am not a dualist.]
 
