The Hard Problem of Gravity

AkuManiMani said:
If there were an artificial construct created with operational complexity comparable to, or greater than, that of a human, would you consider it to have 'greater' value than the life of a human?


That depends.

Depends on what? Just out of curiosity, what would you say could count as a deciding factor?

AkuManiMani said:
For example, would it be justified to kill, or otherwise harm, a human to prevent harm from being done to said construct merely on the basis of its complexity?

Merely on the basis of complexity? No. Do we take complexity into account in such decisions? Certainly.

Alright, it seems we're getting somewhere.

So complexity can be considered a secondary factor but not necessarily the qualifying factor. In your opinion, what would you say is the defining characteristic to even make an entity eligible for ethical consideration?


AkuManiMani said:
Really? I thought the capacity to experience suffering was the basis for giving an entity ethical consideration.

It's one consideration. But very simple systems have no capacity to experience suffering, so it's really just a loose measure of complexity.

[...]

AkuManiMani said:
Of course. Because a brick is not alive and, as far as can be told, bricks cannot experience anything, let alone suffering.

Right. They are insufficiently complex.

In other words, the necessary complexity [whatever that happens to be] is incidental to what is actually being considered.

AkuManiMani said:
The thing is, if one were to arrange bricks or their materials in a more 'complex' way this would not change.

Fallacy of division. Quite a staggering example, in fact, since you make the assertion not only for bricks but for their materials.

I think you've missed my point. Obviously, if you sufficiently change the fundamental components of a brick it can no longer be classified as a brick. Division and composition are irrelevant to the rhetorical point being made here.

The fundamental basis of moral consideration is the presumed subjective capacity of an agent. If the agent in question does not have the capacity to qualitatively experience subjective impressions then it cannot be said to experience suffering or harm. Ethical consideration would thus be non-applicable to such an agent.

What I'm saying is that complexity, in itself, is not a qualifying factor when considering an entity to have ethical significance.

AkuManiMani said:
Clearly there is more to ethical concerns than mere complexity.

Clearly you haven't thought this through.

I've thought this through quite exhaustively. You are simply missing the point.
 
Required.

You really don't know anything about this topic, do you?

So, if we consider your example of getting out of a chair, you're stating that the number of processing functions undertaken by the organism could not be reduced whilst maintaining parity on a behavioural level? There are no possible short cuts?

Nick
 
Irrelevant, untrue, and a logical fallacy. Good work!

Which bit fits with which? Let me know and we can take a look. Let's see if it's up to the standard of your usual snap judgments. Maybe you'll get 1 out of 3 this time!



One reason is that, as with the teletransporter dilemma, it is logically hard to assert that anything has been lost should a human be deconstituted and reconstituted. Yet, even amongst Strong AI proponents, you won't find many who would be willing to make the trip.

Nick
 
...There is such a disparity between how our brains typically conceive of self and how they actually manifest self that human culture inevitably exists on a continual existential precipice.
I thought this whole thread was concerned with the problem of establishing how our brains conceive of self, and here you are claiming to be able to measure it against its apparent manifestation, and expounding on its cultural implications... wouldn't it be wise first to establish how our brains typically conceive of self before trying to make such comparisons?

Value systems which reflect evolution-acquired biological needs will have to adapt to the reality of our computational nature.
No adaptation required. The reality of our computational nature has evolved as we evolved and our value systems are a product of that evolution. Perhaps you meant they will need to adapt to our realisation of our computational nature?
 
My point was that cognitive complexity ["complexity of consciousness" or w/e you wanna call it] is hardly relevant in ethical or moral considerations. What qualifies a subject as being worthy of moral consideration is whether or not it has the capacity to experience suffering -- or to experience anything in a qualitative manner at all. If one wanted to argue for ethics on the basis of "complexity of consciousness" then the life of a person of average intelligence would easily trump that of someone who is mentally handicapped.
If that is your point (and I think it's a good one), then why not just state it instead of arguing around a straw-man misrepresentation of the OP? If it's accidental, it's careless; if not, it's underhand.
 
So, if we consider your example of getting out of a chair, you're stating that the number of processing functions undertaken by the organism could not be reduced whilst maintaining parity on a behavioural level? There are no possible short cuts?

Nick

No, not at all.

I am saying that you can only reduce the number of functions so far before you start to cut in on what contributes to the conscious behavior.

For example, you could most likely safely discard most cellular control functions that don't impact immediate muscle or neuron responses. That gets rid of a ton right there.

But what you are necessarily left with is still orders of magnitude more than what a 386 can handle. I know this because procedural skeletal mesh control in games is actually one of my specialties.

Just handling the kinematics of a two-joint limb doing something as simple as resting on an object is a pretty hefty amount of computation, and that is restricted to something like 30 frames per second anyway, not continuous time.

What you have to understand, however, is that the computing paradigm used on a 386 is very different from a biological neural network. You might be able to argue that if all the transistors in a 386 were used to construct a dedicated neural network then maybe such a thing could control a limb with as many joints as a human arm in a fairly human-like fashion. But that isn't what we have -- if you want to use a neural network you have to emulate it using the serial execution of an x86 processor, which results in a lot of wasted computation. So regardless of the method used on a 386, it just doesn't have the raw FLOPS required.
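
To give a rough sense of scale, here's a minimal sketch (my own illustrative Python, not production code; the function name and link lengths are made up) of just the closed-form inverse kinematics for a planar two-joint limb. Even this simplest analytic case burns several transcendental evaluations per update, and a real limb with more joints, constraints and contact handling needs vastly more:

```python
import math

def two_link_ik(x, y, l1=0.3, l2=0.25):
    """Analytic inverse kinematics for a planar two-joint limb.

    Given a target (x, y) for the 'hand', return shoulder and elbow
    angles (radians) that place it there.
    """
    d2 = x * x + y * y
    # Law of cosines for the elbow angle; clamp to guard against rounding.
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    cos_elbow = max(-1.0, min(1.0, cos_elbow))
    elbow = math.acos(cos_elbow)
    # Shoulder angle: direction to target minus the offset from the bent elbow.
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow

# Resting a hand on an object at ~30 updates per second means re-solving
# this (plus collision, balance, muscle activation...) every single frame.
print(two_link_ik(0.4, 0.1))
```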
 
(ETA: Also, keep in mind that I'm fairly skilled at OO programming, which corrupts the brain--because it pushes me into a crippled condition of thought whereby I'm forced to admit such ludicrous things as that circles are not ellipses).
Can't you fake it with a common superclass (ellipsoid?) that only implemented common methods/functions...?

Sorry, a bit off topic :rolleyes:
 
If that is your point (and I think it's a good one), then why not just state it instead of arguing around a straw-man misrepresentation of the OP? If it's accidental, it's careless, if not, it's underhand.

Well, my comments weren't directly referring to the topic of the OP; I've pretty much had my say on that a few dozen pages back. I was indirectly addressing Pixy's position on what constitutes a sufficient description of consciousness. In his view, reflexive processing is not only a necessary requisite of consciousness, it is consciousness. A simple feedback device [like a thermostat] is 'aware' by his definition; add another regulatory feedback system on top of an 'aware' system and -- presto -- it's 'conscious'.

When it was pointed out to him by me and others that this definition doesn't address or explain qualia [i.e. subjective experiences], which are the hallmark of consciousness, he just responded with his stock error messages: "Wrong", "Nonsense", or everyone's personal favorite "Irrelevant". When pressed further on the issue he simply filibusters. Apparently, certain concepts do not compute with him and conscious experience is one of them. I'm simply trying another means of helping him understand what is being discussed here.
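
To make the architecture concrete, here's a toy sketch (my own illustrative Python; the classes and names are mine, not Pixy's) of what that definition amounts to: one feedback loop regulating temperature, and a second loop regulating the first. Notice that nothing in it says, or even hints at, what it would be like to be this system -- which is exactly the gap I keep pointing at:

```python
class Thermostat:
    """First-order feedback: senses temperature, acts to reduce error."""
    def __init__(self, setpoint):
        self.setpoint = setpoint
        self.heater_on = False

    def step(self, temperature):
        self.heater_on = temperature < self.setpoint
        return self.heater_on


class SetpointRegulator:
    """Second-order feedback: monitors the first loop and adjusts its goal."""
    def __init__(self, thermostat, comfort=21.0):
        self.thermostat = thermostat
        self.comfort = comfort

    def step(self, temperature):
        # Nudge the inner loop's setpoint toward the comfort target.
        error = self.comfort - self.thermostat.setpoint
        self.thermostat.setpoint += 0.1 * error
        return self.thermostat.step(temperature)


room = Thermostat(setpoint=18.0)
meta = SetpointRegulator(room)
for t in (17.0, 19.0, 22.0):
    print(t, meta.step(t), round(room.setpoint, 2))
```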
 
I thought this whole thread was concerned with the problem of establishing how our brains conceive of self, and here you are claiming to be able to measure it against its apparent manifestation, and expounding on its cultural implications... wouldn't it be wise first to establish how our brains typically conceive of self before trying to make such comparisons?

There's not hard neurological agreement, to say the least. But, if Strong AI is correct, then we know it sure ain't like it seems. Once you're examining below the level of the whole organism there can be none of these dualistic notions that so characterise our whole-organism life. There's no experiencer, there's no experience. There's no "I." It's really just a machine churning out representations, observed by no one, and simultaneously overlaying onto them a narrative, a "voiceover" - a story of this "I", listened to by no one.

So, we don't know the hard neurology, but I think this much is pretty clear if the computational theory is correct. All perceived duality must stop as soon as you get below the level of the whole organism.

No adaptation required. The reality of our computational nature has evolved as we evolved and our value systems are a product of that evolution. Perhaps you meant they will need to adapt to our realisation of our computational nature ?

If Strong AI is correct then we've always been computational, it's just that we've not been aware of it previously. The computer learned to delude itself presumably because that was evolutionarily favoured. Selfhood is evolutionarily favoured. If you're fighting for survival, having an overtly dualistic vision of the world is evolutionarily favoured.

The brain will/may have to adapt to its realisation of its computational nature, yes.

Nick
 
No, not at all.

I am saying that you can only reduce the number of functions so far before you start to cut in on what contributes to the conscious behavior.

For example, you could most likely safely discard most cellular control functions that don't impact immediate muscle or neuron responses. That gets rid of a ton right there.

But what you are necessarily left with is still orders of magnitude more than what a 386 can handle. I know this because procedural skeletal mesh control in games is actually one of my specialties.

Just handling the kinematics of a two-joint limb doing something as simple as resting on an object is a pretty hefty amount of computation, and that is restricted to something like 30 frames per second anyway, not continuous time.

What you have to understand, however, is that the computing paradigm used on a 386 is very different from a biological neural network. You might be able to argue that if all the transistors in a 386 were used to construct a dedicated neural network then maybe such a thing could control a limb with as many joints as a human arm in a fairly human-like fashion. But that isn't what we have -- if you want to use a neural network you have to emulate it using the serial execution of an x86 processor, which results in a lot of wasted computation. So regardless of the method used on a 386, it just doesn't have the raw FLOPS required.

Yes, fair enough. I do appreciate your point. My point originally was just to show that, although we have bucketloads of processing capacity, we don't necessarily make use of it. Consequently, in assigning value to consciousness, should this be based around what the being could do, or what it actually does do?

Computer programs, I imagine, are written with efficiency in mind. Natural selection doesn't always go that way.

Nick
 
Well, my comments weren't directly referring to the topic of the OP; I've pretty much had my say on that a few dozen pages back. I was indirectly addressing Pixy's position on what constitutes a sufficient description of consciousness. In his view, reflexive processing is not only a necessary requisite of consciousness, it is consciousness. A simple feedback device [like a thermostat] is 'aware' by his definition; add another regulatory feedback system on top of an 'aware' system and -- presto -- it's 'conscious'.

When it was pointed out to him by me and others that this definition doesn't address or explain qualia [i.e. subjective experiences], which are the hallmark of consciousness, he just responded with his stock error messages: "Wrong", "Nonsense", or everyone's personal favorite "Irrelevant". When pressed further on the issue he simply filibusters. Apparently, certain concepts do not compute with him and conscious experience is one of them. I'm simply trying another means of helping him understand what is being discussed here.

Well, qualia aren't all that popular a phenomenon around the Strong AI scene. The basic position is that the quale is conceptually erroneous. It's a concept that reinforces a viewpoint that is invalid - it's not what happens.

To get from this "erroneous" viewpoint to Strong AI can be quite a journey, and different approaches in trying to help someone along this journey have arisen. Pixy's seems to be what might be termed the "short, sharp shock" approach. I'll try to be a little more accommodating...

Subjectivity relies on the notion of there being a self that is experiencing. At the whole-organism level this might be an entirely valid notion, but beneath this level what would it look like? Would it be like Descartes' homunculus sitting in the pineal gland enjoying the show? This for most is unquestionably how it feels, but it's not so appealing to the modern, scientifically minded individual. No little men have been found in post mortems, and even if there were, this would only leave infinite regress issues.

The Strong AI or Computational approach to this HPC (subjectivity) is to assert that actually there is no observer, no experiencer, beneath the level of the whole organism or whole brain. In Global Neuronal Workspace Theory (GWT) for example, likely the dominant neurological model amongst professionals in this field, consciousness simply is the stage of the theatre. But there is no one watching, no homunculus. What is present in consciousness is simply that which is present in a vast network of neural connections that simultaneously feeds the same information to a host of unconscious neural modules. That's it, end of story! There are no other stages in the process.
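
If it helps to make the broadcast idea concrete, here's a crude toy sketch (my own illustrative Python, not a model from the GWT literature): a single shared workspace whose contents are simply pushed to every registered module, with no further observer anywhere in the loop:

```python
class Workspace:
    """Toy global-workspace broadcast: whatever gains access to the
    workspace is sent, unchanged, to every registered module.
    There is no further stage and no one 'watching' the workspace."""
    def __init__(self):
        self.modules = []

    def register(self, module):
        self.modules.append(module)

    def broadcast(self, content):
        for module in self.modules:
            module(content)


# Unconscious specialist modules, each receiving the same content.
workspace = Workspace()
workspace.register(lambda c: print("speech planning got:", c))
workspace.register(lambda c: print("memory got:", c))
workspace.register(lambda c: print("motor control got:", c))

workspace.broadcast("red mug on the left")
```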

I could go on, but I don't know if this helps, or whether you know all this stuff already, so I'll leave it there for now. Feel free to challenge/ask questions. We're almost back with the OP now!

Nick

eta: Further reading - the excellent Are We Explaining Consciousness Yet? by Dan Dennett.
 
Qualia is, like mind, an incoherent word lacking precision.

Nick227, when you talk about the computing of consciousness, you do realise that the visual processing it takes to 'mindlessly' watch TV involves billions and billions of neurons in active states?
 
Can't you fake it with a common superclass (ellipsoid?) that only implemented common methods/functions...?

Sorry, a bit off topic :rolleyes:

An ellipsoid is a solid, so no, but the general point is sound. The problem is that a circle is a special case of an ellipse, but requires fewer parameters to define it. This might be critical in the case of a program storing vast numbers of objects - if each circle carried redundant data.
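
A minimal sketch of the kind of compromise being suggested (illustrative Python; the class names are mine): a common base class exposes the shared interface, while the circle stores only a single radius, so no instance carries a redundant second axis:

```python
import math

class Conic:
    """Shared interface: anything with two semi-axes can report its area."""
    @property
    def semi_major(self):
        raise NotImplementedError

    @property
    def semi_minor(self):
        raise NotImplementedError

    def area(self):
        return math.pi * self.semi_major * self.semi_minor


class Ellipse(Conic):
    def __init__(self, a, b):
        self._a, self._b = a, b

    @property
    def semi_major(self):
        return self._a

    @property
    def semi_minor(self):
        return self._b


class Circle(Conic):
    """Stores a single radius: no redundant parameter per instance."""
    def __init__(self, r):
        self._r = r

    @property
    def semi_major(self):
        return self._r

    @property
    def semi_minor(self):
        return self._r


print(Ellipse(3.0, 2.0).area(), Circle(2.0).area())
```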

There's no "correct" answer to this. A more accurate representation of reality might not be the best way to access data.

The relevance to this thread is that computer programmers' view of the world might not be correct.
 
Qualia is like mind, an incoherent word lacking precision.

How would you define the phenomenon? Or would you simply ignore it because of the lack of a sufficiently precise definition, and assume that whatever is precisely defined is therefore both accurately described and complete?
 
Well, qualia aren't all that popular a phenomenon around the Strong AI scene. The basic position is that the quale is conceptually erroneous. It's a concept that reinforces a viewpoint that is invalid - it's not what happens.

I find it an absurdity that qualia can simply be dismissed as conceptually erroneous. All the people making this argument presumably experience qualia - and yet they seem to regard them as some kind of guilty secret; a shameful betrayal of their materialist principles.

If there's no precise definition of qualia, then the first task should be to find such a definition.
 
I find it an absurdity that qualia can simply be dismissed as conceptually erroneous. All the people making this argument presumably experience qualia - and yet they seem to regard them as some kind of guilty secret; a shameful betrayal of their materialist principles.

If there's no precise definition of qualia, then the first task should be to find such a definition.

There's no precise definition of "consciousness," never mind "qualia!" Pixy has his from AI, but for most of the discussions that take place here it's inadequate.

As I see it, you have to live with and come to understand the grey areas here. This is what I mostly miss from the AI'ers. Some seem completely unable to function without an absolutely rigid definition. It's not needed. Read Blackmore. Read Dennett. Read other consciousness researchers. They all admit that defining terms is an issue, yet they still write perfectly coherent books articulating the arguments from the various different perspectives. Lack of definition is not definitively a problem.

What do you understand by the term "qualia"? If it's "the bit left over which processing can't explain" then the computational theory of consciousness is inevitably going to be a struggle. Maybe it's something else.

Nick
 
What do you understand by the term "qualia"? If it's "the bit left over which processing can't explain" then the computational theory of consciousness is inevitably going to be a struggle. Maybe it's something else.

Nick

"Qualia" means what we subjectively experience. Calling them qualia doesn't have any particular implications. It's just a name for what everyone reading this experiences. If that experience rules out certain analyses of how consciousness works, then it's bad science to discard the data.
 
