The Hard Problem of Gravity

If something is not part of a simulation, then obviously that simulation can't affect it. How stupid do you think we are?

Please, don't tempt me...

Do you honestly think that is what we are claiming, that a simulated car could run you over? This discussion is really trying my patience...

As always, Rocketdodger concedes the point gracefully.

Now, do you accept that if a physical process produces consciousness, it cannot, in principle, be emulated by a digital simulation? Or is it just cars?
 
Why? What would that prove?

Identifying an input as negative feedback does not stop that input from being processed.

Nor has anyone claimed it would be.

So is it fair to conclude that you could hurt a machine? And I am not talking about humans here.
 
Feelings have been well described, IMO, as the "executors of evolutionary logic," and for me there is essentially a logic to them. Thus, the evocation of a feeling and the behavioural response to it are likely analogous to information processing, if you ask me.
I would agree that behaviour is likely information processing, but just as there is a logic to feeling, there is also the experience, which, if you ask me, does not translate into logic.

Of course, this does not mean that one can easily get a computer to feel pain.
I know! This is maddening!

But here you also have to consider that computers haven't had a few billion years of evolution to come into being.

These things don't make the HPC invalid, but they do provide avenues for research that may one day do so.

Nick
You can evolve little organisms in a computer and accelerate the simulation to the max. You should get there soon!
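The skeleton of such a run is trivial. Here is a toy evolutionary loop in Python (the bit-string "organisms" and the one-line fitness function are made up for illustration; a real artificial-life simulation would be far richer):

```python
import random

GENOME_LEN = 32
POP_SIZE = 100
MUTATION_RATE = 0.01

def fitness(genome):
    # Toy fitness: count the 1-bits. A real artificial-life run would
    # score behaviour in a simulated environment instead.
    return sum(genome)

def mutate(genome):
    # Flip each bit independently with a small probability.
    return [bit ^ (random.random() < MUTATION_RATE) for bit in genome]

def crossover(a, b):
    # Single-point crossover between two parent genomes.
    cut = random.randrange(GENOME_LEN)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(1000):
    # Keep the fitter half as parents, breed and mutate the next generation.
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]
    population = [mutate(crossover(random.choice(parents),
                                   random.choice(parents)))
                  for _ in range(POP_SIZE)]

print(max(fitness(g) for g in population))
```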
 
OK Chris, I'll tell you what the difference is. When I feel pain, I do not have to convince myself that I do. I might not be able to show you my pain, but I feel it, and that is enough to convince me it is real.

So, because you don't know what a complex machine would or could "feel" at a given moment, because it doesn't show you, it has no feelings. But you claim to have some, despite the fact that you can't show me yours either?

Sweet smell of hypocrisy ...

However, the difference between me and a machine is that I can have a look at the machine and how it processes information. When I look at it, I can try to see whether there is a process in there I could interpret as reproducing a feeling. After all, you say the processing of information is all there is, so it is fair to approach the problem this way!

If I am just a machine, then a machine should be able, through information processing, to convey a feeling I could identify with my own. The problem is, every time I looked, there was nothing of the sort. Hence, I do not have to assume that machines are capable of it, whereas I can assume my feelings are real. Here is the difference!

Yes, processing of information is all there is to it. There isn't anything more.

Oh, and you think you can look into a computer at any given moment and see what it is doing? And you can see how it came to this state? Are you seriously saying that? If so, I'd recommend you take a close look at things like ANNs, self-modifying code, data structures, etc.
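To make the opacity point concrete, here is a minimal sketch of a single ANN layer (random untrained weights, sizes invented for illustration). Every intermediate number is fully "visible," and none of them means anything on its own:

```python
import math
import random

# A tiny feed-forward layer: 4 inputs -> 3 hidden units.
weights = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]

def layer(inputs):
    # Each activation is a weighted sum squashed through a sigmoid.
    return [1 / (1 + math.exp(-sum(w * x for w, x in zip(row, inputs))))
            for row in weights]

print(weights)                       # you can "look inside" all you want...
print(layer([0.2, 0.9, 0.1, 0.5]))   # ...the numbers still explain nothing
```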

And while you are out looking, also take a look at stuff like this, this, this and of course this.

So much for not being able to "see" feelings. Then look again and tell me that there is still "nothing of the sort". If you still don't see anything, I'd recommend you get a fresh pair of spectacles; your old ones seem to be broken.

Also, be careful not to confuse a program with your understanding of it. It is not because you understand a program that putting it in the computer will transfer that understanding to the computer, any more than putting the vegetables in the pot transfers the knowledge of the recipe to the pot.

You'd better be careful not to assume that every program is static and fixed, unable to change without human intervention. Also be careful not to think that program code alone defines the working of a program. There is also the data it works upon, you know. Depending on that data, it behaves differently. And it can change that very data on its own, which would result in different behavior on the next run of a given code fragment.
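A minimal sketch of that last point, assuming a made-up state file and threshold: the same code behaves differently on each run, because it rewrites its own data between runs, with no human intervening.

```python
import json
import os

STATE_FILE = "state.json"  # hypothetical persistent data store

# Load whatever data the previous run left behind.
state = {"runs": 0, "threshold": 10}
if os.path.exists(STATE_FILE):
    with open(STATE_FILE) as f:
        state = json.load(f)

# The same code behaves differently depending on that data...
if state["runs"] >= state["threshold"]:
    print("behaviour B")
else:
    print("behaviour A")

# ...and the program changes the data on its own, so the next
# run of this very code fragment will act differently.
state["runs"] += 1
with open(STATE_FILE, "w") as f:
    json.dump(state, f)
```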

Your "vegetables in a pot" comparison is utterly flawed. The equivalent would be to throw a bunch of chips, some random PCB material and some solder into some metal container and claim it's a computer. You need a recipe to make a soup out of vegs, same as you need a "recipe" to make a computer out of a bunch of components. Same as you need a bunch of cells and other bio-matter to make a human. Your soup ends there, at the throwing-in and cooking stage. Humans and computers just start at that point, with humans learning and computers executing code.

Do better next time.

Greetings,

Chris
 
Blah blah blah! Why don't you point out what is so different from ordinary algorithmics in these books that would make it so obvious that machines are capable of experiencing?

Also, don't worry: I have done enough formal logic and algorithmics at university to understand these books.

Do you know what reasoning is, in formal logical terms? Are you familiar with the various types of reasoning algorithms that have been developed (resolution and chaining, Bayesian networks, neural networks)? Do you know how a neural network functions?

If you can answer yes to all of the above, then we can continue. If not, then go away n00b.
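To be concrete about the chaining part, here is a minimal forward-chaining sketch in Python (the rules and facts are invented for illustration, not drawn from any particular textbook system): rules fire whenever all their premises are known, and the derived facts feed back in as input until nothing new can be concluded.

```python
# Rules: if all premises hold, conclude the consequent.
rules = [({"rain", "outside"}, "wet"),
         ({"wet"}, "cold"),
         ({"cold"}, "shivering")]

facts = {"rain", "outside"}

# Forward chaining: keep firing rules until no new facts appear.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # {'rain', 'outside', 'wet', 'cold', 'shivering'}
```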

Blah, blah, blah. I don't need to introspect. When I am hurt, I am hurt! Apparently, Buddhist monks can do what nick227 suggested when they meditate; why don't you study their books, for the sake of honesty?

I have. I used to meditate all the time. Nick doesn't know what he is talking about -- meditation is about concentration and focus. If anything, the pinnacle of meditation is a state where one is entirely focused on a single thought. You become so focused on something that you lose your sense of self. That is in fact the total opposite of this thoughtless-state nonsense that Nick made up.
 
I meant: I don't need to introspect when I am hurt. I understand the feeling straightaway. Obviously, you typically cut the part you did not want to understand. A bit like you did with the HPC...

That isn't what I meant.

I meant that next time you are hurt, you should try to critically evaluate what you are thinking and how you are behaving. Or, if the pain is too much, do it afterward, remembering how you felt. That is called introspection.

And when you do, you will learn that pain isn't "just a feeling that can't be explained"; it is a number of behaviors and thoughts that all contribute to an overall experience you label "pain."
 
Now, do you accept that if a physical process produces consciousness, it cannot, in principle, be emulated by a digital simulation?

No, I do not.

What I accept is that it could not be emulated by a digital simulation within our current system. That is a very important distinction.

A car simulated by our own system -- by a computer, for instance -- cannot run you over. But if we are in a simulation, then a car that is part of this same simulation can certainly run you over. When you say "simulated" you seem to only be thinking of the former context. That is a fallacy.

So your argument here only holds water if you are asserting that consciousness, for some reason, is only able to occur at some arbitrary level of nested simulations. If you want you can say it must be level 0, but then it is very possible (some even say probable) that we are not conscious according to you.

But I don't see any advantage to doing that. I say call a spade a spade -- I act conscious, I think I am conscious, so I am conscious, regardless of whether this is a simulation or not.
 
Evasion noted. You could perfectly well have posted a link to where "self-referential information processing" was unequivocally defined. You could have just reposted the definition.

But then why would I? You've rejected the ones given, so why should I even bother? Hell, not only do you reject them, you then say they weren't given.

The problem with the Strong AI idea of information processing is that it takes an arbitrary subset of the physical concept of information, and then uses handwaving to justify the restriction.

Come to think of it, since we actually become aware of our decisions AFTER making them, I'm not sure the HPC, even if true, means what you think it means.
 
You sound just like a theist.

Of course, when robots run twice as fast as humans, people like you are going to say "well, they run too perfectly -- look at the way Usain Bolt runs -- so no, you haven't invalidated what I said."

What a joke.

What I find frustrating about Westprog, Nick and the others here who've argued on the HPC side, though I respect their opinions per se, is that no matter how complex or convincing a machine we could make, they'd never accept it as conscious, either because they consider that only humans can be conscious, or because they think consciousness is the sole realm of the biological.

The fact that every biological action can be replicated artificially is irrelevant to them. Consciousness is "special".
 
Belz, I know darn well that genes are not the same as chromosomes -- which is why I used those terms as examples. Experiences are to qualia what chromosomes are to genes [though it seems codons would be more directly analogous to qualia]. Get it?

So are you now saying that experiences are made up of qualia? And, if so, what the hell are qualia, again?

With that said, I would like you to elaborate: why is the concept of emergence necessarily "ridiculous"?

Let me help you with that boxful of straw.

I didn't say there are no "emergent" properties, but the properties are still part of the constituents of the whole. They don't appear from thin air, and the whole is STILL the exact sum of its parts.

You're going to have to define what you mean by 'special'.

As in "special pleading". The human mind is immune to the sort of reasoning that made us understand flight, for instance.

Exactly. Thoughts about thoughts. Feelings about feelings.

Turtles about turtles.

Those are all examples of conscious self-referential processing -- what we call introspection. It seems that primates, especially humans, possess this capacity to a greater degree than other animals.

Did you just agree with Pixy about the definition of consciousness?

So, in answer to your question, yes. To be introspectively conscious of one's own qualia necessarily generates a corresponding qualitative experience.

How are you conscious of qualia if qualia make up experiences? It's like asking the program to be aware of the code. It isn't.
 
OK Chris, I'll tell you what the difference is. When I feel pain, I do not have to convince myself that I do. I might not be able to show you my pain, but I feel it, and that is enough to convince me it is real.

However, the difference between me and a machine is that I can have a look at the machine and how it processes information. When I look at it, I can try to see whether there is a process in there I could interpret as reproducing a feeling.

But the machine can't look at its own functioning any more than you can look at your own. You're (seemingly) using the fact that we can tell how a machine works as some sort of meaningful fact.
 
I think it's important here to distinguish between what is "normal" Strong AI, and Pixy's assertions regarding "self-referencing information processing."
You'll need to find a difference first.

AFAIA, no one other than Pixy maintains that consciousness actually is self-referencing information processing
Read Hofstadter.

likely for the simple reason that many aspects of consciousness quite obviously aren't.
Name one.

Sensory information isn't.
Sensory information is an aspect of consciousness? It's just information, Nick. It says so right there in the name.

That which enters awareness may be directed to be there by a self-referencing system - a cortico-thalamic loop or similar - but the actual phenomenal awareness itself is not innately self-referencing.
Of course not. But entirely irrelevant, because awareness and consciousness are two different things.
 
It seems to me that there are two camps within this thread: those who grant consciousness nothing but information processing, and those who posit that there is something else to it, something they cannot describe, which is the cause of experience. I am going to address the mechanists.
How can you claim that something exists when you can't even describe it?

The reason I am in the opposite camp is that I cannot put what I feel into terms of information processing.
Why not?

The fact that you assume that it's in the program
I assume nothing. The program does what it does.

A computer is just a bunch of transistors that take 0 or 1 as values, a bus and a CPU.
And can compute any function that is computable.

This is all there is before running friendbot, this is all there is while friendbot is running and this is all there is when it has finished.
So?

How people can believe that meaning or feeling or experience miraculously appears within the box just because a few transistors change value is beyond me.
Meaning, feeling, and experience are not objects. They are processes. Informational processes.

This is especially true when I know that I myself am unable to translate experience into terms that would allow me to emulate it on a computer or convey it to somebody else. And no, the fact that I cannot show my feeling to you does not mean that I am hanging on to something that is not real
Yes, that's precisely what it means.

it just means that it is problematic, and this is where the HPC comes in.
HPC is specifically defined so as to be logically incoherent. Chalmers is a dualist - i.e. his worldview is logically incoherent, but given that worldview there is no further contradiction in HPC itself. Given any rational worldview, HPC is self-contradictory.

You people can deny it all you want, since this is the premise on which you come to believe that self-referential information processing is the key to your theory. Well, I am sorry, but self-referential information processing is nothing but... information processing, and I do not have to assume that it experiences unless that is a proven fact.
If you disagree, then simply tell us what it is about experience that isn't explained by self-referential information processing.

However, I know you cannot prove that, because even I could not prove my feelings to you... So, to simplify the matter, you just prefer to deny that there is anything to prove, and hide behind that.
No. Quite the opposite.

You claim there is something more. You cannot demonstrate this, cannot define it, cannot describe it, but you are quite insistent that it is real.

The difference between us is that I refuse to believe that my feeling is information processing unless I can understand it in these terms, which I don't.
What part don't you understand?

But you guys are readily assuming that I should not be worried about it.
We're not assuming anything.

Well, I am sorry but this is dishonest unless you can write me a line of code which for example would make the computer experience pain.
Friendship, joy, anger and sadness are not enough?

If we are machines and all we do is process information, then it must be possible. So show me!
Of course it's possible. It's easy. But you will refuse to recognise it for what it is.
 
There's also a camp that posits that consciousness is a form of information processing that simply hasn't been defined.
Self-referential information processing.

Based on current scientific understanding, all processes are inherently informational processes.
As has been pointed out repeatedly, this is simply not true.

Simply stating that consciousness is 'information processing' tells us nothing.
Self-referential information processing. And it tells us a great deal.

[Oh, and FYI, don't expect much in the way of a reasoned response from Pixy. I have a sneaking suspicion that he's simply a chat-bot programmed to argue for the strong AI position >_>]
If that were true, it would rather prove my point, wouldn't it?
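For what it's worth, the loosest possible sketch of self-referential information processing fits in a few lines: a process whose input at each step includes a record of its own prior processing. Purely illustrative -- nobody is claiming these lines are conscious:

```python
history = []  # the process's record of its own activity

def step(stimulus):
    # The input to each step includes the process's own prior states:
    # it processes information about its own information processing.
    reflection = f"last={history[-1]}" if history else "last=None"
    output = f"reacting to {stimulus} while noting {reflection}"
    history.append(output)
    return output

for s in ["ping", "pong", "ping"]:
    print(step(s))
```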
 
Sometimes I think that if there were intelligences far superior to us in terms of complexity, especially in terms of potential behavioural complexity, they might have an analogous argument about whether humans really are conscious (in the same way we currently talk about less complex machines).

Heh: turtles, ever more turtles!
 
Honest introspection of your own thoughts should reveal this, unless you are a woo like Nick227 who insists people can be conscious without thinking at all.

Your and Pixy's position relies, AFAICT, on inner dialogue being present as a prerequisite for consciousness. Yet thoughts come and go, while if I keep my eyes open the monitor is always present.

I think only someone whose mind is endlessly churning out thoughts, apparently without any break in between, could come to believe such a thing.

Nick
 
You'll need to find a difference first.

Well, Dennett's version of Strong AI makes no mention of self-referencing being needed for consciousness. AFAIK, only yours does.

Again Pixy, you have not read the actual literature. You have not read Consciousness Explained so you do not actually understand what an alternative Strong AI position is.

Read Hofstadter.

I have. Hofstadter is not writing about sensory consciousness. When he asserts "I am a strange loop" he is talking about narrative selfhood. He's writing about the "I." This has nothing to do with sensory consciousness. You are simply projecting your own theory onto someone else's work.

Name one.

Seeing, hearing, tasting, touching, smelling. There you go, there's 5.

Nick
 