The Hard Problem of Gravity

So, because you don't know what a complex machine would or could "feel" at a given moment, because it doesn't show you, it has no feelings. But you argue that you have some, despite the fact that you can't show me yours either?

Sweet smell of hypocrisy ...
The only person I have to convince is myself. Furthermore I had a look at all the information that the machine was processing and could see nothing like a feeling. Hence I do not have to assume it's there. Doing otherwise is madness. If I follow your reasoning anything can be made conscious provided it has mechanistic abilities. Is that more reasonable?

Yes, processing of information is all there is to it. There isn't anything more.
You will need more than an assertion to convince me.

Oh, and you think you can look into a computer at any given moment and see what it is doing? And you can see how it came to this state? Are you seriously saying that? If so, I'd recommend you take a deep look at stuff like ANNs, self-modifying code, data structures, etc.
Absolutely! If you know the state of the computer at the start of any simulation, and all its data inputs and their timing to the precision of the computer clock, then you know all the outputs! Completely deterministic! You are the one who should reread your books.
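
The determinism claim can be sketched in a few lines. This is a toy machine of my own invention, not a model of any real computer: given the same starting state and the same input sequence, it cannot help producing the same outputs.

```python
# Toy illustration of the determinism claim: a hypothetical machine whose
# next state and output depend only on its current state and its input.
# Identical starting conditions plus identical inputs give identical runs.

def run_machine(initial_state, inputs):
    """Step a trivial accumulator machine over a sequence of inputs."""
    state = initial_state
    outputs = []
    for value in inputs:
        state = (state + value) % 256   # deterministic transition rule
        outputs.append(state)           # output is fully fixed by the state
    return outputs

inputs = [3, 7, 250, 1]
run_a = run_machine(0, inputs)
run_b = run_machine(0, inputs)  # identical starting conditions
assert run_a == run_b           # same inputs, same outputs, every time
```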

And while you are out looking, also take a look at stuff like this, this, this and of course this.

So much for not being able to "see" feelings. And then look again and tell me that there still is "nothing of this sort". If you still don't see anything, I'd recommend you get a fresh pair of spectacles; your old ones seem to be broken.
LOL! You big silly! This is what it's all about! Of course the brain is a machine, albeit one we have not yet completely understood. But if I assume that it is all processing of information, then any machine could take that function. Hence from my perspective it is safe to assume it is not just processing of information, since I have never seen any information about feeling. Hence dualism and the HPC argument.

You had better be careful not to assume that every program is static and fixed, unable to change without human intervention. Also be careful not to think that only program code defines the working of a program. There is also the data it works upon, you know. Depending on that data it behaves differently. And it can change that very data on its own, which would result in different behavior on the next run of a certain code fragment.
Exactly, if you know all inputs, the code and the timing, then this is all deterministic. Nothing new there. We have discussed that already. The fact is, if it were not deterministic then we could not use them. Is there anything in between deterministic and non-deterministic? Of course not!
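
The earlier point about a program rewriting its own working data can be sketched as a toy (my own construction; real self-modifying code rewrites instructions, while here only the working data is rewritten): the same code fragment behaves differently on a later run because it changed the data it depends on, while remaining perfectly deterministic.

```python
# Sketch of data-dependent behavior: the function below adjusts the data
# it consults, so the same call can react differently next time.
# Names and values are purely illustrative.

memory = {"threshold": 5}

def process(signal):
    """React to a signal, then adjust the stored threshold."""
    reacted = signal > memory["threshold"]
    # The program changes its own working data...
    memory["threshold"] = (memory["threshold"] + signal) // 2
    return reacted

print(process(10))  # True: 10 > 5; threshold is updated to 7
print(process(6))   # False: 6 > 7 fails now - same code, changed data
```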


Your "vegetables in a pot" comparison is utterly flawed. The equivalent would be to throw a bunch of chips, some random PCB material and some solder into a metal container and claim it's a computer. You need a recipe to make a soup out of vegetables, same as you need a "recipe" to make a computer out of a bunch of components. Same as you need a bunch of cells and other bio-matter to make a human. Your soup ends there, at the throwing-in and cooking stage. Humans and computers just start at that point, with humans learning and computers executing code.

Try better next time.

Greetings,

Chris

I will try better then. Here is an example:

Imagine a desk with a secret compartment. This desk belonged to an inventor who recently died, just after inventing the alarm clock and the digital calculator. He hid the alarm clock in the secret compartment, which nobody knows of, because he thought it was his best invention. He left the calculator in another drawer. Now, a mechanist happens to be by the desk when the alarm clock goes off. TUTTT! TUTTT! TUTTT! The alarm stops... "What was that noise? I have never heard anything like this before! It comes from the desk! Let's open it..." "Oh dear, there is this thing in here with all these buttons and operations and numbers," says the mechanist while looking at the calculator. Then he starts analyzing it and finds nothing that looks like it could make a noise. But since he could find nothing else, he fairly concludes that the noise was coming from the calculator, and that if he makes one himself then the noise is bound to happen again at some other time...

Madness!
 
Robin said:
But again, people keep mentioning the HPC without answering my question - what precisely is the HPC?
David Chalmers said:
"If any problem qualifies as the problem of consciousness it is this one...even when we have explained the performance of all the cognitive facilities and behavioural functions in the vicinity of experience - perceptual discrimination, categorisation, internal access, verbal report - there may still remain a further unanswered question: Why is the performance of these functions accompanied by experience? Why doesn't all this information processing go on 'in the dark,' free of any inner feel?" - Chalmers as quoted by Blackmore in Consciousness: An Introduction
The operative word in my question was "precisely".

This stuff from Chalmers is just the sort of vague and imprecise stuff I was talking about before. There are many "why" questions - why is there gravity? Why is there anything at all? Why do birds suddenly appear?

We don't know why this information processing does not go on 'in the dark' free of any inner feel.

But on the other hand we don't know any special reason why it should go on 'in the dark' free of any inner feel.

So it is not so much a "problem" as a question.
 
Easy, we define it that way. However, awareness of feelings comes before its definition.

You don't need a formal definition of a circle to be written down for the implications of the circle to assert themselves.

Next time you hurt your foot you just keep telling yourself that!

What if I felt the pain without any associated damage to the foot?

Is that "knowledge" of my foot being damaged?

What about phantom limb syndrome?

Honestly, you don't. But you are not going to convince me by denying me feelings anyway so what is your point?

It is simply an attempt to start one thinking about the frame of the problem. If you're going to wander about saying, "but I feel, and that machine over there doesn't!" then don't be surprised when the machine turns its back on you.
 
Imagine a desk with a secret compartment. This desk belonged to an inventor who recently died, just after inventing the alarm clock and the digital calculator. He hid the alarm clock in the secret compartment, which nobody knows of, because he thought it was his best invention. He left the calculator in another drawer. Now, a mechanist happens to be by the desk when the alarm clock goes off. TUTTT! TUTTT! TUTTT! The alarm stops... "What was that noise? I have never heard anything like this before! It comes from the desk! Let's open it..." "Oh dear, there is this thing in here with all these buttons and operations and numbers," says the mechanist while looking at the calculator. Then he starts analyzing it and finds nothing that looks like it could make a noise. But since he could find nothing else, he fairly concludes that the noise was coming from the calculator, and that if he makes one himself then the noise is bound to happen again at some other time...

But you of course presume every desk has a hidden alarm because yours does...
 
What I find frustrating about Westprog, Nick and the others here who've argued on the HPC side (though I respect their opinions per se) is that no matter how complex or convincing a machine we could make, they'd never accept it as conscious, either because they consider that only humans can be conscious, or because they think consciousness is the sole realm of the biological.

The fact that every biological action can be replicated artificially is irrelevant to them. Consciousness is "special".

What I find frustrating about Belz, Pixy, Rocketdodger et al., is that they continually refer to how people would react to a conscious machine, as if they had actually produced something that looked remotely like that. In spite of the total failure after forty years to produce something that could pass the Turing test, they act as if they'd actually done it.

And in spite of my insistence throughout this thread that consciousness should be investigated on a physical basis, they insist that I'm only interested in biology, or the soul, or some kind of dualism. In the meantime they promote their concept of "information processing" and have yet to define precisely what they mean by it.
 
Well, Dennett's version of Strong AI makes no mention of self-referencing being needed for consciousness. AFAIK, only yours does.
Wrong.

Again Pixy, you have not read the actual literature. You have not read Consciousness Explained so you do not actually understand what an alternative Strong AI position is.
So tell me.

I have. Hofstadter is not writing about sensory consciousness.
What's "sensory consciousness"? You mean sensory awareness?

When he asserts "I am a strange loop" he is talking about narrative selfhood.
That depends on what you mean by "narrative selfhood". As long as you understand that the narrative self requires no language, only symbols, then you are correct.

He's writing about the "I." This has nothing to do with sensory consciousness.
Again, what is this "sensory consciousness"? Do you perhaps mean simple awareness?

If so, then the reason Hofstadter isn't talking about that when he talks about strange loops - self-reference - is that you don't require even that for awareness. Where consciousness is simple, awareness is almost trivial. Dennett's whole point with the thermostat is that it is aware, but not conscious.
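
For what it's worth, the thermostat point can be rendered as a toy sketch (the class and names are illustrative only, not from Dennett): the device registers and reacts to its environment, the minimal sense of "awareness" used above, yet nothing in it refers to its own operation.

```python
# Toy thermostat: it is "aware" in the minimal sense of registering a
# state of the world and responding to it, but it holds no model of
# itself - no self-reference, no strange loop.

class Thermostat:
    def __init__(self, setpoint):
        self.setpoint = setpoint

    def react(self, temperature):
        # Registers and responds to the environment; nothing here refers
        # to the thermostat's own processing.
        if temperature < self.setpoint:
            return "heat on"
        return "heat off"

t = Thermostat(setpoint=20)
print(t.react(18))  # heat on
print(t.react(22))  # heat off
```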

You are simply projecting your own theory onto someone else's work.
It's not my theory.

Seeing, hearing, tasting, touching, smelling. There you go, there's 5.
Nick, those are senses. They have nothing to do with consciousness, except as sources of information. And in that, they are the same as any other source of information - the other senses, like proprioception; direct stimulation via electrical impulses; or coded information (language) via any of the senses. In many ways they are also no different from internally-generated information - thoughts, dreams, hallucination.

All you are doing here is dragging in random baggage and slapping "consciousness" stickers on it. That doesn't make it consciousness, or an aspect of consciousness, or in any way relevant to the discussion. Baggage with a sticker is still just baggage.
 
The operative word in my question was "precisely".

This stuff from Chalmers is just the sort of vague and imprecise stuff I was talking about before. There are many "why" questions - why is there gravity? Why is there anything at all? Why do birds suddenly appear?
To get to the other side!

We don't know why this information processing does not go on 'in the dark' free of any inner feel.
I'd say we do: Because once you have evolved a complex, self-correcting information-processing system, you will have introspection, and this 'inner feel' is just that.

We're seeing exactly this happen as computer programs become more complex.
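
A minimal sketch of what a "self-correcting system with introspection" might look like (entirely my own toy construction, not a model of the brain or of any theory above): the system adjusts itself toward its input, keeps a record of its own adjustments, and can report on that record rather than on the world.

```python
# A crude stand-in for introspection: alongside processing input, the
# system logs its own internal corrections, and a second method reads
# that log back - the system reporting on its own processing.

class SelfCorrectingFilter:
    def __init__(self):
        self.estimate = 0.0
        self.log = []  # internal record of its own corrections

    def step(self, reading):
        error = reading - self.estimate
        self.estimate += 0.5 * error  # correct itself toward the input
        self.log.append(error)        # note what it just did
        return self.estimate

    def introspect(self):
        # A report about the system's own state, not about the world.
        return f"last correction was {self.log[-1]:+.2f}"

f = SelfCorrectingFilter()
for reading in [10, 10, 10]:
    f.step(reading)
print(f.introspect())  # last correction was +2.50
```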

But on the other hand we don't know any special reason why it should go on 'in the dark' free of any inner feel.
Indeed, this appears to be logically impossible.
 
The only person I have to convince is myself. Furthermore I had a look at all the information that the machine was processing and could see nothing like a feeling. Hence I do not have to assume it's there. Doing otherwise is madness. If I follow your reasoning anything can be made conscious provided it has mechanistic abilities. Is that more reasonable?

So, on one hand you say that you need to be convinced that a complex computer running a complex program could have "feelings". But then you also say that it is only you that needs to be convinced that you yourself have feelings. That is just more hypocrisy from you. As long as you think you don't have to convince others that you have feelings, you are not in a position to demand convincing proof from anything or anybody else that it/she/he has feelings.

You cannot have your cake and eat it too.

You will need more than an assertion to convince me.

And you need more than a weak "I only have to convince myself of my feelings" argument to make me believe that you have feelings.

Absolutely! If you know the state of the computer at the start of any simulation, and all its data inputs and their timing to the precision of the computer clock, then you know all the outputs! Completely deterministic! You are the one who should reread your books.

LOL! You big silly! This is what it's all about! Of course the brain is a machine, albeit one we have not yet completely understood. But if I assume that it is all processing of information, then any machine could take that function. Hence from my perspective it is safe to assume it is not just processing of information, since I have never seen any information about feeling. Hence dualism and the HPC argument.

Again I smell a lot of hypocrisy here. On one hand you say that all you need, to see what a computer does at a given point in time, is to look at its construction, look at its code, and look at all its input data. On the other hand you say that a brain is indeed nothing more than a very complex computer, albeit not fully understood yet, and yet that one could never deduce its internal state in the same way, by looking at how exactly it is constructed, what it has learned, and what input data it has had so far?

If we fully understood how a brain works (like we fully understand how a computer works), and if we fully knew what knowledge and experiences it has gathered (like we know how the computer was programmed and what data it holds in its memory), and if we knew all the input it has had so far (like we know all the input the computer has had so far), then it is obvious that we could see what the person that brain belongs to thinks and feels at the moment of inspection. Same as we can see what a computer actually does at that point in time.

Again, there is just no difference.

Exactly, if you know all inputs, the code and the timing, then this is all deterministic. Nothing new there. We have discussed that already. The fact is, if it were not deterministic then we could not use them. Is there anything in between deterministic and non-deterministic? Of course not!

Same is true for a brain. If it couldn't produce deterministic, repeatable results for a given set of repeated inputs, it would be pretty much useless. Imagine your brain stopped making you breathe. Or stopped making your heart beat, just because it randomly discarded the signals and information telling it to do so. We would be dead in no time.

Or if the brain at one time correctly interpreted the sight of a hungry lion that wants to attack us and made us flee, and another time made us walk over and try to pet it. That would be just fatal.

You may want to talk to some people working in the field of psychology, and ask them what happens to a human when the brain stops working in a correct, deterministic way and starts to produce random behavior.

I will try better then. Here is an example:

Imagine a desk with a secret compartment. This desk belonged to an inventor who recently died, just after inventing the alarm clock and the digital calculator. He hid the alarm clock in the secret compartment, which nobody knows of, because he thought it was his best invention. He left the calculator in another drawer. Now, a mechanist happens to be by the desk when the alarm clock goes off. TUTTT! TUTTT! TUTTT! The alarm stops... "What was that noise? I have never heard anything like this before! It comes from the desk! Let's open it..." "Oh dear, there is this thing in here with all these buttons and operations and numbers," says the mechanist while looking at the calculator. Then he starts analyzing it and finds nothing that looks like it could make a noise. But since he could find nothing else, he fairly concludes that the noise was coming from the calculator, and that if he makes one himself then the noise is bound to happen again at some other time...

Madness!

I said you should try better, not worse. But you tried worse. Why? Your comparison assumes that there is a hidden part that can't be found. If I built a computer and put some hidden components in it, you would not be able to work out its behavior from the program and data that you know of.

If you want to introduce hidden things, then go ahead. But that misses the discussion at hand. If you assume that you know exactly how a computer works internally, what program it runs and what data it has, then you have to assume that one knows exactly how a brain works, what internal connections it has, and what input it has, to make a fair comparison.

Comparing apples to oranges won't work. But maybe you only want to move goalposts because you already found out that there really is no difference between a brain with all its thoughts, feelings, etc. and a computer with all its code, data, input, etc., and simply won't admit it, because then you would have to stand up and admit to having been wrong initially.

Seriously, your arguments sound pretty woo-ish by now. Do you really want to imply that there is some hidden thing in our brain that causes things like feelings? That a brain is more than just, well, a brain? If so, you also have to explain what that hidden thing is. If you can't, and you think it is enough to only convince yourself of it being so, then you are really arguing like a woo who says "I believe in ghosts. I have seen them, and I only need to know for myself that they are there."

Greetings,

Chris
 
The only person I have to convince is myself.

Well, you'll excuse us. Since humans are social animals we tend to like conversation.

Furthermore I had a look at all the information that the machine was processing and could see nothing like a feeling.

Most machines don't have it because we simply didn't put it there.

If I follow your reasoning anything can be made conscious provided it has mechanistic abilities. Is that more reasonable?

The argument is that consciousness is a function, one that we can replicate, and that it's not exclusive to humans or biological organisms.

Absolutely! If you know the state of the computer at the start of any simulation and all its data input and their timing with the precision of the computer clock. Then you know all the outputs! Completely deterministic! You are the one who should reread your books.

Same thing for humans, silly. No matter how you define "random", humans and computers behave in the same way.

But if I assume that it is all processing of information then any machine could take that function. Hence from my perspective it is safe to assume it is not just processing of information since I have never seen any information about feeling. Hence dualism and the HPC argument.

Whoa, slow down. So basically you're now saying that the reason you don't believe that consciousness is information processing is that some parts of YOUR consciousness can't be found in a machine? Whoever said machines had HUMAN consciousness? Your refusal to agree on this seems based on the fact that you don't WANT machines to be conscious.
 
What I find frustrating about Belz, Pixy, Rocketdodger et al., is that they continually refer to how people would react to a conscious machine, as if they had actually produced something that looked remotely like that.

That looks and sounds like a post made out of sheer sarcasm and not really an answer to the one I made.

In spite of the total failure after forty years to produce something that could pass the Turing test, they act as if they'd actually done it.

The point is, westprog, that you keep asking for examples of conscious machines, and reject every single one given. I can only assume that no matter how convincing the example you'll always find a reason to ignore it. Hence my post.

Prove me wrong: tell me, what would convince you that a particular machine was conscious?
 
And in spite of my insistence throughout this thread that consciousness should be investigated on a physical basis, they insist that I'm only interested in biology, or the soul, or some kind of dualism.

Lying isn't going to help you, lad. I said that you used dualistic talk, and that you still saw the mind in dualistic terms. I didn't say it was intentional.

In the meantime they promote their concept of "information processing" and have yet to define precisely what they mean by it.

We've already defined information processing. In fact, if you remember I was corrected on that definition and "information processing" basically includes every action in the universe. Self-referential information processing, however, has been defined well before that point in the thread.
 
What I find frustrating about Belz, Pixy, Rocketdodger et al., is that they continually refer to how people would react to a conscious machine
No we don't. I've never even mentioned it.

as if they had actually produced something that looked remotely like that.
SHRDLU.

In spite of the total failure after forty years to produce something that could pass the Turing test, they act as if they'd actually done it.
What does the Turing test have to do with it?

And in spite of my insistence throughout this thread that consciousness should be investigated on a physical basis, they insist that I'm only interested in biology, or the soul, or some kind of dualism. In the meantime they promote their concept of "information processing" and have yet to define precisely what they mean by it.
Information as in information theory. Processing as in switching. As we've said countless times.
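
For concreteness, both halves of that definition can be illustrated with toy code (the function names `surprisal` and `nand` are my own, purely illustrative): information measured in bits, and processing as switching.

```python
import math

def surprisal(p):
    """Shannon information content of an outcome with probability p, in bits."""
    return -math.log2(p)

print(surprisal(0.5))   # 1.0 - a fair coin flip carries one bit
print(surprisal(0.25))  # 2.0 - a rarer outcome carries more information

def nand(a, b):
    """Processing as switching: one universal gate suffices to build the rest."""
    return not (a and b)

print(nand(True, True))   # False
print(nand(True, False))  # True
```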
 
Lying isn't going to help you, lad. I said that you used dualistic talk, and that you still saw the mind in dualistic terms. I didn't say it was intentional.

So I said you described me as using dualistic talk, and you call me a liar and say I use dualistic talk?

We've already defined information processing. In fact, if you remember I was corrected on that definition and "information processing" basically includes every action in the universe. Self-referential information processing, however, has been defined well before that point in the thread.

Evasion duly noted. Exactly how every action in the universe gets condensed down to single bits is precisely the issue. Sort that one out and you have a real theory. The kind you can publish scientific papers about, not just philosophy or computer "science".
 
That looks and sounds like a post made out of sheer sarcasm and not really an answer to the one I made.



The point is, westprog, that you keep asking for examples of conscious machines, and reject every single one given. I can only assume that no matter how convincing the example you'll always find a reason to ignore it. Hence my post.

Prove me wrong: tell me, what would convince you that a particular machine was conscious?

I just mentioned the Turing test. If a machine "consciousness" were to pass that, I'd investigate the code to see how it did it. I'm not saying that I'd be convinced, but I'd give it a good look.
 
If you want to introduce hidden things, then go ahead. But that misses the discussion at hand. If you assume that you know exactly how a computer works internally, what program it runs and what data it has, then you have to assume that one knows exactly how a brain works, what internal connections it has, and what input it has, to make a fair comparison.

Comparing apples to oranges won't work. But maybe you only want to move goalposts because you already found out that there really is no difference between a brain with all its thoughts, feelings, etc. and a computer with all its code, data, input, etc., and simply won't admit it, because then you would have to stand up and admit to having been wrong initially.

Yeah, a brain is really like a computer, with all that input, and data and stuff. So if you don't think they are really the same, you must be a closet dualist, or theist, or [insert pejorative phrase here].

The problem is that the brain does not work just like a computer, or like any given computer program. We don't know how critical the precise operations of neurons are to the brain. We don't know that the brain's actions can be digitised without losing something. But we have these computers that do sort of the same kind of thing, so they must be the same kind of thing, right?
 
