The Hard Problem of Gravity

Given that the p-zombie stuff is an admission that we cannot even tell whether a human is actually conscious or just pretending, I cannot see this as a dealbreaker with regard to consciousness. How would we falsify the hypothesis that humans are conscious, as opposed to merely pretending?

He knows.
 
I suppose such a factory could be considered analogous to a cell; albeit a gigantic, crude, functional equivalent of a cell. Could the factory be considered 'alive'? I suppose in some sense it could. /shrug

Again, the definition of "life" is hard to pinpoint.

I saw an interesting link that someone posted a while ago, about a computer program that built virtual clocks in an evolutionary way. Biological systems are not unique in that regard.

Ah, there it is: http://www.youtube.com/watch?v=mcAq9bmCeR0
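
For anyone curious, the mechanism in that video boils down to random variation plus selection. Here's a minimal sketch of the loop; the fitness target and alphabet are stand-ins of my own invention, not anything from the video's actual clock simulation:

```python
# Minimal sketch of the evolutionary idea behind the video: random
# variation plus selection, with no designer specifying the solution.
# TARGET is a toy stand-in for a "working clock" fitness criterion.
import random

TARGET = "TICK TOCK"
CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(genome):
    # Count positions matching the target; higher is fitter.
    return sum(a == b for a, b in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    return "".join(random.choice(CHARS) if random.random() < rate else c
                   for c in genome)

population = ["".join(random.choice(CHARS) for _ in range(len(TARGET)))
              for _ in range(100)]

for generation in range(1000):
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        break
    # The fittest half reproduces with mutation; the rest are discarded.
    survivors = population[:50]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(50)]

print(generation, population[0])
```

No step in that loop "knows" what a clock is; the design emerges from selection pressure alone.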
 
How could a human pretend to be conscious when he/she wasn't actually? Sleepwalking?

How do YOU know that other humans are conscious at all, Beth? Other than just assuming so?

It could very well be that some humans are NOT conscious at all, but all have the ability to act as though they are. You could never tell the difference, and so there is no difference at all, for all practical purposes.
 
Yes, they are. It's called DNA.

No, they are DNA. That's not really instruction. The point I was making is that a coder instructs a computer. Who (what) instructs a biological system?

This comes down to what I mention below, about novelty. If a system is a self-referential information system but capable of producing only a finite number of thoughts, can it be said to be conscious?

It depends how we define consciousness. So, let's start with that. How do YOU define consciousness?

I'm not sure. Isn't that what we're discussing? That's why I'm trying to really understand what Pixy's definition is, because it seems to be a good one, but not quite there.

For example, does consciousness require the ability to have novel thoughts? I would argue that it does, I think, but I'm open to being convinced otherwise.
 
I think you and I have had this discussion before... As always, we'll have to agree to disagree because, as you know, I think that such an abstraction represents a change in the information.
Yep, I think we did, and yep, I think that's where we left it.
 
No, they are DNA. That's not really instruction.

DNA is not instruction? What tells your cells how to produce proteins?

The point I was making is that a coder instructs a computer. Who (what) instructs a biological system?

No, no, no. The CODE instructs the computer. The coder made the code. That the code was made by a person or by mutation and natural selection makes no difference.

This comes down to what I mention below, about novelty. If a system is a self-referential information system but capable of producing only a finite number of thoughts, can it be said to be conscious?

I can only produce a limited number of thoughts, Volatile. Am I conscious? How would you know?

I'm not sure.

You don't know how to define consciousness? Neither do I, really. No one seems to be able to do that except the way that Pixy and Mercutio and Dodger have been doing. So maybe we should stick with that.
 
DNA is not instruction? What tells your cells how to produce proteins?

What instructs the DNA, is what I mean. The computer programme has something external feeding in a set of instructions. The biological system does not. It does not have a finite set of behaviours, IYSWIM. What a computer can do (when we're talking about information processes) is fixed IN ADVANCE. What a biological system can do (when we're talking about consciousness) is not, or at least not in the same way.

No, no, no. The CODE instructs the computer. The coder made the code. That the code was made by a person or by mutation and natural selection makes no difference.
Really?

A computer can act only according to, or perhaps more accurately within, the parameters of its code. It cannot act in novel ways. This is not true of the mind.

I can only produce a limited number of thoughts, Volatile. Am I conscious? How would you know?
I don't know. Which is precisely why we're having this discussion. How could I know?

You don't know how to define consciousness? Neither do I, really. No one seems to be able to do that except the way that Pixy and Mercutio and Dodger have been doing. So maybe we should stick with that.
Even if that does not seem entirely correct? It's a definition, but it seems to be a partial one to me. Doesn't it to you?

I don't know what the answer is. That's why it's called the Hard Problem. I think it's a methodological rather than an ontological problem, but it's definitely a problem. Answers like Pixy's seem to me to be unsatisfyingly glib, somehow... which is why I'm trying to see if they really are glib, or just expressed glibly.
 
What instructs the DNA, is what I mean. The computer programme has something external feeding in a set of instructions. The biological system does not. It does not have a finite set of behaviours, IYSWIM. What a computer can do (when we're talking about information processes) is fixed IN ADVANCE. What a biological system can do (when we're talking about consciousness) is not, or at least not in the same way.

The DNA definitely had external input -- fit strands propagated, unfit ones did not.

The information processes of a computer are not necessarily fixed in advance -- using genetic programming, we can let the computer modify its own instructions to accomplish a given goal.
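
For instance, here is a minimal sketch of that idea: the program's instruction list is itself data that mutation rewrites and selection filters, so no human coder fixes its behaviour in advance. The toy instruction set and goal here are mine, not any particular GP system:

```python
# Sketch of genetic programming: the "program" is a list of instructions
# that mutation rewrites, so its final form was never written by a coder.
# OPS and GOAL are toy assumptions made up for this illustration.
import random

OPS = ["+1", "-1", "*2"]
GOAL = 42

def run(program, x=0):
    for op in program:
        if op == "+1": x += 1
        elif op == "-1": x -= 1
        elif op == "*2": x *= 2
    return x

def error(program):
    return abs(run(program) - GOAL)

programs = [[random.choice(OPS) for _ in range(8)] for _ in range(50)]
for _ in range(200):
    programs.sort(key=error)
    if error(programs[0]) == 0:
        break
    # Copy the best programs and randomly rewrite one instruction in each.
    children = []
    for parent in programs[:25]:
        child = parent[:]
        child[random.randrange(len(child))] = random.choice(OPS)
        children.append(child)
    programs = programs[:25] + children

print(programs[0], run(programs[0]))
```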

The biological systems also have the advantage of billions of years of optimization. Sure, monkey intelligence can help make up the gap, but a seven-order-of-magnitude difference in the timescales involved is still a monumental hurdle to overcome.
 
The DNA definitely had external input -- fit strands propagated, unfit ones did not.

The information processes of a computer are not necessarily fixed in advance -- using genetic programming, we can let the computer modify its own instructions to accomplish a given goal.

The biological systems also have the advantage of billions of years of optimization. Sure, monkey intelligence can help make up the gap, but a seven-order-of-magnitude difference in the timescales involved is still a monumental hurdle to overcome.

Oh, I agree, of course. The snag is that word "given", as in "given goal". It cannot determine its own goals beyond the pre-determined functions or capabilities it has been programmed with. It cannot work in novel ways. It has no imagination.

The quibble I have with comparing biological with machine systems is not so much that the machine has inputs, but that its inputs codify its outputs in prescriptive ways. That's what computer code is - a prescriptive instruction set.

Ever hear that phrase "Computers don't make errors, people do"? Computers only do what they're told to do - even if that instruction is "Reach a set goal in an emergent way".
 
volatile said:
For example, does consciousness require the ability to have novel thoughts? I would argue that it does, I think, but I'm open to being convinced otherwise.

This is partially a further elaboration of the information in post #691. There are some interesting links there (especially the last one). The other part is that it's pure speculation at this point. :boggled:


If we define consciousness in accordance with Dennett (1978) and Baars' Global Workspace Model, we would have a definition looking something like this: consciousness is that which creates global access (to further internal operations). Consciousness would not be defined as a property but as a mechanism. Consciousness would thus also look vastly different in humans than in computers, because the mechanism and the environment where it would operate would be different. Ultimately it would mean that we would not determine consciousness by determining behaviour, but according to the mechanism that gives access to a variety of potential behaviours, which ultimately would be dependent on the qualities of the medium, i.e., "the medium would be the message."

In other words, we would always only be conscious of something, but there would not be consciousness as such. How would we determine if a system is conscious? We simply wouldn't; it wouldn't matter. We could only describe different mechanisms. By determining behaviour, we would only be determining potential complexity of behaviour.
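
To make the mechanism concrete, here is a rough sketch of the competition-and-broadcast cycle the Global Workspace model describes. The module names and salience scores are invented for illustration; real GW models are far richer:

```python
# Toy illustration of the global-workspace mechanism: many specialist
# processes compete, and whichever wins has its content broadcast to
# every other process. On this reading, consciousness is the broadcast
# mechanism itself, not a property of any one module.

class Module:
    def __init__(self, name):
        self.name = name
        self.inbox = []

    def bid(self, stimulus):
        # How salient this stimulus is for this specialist (toy heuristic).
        return stimulus.get(self.name, 0.0)

    def receive(self, content):
        self.inbox.append(content)

modules = [Module("vision"), Module("hearing"), Module("memory")]

def conscious_cycle(stimulus):
    # Competition: the specialist with the most salient content wins...
    winner = max(modules, key=lambda m: m.bid(stimulus))
    # ...and its content gains "global access": it is broadcast to all.
    for m in modules:
        m.receive((winner.name, stimulus[winner.name]))
    return winner.name

print(conscious_cycle({"vision": 0.9, "hearing": 0.4, "memory": 0.1}))
# -> vision: the visual content is what the system is "conscious of".
```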
 
I'm not sure if this might be the proper place, because I might have misunderstood what you two are talking about completely, but here's some food for thought at least.

From the following PLoS Biology article – Exploring the "Global Workspace" of Consciousness:


The article is about two things: First, it's about Bernard Baars' theory about consciousness as a global workspace. Baars utilizes the Theatre metaphor quite a bit himself, but it's still just a metaphor. For instance, in this article: In the Theatre of Consciousness, where he starts out like this (from the abstract):


Which takes us to the second point of the original article by Robinson: There seems to be some new and exciting evidence for explaining consciousness as a kind of global workspace, which can at least partially illuminate how it all comes together; Converging Intracranial Markers of Conscious Access by Gaillard R, Dehaene S, Adam C, Clémenceau S, Hasboun D, et al. (in PLoS Biology).

Interesting stuff. Thanks for the links, Lupus.
 
I'll stick my neck out here.

I don't believe (and it is nothing more than an opinion) that if we had an incredibly accurate computer simulation (running on hardware something like we have today, i.e. transistors and the like) of a human being that responded exactly as I would do in its simulated universe, it would be conscious in the same way as I am.

Right now I'm reading Seth Lloyd's Programming the Universe. He mentions that one could not efficiently and accurately simulate a real physical system [even one containing only a few hundred atoms] using just the classical computers we use today. In order to accomplish such a feat, one would need to construct a quantum computer. Lloyd estimates that, if one takes Moore's law into account, it would probably take about another 40 years before we had such computers.

Now, if classical computers are so inefficient at simulating systems as simple as a mere few hundred atoms, how in the world could they accurately simulate conscious processes in the human brain?
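
For what it's worth, the scaling behind Lloyd's claim is easy to check with back-of-envelope arithmetic. This sketch treats each atom as a single two-level system, which is a gross simplification of my own, just to show the exponential blow-up:

```python
# Back-of-envelope version of Lloyd's point: a classical simulation of a
# quantum system must track one complex amplitude per basis state, and
# the number of basis states grows exponentially with system size.

n_atoms = 300                      # "a few hundred atoms"
states = 2 ** n_atoms              # basis states for 300 two-level systems
bytes_per_amplitude = 16           # one complex number (2 x 64-bit floats)
memory_needed = states * bytes_per_amplitude

atoms_in_universe = 10 ** 80       # common rough estimate
print(f"amplitudes to store: {states:.3e}")          # ~2.04e90
print(f"bytes required:      {memory_needed:.3e}")   # ~3.3e91
print(memory_needed > atoms_in_universe)             # True, by ~11 orders
```

So even one byte per atom in the observable universe falls short by eleven orders of magnitude, on these toy assumptions.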
 
Oh, I agree, of course. The snag is that word "given", as in "given goal". It cannot determine its own goals beyond the pre-determined functions or capabilities it has been programmed with. It cannot work in novel ways. It has no imagination.
Wrong.

The quibble I have with comparing biological with machine systems is not so much that the machine has inputs, but that its inputs codify its outputs in prescriptive ways. That's what computer code is - a prescriptive instruction set.
Wrong.

Ever hear that phrase "Computers don't make errors, people do"? Computers only do what they're told to do - even if that instruction is "Reach a set goal in an emergent way".
And wrong.

You're confusing a small subset of the tasks we use computers for with what computers are capable of. What they are capable of is computing anything that is computable.
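
One way to see this concretely: a computer's behaviour isn't pinned to any one task, because the program it runs is itself data. A toy interpreter, with an instruction set invented purely for illustration:

```python
# Tiny illustration of the universality claim: the machine below has no
# fixed behaviour of its own; its behaviour is whatever program it is
# handed as data.

def interpret(program, tape):
    """Run a toy program (a list of instructions) over a memory tape."""
    pc, ptr = 0, 0
    while pc < len(program):
        op, arg = program[pc]
        if op == "add":
            tape[ptr] += arg
        elif op == "move":
            ptr += arg
        elif op == "jump_if_zero" and tape[ptr] == 0:
            pc = arg
            continue
        pc += 1
    return tape

# Two different "machines" from one machine, just by changing the data:
print(interpret([("add", 5)], [0, 0]))                           # [5, 0]
print(interpret([("add", 3), ("move", 1), ("add", 7)], [0, 0]))  # [3, 7]
```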
 
That you are not aware of this is not a very strong statement. You weren't aware of SHRDLU, either.

Systems incorporating optimising JIT compilers do this sort of thing. You very likely have one installed on your PC.

And most humans demonstrate little ability to do this in any case.

So, not only are present day computers 'conscious', they are even more 'conscious' than us humans! :rolleyes:
 
In other words, we would always only be conscious of something, but there would not be consciousness as such. How would we determine if a system is conscious? We simply wouldn't; it wouldn't matter. We could only describe different mechanisms. By determining behaviour, we would only be determining potential complexity of behaviour.

I like this, but it does seem to conflate (perhaps unavoidably) consciousness with complexity of behaviour. It seems to me to be possible to posit something that behaves in a complex way but that would not be conscious (or conscious of anything). It also places the question of whether machine intelligence can produce novel thoughts as a question adjunct to but separate from the question of whether it's conscious.

Otherwise, excellent. It seems to redefine the question in a useful way.
 
You're confusing a small subset of the tasks we use computers for with what computers are capable of. What they are capable of is computing anything that is computable.

Computers do what they're told.

For example, if you asked SHRDLU "What do you enjoy?", or asked it to describe its own ontological state, it couldn't. Because those aren't in the source code. (Even the programmer, it seems, wasn't so bold as you - "There are fundamental gulfs between the way that SHRDLU and its kin operate, and whatever it is that goes on in our brains.") It couldn't do anything it was not pre-designed to do. Could it?

Or can you give an example of a computer process that can produce an imaginative thought? Or act beyond the program it has been given? Or even how this might be done, if it has not been achieved already?

If I'm wrong (maybe I am, I often am), I'm not going to learn very much about what is actually a really interesting discussion if you just keep shouting "Wrong!" rather than explaining what you mean. There's no need to be obtuse, PM.
 
Computers do what they're told.

For example, if you asked SHRDLU "What do you enjoy?", or asked it to describe its own ontological state, it couldn't. Because those aren't in the source code. (Even the programmer, it seems, wasn't so bold as you - "There are fundamental gulfs between the way that SHRDLU and its kin operate, and whatever it is that goes on in our brains.") It couldn't do anything it was not pre-designed to do. Could it?

Or can you give an example of a computer process that can produce an imaginative thought? Or act beyond the program it has been given? Or even how this might be done, if it has not been achieved already?

If I'm wrong (maybe I am, I often am), I'm not going to learn very much about what is actually a really interesting discussion if you just keep shouting "Wrong!" rather than explaining what you mean. There's no need to be obtuse, PM.


What is an imaginative or novel thought but the confluence of several mundane ideas coexpressed? Why could we not program a computer to produce the same?

We generally do not because of the way that we use computers. They are tools that perform some of our mental labor instead of self-directed entities. We don't particularly want them to be self-directed entities; but I'm not sure I see the limitation in them that makes it impossible for them to be self-directed.

Just add a motivational/emotional type system and the ability to sift through competing claims/ideas and conjoin them in novel ways (and to decide which of these combinations are useful and which not) and we'd probably see something very similar to us.
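
Something like this, as a deliberately crude sketch; the concept list, the "drive", and the scoring rule are all toy choices of mine:

```python
# Sketch of the mechanism suggested above: conjoin mundane ideas at
# random, score the combinations against a "motivation", and reject
# anything the system has already thought of.
import random

concepts = ["wings", "boat", "clock", "mirror", "spring", "lens"]
seen = set()   # everything the system has already thought of

def usefulness(pair, motivation):
    # Toy drive: prefer combinations involving the currently desired thing.
    return (motivation in pair) + random.random()

def novel_idea(motivation):
    # Conjoin two mundane concepts; keep only genuinely new combinations.
    while True:
        pair = tuple(sorted(random.sample(concepts, 2)))
        if pair not in seen:
            seen.add(pair)
            return pair, usefulness(pair, motivation)

for _ in range(3):
    print(novel_idea("wings"))
# e.g. (('boat', 'wings'), 1.7) -- a "flying boat", scored by the drive.
```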
 
Care to expand?

I didn't say _I_ knew.

What instructs the DNA, is what I mean.

The laws of physics.

The computer programme has something external feeding in a set of instructions.

We're just adding turtles, here. Once the computer program is completed, there is no need for the programmer to do anything else.

The biological system does not. It does not have a finite set of behaviours

Yes, it does.

What a computer can do (when we're talking about information processes) is fixed IN ADVANCE. What a biological system can do (when we're talking about consciousness) is not, or at least not in the same way.

I really like that last bit "at least not in the same way", as if you already saw my objection coming. Tell me, in what "way" is it not fixed in advance like a computer?

Really?

A computer can act only according to, or perhaps more accurately within, the parameters of its code. It cannot act in novel ways. This is not true of the mind.

You are wrong, wrong, wrong. The mind can only act within its parameters as well. That the parameters more-or-less change with the addition of new data changes nothing because a computer program can do that as well.
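
A minimal sketch of that point: the program below has parameters that every new datum permanently alters, so its future behaviour depends on a history no coder spelled out. Toy example of my own:

```python
# The program's "parameters" are not frozen; they shift with every new
# observation, so its behaviour depends on its lived history rather than
# on anything fixed at coding time.

class Adaptive:
    def __init__(self):
        self.estimate = 0.0
        self.n = 0

    def observe(self, value):
        # Running mean: each datum permanently alters the parameter.
        self.n += 1
        self.estimate += (value - self.estimate) / self.n

    def act(self):
        return "approach" if self.estimate > 0 else "avoid"

agent = Adaptive()
for reward in [1.0, -0.5, 2.0]:   # a different history -> different acts
    agent.observe(reward)
print(agent.act())                # -> approach
```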

I don't know. Which is precisely why we're having this discussion. How could I know?

There's only one way to tell: behaviour.

Even if that does not seem entirely correct?

What it "seems" to be is irrelevant, as long as it is a useful definition.

It's a definition, but it seems to be a partial one to me. Doesn't it to you?

No, my "gut-feeling" doesn't tell me that my awareness is special in any way. In fact, I'd often have trouble telling you for sure that it's well defined at all.

That's why it's called the Hard Problem.

No, it's called "hard problem" because dualists have a hard time letting go of their beliefs in the soul. You and I need not do the same, unless you are a dualist.
 
