The Hard Problem of Gravity

volatile said:
I like this, but it does seem to conflate (perhaps unavoidably) consciousness with complexity of behaviour. It seems to me to be possible to posit something that behaves in a complex way but that would not be conscious (or conscious of anything). It also places the question of whether machine intelligence can produce novel thoughts as a question adjunct to but separate from the question of whether it's conscious.

First, I wouldn't directly say that it conflates consciousness with complexity of behaviour. Second, I would also distinguish between something that behaves in a complex way and something that behaves in a complex way when responding as a whole system.


#1: Consciousness would not be the same as complexity of behaviour: Consciousness would be the mechanism by which a wide repertoire of potential operations in the system is accessed by other operations, thus creating a sort of synchronized global access.

#2a: The system could, however, be far more complex than what is manifest in its behaviour; that complexity might never manifest if the access mechanism were inefficient (not enough global access). Or vice versa: the mechanism could be highly efficient while the potential behavioural complexity of the system remained low, so there could be global access but still only limited potential complexity of behaviour.

#2b: I would assume that for a system to behave in a complex way as a whole, it would have to be fairly complex, and it would also have to have a fairly efficient mechanism for global access. Otherwise it could not respond in a complex way as a whole system.
 
What is an imaginative or novel thought but the confluence of several mundane ideas coexpressed? Why could we not program a computer to produce the same?

I do not think machine intelligence or machine consciousness are necessarily impossible. I just don't think SHRDLU and simple self-referential information processors meet those criteria, that's all.

We generally do not because of the way that we use computers. They are tools that perform some of our mental labor instead of self-directed entities. We don't particularly want them to be self-directed entities; but I'm not sure I see the limitation in them that makes it impossible for them to be self-directed.

Architecturally, computer programs are just that - programs. They pre-determine the behaviour set, and, more than that, predetermine the thought set.

Just add a motivational/emotional type system

Again - this must be added. It's a pre-determined process.

and the ability to sift through competing claims/ideas and conjoin them in novel ways (and to decide which of these combinations are useful and which not) and we'd probably see something very similar to us.

I'm not sure I agree. Especially on the "ability to determine which combinations are useful". Because, again, the usefulness function will need to be pre-established.
 
We're just adding turtles, here. Once the computer program is completed, there is no need for the programmer to do anything else.

Not quite the same thing as adding turtles. The whole point about a programme is that it is a programme, an instruction set. You tell a machine what to do, and it does it. It's possible to design an uncooperative machine, but again it must be designed as such. The information step between programme and programmer which Darat alluded to seems to be really important.

A computer won't do anything that it's not been told to do. It may act unpredictably, of course, but that unpredictability will be a direct function of the code it was given at the beginning.

I really like that last bit "at least not in the same way", as if you already saw my objection coming. Tell me, in what "way" is it not fixed in advance like a computer?

The mind is capable of being subjective, for a start. Which puts us right back where we started.

Now, I don't think machine (or synthetic) intelligence is impossible. After all, the brain is mechanistic. But what I'm getting at here is that self-referential programming is not, in itself, enough to produce something resembling consciousness. It's missing a layer.


No, it's called "hard problem" because dualists have a hard time letting go of their beliefs in the soul. You and I need not do the same, unless you are a dualist.

Not a dualist. Far from it, in fact. But even for non-dualists, there is still a question - for AI programmers and for neuroscientists and for philosophers of mind - as to what behaviour sets, or more broadly what criteria, distinguish consciousness from non-consciousness. We've all agreed here, more or less, that falsifying what we might call the consciousness hypothesis looks intractable right now. That's problematic. And hard.
 
I do not think machine intelligence or machine consciousness are necessarily impossible. I just don't think SHRDLU and simple self-referential information processors meet those criteria, that's all.

OK, agreed.



Architecturally, computer programs are just that - programs. They pre-determine the behaviour set, and, more than that, predetermine the thought set.

Right, but if we discuss consciousness within a computer system we simply cannot restrict our thinking to the way that a single program works (I think that is one of Searle's mistakes, and why he looks more and more like a closet dualist). Our brains do the same sorts of things (not exactly, mind you, because there are definite limitations to the computer analogy) and our thought set is also pre-determined to a certain extent (we can thank Continental philosophy for that insight).

The big difference is that we have "master" systems that monitor all the lower-level systems, and we have motivational/emotional inputs that add value to certain types of output (thought, language, etc.) -- all of which structure the very way that we do think. Having several different "programs" that can perform wildly contradictory actions requires that we have a means to sift through to the right "program" to answer the current problem set. I don't see why, with a lot of work and a few brilliant insights, we couldn't "master program" a computer to do the same. That we aren't there yet, I think, goes without saying. But I still don't see where the limitation on the possibility arises.
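For what it's worth, here is a deliberately toy sketch of that kind of arbitration. The subsystem names, the drives and the scoring rule are all invented purely for illustration, not a claim about how brains or any existing AI actually work: several lower-level "programs" propose contradictory actions, and a supervisory layer weights them by the current motivational state and picks one.

```python
# Toy "master program" arbitration. The subsystems, drives and scoring
# rule here are invented purely for illustration.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Proposal:
    action: str                   # what a lower-level "program" wants to do
    relevance: Dict[str, float]   # how much that action serves each drive

def arbiter(proposals: List[Proposal], drives: Dict[str, float]) -> str:
    """Pick the proposal whose relevance best matches the current drives."""
    def score(p: Proposal) -> float:
        return sum(drives.get(d, 0.0) * w for d, w in p.relevance.items())
    return max(proposals, key=score).action

# Competing, even contradictory, lower-level "programs":
proposals = [
    Proposal("eat",     {"hunger": 1.0, "safety": -0.2}),
    Proposal("flee",    {"safety": 1.0, "hunger": -0.5}),
    Proposal("explore", {"curiosity": 1.0}),
]

# The motivational/emotional state is the weighting input.
drives = {"hunger": 0.8, "safety": 0.3, "curiosity": 0.4}
print(arbiter(proposals, drives))   # -> "eat" with these weights
```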


Again - this must be added. It's a pre-determined process.

Sure it must be added to current systems. But it isn't as if all motivational/emotive systems must be absolutely pre-determined in their function. After all, they can learn.

Humans are pre-determined to have these systems through the long evolutionary process that almost exclusively involved non-humans. If we understood the structure of these systems in us, we could try to recreate them in computers. We just don't know enough about how motivation/emotion systems work at a fine structural level -- in part because they were always considered in the Western philosophical tradition to be "animal" and not worthy of consideration. Strangely enough now they are considered the sine qua non of being human and/or conscious.


I'm not sure I agree. Especially on the "ability to determine which combinations are useful". Because, again, the usefulness function will need to be pre-established.

What makes you think they are not pre-established in us, or at least appear as essentially pre-established through the process of genetic expression and learning? Computers can be set up to learn just as we are.
 
Computers do what they're told.
Sometimes they do. Sometimes they don't.

As I said, you are mistaking a small subset of what we use computers for with what computers are capable of. Stop doing that.

For example, if you asked SHRDLU "What do you enjoy?", or asked it to describe its own ontological state, it couldn't.
So?

I said it was conscious. That doesn't mean it enjoys things or otherwise. That doesn't mean it knows the word "ontological".

Because those aren't in the source code.
So? Most of what any program does isn't in the source code.

(Even the programmer, it seems, wasn't so bold as you - "There are fundamental gulfs between the way that SHRDLU and its kin operate, and whatever it is that goes on in our brains.")
And so there are. That's completely irrelevant, of course.

It couldn't do anything it was not pre-designed to do. Could it?
Yes, of course. When you use your ridiculously narrow definition, computers do things they're not designed to do all the time. Even when you exclude the things they're designed not to do.

As I said, computers are designed to compute anything that is computable. In this, all computers are equivalent - Turing complete. Most computers these days also have a source of quantum randomness (usually thermal), and otherwise you can get it from an external source.

So any computation that is deterministic, and also any computation that is non-deterministic, can be performed by a computer.

And that doesn't leave a whole lot of "else".
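To make the deterministic/non-deterministic distinction concrete, here's a trivial sketch (Python, purely illustrative): the same routine can be driven by a fixed seed, which makes its output exactly repeatable, or by the operating system's entropy pool, which makes it non-deterministic.

```python
# Trivial illustration: identical logic driven deterministically or not.
import os
import random

def roll(rng):
    """Five dice rolls using whatever random source we are handed."""
    return [rng.randint(1, 6) for _ in range(5)]

# Deterministic: a fixed seed reproduces exactly the same sequence every run.
print(roll(random.Random(42)))

# Non-deterministic: seed from the OS entropy pool (typically fed by
# hardware noise); the output is not fixed in advance by the program.
print(roll(random.Random(os.urandom(16))))
```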

Or can you give an example of a computer process that can produce an imaginative thought?
Irrelevant, but sure: Genetic algorithms certainly do this.
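To illustrate roughly what I mean (a toy example, not a claim that this is where imagination lives): a genetic algorithm mutates and recombines candidate solutions at random and keeps the fitter ones. The fitness function is written in advance, but the sequence of candidates it comes up with appears nowhere in the source code.

```python
# Toy genetic algorithm (illustration only). The fitness function is fixed
# in advance, but none of the intermediate candidates appear in the source.
import random
import string

TARGET = "imaginative thought"
CHARS = string.ascii_lowercase + " "

def fitness(s):
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s):
    i = random.randrange(len(s))
    return s[:i] + random.choice(CHARS) + s[i + 1:]

def crossover(a, b):
    cut = random.randrange(len(a))
    return a[:cut] + b[cut:]

population = ["".join(random.choice(CHARS) for _ in range(len(TARGET)))
              for _ in range(100)]

for generation in range(1000):
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        break
    parents = population[:20]                          # selection
    population = parents + [mutate(crossover(random.choice(parents),
                                             random.choice(parents)))
                            for _ in range(80)]        # recombination + mutation

print(generation, population[0])
```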

Or act beyond the program it has been given?
Absolutely, this is extremely common. There are all sorts of programs that are designed to analyse and dynamically adapt themselves to their data sets. Neural networks, expert systems, forecasting systems, query planners, routing optimisers, and so on.
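A minimal (and admittedly trivial) illustration of a program adapting itself to its data: a perceptron-style learner in Python. The decision rule it ends up with is determined by the examples it is shown, not written by the programmer.

```python
# Minimal online learner (illustration only). The decision rule it ends up
# with is determined by the data it sees, not written by the programmer.
def train(samples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in samples:                    # y is 0 or 1
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = y - pred
            w = [w[0] + lr * err * x[0], w[1] + lr * err * x[1]]
            b += lr * err
    return w, b

# The rule "class 1 iff both inputs are 1" is never written down; the
# learner finds an equivalent boundary from the examples alone.
data = [((0, 0), 0), ((1, 0), 0), ((0, 1), 0), ((1, 1), 1)]
print(train(data))
```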

Or even how this might be done, if is has not been achieved already?
Been done, decades since.

If I'm wrong (maybe I am, I often am), I'm not going to learn very much about what is actually a really interesting discussion if you just keep shouting "Wrong!" rather than explaining what you mean.
Then stop being wrong. Stop making bold and nonsensical declarations, and start asking questions.

Mind you, many of your questions have been wrong too.

There's no need to be obtuse, PM.
Apparently there is.
 
When you wake up in the morning it is self-evident that you are conscious. You don't need a mathematical proof to establish that you are awake and conscious. It is a given empirical fact.

If you define "conscious" simply in terms of "being awake," then I agree.

But that isn't what you, or any other HPC proponent, are doing.

You start with something that is self evident -- being awake, being aware, experiencing things, whatever. Then you extrapolate and include all sorts of other stuff that is not self evident. This is not logically valid. If you want to talk about something being self evident, you have to stick with only what is self evident.

And the reason you do this is because you lack a formal definition of "conscious." So you think "well, if I am awake I am conscious, and if I am conscious I must be able to experience qualia and subjectivity, and since it is self evident that I am awake, it must be self evident that qualia and subjectivity exist."

That is a fallacy.

What's circular about saying "consciousness exists as a phenomenon; consciousness is a requisite of knowledge"? It's no more circular than saying "mass is a real property; mass is a requisite to weight."

Because any formal definition of consciousness must be predicated on the existence of knowledge.

If you disagree, just go ahead and try to define "consciousness" without somehow relying on the notion of "to know."

We know that consciousness exists as a phenomenon and that each of us experiences this state at various periods of the day; this is a given. What we don't know is what in physics necessitates or governs conscious [i.e. subjective] experience. This is the reason why we are stuck with informal, 'fuzzy' definitions. For reasons that I've already mentioned, it is evident that self-referential intelligence is not a sufficient requisite for conscious experience.

Another fallacy.

Are you seriously claiming that lack of knowledge of the mechanism causing a phenomenon necessarily prevents us from at least operationally defining the phenomenon?

Of course.

Modeling in finer detail the exact physiological processes that give rise to said consciousness will require considerably more than simply stating consciousness as a given. The point of me assigning an "X" variable to consciousness is to serve as a conceptual placeholder until there is such a formal method of modeling what it is, exactly. There is no convincing evidence that we have such a formal system yet. My purpose here is to suggest possible avenues of investigation to determine a means of crafting such a system. My guess is that we need to study the physical process of instances that we do know are conscious [e.g. living brains] and work from there.

Well, perhaps we have been too harsh on you then -- you clearly know nothing about computer science and computation theory.

All the fundamentals we need to describe human consciousness are already known. We know exactly how an individual neuron behaves. The question, as with any complex problem, is how to arrange the fundamentals into something greater than the sum of its parts.

I feel like you aren't clear on just how much greater a phenomenon can be than the sum of its parts. Let me make it clear just how much -- infinitely.
 
Then stop being wrong. Stop making bold and nonsensical declarations, and start asking questions.

Mind you, many of your questions have been wrong too.


Apparently there is.

I have concluded that trying to explain the infinite and all encompassing realm of computer science to people who haven't seen it themselves is a futile exercise.

They just need to figure it out themselves it seems.
 
Oh, here's a perfect example of a program that goes beyond its programming: Conway's Game of Life.

It's a very simple cellular automaton. And it's Turing complete.

One of the early discoveries in Life was the glider, a pattern that would "fly" across the matrix.

Then someone discovered the glider gun, a pattern that would generate gliders (infinitely).

And then someone discovered the breeder, a pattern that generates glider guns.

Are these in the source code of Life? Because I've written a version of it myself, and I don't recall putting them there.
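Here's a minimal version in Python, for anyone who wants to see it for themselves (my own throwaway implementation, nothing official). The only things written down are the birth/survival rules; seed it with a glider and the pattern sails diagonally across the grid, even though nothing in the rules mentions gliders.

```python
# Minimal Conway's Game of Life. The only things written down are the two
# birth/survival rules; gliders, guns and breeders are emergent patterns.
from collections import Counter

def step(live):
    """One generation: count each cell's live neighbours, apply the rules."""
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# Seed with a glider; nothing in step() mentions gliders.
cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):              # one full glider period
    cells = step(cells)
print(sorted(cells))            # the same shape, shifted one cell diagonally
```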
 
I have concluded that trying to explain the infinite and all encompassing realm of computer science to people who haven't seen it themselves is a futile exercise.

They just need to figure it out themselves it seems.

Indeed. Juergen Schmidhuber's homepage is full of interesting counterexamples to bring up when people have a mistaken idea on what computers can and cannot do.
 
Icheumonwasp said:
Sure it must be added to current systems. But it isn't as if all motivational/emotive systems must be absolutely pre-determined in their function. After all, they can learn.

Humans are pre-determined to have these systems through the long evolutionary process that almost exclusively involved non-humans. If we understood the structure of these systems in us, we could try to recreate them in computers. We just don't know enough about how motivation/emotion systems work at a fine structural level -- in part because they were always considered in the Western philosophical tradition to be "animal" and not worthy of consideration. Strangely enough now they are considered the sine qua non of being human and/or conscious.

I think this can be used as further elaboration of my points in post #741.

It seems obvious that there has been an evolutionary advantage for a system to manifest potential complex behaviour. Thus we have ended up with brains with enormous complexity. But on the other hand, it would seem plausible that complexity alone isn't sufficient. Thus there would have to have been pressure for organization in a structured and hierarchical way. How else would the system behave in a "meaningful" way in regards to its internal and external environment?

Ultimately this would manifest in evolutionary pressure to organize in such a way as to highly restrict and regulate the mechanism of global access, and what could be accessed globally. Without restrictions there would be no behaviour at all, or it would be chaos all over the place, and the organism would not be able to respond in any particular way, or it could even die on the spot from lack of self-preservation. For instance, what would happen if there were universal access across the whole of the brain at the same time? It would be a total disaster and we would probably die in a few minutes, or so I would assume at least. Access here meaning "conscious" access.

It would hence also seem plausible that due to the strict regulation and restriction of access, the organism would benefit from evolving even further complexity (as in connectivity etc.) in some "safe" and "specific" instances. Thus ultimately manifesting in abstract and linguistic reasoning etc. Even emotions and motivational manifestations could perhaps be considered as some kind of "representations" of elements that aren't "allowed" direct global access, but still being beneficial in terms of some other parts of the system being aware of them in an indirect way (and then connecting them to a circuit where global access would be both safe, beneficial and meaningful).
 
Not quite the same thing as adding turtles. The whole point about a programme is that it is a programme, an instruction set. You tell a machine what to do, and it does it.

You seem to be under the mistaken impression that living creatures, and humans in general, are any different. You are wholly constrained in your behaviour by things that have already been "decided". You're not making it up as you go along.

The mind is capable of being subjective, for a start. Which puts us right back where we started.

What do you mean by "subjective" ? If you mean that it interprets things according to its own experience, that also applies to computers.

Now, I don't think machine (or synthetic) intelligence is impossible. After all, the brain is mechanistic. But what I'm getting at here is that self-referential programming is not, in itself, enough to produce something resembling consciousness. It's missing a layer.

A layer that not only no one is capable of defining, but one whose existence cannot be distinguished, even in principle, from its non-existence.

But even for non-dualists, there is still a question [snip] as to what behaviour sets, or more broadly what criteria, distinguish consciousness from non-consciousness.

Woah. Déjà vu. As I read this, I became convinced I read and wrote exactly those posts before. Of course, it's just a defect in the system, but it serves to show how unreliable our own perceptions about our consciousness are.

We've all agreed here, more or less, that falsifying what we might call the consciousness hypothesis looks intractable right now. That's problematic. And hard.

Not if we agree that the only way to test for consciousness is through behaviour. Again, I don't see why we're assuming that human consciousness is so special (so different, in anything else than complexity, from, say, a thermostat).
 
Are these in the source code of Life? Because I've written a version of it myself, and I don't recall putting them there.

The game of life produces patterns (very interesting patterns) based on a finite set of rules. That is, it does what it's told to. That's not a very convincing example of something that is imaginative, because it most resolutely follows the program it's been given. Indeed, it follows the rules given based on a designed starting point. Given the same starting point, it will always produce the same pattern.

That's not imagination. In fact, it's the opposite of imagination.
 
While you're correct as far as causality/randomness, I believe the cat is supposed to be a ridiculous example of how not to interpret QM.


http://www.tu-harburg.de/rzt/rzt/it/QM/cat.html#sect5

D'oh, I always thought that it was supposed to show how wild and crazy QM can be, by applying it to an everyday thing, like a cat (yes, I understand that it wouldn't be both alive and dead at the same time). I took it as a very serious analog, because Beth, and a few others here, are taking effects only seen on quantum scales and applying them to an everyday object (our brain).

That link was enlightening. As I said, I am a layman to all of this QM stuff.

I think that my point still stands.
 
That you are not aware of this is not a very strong statement. You weren't aware of SHRDLU, either.

I was certainly aware of SHRDLU, and Eliza, and all the other simple scripts from the youth of AI, when everything seemed possible. I'm still not aware of SHRDLU the conscious program aware of its own existence.

Systems incorporating optimising JIT compilers do this sort of thing. You very likely have one installed on your PC.

And most humans demonstrate little ability to do this in any case.

A JIT compiler might, at a pinch, rewrite its own code while it's running, on the fly. Does this imply some awareness of what being a compiler program is? Of course not.
 
I have concluded that trying to explain the infinite and all encompassing realm of computer science to people who haven't seen it themselves is a futile exercise.

They just need to figure it out themselves it seems.

I think that trying to tell people who've actually worked with computers that computer "science" is an infinite and all encompassing realm is a lot harder than convincing people who've never written a program themselves.

Someone who understands even so much as a "Hello, world" will realise that before executing the "Hello world" statement, the computer/operating system/program has no idea that it is going to do it. While it is performing the operation, it has no idea what it is doing. And once it has done it, it doesn't know that it's done it.

Add a boolean variable to say "I've just said 'Hello world'", and does it know what it's done? No. It doesn't even know what the value of the variable is until it looks at it. When it looks away, it forgets again. And no matter how complicated the program might become, that's how it works. It never knows anything. It never remembers anything.
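In code, the point looks something like this (a trivial sketch; the flag name is mine):

```python
# Trivial sketch of the point above (the flag name is mine). The program
# stores a value, but nothing in it "knows" the value is there; it only
# figures in the computation at the instant it is read.
said_hello = False

print("Hello, world")
said_hello = True

if said_hello:                  # the value is consulted, then ignored again
    print("I've just said 'Hello world'")
```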
 
...snip...

Someone who understands even so much as a "Hello, world" will realise that before executing the "Hello world" statement, the computer/operating system/program has no idea that it is going to do it. While it is performing the operation, it has no idea what it is doing. And once it has done it, it doesn't know that it's done it.

Add a boolean variable to say "I've just said 'Hello world'", and does it know what it's done? No. It doesn't even know what the value of the variable is until it looks at it. When it looks away, it forgets again. And no matter how complicated the program might become, that's how it works. It never knows anything. It never remembers anything.

The only problem with that objection to the use of "know" is that it applies to some humans as well, e.g. ones with some forms of brain damage.
 
The only problem with that objection to the use of "know" is that it applies to some humans as well, e.g. ones with some forms of brain damage.

Generally when we have to repeatedly tell a human with brain damage what his name is, we say that he doesn't know his name. If he needs to keep looking up what his name is, we say that he's forgotten his name.

I realise of course that when a person remembers their name, they are accessing some kind of data store. But the way a human "knows" information is very different to the way the typical computer program knows information.

If it is possible for a program to get around this limitation, then it will involve a lot more than we see in SHRDLU, Eliza or JIT compilers.
 
