
On Consciousness

Is consciousness physical or metaphysical?


So whatever we're looking for with this whole "intelligence" thing, whatever Derpy McBlackbox over there has that my pocket calculator don't, the dictionary alone doesn't have it either. Moreover, this is a general problem. Just because you have a big enough index to answer every question doesn't mean you can call it "thinking."

I don't see how this follows: the fact that one part of the machine can't be said to be intelligent doesn't mean that the machine as a whole isn't. And the fact that one part of the machine doesn't understand chinese doesn't mean that the machine as a whole doesn't.

I figure as long as it displays intelligent behavior, it's intelligent. I don't really understand what else "intelligent" could mean.
 
I don't see how this follows: the fact that one part of the machine can't be said to be intelligent doesn't mean that the machine as a whole isn't. And the fact that one part of the machine doesn't understand chinese doesn't mean that the machine as a whole doesn't.

I figure as long as it displays intelligent behavior, it's intelligent. I don't really understand what else "intelligent" could mean.

It means that the concept intelligence has a history which gives it more meaning than what you want to credit it with.
 
I figure as long as it displays intelligent behavior, it's intelligent. I don't really understand what else "intelligent" could mean.
It means that the concept intelligence has a history which gives it more meaning than what you want to credit it with.

Um, "intelligent" means that the concept of intelligence has a history which gives it more meaning than what I want to credit it with? Huh?

Perhaps you could simply explain what "intelligent" means beyond displaying intelligent behavior.
 
Um, "intelligent" means that the concept of intelligence has a history which gives it more meaning than what I want to credit it with? Huh?

Perhaps you could simply explain what "intelligent" means beyond displaying intelligent behavior.

Who said anything about beyond behavior?
Why bring behavior into the discussion?
The issue is, the meaning of intelligence.
As a word, a concept, it has a rich history of different meanings.
Which is the right one?
You got an answer?
 
I figure as long as it displays intelligent behavior, it's intelligent. I don't really understand what else "intelligent" could mean.
Robustness. Ask the room something not in the phrasebook but which it can answer, a differently worded question perhaps. A strong AI which understands Chinese could answer you anyway; a weak AI, using the lookup table alone, could not. Both are common senses of the word "intelligence," whose meaning had previously been far from clear.
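The phrasebook point can be sketched in a few lines. The entries and wording below are made up purely for illustration:

```python
# A weak-AI "room": a literal phrasebook keyed on exact question strings.
# The entries are hypothetical; a real table would need billions of them.
phrasebook = {
    "What color is the sky?": "The sky is blue.",
    "How old are you?": "I am thirty years old.",
}

def room_answer(question):
    # Pure lookup: no understanding, no generalization.
    return phrasebook.get(question, "???")

# The exact phrasing is answered; a trivially reworded question is not.
print(room_answer("What color is the sky?"))   # The sky is blue.
print(room_answer("What colour is the sky?"))  # ???
```

A system that understood the question could handle the second spelling without a new table entry; the lookup alone cannot.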

I should probably add here that I don't actually support the Chinese Room argument. It's wrong. Not because of any semantic foolishness, but because he assumes the operator (human or machine) has no capacity to learn the semantics of the symbols it manipulates. This was a perfectly fair assumption for its time, because people were generally arguing such a learning capacity would not be needed.

Add in that capability, though, and with time and practice you end up with an agent with some fragmentary shard of strong AI. It may not know any of the concepts the questions or answers refer to, but it truly understands how the one should map to the other.

Who said anything about beyond behavior?
Why bring behavior into the discussion?
The issue is, the meaning of intelligence.
As a word, a concept, it has a rich history of different meanings.
Which is the right one?
You got an answer?
They're all wrong. The word is a catch-all term for a large variety of behavioral and information processing steps, and these days is increasingly hijacked by people trying to push a "humans are special" agenda.

It's almost as bad as "consciousness."
 
Chess and the Chinese Room

A few years ago I was into playing chess on Yahoo. You set up a board and wait for a human opponent of similar rank to accept your game, and away you go.

Then one day, something disturbing happened. I was kicking someone's ass, and instantly after I won a piece, he started to play absolutely perfectly and in very few moves destroyed me. I felt pretty sure that he had been playing on his own until I started to beat him, then switched to using a computer. I think he just didn't want to fall in the rankings.

The interesting thing is that the magic bean of my opponent's personality went away, and I noticed it instantly. It was something like playing tug of war with a person, feeling his living muscles through the rope, then having the rope hitched to a bulldozer and being pulled into the mud in one mechanical stroke.

Or it was as if there were a person in the room who knew only a little Chinese, and when they had to respond in a way over their head, they switched to the book compiled by experts.

...but chess is not a lookup-table task for AI. There are too many possibilities; the table would have to be as big as the universe or something like that. I've worked on look-ahead games, and made one that had no such table. It "felt the future" by imagining every possible move its opponent might make, its possible answers, and so on. I also added emotion to it -- it put up a happy face when it expected a win, and a sad face when it saw it was losing. Unlike us, it didn't let its emotions interfere with its intelligence.
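That "feel the future" idea is ordinary minimax search over the game tree. A minimal sketch, using a toy take-1-or-2-stones game of my own choosing (the post doesn't specify the actual game) so the whole tree fits in a few lines:

```python
# Look-ahead play without any lookup table: minimax search. The game is a
# toy Nim variant (take 1 or 2 stones; whoever takes the last stone wins).

def best_move(stones, maximizing=True):
    """Return (score, move). Score is +1 if the maximizer can force a win,
    -1 if the minimizer can; move is the number of stones to take."""
    if stones == 0:
        # The previous player took the last stone and won.
        return (-1 if maximizing else 1), None
    best = None
    for take in (1, 2):
        if take > stones:
            continue
        # Imagine this move, then every reply, recursively.
        score, _ = best_move(stones - take, not maximizing)
        better = best is None or (score > best[0] if maximizing else score < best[0])
        if better:
            best = (score, take)
    return best

score, move = best_move(4)  # from 4 stones, taking 1 forces a win
print(score, move)          # 1 1
```

The same shape of search, plus a position-evaluation heuristic to cut the tree off at a fixed depth, is how classical chess engines play.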
 
No, because it isn't.

Searle formulated it in just about the stupidest way possible. He did that on purpose. He doesn't want people actually thinking about the issue, he wants them to be blinded with emotion and just give up.

Case in point -- why Chinese and not English? Why a man in the room, and not a robot? Why a room, and not the brain of a giant?

The whole thing is absurd.

Ah, I didn't know that, and it didn't seem like the narrator of the BBC show understood that either. Next time I watch it I'll see if I missed it.

(Chinese because it's often an example of a language that's so extremely cryptic to westerners. A man in a room because it brings home the point that the man has no understanding of the meaning of the messages he's transcribing. His magic bean of understanding is never engaged, yet the one outside the room feels it is.)

So, Searle was arguing that the Chinese Room, like an expert system, did not understand the subject, but was playing back the understanding of the experts that created the table. Funny how so many people misunderstand its point, like the point of Schrödinger's Cat.
 
...but chess is not a lookup-table task for AI. There are too many possibilities; the table would have to be as big as the universe or something like that.
Huh? We can almost do it now. It isn't even among the most computationally complicated discrete turn-based games by a long shot - that honor goes to Go and Arimaa.
 
Who said anything about beyond behavior?
Why bring behavior into the discussion?
Please read what you respond to. I said that if it displays intelligent behavior, it's intelligent.

You seemed to disagree. I was hoping you'd explain why. If you didn't disagree with that, all you have to do is say so.
 
Huh? We can almost do it now. It isn't even among the most computationally complicated discrete turn-based games by a long shot - that honor goes to Go and Arimaa.

From Number of possible chess games:

The number of legal chess positions is 10^40, the number of different possible games, 10^120.

There are only about 10^80 atoms in the universe.

But, whatever the number, a lookup-table implementation is not feasible for chess-playing machines, which need to feel the future to play well.
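The quoted magnitudes can be sanity-checked directly. The figures below are the rough estimates given above, not exact counts:

```python
# Back-of-envelope comparison of the estimates quoted above.
legal_positions = 10**40    # quoted estimate of legal chess positions
possible_games  = 10**120   # quoted estimate of distinct games
atoms_universe  = 10**80    # common estimate for the observable universe

# Even one table entry per atom in the universe covers a vanishing
# fraction of the possible games.
shortfall = possible_games // atoms_universe
assert shortfall == 10**40  # 10^40 games per atom, and still not enough
```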
 
These computer scientists argued that a computer armed with enough of these lookup tables was intelligent. Not "indistinguishable from," not "might as well be considered," was. A computer with a sufficiently large Chinese-English dictionary would know how to translate between them.

But hold on, Searle said. Let's give this giant-ass lookup table to some jackass in a room instead. He don't know Chinese. He ain't gonna learn Chinese, not when he just looks up sentence indexes. He doesn't understand what you're asking him. Look at him, he gets paid to sit in a dark room and do whatever was the 1980 equivalent of filling out captchas all day.

The problem is the thought experiment is using absurdness to extinguish absurdness.

It is absurd to think that a giant lookup table is relevant to *anything* when it comes to intelligence because by definition we consider intelligence the ability to do something other than reference pre-defined behavioral reactions.

The proper counter to this stupid argument by the old computer scientists is to just point out that they are idiots. Not formulate an even more bizarre scenario that is so unclear that every armchair philosopher on the internet has spun it into supporting their own uneducated opinions.
 
Ah, I didn't know that, and it didn't seem like the narrator of the BBC show understood that either. Next time I watch it I'll see if I missed it.

Heh, I just made that up. That is my own interpretation, based on the fact that I could argue why a lookup table is not equivalent to intelligence without referencing absurd scenarios, and it would be far clearer to everyone.

Hence, there must have been an ulterior motive, I tell myself. I am wary of any philosopher interested in consciousness and cognition who doesn't immerse themselves in programming; it seems disingenuous. And Searle, like Penrose, is that type. (Penrose isn't a philosopher, but he isn't a programmer either, so any notion he has about what an algorithm can or cannot do is amateur, and that is why I don't respect him at all on this issue.)

Note that I feel sort of the same way about all these types, regardless of which side they support: Dennett, Blackmore, etc. I can't stand listening to people quote Daniel Dennett or Susan Blackmore talking about how little we really know when it comes to consciousness, and saying "see, they are even supporters of the computational model and they admit that we don't know much."

So, Searle was arguing that the Chinese Room, like an expert system, did not understand the subject, but was playing back the understanding of the experts that created the table. Funny how so many people misunderstand its point, like the point of Schrödinger's Cat.

Yeah, but here is the thing -- was Searle clear that the instructions the guy in the room follows are merely some implementation of a lookup table? I don't recall that being explicitly part of the description, and if it is, he hasn't done a good job squashing all the bad versions of the Chinese Room that are crawling around.

Because all I ever hear from armchair philosophers is that the Chinese Room is supposed to show that *any* mechanical instructions the guy follows somehow invalidate any possible understanding of Chinese that the room might have.

In other words, I see the most common interpretation to be a suggestion that the idea of machine consciousness is absurd.

But you and I and anyone who thinks about it know this isn't the case -- if the instructions on the cards represent something more like CPU instructions and register values, meaning the guy is actually just implementing an algorithm that could be anything, it is less clear-cut that the idea of the room understanding Chinese is absurd. And if the instructions on the cards represent something like a neural-network simulation, then it isn't clear at all that the room doesn't understand Chinese. In that case it seems like the room *does* understand Chinese.
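The distinction can be made concrete. Below, the "man in the room" is just a loop blindly executing numbered instruction cards; the instruction set is invented for illustration, and the point is that the cards, not the operator, determine what gets computed:

```python
# The operator blindly follows cards; the program could be anything.
def run(cards, inputs):
    """Execute made-up instruction cards over four registers r0..r3."""
    regs = [0, 0, 0, 0]
    inputs = list(inputs)
    output = []
    for op, *args in cards:
        if op == "read":      # read the next input into a register
            regs[args[0]] = inputs.pop(0)
        elif op == "add":     # add one register into another
            regs[args[0]] += regs[args[1]]
        elif op == "write":   # emit a register's value
            output.append(regs[args[0]])
    return output

# These cards happen to sum two inputs; the operator needn't know that.
cards = [("read", 0), ("read", 1), ("add", 0, 1), ("write", 0)]
print(run(cards, [2, 3]))  # [5]
```

Swap in a different deck of cards and the same clueless operator computes something entirely different, which is exactly why "the guy doesn't understand it" says nothing about what the program as a whole does.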

This is just one of those cases -- like every other case in this discussion, actually -- where incorrectness stems primarily from a failure to be specific about what we are talking about.
 
Huh? We can almost do it now. It isn't even among the most computationally complicated discrete turn-based games by a long shot - that honor goes to Go and Arimaa.

Yes, and actually the best chess engines use a huge amount of lookup table references in their logic. They call it "endgame tablebase" analysis.

However, that is *not* thinking. It is no different from you turning left out of your driveway because you are used to it. At some point, when you first bought your house, you had to *think* about which direction to turn, and the same at the next turn, and so on, when you went to work in the morning. But after a while it is burned into your memory, and you just do it without thinking.

It is also worth noting that endgame tablebases don't help win in games that aren't constrained by artificial rules, and they matter less and less in games with fewer artificial rules. They also don't really help that much in games where the tables can turn rapidly toward the end.
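The tablebase idea, on a toy scale: exhaustively precompute every position's result once, then play by pure lookup at game time. Real chess tablebases (e.g. Syzygy) do this retrograde computation for endgames with few pieces; the game below is a toy take-1-or-2-stones variant chosen only for brevity:

```python
# Precompute win/loss for every position of a tiny game (take 1 or 2
# stones; taking the last stone wins), then play by lookup alone.
MAX_STONES = 100

# table[n] is True iff the player to move from n stones can force a win.
table = [False] * (MAX_STONES + 1)
for n in range(1, MAX_STONES + 1):
    # Winning iff some move reaches a position that is lost for the opponent.
    table[n] = any(not table[n - take] for take in (1, 2) if take <= n)

def tablebase_move(n):
    """No search at game time: pick any move into a lost position."""
    for take in (1, 2):
        if take <= n and not table[n - take]:
            return take
    return 1  # position is already lost; any legal move will do

print(tablebase_move(4))  # 1: leaves 3 stones, a losing position
```

All the "thinking" happened once, offline, when the table was built; at the board there is only the lookup, which is the distinction the post is drawing.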
 
They're all wrong. The word is a catch-all term for a large variety of behavioral and information processing steps, and these days is increasingly hijacked by people trying to push a "humans are special" agenda.

It's almost as bad as "consciousness."
They are all wrong?
I see, so what's your agenda's less-wrong definition of "special," intelligence, and consciousness?
 
Please read what you respond to. I said that if it displays intelligent behavior, it's intelligent.

You seemed to disagree. I was hoping you'd explain why. If you didn't disagree with that, all you have to do is say so.

Well, so what? If it moves, it moves.
Completely uninteresting.
The issue is: what behavior is intelligent?
 
No Dodger, if you want to study Consciousness you need to study human behavior. Reducing human behavior to neuron behavior and then trying to build models of neuron behavior which becomes human behavior is useless unless we know what human behavior is.

You're continually making the false assumption that since human brains are built of neurons, studying the behavior of neurons will let us create brains.
It may be the way you write computer games, by building models from basic logical procedures, but it is useless if you don't know what the model is supposed to model.
Taking the PM approach of defining a complex human behavior such as consciousness as a simple behavior may make the idea of modeling from basic switch behavior easier, but that's irrelevant if we have yet to define the behaviors which make up consciousness.

An economist may define a human as a unit with x spending power for their economic model, but this definition is useless for a doctor who is modeling the spread of TB in a population.

Again, if you're selling games to children and you want them to be convinced the behavior they are seeing is "real," then your skill relates to their ability to be fooled. Attempting to get everyone to accept a limited definition of consciousness so that they can be fooled into believing your programming leads to consciousness is not exactly scientific. The idea of getting everyone to learn programming so they also learn how to trick people and become convinced that tricking people is the way the real world works is also not scientific.
The agenda amongst computationalists is clearly to justify their ability to trick people by claiming that's how the real world also works.
Remind you of priests, anyone?
 
Well, so what? If it moves, it moves.
Completely uninteresting.
The issue is: what behavior is intelligent?

It's fine that you don't find it interesting. It was simply my comment on the "Chinese Room," which seems to be an argument that intelligent behavior is not necessarily a product of intelligence.
 
No Dodger, if you want to study Consciousness you need to study human behavior. Reducing human behavior to neuron behavior and then trying to build models of neuron behavior which becomes human behavior is useless unless we know what human behavior is.

I completely agree.

What is the issue you are complaining about? I am not pixy, and neither are any of the very smart people doing research on machine consciousness. Understanding human behavior is the first step in all of the research that I familiarize myself with.

For example, in the paper I just discussed with mr. scott a few posts ago, the research was done according to known information about primate behavior, namely the way we plan and initiate movements in the context of learning by imitation. Furthermore the information includes things like MRI results so it isn't just pie in the sky either. This is very factual stuff.

The idea of getting everyone to learn programming so they also learn how to trick people and become convinced that tricking people is the way the real world works is also not scientific.
The agenda amongst computationalists is clearly to justify their ability to trick people by claiming that's how the real world also works.
Remind you of priests, anyone?

That isn't the idea. The idea is to get everyone to learn programming because it is almost unique among human endeavors in that it *forces* the practitioner to think logically about something in order to see any results at all. And it is certainly the *only* such endeavor, from the already small set, that is so easily accessible to anyone -- anyone with a computer can start since there are thousands of free compilers and interpreters for whatever language one cares to use.

The fact is, computer science is really about wrapping your brain around algorithms, which are just sequences of events. It is about seeing how to get from point A to point B in reality, a skill far too few people have learned. I wish more scientists of all types were familiar with that skillset; I think the world would progress much faster. I can't tell you how many biology grad students I worked with as a lab assistant spent far too much effort trying to figure out why this or that cellular process or pathway worked the way it did, when, had they taken some courses on programming, they might easily have seen how the steps of the process fit together to produce the results they were seeing.

So why should cognition be any different? It shouldn't. Our brains are made of stuff that behaves according to the laws of nature, and to figure out the ways that stuff might do stuff that leads to things like me typing a response to you simply requires an understanding of how sequences of events lead to results.

Computer science doesn't have to have anything to do with either computers or science. In fact I wish it wasn't named computer science because it is so misleading. It has to do with the study of step by step processes. The advantage I have over people who don't know how to program is that at this point I have an almost intuitive understanding of how step by step processes might lead to this or that result. If you had the same understanding, we wouldn't even be having this argument, because you would see the whole consciousness issue in a completely different light.
 
The proper counter to this stupid argument by the old computer scientists is to just point out that they are idiots. Not formulate an even more bizarre scenario that is so unclear that every armchair philosopher on the internet has spun it into supporting their own uneducated opinions.
Thanks for clarifying. Maybe insulting his audience to their faces would have been a more satisfying response to their assertions, but I doubt it would have had the same impact. Like it or not, his argument was very effective at its intended purpose, and if it's been hijacked these days by true believers, well, so what? They'd have just latched on to something else otherwise.
 
What you don't take into consideration is that you know nothing about 1) computing, 2) the brain, 3) advances in A.I.

If that *is* taken into consideration, it becomes clear that you are wrong.

In particular, there has been an amazing amount of progress when it comes to neural network models demonstrating fundamentally conscious behaviors in the last 10 years.
How did the program demonstrate it was fundamentally conscious? Ask for more RAM? Faster clock speed? More CD read/write space? Larger power supply? More pixels? A 132-character high-speed printer? Other requests? :confused: :)
 