
Explain consciousness to the layman.

Yet you have not convinced me that even the most detailed and accurate map is ever equivalent to the terrain that it models.

Yes this is correct.

However, it has already been brought up that consciousness is more like a map to begin with, and going from a map to a map is much less lossy than going from the terrain to a map.

Do you disagree?

Kindly spell out the specific methodology you had in mind for such a verification. I'm not 100% convinced that everyone I converse with is conscious even when they are right in front of me. :-/

Well for starters, I am fairly certain you are conscious, just based on the replies you have given me to my statements. I can go more in depth in a bit.
 
The living cells are made of matter as are the transistors.

As are the rocks. Indeed, the rocks are made out of the same stuff as the transistors, and similar processes occur in them. Rocks have far more in common with transistors than either does with brains. Brains have far more in common with trees than either does with transistors.

That doesn't prove anything, of course. It is quite possible that there is a process going on in the brain and the computer, but not in the rock or the tree. However, it must be demonstrated.
 
As are the rocks. Indeed, the rocks are made out of the same stuff as the transistors, and similar processes occur in them. Rocks have far more in common with transistors than either does with brains. Brains have far more in common with trees than either does with transistors.

That doesn't prove anything, of course. It is quite possible that there is a process going on in the brain and the computer, but not in the rock or the tree. However, it must be demonstrated.

What an utter mess of bad strawman equivalences.

Comparing transistors to brains? Really?
 
The rebuttals tend to be along the lines of "Westprog can't tell the difference between a computer and a rock".
Well when you claim things like a computer is more similar to a rock than to a brain, what do you expect?

"Information", as in physical information, has a number of different overlapping definitions. Which of these we are supposed to be using is left as an exercise for the reader. Clearly a rock contains a similar amount of physical information both before and after converting to silicon chips. This isn't controversial stuff.

Which doesn't have anything to do with the current discussion. Nobody is talking about "physical information" but you.

What you are doing is taking a term that is being used in a very narrow sense here -- information -- and showing everyone "hey, there are many more definitions for it, and some of them apply to rocks as well as computers" and hoping that people will get confused.

Except nobody is confused, other than perhaps piggy.

It is obvious to everyone ( excluding you two ) that it is simply impossible to use a rock by itself to process any kind of information at all. It simply cannot be done, no matter how smart or how advanced the user of the rock is.

Trying to obfuscate the discussion by throwing notions like "physical" information into the fray isn't fooling as many people as you seem to think it is.
 
Actually, I'd like to know, too. Not because I agree with piggy or westprog on this, but because I'd like to know what to answer to someone who tells me information processing goes on in a rock.

A more formal definition than the one I had before:

Information is a mapping of a set of states in one system to a smaller set of states in another system.

Information processing is using that mapping to some end.

For example, the thermostat portion of your heating control ( which most people just call a thermostat, even though only part of it is the thermostat ) maps all the states of the environment to the two states of the signal it produces based on the temperature of the environment. For all environment states above a given temp the signal is OFF, and for all environment states below the given temp the signal is ON ( or vice versa ).

That is information -- in this case, about the environment. Any system could use that information to change its behavior, at which point it would become an information processor. For example the control portion of your heating control takes the information from the thermostat ( in the form of the signal the thermostat produces ) and processes it, resulting in climate control.
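
To make that concrete, here is a minimal sketch in Python of the thermostat example. The set point and the sample temperatures are invented values, purely for illustration:

```python
# A minimal sketch of the thermostat example above. The set point and
# the sample temperatures are invented values, purely for illustration.

SET_POINT = 20.0  # degrees C, an arbitrary threshold

def thermostat(environment_temp):
    """Information: the (effectively infinite) set of environment
    states is mapped onto just two signal states."""
    return "ON" if environment_temp < SET_POINT else "OFF"

def heating_control(signal):
    """Information processing: the mapping is used to some end --
    here, deciding whether to run the furnace."""
    return "run furnace" if signal == "ON" else "idle"

for temp in (14.2, 19.9, 20.1, 31.0):
    print(temp, "->", heating_control(thermostat(temp)))
```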

Fast forward to something like our brains -- the signals caused by the firing of our retinal neurons are information because they map the states of one system ( the environment ), based on some metric ( the number of photons coming from a given direction in the environment ), to a smaller set of states in another system ( the axons of the retinal neurons ). An infinite set of potential incoming photon states is mapped to a fairly small set of discrete firing patterns that travel along the surface of the neuron axon.

The brain then processes this information, resulting in our bodies being able to reproduce with greater likelihood ( by doing things like finding food or avoiding tigers ).
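
A toy version of that retinal mapping, hedged the same way -- the firing levels and bin edges below are invented for the sketch, and real neurons are far richer:

```python
import bisect

# Invented discretisation, purely for the sketch: many possible photon
# counts collapse onto a small set of discrete firing states.
FIRING_LEVELS = ["silent", "slow", "medium", "fast"]
THRESHOLDS = [10, 100, 1000]  # photons per interval (arbitrary bin edges)

def retinal_neuron(photon_count):
    """Map a continuous input quantity onto one of four discrete states."""
    return FIRING_LEVELS[bisect.bisect(THRESHOLDS, photon_count)]

print([retinal_neuron(n) for n in (0, 50, 700, 1_000_000)])
# ['silent', 'slow', 'medium', 'fast']
```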

Make sense?

The idea of information and information processing is implicitly linked with the notion of "use." There is no such thing as information that can't be processed for some use, because that is what information is.
 
It is true that the best chess computer has a winning record against the best human player.

However most good human players can wipe the floor with most chess programs -- it takes a supercomputer to even the odds for the robot side.

This is not the case anymore. Current chess engines (like Houdini) will probably beat any human chess player in a game with time controls.

Already in 2009 the chess engine Hiarcs 13 (inside Pocket Fritz 4) won the Copa Mercosur tournament, and it was running on a mobile phone. Currently Houdini seems to be the best of the chess engines; it would probably beat Hiarcs hands down.
 
In fact, the way a basic chess program plays chess is very much like the way a competent human player does. They will both typically make the initial moves from a memorised set of openings, then they will examine each legal move tree to a certain ply or depth, evaluate the resulting positions according to some tactical or strategic criteria and rank them accordingly, then select the highest ranked moves for further evaluation, or play the highest ranked move. Programs can also learn from previous games and evaluations.

We shouldn't be surprised at the similarities between the two, because chess programs are written to follow the basic procedures humans follow. If the human player that does that is using intelligence, why isn't the computer program also using intelligence? If nothing else, the program is using the intelligent analysis strategy of the person who devised the position evaluation algorithm.

Some might argue that if the program uses the programmer's evaluation algorithm, then it's the programmer that's intelligent, not the running program; but it's also possible to write a program that can learn a strategy and how to apply it, similar to a non-chess player learning how to play chess. Would we say the chess player who has learned from a club player isn't playing intelligently, but is only using the intelligence of the club player that taught him? If not, why say that of the program? Isn't there a hint of double standards in that argument?
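
The search-evaluate-rank loop described above is easy to sketch. Here is a minimal Python version using the toy game of Nim so the example stays self-contained; a real chess engine would swap in chess-specific move generation and evaluation (plus an opening book), but the skeleton is the same:

```python
# Depth-limited negamax: examine the legal move tree to a fixed ply,
# evaluate the resulting positions, rank the moves, play the best one.
# Nim rules here: take 1-3 objects; taking the last object wins.

def legal_moves(pile):
    return [take for take in (1, 2, 3) if take <= pile]

def evaluate(pile):
    # Score from the side to move's point of view. An empty pile means
    # the previous player took the last object, so the mover has lost.
    return -1.0 if pile == 0 else 0.0

def negamax(pile, depth):
    if depth == 0 or pile == 0:
        return evaluate(pile)
    return max(-negamax(pile - m, depth - 1) for m in legal_moves(pile))

def best_move(pile, depth=6):
    # Rank every legal move by the value of the position it leads to.
    return max(legal_moves(pile),
               key=lambda m: -negamax(pile - m, depth - 1))

print(best_move(7))  # 3 -- leaves a pile of 4, a lost position for the opponent
```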

Now we are delving into what intelligence is, which is related to but not the same thing as consciousness (even if I cannot think of an example where one is found without the other).

I would class myself as a competent chess player (strong club level but sub-master) and reading your post, I find it less obvious than I did before what the difference is between me and a computer. Sometimes I will play a move because it looks nice and sometimes I make a choice to steer the game into a channel which suits my style and I doubt whether the computer does either of these things.

But when calculating variations I suppose I am mostly trying and mostly failing to do what the computer does effortlessly and more or less faultlessly (computers never hang pieces or overlook simple checkmate). Still I am unwilling to concede intelligence to the machine, without being able to say properly why.

In the same radio programme in which I heard about the Chinese Room and whether thermostats could think I recall one of the participants saying a computer had the intelligence of an earwig. This must have been one terrific radio show btw. because it was several decades ago and I seem to be able to remember a lot about it.

Anyway, the earwig was not explained further so I thought about it myself (dangerous, I know). An earwig can distinguish light and dark and probably can be counted on to head for the dark when given a choice. So it has a binary quality equivalent to the computer's fundamental oscillation between 0 and 1. If there were no time constraints (or if earwigs travelled at the speed of light) you could build an earwig computer in which the animal's preference for darkness could be used somehow with some clever engineering and programming and, provided you didn't mind the game taking several million years, you could probably play chess against it. Would the emergent properties of this earwig machine add up to intelligence or is this really intelligence decanted from the programmer's and the engineer's heads into the mostly inert earwig machine, as you suggest in your post?

Is a computer just a repository of human intelligence with no intelligence of its own?
 
Still I am unwilling to concede intelligence to the machine, without being able to say properly why.
You are not alone in this. This natural reluctance makes an interesting contrast with that other human trait - anthropomorphism. We constantly imbue the non-human with human characteristics, yet at the same time we jealously deny human characteristics to the non-human. Both are understandable, but neither is rational. We remake things in our own image, but must remain supremely apart; remind you of anyone?

Is a computer just a repository of human intelligence with no intelligence of its own?
Sometimes a yes/no answer isn't enough. Do the social and cultural skills you have learned and been taught demonstrate your own intelligence or that of the culture you learned them from?

Could this be a false dichotomy? ;)
 
Well when you claim things like a computer is more similar to a rock than to a brain, what do you expect?

Which doesn't have anything to do with the current discussion. Nobody is talking about "physical information" but you.

What you are doing is taking a term that is being used in a very narrow sense here -- information -- and showing everyone "hey, there are many more definitions for it, and some of them apply to rocks as well as computers" and hoping that people will get confused.

Except nobody is confused, other than perhaps piggy.

It is obvious to everyone ( excluding you two ) that it is simply impossible to use a rock by itself to process any kind of information at all. It simply cannot be done, no matter how smart or how advanced the user of the rock is.

Trying to obfuscate the discussion by throwing notions like "physical" information into the fray isn't fooling as many people as you seem to think it is.

Well, if I pick up a rock and smash in your head I might be using a rock to process the information that I am mad at you.:)
 
This is not the case anymore. Current chess engines (like Houdini) will probably beat any human chess player in a game with time controls.

Already in 2009 the chess engine Hiarcs 13 (inside Pocket Fritz 4) won the Copa Mercosur tournament, and it was running on a mobile phone. Currently Houdini seems to be the best of the chess engines; it would probably beat Hiarcs hands down.

Well that is interesting -- I wonder if they have moved away from the brute force algorithms of the past?

I am going to read up on how Houdini does it.
 
Well, if I pick up a rock and smash in your head I might be using a rock to process the information that I am mad at you.:)

Well all kidding aside, in that case the rock is perhaps information but it is certainly not doing the processing.

If I throw a rock at your head, your brain would probably be correct in interpreting that information as evidence that I am mad at you. However it is your brain that is doing the processing, not the rock.
 
Well, since yy2 defines a brain as a "symbol system" I'm afraid that's a dead end there.
Since I have more time now, another reply.

First off--I didn't define a brain as a symbol system. In fact, I never even gave you a definition for brain. Therefore, one has to wonder where you got that this was my definition. If it wasn't from me, that leaves only you.

Now, you have no idea what symbol manipulation is, and no inclination to find out. You instead simply want to declare your expertise about it, and accuse those who do understand it of committing grave errors in thinking. When I finally figured out the exact nature of your error and pointed it out, instead of manning up and admitting you were wrong--you simply accused me of talking "shop talk" and trying to confuse you.

So I'll say it once more. A symbol is simply a uniquely identifiable state of a system that uses a series of transformations. These symbols are simply steady states of the system under the various transformations that the system performs. These symbols need not represent something--they just transform in particular ways given these transformations. What makes a particular instance of symbol that symbol per se is simply that it has the same effect on all transformations as any other instance of the symbol would have; what makes symbols different is that there exists some transformation that can distinguish them--that is, that for some transformation there is a different effect on one than there is for another.

Given a set of symbols and transformations such as the above, and an input, you can perform analyses on the input using those symbols. In fact, we can define "input" into this system as some mechanism by which a particular set of symbols can be produced that reflects some state of the external world. For example, if you had a touchpad interface with a grid of a particular symbol we call "1", and pushing on it flips the symbol to another one that we call "0", then that is an input--and that can be analyzed by this system by transforming those two equivalence classes. Now per your account, I had to actually say that "1" meant "not touched" and "0" meant "touched", but really, nothing changes if I use 0 for untouched and 1 for touched. Or X for untouched and Y for touched. Or if I just left off all of the convenience labels, and simply said that there's a distinct symbol invoked when you touch the pad at certain locations with a particular pressure from when you don't.

But in all of those cases, there is a symbol in the system that corresponds to your touching the pad versus not touching it. And whatever you call those symbols, the transformations of them can analyze that touching. And by "analyze", I simply mean that the symbols--as defined above--can be affected in various complex ways by what's being touched.

So if your touchpad can identify when someone draws the symbol "2" on it, then that means exactly what it says. It means that it can produce distinct behaviors in terms of its internally identifiable states (i.e., symbols) when someone draws the symbol "2" versus when someone doesn't. Given a device does this, it does not matter what things are "supposed to represent"; that state will be there sometimes, and it won't be there other times, and the times it will be there would be when you draw that symbol, and the times it won't will be when you don't.

It's symbol manipulation. It's not poetry.
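
The touchpad example is small enough to sketch in code. In the Python below, everything is invented for illustration -- the grid size and the pattern we choose to treat as a drawn "2" -- and the two states are deliberately anonymous objects, since the point is that only the transformations distinguish them:

```python
# Two distinguishable states with no names and no inherent meaning:
UNTOUCHED, TOUCHED = object(), object()

def blank_pad(size=3):
    return [[UNTOUCHED] * size for _ in range(size)]

def touch(pad, row, col):
    pad[row][col] = TOUCHED  # input: the outside world flips a symbol

# An arbitrary made-up pattern we choose to treat as "someone drew a 2":
TWO_STENCIL = {(0, 0), (0, 1), (0, 2), (1, 1), (2, 0), (2, 1), (2, 2)}

def recognises_two(pad):
    """A transformation over the pad's states. What makes those states
    'symbols' is only that this transformation reacts to exactly one
    configuration of them -- not what anyone names them."""
    touched = {(r, c) for r, row in enumerate(pad)
               for c, state in enumerate(row) if state is TOUCHED}
    return touched == TWO_STENCIL

pad = blank_pad()
for r, c in TWO_STENCIL:
    touch(pad, r, c)
print(recognises_two(pad))  # True: the device is in its "saw a 2" state
```

Rename the states however you like; recognises_two behaves identically, because it tests which state a cell is in, not what the state is called.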
 
Well, if I pick up a rock and smash in your head I might be using a rock to process the information that I am mad at you.:)

A computer is certainly more useful at processing information for a person. That doesn't mean that more information passes through it. It just means that a person can put information in and get information out. This only becomes an issue if we start - for some odd reason - to consider how the computer feels about things on a standalone basis, rather than as a tool made by human beings.
 
Since I have more time now, another reply.

First off--I didn't define a brain as a symbol system. In fact, I never even gave you a definition for brain. Therefore, one has to wonder where you got that this was my definition. If it wasn't from me, that leaves only you.

Now, you have no idea what symbol manipulation is, and no inclination to find out. You instead simply want to declare your expertise about it, and accuse those who do understand it of committing grave errors in thinking. When I finally figured out the exact nature of your error and pointed it out, instead of manning up and admitting you were wrong--you simply accused me of talking "shop talk" and trying to confuse you.

So I'll say it once more. A symbol is simply a uniquely identifiable state of a system that uses a series of transformations. These symbols are simply steady states of the system under the various transformations that the system performs. These symbols need not represent something--they just transform in particular ways given these transformations. What makes a particular instance of symbol that symbol per se is simply that it has the same effect on all transformations as any other instance of the symbol would have; what makes symbols different is that there exists some transformation that can distinguish them--that is, that for some transformation there is a different effect on one than there is for another.

Given a set of symbols and transformations such as the above, and an input, you can perform analyses on the input using those symbols. In fact, we can define "input" into this system as some mechanism by which a particular set of symbols can be produced that reflects some state of the external world. For example, if you had a touchpad interface with a grid of a particular symbol we call "1", and pushing on it flips the symbol to another one that we call "0", then that is an input--and that can be analyzed by this system by transforming those two equivalence classes. Now per your account, I had to actually say that "1" meant "not touched" and "0" meant "touched", but really, nothing changes if I use 0 for untouched and 1 for touched. Or X for untouched and Y for touched. Or if I just left off all of the convenience labels, and simply said that there's a distinct symbol invoked when you touch the pad at certain locations with a particular pressure from when you don't.

But in all of those cases, there is a symbol in the system that corresponds to your touching the pad versus not touching it. And whatever you call those symbols, the transformations of them can analyze that touching. And by "analyze", I simply mean that the symbols--as defined above--can be affected in various complex ways by what's being touched.

So if your touchpad can identify when someone draws the symbol "2" on it, then that means exactly what it says. It means that it can produce distinct behaviors in terms of its internally identifiable states (i.e., symbols) when someone draws the symbol "2" versus when someone doesn't. Given a device does this, it does not matter what things are "supposed to represent"; that state will be there sometimes, and it won't be there other times, and the times it will be there would be when you draw that symbol, and the times it won't will be when you don't.

It's symbol manipulation. It's not poetry.

It's always possible to assign symbols to any state changes of any physical system. If the possible state transitions are restricted, then the symbolic changes are restricted in the same way. However, such symbols are merely names we give to the state changes - and we could choose entirely different ways of assigning states, and assign different symbols to them, for any given physical system. None of these changes are any more "real" than any other.
 
You are not alone in this. This natural reluctance makes an interesting contrast with that other human trait - anthropomorphism. We constantly imbue the non-human with human characteristics, yet at the same time we jealously deny human characteristics to the non-human. Both are understandable, but neither is rational. We remake things in our own image, but must remain supremely apart; remind you of anyone?

Sometimes a yes/no answer isn't enough. Do the social and cultural skills you have learned and been taught demonstrate your own intelligence or that of the culture you learned them from?

Could this be a false dichotomy? ;)

Humans are able to demonstrate intelligence on their own, and they are able to amplify this intelligence in many ways - using culture, books, computers, the internet - but there's no reason to assume that these objects demonstrate intelligence on their own in the absence of human beings. If all the things that humans use to amplify intelligence were removed, humans would still demonstrate intelligence. If human beings were removed, then nothing demonstrating intelligence would exist until the dolphins grew legs and read some of the discarded books. Whether or not there's something called intelligence embedded in these objects, it certainly needs conscious human beings to release it.
 
It's always possible to assign symbols to any state changes of any physical system.
It's not necessarily always possible unless you explicitly consider trivial mappings. It may turn out to be always possible in a non-trivial way, but that doesn't change anything.
If the possible state transitions are restricted, then the symbolic changes are restricted in the same way.
That's certainly true. A calculator with water inside of it may actually "compute" some funky results, simply because the possible state transitions changed. Then again, that calculator without the water in it can simply be turned off or disassembled as well, and no longer be the same symbol manipulation mechanism; and while it's working properly, a series of computations can give accurate results precisely because it is calculating under the restriction... and during that series of calculations, we can describe the behavior of the calculator as a particular symbolic manipulation mechanism.

But this also doesn't imply anything to contrast with the human mind. If the human mind turned out to be built on a symbolic manipulation system, then we could have similar failures of the mind; we can have degenerative brain diseases that affect how the mind functions, and we can die.
However, such symbols are merely names we give to the state changes - and we could choose entirely different ways of assigning states, and assign different symbols to them, for any given physical system.
Not really. We consider it to be the same calculator if we give it to our neighbor as it is if we give it to a kid in China. We may give completely different names to the states, but that in itself doesn't mean we're describing the symbol manipulation mechanism differently.

And in this scenario, both the Chinese kid and our neighbor would very likely be thinking of the calculator in terms of the same symbol manipulation machine, despite the different names they give to the symbols. Furthermore, the calculator itself would be doing the same thing with the same physical states.

So, no. The symbols are not the names we give to the state changes. They are the configurations under the restricted set of transformations that we give those names to.
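
For what it's worth, that point is easy to demonstrate in code. In this sketch (all labels invented for illustration), two people name the same two calculator states differently, and the machines agree on every transition once the labels are translated -- the same symbol manipulation mechanism under two naming schemes:

```python
# Our neighbour's labelling of a one-bit flip machine:
flip = {"1": "0", "0": "1"}

# The same machine under a hypothetical Chinese kid's labels:
flip_cn = {"yi": "ling", "ling": "yi"}

# The translation between the two labellings:
rename = {"1": "yi", "0": "ling"}

# Same transition behaviour up to relabelling, hence the same mechanism:
print(all(rename[flip[s]] == flip_cn[rename[s]] for s in flip))  # True
```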

You can have multiple layers of a symbol manipulation machine, however. Given that:
None of these changes are any more "real" than any other.
...then sure.
 
If human beings were removed, then nothing demonstrating intelligence would exist until the dolphins grew legs and read some of the discarded books. Whether or not there's something called intelligence embedded in these objects, it certainly needs conscious human beings to release it.
Nonsense. Your bar for intelligence is not only set too high, but it's set to human. Other animals can demonstrably outperform humans in intellectual feats.

I dare you to keep up with what this chimp is doing:
[embedded video of a chimp reading single-digit numbers]

ETA: I'm not claiming any non-human animal is going to read Chaucer any time soon, but this chimp's ability to read single digit numbers exceeds my own.
 
Nonsense. Your bar for intelligence is not only set too high, but it's set to human. Other animals can demonstrably outperform humans in intellectual feats.

I dare you to keep up with what this chimp is doing:
[embedded video of a chimp reading single-digit numbers]

ETA: I'm not claiming any non-human animal is going to read Chaucer any time soon, but this chimp's ability to read single digit numbers exceeds my own.

Chimps can read? Work computers? Those were the examples I was giving, advisedly. Other animals can use extensions to their intelligence, but humans use more.

N.B. I specifically mentioned dolphins in this context.
 
Chimps can read? Work computers?
Well, yeah. That chimp is doing both. Am I missing something?

Those are numbers. And that's a computer. Obviously I'm not going to recommend hiring this chimp to work with me on computer programming; OTOH, the feat at which the chimp is beating me hands down in real time is an intellectual one. I for one count it.
 
Well, yeah. That chimp is doing both. Am I missing something?

Those are numbers. And that's a computer. Obviously I'm not going to recommend hiring this chimp to work with me on computer programming; OTOH, the feat at which the chimp is beating me hands down in real time is an intellectual one. I for one count it.

Fine, include chimps (with their brains made of neurons and DNA 95% the same as humans) in the category of intelligent beings. It really doesn't change the principle.

It's why I prefer to leave animals aside in discussions of consciousness. If they aren't conscious, it doesn't prove anything, and if they are conscious, it doesn't prove anything.
 