
Explain consciousness to the layman.

I think that more time has been spent explaining that there's no point in giving the satisfactory definitions because of the motives of the people objecting to them than in just giving the definitions.

What an outright lie.

I can point to entire threads that got taken over by the endless merry-go-round of people trying to offer you a definition of these terms, the predictable responses that such a definition also includes rocks and soup or doesn't include monkeys or it is absurd that toasters are conscious, and people attempting to further refine their definition and re-present it to you.

The amount of words spent saying "there is no point" is a mere fraction of what has been spent arguing with you and others in an attempt to arrive at an agreement regarding the definition.

Your statement here is so inaccurate that I question whether you actually read these threads at all.
 
No, the Chinese Room is not conscious and yes, from the outside it would be impossible to tell.

However, this isn't relevant to anyone except philosophy professors. We know for a fact that programming a Chinese Room would take exponentially more resources than simply programming a machine that can learn meaning by itself. Furthermore, the programmers themselves certainly know the approach they took.

So if I program a robot to learn, and it comes back a year later and has a conversation in Chinese with me, I can safely assume that it is not a Chinese Room. I can safely assume that it genuinely understands Chinese, because that is by far the lowest-cost approach to having a Turing-equivalent conversation in Chinese, especially since I didn't program it to be a Chinese Room.

Not only that, but if it talks about stuff like meeting this woman and falling in love with her, or how pretty the sunset was in some part of China on this one night, or being afraid that the Chinese government would put it in jail for speaking out about human rights abuses, or even that it almost committed suicide with the rest of the workers at Foxconn, I think it is a safe assumption that it is conscious. How else would it be aware of such deep meaning behind the conversation? A lookup table? Nah.

This is the same kind of argument that people apply to evolution to explain how our consciousness arose in the first place. It is simply too expensive to have a gigantic lookup table, or some other type of super-organized structure, dictating behavior. It is far cheaper, and thus far more useful for an organism trying to survive in a dynamic world, to be able to learn-->understand-->react. If evolution had led to us being Chinese Rooms it would fly in the face of everything we know.
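To put rough numbers on the "exponentially more resources" point, here is a toy back-of-the-envelope sketch in Python. The vocabulary size, exchange length, and "learned parameters" figure are all made up purely for illustration; the only point is how fast the lookup table blows up compared to a fixed-size learning system.

```python
# Toy arithmetic, not a real estimate: how fast a conversation lookup table
# grows compared to a fixed-size learning system. All numbers are invented
# for illustration.

VOCAB = 3000          # hypothetical working vocabulary of Chinese words
EXCHANGE_LENGTH = 20  # hypothetical length (in words) of a single input

# A Chinese-Room-style lookup table needs an entry for every possible
# input sequence it could ever receive.
table_entries = VOCAB ** EXCHANGE_LENGTH

# A learning system's storage is fixed up front and grows only with what
# it has actually experienced -- call it ten billion parameters.
learned_parameters = 10 ** 10

print(f"lookup table entries: {table_entries:.2e}")      # ~3.5e69
print(f"learned parameters:   {learned_parameters:.2e}")
print(f"ratio:                {table_entries / learned_parameters:.2e}")
```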

Does that make sense?
Yes it does.

I still don't think it explains what consciousness is and it also borders on conflating it with intelligence. It takes intelligence for humans to play chess well but computers can play much better now than us and they do so with neither intelligence nor consciousness. Maybe there are neural networks or something in development but computers presently do it the hard way, basically cheating by performing millions of calculations a second rather than thinking.

But now I am wondering whether my brain is doing the same thing but also throwing up a simulation of myself to conceal from me all the boring computations. In fact it must be doing something like this, because it is running all sorts of operations without my being conscious of them, although some of them, like breathing, I can choose to be aware of if I want.

If someone throws a ball and I catch it, is my brain solving quadratic equations (or some other complicated mathematics I wouldn't understand) which could theoretically be written out if we could access them? If so, this is another of the things it does which we cannot access.
 
So, a definition to be shot down: consciousness is the brain's representation of the self in its simulation of reality.
Hmm.. 'representation of self' would seem to require referencing of self, or 'Self Referencing'; are we half-way to SRIP already?
 
I am only aware of two people who hold the opinion that "information processing" is constantly going on in rocks: you and westprog.

You mean "self-referential" information processing, right ?

Frankly that isn't a big enough crowd to bother with, especially given what I know about you two. It would be an utterly pointless exercise to try and explain it further.

Actually, I'd like to know, too. Not because I agree with piggy or westprog on this, but because I'd like to know what to answer to someone who tells me information processing goes on in a rock.
 
You mean "self-referential" information processing, right ?



Actually, I'd like to know, too. Not because I agree with piggy or westprog on this, but because I'd like to know what to answer to someone who tells me information processing goes on in a rock.

I've never intentionally claimed that information processing goes on in a rock. I've simply noted that the definitions of information processing which supposedly apply to brains or computers can easily be applied to rocks as well. As soon as restrictions are applied, either computers or brains drop out of consideration. The rebuttals tend to be along the lines of "Westprog can't tell the difference between a computer and a rock".

"Information", as in physical information, has a number of different overlapping definitions. Which of these we are supposed to be using is left as an exercise for the reader. Clearly a rock contains a similar amount of physical information both before and after converting to silicon chips. This isn't controversial stuff.
 
What about the guy inside the sealed room with input and output slots for Chinese-only symbols? Or rather, what about the whole system, which happens to have a guy inside it but actually could have anything capable of receiving and passing out symbols according to rules. It would not be conscious in our thought experiment but could be indistinguishable to an outside observer from something that was conscious (like a Chinese-speaking human being).

Maybe the consciousness is in the programmer? Strike that for now. Just have a crack at the Chinese symbol-producing machine.

The Stanford Encyclopedia of Philosophy has a readable section on Searle's Chinese Room, which (briefly) covers some of the stuff discussed in this thread.
 
Can you have one without the other?

I don't know. But maybe some can't imagine strawberries without cream and vice versa and it wouldn't mean they are the same thing. Humans vary in intelligence but not in consciousness (don't ask me to prove that). At the moment, I cannot think of an example of either without the other but I expect someone around here will. Still, I think they are different, what with having different names and everything.
 
Actually, I'd like to know, too. Not because I agree with piggy or westprog on this, but because I'd like to know what to answer to someone who tells me information processing goes on in a rock.

Well, the simplest definition is that information processing goes on when something is being used to process information.

Cells use certain proteins to process the information in DNA.
Animals use sets of neurons to process internal and external state information.
Human bodies use brains to process many kinds of information.
Humans and robots use transistors to process any kind of information.

All of these scenarios entail a system that can potentially exhibit drastically different behavior based on extremely small changes in the "information" being processed, and they can do it continually and repeatedly.

Reading just a few nucleotides -- less than a hundred molecules total -- leads to huge differences in cell behavior. Detecting light -- just a few photons -- leads to huge differences in animal behavior. Hearing a woman scream -- just a slight disturbance in an auditory membrane -- leads to huge differences in human behavior. Changing a single bit in a computer somewhere -- just the atoms of a teeny tiny transistor -- can lead to huge differences in all sorts of systems, for example opening up the floodgates of a dam.

Can a system use a rock to process information continually like that? Nope. A bowl of soup? Nope. A powered down computer? Nope. A dead brain? Nope.

That doesn't exactly explain what information processing is, but it explains enough to get anyone started, and it certainly explains enough to discount almost the entire rest of the universe from being stuff that processes information.
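For what it's worth, here is a minimal sketch in Python of the distinction I'm drawing. The "floodgate controller" is a made-up example: a system that can repeatedly produce drastically different behaviour from a one-bit change in its input, versus something like a rock that maps every input to the same thing.

```python
# Minimal sketch of the distinction above. The floodgate controller is a
# made-up example; the point is that a one-bit change in the input keeps
# producing a drastically different behaviour, over and over.

def floodgate_controller(sensor_bit: int) -> str:
    """One bit of input selects between two very different behaviours."""
    return "OPEN FLOODGATES" if sensor_bit else "hold water"

def rock(anything) -> None:
    """A rock 'responds' to every input in exactly the same way."""
    return None

for reading in [0, 0, 1, 0, 1, 1, 0]:
    print(reading, "->", floodgate_controller(reading), "|", rock(reading))
```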
 
.. It takes intelligence for humans to play chess well but computers can play much better now than us and they do so with neither intelligence nor consciousness. Maybe there are neural networks or something in development but computers presently do it the hard way, basically cheating by performing millions of calculations a second rather than thinking.
In fact, the way a basic chess program plays chess is very much like the way a competent human player does. They will both typically make the initial moves from a memorised set of openings, then they will examine each legal move tree to a certain ply or depth, evaluate the resulting positions according to some tactical or strategic criteria and rank them accordingly, then select the highest ranked moves for further evaluation, or play the highest ranked move. Programs can also learn from previous games and evaluations.

We shouldn't be surprised at the similarities between the two, because chess programs are written to follow the basic procedures humans follow. If the human player that does that is using intelligence, why isn't the computer program also using intelligence? If nothing else, the program is using the intelligent analysis strategy of the person who devised the position evaluation algorithm.

Some might argue that if the program uses the programmer's evaluation algorithm, then it's the programmer that's intelligent, not the running program; but it's also possible to write a program that can learn a strategy and how to apply it, similar to a non-chess player learning how to play chess. Would we say the chess player who has learned from a club player isn't playing intelligently, but is only using the intelligence of the club player that taught him? If not, why say that of the program? Isn't there a hint of double standards in that argument?
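Here is a short sketch of the procedure described above: search the tree of legal moves to a fixed ply, score the resulting positions with an evaluation function, and play the highest-ranked move. It's written as a negamax search in Python; the Position interface (legal_moves / apply / evaluate) is assumed for illustration and isn't any particular chess library's API.

```python
# Sketch of the look-ahead-and-evaluate procedure described above.
# The Position interface (legal_moves / apply / evaluate) is assumed.

def negamax(position, depth):
    """Best achievable score for the side to move, searching `depth` plies."""
    if depth == 0 or not position.legal_moves():
        return position.evaluate()          # e.g. material and mobility heuristics
    best = float("-inf")
    for move in position.legal_moves():
        child = position.apply(move)        # position after the move, opponent to play
        best = max(best, -negamax(child, depth - 1))
    return best

def choose_move(position, depth=4):
    """Rank every legal move by its search score and play the highest-ranked one."""
    moves = position.legal_moves()
    if not moves:
        return None
    return max(moves, key=lambda m: -negamax(position.apply(m), depth - 1))
```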
 
I am sorry but I don't understand that. Could you make it less cryptic?
Cryptic? Sorry, I didn't realise it was.

SRIP is 'Self-Referencing Information Processing', the operational definition of consciousness Pixy has used. Trivially, half of SRIP is 'Self-Referencing ...'.

'Representation of self' implies the referencing of self, i.e. self-referencing. So self-referencing is 'half-way to SRIP'.

It was an ironic attempt to point out how self-reference is key to consciousness, and how this only differs from Pixy's definition by not specifying 'Information Processing' as the mechanism.

ETA: ironic because while most accept the 'self-referencing', the 'information processing' seems a bigger hurdle, so it's actually more than half of SRIP in this respect.
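If it helps, here is a toy sketch in Python (every name invented for illustration) of what the 'self-referencing' half might mean in practice: a system whose processing loop takes a representation of its own state as part of the information it processes, rather than reacting to external input alone.

```python
# Toy sketch of "self-referencing" processing, with invented names.
# The agent's behaviour depends not just on external input but on a
# representation of its own state, which it reads and updates each step.

class Agent:
    def __init__(self):
        # The agent's model of itself, distinct from its model of the world.
        self.self_model = {"energy": 1.0, "goal": "explore"}

    def step(self, external_input: float) -> str:
        # Ordinary information processing: respond to the outside world...
        # ...but the response also consults and updates the self-model.
        self.self_model["energy"] -= 0.1
        if self.self_model["energy"] < 0.5:
            self.self_model["goal"] = "rest"
        return f"input={external_input}, current goal: {self.self_model['goal']}"

agent = Agent()
for signal in [0.2, 0.9, 0.4, 0.7, 0.1, 0.3]:
    print(agent.step(signal))
```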
 
In fact, the way a basic chess program plays chess is very much like the way a competent human player does. They will both typically make the initial moves from a memorised set of openings, then they will examine each legal move tree to a certain ply or depth, evaluate the resulting positions according to some tactical or strategic criteria and rank them accordingly, then select the highest ranked moves for further evaluation, or play the highest ranked move. Programs can also learn from previous games and evaluations.

We shouldn't be surprised at the similarities between the two, because chess programs are written to follow the basic procedures humans follow. If the human player that does that is using intelligence, why isn't the computer program also using intelligence? If nothing else, the program is using the intelligent analysis strategy of the person who devised the position evaluation algorithm.

Some might argue that if the program uses the programmer's evaluation algorithm, then it's the programmer that's intelligent, not the running program; but it's also possible to write a program that can learn a strategy and how to apply it, similar to a non-chess player learning how to play chess. Would we say the chess player who has learned from a club player isn't playing intelligently, but is only using the intelligence of the club player that taught him? If not, why say that of the program? Isn't there a hint of double standards in that argument?

It's not surprising that Chess is well-suited to computers, because it was one of the first ways that humans restricted themselves to a digital viewpoint. Rather than a way that computers became more like humans, it was a way for humans to think like computers.*

*Yes, I know that this happened before computers existed. Computers were invented to deal with the digital realm created by mathematics, chess etc.**

**No, I don't think it is an actual realm.
 
Cryptic? Sorry, I didn't realise it was.

SRIP is 'Self-Referencing Information Processing', the operational definition of consciousness Pixy has used. Trivially, half of SRIP is 'Self-Referencing ...'.

'Representation of self' implies the referencing of self, i.e. self-referencing. So self-referencing is 'half-way to SRIP'.

It was an ironic attempt to point out how self-reference is key to consciousness, and how this only differs from Pixy's definition by not specifying 'Information Processing' as the mechanism.

ETA: ironic because while most accept the 'self-referencing', the 'information processing' seems a bigger hurdle, so it's actually more than half of SRIP in this respect.

However, even if the brain is in some way self-referential, and a computer is in some way self-referential, this doesn't imply that they are self-referential in the same way.

It's also worth noting that the "self" created by the human mind is not the same thing as a person being able to scrutinise what's going on in his brain. One might say that the "self" is created in order to hide the workings of the brain.
 
In certain ways, neurons and transistors are similar. In others, they are massively different. Neurons, for example, are made up of living cells, much like the pile of grass. Transistors are made up of contaminated silicon, much like the sand that the ocean washes up against. It's certainly not the case that they are more like each other than they are like anything else in the universe.

The living cells are made of matter as are the transistors.
 
I still don't think it explains what consciousness is and it also borders on conflating it with intelligence. It takes intelligence for humans to play chess well but computers can play much better now than us and they do so with neither intelligence nor consciousness. Maybe there are neural networks or something in development but computers presently do it the hard way, basically cheating by performing millions of calculations a second rather than thinking.

Well, this is not entirely accurate.

It is true that the best chess computer has a winning record against the best human player.

However most good human players can wipe the floor with most chess programs -- it takes a supercomputer to even the odds for the robot side.

Furthermore, while most chess programs do use a very naive algorithm that basically looks into the future, many researchers are working on other methods that more closely resemble how human players seem to approach the game. In fact they have been doing so for some time, I think.

Finally, looking at games besides chess is pretty illuminating. I could be wrong, but I think it is much harder to get a computer to play simpler games like Othello and backgammon, which have rules that allow for rapid "turning of the tides" in later play. In chess, if you lost your queen and your knights, you might be out of luck. In Othello or backgammon, you can be way behind and then turn the game around completely with just a few moves. You can't write a winning program with those rules unless it "thinks" more like a human.

But now I am wondering whether my brain is doing the same thing but also throwing up a simulation of myself to conceal from me all the boring computations. In fact it must be doing something like this, because it is running all sorts of operations without my being conscious of them, although some of them, like breathing, I can choose to be aware of if I want.

Those are valid observations.

If someone throws a ball and I catch it, is my brain solving quadratic equations (or some other complicated mathematics I wouldn't understand) which could theoretically be written out if we could access them? If so, this is another of the things it does which we cannot access.

This is a good question. Certainly there is a massive amount of processing going on that you have no access to. Whether it ends up being an implicit trajectory calculation or just naive pattern matching with interpolation, I can't say. I would lean towards the latter, though.
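For the curious, here is what the "explicit calculation" alternative would look like if you wrote it out: a minimal Python sketch that predicts where a thrown ball crosses catch height from simple projectile kinematics (the quadratic-in-time equations the question alludes to). Nobody is claiming the brain literally does this; it's just the written-out version, and the numbers below are illustrative only.

```python
import math

# Minimal sketch of the explicit-calculation alternative: solve the
# quadratic y(t) = y0 + vy*t - 0.5*g*t^2 for the time the ball reaches
# catch height, then read off how far away it will be.

G = 9.81  # gravitational acceleration, m/s^2

def landing_point(x0, y0, vx, vy, catch_height=1.5):
    """Horizontal distance at which the ball comes down to catch height."""
    a, b, c = -0.5 * G, vy, y0 - catch_height
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                          # ball never reaches catch height
    t = (-b - math.sqrt(disc)) / (2 * a)     # the later of the two roots (a < 0)
    return x0 + vx * t

# Ball released 1.8 m up, travelling 8 m/s horizontally and 4 m/s upward:
print(f"move to about {landing_point(0.0, 1.8, 8.0, 4.0):.2f} m away")
```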
 