
Explain consciousness to the layman.

Can you give me a definition of "non-physical algorithm" that you are happy with, so I can get an idea of how specific you want the words to be? I think there's merit in keeping these sorts of definitions a bit vague, but not too vague. So give me something to get an idea of where you are coming from with that question. How would you define the same word where it appears in a definition of an algorithm with intent? Your objection doesn't appear to be related to the concept of intent, which was your original idea here.

I guess the idea is that you have a physical apparatus that is subject to physical manipulations that can put it into different states, and a "simple" manipulation is one that just moves it from one state to another (in a way that can be more or less reversed) instead of, say, burning the whole thing down. I'd say that "simple" would also mean a low-energy change.

A wasp messing around doing wasp-ish things is simple. A rock falling is not simple. A wasp turning into a bar of gold or exploding is not simple.
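
To make that concrete, here's a minimal sketch in Python (states and energy costs entirely made up) of "simple = cheap and reversible":

[code]
# Toy state space for a physical apparatus. A "simple" manipulation is a
# low-energy transition that can be undone; burning the thing down is not.
TRANSITIONS = {
    ("idle", "buzzing"): 1,     # wasp-ish behaviour: cheap and reversible
    ("buzzing", "idle"): 1,
    ("idle", "ash"): 1000,      # burning the whole thing down: one-way
}

def is_simple(src, dst, energy_budget=10):
    cost = TRANSITIONS.get((src, dst))
    reversible = (dst, src) in TRANSITIONS
    return cost is not None and cost <= energy_budget and reversible

print(is_simple("idle", "buzzing"))  # True
print(is_simple("idle", "ash"))      # False
[/code]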

I think that the very idea of a physical definition of an algorithm being performed is a faulty approach. An algorithm is a recipe for doing something, not a description of something that is being done. Any physical process could be described in algorithmic terms.
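
To illustrate that last point, here is a throwaway sketch (parameters invented, crude Euler stepping) describing a falling rock in algorithmic terms:

[code]
# A falling rock as a "recipe": repeatedly update (height, velocity)
# under constant gravity until the rock reaches the ground.
def falling_rock(height, velocity=0.0, g=9.81, dt=0.01):
    while height > 0:
        velocity -= g * dt       # gravity accelerates the rock downwards
        height += velocity * dt  # the rock moves
    return abs(velocity)         # speed at impact

print(falling_rock(height=10.0))  # about 14 m/s
[/code]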

I'm afraid that I can't fix something that I think is fundamentally flawed in the first place.
 
It may be of interest that a bot based on the CERA-CRANIUM cognitive architecture (as described in the link previously given: Towards Conscious-like Behavior in Computer Game Characters) won the 2K BotPrize 2010. Although it's still very much a work in progress, lacking several important features, and is focused on videogame character bots, the fact that an implementation of this architecture, based on models of human brain functional architecture, outperforms other AIs in this field for human-like behaviour is very promising.
 
I skimmed the link, and I don't understand: is the cognitive architecture hardware, software, or a combination?
 
Well, you're simply contradicting yourself here.

You admit that the simulator isn't producing a real tornado, then you go on to claim that you don't see how the simulated tornado could possibly not exist.

"Something" <> "real tornado". I'm amazed you missed that.

If you don't have the time or discipline to follow an argument longer than that, well, OK, but that's hardly my problem.

You seem to be reading way too fast. Slow down. I am simply telling you what you might expect when you write a wall of text on a thread which moves quickly.
 
Which is what we expect from a fan.

I would say "nice try", but it really wasn't, was it?

If the computer is using an interface (a fan) to change the world around it according to the behaviour of the simulated tornado, then the simulated tornado has changed the world. Again, how can you miss that?
 
It may be of interest that a bot based on the CERA-CRANIUM cognitive architecture (as described in the link previously given: Towards Conscious-like Behavior in Computer Game Characters) won the 2K BotPrize 2010. [snip]

That is pretty awesome stuff.

I'm gonna read the paper tonight while I wait for SWTOR PvP queues.
 
If the computer is using an interface (a fan) to change the world around it according to the behaviour of the simulated tornado, then the simulated tornado has changed the world. Again, how can you miss that?


He did not miss it. It is just that he is not using a "monumentally simplistic" "operational definition" of a tornado.


What if you plug the fan into a wall socket instead of driving it from the simulated tornado... will it change the world too? In what way would that be different from when it is being controlled by the simulated tornado?


Do you think that a fan changes the world the same way as a tornado does?

Do you know what happens to entire houses and buildings, let alone fans, when a tornado is nearby?

Do you think a fan can generate a tornado? Do you know anything about meteorology?

Do you think that a fan, even if it could miraculously generate a tornado, would remain functional once the tornado starts?

What would maintain the tornado once the fan along with the computer and the building they are in are obliterated?

Do you know anything about meteorology?

Do you understand anything about what actually generates a tornado and maintains it while it is devastating everything around it?

Do you think a tornado is just fast wind? Do you know anything about meteorology?


Even if we were to grant you the "monumentally simplistic" notion that a fan could generate wind to EMULATE a tornado, just for argument's sake: isn't this an EMULATION? What we have been saying all along is that an emulation is required to emulate consciousness. So how is the argument of hooking up a fan to emulate a tornado any different from what we have been arguing?

Do you really think that hooking up a fan to a computer simulation of a tornado would generate a tornado? Or do you think it would just change the world? If the latter, do you think that this change is similar to the way a tornado would change the world? And how is that change of the world any different than if the fan were just plugged into a power outlet?
 
So I don't know for sure whether I really see red out of the corner of my eye, as it were, i.e. without focusing on the red. But it looks like you have several possibilities there (ignoring the past):

(1) aware of red out of the corner of your eye - does this really happen?
(2) focusing on red but not thinking about the redness
(3) thinking about redness without looking at anything red
(4) both focusing on red and thinking about red

Where by "focusing on" I mean directing your vision to center on the thing, and by "thinking about" I mean probably some internal dialogue about it. You're saying you can't do (2), but I can. I am not sure if I can do (1). Maybe if there was a lot of red.

I think (2) is the only scenario worth discussing because the others are non-controversial.

What, exactly, is going on in (2)?

Is there a quale (singular?) that needs to be explained in scenario (2)?
 
But what constitutes cheating? If the machine comes out with the claim of consciousness, it's certainly because it was programmed to come to that conclusion. How do we determine that the methodology it is using to come to its conclusion is valid? Won't it simply replicate the initial assumptions programmed in?

It will be programmed to have consciousness (well, not programmed but built to have consciousness; I doubt programming alone would suffice) and to be sophisticated enough with the English language to answer fairly sophisticated abstract questions like "are you conscious?" truthfully. How is that "programmed to come to that conclusion"? If it isn't conscious, it is programmed to say it is not. At that point it would be back to the drawing board for the consciousness-building engineers.

In fact, it could be that the standard robot chassis for truthfully answering sophisticated questions in English is provided by some third party, à la Asimov. The scientist merely has to add the attempted consciousness module and pop the question. Easy. Easy, that is, assuming you can build a talking robot already.
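
A sketch of that division of labour (class names hypothetical; the chassis is the off-the-shelf Asimov part, the module is the hard science):

[code]
# Third-party chassis that answers questions in English truthfully,
# plus a candidate consciousness module that it reports on.
class CandidateModule:
    def reports_consciousness(self):
        return False  # back to the drawing board until this is honestly True

class TalkingChassis:
    def __init__(self, module):
        self.module = module

    def ask(self, question):
        if question == "Are you conscious?":
            return "yes" if self.module.reports_consciousness() else "no"
        return "Please rephrase."

robot = TalkingChassis(CandidateModule())
print(robot.ask("Are you conscious?"))  # "no"
[/code]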

Anyone suspecting the experiment is rigged or flawed need only repeat it for themselves as with all other experiments in science.
 
What if you plug the fan into a wall socket instead of driving it from the simulated tornado... will it change the world too? In what way would that be different from when it is being controlled by the simulated tornado?

Off the top of my head -- when the simulated tornado isn't formed yet in the simulation, the fan will be off. When it forms in the simulation, the fan will be on. When it ceases in the simulation, the fan will be off.
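
For what it's worth, the interface I have in mind is no deeper than this sketch (stub classes standing in for the real simulation and fan):

[code]
# The fan simply mirrors whether the tornado currently exists in the simulation.
class FakeSimulation:
    def __init__(self, states):   # e.g. tornado absent, formed, ceased
        self.states = iter(states)

    def next_state(self):
        return next(self.states, None)

class FakeFan:
    def set(self, on):
        print("fan", "ON" if on else "OFF")

def run_interface(sim, fan):
    while (state := sim.next_state()) is not None:
        fan.set(state)

run_interface(FakeSimulation([False, True, True, False]), FakeFan())
[/code]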

Is there a point to this line of questioning? I am rather bored by being consistently correct, and having to hand hold people through common sense, now that other actually interesting discussions are taking place.
 
I think that the very idea of a physical definition of an algorithm being performed is a faulty approach. An algorithm is a recipe for doing something, not a description of something that is being done. Any physical process could be described in algorithmic terms.

I'm afraid that I can't fix something that I think is fundamentally flawed in the first place.

I thought that you were trying to substantiate your opinion by exploring what a physical definition would look like. So why are you backing away from that now?

In reality I think you are asking for a motive-less definition, not a physical one. In nature, things that look a whole lot like algorithms do turn up. Several have been discussed already; I will add the way that sunflowers seem able to "calculate" the Fibonacci sequence.
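
On the sunflower point: the standard account (Vogel's model) places each seed at the golden angle from the previous one, and the spiral counts visible in the head come out as consecutive Fibonacci numbers. A minimal sketch, purely illustrative:

[code]
import math

# Vogel's model of a sunflower head: seed k sits at angle k * GOLDEN_ANGLE,
# radius proportional to sqrt(k). The visible spiral (parastichy) counts in
# such a head are consecutive Fibonacci numbers, e.g. 34 and 55.
GOLDEN_ANGLE = math.pi * (3 - math.sqrt(5))  # about 137.5 degrees, in radians

def seed_positions(n):
    for k in range(n):
        r, theta = math.sqrt(k), k * GOLDEN_ANGLE
        yield (r * math.cos(theta), r * math.sin(theta))

head = list(seed_positions(256))  # a toy seed head of 256 seeds
print(head[:3])
[/code]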

You asked what a non-intended algorithm would look like, and you were answered. Now that we all know what is being talked about, I guess you have to convince people that these sure-look-like-algorithms in nature shouldn't count...?

Perhaps it is because, from a human perspective, these sequences look obviously intentional, even if we can agree, as people who accept evolution, that there's no actual intent in the actions of, e.g., a wasp or a sunflower, except the metaphorical intent of DNA replication. On the other hand, why should that count? Are we really supposed to deny the obvious structure in how DNA creates proteins because we're afraid of some anthropomorphism? So there's no intent, but it is clear that algorithmic patterns must occur to create the complexity we see in nature.
 
I think (2) is the only scenario worth discussing because the others are non-controversial.

What, exactly, is going on in (2)?

Is there a quale (singular?) that needs to be explained in scenario (2)?

If it is questionable, then I think it is only questionable for the same reason all past consciousness is. I am thinking of nothing. I am looking at red. I see red. Or do I?

Because the only time I can question the truth of my assumption is when I come along later and ask: did I really think of nothing and just focus on the red? How can I be sure? How do I know that I was really not experiencing anything at all, essentially that I ceased to exist or that my consciousness had a gap in it, which I am only now thinking about and which my brain fills in with some rubbish to give me the illusion of a continuous existence?

Nah. Pretty sure I actually do experience red.
 
Off the top of my head -- when the simulated tornado isn't formed yet in the simulation, the fan will be off. When it forms in the simulation, the fan will be on. When it ceases in the simulation, the fan will be off.

Is there a point to this line of questioning? I am rather bored by being consistently correct, and having to hand hold people through common sense, now that other actually interesting discussions are taking place.


Are you being facetious or are you actually serious?

If the former, then :D. If the latter, then :confused:
 
It may be of interest that a bot based on the CERA-CRANIUM cognitive architecture (as described in the link previously given: Towards Conscious-like Behavior in Computer Game Characters) won the 2K BotPrize 2010. [snip]



Thanks... those were great links.

They also provided a very interesting bit of information:

The aim of the 2K BotPrize competition is to develop a computer game bot which is indistinguishable from a human player, i.e. able to pass the Turing Test. Although the Conscious-Robots bot could not completely pass the Turing Test, she achieved a humanness rating of 31.8%. As of today, the Turing Test level intelligence has never been achieved by a machine. However, this year the gap between humans and bots was reduced, with only a small difference between the Conscious-Robots bot (31.8%) and the least "human" human player (35.4%).


What should we tell people who claim to have managed to write simple mundane programs to make normal modern computers conscious?

What should we tell people who consider such an achievement to be unremarkable?

I guess this might be adequate for a start:

[snip]
Granted, that is like saying "switching" when someone asks "how does a computer work?" but in truth it isn't incorrect, it is just monumentally simplistic.
[snip]
 
It seems that some people believe that if an entity seems conscious then it is conscious.

e.g. Star Trek's Data. If you hang out with him for a week and he says and does nothing that suggests he's not conscious, does that prove he's conscious? How do we judge it, anyway?

This is the same issue the Turing Test and P-Zombie attempt to address.

Anything can appear to be conscious to us, in part because of our flawed and hyperactive agency detection.

If someone claimed some new computer was conscious, how would we verify their claim?

If you ask Data if he is conscious, he will say "yes" if he is. If he is not conscious, then he'll say "no", or possibly get a little confused and ask for clarification, but I think Data was obviously well enough read to fully comprehend the question and figure out the answer.

Consciousness is subjective and only apparent to the person themself, but it is apparent to that person. If it weren't, we wouldn't be discussing this. So there is an objective, physically measurable way to detect consciousness in others: ask them.

(In Data's case we can ignore the possibility of lying or a lack of sophistication to answer the question accurately)

Is this observation something we can agree upon? If we cannot, then I don't know how we'd ever figure out that anything was conscious, e.g. alien beings arrive on Earth speaking English. Are they conscious? Only one way to know.
 
They also provided a very interesting bit of information
As of today, the Turing Test level intelligence has never been achieved by a machine.
What should we tell people who claim to have managed to write simple mundane programs to make normal modern computers conscious?
You should tell them that when using "passes a Turing Test" as an operational definition of consciousness, no machine has achieved consciousness.
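
Spelled out as code, that operational definition amounts to something like this (thresholds taken from the BotPrize numbers quoted above; the pass rule is my paraphrase, not the competition's exact scoring):

[code]
# Operational definition: the bot "passes" if judges rate it at least as
# human as the least "human" human player was rated.
def passes_turing_test(humanness_rating, least_human_human=0.354):
    return humanness_rating >= least_human_human

print(passes_turing_test(0.318))  # the 2010 Conscious-Robots bot: False
[/code]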
 
Is this observation something we can agree upon? If we cannot, then I don't know how we'd ever figure out that anything was conscious, e.g. alien beings arrive on Earth speaking English. Are they conscious? Only one way to know.
Actually, it'd be a better test if the alien beings arrive on earth not speaking English, but capable of learning it.
 
Well, we don't have a metric per se. It's one of those "I know it when I see it" things.

Besides testimony from internal subjective experience, I'm not aware of any criteria.

If I asked a p-zombie if it were conscious, it'd say it was, and how would I be sure it was lying? Dennett asserts that a p-zombie that behaved exactly as if it were conscious would have to be conscious.

Which is not a standard which we would consider sound in any scientific area. Would we accept that we could identify zinc if it seemed like zinc to an objective observer - or would we have a list of objective properties which could be tested? No subjective test can possibly be considered in any way reliable.

Testimony of a witness (about whether they are conscious or not) is not subjective, not when you can simply repeat the experiment arbitrarily. That is how science tests things like "psychic ability" and finds them wanting. The problem is NOT a single witness. The problem is that if you can't repeat the experiment, it's not science. It doesn't matter how many witnesses you have or how many measurements you make for an experiment nobody else can repeat.
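
As a toy version of "just repeat it" (all numbers invented): suppose a claimed psychic picks one of five cards; anyone can rerun the trial and compute how surprising the hit count would be under pure chance:

[code]
from math import comb

# Exact binomial tail: probability of scoring at least `hits` out of
# `trials` by guessing alone, with per-trial success chance p.
def p_by_chance(hits, trials, p=0.2):  # p = 1/5 for a five-card deck
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(hits, trials + 1))

print(p_by_chance(30, 100))  # roughly 0.01: now repeat the experiment
[/code]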
 