
Explain consciousness to the layman.

At each stage, people said "We thought that's what intelligence was, but we were able to make a machine that could do it, so there's obviously more to it than that"; which effectively defines True Intelligence as that which machines can't do, and has an interesting parallel in the 'God of the Gaps' argument.

Yes, interesting...
 
Leumas:

Best I can tell, you have three themes here. First, there is whether or not the vending machine really says "Feed me". Second, there is the theme that humans have a tendency towards hyperactive agency detection. And third, there's the thesis that machines would never be conscious.

Also, how exactly are biological systems not machines?
If you read my posts you will find that I clearly say
Yes, but I read your post and saw you clearly say:
In my opinion Machines will never become conscious..... they will become more and more adept at imitating us to the extent where they would be a perfect illusion fooling us into thinking they are conscious. But it will always be a virtuality and not Reality........the reason is that we made them that way..... we created them....we designed them...... and thus anything coming out of them is by design and not auto-evolved.
...by which I supposed that you meant something akin to machines will never become conscious.

ETA: And yes, you said humans are machines, and of course you think humans are conscious. But the only criterion you used was evolved versus designed, and it's a bit unclear what you mean by the above--you're not scoping what you mean by machines, so technically this does suggest that if we were to build a human from scratch, you would not think that human conscious. I don't believe that's your position, but I do believe you need to be more cautious in phrasing it, and that there's a missing link to what is actually relevant to consciousness here.
 
dlorde said:
We've seen this in relation to human vs machine intelligence. When we only had adding machines, the ability to perform rapid complex calculations was seen as a feature unique to human intelligence, beyond machines. When programmable computers arrived that could perform complex calculations beyond human abilities, the goalposts of True Intelligence™ were moved to areas where humans were still considered supreme, e.g. chess. When chess programs became commonplace and Deep Blue beat Kasparov, the world champion, the goalposts moved again and True Intelligence now involved language processing, understanding, use of knowledge. Then IBM developed Watson, which beat the world's best Jeopardy players, and the goalposts still haven't settled.

At each stage, people said "We thought that's what intelligence was, but we were able to make a machine that could do it, so there's obviously more to it than that"; which effectively defines True Intelligence as that which machines can't do, and has an interesting parallel in the 'God of the Gaps' argument.

I suspect there will be a similar progression with machine consciousness, where at each stage, surprise will be followed by moving the goalposts of True Consciousness™.

There certainly has been quite a bit of goal-post shifting throughout the history of technology.

I sometimes imagine how the same kind of human bias might look from the perspective of vastly more complex and intelligent agents. They could possibly argue amongst themselves whether humans “actually” are conscious (by drawing parallels to their own superior intelligence and behavioral complexity).

Interestingly, they might actually exclude the possibility of humans having consciousness by their standards, simply by knowing too much about how the human brain works. I.e., given the premise that the aliens could map every process of the human brain in superior detail to us, and understand its functional workings inside and out, thus being able to anticipate almost any behavioral outcome, they could therefore conclude it’s just the “programming” doing its predictable thing. “Humans are just like automata, machines, or advanced robots; no ‘real’ consciousness going on there.”
 
There certainly has been quite a bit of goal-post shifting throughout the history of technology.

I sometimes imagine how the same kind of human bias might look from the perspective of vastly more complex and intelligent agents. They could possibly argue amongst themselves whether humans “actually” are conscious (by drawing parallels to their own superior intelligence and behavioral complexity).

Interestingly, they might actually exclude the possibility of humans having consciousness by their standards, simply by knowing too much about how the human brain works. I.e., given the premise that the aliens could map every process of the human brain in superior detail to us, and understand its functional workings inside and out, thus being able to anticipate almost any behavioral outcome, they could therefore conclude it’s just the “programming” doing its thing. “Humans are just like automata, machines, or advanced robots; no ‘real’ consciousness going on there.”

Yes, and if we looked back at them, what would we think? Would we assume they are gods? Would we be able to see or perceive them at all?

For example, plants probably can't perceive us, and if they could, they would probably assume we are gods or magicians (in plant-speak, of course).
 
Interesting; can you give a couple of examples of such programs, and the fields in which they operate?


Since you seem to have a computer science background then instead of giving you the solution I am going to let you think about it by posing these questions to you.


You know that some sorting algorithms work better for certain kinds of data sets than others. So if you were to write a program that would utilize the most efficient sorting algorithm for any data set..... how would you do it?
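One possible answer to that first question, sketched in Python purely as an illustration: inspect cheap properties of the data and dispatch to whichever algorithm suits them. The thresholds and dispatch rules below are assumptions for the sake of the example, not a standard recipe.

```python
def adaptive_sort(data):
    """Pick a sorting strategy based on simple properties of the input.

    The thresholds here are illustrative, not tuned.
    """
    n = len(data)
    if n <= 16:
        # Insertion sort is hard to beat on tiny inputs.
        result = list(data)
        for i in range(1, n):
            key = result[i]
            j = i - 1
            while j >= 0 and result[j] > key:
                result[j + 1] = result[j]
                j -= 1
            result[j + 1] = key
        return result
    if all(isinstance(x, int) for x in data):
        lo, hi = min(data), max(data)
        if hi - lo < 10 * n:
            # Small integer range: counting sort runs in O(n + range).
            counts = [0] * (hi - lo + 1)
            for x in data:
                counts[x - lo] += 1
            return [lo + i for i, c in enumerate(counts) for _ in range(c)]
    # Fall back to the library's comparison sort (Timsort), which
    # already exploits pre-existing sorted runs in the data.
    return sorted(data)
```

The deeper point stands either way: the "cleverness" is just a dispatch table over measurable properties of the input.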

If you wanted to write a program to control a roaming autonomous robot that can roam in any previously unspecified environment.... how would you make it sometimes circumnavigate around an obstacle while at the same time it is homing in on a beacon? Also how would you make it get out of a corner it might get stuck in due to repeatedly applying the same algorithm? How would you make it prioritize whether to keep homing or to seek a recharging outlet?
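These questions have a classic behavior-based-robotics flavor. One hedged sketch of an answer: fixed-priority arbitration between behaviors, plus a randomized escape move so the robot can't loop forever in a corner. The percept keys, thresholds, and action names below are all invented for illustration.

```python
import random

def choose_action(percept, rng=random.Random(0)):
    """Priority-based arbitration for a roaming robot.

    `percept` is a dict with illustrative keys:
      battery      - 0.0..1.0 charge level
      obstacle     - True if something blocks the path ahead
      stuck_count  - consecutive steps without progress
      beacon_dir   - 'left', 'right', or 'ahead'
    Higher-priority behaviors subsume lower ones.
    """
    if percept["battery"] < 0.2:
        return "seek_charger"          # survival overrides the mission
    if percept["stuck_count"] > 5:
        # Break out of loops (e.g. a corner) with a random move, so
        # repeatedly applying the same rule can't trap the robot.
        return rng.choice(["turn_left", "turn_right", "reverse"])
    if percept["obstacle"]:
        return "turn_right"            # crude obstacle circumnavigation
    return "move_" + percept["beacon_dir"]  # default: home on the beacon
```

The randomness in the "stuck" branch is exactly what defeats the deterministic-loop failure mode the question describes.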
 
There certainly has been quite a bit of goal-post shifting throughout the history of technology.

I sometimes imagine how the same kind of human bias might look from the perspective of vastly more complex and intelligent agents. They could possibly argue amongst themselves whether humans “actually” are conscious (by drawing parallels to their own superior intelligence and behavioral complexity).

Interestingly, they might actually exclude the possibility of humans having consciousness by their standards, simply by knowing too much about how the human brain works. I.e., given the premise that the aliens could map every process of the human brain in superior detail to us, and understand its functional workings inside and out, thus being able to anticipate almost any behavioral outcome, they could therefore conclude it’s just the “programming” doing its predictable thing. “Humans are just like automata, machines, or advanced robots; no ‘real’ consciousness going on there.”



Interesting Science Fiction..... I love it.

Humans recognize that there are "lower" animals that are conscious because we have devised SOME MEASURE to GAUGE it in them, based upon our ever-increasing understanding of the subject.

Beings who are vastly superior to us in intelligence and abilities most likely would have devised some mechanism, based on their vastly superior knowledge, for gauging consciousness, and just like we can recognize it in dogs and squirrels, I bet that they most likely would be able to recognize it in us when they encounter us.
 
Yes, and if we looked back at them, what would we think? Would we assume they are gods? Would we be able to see or perceive them at all?

For example, plants probably can't perceive us, and if they could, they would probably assume we are gods or magicians (in plant-speak, of course).



What does that say about humans who DID see some plants as gods? :D
 
We've seen this in relation to human vs machine intelligence. When we only had adding machines, the ability to perform rapid complex calculations was seen as a feature unique to human intelligence, beyond machines. When programmable computers arrived that could perform complex calculations beyond human abilities, the goalposts of True Intelligence™ were moved to areas where humans were still considered supreme, e.g. chess. When chess programs became commonplace and Deep Blue beat Kasparov, the world champion, the goalposts moved again and True Intelligence now involved language processing, understanding, use of knowledge. Then IBM developed Watson, which beat the world's best Jeopardy players, and the goalposts still haven't settled.

At each stage, people said "We thought that's what intelligence was, but we were able to make a machine that could do it, so there's obviously more to it than that"; which effectively defines True Intelligence as that which machines can't do, and has an interesting parallel in the 'God of the Gaps' argument.

I suspect there will be a similar progression with machine consciousness, where at each stage, surprise will be followed by moving the goalposts of True Consciousness™.

ETA - it also seems to me that there are two sides to this: the traditional idea of human uniqueness and superiority in mental abilities, and the ill-defined nature of the concepts involved. Fortunately we're seeing a rapid erosion of the former due to recent animal behaviour studies, so the goalposts may be less in evidence as time passes.



Consciousness != Intelligence
 
Westprog, please take the time to READ my post. I said "powered flight". POWERED.

I know that. If you need it to be expanded - a person could see that birds and insects were capable of flying, hovering, etc, under their own power. He could see that other animals, including humans, were capable of carrying out similar motions. He could see that it was possible for inanimate objects to be hurled into the air and go great distances, but that they would always follow an arc and return to the ground. He could see that certain objects could glide, supporting their weight using the air.

From this, he could deduce that it should be possible for a machine to be built which would allow a person to be carried into the air for some distance. This was a logical extension of the facts available. A person trying to achieve powered flight was in no doubt what it was that he was trying to achieve, and would have no doubt about whether he'd done it or not. There would be no test that involved looking at a man in the air, and seeing how closely he resembled a bird.
 
Interesting; can you give a couple of examples of such programs, and the fields in which they operate?

It's common in games - if one were to recognise the sequence of shapes in Tetris, for example, it would make the game boring. Most computer randomisers are pseudo-random, of course - the programs remain entirely deterministic. Genuine randomness would require specialist hardware.
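The determinism point can be made concrete: a seeded generator replays exactly the same "random" piece sequence every run. The bag-shuffle scheme below is in the spirit of Tetris-style randomizers; the details are an illustrative sketch, not any particular game's implementation.

```python
import random

def piece_stream(seed):
    """Deterministic Tetris-style piece generator.

    Shuffles a 'bag' of the seven tetromino names and deals them out,
    refilling when empty. Same seed -> identical sequence, showing the
    randomness is entirely reproducible.
    """
    rng = random.Random(seed)
    pieces = ["I", "O", "T", "S", "Z", "J", "L"]
    while True:
        bag = pieces[:]
        rng.shuffle(bag)
        yield from bag

# Two generators with the same seed emit identical sequences:
a = [p for p, _ in zip(piece_stream(42), range(14))]
b = [p for p, _ in zip(piece_stream(42), range(14))]
```

A player who knew the seed and the algorithm could predict every piece in advance; the program never does anything that isn't fully determined.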
 
Do you think a simulated chess player will ever be able to play chess?

Clearly a computer program can easily be made able to make the moves according to the rules. Such programs may well reach the stage where they can win every game against a human opponent. Can they be said to be "playing" though?
 
Yes, but I read your post and saw you clearly say:

...by which I supposed that you meant something akin to machines will never become conscious.


You are using sophistry..... you know perfectly well that you should have taken that in the CONTEXT of the discussion.

Should I use your kind of “logic” and ask you if you then think that a combine harvester is conscious since you seem to advocate that machines are capable of consciousness? Wouldn’t it be OBVIOUSLY disingenuous of me to assume that you are saying that?

I think you and I are quite aware that a Combine Harvester is not included in the context of the ongoing topic and you are quite aware what I meant by machine in the context of the topic of this thread and the ongoing discussion.

So please.... do not use sophistry on me. You know jolly well that I meant a computer that is programmed....and my position on that is explained in various posts.


ETA: And yes, you said humans are machines, and of course you think humans are conscious. But the only criterion you used was evolved versus designed, and it's a bit unclear what you mean by the above--you're not scoping what you mean by machines, so technically this does suggest that if we were to build a human from scratch, you would not think that human conscious. I don't believe that's your position, but I do believe you need to be more cautious in phrasing it, and that there's a missing link to what is actually relevant to consciousness here.


I very much did scope what I mean by it..... read the next bit down in my post.....not to mention numerous previous posts.

Have you scoped what YOU mean by machine? Do you think it would not be obviously ridiculous of me if I assumed, from you saying that machines can become conscious, that you mean a chainsaw can become conscious?


I personally believe one day we will be able to EMULATE the human brain in a machine....... but not by programming it..... sure some parts may be programmed to do control of certain physical processes... but just like controlling a lathe.... we need the lathe’s mechanical parts to perform the lathing no matter how much programming there is.

I think something akin to a neural network that can emulate the PHYSICS of the brain will have a very good chance. But actual physics not simulated physics....
 
Beings who are vastly superior to us in intelligence and abilities most likely would have devised some mechanism, based on their vastly superior knowledge, for gauging consciousness, and just like we can recognize it in dogs and squirrels, I bet that they most likely would be able to recognize it in us when they encounter us.

Probably so.

Although it might depend on how "superior" they would be to us. The measurement, which to them would be the lower bound of what they could possibly accept as "having consciousness", could still be too high for us. This would perhaps be a plausible scenario in a situation where they weren’t carbon-based, nor looked or behaved like us. (This is also what I mean by "same kind of human bias"; not just bias in regards to functioning, but also in regards to structural similarity.)

Of course, it might also be the case that, due to their superior knowledge of the subject, they could identify consciousness in systems we normally wouldn’t regard as "having consciousness". Superior knowledge could also lessen the bias.
 
Yes and if we look back at them what would we think. Would we assume they are gods, would we be able to see or perceive them at all?

For example plants probably can't perceive us and if they could they would probably assume we are gods or magicians ( in plant speak of course).

The hypothetical has its limits. So I don’t really know.

I guess it would also depend on how they would manifest themselves to us (if we were to perceive them at all). We might even think they weren’t conscious, or independent agents, if their behavior or structure didn’t resemble anything we’re used to.
 
I know that. If you need it to be expanded - a person could see that birds and insects were capable of flying, hovering, etc, under their own power. He could see that other animals, including humans, were capable of carrying out similar motions. He could see that it was possible for inanimate objects to be hurled into the air and go great distances, but that they would always follow an arc and return to the ground. He could see that certain objects could glide, supporting their weight using the air.

From this, he could deduce that it should be possible for a machine to be built which would allow a person to be carried into the air for some distance. This was a logical extension of the facts available. A person trying to achieve powered flight was in no doubt what it was that he was trying to achieve, and would have no doubt about whether he'd done it or not. There would be no test that involved looking at a man in the air, and seeing how closely he resembled a bird.

Why can't the same logic be applied to consciousness ?
 
Leumas:

Thanks for clarifying. So I have:
...I meant a computer that is programmed
...but there remains a point of confusion here. When you say this:
I personally believe one day we will be able to EMULATE the human brain in a machine....... but not by programming it..... sure some parts may be programmed to do control of certain physical processes... but just like controlling a lathe.... we need the lathe’s mechanical parts to perform the lathing no matter how much programming there is.

I think something akin to a neural network that can emulate the PHYSICS of the brain will have a very good chance. But actual physics not simulated physics....
...then that seems pretty much to fit what I said in post #3005: "biological in some key way".

Do you agree with this assessment?

Also, relating the above to your claim that machines cannot be conscious--could you explicitly say whether or not such emulated machines could be conscious? Since your actual claim used the word "machine" unqualified, it's very unclear.

Thank you for your time.
 
Would this necessarily relate to quantum consciousness at all, or is it stepping aside into more demonstrable/testable theories and (relatively) straightforward anatomical explanations?

No, nothing to do with "quantum consciousness." Yes, it is stepping into more demonstrable/testable theories and (relatively) straightforward anatomical explanations.

It would just be another way neurons react to input. In particular, an extra mechanism for storing network state.

ETA: And does it have any implications for AI/emergent theories or contradict materialism in any way? Does it provide any safe haven for qualia? Other than knowing nothing about neuroanatomy, cellular biology, quantum physics or computer programming, I'm reasonably well-informed ;)

Nope.

All this would do is force people modeling biological neural networks to add more functionality to their neuron model. The link westprog posted doesn't go into detail, but it seems like these researchers posit some kind of "internal" state storage via how microtubules are arranged in the cell, and they specifically talk about "bits" of information. Doesn't sound like that is taking us away from computational models...
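For concreteness, a toy sketch of what "adding more functionality to the neuron model" might look like: a rate-model neuron extended with a slow internal state variable standing in for the posited intracellular storage. All names and parameters here are illustrative assumptions, and the result is still an ordinary computation.

```python
import math

class StatefulNeuron:
    """Rate-model neuron with an extra slow internal state.

    The `memory` variable integrates past activity and biases future
    output -- a stand-in for intracellular storage. Parameters are
    illustrative, not fitted to any biological data.
    """
    def __init__(self, decay=0.9, gain=0.1):
        self.memory = 0.0
        self.decay = decay   # how quickly internal state fades
        self.gain = gain     # how strongly activity writes to state

    def step(self, drive):
        # Output depends on current input *and* accumulated state.
        out = 1.0 / (1.0 + math.exp(-(drive + self.memory)))  # sigmoid
        # Update the internal state from this step's activity.
        self.memory = self.decay * self.memory + self.gain * out
        return out
```

Two such neurons given the same input but different histories respond differently, which is exactly the extra functionality described above; nothing about it escapes a computational model.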
 
Haul a pile of brush to the dump.

Lift a coffee cup.

No.

I told you, the machine makes it so the interactions between particles remain functionally equivalent.

So the pile of brush could be hauled to the dump, and the coffee cup could be lifted.

It is obvious that you don't understand this exercise, piggy.
 
