
Explain consciousness to the layman.

Status
Not open for further replies.
From a scientific point of view an opinion is irrelevant. What matters is what can be known, and with what degree of certainty. You might as well ask me if I think Obama is a nice person. I could give you my answer and you could agree or disagree. So what? That's why I'm saying subjectivity is not adequate to decide whether something is thinking or not. And why the Turing test doesn't test for thinking. Human opinion is not a scientific assessment.
The Turing Test was never proposed as a scientific test. It was proposed to highlight problems in the way we think about thinking. The way you are still thinking about thinking.
 
Everyone, now that you have defined your terms.

Which, as I said, means that your "tough question" is not tough at all, but trivial.

Are input values trivial? 0,1 misses a couple of bits and a paradox. Without the negative feedback that generates it's easy to get all hot under the collar. This statement is still false.
 
The Turing Test was never proposed as a scientific test. It was proposed to highlight problems in the way we think about thinking. The way you are still thinking about thinking.

Oh right, glad you cleared that one up. Sort of like testing to see if someone is a witch? "What also floats?" Can't think why we need science when we can just ask people. Taxonomy by popular perception. All sounds very sensible.
 
...But if these syntactic-semantic machines fail the Syntactic BS Detector Test as badly as the modern chatbots do (the modern day examples of "thinking machines") then they also are not truly thinking, i.e., they don't understand a damn word of what you are saying.
Does anyone really consider chatbots to be the modern day examples of 'thinking machines'? I'm surprised.
 
Are input values trivial? 0,1 misses a couple of bits and a paradox. Without the negative feedback that generates it's easy to get all hot under the collar. This statement is still false.
Would you care to rephrase that in a way that makes sense?


Oh right, glad you cleared that one up.
You're welcome.

Sort of like testing to see if someone is a witch?
In a way, yes. Because if you are testing to see if someone is a witch, you're already thinking about things wrong.

"What also floats?" Can't think why we need science when we can just ask people. Taxonomy by popular perception. All sounds very sensible.
I don't know who you think you're talking to, but it's not me.
 
... subjectivity is not adequate to decide whether something is thinking or not. And why the Turing test doesn't test for thinking.
I'll take that as a qualified 'no'.

Human opinion is not a scientific assessment.

I'm not sure there is any scientific consensus as to what constitutes thinking or how to assess it, which is why I was interested in your personal opinion.

In a discussion about whether machines could think, it helps to know if the participants feel they could distinguish between a machine that thinks (by their definition) and one that doesn't. If they feel they could, then we can investigate the criteria they would use. If they feel they couldn't, then we can ask why, and what their definition of thinking is, and whether it is useful, and so on.

But never mind. You seem reluctant to answer, so I won't press it.
 
When we say we're thinking, we really mean we're feelthinking. Can that be recreated in a machine? I'm not sure. If feelthoughts involve interaction with a material physical process which we can only determine statistically (for example, if the quantum state is physical not statistical), then that seems unlikely. That is not to suggest any woo, or magic bean, rather the possibility there may be physical systems that can't be modelled / imitated by subjective intelligent design. (You might be able to mirror quantum observations but this would only be reactive, not predictive.)
 
By this definition a computer isn't conscious. A computer crunches data. All data is of equal value to it. In order to get information you need sentient subjectivity. Information only exists for those who live in virtual reality.

That's exactly what the brain does. It crunches data, though in such massive amounts, and in such particular ways, that it generates an internal illusion of sentient subjectivity. I think that's wonderful.
 
I'm not sure there is any scientific consensus as to what constitutes thinking or how to assess it, which is why I was interested in your personal opinion.

If I may narrow it down, I'll give you a straight answer.

Can a machine that differs in terms of its construction and material makeup from a human think like a human? No.

Might we need more than superficial observation to tell the difference? Yes.
 
What is a modern day example of a thinking machine then?
There are quite a few robotic AI projects that demonstrate aspects of thinking, including learning, social interaction, developing language & communication, simple reasoning, exploring and mapping their environment, etc (e.g. COG, Kismet, and others). Then there are systems like Watson that use more abstract elements of consciousness models of thinking in their processing (multiple drafts, global workspace, associative mapping, etc), and implementations of the CERA-CRANIUM cognitive architecture mentioned earlier in the thread.

I think they all display aspects of thinking, and some could be said to think in a very broad sense, but none are the full story. I do think they're all more 'thoughtful' than any of the chatbots I've seen.
 
Would you care to rephrase that in a way that makes sense?

I don't want to give the story away. Someone might be watching on catchup. A clue? Alright then. Turing had it, but failed to notice binary coding isn't enough to generate it. Is homo asset - confused sentient regulator (11).
 
I think they all display aspects of thinking, and some could be said to think in a very broad sense, but none are the full story. I do think they're all more 'thoughtful' than any of the chatbots I've seen.

All of these show that thinking like a human requires being a human. Now they're going to try to map that in broad brush strokes, but it goes all the way down to the micro level. Until you get to the fact that to think like a human you have to be a human. The system is interconnected and interdependent. Data is form.
 
When we say we're thinking, we really mean we're feelthinking. Can that be recreated in a machine?
Since it's created in a machine in the first place, yes.

Can a machine that differs in terms of its construction and material makeup from a human think like a human? No.
Why not?

I don't want to give the story away. Someone might be watching on catchup. A clue? Alright then. Turing had it, but failed to notice binary coding isn't enough to generate it. Is homo asset - confused sentient regulator (11).
I take it then the answer is no.

All of these show that thinking like a human requires being a human. Now they're going to try to map that in broad brush strokes, but it goes all the way down to the micro level.
Even if that's true, it doesn't lead to your conclusion - unless you mean that when a computer thinks like a human, we should consider it to be human.

The system is interconnected and interdependent.
Of course. So?
 
Can a machine that differs in terms of its construction and material makeup from a human think like a human? No.
OK, I think that's a reasonable view, assuming construction and material makeup have a significant impact on thinking. Higher primates clearly think, but they don't think like humans; so it seems unlikely that a machine would (although one could speculate about the potential of human-like virtual embodiments in virtual environments).

Might we need more than superficial observation to tell the difference? Yes.
An interesting question is how we could tell; how much difference would be significant? Human thought and behaviour has a pretty wide spectrum, and given that an advanced thinking machine could be developed, it would seem possible that it could, by design or instruction, imitate any human characteristics considered key identifiers...
 
Sorry, I can't parse that; could you rephrase it more clearly?

I have an auto-immune dis-ease called Ankylosing Spondylitis.
Had it since I was 20.
I am now 42.

What I have learnt from being specifically dis-eased with an auto-immune dis-ease is that my thoughts are profoundly affected by my feeling pain in my joints.
Dis-ease is really the feeling of uniqueness. The feeling and the corresponding thought that you have is something others around you don't have. In the set of all your feelings there are percepts which are unique to your brain.

All doctors are doing is giving you the corresponding thought to what you feel. The only "thing" is what you feel. Feeling is our direct relationship with reality. It is the only monism. All knowledge is empirical.

Kant had things back to front. The senses don't lie, the thoughts do. There is only sense phenomena and these we experience directly all the time. Thoughts are simply what we invent to differentiate these phenomena. There is no such "thing" as a "horse". There are only real sense phenomena which we experience and we invented a word "horse" to summarize these sense phenomena. Phenomena = feelings = percepts.

The scientific method is simply a way of improving our naming skills. It's not like the names we predict are what we actually find. No, all scientific knowledge is right till it's wrong. Simply because a name contains a set of percepts does not mean it contains all the percepts. If we experience new percepts which make our set distinct then we find a new name for this new set.

I have the thoughts I have because of the dis-ease I feel. I identify myself by the thoughts I have had and continue to have.
Everyone experiences some dis-ease in life. Some chronic, some acute. Even the thought "I have no feelings" is a type of dis-ease which will affect one's thoughts.

Our characters are the feelings-corresponding thoughts acted out/ willed.
Our willing corresponds to a set of thoughts each of which correspond to a set of feelings.
How our thoughts form sets of willing is what psychologists study.
The problem is that these thoughts correspond to sets of percepts. The approach of ignoring certain percepts which have shaped our thoughts is medieval. If we are to have health we need to tackle the percepts which are the cause of the dis-ease.
Health can only originate through the senses. When I take my anti-inflammatories they help stop the sense of pain so I can move from the set of uniqueness towards the set of the collective. I feel better being like others again. Being at-ease again. I get a chance to realize what I want. I realize that the percepts I had, noise for instance, were making me dis-eased and I need to avoid them in future. This shapes my thoughts and willful action about doing this in the future so that I won't have those percepts again which made me feel dis-eased.

This relationship between my percepts, concepts and action is what makes me me (i.e. my "I", my consciousness).
I require dis-ease to establish this distinction. No dis-ease no distinction.

Computers that feel no dis-ease will not have consciousness by definition of consciousness being what makes them feel distinct.

Consciousness is either distinct or else it's meaningless, right Pixy?
 
Consciousness is either distinct or else it's meaningless, right Pixy?
First of all, sorry to hear about your condition. Auto-immune disorders are a bitch.

Second of all, your argument makes no sense whatsoever. At best it's a meandering circular equivocation fallacy.
 

A thing is its own attribute. You can't have the thing without the attribute. Because they are the same. It's just that the way language is constructed makes that hard to see sometimes.

She is beautiful.

That's a beautiful garden.

What a beautiful way of putting it.

The above possible constructions make us think there is an immaterial attribute that it is possible to assign to different things. When, in fact, there isn't.

God is love = nonsensical

"I'm intelligent" and "the computer is intelligent" does not mean we share the same attribute of intelligence.
 
