
Explain consciousness to the layman.

First of all, sorry to hear about your condition. Auto-immune disorders are a bitch.

Second of all, your argument makes no sense whatsoever. At best it's a meandering circular equivocation fallacy.

It made perfect sense to me.

Jeez, have some people swallowed the 'beginner's guide to logic' bible?... Just dropping the word fallacy in here and there doesn't make you seem more intelligent. Any chatbot can cut and paste lingo. (You thought I hadn't spotted, didn't you?)

Embodied intelligence. A perfectly respectable and logically well-reasoned position to take.
 
First of all, sorry to hear about your condition. Auto-immune disorders are a bitch.

Second of all, your argument makes no sense whatsoever. At best it's a meandering circular equivocation fallacy.

Thanks for confirming my point.
My dis-ease led you to feel something about my feelings.

We only recognize each other when we reveal our feelings.

Our thoughts, shared as language, are abstractions that don't exist until they are perceived and we feel them as distinct feelings.
This we can only do with our own thought experiences, not with others' expressed as language.
We know something is distinctly consciousness because we recognize this feeling in ourselves.
No amount of language will create this distinction; it has to be physically felt. Consciousness as something distinct from anything else is a physical feeling; it is not the way we use language to convey our feelings. It is the feeling that conveyed feeling induces.
 
...
This relationship between my percepts, concepts, and actions is what makes me me (i.e. my "I", my consciousness).
I require dis-ease to establish this distinction. No dis-ease no distinction.
OK, I follow that.

Computers that feel no dis-ease will not have consciousness, by the definition of consciousness as what makes them feel distinct.
I agree that evaluations of desirability and undesirability and so-on are likely to be important in artificial consciousness, as they provide the basis for goal-oriented behaviour, and consciousness involves observing and modeling one's own behaviours. Whether they will be essential is harder to say.
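To make that slightly more concrete, here is a minimal sketch (all the names, Agent, disease, set_point, are invented for illustration, and this claims nothing about real consciousness) of how an evaluation signal can ground goal-oriented behaviour:

```python
import random

class Agent:
    """Toy agent driven by a scalar 'dis-ease' signal.

    Purely illustrative: 'dis-ease' here is just the gap between the
    agent's state and a set point, nothing more.
    """

    def __init__(self, set_point=0.0):
        self.set_point = set_point              # the state the agent "wants"
        self.state = random.uniform(-1.0, 1.0)  # start somewhere arbitrary

    def disease(self):
        # Undesirability: distance between current state and the set point.
        return abs(self.state - self.set_point)

    def step(self):
        # Goal-oriented behaviour: try small moves, keep the one
        # that most reduces the agent's own dis-ease signal.
        candidates = [self.state + d for d in (-0.1, 0.0, 0.1)]
        self.state = min(candidates, key=lambda s: abs(s - self.set_point))

agent = Agent()
for _ in range(20):
    agent.step()
print(f"final dis-ease: {agent.disease():.3f}")  # converges toward 0
```

The observing-and-modeling part would then be the agent reading back its own disease() value, which a program can trivially do; whether that is sufficient is exactly the open question.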
 
It made perfect sense to me.

Jeez, have some people swallowed the 'beginner's guide to logic' bible?... Just dropping the word fallacy in here and there doesn't make you seem more intelligent. Any chatbot can cut and paste lingo. (You thought I hadn't spotted, didn't you?)

Embodied intelligence. A perfectly respectable and logically well-reasoned position to take.

You should look up the thread I started, "Why Pixy Misa is wrong".
 
Cherry-picking a YouTube video where a commentator uses the word machine to describe a ribosome is laughable.
She goes on to describe mRNA passing through the ribosome like "a computer tape".

Actually, I was cherry-picking an animation of a ribosome in action that struck a compromise between a schematic representation and an ultra-realistic one, and that showed how machine-like it functioned. I picked one that was good visually. Of course, it made me smile that the narrator called it a machine and compared its reading of mRNA to reading a computer tape.

I learned about the amazing ribosome machine from the BBC program "The Cell", which has a really nice ribosome animation sequence. In part 3 it talks about how Craig Venter led a team to create a synthetic living cell.

Google "cellular machinery" and you get 25 million hits. A tsunami of that size requires no cherry picking.

Here's a thought experiment about animations like this:

Start with the ribosome animation. Flesh it out so it's as operational as those in a living cell, such that it inputs a complete emulated mRNA and outputs a complete protein. Add all the other machinery to the animation to complete a cell (pick a nerve cell). Do this with enough nerve cells to complete a full brain simulation. Add the following physical (not emulated) peripherals: five senses for input, and a robot body for output. Now explain specifically what's missing that would make it not conscious.
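Purely as an illustration of that layering (every class name below is invented, and it compresses the thought experiment absurdly), a Python sketch: an emulated molecular machine inside an emulated cell, cells composed into a network, and the network's tick taking physical sensor input and returning motor output:

```python
class Ribosome:
    """Emulated molecular machine: reads mRNA codons, outputs a protein."""
    CODON_TABLE = {"AUG": "M", "UUU": "F", "GGC": "G", "UAA": ""}  # tiny subset

    def translate(self, mrna):
        codons = [mrna[i:i + 3] for i in range(0, len(mrna) - 2, 3)]
        protein = []
        for codon in codons:
            amino = self.CODON_TABLE.get(codon, "?")
            if amino == "":          # stop codon ends translation
                break
            protein.append(amino)
        return "".join(protein)

class NerveCell:
    """Emulated neuron built from emulated machinery (ribosomes etc.)."""
    def __init__(self):
        self.ribosome = Ribosome()
        self.potential = 0.0

    def receive(self, signal):
        self.potential += signal
        if self.potential > 1.0:     # crude threshold-and-fire
            self.potential = 0.0
            return 1.0
        return 0.0

class BrainSimulation:
    """A network of emulated nerve cells, wired to real sensors and motors."""
    def __init__(self, n_cells):
        self.cells = [NerveCell() for _ in range(n_cells)]

    def tick(self, sensory_input):
        # Physical sensor readings go in; motor commands come out.
        return [cell.receive(sensory_input) for cell in self.cells]

brain = BrainSimulation(n_cells=5)
print(brain.cells[0].ribosome.translate("AUGUUUGGCUAA"))  # -> "MFG"
print(brain.tick(sensory_input=0.6))
```

Each layer is ordinary computation built from the layer below, which is the point of the challenge: say which layer is the one where consciousness is supposed to become impossible.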
 
An interesting question is how we could tell; how much difference would be significant? Human thought and behaviour have a pretty wide spectrum, and given that an advanced thinking machine could be developed, it seems possible that it could, by design or instruction, imitate any human characteristics considered key identifiers...

Here's the problem. The characteristics and identifiers of a system are infinite. You can divide and subdivide and subdivide but you are never going to reach the end of where you can draw a distinction. So I might list a function, and you can say, "I can make a machine that does that". So I list another function, and you give me the same answer. We're going to need a lot of coffee because this conversation is infinite. Before you have a machine that does everything a human can do, you're going to have a room of monkeys typing out the script of Hamlet. Because to do everything a human can do you need a human.

What difference would be significant? That's a subjective issue and depends on what instruments you have at your disposal to detect it, and which particular functions you're looking for. This is why I bring up mimicry in nature. Because we have plenty of examples of where one thing imitates another so that it fools the appropriate subjective audience. But this is very different to the one thing actually being the other thing. It isn't. Because the thing is its own attribute. Change the system, you change the attribute.
 
You should look up the thread I started, "Why Pixy Misa is wrong".

:) Now when a machine chooses to do that, or comes up with an explanation for its own existence such as goddidit, we might be getting a little closer...
 
A thing is its own attribute. You can't have the thing without the attribute. Because they are the same.
How is that relevant?

"I'm intelligent" and "the computer is intelligent" does not mean we share the same attribute of intelligence.
That is precisely what it means. I suspect you're committing a reification fallacy, but your posts are too muddled for me to be sure about that.
 
Here's the problem. The characteristics and identifiers of a system are infinite. You can divide and subdivide and subdivide but you are never going to reach the end of where you can draw a distinction.
No.

So I might list a function, and you can say, "I can make a machine that does that".
Yes.

So I list another function, and you give me the same answer.
Yes.

We're going to need a lot of coffee because this conversation is infinite.
No.

Humans are not infinite, nor is the scope of their behaviours.

Before you have a machine that does everything a human can do, you're going to have a room of monkeys typing out the script of Hamlet. Because to do everything a human can do you need a human.
Which rather ruins your argument. A human is a machine that can do everything a human can do.

So either your point is untrue, or it is trivial.
 
Start with the ribosome animation. Flesh it out so it's as operational as those in a living cell, such that it inputs a complete emulated mRNA and outputs a complete protein. Add all the other machinery to the animation to complete a cell (pick a nerve cell). Do this with enough nerve cells to complete a full brain simulation. Add the following physical (not emulated) peripherals: five senses for input, and a robot body for output. Now explain specifically what's missing that would make it not conscious.


Er...then it's not a brain simulation, is it. It's a brain. So instead of AI you have a human brain with some robotic limbs. Behaving in the plastic way that human brains do to input. So what?

It seems likely we will continue to use the technology to outsource some of the functions our brains currently perform for us. We're already doing it. Wiki has more facts at its fingertips than any human ever did and computers are far superior as calculators when it comes to big sums. But that's a hybrid with the human part still very much required.
 
Er...then it's not a brain simulation, is it. It's a brain.
We're still talking about a simulation. If you are arguing that a sufficiently detailed simulation of a human is a human, then that's a curious choice of semantics, but not necessarily something I'd disagree with.

So instead of AI you have a human brain with some robotic limbs. Behaving in the plastic way that human brains do to input. So what?
So this is a computer program.
 
Thanks for the reference - I've certainly run across this before, just didn't recognise the term you used.

Now this (embodied cognition) is certainly true to some degree. Human physiology does shape the way we think. But that doesn't in any way preclude artificial intelligence, machine consciousness, or simulated human brains that think exactly like physical ones.
 
No.

No.

Humans are not infinite, nor is the scope of their behaviours.


Yes it does rather feel like that when you seem to come across the same idiot cloned several times...I blame the memes.

You never answered my question. Well, you never answered several of them, but this one feels rather important to the issue at hand: is a car a horse? They both can get you from A to B. They both require fuel. One moves when you kick it, the other when you put the pedal to the metal. How horse-like could we make a car, do you think? Might we keep finding differences the closer we looked and the harder we tried?

How many points are there on a line?
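For what it's worth, the standard mathematical answer is: uncountably many. Even a finite segment has the cardinality of the continuum, which is the strongest version of the infinite-distinctions point:

```latex
% Points on the unit interval: the cardinality of the continuum,
% strictly larger than the countable infinity \aleph_0.
\bigl|\,[0,1]\,\bigr| = \mathfrak{c} = 2^{\aleph_0} > \aleph_0
```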

You seem to like the word machine so I'll meet you half way...

Is a human machine a nonhuman machine?
 
Thanks for the reference - I've certainly run across this before, just didn't recognise the term you used.

Now this (embodied cognition) is certainly true to some degree. Human physiology does shape the way we think. But that doesn't in any way preclude artificial intelligence, machine consciousness, or simulated human brains that think exactly like physical ones.

Then you haven't understood the argument, because embodied intelligence suggests that form dictates function (by which we mean the full scope of all possible function present in the form).
 
The question carried with it certain assumptions, as questions usually do, and those assumptions were what I pointed out.

So why did you call it an argument from ignorance if it carried assumptions?

Are you actually trying to discuss something? It doesn't seem like you have a clue what you said yourself.
 