
Explain consciousness to the layman.

Status
Not open for further replies.
You never answered my question. Well, you never answered several of them, but this one feels rather important to the issue at hand - is a car a horse?
In fact, I did answer. I pointed out that this is a false analogy.

Then you haven't understood the argument. Because embodied intelligence suggests form dictates function (by which we mean the full scope of all possible function present in the form).
Oh, I understood the argument. It's just wrong.

Where's our form for quantum mechanics? Multivariable calculus? Zermelo-Fraenkel set theory?

Our physiology shapes the way we think. But it doesn't define it.

For any set of functions of human cognition that you care to define, we can define a different form that performs the same functions.

Your argument thus reduces to something that is either trivial or untrue.
 
Will it need to ****, eat, have periods, worry about getting cancer, have a mother, consider getting a pet dog, go to the dentist, forget where it put its keys, regret that tattoo of an ex's name? Humans think differently as a result of these functions and concerns.

Because they experience different things, not because computers can't.
 
Where's our form for quantum mechanics? Multivariable calculus? Zermelo-Fraenkel set theory?

These are not the form. These are informational interpretations of information we have (subjectively) received from the form.

It's really very simple. One thing is not another thing.
 
Actually, by that definition some computers are, but you seem to be too caught up in your biases to see that.

A computer knows nothing of information because information is meaning that is reliant on subjective sense-based interpretation. You could programme a computer to say the smell of cut grass was your grandmother on Tuesdays and the planet Jupiter after 2021 if you wanted to.
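The arbitrary-labelling point above can be made concrete. A minimal sketch (hypothetical, in Python; the function name and the order of the date checks are my own assumptions, since the post doesn't say which rule wins): the label a program attaches to an odour reading is whatever the programmer chose, and the machine has no stake in whether it means anything.

```python
from datetime import date

# Hypothetical sketch: the label attached to a "cut grass" odour reading
# is whatever the programmer decided; the machine has no stake in it.
def label_smell(odour: str, today: date) -> str:
    if odour != "cut grass":
        return "unknown"
    if today.year > 2021:
        return "the planet Jupiter"   # arbitrary rule: after 2021
    if today.weekday() == 1:          # arbitrary rule: on Tuesdays
        return "your grandmother"
    return "cut grass"

print(label_smell("cut grass", date(2023, 6, 5)))  # the planet Jupiter
print(label_smell("cut grass", date(2020, 6, 2)))  # your grandmother (a Tuesday)
```

The program follows whichever mapping it was given; nothing in it distinguishes a sensible label from an absurd one.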
 
If material reality doesn't define a thing, what does?
As I said: For any set of functions of human cognition that you care to define, we can define a different form that performs the same functions.

Your position is either untrue or trivial. Your call.

These are not the form. These are informational interpretations of information we have (subjectively) received from the form.
They are the products of human cognition, but have nothing whatsoever to do with human physiology.

It's really very simple. One thing is not another thing.
Then your position is a trivial one. That means it's not untrue, just unhelpful.
 
A computer knows nothing of information because information is meaning that is reliant on subjective sense-based interpretation. You could programme a computer to say the smell of cut grass was your grandmother on Tuesdays and the planet Jupiter after 2021 if you wanted to.
Well, yes. But the exact same thing happens to humans. Our brains are computers. Intricate, fallible, squishy computers.
 
As I said: For any set of functions of human cognition that you care to define, we can define a different form that performs the same functions.


Okay, define all the functions of human cognition then. I think you may be some time...
 
Self-referential information processing.
Can you provide proof that that is consciousness, other than 'It Must Be So, or else a Magic Bean exists'?

'A Magic Bean (god) doesn't exist' is an axiom in your system, although you and some others here choose (er, sorry, are programmed) to believe that it's factual since objectively it's never been demonstrated. Absence of Evidence is not etc.

As mentioned previously, human opinion is not objective science.
 
Okay, define all the functions of human cognition then. I think you may be some time...
Yes, I would. But that's not relevant. The set of functions is not infinite, and any given subset can be performed by a different system.

The only system that performs exactly and only the functions of a given human brain is that brain - but that is, as I've noted several times, a trivial and unhelpful point. It does not generalise to... anything.
 
Yes, I would. But that's not relevant. The set of functions is not infinite, and any given subset can be performed by a different system.

It's completely relevant. If you can't list the functions (how many are there at least, 42?), how can you start to know if they can be modelled in an alternative form?

The only system that performs exactly and only the functions of a given human brain is that brain - but that is, as I've noted several times, a trivial and unhelpful point. It does not generalise to... anything.

So one human brain-form can't function identically to another human brain-form, but a computer brain-form can function identically to a human brain-form...:boggled:
 
rocketdodger said:
Do you believe the Jeopardy champion computer is thinking rather than furnishing table-look-up rote responses after negotiating many if/then statements and database lookups?

I'd say no, that doesn't necessarily answer the 'Is it thinking?' question.

Then you are wrong.
Wouldn't be the first time.

Try learning about how the program actually works before you make statements like this.
If my statements 'table-look-up', 'if/then statements', 'database lookups' incorrectly describe what Watson does either by hard code, by trained neural nets, or some newer techniques, please provide a 2 or 3 sentence statement of how it works. I've had no luck with what I can find out about it in terms I and many others can understand.


This quote for example I find meaningless "there are systems like Watson that use more abstract elements of consciousness models of thinking in their processing (multiple drafts, global workspace, associative mapping, etc), and implementations of the CERA-CRANIUM cognitive architecture" courtesy of dlorde. What are my words lacking that don't cover this?
 
I'm an AI programmer and it is VERY interesting to me. It taught me to give up believing syntactic programs like syntactic chatbots understand a damn thing, and to try another method. And thus was born semantic programming, the semantic web, and semantic reasoning (google for more). It will help the public to do the same. It seems like non-programmers have the hardest time understanding its basic truth.

Ok, I misspoke then.

However I have to wonder why, if you are an A.I. programmer, you ever believed that a syntactic chatbot understood anything.

The minute I learned about true reasoning in the first A.I. course I took I knew that a chatbot was just smoke and mirrors.
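The "smoke and mirrors" of a syntactic chatbot can be shown in a few lines. This is a hypothetical ELIZA-style sketch (the rule list and function name are illustrative, not any specific chatbot's code): it matches surface patterns in the input and reflects words back through a template, with no model of meaning behind them.

```python
import re

# Hypothetical ELIZA-style sketch: pure pattern matching and substitution.
# The program "responds" without representing what any word means.
RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "What makes you feel {0}?"),
    (re.compile(r".*"), "Tell me more."),   # catch-all fallback
]

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.match(utterance.strip())
        if match:
            return template.format(*match.groups())
    return "Tell me more."

print(respond("I am worried about consciousness"))
# Why do you say you are worried about consciousness?
```

Whatever plausibility the output has comes entirely from the human-written templates; swap "worried" for gibberish and the program reflects the gibberish back just as happily.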
 