
Explain consciousness to the layman.

Status
Not open for further replies.
And Watson's mistakes clearly reveal that he has no conscious understanding of what any of the questions are.
I don't think anyone is suggesting that consciousness has anything to do with it. It is just an example of the huge strides in artificial intelligence that are now becoming possible.
 
The sort achieved by groups of neurons acting in concert.

I can see that we could categorise what neurons do as information processing. I can see how we could categorise what transistors do as information processing. I don't see how you can find an objective definition which includes both and excludes every other physical process, without reference to the interpretation of a human being.
 
I am trying to find a common ground with you regarding some fundamental concepts.

Do you or do you not accept the premise and the implications thus far discussed?

No, of course not.

Once again, you're treating physical computations as if they were obliged to behave like logical ones, and they aren't.

My truck depends on phenomena such as pressure and spark to operate.

Vaporize the thing, and they're gone. I mean really gone.

Your model breaks down after the first set of state changes, because the particles out in the intergalactic near-void, or wherever they end up, are not obliged to react to those changes in the same way they would have while they were part of my engine.

You could expect a cascade of diverging behavior in the system from the get-go.

So my truck wouldn't work, and neither would my brain.
 
Man, I really feel like Walter from "The Big Lebowski" speaking to Donnie when I argue with you, piggy.

Case in point, "you're out of your element." "You have no frame of reference."

I made a specific response to a specific post of westprog's, in reference to an earlier set of posts, the context of which is simply 'how many tasks that only a few years ago people thought computers would never do.'

If an iPhone pattern-matches in order to talk back to people, and an argument uses this fact to somehow suggest that the iPhone in fact doesn't do the task that only a few years ago people thought computers would never do, then such an argument is invalid -- because people pattern-match as well.

I accept the argument that the iPhone doesn't feature all the elements of human consciousness, but that was never the issue in contention.

Furthermore, is the remainder of this thread gonna be little more than you basically championing westprog's posts? That is getting really old, FYI.

Yeah, I was reading that conversation. Including westprog's point that computers are pretty much doing the kinds of things the nabobs figured were possible many decades ago.

But none of that much matters on this thread because it's about something computers haven't been able to do and nobody has any idea how to make them do.

I mean, let's face it, we've made some damn impressive strides in all sorts of technologies from architecture and public works to the space program and pharmaceuticals. That's no reason to posit that the next step is consciousness for any of these fields.

And your opinions about my conversations with westprog aren't of any particular interest to me.
 
My post concerned the historic development of certain computer systems in relation to what was deemed to constitute intelligence at the time, and how the lessons learned might relate to the future development of potentially conscious computer systems.

How might they relate?
 
The sort achieved by groups of neurons acting in concert.

If "neurons acting in concert" is what you mean by "information processing", then it's better just to talk about neural behavior.

And when you say "neural" are you talking about actual neurons, or non-neuronal "neural networks"?

But I don't think you actually mean to say that "groups of neurons acting in concert" is your definition of what "information processing" is, and that's what I'm trying to get at.

When you look at what the brain is doing -- I mean what the object is actually doing in spacetime -- what is it about that behavior which makes it "information processing" and what other things can we point to which are also performing that kind of "information processing"?
 
I don't think anyone is suggesting that consciousness has anything to do with it. It is just an example of the huge strides in artificial intelligence that are now becoming possible.

Well, that's kind of my point.

Nobody ever doubted that there was tremendous potential for AI in computers.

At the same time, no one has any idea how you might make one conscious, and we know you can't do it by programming alone, and neurobiology is making progress in other directions, so the strides in AI don't appear to be going down a road that should lead to consciousness.
 
...I don't see how you can find an objective definition which includes both and excludes every other physical process, without reference to the interpretation of a human being.
Other physical processes are only relevant in as much as they make a direct functional contribution to the substantive output; e.g., how do neurohormones affect the functioning of neighboring neurons? Account for those influences. Account for any other local neuronal influences. The judgement of a human being in these matters doesn't affect the function of the neurons in question.
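As a rough illustration of what "accounting for those influences" could mean in a functional model, here is a minimal sketch of a rate-model neuron whose gain is modulated by a local neurohormone level. All names and constants are hypothetical, chosen for illustration; this is not a claim about real neurons.

```python
def neuron_output(synaptic_input, hormone_level, baseline_gain=1.0,
                  modulation=0.5, threshold=0.2):
    """Firing rate of a toy neuron with neurohormonal gain modulation.

    The neurohormone doesn't carry the signal itself; it scales how
    strongly the neuron responds to its ordinary synaptic input. All
    parameters here are illustrative assumptions.
    """
    gain = baseline_gain + modulation * hormone_level  # hormonal influence
    drive = gain * synaptic_input - threshold
    return max(0.0, drive)  # rectified: rates can't go negative
```

The point of the sketch is only that a local chemical influence can be folded into the functional description as an extra parameter, rather than left out as "non-computational."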
 
Well, that's kind of my point.

Nobody ever doubted that there was tremendous potential for AI in computers.

At the same time, no one has any idea how you might make one conscious, and we know you can't do it by programming alone, and neurobiology is making progress in other directions, so the strides in AI don't appear to be going down a road that should lead to consciousness.

What we mean by artificial intelligence, and what we mean by conscious intelligence are different things. Artificial intelligence has been with us for thousands of years. Stonehenge and Newgrange have artificial intelligence. Notches on a stick are artificial intelligence. It's an extension to human intelligence, not interesting as a thing in itself.
 
Other physical processes are only relevant in as much as they make a direct functional contribution to the substantive output; e.g., how do neurohormones affect the functioning of neighboring neurons? Account for those influences. Account for any other local neuronal influences. The judgement of a human being in these matters doesn't affect the function of the neurons in question.

"Function" is something that doesn't have a physical meaning. How can you separate out the functional from the incidental? There's no physical distinction between an effect that is designed (whether by a human mind or by evolution) and one that is an accident.
 

He makes mistakes that no human being would make.

That's mainly for two reasons.

The first is that Watson doesn't understand categories.

Categories proved to be useless because it was impossible to tell from a category anything about the likelihood of a given answer being right or wrong.

For example, the answer to a question in "American Presidents" could be the name of a war, or a country, or an animal, or part of a quotation about almost anything, or a number, or whatever.

The second is that Watson also doesn't understand the questions.

Watson uses some basic grammar rules to guess parts of speech, along with some of its own Jeopardy!-specific rules such as the significance of the word "this", and then does a kind of Chinese Room act where he finds what most commonly goes along with the elements comprising the question.

Based on the sources and hits Watson calculates the probability of a given answer being right, along with the financial risks of wrong answers and benefits of right ones, and decides whether to ring in.
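The ring-in decision described above, a probability of being right weighed against the dollar cost of being wrong, can be sketched as a simple expected-value rule. The function name, the threshold, and the symmetric penalty are illustrative assumptions, not IBM's actual DeepQA logic.

```python
def should_ring_in(answer_confidence, clue_value, penalty=None,
                   min_expected_value=0.0):
    """Ring in only if the expected dollar gain is positive.

    A right answer wins clue_value; a wrong answer loses the same
    amount (the standard Jeopardy! penalty) unless overridden.
    Hypothetical sketch, not the real DeepQA decision procedure.
    """
    if penalty is None:
        penalty = clue_value
    expected = (answer_confidence * clue_value
                - (1 - answer_confidence) * penalty)
    return expected > min_expected_value

# e.g. a 60%-confident answer on an $800 clue:
# expected = 0.6 * 800 - 0.4 * 800 = 160 > 0, so ring in
```

With a symmetric penalty this reduces to "ring in when confidence is above 50%," but the structure shows why a higher-stakes clue or an asymmetric penalty would shift the threshold.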

Here's one of Watson's errors:

Clue: It was this anatomical oddity of US gymnast George Eyser.

Watson: What is leg?


Eyser was missing a leg, but since Watson didn't understand the category or question, he made a mistake that no human would have made, even if s/he had no idea what the actual answer was and simply made a stab in the dark.


Watson's Final Jeopardy! fumble is wonderfully non-human:

Category: U.S. Cities

Clue: Its largest airport is named for a World War II hero; its second largest, for a World War II battle.

Watson: What is Toronto?
 
Nobody ever doubted that there was tremendous potential for AI in computers.

At the same time, no one has any idea how you might make one conscious, and we know you can't do it by programming alone, and neurobiology is making progress in other directions, so the strides in AI don't appear to be going down a road that should lead to consciousness.
Well, the highlighted part is ambiguous and unproven; nevertheless, my post was not about making AIs conscious. It was making an historical point that might be relevant to the definition or understanding of consciousness in the light of potentially conscious systems development.
 
"Function" is something that doesn't have a physical meaning. How can you separate out the functional from the incidental? There's no physical distinction between an effect that is designed (whether by a human mind or by evolution) and one that is an accident.

OK, never mind.
 
Other physical processes are only relevant in as much as they make a direct functional contribution to the substantive output; e.g., how do neurohormones affect the functioning of neighboring neurons? Account for those influences. Account for any other local neuronal influences. The judgement of a human being in these matters doesn't affect the function of the neurons in question.

But it has to exclude not just other processes in the brain, but all kinds of processes, if it is to be useful in that way.

It should exclude, for instance, the behavior of the heart, or a pile of grass.
 
Well, the highlighted part is ambiguous and unproven; nevertheless, my post was not about making AIs conscious. It was making an historical point that might be relevant to the definition or understanding of consciousness in the light of potentially conscious systems development.

But this is the question I keep asking you: Why do you think this might be so?

So far, there's been no indication that computers can be, or should be, capable of performing that particular function of the brain, and advances in neurobiology surely aren't pointing in that direction.
 
But this is the question I keep asking you: Why do you think this might be so?

So far, there's been no indication that computers can be, or should be, capable of performing that particular function of the brain, and advances in neurobiology surely aren't pointing in that direction.
Could you explain something to me about the above claim?

As I recall, your electromagnetic induction theory was supposed to account for integration as a critical feature for consciousness. I posted a 2007 presentation by Geoffrey Hinton explaining how a software NN can integrate visual data. But here you seem to say there's been no indication that computers can be capable of consciousness.

Do you stand by that claim still? Is there something in particular about Hinton's presentation that you do not see as an indication pointing in the direction that software can at least provide the integration functions you were trying to explain were key to consciousness? Or, if it does provide that function, can you tell me why you think this still isn't an indication that computers could generate consciousness?
 
