
Explain consciousness to the layman.

When you know the point of an analogy, why would you bother twisting it around, as if that made a difference? People said for a while that powered flight couldn't be done, but they couldn't be bothered to say why. No one here has offered any reason why consciousness requires a biological substrate other than "well, the ones we know so far have one", which is useless. That you sidestep my analogy completely simply shows that you know that, and would rather not address it directly.

Who, precisely, is claiming that consciousness requires a biological substrate? Certainly not me. Not Leumas. Not Piggy. I'm not sure about Punnsh, Annnoid or !Kaggen, but AFAIAA they haven't made such an assertion. They can confirm or deny.

What has been claimed is that it is at least possible that we can derive a particular physical principle from the biological process - which is what happened in the case of flight (though some of the physics was arrived at independently). Had computers been available, reproducing flight in computer simulations would have been entirely pointless had the physical principles not been fully established.

It might seem unfair that a telling analogy can be picked up and hurled back, but that's how this "argument" thing works. I do hope the moderators don't object.
 
In the sense of 'participating in a game or sport', yes, of course; they are playing chess. Making moves according to the rules is 'playing the game'. Beating a competent opponent is 'playing well'. Beating the world champion is being 'the best player in the world'.

However, this is missing the point - which was that there are plenty of examples of computer simulations that 'work' for real: Can a simulated adding machine add? Can a simulated mixing desk mix music? Can a simulated voice generate intelligible speech? Can a simulated wall clock tell you the time? Can you play pinball on a simulated pinball machine? Can you play Solitaire with a simulated card deck? Can simulated evolution give you novel designs? And so on.

In each case, the relevant issue is the person interacting with the computer. That's where the game takes place, that's where the music is heard, that's where the intelligible speech is understood. Without the involvement of a conscious mind, the simulation isn't doing anything real.

The point is that computer hardware and software can perform functions that involve information processing, and the current discussion revolves around whether consciousness mainly involves or arises from information processing, and if so, whether there is any good reason why it cannot be done by computer hardware and software.

In this context, asking whether a simulated axe can chop wood, or a simulated truck can carry rubbish to the dump is irrelevant.

Whether or not the brain is carrying out "information processing" in the same sense as a computer is exactly the point. As I've repeatedly stated, no adequate physical definition of "information processing" has been provided in order that we can determine if that is in fact what is happening.
 
It postulates that each stage of consciousness has been claimed to be impossible for machines, but they are getting through them one by one. First complex calculation, then chess, and soon they'll be able to do all the stuff now reserved for human beings. The people who've had doubts about this were proved wrong before, and will be proved wrong again.
No, it suggests that, given the progress made in achieving what had been thought to be unique to human intelligence, we should be wary of making similar errors of judgement about consciousness; particularly in light of the vagueness of current definitions (or attempted definitions).

I wonder who these people are who cast doubt on the ability of machines to carry out complex calculations.

Who thought that chess would be impossible for a machine to play?
If you read what I said a little more carefully, you'll see that I made no suggestions of that kind.

So what are these continual breakthroughs in the area of consciousness? First calculations, then computer chess - now what? Faster calculations? Better chess programs? Computer Go?
Whut? I have no idea. I don't think chess (or Go) programs have anything to do with consciousness...

Is there a convincing bot out there which can carry on a brief conversation on Twitter, on any topic, and sound human - or indeed, as if any kind of conversation is going on?
No, indeed; but it's worth considering that even the team IBM put together to create Watson had serious doubts about whether it would be possible to get to the level of Joe Public, let alone championship or world level.

Back in the 1970s, the 'real soon now' view of machine intelligence was credible. I remember reading about the Japanese Fifth Generation Project that was going to change the world. It ended up giving us slightly more efficient washing machines.
- and teaching us a great deal about various aspects of intelligence. When machine translation was first attempted using semantic and grammatical analysis, it seemed unlikely that it would ever be accurate or flexible enough to be practical; Google changed that by throwing out linguistic analysis and using sample-based processing of vast amounts of data, with processing power that was simply unavailable to the early developers.
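Very roughly - and this is only a toy caricature of my own, not how Google actually implements it - the shift was from hand-written grammatical rules to simply looking up the most frequently observed translation of each chunk of text in a vast parallel corpus. Something like:

[code]
# Toy caricature of sample-based translation: no grammar, no semantics,
# just lookup of the most frequently observed translation for each chunk.
# The phrase table below is invented; a real system is distilled from
# enormous parallel corpora.
phrase_table = {
    "good morning": "bonjour",
    "thank you": "merci",
    "the cat": "le chat",
}

def translate(sentence):
    # Greedy longest-match lookup; unknown words just pass through untouched.
    words = sentence.lower().split()
    out, i = [], 0
    while i < len(words):
        two = " ".join(words[i:i + 2])
        if two in phrase_table:
            out.append(phrase_table[two])
            i += 2
        elif words[i] in phrase_table:
            out.append(phrase_table[words[i]])
            i += 1
        else:
            out.append(words[i])
            i += 1
    return " ".join(out)

print(translate("Good morning the cat"))  # -> "bonjour le chat"
[/code]

No 'understanding' anywhere in that, yet scaled up with enough data it works surprisingly well.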

One of the problems of AI is the vast amount of data the human brain absorbs as it matures, and the way it is processed and organised. We are now developing technologies that can organise and process vast quantities of data in analogous ways, as the Watson project demonstrates. I'm not suggesting Watson is necessarily the route to consciousness, but it is a custom AI, and when it is tailored, as planned, to more practical applications such as medical use, it will have major repercussions. This field is just beginning to be useful.
 
I was not evading the question... I was using a technique of Philosophy.
Well done :rolleyes:

Your deductions about the systems and the uses are quite right... isn't it better to figure that out for yourself rather than have me tell you?

Working it out for yourself also made you think about the original statement for which you asked me to give examples. The whole process of working it out for yourself, I think, made you appreciate my original concept in more depth.
Interesting idea - it didn't work quite like that; I just stripped the 'Philosophical technique' from your post :D

I was just curious to see what examples you came up with.
 
Without the involvement of a conscious mind, the simulation isn't doing anything real.
The involvement of a conscious mind makes what it does 'real'?

Whether or not the brain is carrying out "information processing" in the same sense as a computer is exactly the point. As I've repeatedly stated, no adequate physical definition of "information processing" has been provided in order that we can determine if that is in fact what is happening.

I simply mean the translation of sequences of representative states into different forms or sequences, much as neurons or microprocessors do. That ought to be enough to work with.
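As a rough illustration - a toy sketch of my own, not a formal definition - this is the kind of 'translation of representative states' I have in mind:

[code]
# Toy illustration: translating one sequence of representative states into another.
# The particular mapping is arbitrary; the point is just that input states are
# systematically transformed into output states, much as a neuron's inputs are
# transformed into firing patterns or a processor's register states into new ones.

def translate(states, rule):
    """Map each input state to an output state according to a fixed rule."""
    return [rule[s] for s in states]

# Example rule: a trivial two-state inverter, standing in for any transformation.
rule = {0: 1, 1: 0}
print(translate([0, 1, 1, 0], rule))  # -> [1, 0, 0, 1]
[/code]

Whether that, scaled up enormously, is what brains are doing is of course the question at issue.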
 
I don't think Leumas was referring to simple randomisation of that kind

What exactly would have made you think that... given that the post was talking about CONSCIOUSNESS and AI in computers, and arguing that they are not conscious even though they may appear to be Artificially Intelligent... what would make you assume that I was not talking about

Using it for AI agents […] where it is used to make agent behaviours more human-like or challenging […].


In what other way would I be meaning what I said, if I was talking in the CONTEXT of AI and the consciousness of computer programs, and how a program fools humans into thinking that it is conscious when it is not?

If I am talking about computer programming of consciousness in computers, and you "obviously" know that this is all about AI and computer programming... then in what way would randomness and selecting subroutines to execute from among many be interpreted, in light of the CONTEXT of the subject being talked about?

I mean, what exactly would I be talking about if I am talking about COMPUTER PROGRAMMING and mention randomness and subroutines, when, as you apparently know, these are some basic techniques in Computer Science?


So when you say
I was curious to know if he knew of some more interesting or less obvious applications…

So you were looking for some other field in which programming subroutines and randomness are used, other than in computer programs... Well...
Randomness is used in many fields other than computer programming... It is, for instance, used to generate white noise in DSP applications. It is also used in economics, in code generation and breaking, and in cryptography... but all of these are actually now done by programs... so maybe they do not apply??? Can you think of others?

I do not know of any other place where programming subroutines (or functions or methods or whatever you want to call them) might be used other than in a computer program... do you?
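And just to make the original point concrete - a toy sketch of my own, not anything from a real game engine - this is the kind of thing I mean by randomness and selecting subroutines to execute from among many in an AI agent:

[code]
import random

# Hypothetical behaviour subroutines for a game agent; the names are invented
# purely for illustration.
def patrol():
    return "patrolling"

def idle():
    return "idling"

def taunt():
    return "taunting the player"

behaviours = [patrol, idle, taunt]

# A weighted random choice makes the agent less predictable, and so more
# 'human-like' or challenging, without the program understanding anything
# it is doing.
def next_action():
    chosen = random.choices(behaviours, weights=[0.5, 0.3, 0.2])[0]
    return chosen()

print(next_action())
[/code]

The agent appears to 'decide', but there is no consciousness anywhere in it - which was exactly my point.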


but for whatever reason, he chose not to commit to a direct answer.


How is asking you to think about two very well-specified AND INTERESTING situations not a direct answer? I gave you two scenarios to which the topics you asked about are applicable, and I gave you a lot of information about those scenarios, with enough detail for you to figure out the rest of the mundane answer for yourself.

I was making the assumption that you have a Computer Science background. If you do not have that background then you are right, it was not a sufficiently direct answer, but only because you lack the background in the field... in which case a direct answer would have required that I explain all the background necessary to appreciate it... which I think you would agree is not feasible.

So there were two reasons for the way I answered:
  • If you have the background: the scenarios given would make you think more about the topics and how they apply, and since you have knowledge of computer science, you would then appreciate the concepts better if you have not already considered them before.
  • If you do not have the background: the scenarios would make you think, and make you realize that obviously there are applications, but that the topic is too complex to appreciate in a few paragraphs on a forum.
 
No, it suggests that, given the progress made in achieving what had been thought to be unique to human intelligence, we should be wary of making similar errors of judgement about consciousness; particularly in light of the vagueness of current definitions (or attempted definitions).


If you read what I said a little more carefully, you'll see that I made no suggestions of that kind.

That seems to be the gist of your argument - that we've claimed in the past that certain things are unique to human intelligence, and they turned out not to be. The point I was making is that, generally, the things computers turn out to be good at were always recognised as being suitable for machines. It was always credible that a machine could play chess. The game is inherently digital in nature; it has consistent and well-defined rules - it almost demands the idea that a perfect strategy exists.

What my post was trying to show is that the things computers are good at are things we've known machines would be good at for hundreds of years. These are things that we've been waiting to be automated.

The things that computers are bad at but humans are good at - we are still waiting for computers to get good at them, with no particular reason to assume that they will.

Whut? I have no idea. I don't think chess (or Go) programs have anything to do with consciousness...

No, indeed; but it's worth considering that even the team IBM put together to create Watson had serious doubts about whether it would be possible to get to the level of Joe Public, let alone championship or world level.

And yet they devoted the resources to getting a chess-playing program written, rather than a program to carry out a conversation.

Of course there are doubts in any engineering project, but if they told their managers "I don't know why we're working on this, it's clearly impossible" I'd be surprised.

- and teaching us a great deal about various aspects of intelligence. When machine translation was first attempted using semantic and grammatical analysis, it seemed unlikely that it would ever be accurate or flexible enough to be practical; Google changed that by throwing out linguistic analysis and using sample-based processing of vast amounts of data, with processing power that was simply unavailable to the early developers.

That's very interesting. In spite of thousands of man-years of work trying to get a program that could figure out what a particular phrase meant, Google decided that the only way was brute-force sampling. Semantic and grammatical analysis - actual machine understanding - was a dead end. This is in spite of it being a Holy Grail for many, many years. Computers were going to stop trying to understand language, and just pass it on.

One of the problems of AI is the vast amount of data the human brain absorbs as it matures, and the way it is processed and organised. We are now developing technologies that can organise and process vast quantities of data in analogous ways, as the Watson project demonstrates. I'm not suggesting Watson is necessarily the route to consciousness, but it is a custom AI, and when it is tailored, as planned, to more practical applications such as medical use, it will have major repercussions. This field is just beginning to be useful.
 
Why does it matter that your head would appear as a sliver if I was moving at 0.99999c relative to you?

Because you are sitting here claiming that it is impossible for a grossly deformed system to continue to do the same work.

This is in direct opposition to what we know about physics. If I am moving at 0.99999c relative to you, your head is grossly deformed from my perspective, yet you function nominally according to your own perspective.
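For concreteness - this is just the standard length-contraction factor, nothing exotic - at v = 0.99999c,

\[
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}} = \frac{1}{\sqrt{1 - 0.99999^2}} \approx 224 ,
\]

so a head roughly 20 cm front to back would measure less than a millimetre in my frame - and yet it carries on working perfectly well in its own.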

What, you don't believe in all this "relativity" mumbo-jumbo?

What happens at near-light speed is simply irrelevant to a discussion of what will happen to an object if you vaporize it.

It's not a question of looking at one type of "gross deformity" and reasoning from there to an entirely different type of deformity.
 
The involvement of a conscious mind makes what it does 'real'?

It makes the simulation real. Obviously it's doing something real.

I simply mean the translation of sequences of representative states into different forms or sequences, much as neurons or microprocessors do. That ought to be enough to work with.
 
Consciousness != Intelligence

Not only that, but the definition used by neurobiologists and the bio faction here is one that firmly resists goalpost-shifting. That's one of the reasons it's so useful for research.

Plus, the argument to which you're responding was off target anyway, since the bio faction is not arguing that conscious machines are impossible in the first place, so there would be no need to shift the goalposts in order to deny that such a thing is possible if it happens.
 
A person trying to achieve powered flight was in no doubt what it was that he was trying to achieve, and would have no doubt about whether he'd done it or not.

Oh, c'mon, westprog, you know that the definition of "flight" is hopelessly vague!

I can rattle off a dozen things we don't know about flight in half a minute.

And when is something really flying, and not just leaping?

This is a hopeless concept. I say that we settle for "motion" and ditch this useless notion of "flight".
 
What happens at near-light speed is simply irrelevant to a discussion of what will happen to an object if you vaporize it.

It's not a question of looking at one type of "gross deformity" and reasoning from there to an entirely different type of deformity.

The important thing about relativity as it relates to this discussion is that systems operate according to their own perspective. The way a lathe operates is due to the fact that all the pieces and particles in it are in the same location, moving at the same speeds (relativistically speaking). If we were to distort the machine in its own frame of reference then it would stop working. (And that the machine has a frame of reference does not imply that it has "experiences").

Thus when we look at the spectra of distant, receding stars, we allow for the red shift to calculate the temperature of the star in its own frame of reference.
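For instance - taking the standard relativistic Doppler relation for purely radial recession -

\[
\lambda_{\text{obs}} = \lambda_{\text{emit}}\sqrt{\frac{1+\beta}{1-\beta}}, \qquad \beta = v/c ,
\]

so dividing out that factor recovers the rest-frame spectrum, and (via Wien's law, \(\lambda_{\text{peak}} \propto 1/T\)) the star's temperature in its own frame.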
 
No.

I told you, the machine makes it so the interactions between particles remain functionally equivalent.

So the pile of brush could be hauled to the dump, and the coffee cup could be lifted.

It is obvious that you don't understand this exercise, piggy.

I understand it perfectly.

But if you think a vaporized truck that's spread across the galaxy can still haul brush, or an arm that's vaporized and spread across the galaxy can still lift a cup of coffee, you've taken leave of your senses.

It doesn't matter that "the interactions between the particles remain functionally equivalent".

Once you spread my exhaust system's particles across the galaxy, they cease to be usable for the work they used to do. Ditto for the spark plugs and pistons.

That's because they no longer form the object they used to form, which was able to do the work it did in the world.

The fact that the particles are still dancing in relation to each other by magic doesn't change that.

ETA: I'll give you a choice... I'll either fire a bullet at your chest with my Ruger, or I'll fire a vaporized bullet at your chest (consisting of particles spread across light years which are behaving, relative to each other, the way the particles in the non-vaporized bullet are behaving). Do you have a preference?
 
Not only that, but the definition used by neurobiologists and the bio faction here is one that firmly resists goalpost-shifting. That's one of the reasons it's so useful for research.

Plus, the argument to which you're responding was off target anyway, since the bio faction is not arguing that conscious machines are impossible in the first place, so there would be no need to shift the goalposts in order to deny that such a thing is possible if it happens.

I'm sure that there are some people claiming that non-biological consciousness is impossible - but I've never claimed it, you've never claimed it, and Leumas hasn't - yet that's the argument our opponents like to rebut. I wonder why?
 
The involvement of a conscious mind makes what it does 'real'?



I simply mean the translation of sequences of representative states into different forms or sequences, much as neurons or microprocessors do. That ought to be enough to work with.

And what makes those states representative, apart from the presence of a conscious mind?
 
That's an interesting view of the development of machine intelligence. It postulates that each stage of consciousness has been claimed to be impossible for machines, but they are getting through them one by one. First complex calculation, then chess, and soon they'll be able to do all the stuff now reserved for human beings. The people who've had doubts about this were proved wrong before, and will be proved wrong again.

I wonder who these people are who cast doubt on the ability of machines to carry out complex calculations. Surely not Pascal or Leibniz. Machines have been used to aid calculation for hundreds of years. It's precisely what machines are good at. However, their usefulness in arithmetic is not matched by a corresponding ability in mathematics, where their impact remains limited. The computer remains just a big abacus in mathematics, occasionally used for tedious repetitive work but providing little insight.

Who thought that chess would be impossible for a machine to play? The Mechanical Turk convinced people that a clockwork machine could play to a high standard back in the 1770s. It seems that they tended to over-estimate what a machine could do. In fact, the formal rule-based environment of chess was instantly recognised as ideal for computers. Wiener wrote a suggested strategy for computer chess back in 1948. The first computer chess program was written in 1957. That makes the field at least as old as I am.

So what are these continual breakthroughs in the area of consciousness? First calculations, then computer chess - now what? Faster calculations? Better chess programs? Computer Go?

What's noteworthy is that computers have gotten better and better at the things that computers are good at - but they've remained resolutely poor at the things that they were always bad at. We're coming up to the fiftieth anniversary of ELIZA. Is there a convincing bot out there which can carry on a brief conversation on Twitter, on any topic, and sound human - or indeed, as if any kind of conversation is going on?

Back in the 1970s, the 'real soon now' view of machine intelligence was credible. I remember reading about the Japanese Fifth Generation Project that was going to change the world. It ended up giving us slightly more efficient washing machines.

It seems that dlorde is using an idiosyncratic definition of consciousness which, like PixyMisa's, does not actually distinguish consciousness from non-conscious activity in the brain.
 
Yeah, but the problem is you stopped reading about advances in computer science "back in the 1970s".

If you bothered to read up on current events, you would see things like software carrying out biological research, formulating mathematical proofs, and making business decisions for gigantic corporations.

Not to mention the fact that research groups are simulating portions of mammalian brains, soon to be entire mammalian brains.

And you can talk to your iPhone, and for simple communications, it understands you.

How do you respond to that, westprog?

None of that has anything to do with consciousness, dodger.

None of those functions requires consciousness, and the people building the brain sims are not claiming that the resulting sims would be conscious.

And my phone "understands" what I'm saying in the same way that my door knob understands what my key is doing. This also has nothing at all to do with consciousness.
 
The point is that computer hardware and software can perform functions that involve information processing, and the current discussion revolves around whether consciousness mainly involves or arises from information processing, and if so, whether there is any good reason why it cannot be done by computer hardware and software.

In this context, asking whether a simulated axe can chop wood, or a simulated truck can carry rubbish to the dump is irrelevant.

This is where you're wrong.

There is absolutely no reason to doubt that consciousness is caused directly by the physical activity of the brain, just as every other bodily function is caused directly by physical activity, and every other spacetime event is caused directly by the interaction of matter and energy.

If you do want to pursue this "information processing" angle, then you can't propose the kind of info processing that computers do for us as the cause, because that is entirely symbolic.

So what sort of "information processing" are you talking about, exactly?
 
The involvement of a conscious mind makes what it does 'real'?

Yes.

When you check back with the simulator to see what the state of the simulation is, the physical state of your brain changes. And that is what it means for the "world of the simulation" to be real. There is no other way in which it is real.

Because outside of that, it's just a machine doing what a machine does.

If you subtract all brains from the picture, there is no simulation, just the simulator.

This framework is the only way in which the laws of conservation are not violated.
 
That's very interesting. In spite of thousands of man-years of work trying to get a program that could figure out what a particular phrase meant, Google decided that the only way was brute-force sampling. Semantic and grammatical analysis - actual machine understanding - was a dead end. This is in spite of it being a Holy Grail for many, many years. Computers were going to stop trying to understand language, and just pass it on.

And Watson's mistakes clearly reveal that he has no conscious understanding of what any of the questions are.
 