When will machines be as smart as humans?

I doubt it will ever happen, or at least not for tens of thousands of years. We'd need a computer that would be like our brains: biological and capable of creating new 'pathways', not just using existing pathways efficiently.

Tens of thousands of years. You haven't been following the last few centuries, have you?
 
I'm not interested so much in the different views. The fact is, I agree with you that cognition is the way it has to be done. I think that cognition itself is mathematical in function. In order for the machine to be cognizant, we have to program that. So, all thought is computational, mathematical in nature.

Yes, arbitrarily complex thinking could occur in a machine -- but Searle's point was that consciousness does not arise purely as a function of information. It is a physical thing, and a result of physical processes in the brain. Therefore it is irrational to presume it could possibly arise from pure information interchange. Would it arise from a computer programmed to operate exactly as a brain does? Don't know, because electrons flowing through wires may not give rise to the physical, existing phenomenon of consciousness.

And here's the rub: If it doesn't, then that means the simulation is missing something. But would a perfect simulation, consciousness included, actually be conscious? Not necessarily, any more than the simulation of a rocket launcher in "Quake" actually causes things to explode.
 
There's a lot to read! LOL It's not too much to ask that I look at something, or question what I know or have studied. I think a lot of arguments are caused by not wanting to offend, so definitive questions are assumed rather than asked, and when they are asked, the other people become defensive because they were asked to define something. So, I think we all need to lighten up, ask questions, not make assumptions about how much or how little the other person knows of the world, and just let their conversational abilities speak for themselves.

I've read the Wikipedia article.

Personally, I accept the terms of access consciousness, but not phenomenal consciousness. I believe that anything we cannot relate in language is due to a shortcoming in the language. Therefore, I do not accept Qualia or p-zombies. I do accept Dennett's thoughts, as they seem closest to my own, especially concerning the separation of "awareness" and "self-awareness".

For the sake of argument, I can accept p-consciousness and qualia, but not p-zombies for reasons that I will claim under that thread if needed.

I accept all of the information under cognitive neuroscience as true.

My main philosophers studied are Plato, Beckenstein, and (if you can consider some of his works) Chaucer, although they obviously didn't have much influence. Most of my beliefs are from the field of life science, and apparently my beliefs are in tune with Dennett's.

So, my proposal for this conversation is this:

"Consciousness" is the ability to perceive ourselves and our relationship to the external world, via our five senses. "Qualia" are sensual experiences which cannot be defined using language. Our state and level of consciousness is directly derived from our ability to sense, plus our ability to use that information to interact with our external environment. Our ability to sense is directly derived from our sensory organs. "Unconsiousness" is the lack of ability to perceive our relationship to the outside world, or more specifically, the simultaneous lack of ability to gather and use information from all our sensory organs, regardless of whether the malfunction is in the brain or in the organs.

Can you accept this definition, or do you have any alterations/additions?
Thanks, this helps a lot. I would say that our inability to precisely define consciousness isn't a semantic one; it is one of precise understanding. In the 19th century humans had a pretty good idea of what flight was, but they didn't precisely understand it. We still don't understand it absolutely, but we have come an awfully long way since the Wright brothers. I think it likely that we will turn a corner with consciousness like we did with aerodynamics.

I have to move today so I won't spend too much time on this post. I promise to address it more when I get back.
 
Yes, arbitrarily complex thinking could occur in a machine -- but Searle's point was that consciousness does not arise purely as a function of information. It is a physical thing, and a result of physical processes in the brain. Therefore it is irrational to presume it could possibly arise from pure information interchange. Would it arise from a computer programmed to operate exactly as a brain does? Don't know, because electrons flowing through wires may not give rise to the physical, existing phenomenon of consciousness.

And here's the rub: If it doesn't, then that means the simulation is missing something. But would a perfect simulation, consciousness included, actually be conscious? Not necessarily, any more than the simulation of a rocket launcher in "Quake" actually causes things to explode.
My point with the oranges simulation. Agreed completely.
 
So, my proposal for this conversation is this:

"Consciousness" is the ability to perceive ourselves and our relationship to the external world, via our five senses. "Qualia" are sensual experiences which cannot be defined using language. Our state and level of consciousness is directly derived from our ability to sense, plus our ability to use that information to interact with our external environment. Our ability to sense is directly derived from our sensory organs. "Unconsiousness" is the lack of ability to perceive our relationship to the outside world, or more specifically, the simultaneous lack of ability to gather and use information from all our sensory organs, regardless of whether the malfunction is in the brain or in the organs.

Can you accept this definition, or do you have any alterations/additions?
No, I'm happy with this usage of the word so long as we understand that any usage will suffer from limitations because of our limited understanding of what exactly consciousness is.
 
There is a fundamental difference between the way you teach a child and the way you can teach a computer. With a child, you point to a red tractor and say "red tractor". Eventually the child associates experiences of red and of tractors with the words "red" and "tractor". The only way you could do this with a machine would be if the machine was conscious.
Since this has been done, we can only conclude that computers are already conscious.

If all it is doing is processing information then it simply cannot associate any meaning with the word "red".
Whyever not? What is meaning if not information?

It is just a label that it is told applies to certain other things, which are also just labels.
Except that this is exactly what you are doing with the child.

All the computer can do is shuffle symbols about. All it can do is put "red" and "tractor" together to make "red tractor".
No. It can associate the terms, individually and in combination, with, for example, a picture of a red tractor. Which is exactly what we do.
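For concreteness, here is a toy sketch of that kind of association (the feature vectors are invented stand-ins for whatever a real vision system would extract, not actual image data):

```python
import math

# Hypothetical training examples: (feature vector, labels). The numbers are
# made up; a real system would extract features from actual images.
examples = [
    ((0.9, 0.1, 0.8), {"red", "tractor"}),
    ((0.1, 0.9, 0.2), {"green", "car"}),
    ((0.85, 0.15, 0.1), {"red", "apple"}),
]

def labels_for(features):
    """Return the labels of the nearest stored example (1-nearest-neighbour)."""
    nearest = min(examples, key=lambda ex: math.dist(ex[0], features))
    return nearest[1]

# A new "image" whose features are close to the red-tractor example gets
# both labels back, i.e. the terms are associated with the picture.
print(labels_for((0.88, 0.12, 0.75)))  # {'tractor', 'red'} (set order varies)
```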

It then says "tractor is red", but it never "knows" what any of the symbols actually mean, which is the point of Searle's Chinese Room argument.
That was indeed the point of Searle's Chinese Room argument, and it is something that Searle failed utterly to show.

To recap the argument: We have a man in a room full of books. Through a hole in one wall come messages written in Chinese, a language he can neither read, write, nor speak. He consults the books, and following simple functional rules, constructs a second message from the first - without ever translating either message into any language he understands.

But the second message is in fact an answer to the question in the first message. To answer the questions, the Chinese Room must understand Chinese.

The man, as we have stipulated, does not understand Chinese.

Books are merely data; they represent knowledge, but they certainly do not understand anything.
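To make the mechanism concrete, here is a toy sketch of the kind of purely syntactic lookup the Room performs. The rule-book entries are invented, and a real Room would need rules covering arbitrary Chinese input rather than a fixed table:

```python
# Toy "rule book": input symbols mapped to output symbols, with no notion of
# what either string means to the person doing the lookup.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",
    "今天是星期几？": "今天是星期二。",
}

def chinese_room(message: str) -> str:
    """Follow the book: look the symbols up and copy out the listed reply."""
    return RULE_BOOK.get(message, "对不起，我不明白。")

print(chinese_room("你好吗？"))
```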

Searle and his supporters see this as a comprehensive refutation of functionalism. Functionalists see this as obvious nonsense. It's neither the man nor the books that understands Chinese, it's the combined system; the man providing the logical processing and the books providing the information.

Searle's response to this is to consider the case where the man has memorised the contents of the books. Now he does everything himself, and yet he still does not understand Chinese. He cannot tell you what any of the questions means, or even the answers that he himself has written.

This rejoinder misses the point by a parsec. The man has constructed a new consciousness using his own conscious processing, just as he did in the Room. That consciousness understands Chinese, just as the Chinese Room understood Chinese. That it is happening inside his brain rather than by the interaction of his brain with the books makes not the slightest difference; it is exactly the same argument and fails in exactly the same way.
 
The man has constructed a new consciousness using his own conscious processing, just as he did in the Room. That consciousness understands Chinese, just as the Chinese Room understood Chinese. That it is happening inside his brain rather than by the interaction of his brain with the books makes not the slightest difference; it is exactly the same argument and fails in exactly the same way.
Does this mean that two different consciousnesses can potentially inhabit the same brain?

Furthermore, if this is truly a new consciousness that acts exactly in the same way as the instructions-on-books case does, does that mean that the instructions-on-books case houses a new consciousness as well?
 
Does this mean that two different consciousnesses can potentially inhabit the same brain?
I don't know to what degree this is possible in reality. There's something like that going on in split-brain patients, but I don't know if it's really truly two different consciousnesses.

But in terms of Searle's thought experiment, that is indeed what is happening. Though one is generated directly by the brain, and one is generated by conscious logical processing, both ultimately derive from the brain.

Furthermore, if this is truly a new consciousness that acts exactly in the same way as the instructions-on-books case does, does that mean that the instructions-on-books case houses a new consciousness as well?
Yes. Not the books themselves, of course, but the system of man-following-instructions. If it can understand and answer arbitrary questions, then, functionally speaking, it is conscious.

I'm not arguing here that functionalism is correct (though I do argue that elsewhere). I'm just pointing out that Searle's argument fails to make any kind of impact on functionalism.
 
Originally Posted by JustGeoff:
There is a fundamental difference between the way you teach a child and the way you can teach a computer. With a child, you point to a red tractor and say "red tractor". Eventually the child associates experiences of red and of tractors with the words "red" and "tractor". The only way you could do this with a machine would be if the machine was conscious.

Since this has been done, we can only conclude that computers are already conscious.
Well, tell us about it. Give us an example? Perhaps a computer can do this in a very limited and pre-set way.

I used to be very interested in AI and read everything I could get my hands on for nearly two decades. My understanding is that computers don't apply learning from one concept or discipline to another. I write programs though I'm not in AI and I learned the hard way that computers don't adapt very well. If a subroutine is even slightly off the computer fails. Take opening a door for example. A child can apply insights gained from opening a door with a knob to a door with a latch. In fact the more doors he or she opens the shorter the learning curve.

Computers are different. Humans don't build computers (robots) to learn how to open doors. On the contrary, they program every possible scenario into the computer. A subroutine is written for all of the subtleties of different types of doors and mechanisms, including sliding doors, latches, knobs, swing-out doors, etc.
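Roughly the style of thing I mean, with invented routine names, just to illustrate the one-subroutine-per-case approach:

```python
# One hand-written subroutine per anticipated door type; anything the
# designers didn't anticipate simply fails. All names here are invented.
def open_knob_door():
    print("grip knob, twist, push")

def open_latch_door():
    print("press latch, pull")

def open_sliding_door():
    print("grip handle, slide sideways")

HANDLERS = {
    "knob": open_knob_door,
    "latch": open_latch_door,
    "sliding": open_sliding_door,
}

def open_door(door_type):
    handler = HANDLERS.get(door_type)
    if handler is None:
        raise RuntimeError("no subroutine for door type: " + door_type)
    handler()

open_door("latch")        # works: someone wrote this case
# open_door("revolving")  # fails: nobody anticipated it
```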

Computers can learn but they can only learn in narrowly defined ways. At this time chess programs can't apply strategy from chess to checkers or war games. People on the other hand do this.

I'll confess that I'm not as fanatic about AI as I once was. I've become disillusioned over the past two decades. So it wouldn't surprise me that some of the problems I am familiar with have been solved.

Still, claiming that they are solved is not proof. Do you have an example?
 
Well tell us about it? Give us an example? Perhaps a computer can do this in a very limited and pre-set way.
Example? Lots of image-processing work. Lots and lots of image-processing work.

I used to be very interested in AI and read everything I could get my hands on for nearly two decades. My understanding is that computers don't apply learning from one concept or discipline to another. I write programs though I'm not in AI and I learned the hard way that computers don't adapt very well. If a subroutine is even slightly off the computer fails. Take opening a door for example. A child can apply insights gained from opening a door with a knob to a door with a latch. In fact the more doors he or she opens the shorter the learning curve.
People spend years being bumbling idiots (we refer to them as "children") while constructing a system of causal inferences that allows them to deal with the world. They are, without question, far better at this than any computer system yet developed.

But the claim that this is anything more than a difference of degree is unsupported by evidence.

Computers are different. Humans don't build computers (robots) to learn how to open doors. On the contrary, they program every possible scenario into the computer. A subroutine is written for all of the subtleties of different types of doors and mechanisms, including sliding doors, latches, knobs, swing-out doors, etc.
Not true. Some people do build computers to learn how to open doors. It is not what we use computers for most of the time; we want computers and robots to be useful, right now; not incompetent clowns we have to spend a quarter of our lifetime training.

Computers can learn but they can only learn in narrowly defined ways. At this time chess programs can't apply strategy from chess to checkers or war games. People on the other hand do this.
At this time, as far as I know, this is correct.
 
Example? Lots of image-processing work. Lots and lots of image-processing work.
I'm not sure what you are talking about. I understand pattern recognition and image-processing work. I know that computers are good at narrowly defined processes, but they don't have the same abilities as humans when it comes to concepts. A computer wouldn't match a caricature (drawing) to a photo of the same person without special programming to specifically do so, and it is doubtful that it could do it at all if the caricature was sufficiently different from the photo. Yet people can do it relatively easily. Computers, as yet, don't solve problems the way people do. This isn't to say they won't, only that currently they don't.

People spend years being bumbling idiots (we refer to them as "children") while constructing a system of causal inferences that allows them to deal with the world. They are, without question, far better at this than any computer system yet developed.

But the claim that this is anything more than a difference of degree is unsupported by evidence.
I disagree. The way computers solve problems is by brute force. Using complex algorithms to eliminate unlikely matches, the computer then examines every possible remaining match until it makes an exact match or comes close.
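As a caricature of that search-plus-pruning style (the filter and scoring functions here are arbitrary stand-ins):

```python
# Prune the obviously bad candidates cheaply, then score everything that is
# left and keep the best match found.
def brute_force_solve(candidates, cheap_filter, score, threshold):
    plausible = [c for c in candidates if cheap_filter(c)]   # eliminate unlikely matches
    best = max(plausible, key=score, default=None)           # examine every remaining match
    if best is not None and score(best) >= threshold:
        return best
    return None

# Toy usage: among even numbers, find the one closest to 42.
result = brute_force_solve(
    candidates=range(1000),
    cheap_filter=lambda n: n % 2 == 0,
    score=lambda n: -abs(n - 42),
    threshold=-5,
)
print(result)  # 42
```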

Not true. Some people do build computers to learn how to open doors.
You are correct. I was wrong. There are two ways to program a computer to open different doors.

1.) Create a program to account for as many variables as possible.

2.) Create a program to learn how to open doors under a narrowly defined set of variables.

Problem: Any "learning" will only be useful for opening doors. The computer won't be able to apply the "learning" to something outside of its scope of learning "doors".

It is not what we use computers for most of the time; we want computers and robots to be useful, right now; not incompetent clowns we have to spend a quarter of our lifetime training.
But it would be very useful for a computer to learn even if it did take years. A computer that could learn as much as a child does in 6-8 years would be unbelievably useful. We program robots that take years to reach distant planets, so why not robots that take a decade or more to acquire AI? FWIW, it is not for lack of trying. Honestly, we don't as yet know how to do that. BTW, the time could be compressed, since such a robot wouldn't have the same needs of sleep, rest, eating, playing, etc.
 
I'm not sure what you are talking about. I understand pattern recognition and image processing work. I know that computers are good at narrowly defined processes but they don't have the same abilities as humans when it comes to concepts.
I'm not saying that they have the same level of ability; this is clearly not true. I'm saying that they can perform the function.

A computer wouldn't match a caricature (drawing) to a photo of the same person without special programming to specifically do so, and it is doubtful that it could do it at all if the caricature was sufficiently different from the photo. Yet people can do it relatively easily.
Sometimes. I've certainly looked at cartoons and thought, who the splorp is that supposed to be?

Computers, as yet, don't solve problems the way people do. This isn't to say they won't, only that currently they don't.
Well, there are two points there. One is how computers solve problems, and there is a huge array of problem-solving methods in use; some of them only in research, some of them more broadly. The second is that exactly how people solve those problems is still an open question in many ways. So saying that "computers ... don't solve problems the way people do" begs two questions at the same time.

My point is certainly not that computers are at a human-level of general problem-solving; this is far from true. My point is that we have many approaches for solving simple examples of every class of problem that humans are able to solve, and there is no solid evidence that there is a category difference between computer problem-solving and human problem-solving, rather than a difference of scaling.

Differences of scaling can prove intractable too, of course.

I disagree. The way computers solve problems is by brute force. Using complex algorithms to eliminate unlikely matches, the computer then examines every possible remaining match until it makes an exact match or comes close.
That's one way. It's also one way humans solve problems - massively parallel processing for image analysis, for example.

You are correct. I was wrong. There are two ways to program a computer to open different doors.

1.) Create a program to account for as many variables as possible.

2.) Create a program to learn how to open doors under a narrowly defined set of variables.

Problem: Any "learning" will only be useful for opening doors. The computer won't be able to apply the "learning" to something outside of its scope of learning "doors".
Again, not true. A computer can adapt learning from one category and apply it to another related domain. Making the learning program more general, though, means that it isn't as good at learning how to solve the specific domain.
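A minimal, contrived sketch of the kind of reuse I mean: a model fitted on one toy task is used as the starting point for a related task, and gets close with far fewer steps than starting cold. The "tasks" are invented toy regressions, purely for illustration, not door-opening:

```python
# Fit a one-parameter model by gradient descent on task A, then reuse that
# parameter as the starting point for a related task B.
def fit(data, w=0.0, steps=200, lr=0.01):
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    err = sum((w * x - y) ** 2 for x, y in data) / len(data)
    return w, err

task_a = [(x, 3.0 * x) for x in range(1, 10)]   # e.g. "knob doors"
task_b = [(x, 3.2 * x) for x in range(1, 10)]   # a related domain, "latch doors"

w_a, _ = fit(task_a)                            # learn task A from scratch
_, err_cold = fit(task_b, w=0.0, steps=3)       # 3 steps from scratch on task B
_, err_warm = fit(task_b, w=w_a, steps=3)       # 3 steps starting from task A's solution
print(err_cold, err_warm)                       # the warm start ends up far closer
```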

But it would be very useful for a computer to learn even if it did take years. A computer that could learn as much as a child does in 6-8 years would be unbelievably useful.
Yes, it would.

However, there is a second problem here. Although computers can be programmed to learn in the same way as children (or in what we think is the same way, since human learning is an active area of research), current computers are a few orders of magnitude less powerful in terms of processing and memory than the human brain. So even when equipped with suitable learning programs, they wouldn't be able to learn as much as a human child.

Yet.
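To put rough numbers on the gap (these are commonly quoted ballpark figures, not measurements, and the "operations" being counted aren't strictly comparable):

```python
import math

# A synaptic event is not really the same thing as a CPU operation, so this
# is only an order-of-magnitude comparison.
synapses = 1e14              # assumed synapse count for a human brain
signals_per_second = 10      # assumed average signalling rate per synapse
brain_ops = synapses * signals_per_second    # ~1e15 "operations" per second

desktop_ops = 1e10           # assumed order of magnitude for a desktop CPU

print(math.log10(brain_ops / desktop_ops))   # roughly 5 orders of magnitude apart
```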

We program robots that take years to reach distant planets why not robots that will take a decade or more to acquire AI. FWIW it is not for the lack of trying. Honestly we don't as yet know how to do that.
We don't know exactly how to do it, no. We can look at specific problem domains and develop learning systems for those domains, but putting it all together so that it forms and tests cross-domain inferences is something yet to be done.

BTW, the time could be compressed since such a robot wouldn't have the same needs of sleep, rest, eating, playing etc.
We don't know that - and in fact, I don't think it is true. Human children learn a great deal from "playing", so a robot developed along the same lines is likely to spend most of its early years the same way. (The same thing can be observed among most young mammals, so it seems to be pretty general.)
 
I'm not saying that they have the same level of ability; this is clearly not true. I'm saying that they can perform the function.
I don't think this has been demonstrated as yet.

Well, there are two points there. One is how computers solve problems, and there is a huge array of problem-solving methods in use; some of them only in research, some of them more broadly. The second is that exactly how people solve those problems is still an open question in many ways. So saying that "computers ... don't solve problems the way people do" begs two questions at the same time.
How is it begging the question? I don't see the fallacy you cite, so I would appreciate an explanation. I'll apologize in advance if I have indeed done so. Not the first. Won't be the last. In any event, I will concede that I have not been precise. Let me try to be clear.

1.) Yes, there are many ways to solve problems.
2.) Yes, in some ways computers and humans overlap in how they solve problems.

Humans and computers both use "trial and error" to solve problems. However humans are capable of solving many problems without trial and error that computers can only perform with trial and error and perhaps algorithms to eliminate unlikely solutions. There are some real world examples that would take a computer days or months to solve that humans can solve in minutes. I'll come up with as many as I can and continue to post them as I remember them and look them up.

Humans have an incredible ability for symbolism, perception and context that computers just can't match. To a human a bird can mean flight, freedom, isolation, imprisonment, rebirth, peace, war, etc. depending on context.

In many instances it really isn't a case of scale, not at all. Humans have many tricks up their sleeves that we simply don't understand at the moment.

See CMU student taps brain's game skills

"There's some meat to his idea," said his mentor, Manuel Blum, a CMU professor and a pioneer in the field of theoretical computer science. Producing word descriptions of images with the ESP Game is nice, of course, but the bigger idea is to entice people to cooperatively solve problems that defy electronic computers.

Also see: Intractability Principle and the Concept of Z-hardness

Also see: CAPTCHA

A CAPTCHA is a program that can generate and grade tests that most humans can pass, but current computer programs can't pass. For example, humans can read distorted text such as the sample shown below, but current computer programs can't:

[sample distorted-text CAPTCHA image]
Now, it's true that people are programming computers to break CAPTCHA code, but also note the problem is actually being tackled by humans. This isn't enabling computers to solve this type of problem the way humans do; it's enabling computers to solve a very specific problem that the programmers have identified. Every change in the CAPTCHA's complexity requires additional programming on the part of the programmers. Humans are capable of solving millions of similar problems. At this time programmers would have to write code for ALL of these problems.
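For what it's worth, the generate-and-grade idea itself is trivial to sketch; the hard part is the distortion, which here is only faked with junk characters rather than rendered as an image:

```python
import random
import string

# Generate a challenge the program can grade automatically; the "distortion"
# is simulated with noise characters a human would ignore.
def make_challenge(length=6):
    answer = "".join(random.choices(string.ascii_lowercase, k=length))
    distorted = "".join(ch + random.choice("~^'`") for ch in answer)
    return distorted, answer

def grade(response, answer):
    return response.strip().lower() == answer

challenge, answer = make_challenge()
print("challenge:", challenge)       # e.g. q~w^e'r`t~y^
print(grade(answer, answer))         # True
```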

See also: Arimaa

To someone who is not familiar with how computers play Chess, such a victory may give the impression that computers can now think and plan better than the best humans. But have computers really caught up to the intelligence level of humans? Do they now have real intelligence?

In an attempt to show that computers are not even close to matching the kind of real intelligence used by humans in playing strategy games, we have created a new game called Arimaa. Here is a simple game that can be played using the same board and pieces provided in a standard Chess set. However the rules of the game are a bit different and suddenly the computers are left way behind. For humans the rules of Arimaa are very easy to understand and more intuitive than Chess, but to a computer the game is a thousand times more complex. To the best of our knowledge Arimaa is the first game that was designed intentionally to be difficult for computers to play.

My point is certainly not that computers are at a human-level of general problem-solving; this is far from true. My point is that we have many approaches for solving simple examples of every class of problem that humans are able to solve, and there is no solid evidence that there is a category difference between computer problem-solving and human problem-solving, rather than a difference of scaling.
I'm sorry, but that is wrong. Again, I'm not saying computers never will be the same, but the difference is not simply scale. The problem lies in understanding how humans solve simple problems that computers at this time can't, and in understanding how perception, along with other abilities, is used to solve problems intuitively.

Again, not true. A computer can adapt learning from one category and apply it to another related domain. Making the learning program more general, though, means that it isn't as good at learning how to solve the specific domain.
You are going to have to give me examples. I've been following this field for decades and I am not familiar with your claim. I can't prove a negative. Hey, I've been wrong before but I can't analyze what you are saying blind.
 
Not true. Some people do build computers to learn how to open doors. It is not what we use computers for most of the time; we want computers and robots to be useful, right now; not incompetent clowns we have to spend a quarter of our lifetime training.

In fact, I came across an interesting little program about ten years back. It took many weeks or even months, but eventually it was capable of remarkable interaction through learning. Of course, it was pre-programmed, but the point is it was able to learn the relation between words and the meaning of new sentences and concepts.
 
My point is certainly not that computers are at a human-level of general problem-solving; this is far from true. My point is that we have many approaches for solving simple examples of every class of problem that humans are able to solve, and there is no solid evidence that there is a category difference between computer problem-solving and human problem-solving, rather than a difference of scaling.

Well, I think that the means by which computers "compute" is slightly different, simply because the way in which the hardware behaves is different. Simply increasing MHz is not enough.
 
They're going to have to be worked up like the Insect robots. Essentially we'll have to build a machine and then spend a period of time teaching it what we want it to know - just like a child. THEN we'll have a real A.I. on our hands.
 
Well, I think that the means by which computers "compute" is slightly different, simply because the way in which the hardware behaves is different. Simply increasing MHz is not enough.
No; or at least, mostly no.

That's Lucas's argument, and it doesn't seem to hold water. First, assert that consciousness is not Turing-computable (a claim for which there is no support); then assert that it is not computable by arbitrarily augmented Turing machines.

Computers and brains are different, but both are physical. Anything a brain does can be done by a computer; you just have to find out what the essential physical function is and implement - or simulate - that function in the computer. Since a Turing machine can simulate a classical physical process to an arbitrary degree of precision, the only escape hatch from Hard AI is the one Penrose took, to assert that quantum effects are directly involved in the generation of consciousness. Unfortunately for Penrose, there is no evidence at all that this is true, either in the mechanics of the brain or the behaviour of consciousness; and, as Max Tegmark showed, quantum events happen on a scale at least ten orders of magnitude too small to play such a role.

So, consciousness is a classical phenomenon, classical phenomena are Turing-computable, and it is just a matter of MHz.
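As an illustration of the "arbitrary degree of precision" point, here is a minimal sketch: numerically integrating a falling body, where shrinking the time step brings the result as close to the closed-form answer as we like:

```python
# Euler integration of a falling body: the smaller the time step, the closer
# the simulated fall time gets to the exact result.
def simulate_fall(height=100.0, g=9.81, dt=0.001):
    y, v, t = height, 0.0, 0.0
    while y > 0.0:
        v += g * dt
        y -= v * dt
        t += dt
    return t

exact = (2 * 100.0 / 9.81) ** 0.5        # closed-form result, about 4.515 s
print(simulate_fall(dt=0.01), exact)     # coarse step: noticeable error
print(simulate_fall(dt=0.0001), exact)   # finer step: the error shrinks
```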
 
Since a Turing machine can simulate a classical physical process to an arbitrary degree of precision, the only escape hatch from Hard AI is the one Penrose took, to assert that quantum effects are directly involved in the generation of consciousness. Unfortunately for Penrose, there is no evidence at all that this is true, either in the mechanics of the brain or the behaviour of consciousness; and, as Max Tegmark showed, quantum events happen on a scale at least ten orders of magnitude too small to play such a role.
It's been a while since I read The Emperor's New Mind, but wasn't Penrose's conclusion less quantum-dependent than that? I seem to recall that all he was saying was "we don't know enough about science yet to make true AI", and that quantum effects may or may not have a hand in that lack of knowledge.

If it is the case that quantum effects can be shown to not have a hand in it, there might be still other areas of science where we don't know enough to make AI. So Penrose's contention can't be dismissed that easily.

At least that's what I remember of the book. I could be wrong.
 
