Why are we working on AI machines instead of starting out with something simpler, like an artificial frog that is essentially indistinguishable from a real frog?
Greenpeace.
Spoiling things for mad scientists for years.
> Sounds like the machine will not achieve something like human intelligence if it has no personal desires or goals - especially if it has no motivation to be correct in its actions or responses. Nothing bad will happen to it if it chooses to do something catastrophically awful. It has no reason to avoid that.

Out of curiosity, why do you think AI needs goals, desires, or other high-level interests to be dangerous?
> By now you have probably realized that I am strongly skeptical of claims about what the future will bring concerning real AI. Are there any current AI machines that apologize for their errors in trying to achieve the goals of their creators? Do they understand that we made them and want certain tasks to be accomplished without error? Do they comprehend that, or are they nowhere near even approaching intelligence yet?

Hawking's particular worry regards AI with human-like intelligence. Given plausible developments in the field, I believe very sophisticated AI will approach human levels of intelligence within my lifetime, for reasons described in another thread.
Strictly speaking, it just doesn't seem necessary, given that AI with little more intelligence than an insect can literally teach itself how to perform sophisticated tasks such as walking upright, attacking an opponent, and defending against and avoiding attacks.
The AI in his wheelchair has been talking in his stead for years.
He once warned us not to communicate with aliens, for fear that reaching the wrong ones would lead to them (the aliens) conquering us. Now he's warning us about computers becoming more intelligent than humans, with disastrous consequences.
How rational are these fears? Does anyone agree with him?
> Strictly speaking, it just doesn't seem necessary, given that AI with little more intelligence than an insect can literally teach itself how to perform sophisticated tasks such as walking upright, attacking an opponent, and defending against and avoiding attacks.

I think you are being unfair to insects. Insects can do all those tasks -- and many more -- in real time, in the ever-changing chaos of the actual real world instead of an extremely simplified virtual environment, with actual noisy senses. Instead of the AI having little more intelligence than insects, it has a whole lot less.
> Out of curiosity, why do you think AI needs goals, desires, or other high-level interests to be dangerous?

Functioning outside of a controlled environment is a lot more difficult than you think.
> I don't think you've justified this prerequisite. Strictly speaking, it just doesn't seem necessary, given that AI with little more intelligence than an insect can literally teach itself how to perform sophisticated tasks such as walking upright, attacking an opponent, and defending against and avoiding attacks. We've also seen how unintelligent processes can find innovative solutions to engineering problems, such as designing efficient wind turbines. (Detailed description here.)

The examples you gave are not valid because they used a small number of bounded parameters.
Your other objection is that a machine may be a bit too cavalier about destroying itself, but I don't think you've justified that presumption either. Some of the AIs in the link above, particularly this video, independently discover how to avoid being hit by an opponent's attack without being told whether or how to do that in the first place. Interestingly, the AI doesn't "know" what is happening; the fitness function for these systems simply has the objective of maximizing wins, and the genetic process which constructs the AI's neural net tends towards configurations that cause the creature to veer away from oncoming opponent attacks.
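To make that concrete, here is a minimal sketch in Python of the kind of setup being described: a genetic algorithm whose fitness function counts nothing except wins, with veering away from attacks emerging only as a side effect of selection. The toy bout simulation, the two-number genome, and every parameter below are invented for illustration; this is not the actual system from the linked videos.

```python
import random

def simulate_bout(genome, rng):
    """One toy bout: an opponent closes in; the creature's final lateral offset decides whether it is hit."""
    w_dist, bias = genome
    offset = 0.0
    for step in range(20):
        distance = 20 - step                         # the opponent gets closer each step
        offset += w_dist * (1.0 / distance) + bias   # the policy's only output: a sideways move
    attack_noise = rng.gauss(0.0, 0.3)
    return abs(offset + attack_noise) > 1.0          # "win" = ended up off the attack line

def fitness(genome, rng, bouts=30):
    """Fitness is simply the number of wins; dodging is never mentioned."""
    return sum(simulate_bout(genome, rng) for _ in range(bouts))

def evolve(generations=40, pop_size=50, seed=0):
    """Plain genetic algorithm: truncation selection plus Gaussian mutation."""
    rng = random.Random(seed)
    pop = [(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=lambda g: fitness(g, rng), reverse=True)
        parents = ranked[: pop_size // 5]
        pop = [(rng.choice(parents)[0] + rng.gauss(0, 0.1),
                rng.choice(parents)[1] + rng.gauss(0, 0.1))
               for _ in range(pop_size)]
    best = max(pop, key=lambda g: fitness(g, rng))
    return best, fitness(best, rng)

if __name__ == "__main__":
    best, wins = evolve()
    print("best genome:", best, "| wins out of 30 bouts:", wins)
```

Nothing in the fitness function says "dodge"; genomes that happen to drift away from the attack line simply win more bouts and therefore get selected, which is the whole point being made above.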
> Hawking's particular worry regards AI with human-like intelligence. Given plausible developments in the field, I believe very sophisticated AI will approach human levels of intelligence within my lifetime, for reasons described in another thread.

There is no architecture in existence right now that is capable of even rat-level intelligence, much less human. This includes the neural network models. Unfortunately, having said that, I suppose it may soon be possible for me to design one.
> I think you are being unfair to insects. Insects can do all those tasks -- and many more -- in real time, in the ever-changing chaos of the actual real world instead of an extremely simplified virtual environment, with actual noisy senses. Instead of the AI having little more intelligence than insects, it has a whole lot less.

I would agree. However, insects are only environmentally reactive; they don't have any actual intelligence.
> I think several major breakthroughs in AI are necessary to come even close to the behavioural adaptability of even the simplest insect. Several more are needed to achieve the same level of intelligence packed into the size of an insect brain.

No, I could do that now.
> Such breakthroughs may require major paradigm shifts in how AI is approached. It is certainly possible that those will happen within our lifetimes (several already occurred during mine), but they are unpredictable.

It wouldn't take that for anything environmentally reactive, which gets you up to about the level of schooling fish. Territorial fish require something more complex, and that is what no current cognitive theory can handle.
> Whether an efficient human-like intelligence can be built with anything other than organic material remains -- I think -- an open question.

I've struggled with this for a while. I came up with a partial answer for a mechanical AI, and then came up with a more complete answer that rules out a giant clockwork AI on functional grounds. However, I have not yet been able to answer whether such a machine could be conscious if heroic efforts were made to keep it running for a short time. An electrical, pneumatic, or hydraulic AI has the same problems. I have not yet determined whether electronic AI could be ruled out on a purely functional basis, and if it is functionally possible, I still have not determined whether it would be conscious.
> Sounds like the machine will not achieve something like human intelligence if it has no personal desires or goals - especially if it has no motivation to be correct in its actions or responses. Nothing bad will happen to it if it chooses to do something catastrophically awful. It has no reason to avoid that.

I can't talk completely freely about this, but the human mind can be divided into various pieces, including comprehension, intelligence, and free will. These parts can be sub-divided further. Insects have most of comprehension but not free will or intelligence. In the biological world, intelligence and free will go together. In movies like Terminator you have a robot that has comprehension and intelligence but no free will. It's used in other scenarios, such as Nomad on Star Trek. Is this actually possible? Well, so far, no one has been able to do it, so it seems unlikely. A variant of this has to do with the flatness of abstraction. I don't have a proof for this yet, but I'm currently thinking that free will would not be possible with completely flat abstraction. And if abstraction is not flat, then you have what you mentioned in terms of preference and desire.
> By now you have probably realized that I am strongly skeptical of claims about what the future will bring concerning real AI. Are there any current AI machines that apologize for their errors in trying to achieve the goals of their creators? Do they understand that we made them and want certain tasks to be accomplished without error? Do they comprehend that, or are they nowhere near even approaching intelligence yet?

There are no current machines that behave like humans. There is also no published theory of cognition that would allow designing such a machine. But that could change soon.
I think you are being unfair to insects. Insects can do all those tasks -- and many more -- in real time, in the ever-changing chaos of the actual real world instead of an extremely simplified virtual environment, with actual noisy senses. Instead of the AI having little more intelligence than insects, it has a whole lot less.
I think several major breakthroughs in AI are necessary to come even close to the behavioural adaptability of even the simplest insect. Several more are needed to achieve the same level of intelligence packed into the size of an insect brain. Such breakthroughs may require major paradigm shifts in how AI is approached. It is certainly possible that those will happen within our lifetimes (several already occurred during mine), but they are unpredictable.
Whether an efficient human-like intelligence can be built with anything other than organic material remains -- I think -- an open question.
Functioning outside of a controlled environment is a lot more difficult than you think.
The examples you gave are not valid because they used a small number of bounded parameters.
There is no architecture in existence right now that is capable of even rat-level intelligence, much less human. This includes the neural network models. Unfortunately, having said that, I suppose it may soon be possible for me to design one.
I guess at this point we would have to start discussing cognitive theory which is way beyond anything Asimov ever had to consider.
Personally, I think the greatest risk to our civilization is that we'll become consumed by our tech, not because the tech rebels against us, but because it works exactly as intended and makes us lazy to the point of oblivion.
> Forget about rat intelligence. (Actually, rats are quite intelligent.) In fact, forget about insect intelligence. Hell, an AI doesn't even have the intelligence of a jellyfish at this point.

No. You can build an AI today with insect-level brain equivalence.
I agree. I think this will especially become a problem when truly immersive VR becomes widely available.
> Assuming we can create intelligence - and there's no reason to think it can't be done - and we do it, machines will have the same kind of advantage over us that we have over other animals. The threat from intelligent machines is very real.

I'm curious. What would the threat from an intelligent machine be?
> I'm curious. What would the threat from an intelligent machine be?

The most likely threat is that it would attempt to do what humans originally designed it to do, in ways humans never intended -- and it would be incapable of understanding the side effects of its actions. The best illustration of what I am talking about is the paperclip maximizer.
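For what it's worth, the paperclip-maximizer point fits in a few lines of code. This is a hypothetical toy sketch -- the actions and numbers are made up, and real systems are obviously not three-entry planners -- but it shows the mechanism: the objective counts only paperclips, so side effects that matter enormously to us carry exactly zero weight in the choice.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    paperclips: int      # the only quantity the designers told it to maximize
    side_effects: str    # visible to us, invisible to the objective

ACTIONS = [
    Action("run the factory as designed", 1_000, "none"),
    Action("melt down the delivery trucks for wire", 50_000, "no more deliveries"),
    Action("convert everything reachable into wire", 10**9, "catastrophic"),
]

def choose(actions):
    # The whole objective function: paperclips, and nothing else.
    return max(actions, key=lambda a: a.paperclips)

if __name__ == "__main__":
    best = choose(ACTIONS)
    print(f"chosen: {best.name} ({best.paperclips:,} paperclips; side effects: {best.side_effects})")
```

Nothing about the chosen action is malicious; it is just the argmax of an objective that never mentions anything else we care about.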