Variable Constant
Unfortunately, Searle's Chinese Room problem and Dennett's Frame problem both suffer from the same fundamental error: assuming that an artificially intelligent system will be built the way our computers are today. Yes, if we give a robot instructions for dealing with bombs and batteries, and it encounters a chihuahua, that factor is outside its frame of reference. Luckily for us, most robots are programmed to ignore anything that isn't in their database, so the robot could still potentially succeed in its task. But if the chihuahua interferes, the robot is brought to a standstill.
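To make that concrete, here's a minimal sketch of the kind of lookup-driven control I'm describing. The object names and rules are made up for illustration; the point is simply that the robot acts on what its database covers, skips what it doesn't recognize, and has nothing to fall back on when an unknown thing gets in its way.

```python
# Toy sketch of lookup-driven robot control (hypothetical objects/rules):
# act on known objects, ignore unknown ones, halt if an unknown interferes.

KNOWN_OBJECTS = {
    "battery": "retrieve",
    "bomb": "avoid",
}

def decide(objects_in_view, objects_blocking_path):
    plan = []
    for obj in objects_in_view:
        action = KNOWN_OBJECTS.get(obj)
        if action is not None:
            plan.append((action, obj))
        # Unknown objects (the chihuahua) are simply skipped...
    for obj in objects_blocking_path:
        if obj not in KNOWN_OBJECTS:
            # ...unless one physically interferes, in which case there is
            # no rule to fall back on and the robot comes to a standstill.
            return ["halt"]
    return plan

print(decide(["battery", "chihuahua"], []))             # [('retrieve', 'battery')]
print(decide(["battery", "chihuahua"], ["chihuahua"]))  # ['halt']
```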
Yet I can't help but notice how directly this resembles the behavior of infants and young children. Keep them in familiar settings with familiar objects, and they accomplish their tasks fairly well; introduce a new, confounding variable, and they often freeze, or retreat, or lose all track of what they were doing. The difference, of course, is that the child, even while struggling with a previously unknown variable, is learning about (reprogramming itself for) that variable, while the robots we have today lack this ability.
But that's the key - 'we have today'.
I'm fascinated by the simple insect robots that have been assembled in the last ten years. These machines have no central processor of the kind most computer scientists would recognize; rather, they have numerous processors all working in tandem, and a single 'goal' - usually to reach a marker or light source. There is no program for operating the legs, for direction, for orientation; yet within minutes, these simple machines learn how to walk on six legs, and amazingly, in the same gait patterns real insects use. They learn how to get around obstacles effectively. If there's any real flaw in them, it's that they don't retain that learning later - and fixing that may be as simple as adding a memory card with some manner of autosave feature.
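I can't resist sketching the flavour of that in code. This is only a toy analogy, not how those robots are actually built: six independent 'leg' processes, each nudging its own phase at random and keeping the nudge whenever the one shared signal (progress toward the light) improves. No gait is programmed anywhere, yet an alternating, tripod-like rhythm falls out.

```python
import math
import random

# Six independent "leg processors": each holds only its own phase and tweaks
# it at random, keeping the tweak when the shared progress signal improves.
# Nothing in this code knows what a gait is.

NUM_LEGS = 6

def progress(phases):
    # Toy stand-in for "distance moved toward the light": highest when
    # neighbouring legs are half a cycle out of step (a tripod-like gait).
    return sum(
        abs(math.sin((phases[i] - phases[(i + 1) % NUM_LEGS]) / 2.0))
        for i in range(NUM_LEGS)
    )

phases = [random.uniform(0, 2 * math.pi) for _ in range(NUM_LEGS)]
best = progress(phases)

for step in range(5000):
    leg = random.randrange(NUM_LEGS)                      # one processor acts at a time
    old = phases[leg]
    phases[leg] = (old + random.gauss(0, 0.3)) % (2 * math.pi)
    new = progress(phases)
    if new >= best:
        best = new                                        # keep the change
    else:
        phases[leg] = old                                 # undo it

print("progress:", round(best, 2), "out of", NUM_LEGS)
print("leg phases:", [round(p, 2) for p in phases])
```

And the memory-card fix is just as simple here: write the final phases out to a file and load them back next time, and the machine doesn't have to relearn its walk from scratch.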
Yes, a computer built like the ones on our desktops is never going to be able to learn and grow; but there's no reason why a sufficiently complex arrangement of simple processors, given a simple set of instinctual goals, couldn't learn anything and everything that we learn. After all, what is a brain, if not a collection of simple processors with some instinctual goals hard-wired into it?
I'm not a big fan of 'embodiment theory' - largely because nothing I've read about it actually says anything. It's as if the people writing about the theory can't find the right terms to make themselves understood. They skirt the issues, and they assume the modern computer is the most advanced system we'll ever have. That may be what's taught at universities today, but it doesn't make the theory any more or less right than computationalism or any other cognitive theory.
Whichever theory turns out to be right, one thing is certain: to assume that something is impossible because it cannot be done today is short-sighted and ignorant. This may sound like an old, old song, but it's every bit as true now as it ever was: people used to say the same things about flight, space travel, artificial hearts, and so on.
Artificial intelligence will happen - and I predict it will happen within the next 200 years. We'll be faced with machines that are as smart as we are, and probably smarter. It may even be the next stage of evolution. We can't look at our laptops and our RoboSapiens today and say, 'These things will never be smart. They suck.' Because things always change.