I still wonder why the simulation going on in my head not only enables me to avoid falling into holes and bumping into things, to find food, and to interact with other humans to my advantage (or not!), but also gives me an awareness that these things are happening. Surely I would function just as well without the awareness, like an ant (which I assume, without being able to back it up, has no consciousness) or a computer (ditto).
Would you, though?
Let's hash it out.
It is clear that we do things all the time without being "aware" of doing them. I always use the example of coming back from the restroom and walking to a cube where my desk used to be rather than where it is -- that is obviously a full set of behaviors (walking, turning) that I wasn't particularly aware of, and I did just fine making it to the destination (although it was the wrong destination).
But that is a behavior that I had done before, many many times. Could you engage in a new behavior without being aware of what is happening? I may be wrong, but I tend to think the answer is no.
When I am walking through the woods and I haven't been there before, part of me is certainly aware of the ground, looking for things to trip over and holes to fall in. That's why we look down most of the time when we hike.
Finding food? I think the act of looking for something that appears like fruit, then planning a route to get there, then harvesting it, is definitely something that we would be fully aware of. Even something like peeling a banana I am fully aware of -- I have never started that task then been done with it "before I realize it."
Finally, interacting with humans... well, there are times when people are at my desk talking and I just nod and smile without even hearing what they say, I admit. But I think any behavior more complex than this surely requires awareness.
The fact that it seems impossible to do any complex task
without being "aware" makes me think that "awareness" itself perhaps isn't something distinct from such tasks.
I once heard an interesting conversation about AI in which one of the participants tried to argue that a thermostat has two thoughts: 'it's too warm' and 'it's too cold'. I suppose a similar argument could be made for a toaster. How do we know a toaster doesn't think 'the toast is done' before popping up?
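What makes the thermostat claim feel strange is how little machinery those "two thoughts" require. A minimal sketch (the function name and values are mine, purely for illustration) of a thermostat's entire "mental life":

```python
# A thermostat's whole "mental life" is one comparison against a setpoint.
# Its two "thoughts" are just the two branches of a conditional.

def thermostat(temperature: float, setpoint: float) -> str:
    """Return the thermostat's 'thought' for the current reading."""
    if temperature > setpoint:
        return "it's too warm"  # -> switch heating off
    else:
        return "it's too cold"  # -> switch heating on

print(thermostat(25.0, 21.0))  # it's too warm
print(thermostat(18.0, 21.0))  # it's too cold
```

The toaster's "the toast is done" would be the same kind of thing: a single sensor reading crossing a threshold. Whether that deserves the word "thought" is exactly what's at issue.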
Well, I would phrase it differently.
Suppose you have a toaster that you are trying to make into the greatest toaster in the world. So you replace the extremely simple "pop the toast up when it's dark" mechanism with one of your own devising (and perhaps you need to modify the housing as well to accommodate it).
You don't like the existing sensor, so you replace it with something like the human eye, with millions of photon detectors. Then you need to add a way for the information from those sensors to be filtered and interpreted down to usable chunks, so you add something like our visual processing system.
But now you want the toaster to be able to learn for itself when the toast is done. So you add chemical sensory mechanisms for it to sample the toast itself, and mechanisms to interpret those sensory results based on nutritional requirements that are aligned with those of a human. And you have to give it a way to remember the samplings of the past.
Then you need to add ways for it to logically connect good samples with the behavior that led to them, so it "knows" what it did to produce the good toast color that it wants to reproduce.
But now you want even more autonomous behavior from the toaster -- why should you have to load it up each time, when it could just go get the bread and have the toast waiting for you when you wake up?
So you add functionality to learn about the environment and distinguish between bread and non-bread, obstacles and non-obstacles, so it can scan the kitchen and head to the bread.
Except how can it load the bread? So you give it some robot manipulators and sensory mechanisms to see where they are. But now there is a problem -- for the toaster to learn about its manipulators properly, it needs to know that they are part of it and not part of the kitchen -- otherwise, for example, the obstacle-avoidance mechanisms would perpetually try to route around its own arm sticking out in front of the camera! So you add functionality for it to be able to learn about self.
This is a pretty darn good toaster, would you agree?
But let's make it even better. It sucks to have to reprogram it each time you want to wake up at a different time, or want the toast done differently, etc. So you add audio recognition mechanisms and functionality to semantically interpret human voice. Because your kitchen is extremely complex, it isn't enough to just program in a lookup table of commands -- the toaster needs to actually understand what you are telling it, so that you don't need to constantly monitor what is going on.
Any other features you want to add before we are done?
So now we have an ultra toaster. Do you suspect something like a thought of "the toast is done" occurs before the toaster pops up the toast?