PixyMisa, I like the way you think.
Hmm. I think that's a definition rather than a necessary conclusion, but it's one that I can accept.
It was really more of a generalization. Probably an overly broad one.
Sure it can. It's just that its understanding would be different. I'm not saying that machine intelligence would be human; merely that it is possible.
If we are programming machines to understand perception, we are limited by the fact that we only know human perception. Therefore, we can only program machines to experience things in ways similar to how we experience them. I have doubts that we would be able to teach a machine to experience abstracts in any way other than what we are able to experience ourselves.
But that's just my opinion, I'm not set in stone about anything.
Here's the thing: I don't think this relates at all to the question of intelligence. It's a question of knowledge. Not the same at all.
The point I was getting at (long-windedly, of course) was that the abstracts caused by our human condition, such as fear of death, ambition, and so on, are what motivate us to use our intelligence.
I consider humans to be machines, so while I understand the question, I'm not sure my answer would satisfy you.
Actually, so do I.

That is why I think we will first create machines that closely resemble human intelligence, give them the biological aspects needed to understand the abstract perception of human intelligence, and end up just creating a human. After that, we would be doing nothing more than improving on the human design.
This is all my own abstract thinking, of course. It may or may not happen. I'm having a specially-cooked meatloaf tomorrow, so I am happy.