Which question did you have in mind?
Yet we manage to develop ever more capable systems and learn more about intelligence, skills, and abilities. Your point is?
Their contextual purpose would appear to be making intelligent systems. You'd have to ask them what other meanings or purposes they have.
You may feel it trivialises people's identity (why?), but that's your problem. I doubt there are many who believe it is easy to instantiate the ingredients of identity in an AI; if it were easy, more progress would have been made.
I don't recognise that characterisation at all. 'Trivially obvious' doesn't mean trivial; it means 'any fule kno'. Of course what we make is dependent on our making it. What makes you think anyone wants to pretend otherwise? But if they did, so what?
…so what…who cares…the point…etc.
There have been a number of polls taken asking industry experts when they expect human-level AI…what’s referred to as AGI…to appear. At the AI@50 conference (2006), a number of experts were asked when computers would be able to simulate every aspect of human intelligence: 41% said longer than 50 years…another 41% said never.
Amongst the experts, there also seems to be no small degree of uncertainty as to whether the achievement of human-level AI will be a good thing…or a bad thing. Most agree that it would likely be one of, if not the, most significant events ever to occur on our planet (short of Jesus returning). Some think it would be incredibly positive; others think it could be catastrophic. Obviously it would impact not just the experts…but all of society.
Recall what Chomsky had to say about ‘the experts’:
“On the ordinary problems of human life, science tells us very little, and scientists as people are surely no guide. In fact they are often the worst guide, because they often tend to focus, laser-like, on their professional interests and know very little about the world.”
What we have, then, is possibly the most significant development in the history of history. So what is the opinion of the proletariat on this issue? An online (general population) poll taken about a year after the AI@50 conference asked: when will AI surpass (not simulate, but surpass) human-level intelligence? The majority of respondents were convinced this would occur within the next couple of decades.
Your suggestion that the majority recognise the difficulties of instantiating human identity in AI can only be regarded as naive at best. There would appear to be an enormous degree of not just ignorance but outright misinformation amongst the general population. Considering how great an impact these issues have (not to mention could have), this misunderstanding can hardly be regarded as trivial.
The point (one of them) is that the mass of men interpret the issue in very simplistic terms (Data on Star Trek…R2-D2 / C-3PO in Star Wars…I, Robot [Will Smith]…etc.). When the issues are presented through the popular media (online technology sites, magazines, etc.), they are often presented in similarly simplistic / optimistic terms…not unlike the way it is often presented here. The idea of some kind of R2-D2 running around has far more direct appeal than whatever moral / ethical / social (not to mention technical) issues may be involved in bringing it about. Considering how fundamental, significant, and potentially earth-shattering this issue is, treating it as simplistic / trivial is much more likely to lead to problems / catastrophe (thus there is some motivation to behave otherwise).
How does instantiating human identity in AI trivialise humanity (intentionally or not)? I guess we’re into big questions. To put it simply, human identity is a vast, complex, and at this point mysterious thing. Respect for human identity is essential not just for a functional society, but for functional relationships of any kind. Reducing human identity to intelligible / manageable AI components encourages a distorted impression of human nature…especially amongst the less educated or gullible, and in the popular media (where a great many derive their information), which typically reduces everything to the lowest common denominator and the simplest possible explanation / description.

The impression is given that science has conquered the mind (how far is that from the truth?)…that human nature is easily intelligible and is nothing but a set of components (the complexity of which is a direct function of the media source presenting the story) that can be resolved and manipulated at will. The dialogue shifts from a human-centered approach to an AI-centered approach, where a human being becomes defined in terms of its AI components rather than vice versa. Is this happening? It already is.

Why does this matter (apart from right now)? When / if the AGI time-bomb ever explodes, human beings had better be very clear about who and what they are, or those predicting a catastrophe may very well be right.