Caution and concern are not the same as feeling disturbed.
In other news, "we don't know what a human being is" is complete nonsense. We know exactly what humans are. We don't know everything about humans, but we do know what humans are, in the same way that we don't know everything that's in the ocean but we know what the ocean is.
It's a fine distinction, but it's an important one for the purpose of this discussion, so I figured someone should point it out.
Talk about déjà vu. True to form as ever, Argent. Is it just me, or is there some kind of contradiction between the statement ‘we know exactly what we are’ and the statement ‘we don’t know everything about humans’? So is it ‘complete nonsense’ (like the last time you accused me of being…what was it…‘completely ignorant’…or something), or is it just partial nonsense? I guess you haven’t been following this thread too long. If you go back a couple dozen pages you’ll arrive at a quote that appears to represent the current consensus in the international cog sci community (remember Geraint Rees…it comes directly from his bunch). I’ll summarize it for you: ‘We don’t know what consciousness is and we don’t know how the brain creates it.’ Pretty much their words exactly. If you want to believe otherwise, go right ahead. Maybe Pixy will be supporting you again, maybe not (hell, Pixy even believes in free will now…don’t you, Pixy…or was that ‘maybe’ just maybe?).
So we have the words ‘don’t know what it is and don’t know how it’s created’. Sure doesn’t seem to be a lot of ambiguity there. Of course there is a great deal understood about various aspects of people and how they work…but there also seems to be a great deal that isn’t. The summary of the situation would appear to be exactly as Chomsky described it: “Our understanding is thin and likely to remain so.” If anything, it would seem that it’s the computationalists (and related perspectives) who are the neurotic ones (“humans aren’t special…we know what we are…” just does not seem to be [for lack of a better word] true).
But useful just the same. A clear example of the contradiction explicit in that question I asked (which, as yet, remains unanswered [just a note Scott….I said I’ve no interest in your BS and I meant it…if you want to waste your time responding to my posts…go right ahead…I don’t bother anymore with juvenile skeptics who can’t handle a simple mystical reference without becoming apoplectic]).
On the one hand, the international cog sci community is unequivocally clear that ‘we do not know what consciousness is or how it is created’ (…come to think of it, it’s a bit of a paradox for the thing that is making the statement to attempt to describe ‘its’ own ignorance of ‘its’ own ability to admit ‘its’ own ignorance of the ability to admit ‘its’ own ignorance….or something like that).
On the other hand…this seems to flatly contradict our everyday experience. Like ‘…whaddya mean, I don’t know what I am!…I don’t feel like I don’t know what I am…quite the opposite…’ So we don’t know what we are…but we do know what we are. What’s the explanation for this?...and how might it be relevant to any attempt to satisfy the OP?
Quite the contrary, I'm having trouble thinking of many things I'd find more absolutely fascinating. Alien life would be one. Intelligent alien life, certainly. But a real, human level intelligence, implemented in code? In a manner we could understand, like... like printing out a blueprint? That would be amazing. No other word fits. Just amazing.
I'm sorry the prospect scares you. I truly am.
Scares me?!…it doesn’t ‘scare’ me. It’s academic until it isn’t. What is amazing is just how amazingly simplistic the perspectives on this issue often are (it reminds me of the debates that often surround such issues as euthanasia). The uncertainty level goes right off the charts…on every available social and psychological metric.

‘In a manner “we” would understand’?! Which ‘we’ are you talking about? You…me…Justin Bieber…Bill Gates…the president? What if it occurred as a result of some unexpected anomaly (has science ever progressed in such ways before…no, of course not)? All of a sudden there exists this ‘thing’ that has the capacity to conclude that it has the capacity to reach conclusions of its own. What if “we” don’t understand it (hell, we don’t even remotely understand ourselves, so why this blind-faith automatic assumption that some fictional ‘we’ would ‘understand’ something on that order of sophistication)? It could (and almost inevitably would) conceivably create its own paradigms of behavior, which would mean…what?

You function as a result of massive intuitive assumptions about the coherence and robustness of your conceptual framework. If these frameworks are suddenly challenged by a fundamentally different one, which will prevail (just how disorienting can disorienting be)? You may suddenly discover just how fragile you actually are when another paradigm asserts its own conditions of being (maybe ‘they’ would decide that people like you wouldn’t be allowed anywhere near ‘it’ for that very reason…would it still be amazing then?).

But all of this is rampant speculation. But no, it isn’t. This is what we are…what the cog sci community is currently attempting to adjudicate the reality of…and there does exist…on some perhaps distant horizon…an HLMI (and human intelligence is currently being reverse-engineered through AI, so it’s hardly irrelevant). What ‘it’ will do, or be, nobody knows…partly because nobody (?) is yet clear about what we can do, or be.
Look at how biological psychology operationally defines consciousness, and all they have to say about it:
So that's what the layman should probably be told, and keep in mind.
The impression I get is that the cog sci community has a lot of different perspectives on the issues of what it’s dealing with. One point is…what actually is it that the public needs to see…and who is going to make these decisions?
These are not unlike the ethical / moral issues that the bio-medical community faces regarding questions like conception (how much choice should there be?), suffering, life-extending practices, euthanasia, etc. etc. From what I understand, many medical schools include mandatory courses on ethics in their curriculums. I wonder if the same can be said of computer science? From what I can see, this is not the case. Should it be?
I have no issue with humans being special as long as it has nothing to do with skydaddy.
In fact, let's be perfectly clear -- the history of this debate has been ME trying to argue that humans (and life and computers) are special compared to other stuff solely because of how we behave, while YOU have consistently rejected such arguments, trying to push the idea that humans and life can ONLY be special in a certain way, a way that you can't currently define.
But the simple and unavoidable fact seems to be that we cannot, in fact, define whatever it is that makes us special. You can argue the details (behavior etc.) until you’re blue in the face, but the issues are unresolved. To whatever degree anything can be indisputable, that most certainly is. When there are issues of this magnitude that exist to such a degree of uncertainty, dismissing possibilities for no other reason than you find them distasteful is, at the very least, premature.
As for your certainty that you would be uber-cool when facing down an AGI for the first time….to me that simply betrays an admission of deceit. Have a look at my final comment (somewhere way down below). How can you possibly claim to know exactly what you are….when the blatant scientific consensus is that there does not exist an understanding of what you are. Either you’re lying, or you’re deluded, or you know something the international cog sci community does not know (or [gasp], you’re betraying religious behavior). The fact that you find your condition ‘enlightening’ is telling though. Enlightening. Why do you suppose people prefer that condition…even if they’re wrong about what they’re enlightened about (as you must unavoidably be)? I hate to say it Dodger…but you’re beginning to sound positively religious.
Ooh look! A definition of consciousness for the layman! Yay! And it only took four thousand five hundred and thirty-eight posts!
I wonder if it takes a degree in biological psychology to conclude that if a person says they see a tree then they are experiencing the seeing of a tree?
Perhaps I should have been clearer; I was talking about people involved in or with an interest in this technology. If you're concerned about ignorance and misinformation of hoi polloi, the answer is to provide them with information and education. Have a word with the educational establishment and the media.
For some reason this reminds me of the early church railing against science, and its implicit assumption of superiority as a guardian 'protecting the souls of the ignorant and gullible from this dangerous information'. I think there is no 'reduction' of human identity going on, and that you underestimate the public. Naturally we, as a society, should attempt to educate the ignorant and protect the gullible.
Most people involved would agree there are potential risks with powerful technologies, and there have already been numerous discussions, meetings, seminars, books, and articles about the rewards and risks of AI, and how we can maximize the one while minimizing the other.
The risks of man's creations turning malign are high in the public imagination, and always have been - the hubris theme is a popular one in fiction, from golems to Frankenstein, from Big Brother and Terminator to grey goo. A Henny Penny (Chicken Little) arm-waving hysteria is not a mature or sensible way to deal with these issues.
…so what is? (And I don’t quite see my position as arm-waving hysteria…when I start advocating smashing computers and burning AI researchers at the stake, that will be arm-waving hysteria.)
Quite obviously…the ‘hoi polloi’, as you call them, have a significantly distorted understanding of what AI is (as is apparent from the blatant disparity between the results of those surveys). There are some interesting papers about the cognitive biases inherent in this issue (and what may explain them). The results of those surveys quite clearly illustrate exactly the point I was making…the simplification / trivialization / commodification of human nature results in distortions of understanding (individual and collective). “Human intelligence…nothing to it…we’ll be seeing R2D2 at Sears in my lifetime (but why can’t I understand why my teenage son is so depressed????)!” I’m certainly not laying the blame at the feet of AI, but it can encourage a trend.
Your confidence in the capacity of the general public to accurately adjudicate the situation is touching but there is little doubt that if another poll were taken tomorrow based on the assumption of the imminent introduction of full AGI (‘Data’ with all the trimmings….@ WalMart for $49.95)… in all likelihood the results would be overwhelmingly positive (of course…you might also have to deal with an equally irrational backlash response…depends which way the winds are blowing). Should the ‘general public’ decide such issues (they certainly don’t seem too knowledgeable if the results of those surveys are any example)? If not…who? Who has the authority, understanding, insight, or wisdom (aka: humanity)….or are any of these even necessary?
As I pointed out earlier…the bio-medical community includes moral / ethical considerations in many of its activities…and many countries now practice legislative oversight of medical procedures…especially those involving reproductive issues (precisely because there are fundamental social, moral, and ethical issues involved). Reproductive issues…isn’t AGI a reproductive issue? Should the same practice be considered for AI, or should we just let the obviously well-educated general public decide?
It’s actually amusing to see how many here are convinced that mine is an alarmist position…chicken little and all that. But ask for some kind of coherent explanation of why…and we get…faith. Things will work out, nothing bad will happen, people know what they’re doing, the world is a good place, it’s not complicated, etc. etc. Impressive.
…and in closing (because all good things must come to an end)…just speaking hypothetically of course…if anyone were to encounter a full AGI (I would include myself amongst the ranks of those in that first survey who are convinced that AGI is not possible…but for the sake of argument…) you would be encountering a ‘thing’ that actually knows what it means to be a human being (that’s what AGI means…complete human level intelligence). How many human beings do you know that can conclusively say they unconditionally know what it means to be a human being (nobody here can even provide a simple description of consciousness…the very thing that we all are!)? Thought so. Now tell me you wouldn’t find it disturbing (wait for it….).