
Artificial Sentience. How would it be recognized?

Well, here's another thought. Perhaps the only sort of sentience we will recognize has emotional cues built in.

Take dogs as an example. I know how to "build" a dog. (Mommy dog and daddy dog love each other very much...) Now, do I think dogs have an "inner life" because I feel an emotional connection to them? How about other creatures we might describe as sentient, like an octopus?

And finally, suppose starfish were fully sentient, human level intelligence. However, they are keen only to do what starfish do with no impulse to relate to us at all. How am I going to detect this?

It seems to me the barrier may be relatability and communication, and those might require emotions like ours to work. I don't know much about autism and such, but isn't it sometimes described as a "different" sort of intelligence/sentience? And those are human beings.

I'm not sure I follow, again. My point is that if we create an intelligent being (other than by the usual methods), do we then add all the aspects inherent in humanity, not limited to certain death (often unpleasantly so), and make certain that the end result cannot be reversed? And if we don't, then what have we created?
 
Data was based on Isaac Asimov's non-scientific description of robots. Asimov actually had no idea how computers worked and only used the "positronic" brain to avoid saying "electronic," which didn't sound futuristic enough. So, it was somewhat funny to see ST copying the {cough} "positronic" brain technology.

You could probably argue that ST addressed this question more directly with the M-5 computer which raised the issue of autonomous action with incomplete understanding of morality. I suppose in some ways this is reminiscent of Steinbeck's novella Of Mice And Men where Lennie too had independent action but likewise a disastrously incomplete understanding of morality. The main difference being that M-5 eventually understood its shortcomings whereas Lennie never did.



This is one reason why Data is such a poor example for this question. Data existed with perfect morality but no emotions of any kind. Emotions were added later suggesting that they are completely independent of morality. This is quite the opposite of another ST episode where a man is turned into a robot and thereby loses his moral compass.

I would like to give you some answers, but this is somewhat complex in terms of cognitive versus computational theory. Within a computational framework, it should be possible to have entirely abstract actions based on limits that are artificially created. The problem with this assumption is that it ignores the frame problem, which could make such computer programs impractical because of the trade-off between size and speed. In other words, by the time you have a program large enough to handle the complexity of a human environment, the speed might be degraded to the point where it would no longer be fast enough to interact with people.
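To put a toy number on that trade-off (a minimal sketch in Python, with invented fact counts and a made-up "relevance" check, not anything drawn from cognitive science or the post above): a naive agent that re-checks its entire world model before every action sees its per-decision time grow in step with the size of that model, which is the crude version of the size-versus-speed worry.

```python
# Purely illustrative sketch of the size-versus-speed worry.
# The fact counts, action count, and relevance check are invented; the point
# is only that a naive agent's decision time scales with the size of its
# world model (a crude stand-in for the frame problem, where every fact gets
# checked even though almost none are affected by any given action).

import time

def naive_decision(facts, actions):
    """Pick an action by checking every known fact against every candidate
    action: cost is proportional to len(facts) * len(actions)."""
    best_action, best_score = None, -1
    for action in actions:
        # The naive agent re-examines its whole world model for each action,
        # even though almost no facts are actually relevant to it.
        relevant = sum(1 for fact in facts if fact.endswith(action[-1]))
        if relevant > best_score:
            best_action, best_score = action, relevant
    return best_action

actions = [f"action_{i}" for i in range(100)]
for n_facts in (1_000, 10_000, 100_000):
    facts = [f"fact_{i}" for i in range(n_facts)]
    start = time.perf_counter()
    naive_decision(facts, actions)
    print(f"{n_facts:>7} facts -> {time.perf_counter() - start:.3f} s per decision")
```

Grow the world model by a factor of ten and the decision time grows by roughly the same factor; a program rich enough to track a human environment this way could easily be too slow to keep up with a conversation.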

In cognitive theory the issue is self-motivation versus self-control. Within a biological framework I can see how this works with emotion. I'm going to say that you can't entirely avoid emotion if you want to create a cognitive AI, but I'm not sure that you would need the full human range. And, I suppose it would be theoretically possible to create a cognitive AI that was a sociopath.

I used "Data" as an example of where the concept was addressed, not where it was defined. As to your technical musings about cognitive speed, that is irrelevant. This is not a discussion about the limits of technology.

My point is that we are what we are because of our biological limitations (or capabilities if you want). If we create something equally complex in a cognitive sense, do we deliberately impose similar constraints that cannot be reversed (if that were possible) or if not have we really created anything that has anything in common with ourselves?
 
In the evolution of our own consciousness, emotions came first. In the form of basic survival reactions that are still hard-wired in the more primitive portions of our brains. As we know, these primitive reactions can easily overwhelm our modern, "rational" processes....

They are part and parcel of our uniquely human consciousness. However, that's not to say that a perfectly functional consciousness could not be fabricated without emotions. If we were programming such a consciousness, and we wanted some sort of "moral" component....At that point we could no doubt build it in....Something along the lines of Asimov's laws.

Should we get to that point....

Asimov's Laws were the laws that fundamentalist religionists follow, knowing that they would be homicidal psychopaths (they think) if they had not been taught the laws by the supreme programmer.

That is not how humanity functions nor is that what I hope to discuss.
 
Then we must also ask if pleasure and pain will need to be programmed in. Otherwise, what's there to be emotional about?

And now another problem arises - an ethical problem. Like God, we are left to decide if we shall create a being capable of feeling pain so the feeling can be exploited to get us where we want to go. Our machines will wonder, just as we wonder, why their creator was such an ass. Maybe AI will tell us.

And if it can be programmed in, can it not be programmed out?
 
Cute. Why don't you just say you have nothing to contribute?
My post is the most relevant conclusion you're likely to come to. You're worried about how you'd recognize something when you can't even say what it is.
 
This theme has been explored in plenty of scifi stories :)
However, I believe it is possible to construct an artificial intelligence which, within my lifetime, progressively approaches childlike degrees of sentience and autonomy. :)

I don't disagree with the essence of what you say, but I still revert to what I was trying to say, which is that the fundamentals of what we recognize as human, and as sentience (however it may exist in other animals), are fundamentally linked to the parts of us that we cannot control, even something as mundane as the effects of what we eat.

If we create something approximating that condition, do we not inherently know how to correct whatever flaws it may have (or can it correct them) in a manner that we cannot do to ourselves? How would that condition relate to humanity or what form of sentience would it produce? Surely not like a human one.
 
If we create something approximating that condition, do we not inherently know how to correct whatever flaws it may have (or can it correct them) in a manner that we cannot do to ourselves? How would that condition relate to humanity or what form of sentience would it produce? Surely not like a human one.
And why is that a bad thing?

For many years I had thought that the quest for artificial humaniform* intelligence is stupid. Far more useful would be an artificial intelligence fundamentally different from humans, hence capable of insights no human would ever have. But now that I have seen the movie "Ex Machina", that concept hit me so hard that I cannot see how anyone could walk out of that movie and think "sentience approximating human" is anything but a wild goose chase. In the movie, Nathan uses the entire history of Google searches (yes, every search anyone ever made since Google came into existence) to generate a model of average human personality. That actually might not be a bad way to go about it... if that's what you really want. Why would anyone bother? There is a much easier way to create human personalities.

* To borrow Asimov's term.
 
Why would anyone bother? There is a much easier way to create human personalities.
True; but there is the question of whether and how we could relate to or even recognise a significantly alien kind of sentience (alien as in unfamiliar, not like the biological sentience we know)...
 
True; but there is the question of whether and how we could relate to or even recognise a significantly alien kind of sentience (alien as in unfamiliar, not like the biological sentience we know)...

I suspect we would not have a problem recognizing it, in a Turing test sense. What I question, however, is what would motivate such an AI, if anything not imposed by design.

Humans are after all motivated by factors that we cannot control, or by the attempt to control factors that we have poor command over.
 
True; but there is the question of whether and how we could relate to or even recognise a significantly alien kind of sentience (alien as in unfamiliar, not like the biological sentience we know)...
What's "sentience?"
 
