My takeaway from this interesting thread is currently:
- Sentience is not objectively defined, at least not yet, and may not be possible to accurately define.
- Sentience is not a binary property; there seems to be a continuous range, from the vaguely sentient reactions of a shoal of fish in the presence of a predator to that of humans, and possibly beyond.
- While we currently know of no non-biological sentient entities, we cannot rule out that future artificial constructions could be sentient.
- There is no reason to assume that an artificial sentient entity must behave like a biological one, i.e. a smart computer may not behave like a human and might therefore elude any current test (such as the Turing test).
Hans
Pretty much sums up my views.
My broader view is that the current AIs - the generative/LLM types - are giving us hints as to how complex processes such as our sentience can come about (which is not to say that is how our sentience came about).
As noted several times, we are no longer programming these AIs in the classical sense. There are already aspects of them that we cannot explain; they are doing things we didn't think they could or should be able to do, and we are struggling to understand how. That's the fascination for me.
For many folk who have long considered that we are merely an emergent property of the contaminated doughnut bag of water that is our physicality, these early AIs are beginning to demonstrate that "emergent" properties can account for something as seemingly complex as our sentience.