
ChatGPT

Are dogs sentient in the same way humans are?
I'd say yes, in principle. They are not as sentient*) as humans are, but it is basically the same kind of sentience. A dog may not pass the mirror test, but if you say they are not self-aware, you have never had a Terrier.

*) I am here assuming that sentience is a point on a continuous scale.

Hans
 
No, you lost it when you said we can't compare wiring in computers to nerves...

Mmm, nerves, as in synapses, are quite comparable to the wires used in computers. Neurons, however, are far more complex than the transistors that make up the active elements in a computer.

One fundamental difference between a brain and current computer technology is that the brain can modify its own wiring, and that the program appears to be at least partly embedded in the hardware.

In a computer, the wiring is fixed and won't change, and the program exists as data within it. Some types of software intended to mimic a biological brain (called neural networks) are built to be able to change their own programming, and they may be the best bet for simulating a biological brain, but current AI is not quite like that.
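A minimal Python sketch of that idea (purely illustrative, a toy of my own rather than any real AI system): a one-neuron "network" whose connection strengths, its wiring, change as it is shown examples.

```python
# Minimal sketch (toy example, not any real AI system): a one-neuron
# "network" whose connection strengths change as it sees examples,
# unlike the fixed wiring of a conventional CPU.

import random

weights = [random.uniform(-1, 1) for _ in range(2)]
bias = random.uniform(-1, 1)

def fire(inputs):
    # All-or-nothing output, loosely analogous to a neuron firing.
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if s > 0 else 0

# Teach it the OR function by nudging the "wiring" after each mistake.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
for _ in range(50):
    for inputs, target in examples:
        error = target - fire(inputs)
        weights = [w + 0.1 * error * x for w, x in zip(weights, inputs)]
        bias += 0.1 * error

print([fire(i) for i, _ in examples])  # -> [0, 1, 1, 1] once converged
```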

Hans
 
Are dogs sentient in the same way humans are?

I was thinking along that same line, but using chimps as an example. I think I recall CS Lewis using a bear in an example.

There’s lots of philosophical debate about “What’s it like to be a bat”. I think they assume that a bat is sentient and wonder what it may be like to be inside the bat’s head*.

The podcast I linked above about octopi reflected on a sentience “continuum”, where even a bee or a worm might possess a primitive form of sentience. I think that’s likely.


*I’m reminded of “Being John Malkovich”!
 
Gawd would you read my posts and shut up with this lie about my POV! Or at least don't quote me or ask me to elucidate what I already have.

Let me make it 5 times. Please read carefully:

For the fourth [5th] time now, once we understand how consciousness works in a biological being we can try to build one in a non-biological being/machine.

How hard is that concept?



Fine, other than sci-fi you got a viable model?

Is this something we should look for a century or two from now?

I'm pretty sure we'll figure out biological consciousness long before that.


The problem with your (and others') sci-fi version of conscious or sentient AI programs is: on what is it based? More clever and expansive AI programs can't get from here to there unless you have some plan for how that could happen.


All of human history has been an unending stream of examples of using and improving on things we don't fully understand long before we really begin to understand them. In many (if not most) cases such improvements were the genesis of our understanding.

My expectation is that the efforts to improve AI to the point of inception will be the impetus for that understanding you are certain must come first.
 
I'd say yes, in principle. They are not as sentient*) as humans are, but it is basically the same kind of sentience. A dog may not pass the mirror test, but if you say they are not self-aware, you have never had a Terrier.

*) I am here assuming that sentience is a point on a continuous scale.

Hans

Dogs Pass Test for Awareness of Their Own Bodies: Study
While several studies have attempted to identify self-awareness in canines, few have successfully done so. Dogs typically fail the well-known mirror test, for example, in which an animal is marked with pen or paint and then presented with a mirror; animals are considered to have passed that test if they investigate the mark, because it suggests they recognize their own reflection.

In this study, researchers used the mat setup to test more than 50 dogs of various breeds, sexes, and ages. The team found that, like elephants and most toddlers, dogs are much more likely to get off the mat when asked to pick up an object attached to the mat than they are in any of the various control conditions. The researchers note in their paper that the study presents the first evidence of this kind of awareness in Canis familiaris.

There might be reasons dogs don't pass the mirror test. I can think of one in particular: the image may simply not be one they recognize, rather than them failing to recognize themselves.
 
Interesting paper relevant to the current line of discussion: https://www.nature.com/articles/s43588-023-00527-x

Abstract:
We design a battery of semantic illusions and cognitive reflection tests, aimed to elicit intuitive yet erroneous responses. We administer these tasks, traditionally used to study reasoning and decision-making in humans, to OpenAI’s generative pre-trained transformer model family. The results show that as the models expand in size and linguistic proficiency they increasingly display human-like intuitive system 1 thinking and associated cognitive errors. This pattern shifts notably with the introduction of ChatGPT models, which tend to respond correctly, avoiding the traps embedded in the tasks. Both ChatGPT-3.5 and 4 utilize the input–output context window to engage in chain-of-thought reasoning, reminiscent of how people use notepads to support their system 2 thinking. Yet, they remain accurate even when prevented from engaging in chain-of-thought reasoning, indicating that their system-1-like next-word generation processes are more accurate than those of older models. Our findings highlight the value of applying psychological methodologies to study large language models, as this can uncover previously undetected emergent characteristics.
 
Hans got me interested in digging a little deeper into Neurons.

Neuron

A lot of interesting stuff, but this simple statement makes one wonder why computers haven't evolved even more than they have.

The ability to generate electric signals was a key innovation in the evolution of the nervous system.

That pretty much happened on day one of computer technology...

Hans said:

Neurons, however, are far more complex than the transistors that make up the active elements in a computer.

However, the article says:

The signaling process is partly electrical and partly chemical. Neurons are electrically excitable, due to maintenance of voltage gradients across their membranes. If the voltage changes by a large enough amount over a short interval, the neuron generates an all-or-nothing electrochemical pulse called an action potential.

That sounds a lot like a two-state transistor to me... either off or on.

I would agree that neurons appear more complex than transistors, but in the end their action seems no different.
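For illustration, a toy Python model of that all-or-nothing behaviour (the leak and threshold values are invented): the neuron's "voltage" integrates inputs and leaks over time, and only a threshold crossing produces a pulse, so the output depends on input history in a way a simple switch's does not.

```python
# Toy leaky integrate-and-fire neuron (leak and threshold invented for
# illustration). Unlike a transistor switch, the output depends on input
# history: the "voltage" accumulates input, leaks away, and only crossing
# a threshold produces the all-or-nothing pulse.

def simulate(inputs, leak=0.9, threshold=1.0):
    voltage, spikes = 0.0, []
    for current in inputs:
        voltage = voltage * leak + current   # integrate new input, leak old
        if voltage >= threshold:
            spikes.append(1)                 # all-or-nothing action potential
            voltage = 0.0                    # reset after firing
        else:
            spikes.append(0)
    return spikes

print(simulate([0.3] * 10))  # weak input: fires only occasionally
print(simulate([0.8] * 10))  # stronger input: fires more often
```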
 
Why is sentience so important? IMHO it's an irrelevant and pointless term. Would it change anything if computers were sentient? If dogs weren't? If people weren't?
 
Why is sentience so important? IMHO it's an irrelevant and pointless term. Would it change anything if computers were sentient? If dogs weren't? If people weren't?


The ethical angle, for one thing.

Then there's the unpredictability angle. (Although that's kind of circular. You might just deal with the unpredictability directly, I suppose.)

And finally what it might help us understand about ourselves.
 
That sounds a lot like a two-state transistor to me... either off or on.
Transistors aren't just either on or off. They are analog devices whose output is proportional to the input, which is why they make excellent amplifiers. However, a digital computer needs to have each node in only one of two states, 'on' or 'off'. To do this the circuit is arranged so the transistors will 'flip' from one state to the other and stay there.

The fundamental circuit to store a single bit is called a 'flip-flop', where the output of one transistor is fed into the input of another so that each one holds the other in the opposite state. However, a practical circuit also needs a method of addressing and flipping the state, which generally requires 6 transistors per bit. This is called 'static' memory because the flip-flop stays in one state forever unless deliberately changed or the power is removed. These bits are often arranged in groups called 'registers' to represent larger numbers, eg. 8 bits (2^0 to 2^7) representing a number between 0 and 255.
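A rough Python sketch of that cross-coupled arrangement, using NOR gates to stand in for the transistor pairs (a gate-level simplification of the 6-transistor cell described above):

```python
# Rough sketch of a flip-flop (an SR latch) built from two cross-coupled
# NOR gates, mirroring the "output of one fed into the input of the other"
# arrangement described above. Real static RAM cells add transistors for
# addressing, as noted.

def nor(a, b):
    return 0 if (a or b) else 1

def latch(set_, reset, q, qn):
    for _ in range(4):                       # iterate until outputs settle
        q, qn = nor(reset, qn), nor(set_, q)
    return q, qn

q, qn = latch(set_=1, reset=0, q=0, qn=1)    # write a 1
print(q)                                     # -> 1
q, qn = latch(set_=0, reset=0, q=q, qn=qn)   # inputs released: bit is held
print(q)                                     # -> 1, the latch remembers
```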

But using 6 transistors per bit is wasteful, so another type of storage element was invented called 'dynamic memory' that stores the bit as charge on the gate of a FET (Field Effect Transistor). The problem is that the charge gradually leaks away, causing the FET to 'lose its memory' within a fraction of a second. To avoid this, each cell in the memory is periodically read and the charge replenished - thus the operation is 'dynamic' rather than 'static'.
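And a toy model of the dynamic cell (all numbers invented for illustration): the stored charge leaks a little on each tick and must be refreshed before it decays past the read threshold.

```python
# Toy model of a dynamic memory cell (numbers invented): the stored charge
# leaks a little every tick, so the cell must be periodically read and
# rewritten ("refreshed") before the charge decays past the read threshold.

def read(charge, threshold=0.5):
    return 1 if charge > threshold else 0

charge = 1.0            # write a '1' as stored charge
for tick in range(40):
    charge *= 0.95      # leakage on every tick
    if tick % 10 == 9:  # periodic refresh cycle
        charge = float(read(charge))  # read the bit, rewrite full charge
print(read(charge))     # -> 1; without the refresh it would have read 0
```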

I would agree that neurons appear more complex than transistors, but in the end their action seems no different.
There is a similarity, but it's not 'no different'.

The circuitry in a digital computer is carefully designed to ensure that nothing ever changes state accidentally. A single bit error will make it screw up unpredictably. As this absolute reliability is difficult to achieve in dynamic memory, modern computers often have 'error correcting' memory that is able to detect and repair any single bit error provided it doesn't happen too often.

Neurons don't work like that. They are inherently 'noisy' and prone to firing even without stimulus. This makes neurons more analog-like. A strong stimulus will make them fire more often. The brain then does a 'statistical analysis' of the information it's getting, as opposed to the bits in a computer, whose states are all equally (and vitally) important.
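A toy Python illustration of that statistical point (all numbers invented): each simulated neuron fires probabilistically, including a baseline rate with no stimulus at all, yet the average over a population still tracks the stimulus strength.

```python
# Toy illustration of the statistical point (all numbers invented):
# each neuron fires probabilistically, including a baseline rate with no
# stimulus at all, but the population average still tracks the stimulus.

import random

def population_rate(stimulus, neurons=1000, baseline=0.05):
    # Each neuron fires with probability = baseline noise + stimulus effect.
    p = min(1.0, baseline + 0.8 * stimulus)
    return sum(random.random() < p for _ in range(neurons)) / neurons

for stimulus in (0.0, 0.3, 0.9):
    print(stimulus, round(population_rate(stimulus), 2))
# Any single neuron is unreliable, yet the averaged rate rises with stimulus.
```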

A digital computer made from neurons would be totally unreliable. A brain made from flip-flops expecting accurate data input would also be useless. A computer can simulate the statistical operation of the brain and nervous system, but this requires a huge number of computations.

Another problem is that the computer calculates one thing after another sequentially, whereas the brain is getting large amounts of information in parallel. In some cases - eg. the eye - a certain amount of 'processing' is done by the sensors themselves. The eye actually detects edges of shapes and color changes before sending impulses down the optic nerves. This method is also used in machine vision systems, where the image is divided into small areas which are preprocessed in parallel by GPUs with thousands of cores.
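A small sketch of that kind of preprocessing, reduced to a 1-D row of pixels for brevity (real retinal and GPU pipelines are 2-D and far more elaborate): each output depends only on a local neighbourhood, so every position could be computed in parallel.

```python
# Small sketch of edge-style preprocessing, reduced to a 1-D row of pixels
# for brevity: each output depends only on a local neighbourhood, so every
# position could be computed in parallel, which is what GPUs exploit.

image = [0, 0, 0, 9, 9, 9, 9, 2, 2, 2]   # a row of pixel brightnesses

def edges(row):
    # Difference of neighbours: a large magnitude marks a brightness edge.
    return [row[i + 1] - row[i] for i in range(len(row) - 1)]

print(edges(image))  # -> [0, 0, 9, 0, 0, 0, -7, 0, 0]
```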

Trying to emulate the brain and nervous system with a conventional computer is a losing battle because you are attempting to make a machine that works sequentially with total accuracy act like a system that operates statistically on massive amounts of parallel data. The answer to this is to create hardware structures that are closer to the brain and nervous system.

As the size of transistors in ICs gets smaller to achieve higher density, it becomes harder to maintain absolute reliability. However, for 'AI' we might not need that reliability. Modern memory cards use multiple charge levels on the FETs to increase the data density, then apply error correction to deal with the inevitable errors that occur. For a neural network that error correction may not be necessary, because the system is designed to tolerate it. It might not even require a CPU running program code to do its thing. With the right interconnections between 'neurons', a chip could perform the required processing itself, much like the eye processes images without doing any 'thinking'.
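A rough sketch of that error-tolerance claim (a toy example, not real hardware): corrupting a small fraction of the weights feeding a many-input averaging "neuron" barely moves its output, whereas a single flipped bit in a conventionally stored integer can change it drastically.

```python
# Rough sketch of error tolerance (a toy example, not real hardware):
# flip the sign of 1% of the weights feeding a many-input averaging
# "neuron" and the output barely moves, whereas one flipped bit in a
# stored integer changes its value drastically.

import random

weights = [1.0] * 1000
inputs = [0.5] * 1000

def output(ws):
    return sum(w * x for w, x in zip(ws, inputs)) / len(ws)

print(output(weights))                    # -> 0.5
damaged = list(weights)
for i in random.sample(range(1000), 10):  # corrupt 1% of the weights
    damaged[i] = -damaged[i]
print(round(output(damaged), 3))          # -> 0.49: graceful degradation

print(200, 200 ^ (1 << 7))                # -> 200 72: one bit flip in an int
```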
 
Uh, just this: the reason I claim a neuron is more complicated than a transistor (or a flip-flop) is that it normally connects to several synapses, and it does some kind of calculation on how to react to the signals.

Hans
 
You asked if dogs were sentient in the same way humans are. Why wouldn't they be, if you weren't referring to their different intelligence?

Again, no one else has brought intelligence into the discussion but you; you keep creating strawmen.

If a dog can be sentient in a different way to a human, then it provides a counter to one of the assertions you use in your argument against AI being sentient.
 
The ethical angle, for one thing.

....

That's a human concept, and it's problematic when you leave it up to humans to decide who, or what, is deserving of ethical treatment.

In my opinion, humans are only special ( and deserving of ethical treatment ) because of our ability to think we are.

Then there is the caveat that sometimes one group of humans decides another group of humans, much less other animals or machines, are not deserving of ethical treatment.
 
The ethical angle, for one thing.

Then there's the unpredictability angle. (Although that's kind of circular. You might just deal with the unpredictability directly, I suppose.)

And finally what it might help us understand about ourselves.

Ethical angle? How? All ethics toward other species are purely utilitarian. If the species is harmful, we kill it. If it's tasty, we treat it well, until we kill it. If it's cute, we might give it almost human rights... but it has nothing to do with sentience.
And IMHO we will treat AIs the same.
As for understanding ourselves, that is indeed important. And AIs might help, especially the kind where you simulate a brain. But it's not strictly an AI topic.
 
Ethical angle? How? All ethics toward other species are purely utilitarian. If the species is harmful, we kill it. If it's tasty, we treat it well, until we kill it. If it's cute, we might give it almost human rights... but it has nothing to do with sentience.
And IMHO we will treat AIs the same.
As for understanding ourselves, that is indeed important. And AIs might help, especially the kind where you simulate a brain. But it's not strictly an AI topic.

Uh-huh ...

There are many people who don't eat or enslave animals and put sentience high on the list of reasons. So yes, ethical angle.
 
