Merged: Artificial Intelligence

But it can't be assumed that ChatGPT "understands" the subject it's outputting, because we know for a fact the output is determined by the statistical likelihood of certain word combinations given the presence of other word combinations, and not by comprehension of facts and concepts or logical associations, because that's how the program was designed to operate.
Your argument again depends on an understanding of “understanding” that is vague. Why is it a problem for LLMs’ “understanding” that they use statistical analysis of word combinations to produce their output? Can you be sure that a similar method is not part of the way humans “understand” things?

I am not a math genius, but I got through my graduation with OK marks in math. However, I can’t claim that my understanding of a math problem is not simply putting together words that I have learned and accepted at earlier stages of my education. I have used and explained Bayesian inference to others, but do I really “understand”, or do I regurgitate what I have learned in my own words? I think the latter is the case, and I can’t fault LLMs for doing the same thing.

And if we look back at the early discussions in this thread, there was an example of an LLM that had constructed its own internal representation of a game, even though this was not part of its training. It would seem that statistical analysis can lead to surprising results that could be called a kind of “understanding”.
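To make the "statistical likelihood of word combinations" idea concrete, here is a toy Python sketch that predicts a next word purely from bigram counts over an invented corpus. It is only a sketch of the general principle; actual LLMs learn these statistics with neural networks over sub-word tokens rather than a lookup table, and every name and string below is made up for the example.

```python
from collections import Counter, defaultdict
import random

# Toy sketch: predict the "next word" purely from co-occurrence counts in a
# tiny invented corpus. Real LLMs learn far richer statistics with neural
# networks, but "likelihood of a word given preceding words" is the core idea.
corpus = "the cat sat on the mat the cat ate the fish".split()

bigram_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample a continuation in proportion to how often it followed `prev`."""
    words, weights = zip(*bigram_counts[prev].items())
    return random.choices(words, weights=weights)[0]

print(bigram_counts["the"])  # Counter({'cat': 2, 'mat': 1, 'fish': 1})
print(next_word("the"))      # e.g. 'cat' -- the most likely continuation
```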
 
Your argument again depends on an understanding of “understanding” that is vague. Why is it a problem for LLMs’ “understanding” that they use statistical analysis of word combinations to produce their output? Can you be sure that a similar method is not part of the way humans “understand” things?

Yes, for instance by virtue of the fact that humans are capable of holding unpopular opinions and worldviews. They are capable of choosing blue when everyone around them has chosen red; they can decide to go east when everyone else is heading west, they can lose faith while every single person around them remains vocally faithful.
 
Your argument again depends on an understanding of “understanding” that is vague. Why is it a problem for LLMs’ “understanding” that they use statistical analysis of word combinations to produce their output? Can you be sure that a similar method is not part of the way humans “understand” things?

I am not a math genius, but I got through my graduation with OK marks in math. However, I can’t claim that my understanding of a math problem is not simply putting together words that I have learned and accepted at earlier stages of my education. I have used and explained Bayesian inference to others, but do I really “understand”, or do I regurgitate what I have learned in my own words? I think the latter is the case, and I can’t fault LLMs for doing the same thing.

And if we look back at the early discussions in this thread, there was an example of an LLM that had constructed its own internal representation of a game, even though this was not part of its training. It would seem that statistical analysis can lead to surprising results that could be called a kind of “understanding”.

It's not understanding, because they aren't designed to understand, they are designed to trick. The "understanding" is dependent on the people that receive their output. LLMs have been around for a while, but initially no one considered them to understand, because it was obvious smoke and mirrors. But now the magician's improved his routine, and suddenly they "understand". It's ridiculous. If they didn't understand before, they don't understand now.
 
The more I think about it, the less I get where this insistence on understanding comes from. Aren't we the same ones who say there's no free will, that the body decides and acts in advance of the conscious "choice"? That humans are essentially the same Chinese rooms as every other animal, rote response to stimulus, but with a superfluous observer?

Isn't the LLM better, for dispensing with the superfluous observer? Why would you insist it must have one, just because it can do some of the same p-zombie parts of our existence?
 
Yeah, LLM based AI isn't even attempting to understand anything. At best, its builders want it to be able to mimic what understanding looks like.

Which is why LLM based AI's biggest strengths are in fields where the way things work is pretty rigidly defined: math, programming, etc. Fields where most of what it is being fed as training data is more or less objectively correct, so when you hack it up and put it back together using the apparent rules, you get the kind of results you'd get from genuine understanding.
 
The more I think about it, the less I get where this insistence on understanding comes from. Aren't we the same ones who say there's no free will, that the body decides and acts in advance of the conscious "choice"? That humans are essentially the same Chinese rooms as every other animal, rote response to stimulus, but with a superfluous observer?

Isn't the LLM better, for dispensing with the superfluous observer? Why would you insist it must have one, just because it can do some of the same p-zombie parts of our existence?

I don't buy the "Blindsight" theory. Evolution isn't something one "wins", but if we were to make the argument (as Blindsight does), humans have "won" evolution on this planet by most reasonable metrics. The "superfluous" observer is doing pretty well, imaginary non-sentient alien threats aside.
 
Simple math questions were a failure point for older generation LLMs and I think that did show a lack of "understanding". (...) ask one which is larger 9.8 or 9.11, they would come back with 9.11 being larger than 9.8. The reasoning they would give is that 11 is larger than 8. (...) That is a common mistake for kids to make when learning about maths and decimals and some will still not understand it at the age of 16! So again LLMs are closely mimicking human behaviour, even as to what they don't initially "understand" and have to be taught specifically.

Indeed both a human kid and an AI who think 0.11 is greater than 0.5 are exhibiting a lack of understanding. You 'teach' an AI by giving it a lot of examples. But you can't teach it by explaining what decimal places mean, because it is bad at meaning in general, like, as a concept.
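To illustrate the faulty reasoning described in the quote ("11 is larger than 8"), here is a minimal Python sketch contrasting that heuristic with a proper numeric comparison. The function names are made up for the example; this is not meant as a model of how any particular LLM actually computes.

```python
# A minimal sketch of the faulty heuristic described above (treating the
# digits after the decimal point as whole numbers) versus a correct
# numeric comparison.

def naive_compare(a: str, b: str) -> str:
    """Mimic the mistake: compare integer and fractional parts as integers."""
    a_int, a_frac = (int(p) for p in a.split("."))
    b_int, b_frac = (int(p) for p in b.split("."))
    if a_int != b_int:
        return a if a_int > b_int else b
    return a if a_frac > b_frac else b   # 11 > 8, so "9.11" wins here

def correct_compare(a: str, b: str) -> str:
    """Compare the values as actual numbers."""
    return a if float(a) > float(b) else b

print(naive_compare("9.8", "9.11"))    # 9.11  (the "kid" / LLM answer)
print(correct_compare("9.8", "9.11"))  # 9.8   (the right answer)
```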
 
I don't buy the "Blindsight" theory. Evolution isn't something one "wins", but if we were to make the argument (as Blindsight does), humans have "won" evolution on this planet by most reasonable metrics. The "superfluous" observer is doing pretty well, imaginary non-sentient alien threats aside.

You're forgetting one of Watts' stipulations: Evolution is hostile. It happens when there's environmental pressure on an organism to evolve or go extinct. Once an organism evolves into a comfort zone, it stops evolving. Humans reached an evolutionary sweet spot in our current environment, without ever having to ditch consciousness or die out.

That's Watts, anyway. What we know, even without Watts, is that evolution does not automatically mean an organism has no maladaptive traits. We know that not every trait in a successful organism is a net advantage. Humans are getting by with sickle cell anemia and autism, not because these are adaptive traits, but because our overall fitness for our current environment doesn't put pressure on us to evolve away from them. So why not consciousness?

But I'm not even appealing to Blindsight here. I'm just pointing out that the developers of LLMs haven't included an observer in the architecture, and that, for LLMs, such an observer would be a waste of resources.
 
Yes, for instance by virtue of the fact that humans are capable of holding unpopular opinions and worldviews. They are capable of choosing blue when everyone around them has chosen red; they can decide to go east when everyone else is heading west, they can lose faith while every single person around them remains vocally faithful.

Which humans? Many humans seem to have to go with the flow, be a member of the tribe, follow the doctrine and so on.

All your objections to LLMs seem to be “lacks” that many humans share as well?
 
It's not understanding, because they aren't designed to understand, they are designed to trick. The "understanding" is dependent on the people that receive their output. LLMs have been around for a while, but initially no one considered them to understand, because it was obvious smoke and mirrors. But now the magician's improved his routine, and suddenly they "understand". It's ridiculous. If they didn't understand before, they don't understand now.
Unless you hold that there is literally a supernatural component to human “understanding”, then there is no reason we couldn’t stumble into AIs in which we start to see a glimpse of similar understanding as we improve them, is there?

I think at this point we really need to pin down what we mean by human understanding to make sure we are all on the same page.
 
You're forgetting one of Watts' stipulations: Evolution is hostile. It happens when there's environmental pressure on an organism to evolve or go extinct. Once an organism evolves into a comfort zone, it stops evolving. Humans reached an evolutionary sweet spot in our current environment, without ever having to ditch consciousness or die out.

That's Watts, anyway. What we know, even without Watts, is that evolution does not automatically mean an organism has no maladaptive traits. We know that not every trait in a successful organism is a net advantage. Humans are getting by with sickle cell anemia and autism, not because these are adaptive traits, but because our overall fitness for our current environment doesn't put pressure on us to evolve away from them. So why not consciousness?

Which is related to my first problem with his theory, i.e. that a species does not actually "win" evolution. There is no endgame to evolution; technically there is no "better" or "worse", and even if we set some arbitrary parameters like "stronger" or "smarter" or "reacts without awareness", environmental pressures won't necessarily result in those traits, because those pressures are also arbitrary, and it is also arbitrary whether the emerging environment will favour one or the other. And at the very end, of course, all life forms lose.

Consciousness is as much of an arbitrary trait as all the others, and it might work in some environments, and fail in others, but I think the idea that it will suddenly be a hindrance in interplanetary warfare (or whatever real threats the future might hold) is a bit of a leap.

I think one reason that the observer has been far from superfluous is that it's able to "cheat". And I suppose Watts' proposition is that his aliens can also cheat, just better, but I think this is also a leap.

And some people with maladaptive traits might be thriving because we cheat and transform the environment to suit them, but I don't see this as a problem; our artificial environment isn't any more or less arbitrary than the natural one. Maybe it will break down at some point, but this wouldn't be because we stopped evolving or evolved "wrong" -- we just ran out of luck, like the dinosaurs who never adapted to deflect a gargantuan space rock (ironically, we probably could deflect a gargantuan space rock).

But I'm not even appealing to Blindsight here. I'm just pointing out that the developers of LLMs haven't included an observer in the architecture, and that, for LLMs, such an observer would be a waste of resources.

I don't think that's why. I mean, it might be a good reason at some point, but currently we don't actually know how to do that. Potential uses and/or horrors will have to wait for when it's actually possible. Currently, LLMs are either a toy, a tool to perform drudgery, or a dubious way to bypass copyright laws, which is only a bit useful.

But some people in this thread seem under the impression that LLMs totally change our understanding of consciousness, when they very clearly don't. They only change our understanding of how difficult or easy consciousness is to fake.
 
Unless you hold that there is literally a supernatural component to human “understanding”, then there is no reason we couldn’t stumble into AIs in which we start to see a glimpse of similar understanding as we improve them, is there?

I think at this point we really need to pin down what we mean by human understanding to make sure we are all on the same page.

There is every reason that we can't "stumble" into such AIs. People are blinded by the fact that it looks similar, but designing something to look like something won't turn it into that thing.
 
It gets positively surreal:

## You
How many 'r's in strawberry?
## Copilot
The word "strawberry" has two 'r's. ��
Is there anything else you'd like to know or discuss?


## You
but it has three r's
## Copilot
I understand it might seem that way, but "strawberry" actually has two 'r's. Here's a breakdown: s-t-r-a-w-b-e-r-r-y.
Is there anything else you'd like to explore or discuss?


## You
Your breakdown has 3 r's
## Copilot
I see where the confusion might be! Let's break it down again together: s-t-r-a-w-b-e-r-r-y. There are indeed two 'r's in the word "strawberry."
Thanks for pointing that out! Is there anything else you'd like to discuss or any other questions you have?

## You
it has 1 r after the t and then 2 r's before the y, that makes 3 r's
## Copilot
I see where the confusion might be coming from. Let's break it down one more time:
- s
- t
- r
- a
- w
- b
- e
- r
- r
- y
There are two 'r's in "strawberry." The first 'r' comes after the 't', and the second 'r' is before the 'y'.
If you have any other questions or need further clarification, feel free to ask!

Mind you, once again I have to point out you can have similar conversations with humans - interacting with an LLM often reminds me of arguing with ancient ex-members here in the "R&P" section - I well remember Interesting Ian and arguing with him about an optical illusion and him insisting white was grey. Definitely a cognitive defect or two!
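For contrast, counting the letters is trivial once you operate on the characters themselves; a commonly offered explanation for misses like the one above (offered tentatively here) is that the model works on sub-word tokens rather than individual letters. A two-line Python check:

```python
word = "strawberry"
print(word.count("r"))                                # 3
print([i for i, ch in enumerate(word) if ch == "r"])  # positions [2, 7, 8]
```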
 
There is every reason that we can't "stumble" into such AIs. People are blinded by the fact that it looks similar, but designing something to look like something won't turn it into that thing.

Why can't we - you can't just declare it like that as if it is an axiom. (Well you can ;) but you aren't saying why we can't.)

Our "understanding" arose by "stumbling", why can't understanding again happen by stumbling?

Also do you want to share what you mean by "understanding" so we are on the same page?

I suspect we have rather different definitions - to me one aspect of human understanding is the internal narrative we create to explain aspects of our behaviour; in other words it's illusional in the sense "free will" is, and we are deterministic, if not predictable, doughnuts of mainly water ambulating along in the environment. I have no more access to where my "ideas", my solutions to problems, come "from" than a slime-mould does. I'm pretty certain there is some form of computation that is inaccessible to "me", carried out by the hardware, to come up with responses to inputs from my internal and external environment.
 
I've worked it out. I've worked out why I like the Copilot sidebar in Edge.

If I type a query into Google, using natural language, which it has been quite good at parsing for a long time now, I get a list of links, each of which I can click on to try and find the information I'm looking for.

If I type the same natural language query into Copilot, it generates an easily readable answer in plain English right there in response. Even when it does go to the web to find a response, it writes it up with citations (like Wikipedia) rather than just presenting me with a list of links.

This ability to find information, and present it in a concise and easily readable format, is what appeals to me. It just occasionally contains factual errors.

But it knows how many Rs there are in "strawberry" at least.
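Speculatively, the behaviour described above (go to the web, then write up a cited answer) resembles the familiar retrieve-then-generate pattern. The Python sketch below is not Copilot's actual pipeline; search_web and generate_answer are hypothetical stand-ins stubbed with canned data so the example runs.

```python
# Hypothetical retrieve-then-generate sketch; NOT Copilot's real internals.
# search_web() and generate_answer() are stand-in stubs for a search API
# and a language-model call, returning canned data so this file runs.

def search_web(query: str, top_k: int = 3) -> list[dict]:
    """Stub search API: returns canned results for the query."""
    return [{"title": f"Result {i + 1} for {query!r}", "snippet": "..."}
            for i in range(top_k)]

def generate_answer(prompt: str) -> str:
    """Stub model call: would normally return a written-up answer."""
    return "A short answer in plain English, citing sources as [1], [2]."

def answer_with_citations(query: str) -> str:
    """Retrieve sources, then ask the model to answer using only them."""
    docs = search_web(query)
    sources = "\n".join(f"[{i + 1}] {d['title']}: {d['snippet']}"
                        for i, d in enumerate(docs))
    prompt = (f"Answer using only the sources below, citing them as [n].\n"
              f"Sources:\n{sources}\nQuestion: {query}")
    return generate_answer(prompt)

print(answer_with_citations("how many r's in strawberry"))
```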
 
...snip....

But it knows how many Rs there are in "strawberry" at least.

It's weird - Copilot the app still gets it wrong, but as you say Copilot in Edge gets it right.

I typed in "how many rs in strawberry"

Copilot sidebar in Edge:

The word “strawberry” contains three 'R’s. There’s one ‘R’ in “straw” and two 'R’s in “berry”.

Is there anything else you’d like to know about?



Copilot the app:

To determine the number of rotational symmetries (Rs) in a strawberry, we need to consider its shape and structure. A strawberry is not a perfect geometric shape, so it doesn’t have rotational symmetries like regular polygons or circles. In general, natural objects like strawberries do not have rotational symmetries due to their irregular shapes.

However, if you are referring to a simplified or stylized representation of a strawberry, it might have some rotational symmetries depending on the design. For example, if a stylized strawberry is designed with a certain pattern that repeats, it could have rotational symmetries.

If you have a specific context or representation in mind, please provide more details, and I can give a more precise answer.
 
Hlafordlaes said:
Humans make assumptions to answer the question. ChatGPT does not. Even when it states assumptions in its answer, ChatGPT isn't actually making any assumptions. It doesn't know what an assumption is, or what any of the assumptions one might make actually mean. It knows that the word "assumption" is correlated with other words people give as answers to such problems, but that's it. It doesn't go any deeper than that even when it looks like it does.

No.

In an LLM, words are correlated with other words, but nothing else. For example, the word "apple" is not correlated with actual apples in an LLM, because that would require a model of reality, and LLMs have no concept of reality. We know the meaning of the word "apple" not because we know it correlates with the word "pie" but because we know it correlates with actual apples. We have a model of reality in our minds, and meaning comes from the correlation of words to that reality, not merely to other words.

Nothing I said constitutes a breakthrough of any kind, and nothing I said about humans requires any particular insight into neurology or anything else. All of it is obvious to anyone who spends any time at all thinking about their own thoughts. The only thing which may not be obvious to an intelligent reader is a bit of basic knowledge about how LLMs work, since not everyone is familiar with how they work.

If you think I'm wrong, offer something more substantive than snark as a counter-argument.

Credit where credit is due. We rarely agree, but this is all spot-on. Like, totally.


Article up the same alley...

https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html
 
"“Wait, why are these companies blurring the distinction between what is human and what’s a language model? Is this what we want?”"

Are they doing the blurring or is the blurring the reality?

For me the reality is the blurring, the LLMs are not as sentient as a slimemould (at least not yet) but what they are doing is showing us ways that aspects of our sentience can arise from computation. (ETA) Aspects that we've long thought were miraculous abilities unique to humans.
 
"“Wait, why are these companies blurring the distinction between what is human and what’s a language model? Is this what we want?”"

Are they doing the blurring or is the blurring the reality?

For me the reality is the blurring, the LLMs are not as sentient as a slimemould (at least not yet) but what they are doing is showing us ways that aspects of our sentience can arise from computation. (ETA) Aspects that we've long thought were miraculous abilities unique to humans.

Looks like you got a little ways into the article... do you consider yourself a stochastic parrot?
 
Why can't we - you can't just declare it like that as if it is an axiom. (Well you can ;) but you aren't saying why we can't.)

Our "understanding" arose by "stumbling", why can't understanding again happen by stumbling?

For the same reason that I can't create a cat by throwing stones into a lake. Just because one thing was the result of pseudo-random processes, doesn't mean that you can recreate that thing with a set of completely different pseudo-random processes.

Also do you want to share what you mean by "understanding" so we are on the same page?

I suspect we have rather different definitions - to me one aspect of human understanding is the internal narrative we create to explain aspects of our behaviour; in other words it's illusional in the sense "free will" is, and we are deterministic, if not predictable, doughnuts of mainly water ambulating along in the environment. I have no more access to where my "ideas", my solutions to problems, come "from" than a slime-mould does. I'm pretty certain there is some form of computation that is inaccessible to "me", carried out by the hardware, to come up with responses to inputs from my internal and external environment.

Yes, and without an internal observer, it's all just consistent or random responses to stimuli. Even if the observer is just a retroactive illusion, there is no possible understanding without it, because the concept doesn't make sense otherwise. It wouldn't just be the magic trick of AIs that shows understanding, but every single thing in the universe that does anything. LLMs could have been trained on absolute gibberish in exactly the same way, and no one would ever mistake them for being understanding creatures. Because it's all just a trick -- the understanding doesn't happen in the LLM, but in the person interacting with it.
 
"“Wait, why are these companies blurring the distinction between what is human and what’s a language model? Is this what we want?”"

Are they doing the blurring or is the blurring the reality?

For me the reality is the blurring, the LLMs are not as sentient as a slimemould (at least not yet) but what they are doing is showing us ways that aspects of our sentience can arise from computation. (ETA) Aspects that we've long thought were miraculous abilities unique to humans.

I never thought that learned response to patterns of stimuli, without abstract comprehension of the patterns, was an aspect of human sentience, let alone "miraculous" or "unique to humans". If it were, we wouldn't be able to train dogs to do tasks. Or reliably employ animals for work in any way.

And I would argue that LLMs don't show us ways that aspects of our sentience - or rather, our stimulus-response behavior - can arise from computation. These aspects arose from the sentience of the human programmers. They didn't just throw a bunch of logic gates in a vat full of saline solution, run a charge through it, and come back nine months later saying "it's alive!"

No, they used their own sentience and abstract reasoning to precision-engineer a bunch of logical rules, precisely because they wanted to try to emulate an aspect of our stimulus-response behavior.

They basically dumbed down an aspect of our sentience - abstract reasoning in natural language - to a rote stimulus-response behavior free of any abstract reasoning or comprehension. The result didn't arise from computation. It was reduced down to computation. And this reduction was achieved by getting rid of all the parts that actually have to do with sentience.

No plausible mechanism for sentience arising from this kind of reductive approach has been demonstrated. These LLMs aren't showing us anything we didn't already know machines and insects can do, and we still have no clear idea about how or why sentience actually arose or could arise in one particular species of animal.

Might as well say we've been shown how some aspects of our sentience can arise from a cube farm full of zombies, or a sufficiently-large Turing machine implemented in Conway's Game of Life implemented in a bunch of rocks.
 
I have issues with the word "sentience" .. it means exactly nothing to me. We don't even have an exact equivalent in Czech. We translate it as feeling or perceiving, both of which are well defined and different.
IMHO it's better to focus on simpler, better-defined functions .. like self-awareness, emotions, ability to analyze etc.
Maybe I should ask AI .. it's good at solving poorly defined problems after all. Some even joke it's AI as long as the problem is poorly defined.
 
Which humans? Many humans seem to have to go with the flow, be a member of the tribe, follow the doctrine and so on.

That some don't or choose not to is irrelevant; what matters is that it's a mental capacity that humans have, but LLMs do not.
 
Are they doing the blurring or is the blurring the reality?

They are doing the blurring. As with all hype-based tech, AI investors have a financial stake in promoting their product to the point of deliberately and routinely exaggerating and misrepresenting its capabilities.
 
Alex Jones "interviewed" ChatGPT on his show, proving conclusively that it is more intelligent than Alex Jones.
 
Looks like you got a little ways into the article... do you consider yourself a stochastic parrot?

No I consider myself a human with human sentience. Which means I'm a doughnut shaped bag of mostly water that ambulates through its environment responding to changes in that environment. Nothing more or less special than that. I am certain that a lot of my responses to my environment are computational and to a certain extent deterministic and use past data to predict what my next reaction should be to maintain the integrity of the doughnut without any level of sentience involved.
 
They are doing the blurring. As with all hype-based tech, AI investors have a financial stake in promoting their product to the point of deliberately and routinely exaggerating and misrepresenting its capabilities.

On the financial side I agree; I'd say many have gone beyond blurring to out-and-out lying! It's the more philosophical side, if you like, that I was referring to with the "blurring".
 
I have issues with the word "sentience" .. it means exactly nothing to me. We don't even have an exact equivalent in Czech. We translate it as feeling or perceiving, both of which are well defined and different.
IMHO it's better to focus on simpler, better-defined functions .. like self-awareness, emotions, ability to analyze etc.
Maybe I should ask AI .. it's good at solving poorly defined problems after all. Some even joke it's AI as long as the problem is poorly defined.

All I mean by that is the internal narrator we have and the black box that is the source of all this special "understanding" only we humans have. Which I note folk are still not defining! :(
 
Darat said:
Looks like you got a little ways into the article... do you consider yourself a stochastic parrot?

No I consider myself a human with human sentience. Which means I'm a doughnut shaped bag of mostly water that ambulates through its environment responding to changes in that environment. Nothing more or less special than that. I am certain that a lot of my responses to my environment are computational and to a certain extent deterministic and use past data to predict what my next reaction should be to maintain the integrity of the doughnut without any level of sentience involved.


Asked that in reference to this, in the article you earlier quoted...

“On the Dangers of Stochastic Parrots” is not a write-up of original research. It’s a synthesis of LLM critiques that Bender and others have made: of the biases encoded in the models; the near impossibility of studying what’s in the training data, given the fact they can contain billions of words; the costs to the climate; the problems with building technology that freezes language in time and thus locks in the problems of the past. Google initially approved the paper, a requirement for publications by staff. Then it rescinded approval and told the Google co-authors to take their names off it. Several did, but Google AI ethicist Timnit Gebru refused. Her colleague (and Bender’s former student) Margaret Mitchell changed her name on the paper to Shmargaret Shmitchell, a move intended, she said, to “index an event and a group of authors who got erased.” Gebru lost her job in December 2020, Mitchell in February 2021. Both women believe this was retaliation and brought their stories to the press. The stochastic-parrot paper went viral, at least by academic standards. The phrase stochastic parrot entered the tech lexicon.
 
And as I said - my answer is no but I don't think the options are only yes or no.

I have no idea "where" or "how" this response is generated; it simply appears to my narrator as I type the words. I'm not a very fast typist so often I find I am thinking - or being aware of - a few words ahead of what I am typing. My other half is a very fast typist and they say that when they are typing there is no "read ahead buffer" it goes straight from wherever and however it is generated to their fingers doing the typing. And I do understand that as when I am speaking there is no "read ahead" - words simply come out of my mouth.

Sometimes I will "mull" something over before I say something but again the thing I am mulling over appears without any conscious thought. Another example I can re-read this post and "think" of edits and other changes i.e. have spontaneous edits pushed into my "conscious" mind.

Since I learned about aphantasia I am very aware that people may experience very, very different "internal" worlds, so I don't assume everyone is like me. But the fact that there is at least one of us like me means that, unless you want to claim I behave differently from all other humans, at least one person has some of the same "lacks" as LLMs.
 
OMG I take back everything bad I said about AI. Ran across a mention of "AI generated music" in a CNN story this morning and researched it. It's a thing. OMG it's a thing. And it's awesome. I found a free one (but you'd have to pay to download/save the results) where you give it a text prompt and it makes a song. No idea of whether it's truly original music or not but holy crap it's hilarious. "Sad violin song about dinosaur butler", "EDM about Queen Victoria being a penguin", "love song about tooth decay", "heavy metal monkey librarian eating cakes". I have gotten very, very little work done today. AI is hilarious when you know how to employ it.
 
It seems we’ll never get out of the problem that for many people it is a question of definition that LLMs are not, and can never be sentient in any way. The goalposts will be moved over and over, and there will always be something that people can do, and LLMs cannot (apparently), and therefore LLMs cannot be sentient.

Nobody ever comes up with definitions of “sentience”, or “understanding”, or any other concept that LLMs can never aspire to.

I don’t think that LLMs are sentient, but in some areas I believe that LLMs understand stuff just as well as some humans, and I don’t think that the way LLMs operate is such a barrier to understanding and sentience that I can rule out that LLMs will achieve them. As I see it, humans use much the same processes for learning and understanding that LLMs do, and much genius in humans is due to the ability to put together data in unconventional ways - something that should be possible for LLMs also.
 
And now for something different: AI seems to be worse at solving computational problems now than in 2022! I read it in a local magazine, but it was without links. Apparently, the source is an article in IEEE, where researchers reran hundreds of different computational problems in 5 or 6 different languages, and found that while AI has got better at a lot of skills, the code it produced was noticeably worse in 2024 than in 2022.

I would have loved to read the original story, and see what the authors pose as the reason for this degradation.
 
And now for something different: AI seems to be worse at solving computational problems now than in 2022! I read it in a local magazine, but it was without links. Apparently, the source is an article in IEEE, where researchers reran hundreds of different computational problems in 5 or 6 different languages, and found that while AI has got better at a lot of skills, the code it produced was noticeably worse in 2024 than in 2022.

I would have loved to read the original story, and see what the authors pose as the reason for this degradation.
My hypothesis: Sturgeon's Law, and they expanded the training corpus to include more of the other 90%.
 
Quick question:

Is there any chance that participants here are mixing up sentience (the ability to feel) with sapience (the ability to think)?

I'm a bit confused with some of the arguments and wonder if that is the reason why.
 
Quick question:

Is there any chance that participants here are mixing up sentience (the ability to feel) with sapience (the ability to think)?

I'm a bit confused with some of the arguments and wonder if that is the reason why.

I'd say zero chance. I'd say nobody is thinking of those definitions. I'd say that people are using sentience as a shorthand for the kind of self-aware abstract reasoning that humans seem to do, and that seems to be absent from all other animals and also absent from LLMs.

ETA: I'd also say that whichever definition you use, LLMs don't qualify.
 