
But it can't be assumed that ChatGPT "understands" the subject it's outputting, because we know for a fact the output is determined by the statistical likelihood of certain word combinations given the presence of other word combinations, and not by comprehension of facts and concepts or logical associations, because that's how the program was designed to operate.
Your argument again depends on an understanding of “understanding” that is vague. Why is it a problem for LLMs’ “understanding” that they use statistical analysis of word combinations to produce their output? Can you be sure that a similar method is not part of the way humans “understand” things?

I am not a math genius, but I got through my graduation with OK marks in math. However, I can’t claim that my understanding of a math problem is not simply putting together words that I have learned and accepted at earlier stages of my education. I have used and explained Bayesian inference to others, but do I really “understand”, or do I regurgitate what I have learned in my own words? I think the latter is the case, and I can’t fault LLMs for doing the same thing.

And if we look back at the early discussions in this thread, there was an example of an LLM that had constructed its own internal representation of a game, even though this was not part of its training. It would seem that statistical analysis can lead to surprising results that could be called a kind of “understanding”.
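For anyone unfamiliar with what “statistical analysis of word combinations” means in practice, here is a deliberately tiny sketch in Python. The word table and the probabilities are invented purely for illustration (a real LLM works on tokens and learns billions of such conditional probabilities), but the basic move is the same: score possible continuations given the preceding context and sample from that distribution.

```python
import random

# Toy stand-in for a language model: probability of the next word
# given the previous two words. The numbers are invented for
# illustration; a real model learns these weights from training data.
next_word_probs = {
    ("the", "cat"): {"sat": 0.6, "slept": 0.3, "meowed": 0.1},
    ("cat", "sat"): {"on": 0.8, "down": 0.2},
    ("sat", "on"): {"the": 0.9, "a": 0.1},
    ("on", "the"): {"mat": 0.7, "sofa": 0.3},
}

def generate(context, length=4):
    """Repeatedly sample a continuation from the learned distribution."""
    words = list(context)
    for _ in range(length):
        probs = next_word_probs.get(tuple(words[-2:]))
        if probs is None:  # no statistics for this context
            break
        choices, weights = zip(*probs.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate(["the", "cat"]))  # e.g. "the cat sat on the mat"
```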
 
Your argument again depends on an understanding of “understanding” that is vague. Why is it a problem for LLMs’ “understanding” that they use statistical analysis of word combinations to produce their output? Can you be sure that a similar method is not part of the way humans “understand” things?

Yes, for instance by virtue of the fact that humans are capable of holding unpopular opinions and worldviews. They are capable of choosing blue when everyone around them has chosen red; they can decide to go east when everyone else is heading west; they can lose faith while every single person around them remains vocally faithful.
 
Your argument again depends on an understanding of “understanding” that is vague. Why is it a problem for LLMs’ “understanding” that they use statistical analysis of word combinations to produce their output? Can you be sure that a similar method is not part of the way humans “understand” things?

I am not a math genius, but I got through my graduation with OK marks in math. However, I can’t claim that my understanding of a math problem is not simply putting together words that I have learned and accepted at earlier stages of my education. I have used and explained Bayesian inference to others, but do I really “understand”, or do I regurgitate what I have learned in my own words? I think the latter is the case, and I can’t fault LLMs for doing the same thing.

And if we look back at the early discussions in this thread, there was an example of an LLM that had constructed its own internal representation of a game, even though this was not part of its training. It would seem that statistical analysis can lead to surprising results that could be called a kind of “understanding”.

It's not understanding, because they aren't designed to understand, they are designed to trick. The "understanding" is dependent on the people that receive their output. LLMs have been around for a while, but initially no one considered them to understand, because it was obvious smoke and mirrors. But now the magician's improved his routine, and suddenly they "understand". It's ridiculous. If they didn't understand before, they don't understand now.
 
The more I think about it, the less I get where this insistence on understanding comes from. Aren't we the same ones who say there's no free will, that the body decides and acts in advance of the conscious "choice"? That humans are essentially the same Chinese rooms as every other animal, rote response to stimulus, but with a superfluous observer?

Isn't the LLM better, for dispensing with the superfluous observer? Why would you insist it must have one, just because it can do some of the same p-zombie parts of our existence?
 
Yeah, LLM-based AI isn't even attempting to understand anything. At best, its builders want it to be able to mimic what understanding looks like.

Which is why LLM-based AI's biggest strengths are in fields where the way things work is pretty rigidly defined: math, programming, etc. These are fields where most of what it is being fed as training data is more or less objectively correct, so when you hack it up and put it back together using the apparent rules, you get the kind of results you'd get from genuine understanding.
 
The more I think about it, the less I get where this insistence on understanding comes from. Aren't we the same ones who say there's no free will, that the body decides and acts in advance of the conscious "choice"? That humans are essentially the same Chinese rooms as every other animal, rote response to stimulus, but with a superfluous observer?

Isn't the LLM better, for dispensing with the superfluous observer? Why would you insist it must have one, just because it can do some of the same p-zombie parts of our existence?

I don't buy the "Blindsight" theory. Evolution isn't something one "wins", but if we were to make the argument (as Blindsight does), humans have "won" evolution on this planet by most reasonable metrics. The "superfluous" observer is doing pretty well, imaginary non-sentient alien threats aside.
 
Simple math questions were a failure point for older generation LLMs and I think that did show a lack of "understanding". (...) ask one which is larger 9.8 or 9.11, they would come back with 9.11 being larger than 9.8. The reasoning they would give is that 11 is larger than 8. (...) That is a common mistake for kids to make when learning about maths and decimals and some will still not understand it at the age of 16! So again LLMs are closely mimicking human behaviour, even as to what they don't initially "understand" and have to be taught specifically.

Indeed both a human kid and an AI who think 0.11 is greater than 0.5 are exhibiting a lack of understanding. You 'teach' an AI by giving it a lot of examples. But you can't teach it by explaining what decimal places mean, because it is bad at meaning in general, like, as a concept.
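The failure mode is easy to reproduce by hand. Compared as numbers, 9.8 is the larger value; compare only the digits after the decimal point, as if they were whole numbers (the mistake described above), and 9.11 looks larger. A quick sketch in Python:

```python
# Correct comparison: treat the values as numbers.
print(9.8 > 9.11)  # True -- 9.8 is 9.80, and 80 hundredths > 11 hundredths

# The mistaken reasoning: compare the digits after the point as integers.
frac_a = int("9.8".split(".")[1])   # 8
frac_b = int("9.11".split(".")[1])  # 11
print(frac_b > frac_a)  # True -- 11 > 8, which is why "9.11 > 9.8" looks right
```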
 
I don't buy the "Blindsight" theory. Evolution isn't something one "wins", but if we were to make the argument (as Blindsight does), humans have "won" evolution on this planet by most reasonable metrics. The "superfluous" observer is doing pretty well, imaginary non-sentient alien threats aside.

You're forgetting one of Watts' stipulations: Evolution is hostile. It happens when there's environmental pressure on an organism to evolve or go extinct. Once an organism evolves into a comfort zone, it stops evolving. Humans reached an evolutionary sweet spot in our current environment, without ever having to ditch consciousness or die out.

That's Watts, anyway. What we know, even without Watts, is that evolution does not automatically mean an organism has no maladaptive traits. We know that not every trait in a successful organism is a net advantage. Humans are getting by with sickle cell anemia and autism, not because these are beneficial traits, but because our overall fitness for our current environment doesn't put pressure on us to evolve away from them. So why not consciousness?

But I'm not even appealing to Blindsight here. I'm just pointing out that the developers of LLMs haven't included an observer in the architecture, and that, for LLMs, such an observer would be a waste of resources.
 
Yes, for instance by virtue of the fact that humans are capable of holding unpopular opinions and worldviews. They are capable of choosing blue when everyone around them has chosen red; they can decide to go east when everyone else is heading west; they can lose faith while every single person around them remains vocally faithful.

Which humans? Many humans seem to have to go with the flow, be a member of the tribe, follow the doctrine and so on.

All your objections to LLMs seem to be “lacks” that many humans share as well?
 
It's not understanding, because they aren't designed to understand, they are designed to trick. The "understanding" is dependent on the people that receive their output. LLMs have been around for a while, but initially no one considered them to understand, because it was obvious smoke and mirrors. But now the magician's improved his routine, and suddenly they "understand". It's ridiculous. If they didn't understand before, they don't understand now.
Unless you hold that there is literally a supernatural component to human “understanding”, then there is no reason we couldn’t stumble into AIs in which we start to see a glimpse of similar understanding as we improve them, is there?

I think at this point we really need to pin down what we mean by human understanding to make sure we are all on the same page.
 
You're forgetting one of Watts' stipulations: Evolution is hostile. It happens when there's environmental pressure on an organism to evolve or go extinct. Once an organism evolves into a comfort zone, it stops evolving. Humans reached an evolutionary sweet spot in our current environment, without ever having to ditch consciousness or die out.

That's Watts, anyway. What we know, even without Watts, is that evolution does not automatically mean an organism has no maladaptive traits. We know that not every trait in a successful organism is a net advantage. Humans are getting by with sickle cell anemia and autism, not because these are beneficial traits, but because our overall fitness for our current environment doesn't put pressure on us to evolve away from them. So why not consciousness?

Which is related to my first problem with his theory, i.e. that a species does not actually "win" evolution. There is no endgame to evolution; technically there is no "better" or "worse", and even if we set some arbitrary parameters like "stronger" or "smarter" or "reacts without awareness", environmental pressures won't necessarily result in those traits, because those pressures are also arbitrary, and it is also arbitrary whether the emerging environment will favour one or the other. And at the very end, of course, all life forms lose.

Consciousness is as much of an arbitrary trait as all the others, and it might work in some environments, and fail in others, but I think the idea that it will suddenly be a hindrance in interplanetary warfare (or whatever real threats the future might hold) is a bit of a leap.

I think one reason that the observer has been far from superfluous is that it's able to "cheat". And I suppose Watts' proposition is that his aliens can also cheat, just better, but I think this is also a leap.

And some people with maladaptive traits might be thriving because we cheat and transform the environment to suit them, but I don't see this as a problem; our artificial environment isn't any more or less arbitrary than the natural one. Maybe it will break down at some point, but this wouldn't be because we stopped evolving or evolved "wrong" -- we just ran out of luck, like the dinosaurs who never adapted to deflect a gargantuan space rock (ironically, we probably could deflect a gargantuan space rock).

But I'm not even appealing to Blindsight here. I'm just pointing out that the developers of LLMs haven't included an observer in the architecture, and that, for LLMs, such an observer would be a waste of resources.

I don't think that's why. I mean, it might be a good reason at some point, but currently we don't actually know how to do that. Potential uses and/or horrors will have to wait for when it's actually possible. Currently, LLMs are either a toy, a tool to perform drudgery, or a dubious way to bypass copyright laws, which is only a bit useful.

But some people in this thread seem under the impression that LLMs totally change our understanding of consciousness, when they very clearly don't. They only change our understanding of how difficult or easy consciousness is to fake.
 
Unless you hold that there is literally a supernatural component to human “understanding”, then there is no reason we couldn’t stumble into AIs in which we start to see a glimpse of similar understanding as we improve them, is there?

I think at this point we really need to pin down what we mean by human understanding to make sure we are all on the same page.

There is every reason to think we can't "stumble" into such AIs. People are blinded by the fact that it looks similar, but designing something to look like something won't turn it into that thing.
 
It gets positively surreal:

## You
How many 'r's in strawberry?
## Copilot
The word "strawberry" has two 'r's. ��
Is there anything else you'd like to know or discuss?


## You
but it has three r's
## Copilot
I understand it might seem that way, but "strawberry" actually has two 'r's. Here's a breakdown: s-t-r-a-w-b-e-r-r-y.
Is there anything else you'd like to explore or discuss?


## You
Your breakdown has 3 r's
## Copilot
I see where the confusion might be! Let's break it down again together: s-t-r-a-w-b-e-r-r-y. There are indeed two 'r's in the word "strawberry."
Thanks for pointing that out! Is there anything else you'd like to discuss or any other questions you have?

## You
it has 1 r after the t and then 2 r's before the y, that makes 3 r's
## Copilot
I see where the confusion might be coming from. Let's break it down one more time:
- s
- t
- r
- a
- w
- b
- e
- r
- r
- y
There are two 'r's in "strawberry." The first 'r' comes after the 't', and the second 'r' is before the 'y'.
If you have any other questions or need further clarification, feel free to ask!

Mind you, once again I have to point out you can have similar conversations with humans - interacting with an LLM often reminds me of arguing with ancient ex-members here in the "R&P" section - I well remember Interesting Ian and arguing with him about an optical illusion and him insisting white was grey. Definitely a cognitive defect or two!
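For what it's worth, the count itself is trivial to verify mechanically; a couple of lines of Python settle it:

```python
word = "strawberry"
print(word.count("r"))  # 3
# Positions of each 'r': one after the 't', two before the 'y'.
print([i for i, c in enumerate(word) if c == "r"])  # [2, 7, 8]
```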
 
There is every reason to think we can't "stumble" into such AIs. People are blinded by the fact that it looks similar, but designing something to look like something won't turn it into that thing.

Why can't we - you can't just declare it like that as if it is an axiom. (Well you can ;) but you aren't saying why we can't.)

Our "understanding" arose by "stumbling", why can't understanding again happen by stumbling?

Also do you want to share what you mean by "understanding" so we are on the same page?

I suspect we have rather different definitions - to me, one aspect of human understanding is the internal narrative we create to explain aspects of our behaviour; in other words, it's illusory in the sense that "free will" is. We are deterministic, if not predictable, doughnuts of mainly water ambulating along in the environment. I have no more access to where my "ideas", my solutions to problems, come "from" than a slime mould does. I'm pretty certain there is some form of computation, inaccessible to "me", that is carried out by the hardware to come up with responses to inputs from my internal and external environment.
 
I've worked it out. I've worked out why I like the Copilot sidebar in Edge.

If I type a query into Google, using natural language, which it has been quite good at parsing for a long time now, I get a list of links, each of which I can click on to try and find the information I'm looking for.

If I type the same natural language query into Copilot, it generates an easily readable answer in plain English right there in response. Even when it does go to the web to find a response, it writes it up with citations (like Wikipedia) rather than just presenting me with a list of links.

This ability to find information, and present it in a concise and easily readable format, is what appeals to me. It just occasionally contains factual errors.

But it knows how many Rs there are in "strawberry" at least.
 
...snip....

But it knows how many Rs there are in "strawberry" at least.

It's weird - Copilot the app still gets it wrong, but as you say Copilot in Edge gets it right.

I typed in "how many rs in strawberry"

Copilot sidebar in Edge:

The word “strawberry” contains three ‘R’s. There’s one ‘R’ in “straw” and two ‘R’s in “berry”.

Is there anything else you’d like to know about?



Copilot the app:

To determine the number of rotational symmetries (Rs) in a strawberry, we need to consider its shape and structure. A strawberry is not a perfect geometric shape, so it doesn’t have rotational symmetries like regular polygons or circles. In general, natural objects like strawberries do not have rotational symmetries due to their irregular shapes.

However, if you are referring to a simplified or stylized representation of a strawberry, it might have some rotational symmetries depending on the design. For example, if a stylized strawberry is designed with a certain pattern that repeats, it could have rotational symmetries.

If you have a specific context or representation in mind, please provide more details, and I can give a more precise answer.
 
Hlafordlaes said:
Humans make assumptions to answer the question. ChatGPT does not. Even when it states assumptions in its answer, ChatGPT isn't actually making any assumptions. It doesn't know what an assumption is, or what any of the assumptions one might make actually mean. It knows that the word "assumption" is correlated with other words people give as answers to such problems, but that's it. It doesn't go any deeper than that even when it looks like it does.

No.

In an LLM, words are correlated with other words, but nothing else. For example, the word "apple" is not correlated with actual apples in an LLM, because that would require a model of reality, and LLMs have no concept of reality. We know the meaning of the word "apple" not because we know it correlates with the word "pie" but because we know it correlates with actual apples. We have a model of reality in our minds, and meaning comes from the correlation of words to that reality, not merely to other words.

Nothing I said constitutes a breakthrough of any kind, and nothing I said about humans requires any particular insight into neurology or anything else. All of it is obvious to anyone who spends any time at all thinking about their own thoughts. The only thing which may not be obvious to an intelligent reader is a bit of basic knowledge about how LLMs work, since not everyone is familiar with how they work.

If you think I'm wrong, offer something more substantive than snark as a counter-argument.

Credit where credit is due. We rarely agree, but this is all spot-on. Like, totally.


Article up the same alley...

https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html
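As a toy illustration of the word-to-word correlation point (not from the article; the corpus and counts below are invented), this is the kind of information a purely statistical model has about "apple": which words appear near it, and nothing tying the word to any actual fruit.

```python
from collections import Counter

# Tiny invented corpus; a real model sees billions of words, but the
# point stands: the only available information is which words co-occur.
corpus = "she baked an apple pie then ate the apple and the pie".split()

# Count which words appear immediately after "apple".
after_apple = Counter(corpus[i + 1] for i, w in enumerate(corpus[:-1]) if w == "apple")
print(after_apple)  # Counter({'pie': 1, 'and': 1}) -- correlations with words, not with fruit
```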
 
"“Wait, why are these companies blurring the distinction between what is human and what’s a language model? Is this what we want?”"

Are they doing the blurring or is the blurring the reality?

For me the reality is the blurring: the LLMs are not as sentient as a slime mould (at least not yet), but what they are doing is showing us ways that aspects of our sentience can arise from computation. (ETA) Aspects that we've long thought were miraculous abilities unique to humans.
 