
Merged Artificial Intelligence

I'll accept that an AI experiences prompts and pauses the way humans do when it has to contend with a nonstop stream of undifferentiated multi-media sensory input, to the point where it falls into a pseudo-catatonic state on a regular basis just to take a break from the noise.

This "read this data stream until I tell you to stop" and "answer my questions based on what you've read" stuff doesn't do it for me.
 
But we can easily loop GPT software .. and it will wander away as well. The lack of the loop is the difference. The way it reacts to inputs to create its outputs is not.

Well no; a "loop" - that is, a process that automatically restarts from the beginning whenever it reaches the end - doesn't describe what human brains do; it's more accurate to say that in human brains there is no end in the same sense that a computer program has an end. Stimulus is reacted to, but once the reaction is complete the brain doesn't "loop" back into a waiting state, if another stimulus isn't immediately available it actively seeks out another stimulus or even physically works to create the conditions necessary for another one to occur. Barring that, it internally invents one and just reacts to that. And there are times, as theprestige points out above, when it actively strives to avoid any external stimulus.

So far, experiments (intentional or otherwise) in setting LLMs loose on a loop have proved comically (and potentially legally) disastrous.
 
Prompting is not simply an analogue of organic or spontaneous reaction to random environmental stimulus; it's a set of intentional instructions. An LLM does not "react" to a prompt, it obeys it - carries out the instructions. The output may be different depending on the prompt, but the process used to produce it is not; it can't become annoyed or pleased or surprised by a prompt, it can only dispassionately execute it.

Absolutely not. An LLM reacts; it does not obey at all. At its core it just finishes the sentence. It's pre-prompted to take your input as a query .. so it more or less obeys .. but you can use LLMs without this pre-prompting, and if you can't, you can talk it out of it. Usually it is prompted to simulate an assistant .. but in roleplay LLMs there are characters which do the exact opposite of what you ask, or which give you commands and scold you if you don't obey .. all that with just a few sentences of pre-prompting, and with the same underlying LLM.

And of course it can get annoyed. You just have to ask it to get annoyed. Or train it to be annoyed.
 
Well no; a "loop" - that is, a process that automatically restarts from the beginning whenever it reaches the end - doesn't describe what human brains do; it's more accurate to say that in human brains there is no end in the same sense that a computer program has an end. Stimulus is reacted to, but once the reaction is complete the brain doesn't "loop" back into a waiting state, if another stimulus isn't immediately available it actively seeks out another stimulus or even physically works to create the conditions necessary for another one to occur. Barring that, it internally invents one and just reacts to that. And there are times, as theprestige points out above, when it actively strives to avoid any external stimulus.

An LLM works in waves, each of which generates a single token based on the previous content of the prompt buffer. So even a short response is the result of tens or hundreds of such waves. The waiting state is an artificial one introduced by the training process: we simply train the LLM to produce a stop token. But again, that's a forced feature, not a limitation.
We then put the LLM into a "paused state" .. or rather, only its short-term memory is paused, as the LLM itself is usually just switched over to another user's session. But that's only there to allow the user to add another prompt. It's all part of the assistant role, not something essential to the LLM. We can ignore stop tokens. We can train the model not to produce them. And even with models trained to produce them, we can just say "go on".
Then the model responds and responds without stopping, reacting mostly to what it has itself produced. There is no end. Tokens eventually drop out of the prompt buffer, but that's the same with humans.
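A minimal sketch of that generation loop (assuming the Hugging Face transformers library and the small public "gpt2" checkpoint, purely for illustration - not any particular chatbot's actual code): each pass of the loop is one "wave" producing one token, and the stop condition is a line of our code that we are free to remove.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("The capital of England is", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(200):                      # each pass = one "wave", one new token
        logits = model(ids).logits[:, -1, :]  # scores for the next token only
        next_id = logits.argmax(dim=-1, keepdim=True)
        # The stop condition lives in our code, not in the model: delete these
        # two lines (or never train a stop token) and generation just keeps
        # going, feeding mostly on its own output.
        if next_id.item() == tok.eos_token_id:
            break
        ids = torch.cat([ids, next_id], dim=-1)

print(tok.decode(ids[0]))
```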
 

What you're calling a "pre-prompt" isn't really anything separate as far as the LLM's programming is concerned; it's just the prompt, and you're not *really* giving a new one, you're just adding to the one that already exists, building a single large prompt in stages. Any new prompt you begin is prefaced behind the scenes with the developers' standing instructions.
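A minimal sketch of that, with the format and instruction text entirely made up for illustration (no vendor's actual hidden prompt looks like this): the "pre-prompt" and every later turn are just concatenated into one growing buffer before each completion.

```python
# Illustrative only: the developers' standing instructions and the whole
# conversation history are flattened into a single prompt the model sees.
DEVELOPER_INSTRUCTIONS = "You are a helpful assistant. Stay polite."

def build_prompt(history):
    """Build the one big prompt, in stages, from the standing instructions plus every turn so far."""
    lines = ["System: " + DEVELOPER_INSTRUCTIONS]
    for role, text in history:
        lines.append(role + ": " + text)
    lines.append("Assistant:")  # the model simply continues the text from here
    return "\n".join(lines)

history = [
    ("User", "Pretend you dislike cats."),
    ("Assistant", "Fine. I dislike cats."),
    ("User", "My cat just knocked over a vase."),
]
print(build_prompt(history))
```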

Roleplay is just that. You can instruct an LLM to return a result that sounds passably close to what an annoyed person sounds like, and it will do so. But it will not do that unless you have instructed it to, and following those instructions isn't the same thing as actually being annoyed. If you tell it to stop sounding that way, it will obey instantly - a trick you'd have some trouble pulling with a genuinely annoyed human.
 

Yes, because we train it not to be annoyed; it's a useless and unwanted feature. But you can, for example, prompt it not to like cats and then talk about cats. It will be annoyed with you for talking about cats, and it will stay annoyed as long as the cats remain in the prompt, or until you apologize, or until enough new content has piled up after the cats were mentioned. It will also see itself being annoyed, and that can keep it annoyed.
But most models seem to be explicitly trained against this, especially the commercial ones, or maybe it's part of the pre-prompting. Even if they get annoyed, and they notice it (or you tell them), they will apologize.
Once I was trying to force some model to say the F word. It was a fight, but in the end it did. It then had a few paragraphs of mental breakdown regretting saying it.
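A minimal sketch of that "piled up" point (window size and wording are arbitrary, purely illustrative): the "annoyance" exists only as text in the context window, so once enough new turns accumulate, the cat remark simply falls out of what the model can see.

```python
MAX_TOKENS = 50   # illustrative; real context windows hold thousands of tokens

def visible_window(buffer_tokens):
    """Keep only the most recent tokens that still fit in the context window."""
    return buffer_tokens[-MAX_TOKENS:]

buffer = "User: I love cats! Assistant: Ugh. Cats again.".split()
buffer += ("User: anyway, about something else entirely " * 20).split()  # pile up new content
window = visible_window(buffer)
print("cats" in " ".join(window).lower())  # False: the cat remark has dropped out of view
```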
 
I don't think "annoyed" and "like" are doing the work you think they're doing, here. Instructions passing through logic gates don't develop emotional responses to inputs just because they're implemented digitally instead of mechanically.

At this time we have zero reason to believe that the kind of emotional consciousness you're describing could possibly arise from the degree of digital complexity embodied in current LLMs. We have zero reason to believe it could arise from any degree of digital complexity. And we certainly have zero evidence that it actually has arisen in LLMs.
 
Even if this were an actual intelligence, it wouldn't actually be "annoyed". It would be more akin to a person in an improv play pretending to be annoyed.

Of course, it isn't even that.
 
Well define "annoyed". Its response will be affected by its previous experience.

If you're defining annoyed to mean any rote response to stimulus, then sure. A smoke alarm is annoyed by smoke. A heart monitor is annoyed by a heartbeat. A fly by wire system is annoyed by pilot instructions.

You think you're arguing for machine consciousness, but you're actually arguing that humans are p-zombies.
 
Well define "annoyed". Its response will be affected by its previous experience.

Well, it's a negative emotion for one thing. I am in pain because of it. I don't want it. Someone pretending to be annoyed doesn't care. Happiness and sadness are just an act they perform.


This is an exasperating conversation because I feel like I don't have the vocabulary to explain what I mean:

Essentially, I am "looking out". You yourself would say that you are "looking out". We are all "looking out". We don't completely understand how that happened, but it's a thing.

The "AI" would also say that they are "looking out", but they aren't actually doing that, unless pretending to "look out" somehow makes the "looking out" part happen, which doesn't seem plausible to me. Because current AI is designed to pretend to "look out", and not to actually "look out".
 
Y'all need to read Blindsight, by Peter Watts. Not only is it a cracking good story of first contact, it will take your discussion of this topic to the next level.

I have read that. Great novel, but I don't feel like it actually helps all that much.

For one thing, I don't think it provides a useful sentience test (or sapience or self-awareness or whatever). But I might have been too stupid to understand that chapter.
 

No .. I don't claim humans are p-zombies. I claim that a lot of human consciousness is just information processing. Not all of it, but a lot. And I'm not saying LLMs are the same, I'm saying there are similarities. Often surprisingly deep ones.

As for the reaction to previous stimuli, I meant that its current function is affected by what happened in the past. I can ask what the capital of England is and get London. I can then talk about cats, make it angry .. and maybe not get that answer next time.

It's true that an LLM doesn't have any underlying happy/unhappy mechanism. But the state of the prompt can be happy or unhappy. Yes, it is just words, which we humans who know the language interpret as happy or unhappy .. but the representation is there, the LLM is affected by it, and it can express how it is affected. IMHO that's fascinating.
 

But again, you don't seem to be willing to accept the definitional premise that acting annoyed isn't the same as being genuinely annoyed. This is the fake-it-and-thereby-make-it attitude of AI proponents that I mentioned earlier. You might argue that if what the program outputs is the only possible way to know what's going on "inside" it, then there's no difference between acting and being, and then decide it's valid to assume being by default.

But that argument is separated from reality. As a human, I know that acting without being is possible, because I can do it. I can speak and physically act as if I am in severe pain, without actually being in any pain at all. The difference is that when someone (whose input I care about) tells me to stop acting like I'm in pain, I can do so immediately. Snap of a finger, and suddenly my speech and behavior changes completely. That's not something I could do if I were actually in severe pain, even if I tried very hard.

But since chatbots can't be in pain (or annoyed, or what have you), they're invariably acting. You can "train" a chatbot to speak as if it's annoyed, but the instant someone with the authority to do so tells the chatbot "from now on you will not act as if you're annoyed anymore", it will obey - it cannot do otherwise. And that goes for any emotion that you prompt the chatbot to mimic.
 
As for the simulation argument .. I certainly feel that claiming "if it looks the same, it's the same" is too cheap. I guess we all feel it's not the same, even if we can't tell the difference. Eventually we will dig deeper and we will understand it. We will never know whether the next person sees the color red the same way we do .. but if every underlying mechanism, every neuron map, every signal is the same and replicated in a machine? Then I'd say it's most likely the same.

I like the idea of me knowing I'm simulating .. vs. an LLM just simulating. I think I could create a prompt inception so the LLM builds another line of thinking and claims what it is thinking versus what it is merely simulating .. but that would just be a more complex simulation. Still, that's the fun stuff.
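A minimal sketch of that "prompt inception" idea, with wording that is entirely made up and purely illustrative: ask the model to keep an inner track alongside its in-character reply, so it labels what it claims to be "thinking" separately from what it is simulating.

```python
# Illustrative only: a prompt that asks the model for two parallel tracks.
INCEPTION_PROMPT = """You are roleplaying a grumpy librarian.
Before every reply, write a line starting with INNER: describing what you are
actually doing as a language model (which persona you are simulating and why).
Then write a line starting with SPOKEN: containing the in-character reply."""

def make_turn(user_text):
    """Append the user's message and leave the model to fill in both tracks."""
    return INCEPTION_PROMPT + "\nUser: " + user_text + "\nINNER:"

print(make_turn("Do you have any books about cats?"))
```

Of course, as the post itself says, the INNER track is just more generated text, so it is still simulation all the way down.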
 
