
Worried about Artificial Intelligence?

Under the hood there is a lot in common between a self-driving car and an LLM. The math behind the learning is the same. But the structure of the networks, and how they are connected to the rest of the system, is different enough that you can indeed say there is little overlap.
Certainly cars won't have enough understanding of the world to solve the trolley problem. ChatGPT can understand it, and you can talk with it about it .. but it has no way to put any conclusion into action.
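To illustrate the point about the shared math: here's a minimal, hypothetical sketch of the gradient-descent update that underlies training in both cases. The toy one-weight model is invented purely for illustration and stands in for a network of any size or purpose.

```python
def sgd_step(params, grads, lr=0.01):
    """One stochastic-gradient-descent update: p <- p - lr * dL/dp."""
    return [p - lr * g for p, g in zip(params, grads)]

# Toy example: fit y = 2x with a single weight and squared-error loss.
w = [0.0]
for _ in range(100):
    x, y = 3.0, 6.0
    pred = w[0] * x
    grads = [2.0 * (pred - y) * x]   # d/dw of (pred - y)^2
    w = sgd_step(w, grads)
print(round(w[0], 3))                # converges towards 2.0
```

Whether the network is steering a car or predicting the next token, this same update rule is doing the learning; what differs is everything wrapped around it.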

Btw. a funny discussion of the trolley problem with an AI: https://www.youtube.com/watch?v=6j2K8bdRUYo
Sadly I don't know what software she runs on, if any .. she seems to be less capable than ChatGPT, but with more character.
 
I don't think there's any danger of that at all, actually. If you listen to 80,000 hours or go on LessWrong you'll see a lot of discussion of issues related to the alignment problem, but basically never see Roko's Basilisk come up.
Not every argument about philosophy devolves into solipsism, either. But if it does not, it's because people are deliberately not taking it in that direction.
 
The precocious idea of Roko's Basilisk is that people would care what happened to their future selves.
If they did, they would live more healthily and put aside more money.
 
Why? There doesn't seem to be any purpose to this exercise. If everyone in the world agreed it was reasonable, it would still be unreasonable.

...snip...

Do you mean illogical? Using the word "reasonable" in common parlance is a subjective evaluation made by humans about something. Therefore, learning what humans would say is reasonable or not is very much an important process if you want to say whether an answer given by an AI is reasonable or not.
 
Indeed. AIs only need to drive better than people, which IMHO we are pretty near to. Especially if you consider the current boom of AI. Obviously, cars are not LLMs .. but all the hardware and infrastructure designed and made for LLMs will also be useful for other AI applications.

I think it will have to be significantly better than people to be accepted but I agree with your general case.

A concern I have is that many companies - such as those using it for customer facing support - appear to be implementing AI without careful evaluation as to whether the AI system they are implementing is better than humans at providing a service rather than merely being cheaper than having humans provide the service.
 
Maybe - but, I don't know, I kind of think that presumes AI is going to be thinking through its trolley problems like a human would only faster, and I'm not sure that's true at all.

I think the confusion is partly due to the way we frame the issue - we talk about AIs having "alignment", and teaching them things like values and ethical principles and then, I guess, just letting them mull those over and come to conclusions? But I don't see that as likely or even prudent; I see the more realistic course of action being just a line of code telling the AI that if there's a human in the way it should stop, end of story.

In other words, I don't see us trying to "teach AI ethics" and allowing it to make ethical decisions, I see humans making all of the ethical decisions long ahead of time and then just giving the AI a set of standing instructions.
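As a hedged illustration of what such a standing instruction might look like - all names here (the planner action strings, the Perception fields, the override) are invented for the sketch, not any real vehicle's API:

```python
from dataclasses import dataclass

@dataclass
class Perception:
    human_in_path: bool          # assumed output of the perception stack
    distance_to_human_m: float

def choose_action(planner_action: str, perception: Perception) -> str:
    """Standing instruction decided by humans ahead of time:
    if a person is in the vehicle's path, stop. No ethical deliberation."""
    if perception.human_in_path:
        return "EMERGENCY_BRAKE"
    return planner_action

# The learned planner proposes; the fixed rule disposes.
print(choose_action("CONTINUE_AT_30_KPH", Perception(True, 12.0)))   # EMERGENCY_BRAKE
print(choose_action("CONTINUE_AT_30_KPH", Perception(False, 0.0)))   # CONTINUE_AT_30_KPH
```

The ethical decision was made by whoever wrote and signed off on the rule, long before the car ever saw a pedestrian.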

I agree setting it as an ethical problem is not how one would or how engineers are currently tackling such a problem. And I'd like proof that humans evaluate these types of scenarios as ethical problems in real time before reacting.

As you say the AI will have been taught to try and avoid crashes by braking or steering or a combination of both, and that is what it will try and do. It will not of course be able to avoid all crashes, but how it can avoid them will be based on whether it can steer out of it or brake to avoid or both.

In a busy street full of pedestrians and vehicles, the car in front brakes. The AI detects this and works out that braking alone is not going to avoid the crash. It can then evaluate (probably doing all this in its black box, as it has long gone past simple "if then else" decision trees) its other option: steer around the car in front whilst braking. Its sensors detect oncoming cars in the other lane, so it can't steer into that lane; its sensors detect pedestrians on the pavement, so it can't steer onto the pavement; it crashes into the back of the car in front.

In a busy street full of pedestrians and vehicles, a pedestrian walks into the road. The AI detects this and works out that braking alone is not going to avoid the crash. It can then evaluate (probably doing all this in its black box, as it has long gone past simple "if then else" decision trees) its other option: steer around the pedestrian whilst braking. Its sensors detect oncoming cars in the other lane, so it can't steer into that lane; its sensors detect pedestrians on the pavement, so it can't steer onto the pavement; it crashes into the pedestrian.
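Roughly, the option evaluation in both scenarios comes down to something like this - a toy sketch with invented names; a real planner is far more complex and largely learned:

```python
def plan(can_stop_in_time: bool, oncoming_lane_clear: bool, pavement_clear: bool) -> str:
    """Pick the first escape option that is actually available."""
    if can_stop_in_time:
        return "brake"
    if oncoming_lane_clear:
        return "brake and steer into the other lane"
    if pavement_clear:
        return "brake and steer onto the pavement"
    return "brake; collision unavoidable"

# In both scenarios above neither escape route is clear, so the car just brakes.
print(plan(can_stop_in_time=False, oncoming_lane_clear=False, pavement_clear=False))
```

No weighing of lives, no trolley problem - just a search for a clear path, and braking when there isn't one.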
 
That just means it doesn't understand what a prime is. Ask 100 people and tell me how many get this one right.
Also, this is how Bard (at the moment one of the worst AIs on the market) responds:

But the thing is, most people, if given access to the level of information LLM chatbots have, would be able to figure out the correct answer. AI lacks the intelligence to carry out even pretty basic intuition.
 
But the thing is, most people, if given access to the level of information LLM chatbots have, would be able to figure out the correct answer. AI lacks the intelligence to carry out even pretty basic intuition.

I'd like to see evidence for that assertion.

People are - on the whole - much dumber than we'd like to think we are.
 
I'd like to see evidence for that assertion.

People are - on the whole - much dumber than we'd like to think we are.

Sure, but if I want to know something, I'm not just going to ask a random person on the street - I'll ask someone I think should know the answer.

And all these programs are hyped as being very competent, to the degree that they can pass SATs and Bar exams.
 
Sure, but if I want to know something, I'm not just going to ask a random person on the street - I'll ask someone I think should know the answer.

And all these programs are hyped as being very competent, to the degree that they can pass SATs and Bar exams.

That's the important word in that sentence - the hype, as usual, does not reflect the reality, nor, to be fair, the claims actually made by the companies behind these LLM AIs. The hype is often from "journalists" and others who should know better.
 
Humans: Excellent at internalizing value systems, moral reasoning, and intuitive leaps. Terrible at rote retention of data and methods.

AIs: Excellent at rote retention of data and methods, terrible at value systems, moral reasoning, and intuitive leaps.

A lawyer needs both, and law school is hard because it focuses on the part that's hard for humans to get.

AI shills make AIs look smart by focusing on how easily they do what's hard for humans, and not mentioning the part that AIs can't really do at all.

---

ETA: That said, I bet that pretty soon now we're going to see AIs coupled with facial recognition software being used to advise lawyers during jury selection. Not because they can make intuitive leaps about a potential juror's personality, but because they can more quickly brute force their way to a statistical conclusion from a corpus of microexpressions and correlated demographic data.
 
The AI detects this and works out that braking alone is not going to avoid the crash. It can then evaluate (probably doing all this in its black box, as it has long gone past simple "if then else" decision trees)

Hmm, I don't think it has to. I think it can definitely be a single logical argument; e.g. if the distance to the obstruction is less than the stopping distance (a function of speed and achievable braking deceleration) and the path to the side is clear of obstructions, then swerve.
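Something like this hypothetical rule, say - the 0.2 s system latency and 7 m/s² deceleration are assumed example values, not figures from any real vehicle:

```python
def stopping_distance_m(speed_mps: float, decel_mps2: float = 7.0,
                        latency_s: float = 0.2) -> float:
    """Distance covered during system latency plus braking: v*t + v^2 / (2a)."""
    return speed_mps * latency_s + speed_mps ** 2 / (2.0 * decel_mps2)

def decide(distance_to_obstruction_m: float, speed_mps: float, side_path_clear: bool) -> str:
    if distance_to_obstruction_m > stopping_distance_m(speed_mps):
        return "brake"
    return "swerve and brake" if side_path_clear else "brake (impact likely)"

# At 14 m/s (~50 km/h) the stopping distance is ~16.8 m, so 12 m is too close to stop in.
print(decide(distance_to_obstruction_m=12.0, speed_mps=14.0, side_path_clear=True))  # swerve and brake
```

Still a fixed rule written by humans in advance, not an ethical judgement made by the car.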
 
Do you mean illogical? Using the word "reasonable" in common parlance is a subjective evaluation made by humans about something. Therefore, learning what humans would say is reasonable or not is very much an important process if you want to say whether an answer given by an AI is reasonable or not.
There is surely a subjective sense, but I meant it more or less literally - that X does not constitute a reason to believe Y. There is, at least sometimes, a truth of the matter there.
 
I was playing with the open source text-to-speech XTTS2 model .. you need 10 seconds of a voice sample .. and it clones it basically perfectly. You can have it installed in a few minutes. Everybody can.
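For anyone curious, the workflow is roughly this. A sketch using the open-source Coqui TTS package (`pip install TTS`); the model id and argument names follow the project's published XTTS v2 examples, so double-check against the current docs, and `my_voice_sample.wav` stands in for your ~10-second reference clip:

```python
from TTS.api import TTS

# Download and load the XTTS v2 model (multilingual voice cloning).
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# Clone the voice from a short reference recording and speak new text with it.
tts.tts_to_file(
    text="Any sentence you like, spoken in the cloned voice.",
    speaker_wav="my_voice_sample.wav",
    language="en",
    file_path="cloned_output.wav",
)
```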
 
Humans sometimes act collectively in one organization whose goals compete with those of other organizations, including organizations of which it is a subset. That's not new; it's literally older than civilization.

True. But interestingly enough, sometimes the lowest level customer-facing elements of these groups manage to completely ignore such goals and take the side of the consumer.

Doesn't happen when you're dealing with a true AI at those levels.

But yeah, there's nothing worse than rich people when there's a competition between honest dealing and greed. Greed wins for them every time. That's usually how they got rich in the first place. In a way, those lower-level workers (and even low-level managers) are sometimes there to mitigate the damage that can do (as well as to take all the pushback and convert it into something that management can comprehend).

So yeah, I don't think corporations are particularly "AI" until you get mechanization at the customer-facing level. But at that point, it sort of is. And yes, that includes automated answering systems when you try to call them on the phone or get assistance online, particularly when there's frequently no clear option for what you're actually trying to contact them about (sometimes even intentionally).

A corporation that thinks that it doesn't actually need feedback from its consumers (and users/viewers/audience, when those aren't the same people) is doomed to fail eventually, anyway. It may not be immediate, but they'll eventually lose touch enough to be the cause of their own demise. Facebook and "X" are actually good examples... they may not be dead yet, but they've already got some serious symptoms of the disease that will eventually kill them. Even YouTube is starting to make some pretty dumb decisions, but it will likely last quite a bit longer than the other two mentioned, mainly because of the "content creator" class wedged between it and the general public.

A truly "optimized" system of moneymaking is always doomed to fail on its own greed. It's just a question of when. Such things can last for quite some time, but they'll eventually destroy themselves as their internal culture gradually loses touch with the parts of external reality that can't be properly expressed in an equation or be accounted for on those silly graphs they enjoy so much. What those goofballs at the top sometimes don't realize is that the people they look down on the most (and often consider firing) in the lower levels often protect them from that somewhat, and that local conditions and cultures sometimes clash against company-wide policy.

Of course, sometimes the only thing keeping them afloat is lack of competition (or lack of competition that doesn't do the exact same problematic things). They're actually quite good at managing that part. But that just means the entire industry/business model goes down with them in some cases. They like to blame that on "trends" but it's often actually their own mismanagement and inflexibility to blame. They've often learned to rely too heavily on the very thing that's causing the problem, too.
 