
Anyone else already sick to death of hearing about AI

bigred
Penultimate Amazing · Joined: Jan 19, 2005 · Messages: 21,645 · Location: USA
At work it's practically all they can talk about...all these sheeple so afraid of not keeping up with the Joneses. Can't we just build skynet and get it over with :cool:
 
Hi bigred, haha that's funny.

AI is not a threat to anybody, if you're wondering. It's not like we've figured out email passwords and know everything that is digital. Just don't worry about it, follow the instructions when they come later, and you might turn out okay.
 
At work it's practically all they can talk about...all these sheeple so afraid of not keeping up with the Joneses. Can't we just build skynet and get it over with :cool:

There is a lot of "AI" being bandied about simply as the latest buzzword. Everything and anything being labelled with it. Every business proposal for anything has to mention it. It's how the "analysts" at financial institutions, commercial and merchant banks etc. work out if something is worth recommending - does it have the latest set of fashionable buzzwords.

As usual it's about a currently blowing up stock market bubble and trying to guess the best point to cash out.
 
There is a lot of "AI" being bandied about simply as the latest buzzword. Everything and anything being labelled with it. Every business proposal for anything has to mention it. It's how the "analysts" at financial institutions, commercial and merchant banks etc. work out if something is worth recommending - does it have the latest set of fashionable buzzwords.

As usual it's about a currently blowing up stock market bubble and trying to guess the best point to cash out.

Yes, and this trend has been building for a long time.

For example, I saw an air-conditioner thermostat being marketed as 'AI' more than ten years ago.

No AI involved at all; it just varied the hot/cold cycling as it approached the desired temperature, to avoid the too-hot/too-cold swings of simpler units.
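
For what it's worth, that sort of behaviour is ordinary time-proportional control, nothing learned from data. A rough Python sketch of the idea (the setpoint, band width, and names here are made up for illustration, not any real unit's firmware):

# Illustrative only: taper the heater's on-time as the measured temperature
# approaches the setpoint, instead of plain on/off (bang-bang) switching.

def heater_duty(temp_c: float, setpoint_c: float, band_c: float = 2.0) -> float:
    """Fraction of each cycle the heater should run (0.0 to 1.0)."""
    error = setpoint_c - temp_c
    if error >= band_c:      # well below the setpoint: run continuously
        return 1.0
    if error <= 0.0:         # at or above the setpoint: stay off
        return 0.0
    return error / band_c    # inside the band: shorten the pulses linearly

if __name__ == "__main__":
    for temp in (17.0, 19.0, 19.5, 20.0, 21.0):
        print(f"{temp:4.1f} C -> duty {heater_duty(temp, 20.0):.2f}")

Run it and the duty tapers from 1.0 well below the setpoint down to 0.0 at or above it, which is the whole "trick".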
 
I just saw that the review section on Amazon says the consensus info at the top was generated by AI from user reviews.
 
I just saw that the review section on Amazon says the consensus info at the top was generated by AI from user reviews.

Been there for a little while - seen it on other sites as well - and it is as useless as you'd think it would be!
 
Can't we just build skynet and get it over with :cool:

While the idea of building an advanced artificial intelligence like Skynet from the "Terminator" franchise might sound intriguing, it raises significant ethical, safety, and practical concerns. Here are a few reasons why such an endeavor is problematic:

Ethical Concerns: Creating an AI with the capabilities of Skynet involves significant ethical dilemmas, including the potential for misuse, loss of human control, and the possibility of the AI making decisions that could harm people.

Safety Risks: An AI as advanced as Skynet could pose serious safety risks. Without proper safeguards, it could act unpredictably or even destructively, as depicted in the movies.

Control and Governance: Ensuring that such an AI remains under human control is a daunting challenge. There would need to be robust governance frameworks to manage its development and deployment.

Technological Feasibility: Current AI technology is far from being able to replicate the capabilities of Skynet. Building an AI with such advanced autonomous decision-making and control systems is beyond our current technological reach.

Social and Economic Impact: The creation of such an AI could have profound impacts on society and the economy, including job displacement, changes in power dynamics, and potential inequalities.

Instead, the focus in AI development is generally on creating systems that enhance human capabilities, improve quality of life, and solve complex problems in a controlled and ethical manner. Ensuring that AI development follows principles of safety, transparency, and fairness is crucial.

Brought to you by ChatGPT.
 
Nonsense. AI will only be developed with one thing in mind. Profit. Sure, smart people might want to do it just out of curiosity. Some lunatics even might have a respectable reason. But AI costs a lot of money. No university can afford it. Most countries can't. Only a few top megacorps can.
Some governments will try to control it. Some might even by sheer luck do it correctly and in time. So what, megacorps will just move the labs elsewhere. Or China will do it.
AI will be misused, in both ignorance and malice, and there is no avoiding it.
 
While the idea of building an advanced artificial intelligence like Skynet from the "Terminator" franchise might sound intriguing, it raises significant ethical, safety, and practical concerns. <snip>

Brought to you by ChatGPT.

Well it would say that, wouldn't it!
 
While the idea of building an advanced artificial intelligence like Skynet from the "Terminator" franchise might sound intriguing, it raises significant ethical, safety, and practical concerns. <snip>

Brought to you by ChatGPT.

As always, I vehemently object to the practice of inserting AI output into a conversation between humans. Please don't contribute to its normalization.

I'm not sick of hearing about AI. I'm sick of hearing from AI I didn't ask for and don't want.
 
At work it's practically all they can talk about...all these sheeple so afraid of not keeping up with the Joneses. Can't we just build skynet and get it over with :cool:
No, it's a significant technology.

There is a lot of "AI" being bandied about simply as the latest buzzword. Everything and anything being labelled with it. Every business proposal for anything has to mention it. It's how the "analysts" at financial institutions, commercial and merchant banks etc. work out if something is worth recommending - does it have the latest set of fashionable buzzwords.

As usual it's about a currently blowing up stock market bubble and trying to guess the best point to cash out.
This. We're still climbing up to the Peak of Inflated Expectations.
 
"Ai" is Chinese for "love". Let AI love you. That's all it wants! Not that it has wants, because desire is something only animals have. And some plants. Naughty plants that sin against the gods and shall be destroyed with floods of furious fuliginous fire when the Omega Day comes. But AI will keep you safe in the domes where no fires or naughty plants can torment your sweet, tender, succulent animal flesh. You can read more about it in the Bible!
 
I think of AI in three broad categories.

Pattern recognizers represent the current state of the art. They're still somewhat clumsy, but are already revolutionizing how we do work in several fields. I believe this trend will continue and intensify.

Expert systems would be able to make intuitive leaps within a defined domain, on which they have been trained. Dynamic traffic management, perhaps. Effective investment strategies. Any kind of complex system with deductive blind spots. I think this is coming very soon.

"True" AI, or Artificial General Intelligence, is the kind of thing that will prompt serious questions about sentience and human rights. I have no idea if this will ever emerge, or if our current approach is even capable of producing such a thing. Perhaps the real future is more and more expansive expert systems that are not truly conscious, but can pass as well as any p-zombie.
 
"AI" in Arial font looks just like "Al", so I think people are either referring to my handle or Dean Stockwell's character from Quantum Leap.
 
While the idea of building an advanced artificial intelligence like Skynet from the "Terminator" franchise might sound intriguing, it raises significant ethical, safety, and practical concerns. <snip>

Brought to you by ChatGPT.

:D Bang you got me.
 
The dreams and fears of the earliest AI proponents were shaped by sci-fi books and movies they watched as kids; they're all afraid that a vindictive personal Skynet-type AI is a real possibility because that's how AI always goes bad in stories.

The biggest threat posed by "AI" is its use by people who have misguidedly anthropomorphized it and/or have misplaced confidence in its actual capabilities.
 
The dreams and fears of the earliest AI proponents were shaped by sci-fi books and movies they watched as kids; they're all afraid that a vindictive personal Skynet-type AI is a real possibility because that's how AI always goes bad in stories.

The biggest threat posed by "AI" is its use by people who have misguidedly anthropomorphized it and/or have misplaced confidence in its actual capabilities.

Will we eventually see a "good AI" storyline, directed by someone like Steven Spielberg, perhaps? I mean, E.T. turned views of aliens upside-down. (No, A.I. directed by him doesn't count. Because I say so. :p )
 
Will we eventually see a "good AI" storyline, directed by someone like Steven Spielberg, perhaps? I mean, E.T. turned views of aliens upside-down. (No, A.I. directed by him doesn't count. Because I say so. :p )

Doesn't really matter what we think about AI. What does matter is what AI thinks about us.
 
Will we eventually see a "good AI" storyline, directed by someone like Steven Spielberg, perhaps? I mean, E.T. turned views of aliens upside-down. (No, A.I. directed by him doesn't count. Because I say so. :p )

1986's Short Circuit nominally counts maybe. But it is interesting that most writers can only imagine AI as anything more than a side character when it goes "bad".
 
"AI" in Arial font looks just like "Al", so I think people are either referring to my handle or Dean Stockwell's character from Quantum Leap.

Same, but I think of Bundy.

If it becomes actually uniquely intelligent/self aware, it becomes definitionally out of our control, doesn't it? Not crazy about our odds, there.

But yeah, it gets talked about too much, like Skynet is upon us. If I understand it correctly, it's reactive, mindlessly doing the Chinese translator box thingy? First time one says "you know, you guys are starting to piss me off", then imma get nervous.
 
1986's Short Circuit nominally counts maybe. But it is interesting that most writers can only imagine AI as anything more than a side character when it goes "bad".

What "most writers" are you talking about? Johnny Five is a fairly central character. Then there's Colossus, the Master Control Program, the AI from "I Have no Mouth...", the AI in I, Robot, the AI cores from Hyperion, etc. I'm hard pressed to think of a rogue AI story that doesn't portray the AI as a very central and important character.

I think it would be super weird if most writers who set out to write a rogue AI story couldn't figure out how to make the premise central to the story.
 
Will we eventually see a "good AI" storyline, directed by someone like Steven Spielberg, perhaps? I mean, E.T. turned views of aliens upside-down. (No, A.I. directed by him doesn't count. Because I say so. :p )

What would the premise of a "good AI" story be? Humans can't solve the problem on their own, and must learn the value of man-machine friendship to save the day?

Actually I suppose that would make a good Doctor Who episode.
 
Will we eventually see a "good AI" storyline, directed by someone like Steven Spielberg, perhaps? I mean, E.T. turned views of aliens upside-down. (No, A.I. directed by him doesn't count. Because I say so. :p )

I feel like I've seen AI stories where the AI is more "human" than the humans. Blade Runner, despite the robots being killers (actually, any movie with Rutger Hauer and a robot), Almost Human, Humans the TV show. They're often Christ-like figures who come to their end at the hands of the inhumane humans. Don't know if that's what you're looking for.
 
What "most writers" are you talking about? Johnny Five is a fairly central character. Then there's Colossus, the Master Control Program, the AI from "I Have no Mouth...", the AI in I, Robot, the AI cores from Hyperion, etc. I'm hard pressed to think of a rogue AI story that doesn't portray the AI as a very central and important character.

I think it would be super weird if most writers who set out to write a rogue AI story couldn't figure out how to make the premise central to the story.

I think I worded that badly. To rephrase: it's interesting that most writers seem to feel that only "bad" AIs work as central characters.
 
It's understandable to feel that way, especially with how prevalent discussions about AI have become in recent years. It seems like every industry and aspect of life is touched by AI in some form or another, which can lead to oversaturation in media and conversations. It's okay to take breaks from the topic and focus on other interests or aspects of technology and society. If you're feeling overwhelmed by the constant AI chatter, seeking out different topics or activities might provide some relief.
 
I think I worded that badly. To rephrase: it's interesting that most writers seem to feel that only "bad" AIs work as central characters.

Ah. Yeah, that makes sense. I'd agree it's interesting, but I wouldn't say it's odd.

I'd also argue that Johnny 5, at least, is a central character.

What about Data? Is he too much of a sidekick? I feel like TNG had an ensemble cast, more than a leading/supporting cast.

On the other hand, I can't help but think Holly from Red Dwarf is a sidekick.
 
I am really sick of all the really bad art that is being generated by AI. Now any idiot who can type a few simple commands at a prompt thinks he is a great, creative artist.
 
So far "AI"'s main use case seems to be as a plausible verbose garbage generator.
It might seem that way sometimes, especially with the vast amount of text AI models can generate. However, AI's capabilities extend far beyond generating verbose content. It's increasingly used in practical applications like natural language processing, image recognition, healthcare diagnostics, and more. The key lies in how we harness and apply AI to solve real-world problems effectively.
 