• Quick note - the problem with YouTube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems, let me know.

ChatGPT

Okay, so an LLM would have to be perfect and incapable of any error in order to make up for our ignorance of how they work? That's a pretty unreasonable standard, to say the least.

The standard proposed by Dr Martell, for the DOD to make use of LLMs and other "AIs", seems much more reasonable: For a given use case, define a measurable level of reliability, and measure whether the AI in question meets that level of reliability. If it does, the DOD will consider using it for that particular use case. If not, it won't.
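Dr Martell's standard is easy to state concretely. Here is a minimal sketch of the idea (the function name, the accuracy metric, and the 95% threshold are my own illustrative choices, not anything Martell specified): for a given use case, agree on a reliability bar in advance, then give the AI a pass/fail answer against it.

```python
# Hypothetical sketch of "define a measurable level of reliability,
# then measure whether the AI meets it." All names and the default
# threshold are illustrative assumptions.
def meets_reliability(outputs, references, threshold=0.95):
    """Return True if the fraction of correct outputs meets the threshold."""
    correct = sum(o == r for o, r in zip(outputs, references))
    return correct / len(references) >= threshold

# Example: 19 of 20 test cases correct -> 0.95, which passes a 95% bar
outputs = ["ok"] * 19 + ["bad"]
references = ["ok"] * 20
print(meets_reliability(outputs, references))  # True
```

The point of the exercise is that the decision is per use case: the same model could pass for one task and fail for another, simply by setting a different bar.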

In fact Dr Martell says we should be demanding this of AIs in general, and we should be repudiating AI providers that don't offer this degree of confidence.
 
Translation should be easier for bots than chatting, because in translation the meaning is already there, it just needs to be translated. When chatting, the bot needs to generate its own meaning.

What would a human do when translating a string of words that don’t make sense? Probably just translate the words, and make no attempt to make sense. A meaningless string of words cannot be generated when chatting without rousing suspicion that the author has no idea what is being generated.
 
Translation should be easier for bots than chatting, because in translation the meaning is already there, it just needs to be translated. When chatting, the bot needs to generate its own meaning.

This is exactly how I believe we should be thinking about AIs. In particular, we should be calling on the authors of ChatGPT and other such tools to be transparent with us. To tell us up front, in no uncertain terms, "this app is great for rapid translation of non-critical topics, but it hallucinates uncontrollably when asked to generate meaning from prompts."
 
Trying to make me redundant, so that I'd starve to death is one thing. But forcing me to make myself redundant. That's a whole 'nother kind of evil.

I have been working in translation for 25 years now and I see it finally happening. Ten years ago machine translation was still really bad. It has gotten much better, but it still needs to be checked by a person if it's something important. And nobody will pay for a document to be translated if it isn't important. Patents are a big part of our business and they're what I specialize in. But you can always look at the change history of a document and see how many edits the checker made. If it gets to the point where the checker makes only a few edits, or none, I'm sure some people who want to cut costs will say "This is good enough. We don't have to pay someone to check it anymore."
 
Translation should be easier for bots than chatting, because in translation the meaning is already there, it just needs to be translated. When chatting, the bot needs to generate its own meaning.

What would a human do when translating a string of words that don’t make sense? Probably just translate the words, and make no attempt to make sense. A meaningless string of words cannot be generated when chatting without rousing suspicion that the author has no idea what is being generated.

See for instance some English user instructions for products made in China. .. "Chinglish".

Hans ;)
 
You mean the original Chinese did not make sense, and that is why they got the unintelligible machine-translated manuals?

Well, it's a bit more complicated than that. Of course, the text made sense in Chinese, but the syntax and semantics are so different from English that dictionary-based word-for-word translations tend to produce a considerable amount of gibberish.

However, I do realize that your point was what to do when the initial text was gibberish. ;)

I don't know how an AI translator will handle this, but that would be possible to try out.

Hans
 
I don't know for Chinese, but with Japanese the word order and syntax is so different from English that, after reading a sentence through once to understand what you are about to render into English, it is usually easier to begin at the end and work your way back. In other words the stuff that comes at the beginning of an English sentence comes at the end in Japanese more often than not.

There are also bad human translators. Some poor guy at the factory who once took a few English lessons gets told to translate something: they hand him a Chinese-to-English dictionary and tell him to have at it. It's unlikely they had access to machine translation until quite recently. Now everyone does, so that sort of thing, the Chinglish or Japlish instruction manual, may be a thing of the past soon.
 
Are you sure about translating? Chatting in one language is one thing. Accurately translating from one to another is another thing.

Perhaps I'm mistaken, but I think machine translation and chatbots are rather different. The former is trained on translations (made by human translators), i.e., both a source-language text and a corresponding target-language text; the latter is trained on single-language texts without any one-to-one counterpart in another language.

Since no human yet understands whale language, I don't see how a chat AI engine could translate whale language into human language or vice versa, as the training data it would require to learn how to do it does not exist.
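The contrast between the two kinds of training data can be shown with toy examples (the sentences below are made up, purely for illustration): translation systems learn from aligned source/target pairs, while chat models learn from unaligned single-language text.

```python
# Toy illustration of the two training-data shapes described above.
# Machine translation: parallel corpus of (source, target) pairs.
parallel_corpus = [
    ("The cat sleeps.", "Le chat dort."),
    ("Good morning.", "Bonjour."),
]

# Chatbot pretraining: monolingual text with no aligned counterpart.
monolingual_corpus = [
    "The cat sleeps.",
    "Good morning, how can I help you today?",
]

# Every parallel example carries its own answer key; the monolingual
# examples carry none, which is the poster's point about whale song.
for source, target in parallel_corpus:
    print(f"{source} -> {target}")
```

This is also why the whale-language point holds: without any aligned pairs, there is nothing to learn a mapping from.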

You can use ChatGPT to translate:

[screenshot: ChatGPT translating a passage into French, German and Arabic]
Apropos of nothing, but I always think Arabic looks beautiful.

If it can translate it into Japanese I could tell whether the translation is accurate. I don't understand French, German or Arabic. I agree that it looks beautiful though.

ETA:
[screenshot: ChatGPT's translation into Japanese]
Looks pretty legit.

The best way to tell if a translation is any good is to translate it back into English. Here is the Japanese translated back into English via Google Translate:
"Maybe I'm wrong, but I think machine translation and chatbots are quite different. The former are trained on translations (by human translators), which means that they are Both corresponding target language texts are included, the latter being trained on monolingual texts that have no one-to-one correspondence with other languages.
 
The best way to tell if a translation is any good is to translate it back into English. Here is the Japanese translated back into English via Google Translate:

"Maybe I'm wrong, but I think machine translation and chatbots are quite different. The former are trained on translations (by human translators), which means that they are Both corresponding target language texts are included, the latter being trained on monolingual texts that have no one-to-one correspondence with other languages.

Well that contains grammatical errors that weren't in the Japanese translation. That one's on Google Translate, not the original ChatGPT translation into Japanese.
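For what it's worth, the round-trip check can be roughly automated. A minimal sketch, assuming a simple string-similarity measure stands in for a real quality judgment (difflib is crude and purely illustrative; actual evaluation would need a human reviewer or a proper translation-quality metric):

```python
import difflib

def round_trip_score(original: str, back_translated: str) -> float:
    """Crude similarity between a text and its back-translation.
    1.0 means character-for-character identical (ignoring case);
    lower scores suggest the meaning may have drifted in translation."""
    return difflib.SequenceMatcher(
        None, original.lower(), back_translated.lower()
    ).ratio()

score = round_trip_score(
    "Machine translation and chatbots are rather different.",
    "Machine translation and chatbots are quite different.",
)
# A score near 1.0 suggests the round trip preserved most of the wording.
```

Note the thread's own caveat still applies: a low score can be the fault of either leg of the round trip, as happened here with Google Translate.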
 
Besides, like with the "Chinese Room", these language models don't actually understand what they output.

The Chinese Room just shows that one part of the system doesn't understand what is being output. That doesn't demonstrate that the system as a whole lacks understanding, any more than a small collection of neurons in your brain not understanding this post demonstrates that you don't understand it.

Whether or not current LLMs have understanding is a different point, but the Chinese Room doesn't really tell us anything about that question.
 
Plagiarism machine go brrrrrrrr

The A.V. Club's AI-Generated Articles Are Copying Directly From IMDb

To calibrate your expectations, here's the disclaimer that accompanies articles by the A.V. Club Bot: "This article is based on data from IMDb," it reads. "Text was compiled by an AI engine that was then reviewed and edited by the editorial staff." Its author page adds that "these stories were produced with the help of an AI engine."

You'd think that "based on" and "produced with" would imply something transformative happening — a change of phrasing, a reworking in the outlet's tone, an addition of a spicy detail.

But it seems that "compiled" is doing a lot of work here. On our review, the bulk of the A.V. Club's AI-generated articles appear to be copied directly from IMDb. Not "based on," but copied verbatim.

https://futurism.com/the-av-club-imdb
 
I was trying to implement this into my Home Assistant instance (because what could go wrong?) but they changed something in the API. Once I can fix the code, I'll let you know if my house has turned against me.
 
That would be an obvious violation of copyright unless they are licensing the content.

Apparently they are, according to the article:
In fact, it turns out that there's a deeper relationship between G/O and IMDb than is mentioned anywhere in the disclaimer. Reached with questions, both groups confirmed that G/O is licensing access to IMDb's cache of information about the movie industry.

So legally they may be OK, but why not just use the original in that case? There's no added value.
 
That would be an obvious violation of copyright unless they are licensing the content.

Apparently they are, according to the article:


So legally they may be OK, but why not just use the original in that case? There's no added value.

Content mill presentation might have additional advertising value, even if it's just regurgitating existing material. The never ending arms race of online publications to game search engine optimization and capture precious ad revenue.
 
Content mill presentation might have additional advertising value, even if it's just regurgitating existing material. The never ending arms race of online publications to game search engine optimization and capture precious ad revenue.

Well, OK. I meant no added value for the reader.
 
That would be an obvious violation of copyright unless they are licensing the content.

Apparently they are, according to the article:


So legally they may be OK, but why not just use the original in that case? There's no added value.

As the article sort of mentions, it could be that a lot of the "AI" claims are a tad overblown, being used to puff up financial statements and spin a story. Companies have always used the latest technological buzz to try to seem more exciting than they really are. I wouldn't be surprised if some of this AI is a few macros running in Excel!
 
And people say these aren't conscious. What could be more human than a farmed-out crappy job being done with a quick bit of copy-and-pasting!

I guess the lowest-hanging fruit always gets picked first, but it's pretty unimpressive that so far the only real use case for this stuff is replacing the type of human work that went into producing content-mill dreck.
 
