
Merged Artificial Intelligence

Chances are that these riddles were among those that were part of Copilot's original training. After all, these models are trained on a gazillion texts.
I do think it's more likely that it already knew the answers to all of them, either because they were part of its training data set or because at some point some user told it the correct answers, than that it figured them out by some sort of reasoning process.
 
I do think it's more likely that it already knew the answers to all of them, either because they were part of its training data set or because at some point some user told it the correct answers, than that it figured them out by some sort of reasoning process.
Well, at least this can be tested. Is anybody here a good riddle master? Then please devise a good one, but do not write the answer here - or write it backwards, or whatever. Then Darat can present it to Copilot, and we'll see.

Or perhaps the answer should not be presented here at all, because somebody might give it away by boasting about how she cleverly solved the encryption!
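If the answer does need to be posted, a trivial bit of obfuscation would do the trick. Here's a minimal Python sketch of the idea (the answer string is just a placeholder): it reverses the text and applies ROT13, so nobody reads it by accident but anyone can decode it later.

[CODE]
import codecs

def obscure(answer: str) -> str:
    """Reverse the answer and apply ROT13 so it can be posted without spoiling it."""
    return codecs.encode(answer[::-1], "rot13")

def reveal(encoded: str) -> str:
    """Undo the ROT13 and the reversal to recover the original answer."""
    return codecs.decode(encoded, "rot13")[::-1]

hidden = obscure("a towel")   # placeholder answer; encodes to 'yrjbg n'
print(hidden)
print(reveal(hidden))         # -> 'a towel'
[/CODE]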
 
Chances are that these riddles were among those that were part of Copilot's original training. After all, these models are trained on a gazillion texts.

I do think it's more likely that it already knew the answers to all of them, either because they were part of its training data set or because at some point some user told it the correct answers, than that it figured them out by some sort of reasoning process.

I think you will be correct BUT that wasn't really my point. My point was that these research papers keep referring to human thinking/cognition/whatwethinkthinkingisthisweek as if all humans are equal. How many people can actually do the tasks that require this unique "human thinking"? How many humans do anything but regurgitate what they have learnt? How many humans can do the tasks that the researchers claim are unique to human cognition, tasks that involve something other than rearranging stuff we've already learnt? We should be comparing like to like.

(ETA: And again let me make it clear I do not think these AIs think as humans do, but more and more I think we do some things in a way similar to how these LLMs work.)
 
How many humans do anything but regurgitate what they have learnt? How many humans can do the tasks that the researchers claim are unique to human cognition, tasks that involve something other than rearranging stuff we've already learnt? We should be comparing like to like.
Well, it's probably a small percentage, but I think it must be more than zero, or else wouldn't we still be stuck in the Stone Age?
 
Well, it's probably a small percentage, but I think it must be more than zero, or else wouldn't we still be stuck in the Stone Age?
Of course it is greater than zero, but do we need to compare AIs to the best among us? What about comparing them to the likes of me, who can't solve a single riddle and has hardly ever had an original thought?
 
Interesting article discussing not just the accuracy of generative AI but the AIs’ confidence in their accuracy.

That is something that is well known. Check everything it produces. People have got into trouble for blindly accepting what an AI has said.
 
Watching How It's Made, I can't help but think an AI or even a not-so-smart robot would be better at certain mind-numbing, repetitive jobs than humans. Granted, some of those shows are pretty old and the sites may have been updated since then. Also granting that there are some things humans are much better at. But slipping frozen burgers into a package? If it's a very small operation, maybe -- but this one looked like a major factory.
The thing is, no AI is needed for such a routine task. Just have an array of sensors that trip an automated arm (or whatever robot tool is needed) as the items to be packaged, assembled, etc. pass a certain point. It's been done for years now, no AI needed.
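For what it's worth, that kind of fixed automation is just an edge-triggered loop. Here's a minimal Python sketch of the idea (the sensor and actuator functions are made-up stand-ins, not any particular PLC or robot API):

[CODE]
import time

def photo_eye_blocked() -> bool:
    """Stand-in for reading a beam-break sensor at the packaging point."""
    return False  # in reality: read a digital input from the PLC / IO board

def actuate_packing_arm() -> None:
    """Stand-in for firing the arm that slips the item into its package."""
    pass  # in reality: pulse a digital output or send a motion command

def run_packing_station(poll_interval_s: float = 0.01) -> None:
    """Trip the arm each time an item breaks the beam - no AI involved."""
    item_present = False
    while True:
        blocked = photo_eye_blocked()
        if blocked and not item_present:  # rising edge: an item just arrived
            actuate_packing_arm()
        item_present = blocked
        time.sleep(poll_interval_s)
[/CODE]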
 
I think you will be correct BUT that wasn't really my point. My point was that these research papers keep referring to human thinking/cognition/whatwethinkthinkingisthisweek as if all humans are equal. How many people can actually do the tasks that require this unique "human thinking"? How many humans do anything but regurgitate what they have learnt? How many humans can do the tasks that the researchers claim are unique to human cognition, tasks that involve something other than rearranging stuff we've already learnt? We should be comparing like to like.

(ETA: And again let me make it clear I do not think these AIs think as humans do, but more and more I think we do some things in a way similar to how these LLMs work.)
When it comes to it, I would be pretty confident that all these bots will be absolutely useless at solving novel problems that a properly trained human will be able to solve. They still are chatbots after all, just ones with access to huge (pirated) libraries.
 
When it comes to it, I would be pretty confident that all these bots will be absolutely useless at solving novel problems that a properly trained human will be able to solve. They still are chatbots after all, just ones with access to huge (pirated) libraries.
And I am more than confident that most humans, even with training and education, would fail at solving "novel problems"; my evidence is the 50-plus years I've been observing humans.
 
I think you will be correct BUT that wasn't really my point. My point was that these research papers keep referring to human thinking/cognition/whatwethinkthinkingisthisweek as if all humans are equal. How many people can actually do the tasks that require this unique "human thinking"? How many humans do anything but regurgitate what they have learnt? How many humans can do the tasks that the researchers claim are unique to human cognition, tasks that involve something other than rearranging stuff we've already learnt? We should be comparing like to like.

(ETA: And again let me make it clear I do not think these AIs think as humans do, but more and more I think we do some things in a way similar to how these LLMs work.)
Rote recall of arbitrary data is something computers are especially good at. That's why we invented them and gave them work that monetizes rote recall of arbitrary data. It's no surprise that a computer with access to every joke ever written is able to perform rote recall of punchlines for an arbitrary list of setups.

Computers doing what they do best isn't the same as computers doing everything that humans are good at.
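To make the "rote recall" point concrete, that kind of lookup is a one-line operation for a computer. A toy Python sketch (the stored jokes are obviously just made-up examples):

[CODE]
# Toy "every joke ever written" memory: setup -> punchline.
joke_memory = {
    "why did the chicken cross the road?": "To get to the other side.",
    "why don't scientists trust atoms?": "Because they make up everything.",
}

def recall_punchline(setup: str) -> str:
    """Pure rote recall: return the stored punchline, or admit it isn't stored."""
    return joke_memory.get(setup.strip().lower(), "(not in memory)")

print(recall_punchline("Why don't scientists trust atoms?"))
# -> Because they make up everything.
[/CODE]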
 

Has it hit a wall? Is it plateauing? Just a brief pause before the next quantum leap forward?
I think it is more a matter of a practical limitation of one specific class of AI. I think time and time again we have to unravel the investor/marketing bubble hype to get to the strand of reality. LLMs are great for marketing and making investors drool because they make great demonstrations, and as a result they have unfortunately become necessary for a company to get a tick from the "financial analysts". But their real-world uses are quite limited and are not going to create huge share price increases!

Will LLMs become the fabled "general AI"? I'm confident in saying no, but they may well be part of a "general AI".

Personally I struggle to understand why folk think "general AI" is in any way a meaningful definition; at best it seems to be defined as "can solve problems like humans do". But why on earth do we want to re-create such a bad type of intelligence?

I don't want an artificial intelligence that has the flaws of human intelligence, I want something much much better.
 
I hear we did hit the text training data wall. We are already using all available text on the internet. There won't be more.
I hear we also hit a wall in computing power. Only a few companies in the world can do cutting-edge research, and the investments are astronomical. Same with monetization. People can get GPT-3 for free, and can pay a bit for GPT-4. But Sora (the video generator) didn't go commercial because it would be too expensive; it's just too hardware-demanding.
But we were far away from these walls just a few years ago. That's because there were breakthroughs in theory, especially text tokenization and the transformer pipeline, and then a few years of work on the training process. Only then did we hit these walls.
There still might be improvements in what we do with the data and the processing power, and they might be major. Processing power is still covered by Moore's law. And as training and experimenting get cheaper, breakthroughs become more likely.
So I'm quite sure there will be progress. Probably not so fast. The bubble will collapse, and the collapse itself will slow things even more.
But the progress will still be too fast. Faster than society can adapt, and it won't end well.
 
I hear we did hit the text training data wall. We are already using all available text on the internet. There won't be more.
...snip...
Something that raised the hair on the back of my neck is that the big data crunchers, like OpenAI, may use "synthetic" data to further train their models, with "synthetic" of course meaning computer-generated text. Have they never put a microphone near a speaker outputting the mike's input?
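That feedback worry can be illustrated with a toy numerical loop: fit a trivial "model" to a small sample of its own output, over and over, and the spread of the data tends to drift toward zero. A minimal Python sketch (a one-dimensional Gaussian standing in for the model, purely to show the feedback effect, not a claim about any real training pipeline):

[CODE]
import random
import statistics

mu, sigma = 0.0, 1.0      # the "real" data distribution we start from
sample_size = 10          # deliberately small: finite data each generation

for generation in range(101):
    # Each generation trains only on the previous model's ("synthetic") output.
    sample = [random.gauss(mu, sigma) for _ in range(sample_size)]
    mu = statistics.fmean(sample)      # refit the model to its own output
    sigma = statistics.stdev(sample)
    if generation % 25 == 0:
        # The spread almost always collapses toward zero over the generations.
        print(f"generation {generation:3d}: spread = {sigma:.4f}")
[/CODE]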
 
There may be a bubble collapse, when the hype gets too far ahead of reality and expectations have to be reined in.

But remember similar things in the past, like the collapse of the "dot-com bubble", when people realized that just because a company has a website doesn't mean it can print money, and some of those stocks were grossly overvalued. The internet still went ahead, and certain companies like Amazon.com and Google really did figure it all out, while others like Pets.com did not.
 
