
Merged Artificial Intelligence

Google is very impressive in what it is doing with its AI development. Yes, it was caught out by how appealing the likes of GPTs and generative-art systems would be to ill-informed city analysts and business journalists, so it has had to waste resources keeping up with the Joneses, but it is also building very strong foundations. It will be able to weather the bubble bursting better than the "big" new AI companies. If I were a betting person, my money would be on Google, out of all the big players, to be the company that cracks the goal of "general AI".
General AI is a fairy tale. It will never happen.
 
What definition are you using for general AI?
I'm thinking of anthropogenic AI, "human-like" reasoning and behavior. However, we are a long way from AI that is able to address a wide variety of types of problems, too.
 
I'm thinking of anthropogenic AI, "human-like" reasoning and behavior. However, we are a long way from AI that is able to address a wide variety of types of problems, too.
There are some people who regard it as a matter of faith: humans are unique, and AI will never be able to achieve true intelligence.
I wonder where your position is.
 
I'm thinking of anthropogenic AI, "human-like" reasoning and behavior. However, we are a long way from AI that is able to address a wide variety of types of problems, too.
I think we can get close to that, but my very personal view is that to do so we would need to model more than just the brain, which is all we are modelling at the moment. And why do we want to achieve better mimicry of natural intelligence (NI) anyway? We know NI is crap: we hallucinate, we make things up, we lie, we cheat, etc. Those are not behaviours we should be looking to mimic in our AIs, and a lot of the point improvements in much of the public-facing AIs are developers trying to stop AIs accurately mimicking human behaviour!

I'm of the view that we should not be looking to mimic human behaviour in our AIs; we have a few billion such NIs already, so why do we need more? That's why I think Google is doing something different with its AI development (albeit, as I said, financial fashion means it has to keep churning out point improvements in its chatbots etc.): it's looking at where AI can do things differently to humans, and do things we NIs can't do.
 

Eliezer Yudkowsky is as afraid as you could possibly be. He makes his case. Yudkowsky is a pioneer of A.I. safety research, who started warning about the existential risks of the technology decades ago, influencing a lot of leading figures in the field. But over the last couple of years, talk of an A.I. apocalypse has become a little passé. Many of the people Yudkowsky influenced have gone on to work for A.I. companies, and those companies are racing ahead to build the superintelligent systems Yudkowsky thought humans should never create. But Yudkowsky is still out there sounding the alarm. He has a new book out, co-written with Nate Soares, “If Anyone Builds It, Everyone Dies,” trying to warn the world before it’s too late. So what does Yudkowsky see that most of us don’t? What makes him so certain? And why does he think he hasn’t been able to persuade more people?
 
There are some people who regard it as a matter of faith: humans are unique, and AI will never be able to achieve true intelligence.
I wonder where your position is.
My position is that any brain, from fruit fly up to human, is incredibly complex in structure and function, and we would hardly be able to replicate any of them. The human brain is said to be the most complex structure known in the Universe. The function of any brain is hardly understood, much less the human brain. Besides that, nobody in AI is even trying to replicate the human brain. They are trying to mimic the function of the human brain, usually using electronics. For decades, they've claimed that these electronic circuits were designed to function "just like" human neurons or human brains, but that is propaganda. Electronics don't behave like biology. So, not only are AI researchers trying to replicate something that nobody understands or has mapped out, but they aren't even going in the right direction.
 
No, a car is driven by a person. Corporations control how AI responds. Look at what Musk has done with Grok.
Corporations own cars and tell employees to drive those cars to do certain tasks. Corporations control lots of tools to accomplish their ends. AI is just one of those tools. An AI never does something of its own volition, because no AI has volition. No AI has curiosity. It does the type of task for which it is designed and built, and that's it. Elon Musk has a naive, simplistic and frankly cartoonish idea of what AI is.
 
We can't get people to be overly concerned about climate change, so "And why does he think he hasn’t been able to persuade more people?" really has a simple answer: our NI wasn't developed to worry about non-immediate threats!
 
My position is that any brain, from fruit fly up to human, is incredibly complex in structure and function, and we would hardly be able to replicate any of them. The human brain is said to be the most complex structure known in the Universe.
Yet, none of the structure or function we've seen has proven to be inherently impossible to simulate. What steenkh is getting at (correct me if I'm wrong), is whether you think the task is impossible or merely impractical, which impacts whether the discussion about it belongs here or in the Philosophy section.

To briefly summarize a series of extended threads we've had, the common position of the "impossible" side is what I like to call magic bean theory. If you have a software replica of a human brain, modeled down the last synapse, does it function as a human brain, is it a person, or will it always lack some je ne sais quoi, some magic bean, that science has not found and may not find but which you are nevertheless certain is absolutely required for conscious experience?
 
So, not only are AI researchers trying to replicate something that nobody understands or has mapped out, but they aren't even going in the right direction.
Why do you think AI researchers want to replicate a human brain, and not just intelligence?
It is my understanding that AI research gave up going the purely neural-network path some time ago.
I also think that it is difficult to determine whether AI is on the path to intelligence, because nobody has come up with a definition of intelligence that does not rely on an exact replication of the human brain. And even if it were replicated, it would still not be right, because it isn’t biological.
 
Yet, none of the structure or function we've seen has proven to be inherently impossible to simulate. What steenkh is getting at (correct me if I'm wrong), is whether you think the task is impossible or merely impractical, which impacts whether the discussion about it belongs here or in the Philosophy section.

To briefly summarize a series of extended threads we've had, the common position of the "impossible" side is what I like to call magic bean theory. If you have a software replica of a human brain, modeled down the last synapse, does it function as a human brain, is it a person, or will it always lack some je ne sais quoi, some magic bean, that science has not found and may not find but which you are nevertheless certain is absolutely required for conscious experience?
Humans have been simulating human parts for thousands of years. Have they ever made an exact replacement for anything? No. We have artificial hips, artificial legs, artificial hands, artificial eyes and ears, but none of them are an exact match for even these basic components of the human body. The human brain is far, far beyond our comprehension, never mind our ability to replicate. It is practically impossible for us to reproduce the human brain or all its functions. At best, we can mimic some functions.
 
Why do you think AI researchers want to replicate a human brain, and not just intelligence?
It is my understanding that AI research gave up going the purely neural-network path some time ago.
I also think that it is difficult to determine whether AI is on the path to intelligence, because nobody has come up with a definition of intelligence that does not rely on an exact replication of the human brain. And even if it were replicated, it would still not be right, because it isn’t biological.
People often copy or imitate successful implementations, especially when tackling difficult problems. We know the human brain produces human intelligence, and nothing else so far has, so we copy the human brain to some degree. We want machines that have something like human intelligence so they can make human decisions, perhaps even provide us with better versions of ourselves, hopefully freed of our weaknesses and failings.

Neural networks still form the core of a lot of AI, including LLMs.
 
Well, we are moving further away from the original biological inspirations. First, we use matrices instead of networks: in a matrix layer, every neuron is connected to every neuron in the previous layer, and to nothing else. That's quite a departure from the original models. We have also thrown away the old biology-inspired sigmoid activation functions and use the simple piecewise-linear ReLU. Of course, the whole backpropagation algorithm has nothing to do with biology, and reinforcement learning even less so.
Add the specialty structures surrounding an LLM, like embeddings and self-attention, and I'd say an LLM is a neural network really only by tradition.
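To make the "matrices instead of networks" point concrete, here is a minimal sketch of one fully connected layer with ReLU in plain NumPy (toy weights made up for illustration, not from any real model):

```python
import numpy as np

def relu(z):
    # ReLU activation: max(0, z) elementwise, replacing the older sigmoid
    return np.maximum(0.0, z)

def dense_layer(x, W, b):
    # Every output neuron connects to every input neuron:
    # the whole "network" is just one matrix multiply plus a bias.
    return relu(W @ x + b)

# Toy example: 3 inputs -> 2 outputs
x = np.array([1.0, -2.0, 0.5])
W = np.array([[0.2, -0.1, 0.4],
              [-0.3, 0.8, 0.1]])
b = np.array([0.1, -0.5])
print(dense_layer(x, W, b))  # -> [0.7, 0.0]; the negative pre-activation is clipped to 0
```

There is no wiring diagram and no spike timing here, just linear algebra, which is the sense in which these layers have drifted from their biological inspiration.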

That said, there are similarities in what an LLM does and how it "thinks" compared to the human brain. It's as if intelligence is a function achievable by many means, or even unavoidable once your "means" are good enough. LLMs and the current boom in AI help us primarily to understand what intelligence actually is. We can't make a human eye, but our cameras are way better, because we understand what we want, and we can engineer stuff better than lame evolution. We're not there yet with intelligence, but we are certainly getting closer.
 
I'm of the view that we should not be looking to mimic human behaviour in our AIs, we have a few billion of such NIs already why do we need more?
I realize this isn’t your point, but there are lots of reasons why the corporations funding AI want more of them. Sure, they need hardware and power, but an AI can do a job without pay or benefits. No need to fund their lives outside of the job. They won’t assert their rights or need time off. No worry that they will be unhappy and quit, or vote against your interests.

Of course, all of the out-of-work citizens will end up being a problem. Perhaps a new law could count AI as 3/5 of a person, providing additional political representation to their owners.
 
Humans have been simulating human parts for thousands of years. Have they ever made an exact replacement for anything? No. We have artificial hips, artificial legs, artificial hands, artificial eyes and ears, but none of them are an exact match for even these basic components of the human body. The human brain is far, far beyond our comprehension, never mind our ability to replicate. It is practically impossible for us to reproduce the human brain or all its functions. At best, we can mimic some functions.
So does "practically impossible" mean that it's impossible, or impractical? I apologize for being pedantic, but there's a theological divide here and effectively conversing with someone on one side as if they're on the other is at first impractical and then impossible.
 
So does "practically impossible" mean that it's impossible, or impractical? I apologize for being pedantic, but there's a theological divide here and effectively conversing with someone on one side as if they're on the other is at first impractical and then impossible.
I don't know if AGI is merely impractical, unreachable or impossible; I don't have enough information on which to base a conclusion. I don't see that it makes a difference. We aren't going to get it, regardless.

I believe in God and that He created all things, and human intelligence is a rare instance in all of creation. Whether that makes achieving it impossible for mere mortals, I don't know. I do know that we aren't even close, and won't ever be close.
 
