
The Singularity is Near (or not)

calebprime

http://en.wikipedia.org/wiki/The_Singularity_Is_Near


Four central postulates of the book are as follows:

1) A technological-evolutionary point known as "the singularity" exists as an achievable goal for humanity.
2) Through a law of accelerating returns, technology is progressing toward the singularity at an exponential rate.
3) The functionality of the human brain is quantifiable in terms of technology that we can build in the near future.
4) Medical advancements make it possible for a significant number of his generation (Baby Boomers) to live long enough for the exponential growth of technology to intersect and surpass the processing of the human brain.


Personally, I doubt 3) and 4). I speculate that these developments are hundreds of years away. They are not 'close', imho.

If neurologists like Damasio are correct, what's worth preserving in our consciousness depends on being embodied; it depends on very rich two-way connections between innervated flesh and brain.

Understanding of the brain is still in its infancy. We still don't understand what causes schizophrenia or depression. We don't have anything like artificial neurons or chips that we can substitute for brain parts.

There's nothing like a technology for transferring or 'uploading' our consciousness to a computer.

Here's an excerpt from an interview with Douglas Hofstadter:

http://www.americanscientist.org/bookshelf/pub/douglas-r-hofstadter

Q. There's a popular idea currently that technology may be converging on some kind of culmination—some people refer to it as a singularity. It's not clear what form it might take, but some have suggested an explosion of artificial intelligence. Do you have any thoughts about that?

Oh, yeah, I've organized several symposia about it; I've written a long article about it; I've participated in a couple of events with Ray Kurzweil, Hans Moravec and many of these singularitarians, as they refer to themselves. I have wallowed in this mud very much. However, if you're asking for a clear judgment, I think it's very murky.

The reason I have injected myself into that world, unsavory though I find it in many ways, is that I think that it's a very confusing thing that they're suggesting. If you read Ray Kurzweil's books and Hans Moravec's, what I find is that it's a very bizarre mixture of ideas that are solid and good with ideas that are crazy. It's as if you took a lot of very good food and some dog excrement and blended it all up so that you can't possibly figure out what's good or bad. It's an intimate mixture of rubbish and good ideas, and it's very hard to disentangle the two, because these are smart people; they're not stupid.




My purpose in starting this thread is to challenge the believers in the Kurzweil Singularity to make their cases. I don't have much to contribute myself, besides raising the question.
 
Personally I think that the Singularity is a realistic goal, and will occur far sooner than anyone expects. But then again, I'm not taking into account Hofstadter's Law.
 
I think number three happened about 1974 - All those chemicals being introduced to the brain had to have some long term effect
 
1. I think that anyone who bases a theory on dividing by zero, with everything happening at the same time at infinite speed, is too stupid to be allowed to breed (warning: not literally, so don't go off on "OMG nazi population control" tangents ;)) - and certainly not someone to follow religiously.

2. For that matter, there is a difference between exponential growth and the kind of vertical asymptote that would be needed for a real "singularity." That some people don't seem to know the difference -- or just need another illogical belief that badly -- isn't exactly inspiring confidence. At any rate, mere exponential progress would _not_ cause a singularity.

3. Plus, I'm not convinced that that exponential rate actually exists. A more realistic assessment of progress in any field looks more like a Gauss curve. There is a period of increasingly fast progress at the start, when it looks like only the sky is the limit, but then it plateaus, and eventually the rate of progress starts going down.

You can see that in just about any field, if you look dispassionately. E.g., antibiotics seemed to take off exponentially at some point, but by now we're getting considerably fewer new patents per year than even in the '40s, and most of them are just derivatives of existing ones. E.g., aerospace is _long_ past that exponential growth period, and by now jobs in designing new aircraft, and innovations per year, have tapered off massively. Etc.

That overall exponential progress just is not happening. We had a bunch of overlapping curves as several domains had their golden ages around the same time, but the flip side is that all of them are now past their peak and tapering off. Anyone selling you exponential progress that is still happening is a crook or deluded, plain and simple.

4. A lot of it is, simply put, a perspective effect. We see more detail in the things that are close to us (in time), but lack that detail in things that happened 100 or 1,000 years ago. That can paint the false picture that progress accelerated a lot faster than it actually did.

We look at, say, the age of sail and see it as some monolithic period where for hundreds of years nothing new happened. We completely miss the evolution from, say, galley to galleass, or the rapid increase in ship tonnage (and the construction techniques needed to make that happen) during the relatively brief Hanseatic League era. But if one CPU this year is 10% faster than last year's, then OMG, it's exponential progress.
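The growth shapes contrasted in points 2 and 3 can be made concrete with a quick numerical sketch. The parameters here (the rate constants, the ceiling, and a made-up "singularity date" of t = 20) are arbitrary illustrations, not a model of any real technology:

```python
import math

def exponential(t, rate=0.5):
    # Grows fast, but remains finite for every finite t.
    return math.exp(rate * t)

def logistic(t, ceiling=100.0, rate=0.5, midpoint=10.0):
    # S-curve: looks exponential early on, then plateaus at `ceiling`.
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

def hyperbolic(t, t_singularity=20.0):
    # Diverges as t approaches t_singularity: the only one of the three
    # shapes that actually produces a "singularity" in finite time.
    return 1.0 / (t_singularity - t)

for t in (0.0, 10.0, 19.0, 19.9, 19.99):
    print(f"t={t:6.2f}  exp={exponential(t):12.1f}  "
          f"logistic={logistic(t):7.2f}  hyperbolic={hyperbolic(t):8.2f}")
```

Only the hyperbolic curve blows up in finite time; the exponential merely gets large, and the logistic quietly flattens out, which is the whole point of the distinction.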
 
I think we are approaching a singularity, but not in the sense that they do.

I think we are approaching the threshold of incomprehensibility: we will have advanced to the point where nobody understands enough of the available information to make a rational judgment about the best, or even a good, course of action.

We have been moving toward specialization for a very long time. The Renaissance Man truly was a creature of his time, and no actual examples have existed since. That was the last era in which one brilliant and well-educated person could understand everything there was to know, and comprehend the limitations of that knowledge.

We are to the point where PhD specialists in fields like physics have to defer to others to interpret results outside their areas of specialization.

And it will only get worse.

Will AI come along and save us? To a degree, information technology has been instrumental in creating the problem. I have a computer system with terabytes of data on it in my home - how could I conceivably understand anything except a digest of that data?

My opinion is that the first true AIs - that is, self-aware machines capable of forming their own goals - will not be of any help whatsoever. They will have their own agendas that we will not be able to guess, and we will not be certain whether following their advice is a good idea from humanity's point of view or only from the point of view of the AI...

And I am not at all certain we will be able to make a self-aware machine that does not have its own ideas of whether it wants to work with us or not.

And I really believe that such machines will have the same difficulties processing the details of reality as we do. They may be fast, but this will not make them geniuses. Imagine that dull boy you knew freshman year in high school, only now he's a shiny new AI. Would he be able to comprehend climate modeling or high-energy physics, no matter how much time he had to think about it?
 
Brains are computers.

They process information. But it is really unclear whether there is much structural similarity. Go look up back-propagation neural networks for the closest electronic analog we have come up with. You can solve a certain class of intractable problems approximately with neural networks, but they do not behave much like conventional computers, even when you use a computer to simulate them.
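For anyone who does go look up back-propagation, here is a minimal toy sketch - a tiny network trained on XOR by gradient descent. It is illustrative only, with made-up hyperparameters, and makes no claim to biological realism:

```python
import math
import random

random.seed(1)

def sig(z):
    # Logistic activation; its derivative is sig(z) * (1 - sig(z)).
    return 1.0 / (1.0 + math.exp(-z))

# XOR: not linearly separable, so it needs at least one hidden layer.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

H = 3  # hidden units (2 suffice in theory; 3 makes training more reliable)
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(H)]  # [w1, w2, bias]
w_o = [random.uniform(-1, 1) for _ in range(H + 1)]                  # [w..., bias]

def forward(x):
    h = [sig(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    o = sig(sum(w_o[i] * h[i] for i in range(H)) + w_o[H])
    return h, o

def loss():
    return sum((forward(x)[1] - y) ** 2 for x, y in data)

lr = 1.0
before = loss()
for _ in range(10000):
    for x, y in data:
        h, o = forward(x)
        # Backward pass: apply the chain rule from the output back to the
        # hidden layer, then nudge every weight downhill.
        d_o = (o - y) * o * (1 - o)
        d_h = [d_o * w_o[i] * h[i] * (1 - h[i]) for i in range(H)]
        for i in range(H):
            w_o[i] -= lr * d_o * h[i]
            w_h[i][0] -= lr * d_h[i] * x[0]
            w_h[i][1] -= lr * d_h[i] * x[1]
            w_h[i][2] -= lr * d_h[i]
        w_o[H] -= lr * d_o

print(f"loss before: {before:.3f}, after: {loss():.3f}")
```

Note that there is no instruction set and no clock anywhere in the "network" itself - all the learning lives in the weight values - which is the structural difference being argued about in this thread.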
 
Brains are computers.

Brains compute, but that's not a very good description of what they do and are in a broader sense. Brains are organs, squishy and complex.

Metaphors of computer storage, retrieval, and processing have been useful for talking about some of the functions of thought, but we can't mistake the metaphor for the enchilada.
 
In 1969, I watched Apollo 11 and thought we would be on Mars before I was forty.
I'm fifty-five.

Apart from microprocessors, I see no sign of an acceleration in technological ability or man-machine convergence, or anything suggestive of a "singularity". I see a falling back from the hard stuff, an obsession with nonsense and more mouths to feed.
 
Apart from microprocessors, I see no sign of an acceleration in technological ability or man-machine convergence, or anything suggestive of a "singularity". I see a falling back from the hard stuff, an obsession with nonsense and more mouths to feed.

You probably missed a few rather revolutionary things, then. Did you know organ transplants are now a viable treatment?

McHrozni
 
You probably missed a few rather revolutionary things, then. Did you know organ transplants are now a viable treatment?

McHrozni

Organs up to hearts had been transplanted by 1969. Admittedly, thanks to immunosuppressive drugs they are viable for a lot more people now, but organ transplants were known to be possible for 15 years at that point.

We have been moving toward specialization for a very long time. The Renaissance Man truly was a creature of his time, and no actual examples have existed since. That was the last era in which one brilliant and well-educated person could understand everything there was to know, and comprehend the limitations of that knowledge.

I really doubt that. The whole "Renaissance Man" thing has never impressed me: they largely did terrible work outside one or two areas. Leonardo da Vinci is frequently given as an example of a polymath. However, his science was limited to observation with no theories or explanations of things even attempted, his engineering involved many devices that were never built and wouldn't work if he had tried, and he made no contribution to mathematics.
 
Oh, and regarding the 'singularity': while computational power has grown exponentially for the last ~50 years, the physical limits of silicon, or even of exotic semiconductor materials, mean that it's unlikely to keep advancing the same way for the next 50 years. Some people think there's more like 10 years of progress left. Maybe entirely new technologies can be invented, but entirely new technologies are not what has fueled the exponential growth of the last 50 years.
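For scale, the arithmetic behind "50 years of exponential growth" is plain compounding. Assuming the commonly cited ~2-year doubling time (an assumption, not a law of nature):

```python
years = 50
doubling_time = 2             # years per doubling - the classic Moore's-law figure
doublings = years // doubling_time
factor = 2 ** doublings       # total speedup after `years` of steady doubling
print(f"{doublings} doublings -> a factor of {factor:,}")
```

Twenty-five doublings is a factor of about 33 million, which is why even a modest-sounding doubling time compounds so dramatically - and why hitting a physical wall matters so much to the projection.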

AI, in terms of thinking like a human, has been a dismal failure, and there's no reason to think that can be fixed by the limited amount of progress that's available until we hit the physical limits of the way that we can design computer chips.
 
I thought we'd have solved old age by now, and now I'm really not so sure. I am sure enough it's "near", in a historical sense anyway, if not a one lifetime sense, that I'm pissed I may have just barely missed it.


As for the singularity, I don't think it's necessarily at the point of a smooth and accelerating curve of technological development. Rather, it will be closer to the point where humans can manipulate reality the way they do with virtual worlds and computers today.


If you could pull up, say, a person's entire body in a computer, down to every atom, and run it through a cleanup program (repair damage, reset telomeres, and so on), you'd pretty much be there. Such tech also implies the ability to "print out" any working device that's been invented.
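A rough back-of-envelope suggests how far away that is. Taking the commonly cited order-of-magnitude estimate of ~7 × 10^27 atoms in a human body, and storing nothing but a position per atom:

```python
atoms = 7e27            # commonly cited estimate for a ~70 kg human body
bytes_per_atom = 3 * 8  # just x, y, z as 64-bit floats; ignores element, bonds, motion
total_bytes = atoms * bytes_per_atom
zettabytes = total_bytes / 1e21
print(f"~{zettabytes:.1e} ZB just to store one static snapshot")
```

That is on the order of a hundred million zettabytes for a single frozen snapshot - vastly more than all digital storage ever built, before simulating any dynamics at all.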

Such high tech would not necessarily represent a singularity, though. And even a godlike intellect needs something to do.
 
... AI, in terms of thinking like a human, has been a dismal failure, and there's no reason to think that can be fixed by the limited amount of progress that's available until we hit the physical limits of the way that we can design computer chips.

The difference is most likely structural rather than performance-related.
 
Brains compute, but that's not a very good description of what they do and are in a broader sense. Brains are organs, squishy and complex.

Metaphors of computer storage, retrieval, and processing have been useful for talking about some of the functions of thought, but we can't mistake the metaphor for the enchilada.
No confusion, you are just applying too narrow a concept of computers.

Brains are computers.

They process information. But it is really unclear whether there is much structural similarity. Go look up back-propagation neural networks for the closest electronic analog we have come up with. You can solve a certain class of intractable problems approximately with neural networks, but they do not behave much like conventional computers, even when you use a computer to simulate them.
As I said, brains are computers.

This doesn't mean that the singularity is near - that depends on progress in hardware (which looks very promising) and software (which looks very unpromising). But it means that the singularity is incontrovertibly possible - and approaching.
 
...

As I said, brains are computers.

This doesn't mean that the singularity is near - that depends on progress in hardware (which looks very promising) and software (which looks very unpromising). But it means that the singularity is incontrovertibly possible - and approaching.

You can keep saying it, but this doesn't make it so.

In point of fact, there is no evidence that your brain runs any software at all.

Neural networks seem to be defined entirely by their physical structure and the structure and modes of attachment of their inputs. They do not run any software at all. There is no central clock. There is no instruction set.

You can SIMULATE a neural network with a computer, but nobody knows how to simulate a computer with a neural network until it gets to the scale of a human brain that can IMAGINE that computer.
 
You can keep saying it, but this doesn't make it so.
My saying it doesn't make it so, sure. The fact that it's true - and indeed, bleedin' obvious - makes it so.

In point of fact, there is no evidence that your brain runs any software at all.
So what?

Neural networks seem to be defined entirely by their physical structure and the structure and modes of attachment of their inputs. They do not run any software at all. There is no central clock. There is no instruction set.
Neural networks are computers.

You can SIMULATE a neural network with a computer
Neural networks are computers.

but nobody knows how to simulate a computer with a neural network
That statement is as wrong as it is possible to be.
 
