I am practicing the Comtean “mental health” I described in my last post by writing without reading your recent comments. I doubt that I have missed much.
I might get around to reading some of them. I am a street orator. One of my biggest motivators is a stupid statement that needs to be set right. You guys are a wonderful source of stupid statements. You may know some physics (even there your knowledge is specialized, not balanced), but you are incredibly stupid about everything else. Did anyone here ever take a course in economics? And I can reuse elsewhere some of the material I generate here. Therefore I might get around to reading some of your comments. Or maybe not: I have more important audiences to work off of right now, and better things to write about.
There is occasionally value among the stupidity here. Sol Invictus’s statement that not doing the LHC is riskier than doing it is simplistic but promising, a nice thing to riff off of. The LHC is a low-probability risk, with a subjective probability that has varied: it rose substantially as safety factors eroded, then fell considerably with Mangano’s study, but it has not gone to zero. Read Toby Ord’s paper for one reason why it is not zero. [Toby Ord, Rafaela Hillerbrand, and Anders Sandberg, “Probing the Improbable: Methodological Challenges for Risks with Low Probabilities and High Stakes”] Rational people don’t want Earth subjected to even a low-probability risk unnecessarily. People don’t think well about existential risk; the mathematics of expected value is a good corrective. We bring in Sol’s point when we acknowledge that the appropriate math considers benefits as well as risks. Even the humongous negative value of existential risk (however improbable) might be countered by a small probability of a transcendent discovery that, for example, might multiply the human race by orders of magnitude and reduce risk in other ways. An example from crazy science fiction is a Star Trek space drive that would let us inhabit the galaxy. I think a discovery of that magnitude, a discovery that could be made by the LHC and in no other way, is quite improbable. But we are talking about improbabilities here. An improbable galactic conquest, or other transcendent discovery, might balance an improbable existential risk.
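To make that arithmetic concrete, here is a toy sketch in Python. Every number in it is invented for illustration; nobody knows the real probabilities or values, which is much of the difficulty:

    # Toy expected-value comparison for running the LHC.
    # Every number here is invented for illustration; nobody knows the real ones.

    p_doom = 1e-9       # guessed probability the LHC destroys Earth
    v_doom = -1e13      # guessed (negative) value of losing humanity
    p_magic = 1e-6      # guessed probability of a transcendent discovery
    v_magic = 1e11      # guessed value of, say, a galaxy-opening space drive
    p_normal = 0.9      # guessed probability of ordinary results (a Higgs, etc.)
    v_normal = 1e3      # guessed value of ordinary results

    ev = p_doom * v_doom + p_magic * v_magic + p_normal * v_normal
    print(f"Expected value of running the LHC: {ev:,.0f}")

    # The answer's sign is hostage to guesses about tiny probabilities
    # multiplied by enormous values -- which is exactly the problem.

Change any of the guessed inputs by an order of magnitude and the conclusion flips, which is why the argument cannot be settled by sneering.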
Note that a space drive could also be an existential risk: just get your spaceship near light speed and ram Earth. The results of science are not always positive. Consider the atomic bomb, a previous product of physics, in the calculus of the costs and benefits of science. We are very lucky that the production of enriched uranium and plutonium requires massive resources. Bring the cost into the range individuals can afford, and civilization would not last a year. Upcoming science may put similar powers in the hands of individuals.
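How bad would a relativistic ramming be? A back-of-the-envelope calculation, with a ship mass and speed I made up for illustration:

    import math

    # Back-of-the-envelope: kinetic energy of a relativistic ship ramming Earth.
    c = 3.0e8       # speed of light, m/s
    m = 1.0e6       # hypothetical 1000-tonne ship, kg
    v = 0.99 * c    # hypothetical speed

    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)  # Lorentz factor, ~7.1 at 0.99c
    energy = (gamma - 1.0) * m * c ** 2          # relativistic kinetic energy, J

    megatons = energy / 4.184e15                 # 1 megaton TNT = 4.184e15 J
    print(f"Impact energy: {energy:.1e} J = {megatons:.1e} megatons of TNT")
    # About 5e23 J, on the order of a hundred million megatons:
    # dinosaur-killer scale, from one modest ship.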
I acknowledge, and wonder at, the marvels that science has produced. Look around yourself in an urban environment. Almost everything you see is a benefit of science and technology. Nevertheless, it is not obvious that science is automatically good. Actualization of just one existential risk would wipe out all of these benefits. Sir Martin Rees, Astronomer Royal of England, estimates that the probability that humanity will not survive this century is 50%. I think him pessimistic, but he has a point.
Someone here sneered about Luddites. (I don’t get how “mad scientists,” who in the LHC case have been quoted uttering several precise equivalents of “nothing could possibly go wrong,” can come across here sneering about stereotypes.) The Luddites have been more or less wrong economically for 200 years. However, it is not obvious that they will be wrong forever.
The Luddites were concerned that technology, in the form of the power loom, would put them out of work. Technical change does put people out of work, but so far this has been a minor problem. A few who can’t transition may suffer, but the majority find work in other fields, doing things that machines can’t do, and the economy becomes more productive. In the 200 years since the Luddites were active, technology has expanded enormously, the economy has become marvelously more productive, and, with the exception of a few recessions, we still have more or less full employment in advanced economies.

However, the Luddites may very well be right about machines putting people out of work in the long run, when artificial intelligence gives us inexpensive machines that can do EVERYTHING better than humans. Now, even intelligent machines are not likely to do EVERYTHING better than humans. For example, technology can already make recordings that sound better than any but the best human music groups, but some people still find advantage in, and pay a premium for, a live orchestra. And some may prefer the human version of sex, and the human result. Nevertheless, machines able to do ALMOST everything better than humans would be sufficient to trash our current economy.

But we can solve that. I used to joke that we could solve the problem by employing the inefficiency of communism. A better solution is to revise an old IBM slogan, “Machines should work, people should think.” Artificial intelligence that can think better than humans might generate the revised version: “Machines should think, people should bowl.” We could solve the money problem by considering all humans to be inheritors of the legacy of the past, and paying each of us dividends on that legacy. This would solve the Luddites’ problem about jobs. But the Luddites will still have been right if the machines become an actualized existential risk.
Incidentally, I think that artificial intelligence is an existential risk, one with a higher probability than LHC risk. But AI has more loci than the LHC, and so is harder to turn off, and it has more immediate benefits. AI risk has been the subject of many bad movies. At least AI researchers, unlike LHC physicists, acknowledge this risk. But they aren’t doing much about it. One group does propose making the machines friendly, a good idea, but some promise that they will prove friendliness mathematically in advance, which is probably impossible. (I have a bit of a conflict of interest when it comes to AI research. I would like to do it myself.)
The benefits of science are obvious, but the future is not clear. Some think they see declining marginal returns to science and technology: there are still important inventions, but the size of the research establishment required to produce them keeps increasing. Others cite things like Moore’s law and see an acceleration of technology. Some think that this acceleration will give us a “singularity” in the near future, where magic nanotech and so forth gives us god-like powers. [See Kurzweil, “The Singularity Is Near,” and quite a few others.] Note that this is not automatically good: while some legendary gods were good, others were not. The idea that there is some science that should not be done has already been implemented once, at Asilomar, where biologists suspended recombinant DNA research until they had agreed on safety guidelines.
The LHC may teach us magic stuff. On the other hand, my guess is that someone gets a Nobel for finding the Higgs, and nothing much else comes of it. We risk Earth for this?