James Blodgett
Student
- Joined: May 27, 2009
- Messages: 27
Contemplate the subjective risk from colliders as it existed before the Giddings and Mangano paper. At that time, collider advocates were relying on Hawking radiation, recently questioned by published physics papers (cited above), and on the crude cosmic ray analogy, with no consideration of white dwarfs. At that time, the two previous safety studies were seriously outmoded. The RHIC study said that black hole production was impossible, while subsequent physics papers predicted black hole production. The first CERN safety study said that black holes would dissipate via Hawking radiation, while subsequent physics papers questioned the fundamental theory behind Hawking radiation; that study also relied on the crude collider/cosmic ray analogy, which Giddings and Mangano found inadequate. It seems amazing that, at that time, CERN was claiming that the risk was zero and was ready to launch! Were it not for pressure from collider critics, CERN would not have done the LSAG (LHC Safety Assessment Group) studies, including Giddings and Mangano. Collider advocates point to their three safety studies: how could anyone ask for more? Actually, the need for three studies was itself a serious weakness. Each subsequent study was necessary because the previous one had turned out to be inadequate.
If science is valuable, the history of this debate is unfortunate since it will be a black eye for science. There is a “damn the torpedoes, full speed ahead” attitude on the part of collider advocates that does not look good for their cause. A little more care on the part of collider advocates would make them look better in future contexts. Consider the difficulty of selling the next collider.
Some collider advocates say that the LSAG studies only showed that they were right all along. I disagree. Russian roulette is not a good thing even if you win. As a parable, consider a ride with a reckless bush pilot. As you get in the airplane for a flight over tough country, you ask, “Have you completed the checklist?” “Another bureaucrat!” the pilot grumps, then grabs a clipboard and pointedly circles the plane, looking at every tire and checking the dipstick in every oil reservoir. “See, nothing wrong!” he exclaims as he guns the plane down the runway. The philosophical question: is the plane safer now? In fact, there was nothing wrong. However, I would much rather fly after the checklist is completed. Running the checklist was a good thing. The subjective probability of trouble is lower.
I think it would have been immoral to launch with the grossly inadequate safety factors pre-LSAG. The situation now is less clear. I think the LSAG did a fairly good job, mainly due to Mangano, who took the points of collider critics seriously (unlike most here) and who was criticized for taking as long as he did. However, the LSAG methodology was far from balanced, and subsequent criticisms get the short shrift we see on this website. For example, the LSAG was composed only of physicists, something of a conflict of interest since most physicists eagerly await LHC data. One of their main safety arguments rests on astronomical observations. It might be more believable that they made a balanced assessment of the astronomical data if an astronomer had been a member of the group. Their main task was risk assessment. Their risk assessment methods might have been more appropriate had a professional risk assessor been a member. Indeed, balance in the expertise of team members is a standard part of risk assessment best practice. Some of my colleagues suggested these things to CERN when the LSAG was constituted. My interpretation is that CERN would have been uncomfortable with any deck that was not stacked, and indeed was uncomfortable with Mangano for being as independent-minded as he turned out to be.
I think the Giddings and Mangano paper was fairly good. However, the LSAG was not quite as good on the issue of strangelets. Also, subsequent work that might suggest problems is given the short shrift we see here.
Incidentally, collider advocates widely deplore the concept of a lawsuit. Few mention the main point of that suit: that US groups supporting CERN to the tune of $500 million should comply with US environmental protection laws and do an environmental assessment, which they did not do. I suppose that physicists are special, infallible people who should be exempt from such requirements. Potential destruction of Earth is an environmental impact, and the environmental assessment procedure would at least consider both sides. The initial case was decided narrowly, on the grounds that the US contribution was not substantial because it was less than 10% of the total effort, so US environmental law did not apply. A prior case in a totally different area, involving much less money, was decided on similar grounds. The decision is being appealed.
Sol Invictus says that we should consider the risks of not doing science. I agree. I sometimes point that out to others. However, even some scientists see the value of restrictions on some science. Consider the Asilomar compromise on recombinant DNA.
Each global risk presents differently. Asteroid impact is a global risk, and we are doing something about it: there is a near-Earth asteroid watch, and there have been studies about moving such objects. The main question is whether resources are being put into these efforts at the appropriate level. Another global risk, generally conceded as valid even by practitioners in the field, is the risk from artificial intelligence if and when it gets far beyond us. There are obviously both risks and benefits here. Most AI practitioners concede the risk (unlike collider advocates), but not much is being done about it. One thought is to build in friendliness, a good idea, but those advocating it also claim they will prove mathematically, in advance, that it will work, a feat I would like to see but doubt can be done. I do not think we can totally avoid these risks, but good thought can reduce them at least by a small amount. The mathematics of expected value says that even a small reduction in a global risk is worth a lot, as the sketch below illustrates. As an example of an actual reduction, at least of subjective risk, I would cite the LSAG studies.
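Here is a minimal sketch of that expected-value arithmetic. The numbers are assumptions I have chosen purely to show the structure of the argument; they are not risk estimates from the LSAG or from anyone else, and the real point survives whatever small probabilities you plug in.

```python
# Illustrative expected-value arithmetic. All numbers below are assumptions
# chosen only to show the structure of the argument, not actual risk estimates.

world_population = 7e9   # roughly the number of people alive today
value_per_life = 1.0     # count each life as one unit of value

# Hypothetical probability of a global catastrophe before and after some
# mitigation step (for example, an additional safety review).
p_before = 1e-8          # one in a hundred million (assumed)
p_after = 0.5e-8         # the step halves an already tiny probability (assumed)

expected_loss_before = p_before * world_population * value_per_life
expected_loss_after = p_after * world_population * value_per_life

print(f"Expected loss before mitigation: {expected_loss_before:.1f} lives")
print(f"Expected loss after mitigation:  {expected_loss_after:.1f} lives")
print(f"Expected lives saved:            {expected_loss_before - expected_loss_after:.1f}")
# Even halving a one-in-a-hundred-million risk is worth about 35 expected
# lives under these assumptions, before counting future generations, which
# would raise the figure enormously.
```

The design of the argument is simple: because the stakes are multiplied by the entire population (and arguably by all future generations), even a very small change in a very small probability produces a large change in expected value, which is why a checklist-style review can be worthwhile even when it finds nothing wrong.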
I would love to seriously try to consider some of the science and some of the math behind the collider debate. However, this place is basically a kangaroo court, so I don’t see that as possible here. I would welcome continuing the conversation elsewhere with anyone who actually wants a conversation. Contact me at Risk Evaluation Forum, accessible via Google.