I'm good with that. Of course, I'm distinguishing the process of science--which is whatever gets the job done--from the conclusions of science--which are expressed as published papers. (And conferences typically involve published papers, as noted.) So if clarification were needed, voilà.
Have you ever tried to track down the data in one of GSA's Abstracts with Programs? I have--it's a nightmare. The data are there, and are commonly referenced by scientists who were at the talks. But the abstracts themselves are something of a joke: many scientists submit an abstract with a vague notion of actually finishing the research before the conference, but end up presenting another talk altogether. That makes figuring out which talks to attend all kinds of fun, and it makes tracking down that data so you can do a BLM PFYC analysis even more brain-trauma-inducing.
You know, I wonder what they'd say about making those sessions into a podcast or the like? But anyway, I digress...
The simple fact of the matter is that science is far, far more than merely what's in the peer-reviewed literature. It's trivially easy for anyone to find well-respected scientists complaining about well-known biases in the literature. To pick an example I have some experience with: it's nearly impossible to publish an article explaining a methodology that doesn't work. At the same time, doing so would have innumerable practical benefits, ranging from saving researchers from wasting time on something you could easily tell them doesn't work, to saving companies from paying millions of dollars to develop methods that won't do what they intend. And if you think that science isn't done over a few drinks during a conference, you're simply delusional. Science is inherently collaborative, and the nature of that collaboration is dictated by social pressures--in other words, having a few drinks with someone may open up possibilities that would otherwise be not only closed, but frankly invisible to a researcher. Half the reason to attend conferences is the socialization aspect. And a lot of science remains unpublished--to the point where the people who actually put the science to use, the engineers and the consultants and the like, are told very early in their careers that peer-reviewed publication isn't the only place to find data, and that in many ways they themselves constitute peer review. If you don't believe me, try to get a BLM Paleontological Resources Survey permit for a Basin and Range valley without talking to the guys who do hobby collection. You'll get laughed at.
However, the circumspect language is used carefully for a reason. Any rebuttal of a scientific theory has to be phrased in scientific language. It's not a matter of politeness. It's a matter of practicing science.
I'm sorry, but I have to ask your qualifications for telling me how to discuss science. Because I've flat-out told academic researchers that they were wrong--quite literally, and in documents that I know they at least have the opportunity to read. It seems odd that you're telling me that I didn't do what I did. I distinctly remember doing it.
Absence of proof vs. proof of absence.
The counter to that whole false dichotomy is the null hypothesis. The null hypothesis is, properly, "there's nothing to explain". Only once you've demonstrated that there's something to explain are you justified in explaining it--and "it's a miracle!" is an explanation, albeit a very poor one.