Why is there so much crackpot physics?

Exactly in what manner are the meanings of the words "copper," "conducts," and "electricity" undocumented?

"My" meaning is what Quine, et al. call the underdetermination problem deriving from the problem of “auxiliary hypotheses”.

The hypothesis “All copper conducts electricity” is neither verifiable nor falsifiable in isolation. We need auxiliary hypotheses from our basis of prior assumptions and beliefs regarding electrical conductance, conductance meters, copper wire, the atomic number of copper, how pure a substance might need to be to be called "copper", etc.

Therefore, no hypothesis (Quine, et al. show) is testable in isolation from the whole, which includes related hypotheses and theories, such as prior causes and conditions. Each of these premises contains terms which depend on similar hypotheses and theories, and so on...

Documenting an entire web of this type is, they say, impossible. I agree.
 
A Quine example was the assumed meanings of "copper", "conducts" and "electricity" in the physics statement: "All copper conducts electricity." The assumed meaning of "sunrise" in the pre-Copernican view seems the most famous. The assumed meaning of "species" to pre-Darwinians is another.

If you went to a pre-Copernican scientist and told them to "define everything," they would do so.

  • Sunrise: when the sun appears over the horizon.
  • Sun: The great bright light-emitter in the sky which the Greeks said was the god Helios.
  • Horizon: the farthest we can see.
  • See: Visual impressions upon the eyes.
  • Impressions: When the mind forms an idea of the external world through the senses. ("AHA," says the project manager, "we're getting somewhere, this is a big mess, flag for followup.")
  • Eyes: Roundish sense organs that appear in twos on the faces of many animals.
  • Two: one plus one. ("A process concept!" says the manager. "I smell revolution!")
  • One: The smallest number.
  • Animals: Physical objects that eat and multiply. Except insects. Maybe plants. Aristotle goes into this in great detail. ("I'll need that document on file," says the manager, "but that closes that question.")
  • ...

I'm not seeing where the manager (without knowing the answer) picks out "sunrise" as a particularly interesting unexamined concept. It was 1400 AD. Everything was an unexamined concept. Not that merely examining the concept gets you anywhere. Remember, Regiomontanus would have been able to explain to the complete satisfaction of a nonexpert of the time that we already knew that the Sun went around the Earth. "The Earth is stationary, because if it were not we'd be flung off violently." Case closed. Manager puts a check-mark next to "Ptolemaic astronomy" and moves on to "Why did Atlantis sink?", a much more important question as far as he can tell.

ETA: just saw BS's post right above this. If complete documentation is impossible, why are we talking about it? And why were "undocumented assumptions" such a frequent feature in your fragmentary explanations?
 
"My" meaning is what Quine, et al. call the underdetermination problem deriving from the problem of “auxiliary hypotheses”.

The hypothesis “All copper conducts electricity” is neither verifiable nor falsifiable in isolation. We need auxiliary hypotheses from our basis of prior assumptions and beliefs regarding electrical conductance, conductance meters, copper wire, the atomic number of copper, how pure a substance might need to be to be called "copper", etc.

Therefore, no hypothesis (Quine, et al. show) is testable in isolation from the whole, which includes related hypotheses and theories, such as prior causes and conditions. Each of these premises contains terms which depend on similar hypotheses and theories, and so on...

Documenting an entire web of this type is, they say, impossible. I agree.

Your meaning remains obscure. First, not all copper conducts electricity. Who is supposed to make such a statement? How could such a statement be an aspect of physics research?
The circumstances under which copper is conductive, how conductive it might be, in what compounds it may be a semiconductor, and when it might not conduct at all are all well understood and documented as a consequence of many years of experimentation.
The isolated statement "All copper conducts electricity" is a vague and valueless generalization of a kind physicists do not make.
All you seem to have accomplished here is to have demonstrated for all to see that project managers (without a strong background in physics) would create a nightmare if allowed to meddle in physics research -- and would pose the greatest "risk" of all!
 
the language of project management (the engineering practice of using known techniques to get to desired goals)

I'm using project management as defined by PMI standards rather than engineering. That is: "The application of knowledge, skills, tools, and techniques to project activities to meet the project requirements."

...and guessing that it can be applied to physics

More or less, yes.

You say physics DOESN'T "analyze strategic organizational risk"

In the sense that this analysis is a task for senior leadership & administration, yes.

...but you have no idea if it's POSSIBLE to "analyze strategic organizational risk".

The availability of tools & techniques to perform such analysis suggests it is. Techniques include strategic planning, SWOT, Monte Carlo, PEST, etc., and are taught in grad business schools.

A well-informed concrete example of how to do it would go a long way towards convincing me that you could be right.

The last time I presented the first thing recommended in the Rational Unified Process, it was ridiculed: improve the definition & shared understanding of transformative research, especially at NSF.

Do you think different management can state the truth or falsehood of A from the premises "A -> B", "C -> B", "NOT (A && C)" and "B"?

No.

If not, you haven't solved the problem of distinguishing indistinguishable theories.

That's true, and it is a known problem...but it seemed proper to establish agreement that distinguishing relative merit actually is a problem. I think we have established that agreement.
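
For what it's worth, the logic half of this is easy to check mechanically. Here's a minimal brute-force sketch in Python (the encoding of the premises is mine, not anything posted upthread):

[code]
from itertools import product

# Enumerate all truth assignments for (A, B, C) and keep those
# satisfying the premises: A -> B, C -> B, NOT (A AND C), and B.
models = [
    (a, b, c)
    for a, b, c in product([False, True], repeat=3)
    if ((not a) or b) and ((not c) or b) and not (a and c) and b
]
print(models)  # [(False, True, False), (False, True, True), (True, True, False)]

# A shows up both True and False among the satisfying assignments,
# so the premises leave A's truth value undetermined.
print({a for a, _, _ in models})  # {False, True}
[/code]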
 
The availability of tools & techniques to perform such analysis suggests it is. Techniques include strategic planning, SWOT, Monte Carlo, PEST, etc., and are taught in grad business schools.

From the SWOT wikipedia page: "First, the decision makers should consider whether the objective is attainable, given the SWOTs. If the objective is not attainable a different objective must be selected and the process repeated." When we are doing science this is impossible by definition. We don't know whether the objective is attainable until we attain it.

Monte Carlo analysis is just a method of turning many small known probabilities into an overall system probability. There is no sense whatsoever that this can be applied to, say, the evaluation of a set of physics research proposals. Monte Carlo is "garbage in, garbage out".
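
To make the garbage-in-garbage-out point concrete, here is a minimal sketch (the component probabilities are invented for illustration): the "system probability" the simulation reports is nothing but a restatement of whatever inputs you fed it.

[code]
import random

def system_success_rate(component_probs, trials=100_000):
    """Estimate P(all components succeed) by naive Monte Carlo."""
    successes = sum(
        all(random.random() < p for p in component_probs)
        for _ in range(trials)
    )
    return successes / trials

# With independent components this just re-derives the product of the
# inputs. Change the guesses and the "answer" changes with them.
print(system_success_rate([0.9, 0.8, 0.95]))  # ~0.684 (= 0.9 * 0.8 * 0.95)
print(system_success_rate([0.5, 0.5, 0.5]))   # ~0.125
[/code]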

"PEST Analysis is a simple and widely used tool that helps you analyze the Political, Economic, Socio-Cultural, and Technological changes in your business environment. This helps you understand the "big picture" forces of change that you're exposed to, and, from this, take advantage of the opportunities that they present." You have got to be kidding.

Was that four completely random bits of project-management jargon, or was there a point? Remember, BS, that my assertion is that scientific discovery is different from other project management targets, because the goals are always unknown, the tools always new, the successes serendipitous. I have repeatedly made the point "Just because project management works in some fields does not mean it works in frontier science"---and here you are, yet again, repeating the generic mantra that you're a project manager and project management works. Yeah, in some fields. Your job is to convince someone that it works in frontier science, given the difference between science and engineering (or business, or whatever).

Here is an analogy for BS's sales pitch. (Just an analogy!)

  • BS: You should buy a Tesla roadster, it's a great way to commute.
  • BM: I live on a tiny, roadless island and commute by rowboat. What good is a Tesla roadster in my case?
  • BS: You are clearly not a car expert then. Car And Driver Magazine, who should know, say it's got excellent handling and low road noise.
  • BM: I need a vehicle that travels over water, not over roads. Does the Tesla travel over water?
  • BS: Let me explain how electric cars work. There is a motor and a battery ...


The last time I presented the first thing recommended in the Rational Unified Process, it was ridiculed: improve the definition & shared understanding of transformative research, especially at NSF.

Amazing how you manage to continue hinting at the existence of "recommendations" without actually posting anything concrete.

Do you think you know how to "improve the definition and shared understanding of" transformative research? If so, post details of your methods, or something else concrete and discussion-worthy along these lines.

That's true, and it is a known problem...but it seemed proper to establish agreement that distinguishing relative merit actually is a problem. I think we have established that agreement.

Why did you think it "proper to establish agreement" on something so obvious? Remember, BS, that you're pretending to present a new idea---a proposal or something---for applying some sort of management principle. Get to the proposal, the new part.

You do not need to propose that 1+1=2. We knew that, no thanks to PM.
You do not need to propose that we study spacetime. We knew that, no thanks to PM.
You do not need to propose that we need to put different weights on ideas whose formal truth-value is undetermined. We knew that, no thanks to PM.

Please get to the new thing we're supposed to get from your application of PM, the thing we supposedly don't already have from 300 years of practicing science.
 
crackpots' uncanny talent for face-plants

Geez. This is sort of like walking into a neurology ward and saying "I have some ideas about this because I've read an Oliver Sacks book or two." It's like walking into the FDA and saying "This would all be managed better if you read 'The Jungle' by Upton Sinclair."


To borrow a phrase from Star Trek, it is indeed fascinating.

When advocates of crackpot physics get into an extended argument, they almost always manage to steer the conversation toward tangentially related subjects in which their opponents have genuine expertise.

Much of that is easy to explain. Most advocates of crackpot physics know very little about real physics or mathematics, while their opponents are likely to be real physicists and mathematicians.

But it can't be easy to go from marginally crackpot advocacy of faster-than-light travel to full-throated crackpottery about quaternions, fractal dimensions, Gödel's second incompleteness theorem, and Quine's ontological relativity (to give just a few examples).

That took real effort. Why would someone who doesn't know a hyperbola from hyperbole, or contrapositive from negation, work so hard to shift the discussion from physics to mathematics, logic, and philosophy?

Just in case someone missed those howlers:
Buck Field said:
Planetary orbits and the rotation of the Earth have specific, measurable velocities associated with them, and math formulas describe ellipses of stable orbits and “fly-by” hyperboles through our solar system.

— Buck Field, "Revolution in the Understanding of Space-Time: A Project".
Online at http://fqxi.org/community/forum/topic/387
This seems like willful ignorance supporting a lack of understanding logic, resulting in sloppy categorization. For example: my assertion that 'long-term, persistent problems in physics exist' is not refuted by correctly citing millions of fabulous successes in physics. It is refuted by demonstrating the contra-positive, i.e.: demonstrating "no instances of long-term, persistent problems in physics exist".

And how does someone who knows so little about mathematics, logic, and philosophy manage to guide the conversation toward the narrow subareas of those subjects in which their opponents have significant knowledge and expertise?

It's uncanny.

We know BurntSynapse has put some effort into opposition research, and some of his opponents have admitted to taking philosophy courses as an undergraduate, but how could BurntSynapse have known W V O Quine had written some of the textbooks for those courses? Several of his opponents have admitted to graduate study in physics or mathematics, but how could BurntSynapse have known one of their PhD qualifying exams stressed logic and recursion theory? It must have been obvious that some of the folks here teach at universities, but how could BurntSynapse have known one of them had taught a course in logic for PhD students in computer science, covering the standard metatheory up through Gödel's incompleteness theorems? And how could BurntSynapse possibly have known that so many of the participants in this thread have enough project and management experience to know the difference between competent project management and the bizarre caricature of project management BurntSynapse has been describing?

And that's just one example of the phenomenon. We've seen this sort of thing happen in thread after thread, year after year.

I don't understand why it happens so often, but I do have some conjectures to offer. I think many (though not all) advocates of crackpot physics eventually realize they aren't likely to get anywhere arguing about physics and math with physicists and mathematicians. If they were willing to drop the argument just because it's a lost cause, they wouldn't be crackpots, so they shift the argument in hope of finding some argument they can win. (We refer to this as moving the goalposts.)

When they change the subject, they're likely to shift toward some subject they think they understand. There probably aren't a lot of subjects they really understand (which is true of non-crackpots as well, of course), so their choice of subject is pretty limited, and they're fairly likely to pick a subject they understand less well than they realize (which, again, would be true for non-crackpots as well, but non-crackpots are more likely to back off when they realize they're losing an argument).

That would explain why the advocates of crackpot physics often veer off into subjects they don't understand very well, but how do they manage to pick subjects their opponents understand so much better?

Part of the answer, I think, is that their opponents aren't crackpots. As non-crackpots, they prefer to express opinions on matters in which they have at least some expertise, and are especially likely to discuss matters that lie within their special areas of expertise. Furthermore, they are likely to speak up only when the crackpot says something silly. As crackpots interact with non-crackpots, therefore, the non-crackpots tend to steer the crackpots toward topics the non-crackpots understand pretty well and the crackpots understand hardly at all.
 
I'm using project management as defined by PMI standards rather than engineering. That is: "The application of knowledge, skills, tools, and techniques to project activities to meet the project requirements."

And what are the project requirements of physics?
 
At this point, BurntSynapse has three options:
  • Admit that Gödel's incompleteness theorems do not imply any risk for physics.
  • Explain why he believes there is a serious risk of Peano arithmetic being inconsistent.
  • Pretend his bafflegab and name-dropping have been misunderstood.


BurntSynapse chose the third option. Why did I not see that coming?

To disguise his basic move, BurntSynapse has dropped all mention of Gödel, whose (misspelled) name he had been repeating so prominently, and whose second incompleteness theorem he had misinterpreted so badly. He is now pretending his argument had been based upon (BurntSynapse's misinterpretation of) Quine's underdetermination theory, which has absolutely nothing to do with Gödel or the incompleteness theorems or logic or mathematics, and hence has nothing whatsoever to do with the "undocumented assumptions" BurntSynapse had been asserting as a major risk resulting from physicists' use of mathematics.

Complex maneuvers score high on degree of difficulty. Had BurntSynapse pulled it off, he might have scored points.

As rhetorical devices go, name-dropping is pretty lame. The following example has more to do with crackpot mathematics, crackpot logic, and crackpot philosophy than with crackpot physics, but the perpetrator believes this all has something to do with undocumented assumptions in physics, which he believes to be a major risk.

In PM we call them undocumented assumptions, physics is not where such problems would come up because physics doesn't analyze strategic organizational risk from a perspective of administration (of a physics research portfolio).


Had BurntSynapse been saying anything intelligible here, he would have been repudiating his own argument. If the undocumented assumptions come up only in PM, not in physics, then any risk they create should arise only within PM, not within physics.

The existence of the risk was not identified by either PM or physics groups. It was philosophy of science people, as you listed earlier.


Lovely. BurntSynapse appears to be suggesting I agree with him, or have at least provided evidence for the risk he hallucinates.

I shouldn't have to point out that I have not listed "philosophy of science people" who have identified the risk BurntSynapse hallucinates. What I have done was more like the opposite: I have established that Gödel, who had been the philosopher BurntSynapse was mentioning most often in support of his hallucinated risk, provides no support whatsoever for BurntSynapse's alleged risk, and had in fact written a concrete algorithm for documenting the only assumption that could reasonably be construed as giving BurntSynapse even an imaginary reason to mention Gödel.

Quine's justification for his underdetermination theory showed that "no matter how much data comes in, it doesn't force us to a unique theory".


That's true enough. Had BurntSynapse stopped there, he wouldn't have been saying anything of interest, but he wouldn't have been saying anything so heroically silly as his next sentence:

If any web of belief we have can be adjusted to remain intact despite any falsifying evidence, it becomes very difficult to formally demonstrate one theory (Zeus) is better or worse at explaining falling than a theory of gravity.


That's hilariously wrong. It's also a grotesque distortion of what Quine was saying.

According to Quine (and most physicists, and most engineers), a theory's ability to explain falling is secondary to the theory's ability to predict what happens. If the Zeus theory were to deliver quantitative predictions that are just as good as those provided by Newton's theory, then we might have the luxury of considering its ability to explain falling. As it happens, however, Newton's theory delivers far better quantitative predictions.
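
To put a number on "quantitative predictions": Newton's theory turns "things fall" into a figure you can check against a stopwatch. A minimal sketch, assuming a one-metre drop:

[code]
import math

g = 9.81  # m/s^2, gravitational acceleration near Earth's surface
h = 1.0   # assumed drop height in metres

# Newtonian prediction for the fall time: t = sqrt(2h / g)
t = math.sqrt(2 * h / g)
print(f"predicted fall time: {t:.3f} s")  # ~0.452 s

# The Zeus theory "explains" the fall but predicts no number at all,
# so there is nothing here for a stopwatch to confirm or refute.
[/code]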

It's hard to know where BurntSynapse acquired his ignorance of Quine's philosophy, but I'm inclined to suspect it came from uncritical reading of crackpot web sites akin to those that misinformed him about vector calculus and quaternions, fractal dimensions, and Gödel's incompleteness theorems.

Burton Dreben, a colleague of Quine's at Harvard for much of his career, was also an authority on Quine's philosophical thought. In his journal paper on "Putnam, Quine, and the Facts", Dreben refutes a misinterpretation that may be related to BurntSynapse's:

Burton Dreben said:
For example, in hope of mitigating the misleading import of what had proved to be an unduly provocative comparison, Quine wrote in the 1980 foreword to the reprinting of From a Logical Point of View:
...in likening the physicists' posits to the gods of Homer, in ["On What There Is"] and in "Two Dogmas", I was talking epistemology and not metaphysics. Posited objects can be real. As I wrote elsewhere, to call a posit a posit is not to patronize it.
Quine, of course, is not saying we have the same evidence for the existence of Homer's gods and the existence of physical objects. Quine is saying that from the purely evidential standpoint—from within the order of Knowing—Homer's gods and physical objects are on an equal footing, that is, claims as to their existence are to be assessed in exactly the same way, must satisfy exactly the same standards of evidence. Hence, Quine would be the first to assert that the evidence we do have conclusively establishes that chairs, say, do exist, makes it highly probable that electrons do exist, and conclusively establishes that Zeus does not exist.


BurntSynapse continued:

I don't side with either of the radical positions adopted in the controversy that Quine generated, but the debates still going on now suggest to me a) this is generally accepted in that community as being a real problem, and b) we may consider the jury still out on the details.


I'm sure BurntSynapse's interpretation of an arcane controversy within the field of philosophy is just as valuable as his interpretation of Gödel's incompleteness theorems.

Addressing ben m, BurntSynapse wrote:

The last time I presented the first thing recommended in the Rational Unified Process, it was ridiculed: improve the definition & shared understanding of transformative research, especially at NSF.


Hilariously wrong. Improving "the definition & shared understanding of transformative research, especially at NSF" is definitely not the first thing recommended by the Rational Unified Process.

More importantly, BurntSynapse may be the only participant within this thread for whom the main point of the ridicule must be highlighted in yellow:

A specific example of what I would do differently, (given the opportunity) would be to modify the NSF definition of transformative research to better reflect the understanding of experts from relevant HPS disciplines focused on it.


So BurntSynapse's specific plan to achieve faster-than-light travel is to fiddle with the wording of a definition.

Who could argue with that?

BurntSynapse's bold plan might become controversial once he proposes specific changes to the definition, but he hasn't done that.
 
Hi BurntSynapse, I think you may find this lecture by Richard Feynman, "The Character of Physical Law: Finding New Laws," to be quite interesting, as he addresses some of the issues that you are interested in.

One of the things I found interesting, and pertinent to this discussion, is his suggesting that the ways in which prior laws have been discovered are things that physicists are all already trying, therefore (his suggestion is) the next revolution must come about through some other, historically unprecedented process. I'm not completely convinced, but it's a thought provoking idea.

In case my YouTube tags don't work, here's the link:
http://www.youtube.com/watch?v=MIN_-Flswy0

 
From the SWOT wikipedia page: "First, the decision makers should consider whether the objective is attainable, given the SWOTs. If the objective is not attainable a different objective must be selected and the process repeated." When we are doing science this is impossible by definition. We don't know whether the objective is attainable until we attain it.

This seems true both for a) strategic planning with a general goal like "better support transformative research" and b) precise, individual science research projects like "Sonny White's warp bubble experiments". I didn't know whether I could predict jury awards in nursing home lawsuits until after I'd assumed it was a possibility in my hypothesis, and built a model able to test the notion.

Attainability (or not) is typically much less certain the more precisely we define the goal (Sonny White), whereas the attainability of general goals (NSB) can be assessed quite accurately by defining them as simply as "improvement"...which is a pretty easy target to hit if we haven't even been trying until now.

The problem you point out seems close to what Mill, a predecessor of Quine, addressed with a set of inductive processes known as Mill's Methods, which could roughly be said to specify methods for making more reliable guesses.

You do not need to propose that 1+1=2. We knew that.

True, but what seems less familiar is that the equation only holds true if our framework assumes the addition operation on the left doesn't count for anything on the right. The equation implies two things and an operation is the same as two things. How often is that assumption of math documented compared to how often it is used? Essentially never.

Now, that answer easily seems specious if one doesn't understand the problem it's designed to answer.

In cognitive science connecting concepts creates a third concept, a principle of synergy which underlies humor as well. That's a 1+1=3.

Example: "I'm married, said Tom with abandon." When we realize an unexpected linguistic connection in that sentence, the humor is revealed, but doesn't actually appear anywhere within the sentence itself.

Please get to the new thing we're supposed to get from your application of PM, the thing we supposedly don't already have from 300 years of practicing science.

As explained a couple of times, there is nothing impossible as a state of nature that application of knowledge, skills, tools, and techniques somehow makes possible. The advantages of PM are relative improvements in cost, duration, and quality of results.

Process improvements which better support transformative research are what I hypothesize we might expect from appropriate application of PM to the design of science administration and organization.
 
This seems true both for a) strategic planning with a general goal like "better support transformative research" and b) precise, individual science research projects like "Sonny White's warp bubble experiments".

Get to the point. You walk into a room full of physicists and current physics decisionmakers. You say "Support transformative research". They'll say "We do." You say "No, project management says you need to do it strategically". They'll say "How would we do it strategically?"

Which is the same question you've been asked a hundred times.

True, but what seems less familiar is that the equation only holds true if our framework assumes the addition operation on the left doesn't count for anything on the right. The equation implies two things and an operation is the same as two things. How often is that assumption of math documented compared to how often it is used? Essentially never.

Wasn't there a Buffy episode where Xander accidentally summons a malicious project manager demon, making everyone speak only in bafflegab? Doesn't Sunnydale get sucked into Hell at the end?

As explained a couple of times, there is nothing impossible as a state of nature that application of knowledge, skills, tools, and techniques somehow makes possible. The advantages of PM are relative improvements in cost, duration, and quality of results.

Process improvements which better support transformative research are what I hypothesize we might expect from appropriate application of PM to the design of science administration and organization.

Or maybe Willow resurrects the ghost of Richard Feynman and saves them all?

Can you describe a "process improvement which better supports transformative research"? Of course you can't.
 
You walk into a room full of physicists and current physics decisionmakers. You say "Support transformative research".

It seems implausible that I (or anyone) would work for almost 9 years with people in TR support (or in anything), and then recommend working on what both I and they have already been working on for so long.

This seems like going downstairs and telling my wife "You know this house we're in right now, purchased 6 years ago, and that we've been living in every June through December since? We should move in."

I can only guess my recommendations for potential improvements to current TR support processes are being mistaken for implying that I think no such processes currently exist, despite quoting these processes with citations.

That suggests I should probably emphasize the history of official TR support guidelines more often and prominently in introducing the topic. I can't honestly recall when I last started with the NSF's preliminary Santa Fe workshop and then told the story of how TR support evolved, probably because my focus has been more on technical PM aspects.

Tracking the history with a narrative to the current state, then bringing in PM for analysis & recommendations, seems like a good way to better communicate such analysis.

It seems plausible that presenting to (perhaps overly) favorably biased audiences over a fairly long time has encouraged lazy structure with lots of gaps that might lead to the objection you raise here.
 
I can only guess my recommendations for potential improvements to current TR support processes are being mistaken for implying that I think no such processes currently exist, despite quoting them with citation.

WHAT RECOMMENDATIONS FOR POTENTIAL IMPROVEMENTS? POST SOME.

Note that in the 5-sentence opening paragraph of my post, you replied to the three sentences that introduce the idea that people will ask you what your recommendations are, and ignored the part where people ask what your recommendations are.

What are your recommendations?

Deep into your 160-post-and-counting failure to state any recommendations, why should anyone care?
 
I think perhaps we need to start a little simpler.

BurntSynapse, can you describe a process improvement that mitigates the risk of undocumented assumptions in "1+1=2"?

Context &/or goals determine that.

If a project aimed to enhance synergy with existing information processes that depended on 1+1=2 style outputs and measurements, I'd probably accept a contract if the project qualified.
 
WHAT RECOMMENDATIONS FOR POTENTIAL IMPROVEMENTS? POST SOME.

Here are four, with sub-points:

Perform gap analyses of TR support;
Incorporate HPS expertise in:
--- the gap analyses;
--- setting metrics & acceptance criteria;
--- relevant definitions;
Update current procedures to incorporate knowledge gained by above;
Develop educational materials to:
--- Train NSF PRP members in resulting:
........definitions;
........procedures;
........illustrative examples.
 
Here are four, with sub-points:

Perform gap analyses of TR support;
Incorporate HPS expertise in:
--- the gap analyses;
--- setting metrics & acceptance criteria;
--- relevant definitions;
Update current procedures to incorporate knowledge gained by above;
Develop educational materials to:
--- Train NSF PRP members in resulting:
........definitions;
........procedures;
........illustrative examples.

*fireworks!* A direct answer! Let me point out, hopefully for the last time, that the above could have been the answer for any of the ~50 previous posts by people asking you for specifics, recommendations, etc., rather than evasion. If there was some magic in my question-phrasing that made it possible for you to answer this time, let everyone know what it is so we can recreate it.

So, finally, something reasonably specific and unmistakable to address. Perhaps we can narrow the problem down to this: you believe your own bafflegab.

"Perform gap analysis of TR support".

I guess this is a term of art. Let's see:

wiki said:
In the management literature, gap analysis is the comparison of actual performance with potential performance. If a company or organization does not make the best use of current resources, or foregoes investment in capital or technology, it may produce or perform below its potential.

Identify the existing process
Identify the existing outcome
Identify the desired outcome
Identify the process to achieve the desired outcome
Identify Gap, Document the gap
Develop the means to fill the gap
Develop and prioritize Requirements to bridge the gap

Those are management-speak bullet-points wrapped around ... nothing. Seriously, there's nothing there. This is a Powerpoint-friendly way of saying "Think about how to do things better, then do them better." Transformative research is very, very hard to do better, because no one really knows whether or not they are doing it. You are daydreaming that there's a magical way of doing this "better", which will be revealed when someone thinks about it carefully. There isn't.

"Incorporate HPS expertise in ... "

Another meaningless bullet point. If the HPS expertise does not generate any actionable insights, there is nothing to "incorporate". You've attempted to cite HPS insights already, and you're nowhere near finding a fragment of anything worth "incorporating".

Update current procedures to incorporate knowledge gained by above;

... which presumes there *is* knowledge gained. Which there won't be. For reasons that have been explained to you.

I will save you some committee work and tell you how this will play out.

  • Perform gap analyses of TR support. "The NSF finds that we fund some high-risk, high-reward research, and would fund more if we had more money. This obvious and trivial point has been stretched out across six Powerpoint slides."
  • Incorporate HPS expertise in: "HPS experts were consulted but did not supply credible forward-looking goal-setting or analysis methods, or methods for obtaining such methods, etc."
  • Update current procedures to incorporate knowledge gained by above; "We slightly rephrased the transformative-research sentence in our boilerplate grant solicitation and review guidelines. Just to feel like we had accomplished something."
  • Develop educational materials to: "Given the lack of anything novel in our findings, the committee would rather stab itself in the eyes with a pipette than develop educational materials. Let's jump straight to the coffee break."

The hard part of the science-funding enterprise is that no one can predict which research efforts will, on completion, turn out to be important. That's true on the micro-scale (one grant) and the macro-scale (one subfield). That's the hard part now. That's still the hard part if you precede it with ten non-hard bullet points. It's still the hard part if one of your bullet points is "come up with a plan for making it easier". It's still the hard part if one of your bullet points is "convene a committee to study how to make a plan to make it easier." You have, as far as I can tell, no insight into the actual hard part of the problem---yet you manage to pretend that your managerial bullet points amount to insight---if they're not solutions themselves, they're the ability to find a solution. That's all part of what we call "bafflegab"---layers and layers of management-speak and acronyms and PowerPoint that obscure the kernel of the problem.
 
The hard part of the science-funding enterprise is that no one can predict which research efforts will, on completion, turn out to be important. That's true on the micro-scale (one grant) and the macro-scale (one subfield).

The complex nature of the world makes your objection valid over a far greater domain than our narrow focus here on TR support models and methods. Mill's methods attempted to deal with this problem, and it has cropped up again and again.

In any area of empirical application, we can't say which driver will die in a crash or be saved by wearing a seatbelt, although the odds favor belting. We can't say which child will eventually get lung cancer, but statistically, second-hand smoke is bad for kids generally. As someone quite properly pointed out earlier, the Schwarz criterion is generally accepted as a valid means to support various positions on such matters.
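
For anyone who hasn't met it, the Schwarz criterion (the Bayesian information criterion) trades off a model's goodness of fit against its number of free parameters, which is exactly the sort of formal tool that can rank two theories that both "fit". A minimal sketch, with made-up numbers:

[code]
import math

def bic(k, n, log_likelihood):
    """Schwarz criterion (BIC): lower is better.

    k: number of free parameters; n: number of data points;
    log_likelihood: maximized log-likelihood of the model on the data.
    """
    return k * math.log(n) - 2.0 * log_likelihood

# Made-up numbers: two models fit 100 data points about equally well,
# but the second needs ten times as many free parameters.
print(bic(k=2, n=100, log_likelihood=-120.0))   # ~249.2
print(bic(k=20, n=100, log_likelihood=-118.0))  # ~328.1 -- penalized
[/code]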

Perform gap analyses of TR support. "The NSF finds that we fund some high-risk, high-reward research, and would fund more if we had more money."

There are a couple of problems with this objection. The first is that your objection implies resource allocation percentages are not independent of resourcing level. In fact, increases in available funding can and do lead to the complete elimination of some outlays. For example: if a poor person gains $100M, we can easily imagine their bus fare costs dropping to zero. Specific allocation of funds by percentage can go up or down independent of funding. A separate issue is absolute levels based on gap analysis:

Gap analysis hopes to reveal potential improvements to resource allocation for portfolio effectiveness, rather than resource levels. Criticism of a well-established tool for not dealing with something for which that tool is not intended is hopefully not our strongest attack. If it actually is our strongest, I think most neutral observers would consider such criticism an endorsement of our opponent's tool or position.

I will save you some committee work and tell you how this will play out.

...Develop educational materials to: "Given the lack of anything novel in our findings, the committee would rather stab itself in the eyes with a pipette than develop educational materials.

During my attendance at and participation in TR support discussions with directorate heads and two NSB chairs over the course of many years, materials of this type were developed and are now part of the standard PRP member packet. I therefore doubt conversations anything like that will occur anytime soon in Arlington, absent fairly serious changes in priorities, IMO.

Interactions you suggest would seem to have near-term plausibility in other venues though, and represent understandable objections.
 
Here are four, with sub-points:

Perform gap analyses of TR support;
Incorporate HPS expertise in:
--- the gap analyses;
--- setting metrics & acceptance criteria;
--- relevant definitions;
Update current procedures to incorporate knowledge gained by above;
Develop educational materials to:
--- Train NSF PRP members in resulting:
........definitions;
........procedures;
........illustrative examples.

In other words, not a single pragmatic suggestion.

Nothing at all that will aid in dealing with the issue of not being able to travel FTL.

Just empty semantics.

Try to come up with something utilitarian; it is obvious you have never sat in on a funding review for the NSF.

"We have thousand of applications of only enough for 5% of suggested research proposals but if we fine tune our 'gap analysis'..."
 
The Register today has a somewhat interesting/amusing article about bafflegab. Anyone up for a game of "BS or not"?

-Fully leverage internal and external partnerships to collaboratively discover targets;
-Collectively foster an environment that encourages and rewards diversity, empowerment, innovation, risk-taking and agility;
-Update current procedures to incorporate knowledge gained;
-Enable better, more efficient management of the mission and business by establishing new, modifying current, and eliminating inefficient, business processes;
-Reveal potential improvements to resource allocation for portfolio effectiveness;
-Counterpoint the surrealism of the underlying metaphor.

It's all BS
 
