From the way you're presenting it, it seems mostly a combination of educated guesswork and ideological indoctrination. Whatever it is, it sure ain't a logic, though.
"A logic" typically refers to a formal system, yes, but the study of logic historically and appropriately refers to more than that. It is the study of reasoning (in a prescriptive sense), the distinction between reasoning effectively and not.
Historically alchemy was a part of science, that doesn't mean much.
When I think of logic I think of the study of logical systems, aka formal systems with semantics. All of these have precise definitions, precise methods, etc. As opposed to that "informal logic", which seems to be some vague set of heuristics floating around in mid-air without grounding.
Not at all. The rules for judging the strength of, say, an argument by analogy are well-motivated, though in practice certainly more vague than a formal logic. That's the nature of the beast.
Not limited to Bayesian. I find the view that any meaningful reasoning can be done without a formal framework to be a fantastic fiction.
Except that your assignment of "trust" to the "authorities" in this case (US government and intelligence agencies) is ideological rather than empirical. By this logic I can assign "trust" to the pope and then claim to have evidence for us having immortal souls, because the pope says so.
All that's happening here is that people are packaging their ideological preferences under "trust assignments" and then claiming that their appeals to these "trusted" authorities aren't fallacies because reasons.
And that's the thing that bothers me about this: if people were to just say "my ideology requires me to believe the claims", then sure, I wouldn't have a problem with that. But no, they just have to go and package it as "critical thinking". In particular, the appeals to scientific authority to support the case are an abuse of science at a level no better than so-called "creation science".
No, we must distinguish between evaluating an argument form (very loosely understood, in an informal setting) and the truth of the premises.
Suppose I'm dead wrong about whether these sources are indeed trustworthy authorities. Then some of the premises of my argument are false, and so the argument does not support the probability of the conclusion.
In deduction, we distinguish between validity (appropriate form, roughly) and soundness (valid with true premises). I apologize that I don't seem to have my old text at hand, so I do not recall the appropriate terms for induction, and I am not sure that the distinction between the two concepts is as clearly made there. But, given that A is a trustworthy authority on X, from A's assertion that X, I can infer that X is probable. The better the evidence of trustworthiness and expert knowledge, the more probable X will be.
Of course, my claims about A's trustworthiness or knowledge could be false. In that case, it would still be so that, had my premises been true, X would be probable. Since some of my premises are false, X is not probable.
Very similar things happen in deduction. Many times, in a mathematical proof, I use a statement which I am damn sure is a theorem in order to prove something, but I'll be darned if I'm not wrong. It happens. Individuals' abilities to determine the truth of premises are prone to error.
In this case, we are worse off, since you and I have no good way of ensuring that, through patient discussion and careful consideration, we will come to agree on whether or not these agencies and the bipartisan committees are reliable authorities. The stuff of political discussion is messy compared to mathematics. So it goes.
You are doing statistical reasoning. You've earlier established a correlation between "hearing funny noise" and "having bad starter" which you then use the next time you hear the noise. This could easily be put in a Bayesian framework as well.
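To make the point concrete, here is a minimal sketch of that Bayesian framing. Every number in it is invented purely for illustration (the prior on a bad starter and the two likelihoods for hearing the noise are assumptions, not data):

```python
# Hypothetical Bayesian update for the starter diagnosis.
# All probabilities below are made up for illustration only.

def posterior_bad_starter(p_bad, p_noise_given_bad, p_noise_given_ok):
    """Bayes' theorem: P(bad starter | funny noise)."""
    p_ok = 1.0 - p_bad
    numerator = p_noise_given_bad * p_bad
    evidence = numerator + p_noise_given_ok * p_ok
    return numerator / evidence

# Assumed prior: 10% of cars brought in have a bad starter.
# Assumed likelihoods: the funny noise occurs in 80% of
# bad-starter cases, but only 5% of the time otherwise.
p = posterior_bad_starter(0.10, 0.80, 0.05)
print(round(p, 3))  # prints 0.64
```

Hearing the noise lifts the mechanic's credence in a bad starter from 10% to 64% under these assumed numbers; the informal "I've heard that noise before" judgment is doing the same kind of update implicitly.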
Goodness, who would put it in a Bayesian framework?
I would rather determine whether the analogy between the two situations is sufficiently compelling to make a probable diagnosis. I haven't the time or interest to muck about with statistical methods before testing the starter.
The informal methods of reasoning we use are imperfect and vague, and would be much better replaced by statistical methods in many, many instances. But we live in the world, and we reason as things happen, and while it may be jolly fun and useful to build artificial agents that reason more formally, we are not those agents. Thus, we ought to concern ourselves with ensuring that our understanding of these informal methods is as thorough as is practicable.
Sorry, I won't be teaching freshmen that they need to use Bayesian methods in order to reliably diagnose starter issues.