
NIST Petition Demands Corrections

Let's attempt to keep this civil, LashL.

Labelling the petitioners, or any other 'non-clowns' as such, is a tactic that only reveals your uncontrollable prejudice and judgmental attitude, and consequently undermines your credibility.

MM

No, what it reveals, Miragememories, is your lack of reading comprehension ability. As should be obvious, even to you, I did not label the petitioners at all. That was not my post but one clearly and properly quoted and identified as that of another poster.

In light of your inability to grasp even the simple concept of reading for comprehension something as simple as that, I am not surprised that you apparently fail to comprehend how or why the petition at issue is so poorly framed, poorly conceived, and rather an affront to honest and diligent scientists.

Don't you worry for a second about my credibility. It is perfectly intact. I cannot say the same for yours, however, and you owe me an apology for your gratuitous insults.
 
Last edited:
The trouble with the Journal of 9/11 Studies' so-called "peer review" process was that the editor not only was biased, being a believer in one side of the argument, namely the CT side, but was also submitting papers for review and, as editor, was picking the peer review committee members. So we have bias as well as conflict of interest.

As well, the selected committee members had the same bias, being members of the scholars group, a group whose members believe in the "inside job" theory.

So to even suggest that in the honest, scientific sense, that the journal had its articles "peer reviewed", is to make a mockery of the entire "peer review" process.

TAM:)
 
If I, as writer, were to submit a purely scientific paper on the merits of Ciprofloxacin versus Clarithromycin in terms of coverage in Community Acquired Pneumonia to a journal, I would not consider it valid for the editor to choose experts who worked for either Bayer or Abbott (which make Cipro and Clarithromycin, respectively) to review my work, unless there were equal representation of both companies on the review committee.

Not a great example, but you get my point. A purely scientific article should be "peer reviewed" by EXPERTS in the field in question, and by experts with little to no bias or investment in the paper's results.

TAM:)

 
The Doc, I have had just about enough of the true believers here myself. Please retract your statement or I will no longer be conversing with you. Call me petty or anything you like, but if I have to deal with this kind of childishness then I will take Darat's advice and simply put you on ignore.

He has nothing to retract. He specified which claim he was talking about, and you (presumably on purpose) tried to make it seem like it was another LC claim that was relevant. LC says 9.2 seconds. That's no approximation.

Gravy said:
You didn't read my previous posts carefully. They started the stopwatch from 0:00 more than two seconds after the collapse had commenced.

An unfortunate mistake, if true.

Yes. I'm sure they didn't see that mistake until it was too late. :rolleyes:

Most claims in LC are true.

Most claims in LC are outright lies. I guess "truth" in "truth movement" doesn't mean the same thing it does north of the border.

The viewer is left to ponder the significance of them and is encouraged to do further research.

And then he DOESN'T research, which means he trusts the conclusions given in the film. So why isn't the film more honest, exact and thorough in its treatment of the events?
 
The trouble with the Journal of 9/11 Studies' so-called "peer review" process was that the editor not only was biased, being a believer in one side of the argument, namely the CT side, but was also submitting papers for review and, as editor, was picking the peer review committee members. So we have bias as well as conflict of interest.

As well, the selected committee members had the same bias, being members of the scholars group, a group whose members believe in the "inside job" theory.

So to even suggest that in the honest, scientific sense, that the journal had its articles "peer reviewed", is to make a mockery of the entire "peer review" process.

TAM:)

I for one don't really know how to feel about peer review as an argument. I mean, doesn't it just take the 'peers' doing the review having the same political bias as you? Or the same preconceived conclusions?

What exactly constitutes peer review done properly? What's stopping me from sharing something I wrote with some mates and having them 'peer review' it and say it's accurate?
 
I for one don't really know how to feel about peer review as an argument. I mean, doesn't it just take the 'peers' doing the review having the same political bias as you? Or the same preconceived conclusions?

What exactly constitutes peer review done properly? What's stopping me from sharing something I wrote with some mates and having them 'peer review' it and say it's accurate?
The trouble comes in when groups like ST911 use the phrase "peer reviewed" with no qualifiers and allow people to assume that it is the same "peer review" as is used by academic and trade journals like SCIAM, JAMA, ASCE, ASME, etc. They're using equivocation to give their papers an air of scientific soundness that they do not warrant.
 
If I, as writer, were to submit a purely scientific paper on the merits of Ciprofloxacin versus Clarithromycin in terms of coverage in Community Acquired Pneumonia to a journal, I would not consider it valid for the editor to choose experts who worked for either Bayer or Abbott (which make Cipro and Clarithromycin, respectively) to review my work, unless there were equal representation of both companies on the review committee.

Not a great example, but you get my point. A purely scientific article should be "peer reviewed" by EXPERTS in the field in question, and by experts with little to no bias or investment in the paper's results.

TAM:)



Yet you don't seem too concerned about the conflicts of interest in the 911 commissioners...
 
I for one don't really know how to feel about peer review as an argument. I mean, doesn't it just take the 'peers' doing the review having the same political bias as you? Or the same preconceived conclusions?

What exactly constitutes peer review done properly? What's stopping me from sharing something I wrote with some mates and having them 'peer review' it and say it's accurate?

Peer review is a tricky issue, precisely because there is the possibility of that sort of bias. In general with the scientific press it's done anonymously - that is to say, the authors of the article don't know who the reviewers are, although the reviewers know who the authors are - and the reviewers are chosen from a panel by the staff of the journal, usually with a view to making sure that their expertise is relevant to the subject of the paper. Reviewers can choose not to review a paper if they feel they're not appropriately qualified, and can recommend a colleague who's better able to comment. Depending on the journal, an author will generally be able to request feedback from the reviewer and may be able to request an alternative review. The aim is to choose reviewers based on expertise rather than opinion, and drawn from as broad a base within the discipline as possible. There are criticisms of peer review as a system - in particular, it's sometimes asserted that it prevents unorthodox ideas from being accepted - but in general, like democracy, it's the worst possible solution except for all the other possible solutions.

In the long run, the only measure of whether peer review is being carried out effectively is the reputation of the journal. There tends to be an element of positive feedback there, because journals with good reputations attract good contributors and good reviewers.

The main things to look for in determining whether peer review is being done effectively are anonymity and impartiality of reviewers and a broad reviewer base. Another option is to look at the quality of the papers published; as a rule, if a relatively uninformed observer like myself can pick several pages' worth of holes in a paper without really trying, the reviewers haven't been doing their job.

Dave
 
Yet you don't seem too concerned about the conflicts of interest in the 911 commissioners...
Argumentum ad hominem tu quoque. If you want to discuss possible conflicts of interest with the 911 Commission I suggest you start a thread on that topic as it is not relevant to the topic of discussion on this thread.
 
If CD was used in WTC 1, 2 & 7, there is no reason to expect a familiar demolition profile.

This is something I have issue with. CTers regularly mention the appearance of the collapses as the first sign that something was amiss. And yet, when you start debunking their claims of CD, they retreat to this: "Well, it may not look like a conventional CD, but this may have been intentional."

The problem is that, in abandoning the very source of the CT belief, the rest of their arguments fail as well, since those arguments were most likely cherry-picked because they seemed to support conclusions based on the appearance of the collapses.

So I ask you this: did the collapses of the WTC look like a CD, or not? And if not, what is your starting point for believing that something was terribly wrong about those collapses?
 
The Significance of the flawed NIST Model

There appears to be some doubt about the significance of the flaws in the NIST WTC 1 & 2 computer models.

It has been suggested that the failure of the NIST model, in the less severe, base, and extreme case scenarios, to produce the observed opposite-side exit damage is insignificant, because it only indicates that even the worst-case scenario was an underestimate and therefore still valid.

The problem with this is that we know landing gear exited WTC 1 (North Tower).
"Landing gear was observed exiting the south side of WTC 1 at about 105 mph." (NCSTAR 1-2B, p.344)

For WTC 1, the NIST WTC Report states: "No portion of the landing gear was observed to exit the tower in the simulations, but rather was stopped inside, or just outside, of the core."

We know that landing gear and a jet engine exited WTC 2 (South Tower).

“None of the three WTC 2 global impact simulations resulted in a large engine fragment exiting the tower.” (NCSTAR 1-2B, p.353)

If the model fails in all 3 scenarios to reproduce an observed fact that should have occurred in the model, then the model is behaving in a manner contrary to the observed behaviour of the buildings the models are supposed to mimic. Reality shows us one thing but the model must be creating something different because it is failing to show us the known reality.
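The "key observable" logic in the paragraph above can be sketched in a few lines. This is purely illustrative, not NIST's actual validation method: the 105 mph observed exit speed is from NCSTAR 1-2B, but the tolerance band and the zero simulated exit speeds are hypothetical placeholders.

```python
# Illustrative sketch of checking simulated scenarios against one key observable.
OBSERVED_EXIT_SPEED_MPH = 105.0  # landing gear exit speed, NCSTAR 1-2B, p.344
TOLERANCE_MPH = 25.0             # hypothetical acceptance band, not NIST's

# Hypothetical simulated exit speeds; 0.0 means the gear stopped inside the tower.
simulated = {"less_severe": 0.0, "base": 0.0, "more_severe": 0.0}

def matches_observable(sim_speed, observed=OBSERVED_EXIT_SPEED_MPH, tol=TOLERANCE_MPH):
    """A scenario matches the observable if its exit speed falls within tolerance."""
    return abs(sim_speed - observed) <= tol

failures = [name for name, speed in simulated.items() if not matches_observable(speed)]
print(failures)  # every scenario fails this particular observable
```

The point of the sketch is only that a model which misses the same observable in all three scenarios is missing it for a reason the scenarios share, not for a reason the scenario parameters control.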

This is very important because NIST concludes that collapse initiation was the result of aircraft impact damage and the subsequent steel weakening fires.

The NIST models are accurate up to the point of aircraft entry into the towers. Once the aircraft are inside the towers, we have no idea how the models are handling the collision data, because they fail to reproduce what was observed on the wall opposite the collision.

Since the aircraft parameters were precisely known, and since the parameters of impacted towers were precisely known, the model should have accurately mimicked what actually occurred inside the towers concluding with the observed opposite wall damage, which would validate the accuracy of the models.

Without that impact/exit corroboration, we know that the model is taking known data, i.e. the landing gear that was known to eject from the opposite wall of WTC 1 (NCSTAR 1-2B, p.344), and is processing it differently: "No portion of the landing gear was observed to exit the tower in the simulations, but rather was stopped inside, or just outside, of the core." (NCSTAR 1-2B, p.345).

It also means exit wounds that existed in reality do not exist in the model, which means the ejected jet fuel and the observed fireball at those points are now 'internalized' and factored differently inside the model.

Here's an example of one problem with the NIST Computer Model for WTC 1 (North Tower);

The landing gear is not ejecting from WTC 1 (North Tower) at 105 mph as it was observed to do. We therefore have to assume the model, which has all the data for the impacting jet, is processing this heavy steel landing gear and its kinetic energy contrary to the known reality, because of its failure to show it exiting the building at 105 mph.

Erroneously, the NIST computer model must be mishandling this heavy landing gear, which should be ejecting from the opposite wall at 105 mph, and is modeling it in some unexpected fashion inside the building model. We know the model has accurate data for both the landing gear and the building.

Clearly, this means significant, erroneous damage is being created inside the tower by the model. Damage that could NEVER HAVE OCCURRED.

We now have a Model of the tower which has greater damage inside than we know was there. The model maintains this excess damage for all 3 scenarios since the generated error is based on the part of the event they all share in common.

Even with this erroneous additional damage, the less severe and the base scenarios, were unable to induce the flawed WTC 1 Model to start a collapse initiation!

Once NIST subjected this flawed WTC 1 Model to their extreme case scenario, they were able to successfully achieve a collapse initiation.

Similarly, in the WTC 2 (South Tower) Model we have landing gear and a jet engine apparently wreaking havoc when the observed reality was that they exited the opposite side of impact.

Once NIST subjected this flawed WTC 2 Model to their extreme case scenario, they were able to successfully achieve a collapse initiation.

This is why the flawed WTC 1 and WTC 2 Models significantly and critically reflect on the observed results.

MM
 
The NIST report also talks about why, with minor modifications that are well within the margin for error in model generation, both the base and severe case will match concerns about landing gear and engine fragments. The less severe case will not match the landing gear event, though it can also match the engine fragment observation.

Please read the chapter before posting such ignorance again.

"Oooh, I like to dance the little sidestep
Now they see me now they don't
I've come and gone..."
 
No, what it reveals, Miragememories, is your lack of reading comprehension ability. As should be obvious, even to you, I did not label the petitioners at all. That was not my post but one clearly and properly quoted and identified as that of another poster.

In light of your inability to grasp even the simple concept of reading for comprehension something as simple as that, I am not surprised that you apparently fail to comprehend how or why the petition at issue is so poorly framed, poorly conceived, and rather an affront to honest and diligent scientists.

Don't you worry for a second about my credibility. It is perfectly intact. I cannot say the same for yours, however, and you owe me an apology for your gratuitous insults.

I apologize for wrongly attributing those comments to you.

Your excessive quoting, sandwiched around your single sentence, led me to falsely assume some of the quoted material was your own.

The "petition denied" part really sucked me in.

And look at the obvious pleasure I've given you, since now you progress from a simple statement of "it bears re-reading" to an extensive reply dumping all over me.

I have nothing bad to say about honest and diligent scientists.

MM
 
Why would they say only the "less severe case" would not match the landing gear event? The plane entering the model should be identical in all 3 scenarios since that is not a variable. The landing gear should have exited at 105 mph in ALL 3 scenarios. What parameters are they tweaking that would result in a constant becoming a variable?



MM

Sorry, but in the real world, chaotic processes are not as deterministic as you would like. When the airliner came apart, there were many unknowns that could have influenced exactly the manner in which the pieces separated from each other, collided again, and rebounded.

Computer models are approximations. There is no way to know the exact value of every initial condition, including location of office furniture, movement of people within the building, or office supplies that may have been stored there. All it would take is to deflect a piece of debris a tiny bit to make the difference between hitting a support column or travelling straight through the building.
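The sensitivity described above, where a tiny deflection decides whether debris hits a column or passes through, can be shown with a toy calculation. The numbers here (distance, deflection angles, column half-width) are hypothetical, chosen only to illustrate the scale of the effect.

```python
import math

def lateral_offset(distance_m, deflection_deg):
    """Sideways drift of a straight-line debris path after a small deflection."""
    return distance_m * math.tan(math.radians(deflection_deg))

# Hypothetical geometry: a column half-width of 0.25 m, located 20 m into the building.
COLUMN_HALF_WIDTH_M = 0.25

def hits_column(deflection_deg, distance_m=20.0):
    # The fragment starts aimed at a gap beside the column; a small deflection
    # can swing its path far enough sideways to strike the column instead.
    return lateral_offset(distance_m, deflection_deg) > COLUMN_HALF_WIDTH_M

print(hits_column(0.5))  # 0.5 deg over 20 m is ~0.17 m of drift: still clears
print(hits_column(1.0))  # 1.0 deg over 20 m is ~0.35 m of drift: strikes the column
```

Half a degree of unknowable deflection, well below anything the initial conditions could pin down, is the difference between a clean pass and a column strike 20 m downstream.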

In answer to your question: I don't know what parameters were being tweaked, but they certainly weren't "constants" that were turned into "variables".
 
You do like to; "Oooh, I like to dance the little sidestep"

You're using the equivalent of going "shush" and using double talk.

"Minor modifications", "margin of error", "match concerns"...how about "muddying the waters" with dismissive explanations that fail to validate their reason, or yours, for little or no concern about something that shouldn't be, and an error factor that consequently they have no right to be confidently sure of?

Why would they say only the "less severe case" would not match the landing gear event? The plane entering the model should be identical in all 3 scenarios since that is not a variable. The landing gear should have exited at 105 mph in ALL 3 scenarios. What parameters are they tweaking that would result in a constant becoming a variable?

I think you and NIST prefer obfuscation to a reasoned response.

MM

The problem here is a matter of education.
What an engineer accepts as a "minor modification" can seem, to someone uneducated in the exact field under discussion, a major change in direction. I have no problems with the approximations and assumptions. Real life cannot be modeled with 100% accuracy, especially when the event being modeled is a one-time scenario, with approximations available for only half the event (no photos of the first hit), and for which only a resolution of +/- 5 feet is available (which, btw, is <2.5% of the building width, but is a near 50% error on a single floor--which is vital).
The reason we don't have a problem with those approximations is because we realize, through experience and training that It Doesn't Really Make a Significant Difference in the overall analysis.
To the untrained ear, an "A" note at 430 Hz is not a big deal--to the trained ear, it is horribly flat. To the trained musician, fixing the problem is simple. To the untrained, it is next to impossible.
Who is right? A big deal depends on POV, and training.
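For what it's worth, the flatness of that 430 Hz "A" is easy to quantify: pitch deviation is conventionally measured in cents, 1200 per octave, relative to concert pitch A440.

```python
import math

def cents(freq_hz, ref_hz=440.0):
    """Deviation of a frequency from a reference pitch, in cents (1200 per octave)."""
    return 1200.0 * math.log2(freq_hz / ref_hz)

# An "A" played at 430 Hz, measured against concert pitch A440:
deviation = cents(430.0)
print(round(deviation, 1))  # about -39.8 cents, i.e. nearly 40% of a semitone flat
```

A deviation of roughly 40 cents is far beyond what a trained listener tolerates, which is the "horribly flat" the post describes; the untrained listener simply has no scale to hang it on.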
 

I remind ALL PARTICIPANTS in this thread that new, more stringent interpretations of the forum rules are in place. Keep things civil going forward; attack the argument, and not the person.
Replying to this modbox in thread will be off topic. Posted By: jmercer
 
Sorry, but in the real world, chaotic processes are not as deterministic as you would like. When the airliner came apart, there were many unknowns that could have influenced exactly the manner in which the pieces separated from each other, collided again, and rebounded.

Computer models are approximations. There is no way to know the exact value of every initial condition, including location of office furniture, movement of people within the building, or office supplies that may have been stored there. All it would take is to deflect a piece of debris a tiny bit to make the difference between hitting a support column or travelling straight through the building.

In answer to your question: I don't know what parameters were being tweaked, but they certainly weren't "constants" that were turned into "variables".

I am aware that computer models are approximations and what this signifies.

The thrust of my argument dealt with known information that should have been reproducible in a valid computer model. The model is supposed to be created so that it responds as accurately as possible based on what is known about what is being modeled. It was known that the landing gear exited the opposite side of the building at 105 mph.

However, in the NIST WTC Report, they state, for WTC 1 that neither the base case nor the more severe case matched this “key observable”. For WTC 1, the WTC Report states: “No portion of the landing gear was observed to exit the tower in the simulations, but rather was stopped inside, or just outside, of the core.” (NCSTAR 1-2B, p.345).

This indicates a flawed model and raises the question: how closely did the model have to match observations before NIST would have ruled the model not viable?

MM
 
If the model fails in all 3 scenarios to reproduce an observed fact that should have occurred in the model, then the model is behaving in a manner contrary to the observed behaviour of the buildings the models are supposed to mimic.

That's a ridiculous burden of proof. No computer model could accurately model something as inherently complex and chaotic as the impacts and collapses of the towers. At best, you get a best guess based on available evidence.

Or do you think you could come up with an exact simulation?
 
