
MM, Let's Discuss NIST

There is no rebuttal to dogma.

People convinced in their beliefs open no doors to doubt.

It's not a matter of providing evidence. I see little indication that anyone here cares about proof or reasonable doubt.

You can call me names and label my responses all you like but I'm not so dumb as to play the fool's game of trying to have a reasonable discussion with unreasonable people. [And on and on and on...]

This is the longest and most ironic excuse yet in response to my question, which was to show me where, in the NIST report, the NIST investigation team did what you claim they did.

You made that statement. You back it up.
 
I suppose you're right. The problem was that I asked him for genuine, specific criticisms, and he provided rather paltry generalist ones.

[snipped for brevity]

My point is that it's very clear that MM has never read the NCSTAR, and is rather parroting claims and criticisms he heard from other websites. The focus on UL and on the simulations is very typical of the scholars for truth garbage and David Ray Griffin's claptrap. Therefore, pointing to specific references in the NCSTAR is really pretty worthless.
I guess so, but you gave him an opportunity to dodge, which he did. Then again, he would have in any case.

Ah well...
 
Wow. 144 posts in this thread, and MM still hasn't provided a single aspect of the NIST report to have issue with.

-Gumboot
 
I suppose you're right. The problem was that I asked him for genuine, specific criticisms, and he provided rather paltry generalist ones. For instance:



Doesn't really allow me to reference anything. I could point to NCSTAR1-2 and 1-5, which detail the specifics of the computer model. But that's not really much better than saying "you're wrong."

Nowhere, however, does NIST state "Our entire thesis is based on this computer model." I'm aware of a few executive summaries which state "The sequences are supported by extensive computer modeling and the evidence held by NIST." (NCSTAR1-6, lxvi) That's a misrepresentation of NIST's conclusion at best, and a blatant lie at worst.



I've asked him repeatedly to point to where NIST states that they were forced to use the extreme case. Thus far, he's blustered and has not produced anything resembling evidence. I honestly don't know where he got this conclusion or what part of the report says it. I can't do anything other than state that NIST began its investigation by reviewing hundreds of videos and thousands of photographs, and by interviewing firefighters and first responders. It seems to me that the physical scenario was greatly considered from the beginning of the investigation. NCSTAR1 states, "To increase confidence in the simulation results, NIST used the visual evidence, eyewitness accounts from inside and outside the buildings, laboratory tests involving large fires and the heating of structural components, and formal statistical methods to identify influential parameters and quantify the variability in analysis results." (NIST NCSTAR1, xxxvii)

My point is that it's very clear that MM has never read the NCSTAR, and is rather parroting claims and criticisms he heard from other websites. The focus on UL and on the simulations is very typical of the scholars for truth garbage and David Ray Griffin's claptrap. Therefore, pointing to specific references in the NCSTAR is really pretty worthless.

From the NIST Report;
The more severe case (which became Case B for WTC 1 and Case D for WTC 2) was used for the global analysis of each tower. Complete sets of simulations were then performed for Cases B and D. To the extent that the simulations deviated from the photographic evidence or eyewitness reports [e.g., complete collapse occurred], the investigators adjusted the input, but only within the range of physical reality. Thus, for instance,...the pulling forces on the perimeter columns by the sagging floors were adjusted... (NIST, 2005, p. 142)
The primary role of the floors in the collapse of the towers was to provide inward pull forces that induced inward bowing of perimeter columns. (NIST, 2005, p. 180)

Just a sample.

MM
 
Gravy falls back on his firefighter quotes and the rest of you are content to believe what you are told no matter how unlikely.
Prove one claim of mine false, O fixéd star of ignorance.

You stand behind your words, don't you? Then proceed.
 
From the NIST Report;
The more severe case (which became Case B for WTC 1 and Case D for WTC 2) was used for the global analysis of each tower. Complete sets of simulations were then performed for Cases B and D. To the extent that the simulations deviated from the photographic evidence or eyewitness reports [e.g., complete collapse occurred], the investigators adjusted the input, but only within the range of physical reality. Thus, for instance,...the pulling forces on the perimeter columns by the sagging floors were adjusted... (NIST, 2005, p. 142)
The primary role of the floors in the collapse of the towers was to provide inward pull forces that induced inward bowing of perimeter columns. (NIST, 2005, p. 180)

Just a sample.

MM
Yo, Einstein: they made those adjustments BOTH WAYS, towards more damage and towards less damage. The fact that you don't know that is evidence that you haven't read the report. Please do so, rather than continuing to argue from ignorance.
 
From the NIST Report;
The more severe case (which became Case B for WTC 1 and Case D for WTC 2) was used for the global analysis of each tower. Complete sets of simulations were then performed for Cases B and D. To the extent that the simulations deviated from the photographic evidence or eyewitness reports [e.g., complete collapse occurred], the investigators adjusted the input, but only within the range of physical reality. Thus, for instance,...the pulling forces on the perimeter columns by the sagging floors were adjusted... (NIST, 2005, p. 142)
The primary role of the floors in the collapse of the towers was to provide inward pull forces that induced inward bowing of perimeter columns. (NIST, 2005, p. 180)

MM

So 4 pages of bluster and this is what you come up with? So the investigators adjust the parameters of the model so that it accurately models what happened. What is your issue? The more severe case was the one that actually reflects the video and photographic evidence.

NIST NCSTAR1 Section 6.14.1 said:
The less severe cases were discarded after the aircraft impact results were compared to observed events. The middle cases [...] were discarded after the structural response analysis of major subsystems were compared to observed events.

Frankly, I don't see your comments as a valid criticism of the NIST report. The severe case was modeled because it was the one that most accurately reflected the observed damage. Why would NIST choose to model less severe cases that patently disagreed with things like the presence of smoke and fire and broken windows?

Your criticism stems from a blatant misrepresentation of NIST's methodology. You, like Griffin, think that NIST changed the model to fit the hypothesis. Rather, the NIST report clearly states that the model was changed to fit the data, giving precedence to observable data, not to the model.
 
So 4 pages of bluster and this is what you come up with? So the investigators adjust the parameters of the model so that it accurately models what happened. What is your issue? The more severe case was the one that actually reflects the video and photographic evidence.

Exactly. This is just like fellow laughing-stock A-Train, finding it "suspicious" that the recovered CVR from Flight 93 supports the 9/11 Commission report.

It's a fancy variation of the "affirming the consequent" fallacy.

Anyway, distractions aside, here's the original quote from MirageMemories:
The NIST model continued to fail to match the observed visual evidence until NIST used an extreme case scenario with unsubstantiated, speculative data.

Independent laboratory fire-testing (Underwriter Labs) failed to produce the necessary results required to validate the extreme case scenario NIST 'coaxed' out of their computer model.

And we're still waiting for any kind of support.
 
And we're still waiting for any kind of support.

I think this is what he's been driving at all along. My issue is not with the fact that NIST used the extreme case, it's with his characterization that it was speculative data. That's a terrible misrepresentation of the method. NIST set out 3 distinct cases, and found through experimentation that all but the most severe of the considered cases were incapable of matching the physical and visual evidence.

Again, I don't find this to be a valid criticism. Are scientists just supposed to throw up their hands when the first iteration of the model does not work and proclaim, "It was explosives! We're all wrong!"?
 
So 4 pages of bluster and this is what you come up with? So the investigators adjust the parameters of the model so that it accurately models what happened. What is your issue? The more severe case was the one that actually reflects the video and photographic evidence.



Frankly, I don't see your comments as a valid criticism of the NIST report. The severe case was modeled because it was the one that most accurately reflected the observed damage. Why would NIST choose to model less severe cases that patently disagreed with things like the presence of smoke and fire and broken windows?

Your criticism stems from a blatant misrepresentation of NIST's methodology. You, like Griffin, think that NIST changed the model to fit the hypothesis. Rather, the NIST report clearly states that the model was changed to fit the data, giving precedence to observable data, not to the model.

Bluster? LOL. You folks have that game down pat. I wouldn't think of spoiling your post frenzies.

I can't wait for some of your future thread topics;

What Dylan had for breakfast today.
What Dylan had for lunch today.
What Alex Jones had for breakfast today.
What Alex Jones had for lunch today.
Yadadada

Back to NIST;

I have no quibble that the extreme case model was the one that NIST required to generate their ultimate conclusion.

Since the collapse of the twin towers was such an unlikely event under the circumstances that existed, it stands to reason that extreme data would be required to achieve collapse initiation.

What I take issue with in particular, is that when the extreme data set failed to match the video evidence of a collapse, they further adjusted the input data until they persuaded the model to respond more closely to the observed data.

They never said how much they adjusted but only qualified this activity as;

"To the extent that the simulations deviated from the
photographic evidence or eyewitness reports [e.g., complete collapse occurred], the investigators adjusted the input, but only within the range of physical reality. For instance,...the pulling forces on the perimeter columns by the sagging floors were adjusted... (NIST, 2005, p. 142)

Physical reality? Well how definitive is that?

In a paper by fire-engineering experts, Lane and Lamont, in the UK, they stated: The basis of NIST’s collapse theory is… column behaviour in fire… However, we believe that a considerable difference in downward displacement between the [47] core and [240] perimeter columns, much greater than the 300 mm proposed, is required for the collapse theory to hold true… [Our] lower reliance on passive fire protection is in contrast to the NIST work where the amount of fire protection on the truss elements is believed to be a significant factor in defining the time to collapse… The [proposed effect] is swamped by thermal expansion …Thermal expansion and the response of the whole frame to this effect has NOT been described as yet [by NIST]. (Lane and Lamont, 2005.)

That models of WTC trusses at Underwriter Laboratories (UL) subjected to fires did NOT fail is also admitted in the final NIST report:

NIST contracted with Underwriters Laboratories, Inc. to conduct tests to obtain information on the fire endurance of trusses like those in the WTC towers.... All four test specimens sustained the maximum design load for approximately 2 hours without collapsing... The Investigation Team was cautious about using these results directly in the formulation of collapse hypotheses. In addition to the scaling issues raised by the test results, the fires in the towers on September 11, and the resulting exposure of the floor systems, were substantially different from the conditions in the test furnaces. Nonetheless, the [empirical test] results established that this type of assembly was capable of sustaining a large gravity load, without collapsing, for a substantial period of time relative to the duration of the fires in any given location on September 11. (NIST, 2005, p. 141)

An article in the journal New Civil Engineering (NCE) lends support to concerns about the NIST analysis of the WTC collapses. It states:

World Trade Center disaster investigators [at NIST] are refusing to show computer visualizations of the collapse of the Twin Towers despite calls from leading structural and fire engineers, NCE has learned. Visualisations of collapse mechanisms are routinely used to validate the type of finite element analysis model used by the [NIST] investigators. The collapse mechanism and the role played by the hat truss at the top of the tower has been the focus of debate since the US National Institute of Standards & Technology (NIST) published its findings....
University of Manchester [U.K.] professor of structural engineering Colin Bailey said there was a lot to be gained from visualising the structural response. “NIST should really show the visualisations; otherwise the opportunity to correlate them back to the video evidence and identify any errors in the modeling will be lost,” he said....
A leading US structural engineer said NIST had obviously devoted enormous resources to the development of the impact and fire models. “By comparison the global structural model is not as sophisticated,” he said. “The software used [by NIST] has been pushed to new limits, and there have been a lot of simplifications, extrapolations and judgment calls.” (Parker, 2005; emphasis added.)

MM
 
No hurt feelings here pussy, I mean WildCat.

The NIST Report hangs its case basically on its computer model.

The computer model is only as good as the data it's provided with.

NIST admitted that, in their model, the opposite building face from the aircraft entry was limited to a coarser design due to the limitations of their computing facility. Accuracy of their test simulations was based on matching actual visual evidence of damage to the opposite building face to what occurred in the simulation. These design compromises necessarily reduced the accuracy of their model's behaviour.

Actually, if you are referring to the FEA model, it is common practice in ANY FE model to use a combination of coarse and fine mesh depending on the areas of interest. Where the stress gradients are low, a coarse mesh will suffice. This is to allow the model to converge within a reasonable timeframe.

Engineering judgement is used to ensure that the FE model is appropriate.
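Lurker's point about coarse versus fine mesh is the standard accuracy-versus-runtime trade-off in discretized models. As a purely illustrative toy sketch (a 1D boundary-value problem solved by finite differences, with nothing to do with NIST's actual WTC models; the function name and mesh sizes are invented here), refining the mesh shrinks the error against the known exact solution:

```python
import math

def solve_poisson(n):
    """Central-difference solve of -u'' = pi^2 * sin(pi*x) on [0,1],
    u(0) = u(1) = 0, on a uniform mesh of n intervals.
    Exact solution is u(x) = sin(pi*x); returns the max nodal error."""
    h = 1.0 / n
    m = n - 1  # number of interior nodes
    # Tridiagonal system (1/h^2) * tridiag(-1, 2, -1) u = f
    a = [-1.0 / h**2] * m  # sub-diagonal
    b = [2.0 / h**2] * m   # diagonal
    c = [-1.0 / h**2] * m  # super-diagonal
    d = [math.pi**2 * math.sin(math.pi * (i + 1) * h) for i in range(m)]
    # Thomas algorithm: forward elimination...
    for i in range(1, m):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    # ...then back substitution.
    u = [0.0] * m
    u[-1] = d[-1] / b[-1]
    for i in range(m - 2, -1, -1):
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return max(abs(u[i] - math.sin(math.pi * (i + 1) * h)) for i in range(m))

coarse_err = solve_poisson(4)   # coarse mesh: noticeable error
fine_err = solve_poisson(32)    # refined mesh: much smaller error
```

The same logic motivates mixed meshing in a real FE model: spend the fine resolution where the solution varies sharply, and accept a coarser (cheaper) mesh where it does not.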

Lurker
 
[Off-topic flame-baiting deleted]

Back to NIST;

I have no quibble that the extreme case model was the one that NIST required to generate their ultimate conclusion.

Since the collapse of the twin towers was such an unlikely event under the circumstances that existed, it stands to reason that extreme data would be required to achieve collapse initiation.
Affirming the consequent again. You have not demonstrated that the collapse was an unlikely event, you merely asserted it. Stop doing that. Let's continue:

In a paper by fire-engineering experts, Lane and Lamont, in the UK, they stated: The basis of NIST’s collapse theory is… column behaviour in fire… However, we believe that a considerable difference in downward displacement between the [47] core and [240] perimeter columns, much greater than the 300 mm proposed, is required for the collapse theory to hold true… [Our] lower reliance on passive fire protection is in contrast to the NIST work where the amount of fire protection on the truss elements is believed to be a significant factor in defining the time to collapse… The [proposed effect] is swamped by thermal expansion …Thermal expansion and the response of the whole frame to this effect has NOT been described as yet [by NIST]. (Lane and Lamont, 2005.)

I assume you're referring to the unpublished, unreviewed presentation by Lane and Lamont, available here.

If you actually bother to read it, you will note that Lane and Lamont absolutely do not agree with you that the collapse was due to anything other than damage and fire. They are quite satisfied that no explosives, no space-based laser beams, no subterfuge of the kind occurred.

Instead, they are arguing -- and justifiably, in my opinion -- that there are still more physical mechanisms in something like the WTC case that deserve scrutiny. In particular, they argue for better modeling of thermal expansion and its impact on structural integrity. Which, by the way, is all over NIST NCSTAR 1-6; it hasn't been ignored; rather, there may be room for sharper refinement still.

They do so for three important reasons, all mentioned in the talk. The first is that their clients seek additional ways to solve safety problems, besides just slapping on excessive levels of fire protection, and better modeling will allow an architect to evaluate many different solutions. The second is that existing buildings may still have hidden vulnerabilities that might be inferred from a deeper analysis of what happened in the WTC towers. The third is that, like so many other facets of structural engineering, the WTC collapses were such a novel event that we still have much to learn.

But nothing in there about conspiracy theories, not even speculation. Sorry.

As usual, conspiracy nuts like yourself see disagreement between experts, which is a healthy and necessary component of the Scientific Method, as evidence that everybody is wrong, and therefore, through application of the False Choice fallacy, you must be correct. It isn't so. Lane and Lamont's objections are valid criticism intended to refine the NIST conclusions. You are misconstruing their words, intending to throw out the NIST report entirely. I call fraud.

If these are the best gems you and Steven Jones can bring back from the quote mine, then kindly retract your statement. You still have not backed up your claims about the NIST report, and you are even further away from casting doubt on its conclusions than you were before.
 
Affirming the consequent again. You have not demonstrated that the collapse was an unlikely event, you merely asserted it. Stop doing that. Let's continue:



I assume you're referring to the unpublished, unreviewed presentation by Lane and Lamont, available here.

If you actually bother to read it, you will note that Lane and Lamont absolutely do not agree with you that the collapse was due to anything other than damage and fire. They are quite satisfied that no explosives, no space-based laser beams, no subterfuge of the kind occurred.

Instead, they are arguing -- and justifiably, in my opinion -- that there are still more physical mechanisms in something like the WTC case that deserve scrutiny. In particular, they argue for better modeling of thermal expansion and its impact on structural integrity. Which, by the way, is all over NIST NCSTAR 1-6; it hasn't been ignored; rather, there may be room for sharper refinement still.

They do so for three important reasons, all mentioned in the talk. The first is that their clients seek additional ways to solve safety problems, besides just slapping on excessive levels of fire protection, and better modeling will allow an architect to evaluate many different solutions. The second is that existing buildings may still have hidden vulnerabilities that might be inferred from a deeper analysis of what happened in the WTC towers. The third is that, like so many other facets of structural engineering, the WTC collapses were such a novel event that we still have much to learn.

But nothing in there about conspiracy theories, not even speculation. Sorry.

As usual, conspiracy nuts like yourself see disagreement between experts, which is a healthy and necessary component of the Scientific Method, as evidence that everybody is wrong, and therefore, through application of the False Choice fallacy, you must be correct. It isn't so. Lane and Lamont's objections are valid criticism intended to refine the NIST conclusions. You are misconstruing their words, intending to throw out the NIST report entirely. I call fraud.

If these are the best gems you and Steven Jones can bring back from the quote mine, then kindly retract your statement. You still have not backed up your claims about the NIST report, and you are even further away from casting doubt on its conclusions than you were before.

Nice bit of cherry picking R. Mackey.

You looked at my response and picked the only item you could maybe squeeze some counter-argument out of while ignoring the rest of my post.

Lane and Lamont were a small portion of my response. They provided an example of disagreement with NIST. I never said they disagreed with the NIST conclusions. You worked as much of your own spin as you could into that small part of my posting.

What hypocrisy!

You get on my case for not saying anything of substance, and when I do, you selectively ignore my cited examples of those experts who question NIST's behaviour.

And you claim to wonder why people refuse to waste their time presenting arguments here.

MM
 
MM, the fact that you're citing only one executive summary from NIST NCSTAR 1, and don't seem to be familiar with the contents of the other 9,700 pages, tells me that you've got some reading to do. Attempting to critique a report that you haven't read is a bad idea. We're not substitute teachers. We know the assignments. You haven't completed yours, and you can't fool those who have.
 
Nice bit of cherry picking R. Mackey.
I just showed how you were using one of your three sources totally out of context -- and you accuse me of cherry-picking?

You looked at my response and picked the only item you could maybe squeeze some counter-argument out of while ignoring the rest of my post.

Lane and Lamont were a small portion of my response. They provided an example of disagreement with NIST. I never said they disagreed with the NIST conclusions. You worked as much of your own spin as you could into that small part of my posting.

What hypocrisy!

You get on my case for not saying anything of substance, and when I do, you selectively ignore my cited examples of those experts who question NIST's behaviour.
You still never answered my original question.

Instead, you throw so much crap in the air, hoping some of it will stay aloft, that it's important to deal with it one piece at a time. Hence, I dealt with Lane and Lamont first.

Once I get some acknowledgement out of you, I'll be more than happy to discuss your other, equally quote-mined claims.

Deal?
 
Bluster? LOL. You folks have that game down pat. I wouldn't think of spoiling your post frenzies.

I can't wait for some of your future thread topics;

What Dylan had for breakfast today.
What Dylan had for lunch today.
What Alex Jones had for breakfast today.
What Alex Jones had for lunch today.
Yadadada
As this has nothing to do with your argument, it would have been wiser to edit this childish opening.
Back to NIST;

I have no quibble that the extreme case model was the one that NIST required to generate their ultimate conclusion.

This statement:
The NIST model continued to fail to match the observed visual evidence until NIST used an extreme case scenario with unsubstantiated, speculative data.
Indicated to me that you did. But I'm glad we agree on this much.
Since the collapse of the twin towers was such an unlikely event under the circumstances that existed, it stands to reason that extreme data would be
required to achieve collapse initiation.
I beg your pardon? What statistical analysis have you done that shows this event is unlikely?
What I take issue with in particular, is that when the extreme data set failed to match the video evidence of a collapse, they further adjusted the input data until they persuaded the model to respond more closely to the observed data.
The extreme data set was the one that most closely modeled the damage shown by the photographic evidence. Why do you take issue with this? Why shouldn't the best model agree with most of the physical data?

They never said how much they adjusted but only qualified this activity as; [...]

Physical reality? Well how definitive is that?
I'll give you a hint. NIST did not invent wildly speculative numbers or breach the laws of physics to make their model work. They didn't hammer the square peg into the round hole. Physical reality means just that: all of the numbers were physically possible, and there is absolutely no reason to discard the model based on that.
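The difference between fitting a model to data within physical bounds and inventing impossible numbers can be shown with a toy sketch. Everything here (the linear `model`, the `observed_bowing` value, the bounds) is made up for illustration and is not NIST's method or data:

```python
def model(pull_force):
    # Hypothetical linear response: predicted inward bowing (metres)
    # per unit of floor pull force. Invented for this sketch.
    return 0.02 * pull_force

observed_bowing = 1.4  # hypothetical "observed" value, metres

def calibrate(lo, hi, steps=1000):
    """Grid-search pull_force within [lo, hi] only, minimising the
    squared mismatch between the model and the observation."""
    best_p, best_err = lo, float("inf")
    for k in range(steps + 1):
        p = lo + (hi - lo) * k / steps
        err = (model(p) - observed_bowing) ** 2
        if err < best_err:
            best_p, best_err = p, err
    return best_p

# With generous bounds, the fit lands on the force that reproduces
# the observation. If physical bounds cap the force lower, the
# calibration stops at the bound rather than inventing an
# impossible value.
unbounded_fit = calibrate(0.0, 100.0)
bounded_fit = calibrate(0.0, 60.0)
```

The point is that a bounded calibration can only move the fit as far as physics allows; if the data demanded a value outside the bounds, the model would simply fail to match, which is itself informative.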
Thermal expansion and the response of the whole frame to this effect has NOT been described as yet [by NIST]. (Lane and Lamont, 2005.)
I'll leave you to R.Mackey's post on this. I would note, however, that this claim by Lane and Lamont is only half true. Thermal expansion is covered in exhaustive detail in NCSTAR1-6. Included are interesting notes on pages 113-115, where NIST shows the thermal response of large sections of the towers due to thermal expansion. Page 195 describes the structural response of an entire floor, including thermal expansion.

I understand Lane and Lamont's criticism that the entire frame was not modeled, but for the purposes of NIST's investigation, given the time frame and resource limitations involved, this was an acceptable course of action. I would make clear, however, that such a simplification does not make the model less convincing. Any enhancements made to the model to include the entire frame would not reverse the hypothesis or prove NIST wrong in any way.

That models of WTC trusses at Underwriter Laboratories (UL) subjected to fires did NOT fail is also admitted in the final NIST report: [Pages 142 - 143 in the revised draft]
The purpose of the UL test was not to simulate the real conditions in the WTC towers by combining load, stripped fireproofing and thermal response. Rather, the purpose, as stated by NIST, was to determine the effects of scale, SFRM thickness and test restraint conditions. Indeed, your argument seems to stem from a fundamental misunderstanding of what UL was attempting to accomplish. The results of their tests were not aimed at recreating the precise conditions of the WTC towers, but rather at establishing empirically three parameters whose influence on the destruction of the towers was paramount.

I also take issue with your characterization. NIST did not admit that the trusses remained intact, but rather correctly reported that they did. You will also note that NIST cautiously included the results because of the scaling issues. I would direct you to the fourth bullet point on page 143, showing that a full-scale floor subsystem did not achieve the 2-hour fire rating that the 1/2-scale system did.
An article in the journal New Civil Engineering (NCE) lends support to concerns about the NIST analysis of the WTC collapses. It states:
Before we get into this, I'm having a bit of trouble finding the full article. I can find stuff by Dave Parker, and stuff about NIST, but every link I follow is broken, including the Wikipedia article. Do you have the original weblink for this article?
World Trade Center disaster investigators [at NIST] are refusing to show computer visualizations of the collapse of the Twin Towers despite calls from leading structural and fire engineers, NCE has learned.
Ok, NIST's computer visualizations are listed here. NIST did not make a dynamic simulation of the collapse, but rather numerous simulations of events leading up to the collapse. That page was updated in August of 2006, and I see that the date of the article, according to your reference, is 2005. It may be that NIST has since released the animations; however, without the full article, I can't make that claim yet.
 
