Richard Gage Blueprint for Truth Rebuttals on YouTube by Chris Mohr

Status
Not open for further replies.
Sorry. I'm not responding to gibberish and thread disruption by nonsensical query.

No, you're responding with gibberish and thread disruption in answer to perfectly sensible and reasonable questions. Your premises have been disputed; is your response simply to refuse to discuss them, and do you think this will increase or decrease your credibility?

Dave
 
So is your argument that not everything is possible in programming, therefore programmers can predict exactly the distribution of material from a smashed egg?

Dave

My argument is that a program can predict exactly the possible range of distribution of material from a smashed egg and each possible distribution depending upon the accuracy of the defined materials.
 
Which is a worthless observation, even if true, as it makes no comment on the accuracy of the definition of the original boundary conditions required to obtain a given level of accuracy in the final prediction. The same is equally true of a similar statement concerning the simulation of a building collapse.

Dave
 
Prediction of where everything lands?

First the possible failures have to be recreated within and against the properties of the building.
 
Role playing action games are a multibillion dollar industry. The simulations can be programmed so that reactions to force are realistic. The most subtle nuance must be anticipated and programmed.

The programming of an accurate simulation, if it were possible, of a "COLLAPSE" would be child's play in comparison to the complexity of a popular state of the art role playing game.

Think of an egg hitting the ground and splattering. Now model it exactly, matching how the yolk would spread.

That would be a snap.

Excellent!

So you'll be the first truther to put his money where his mouth is, so to speak?

You'll be doing that for us?

I rarely expect you and you and you to get anything. The difference is that I know things. I know that an experienced programmer in that type of field, given the correct parameters, could easily perform the task of determining all the variations that could take place when the egg landed.

Whereas you all know that the egg would break.

ARE you an experienced programmer in that type of field?

Even if the correct parameters were agreed upon, complete, detailed, and sufficient, that would still be an enormous task. Using finite difference equations to figure out the temperature at various points of a simple and well understood heat exchanger (if you're working from scratch and only working in two-dimensional modeling) takes hundreds of lines of code (pure logic code, not HTML-sputtering VB). And it would be affected by stochastic processes, so if the goal is like a video game's, it only needs to look cool. If it has to be accurate, then a Monte Carlo simulation run thousands of times would only provide probabilities of things happening a certain way.
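The Monte Carlo point is easy to demonstrate. Here is a toy sketch (all the "physics" is invented for illustration, not a real fragmentation model): each run drops an "egg" whose fragments land at random distances, and repeating the run thousands of times yields only a distribution of outcomes, never one exact answer.

```python
import random
import statistics

def drop_egg(n_fragments=20, seed=None):
    """Toy model: each fragment lands a random distance from impact.
    The distances are drawn from an invented distribution, purely
    for illustration of the Monte Carlo idea."""
    rng = random.Random(seed)
    return [abs(rng.gauss(0, 10)) for _ in range(n_fragments)]

def monte_carlo(runs=5000):
    """Repeat the drop many times; report the spread of the
    farthest-flying fragment across runs."""
    farthest = [max(drop_egg(seed=i)) for i in range(runs)]
    return statistics.mean(farthest), statistics.stdev(farthest)

mean_far, sd_far = monte_carlo()
# sd_far is nonzero: every run lands differently, so the simulation
# delivers probabilities of outcomes, not a single prediction.
```

The nonzero spread across runs is the whole point: the best such a simulation can give you is a probability distribution.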

The programming task is no great shakes. It's a business-as-usual, been-there-done-that, humdrum task.

Far too many unknowns involved and far too many variables. No amount of claiming it's common sense on your part is going to change that. In fact, it just shows an appalling ignorance on yet another subject.

I don't care about the egg. I am interested in how "you KNOW" how long it takes to program simulations per the discussion.

Pretty much anything is possible in programming....

Saying that a programming task would be a snap is relative. It would be a snap for a team of experienced programmers with the necessary software.
No.

Then how come weather forecasting is so unreliable?
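The weather question has a precise answer: weather models are chaotic, so tiny errors in the initial conditions grow exponentially until the forecast is useless. The logistic map (a standard textbook example of chaos, not anything from this thread) shows the effect in a few lines.

```python
def logistic(x, r=4.0):
    """One step of the logistic map; r = 4.0 is in the chaotic regime."""
    return r * x * (1.0 - x)

def trajectories(x0, perturbation=1e-10, steps=60):
    """Track two starting points that differ by one part in ten billion
    and record how far apart they drift at each step."""
    a, b = x0, x0 + perturbation
    diffs = []
    for _ in range(steps):
        a, b = logistic(a), logistic(b)
        diffs.append(abs(a - b))
    return diffs

diffs = trajectories(0.2)
# The initial error of 1e-10 grows until the two trajectories are
# completely unrelated -- the same sensitivity that limits forecasts.
```

This is exactly why "given the correct parameters" is doing all the work in the claim above: no measurement is accurate enough to pin down those parameters for a chaotic system.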

No, it's not an irrelevant question; it goes directly to the heart of the fallacy you're employing.

It's a nonsense question.

I did say pretty much anything, which is not everything.

First the possible failures have to be recreated within and against the properties of the building.
Okay, let's suppose it were that simple. (It isn't.)

How many possible failures are there? Let's suppose we could identify every single one of those possible failures. Let's suppose there are only 50 things that could fail (instead of 50 million). Some failures cause other failures. To predict where everything lands, we'll have to calculate probabilities for each ordering of the failures within each subset.

There are 2^50 = 1125899906842624 subsets of those 50 failures. The number of possible orders for the largest of those subsets is
Code:
50! = 30414093201713378043612608166064768844377641568960512000000000000
That's 3 times 10 to the 64th power.

Let's suppose we could design incredibly fast special-purpose hardware that, for each ordering of those 50 failures, could calculate where everything lands in just 1 nanosecond. With that ridiculously fast hardware, under these ridiculously over-simplified assumptions, the number of years it would take to calculate where everything lands would still be more than 10 to the 48th power. That's about 10 to the 38th power times the number of years that have elapsed since the Big Bang.
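The arithmetic above is easy to verify. This sketch recomputes the subset count, the factorial, and the running time under the same hypothetical one-nanosecond-per-ordering hardware:

```python
import math

subsets = 2 ** 50                # number of subsets of 50 possible failures
orderings = math.factorial(50)   # orderings of the largest subset

# One ordering per nanosecond on the hypothetical hardware:
NS_PER_YEAR = 1e9 * 60 * 60 * 24 * 365.25
years = orderings / NS_PER_YEAR

# subsets   == 1125899906842624            (about 1.1e15)
# orderings is about 3.04e64
# years     is about 1e48 for the largest subset alone;
# summing over all subsets pushes the total higher still.
```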

Clayton Moore says it's a snap.
 
No, you're responding with gibberish and thread disruption in answer to perfectly sensible and reasonable questions. Your premises have been disputed; is your response simply to refuse to discuss them, and do you think this will increase or decrease your credibility?

Dave


He decreases it with every post he makes. Before I came across twoofers I knew stupidity existed in an abstract sort of way, but to see it so clearly displayed by people so stupid they don't even know they are stupid is an eye-opener. Is this a result of us treating them as "special"? Would it have been kinder to have told them the truth, that they were as dumb as dirt, from a young age? Perhaps then they could have realized their limitations and planned for life taking them into account. Instead they imagine they have a clue and face continual failure and mockery.
 
Chris - your videos are probably fairly dry to people who haven't dealt with Truthers in any substantial way, but they are, so far, logically and factually spot-on, and I appreciate them.

Thanks.
Thanks Minadin,

They are indeed dry. I have a good sense of humor, sometimes on the bizarre side. I reined it all in to put out something respectful. In an alternate universe I would have let 'er rip and we all could have had a good laugh!
 
And let's add to Clinger's simplified assumptions the fact that the collapse sequence in all three buildings was invisible to the naked eye (hat truss failures, column 79 in Building 7, etc.). The precise size and location of the fires, visual evidence of sagging steel etc. were all hidden from view. NIST had to rely on visual clues from the perimeter walls such as the pattern of breaking windows, a few sags here and there, smoke and fire billowing out of the visible exteriors, etc. The vast majority of the variables couldn't even be fed into the equations!

Why are we having this argument? Does anyone believe that Clayton will ever be convinced that creating a likely collapse sequence for a skyscraper is not a snap? Clayton is making no sense, but is it any more sensible for us to try to convince him?

And BTW has Bill Smith been banned from JREF?
 
Just how do you geniuses think builders determine whether a building will be safe?
 
You do the best you can with the information you have and the tools available. You also understand that situations may occur that you did not foresee, and you do your best to mitigate those risks. When you do encounter unforeseen situations you try to learn from them and adapt accordingly.
 
The Bentham paper said:
"In a paper presented first online in autumn 2006 regarding anomalies observed in the World Trade Center destruction, a general request was issued for samples of the WTC dust. The expectation at that time was that a careful examination of the dust might yield evidence to support the hypothesis that explosive materials other than jet fuel caused the extraordinarily rapid and essentially total destruction of the WTC buildings."
Congratulations: You have conjectured a scientific hypothesis.

Without disputing your conjecture, . . .
Miragememories said:
"I suggest you stick to the context of my argument next time and avoid the self indulgence."
DGM said:
"How does that quote from the paper help your case? It certainly doesn't address anything Dave said. :confused: "

The quote I used from the Bentham paper revealed a clear investigative direction regarding what was expected to be discovered in the WTC dust.

The scientists believed nano-thermite was used (hence their reference to explosive materials) and so they tested the same way you would with known nano-thermite (as directed by Tillotson), to see if comparable results would occur.

So far the Official Story supporters here have been implying that, by not knowing it was nano-thermite in advance, regardless of what the test results produced, the material could have been anything.

Of course no one has shown anything that would have produced comparable results other than, drumroll, nano-thermite.

Of course the primer paint adherents are stubbornly pursuing a totally unsupported belief that volatile paint would have been used on the WTC steel.

Funny, and typical, how Dave Rogers totally backed off from responding to my full response to him here:
http://www.internationalskeptics.com/forums/showpost.php?p=7674291&postcount=3169

MM
 
Miragememories said:
"The scientists believed nano-thermite was used (hence their reference to explosive materials) and so they tested the same way you would with known nano-thermite (as directed by Tillotson), to see if comparable results would occur."
NoahFence said:
"Contradictory statement. Try again."

http://en.wikipedia.org/wiki/Nano-thermite
Super-thermites are generally developed for military use, propellants, explosives, and pyrotechnics. Research into military applications of nano-sized materials began in the early 1990s. Because of their highly increased reaction rate, nanosized thermitic materials are being studied by the U.S. military with the aim of developing new types of bombs several times more powerful than conventional explosives. Nanoenergetic materials can store more energy than conventional energetic materials and can be used in innovative ways to tailor the release of this energy.

MM
 
The quote I used from the Bentham paper, revealed a clear investigative direction regarding what was expected to be discovered in the WTC dust.

The scientists believed nano-thermite was used (hence their reference to explosive materials) and so they tested the same way you would with known nano-thermite (as directed by Tillotson), to see if comparable results would occur.


MM

Why would they expect that? Wouldn't that be a bias associated with their research? The only way to test for an "unknown" using a "known" is to start with a pure sample. Are you saying that's what they did? If so, you have not read the paper.

:confused:
 
http://en.wikipedia.org/wiki/Nano-thermite
Super-thermites are generally developed for military use, propellants, explosives, and pyrotechnics. Research into military applications of nano-sized materials began in the early 1990s. Because of their highly increased reaction rate, nanosized thermitic materials are being studied by the U.S. military with the aim of developing new types of bombs several times more powerful than conventional explosives. Nanoenergetic materials can store more energy than conventional energetic materials and can be used in innovative ways to tailor the release of this energy.

MM

Thank you for proving my point.
 
Just how do you geniuses think builders determine whether a building will be safe?

Simply speaking, during design, assumptions are made about what the building is expected to experience during its lifetime, ranging from dead loads to live loads, wind loads, etc. These are modeled in BIM applications to help verify that the structure will perform for the expected conditions; sometimes, hand calculations are still done, at least for smaller scale projects (and this is how they did the "models" in the 1960s and '70s when the WTC was built).

Most computer models that "test" a building during design development only have to account for a limited set of assumptions and ensure that the building will be sturdy so as to avoid things like collapse in part or whole in the first place, which usually involves analyzing load paths in the existing design ideas, and ensuring that the structural members and assemblies are sized correctly.

The modeling in, say, a pre-design stage, or of existing conditions where the loads are static rather than dynamic, is much easier because you have a constant and limited set of variables. Whereas a precipitous chain of failures begins at one point and has an exponentially growing set of parameters that need to be met to keep accuracy. As one poster pointed out, accounting for every possibility to the perfection you demand would require so much time that you wouldn't even live to see it...
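The static design check described above can be caricatured in a few lines: for each member, compare the stress produced by an assumed load against an allowable stress derived from the material's yield strength and a safety factor. Every number here is a hypothetical illustration, not a real design calculation, and real codes involve many more load cases and member types.

```python
def member_is_adequate(load_n, area_mm2, yield_mpa=250.0, safety_factor=1.67):
    """Static check for an axially loaded member.

    Compares stress under one assumed load case against an allowable
    stress (yield strength divided by a safety factor). All values
    are illustrative placeholders, not from any design code."""
    stress_mpa = load_n / area_mm2        # N / mm^2 is numerically MPa
    allowable_mpa = yield_mpa / safety_factor
    return stress_mpa <= allowable_mpa

# A hypothetical column: 500 kN assumed load on 5000 mm^2 of steel.
# Stress = 100 MPa against an allowable of about 150 MPa, so it passes.
ok = member_is_adequate(500_000, 5000)
```

Note the contrast with the collapse problem: here the load, geometry, and failure criterion are fixed inputs checked once, whereas a progressive collapse makes every one of those quantities change at every instant.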
 

The original WTC7 structure was completed in 1987; however, the computing power of the day was still nowhere near today's standards, nor were CAD and finite element (FEM/FEA) tools as advanced. Since material-variation modelling relies for the most part on Monte Carlo techniques, the lack of computing power at the time would have been a very significant limiting factor.
 
Just how do you geniuses think builders determine whether a building will be safe?

To tag on Grizzly Bear's post, you are trying to compare an analysis which assumes static conditions (the building deforms by small amounts under an assumed set of loading conditions, several sets of loads are analyzed) to a dynamic condition under which millions of parts are moving, crashing into each other, breaking. You would have to make massive numbers of assumptions about the loads, about what stress would break each particular element, about how that would affect all the other elements. Elements breaking free are propelled about with unknown force due to the release of stress on them. Even modeling a single combination of events would be a prohibitive undertaking. (Not to mention pointless.)

You are forgetting that some of us are structural engineers. We DO know something about this stuff, unlike yourself, who has nothing but personal incredulity to go by. It's not serving you well, BTW.
 
Just how do you geniuses think builders determine whether a building will be safe?

I'm a genius, and I'm here to tell you that they just have to build it however they want, and then once it's complete they get out special detectors to see if the builders conspired with any of George Bush's brothers and packed all the columns with super-nano-therm%te.

If there's no super-nano-therm%te, then the building can be deemed indestructible.
 