Zeuzz,
Interesting. I'd like to know the value of x*. The smaller it is, the more suspicious it remains, but I'm prepared to accept that it could be true.
Well, if that's true, then the v/t graph might not be missing the beginning portion I thought it was; that's the impression I got from watching Chandler's video.
A few first principles. If you know all of this, then I apologize for being pedantic. (Gotta do what I do best...)
Conservation of energy: the conversion of potential energy (PE) to kinetic energy (KE). I assume you've got this.
Equivalence of work & energy: Work is force integrated over the distance through which it acts. For a component that breaks, it is, in essence, the area under its "force vs. distance" curve from its initial position until fracture.
Both energy & work are scalar quantities, and you can simply subtract & add them, once expressed in the same units.
You can establish an "energy budget" as PE decreases & KE increases, giving a total PE + KE for the system. In the absence of other sinks of energy (e.g., work required to break things), that total remains constant.
The work it takes to break something sucks away some energy from that total, and the KE won't rise as much as it would in the absence of this work.
As I said, the work to break something is the area under the force vs. deflection (or force vs. distance) curve for each element broken.
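If it helps to see the bookkeeping, here's a minimal sketch (Python, with made-up numbers chosen purely for illustration, not actual WTC values): the fracture work is the area under a force vs. deflection curve, and whatever goes into breaking things comes straight out of the kinetic energy.

```python
import numpy as np

g = 9.81          # m/s^2
mass = 1000.0     # kg, hypothetical falling mass
drop = 2.0        # m, hypothetical drop height

# Potential energy released over the drop
pe_released = mass * g * drop                       # joules

# Hypothetical force vs. deflection curve for one breaking element:
# force ramps up to 100 kN over 20 mm of deflection, then it fractures.
deflection = np.linspace(0.0, 0.02, 50)             # meters
force = 1.0e5 * deflection / 0.02                   # newtons

# Work to break it = area under the force vs. deflection curve (trapezoid rule)
work_to_break = np.sum(0.5 * (force[1:] + force[:-1]) * np.diff(deflection))

# Energy budget: whatever isn't spent breaking things shows up as KE
ke = pe_released - work_to_break
print(f"PE released:               {pe_released:8.0f} J")
print(f"Work to break one element: {work_to_break:8.0f} J")
print(f"KE remaining:              {ke:8.0f} J")
```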
In gross terms:
"Weak" = low force to break
"Strong" = high force to break.
"Brittle" = low deflection to break.
"Ductile" = large deflection to break.
The size of the part obviously plays a significant role in "how far" it deforms. A 1" diameter bolt is not going to deform much before it snaps compared to a 36' tall column.
And all the work (energy) sinks add up as scalars: ten identical fractures consume 10x as much energy as one.
You can see that weak/brittle, weak/ductile, and strong/brittle objects have small areas under their "load to failure curves", take small amounts of work to break, and therefore consume little of the kinetic energy.
The combination that absorbs lots of energy is strong/ductile. Which is exactly why they use structural steel in the first place, btw.
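To put some numbers on that pattern, here's a purely illustrative comparison (idealized triangular load-to-failure curves, values invented only to show the trend):

```python
# Illustrative only: idealized triangular load-to-failure curves, so the
# work to break is roughly 1/2 * (force at failure) * (deflection at failure).
cases = {
    "weak & brittle":   (1.0e4, 0.002),   # low force, small deflection
    "weak & ductile":   (1.0e4, 0.050),   # low force, large deflection
    "strong & brittle": (2.0e5, 0.002),   # high force, small deflection
    "strong & ductile": (2.0e5, 0.050),   # high force, large deflection
}

for name, (f_fail, d_fail) in cases.items():
    work = 0.5 * f_fail * d_fail          # joules, area under the triangle
    print(f"{name:16s}: {work:7.0f} J")
```

The strong/ductile case soaks up far more energy than the other three combinations.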
So you've got to consider, when thinking about this, what exactly are the parts that break, how much force it takes to break them, how much they have to deform before they break, and how many of them have to break.
For example, it would take a LOT of energy to crush down a column. But it would take a minuscule amount (in comparison) to snap the 8 or 12 or 20 bolts that connect it to adjacent columns. If the failure mechanism is crushing the columns, then you'd expect a lot of energy to be diverted to that process, and the downward acceleration to be smaller.
If the failure mechanism is fractured bolts, then only a (comparatively) tiny amount of that energy is going to be sapped away from the descending part's acceleration.
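Putting rough, purely hypothetical numbers on that comparison (illustrative magnitudes only, not actual column or bolt properties):

```python
# Purely illustrative values, chosen only to show how the comparison works.

# Crushing a column: very high force sustained over a long plastic stroke.
crush_force = 5.0e6          # newtons, hypothetical
crush_stroke = 0.5           # meters of crushing, hypothetical
work_to_crush = crush_force * crush_stroke                  # joules

# Snapping the connection bolts: modest force, tiny stretch, but several of them.
bolt_force = 2.0e5           # newtons per bolt, hypothetical
bolt_stretch = 0.005         # meters before fracture, hypothetical
n_bolts = 12
work_to_snap = 0.5 * bolt_force * bolt_stretch * n_bolts    # joules

print(f"Crush one column: {work_to_crush/1e3:7.0f} kJ")
print(f"Snap {n_bolts} bolts:    {work_to_snap/1e3:7.1f} kJ")
```

Even with generous numbers for the bolts, the crushing path would eat vastly more of the energy budget than the bolt-snapping path.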
You can immediately tell, by looking at the columns in the debris piles, that very little energy was sapped away in crushing, deforming, or bending columns. The vast majority of the columns were still pretty straight. Not grossly deformed or ripped apart. By far, the most common failure was the snapping of relatively small bolts & welds, or the pull-through of those bolts through thin angle plates.
Note that there are other sinks of energy, too. Moving air, throwing objects, etc.
___
Regarding your question about the acceleration vs. time curve for WTC7 …
This was the topic of much discussion here.
Here is my best take on "what really happened" vs. "the concept of 2.25 seconds of free fall acceleration". It's based on data taken by a crypto-truther poster named femr2, which looks at every frame of the videos instead of every 5th, 10th, or 15th. The Chandler & NIST analyses did not look at each frame, and therefore lost a bunch of fine detail of the collapse motion.
BTW, you do realize, I hope, that by choosing to fit his velocity vs. time data to a linear regression, Chandler artificially FORCES the measured acceleration to be a constant over that time interval. A linear change in velocity over time is, BY DEFINITION, a constant acceleration.
The curve below does not impose this artificial constraint.
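If you want to see that point in action, here's a small sketch with synthetic numbers (not femr2's data): build a velocity history whose acceleration genuinely varies, fit a straight line to it the way Chandler does, and the fit hands back a single constant slope no matter what the acceleration actually did.

```python
import numpy as np

g = 9.81
t = np.linspace(0.0, 2.25, 60)

# Synthetic case where the acceleration is NOT constant:
# it ramps from 0.8 g up to 1.0 g over the 2.25 seconds.
accel = 0.8 * g + 0.2 * g * (t / t[-1])

# Integrate to get the velocity history (trapezoid rule)
vel = np.concatenate(([0.0], np.cumsum(0.5 * (accel[1:] + accel[:-1]) * np.diff(t))))

# Fit velocity vs. time to a straight line
slope, intercept = np.polyfit(t, vel, 1)
print(f"Fitted 'constant' acceleration: {slope / g:.2f} g")

# The fit returns one number (about 0.90 g here) and, by construction,
# hides the fact that the true acceleration varied from 0.80 g to 1.00 g.
```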
I wrote this because I saw a comment about "acceleration being small at first, and getting larger," but couldn't figure out what you were trying to say.
Just wanted to make sure that you didn't make a common mistake. For an object that suddenly enters free fall (say by having its legs taken out from underneath it by an explosive charge), its velocity starts out small & grows over time, but its acceleration does not.
Its acceleration is zero before the charge goes off, and then immediately jumps to the full value of "g". The acceleration does not build up gradually. I've shown that sudden "step function" in acceleration as the red line in the graph below.
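A toy model of that distinction, in case it helps:

```python
import numpy as np

g = 9.81
t = np.linspace(-0.5, 1.5, 201)            # seconds; the "charge" fires at t = 0

# Acceleration is a step function: zero before release, the full g right after.
accel = np.where(t < 0.0, 0.0, g)

# Velocity, by contrast, starts at zero and only grows linearly after release.
vel = np.where(t < 0.0, 0.0, g * t)

for target in (-0.1, 0.0, 0.1, 0.5, 1.0):
    i = int(np.argmin(np.abs(t - target)))
    print(f"t = {t[i]:5.2f} s   a = {accel[i]:5.2f} m/s^2   v = {vel[i]:5.2f} m/s")
```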
The light green line (sorry, it's almost invisible) shows the acceleration of the northwest corner of WTC7 (the same point that Chandler, but not NIST, measured), using all the data frames.
The key point is that, if WTC7 really did fall "at free fall acceleration for 2.25 seconds", as Chandler claims, then there would be a 2.25-second-long interval where the green line lay exactly on top of the red line. Clearly this is not true. Therefore the inescapable conclusion is that, in contradiction to what Chandler says, WTC7 did NOT fall "at free fall for 2.25 seconds."
Be warned that a complication hidden in this analysis is the proper filtering of the position vs. time & velocity vs. time graphs, in order that one may ultimately calculate acceleration vs. time. There are interminable, painful threads slicing this hair about 54 times.
I believe that the graph above addresses these complications appropriately. The integrated average acceleration that I get over Chandler's 2.25 seconds (between about 5.0 seconds & 7.25 seconds in the graph above) is 0.94G. Not 1.0G.
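For the curious, the general shape of that kind of analysis looks something like the sketch below. These are stand-in numbers, not femr2's data and not the exact filter anyone here used; the point is only to show where the filtering choices enter and what an "integrated average" over an interval means.

```python
import numpy as np
from scipy.signal import savgol_filter

# Hypothetical stand-in for frame-by-frame position data at ~59.94 fps:
# a drop at a constant 0.94 g, plus some tracking noise.
fps = 59.94
dt = 1.0 / fps
t = np.arange(0.0, 3.0, dt)
rng = np.random.default_rng(0)
position = 0.5 * (0.94 * 9.81) * t**2 + rng.normal(0.0, 0.02, t.size)   # meters

# Smooth and differentiate in one step. The window length & polynomial order
# are exactly the "filtering" hair that gets split: noise vs. fine detail.
velocity = savgol_filter(position, window_length=31, polyorder=3, deriv=1, delta=dt)
acceleration = savgol_filter(position, window_length=31, polyorder=3, deriv=2, delta=dt)

# Integrated (interval-average) acceleration over a chosen window,
# analogous to averaging over Chandler's 2.25 seconds:
mask = (t >= 0.5) & (t <= 2.75)
avg_a = (velocity[mask][-1] - velocity[mask][0]) / (t[mask][-1] - t[mask][0])
print(f"Average acceleration over the window: {avg_a / 9.81:.2f} g")
# With these synthetic inputs the answer lands near the 0.94 g used to make them.
```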
If you believe my analysis, then you can see that the acceleration profile that Chandler champions as "2.25 seconds of pure free fall" is nothing of the sort.
Other people here have done their own interpretations of the same data, come up with similar, but slightly different acceleration curves (it all depends on the data filtering), and reached essentially the same conclusion: Chandler's "constant G" is not accurate.
If you look carefully at Chandler's (or NIST's) data, you can easily see these intervals of lower & higher acceleration, which Chandler (& then NIST) smooth out (i.e., "ignore") by taking the slope of the best-fit linear curve.
You'll also note the curiosity of two brief periods of "greater than G" acceleration. Also the subject of much discussion. And, in my opinion, really happened.
Hope this wasn't too painful…
Welcome to the forum.
Tom