
The physics toolkit

Well femr,

I hate to be the bearer of bad news regarding your video data, but...

In short, either:

A. The measured point on that building is undergoing absurd levels of acceleration (10 to 80 G's).
B. I can't program on the fly like I used to.
C. There are some serious artifacts that your video technique is introducing into your data.

I vote "C".

I can assure you that A is false. No building is going to withstand that sort of acceleration without flying apart.

I can pretty much assure you that B is true. But I have reason to believe that my program is behaving. (i.e., I tested it with some well behaved artificial data and it worked perfectly.)

So that leaves C.

It's certainly possible that I've transposed a bracket or misplaced a comma. Such is the nature of Mathematica programming. We'll see if / as we get into this.

Your raw data produces squirrelly results. I believe that I know why. We'll see as this conversation evolves.

This was pretty much a colossal waste of time that I don't have. That said, I've got the program now, & can turn data into finished analysis in seconds.

All told, I'd guess that I've got eight friggin' hours in this. When will I learn...?

Most of that time was wasted trying to chase down why the program adamantly refuses to produce an exponential model of the same type that NIST used on their data. (That's a mystery that I'm not likely to bother chasing down.)

Not even a high order polynomial would fit all the data well at both extremes. So I broke it up into two:

1. a 3rd order polynomial for 0 < t < 1.3 seconds
2. a 5th order polynomial for 1.3 < t < 4.7 seconds (end of data).

Note that I re-indexed the time reference such that the ultimate descent starts at 0 seconds.
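
For anyone following along without Mathematica, here is a minimal sketch of the same piecewise-fit idea in Python/numpy. This is my own illustration, not the actual Mathematica code; only the 1.3 s breakpoint and the fit orders come from the description above, and the data arrays are placeholders.

```python
import numpy as np

# Placeholder data; in practice these would be femr2's (time, drop) samples,
# with time re-indexed so the ultimate descent starts at t = 0.
time = np.linspace(0.0, 4.7, 283)
drop = 0.5 * 32.2 * time**2          # stand-in drop history (ft)

t_break = 1.3
early, late = time <= t_break, time >= t_break

# 3rd-order fit for 0 < t < 1.3 s, 5th-order fit for 1.3 < t < 4.7 s
p_early = np.polynomial.Polynomial.fit(time[early], drop[early], deg=3)
p_late  = np.polynomial.Polynomial.fit(time[late],  drop[late],  deg=5)

# Velocity and acceleration follow by differentiating the fitted polynomials.
v_early, a_early = p_early.deriv(1), p_early.deriv(2)
v_late,  a_late  = p_late.deriv(1),  p_late.deriv(2)
```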

I'll post the program, the resulting images & a (brief) discussion in the next three posts.


Tom
 
The Mathematica Code:

If you know someone with the program, you can input it & run it yourself.

[attached images: screenshots of the Mathematica code]

That's it.
 
The Results:

Graph:

1. Your data, plotted as points.
2. Your data, first 1 second expanded.
3. Your data plus empirical equation #1 (red line, applies from 0 to 1.3 sec.)
4.-6. Your data plus empirical equations #1 (red) & #2 (blue line, applies from 1.3 sec to 4.7 sec)

You can see that the fit & the transition between the lines are good.

[attached images: drop vs. time data with the fitted empirical curves]


The Empirical Equations for drop, velocity & acceleration vs. Time.

The next graph is the "residual" (=data - model) for drop distance. Not bad. Typical variation < about 1 foot.

But the next graph starts to show the real problem...

Drop velocity vs. time.

The solid curve is the empirical equation.

The dots are from your data points, calculated as a "balanced difference". That is, the velocity is given by (DropPoint[i+1] - DropPoint[i])/(Time[i+1] - Time[i]). This value is set at the midpoint of the sample times. (= Time[i] + 0.5 dt, where dt = constant for your data = Time[i+1] - Time[i].)

The empirical equation velocity is also calculated at this midpoint time.
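
For clarity, here is a small numpy sketch of that balanced-difference calculation (my own illustration of the scheme described above, not tfk's Mathematica code):

```python
import numpy as np

def balanced_difference(time, drop):
    """Finite-difference velocity assigned to the midpoints of the sample times."""
    dt = np.diff(time)               # constant for evenly sampled data
    v = np.diff(drop) / dt           # (drop[i+1] - drop[i]) / (time[i+1] - time[i])
    t_mid = time[:-1] + 0.5 * dt     # midpoint of each sample interval
    return t_mid, v

# Applying the same operation to (t_mid, v) gives midpoint acceleration estimates.
```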

As mentioned, I used one empirical equation for t< 1.3 sec & a different one for t > 1.3 seconds. The discontinuities at 1.3 seconds are not surprising.

You can immediately see that your velocity data is all over the map. This is a direct result of your very short time base between data points. Even small errors in the measured location will result in huge variations in velocity.

I strongly believe that this scatter is an artifact of the errors in your measurement technique.

I also believe that the only way to get rid of this is to apply a smoothing filter to the data.

Which, of course, gets rid of all the high frequency components that your data shows.

But here's the rub: I do NOT believe that those high frequency changes in velocity are real. I believe that they are artifacts of your setup (camera, compression, analysis technique (i.e., pixellation), etc.).

If one accepts them as "real", one has to go to the next step & accept that the building is undergoing completely absurd accelerations.
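
For concreteness, one simple low-pass filter of the kind being discussed is a centered moving average. No specific filter is named above, so this is just an assumed example:

```python
import numpy as np

def moving_average(x, width=9):
    """Centered moving average; an odd width keeps the filter symmetric in time."""
    kernel = np.ones(width) / width
    # mode="same" preserves the array length; the first and last few points
    # are edge-affected and should be treated with caution.
    return np.convolve(x, kernel, mode="same")

# e.g. smooth the drop data before differencing it to velocity:
# drop_smooth = moving_average(drop, 9)
```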
___

The acceleration is handled exactly the same as the velocity was.

But now, you can see that you're manipulating velocity data that has huge artifacts built in.

This makes the calculated acceleration data absurd: over 2500 ft/sec^2 or ~80G's.

Here are the curves:

[attached image: velocity & acceleration curves]



And, last, here are the results of the Empirical Equation "best fit".



[attached image: empirical "best fit" results]


The "best fit" to your drop distance vs. time data produces 41 ft/sec^2 (about 1.3Gs of acceleration initially, decreasing to about 33 ft/sec^2 (just above 1 G) over the first 1.3 seconds.

Sorry, I don't believe this for a second.

I've got other things that I've got to do.

I'll talk to you about this over the next couple of days.

I can send you the raw data, but you can just as easily input the empirical equations into an Excel spreadsheet & create the graphs yourself.


Tom
 
The Results:

Graph:
Can't see your graphs. Can't see them even if I C&P the URL on edit.

But the next graph starts to show the real problem...
I know what's coming, of course. Yes, there's noise when going to velocity and accel. 3DP sub-pixel tracing at 59.94 samples per second includes an amount of noise that must be smoothed.

I generally use 9-sample-wide symmetric differencing, though I have been looking at the effectiveness of XlXtrFun for high-order least squares fits.
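
As I read that "9-sample-wide symmetric differencing" (the exact scheme isn't spelled out, so this is an assumed interpretation), it amounts to differencing samples four frames either side of the point of interest, trading time resolution for noise suppression:

```python
import numpy as np

def symmetric_difference(time, drop, half_width=4):
    """Velocity estimate over a window of 2*half_width + 1 samples
    (a 9-sample window for half_width = 4)."""
    i = np.arange(half_width, len(drop) - half_width)
    v = (drop[i + half_width] - drop[i - half_width]) / (
        time[i + half_width] - time[i - half_width])
    return time[i], v
```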

You can immediately see that your velocity data is all over the map. This is a direct result of your very short time base between data points. Even small errors in the measured location will result in huge variations in velocity.
Of course they will. No option but to smooth or curve fit. Would have thought that would be obvious to you. Relative to all other datasets the accuracy of each individual datapoint is unrivalled. Throw away 9 out of ten samples if you like...

I strongly believe that this scatter is an artifact of the errors in your measurement technique.
Well, yes, but those *errors* are (in terms of the raw footage) sub-pixel (as soon as you deal with deinterlace jitter). It's more to do with the procedure you are using. Of course it's going to amplify the very small measurement errors if you don't smooth/curve fit, and especially so if you use adjacent points to determine velocity. Use a wider band.

I also believe that the only way to get rid of this is to apply a smoothing filter to the data.
Whoop, whoop. Yes, 14 months of working with data like this has resulted in honing techniques and procedures for actually using it. You asked for the raw data, so that's what I gave you. First thing to do is apply a 2 sample running SD to iron out any interlace jitter. Other smoothing processes must follow.

If one accepts them as "real", one has to go to the next step & accept that the building is undergoing completely absurd accelerations.
That would just be silly.

The "best fit" to your drop distance vs. time data produces 41 ft/sec^2 (about 1.3Gs of acceleration initially, decreasing to about 33 ft/sec^2 (just above 1 G) over the first 1.3 seconds.
Horrible, isn't it? I've been replicating using the NIST camera 3 view, which has highlighted slight scaling issues, but not much. The maximum acceleration of the NW corner still reaches 37ft/s^2. Bizarre.

Interesting to note, however, that the NIST curve fit also exceeds 32.2ft/s^2 ;)

Sorry, I don't believe this for a second.
I don't like it myself, and I'm trying to find reasons for it. First dip is the base NIST scaling, which is a couple of feet off. Next port of call is to look at various video time-bases to see if that can shed any light, or the reality will be >32.2.

I'll talk to you about this over the next couple of days.
Okay.

I can send you the raw data, but you can just as easily input the empirical equations into an Excel spreadsheet & create the graphs yourself.
I have endless graphs thanks. I may make my spreadsheets a bit more presentable and upload, but I think it'll be useful for you to get to grips with effective noise removal techniques, as I'm not really liking (trusting) the results of higher order curve fits that I've performed and it's always useful to be on a level field.

Again, the per-sample accuracy is very high, and of course the data includes some noise (very low magnitude) and interlace jitter (easily sorted). I've been working with data like this for a long time now, and have picked up lots of useful techniques for dealing with noise along the way.

Quite happy to give you more tips, but I imagine you'll be happier with curve fitting, which is fine.

I don't intend on getting into endless banter about data quality though. It's great base data. Perhaps I should have given you the data in pixel/frame units instead. That would let you see the noise magnitude in physical measurement terms. Hmm.

DO have to deal with the slight NIST scaling metric problem (shortly) but even that doesn't *sort* the above-freefall issue.

The NIST metric of 242ft is actually just under 240ft.

If the NIST metric on slab-to-slab height for typical floors is accurate (12ft 9in) then all that's left is video timebase...

Later.
 

[attached graphs (click to zoom): drop data, and a zoomed view of the first portion]

I assumed the above graphs posted earlier would have made the noise-level clear.

The lower graph is a zoom of the upper, and I suggest an estimated noise-level of +/- 0.7ft (+/- 0.2px)

(Your previously suggested maximum accuracy was 6ft, so not shabby at all ;) )
 
Well femr,

I hate to be the bearer of bad news regarding your video data, but...
Let's identify the video data in question. It took me a while to figure out that you're talking about video data for WTC7, and not the data femr2 provided for WTC1 in post 94.

In short, either:

A. The measured point on that building is undergoing absurd levels of acceleration (10 to 80 G's).
B. I can't program on the fly like I used to.
C. There are some serious artifacts that your video technique is introducing into your data.

I vote "C".
I vote "D": Every set of physical measurements contains noise. Those who wish to interpret physical data bear the responsibility for extracting signal from noise. Alternative "A" comes from over-interpreting the noise, and the blame for that would rest with tfk. Alternative "C" is blaming femr2 for the noise, but I'm not sure that's fair; the noise (or most of it) could already have been present in the data before femr2 touched anything.

Not even a high order polynomial would fit all the data well at both extremes. So I broke it up into two:

1. a 3rd order polynomial for 0 < t < 1.3 seconds
2. a 5th order polynomial for 1.3 < t < 4.7 seconds (end of data).
In other words, you are fitting a curve to the data. Your decision to use curve-fitting implies your assumption that the original signal was simple enough to be modelled by piecing together two low-degree polynomials. That assumption is related to the Chandler-MacQueen-Szamboti fallacy I attacked in post 79. I'm disappointed to see you succumb to a similar fallacy.

You can immediately see that your velocity data is all over the map. This is a direct result of your very short time base between data points. Even small errors in the measured location will result in huge variations in velocity.

I strongly believe that this scatter is an artifact of the errors in your measurement technique.

I also believe that the only way to get rid of this is to apply a smoothing filter to the data.
That's one way; downsampling is another.

Which, of course, gets rid of all the high frequency components that your data shows.

But here's the rub: I do NOT believe that those high frequency changes in velocity are real. I believe that they are an artifact of your (camera, compression, analysis technique (i.e., pixellation), etc.
This is not the first time in history that numerical analysts have been faced with such problems. We do not have to fall back on arguments from personal belief or incredulity.

The technical problem is to apply enough smoothing, downsampling, and other techniques to reduce the noise to an acceptable level without throwing away all information about high frequency components. That's a well understood tradeoff, and it's essentially mathematical. We can calculate how much information we're losing, and can state quantitative limits to our knowledge.

We can also use science and engineering to estimate the noise level.

For example, we know that downward accelerations greater than 1g are physically implausible. Treating the descent as an initial value problem, we find that limiting the downward acceleration to 1g makes it impossible to match femr2's data at full resolution: Noise in the data forces occasional large upward accelerations in the model, reducing the downward velocity so much that it can't recover in time to match the next sampled position. (Tony Szamboti has made a similar argument, although he usually gets the calculations wrong and refuses to acknowledge corrections unless they further his beliefs.) One way to estimate the noise is to reduce the resolution by smoothing and/or downsampling until the downward-acceleration-limited models begin to match the sampled positions.
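
A rough sketch of that kind of test (my own construction, not W.D.Clinger's actual code): march through the samples as an initial value problem, at each step using the acceleration needed to hit the next measured position but never more than 1 g downward, and watch whether the model can keep up.

```python
G = 32.2  # ft/s^2, with "down" taken as positive

def one_g_capped_track(time, drop):
    """Greedy 1g-capped model of the descent; returns position residuals (ft)."""
    x, v = drop[0], 0.0
    residuals = [0.0]
    for i in range(len(time) - 1):
        dt = time[i + 1] - time[i]
        # acceleration that would land exactly on the next measured position
        a_needed = 2.0 * (drop[i + 1] - x - v * dt) / dt**2
        a = min(a_needed, G)          # cap downward acceleration at 1 g
        x += v * dt + 0.5 * a * dt**2
        v += a * dt
        residuals.append(drop[i + 1] - x)
    return residuals

# Large, persistent residuals at full resolution that shrink as the data are
# smoothed or downsampled are one way to put a number on the noise level.
```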

I have done that for femr2's WTC1 data, and intend to use the results of that analysis to reinforce the point of my earlier analysis of the Chandler-MacQueen-Szamboti fallacy. For that pedagogical purpose, it hardly matters whether the noise in femr2's data was present in the original video or was added by femr2's processing of that video.
 
Hey WD,

Wow. You're harsh.

I like that...

Let's identify the video data in question. It took me a while to figure out that you're talking about video data for WTC7, and not the data femr2 provided for WTC1 in post 94.

Sorry, I should have ID'd the source. It came from post #176. Since it came from femr in a reply discussing WTC7, I assumed that it was from WTC7 data. Although femr didn't say that explicitly with the data.

Correct, femr?

I vote "D": Every set of physical measurements contains noise. Those who wish to interpret physical data bear the responsibility for extracting signal from noise. Alternative "A" comes from over-interpreting the noise, and the blame for that would rest with tfk. Alternative "C" is blaming femr2 for the noise, but I'm not sure that's fair; the noise (or most of it) could already have been present in the data before femr2 touched anything.

If it please the persecution... ;-)

Granted, there is a fair amount of "my preliminary beliefs" in what I wrote. This is because this is my first pass on femr's data. And I have no idea of the details of his video or how he generated his numbers.

Nonetheless, I think that I've made it pretty clear that I believe that there is a lot of noise in the data, and that it does not reflect real motions of the building. And that, as a direct result, I do not (at this time) accept the conclusions of this analysis. In either the raw data OR the empirical data that resulted from a very "smoothed" raw data.

(You can see that the empirical equation generated for the first 1.3 seconds in the last post's drop vs. time graph is a darn good "smoothed" version of the raw data. And yet this still produced an acceleration significantly greater than 1G for the first 1.3 seconds. I think that I made it clear that I do not believe this conclusion. Which constitutes a rejection of the whole raw data set.)

I do not know femr's techniques for producing these numbers. I think that I've also made it clear that I do not believe that he can produce anywhere near the accuracy that he claims.

He seems to be saying things that are self-contradictory:

That his data reveals real high freq transients, but yet it needs to be smoothed because of noise.

Can't have it both ways.

Femr suggested that the noise is being introduced by my technique. That's not true. Granted that I haven't taken steps to eliminate noise, but I have not introduced any. As I mentioned, I've run validation runs with artificial data to verify this.


In other words, you are fitting a curve to the data. Your decision to use curve-fitting implies your assumption that the original signal was simple enough to be modelled by piecing together two low-degree polynomials. That assumption is related to the Chandler-MacQueen-Szamboti fallacy I attacked in post 79. I'm disappointed to see you succumb to a similar fallacy.

If you look at the displacement vs time data set (the raw data & the overlain empirical curve) in my previous post, you'll see a pretty good agreement. Once the raw data is low-pass filtered, I believe that the agreement will be even better.

If the agreement is this good, then increasing the polynomial's degree amounts to a bit of gilding the lily. And will likely result in poly constants that are close to zero.

Femr, have you already done this (drop -> velocity -> acceleration) analysis yourself?

If so, please post your velocity & acceleration vs. time data (or graphs).
__

Nonetheless, I've already redone the analysis using 5th order (with 6 constants), and the results are not hugely different. I'll be interested to see what happens with smoothed data.

Here's the result of using this higher order polynomial. (I used it over the entire time span.) You can see that it doesn't provide as good a fit at the early times as the previous one. But you can also see that it follows the gross (i.e., low freq) shape of the raw data pretty darn well.

[attached images: drop vs. time data with the single 5th order polynomial fit]


You can see that the fit between the empirical curve & raw data is pretty good. And that the empirical curve is a pretty good "smoothed" version of the raw data.

The acceleration is inversely related to the radius of curvature of the red line in the drop curves. I can see that a better fit (as with the lower order poly in the previous graphs) is possible at the earliest times (t < 0.6). But I don't see much leeway for increasing the radius between 0.7 < t < 1.4 seconds. And the results say that this amount of curvature in the drop curve results in >1G accel.

It's possible to construct a "1 G" arc for this chart to see if it can be fit to this raw data. Looking at the data, the curvature of the empirical (red line) equation right around 1.4 seconds corresponds to 1G of acceleration.

In order for femr's data to be correct, one would have to be able to overlay that degree of curvature (or less) on all the data points throughout the data set. I do not see how that is going to happen for t < 1 second. No matter how much low-pass filtering one does.

Here are the resultant velocity & acceleration curves, again for a 5th order poly with 6 constants:

[attached image: velocity & acceleration curves for the 5th order fit]


Again, for t < 1.4 seconds, accel is > 1G.

That's one way; downsampling is another.

Without knowing the origin of the noise, I'd prefer smoothing to downsampling. Can't tell a priori whether your chosen sample points are good ones or not.

This is not the first time in history that numerical analysts have been faced with such problems. We do not have to fall back on arguments from personal belief or incredulity.

I thought that I made the basis for my incredulity clear: the wall is far too massive & fragile to exhibit or withstand the acceleration levels that the data implies.

The technical problem is to apply enough smoothing, downsampling, and other techniques to reduce the noise to an acceptable level without throwing away all information about high frequency components. That's a well understood tradeoff, and it's essentially mathematical. We can calculate how much information we're losing, and can state quantitative limits to our knowledge.

I see two problems.

First, I don't believe that smoothing the data is going to significantly reduce the empirically derived acceleration. The low-order polynomial has already essentially done that, and I still came up with > 1G acceleration.

I could be wrong about that. It's happened before. We'll see.

Second, we've got a chicken & egg problem. We're trying to figure out what the acceleration really was. But we're going to smooth the data until the acceleration (maybe) gets "reasonable".

The ultimate conclusion will simply mirror what we deem "reasonable".

We can also use science and engineering to estimate the noise level.

Perhaps femr can, because he has access to his original data. And the details of how he generated the numbers.

For example, we know that downward accelerations greater than 1g are physically implausible. Treating the descent as an initial value problem, we find that limiting the downward acceleration to 1g makes it impossible to match femr2's data at full resolution: Noise in the data forces occasional large upward accelerations in the model, reducing the downward velocity so much that it can't recover in time to match the next sampled position. (Tony Szamboti has made a similar argument, although he usually gets the calculations wrong and refuses to acknowledge corrections unless they further his beliefs.) One way to estimate the noise is to reduce the resolution by smoothing and/or downsampling until the downward-acceleration-limited models begin to match the sampled positions.

I'll be interested to see if smoothing the data can get us from here to there. My current impression is that the answer is "no".

I have done that for femr2's WTC1 data, and intend to use the results of that analysis to reinforce the point of my earlier analysis of the Chandler-MacQueen-Szamboti fallacy. For that pedagogical purpose, it hardly matters whether the noise in femr2's data was present in the original video or was added by femr2's processing of that video.

Agreed.

Femr, would you care to add some detailed explanation of your number generation technique? (Or point to where you've described it previously.)

Most specifically, what do you estimate as your error (in pixels), and what is the pixel to real world scale factor?


Tom
 
It took me a while to figure out that you're talking about video data for WTC7
Yes, my bad. Followed a discussion with tfk, so was posted with a *here y'are* only. It's from the *Dan Rather* view.



the noise (or most of it) could already have been present in the data before femr2 touched anything.
There is an amount of variance in the tracing process. I use the SynthEyes system, which incorporates professional-level feature tracking facilities. Tracked feature position is output in pixels, to 3DP. As indicated in post #205 the noise level in the provided graphs equates to +/- 0.2px (call it half a pixel).

I do not stabilise the video data, to ensure it is *untouched*, so the alternate frame jitter resulting from deinterlacing is also present in the underlying data. My preference for dealing with that is to subtract a static point trace (which by definition has the same vertical alternate frame shift) to cancel it out.

An alternative (or additional step) is applying the 2-sample running average previously suggested.

That does of course still leave an amount of noise, which must be treated as required.
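
A minimal sketch of those two preprocessing steps (subtracting a static-point trace to cancel the shared alternate-frame deinterlace shift, then the optional 2-sample running average); the array names here are hypothetical:

```python
import numpy as np

def remove_interlace_jitter(feature_y, static_y):
    """Subtract a static point's vertical trace so the alternate-frame shift
    common to both traces cancels out of the feature's trace."""
    return feature_y - (static_y - static_y[0])

def two_sample_average(y):
    """2-sample running average: each output is the mean of adjacent frames."""
    return 0.5 * (y[1:] + y[:-1])
```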

Your decision to use curve-fitting implies your assumption that the original signal was simple enough to be modelled by piecing together two low-degree polynomials.
I should highlight that the purpose of providing tfk with the data was not to identify low magnitude variance in velocity/acceleration, but rather to extrapolate and determine the descent time for WTC 7. In that context I personally have no issue with the general trend.

I have done that for femr2's WTC1 data, and intend to use the results of that analysis to reinforce the point of my earlier analysis of the Chandler-MacQueen-Szamboti fallacy.
Interesting. I'll make sure I read it.
 
produced an acceleration significantly greater than 1G for the first 1.3 seconds.
As previously stated, I've replicated similar results using the NIST Camera #3 footage (which has also highlighted a slight error in the baseline NIST scaling factor, as the dimension they state as 242ft is nearer to 240). That error alone does not account for the above-G acceleration however, and I'm still looking for other sources.

The pixel-level trace data however is rock-solid in its attachment to the NW corner, so I have no personal issues with the underlying pixel position data.

I may have to look at perspective correction, though analysis of the scene reveals very little variance in same-height feature separation (the inter-window distances), so even that will make very little difference.



I do not know femr's techniques for producing these numbers.
Quite happy to go into as much detail as is necessary.

I think that I've also made it clear that I do not believe that he can produce anywhere near the accuracy that he claims.
As you don't know the techniques, this is some more hand-waving I'm afraid.

Femr suggested that the noise is being introduced by my technique.
Not at all. I suggested the problem with your treatment of the data, not that you introduced noise into it. The noise is already there.

Femr, have you already done this (drop -> velocity -> acceleration) analysis yourself?
Of course.

If so, please post your velocity & acceleration vs. time data (or graphs).
I'm cleaning up the presentation of the spreadsheet for the Dan Rather view, but here's a slightly more presentable one for the NIST Camera #3 view...
Download
You'll need Excel and a plugin available here to open it...
http://www.xlxtrfun.com/XlXtrFun/XlXtrFun.htm

(First thing you'll have to do is change the NIST Height parameter from 242 to 239.8)

You can also use that parameter to scale the entire dataset to whatever metric you please.

(tfk - I cannot see any of your graphs. Do you know why ?)

I still came up with > 1G acceleration.
So did NIST.
 
Can't see your graphs. Can't see them even if I C&P the URL on edit.

I'm not sure. Try logging out & back in to your JREF account.

On my system (a mac):
If you're not logged in (with Chrome or Safari), then the graphs don't show. Curiously, they do show with Firefox. When logged in, they show in all these browsers.

Anyone else not seeing them?

Tom
 
I'm not sure. Try logging out & back in to your JREF account.
No joy. Tried all sorts. Nowt. PC on Firefox. Tried taking the text out of quote. Changing qimg to img and to url, posting the url directly etc. Nothin'.

No worries though. I can visualise them from your description.

The links don't refer directly to images btw, but to albums on JREF via a script. If you can link to .jpg, .png or .bmp I'll definitely be able to see'em.
 
Hey WD,

Wow. You're harsh.

I like that...
Sorry. It was late, and I could hardly believe you were criticizing femr2 for providing data that (gasp!) contain noise.

Sorry, I should have ID'd the source. It came from post #176.
Thanks.

Nonetheless, I think that I've made it pretty clear that I believe that there is a lot of noise in the data, and that it does not reflect real motions of the building. And that, as a direct result, I do not (at this time) accept the conclusions of this analysis. In either the raw data OR the empirical data that resulted from a very "smoothed" raw data.
Everyone agrees there's a lot of noise in the data. By definition, the noise does not reflect real motions of the building.

On the other hand, I see no reason to doubt that femr2's data, when analyzed properly, will reflect real motions of the building.

On the third hand, my main goal here is to explain the limits of such analysis, to warn against over-analysis of sampled data, and especially to warn against the overconfidence shown by certain people who perform a single very poor analysis and then condemn the rest of the world for not sharing a preconceived conclusion that's contradicted by their own data.

(You can see that the empirical equation generated for the first 1.3 seconds in the last post's drop vs. time graph is a darn good "smoothed" version of the raw data. And yet this still produced an acceleration significantly greater than 1G for the first 1.3 seconds. I think that I made it clear that I do not believe this conclusion. Which constitutes a rejection of the whole raw data set.)
That's where I think you're going wrong. The decision to fit a single smooth curve to that data was your decision, not something that was forced upon you by the data. That decision was equivalent to deciding that nothing terribly interesting could be going on during the first 1.3 seconds. Speaking now of my beliefs, for which I will give evidence, I do not believe your decision is justified by the data.

Even after reducing the noise by crude averaging of every 6 adjacent data points, which reduces the sampling interval to 1/10 second, I still see several discrete jolts during the first 1.3 seconds. As I will show in a later post using femr2's WTC1 data, the magnitude and location of those jolts depend upon artifacts of the noise reduction and the resolution, but that doesn't mean the jolts aren't real. As can be shown mathematically, real jolts would also show up with different magnitudes and locations when sampled at slightly different times or resolutions.

So at least some of those apparent jolts could be real. If so, attempting to fit a smooth curve to the data can give misleading results.
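
For reference, the "crude averaging of every 6 adjacent data points" mentioned above can be sketched as simple block averaging (assuming ~59.94 samples/s, so 6-sample blocks give roughly 0.1 s spacing):

```python
import numpy as np

def block_average(time, drop, block=6):
    """Average non-overlapping blocks of samples; at ~59.94 Hz a block of 6
    reduces the sampling interval to about 1/10 second."""
    n = (len(drop) // block) * block        # discard any leftover samples
    t = time[:n].reshape(-1, block).mean(axis=1)
    x = drop[:n].reshape(-1, block).mean(axis=1)
    return t, x
```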

I do not know femr's techniques for producing these numbers. I think that I've also made it clear that I do not believe that he can produce anywhere near the accuracy that he claims.

He seems to be saying things that are self-contradictory:

That his data reveals real high freq transients, but yet it needs to be smoothed because of noise.

Can't have it both ways.
I'm with you on all of that.

One way to demonstrate that point to femr2 and others is to vary the smoothing and resolution in a systematic way, obtaining a dozen different results. By doing the same thing for a known signal (that is, a fixed mathematical model), and observing the same kind of variation in results, we can show that this variation is exactly what we would expect to happen for indubitably real signals. The morals of the story are (1) there is only so much information we can extract by sampling a non-cyclic signal, and (2) it's easy to draw incorrect conclusions from artifacts of our analysis.

Without knowing the origin of the noise, I'd prefer smoothing to downsampling.
Why not analyze using smoothing and also analyze using downsampling? That's what I'm doing. (I have to admit that one of my motivations for using downsampling is to demonstrate how much the results can vary as the resolution changes, even while the signal is held fixed.)

I see two problems.

First, I don't believe that smoothing the data is going to significantly reduce the empirically derived acceleration. The low-order polynomial has already essentially done that, and I still came up with > 1G acceleration.
That just means you can't get a good fit to the data with a low-order polynomial. That in turn means (1) high-frequency noise isn't the problem, and (2) you should suspect irregularities (jolts) in the actual signal. If you analyze the data using techniques that allow you to see irregularities, as I have done, you'll see them.

Second, we've got a chicken & egg problem. We're trying to figure out what the acceleration really was. But we're going to smooth the data until the acceleration (maybe) gets "reasonable".

The ultimate conclusion will simply mirror what we deem "reasonable".
Yes, that's a problem. It's part of the Chandler-MacQueen-Szamboti fallacy. They picked a low sampling rate and (in MacQueen and Szamboti's case) lowered the resolution still further by smoothing, using noise to argue for both. Ignoring the fundamental theorem of sampling, they then used their smoothed, low-resolution data as "proof" there were no 90-millisecond jolts in the signal.

You and I don't have to be quite so incompetent.
 
I do not know femr's techniques for producing these numbers. I think that I've also made it clear that I do not believe that he can produce anywhere near the accuracy that he claims.

He seems to be saying things that are self-contradictory:

That his data reveals real high freq transients, but yet it needs to be smoothed because of noise.

Can't have it both ways.
I'm with you on all of that.
Am not at all sure what this relates to.
The only *claim* I've made about the data is an estimation of the noise level...
[attached graph: noise-level estimate]

Which I'd put at +/- 0.7ft (which is sub-pixel for the Dan Rather viewpoint)

The spreadsheet I included above for the NIST Camera #3 viewpoint includes the raw pixel data.
I prefer the viewpoint, and it can be compared with the NIST results fairly directly.

The only thing I need to look into further is some perspective correction. I've checked vertical and it's fairly trivial, but may see what horizontal perspective correction is required to convert the (slightly incorrect) NIST scaling metric in the middle of the building to the NW corner.

ETA: That might be the beastie. Initial rough estimate could approach an additional modifier of something around 1.02. I'll have to spend a bit of time extracting horizontal metrics, so that value is far from definite, but horizontal skew does look like a culprit for excessive over-G derivations.
 
WD,

Are you able to see my graphs?

Perhaps there's a way through the chicken & egg problem of how smoothing affects the measured acceleration.

We've got two accelerations over any interval: the gross average acceleration that we can get from the position data at any two points, and the instantaneous acceleration that results from the model.

For the model to be correct, this could be the criterion: "The integrated average of the instantaneous acceleration over that interval has to equal the gross acceleration." (Or equivalently: "the total distance dropped over that interval has to be the same using both accelerations.")

You'd have to keep careful track of the initial velocity on that interval, because high initial velocity will yield lower gross acceleration for the same displacement. But that should be doable starting from an initial point (0 ft/sec). Of course, this error will accumulate over time, but I don't think it'll be a problem for the time intervals we're discussing.

And this technique will allow any combination of jolts within the interval.
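
In symbols (my reading of the criterion, stated in its displacement form): over an interval [t0, t1] of length Δt, with initial downward velocity v0 carried over from the previous interval,

```latex
x(t_1) - x(t_0)
  \;=\; v_0\,\Delta t \;+\; \int_{t_0}^{t_1}\!\!\left(\int_{t_0}^{t} a(\tau)\,d\tau\right)\!dt ,
\qquad
\bar a \;\equiv\; \frac{2\,\bigl[x(t_1) - x(t_0) - v_0\,\Delta t\bigr]}{\Delta t^{2}} .
```

Any pattern of jolts in the instantaneous a(t) is allowed, so long as it integrates to the same displacement, i.e. to the same gross average acceleration, over the interval.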

maybe...


Tom
 
Abstract: The corner's downward acceleration of greater than 1g at the beginning of its descent is just what you'd expect from the physics.

WD,

Are you able to see my graphs?
Yes. I couldn't see them at first, but I saw them when I logged in to JREF, and I continued to see them after I logged out.

I don't know what t=0 means in your graphs, so I'll use femr2's time scale (without rounding).

The corner's descent is preceded by some oscillation and begins in earnest near t=4.871538205. One second later, at t=5.872539206, the corner of the building has dropped 19.7 feet. Two seconds later, at t=6.873540207, the total drop for the first two seconds is 74.0 feet. That's an average of over 1.2g for the first second, and about 1.15g over the first two seconds.
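
As an arithmetic check on those averages (assuming a start from rest and a constant-acceleration equivalent, a = 2d/t²):

```latex
\bar a_{1\,\mathrm{s}} = \frac{2\,(19.7\ \mathrm{ft})}{(1\ \mathrm{s})^{2}} = 39.4\ \mathrm{ft/s^{2}} \approx 1.22\,g,
\qquad
\bar a_{2\,\mathrm{s}} = \frac{2\,(74.0\ \mathrm{ft})}{(2\ \mathrm{s})^{2}} = 37.0\ \mathrm{ft/s^{2}} \approx 1.15\,g,
\qquad g = 32.2\ \mathrm{ft/s^{2}} .
```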

I have to retract this statement:
For example, we know that downward accelerations greater than 1g are physically implausible.
That isn't true in this case, because the corner being tracked is not the roof's center of gravity.

Looking at the YouTube video femr2 cited, I see that the corner did not begin its descent until some time after the opposite side of the building had already fallen some distance and built up downward velocity. That means the roof's center of gravity began its descent before t=4.871538205, and the roof had rotated during that descent.

Taking an idealized view of the situation, let's assume the roof's rotation ends abruptly at t=4.871538205, and the corner's downward component of velocity (from that time onward) is equal to the downward velocity for the roof's center of gravity. That implies a very large downward acceleration for the corner at t=4.871538205 as its velocity rises rapidly from zero to match the velocity of the roof's center of gravity, to which the corner is still attached. If you spread that acceleration over the first two seconds, say, you'll get accelerations that look like what you see in femr2's data and in tfk's graphs.

In short: The corner's downward acceleration of greater than 1g at the beginning of its descent is just what you'd expect from the physics.

ETA:
For the model to be correct, this could be the criterion: "The integrated average of the instantaneous acceleration over that interval has to equal the gross acceleration." (Or equivalently: "the total distance dropped over that interval has to be the same using both accelerations.")

You'd have to keep careful track of the initial velocity on that interval, because high initial velocity will yield lower gross acceleration for the same displacement. But that should be doable starting from an initial point (0 ft/sec). Of course, this error will accumulate over time, but I don't think it'll be a problem for the time intervals we're discussing.

And this technique will allow any combination of jolts within the interval.

maybe...
You're suggesting we look at it as an initial value problem, which is what I've been doing throughout.

The spreadsheet I included above for the NIST Camera #3 viewpoint includes the raw pixel data.
I prefer the viewpoint, and it can be compared with the NIST results fairly directly.

The only thing I need to look into further is some perspective correction. I've checked vertical and it's fairly trivial, but may see what horizontal perspective correction is required to convert the (slightly incorrect) NIST scaling metric in the middle of the building to the NW corner.

ETA: That might be the beastie. Initial rough estimate could approach an additional modifier of something around 1.02. I'll have to spend a bit of time extracting horizontal metrics, so that value is far from definite, but horizontal skew does look like a culprit for excessive over-G derivations.
If "horizontal skew" means rotation of the roof, then I agree. What we really need are data for the center of the roof with exactly the same timescale as the corner. (Ideally, we'd want data for the opposite corner as well, but I don't know whether video of that is available.) Once all of that data is in hand, we'll be able to calculate how well it matches the physical explanation I offered above.
 
Yes. I couldn't see them at first, but I saw them when I logged in to JREF, and I continued to see them after I logged out.
Hmm. Logged in, out, shook it all about. Nowt.

If "horizontal skew" means rotation of the roof, then I agree.
Not exactly.

NIST used a point towards the centre of the roofline. I'm using the NW corner. Features at the West edge are a bit closer to the camera, and so are roughly 1.026 times the size of my previous scale multiplier. My base pixel multiplier has changed to 0.467662228

That has the effect of reducing the over-G values to around 36ft/s^2

What we really need are data for the center of the roof with exactly the same timescale as the corner.
Can do, although there is not much horizontal detail for SynthEyes to latch onto. I would suggest the best route would be to lock horizontal position. Yes ?

If so, what point ?

[attached image: video frame of the WTC 7 north face and roofline]


The NIST description is slightly vague...
The chosen feature was the top of the parapet wall on the roofline aligned with the east edge of the louvers on the north face.
I take that to be the position at the right hand edge of the *black box* on the facade (for a few reasons that might not be obvious).

(Ideally, we'd want data for the opposite corner as well, but I don't know whether video of that is available.)
The source video includes the frame as-per the image above, so in theory, no problem. In reality though, the NE corner contrast is very poor and the trace quality may not be too hot.

Once all of that data is in hand, we'll be able to calculate how well it matches the physical explanation I offered above.
I have no issue generating mounds of data. It's coming out m'ears at the mo.

Any preference on units ?

I'm pretty sure my scaling variable is as good as it's going to get for the NW corner now, but there's still possibility for MINOR change to it (0.467662228 - ft/pixel NW corner)

Important note: The scalar is for the latest NIST Camera #3 data, NOT the Dan Rather data.
 
NIST used a point towards the centre of the roofline. I'm using the NW corner. Features at the West edge are a bit closer to the camera, and so are roughly 1.026 times the size of my previous scale multiplier. My base pixel multiplier has changed to 0.467662228

That has the effect of reducing the over-G values to around 36ft/s^2
A point toward the center of the roofline would be closer to the center of gravity, which would simplify the physics and improve accuracy of the analysis.

Can do, although there is not much horizontal detail for SynthEyes to latch onto. I would suggest the best route would be to lock horizontal position. Yes ?
Should be good enough.

The NIST description is slightly vague...
The chosen feature was the top of the parapet wall on the roofline aligned with the east edge of the louvers on the north face.
I take that to be the position at the right hand edge of the *black box* on the facade (for a few reasons that might not be obvious).
Not obvious to me. I'd have assumed "the louvers on the north face" are on the wall of the building below the roof, and the feature is above and aligned with the left edge of those louvers in the photograph.

Any preference on units ?
Feet, only because the WTC7 data you've already posted are in feet.
 
A point toward the center of the roofline would be closer to the center of gravity, which would simplify the physics and improve accuracy of the analysis.
I'll stick it at the point the kink develops then.

Feet, only because the WTC7 data you've already posted are in feet.
Okay, though I'll additionally include the pixel data with a single-cell scalar.

Now, just the NIST Camera #3 data ? (NW corner, NE corner, Kink)

Include static point data ?

Include ANY horizontal data ? (Will make the spreadsheet cleaner if omitted)

(Not a problem to dump the Dan Rather data. The metrics on the NIST view are more accurate)
 
WD,

Abstract: The corner's downward acceleration of greater than 1g at the beginning of its descent is just what you'd expect from the physics.

Stating the obvious, if in freefall, the accel jumps to 1.0g at the start of fall & stays there.

I don't know what t=0 means in your graphs...,

Just to make the math simple, I indexed t = 0 to the start of downward motion.

The IDing of t0 is a complexity for all the analyses.

The corner's descent is preceded by some oscillation and begins in earnest near t=4.871538205. One second later, at t=5.872539206, the corner of the building has dropped 19.7 feet. Two seconds later, at t=6.873540207, the total drop for the first two seconds is 74.0 feet. That's an average of over 1.2g for the first second, and about 1.15g over the first two seconds.

I have to retract this statement: " For example, we know that downward accelerations greater than 1g are physically implausible. "

That isn't true in this case, because the corner being tracked is not the roof's center of gravity.

Looking at the YouTube video femr2 cited, I see that the corner did not begin its descent until some time after the opposite side of the building had already fallen some distance and built up downward velocity. That means the roof's center of gravity began its descent before t=4.871538205, and the roof had rotated during that descent.

Taking an idealized view of the situation, let's assume the roof's rotation ends abruptly at t=4.871538205, and the corner's downward component of velocity (from that time onward) is equal to the downward velocity for the roof's center of gravity. That implies a very large downward acceleration for the corner at t=4.871538205 as its velocity rises rapidly from zero to match the velocity of the roof's center of gravity, to which the corner is still attached. If you spread that acceleration over the first two seconds, say, you'll get accelerations that look like what you see in femr2's data and in tfk's graphs.

Four ways (that I can think of) to get > 1G fall:

1. A falling rigid body that is rotating. (If rotating clockwise, the part at 3 o'clock will have a downward velocity greater than can be attributed to g. It will actually have a linear downward acceleration greater than g for positions between 12 & 6 o'clock, reaching a max at 3 o'clock. It will have a downward acceleration less than g for positions between 6 & 12, reaching a minimum at 9 o'clock.)

I think this is implausible as a cause, because the rotation is so slow & the roof line is at approx. 1 o'clock to the cg, giving any rotation a tiny effect.

2. A variant of the same: a falling lever, pinned to the ground at one end. The free end of the lever falls at slightly greater than g. Favorite physics demo (see the sketch at the end of this post).
http://www.youtube.com/watch?v=SfZk6o88nSU&feature=related
I think this is possible, but it requires an internal member to be supported on a structure, and a heavy weight to fall on an intervening beam. Possible, but IMO, unlikely.

3. Hang a heavy weight out in space off of the roof of a building. Attach it to an object on the roof with a beam. Put pivot joints at each end of the beam. Drop the weight. The weight falls at g. Initially, the beam pivots, and the object to which it is tied stays stationary on the roof. Finally, the beam can pivot no more, and the object is jerked off the roof. The object's initial acceleration will be greater than g.

This is similar to the description that you proposed, WD.

But here's a slight variant that I think has merit. (Think of Sprint's "pin drop" commercial.)

4. Everything starts just as WD described it. The wall fails first near the east end, where we know the initial failure occurred (i.e., at the kink). The east end of the wall falls nearly at G because it has a multi-story buckling failure. The wall as a whole falls & pivots counterclockwise, west end high, because the west end has not yet failed. The bulk of the wall & attached structure builds up momentum.

This is just like the situation you described, WD.

Suddenly, some point at the bottom of the falling section near the east end hits some significant resistance. As we know it ultimately will.

As long as the impact point was east of the wall's c.g., this impact would transmit a huge dynamic load thru the wall, perhaps instigating the failure at the west end of the wall.

If true (and it does make sense), there should be a sudden, perhaps measurable drop in both the CCW angular velocity of the wall & the downward linear velocity of the wall towards the east end just before the west end starts its fall.

If it turns out that the initial G is > 1, then this is, IMO, the best explanation.

I'm not yet convinced that the initial G is this high. But I will believe good data.
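
On item 2 above (the falling-lever demo): the textbook result for a uniform rod of length L hinged at one end and released from rest at an angle θ above horizontal is that the free end's downward acceleration is (3/2)·g·cos²θ, which exceeds g whenever θ is below about 35°. A small sketch under those idealized assumptions (just the demo physics, not a model of the building):

```python
import numpy as np

G = 32.2  # ft/s^2

def tip_downward_acceleration(theta_deg, g=G):
    """Downward acceleration of the free end of a uniform hinged rod at the
    instant of release from rest, theta measured above the horizontal:
    a_tip = (3/2) * g * cos(theta)^2, which exceeds g for theta < ~35 degrees."""
    theta = np.radians(theta_deg)
    return 1.5 * g * np.cos(theta) ** 2

print(tip_downward_acceleration(0.0))   # 48.3 ft/s^2 = 1.5 g for a horizontal rod
```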
 
