Suppose we kept flashing the surface from above with bright lights, back and forth from different angles over time. Would we not be able to "pick out the persistent surface patterns" in the RD images?
That is a post hoc rationalization, an apparent attempt to cover a serious deficiency in your understanding. There are no bright flashing lights involved in creating a running difference image. There are two source images, neither of which has anything to do with "flashing the surface from above with bright lights, back and forth from different angles over time". Each source image is a measurement of thermal characteristics. Typically, 171Å light shows us where the corona is around a million kelvin. That's the corona, from several thousand to tens of thousands of kilometers above the photosphere, nowhere near your imaginary iron shell.
The brightness of any one pixel in the 171Å source image is set by that filter's sensitivity to light emitted by plasma in roughly the million-kelvin range. Dimmer pixels correspond to plasma that is either cooler or hotter than that peak. All the light comes from thousands of kilometers above the photosphere. The source images are gathered much like a regular camera gathers light for a snapshot. They're still images, taken with no "flashing the surface from above with bright lights, back and forth from different angles over time". To suggest otherwise is a gross misrepresentation of how it actually works.
To make a running difference image, a second 171Å source image is used along with the one described above, taken some amount of time after the first. It is another still image: no flashing, no angles, no back and forth. All the pixels in that second image are caused by the same thing as those in the first, light at 171Å, typically generated in plasma around a million kelvin. Hotter plasma won't show in the image; cooler plasma won't show. (Brighter does not mean hotter. It means closer to the peak sensitivity of the filter.) And just like the first image, all the light comes from several thousand to tens of thousands of kilometers above the photosphere.
So we take these two images, both created by 171Å light, all of which comes from the corona, neither of which involves anything like "flashing the surface from above with bright lights, back and forth from different angles over time", and we do a simple mathematical comparison. Each pixel's brightness is represented as a numerical value. (Yes, this stuff is quantitative, regardless of Michael's dreaded avoidance of that part of science.) Pixel A1 in Image 1 is compared to pixel A1 in Image 2. The result of that calculation, simply the mathematical difference between the values of the two pixels, becomes pixel A1 in the output, the running difference graph. Move on and do exactly the same thing to pixel A2, then A3, then A4... and eventually the output of the process is complete.
What we end up with is a graphical representation of a mathematical comparison of the values of corresponding pixels in a pair of source images. It's a graph, a chart, a way to visually represent the change in brightness between the pixels of the original images. The running difference image is no longer a picture of anything physical. You can't see things in it: no surface, no hills, no valleys, nothing. Nothing will appear in the output that didn't appear in one of those source images and change in the other.
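The pixel-by-pixel procedure described above is trivial to write down. Here is a minimal sketch in Python using NumPy, with tiny 3x3 arrays standing in for full-resolution frames (the numbers are made up for illustration; real pipelines typically also rescale or clip the result to a display range):

```python
import numpy as np

def running_difference(frame1, frame2):
    """Subtract an earlier frame from a later one, pixel by pixel.
    Any pixel whose brightness did not change comes out as exactly zero."""
    # Cast to a signed type so that dimming (negative differences) survives.
    return frame2.astype(np.int32) - frame1.astype(np.int32)

# Two hypothetical 171 Angstrom source images taken some minutes apart.
frame1 = np.array([[100, 120,  90],
                   [110, 200,  95],
                   [105, 115, 100]])
frame2 = frame1.copy()
frame2[1, 1] = 150  # only one pixel's brightness changed between exposures

diff = running_difference(frame1, frame2)
# diff is zero everywhere except at [1, 1], where it is 150 - 200 = -50.
```

Note that comparing any frame against itself yields an all-zero output: the running difference shows changes between exposures and nothing else.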
There. Every pixel in every running difference graph. Explained. Every single last blessed pixel. Explained. In plain English that checks out at about a 9th-grade reading level.
Now, if I'm wrong -- which, by the way, would make the people at LMSAL wrong, and the people at NASA, and all the people involved with the SOHO and STEREO and TRACE solar research projects, among others, who work with these images every single day -- if I'm wrong, and if there's a better way to describe it, quantitatively of course, Michael has yet to attempt it. For some reason he refuses to explain why, for example, the pixel in column 418, row 114 has the value it has in the first image, why that pixel in the second image has its value, and how those values, after a simple mathematical comparison, can suddenly become part of an actual picture of something that would have been impossible to see in either source image.
And my prediction is: we will never see his pixel-by-pixel explanation, because Michael does not have the qualifications he claims to have regarding his understanding of satellite imagery in general and running difference graphs in particular. His use of those very first images on his web site as evidence for his crackpot solid-surfaced Sun conjecture is fraud. And anyone who doesn't like it can sue me.
