If it comes with the cash, I'll take it.
A word to the wise: don't hold your breath.
If it comes with the cash, I'll take it.
I really don't have the slightest idea what you're talking about.
The neon layer, and the oxygen and other ionized elements inside that neon. If I could figure out what absorbs 94A, I might figure out what "cools off" and then is re-excited, and that might allow me to give you a temperature range of something. Without that bit of info, however, I'm clueless about how to give you temperatures at the surface of the photosphere, so I'll stick with the standard model for the time being.
A word to the wise: don't hold your breath.
I want quantitative predictions, complete with error bars about the location of the sphere in an RD image at various iron ion wavelengths with respect to the chromosphere. I need numbers. Got numbers?
What do you mean by quantitative predictions? About what? What do you mean by error bars? What sphere are you talking about? In which running difference image(s)? Which various iron wavelengths? With respect to the chromosphere how? Really, Michael, what the hell are you babbling about?
Does anyone else have the slightest idea what this constant incoherent badgering and taunting is all about?
Wow, thanks very much GM!

Running difference images made from the color-separated pairs of images above wouldn't show much contrast, because the source images are only 10 frames apart. But with a 100-frame offset, the running difference images would be what you see in any single frame from the videos below.
This first video was made by removing the green and blue from the source video, leaving just the red. Then all the red was converted to grayscale. Then two frames are taken from the video 100 frames apart, starting with frames 1 and 100. Then 50% gray is added to each pixel in the first image, and the second image is subtracted, pixel by pixel, from the first. The result becomes frame 1 in the output. Then move to frame 2 and frame 102, and repeat with every pair of frames X and X+100... (These pixel values are numerical values of gray, from 0 = black to 255 = white.)
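The per-pixel arithmetic described above can be sketched in a few lines. This is a minimal illustration, not the poster's proprietary script; the function name and frame representation (plain 2-D lists of 0-255 gray values) are my own choices:

```python
def running_difference(frames, offset=100):
    """Build running-difference frames from a sequence of grayscale frames.

    Each frame is a 2-D list of pixel values (0 = black .. 255 = white).
    Each output pixel is 50% gray (128) plus the earlier frame's pixel
    minus the later frame's pixel, clamped to 0..255, matching the
    procedure described above.
    """
    out = []
    for i in range(len(frames) - offset):
        first, second = frames[i], frames[i + offset]
        diff = [
            [max(0, min(255, 128 + a - b)) for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(first, second)
        ]
        out.append(diff)
    return out
```

Regions where nothing changed between the two source frames come out as flat mid-gray (128); brightening and dimming show up as departures above and below it.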
Then of course these videos are sized down to 640x320 and letterboxed to 640x480 to make them fit YouTube and common video viewers. And I trimmed them down to just the first 20 seconds to make more reasonable download sizes.
The second video, above, is the same only I took out the red and the blue, leaving only the green from the original video. And the third, below, is the running difference video made from just the blue. I believe these red, green, and blue colors represent 211Å, 193Å, and 171Å source data respectively.
I use a proprietary script I wrote myself to do this processing, so I won't be more specific. Anyone with a little math background, a modicum of expertise in computer video and graphics, and some reasonable programming skills could certainly do this.
This is a very important point for this discussion. Although the running difference material we find at NASA and LMSAL is probably made pretty much exactly the way I've done it, the results can look quite different depending on just a couple of choices.
First, if you subtract frame x+1 from frame x+100 you get the videos we see above. If you instead subtract frame x+100 from frame x+1 you end up with something that looks like a negative of that video. Either the lighting comes from the other side of the mountain or your mountain turns into a valley. Take your pick.
Second, the contrast between images will obviously be affected by the offset, or how many frames apart you use for the compared images. And you need to remember that is based on the time difference between the original frames, too. A running difference video made with an offset of 10 frames might show so little change that it would look almost like a smooth gray throughout. Compare images 100 frames apart and you can see the changes between source images more clearly. You can shrink and grow your mountains by comparing frames closer or further apart in the sequence.
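Both points - the order of subtraction flipping the image into its negative, and the clamp at the ends of the gray range - are easy to check with a toy example (the pixel values here are made up for illustration):

```python
def gray_diff(a, b):
    """50% gray plus (a - b), clamped to the 0-255 grayscale range."""
    return max(0, min(255, 128 + a - b))

# Two illustrative pixel values, `offset` frames apart in the source video:
earlier, later = 90, 140

forward = gray_diff(earlier, later)  # 78: darker than mid-gray
reverse = gray_diff(later, earlier)  # 178: the "negative" rendering

# Away from the clamp, the two orderings mirror each other around mid-gray:
assert forward + reverse == 256
```

Which is exactly the "mountain turns into a valley" effect: swapping the subtraction order reflects every pixel about 128.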
Many of the running difference images available from NASA and LMSAL have quite different sizes of mountains, some so huge that it's amazing we don't see them with the naked eye when there's a solar eclipse!
Do you - or any reader for that matter - know why MM would want to determine "the location of the sphere" from an RD image?!?

I want quantitative predictions, complete with error bars about the location of the sphere in an RD image at various iron ion wavelengths with respect to the chromosphere. I need numbers. Got numbers?
For five years, GM, you have dogged me around and complained that I never quantified anything and knew nothing about RD images. I have now put numbers on the table relating RD images to the chromosphere. It's your turn. You've probably called me a crackpot on the internet 10,000 times by now. Don't you think a "professional" would put up some numbers if a mere hick from Mt. Shasta can? Can't you compete with a guy who can't balance his checkbook?
So I'll just mark you down as "clueless coward"?
Yes. According to the standard model, that is where your coronal loops should start to light up. Don't you think we should use that as the starting point?
According to this model, the solid surface is located about 4800 km under the base of the chromosphere.
For the record.
OK, so adding "photosphere" - meaning where the white light we see when we look at the Sun comes from - "chromosphere", and "transition layer", the MM model looks like this. So the 3D model is (from the outside in):
coronal loops
--------------
bright and electrically-active layer
------------
"transparent neon-based substance TBD"
ETA "transparent silicon substance TBD"
------------
ultrahot iron vapor emitting 171A
----
solid iron at 2000K
Do I have that right? This is the first time you've mentioned light from the electrical activity, so forgive me if my mind-reading skills have put it in the wrong place. Which layer is responsible for the "green" emissions which end up in the famous 20-pixel-wide stripe on the 2D image provided by the SDO PR department? I've asked eight times; please explain already.
All that I'm looking for is where you expect to see that "opaque" (your definition) limb appear in an RD image with respect to the chromosphere. How hard can it be?
"How hard can it be to predict the length of my cat's tail? To predict the St. Louis Cardinal's performance this postseason? To predict the exact balance between emission and absorption in a tangential slice through the solar corona?"
Very hard. It depends on the detailed structure of the corona.
Where would you expect to find the opaque edge of an RD image in say 171A with respect to the chromosphere?
The partition function, you mean? It's a sort of generating function that contains any information you'd like to know about a thermodynamic system (one in thermodynamic equilibrium, anyway). You start with a Hamiltonian H, which gives the energy of the system in a given microscopic state. H depends on a bunch of parameters, some of which are microscopic degrees of freedom, and others of which are external, constant, or controllable. To get the partition function, you evaluate exp(-H/kT) for every possible microscopic state and sum the results. This sum is the partition function Z, which is a function of the parameters that aren't microscopic degrees of freedom (such as temperature, pressure, volume, external fields, etc.).
A lot of information can be obtained from Z by taking derivatives with respect to the parameters. Some useful thermodynamic identities can be proven from it, too.
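For a toy example of both points, here is the sum over states and one derived quantity for a simple two-level system (a minimal sketch; the function names are mine):

```python
import math

def partition_function(energies, kT):
    """Z = sum over microstates of exp(-E/kT)."""
    return sum(math.exp(-E / kT) for E in energies)

def mean_energy(energies, kT):
    """<E> = sum of E*exp(-E/kT) / Z, i.e. what -d(ln Z)/d(1/kT) gives you."""
    Z = partition_function(energies, kT)
    return sum(E * math.exp(-E / kT) for E in energies) / Z

# Two-level system with energies 0 and 1 (energy measured in units of kT):
levels = [0.0, 1.0]
print(mean_energy(levels, kT=1.0))  # e^-1 / (1 + e^-1), roughly 0.269
```

The same pattern scales up: add more microstates (or integrate over continuous ones) and every thermodynamic average falls out of Z the same way.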
I can't say I know anything about solar or plasma physics, but if someone is trying to describe the macroscopic properties of a plasma, the solar corona, or what-have-you by starting from microscopic properties, they'd probably be making use of the partition function at some point - though they'd probably also be using some fairly advanced techniques that I don't yet know. Looking at the Saha equation (which the post you quoted mentioned), it seems that one form of it does make use of Z.
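For reference, one common textbook form of the Saha equation does indeed carry the partition functions $Z_i$ of adjacent ionization stages (this is the standard form, not quoted from anyone in the thread):

```latex
\frac{n_{i+1}\, n_e}{n_i}
  = \frac{2\, Z_{i+1}}{Z_i}
    \left( \frac{2\pi m_e k T}{h^2} \right)^{3/2}
    e^{-\chi_i / k T}
```

Here $n_i$ and $n_{i+1}$ are the number densities of successive ionization stages, $n_e$ is the electron density, and $\chi_i$ is the ionization energy of stage $i$.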
And I seriously doubt someone would be able to overthrow any standard model in physics without knowing something as basic and ubiquitous as the partition function. =p
No. The key assumption is that the allegedly solid surface is being seen in profile, which is equivalent to assuming that the line of sight is tangent to the allegedly solid surface. So long as the allegedly solid surface is spherical, the tangent line will not intersect the allegedly solid surface.

A thought occurs to me (it happens occasionally - usually I try to suppress it before I hurt myself). If Michael's solid surface is at 4800 km, and he is claiming to see this surface by counting pixels at the limb, would the necessary oblique line-of-sight angle through the sun to the limb pass below 4800 km at some point? In other words, to see 4800 km deep at the limb in a 2D image, would he have to see through his iron surface on the way through?
Somebody not named Mozina please set me straight.
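The tangent-line geometry is easy to check numerically. A quick sketch - only the 4800 km figure comes from the thread; the solar radius and the chord formula are standard, and I'm treating the photosphere as the visible limb for illustration:

```python
import math

R_SUN = 696_000.0   # nominal solar (photospheric) radius in km
DEPTH = 4_800.0     # claimed depth of the solid surface below the limb

# A limb line of sight grazing a sphere of radius R_SUN - DEPTH is, by
# definition, tangent to it: its closest approach to the Sun's center is
# exactly R_SUN - DEPTH, so it never passes *below* the claimed surface.
r_surface = R_SUN - DEPTH

# It does, however, traverse a long chord through the 4800 km of overlying
# material (Pythagoras: half-chord squared = R_SUN^2 - r_surface^2):
chord = 2.0 * math.sqrt(R_SUN**2 - r_surface**2)
print(round(chord))  # 163200 km of overlying layers along that sightline
```

So the sightline never has to pierce the claimed surface, but any opacity in that roughly 163,000 km of overlying material sits between the observer and it.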
