femr,
A couple of questions about your data.
You've described it as "assuming it adhered to ITU-R BT.601". Having been broadcast in the US, would it not originally have been in NTSC-M? From what you can tell, was the original source taken from a recording of a broadcast? And what was the frame rate of the original broadcast?
This chart shows your static points, NormY & DJ_Y. (What do "Norm" & "DJ" stand for?)
I understand that you're getting rid of jitter by averaging points.
Couple of questions:
I assume that your program takes data points from both frames, applies a built-in "interlace offset Y" factor to one frame's data, and combines the two to calculate an accurate "interlaced" resultant Y. (There's a sketch of what I mean after the questions below.)
1. Looking at your NormY graph, the jitter is mostly about 0.5 pixel, but occasionally it drops to about 0.1 pixel. Any reason for that?
2. If that "jitter" periodically drops to 0.1 pixel, does that mean that the interlace offset is correct, and that the excess "jitter" is somehow related to the algorithm that picks out the feature?
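To pin down that assumption, here's a minimal sketch in Python of the process I have in mind. Everything in it is my own guess: the names, the half-scanline offset, and the averaging window are mine, not anything taken from SynthEyes or your scripts.

INTERLACE_OFFSET_Y = 0.5  # assumed: half a scanline between the two frames' samples

def interlaced_y(frame_a_y, frame_b_y):
    # Shift one frame's Y sample by the built-in offset, then combine
    # the pair into a single "interlaced" resultant Y.
    return (frame_a_y + (frame_b_y + INTERLACE_OFFSET_Y)) / 2.0

def dejitter(y_samples, window=5):
    # Simple moving average over neighbouring samples; this is how I
    # understand the jitter is being removed.
    half = window // 2
    return [
        sum(y_samples[max(0, i - half):i + half + 1])
        / (min(len(y_samples), i + half + 1) - max(0, i - half))
        for i in range(len(y_samples))
    ]

If that's roughly right, then question 2 above is really asking whether INTERLACE_OFFSET_Y is correct.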
I've got a couple of other questions, but they depend on that first assumption being correct. So for now, can you just describe the process that SynthEyes uses to determine these points?
tom