Ringed Saturn Visible All Night

Could you just clarify SNR please - presumably, put simply, it's a measure of the 'wanted' photons to the 'unwanted'? How does stacking alter the ratio? Does it concentrate the wanted photons, but because the unwanted are more 'random' they don't get so concentrated?

The noise level is like the standard deviation of the count, and the signal is the actual level from the photons you intend to catch. By stacking, the noise level goes up, but not as fast as the signal level. The signal level should be proportional to the amount of stacking you do, but the noise level goes up more slowly - in the same way that if you were to toss a coin n times you would expect 0.5n heads to come up, but you'd expect the standard deviation in the number of heads to be 0.5 sqrt(n). The absolute number of heads you are likely to be away from 0.5n goes up the more times you toss the coin, but as a proportion of the total it goes down.
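
To put rough numbers on that, here's a minimal numpy sketch (all values invented for illustration) showing the signal-to-noise ratio of a stack growing as sqrt(n):

```python
import numpy as np

rng = np.random.default_rng(0)

signal_per_frame = 100.0   # mean "wanted" level per frame
noise_sigma = 20.0         # per-frame random noise (standard deviation)

for n in (1, 4, 16, 64):
    # n frames of the same signal, each with fresh random noise
    frames = signal_per_frame + rng.normal(0.0, noise_sigma, size=(n, 10_000))
    stacked = frames.sum(axis=0)
    # Signal grows as n, noise only as sqrt(n), so SNR grows as sqrt(n)
    print(f"n={n:3d}  SNR ~ {stacked.mean() / stacked.std():.1f}")
```

Each fourfold increase in the number of frames roughly doubles the SNR.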

When it comes to planetary imaging I guess my point was that it is comparatively easy to collect lots of photons in a few minutes of imaging (unlike a lot of deep sky work), so after a while you stop gaining in useful signal to noise - you've beaten the noise down to invisible levels fairly quickly, but the longer you image for, the more good frames you are likely to have, and the more finely you can cream off just the best frames while still getting enough signal.
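
As a rough sketch of that 'creaming off' step (real tools like AutoStakkert!2 use much more sophisticated quality metrics), you could rank frames by a simple sharpness score and keep only the best fraction:

```python
import numpy as np
from scipy.ndimage import laplace

def sharpness(frame):
    # Variance of the Laplacian: blurred frames score low, crisp frames high
    return laplace(frame.astype(float)).var()

def best_frames(frames, keep_fraction=0.10):
    # Rank all frames by sharpness and keep only the top fraction
    scores = np.array([sharpness(f) for f in frames])
    n_keep = max(1, int(len(frames) * keep_fraction))
    return [frames[i] for i in np.argsort(scores)[-n_keep:]]

# e.g. stack = np.mean(best_frames(all_frames), axis=0)
```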

Unfortunately it's cloudy here tonight though, and it won't be till Monday I might get a chance to get out myself.
 
The big contributor to noise comes from the camera itself. That is why dedicated astro cameras are cooled. This noise is mitigated by noise reduction techniques such as dark frames. This in effect measures and subtracts the camera noise from the image. DSLRs have noise reduction programs that can be activated to automate this, as manually taking dark frames is time-consuming.
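
In code terms, dark-frame subtraction amounts to something like this - just a sketch, assuming the frames are already loaded as numpy arrays:

```python
import numpy as np

def subtract_master_dark(light, darks):
    # darks: stack of shutter-closed frames (n, H, W), taken at the same
    # exposure length and sensor temperature as the light frame.
    # Median-combining them beats down the darks' own random noise.
    master_dark = np.median(np.asarray(darks, dtype=float), axis=0)
    # What's left in master_dark is the camera's consistent noise pattern,
    # which can simply be subtracted from the image of the target.
    return light - master_dark
```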

Here is a link on dark, flat and bias frames for calibrating the camera and reducing noise in astrophotography:

http://www.mediafire.com/view/kwyryimjjhu/DSLR_Astrophotography.pdf
 
When it comes to planetary imaging I guess my point was that it is comparatively easy to collect lots of photons in a few minutes of imaging (unlike a lot of deep sky work), so after a while you stop gaining in useful signal to noise - you've beaten the noise down to invisible levels fairly quickly, but the longer you image for, the more good frames you are likely to have, and the more finely you can cream off just the best frames while still getting enough signal.
Aah ... so that's why the exposure time needs to be short - so that the noise falls below the register threshold for each frame, and therefore never features in the stacking process as it would with a long exposure, right?
 
The big contributor to noise comes from the camera itself. That is why dedicated astro cameras are cooled.
My Dobsonian has a cooling fan under the mirror, but I understood that it's to help it get down to ambient temperature quicker when moving it from inside to outside. Would that be right, or is it to do with noise? Never used the fan, BTW.

This noise is mitigated by noise reduction techniques such as dark frames. This in effect measures and subtracts the camera noise from the image. DSLRs have noise reduction programs that can be activated to automate this, as manually taking dark frames is time-consuming.
Sounds like the visual equivalent of Bose noise cancelling headphones, would that be correct?

I've just 'borrowed' a remote webcam from our conference room at work to give it a go over the weekend. It's the type that sits on top of a TV screen. Just wondering how best to try attaching it to the telescope. Presumably it needs to sit in front of the eyepiece, where the human eye would normally be. Any ideas?

Also, I connected it to my laptop just to work out how to video with it. I've downloaded AVS Video Editor, which seems to work fine. I noticed that it saves the video as an MPEG file. Will I be able to use this with some freebie stacking software, such as Registax?
 
Aah ... so that's why the exposure time needs to be short - so that the noise falls below the register threshold for each frame, and therefore never features in the stacking process as it would with a long exposure, right?

When photographing bright planets the exposure needs to be kept short to prevent overexposing the image. The noise is dealt with by stacking and correct calibration.

Once you overexpose the image, there is no way to retrieve planetary detail.
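
One way to guard against this during capture - just a sketch, assuming 8-bit webcam frames loaded as numpy arrays - is to check what fraction of pixels sit at the sensor's maximum value, and back off the exposure or gain if any part of the planet's disc is clipped:

```python
import numpy as np

def saturation_fraction(frame, bit_depth=8):
    # Pixels at the maximum representable value are clipped: whatever
    # detail was there is gone, and no amount of stacking recovers it.
    full_well = 2**bit_depth - 1
    return float(np.mean(frame >= full_well))

# e.g. if saturation_fraction(frame) > 0 on the disc, shorten the exposure.
```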
 
My Dobsonian has a cooling fan under the mirror, but I understood that it's to help it get down to ambient temperature quicker when moving it from inside to outside. Would that be right, or is it to do with noise? Never used the fan, BTW.

The mirror is a thick piece of glass, and the fan speeds the cooling of the mirror to ambient. When the mirror is "hot" it expands and distorts, which causes all kinds of optical aberrations.


Sounds like the visual equivalent of Bose noise cancelling headphones, would that be correct?

Perhaps, I would have to give that some thought.

I've just 'borrowed' a remote webcam from our conference room at work to give it a go over the weekend. It's the type that sits on top of a TV screen. Just wondering how best to try attaching it to the telescope. Presumably it needs to sit in front of the eyepiece, where the human eye would normally be. Any ideas?

Here is a tutorial on using a webcam:

http://www.astro.shoregalaxy.com/webcam_astro.htm

Also, I connected it to my laptop just to work out how to video with it. I've downloaded AVS Video Editor, which seems to work fine. I noticed that it saves the video as an MPEG file. Will I be able to use this with some freebie stacking software, such as Registax?

Registax and AutoStakkert!2 are examples of freebie software for dealing with MPEG video. There are quite a few free downloads for this purpose, but the two mentioned will do the job.

The most important thing is to have fun. It is a steep learning curve, it can get expensive, and if you are not having fun you may get frustrated.
 
Aah ... so that's why the exposure time needs to be short - so that the noise falls below the register threshold for each frame, and therefore never features in the stacking process as it would with a long exposure, right?

There's the overexposure issue Skwinty mentioned, but you also want it short so that you can capture an image on the sort of timescale of the atmospheric fluctuations that cause blurring - you want to catch those brief instants when the seeing is particularly sharp.
 
I've just 'borrowed' a remote webcam from our conference room at work to give it a go over the weekend. It's the type that sits on top of a TV screen. Just wondering how best to try attaching it to the telescope. Presumably it needs to sit in front of the eyepiece, where the human eye would normally be. Any ideas?

Also, I connected it to my laptop just to work out how to video with it. I've downloaded AVS Video Editor, which seems to work fine. I noticed that it saves the video as an MPEG file. Will I be able to use this with some freebie stacking software, such as Registax?

Unfortunately webcams usually come with quite wide-angle lenses attached, which makes them unsuitable for this kind of imaging as they are. In the process of modding one to mount easily on a telescope, the lens is also taken out so that just the telescope optics are used. You will probably find it won't do the job without surgery that your workplace is unlikely to want you to perform!

You'll also want to use an uncompressed format if you can. I don't know much about suitable recording software for that (as a Mac user with a barely compatible webcam I have very limited choices myself).
 
Sounds like the visual equivalent of Bose noise cancelling headphones, would that be correct?

I was going to post a reply to the earlier one, then had a thought, hence two posts in a row here replying to the same post - sorry!

Noise cancelling headphones take the external 'noise' - actually, in a certain sense, an unwanted signal - invert it, and play it back together with the desired signal.

An astronomical image has several components that are unwanted, but all end up recorded on the same detector in a way that can't be directly disentangled.

You have the signal from space you're actually trying to measure - let's call that S.
The camera has a bias level which is always present at the same level no matter the length of exposure - call that B.
It has the dark current, which is a function of the exposure time and the sensor temperature - roughly speaking it goes up proportionally to the exposure time and as some function of temperature too. Call that D(t,T).
And the entire optical system has a throughput per pixel that includes things like vignetting - the image getting darker towards the edges - and things like dust on the sensor causing local reductions in the throughput. I'll call that F.

Then the result R you record at a pixel is
R = B + D(t,T) + F*S
and each one of those except F is also subject to some level of noise. The dark current, for example, will have its own fluctuations, although in the end result you won't be able to tell the overall statistical noises apart.

The astronomer records zero-length exposures (or as near to zero length as they can) in absolute darkness to try to estimate what B is. They stack these zero-length exposures to get an estimate of B without much noise in it. Then they can simply subtract B from R.

They do the same sort of thing by taking exposures of length equal to their actual exposures to estimate D. They take B off their stacked dark images to find D (as the darks also contain the bias), then take their estimate of D off R as well. You might ask why they don't just subtract the dark frames with the bias still included and be done with it, but by getting D separately you can use estimates of how it varies with time t and temperature T to adapt the dark images to normal exposures taken in different conditions.

A DSLR will do that sort of step itself if you turn noise reduction on - it'll immediately take a dark frame and subtract it from the image. This is good because it is convenient, and the dark is taken at the same temperature and exposure as the image you just shot, but it often has the major downside that, if you're exposing continuously, the camera will insist on taking up half your time on the sky shooting darks - so many of us do this manually before or after. Darks also deal with certain kinds of sensor defects: they can help you map dead pixels, which always record ~0, or 'hot' pixels, which always put out some high value, and exclude those.

Lastly, you take images of as uniformly lit a surface as you can, called flat frames, and use them (minus the bias and dark) to estimate what F is.

You can then take your actual images, subtract B and D, divide by F and hopefully get something sane at the end, and rely on stacking to take care of the random noise in B+D and S.
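
Put together, the whole recipe looks something like this - a minimal numpy sketch, assuming the bias, dark and flat stacks are already loaded as arrays matching the light frame's shape, and that the darks match the lights' exposure and temperature (strictly, the flats want their own matching dark subtracted too):

```python
import numpy as np

def calibrate(light, biases, darks, flats):
    B = np.median(biases, axis=0)       # master bias
    D = np.median(darks, axis=0) - B    # master dark: dark current only
    F = np.median(flats, axis=0) - B    # master flat (bias removed)
    F = F / F.mean()                    # normalise so division keeps the overall level
    # Invert R = B + D(t,T) + F*S to estimate the sky signal S
    return (light - B - D) / F
```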

Now that's all important in deep sky imaging where S is often tiny, but in planetary imaging you can find the bias and dark are not all that important. They're also per-pixel effects, so they can be averaged out to some extent by imaging the planet on different parts of your sensor over time. I don't bother with bias or darks at all for planetary imaging. I personally don't bother with flats either, as vignetting isn't too severe on the scales I get to view planets at, and the image bounces around so much that dust is rarely a major issue (it has been in a couple of cases, actually, and I've regretted not taking flats).

Finally, the signal itself often has the unwanted component of light pollution. Fortunately this has the property of being relatively smooth over the scale of an image, so you mask off the bright pixels where you think your actual signal is strong, fit a smooth surface to the rest, and subtract it off. Again, that's not a major problem for bright planets, but it's a really major thing to do for deep sky imaging for some of us.
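
The crudest version of that 'smooth surface' is a plane fitted to the faint pixels by least squares - a sketch of the idea only, as real tools fit higher-order surfaces:

```python
import numpy as np

def subtract_gradient(image, bright_percentile=80):
    # Treat the brightest pixels as "real signal" and exclude them from the fit
    H, W = image.shape
    yy, xx = np.mgrid[0:H, 0:W]
    bg = image < np.percentile(image, bright_percentile)
    # Least-squares fit of a plane a*x + b*y + c to the background pixels
    A = np.column_stack([xx[bg], yy[bg], np.ones(bg.sum())])
    (a, b, c), *_ = np.linalg.lstsq(A, image[bg], rcond=None)
    return image - (a * xx + b * yy + c)
```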

So... is it like noise cancelling headphones? Well both try to guess what the unwanted signal is and just subtract it, but noise cancelling headphones have a much easier way of doing it on the spot. They know the signal they're trying to get to the detector in your ear and can intercept the additional noise to apply the subtraction. An astronomer has to guess the signal before or after the recording is done and has to hope they can average out the fluctuations alright.
 
So ... I've been playing around tonight with a Logitech webcam fixed to a telescope eyepiece with a makeshift jubilee clip/elastic band arrangement, just to see whether I could get any sort of result.

Here's my first attempt at videoing and stacking using Registax.



I'm not sure whether stacking is necessary with lunar shots given the brightness of the moon. I've taken some stills with the same webcam that seem equally nice.

The telescope is a 200mm aperture, 1200mm focal length Dobsonian with a 6mm Super Plossl eyepiece fitted.

:)
 
Thanks - I reckon it could get addictive - but it requires patience (and a fully charged laptop battery! :mad:).

And don't forget the very deep pockets to do it properly. If you are really going to get into this, you are going to need to sell that scope and get something with an equatorial mount and drives to match.
 
[post above about image stacking].

I understand this on a simpler level; perhaps it is correct, just not as detailed:

The image is formed from a signal that you want and a signal that you don't want. The signal that you don't want is of two kinds: random and consistent. There isn't anything that can be done about the consistent noise with software. If you want to get rid of that you need to do something in the image-making process to get rid of it. It is the same from image to image, and stacking images doesn't get rid of it. However, the part of the signal that is random can be reduced by stacking images, because it varies from image to image, whereas the signal that you do want is the same from image to image.
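
A tiny simulation (with made-up numbers) shows exactly that: averaging 100 frames shrinks the random part of the noise about tenfold but leaves the consistent part untouched:

```python
import numpy as np

rng = np.random.default_rng(1)
true_signal = 50.0
fixed_pattern = rng.normal(0, 5, size=1000)    # consistent: identical in every frame

n = 100
frames = (true_signal + fixed_pattern
          + rng.normal(0, 5, size=(n, 1000)))  # fresh random noise in each frame
stack = frames.mean(axis=0)

print((stack - true_signal).std())                   # ~5: the fixed pattern remains
print((stack - true_signal - fixed_pattern).std())   # ~0.5: random part beaten down
```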

As I write this I see three ways that this kind of image stacking might work.
1. Add up all the values for a pixel and average them out.
2. Look at the value for the pixel in each of the images and keep the value that is most commonly represented.
3. It might be possible to get some information about what the value of the pixel is supposed to be by looking at neighboring pixels.

I don't know which technique is used; perhaps a bit of each technique is used.

This kind of image stacking seems quite a bit different from image stacking designed to improve depth of field. In that case, the software needs to look at all the images, find areas that it determines to be in focus, and use those pixels to make up the final image.

And as edd noted, all this is very different from noise cancelling in headphones, which attempt to sample background noise with a microphone pointed away from the probable direction of the desired signal source, and then add an inverted sample of the background noise into the signal that is sent to the headphone speaker, thereby subtracting the background noise from what you hear.

Noise cancellation is also possible on microphones. The simplest idea is passive noise cancellation, where a sample of the ambient noise is ported to behind the microphone. The idea is that the ambient noise will be the same at the front and back of the microphone and will cancel itself out, whereas the intended sound source will be mostly present at the front of the microphone.
 
I understand this on a simpler level; perhaps it is correct, just not as detailed:

The image is formed from a signal that you want and a signal that you don't want. The signal that you don't want is of two kinds: random and consistent. There isn't anything that can be done about the consistent noise with software. If you want to get rid of that you need to do something in the image-making process to get rid of it.
Right. In the image acquisition process it is possible to reduce the visibility of the 'consistent noise' by dithering - moving the position of the target on the sensor so that the unwanted signal is spread over the image in a less obvious way. The better way of course is to try to measure and subtract it if you can.

As I write this I see three ways that this kind of image stacking might work.
1. Add up all the values for a pixel and average them out.
2. Look at the value for the pixel in each of the images and keep the value that is most commonly represented.
3. It might be possible to get some information about what the value of the pixel is supposed to be by looking at neighboring pixels.
The better stacking software will allow mean, median and perhaps modal averages as you describe, and also allow clipping. It'll look at all the pixel values you have and cut out particularly extreme values before the averaging (handy if a plane gets in the way in one exposure, for example). Each variety of algorithm has different strengths and figuring out which to use is very much an experimental art.
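
For instance, one round of clipped mean stacking might look like this sketch (assuming the aligned frames are a numpy array of shape (n_frames, H, W)):

```python
import numpy as np

def clipped_mean_stack(frames, n_sigma=3.0):
    frames = np.asarray(frames, dtype=float)
    mu = frames.mean(axis=0)
    sigma = frames.std(axis=0)
    # Mask out values more than n_sigma from the per-pixel mean
    # (a plane crossing one frame, a cosmic ray hit, etc.)
    ok = np.abs(frames - mu) <= n_sigma * sigma
    # Average only the surviving values at each pixel
    return np.where(ok, frames, 0.0).sum(axis=0) / np.maximum(ok.sum(axis=0), 1)
```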

When it comes to looking at neighbouring pixels as well this is used in more explicit denoising routines later on, but not so much in the stacking process per se - at least not in any of the stacking routines I've used.

Looking at neighbouring pixels is also really important in a lot of the other image processing that goes on of course.
 
A little update on my attempts to image the moons of Jupiter with my SX1 camera:
I tried imaging the moons of Jupiter the next night. I didn't see them, and I was afraid the moons I'd picked up the previous night were actually stars, so I got discouraged and pretty much gave up until I could check my pictures against pictures I plan to make with my new DSLR and my brother's telescope. But then I looked at the pictures from the second night on my computer instead of in the camera, and I saw little faint dots where I had seen the moons the night before. The problem was that on the previous evening I had used a longer exposure, which made seeing the moons easier; it didn't affect the image of Jupiter much, because the difference between completely overexposed and even more completely overexposed is insignificant.

I did try imaging Jupiter itself to see if I could pick up the red spot. This was tedious and unsuccessful. It was hard to get exactly the right exposure: Jupiter was either overexposed to the point of obscuring all detail or underexposed to the point of being nearly invisible. My guess is that with image stacking and a lot of patience I might be able to see the red spot, but I wasn't close to that with my effort and skill level.

I'm looking forward to hooking up my new camera to my brother's telescope but I think he's still banging around his closets trying to figure out what he did with it. It was a pretty cool telescope with a motor drive with a control that let him just put in the name of what he was looking for and the scope would move to image it.
 
A little update on my attempts to image the moons of Jupiter with my SX1 camera:

<snip>

I'm looking forward to hooking up my new camera to my brother's telescope but I think he's still banging around his closets trying to figure out what he did with it. It was a pretty cool telescope with a motor drive with a control that let him just put in the name of what he was looking for and the scope would move to image it.

Hi davefoc. To avoid further frustration and disappointment it's probably worth investing a couple of hours of your time reading up on telescopes, and understanding what the limiting factors are to even seeing celestial bodies well, let alone photographing them.

I've found this a particularly interesting and readable reference. There's stacks more on the technicalities of telescopes and viewing the night skies - just go to the home page and start there.

Much of what is written about the characteristics of telescopes can be extrapolated and applied to cameras. Accordingly, you'll soon realise the limitations of a regular DSLR camera (even with a zoom lens) in photographing planets and their moons.

The biggest upshot, in my opinion, is that size matters, by which I mean light-gathering capability, determined by aperture. Without a relative abundance of light you have nothing to work with. In my view, 300mm of aperture is the minimum you should be aiming for. My Dobsonian is 200mm, and while it's great for observing the Moon and picking out the rings of Saturn, the tropical bands on Jupiter and the larger moons, if it's detail you're after then it's a little undersized. That said, I've never strayed from the back yard with it, and I live in a suburban neighbourhood, so the seeing conditions are never great.

Seriously davefoc, be patient and read up a little to understand what you're up against.

Good luck!
 
Taken an age to get the opportunity, but here we go. Logitech webcam, 2x Barlow, 11" SCT, AutoStakkert and PixInsight for the processing:
[attached image]

Atmospheric dispersion is painful, making that rainbow effect. That said, it's still a great sight. Always a delight to look at by eye.
 
Taken an age to get the opportunity, but here we go. Logitech webcam, 2x Barlow, 11" SCT, AutoStakkert and PixInsight for the processing

Atmospheric dispersion is painful, making that rainbow effect. That said, it's still a great sight. Always a delight to look at by eye.
Well worth the effort edd I'd say - I'd be pretty pleased with that result!
 
