Process


I only started astrophotography in late 2020, but I can already say I really *like* this hobby. Yes, some of that is because it is still fresh to me. But a lot of it is because it is not easy. It takes a non-trivial combination of knowledge, patience, intellectual curiosity, and financial resources to pull it off. Yes, this may sound elitist, and it probably is, but that doesn't make it any less true: very few people are capable of taking recognizable pictures of these objects in deep space, thousands of light years away from us.

On this page, I am going to give an overview of my process. This starts with image capture, then image pre-processing, and finally image post-processing. I fully expect things to change as I go along, so consider this a snapshot of my current state rather than a running log of everything I have ever done.

I don't have any one particular reason for writing this down. One reason is that I tend to figure out whether I understand something by writing it down. Another is that finding other people's notes was tremendously helpful to me when I was starting, so maybe some other newbie might find this useful. And yet another reason is that this hobby is filled with lots of time when one cannot be imaging due to weather or other reasons, and this is a better time filler than TV. Reasons aside, this is how I personally go about getting the astrophotos you might have seen posted.

My astrophotography process starts with the following main stages to get to a "starting" image:

  1. Image Capture (lights)
  2. Taking darks, flats, and bias
  3. Calibration
  4. Registration
  5. Integration (stacking)

This starting image is what is called a stacked image. At a quick glance it just looks black, but all the detail is actually in there and just needs to be "pulled out" through image processing. I use PixInsight for my image processing, and my current workflow for post-processing comprises the following main steps:

  1. Dynamic Crop
  2. Dynamic Background Extraction (DBE)
  3. Photometric Color Calibration (PCC)
  4. Multiscale Linear Transform (MLT) for Noise Reduction
  5. Screen Transfer Function (STF) + Histogram Transformation (HT)
  6. Arcsinh Stretch
  7. TGVDenoise
  8. Curves (for saturation)
  9. Curves (for desaturation)

Everyone who does astrophotography has some personal workflow. If you look at an experienced imager, their flow will have a slew of additional processes (in particular deconvolution). I am not yet at that level and, for me, the bottleneck is my telescope so these more advanced tools are not yet useful.

That is the overview of the entire process. In the remaining sections, I will add more details about the individual steps of the process.

NOTE:

I still use an achromatic telescope, which is intended for visual astronomy rather than astrophotography. The achromat produces extremely bloated stars with blue halos, especially around bright stars. (A proper apochromat is on order.) For this walkthrough, I use my processing of the Pleiades (M45), which is perhaps a terrible choice of target given this telescope. However, it happened to be what I was imaging when I started this writeup.

Image Capture (Lights)

For image capture, I use the NINA software package. This extraordinary free, open-source software controls all aspects of the capture process, including camera and mount control, target location and centering, sequencing, creation of all the calibration frames, and much more. The imaging pane of NINA is shown below.

These are the important characteristics for my image capture:

My procedure for getting capture up and going is pretty mechanical at this point. Since I now have my setup in a permanent position in my back yard, I don't have any physical setup or polar alignment. It takes maybe 15 to 20 minutes from the time I take the Telegizmo cover off to when I can actually start imaging.

  1. Power on mount and camera
  2. Start up NINA on laptop
  3. Refresh the equipment→camera list and connect
  4. Refresh the equipment→telescope list and connect
  5. Unpark the telescope
  6. Slew to my target. I generally do this from the SkyAtlas after searching for target.
  7. Platesolve. My settings resync and reslew the scope to center the target. This repeats until the error is less than 1'
  8. Go to equipment→guider and connect to start PHD2
  9. In PHD2, after it loops and finds stars, have it auto-select a star and start guiding. I then wait until it has guided about a minute. (I get wild mount movements in the first seconds after starting guiding.)
  10. Load the sequence file. My settings here have a start delay of about 10s and dither on every 3rd frame. I do not do anything complicated in my sequences so this is generally a single target with a single line for number of exposures

Taking Darks, Flats, and Bias

Before discussing how I take darks, flats, and bias frames, I want to briefly discuss what these are and why we use them. There are any number of excellent tutorials on this on the web, and the regulars on any of the astronomy forums such as CloudyNights will be happy to explain the details. If you are actually starting to *do* astrophotography, I encourage you to make use of additional resources on this topic. My purpose here is just an overview for context.

Most people who use digital cameras use them for daylight photography. Even the cheapest digital cameras today produce spectacular images, but one can easily see the limitations of the digital sensor when taking a picture in a dimly lit area (without flash). The picture is noisy and has poor contrast, among other issues. Astrophotography exacerbates these problems to the extreme. The target being imaged is very faint, often only slightly brighter than the background. In contrast to daylight photos, special care and techniques must be applied to produce anything resembling a recognizable image.

There are a couple of characteristics of digital sensors that become meaningful in this domain. The first is the notion of "dark current." While a sensor is recording, it accumulates signal from thermal and electrical activity even when no photons are present. This dark current varies with the exposure duration and the temperature of the sensor, typically doubling with every 5° Celsius increase. Note that it is not a constant value but varies across the pixels of the sensor. When we take an actual image (called a "light"), we want to remove the effect of this dark current so we end up with an image that represents only the photons that arrived from the target.

Here is an example of the dark current that accumulated in a 120s exposure at 16° C. This image has been stretched to more clearly show the effect.

Fortunately, while this dark current depends on exposure time and sensor temperature, these are all it depends on. Consequently, we can take so-called "dark" frames at the same exposure time and temperature as our light frames and use them to correct the lights by subtracting the pixels of the dark frame from the corresponding pixels of the light frames.

Another characteristic of digital imaging that can be problematic is reading out very small amounts of signal. To ensure that we don't get negative values, the sensor electronics add a small positive "bias" signal. This bias does not represent actual signal, so we want to exclude it. Unfortunately, it is not a single constant value that can be globally subtracted; it varies from pixel to pixel. Fortunately, for a particular pixel it is roughly constant across images, so we can measure it once by taking a dark image (with the lens cap on) at a very fast exposure so that no dark current builds up. These frames are called "bias" frames.

The last nuance has more to do with the complete optical train than with the digital sensor alone. Various effects can cause unevenness in the light intensity hitting the sensor. For example, a dust mote on the sensor causes the area under the mote to be slightly darker, even with the same light intensity hitting the entire sensor. This shows up as a darker spot in your final image. Another effect is the falloff of light radially from the center of the telescope lens to the edges. This causes vignetting in the final image. Below is an example of the vignetting effect. Note the darker circular areas toward the edges of the frame.

Similar to dark current and bias, the vignetting characteristics of the optical train remain the same provided the optical train has not been modified in any way (for example, to change a filter). Even the dust motes stay put if the setup is undisturbed. Given this, there is a straightforward way of adjusting for the uneven light intensity. Suppose, for example, that in our light frame one pixel receives the full intensity of light while some other pixel receives only 70% of the light due to either vignetting or a dust mote. If we also take a so-called "flat" frame, which is just a picture of a plain white field using the same optical train, then the corresponding pixels of the flat frame will also receive 100% and 70%. Now, if you take the light frame pixels and divide them by the corresponding flat frame pixels, voila! The light frame pixels are corrected to the values they would have without the uneven illumination.

Now, with the reasons for darks, flats, and bias out of the way, how do we take them? First, for each of these, we do not take a single instance but rather a set of them. We have already covered how I use NINA to take my lights. Recall that darks are taken at the same exposure time and temperature as the lights, but without any light hitting the sensor. Consequently, it is easy to use the same process as for my lights: cover the telescope opening and take the darks as if they were lights, using the same exposure time, right after taking the lights (so the sensor temperature will be the same). The bias frames are simply very fast exposures without light hitting the sensor. This is just like taking a dark (with the scope opening covered) but with the exposure time set to its minimum (0.0025s in my case).

The flats are quite a bit different. Proper flats require evenly diffuse light captured at an exposure time that puts the histogram somewhere in the middle of the scale. The first problem is the light source. Methods range from using natural sky light to using some sort of "light box". I personally use an LED tracing pad ($25 on Amazon) with a few sheets of white paper in front of it (to diffuse the light sufficiently), attached to a cardboard cutout that lets it sit stably on top of my scope. My contraption is shown below:

The other difficulty in taking flats is getting the exposure time right. NINA has a Flat Wizard that is supposed to help with this. So far, I have not quite gotten it to work and instead do this manually. I know that, depending on the filter I am using, it takes about 0.02 to 0.1s with my light box to get the correct exposure. So I take a couple of test exposures to find the right setting, then take about 40 exposures using it.

Calibration

The procedure I described above for using flat frames isn't quite correct. Recall that every image includes the bias offset in every pixel, and for anything other than very short exposures it also includes dark current. So the light frame values actually contain the photon counts from the target plus the dark current plus the bias. And the flat frames actually contain the white light plus the bias (because flats are very short exposures, their dark current is negligible). Working out the math shows that the straight division implied in the previous paragraph does not give the desired result. It is *close*... but when dealing with images that are this faint relative to the background, even small errors make it problematic to get a decent image.

To reiterate, if s is the actual signal of interest, d is the dark current, b is the bias, and w is the white light used for the flats, then
  Light frame L = s + d + b
  Dark frame D = 0 + d + b
  Flat frame F = w + 0 + b
  Bias frame B = 0 + 0 + b

Now, given a light frame, we can correct for dark current, bias, and uneven light intensity with the following equation:
  (L - D) / (F - B) =
    [(s + d + b) - (0 + d + b)] / [(w + 0 + b) - (0 + 0 + b)] =
      s / w

where s and w have the same attenuation for each pixel.

The above procedure must be applied to each light frame. Furthermore, since there is always frame-to-frame variation when reading out images, for darks, flats, and bias frames as well as lights, this process actually first combines multiple dark frames into a dark "master", multiple flats into a flat master, and multiple bias frames into a bias master. It is these masters that are used in the equation above on each light frame.
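To make the arithmetic concrete, here is a minimal numpy sketch of building masters and calibrating a single light frame. The median combine, the simulated frames, and the normalization of the flat are illustrative choices of mine; WBPP does all of this for real, with far more sophistication (pixel rejection, scaling, cosmetic correction, and so on).

```python
import numpy as np

def build_master(frames):
    """Combine a set of calibration frames into a master with a simple median."""
    return np.median(np.stack(frames), axis=0)

def calibrate(light, master_dark, master_flat, master_bias):
    """Apply (L - D) / (F - B) from the equations above."""
    flat = master_flat - master_bias       # the per-pixel w term
    flat = flat / flat.mean()              # normalize so the division only corrects *relative* illumination
    return (light - master_dark) / flat    # the dark already contains the bias, so one subtraction removes both

# Hypothetical usage with simulated 100x100 frames:
rng = np.random.default_rng(0)
bias_frames = [rng.normal(100, 2, (100, 100)) for _ in range(20)]
dark_frames = [rng.normal(130, 3, (100, 100)) for _ in range(20)]   # bias + dark current
flat_frames = [rng.normal(30000, 50, (100, 100)) for _ in range(20)]
light_frame = rng.normal(400, 10, (100, 100))

calibrated = calibrate(light_frame,
                       build_master(dark_frames),
                       build_master(flat_frames),
                       build_master(bias_frames))
```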

The processing is mechanical but tedious, and it has to be handled by software that can do it en masse. I use the PixInsight WeightedBatchPreprocessing (WBPP) script for my calibration. The main screen of version 1.5.2 of the script is shown below.

Along the bottom left, there are tabs for bias, darks, flats, and lights. You select each of these and then add the files for each category. Next, in the lower right, set the output directory. Virtually all of my settings are the default values, with some of the more important settings below:

There are some additional configurables associated with registration. I will cover these in the registration section next.

Registration

In the next section, I will discuss the integration process, which takes multiple individual images and combines them into a better single image. I will leave the discussion of why we do this to that section, but suffice it for now to say that this is a "vertical" combination where the "same" images are stacked on top of each other. In order for this to work, the pictures clearly have to be aligned exactly the same: if one frame is offset relative to the others, it will spoil the stacking.

Registration is the process of aligning a sequence of images to some reference image. First, the reference image is selected and some set of stars in it is identified. Then, for each non-reference image, the same set of stars is identified, and the frame is offset and/or rotated so that its stars exactly align with the stars in the reference image. Obviously it is more complicated than this, and there are a variety of options in terms of the thresholds that determine the stars, the method of doing the alignment, what to do with non-overlapping portions of the frames, etc. However, this general description should give a good enough feel for what happens in this process.
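PixInsight's StarAlignment does far more than this (star detection, triangle matching to pair the stars up, distortion models, interpolation when resampling), but the core fitting step, solving for the shift and rotation from matched star positions, can be sketched in a few lines. This is an illustrative least-squares fit of my own, not the actual algorithm, and it assumes the stars have already been matched one-to-one:

```python
import numpy as np

def estimate_alignment(stars_frame, stars_ref):
    """Least-squares rotation + translation mapping star positions in one frame onto
    the matching stars in the reference frame (the classic Kabsch/Procrustes fit).
    Both inputs are (n_stars, 2) arrays of (x, y) positions, already matched up."""
    src = np.asarray(stars_frame, dtype=float)
    dst = np.asarray(stars_ref, dtype=float)
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    if np.linalg.det(Vt.T @ U.T) < 0:        # guard against a mirror-image solution
        Vt[-1] *= -1
    R = Vt.T @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t                              # a frame pixel p maps to R @ p + t in the reference
```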

One might ask why the frames need to be aligned in the first place. After all, if you have a tracking mount, doesn't that keep the telescope pointing at exactly the same spot so all the individual frames are automatically aligned by default? Conceptually this is correct but it is a matter of accuracy. Take my setup, for example, which is reasonably typical. My "image scale" is 1.96 arcsec/px. This means that if the telescope pointing is more than 1.96 arcsec off from the last frame, then the photons from each point will end up on the wrong pixel. Well, how much is 1.96 arcsec? There are 60 arcsecs in 1 arc-minute, and 60 arc-minutes in 1 degree. Most people think of a degree as being relatively tiny, but here we are saying that if my mount is off by 0.0005°, the images don't line up. That is simply not possible with consumer grade mechanics. You have to accept that separate frames will not be exactly aligned and, hence, some type of registration to align them will be necessary.
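For reference, the image scale itself comes from the camera's pixel size and the telescope's focal length. Here is a quick sketch of the arithmetic; the 4.3 µm pixel size and 452 mm focal length are example numbers chosen only because they land near the 1.96 arcsec/px quoted above:

```python
def image_scale(pixel_size_um: float, focal_length_mm: float) -> float:
    """Image scale in arcsec/pixel: 206.265 * pixel size (um) / focal length (mm).
    (206,265 is the number of arcseconds in one radian.)"""
    return 206.265 * pixel_size_um / focal_length_mm

print(image_scale(4.3, 452))          # ~1.96 arcsec per pixel
print(image_scale(4.3, 452) / 3600)   # ~0.00054 degrees per pixel
```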

I also do my registration as part of the PixInsight WBPP processing. The WBPP screen for the registration is shown below.

To include registration in the sequence, the following configurables must be set:

At the end of the WBPP run, PixInsight puts all the calibrated, debayered, and registered images in a directory. There will be one file for each input light frame. These registered frames are now ready to be stacked as described in the next section.

Integration

Integration is the combination of many light frames (calibrated and registered) into a single, higher-quality image. There are two points hinted at in this description. First, it is the combination of many light frames. Second, the combination produces a higher-quality image. Before describing my methodology for integration, let me discuss these two points about integration.

First, why would one use multiple frames? It should be reasonably obvious that Deep Space Objects (DSOs) are very, very faint, so we need to capture a lot of photons to see anything. One way to get a lot of photons is to take extremely long exposures. There is a limit to this, however. Remember that to keep a telescope pointed at the exact same spot in the sky, the mount has to exactly track the earth's rotation. As the length of the exposure goes up, the chance that the mount will accumulate enough error to "blur" the image also goes up. With the class of mount that I have, for example, even with additional guiding I end up with elongated stars (a symptom of inexact tracking) above 5 minutes of exposure, and 5 minutes is not nearly enough. Another way of getting a lot of photons is to take multiple exposures and then "add" them up. Once the pixels of each frame are exactly aligned (see the previous section on Registration), the photon count for a pixel will be the same whether you took one 1-hour exposure or added up twelve 5-minute exposures. (This is not precisely correct due to various noise factors and the fact that averaging rather than summing is used, but the overall effect is the same.) Besides being easier on the mount, think about what happens if, for example, clouds cross the target during the imaging. With a single very long exposure, the entire exposure is ruined. With multiple shorter exposures, only some of the exposures need to be thrown out. This method of taking many shorter exposures and stacking them is standard procedure in digital astrophotography.

The second part of the description suggests that the combination of the multiple frames into one frame results in a higher-quality image than any of the individual frames. This is not generally obvious because most people think of digital sensors as being a "perfect recorder" of the image. In fact, due both to the physics of light and the limitations of electronics, there is noise in the count for each pixel. Suppose that the "true" count for a pixel for the 5 minutes is 100. Due to noise, one interval may record 97, another 101, another 103, etc. The individual frame values are all "wrong". However, an average over a large number of frames starts getting close to the correct value.
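A toy simulation makes the averaging argument concrete. The numbers here are invented (a "true" value of 100 counts with Gaussian noise of 5 counts); the point is only that the scatter of the stack average shrinks roughly as the square root of the number of frames:

```python
import numpy as np

rng = np.random.default_rng(42)
true_value = 100.0    # hypothetical "true" count for one pixel per exposure
noise_sigma = 5.0     # hypothetical per-frame noise

for n_frames in (1, 10, 100, 285):
    # Simulate this pixel across many independent stacks of n_frames and
    # measure how much the stack *average* still scatters around the truth.
    stack_means = rng.normal(true_value, noise_sigma, size=(1000, n_frames)).mean(axis=1)
    print(f"{n_frames:4d} frames: mean = {stack_means.mean():6.2f}, scatter = {stack_means.std():.2f}")

# The scatter drops from ~5 for a single frame to ~0.3 for 285 frames (about 5 / sqrt(285)).
```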

The effect of averaging a large number of light frames over a single frame is quite dramatic. Below are examples of a single 120s frame compared to a stack of 285 of these 120s frames. Both have been stretched to show the effect more clearly.

I use the PixInsight ImageIntegration process to stack my light frames. The screen for that process is shown below. The primary configurables (other than the list of frames to integrate, of course) are the method for combining frames and the method for detecting and rejecting outliers. For the former, I use averaging. For the latter, I use Winsorized sigma clipping, which reins in outlier pixel values (satellite trails, cosmic ray hits, and the like) so they don't corrupt the average.
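For intuition, here is a much-simplified sketch of what a Winsorized rejection scheme does per pixel. It is not PixInsight's actual ImageIntegration code; the kappa value, iteration count, and MAD-based sigma estimate are choices of mine:

```python
import numpy as np

def winsorized_stack_mean(stack, kappa=3.0, iters=3):
    """Simplified per-pixel combination in the spirit of Winsorized sigma clipping:
    clamp values outside median +/- kappa*sigma (sigma estimated robustly via the MAD),
    re-estimate, repeat, then average. `stack` has shape (n_frames, height, width)."""
    data = np.asarray(stack, dtype=float).copy()
    for _ in range(iters):
        med = np.median(data, axis=0)
        sigma = 1.4826 * np.median(np.abs(data - med), axis=0)  # robust sigma estimate
        lo, hi = med - kappa * sigma, med + kappa * sigma
        data = np.clip(data, lo, hi)       # "Winsorize": clamp outliers instead of discarding them
    return data.mean(axis=0)

# Hypothetical use: a satellite trail in one frame barely moves the combined value.
rng = np.random.default_rng(1)
stack = rng.normal(100, 5, size=(50, 4, 4))
stack[7, 2, 2] = 60000                      # simulated satellite / cosmic-ray hit
print(winsorized_stack_mean(stack)[2, 2])   # still close to 100
```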

At the end of this, we have a "stacked" image. In general, this just looks black because most of the pixel values are very close to the black point. However, the image is really in there and can be seen with a temporary contrast "stretch". Below is the initial stacked image in "linear" form, first as is and then with a temporary stretch.


Dynamic Crop

At this stage, the image is "linear." This basically means that the pixel values are essentially the values as recorded. As mentioned in the Integration section above, this typically means everything looks "black" because most pixel values are very close to the black point value. The first set of operations in post-processing works in this "linear" form. However, to actually see what we are doing, a temporary contrast stretch is usually applied. Note that this is only for the view: it doesn't actually change the values of the pixels. With the PixInsight software I use, you can see that this temporary stretch is applied by the green highlight in the tab along the left hand side of the image.

Stacking introduces artifacts around the edges of the image because the individual light frames were shifted and/or rotated during registration. The first step is to crop out these edges. In my case, I am using a DSLR with an APS-C aspect ratio, but I prefer something a bit more "square", so I also crop quite a bit off the sides.

The dynamic crop process settings I use and the resulting crop region are shown in the following diagram.

Dynamic Background Extraction

The background of the image is generally not even. This can happen for a lot of reasons, but in my case it is mostly due to extreme light pollution (LP). Since this LP comes from the "ground," one side of the image will be brighter than the other. Note that because the camera is rotated at varying angles relative to the ground depending on where the telescope is pointing, this brighter side is not necessarily the bottom: the gradient can come from any side and angle (though generally not from the top down).

The background extraction process is intended to remove these gradients and to produce a more even, neutral background. PixInsight has two such processes but the one I use more often is called the Dynamic Background Extraction (DBE). In this tool, you place samples on the image at points that are supposed to be pure background (i.e. no nebulosity or any part of your target). The following diagram shows the tool settings and the sample points I chose for this particular image.

The tool then builds a background model using a spline fit to those points and subtracts that model out. This gives a resulting image with a more even background. For me, the result is rarely fully "clean", as can be seen in the image below. I am not sure what it is about my equipment or technique that is the cause, as other imagers seem to get much cleaner results with this tool.
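DBE builds its model with splines; as a rough illustration of the same idea, the sketch below fits a low-order polynomial surface to hand-picked background samples and subtracts it. The function name, the polynomial model, and the sample-point format are all my own simplifications:

```python
import numpy as np

def fit_background(image, sample_xy, degree=2):
    """Fit a 2-D polynomial surface to background sample points and evaluate it
    over the whole frame. sample_xy is a list of (x, y) background positions."""
    h, w = image.shape
    xs = np.array([p[0] for p in sample_xy]) / w       # normalize coordinates
    ys = np.array([p[1] for p in sample_xy]) / h
    vals = np.array([image[p[1], p[0]] for p in sample_xy])

    def design(x, y):
        # All monomials x^i * y^j with i + j <= degree
        return np.stack([x**i * y**j for i in range(degree + 1)
                                     for j in range(degree + 1 - i)], axis=-1)

    coeffs, *_ = np.linalg.lstsq(design(xs, ys), vals, rcond=None)
    gy, gx = np.mgrid[0:h, 0:w]
    return design(gx / w, gy / h) @ coeffs              # background model, same shape as image

# model = fit_background(stacked_image, my_sample_points)
# corrected = stacked_image - model + model.mean()      # subtract, then restore the pedestal
```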

Photometric Color Calibration

The "color" captured by the camera isn't generally reasonable "as is" for astro photographs. There are several reasons for this. One is that the standard color matrix used on one-shot-color (OSC) cameras is the Bayer matrix which is RGGB in my case. In other words, there are two green pixels for every one blue and one red. As such, the image will be greener than expected. (The reason it is done this way is that our eyes are more sensitive to green than either red or blue, so having more green pixels produces terrestrial photos with less apparent noise.) Another is that our digital sensors filter out or are not as sensitive to all wavelengths from astronomical objects. Finally, the imaging train itself may include filters that attempt to mitigate light pollution or enhance contrast for emission nebulae.

Regardless of the reason, we are left with an image that is a poor color match. The color calibration step attempts to correct the color. PixInsight has a couple of tools for this, but I generally use the newer Photometric Color Calibration. The concept behind this is genius (to me at least). Essentially, you tell the tool what target is in the image, and it uses that information to obtain the true color information of the stars around the target from an astronomical database. It then extracts the stars in your image and compares them against the matching stars from the database. From this, it determines the color correction needed to bring your stars to the reference colors. It then applies this correction to the entire image, producing an image that is much closer to "proper" color.
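As a drastically simplified illustration of the idea (not what PCC actually computes, which works photometrically against a star catalog), one could imagine solving for per-channel scale factors that best map measured star fluxes onto catalog fluxes:

```python
import numpy as np

def white_balance_factors(measured_rgb, catalog_rgb):
    """Per-channel scale factors that best map measured star fluxes onto catalog
    fluxes in a least-squares sense. Both arrays are shaped (n_stars, 3)."""
    measured = np.asarray(measured_rgb, dtype=float)
    catalog = np.asarray(catalog_rgb, dtype=float)
    # For each channel, argmin_k sum (k*m - c)^2  =>  k = sum(m*c) / sum(m*m)
    k = (measured * catalog).sum(axis=0) / (measured ** 2).sum(axis=0)
    return k / k[1]      # normalize so green is left untouched

# calibrated = image * white_balance_factors(star_fluxes_measured, star_fluxes_catalog)
```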

The following diagrams show the tool screen with some of my settings, and the result after applying the tool to the image. As you can see, the transformation here is magic.


Multiscale Linear Transform (for noise reduction)

Image post-processing usually involves noise reduction at several points in the process. Right after color calibration is the first point at which I apply noise reduction. This is by no means universal, and you will typically see many more variants of noise reduction from experienced imagers. In truth, my images are not particularly good, so more elaborate noise reduction would be wasted effort.

My tool of choice for noise reduction in the linear phase is the Multiscale Linear Transform. This tool first separates the features of the image into different "scales" (presumably using wavelets but I don't know for sure). It then applies different levels of "blurring" at the different scales. This allows it to, for example, clean up the high frequency pixel-to-pixel noise without also blurring the larger size structures that are part of the actual target.
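As a rough sketch of the multiscale idea (using simple Gaussian differences rather than whatever wavelet transform PixInsight actually uses; the scales and attenuation factors are invented):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_denoise(image, sigmas=(1, 2, 4, 8), attenuation=(0.5, 0.8, 1.0, 1.0)):
    """Illustrative multiscale noise reduction: split the image into detail layers of
    increasing scale (differences of progressively blurred copies), attenuate the
    finest layers where the pixel-to-pixel noise lives, and recombine."""
    layers = []
    current = image.astype(float)
    for s in sigmas:
        blurred = gaussian_filter(current, sigma=s)
        layers.append(current - blurred)     # detail present at this scale
        current = blurred
    out = current                            # large-scale residual structure
    for layer, k in zip(layers, attenuation):
        out = out + k * layer                # k < 1 suppresses that scale (noise)
    return out
```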

The first picture below shows the noise with the image considerably magnified. The second picture shows the MLT settings I used and the resulting image. With the right settings (which do take some experimentation), the results are quite dramatic.


Screen Transfer + Histogram Transfer (Stretching)

All the previous steps were done in the "linear" phase which, as previously described, means that the pixel values are as recorded by the camera (or more correctly, recorded by the camera and then averaged through integration). We see the details in my examples above only because a temporary stretch was applied within PixInsight. However, if we export that image to a JPEG and then use a regular photo viewer to look at it, it will be black. At this point, we "stretch" the image. This is not for viewing only but rather actually modifies the pixel values and creates a new image. This image, when viewed by an external photo application, will show the target in a manner similar to the temporary stretching.

As with all things PixInsight, there are multiple ways of doing the stretching. Experienced imagers have their own favorite methods and combinations, sometimes dependent on the specific target. At my beginner level, I do something relatively straightforward. I first mostly replicate what the temporary stretch does by copying the PixInsight ScreenTransferFunction (STF) onto a HistogramTransformation (HT), which is then applied to the image. I generally "back down" the STF so the result is a bit darker, because I also apply another stretch (described in the next section).
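For the curious, the heart of what the STF autostretch (and hence the HT it gets copied into) does is a midtones transfer function. The sketch below uses the commonly quoted autostretch constants (-2.8 shadows clipping in MAD units, 0.25 target background) and assumes the image is normalized to [0, 1]; PixInsight's real implementation has per-channel handling and other subtleties:

```python
import numpy as np

def mtf(m, x):
    """PixInsight-style midtones transfer function: maps input value m to output 0.5."""
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

def autostretch(image, shadows_clip=-2.8, target_bkg=0.25):
    """Sketch of making an STF-like automatic stretch permanent (what copying the
    STF into an HT accomplishes). `image` is assumed normalized to [0, 1]."""
    med = np.median(image)
    madn = 1.4826 * np.median(np.abs(image - med))        # normalized MAD
    c0 = np.clip(med + shadows_clip * madn, 0.0, 1.0)     # shadows clipping point
    x = np.clip((image - c0) / (1.0 - c0), 0.0, 1.0)      # rescale above the clip point
    m = mtf(target_bkg, (med - c0) / (1.0 - c0))          # midtones balance that puts the background at target_bkg
    return mtf(m, x)
```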

The STF and HT are shown in the image below along with the resulting image. Note that there is no green highlight on the left tab, indicating that no temporary stretch is applied.

Arcsinh Stretch

When I use the default STF/HT to do the full stretch, I usually end up with something that doesn't look quite right. Hence, as mentioned in the previous section, I usually "back down" from the full STF/HT stretch and then add an arcsinh stretch. The arcsinh stretch is known for preserving color well, and this combination of stretches seems to work for me. (I do not use MaskedStretch, which is another method commonly used by experienced imagers.)
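The color-preserving trick of the arcsinh stretch is to compute one stretch factor per pixel from its overall intensity and apply the same factor to all three channels, so the R:G:B ratios survive the stretch. Here is a minimal sketch with illustrative parameters rather than PixInsight's exact ones:

```python
import numpy as np

def arcsinh_stretch(rgb, stretch=50.0, black_point=0.0):
    """Color-preserving arcsinh stretch: one scale factor per pixel, derived from its
    mean intensity, applied to all three channels so the R:G:B ratios are kept.
    `rgb` is an (h, w, 3) array in [0, 1]."""
    x = np.clip(rgb - black_point, 0.0, 1.0)
    intensity = x.mean(axis=-1, keepdims=True)
    scale = np.arcsinh(stretch * intensity) / (np.arcsinh(stretch) * np.maximum(intensity, 1e-12))
    return np.clip(x * scale, 0.0, 1.0)
```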

The following pictures show the image before and after the arcsinh stretch along with the arcsinh parameters. (The effect is subtle and not easily seen at this resolution.)


TGV Denoise

After stretching, it is common to do some noise reduction. Many experienced imagers use TGVDenoise at this stage, and I have simply copied that. At this point, we want to limit the noise reduction to the background; if it were applied globally, it would cause loss of detail in the target parts of the image. To achieve this, one creates a mask and applies it to the image. In this case, I use a luminosity mask (i.e. a mask created from the luminance of the image) and apply it so that everything other than the background is protected.
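A luminosity mask is conceptually simple; the sketch below builds one from the image's luminance and shows how it could be used to confine a denoiser to the background. The Rec.709 weights, the blur, and the blend are my own simplifications of the PixInsight mask workflow:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def luminosity_mask(rgb, blur_sigma=2.0):
    """Simple luminosity mask: bright (target) areas approach 1, the background
    approaches 0. Blurring softens the transition so the boundary isn't visible."""
    lum = rgb @ np.array([0.2126, 0.7152, 0.0722])       # Rec.709 luminance
    lum = (lum - lum.min()) / (lum.max() - lum.min() + 1e-12)
    return gaussian_filter(lum, sigma=blur_sigma)

# denoised = some_denoiser(image)                   # e.g. whatever denoiser you use
# mask = luminosity_mask(image)[..., None]
# result = mask * image + (1 - mask) * denoised     # background denoised, target protected
```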

The generated luminosity mask and the mask applied to the image are shown below. The red areas indicate the regions protected from whatever process is applied next.

Now TGVDenoise is applied. The TGVDenoise settings and the result are shown below.

Curves (for saturation)

The image now has the contrast needed and a reasonably clean background. However, the colors will still look "dull," even more so after stretching, since stretching desaturates them. We need to saturate the colors to make the image vibrant. In most cases, we are not changing the color (hue), just saturating it. In this case, I am also slightly tweaking the blue, as this target is rich in blue nebulosity. (Color is a very subjective thing in astrophotography. I try to stay as close as possible to the color expected based on the known composition of the target.) Saturation is done using a Curves tool that users of most image processing applications would recognize.

We want to limit the saturation to the target itself. Below are the mask (the same luminosity mask used in the previous section) protecting the background, and the curves settings and the outcome of the target saturation.
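As a rough numeric picture of what a masked saturation boost does (not the Curves tool itself), one can push each channel away from the pixel's luminance, scaled by the mask so the background is left alone:

```python
import numpy as np

def boost_saturation(rgb, amount=0.3, mask=None):
    """Simple saturation boost: push each channel away from the pixel's luminance.
    `rgb` has shape (h, w, 3) with values in [0, 1]; `mask` (h, w) in [0, 1]
    controls where the boost is applied (1 = full boost, 0 = protected)."""
    lum = (rgb @ np.array([0.2126, 0.7152, 0.0722]))[..., None]   # Rec.709 luminance
    boosted = lum + (1 + amount) * (rgb - lum)
    if mask is not None:
        boosted = rgb + mask[..., None] * (boosted - rgb)
    return np.clip(boosted, 0.0, 1.0)
```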


Curves (for desaturation)

This last step is optional and depends on the target. In some images, the dynamic background extraction and color calibration do not remove all the color from the background. There are often faint traces of color there that shouldn't be (artifacts of less-than-ideal data collection or image processing). A final cleanup sometimes requires desaturating the background. This doesn't remove the unwanted color, but it dulls it and makes it less noticeable.

We obviously want to apply the desaturation only to the background, so the target must be masked. I reuse the same luminosity mask. The mask applied to protect the target, and the curves settings and output, are shown below.


Wrapup

That represents my current image collection and processing workflow. At this stage, the image is stored in the PixInsight native format (xisf); I also export it as JPEG, both at full resolution and downscaled for web viewing.

I hope this gives you a good feel for the overall astrophotography process. It might seem daunting, and it is at first, but eventually the steps become routine. I will reiterate that I am a relative beginner... not at the absolute start, but nowhere near an experienced imager either. If you are using this as a reference, you absolutely should also seek out the workflow examples from the many experienced imagers on the CloudyNights forum and on YouTube.

Thanks for reading and Cheers!!