
Details on my current image processing methodology, 2024-12: Part 1 (Raw Data to RGB TIFF)

Calvin Klatt

Image 1: Wizard Nebula


Overall Philosophy, or lack thereof

My image processing methodology was developed over several years of hit-and-miss fumbling around. I have minimized the use of paid software, but in the end I do have several such tools. The only really costly one is Photoshop, and I could probably do without it if I had to. I use GIMP for certain things because I find it more intuitive, but this could be done in other software as well; GIMP is free, so I don’t feel bad adding it to the list. Astra Image is not strictly necessary either (Photoshop could easily do what it does), but it is simple and easy to use. Most advanced astrophotographers (probably) use a massive piece of software called PixInsight. I tried it a few years ago and nothing worked; perhaps my computer was too slow.


The information here applies to the analysis of RASA data from a good sensor that is in the right position (not too far back, not too far forward, not badly tilted). In other words, the focal plane lies flat and evenly across the sensor. Note that achieving a flat field on the sensor is the most difficult challenge when using a RASA or other "fast" telescope. Processing data from an SCT would require additional steps, such as using "flats" and "darks" to remove telescope artifacts.


Data is collected on my desktop computer at Lac Teeples via a USB 3 cable (running through the wall) to the camera mounted on the telescope. I am pushing the length limit for USB 3, and good cables are absolutely essential for the high data rates I need; half the "quality" cables I've purchased on Amazon don't work. The telescope is commanded using Stellarium and imaging is handled by SharpCap.


I usually collect one-minute or 64-second subframes, but in some conditions I cut this in half. Longer subframes are good for dim targets and leave less data to manage, but they are more likely to be ruined by passing satellites, poor polar alignment, gusty winds or other problems. Short subframes are good if I am only using part of the sensor (less data) and if the target is bright. A very dim target needs longer subframes to be detectable at all.


How much data I collect depends on my mood, the target, the weather, how long the target is visible, other targets to look at, etc. Generally I want at least one hour of good data (about 90 minutes of observing), and sometimes I want four hours (five hours or so of observing). I can get a great image of the Orion Nebula in around 15 minutes, but it is quite a bit better when I observe for an hour; some people observe for ten times longer. I usually choose one target to observe during the wee hours of the night and go to bed, so that target will have plenty of data when I get up in the morning.


The RASA at F2 is vastly faster than most scopes: light-gathering speed scales with the inverse square of the focal ratio, so one hour with the F2 RASA equals four hours with an F4 scope. The typical SCT is F10, or F7 with a focal reducer. Nominally, one RASA hour equals about 12 hours on an F7 SCT ((7/2)² ≈ 12).
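The arithmetic behind these equivalences is just the ratio of focal ratios, squared. A minimal sketch (the function name is mine):

```python
def equivalent_hours(hours_at_fast, f_fast, f_slow):
    """Hours a slower scope needs to gather the light the faster one
    collects in `hours_at_fast`. Light-gathering speed scales with the
    inverse square of the focal ratio, so the time ratio is
    (f_slow / f_fast) squared."""
    return hours_at_fast * (f_slow / f_fast) ** 2

print(equivalent_hours(1, 2, 4))  # one F2 hour vs an F4 scope -> 4.0
print(equivalent_hours(1, 2, 7))  # one F2 hour vs an F7 SCT -> 12.25
```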


With the RASA telescope I don’t use calibration frames (“flats” or “darks”), so we will ignore that subject entirely.


RGB Images from Raw Data


Data is captured in the FITS format (SharpCap) and is stacked with Deep Sky Stacker (DSS). In rare cases I use colour TIFF screen captures from SharpCap. If there is narrowband data, I note which file is the reference frame and use that same one as the reference for the other data being collected. With the ZWO ASI-6200MC Pro colour camera, at 9576x6388 pixels, 16 bits per pixel, and three channels (RGB), the resulting stacked FITS file is around 360 MB. This file is used to create three TIFF files for R, G and B. Note that if you have flipped the image in SharpCap, the colour channels may be misaligned. Schmidt-Cassegrain and RASA telescopes need the image flipped, but don’t do it until the final step.
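The quoted file size follows directly from the sensor geometry. A quick sanity check, using the numbers from the paragraph above:

```python
width, height = 9576, 6388      # ZWO ASI-6200MC Pro resolution
bytes_per_sample = 2            # 16 bits per pixel
channels = 3                    # R, G, B planes in the stacked FITS
size_bytes = width * height * bytes_per_sample * channels
print(round(size_bytes / 1e6))  # ~367 MB, i.e. "around 360 MB"
```

(FITS headers add a little overhead on top of this, but the pixel data dominates.)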


When using Deep Sky Stacker I set the flags to remove hot pixels. This is necessary because every camera will have some, and they create odd patterns as the image drifts slightly through the observing session. The classic way to handle hot pixels is with “darks”, but DSS picks them up very easily on its own. The software currently does NOT do a good job of detecting passing satellites, so the only reliable method is to step through the images manually. Sorting by time sometimes helps, because one satellite may appear in several consecutive frames. Alternatively, sorting by quality score sometimes helps, because frames with satellites tend to receive anomalously high or low scores. Sometimes it is not worth it: I just do the stacking and check the result for tracks.
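Outlier rejection of this kind is typically done during stacking with kappa-sigma clipping (one of DSS's stacking modes). A simplified sketch of the idea, assuming already-aligned frames as NumPy arrays — illustrative only, not DSS's actual implementation:

```python
import numpy as np

def kappa_sigma_stack(frames, kappa=2.0, iterations=3):
    """Average a stack of aligned frames, rejecting outlier pixels.

    Per pixel, values more than `kappa` standard deviations from the
    mean across frames (hot pixels, satellite trails present in only
    a few frames) are masked out before the final average."""
    data = np.array(frames, dtype=float)   # shape (n_frames, h, w)
    mask = np.ones_like(data, dtype=bool)
    for _ in range(iterations):
        kept = np.where(mask, data, np.nan)
        mu = np.nanmean(kept, axis=0)      # per-pixel mean of kept values
        sigma = np.nanstd(kept, axis=0)    # per-pixel spread of kept values
        mask = np.abs(data - mu) <= kappa * sigma
    return np.nanmean(np.where(mask, data, np.nan), axis=0)
```

With enough frames, a hot pixel or satellite streak present in one frame is far outside the per-pixel spread and gets clipped, while well-behaved pixels are untouched.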


In DSS I sort by FWHM (star size) and remove any obviously bad frames. I then look at the overall list for an obvious place to draw a line and drop all the worst ones (10% or more). I do the same for “number of stars detected”. Generally I delete the worst files altogether rather than just dropping them from the list. I will normally use the default reference frame (the one with the highest quality score), but sometimes I want to centre the image slightly differently or align it to another stack.
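The "draw a line and drop the worst" step amounts to a percentile cut on the quality metric. A hypothetical sketch (the function and file names are mine, not DSS's):

```python
def keep_best(frames, drop_fraction=0.10):
    """Sort (filename, fwhm) pairs by FWHM (smaller = tighter stars)
    and drop roughly the worst `drop_fraction` of the list."""
    ranked = sorted(frames, key=lambda f: f[1])
    keep = max(1, int(len(ranked) * (1 - drop_fraction)))
    return ranked[:keep]

subs = [("sub1.fits", 2.1), ("sub2.fits", 3.9), ("sub3.fits", 2.3),
        ("sub4.fits", 2.2), ("sub5.fits", 2.0)]
print(keep_best(subs))  # drops the bloated 3.9-FWHM frame, sub2.fits
```

The same cut can be applied to "number of stars detected" (sorting descending instead, since more stars is better).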


I save the stacked image in FITS format and load it in “FITS Liberator”. This quirky free software will detect that the image needs flipping and will default to flipping it; I turn that off. It starts with the first “plane”, which is normally the Red channel. I scale the image with the ASINH option and set the scaled peak level to 1000 (from the initial value of 10), then “Apply to Image”. Slide the black and white bars at the bottom around until you are satisfied that you have removed most background noise and the image looks good. Take a good look at what you’ve done here and save the image, labelled Red. Then move to the second plane and repeat for Green, and then the third plane, Blue. The curves will be similar for each channel but not identical, because the image differs at these different colours; the dim light may also differ due to coloured light pollution. Try to treat each channel in a similar fashion, noting that fine colour balancing can be done later. Sometimes FITS Liberator retains information from a previous action. In that case I advise shutting it down, restarting, making sure you "Apply to Image" a few times, and checking that there are no odd settings.
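The ASINH option applies an inverse-hyperbolic-sine stretch, which lifts faint nebulosity while compressing the bright stars. A rough sketch of the idea in NumPy — an approximation only; FITS Liberator's exact scaling may differ, and the function name is mine:

```python
import numpy as np

def asinh_stretch(channel, scaled_peak=1000.0, black=None, white=None):
    """Normalize a channel between chosen black/white points (the
    sliders), then apply an asinh stretch whose strength is set by
    `scaled_peak` (analogous to the scaled peak level setting)."""
    black = channel.min() if black is None else black
    white = channel.max() if white is None else white
    norm = np.clip((channel - black) / (white - black), 0.0, 1.0)
    return np.arcsinh(norm * scaled_peak) / np.arcsinh(scaled_peak)
```

Raising `scaled_peak` devotes more of the output range to the faint end, which is why moving from 10 to 1000 brightens dim detail so dramatically.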


Now I have three TIFF images: Red, Green and Blue. Combining them can be done in a number of ways, but I use a script in Photoshop, part of the “Astronomy Tools Actions Set” by ProDigital Software ($22 USD). Once I have an RGB image I use another ProDigital Photoshop tool, “AstroFlat Pro” ($35 USD), to remove background lighting, central rings of light, etc. I save this image with no other changes.
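The channel combination itself is just stacking the three planes along a last axis; the Photoshop action handles this (plus its own conveniences). A hypothetical stand-in in NumPy, with placeholder channel data:

```python
import numpy as np

def combine_rgb(r, g, b):
    """Stack three single-channel arrays into one (h, w, 3) RGB image.
    The inputs must share the same shape and dtype."""
    return np.stack([r, g, b], axis=-1)

# Placeholder channels standing in for the three saved TIFFs.
r = np.full((2, 2), 100, dtype=np.uint16)
g = np.full((2, 2), 150, dtype=np.uint16)
b = np.full((2, 2), 200, dtype=np.uint16)
rgb = combine_rgb(r, g, b)
print(rgb.shape)  # (2, 2, 3)
```

The combined array can then be written out as a 16-bit TIFF with a library such as tifffile.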


End of part 1.
