Ideally one would stack many photos and use a much finer, smoother process than a simple app like Filterstorm.
Filterstorm for the iPad is a rather rough photo editing program. I think, due to limitations of the iPad, it was written to use less than the full color space that would be available in other, more expensive programs (like Photoshop, Nebulosity, or practically any advanced editing program on the Macintosh or PC). A typical desktop program likely stores more internal color information for each pixel of the image. Because of this limited processing, I believe color gradients can appear rougher and more posterized in Filterstorm than they would in a better program.
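To see why fewer internal color levels produce rougher gradients, here is a small sketch (my own illustration, not how Filterstorm actually works internally): the same smooth ramp quantized to a low bit depth keeps far fewer distinct values, so it steps visibly instead of blending.

```python
import numpy as np

# A smooth gradient from black to white, like a patch of sky glow.
gradient = np.linspace(0.0, 1.0, 256)

# Simulate an editor that keeps only 5 bits of precision per channel
# versus one that works at full 8-bit precision.
levels_low, levels_full = 2**5, 2**8
posterized = np.round(gradient * (levels_low - 1)) / (levels_low - 1)
smooth = np.round(gradient * (levels_full - 1)) / (levels_full - 1)

# Fewer internal levels means fewer distinct output values, so the
# gradient drops in sudden steps rather than fading smoothly.
print(len(np.unique(posterized)))  # 32 distinct steps
print(len(np.unique(smooth)))      # 256 distinct steps
```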
The positive of the app is the instant feedback you get from a touchscreen. Editing in Filterstorm feels more organic and quick than it would in a desktop program.
So in this case, I traded quality for speed of use, especially with curve filters.
We can still use a curve process to try to take some sky glow out.
In this case I'm going to use a process that will seem somewhat radical to most photo processors who do astrophotography: negative curves. I'm using negative curves to reduce the sky glow that we picked up during the longer exposures. This sky glow usually sits in the reddish range, an orange-like tint from sodium street lights glowing in the atmosphere, and we can reduce it by targeting that reddish tint. The theory, put forth in the Nebulosity manual (which can be found on the internet), is that one can separate the colors of a bright photo and split off the histograms, using processes in Nebulosity that will do this. This is the opposite of what most image processors do with faint images: they usually try to merge the colors in the histogram, so that the red, green, and blue color curves are aligned. In reducing sky glow from a photo that is too bright, we will instead split the histogram in Nebulosity, and then pull down the sky glow portion of the image using negative curves.
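The curve tools in Filterstorm and Nebulosity are interactive, but the idea of pulling down the red channel can be sketched in code. This is only an illustration under my own assumptions (the sample pixel values and the sine-shaped curve are made up, not either program's actual algorithm): a downward curve that subtracts most in the midtones, where sky glow lives, while leaving the bright star pixels mostly alone.

```python
import numpy as np

# Hypothetical 2x2 RGB image (values 0..1) with an orange sky-glow
# cast: red lifted relative to blue. One pixel is a bright star.
img = np.array([
    [[0.45, 0.35, 0.20], [0.50, 0.38, 0.22]],
    [[0.90, 0.85, 0.80], [0.48, 0.36, 0.21]],  # top-left of this row: star
])

def pull_down_red(r, amount=0.25):
    """A simple 'negative curve': subtract a hump that is strongest
    in the midtones and gentle near black and white."""
    return np.clip(r - amount * np.sin(np.pi * r), 0.0, 1.0)

out = img.copy()
out[..., 0] = pull_down_red(img[..., 0])

# The glow-level midtone reds drop sharply; the near-white star pixel
# keeps most of its red, so bright stars stay bright.
midtone_drop = img[0, 0, 0] - out[0, 0, 0]
star_drop = img[1, 0, 0] - out[1, 0, 0]
print(midtone_drop, star_drop)
```

The curve shape matters: a straight subtraction would dim stars as much as sky glow, while a midtone-weighted pull takes out the orange cast preferentially.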
Negative curves tend to pull an image, or part of it, back, so that the display looks as if it were drawn back into the photo in a 3D kind of way. Parts of the image, often bright stars, end up as white, oversaturated areas that look as though they were literally pulled back into the photo, giving it a quasi-3D look. The posterized effect in the photo comes from the radical use of curves combined with there being little difference in gradient values, that is, between the actual values of the trail of the comet and the sky glow.
So we end up with a posterized kind of view of the comet, in which many values look clipped. Having less color space in internal processing causes clipping and rougher gradients as well, giving sudden, pixelated drop-offs that are less smooth than most astrophotographers would like.
I can always say I was trying to be artistic in a poster kind of way. Will that get me off the hook?
Some heavily processed photos that people create from Hubble deep space images end up with a posterized, somewhat granular glow to them. I suspect they are using negative curve processes, or positive curve processes so extreme that the bright areas are pulled out and the same 3D-like effect occurs.