I did some test shots yesterday (so please don't judge the image itself) with an ND400 filter on a Batis 18mm and the A7RII, preparing for a night shoot that I plan to do in the coming weeks.

As the street that I will be shooting does not have heavy traffic, I want to set exposure times as long as possible to get some light trails into the images.

 

So for the test I chose a 180s exposure. I am aware that static sensor noise is always a problem on long exposures, so, similar to Milky Way or deep-sky sessions with hundreds of 20-60s exposures, I added some dark frames.

 

So the total set for the test consists of 7 lights and 3 darks at 180s each. Camera 100% manual, no processing etc.; ISO 800 and f/4.0 for a balanced mix between DOF and noise...
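The dark-frame idea - averaging lens-capped exposures into a "master dark" and subtracting it from the lights - can be sketched with a toy NumPy model (synthetic numbers and a made-up hot-pixel pattern, not real sensor data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4x4 "sensor": a fixed hot-pixel pattern plus random shot noise.
fixed_pattern = np.zeros((4, 4))
fixed_pattern[1, 2] = 50.0  # a hot pixel that appears in every frame

def expose(signal):
    """Simulate one 180s exposure: scene signal + fixed pattern + random noise."""
    return signal + fixed_pattern + rng.normal(0.0, 1.0, (4, 4))

lights = [expose(signal=100.0) for _ in range(7)]  # 7 light frames
darks = [expose(signal=0.0) for _ in range(3)]     # 3 dark frames (lens capped)

master_dark = np.mean(darks, axis=0)                # average darks into a master dark
calibrated = np.mean(lights, axis=0) - master_dark  # subtract from the stacked lights

# The fixed hot pixel largely cancels; only random noise remains at [1, 2].
```

With only 3 darks the random noise in the master dark is still significant, which is exactly the "take a lot of them and average" point raised later in the thread.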

 

Now as you can see in the cropped images below, the noise was bad. Really bad. Even the processing did not remove it, and both blue and red pixels widely remained in the processed image after the dark frames were applied...

 

Any help or tips on how to reduce/avoid this?

[image attachment]

stacked

[image attachment]

A few more... unfortunately the compression from the upload reduces the resolution quite a lot. Bottom right, or the last one, is the stacked image. It looks like the master files don't work on these long exposures, and different pixels create the noise in each frame?

[image attachment]

You might get a better result with long-exposure Noise Reduction turned on. The camera will take the exposure, then a dark frame, and process the two automatically.

 

IIRC, exposures longer than 30 seconds also reduce the bit-depth of the image, but check the manual.

 

At least try a few 30-second exposures, with and without NR, and see if it's any better. If you are stacking images, it will not matter too much how long each exposure is, and you will still get the light trails.

 

 

What did you use to stack these, and to process the darks?

 

It's hard to tell, but is the noise in the same place in the darks and the lights?


What timde said.

 

Bulb exposures are really not the strong suit of the A7RII. Due to the nature of BSI sensors there is more crosstalk noise in the recorded data. I haven't done a lot with dark frames yet, but in my experience they did not help too much with the noise. In astrophotography forums I got the advice that dark frames only show a good effect if you take a lot of them and average them, so 3 frames might just not cut it.

 

There have, however, been some cases where the stuck/dead pixel map that the camera generates from time to time when you turn it off was not properly triggered by the firmware's cron job. One thing you could try is to set the camera's internal date ahead by about 2 years, turn the camera off, and hope that it takes the calibration exposure with the shutter closed. Resetting the date to the actual time won't delete the stuck/dead pixel map. My camera seems to take this exposure roughly every 2 weeks when I turn it off.

 

Hope that helps,

 

Ben


Good point, I will try the NR and have the dark frame taken directly at the time of the image. I was hoping to keep the long exposure because, with fewer cars in the actual conditions next week, the trails would not get interrupted and would form one continuous line rather than multiple short ones, but it is worth a try.

 

For the test I just ran the images through StarStax, and also tried DeepSkyStacker to create a MasterDark, but the results were similar.

 

The spots are in the same locations on the full-resolution pictures; the cropped images are not exactly the same areas though. I believe one of the problems is that both of these tools are optimized for calibration against a dark background. They remove a red or blue dot and turn it into a darker color. That is why the previously red pixels in my sample turn into dark greenish/grey pixels in the processed image and are still visible against a white/light background.

 

I could do some manual processing, but that is really quite a lot to do for my mediocre photo skills :-)

 

 

 

 


Thanks!

 

You might be right that 7 lights and 3 darks are not enough to reduce the noise properly. I will follow the suggestion above and instead shoot 70 and 30 images, but with shorter exposures, to compare the results. I will also try to trigger the re-calibration. Another thing I read about, which probably also doesn't help, is the current outside temperature down here. We are close to 30°C at night, which brings the overall temperature of the camera and sensor up quite significantly.

 


Running a couple more tests based on the feedback above...

 

First run.

 

70x18s lights and 30x18s darks, with otherwise the same settings as above. Interestingly, the stacked image, with the same overall exposure time of 1260s, is a lot darker. The noise level is reduced, but as visible in the image with increased contrast and brightness, the sensor noise is still clearly there.

 

Running 70x18s with NR "on" next... then 7x180s with NR "on", and lastly the suggested lower ISO at f/2.8, to see whether ISO has any impact on sensor noise.

[image attachment]

From the set above, post the same crop of a single dark frame, matching the crop above, at the best quality you can manage (IIRC around 1MB will upload per post).

 

Just to see what noise is actually in the darks.


Uploading full resolution to Flickr now: 1 sample light, 1 sample dark and the stacked full-res.

 

I guess the difference in brightness compared to the 7x180s set is due to the algorithm that StarStax uses for the stacking.

 

 


I've been told I don't know what I'm talking about ... so take this with a grain of salt.

 

The dark seems clean, no noise, nearly all values are 0 or 1 (i.e. black), but there are lots of stuck pixels. And actually the single shot looks pretty clean too. The red street light seems a little blotchy, but the yellow street lights on the left look relatively good - could it be some other effect or surface texture?

 

When I push the exposure I can see the stuck pixels clearly, and it seems like most of them are cleaned up, so the dark frames seem to be working. Are the stuck pixels consistent from frame to frame?

 

And I wonder if some of the stuck pixels in the stacked image might actually be legitimate light sources?

 

 

 

I would try shooting at base ISO, which I guess is 100. At least then you should get the best performance.

 

It seems that there are a few blending modes in StarStax - could it be that they are somehow contributing to the noise?

 

 

 

Whatever you are doing, it seems to be getting some interesting results.

 

You know, I just added a little sharpening to the stacked image ... it looked really good!


Thanks a lot! Much appreciated!

 

I made some good progress yesterday based on all the feedback above.

 

I did 2-3 test shots at a reduced ISO 200 @ f/2.8 and the noise dropped further! NR "on" is another major improvement; I used 180s for the exposure time.

I think I see a pattern in how the camera reacts to the different settings. At this point I have only tested the extreme settings and will now try to balance them.

 

StarStax is definitely the weak spot when stacking the short exposures. The "lighten" algorithm only keeps the brighter areas of the images - which of course is what it is designed for when shooting star trails. There is a second setting that adds up the exposures of the entire image. When doing this with 70 images at once, the brighter areas get completely blown out, so I think I will have to run a 2-step process when using shorter exposures.

 

Step 1

Combine batches of 5-10 images each with the "add" algorithm in StarStax to recreate a fake long exposure (if this makes sense).

Step 2

Re-run the "lighten" algorithm on the 10-15 batch images created in Step 1 to create the final image without overexposing.
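The two-step process above can be sketched with NumPy (a toy model with uniform dummy frames; the function names are my own, not StarStax's):

```python
import numpy as np

def stack_add(frames):
    """StarStax-style 'add': sum exposures to fake one long exposure."""
    return np.sum(frames, axis=0)

def stack_lighten(frames):
    """StarStax-style 'lighten': per-pixel maximum across frames."""
    return np.max(frames, axis=0)

def two_step(frames, batch=5):
    # Step 1: 'add' within small batches, so highlights don't blow out globally
    batches = [stack_add(frames[i:i + batch]) for i in range(0, len(frames), batch)]
    # Step 2: 'lighten' across the batch results to keep the trails continuous
    return stack_lighten(batches)

frames = [np.full((2, 2), 10.0) for _ in range(70)]  # 70 dummy short exposures
result = two_step(frames, batch=5)  # each batch sums to 50; lighten keeps the max
```

Because "add" only ever sees 5 frames at a time, each batch stays far below clipping, while "lighten" across the batches preserves the trail segments from every batch.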

 

 

Tonight I will try a longer set of images with a medium exposure time of 40-60s, ISO 100 and NR "on", and run them through the above process.

If I can reduce the stuck pixels to a few dozen in the final image, I am happy and can easily remove those in post-processing for the actual image I want to take next week. None of the above has been shot in raw yet, so raw will also give me some more flexibility when processing the final image...

 

Thanks again, I will post my results!

 

 


I am glad you are making progress. Here is a short explanation of the different modes in which stacking programs work.

 

Average (DSS), Mean (Photoshop): computes the arithmetic mean - it adds all values and divides by the number of frames. This method yields the best results for reducing noise and even increases accuracy (effective bit depth), but every outlier will influence the final outcome. I think outliers will not influence your result, because you have nothing you want to filter out.

 

Median (DSS, Photoshop): takes the middle value - it puts all pixel values in an ordered line and picks the center one. This method is perfect for rejecting outliers. In your case, over a long enough acquisition phase, it would filter out the car lights and produce an image with only the things in the frame that are visible most of the time.

 

Maximum (Photoshop), Lighten (StarStax): takes the statistical maximum - it puts all values in an ordered line and keeps the brightest one. This method is good for star trails but bad for noise, as hot and stuck pixels are usually the brightest ones.

 

There are some advanced stacking methods like kappa-sigma clipping, but those are only useful when you need to weight the rejection in a meaningful way. I would stick to stacking with median/average in your scenario.
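The three modes above can be compared on a single toy pixel stack (synthetic values with one hot-pixel outlier):

```python
import numpy as np

# One pixel seen through 10 frames: nine normal readings, one hot-pixel outlier.
stack = np.array([10, 11, 9, 10, 10, 11, 9, 10, 10, 255], dtype=float)

mean_val = np.mean(stack)      # 34.5 - the outlier pulls the average way up
median_val = np.median(stack)  # 10.0 - the outlier is rejected entirely
max_val = np.max(stack)        # 255.0 - 'lighten' keeps exactly the hot pixel
```

This is why "lighten" stacking makes hot pixels worse rather than better: the brightest sample wins at every pixel, and a hot pixel is by definition the brightest sample.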

 

Regards,

 

Ben


Thanks a lot for this! I will try DSS as an alternative as well. Running the 50 lights now at ISO 100 @ f/2.8 and 60s with NR enabled.

 

Just a thought... shouldn't it be possible to shoot some bias frames, similar to the vignetting-correction process in DSS, to expose the stuck pixels and process them out of the final image?

 


Happy now with the results...

 

The image below is processed from 50 raw images at 60s, NR "on", ISO 100 @ f/2.8. The raws were batch-processed in RawTherapee and exported to JPG, then run through StarStax with the lighten and average settings and stacked a second time.

 

Uploaded the full res to Flickr. Resized image below... Thanks again for all the help and ideas!

 

 

[image attachment]


 

 

I do not think that bias frames are meant to counter hot/dead pixels. AFAIK they are meant to compensate for values that the firmware artificially adds between the exposure and the writing of the raw file. Some manufacturers do this to keep the look of the files consistent throughout the ISO range by effectively raising or lowering the black point of the data. In general, though, DSS can handle many calibration frames for each stack - be it flat, dark or bias. You can also configure how each set of calibration frames is stacked (usually you do not want to emphasize rejection here, so choose average). DSS creates a master calibration frame for every category of calibration frame. So yes, you can take a lot of flat frames to calibrate out the vignette and dust spots, dark frames to counter the general electronic noise (which encompasses hot and dead pixels), and bias frames to calibrate a proper black point for your data (which is mostly needed for scientific purposes).
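The role of each calibration frame can be sketched with a simplified NumPy model (toy numbers; real calibration pipelines such as DSS's handle scaling and rejection more carefully):

```python
import numpy as np

# Toy 2x2 sensor with lens vignetting and a constant dark/bias offset.
vignette = np.array([[1.0, 0.5], [0.5, 1.0]])  # lens falloff pattern
light = 200.0 * vignette + 110.0               # scene * falloff + dark current & bias
master_dark = np.full((2, 2), 110.0)           # averaged lens-capped exposures
master_flat = 1000.0 * vignette + 100.0        # evenly lit wall, shows the falloff
master_bias = np.full((2, 2), 100.0)           # shortest-possible-exposure offset

flat = master_flat - master_bias
flat /= flat.mean()                            # normalize: flat only corrects shape
calibrated = (light - master_dark) / flat      # uniform scene recovered
```

Subtracting the dark removes the fixed offset (including hot pixels), and dividing by the bias-corrected, normalized flat removes the vignette, leaving the uniform scene.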

 

Your results look really lovely. If you want to do something about the color cast of the pictures, you will soon see that fiddling around with white balance has some bad effects. Working with subtract layers is a much better approach when you want to counter effects like city glow or other light pollution. This can also work with gradients and masks. It can, however, get very tedious. I hope to see more of your night cityscapes :)
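The subtract-layer idea boils down to estimating the glow gradient and subtracting it, which can be sketched on synthetic data (a uniform scene plus a horizontal light-pollution gradient; real images need a subject-free region for the background estimate):

```python
import numpy as np

h, w = 4, 6
glow = np.tile(np.linspace(0.0, 30.0, w), (h, 1))  # city glow, brighter to the right
scene = np.full((h, w), 50.0)                      # the "true" uniform scene
image = scene + glow

# Crude background estimate: per-column means, anchored at the darkest pixel.
background = image.mean(axis=0, keepdims=True) - image.min()
corrected = image - background                     # gradient subtracted out
```

In an editor, the same operation is a blurred gradient layer set to subtract blend mode, optionally masked so it only touches the sky.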


Thanks again! The subject above was at this point just a technical survey to find some decent exposure settings with the filter and the Batis without too much noise. I have not done anything in PS or other software to manipulate the outcome besides the stacking.

The settings above are now close enough to do some actual shots! I will hopefully be able to post some nice results sooner rather than later...

 

 

