
Hi folks,

 

Wherever I look online, people seem to be acquiring 12 or 14 bit RAW images with the Sony a6500. But unless I'm missing something, my camera only ever produces 8 bit (per colour channel) images.

Am I mistaken in how the higher bit depth should manifest? In my head, the pixel values on inspection in editing software should reach well above 255 in intensity, and the file sizes should be roughly 24 MP * 14/8 bytes * 3 channels, scaled by whatever compression is applied. My ARW files are ~25MB.
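For what it's worth, here's the back-of-the-envelope arithmetic I'm doing, as a rough Python sketch (the figures are my own approximations, not Sony specifications):

```python
# Rough size estimate under my assumption of 3 full-depth channels per pixel.
# Figures are approximations for illustration, not Sony specifications.
megapixels = 24_000_000      # the a6500 is roughly a 24 MP sensor
bits_per_value = 14          # the bit depth people report online
channels = 3                 # my assumption: full R, G and B per pixel

uncompressed_mb = megapixels * channels * bits_per_value / 8 / 1e6
print(f"Estimated uncompressed size: {uncompressed_mb:.0f} MB")  # ~126 MB
# A ~25 MB ARW would then imply a compression factor of roughly 5x.
```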

My current camera settings:

Single shot 

Silent shooting: off

Quality: RAW

I've played with what feels like every setting in the menu and not noticed a change.

What am I doing wrong?

 

Thanks!


I thought the color depth was per pixel.

Are you saying you think it's 14 bits for R, 14 bits for G and 14 bits for B? I think that is wrong. That would be an immense 42 bits per pixel, which works out to 2^42 = 4,398,046,511,104 colors per pixel.

Is that really 4.4 trillion colors?

There has to be something wrong with our understanding.


I'm not 100% sure how it's managed in software and files, and I'm sure more knowledgeable people can weigh in. But an '8 bit' image is definitely 8 bits per color channel, for a total of 24 bits per image pixel, about 16.7 million colors.

Higher bit depths should indeed hold more intensity precision. A 14-bit camera should produce 42 bpp images with about 4.4 trillion colors. This is mostly for improving dynamic range, i.e. boosting contrast in shadows while preserving high contrast in brighter areas; 8 bits is definitely not enough dynamic range to do this without a loss in quality.
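To make the shadow-boosting point concrete, here's a tiny sketch (my own toy example, not from any camera data) of what happens to deep shadows at 8-bit versus 14-bit precision:

```python
import numpy as np

# Toy example: the same six deep-shadow tones (on a 0..1 scale), quantised
# to 8-bit and to 14-bit codes. The values are made up for illustration.
shadows = np.linspace(0.0, 0.01, 6)

codes_8bit = np.round(shadows * 255).astype(int)      # what an 8-bit file can store
codes_14bit = np.round(shadows * 16383).astype(int)   # what a 14-bit file can store

print("8-bit  codes:", codes_8bit)    # several distinct tones collapse to the same code
print("14-bit codes:", codes_14bit)   # all six tones keep distinct codes

# Boosting those collapsed 8-bit shadows in editing produces visible banding;
# the 14-bit version keeps every tone separate, so the boost stays smooth.
```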

What may(?) play a role in reducing the craziness (at least in file-storage terms) is that each pixel of the sensor of course only measures R, G or B according to its color filter; an R, G and B value are then created for each pixel through interpolation ('de-Bayering'). I don't know if RAW/ARW files store an R, G and B value for each pixel (42 bpp) or just store what comes off the sensor and interpolate in software (14 bpp).
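Here's a toy illustration of that distinction (my own sketch with a made-up RGGB layout, not the actual ARW format):

```python
import numpy as np

# Toy sketch: the sensor records ONE value per photosite, laid out in a
# Bayer (here RGGB) mosaic; full RGB per pixel only exists after demosaicing.
h, w = 4, 4
mosaic = np.random.randint(0, 2**14, size=(h, w), dtype=np.uint16)  # 14-bit-range values

bayer = np.array([["R", "G"], ["G", "B"]])
pattern = np.tile(bayer, (h // 2, w // 2))   # which colour each photosite measured

print(mosaic.nbytes, "bytes storing one value per photosite")               # 32
print(mosaic.nbytes * 3, "bytes if full R, G and B were stored per pixel")  # 96
```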

What I don't understand with my a6500 is why I'm not seeing that 14 bit dynamic range manifested anywhere.

Option A: the camera doesn't actually produce 12/14-bit files for the user, and it's just some kind of internal image-quality improvement.

Option B: RAW editing software histograms etc. work differently to what I expect, and the 12/14 bits manifest in a different way.

Option C: I'm doing something wrong; my camera can make 12/14-bit images, but I've never successfully made it do that.


It's not a RAW file if it's 8 bit. RAW files are typically 12 or 14 bit, or even 16 bit on some medium format cameras. 8 bit files are usually JPEG - are you shooting JPEG instead of RAW?

There is one possible explanation: are you using Adobe Camera RAW to process the files? It used to be that the default for ACR was to create 8 bit images unless you set it explicitly to 16 bit (it converts the 12 or 14 bit RAW files into 16 bit Adobe format during the RAW processing step).

If you are using different RAW processing software, maybe it has a similar issue? Read up on your software and how to tell it the bit depth you want.
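If you want to check what the ARW itself contains before any converter touches it, one option (assuming you don't mind a little Python; rawpy is a LibRaw wrapper, nothing to do with Affinity or ACR) is to look at the raw sensor values directly. The filename below is just a placeholder:

```python
import rawpy  # pip install rawpy (LibRaw bindings)

# "DSC00001.ARW" is a placeholder filename for one of your own files.
with rawpy.imread("DSC00001.ARW") as raw:
    data = raw.raw_image                     # un-demosaiced sensor values
    print(data.dtype, data.min(), data.max())
    # A 12/14-bit file shows maxima far above 255 (up to roughly 4095 or 16383,
    # before black-level/white-level handling).

    # When converting, ask for 16 bits per channel so the extra precision survives:
    rgb16 = raw.postprocess(output_bps=16)
    print(rgb16.dtype)                       # uint16
```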


2 hours ago, FunWithCameras said:

It's not a RAW file if it's 8 bit. RAW files are typically 12 or 14 bit, or even 16 bit on some medium format cameras. 8 bit files are usually JPEG - are you shooting JPEG instead of RAW?

There is one possible explanation: are you using Adobe Camera RAW to process the files? It used to be that the default for ACR was to create 8 bit images unless you set it explicitly to 16 bit (it converts the 12 or 14 bit RAW files into 16 bit Adobe format during the RAW processing step).

If you are using different RAW processing software, maybe it has a similar issue? Read up on your software and how to tell it the bit depth you want.

Ah, this is reassuring, thanks - looks like I'm probably just too much of a newbie with RAW editing software. I have Affinity Photo, but I've almost never worked with RAW because I didn't see the point (at my low skill level) - now I definitely see the point.

I have found an info readout in Affinity where it says '14 bits per pixel'. I just expected to see the "actual" 14-bit pixel values somewhere, which I've yet to find - all values, including on the histogram, only go up to 255, but I guess that corresponds to what's displayed, not to the data.


10 hours ago, louisk said:

Ah, this is reassuring, thanks - looks like I'm probably just too much of a newbie with RAW editing software. I have Affinity Photo, but I've almost never worked with RAW because I didn't see the point (at my low skill level) - now I definitely see the point.

I have found an info readout in Affinity where it says '14 bits per pixel'. I just expected to see the "actual" 14-bit pixel values somewhere, which I've yet to find - all values, including on the histogram, only go up to 255, but I guess that corresponds to what's displayed, not to the data.

The histograms go to 255 because people are used to that - show them a histogram going to 16383 (14-bit) and they will look at you blankly 🙂; likewise 4095 (12-bit). Besides, you don't get much use from showing more than 256 values on the histogram - a difference of 1 in 16384 is going to get lost in the noise (pun intended) - and it's faster and more useful to group the values that way.
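If it helps to see it, here's a minimal sketch (toy numbers, not from any real file) of how a 0-255 histogram treats 14-bit data: the 16384 possible levels simply get grouped into 256 display bins.

```python
import numpy as np

# Toy data: 14-bit values grouped into the 256 bins a typical histogram displays.
values_14bit = np.random.randint(0, 16384, size=100_000)

hist, _ = np.histogram(values_14bit, bins=256, range=(0, 16384))
print(len(hist))      # 256 bars
print(16384 // 256)   # 64 adjacent 14-bit levels per displayed bar
```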

