Compressing RAW Photos with SPIHT

This is a spinoff of this thread:

For those curious, this is what my image compression algorithm looks like in action. You might want to look at these full screen and in FullHD to see all the detail that is being resolved:

The image is 2048 by 2048 pixels, 16-bit grayscale. The “bpp” is bits per pixel. Uncompressed, this would logically be 16 bits per pixel, so 8 MiB. Using my compression I manage to get it down to 0.4 bits per pixel (0.2 MiB, so 2.5% of the uncompressed size) and you would be hard pressed to notice any differences with the naked eye. That is very impressive, but you don’t normally use 16-bit images. Nor is that low bitrate sufficient for all of the applications you would normally use 16-bit images for.
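For anyone who wants to check the arithmetic, here is the size calculation spelled out (plain Python for illustration, nothing from the actual codec):

```python
# Size math for the 2048x2048, 16-bit grayscale example above.
width = height = 2048
bit_depth = 16          # bits per pixel, uncompressed
bpp = 0.4               # compressed rate in bits per pixel

uncompressed_bytes = width * height * bit_depth // 8
compressed_bytes = width * height * bpp / 8

print(uncompressed_bytes / 2**20)   # 8.0 (MiB)
print(compressed_bytes / 2**20)     # 0.2 (MiB)
print(bpp / bit_depth)              # 0.025, i.e. 2.5% of the original
```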
Most images you see every day are 8 bits per channel:

Here the reduction is less drastic. That’s mostly because your screen can actually render all 8 of these bits, whereas it can’t show the full 16 bits of the previous photo. Even on the screens that have a 10-bit panel and actually use it, a whole 6 of those 16 bits go unseen, so they can safely be thrown away without you noticing a thing.
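Throwing away those low bits is just an integer shift; a quick sketch of the idea (my illustration, not part of the codec):

```python
import numpy as np

# Quantize 16-bit samples down to the 10 bits a good panel can show
# by dropping the 6 least significant bits.
samples16 = np.array([0, 1234, 40000, 65535], dtype=np.uint16)
samples10 = samples16 >> 6      # keep only the top 10 bits (0..1023)
print(samples10)                # 0, 19, 625, 1023
```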
This example also uses a different wavelet, one with discontinuities, and these result in the blocky artefacts in the image. You wouldn’t typically use it for image compression because people don’t normally appreciate blocky images. However, it is very good for seeing how the algorithm works.
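The post doesn’t name the wavelet, but the classic wavelet with discontinuities is the Haar wavelet, whose piecewise-constant basis produces exactly this kind of blockiness when the detail coefficients are quantized away. Assuming that is what’s used here, one transform level looks like this:

```python
import numpy as np

def haar_step(signal):
    """One level of a Haar wavelet transform: pairwise averages
    (the low-pass "approximation") and pairwise half-differences
    (the high-pass "detail"). Length must be even."""
    s = np.asarray(signal, dtype=float)
    avg = (s[0::2] + s[1::2]) / 2
    diff = (s[0::2] - s[1::2]) / 2
    return avg, diff

def haar_inverse(avg, diff):
    """Exact inverse of haar_step."""
    out = np.empty(avg.size * 2)
    out[0::2] = avg + diff
    out[1::2] = avg - diff
    return out

x = np.array([9.0, 7.0, 3.0, 5.0])
a, d = haar_step(x)                 # a = [8, 4], d = [1, -1]
assert np.allclose(haar_inverse(a, d), x)

# Zeroing the detail coefficients (coarse quantization) reconstructs
# each input pair as one flat block -- the source of the blockiness.
blocky = haar_inverse(a, np.zeros_like(d))
print(blocky)                       # [8. 8. 4. 4.]
```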

The videos are short by design. They are meant to be scrubbed through and examined more or less frame by frame.
I can tell you more and make more of these videos if anyone is interested.


Yes, very interested!

Why do you say screens can’t show the full 16 bits? I thought they could?


That’s because they can’t 🙂
For an RGB image, 16 bits per channel would amount to a bit depth of 48 bits, and I would very much like to see the monitor that can actually display that.
Even if it could, your OS (e.g., Windows) and GPU only support 24-bit color. I know it says 32-bit, i.e., TrueColor, but that is simply RGBA: it still uses only 8 bits for each of those four channels, and the added alpha channel never actually makes it to your eyes.
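The bit-depth bookkeeping, just to make it concrete:

```python
channels = 3                # R, G, B
print(channels * 16)        # 48: a 16-bit-per-channel RGB image
print(channels * 8)         # 24: what "TrueColor" actually displays
print(channels * 8 + 8)     # 32: the same 24 bits plus 8 alpha bits
```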

16 bit is only useful for editing since it gives you greater flexibility for adjusting brightness, exposure, levels, white balance, etc.

I thought we were talking about 16 bits per pixel though, not 16 per channel! Or does the compression happen channelwise?


The compression happens both per channel and for all channels combined. Here’s a description of the full pipeline: