Kompressor.app and Wavelet Image Compression Library 3.3.3

Shortly before the year is out (and as a result of my vacation), there is some fresh software to be had… 🙂
I’ve now written a Mac OS X 10.4 application called Kompressor.app to compress, inspect and display WKO images. This release goes hand in hand with version 3.3.3 of the Wavelet Image Compression Library itself.
Because this is the first (semi-)proper Mac application I’ve written, I would welcome any form of testing or feedback people can provide, especially on the user-interface side. The application is a universal binary and should therefore work on both PPC and Intel Macs.
Here’s a bit (all of it actually) of the supplied online help to get started…

Introduction

Kompressor.app is an application for creating highly compressed bitmap images using my Wavelet Image Compression Library. These images have the extension .wko and can be transmitted progressively, or even truncated at any point, while still providing the best possible quality for the amount of data received.

Operation

Essentially, you open an existing image and then modify the settings until you’re satisfied with the end result. At that point, you can save the image with the current settings.

Opening Images

When opening existing images, you have the choice of a color-space into which the image should be converted. This selection only applies when opening non-“.wko” images that have at least 3 channels. The RGB color-space leaves the source as it is, while both variants of the YCoCg color-space perform a conversion. For natural images (and most others, too) the YCoCg color-spaces result in (sometimes much) smaller files. The YCoCg color-space incurs a mean-square error (MSE) of about 0.25 compared to the original image, which in nearly all cases is perfectly fine. If you still need bit-perfect reconstruction, you can use the reversible variant YCoCg-R.
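
For the technically curious: YCoCg-R is commonly realized as a lifting transform using integer shifts only, which is what makes bit-perfect reconstruction possible. Here is a sketch of that standard construction, for illustration only (the library’s own implementation may differ in detail):

    /* Standard YCoCg-R forward transform (Malvar & Sullivan), built from
       integer lifting steps; Co and Cg need one extra bit of range.
       Assumes arithmetic right shift for negative values, as provided
       by all mainstream compilers. */
    void rgb_to_ycocg_r(int r, int g, int b, int *y, int *co, int *cg)
    {
        int t;
        *co = r - b;
        t   = b + (*co >> 1);
        *cg = g - t;
        *y  = t + (*cg >> 1);
    }

    /* Exact inverse: undo the lifting steps in reverse order. */
    void ycocg_r_to_rgb(int y, int co, int cg, int *r, int *g, int *b)
    {
        int t = y - (cg >> 1);
        *g = cg + t;
        *b = t - (co >> 1);
        *r = *b + co;
    }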

You can add more channels (i.e. images) to the current document provided that they have identical dimensions. This can either be done with the Add File… option in the Edit menu, or via the clipboard.

Settings

There are two ways of influencing the quality of the resulting image. One is with the File Size Slider on the side of the drawer, where the top position selects lossless compression and positions near the bottom result in smaller (but non-identical or “lossy”) files.

The other option for modifying the outcome of the compression algorithm is the Quality Slider for each individual channel. If we are in lossless mode (i.e. the File Size Slider is at the very top), the Quality Slider prescribes the absolute quality required for each channel. If we are in lossy mode, on the other hand, the Quality Slider assigns an importance to each individual channel: the higher the importance of a channel, the more bits are spent on it during compression, and thus the more accurately it is represented compared to channels of lower importance.
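
To make the lossy case concrete, you can think of the importance values as weights on a bit budget. A toy sketch of a purely proportional split (conceptual only; the library’s actual scheduler works from error estimates rather than a fixed split):

    #include <stddef.h>

    /* Toy illustration: divide a total bit budget across channels in
       proportion to their importance weights. Assumes at least one
       positive weight. Not how the library actually schedules output. */
    void split_budget(const double *importance, size_t channels,
                      size_t total_bits, size_t *bits)
    {
        double sum = 0.0;
        for (size_t i = 0; i < channels; i++)
            sum += importance[i];
        for (size_t i = 0; i < channels; i++)
            bits[i] = (size_t) ((double) total_bits * importance[i] / sum + 0.5);
    }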

View Options

The image view displays both the original image as well as its compressed kin at the same time. They are separated by a thin gray line. You can move this separator by dragging it across the image. As soon as you drag the divider line close to the center of an edge of the view, it will switch orientation as appropriate.

By default, the channel selection is not reflected in the image view, as that proves rather distracting when changing quality settings for individual channels. Since only up to three channels (from the same color-space) can be displayed at the same time, you can examine individual channels, as dictated by the selection, by enabling Display Honors Selection in the View menu.

Sometimes it is easier to spot compression artifacts (or to determine acceptable quality) by flipping between the original and the compressed image. The Swap Original & Compressed option in the View menu lets you do exactly that, by swapping which side of the divider line shows the original and which shows the compressed image.

Restrictions

Although the format and the library itself are very flexible (many settings can be changed at compile time, for example to support deeper pixel formats of up to 61 bits per channel; see the sketch after the list), this version is currently compiled for

  • 8 bits per component (internally, up to 13 bits are kept),
  • 16 channels per file, and
  • based on library version 3.3.3.
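
To give an idea of what such compile-time settings look like, here is a sketch with invented macro names (illustrative only; these are not the library’s actual configuration symbols):

    /* Illustrative only: these names are made up for this example and
       do not match the library's real configuration header. */
    #define WV_BITS_PER_COMPONENT  8   /* stored precision per channel       */
    #define WV_INTERNAL_BITS      13   /* working precision in the transform */
    #define WV_MAX_CHANNELS       16   /* channels per .wko file             */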

14 thoughts on “Kompressor.app and Wavelet Image Compression Library 3.3.3”

  1. Joseph Oren

     A speaker at SPIE recommended a function of Kompressor that compared two images to determine the loss or distortion introduced by some arbitrary manipulation – compression being the original intent. The result was said to be highly correlated with subjective assessments. Is your software the one that can be used in this manner?

    Regards,

    Joe Oren
    Cinea, Inc.

  2. [maven] Post author

    To be entirely honest, probably not. It is only good for comparing the distortion introduced by my particular compression algorithm, which is not an arbitrary manipulation…

  3. cyber

    Hi, I’m from Minsk, Belarus.
     I’ve found your code, and it looks clear enough for me to study wavelet coders from.

     I’m trying to modify your code a little.

     What do you think: will it be faster to replace the divisions in the wavelet transform with bit-shifts?

     Somehow my experiments didn’t show any improvement 🙂

     And another thing: why are you using the Hilbert scan and not the scan provided in the PWC article?

     I also can’t quite figure out whether your scan reorders coefficients only within blocks (LH, HL, HH) or does inter-block shuffling as well.

     Thanks 🙂

  4. [maven] Post author

     I don’t think replacing the divisions with shifts will help much, as the compiler should translate most of them to shifts anyway.
     The Hilbert scan order should help with the spatial locality of the coefficients, but you can easily try any other order by modifying the code that computes the reorder table.
     There is no inter-block coding (although the error estimates for all three blocks of each set are computed at the same time); the next band or block to be written is simply the one that reduces the error the most.
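
     For illustration, one alternative order is the Morton (Z-order) scan, which just bit-interleaves the two coordinates. A generic sketch, not the library’s actual reorder-table code:
     -------
     #include <stdint.h>

     /* Spread the lower 16 bits of x so that a zero bit sits between
      * each pair of adjacent bits (classic bit-twiddling). */
     static uint32_t part1by1(uint32_t x)
     {
         x &= 0x0000FFFF;
         x = (x | (x << 8)) & 0x00FF00FF;
         x = (x | (x << 4)) & 0x0F0F0F0F;
         x = (x | (x << 2)) & 0x33333333;
         x = (x | (x << 1)) & 0x55555555;
         return x;
     }

     /* Morton index of coefficient (x, y): alternate the bits of y and x. */
     static uint32_t morton_index(uint32_t x, uint32_t y)
     {
         return (part1by1(y) << 1) | part1by1(x);
     }
     -------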

  5. cyber

    Hi again!
     I’ve another pack of questions, of course.

     Is it possible (and meaningful) to replace (or extend) the Rice code with a Huffman code?

     Is it possible to turn off the scheduler and encode the image sequentially from the most significant to the least significant plane (for some speedup)?

  6. [maven] Post author

     As an aside, I’ve just pushed an updated version which fixes two (encoding-related) bugs.

    Of course it is possible to replace the Rice coding with something else, but I’m not sure how well that will work out, especially as you would still want to encode the coefficient bit-planes in order from top-down (and not the coefficients themselves). I haven’t tried it, though.
     You can either pass the Approximate flag to wv_channel_compact() (which turns off the error estimation but still computes the size of the encoded channel), or construct a schedule manually. But note that the schedule doesn’t determine which bit-plane is written next; it determines from which block the next significant bit-plane (going from MSB to LSB) is written.
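
     For context, Rice coding is a Golomb code with a power-of-two parameter: a non-negative value v is written as the quotient v >> k in unary, followed by the k low bits. A minimal sketch (a generic illustration, not the encoder used in the library):
     -------
     #include <stdint.h>
     #include <stdio.h>

     typedef struct { FILE *out; uint8_t acc; int n; } bitwriter;

     static void put_bit(bitwriter *bw, int bit)
     {
         bw->acc = (uint8_t) ((bw->acc << 1) | (bit & 1));
         if (++bw->n == 8) { fputc(bw->acc, bw->out); bw->acc = 0; bw->n = 0; }
     }

     /* Rice code with parameter k: unary quotient, then k remainder bits.
      * A real encoder would also flush the final partial byte. */
     static void rice_encode(bitwriter *bw, uint32_t v, unsigned k)
     {
         for (uint32_t q = v >> k; q > 0; q--)
             put_bit(bw, 1);
         put_bit(bw, 0);                        /* unary terminator */
         for (unsigned i = k; i-- > 0; )
             put_bit(bw, (int) ((v >> i) & 1)); /* remainder, MSB first */
     }
     -------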

  7. cyber

    Thanks again.

     You are calculating PSNR on the wavelet coefficients, not on the image itself. This is legitimate, of course, but not really good for comparisons with other algorithms…

     Turning on approximate mode sometimes gives better results 🙂 is that normal?

  8. [maven] Post author

     Unless I am mistaken, the estimate_error() routine in transform.c estimates the error in image space caused by truncating the wavelet coefficients. As an aside, this routine should be exact IMO, but it isn’t, so there could well be bugs in there; see also the included Mathematica notebook (wavelet_analysis.nb) for the derivation. Or are you talking about something else?

     Yes, approximate mode sometimes gives slightly better results, which also points to the error estimation not being quite correct. If you have any suggestions for improvements / fixes, I’d welcome them… 🙂
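
     For what it’s worth, the usual way to make such an estimate is to weight each subband’s squared coefficient error by the squared L2 norm of that subband’s synthesis basis functions; for an orthonormal transform this is exact (Parseval), while for a biorthogonal one the cross-terms between overlapping basis functions are what a weighting of this form ignores. A conceptual sketch, not the actual estimate_error() code:
     -------
     #include <stddef.h>

     /* Contribution of one subband to the estimated image-space MSE.
      * synth_norm_sq is the squared L2 norm of the subband's synthesis
      * basis functions (1.0 for an orthonormal transform). */
     static double band_error(const int *orig, const int *quant,
                              size_t n, double synth_norm_sq)
     {
         double e = 0.0;
         for (size_t i = 0; i < n; i++) {
             double d = (double) orig[i] - (double) quant[i];
             e += d * d;
         }
         return synth_norm_sq * e;
     }
     -------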

  9. cyber

    Hi again.

     Thought it might be interesting: I’ve added two additional scans, a linear one and the one from the article with inter-subband reordering, but neither of them shows any compression improvement.

     Only the linear scan shows a 10% speed gain, with less than 0.5 dB of PSNR loss.

     Such nonsense…

  10. [maven] Post author

    I know; I tried all those once upon a time, and settled on Hilbert as the best, followed by Morton / Z-order and then any variation of the linear scans.

  11. cyber

    Hi, Daniel.

     Here is some stuff which looks a bit confusing:

     channel.h contains the t_wv_block struct:
     -------
     /** Information about a block of wavelet coefficients.
     * In this context, a block refers to the detail coefficients of a
     * particular wavelet decomposition level. For example, a 512×512 image is first
     * decomposed into a 256×256 average (this is then again decomposed recursively)
     * and 3 256×256 detail coefficients. These 3 256×256 detail coefficients make
     * up a so-called block.
     */
     -------
     so a block here is actually a whole transform level,

     but here
     -------
     cc->num_blocks = 1 + 3 * num_levels; /* initial average + N * (HD + VD + DD) */

     cc->block = malloc(cc->num_blocks * sizeof *cc->block);
     -------
     it looks like a block is a single subband.

  12. [maven] Post author

     Good point. I forgot to update the documentation / comments when I split the HD / VD / DD coefficients into their own blocks. I will fix this for the next release.
