Category Archives: wavelet

Wavelet Image Compression

bitbucket

I finally got around to a) updating my old WordPress installation (after finding out spam-bots had already created custom folders on it) and b) uploading all my public code to bitbucket (as Mercurial repositories).
That includes my wavelet image compression library, its Mac OS X previewing GUI, as well as WowPlot (including some fairly decent Objective-C WoWCombatLog.txt parsing). I converted most of those repositories from darcs (which, in the case of my wavelet lib, took about 3h to convert from the darcs1 to the darcs2 format), but at first glance they look alright.

Wavelet 3.4.0

A slightly bigger release, which brings two major changes. It is not compatible with older files due to the reordering changes. The improvements to bit.c are not terribly well tested. More here, as usual. As an aside, Kompressor is now served as a ZIP archive instead of a DMG…

Changelog

  • Overhauled the reordering code to make the table used independent of the aspect ratio of the image. This makes old images incompatible with this version of the code. The smallest image dimension (as passed to wv_create_reorder_table) now determines the largest table entry. Any image whose smallest dimension is smaller than the one the table was originally created with can safely use it.
  • Added a “min bits” criterion to the scheduler that reserves a certain number of bits for certain channels (a sketch of the idea follows this list). Perceived image quality has improved a fair amount; the same default values are used in Kompressor and main.c.
  • A write buffer can now be passed into bit_open(); added bit_free() for deallocating automatically allocated regions. bit_open() only accepts lower-case mode strings now.
  • Fixed (and simplified) scheduler preparations for very large absolute target errors.
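To make the “min bits” criterion a bit more concrete, here is a minimal sketch of one way such a reservation could work: while any channel is still below its reserved floor, only data units that feed a below-floor channel are admitted; once every floor is met, all units compete purely on benefit per bit. The type and function names are illustrative assumptions, not the library's actual scheduler interface.

    #include <stddef.h>

    /* Hypothetical per-channel budget state (not the library's real types). */
    typedef struct {
        size_t min_bits;   /* reserved floor for this channel     */
        size_t assigned;   /* bits granted to this channel so far */
    } channel_budget;

    /* Returns non-zero if a data unit from channel c may be scheduled next. */
    static int admit_unit(const channel_budget *ch, size_t nch, size_t c)
    {
        size_t i;
        for (i = 0; i < nch; ++i)
            if (ch[i].assigned < ch[i].min_bits)          /* someone is starved */
                return ch[c].assigned < ch[c].min_bits;   /* serve starved channels first */
        return 1;                                         /* all floors met */
    }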

It’s wavelet bugfix time — 3.3.4 is here!

While compressing a multi-channel file with a target bitrate and no specific target MSEs, the resulting bit distribution between the channels seemed rather odd, and comparing the results to an older version revealed that it was indeed totally bogus!
So I changed the target MSE computation in main.c to be more in line with what happens in Kompressor.app, which revealed another bug: tiny negative (i.e. relative) target MSEs passed to wv_query_scheduler() / wv_encode() were converted to 0 (instead of the smallest representable negative fixed-point number) and thus interpreted as absolute target MSEs.
Both of these are fixed in 3.3.4 (and Kompressor.app has also been recompiled with the relevant fix).
Other than those two fixes (both of which only relate to target MSE evaluation when compressing), the code is identical to (and thus fully compatible with) version 3.3.3.
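To illustrate the second bug: a straight cast from floating point to fixed point truncates toward zero, so a tiny negative (relative) target becomes 0 and is then read as an absolute target of zero. A minimal sketch of a sign-preserving conversion, assuming a 16.16 fixed-point format (the library's actual format and names may differ):

    #include <stdint.h>

    typedef int32_t fixed_16_16;   /* assumed fixed-point format, for illustration only */

    static fixed_16_16 target_mse_to_fixed(double mse)
    {
        fixed_16_16 f = (fixed_16_16)(mse * 65536.0);   /* truncates toward zero */
        if (mse < 0.0 && f == 0)
            f = -1;   /* keep the sign: smallest representable negative value */
        return f;
    }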

Kompressor.app and Wavelet Image Compression Library 3.3.3

Shortly before the year is out (and as a result of my vacation), there is some fresh software to be had… 🙂
I’ve now written a Mac OS X 10.4 application called Kompressor.app to compress, inspect and display WKO images. This release goes hand in hand with version 3.3.3 of the wavelet image compression library itself.
Because this is the first (semi-)proper Mac application I’ve written, I would welcome any form of testing or feedback people can provide, especially on the user-interface side. The application is a universal binary and thus should work on both PPC and Intel Macs.
Here’s a bit (all of it actually) of the supplied online help to get started…

Rate-Distortion Graph

I’ve invested a bit of time in getting some nice rate-distortion graphs out of my wavelet image compression library. Now that the coder is embedded, the process is fairly easy: compress once into a single file, then decompress only as many bits from that file as the desired rate allows. As such, the graph also shows how quality evolves over the course of the file.

Here is an example of one such graph, for a 708×1024 image of Mena Suvari (which unfortunately comes from a JPEG source and thus has block noise), as the Lena image (notice the name similarity? ;)) is past its prime IMO; the colour version in particular has plenty of noise in the blue channel. The image was encoded in the YCoCg colour-space, which explains the slightly higher quality of the green channel and why even the highest rate is not lossless. Compared to the RGB version, the YCoCg representation incurs a mean-square error of ~0.25 (which results in the root mean-square error of 0.5 shown).

Rate / Distortion curve for a colour image of Mena Suvari
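For reference, a common floating-point formulation of the YCoCg transform looks like the sketch below; the library's exact scaling and rounding are not shown here, and the ~0.25 round-trip MSE mentioned above presumably comes from rounding the transformed channels to integers before coding, which this float version omits. Note that Y weights green twice as heavily as red and blue, which fits the slightly higher quality of the green channel.

    /* Forward transform: Y carries most of the green energy, Co and Cg are
     * difference channels centred around 0. */
    static void rgb_to_ycocg(double r, double g, double b,
                             double *y, double *co, double *cg)
    {
        *y  = 0.25 * r + 0.5 * g + 0.25 * b;
        *co = r - b;
        *cg = g - 0.5 * (r + b);
    }

    /* Exact inverse of the forward transform above. */
    static void ycocg_to_rgb(double y, double co, double cg,
                             double *r, double *g, double *b)
    {
        double rb = y - 0.5 * cg;   /* recovers (r + b) / 2 */
        *g = y + 0.5 * cg;
        *r = rb + 0.5 * co;
        *b = rb - 0.5 * co;
    }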

In theory, the rate/distortion graph should have a negative but monotonically increasing derivative (which in plain language means something like “bigger improvements come closer to the beginning of the file”). We try to achieve this by scheduling data units (which are essentially bitplanes of blocks) by how much they reduce the error in the coded image for each bit of their size. There are two reasons why the slope is only close to, but not quite, monotonically increasing. One is that the error (distortion) is not computed exactly; instead I use an approximation based on the wavelet transform used. The other is that we deal with packets / data units / blocks of finite size: as soon as more than one channel is written to a single file, a single packet will only ever improve a single channel, which in turn means the other channels see no improvement (a zero derivative) for the size (or duration) of that packet.
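The scheduling step itself can be sketched roughly as follows: every data unit gets a benefit score (estimated error reduction per bit), units are emitted in descending score order, and the stream is cut off once the bit budget runs out. The struct layout, names and use of qsort are illustrative assumptions rather than the library's actual code.

    #include <stdlib.h>

    /* Hypothetical description of one schedulable unit (a bitplane of a block). */
    typedef struct {
        int    channel;      /* channel the unit belongs to                */
        int    block;        /* block / subband it comes from              */
        int    bitplane;     /* bitplane within that block                 */
        size_t size_bits;    /* coded size of the unit                     */
        double delta_mse;    /* estimated error reduction when it is coded */
    } data_unit;

    /* Sort by estimated error reduction per bit, highest first. */
    static int by_benefit(const void *a, const void *b)
    {
        const data_unit *u = a, *v = b;
        double bu = u->delta_mse / (double)u->size_bits;
        double bv = v->delta_mse / (double)v->size_bits;
        return (bu < bv) - (bu > bv);
    }

    /* Emit units in scheduled order until the bit budget is exhausted;
     * returns how many units made it into the file. */
    static size_t schedule(data_unit *units, size_t n, size_t budget_bits)
    {
        size_t used = 0, i;
        qsort(units, n, sizeof *units, by_benefit);
        for (i = 0; i < n && used + units[i].size_bits <= budget_bits; ++i)
            used += units[i].size_bits;
        return i;
    }

A real scheduler presumably also has to keep the bitplanes of each block in most-significant-first order, a constraint this plain sort ignores.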

These plots also make it easier to spot differences (usually either improvements or regressions) caused by changes in the code, so I’ve created graphs for all my test images to make such comparisons easier in the future.

I’ve also taken an old pre-3.1 version (thank you, darcs!) that still used the recursive quantizer selection to produce similar plots (taking much longer, as the image had to be compressed anew for each rate), and the result was pretty much a draw, in spite of the embedded version having to store a bit more sideband data (the number of bitplanes for each block in the header, and which block is coded next in the bitstream itself). All of which makes the embedded version the “better choice”, thanks to its other advantages such as simpler code and “compress once, decompress at any rate”.

Wavelet 3.2

As I took a two-day vacation before Easter, I’ve gotten some more work done on my Wavelet Image Compression Library (and not played games, as some of my colleagues were led to believe ;)).
Before this version, all the subbands of an octave (i.e. the HD, VD, and DD coefficients) were written as a single block in an interleaved manner. The changes I’ve made give each subband its own block, which increases the number of blocks and thus allows for better, more fine-grained scheduling (which increases compression performance). Unfortunately, this made the error estimation three times slower, a cost I then managed to cunningly avoid by computing the estimate for three blocks at a time.
Other changes include a more consistent behaviour for the -bpp command line switch, bug fixes to the scheduling where my accumulated fixed-point error estimates would overflow, and quite a few other things.
An experimental change was an exact error estimation, which I then removed again because the code became unreadable, it was slow as hell, and it didn’t actually help that much.
There is now a Darcs repository here (don’t try browsing there) from which you can get the current version and against which you can send me patches. You can also read my totally unfunny changelog entries if you need more information on the changes between versions.

Wavelet 3.1

Another week, another release of my wavelet image compression library. I figured out how to do complete embedding, which justifies another release. This means you can compress an image once and then get different rates by simply truncating that file! The resulting decompressed image will always have the highest possible quality for the truncated size. This sped the code up dramatically, as it no longer needs to search for quantizers; it simply writes in the determined (“scheduled”) order until the bit budget is used up.
I’ve also added the link to the sidebar. Go there for the documentation / changelog.

Wavelet 3.0

I’ve been steadily working on my wavelet image compression for the past few weeks, and in the process have improved it in many ways. These are largely not technical improvements, but rather a huge code refactoring, the creation of decent documentation, reduced memory usage and so on.
You can read the freshly pressed documentation or simply download the source.
It is a fairly simple and thus compact (an executable with both compression and decompression is 30 KB uncompressed) and relatively speedy image compression library that provides up to 16 channels per file and combines lossless and lossy compression in a single algorithm, which can even be switched from channel to channel. As it’s based on the wavelet transform, it allows for progressive decoding (meaning that if you only have the beginning of the file, you get a lower-quality version of the whole image) and can also extract smaller “thumbnails” of the original image.
For encoding it also supports various modes: one is to give a target mean-square error for each channel (similar to the JPEG quality setting), and another is to fit the best possible quality into a given number of bits.
Unfortunately, there is a catch with the new version, too (and this is the reason why the sidebar link still refers to the old version). As my primary development platform has moved from Windows to Mac OS X (and Linux), I have updated neither the Windows GUI (written in Delphi) nor the web browser plugin. I plan to offer new GUIs eventually; the current plan is to write one in C# for Windows and Linux, and a native Cocoa one for Mac OS X.
Finally, I’ve changed the license from the GPL to the zlib license, which should allow use in closed-source applications. If you decide to use it, or even decide not to use it, feedback and suggestions would be much appreciated.

Hilbert Curve

I’ve been reading a bit about spacefilling curves for my wavelet image compression (take a look here and here).
There is a very nice way to convert from the Hilbert-derived key to multi-dimensional coordinates, described by John J. Bartholdi, III and Paul Goldsman in “Vertex-Labeling Algorithms for the Hilbert Spacefilling Curve”.
They describe a recursive procedure, but in the particular case mentioned above this can easily be unrolled. It also works very well with fixed-point arithmetic. The following source code can be further optimized by storing each point’s x and y coordinate in a single unsigned int, as everything except for the final averaging is easily replicated across the vector by applying the operation to the combined integer.
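A minimal sketch of that unrolled vertex-labeling conversion in two dimensions, using 16.16 fixed-point coordinates over the unit square (the function name and layout are assumptions, not the original listing):

    #include <stdint.h>

    /* Map a Hilbert key of `levels` base-4 digits (most significant digit
     * first) to an (x, y) point.  Corner i holds the i-th corner the curve
     * visits in the current square; each digit picks a quadrant, the corners
     * are re-labelled for that quadrant, and the final point is the centre
     * of the remaining square. */
    static void hilbert_key_to_xy(uint32_t key, int levels,
                                  uint32_t *x, uint32_t *y)
    {
        uint32_t cx[4] = { 0, 0, 1u << 16, 1u << 16 };
        uint32_t cy[4] = { 0, 1u << 16, 1u << 16, 0 };
        int i;

        for (i = levels - 1; i >= 0; --i) {
            unsigned d = (key >> (2 * i)) & 3u;   /* next quadrant digit */
            uint32_t nx[4], ny[4];
            unsigned j;

            for (j = 0; j < 4; ++j) {
                /* Which parent corner is averaged with corner d to form child
                 * corner j; quadrants 0 and 3 are the reflected cases. */
                unsigned k = (d == 0) ? ((4u - j) & 3u)
                           : (d == 3) ? ((6u - j) & 3u)
                           : j;
                nx[j] = (cx[d] + cx[k]) >> 1;
                ny[j] = (cy[d] + cy[k]) >> 1;
            }
            for (j = 0; j < 4; ++j) {
                cx[j] = nx[j];
                cy[j] = ny[j];
            }
        }
        /* Average the four corners to get the centre of the final square. */
        *x = (cx[0] + cx[1] + cx[2] + cx[3]) >> 2;
        *y = (cy[0] + cy[1] + cy[2] + cy[3]) >> 2;
    }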

Better Wavelets

Another update to the wavelet code (now standing at 2.7). This one is INCOMPATIBLE with older versions and will crash them. The new version has lots of failsafes and should be “immune” to new versions and corrupt data (as long as the header is intact). Get the complete package (with source) or visit the demo page. Warning! Upgrade the plugin first if you have an old version installed!

Complete changelog:

  • removed MMX optimisations for wavelet transforms and made code even faster
  • removed unused MaxBits parameter from wv_init_channel
  • changed bitstream format (order in which bits are written) and removed the writing of unnecessary zeros at the end of each block
  • changed the YUV transform slightly (Cr / Cb are now centred around 0, not 128), as we’re writing the sign in any case
  • changed colorspace conversion to be in-place
  • fixed bug in raw_load if file was too small
  • misc optimisations to bit-files
  • added wv_ prefix to log2i, mse_to_psnr, psnr_to_mse
  • changed the # of iterations for the multi-channel size selector (now bails out earlier)
  • new function to return the header of an image (wv_read_header), changed layout of t_wv_dchannels
  • changed decompression to work for (hopefully) all invalid data w/o overwriting anything in memory
  • wv_init_decode_channels now accepts an extra reduction parameter (returns a scaled-down version of the image) (see the -dr parameter in wako.exe)