Another week, another release of my wavelet image compression library. I figured out how to do complete embedding, which justifies another release. This means you can compress an image once and then get different rates by simply truncating that file! The resulting decompressed image will always have the highest possible quality for the bits received. This also sped the code up dramatically: the encoder no longer needs to search for quantizers, it simply writes bits in the predetermined (“scheduled”) order until the bit budget is used up.
I’ve also added the link to the sidebar. Go there for the documentation / changelog.
I’ve been steadily working on my wavelet image compression for the past few weeks, and in the process have improved it in many ways. These are largely not technical improvements, but rather a huge code refactoring, the creation of decent documentation, reduced memory usage, and so on.
You can read the freshly pressed documentation or simply download the source.
It is a fairly simple and thus compact (the executable with both compression and decompression is 30 KB uncompressed) and reasonably fast image compression library. It supports up to 16 channels per file and combines lossless and lossy compression in a single algorithm, which can even be switched from channel to channel. Because it is based on the wavelet transform, it allows progressive decoding (meaning that if you only have the beginning of the file, you still get a lower-quality version of the whole image) and can also extract smaller “thumbnails” of the original image.
For encoding it also supports several modes: one takes a target mean-square error for each channel (similar to the JPEG quality setting), another fits the best-quality image into a given number of bits.
Unfortunately, there is a catch with the new version, too (and this is the reason the sidebar link still points to the old version). As my primary development platform has moved from Windows to Mac OS (and Linux), I have not updated the Windows GUI (written in Delphi) or the web-browser plugin. I intend to offer new GUIs eventually; the current plan is to write one in C# for Windows and Linux, and a native Cocoa one for Mac OS.
Finally, I’ve changed the license from the GPL to the zlib-license, which should allow use in closed source applications. If you decide to use it, or even decide not to use it, feedback and suggestions would be much appreciated.
I’ve been reading a bit about spacefilling curves for my wavelet image compression (take a look here and here).
There is a very nice way to convert from a Hilbert-curve key to multi-dimensional coordinates, described by John J. Bartholdi, III and Paul Goldsman in “Vertex-Labeling Algorithms for the Hilbert Spacefilling Curve”.
They describe a recursive procedure, but in the particular two-dimensional case mentioned above it can easily be unrolled. It also works very well with fixed-point arithmetic. The following source code can be further optimized by storing each point’s x and y coordinates in a single unsigned int, as everything except the final averaging is easily replicated across the vector by applying the operation to the combined integer.
Another update to the wavelet-code (now standing at 2.7). This one is INCOMPATIBLE with older versions and will crash them. The new version has lots of failsafes and should be “immune” to newer versions and corrupt data (as long as the header is intact). Get the complete package (with source) or visit the demo-page. Warning: upgrade the plugin first if you have an old version installed!
- removed MMX optimisations for the wavelet transforms and made the code even faster
- removed unused MaxBits parameter from wv_init_channel
- changed bitstream format (the order in which bits are written) and stopped writing unnecessary zeros at the end of each block
- changed the YUV transform slightly (Cr/Cb are now centred around 0, not 128), as we’re writing the sign in any case
- changed colorspace conversion to be in-place
- fixed bug in raw_load if file was too small
- misc optimisations to bit-files
- added wv_ prefix to log2i, mse_to_psnr, psnr_to_mse
- changed the # of iterations for the multi-channel size selector (now bails out earlier)
- new function to return the header of an image (wv_read_header), changed layout of t_wv_dchannels
- changed decompression to work for (hopefully) all invalid data w/o overwriting anything in memory
- wv_init_decode_channels now accepts an extra reduction parameter (return a scaled down version of image) (see -dr parameter in wako.exe)
Been busy optimising the wavelet-code (but not really getting anywhere… hand-written assembly for the bit-encoding gains 20-30% at most). Even busier playing “Ikaruga” and “Animal Crossing” (US import, as it probably won’t come out in Europe)…
Updated the wavelet-code to version 2.6. What’s new? Speed… 😉
I essentially added a new data-independent error estimator (“Approximate” in the GUI), optimised the wavelet transforms (MMX code), and lots of other bits and pieces… Complete package. Demo page.
I’ve been thinking about relicensing the wavelet-code under the zlib-license… Anyone interested enough in that to be worth doing?