To actually compress an image, we need to exploit redundancy in the data. A very simple way to do this, without a complicated prediction mechanism, is run-length coding. In fact, we only run-length code runs of consecutive zeros. We still want to take advantage of the original 2D nature of the data, which implies that coefficients close to each other are likely to be similar, but without the complication of a 2D context model such as quad-trees or zero-trees.
Another reason for this reordering is that it preserves the progressive nature of wavelet coding: we first write the most important coefficients and then the less and less important ones. These groups of decreasing importance are the levels of the decomposition.
The coefficients are then "reordered" into a one-dimensional array by octave band, using a Hilbert space-filling curve within each octave: first comes a block of HD (horizontal detail) coefficients, followed by a block of VD (vertical detail) coefficients, and finally a block of DD (diagonal detail) coefficients.
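The Hilbert scan within a subband can be sketched with the classic iterative distance-to-coordinate conversion. This is a generic textbook version, not necessarily the library's implementation: it maps the d-th step along the curve over an n x n block (n a power of two) to (x, y) coordinates, so walking d from 0 to n*n-1 visits every coefficient in Hilbert order.

```cpp
#include <cassert>
#include <utility>

// Rotate/flip a quadrant so the curve's orientation matches the standard layout.
static void hilbert_rot(int n, int& x, int& y, int rx, int ry) {
    if (ry == 0) {
        if (rx == 1) {
            x = n - 1 - x;
            y = n - 1 - y;
        }
        std::swap(x, y);
    }
}

// Map distance d along the Hilbert curve of an n x n grid to (x, y).
std::pair<int, int> hilbert_d2xy(int n, int d) {
    int x = 0, y = 0, t = d;
    for (int s = 1; s < n; s *= 2) {
        int rx = 1 & (t / 2);
        int ry = 1 & (t ^ rx);
        hilbert_rot(s, x, y, rx, ry);
        x += s * rx;
        y += s * ry;
        t /= 4;
    }
    return {x, y};
}
```

Reading each subband with `hilbert_d2xy` keeps spatially adjacent coefficients adjacent in the 1D stream, so zero runs from smooth image regions stay contiguous for the run-length coder.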
Now we are able to write the coefficients of each block to disk; see Coding Process.
- John J. Bartholdi, III and Paul Goldsman, "Vertex-Labeling Algorithms for the Hilbert Spacefilling Curve", Software: Practice and Experience, vol. 31, no. 5, 2001
- Volker Markl and Frank Ramsak, "Universalschlüssel - Datenbankindexe in mehreren Dimensionen" (Universal keys: database indexes in multiple dimensions), c't - Magazin für Computertechnik, 01/2001
- Henrique Malvar, "Progressive Wavelet Coding of Images", Proceedings of the IEEE Data Compression Conference, March 1999