We have a one-dimensional array of data (usually quantized wavelet coefficients) that we want to store efficiently, but also in a progressive manner. This is achieved by coding the bit-planes from the top down: e.g., first we code bit 8 of all coefficients, then bit 7 of all coefficients, and so on until all the information has been coded.
Thanks to the wavelet transform, most coefficients should have been reduced to small values (or to 0 in the ideal case), so we run-length code only runs of consecutive 0 bits. Each 0 bit in the coded stream represents 2^k zeros, whereas a 1 in the coded stream is always followed by k bits (denoting some value n) and a sign bit. This is interpreted as n 0 bits followed by a single 1 bit, with the coefficient that 1 bit belongs to taking that sign.
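For a single bit-plane with a fixed k, the scheme might look like the following sketch (Python; the names `encode_plane`/`decode_plane` and the handling of a trailing partial run are illustrative assumptions, not part of the original description):

```python
def encode_plane(bits, signs, k):
    """Run-length encode one bit-plane with a fixed parameter k.
    bits: the plane bits (0/1); signs: {index: sign bit} for each 1."""
    out, run = [], 0
    for i, b in enumerate(bits):
        if b == 0:
            run += 1
            if run == 1 << k:        # a full run of 2^k zeros -> one 0 bit
                out.append(0)
                run = 0
        else:
            out.append(1)            # a 1: run of n < 2^k zeros, then a 1 bit
            out.extend((run >> j) & 1 for j in reversed(range(k)))  # n in k bits
            out.append(signs[i])     # sign of the newly significant coefficient
            run = 0
    if run:                          # flush a trailing partial run (assumption)
        out.append(0)
    return out

def decode_plane(code, k, n):
    """Inverse of encode_plane; n is the number of plane bits expected."""
    bits, signs, it = [], {}, iter(code)
    while len(bits) < n:
        c = next(it)
        if c == 0:
            bits.extend([0] * (1 << k))   # one 0 bit stands for 2^k zeros
        else:
            run = 0
            for _ in range(k):            # read n from the next k bits
                run = (run << 1) | next(it)
            bits.extend([0] * run)
            signs[len(bits)] = next(it)   # sign bit for the coefficient
            bits.append(1)
    return bits[:n], signs
```

The decoder truncates to n plane bits, so a flushed partial run at the end of the plane causes no ambiguity.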
Now, two additional things are done:
- The value of k does not have to be fixed. It can also be adaptive, as long as the encoder and the decoder always make the same choice of k. We increase k after emitting a 0 bit (a completed run of 2^k zeros suggests longer runs are likely) and decrease it after emitting a 1 bit.
- Secondly, we make a distinction between significant bits and so-called refinement bits. Significant bits are those of coefficients whose most significant 1 bit has not yet been sent / coded; refinement bits, consequently, are those for which a more significant 1 bit has already been coded. The significant bits have a high probability of being 0, whereas the refinement bits are 0 with a probability of only about 0.5. We therefore skip the refinement bits when run-length coding zero-runs of the significant bits (the decoder knows which coefficients this applies to) and embed the skipped refinement bits into the bit-stream without any coding, after writing the next 1 bit.
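Combining both refinements, a minimal end-to-end sketch (again Python; the function names, the clamping bounds for k, and the end-of-plane flushing of partial runs and leftover refinement bits are assumptions the text leaves open):

```python
def encode_progressive(coeffs, top_bit, k0=2, kmax=7):
    """Code bit-planes top_bit..0: adaptive k, zero-runs over significant
    bits only, refinement bits embedded uncoded after each 1 bit."""
    out = []
    significant = [False] * len(coeffs)
    k = k0
    for plane in range(top_bit, -1, -1):
        run, refine = 0, []
        for i, c in enumerate(coeffs):
            bit = (abs(c) >> plane) & 1
            if significant[i]:
                refine.append(bit)               # deferred, sent uncoded
                continue
            if bit == 0:
                run += 1
                if run == 1 << k:
                    out.append(0)                # one 0 codes 2^k zeros
                    run = 0
                    k = min(k + 1, kmax)         # adapt: expect longer runs
            else:
                out.append(1)
                out.extend((run >> j) & 1 for j in reversed(range(k)))
                out.append(1 if c < 0 else 0)    # sign bit
                out.extend(refine)               # embed deferred refinement
                refine.clear()
                significant[i] = True
                run = 0
                k = max(k - 1, 0)                # adapt: expect shorter runs
        if run:                                  # flush a partial final run
            out.append(0)
            k = min(k + 1, kmax)
        out.extend(refine)                       # flush leftover refinement
    return out

def decode_progressive(code, n, top_bit, k0=2, kmax=7):
    """Invert encode_progressive for n coefficients."""
    coeffs, neg = [0] * n, [False] * n
    significant = [False] * n
    it, k = iter(code), k0
    for plane in range(top_bit, -1, -1):
        pending, zeros_left, i = [], 0, 0
        while i < n:
            if significant[i]:
                pending.append(i)                # awaits a raw refinement bit
                i += 1
            elif zeros_left:
                zeros_left -= 1                  # implied 0 at this position
                i += 1
            else:
                c = next(it)
                if c == 0:
                    zeros_left = 1 << k          # 2^k zeros follow
                    k = min(k + 1, kmax)
                else:
                    run = 0
                    for _ in range(k):
                        run = (run << 1) | next(it)
                    while run > 0 or significant[i]:  # skip run zeros,
                        if significant[i]:            # buffering refinement
                            pending.append(i)         # positions passed over
                        else:
                            run -= 1
                        i += 1
                    coeffs[i] |= 1 << plane      # the coefficient's first 1
                    neg[i] = next(it) == 1
                    significant[i] = True
                    i += 1
                    for p in pending:            # read embedded refinement
                        coeffs[p] |= (1 << plane) if next(it) else 0
                    pending.clear()
                    k = max(k - 1, 0)
        for p in pending:                        # refinement after plane tail
            coeffs[p] |= (1 << plane) if next(it) else 0
    return [-c if s else c for c, s in zip(coeffs, neg)]
```

Note that the decoder mirrors every adaptation of k the encoder makes, which is exactly why the adaptive parameter needs no side information.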
Generated on Tue Jul 10 20:44:34 2007 for wavelet