No, I didn't have a need; it sounds like a weird use of the word given its most common form. I didn't say wrong, just weird. Here are some nice explanations, one of which might fit. LOL!
Dithering (another)
The process of shifting the 6-MHz satellite-tv signal up and down the 36-MHz satellite transponder spectrum at a rate of 30 times per second (30 Hertz). The satellite signal is "dithered" to spread the transmission energy out over a band of frequencies far wider than a terrestrial common carrier microwave circuit operates within, thereby minimizing the potential interference that any one single terrestrial microwave transmitter could possibly cause to the satellite transmission.
Explanations
The simplest and best-known form of quantization is referred to as scalar quantization, since it operates on scalar (as opposed to multi-dimensional vector) input data. In general, a scalar quantization operator can be represented as
Q(x) = g(\lfloor f(x) \rfloor)
where
* x is a real number,
* \lfloor x \rfloor is the floor function, yielding the integer i = \lfloor f(x) \rfloor
* f(x) and g(i) are arbitrary real-valued functions.
The integer value i is the representation that is typically stored or transmitted, and then the final interpretation is constructed using g(i) when the data is later interpreted. The integer value i is sometimes referred to as the quantization index.
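As a concrete illustration of the general form Q(x) = g(\lfloor f(x) \rfloor), here is a minimal Python sketch. The particular choices of f and g (a simple scale and midpoint reconstruction, with a step size of 0.25) are my own illustrative assumptions, not fixed by the definition:

```python
import math

STEP = 0.25  # illustrative step size, an arbitrary choice

def f(x):
    # Map the real input onto an index scale.
    return x / STEP

def g(i):
    # Reconstruct a representative value (the midpoint of cell i).
    return (i + 0.5) * STEP

def quantize(x):
    i = math.floor(f(x))  # the quantization index that would be stored/transmitted
    return i, g(i)        # index and its reconstructed value

i, xq = quantize(0.6)  # 0.6 / 0.25 = 2.4, so i = 2 and xq = (2 + 0.5) * 0.25 = 0.625
```

Only the integer index i needs to be stored; g is applied when the data is read back.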
In computer audio and most other applications, a method known as uniform quantization is the most common. There are two common variations of uniform quantization, called mid-rise and mid-tread uniform quantizers.
If x is a real-valued number between -1 and 1, a mid-rise uniform quantization operator that uses M bits of precision to represent each quantization index can be expressed as
Q(x) = \frac{\left\lfloor 2^{M-1}x \right\rfloor+0.5}{2^{M-1}}.
In this case the f(x) and g(i) operators are simply scale factors (one multiplier being the inverse of the other), along with an offset in the g(i) function that places the representation value in the middle of the input region for each quantization index. The value 2^{-(M-1)} is often referred to as the quantization step size. Using this quantization law, and assuming that the quantization noise is approximately uniformly distributed over the quantization step size (an assumption typically accurate for rapidly varying x or high M) and that the input signal x to be quantized is approximately uniformly distributed over the entire interval from -1 to 1, the signal-to-noise ratio (SNR) of the quantization can be computed as
\frac{S}{N_q} \approx 20 \log_{10}(2^M) = 6.0206 M \ \operatorname{dB}.
From this equation, it is often said that the SNR is approximately 6 dB per bit.
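The mid-rise formula and the 6 dB-per-bit rule can be checked empirically. The sketch below implements the quantizer exactly as written above and measures the SNR on a uniformly distributed test signal (the test signal and sample count are my own choices for illustration):

```python
import math
import random

def midrise(x, M):
    # Mid-rise uniform quantizer for x in [-1, 1) with M bits of precision,
    # exactly as in the formula: Q(x) = (floor(2^(M-1) x) + 0.5) / 2^(M-1).
    return (math.floor(2**(M - 1) * x) + 0.5) / 2**(M - 1)

# Empirical SNR: quantize a signal uniformly distributed over [-1, 1)
# and compare against the 6.0206 dB-per-bit rule.
random.seed(0)
M = 8
xs = [random.uniform(-1, 1) for _ in range(100_000)]
signal_power = sum(x * x for x in xs)
noise_power = sum((x - midrise(x, M)) ** 2 for x in xs)
snr_db = 10 * math.log10(signal_power / noise_power)
# snr_db should come out close to 6.0206 * 8 = 48.16 dB
```

The measured value lands near the predicted 48 dB because both assumptions of the derivation (uniform input, uniform quantization noise) hold for this test signal.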
For mid-tread uniform quantization, the offset of 0.5 would be added within the floor function instead of outside of it.
Sometimes, mid-rise quantization is used without adding the offset of 0.5. This reduces the signal-to-noise ratio by approximately 6.02 dB, but may be acceptable for the sake of simplicity when the step size is small.
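The two variants just described can be sketched in Python (the function names are my own; both reuse the scale factors from the mid-rise formula above):

```python
import math

def midtread(x, M):
    # Mid-tread: the 0.5 offset moves inside the floor function (i.e., the
    # index is obtained by rounding), so an input of exactly 0 is
    # reproduced exactly.
    return math.floor(2**(M - 1) * x + 0.5) / 2**(M - 1)

def midrise_no_offset(x, M):
    # Mid-rise without the output offset of 0.5: simpler, but each cell is
    # represented by its lower edge rather than its midpoint, which is what
    # costs roughly 6.02 dB of SNR.
    return math.floor(2**(M - 1) * x) / 2**(M - 1)
```

For example, midtread(0.0, 8) returns 0.0, whereas a mid-rise quantizer maps 0 to a half-step above or below zero.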
In digital telephony, two popular quantization schemes are 'A-law' (dominant in Europe) and 'μ-law' (dominant in North America and Japan). These schemes map sampled analog values to an 8-bit scale that is nearly linear for small values and then increases logarithmically as amplitude grows. Because the human ear's perception of loudness is roughly logarithmic, this provides a higher signal-to-noise ratio over the range of audible sound intensities for a given number of bits.
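A sketch of the continuous μ-law companding curve (with the standard μ = 255) illustrates the idea. Note that this shows only the compression/expansion mapping; the function names are my own, and the actual G.711 codec uses a segmented piecewise-linear approximation of this curve followed by uniform 8-bit quantization:

```python
import math

MU = 255  # standard mu value for North American/Japanese telephony

def mu_compress(x):
    # Map x in [-1, 1] through the mu-law curve: nearly linear near zero,
    # logarithmic for larger amplitudes.
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_expand(y):
    # Inverse mapping, recovering the original amplitude.
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)
```

Uniformly quantizing the compressed value y spends more of the available codes on small amplitudes, which is where the ear is most sensitive.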
Quantization and data compression
Quantization plays a major part in lossy data compression. In many cases, quantization can be viewed as the fundamental element that distinguishes lossy data compression from lossless data compression, and the use of quantization is nearly always motivated by the need to reduce the amount of data needed to represent a signal. In some compression schemes, like MP3 or Vorbis, compression is also achieved by selectively discarding some data, an action that can be analyzed as a quantization process (e.g., a vector quantization process) or can be considered a different kind of lossy process.
One example of a lossy compression scheme that uses quantization is JPEG image compression. During JPEG encoding, the data representing an image (typically 8 bits for each of three color components per pixel) is processed using a discrete cosine transform and is then quantized and entropy coded. By reducing the precision of the transformed values using quantization, the number of bits needed to represent the image can be reduced substantially. For example, images can often be represented with acceptable quality using JPEG at less than 3 bits per pixel (as opposed to the typical 24 bits per pixel needed prior to JPEG compression). Even the original representation using 24 bits per pixel requires quantization for its PCM sampling structure.
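The quantization step in JPEG-style coding can be sketched as dividing each DCT coefficient by a table entry and rounding to the nearest integer. The 2x2 coefficient block and step-size table below are made-up illustrative values, not an actual JPEG quantization matrix (real JPEG uses 8x8 blocks):

```python
# Hypothetical DCT coefficients for one block, and hypothetical step sizes.
coeffs = [[-415.4, 12.1], [8.7, -2.3]]
qtable = [[16, 11], [12, 14]]

# Quantize: divide by the step size and round to the nearest integer.
# The resulting small integers are what the entropy coder compresses.
quantized = [[round(c / q) for c, q in zip(crow, qrow)]
             for crow, qrow in zip(coeffs, qtable)]

# Dequantize on decode: multiply the index back by the step size.
# The difference from the original coefficients is the (irreversible) loss.
dequantized = [[i * q for i, q in zip(irow, qrow)]
               for irow, qrow in zip(quantized, qtable)]
```

Larger step sizes for the less perceptually important coefficients drive more of them to zero, which is where most of the bit savings come from.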
In modern compression technology, the entropy of the output of a quantizer matters more than the number of possible values of its output (the number of values being 2^M in the above example).
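To see why entropy is the relevant quantity, consider the empirical entropy of a quantizer's index stream. The skewed stream below is an illustrative example of my own: it uses three possible index values, yet its entropy is well under log2(3) ≈ 1.585 bits, so an entropy coder can spend far fewer bits per sample than the index count alone would suggest:

```python
import math
from collections import Counter

def entropy_bits(symbols):
    # Shannon entropy of the empirical symbol distribution, in bits/symbol.
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A quantizer output heavily skewed toward one index (illustrative data).
skewed = [0] * 90 + [1] * 5 + [2] * 5
h = entropy_bits(skewed)  # roughly 0.57 bits/symbol, well below log2(3)
```

By contrast, a stream in which all indices are equally likely reaches the full log2 of the alphabet size, which is the worst case for entropy coding.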
Relation to quantization in nature
At the most fundamental level, all physical quantities are quantized. This is a result of quantum mechanics (see Quantization (physics)). Signals may be treated as continuous for mathematical simplicity by considering the small quantizations as negligible.
In any practical application, this inherent quantization is irrelevant, for two reasons. First, it is overshadowed by signal noise, the intrusion of extraneous phenomena present in the system upon the signal of interest. Second, and relevant only in measurement applications, it is dwarfed by the inaccuracy of instruments.