Chroma Subsampling
This Wiki article (http://en.wikipedia.org/wiki/4:2:2) explains what chroma subsampling techniques are, including 4:2:0 and 4:2:2.
"
Chroma subsampling is the practice of encoding images by implementing less resolution for chroma information than for luma information. In video,
Luminance (luma) represents the brightness in an image (the "black and white" or achromatic portion of the image).
Chrominance (chroma for short), is the signal used in video systems to convey the color information of the picture, separately from the accompanying luma signal. Chrominance is usually represented as two color-difference components: U = B'–Y' (blue – luma) and V = R'–Y' (red – luma). Each of these difference components may have scale factors and offsets applied to them, as specified by the applicable video standard.
In composite video signals, the U and V signals modulate a color carrier signal, and the result is referred to as the chrominance signal; the phase and amplitude of this modulated chrominance signal correspond approximately to the hue and saturation of the color. In digital video and still-image color spaces such as Y'CbCr, the luma and chrominance components are digital sample values. Separating RGB color signals into luma and chrominance allows the bandwidth of each to be determined separately. Typically, the chrominance bandwidth is reduced in analog composite video by reducing the bandwidth of a modulated color subcarrier, and in digital systems by chroma subsampling."
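To make the quoted definition concrete, here is a minimal Python/NumPy sketch of 4:2:0 subsampling. It assumes BT.601 luma coefficients and a full-range float RGB frame; the function names and the 720x1280 dummy frame are illustrative, not taken from any particular encoder.

[CODE]
import numpy as np

# Sketch: form Y', Cb, Cr planes from RGB (BT.601 full-range coefficients
# assumed), then apply 4:2:0 subsampling by averaging each 2x2 chroma block.

def rgb_to_ycbcr(rgb):
    """rgb: float array of shape (H, W, 3) with values in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b   # luma (Y')
    cb = (b - y) * 0.564 + 0.5               # scaled B' - Y'
    cr = (r - y) * 0.713 + 0.5               # scaled R' - Y'
    return y, cb, cr

def subsample_420(chroma):
    """Average each 2x2 block: half the resolution horizontally and vertically."""
    h, w = chroma.shape
    return chroma[:h//2*2, :w//2*2].reshape(h//2, 2, w//2, 2).mean(axis=(1, 3))

rgb = np.random.rand(720, 1280, 3)            # dummy frame, illustrative only
y, cb, cr = rgb_to_ycbcr(rgb)
cb420, cr420 = subsample_420(cb), subsample_420(cr)

full = y.size + cb.size + cr.size             # 4:4:4 sample count
sub  = y.size + cb420.size + cr420.size       # 4:2:0 sample count
print(f"4:2:0 keeps {sub / full:.0%} of the samples")   # -> 50%
[/CODE]

For 4:2:2 the chroma planes would instead be halved only horizontally, so about two thirds of the 4:4:4 samples are kept.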
skysurfer
What were the channels with the lowest and highest SR (symbol rate) you mentioned?
I agree, merely lowering the symbol rate wouldn't be beneficial. But by using more advanced modulation, such as 8PSK instead of QPSK, and by decreasing the FEC overhead (the amount of error-correction data) once link quality (signal strength) is improved, it's possible to increase the signal bit rate while decreasing the symbol rate, since higher-order modulation uses more constellation points and so transmits more bits with each symbol. However, for highly dynamic scenes this technique alone may not be enough; more efficient codecs and video compression algorithms, such as H.264 variants, are also needed to reduce the number of bits required to deliver the same video quality.
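As a rough illustration of that trade-off, the sketch below estimates the useful bit rate from symbol rate, bits per symbol and FEC code rate. The symbol rates and FEC values are made-up examples, and DVB-S/S2 framing overhead is ignored.

[CODE]
# Sketch of the symbol-rate / bit-rate relationship described above.
# Parameters are illustrative, not from any specific transponder.

def bitrate_mbps(symbol_rate_msym, bits_per_symbol, fec_rate):
    """Approximate useful bit rate = symbol rate x bits/symbol x FEC code rate."""
    return symbol_rate_msym * bits_per_symbol * fec_rate

qpsk = bitrate_mbps(27.5, 2, 3/4)   # QPSK: 2 bits/symbol, FEC 3/4
psk8 = bitrate_mbps(22.0, 3, 5/6)   # 8PSK: 3 bits/symbol, FEC 5/6

print(f"QPSK 27.5 Msym/s, FEC 3/4 -> {qpsk:.1f} Mbit/s")   # ~41.2 Mbit/s
print(f"8PSK 22.0 Msym/s, FEC 5/6 -> {psk8:.1f} Mbit/s")   # ~55.0 Mbit/s
[/CODE]

Even at a lower symbol rate, the 8PSK carrier with a weaker FEC delivers a higher useful bit rate in this example, which is the point made above.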
So, I assume it's a combination of all known techniques that allows the required symbol rate to be decreased without affecting picture quality. However, such a signal may require newer receivers to decode it, ones that support the newer modulations and incorporate the newer codecs, unless additional codecs can be added to the receiver, as in Linux STBs.