I don't think I need to get into comparing tons of different screenshots under different conditions, so I decided to do a quick Dish vs. Itself comparison to show that more bandwidth does not automatically mean better quality.
Note that since my first attempt at doing SD comparisons in another thread, I’ve learned a bit about what Dish is doing to the video they receive before re-encoding it, so I hope my comments on the images here are a bit more meaningful and accurate. The images here have not been rescaled, so they're 544x480.
This first image is from a Starz channel on 119W. This is an I-frame using 20,976 bytes. It most noticeably suffers from softening and significant horizontal edge enhancement.
This second image is from an Encore channel on 110W. This is an I-frame using 17,528 bytes. It most noticeably suffers from softening.
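For anyone who wants to check numbers like these on their own recordings, here's a rough Python sketch of how I-frame sizes can be pulled from a captured transport stream with ffprobe. The filename is a placeholder and this is just one way to do it, not necessarily how the figures above were measured:

```python
# Minimal sketch: list I-frame sizes from a captured MPEG-2 transport stream.
# Assumes ffprobe is installed; "capture.ts" is a placeholder filename.
import json
import subprocess

def iframe_sizes(path):
    """Return the packet size (in bytes) of each I-frame in the first video stream."""
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-select_streams", "v:0",
         "-show_frames", "-print_format", "json", path],
        capture_output=True, text=True, check=True,
    ).stdout
    frames = json.loads(out).get("frames", [])
    return [int(f["pkt_size"]) for f in frames
            if f.get("pict_type") == "I" and "pkt_size" in f]

if __name__ == "__main__":
    for i, size in enumerate(iframe_sizes("capture.ts")):
        print(f"I-frame {i}: {size} bytes")
```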
The Starz image uses more bits, and its macroblocks generally use quantizer values 2-4 lower than those in the Encore image. From a technical standpoint in terms of the MPEG-2 standard, which is largely what kstuart's thinking seems to be based on, its quality should be universally superior to the Encore image. At a glance, the Starz image may look sharper and possibly even more detailed. On closer inspection, that impression is almost entirely due to the edge enhancement (EE) artifacts. The Starz image is actually loaded with noise and out-of-proportion edges. The EE used by Dish has the visible effects of expanding horizontal edges, merging fine lines, and introducing noise where small points of contrast are misinterpreted as edges.
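To make the effect a bit more concrete, here's a rough Python simulation of this kind of artifact: a plain unsharp mask applied across scan lines, which puts light and dark halos above and below horizontal edges and thickens fine lines. It's only a sketch of the general effect, not Dish's actual filter, and the filenames are made up:

```python
# Crude stand-in for the kind of edge enhancement described above (Dish's actual
# filter is unknown): a 1-D unsharp mask applied across scan lines, which adds
# halos around horizontal edges and thickens fine horizontal lines.
import numpy as np
from PIL import Image

def enhance_horizontal_edges(img, amount=1.5, taps=5):
    """Sharpen a grayscale image in the vertical direction only (rough sketch)."""
    y = np.asarray(img.convert("L"), dtype=np.float32)
    kernel = np.ones(taps, dtype=np.float32) / taps           # 1-D box blur
    blurred = np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="same"), 0, y)
    enhanced = y + amount * (y - blurred)                      # unsharp mask
    return Image.fromarray(np.clip(enhanced, 0, 255).astype(np.uint8))

# Placeholder filenames, not the actual screenshots from this post.
enhance_horizontal_edges(Image.open("encore_frame.png")).save("simulated_ee.png")
```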
One example from the actual screenshots: the frilly things towards the bottom of the shirt have artificially thick light and dark outlines in the Starz image, whereas in the Encore image they are presented more naturally. If you're wondering how exaggerated the thickness can get, look at the top of the Starz image, where the entire top edge is glowing.
Another example is that the woman's face, hair, and surrounding areas are full of unnatural noise in the Starz image, whereas in the Encore image these areas are cleaner, more natural, and hold an equivalent level of detail once you see past the artificial changes made by the EE. For help seeing past those changes, look at the next image:
This third image is what the Starz channel looks like after a bit of post-processing to reduce (but not eliminate) the horizontal edge enhancement. It makes clearer which "details" are actually retained from the source and which are just EE-related gunk. It now looks much more similar to the Encore image, as it should have in the first place. If it hadn't been for the EE applied to the Starz channel, Dish probably could have used those extra bits to re-encode the source with a bit less softening than was applied to the Encore channel, while still keeping the minimum amount of filtering needed to prevent visible compression artifacts. That would have allowed the Starz channel to have a picture that is actually slightly sharper and slightly more detailed than the Encore channel, instead of a soft image full of artificial noise and unnaturally thickened dark and light lines around horizontal edges in a poor attempt to imitate details.
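As a rough illustration of the kind of processing involved, even a mild low-pass applied across scan lines, like the Python sketch below, will damp the halos around horizontal edges. This isn't necessarily the exact filter used on the third image, just one simple way to approximate the effect, and the filenames are placeholders:

```python
# Not necessarily the exact post-processing used for the third image; just a
# minimal sketch of one way to damp halos around horizontal edges: a mild
# [1, 2, 1] / 4 low-pass applied across scan lines.
import numpy as np
from PIL import Image

def soften_horizontal_ee(img):
    """Blend each pixel with the lines above and below to tame vertical halos."""
    y = np.asarray(img.convert("L"), dtype=np.float32)
    padded = np.pad(y, ((1, 1), (0, 0)), mode="edge")
    filtered = (padded[:-2] + 2.0 * padded[1:-1] + padded[2:]) / 4.0
    return Image.fromarray(np.clip(filtered, 0, 255).astype(np.uint8))

# Placeholder filename; point it at your own EE-damaged capture to experiment.
soften_horizontal_ee(Image.open("starz_frame.png")).save("starz_softened.png")
```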
The set of images compared in this other post includes an EE-filtered image from HBO on Dish with higher-quality sources for comparison. Those should help to make clear how much damage Dish's questionable encoding and inappropriate filtering can do to picture quality if the examples given here don't seem entirely convincing.
If the resolution-reducing business that started this thread is really all about maximizing effective use of bandwidth, why would bandwidth be wasted in such a careless and off-putting manner on the SD channels with heavy EE filtering applied to them? SD bitrates on Dish are generally higher for channels that have far more filtering applied to the source material. This can be verified easily, so I’m not going to bother proving it here. Isn’t the point of heavily filtering video prior to recompression supposed to be to maximize compressibility, thus reducing the bitrate while retaining a reasonable representation of the source material? Why then would the bitrates of the less-filtered channels be lower, making those channels more susceptible to compression artifacts than if the allocated bitrates were reversed? It all seems to defy some of the most basic principles of proper video encoding.
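If you do want to verify it yourself, one rough way is to tally per-PID bitrates from a timed transport-stream capture. The Python sketch below assumes a plain 188-byte-packet recording that starts on a packet boundary; the filename and capture length are placeholders:

```python
# Quick-and-dirty check of the "more filtering gets more bits" claim: count
# 188-byte transport packets per PID in a timed capture and convert the totals
# to approximate bitrates. Assumes a plain, packet-aligned .ts recording.
from collections import Counter

PACKET_SIZE = 188  # MPEG transport stream packet length in bytes

def pid_bitrates(path, capture_seconds):
    """Return {pid: approximate bitrate in bits/s} for a raw .ts capture."""
    counts = Counter()
    with open(path, "rb") as f:
        while True:
            packet = f.read(PACKET_SIZE)
            if len(packet) < PACKET_SIZE:
                break
            if packet[0] != 0x47:          # skip chunks missing the sync byte
                continue
            pid = ((packet[1] & 0x1F) << 8) | packet[2]
            counts[pid] += 1
    return {pid: n * PACKET_SIZE * 8 / capture_seconds for pid, n in counts.items()}

if __name__ == "__main__":
    for pid, bps in sorted(pid_bitrates("mux_capture.ts", 60).items()):
        print(f"PID 0x{pid:04X}: {bps / 1e6:.2f} Mbit/s")
```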