HD Picture Quality

Just troubleshooting to make sure.

Thanks man. Sorry, I didn't mean to sound short with you in that last post.

Anyway, it looks like a lot of people are unhappy with the MPEG compression quality: macroblocking and noise. I guess I can live with the lower PQ, but I just want to make sure there isn't anything wrong with my particular setup/connection...

I e-mailed Dish about the PQ 2 days ago and haven't heard anything back from them...not even a boilerplate response.
 
I have not read anywhere that MPEG-4 uses any of the DCT scheme that JPEG does, so give us some info to back up that statement. I'm not talking about MPEG in general but about MPEG-4, which, as far as I know, isn't used for single frames but for groups of images.


MPEG4 IS MPEG COMPRESSION. It doesn't matter if it is MPEG 1, 2 or 4. They all use DCT.


https://eww.pavc.panasonic.co.jp/pro-av/technology/technology.pdf

That should be sufficient. Plus, video is just a succession of still images. If you can encode 100000 frames with it you can encode just one. That one frame is nothing but a picture. It just isn't used that way because it would be foolish to use a codec optimized for sequences of motion for a single still image.
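
For anyone who wants to see it concretely, here's a rough sketch of the 8x8 block transform that JPEG and MPEG intra-frame coding share. This is my own illustration using Python/SciPy, not anything from the Panasonic paper:

```python
# Illustrative sketch: the 8x8 two-dimensional DCT at the heart of JPEG
# and of MPEG intra-frame coding. Requires NumPy and SciPy.
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    """Forward 2-D type-II DCT with orthonormal scaling."""
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(coeffs):
    """Inverse 2-D DCT, recovering the pixel block."""
    return idct(idct(coeffs, axis=0, norm='ortho'), axis=1, norm='ortho')

block = np.random.randint(0, 256, (8, 8)).astype(float)  # stand-in 8x8 pixel block
coeffs = dct2(block)       # energy piles up in the low-frequency (top-left) corner
restored = idct2(coeffs)   # without quantization the round trip is lossless
print(np.allclose(block, restored))  # True
```

The transform itself loses nothing; the lossy part is what gets done to those coefficients afterward.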
 
I asked for it and am glad to see it.

MPEG4 IS MPEG COMPRESSION. It doesn't matter if it is MPEG 1, 2 or 4. They all use DCT.


https://eww.pavc.panasonic.co.jp/pro-av/technology/technology.pdf

That should be sufficient. Plus, video is just a succession of still images. If you can encode 100000 frames with it you can encode just one. That one frame is nothing but a picture. It just isn't used that way because it would be foolish to use a codec optimized for sequences of motion for a single still image.

I had admitted all along that they used data compression; I was just not aware that DCT was a major part of it. Had you shown this earlier it would've been much better, and the "yelling" at the start of your reply was unnecessary.
 
I had admitted all along that they used data compression; I was just not aware that DCT was a major part of it. Had you shown this earlier it would've been much better, and the "yelling" at the start of your reply was unnecessary.


Sorry. I didn't mean any offense by any of my posts. I was just using the caps for extra emphasis.
 
It is best not to go into detail on video compression schemes here because, to be honest, most people don't understand them. The points I was trying to make, and the only things that matter, are these:

The MPEG schemes allow for fine tuning to balance the visual information that is lost against the available room for the output data. This tuning is based on scientific knowledge of how the human eye captures an image and how the human mind processes it, allowing less data to be used without a viewer perceiving any difference between the compressed and uncompressed image (a toy sketch of this follows below).

In a perfect MPEG encoding of a movie you wouldn't see a difference. But there are very few media that allow for a perfect MPEG encoding. Dish Network is limited in the amount of data they can transmit by the physical limitations of the transponders on their satellites. They use a complex scheme that attempts to balance things automatically. Sometimes this automatic process fails, and we need to let Dish know when that is happening.
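
To make the first point concrete, here's a toy sketch of that tuning knob (my own illustration, not Dish's actual encoder): quantizing DCT coefficients with a coarser step discards the fine detail the eye misses least, leaving fewer nonzero values to transmit.

```python
# Toy rate/quality trade-off: coarser quantization of DCT coefficients
# discards more fine detail but leaves less data to transmit.
import numpy as np

rng = np.random.default_rng(0)
coeffs = rng.normal(0, 50, (8, 8))         # stand-in for one block's DCT coefficients
for step in (2, 10, 40):                   # small step = high quality, more bits
    quantized = np.round(coeffs / step)    # the lossy step: detail below ~step vanishes
    nonzero = np.count_nonzero(quantized)  # fewer nonzeros means fewer bits to send
    error = np.abs(coeffs - quantized * step).mean()
    print(f"step={step:2d}  nonzero={nonzero:2d}/64  mean error={error:5.1f}")
```

A broadcaster running low on transponder room is, in effect, turning that step size up.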

Below are pictures I took to show some examples of the quality I am seeing. Spike HD was terrible, nearly as bad as FoxnewsHD, but overall it isn't too bad.

This is a picture of my TV capturing a frame from the SpikeHD channel during the UFC finals night. The picture was taken from about 4 feet away from the TV. The second picture is a close-up showing the lack of detail. Note that these pictures were taken with a 3.2-megapixel camera and were likely further compressed by my upload.
DSCF3325.jpg

DSCF3326.jpg


These pictures are of an over-the-air recording of the show Fringe. Note the detail not seen in the previous pictures; in particular, the zoom shows tremendous detail in the jacket and the actor's face.
DSCF3319.jpg

DSCF3320.jpg


Here are pictures from the show Medium, recorded off the Dish rebroadcast of my local CBS affiliate. The quality is actually pretty good. You can see some loss of detail, though, along the nose near the eye; the detail is mottled there, but overall the image is pretty good.
DSCF3323.jpg

DSCF3324.jpg


And finally, the image from Gran Torino. Pretty good detail, though I'd say it is noticeably softer than the OTA broadcast of Fringe.
DSCF3321.jpg

DSCF3322.jpg
 
Like comparing Apples & Oranges

The set of pictures you supplied, while you did a pretty good job of displaying them, are like comparing apples and oranges. They are both fruits, but nothing else is related. UFC is shot as a live-action event, and the lighting is not constant at the edge of the ring. Problems with it can very easily be technical in the shot, and they deal with them as best they can to get the product out. You then compare the lack of detail in UFC to "Fringe," where I see an edge problem on the badly lit fence. So what is the problem? The lack of detail in the zoomed picture of the coat looks like he was moving at the time. It's also a depth-of-field problem, as HD is not forgiving of distance mixed with movement. Same thing with the shot from "Medium": depth of field is not a photographer's or editor's friend in HD. Even the shot of "Gran Torino": what's the problem? If you are talking about a little loss of detail in the neck, the focus is set on the face. Here again, it's a depth-of-field problem. So I'm not sure what your point is here. Please explain: what are the problems?
 
When taking the pictures, I worked to find a fixed frame, which takes some time with the Dish DVR. As you may or may not know, not every frame of an MPEG stream is equally clean. The zoomed images were taken with the camera at full frame, then at 3x optical zoom on a section of the image. The UFC shot was as stable as it could be. Note the lack of detail around their heads and the complete lack of detail in the bodies. If you want to call those lighting issues, good luck. They're not.

The OTA capture was done as the character bent over, in motion. The Medium capture was done at a fixed frame that stayed on screen for a few seconds, with only slight motion as the character was in a moving car. The Gran Torino shot was during a camera pan/zoom.

Problems visible:
1st set - not a high enough bitrate to support the action in the scene at high quality. Barely acceptable quality.
2nd set - No issues visible
3rd set - Slight issues, which I view as related to the method Dish uses to save bandwidth: localized areas of detail loss while other areas appear to have lots of detail. Note the lower eyelid: the detail towards the nose is lost in a band towards the outside of the eye, and fine lines visible on the inner part of the lower lid turn into a flat-colored area moving outward. Certainly livable, as the picture in general looks pretty good, but that type of detail loss is much more obvious when things go wrong.
4th set - Similar to the 3rd set, there is some facial detail that appears to be softened and smoothed out, but again, it's certainly within the realm of high-quality HD broadcasts.

The main issue is that Dish needs to have less of set #1, keep up with sets #3 and #4, and keep shooting for set #2. If you're watching SD and see something like, or worse than, set #1, you should let Dish know so they can do better.
 
If it is a Dish-wide problem, then why do you reckon not everyone is seeing it? I am not an expert when it comes to broadcast signals, but if we are getting the same content off the same satellites, wouldn't you see the problems everywhere?

AHA! Finally, we've got a thinker on board!


Broadcasters

The broadcaster is one source of compression errors. These errors come from the compression and coding process, hence the name. Errors created at the source by the coding and transmission processes become part of the bit stream (signal) and thus an integral part of your end picture quality. They will be consistent across all viewers with similar setups: errors that come from the broadcaster will be seen by every customer who has “eyes to see” them. These compression errors are truly out of our control.

The problem here is that if all the compression errors that are witnessed, complained about, and credited to the broadcasters were ACTUALLY coming from the broadcasters, then every Dish customer would have the same artifact complaints, and every local broadcast viewer in a market would see the same artifacts as the next. But they don't. Some praise Dish for their great HD quality (and so do I), while others complain about poor picture quality and compression artifacts. How come? What can explain this wide range of differing picture quality opinions when the broadcaster sends the same quality to everyone who views their content?

Some of this can be explained by the number of people out there who think they are watching great HD when they've never actually seen great HD. As a friend of mine who's been in car audio for 30 years said to me, “Most people think that their sound system is great. They think they are listening to 'A' quality sound. In reality, they just haven't ever heard 'A' quality sound and have no reference for their opinion. When given the opportunity to hear 'A' quality sound, they realize that what they were listening to all this time was 'B' quality.” The same analogy applies to HDTV, but this only explains a small part of the situation. Differences in viewer perception, television resolution, and manufacturers still fall short of accounting for the discrepancy.

The Overlooked Source of Compression Artifacts

The second and commonly overlooked source of compression errors is found at home, in your receiver. It doesn't matter whether we are talking about OTA HDTV, cable, or satellite reception; this even applies to cell phones and computers. Compression and signal conversion technology is used in ALL digital systems, both in broadcasting and in receiving.

The receiver side of the compression equation is where the majority of compression errors enter the average system. Though it is not like me to stick up for anyone in this sad and sick industry, the truth is that the broadcasters are doing a better job than they realize. What is being sent by the broadcasters and Dish Network IS mostly good signal; their engineers have done the math and are aware of the Shannon limits on channel capacity. The broadcasters are not the source of the majority of the compression artifacts. If all of the compression errors came from the broadcasters, and everyone had an identically set up system, then everyone would have the same picture quality. This is not the case. Why? Home is where most compression errors are introduced, and home is where you can take steps to avoid them.

RF Front End

Did you know that every digital system has what is termed an “RF front end”? They do! The RF front end refers to the signal reception portion of any communication system, both analog and digital. It consists of the antenna and the cables that deliver the signal to the receiver. With satellite, the antenna is the dish; with OTA broadcast, it is the TV antenna.

Each individual's in-home system performance is determined first by how well the signal is received at the RF front end. It is the integrity, or quality, of the arriving signal delivered to the receiver that determines the performance of any digital system. The goal of a properly designed system is to maintain the integrity of the signal as it is received, decoded, and presented. Everything about these systems is dependent on the quality of the signal you have to work with.

The measure of performance of a communications system is called signal quality or signal fidelity. Signal quality is a simple relationship of signal power to noise power, called the signal-to-noise ratio, or SNR.

clip_image001.jpg
Signal quality (SNR) meters are found in many television receivers and are typical with satellite systems. The value expressed on satellite signal meters is not signal strength, but signal quality, or SNR.
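
For the numerically inclined, the ratio is usually expressed on the decibel scale. Here is a minimal sketch; there is nothing Dish-specific about it:

```python
# Minimal sketch: SNR is signal power over noise power, commonly
# expressed in decibels.
import math

def snr_db(signal_power, noise_power):
    return 10 * math.log10(signal_power / noise_power)

print(snr_db(100.0, 1.0))   # 20.0 dB
print(snr_db(100.0, 10.0))  # 10.0 dB: ten times the noise costs 10 dB
```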

I hear many of you saying, “We are using digital systems and BER (bit error rate) is what counts.” Truly, BER is the term applied to describe digital system performance and I will get to that shortly. What we need to realize is that BER is a function of SNR. In other words, SNR determines BER. They are tied together and cannot be separated.

(There are two ways to affect the signal quality or SNR. The first way is to increase signal strength: the signal quality (SNR) will be increased, and the BER greatly decreased, by even a small increase in signal strength. The second way to increase signal quality is to reduce noise. Either of these actions will result in greater signal quality and better performance, but the "biggest bang for the buck" is increased signal strength.)

What’s the point?

As SNR increases, BER decreases (good). As SNR decreases, BER increases (bad). What we have failed to realize is that as the SNR decreases, the number of errors and the rate of errors increase. Also, as SNR decreases, so does the BITRATE! (Can't tell me that signal doesn't matter. Does bitrate affect PQ?)
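
To see the relationship in numbers, here is a sketch for an idealized, uncoded BPSK link. That is a much simpler modulation than a real satellite feed, so treat the exact values as illustrative only:

```python
# BER as a function of SNR for an idealized uncoded BPSK link
# (illustrative only; real satellite modulation and coding differ).
import math

def q(x):
    """Tail probability of the standard normal distribution."""
    return 0.5 * math.erfc(x / math.sqrt(2))

for snr_db in range(0, 12, 2):
    ebn0 = 10 ** (snr_db / 10)       # convert dB to a linear power ratio
    ber = q(math.sqrt(2 * ebn0))     # textbook BPSK bit error probability
    print(f"SNR = {snr_db:2d} dB  ->  BER = {ber:.2e}")
```

Every couple of dB of extra signal quality buys orders of magnitude fewer bit errors, which is the whole point.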



The greater number of errors that results from lower SNR all have to be corrected or covered by the FEC (forward error correction) algorithms, the coding and decoding of the signal. The more errors that need to be corrected in a bit stream, the further from the original quality your end result will be. FEC corrects, diminishes, or hides errors to the best of its ability. For it to correct errors there needs to be enough good redundant information to replace or correct the corrupted bits. In the absence of all the right stuff, FEC uses its tricks and tools to interpolate (guess) the correct information from what's left. More errors mean more guessing by the FEC, and every guess is one step further from the best quality available.
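
A toy example of that "good redundant information" idea: the simplest possible FEC is a repetition code, where every bit is sent three times and the decoder takes a majority vote. Real broadcast FEC (convolutional, Reed-Solomon, and friends) is far stronger, but the principle is the same:

```python
# Toy forward error correction: 3x repetition code with majority-vote
# decoding. Real broadcast FEC is far stronger; the principle is the same.
import random

def encode(bits):
    return [b for b in bits for _ in range(3)]         # send every bit three times

def decode(coded):
    triples = (coded[i:i + 3] for i in range(0, len(coded), 3))
    return [1 if sum(t) >= 2 else 0 for t in triples]  # majority vote per bit

random.seed(1)
message = [random.randint(0, 1) for _ in range(1000)]
noisy = [b ^ (random.random() < 0.05) for b in encode(message)]  # 5% channel errors
decoded = decode(noisy)
print(sum(m != d for m, d in zip(message, decoded)), "errors survive out of 1000")
```

Push the channel error rate high enough and the majority vote starts guessing wrong: exactly the "every guess is one step further from best quality" effect described above.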

Now that we know how signal quality is measured, how FEC works, and that BER is dependent on SNR, it's time to turn our attention back to bit error rate.

Bit Error Rate

A system can get lock on a signal with a bit error rate (BER) as poor as 10^-1. That corresponds roughly to 13-20 on a Dish Network signal meter; it is the lowest signal quality meter reading at which you still have reception. In many cases you can watch a program "as long as you have lock." But here is some information you might be interested in knowing: MPEG error correction doesn't even start working until the BER reaches 10^-4, or conversely, it stops working if you drop below a 10^-4 BER! If you are watching a program with signal quality that low, the error correction isn't even working. (As a matter of fact, the misnamed "digital cliff" (more later) is caused by MPEG FEC. The "fast drop" comes with the sudden loss of the correction benefits as signal fluctuations push the BER across the 10^-4 threshold where the correction stops working.)

Not everyone will see a problem at that level of signal. Some will have compromised picture quality but won't notice. Cool! They don't have to worry about picture quality concerns at all. Others (who have eyes to see) may notice a blurry or grainy picture. Still others may experience degraded performance that isn't visible in the picture at all: it may show up as loss of signal, pixelation, or timer and DVR problems. The bottom line is that errors increase as SNR decreases, and signal quality is the determining factor in all areas of performance, from stability and function to quality.

Another notable bit error rate benchmark is 10^-6. A BER of 10^-6 is the point below which the average person perceives a degraded picture. 10^-6 is also the quasi-error-free (QEF) point, described as one visible error per hour. This is the point that, if there has to be a "minimum", should be achieved. Ideally, we want to maximize our signal.

(With Dish Network, I believe the minimum signal should be about 66, including headroom. Prior to the signal meter change, this "minimum" benchmark was 70. Since the introduction of the new "improved" signal meter, which threw out all the previous reference points, it has been difficult to determine the precise number.)

MPEG error correction is in full swing with a 10^-6 BER input, and with all of its tools and tricks the output "resembles" a much lower, higher-quality BER. At a BER of 10^-6 there are still a lot of errors, but the forward error correction (FEC) compensates for, covers, and hides the errors quite well at 10^-6 and better.

It is not until we've moved further up the signal quality performance scale, to a BER of 10^-10, that we reach the benchmark labeled "high-quality video". Here is where "WOW!" is actually found! For the most discriminating eye and for the highest quality picture with trouble-free performance, this is really where we want to be. Beyond a BER of 10^-10, the top end of the scale is between 10^-12 and 10^-13. In this area there is too much signal for the digital signal processor; it becomes overloaded, resulting in pixelation and loss of signal similar to the bottom of the scale.

Here's the thing... digital performance is not actually "all-or-nothing". If BER performance were actually "all-or-nothing", a graph of it would be a straight line. A straight horizontal line on a graph represents "no change", a constant value, which is exactly what we would expect in an "all-or-nothing" scenario.

When we actually look at digital performance graphs, the first thing we see is that we are not dealing with a straight line of constant performance, but a curved line denoting variable performance. While the "digital cliff" idea only hints at the inaccuracy of the "all-or-nothing" fable, it has been known for quite some time that there isn't even a digital "cliff", but rather a digital waterfall. It seems to me that no one wants to tell us about it.

Here's a quote from an article written in 2002, "How Forward Error-Correcting Codes Work":

"Represented graphically, the general error-performance characteristics of most digital communication systems have a waterfall-shaped appearance. System performance improves (i.e., bit-error rate decreases) as the signal-to-noise ratio increases."

Here is the link: How Forward Error-Correcting Codes Work

clip_image002.gif


I found this graph to be in a strange orientation compared with what I would call a typical graph. I am accustomed to viewing graphs that increase in performance when read from left to right. This graph presents the digital waterfall, but the way it is oriented you might think, as I did at first, that the "water" is flowing "down" from left to right. That is not correct: for the "waterfall" to be accurately shown flowing down (as water does) requires a different orientation.

To view the graph in a more sensible fashion (reading left to right, low performance to high performance), I have included the graph with its new orientation.

clip_image004.jpg


This graph gives clear evidence of the BER-to-SNR relationship and now, in its new orientation, represents a performance graph from "nothing" (bottom left) to higher performance as we read to the right.

This graph represents what used to be called the "digital cliff" but is more accurately called the digital waterfall. (You might notice that this only represents BER in the range of 10^-1 to 10^-6 or so. It is quite common for the BER/performance graphs I have found to include only portions of the total curve. Graphs that stop at 10^-4 or 10^-6 are common because 10^-4 is the point you should reach for FEC to begin working, and 10^-6 is where you should be to begin watching really good video.)
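
If you want to reproduce the waterfall shape yourself, here is a short sketch using matplotlib and the same idealized uncoded BPSK math as earlier. The exact numbers aren't Dish's; the shape is the point:

```python
# Sketch: regenerate the BER-vs-SNR "waterfall" curve for an idealized
# uncoded BPSK link. The shape, not the exact numbers, is the point.
import math
import matplotlib.pyplot as plt

snr_db = [x / 2 for x in range(0, 25)]                             # 0 to 12 dB
ber = [0.5 * math.erfc(math.sqrt(10 ** (s / 10))) for s in snr_db]

plt.semilogy(snr_db, ber)              # log BER axis reveals the waterfall
plt.xlabel("SNR (Eb/N0, dB)")
plt.ylabel("Bit error rate")
plt.title("Digital 'waterfall': BER plunges as SNR rises")
plt.grid(True, which="both")
plt.show()
```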
 

Attachments

  • transfe_snr.JPG
  • BER vs system loss.png
  • ERRORCHT.jpg
  • 480px-Fec_survey_tcm_performance.png
  • snr.jpg
Actually, the main difference between providers is the way they re-encode the signal from the broadcaster. Dish will take X number of channels, recompress them, and mux them in real time. It works fairly well, but there are a lot of ways for it to go wrong (if too many channels need bits at the same time, they get chopped). The pictures are also reformatted; Dish, for example, cuts down the horizontal resolution on most channels.

Dish and DIRECTV mix different channels together and allocate their bits differently.

A system like FIOS has the capacity to just pass the info through as the provider supplies it, if the provider is using a compatible format. If not, it too must be converted.

All these conversions happen in real time on specialized hardware. Providers pay big bucks for this hardware, and it is constantly being tweaked and improved. But, as demonstrated in the past, they tend to squeeze more channels in as the compression improves rather than let the picture quality improve. It is quantity, not quality, that sells.
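
As a rough picture of what that real-time mux is doing (purely my own illustration; the actual hardware is vastly more sophisticated): each instant, every channel reports how complex its current scene is, and a fixed transponder budget gets split in proportion. When several channels spike at once, each one gets fewer bits per unit of scene complexity, and that is when artifacts appear:

```python
# Illustrative statistical multiplexing: a fixed transponder budget is
# split among channels in proportion to momentary scene complexity.
# Purely a sketch; real mux hardware is far more sophisticated.
TRANSPONDER_MBPS = 38.0  # assumed total budget, for illustration only

def allocate(complexities):
    """Split the budget proportionally to each channel's scene complexity."""
    total = sum(complexities.values())
    return {ch: TRANSPONDER_MBPS * c / total for ch, c in complexities.items()}

quiet = {"CBS-HD": 4, "SpikeHD": 3, "Movie-HD": 3}   # calm scenes (example names)
busy = {"CBS-HD": 9, "SpikeHD": 10, "Movie-HD": 8}   # action everywhere at once
for label, scene in (("quiet moment", quiet), ("busy moment", busy)):
    shares = allocate(scene)
    quality = {ch: shares[ch] / scene[ch] for ch in scene}  # Mbps per unit of demand
    print(label, {ch: f"{q:.1f} Mbps/unit" for ch, q in quality.items()})
```

A channel's bitrate depends on what its neighbors on the transponder are doing at that instant, which is part of why two channels with identical signal strength can look quite different.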
 
(With Dish Network, I believe the minimum signal should be about 66, including headroom. Prior to the signal meter change, this "minimum" benchmark was 70. Since the introduction of the new "improved" signal meter, which threw out all the previous reference points, it has been difficult to determine the precise number.)


That is interesting, as my signal strength for Cinemax (129 tp27) is 55. Wouldn't the impact on quality due to FEC be consistent across channels with a similar signal strength? My SpikeHD (129 tp26) signal strength is 55 as well, yet the quality of the image is quite different. I would expect that digital OTA signals also include some type of forward error correction.
 
When all is said and done and the majority of cable channels are in HD, I am afraid the picture quality will look no better than the SD channels did before HD was the norm. Both DISH and DIRECTV will continue to cram as many HD channels as they can onto a transponder to save bandwidth. We will all see the quality go down and details in the picture blur or fade. The only thing keeping the providers from just cramming 10 or 12 HD channels onto a transponder is that there are still OTA and Blu-ray discs to compare against. I don't think DISH could justify degrading the HD quality any further as long as there are OTA channels broadcasting in full 1080i HD. Now I wonder how the picture will look once you can set your receiver to output full 1080p? The 922 is supposed to be able to do just that. Just imagine all the picture flaws and artifacts in 1080p.
 
I'm not really seeing a trend toward worse picture quality. We now have the majority of our channels available in HD, and capacity is only going up with the newer birds. I am sure they are aware that many people are getting bigger TVs and are more prone to complain. I myself will jump ship to whoever has the best picture, regardless of +/- $10-$20 between providers, after my 2-year commitment is up.

Deep down I think most of us are being overly dramatic in this transition phase, and the companies are not merely focused on squeezing in as much as possible; if that were the case, I am sure we would have even more HD than we already have access to.
 
When all is said and done and the majority of cable channels are in HD, I am afraid the picture quality will look no better than the SD channels did before HD was the norm. Both DISH and DIRECTV will continue to cram as many HD channels as they can onto a transponder to save bandwidth. We will all see the quality go down and details in the picture blur or fade. The only thing keeping the providers from just cramming 10 or 12 HD channels onto a transponder is that there are still OTA and Blu-ray discs to compare against. I don't think DISH could justify degrading the HD quality any further as long as there are OTA channels broadcasting in full 1080i HD. Now I wonder how the picture will look once you can set your receiver to output full 1080p? The 922 is supposed to be able to do just that. Just imagine all the picture flaws and artifacts in 1080p.

Setting the receiver to 1080p should make no difference. If it does, it just means the 922 is better or worse at de-interlacing than your TV. The source is still 1080i, unless you rent 1080p24 VOD.
 
That is interesting, as my signal strength for Cinemax (129 tp27) is 55. Wouldn't the impact on quality due to FEC be consistent across channels with a similar signal strength? My SpikeHD (129 tp26) signal strength is 55 as well, yet the quality of the image is quite different. I would expect that digital OTA signals also include some type of forward error correction.

That's because it is not signal strength dependent. Don't read too much into his post; most of us ignore it. :)
 
I'm hoping that with the new satellites going up, quality will improve. But if what DISH did to SD quality is any clue to the future, then we are in trouble. :( By the way, does STARZ comedyHD look like crap right now? I'm watching "You Don't Mess with the Zohan" and it looks about like a PS3 upconvert.
 
I'm hoping that with the new satellites going up, quality will improve. But if what DISH did to SD quality is any clue to the future, then we are in trouble. :( By the way, does STARZ comedyHD look like crap right now? I'm watching "You Don't Mess with the Zohan" and it looks about like a PS3 upconvert.

SD? What is that?
 
