The only thing is that DirecTV will never issue a statement about what happened. That is the total opposite of how a professional service provider or consulting firm operates. For example, there was an issue in one of our provider's Miami data centers yesterday. Today we received an e-mail with a full explanation: "Network Operations Center (NOC) received multiple alerts for devices in the Miami Data Center. Local Data Center Technicians and Network Engineers were immediately mobilized to investigate the issue. During the investigation we were able to identify a large layer 2 loop, once the source of the loop was identified, a failed switch was immediately replaced and full connectivity was restored at 10:55 ET." That was followed by a more detailed mitigation plan (I've only pasted a snippet), such as "accelerating already planned upgrades at the data center that addresses legacy devices at the aggregation layer and distribution layer. The new design will reduce the layer 2 complexity and remove reliance on protocols such as spanning tree". They go into more detail than that... but it shows the difference in communication between an enterprise service provider and a TV provider.
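For anyone who hasn't chased one of these down: a layer 2 loop is nasty precisely because Ethernet frames carry no TTL, so once spanning tree stops blocking a redundant path, broadcast frames circulate and multiply until links and switch CPUs saturate. Here's a rough Python sketch of the effect (the four-switch full mesh below is made up for illustration, not anything from that e-mail):

```python
# Toy model of broadcast flooding in a looped layer 2 topology.
# Switches flood a broadcast frame out every port except the one it
# arrived on; with no spanning tree blocking and no TTL, copies never die.
from collections import deque

# Hypothetical full mesh of four switches -- every redundant path unblocked.
links = {
    "sw1": ["sw2", "sw3", "sw4"],
    "sw2": ["sw1", "sw3", "sw4"],
    "sw3": ["sw1", "sw2", "sw4"],
    "sw4": ["sw1", "sw2", "sw3"],
}

def flood(origin: str, rounds: int) -> int:
    """Count total broadcast frame copies forwarded after N forwarding rounds."""
    frames = deque([(origin, None)])  # (current switch, switch it arrived from)
    total = 0
    for _ in range(rounds):
        next_frames = deque()
        for switch, came_from in frames:
            for neighbor in links[switch]:
                if neighbor != came_from:  # flood all ports except ingress
                    next_frames.append((neighbor, switch))
                    total += 1
        frames = next_frames
    return total

# In this mesh the in-flight copies roughly double every round, so a single
# broadcast frame becomes millions of copies within 20 hops -- which is why
# one failed switch can take down an entire aggregation layer in seconds.
print(flood("sw1", 20))  # ~3 million forwarded copies of one frame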
I saw the Hardcore Pawn stuttering as well. We recorded the SD re-airings that rerun after midnight. When I went to bed things seemed fine, but we left the SD recordings set up just in case. Based on the lag and stuttering, I would place the blame on a networking issue between source content acquisition and the encoder. The same stutters and issues also showed up on the iPad and iPhone live streaming channels, so I do not see it as an uplink issue or a problem with the bird at 103. A power issue could have thrown a rack offline containing the encoders or signal acquisition equipment for the affected channels; however, when they came back up there were serious lip-sync issues and stuttering (it reminded me of an underpowered PC trying to play HD video).
A few years ago Comcast had a strange issue on local HD channels with sync problems, stuttering, and channels going offline. The problem was a 10GbE interface into the core switch for the local channel pod. Even though things were redundant, with dual 10GbE interfaces into the pod, the primary was not completely dead. It was accepting traffic, but it couldn't keep up. It didn't "appear down" since 95% of packets arrived on time... so the system never failed over to the secondary interface. I'm sure DirecTV has plenty of redundancy built into its core video network... but sometimes when something is "partially working", DR mechanisms don't see a failure and never switch over to the backup. And that's just a networking scenario... I've certainly seen my fair share of power issues where a generator doesn't kick on... or it kicks on but the transfer switch doesn't engage... or the UPS fails and doesn't carry the load while the generator ramps up to full RPM. The point is, there are so many different things that could have gone wrong here. An explanation would be nice to have... but I think all we're going to get is either "it's fixed" or "there are issues".
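To make that "partially working" blind spot concrete, here's a toy Python sketch (the probe counts and thresholds are invented, not Comcast's or DirecTV's actual monitoring). A failover check that only asks "is the link up at all?" almost never trips on a link that is degraded but still passing most traffic:

```python
import random

def probe(loss_rate: float) -> bool:
    """One health-check probe; True if the probe packet got through."""
    return random.random() > loss_rate

def naive_monitor(loss_rate: float, probes: int = 10) -> str:
    """Fails over only if *every* probe is lost -- i.e., the link looks dead."""
    if not any(probe(loss_rate) for _ in range(probes)):
        return "FAILOVER to secondary"
    return "primary considered healthy"

def loss_aware_monitor(loss_rate: float, probes: int = 200,
                       max_loss: float = 0.01) -> str:
    """Fails over once measured loss exceeds what video transport can tolerate."""
    lost = sum(1 for _ in range(probes) if not probe(loss_rate))
    if lost / probes > max_loss:
        return "FAILOVER to secondary"
    return "primary considered healthy"

# A link dropping 5% of packets is fatal to an HD transport stream, yet the
# naive up/down check essentially never sees it as down:
print(naive_monitor(0.05))       # -> primary considered healthy
print(loss_aware_monitor(0.05))  # -> FAILOVER to secondary
```

That "95% of packets arrived on time" interface is exactly the naive check's blind spot: dead links are easy, sick links are what burn you.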