I'd be very disappointed if the software integrators didn't test this with real data.
Take a snapshot subset of live data into a staging environment and run it through your system to see where it breaks.
Would they take all the data? Surely not. Could not running all the data through the test environment still cause issues when it gets rolled out? Sure.
But the issues that crop up should be case-by-case, not the kind that bring the whole thing to a grinding halt.
One of the most important things to do on data-reliant software projects is to test with "real" data that's also recent data. Not mocked-up data. Not real data from five years ago. Today's data. It's practically guaranteed that if you don't use real data, something bad will happen.
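To make that concrete, here's a minimal sketch of what I mean by a "snapshot subset": sample recently modified rows out of a replica of the live database into a throwaway staging database, and point the script at that. The table, columns, and file paths are made up for illustration; the only thing that matters is that the test data is recent and real-shaped.

```python
import sqlite3

# Hypothetical example: copy a sampled slice of recently changed subscriber
# records from (a replica of) the live DB into a scratch staging DB that the
# test run is allowed to break. Table/column names are invented for this sketch.

PROD_DB = "production_replica.db"   # assumed path to a replica of the live DB
STAGING_DB = "staging.db"           # throwaway DB for the dry run

def snapshot_recent_subset(days=30, sample_every=10):
    """Copy roughly 1-in-N subscriber rows modified in the last `days` days."""
    prod = sqlite3.connect(PROD_DB)
    staging = sqlite3.connect(STAGING_DB)

    staging.execute("""
        CREATE TABLE IF NOT EXISTS subscribers (
            id INTEGER PRIMARY KEY,
            account_no TEXT,
            receiver_id TEXT,
            auth_state TEXT,
            updated_at TEXT
        )
    """)

    # Pull only recent rows, and only every Nth one, so the staging set stays small
    # but still reflects whatever odd states exist in today's real data.
    rows = prod.execute(
        """
        SELECT id, account_no, receiver_id, auth_state, updated_at
        FROM subscribers
        WHERE updated_at >= date('now', ?)
          AND id % ? = 0
        """,
        (f"-{days} days", sample_every),
    )

    staging.executemany(
        "INSERT OR REPLACE INTO subscribers VALUES (?, ?, ?, ?, ?)",
        rows,
    )
    staging.commit()
    prod.close()
    staging.close()

if __name__ == "__main__":
    snapshot_recent_subset()
```

Then you run the actual migration script against staging.db and see what falls over, before it ever touches the live system.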
And if this is what happened...someone needs to have their employment reviewed.
As several people have posted, they could easily have run a test on a backup of all the data (recent data, backed up moments before the test) and still had a "surprise" when they tried to run it on the real system.
I suspect it's not a matter of the script handling the data itself; the architecture of the system is at the root of the problem.
The system is likely a monster of components installed from 1994 to the present, all strung together in one-of-a-kind proprietary ways.
We know this system is tentacled into every desktop of every CSR, into the live feed of data to the satellites themselves (so it can send authorization commands to your receivers), and into D*'s web site (they have a button you can use to "refresh" your receiver's authorization all by yourself).
Simulating the monster that I'm sure this system is, down to every detail and nuance, would be simply impossible.
Is it possible that someone did something glaringly stupid and their employment should be reviewed as a result? Sure.
Is it possible that the problem was so subtle that only the wildest stroke of luck or genius would have caught it before it bit them in the butt? Yes, that's also possible.