More and more streaming FAST channels on DISH

That is fine if you have an Ethernet connection available, but if not you need wifi, and there is an adapter for that for the Wally.

I do have the wifi adapter installed. I'll check it out. What channels are they on, if my system has them?
 
They are not satellite signals. They need to record the IP stream, which I am told is a lot different from recording the satellite stream.
My guess is that it’s a hardware implementation issue of the H3. Likely there is some kind of HW assist to move the satellite data directly to the hard drive independent of the H3 CPU. And that HW assist is likely hardwired and tailored for just satellite data, as streaming data was never conceived of when the H3 was designed. The only alternative for streaming data is using the CPU for this time-intensive operation, which could affect the performance of the entire H3. Balancing this could be very challenging.
 
My guess is that it’s a hardware implementation issue of the H3. Likely there is some kind of HW assist to move the satellite data directly to the hard drive independent of the H3 CPU. And that HW assist is likely hardwired and tailored for just satellite data, as streaming data was never conceived of when the H3 was designed. The only alternative for streaming data is using the CPU for this time-intensive operation, which could affect the performance of the entire H3. Balancing this could be very challenging.
I find this hypothesis hard to believe. There is obviously a live buffer in use right now: I can pause, rewind, fast-forward, etc. on the FAST channels. So it would seem the streams can be saved to disk now.
 
My guess is that it’s a hardware implementation issue of the H3. Likely there is some kind of HW assist to move the satellite data directly to the hard drive independent of the H3 CPU. And that HW assist is likely hardwired and tailored for just satellite data, as streaming data was never conceived of when the H3 was designed. The only alternative for streaming data is using the CPU for this time-intensive operation, which could affect the performance of the entire H3. Balancing this could be very challenging.
There is software out there that can record almost any stream to a PC; perhaps something similar could be coded for the Hopper.
I find this hypothesis hard to believe. There is obviously a live buffer in use right now: I can pause, rewind, fast-forward, etc. on the FAST channels. So it would seem the streams can be saved to disk now.
All that is done at the provider’s stream, not on the Hopper. I can do that also with my Fire Stick on many streams, but there is no buffer capability on the Fire Stick.
 
All that is done at the provider’s stream, not on the Hopper. I can do that also with my Fire Stick on many streams, but there is no buffer capability on the Fire Stick.
I don't believe that is true for the Dish FAST channels. The displays for the controls are Hopper displays. The functionality is the same as Hopper functionality. So IMHO the stream is being used by the Hopper. Of course, that is just my opinion.
 
All that is done at the provider’s stream, not on the Hopper. I can do that also with my Fire Stick on many streams, but there is no buffer capability on the Fire Stick.
I cannot pause/rewind/fast-forward on anything playing live on Pluto; on demand, yes, on any device.
 
I find this hypothesis hard to believe. There is obviously a live buffer in use right now: I can pause, rewind, fast-forward, etc. on the FAST channels. So it would seem the streams can be saved to disk now.
It’s not a matter of having a data buffer. Rather, it’s how the data in the data buffer gets to the hard drive for recording. Moving data is a very CPU-intensive operation: tight loops move the data one word at a time, and that means the CPU can’t be doing anything else. Using the CPU like this causes delays for all the other things the CPU has to do (display updates, IP I/O, Bluetooth data, etc.)

That’s why it is common in realtime systems like the H3 to have dedicated DMA (Direct Memory Access) HW engines to move repetitive high-speed data independent of the CPU, so that the CPU can keep up with all of its housekeeping tasks.
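
The difference can be sketched in a few lines. This is just a toy illustration (not DISH or H3 code): copying a buffer one word at a time keeps the CPU busy for every single word, while a bulk move hands the whole transfer off in one operation, loosely analogous to a DMA engine doing the move while the CPU is free for housekeeping.

```python
def cpu_copy(src: bytes) -> bytearray:
    """Word-at-a-time copy: the 'CPU stuck in a tight loop' case."""
    dst = bytearray(len(src))
    for i in range(len(src)):          # one iteration per byte/word
        dst[i] = src[i]
    return dst

def bulk_copy(src: bytes) -> bytearray:
    """Single bulk move: the 'hand it to dedicated hardware' case."""
    dst = bytearray(len(src))
    dst[:] = src                       # one operation, no per-word loop
    return dst

stream = bytes(range(256)) * 64        # 16 KiB of fake satellite data
assert cpu_copy(stream) == bulk_copy(stream) == bytearray(stream)
```

Both produce the same result; the point is who does the per-word work.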

And remember, the satellite data has multiple stages to go through: stream out of the tuner, into the live buffer, into the HW frame buffer for display, into the HW AES encryption (another very time-intensive operation), and then into the HW disk controller to be stored on the hard drive. There could be more stages that I haven’t thought of.
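
That chain of stages can be modeled as functions feeding one another. This is a hypothetical sketch with made-up names, and a trivial XOR stands in for the hardware AES stage purely to show data being transformed between stages:

```python
KEY = 0x5A  # stand-in key; real hardware would use AES, not XOR

def tuner_output(n: int) -> bytes:
    """Fake transport stream coming out of the tuner."""
    return bytes(i % 251 for i in range(n))

def encrypt(data: bytes) -> bytes:
    return bytes(b ^ KEY for b in data)       # placeholder for the HW AES stage

def decrypt(data: bytes) -> bytes:
    return bytes(b ^ KEY for b in data)       # XOR is its own inverse

def record(n: int) -> bytes:
    live_buffer = tuner_output(n)             # tuner -> live buffer
    return encrypt(live_buffer)               # live buffer -> encryption -> "disk"

disk = record(1024)
assert decrypt(disk) == tuner_output(1024)    # round-trip sanity check
```

In real hardware each arrow in that chain may be a fixed, glued-together path, which is exactly why inserting IP data mid-chain can be hard.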

To keep the cost of the HW down, operations like this are often “glued” together in HW with no way to inject alternative data into one of these stages. For streaming, the chain of stages would need to be reproduced for full display and recording functionality. If the existing satellite HW can’t be utilized for streaming, then the CPU might not have enough bandwidth to handle all those stages itself.

But remember, when the H3 was first designed, IP streaming probably wasn’t part of the design criteria. So the HW for IP streaming might just not exist. We would have to see the HW data sheet specs to know exactly what is and isn’t possible.

I design and work on real-time data systems for a living (my screen name stands for Real Time Computing Dude). These are very common trade-offs in real-time system environments. It’s a lot more complicated than it seems from the outside. You know you’ve done your job when all the internal complexities are hidden and the system just works for the user.
 
And remember, the satellite data has multiple stages to go through: stream out of the tuner, into the live buffer, into the HW frame buffer for display, into the HW AES encryption (another very time-intensive operation), and then into the HW disk controller to be stored on the hard drive. There could be more stages that I haven’t thought of.
I am a little confused here. Are you saying that when I pause an IP stream on the H3 for up to an hour, the data is not written to the hard drive? It would seem that a lot of temporary memory would be required to buffer an hour's worth of data.
 
I am a little confused here. Are you saying that when I pause an IP stream on the H3 for up to an hour, the data is not written to the hard drive? It would seem that a lot of temporary memory would be required to buffer an hour's worth of data.
I'd say that when you paused the IP stream, it was paused at the server, not on your device.
 
I am a little confused here. Are you saying that when I pause an IP stream on the H3 for up to an hour, the data is not written to the hard drive? It would seem that a lot of temporary memory would be required to buffer an hour's worth of data.
I could be wrong about this, so forgive me if I am... but that temporary IP streaming buffer is stored in a separate, smaller area, and that area is not big enough to record full shows. And tying in that separate partition so that things recorded there show up in the DVR menu is another thing entirely.

As I have said they are working on it.
 
They are working on making them recordable. :)
This would be a welcome change, although the live buffer does seem to work on streams. Last I checked, I could hit rewind for 10s and it would go back, and it even showed the buffer being filled.

It would also be nice if they sorted out the guide info on those channels; it doesn't show season/episode info (although this seems to be a streaming thing in general: the non-streaming version of a channel shows that info if one exists, but all streaming channels seem to do this).

And other times it is outright wrong about what is actually showing. I watch Horror by Alter and a few other channels that have wrong air times and wrong end times, and the guide generally shows the previous program when the next one has already started.
 
I am a little confused here. Are you saying that when I pause an IP stream on the H3 for up to an hour, the data is not written to the hard drive? It would seem that a lot of temporary memory would be required to buffer an hour's worth of data.
My apologies if my simplified description left some confusion.

Before I try to answer your question, let me give you a brief overview of MPEG-4 video compression. The MPEG-4 standard is huge with only a part of it dealing with video compression, so I will only describe the compression part. But, even a simplified description is a bit long, so bear with me.

The most common usage is H.264 compression. Usually the lossy part of the standard is used, which throws out bits of the video that are redundant, or aspects that the human eye can’t see or won’t miss. A video frame is divided into a variable number of rectangular regions. Each region may be compressed differently depending on the content of the video it contains. The compression can vary from very little to 80%–90% or more in size reduction. This gives the video stream its size reduction. In addition, each region will contain reference snapshots of the data in the region. The reference snapshots are used to compare the video change in the region over time. If nothing changes in the region, the reference snapshot is used to reconstruct the video frame. If there are changes in the region, only the changed data is included in the region data stream, and the rest of the region reconstruction comes from the previous reference snapshot. If the changes are extensive enough, a whole new reference snapshot is created. This gives further data size reduction and temporal compression. All of this work is done by the encoder, and the resulting data stream is a constantly varying list of varying-sized region changes and reference snapshots. The data stream is constructed so that it is a far simpler process to decompress than to compress. Additionally, time codes are buried in the MPEG-4 data stream to correlate the varying compressed video with linear time.
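
The reference-snapshot idea above can be shown with a toy model (my own drastic simplification, not H.264 itself): each frame is split into fixed regions; for each region we ship data only when it differs from the reference snapshot, otherwise a "no change" marker. The decoder rebuilds frames from its own copy of the reference snapshots.

```python
def encode(frames):
    ref = {}                              # region index -> reference snapshot
    out = []
    for frame in frames:
        deltas = []
        for i, region in enumerate(frame):
            if ref.get(i) == region:
                deltas.append(None)       # unchanged: decoder reuses reference
            else:
                deltas.append(region)     # changed: ship new data, update ref
                ref[i] = region
        out.append(deltas)
    return out

def decode(stream):
    ref = {}
    frames = []
    for deltas in stream:
        frame = []
        for i, d in enumerate(deltas):
            if d is not None:
                ref[i] = d                # new reference snapshot for region i
            frame.append(ref[i])
        frames.append(frame)
    return frames

frames = [["sky", "grass"], ["sky", "grass"], ["sky", "car"]]
encoded = encode(frames)
assert decode(encoded) == frames
assert encoded[1] == [None, None]   # a static frame costs almost nothing
```

Static regions shrink to nothing in the stream, which is where the temporal compression comes from.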

To make things like trick play (fast forward/reverse, single-frame stepping, etc.) workable and usable, a method is needed to quickly access the correct point within the varying compressed video data stream based on linear time. Many methods have been devised to do this, and they have spawned many patents and patent fights. But in general, all of these methods build some kind of indexed lookup database to find where to go in the varying video data stream based on a desired linear time index.
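
A minimal sketch of such an indexed lookup, with made-up entry values: a side table of (presentation time, byte offset) pairs, binary-searched to find the last indexed point at or before the requested time.

```python
import bisect

# Hypothetical index of (presentation time in seconds, offset into the stream)
index = [(0.0, 0), (2.0, 18_500), (4.0, 40_200), (6.0, 55_900)]
times = [t for t, _ in index]

def offset_for(seconds: float) -> int:
    """Return the stream offset of the last indexed point <= seconds."""
    i = bisect.bisect_right(times, seconds) - 1
    return index[max(i, 0)][1]

assert offset_for(4.7) == 40_200   # land on the 4.0 s entry, then play forward
assert offset_for(0.0) == 0
```

The decoder then decompresses forward from that offset until it reaches the exact requested time.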

Now to your question. You correctly assume that there is no way a whole hour of even compressed video could be held in memory. Instead, the common method is to use a three-stage buffer scheme. The first stage is to keep a couple of fully decompressed frames in memory surrounding the current display point. This allows things like frame-by-frame single stepping with no perceivable time delay. This is the same video data that is sent to the HW frame buffer for display. The second stage is to buffer some chunk of compressed video stream data in memory for quick access, depending on the direction the video is being displayed (forward or backward), using the time indexing method. Getting the video stream data in large chunks reduces the number of hard drive accesses and the load on the managing CPU. This stage comes into play for fast forward/reverse or skip forward/reverse. The last stage is to use the time index method to go directly to the hard drive for larger time spans within the varying video data stream, typically when a video is first started or when returning to a previously stopped video for replay. (And again, think many patents and patent fights.)
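
The second stage can be sketched as a chunk cache (assumptions and sizes mine): compressed stream data is fetched from "disk" in large chunks, so a burst of nearby reads during fast forward/reverse hits memory instead of the drive.

```python
CHUNK = 4096

class ChunkCache:
    def __init__(self, disk: bytes):
        self.disk = disk               # stand-in for the hard drive
        self.chunk_start = None
        self.chunk = b""
        self.disk_reads = 0            # count of simulated hard-drive accesses

    def read(self, offset: int, size: int) -> bytes:
        start = (offset // CHUNK) * CHUNK
        if start != self.chunk_start:  # not in the in-memory chunk: go to disk
            self.disk_reads += 1
            self.chunk_start = start
            self.chunk = self.disk[start:start + CHUNK]
        rel = offset - start
        return self.chunk[rel:rel + size]

disk = bytes(range(256)) * 64          # 16 KiB fake recording
cache = ChunkCache(disk)
for off in range(0, 1024, 64):         # many small reads within one chunk
    cache.read(off, 64)
assert cache.disk_reads == 1           # one disk access served all of them
```

Sixteen small reads cost a single disk access, which is the whole point of stage two.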

As you can imagine, it is a complex process to coordinate all of the buffering so that things don’t get out of sync. And remember, there is a similar process going on for the audio, which I didn’t talk about, that has to be synced with the video.

Why is there all of this complexity? It’s to reduce the SW/HW resources needed to accomplish the task, and to reduce the cost of the HW and the CPU horsepower needed. Otherwise products would be prohibitively expensive and unaffordable. All of this leads to a long list of compromises and trade-offs when a product is first designed and specced. And because not every feature can be foreseen or anticipated, those compromises and trade-offs often lead to limitations on what new features and functionality can be squeezed into the product in the future. This is why we get newer, better product versions instead of endlessly updated products.

As I said in the original post, things are way more complicated under the covers than they seem looking from the outside. And now think about messing with that delicate complexity to add a new feature. I’m sure you can see why some things just take a long time to develop.
 