I expected two-Hopper integration to be fairly simple and to work like this:
One is the master and one is the slave.
All Joeys connect to the master Hopper (that's up to six Joeys, plus, say, three more data channels for the slave Hopper; all told that's a maximum of nine HD data streams, which is well within MoCA's specs, so there shouldn't be any networking issues).
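As a rough sanity check, here's the arithmetic, assuming about 10 Mbps per MPEG-4 satellite HD stream and about 175 Mbps of usable MoCA 1.1 throughput (both figures are my assumptions, not published Dish numbers):

```python
# Back-of-envelope MoCA headroom check (illustrative figures, not Dish specs).
HD_STREAM_MBPS = 10      # assumed MPEG-4 satellite HD stream, roughly 5-10 Mbps
MOCA_USABLE_MBPS = 175   # assumed usable MoCA 1.1 throughput
streams = 6 + 3          # six Joeys plus three slave-Hopper data channels

load = streams * HD_STREAM_MBPS
print(f"{load} Mbps of {MOCA_USABLE_MBPS} Mbps "
      f"({load / MOCA_USABLE_MBPS:.0%} utilization)")
# 90 Mbps of 175 Mbps (51% utilization)
```

Even with generous per-stream numbers, nine streams leave comfortable headroom.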
From the user's perspective there is only one Hopper, with a unified DVR listing, schedule, and tuner list. You see everything as though it were one six-tuner, unified-database system.
The master Hopper would set aside 1TB of space for all the things a single-Hopper system would store. Likewise, one of its three tuners would be used for recording the big-4's primetime content. This leaves 1TB on the master and 2TB on the slave for user-selected recordings.
In the crudest implementation: when only one event is scheduled to record, it records to the slave. When two events are called for, both go to the slave, since it has 2TB of storage and three unused tuners, compared to the master's 1TB and two tuners (one is assumed lost to the big-4's primetime recording). A third event goes to the master, a fourth to the slave, and a fifth to the master. Should a sixth tuner be available (i.e., the reserved tuner isn't busy recording primetime), that event would also fall to the master. To summarize (a code sketch of this assignment order follows the list):
0) master (big-4 primetime, reserved tuner)
1) slave
2) slave
3) master
4) slave
5) master
6*) master (*only when the reserved tuner isn't recording primetime)
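Here is a minimal sketch of that fixed assignment order, assuming the master reserves one tuner for the big-4 recording (the function name and the five-vs-six-tuner cutoff are mine, inferred from the list above):

```python
# Hedged sketch of the crude fixed-order tuner assignment described above.
SLAVE, MASTER = "slave", "master"

# Fixed assignment order for user events 1-6; the sixth slot is the master's
# reserved tuner and is only usable when the big-4 recording is idle.
ASSIGNMENT_ORDER = [SLAVE, SLAVE, MASTER, SLAVE, MASTER, MASTER]

def assign_tuner(active_events: int, big4_recording: bool) -> str:
    """Return which Hopper takes the next concurrent event, or raise."""
    usable = 5 if big4_recording else 6
    if active_events >= usable:
        raise RuntimeError("No free tuners")
    return ASSIGNMENT_ORDER[active_events]

# First two concurrent events land on the slave, the third on the master:
print([assign_tuner(n, big4_recording=True) for n in range(5)])
# ['slave', 'slave', 'master', 'slave', 'master']
```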
A more advanced system would dynamically assign tuners based on available storage and even shift files between master and slave when needed.
The master's archive would consist of actual files on its 1TB partition plus pseudo-files: mere pointers referencing files that actually reside on the slave (this scheme extends naturally to external hard drives added to either Hopper). When a file is called up, it streams from whichever device holds it directly to the user's box, but the directory always resides on the master, and all queries are handled by the master. Essentially, the second Hopper is treated as a remote hard drive, and if it crashes or is otherwise unavailable there are established protocols for dealing with that; the most basic is to pretend it is still operating and generate error messages if the user tries to access the inaccessible data. From the master's perspective the files still exist as pointers whether or not the pointed-to data is available. When the slave comes back online, syncing should be trivial, requiring only a few hundred KB of headers/pointers to be sent to update the master's database. A sketch of such a pointer directory follows.
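This is a minimal sketch of the pointer-based unified archive as I read the scheme above (my own illustration, not Dish's actual design); the master owns the directory, and remote entries are just pointers that may dangle while the slave is down:

```python
# Master-owned directory of real files and pointer-only pseudo-files.
from dataclasses import dataclass

@dataclass
class Entry:
    title: str
    device: str          # "master" or "slave"
    path: str            # location on the owning device

class MasterDirectory:
    def __init__(self):
        self.entries: dict[str, Entry] = {}
        self.slave_online = True

    def lookup(self, title: str) -> str:
        """Resolve a recording to a streamable location, or raise an error."""
        entry = self.entries[title]    # the pointer always exists...
        if entry.device == "slave" and not self.slave_online:
            # ...but the pointed-to data may be unreachable right now.
            raise IOError(f"'{title}' is on the slave, which is offline")
        return f"stream from {entry.device}:{entry.path}"

    def sync_from_slave(self, headers: list[Entry]):
        """Cheap resync: only pointers/headers cross the wire, not video."""
        for e in headers:
            self.entries[e.title] = e
        self.slave_online = True
```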
So, under this crude approach the user sees a single DVR archive with a unified schedule offering 5-6 auto-assigned tuners. The principal bottleneck would be the 2TB storage capacity of the slave. A more advanced system would dynamically load-balance the tuners to use the full 3TB efficiently, even going so far as to transfer files between the devices when necessary (hence the three extra data channels mentioned above). Plugging in external hard drives changes the calculus a bit but need not change the basic or advanced implementation. If you added 1TB drives to each Hopper, the basic case stays the same, except with 3TB on the slave and 2TB on the master, with the slave tending to fill up first. The advanced solution makes full use of both Hoppers' combined capacity, regardless of how far it is expanded.
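The core of the "advanced" placement rule could be as simple as the following sketch (the function and state layout are hypothetical, just to show the idea): send each new recording to whichever Hopper has a free tuner and the most free space, and migrate files in idle time when one side fills up.

```python
# Sketch of dynamic placement: prefer the Hopper with the most free space.
def pick_destination(hoppers: dict[str, dict]) -> str:
    """hoppers maps a name to {'free_gb': int, 'free_tuners': int}."""
    candidates = {n: h for n, h in hoppers.items() if h["free_tuners"] > 0}
    if not candidates:
        raise RuntimeError("No free tuners on either Hopper")
    return max(candidates, key=lambda n: candidates[n]["free_gb"])

state = {"master": {"free_gb": 1000, "free_tuners": 2},
         "slave":  {"free_gb": 2000, "free_tuners": 3}}
print(pick_destination(state))   # 'slave' -- more free space
```

Note that this reproduces the crude fixed order in the stock 1TB/2TB configuration but automatically adapts as external drives are added.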
Such an implementation is on the level of a class project. It expands the system from essentially two free tuners to five, and from 1TB of space to 2TB plus an unoptimized extra TB. It also offers two OTA tuners rather than one, great for those who want the CW and PBS in HD. Crucial to this system is integrating the scheduling and database and automatically managing the distribution of tuners; so too is not duplicating the reserved partition.
Unfortunately, it sounds like Dish is not presently implementing any of this. I can understand why they wouldn't want to jump straight to a dynamically managed database, but building a crude master-slave system with a base capacity of 2TB+, dynamically assigned tuners, and a unified database/scheduler is not too much to expect.