There’s no way to do what you’re asking.
Some answers, plus my $0.29 worth.
We don’t provide backup MX service for clients. Our mail servers either accept incoming mail as local or don’t accept it. It’s not a terribly difficult thing to do, but it’s not a service we provide.
Even if we did serve as a backup MX host, you still wouldn’t be able to access your email while the Exchange server was down. And imagine the chaos if some mail to your domain got delivered to local accounts on our servers while the bulk of it got delivered to the other server.
If you’re just concerned about mail being lost while the mail server is down, don’t be: mail servers sending to your domain will queue the mail and keep retrying, typically for at least four to five days, before returning it to the sender. In most cases, a backup MX is NOT a good idea. It adds complexity and more things to go wrong; unless the backup MX host is configured with the same anti-spam settings and the same knowledge of which mail to accept or reject, it can result in you getting more spam and viruses; and it will probably make you a bad neighbor, because the backup accepts mail and then bounces it later (backscatter) rather than rejecting it during the SMTP transaction.
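For context, a backup MX is nothing exotic at the DNS level: it’s just an additional MX record with a higher (i.e., worse) preference value, which senders try only when the primary is unreachable. A hypothetical zone fragment (example.com is a placeholder, not your domain):

```
; lower preference number = tried first
example.com.    IN  MX  10  mail.example.com.     ; primary
example.com.    IN  MX  20  backup.example.com.   ; backup, used when primary is down
```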
Basically, in most cases, you don’t need a backup MX. If you do decide you want or need one, make sure you really know what you’re doing, and see if your ISP provides this service; otherwise there are a number of outside services which will do this, either for free or for a small charge.
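If you do go the do-it-yourself route, here’s roughly what’s involved on the backup host. This is a minimal sketch assuming Postfix (not the Exchange box in question); the domain and map path are hypothetical:

```
# /etc/postfix/main.cf -- minimal backup-MX sketch (hypothetical domain)

# Accept mail for this domain and relay it to the primary MX
relay_domains = example.com

# Critical: without a list of valid recipients, the backup accepts
# everything and bounces the junk later -- the "bad neighbor"
# backscatter problem described above.
relay_recipient_maps = hash:/etc/postfix/relay_recipients

# Hold queued mail longer than the default ~5 days if the primary
# might be down for a while.
maximal_queue_lifetime = 10d
```

The `relay_recipient_maps` line is the part people most often skip, and it’s exactly the misconfiguration warned about above.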
HOWEVER, if there’s a backup mail server, the mail will just be queued there and then delivered to the primary mail host when it comes back online; users still can’t read it in the meantime. To get the kind of redundancy it sounds like you’re looking for (people able to access their mail even during a system failure), you’ll need a somewhat more complex setup. I don’t know how you’d go about doing this with M$ Exchange; in our system, there are a few reasons we’re able to do it:
Centralized storage over NFS (all mail machines have access to user data), and in a format which doesn’t require file locking to work properly.
Synchronization of configuration files / identical configuration (via our backend) and synchronization of alias tables etc. (in this case, using MySQL). This, of course, also creates some central points of failure.
More or less identical software on the various machines.
To be honest, all of this adds a lot of complexity, and a lot of the reason we have this redundancy built in is to deal with load sharing etc. In most cases, I think you’re going to be better off with a single machine.
Hope this makes sense; I can try to clarify further if you have any questions.