SLUG Mailing List Archives

Re: [SLUG] Backup theory


david <david@xxxxxxxxxxxxx> writes:

> I've got the following:
>
> 2 x servers - single small hard drives in each
> 1 x desktop - four hard drives including one removable drive in a caddy
> intended solely for backup purposes.
>
> I run Mondo on the two servers periodically with the intention of
> being able to do a disaster [1] recovery quickly. Mondo produces 2 DVD
> images for each server. I run rsync nightly (good enough for my
> purposes) for more volatile data such as email, databases
> etc. Everything is very tidy.
>
> The desktop has about 350G of data and software. The software is
> unbelievably complicated because I use it to test server set-ups and
> odd bits of software etc. In other words, it's a dog's breakfast.
>
> I would like to run Mondo or something similar on this machine too,
> but I fear it would not be practical. At the moment I run rsync for
> the most obvious data, but that doesn't help with all the complicated
> software, and I would like to be able to recover that too in the event
> of disaster [1].
>
> What's the current best practice for back up in this kind of
> situation?

It varies.  Personally, I take advantage of the fact that a Linux system
has no magic "metadata", so a copy of all the files is enough to perform
a bare-metal restore.
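As a minimal sketch of what I mean, assuming the spare disk is mounted
at /mnt/backup (a path I've made up for the example):

    # copy the root filesystem, preserving ownership, permissions,
    # hard links, ACLs and extended attributes
    rsync -aHAXx --numeric-ids / /mnt/backup/

The -x keeps rsync on the one filesystem, so /proc, /sys and the
backup disk itself don't get dragged in.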

I use BackupPC[1] to keep a space-efficient copy of all the
files.  In normal use the web interface is sufficient to recover from
most problems.

In a disaster I boot from a LiveCD, partition, format, and otherwise
prepare the disks, and then use a combination of the command-line tar
creation tool in BackupPC, netcat, and tar on the LiveCD to stream the
data back over the network.
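Roughly, it looks like this, with the new root mounted at /mnt on the
machine being restored (the host names, port number, and backup number
are placeholders, not anything BackupPC dictates):

    # on the machine being restored, from the LiveCD
    # (listen syntax varies between netcat variants):
    cd /mnt
    nc -l -p 9000 | tar -xpf -

    # on the BackupPC server; -n -1 means the most recent backup,
    # -s / the root share:
    BackupPC_tarCreate -h myhost -n -1 -s / . | nc restored-host 9000

BackupPC_tarCreate is the command-line tar creation tool that ships
with BackupPC.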

This is reasonably easy to achieve, but requires a little low-level
knowledge of how partitioning, etc. work under Linux.  Mondo does
capture that information much more nicely, I confess.
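If you want to capture the partition layout yourself ahead of time,
sfdisk can dump it in a form it can re-read later (assuming the disk
is /dev/sda):

    # save the partition table in a re-loadable form
    sfdisk -d /dev/sda > sda.partitions
    # later, on the replacement disk:
    sfdisk /dev/sda < sda.partitions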

> PS: On a Mac, you can usually take a hard drive out of one machine and
> put it in another and it will "just work". How much tweaking to get
> the same result on linux/ubuntu?

With a recent Debian or Ubuntu, zero.[2]  Getting X running again after
doing that /might/ take a bit of work, but not much, and the basic
system itself should be good.
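If X does sulk, regenerating its configuration is normally all it
takes on Debian or Ubuntu:

    # rebuild the X server configuration for the new hardware
    dpkg-reconfigure xserver-xorg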

If you use something less capable, like older RHEL systems, a fair bit
of work is required to get it booting, but the basic process is more or
less the same.
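The usual dance there, from the distribution's rescue boot, is
something like this (the kernel version is a placeholder):

    # chroot into the transplanted system from the rescue environment
    chroot /mnt/sysimage
    # rebuild the initrd so it contains the new machine's disk driver
    mkinitrd -f /boot/initrd-2.6.18-8.el5.img 2.6.18-8.el5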

I don't know where Fedora sits, but I presume they have also moved
toward the newer Debian-style "ship all the drivers in the initramfs,
detect the hardware at boot" strategy, and away from the older RHEL
"ship exactly what the current machine needs, hard-code everything"
model.
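On Debian and Ubuntu you can see, and change, that choice in
/etc/initramfs-tools/initramfs.conf:

    # MODULES=most -> generic initramfs, boots on nearly any hardware
    # MODULES=dep  -> only the drivers this machine needs right now
    grep ^MODULES /etc/initramfs-tools/initramfs.conf
    # after changing it, rebuild the initramfs:
    update-initramfs -u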

Regards,
        Daniel

Footnotes: 
[1]  http://backuppc.sf.net/

[2]  Technically, you need to ensure the CPU architecture is compatible,
     so an x86_64 deployment will not run on an i386-only host, but
     otherwise you are good to go.