SLUG Mailing List Archives
Re: [SLUG] clone non-LVM system onto new LVM drive
- To: slug@xxxxxxxxxxx
- Subject: Re: [SLUG] clone non-LVM system onto new LVM drive
- From: John Clarke <johnc+slug@xxxxxxxxxxx>
- Date: Thu, 30 Jun 2011 14:30:41 +1000
- User-agent: Mutt/1.5.18 (2008-05-17)
On Tue, Jun 14, 2011 at 02:15:18PM +1000, david wrote:
> Next, how do I persuade the new partition to boot? Do I have to do some
> magic with grub? If so, what? Do I cpio the old /boot onto the new,
> non-LVM boot partition? or can I use /boot within the new LV?
> Everything I read says to put /boot into a non-lvm partition. Does
> grub-install from a live CD give me the opportunity to spell out the
> right parameters?
I'm trying to do the same thing right now, and I've got *almost*
everything working. Now when I try to boot from the new drive, I see
some error messages flash by during boot (they're not logged to syslog
and don't appear when I run dmesg) saying that it can't write to
/lib/modules/`uname -r`/volatile because it's a read-only filesystem.
This is happening because the tmpfs that's normally mounted there isn't
being mounted, and I have no idea why. I don't know where this mount is
supposed to be done and I'm not having any success finding anything
useful via Google, and without knowing where it's done I have no idea
how to fix it.
Is there anyone out there who knows how to fix this, or who can give me
a clue or two to help me figure it out?
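In the meantime, a workaround I'm considering is just mounting the tmpfs
myself (this is a sketch that treats the symptom rather than the cause --
it assumes the tmpfs can simply be mounted by hand or pinned in fstab,
and that whatever init script normally does it can be bypassed):

```shell
# Workaround sketch, not a real fix: mount the tmpfs by hand,
# then re-run depmod so the module index can be written there.
mount -t tmpfs -o mode=0755 tmpfs /lib/modules/$(uname -r)/volatile
depmod -a

# Or pin it in /etc/fstab so it gets mounted at boot regardless
# of which init script is failing to do it:
#   tmpfs  /lib/modules/<kernel-version>/volatile  tmpfs  mode=0755  0  0
```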
This is what I've done so far:
I split the new drive (/dev/sdb) into three partitions and started a
degraded RAID 1 array on each of them. I formatted the first with ext3
(for /boot), the second as swap, and the third is my LVM physical volume,
with separate logical volumes for /, /home, /tmp, /usr and /var. I mounted all of
the new filesystems under /media/lvm and copied the files from the
existing drives, then created a new initramfs and installed grub, like this:
# these two files are used by update-initramfs to build the new initramfs
cp /proc/cmdline /media/lvm/proc/
cp /proc/modules /media/lvm/proc/
# create an mdadm.conf on the new drive
mdadm -E -s >> /media/lvm/etc/mdadm/mdadm.conf
# chroot into the new drive
chroot /media/lvm
# change the root device in the copy of /proc/cmdline, mine
# now contains "root=/dev/mapper/vg0-root ro"
# update the mounts in the new fstab to use the new RAID/LVM
# edit the new grub menu so that the kernel's root device is
# the new LVM root device (e.g. /dev/mapper/vg0-root), the
# grub root device is the new /boot partition or RAID array,
# and the kernel and initrd pathnames are relative to /boot
# create a new initrd that includes LVM and RAID support
# do this once for each kernel version you want to be able
# to boot (replace "`uname -r`" with the kernel version)
update-initramfs -c -k `uname -r`
# install grub on the new drive
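For that last step, something along these lines should do it (a sketch
assuming grub legacy, as shipped at the time; /dev/sdb is the new drive,
so adjust the device names to suit):

```shell
# Sketch, assuming grub legacy. From inside the chroot:
grub-install /dev/sdb

# Or from the host without chrooting, pointing grub at the
# new drive's filesystem tree mounted under /media/lvm:
grub-install --root-directory=/media/lvm /dev/sdb
```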
Any suggestions on how to fix my mount problem are welcome.
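For completeness, the degraded RAID 1 arrays described above can be
created along these lines (a sketch: mdadm's "missing" keyword starts a
mirror with one member absent so the second disk can be added later, and
the device names and sizes here are illustrative, not my exact layout):

```shell
# Create each RAID 1 array degraded, with the second member
# listed as "missing" so it can be added later.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb2 missing
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdb3 missing

# LVM goes on the third array (volume group and LV names/sizes
# are illustrative):
pvcreate /dev/md2
vgcreate vg0 /dev/md2
lvcreate -n root -L 10G vg0
```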
I don't know what Connect[.com.au] were thinking when they put sprinklers
in their data centre. I wonder what they'd do if you asked for a quote for
enough rack space to hold 3 servers, a router, a switch and an umbrella?
-- Richard Archer