[TriLUG] How to migrate LVM to LVM on Raid-1

Rick DeNatale rick.denatale at gmail.com
Fri Feb 17 09:37:12 EST 2006


On 2/17/06, Brian McCullough <bdmc at bdmcc-us.com> wrote:
> On Thu, Feb 16, 2006 at 04:23:45PM -0500, Rick DeNatale wrote:

> > 1. I need to change the partition type of /dev/sda5 from 8e to fd
> > "Linux raid auto"  I can't seem to figure out how to do this with
> > sfdisk, so I guess I'll do that interactively with fdisk.
>
>
> NO, NO, NO.  At least, not yet.  This is the last step -- or one of the
> last steps.  Doing this destroys your existing LVM partition. ( or at
> least has that effect.  You haven't actually overwritten anything,
> but.... )

Okay, I'm starting to understand the whole picture here.  At first I
was wondering why I had to copy the data; isn't that what md is for?
Now I think what I really need to do is create a new md array from
/dev/sdb5 and a missing partner, populate it with the data, reboot and
test, and then add /dev/sda5 to the array, at which point md will
overwrite it to be a twin of /dev/sdb5.
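That plan can be sketched as commands (a destructive operation, shown
only as illustration; the array name and mdadm options follow the later
steps in this thread):

```
$ sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb5 missing
$ cat /proc/mdstat     # a degraded mirror shows as [U_]
```

The "missing" keyword reserves the second slot so the mirror comes up
degraded, leaving /dev/sda5 untouched until the new setup boots cleanly.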


> >  5. Now create a VG
> >     $vgcreate Ubuntu /dev/md0
> >     QUESTIONS: Same question about uuid, plus is it a problem having
> > two VGs with the same name on different devices? Note that the
> > original PV, VG, and LVs are still in use.
>
>
> I ( was going to say think, actually ) know that you will have problems
> ( read -- it won't let you ) creating duplicate but different VGs in the
> same system.  The only possibility would be to make it ignore the
> existing name, but that would then invalidate your entire system so....
>
> What you want to do is use a _new_ name in this command and all of the
> others that are dealing with the new device.  "Ubuntu" isn't magic.

Right, it's a new name, and /dev/sda5 is going to become a clone after
it gets put into the new raid1 array later.
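So the PV/VG steps with a new name would look like this ("UbuntuRaid"
is purely an illustrative choice; any name that differs from the
existing "Ubuntu" works):

```
$ sudo pvcreate /dev/md0
$ sudo vgcreate UbuntuRaid /dev/md0
```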

> >   6 Create the LVs
> >     $lvcreate -l 2013 Ubuntu -n root
> >     $lvcreate -l 95 Ubuntu -n  swap_1
> >
> >      QUESTIONS:  Are there any other parameters I need here?  are the
> > short names root and swap_1 ok or do I need e.g. /dev/Ubuntu/root?

> The short names are what are expected.  The LV name is "root".  The VG
> name is "Ubuntu."
>
> I tend to express these in megabytes ( or some other unit ) rather than
> extents.  That's just me.

I guess there's no operational difference in the result, or is
there?  I got the numbers from lvdisplay.
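For reference, extent counts and explicit sizes should produce the same
layout.  If the VG uses the default 4MB extent size, the two forms are
equivalent (VG name again illustrative):

```
$ sudo lvcreate -l 2013 UbuntuRaid -n root     # or: -L 8052M (2013 x 4MB)
$ sudo lvcreate -l 95 UbuntuRaid -n swap_1     # or: -L 380M  (95 x 4MB)
```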


> >  11. Edit /mnt/root/etc/fstab to use the md device.
> >
> >    HERE'S A PLACE WHERE I'M STUMPED.
> >   Here are the lines in the current /etc/fstab for mounting / and swap
> >
> > /dev/mapper/Ubuntu-root /               ext3
> > defaults,errors=remount-ro 0       1
> > /dev/mapper/Ubuntu-swap_1 none            swap    sw              0       0
> >
> >    What if any changes do I need to make here?  Is there some LVM
> > config to substitute?
>
>
> Only the name that you chose for your new VG.

Got it.
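With the new VG name substituted (using "UbuntuRaid" as a stand-in
again), the two lines in /mnt/root/etc/fstab would become:

```
/dev/mapper/UbuntuRaid-root   /     ext3   defaults,errors=remount-ro  0  1
/dev/mapper/UbuntuRaid-swap_1 none  swap   sw                          0  0
```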

> >   11. Edit /mnt/boot/grub/menu.lst????
> >
> >   STUMPED AGAIN.  Here's an example of a kernel line from my /boot/grub/menu.lst
> > kernel          /vmlinuz-2.6.12-10-686-smp
> > root=/dev/mapper/Ubuntu-root ro quiet splash
> >
> >    DO I REALLY NEED TO CHANGE ANYTHING HERE???
>
> Only the name of the new VG.

Got it.

> > If so how does it
> > interact with ubuntu/debians automagic updates to this file on new
> > kernel package installs?
>
> No issue.  This is "/boot" for your system.

Got it.
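The menu.lst change can be done mechanically with sed; a sketch using
the kernel line quoted above, with "UbuntuRaid" as the assumed new VG
name:

```shell
# The kernel line from the old menu.lst; swap the VG name with sed.
line='kernel /vmlinuz-2.6.12-10-686-smp root=/dev/mapper/Ubuntu-root ro quiet splash'
echo "$line" | sed 's|Ubuntu-|UbuntuRaid-|g'
# Against the real file it would be:
#   sed -i 's|/dev/mapper/Ubuntu-|/dev/mapper/UbuntuRaid-|g' /mnt/boot/grub/menu.lst
```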

> >   12. Install grub on the new drive so we can still boot
> >      $sudo grub-install /dev/sdb
> >        grub
> >        grub: device (hd0) /dev/sdb
> >        grub: root (hd0,0)
> >        grub: setup (hd0)
> >        grub: quit
>
>
> At this point, although I am not a GRUB expert, I think you are wrong.
> If, as I expect, we are going to physically swap the two SCSI devices,
> you won't be booting /dev/sdb any more.  Also, this is suggesting to me
> that your /boot is on /dev/hda.  ( all of those hd0 lines ) Unless the
> device command is telling it that sdb IS hd0?  Which, if you swap
> drives, would no longer be correct.  If however, you don't swap drives
> and just boot to /dev/sdb instead of /dev/sda -- I see where you are
> going.  It just might work.  Again, ask a GRUB expert about the
> possibilities.

I wasn't planning on making any physical configuration changes.

Grub uses a BIOS-based numbering system for drives; (hd0,...) doesn't
(necessarily) map to /dev/hda, it maps to the first drive in the BIOS
list.

That said, I think I need to s/hd0/hd1/g in the above.  My
current setup has hd0 in the menu.  What I'm doing by installing grub
here is allowing a boot from either boot partition: /dev/sda1, (hd0,0)
in grub parlance, or /dev/sdb1, (hd1,0), in case one of the drives
fails.
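With that substitution, the install session for the second drive would
look like this (assuming /boot lives on /dev/sdb1 and the BIOS
enumerates /dev/sdb as its second disk):

```
$ sudo grub
grub> device (hd1) /dev/sdb
grub> root (hd1,0)
grub> setup (hd1)
grub> quit
```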

> >      QUESTION: The debian-admin article had this as grub-install
> > /dev/sda but this surely must be a typo since grub is already
> > installed on /dev/sda YES?

> True.

And the grub installation on /dev/sda will still be there and usable,
since only /dev/sda5 goes into the md array.


> >     14. Copy the new /etc/fstab and grub configuration to the old drive
> >         $sudo cp -p /mnt/root/etc/fstab /etc
> >         $sudo cp -p /mnt/boot/grub/menu.lst /boot/grub
>
>
> Not yet.  You want to leave the old drive as clean as possible.

Okay, so I leave it for now and change the boot order in BIOS to test
the new drive.

> >    15. Cross fingers and reboot
> >         QUESTION: How do I know it worked.  The debian-admin says to
> > use df and see that / is mounted on /dev/md0, but I think that it's
> > still going to show it mounted on /dev/mapper/Ubuntu-root
>
>
> True -- except that it is no longer "Ubuntu".  It can't be.
>
> Test thoroughly!  At least far enough to make sure that this is
> the "same machine."
>
>
> >    16. Now add /dev/sda5 to the raid
> >      $sudo mdadm --add /dev/md0 /dev/sda5
>
>
> HERE is where you change the partition type of /dev/sda.
> At this point you are committed.

I'm assuming you mean here that the change is a side-effect of the
mdadm --add command, right?
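For what it's worth, mdadm --add does not rewrite the partition table
itself, so the type change would be its own step.  A sketch, assuming
the sfdisk of that era, which could set a partition id
non-interactively:

```
$ sudo sfdisk --id /dev/sda 5 fd    # mark partition 5 as "Linux raid autodetect"
$ sudo mdadm --add /dev/md0 /dev/sda5
```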

> >     17. Wait until the md finishes syncing the drives
> >      $watch cat /proc/mdstat
>
>
> Yup.
>
>
>
> >      It's done when this shows a status of [UU] for /dev/md0
>
>
> Correct.

And in the future, if a drive (say sdb) fails, am I correct that the
recovery is simply:

a) $mdadm /dev/md0 -f /dev/sdb5 -r /dev/sdb5
     The fail might not be necessary, since md will probably already
have noticed.
b) Install and partition a new drive
c) $mdadm /dev/md0 -a /dev/sdb5
d) Wait for the sync to complete

Yes?
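Spelled out with the long option names, that recovery would be (a
sketch, assuming the replacement drive also shows up as /dev/sdb):

```
$ sudo mdadm /dev/md0 --fail /dev/sdb5 --remove /dev/sdb5
$ sudo sfdisk -d /dev/sda | sudo sfdisk /dev/sdb   # clone the partition table
$ sudo mdadm /dev/md0 --add /dev/sdb5
$ watch cat /proc/mdstat                           # done when it reads [UU]
```

The sfdisk dump/restore line is one common way to give the new drive
the same layout as the survivor.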

--
Rick DeNatale

Visit the Project Mercury Wiki Site
http://www.mercuryspacecraft.com/
