[TriLUG] Problem with LVM
T. Bryan
tbryan at python.net
Wed May 10 23:20:29 EDT 2006
So, picking up where we left off with my LVM problem: I added /dev/hde1 and
/dev/hdg1 to the lvm.conf filters, but then lvmdiskscan was not detecting
/dev/md0 as an LVM physical volume.
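I don't have the exact line in front of me, but the filter I added was along
these lines, in the devices { } section of /etc/lvm/lvm.conf (the intent was
to keep LVM from scanning the raw partitions and only look at the md device;
the actual regexes I used may have differed):

    filter = [ "r|/dev/hde1|", "r|/dev/hdg1|", "a/.*/" ]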
Here's what I tried ('cause I was running out of ideas).
I used fdisk on /dev/hde and /dev/hdg.
I changed /dev/hde1 from Linux partition (83) to Linux LVM (8e).
I changed /dev/hdg1 from Linux partition (83) to Linux raid autodetect (fd).
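For the record, the type changes were just fdisk's interactive 't' command,
roughly like this (shown for /dev/hde; same idea for /dev/hdg with hex code fd):

# fdisk /dev/hde
Command (m for help): t          (change a partition's system id)
Partition number (1-4): 1
Hex code (type L to list codes): 8e
Command (m for help): w          (write table to disk and exit)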
Now, when I reboot, I have the same problem as before. It looks like LVM is
up and /dev/md0 has been started according to mdadm. Unfortunately,
lvmdiskscan still does not detect /dev/md0 as an LVM physical volume.
So, I tried tearing both down and bringing them back up so that the array is
assembled before LVM starts:
# mdadm -S /dev/md0
# /etc/init.d/lvm stop
# mdadm -As /dev/md0
# /etc/init.d/lvm start
Now, LVM sees /dev/md0, and I can mount it. (I've changed the mount options
to read-only for now, until I figure out whether my RAID is completely bogus;
the fstab line is sketched after the scan output below.)
# lvmdiskscan -v
  /dev/md0   [     111.79 GB] LVM physical volume
  /dev/hda1  [     101.94 MB]
  /dev/hdc1  [       5.59 GB]
  /dev/hda2  [       7.71 GB]
  /dev/hdc2  [       2.28 GB]
  /dev/hda5  [      37.26 GB]
  /dev/hda6  [      37.26 GB]
  0 disks
  6 partitions
  0 LVM physical volume whole disks
  1 LVM physical volume
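The read-only mount I mentioned above is just an fstab entry along these lines
(the VG/LV names and filesystem type here are placeholders, not necessarily
what I really have):

    /dev/datavg/datalv   /mnt/raid   ext3   ro   0   2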
Now, I'm trying to decide whether to figure out how to get all of this to come
up correctly at boot time, or whether to view the whole thing as suspect, copy
off the data, rebuild the RAID as a degraded mirror with one disk, and then add
the second disk to mirror it (roughly the mdadm sequence sketched below).
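If I go the rebuild route, I'm picturing something along these lines once the
data is safely copied off (device names as above; mdadm --create wipes whatever
is on the partitions you hand it, so this is strictly a sketch):

# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hde1 missing
  ...recreate the PV/VG/LV and filesystem on the new array, copy the data back...
# mdadm /dev/md0 --add /dev/hdg1

As for getting things to come up at boot time, here's what I have now: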
# grep default /etc/inittab
# The default runlevel.
id:2:initdefault:
# ls -1 /etc/rc2.d/S* | grep -e md -e lvm
/etc/rc2.d/S25mdadm
/etc/rc2.d/S25mdadm-raid
/etc/rc2.d/S26lvm
If any of this rings any bells or makes the problem obvious to someone, I'd
love to hear it.
Thanks,
---Tom