[TriLUG] Problem with LVM

T. Bryan tbryan at python.net
Sun Jun 4 15:55:49 EDT 2006


I tried rebuilding everything.  My only remaining problem is that things don't 
quite work on boot.  Details are below, but after I boot and log in, the 
quickest way to get everything working is to run
/etc/init.d/lvm stop
mdadm -S /dev/md0
mdadm -As /dev/md0
/etc/init.d/lvm start

Then it all seems to work fine.  Odd, eh?

On Saturday 03 June 2006 10:42 pm, Brian McCullough wrote:
> > Do I need to do anything special to get mdadm and LVM to forget about the
> > previous configuration?  I figured that I'd remove /etc/mdadm/mdadm.conf.  I
>
> Remove or edit.
>
> The fdisk "automatic raid" type, along with the configuration file, is
> the signal to mdadm.

Okay.  That seems to work fine.  After I reboot, if I cat /proc/mdstat, I get 
Personalities : [raid1]
md0 : active raid1 hde[0] hdg[1]
      117220736 blocks [2/2] [UU]

unused devices: <none>
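
(In case it helps anyone else cleaning up an old array: I believe the 
standard way to regenerate the file is to scan the running arrays.  The 
DEVICE line here is an assumption based on the whole-disk members shown 
above.)

# echo 'DEVICE /dev/hde /dev/hdg' > /etc/mdadm/mdadm.conf
# mdadm --detail --scan >> /etc/mdadm/mdadm.conf

mdadm --detail --scan emits an ARRAY line with each running array's UUID, 
which is what "mdadm -As" matches against at assemble time.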


Unfortunately, LVM doesn't seem to see /dev/md0 initially.  Perhaps there's 
something screwed up in the boot order or in when modules are loaded?  The 
machine boots, and dmesg has info about the RAID
md: md driver 0.90.3 MAX_MD_DEVS=256, MD_SB_DISKS=27
md: bitmap version 4.39
md: raid1 personality registered as nr 3
md: md0 stopped.
md: bind<hdg>
md: bind<hde>
raid1: raid set md0 active with 2 out of 2 mirrors
md: md1 stopped.
md: md1 stopped.

But I don't see any messages that look like they're from LVM.  The LVM 
"device" is not there  (in /dev/mapper/ or at /dev/localvg).

At this point, lsmod includes
raid1                  18048  1
md_mod                 61396  1 raid1
dm_mod                 51512  0

The runlevel command says that I'm at runlevel 2.

I have the following symlinks in /etc/rc2.d/
S25mdadm
S25mdadm-raid
S26lvm

I tried switching lvm to S24lvm.  No change in behavior.
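
For the record, the switch was just renaming the symlink so that it sorts 
ahead of the two mdadm scripts:

# cd /etc/rc2.d
# mv S26lvm S24lvm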


So, I boot, and mdadm seems to start /dev/md0 fine.  At this point, 
lvmdiskscan reports
  /dev/md0  [      111.79 GB]
  /dev/hda1 [      101.94 MB]
  /dev/hdc1 [        5.59 GB]
  /dev/hda2 [        7.71 GB]
  /dev/hdc2 [        2.28 GB]
  /dev/hda5 [       37.26 GB]
  /dev/hda6 [       37.26 GB]
  0 disks
  7 partitions
  0 LVM physical volume whole disks
  0 LVM physical volumes

That seems odd.  So, I restart LVM.

# /etc/rc2.d/S26lvm stop
Shutting down LVM Volume Groups...
  No volume groups found
# /etc/rc2.d/S26lvm start
Setting up LVM Volume Groups...
  Reading all physical volumes.  This may take a while...
  No volume groups found
  No volume groups found
  No volume groups found

Odd.  Still no volume group found.  Then, 

# mdadm -S /dev/md0
# mdadm -As /dev/md0
mdadm: /dev/md0 has been started with 2 drives.

Now, if I run lvmdiskscan, it sees the LVM physical volume
  /dev/md0  [      111.79 GB] LVM physical volume

But I still don't have the device.
# mount /opt
mount: special device /dev/localvg/lv_local1 does not exist
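
As far as I can tell, the lvm init script is doing little more than a vgscan 
plus a vgchange, so the by-hand equivalent at this point would presumably be 
(volume group name per my config):

# vgscan
# vgchange -ay localvg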

So, finally, I restart LVM again.

# /etc/rc2.d/S26lvm stop
Shutting down LVM Volume Groups...
  0 logical volume(s) in volume group "localvg" now active

# /etc/rc2.d/S26lvm start
Setting up LVM Volume Groups...
  Reading all physical volumes.  This may take a while...
  Found volume group "localvg" using metadata type lvm2
  /dev/localvg: opendir failed: No such file or directory
  1 logical volume(s) in volume group "localvg" now active

Now, I can see the logical device in /dev/mapper/ and at /dev/localvg.  
Strange.  I could script this workaround to run every time I boot (see the 
sketch below), but it sounds like something is broken.  I'd prefer to fix it 
now while I'm messing with it.  Unfortunately, I'm not sure what I should be 
expecting.  Does this behavior sound broken to you?
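
If I do end up scripting it, I imagine it would just be the sequence from the 
top of this mail, e.g. a hypothetical /etc/rc.local snippet:

#!/bin/sh
# Kludge: tear down and re-assemble md0 so LVM finds the PV on it.
/etc/init.d/lvm stop     # deactivate anything LVM half-started
mdadm -S /dev/md0        # stop the array
mdadm -As /dev/md0       # re-assemble it from mdadm.conf
/etc/init.d/lvm start    # rescan PVs and activate localvg

But that would just paper over whatever the real ordering problem is.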

---Tom


