[TriLUG] Department server config suggestions please.
David A. Cafaro
dac at trilug.org
Wed Aug 4 08:11:36 EDT 2004
On Wed, 2004-08-04 at 05:46, Matthew Lavigne wrote:
> On Tue, 03 Aug 2004 21:32:27 -0400, David A. Cafaro <dac at trilug.org> wrote:
> >
> > Last I checked it doesn't; each member partition of a RAID5 system needs
> > to be the same size. So you would have to lose 1.5GB on each disk
> > after sdb based on your layout.
>
> I disagree with you here, if you are doing RAID5 in HW (really the
> only place to do it). You use all the disks to make one large RAID5
> disk, usually equal to the size of a single disk x4, and you partition that.
> Therefore the OS only sees an sda. I have that on a system that is
> sda - sde: each "drive" is a RAID5 device and the OS sees it as a drive.
Actually I think that is in agreement with what I had said (though I
didn't mention hardware RAID, which of course is the better choice if
you have it). I'm assuming that all the HDs in your hardware RAID
setup are the same size? That is the same as making all the
software RAID partitions the same size; with software RAID, instead of
the controller presenting a new sda at 4/5 of the raw capacity, you get
an md device (e.g. /dev/md0) at 4/5 of the raw capacity, but it's the
same basic thing. So it acts just like your example, with /dev/md0 in
place of /dev/sda.
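For example, a software RAID5 across four equal-size partitions would
look roughly like this (just a sketch, the device names are made up for
illustration, check mdadm(8) for the details on your distro):

  # build a 4-member RAID5 array out of equal-size partitions
  mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
  # then treat /dev/md0 like any other block device
  mkfs.ext3 /dev/md0
  mount /dev/md0 /storage
  # and keep an eye on the array with
  cat /proc/mdstat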
> Example:
>
> [lavigne at avtestsvr lavigne]$ df -h
> Filesystem Size Used Avail Use% Mounted on
> /dev/sda1 1.4G 415M 986M 30% /
> /dev/sda2 43G 38G 3.6G 92% /home
> /dev/sda3 6.2G 2.0G 3.9G 34% /usr
> /dev/sda5 5.8G 53M 5.4G 1% /tmp
> /dev/sda6 3.4G 334M 2.8G 11% /var
> /dev/sda7 2.9G 195M 2.5G 7% /opt
> /dev/sdb1 67G 64G 0 100% /ISOs
> /dev/sdc1 133G 73G 54G 58% /storage
> /dev/sdc2 133G 72G 54G 57% /mnt/sdc2
> /dev/sdd1 133G 51G 75G 41% /builddrive
> /dev/sdd2 133G 95G 31G 75% /mnt/sdd2
> /dev/sde1 367G 130G 218G 38% /images
> [lavigne at avtestsvr lavigne]$
>
>
> > In my world it's always nice to run the OS on RAID5 if you have the
> > option. Remember that it's not just the downtime to reinstall the OS.
> > On RAID5 the system isn't going to stop running when a disk fails (it
> > will slow down). You can even put /boot on the RAID, and with some
> > careful grub-in-MBR setup you can make sure that your system
> > always reboots even with a failed disk. Of course this is all about
> > making the system hard to knock down.
>
> I agree completely here. The setup above has been up and running for
> over 18 months with a total of 9 disk failures in that time (great
> thing about developmental hard disks) and I have never lost the OS or
> data. (Knock on wood.)
>
> Matthew
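To flesh out the grub-in-MBR bit I mentioned above, the usual trick
(sketch only, the device and partition numbers are examples, and it
assumes /boot sits on a RAID1 so legacy grub can read each member as a
plain filesystem) is to install grub into the MBR of every member disk,
mapping each one as hd0 so the box still boots if any single disk dies:

  grub> device (hd0) /dev/sda
  grub> root (hd0,0)
  grub> setup (hd0)
  grub> device (hd0) /dev/sdb
  grub> root (hd0,0)
  grub> setup (hd0)
  grub> quit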
--
David A. Cafaro
dac(at)cafaro.net
Admin to User: "You did what!?!?!"