[TriLUG] Linux Clustering (high availability) and file systems
Jon Carnes
jonc at nc.rr.com
Tue Apr 27 15:46:23 EDT 2004
On Tue, 2004-04-27 at 14:42, Tarus Balog wrote:
> Gang:
>
> Okay, I want to make a particular file system highly available.
>
> For example, suppose, just suppose, I had a directory called
> "/var/opennms" that I wanted multiple machines to be able to write to.
> So if I had two active machines writing to that file system, and one
> died, I'd have a third machine that could come on-line and pick up
> where the failed machine left off.
>
> It has to be fast and reliable (so nothing like NFS). Has anyone worked
> with SAN equipment where we could dual attach two or more machines over
> SCSI or Fibre Channel?
>
> How do Linux clusters handle making data highly available?
>
> Relevant links and RTFM suggestions welcome.
>
I'm guessing that "Cheap" is also one of the criteria (or at least
"inexpensive").
I do this using NFS on a back-end private switched network that connects
multiple Intel boxes together - sort of the equivalent of a blade server,
but built from cheap commodity hardware. If you don't happen to like NFS
as the transport, then use rsync over ssh instead; it doesn't matter much,
since it all happens on the back-end private network.
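To give you a rough idea of the rsync-over-ssh route, here's the sort of
thing I mean (a Python sketch, just as illustration - the hostnames and
paths are placeholders for your own back-end network, and it assumes you
have passwordless ssh keys set up between the boxes):

    #!/usr/bin/env python3
    # Rough sketch only: push /var/opennms out to the standby boxes over
    # the back-end private network using rsync over ssh.  The hostnames
    # and paths below are made-up examples.
    import subprocess
    import sys

    SOURCE = "/var/opennms/"                       # trailing slash: sync the contents
    DEST = "/var/opennms/"
    STANDBYS = ["node2-backend", "node3-backend"]  # hypothetical back-end hostnames

    def push(host):
        # -a preserves permissions/ownership/times, -z compresses on the
        # wire, --delete keeps the standby an exact mirror of the primary.
        cmd = ["rsync", "-az", "--delete", "-e", "ssh",
               SOURCE, "{}:{}".format(host, DEST)]
        return subprocess.call(cmd)

    if __name__ == "__main__":
        failed = [h for h in STANDBYS if push(h) != 0]
        if failed:
            print("sync failed for: " + ", ".join(failed), file=sys.stderr)
            sys.exit(1)

Kick that off from cron every minute or two (or right after whatever
writes the data finishes), and the standby always has a current copy it
can take over with.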
Will there be multiple writes to the same files/directories? If you are
mostly doing reads, then there is no need for a SAN setup; anything that
pushes new data out to the directory structures in a timely fashion will
suffice.
Otherwise, a SAN is a nice idea.
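If you do go the cheap replication route rather than a SAN, it's worth
having a quick way to confirm the copies really are staying current.
Something along these lines (again just a sketch - the hostnames are
invented, and it assumes GNU find plus passwordless ssh to the standbys)
reports how far behind each standby's copy is:

    #!/usr/bin/env python3
    # Sketch: report how far behind each standby's copy of /var/opennms
    # is, by comparing the newest file timestamp on the primary against
    # each standby over ssh.  Hostnames are made-up examples.
    import subprocess

    PATH = "/var/opennms"
    STANDBYS = ["node2-backend", "node3-backend"]  # hypothetical back-end names

    # Shell pipeline that prints the newest mtime (seconds since the
    # epoch) of any file under PATH.  %T@ is GNU find's epoch format.
    NEWEST = r"find %s -type f -printf '%%T@\n' | sort -n | tail -1" % PATH

    def newest_mtime(host=None):
        if host is None:
            out = subprocess.check_output(NEWEST, shell=True)
        else:
            out = subprocess.check_output(["ssh", host, NEWEST])
        return float(out.decode().strip() or 0)

    if __name__ == "__main__":
        local = newest_mtime()
        for node in STANDBYS:
            print("%s is %.0f seconds behind" % (node, local - newest_mtime(node)))

If the lag ever grows past your sync interval, it's time to go look at
the back-end network.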
Jon Carnes