[TriLUG] clustering or server mirroring

John Berninger johnw at berningeronline.net
Tue Apr 19 10:26:06 EDT 2005


On Tue, 19 Apr 2005, David McDowell wrote:

> It would seem to further what I am looking for, High Availability over
> load balance... I'm not running a gazillion hit a day site, but more
> of one that just needs to not go down.  This setup will be between 2
> slots in the same rack on gigabit eth.  Yes, if the T1 goes, so goes
> the site.  The website won't be storage intensive, though it will be
> interacting with a DB.  To have a 2nd DB box might be the difficult
> purchase to make, however, a 2nd web box won't hurt too badly.  Yes, if
> the DB goes, so goes the site.  Shared storage?  Like an external SAN
> (if I have my technology right)??
> 
> Also, if you have Load Balancing instead of High Availability and one of
> your boxes dies, as long as the surviving box can handle the current load,
> haven't you also accomplished your goal?  I am thinking that if I change
> httpd.conf on box A, that I would want that change replicated to box B
> immediately.

        Ok, then LB would probably get you where you need to be.  In
your situation, you could probably get away with "shared storage" as an
NFS export from some other box.  One possibility you might want to look
at is using Piranha (LVS) with Direct Routing to LB two Apache servers,
which gives you a modicum of HA as well.  Piranha is available upstream
of RHEL, or you might be able to find a rebuild somewhere.  DR is
covered in the doc I'm attaching.
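
        To make the NFS "shared storage" piece concrete, here's roughly
what I have in mind - hostnames and paths below are just placeholders,
adjust to taste:

	# /etc/exports on the storage box (then exportfs -ra)
	/srv/www    weba(ro,sync) webb(ro,sync)

	# on each web box
	mount -t nfs storage:/srv/www /var/www/html

	# httpd.conf on both web boxes
	DocumentRoot "/var/www/html"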

        If you want more details on the idea in my head, feel free to
find me offline.

> 
> Thanks,
> David McD
> 
> *goes back to reading that article posted earlier in thread*
> 
> 
> On 4/19/05, John Berninger <johnw at berningeronline.net> wrote:
> > On Tue, 19 Apr 2005, David McDowell wrote:
> > 
> > > One and the same?  Here's my idea.  I'd like to use CentOS 4 if
> > > possible to do this.  I would like to have my webserver mirrored on
> > > another machine so that if one goes down, the site continues to run.
> > > If I change a config on one machine, the config should change on the
> > > mirrored machine.  Is this running a cluster or is this some other
> > > kind of setup?  Basically I have some time at work to play.  Any good
> > > resources for this kind of information?  Basically I want 2 servers to
> > > be identical mirrors of one another so that if one of the 2 goes down,
> > > I'm still online.  And, if I repair the broken one, it can resync
> > > itself so that the mirror of the 2 machines is identical again.
> > > Suggestions, links, etc?
> > 
> >        The same, but not.
> > 
> >        Take the following with a grain of salt, as I'm coming from the
> > standpoint of "CentOS is a rebuilt RHEL, RHEL has packages to do this,
> > why not pay up for it" despite knowing that's not gonna happen.
> > 
> >        There are two types of clustering that I know of - high
> > availability and load balancing.  HA gets you fault tolerance, LB gets
> > you greater throughput with a little bit of fault tolerance.
> > 
> >        What you describe could be either - if you just need the data
> > replicated, and the actual httpd.conf won't change, LB would be easier:
> > set up a "shared" repo for the data, point both web servers at it for a
> > DocumentRoot, and off you go.  If the httpd.conf will change, LB is still
> > possible, but more of a PITA, and then HA becomes easier.
> > 
> >        Either way, doing it "right" involves a lot of extra stuff that
> > you probably won't be able to convince your boss to pony up for - like
> > true shared storage, multiple machines, etc.
> > 
> >        Are you wanting to have tolerance between remote sites, or
> > between slots in a rack?
> > 
> > --
> > John Berninger
> > 
> > GPG Key ID: A8C1D45C
> >        Fingerprint: B1BB 90CB 5314 3113 CF22  66AE 822D 42A8 A8C1 D45C
> > 
> > Ita erat quando hic adveni.
> > --
> > --
> > TriLUG mailing list        : http://www.trilug.org/mailman/listinfo/trilug
> > TriLUG Organizational FAQ  : http://trilug.org/faq/
> > TriLUG Member Services FAQ : http://members.trilug.org/services_faq/
> > TriLUG PGP Keyring         : http://trilug.org/~chrish/trilug.asc
> >
> -- 
> TriLUG mailing list        : http://www.trilug.org/mailman/listinfo/trilug
> TriLUG Organizational FAQ  : http://trilug.org/faq/
> TriLUG Member Services FAQ : http://members.trilug.org/services_faq/
> TriLUG PGP Keyring         : http://trilug.org/~chrish/trilug.asc
> 
> 

-- 
John Berninger
                                                                                
GPG Key ID: A8C1D45C
        Fingerprint: B1BB 90CB 5314 3113 CF22  66AE 822D 42A8 A8C1 D45C

Ita erat quando hic adveni.
--
-------------- next part --------------
Piranha 0.7.7+ Direct Routing Mini-HOWTO

Scope:  This document only covers how to make direct routing work with
Piranha; it does not explain how to configure Piranha services.

Setting up Piranha:

(1) Ensure that the following packages are installed on the LVS directors:

    * piranha
    * ipvsadm

   Ensure that the following packages are installed on the LVS real servers:

    * iptables
    * arptables_jf
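
    For example, on CentOS 4 something like the following should pull
    these in, assuming the packages are available from your configured
    repositories (piranha may need to be rebuilt from the RHEL SRPM):

	# on the LVS directors
	yum install piranha ipvsadm

	# on the real servers
	yum install iptables arptables_jf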

(2) Set up and log in to the Piranha web-based GUI.  See the following link:

    http://www.redhat.com/docs/manuals/enterprise/RHEL-3-Manual/cluster-suite/ch-lvs-piranha.html
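
    A typical sequence on the director looks roughly like the following
    (the piranha-gui web interface listens on port 3636 by default):

	# set the GUI login password, then start the web interface
	piranha-passwd
	service piranha-gui start

	# then browse to http://<director>:3636/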

(3) Configure Piranha for Direct Routing.

    In the "GLOBAL SETTINGS" tab of the Piranha configuration tool, enter
    the primary server's public IP address in the box provided.  The private
    IP address is not needed/used for Direct Routing configurations.  In a 
    direct routing configuration, all real servers as well as the LVS
    directors share the same virtual IP addresses and should have the same
    IP route configuration.  Click the "Direct Routing" button to enable
    Direct Routing support on the Piranha LVS director node(s).

(4) Configure services + real servers using the Piranha GUI.
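
    The GUI writes its configuration to /etc/sysconfig/ha/lvs.cf.  As a
    rough illustration only (key names and defaults vary by Piranha
    version, and all addresses below are made up), a direct routing
    setup for two Apache real servers ends up looking something like:

	primary = 192.168.76.10
	service = lvs
	network = direct
	virtual www {
	     active = 1
	     address = 192.168.76.24 eth0:1
	     vip_nmask = 255.255.252.0
	     port = 80
	     send = "GET / HTTP/1.0\r\n\r\n"
	     expect = "HTTP"
	     scheduler = wlc
	     protocol = tcp
	     server web1 {
	         address = 192.168.76.11
	         active = 1
	         weight = 1
	     }
	     server web2 {
	         address = 192.168.76.12
	         active = 1
	         weight = 1
	     }
	}

    Once the configuration is in place, start the "pulse" service on the
    director(s) to bring LVS up.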

(5) Set up each of the real servers using one of the methods below.

===========================================================================

Setting up the Real Servers, method #1: Using arptables_jf

How it works:
    Each real server has the virtual IP address(es) configured, so they
    can directly route the packets.  ARP requests for the VIP are ignored
    entirely by the real servers, and any ARP packets which might otherwise
    be sent containing the VIPs are mangled to contain the real server's IP
    instead of the VIPs.

Main Advantages:
  * Ability for applications to bind to each individual VIP/port the real
    server is servicing.  This allows, for instance, multiple instances of
    Apache to be running bound explicitly to different VIPs on the system.
  * Performance.

Disadvantages: 
  * The VIPs cannot be configured to start on boot using standard RHEL
    system configuration tools.

How to make it work:

(1) BACK UP YOUR ARPTABLES CONFIGURATION.

(2) Configure each real server to ignore ARP requests for each of the
    virtual IP addresses the Piranha cluster will be servicing.  To do
    this, first create the ARP table entries for each virtual IP address
    on each real server (the real_ip is the IP the director uses to 
    communicate with the real server; often this is the IP bound to
    "eth0"):

	arptables -A IN -d <virtual_ip> -j DROP
	arptables -A OUT -d <virtual_ip> -j mangle --mangle-ip-s <real_ip>

    This will cause the real servers to ignore all ARP requests for the
    virtual IP addresses, and change any outbound ARP responses which 
    might otherwise contain the virtual IP so that they contain the real
    IP of the server instead.  The only node in the Piranha cluster which
    should respond to ARP requests for any of the VIPs is the current
    active Piranha LVS director node.

    Once this has been completed on each real server, save the arptables
    rules so that they persist across reboots.  Run the following
    commands on each real server:

	service arptables_jf save
	chkconfig --level 2345 arptables_jf on

    The second command will cause the system to reload the arptables
    configuration we just saved at boot - before the network is started.

(3) Configure the virtual IP address on all real servers using 'ifconfig'
    to create an IP alias:

	ifconfig eth0:1 192.168.76.24 netmask 255.255.252.0 \
		broadcast 192.168.79.255 up

    Or using the iproute2 utility "ip", for example:

	ip addr add 192.168.76.24/22 dev eth0

    As noted previously, the virtual IP addresses cannot be configured
    to start on boot using the Red Hat system configuration tools.
    One way to work around this is to place these commands in
    /etc/rc.d/rc.local, as sketched below.
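
    For example (using the same made-up VIP and netmask as above):

	# appended to /etc/rc.d/rc.local - bring the VIP alias up at boot
	ip addr add 192.168.76.24/22 dev eth0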

===========================================================================

Setting up the Real Servers, method #2: Use iptables to tell the real
servers to handle the packets.

How it works:
    We use an iptables rule to create a transparent proxy so that a node
    will service packets sent to the virtual IP address(es), even though
    the virtual IP address does not exist on the system.

Advantages:
  * Simple to configure.
  * Avoids the LVS "ARP problem" entirely.  Because the virtual IP 
    address(es) only exist on the active LVS director, there _is_ no ARP
    problem!

Disadvantages:
  * Performance.  There is overhead in forwarding/masquerading every
    packet.
  * Impossible to reuse ports.  For instance, it is not possible to run
    two separate Apache services bound to port 80, because both must
    bind to INADDR_ANY instead of the virtual IP addresses.

(1) BACK UP YOUR IPTABLES CONFIGURATION.

(2) On each real server, run the following for every VIP / port / protocol
    (TCP, UDP) combination intended to be serviced for that real server:

	iptables -t nat -A PREROUTING -p <tcp|udp> -d <vip> \
		--dport <port> -j REDIRECT

    This will cause the real servers to process packets destined for the
    VIP that are handed to them.
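
    For example, to serve HTTP on a (hypothetical) VIP of 192.168.76.24,
    each real server would need:

	iptables -t nat -A PREROUTING -p tcp -d 192.168.76.24 \
		--dport 80 -j REDIRECT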

    Once the rules are in place, save them and arrange for them to be
    restored at boot:

	service iptables save
	chkconfig --level 2345 iptables on

    The second command will cause the system to reload the iptables
    configuration we just saved at boot - before the network is started.

