[TriLUG] SAN file locking

bak bak at picklefactory.org
Mon Dec 19 09:52:05 EST 2011


On 12/18/11 7:35 PM, Joseph Mack NA3T wrote:
> Thanks everyone

Anytime Joe; this is a topic I actually know something about, so glad
to share. :)

> sorry that went over my head. With a SAN you have blocks, but you're
> going to have to partition them and put a FS on them before you use
> them. After that the only difference I see is whether the disks are
> connected internally or over FC or ethernet. What am I missing?

Let me put it another way.

With NAS, the NAS controller owns the blocks and the filesystem and has
an /etc/exports file.

With SAN, the SAN controller owns the blocks, but not the filesystem.
The individual servers connecting to the SAN handle that.
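
To make that concrete, here's a rough sketch of what a server does in
each case (hostnames, paths, and device names are made up for
illustration):

  # NAS: the filer already has a filesystem on the storage;
  # the client just mounts the export over the network
  mount -t nfs nas01:/export/data /mnt/data

  # SAN: the array hands the server a raw LUN (say it shows up
  # as /dev/sdb); the server puts its own filesystem on it
  mkfs.ext4 /dev/sdb
  mount /dev/sdb /mnt/data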

Some operating systems are OK with having a read-only filesystem
attached. But solutions like this haven't appeared in the SAN space,
because the problem to be solved would have to be:

-- Useful even with a read-only filesystem
-- Requiring the sort of low-latency performance SAN provides
-- Not more cheaply and easily deployed with a r/o NFS export

So your discussion with Greg below is interesting from a theoretical
point of view, but I'm not aware of any SAN deployments in the real
world that look like this.

By the way, SAN is not only FC -- iSCSI and Fibre Channel over Ethernet
(FCoE) are also considered SAN protocols, since like FC they basically
carry encapsulated SCSI commands.

> I don't know where redundancy is. I would have expected it at the block
> level and the blocks that the SAN hands you are already RAID'ed before
> you get them.

Usually you can specify what sort of RAID level you'd like (5, 6, 1,
and 0+1 are common). The idea is that any disk problems or failures
will be invisible to the server.

Disk arrays that don't do this are known as JBOD (just a bunch of
disks), and it's up to the host to handle RAID itself, either with a
RAID-capable HBA (host bus adapter) or in software.
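
For example, a Linux host with a plain HBA and a JBOD shelf would
typically build the array itself with software RAID, something like
this (device names hypothetical):

  # assemble four JBOD disks into a host-managed RAID 5 array
  mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde
  mkfs.ext4 /dev/md0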

>> What happens when you have a gaming rig?
> 
> I'm assuming a gaming rig is only at home and you won't be having a SAN
> at home (too expensive and not enough machines to share the cost of the
> SAN).

Yes.

> I saw perhaps 10 cabs of 42U of disks the other day, all FC SAN. There's
> got to be more than boot disks and Oracle databases on there for the 160
> machines I was checking.

MS Exchange, business analytics, big databases, VMware guests. It's
common for servers in SAN environments to have a tiny boot drive while
everything else lives on shared SAN storage. Especially VMware. Those
10 cabinets of disks plus another cabinet or two of beefy x86 servers
with gobs of RAM might have been 30 cabinets of stand-alone servers,
each with its own barely-utilized 3-5 disks. Now they have one or two
disks, or they boot from the SAN.

Also, with SANs, density varies hugely for performance reasons. Every
disk in a SAN has a maximum number of I/O operations per second (IOPS)
it can handle. I'm really simplifying here, but all other things being
equal, 28 drives of 500GB each are going to be faster than 14 drives
of 1TB each. More spinning disks means more spindles available to
satisfy read/write requests.
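
As a rough back-of-envelope (the per-spindle number is a ballpark I'm
assuming, not anything exact): if each drive can handle on the order of
150-200 IOPS, 28 spindles give you roughly 4,200-5,600 aggregate IOPS,
while 14 spindles give you roughly 2,100-2,800. Same 14TB of raw
capacity either way, but about half the I/O capability.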

Of course the SAN controller is doing some coordinating behind the
scenes to make reads and writes across all those disks as efficient as
possible, and there's certainly a point of diminishing returns, but
plenty of folks are happy with more disks and less storage for
performance reasons.

And of course SSDs are quickly changing the way all this works.

I agree with everything Aaron said. :)

--bak


