[TriLUG] SAN file locking

Joseph Mack NA3T jmack at wm7d.net
Sun Dec 18 19:35:25 EST 2011


Thanks everyone

On Fri, 16 Dec 2011, bak wrote:

> SAN storage is not meant to be shared in the way that you 
> describe, at the file level. Individual files are not 
> shared in SAN setups, so file locking is not a relevant 
> concept.

that clarifies things considerably.

I worked on a datacenter move recently where I had to check 
that all the machines could see the SAN. I didn't get to poke 
around and find out what the machines were doing, but they had 
their own disks plus access to the SAN. Some were seeing 
1000+ SAN devices (these were probably mostly ESX machines, 
but I'm pretty sure some regular machines were seeing a lot 
of SAN devices too). I had no idea why they were seeing all 
these SAN devices, and it wasn't my business to ask. I 
assumed that when the setup was running, many, if not most, 
of the machines were seeing the same blocks.
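
For reference, checking whether a Linux host can see its SAN 
LUNs is roughly this (multipath only applies if multipathd is 
in use):

    lsscsi                        # SCSI devices; SAN LUNs show up here
    multipath -ll                 # multipath view of the LUNs, if configured
    ls -l /dev/disk/by-path/ | grep -i fc    # FC-attached block devices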

> thorough explanation...

thanks

> SAN is all about 'limited space, one server at a time, but 
> very fast' -- databases, virtual disks... whereas NAS is 
> about monster buckets of data that are used more rarely -- 
> home directories, pictures.

On Fri, 16 Dec 2011, Greg Cox wrote:

> When you need some storage, you usually want NAS.
> When you need a disk, you usually want SAN.

sorry, that went over my head. With a SAN you have blocks, 
but you're going to have to partition them and put an FS on 
them before you can use them. After that, the only difference 
I see is whether the disks are connected internally or over 
FC or Ethernet. What am I missing?
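
To make sure I have the model right: once the SAN hands you a 
LUN, it's just a block device, roughly (device names 
hypothetical):

    # the LUN arrives as an ordinary block device, say /dev/sdb
    fdisk /dev/sdb                # partition it (or use the whole device)
    mkfs -t ext3 /dev/sdb1        # put a filesystem on it
    mount /dev/sdb1 /mnt/data     # mount it -- on ONE host at a time;
                                  # mounting a non-cluster FS r/w from two
                                  # hosts at once will corrupt it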

>> Is everyone just mounting /usr (ro) and sharing that, 
>> while mounting home directories (rw) expecting the user 
>> to handle the exclusive writing manually?
>
> That, IMO, would be a bad use of SAN, from either 
> direction.  Let's say you have 10 machines, and your SAN 
> is FC-based.
>
> 1) OK, /usr COULD be a shared r/o LUN, but.. boring. You'd 
> have your LUN visible r/o to everyone,

I thought that was the point; everyone gets a uniform /usr 
no matter which machine they come in through.
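
i.e. the same LUN presented to every host and mounted 
read-only on each; something like this line in each machine's 
/etc/fstab (device name hypothetical):

    # same LUN mapped to all 10 hosts, mounted read-only on each
    /dev/sdb1   /usr   ext3   ro   0   0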

> but.. how did you boot? It's a crying shame to spin local 
> disk AND pay for 10 HBA's.  Why not just simplify your 
> life and make 10 LUNs, say, 20GB each, and put the whole 
> OS on it?  Map then 1-to-1 to the farm, and boot that 
> 'disk'. If your storage controller is worth its salt, you 
> spent 20GB just then because the blocks are going to be 
> thin-provisioned.  They COULD un-deduplicate and cost you 
> the whole 200GB in time, but, still, 200GB in your 
> controller is still nicer than the 20 spinning drives

sounds good to me. I expect there's no point in having an 
HBA if you can't boot over it.

> (you WERE going to mirror them, right?)

I don't know where the redundancy is. I would have expected 
it at the block level, with the blocks that the SAN hands you 
already RAID'ed before you get them.
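
If the mirroring were done host-side rather than in the array 
controller, I'd guess it would be something like (device names 
hypothetical):

    # host-side mirror of two 20GB OS LUNs with md RAID1
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc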

> 2) Why would you put /home on a LUN?

Well, I assumed you'd want to make the machine that's 
executing the code independent of the box that has the 
storage. A user doesn't have to know which box he's logging 
into at any time, as long as he sees the same files.

> What happens when you have a gaming rig

I'm assuming a gaming rig is only at home and you won't be 
having a SAN at home (too expensive and not enough machines 
to share the cost of the SAN).

> and want to get your /home files on there via SMB... but 
> you have an ext3-formatted LUN?

I'm not sure if you're being rhetorical or I'm being thick. 
You export them with Samba?
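
i.e. mount the ext3 LUN on the Linux box and share the mount 
point out with Samba, roughly (paths hypothetical, init script 
name varies by distro):

    # mount the ext3-formatted LUN locally
    mount /dev/sdb1 /srv/home
    # then add a share stanza to /etc/samba/smb.conf:
    #   [home]
    #      path = /srv/home
    #      read only = no
    /etc/init.d/samba restart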

> So, you want storage, not necessarily a 'disk'.

we're back to the original problem

> Make a NAS share and have autofs mount it up when you log 
> in on Linux/Solaris, and SMB on Windows?

I do this at home. Just about all machines can see all disks, 
and the Windows laptops get their disks automounted for backup 
of "\Documents and Settings". My wife's and my son's WinXP 
laptops have a `cp -auv` run on them every hour whenever 
they're on the network.
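
Roughly, that hourly job amounts to this (share and path 
names hypothetical):

    #!/bin/sh
    # dropped in /etc/cron.hourly: pull a WinXP laptop's files over SMB
    # exit quietly if the laptop isn't on the network
    mount -t cifs //laptop/docs /mnt/laptop \
        -o credentials=/root/.smbcred || exit 0
    cp -auv /mnt/laptop/ /backup/laptop/
    umount /mnt/laptop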

but what's this to do with SAN?
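
The autofs side of Greg's suggestion would, I take it, be 
roughly this (server and map names hypothetical):

    # /etc/auto.master
    /home   /etc/auto.home   --timeout=60

    # /etc/auto.home  (NFS server name hypothetical)
    *   -fstype=nfs,rw   nas:/export/home/&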

> Oh, sure, NFS locking is nobody's friend, but, if you're 
> running something more than vi against your homedir, 
> you're probably not being a good architect.

Another one over my head I'm afraid.

>> In this case there is no real sharing. If there's no real 
>> sharing, what are SANs being used for?
>
> Boot LUNs.  Swap LUNs for stupid Oracle boxes that require 
> more swap than they'll ever use.  Backing storage for old 
> versions of Oracle that refused to do locking over NFS. 
> ESX datastores.

I saw perhaps 10 cabinets of 42U of disks the other day, all 
FC SAN. There's got to be more than boot disks and Oracle 
databases on there for the 160 machines I was checking.

On Sat, 17 Dec 2011, Aaron Joyner wrote:

> More succinctly, you use a SAN if you want to be able to:
>
> 1) aggregate the storage of multiple machines into one 
> array of physical disks, to reduce the overhead of RAID, 
> capacity planning, provisioning, etc.

yes

> 2) near-instantly move the persistent storage of a critical
> application between two machines (virtual, or otherwise)

don't know how you do this. I assume $current_machine has to 
close the file(s), something has to umount/mount, and 
$next_machine has to open the files.
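
Presumably something like this, assuming the LUN is mapped to 
both hosts and never mounted r/w on both at once (device 
names hypothetical):

    # on $current_machine: stop the app, then release the filesystem
    umount /data
    # on $next_machine: rescan for the LUN if it isn't visible yet,
    # then take over
    echo "- - -" > /sys/class/scsi_host/host0/scan
    mount /dev/sdb1 /data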

>
> The reasons you don't use a SAN are:
> 1) COST.
>
> .
> .
> Even the most basic setup is going to come at a 
> tremendous cost.  If you can design your system such that 
> you can avoid that cost, you should.
> .

Thanks Joe



-- 
Joseph Mack NA3T EME(B,D), FM05lw North Carolina
jmack (at) wm7d (dot) net - azimuthal equidistant map
generator at http://www.wm7d.net/azproj.shtml
Homepage http://www.austintek.com/ It's GNU/Linux!


