[TriLUG] NFS write performance problem
Stephen P. Schaefer
sschaefer at acm.org
Fri Dec 27 14:39:58 EST 2002
When you say "Unmounts between changes to clear caching", do you mean
unmounting (and remounting) on the server as well? If not, I suspect
your good read times come from reading the *server's* memory cache,
not the client's.
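For example, to defeat both caches between runs you could remount on
both ends -- something like this, where the server-side path is an
assumption, not taken from your setup:

$ umount /mnt/home && mount /mnt/home    # on the client
# umount /home && mount /home            # on the server, as root

(The server may refuse to unmount while the filesystem is busy or
exported; unexporting it first, or just testing locally as below,
gets around that.)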
Before we explore NFS, let's see how fast the local disk drives really
are. On the server, compile and run the following program, which
writes 256MB in 8KB chunks with a sync() after every write:
/* Copyright 2002, Stephen P. Schaefer.
   You may use this according to the GPL license, available at
   http://www.gnu.org/copyleft/gpl.html
*/
#include <unistd.h>
#include <stdlib.h>

int
main(int argc, char *argv[])
{
    int i;
    char buf[8192];   /* contents don't matter for a write test */

    /* Write 32768 x 8KB = 256MB to stdout, syncing after each
       write to mimic NFS's commit-per-write behavior. */
    for (i = 0; i < 32768; i++) {
        if (write(1, buf, 8192) != 8192) {
            write(2, "write not working!\n", 19);
            exit(1);
        }
        sync();
    }
    exit(0);
}
$ gcc -Wall ds.c
$ time ./a.out > foo
real 1m23.580s
user 0m0.043s
sys 0m1.576s
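For comparison, you can comment out the sync() call and rerun: the gap
between the two times shows what syncing after every 8KB write costs.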
That time was on my laptop, a 1.7GHz P4 with 256MB of DDR RAM: 256MB
in 83.6 seconds works out to roughly 3 MB/second once you sync after
every 8KB write.
Note the sync() call: the NFS standard demands that each write be
committed to stable storage before the server replies, which amounts
to a sync after every write. If you want to live dangerously for more
speed, you can forgo the standard and NFS export your file systems
"async" -- see the exports(5) man page.
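A minimal sketch of such an export line -- the path and client name
here are assumptions, not taken from your setup:

/home   client.example.com(rw,async)

You can verify which options are actually in effect with "exportfs -v".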
- Stephen P. Schaefer
Scott Stancil wrote:
> I am having NFS write performance issues with a backend dedicated NFS
> server (RedHat 8.0, PIII 550, 128MB ram, 100Mbps network). The client (RH
> 6.1, Kernel 2.2.19SMP with nfs-utils-0.1.9.1-1, dual PIII 550's, 1GB ram,
> 100Mbps network) is mounting the home directory off of the server.
> Integrated Intel EtherExpress 100 ethernet devices on both.
>
> Testing Methods:
> I am using the following to test the network transfers from the client to
> the server.
> Write:
> time dd if=/dev/zero of=/mnt/home/testfile bs=x count=y
>
> Read:
> time dd if=/mnt/home/testfile of=/dev/null bs=x
>
>
> Results:
> Unmounts between changes to clear caching. Reading and writing a 262MB file.
>
> X=8  Y=32768, Write=2:48, Read=0:10, 1.55 MB/second write, 24.25 MB/second read
>
> X=16 Y=16384, Write=2:48, Read=0:25, 1.55 MB/second write, 10.5 MB/second read
>
> X=32 Y=8192,  Write=2:48, Read=0:15, 1.55 MB/second write, 17.5 MB/second read
>
>
> Hard drive settings:
>
> RAID 1 EIDE drives are showing the following with "hdparm -Tt /dev/hda".
>
> /dev/hda:
> Timing buffer-cache reads: 128 MB in 1.33 seconds = 96.24 MB/sec
> Timing buffered disk reads: 64 MB in 1.39 seconds = 45.96 MB/sec
>
> /dev/hdc:
> Timing buffer-cache reads: 128 MB in 1.33 seconds = 96.24 MB/sec
> Timing buffered disk reads: 64 MB in 2.31 seconds = 27.68 MB/sec
>
> This doesn't surprise me, as I suspect that the secondary controller is
> not quite as fast as the primary controller, but it's still well above
> the read bottleneck.
>
> hdparm <device>:
>
> /dev/hda:
> multcount = 16 (on)
> IO_support = 0 (default 16-bit)
> unmaskirq = 0 (off)
> using_dma = 1 (on)
> keepsettings = 0 (off)
> readonly = 0 (off)
> readahead = 8 (on)
> geometry = 4865/255/63, sectors = 78165360, start = 0
>
> /dev/hdc:
> multcount = 16 (on)
> IO_support = 0 (default 16-bit)
> unmaskirq = 0 (off)
> using_dma = 1 (on)
> keepsettings = 0 (off)
> readonly = 0 (off)
> readahead = 8 (on)
> geometry = 77545/16/63, sectors = 78165360, start = 0
>
>
> Although this is a test server and I can mess with it as much as I want, I
> would prefer not to perform a reinstall if at all possible. :)
>
> 1. I have examples for testing disk read performance, but what might I
> use to test writes, especially to a RAID 1 slice/partition?
> 2. Is the geometry "off" on /dev/hdc, or isn't it? It has worked without
> complaint for about a month now, but these are identical disks and the
> reported geometries differ wildly. Could the difference in controllers
> cause this?
> 3. Anyone have any ideas on how to improve the pitiful write performance?
> Or perhaps how to benchmark/troubleshoot my performance a little better?
>