I've been given a project whose only objective is to monitor a network's NFS performance. I know there are a bunch of open source tools out there, but I'd still like to understand the basic idea behind them so I can tweak them more effectively. The network consists of a few hundred Linux systems and a few thousand accounts with NFS-mounted home dirs; the script can be pushed out to every station, and running it on the server is also possible if that does any good. AFAIK, essentially all the script should do is a few dd's and watch the I/O rate over NFS.
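To make the "few dd's" idea concrete, here is a minimal sketch of the kind of check I have in mind (the path, file size, and cache-drop step are placeholders, not settled choices):

    # write 512 MB into the NFS-mounted home dir; dd prints the throughput when done
    dd if=/dev/zero of=/home/$USER/nfs-test.tmp bs=1M count=512 conv=fdatasync
    # drop the local page cache (needs root) so the read below really goes over the wire
    echo 3 > /proc/sys/vm/drop_caches
    dd if=/home/$USER/nfs-test.tmp of=/dev/null bs=1M
    rm /home/$USER/nfs-test.tmp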
My question is simply: what is the proper way of doing this? Should I add a dedicated account to the systems solely to run the scripts?
Some general thoughts are greatly appreciated :)
Bonnie
A classic performance evaluation tool. The main program tests database-type access to a single file (or a set of files if you wish to test more than 1 GB of storage), and it tests creation, reading, and deletion of small files, which can simulate the usage of programs such as Squid, INN, or Maildir-format email.
Relevance to NFS: performance testing, workload simulation.
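As a rough sketch of how Bonnie (or its maintained successor, bonnie++) might be pointed at an NFS mount (the directory, size, and user below are assumptions, not recommendations):

    # -d: NFS-mounted test directory, -s: file size in MB (keep it larger than client RAM
    # to defeat caching), -n: small-file count in multiples of 1024, -u: user to run as
    bonnie++ -d /mnt/nfs/bench -s 4096 -n 128 -u nfsbench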
DBench
Dbench was written to allow independent developers to debug and test Samba. It is heavily inspired by NetBench, the benchmark originally used to exercise Samba.
Like NetBench, it lets you:
- torture the file system
- drive the network load independently of the disk I/O
- measure performance
but it does not need as much hardware as NetBench to run.
Relevance to NFS:
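A hedged example of a dbench run against an NFS mount (the directory, duration, and client count are placeholders):

    # simulate 8 concurrent clients hammering the mount for 60 seconds;
    # dbench reports the aggregate throughput when it finishes
    dbench -D /mnt/nfs/bench -t 60 8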
IOzone
A performance test suite, POSIX and 64-bit compliant. This is the file system test from the L.S.E. Main features:
- POSIX async I/O, mmap() file I/O, normal file I/O
- single-stream and multiple-stream measurement, distributed file server measurements (cluster)
- POSIX pthreads, multi-process measurement
- selectable measurements with fsync, O_SYNC
- latency plots
Relevance to NFS: performance testing. Good for exercising a given mount point under various load conditions.
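For illustration, an IOzone invocation that is often sensible over NFS (the -c and -e flags include close() and flush times, so NFS commit costs are not hidden by the client cache; the path and sizes are assumptions):

    # sequential write (-i 0) and read/re-read (-i 1) of a 1 GB file in 64 KB records
    iozone -c -e -i 0 -i 1 -r 64k -s 1g -f /mnt/nfs/bench/iozone.tmp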
Full details can be found here: http://wiki.linux-nfs.org/wiki/index.php/Testing_tools