The basic idea is simple. You have disk throughput (OSS) and network capacity (the size of
the pipe). Whichever has the least bandwidth will be the limiting factor in your
configuration. The MDS will have an impact on latency (how fast you can do a transaction,
such as a file create).
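For a first-order estimate, the math is just min(disk, network). A purely
illustrative example with made-up numbers (not measurements):

    # hypothetical single OSS -- every figure here is an assumption
    disk:     2 x RAID-6 (8+2) arrays, ~800 MB/s streaming each  = ~1.6 GB/s
    network:  1 x FDR InfiniBand port                            = ~6.0 GB/s
    estimate: min(1.6, 6.0) x ~0.9 Lustre efficiency             = ~1.4 GB/s per OSS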
RAID controller and RAID configuration are important. Server backplane/chipset can also
have a large impact on performance.
Typically, Lustre will see about 90% of the raw disk performance.
We usually use sgpdd-survey (from lustre-iokit) or other low-level tools to verify the
raw disk throughput.
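A typical sgpdd-survey invocation looks something like this (parameter names and
defaults vary between lustre-iokit versions, so check the script itself; the sg
devices are placeholders, and the survey overwrites them, so run it before
formatting the OSTs):

    # destructive test -- /dev/sg0 and /dev/sg1 are placeholders
    size=8192 crglo=1 crghi=16 thrlo=1 thrhi=32 \
        scsidevs="/dev/sg0 /dev/sg1" ./sgpdd-survey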
Once you know how much I/O you can get from your disks, the next question is whether
your network can sink that much I/O.
Lustre includes a very useful tool, lnet_selftest, which can be used to measure network
performance independently of the filesystem.
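A minimal lnet_selftest session, along the lines of the example in the Lustre
manual (the NIDs are placeholders for your own client and server nodes):

    modprobe lnet_selftest
    export LST_SESSION=$$
    lst new_session rw_test
    lst add_group clients 192.168.1.10@tcp
    lst add_group servers 192.168.1.20@tcp
    lst add_batch bulk
    lst add_test --batch bulk --from clients --to servers brw write size=1M
    lst run bulk
    lst stat clients servers    # interrupt with Ctrl-C once you have enough samples
    lst end_session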
And of course, all of the above is predicated on the notion that your compute nodes can
sink the I/O from the disks.
The reality is that, because of the number of parts and the distributed nature of Lustre,
it's a bit complex, and best done in a bottom-up fashion:
Test the disk, test the server network, add clients, add workload.
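For that last step, a parallel benchmark such as IOR (my example, not something
the thread prescribes) run across the clients gives the end-to-end number; the
mount point, sizes, and process count below are placeholders:

    # file-per-process streaming write then read, 1 MB transfers, 4 GB per task
    mpirun -np 32 ior -w -r -F -t 1m -b 4g -o /mnt/lustre/ior_test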
As Colin mentioned, because of the nature of Lustre and the differing nature of Lustre
workloads, it is difficult to estimate performance in the abstract; benchmarking the real
hardware is the usual method. (We also find that the performance of real hardware is
sometimes quite different from a vendor's estimate!)
However, you can benchmark a subset of your hardware (one OSS, etc) and extrapolate from
that data in many cases.
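Again purely illustrative, with assumed numbers rather than measurements:

    1 OSS, measured:                      ~1.4 GB/s
    8 identical OSSes, striped workload:  8 x 1.4 = ~11 GB/s aggregate

    ...provided there are enough clients (and client-side network) to drive it.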
From: Michael McManus <firstname.lastname@example.org>
Date: Thursday, February 28, 2013 6:18 AM
Subject: [HPDD-discuss] Predicting Lustre I/O Rates
Does anyone have a software tool or a mathematical formula to predict/estimate
read/write rates on a Lustre system?
I have a specific OSS design, a selected hard disk type, a specific MDS node design, and
the size of the pipe between the compute nodes, MDS node and the OSS nodes.
What other factors am I missing? RAID controller information?
I am using CentOS 6.3 with Lustre 2.3.