Yes, we saw significant performance improvements with 4MB RPCs: not only higher peak
performance, but also high sustained performance even with heavy concurrent access to the OSTs.
max_dirty_mb is one important parameter for 4MB RPCs, but it is now automatically set
to a suitable value based on max_pages_per_rpc and max_rpcs_in_flight. (see LU-4933)
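As a minimal sketch of the client-side tuning described above (device wildcards and the 1024-page value are examples, not prescriptive):

```shell
# On a Lustre client: request 4MB RPCs on all OSC devices
# (1024 pages x 4KB page size = 4MB per RPC)
lctl set_param osc.*.max_pages_per_rpc=1024

# Verify the related settings; with LU-4933, max_dirty_mb defaults to a
# value derived from max_pages_per_rpc and max_rpcs_in_flight, so it
# normally does not need to be set by hand
lctl get_param osc.*.max_rpcs_in_flight
lctl get_param osc.*.max_dirty_mb
```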
More importantly, however, you need end-to-end 4MB I/O from client to disk. That is,
the clients must send 4MB RPCs to the server, but the OSS also needs to pass an efficient
I/O size down to the OSTs. I believe you are missing this part.
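One way to check whether full-sized I/O is actually reaching the backend storage is to look at the brw_stats histograms on the OSS; a sketch (device wildcard is an example):

```shell
# On the OSS: the brw_stats "disk I/O size" histogram shows whether
# writes hit the backend at the full RPC size or are being fragmented
# into smaller chunks before reaching the OSTs
lctl get_param obdfilter.*.brw_stats
```

If most disk I/Os land in small size buckets despite 4MB RPCs arriving from clients, the fragmentation is happening on the server side.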
On Oct 23, 2014, at 10:16 AM, Simmons, James A.
So recently we moved our systems from 1.8 to 2.5 clients and have lost some of the
performance we had before, which is expected. So I thought we could try using
4MB RPCs instead of the default 1MB RPC size. I set max_pages_per_rpc to 1024
and looked at the value of max_dirty_mb, which was 32, and max_rpcs_in_flight, which
is 8. By default a dirty cache of 32MB should be enough in this case. So I tested it and
saw no performance improvement. After that I boosted max_dirty_mb to 64 and still saw
no improvement over the default settings. Has anyone seen this before? What could
I be missing?
HPDD-discuss mailing list