Lustre jobstats on ost - min, max, sum
by Scott Nolin
Can anyone explain what the min, max, and sum values mean in jobstats for OSTs?
The values for each job/timestamp look like this:
read: { samples: 102, unit: bytes, min: 4096, max: 1048576, sum: 45989888 }
write: { samples: 1123, unit: bytes, min: 178, max: 1048576, sum: 1012315079 }
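For context, the output above is the per-job statistics read on the OSS side; assuming job stats are already enabled, a minimal way to dump them for all OSTs is something like:
# lctl get_param obdfilter.*.job_stats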
Thank you,
Scott
7 years, 9 months
mount mdt with external journal where journal device changed
by Brock Palen
Is there a way to pass ldiskfs (formatted under Lustre 1.8) an option like ext3's journal_path or journal_dev?
When I try, I get errors:
LDISKFS-fs (sdb): Unrecognized mount option "journal_path=/dev/sdc" or missing value
LDISKFS-fs (sdb): Unrecognized mount option "journal_dev=8:32" or missing value
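If ldiskfs will not accept those options, one possible workaround is to recreate the external journal on the new device and re-attach it. This is only a sketch (device names illustrative, and it assumes the filesystem is clean with no unreplayed journal data), so please verify against the e2fsprogs documentation first:
# mke2fs -O journal_dev -b 4096 /dev/sdc    (recreate the external journal; block size must match the ldiskfs block size)
# tune2fs -f -O ^has_journal /dev/sdb       (drop the stale reference to the old journal device)
# tune2fs -j -J device=/dev/sdc /dev/sdb    (attach the newly created external journal)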
Thanks,
Brock Palen
www.umich.edu/~brockp
CAEN Advanced Computing
XSEDE Campus Champion
brockp(a)umich.edu
(734)936-1985
7 years, 9 months
LAD'14: last week of early-bird registration
by THIELL Stephane
Hello all,
Some news for the next LAD European workshop...
*LAD'14 Agenda is online*
The LAD'14 Program Committee is happy to announce that it has finalized
the agenda, which is now only subject to slight changes. We think it is
a great agenda! Please take a look at:
http://www.eofs.eu/?id=lad14
*Early-bird registration*
Early registrants save 100 EUR if they register before August 27th,
2014, so don't forget to register at:
http://lad.eofs.org/register.php
The registration deadline is September 12, 2014.
Thanks to the following sponsors for making this event possible:
Bull, CEA, Cray, DataDirect Networks, Intel and Xyratex!
LAD'14 will take place in Reims, France, on September 22-23 at Domaine
Pommery, a true traditional Champagne house! Reims is easy to reach
thanks to the French high-speed train, which connects the town to
hundreds of destinations and is only 45 minutes from Paris. LAD is a
great opportunity for Lustre administrators and developers worldwide to
gather and exchange their experiences, developments, tools, good
practices and more. Expect everything you loved about last year's
successful workshop, and even more.
If you have any questions, please contact us at lad(a)eofs.eu.
We look forward to seeing you at LAD'14 in Reims!
--
Stéphane
for the LAD Organizing Team
7 years, 9 months
Re: [HPDD-discuss] [Lustre-discuss] Lustre and ZFS notes available
by Scott Nolin
Hi Andrew,
Much of this information is in note form rather than a finished format, so
it's mostly a question of how much time we have.
The other issue is that contributing to the manual is somewhat cumbersome,
as you have to submit patches:
https://wiki.hpdd.intel.com/display/PUB/Making+changes+to+the+Lustre+Manu...
The bar there is a bit higher - we have to be pretty confident the
information that's added is correct, know whether it applies only to some
versions of Lustre, and so on, as opposed to simply "here are our notes
that work for us".
I will try to review what we have, and if anything looks really incorrect
or missing in the Lustre manual we will attempt to issue a patch.
I think in general the Lustre manual is correct, but not always
sufficient. The process does at least make sure incorrect material doesn't
go in, but it also makes it hard to add information.
Scott
On 8/14/2014 6:13 AM, Andrew Holway wrote:
> Hi Scott,
>
> Great job! Would you consider merging with the standard Lustre docs?
>
> https://wiki.hpdd.intel.com/display/PUB/Documentation
>
> Thanks,
>
> Andrew
>
>
> On 12 August 2014 18:58, Scott Nolin <scott.nolin(a)ssec.wisc.edu> wrote:
>
> Hello,
>
> At UW SSEC my group has been using Lustre for a few years, and
> recently Lustre with ZFS as the back end file system. We have found
> the Lustre community very open and helpful in sharing information.
> Specifically information from various LUG and LAD meetings and the
> mailing lists has been very helpful.
>
> With this in mind we would like to share some of our internal
> documentation and notes that may be useful to others. These are
> working notes, so not a complete guide.
>
> I want to be clear that the official Lustre documentation should be
> considered the correct reference material in general. But this
> information may be helpful for some -
>
> http://www.ssec.wisc.edu/~scottn/
>
> Topics that I think of particular interest may be lustre zfs install
> notes and JBOD monitoring.
>
> Scott Nolin
> UW SSEC
7 years, 9 months
OST Tuning sgpdd-survey question
by Kumar, Amit
Dear All,
I read the following note on tuning storage devices from: http://wiki.lustre.org/manual/LustreManual20_HTML/BenchmarkingTests.html
<snip>
24.2.1 Tuning Linux Storage Devices
To get large I/O transfers (1 MB) to disk, it may be necessary to tune several kernel parameters as specified:
/sys/block/sdN/queue/max_sectors_kb = 4096
/sys/block/sdN/queue/max_phys_segments = 256
/proc/scsi/sg/allow_dio = 1
/sys/module/ib_srp/parameters/srp_sg_tablesize = 255
/sys/block/sdN/queue/scheduler
</snip>
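As a concrete illustration, assuming the OST LUN is /dev/sdb and an SRP/InfiniBand attachment, the settings quoted above could be applied at runtime roughly as follows (the scheduler value is only an example, and on newer kernels max_phys_segments is replaced by max_segments; the ib_srp parameter may need to be set at module load time if its sysfs file is read-only):
# echo 4096 > /sys/block/sdb/queue/max_sectors_kb
# echo 256 > /sys/block/sdb/queue/max_phys_segments
# echo 1 > /proc/scsi/sg/allow_dio
# echo 255 > /sys/module/ib_srp/parameters/srp_sg_tablesize
# echo deadline > /sys/block/sdb/queue/scheduler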
I have a question to help me understand the performance numbers shown below:
Setup: LUN = (8 data disks + 2 Parity) Block size on RAID format is 4096; Storage array DDN;
Q1. When a LUN is formatted with a 4096-byte block size, and max_sectors_kb is set to 4096 on the host side, why do I see slightly poorer performance (as seen below) with the recommended value than with the default, even though the recommended value is supposed to align with the I/O size and stripe width? I was expecting it to be the other way around, as I have taken care of formatting the OSTs per
<stripe_width_blocks> = <chunk_blocks> * <number_of_data_disks> = 1 MB
In my case: -E stride=32,stripe_width=256
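(With 4 KB filesystem blocks those options work out to 32 x 4 KB = 128 KB per-disk chunk and 256 x 4 KB = 1 MB full stripe across the 8 data disks, which matches the 1 MB record size (rsz) used in the sgpdd-survey runs below.)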
Although I am not sure I am comparing apples to apples, because sgpdd runs on the SCSI device, and I am not sure whether ldiskfs does some magic in between, before the I/O goes to the SCSI device, to achieve better performance.
By the way, I am not seeing any performance issue, but I wanted to take the time to tune at the outset if it made sense to do so.
Any thoughts on this will be very helpful.
Thank you,
Amit
# cat /tmp/*.summary with: /sys/block/sdN/queue/max_sectors_kb = 4096
Tue Aug 12 06:52:36 CDT 2014 sgpdd-survey on /dev/sdb
dev 1 sz 8388608K rsz 1024K crg 1 thr 1 write 115.21 MB/s 1 x 115.21 = 115.21 MB/s read 582.06 MB/s 1 x 582.28 = 582.28 MB/s
dev 1 sz 8388608K rsz 1024K crg 1 thr 2 write 235.84 MB/s 1 x 235.88 = 235.88 MB/s read 838.81 MB/s 1 x 839.26 = 839.26 MB/s
dev 1 sz 8388608K rsz 1024K crg 1 thr 4 write 418.66 MB/s 1 x 418.78 = 418.78 MB/s read 1417.23 MB/s 1 x 1418.61 = 1418.61 MB/s
dev 1 sz 8388608K rsz 1024K crg 1 thr 8 write 902.29 MB/s 1 x 902.84 = 902.84 MB/s read 1439.37 MB/s 1 x 1440.78 = 1440.78 MB/s
dev 1 sz 8388608K rsz 1024K crg 1 thr 16 write 1386.26 MB/s 1 x 1387.55 = 1387.55 MB/s read 1440.14 MB/s 1 x 1441.51 = 1441.51 MB/s
dev 1 sz 8388608K rsz 1024K crg 2 thr 2 write 156.03 MB/s 2 x 78.02 = 156.04 MB/s read 807.73 MB/s 2 x 404.05 = 808.11 MB/s
dev 1 sz 8388608K rsz 1024K crg 2 thr 4 write 297.36 MB/s 2 x 148.71 = 297.41 MB/s read 1120.12 MB/s 2 x 560.50 = 1121.01 MB/s
dev 1 sz 8388608K rsz 1024K crg 2 thr 8 write 528.16 MB/s 2 x 264.17 = 528.34 MB/s read 1188.89 MB/s 2 x 594.92 = 1189.84 MB/s
dev 1 sz 8388608K rsz 1024K crg 2 thr 16 write 817.80 MB/s 2 x 409.13 = 818.25 MB/s read 1293.56 MB/s 2 x 647.37 = 1294.75 MB/s
dev 1 sz 8388608K rsz 1024K crg 2 thr 32 write 789.20 MB/s 2 x 394.81 = 789.62 MB/s read 1295.69 MB/s 2 x 648.41 = 1296.83 MB/s
dev 1 sz 8388608K rsz 1024K crg 4 thr 4 write 162.58 MB/s 4 x 40.65 = 162.58 MB/s read 1099.55 MB/s 4 x 275.08 = 1100.31 MB/s
dev 1 sz 8388608K rsz 1024K crg 4 thr 8 write 314.89 MB/s 4 x 78.74 = 314.94 MB/s read 1139.88 MB/s 4 x 285.21 = 1140.82 MB/s
dev 1 sz 8388608K rsz 1024K crg 4 thr 16 write 586.44 MB/s 4 x 146.67 = 586.66 MB/s read 1162.53 MB/s 4 x 290.93 = 1163.71 MB/s
dev 1 sz 8388608K rsz 1024K crg 4 thr 32 write 550.92 MB/s 4 x 137.78 = 551.11 MB/s read 1172.37 MB/s 4 x 293.32 = 1173.29 MB/s
dev 1 sz 8388608K rsz 1024K crg 4 thr 64 write 563.08 MB/s 4 x 140.83 = 563.32 MB/s read 1164.33 MB/s 4 x 291.31 = 1165.24 MB/s
dev 1 sz 8388608K rsz 1024K crg 8 thr 8 write 165.33 MB/s 8 x 20.67 = 165.33 MB/s read 973.23 MB/s 8 x 121.74 = 973.89 MB/s
dev 1 sz 8388608K rsz 1024K crg 8 thr 16 write 324.42 MB/s 8 x 40.56 = 324.48 MB/s read 960.74 MB/s 8 x 120.17 = 961.38 MB/s
dev 1 sz 8388608K rsz 1024K crg 8 thr 32 write 461.81 MB/s 8 x 57.74 = 461.96 MB/s read 1145.76 MB/s 8 x 143.33 = 1146.62 MB/s
dev 1 sz 8388608K rsz 1024K crg 8 thr 64 write 494.97 MB/s 8 x 61.89 = 495.15 MB/s read 1083.78 MB/s 8 x 135.57 = 1084.59 MB/s
dev 1 sz 8388608K rsz 1024K crg 8 thr 128 write 486.25 MB/s 8 x 60.80 = 486.37 MB/s read 1121.30 MB/s 8 x 140.27 = 1122.13 MB/s
dev 1 sz 8388608K rsz 1024K crg 16 thr 16 write 179.30 MB/s 16 x 11.21 = 179.29 MB/s read 952.32 MB/s 16 x 59.57 = 953.06 MB/s
dev 1 sz 8388608K rsz 1024K crg 16 thr 32 write 282.92 MB/s 16 x 17.69 = 283.05 MB/s read 1025.33 MB/s 16 x 64.13 = 1026.15 MB/s
dev 1 sz 8388608K rsz 1024K crg 16 thr 64 write 376.39 MB/s 16 x 23.53 = 376.43 MB/s read 1030.90 MB/s 16 x 64.48 = 1031.65 MB/s
dev 1 sz 8388608K rsz 1024K crg 16 thr 128 write 397.60 MB/s 16 x 24.86 = 397.80 MB/s read 1113.20 MB/s 16 x 69.64 = 1114.20 MB/s
dev 1 sz 8388608K rsz 1024K crg 16 thr 256 write 434.59 MB/s 16 x 27.17 = 434.72 MB/s read 1049.02 MB/s 16 x 65.61 = 1049.80 MB/s
dev 1 sz 8388608K rsz 1024K crg 32 thr 32 write 171.02 MB/s 32 x 5.34 = 170.90 MB/s read 902.83 MB/s 32 x 28.24 = 903.63 MB/s
dev 1 sz 8388608K rsz 1024K crg 32 thr 64 write 300.66 MB/s 32 x 9.39 = 300.60 MB/s read 966.62 MB/s 32 x 30.23 = 967.41 MB/s
dev 1 sz 8388608K rsz 1024K crg 32 thr 128 write 378.49 MB/s 32 x 11.84 = 378.72 MB/s read 967.73 MB/s 32 x 30.27 = 968.63 MB/s
dev 1 sz 8388608K rsz 1024K crg 32 thr 256 write 385.06 MB/s 32 x 12.04 = 385.13 MB/s read 1067.40 MB/s 32 x 33.39 = 1068.42 MB/s
dev 1 sz 8388608K rsz 1024K crg 32 thr 512 write 455.00 MB/s 32 x 14.23 = 455.32 MB/s read 1017.93 MB/s 32 x 31.84 = 1018.98 MB/s
dev 1 sz 8388608K rsz 1024K crg 64 thr 64 write 199.82 MB/s 64 x 3.12 = 199.58 MB/s read 887.81 MB/s 64 x 13.89 = 888.67 MB/s
dev 1 sz 8388608K rsz 1024K crg 64 thr 128 write 296.58 MB/s 64 x 4.63 = 296.63 MB/s read 925.64 MB/s 64 x 14.48 = 926.51 MB/s
dev 1 sz 8388608K rsz 1024K crg 64 thr 256 write 402.48 MB/s 64 x 6.29 = 402.83 MB/s read 937.37 MB/s 64 x 14.66 = 938.11 MB/s
dev 1 sz 8388608K rsz 1024K crg 64 thr 512 write 427.50 MB/s 64 x 6.69 = 427.86 MB/s read 966.39 MB/s 64 x 15.12 = 967.41 MB/s
dev 1 sz 8388608K rsz 1024K crg 64 thr 1024 write 428.08 MB/s 64 x 6.69 = 428.47 MB/s read 946.22 MB/s 64 x 14.80 = 947.27 MB/s
dev 1 sz 8388608K rsz 1024K crg 128 thr 128 write 204.15 MB/s 128 x 1.59 = 203.86 MB/s read 876.41 MB/s 128 x 6.86 = 877.69 MB/s
dev 1 sz 8388608K rsz 1024K crg 128 thr 256 write 323.54 MB/s 128 x 2.53 = 323.49 MB/s read 911.02 MB/s 128 x 7.12 = 911.87 MB/s
dev 1 sz 8388608K rsz 1024K crg 128 thr 512 write 421.51 MB/s 128 x 3.29 = 421.14 MB/s read 920.23 MB/s 128 x 7.20 = 921.63 MB/s
dev 1 sz 8388608K rsz 1024K crg 128 thr 1024 write 442.79 MB/s 128 x 3.46 = 443.12 MB/s read 887.19 MB/s 128 x 6.93 = 887.45 MB/s
dev 1 sz 8388608K rsz 1024K crg 128 thr 2048 write 413.75 MB/s 128 x 3.23 = 413.82 MB/s read 849.72 MB/s 128 x 6.65 = 850.83 MB/s
dev 1 sz 8388608K rsz 1024K crg 256 thr 256 write 228.51 MB/s 256 x 0.90 = 229.49 MB/s read 879.51 MB/s 256 x 3.44 = 881.35 MB/s
dev 1 sz 8388608K rsz 1024K crg 256 thr 512 write 331.11 MB/s 256 x 1.30 = 332.03 MB/s read 878.87 MB/s 256 x 3.43 = 878.91 MB/s
dev 1 sz 8388608K rsz 1024K crg 256 thr 1024 write 420.35 MB/s 256 x 1.64 = 419.92 MB/s read 839.53 MB/s 256 x 3.28 = 839.84 MB/s
dev 1 sz 8388608K rsz 1024K crg 256 thr 2048 write 407.09 MB/s 256 x 1.59 = 407.71 MB/s read 805.58 MB/s 256 x 3.15 = 805.66 MB/s
dev 1 sz 8388608K rsz 1024K crg 256 thr 4096 write 423.09 MB/s 256 x 1.65 = 422.36 MB/s read 692.53 MB/s 256 x 2.71 = 693.36 MB/s
# cat /root/sgpdd_runs/sdb-ost9-sas1/*.summary with /sys/block/sdN/queue/max_sectors_kb = default(32767)
Tue Aug 12 06:02:51 CDT 2014 sgpdd-survey on /dev/sdb
dev 1 sz 8388608K rsz 1024K crg 1 thr 1 write 115.99 MB/s 1 x 116.03 = 116.03 MB/s read 580.83 MB/s 1 x 581.04 = 581.04 MB/s
dev 1 sz 8388608K rsz 1024K crg 1 thr 2 write 237.90 MB/s 1 x 237.94 = 237.94 MB/s read 845.28 MB/s 1 x 845.72 = 845.72 MB/s
dev 1 sz 8388608K rsz 1024K crg 1 thr 4 write 389.79 MB/s 1 x 389.89 = 389.89 MB/s read 1436.95 MB/s 1 x 1438.23 = 1438.23 MB/s
dev 1 sz 8388608K rsz 1024K crg 1 thr 8 write 909.36 MB/s 1 x 909.88 = 909.88 MB/s read 1439.63 MB/s 1 x 1440.94 = 1440.94 MB/s
dev 1 sz 8388608K rsz 1024K crg 1 thr 16 write 1390.04 MB/s 1 x 1391.24 = 1391.24 MB/s read 1440.02 MB/s 1 x 1441.32 = 1441.32 MB/s
dev 1 sz 8388608K rsz 1024K crg 2 thr 2 write 157.83 MB/s 2 x 78.92 = 157.83 MB/s read 812.07 MB/s 2 x 406.24 = 812.47 MB/s
dev 1 sz 8388608K rsz 1024K crg 2 thr 4 write 298.19 MB/s 2 x 149.13 = 298.25 MB/s read 1127.00 MB/s 2 x 563.94 = 1127.87 MB/s
dev 1 sz 8388608K rsz 1024K crg 2 thr 8 write 513.79 MB/s 2 x 256.99 = 513.97 MB/s read 1206.77 MB/s 2 x 603.86 = 1207.71 MB/s
dev 1 sz 8388608K rsz 1024K crg 2 thr 16 write 821.34 MB/s 2 x 410.90 = 821.80 MB/s read 1278.98 MB/s 2 x 640.04 = 1280.08 MB/s
dev 1 sz 8388608K rsz 1024K crg 2 thr 32 write 841.46 MB/s 2 x 420.96 = 841.92 MB/s read 1288.13 MB/s 2 x 644.60 = 1289.20 MB/s
dev 1 sz 8388608K rsz 1024K crg 4 thr 4 write 161.84 MB/s 4 x 40.46 = 161.86 MB/s read 1113.69 MB/s 4 x 278.63 = 1114.50 MB/s
dev 1 sz 8388608K rsz 1024K crg 4 thr 8 write 315.10 MB/s 4 x 78.79 = 315.17 MB/s read 1137.07 MB/s 4 x 284.47 = 1137.89 MB/s
dev 1 sz 8388608K rsz 1024K crg 4 thr 16 write 588.28 MB/s 4 x 147.13 = 588.53 MB/s read 1165.16 MB/s 4 x 291.52 = 1166.08 MB/s
dev 1 sz 8388608K rsz 1024K crg 4 thr 32 write 545.81 MB/s 4 x 136.50 = 546.00 MB/s read 1150.96 MB/s 4 x 287.95 = 1151.81 MB/s
dev 1 sz 8388608K rsz 1024K crg 4 thr 64 write 573.67 MB/s 4 x 143.47 = 573.88 MB/s read 1153.93 MB/s 4 x 288.71 = 1154.82 MB/s
dev 1 sz 8388608K rsz 1024K crg 8 thr 8 write 168.87 MB/s 8 x 21.11 = 168.91 MB/s read 946.06 MB/s 8 x 118.34 = 946.73 MB/s
dev 1 sz 8388608K rsz 1024K crg 8 thr 16 write 327.68 MB/s 8 x 40.97 = 327.76 MB/s read 970.74 MB/s 8 x 121.42 = 971.37 MB/s
dev 1 sz 8388608K rsz 1024K crg 8 thr 32 write 478.45 MB/s 8 x 59.82 = 478.59 MB/s read 1117.55 MB/s 8 x 139.80 = 1118.39 MB/s
dev 1 sz 8388608K rsz 1024K crg 8 thr 64 write 499.48 MB/s 8 x 62.46 = 499.65 MB/s read 1125.30 MB/s 8 x 140.77 = 1126.17 MB/s
dev 1 sz 8388608K rsz 1024K crg 8 thr 128 write 468.83 MB/s 8 x 58.62 = 468.98 MB/s read 1083.02 MB/s 8 x 135.48 = 1083.83 MB/s
dev 1 sz 8388608K rsz 1024K crg 16 thr 16 write 200.05 MB/s 16 x 12.50 = 200.04 MB/s read 931.16 MB/s 16 x 58.24 = 931.85 MB/s
dev 1 sz 8388608K rsz 1024K crg 16 thr 32 write 264.30 MB/s 16 x 16.52 = 264.28 MB/s read 1025.74 MB/s 16 x 64.16 = 1026.61 MB/s
dev 1 sz 8388608K rsz 1024K crg 16 thr 64 write 360.96 MB/s 16 x 22.56 = 361.02 MB/s read 1024.78 MB/s 16 x 64.10 = 1025.54 MB/s
dev 1 sz 8388608K rsz 1024K crg 16 thr 128 write 353.03 MB/s 16 x 22.07 = 353.09 MB/s read 1046.71 MB/s 16 x 65.48 = 1047.67 MB/s
dev 1 sz 8388608K rsz 1024K crg 16 thr 256 write 387.39 MB/s 16 x 24.21 = 387.42 MB/s read 1049.81 MB/s 16 x 65.66 = 1050.57 MB/s
dev 1 sz 8388608K rsz 1024K crg 32 thr 32 write 183.08 MB/s 32 x 5.72 = 183.11 MB/s read 898.33 MB/s 32 x 28.10 = 899.05 MB/s
dev 1 sz 8388608K rsz 1024K crg 32 thr 64 write 300.33 MB/s 32 x 9.38 = 300.29 MB/s read 935.15 MB/s 32 x 29.25 = 935.97 MB/s
dev 1 sz 8388608K rsz 1024K crg 32 thr 128 write 402.90 MB/s 32 x 12.60 = 403.14 MB/s read 994.12 MB/s 32 x 31.09 = 994.87 MB/s
dev 1 sz 8388608K rsz 1024K crg 32 thr 256 write 431.51 MB/s 32 x 13.49 = 431.82 MB/s read 1027.41 MB/s 32 x 32.14 = 1028.44 MB/s
dev 1 sz 8388608K rsz 1024K crg 32 thr 512 write 472.92 MB/s 32 x 14.78 = 473.02 MB/s read 978.77 MB/s 32 x 30.61 = 979.61 MB/s
dev 1 sz 8388608K rsz 1024K crg 64 thr 64 write 174.25 MB/s 64 x 2.73 = 174.56 MB/s read 887.77 MB/s 64 x 13.89 = 888.67 MB/s
dev 1 sz 8388608K rsz 1024K crg 64 thr 128 write 304.43 MB/s 64 x 4.76 = 304.57 MB/s read 915.25 MB/s 64 x 14.31 = 916.14 MB/s
dev 1 sz 8388608K rsz 1024K crg 64 thr 256 write 411.21 MB/s 64 x 6.43 = 411.38 MB/s read 945.09 MB/s 64 x 14.78 = 946.04 MB/s
dev 1 sz 8388608K rsz 1024K crg 64 thr 512 write 450.88 MB/s 64 x 7.05 = 451.05 MB/s read 965.13 MB/s 64 x 15.10 = 966.19 MB/s
dev 1 sz 8388608K rsz 1024K crg 64 thr 1024 write 417.15 MB/s 64 x 6.52 = 417.48 MB/s read 942.48 MB/s 64 x 14.74 = 943.60 MB/s
dev 1 sz 8388608K rsz 1024K crg 128 thr 128 write 225.25 MB/s 128 x 1.76 = 225.83 MB/s read 896.70 MB/s 128 x 7.01 = 897.22 MB/s
dev 1 sz 8388608K rsz 1024K crg 128 thr 256 write 325.95 MB/s 128 x 2.55 = 325.93 MB/s read 914.43 MB/s 128 x 7.15 = 915.53 MB/s
dev 1 sz 8388608K rsz 1024K crg 128 thr 512 write 437.91 MB/s 128 x 3.42 = 438.23 MB/s read 922.04 MB/s 128 x 7.21 = 922.85 MB/s
dev 1 sz 8388608K rsz 1024K crg 128 thr 1024 write 449.51 MB/s 128 x 3.51 = 449.22 MB/s read 902.09 MB/s 128 x 7.06 = 903.32 MB/s
dev 1 sz 8388608K rsz 1024K crg 128 thr 2048 write 400.69 MB/s 128 x 3.13 = 400.39 MB/s read 825.51 MB/s 128 x 6.46 = 826.42 MB/s
dev 1 sz 8388608K rsz 1024K crg 256 thr 256 write 224.48 MB/s 256 x 0.88 = 224.61 MB/s read 860.48 MB/s 256 x 3.37 = 861.82 MB/s
dev 1 sz 8388608K rsz 1024K crg 256 thr 512 write 332.51 MB/s 256 x 1.30 = 332.03 MB/s read 887.48 MB/s 256 x 3.47 = 888.67 MB/s
dev 1 sz 8388608K rsz 1024K crg 256 thr 1024 write 408.58 MB/s 256 x 1.59 = 407.71 MB/s read 866.96 MB/s 256 x 3.39 = 866.70 MB/s
dev 1 sz 8388608K rsz 1024K crg 256 thr 2048 write 413.03 MB/s 256 x 1.61 = 412.60 MB/s read 803.31 MB/s 256 x 3.14 = 803.22 MB/s
dev 1 sz 8388608K rsz 1024K crg 256 thr 4096 write 424.10 MB/s 256 x 1.66 = 424.80 MB/s read 685.05 MB/s 256 x 2.68 = 686.04 MB/s
Thank you, Amit H. Kumar
7 years, 9 months
Lustre and ZFS notes available
by Scott Nolin
Hello,
At UW SSEC my group has been using Lustre for a few years, and recently
Lustre with ZFS as the back end file system. We have found the Lustre
community very open and helpful in sharing information. Specifically
information from various LUG and LAD meetings and the mailing lists has
been very helpful.
With this in mind we would like to share some of our internal
documentation and notes that may be useful to others. These are working
notes, so not a complete guide.
I want to be clear that the official Lustre documentation should be
considered the correct reference material in general. But this
information may be helpful for some -
http://www.ssec.wisc.edu/~scottn/
Topics that may be of particular interest are the Lustre ZFS install
notes and JBOD monitoring.
Scott Nolin
UW SSEC
7 years, 9 months
removing dead OST
by Brock Palen
We just had an OST failure in a legacy Lustre 1.8 filesystem.
How can one go about bringing the filesystem up without this OST?
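A commonly used approach for a permanently lost OST - sketched below with an illustrative filesystem name "testfs" and OST index 0007, so please adapt it and check the 1.8 manual's section on removing an OST before trying it in production - is to mark the OST inactive on the MGS and/or exclude it when mounting clients:
# lctl conf_param testfs-OST0007.osc.active=0    (on the MGS: deactivate the dead OST)
# mount -t lustre -o exclude=testfs-OST0007 mgsnode@tcp0:/testfs /mnt/testfs    (per client: skip the dead OST at mount time)
Files with objects on the lost OST will still return I/O errors when accessed.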
Thanks,
Brock Palen
www.umich.edu/~brockp
CAEN Advanced Computing
XSEDE Campus Champion
brockp(a)umich.edu
(734)936-1985
7 years, 9 months
Re: [HPDD-discuss] Lustre 2.6.0 released
by Jones, Peter A
It may still get updates, but there is no fixed schedule for these.
On 8/3/14, 7:12 AM, "E.S. Rosenberg" <esr(a)cs.huji.ac.il> wrote:
But will 2.4.x still get updates or was 2.4.3 the last?
On Sun, Aug 3, 2014 at 5:03 PM, Jones, Peter A <peter.a.jones(a)intel.com> wrote:
Hi Eli
2.6 is a feature release and is not intended to be a maintenance release stream. See https://wiki.hpdd.intel.com/display/PUB/Lustre+Releases
Peter
On 8/3/14, 6:49 AM, "E.S. Rosenberg" <esr+hpdd-discuss(a)mail.hebrew.edu> wrote:
What does this mean for 2.4.x and 2.5.x?
Originally 2.4.x was supposed to be the version that would be supported for a long period, then 2.5.x became that version because of HSM, as far as I understand.
Thanks,
Eli
On Thu, Jul 31, 2014 at 4:03 AM, Jones, Peter A <peter.a.jones(a)intel.com> wrote:
We are pleased to announce that the Lustre 2.6.0 Release has been declared GA and is available for download<https://downloads.hpdd.intel.com/public/lustre/latest-feature-release/>. You can also grab the source from git<http://git.whamcloud.com/fs/lustre-release.git/commit/73ea776053d99f74a9f...>
This major release includes new features:
MDT-OST Consistency Check and Repair (LFSCK Phase 2) - Allows the MDS to verify the consistency of a Lustre filesystem while it is mounted and in use. The latest enhancements are to check and repair the validity of OST objects of regular files and to identify and optionally remove or link into lost+found OST objects that are not referenced by any files on the MDS. This development is funded by OpenSFS (LU-1267<https://jira.hpdd.intel.com/browse/LU-1267>)
Single Client Performance Improvements – Single thread per process IO performance has been improved. This work was discussed in detail at LUG<http://cdn.opensfs.org/wp-content/uploads/2014/04/D1_S6_LustreClientIOPer...> (LU-3321<https://jira.hpdd.intel.com/browse/LU-3321>)
Striped Directories - Enables a single directory to be striped across multiple MDTs to improve single directory performance and scalability. This is a technology preview of part of the DNE phase 2 work funded by OpenSFS that will be fully available in a future Lustre release (LU-3531<https://jira.hpdd.intel.com/browse/LU-3531>)
Fuller details can be found in the change log<https://wiki.hpdd.intel.com/display/PUB/Changelog+2.6>, the scope statement<https://wiki.hpdd.intel.com/display/PUB/Lustre+2.6+Scope+Statement> and the test matrix<https://wiki.hpdd.intel.com/display/PUB/Lustre+2.6>
The following are known issues in the Lustre 2.6 Release:
LU-5057<https://jira.hpdd.intel.com/browse/LU-5057> - A rare race condition can lead to an LASSERT when unmounting an OST.
LU-5150<https://jira.hpdd.intel.com/browse/LU-5150> - Empty access control lists (ACLs) will be stored for copied files when using a ZFS MDS. It does not affect ldiskfs MDSes.
LU-4367<https://jira.hpdd.intel.com/browse/LU-4367> - Metadata performance is affected when unlinking files in a single shared directory, a pattern common to the mdtest benchmark.
LU-5420<https://jira.hpdd.intel.com/browse/LU-5420> - DNE configurations with multiple MDTs sharing a single node with an MGS may hang during MDT mount or fail to mount an MDT after unclean shutdown.
Work is in progress for these issues.
NOTE: Usage of the e2fsprogs-based lfsck has been deprecated and replaced by "lctl lfsck_start". Using the older e2fsprogs-based lfsck may lead to filesystem corruption. Once available, it is also recommended to use e2fsprogs-1.42.11.wc2 (or newer).
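As a concrete illustration, the new scrubber is driven from the MDS with lctl; for example, with a hypothetical MDT device name:
# lctl lfsck_start -M testfs-MDT0000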
Please log any issues found in the issue tracking system<https://jira.hpdd.intel.com/>.
We would like to thank OpenSFS<http://www.opensfs.org/> for their contributions towards the cost of the release, and also all Lustre community members who have contributed to the release with code and/or testing.
7 years, 9 months