Re: [HPDD-discuss] [Lustre-discuss] will obdfilter-survey destroy an already formatted file system
by Dilger, Andreas
On 2013/21/03 4:09 AM, "Michael Kluge" <Michael.Kluge(a)tu-dresden.de> wrote:
>I have read through the documentation for obdfilter-survey but could not
>found any information on how invasive the test is. Will it destroy an
>already formatted OST or render user data unusable?
It shouldn't. obdfilter-survey uses a different object sequence (2)
than normal filesystem objects (currently always 0), so the two do not
collide.
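For reference, a typical invocation against already-formatted OSTs looks
roughly like this (run on the OSS; the target names are just examples):
    # obdfilter-survey ships with lustre-iokit; run as root on the OSS
    nobjhi=2 thrhi=8 size=1024 case=disk \
        targets="testfs-OST0000 testfs-OST0001" obdfilter-survey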
Cheers, Andreas
--
Andreas Dilger
Lustre Software Architect
Intel High Performance Data Division
Missing files between adjacent lfs find runs
by Götz Waschk
Dear all,
A user has noticed a strange problem with our Lustre setup. The servers
are running Lustre 2.1.3 and the client is 1.8.9, both on SL6.
When I run the command 'lfs find .' several times in a row, the result
is different each time: some files disappear from the listing, some are
added, with no discernible pattern.
There are no unusual syslog messages on the servers or the client, and
the file system is not very busy.
Do you have any idea how to resolve this problem?
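For reference, the discrepancy can be captured like this (the directory
path is just an example):
    cd /mnt/lustre/somedir
    lfs find . | sort > run1.txt
    lfs find . | sort > run2.txt
    diff run1.txt run2.txt    # non-empty output on every attempt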
Regards, Götz Waschk
Lustre 2.4 update - April 26th 2013
by Jones, Peter A
Hi there
Here is an update on the Lustre 2.4 release.
Landings
========
-A number of landings made - see http://git.whamcloud.com/?p=fs/lustre-release.git;a=shortlog;h=refs/heads...
Testing
=======
-Testing on the 2.3.64 tag is drawing to a close; a new tag is anticipated early next week
Blockers
========
-Full list available at https://jira.hpdd.intel.com/issues/?filter=10292
-If there are any issues not presently marked as blockers that you believe should be, please let me know
Other
=====
-New version of e2fsprogs (1.42.7-wc1) released for compatibility with 2.4 features
-We are in the stabilization period for the release now, so this is an ideal time for community members to test tags and open JIRA tickets for any issues encountered
Thanks
Peter
Re: [HPDD-discuss] Upgrading 1.8.X -> 2.X
by Dilger, Andreas
On 2013/29/04 1:51 PM, "Ben Evans" <Ben.Evans(a)terascala.com> wrote:
>Clients can maintain connections to multiple MGSes,
Right, this is fairly common.
>but you can't connect to a 1.8 and a 2.x filesystem at the same time IIRC.
It's possible that this is the case, but a bit surprising since I'm not
aware of any reason why this wouldn't work. The state on the client
should be kept on a per-mountpoint basis. While we test 1.8 clients
with 2.x servers, I don't think we've ever done a test with mixed
versions on one client at the same time.
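In principle, the two mounts would simply each contact their own MGS,
something like this (the NIDs and filesystem names are made up):
    mount -t lustre mgs18@tcp0:/oldfs /mnt/oldfs    # 1.8 filesystem
    mount -t lustre mgs24@tcp0:/newfs /mnt/newfs    # 2.x filesystem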
Cheers, Andreas
>________________________________________
>From: hpdd-discuss-bounces(a)lists.01.org
>[hpdd-discuss-bounces(a)lists.01.org] on behalf of John Richards
>[john.richards(a)icecube.wisc.edu]
>Sent: Monday, April 29, 2013 3:39 PM
>To: Dilger, Andreas
>Cc: hpdd-discuss(a)lists.01.org
>Subject: Re: [HPDD-discuss] Upgrading 1.8.X -> 2.X
>
>On Apr 29, 2013, at 11:15 , "Dilger, Andreas" <andreas.dilger(a)intel.com>
>wrote:
>
>> On 2013-04-26, at 12:32, "John Richards"
>><john.richards(a)icecube.wisc.edu> wrote:
>>
>>> Lustre fans,
>>>
>>> We have four lustre filesystems in production, and are considering
>>>running Lustre 2.X in the next year. I'd like a bit of advice on
>>>upgrade paths, and I apologize if some of this has been covered before
>>>- some of the details are eluding me. The main question is, can the
>>>MGS be upgraded to 2.X on its own? If so, would it still support Lustre
>>>1.8.3 OSSs and MDSs? Would a Lustre 2.X MGS be possible if all the
>>>other systems were at Lustre 1.8.7 or better?
>>
>> It isn't possible to run different 1.8/2.x versions on the servers, and
>>this includes the MGS. The MGS code in both 1.8 and 2.x knows far too
>>much about the internal details of how the servers are configured, and
>>the way this happens is specific to each version of Lustre. In 2.4 this
>>has been improved somewhat, but I think some more separation is still
>>needed.
>>
>> Probably the way forward is to split your single MGS into
>>per-filesystem MGSes (possibly running on the backup MDS), which can
>>then be upgraded together with their respective filesystems.
>>
>> Cheers, Andreas
>
>Andreas,
>
>Thanks for the suggestion - per filesystem MGSs would allow us to upgrade
>individual filesystems without tackling them all at once.
>
>Clients only maintain communication with a single MGS, correct? We
>usually mount multiple Lustre filesystems on clients at the same time,
>and we'd have to do some juggling to handle that. I remember someone
>saying you could mount filesystems from different MGSes on the same
>client, but that you should expect the client to only receive updates
>from the last MGS contacted. Or that could be an unstable situation
>(one we should avoid entirely), in which case we'd need to restrict
>each client to a single filesystem (and MGS) at a time.
>
>Even so, it is nice to have more options. I appreciate the help.
>
>John
>john.richards(a)icecube.wisc.edu
>
Cheers, Andreas
--
Andreas Dilger
Lustre Software Architect
Intel High Performance Data Division
Re: [HPDD-discuss] Upgrading 1.8.X -> 2.X
by Daniel Basabe
Sent from my HTC
----- Reply message -----
From: "John Richards" <john.richards(a)icecube.wisc.edu>
To: "Dilger, Andreas" <andreas.dilger(a)intel.com>
CC: <hpdd-discuss(a)lists.01.org>, "Ben Evans" <Ben.Evans(a)terascala.com>
Subject: [HPDD-discuss] Upgrading 1.8.X -> 2.X
Date: Tue., Apr. 30, 2013 21:22
Andreas and Ben,
Sounds like I have an assignment - create a Lustre 2.X filesystem and test a 1.8.7 client that connects to both it and one of our older filesystems.
Hopefully I'll give you some feedback soon.
Thanks for the advice,
John
john.richards(a)icecube.wisc.edu
Re: Regarding Lustre Setup for data analytics
by linux freaker
I ran Hadoop over Lustre with 1 NameNode and 3 DataNodes running as
Lustre clients. Here are my findings:
Scenario 1: 1 MDS, 2 OSS/OST, 3 Lustre clients (1 NameNode and 2 DataNodes),
striping: -1, dataset: 18GB, reducers: 20
Time taken: 59 min 52 sec
Scenario 2: 1 MDS, 2 OSS/OST, 3 Lustre clients (1 NameNode and 2 DataNodes),
striping: -1, dataset: 18GB, reducers: 30
Time taken: 1 hr 5 min
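For what it's worth, striping -1 (stripe over all OSTs) was applied to
the job directory roughly as follows (the path is hypothetical):
    lfs setstripe -c -1 /mnt/lustre/hadoop-data    # stripe across all OSTs
    lfs getstripe /mnt/lustre/hadoop-data          # verify the layout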
Question: did the run time increase because of the higher reducer count?
I have been using Ethernet. Roughly how much time would it take if I
switched to InfiniBand?
.lustre directory missing
by Paul Unger
Hi folks,
we were working on one of our Lustre file systems and accidentally
removed the /mountpoint/.lustre directory. To make things worse, the MDS
and OSTs were rebooted due to a power failure. Now we are no longer able
to mount this particular file system.
Error: please see the attached file lustre_log.txt
We already reproduced this behavior on another Lustre test file system
and tried to fix the issue by regenerating the Lustre configuration logs
via:
tunefs.lustre --writeconf <device>
But no luck; we still get the same error when mounting the metadata
partition.
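For completeness, the full writeconf procedure we followed looked roughly
like this (the device names are placeholders):
    # with all clients and all targets unmounted
    tunefs.lustre --writeconf /dev/mdtdev    # on the MDS (combined MGS/MDT)
    tunefs.lustre --writeconf /dev/ostdev    # on each OSS, for every OST
    mount -t lustre /dev/mdtdev /mnt/mdt     # remount the MDT first, then the OSTs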
Any help would be kindly appreciated!
So long!
Paul
striped write performance in lustre 2.X?
by Erich Focht
Hi,
I'm puzzled by the poor scaling in Lustre 2.X (meaning 2.1.4, 2.1.5,
2.3) when writing one stream from one client to one striped file. With 2
OSSes, each having 4 OSTs capable of 500-600 MB/s each (with one write
stream), the performance with 8-fold striping barely exceeds 700 MB/s.
With 1.8 I could easily exceed 1 GB/s for one dd to an 8-fold striped
file.
Here are some numbers:
stripes: 1 size: 16384kB
268435456000 bytes (268 GB) copied, 526.797 s, 510 MB/s
stripes: 2 size: 16384kB
268435456000 bytes (268 GB) copied, 413.182 s, 650 MB/s
stripes: 4 size: 16384kB
268435456000 bytes (268 GB) copied, 374.03 s, 718 MB/s
stripes: 6 size: 16384kB
268435456000 bytes (268 GB) copied, 382.277 s, 702 MB/s
stripes: 8 size: 16384kB
268435456000 bytes (268 GB) copied, 378.835 s, 709 MB/s
Obtained basically with:
dd if=/dev/zero of=test256g bs=16M count=$((256/16*1000))
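The full per-run sequence was roughly the following (stripe size matches
the 16384kB above; older lfs versions take -s instead of -S):
    for c in 1 2 4 6 8; do
        rm -f test256g
        lfs setstripe -c $c -S 16m test256g    # create the file with c stripes
        dd if=/dev/zero of=test256g bs=16M count=16000
    done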
I used a 1.8.8 Lustre client over FDR IB, so there are no bandwidth
limitations there. With a Lustre 2.1.5 client the performance is even
worse.
Is there anything fundamentally wrong with striping in Lustre 2.X? Is
the degradation understandable somehow, or am I missing something? (I
tried various tweaks on the server and client side, checksums are
disabled, etc., but nothing helped.)
Best regards,
Erich