LUG 2014: Registration Closes March 31st
by OpenSFS Administration
<http://www.opensfs.org/events/lug14/>
Register Today: Lustre User Group 2014
Registration Closes March 31st
Time is running out to register for the 12th Annual
<http://www.opensfs.org/lug14/> Lustre User Group (LUG) Conference! Don't miss
the opportunity to join the 175+ already registered attendees from over 50
different companies and organizations including DataDirect Networks, Intel,
NetApp, Oak Ridge National Laboratory, Xyratex, and more.
LUG 2014 will be held on April 8th - 10th at the Miami Marriott Biscayne Bay
in Miami, Florida, and will bring together industry leaders, end-users,
developers, and vendors to talk Lustre, contribute to the community, and
move the technology forward.
The event will feature 29 sessions, including:
* Exascale Computing Vision
* Lustre Future Features
* Lustre 2.5 Performance Evaluation
* Lustre Releases
* Metadata Benchmarks and MD Performance Metrics
* Progress Report on Efficient Integration of Lustre and Hadoop/YARN
<https://opensfs.wufoo.com/forms/lug-conference-2014/> Register by March
31st
We encourage you to review the <http://www.opensfs.org/lug-2014-agenda/>
LUG 2014 Agenda to learn more about the networking opportunities, poster
exhibition, and sessions at this year's conference.
We look forward to seeing you at LUG! If you have any questions, please feel
free to contact <mailto:admin@opensfs.org> admin@opensfs.org.
Best regards,
OpenSFS LUG Planning Committee
_________________________
OpenSFS Administration
3855 SW 153rd Drive, Beaverton, OR 97006 USA
Phone: +1 503-619-0561 | Fax: +1 503-644-6708
Twitter: <https://twitter.com/opensfs> @OpenSFS
Email: <mailto:admin@opensfs.org> admin@opensfs.org | Website:
<http://www.opensfs.org> www.opensfs.org
Open Scalable File Systems, Inc. was founded in 2010 to advance Lustre
development, ensuring it remains vendor-neutral, open, and free. Since its
inception, OpenSFS has been responsible for advancing the Lustre file system
and delivering new releases on behalf of the open source community. Through
working groups, events, and ongoing funding initiatives, OpenSFS harnesses
the power of collaborative development to fuel innovation and growth of the
Lustre file system worldwide.
<http://www.opensfs.org/lug-2014-sponsorship/> Click here to learn how to
become a LUG Sponsor
ofd_grant_sanity_check errors
by rf@q-leap.de
Hi,
our newly set up Lustre system, with 2.5.1/ZFS on the server side and a 3.12
in-kernel client (with a number of patches applied), occasionally shows the
following type of error on the servers. There seem to be no directly
noticeable consequences, though. Is this something to worry about?
Mar 23 18:28:31 jaws kernel: [513636.382374] LustreError: 13911:0:(ofd_grant.c:169:ofd_grant_sanity_check()) ofd_statfs: tot_dirty 0 != fo_tot_dirty 786432
Mar 23 18:28:34 jaws kernel: [513639.606120] LustreError: 7292:0:(ofd_grant.c:169:ofd_grant_sanity_check()) ofd_destroy_export: tot_dirty 0 != fo_tot_dirty 786432
Mar 23 18:28:36 jaws kernel: [513641.390708] LustreError: 13911:0:(ofd_grant.c:169:ofd_grant_sanity_check()) ofd_statfs: tot_dirty 0 != fo_tot_dirty 786432
Mar 23 18:28:41 jaws kernel: [513646.401646] LustreError: 7459:0:(ofd_grant.c:169:ofd_grant_sanity_check()) ofd_statfs: tot_dirty 0 != fo_tot_dirty 786432
Mar 23 18:28:46 jaws kernel: [513651.411444] LustreError: 7459:0:(ofd_grant.c:169:ofd_grant_sanity_check()) ofd_statfs: tot_dirty 0 != fo_tot_dirty 786432
Mar 23 18:28:51 jaws kernel: [513656.419729] LustreError: 13911:0:(ofd_grant.c:169:ofd_grant_sanity_check()) ofd_statfs: tot_dirty 0 != fo_tot_dirty 786432
Mar 23 18:28:56 jaws kernel: [513661.428126] LustreError: 13911:0:(ofd_grant.c:163:ofd_grant_sanity_check()) ofd_statfs: tot_granted 107511808 != fo_tot_granted 8734670848
Thanks,
Roland
mv on the same file system does copy
by Wojciech Turek
I am trying to move some large directories within the same Lustre
filesystem; however, I can see that instead of a quick rename, mv performs
a copy operation. Is there a way to avoid this?
As a test I created a new directory with hundreds of large files and moved
it, and mv worked as expected. I do not understand what is different about
the other directories that prevents them from simply being moved.
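One way to confirm what mv is actually doing (a diagnostic sketch; the paths
below are placeholders) is to trace the rename call, since GNU mv only falls
back to copy-and-delete when rename() fails, typically with EXDEV:

  # If rename()/renameat() returns EXDEV here, mv silently falls back to
  # copying the data and removing the source instead of a cheap rename.
  strace -f -e trace=rename,renameat mv /lustre/large_dir /lustre/other_dir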
Best regards,
Wojciech
--
Wojciech Turek
Senior System Architect
<< Tomorrow (noun) A mystical land where 99% of all human productivity,
motivation and achievement is stored >>
mkfs.lustre FATAL: failed to write local files in HPC SFS
by Martin Hecht
Hi,
I would like to report a strange issue I have seen with HP's HPC
SFS G3.2-3 running an early version of Lustre 1.8 (I believe it was
1.8.3) on CentOS 5.3.
lustre_config could create the ldiskfs filesystems at the ext2 level, but
then exited with the following message on all OSSes:
mkfs.lustre FATAL: failed to write local files
I could mount the OSTs as ldiskfs, but there was no CONFIGS directory.
When I tried to create one manually, the node crashed immediately, but I
found the following entry in /var/log/messages:
Mar 18 10:51:15 sfs3 kernel: LDISKFS-fs error (device dm-1):
ldiskfs_ext_find_extent: :463: bad header in inode #229302273: invalid
magic - magic 0, entries 0, max 0(0), depth 0(0)
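For reference, the check that exposed the missing configuration amounts to
something like the following (device path and mount point are placeholders):

  # Mount the OST backing device as plain ldiskfs and look for the
  # per-target configuration that mkfs.lustre should have written.
  mount -t ldiskfs /dev/dm-1 /mnt/ost_inspect
  ls /mnt/ost_inspect/CONFIGS
  umount /mnt/ost_inspect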
I upgraded e2fsprogs to the latest version, but again, lustre_config
failed. I ran e2fsck on the OSTs, which seemed to fix the problem
according to its output, and a second run of e2fsck confirmed that
everything was OK. However, when I mounted the OST as ldiskfs again and
tried to create the CONFIGS directory, the node crashed again.
The solution to this problem was installing Lustre 1.8.9 from
Whamcloud. After that I hit another minor issue
(https://jira.hpdd.intel.com/browse/LU-4789), but after fixing that one,
lustre_config completed successfully. Either this was a bug that was
fixed between 1.8.3 and 1.8.9, or there was an issue with an expired
license which caused the Lustre shipped with the SFS to behave
strangely. I believe the second explanation is more likely, because the
exact same procedure worked some years ago and stopped working after
the support contract ended (the installed software has remained
unchanged since then).
best regards,
Martin
Re: [HPDD-discuss] Which stable version to install
by Dennis Nelson
So, Peter, are you saying that Lustre 1.8.9 clients will not work or be
supported with Lustre 2.5.x servers?
--
Dennis Nelson
Mobile: 817-233-6116
Applications Support Engineer
DataDirect Networks, Inc.
dnelson@ddn.com
On 3/19/14, 8:18 AM, "Jones, Peter A" <peter.a.jones@intel.com> wrote:
>Matt
>
>I can't say that a clear answer jumps out from the below. If you want to
>keep the door open to possibly interoperating with a 1.8.x release in the
>future, then 2.4.3 might be the best bet, but once you explore HSM you will
>need 2.5.x. At this point in time, the information that I have suggests
>2.4.x releases are being quite widely used, so a conservative approach
>might be to move initially to 2.4.3 and then upgrade to a 2.5.x release
>when you are ready to use HSM.
>
>Let us know how you get on!
>
>Peter
>
>On 3/18/14, 7:57 AM, "Matt Bettinger" <iamatt@gmail.com> wrote:
>
>>Hello,
>>
>>We currently run two Lustre file systems, 1.8.6 (QDR) and 1.8.8 (FDR).
>>We will be taking 1.8.6 offline for an "upgrade" (bare-metal
>>reinstall of OS and Lustre!) to 2.X.
>>
>>I see quite a bit of activity on different releases, which makes it
>>confusing to decide which release to install. What is the main
>>'stable' release that is suggested for a new 2.X installation? 2.5?
>>We are interested in looking at the newer tools such as lester,
>>robinhood, the latest collectl, and the HSM bits. The interconnects are
>>Mellanox QDR to a very finicky IBM IB switch and a fibre back end. The
>>OS is going to be RHEL or CentOS (the exact distribution does not
>>matter), but we would prefer to use the Lustre RPMs if possible, so
>>6.4/6.5 I am guessing. We have two MDS nodes available on this system
>>as well.
>>
>>It is not a requirement that the 1.8.8 system be able to talk to 2.X
>>through Linux gateway routers, but things change and we may need to have
>>1.8.8 cross-mount the new 2.X. Does that have any bearing on which 2.X
>>version we decide to land on? Thanks~
>>
>>Matt Bettinger
>>
>>On Fri, Mar 14, 2014 at 2:05 PM, Jones, Peter A <peter.a.jones@intel.com>
>>wrote:
>>>
>>> Hi there
>>>
>>> Here is an update on the Lustre 2.6 release.
>>>
>>> Landings
>>> ========
>>>
>>> -A number of landings made
>>>http://git.whamcloud.com/?p=fs/lustre-release.git;a=shortlog;h=refs/heads/master
>>>
>>> Testing
>>> =======
>>>
>>> -Testing has continued on the 2.5.56 tag
>>>
>
>_______________________________________________
>HPDD-discuss mailing list
>HPDD-discuss@lists.01.org
>https://lists.01.org/mailman/listinfo/hpdd-discuss
performance split mds mgs
by Alfonso Pardo
Hello,
I have to implement a new Lustre file system and optimize it for a high level of performance. I wonder: can I get more performance if I split the MDS and the MGS onto two different machines?
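For reference, the MGS itself carries very little load, so the split usually matters more for flexibility (for example, one MGS serving several filesystems) than for raw metadata performance. A minimal sketch of how the targets would be formatted in a split layout, assuming standard mkfs.lustre options (hostnames, device paths, and fsname are placeholders):

  # Standalone MGS on its own node and device
  mkfs.lustre --mgs /dev/sdb

  # MDT on a second node, pointing at the external MGS
  mkfs.lustre --fsname=testfs --mdt --index=0 --mgsnode=mgs01@tcp0 /dev/sdc

  # Combined MGS+MDT on a single device (the usual single-node alternative)
  mkfs.lustre --fsname=testfs --mgs --mdt --index=0 /dev/sdc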
Thanks!!!
Alfonso Pardo Diaz
System Administrator / Researcher
c/ Sola nº 1; 10200 Trujillo, ESPAÑA
Tel: +34 927 65 93 17 Fax: +34 927 32 32 37
----------------------------
Disclaimer:
This message and its attached files is intended exclusively for its recipients and may contain confidential information. If you received this e-mail in error you are hereby notified that any dissemination, copy or disclosure of this communication is strictly prohibited and may be unlawful. In this case, please notify us by a reply and delete this email and its contents immediately.
----------------------------