I started trying Lustre with kernel 3.16 and sadly it doesn't like this kernel
(it is fine with 3.14, though). Not being afraid of a bit of code, I figured
I'd have a look at the locations in the code that the error messages seem to
point at, but those line numbers seem wrong.
Here's the message I get:
kernel: LNetError: 2895:0:(linux-tcpip.c:82:libcfs_ipif_query()) Can't get
flags for interface ib0
kernel: LNetError: 2895:0:(o2iblnd.c:2694:kiblnd_create_dev()) Can't query
IPoIB interface ib0: -515
hm-40 kernel: LNetError: 105-4: Error -100 starting up LNI o2ib
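For reference, my reading of the Linux errno tables (an assumption on my part, not anything from the Lustre docs): the -100 in the LNI startup message corresponds to ENETDOWN in the standard errno table, while -515 is a kernel-internal code (ENOIOCTLCMD in include/linux/errno.h) that has no userspace name. A quick sketch for decoding the userspace ones:

```python
import errno
import os

def decode(rc):
    """Decode a negative kernel-style return value into an errno name.

    Values >= 512 (like the -515 above) are kernel-internal and never
    appear in the userspace errno table, so fall back to a placeholder.
    """
    n = -rc
    if n in errno.errorcode:
        return errno.errorcode[n], os.strerror(n)
    return "<kernel-internal or unknown>", None

# On Linux this prints ('ENETDOWN', 'Network is down'):
print(decode(-100))
print(decode(-515))
```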
linux-tcpip.c:82 -- this lands on a preprocessor #ifdef; libcfs_ipif_query() is
only defined 25 lines later, so it seems it may have to be line 127.
o2iblnd.c:2694 -- seems it should be line 2754.
1. Am I correct in my line guesses?
2. How come these line numbers are so far off?
Is there a published support matrix that shows which
OS/kernel/OFED(MOFED)/stack combinations are supported?
My particular question is whether it is possible to compile the latest Lustre
master branch against Ubuntu 14, kernel 3.12, and MOFED 2.2.
Just wondering if anyone has tried to build a Lustre 1.8 client on a RedHat 7
system (I believe I saw some chatter about building a Lustre 2+ client on el7).
I have a legacy Lustre 1.8.4 file system that I did not get a chance to upgrade
to Lustre 2+, and I'd like to mount it on new RedHat 7 compute nodes. On a
RedHat 6 system a Whamcloud Lustre 1.8.7 client worked, but I was not able to
get a Lustre 1.8.* Whamcloud rpm to install on RHEL7 --- too many old
dependencies. I also tried an rpm --rebuild, but kept getting build errors
such as:
cc1: all warnings being treated as errors
make: *** [/root/rpmbuild/BUILD/lustre-2.5.2/lustre/llite/llite_lib.o] Error
make: *** [/root/rpmbuild/BUILD/lustre-2.5.2/lustre/llite] Error 2
make: *** [/root/rpmbuild/BUILD/lustre-2.5.2/lustre] Error 2
make: *** [_module_/root/rpmbuild/BUILD/lustre-2.5.2] Error 2
make: *** [modules] Error 2
make: *** [all-recursive] Error 1
make: *** [all] Error 2
error: Bad exit status from /var/tmp/rpm-tmp.hxvMyD (%build)
RPM build errors:
Bad exit status from /var/tmp/rpm-tmp.hxvMyD (%build)
This happened both with the Lustre 1.8 client source rpms and with Lustre 2.5 (on a RHEL7 system).
I've never tried to build any part of Lustre from source --- only used the
available rpms. However, nothing appears to have been released yet for RHEL7.
When there is a release, I assume it will be for Lustre 2+, which is
incompatible with Lustre 1.8.4 servers, right?
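In case it helps anyone hitting the same wall: "cc1: all warnings being treated as errors" usually means the newer RHEL7 compiler emits warnings the older source never triggered, and -Werror makes them fatal. One possible workaround (a sketch only; I have not verified it against Lustre's own build system, and whether the resulting modules are trustworthy depends on what the silenced warnings actually were) is to strip -Werror from the flags before retrying the build:

```shell
# Sketch: remove -Werror from a flags string before re-running the build.
CFLAGS="-g -O2 -Werror"
CFLAGS="${CFLAGS//-Werror/}"   # bash pattern substitution
echo "CFLAGS=$CFLAGS"
```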
I recently had to set up a few post-processing machines based on the Ubuntu trusty kernel, 3.13.0-32-generic #57-Ubuntu. The kernel Lustre module worked fine until LNET complained:
LNetError: 2026:0:(lib-lnet.h:399:lnet_md_alloc()) LNET: out of memory at /build/buildd/linux-3.13.0/drivers/staging/lustre/include/linux/lnet/lib-lnet.h:399 (tried to alloc '(md)' = 4208)
Sep 10 10:41:37 vis-m2 kernel: [180229.112013] LNetError: 2026:0:(lib-lnet.h:399:lnet_md_alloc()) LNET: 274426476 total bytes allocated by lnet
Has anyone seen this error before, or know of a fix? Attached you will find a more detailed kernel log.
LU-3585 describes a similar problem, but that bug seems to have been resolved by now.
All machines based on Scientific Linux 6 (kernel 2.6.32) are working as expected. When the bug is hit, throughput to the file system drops considerably and the log shows corresponding errors. The problem is reproducible.
Let me know if you have any ideas.
Register now for the October 14th Lustre Users Group in Beijing.
REGISTER NOW <http://pages.intel.com/TSvPt00000000WO30Z00lY0>
Join us at the Regent Hotel <http://pages.intel.com/U0U000Y0ZW0Plt030v000Q0>
for a day full of discussions around how Lustre is being used to solve
today’s most demanding and important storage challenges.
The Lustre Users Group China event is free to attend, but you must
pre-register. Don't miss the opportunity to hear about the latest
developments in the Lustre space.
REGISTER NOW <http://pages.intel.com/TSvPt00000000WO30Z00lY0>
Call for abstracts is now open
LUG events are ideal opportunities to give a technical presentation about
how you’re using the Lustre file system. We encourage you to submit a brief
abstract that describes the topic you’d like to present. Presentation
opportunities are limited, so we encourage you to submit your abstract,
topic title and contact information to Fan Yong today at fan.yong(a)intel.com
If your company or institution is interested in being a sponsor, we have a
number of sponsorship packages available. More details about sponsorship
opportunities are available by contacting Tijik Di at tijik.di(a)intel.com
We look forward to having you as our guest at the China Lustre User Group
event. Click Here <http://pages.intel.com/O0x0000Y0300vPZu0C0l00W> to Add
Event to your Calendar.
Intel(r) High Performance Data Team
Copyright (c) 2014 Intel Corporation. All rights reserved. Intel, the Intel
logo, and Intel Xeon are trademarks of Intel Corporation in the U.S. and/or
other countries. *Other names and brands may be claimed as the property of
others.
Is there any chance I can compile the Lustre 2.4.3 client on Ubuntu 14.04
Trusty, or am I fighting the wind?
Specialiste sénior en systèmes d'exploitation | Senior OS specialist
Environnement Canada | Environment Canada
2121, route Transcanadienne | 2121 Transcanada Highway
Dorval, QC H9P 1J3
Téléphone | Telephone 514-421-5303
Télécopieur | Facsimile 514-421-7231
Gouvernement du Canada | Government of Canada
LAD'14 -- September 22-23, 2014 -- Reims, France
This is a last reminder for the next Lustre Admins & Devs workshop in Europe
(LAD'14), as registration ends in about one week, on Sept. 12, 2014!
Registration includes:
* Access to the talks; the agenda is online at http://www.eofs.eu/?id=lad14
* Monday and Tuesday lunches
* Monday's social event
* Monday's dinner
* Free coffee... ;-)
* Free shuttle from/to Reims downtown
Register here: http://lad.eofs.org/register.php
LAD is a great opportunity for people from Europe and worldwide who are
interested in Lustre® to gather and exchange their experiences, developments,
tools, good practices and more. Time and space are set aside for the
exchange of info and ideas directly with Lustre admins and developers.
LAD'14 sponsors are Bull, CEA, Cray, DataDirect Networks, Intel and
Xyratex, and all of them will send Lustre experts there, so you can talk
to them easily!
This year, LAD will take place in Reims, France, September 22-23 at
Domaine Pommery, a true traditional Champagne house! Reims is easily
reachable thanks to the French high-speed train, which connects the town
to hundreds of destinations and puts it only 45 minutes from Paris!
If you have any questions, please contact us at lad(a)eofs.eu.
We look forward to seeing you at LAD'14 in Reims!
for the LAD Organizing Team
has anyone used 'lfs migrate [--block]' to live-migrate lots of data?
did it work OK?
any hints for best usage? (e.g. how many migrations to run per OST)
the context is that we've doubled our number of OSTs and now need to
rebalance our ~1 PB of data by moving roughly half of it onto the new OSTs.
I have yet to chat to anyone who's used 'lfs migrate' (either directly
or via lfs_migrate) in production, so I'm being paranoid and looking
for comforting war stories where it's been used to shift around a lot
of data without problems...
documentation is a bit scarce -- maybe just 'lfs help migrate'.
but with --block it sounds pretty amazing.
it should be able to do the rebalance live and without downtime
(with some delays to file access).
we're using the latest(?) Intel Enterprise Lustre version 18.104.22.168
(which appears to be 2.5.2 based). we've heard via Intel support that
'lfs migrate' runs a verify pass, which sounds nice.
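For what it's worth, the pattern I'd expect to use (untested by me in production; the mount point, OST indices, and parallelism below are made-up examples) is to select files striped on the old OSTs with 'lfs find' and fan them out to the lfs_migrate wrapper:

```shell
# Assumed invocation (commented out -- needs a live Lustre file system):
#   lfs find /mnt/lustre --ost 0,1,2,3 -type f -print0 \
#     | xargs -0 -n1 -P4 lfs_migrate -y
#
# Dry-run illustration of the same NUL-separated fan-out, with echo
# standing in for lfs_migrate:
printf '%s\0' fileA fileB fileC | xargs -0 -n1 echo would-migrate
# prints:
#   would-migrate fileA
#   would-migrate fileB
#   would-migrate fileC
```

NUL-separated names (-print0 / -0) avoid breakage on paths with spaces; -P controls how many migrations run in parallel, which is where per-OST throttling would come in.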
Dr Robin Humble