We have just installed Lustre 2.1.6 on SL6.4 systems. It is working
well. However, I find that I am unable to apply root squash parameters.
We have separate MGS and MDT machines. Under Lustre 1.8.4 this was not
an issue for root-squash commands applied on the MDT. But when I
modify the lctl conf_param command syntax to what I think should now
be appropriate, I run into difficulty.
[root@lmd02 tools]# lctl conf_param
No device found for name MGS: Invalid argument
This command must be run on the MGS.
error: conf_param: No such device
[root@mgs ~]# lctl conf_param
error: conf_param: Invalid argument
I have not yet looked at setting the "root_squash" value, as this
problem has stopped me cold. So, two questions:
1. Is this even possible with our split MGS/MDT machines?
2. If possible, what have I done wrong above?
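For reference, the syntax I believe should now be used on the MGS (the filesystem name, UID/GID, and NID values below are only placeholders) is along these lines:
  lctl conf_param lustre.mdt.root_squash=65534:65534
  lctl conf_param lustre.mdt.nosquash_nids="10.0.0.1@tcp"
but, as shown above, even a bare lctl conf_param fails for me on both nodes.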
I am installing Lustre 1.8.7 on RHEL 5.8 and having an issue while configuring the OST on one of the nodes. These are the steps I am performing while reading through the manual; please advise how to proceed further, and let me know if you find any issue with these steps:
1. I have 2 RHEL 5.8 hosts, one for MGS and another one for OSS
2. Configured static IP addresses on both hosts
3. Disabled SELinux in the /etc/selinux/config file
4. Provisioned one raw disk on each system, for the MDT and the OST; did not create any partitions on them
5. Installed the Lustre 1.8.7 RPM packages on both machines in the given order
o Am I missing any package, or is the order wrong?
6. Modified /etc/modprobe.conf to have: options lnet networks=tcp0(eth0)
7. Rebooted both the machines
8. Disabled iptables and ip6tables
9. On machine 1, the MDS/MGS server, successfully executed the following commands to create the MGS/MDT file system
a. mkfs.lustre --fsname=lustre --mgs --mdt /dev/sdb
b. mount -t lustre /dev/sdb /mnt
10. On machine 2, executed the following command, but it fails with the error shown below:
a. mkfs.lustre --fsname=lustre --ost --mgsnode=10.243.107.39@tcp0 /dev/sdb
i. Fails with the error:
1. mkfs.lustre: Can't make configs dir /tmp/mntRiWwg6/CONFIGS: Input/output error
2. mkfs.lustre FATAL: failed to write local files
3. mkfs.lustre: exiting with -1 (Unknown error 18446744073709551615)
b. I tried the same command on machine 1 as well, to have the MGS and OSS on the same machine, but I still get the same issue.
Please let me know whether this is similar to the approach you take, or whether I need to do something else. The next thing I am planning is to download and compile the Lustre 2.3 sources and repeat the same exercise on RHEL 6.3. Can you please let me know which source RPMs I need to download and build?
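For completeness, once the OST format succeeds, the next steps I expect from the manual (the mount points below are just what I plan to use, with the MGS NID from above) are to mount the OST and then mount a client:
  mount -t lustre /dev/sdb /mnt/ost                        (on machine 2)
  mount -t lustre 10.243.107.39@tcp0:/lustre /mnt/lustre   (on a client)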
The error message you're getting is what's expected if you still have SELinux enabled on that system. I don't think you need to worry about lctl or the LNET settings right now -- the mkfs.lustre is failing because it can't make the temporary directory.
I'd make sure SELinux is set to disabled in /etc/selinux/config.
The line that starts 'SELINUX=' should read 'SELINUX=disabled'.
You can check whether or not that setting has taken effect with:
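  getenforce
That should report "Disabled" once the change has taken effect; sestatus gives the same information in more detail.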
I'd suggest making sure of that config file change and restarting the system, then try your format command again.
- Patrick Farrell
Network failures may be transient. To *avoid invoking recovery*, the
client tries, initially, to re-send any timed-out request to the server.
-- What timeout is it referring to? /proc/fs/lustre/timeout (obd_timeout)? (The value I check is shown after these questions.)
-- Or is it the time after a target disconnect (due to transient network issues) and before the recovery starts for the obd_timeout period? Do we have this time defined anywhere?
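For concreteness, the value I have been checking (assuming it is the relevant one) is:
  lctl get_param timeout
which, as I understand it, is obd_timeout.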
IIRC, in the above scenario the recovery hasn't kicked in yet, and thus the client hasn't been evicted either. Please correct me if I'm wrong.
-- Can you point me to the source for the above, please?
If the resend also fails, the client tries to re-establish a connection to the server. "Clients can detect harmless partition upon reconnect if the server has not had any reason to evict the client."
-- How can clients detect a harmless partition upon reconnect? Can you point me to the source for the above, please?
-- What does "resend" above refer to: requests committed to the server but without any replies seen by the client, new requests with a transno higher than last_recvd, or something else?
Thanks for your time.
OpenSFS is happy to announce that LUG 2014 will be held in Miami, Florida,
from April 8-10, 2014. Please save the date!
OpenSFS <http://www.opensfs.org/>, in collaboration with EOFS
<http://www.eofs.eu>, is proud to host the 12th annual Lustre User Group
(LUG) conference. LUG continues to be the primary venue for discussion and
seminars on the Lustre® parallel file system and other open source file
system technologies. The event will include more than 50 sessions and
panels, where attendees have the opportunity to:
- Hear from the world's leading developers, administrators, solution
providers, and users of Lustre
- Be an active participant in industry dialogue on best practices
and emerging technologies
- Explore upcoming developments of the Lustre file system
- Immerse in the strong Lustre community, working collaboratively to
further the development of Lustre
As one of the many benefits of being a member of OpenSFS -- the organization
that drives HPC open source file system community efforts -- OpenSFS is
proud to offer one complimentary pass to each current member entity of
OpenSFS. Details are coming soon with the opening of the LUG 2014 registration.
The venue for LUG 2014 is the
Miami Marriott Biscayne Bay, located right on Biscayne Bay and just minutes
from South Beach and other famous Miami attractions. Please visit the LUG
2013 web page <http://www.opensfs.org/events/lug13/> to view the agenda and
caliber of presentations of past LUG events. With the ongoing growth and
outstanding progress of Lustre, the 2014 event promises to be an exciting one.
If your company is interested in sponsoring LUG 2014 -- reaching more than
200 focused Lustre attendees -- OpenSFS has a number of sponsorship
opportunities available. Please contact admin(a)opensfs.org to learn more
about these opportunities including onsite presence, social events, meals,
speaking opportunities, and more.
If you have any questions, please feel free to contact OpenSFS. We look
forward to seeing you in Miami at LUG 2014!
3855 SW 153rd Drive Beaverton, OR 97006 USA
Phone: +1 503-619-0561 | Fax: +1 503-644-6708
Twitter: <https://twitter.com/opensfs> @OpenSFS
Email: admin(a)opensfs.org | Website: http://www.opensfs.org/
* Lustre is a registered trademark of Xyratex Technology Ltd.
[Apologies if you received multiple copies of this email. ]
The 8th Parallel Data Storage Workshop (PDSW13)
held in conjunction with IEEE/ACM Supercomputing (SC) 2013
Denver, Colorado, Monday, November 18, 2013
URL: http://www.pdsw.org/
Peta- and exascale computing infrastructures make unprecedented demands on
storage capacity, performance, concurrency, reliability, availability, and
manageability. This one-day workshop focuses on the data storage and
management problems and emerging solutions found in peta- and exascale
scientific computing environments, with special attention to issues in
which community collaboration can be crucial for problem identification,
workload capture, solution interoperability, standards with community
buy-in, and shared tools. Addressing storage media ranging from tape, HDD,
and SSD, to new media like NVRAM, the workshop seeks contributions on
relevant topics, including but not limited to:
- performance and benchmarking
- failure tolerance problems and solutions
- APIs for high performance features
- parallel file systems
- high bandwidth storage architectures
- support for high velocity or complex data
- metadata intensive workloads
- autonomics for HPC storage
- virtualization for storage systems
- archival storage advances
- resource management innovations
- incorporation of emerging storage technologies.
The Parallel Data Storage Workshop holds a peer reviewed competitive
process for selecting short papers. Submit a previously unpublished
short paper of up to 5 pages, not less than 10 point font and not
including references, in a PDF file as instructed on the workshop web
site. Submitted papers will be reviewed under the supervision of the
workshop program committee. Submissions should indicate authors and
affiliations. Final papers must not be longer than 5 pages (excluding
references). Selected papers and associated talk slides will be made
available on the workshop web site; the papers will also be published in
the digital library of the IEEE or ACM.
Paper Submission Deadline: Sun, Oct. 6, 2013, 11:59 pm EDT
Paper Notification: Fri, Oct. 25, 2013
Camera Ready Due: Wed, Nov. 13, 2013
Softcopy and Slides Due: Sat, Nov. 16, 2013, 5:00 pm ET, BEFORE the workshop
There will also be a poster session at the workshop; accepted papers will
ALWAYS be accepted for a poster. Others interested in presenting a related
technical poster (posters with technical results for storage products are
also encouraged) should submit a short poster abstract as instructed on
the workshop web site.
Poster Submission Deadline: Tuesday, Nov. 12, 2013
Poster Notification: Thursday, Nov. 14, 2013
Program Committee:
Karsten Schwan, Georgia Tech (PC Chair)
Dean Hildebrand, IBM (PC Chair)
Ahmed Amer, Santa Clara University
John Bent, EMC
Randal Burns, Johns Hopkins University
Andreas Dilger, Intel
Fred Douglis, EMC
Garth Gibson, Carnegie Mellon University and Panasas Inc.
Peter Honeyman, University of Michigan
Song Jiang, Wayne State University
Carlos Maltzahn, University of California, Santa Cruz
Meghan Wingate McClelland, Xyratex
Ron Oldfield, Sandia National Laboratories
Narasimha Reddy, Texas A&M University
Robert Ross, Argonne National Laboratory
Keith A. Smith, NetApp
Yuan Tian, Oak Ridge National Laboratory
Steering Committee:
John Bent, EMC
Scott Brandt, University of California, Santa Cruz
Evan J. Felix, Pacific Northwest National Laboratory
Garth A. Gibson, Carnegie Mellon University and Panasas Inc.
Gary Grider, Los Alamos National Laboratory
Peter Honeyman, University of Michigan
Bill Kramer, National Center for Supercomputing Applications
University of Illinois Urbana-Champaign
Darrell Long, University of California, Santa Cruz
Carlos Maltzahn, University of California, Santa Cruz
Rob Ross, Argonne National Laboratory
Philip C. Roth, Oak Ridge National Laboratory
John Shalf, National Energy Research Scientific Computing Center
Lawrence Berkeley National Laboratory
Lee Ward, Sandia National Laboratories
I've got a question about locating files on a given OST. First some background.
We recently had 1 hard drive failure in conjunction with 2 other drives developing SMART errors in short order on one of our OSS's. This OSS is running hardware RAID6 so we didn't lose the raid array or the OST's (there are 4 OST's on this OSS) but the controller is now reporting that there are some bad stripes on the array. The only way to clear the bad stripes is to reformat the array so we are now in the process of migrating the data off of these OST's. So we got a list of the files on those OST's:
lfs find /lustre/ --obd hpfs-eg3-OST002c > oss11ost1
lfs find /lustre/ --obd hpfs-eg3-OST002d > oss11ost2
lfs find /lustre/ --obd hpfs-eg3-OST002e > oss11ost3
lfs find /lustre/ --obd hpfs-eg3-OST002f > oss11ost4
And now we're using lfs_migrate to move the files to other OST's. The migration is ongoing and, fortunately, the data corruption due to the bad stripes seems to be minimal. The intention is, once we think the migration is done, to repeat the lfs find commands to verify the OST's are indeed empty. My question: is there a faster way to get the file list? We ran those lfs find commands in parallel, but it took over 12 hours for them to complete, and they found about 28 million files. The MDT is built on top of LVM, so I was imagining something along the lines of taking an LVM snapshot, mounting that read-only, and extracting the info. But I know nothing about the file system structure, so this may not even be possible. Any advice would be appreciated - either about this approach or something else that would be faster.
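In case it's useful context, the migration itself is being driven by feeding those file lists into lfs_migrate, roughly like this (one run per list; -y just suppresses the confirmation prompt):
  lfs_migrate -y < oss11ost1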