RHEL6.5 support
by Adesanya, Adeyemi
Hi.
Do we have any timetable for RHEL 6.5 support? I just discovered that the 2.4.1 client does not build against the 2.6.32-431 kernel.
--------
Yemi
Re: [HPDD-discuss] [Lustre-discuss] lfs_migrate errors
by Dilger, Andreas
I saw this same problem on my system - it happens when trying to migrate a file created with Lustre 1.8.
See https://jira.hpdd.intel.com/browse/LU-4293 for details.
I have an updated version of lfs_migrate that works around this problem, which I should push to Gerrit. The patch will be linked to the above bug when it is ready.
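Until that update is available, a copy-based migration is a reasonable stopgap for files that refuse the layout swap. This is only a sketch (the path and stripe count are placeholders, not taken from this thread), and the file must not be in use while it is copied:
# create a temporary file with the desired new layout, copy, then rename
lfs setstripe -c -1 /lustre/dir/file.tmp
cp -a /lustre/dir/file /lustre/dir/file.tmp
mv /lustre/dir/file.tmp /lustre/dir/file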
Cheers, Andreas
On Dec 17, 2013, at 18:43, "Peter Mistich" <peter.mistich(a)rackspace.com> wrote:
Hello,
today I added an OST and am trying to rebalance; when I run
lfs_migrate I get "cannot swap layouts between <filename> and a volatile
file (Operation not permitted)".
I am running lustre-2.5.52
any help would be great.
Thanks,
Pete
Re: [HPDD-discuss] Infiniband & Lustre Module Unloading on RHEL 6.4
by Patrick Farrell
Andrew, Rob,
We may be seeing some related behavior here as well.
Can you give any more details about the errors you're getting on
shutdown/unmount?
Our problem manifests as the MDS being unable to unmount because it is
waiting for communication from the clients while unmounting/shutting down.
(Eventually, messages about hung threads appear on the MDS.) It may not be
the same thing (we're seeing it with 2.5 and have only begun seeing it
recently), but it is similar and it is happening on systems using IB.
--
Patrick Farrell
Developer, IO File Systems
Cray, Inc.
False: No Space Left on device or inode limitation reached?
by Kumar, Amit
Dear All,
Lustre: v1.8.5
One of our users is running into this error: "IOError: [Errno 28] No space left on device: "
Although this is a Python error, the user hits it repeatedly on the Lustre file system over the life of his job. I am trying to figure out whether there is a way to tell if we have hit the Lustre/ext3 limit on the number of files in a directory, although I do not think that is the case, because I have read that the ext3 limit is 15 million files per directory. I have also checked ulimit for the user and found no issues there. We are also only at 60% of disk capacity on the Lustre file system, so we have not hit a space limit.
Currently the user generates over 3.5 million files during a job run. After the failure we tested creating files in the same directory, both manually and by script, and it worked without error.
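One quick check worth making (commands are a sketch; /lustre and the username are placeholders): on Lustre, ENOSPC can come from a single full OST or from the MDT running out of inodes, even when overall usage looks low.
# per-target block usage - one full OST is enough to return ENOSPC
lfs df -h /lustre
# per-target inode usage - watch the MDT line in particular
lfs df -i /lustre
# per-user limits, if quotas are enabled
lfs quota -u username /lustre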
Any insight into this will be greatly appreciated.
Regards,
Amit H. Kumar
Re: [HPDD-discuss] [Lustre-discuss] Setting up a lustre zfs dual mgs/mdt over tcp - help requested
by Dilger, Andreas
On 2013/12/17 9:37 AM, "Sten Wolf" <sten(a)checkpalm.com> wrote:
>This is my situation:
>I have 2 nodes, MDS1 and MDS2 (10.0.0.22, 10.0.0.23), which I wish to use
>as a failover MGS and active/active MDTs with ZFS.
>I have a JBOD shelf with 12 disks, seen by both nodes as DAS (the shelf
>has 2 SAS ports, connected to a SAS HBA on each node), and I am using
>Lustre 2.4 on CentOS 6.4 x64.
If you are using ZFS + DNE (multiple MDTs), I'd strongly recommend using
Lustre 2.5 instead of 2.4. There were quite a few fixes in 2.5 for both of
those features (which are both new in 2.4). Also, Lustre 2.5 is the new
long-term maintenance stream, so there will be regular updates for that
version.
I have to admit that the combination of those two features has been tested
less than either ZFS + 1 MDT or ldiskfs + 2+ MDTs separately. There are
also a couple of known performance issues with the interaction of these
features that are not yet fixed.
I do expect that this combination is working, but there will likely be
some issues that haven't been seen before.
Cheers, Andreas
>I have created 3 zfs pools:
>1. mgs:
># zpool create -f -o ashift=12 -O canmount=off lustre-mgs mirror
>/dev/disk/by-id/wwn-0x50000c0f012306fc
>/dev/disk/by-id/wwn-0x50000c0f01233aec
># mkfs.lustre --mgs --servicenode=mds1@tcp0 --servicenode=mds2@tcp0
>--param sys.timeout=5000 --backfstype=zfs lustre-mgs/mgs
>
> Permanent disk data:
>Target: MGS
>Index: unassigned
>Lustre FS:
>Mount type: zfs
>Flags: 0x1064
> (MGS first_time update no_primnode )
>Persistent mount opts:
>Parameters: failover.node=10.0.0.22@tcp failover.node=10.0.0.23@tcp
>sys.timeout=5000
>
>2 mdt0:
># zpool create -f -o ashift=12 -O canmount=off lustre-mdt0 mirror
>/dev/disk/by-id/wwn-0x50000c0f01d07a34
>/dev/disk/by-id/wwn-0x50000c0f01d110c8
># mkfs.lustre --mdt --fsname=fs0 --servicenode=mds1@tcp0
>--servicenode=mds2@tcp0 --param sys.timeout=5000 --backfstype=zfs
>--mgsnode=mds1@tcp0 --mgsnode=mds2@tcp0 lustre-mdt0/mdt0
>warning: lustre-mdt0/mdt0: for Lustre 2.4 and later, the target index
>must be specified with --index
>
> Permanent disk data:
>Target: fs0:MDT0000
>Index: 0
>Lustre FS: fs0
>Mount type: zfs
>Flags: 0x1061
> (MDT first_time update no_primnode )
>Persistent mount opts:
>Parameters: failover.node=10.0.0.22@tcp failover.node=10.0.0.23@tcp
>sys.timeout=5000 mgsnode=10.0.0.22@tcp mgsnode=10.0.0.23@tcp
>
>checking for existing Lustre data: not found
>mkfs_cmd = zfs create -o canmount=off -o xattr=sa lustre-mdt0/mdt0
>Writing lustre-mdt0/mdt0 properties
> lustre:version=1
> lustre:flags=4193
> lustre:index=0
> lustre:fsname=fs0
> lustre:svname=fs0:MDT0000
> lustre:failover.node=10.0.0.22@tcp
> lustre:failover.node=10.0.0.23@tcp
> lustre:sys.timeout=5000
> lustre:mgsnode=10.0.0.22@tcp
> lustre:mgsnode=10.0.0.23@tcp
>
>3. mdt1:
># zpool create -f -o ashift=12 -O canmount=off lustre-mdt1 mirror
>/dev/disk/by-id/wwn-0x50000c0f01d113e0
>/dev/disk/by-id/wwn-0x50000c0f01d116fc
># mkfs.lustre --mdt --fsname=fs0 --servicenode=mds2@tcp0
>--servicenode=mds1@tcp0 --param sys.timeout=5000 --backfstype=zfs
>--index=1 --mgsnode=mds1@tcp0 --mgsnode=mds2@tcp0 lustre-mdt1/mdt1
>
> Permanent disk data:
>Target: fs0:MDT0001
>Index: 1
>Lustre FS: fs0
>Mount type: zfs
>Flags: 0x1061
> (MDT first_time update no_primnode )
>Persistent mount opts:
>Parameters: failover.node=10.0.0.23@tcp failover.node=10.0.0.22@tcp
>sys.timeout=5000 mgsnode=10.0.0.22@tcp mgsnode=10.0.0.23@tcp
>
>checking for existing Lustre data: not found
>mkfs_cmd = zfs create -o canmount=off -o xattr=sa lustre-mdt1/mdt1
>Writing lustre-mdt1/mdt1 properties
> lustre:version=1
> lustre:flags=4193
> lustre:index=1
> lustre:fsname=fs0
> lustre:svname=fs0:MDT0001
> lustre:failover.node=10.0.0.23@tcp
> lustre:failover.node=10.0.0.22@tcp
> lustre:sys.timeout=5000
> lustre:mgsnode=10.0.0.22@tcp
> lustre:mgsnode=10.0.0.23@tcp
>
>a few basic sanity checks:
># zfs list
>NAME USED AVAIL REFER MOUNTPOINT
>lustre-mdt0 824K 3.57T 136K /lustre-mdt0
>lustre-mdt0/mdt0 136K 3.57T 136K /lustre-mdt0/mdt0
>lustre-mdt1 716K 3.57T 136K /lustre-mdt1
>lustre-mdt1/mdt1 136K 3.57T 136K /lustre-mdt1/mdt1
>lustre-mgs 4.78M 3.57T 136K /lustre-mgs
>lustre-mgs/mgs 4.18M 3.57T 4.18M /lustre-mgs/mgs
>
># zpool list
>NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
>lustre-mdt0 3.62T 1.00M 3.62T 0% 1.00x ONLINE -
>lustre-mdt1 3.62T 800K 3.62T 0% 1.00x ONLINE -
>lustre-mgs 3.62T 4.86M 3.62T 0% 1.00x ONLINE -
>
># zpool status
> pool: lustre-mdt0
> state: ONLINE
> scan: none requested
>config:
>
> NAME STATE READ WRITE CKSUM
> lustre-mdt0 ONLINE 0 0 0
> mirror-0 ONLINE 0 0 0
> wwn-0x50000c0f01d07a34 ONLINE 0 0 0
> wwn-0x50000c0f01d110c8 ONLINE 0 0 0
>
>errors: No known data errors
>
> pool: lustre-mdt1
> state: ONLINE
> scan: none requested
>config:
>
> NAME STATE READ WRITE CKSUM
> lustre-mdt1 ONLINE 0 0 0
> mirror-0 ONLINE 0 0 0
> wwn-0x50000c0f01d113e0 ONLINE 0 0 0
> wwn-0x50000c0f01d116fc ONLINE 0 0 0
>
>errors: No known data errors
>
> pool: lustre-mgs
> state: ONLINE
> scan: none requested
>config:
>
> NAME STATE READ WRITE CKSUM
> lustre-mgs ONLINE 0 0 0
> mirror-0 ONLINE 0 0 0
> wwn-0x50000c0f012306fc ONLINE 0 0 0
> wwn-0x50000c0f01233aec ONLINE 0 0 0
>
>errors: No known data errors
># zfs get lustre:svname lustre-mgs/mgs
>NAME PROPERTY VALUE SOURCE
>lustre-mgs/mgs lustre:svname MGS local
># zfs get lustre:svname lustre-mdt0/mdt0
>NAME PROPERTY VALUE SOURCE
>lustre-mdt0/mdt0 lustre:svname fs0:MDT0000 local
># zfs get lustre:svname lustre-mdt1/mdt1
>NAME PROPERTY VALUE SOURCE
>lustre-mdt1/mdt1 lustre:svname fs0:MDT0001 local
>
>So far, so good.
>My /etc/ldev.conf:
>mds1 mds2 MGS zfs:lustre-mgs/mgs
>mds1 mds2 fs0-MDT0000 zfs:lustre-mdt0/mdt0
>mds2 mds1 fs0-MDT0001 zfs:lustre-mdt1/mdt1
>
>my /etc/modprobe.d/lustre.conf
># options lnet networks=tcp0(em1)
>options lnet ip2nets="tcp0 10.0.0.[22,23]; tcp0 10.0.0.*;"
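(A quick sanity check of that LNET configuration, shown only as a sketch - the addresses match the ones used above, but the commands are generic:)
# on each MDS, after the modules are loaded, confirm the local NID
lctl list_nids
# and confirm the peer is reachable over tcp0
lctl ping 10.0.0.23@tcp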
>--------------------------------------------------------------------------
>---
>
>Now, when starting the services, I get strange errors:
># service lustre start local
>Mounting lustre-mgs/mgs on /mnt/lustre/local/MGS
>Mounting lustre-mdt0/mdt0 on /mnt/lustre/local/fs0-MDT0000
>mount.lustre: mount lustre-mdt0/mdt0 at /mnt/lustre/local/fs0-MDT0000
>failed: Input/output error
>Is the MGS running?
># service lustre status local
>running
>
>attached lctl-dk.local01
>
>If I run the same command again, I get a different error:
>
># service lustre start local
>Mounting lustre-mgs/mgs on /mnt/lustre/local/MGS
>mount.lustre: according to /etc/mtab lustre-mgs/mgs is already mounted
>on /mnt/lustre/local/MGS
>Mounting lustre-mdt0/mdt0 on /mnt/lustre/local/fs0-MDT0000
>mount.lustre: mount lustre-mdt0/mdt0 at /mnt/lustre/local/fs0-MDT0000
>failed: File exists
>
>attached lctl-dk.local02
>
>What am I doing wrong?
>I have tested lnet self-test as well, using the following script:
># cat lnet-selftest.sh
>#!/bin/bash
>export LST_SESSION=$$
>lst new_session read/write
>lst add_group servers 10.0.0.[22,23]@tcp
>lst add_group readers 10.0.0.[22,23]@tcp
>lst add_group writers 10.0.0.[22,23]@tcp
>lst add_batch bulk_rw
>lst add_test --batch bulk_rw --from readers --to servers \
>brw read check=simple size=1M
>lst add_test --batch bulk_rw --from writers --to servers \
>brw write check=full size=4K
># start running
>lst run bulk_rw
># display server stats for 30 seconds
>lst stat servers & sleep 30; kill $!
># tear down
>lst end_session
>
>and it seemed ok
># modprobe lnet-selftest && ssh mds2 modprobe lnet-selftest
># ./lnet-selftest.sh
>SESSION: read/write FEATURES: 0 TIMEOUT: 300 FORCE: No
>10.0.0.[22,23]@tcp are added to session
>10.0.0.[22,23]@tcp are added to session
>10.0.0.[22,23]@tcp are added to session
>Test was added successfully
>Test was added successfully
>bulk_rw is running now
>[LNet Rates of servers]
>[R] Avg: 19486 RPC/s Min: 19234 RPC/s Max: 19739 RPC/s
>[W] Avg: 19486 RPC/s Min: 19234 RPC/s Max: 19738 RPC/s
>[LNet Bandwidth of servers]
>[R] Avg: 1737.60 MB/s Min: 1680.70 MB/s Max: 1794.51 MB/s
>[W] Avg: 1737.60 MB/s Min: 1680.70 MB/s Max: 1794.51 MB/s
>[LNet Rates of servers]
>[R] Avg: 19510 RPC/s Min: 19182 RPC/s Max: 19838 RPC/s
>[W] Avg: 19510 RPC/s Min: 19182 RPC/s Max: 19838 RPC/s
>[LNet Bandwidth of servers]
>[R] Avg: 1741.67 MB/s Min: 1679.51 MB/s Max: 1803.83 MB/s
>[W] Avg: 1741.67 MB/s Min: 1679.51 MB/s Max: 1803.83 MB/s
>[LNet Rates of servers]
>[R] Avg: 19458 RPC/s Min: 19237 RPC/s Max: 19679 RPC/s
>[W] Avg: 19458 RPC/s Min: 19237 RPC/s Max: 19679 RPC/s
>[LNet Bandwidth of servers]
>[R] Avg: 1738.87 MB/s Min: 1687.28 MB/s Max: 1790.45 MB/s
>[W] Avg: 1738.87 MB/s Min: 1687.28 MB/s Max: 1790.45 MB/s
>[LNet Rates of servers]
>[R] Avg: 19587 RPC/s Min: 19293 RPC/s Max: 19880 RPC/s
>[W] Avg: 19586 RPC/s Min: 19293 RPC/s Max: 19880 RPC/s
>[LNet Bandwidth of servers]
>[R] Avg: 1752.62 MB/s Min: 1695.38 MB/s Max: 1809.85 MB/s
>[W] Avg: 1752.62 MB/s Min: 1695.38 MB/s Max: 1809.85 MB/s
>[LNet Rates of servers]
>[R] Avg: 19528 RPC/s Min: 19232 RPC/s Max: 19823 RPC/s
>[W] Avg: 19528 RPC/s Min: 19232 RPC/s Max: 19824 RPC/s
>[LNet Bandwidth of servers]
>[R] Avg: 1741.63 MB/s Min: 1682.29 MB/s Max: 1800.98 MB/s
>[W] Avg: 1741.63 MB/s Min: 1682.29 MB/s Max: 1800.98 MB/s
>session is ended
>./lnet-selftest.sh: line 17: 8835 Terminated lst stat
>servers
>
>
>Addendum: I can start the MGS service on the 2nd node, and then start
>the mdt0 service on the local node:
># ssh mds2 service lustre start MGS
>Mounting lustre-mgs/mgs on /mnt/lustre/foreign/MGS
># service lustre start fs0-MDT0000
>Mounting lustre-mdt0/mdt0 on /mnt/lustre/local/fs0-MDT0000
># service lustre status
>unhealthy
># service lustre status local
>running
>
Cheers, Andreas
--
Andreas Dilger
Lustre Software Architect
Intel High Performance Data Division
Infiniband & Lustre Module Unloading on RHEL 6.4
by Andrew Wagner
Hello all,
I've recently started working with Lustre, setting up a couple of new
filesystems on RHEL 6.4 with Lustre 2.4 from the ZFS repository (we're
running Lustre on ZFS), on an InfiniBand network using OpenIB from the
Red Hat repositories.
I've encountered a problem and am curious whether anyone else has seen it.
When shutting down machines with Lustre OSTs mounted, the default shutdown
scripts hang when the OpenIB modules begin to unload. This is because the
Lustre/LNET stop scripts do not completely unload the Lustre modules.
While investigating, I found that the following sequence successfully
unloads the Lustre modules so that the IB modules can also unload:
1. Stop Lustre
2. Stop LNET (outputs "ERROR: Module osc has non-zero reference count.")
3. Run lustre_rmmod (outputs "Modules still loaded:
lnet/klnds/o2iblnd/ko2iblnd.o lnet/lnet/lnet.o libcfs/libcfs/libcfs.o")
4. Stop LNET again to unload the three remaining modules.
I've written this into a shutdown script, which works as a solution, but
does not address the underlying problem.
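For reference, a minimal version of such a wrapper might look like the sketch below; it assumes the stock "lustre" and "lnet" init scripts and is meant as an outline rather than a tested script:
#!/bin/bash
# unload Lustre/LNET modules before the OpenIB modules try to unload
service lustre stop      # unmount targets and stop Lustre
service lnet stop        # first pass; may leave osc and friends loaded
lustre_rmmod             # remove the remaining Lustre modules
service lnet stop        # second pass unloads ko2iblnd, lnet and libcfs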
Has anyone else seen this behavior?
--
Andrew Wagner
Research System Administrator
Technical Computing
UW-Space Science and Engineering
AOSS Room 439
Lustre Client 2.4.1 on Ubuntu Precise 12.04 with Mellanox OFED gen2
by Patrice Hamelin
Hi,
Has anybody ever successfully compiled the Lustre 2.4.1 client on Ubuntu
Precise 12.04 with Mellanox OFED 2.0.3? I am stuck on this error:
mel-bc1e41-be14:/usr/src/lustre-2.4.1# ./configure
--with-o2ib=/usr/src/mlnx-ofed-kernel-2.0 --disable-server
checking build system type... x86_64-unknown-linux-gnu
.
.
.
checking whether to enable OpenIB gen2 support... no
configure: error: can't compile with OpenIB gen2 headers under
/usr/src/mlnx-ofed-kernel-2.0
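In similar reports the underlying cause is usually that the directory passed to --with-o2ib does not contain compiled OFED kernel headers plus a Module.symvers. The paths below are assumptions (not taken from this message), but checking for them is a reasonable first step:
# does the tree passed to --with-o2ib actually contain the rdma headers?
find /usr/src/mlnx-ofed-kernel-2.0 -name rdma_cm.h
# configure also needs the symbol versions from a compiled OFED build
ls /usr/src/mlnx-ofed-kernel-2.0/Module.symvers
# Mellanox OFED usually installs a compiled tree under /usr/src/ofa_kernel;
# pointing configure there may work (path is an assumption)
./configure --with-o2ib=/usr/src/ofa_kernel/default --disable-server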
I tried a couple of patches/hacks found on Google but without success.
Thanks.
--
Patrice Hamelin
Specialiste sénior en systèmes d'exploitation | Senior OS specialist
Gouvernement du Canada | Government of Canada
MDT does not start after writeconf
by Laifer, Roland (SCC)
Dear list,
after a writeconf we cannot mount the MDT. This happened with
Lustre 2.1.3. Any hints for fixing this problem would be greatly
appreciated. For details and log messages see below.
Details:
The file system is more than 5 years old and was created with Lustre 1.6.
Later it ran with Lustre 1.8, and we upgraded to version 2.1.3 a year ago.
Since then we have had very few problems. However, we frequently got
LustreError messages on clients because some applications wanted to use
ACLs and ACLs were not enabled. In order to change the ACL configuration
we did a writeconf, which was probably a bad idea, since afterwards the
MDT did not start. Removing pfs1work-MDT0000 on the MGS/MDS or
pfs1work-client on the MGS did not help. Upgrading to version 2.1.6 on
the MDS and MDT did not fix the problem either. We made a backup of the
MDT device and downgraded the MDS and MDT to version 1.8, since the
writeconf had worked with that version, and indeed we were able to start
the MDT. However, after upgrading to version 2.1.3 again, the MDT no
longer mounts. We ran a read-only e2fsck on the MDT and it did not find
any problems. We are wondering whether an upgrade to version 2.4 would
fix the problem.
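For readers following along, the writeconf referred to above is the standard procedure for regenerating the configuration logs; roughly (device paths are placeholders), the mount failures below come from the remount step of such a sequence:
# with all clients and targets unmounted
tunefs.lustre --writeconf /dev/<mdt_device>   # on the MDS (and MGS, if separate)
tunefs.lustre --writeconf /dev/<ost_device>   # on every OSS, for each OST
# then remount in order: MGS first, then the MDT, then the OSTs
mount -t lustre /dev/<mdt_device> /mnt/mdt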
Here are the messages from the MDS:
Dec 11 19:28:18 pfs1n2 kernel: [19046.838713] LDISKFS-fs (dm-6): mounted
filesystem with ordered data mode
Dec 11 19:28:18 pfs1n2 kernel: [19046.855112] Lustre:
MGC172.26.1.1@o2ib: Reactivating import
Dec 11 19:28:18 pfs1n2 kernel: [19046.922722] Lustre: Enabling ACL
Dec 11 19:28:18 pfs1n2 kernel: [19047.259229] LustreError:
28547:0:(mdd_device.c:1164:mdd_prepare()) Error(-2) initializing .lustre
objects
Dec 11 19:28:18 pfs1n2 kernel: [19047.337228] LustreError:
28547:0:(mdt_handler.c:4606:mdt_init0()) Can't init device stack, rc -2
Dec 11 19:28:18 pfs1n2 kernel: [19047.417024] LustreError:
28547:0:(obd_config.c:565:class_setup()) setup pfs1work-MDT0000 failed (-2)
Dec 11 19:28:18 pfs1n2 kernel: [19047.426650] LustreError:
28547:0:(obd_config.c:1491:class_config_llog_handler()) Err -2 on cfg
command:
Dec 11 19:28:19 pfs1n2 kernel: [19047.436520] Lustre: cmd=cf003
0:pfs1work-MDT0000 1:pfs1work-MDT0000_UUID 2:0
3:pfs1work-MDT0000-mdtlov 4:f
Dec 11 19:28:19 pfs1n2 kernel: [19047.447504] LustreError: 15c-8:
MGC172.26.1.1@o2ib: The configuration from log 'pfs1work-MDT0000' failed
(-2). This may be the result of communication errors between this node
and the MGS, a bad configuration, or other errors. See the syslog for
more information.
Dec 11 19:28:19 pfs1n2 kernel: [19047.471946] LustreError:
28516:0:(obd_mount.c:1192:server_start_targets()) failed to start server
pfs1work-MDT0000: -2
Dec 11 19:28:19 pfs1n2 kernel: [19047.483194]
LustreError:28516:0:(obd_mount.c:1738:server_fill_super()) Unable to
start targets: -2
Dec 11 19:28:19 pfs1n2 kernel: [19047.492704] LustreError:
28516:0:(obd_config.c:610:class_cleanup()) Device 2 not setup
Dec 11 19:28:19 pfs1n2 kernel: [19047.501082] LustreError:
28516:0:(ldlm_request.c:1174:ldlm_cli_cancel_req()) Got rc -108 from
cancel RPC: canceling anyway
Dec 11 19:28:19 pfs1n2 kernel: [19047.512672] LustreError:
28516:0:(ldlm_request.c:1801:ldlm_cli_cancel_list())
ldlm_cli_cancel_list: -108
Dec 11 19:28:19 pfs1n2 kernel: [19047.554700] Lustre: server umount
pfs1work-MDT0000 complete
Dec 11 19:28:19 pfs1n2 kernel: [19047.560542] LustreError:
28516:0:(obd_mount.c:2203:lustre_fill_super()) Unable to mount (-2)
At the same time on the MGS:
Dec 11 19:27:47 pfs1n1 kernel: [18002.010225] LDISKFS-fs (dm-6): mounted
filesystem with ordered data mode
Dec 11 19:27:47 pfs1n1 kernel: [18002.029854] Lustre: MGS MGS started
Dec 11 19:27:47 pfs1n1 kernel: [18002.034066] Lustre: 23937:0
(ldlm_lib.c:952:target_handle_connect()) MGS: connection from
628d6315-3333-d644-d0b0-314bb162402d@0@lo t0 exp (null) cur 1386786467
last 0
Dec 11 19:27:47 pfs1n1 kernel: [18002.049753] Lustre:
23937:0:(ldlm_lib.c:952:target_handle_connect()) Skipped 1 previous
similar message
Dec 11 19:27:47 pfs1n1 kernel: [18002.060070] Lustre:
MGC172.26.1.1@o2ib: Reactivating import
Dec 11 19:28:03 pfs1n1 kernel: [18018.052321] Lustre:
23937:0:(ldlm_lib.c:952:target_handle_connect()) MGS: connection from
2abc20b4-6abb-0604-5760-8f458c1ef6c6@172.26.23.231@o2ib t0 exp (null)
cur 1386786483 last 0
Dec 11 19:28:18 pfs1n1 kernel: [18032.725732] Lustre: MGS: Logs for fs
pfs1work were removed by user request. All servers must be restarted in
order to regenerate the logs.
Dec 11 19:28:18 pfs1n1 kernel: [18032.740584] Lustre: Setting parameter
pfs1work-MDT0000.mdd.quota_type in log pfs1work-MDT0000
Thanks,
Roland
--
Karlsruhe Institute of Technology (KIT)
Steinbuch Centre for Computing (SCC)
Roland Laifer
Scientific Computing und Simulation (SCS)
Zirkel 2, Building 20.21, Room 209
76131 Karlsruhe, Germany
Phone: +49 721 608 44861
Fax: +49 721 32550
Email: roland.laifer(a)kit.edu
Web: http://www.scc.kit.edu
KIT – University of the State of Baden-Wuerttemberg and
National Laboratory of the Helmholtz Association
Gerrit (review.whamcloud.com) upgrade
by Joshua J. Kugler
Howdy! This e-mail is to let you know about upcoming changes to a key
Whamcloud/Intel HPDD service.
We will soon be upgrading our code review tool, Gerrit. This will take place
on December 20th at 5PM Pacific time, and review.whamcloud.com will be down
for the duration of this upgrade. We anticipate this will take no longer than
three hours.
We will send out a reminder e-mail the day before the upgrade.
Please contact joshua.kugler(a)intel.com with any questions or concerns.
--
Dev/Ops Lead
High Performance Data Division (formerly Whamcloud)
Intel
Lustre 2.6 update - December 6th 2013
by Jones, Peter A
Hi there
Here is an update on the Lustre 2.6 release.
Landings
========
-A number of landings have been made: http://git.whamcloud.com/?p=fs/lustre-release.git;a=shortlog;h=refs/heads...
Testing
=======
-Testing on 2.5.51 tag is complete; testing on the 2.5.52 tag is underway
Blockers
========
-https://jira.hpdd.intel.com/issues/?jql=project%20%3D%20LU%20AND%20fixVersion%20%3D%20%22Lustre%202.6.0%22%20AND%20resolution%20%3D%20Unresolved%20AND%20priority%20%3D%20Blocker%20ORDER%20BY%20key%20DESC
-If there are any issues not presently marked as blockers that you believe should be, please let me know
Other
=====
-Master is presently open for feature landings; feature freeze is January 31st
Thanks
Peter
PS/ You can also keep up to date with matters relating to the 2.6 release on the CDWG wiki - http://wiki.opensfs.org/Lustre_2.6.0