Our MDT volume went read-only this morning, apparently due to a bad directory entry:
> Oct 27 09:00:56 mds2 kernel: LDISKFS-fs error (device dm-10):
> ldiskfs_dx_find_entry: bad entry in directory #85863408: rec_len % 4 !=
> 0 - block=42931201, offset=24(24), inode=0, rec_len=2049, name_len=0
Does this indicate a problem with the MDT block device/hardware?
We are using Lustre 2.5.29.
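One common first step for this kind of on-disk directory corruption is an offline check of the MDT with the Lustre-patched e2fsprogs; a minimal sketch, assuming the MDT is mounted at /mnt/mdt (the mount point is illustrative):
# Stop the MDT, then run a read-only check and review the output first:
umount /mnt/mdt
e2fsck -fn /dev/dm-10
# Only repair (e.g. e2fsck -fy /dev/dm-10) after reviewing what -n reports.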
I saw in the Lustre documentation that the CentOS version for Lustre 22.214.171.124 is 6.5.
I compiled Lustre on CentOS 6.5 for the OSSes and MDS, but I compiled the Lustre client on CentOS 6.7.
Has anybody already tried this configuration? Do you think that it can work?
Actually, I can mount the file system, but I don't know whether this setup is safe for production use.
Could you give me your opinion, please?
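For reference, a hedged sketch of rebuilding only the client modules against the CentOS 6.7 kernel (the kernel source path is illustrative; --disable-server restricts the build to the client side):
./configure --disable-server --with-linux=/usr/src/kernels/$(uname -r)
make rpms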
Can anyone explain the disadvantages of having the MGS and MDT
configured on a single server, compared to having the MGS and
MDT on different servers?
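For concreteness, a hedged sketch of the two layouts being compared (device names, fsname, and the MGS NID are all illustrative):
# Combined MGS+MDT on a single target:
mkfs.lustre --fsname=testfs --mgs --mdt --index=0 /dev/sdb
# Separate MGS, with the MDT pointing at it:
mkfs.lustre --mgs /dev/sdb
mkfs.lustre --fsname=testfs --mdt --index=0 --mgsnode=mgs@tcp0 /dev/sdc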
Thanks & Regards,
On 2015/10/07, 6:54 AM, "HPDD-discuss on behalf of Kurt Strosahl"
<hpdd-discuss-bounces@lists.01.org on behalf of strosahl@jlab.org> wrote:
> I'm interested in an oss configuration where you have two heads that
> serve up the osts, with the osts split evenly among the two heads... but
> for each head to be able to take over for the other in case of its
> failure. We are using zfs for the back end, and the documentation seems
> to imply that there isn't anything different when using that as a back
> end.
> I'm curious if anyone else out there has experimented with such a
> setup (ost failover with a zfs back end), and what potential issues I
> might encounter.
You definitely need to have reliable power control (STONITH) of the OSS
nodes using your HA scripts. Otherwise, there is the potential for ZFS to
mount the same filesystem on both nodes at the same time and severely
corrupt the filesystem.
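As a hedged illustration of the two pieces this implies (all node names, NIDs, addresses, and credentials below are hypothetical), declare both heads as service nodes when formatting each OST, and enable fencing in the HA stack, e.g. Pacemaker:
# Format a ZFS OST that either head may serve:
mkfs.lustre --fsname=testfs --ost --index=0 --backfstype=zfs \
    --mgsnode=mgs@tcp0 --servicenode=oss1@tcp0 --servicenode=oss2@tcp0 \
    ostpool/ost0 mirror /dev/sda /dev/sdb
# Enable STONITH via IPMI fencing so a failed head is powered off
# before its peer imports the pool:
pcs stonith create fence-oss1 fence_ipmilan ipaddr=oss1-bmc \
    login=admin passwd=secret pcmk_host_list=oss1
pcs property set stonith-enabled=true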
Lustre Software Architect
Intel High Performance Data Division
I currently have two Lustre filesystems (fs1, fs2) managed by the same MGS:
# cat /proc/fs/lustre/mgs/MGS/filesystems
The MGT is formatted to allow multiple filesystems:
# tunefs.lustre --dryrun /dev/mapper/mgt
Read previous values:
Mount type: ldiskfs
(MGS no_primnode )
Persistent mount opts: user_xattr,errors=remount-ro
I've unmounted all OSTs and the MDT belonging to fs2, and want to get rid of fs2 entirely.
How to achieve that?
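One approach that has been used for this, sketched here with illustrative mount points and assuming the MGT is ldiskfs as shown above, is to stop the MGS and delete fs2's configuration llogs; treat this as a sketch to verify against the manual, not a definitive procedure:
# Stop the MGS, then mount the MGT as plain ldiskfs:
umount /mnt/mgt
mount -t ldiskfs /dev/mapper/mgt /mnt/mgt
ls /mnt/mgt/CONFIGS/        # e.g. fs1-client, fs2-client, fs2-MDT0000, ...
rm /mnt/mgt/CONFIGS/fs2-*   # remove only fs2's configuration logs
umount /mnt/mgt
mount -t lustre /dev/mapper/mgt /mnt/mgt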