lustre 2.5 MDT bad entry in directory
by Chris Hunter
Hi,
Our MDT volume went read-only this morning, apparently due to a bad
directory entry:
> Oct 27 09:00:56 mds2 kernel: LDISKFS-fs error (device dm-10):
> ldiskfs_dx_find_entry: bad entry in directory #85863408: rec_len % 4 !=
> 0 - block=42931201, offset=24(24), inode=0, rec_len=2049, name_len=0
Does this indicate a problem with the MDT block device or hardware?
We are using Lustre 2.5.29.
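For what it's worth, a read-only ldiskfs check can show whether the
corruption is really on disk (a sketch; the device name is taken from the
log line above, the mount point is illustrative, and the MDT must be
unmounted first; -f forces the check and -n keeps it non-destructive):
# umount /mnt/mdt
# e2fsck -fn /dev/dm-10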
thanks,
chris hunter
chris.hunter(a)yale.edu
5 years, 4 months
Compatibility between lustre 2.5.3.90 and centos 6.7
by David Roman
Hello,
I saw in the Lustre documentation that the supported CentOS release for Lustre 2.5.3.90 is 6.5.
I built Lustre against CentOS 6.5 for the OSSes and the MDS, but I built the Lustre client on CentOS 6.7.
Has anybody already tried this configuration? Do you think it can work?
At the moment I can mount the file system, but I don't know whether this setup is safe for production use.
Could you give me your opinion, please?
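For reference, rebuilding client-only RPMs against the running kernel is
the usual way to track a newer point release (a sketch; the kernel-devel
path is an assumption):
# ./configure --disable-server --with-linux=/usr/src/kernels/$(uname -r)
# make rpms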
Thanks
Regards
5 years, 4 months
Event Reminder – Lustre* User Group PRC 2015
by Yarlagadda, Eman
Event Details:
Hear directly from industry thought leaders about the latest Lustre* file system trends by attending the 2015 PRC Lustre* Users Group conference. Don’t miss this exclusive opportunity to collaborate with industry leaders to advance the Lustre* file system and contribute to new releases on behalf of the open source community.
Register Here<http://pages.intel.com/a0300Z015P0100JYkWvD6Yl>
Event Date: October 20th, 2015 from 8:30am - 5:00pm (Click Here to Add Event to your Calendar<http://pages.intel.com/x0P0000D3l10ZvW0Kl6Y1Y5>)
Location: Regent Hotel Beijing<http://pages.intel.com/s0mD0Z101WvL0P056Y30l0Y> (99 Jinbao Street, Dongcheng District, Beijing, 100005, China)
Telephone: +86 10 8522 1888<http://pages.intel.com/V1WM000lD10Y300PZ65Ynv0>
Review Agenda<http://pages.intel.com/Mv0Y0501PD61l00NZ0YW0o3> (Draft)
UNABLE TO ATTEND? We’re sorry that you cannot make it to the event, but you can still sign up for our mailing list to receive future communications about this gathering and valuable information about Intel® Solutions for Lustre* software.
Subscribe to Newsletter (Click Here)<http://pages.intel.com/s0qD0Z101WvP0P056Y30l0Y>
5 years, 4 months
Disadvantages of MGS & MDS on single server
by Narsimha Reddy
Dear Team,
Can anyone describe the disadvantages of having the MGS and MDS
configured on a single server, compared to a configuration with the MGS
and MDS on different servers?
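For context, the two layouts differ at format time (a sketch; device
paths and the fsname are illustrative). Combined, one target serves as
both MGS and MDT:
# mkfs.lustre --fsname=testfs --mgs --mdt --index=0 /dev/sdb
Separate, a dedicated MGT plus an MDT that registers with it:
# mkfs.lustre --mgs /dev/sdb
# mkfs.lustre --fsname=testfs --mdt --index=0 --mgsnode=mgs@tcp /dev/sdc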
Thanks & Regards,
Narsimha.
5 years, 4 months
Re: [HPDD-discuss] failover for zfs osts
by Dilger, Andreas
On 2015/10/07, 6:54 AM, "HPDD-discuss on behalf of Kurt Strosahl"
<hpdd-discuss-bounces(a)lists.01.org on behalf of strosahl(a)jlab.org> wrote:
>Hello,
>
> I'm interested in an oss configuration where you have two heads that
>serve up the osts, with the osts split evenly among the two heads... but
>for each head to be able to take over for the other in case of its
>failure. We are using zfs for the back end, and the documentation seems
>to imply that there isn't anything different when using that as a back
>end.
>
> I'm curious if anyone else out there has experimented with such a
>setup (ost failover with a zfs back end), and what potential issues I
>might encounter.
You definitely need to have reliable power control (STONITH) of the OSS
nodes using your HA scripts. Otherwise, there is the potential for ZFS to
mount the same filesystem on both nodes at the same time and severely
corrupt the filesystem.
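For illustration, a minimal Pacemaker fencing setup along these lines (a
sketch only; hostnames, BMC addresses, credentials, and the fence_ipmilan
agent are placeholders for whatever your hardware provides):
# pcs property set stonith-enabled=true
# pcs stonith create fence-oss1 fence_ipmilan pcmk_host_list="oss1" \
  ipaddr="10.0.0.101" login="admin" passwd="secret" lanplus="true"
# pcs stonith create fence-oss2 fence_ipmilan pcmk_host_list="oss2" \
  ipaddr="10.0.0.102" login="admin" passwd="secret" lanplus="true"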
Cheers, Andreas
--
Andreas Dilger
Lustre Software Architect
Intel High Performance Data Division
5 years, 4 months
failover for zfs osts
by Kurt Strosahl
Hello,
I'm interested in an oss configuration where you have two heads that serve up the osts, with the osts split evenly among the two heads... but for each head to be able to take over for the other in case of its failure. We are using zfs for the back end, and the documentation seems to imply that there isn't anything different when using that as a back end.
I'm curious if anyone else out there has experimented with such a setup (ost failover with a zfs back end), and what potential issues I might encounter.
Respectfully,
Kurt J. Strosahl
System Administrator
Scientific Computing Group, Thomas Jefferson National Accelerator Facility
5 years, 4 months
remove lustre filesystem
by Marcin Dulak
Hi,
I currently have two Lustre filesystems (fs1, fs2) managed by the same MGS:
# cat /proc/fs/lustre/mgs/MGS/filesystems
params
fs1
fs2
The MGT is formatted as a standalone target, so it can serve multiple filesystems:
# tunefs.lustre --dryrun /dev/mapper/mgt
Reading CONFIGS/mountdata
Read previous values:
Target: MGS
Index: unassigned
Lustre FS:
Mount type: ldiskfs
Flags: 0x1004
(MGS no_primnode )
Persistent mount opts: user_xattr,errors=remount-ro
I've unmounted all the OSTs and the MDT belonging to fs2, and I want to
get rid of fs2 completely. How can I achieve that?
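One approach that has been suggested for this (a sketch, assuming the MGS
can be stopped briefly; mount points are illustrative) is to delete the
fs2 configuration llogs directly from the MGT, then remount the MGS:
# umount /mnt/mgs
# mount -t ldiskfs /dev/mapper/mgt /mnt/mgt-ldiskfs
# rm /mnt/mgt-ldiskfs/CONFIGS/fs2-*
# umount /mnt/mgt-ldiskfs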
Marcin
5 years, 5 months
[PATCH] staging: lustre: lustre: llite: Added a blank line
by Anjali Menon
Added a blank line after a declaration to fix the coding
style warning detected by checkpatch.pl:
WARNING: Missing a blank line after declarations
Signed-off-by: Anjali Menon <cse.anjalimenon(a)gmail.com>
---
drivers/staging/lustre/lustre/llite/llite_capa.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/staging/lustre/lustre/llite/llite_capa.c b/drivers/staging/lustre/lustre/llite/llite_capa.c
index aec9a44..a626871 100644
--- a/drivers/staging/lustre/lustre/llite/llite_capa.c
+++ b/drivers/staging/lustre/lustre/llite/llite_capa.c
@@ -140,6 +140,7 @@ static void sort_add_capa(struct obd_capa *ocapa, struct list_head *head)
static inline int obd_capa_open_count(struct obd_capa *oc)
{
struct ll_inode_info *lli = ll_i2info(oc->u.cli.inode);
+
return atomic_read(&lli->lli_open_count);
}
--
1.9.1
5 years, 5 months