Lustre and kernel buffer interaction
by John Bauer
I have been trying to understand a behavior I am observing in an IOR
benchmark on Lustre. I have pared it down to a simple example.
The IOR benchmark is running in MPI mode. There are 2 ranks, each
running on its own node.
Note: the test was run on the "swan" cluster at Cray Inc., using /lus/scratch.
Each rank does the following:
write a file ( 10GB )
fsync the file
close the file
MPI_barrier
open the file that was written by the other rank.
read the file that was written by the other rank.
close the file that was written by the other rank.
The writing of each file goes as expected.
The fsync takes very little time (about 0.05 seconds).
The first reads of the file (written by the other rank) start out *very*
slowly. While these first reads are proceeding slowly, the
kernel's cached memory (the Cached: line in /proc/meminfo) decreases
from the size of the file just written to nearly zero.
Once the cached memory has reached nearly zero, the file reading
proceeds as expected.
I have attached a jpg of the instrumentation of the processes that
illustrates this behavior.
My questions are:
Why does the reading of the file written by the other rank wait until
the cached data drains to nearly zero before proceeding normally?
Shouldn't the fsync ensure that the file's data is written to the
backing storage, so that draining the cached memory is simply a matter
of releasing pages, with no further I/O?
For this case the "dead" time is only about 4 seconds, but this "dead"
time scales directly with the size of the files.
John
--
John Bauer
I/O Doctors LLC
507-766-0378
bauerj(a)iodoctors.com
quotas on 2.4.3
by Matt Bettinger
Hello,
We have a fresh 2.4.3 lustre upgrade, not yet put into
production, running on rhel 6.4.
We would like to take a look at quotas, but it looks like there are some
major performance problems with 1.8.9 clients.
Here is how I enabled quotas
[root@lfs-mds-0-0 ~]# lctl conf_param lustre2.quota.mdt=ug
[root@lfs-mds-0-0 ~]# lctl conf_param lustre2.quota.ost=ug
[root@lfs-mds-0-0 ~]# lctl get_param osd-*.*.quota_slave.info
osd-ldiskfs.lustre2-MDT0000.quota_slave.info=
target name: lustre2-MDT0000
pool ID: 0
type: md
quota enabled: ug
conn to master: setup
space acct: ug
user uptodate: glb[1],slv[1],reint[0]
group uptodate: glb[1],slv[1],reint[0]
The quotas seem to be working; however, the write performance from a
1.8.9-wc client to 2.4.3 with quotas enabled is horrific. Am I not setting
quotas up correctly?
I tried to set a simple user quota on the /lustre2/mattb/300MB_QUOTA directory:
[root@hous0036 mattb]# lfs setquota -u l0363734 -b 307200 -B 309200 -i
10000 -I 11000 /lustre2/mattb/300MB_QUOTA/
The quota change is in effect:
[root@hous0036 mattb]# lfs quota -u l0363734 /lustre2/mattb/300MB_QUOTA/
Disk quotas for user l0363734 (uid 1378):
Filesystem kbytes quota limit grace files quota limit grace
/lustre2/mattb/300MB_QUOTA/
310292* 307200 309200 - 4 10000 11000 -
Then I try to write to the quota directory as the user, but get horrible write speed:
[l0363734@hous0036 300MB_QUOTA]$ dd if=/dev/zero of=301MB_FILE bs=1M count=301
301+0 records in
301+0 records out
315621376 bytes (316 MB) copied, 61.7426 seconds, 5.1 MB/s
Try file number 2, and the quota takes effect, so it seems:
[l0363734@hous0036 300MB_QUOTA]$ dd if=/dev/zero of=301MB_FILE2 bs=1M count=301
dd: writing `301MB_FILE2': Disk quota exceeded
dd: closing output file `301MB_FILE2': Input/output error
If I disable quotas using
[root@lfs-mds-0-0 ~]# lctl conf_param lustre2.quota.mdt=none
[root@lfs-mds-0-0 ~]# lctl conf_param lustre2.quota.oss=none
Then, writing the same file again, the speeds are more like what we expect,
but of course then we can't use quotas:
[l0363734@hous0036 300MB_QUOTA]$ dd if=/dev/zero of=301MB_FILE2 bs=1M count=301
301+0 records in
301+0 records out
315621376 bytes (316 MB) copied, 0.965009 seconds, 327 MB/s
I have not tried this with a 2.4 client yet, since all of our nodes
are 1.8.X until we rebuild our images.
I was going by the manual on
http://build.whamcloud.com/job/lustre-manual/lastSuccessfulBuild/artifact...
but it looks like I am running into an interoperability issue (which I
thought I had fixed by using the 1.8.9-wc client), or I am just not
configuring this correctly.
Thanks!
MB
Fwd: lustre client 2.4.3 on rhel 6.6
by Michael Di Domenico
I'm trying to get lustre to work on rhel 6.6, running kernel
2.6.32-504.el6.x86_64.
the compile of the client seems to go okay, but when i mount i get
"protocol error" from the mount command
the console shows (paraphrased)
lustre_unpack_rep_ptlrpc_body: bad lustre msg magic: 000000000
unpack ptrlrpc body failed
The same machine running 6.5 with a 2.4.3 client seems to work fine,
so i'm fairly certain it's just the upgrade to 6.6 that broke things
New liblustreapi ?
by Simmons, James A.
Now that lustre 2.7 is coming up soon, I'd like to open the discussion
on one of the directions we could go. Recently several projects have sprung
up that impact liblustreapi. During one of those discussions the idea of a new
liblustreapi was brought up: a liblustreapi 2.0, you could say. So I'd like to
get a feel from the community about this. If people want this proposal, I'd
recommend that we gradually build the new library alongside the original
liblustreapi and link it to the lustre utilities where necessary. First, I'd
like to discuss using the LGPL license for this new library. I look
forward to the feedback.
lustre client modules and support for weak-modules
by Adesanya, Adeyemi
I just discovered what appears to be working weak-modules support for Lustre 2.5.1 client modules on RHEL6. I saw our lustre filesystem was mounted on a host running the 2.6.32-431.29.2.el6.x86_64 kernel but with client modules compiled for 2.6.32_431.23.3.el6.x86_64. Sure enough, the symlinks are in place under /lib/modules/<kernel-version>. I tried booting into a couple of other kernel versions with module symlinks and the lustre client worked there too. This is a pretty significant feature... when was it introduced? Is it supported?
--------
Yemi
On Sep 15, 2011, at 4:22 PM, Adeyemi Adesanya wrote:
>
> Hi Brian.
>
> I don't even see compatibility between "-274" kernels. I built and installed on 2.6.18-274.3.1.el5 but the only module that got symlinked under 2.6.18-274.el5 was libcfs.ko.
> Thanks for the info regarding RedHat and kABI.
>
> ------
> Yemi
>
> On Sep 15, 2011, at 4:12 PM, Brian J. Murrell wrote:
>
>> On 11-09-15 06:57 PM, Adesanya, Adeyemi wrote:
>>>
>>> I just dug up a message from lustre-discuss last year regarding support for weak-modules. It would be great if I didn't have to rebuild the lustre-modules client RPM (lustre 1.8.6) against every new RHEL5 kernel that gets released.
>>
>> Indeed, it would be good. You don't have this problem for SLES kernels,
>> FWIW. But for RH kernels, weak (Lustre, at least) modules are not
>> possible, because RedHat only supports a subset of the kABI for weak
>> modules and Lustre uses symbols outside of that subset (what they call
>> the "whitelist").
>>
>> I tried to get the whitelist updated for Lustre quite a while ago but
>> was met with silence.
>>
>>> weak-modules reported that nearly all of the modules were incompatible with other kernels including the recent 2.6.18-274.el5, 2.6.18-238.19.1.el5, etc.
>>
>> I'm not positive but I don't think those two kernels are intended to be
>> ABI compatible. My (rather old, so not great) understanding is that the
>> first component after the 2.6.18- identifies kernels that are supposed
>> to be binary compatible and need to be the same between two kernels to
>> ensure kABI compatibility.
>>
>> Cheers,
>> b.
>>
>> --
>> Brian J. Murrell
>> Senior Software Engineer
>> Whamcloud, Inc.
>>
>
a bunch of lustre bugs...
by Al Viro
1) ECHO_IOC_GET_STRIPE starts with
copy_to_user (ulsm, lsm, sizeof(*ulsm)), where ulsm is a user-supplied
pointer to struct lov_stripe_md. Which starts with
struct lov_stripe_md {
        atomic_t        lsm_refc;
        spinlock_t      lsm_lock;
and since sizeof(spinlock_t) depends on a slew of CONFIG_... options, so do
the offsets of everything after it. May I politely inquire how the hell
it manages to be of any use to userland code?
2) echo_copyout_lsm() proceeds to do the following:
        for (i = 0; i < lsm->lsm_stripe_count; i++) {
                if (copy_to_user (ulsm->lsm_oinfo[i], lsm->lsm_oinfo[i],
                                  sizeof(lsm->lsm_oinfo[0])))
                        return -EFAULT;
        }
What do you think will happen if &(ulsm->lsm_oinfo) happens to be on a page
boundary, with the next page unmapped? Or, for that matter, what happens
if that gets used on an architecture with separate address spaces for kernel
and userland? Sparc, for example... That one is trivial to fix - it's
just missing get_user(up, ulsm->lsm_oinfo + i), with copy_to_user(up, ....)
following it.
3) echo_copyin_lsm() has the same issues (both of them).
4) fld_proc_hash_seq_write() does this:
if (!strncmp(fld_hash[i].fh_name, buffer, count)) {
'buffer' is a userland pointer - argument of write(2), actually.
5) ll_fiemap() does
        memcpy(fieinfo->fi_extents_start, &fiemap->fm_extents[0],
               fiemap->fm_mapped_extents *
               sizeof(struct ll_fiemap_extent));
It's _not_ a nice thing to do, seeing that fi_extents_start is a userland
pointer. Granted, it has passed access_ok() in ioctl_fiemap(), so it's
not an instant roothole on x86. On anything with separate ASI for kernel
and userland it might very well be, depending on whether any kernel addresses
pass access_ok() there. parisc, for example, has access_ok() always 1 and
there it *is* a roothole. And it's certainly oopsable on x86.
Incidentally, WTF are ll_fiemap_extent and ll_user_fiemap? AFAICS these
are identical copies of the include/uapi/linux/fiemap.h stuff, which has been
there for 6 years already...
Anyway, fixes for missing get_user() and for strncmp() on userland pointers
follow. The rest is a bit trickier.
Al, really annoyed by swimming through the lustre sewerful of ioctls...
>From fee276ea51f61386438e8e65f8e39babad8c6a25 Mon Sep 17 00:00:00 2001
From: Al Viro <viro(a)zeniv.linux.org.uk>
Date: Sun, 30 Nov 2014 00:12:37 -0500
Subject: [PATCH 1/2] lustre: strncmp() on user-supplied address is a Bad
Thing(tm)
Signed-off-by: Al Viro <viro(a)zeniv.linux.org.uk>
---
drivers/staging/lustre/lustre/fld/lproc_fld.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/drivers/staging/lustre/lustre/fld/lproc_fld.c b/drivers/staging/lustre/lustre/fld/lproc_fld.c
index 95e7de1..3b6e13f 100644
--- a/drivers/staging/lustre/lustre/fld/lproc_fld.c
+++ b/drivers/staging/lustre/lustre/fld/lproc_fld.c
@@ -92,16 +92,21 @@ fld_proc_hash_seq_write(struct file *file, const char *buffer,
{
struct lu_client_fld *fld;
struct lu_fld_hash *hash = NULL;
+ char *s;
int i;
fld = ((struct seq_file *)file->private_data)->private;
LASSERT(fld != NULL);
+ s = memdup_user(buffer, count);
+ if (IS_ERR(s))
+ return PTR_ERR(s);
+
for (i = 0; fld_hash[i].fh_name != NULL; i++) {
if (count != strlen(fld_hash[i].fh_name))
continue;
- if (!strncmp(fld_hash[i].fh_name, buffer, count)) {
+ if (!strncmp(fld_hash[i].fh_name, s, count)) {
hash = &fld_hash[i];
break;
}
@@ -115,6 +120,7 @@ fld_proc_hash_seq_write(struct file *file, const char *buffer,
CDEBUG(D_INFO, "%s: Changed hash to \"%s\"\n",
fld->lcf_name, hash->fh_name);
}
+ kfree(s);
return count;
}
--
1.7.10.4
>From fc00a7396d279f77ef192fb442dc05daecb6136d Mon Sep 17 00:00:00 2001
From: Al Viro <viro(a)zeniv.linux.org.uk>
Date: Sun, 30 Nov 2014 16:02:33 -0500
Subject: [PATCH 2/2] lustre: echo_copy.._lsm() dereferences userland pointers
directly
missing get_user()
Signed-off-by: Al Viro <viro(a)zeniv.linux.org.uk>
---
.../staging/lustre/lustre/obdecho/echo_client.c | 20 +++++++++++---------
1 file changed, 11 insertions(+), 9 deletions(-)
diff --git a/drivers/staging/lustre/lustre/obdecho/echo_client.c b/drivers/staging/lustre/lustre/obdecho/echo_client.c
index 98e4290..373b2a3 100644
--- a/drivers/staging/lustre/lustre/obdecho/echo_client.c
+++ b/drivers/staging/lustre/lustre/obdecho/echo_client.c
@@ -1251,6 +1251,7 @@ static int
echo_copyout_lsm (struct lov_stripe_md *lsm, void *_ulsm, int ulsm_nob)
{
struct lov_stripe_md *ulsm = _ulsm;
+ struct lov_oinfo **p;
int nob, i;
nob = offsetof (struct lov_stripe_md, lsm_oinfo[lsm->lsm_stripe_count]);
@@ -1260,9 +1261,10 @@ echo_copyout_lsm (struct lov_stripe_md *lsm, void *_ulsm, int ulsm_nob)
if (copy_to_user (ulsm, lsm, sizeof(*ulsm)))
return -EFAULT;
- for (i = 0; i < lsm->lsm_stripe_count; i++) {
- if (copy_to_user (ulsm->lsm_oinfo[i], lsm->lsm_oinfo[i],
- sizeof(lsm->lsm_oinfo[0])))
+ for (i = 0, p = lsm->lsm_oinfo; i < lsm->lsm_stripe_count; i++, p++) {
+ struct lov_oinfo __user *up;
+ if (get_user(up, ulsm->lsm_oinfo + i) ||
+ copy_to_user(up, *p, sizeof(struct lov_oinfo)))
return -EFAULT;
}
return 0;
@@ -1270,9 +1272,10 @@ echo_copyout_lsm (struct lov_stripe_md *lsm, void *_ulsm, int ulsm_nob)
static int
echo_copyin_lsm (struct echo_device *ed, struct lov_stripe_md *lsm,
- void *ulsm, int ulsm_nob)
+ struct lov_stripe_md __user *ulsm, int ulsm_nob)
{
struct echo_client_obd *ec = ed->ed_ec;
+ struct lov_oinfo **p;
int i;
if (ulsm_nob < sizeof (*lsm))
@@ -1288,11 +1291,10 @@ echo_copyin_lsm (struct echo_device *ed, struct lov_stripe_md *lsm,
return -EINVAL;
- for (i = 0; i < lsm->lsm_stripe_count; i++) {
- if (copy_from_user(lsm->lsm_oinfo[i],
- ((struct lov_stripe_md *)ulsm)-> \
- lsm_oinfo[i],
- sizeof(lsm->lsm_oinfo[0])))
+ for (i = 0, p = lsm->lsm_oinfo; i < lsm->lsm_stripe_count; i++, p++) {
+ struct lov_oinfo __user *up;
+ if (get_user(up, ulsm->lsm_oinfo + i) ||
+ copy_from_user(*p, up, sizeof(struct lov_oinfo)))
return -EFAULT;
}
return 0;
--
1.7.10.4
[PATCH] staging: lustre: fix sparse warnings related to lock context imbalance
by Loic Pefferkorn
Add __acquires() and __releases() function annotations, to fix sparse warnings related to lock context imbalance.
This fixes the following warnings:
drivers/staging/lustre/lustre/libcfs/linux/linux-tracefile.c:153:5: warning: context imbalance in 'cfs_trace_lock_tcd' - wrong count at exit
drivers/staging/lustre/lustre/libcfs/hash.c:128:1: warning: context imbalance in 'cfs_hash_spin_lock' - wrong count at exit
drivers/staging/lustre/lustre/libcfs/hash.c:142:9: warning: context imbalance in 'cfs_hash_rw_lock' - wrong count at exit
drivers/staging/lustre/lustre/ptlrpc/../../lustre/ldlm/l_lock.c:57:17: warning: context imbalance in 'lock_res_and_lock' - wrong count at exit
drivers/staging/lustre/lustre/libcfs/libcfs_lock.c:93:1: warning: context imbalance in 'cfs_percpt_lock' - wrong count at exit
Signed-off-by: Loic Pefferkorn <loic(a)loicp.eu>
---
drivers/staging/lustre/lustre/libcfs/hash.c | 4 ++++
drivers/staging/lustre/lustre/libcfs/libcfs_lock.c | 2 ++
drivers/staging/lustre/lustre/libcfs/linux/linux-tracefile.c | 2 ++
drivers/staging/lustre/lustre/obdclass/cl_object.c | 2 ++
4 files changed, 10 insertions(+)
diff --git a/drivers/staging/lustre/lustre/libcfs/hash.c b/drivers/staging/lustre/lustre/libcfs/hash.c
index 32da783..7c6e2a3 100644
--- a/drivers/staging/lustre/lustre/libcfs/hash.c
+++ b/drivers/staging/lustre/lustre/libcfs/hash.c
@@ -126,18 +126,21 @@ cfs_hash_nl_unlock(union cfs_hash_lock *lock, int exclusive) {}
static inline void
cfs_hash_spin_lock(union cfs_hash_lock *lock, int exclusive)
+ __acquires(&lock->spin)
{
spin_lock(&lock->spin);
}
static inline void
cfs_hash_spin_unlock(union cfs_hash_lock *lock, int exclusive)
+ __releases(&lock->spin)
{
spin_unlock(&lock->spin);
}
static inline void
cfs_hash_rw_lock(union cfs_hash_lock *lock, int exclusive)
+ __acquires(&lock->rw)
{
if (!exclusive)
read_lock(&lock->rw);
@@ -147,6 +150,7 @@ cfs_hash_rw_lock(union cfs_hash_lock *lock, int exclusive)
static inline void
cfs_hash_rw_unlock(union cfs_hash_lock *lock, int exclusive)
+ __releases(&lock->rw)
{
if (!exclusive)
read_unlock(&lock->rw);
diff --git a/drivers/staging/lustre/lustre/libcfs/libcfs_lock.c b/drivers/staging/lustre/lustre/libcfs/libcfs_lock.c
index 2c199c7..1e529fc 100644
--- a/drivers/staging/lustre/lustre/libcfs/libcfs_lock.c
+++ b/drivers/staging/lustre/lustre/libcfs/libcfs_lock.c
@@ -91,6 +91,7 @@ EXPORT_SYMBOL(cfs_percpt_lock_alloc);
*/
void
cfs_percpt_lock(struct cfs_percpt_lock *pcl, int index)
+ __acquires(pcl->pcl_locks[index])
{
int ncpt = cfs_cpt_number(pcl->pcl_cptab);
int i;
@@ -125,6 +126,7 @@ EXPORT_SYMBOL(cfs_percpt_lock);
/** unlock a CPU partition */
void
cfs_percpt_unlock(struct cfs_percpt_lock *pcl, int index)
+ __releases(pcl->pcl_locks[index])
{
int ncpt = cfs_cpt_number(pcl->pcl_cptab);
int i;
diff --git a/drivers/staging/lustre/lustre/libcfs/linux/linux-tracefile.c b/drivers/staging/lustre/lustre/libcfs/linux/linux-tracefile.c
index 976c61e..257669b 100644
--- a/drivers/staging/lustre/lustre/libcfs/linux/linux-tracefile.c
+++ b/drivers/staging/lustre/lustre/libcfs/linux/linux-tracefile.c
@@ -151,6 +151,7 @@ cfs_trace_buf_type_t cfs_trace_buf_idx_get(void)
* for details.
*/
int cfs_trace_lock_tcd(struct cfs_trace_cpu_data *tcd, int walking)
+ __acquires(&tcd->tc_lock)
{
__LASSERT(tcd->tcd_type < CFS_TCD_TYPE_MAX);
if (tcd->tcd_type == CFS_TCD_TYPE_IRQ)
@@ -165,6 +166,7 @@ int cfs_trace_lock_tcd(struct cfs_trace_cpu_data *tcd, int walking)
}
void cfs_trace_unlock_tcd(struct cfs_trace_cpu_data *tcd, int walking)
+ __releases(&tcd->tcd_lock)
{
__LASSERT(tcd->tcd_type < CFS_TCD_TYPE_MAX);
if (tcd->tcd_type == CFS_TCD_TYPE_IRQ)
diff --git a/drivers/staging/lustre/lustre/obdclass/cl_object.c b/drivers/staging/lustre/lustre/obdclass/cl_object.c
index ce96bd2..8577f97 100644
--- a/drivers/staging/lustre/lustre/obdclass/cl_object.c
+++ b/drivers/staging/lustre/lustre/obdclass/cl_object.c
@@ -193,6 +193,7 @@ static spinlock_t *cl_object_attr_guard(struct cl_object *o)
* cl_object_attr_get(), cl_object_attr_set().
*/
void cl_object_attr_lock(struct cl_object *o)
+ __acquires(cl_object_attr_guard(o))
{
spin_lock(cl_object_attr_guard(o));
}
@@ -202,6 +203,7 @@ EXPORT_SYMBOL(cl_object_attr_lock);
* Releases data-attributes lock, acquired by cl_object_attr_lock().
*/
void cl_object_attr_unlock(struct cl_object *o)
+ __releases(cl_object_attr_guard(o))
{
spin_unlock(cl_object_attr_guard(o));
}
--
2.1.2
Evict clients / clear exports
by Thomas Roth
Hi all,
I have disconnected an entire network segment of clients from an OSS (on
purpose); now I'd like to clean up a little.
Starting the OSTs on that server of course makes them try to recover all
those clients. The log then shows some sparse 'I think it's dead' lines,
but can I get rid of them all at once?
In some rather old manual I found something like
> lctl set_param obdfilter.OST-NAME.evict_client=uuid
The OSS (Lustre 1.8.9) seems to understand this command ("evicting ... at administrative request").
I also tried echoing something to "obdfilter/NAME/exports/clear".
But neither this nor umount/mount would clear these clients from the system.
Any other tricks?
Regards,
Thomas
--
--------------------------------------------------------------------
Thomas Roth
Department: Informationstechnologie
Location: SB3 1.262
Phone: +49-6159-71 1453 Fax: +49-6159-71 2986
GSI Helmholtzzentrum für Schwerionenforschung GmbH
Planckstraße 1
64291 Darmstadt
www.gsi.de
Gesellschaft mit beschränkter Haftung
Sitz der Gesellschaft: Darmstadt
Handelsregister: Amtsgericht Darmstadt, HRB 1528
Geschäftsführung: Professor Dr. Dr. h.c. Horst Stöcker,
Dr.-Ing. Jürgen Henschel
Vorsitzende des Aufsichtsrates: Dr. Beatrix Vierkorn-Rudolph
Stellvertreter: Ministerialdirigent Dr. Rolf Bernhardt
Re: [HPDD-discuss] [Lwg] Splitting up Lustre and LNET RPMs?
by James Simmons
Currently there are some challenges to this. First, for LNet we have lnetctl
and lctl for configuring the network. The nice thing about lnetctl is that it
is lightweight, with no need for liblustreapi. This is not the case for lctl,
so an LNet-only RPM would have to support only lnetctl. Lastly, setting LNet
parameters via procfs/sysfs is handled only with lctl, which again introduces
the liblustreapi dependency. That I hope to resolve with LU-5030.
On Thu, Nov 20, 2014 at 10:49 AM, Nathan Rutman <nathan.rutman(a)seagate.com>
wrote:
> What does the community think about splitting the LNET build out of Lustre
> as a separate set of (source and binary) RPMs?
> This would make it easier/faster to build, install, and upgrade Lustre in
> existing installations without changing things we don't need to. It would
> also make it easier to re-use LNET in other projects, and make it easier
> for unusual users to maintain a customized LNET.
> I have no idea how this might affect landing in mainstream kernel.
> Thoughts / opinions?
>
> --
> Nathan Rutman · Principal Systems Architect
> Seagate Technology · +1 503 877-9507 · PST
>
> _______________________________________________
> lwg mailing list
> lwg(a)lists.opensfs.org
> http://lists.opensfs.org/listinfo.cgi/lwg-opensfs.org
>
>
[PATCH 00/10] staging: lustre: ldlm: Fix some checkpatch warnings and errors
by Andreas Ruprecht
This patch series removes warnings generated by scripts/checkpatch.pl
in the lustre/ldlm/ subdirectory of the driver.
Not all warnings are covered by this, especially the ones about quoted
strings being split across lines, but I currently don't see a
checkpatch.pl-conformant way to reformat those. Some overlong lines are
still present for readability reasons, and a few warnings about breaks
not being useful also still remain.
Andreas Ruprecht (10):
staging: lustre: ldlm: Add missing newlines after declarations
staging: lustre: ldlm: Fix overlong lines
staging: lustre: ldlm: Fix warning about missing spaces
staging: lustre: ldlm: Fix indentation errors for switch-case
staging: lustre: ldlm: Fix initialization of static variables
staging: lustre: ldlm: Fix warning about unneeded return statement
staging: lustre: ldlm: Remove unnecessary line continuations
staging: lustre: ldlm: Remove unnecessary braces at ifs
staging: lustre: ldlm: Remove space before braces for defined() check
staging: lustre: ldlm: Add a space in debug output
drivers/staging/lustre/lustre/ldlm/interval_tree.c | 5 +++
drivers/staging/lustre/lustre/ldlm/ldlm_extent.c | 4 +-
drivers/staging/lustre/lustre/ldlm/ldlm_flock.c | 7 ++-
drivers/staging/lustre/lustre/ldlm/ldlm_internal.h | 4 +-
drivers/staging/lustre/lustre/ldlm/ldlm_lib.c | 3 +-
drivers/staging/lustre/lustre/ldlm/ldlm_lock.c | 51 ++++++++++++----------
drivers/staging/lustre/lustre/ldlm/ldlm_lockd.c | 17 +++++---
drivers/staging/lustre/lustre/ldlm/ldlm_pool.c | 48 +++++++++++---------
drivers/staging/lustre/lustre/ldlm/ldlm_request.c | 43 +++++++++++-------
drivers/staging/lustre/lustre/ldlm/ldlm_resource.c | 18 +++++---
10 files changed, 122 insertions(+), 78 deletions(-)
--
1.9.1