Re: [HPDD-discuss] Lustre staging driver cleanup
by Rita Sinha
Hi Greg,
I totally agree with maintaining a single tree, preferably the one at
kernel.org, so that the code has maximum visibility and accessibility.
To my surprise, there is a huge difference between the tree at
http://git.whamcloud.com/fs/lustre-release.git and the staging driver
in the kernel.org tree.
Many subsystems of the Lustre driver, such as
liblustre,
lfsck,
mdd,
quota, etc.,
which can be seen here
http://git.whamcloud.com/fs/lustre-release.git/tree/HEAD:/lustre, are
missing from the staging driver at kernel.org.
What can be the reason for this?
Are we maintaining an incomplete driver in the kernel.org staging tree?
Regards,
Saket Sinha
On Fri, Aug 29, 2014 at 11:44 PM, <greg(a)kroah.com> wrote:
> On Fri, Aug 29, 2014 at 05:50:46PM +0000, Simmons, James A. wrote:
>> >Hi Oleg,
>> >
>> >Please find my response inline.
>>
>> Hi Saket. I have been active in the very work you brought up. The goal I have set is to resync the Intel branch with what is current in the upstream kernel. This has required us to update the Intel branch to handle kernel API changes in newer kernels, as well as to remove API wrappers like the ones you pointed out. Currently our efforts
>> are covered by several tickets:
>>
>> LU-3963: Move libcfs wrappers to linux api
>> LU-5275: Remove procfs technical debt
>> LU-5530: Support for sysfs
>> LU-5443: Use kernel timer apis
>> LU-4416/LU-4493: New kernel support
>> LU-4423: Backport upstream kernel patches
>>
>> From what you posted, it seems LU-3963 is the best place for your work. Work was also done on server/client splitting (LU-1330), but more could be done since it will be needed for the coming client/server RPM split. Here is the link about submitting work to the Intel Lustre tree:
>>
>> https://wiki.hpdd.intel.com/display/PUB/Submitting+Changes
>
> Ick, no, just send patches upstream against the kernel.org tree, don't
> deal with a third-party tree, that way lies madness and should be
> avoided at all costs.
>
> I wish this external tree would just be deleted, it's doing nothing but
> cause confusion.
>
> greg k-h
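For reference, the libcfs wrapper removal tracked above as LU-3963 is largely a mechanical substitution of thin wrappers with the kernel primitives they hide. A minimal sketch of the kind of change involved, using the cfs_time_* helpers as example wrappers (the surrounding function is made up for illustration):

/*
 * Illustration only, not an actual LU-3963 patch.  cfs_time_current()
 * and cfs_time_seconds() are thin libcfs wrappers around jiffies and HZ:
 *
 *   before:  deadline = cfs_time_current() + cfs_time_seconds(timeout);
 *   after:   deadline = jiffies + timeout * HZ;
 */
#include <linux/jiffies.h>

static unsigned long example_deadline(unsigned int timeout_sec)
{
	/* native jiffies arithmetic instead of the cfs_time_* wrappers */
	return jiffies + timeout_sec * HZ;
}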
Re: [HPDD-discuss] Lustre staging driver cleanup
by Drokin, Oleg
Hello!
On Sep 1, 2014, at 11:26 AM, Greg Kroah-Hartman wrote:
>> Many subsystems of the Lustre driver, such as
>> liblustre,
>> lfsck,
>> mdd,
>> quota, etc.,
>>
>> which can be seen here
>> http://git.whamcloud.com/fs/lustre-release.git/tree/HEAD:/lustre, are
>> missing from the staging driver at kernel.org.
>>
>> What can be the reason for this?
>> Are we maintaining an incomplete driver in the kernel.org staging tree?
> If so, I should just delete the in-kernel version as it makes no sense
> to keep both around...
Well, we cannot really drop our own tree, since it also has the server code that is not even in staging,
and the new development goes there. It is also geared towards the older kernels used in distributions
popular in HPC, like RedHat 6's 2.6.32+.
I guess the current situation is not ideal, but I am pretty sure you would not want to accept
everything we have in that other tree into even staging anyway.
We also still need to find replacements for various debugging aids from our tree that were helpfully
removed while in staging and learn how to properly use them.
The staging Lustre client is fully functional, though. Currently I ensure it stays that way by hand,
but it is about to be added to our automatic test roster.
Bye,
Oleg
Fwd: Question regarding lustre patches
by Pratik Rupala
-------- Original Message --------
Subject: Question regarding lustre patches
Date: Mon, 18 Aug 2014 17:19:07 +0530
From: Pratik Rupala <pratik.rupala(a)calsoftinc.com>
To: lustre-devel(a)lists.lustre.org
Hi,
In older versions of Lustre, I can see a patch named
"raid5-merge-ios-rhel5.patch" which is related to a RAID performance
improvement for RHEL 5.
As per my understanding, it accumulates the bios in a single
make_request call in the RAID layer and submits them collectively via
generic_make_request, instead of sending them separately on a
per-stripe basis. By this means, the performance of RAID could be improved.
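A minimal sketch of that idea, assuming a deferred bio list that is flushed with generic_make_request(); the structure and function names here are illustrative, not the ones used by the actual patch:

#include <linux/bio.h>
#include <linux/blkdev.h>

/* hypothetical per-device context; the list is set up with bio_list_init() */
struct merged_ios {
	struct bio_list deferred;
};

/* queue a stripe's bio instead of submitting it right away */
static void defer_stripe_bio(struct merged_ios *mio, struct bio *bio)
{
	bio_list_add(&mio->deferred, bio);
}

/* submit everything accumulated during one make_request call in a single burst */
static void flush_deferred_bios(struct merged_ios *mio)
{
	struct bio *bio;

	while ((bio = bio_list_pop(&mio->deferred)))
		generic_make_request(bio);
}

Batching the submissions this way gives the lower layers a chance to merge adjacent requests before they reach the disks, which is presumably where the RHEL 5 performance gain came from.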
But this patch is not present in newer versions of Lustre, and in
particular there is no equivalent for RHEL 6.
So, is it possible to port that patch to RHEL 6, given that the RAID
architecture has changed significantly and stripe handling has been
made asynchronous in RHEL 6, compared to the synchronous stripe
handling in RHEL 5?
And even if it can be ported to RHEL 6, will it give the same advantage
it gave in RHEL 5 with the different RAID architecture?
Regards,
Pratik