FYI,
I confirmed the mkfs.lustre default is still 1 inode per 2k bytes for
MDT volumes.
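If anyone wants to double-check a formatted MDT, the superblock
counters can be read back with dumpe2fs (a sketch; the device name is
a placeholder):

  dumpe2fs -h /dev/mdt_dev | egrep -i 'inode count|block count|block size'
  # bytes-per-inode = (Block count * Block size) / Inode count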
chris hunter
On Mon, 2016-03-28 at 10:24 -0400, Chris Hunter wrote:
Thanks for the replies.
I did some tests with the "mkfs.lustre --mdt" command from the IEEL 2
release. On my VMs it uses 1 inode per 4k bytes.
I recall using 1 inode per 2k bytes in the past (i.e. Lustre 2.1) for
MDT volumes. I can override the defaults, but perhaps there is a good
reason for the change.
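If you want to force the old ratio, something like this should work
(a sketch; the fsname, index, and device below are placeholders):

  mkfs.lustre --mdt --fsname=testfs --index=0 \
      --mkfsoptions="-i 2048" /dev/sdX

The "-i 2048" is the ldiskfs (ext4) bytes-per-inode ratio, passed
through to mke2fs via --mkfsoptions.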
regards,
chris hunter
On Mar 27, 2016 2:23 AM, "Dilger, Andreas" <andreas.dilger@intel.com> wrote:
Right, the new default is 1 inode per 2048 bytes of MDT space
(512M inodes per TB). This is as low as we are comfortable going
by default, since the way the filesystem is used can vary a lot.
If you know the environment better (e.g. you have an existing
filesystem and can measure MDT space used per inode used), and MDT
space is precious, then you might tune this to a better value for
your environment.
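For example, on a client of an existing filesystem (a sketch; the
mount point is a placeholder):

  lfs df /mnt/lustre     # KB used per MDT
  lfs df -i /mnt/lustre  # inodes used per MDT
  # average bytes per inode used = (KB used * 1024) / inodes used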
Cheers, Andreas
> On Mar 25, 2016, at 09:27, Bob Ball <ball@umich.edu> wrote:
>
> Just as a data point, we have a 750GB ldiskfs RAID-10 combined
> MDT/MGS volume that default-formatted out to ~525M inodes. This is
> Lustre 2.7.0. I believe Lustre used 1 inode per 2048 bytes.
>
> bob
>
>> On 3/25/2016 11:03 AM, Chris Hunter wrote:
>> Hello,
>> I had an enquiry about the appropriate number of MDS & MDT
>> volumes for a filesystem that could potentially hold 4bn files. I
>> expect the typical "working size" will be much less than 1bn files.
>>
>> My understanding is that an ldiskfs MDT volume uses a ratio of 1
>> inode per 4096 bytes, so an ldiskfs MDT with 1bn inodes would need
>> to be ~4TB. Using DNE, we could have 4 MDS servers, each with a
>> 4TB MDT volume, to achieve a filesystem with 4bn inodes.
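>> As a quick sanity check on that arithmetic (a sketch, assuming the
>> 4096 bytes-per-inode ratio above):
>>
>>   echo "$(( 1000000000 * 4096 )) bytes"  # 4096000000000, ~4TB per 1bn-inode MDT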
>>
>> FYI, a 4TB volume is a reasonable size when using physical SSD
>> drives (i.e. 16TB is not). Any recommendations for a reasonable
>> mix of DNE MDS count and MDT volume size to achieve 4bn files?
>>
>> regards,
>> chris hunter
>> chuntera@gmail.com