Thanks for all the great replies.
Going back to older kernels is not an option for us; we will be looking
into upgrading to Lustre 2.5.x soon, I hope.
In the meantime I'll probably try NFS-Ganesha.
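
A rough sketch of the ganesha.conf EXPORT block I would start from (the
export id, paths, and FSAL choice are my guesses, not a tested
configuration):

    # /etc/ganesha/ganesha.conf -- untested sketch, all values illustrative
    EXPORT
    {
        Export_Id = 1;            # unique id for this export
        Path = /mnt/lustre;       # Lustre client mount point on the gateway node
        Pseudo = /lustre;         # path clients see in the NFSv4 pseudo-fs
        Access_Type = RW;
        Squash = No_Root_Squash;

        FSAL {
            Name = VFS;           # serve the mounted Lustre client through the VFS FSAL
        }
    }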
On Thu, Jun 19, 2014 at 7:23 AM, Nguyen Viet Cuong <mrcuongnv(a)gmail.com>
wrote:
Hi JC,
I wonder whether we can use pNFS from NFS v4.1, as implemented by
NFS-Ganesha, to access Lustre in a parallel way?
Regards,
Cuong
On Wed, Jun 18, 2014 at 5:38 PM, Jacques-Charles Lafoucriere <
jacques-charles.lafoucriere(a)cea.fr> wrote:
> Hello
>
> We are using NFS-Ganesha (https://github.com/nfs-ganesha/nfs-ganesha)
> to export Lustre.
> As it is a user-space NFS server, you "just" need to have a running
> Lustre client.
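>
> Once the client is mounted, starting the server is a matter of pointing
> the daemon at its config (paths here are illustrative):
>
>   # run the user-space NFS server with a config file and a log file
>   ganesha.nfsd -f /etc/ganesha/ganesha.conf -L /var/log/ganesha.log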
>
> Bye
>
> JC
>
>
> On 06/16/2014 08:49 PM, E.S. Rosenberg wrote:
>
> In the interest of easy data access from computers that are not
> part of the cluster, we would like to export the Lustre filesystem as NFS
> from one of the nodes.
>
> From what I understand, this should be possible, but so far we are getting
> kernel panics.
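>
> Roughly the shape of the setup (the mount point, MGS NID, and export
> options are illustrative, not our exact values):
>
>   # mount the Lustre client on the node acting as NFS gateway
>   mount -t lustre mgs@tcp0:/lustre /mnt/lustre
>
>   # /etc/exports -- re-export the mount via the in-kernel NFS server;
>   # an explicit fsid is needed because Lustre has no local block device
>   /mnt/lustre  *(rw,no_subtree_check,fsid=1)
>
>   # reload the export table
>   exportfs -ra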
>
> So:
> - Has anyone done it?
> - What are the pitfalls?
> - Any other useful tips?
>
> Tech details:
> Lustre: 2.4.3
> Kernel: 3.14.3 + aufs
> Distro: Debian testing/sid
>
> We will probably be upgrading to Lustre 2.5.x in the near future.
>
> Thanks,
> Eli
>
> Trace from the test subject:
> Jun 16 18:07:17 kernel:LustreError: 3795:0:(llite_internal.h:1141:ll_inode2fid()) ASSERTION( inode != ((void *)0) ) failed:
> Jun 16 18:07:17 kernel:LustreError: 3795:0:(llite_internal.h:1141:ll_inode2fid()) LBUG
> Jun 16 18:07:17 kernel:CPU: 0 PID: 3795 Comm: nfsd Tainted: G WC 3.14.3-aufs-mos-1 #1
> Jun 16 18:07:17 kernel:Hardware name: Dell Inc. PowerEdge C6220/03C9JJ, BIOS 1.2.1 05/27/2013
> Jun 16 18:07:17 kernel: 0000000000000000 ffff881047dd5ba0 ffffffff8175c9e4 ffffffffa1842970
> Jun 16 18:07:17 kernel: ffff881047dd5bc0 ffffffffa006954c 0000000000000000 ffff880845cd5148
> Jun 16 18:07:17 kernel: ffff881047dd5c00 ffffffffa18054fd 0cb158b46edf5345 0000000000000013
> Jun 16 18:07:17 kernel:Call Trace:
> Jun 16 18:07:17 kernel: [<ffffffff8175c9e4>] dump_stack+0x45/0x56
> Jun 16 18:07:17 kernel: [<ffffffffa006954c>] lbug_with_loc+0x3c/0x90 [libcfs]
> Jun 16 18:07:17 kernel: [<ffffffffa18054fd>] ll_encode_fh+0x109/0x13e [lustre]
> Jun 16 18:07:17 kernel: [<ffffffff81203f79>] exportfs_encode_inode_fh+0x1b/0x86
> Jun 16 18:07:17 kernel: [<ffffffff8120402f>] exportfs_encode_fh+0x4b/0x60
> Jun 16 18:07:17 kernel: [<ffffffff810f420f>] ? lookup_real+0x27/0x42
> Jun 16 18:07:17 kernel: [<ffffffff81207689>] _fh_update.part.7+0x39/0x48
> Jun 16 18:07:17 kernel: [<ffffffff81207c2a>] fh_compose+0x3d1/0x3fa
> Jun 16 18:07:17 kernel: [<ffffffff81210fe4>] encode_entryplus_baggage+0xd3/0x125
> Jun 16 18:07:17 kernel: [<ffffffff8121121f>] encode_entry.isra.14+0x150/0x2cb
> Jun 16 18:07:17 kernel: [<ffffffff8121247d>] nfs3svc_encode_entry_plus+0xf/0x11
> Jun 16 18:07:17 kernel: [<ffffffff81209e7e>] nfsd_readdir+0x160/0x1f8
> Jun 16 18:07:17 kernel: [<ffffffff8121246e>] ? nfs3svc_encode_entry+0xe/0xe
> Jun 16 18:07:17 kernel: [<ffffffff8120831b>] ? nfsd_splice_actor+0xe8/0xe8
> Jun 16 18:07:17 kernel: [<ffffffff81056154>] ? groups_free+0x22/0x44
> Jun 16 18:07:17 kernel: [<ffffffff8120fa3d>] nfsd3_proc_readdirplus+0xe3/0x1df
> Jun 16 18:07:17 kernel: [<ffffffff81205269>] nfsd_dispatch+0xca/0x1ad
> Jun 16 18:07:17 kernel: [<ffffffff8173579b>] svc_process+0x469/0x768
> Jun 16 18:07:17 kernel: [<ffffffff81204d39>] nfsd+0xc5/0x117
> Jun 16 18:07:17 kernel: [<ffffffff81204c74>] ? nfsd_destroy+0x6b/0x6b
> Jun 16 18:07:17 kernel: [<ffffffff81051764>] kthread+0xd6/0xde
> Jun 16 18:07:17 kernel: [<ffffffff8105168e>] ? kthread_create_on_node+0x15d/0x15d
> Jun 16 18:07:17 kernel: [<ffffffff8176400c>] ret_from_fork+0x7c/0xb0
> Jun 16 18:07:17 kernel: [<ffffffff8105168e>] ? kthread_create_on_node+0x15d/0x15d
>
>
--
Nguyen Viet Cuong
_______________________________________________
HPDD-discuss mailing list
HPDD-discuss(a)lists.01.org
https://lists.01.org/mailman/listinfo/hpdd-discuss