Thanks for the additional insight. On the py-spdk client, yes, I see the value you are describing. However, rather than adding the py-spdk client, a better approach might be to take scripts/rpc.py and the scripts/rpc components and build those out/refactor them to provide more and better functionality. There's enough "likeness" between what you are proposing and what's already there that I'm afraid a lot of the reluctance to adopt this comes from weighing the increased maintenance of the new code against the added value. If you improve what's there one piece at a time, I think you can get a lot accomplished here.
Also, you should be aware that there's some pending work to consolidate the SPDK target applications themselves into one target application with parameters, instead of multiple binaries that do very similar work. I can see a good fit between your goals and that activity: working together, a single generic Python interface and a single binary underneath it could be architected at the same time.
Would you be interested in working with some other community folks on that? I'm not sure who it was that was talking about looking into that, but hopefully they'll see this and chime in :) I think there might be a community meeting this Thu; if someone knows, please reply – we could talk on that call if that would help.
PS: Or, you could work independently to get a generic Python layer, usable by applications, "built into" the existing rpc/scripts code. A good place to start would be to pick one sample app, like the nvmf target, list out the functions currently provided by the existing rpc Python code, and see how you can add the ones in your patch by extending/reorganizing/refactoring the existing code. I would definitely start with just one application, though, and get buy-in from a maintainer that this is the right design choice. Then the rest of the work will go quite smoothly, I think.
Let me try to answer your questions on two points.
Firstly, py-spdk/nvmf_client.py provides the same abilities as scripts/rpc/nvmf.py; the difference, however, is that they sit in the upper and lower layers respectively. Why should we do it this way? Imagine there is only one management app that needs to invoke SPDK-based apps: we could write the module inside that management app and have it communicate directly with scripts/rpc/nvmf.py. However, when there are multiple upper management apps that need to invoke SPDK-based apps, we should provide a unified layer, named py-spdk, placed between the SPDK-based apps and the upper management apps. If we don't, each management app has to write the same module to interact with scripts/rpc/nvmf.py. py-spdk provides a generic interface for the upper management apps; in other words, the upper management apps do not care about the functional realization of the backend, they just work through the interfaces provided by py-spdk.
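To make the layering concrete, here is a minimal sketch of the idea: a shared transport object plus a per-subsystem facade that management apps code against. The class and method names (SpdkClient, NvmfClient, get_nvmf_subsystems) are illustrative assumptions, not actual SPDK or py-spdk APIs.

```python
class SpdkClient:
    """Shared lower layer: owns the transport to an SPDK-based app.

    In real use the transport would send JSON-RPC over the SPDK app's
    Unix socket; here it is any callable (method, params) -> result,
    so the sketch stays self-contained.
    """
    def __init__(self, transport):
        self._transport = transport

    def call(self, method, params=None):
        return self._transport(method, params or {})


class NvmfClient:
    """Upper-layer facade: what a management app actually imports.

    Multiple management apps can share this one module instead of each
    re-implementing the same glue around scripts/rpc/nvmf.py.
    """
    def __init__(self, client):
        self._client = client

    def get_subsystems(self):
        # The backend realization is hidden behind the generic interface.
        return self._client.call("get_nvmf_subsystems")


# Usage with a stub transport, standing in for a live SPDK target:
def stub_transport(method, params):
    return [{"nqn": "nqn.2016-06.io.spdk:cnode1"}]

nvmf = NvmfClient(SpdkClient(stub_transport))
subsystems = nvmf.get_subsystems()
```

The point of the sketch is only the shape: one transport, many thin facades, so adding a second management app costs nothing extra.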
Secondly, why did we use protobuf as the data model? Because protobuf provides a better standardized description of the API than JSON. From the code point of view, we only need to define a '.proto' file that describes the data and compile it for different languages into a lib file (such as spdk_pb2.py, spdk.go, etc.). Each 'message' in the '.proto' file is compiled to a protobuf object, which serves as the carrier of the data returned to the upper management apps. Compared to JSON, the upper management apps can quickly learn the shape of the returned value from the lib file rather than from manual documentation. From the ecosystem point of view, many mainstream cloud-native apps use protobuf as their data model (like k8s), which is also an opportunity for us.
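For illustration, a '.proto' schema for the kind of data above might look like the following sketch. The message and field names are hypothetical, not taken from the actual patch; the point is that protoc compiles this one definition into typed objects for Python, Go, and other bindings.

```proto
// Illustrative sketch only; message/field names are assumptions.
syntax = "proto3";
package spdk;

message Subsystem {
  string nqn = 1;                      // subsystem NQN
  string subtype = 2;                  // e.g. "NVMe" or "Discovery"
  repeated string listen_addresses = 3;
}

message GetSubsystemsReply {
  repeated Subsystem subsystems = 1;   // typed carrier returned to callers
}
```

Running protoc against this file would produce, e.g., spdk_pb2.py for Python callers and spdk.pb.go for Go callers from the same source of truth.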
I hope that makes sense.
On Jan 28, 2018, at 5:25 AM, Luse, Paul E <email@example.com> wrote:
I think the main question right now is about the new core functionality in the patch versus what Ben mentioned wrt what is there today in scripts/rpc:
__init__.py client.py lvol.py nvmf.py
app.py iscsi.py nbd.py pmem.py
bdev.py log.py net.py vhost.py
This is the change he mentioned that happened since the original py-spdk client was added. So, for example, if I look at what is in the patch in py-spdk/nvmf_client.py, I don't see any capabilities that can't be done in scripts/rpc/nvmf.py, but I could be missing something. I think the protobuf and repo-location discussions are secondary to first establishing the value of the patch's core functionality to the community.
Can you explain how the SDK Framework layer might be a better approach as compared to working on the files listed above?
PS: There is an open community conference call every other week; I think the next one is coming up soon, and it's usually announced on IRC a few days in advance. If real-time discussion would help, we could always look at scheduling a separate open meeting at a time more friendly to your time zone.
From: SPDK [mailto:firstname.lastname@example.org] On Behalf Of Zhipeng Huang
Sent: Saturday, January 27, 2018 12:11 AM
To: Storage Performance Development Kit <email@example.com>
Subject: Re: [SPDK] Add py-spdk client for SPDK
After discussing with my team, the general feeling is that keeping protobuf would be the preferred choice. The reason, as I stated earlier, is that proto provides a better standardized description of the API than JSON, and we can also create data models rather quickly for bindings/tools written in other languages. We have already generated go-spdk based upon the proto model for OpenSDS's interaction with SPDK drivers. It provides benefits, at least judging from our own practice.
Frankly, given gRPC's wide adoption, I don't think this should be a big issue. If you still have doubts about this, I think maybe we could set up a conf call, or I could discuss with Harris when he's in China for the SPDK summit.
For the repo, I think it is entirely up to the community whether to maintain it in the main repo or create a new one.
On Sat, Jan 27, 2018 at 6:28 AM, Walker, Benjamin <firstname.lastname@example.org> wrote:
On Wed, 2018-01-24 at 07:13 +0800, Zhipeng Huang wrote:
> Do we have a conclusion on this issue ? If it is ok to have a spdk/sdk repo,
> then we will modify the current patch (get rid of protobuf) and resubmit
> the patch to the new repo once it is established (meanwhile abandon the
> current one to spdk/spdk).
If you remove protobuf, can you describe what is left? Recently scripts/rpc.py
was refactored to break it up into a set of Python libraries in scripts/rpc,
plus the command line tool at scripts/rpc.py. What functionality does this new
code provide over and above what is already present there?
SPDK is certainly in need of better management tools, so in the most general
sense the community is very supportive of your effort here. New management tools
can also go directly into the main spdk repository (a separate repository was
only suggested when we thought this was a Python binding to the SPDK libraries).
I'm wondering if an easier way forward would be to continue refining the Python
packages in scripts/rpc to be more general purpose libraries for sending the
JSON RPCs. What are your thoughts on that?
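To show what a more general-purpose library for "sending the JSON RPCs" boils down to, here is a minimal sketch: JSON-RPC 2.0 over a Unix domain socket. The default socket path and the example method name are illustrative assumptions, not guaranteed SPDK values.

```python
import json
import socket

def build_request(method, params=None, req_id=1):
    """Construct a JSON-RPC 2.0 request body as a plain dict."""
    request = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        request["params"] = params
    return request

def spdk_rpc(method, params=None, sock_path="/var/tmp/spdk.sock"):
    """Send one request to a listening SPDK app and return the result.

    sock_path is an assumed default; pass the path your target app
    actually listens on.
    """
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(json.dumps(build_request(method, params)).encode())
        reply = json.loads(s.recv(65536).decode())
    return reply.get("result")
```

A library layered like this keeps the wire protocol in one place, with per-subsystem modules (nvmf, bdev, ...) reduced to thin wrappers over build_request/spdk_rpc.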
Zhipeng (Howard) Huang
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Office: Huawei Industrial Base, Longgang, Shenzhen
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Office: Calit2 Building Room 2402
OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado