Hi Paul,

I will try to answer your questions from two angles.

Firstly, py-spdk/nvmf_client.py provides the same abilities as scripts/rpc/nvmf.py; the difference is that they sit in the upper and lower layers respectively. Why do it this way? Imagine there is only one management app that needs to invoke SPDK-based apps: we could write the module inside that management app and talk to scripts/rpc/nvmf.py directly. However, when there are multiple upper management apps that need to invoke SPDK-based apps, we want a single piece of software, named 'py-spdk', that sits between the SPDK-based apps and the upper management apps. If we don't do so, each management app has to write the same module to interact with scripts/rpc/nvmf.py. py-spdk provides a generic interface for the upper management apps. In other words, the upper management apps do not care about how the backend is realized; they just work through the interfaces provided by py-spdk.
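To make the layering concrete, here is a rough sketch of the idea; `SpdkFacade` and the transport interface are invented names for illustration, not the actual py-spdk classes:

```python
class SpdkFacade:
    """Single entry point shared by every upper management app.

    The injected transport hides which lower layer (e.g. the code in
    scripts/rpc) actually carries the request, so callers never touch
    the RPC plumbing directly.
    """
    def __init__(self, transport):
        self.transport = transport

    def get_nvmf_subsystems(self):
        # Upper apps call this method; the RPC details stay hidden.
        return self.transport.call("get_nvmf_subsystems")

    def get_bdevs(self):
        return self.transport.call("get_bdevs")


class FakeTransport:
    """Stand-in lower layer, used here only to show the decoupling."""
    def call(self, method, params=None):
        return {"method": method, "params": params}
```

Each management app then constructs one facade object instead of re-implementing the same RPC module on its own.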


Secondly, why did we use protobuf as the data model? Because protobuf provides a better standardized description of the API than JSON. From the code point of view, we only need to define a '.proto' file that describes the data and compile it into a library file for each language (such as spdk_pb2.py, spdk.go, etc.). Each 'message' in the '.proto' file is compiled into a protobuf object that serves as the carrier of the data returned to the upper management apps. Compared with JSON, the upper management apps can quickly learn the shape of the returned value from the generated library file rather than from a hand-written document. From the ecosystem point of view, many mainstream cloud-native apps (like k8s) use protobuf as their data model, which is also an opportunity for us.
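As a hypothetical illustration (the message and field names below are made up for this email, not the actual py-spdk schema), a '.proto' definition like this is what gets compiled into the per-language library files:

```proto
syntax = "proto3";

package spdk;

// Hypothetical example: one NVMe-oF subsystem entry returned to callers.
message NvmfSubsystem {
  string nqn = 1;                      // subsystem NVMe Qualified Name
  string serial_number = 2;
  repeated string listen_addresses = 3;
}
```

Running `protoc --python_out=. spdk.proto` produces spdk_pb2.py, and the same '.proto' file can generate Go bindings, so every language binding shares one schema instead of each maintaining its own description of the JSON.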

I hope that makes sense.


On Jan 28, 2018, at 5:25 AM, Luse, Paul E <paul.e.luse@intel.com> wrote:

Hi Howard,
I think the main question right now is about the new core functionality in the patch versus what Ben mentioned wrt what is there today in scripts/rpc:
__init__.py, app.py, bdev.py, client.py, iscsi.py, log.py, lvol.py, nbd.py, net.py, nvmf.py, pmem.py, vhost.py
This is the change he mentioned that happened since the original py-spdk client was added.  So for example if I look at what is in the patch in py-spdk/nvmf_client.py I don’t see any capabilities that can’t be done in scripts/rpc/nvmf.py, but I could be missing something. I think the protobuf and repo location discussions are secondary to first establishing the value of the core functionality of the patch to the community.
Can you explain how the SDK Framework layer might be a better approach as compared to working on the files listed above?
PS:  There is an open community conference call every other week; I think the next one is coming up here soon and it’s usually announced on IRC a few days in advance.  If real-time discussion would help we could always look at scheduling a separate open meeting at a time more friendly to your time zone.
From: SPDK [mailto:spdk-bounces@lists.01.org] On Behalf Of Zhipeng Huang
Sent: Saturday, January 27, 2018 12:11 AM
To: Storage Performance Development Kit <spdk@lists.01.org>
Subject: Re: [SPDK] Add py-spdk client for SPDK
Hi Ben,
After discussing with my team, the general feeling is that keeping protobuf would be a preferred choice. The reason, as I stated earlier, is that proto provides a better standardized description of the API than JSON, and also we could create data models rather quickly for bindings/tools written in other languages. We have already generated the go-spdk based upon the proto model for OpenSDS's interaction with SPDK drivers. It provides benefits, at least judging from our own practice.
Frankly, given gRPC's wide adoption I don't think this should be a big issue. If you still have doubts about this, I think maybe we could set up a conf call, or I could discuss with Harris when he's in China for the SPDK summit.
For the repo, I think it is entirely up to the community's decision whether to maintain it in the main repo or create a new one.
On Sat, Jan 27, 2018 at 6:28 AM, Walker, Benjamin <benjamin.walker@intel.com> wrote:
On Wed, 2018-01-24 at 07:13 +0800, Zhipeng Huang wrote:
> Do we have a conclusion on this issue ? If it is ok to have a spdk/sdk repo,
> then we will modify the current patch (get rid of protobuf) and resubmit
> the patch to the new repo once it is established (meanwhile abandoning the
> current one to spdk/spdk).

If you remove protobuf, can you describe what is left? Recently scripts/rpc.py
was refactored to break it up into a set of Python libraries in scripts/rpc,
plus the command line tool at scripts/rpc.py. What functionality does this new
code provide over and above what is already present there?

SPDK is certainly in need of better management tools, so in the most general
sense the community is very supportive of your effort here. New management tools
can also go directly into the main spdk repository (a separate repository was
only suggested when we thought this was a Python binding to the SPDK libraries).
I'm wondering if an easier way forward would be to continue refining the Python
packages in scripts/rpc to be more general purpose libraries for sending the
JSON RPCs. What are your thoughts on that?
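For context, a general-purpose library for sending the JSON RPCs boils down to something like the sketch below; the class and function names here are illustrative, not SPDK's actual scripts/rpc API, and a real client would connect to the target's Unix-domain socket:

```python
import json
import socket


def build_request(method, params=None, req_id=1):
    """Encode one JSON-RPC 2.0 request, the wire format an SPDK target speaks."""
    req = {"jsonrpc": "2.0", "method": method, "id": req_id}
    if params is not None:
        req["params"] = params
    return json.dumps(req).encode()


class JsonRpcClient:
    """Minimal client over an already-connected socket (Unix or TCP)."""
    def __init__(self, sock):
        self.sock = sock
        self.req_id = 0

    def call(self, method, params=None):
        self.req_id += 1
        self.sock.sendall(build_request(method, params, self.req_id))
        reply = json.loads(self.sock.recv(65536).decode())
        if "error" in reply:
            raise RuntimeError(reply["error"])
        return reply["result"]
```

The point is only that this plumbing is generic: once it lives in a shared library, every management app and every RPC method (nvmf, bdev, vhost, ...) reuses the same code path.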
SPDK mailing list

Zhipeng (Howard) Huang
Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Office: Huawei Industrial Base, Longgang, Shenzhen
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Office: Calit2 Building Room 2402
OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado