It must be next Thursday, right, Piotr?
I checked the mailing list for the meeting details, but the thread was a bit too long.
Could you tell me how to attend the SPDK community meeting this Thursday?
On Feb 6, 2018, at 12:07 PM, Luse, Paul E <email@example.com> wrote:
That's great, Helloway. I would strongly suggest one thing at a time, though: first tackle the front-end interfaces, and once that's completed, start seeing where people's heads are at wrt things like protobuf. That's pretty common in most open source communities and definitely is here – smaller patches and smaller projects are always better. Easier to understand, easier to review, and easier to land!
Keep your eyes open on the dist list for the meeting time; I look forward to finally talking to you :)
Thank you for your reply and advice.
First of all, I wholeheartedly agree with you that the better approach is to take the scripts/rpc.py and scripts/rpc components and build those out/refactor them to provide more/better functionality.
Secondly, I think you really hit the target and the very essence of the benefit that a consistent protobuf data model could bring. We are definitely in line on this one.
I would like to help to lead the effort which has been discussed thoroughly in this email thread, and work with any contributor in the SPDK community who is interested and willing to participate :)
Also look forward to my first community meeting this Thursday !
On Feb 6, 2018, at 3:20 AM, Luse, Paul E <firstname.lastname@example.org> wrote:
Thanks for the additional insight. On the py-spdk client, yes, I see the value that you are describing; however, instead of adding the py-spdk client, a better approach might be to take the scripts/rpc.py and scripts/rpc components and build those out/refactor them to provide more/better functionality. There's enough "likeness" between what you are proposing and what's already there that I'm afraid a lot of the reluctance to adopt this is based on the increased maintenance of adding this new code weighed against the increased value. If you improve on what's there one piece at a time, I think you can get a lot accomplished here.
Also, you should be aware that there’s some pending work to consolidate the SPDK target applications themselves, to have one target application with parameters instead of multiple binaries that do very similar work. I can see a good fit for your goals and that activity to work together to have a single python generic interface as well as a single binary underneath it both architected at the same time.
Would you be interested in working with some other community folks on that? I'm not sure who it was that was talking about looking into that, but hopefully they'll see this and chime in :) I think there might be a community meeting this Thu; if someone knows, please reply – we could talk on that call if that would help.
PS: Or, you could work independently to get a generic Python layer usable by applications "built into" the existing rpc/scripts code. A good place to start would be to pick one sample app, like the nvmf target, list out the functions that are currently provided via the existing rpc py code, and see how you can add the ones that are in your patch by extending/reorganizing/refactoring the existing code. I would definitely start with just one application though, and get buy-in from a maintainer that this is the right design choice. Then, the rest of the work will go quite smoothly, I think.
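To illustrate the reuse Paul is suggesting: the scripts/rpc modules are, broadly speaking, plain functions that take a client object and forward a JSON-RPC call, so an application can import and call them directly rather than maintaining a parallel client layer. The sketch below is hypothetical (the client class and the exact function signature are stand-ins, not the real scripts/rpc API):

```python
# Sketch only: a stand-in for the SPDK JSON-RPC client and one rpc-module-style
# function. The real client lives in scripts/rpc/client.py; names here are
# illustrative assumptions, not the actual SPDK API.

class FakeJSONRPCClient:
    """Returns canned responses; a real client would talk over a Unix socket."""
    def __init__(self, responses):
        self._responses = responses

    def call(self, method, params=None):
        # A real implementation would send a {"jsonrpc": "2.0", ...} request.
        return self._responses[method]

# Styled after the functions in scripts/rpc/nvmf.py (name is illustrative):
def get_nvmf_subsystems(client):
    """Forward one RPC through the shared client."""
    return client.call('get_nvmf_subsystems')

if __name__ == '__main__':
    client = FakeJSONRPCClient(
        {'get_nvmf_subsystems': [{'nqn': 'nqn.2016-06.io.spdk:cnode1'}]})
    for subsystem in get_nvmf_subsystems(client):
        print(subsystem['nqn'])
```

The point of the pattern: one client object carries the transport, and each per-protocol module stays a thin, importable wrapper around it.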
Let me try to answer your questions on two points.
Firstly, py-spdk/nvmf_client.py provides the same abilities as /scripts/rpc/nvmf.py; the difference is that they sit in the upper and lower layers respectively. Why do it this way? Imagine there is only one management app that needs to invoke SPDK-based apps: we can write the module inside that management app and communicate directly with /scripts/rpc/nvmf.py. However, when there are multiple upper management apps that need to invoke SPDK-based apps, we need a unified piece of software, named 'py-spdk', that sits between the SPDK-based apps and the upper management apps. If we don't do so, each management app needs to write the same module to interact with /scripts/rpc/nvmf.py. py-spdk provides a generic interface for the upper management apps. In other words, the upper management apps do not care about how the backend is realized; they just work through the interfaces provided by py-spdk.
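The layering described above can be sketched as a facade: one py-spdk object that management apps call, delegating to per-protocol backends underneath. Everything here is illustrative (the class and method names are assumptions, not the actual py-spdk code):

```python
# Sketch of the proposed layering: multiple management apps -> one py-spdk
# facade -> per-protocol rpc modules. All names below are hypothetical.

class NvmfBackend:
    """Stand-in for an scripts/rpc/nvmf.py-style lower layer."""
    def list_subsystems(self):
        return ['nqn.2016-06.io.spdk:cnode1']

class BdevBackend:
    """Stand-in for an scripts/rpc/bdev.py-style lower layer."""
    def list_bdevs(self):
        return ['Malloc0']

class PySpdk:
    """Generic facade: management apps call this instead of each rpc module."""
    def __init__(self, nvmf=None, bdev=None):
        self._nvmf = nvmf or NvmfBackend()
        self._bdev = bdev or BdevBackend()

    def get_nvmf_subsystems(self):
        return self._nvmf.list_subsystems()

    def get_bdevs(self):
        return self._bdev.list_bdevs()

if __name__ == '__main__':
    sdk = PySpdk()
    print(sdk.get_nvmf_subsystems())
    print(sdk.get_bdevs())
```

The trade-off Paul raises still applies: this facade duplicates much of what scripts/rpc already exposes, so its value has to outweigh the extra maintenance.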
Secondly, why did we use protobuf as the data model? Because protobuf provides a better standardized description of the API than JSON. From the code point of view, we only need to define one '.proto' file that describes the data and compile it into library files for different languages (such as spdk_pb2.py, spdk.go, etc.). Each 'message' in the '.proto' file is compiled to a protobuf object that serves as the carrier of the data returned to the upper management apps. Compared to JSON, the upper management apps can quickly learn the shape of the returned value from the generated library file rather than from manually written documentation. From the ecosystem point of view, many mainstream cloud-native apps use protobuf as their data model (k8s, for example), which is also an opportunity for us.
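As a concrete illustration of that point, a single proto3 definition like the hypothetical fragment below (the message and field names are invented for this example, not taken from the patch) could be compiled into spdk_pb2.py for Python, a .pb.go file for Go, and so on, giving every management app the same typed data model:

```proto
// Illustrative only: hypothetical messages for one RPC result.
syntax = "proto3";

package pyspdk;

message NvmfSubsystem {
  string nqn = 1;
  string subtype = 2;
  repeated string listen_addresses = 3;
}

message GetNvmfSubsystemsResponse {
  repeated NvmfSubsystem subsystems = 1;
}
```

Each generated binding then documents the field names and types by construction, which is the "self-describing" advantage over ad hoc JSON.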
I hope that makes sense.
I think the main question right now is about the new core functionality in the patch versus what Ben mentioned wrt what is there today in scripts/rpc:
__init__.py client.py lvol.py nvmf.py
app.py iscsi.py nbd.py pmem.py
bdev.py log.py net.py vhost.py
This is the change he mentioned that happened since the original py-spdk client was added. So, for example, if I look at what is in the patch in py-spdk/nvmf_client.py, I don't see any capabilities that can't be done in the scripts/rpc/nvmf.py, but I could be missing something. I think the protobuf and repo location discussions are secondary to first establishing the value of the core functionality of the patch to the community.
Can you explain how the SDK Framework layer might be a better approach as compared to working on the files listed above?
PS: There is an open community conference call every other week. I think the next one is coming up here soon, and it's usually announced on IRC a few days in advance. If real-time discussion would help, we could always look at scheduling a separate open meeting at a time more friendly to your time zone.
After discussing with my team, the general feeling is that keeping protobuf would be the preferred choice. The reason, as I stated earlier, is that proto provides a better standardized description of the API than JSON, and we could also create the data model rather quickly for bindings/tools written in other languages. We have already generated go-spdk based upon the proto model for OpenSDS's interaction with SPDK drivers. It provides benefits, at least judging from our own practice.
Frankly, given gRPC's wide adoption, I don't think this should be a big issue. If you still have doubts about this, I think maybe we could set up a conf call, or I could discuss it with Harris when he's in China for the SPDK summit.
As for the repo, I think it is entirely up to the community's decision whether to maintain it in the main repo or create a new one.
On Wed, 2018-01-24 at 07:13 +0800, Zhipeng Huang wrote:
> Do we have a conclusion on this issue? If it is OK to have a spdk/sdk repo,
> then we will modify the current patch (get rid of protobuf) and resubmit
> the patch to the new repo once it is established (meanwhile abandoning the
> current one to spdk/spdk).
If you remove protobuf, can you describe what is left? Recently scripts/rpc.py
was refactored to break it up into a set of Python libraries in scripts/rpc,
plus the command line tool at scripts/rpc.py. What functionality does this new
code provide over and above what is already present there?
SPDK is certainly in need of better management tools, so in the most general
sense the community is very supportive of your effort here. New management tools
can also go directly into the main spdk repository (a separate repository was
only suggested when we thought this was a Python binding to the SPDK libraries).
I'm wondering if an easier way forward would be to continue refining the Python
packages in scripts/rpc to be more general purpose libraries for sending the
JSON RPCs. What are your thoughts on that?
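The direction Ben describes (scripts/rpc as general-purpose libraries for sending JSON-RPCs) can be sketched with just the request-building half; the transport over a Unix domain socket is omitted, and the function name below is an assumption for illustration, not the actual scripts/rpc API:

```python
# Sketch only: building a JSON-RPC 2.0 request body of the kind the SPDK
# targets consume. A general-purpose library would pair this with a socket
# transport and response matching; that part is omitted here.
import json

def build_jsonrpc_request(method, params=None, request_id=1):
    """Serialize one JSON-RPC 2.0 request (hypothetical helper name)."""
    request = {'jsonrpc': '2.0', 'method': method, 'id': request_id}
    if params is not None:
        request['params'] = params
    return json.dumps(request)

if __name__ == '__main__':
    print(build_jsonrpc_request('get_bdevs'))
    print(build_jsonrpc_request('construct_malloc_bdev',
                                {'num_blocks': 4096, 'block_size': 512}))
```

Keeping this layer free of any CLI concerns is what would let both scripts/rpc.py and external management apps share it.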
Zhipeng (Howard) Huang
IT Standard & Patent/IT Product Line
Huawei Technologies Co., Ltd.
Office: Huawei Industrial Base, Longgang, Shenzhen
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Office: Calit2 Building Room 2402
OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado