On Tue, Nov 21, 2017 at 10:19 AM, Rik van Riel <riel@redhat.com> wrote:
On Fri, 2017-11-03 at 14:21 +0800, Xiao Guangrong wrote:
> On 11/03/2017 12:30 AM, Dan Williams wrote:
> > Good point, I was assuming that the mmio flush interface would be
> > discovered separately from the NFIT-defined memory range. Perhaps
> > via
> > PCI in the guest? This piece of the proposal needs a bit more
> > thought...
> Consider the case where a vNVDIMM device backed by normal storage and a
> vNVDIMM device backed by real nvdimm hardware both exist in the same VM;
> the flush interface should be associable with each SPA region
> respectively. That's why I'd like to integrate the flush interface
> into NFIT/ACPI by using a separate table. Is it possible for this to
> become part of the ACPI specification? :)
It would also be perfectly fine to have the virtio PCI device indicate
which vNVDIMM range it flushes. Since the guest OS needs to support that
kind of device anyway, does it really matter which direction the device
association points?
We can go with the "best" interface for what could be a relatively slow
flush (an fsync on a file on ssd/disk on the host), which requires that
the flushing task wait on completion.
If that kind of interface cannot be advertised through NFIT/ACPI,
wouldn't it be perfectly fine to have only the virtio PCI device
indicate which vNVDIMM range it flushes?
Yes, we could do this with a custom PCI device, however the NFIT is
frustratingly close to being able to define something like this. At
the very least we can start with a "SPA Range GUID" that is Linux
specific to indicate "call this virtio flush interface on FUA / flush
cache requests" as a stop gap until a standardized flush interface can
be defined.