On Mi, 2015-11-25 at 08:10 +0000, Tian, Kevin wrote:
> From: Jike Song
> Sent: Wednesday, November 25, 2015 11:20 AM
> > The hack to change device identity needs to go away for merging stuff
> > upstream. So seabios needs some way to figure what the hostbridge
> > really is (i440fx / q35) to initialize it properly, without playing
> > masquerade games. One possible option would be a special subsystem id.
> > Comments? Other suggestions?
> If I remember correctly, this workaround was only necessary for the
> Windows graphics driver, which reads some information from the config
> registers of the host bridge.
> On the other hand, I have been told that Intel is working on removing
> that dependency - the Windows gfx driver won't look at the host bridge
> config registers any more. Not sure if that driver is released already,
> but let's assume that it will resolve this hack for us :)
Yes, the on-going effort on the driver side, which removes lots of hacks
on the Qemu side, would benefit both passthru and vgpu.
Is this a pure software issue? Or does it need hardware fixes too (i.e.
will it work with newer igd only)? What is the guest driver state?
Publicly available already?
What are your upstream intentions here? Get things merged? Or require
users to install recent enough guest drivers instead?
> > (2) ISA bridge
> > ==============
> > Lives at 1f.0. Device id filled from host. Looks pretty much like a
> > dummy device; it doesn't seem to actually do anything. Did I miss
> > something? If not: what is this needed for? Getting past guest driver
> > sanity checks?
> The ISA bridge is provided only for the guest gfx driver to calculate the
> offset of some MMIO registers. Both the Linux and Windows gfx drivers use
> it. And the good news is, it will also be removed from both the Linux and
> Windows drivers, as described above.
One purpose is to detect the Intel graphics generation. Recently the Linux
i915 driver accepted a patch to avoid exposing the host 1f.0, as an example:
That seems to specifically match the vmware emulated device.
For "-M q35" we should add the q35 lpc pci id there. I've tried to
masquerade the q35 lpc as host lpc (QM77), which didn't work very well.
They are not compatible enough for that. ACPI stopped working, seems
the register locations have changed.
For "-M pc" we still need a dummy device @ 1f.0 (seems to not cause
> > (3) opregion
> > ============
> > What is this exactly? A paravirtual communication path between host and
> > guest igd driver? If so: can this be moved to vfio?
> OpRegion was not provided for PV communication. OpRegion is an IGD
> addon to the ACPI spec:
> It's necessary in any case and won't be removed. Currently iGVT-g has the
> OpRegion implemented in the kernel, and we are working on moving it into
> QEMU. Once the emulation is done in QEMU, there won't be tricky bits, esp.
> for KVMGT:
> * pcicfg 0xfc is emulated in QEMU
> * the OpRegion memslot is allocated in QEMU
> * the guest E820 memmap is composed in QEMU, so reserving 2 pages
> for the OpRegion is trivial
> The OpRegion is vendor specific, so I guess implementing it in QEMU or the
> VFIO driver doesn't matter?
Adding vfio expert (alex) to cc for comments (he is on vacation atm, so
the answer will take a week or so ..).
Can OpRegion be emulated completely in userspace? I suspect qemu would
have to notify the kernel on certain guest actions, and we would need an
interface for that ...
OpRegion is not a vgpu specific thing. Same for passthru of Intel
graphics.
Ok, so when extending the vfio interface to expose the opregion as a vfio
region to qemu, we could have an identical interface for both vgpu and
passthru. Makes sense to me, even though it is vendor-specific. Let's
see what Alex thinks.
When going that route, the opregion emulation would need to stay in the
kernel.
What else is needed to make the opregion work? Looking at the specs shows
that acpi is involved, so I suspect the acpi tables for the guest need
changes too, so that the guest os actually finds the opregion?
However it matters only when the graphics device is the primary one. If
exposed as the secondary graphics device (w/ the primary one being an
emulated VGA device), we can avoid such OpRegion tricks in Qemu.
Good to know, so we can get started without opregion support.
Actually this is
what's planned in the coming GVT-d KVM support (i.e. passthru). The
drawback is that local display might not work w/o seeing the VBT
information contained in the OpRegion.
i.e. routing the display to one physical monitor connected to the host?
You have to use a remote connection like vnc, but that should be
fine for the dominant server/cloud gfx virtualization usages.
Currently GVT-g exposes the vgpu as the primary gfx device. It's
reasonable to choose the same secondary-gfx-device approach as GVT-d,
given the trickiness which Jike described. We may maintain the
primary-gfx-device mode as an off-tree feature instead.
In general, GVT-d passthru for KVM is aiming to make IGD assignment the
same as for other PCI devices. GVT-g vgpu for KVM (KVMGT) would follow
that direction too. :-)