> From: Tian, Kevin
> Sent: Friday, November 20, 2015 4:36 PM
> > > > So, for non-opengl rendering qemu needs the guest framebuffer data so it
> > > > can feed it into the vnc server. The vfio framebuffer region is meant
> > > > to support this use case.
> > >
> > > what's the format requirement on that framebuffer? If you are familiar
> > > with Intel Graphics, there's a so-called tiling feature applied on frame
> > > buffer so it can't be used as a raw input to vnc server. w/o opengl you
> > > need to do some conversion on the CPU first.
> > Yes, that conversion needs to happen, qemu can't deal with tiled
> > graphics. Anything which pixman can handle will work. Preferred would
> > be PIXMAN_x8r8g8b8 (aka DRM_FORMAT_XRGB8888 on a little endian host), which
> > is the format used by the vnc server (and other places in qemu)
> > internally.
Now the format is reported based on the guest's setting; some agent needs to
do the format conversion in user space.
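For illustration, a minimal sketch of that user-space conversion, assuming a linear (already de-tiled) rgb888 source; the helper names here are made up, but the packing matches PIXMAN_x8r8g8b8 (32-bit 0x00RRGGBB words, which land in memory as B, G, R, X on a little-endian host, i.e. DRM_FORMAT_XRGB8888):

```c
#include <stddef.h>
#include <stdint.h>

/* Pack one 8-bit-per-channel RGB pixel into PIXMAN_x8r8g8b8:
 * a 32-bit word 0x00RRGGBB.  On a little-endian host the bytes
 * land in memory as B, G, R, X -- i.e. DRM_FORMAT_XRGB8888. */
static inline uint32_t pack_x8r8g8b8(uint8_t r, uint8_t g, uint8_t b)
{
    return ((uint32_t)r << 16) | ((uint32_t)g << 8) | (uint32_t)b;
}

/* Convert a linear (already de-tiled) rgb888 buffer: 'src' is
 * 3 bytes per pixel, 'dst' one uint32_t per pixel. */
static void rgb888_to_x8r8g8b8(const uint8_t *src, uint32_t *dst,
                               size_t npixels)
{
    for (size_t i = 0; i < npixels; i++)
        dst[i] = pack_x8r8g8b8(src[3 * i], src[3 * i + 1], src[3 * i + 2]);
}
```

The de-tiling step for Intel's tiled layouts would have to happen before this, of course; this only covers the channel packing.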
> > qemu can also use the opengl texture for the guest fb, then fetch the
> > data with glReadPixels(). Which will probably do exactly the same
> > conversion. But it'll add an opengl dependency to the non-opengl
> > rendering path in qemu, would be nice if we can avoid that.
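To sketch why the glReadPixels path ends up in the same place: fetching with format GL_BGRA and type GL_UNSIGNED_BYTE emits bytes in B, G, R, A order, which a little-endian host reads back as the 0xAARRGGBB words pixman expects (alpha landing in the X byte). A small demonstration of just that byte-order argument, with a made-up helper name and no GL context involved:

```c
#include <stdint.h>

/* glReadPixels(..., GL_BGRA, GL_UNSIGNED_BYTE, buf) writes each
 * pixel as the bytes B, G, R, A.  Assemble the 32-bit value a
 * little-endian host would see when reading those four bytes back
 * as a word: 0xAARRGGBB, the PIXMAN_x8r8g8b8 layout. */
static uint32_t word_from_bgra_bytes(const uint8_t bgra[4])
{
    return (uint32_t)bgra[0]        | ((uint32_t)bgra[1] << 8) |
           ((uint32_t)bgra[2] << 16) | ((uint32_t)bgra[3] << 24);
}
```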
> > While being at it: When importing a dma-buf with a tiled framebuffer
> > into opengl (via eglCreateImageKHR + EGL_LINUX_DMA_BUF_EXT) I suspect we
> > have to pass in the tile size as attribute to make it work. Is that
> > correct?
> I'd guess so, but we need to double-confirm later when we reach that level
> of detail. Some homework on dma-buf is required first. :-)
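For reference, a sketch of the single-plane attribute list that eglCreateImageKHR takes for an EGL_LINUX_DMA_BUF_EXT import. The attribute values are copied from EGL/eglext.h (EGL_EXT_image_dma_buf_import); the helper function, fd, and stride values are hypothetical placeholders. Notably, the base extension has no "tile size" attribute as such: the buffer layout is implied by the fourcc and the driver behind the dma-buf, so a tiled Intel buffer would need layout information conveyed some other way (e.g. the later DRM format-modifier attributes of EGL_EXT_image_dma_buf_import_modifiers).

```c
#include <stdint.h>

/* Attribute names from EGL/eglext.h (EGL_EXT_image_dma_buf_import),
 * defined locally so the sketch stands alone. */
#define EGL_HEIGHT                     0x3056
#define EGL_WIDTH                      0x3057
#define EGL_LINUX_DRM_FOURCC_EXT       0x3271
#define EGL_DMA_BUF_PLANE0_FD_EXT      0x3272
#define EGL_DMA_BUF_PLANE0_OFFSET_EXT  0x3273
#define EGL_DMA_BUF_PLANE0_PITCH_EXT   0x3274
#define EGL_NONE                       0x3038

/* DRM_FORMAT_XRGB8888: fourcc 'XR24', little-endian packed. */
#define DRM_FOURCC_XRGB8888 0x34325258u

/* Hypothetical helper: fill the attribute list you would pass as
 * eglCreateImageKHR(dpy, EGL_NO_CONTEXT, EGL_LINUX_DMA_BUF_EXT,
 *                   NULL, attrs).  Returns the entry count used. */
static int fill_dmabuf_attrs(int32_t attrs[13], int fd, int width,
                             int height, int stride, uint32_t fourcc)
{
    int i = 0;
    attrs[i++] = EGL_WIDTH;                    attrs[i++] = width;
    attrs[i++] = EGL_HEIGHT;                   attrs[i++] = height;
    attrs[i++] = EGL_LINUX_DRM_FOURCC_EXT;     attrs[i++] = (int32_t)fourcc;
    attrs[i++] = EGL_DMA_BUF_PLANE0_FD_EXT;    attrs[i++] = fd;
    attrs[i++] = EGL_DMA_BUF_PLANE0_OFFSET_EXT; attrs[i++] = 0;
    attrs[i++] = EGL_DMA_BUF_PLANE0_PITCH_EXT; attrs[i++] = stride;
    attrs[i++] = EGL_NONE;
    return i;
}
```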
btw some questions here:
for the non-gl and gl rendering paths in Qemu, are they based on dma-buf already?
once we can export the guest framebuffer as a dma-buf, is there additional work
required to integrate with SPICE, or is it straightforward?
> From: Dr. Greg Wettstein [mailto:email@example.com]
> Sent: Monday, November 30, 2015 2:05 AM
> Hi, I hope everyone has had an enjoyable weekend, particularly for
> those who were enjoying the Thanksgiving holiday.
> We've been following the i915 graphics virtualization project for some
> time. We have been working on the engineering behind some solutions
> which we hope to base on this technology.
Thanks for your interest in our technology. Can you share your usage
scenarios for it?
> We had ported the Xen 4.3 based version of the iGVT-G support into 4.4
> using the Q1-2015 xen/qemu/kernel releases. Most of our development
> has been on this platform release and we have found it extremely stable
> through hundreds of dom0 reboots and VM starts.
Good to know that. :-)
> For a 'Thanksgiving weekend project' I took on porting our 4.4 version
> into 4.5 and slogged through all the issues around the new hypervisor
> ioreq server model. I was just starting to validate functionality
> when I discovered, midway through the weekend, the 'official' 4.5
> release based on the new server architecture... :-)
> All through the work on the port it felt like we were driving a square
> peg into a round hole given how the new ioreq server architecture was
> being done. It was obvious this was the 'correct' way to do the
> virtual machine I/O region mapping, but we wanted to get something we
> were familiar with working.
> About the time I started testing the port our Golden Retriever vomited
> on one of my keyboards, which I took as the final sign that our code
> was an ugly hack so I decided to bring up the official 4.5 release for
> testing.... :-)
> Unfortunately we haven't found the success with the 4.5 release that
> we experienced with the 4.4 'old I/O model' code. On identical
> hardware we see very intermittent success on getting dom0 booted to
> operational status. The failures occur when the i915 modeset is
> executed in dom0, which of course corresponds to the initialization of
> the VGT instance.
> The failure occurs both with a hypervisor built from the Github branch
> of the 4.5 code as well as with a hypervisor built from 4.5.2 sources
> patched with VGT support. I'm including below the console messages of
> a representative boot failure.
> I did note the 'Unclaimed register detected' error and will get
> i915.mmio_debug output from that tonight, but as I noted, the same
> hardware functions flawlessly on the 4.4 based implementation.
> On the rare boots which are successful we get the following message
> out of the hypervisor when a VGT based HVM is started:
> (XEN) traps.c:668:d1v0 Bad GMFN 8000000080 (MFN ffffffffffffffff) to MSR 40000000
> Which results in a segmentation fault of the VGT QEMU instance.
> This is on a Haswell based system. We have testing scheduled for a
> Broadwell platform but since support is less advanced on the latter
> platform we didn't want to add another variable to the situation.
> This is extremely useful and powerful technology and we want to
> support its development so we would be happy to dig into whatever
> additional debugging would be useful. We have pretty solid
> engineering skills across the range of technologies in play but we
> would certainly not claim considerable expertise on the i915 hardware.
> I've copied a smattering of the involved Intel folks on this as well.
> One of our concerns is whether or not this is an 'experiment' or
> something Intel plans on supporting in the long term. We have obvious
> concerns about basing solutions on technology whose underlying hardware
> could change in a manner we could not support ourselves, were Intel to
> abandon the concept.
Intel will support this technology in the long term. The project started
on HSW, is now stable on BDW, and preliminary SKL support comes in Q4.
Yes, it will continue.
> Have a good day.
If I read the above correctly, you are using the 2015-Q1 release and doing
your own porting from 4.3->4.4->4.5... Why not try the latest
2015-Q3 release, which is already based on 4.5 with the new ioreq
server model?