HDMI problems
by Bao Ha
I am having problems connecting from an HDMI output to an HDMI display: the
screen is completely garbled.
However, it works fine from mini-DP to DP, or from HDMI through an HDMI-to-VGA adapter.
Looking at the debug listing, the HDMI->HDMI case has the following:
[drm:intel_modeset_readout_hw_state] [CRTC:21] hw state readout: enabled
while HDMI->HDMI/VGA has it disabled.
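For reference, the readout lines above come from DRM debug logging; a minimal sketch of turning it on (the mask value and sysfs path are assumptions, and root is needed on a real system):

```shell
# Sketch: write a DRM debug mask so lines like intel_modeset_readout_hw_state
# and intel_dump_pipe_config show up in dmesg. The optional argument lets this
# be pointed at a scratch file; by default it targets the standard drm module
# parameter path.
drm_debug_on() {
  local param="${1:-/sys/module/drm/parameters/debug}"
  # 0x1e = driver|kms|prime|atomic debug categories (assumed mask)
  echo 0x1e > "$param"
}
```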
Also, during boot in the HDMI->HDMI case, there is a kernel warning and a dump
of the pipe configuration:
[ 4.131333] [drm:check_crtc_state] [CRTC:21]
[ 4.131783] [drm:intel_pipe_config_compare [i915]] *ERROR* mismatch in
pch_pfit.enabled (expected 0, found 1)
[ 4.132221] [drm:intel_pipe_config_compare [i915]] *ERROR* mismatch in
scaler_state.scaler_id (expected -1, found 0)
[ 4.132350] ------------[ cut here ]------------
[ 4.132816] WARNING: CPU: 3 PID: 989 at
drivers/gpu/drm/i915/intel_display.c:12807
intel_modeset_check_state+0x5a4/0x6a6 [i915]()
[ 4.132919] pipe state doesn't match!
[ 4.133305] Modules linked in: kvm i915 e1000e ptp pps_core i2c_algo_bit
drm_kms_helper sd_mod drm xhci_pci xhci_hcd
[ 4.133531] CPU: 3 PID: 989 Comm: kworker/u8:5 Not tainted
4.3.0-rc6-vgt+ #5
[ 4.133858] Hardware name: /NUC6i5SYB, BIOS
SYSKLi35.86A.0054.2016.0930.1102 09/30/2016
[ 4.134024] Workqueue: events_unbound async_run_entry_fn
[ 4.134269] 0000000000000000 ffff880482a238d8 ffffffff812629e2
ffff880482a23920
[ 4.134507] ffff880482a23910 ffffffff81054825 ffffffffa0169a6e
ffff8800478ae000
[ 4.134747] ffff880482583800 ffff8800478e5000 ffff880482652c00
ffff880482a23978
[ 4.134786] Call Trace:
[ 4.134947] [<ffffffff812629e2>] dump_stack+0x44/0x55
[ 4.135142] [<ffffffff81054825>] warn_slowpath_common+0x94/0xad
[ 4.135433] [<ffffffffa0169a6e>] ?
intel_modeset_check_state+0x5a4/0x6a6 [i915]
[ 4.135615] [<ffffffff81054881>] warn_slowpath_fmt+0x43/0x4b
[ 4.135922] [<ffffffffa0165262>] ?
intel_pipe_config_compare+0x1395/0x13e8 [i915]
[ 4.136209] [<ffffffffa0169a6e>] intel_modeset_check_state+0x5a4/0x6a6
[i915]
[ 4.136479] [<ffffffffa0171af8>] intel_atomic_commit+0x4dc/0x50e [i915]
[ 4.136702] [<ffffffffa003d55b>] drm_atomic_commit+0x48/0x4d [drm]
[ 4.136958] [<ffffffffa0097f31>] restore_fbdev_mode+0xf5/0x26c
[drm_kms_helper]
[ 4.137287] [<ffffffffa0099913>]
drm_fb_helper_restore_fbdev_mode_unlocked+0x31/0x68 [drm_kms_helper]
[ 4.137552] [<ffffffffa0099984>] drm_fb_helper_set_par+0x3a/0x46
[drm_kms_helper]
[ 4.137829] [<ffffffffa0187981>] intel_fbdev_set_par+0x12/0x4f [i915]
[ 4.137996] [<ffffffff812a0a9f>] fbcon_init+0x315/0x421
[ 4.138155] [<ffffffff8130be66>] visual_init+0xc8/0x11d
[ 4.138341] [<ffffffff8130d5ad>] do_bind_con_driver+0x1b1/0x2d0
[ 4.138535] [<ffffffff8130d9d3>] do_take_over_console+0x15f/0x189
[ 4.138724] [<ffffffff812a0158>] do_fbcon_takeover+0x5b/0x97
[ 4.138921] [<ffffffff812a36cf>] fbcon_event_notify+0x30c/0x62b
[ 4.139104] [<ffffffff8106becd>] notifier_call_chain+0x39/0x5c
[ 4.139329] [<ffffffff8106c124>]
__blocking_notifier_call_chain+0x41/0x5c
[ 4.139548] [<ffffffff8106c14e>] blocking_notifier_call_chain+0xf/0x11
[ 4.139750] [<ffffffff812a8331>] fb_notifier_call_chain+0x16/0x18
[ 4.139952] [<ffffffff812aa077>] register_framebuffer+0x288/0x2c0
[ 4.140249] [<ffffffffa0099c32>]
drm_fb_helper_initial_config+0x2a2/0x318 [drm_kms_helper]
[ 4.140542] [<ffffffffa01882d9>] intel_fbdev_initial_config+0x16/0x18
[i915]
[ 4.140738] [<ffffffff8106d182>] async_run_entry_fn+0x34/0xbe
[ 4.140926] [<ffffffff81066983>] process_one_work+0x1a7/0x31a
[ 4.141102] [<ffffffff81067386>] worker_thread+0x26f/0x35b
[ 4.141285] [<ffffffff81067117>] ? rescuer_thread+0x274/0x274
[ 4.141433] [<ffffffff8106b42c>] kthread+0xcd/0xd5
[ 4.141628] [<ffffffff8106b35f>] ? kthread_worker_fn+0x139/0x139
[ 4.141802] [<ffffffff814eb54f>] ret_from_fork+0x3f/0x70
[ 4.142002] [<ffffffff8106b35f>] ? kthread_worker_fn+0x139/0x139
[ 4.142142] ---[ end trace 6cba3123f2f58fce ]---
[ 4.142437] [drm:intel_dump_pipe_config] [CRTC:21][hw state] config
ffff8804824e3400 for pipe A
[ 4.142606] [drm:intel_dump_pipe_config] cpu_transcoder: A
[ 4.142809] [drm:intel_dump_pipe_config] pipe bpp: 36, dithering: 0
[ 4.143167] [drm:intel_dump_pipe_config] fdi/pch: 0, lanes: 0, gmch_m:
0, gmch_n: 0, link_m: 0, link_n: 0, tu: 0
[ 4.143503] [drm:intel_dump_pipe_config] dp: 0, lanes: 0, gmch_m: 0,
gmch_n: 0, link_m: 0, link_n: 0, tu: 0
[ 4.143867] [drm:intel_dump_pipe_config] dp: 0, lanes: 0, gmch_m2: 0,
gmch_n2: 0, link_m2: 0, link_n2: 0, tu2: 0
[ 4.144052] [drm:intel_dump_pipe_config] audio: 1, infoframes: 1
[ 4.144210] [drm:intel_dump_pipe_config] requested mode:
[ 4.144502] [drm:drm_mode_debug_printmodeline] Modeline 0:"" 0 0 1920 0
0 0 1080 0 0 0 0x0 0x0
[ 4.144664] [drm:intel_dump_pipe_config] adjusted mode:
[ 4.144938] [drm:drm_mode_debug_printmodeline] Modeline 0:"" 0 0 0 0 0 0
0 0 0 0 0x0 0x5
[ 4.145335] [drm:intel_dump_crtc_timings] crtc timings: 148500 1920 2008
2052 2200 1080 1084 1089 1125, type: 0x0 flags: 0x5
[ 4.145503] [drm:intel_dump_pipe_config] port clock: 222750
[ 4.145696] [drm:intel_dump_pipe_config] pipe src size: 1920x1080
[ 4.146001] [drm:intel_dump_pipe_config] num_scalers: 2, scaler_users:
0x80000000, scaler_id: 0
[ 4.146372] [drm:intel_dump_pipe_config] gmch pfit: control: 0x00000000,
ratios: 0x00000000, lvds border: 0x00000000
[ 4.146659] [drm:intel_dump_pipe_config] pch pfit: pos: 0x00000000,
size: 0x00000000, enabled
[ 4.146797] [drm:intel_dump_pipe_config] ips: 0
[ 4.146954] [drm:intel_dump_pipe_config] double wide: 0
[ 4.147333] [drm:intel_dump_pipe_config] ddi_pll_sel: 1; dpll_hw_state:
ctrl1: 0x21, cfgcr1: 0x80400173, cfgcr2: 0x2a5
[ 4.147511] [drm:intel_dump_pipe_config] planes on this crtc
[ 4.147774] [drm:intel_dump_pipe_config] STANDARD PLANE:18 plane: 0.0
idx: 0 enabled
[ 4.148023] [drm:intel_dump_pipe_config] FB:63, fb = 1920x1080
format = 0x34325258
[ 4.148310] [drm:intel_dump_pipe_config] scaler:-1 src (0, 0)
1920x1080 dst (0, 0) 1920x1080
[ 4.148629] [drm:intel_dump_pipe_config] CURSOR PLANE:20 plane: 0.1 idx:
1 disabled, scaler_id = -1
[ 4.148950] [drm:intel_dump_pipe_config] STANDARD PLANE:22 plane: 0.1
idx: 2 disabled, scaler_id = -1
[ 4.149263] [drm:intel_dump_pipe_config] [CRTC:21][sw state] config
ffff880482652c00 for pipe A
[ 4.149431] [drm:intel_dump_pipe_config] cpu_transcoder: A
[ 4.149636] [drm:intel_dump_pipe_config] pipe bpp: 36, dithering: 0
[ 4.149993] [drm:intel_dump_pipe_config] fdi/pch: 0, lanes: 0, gmch_m:
0, gmch_n: 0, link_m: 0, link_n: 0, tu: 0
[ 4.150329] [drm:intel_dump_pipe_config] dp: 0, lanes: 0, gmch_m: 0,
gmch_n: 0, link_m: 0, link_n: 0, tu: 0
[ 4.150687] [drm:intel_dump_pipe_config] dp: 0, lanes: 0, gmch_m2: 0,
gmch_n2: 0, link_m2: 0, link_n2: 0, tu2: 0
[ 4.150873] [drm:intel_dump_pipe_config] audio: 1, infoframes: 1
[ 4.151031] [drm:intel_dump_pipe_config] requested mode:
[ 4.151442] [drm:drm_mode_debug_printmodeline] Modeline 0:"1920x1080" 60
148500 1920 2008 2052 2200 1080 1084 1089 1125 0x48 0x5
[ 4.151597] [drm:intel_dump_pipe_config] adjusted mode:
Appreciate any help!
--
Best Regards.
Bao C. Ha
Hacom - Embedded Systems and Appliances
http://www.hacom.net
voice: 657-859-9422
*ERROR* gvt: vgpu1: read untracked MMIO
by Nick S
I was able to resolve my iommu group issue with Alex's help, and I am now
getting a warning stack in dmesg when trying to start my virtual machine.
The machine itself does not start and just sits on the "Starting Windows"
screen. This began after I installed the latest stable Intel graphics
drivers. I was able to see two GPUs before the driver installation. Below
are my start command and dmesg output. Any suggestions on where to look will
be much appreciated.
One more thing I've noticed: the guide asks for driver version 15.45.14.4910,
but 15.45.14.4590 is the latest on the download link.
When I install it, for some reason 21.20.16.4590 is reported as the version.
Also, the values I get with *ERROR* gvt: vgpu1: read untracked MMIO are
sometimes different, e.g. these two entries from another try:
[ 244.833653] [drm:intel_vgpu_emulate_mmio_read [i915]] *ERROR* gvt:
vgpu1: read untracked MMIO d40(4B) val a9c9fb5d
[ 244.833936] [drm:intel_vgpu_emulate_mmio_read [i915]] *ERROR* gvt:
vgpu1: read untracked MMIO d48(4B) val a9c9fb5d
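A quick way to compare the distinct offsets across tries (a sketch, assuming the exact log line format shown above):

```shell
# Sketch: list the unique untracked-MMIO offsets in a saved dmesg capture,
# assuming lines of the form "... read untracked MMIO d40(4B) val ...".
extract_mmio_offsets() {
  grep -o 'untracked MMIO [0-9a-f]*' "$1" | awk '{print $3}' | sort -u
}
```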
Thank you!
qemu-system-x86_64 -enable-kvm -cpu host,kvm=off -m 6144 -smp
sockets=1,cores=2,threads=2 \
-serial none -vga qxl -show-cursor \
-ctrl-grab -no-quit \
-parallel none \
-usbdevice tablet -usbdevice keyboard \
$soundarg \
-name devvm \
-rtc base=localtime \
-netdev user,id=hn0 -device e1000,netdev=hn0,id=nic1,mac=$macaddr \
-device virtio-scsi-pci,id=scsi \
-device scsi-hd,drive=hd,serial=$hddserial \
-drive file=$folder/W7_UEFI.qcow2,id=hd,if=none \
-machine kernel_irqchip=on \
-device vfio-pci,sysfsdev=/sys/bus/pci/devices/0000:00:02.0/a297db4a-f4c2-11e6-90f6-d3b88d6c9525
[ 44.638710] iommu: Adding device a297db4a-f4c2-11e6-90f6-d3b88d6c9525 to
group 7
[ 44.638714] vfio_mdev a297db4a-f4c2-11e6-90f6-d3b88d6c9525: MDEV:
group_id = 7
[ 64.169722] [drm:intel_vgpu_emulate_mmio_read [i915]] *ERROR* gvt:
vgpu1: read untracked MMIO d40(4B) val 1a49962a
[ 64.672726] ------------[ cut here ]------------
[ 64.672756] WARNING: CPU: 0 PID: 2020 at drivers/gpu/drm/i915/gvt/gtt.c:1830
intel_vgpu_emulate_gtt_mmio_write+0x1cb/0x240 [i915]
[ 64.672756] vgpu1: found oob ggtt write, offset 0
[ 64.672757] Modules linked in: ctr ccm bnep arc4 nls_iso8859_1 iwlmvm
snd_soc_skl snd_soc_skl_ipc snd_hda_codec_hdmi mac80211 snd_soc_sst_ipc
snd_soc_sst_dsp snd_hda_ext_core intel_rapl snd_soc_sst_match
x86_pkg_temp_thermal snd_hda_codec_realtek intel_powerclamp coretemp
snd_hda_codec_generic snd_soc_core snd_compress kvm_intel snd_seq_midi
ac97_bus uvcvideo snd_seq_midi_event snd_pcm_dmaengine iwlwifi snd_rawmidi
videobuf2_vmalloc snd_hda_intel videobuf2_memops snd_hda_codec
videobuf2_v4l2 videobuf2_core snd_hda_core cfg80211 snd_hwdep input_leds
joydev btusb videodev btrtl snd_pcm serio_raw btbcm thinkpad_acpi snd_seq
media btintel rtsx_pci_ms bluetooth mei_me nvram snd_seq_device memstick
snd_timer shpchp mei intel_pch_thermal snd soundcore mac_hid tpm_crb
parport_pc ppdev lp parport autofs4
[ 64.672780] algif_skcipher af_alg dm_crypt uas usb_storage vfio_mdev
kvmgt vfio_iommu_type1 mdev vfio kvm irqbypass rtsx_pci_sdmmc
crct10dif_pclmul crc32_pclmul ghash_clmulni_intel i915 pcbc aesni_intel
i2c_algo_bit aes_x86_64 drm_kms_helper crypto_simd glue_helper cryptd
syscopyarea sysfillrect e1000e sysimgblt fb_sys_fops ahci ptp psmouse drm
rtsx_pci pps_core libahci wmi video fjes
[ 64.672793] CPU: 0 PID: 2020 Comm: qemu-system-x86 Not tainted
4.10.0-rc7+ #1
[ 64.672794] Hardware name: LENOVO 20F6CTO1WW/20F6CTO1WW, BIOS R02ET52W
(1.25 ) 12/05/2016
[ 64.672794] Call Trace:
[ 64.672798] dump_stack+0x63/0x90
[ 64.672800] __warn+0xcb/0xf0
[ 64.672802] warn_slowpath_fmt+0x5f/0x80
[ 64.672816] intel_vgpu_emulate_gtt_mmio_write+0x1cb/0x240 [i915]
[ 64.672829] intel_vgpu_emulate_mmio_write+0x3e1/0x600 [i915]
[ 64.672846] ? kvm_arch_vcpu_ioctl_run+0x6e6/0x1570 [kvm]
[ 64.672848] intel_vgpu_rw+0x114/0x150 [kvmgt]
[ 64.672849] intel_vgpu_write+0x13a/0x180 [kvmgt]
[ 64.672850] vfio_mdev_write+0x20/0x30 [vfio_mdev]
[ 64.672853] vfio_device_fops_write+0x24/0x30 [vfio]
[ 64.672855] __vfs_write+0x37/0x160
[ 64.672857] ? apparmor_file_permission+0x18/0x20
[ 64.672859] ? security_file_permission+0x3b/0xc0
[ 64.672859] vfs_write+0xb8/0x1b0
[ 64.672860] SyS_pwrite64+0x95/0xb0
[ 64.672862] entry_SYSCALL_64_fastpath+0x1e/0xad
[ 64.672863] RIP: 0033:0x7fcc7a11ada3
[ 64.672864] RSP: 002b:00007fcc72d09780 EFLAGS: 00000293 ORIG_RAX:
0000000000000012
[ 64.672865] RAX: ffffffffffffffda RBX: 0000000000000000 RCX:
00007fcc7a11ada3
[ 64.672865] RDX: 0000000000000004 RSI: 00007fcc72d097b0 RDI:
0000000000000018
[ 64.672866] RBP: 00007fcc72d09a20 R08: 0000000000000004 R09:
00000000ffffffff
[ 64.672866] R10: 0000000000800000 R11: 0000000000000293 R12:
0000000000000000
[ 64.672867] R13: 00007ffe876cc91f R14: 00007fcc72d0a9c0 R15:
0000000000000000
[ 64.672867] ---[ end trace 2a0b9d00c86a2b17 ]---
Re: [iGVT-g] [Xen-devel] XenGT GPU virtualization
by Haozhong Zhang
Cc'ed to the mailing list of Intel graphic virtualization
[Sorry for the spam. The last cc failed as I didn't subscribe to igvt-g(a)lists.01.org]
On 02/23/17 12:36 +0000, Paul Durrant wrote:
> Hi,
>
> I’m not actually sure where the latest public release of the xengt code is. Perhaps someone from Intel can comment?
>
> Otherwise, if you grab the source ISOs from xenserver.org you can look in the SRPM for xengt. The xengt kernel module is responsible for auditing and servicing the GPU commands from guests.
>
> Cheers,
>
> Paul
>
>
> From: bharat gohil [mailto:ghl.bhrt@gmail.com]
> Sent: 23 February 2017 12:30
> To: Paul Durrant <Paul.Durrant(a)citrix.com>
> Cc: Anshul Makkar <anshul.makkar(a)citrix.com>; xen-devel(a)lists.xenproject.org
> Subject: Re: [Xen-devel] XenGT GPU virtualization
>
> Thanks paul and anshul
> Can you guys point out the source code which audits the GPU commands?
>
> Thanks
> Bharat
>
> On Mon, Feb 20, 2017 at 9:01 PM, Paul Durrant <Paul.Durrant(a)citrix.com<mailto:Paul.Durrant@citrix.com>> wrote:
> No, that’s not correct. The GPU commands are whitelisted and only the commands that can be audited are handled.
>
> Paul
>
> From: Xen-devel [mailto:xen-devel-bounces@lists.xen.org<mailto:xen-devel-bounces@lists.xen.org>] On Behalf Of anshul makkar
> Sent: 20 February 2017 15:16
> To: bharat gohil <ghl.bhrt(a)gmail.com<mailto:ghl.bhrt@gmail.com>>; xen-devel(a)lists.xenproject.org<mailto:xen-devel@lists.xenproject.org>
> Subject: Re: [Xen-devel] XenGT GPU virtualization
>
>
>
>
> On 18/01/17 13:21, bharat gohil wrote:
> Hello
>
> I am new to GPU and GPU virtualization and found that Xen supports Intel GPU virtualization using XenGT.
> I want to know:
> 1) What are the critical GPU commands passed from Xen to Dom0?
> 2) How does the Dom0 mediator or Xen validate the GPU commands passed from the domU GPU driver?
> 3) If one of the domU guests sends a bad (malicious) command that leads the GPU into a bad state, can the Dom0 mediator or Xen prevent this kind of scenario?
> As far as I know, there is no mediation to check the commands. Xen does audit the target address space, but not the GPU commands.
>
> --
> Regards,
> Bharat Gohil
>
>
>
>
> _______________________________________________
>
> Xen-devel mailing list
>
> Xen-devel(a)lists.xen.org<mailto:Xen-devel@lists.xen.org>
>
> https://lists.xen.org/xen-devel
>
>
>
>
> --
> Regards,
> Bharat Gohil
> Sr.Software Engineer
> bharat.gohil(a)harman.com<mailto:bharat.gohil@harman.com>
> +919427054633
> _______________________________________________
> Xen-devel mailing list
> Xen-devel(a)lists.xen.org
> https://lists.xen.org/xen-devel
Re: [iGVT-g] [Xen-devel] XenGT GPU virtualization
by Haozhong Zhang
Cc'ed to the mailing list of Intel graphic virtualization
[Sorry for the spam. The last cc failed as I didn't subscribe to igvt-g(a)lists.01.org]
On 01/18/17 18:51 +0530, bharat gohil wrote:
> Hello
>
> I am new to GPU and GPU virtualization and found that Xen supports Intel GPU
> virtualization using XenGT.
> I want to know:
> 1) What are the critical GPU commands passed from Xen to Dom0?
> 2) How does the Dom0 mediator or Xen validate the GPU commands passed
> from the domU GPU driver?
> 3) If one of the domU guests sends a bad (malicious) command that leads
> the GPU into a bad state, can the Dom0 mediator or Xen prevent this kind of scenario?
>
> --
> Regards,
> Bharat Gohil
> _______________________________________________
> Xen-devel mailing list
> Xen-devel(a)lists.xen.org
> https://lists.xen.org/xen-devel
no iommu_group found
by Nick S
I was able to compile the kernel and qemu based on the guide (
https://github.com/01org/gvt-linux/wiki/GVTg_Setup_Guide#21-operating-sys...).
When I try to add a virtual adapter to my old OVMF-based virtual
machine, I get the following error:
qemu-system-x86_64: -device
vfio-pci,sysfsdev=/sys/bus/pci/devices/0000:00:02.0/a297db4a-f4c2-11e6-90f6-d3b88d6c9525:
vfio error: a297db4a-f4c2-11e6-90f6-d3b88d6c9525: no iommu_group found: No
such file or directory
The /sys/bus/pci/devices/0000:00:02.0/a297db4a-f4c2-11e6-90f6-d3b88d6c9525
directory gets created, and intel_iommu=on was added to the kernel arguments
to enable those groups (I got the same error before adding it). There is no
iommu_group subdirectory under a297db4a-f4c2-11e6-90f6-d3b88d6c9525, and I
believe this is where vfio-pci is looking.
CPU: Intel(R) Core(TM) i7-6600U CPU @ 2.60GHz
/sys/bus/pci/devices/0000:00:02.0/iommu_group ->
../../../kernel/iommu_groups/1/
ll /sys/kernel/iommu_groups/
total 0
drwxr-xr-x 9 root root 0 Feb 23 14:21 ./
drwxr-xr-x 11 root root 0 Feb 23 11:40 ../
drwxr-xr-x 3 root root 0 Feb 23 14:21 0/
drwxr-xr-x 3 root root 0 Feb 23 14:20 1/
drwxr-xr-x 3 root root 0 Feb 23 14:21 2/
drwxr-xr-x 3 root root 0 Feb 23 14:21 3/
drwxr-xr-x 3 root root 0 Feb 23 14:21 4/
drwxr-xr-x 3 root root 0 Feb 23 14:21 5/
drwxr-xr-x 3 root root 0 Feb 23 14:21 6/
Any idea what I may be missing?
Thank you!
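For anyone hitting the same thing, a small check I'd use to confirm whether a device (or mdev) node has its iommu_group link (a sketch; the sysfs layout is assumed from the listings above):

```shell
# Sketch: print the iommu group a device belongs to by resolving its
# iommu_group symlink, or fail loudly when the link is missing -- the
# exact situation described in this thread.
iommu_group_of() {
  local dev="$1"
  if [ -e "$dev/iommu_group" ]; then
    basename "$(readlink -f "$dev/iommu_group")"
  else
    echo "no iommu_group for $dev" >&2
    return 1
  fi
}
```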
KVMGT and kernel 4.10
by Mathieu Maret
Hi,
I would like to test KVMGT with the latest upstream kernel, 4.10.
I built the KVMGT 2016Q3 release and got the following error:
'kvm version too old'
So I tried to re-apply the KVMGT patches on top of QEMU/KVM 2.8.0, but without success.
Is there a particular way to do that?
Thanks,
Mathieu
Code 43 with upstream
by Alex Ivanov
Decided to try the current upstream GVT-g and failed. Please help. Here are my
steps:
1. Installed 4.10-rc7 kernel with following options enabled:
CONFIG_VFIO_MDEV m
CONFIG_VFIO_MDEV_DEVICE m
CONFIG_DRM_I915_GVT y
CONFIG_DRM_I915_GVT_KVMGT m
2. Added i915.enable_gvt=1 to kernel arguments (also tried to add
intel_iommu=igfx_off and i915.hvm_boot_foreground=1)
$ lsmod | grep kvmgt
kvmgt 20480 1
mdev 20480 2 kvmgt,vfio_mdev
vfio 24576 3 vfio_iommu_type1,kvmgt,vfio_mdev
drm 299008 10 kvmgt,i915,drm_kms_helper
kvm 507904 2 kvm_intel,kvmgt
$ ls /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/
i915-GVTg_V5_1 i915-GVTg_V5_2 i915-GVTg_V5_4
3. Created VGPU
$ echo "894f3983-1a36-42b3-b52c-1024aca216be" >
"/sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/i915-GVTg_V5_1/create"
$ ls /sys/bus/pci/devices/0000:00:02.0/894f3983-1a36-42b3-b52c-1024aca216be
driver iommu_group mdev_type power remove subsystem uevent
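As a cross-check of step 3, something like this can confirm what the device offers and whether a type still has capacity; a sketch assuming the mdev sysfs layout shown above (available_instances is the standard mdev attribute):

```shell
# Sketch: list the supported vGPU types under a GVT-capable device, and
# read a type's remaining capacity from its available_instances attribute.
# The sysfs root is passed in so this can be tested against a mock tree.
vgpu_types() {
  ls "$1/mdev_supported_types"
}
vgpu_available() {
  cat "$1/mdev_supported_types/$2/available_instances"
}
```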
4. Tried both current qemu master, built per the instructions, and qemu 2.7
with the run script from the instructions.
The result is that qemu reports
qemu-system-x86_64: vfio-pci: Cannot read device rom at
894f3983-1a36-42b3-b52c-1024aca216be
Device option ROM contents are probably invalid (check dmesg).
Skip option ROM probe with rombar=0, or load from file with romfile=
I then connect via VNC, and the driver in the Windows 10 guest always reports
"Code 43".
The only new kernel messages on the host are:
[ 66.125104] [drm:intel_vgpu_emulate_mmio_write [i915]] *ERROR* gvt:
vgpu1: write untracked MMIO 7139c len 4 val 0
[ 66.125141] [drm:intel_vgpu_emulate_mmio_write [i915]] *ERROR* gvt:
vgpu1: write untracked MMIO 4 len 4 val 0
[ 66.125186] [drm:intel_vgpu_emulate_mmio_write [i915]] *ERROR* gvt:
vgpu1: write untracked MMIO 4 len 4 val 0
[ 66.125224] [drm:intel_vgpu_emulate_mmio_write [i915]] *ERROR* gvt:
vgpu1: write untracked MMIO 4 len 4 val 0
[ 66.125254] [drm:intel_vgpu_emulate_mmio_write [i915]] *ERROR* gvt:
vgpu1: write untracked MMIO 7239c len 4 val 0
[ 376.512758] [drm:intel_vgpu_emulate_mmio_write [i915]] *ERROR* gvt:
vgpu1: write untracked MMIO 7139c len 4 val 0
[ 376.512810] [drm:intel_vgpu_emulate_mmio_write [i915]] *ERROR* gvt:
vgpu1: write untracked MMIO 4 len 4 val 0
[ 376.512854] [drm:intel_vgpu_emulate_mmio_write [i915]] *ERROR* gvt:
vgpu1: write untracked MMIO 4 len 4 val 0
[ 376.512894] [drm:intel_vgpu_emulate_mmio_write [i915]] *ERROR* gvt:
vgpu1: write untracked MMIO 4 len 4 val 0
[ 376.512923] [drm:intel_vgpu_emulate_mmio_write [i915]] *ERROR* gvt:
vgpu1: write untracked MMIO 7239c len 4 val 0
[ 489.535048] [drm:intel_vgpu_emulate_mmio_write [i915]] *ERROR* gvt:
vgpu1: write untracked MMIO 7139c len 4 val 0
[ 489.535084] [drm:intel_vgpu_emulate_mmio_write [i915]] *ERROR* gvt:
vgpu1: write untracked MMIO 4 len 4 val 0
[ 489.535126] [drm:intel_vgpu_emulate_mmio_write [i915]] *ERROR* gvt:
vgpu1: write untracked MMIO 4 len 4 val 0
[ 489.535164] [drm:intel_vgpu_emulate_mmio_write [i915]] *ERROR* gvt:
vgpu1: write untracked MMIO 4 len 4 val 0
[ 489.535192] [drm:intel_vgpu_emulate_mmio_write [i915]] *ERROR* gvt:
vgpu1: write untracked MMIO 7239c len 4 val 0
[ 1188.457177] [drm:intel_vgpu_emulate_mmio_write [i915]] *ERROR* gvt:
vgpu1: write untracked MMIO 7139c len 4 val 0
[ 1188.457212] [drm:intel_vgpu_emulate_mmio_write [i915]] *ERROR* gvt:
vgpu1: write untracked MMIO 4 len 4 val 0
[ 1188.457255] [drm:intel_vgpu_emulate_mmio_write [i915]] *ERROR* gvt:
vgpu1: write untracked MMIO 4 len 4 val 0
[ 1188.457291] [drm:intel_vgpu_emulate_mmio_write [i915]] *ERROR* gvt:
vgpu1: write untracked MMIO 4 len 4 val 0
[ 1188.457320] [drm:intel_vgpu_emulate_mmio_write [i915]] *ERROR* gvt:
vgpu1: write untracked MMIO 7239c len 4 val 0
GVT-g upstream status introduction
by Wang, Hongbo
Hi all,
GVT-g (Intel Graphics Virtualization Technology) development is in a
transition period in 2017Q1, moving from an off-tree project to an
upstreamed project. Starting from kernel 4.10, you will find our GVT-g code
in the mainline tree.
During this period, there could be some confusion regarding repos, branches,
mailing lists, etc., since two "projects" are running in parallel. I'd like to
use this email/blog to introduce the background, the current status, our plan,
and the logistics.
Background
---------------------------------------------------------
We initiated the vGVT idea (mediated pass-through virtualization for the GPU)
as early as 2011, then did a Proof of Concept (PoC) on the Intel SandyBridge
platform; later, XenGT was officially supported on the Haswell platform (4th
generation Intel Core(TM) processors). In April 2014, Intel branded Intel(r)
Graphics Virtualization Technology (Intel(r) GVT), an umbrella name covering
3 different architectures and approaches for GPU virtualization: GVT-s for
API forwarding, GVT-d for pass-through, and GVT-g for mediated pass-through.
In December 2014, we launched the first KVMGT release to support GPU
virtualization on the KVM hypervisor.
As time has gone on, the GVT-g project scope has expanded from XenGT to
KVMGT, and supported platforms from client machines to server machines. We
also see more and more usages coming up, like Virtual Desktop
Infrastructure (VDI), Media Cloud, In-Vehicle Infotainment (IVI), and so on.
Community contribution and involvement are the key to the project's success;
that's why we open-sourced the GVT-g project from the very beginning and set
upstreaming as our goal on the first day we kicked off this project. In the
eight months from April 2016 to December 2016, we worked with the Intel i915
team and the outside community, redesigned the GVT-g architecture, refactored
and rewrote the code, and finally upstreamed 16K lines of code to Linux
kernel 4.10. With Linux kernel 4.10 releasing very soon, you will see all
GVT-g code in the Linux kernel under drivers/gpu/drm/i915/gvt.
Transition Period Plan
---------------------------------------------------------
In 2017Q1, you will see 2 releases, 2 repos, and even 2 mailing lists for the
GVT-g project. We plan to complete a last community release based on the old
architecture and then freeze that old repo; after that, all development and
release work will shift to the new upstream repo, which has the new
architecture design.
Comparison between old "off-tree" GVT-g and new "upstream" GVT-g
---------------------------------------------------------
"Off-tree" version GVT-g
First public release date XenGT: Apr. 2014
KVMGT: Dec. 2014
Last public release date Feb. 2017
First Version Kernel: 3.14
Xen: 4.3
QEMU: 1.3
Last Version Kernel: 4.3
Xen: 4.6
QEMU: 2.3
Repo Kernel: https://github.com/01org/igvtg-kernel
Xen: https://github.com/01org/igvtg-xen
QEMU: https://github.com/01org/igvtg-qemu
Maillist for user igvt-g(a)lists.01.org
Maillist for developer igvt-g(a)lists.01.org
Architecture: In Dom0/host, GVT-g module will be the one to talk and
manipulate GPU hardware directly, both Guest VM graphics workloads and
Host i915 graphics driver need to go through GVT-g module.
Size (Line of Code): ~35K
"Upstream" version GVT-g
First public release date Feb. 2017
Last public release date ---
First Version Kernel: 4.10
Xen: 4.7
QEMU: 2.8.50
Last Version --
Repo Kernel: https://github.com/01org/gvt-linux.git
Xen: http://xen.org/
QEMU: git://git.qemu.org/qemu.git
Maillist for user igvt-g(a)lists.01.org
Maillist for developer intel-gvt-dev(a)lists.freedesktop.org
Architecture: In Dom0/host, i915 graphics driver will be the one to talk and
manipulate GPU hardware directly, GVT-g module will work as Dom0/Host
i915 driver's client. All Guest VM graphics workloads are handled by GPU-g
module first, then GVT-g mediator submits them to GPU hardware through
Dom0/Host i915 driver.
Size (Line of Code): ~16K
GVT-g future release and support plan
---------------------------------------------------------
We strongly suggest that community users download the GVT-g code from
kernel.org (version 4.10 onward) to try out GVT-g; please submit any issues
through bugs.freedesktop.org. Comments and suggestions are also welcome
on our mailing list igvt-g(a)lists.01.org.
If you want to look at the latest GVT-g features that are under development
but not yet upstreamed, we maintain a "gvtg-staging" development branch
hosting the latest GVT-g code, including upstreamed patches, work-in-progress
features, and the latest bug fixes. Community users can always get the latest
GVT-g code from this branch. At the same time, we will send regular pull
requests to the Intel i915 driver maintainer to merge GVT-g code into i915,
and i915 then merges into the kernel mainline.
In order to speed up GVT-g adoption by the community, we'll continue to
provide GVT-g stable releases as references aligned with each key kernel
version, for example 4.10, 4.11, etc.
The branches and release plan can be illustrated as below:
| 4.10 | 4.11 | 4.12 | 4.13 |
Linux Kernel mainline
-------------------------------------------------------------------------------
| | |
/|\ ... /|\ ... /|\
| | |
Intel i915 branch
----------------------------------------------------------------------------------
| | |
/|\ ... /|\ ... /|\
| | |
gvtg_staging
--------------------------------------------------------------------------------------------
(latest code) \ \ \
\ \ \
\------gvtg-stable-4.10 \
\ \
\---------gvtg-stable-4.11
\
\------gvtg-stable-4.12
Note: GVT-g repo: https://github.com/01org/gvt-linux.git
END
---------------------------------------------------------
Our GVT-g website will remain unchanged: https://01.org/igvt-g
Any feedback and comments are welcome; feel free to let us know!
Best regards.
Hongbo
Tel: +86-21-6116 7445
MP: +86-1364 1793 689
Mail: hongbo.wang(a)intel.com
i915.enable_cmd_parser=0?
by yourbestfriend@openmailbox.org
Hi.
What does i915.enable_cmd_parser=0 mean, and why is this parameter added in
the iGVT-g setup?
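Not an answer, but for context: the live value of the parameter can be read back from sysfs. A sketch (the parameter path is an assumption, and the directory argument exists only so it can be tested against a mock tree):

```shell
# Sketch: read an i915 module parameter's current value from sysfs.
# On a real system the base would be /sys/module/i915/parameters.
i915_param() {
  local base="${2:-/sys/module/i915/parameters}"
  cat "$base/$1"
}
```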
Thanks,
Alex
Looking for a way to share big buffer (>8MB) across different domains without data copy
by Dongwon Kim
Hi all,
I actually asked the same question on "xen-users", but just in case I have
better luck, I am posting it here as well.
I am looking for a way to get pages (frames) shared across different domains.
I saw that using a grant table is the standard way to do it, by assigning
references for the pages to be shared and passing them to the remote domain.
However, I also saw that there is a limitation on the size of the grant
table, meaning we can share only a certain number of frames (e.g. 32).
The buffer I want to share is way bigger than this (~8MB), so I think
this is a pretty big blocker for us. Am I understanding this size
limitation correctly?
If this restriction on the number of frames is something that can't easily
be removed, I would like to know if there is any alternative way
to achieve our goal (sharing a big buffer of more than 2000 pages).
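For scale, the "more than 2000 pages" figure follows directly from the buffer size, assuming the usual 4 KiB page size:

```shell
# An 8 MB buffer divided into 4 KiB pages
pages=$(( 8 * 1024 * 1024 / 4096 ))
echo "$pages"   # -> 2048
```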
Thanks,
DW