On 05/08/2018 08:01 PM, Alex Williamson wrote:
> On Tue, 8 May 2018 19:06:17 -0400
> Don Dutile <ddutile(a)redhat.com> wrote:
>> On 05/08/2018 05:27 PM, Stephen Bates wrote:
>>> As I understand it, VMs need to know because VFIO passes IOMMU
>>> grouping up into the VMs. So if an IOMMU grouping changes, the VM's
>>> view of its PCIe topology changes. I think we even have to be
>>> cognizant of the fact that the OS running in the VM may not even
>>> support hot-plug of PCI devices.
>> Really? IOMMU groups are created by the kernel, so I don't know how
>> they would be passed into the VMs, unless indirectly via PCI(e)
>> layout. At best, twiddling w/ACS enablement (emulation) would cause
>> VMs to see different IOMMU groups, but again, VMs are not the
>> security point/level, the host/HV's are.
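(The host-side view being discussed here is directly visible in sysfs; a quick sketch of enumerating the kernel-created groups. Paths follow the standard sysfs layout, the actual group contents will differ per machine, and the directory may be absent entirely on a host without an IOMMU or inside a container.)

```shell
# Enumerate IOMMU groups the kernel has created (host-side view only;
# nothing here is visible to a guest).
groups_dir=/sys/kernel/iommu_groups
if [ -d "$groups_dir" ]; then
    for g in "$groups_dir"/*/; do
        [ -d "$g" ] || continue
        echo "group $(basename "$g"):"
        ls "${g}devices"
    done
else
    echo "no IOMMU groups exposed on this host"
fi
```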
> Correct, the VM has no concept of the host's IOMMU groups, only the
> hypervisor knows about the groups, but really only to the extent of
> which device belongs to which group and whether the group is viable.
> Any runtime change to grouping though would require DMA mapping
> updates, which I don't see how we can reasonably do with drivers,
> vfio-pci or native host drivers, bound to the affected devices. Thanks,
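For context on "viable": a group is only usable by VFIO when every device in it is either unbound or bound to a VFIO-compatible driver. The authoritative check is the VFIO_GROUP_GET_STATUS ioctl on the group fd; a rough host-side approximation via sysfs, using a hypothetical group number, might look like:

```shell
# Rough approximation of the VFIO group-viability rule: every member
# of the group must be unbound or bound to vfio-pci. Group 26 is a
# hypothetical example; adjust for a real host.
grp=26
viable=yes
if [ -d "/sys/kernel/iommu_groups/$grp/devices" ]; then
    for d in "/sys/kernel/iommu_groups/$grp/devices"/*; do
        if [ -e "$d/driver" ]; then
            drv=$(basename "$(readlink "$d/driver")")
        else
            drv=""                  # no driver bound: still viable
        fi
        case "$drv" in
            vfio-pci|"") ;;         # unbound or vfio-owned: ok
            *) viable=no ;;         # another host driver owns it
        esac
    done
else
    viable=unknown                  # group does not exist on this host
fi
echo "group $grp viable: $viable"
```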
A change in IOMMU groups would, or at least could, require a device remove/add
cycle to get an updated DMA mapping (yet-another-overused-term: the IOMMU
'domain').
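A remove/add cycle like that can be driven from the host through the standard PCI sysfs knobs (`remove` on the device, then a bus `rescan`), which forces the group to be re-evaluated when the device reappears. A sketch, dry-run by default since the writes are disruptive; the device address is a hypothetical example:

```shell
# Sketch of a host-side remove/rescan cycle for one PCI device.
# DRY_RUN=1 (the default) only prints what would be written, since the
# real writes tear the device down; dev is a hypothetical address.
dev=0000:01:00.0
DRY_RUN=${DRY_RUN:-1}

remove_and_rescan() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: echo 1 > /sys/bus/pci/devices/$dev/remove"
        echo "would run: echo 1 > /sys/bus/pci/rescan"
    else
        echo 1 > "/sys/bus/pci/devices/$dev/remove"
        echo 1 > /sys/bus/pci/rescan
    fi
}

remove_and_rescan
```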