<cole-h>
qyliss: I'll review that patchset (and the previous crosvm patch) tomorrow or Wednesday. Been a little busy :P
TheJollyRoger has quit [Remote host closed the connection]
TheJollyRoger has joined #spectrum
<ncm[m]>
Wondering if the Vulkan API is reasonable as an interface boundary between guests and host. I.e. guests' Vulkan calls get translated to corresponding ops by host in its real Vulkan impl, without risk of exposing host's or other guests' pixels, events, or memory.
cole-h has quit [Ping timeout: 252 seconds]
<puck>
ncm[m]: i think OpenGL on top of vulkan is still a bit wobbly, but apparently virtio-gpu vulkan has merged into mesa recently
<qyliss>
I don't know enough about GPU stuff to know if this is the Vulkan API as the interface boundary, like ncm[m] says. Is it?
<puck>
it serializes vulkan commands over virtio-gpu
<qyliss>
oh neat
<puck>
like, virgl afaict just serializes vulkan/opengl over a ~byte stream
<qyliss>
oh, so it doesn't do any interpreting of vulkan at all?
<puck>
well, it has to understand all the API calls to know how to serialize, but it just passes it through, i think
<JJJollyjim>
virgl and virtio-gpu are the same right
<puck>
virgl is the protocol used for 3d, which is then run over virtio-gpu
<puck>
i think there's some 2d acceleration in virtio-gpu too, but not many people use that, obvs
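To make the "serializes commands over a ~byte stream" idea concrete, here is a minimal toy sketch of the general shape: the guest encodes API calls as opcode-plus-payload words into a buffer, and the host decodes and replays them against its real driver. The opcodes, names, and layout below are invented for illustration and are not the actual virgl/Venus wire format.

    // Toy command-stream encoder, illustrating the general shape of
    // "guest serializes API calls, host decodes and replays them".
    // Opcodes and layout are made up; not the real virgl/Venus protocol.

    #[repr(u32)]
    enum Op {
        CreateBuffer = 1,
        UploadData = 2,
        Draw = 3,
    }

    struct CmdStream {
        buf: Vec<u32>, // what would travel over the virtio-gpu queue
    }

    impl CmdStream {
        fn new() -> Self {
            CmdStream { buf: Vec::new() }
        }

        // Each command is encoded as: opcode, payload length, payload words.
        fn emit(&mut self, op: Op, payload: &[u32]) {
            self.buf.push(op as u32);
            self.buf.push(payload.len() as u32);
            self.buf.extend_from_slice(payload);
        }
    }

    fn main() {
        let mut cs = CmdStream::new();
        cs.emit(Op::CreateBuffer, &[/* handle */ 7, /* size */ 4096]);
        cs.emit(Op::UploadData, &[7, /* offset */ 0, /* len */ 12]);
        cs.emit(Op::Draw, &[/* vertex count */ 3]);
        // The guest driver would hand cs.buf to the virtio-gpu device;
        // the host side walks it, validates handles, and issues the
        // corresponding real GL/Vulkan calls.
        println!("encoded {} words", cs.buf.len());
    }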
<puck>
now, i wonder if one could take dxvk and virgl, package it up, and run windows in virtio-gpu
<JJJollyjim>
hmm running dxvk in the guest? interesting
<puck>
i wonder how well the isolation of virtio-gpu works
<JJJollyjim>
yeah, presumably at the very least you’re opening up the host’s mesa interface as an attack surface
<JJJollyjim>
but also you have code execution on a second processor now
<JJJollyjim>
which feels scary
<JJJollyjim>
idk how well-isolated gpu things are in general
<qyliss>
not well aiui
<JJJollyjim>
yeah :/
<puck>
i think modern GPUs might have decent-ish isolation
<puck>
like, the iGVT-G and vGPU stuff basically works by swapping out the entire GPU context; the big question marks imo are memory isolation between e.g. textures, and full stalls (i've had some interesting issues that ended up just freezing and causing a timeout, tho i'm not sure that actually ended up interfering with any other vGPU)
<puck>
clearly the solution here is to get a Larrabee, and make it work, somehow
<JJJollyjim>
i’m also not 100% sure what the situation is with host memory access from the gpu
<JJJollyjim>
i guess things are somewhat protected by the iommu now?
<puck>
ooh, good question on this actually
<puck>
the whole GPU should be IOMMUd, i think
<ncm[m]>
So, seems like if the guest supplies code to be run on the real GPU, in the host env, something needs to ensure that code operates only on data that should be visible to the guest -- either by inspection of the code, or by hardware restricting what that code is allowed to touch.
<ncm[m]>
I guess the latter would be the IOMMU.
<JJJollyjim>
the iommu won’t see accesses across VM boundaries in the gpu though
<JJJollyjim>
hopefully there is similar protection implemented internally
<JJJollyjim>
gpu architecture is frustratingly opaque :(
<ncm[m]>
If the IOMMU can perform translations from addresses in the supplied code to correct addresses in the host memory, it ought to also be able to trap wild accesses. But does it map accesses to video memory by the GPU, or only to regular RAM?
<ncm[m]>
Anyway I guess they must have worked all this out.
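As a rough sketch of the trapping behaviour being discussed above (purely illustrative; real IOMMUs use per-device domains and multi-level page tables, and whether GPU-local VRAM accesses go through the IOMMU at all is exactly the open question), device DMA only succeeds if the address falls inside a mapping the host has programmed for that device:

    // Toy model of IOMMU address translation: a per-device list of
    // mappings from device-visible (IOVA) ranges to host-physical ranges.
    // Anything outside a programmed mapping faults instead of reaching
    // memory. Real IOMMUs use page tables and domains; this is just the idea.

    struct Mapping {
        iova: u64,
        hpa: u64,
        len: u64,
    }

    struct Iommu {
        mappings: Vec<Mapping>,
    }

    impl Iommu {
        fn translate(&self, iova: u64) -> Result<u64, &'static str> {
            for m in &self.mappings {
                if iova >= m.iova && iova < m.iova + m.len {
                    return Ok(m.hpa + (iova - m.iova));
                }
            }
            Err("IOMMU fault: access outside mapped ranges")
        }
    }

    fn main() {
        // Host grants the device access to one 4 KiB guest buffer only.
        let iommu = Iommu {
            mappings: vec![Mapping { iova: 0x1000, hpa: 0x8000_0000, len: 0x1000 }],
        };

        // In-range DMA translates; a "wild" access faults.
        assert!(iommu.translate(0x1800).is_ok());
        assert!(iommu.translate(0x9999_0000).is_err());
        println!("in-range ok, wild access trapped");
    }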
MichaelRaskin has quit [Ping timeout: 240 seconds]