Topic 1: Yocto Architecture for VirtIO FEDD Integration (Mikhail Golubev)
Some SoC vendors' BSP kernels do not support the Yocto kernel tooling, so virtio.scc may not be included.
It is better to maintain the front-end drivers and the kernel configuration in AGL itself.
The AGL release candidate branch will be cut in January.
General idea: work happens on master, but recipes (patches and Yocto layer only) → sandbox; source code → a code repository.
Jerry (Jiancong Zhao) needs to check with Walt Miner about applying for a new repository (JIRA ticket).
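If a vendor BSP kernel does not pull in virtio.scc, the front-end configuration can be carried in an AGL layer instead. A minimal sketch of what that could look like (the fragment contents use real kernel Kconfig symbols, but the bbappend path and file layout are illustrative, not the actual AGL layout):

```
# virtio.cfg - kernel config fragment enabling common VirtIO front ends
CONFIG_VIRTIO=y
CONFIG_VIRTIO_PCI=y
CONFIG_VIRTIO_MMIO=y
CONFIG_VIRTIO_BLK=y
CONFIG_VIRTIO_NET=y
CONFIG_VIRTIO_CONSOLE=y
CONFIG_VIRTIO_INPUT=y
CONFIG_DRM_VIRTIO_GPU=y
```

```
# linux-%.bbappend in an AGL layer (path illustrative)
FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
SRC_URI += "file://virtio.cfg"
```

This works with linux-yocto style recipes that merge config fragments; for BSP kernels without that tooling, the equivalent options would have to be set in the vendor defconfig.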
How can we brainstorm which kinds of VirtIO devices are necessary for AGL?
Method 1: invite IVI and IC experts from other EGs to join the discussion
Email on the mailing list is okay
Inter-EG activities are handled in the SAT
Announcing & inviting in the Dev Call is also possible (reaches the most people)
Method 2: invite GENIVI AVPS members to share their progress on the automotive VirtIO discussion
What is our policy for deciding which devices should be integrated into AGL?
Option 1 (most strict): only if the device has been upstreamed and officially published in the OASIS specification
Publication of the OASIS specification will be the trigger for AGL VirtIO porting activities
Option 2 (moderate): if the device has been upstreamed and has a concrete plan to be included in the OASIS specification in the near future
Option 3 (most loose): any device on which Virtualization EG members have consensus
Discussion:
Upstreamed drivers are mandatory
Depending on the AGL kernel version, some new devices may not be included; backporting from a newer kernel is then needed (and can cause trouble in some circumstances) → discuss whether they can be integrated in-tree or out-of-tree
New device drivers (not yet upstreamed) are in a similar situation
Conclusion: depending on the maturity of the VirtIO device drivers, we can put them into different places → all VirtIO drivers can be integrated this way
Recipes: put new drivers into meta-agl-devel for development and testing; integrate stable drivers into meta-agl
Source code: staging (new drivers) & src (stable drivers)
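As an illustrative sketch of the staging flow (the recipe name, repository URL, and layer path below are hypothetical, not actual AGL artifacts): a not-yet-upstreamed front-end driver could start as an out-of-tree kernel module recipe in meta-agl-devel, then move to meta-agl once it is stable or upstreamed:

```
# meta-agl-devel/.../recipes-kernel/virtio-foo/virtio-foo_git.bb (hypothetical)
SUMMARY = "Out-of-tree VirtIO foo front-end driver (staging)"
LICENSE = "GPL-2.0-only"
LIC_FILES_CHKSUM = "file://COPYING;md5=<checksum>"

# module.bbclass builds an out-of-tree kernel module against the AGL kernel
inherit module

SRC_URI = "git://example.com/virtio-foo.git;branch=master;protocol=https"
SRCREV = "${AUTOREV}"
S = "${WORKDIR}/git"
```

Once the driver is accepted upstream, the recipe can be dropped and the in-tree driver enabled through a kernel config fragment instead.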
Topic 2: OpenSynergy Plans for ALS VirtIO PoC & AGL KK Contribution (Mikhail Golubev)
OpenSynergy will distribute the PoC after the KK contribution as a reference implementation of VirtIO.
Jan-Simon has suggested distribution methods such as "accepting terms", which is already used by some AGL members.
Topic 3: ALS Presentation & AGL KK VirtIO Porting Progress Check
- Linaro: Jellyfish
a) KVM works fine → working fine on the real board
b) Xen Dom0 works fine; some trouble with DomU (okay on QEMU, but still some ongoing issues on hardware). Xen VirtIO implementation → only the virtio-blk device is supported
- OpenSynergy:
Mikhail Golubev: please upload the slides presented here. On schedule for the KK VirtIO porting
Only the virtio-input multi-touch patch is still unavailable, because it has not been upstreamed into the Linux kernel mainline yet (waiting for acceptance on LKML)
Yocto machine: merged into AGL
Question: demo image (set-up) → hypervisor distribution: Mikhail Golubev to check internally → QEMU can also be considered. Victor Duan to tell us where we can find the implementation of Linaro's ALS QEMU demo
ALS image is the qemuarm64 JJ image from the website
The PoC will be ready by April and will be available for download on request from the AGL websites
The back end and HV are proprietary, but the front end (including the VirtIO drivers) is open and can be customized
→ Announce the new VirtIO feature and reference implementation in KK at the AB and AMM(?) Walt Miner → after moving to meta-agl, it can be announced; a draft of two/three sentences is needed
Jan-Simon Moeller to create a new category for the virt-EG to write documentation about VirtIO
Collaboration proposal from RHSA-EG (Mazda is the leader)
Present the reference implementation PoC in AMM
What is the deadline for the CFP?
Workshops for discussing VirtIO are planned to be held after the new year (rough plan):
1st Open Discussion Workshop: Optimization of VirtIO GPU 3D (how to achieve zero-copy in GPU 3D mode)
2nd Open Discussion Workshop: What devices need to be virtualized for a virtual AGL (what other VirtIO devices are needed for AGL)? TBD
Future EG Planning
1st Workshop (VirtIO GPU 3D topic) → Jerry (Jiancong Zhao) will email to invite community members (especially GPU and Mesa/virgl experts) to join
normal EG session
TBD: 2nd Workshop (necessary virtual devices for AGL) → Jerry (Jiancong Zhao) will email to invite community members (especially GPU and VirtIO experts) to join
Walt Miner: please share the Google Doc link to the past EG white paper.
Last but not least:
Thanks so much to all the EG members for the great work in 2020! Happy New Year!
Jan 20, 2021
Attendees:
Jerry Zhao - Panasonic
Jan-Simon Moeller - Linux Foundation
Scott Murray - Konsulko
Mikhail Golubev - Open Synergy
Vasyl Vavrychuk - OpenSynergy
Andriy Tryshnivsky - OpenSynergy
Laurent Cremmer - Carmeq
Mark Silberberg - Volkswagen
Victor Duan - Linaro
Alex Bennée (Stratos Tech Lead) - Linaro
Peter Griffin (Multimedia Tech Lead) - Linaro
Tadao Tanikawa - Panasonic
Binghua - Qualcomm
Marius Vlad - Collabora
Kenji Hosokawa - ADIT
Harunobu Kurokawa - Renesas
Masahiro Hasegawa - Renesas
Tomeu Vizoso
Venkata Ramalinga Prasad Tadepalli
Parag Borkar - OpenSynergy
Agenda:
1st VirtIO Workshop: VirtIO GPU-3d Performance - How to achieve Zero-Copy with AGL
Does VirtIO GPU support zero-copy in QEMU, crosvm, etc.?
crosvm uses host allocations; is that a requirement for something?
Progress in crosvm is ongoing and related to Vulkan
crosvm uses virtio-pci (to expose host memory to the guest, which also avoids memory copies)
Is there any standard solution/platform available to the AGL community to solve this zero-copy problem?
Tomeu: Collabora has a Chromebook project (more focused on gaming) that exposes host memory to the guest
Alex: not using virtio-pci may be the limitation (Kiran explained why it is not a limitation → with 64-bit PCI addressing the windows are more than large enough)
Conclusion: there's no good solution available at the moment for zero-copy mechanism for automotive use case
Note: sometimes, copying smaller buffers (< 4k) is faster than zero-copy
Does virglrenderer support zero-copy of OpenGL buffers (vertex buffers, element buffers, etc.)?
There are some mechanisms in virglrenderer; no copy happens (although some limitations may exist)
EXT_image_dma_buf_import fails with EGL_BAD_ACCESS
Tomeu: The reason is that platform-specific stride requirements are not propagated to the guest. A new IOCTL exists to propagate stride info to the guest, but it is not mainlined yet.
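The stride problem is visible at the import call itself: EGL_BAD_ACCESS is typically raised when the per-plane pitch passed to the EGL_EXT_image_dma_buf_import extension does not match what the host driver actually requires. A C-style sketch of the import (not a complete program; `dpy`, `fd`, `width`, `height`, and `stride` are assumed to come from the application, and error handling is omitted):

```
/* Import a dma-buf as an EGLImage via EGL_EXT_image_dma_buf_import.
 * If `stride` does not match the platform's real pitch requirement
 * (which may not have been propagated to the guest), the call fails
 * with EGL_BAD_ACCESS. */
EGLAttrib attrs[] = {
    EGL_WIDTH,                     width,
    EGL_HEIGHT,                    height,
    EGL_LINUX_DRM_FOURCC_EXT,      DRM_FORMAT_ARGB8888,
    EGL_DMA_BUF_PLANE0_FD_EXT,     fd,
    EGL_DMA_BUF_PLANE0_OFFSET_EXT, 0,
    EGL_DMA_BUF_PLANE0_PITCH_EXT,  stride,  /* bytes per row: the critical value */
    EGL_NONE
};
EGLImage img = eglCreateImage(dpy, EGL_NO_CONTEXT,
                              EGL_LINUX_DMA_BUF_EXT, NULL, attrs);
/* On failure, eglGetError() returning EGL_BAD_ACCESS often indicates
 * a stride/pitch mismatch between guest and host. */
```

Until the stride-propagation IOCTL is mainlined, the guest can only guess the pitch, which is why the import fails on platforms with non-obvious alignment requirements.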
Discussion on two methods to achieve zero-copy: dynamic vs dedicated heap
GPU/CPU cache coherency challenges
Host allocation (share hostmem with guest) vs Guest allocation
ChromeOS uses dynamic hostside allocation
does host allocation and sharing introduce security concerns?
(over to Parag, OpenSynergy)
Dynamic vs Dedicated hostside allocation
which parts of host memory need to be exposed to the guest
the host also knows which memory the real HW can see
Dedicated GPU heap (VRAM):
Idea: have a dedicated inter-VM shared memory area; the guest requests buffer parameters from the host, allocates objects in the shared memory, and "sends" the objects to the host (physical addresses)
What needs to be done:
virtio-gpu kernel driver adaptations → allocations from VRAM will be needed as well
Mesa 3D adaptations?
virgl adaptations
Can VFIO on Intel be applied to the zero-copy case?
Which address space are buffers in? Can we use IOMMU for that?
An IOMMU is a must-have to defend against DMA attacks
Intel pioneered VFIO; Huawei developed the so-called "warpdrive" technology
François-Frédéric Ozog to check whether the Huawei engineer would like to give a talk on "warpdrive" in the Virtualization EG