Introduction

Discussion at AGL workshops and meetings during 2021 led to the proposal of dropping the existing demonstration application framework due to a combination of:

  • a lack of member interest in assisting in maintenance or using it as the starting point for evolving a common platform in upstream AGL.
  • its reliance on the SMACK Linux Security Module (LSM) framework, at least in the version upstreamed into AGL.
  • recognition that its WebSocket plus custom JSON API scheme is somewhat limited versus the current state of the art available via other FOSS projects, and that future evolution into a cloud-friendly stack would benefit from using technologies known to that ecosystem.

While there was potential for approaches such as attempting migration to a new version of the application framework components from IoT.bzh's upstream or doing parallel development in AGL's version to remove/replace the usage of SMACK with another LSM, it was decided that complete removal would instead be the starting point of future development.  Overall, starting from a clean slate seemed a better approach with respect to fostering reuse of external FOSS projects.

Current Status

As things stand, the Marlin 13.0 release will contain the following new development as an initial/interim replacement for the previous application framework:

  • A minimal application launcher daemon that exposes a D-Bus API for application discovery and start/stop of applications.  Application configuration is via .desktop files, as with a session manager / launcher in a Linux desktop distribution; a minimal example is shown after this list.  Application start up is done either by direct spawn or D-Bus activation (the latter is currently not used in the AGL demo platform).
  • The launcher and homescreen Qt demo applications have been reworked to use the new application launcher D-Bus API instead of the previous app-framework-main one.
  • The launcher and homescreen web app demo applications have been reworked to use a wrapping of the new application launcher D-Bus API inside a newly developed Web Application Framework (WAF) extension.
  • The settings, homescreen, and mediaplayer Qt demo applications have had their usage of the agl-service-network, agl-service-bluetooth, and agl-service-mediaplayer bindings replaced by shifting the ConnMan, BlueZ, and media player API abstractions into the existing libqtappfw library, using two new libraries (bluez-glib and connman-glib) refactored out of the agl-service-bluetooth and agl-service-network bindings.
  • The other Qt demo applications whose builds could feasibly be re-enabled by stubbing out their usage of now-removed bindings have been ported over and are included in builds to give the demo image a look more comparable to previous releases.
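
For illustration, a minimal .desktop file of the sort applaunchd consumes might look like the following; the application name and paths are hypothetical, and the keys are standard freedesktop.org Desktop Entry ones.

    [Desktop Entry]
    Type=Application
    Name=Media Player
    Comment=Hypothetical AGL demo application
    Icon=mediaplayer
    Exec=/usr/bin/mediaplayer
    Terminal=false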

Perceived Outstanding Requirements

Application Sandboxing

From EG discussions in 2021 it seems clear that some degree of application sandboxing in a replacement demo application framework is desirable.  One proposed scheme was adapting Flatpak into AGL, which has some appeal given it is achieving a degree of traction in the desktop Linux distribution space.  However, investigation into this approach indicates that the research and development effort required is likely beyond the level of commitment that AGL is able to invest.  Given that, leveraging systemd's sandboxing controls, somewhat in the fashion of the previous application framework, seems much more feasible, as does extending beyond that with some of the options available in current systemd.  The additional benefit of such an approach would be potential synergy with Toyota's announced desire to work with upstream AGL on integrating a systemd based replacement for the resource/task management scheme in their base system for the Production Readiness EG.
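
As a rough illustration, the drop-in below shows a few of the sandboxing directives current systemd provides that could plausibly be applied to a demo application's unit; the directives are real systemd options, but the selection and values here are only a hypothetical sketch, not a vetted policy.

    # /etc/systemd/system/mediaplayer.service.d/sandbox.conf (hypothetical)
    [Service]
    NoNewPrivileges=yes
    ProtectSystem=strict
    ProtectHome=yes
    PrivateTmp=yes
    PrivateDevices=yes
    RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6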

Service API Infrastructure

With the application framework removal for Marlin 13.0, it was accepted that the APIs previously provided by the various demo service bindings would be reevaluated with respect to:

  • what minimal set of services it makes sense for AGL to provide as a technology demonstration and for member demo enablement.
  • what IPC mechanism it makes sense to use for the services AGL does develop, both with regard to effort and to potential member interest as a useful technology demonstration.

On the effort side, a lot of initial focus has been on attempting re-use of available FOSS services and their IPC mechanisms.  In practice, that has led to using D-Bus for e.g. the new application launcher, and that has perhaps proven less than ideal for consumption in web applications and with respect to the potential for sandboxing application access.  It seems clear that there are several forms of service, such as audio mixer, radio, and vehicle signaling, that AGL likely needs to do some development on to enable member demos and serve as a potential starting point for member interest in upstream development.  From discussion in the 2021 workshops and EG meetings into 2022, there seems to be a rough consensus that grpc (grpc.io) is a reasonable framework to use as a basis for such service development.  A non-exhaustive list of the rationale for this is as follows, with a sketch of a possible service definition after the list:

  • grpc, as well as the protobuf tooling it is based on, has a large and active development community.
  • grpc tooling and programming language support is extensive, and well beyond what AGL could hope to easily provide itself.  For potential future Flutter app development, this includes a well-maintained Dart grpc library and protobuf compiler support.
  • both grpc and protobufs are extensively used in cloud services, and there is potential for synergy with the Cloud EG with respect to their plans for service mesh enablement.
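
To make this concrete, below is a hypothetical protobuf sketch of what a simple AGL service API could look like, using the audio mixer as the example; the package, service, and message names are invented for illustration and do not represent an agreed AGL API.

    // audiomixer.proto -- hypothetical sketch, not an agreed AGL API
    syntax = "proto3";

    package agl.demo.audiomixer;

    service AudioMixer {
      // query the current volume of a mixer control
      rpc GetVolume (GetVolumeRequest) returns (VolumeReply);
      // set the volume of a mixer control
      rpc SetVolume (SetVolumeRequest) returns (VolumeReply);
    }

    message GetVolumeRequest {
      string control = 1;  // e.g. "Master"
    }

    message SetVolumeRequest {
      string control = 1;
      double volume = 2;   // 0.0 - 1.0
    }

    message VolumeReply {
      string control = 1;
      double volume = 2;
    }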

There are, however, a couple of known potential drawbacks:

  • Using grpc from web applications currently requires a proxy mechanism, as web engine HTTP/2 support is not yet complete enough to allow a native implementation.  In practice this means using the grpc-web library with either Envoy (envoyproxy.io) or Improbable's standalone proxy; a sketch of the relevant Envoy configuration is shown after this list.  Recent discussion in the EG indicates that this is perhaps acceptable in the interim, and that wrapping grpc would likely be preferable to e.g. D-Bus if development of WAF API extensions ends up still being required.  Additionally, the Cloud EG has indicated that Envoy is potentially something they would be interested in seeing available to ease service mesh demonstrations.
  • Toyota have indicated that they considered using grpc for APIs in their own internal Flutter application development, but decided to move to custom Flutter platform channel API wrappers (similar to what is done with WAF + web apps) for currently unquantified performance reasons.  However, they agree that using grpc for AGL developed demo services seems a reasonable approach.
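
As referenced in the first item above, enabling grpc-web translation in Envoy amounts to adding its grpc_web HTTP filter to the proxy's filter chain.  The fragment below sketches only the relevant http_filters portion of an Envoy configuration; a complete configuration would also need the surrounding listener, route, and cluster definitions.

    # Fragment of an Envoy HTTP connection manager configuration (sketch only).
    # The grpc_web filter translates grpc-web requests from the browser into
    # native grpc before they are routed to the backend service.
    http_filters:
    - name: envoy.filters.http.grpc_web
    - name: envoy.filters.http.cors
    - name: envoy.filters.http.router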

Replacing SMACK

The SMACK Linux Security Module (LSM) is at this point used almost exclusively by Samsung in their now largely internal Tizen development.  As mentioned above, this was one of the motivations for removing the previous application framework.  However, it seems reasonable to attempt demonstrating one of the more commonly used LSMs, e.g. SELinux or AppArmor.  Multiple AGL member companies have indicated they use SELinux in their products, so investing effort into some form of enablement / technology demonstration in upstream AGL seems worthwhile if done in a modular fashion.

Proposed Development Tasks

Given the discussion above of perceived requirements from a technology demonstration and demo enabling perspective, the proposed 2022 development roadmap is as follows.

Application Launcher

Development tasks:

  1. Add discovery and launching of AGL applications via the systemd D-Bus API.  AGL applications would provide a systemd user unit that applaunchd could discover via the systemd API and then start/stop in response to its own API mechanism; a sketch of a possible application unit is shown after this list.  The goal would be to at first deprecate, and then obsolete (for e.g. the Octopus 15.0 release), the direct spawn and D-Bus activation schemes currently implemented.  Potential standardization/templating of application systemd units, and whether to still rely on the use of .desktop files, would require investigation and discussion in the EG.  A lesser reason for using systemd units rather than D-Bus or direct launching is the improvement in logging; a discussion of this can be found in the comments on SPEC-4211.
  2. Add a grpc API that duplicates the current D-Bus one, with an eye towards potentially obsoleting the latter in a future release once grpc use is better understood.  Rework of the demo homescreen and launchers to use the grpc API would likely be part of this effort unless a separate test application is developed.
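
As referenced in (1) above, a sketch of what an application's systemd user unit might look like follows; the unit and binary names are hypothetical, and the naming/templating convention is exactly the open question noted above.  applaunchd could enumerate such units through the systemd D-Bus API (e.g. the org.freedesktop.systemd1 Manager's ListUnits method) and start/stop them via StartUnit/StopUnit.

    # mediaplayer.service -- hypothetical AGL application user unit
    [Unit]
    Description=AGL Demo Media Player

    [Service]
    Type=simple
    ExecStart=/usr/bin/mediaplayer

    [Install]
    WantedBy=default.target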

Service API Infrastructure

Development tasks:

  1. Develop a grpc API provider for the previous agl-service-audiomixer binding, to allow both the Qt and web app demo mixer applications to be reworked against it and re-added to the demo images.  This would act as a frontrunner for demonstrating grpc/protobuf usage in AGL, potentially in parallel with (2) above, though it may be more straightforward for one or the other to be chosen as the initial project to establish an example; a sketch of what such a service implementation could look like is shown after this list.  The audio mixer binding is viewed as a good example service to reimplement in that the PipeWire and WirePlumber APIs it wraps will require non-trivial development to re-enable in demos no matter the approach taken.
  2. In parallel with (1), start an investigation into the feasibility of enabling grpc-web usage in the demo web apps.  This will involve evaluating making Envoy or Improbable's proxy available inside of AGL with respect to build and configuration, and once that is in hand, attempting a demonstration of using an API in a web application with grpc-web.  It is possible (or perhaps likely) that this will need to be decomposed into separate tasks for the build versus web app development components.
  3. Once the audio mixer and/or application launcher APIs have been shown feasible, move on to reimplementing some of the other binding services that have no easily leveraged FOSS API mechanisms available, with likely priorities being services such as the radio and HVAC bindings.  Services such as telephony are likely good candidates here as well, since their APIs are relatively simple and some development would be required to (re)integrate them into the demo applications in any case.  From a technology demonstration perspective, once there are some examples in hand there may be opportunity to drive member engagement on API requirements with respect to serving as useful abstraction layers for proprietary implementations.  Potential rework of the Bluetooth, network, and media playing APIs should only be considered once it is clear that the grpc approach is workable and that follow-on maintenance will not be a significant issue.  Bluetooth is one area where there has previously been member interest in working on a reusable API, so that might make it a more worthwhile candidate for attempting later in 2022.
  4. Investigate enabling and using the grpc API in the kuksa.val vehicle signaling framework.  The planned use of kuksa.val in Marlin 13.0.x will be limited to the standard VISS WebSocket API due to build issues stemming from that project's use of CMake intersecting with a known upstream OpenEmbedded/Yocto Project limitation.  Working with the upstreams of potentially both projects will be required to enable using grpc with kuksa.val, with a potentially significant benefit to OE/YP/AGL ecosystem use of grpc in the future.  One note with respect to this effort is that some care will need to be taken in any of AGL's own development around grpc to avoid use of the CMake grpc module until the issue with it can be resolved with upstream.
  5. Outside of base grpc usage, authorization and service discovery are pieces of functionality that need further research and development to work towards a complete technology demonstration.  The two features likely need separate research tasks that would lead to potential development later in 2022 or in 2023, once the EG reaches some consensus on any proposed development stemming from the research.  There is some potential on both fronts to work with the Cloud EG on requirements definition and development, as their micro-services and service mesh plans overlap in these areas.  One point of concern that will likely need to factor into research here is that some of the available solutions, at least on the service discovery side, are somewhat heavyweight for an embedded/automotive system; this is likely another area of good synergy with the Cloud EG plans, as it is a concern for them as well.
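
As referenced in (1) above, the following minimal Python sketch shows the general shape such a grpc service could take, implementing the hypothetical AudioMixer API sketched earlier on this page; the generated modules (audiomixer_pb2, audiomixer_pb2_grpc) are assumed to come from protoc, and the PipeWire/WirePlumber interaction a real service would need is stubbed out with an in-memory table.

    # Hypothetical sketch of an AudioMixer grpc service; a real AGL service
    # would call into PipeWire/WirePlumber instead of this in-memory table.
    from concurrent import futures

    import grpc

    import audiomixer_pb2        # generated by protoc from the hypothetical
    import audiomixer_pb2_grpc   # audiomixer.proto sketched earlier


    class AudioMixerServicer(audiomixer_pb2_grpc.AudioMixerServicer):
        def __init__(self):
            # control name -> volume; stand-in for real mixer state
            self._volumes = {}

        def GetVolume(self, request, context):
            volume = self._volumes.get(request.control, 0.0)
            return audiomixer_pb2.VolumeReply(control=request.control,
                                              volume=volume)

        def SetVolume(self, request, context):
            self._volumes[request.control] = request.volume
            return audiomixer_pb2.VolumeReply(control=request.control,
                                              volume=request.volume)


    def serve():
        server = grpc.server(futures.ThreadPoolExecutor(max_workers=4))
        audiomixer_pb2_grpc.add_AudioMixerServicer_to_server(
            AudioMixerServicer(), server)
        server.add_insecure_port("127.0.0.1:50051")
        server.start()
        server.wait_for_termination()


    if __name__ == "__main__":
        serve()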

SELinux

Development tasks:

  1. Add the meta-selinux layer to the default AGL demo image builds, with an initial goal of getting builds working with the upstream targeted reference policy in permissive (i.e. non-enforcing / warning) mode.  The aim would be to have the SELinux kernel configuration, tooling, and base reference policy available in the AGL infrastructure for members to leverage; a sketch of the build configuration involved is shown after this list.
  2. Start an investigation into the iterative effort required to tweak the SELinux policy to allow running the AGL demo images with SELinux in enforcing mode.  This should ideally involve working with meta-selinux upstream, as enforcing mode is currently not possible when using systemd in even a plain core-image-minimal Yocto image.  It is not expected that this work would be completed in 2022, though there is some potential for enabling telematics or cloud gateway demos.  Demonstrating SELinux in enforcing mode in a non-IVI image with a container engine runtime may be an achievable goal for 2022, and has good synergy with Cloud EG requirements.
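
As referenced in (1) above, a rough sketch of the build configuration involved follows, assuming standard meta-selinux usage; the layer path, override syntax, and kernel command line mechanism will vary with the branch and BSP in use.

    # bblayers.conf: add the layer (path is site-specific)
    BBLAYERS += "/path/to/meta-selinux"

    # local.conf or distro configuration: enable SELinux support
    DISTRO_FEATURES:append = " acl xattr pam selinux"

    # select the targeted reference policy provided by meta-selinux
    PREFERRED_PROVIDER_virtual/refpolicy ?= "refpolicy-targeted"

    # boot in permissive mode while the policy is being adapted; the exact
    # mechanism for appending kernel command line options varies by BSP
    APPEND:append = " selinux=1 enforcing=0"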





14 Comments

  1. Right now, some bits of the application life cycle seem to be split between homescreen and the compositor. Currently, applaunchd delegates activation and switching of applications to homescreen, and homescreen has also been growing some rudimentary activity manager functionality. Should we consider having some sort of activation manager independent of the toolkit we're using? Given that Qt will be deprecated, and that Flutter will for the time being just be a single monolithic app, that only leaves WAM for the short/middle term. Duplicating something similar to what we have in homescreen into WAM wouldn't be such an issue IMO, but I guess what I'm asking is whether it is worth considering some sort of activity manager, independent of the toolkit we're using. I also get the feeling that if we're going to migrate to using the systemd D-Bus API, maybe this activity manager will basically be systemd, which actually starts the apps? Am I wrong in thinking that (point 1 of the development tasks for the Application Launcher)?

    A second question, to clarify something wrt the development task for the Application Launcher. Once gRPC is better understood and we have some experience with it, would the plan be to drop D-Bus entirely, including activation/switching over D-Bus, and use gRPC for it?

    1. If you're talking about driving surface activation, then yes, we will likely need something to replace the agl-shell logic in homescreen if we go with Flutter (I am assuming we will not switch to Flutter unless it supports multiple applications) or WAM.  Whether that activation management needs to be wrapped up in some form of reusable API is a good question, but I'd say we perhaps wait on doing anything on that front until AGL's future focus becomes clearer.  I could imagine migrating surface activation into applaunchd along the lines of the previous agl-service-homescreen API, and that might be one way to migrate to a Flutter homescreen along the lines of how the current Qt demo works, but until we know Flutter is both usable and the goal for CES 2023, it seems better to not rush ahead.  As well, there might be interest from Toyota to wire up agl-shell via a Flutter platform channel extension (and/or they might even be in a position to contribute that), which might make developing a separate API less of an immediate priority.  On the WAM front, I'm not sure what Jose has done / is planning wrt activation, so clarifying that would be useful.

      As to your second question, the goal of applaunchd task 2 as outlined is to work towards it only exposing a grpc interface if possible.  Given that work has already been done for web apps to use the D-Bus interface via a WAF extension, it's perhaps not something that needs to be started immediately, but it would become more interesting if we know we need to build a full Flutter demo for CES 2023, as then the Flutter homescreen could use it / be developed in concert with it (as opposed to having to try to use e.g. Canonical's Dart D-Bus library).

      1. > On the WAM front, I'm not sure what Jose has done / is planning wrt activation, so clarifying that

        Sure, then I guess we can ping Jose Dapena Paz on the matter, maybe he has something already. My plan with WAM is to get that multi surface MR merged, and see what else is needed to get it going, including activation of surfaces. So unless Jose Dapena Paz has something to add, I'll get back with some feedback once I know more.

        > outlined is to work towards it only exposing a grpc interface if possible

        Alright. Got it. 

        1. Regarding activation, I am essentially copying the current behavior in Qt apps, where we wait for applaunchd to be ready and then call the activate_app wayland method.

          I don't have any strong opinion on what we would prefer for activation. We could essentially replicate what is being done in Qt. Maybe a better question is... how is activation handled in xdg-shell? I would like to just activate-on-first-swap or similar, so we would not need any extension. The idea is that applications are responsible for sending a proper first frame, and the window is not visible before that. In case we have D-Bus activation support, we could just show a splash screen while the first frame is not ready.

          For multiple surfaces support, where we want to show a window group at a time, it looks like it would be good to be able to define window groups at the wayland level, and have a policy for activation (i.e. activate when all visible windows have their first frame ready, activate each window separately, etc.). The current implementation I added for marlin is very simple: we can have more than one application with agl-shell access, but we just need to make WAM aware it is going to implement the protocol. This way we can have several applications sharing the xdg-shell work. Multiple surfaces for a single application can be tricky in the web world, and it is not exactly what is being done by the multisurface patch (launching several web apps from the same app manifest).

          Right now, it looks to me like we could just drop multisurface support as unneeded after the new implementation.

          1. Thanks for the update, Jose.  One thing to note with respect to supporting multiple surfaces is that it will not be a requirement for Flutter.  AFAIK the Flutter engine design is such that an application = one window/surface, and I believe that is unlikely to change anytime soon.

            1. The web platform is on the way to supporting multiple screens, and launching a specific window on a specific screen, and there is even some support for making the screen a combination of different spans. But even with that we essentially get the mapping 1 window ↔ 1 web context. The proposed APIs are also a bit different from what we would need for the multiple panels approach.

              The web platform could, though, have several contexts using a single renderer process if they share an origin.

              So, from a memory point of view, there are some savings in that case. WAM does not currently support using a single web app for several screens or windows within a single process, though. In webOS, even an application and its companion overlay applet are usually different applications (that's usually good from a memory point of view in our case, as their life cycles are completely different).

          2. Some explanations about "activation", because the term tends to incorporate a few things, and to denote different things as well:

            • activation of a surface: the process of making that surface the one currently displayed, as in the one to receive input
            • xdg-shell activation of a surface: described better here: https://gitlab.freedesktop.org/wayland/wayland-protocols/-/blob/main/stable/xdg-shell/xdg-shell.xml#L811. It is basically a way to visually inform the user that the current surface has received focus. Note, not to be confused with the first item above, nor is there a requirement for it to happen. For clients without decorations you probably would not see any change
            • and finally, activation of an application: the currently focused application is now displayed to the user (which happens with activate_app from the agl-shell protocol).

            The last one should incorporate all the others. Mostly, when we refer to activation we refer to the last one.

            >  I would like to just activate-on-first-swap or similar

            What swap do you mean? When the client performs an eglSwapBuffers? If presentation is enabled (the agl-shell protocol ready() request has been delivered), then the compositor will start presenting the application as soon as it starts issuing wl_surface::commit() (with a buffer attached to it). The issue is when you want to activate another application. On this note, if 2 applications render something continuously, which one would you activate? The requirement is to have a background surface set and to enable presentation. A starting client will be activated by default without the need to explicitly call activate_app. We explicitly delay presentation so the shell can load up its surfaces, to avoid attempting to display/render incomplete data. There's no other reason behind it.


            > so we would not need any extension

            We (still) need a way to convey to the compositor the various surface roles. There's no way around it; compositors other than libweston make use of private extensions as well, for their desktop capabilities.  But I can think of alternatives: with set_parent() from xdg-shell, you incorporate all your surfaces into one big application as top-level children of it, much like flutter (I speculate). Alternatively you can have sub-surfaces (each application is actually a subsurface). All of these alternatives imply that you manage all of them from one single application, which I don't think we want. Worth mentioning here is the xdg-foreign extension, which might allow borrowing/lending/leasing surfaces from one surface to another, but you'd still need an extension.

            Given that in wayland the paradigm is that we control the whole stack, we added an extension protocol to define and describe AGL policies: this area is for clients to render, this area is for the (client) shell to manage.

            > For multiple surfaces support

            The shell, which you can configure using the protocol extension, is the one that describes how windows behave. In AGL we don't have free-floating windows; they're basically tiled and maximized. The shell is the one that needs to be able to manage multiple surfaces, but in traditional desktop environments we also have panels, sys trays, switchers, etc. This is what that multiple surface support (which seems to be a misnomer on its own) is about, because otherwise chromium underneath does use xdg-toplevel surfaces, but also sub-surfaces to render video/content. WAM basically needs to be the client shell, but at the same time also be able to run "regular" clients (with a single xdg-shell toplevel).

            A WAM instance of the client means just a single top-level surface, and the shell needs at least one (the background) where we place/install/show "regular" clients. Adding a panel surface means we need another instance. But that's because WAM was designed for a single, fullscreen application at a time. Also problematic is the fact that WAM itself doesn't really touch any wayland primitives and objects.

            Thing is, we can't really do anything at the wayland level, wayland being just a protocol that defines how the client and server communicate. Surface roles are defined using the xdg-shell protocol: toplevels and popups. And yeah, wayland used to have a wl_shell, which is now deprecated and literally buried, with code literally being removed from compositors.

            Allowing wayland primitives and objects, or exposing a direct channel (w/ chromium) into WAM, would I think have made things much simpler, rather than having to create a new instance for each surface.

            > where we want to show a window group at a time, it looks like it would be good to be able to define window groups at wayland level, and have a policy for activation

            There's no activation happening here; once the surface has been loaded, the shell should know when it is ready to present. I vaguely remember that from the shell (WAM) I wasn't able to get a page loading status, which I suppose is why we're (still) going after frame request completion, but I don't see how that guarantees that the application is ready to present. A frame is just a frame; you'll have hundreds of frame requests/completions until the data is ready to be presented, or there's some assumption I don't see at this moment.


            1. > What swap do you mean? When the client performs an eglSwapBuffers? If presentation is enabled (the agl-shell protocol ready() request has been delivered), then the compositor will start presenting the application as soon as it starts issuing wl_surface::commit() (with a buffer attached to it). The issue is when you want to activate another application. On this note, if 2 applications render something continuously, which one would you activate? The requirement is to have a background surface set and to enable presentation. A starting client will be activated by default without the need to explicitly call activate_app. We explicitly delay presentation so the shell can load up its surfaces, to avoid attempting to display/render incomplete data. There's no other reason behind it.


              This applies to first launch: not attempting to show the window before a first meaningful swap happens. Ideally, from the WAM side, it will only start swapping when it detects that contents are meaningful. This is to prevent the white screen on new applications.

              This does not relate to the problem of having several applications swapping all the time. Once they have swapped once, we just follow the regular activation mechanism (showing on activate).


              > The shell, which you can configure using the protocol extension, is the one that describes how windows behave. In AGL we don't have free-floating windows; they're basically tiled and maximized. The shell is the one that needs to be able to manage multiple surfaces, but in traditional desktop environments we also have panels, sys trays, switchers, etc. This is what that multiple surface support (which seems to be a misnomer on its own) is about, because otherwise chromium underneath does use xdg-toplevel surfaces, but also sub-surfaces to render video/content. WAM basically needs to be the client shell, but at the same time also be able to run "regular" clients (with a single xdg-shell toplevel).


              I am not talking here in any case about subsurfaces; that is a completely different thing (in the web platform they can basically be used for overlay-like effects, where chromium delegates blending to the compositor).

              About panels, switchers, etc., as said, the web platform does not support multiple windows for a single JS context. So you need at least one web application instance in WAM per window. Ideally in the future we could add support for several web applications sharing the same security origin, so even if you run several JS contexts, they run in the same renderer process, getting some savings because of that. But right now, each panel or window displayed on screen should be a single web application. The multisurface patch does not address that, as it will still mean multiple security origins.


              > A WAM instance of the client means just a single top-level surface, and the shell needs at least one (the background) where we place/install/show "regular" clients. Adding a panel surface means we need another instance. But that's because WAM was designed for a single, fullscreen application at a time. Also problematic is the fact that WAM itself doesn't really touch any wayland primitives and objects.


              BTW, WAM is not designed for a single web application at a time. In the current state of the work we already run background, homescreen, and the focused application (launcher or the running application) at the same time. There is no limit on that, and that comes from webOS, where we have overlay applications and widgets. It even supports output to different screens.


              > Thing is, we can't really do anything at the wayland level, wayland being just a protocol that defines how the client and server communicate. Surface roles are defined using the xdg-shell protocol: toplevels and popups. And yeah, wayland used to have a wl_shell, which is now deprecated and literally buried, with code literally being removed from compositors.


              WAM is designed to know just the minimum about the underlying compositing protocol. It should only be aware of configuration details (i.e. the ones coming from the manifest), which are then passed to Chromium, which delegates accordingly to its ozone implementation. There are several layers we would skip if we don't do that. And that would prevent future architecture changes in Chromium regarding where wayland communication happens (which is not very settled nowadays). So anything related to actual wayland communication should happen in the ozone platform backend.


              > Allowing wayland primitives and objects, or exposing a direct channel (w/ chromium) into WAM, would I think have made things much simpler, rather than having to create a new instance for each surface.

              We are not creating one WAM instance per surface because of WAM, but because of the web platform (one output per web application context). Apart from that, there is only a single browser process for all the applications (so only one process talking to wayland). And each renderer process is a sandboxed container for running the web platform code, including javascript.


              > There's no activation happening here; once the surface has been loaded, the shell should know when it is ready to present. I vaguely remember that from the shell (WAM) I wasn't able to get a page loading status, which I suppose is why we're (still) going after frame request completion, but I don't see how that guarantees that the application is ready to present. A frame is just a frame; you'll have hundreds of frame requests/completions until the data is ready to be presented, or there's some assumption I don't see at this moment.

              There are hooks for first paint, first swap, first meaningful paint, etc. Those are used to schedule a show() request, which will traverse all the layers to request wayland to present the window. In webOS it is just setting the window to fullscreen. In AGL it should use, at that point, whatever AGL wants to be used. That's it.

              Actually, in webOS we had a JS API to notify when the window was ready, so if the application stated in its manifest that it would use that, then that would be the event used to request show for the first time.

              The problem I found recently has not been so much about first show, though. It was more about when a running application was activated again. I had to implement the logic present in the QML homescreen, which listens for applaunchd "started" and then calls agl_shell_activate_app.


                1. > it will only start swapping when it detects that contents are meaningful


                Don't really follow; what exactly is there to detect, when actually the application itself is in charge of changing the content and knows exactly the moment to swap? What am I missing here?


                > But right now, each panel or window displayed on screen should be a single web application. The multisurface patch does not address that, as it will still mean multiple security origins.


                To me it seems we're addressing just that with that patch series and the modified homescreen application from  https://github.com/rogerzanoni/html5-homescreen.

                I mean, that's the purpose of this entire patch series: to allow WAM to run multiple things at a time, from one single web context.

                If there's no point in trying to have a shell in WAM, then what would be the point of using it in the first place?


                > BTW, WAM is not designed for a single web application at a time. In the current state of the work we already run background, homescreen, and the focused application (launcher or the running application) at the same time. There is no limit on that, and that comes from webOS, where we have overlay applications and widgets. It even supports output to different screens.



                We run them independently, chain loaded, one after another. We have an application that draws the background and one that draws the panels. It is silly, convoluted and unnecessary, and against the whole wayland paradigm.

                And not only that, but it is half broken, half working, due to the fact that sometimes the panel application starts before the background or vice versa. The problem itself is not that we can't control the order, but, like I've said above, that we don't have a way to retrieve the correct status of the application when it has been fully loaded, so we use artificial timers, which depend quite a lot on the load on the machine. For this to work correctly, you need a way to fence that off in the last application, something like: compositor → panel 1 → panel 2 → panel X → background (fence, wait for all panels to commit a buffer). And I think there's a hidden problem: you can't really wait, as that deadlocks the entire thread.

                Thing is, even with the multiple surface patch series, we're going to have this issue... because we're just iterating over each WAM instance when loading each page. So somehow this still needs to be fixed, whether or not we decide that WAM should behave like a shell.


                > WAM is designed to know just the minimum about the underlying compositing protocol


                Yeah, I kind of agree here. I'm trying to push square pegs into round holes and it fights me at every step, so maybe we should take a step back and analyse what exactly we want? Access to the underlying wayland protocol, to be able to design a coherent shell, is one of those things.


                > So anything related to actual wayland communication should happen in the ozone platform backend.


                That doesn't mean we shouldn't have a way to expose that to the client if it wants it. Qt and GTK both expose the native windowing system to the client if they so desire.



                > There are hooks for first paint, first swap, first meaningful paint, etc. Those are used to schedule a show() request, which will traverse all the layers to request wayland to present the window. In webOS it is just setting the window to fullscreen. In AGL it should use, at that point, whatever AGL wants to be used. That's it.


                I'll give this a try, but last time I tried, waiting in the main thread was basically deadlocking everything while waiting for a change in the page status.




                1. While I appreciate this being discussed and find it informative, I think it's perhaps gotten to the point where it might be better served as a JIRA issue along the lines of "WAM surface activation requirements" or the like.

                  1. Apologies for hijacking the thread... we already have a SPEC for it: SPEC-4009, and all the issues associated with it are linked there.

                    I've (already) talked to Jose Dapena Paz to at least try to clarify some of these aspects and reach some sort of consensus in one of the app fw meetings coming up.

                  2. As a conclusion to our past conversations related to the multiple surface issue, in the appfw call we settled on:

                    • leave the multiple surface implementation aside for the time being 
                    • focus on having an IPC mechanism in WAM to allow some kind of synchronization/fencing between applications, specifically to address start-up of panels + background, which would also be needed if we were to implement the first item

                    Going to open a Jira for the IPC one and continue there. Jose Dapena Paz, does ^ look alright?

                    1. Yes, we have two different tasks here. One is about reimplementing the IPC, and it has already been added to JIRA as https://jira.automotivelinux.org/browse/SPEC-4252. The other one is rearranging the launching of homescreen/background/launcher so they do not conflict with each other and show at the same time (didn't write a JIRA about that yet). This second one (proper system UI application launch sequence) depends on the first one (a reliable internal IPC).

  2. Scott Murray, Marius Vlad:

    gRPC is suitable for device/cloud, or device/device.  The overhead is quite high in IPC (multiple processes on the same system) use cases.

    I did a fair amount of research on this topic prior to landing Toyota on Cap'n Proto.
    https://capnproto.org
    https://github.com/capnproto/capnproto

    To support a variety of transport topologies I recommend:
    https://nanomsg.org
    https://github.com/nanomsg/nng

    This also needs to serve JavaScript scenarios; is that why gRPC? If there were a standardized browser, such as a content shell with a JavaScript/native bridge, this might be more performant.