Symbian
Symbian Developer Library

SYMBIAN OS V9.4



New Graphics Architecture Phase One Overview



Architecture

The New Graphics Architecture in Phase One provides video use case support for User Interface (UI) windows. It does this by providing a mechanism to combine the video and the UI whilst still maintaining an acceptable video frame rate. Until Symbian OS v9.5, window rendering and composition all occurred within the Window Server (WSERV) component.

The two aims of the Phase One work are:

  1. To provide “in-scene” video use case support for UI windows by providing a mechanism to combine the video and the UI at an acceptable video frame rate. This means that the combining needs to be done in hardware, and the UI needs to be cached between frames when it is not changing.

  2. To provide the licensees with a useful first stage of the overall next generation graphics architecture.

Composition is achieved by physically splitting the graphics sub-system into two stages:

  1. Rendering to a surface, where a surface is a hardware-independent place to hold an image or part of a scene.

  2. Composition, where a Graphics Composition Engine (GCE) is responsible for combining surfaces into layers and the GCE backend is responsible for compositing these layers to the screen.

This approach provides more flexibility for licensees, for example if the composition is performed in software this may be optimised, or composition could be performed in hardware. Additionally, this provides the groundwork for future architectures which can make use of industry-standard graphics hardware accelerator chips.

With this new architecture, adaptation-specific code needs a surface supplied by a Surface Manager; the surface is then registered for composition with the Graphics Composition Engine and used for rendering by the adaptation-specific code. The Surface Manager is designed for software surfaces, and all software surfaces are managed by it, apart from the UI surface, which is managed by the LCD Driver and exposed to the rest of the system by the Screen Driver.



Using Surfaces


Introduction

A surface is a hardware-independent place to hold an image or part of a scene. Surfaces are represented by a 128-bit surface ID in a TSurfaceId class. The 128-bit ID is made up of 120 bits that identify the surface and 8 bits of surface type. This allows 256 different surface types in the system and allows multiple surface managers/providers. Each surface provider/manager can generate the 120-bit random ID itself, on condition that IDs are unique within the surface type, so an overall manager with knowledge of all surfaces is not mandatory.
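The ID layout described above can be sketched in standard C++. The struct and function names below are illustrative, not the real TSurfaceId API: they show only the 8-bit type plus 120-bit random payload split, and why uniqueness only needs to hold per type.

```cpp
#include <array>
#include <cstdint>
#include <random>

// Hypothetical sketch of a 128-bit surface identifier: 8 bits of
// surface type plus 120 bits chosen at random by the surface provider.
struct SurfaceId {
    std::uint8_t type;                // surface type (256 possible types)
    std::array<std::uint8_t, 15> id;  // 120 bits identifying the surface

    bool operator==(const SurfaceId& o) const {
        return type == o.type && id == o.id;
    }
};

// Each provider generates its own random 120-bit ID; uniqueness only
// needs to hold within a single surface type, so no overall manager
// with knowledge of all surfaces is required.
SurfaceId MakeSurfaceId(std::uint8_t type, std::mt19937_64& rng) {
    SurfaceId s{type, {}};
    for (auto& b : s.id) b = static_cast<std::uint8_t>(rng());
    return s;
}
```

With 120 random bits, two providers of the same type are overwhelmingly unlikely to collide, which is what makes the decentralised scheme workable.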

There can be more than one surface-manager component in the system, which allows surfaces of different types to be created. For example, a driver which creates surfaces in GPU memory will create surfaces with a different type. Licensees who want to add their own types of surfaces (for instance, hardware-specific surfaces) are recommended to create and add their own surface manager.

Note: The Graphics Phase One project is restricted to one Opaque Background Surface per window.


Surface types

Two types of surface are provided for the Reference Platform: Surface Manager surfaces and a UI Surface. Other surfaces, described below, are also available.

Surface Manager Surfaces

The Surface Manager provides surfaces which are identified by the type field (8 bits) of the 128-bit identifier, TSurfaceId. The remaining 120 bits are random, to limit the possibility of guessing a surface ID. Surface Manager surfaces contain attributes relevant to the surface and a pixel buffer which is in system memory (an RChunk). Pixel buffers can be single- or multi-buffered. An adaptation requiring a surface must create the surface, open it, and map it into its process if it needs to access the pixel buffer.
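The create / map lifecycle and the ownership rule in the paragraphs above can be sketched with a toy manager. This is not the real RSurfaceManager API; the class and method names are assumptions, and a plain vector stands in for the RChunk-backed pixel buffer.

```cpp
#include <cstdint>
#include <map>
#include <memory>
#include <vector>

// Illustrative sketch of the surface lifecycle: the manager ultimately
// owns every surface it creates, and a client maps a surface into its
// process to reach the pixel buffer.
class ToySurfaceManager {
public:
    // Create a surface: the manager allocates and retains ownership of
    // the pixel buffer (in the real system this lives in an RChunk).
    int CreateSurface(int width, int height, int bytesPerPixel) {
        int id = nextId_++;
        surfaces_[id] = std::make_shared<std::vector<std::uint8_t>>(
            static_cast<std::size_t>(width) * height * bytesPerPixel);
        return id;
    }

    // Map a surface so the client can access its pixel buffer; clients
    // identify surfaces by ID, and an unknown ID yields no buffer.
    std::shared_ptr<std::vector<std::uint8_t>> MapSurface(int id) {
        auto it = surfaces_.find(id);
        return it == surfaces_.end() ? nullptr : it->second;
    }

private:
    int nextId_ = 1;
    std::map<int, std::shared_ptr<std::vector<std::uint8_t>>> surfaces_;
};
```

Note how clients only ever hold an ID plus a mapped view of the buffer, matching the model in which the Surface Manager remains the ultimate owner.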

Surfaces created by the Surface Manager are ultimately owned by it.

Client processes identify surfaces to use by specifying Surface IDs when they communicate with the Surface Manager.

UI Surface

The UI Surface is a different type of surface from Surface Manager surfaces; it is not a surface which is created and destroyed. It is a TSurfaceId with the type set to UI Surface, and the remaining 120 bits of surface-specific information contain the screen number, orientation and pixel format.
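A sketch of this kind of field packing is shown below. The field widths, positions and the type constant are illustrative assumptions, not the documented Symbian layout; the point is only that the UI Surface ID encodes fixed attributes rather than naming an allocated object.

```cpp
#include <cstdint>

// Hypothetical type value for a UI Surface ID.
constexpr std::uint32_t kTypeUiSurface = 0x01;

// Surface-specific information carried inside a UI Surface ID.
struct UiSurfaceInfo {
    std::uint8_t screen;       // screen number
    std::uint8_t orientation;  // e.g. rotation step 0..3
    std::uint8_t pixelFormat;  // enumerated pixel format
};

// Pack the fields into one word: type in the top byte, then screen,
// orientation and pixel format (illustrative layout).
std::uint32_t PackUiSurfaceWord(const UiSurfaceInfo& i) {
    return (kTypeUiSurface << 24) |
           (static_cast<std::uint32_t>(i.screen) << 16) |
           (static_cast<std::uint32_t>(i.orientation) << 8) |
           i.pixelFormat;
}

// Recover the surface-specific fields from a packed word.
UiSurfaceInfo UnpackUiSurfaceWord(std::uint32_t w) {
    return { static_cast<std::uint8_t>((w >> 16) & 0xFF),
             static_cast<std::uint8_t>((w >> 8) & 0xFF),
             static_cast<std::uint8_t>(w & 0xFF) };
}
```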

Until Symbian OS v9.5 the output location of the Screen Driver was the screen buffer, which is also the target for Direct Screen Access (DSA). For the New Graphics Architecture this location is no longer the primary display buffer, instead a pixel buffer provides the data for a surface that the GCE will combine with other surfaces in the system to create the final scene that will be displayed. In other words it is now the pixel buffer of a UI Surface.

There are also other surfaces available:

OpenGL ES and OpenVG Surfaces

EGL allows OpenGL ES and OpenVG to draw to bitmaps or to UI windows. To draw to a window, the window is first created in WSERV by the application. This window is then bound to OpenGL ES or OpenVG as appropriate using EGL, which is a common component of an OpenGL ES and OpenVG system. EGL dynamically creates a Surface Manager surface, which can be shared across processes, using calls to the Surface Manager.

This surface will be in CPU memory for the default (reference) implementation for surfaces. Where the surface memory is in GPU memory, a separate Surface Manager will need to be written.

Other renderers and surfaces

Other renderers are similar to OpenGL ES and OpenVG when drawing to surfaces. To draw to a renderer’s surface, a window is created in WSERV by the application. This window is then passed to the renderer which then creates a surface (which also has process sharing control) using calls to the Surface Manager. The renderer then registers the new surface as the background for the window in WSERV and proceeds to use the surface as needed.



Composition

The GCE Backend is responsible for compositing layers, where a layer represents the z-order (front to back) positioning on the display. The layer order itself is determined by the order that WSERV adds the layers. For a surface to be composed to the display it needs to be associated with a layer. A given layer object makes a reference to one surface, however a surface may be referenced by many layers.

WSERV stores information about the location and size of the window into which the surface is logically pasted. The surface is defined to be the background for that window. Therefore all drawing to that surface logically appears within the window’s frame, behind any BITGDI drawing in that window and in front of any other windows behind the one it is associated with. To enable this, WSERV marks the window region by identifying transparent pixels and setting the appropriate alpha channel so that the GCE can add the UI data at a later time. In essence, the GCE needs to use alpha blending or colour keys to blend these regions (in the GPU on devices where one is present), so WSERV uses an alpha value or colour key to represent the hole in the UI surface where the data will be added.

The compositor will combine the layers into an output buffer.
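The blending step above can be sketched as a back-to-front alpha blend. This toy version composes single 8-bit grey values rather than full frames, and the names are illustrative; a real GCE backend would do this per pixel in optimised software or in hardware.

```cpp
#include <cstdint>
#include <vector>

// One layer of the scene: a pixel value plus its alpha
// (0 = fully transparent, 255 = fully opaque).
struct Layer {
    std::uint8_t value;
    std::uint8_t alpha;
};

// Blend layers in z-order (index 0 is the backmost) over a background
// value, producing the composed output pixel.
std::uint8_t Compose(std::uint8_t background, const std::vector<Layer>& layers) {
    unsigned out = background;
    for (const Layer& l : layers) {
        // standard "source over" blend: alpha-weighted average of the
        // layer value and whatever has been composed so far
        out = (l.value * l.alpha + out * (255u - l.alpha)) / 255u;
    }
    return static_cast<std::uint8_t>(out);
}
```

An opaque layer completely replaces what is behind it, a fully transparent layer leaves it untouched, and intermediate alphas mix the two; this is the mechanism by which the punched-through UI regions let video or other surfaces show through.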



Flow control

Flow control is an interface between the GCE Backend and any renderer which has updated its surface and needs to inform the GCE Backend to update its output. This mechanism decouples the adaptation’s GCE Backend component from the rest of the graphics sub-system. Flow Control executes within the WSERV process and provides a secure Symbian client-server interface for any client renderer, which may or may not be in the WSERV process. Flow Control can also ask the GCE Backend to inform it when composition has completed and pass this information back to any renderers waiting on completion.
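The handshake can be sketched as a small notification object. The class and method names are assumptions for illustration; the real interface is a Symbian client-server API, not in-process callbacks.

```cpp
#include <functional>
#include <vector>

// Toy sketch of the flow-control handshake: renderers report surface
// updates, and can ask to be told when the next composition completes.
class ToyFlowControl {
public:
    using Callback = std::function<void()>;

    // A renderer has updated its surface: trigger a recomposition
    // (here modelled as a direct callback into the backend).
    void SurfaceUpdated(const Callback& recompose) {
        pending_ = true;
        recompose();
    }

    // Renderers waiting for composition to finish register here.
    void NotifyOnComplete(Callback cb) { waiters_.push_back(std::move(cb)); }

    // Called by the GCE backend when composition has completed;
    // informs every waiting renderer exactly once.
    void CompositionComplete() {
        pending_ = false;
        for (auto& cb : waiters_) cb();
        waiters_.clear();
    }

    bool pending() const { return pending_; }

private:
    bool pending_ = false;
    std::vector<Callback> waiters_;
};
```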



Screen Rotation

Screen “rotation” is really screen “orientation” by the time it gets beyond WSERV functionality. The absolute orientation of the drawn image relative to the physical orientation of the screen device itself is known as “rotation”. The orientation of the screen is a reflection of the angle through which you have to rotate the device to make the display appear correct.

The GCE supports two types of rotation: layer rotation and full-screen rotation.