The window server handles many application function calls in the following way:
1. the client interface packages the information for sending to the server
2. a context switch to the server
3. processing in the window server, with possible inter-thread communication to and from the client
4. a context switch back to the client
This guarantees that operations are performed in the right sequence, and that operations have been performed when control returns to the application program.
However, for drawing and many other function calls, this approach is both too slow and not strictly necessary. It is too slow because the overheads of client/server communication swamp the time needed for the drawing itself, causing severe system degradation. It is unnecessary because the client program does not need a result from drawing functions; the same is true of various other, non-drawing, functions.
Because of this, drawing functions, and other functions that do not need a synchronous response, are buffered by the client interface in an application’s window server buffer. When necessary, this buffer is flushed by a call to the window server, which executes all the drawing functions in sequence. Thus, two context switches are sufficient for an entire buffer full of drawing requests.
The window server buffer is flushed when:
- it is full
- a function requiring a result is called
- one of the event-request functions is called (RWsSession::EventReady(), RWsSession::RedrawReady(), RWsSession::PriorityKeyReady())
- certain high-level functions involving mass data transfer, such as CWindowGc::DrawPolyLine(), are called by the client; these could overflow the buffer, and may fail
- an explicit RWsSession::Flush() call is made
- any function is called while auto-flushing has been set on with RWsSession::SetAutoFlush()
It is desirable that the buffer be flushed when the application has
finished a unit of drawing. This is usually done as a natural consequence of
calling the function to wait for the next event from the window server. This
covers all cases where drawing is initiated in response to a window server
event. There are a few cases where drawing is initiated in response to other
events: in these cases, an explicit Flush()
call must be made.
The use of a buffer has two main impacts on drawing logic:
- While debugging, client requests do not immediately result in screen updates, which can make debugging confusing at times. If this is a problem, the application program can turn on auto-flushing, which forces a flush after each function call. The Uikon UI provides a hot-key, Ctrl+Alt+Shift+F, to turn auto-flushing on. (The resulting degradation in performance is a convincing demonstration of the need for the buffer.)
- The BITGDI is so fast that most drawing activity within the context of a single flush appears as an almost instantaneous update. If, however, a drawing sequence is interrupted by a flush, there may be a perceptible delay; in some circumstances, this results in flicker.
The aesthetics of a program can be badly affected by flicker during
drawing. Applications should use flicker-free functions (such as
DrawText()
with a rectangle), and try to minimise flushing during
critical redraw sequences. If completely flicker-free drawing is impossible,
short drawing sequences which are flush-free may sometimes be an acceptable
alternative.
The window server manages the buffer intelligently so as to minimise flushing. The opcodes stored in the buffer are short, and repeated use of the same graphics context is indicated by a single bit rather than repeated reference to the graphics context.
Applications may modify the size, or the maximum size, of the buffer to tune performance or to accommodate long messages. A larger buffer can reduce flicker by collecting more drawing commands into a single flush; a smaller buffer uses less memory. Setting a maximum size allows the window server to perform its own optimisations.
If a message is too big for the buffer, a panic (#5, EW32PanicDataExceedsBufferLength) will occur.