
18.17 Asynchronous Method Invocation (AMI)

Asynchronous Method Invocation (AMI) is the term used to describe the client-side support for the asynchronous programming model. AMI supports both oneway and twoway requests, but unlike their synchronous counterparts, AMI requests never block the calling thread. When a client issues an AMI request, the Ice run time hands the message off to the local transport buffer or, if the buffer is currently full, queues the request for later delivery. The application can then continue its activities and poll or wait for completion of the invocation, or receive a callback when the invocation completes.
AMI is transparent to the server: there is no way for the server to tell whether a client sent a request synchronously or asynchronously.
To use AMI with Objective‑C, you must annotate your Slice definitions with an ["ami"] metadata directive. This directive instructs slice2objc to generate AMI support in addition to the synchronous API (which is always generated).
The metadata directive applies to interfaces or operations, for example:
["objc:ami"] interface I {
  bool isValid();
  float computeRate();
};

interface J {
  ["amd"] void startProcess();
  ["ami", "amd"] int endProcess();
};
In this example, all proxy methods of interface I are generated with support for synchronous and asynchronous invocations. In interface J, the startProcess operation uses asynchronous dispatch, and the endProcess operation supports asynchronous invocation and dispatch.
Specifying metadata at the operation level, rather than at the interface or class level, not only minimizes the amount of generated code, but more importantly, it minimizes complexity. Although the asynchronous model is more flexible, it is also more complicated to use. It is therefore in your best interest to limit the use of the asynchronous model to those operations for which it provides a particular advantage, while using the simpler synchronous model for the rest.

Proxy Methods

Besides the synchronous proxy methods, slice2objc generates an additional proxy method with the name <operation>_async. For example, consider the following definition:
["ami"] interface Intf {
    string op(int i, string s, out double d, out bool b);
};
The corresponding asynchronous proxy method is:1
-(BOOL) op_async:(id)target_
                 response:(SEL)response_
                 exception:(SEL)exception_
                 i:(ICEInt)i
                 s:(NSString *)s;
The return value and the parameters work as follows:
• An asynchronous invocation returns YES if it was written to the local transport and NO if it was queued for later delivery.
• The first parameter (target) of an asynchronous operation is the callback object that is notified once the invocation completes.
• The second parameter (response) is the selector of a method of the target callback object. This method is called by the Ice run time once the operation invocation completes successfully. In other words, this selector is used if the operation did not raise an exception.
• The third parameter (exception) is the selector of a method of the target callback object. This method is called by the Ice run time once the operation invocation completes unsuccessfully. In other words, this selector is used if the operation did raise an exception.
• The remaining parameters are the in-parameters for the operation, in the order in which they are defined in Slice. (The return value and the out-parameters are not part of the asynchronous proxy method; they are delivered to the response callback instead.)
Given a proxy to an object of type Intf, you can invoke op asynchronously as follows:
id<EXIntfPrx> proxy = ...; // Get proxy...
id cb = ...;               // Callback object (see below)...

@try {
    [proxy op_async:cb
           response:@selector(opResponse:d:b:)
          exception:@selector(opException:)
                  i:99
                  s:@"Hello"];
} @catch (ICECommunicatorDestroyedException *ex) {
    // Communicator no longer exists.
}
Note that the in-parameters passed to the operation are 99 and "Hello". (We will return to the callback parameters shortly.)
In your code, you are unlikely to catch ICECommunicatorDestroyedException for every asynchronous invocation. Instead, it is usually easier to catch this exception higher up in the call hierarchy and deal with it there. (After all, this exception most likely indicates that you have initiated program termination.) We have included the catch handler here for illustration purposes only.
Once the Ice run time has successfully initiated an asynchronous invocation, control returns to the caller. The actual invocation is processed in the background (if it could not be written to the network immediately). Eventually, the operation will complete, either successfully or with an exception.
If the operation raised an exception, the Ice run time delivers the exception to the selector that you passed to the invocation. The exception callback method accepts a single argument of type ICEException that informs it of the cause of the failure. The exception callback must have void return type.
If the operation completed successfully, the Ice run time calls the response callback method whose selector you passed to the invocation (opResponse:d:b: for the preceding example). The response callback must have void return type. The rules for how the parameter list of the response callback is formed are as follows:
• If an operation has void return type and does not use out-parameters, the response callback has no arguments.
• If an operation has non-void return type, the first parameter of the response callback is the return value of the operation.
• If an operation has out-parameters, each out-parameter becomes an argument for the response callback, in the order in which the out-parameters appear in the corresponding Slice definition. The arguments for out-parameters follow the argument for the return value (if any).
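As an additional illustration of these rules, consider a hypothetical operation (not part of the preceding Slice definition) declared as void getStatus(out int code, out string reason). Because the return type is void, the response callback receives only the two out-parameters, in the order of their Slice declaration; a matching callback method could look like this (the method name is chosen by you via the selector you pass to the invocation):

-(void) getStatusResponse:(ICEInt)code reason:(NSString *)reason
{
    // Process out-parameters...
}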
For our op_async invocation, here is how we could write our callback object to process the results:
@interface Callback : NSObject
// ...
@end

@implementation Callback
-(void) opResponse:(NSString *)ret d:(ICEDouble)d b:(BOOL)b
{
    // Process results...
}

-(void) opException:(ICEException *)ex
{
    // Handle exception...
}
@end
Of course, you are free to add constructors and other methods to your callback object. For example, it is common for the callback object to store a reference to an object that can further process the results (such as display them to the user). Typically, the constructor stores that reference in an instance variable.
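Here is a minimal sketch of that pattern. The RateDisplay protocol, the initWithController: initializer, and the showRate:d:b: and showError: methods are illustrative assumptions, not part of the generated code:

@protocol RateDisplay <NSObject> // Hypothetical object that presents the results.
-(void) showRate:(NSString *)rate d:(ICEDouble)d b:(BOOL)b;
-(void) showError:(ICEException *)ex;
@end

@interface Callback : NSObject
{
    id<RateDisplay> controller; // Stored by the constructor, used from the callbacks.
}
-(id) initWithController:(id<RateDisplay>)c;
@end

@implementation Callback
-(id) initWithController:(id<RateDisplay>)c
{
    if((self = [super init]))
    {
        controller = [c retain];
    }
    return self;
}

-(void) dealloc
{
    [controller release];
    [super dealloc];
}

-(void) opResponse:(NSString *)ret d:(ICEDouble)d b:(BOOL)b
{
    [controller showRate:ret d:d b:b]; // Forward the results for further processing.
}

-(void) opException:(ICEException *)ex
{
    [controller showError:ex]; // Report the failure.
}
@end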
On page 611, we mentioned that each asynchronous Slice operation generates four _async methods. Here is one of these for our example operation op:
-(BOOL) op_async:(id)target_
                 response:(SEL)response_
                 exception:(SEL)exception_
                 i:(ICEInt)i
                 s:(NSString *)s
                 context:(ICEContext *)context;
This is exactly the same as the version we have already seen, except for the trailing context parameter. This parameter allows you to pass a context with asynchronous operations. (See Section 32.12 for details on contexts.)
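As a brief sketch of the context variant (we assume here that an ICEContext can be created as a dictionary with string keys and values, as its use in the signature above suggests; the "retry" key shown is purely illustrative):

ICEContext *ctx = [NSDictionary dictionaryWithObject:@"1" forKey:@"retry"];
[proxy op_async:cb
       response:@selector(opResponse:d:b:)
      exception:@selector(opException:)
              i:99
              s:@"Hello"
        context:ctx];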
The remaining two variants likewise come in “without context” and “with context” versions, but additionally have a sent parameter:
-(BOOL) op_async:(id)target_
                 response:(SEL)response_
                 exception:(SEL)exception_
                 sent:(SEL)sent_
                 i:(ICEInt)i
                 s:(NSString *)s;
-(BOOL) op_async:(id)target_
                 response:(SEL)response_
                 exception:(SEL)exception_
                 sent:(SEL)sent_
                 i:(ICEInt)i
                 s:(NSString *)s
                 context:(ICEContext *)context;
When you invoke an operation asynchronously, the Ice run time attempts to write the invocation to the local network buffers immediately. However, if doing so would block, the invocation is instead queued for later processing in the background. In other words, once control returns to you after making an asynchronous invocation, you do not know whether the invocation was written to the local transport or whether it will be sent some time later.
The purpose of the sent parameter (which always follows the exception parameter and precedes the in-parameters) is to notify you when an asynchronous invocation that could not be written immediately has been sent: if the invocation was queued because it could not be written to the local transport, the Ice run time calls the sent callback once it has written the invocation to the local transport. The sent callback has void return type and accepts no parameters:
@implementation Callback
// ...

-(void) sent
{
    // The queued invocation has now been written to the local transport.
}
@end
The reason for providing this callback is that, without it, a client could flood the Ice run time with asynchronous requests and run out of memory. For example, the client might asynchronously invoke operations in a loop. If the network is temporarily congested, or the client loses connectivity, all the client’s invocations will end up being queued in the Ice run time (at least until they time out, if the client has configured a timeout). Of course, there is only a limited amount of buffer space available and, unless the client realizes that all its invocations are piling up in memory, it will die an untimely death.
The sent callback allows you to implement flow control for asynchronous invocations. If an asynchronous invocation returns NO, you know that the invocation has not been written to the local transport yet. In that case, you can increment a counter to keep track of the number of queued invocations. In the sent callback, you can decrement that counter again. This mechanism allows you to limit the number of queued invocations and avoid running out of memory.
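Here is a minimal sketch of such a counter. The queuedRequests instance variable, the requestQueued and belowLimit helper methods, and the limit of ten queued invocations are illustrative assumptions, not part of the Ice API:

// Assumes an int instance variable queuedRequests declared in the Callback interface.
@implementation Callback
// ... response and exception callbacks as shown earlier ...

// Called by the invoking code whenever op_async returns NO.
-(void) requestQueued
{
    @synchronized(self)
    {
        ++queuedRequests;
    }
}

// Called by the Ice run time once a queued invocation has been sent.
-(void) sent
{
    @synchronized(self)
    {
        --queuedRequests;
    }
}

// Returns YES while the number of queued invocations is below the limit.
-(BOOL) belowLimit
{
    @synchronized(self)
    {
        return queuedRequests < 10; // Arbitrary cap on queued invocations.
    }
}
@end

// Invoking code: record invocations that could not be written immediately and
// delay further invocations once the limit is reached.
if([cb belowLimit])
{
    if(![proxy op_async:cb
               response:@selector(opResponse:d:b:)
              exception:@selector(opException:)
                   sent:@selector(sent)
                      i:99
                      s:@"Hello"])
    {
        [cb requestQueued];
    }
}

A production version would also deal with the ordering of requestQueued and the sent callback (for example, by incrementing the counter before the invocation and decrementing it again if the invocation returns YES), which this sketch glosses over.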

18.17.1 Concurrency Issues

Support for asynchronous invocations in Ice is enabled by the client thread pool (see Section 32.10), whose threads are primarily responsible for processing reply messages. It is important to understand the concurrency issues associated with asynchronous invocations:
• A callback object must not be used for multiple simultaneous invocations. An application that needs to aggregate information from multiple replies can create a separate object to which the callback objects delegate, as sketched after this list.
• Calls to the callback object are always made by threads from an Ice thread pool, therefore synchronization may be necessary if the application might interact with the callback object at the same time as the reply arrives. Furthermore, since the Ice run time never invokes callback methods from the client’s calling thread, the client can safely make AMI invocations while holding a lock without risk of a deadlock.
• The number of threads in the client thread pool determines the maximum number of simultaneous callbacks possible for asynchronous invocations. The default size of the client thread pool is one, meaning invocations on callback objects are serialized. If the size of the thread pool is increased, the application may require synchronization, and replies can be dispatched out of order. The client thread pool can also be configured to serialize messages received over a connection so that AMI replies from a connection are dispatched in the order they are received (see Section 32.10.4).
• AMI invocations do not use collocation optimization (see Section 32.20). As a result, AMI invocations are always sent “over the wire” and thus are dispatched by the server thread pool.
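To illustrate the first point above, here is a hedged sketch of aggregating the replies of several simultaneous computeRate invocations (from interface I earlier). Each invocation uses its own callback object, and every callback delegates to one shared, synchronized aggregator; the RateAggregator and RateCallback classes and their methods are illustrative only:

@interface RateAggregator : NSObject
{
    int pending;   // Replies still outstanding.
    double total;  // Accumulated rates.
}
-(id) initWithCount:(int)count;
-(void) addRate:(ICEFloat)rate;
@end

@implementation RateAggregator
-(id) initWithCount:(int)count
{
    if((self = [super init]))
    {
        pending = count;
    }
    return self;
}

-(void) addRate:(ICEFloat)rate
{
    @synchronized(self) // Replies may arrive on different thread pool threads.
    {
        total += rate;
        if(--pending == 0)
        {
            // All replies have arrived; process the aggregate...
        }
    }
}
@end

// One callback object per invocation; each simply forwards to the shared aggregator.
@interface RateCallback : NSObject
{
    RateAggregator *aggregator;
}
-(id) initWithAggregator:(RateAggregator *)a;
@end

@implementation RateCallback
-(id) initWithAggregator:(RateAggregator *)a
{
    if((self = [super init]))
    {
        aggregator = [a retain];
    }
    return self;
}

-(void) dealloc
{
    [aggregator release];
    [super dealloc];
}

-(void) computeRateResponse:(ICEFloat)rate
{
    [aggregator addRate:rate];
}

-(void) computeRateException:(ICEException *)ex
{
    // Record or report the failure...
}
@end

The invoking code would create one RateCallback per computeRate_async invocation, passing the same RateAggregator to each.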

18.17.2 Flushing Batch Requests

Applications that send batched requests (see Section 32.16) can either flush a batch explicitly or allow the Ice run time to flush automatically. The proxy method ice_flushBatchRequests performs an immediate flush using the synchronous invocation model and may block the calling thread until the entire message can be sent. Ice also provides an asynchronous version of this method for applications that wish to flush batch requests without the risk of blocking.
The proxy method ice_flushBatchRequests_async initiates an asynchronous flush. Its only argument is a callback object; this object must define an ice_exception method for receiving a notification if an error occurs before the message is sent.
If the application is interested in flow control (see page 614), the return value of ice_flushBatchRequests_async is a boolean indicating whether the message was sent synchronously. Furthermore, the callback object can define an ice_sent method that is invoked when an asynchronous flush completes.
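The following sketch is derived from the description above rather than from a verified header: the FlushCallback class name is illustrative, and the exact shape of the call (a single callback argument and a BOOL return value) simply follows the preceding paragraphs.

@interface FlushCallback : NSObject
@end

@implementation FlushCallback
// Called if an error occurs before the batched message is sent.
-(void) ice_exception:(ICEException *)ex
{
    // Handle the failure...
}

// Optional: called when an asynchronous flush completes (useful for flow control).
-(void) ice_sent
{
    // The batched requests have been flushed...
}
@end

// Initiate the asynchronous flush (memory management elided in this sketch).
FlushCallback *flushCb = [[FlushCallback alloc] init];
BOOL sentSynchronously = [proxy ice_flushBatchRequests_async:flushCb];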

18.17.3 Limitations

AMI invocations cannot be sent using collocated optimization. If you attempt to invoke an AMI operation using a proxy that is configured to use collocation optimization, the Ice run time raises CollocationOptimizationException if the servant happens to be collocated; the request is sent normally if the servant is not collocated. Section 32.21 provides more information about this optimization and describes how to disable it when necessary.

1. Each asynchronous operation actually results in four methods. We will return to these once we have covered the basics.

