As we discussed in Section 33.3.2, the AMI model allows applications to use the synchronous invocation model if desired: specifying the AMI metadata for an operation leaves the proxy method for synchronous invocation intact, and causes an additional proxy method to be generated in support of asynchronous invocation.
The same is not true for AMD, however. Specifying the AMD metadata causes the method for synchronous dispatch to be
replaced with a method for asynchronous dispatch.
The asynchronous dispatch method has a signature similar to that of AMI: the arguments consist of a callback object and the operation’s
in parameters. In AMI the callback object is supplied by the application, but in AMD the callback object is supplied by the Ice run time and provides methods for returning the operation’s results or reporting an exception. The implementation is not required to invoke the callback object before the dispatch method returns; the callback object can be invoked at any time by any thread, but may only be invoked once. The name of the callback class is constructed so that it cannot conflict with a user-defined Slice identifier.
1. A callback class used by the implementation to notify the Ice run time about the completion of an operation. The name of this class is formed using the pattern
AMD_class_op. For example, an operation named
foo defined in interface
I results in a class named
AMD_I_foo. The class is generated in the same scope as the interface or class containing the operation. Several methods are provided:
The ice_response method allows the server to report the successful completion of the operation. If the operation has a non-
void return type, the first parameter to
ice_response is the return value. Parameters corresponding to the operation’s
out parameters follow the return value, in the order of declaration.
This version of ice_exception allows the server to raise any standard exception, Ice run time exception, or Ice user exception.
This version of ice_exception allows the server to report an
UnknownException.
Neither ice_response nor
ice_exception throws any exceptions to the caller.
2. The dispatch method, whose name has the suffix _async. This method has a
void return type. The first parameter is a smart pointer to an instance of the callback class described above. The remaining parameters comprise the
in parameters of the operation, in the order of declaration.
interface I {
    ["amd"] int foo(short s, out long l);
};
class AMD_I_foo : public ... {
public:
    void ice_response(Ice::Int, Ice::Long);
    void ice_exception(const std::exception&);
    void ice_exception();
};
void foo_async(const AMD_I_fooPtr&, Ice::Short);
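For C++, a servant is free to complete the operation before the dispatch method returns, as noted earlier. The sketch below is an illustration only (not taken from the Ice distribution): it assumes a servant class named FooI and includes the trailing Ice::Current parameter that also appears in the interpolate_async example later in this section.

// Hypothetical servant that completes the AMD operation immediately,
// from within the dispatch thread.
class FooI : public virtual I {
public:
    virtual void foo_async(const AMD_I_fooPtr& cb,
                           Ice::Short s,
                           const Ice::Current&)
    {
        Ice::Long l = s * 2;        // placeholder computation of the out parameter
        cb->ice_response(s + 1, l); // return value first, then the out parameter
    }
};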
1. A callback interface used by the implementation to notify the Ice run time about the completion of an operation. The name of this interface is formed using the pattern
AMD_class_op. For example, an operation named
foo defined in interface
I results in an interface named
AMD_I_foo. The interface is generated in the same scope as the interface or class containing the operation. Two methods are provided:
The ice_response method allows the server to report the successful completion of the operation. If the operation has a non-
void return type, the first parameter to
ice_response is the return value. Parameters corresponding to the operation’s
out parameters follow the return value, in the order of declaration.
The ice_exception method allows the server to raise an exception. With respect to exceptions, there is less compile-time type safety in an AMD implementation because there is no
throws clause on the dispatch method and any exception type could conceivably be passed to
ice_exception. However, the Ice run time validates the exception value using the same semantics as for synchronous dispatch (see
Section 4.10.4).
Neither ice_response nor
ice_exception throws any exceptions to the caller.
2. The dispatch method, whose name has the suffix _async. This method has a
void return type. The first parameter is a reference to an instance of the callback interface described above. The remaining parameters comprise the
in parameters of the operation, in the order of declaration.
interface I {
    ["amd"] int foo(short s, out long l);
};
public interface AMD_I_foo {
    void ice_response(int __ret, long l);
    void ice_exception(java.lang.Exception ex);
}
void foo_async(AMD_I_foo __cb, short s);
1. A callback interface used by the implementation to notify the Ice run time about the completion of an operation. The name of this interface is formed using the pattern
AMD_class_op. For example, an operation named
foo defined in interface
I results in an interface named
AMD_I_foo. The interface is generated in the same scope as the interface or class containing the operation. Two methods are provided:
The ice_response method allows the server to report the successful completion of the operation. If the operation has a non-
void return type, the first parameter to
ice_response is the return value. Parameters corresponding to the operation’s
out parameters follow the return value, in the order of declaration.
The ice_exception method allows the server to raise an exception.
Neither ice_response nor
ice_exception throws any exceptions to the caller.
2. The dispatch method, whose name has the suffix _async. This method has a
void return type. The first parameter is a reference to an instance of the callback interface described above. The remaining parameters comprise the
in parameters of the operation, in the order of declaration.
interface I {
    ["amd"] int foo(short s, out long l);
};
public interface AMD_I_foo
{
    void ice_response(int __ret, long l);
    void ice_exception(System.Exception ex);
}
public abstract void foo_async(AMD_I_foo __cb, short s,
Ice.Current __current);
For each AMD operation, the Python mapping emits a dispatch method with the same name as the operation and the suffix
_async. This method returns
None. The first parameter is a reference to a callback object, as described below. The remaining parameters comprise the
in parameters of the operation, in the order of declaration.
The ice_response method allows the server to report the successful completion of the operation. If the operation has a non-
void return type, the first parameter to
ice_response is the return value. Parameters corresponding to the operation’s
out parameters follow the return value, in the order of declaration.
The ice_exception method allows the server to report an exception.
Neither ice_response nor
ice_exception throws any exceptions to the caller.
interface I {
    ["amd"] int foo(short s, out long l);
};
class ...
    #
    # Operation signatures.
    #
    # def ice_response(self, _result, l)
    # def ice_exception(self, ex)
    def foo_async(self, __cb, s)
There are two processing contexts in which the logical implementation of an AMD operation may need to report an exception: the dispatch thread (i.e., the thread that receives the invocation), and the response thread (i.e., the thread that sends the response).
Although we recommend that the callback object be used to report all exceptions to the client, it is legal for the implementation to raise an exception instead, but only from the dispatch thread.
As you would expect, an exception raised from a response thread cannot be caught by the Ice run time; the application’s run time environment determines how such an exception is handled. Therefore, a response thread must ensure that it traps all exceptions and sends the appropriate response using the callback object. Otherwise, if a response thread is terminated by an uncaught exception, the request may never be completed and the client might wait indefinitely for a response.
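For example, a worker thread completing the foo operation from the C++ mapping example above might guard its processing as sketched here; doWork stands in for application logic and is not part of any generated code.

// Sketch only: a response thread reports every outcome through the
// AMD callback instead of letting an exception escape the thread.
// doWork is a hypothetical application function that may throw.
extern Ice::Int doWork(Ice::Short s, Ice::Long& l);

void
processRequest(const AMD_I_fooPtr& cb, Ice::Short s)
{
    try {
        Ice::Long l;
        Ice::Int result = doWork(s, l);
        cb->ice_response(result, l);
    } catch (const std::exception& ex) {
        cb->ice_exception(ex);   // forwarded to the client
    } catch (...) {
        cb->ice_exception();     // reported to the client as UnknownException
    }
}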
Whether raised in a dispatch thread or reported via the callback object, user exceptions are validated as described in
Section 4.10.2, and local exceptions may undergo the translation described in
Section 4.10.4.
module Demo {
    sequence<float> Row;
    sequence<Row> Grid;

    exception RangeError {};

    interface Model {
        ["ami", "amd"] Grid interpolate(Grid data, float factor)
            throws RangeError;
    };
};
Our servant class derives from Demo::Model and supplies a definition for the
interpolate_async method:
class ModelI : virtual public Demo::Model,
               virtual public IceUtil::Mutex {
public:
    virtual void interpolate_async(
        const Demo::AMD_Model_interpolatePtr&,
        const Demo::Grid&,
        Ice::Float,
        const Ice::Current&);

private:
    std::list<JobPtr> _jobs;
};
The implementation of interpolate_async uses synchronization to safely record the callback object and arguments in a
Job that is added to a queue:
void ModelI::interpolate_async(
    const Demo::AMD_Model_interpolatePtr& cb,
    const Demo::Grid& data,
    Ice::Float factor,
    const Ice::Current& current)
{
    IceUtil::Mutex::Lock sync(*this);
    JobPtr job = new Job(cb, data, factor);
    _jobs.push_back(job);
}
After queuing the information, the operation returns control to the Ice run time, making the dispatch thread available to process another request. An application thread removes the next
Job from the queue and invokes
execute to perform the interpolation.
Job is defined as follows:
class Job : public IceUtil::Shared {
public:
    Job(const Demo::AMD_Model_interpolatePtr&,
        const Demo::Grid&,
        Ice::Float);

    void execute();

private:
    bool interpolateGrid();

    Demo::AMD_Model_interpolatePtr _cb;
    Demo::Grid _grid;
    Ice::Float _factor;
};
typedef IceUtil::Handle<Job> JobPtr;
The implementation of execute uses
interpolateGrid (not shown) to perform the computational work:
Job::Job(
    const Demo::AMD_Model_interpolatePtr& cb,
    const Demo::Grid& grid,
    Ice::Float factor) :
    _cb(cb), _grid(grid), _factor(factor)
{
}

void Job::execute()
{
    if (!interpolateGrid()) {
        _cb->ice_exception(Demo::RangeError());
        return;
    }
    _cb->ice_response(_grid);
}
If interpolateGrid returns
false, then
ice_exception is invoked to indicate that a range error has occurred. The
return statement following the call to
ice_exception is necessary because
ice_exception does not throw an exception; it only marshals the exception argument and sends it to the client.
If interpolation was successful, ice_response is called to send the modified grid back to the client.
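The application thread that drains the queue is not shown above. One possible shape for it is sketched below, assuming a hypothetical nextJob helper on ModelI that removes the next queued Job under the servant's mutex and returns a null handle when the server is shutting down.

// Sketch of the application (worker) thread that drains the queue.
// ModelI::nextJob is an assumed helper, not part of the example above.
void
workerLoop(ModelI& model)
{
    while (true) {
        JobPtr job = model.nextJob(); // hypothetical: pops a Job, or returns 0
        if (!job) {
            break;                    // no more work; the server is shutting down
        }
        job->execute();               // completes the request via the AMD callback
    }
}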
Our servant class derives from Demo._ModelDisp and supplies a definition for the
interpolate_async method that creates a
Job to hold the callback object and arguments, and adds the
Job to a queue. The method is synchronized to guard access to the queue:
public final class ModelI extends Demo._ModelDisp {
    synchronized public void interpolate_async(
            Demo.AMD_Model_interpolate cb,
            float[][] data,
            float factor,
            Ice.Current current)
        throws RangeError
    {
        _jobs.add(new Job(cb, data, factor));
    }

    java.util.LinkedList _jobs = new java.util.LinkedList();
}
After queuing the information, the operation returns control to the Ice run time, making the dispatch thread available to process another request. An application thread removes the next
Job from the queue and invokes
execute, which uses
interpolateGrid (not shown) to perform the computational work:
class Job {
    Job(Demo.AMD_Model_interpolate cb,
        float[][] grid,
        float factor)
    {
        _cb = cb;
        _grid = grid;
        _factor = factor;
    }

    void execute()
    {
        if (!interpolateGrid()) {
            _cb.ice_exception(new Demo.RangeError());
            return;
        }
        _cb.ice_response(_grid);
    }

    private boolean interpolateGrid() {
        // ...
    }

    private Demo.AMD_Model_interpolate _cb;
    private float[][] _grid;
    private float _factor;
}
If interpolateGrid returns
false, then
ice_exception is invoked to indicate that a range error has occurred. The
return statement following the call to
ice_exception is necessary because
ice_exception does not throw an exception; it only marshals the exception argument and sends it to the client.
If interpolation was successful, ice_response is called to send the modified grid back to the client.
Our servant class derives from Demo._ModelDisp and supplies a definition for the
interpolate_async method that creates a
Job to hold the callback object and arguments, and adds the
Job to a queue. The method uses a lock statement to guard access to the queue:
public class ModelI : Demo.ModelDisp_
{
    public override void interpolate_async(
        Demo.AMD_Model_interpolate cb,
        float[][] data,
        float factor,
        Ice.Current current)
    {
        lock(this)
        {
            _jobs.Add(new Job(cb, data, factor));
        }
    }

    private System.Collections.ArrayList _jobs
        = new System.Collections.ArrayList();
}
After queuing the information, the operation returns control to the Ice run time, making the dispatch thread available to process another request. An application thread removes the next
Job from the queue and invokes
execute, which uses
interpolateGrid (not shown) to perform the computational work:
public class Job {
    public Job(Demo.AMD_Model_interpolate cb,
               float[][] grid, float factor)
    {
        _cb = cb;
        _grid = grid;
        _factor = factor;
    }

    public void execute()
    {
        if (!interpolateGrid()) {
            _cb.ice_exception(new Demo.RangeError());
            return;
        }
        _cb.ice_response(_grid);
    }

    private bool interpolateGrid()
    {
        // ...
    }

    private Demo.AMD_Model_interpolate _cb;
    private float[][] _grid;
    private float _factor;
}
If interpolateGrid returns
false, then
ice_exception is invoked to indicate that a range error has occurred. The
return statement following the call to
ice_exception is necessary because
ice_exception does not throw an exception; it only marshals the exception argument and sends it to the client.
If interpolation was successful, ice_response is called to send the modified grid back to the client.
Our servant class derives from Demo.Model and supplies a definition for the
interpolate_async method that creates a
Job to hold the callback object and arguments, and adds the
Job to a queue. The method uses a lock to guard access to the queue:
class ModelI(Demo.Model):
    def __init__(self):
        self._mutex = threading.Lock()
        self._jobs = []

    def interpolate_async(self, cb, data, factor, current=None):
        self._mutex.acquire()
        try:
            self._jobs.append(Job(cb, data, factor))
        finally:
            self._mutex.release()
After queuing the information, the operation returns control to the Ice run time, making the dispatch thread available to process another request. An application thread removes the next
Job from the queue and invokes
execute, which uses
interpolateGrid (not shown) to perform the computational work:
class Job(object):
    def __init__(self, cb, grid, factor):
        self._cb = cb
        self._grid = grid
        self._factor = factor

    def execute(self):
        if not self.interpolateGrid():
            self._cb.ice_exception(Demo.RangeError())
            return
        self._cb.ice_response(self._grid)

    def interpolateGrid(self):
        # ...
If interpolateGrid returns
False, then
ice_exception is invoked to indicate that a range error has occurred. The
return statement following the call to
ice_exception is necessary because
ice_exception does not throw an exception; it only marshals the exception argument and sends it to the client.
If interpolation was successful, ice_response is called to send the modified grid back to the client.