Copy-on-write semantics have been supported for a while now.
The Event Service library has been divided into several smaller libraries, so applications only link the components they require. The base code for the Event Service is located in the TAO_RTEvent library. TAO_RTOLDEvent contains the old implementation of the real-time Event Service, and TAO_RTSchedEvent contains the components that support scheduling in the new Event Service. This means that applications using only the TAO_RTEvent library do not need to link the Scheduling Service. More details can be found in the README file in the $TAO_ROOT/orbsvcs/orbsvcs/Event directory.
Add strategies to remove unresponsive or dead consumers and/or suppliers.
Lots of bug fixes since the last time these release notes were updated.
The new implementation has been designed to simplify its use in applications that do not require a scheduling service, and to minimize the code footprint when the scheduling service is only required for dispatching.
To achieve these goals the EC will be able to run without any scheduling service, or consulting the schedule only but not updating the dependencies.
Using strategies and factories we will be able to configure the EC to update the schedule only in the configurations that require it. Unfortunately, these features have not been implemented yet.
Many lower-level issues and tasks can be found on the DOC Center Bugzilla web page.
The simplest test for the Event Channel is Event_Latency; below are the basic instructions to run it:
Run the naming service, the scheduling service, the event service, and the test in $TAO_ROOT/orbsvcs/tests/Event_Latency. As in:
$ cd $TAO_ROOT/orbsvcs
$ cd Naming_Service ; ./Naming_Service &
$ cd Event_Service ; ./Event_Service &
$ cd tests/Event_Latency ; ./Event_Latency -m 20 -j &
You may want to run each program in a separate window. Try using a fixed port number for the Naming Service so you can use the NameService environment variable.
The script start_services in $TAO_ROOT/orbsvcs/tests can help with this.
In this release the EC supports atomic updates of subscriptions and publications. In previous versions events could be lost during an update of the subscription list.
The internal data structures in the event channel have been strategized; for example, it is possible to use RB-trees instead of ordered lists. The benefits are small at this stage.
New implementation of the serialization protocols. The new version is based on "internal iterators" (aka Worker). This implementation can support copy-on-read (already implemented) and copy-on-write (in progress).
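As a rough illustration of the pattern (generic C++, not the actual EC classes), the collection drives the iteration and hands each element to a Worker; under copy-on-read the lock is held only while copying the collection, never during the upcalls:

    #include <vector>
    #include <mutex>

    template <typename T>
    class Worker
    {
    public:
      virtual ~Worker () {}
      virtual void work (T& element) = 0;   // upcall for each element
    };

    template <typename T>
    class Collection
    {
    public:
      void for_each (Worker<T>& worker)
      {
        std::vector<T> copy;
        {
          // Copy-on-read: hold the lock only long enough to copy.
          std::lock_guard<std::mutex> guard (lock_);
          copy = elements_;
        }
        for (typename std::vector<T>::iterator i = copy.begin ();
             i != copy.end (); ++i)
          worker.work (*i);                 // iterate without the lock
      }

    private:
      std::mutex lock_;
      std::vector<T> elements_;
    };

Copy-on-write inverts the trade-off: iteration shares the collection and only mutating operations pay for a copy.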
The new EC allows suppliers and consumers to update their publications and subscriptions: they simply call the corresponding connect operation again. The default EC configuration disallows this, but it is very easy to change, as in the sketch below.
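For example (a sketch using the standard orbsvcs QoS helpers; the event type and the zero rt_info handle are arbitrary choices for illustration), an already-connected consumer could replace its subscriptions like this:

    #include "orbsvcs/RtecEventChannelAdminC.h"
    #include "orbsvcs/Event_Utilities.h"          // ACE_ConsumerQOS_Factory
    #include "orbsvcs/Event_Service_Constants.h"  // ACE_ES_EVENT_UNDEFINED

    void
    resubscribe (RtecEventChannelAdmin::ProxyPushSupplier_ptr proxy,
                 RtecEventComm::PushConsumer_ptr consumer)
    {
      // Build the new subscription list.
      ACE_ConsumerQOS_Factory subscriptions;
      subscriptions.start_disjunction_group ();
      subscriptions.insert_type (ACE_ES_EVENT_UNDEFINED + 1, 0);

      // Calling connect again replaces the previous subscriptions;
      // the default configuration rejects the call instead.
      proxy->connect_push_consumer (consumer,
                                    subscriptions.get_ConsumerQOS ());
    }

Whether the second connect is accepted is decided when the channel is created; the channel attributes expose reconnect flags for consumers and suppliers (check the EC headers for the exact names).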
The new EC uses an abstract factory to build its strategies; this factory can be dynamically loaded using the service configurator.
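A minimal sketch of the wiring (the svc.conf directive and option names are illustrative, so check the factory documentation for the exact spelling):

    #include "orbsvcs/Event/EC_Default_Factory.h"
    #include "tao/ORB.h"

    int
    main (int argc, char *argv[])
    {
      // Register the default factory so the service configurator can
      // find it and override its strategies, e.g. with a svc.conf
      // entry such as:
      //   static EC_Factory "-ECDispatching reactive -ECFiltering basic"
      TAO_EC_Default_Factory::init_svcs ();

      CORBA::ORB_var orb = CORBA::ORB_init (argc, argv);
      // ... create the event channel here; it locates the configured
      // factory through the service repository ...
      orb->destroy ();
      return 0;
    }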
The new EC can use trivial filters for both consumers and suppliers, resulting in optimal performance for broadcasters.
Most of the locks in the new EC are strategized.
The duration of all locks in the EC can be bounded, resulting in very predictable behavior.
Added fragmentation and reassembly support for the multicast gateways.
Continued work on the multicast support for the EC: we added a new server that maps the event types (and supplier IDs) into the right multicast group. Usually this server is collocated with the helper classes that send the events through multicast, so using a CORBA interface for this mapping is not expensive; further, it adds the flexibility of using a global service with complete knowledge of the traffic in the system, which could try to optimize multicast group usage.
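For instance, a trivial mapping server (the interface follows RtecUDPAdmin.idl; the one-port-per-event-type policy is only an example) could look like:

    #include "orbsvcs/RtecUDPAdminS.h"

    class Type_AddrServer : public POA_RtecUDPAdmin::AddrServer
    {
    public:
      Type_AddrServer (CORBA::ULong mcast_group, CORBA::UShort base_port)
        : mcast_group_ (mcast_group), base_port_ (base_port) {}

      void get_addr (const RtecEventComm::EventHeader& header,
                     RtecUDPAdmin::UDP_Addr_out addr)
      {
        addr.ipaddr = this->mcast_group_;              // one base group
        addr.port   = this->base_port_ + header.type;  // a port per type
      }

    private:
      CORBA::ULong  mcast_group_;
      CORBA::UShort base_port_;
    };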
The subscriptions and publications on a particular EC can be remotely observed by instances of the RtecChannelAdmin::Observer class. Once more, using CORBA for this interface costs us little or nothing because it is usually used by objects collocated with the EC.
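A sketch of such an observer (assuming the Observer interface and QoS structures declared in RtecEventChannelAdmin.idl; the logging bodies are just for illustration):

    #include "orbsvcs/RtecEventChannelAdminS.h"
    #include "ace/Log_Msg.h"

    class Subscription_Logger : public POA_RtecEventChannelAdmin::Observer
    {
    public:
      // Invoked by the EC whenever its consumers change their
      // subscriptions.
      void update_consumer (const RtecEventChannelAdmin::ConsumerQOS& qos)
      {
        ACE_DEBUG ((LM_DEBUG, "%u subscription nodes\n",
                    qos.dependencies.length ()));
      }

      // Invoked by the EC whenever its suppliers change their
      // publications.
      void update_supplier (const RtecEventChannelAdmin::SupplierQOS& qos)
      {
        ACE_DEBUG ((LM_DEBUG, "%u publication nodes\n",
                    qos.publications.length ()));
      }
    };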
TAO_EC_UDP_Receiver is a helper class that receives events from multicast groups and dispatches them as a supplier to some event channel. This class has to join the right multicast groups; using the Observer described above and the RtecUDPAdmin service to map the subscriptions into multicast groups, it can do this dynamically as consumers join or leave its Event Channel.
When sending events through multicast, all the TAO_EC_UDP_Sender objects can share the same socket.
Added a prototype Consumer and Supplier that can send events through multicast groups (or regular UDP sockets).
The Event Channel can be configured using a Factory that constructs the right modules (for example, changing the dispatching module); in the current release only the default Factory is implemented.
When several suppliers and consumers are distributed over the network, it could be nice to exploit locality and have a separate Event Channel in each process (or host). Only when an event is required by some remote consumer does it need to be sent through the network.
The basic architecture to achieve this seems very simple: each Event Channel has a proxy that connects to its EC peers, providing a "merge" of its (local) consumer subscriptions as its own subscription list.
Locally, the proxy connects as a supplier, publishing all the events it has registered for.
To avoid event looping, the events carry a time-to-live (TTL) field that is decremented each time the event goes through a proxy; when the TTL reaches zero the event is not propagated by the proxy (see the sketch after this discussion).
In the current release an experimental implementation is provided; it basically hardcodes all the subscriptions and publications. We are researching how to automatically build the publication list.
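The TTL handling amounts to something like this (illustrative code; the field names follow the RtecEventComm::EventHeader structure):

    #include "orbsvcs/RtecEventCommC.h"

    void
    forward (RtecEventComm::Event event,
             RtecEventComm::PushConsumer_ptr peer)
    {
      if (event.header.ttl == 0)
        return;            // already crossed too many proxies: drop it
      event.header.ttl--;  // decrement before propagating

      RtecEventComm::EventSet set (1);
      set.length (1);
      set[0] = event;
      peer->push (set);
    }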
We use the COS Time Service types (not the services) to specify time for the Event Service and Scheduling Service.
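For reference, these are TimeBase::TimeT values in units of 100 nanoseconds; a small sketch using the ORBSVCS_Time helpers from orbsvcs/Time_Utilities.h to build one from an ACE_Time_Value:

    #include "orbsvcs/Time_Utilities.h"

    TimeBase::TimeT
    to_TimeT (const ACE_Time_Value& tv)
    {
      TimeBase::TimeT t;
      ORBSVCS_Time::Time_Value_to_TimeT (t, tv);
      return t;   // ACE_Time_Value (0, 500) yields 5000 (100ns units)
    }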
The Gateway to connect two event channels was moved from a test to the library. The corresponding test (EC_Multiple) has been expanded and improved.
The user can register a set of EC_Gateways with the EventChannel implementation; the event channel will automatically update the subscription list as consumers subscribe to the EC.
The code for consumer and supplier disconnection was improved and seems to work without problems now.
The Event_Service program creates a collocated Scheduling Service; this works around a problem in the ORB when running on multiprocessors.
Startup and shutdown were revised; the event channel shuts down cleanly now.
Added yet another example ($TAO_ROOT/orbsvcs/tests/EC_Throughput); this one illustrates how to use the TAO extensions to create octet sequences based on CDR streams, without incurring extra copies. This is useful to implement custom marshaling or late demarshaling of the event payload. Future versions of the test will help measure the EC throughput, hence the name.
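A sketch of the technique (hedged: the replace() overload that adopts an ACE_Message_Block is the TAO-specific octet-sequence extension; see the test sources for the exact usage):

    #include "tao/CDR.h"
    #include "orbsvcs/RtecEventCommC.h"

    // Marshal a value once into a CDR stream and let the event payload
    // adopt the CDR's message block, avoiding an extra copy.
    void
    fill_payload (RtecEventComm::Event& event, CORBA::ULong value)
    {
      TAO_OutputCDR cdr;
      cdr << value;   // custom marshaling of the payload
      event.data.payload.replace (cdr.total_length (), cdr.begin ());
    }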