#include <Transport.h>
Inheritance diagram for TAO_Transport:
Public Types | |
enum | { TAO_ONEWAY_REQUEST = 0, TAO_TWOWAY_REQUEST = 1, TAO_REPLY } |
Public Methods | |
TAO_Transport (CORBA::ULong tag, TAO_ORB_Core *orb_core) | |
Default constructor; requires the tag value to be supplied. More... | |
virtual | ~TAO_Transport (void) |
destructor. More... | |
CORBA::ULong | tag (void) const |
The OMG assigns unique tags (a 32-bit unsigned number) to each protocol. More... | |
TAO_ORB_Core * | orb_core (void) const |
Access the ORB that owns this connection. More... | |
TAO_Transport_Mux_Strategy * | tms (void) const |
The role of the TAO_Transport_Mux_Strategy is described in more detail in that class' documentation. More... | |
TAO_Wait_Strategy * | wait_strategy (void) const |
The role of the TAO_Wait_Strategy is described in more detail in that class' documentation. More... | |
int | handle_output (void) |
Callback method to reactively drain the outgoing data queue. More... | |
int | bidirectional_flag (void) const |
Get/Set the bidirectional flag. More... | |
void | bidirectional_flag (int flag) |
void | cache_map_entry (TAO_Transport_Cache_Manager::HASH_MAP_ENTRY *entry) |
Set/Get the Cache Map entry. More... | |
TAO_Transport_Cache_Manager::HASH_MAP_ENTRY * | cache_map_entry (void) |
int | id (void) const |
If not set, this will return an integer representation of the this pointer for the instance on which it's called. More... | |
void | id (int id) |
unsigned long | purging_order (void) const |
Get and Set the purging order. The purging strategy uses the set version to set the purging order. More... | |
void | purging_order (unsigned long value) |
int | queue_is_empty (void) |
void | provide_handle (ACE_Handle_Set &reactor_registered, TAO_EventHandlerSet &unregistered) |
Called by the cache when the cache is closing in order to fill in a handle_set in a thread-safe manner. More... | |
int | register_handler (void) |
Register the handler with the reactor. More... | |
ssize_t | send (iovec *iov, int iovcnt, size_t &bytes_transferred, const ACE_Time_Value *timeout=0) |
This method serializes on handler_lock_, guaranteeing that only one thread can execute it on the same instance concurrently. More... | |
ssize_t | recv (char *buffer, size_t len, const ACE_Time_Value *timeout=0) |
This method serializes on handler_lock_, guaranteeing that only one thread can execute it on the same instance concurrently. More... | |
virtual int | messaging_init (CORBA::Octet major, CORBA::Octet minor)=0 |
Initialising the messaging object. More... | |
virtual int | tear_listen_point_list (TAO_InputCDR &cdr) |
Extracts the list of listen points from the <cdr> stream. The list contains the protocol-specific details of the ListenPoints. More... | |
int | generate_locate_request (TAO_Target_Specification &spec, TAO_Operation_Details &opdetails, TAO_OutputCDR &output) |
This is a request for the transport object to write a LocateRequest header before it is sent out. More... | |
virtual int | generate_request_header (TAO_Operation_Details &opd, TAO_Target_Specification &spec, TAO_OutputCDR &msg) |
This is a request for the transport object to write a request header before it sends out the request. More... | |
int | recache_transport (TAO_Transport_Descriptor_Interface *desc) |
recache ourselves in the cache. More... | |
virtual void | connection_handler_closing (void) |
Method for the connection handler to signify that it is being closed and destroyed. More... | |
virtual int | handle_input_i (TAO_Resume_Handle &rh, ACE_Time_Value *max_wait_time=0, int block=0) |
The ACE_Event_Handler adapter invokes this method as part of its handle_input() operation. More... | |
virtual int | send_request (TAO_Stub *stub, TAO_ORB_Core *orb_core, TAO_OutputCDR &stream, int message_semantics, ACE_Time_Value *max_time_wait)=0 |
Preparing the ORB to receive the reply only once the request is completely sent opens the system to some subtle race conditions: suppose the ORB is running in a multi-threaded configuration, thread A makes a request while thread B is using the Reactor to process all incoming requests. More... | |
virtual int | send_message (TAO_OutputCDR &stream, TAO_Stub *stub=0, int message_semantics=TAO_Transport::TAO_TWOWAY_REQUEST, ACE_Time_Value *max_time_wait=0)=0 |
Once the ORB is prepared to receive a reply (see send_request() above), and all the arguments have been marshaled the CDR stream must be 'formatted', i.e. More... | |
virtual int | send_message_shared (TAO_Stub *stub, int message_semantics, const ACE_Message_Block *message_block, ACE_Time_Value *max_wait_time) |
int | send_message_block_chain (const ACE_Message_Block *message_block, size_t &bytes_transferred, ACE_Time_Value *max_wait_time=0) |
Send a message block chain. More... | |
int | send_message_block_chain_i (const ACE_Message_Block *message_block, size_t &bytes_transferred, ACE_Time_Value *max_wait_time) |
Send a message block chain, assuming the lock is held. More... | |
void | purge_entry (void) |
Cache management. More... | |
int | make_idle (void) |
Cache management. More... | |
int | handle_timeout (const ACE_Time_Value &current_time, const void *act) |
Control connection lifecycle | |
These methods are routed through the TMS object.
The TMS strategies implement them correctly. | |
virtual int | idle_after_send (void) |
The request has just been sent, but the reply has not been received. Idle the transport now. More... | |
virtual int | idle_after_reply (void) |
Request is sent and the reply is received. Idle the transport now. More... | |
virtual void | close_connection (void) |
Call the implementation method after obtaining the lock. More... | |
Static Public Methods | |
TAO_Transport * | _duplicate (TAO_Transport *transport) |
void | release (TAO_Transport *transport) |
Protected Methods | |
virtual ACE_Event_Handler * | event_handler_i (void)=0 |
Normally a concrete TAO_Transport object has-a ACE_Event_Handler member that functions as an adapter between the ACE_Reactor framework and the TAO pluggable protocol framework. More... | |
virtual TAO_Connection_Handler * | connection_handler_i (void)=0 |
virtual ACE_Event_Handler * | invalidate_event_handler_i (void)=0 |
Typically, this just sets the pointer to the associated connection handler to zero, although it could also clear out any additional resources associated with the handler association. More... | |
virtual TAO_Pluggable_Messaging * | messaging_object (void)=0 |
Return the messaging object that is used to format the data that needs to be sent. More... | |
virtual ssize_t | send_i (iovec *iov, int iovcnt, size_t &bytes_transferred, const ACE_Time_Value *timeout=0)=0 |
Often the implementation simply forwards the arguments to the underlying ACE_Svc_Handler class. More... | |
virtual ssize_t | recv_i (char *buffer, size_t len, const ACE_Time_Value *timeout=0)=0 |
virtual int | register_handler_i (void)=0 |
This method is used by the Wait_On_Reactor strategy. More... | |
int | parse_consolidate_messages (ACE_Message_Block &bl, TAO_Resume_Handle &rh, ACE_Time_Value *time=0) |
Called by handle_input_i (). This method parses the messages read by the handle_input_i () call and decides whether a message needs consolidation before processing. More... | |
int | parse_incoming_messages (ACE_Message_Block &message_block) |
Parses the message if we have a fresh message in the <message_block>, or just returns if we have read only part of a previously stored message. More... | |
size_t | missing_data (ACE_Message_Block &message_block) |
Return whether we have any missing data in the queue of messages, or determine how much data the currently read message still needs to be complete. More... | |
virtual int | consolidate_message (ACE_Message_Block &incoming, ssize_t missing_data, TAO_Resume_Handle &rh, ACE_Time_Value *max_wait_time) |
Consolidate the currently read message or consolidate the last message in the queue. The consolidation of the last message in the queue is done by calling consolidate_message_queue (). More... | |
int | consolidate_fragments (TAO_Queued_Data *qd, TAO_Resume_Handle &rh) |
Bala: Docu??? More... | |
int | consolidate_message_queue (ACE_Message_Block &incoming, ssize_t missing_data, TAO_Resume_Handle &rh, ACE_Time_Value *max_wait_time) |
First consolidate the message queue. If the message is still not complete, try to read from the handle again to make it complete. If that doesn't help, put the message back in the queue and check whether the queue has a message to process (the thread needs to do some work anyway :-)). More... | |
int | consolidate_extra_messages (ACE_Message_Block &incoming, TAO_Resume_Handle &rh) |
Called by parse_consolidate_messages () if we have more than one message in a single read. Queue up the messages and try to process one of them, at least the one at the head. More... | |
int | process_parsed_messages (TAO_Queued_Data *qd, TAO_Resume_Handle &rh) |
Process the message by sending it to the higher layers of the ORB. More... | |
TAO_Queued_Data * | make_queued_data (ACE_Message_Block &incoming) |
Make a queued data from the <incoming> message block. More... | |
int | send_message_shared_i (TAO_Stub *stub, int message_semantics, const ACE_Message_Block *message_block, ACE_Time_Value *max_wait_time) |
Implement send_message_shared() assuming the handler_lock_ is held. More... | |
int | check_event_handler_i (const char *caller) |
Protected Attributes | |
CORBA::ULong | tag_ |
IOP protocol tag. More... | |
TAO_ORB_Core * | orb_core_ |
Global orbcore resource. More... | |
TAO_Transport_Cache_Manager::HASH_MAP_ENTRY * | cache_map_entry_ |
Our entry in the cache. We don't own this. It is here for our convenience. We cannot just change things around. More... | |
TAO_Transport_Mux_Strategy * | tms_ |
Strategy to decide whether multiple requests can be sent over the same connection or the connection is exclusive for a request. More... | |
TAO_Wait_Strategy * | ws_ |
Strategy for waiting for the reply after sending the request. More... | |
int | bidirectional_flag_ |
Whether we have sent bidirectional information, or received a request to make the connection served by this transport bidirectional. More... | |
TAO_Queued_Message * | head_ |
Implement the outgoing data queue. More... | |
TAO_Queued_Message * | tail_ |
TAO_Incoming_Message_Queue | incoming_message_queue_ |
Queue of the incoming messages. More... | |
ACE_Time_Value | current_deadline_ |
The queue will start draining no later than <queuing_deadline_>, *if* a deadline has been set. More... | |
long | flush_timer_id_ |
The timer ID. More... | |
TAO_Transport_Timer | transport_timer_ |
The adapter used to receive timeout callbacks from the Reactor. More... | |
ACE_Lock * | handler_lock_ |
This is an ACE_Lock that gets initialized from TAO_ORB_Core::resource_factory()->create_cached_connection_lock () . More... | |
int | id_ |
This never *never* changes over the lifespan, so we don't have to worry about locking it. More... | |
unsigned long | purging_order_ |
Used by the LRU, LFU and FIFO Connection Purging Strategies. More... | |
Private Methods | |
TAO_Transport_Cache_Manager & | transport_cache_manager (void) |
Helper method that returns the Transport Cache Manager. More... | |
int | drain_queue (void) |
As the outgoing data is drained this method is invoked to send as much of the current message as possible. More... | |
int | drain_queue_i (void) |
Implement drain_queue() assuming the lock is held. More... | |
int | queue_is_empty_i (void) |
This version assumes that the lock is already held. More... | |
int | drain_queue_helper (int &iovcnt, iovec iov[]) |
A helper routine used in drain_queue_i(). More... | |
int | schedule_output_i (void) |
Schedule handle_output() callbacks. More... | |
int | cancel_output_i (void) |
Cancel handle_output() callbacks. More... | |
void | cleanup_queue (size_t byte_count) |
Exactly <byte_count> bytes have been sent, the queue must be cleaned up as potentially several messages have been completely sent out. More... | |
int | check_buffering_constraints_i (TAO_Stub *stub, int &must_flush) |
Check if the buffering constraints have been reached. More... | |
int | send_synchronous_message_i (const ACE_Message_Block *message_block, ACE_Time_Value *max_wait_time) |
Send a synchronous message, i.e. block until the message is on the wire. More... | |
int | send_reply_message_i (const ACE_Message_Block *message_block, ACE_Time_Value *max_wait_time) |
Send a reply message, i.e. do not block until the message is on the wire; just return after adding it to the queue. More... | |
int | send_synch_message_helper_i (TAO_Synch_Queued_Message &s, ACE_Time_Value *max_wait_time) |
A helper method used by <send_synchronous_message_i> and <send_reply_message_i>. Reusable code that could be used by both the methods. More... | |
int | flush_timer_pending (void) const |
Check if the flush timer is still pending. More... | |
void | reset_flush_timer (void) |
The flush timer expired or was explicitly cancelled, mark it as not pending. More... | |
void | report_invalid_event_handler (const char *caller) |
Print out error messages if the event handler is not valid. More... | |
int | process_queue_head (TAO_Resume_Handle &rh) |
int | notify_reactor (void) |
ACE_Event_Handler * | invalidate_event_handler (void) |
Grab the mutex and then call invalidate_event_handler_i(). More... | |
void | send_connection_closed_notifications (void) |
Notify all the components inside a Transport when the underlying connection is closed. More... | |
void | close_connection_i (void) |
Implement close_connection() assuming the handler_lock_ is held. More... | |
void | close_connection_no_purge (void) |
Close the underlying connection, do not purge the entry from the map (supposedly it was purged already, trust the caller, yuck!). More... | |
void | close_connection_shared (int disable_purge, ACE_Event_Handler *eh) |
Close the underlying connection, implements the code shared by all the close_connection_* variants. More... | |
TAO_Transport (const TAO_Transport &) | |
Prohibited. More... | |
void | operator= (const TAO_Transport &) |
Friends | |
class | TAO_Block_Flushing_Strategy |
This class needs privileged access to - queue_is_empty_i() - drain_queue_i(). More... | |
class | TAO_Reactive_Flushing_Strategy |
These classes need privileged access to: - schedule_output_i() - cancel_output_i(). More... | |
class | TAO_Leader_Follower_Flushing_Strategy |
class | TAO_Transport_Cache_Manager |
This class needs privileged access to: close_connection_no_purge (). More... |
The transport object is created in the Service Handler's constructor and deleted in the Service Handler's destructor!
The main responsibility of a Transport object is to encapsulate a connection and provide a transport-independent way to send and receive data. Since TAO is heavily based on the Reactor for most, if not all, of its I/O, the Transport class is usually implemented with a helper Connection Handler that adapts the generic Transport interface to the Reactor types.
One of the responsibilities of the TAO_Transport class is to send out GIOP messages as efficiently as possible. In most cases messages are put out in FIFO order: the transport object writes the message using a single system call and returns control to the application. However, for oneways and AMI requests it may be more efficient (or required, if the SYNC_NONE policy is in effect) to queue the messages until a large enough data set is available. Another reason to queue is that some applications cannot block for I/O, yet they want to send messages so large that a single write() operation would not be able to cope with them. In such cases we need to queue the data and use the Reactor to drain the queue.
Therefore, the Transport class may need to use a queue to temporarily hold the messages, and, in some configurations, it may need to use the Reactor to concurrently drain such queues.
Consequently, the Transport must also know whether the head of the queue has been partially sent. In that case new messages can only follow the head; only once the head is completely sent can we start sending new messages.
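The queueing discipline described above can be sketched in a small standalone example. This is not TAO's actual code: QueuedMessage and OutgoingQueue are hypothetical names, and a simulated drain() stands in for the real writev()-based I/O. The point it illustrates is that a partially sent head keeps its offset, and messages behind it only go on the wire after the head completes.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <deque>
#include <string>

// Hypothetical sketch, not TAO's classes: an outgoing FIFO queue whose
// head message may be partially sent.
struct QueuedMessage {
    std::string data;
    std::size_t offset = 0;  // bytes of this message already on the wire
    std::size_t remaining() const { return data.size() - offset; }
};

class OutgoingQueue {
public:
    void push(std::string msg) { queue_.push_back({std::move(msg), 0}); }

    // Simulate a (possibly partial) write of up to max_bytes; returns
    // exactly the bytes that went "on the wire", in FIFO order.
    std::string drain(std::size_t max_bytes) {
        std::string wire;
        while (!queue_.empty() && wire.size() < max_bytes) {
            QueuedMessage &head = queue_.front();
            std::size_t n = std::min(head.remaining(), max_bytes - wire.size());
            wire.append(head.data, head.offset, n);
            head.offset += n;            // remember how far the head got
            if (head.remaining() == 0)   // head fully sent: pop it, so the
                queue_.pop_front();      // next message may now follow
        }
        return wire;
    }

    bool empty() const { return queue_.empty(); }

private:
    std::deque<QueuedMessage> queue_;
};
```

A short write leaves the head in place with a non-zero offset; the next drain resumes from that offset before touching any later message.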
Blocking I/O is still attractive for some applications. First, by eliminating the Reactor overhead, performance improves when sending large blocks of data. Second, using the Reactor to send out data opens the door for nested upcalls, and some applications cannot deal with the reentrancy issues this creates.
One of the main responsibilities of the transport is to read and process the incoming GIOP message as quickly and efficiently as possible. There are other forces that need to be given due consideration:
The messages should be checked for validity and the right information should be sent to the higher layers for processing. The sanity checking and the preparation of messages for the higher layers of the ORB are done by the messaging protocol.
To keep things as efficient as possible for medium-sized requests, it would be good to minimise data copying and locking along the incoming path, i.e. from the time the data is read from the handle until it reaches the application. We achieve this by creating a buffer on the stack and reading the data from the handle into that buffer. We then pass the same data block (the buffer is encapsulated in a data block) to the higher layers of the ORB. The problems stem from the following: (a) the data is bigger than the buffer we have on the stack; (b) transports like TCP do not guarantee availability of the whole chunk of data in one shot, so data could trickle in byte by byte; (c) a single read can yield multiple messages.
We solve these problems as follows:
(a) First do a read into the buffer on the stack. Query the underlying messaging object whether the message has any incomplete portion. If so, grow the buffer by the missing size and read the rest of the message. We then free the handle and send the message to the higher layers of the ORB for processing.
(b) If we would block (i.e. we receive an EWOULDBLOCK) while trying to do the above (reading after growing the buffer), we put the message in a queue and return to the Reactor. The Reactor will call us back when the handle becomes read-ready.
(c) If we get multiple messages (possible if the client connected to the server sends oneways or AMI requests), we parse and split the messages; every message is put in the queue. Once the messages are queued, the thread picks up one message to send to the higher layers of the ORB. Before doing that, if it finds more messages, it sends a notify to the Reactor without resuming the handle. The next thread picks up a message from the queue and processes it. Once the queue is drained, the last thread resumes the handle.
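Cases (a) and (c) above can be illustrated with a small sketch. This is not TAO's parsing code and not GIOP framing: it assumes a toy protocol where a one-byte length prefix stands in for the GIOP header, and shows how the bytes from one read are split into complete messages plus a count of how many bytes the trailing partial message is still missing (the amount by which the stack buffer would have to grow).

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical sketch: each message is a 1-byte length followed by the
// payload.  One read() may deliver several messages and/or a partial one.
struct ParseResult {
    std::vector<std::string> complete;  // messages ready for the upper layers
    std::size_t missing = 0;            // bytes still needed for the last one
};

ParseResult parse_messages(const std::string &buffer) {
    ParseResult r;
    std::size_t pos = 0;
    while (pos < buffer.size()) {
        std::size_t len = static_cast<unsigned char>(buffer[pos]);  // "header"
        if (pos + 1 + len > buffer.size()) {              // partial message:
            r.missing = (pos + 1 + len) - buffer.size();  // grow and re-read
            break;
        }
        r.complete.push_back(buffer.substr(pos + 1, len));  // queue for upcall
        pos += 1 + len;
    }
    return r;
}
```

With real GIOP, the "how much is missing" question is answered by the messaging object rather than by a length byte, but the control flow is the same.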
We could use the outgoing path of the ORB to send replies, which would allow us to reuse most of the code in the outgoing data path. We did this until TAO 1.2.3, but ran into problems: when writing a reply the ORB could get flow controlled and would try to flush the message by going into the Reactor. This resulted in unnecessary nesting. The thread that enters the Reactor could potentially handle other messages (incoming or outgoing), and the stack starts growing, leading to crashes.
The solution that we (plan to) adopt is pretty straightforward: the thread sending replies will not block to send them, but will queue the replies and return to the Reactor. (Note the careful usage of the terms "blocking in the Reactor" as opposed to "returning to the Reactor".)
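A minimal sketch of that reply path, under the assumption of a plain FIFO queue (ReplySender is a hypothetical name, not a TAO class): the servant thread only enqueues and returns, and the Reactor's handle_output() callback drains the queue later, so no thread ever blocks or nests into the Reactor while replying.

```cpp
#include <cassert>
#include <cstddef>
#include <deque>
#include <string>

// Hypothetical sketch of the non-blocking reply path.
class ReplySender {
public:
    // Called by the servant thread: queue the reply, never block.
    void send_reply(std::string reply) { queue_.push_back(std::move(reply)); }

    // Called later from the Reactor's handle_output(): drain one message.
    // Returns false once the queue is empty (callbacks can then be cancelled).
    bool handle_output(std::string &out) {
        if (queue_.empty()) return false;
        out = std::move(queue_.front());
        queue_.pop_front();
        return true;
    }

    std::size_t pending() const { return queue_.size(); }

private:
    std::deque<std::string> queue_;
};
```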
See Also:
|
|
|
Default constructor; requires the tag value to be supplied.
|
|
destructor.
|
|
Prohibited.
|
|
|
|
|
|
Get/Set the bidirectional flag.
|
|
|
|
Set/Get the Cache Map entry.
|
|
Cancel handle_output() callbacks.
|
|
Check if the buffering constraints have been reached.
|
|
|
|
Exactly <byte_count> bytes have been sent, the queue must be cleaned up as potentially several messages have been completely sent out. It leaves on head_ the next message to send out. |
|
Call the implementation method after obtaining the lock.
|
|
Implement close_connection() assuming the handler_lock_ is held.
|
|
Close the underlying connection, do not purge the entry from the map (supposedly it was purged already, trust the caller, yuck!).
|
|
Close the underlying connection, implements the code shared by all the close_connection_* variants.
|
|
Method for the connection handler to signify that it is being closed and destroyed.
|
|
Reimplemented in TAO_IIOP_Transport. |
|
Called by parse_consolidate_messages () if we have more than one message in a single read. Queue up the messages and try to process one of them, at least the one at the head.
|
|
Bala: Docu??? |
|
Consolidate the currently read message or consolidate the last message in the queue. The consolidation of the last message in the queue is done by calling consolidate_message_queue ().
|
|
First consolidate the message queue. If the message is still not complete, try to read from the handle again to make it complete. If that doesn't help, put the message back in the queue and check whether the queue has a message to process (the thread needs to do some work anyway :-)).
|
|
As the outgoing data is drained this method is invoked to send as much of the current message as possible. Returns 0 if there is more data to send, -1 if there was an error and 1 if the message was completely sent. |
|
A helper routine used in drain_queue_i().
|
|
Implement drain_queue() assuming the lock is held.
|
|
Normally a concrete TAO_Transport object has-a ACE_Event_Handler member that functions as an adapter between the ACE_Reactor framework and the TAO pluggable protocol framework. In all the protocols implemented so far this role is fulfilled by an instance of ACE_Svc_Handler.
Reimplemented in TAO_IIOP_Transport. |
|
Check if the flush timer is still pending.
|
|
This is a request for the transport object to write a LocateRequest header before it is sent out.
|
|
This is a request for the transport object to write a request header before it sends out the request.
Reimplemented in TAO_IIOP_Transport. |
|
The ACE_Event_Handler adapter invokes this method as part of its handle_input() operation.
Once a complete message is read the Transport class delegates on the Messaging layer to invoke the right upcall (on the server) or the TAO_Reply_Dispatcher (on the client side).
|
|
Callback method to reactively drain the outgoing data queue.
|
|
|
|
|
|
If not set, this will return an integer representation of the this pointer for the instance on which it's called.
|
|
Request is sent and the reply is received. Idle the transport now.
|
|
The request has just been sent, but the reply has not been received. Idle the transport now.
|
|
Grab the mutex and then call invalidate_event_handler_i().
|
|
Typically, this just sets the pointer to the associated connection handler to zero, although it could also clear out any additional resources associated with the handler association.
Reimplemented in TAO_IIOP_Transport. |
|
Cache management.
|
|
Make a queued data from the <incoming> message block.
|
|
Initialising the messaging object. This would be used by the connector side. On the acceptor side the connection handler would take care of the messaging objects. Reimplemented in TAO_IIOP_Transport. |
|
Return the messaging object that is used to format the data that needs to be sent.
Reimplemented in TAO_IIOP_Transport. |
|
Return whether we have any missing data in the queue of messages, or determine how much data the currently read message still needs to be complete.
|
|
|
|
|
|
Access the ORB that owns this connection.
|
|
Called by handle_input_i (). This method parses the messages read by the handle_input_i () call and decides whether a message needs consolidation before processing.
|
|
Parses the message if we have a fresh message in the <message_block>, or just returns if we have read only part of a previously stored message.
|
|
Process the message by sending it to the higher layers of the ORB.
|
|
|
|
Called by the cache when the cache is closing in order to fill in a handle_set in a thread-safe manner.
|
|
Cache management.
|
|
|
|
Get and Set the purging order. The purging strategy uses the set version to set the purging order.
|
|
|
|
This version assumes that the lock is already held. Use with care!
|
|
recache ourselves in the cache.
|
|
This method serializes on handler_lock_, guaranteeing that only one thread can execute it on the same instance concurrently.
|
|
Reimplemented in TAO_IIOP_Transport. |
|
Register the handler with the reactor. This method is used by the Wait_On_Reactor strategy. The transport must register its event handler with the ORB's Reactor. |
|
This method is used by the Wait_On_Reactor strategy. The transport must register its event handler with the ORB's Reactor.
Reimplemented in TAO_IIOP_Transport. |
|
|
|
Print out error messages if the event handler is not valid.
|
|
The flush timer expired or was explicitly cancelled, mark it as not pending.
|
|
Schedule handle_output() callbacks.
|
|
This method serializes on handler_lock_, guaranteeing that only one thread can execute it on the same instance concurrently. Often the implementation simply forwards the arguments to the underlying ACE_Svc_Handler class, using the code factored out into ACE. Be careful with protocols that perform non-trivial transformations of the data, such as SSLIOP, or protocols that compress the stream.
|
|
Notify all the components inside a Transport when the underlying connection is closed.
|
|
Often the implementation simply forwards the arguments to the underlying ACE_Svc_Handler class, using the code factored out into ACE. Be careful with protocols that perform non-trivial transformations of the data, such as SSLIOP, or protocols that compress the stream.
Reimplemented in TAO_IIOP_Transport. |
|
Once the ORB is prepared to receive a reply (see send_request() above), and all the arguments have been marshaled the CDR stream must be 'formatted', i.e. the message_size field in the GIOP header can finally be set to the proper value. Reimplemented in TAO_IIOP_Transport. |
|
Send a message block chain,.
|
|
Send a message block chain, assuming the lock is held.
|
|
Reimplemented in TAO_IIOP_Transport. |
|
Implement send_message_shared() assuming the handler_lock_ is held.
|
|
Send a reply message, i.e. do not block until the message is on the wire; just return after adding it to the queue.
|
|
Preparing the ORB to receive the reply only once the request is completely sent opens the system to some subtle race conditions: suppose the ORB is running in a multi-threaded configuration, and thread A makes a request while thread B is using the Reactor to process all incoming requests. Thread A could be implemented as follows: (1) send the request, (2) set up the ORB to receive the reply, (3) wait for the reply. But in this case thread B may receive the reply between steps (1) and (2), and drop it as an invalid or unexpected message. Consequently the correct implementation is: (1) set up the ORB to receive the reply, (2) send the request, (3) wait for the reply. The following method encapsulates this idiom.
Reimplemented in TAO_IIOP_Transport. |
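The idiom can be demonstrated with a deliberately hostile toy model (MiniORB is a hypothetical name, not a TAO class): here the "network" delivers the reply synchronously inside send(), modelling the worst-case race where the reply arrives before the caller's next statement runs. Registering the reply handler before sending is then the only ordering that never drops the reply.

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>

// Hypothetical sketch of the ordering idiom behind send_request().
class MiniORB {
public:
    // Step (1) of the correct idiom: register the reply handler first.
    void expect_reply(int id, std::function<void(const std::string &)> h) {
        pending_[id] = std::move(h);
    }

    // Step (2): write the request.  The reply may be dispatched before
    // this call even returns, which is exactly what we simulate here.
    void send(int id, const std::string &req) {
        dispatch_reply(id, "reply-to-" + req);  // worst case: instant reply
    }

    bool dropped = false;  // set when a reply arrives with no handler

private:
    void dispatch_reply(int id, const std::string &reply) {
        auto it = pending_.find(id);
        if (it == pending_.end()) { dropped = true; return; }  // unexpected
        it->second(reply);
        pending_.erase(it);
    }

    std::map<int, std::function<void(const std::string &)>> pending_;
};
```

Swapping the two steps (calling send() before expect_reply()) makes the model drop the reply, mirroring the race described above.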
|
A helper method used by <send_synchronous_message_i> and <send_reply_message_i>. Reusable code that could be used by both the methods.
|
|
Send a synchronous message, i.e. block until the message is on the wire.
|
|
The OMG assigns unique tags (a 32-bit unsigned number) to each protocol. New protocol tags can be obtained free of charge from the OMG, check the documents in corbafwd.h for more details. |
|
Extracts the list of listen points from the <cdr> stream. The list contains the protocol-specific details of the ListenPoints.
Reimplemented in TAO_IIOP_Transport. |
|
The role of the TAO_Transport_Mux_Strategy is described in more detail in that class' documentation. Suffice it to say that the class is used to control how many threads can have pending requests over the same connection. Multiplexing multiple threads over the same connection conserves resources and is almost required for AMI, but having only one pending request per connection is more efficient and reduces the possibility of priority inversions. |
|
Helper method that returns the Transport Cache Manager.
|
|
The role of the TAO_Wait_Strategy is described in more detail in that class' documentation. Suffice it to say that the ORB can wait for a reply by blocking on read(), by using the Reactor to wait for multiple events concurrently, or by using the Leader/Followers protocol. |
|
This class needs privileged access to - queue_is_empty_i() - drain_queue_i().
|
|
|
|
These classes need privileged access to: - schedule_output_i() - cancel_output_i().
|
|
This class needs privileged access to: close_connection_no_purge ().
|
|
Whether we have sent bidirectional information, or received a request to make the connection served by this transport bidirectional. The flag is used as follows: + We don't want to send the bidirectional context info more than once on the connection. Why? It wastes marshalling and demarshalling time on the client. + On the server side, once a client that has established the connection asks the server to use the connection both ways, we *don't* want the server to pack service info to the client. That is not allowed. We need a flag to prevent such a thing from happening. The value of this flag will be 0 if the client sends the info and 1 if the server receives it. |
|
Our entry in the cache. We don't own this. It is here for our convenience. We cannot just change things around.
|
|
The queue will start draining no later than <queuing_deadline_>, *if* a deadline has been set.
|
|
The timer ID.
|
|
This is an ACE_Lock that gets initialized from TAO_ORB_Core::resource_factory()->create_cached_connection_lock(). This way, one can use a lock appropriate for the type of system, i.e., a null lock for single-threaded systems and a real lock for multi-threaded systems. |
|
Implement the outgoing data queue.
|
|
This never *never* changes over the lifespan, so we don't have to worry about locking it. HINT: Protocol-specific transports that use connection handler might choose to set this to the handle for their connection. |
|
Queue of the incoming messages.
|
|
Global orbcore resource.
|
|
Used by the LRU, LFU and FIFO Connection Purging Strategies.
|
|
IOP protocol tag.
|
|
|
|
Strategy to decide whether multiple requests can be sent over the same connection or the connection is exclusive for a request.
|
|
The adapter used to receive timeout callbacks from the Reactor.
|
|
Strategy for waiting for the reply after sending the request.
|