31.8 Life Cycle and Parallelism

With the design we have explored so far, we get the following degree of concurrency:
• All operations on the factory are serialized, so only one of create, list, getDetails, and find can execute at a time.
• Concurrent operation invocations on the same servant are serialized, but concurrent operation invocations on different servants can proceed in parallel.
For the vast majority of applications, this degree of concurrency is entirely adequate: life cycle operations are rare compared to normal operations, as are concurrent invocations on the same servant. However, for some applications, serializing operations such as list and find can be a problem, particularly if they are implemented by iterating over a large number of records in a collection of files or a database. In that case, the operations might take quite some time to complete. Also, list, getDetails, and find do not change any client-visible state so, on the face of it, there is no reason to prevent clients from executing these operations concurrently.
If you find that you need the extra concurrency, you can interlock create, list, getDetails, and find with a read–write recursive mutex (see Section 27.6).1 This mutex provides separate operations for acquiring a read lock and a write lock. Multiple readers can hold a read lock concurrently, but a write lock requires exclusive access: the write lock is granted to exactly one writer, and only once no readers or writers hold the lock. If a writer is waiting for the write lock, readers attempting to acquire a read lock are delayed; that is, writers are given preference and acquire the write lock as soon as the last reader releases its lock.
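To make these semantics concrete, here is a minimal sketch using the IceUtil::RWRecMutex class described below; the header name and the interleaving shown in the comments are illustrative assumptions:

#include <IceUtil/RWRecMutex.h>   // Header name assumed.

IceUtil::RWRecMutex m;

void
readSharedState()
{
    IceUtil::RWRecMutex::RLock lock(m);   // Shared: many readers at once.
    // ... read, but do not modify, shared state ...
}                                         // Read lock released here.

void
writeSharedState()
{
    IceUtil::RWRecMutex::WLock lock(m);   // Exclusive: granted only once no
                                          // reader or writer holds the lock.
    // ... modify shared state ...
}

// Possible interleaving:
//   thread A calls readSharedState()   -> read lock granted
//   thread B calls readSharedState()   -> read lock granted (shared with A)
//   thread C calls writeSharedState()  -> blocks; C is now a waiting writer
//   thread D calls readSharedState()   -> blocks behind the waiting writer
//   A and B release their read locks   -> C acquires the write lock
//   C releases the write lock          -> D acquires its read lock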
Ice provides a read–write recursive mutex with the IceUtil::RWRecMutex class. We can gain increased parallelism by changing the type of _lcMutex to IceUtil::RWRecMutex: create then acquires a write lock on this mutex, while list, getDetails, and find acquire a read lock. This allows calls to list, getDetails, and find to proceed concurrently, whereas create can run only while no read operations and no other call to create are in progress:
class PhoneEntryFactoryI : public PhoneEntryFactory
{
public:
    // ...

private:
    IceUtil::RWRecMutex _lcMutex;
};

PhoneEntryPrx
PhoneEntryFactoryI::create(const string& name,
                           const string& phNum,
                           const Current& c)
{
    IceUtil::RWRecMutex::WLock lock(_lcMutex);   // Write lock

    // Implementation as before...
}

PhoneEntries
PhoneEntryFactoryI::list(const Current&)
{
    IceUtil::RWRecMutex::RLock lock(_lcMutex);   // Read lock

    // Implementation as before, but no reaping.
}

DetailsSeq
PhoneEntryFactoryI::getDetails(const Current&)
{
    IceUtil::RWRecMutex::RLock lock(_lcMutex);   // Read lock

    // Implementation here, without reaping.
}

PhoneEntryPrx
PhoneEntryFactoryI::find(const string& name, const Current&)
{
    IceUtil::RWRecMutex::RLock lock(_lcMutex);   // Read lock

    // Implementation as before, but no reaping.
}
Note that list, getDetails, and find can no longer do any reaping because they acquire only a read lock, whereas reaping requires a write lock because it modifies the factory's state. In turn, this means that these operations must make sure that they do not return any zombies, for example by testing a flag in each servant, as in the sketch below.
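The following sketch illustrates one way to do this; the names used here (isZombie, _destroyed, _m) are illustrative assumptions rather than code from the earlier examples. destroy marks its servant as a zombie under the servant's own mutex, and the read-locked factory operations test that flag and simply skip such servants:

class PhoneEntryI : public PhoneEntry
{
public:
    // ...

    bool isZombie()
    {
        IceUtil::Mutex::Lock lock(_m);
        return _destroyed;
    }

    virtual void destroy(const Current&)
    {
        IceUtil::Mutex::Lock lock(_m);
        _destroyed = true;
        // The servant is now a zombie. It is removed from the factory's
        // map (and from the ASM) later, by an operation that holds the
        // factory's write lock, such as create.
    }

private:
    IceUtil::Mutex _m;
    bool _destroyed;
};

// Inside list, getDetails, and find (read lock held), where p iterates
// over the factory's servant map:
//
//     if (p->second->isZombie())
//         continue;   // Skip zombies instead of reaping them.

With this arrangement, the read-locked operations avoid returning entries that are already destroyed, while the actual removal of an entry is deferred until an operation holds the factory's write lock.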
If you need even further increases in parallelism, you can also use a read–write recursive mutex for accessors and mutators on your servants. For example, getNumber could obtain a read lock, and setNumber and destroy could obtain a write lock. That way, multiple clients can concurrently call the getNumber operation on the same servant, and are serialized only for write operations.
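The sketch below shows this variation on the servant, with the servant's plain mutex replaced by a read–write mutex; the operation signatures and the _phNum member are assumptions for the purpose of illustration:

class PhoneEntryI : public PhoneEntry
{
public:
    virtual string getNumber(const Current&)
    {
        IceUtil::RWRecMutex::RLock lock(_m);   // Read lock: queries can
                                               // proceed concurrently.
        return _phNum;
    }

    virtual void setNumber(const string& phNum, const Current&)
    {
        IceUtil::RWRecMutex::WLock lock(_m);   // Write lock: exclusive.
        _phNum = phNum;
    }

    virtual void destroy(const Current&)
    {
        IceUtil::RWRecMutex::WLock lock(_m);   // Write lock: exclusive.
        _destroyed = true;
    }

private:
    IceUtil::RWRecMutex _m;
    string _phNum;
    bool _destroyed;
};

Note that destroy acquires the write lock as well: even though it does not touch _phNum, it changes the servant's state and must not run while a reader is examining it.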
However, this level of fine-grained locking is rarely necessary. (Note that read–write mutexes are both slower and larger than ordinary mutexes; see Section 27.10.) Before you implement such locking, you should make sure that it is actually worthwhile, that is, you should demonstrate that the inability of clients to execute operations concurrently on the same servant significantly degrades performance. This is the case only if many clients are interested in the same servant and the operations they invoke are long-running. Usually, this is not the case, and you are better off keeping the locking simple.
Remember, the more locks and the more intricate the locking strategies you devise, the more likely it is that the code contains an error that leads to a race condition or a deadlock. As a rule, when it comes to threading, simpler is better and, more often than not, just as fast.

1. As of version 1.5, Java provides the ReentrantReadWriteLock class, and .NET provides the ReaderWriterLock class, both of which serve the same purpose.
