Now that we have had a look at the issues around object life cycle, let us return to our file system application and add life cycle operations to it, so clients can create and destroy files and directories.
module Filesystem {
exception GenericError {
string reason;
};
exception PermissionDenied extends GenericError {};
exception NameInUse extends GenericError {};
exception NoSuchName extends GenericError {};
interface Node {
idempotent string name();
void destroy() throws PermissionDenied;
};
// ...
};
Note that destroy can throw a
PermissionDenied exception. This is necessary because we must prevent attempts to destroy the root directory.
The File interface is the same as the one we saw in
Chapter 5:
module Filesystem {
// ...
sequence<string> Lines;
interface File extends Node {
idempotent Lines read();
idempotent void write(Lines text) throws GenericError;
};
// ...
};
Note that, because File derives from
Node, it inherits the
destroy operation we defined for
Node.
The Directory interface now looks somewhat different from the previous version:
• The list operation returns a sequence of structures instead of a list of proxies: for each entry in a directory, the NodeDesc structure provides the name, type, and proxy of the corresponding file or directory.
• Directories provide a find operation that returns the description of the nominated node. If the nominated node does not exist, the operation throws a NoSuchName exception.
• The createFile and createDirectory operations create a file and a directory, respectively. If a file or directory with the given name already exists, the operations throw a NameInUse exception.
module Filesystem {
// ...
enum NodeType { DirType, FileType };
struct NodeDesc {
string name;
NodeType type;
Node* proxy;
};
sequence<NodeDesc> NodeDescSeq;
interface Directory extends Node {
idempotent NodeDescSeq list();
idempotent NodeDesc find(string name) throws NoSuchName;
File* createFile(string name) throws NameInUse;
Directory* createDirectory(string name) throws NameInUse;
};
};
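To illustrate how a client uses these operations, here is a minimal client-side sketch in C++; the rootDir proxy and the error handling are assumptions for illustration, not part of the demo code:

// Assumes 'rootDir' is a valid Filesystem::DirectoryPrx, for example
// obtained from a stringified proxy.
try {
    Filesystem::FilePrx readme = rootDir->createFile("README");
    Filesystem::Lines text;
    text.push_back("Created via the life cycle interface.");
    readme->write(text);
    readme->destroy(); // Remove the file again.
} catch (const Filesystem::GenericError& ex) {
    // Covers NameInUse and PermissionDenied as well.
    cerr << "error: " << ex.reason << endl;
}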
Note that this design is somewhat different from the factory design we saw in Section 31.5.1. In particular, we do not have a single object factory; instead, we have as many factories as there are directories, that is, each directory creates files and directories only in that directory.
The following two sections describe the implementation of this design in C++ and Java. You can find the full code of the implementation (including languages other than C++ and Java) in the
demo/book/lifecycle directory of your Ice distribution.
• When destroy is called on a node, the node needs to destroy itself and inform its parent directory that it has been destroyed (because the parent directory is the node's factory and also acts as a collection manager for child nodes). To avoid the potential deadlock issues we discussed in Section 31.7, this implementation uses reaping instead of calling into the parent.
Note that, in contrast to the code in Chapter 9, the entire implementation resides in a
FilesystemI namespace instead of being part of the
Filesystem namespace. Doing this is not essential, but is a little cleaner because it keeps the implementation in a namespace that is separate from the Slice-generated namespace.
The NodeI Base Class
namespace FilesystemI {
class DirectoryI;
typedef IceUtil::Handle<DirectoryI> DirectoryIPtr;
class FileI;
typedef IceUtil::Handle<FileI> FileIPtr;
class NodeI;
typedef IceUtil::Handle<NodeI> NodeIPtr;
class NodeI : public virtual Filesystem::Node {
public:
virtual std::string name(const Ice::Current&);
Ice::Identity id() const;
Ice::ObjectPrx activate(const Ice::ObjectAdapterPtr&);
protected:
NodeI(const std::string& name,
const DirectoryIPtr& parent);
const std::string _name;
const DirectoryIPtr _parent;
bool _destroyed;
Ice::Identity _id;
IceUtil::Mutex _m;
};
// ...
}
The purpose of the NodeI class is to provide the data and implementation that are common to both
FileI and
DirectoryI, which use implementation inheritance from
NodeI.
As in Chapter 9,
NodeI provides the implementation of the
name operation and stores the name of the node and its parent directory in the
_name and
_parent members. (The root directory’s
_parent member is null.) These members are immutable and initialized by the constructor and, therefore,
const.
The _destroyed member, protected by the mutex
_m, prevents the race condition we discussed in
Section 31.6.5. The constructor initializes
_destroyed to
false and creates an identity for the node (stored in the
_id member):
FilesystemI::NodeI::NodeI(const string& name,
const DirectoryIPtr& parent)
: _name(name), _parent(parent), _destroyed(false)
{
_id.name = parent ? IceUtil::generateUUID() : "RootDir";
}
The id member function returns a node’s identity, stored in the
_id data member. The node must remember this identity because it is a UUID and is needed when we create a proxy to the node:
Identity
FilesystemI::NodeI::id() const
{
return _id;
}
The activate member function adds the servant to the ASM and connects the servant to the parent directory’s
_contents map:
ObjectPrx
FilesystemI::NodeI::activate(const ObjectAdapterPtr& a)
{
ObjectPrx node = a->add(this, _id);
if (_parent)
    _parent->addChild(_name, this);
return node;
}
The data members of NodeI are protected instead of private to keep them accessible to the derived
FileI and
DirectoryI classes. (Because the implementation of
NodeI and its derived classes is quite tightly coupled, there is little point in making these members private and providing separate accessors and mutators for them.)
The implementation of the Slice name operation simply returns the name of the node, but also checks whether the node has been destroyed, as described in
Section 31.6.5:
string
FilesystemI::NodeI::name(const Current&)
{
IceUtil::Mutex::Lock lock(_m);
if (_destroyed)
throw ObjectNotExistException(__FILE__, __LINE__);
return _name;
}
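On the client side, an invocation on a destroyed node therefore surfaces as Ice::ObjectNotExistException. A hypothetical client fragment, in which node is a proxy to a node that another client may have destroyed concurrently:

try {
    cout << node->name() << endl;
} catch (const Ice::ObjectNotExistException&) {
    cerr << "node no longer exists" << endl;
}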
The DirectoryI Class
Next, we need to look at the implementation of directories. The DirectoryI class derives from
NodeI and the Slice-generated
Directory skeleton class. Of course, it must implement the pure virtual member functions for its Slice operations, which leads to the following (not yet complete) definition:
namespace FilesystemI {
// ...
class DirectoryI : virtual public NodeI,
virtual public Filesystem::Directory {
public:
virtual Filesystem::NodeDescSeq list(const Ice::Current&);
virtual Filesystem::NodeDesc find(const std::string&,
const Ice::Current&);
virtual Filesystem::FilePrx createFile(const std::string&,
const Ice::Current&);
virtual Filesystem::DirectoryPrx
createDirectory(const std::string&,
const Ice::Current&);
virtual void destroy(const Ice::Current&);
// ...
private:
// ...
};
}
namespace FilesystemI {
// ...
class DirectoryI : virtual public NodeI,
virtual public Filesystem::Directory {
public:
// ...
DirectoryI(const std::string& name,
const DirectoryIPtr& parent = 0);
void addChild(const std::string& name,
const NodeIPtr& node);
static IceUtil::StaticMutex _lcMutex;
private:
typedef std::map<std::string, NodeIPtr> Contents;
Contents _contents;
// ...
};
}
Note that, as for our design in Section 31.6.4, we use a member
_lcMutex to interlock life cycle operations. This member is static because there is a single mutex for all directories, so life cycle operations on
all directories are interlocked, instead of life cycle operations for each directory separately. (We will see the motivation for this decision shortly.)
The addChild member function is provided by the parent and adds the child to the parent’s
_contents map:
void
FilesystemI::DirectoryI::addChild(const string& name,
const NodeIPtr& node)
{
_contents[name] = node;
}
Note that, except for the root directory, this means that addChild must be called with _lcMutex locked, to prevent concurrent modification of the parent's _contents map.
Before we can look at how to implement the createDirectory, createFile, and destroy operations, we need to consider reaping. As mentioned previously, we have as many factories as we have directories. This immediately raises the question of how we should implement object destruction. On page 992, we mentioned a potential problem with reaping: it increases the cost of createFile, createDirectory, and find to O(n) whereas, without reaping, these operations can be implemented in O(log n) time (where n is the number of nodes created by the factory).
For our application, with its multiple factories, the question is how we should reap. For example, if a client calls find on a particular directory, we can choose to reap only zombie nodes created by this directory. O(n) performance for this is acceptable if we assume that the directory does not have many entries (say, fewer than one hundred). However, for directories with thousands of entries, this approach is no longer feasible.
There is also another problem: if reaping only reaps zombies that are children of the directory on which an operation is invoked, the number of zombies could easily pile up. For example, a client might delete one thousand files in a particular directory but then leave that directory untouched. In that case, the server would hang onto one thousand servants until, finally,
list,
find, or a create operation is called on that same directory.
This suggests that we need an implementation of reaping that gets rid of all zombie servants, not just those of the directory on which
list,
find, or a create operation is called. In addition, we need an implementation that does not impose an undue performance penalty on
find or the create operations.
As suggested on page 993, we can implement reaping efficiently by avoiding a scan for zombies. Instead, we add each zombie to a separate data structure so, when it is time to reap, we only deal with servants that are known to be zombies, instead of having to scan for them. Here are the relevant definitions for
DirectoryI to implement this:
namespace FilesystemI {
// ...
class DirectoryI : virtual public NodeI,
virtual public Filesystem::Directory {
public:
// ...
void addReapEntry(const std::string&);
static IceUtil::StaticMutex _lcMutex;
private:
typedef std::map<std::string, NodeIPtr> Contents;
Contents _contents;
typedef std::map<DirectoryIPtr,
std::vector<std::string> > ReapMap;
static ReapMap _reapMap;
static void reap();
};
}
The main supporting data structure is the _reapMap data member, which maps a directory servant to its deleted entries. When a client calls destroy on a node, the node adds its parent directory and its name to this map by calling the addReapEntry member function on its parent directory. The reap member function iterates over the map and, for each entry, removes the corresponding pair from the parent's _contents map. This reduces the cost of reaping a servant from O(n) (where n is the number of nodes that were created by a directory) to O(log n). The cost of reaping all servants is then O(z log d), where z is the total number of zombies and d is the average number of entries per directory. Moreover, whenever we call reap, all zombie servants are reclaimed, not just those of a particular directory. With this implementation, reaping is efficient even for file systems with millions of nodes.
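To make this cost concrete with illustrative numbers: if clients have destroyed z = 1,000 nodes whose parent directories hold on average d = 100 entries, a single call to reap performs roughly 1,000 × log2 100, or about 6,600, map operations, regardless of how many nodes the file system contains in total.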
Implementing addReapEntry is straightforward:
FilesystemI::DirectoryI::ReapMap
FilesystemI::DirectoryI::_reapMap;
void
FilesystemI::DirectoryI::addReapEntry(const string& name)
{
ReapMap::iterator pos = _reapMap.find(this);
if (pos != _reapMap.end()) {
pos->second.push_back(name);
} else {
vector<string> v;
v.push_back(name);
_reapMap[this] = v;
}
}
The member function simply adds this and the passed name to the map or, if
this is already in the map, adds the name to the vector of names that are already present for this entry.
The implementation of reap is also very simple: for each entry in the map, it removes the names that are stored with that entry from the corresponding parent before clearing the reap map:
void
FilesystemI::DirectoryI::reap()
{
for (ReapMap::const_iterator i = _reapMap.begin();
i != _reapMap.end(); ++i) {
for (vector<string>::const_iterator j = i->second.begin();
j != i->second.end(); ++j) {
i->first->_contents.erase(*j);
}
}
_reapMap.clear();
}
Here is the destroy member function for directories:
void
FilesystemI::DirectoryI::destroy(const Current& c)
{
if (!_parent)
throw PermissionDenied("Cannot destroy root directory");
IceUtil::Mutex::Lock lock(_m);
if (_destroyed)
throw ObjectNotExistException(__FILE__, __LINE__);
IceUtil::StaticMutex::Lock lcLock(_lcMutex);
reap();
if (!_contents.empty())
throw PermissionDenied("Directory not empty");
c.adapter->remove(id());
_parent‑>addReapEntry(_name);
_destroyed = true;
}
The code first prevents destruction of the root directory and then checks whether this directory was destroyed previously. It then acquires the life cycle lock, reaps any zombie entries, and checks that the directory is empty. Note that it is important to call reap before scanning the _contents map; otherwise, if the directory was emptied earlier but its entries had not been reaped yet, destroy would incorrectly throw a PermissionDenied exception.
Also note that reap and
addReapEntry must be called with
_lcMutex locked; without this lock, concurrent invocations of other life cycle operations could corrupt the reap map.
The code holds the lock on _m while destroy locks _lcMutex. Doing this is necessary because we cannot mark the node as destroyed until we can be sure that it actually will be destroyed, and we can be sure of that only after calling reap because, prior to that, the _contents map may still contain zombies. Finally, destroy removes the ASM entry for the destroyed directory, adds the directory's parent and the name of the destroyed directory to the reap map, and marks the node as destroyed.
The createDirectory implementation locks the life cycle mutex and calls reap to make sure that the _contents map is up to date before checking whether the directory already contains a node with the given name (or an invalid empty name). If not, it creates a new servant and returns its proxy:
DirectoryPrx
FilesystemI::DirectoryI::createDirectory(const string& name,
const Current& c)
{
{
IceUtil::Mutex::Lock lock(_m);
if (_destroyed)
throw ObjectNotExistException(__FILE__, __LINE__);
}
IceUtil::StaticMutex::Lock lock(_lcMutex);
reap();
if (name.empty() || _contents.find(name) != _contents.end())
throw NameInUse(name);
DirectoryIPtr d = new DirectoryI(name, this);
return DirectoryPrx::uncheckedCast(d->activate(c.adapter));
}
The createFile implementation is identical, except that it creates a file instead of a directory:
FilePrx
FilesystemI::DirectoryI::createFile(const string& name,
const Current& c)
{
{
IceUtil::Mutex::Lock lock(_m);
if (_destroyed)
throw ObjectNotExistException(__FILE__, __LINE__);
}
IceUtil::StaticMutex::Lock lock(_lcMutex);
reap();
if (name.empty() || _contents.find(name) != _contents.end())
throw NameInUse(name);
FileIPtr f = new FileI(name, this);
return FilePrx::uncheckedCast(f->activate(c.adapter));
}
NodeDescSeq
FilesystemI::DirectoryI::list(const Current& c)
{
{
IceUtil::Mutex::Lock lock(_m);
if (_destroyed)
throw ObjectNotExistException(__FILE__, __LINE__);
}
IceUtil::StaticMutex::Lock lock(_lcMutex);
reap();
NodeDescSeq ret;
for (Contents::const_iterator i = _contents.begin();
i != _contents.end(); ++i) {
NodeDesc d;
d.name = i->first;
d.type = FilePtr::dynamicCast(i->second)
? FileType : DirType;
d.proxy = NodePrx::uncheckedCast(
c.adapter->createProxy(i->second->id()));
ret.push_back(d);
}
return ret;
}
After acquiring the life cycle lock and calling reap, the code iterates over the directory’s contents and adds a
NodeDesc structure for each entry to the returned vector. (Again, it is important to call
reap before iterating over the
_contents map, to avoid adding entries for already deleted nodes.)
The find operation proceeds along similar lines:
NodeDesc
FilesystemI::DirectoryI::find(const string& name,
const Current& c)
{
{
IceUtil::Mutex::Lock lock(_m);
if (_destroyed)
throw ObjectNotExistException(__FILE__, __LINE__);
}
IceUtil::StaticMutex::Lock lock(_lcMutex);
reap();
Contents::const_iterator pos = _contents.find(name);
if (pos == _contents.end())
{
throw NoSuchName(name);
}
NodeIPtr p = pos->second;
NodeDesc d;
d.name = name;
d.type = FilePtr::dynamicCast(p)
? FileType : DirType;
d.proxy = NodePrx::uncheckedCast(
c.adapter->createProxy(p->id()));
return d;
}
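As a usage illustration, a client can combine list with the type information in each NodeDesc to walk the file system recursively. A sketch (the helper name printTree and the indentation scheme are our own, not part of the demo code):

// Hypothetical client helper: recursively print the tree rooted at 'dir'.
void
printTree(const Filesystem::DirectoryPrx& dir, const string& indent)
{
    Filesystem::NodeDescSeq contents = dir->list();
    for (Filesystem::NodeDescSeq::const_iterator i = contents.begin();
         i != contents.end(); ++i) {
        cout << indent << i->name << endl;
        if (i->type == Filesystem::DirType) {
            printTree(Filesystem::DirectoryPrx::uncheckedCast(i->proxy),
                      indent + "  ");
        }
    }
}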
The FileI Class
The constructor of FileI is trivial: it simply initializes the data members of its base class.
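A minimal sketch of that constructor, assuming it does nothing beyond forwarding to the NodeI constructor shown earlier:

FilesystemI::FileI::FileI(const string& name,
                          const DirectoryIPtr& parent)
    : NodeI(name, parent)
{
}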
The implementation of the three member functions of the FileI class is also trivial, so we present all three member functions here:
Lines
FilesystemI::FileI::read(const Current&)
{
IceUtil::Mutex::Lock lock(_m);
if (_destroyed)
throw ObjectNotExistException(__FILE__, __LINE__);
return _lines;
}
// Slice File::write() operation.
void
FilesystemI::FileI::write(const Lines& text, const Current&)
{
IceUtil::Mutex::Lock lock(_m);
if (_destroyed)
throw ObjectNotExistException(__FILE__, __LINE__);
_lines = text;
}
// Slice File::destroy() operation.
void
FilesystemI::FileI::destroy(const Current& c)
{
{
IceUtil::Mutex::Lock lock(_m);
if (_destroyed)
throw ObjectNotExistException(__FILE__, __LINE__);
_destroyed = true;
}
IceUtil::StaticMutex::Lock lock(DirectoryI::_lcMutex);
c.adapter->remove(id());
_parent->addReapEntry(_name);
}
The preceding implementation is provably deadlock free. Except for DirectoryI::destroy, each member function holds only one lock at a time, so these member functions cannot deadlock with each other or themselves.
DirectoryI::destroy first locks its own mutex
_m and then locks
_lcMutex. However, while the locks are held, the function does not call other member functions that acquire locks, so a potential deadlock can arise only from concurrent calls to
DirectoryI::destroy, either on the same node or on different nodes. For concurrent calls on the same node, deadlock is impossible because such calls are strictly serialized on the mutex
_m; for concurrent calls on different nodes, each node locks its respective mutex
_m and then acquires
_lcMutex before releasing both locks, also making deadlock impossible.
As we discussed in Section 31.8, this implementation strictly serializes life cycle operations as well as
list and
find, and it permits only one operation in each directory and file to execute at a time. However, read and write operations on different files can execute concurrently. In practice, this degree of concurrency will usually be adequate; if you need more parallelism, you can use read–write locks as described earlier. It is also possible to allow a larger degree of parallelism for directories, by using finer-grained locking strategies that permit create operations to proceed in parallel, provided that they are invoked on different directories. However, that approach requires either a per-directory reaping strategy, or making do without reaping, at the cost of having to carefully design the code to be free of deadlocks.
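To illustrate the read–write lock idea, here is a sketch of FileI::read and FileI::write using the C++17 standard library rather than the IceUtil primitives used above; the _rwMutex member (a std::shared_mutex replacing _m for these two operations) is our own addition:

#include <shared_mutex>

// Sketch only: reads share the lock, writes take it exclusively.
Lines
FilesystemI::FileI::read(const Current&)
{
    std::shared_lock<std::shared_mutex> lock(_rwMutex);
    if (_destroyed)
        throw ObjectNotExistException(__FILE__, __LINE__);
    return _lines;
}

void
FilesystemI::FileI::write(const Lines& text, const Current&)
{
    std::unique_lock<std::shared_mutex> lock(_rwMutex);
    if (_destroyed)
        throw ObjectNotExistException(__FILE__, __LINE__);
    _lines = text;
}

With this change, multiple clients can read the same file concurrently, while writes remain exclusive.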
• When destroy is called on a node, the node needs to destroy itself and inform its parent directory that it has been destroyed (because the parent directory is the node's factory and also acts as a collection manager for child nodes). To avoid the potential deadlock issues we discussed in Section 31.7, this implementation uses reaping instead of calling into the parent.
Note that, in contrast to the code in Chapter 13, the entire implementation resides in a
FilesystemI package instead of being part of the
Filesystem package. Doing this is not essential, but is a little cleaner because it keeps the implementation in a package that is separate from the Slice-generated package.
The NodeI Interface
Our DirectoryI and
FileI servants derive from a common
NodeI base interface. This interface is not essential, but useful because it allows us to treat servants of type
DirectoryI and
FileI polymorphically:
package FilesystemI;
public interface NodeI
{
Ice.Identity id();
}
The only method is the id method, which returns the identity of the corresponding node.
The DirectoryI Class
As in Chapter 13, the DirectoryI class extends the generated _DirectoryDisp base class and implements the generated _DirectoryOperations interface. In addition, it implements the NodeI interface.
DirectoryI must implement each of the Slice operations, leading to the following outline:
package FilesystemI;
import Ice.*;
import Filesystem.*;
public class DirectoryI extends _DirectoryDisp
implements NodeI, _DirectoryOperations
{
public Identity
id();
public synchronized String
name(Current c);
public NodeDesc[]
list(Current c);
public NodeDesc
find(String name, Current c) throws NoSuchName;
public FilePrx
createFile(String name, Current c) throws NameInUse;
public DirectoryPrx
createDirectory(String name, Current c) throws NameInUse;
public void
destroy(Current c) throws PermissionDenied;
// ...
}
package FilesystemI;
import Ice.*;
import Filesystem.*;
public class DirectoryI extends _DirectoryDisp
implements NodeI, _DirectoryOperations
{
// ...
public DirectoryI();
public DirectoryI(String name, DirectoryI parent);
public DirectoryPrx activate(Ice.ObjectAdapter a);
public void addChild(String name, NodeI node);
public static java.lang.Object _lcMutex =
new java.lang.Object();
private String _name; // Immutable
private DirectoryI _parent; // Immutable
private Identity _id; // Immutable
private boolean _destroyed;
private java.util.Map<String, NodeI> _contents;
// ...
}
The _name and
_parent members store the name of this node and a reference to the node’s parent directory. (The root directory’s
_parent member is null.) Similarly, the
_id member stores the identity of this directory. The
_name,
_parent, and
_id members are immutable once they have been initialized by the constructor. The
_destroyed member prevents the race condition we discussed in
Section 31.6.5; to interlock access to
_destroyed (as well as the
_contents member) we can use synchronized methods (as for the
name method), or use a
synchronized(this) block.
The _contents map records the contents of a directory: it stores the name of an entry, together with a reference to the child node.
The static _lcMutex member implements the life cycle lock we discussed in
Section 31.6.4. This member is static because there is a single mutex for all directories, so life cycle operations on
all directories are interlocked, instead of life cycle operations for each directory separately.
public DirectoryI()
{
this("/", null);
}
public DirectoryI(String name, DirectoryI parent)
{
_name = name;
_parent = parent;
_id = new Identity();
_destroyed = false;
_contents = new java.util.HashMap<String, NodeI>();
_id.name = parent == null ? "RootDir" : Util.generateUUID();
}
The real constructor initializes the _name,
_parent,
_id,
_destroyed, and
_contents members. Note that nodes other than the root directory use a UUID as the object identity.
The addChild method simply adds the passed name–node pair to the _contents map:
public void
addChild(String name, NodeI node)
{
_contents.put(name, node);
}
Note that, except for creation of the root directory, addChild must be called with
_lcMutex locked, to prevent concurrent modification of the parent’s
_contents map. We will see this shortly.
The activate method adds the servant to the ASM and calls
addChild:
public DirectoryPrx
activate(Ice.ObjectAdapter a)
{
DirectoryPrx node =
DirectoryPrxHelper.uncheckedCast(a.add(this, _id));
if (_parent != null)
_parent.addChild(_name, this);
return node;
}
The implementation of the Slice name operation simply returns the name of the node, but also checks whether the node has been destroyed, as described in
Section 31.6.5:
public synchronized String
name(Current c)
{
if (_destroyed)
throw new ObjectNotExistException();
return _name;
}
Note that this method is synchronized, so the _destroyed member cannot be accessed concurrently.
Before we can look at how to implement the createDirectory, createFile, and destroy operations, we need to consider reaping. As mentioned previously, we have as many factories as we have directories. This immediately raises the question of how we should implement object destruction. On page 992, we mentioned a potential problem with reaping: it increases the cost of createFile, createDirectory, and find to O(n) whereas, without reaping, these operations can be implemented in O(log n) time (where n is the number of nodes created by the factory).
For our application, with its multiple factories, the question is how we should reap. For example, if a client calls find on a particular directory, we can choose to reap only zombie nodes created by this directory. O(n) performance for this is acceptable if we assume that the directory does not have many entries (say, fewer than one hundred). However, for directories with thousands of entries, this approach is no longer feasible.
There is also another problem: if reaping only reaps zombies that are children of the directory on which an operation is invoked, the number of zombies could easily pile up. For example, a client might delete one thousand files in a particular directory but then leave that directory untouched. In that case, the server would hang onto one thousand servants until, finally,
list,
find, or a create operation is called on that same directory.
This suggests that we need an implementation of reaping that gets rid of all zombie servants, not just those of the directory on which
list,
find, or a create operation is called. In addition, we need an implementation that does not impose an undue performance penalty on
find or the create operations.
As suggested on page 993, we can implement reaping efficiently by avoiding a scan for zombies. Instead, we add each zombie to a separate data structure so, when it comes time to reap, we only deal with servants that are known to be zombies, instead of having to scan for them. Here are the relevant definitions for
DirectoryI to implement this:
package FilesystemI;
import Ice.*;
import Filesystem.*;
public class DirectoryI extends _DirectoryDisp
implements NodeI, _DirectoryOperations
{
// ...
public void
addReapEntry(String name);
private static void
reap();
private static java.util.Map<DirectoryI, java.util.List<String> >
_reapMap = new java.util.HashMap<DirectoryI,
java.util.List<String> >();
}
The main supporting data structure is the _reapMap data member, which maps a directory servant to its deleted entries. When a client calls destroy on a node, the node adds its parent directory and its name to this map by calling the addReapEntry method on its parent directory. The reap method iterates over the map and, for each entry, removes the corresponding pair from the parent's _contents map. This reduces the cost of reaping a servant from O(n) (where n is the number of nodes that were created by a directory) to O(log n). The cost of reaping all servants is then O(z log d), where z is the total number of zombies and d is the average number of entries per directory. Moreover, whenever we call reap, all zombie servants are reclaimed, not just those of a particular directory. With this implementation, reaping is efficient even for file systems with millions of nodes.
Implementing addReapEntry is straightforward:
public void
addReapEntry(String name)
{
java.util.List<String> l = _reapMap.get(this);
if (l != null) {
l.add(name);
} else {
l = new java.util.ArrayList<String>();
l.add(name);
_reapMap.put(this, l);
}
}
The method simply adds the name for the servant to the map or, if the servant is already in the map, adds the name to the list of names that are already present for this servant.
The implementation of reap is also very simple: for each entry in the map, it removes the names that are stored with that entry from the corresponding directory before clearing the reap map:
private static void
reap()
{
java.util.Iterator<
java.util.Map.Entry<DirectoryI,
java.util.List<String> > > i =
_reapMap.entrySet().iterator();
while (i.hasNext()) {
java.util.Map.Entry<DirectoryI,
java.util.List<String> > e =
i.next();
DirectoryI dir = e.getKey();
java.util.List<String> l = e.getValue();
java.util.Iterator<String> j = l.iterator();
while (j.hasNext())
dir._contents.remove(j.next());
}
_reapMap.clear();
}
Here is the destroy method for directories:
public void
destroy(Current c) throws PermissionDenied
{
if (_parent == null)
throw new PermissionDenied(
"Cannot destroy root directory");
synchronized (this) {
if (_destroyed)
throw new ObjectNotExistException();
synchronized (_lcMutex) {
reap();
if (_contents.size() != 0)
throw new PermissionDenied(
"Cannot destroy non‑empty directory");
c.adapter.remove(id());
_parent.addReapEntry(_name);
_destroyed = true;
}
}
}
The code first prevents destruction of the root directory and then checks whether this directory was destroyed previously. It then acquires the life cycle lock, reaps any zombie entries, and checks that the directory is empty. Note that it is important to call reap before scanning the _contents map; otherwise, if the directory was emptied earlier but its entries had not been reaped yet, destroy would incorrectly throw a PermissionDenied exception.
Also note that reap and addReapEntry must be called with _lcMutex locked; without this lock, concurrent invocations of other life cycle operations could corrupt the reap map.
The code holds the lock on this while destroy locks _lcMutex. Doing this is necessary because we cannot mark the node as destroyed until we can be sure that it actually will be destroyed, and we can be sure of that only after calling reap because, prior to that, the _contents map may still contain zombies. Finally, destroy removes the ASM entry for the destroyed directory, adds the directory's parent and the name of the destroyed directory to the reap map, and marks the node as destroyed.
The createDirectory implementation locks the life cycle mutex and calls
reap to make sure that the
_contents map is up to date before checking whether the directory already contains a node with the given name. If not, it creates a new servant and returns its proxy:
public DirectoryPrx
createDirectory(String name, Current c) throws NameInUse
{
synchronized(this) {
if (_destroyed)
throw new ObjectNotExistException();
}
synchronized(_lcMutex) {
reap();
if (_contents.containsKey(name))
throw new NameInUse(name);
return new DirectoryI(name, this).activate(c.adapter);
}
}
The createFile implementation is identical, except that it creates a file instead of a directory:
public FilePrx
createFile(String name, Current c) throws NameInUse
{
synchronized(this) {
if (_destroyed)
throw new ObjectNotExistException();
}
synchronized(_lcMutex) {
reap();
if (_contents.containsKey(name))
throw new NameInUse(name);
return new FileI(name, this).activate(c.adapter);
}
}
public NodeDesc[]
list(Current c)
{
synchronized(this) {
if (_destroyed)
throw new ObjectNotExistException();
}
synchronized(_lcMutex) {
reap();
NodeDesc[] ret = new NodeDesc[_contents.size()];
java.util.Iterator<java.util.Map.Entry<String, NodeI> >
pos = _contents.entrySet().iterator();
for (int i = 0; i < _contents.size(); ++i) {
java.util.Map.Entry<String, NodeI> e = pos.next();
NodeI p = e.getValue();
ret[i] = new NodeDesc();
ret[i].name = e.getKey();
ret[i].type = p instanceof FileI ?
NodeType.FileType : NodeType.DirType;
ret[i].proxy = NodePrxHelper.uncheckedCast(
c.adapter.createProxy(p.id()));
}
return ret;
}
}
After acquiring the life cycle lock and calling reap, the code iterates over the directory’s contents and adds a
NodeDesc structure for each entry to the returned array. (Again, it is important to call
reap before iterating over the
_contents map, to avoid adding entries for already deleted nodes.)
The find operation proceeds along similar lines:
public NodeDesc
find(String name, Current c) throws NoSuchName
{
synchronized(this) {
if (_destroyed)
throw new ObjectNotExistException();
}
synchronized(_lcMutex) {
reap();
NodeI p = (NodeI)_contents.get(name);
if (p == null)
throw new NoSuchName(name);
NodeDesc d = new NodeDesc();
d.name = name;
d.type = p instanceof FileI ?
NodeType.FileType : NodeType.DirType;
d.proxy = NodePrxHelper.uncheckedCast(
c.adapter.createProxy(p.id()));
return d;
}
}
The FileI Class
The FileI class is similar to the
DirectoryI class. The data members store the name, parent, and identity of the file, as well as the
_destroyed flag and the contents of the file (in the
_lines member). The constructor initializes these members:
package FilesystemI;
import Ice.*;
import Filesystem.*;
public class FileI extends _FileDisp
implements NodeI, _FileOperations
{
// ...
public FileI(String name, DirectoryI parent)
{
_name = name;
_parent = parent;
_destroyed = false;
_id = new Identity();
_id.name = Util.generateUUID();
}
private String _name;
private DirectoryI _parent;
private boolean _destroyed;
private Identity _id;
private String[] _lines;
}
public synchronized String
name(Current c)
{
if (_destroyed)
throw new ObjectNotExistException();
return _name;
}
public FilePrx
activate(Ice.ObjectAdapter a)
{
FilePrx node = FilePrxHelper.uncheckedCast(a.add(this, _id));
_parent.addChild(_name, this);
return node;
}
public Identity
id()
{
return _id;
}
public synchronized String[]
read(Current c)
{
if (_destroyed)
throw new ObjectNotExistException();
return _lines;
}
public synchronized void
write(String[] text, Current c)
{
if (_destroyed)
throw new ObjectNotExistException();
_lines = text.clone();
}
public void
destroy(Current c)
{
synchronized (this) {
if (_destroyed)
throw new ObjectNotExistException();
_destroyed = true;
}
synchronized (_parent._lcMutex) {
c.adapter.remove(id());
_parent.addReapEntry(_name);
}
}
The preceding implementation is provably deadlock free. Except for DirectoryI.destroy, each method holds only one lock at a time, so these methods cannot deadlock with each other or themselves.
DirectoryI.destroy first locks itself and then locks _lcMutex. However, while the locks are held, the method does not call other methods that acquire locks, so a potential deadlock can arise only from concurrent calls to
DirectoryI.destroy, either on the same node or on different nodes. For concurrent calls on the same node, deadlock is impossible because such calls are strictly serialized on
this; for concurrent calls on different nodes, each node locks itself and then acquires
_lcMutex before releasing both locks, also making deadlock impossible.
As we discussed in Section 31.8, this implementation strictly serializes life cycle operations as well as
list and
find, and it permits only one operation in each directory and file to execute at a time. However, read and write operations on different files can execute concurrently. In practice, this degree of concurrency will usually be adequate; if you need more parallelism, you can use read–write locks as described earlier. It is also possible to allow a larger degree of parallelism for directories, by using finer-grained locking strategies that permit create operations to proceed in parallel, provided that they are invoked on different directories. However, that approach requires either a per-directory reaping strategy, or making do without reaping, at the cost of having to carefully design the code to be free of deadlocks.