Getting Started with dip.model

The dip.model module implements a declarative type system based around the idea of a model: a class containing a number of attributes, each of which is defined by an attribute type.

A Simple Model

Typed attributes are declared as class attributes of a Model sub-class. The following shows the declaration of a simple model, the creation of an instance of that model, and the printing of the instance's attribute values:

from dip.model import Int, Model, Str


class ExampleModel(Model):

    name = Str()

    age = Int()


model = ExampleModel()

print("Name:", model.name)
print("Age:", model.age)

ExampleModel contains two attributes: name which is a string, and age which is an integer.

When the model is instantiated corresponding instance attributes are automatically created and given default values. In the above example the default values will be the defaults provided by the type. For example, for Str it will be an empty string, and for Int it will be 0.

Of course a type’s default value may not be what we want, so a type allows the default to be overridden as follows:

class ExampleModel(Model):

    name = Str('Bill')

    age = Int(30)

Values will be validated according to their type. An exception will be raised if a value has an inappropriate type, as in the following example:

model = ExampleModel()

model.name = 10

As a special case, typed attributes may have the value None if the attribute type’s allow_none argument is True. In the following example the first assignment statement will not raise an exception, but the second one will:

class ExampleModel(Model):

    name = Str(allow_none=True)

    address = Str()


model = ExampleModel()

# This will not raise an exception.
model.name = None

# This will raise an exception.
model.address = None

The default value of allow_none is False for all types except Any, Callable, Instance and Subclass.
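The behaviour described so far can be pictured with a plain-Python descriptor. This is a minimal sketch for illustration only, not dip's actual implementation; the Typed class and its argument names are hypothetical:

```python
# Minimal sketch (plain Python, NOT dip's implementation) of what a typed
# attribute does: supply a per-type default, validate assignments, and
# optionally permit None.
class Typed:

    def __init__(self, value_type, default=None, allow_none=False):
        self.value_type = value_type
        self.allow_none = allow_none
        # The type's own default (e.g. '' for str, 0 for int) unless overridden.
        self.default = value_type() if default is None else default

    def __set_name__(self, owner, name):
        self.shadow = '_' + name

    def __get__(self, obj, owner=None):
        if obj is None:
            return self
        return getattr(obj, self.shadow, self.default)

    def __set__(self, obj, value):
        if value is None:
            if not self.allow_none:
                raise TypeError("None is not allowed")
        elif not isinstance(value, self.value_type):
            raise TypeError("invalid type")
        setattr(obj, self.shadow, value)


class ExampleModel:

    name = Typed(str, allow_none=True)

    age = Typed(int, 30)


model = ExampleModel()
assert model.name == ''    # the type's default
assert model.age == 30     # the overridden default

model.name = None          # allowed: allow_none=True
try:
    model.age = None       # not allowed
except TypeError:
    pass
```

The real dip type system does considerably more (change notification, meta-data, interfaces), but the default/validate/allow_none behaviour follows this shape.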

Explicit Initial Values

The initial values of a model may also be explicitly set when the model is instantiated as follows:

class ExampleModel(Model):

    name = Str('Bill')

    age = Int(30)


model = ExampleModel(name='Fred')

As a result the initial values of model.name and model.age will be 'Fred' and 30 respectively.

Note the distinction between default values, which are part of the attribute's declaration and shared by all instances, and initial values, which are supplied for a particular instance when it is created.

Computed Default Values

A common situation is that the default value needs to be computed. An attribute type defines a default() decorator that, when used to decorate a method with the same name as the attribute, causes that method to be called to return the default value of the attribute.

The decorator is used in exactly the same way as the getter and setter decorators of standard Python properties.

The following shows a very contrived example:

class ExampleModel(Model):

    name = Str()

    age = Int()

    @name.default
    def name(self):
        import name_database

        return name_database.most_common_name()

If the model instance was created with an explicit initial value for name then the decorated method will never be called.

Another important feature of the decorated method is that it is called at most once, and only when the value is needed for the first time. If the name attribute is never actually referenced then the decorated method will never be called.

The above example exploits this behaviour to delay the (possibly expensive) import of the name_database module until it is known that it is really needed.
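The call-at-most-once behaviour can be sketched in plain Python with a lazy descriptor that caches its factory's result in the shadow attribute. This is illustrative only, not dip's implementation; LazyDefault is a hypothetical name:

```python
# Sketch (plain Python, NOT dip's implementation) of a computed default that
# runs its factory only on first access and caches the result.
_MISSING = object()


class LazyDefault:

    def __init__(self, factory):
        self.factory = factory

    def __set_name__(self, owner, name):
        self.shadow = '_' + name

    def __get__(self, obj, owner=None):
        if obj is None:
            return self
        value = getattr(obj, self.shadow, _MISSING)
        if value is _MISSING:
            # First access: compute the default and cache it.
            value = self.factory(obj)
            setattr(obj, self.shadow, value)
        return value


calls = []


class ExampleModel:

    name = LazyDefault(lambda self: calls.append('computed') or 'Bill')


model = ExampleModel()
assert model.name == 'Bill'
assert model.name == 'Bill'       # second access hits the cache
assert calls == ['computed']      # the factory ran exactly once
```

If the attribute were never read, `calls` would stay empty, which is what makes deferring an expensive import inside the factory worthwhile.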

Attributes are Properties

Typed attributes behave like properties and you can provide your own getters and setters to customise access to them, just like you can with standard Python properties.

The getter for an attribute is a method with the same name as the attribute decorated with the attribute type’s getter() decorator. Likewise the setter for an attribute is a method with the same name as the attribute decorated with the attribute type’s setter() decorator.

The actual value of an attribute is available as an ordinary instance attribute whose name is that of the attribute with a _ prepended. This is sometimes referred to as the shadow attribute.

The following example shows a getter and setter that mimics the default behaviour:

class ExampleModel(Model):

    name = Str()

    @name.getter
    def name(self):
        return self._name

    @name.setter
    def name(self, value):
        self._name = value

Note that a setter is not called to set an attribute’s default value, but it is called to set an attribute’s initial value.

It is also possible to define a typed attribute and a getter in one step. The following is the equivalent of the above:

class ExampleModel(Model):

    @Str()
    def name(self):
        return self._name

    @name.setter
    def name(self, value):
        self._name = value

Finally, it is also possible to pass a getter and a setter as arguments to the type. This enables getters and setters to be simple lambda functions, for example:

class ExampleModel(Model):

    name = Str(getter=lambda s: s.get_name(),
            setter=lambda s, v: s.set_name(v))

Overriding Types in Model Sub-classes

As you would expect you can override the attributes of a model in a sub-class of the model. If you only want to change the default value you simply specify that value rather than re-specifying the attribute type.

The following example shows both (note that Float must also be imported from dip.model):

class ExampleModel(Model):

    name = Str()

    age = Int()


class Submodel(ExampleModel):

    name = 'Bill'

    age = Float(10.0)

In an instance of a Submodel, name will be a string with a default value of 'Bill', and age will be a float with a default value of 10.0.

As with standard Python properties, care should be taken when defining setters or getters for an attribute in a sub-class:

class ExampleModel(Model):

    name = Str()


class SubModel(ExampleModel):

    @ExampleModel.name.getter
    def name(self):
        """ The getter for ExampleModel.name. """

    @name.setter
    def name(self, value):
        """ The setter for ExampleModel.name. """

When defining the name getter, name does not yet exist in the SubModel class, so the super-class ExampleModel.name.getter decorator must be used. As a result of defining the name getter, name is defined in the SubModel class, so its decorators should be used for subsequent definitions. If the ExampleModel.name.setter decorator were used instead of name.setter, the name getter would be lost.
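Since the text says this mirrors standard Python properties, the same pitfall can be demonstrated with plain properties, which behave identically in this respect:

```python
# The same sub-classing pitfall with standard Python properties: the
# super-class property must be used for the first decorator, after which
# the name defined in the sub-class is used.
class Base:

    @property
    def name(self):
        return self._name

    @name.setter
    def name(self, value):
        self._name = value


class Sub(Base):

    # 'name' does not yet exist in Sub, so Base.name.getter must be used.
    @Base.name.getter
    def name(self):
        return self._name.upper()

    # From here on 'name' refers to the new property defined in Sub.
    @name.setter
    def name(self, value):
        self._name = value.strip()


s = Sub()
s.name = '  bill  '
assert s.name == 'BILL'
```

Had `@Base.name.setter` been used for the second decorator, the new getter defined in Sub would have been discarded, exactly as described above for typed attributes.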

Observing Attributes

A common requirement is to observe a model for changes. This can be done with the observe() function. A callable, i.e. the observer, will be called every time a particular model attribute changes.

The observer will be passed an AttributeChange instance that describes the details of the change. This includes the old and new values. Normally these are the old and new values of the attribute. However if the type of the attribute is a collection (e.g. List) then old is the subset of the collection that has been removed from the attribute and new is the subset of the collection that has been added.

Note that if an observer has been called because a collection attribute has been rebound, then it is quite possible that old and new will have values in common.
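The distinction can be pictured with plain Python lists. This is illustrative only and does not use dip itself:

```python
# Illustrative only (plain Python, no dip): what "old" and "new" describe
# for a collection attribute.
before = [1, 2, 3]
after = [2, 3, 4]

# For an in-place change, old holds what was removed and new what was added:
removed = [x for x in before if x not in after]
added = [x for x in after if x not in before]
assert removed == [1]
assert added == [4]

# When the attribute is rebound, old is the previous collection and new is
# its replacement, so the two can share elements:
assert set(before) & set(after) == {2, 3}
```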

It is possible to specify '*' as the name of the attribute to observe. This has the effect of observing all the attributes of the model. Alternatively, a method can be decorated more than once in order to observe more than one attribute.

The following example uses the observe() function as a decorator:

from dip.model import Instance, Int, Model, observe, Str


class ExampleModel(Model):

    name = Str()

    age = Int()


class Monitor(Model):

    example = Instance(ExampleModel)

    @observe('example.name')
    @observe('example.age')
    def __on_change(self, change):

        print("Value of '{0}' is {1}".format(change.name, change.new))


example = ExampleModel()
monitor = Monitor(example=example)

example.name = 'John'
example.age = 26

Running this code gives the following output:

Value of 'name' is John
Value of 'age' is 26

Note the use of the double underscore prefix in the name of the method handling the observation. This ensures that it is this particular method that will be called and not any other method with the same name defined in a sub-class.

observe() can also be used as a function:

from dip.model import Int, Model, observe, Str


class ExampleModel(Model):

    name = Str()

    age = Int()


example = ExampleModel()

observe('*', example,
        lambda change: print("Value of '{0}' is {1}".format(
                change.name, change.new)))

example.name = 'First'
example.name = 'Second'

Typed attributes that are implemented as properties can be observed just like any other. The only difference is that when the attribute’s value changes then any observers must be explicitly notified. This is done by calling the notify_observers() function which takes the attribute name, model and the new and old values of the attribute as its arguments.

Sometimes the monitoring of an attribute’s value can be an expensive operation. If a method is provided with the same name as a typed attribute and decorated with the attribute type’s observed() decorator then that method will be called each time an observer of the attribute is added or removed. The number of observers is passed as the method’s only argument. This allows the potentially expensive monitoring to be enabled only when it is known that there is an observer interested in the changes.

Asynchronous Events

A model may also be the source of asynchronous events. An event is defined as a Trigger attribute. Setting the value of such an attribute does not change the model, and the value is discarded. However, any observers of the attribute will be invoked, with the value available as the new attribute of the AttributeChange instance. The value may be of any type.

The following example demonstrates its use:

from dip.model import Model, observe, Trigger


class ExampleModel(Model):

    trigger = Trigger()


model = ExampleModel()

observe('trigger', model, lambda change: print("Value:", change.new))

model.trigger = 'First'
model.trigger = 'Second'

Like the previous example, running this will cause the print() function to be called twice, once for each rebinding of model.trigger.
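The essence of a trigger can be sketched in plain Python with a descriptor whose setter simply forwards the assigned value to its observers. This is a simplified illustration, not dip's implementation; in particular, real observer registration goes through observe() rather than a shadow list:

```python
# Sketch (plain Python, NOT dip's implementation) of a trigger attribute:
# assigning to it invokes the observers with the value, then discards it.
class Trigger:

    def __set_name__(self, owner, name):
        # Observers are kept in a per-instance shadow list (a simplification).
        self.obs_name = '_' + name + '_observers'

    def __get__(self, obj, owner=None):
        return self

    def __set__(self, obj, value):
        # Nothing is stored; the value is passed on and forgotten.
        for observer in getattr(obj, self.obs_name, []):
            observer(value)


class ExampleModel:

    trigger = Trigger()


seen = []
model = ExampleModel()
model._trigger_observers = [seen.append]

model.trigger = 'First'
model.trigger = 'Second'
assert seen == ['First', 'Second']
```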

Interfaces

An interface is a way of formally declaring an API that an object is expected to conform to, without saying anything about a particular implementation. It can be considered a more formal style of duck-typing.

Declaring an attribute of a model as an instance of an interface, rather than an instance of an implementation, decouples the user of that API from any particular implementation. This makes it easier to replace an implementation at a later date without changing any code that calls it.

An interface is a class that sub-classes Interface. It contains a number of attributes each defined by an attribute type just like a Model. An interface should also include stubs for any methods that should be implemented, but they should not include any method implementations. Interfaces can be sub-classed just like any other class.

The following example defines an interface for downloading a file from a URL:

from dip.model import Interface, Str


class IDownload(Interface):

    # The URL to download from.
    url = Str()

    def download(self):
        """ Download the file. """

By convention dip always uses I as the first character of interface names.

The implements() class decorator can be used to indicate that a class implements a particular interface. An implementation of the IDownload interface might look like the following:

from dip.model import implements, Model

from idownload import IDownload


@implements(IDownload)
class Download(Model):

    def download(self):
        """ Download the file using urllib. """

In many ways using the implements() decorator is similar to sub-classing the interface. In particular:

  • issubclass(Download, IDownload) returns True
  • isinstance(Download(), IDownload) returns True
  • there is no need to declare the url attribute; it is done for you.

Of course it is not exactly the same as sub-classing. Any operation that uses the method resolution order of a class will ignore any implemented interfaces.

A class can implement any number of interfaces. Simply pass the interfaces as additional arguments to implements() or decorate the class multiple times.

Abstract base classes are an alternative approach to using interfaces. Which is used is often a matter of personal preference. The following shows a class implementing the download API using sub-classing:

from dip.model import Model, Str


class AbstractDownload(Model):

    # The URL to download from.
    url = Str()

    def download(self):
        """ Download the file. """


class Download(AbstractDownload):

    def download(self):
        """ Download the file using urllib. """

However, interfaces are particularly useful when used in conjunction with adapters, as described in the next section.

Adapters

An important feature of an integration framework is the ability to take an existing object that knows nothing about the framework and to write code (called an adapter) that allows the object to be used by the framework but without having to change the object.

An application, or the framework itself, will define the API (i.e. the interface) that it requires an object to implement. If an object does not already implement the interface then dip will see if a suitable adapter is available that is able to implement the interface on the object’s behalf.

Let’s say we have an existing piece of code that will download a file from a URL. It is widely used and tested but doesn’t conform to our IDownload interface:

class FileDownloader:

    def __init__(self, target_file):

        # Save the URL for later.
        self.target_file = target_file

    def get_file(self):
        """ Download the file from self.target_file. """

We could modify this code to conform to our API, but that would leave us with an unwanted maintenance burden. Instead we create an adapter as follows:

from dip.model import adapt, Adapter, DelegatedTo

@adapt(FileDownloader, to=IDownload)
class FileDownloaderIDownloadAdapter(Adapter):
    """ This adapts FileDownloader to IDownload. """

    url = DelegatedTo('adaptee.target_file')

    def download(self):
        """ Implement IDownload.download(). """

        return self.adaptee.get_file()

We’ll walk through this code a step at a time. First the adapt() class decorator is used to register the adapter class:

@adapt(FileDownloader, to=IDownload)

An adapter may handle any number of interfaces by specifying them as a list to the to argument. Note that the type being adapted may itself be an interface.

Next we define the adapter class itself:

class FileDownloaderIDownloadAdapter(Adapter):

It doesn’t matter what the name of the class is. The only requirement is that it is a sub-class of Adapter. The main purpose of the Adapter class is to provide the adaptee attribute. This is a reference to the object being adapted and is set automatically when the adapter is instantiated.

Next we need to implement the url attribute of the IDownload interface so that it uses the target_file attribute of the FileDownloader class. One way to do this is to implement a getter and setter as follows:

@IDownload.url.getter
def url(self):
    """ The getter for the IDownload.url attribute. """

    return self.adaptee.target_file

@url.setter
def url(self, url):
    """ The setter for the IDownload.url attribute. """

    self.adaptee.target_file = url

However a much more convenient way of implementing this is to define url as a DelegatedTo attribute as follows:

url = DelegatedTo('adaptee.target_file')

Finally we implement the download() method of the IDownload interface:

def download(self):
    """ Implement IDownload.download(). """

    return self.adaptee.get_file()

Again, this is a simple wrapper around the adapted object’s get_file() method.

So how does the adapter get invoked for an interface? The simplest way is to call the interface with the object as its argument. If the object already implements the interface then it is returned unchanged. If it doesn’t then dip will look for an appropriate adapter. If such an adapter has already been instantiated for the object then it is returned. (This means that adapters can have internal state that won’t be lost.) Finally, if necessary, the adapter is instantiated and returned.
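The lookup just described (return the object itself, else a cached adapter, else a new one) can be sketched with a toy registry in plain Python. This is illustrative only, not dip's machinery; the names `register` and `call_interface` are invented, and a real implementation would use weak references rather than `id()` for the cache:

```python
# Toy sketch (NOT dip's implementation) of interface-call adaptation:
# a registry maps (adaptee class, interface) to an adapter class, and
# adapters are cached per object so their internal state is not lost.
_registry = {}   # (adaptee class, interface) -> adapter class
_adapters = {}   # (id(adaptee), interface) -> adapter instance

def register(klass, iface, adapter_class):
    _registry[(klass, iface)] = adapter_class

def call_interface(iface, obj):
    # An object that already implements the interface is returned unchanged.
    if isinstance(obj, iface):
        return obj
    key = (id(obj), iface)
    if key not in _adapters:
        for klass in type(obj).__mro__:
            adapter_class = _registry.get((klass, iface))
            if adapter_class is not None:
                _adapters[key] = adapter_class(obj)
                break
        else:
            raise TypeError("no adapter available")
    return _adapters[key]


class IGreet:
    """ Stands in for an interface. """

class Legacy:
    def hello(self):
        return 'hi'

class LegacyGreetAdapter(IGreet):
    def __init__(self, adaptee):
        self.adaptee = adaptee
    def greet(self):
        return self.adaptee.hello()


register(Legacy, IGreet, LegacyGreetAdapter)

obj = Legacy()
a1 = call_interface(IGreet, obj)
a2 = call_interface(IGreet, obj)
assert a1 is a2               # the adapter is cached per object
assert a1.greet() == 'hi'
```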

The following shows how we apply this to our download example:

downloader = IDownload(FileDownloader())

If an appropriate adapter does not exist then an exception is raised. Sometimes this is not an exceptional condition, for example when introspecting a number of objects to see which of them can implement an interface. Rather than wrap the interface call in a try/except block you can pass False as the exception argument of the call; it will then return None if there is no appropriate adapter. Note, however, that this will still create an adapter if one is available. If you only want to check whether an object has already been adapted then pass False as the adapt argument of the call.

An adapter is also automatically invoked if a model attribute is defined as an instance of an interface. For example:

class FileManager(Model):
    """ A class that performs many file related tasks. """

    # The downloader used to get remote files.
    downloader = Instance(IDownload)

    def get_remote_file(self):
        """ Get a remote file. """

        # Get the downloader to do it.
        return self.downloader.download()

    @downloader.default
    def downloader(self):
        """ Return the default downloader. """

        # This will be adapted automatically.
        return FileDownloader()

We can then create a FileManager instance that uses (by default but we can always override it) our incompatible external downloader.

When choosing an adapter for an object dip looks for one that can handle the object’s type and also any interfaces directly implemented by the object. It will also consider the interfaces implemented by other adapters that already adapt the object, i.e. there is limited support for adapter chaining. This is shown in the following example:

@adapt(Klass, to=Iface1)
class KlassIface1Adapter(Adapter):
    pass

@adapt(Iface1, to=Iface2)
class Iface1Iface2Adapter(Adapter):
    pass

k = Klass()

# This will fail because we don't know how to adapt Klass to Iface2.
Iface2(k)

# This will succeed because we know how to adapt Klass to Iface1.
Iface1(k)

# This will now succeed because we know that k has been adapted to Iface1
# and we know how to adapt Iface1 to Iface2.
Iface2(k)

An interesting question about the Iface1Iface2Adapter adapter in this example is what is referenced by its adaptee attribute? Is it the k instance or is it the KlassIface1Adapter adapter that adapts k to Iface1? The answer is that adaptee is always a reference to the original object being adapted, i.e. k in this case.

Adapters are chosen based on the types of the objects to be adapted and the interface they are to be adapted to. This is sufficient for most purposes, but what if it is some other aspect of an object that determines if it can be adapted or not? To handle this situation an adapter should reimplement the isadaptable() class method. It will be passed the object to be adapted and should return True if it can be adapted.

In the following example the adapter will adapt any object to the Iface interface that has an __adaptable__ attribute:

@adapt(object, to=Iface)
class objectIfaceAdapter(Adapter):

    @classmethod
    def isadaptable(cls, adaptee):

        return hasattr(adaptee, '__adaptable__')

The use of adapters shouldn’t be restricted to integrating external objects into an application. It is a valuable mechanism for decoupling components even within an application allowing additional behaviour (i.e. a new interface) to be added to an existing component without having to change that component.

Singletons

dip provides support for the standard singleton pattern, i.e. where only one instance of an object can be created, using the Singleton base class. However, this is not a base class for singleton objects; instead it is a base class for singleton wrappers that impose singleton behaviour on ordinary objects. This approach is taken because singleton objects can be:

  • difficult to test
  • difficult to replace.

Let’s say our application has some central management object that implements an IManager interface. There will only be one instance of it and we may want to refer to that instance from many other parts of our application. It would be very inconvenient, and reduce the readability of our code, if we were forced to pass it as an argument everywhere it might be needed.

Our IManager interface is defined in i_manager.py as follows:

from dip.model import Interface, List, Str

class IManager(Interface):
    """ This is the management interface. """

    # A log of activity.
    log = List(Str())

    def manage(self):
        """ Do some managing. """

We provide an implementation of this interface called Manager in the module default_manager.py.

Our singleton, also called Manager, is defined as follows:

from dip.model import Instance, Singleton

from i_manager import IManager

class Manager(Singleton):
    """ A singleton that provides access to an IManager implementation. """

    # The actual manager instance.
    instance = Instance(IManager)

    @instance.default
    def instance(self):
        """ Invoked to return the default manager instance. """

        from default_manager import Manager

        return Manager()

Any call to Manager() will return the same singleton object. The singleton object’s instance attribute will contain the wrapped object that actually implements the IManager interface. The singleton object also acts as a proxy for the attributes of the wrapped object so that they appear to be class attributes of the singleton object. So, calling the manager’s manage() method is done as follows:

Manager.manage()

Similarly adding a new log message would be done as follows:

Manager.log.append("A log message")

Note that only the getting of attributes is proxied, and not the setting. If you want to, for example, replace the log you would have to update the wrapped object explicitly as follows:

Manager().instance.log = []

Specifying a different IManager implementation is simply done by updating the instance attribute of the singleton object as follows:

Manager().instance = MyNewManager()
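The wrapper-plus-proxy idea can be sketched in plain Python with a metaclass: calling the class always returns the same wrapper, and class-level attribute gets fall through to the wrapped object. This is a simplified illustration, not how dip's Singleton is implemented:

```python
# Sketch (plain Python, NOT dip's implementation) of a singleton wrapper
# that proxies class-level attribute *gets* to a wrapped instance.
class SingletonMeta(type):

    _instances = {}

    def __call__(cls, *args, **kwargs):
        # Every call returns the same wrapper object.
        if cls not in SingletonMeta._instances:
            SingletonMeta._instances[cls] = super().__call__(*args, **kwargs)
        return SingletonMeta._instances[cls]

    def __getattr__(cls, name):
        # Only reached when normal class lookup fails: proxy to the
        # wrapped instance.  Note that setting is not proxied.
        return getattr(cls().instance, name)


class RealManager:
    """ Stands in for an IManager implementation. """

    def __init__(self):
        self.log = []

    def manage(self):
        self.log.append('managed')
        return 'ok'


class Manager(metaclass=SingletonMeta):

    def __init__(self):
        # The wrapped object that does the real work.
        self.instance = RealManager()


assert Manager() is Manager()
assert Manager.manage() == 'ok'          # proxied to the wrapped object
Manager.log.append('a log message')      # getting log is proxied too
assert Manager().instance.log == ['managed', 'a log message']
```

Replacing the wrapped object is done on the wrapper explicitly, e.g. `Manager().instance = RealManager()`, mirroring the behaviour described above.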

Adding Meta-data

It is possible to supply meta-data when declaring the type of an attribute. The meta-data is defined using keyword arguments. The type system ignores the meta-data but makes it available as the metadata attribute of the type object.

As an example, the dip.ui module uses any status_tip, tool_tip and whats_this meta-data whenever the attribute appears in a view.

Use the get_attribute_type() function to access the type object of an attribute of a model.

Model as a Mixin

In all the examples we have shown so far a model has only been derived from the Model class. However it can also be used as a mixin. In particular it can be used as a mixin with any PyQt class.

Models and __init__

The handling of the initial values of attributes is performed by the meta-class of the Model class. This is done before the model’s __init__() method (if there is one) is called. Any explicit initial values will have been removed from the keyword arguments passed to __init__().

When Model is the only super-class of a model it is, in fact, rarely necessary to implement an __init__() method at all.