This originally appeared here.

We are in the process of refactoring the TransactionResponse code so it should be easier to maintain in the future.

What makes GeoServer Special

Recently there has been a lot of press about MapServer, everything from MapServer Junior to the formation of a Foundation. Welcome to the other team, the Java team; here is what GeoServer does so well - WFS-T.

For those new to the Open Geospatial Consortium standards scene, there are a couple of levels of "compliance".

  • WFS Compliant - Supports GetCapabilities, DescribeFeatureType, GetFeature
  • WFS-T Compliant - Supports Transaction (aka allows you to modify information)

There are a couple of leftover things, like LockFeature and GetFeatureWithLock - they don't get their own abbreviation. Rest assured, GeoServer does those as well.

MapServer gets Ugly

One thing the GeoServer and MapServer products share is a fair bit of "ugly". MapServer starts with an advantage in this regard, as it is hard for us to compete with pointers.

We have had occasion to look at the MapServer codebase (for the label positioning code, I think), and at the time we were relieved. It was an intricate, ornate construction, where the slightest tweak would have far-flung consequences. What MapServer Classic (I mean Cheetah) has going for it is the open source advantage: it has been deployed, and tweaked, so often that it shines. Shines in this case means speed.

The only way to really clean it up would be to start again from the same building blocks, something we did not think they would do unless a major sponsor came around. Enter Autodesk ...

I suppose that means we will need to work for a living ...

GeoServer gets Ugly

Okay, I will fess up: the following code is 98% my fault. At this point I will pass the keyboard over to Richard Gould as he introduces the 800-line method of Doom.

Richard Gould writes:

In January of 2004 Jody and I were working on the Validation Web Feature Service aspect of GeoServer. More specifically, Jody was working on that, and I was working on GeoServer's web configuration interface. The day before the project was due, Jody asked me for help. After two weeks of failed debugging, Jody decided we should re-write the WFS transaction code. Let me first say that before starting this work on GeoServer, I had never even heard of GIS before. I didn't even really understand what a Feature was. So for the next 17.5 hours, stretching until 5:30 am, I ran around absolutely clueless of the big (or even small) picture. Somehow, under the guidance of Jody, we came up with the biggest Java method I have ever seen, and it worked. These 800 lines of code almost made Chris Holmes give up on open source! I have not touched the code since, but I fear it is still running about like some titanic bull.

I can't ask for a better intro than that ... thanks Richard, the truth is supposed to hurt, right?

Transaction - The Method Behind the Madness

Yes, there actually was a plan: it involved introducing test cases to the codebase so we could tell when we were done. Up until this time GeoServer survived on a diet of CITE tests which only Chris Holmes managed to reliably run. Fixes involved poking the code, and then pinging an external test suite (called CITE) and asking it to perform 400-odd tests. Wash, rinse, repeat.

Oh, and the class name is TransactionResponse - what does it do? Well, it kind of does everything; it implements the only operation in the WFS protocol that changes anything.

But let's start with what a WFS Transaction is supposed to process ... here is an outline of a Transaction request:

<?xml version="1.0"?>
<wfs:Transaction version="1.0.0" service="WFS" ....>
  <LockId>00A01</LockId>
  <wfs:Insert>
     ...GML...
  </wfs:Insert>
  <wfs:Update typeName="myns:OCEANSA_1M">
    <wfs:Property>
       <wfs:Name>myns:DEPTH</wfs:Name>
       <wfs:Value>2400</wfs:Value>
    </wfs:Property>
    <ogc:Filter>
      <ogc:PropertyIsGreaterThan>
        <ogc:PropertyName>OCEANSA_1M.DEPTH</ogc:PropertyName>
        <ogc:Literal>2400</ogc:Literal>
      </ogc:PropertyIsGreaterThan>
    </ogc:Filter>
  </wfs:Update>
  <wfs:Delete typeName="INWATERA_1M">
    <ogc:Filter><ogc:FeatureId fid="INWATERA_1M.1013"/></ogc:Filter>
  </wfs:Delete>
</wfs:Transaction>

From this we can see a number of interesting details:

  • Insert
  • Delete
  • Update
  • Locking/Unlocking

In response to this we need to return a *TransactionResponse* document that reports the "Feature ID" of any features added ... and how successful we were. Yes, that sounds messed up: we could be SUCCESS, FAILED, or PARTIAL at the end of the day. I am sure PARTIAL is just a quick hack; not sure why the standards body thought this one up, and we went to a lot of trouble not to have to support this feature.
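
Roughly, a successful response document looks like this (a hand-written sketch against the WFS 1.0.0 schema; the fid value is made up):

<wfs:WFS_TransactionResponse version="1.0.0"
    xmlns:wfs="http://www.opengis.net/wfs"
    xmlns:ogc="http://www.opengis.net/ogc">
  <wfs:InsertResult handle="insert-1">
    <ogc:FeatureId fid="OCEANSA_1M.42"/>
  </wfs:InsertResult>
  <wfs:TransactionResult>
    <wfs:Status>
      <wfs:SUCCESS/>
    </wfs:Status>
  </wfs:TransactionResult>
</wfs:WFS_TransactionResponse>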

As long as we were going to this much trouble, we had some more features to consider:

  • Writability of DataSources
  • Restrictions on each of the kinds of operations
  • Validation

You may note that the last entry, Validation, was the purpose of the Validating Web Feature Server project we were working on - so we had a vested interest in getting this method to work. Validation is grouped into two areas of applicability (see the sketch after this list):

  • Feature Validation - per-feature tests, can be applied before an insert, or after an update
  • Integrity Validation - tests that take in the big picture, checked before committing all the changes off to the data sources
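
In terms of this class's own helper methods (both show up later in this walkthrough), the two kinds of checks land in roughly these places; a condensed sketch of the control flow, not a literal excerpt:

// Feature Validation: check a batch of features on their own,
// before an insert (or after an update) touches the data source
featureValidation( typeInfo.getDataStoreInfo().getId(), schema, collection );

// Integrity Validation: after all elements have executed, check the
// big picture against the area we touched, just before commit
integrityValidation( stores2, envelope );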

That is it for what we need to do; now for how we did it.

TransactionResponse - GeoServer in a Nutshell

Welcome to TransactionResponse, the only class you need to know to "understand" GeoServer. First of all, let's see what this class has to say for itself ...

From TransactionResponse.java:

/**
 * Handles a Transaction request and creates a TransactionResponse string.
 * @author Chris Holmes, TOPP
 */
public class TransactionResponse implements Response {
...
}

Thanks Chris, you're a pal. Let's try again. GeoServer is well documented; just wander up the class hierarchy and you will find information somewhere.

Javadocs from Response:

The Response interface serves as a common denominator for all service operations that generate content.

The work flow for this kind of object is divided in two parts: the first is executing a request and the second is writing the result to an OutputStream.

   1. Execute: execute(Request)
          * Executing the request means taking a Request object and, based on its set of request parameters, doing any heavy processing necessary to produce the response.
          * Once the execution has been made, the Response object should be ready to send the response content to an output stream with minimal risk of generating an exception.
          * Still, it is not required, nor even recommended, that the execution process generate the response content itself; just that it performs any query or processing that could generate a trappable error.
          * Execute may throw a ServiceException if it wishes to supply a specific response in error. As an example the WFS Transaction process has a defined Transaction Document with provisions for reporting error information.
   2. ContentType: getContentType()
          * Called to set the response type. Depending on the strategy used by AbstractService the framework may be committed to returning this type.
   3. Writing: writeTo(OutputStream)
          * Write the response to the provided output stream.
          * Any exceptions thrown by this writeTo method may never reach the end user in usable form. You should assume you are writing directly to the client.

Note: abort() will be called as part of error handling giving your response subclass a chance to clean up any temporary resources it may have required in execute() for use in writeTo().

This is especially useful for streamed responses such as WFS GetFeature or WMS GetMap, where the execution process can be used to parse parameters, execute queries upon the corresponding data sources, and leave things ready to generate a streamed response when the consumer calls writeTo.

Author:
    Gabriel Roldán

Thanks Gabriel. Note that Gabriel is from Spain and actually has interesting letters in his name - he is not trying to be leet.

So let's explain what is going on:

1. The Transaction class receives an HTTPRequest; its superclass WFSService checks enablement and handles errors, and the superclass AbstractService is an HttpServlet and does magic.
2. Magic sounds like a framework, the GeoServer framework; let's look at what AbstractService does for you:

  • get a RequestReader
  • ask the RequestReader for the Request object
  • initialize the resulting Request object with the ServletRequest
  • get the appropriate ResponseHandler for the Request
  • set the http response's content type
  • write to the http response's output stream
  • call Response cleanup
  • if anything goes wrong produce a ServiceException and write it out instead

3. The fun thing is GeoServer has a couple of "strategies" for doing this - SPEED, FILE, BUFFER - with different speed versus safety trade-offs.

So Gabriel's javadocs do make sense: we need to be able to execute the Transaction operation, indicate the contentType, and finally produce a TransactionDocument so we can writeTo the provided OutputStream.
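
To make the lifecycle concrete, here is a minimal sketch of a Response implementation. This is not real GeoServer source: the interface may declare more methods, and the abort signature in particular is a guess.

import java.io.IOException;
import java.io.OutputStream;

public class SketchResponse implements Response {
    private String result; // computed by execute(), written by writeTo()

    public void execute(Request request) throws ServiceException {
        // all the heavy, failure-prone work happens here, while we
        // can still report a clean error document to the user
        result = "<wfs:WFS_TransactionResponse .../>";
    }

    public String getContentType(GeoServer gs) {
        // depending on the strategy the framework may be committed to this type
        return gs.getMimeType();
    }

    public void writeTo(OutputStream out) throws ServiceException, IOException {
        // assume we are writing straight to the client - keep the risk minimal
        out.write(result.getBytes());
    }

    public void abort(GeoServer gs) { // signature is a guess
        // release any temporary resources execute() set aside for writeTo()
    }
}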

Just in case you were wondering, a new TransactionResponse object is created for each request - we don't have to worry about concurrent access to this object.

TransactionResponse.execute

To start with, let's have a look at the method signature:

protected void execute(Request transactionRequest)
        throws ServiceException, WfsException {
....
}

The TransactionRequest contains the information (the Transaction Servlet produced this object from the HttpRequest for us). We will get back to this object later.

Exceptions during Response Handling

Let's consider the exceptions:

  • ServiceException - this is an Exception that knows how to write itself out to the output stream as a proper WFS ServiceException document. That is about all I know; the javadocs still contain my best guess of what the parameters mean.
  • WfsException - this is an Exception that knows how to write itself out as a TransactionResponse with a status of FAILED

So let's consider when to use these two: ServiceException should be used when it is GeoServer's fault that things are bad (perhaps the database is down?), and WfsException should be used when it is the user's fault that things are bad (perhaps they are trying to modify a Feature that is locked?).

The ability to throw exceptions, and get out of normal processing when things go bad, really helped make this code clear.
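
As a fragment (the condition names here are hypothetical, made up for illustration):

// GeoServer's fault: report a ServiceException document
if (databaseIsDown) {
    throw new ServiceException("Could not reach the data store");
}

// the user's fault: report a TransactionResponse with status FAILED
if (featureLockedBySomeoneElse) {
    throw new WfsException("Feature is locked - a valid LockId is required");
}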

Finding the GeoServer Module

And now for the implementation:

if (!(request instanceof TransactionRequest)) {
        throw new WfsException(
            "bad request, expected TransactionRequest, but got " + request);
    }
    if ((request.getWFS().getServiceLevel() & WFSDTO.TRANSACTIONAL) == 0) {
        throw new ServiceException("Transaction support is not enabled");
    }

So we ensure that this really is a TransactionRequest; stranger things have happened, and good error messages do pay off.

The "service level" check warrants further study, and some background information. GeoServer is arranged as a number of independent "modules". These modules float around in the ApplicationContainer knowing as little about one another as possible. Struts is currently used to set up these modules and dump them in the servlet container in an orderly fashion.

The modules are:

  • Data - holds all the fun data stuff, in uDig this would be the "LocalCatalog"
  • WFS - care and feeding of the Web Feature Server servlets and configuration
  • WMS - same idea for the Web Map Server

The interesting thing is that each module can be run against a different set of data; indeed you could load up a couple of WFS modules with different configuration, and data access, on a user-by-user basis.

Back to reality, the Request must be consulted to locate the modules because the request is aware of the ApplicationContainer.
The WFS.getServiceLevel() is a magic integer bitmask that indicates what WFS operations are "turned on" - the bit WFSDTO.TRANSACTIONAL indicates that GeoServer is configured as a WFS-T.

Reading between the lines you can recognize that configuration is handled via Data Transfer Objects (DTO) that are written to and from your GeoServer data directory as XML. We hope to be able to store these objects with a configuration service such as JMX.
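
As a sketch of the bitmask idea (the literal values are assumptions for illustration; only the constant names come from WFSDTO):

// each optional WFS capability gets its own bit ...
public static final int SERVICE_INSERT  = 1;
public static final int SERVICE_UPDATE  = 2;
public static final int SERVICE_DELETE  = 4;
public static final int SERVICE_LOCKING = 8;
// ... and "transactional" is the combination of the modification bits
public static final int TRANSACTIONAL =
        SERVICE_INSERT | SERVICE_UPDATE | SERVICE_DELETE;

// each operation then checks its bit before doing any work:
if ((request.getWFS().getServiceLevel() & WFSDTO.TRANSACTIONAL) == 0) {
    throw new ServiceException("Transaction support is not enabled");
}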

Now that the sanity checks are out of the way let's do some work:

//REVISIT: this should maybe integrate with the other exception
    //handlers better - but things that go wrong here should cause
    //transaction exceptions.
    //try {
    execute((TransactionRequest) request);

    //} catch (Throwable thrown) {
    //    throw new WfsTransactionException(thrown);
    //}

As you can see, we believe in commenting code out, with long explanations, rather than trusting version control. As for getting to work, I lied; we isolated the work into the ...

800 line method of Doom

And here we are:

protected void execute(TransactionRequest transactionRequest)
        throws ServiceException, WfsException {
...
}

Now we are sure that a) we have a TransactionRequest and b) we passed the configuration checks to get here.

Let's see what the javadocs have to say for themselves:

Execute Transaction request.

    The results of this operation are stored for use by writeTo:

        * transaction: used by abort & writeTo to commit/rollback
        * request: used for the user's getHandle information to report errors
        * stores: FeatureStores required for Transaction
        * failures: List of failures produced

    Because we are using geotools2 locking facilities our modification will simply fail with an IOException if we have not provided proper authorization.

    The specification allows a WFS to implement PARTIAL success if it is unable to rollback all the requested changes. This implementation is able to offer full Rollback support and will not require the use of PARTIAL success.

    Parameters:
        transactionRequest -
    Throws:
        ServiceException - DOCUMENT ME!
        WfsException
        WfsTransactionException - DOCUMENT ME!

That is better than a poke in the eye with a sharp stick; it even documents some member variables that will be used to communicate with writeTo when generating a TransactionResponse document. We will discuss the member variables as they are encountered in the implementation.

Setting up

Well, to start out with we need a couple of things:

request = transactionRequest; // preserved for writeTo() handle access
transaction = new DefaultTransaction();
LOGGER.fine("request is " + request);

We are saving the request for later. Each request has an optional "handle" that may be specified by the user. This handle is supposed to be used for error reporting; on the off chance we error out during writeTo() we need to report this information.

GeoTools offers cross data store transaction support. This was constructed explicitly for the GeoServer application and this method. To make use of the facility we will need to make ourselves a Transaction; the default implementation will work just fine.

It is a little known fact that the default constructor for DefaultTransaction constructs a stack trace and uses it to determine what method started the transaction. This is used during error reporting to make your life easier. DefaultTransaction also supports a constructor where a handle is provided, so we really should be using the handle provided by the user.

TODO:

transaction = request.getHandle() == null ? new DefaultTransaction() : new DefaultTransaction( request.getHandle() );
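
For orientation, here is the whole life of the transaction in this class, condensed into one place (a sketch assembled from code that is actually spread across execute, writeTo, and abort - not a literal excerpt):

Transaction transaction = new DefaultTransaction( request.getHandle() );
try {
    store.setTransaction( transaction ); // enlist each FeatureStore (execute)
    store.removeFeatures( filter );      // do the actual work (execute)
    transaction.commit();                // all or nothing (writeTo)
} catch (IOException ioException) {
    transaction.rollback();              // on failure (writeTo / abort)
    throw ioException;
} finally {
    transaction.close();
}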

Finally, GeoServer makes use of the java.util.logging facilities; we used Log4j for the longest time and you may still find some references in the codebase.

Data Access

You already know about the different modules; to make this literal, the module we want is called Data:

Data catalog = transactionRequest.getWFS().getData();

The difference between literal programming and literate programming is almost the point of object oriented programming. And my spelling was never that good...

WfsTransResponse

You may remember that writeTo needs to send out a document later in the day. This object is going to "collect" all the information needed.

WfsTransResponse build = new WfsTransResponse(WfsTransResponse.SUCCESS,
        transactionRequest.getGeoServer().isVerbose());

This brings up another interesting configuration idea: GeoServer supports a verbose mode where it gets extra happy with its reporting of details. The happiness comes from including stack traces as part of a ServiceException document.

Minding the (Data) Stores

Okay, this time we really will hunt down some data. GeoTools makes use of a high-level data access API called FeatureSource. Rather than including both read and write methods in this API (and having half of them throw exceptions when writing is not an option), we have broken the idea up into two classes. The FeatureStore interface extends FeatureSource with the methods required for data modification.

First, some bookkeeping allowing us to validate, and clean up, on the off chance we accomplish something.

//
// We are going to preprocess our elements,
// gathering all the FeatureSources we need
//
// Map of required FeatureStores by typeName
Map stores = new HashMap();

// Map of required FeatureStores by typeRef (dataStoreId:typeName)
// (This will be added to as the contents are harmed)
Map stores2 = new HashMap();

So we have two maps, and two ways of referring to a FeatureStore:

  • typeName - this is the name of the FeatureType, all Features produced by the FeatureSource will conform to a schema with this typeName. The typeName is used (along with a namespace) when writing the content out as XML.
  • typeRef - typeRef is something I made up combining the two bits of information we need to locate a FeatureSource. Inside the Data module information is organized into DataStore objects, each of which is given a dataStoreId. You can use a DataStore to obtain a FeatureSource if you have a typeName.

Okay, you caught me out: the above also talks about the GeoTools DataStore class. DataStore is the low-level data access API, providing access to an entire database (or file) at a time. It allows for low-level feature-by-feature access, and it also provides the list of typeNames for the content it knows about.

Rule of thumb:

  • DataStore == a database, or a shapefile
  • FeatureSource / FeatureStore == a table, or the contents of a shapefile

Note that because FeatureStore is a high-level API it is much easier to use, and optimized for common activities - often generating direct SQL statements rather than dragging everything into Java for processing.
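
A rough sketch of how the two levels relate (the shapefile name is made up, and exception handling is omitted):

// a shapefile DataStore holds exactly one type; a database would list many
DataStore dataStore = new ShapefileDataStore( new File("roads.shp").toURL() );
String typeName = dataStore.getTypeNames()[0]; // e.g. "roads"

// the high-level view of that one type
FeatureSource source = dataStore.getFeatureSource( typeName );
if (source instanceof FeatureStore) {
    // writable - we can enlist it in our transaction
    FeatureStore store = (FeatureStore) source;
    store.setTransaction( transaction );
}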

PreProcessing TransactionRequest

Now it is time to start picking apart our transaction request with the following goals in mind:

  • Figure out which FeatureStores are going to be modified by this operation
  • Set them up to work on our transaction

Tally-ho:

// Gather FeatureStores required by Transaction Elements
        // and configure them with our transaction
        //
        // (I am using element rather than transaction sub request
        // to agree with the spec docs)
        for (int i = 0; i < request.getSubRequestSize(); i++) {
            SubTransactionRequest element = request.getSubRequest(i);


            String typeRef = null;
            String elementName = null;
            FeatureTypeInfo meta = null;

In order to proceed we are going to need two things: the typeRef and the FeatureTypeInfo (the elementName is just used for error reporting).

So now it is time to explain FeatureTypeInfo - this class captures everything that GeoServer knows about the data: both the information on how to connect to the data, and the configuration supplied by the user. This information goes beyond what can be determined by inspecting the data source itself (for example GeoServer allows you to configure a global bounding box for the data).

This metadata information is really simple, based roughly on Dublin Core. It is mostly enough information to generate the capabilities document. We have ported this information to GeoTools and uDig. The latest implementation of this idea is part of the GeoTools catalog package. GeoServer is still using the original at this time.

Processing an InsertRequest:

if (element instanceof InsertRequest) {
                // Option 1: Guess FeatureStore based on insert request
                //
                Feature feature = ((InsertRequest) element).getFeatures()
                                   .features().next();

                if (feature != null) {
                    String name = feature.getFeatureType().getTypeName();
                    URI uri = feature.getFeatureType().getNamespace();

                    LOGGER.fine("Locating FeatureSource uri:'"+uri+"' name:'"+name+"'");
                    meta = catalog.getFeatureTypeInfo(name, uri==null?null:uri.toString());  //change suggested by DZweirs

                    //HACK: The insert request does not get the correct typename,
                    //as we can no longer hack in the prefix since we are using the
                    //real featureType.  So this is the only good place to do the
                    //look-up for the internal typename to use.  We should probably
                    //rethink our use of prefixed internal typenames (cdf:bc_roads),
                    //and have our requests actually use a type uri and type name.
                    //Internally we can keep the prefixes, but if we do that then
                    //this will be less hacky and we'll also be able to read in xml
                    //for real, since the prefix should refer to the uri.
                    //
                    // JG:
                    // Translation: Insert does not have a clue about the prefix - this provides the clue
                    element.setTypeName( meta.getNameSpace().getPrefix()+":"+meta.getTypeName() );
                }
                else {
                    LOGGER.finer("Insert was empty - does not need a FeatureSource");
                    continue; // insert is actually empty
                }
            }

Here we have a bit of a gap in GeoServer right now. Our ability to parse GML during a Transaction Insert is based on a SAX parser and does a pretty blind job of it (not taking the known FeatureType information into account). The GeoTools FeatureType construct maintains a concept of "namespace", and the DataStores keep track of the FeatureTypes known to them. You can see us doing a lookup in the catalog to determine how we would write out content of this type, and modifying the original element to agree with our assumptions.

The other two request types are easier; no GML need be harmed:

else {
                // Option 2: lookup based on elementName (assume prefix:typeName)
                typeRef = null; // unknown at this time
                elementName = element.getTypeName();
                if( stores.containsKey( elementName )) {
                    LOGGER.finer("FeatureSource '"+elementName+"' already loaded." );
                    continue;
                }
                LOGGER.fine("Locating FeatureSource '"+elementName+"'...");
                meta = catalog.getFeatureTypeInfo(elementName);
                element.setTypeName( meta.getNameSpace().getPrefix()+":"+meta.getTypeName() );
            }

Here we can just figure out what type is referenced. We do ignore the prefix information - and hope we won't have a conflict. Once again we mangle the element to agree with our concept of prefix.

And now the fun stuff - finding the data:

typeRef = meta.getDataStoreInfo().getId()+":"+meta.getTypeName();
            elementName = meta.getNameSpace().getPrefix()+":"+meta.getTypeName();
            LOGGER.fine("located FeatureType w/ typeRef '"+typeRef+"' and elementName '"+elementName+"'" );
            if (stores.containsKey(elementName)) {
                // typeName already loaded
                continue;
            }
            try {
                FeatureSource source = meta.getFeatureSource();
                if (source instanceof FeatureStore) {
                    FeatureStore store = (FeatureStore) source;
                    store.setTransaction(transaction);
                    stores.put( elementName, source );
                    stores2.put( typeRef, source );
                } else {
                    throw new WfsTransactionException(elementName
                        + " is read-only", element.getHandle(),
                        request.getHandle());
                }
            } catch (IOException ioException) {
                throw new WfsTransactionException(elementName
                    + " is not available:" + ioException,
                    element.getHandle(), request.getHandle());
            }

Now that we have located the correct FeatureTypeInfo (aka meta) we can figure out a typeRef and elementName to use. After a quick check to see if we have already located it - we can start the lookup process.

A helper method of FeatureTypeInfo actually cuts to the chase - getFeatureSource() will create us a new FeatureSource all set up and ready to go. We now have a couple of checks: if the FeatureSource is not available (IOException) or not writable (not an instanceof FeatureStore) we need to throw a WfsTransactionException to let the user know.

We can then arrange the feature source into our two bookkeeping maps for later use.

(Un)Locking

The WFS specification has an interesting locking system. It actually represents a compromise between "strong transaction support" (that lasts between sessions), and something simple enough to be implemented.

// provide authorization for transaction
        //
        String authorizationID = request.getLockId();

        if (authorizationID != null) {
            if ((request.getWFS().getServiceLevel() & WFSDTO.SERVICE_LOCKING) == 0) {
                // could we catch this during the handler, rather than during execution?
                throw new ServiceException("Lock support is not enabled");
            }
            LOGGER.finer("got lockId: " + authorizationID);

            if (!catalog.lockExists(authorizationID)) {
                String mesg = "Attempting to use a lockID that does not exist"
                    + ", it has either expired or was entered wrong.";
                throw new WfsException(mesg);
            }

            try {
                transaction.addAuthorization(authorizationID);
            } catch (IOException ioException) {
                // This is a real failure - not associated with a element
                //
                throw new WfsException("Authorization ID '" + authorizationID
                    + "' not useable", ioException);
            }
        }

Yes, the entire method goes modal from this point forward: if authorizationID != null we are dealing with locks. What do we do with the authorizationID? We feed it to the transaction and stand back to enjoy the show (the various DataStores will check for this authorization ID as needed).

The above does contain a small mistake: these are long-term transaction locks that are not always maintained by GeoServer. If GeoServer is restarted it will not have a memory of locks already in use by an external database (indeed the lock may have been obtained with an application other than GeoServer).

TODO:

if (!catalog.lockExists(authorizationID)) {
    LOGGER.warning( "Not locked by this instance of GeoServer" );
}

Of course we will wait for a bug report to come in on this one.

Transaction Processing

Now that we have all the setup we could ever imagine, it is time to get down to processing the individual elements:

// execute elements in order,
        // recording results as we go
        //
        // I will need to record the damaged area for
        // pre commit validation checks
        //
        Envelope envelope = new Envelope();

        for (int i = 0; i < request.getSubRequestSize(); i++) {
            SubTransactionRequest element = request.getSubRequest(i);

            // We expect element name to be of the format prefix:typeName
            // We take care to force the insert element to have this format above.
            //
            String elementName = element.getTypeName();
            String handle = element.getHandle();
            FeatureStore store = (FeatureStore) stores.get(elementName);
            if( store == null ){
            	throw new ServiceException( "Could not locate FeatureStore for '"+elementName+"'" );
            }
            String typeName = store.getSchema().getTypeName();

            ....

Once again we have brought together all the information needed - this time for an individual element. Of interest is obtaining the local variable store via a lookup to the FeatureStores we already collected during preprocessing.

DeleteRequest Element

Let's commence with the ceremonial sanity checks:

if (element instanceof DeleteRequest) {
                if ((request.getWFS().getServiceLevel() & WFSDTO.SERVICE_DELETE) == 0) {
                    // could we catch this during the handler, rather than during execution?
                    throw new ServiceException(
                        "Transaction Delete support is not enabled");
                }

                DeleteRequest delete = (DeleteRequest) element;

                //do a check for Filter.NONE, the spec specifically does not
                // allow this
                if (delete.getFilter() == Filter.NONE) {
                	throw new ServiceException(
            			"Filter must be supplied for Transaction Delete"
                	);
                }

                LOGGER.finer( "Transaction Delete:"+element );

After checking the WFS configuration to ensure that this user is authorized to perform SERVICE_DELETE we can set ourselves up with a DeleteRequest. The WFS specification requires that a Filter be provided; since we are not validating against the schema, we will need to explicitly check for this ourselves - producing a ServiceException when in error.

PreProcessing Validation Hints

Now that things are getting serious (with real data access) we are going to break out a try/catch block, as IOExceptions become a fact of life:

try {
                    Filter filter = delete.getFilter();

                    Envelope damaged = store.getBounds(new DefaultQuery(
                                delete.getTypeName(), filter));

                    if (damaged == null) {
                        damaged = store.getFeatures(filter).getBounds();
                    }

In this initial stretching exercise we are trying to figure out what area will be damaged (i.e. modified) by the delete operation. This information is gathered so we can limit the scope of any validation checks performed after the fact. We need to do this check before the change takes place (because afterwards the content will not be there to check).

TODO: If no validation is needed we could skip this pass through the data!

Delete with ReleaseAction.SOME

Moving on - we get the first of our Lock checks:

if ((request.getLockId() != null)
                            && store instanceof FeatureLocking
                            && (request.getReleaseAction() == TransactionRequest.SOME)) {
                        FeatureLocking locking = (FeatureLocking) store;

Then we are due for some dream time:

// TODO: Revisit Lock/Delete interaction in gt2
                        if (false) {
                            // REVISIT: This is bad - by releasing locks before
                            // we remove features we open ourselves up to the danger
                            // of someone else locking the features we are about to
                            // remove.
                            //
                            // We cannot do it the other way round, as the Features will
                            // not exist
                            //
                            // We cannot grab the fids offline using AUTO_COMMIT
                            // because we may have removed some of them earlier in the
                            // transaction
                            //
                            locking.unLockFeatures(filter);
                            store.removeFeatures(filter);
                        }

This is the way the game is meant to be played - simple, direct, readable - but wrong.

And now for reality:

else {
                            // This a bit better and what should be done, we will
                            // need to rework the gt2 locking api to work with
                            // fids or something
                            //
                            // The only other thing that would work would be
                            // to specify that FeatureLocking is required to
                            // remove locks when removing Features.
                            //
                            // While that sounds like a good idea, it would be
                            // extra work when doing release mode ALL.
                            //
                            DataStore data = store.getDataStore();
                            FilterFactory factory = FilterFactory
                                .createFilterFactory();
                            FeatureWriter writer;
                            writer = data.getFeatureWriter(typeName, filter,
                                    transaction);

                            try {
                                while (writer.hasNext()) {
                                    String fid = writer.next().getID();
                                    locking.unLockFeatures(factory
                                        .createFidFilter(fid));
                                    writer.remove();
                                }
                            } finally {
                                writer.close();
                            }

                            store.removeFeatures(filter);
                        }

Reality is a dark and gloomy place. To start with, we backtrack up to the DataStore and get a low-level FeatureWriter. The FeatureWriter interface works like an Iterator that throws IOExceptions. Like a ListIterator it allows content to be remove()ed - hence our interest. Since FeatureWriter throws IOExceptions we always need to make use of a try/finally block or risk disaster.
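
Here is the FeatureWriter idiom in isolation, with the locking concerns stripped away (a sketch; shouldRemove is a hypothetical predicate standing in for whatever per-feature decision you need to make):

FeatureWriter writer = dataStore.getFeatureWriter( typeName, filter, transaction );
try {
    while (writer.hasNext()) {
        Feature feature = writer.next(); // may throw IOException
        if (shouldRemove( feature )) {   // hypothetical predicate
            writer.remove();             // just like ListIterator
        }
    }
} finally {
    writer.close(); // always, or risk disaster
}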

TODO: Figure out why store.removeFeatures( filter ) is there; if FeatureWriter is doing its job this line should not do anything

Normal Delete (or Delete with ReleaseAction.ALL)

This is much easier:

else {
                        // We don't have to worry about locking right now
                        //
                        store.removeFeatures(filter);
                    }

Yes, working without locks is much easier; concurrency always has a price.

Delete Element Cleanup

A little bit of bookkeeping and we are done:

envelope.expandToInclude(damaged);
                } catch (IOException ioException) {
                    throw new WfsTransactionException(ioException.getMessage(),
                        element.getHandle(), request.getHandle());
                }
            }

We expand the member field envelope to include the area damaged by this element. The envelope will be used to limit validation checking later.

So by the time we are done, the content has been removed (but the transaction is not committed yet). We have recorded an envelope describing where the change occurred. If we were doing ReleaseAction.SOME we have carefully released locks on only those features actually deleted.

Insert Element Processing

Processing the insert element is technically the most risky proposition - because it involves parsing GML content. While that does represent plenty of interesting challenges, it is not the subject of this article. For more information please look at how the TransactionRequest object is constructed.

The ritual security check ensues:

if (element instanceof InsertRequest) {
                if ((request.getWFS().getServiceLevel() & WFSDTO.SERVICE_INSERT) == 0) {
                    // could we catch this during the handler, rather than during execution?
                    throw new ServiceException(
                        "Transaction INSERT support is not enabled");
                }
                LOGGER.finer( "Transasction Insert:"+element );

No surprises there; with a little try/catch block we can move on to the real work:

try {
                    InsertRequest insert = (InsertRequest) element;
                    FeatureCollection collection = insert.getFeatures();

                    FeatureReader reader = DataUtilities.reader(collection);
                    FeatureType schema = store.getSchema();

The GML content has already been parsed into a FeatureCollection for us; DataUtilities can adapt this to a FeatureReader for later use.

Now we can look up enough information to perform our first validation check:

// Need to use the namespace here for the lookup, due to our weird
                    // prefixed internal typenames.  see
                    //   http://jira.codehaus.org/secure/ViewIssue.jspa?key=GEOS-143

                    // Once we get our datastores making features with the correct namespaces
                    // we can do something like this:
                    // FeatureTypeInfo typeInfo = catalog.getFeatureTypeInfo(schema.getTypeName(), schema.getNamespace());
                    // until then (when geos-144 is resolved) we're stuck with:
                    FeatureTypeInfo typeInfo = catalog.getFeatureTypeInfo(element.getTypeName() );

                    // this is possible with the insert hack above.
                    LOGGER.finer("Use featureValidation to check contents of insert" );
                    featureValidation( typeInfo.getDataStoreInfo().getId(), schema, collection );

The featureValidation( dataStoreId, schema, collection ) method will figure out what validation tests can be run right away. The content is checked before making it anywhere near the data source!

Now we can finally get down to inserting the features:

Set fids = store.addFeatures(reader);
                    build.addInsertResult(element.getHandle(), fids);

                    //
                    // Add to validation check envelope
                    envelope.expandToInclude(collection.getBounds());
                } catch (IOException ioException) {
                    throw new WfsTransactionException(ioException,
                        element.getHandle(), request.getHandle());
                }

The GeoTools addFeatures method will return a Set of the FeatureIds of the newly created features. This is not quite ideal - it would have been kind to return them in order with a List. This information is important as we need it in order to create our TransactionResponse document; the local variable build is gathering up this information for later. Finally, we maintain that envelope for use in later checks.

TODO: Use a List of FeatureIds so response is returned in order of creation.

All in all this is more straightforward than deleting.

Update Element Processing

So what do we do when we have the complexities of checking locks, along with the joy of parsing? The answer is contained in the depths of processing the Update element.

Ritualistic security check (almost makes me wish for Aspects - hint hint):

if (element instanceof UpdateRequest) {
                if ((request.getWFS().getServiceLevel() & WFSDTO.SERVICE_UPDATE) == 0) {
                    // could we catch this during the handler, rather than during execution?
                    throw new ServiceException(
                        "Transaction Update support is not enabled");
                }
                LOGGER.finer( "Transaction Update:"+element);

PreProcessing Validation Hints

Now we can start by gathering up the information needed to make a query:

try {
                    UpdateRequest update = (UpdateRequest) element;
                    Filter filter = update.getFilter();

                    AttributeType[] types = update.getTypes(store.getSchema());
                    Object[] values = update.getValues();

                    DefaultQuery query = new DefaultQuery(update.getTypeName(),
                            filter);

And yes, Query was meant in the literal DefaultQuery sort of way; we are only requesting the values that are going to get modified.

Why would we do this? Because we are going to remember the bounds for later, and also which exact features were harmed:

// Pass through data to collect fids and damaged region
                    // for validation
                    //
                    Set fids = new HashSet();
                    LOGGER.finer("Preprocess to remember modification as a set of fids" );
                    FeatureReader preprocess = store.getFeatures( filter ).reader();
                    try {
                        while( preprocess.hasNext() ){
                            Feature feature = preprocess.next();
                            fids.add( feature.getID() );
                            envelope.expandToInclude( feature.getBounds() );
                        }
                    } catch (NoSuchElementException e) {
                        throw new ServiceException( "Could not acquire FeatureIDs", e );
                    } catch (IllegalAttributeException e) {
                        throw new ServiceException( "Could not acquire FeatureIDs", e );
                    }
                    finally {
                        preprocess.close();
                    }

This is a straightforward pass through the data.

TODO: If no validation is needed we could skip this.

Update Features

We can now proceed with the update; since the high-level FeatureSource API was created with this method in mind, the process is straightforward:

try {
    if (types.length == 1) {
        store.modifyFeatures(types[0], values[0], filter);
    } else {
        store.modifyFeatures(types, values, filter);
    }
} catch (IOException e) {
    // DJB: this is for cite tests. We should probably do this for all the
    // exceptions here - throw a transaction FAILED instead of a service exception
    //
    // this failed - we want a FAILED not a service exception!
    build = new WfsTransResponse(WfsTransResponse.FAILED,
            transactionRequest.getGeoServer().isVerbose());
    // add in exception details here??
    build.setMessage(e.getLocalizedMessage());
    response = build;
    // DJB: it looks like the transaction is rolled back in writeTo()
    return;
}

We even snuck in an optimization for when only a single property is updated. And then we run into a surprise - throwing a WfsException is supposed to be sufficient; there should be no need to construct a special WfsTransResponse yourself.

TODO: throw new WfsException( e ) and flush out the bugs that must have prevented the cite tests from passing

Unlocking the Modified Features

A bit more fun here: we need to unlock the modified features if the ReleaseAction is SOME:

if ((request.getLockId() != null)
                            && store instanceof FeatureLocking
                            && (request.getReleaseAction() == TransactionRequest.SOME)) {
                        FeatureLocking locking = (FeatureLocking) store;
                        locking.unLockFeatures(filter);
                    }

We have what looks to be another obscure bug (anything w/ locking is obscure). The problem: filter may not return the exact same features as before the modification was made.

TODO: Construct a new FidFilter from fids (ie the list we made for validation checking)

Validation on Updated Content

Now that we have modified some features we may as well check if they are any good:

// Post process - check features for changed boundary and
                    // pass them off to the ValidationProcessor
                    //
                    if( !fids.isEmpty() ) {
                        LOGGER.finer("Post process update for boundary update and featureValidation");
                        FidFilter modified = FilterFactory.createFilterFactory().createFidFilter();
                        modified.addAllFids( fids );

                        FeatureCollection changed = store.getFeatures( modified ).collection();
                        envelope.expandToInclude( changed.getBounds() );

                        FeatureTypeInfo typeInfo = catalog.getFeatureTypeInfo(element.getTypeName());
                        featureValidation(typeInfo.getDataStoreInfo().getId(),store.getSchema(), changed);
                    }

This time around we can see a FidFilter being created; we construct a FeatureCollection in the usual manner and send it off to the featureValidation method for review. It should be noted that the featureValidation method will happily throw an IOException if somebody is not behaving well, causing the Transaction to be rolled back and the data left in a consistent state.

Cleaning up after the Update

A little housekeeping and we are done:

Final Validation Check

Now that we have done all the work we can check if it is any good:

} catch (IOException ioException) {
                    throw new WfsTransactionException(ioException,
                        element.getHandle(), request.getHandle());
                } catch (SchemaException typeException) {
                    throw new WfsTransactionException(typeName
                        + " inconsistent with update:"
                        + typeException.getMessage(), element.getHandle(),
                        request.getHandle());
                }
            }
        }
// All operations have worked thus far
        //
        // Time for some global Validation Checks against envelope
        //
        try {
            integrityValidation(stores2, envelope);
        } catch (Exception invalid) {
            throw new WfsTransactionException(invalid);
        }

        // we will commit in the writeTo method
        // after user has got the response
        response = build;
    }

After the validation check we set the field response to the build object we have been carefully constructing.

Writing out the TransactionDocument

Okay, you have survived the 800 line method of doom; why am I still talking? Because our result has not been a) committed or b) sent off to the client. I hate cliffhanger endings and there is a lot of data lurking on the edge at this point in the story.

If you were paying attention to the saga of AbstractService, we have one more responsibility before we get around to writing out content:

public String getContentType(GeoServer gs) {
        return gs.getMimeType();
    }

That is right, the content type is completely defined by the configuration.

writeTo

And now for the good stuff:

/**
     * Writes generated xmlResponse.
     *
     * <p>
     * I have delayed commiting the result until we have returned it to the
     * user, this gives us a chance to rollback if we are not able to provide
     * a response.
     * </p>
     * I could not quite figure out what to do about releasing locks. It could be
     * we are supposed to release locks even if the transaction fails, or only
     * if it succeeds.
     *
     * @param out DOCUMENT ME!
     *
     * @throws ServiceException DOCUMENT ME!
     * @throws IOException DOCUMENT ME!
     */
    public void writeTo(OutputStream out) throws ServiceException, IOException {

So the writeTo method gets to commit the content, generate the Transaction document, and release any locks.

May as well get started:

if ((transaction == null) || (response == null)) {
            throw new ServiceException("Transaction not executed");
        }

        if (response.status == WfsTransResponse.PARTIAL) {
            throw new ServiceException("Canceling PARTIAL response");
        }

If we have not started yet (i.e. execute has not been called), or if some developer is trying to support the PARTIAL response type, it is time to die.

Writing is straightforward:

try {
            Writer writer;

            writer = new OutputStreamWriter(out);
            writer = new BufferedWriter(writer);

            response.writeXmlResponse(writer, request);
            writer.flush();

Good thing that response knows how to write itself out.

Now we get to the heart of the application:

switch (response.status) {
            case WfsTransResponse.SUCCESS:
                transaction.commit();
                break;

            case WfsTransResponse.FAILED:
                transaction.rollback();
                break;
            }

I am glad to see so much work looking so simple at the end of things - great work everyone!

A little bit of error wrangling and we are done:

} catch (IOException ioException) {
            transaction.rollback();
            throw ioException;
        } finally {
            transaction.close();
            transaction = null;
        }

Unlocking

Okay, we are not quite done; here is a little bit of lock cleanup.

//
        // Let's deal with the locks
        //
        // Q: Why talk to Data you ask
        // A: Only class that knows all the DataStores
        //
        // We really need to ask all DataStores to release/refresh
        // because we may have locked Features with this Authorization
        // on them, even though we did not refer to them in this transaction.
        //
        // Q: Why here, why now?
        // A: The operation was a success, and we have completed the operation
        //
        // We also need to do this if the operation is not a success;
        // you can find this same code in the abort method
        //
        Data catalog = request.getWFS().getData();

        if (request.getLockId() != null) {
            if (request.getReleaseAction() == TransactionRequest.ALL) {
                catalog.lockRelease(request.getLockId());
            } else if (request.getReleaseAction() == TransactionRequest.SOME) {
                catalog.lockRefresh(request.getLockId());
            }
        }

We are forced to ask the Data module for enough information to clean up after the locks - a single Lock may be used on more than one DataStore, and the Data module is the only class that "knows" about all the DataStores. This facility is available through the GeoTools ...
