Note: use of Quartz is now deprecated in favour of the newer Flint disk-based format. We plan to remove the quartz backend entirely in Xapian 1.1.0. However, much of this document is also relevant to Flint.
Xapian can access information stored in a number of different formats. Generally these are disk-based, but there's also the InMemory format, which is stored entirely in memory.
Each of these formats is implemented by a set of classes providing an interface to a Database object and several other related objects (PostList, TermList, etc.).
Quartz is simply the name of Xapian's first high-performance backend. The design of Quartz draws on all our past experience to satisfy the following criteria:
Different backends can be optionally compiled into the Xapian library (by specifying appropriate options to the configure script). Quartz is compiled by default.
Why do we call it Quartz - where does the name come from?
Well, we had to call it something, and Quartz was simply the first name we came up with which we thought we could live with...
These tables consist of a set of key-tag pairs, which I shall often refer to as items or entries. Items may be accessed randomly, by specifying a key and reading the item it points to, or in sorted order, by creating a cursor pointing to a particular item. The sort order is a lexicographical ordering based on the contents of the keys. Only one instance of a key may exist in a single table: inserting a second item with the same key as an existing item will overwrite the existing item.
Cursors may be positioned even when a full key isn't known, by attempting to access an item which doesn't exist: the cursor will then be set to point to the nearest item whose key is before the one requested.
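These table semantics (unique keys, byte-wise lexicographic ordering, overwrite on re-insert, and cursor positioning at the nearest preceding key) can be modelled with a sorted map. This is purely an illustrative sketch of the behaviour, not Quartz code:

```cpp
#include <cassert>
#include <map>
#include <string>

// Illustrative model of a Quartz table (NOT the real implementation):
// unique keys, lexicographic (byte-wise) ordering, insert-overwrites.
typedef std::map<std::string, std::string> Table;

// Model of cursor positioning: find the item with the given key, or
// failing that, the nearest item whose key sorts before it.
Table::const_iterator position_cursor(const Table &t, const std::string &key) {
    // upper_bound gives the first key strictly after 'key'; stepping
    // back yields 'key' itself if present, else the nearest predecessor.
    Table::const_iterator it = t.upper_bound(key);
    if (it == t.begin()) return t.end(); // nothing at or before 'key'
    return --it;
}
```

Positioning a cursor at, say, "avocado" in a table containing only "apple" and "banana" lands on "apple", the nearest preceding key.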
The Btree class defines the standard interface to a table. This has a subclass for each table - QuartzRecordTable, QuartzValueTable, QuartzPostListTable, QuartzPositionListTable, and QuartzTermListTable. Apart from QuartzPostListTable, these are fairly thin wrappers. QuartzPostListTable buffers the inverted changes internally to allow fast updating.
Changes are made to the Btree by calling add() and del(), but they will not be seen by readers until commit() is called. Alternatively, calling cancel() will abandon changes. This allows atomic transactions to be implemented.
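A minimal sketch of these transactional semantics: add() and del() are buffered, become visible only on commit(), and are discarded by cancel(). The real Btree class works at the block level, but the observable behaviour is similar to this toy model:

```cpp
#include <cassert>
#include <map>
#include <set>
#include <string>

// Toy model of the Btree update interface (illustration only).
class ToyTable {
    std::map<std::string, std::string> committed; // what readers see
    std::map<std::string, std::string> added;     // pending add() calls
    std::set<std::string> deleted;                // pending del() calls
public:
    void add(const std::string &key, const std::string &tag) {
        added[key] = tag;
        deleted.erase(key);
    }
    void del(const std::string &key) {
        added.erase(key);
        deleted.insert(key);
    }
    void commit() { // make all pending changes visible atomically
        for (std::set<std::string>::const_iterator d = deleted.begin();
             d != deleted.end(); ++d)
            committed.erase(*d);
        for (std::map<std::string, std::string>::const_iterator a = added.begin();
             a != added.end(); ++a)
            committed[a->first] = a->second;
        added.clear();
        deleted.clear();
    }
    void cancel() { added.clear(); deleted.clear(); } // abandon changes
    bool find(const std::string &key) const {
        return committed.find(key) != committed.end();
    }
};
```

Readers only ever see the `committed` state, so a sequence of updates either appears in full (after commit()) or not at all (after cancel()).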
The Btree class is optimised to be fast when changes are applied in sorted order. For most tables, this means indexing documents in docid order. QuartzPostListTable takes care of this as part of the inversion process.
There are five tables comprising a quartz database.
Key: lsb ... msb of the docid, until all remaining bytes are zero
The record table also holds a couple of special values, stored under the key consisting of a single zero byte (this isn't a valid encoded docid). The first value is the next document ID to use when adding a document (document IDs are allocated in increasing order, starting at 1, and are currently never reused). The other value is the total length of the documents in the database, which is used to calculate the average document length, which we need to calculate normalised document lengths.
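The docid key encoding above can be sketched as follows: the bytes of the docid are written least significant first, stopping once all remaining bytes are zero. This is why a single zero byte can never be a valid encoded docid, leaving it free for the special values. (A sketch of the scheme as described, not the actual Quartz code.)

```cpp
#include <cassert>
#include <string>

// Encode a document ID as a Quartz record-table key: least significant
// byte first, stopping once all remaining bytes would be zero.
std::string encode_docid_key(unsigned int did) {
    std::string key;
    while (did) {
        key += char(did & 0xff);
        did >>= 8;
    }
    return key;
}
```

For example, docid 1 encodes as the single byte 0x01, and docid 256 as the two bytes 0x00 0x01.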
Key: lsb ... msb of the docid, until all remaining bytes are zero
Currently, there is one B-tree entry for each document in the database that has one or more values associated with it. This entry consists of a list of value_no-s and values for that document.
An alternative implementation would be to store an item for each value, whose key is a combination of the document ID and the value_no, and whose tag is the value. Which implementation is better depends on the access pattern: if a document is being passed across a network link, all the values for a document are read - if a document is being dealt with locally, usually only some of the values will be read.
Documents will usually have very few values, so the current implementation may actually be the most suitable.
Key: lsb ... msb of the docid, until all remaining bytes are zero
The list first stores the document length, and the number of entries in the termlist (this latter value is stored for quick access - it could also be determined by running through the termlist). It then stores a set of entries: each entry in the list consists of a term (as a string), and the wdf (within document frequency - how many times the term appears in the document) of that term.
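As an illustration of the shape of a termlist tag (this is not the actual on-disk encoding, which packs integers compactly; here decimal fields separated by newlines are used purely for clarity):

```cpp
#include <cassert>
#include <string>
#include <utility>
#include <vector>

// Illustrative termlist serialisation: document length, entry count,
// then (term, wdf) pairs. NOT the real Quartz byte format.
std::string encode_termlist(unsigned doclen,
        const std::vector<std::pair<std::string, unsigned> > &entries) {
    std::string tag = std::to_string(doclen) + '\n' +
                      std::to_string(entries.size()) + '\n';
    for (size_t i = 0; i < entries.size(); ++i)
        tag += entries[i].first + '\n' +
               std::to_string(entries[i].second) + '\n';
    return tag;
}
```

Note that the entry count is redundant (it could be recovered by walking the list), but storing it up front allows it to be read without scanning, just as the document describes.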
In a non-modifiable database, the term frequency could be stored in the termlist for each entry in each list. This would enable query expansion operations to occur significantly faster by avoiding the need for a large number of extra lookups - however, this cannot be implemented in a writable database without causing any modifications to modify a very large proportion of the database.
Key: pack_uint(did) + tname
Key: pack_string_preserving_sort(tname) [first chunk]
Key: pack_string_preserving_sort(tname) + pack_uint_preserving_sort(first_did_in_chunk) [second and subsequent chunks]
The current implementation uses simple compression; we're investigating more effective schemes (FIXME: this is slightly out of date now).
To deal with this, we store posting lists in small chunks, each the right size to be stored in a single B-tree block, and hence to be accessed with a minimal amount of disk latency.
The key for the first chunk in a posting list is the term name of the term whose posting list it is. The key in subsequent chunks is the term name followed by the document ID of the first document in the chunk. This allows the cursor methods to be used to scan through the chunks in order, and also to jump to the chunk containing a particular document ID.
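For this cursor-based scanning to work, the document ID in the key must be encoded so that byte-wise comparison of keys matches numeric comparison of docids. One standard way to achieve this - a sketch of what pack_uint_preserving_sort needs to do, though the real encoding may differ in detail - is a length byte followed by the value's bytes, most significant first:

```cpp
#include <cassert>
#include <string>

// Encode an unsigned integer so that lexicographic (byte-wise) ordering
// of the encoded strings matches numeric ordering of the values.
// Sketch only - the real pack_uint_preserving_sort may differ in detail.
std::string pack_uint_preserving_sort_sketch(unsigned int v) {
    std::string bytes;
    while (v) { // collect significant bytes, most significant first
        bytes.insert(bytes.begin(), char(v & 0xff));
        v >>= 8;
    }
    // Prefix with the length: a longer encoding always means a larger
    // value, so the length byte makes shorter encodings sort first.
    return char(bytes.size()) + bytes;
}
```

Without the length prefix, 256 (encoded 0x01 0x00) would sort before 255 (encoded 0xff) under plain byte comparison; the prefix fixes this.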
It is quite possible that the termlists and position lists would benefit from being split into chunks in this way.
A B-tree is a fairly standard structure for storing this kind of data, so I will not describe it in detail - see a reference book on database design and algorithms for that. The essential points are that it is a block-based multiply branching tree structure, storing keys in the internal blocks and key-tag pairs in the leaf blocks.
Our implementation is fairly standard, except for its revision scheme, which allows modifications to be applied atomically whilst other processes are reading the database. This scheme involves copying each block in the tree which is involved in a modification, rather than modifying it in place, so that a complete new tree structure is built up whilst the old structure is unmodified (although this new structure will typically share a large number of blocks with the old structure). The modifications can then be atomically applied by writing the new root block and making it active.
After a modification is applied successfully, the old version of the table is still fully intact, and can be accessed. The old version only becomes invalid when a second modification is attempted (and it becomes invalid whether or not that second modification succeeds).
There is no need for a process which is writing the database to know whether any processes are reading previous versions of the database. As long as only one update is performed before the reader closes (or reopens) the database, no problem will occur. If more than one update occurs whilst the table is still open, the reader will notice that the database has been changed whilst it has been reading it by comparing a revision number stored at the start of each block with the revision number it was expecting. An appropriate action can then be taken (for example, to reopen the database and repeat the operation).
An alternative approach would be to obtain a read-lock on the revision being accessed. A write would then have to wait until no read-locks existed on the old revision before modifying the database.
The revisioning scheme described earlier comes to the rescue! By carefully making sure that we open all the tables at the same revision, and by ensuring that at least one such consistent revision always exists, we can extend the scope of atomicity to cover all the tables. In detail:
Some of the constants mentioned below depend upon a byte being 8 bits, but this assumption is not built into the code.
In the B-tree key-tag pairs are ordered, and the order is the ASCII collating order of the keys. Very precisely, if key1 and key2 point to keys with lengths key1_len, key2_len, key1 is before/equal/after key2 according as the following procedure returns a value less than, equal to or greater than 0,
    static int compare_keys(const byte * key1, int key1_len,
                            const byte * key2, int key2_len)
    {
        int smaller = key1_len < key2_len ? key1_len : key2_len;
        for (int i = 0; i < smaller; i++) {
            int diff = (int) key1[i] - key2[i];
            if (diff != 0) return diff;
        }
        return key1_len - key2_len;
    }
[This is okay, but none of the code fragments below have been checked.]
Any large-scale operation on the B-tree will run very much faster when the keys have been sorted into ASCII collating order. This fact is critical to the performance of the B-tree software.
A key-tag pair is called an 'item'. The B-tree consists therefore of a list of items, ordered by their keys:
I1 I2 ... Ij-1 Ij Ij+1 ... In-1 In
Item Ij has a 'previous' item, Ij-1, and a 'next' item, Ij+1.
When the B-tree is created, a single item is added with a null key and null tag. This is the 'null item'. The null item may be searched for, and it's possible, although perhaps not useful, to replace the tag part of the null item. But the null item cannot be deleted - an attempt to do so is merely ignored.
A key must not exceed 252 bytes in length.
A tag may have length zero. There is an upper limit on the length of a tag, but it is quite high. Roughly, the tag is divided into items of size L - kl, where L is a few bytes less than a quarter of the block size, and kl is the length of its key. You can then have 64K such items. So even with a block size as low as 2K and key length as large as 100, you could have a tag of 2.5 megabytes. More realistically, with a 16K block size, the upper limit on the tag size is about 256 megabytes.
The B-tree has a revision number, and each time it is updated, the revision number increases. In a single transaction on the B-tree, it is first opened, its revision number, R, is found, updates are made, and then the B-tree is closed with a supplied revision number. The supplied revision number will typically be R+1, but any R+k is possible, where k > 0.
If this sequence fails to complete for some reason, revision R+k of the B-tree will not, of course, be brought into existence. But revision R will still exist, and it is that version of the B-tree that will be the starting point for later revisions.
If this sequence runs to a successful termination, the new revision, R+k, supplants the old revision, R. But it is still possible to open the B-tree at revision R. After a successful revision of the B-tree, in fact, it will have two valid versions: the current one, revision R+k, and the old one, revision R.
You might want to go back to the old revision of a B-tree if it is being updated in tandem with a second B-tree, and the update on the second B-tree fails. Suppose B1 and B2 are two such B-trees. B1 is opened and its latest revision number is found to be R1. B2 is opened and its latest revision number is found to be R2. If R1 > R2, it must be the case that the previous transaction on B1 succeeded and the previous transaction on B2 failed. Then B1 needs to be opened at its previous revision number, which must be R2.
The calls using revision numbers described below are intended to handle this type of contingency.
When the B-tree is opened without any particular revision number being specified, the later of baseA and baseB is chosen as the opening base, and as soon as a write to the file DB occurs, the earlier of baseA or baseB is deleted. On closure, the new revision number is written to baseB if baseA was the opening base, and to baseA if baseB was the opening base. If the B-tree update fails for some reason, only one base will usually survive.
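The base-file alternation can be sketched as a tiny state machine. Assuming (for illustration) that each base file just records a revision number, opening picks the later one, and a successful closure writes the new revision to the other base, leaving the revision we opened from intact:

```cpp
#include <cassert>

// Toy model of the baseA/baseB alternation (illustration only - real
// base files hold more than a revision number, e.g. the block bitmap).
struct Bases {
    unsigned revA, revB;
    // Open at the later of the two bases.
    bool open_uses_A() const { return revA >= revB; }
    // On successful closure the new revision is written to the base we
    // did NOT open from.  (Called before any new write, so open_uses_A()
    // still reflects which base the open chose.)
    void commit(unsigned new_rev) {
        if (open_uses_A()) revB = new_rev; else revA = new_rev;
    }
};
```

Because each commit overwrites only the older base, a failed update can destroy at most one base, and the revision that was opened always survives.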
The bitmap stored in each base file will have bit n set if block n is in use in the corresponding revision of the B-tree.
void BtreeCheck::check(const string & name, int opts);
BtreeCheck::check(s, opts) is essentially equivalent to:

    Btree B(s, false);
    B.open();
    // do a complete integrity check of the B-tree,
    // reporting according to the bitmask opts

The options control what is reported; the entire B-tree is always checked regardless. The option bitmask may consist of any of the following values |-ed together:
- OPT_SHORT_TREE - short summary of entire B-tree
- OPT_FULL_TREE - full summary of entire B-tree
- OPT_SHOW_BITMAP - print the bitmap
- OPT_SHOW_STATS - print the basic information (revision number, blocksize etc.)
Let us say an item is 'new' if it is presented for addition to the B-tree and its key is not already in the B-tree. Then presenting a long run of new items ordered by key causes the B-tree updating process to switch into a mode where much higher compaction than 75% is achieved - about 90%. This is called 'sequential' mode. It is possible to force an even higher compaction rate with the procedure
    void Btree::full_compaction(bool parity);

So

    B.full_compaction(true);

switches full compaction on, and

    B.full_compaction(false);

switches it off. Full compaction may be switched on or off at any time, but it only affects the compaction rate of sequential mode. In sequential mode, full compaction gives around 98-99% block usage - it is not quite 100% because keys are not split across blocks.
The downside of full compaction is that block splitting will be heavy on the next update. However, if a B-tree is created with no intention of being updated, full compaction is very desirable.
To make a really fast structure for retrieval therefore, create a new B-tree, open it for updating, set full compaction mode, and add all the items in a single transaction, sorted on keys. After closing, do not update further.
Xapian includes a utility which performs this process on all the Btrees in a quartz database - it's called quartzcompact. You can refer to the source code of the quartzcompact utility to see how this is implemented.
Only the Btree structure is changed - all the keys and tags are unaltered, so the database is the same as far as an application using Xapian is concerned. In particular, all the document ids are the same.
This may change in the future with code redesign, but meanwhile note that a query of K terms, which needs k <= K cursors open at once to process, will demand 2*K*B bytes of memory in the B-tree manager.
It is possible to do retrieval while the B-tree is being updated. If the updating process overwrites a part of the B-tree required by the retrieval process, then a Xapian::DatabaseModifiedError exception is thrown.
This should be handled, and suitable action taken - either the operation aborted, or the Btree reopened at the latest revision and the operation retried. Here is a model scheme:
    static Btree * reopen(const char * s, Btree * B)
    {
        // Get the revision number.  This will return the correct value, even
        // when B->overwritten is detected during opening.
        uint4 revision = B->get_open_revision_number();
        while (true) {
            try {
                delete B;               /* close the B-tree */
                B = new Btree(s, true);
                B->open();              /* and reopen */
                break;
            } catch (const Xapian::DatabaseModifiedError &) {
            }
        }
        if (revision == B->get_open_revision_number()) {
            // The revision number ought to have gone up from last time,
            // so if we arrive here, something has gone badly wrong ...
            printf("Possible database corruption!\n");
            exit(1);
        }
        return B;
    }

    ....

    const char * s = "database/";
    Btree * B = 0;

    /* open the B-tree */
    while (true) {
        try {
            delete B;                   /* close the B-tree */
            B = new Btree(s, true);
            B->open();                  /* and reopen */
            break;
        } catch (const Xapian::DatabaseModifiedError &) {
        }
    }

    string t;
    while (true) {
        try {
            B->find_tag("brunel", &t);  /* look up some keyword */
            break;
        } catch (const Xapian::DatabaseModifiedError &) {
            B = reopen(s, B);
        }
    }
    ...

If the overwritten condition were detected in updating mode, this would mean that there were two updating processes at work, or that the database has become corrupted somehow. If this is detected, a Xapian::DatabaseCorruptError is thrown. There's not much which can usefully be done to automatically handle this condition.
In retrieval mode, the following can cause Xapian::DatabaseModifiedError to be thrown:
    Btree::open_to_read(name);
    Btree::open_to_read(name, revision);
    Bcursor::next();
    Bcursor::prev();
    Bcursor::find_key(const string &key);
    Bcursor::get_tag(string * tag);

The following can not:
    Bcursor::Bcursor(Btree * B);
    Bcursor::get_key(string * key);

Note particularly that opening the B-tree can cause it, but Bcursor::get_key(..) can't.