The stock KahaDB persistence adapter works well when all of the destinations being managed by the broker have similar performance and reliability profiles. When one destination has a radically different performance profile, for example a consumer that is exceptionally slow compared to the consumers on other destinations, the message store's disk usage can grow rapidly. Similarly, when some destinations do not require disk synchronization and others do, all of the destinations must take the performance hit.
The distributed KahaDB persistence adapter allows you to distribute a broker's destinations across multiple KahaDB message stores. Using multiple message stores allows you to tailor each message store more precisely to the needs of the destinations using it. Destinations and stores are matched using filters that take standard wildcard syntax.
The distributed KahaDB persistence adapter configuration wraps more than one KahaDB message store configuration.
The distributed KahaDB persistence adapter configuration is specified using the mKahaDB element. The mKahaDB element has a single attribute, directory, that specifies the location where the adapter writes its data stores. This setting is the default value for the directory attribute of the embedded KahaDB message store instances. The individual message stores can override this default setting.
The mKahaDB element has a single child, filteredPersistenceAdapters. The filteredPersistenceAdapters element contains multiple filteredKahaDB elements that configure the KahaDB message stores that are used by the persistence adapter.
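For instance, a minimal sketch of this nesting (the AUDIT queue filter and the directory paths are hypothetical) might set a broker-wide default directory on the mKahaDB element and override it for one embedded store:
<persistenceAdapter>
  <mKahaDB directory="${activemq.base}/data/kahadb">
    <filteredPersistenceAdapters>
      <!-- hypothetical filter; this store overrides the default
           directory set on the mKahaDB element -->
      <filteredKahaDB queue="AUDIT.>">
        <persistenceAdapter>
          <kahaDB directory="${activemq.base}/data/kahadb-audit"/>
        </persistenceAdapter>
      </filteredKahaDB>
    </filteredPersistenceAdapters>
  </mKahaDB>
</persistenceAdapter>
Here the embedded store keeps its journal on its own path, while any store that does not set directory falls back to the mKahaDB default.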
Each filteredKahaDB element configures one KahaDB message store. The destinations matched to the message store are specified using attributes on the filteredKahaDB element:
queue—specifies the names of queues
topic—specifies the names of topics
The destinations can be specified either using explicit destination names or using wildcards. For information on using wildcards, see Filters. If no destinations are specified, the message store matches any destinations that are not matched by other filters.
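As a sketch (the ALERTS topic branch is a hypothetical name), a store dedicated to one branch of the topic hierarchy could be declared like this:
<!-- matches every topic under the hypothetical ALERTS branch -->
<filteredKahaDB topic="ALERTS.>">
  <persistenceAdapter>
    <kahaDB/>
  </persistenceAdapter>
</filteredKahaDB>
An explicit name such as topic="ALERTS.CRITICAL" would match that single topic in the same way.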
The KahaDB message store configured by a filteredKahaDB element is configured using the standard KahaDB persistence adapter configuration. It consists of a kahaDB element wrapped in a persistenceAdapter element. For details on configuring a KahaDB message store, see Configuring the KahaDB Message Store.
You can use wildcards to specify a group of destination names. This is useful for situations where your destinations are set up in federated hierarchies.
For example, imagine you are sending price messages from a stock exchange feed. You might name your destinations as follows:
PRICE.STOCK.NASDAQ.ORCL to publish Oracle Corporation's price on NASDAQ
PRICE.STOCK.NYSE.IBM to publish IBM's price on the New York Stock Exchange
You could use exact destination names to specify which message store will be used to persist message data, or you could use wildcards to define hierarchical pattern matches that pair the destinations with a message store.
Fuse Message Broker uses the following wildcards:
. separates names in a path
* matches any name in a path
> recursively matches any destination starting from this name
For example, using the names above, these filters are possible:
PRICE.>—any price for any product on any exchange
PRICE.STOCK.>—any price for a stock on any exchange
PRICE.STOCK.NASDAQ.*—any stock price on NASDAQ
PRICE.STOCK.*.IBM—any IBM stock price on any exchange
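As a minimal sketch using the feed names above (the journal settings are omitted for brevity), you could give NASDAQ and NYSE price topics separate stores and send everything else to a default store:
<persistenceAdapter>
  <mKahaDB directory="${activemq.base}/data/kahadb">
    <filteredPersistenceAdapters>
      <!-- all NASDAQ price topics -->
      <filteredKahaDB topic="PRICE.STOCK.NASDAQ.*">
        <persistenceAdapter>
          <kahaDB/>
        </persistenceAdapter>
      </filteredKahaDB>
      <!-- all NYSE price topics -->
      <filteredKahaDB topic="PRICE.STOCK.NYSE.*">
        <persistenceAdapter>
          <kahaDB/>
        </persistenceAdapter>
      </filteredKahaDB>
      <!-- catch-all for destinations the filters above do not match -->
      <filteredKahaDB>
        <persistenceAdapter>
          <kahaDB/>
        </persistenceAdapter>
      </filteredKahaDB>
    </filteredPersistenceAdapters>
  </mKahaDB>
</persistenceAdapter>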
Example 3.1 shows a distributed KahaDB persistence adapter that distributes destinations across two KahaDB message stores. The first message store is used for all queues managed by the broker. The second message store is used for all other destinations, which in this case means all topics.
Example 3.1. Distributed KahaDB Persistence Adapter Configuration
<persistenceAdapter>
  <mKahaDB directory="${activemq.base}/data/kahadb">
    <filteredPersistenceAdapters>
      <!-- match all queues -->
      <filteredKahaDB queue=">">
        <persistenceAdapter>
          <kahaDB journalMaxFileLength="32mb"/>
        </persistenceAdapter>
      </filteredKahaDB>
      <!-- match all destinations -->
      <filteredKahaDB>
        <persistenceAdapter>
          <kahaDB enableJournalDiskSyncs="false"/>
        </persistenceAdapter>
      </filteredKahaDB>
    </filteredPersistenceAdapters>
  </mKahaDB>
</persistenceAdapter>
Transactions can span multiple journals if the destinations are distributed, which means that two-phase completion is required. Two-phase completion incurs the performance penalty of an additional disk sync to record the commit outcome. If only one journal is involved in the transaction, the additional disk sync is not used and the performance penalty is not incurred.
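It follows that, where possible, it can pay to arrange the filters so that destinations that routinely participate in the same transaction map to a single store. As a hypothetical sketch, grouping all queues under an ORDERS branch into one store keeps transactions confined to that branch inside a single journal:
<!-- hypothetical grouping: transactions that touch only ORDERS
     queues involve a single journal, so no extra disk sync occurs -->
<filteredKahaDB queue="ORDERS.>">
  <persistenceAdapter>
    <kahaDB/>
  </persistenceAdapter>
</filteredKahaDB>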