Protocol optimizations can be made at several protocol layers, as described below.
In general, you can improve the performance of the TCP layer by increasing the following buffer sizes:
Socket buffer size—the default TCP socket buffer size is 64 KB. While this was adequate for the network speeds common when TCP was originally designed, it is sub-optimal for modern high-speed networks. The following rule of thumb can be used to estimate the optimal TCP socket buffer size:
Buffer Size = Bandwidth x Round-Trip-Time
Where Round-Trip-Time is the time between sending a TCP packet and receiving its acknowledgement (roughly the ping time). In practice, a good starting point is to double the socket buffer size to 128 KB. For example:
tcp://hostA:61617?socketBufferSize=131072
For more details, see the Wikipedia article on Network Improvement.
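As a quick sanity check, the rule of thumb above can be computed directly. The link bandwidth and round-trip time below are illustrative assumptions, not values from this guide:

```java
// A sketch of the Buffer Size = Bandwidth x Round-Trip-Time rule of thumb.
// The 1 Gb/s bandwidth and 40 ms RTT are assumed example values.
public class TcpBufferSize {
    // Returns the suggested socket buffer size in bytes for a link with the
    // given bandwidth (bits per second) and round-trip time (seconds).
    static long computeBufferSize(long bandwidthBitsPerSec, double rttSeconds) {
        return (long) (bandwidthBitsPerSec / 8 * rttSeconds);
    }

    public static void main(String[] args) {
        // Example: a 1 Gb/s link with a 40 ms ping time.
        long size = computeBufferSize(1_000_000_000L, 0.040);
        System.out.println(size + " bytes"); // 5000000 bytes
    }
}
```

On such a link the bandwidth-delay product is about 5 MB, far larger than the 64 KB default, which is why high-speed, high-latency links benefit most from larger buffers.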
I/O buffer size—the I/O buffer is used to buffer the data flowing between the TCP layer and the protocol that is layered above it (such as OpenWire). The default I/O buffer size is 8 KB and you could try doubling this size to achieve better performance. For example:
tcp://hostA:61617?ioBufferSize=16384
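If you tune both buffers, the two options can be combined on a single transport URI with an ampersand. A minimal sketch, reusing the hostA example host and the doubled values suggested above:

```java
// Builds a transport URI carrying both buffer options.
// hostA and the port come from the examples in this section; the doubled
// buffer values are suggested starting points, not required settings.
public class BrokerUri {
    static String withBufferOptions(String base, int socketBuf, int ioBuf) {
        return base + "?socketBufferSize=" + socketBuf + "&ioBufferSize=" + ioBuf;
    }

    public static void main(String[] args) {
        System.out.println(withBufferOptions("tcp://hostA:61617", 131072, 16384));
        // tcp://hostA:61617?socketBufferSize=131072&ioBufferSize=16384
    }
}
```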
The OpenWire protocol exposes several options that can affect performance, as shown in Table 1.1.
Table 1.1. OpenWire Parameters Affecting Performance
Parameter | Default | Description |
---|---|---|
cacheEnabled | true | Specifies whether to cache commonly repeated values, in order to optimize marshaling. |
cacheSize | 1024 | The number of values to cache. Increase this value to improve marshaling performance. |
tcpNoDelayEnabled | false | When true, disables Nagle's algorithm. Nagle's algorithm was devised to avoid sending tiny TCP packets containing only one or two bytes of data; for example, when TCP is used with the Telnet protocol. If you disable Nagle's algorithm, packets are sent more promptly, but there is a risk that the number of very small packets will increase. |
tightEncodingEnabled | true | When true, uses a more compact encoding of basic data types. This results in smaller messages and better network performance, but at the cost of more CPU time. A trade-off is therefore required: you need to determine whether the network or the CPU is the main factor limiting performance. |
To set any of these options on an Apache Camel URI, you must add the wireFormat. prefix. For example, to double the size of the OpenWire cache, specify the cache size on a URI as follows:
tcp://hostA:61617?wireFormat.cacheSize=2048
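Several wireFormat. options can be combined on one URI in the same way. A sketch, reusing the hostA example host; the particular option values chosen here (a doubled cache, tight encoding turned off for a CPU-bound broker) are illustrative, not recommendations:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

// Builds a transport URI from a map of OpenWire options,
// prefixing each key with "wireFormat." as the text above requires.
public class WireFormatUri {
    static String withWireFormatOptions(String base, Map<String, String> opts) {
        String query = opts.entrySet().stream()
                .map(e -> "wireFormat." + e.getKey() + "=" + e.getValue())
                .collect(Collectors.joining("&"));
        return base + "?" + query;
    }

    public static void main(String[] args) {
        Map<String, String> opts = new LinkedHashMap<>();
        opts.put("cacheSize", "2048");
        // Example trade-off from Table 1.1: disable tight encoding
        // when the CPU, not the network, limits performance.
        opts.put("tightEncodingEnabled", "false");
        System.out.println(withWireFormatOptions("tcp://hostA:61617", opts));
        // tcp://hostA:61617?wireFormat.cacheSize=2048&wireFormat.tightEncodingEnabled=false
    }
}
```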
If your application sends large messages and you know that your network is slow, it might be worthwhile to enable compression on your connections. When compression is enabled, the body of each JMS message (but not the headers) is compressed before it is sent across the wire. This results in smaller messages and better network performance. On the other hand, it has the disadvantage of being CPU intensive.
To enable compression, enable the useCompression option on the ActiveMQConnectionFactory class. For example, to initialize a JMS connection with compression enabled in a Java client, insert the following code:
// Java
import javax.jms.Connection;
import org.apache.activemq.ActiveMQConnectionFactory;
...
// Create the connection factory and enable compression.
ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory(user, password, url);
connectionFactory.setUseCompression(true);

// Create and start the connection.
Connection connection = connectionFactory.createConnection();
connection.start();
Alternatively, you can enable compression by setting the jms.useCompression option on a producer URI. For example:
tcp://hostA:61617?jms.useCompression=true