The Ingres Cluster Solution is a variation of a typical Ingres instance in which Ingres runs simultaneously on multiple host machines to provide cooperative and transparent access to one or more databases.
Ingres is installed in the typical manner, except that most file locations must be on cluster file systems, that is, file systems whose hardware and software allow safe simultaneous update access from all machines intended to be part of the cluster. Certain other locations must not be on cluster file systems if the file system supports only block-oriented data transfer (for example, Oracle Cluster File System). Once Ingres is installed, run the iimkcluster utility to convert the initial instance into the first cluster member (node), and then run the iisunode utility to add more nodes.
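As an illustration, a minimal conversion sequence might look like the following. The utility names are those described above; the assumption that the instance is stopped before conversion and restarted afterward is ours, and prompts and options vary by release.

   ingstop                 # assumption: stop the instance before converting it
   iimkcluster             # convert this instance into the first cluster node
   ingstart

   # On each additional machine that will join the cluster:
   iisunode                # add this machine as a new node
   ingstart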
Use of the Ingres Cluster Solution requires specialized hardware and third-party software.
The Ingres Cluster Solution is incompatible with the following Ingres features:
On each node, you should make sure that Ingres and your applications perform as expected. Some of the restrictions on lock level and lock mode are handled internally by Ingres, but may result in increased contention or deadlocks.
The Ingres Cluster Solution requires that you install Ingres enabled for Native POSIX Threads. To do so, you must be running a Linux distribution that provides Native POSIX Thread Library (NPTL) support. All Linux distributions based on the 2.6 kernel provide NPTL support, as does the Red Hat Enterprise Linux AS 3.0 2.4.21EL kernel.
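To verify NPTL support before installing, you can query the C library directly; getconf is a standard glibc facility, not an Ingres utility:

   getconf GNU_LIBPTHREAD_VERSION   # prints, for example, "NPTL 2.3.4" on an NPTL system
   uname -r                         # confirm a 2.6-based kernel (or the 2.4.21EL kernel)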
In addition, you must have the following:
All disk locations must be set up as cluster file systems on storage accessible from the same path from all members of the cluster.
Before installing Ingres in a Linux Cluster environment, follow these steps:
Follow these steps to install and configure Ingres in a Linux Cluster environment:
Typically, it is easier to resolve any configuration issues at this stage because only one machine is in use and, aside from the cluster file system support, Ingres depends only on standard operating system software.
iimkcluster
The utility prompts you for a node number and a nickname.
Node numbers are unique integers in the range 1 through the maximum supported cluster members for your platform (currently 15). During a partial cluster failure, the surviving cluster member (node) with the lowest node number is responsible for recovering transactions on behalf of the failed nodes, so you should assign low numbers to the more powerful machines in the cluster.
The nickname is an optional simple alias, which you can use in any context in which you could specify a nodename parameter. The nickname appears in the error log in lieu of the machine name.
The iimkcluster utility renames the transaction logs and certain diagnostic log files (iircp.log, iiacp.log, and so on) by appending the host name of the machine on which the cluster member is running. It also creates a subdirectory named after the host machine under the $II_SYSTEM/ingres/files/memory directory, and the directory $II_SYSTEM/ingres/admin/hostname, to which symbol.tbl is relocated.
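For example, on a host named nodea (a hypothetical name; the exact separator used in the renamed file names may vary by release), the per-node layout looks roughly like this:

   $II_SYSTEM/ingres/files/iircp.log_nodea      # renamed recovery process log
   $II_SYSTEM/ingres/files/iiacp.log_nodea      # renamed archiver process log
   $II_SYSTEM/ingres/files/memory/nodea/        # per-node shared memory files
   $II_SYSTEM/ingres/admin/nodea/symbol.tbl     # relocated per-node symbol table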
This step keeps entities that are normally operated upon by only one node separate from corresponding objects that will be created by the other nodes.
You should also perform application testing to confirm that certain Ingres Cluster Solution restrictions, such as lack of support for row-level locking, will not impact the usability of your applications.
iisunode
The utility prompts you for a unique node number and nickname. Once entered and confirmed, iisunode does the following:
Applications can access Ingres configured for Ingres Cluster Solution by using any of the following methods:
Ingres allows the creation of server classes that function as regular DBMS servers but can be configured for specialized situations. The server parameter class_node_affinity, if set for a server class, allows servers in this class to be started on only one node at a time.
The configuration name and server class name for the generated default CNA classes are iicnann, where nn is the zero-padded node number.
While iimkcluster and iisunode set up a separate CNA class for each node they are run on, these classes are not bound to the node they were defined on, but can be started on any node. In addition, any database (except iidbdb) that is connected to a server class using CNA cannot be opened by any other server class, including the default class.
The advantage of these restrictions is that all operations on the database are guaranteed to be performed on one node, with its pages resident in a single cache. Operations on the database therefore do not require distributed locks, and its pages do not need to participate in the Distributed Multi-Cache Management (DMCM) protocols. For an installation servicing multiple databases, this lets you increase efficiency by grouping your database operations by node, which significantly increases cache hit rates and reduces both the latency of lock resolution and the overhead associated with DMCM. In addition, Row Level Locking and Update Mode Locks are automatically supported for databases serviced by CNA classes, instead of being silently converted to coarser granularity locks and stronger lock modes.
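As a sketch, assuming node 1's generated class is iicna01 and a database named mydb (both illustrative), a CNA class might be inspected, started, and used as follows. The dbname/server_class connect syntax and the -iidbms flag are standard Ingres; the exact config.dat parameter path is an assumption.

   # Confirm node affinity is enabled for the class (parameter path assumed):
   iigetres "ii.$(uname -n).dbms.iicna01.class_node_affinity"

   # Start a DBMS server of that class on the current node:
   ingstart -iidbms=iicna01

   # Connect to the database through the CNA class:
   sql mydb/iicna01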
The Red Hat Global File System (GFS) performs poorly when doing direct I/O with small page sizes, as is required when Ingres uses 2 KB pages. (This problem was observed on GFS version 6.)
When creating databases with II_DATABASE on a GFS device, we recommend that you use the createdb -page_size option to specify a page size for your system catalogs of at least 4096 (4 KB). All tables residing in whole or in part on a GFS device should use page sizes of 4096 or higher. If you have work locations on GFS devices or want to avoid the need to explicitly specify a page size when creating a table, set the default page size (default_page_size) to 4096 or higher.
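For example (mydb and the orders table are illustrative; the config.dat parameter path is an assumption):

   # Create a database whose system catalogs use 4 KB pages:
   createdb -page_size=4096 mydb

   # Create a table with an explicit 4 KB page size via the terminal monitor:
   echo 'CREATE TABLE orders (id INTEGER, amount FLOAT) WITH page_size = 4096; \g' | sql mydb

   # Or raise the installation-wide default instead (parameter path assumed):
   iisetres "ii.$(uname -n).dbms.*.default_page_size" 4096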
If the transaction logs reside in whole or in part on a GFS device, block_size should be at least 4.
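A hedged sketch of inspecting and raising the log block size follows; the config.dat parameter path is an assumption, and changing it typically requires re-creating the transaction log (cbf can make the same change interactively):

   # Inspect the current transaction log block size (parameter path assumed):
   iigetres "ii.$(uname -n).rcp.log.block_size"

   # Raise it to 4 KB blocks:
   iisetres "ii.$(uname -n).rcp.log.block_size" 4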