Linux Kernel 3.7.1
netio_input_config_t Struct Reference

An object for specifying the characteristics of a NetIO communication endpoint. More...

#include <netio_intf.h>

Data Fields

int flags
 
const char * interface
 
int num_receive_packets
 
unsigned int queue_id
 
int num_send_buffers_small_total
 
int num_send_buffers_small_prealloc
 
int num_send_buffers_large_total
 
int num_send_buffers_large_prealloc
 
int num_send_buffers_jumbo_total
 
int num_send_buffers_jumbo_prealloc
 
uint64_t total_buffer_size
 
uint8_t buffer_node_weights [NETIO_NUM_NODE_WEIGHTS]
 
void * fixed_buffer_va
 
int num_sends_outstanding
 

Detailed Description

An object for specifying the characteristics of a NetIO communication endpoint.

The netio_input_register() function uses this structure to define how an application tile will communicate with an IPP.

Future updates to NetIO may add new members to this structure, which can affect the success of the registration operation. Thus, if dynamically initializing the structure, applications are urged to zero it out first, for example:
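
(A minimal sketch; the variable name is illustrative, and memset() requires <string.h>.)

netio_input_config_t config;

memset(&config, 0, sizeof(config));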

since that guarantees that any unused structure members, including members which did not exist when the application was first developed, will not have unexpected values.

If statically initializing the structure, we strongly recommend use of C99-style named initializers, for example:

netio_input_config_t config = {
    .flags = NETIO_RECV,            /* illustrative flag value */
    .num_receive_packets = NETIO_MAX_RECEIVE_PKTS,
    .queue_id = 0,
};

instead of the old-style structure initialization:

// Bad example! Currently equivalent to the above, but don't do this.
netio_input_config_t config = {
    NETIO_RECV, NETIO_MAX_RECEIVE_PKTS, 0   /* positional values; reconstructed for illustration */
};

since the C99 style requires no changes to the code if elements of the config structure are rearranged. (It also makes the initialization much easier to understand.)

Except for items which address a particular tile's transmit or receive characteristics, such as the NETIO_RECV flag, applications are advised to specify the same set of configuration data on all registrations. This prevents differing results if multiple tiles happen to do their registration operations in a different order on different invocations of the application. This is particularly important for things like link management flags, and buffer size and homing specifications.

Unless the NETIO_FIXED_BUFFER_VA flag is specified in flags, the NetIO buffer pool is automatically created and mapped into the application's virtual address space at an address chosen by the operating system, using the common memory (cmem) facility in the Tilera Multicore Components library. The cmem facility allows multiple processes to gain access to shared memory which is mapped into each process at an identical virtual address. In order for this to work, the processes must have a common ancestor, which must create the common memory using tmc_cmem_init().

In programs using the iLib process creation API, or in programs which use only one process (which include programs using the pthreads library), tmc_cmem_init() is called automatically. All other applications must call it explicitly, before any child processes which might call netio_input_register() are created.
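
A rough sketch of that ordering, under stated assumptions (the <tmc/cmem.h> header path, the zero argument to tmc_cmem_init(), and the exact netio_input_register() calling convention shown here are assumptions; consult UG227 and netio_intf.h for the real interfaces):

#include <string.h>
#include <netio_intf.h>
#include <tmc/cmem.h>               /* header path assumed; see UG227 */

/* In the common ancestor, before any child processes are created: */
if (tmc_cmem_init(0) != 0)          /* argument and error convention assumed */
    /* handle the error */;

/* ... create worker processes, e.g. with fork() ... */

/* In each child that will talk to the IPP: */
netio_queue_t queue;
netio_input_config_t config;
memset(&config, 0, sizeof(config)); /* zero first, as recommended above */
config.flags = NETIO_RECV;
config.interface = "xgbe/0";
config.num_receive_packets = NETIO_MAX_RECEIVE_PKTS;
netio_input_register(&config, &queue);   /* error checking omitted */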

Definition at line 2184 of file netio_intf.h.

Field Documentation

uint8_t buffer_node_weights[NETIO_NUM_NODE_WEIGHTS]

Buffer placement weighting factors.

This array specifies the relative amount of buffering to place on each of the available Linux NUMA nodes. This array is indexed by the NUMA node, and the values in the array are proportional to the amount of buffer space to allocate on that node.

If memory striping is enabled in the Hypervisor, then there is only one logical NUMA node (node 0). In that case, NetIO will by default ignore the suggested buffer node weights, and buffers will be striped across the physical memory controllers. See UG209 System Programmer's Guide for a description of the hypervisor option that controls memory striping.

If memory striping is disabled, then there are up to four NUMA nodes, corresponding to the four DDRAM controllers in the TILE processor architecture. See UG100 Tile Processor Architecture Overview for a diagram showing the location of each of the DDRAM controllers relative to the tile array.

For instance, if memory striping is disabled, the following configuration structure:

netio_input_config_t config = {
    /* ... other members ... */
    .total_buffer_size = 4 * 16 * 1024 * 1024,
    .buffer_node_weights = { 1, 0, 1, 0 },
};

would result in 32 MB of buffers being placed on controller 0, and 32 MB on controller 2. (Since buffers are allocated in units of 16 MB, some sets of weights cannot be matched exactly.)

For the weights to be effective, total_buffer_size must be nonzero. If total_buffer_size is zero, causing the default 32 MB of buffer space to be used, then any specified weights will be ignored, and buffers will be positioned as they were in previous versions of NetIO:

  • For xgbe/0 and gbe/0, 16 MB of buffers will be placed on controller 1, and the other 16 MB will be placed on controller 2.
  • For xgbe/1 and gbe/1, 16 MB of buffers will be placed on controller 2, and the other 16 MB will be placed on controller 3.

If total_buffer_size is nonzero, but all weights are zero, then all buffer space will be allocated on Linux NUMA node zero.

By default, the specified buffer placement is treated as a hint; if sufficient free memory is not available on the specified controllers, the buffers will be allocated elsewhere. However, if the NETIO_STRICT_HOMING flag is specified in flags, then a failure to allocate buffer space exactly as requested will cause the registration operation to fail with an error of NETIO_CANNOT_HOME.

Note that maximal network performance cannot be achieved with only one memory controller.

Definition at line 2406 of file netio_intf.h.

void* fixed_buffer_va

Fixed virtual address for packet buffers. Only valid when NETIO_FIXED_BUFFER_VA is specified in flags; see the description of that flag for details.

Definition at line 2412 of file netio_intf.h.

int flags

Registration characteristics.

This value determines several characteristics of the registration; flags for different types of behavior are ORed together to make the final flag value. Generally applications should specify exactly one flag from each of the following categories:

  • Whether the application will be transmitting packets on this queue, and if so, whether it will request egress checksum calculation (NETIO_XMIT, NETIO_XMIT_CSUM, or NETIO_NO_XMIT). It is legal to call netio_get_buffer() without one of the XMIT flags, as long as NETIO_RECV is specified; in this case, the retrieved buffers must be passed to another tile for transmission.

To accommodate applications written to previous versions of the NetIO interface, none of the flags above are currently required; if omitted, NetIO behaves more or less as if NETIO_RECV | NETIO_XMIT_CSUM | NETIO_TAG_NONE were used. However, explicit specification of the relevant flags allows NetIO to do a better job of resource allocation, allows earlier detection of certain configuration errors, and may enable advanced features or higher performance in the future, so their use is strongly recommended.

Note that specifying NETIO_NO_RECV along with NETIO_NO_XMIT is a special case, intended primarily for use by programs which retrieve network statistics or do link management operations. When these flags are both specified, the resulting queue may not be used with NetIO routines other than netio_get(), netio_set(), and netio_input_unregister(). See link for more information on link management.

Other flags are optional; their use is described below.
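
For instance, a sketch that makes the current default behavior explicit (the non-flag member values are illustrative):

netio_input_config_t config = {
    .flags = NETIO_RECV | NETIO_XMIT_CSUM | NETIO_TAG_NONE,
    .interface = "xgbe/0",
    .num_receive_packets = NETIO_MAX_RECEIVE_PKTS,
    .queue_id = 0,
};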

Definition at line 2227 of file netio_intf.h.

const char* interface

Interface name. This is a string which identifies the specific Ethernet controller hardware to be used. The format of the string is a device type and a device index, separated by a slash; so, the first 10 Gigabit Ethernet controller is named "xgbe/0", while the second 10/100/1000 Megabit Ethernet controller is named "gbe/1".
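
For example, a one-line sketch of selecting the second Gigabit Ethernet controller:

config.interface = "gbe/1";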

Definition at line 2235 of file netio_intf.h.

int num_receive_packets

Receive packet queue size. This specifies the maximum number of ingress packets that can be received on this queue without being retrieved by netio_get_packet(). If the IPP's distribution algorithm calls for a packet to be sent to this queue, and this many packets are already pending there, the new packet will either be discarded, or sent to another tile registered for the same queue_id (see drops). This value must be at least NETIO_MIN_RECEIVE_PKTS; values up to NETIO_MAX_RECEIVE_PKTS are always supported, and larger values may be supported on certain interfaces.

Definition at line 2248 of file netio_intf.h.

int num_send_buffers_jumbo_prealloc

Number of jumbo send buffers to be preallocated at registration. If this value is nonzero, the specified number of empty jumbo egress buffers will be requested from the IPP during the netio_input_register operation; this may speed the execution of netio_get_buffer(). This may be no larger than num_send_buffers_jumbo_total. See epp for more details on empty buffer caching.

Definition at line 2316 of file netio_intf.h.

int num_send_buffers_jumbo_total

Maximum number of jumbo send buffers to be held in the local empty buffer cache. This specifies the size of the area which holds empty jumbo egress buffers requested from the IPP but not yet retrieved via netio_get_buffer(). This value must be greater than zero if the application will ever use netio_get_buffer() to allocate empty jumbo egress buffers; it may be no larger than NETIO_MAX_SEND_BUFFERS. See epp for more details on empty buffer caching.

Definition at line 2307 of file netio_intf.h.

int num_send_buffers_large_prealloc

Number of large send buffers to be preallocated at registration. If this value is nonzero, the specified number of empty large egress buffers will be requested from the IPP during the netio_input_register operation; this may speed the execution of netio_get_buffer(). This may be no larger than num_send_buffers_large_total. See epp for more details on empty buffer caching.

Definition at line 2297 of file netio_intf.h.

int num_send_buffers_large_total

Maximum number of large send buffers to be held in the local empty buffer cache. This specifies the size of the area which holds empty large egress buffers requested from the IPP but not yet retrieved via netio_get_buffer(). This value must be greater than zero if the application will ever use netio_get_buffer() to allocate empty large egress buffers; it may be no larger than NETIO_MAX_SEND_BUFFERS. See epp for more details on empty buffer caching.

Definition at line 2288 of file netio_intf.h.

int num_send_buffers_small_prealloc

Number of small send buffers to be preallocated at registration. If this value is nonzero, the specified number of empty small egress buffers will be requested from the IPP during the netio_input_register operation; this may speed the execution of netio_get_buffer(). This may be no larger than num_send_buffers_small_total. See epp for more details on empty buffer caching.

Definition at line 2278 of file netio_intf.h.

int num_send_buffers_small_total

Maximum number of small send buffers to be held in the local empty buffer cache. This specifies the size of the area which holds empty small egress buffers requested from the IPP but not yet retrieved via netio_get_buffer(). This value must be greater than zero if the application will ever use netio_get_buffer() to allocate empty small egress buffers; it may be no larger than NETIO_MAX_SEND_BUFFERS. See epp for more details on empty buffer caching.
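
A sketch of a consistent pair of settings (the numbers are illustrative; the constraints above are what matter):

config.num_send_buffers_small_total = 16;      /* illustrative; must not exceed NETIO_MAX_SEND_BUFFERS */
config.num_send_buffers_small_prealloc = 8;    /* illustrative; must not exceed num_send_buffers_small_total */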

Definition at line 2269 of file netio_intf.h.

int num_sends_outstanding

Maximum number of outstanding send packet requests. This value is only relevant when an EPP is in use; it determines the number of slots in the EPP's outgoing packet queue which this tile is allowed to consume, and thus the number of packets which may be sent before the sending tile must wait for an acknowledgment from the EPP. Modifying this value is generally only helpful when using netio_send_packet_vector(), where it can help improve performance by allowing a single vector send operation to process more packets. Typically it is not specified, and the default, which divides the outgoing packet slots evenly between all tiles on the chip, is used.

If a registration asks for more outgoing packet queue slots than are available, NETIO_TOOMANY_XMIT will be returned. The total number of packet queue slots which are available for all tiles for each EPP is subject to change, but is currently NETIO_TOTAL_SENDS_OUTSTANDING.

This value is ignored if NETIO_XMIT is not specified in flags. If you want to specify a large value here for a specific tile, you are advised to specify NETIO_NO_XMIT on other, non-transmitting tiles so that they do not consume a default number of packet slots. Any tile transmitting is required to have at least NETIO_MIN_SENDS_OUTSTANDING slots allocated to it; values less than that will be silently increased by the NetIO library.
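
A hedged sketch of that division of labor (the value 1024 and the interface name are illustrative):

/* Transmitting tile: request a larger share of the EPP's outgoing packet slots. */
netio_input_config_t tx_config = {
    .flags = NETIO_RECV | NETIO_XMIT_CSUM,
    .interface = "xgbe/0",
    .num_receive_packets = NETIO_MAX_RECEIVE_PKTS,
    .num_sends_outstanding = 1024,   /* illustrative; registration fails with NETIO_TOOMANY_XMIT if too large */
};

/* Non-transmitting tiles: NETIO_NO_XMIT, so they consume no default send slots. */
netio_input_config_t rx_config = {
    .flags = NETIO_RECV | NETIO_NO_XMIT,
    .interface = "xgbe/0",
    .num_receive_packets = NETIO_MAX_RECEIVE_PKTS,
};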

Definition at line 2440 of file netio_intf.h.

unsigned int queue_id

The queue ID being requested. Legal values for this range from 0 to NETIO_MAX_QUEUE_ID, inclusive. NETIO_MAX_QUEUE_ID is always greater than or equal to the number of tiles; this allows one queue for each tile, plus at least one additional queue. Some applications may wish to use the additional queue as a destination for unwanted packets, since packets delivered to queues for which no tiles have registered are discarded.

Definition at line 2258 of file netio_intf.h.

uint64_t total_buffer_size

Total packet buffer size. This determines the total size, in bytes, of the NetIO buffer pool. Note that the maximum number of available buffers of each size is determined during hypervisor configuration (see the System Programmer's Guide for details); this just influences how much host memory is allocated for those buffers.

The buffer pool is allocated from common memory, which will be automatically initialized if needed. If your buffer pool is larger than 240 MB, you might need to explicitly call tmc_cmem_init(), as described in the Application Libraries Reference Manual (UG227).

Packet buffers are currently allocated in chunks of 16 MB; this value will be rounded up to the next larger multiple of 16 MB. If this value is zero, a default of 32 MB will be used; this was the value used by previous versions of NetIO. Note that taking this default also affects the placement of buffers on Linux NUMA nodes. See buffer_node_weights for an explanation of buffer placement.
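
A small illustration of the rounding (the requested size is illustrative):

config.total_buffer_size = 40 * 1024 * 1024;   /* rounded up to 48 MB, i.e. three 16 MB chunks */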

In order to successfully allocate packet buffers, Linux must have available huge pages on the relevant Linux NUMA nodes. See the System Programmer's Guide for information on configuring huge page support in Linux.

Definition at line 2341 of file netio_intf.h.


The documentation for this struct was generated from the following file:

  • netio_intf.h