This topic describes how to implement the DUsbClientController interface to provide a platform-specific layer implementation for the USB client controller.
The platform-specific layer only contains functionality that cannot be abstracted and made generic because different USB device controller designs operate differently. For example, the internal (and therefore external) management of the endpoint FIFOs can be different. In some USB device controller designs, the endpoints have hardwired FIFOs of specific sizes, which may possibly be configurable, whereas other designs have a defined maximum amount of FIFO RAM available that has to be shared by all endpoints in use in a given configuration. The way that the chip is programmed can also differ. Some designs have a single register into which all commands and their parameters are written, whereas others are programmed via a number of special purpose control registers.
Everything else, which must be common because it is defined in the USB specification, is contained in the platform-independent layer.
The operation of the USB device controller is hardware specific and the implementor of the platform-specific layer is free to do whatever is necessary to use the hardware. All of this is transparent to the platform-independent layer, which uses only the fixed set of pure virtual functions defined in DUsbClientController to communicate with the platform-specific layer, and therefore with the hardware.
The platform-specific layer is also responsible for managing the transfer of data over the endpoints, and it is important that it can provide the services expected by the platform-independent layer. Data transfers are normally set up by the platform-independent layer by calling one of the following member functions of DUsbClientController:
virtual TInt SetupEndpointRead(TInt aRealEndpoint, TUsbcRequestCallback& aCallback) = 0;
virtual TInt SetupEndpointWrite(TInt aRealEndpoint, TUsbcRequestCallback& aCallback) = 0;
virtual TInt SetupEndpointZeroRead() = 0;
virtual TInt SetupEndpointZeroWrite(const TUint8* aBuffer, TInt aLength, TBool aZlpReqd=EFalse) = 0;
virtual TInt SendEp0ZeroByteStatusPacket() = 0;
which the platform-specific layer implements.
These data transfer functions fall into two groups: one for handling endpoint-0 and another for handling general endpoints. Endpoint-0 is handled differently from general endpoints throughout the USB stack. The functions for handling general endpoints are used to transfer user data. In addition to taking an endpoint number, these functions also take a reference to a TUsbcRequestCallback object. This is a request callback, and it is key to the data transfer mechanism.
The platform-independent layer issues a request to read data by calling DUsbClientController::SetupEndpointRead(), which is implemented by the platform-specific layer, passing it the endpoint and a TUsbcRequestCallback object.
Data is read into a large buffer, whose address and length are passed as data members of the TUsbcRequestCallback object: TUsbcRequestCallback::iBufferStart and TUsbcRequestCallback::iLength respectively. For Bulk reads, this buffer is intended to catch a burst of data transmitted from the host rather than just a single packet.
For all other transfer types (Control, Interrupt, Isochronous) it is usual to return only single packets to the LDD. The amount of data returned is controlled, and limited, by the USB client driver (the LDD) through the iLength value.
The TUsbcRequestCallback object also supplies a pair of arrays:
TUsbcRequestCallback::iPacketIndex containing the offset(s) into the data buffer of the single packet or burst data.
TUsbcRequestCallback::iPacketSize containing the size(s) of the packet or burst data.
These arrays are logically linked; both are the same size and can hold information for two entries each.
Because the arrays can hold only two entries, received single packets have to be merged into one 'superpacket' in the read buffer. It is assumed that these merged packets consist of maximum packet sized packets, possibly terminated by a short packet. A zero length packet (ZLP) must appear separately; this would be the optional second packet in the respective array.
For example, for a Bulk endpoint with a maximum packet size of 64 bytes:
If 10 x 64 byte packets and one 10 byte packet arrive, then these are marked as a single large 650 byte packet.
If 10 x 64 byte packets and one ZLP arrive, then these should be entered as two packets in the arrays; one of size 640 bytes and one of size zero.
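The bookkeeping for the two examples above can be sketched as follows. This is a minimal illustration, not the real platform-specific layer: the struct mimics the two-entry iPacketIndex/iPacketSize arrays of TUsbcRequestCallback, and the function name and parameters are invented for the example.

```cpp
#include <cassert>

// Stand-ins for the Symbian kernel types, so the sketch is self-contained.
typedef int TInt;
typedef unsigned int TUint;

const TUint KMaxPacketSize = 64;   // example Bulk endpoint max packet size

// Mimics the two-entry packet arrays of TUsbcRequestCallback.
struct TPacketArrays
    {
    TUint iPacketIndex[2];   // offsets into the read buffer
    TUint iPacketSize[2];    // sizes of superpacket and optional ZLP
    TInt  iRxPackets;        // number of entries in use: 1 or 2
    };

// Record a burst: aFullPackets max-sized packets, optionally followed by a
// short packet of aShortSize bytes, optionally terminated by a ZLP.
void RecordBurst(TInt aFullPackets, TUint aShortSize, bool aZlp,
                 TPacketArrays& aOut)
    {
    TUint total = aFullPackets * KMaxPacketSize + aShortSize;
    aOut.iPacketIndex[0] = 0;       // superpacket starts at buffer offset 0
    aOut.iPacketSize[0] = total;    // data packets merged into one entry
    aOut.iRxPackets = 1;
    if (aZlp)
        {
        // A ZLP must appear as its own, separate second entry.
        aOut.iPacketIndex[1] = total;
        aOut.iPacketSize[1] = 0;
        aOut.iRxPackets = 2;
        }
    }
```

With this, 10 x 64 byte packets plus a 10 byte packet become one 650 byte entry, while 10 x 64 byte packets plus a ZLP become a 640 byte entry and a separate zero-sized entry.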
The general aim when servicing a Bulk read request from the platform-independent layer is to capture as much data as possible whilst not holding onto data too long before returning the data to the LDD and receiving another buffer. There is considerable flexibility in how this is achieved and the design does not mandate any particular method; it also depends to a certain extent on the USB device controller hardware used.
The platform implementation can use a re-startable timer to check whether any data has been received since the timer was last started (the period is usually a few milliseconds). If data has been received, then the timer is restarted. If no data has been received, then the timer is not restarted, and the buffer is marked as complete to the platform-independent layer by calling DUsbClientController::EndpointRequestComplete().
Note the following:
In the interrupt service routine (ISR), the flag iRxMoreDataRcvd is used to indicate that data has been received.
The timer is not restarted in the ISR - the timer is allowed to expire and then a test is made for received data.
Typical values for the timer range from 1 to 5 ms.
Each OUT endpoint requires a separate timer.
The read is not flagged as complete to the platform-independent layer if no data has been received. The timer is only started once the first packet/transfer has been received.
The bare minimum of work is done in the ISR. After draining the FIFO, update the iPacketSize and iPacketIndex arrays and recalculate the buffer address ready for the next packet, taking into account any alignment restrictions.
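The timer-expiry decision described in the notes above can be sketched like this. The iRxMoreDataRcvd flag follows the text; the surrounding structure and function names are hypothetical, and the real implementation would re-queue a kernel timer and call EndpointRequestComplete() where the comments indicate.

```cpp
#include <cassert>

// Per-OUT-endpoint receive state; a simplified stand-in for what the
// platform-specific layer would actually keep.
struct TEndpointRxState
    {
    bool iRxMoreDataRcvd;    // set in the ISR whenever a packet arrives
    bool iTimerRunning;      // true while the re-startable timer is queued
    bool iCompletePending;   // buffer should be completed to the PIL
    };

// Called from the timer callback DFC when the timer expires.
void TimerExpired(TEndpointRxState& aState)
    {
    if (aState.iRxMoreDataRcvd)
        {
        // Data arrived since the timer was started: clear the flag and
        // restart the timer (the real code would re-queue it here).
        aState.iRxMoreDataRcvd = false;
        aState.iTimerRunning = true;
        aState.iCompletePending = false;
        }
    else
        {
        // No new data: do not restart the timer; mark the buffer complete
        // (the real code would call EndpointRequestComplete() here).
        aState.iTimerRunning = false;
        aState.iCompletePending = true;
        }
    }
```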
The platform-specific layer completes a non endpoint-0 read request by calling the platform-independent layer function DUsbClientController::EndpointRequestComplete(). This function takes as its sole argument a pointer to the updated request callback structure that was initially passed to the platform-specific layer for that read. Members that need to be updated, in addition to the packet arrays TUsbcRequestCallback::iPacketIndex and TUsbcRequestCallback::iPacketSize, are:
TUsbcRequestCallback::iRxPackets, with possible values: 1 or 2.
Summary of Completion Criteria for general Endpoints
The platform-specific layer completes a read request by calling DUsbClientController::EndpointRequestComplete() when any of the following conditions are met:
The requested number of bytes has been received.
A short packet, including a ZLP, is received. In the case of a ZLP being received, it must be represented separately in the TUsbcRequestCallback::iPacketIndex and TUsbcRequestCallback::iPacketSize arrays.
The number of bytes in the current packet (still in the FIFO) would cause the total number of bytes received to exceed the buffer length TUsbcRequestCallback::iLength.
The timer has expired, data is available, but no further packets have been received.
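The four completion criteria above can be condensed into a single predicate. This is only a summary sketch; the parameter names are illustrative and the real decision is spread across the ISR and the timer DFC rather than made in one function.

```cpp
#include <cassert>

typedef unsigned int TUint;

// Returns true if a general-endpoint read request should now be completed
// to the platform-independent layer via EndpointRequestComplete().
bool ReadCompletes(TUint aBytesSoFar,          // bytes already in the buffer
                   TUint aRequestedLen,        // TUsbcRequestCallback::iLength
                   TUint aNextPacketSize,      // size of the packet in the FIFO
                   TUint aMaxPacketSize,       // endpoint max packet size
                   bool aTimerExpiredNoNewData)
    {
    if (aBytesSoFar >= aRequestedLen)
        return true;   // requested number of bytes received
    if (aNextPacketSize < aMaxPacketSize)
        return true;   // short packet (including a ZLP) received
    if (aBytesSoFar + aNextPacketSize > aRequestedLen)
        return true;   // next packet would overflow the buffer
    // Timer expired with data available but no further packets arriving.
    return aTimerExpiredNoNewData && aBytesSoFar > 0;
    }
```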
The handling of endpoint-0 read requests is similar to general endpoints except for the following:
The platform-independent layer issues the request by calling DUsbClientController::SetupEndpointZeroRead(). This function does not take any parameters because:
The endpoint number is known; it is always zero.
The request is completed after every received packet, and a TUsbcRequestCallback object is not needed.
All the platform-specific layer needs to know is where to place the received data, and its size. Both are fixed values: DUsbClientController::iEp0_RxBuf and KUsbcBufSzControl respectively.
An endpoint-0 read request is completed by calling DUsbClientController::Ep0RequestComplete(), passing the endpoint number (0, or symbolically, KEp0_Out), the number of bytes received, and the resulting error code for this request.
The platform-independent layer issues a request to write data by calling DUsbClientController::SetupEndpointWrite(), which is implemented by the platform-specific layer, passing the function an endpoint and a TUsbcRequestCallback object.
The address of the buffer, containing the data to be written, is passed into the data member TUsbcRequestCallback::iBufferStart. The length of this buffer is passed into TUsbcRequestCallback::iLength. The buffer is a single contiguous piece of memory.
The platform-specific layer's implementation needs to set up the transfer, either using DMA or a conventional interrupt driven mechanism, and then wait for the host to collect, by sending an IN token, the data from the respective endpoint’s primed FIFO. This continues until all data from the buffer has been transmitted to the host.
If the ZLP request flag iZlpReqd in the TUsbcRequestCallback structure is set (=ETrue), then the platform-specific layer must determine, after all data has been sent, whether to send a ZLP or not. The decision is based on the size of the last packet sent and the current maximum packet size of the endpoint.
To summarise, a ZLP should be sent at the end of a write request if:
The ZLP flag is set.
The last packet of the write request was not a short packet (i.e. it was a max-packet-sized packet).
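The two conditions above reduce to a one-line test. A minimal sketch; the function name and parameters are invented for illustration:

```cpp
#include <cassert>

typedef unsigned int TUint;

// Decide whether a ZLP must follow the last data packet of a write request:
// only if the caller asked for one (iZlpReqd) and the last packet was
// max-packet-sized, i.e. the transfer would not otherwise end in a short
// packet that tells the host the transfer is over.
bool NeedZlp(bool aZlpReqd, TUint aLastPacketSize, TUint aMaxPacketSize)
    {
    return aZlpReqd && (aLastPacketSize == aMaxPacketSize);
    }
```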
The platform-specific layer completes a non endpoint-0 write request by calling the platform-independent layer function DUsbClientController::EndpointRequestComplete(). This function takes only one argument, a pointer to the updated request callback structure passed to the platform-specific layer for this write; the relevant members of the structure must be updated before the call is made.
The handling of endpoint-0 write requests is similar to general endpoints except for the following:
The platform-independent layer issues the request by calling DUsbClientController::SetupEndpointZeroWrite(). Unlike the equivalent read function, this function takes a number of parameters:
the address of the location containing the data to be written
the length of the data to be written
a TBool parameter that indicates whether a zero length packet (ZLP) is to be sent immediately after the data has been sent.
An endpoint-0 write request is completed by calling DUsbClientController::Ep0RequestComplete(), in the platform-independent layer, passing the function the endpoint number (0, or symbolically KEp0_In), the number of bytes written, and the error code for this write request.
There is another endpoint-0 write function, DUsbClientController::SendEp0ZeroByteStatusPacket(), which is used for sending a zero length packet (ZLP) on its own. This separate function is provided because the USB device controller mechanism for transmitting a ZLP on its own can be different to the mechanism for transmitting the ZLP at the end of a data transmission.
When planning to use DMA for transferring data from and to the endpoints’ FIFOs, there are two things that must be considered:
Flushing cached information for the data buffers
The cached information for the data buffers must be flushed before a DMA write operation, (i.e. transfer memory to a FIFO), and both before and after a DMA read operation, (i.e. transfer a FIFO to memory).
The kernel provides three functions for this purpose; they are static functions of the class Cache, declared in ...\e32\include\kernel\cache.h.
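The required ordering can be sketched as follows. The three Cache member function names used here (SyncMemoryBeforeDmaWrite(), SyncMemoryBeforeDmaRead(), SyncMemoryAfterDmaRead()) are an assumption based on the Symbian kernel's cache.h and should be checked against the actual header; the Cache class below is a recording stub so the sketch is self-contained.

```cpp
#include <cassert>
#include <string>
#include <vector>

typedef unsigned long TLinAddr;
typedef unsigned int TUint;

// Records which sync operations ran, and in what order.
static std::vector<std::string> gCalls;

// Stub standing in for the kernel's Cache class (declared in cache.h).
// The three function names are assumptions, not verified signatures.
struct Cache
    {
    static void SyncMemoryBeforeDmaWrite(TLinAddr, TUint)
        { gCalls.push_back("before-write"); }
    static void SyncMemoryBeforeDmaRead(TLinAddr, TUint)
        { gCalls.push_back("before-read"); }
    static void SyncMemoryAfterDmaRead(TLinAddr, TUint)
        { gCalls.push_back("after-read"); }
    };

// DMA write (memory to FIFO): flush the cache once, before the transfer.
void DmaWriteBuffer(TLinAddr aBuf, TUint aLen)
    {
    Cache::SyncMemoryBeforeDmaWrite(aBuf, aLen);
    // ... start the DMA transfer to the endpoint FIFO here ...
    }

// DMA read (FIFO to memory): sync both before and after the transfer.
void DmaReadBuffer(TLinAddr aBuf, TUint aLen)
    {
    Cache::SyncMemoryBeforeDmaRead(aBuf, aLen);
    // ... DMA transfer from the endpoint FIFO completes here ...
    Cache::SyncMemoryAfterDmaRead(aBuf, aLen);
    }
```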
Implementing DMA mode for OUT transfers (DMA reads)
The implementation of DMA mode for IN transfers is normally relatively straightforward, however complications can occur with OUT transfers (DMA reads), depending on the DMA controller, the USB device controller and the way the two are connected.
There are two issues:
If we set up a DMA read for 4kB, for example, and this request returns after a short packet, we must be able to tell how much data has been received and is now in our buffer. In other words, the DMA controller must provide a way of finding out how many bytes have been received, otherwise we cannot complete the read request to the LDD.
Here is a theoretical solution for a Scatter/Gather controller that doesn’t provide information about the number of bytes transferred directly. Note: This proposal has not been tested in practice!
Instead of using one large DMA descriptor for 4kB, we could set up a chain of descriptors of max-packet-size bytes each. When the DMA completes, it will be either:
for the whole transfer, in which case we know the number of bytes received, or
because a short packet arrived. In this case we can try to find out which descriptor was being served at the time. The number of bytes received is then:
number of completed descriptors * max-packet-size + number of bytes in short packet.
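The formula above is trivial to express in code; the function name and parameters here are invented for illustration only:

```cpp
#include <cassert>

typedef unsigned int TUint;

// Bytes received on a short-packet DMA completion, recovered from the
// scatter/gather descriptor chain as described in the text.
TUint BytesReceived(TUint aCompletedDescriptors, TUint aMaxPacketSize,
                    TUint aShortPacketBytes)
    {
    return aCompletedDescriptors * aMaxPacketSize + aShortPacketBytes;
    }
```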
Another potential problem is posed by the re-startable OUT endpoint timer described in the section on reading data. The situation in the timer callback DFC, when the LDD read request is about to be completed, has to be handled with care in order to avoid data loss. If at this point the pending DMA request has already started transferring data from the endpoint FIFO into our data buffer, then we cannot complete the LDD request any more: that data would be lost, because the variables in the request structure do not account for it and so the LDD would not know about it. Nor can the buffer be taken away while it is still receiving data from a DMA transfer. A possible solution would be as follows (again, this is just an idea and therefore untested). In the timer DFC, before we complete to the LDD, we cancel the pending DMA request. If a DMA transfer was already happening, then this will somehow complete and the DMA complete DFC will be queued. What we need to find out in that situation is whether or not that DFC is pending:
if not, then there was no DMA transfer ongoing, and we can just complete our read request to the LDD.
otherwise we have to abandon, at this point, the plan to complete and return from the timer callback (without doing any damage). In this case, the DMA complete DFC will run next, the LDD read request structure will be updated, the RX timer will be set again, and we can proceed as normal.
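The untested cancel-then-check scheme above can be sketched as follows. The DMA controller interface here is entirely hypothetical (the text stresses this idea has not been tried in practice); a real implementation would query its DFC queue rather than a plain flag.

```cpp
#include <cassert>

// Hypothetical DMA state for one OUT endpoint.
struct TDmaState
    {
    bool iTransferStarted;     // DMA already moving data into the buffer
    bool iCompleteDfcPending;  // DMA-complete DFC queued after cancellation
    };

// Cancel the pending DMA request. If a transfer had already started, it
// runs to completion and its DMA-complete DFC ends up queued.
void CancelDma(TDmaState& aDma)
    {
    if (aDma.iTransferStarted)
        aDma.iCompleteDfcPending = true;
    }

// Called from the timer callback DFC. Returns true if the read request may
// be completed to the LDD now; false if the timer DFC must back off and let
// the DMA-complete DFC update the request structure and re-set the timer.
bool TimerDfcMayComplete(TDmaState& aDma)
    {
    CancelDma(aDma);
    return !aDma.iCompleteDfcPending;
    }
```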