Linux Kernel
3.7.1
Functions
dma_addr_t  qib_map_page (struct pci_dev *hwdev, struct page *page, unsigned long offset, size_t size, int direction)
int         qib_get_user_pages (unsigned long start_page, size_t num_pages, struct page **p)
void        qib_release_user_pages (struct page **p, size_t num_pages)
qib_get_user_pages - lock user pages into memory
start_page: the start page
num_pages: the number of pages
p: the output page structures
This function takes a given start page (a page-aligned user virtual address), pins it, and then pins the following specified number of pages. For now, num_pages is always 1, but that will probably change at some point (because the caller is doing expected sends on a single virtually contiguous buffer, so we can do all pages at once).
Definition at line 132 of file qib_user_pages.c.
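As a hypothetical illustration of how a caller might pair these two functions, the sketch below pins a single page of a user buffer and later releases it. The helper name, the PAGE_MASK alignment step, and the error handling are assumptions for illustration, not code taken from the driver.

/*
 * Hypothetical caller sketch (not from the driver source); assumes the
 * qib driver headers and linux/mm.h (for PAGE_MASK) are available.
 */
static int example_pin_one_page(unsigned long user_addr)
{
	struct page *pages[1];
	int ret;

	/* qib_get_user_pages() expects a page-aligned user virtual address. */
	ret = qib_get_user_pages(user_addr & PAGE_MASK, 1, pages);
	if (ret)
		return ret;	/* pinning failed, e.g. -ENOMEM or rlimit exceeded */

	/* ... hand pages[0] to the hardware, e.g. via qib_map_page() ... */

	/* Drop the pin once the hardware is done with the page. */
	qib_release_user_pages(pages, 1);
	return 0;
}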
dma_addr_t qib_map_page (struct pci_dev *hwdev, struct page *page, unsigned long offset, size_t size, int direction)
qib_map_page - a safety wrapper around pci_map_page()
A dma_addr of all 0's is interpreted by the chip as "disabled". Unfortunately, it can also be a valid dma_addr returned on some architectures.
The powerpc IOMMU assigns dma_addrs in ascending order, so we don't have to bother with retries or mapping a dummy page to ensure we don't just get the same mapping again.
I'm sure we won't be so lucky with other IOMMUs, so FIXME.
Definition at line 101 of file qib_user_pages.c.
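A minimal sketch of the safety-wrapper pattern described above, assuming the wrapper simply unmaps and retries when pci_map_page() returns the "disabled" value 0. The retry policy shown is an assumption based on this description, not a verbatim copy of the driver source.

/*
 * Sketch of the wrapper behaviour described above (an assumption, not a
 * verbatim copy of qib_map_page()): if pci_map_page() hands back 0,
 * which the chip reads as "disabled", drop that mapping and map again
 * in the hope of getting a usable non-zero dma_addr.
 */
dma_addr_t qib_map_page(struct pci_dev *hwdev, struct page *page,
			unsigned long offset, size_t size, int direction)
{
	dma_addr_t phys;

	phys = pci_map_page(hwdev, page, offset, size, direction);

	if (phys == 0) {
		/* 0 means "disabled" to the chip; unmap and retry once. */
		pci_unmap_page(hwdev, phys, size, direction);
		phys = pci_map_page(hwdev, page, offset, size, direction);
		/* FIXME: a second 0 would still need a dummy-page fallback. */
	}

	return phys;
}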