Linux Kernel
3.7.1
doc_seek - Set both flash planes to the specified block, page for reading
- the device
- the first plane block index
- the second plane block index
- if true, read will occur on the 4 extra bytes of the wear area
- offset in page to read
Programs the flash even and odd planes to the specified block and page. Alternatively, programs the flash to the wear area of the specified page.
doc_write_seek - Set both flash planes to the specified block, page for writing
- the device
- the first plane block index
- the second plane block index
- offset in page to write
Programs the flash even and odd planes to the specified block and page. Alternatively, programs the flash to the wear area of the specified page.
doc_read_page_prepare - Prepares reading data from a flash page
- the device
- the first plane block index on flash memory
- the second plane block index on flash memory
- the offset in the page (must be a multiple of 4)
Prepares the page to be read in the flash memory.
After a call to this method, a call to doc_read_page_finish is mandatory, to end the read cycle of the flash.
Read data from a flash page. The length to be read must be between 0 and (page_size + oob_size + wear_size), i.e. 532, and a multiple of 4 (because reading the extra bytes is not implemented).
As pages are grouped by two (in two planes), reading from a page must be done in two steps.
Returns 0 if successful, -EIO if a read error occurred.
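For illustration, a minimal sketch of the prepare/read/finish cycle described above. Only doc_read_page_prepare() and doc_read_page_finish() are named by this documentation; the doc_read_page_getbytes() helper and all parameter lists below are assumptions, not the verified docg3 driver interface.

	/* Illustrative only: parameter order and doc_read_page_getbytes()
	 * are assumed, not taken from the real driver. */
	ret = doc_read_page_prepare(docg3, block0, block1, page, ofs);
	if (ret < 0)
		return ret;
	doc_read_page_getbytes(docg3, len, buf, 1);	/* hypothetical read helper */
	doc_read_page_finish(docg3);			/* mandatory: ends the read cycle */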
single_erase_cmd - [GENERIC] NAND standard block erase command function : MTD device structure
Standard erase command for NAND chips.
multi_erase_cmd - [GENERIC] AND specific block erase command function : MTD device structure
AND multi block erase command function. Erase 4 consecutive blocks.
read_bbt - [GENERIC] Read the bad block table starting from page
- MTD device structure
- temporary buffer
- the number of bbt descriptors to read
- the bbt description table
- offset in the memory table
Read the bad block table starting from page.
block_invalidatepage - invalidate part or all of a buffer-backed page
: the index of the truncation point
block_invalidatepage() is called when all or part of the page has become invalidated by a truncate operation.
block_invalidatepage() does not have to release all buffers, but it must ensure that no dirty buffer is left outside the truncation point and that no I/O is underway against any of the blocks beyond it, because the caller is about to free (and possibly reuse) those blocks on-disk.
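As a sketch of how a buffer-head-backed filesystem might wire this up (the examplefs name is a placeholder, not a real filesystem):

	/* Hypothetical filesystem: its pages carry buffer heads, so the
	 * generic helper can serve as the ->invalidatepage method directly. */
	static const struct address_space_operations examplefs_aops = {
		/* .readpage, .writepage, ... omitted */
		.invalidatepage	= block_invalidatepage,
	};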
stuffed_readpage - Fill in a Linux page with stuffed file data : the inode
Returns: errno
gfs2_releasepage - free the metadata associated with a page
: passed from Linux VFS, ignored by us
Call try_to_free_buffers() if the buffers in this page can be released.
Returns: 0
gfs2_log_write - write to log
- the filesystem
- the size of the data to write
- the offset within the page
Try and add the page segment to the current bio. If that fails, submit the current bio to the device and create a new one, and then add the page segment to that.
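A rough sketch of that pattern with the generic bio API (illustrative only, not the actual gfs2 code; bio, page, size and offset are assumed locals, and the fresh bio still needs its bi_bdev, bi_sector and bi_end_io set up):

	if (bio_add_page(bio, page, size, offset) < size) {
		submit_bio(WRITE, bio);			/* current bio is full: send it */
		bio = bio_alloc(GFP_NOIO, BIO_MAX_PAGES);
		bio_add_page(bio, page, size, offset);	/* the segment fits in the new bio */
	}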
aops.h - Defines for NTFS kernel address space operations and page cache handling. Part of the Linux-NTFS project.
Copyright (c) 2001-2004 Anton Altaparmakov Copyright (c) 2002 Richard Russon
ntfs_unmap_page - release a page that was mapped using ntfs_map_page()
Unpin, unmap and release a page that was obtained from ntfs_map_page().
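A small sketch of the intended map/unmap pairing (the example_ function name is a placeholder):

	static int example_read_ntfs_page(struct address_space *mapping,
					  unsigned long index)
	{
		struct page *page = ntfs_map_page(mapping, index);

		if (IS_ERR(page))
			return PTR_ERR(page);
		/* ... use page_address(page) while the mapping is held ... */
		ntfs_unmap_page(page);	/* kunmap and drop the page reference */
		return 0;
	}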
page_is_file_cache - should the page be on a file LRU or anon LRU?
Returns 1 if the page is a page cache page backed by a regular filesystem, or 0 if it is anonymous swap-backed or in the swap cache.
page_lru_base_type - which LRU list type should a page be on?
Used for LRU list index arithmetic.
Returns the base LRU type - file or anon - that the page should be on.
page_off_lru - which LRU list was the page on? Clears its lru flags.
Returns the LRU list a page was on, as an index into the array of LRU lists; and clears its Unevictable or Active flags, ready for freeing.
page_lru - which LRU list should a page be on?
Returns the LRU list a page should be on, as an index into the array of LRU lists.
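The index arithmetic these helpers describe looks roughly like this (a sketch of the 3.7-era page_lru(), leaving out the unevictable special case; example_page_lru is a placeholder name):

	static enum lru_list example_page_lru(struct page *page)
	{
		enum lru_list lru = page_lru_base_type(page);	/* inactive anon or file */

		if (PageActive(page))
			lru += LRU_ACTIVE;	/* shift to the matching active list */
		return lru;
	}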
struct pipe_buffer - a linux kernel pipe buffer
: offset of data inside the page
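For reference, the 3.7-era layout of struct pipe_buffer from include/linux/pipe_fs_i.h, shown here as a sketch:

	struct pipe_buffer {
		struct page *page;			/* the page containing the data */
		unsigned int offset, len;		/* offset of the data inside the page, and its length */
		const struct pipe_buf_operations *ops;	/* operations for this buffer */
		unsigned int flags;			/* pipe buffer flags */
		unsigned long private;			/* private data owned by the ops */
	};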
: the modified page we replace page by
Returns 0 on success, -EFAULT on failure.
delete_from_page_cache - delete page from page cache
This must be called only on pages that have been verified to be in the page cache and locked. It will never put the page into the free list; the caller has a reference on the page.
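A sketch of the required calling pattern (example_remove_page is a placeholder; the page is assumed to have been looked up, referenced and verified elsewhere):

	static void example_remove_page(struct page *page)
	{
		lock_page(page);
		delete_from_page_cache(page);	/* page stays alive: we hold a reference */
		unlock_page(page);
		page_cache_release(page);	/* now drop that reference */
	}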
unlock_page - unlock a locked page
Unlocks the page and wakes up sleepers in ___wait_on_page_locked(). Also wakes sleepers in wait_on_page_writeback() because the wakeup mechanism between PageLocked pages and PageWriteback pages is shared. But that's OK - sleepers in wait_on_page_writeback() just go back to sleep.
The mb is necessary to enforce ordering between the clear_bit and the read of the waitqueue (to avoid SMP races with a parallel wait_on_page_locked()).
end_page_writeback - end writeback against a page
__lock_page - get a lock on the page, assuming we need to sleep to get it
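For context, the lock_page() wrapper around it looks roughly like this (a sketch of the 3.7-era include/linux/pagemap.h definition):

	static inline void lock_page(struct page *page)
	{
		might_sleep();
		if (!trylock_page(page))
			__lock_page(page);	/* sleep until the lock is acquired */
	}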
try_to_release_page() - release old fs-specific metadata on a page
: memory allocation flags (and I/O mode)
The address_space is to try to release any data against the page (presumably at page->private). If the release was successful, return `1'. Otherwise return zero.
This may also be called if PG_fscache is set on a page, indicating that the page is known to the local caching routines.
The argument specifies whether I/O may be performed to release this page (__GFP_IO), and whether the call may block (__GFP_WAIT & __GFP_FS).
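A sketch of a typical call site (example_try_drop_private is a placeholder name):

	static int example_try_drop_private(struct page *page)
	{
		/* GFP_KERNEL: both blocking and I/O are allowed for the release. */
		if (page_has_private(page) && !try_to_release_page(page, GFP_KERNEL))
			return -EBUSY;	/* the fs still needs its private data */
		return 0;
	}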
replace_page - replace page in vma by new ksm page
- vma that holds the pte pointing to page
- the ksm page we replace page by
- the original value of the pte
Returns 0 on success, -EFAULT on failure.
mem_cgroup_page_lruvec - return lruvec for adding an lru page
: zone of the page
mem_cgroup_move_account - move account of the page
- number of regular pages (>1 for huge pages)
- page_cgroup of the page
- mem_cgroup which the page is moved from
- mem_cgroup which the page is moved to; it must differ from the source mem_cgroup
The caller must confirm the following.
This function doesn't do "charge" to new cgroup and doesn't do "uncharge" from old cgroup.
write_one_page - write out a single page and optionally wait on I/O
: if true, wait on writeout
The page must be locked by the caller and will be unlocked upon return.
write_one_page() returns a negative error code if I/O failed.
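A sketch of a typical caller (example_flush_page is a placeholder name):

	static int example_flush_page(struct page *page)
	{
		lock_page(page);
		/* Starts writeout, optionally waits for it, and unlocks the page. */
		return write_one_page(page, 1 /* wait */);
	}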
page_cache_async_readahead - file readahead for marked pages
- address_space which holds the pagecache and I/O vectors
- file_ra_state which holds the readahead state
- passed on to ->readpage() and ->readpages()
- start offset into the mapping, in pagecache page-sized units
- hint: total size of the read which the caller is performing in pagecache pages
page_cache_async_readahead() should be called when a page is used which has the PG_readahead flag; this is a marker to suggest that the application has used up enough of the readahead window that we should start pulling in more pages.
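A sketch of the usual call site in a read path (mapping, ra, filp, index and last_index are assumed locals of the surrounding read loop):

	if (PageReadahead(page))
		page_cache_async_readahead(mapping, ra, filp,
					   page, index, last_index - index);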
page_mapped_in_vma - check whether a page is really mapped in a VMA
: the VMA to test
Returns 1 if the page is mapped into the page tables of the VMA, 0 if the page is not mapped into the page tables of this VMA. Only valid for normal file or anonymous VMAs.
page_referenced_file - referenced check for object-based rmap
- target memory control group
- collects the encountered vma->vm_flags of the mappings that actually referenced the page
For an object-based mapped page, find all the places it is mapped and check/clear the referenced flag. This is done by following the page->mapping pointer, then walking the chain of vmas it holds. It returns the number of references it found.
This function is only called from page_referenced for object-based pages.
page_referenced - test if the page was referenced
- caller holds lock on the page
- target memory cgroup
- collects the encountered vma->vm_flags of the mappings that actually referenced the page
Quick test_and_clear_referenced for all mappings to a page, returns the number of ptes which referenced the page.
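A sketch of a reclaim-style caller, based on the 3.7-era signature (example_page_was_referenced is a placeholder name):

	static bool example_page_was_referenced(struct page *page,
						struct mem_cgroup *memcg)
	{
		unsigned long vm_flags;

		/* Counts and clears the referenced bits over all mappings;
		 * the page is assumed to be locked here (second argument). */
		return page_referenced(page, 1, memcg, &vm_flags) != 0;
	}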
page_move_anon_rmap - move a page to our anon_vma
- the vma the page belongs to
- the user virtual address mapped
When a page belongs exclusively to one process after a COW event, that page can be moved into the anon_vma that belongs to just that process, so the rmap code will not search the parent or sibling processes.
__page_check_anon_rmap - sanity check anonymous rmap addition
- the vm area in which the mapping is added
- the user virtual address mapped
page_add_anon_rmap - add pte mapping to an anonymous page
- the vm area in which the mapping is added
- the user virtual address mapped
The caller needs to hold the pte lock, and the page must be locked in the anon_vma case: to serialize mapping,index checking after setting, and to ensure that PageAnon is not being upgraded racily to PageKsm (but PageKsm is never downgraded to PageAnon).
page_add_new_anon_rmap - add pte mapping to a new anonymous page
- the vm area in which the mapping is added
- the user virtual address mapped
Same as page_add_anon_rmap but must only be called on new pages. This means the inc-and-test can be bypassed. Page does not have to be locked.
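A sketch of the fault-path sequence for a freshly allocated anonymous page (mm, vma, address, page_table and entry are assumed locals of the fault handler):

	inc_mm_counter(mm, MM_ANONPAGES);
	page_add_new_anon_rmap(page, vma, address);	/* page is new: no page lock needed */
	set_pte_at(mm, address, page_table, entry);	/* make the mapping visible */
	update_mmu_cache(vma, address, page_table);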
page_add_file_rmap - add pte mapping to a file page
The caller needs to hold the pte lock.
try_to_unmap_anon - unmap or unlock anonymous page using the object-based rmap method
: action and flags
Find all the mappings of a page using the mapping pointer and the vma chains contained in the anon_vma struct it points to.
This function is only called from try_to_unmap/try_to_munlock for anonymous pages. When called from try_to_munlock(), the mmap_sem of the mm containing the vma where the page was found will be held for write. So, we won't recheck vm_flags for that VMA. That should be OK, because that vma shouldn't be VM_LOCKED.
try_to_unmap_file - unmap/unlock file page using the object-based rmap method
: action and flags
Find all the mappings of a page using the mapping pointer and the vma chains contained in the address_space struct it points to.
This function is only called from try_to_unmap/try_to_munlock for object-based pages. When called from try_to_munlock(), the mmap_sem of the mm containing the vma where the page was found will be held for write. So, we won't recheck vm_flags for that VMA. That should be OK, because that vma shouldn't be VM_LOCKED.
try_to_unmap - try to remove all page table mappings to a page
: action and flags
Tries to remove all the page table entries which are mapping this page, used in the pageout path. Caller must hold the page lock. Return values are:
SWAP_SUCCESS - we succeeded in removing all mappings
SWAP_AGAIN - we missed a mapping, try again later
SWAP_FAIL - the page is unswappable
SWAP_MLOCK - page is mlocked.
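A sketch of handling these return values in a pageout-style loop (the goto labels are placeholders for whatever the surrounding loop does):

	switch (try_to_unmap(page, TTU_UNMAP)) {
	case SWAP_FAIL:
		goto activate_locked;	/* keep the page, and keep it active */
	case SWAP_AGAIN:
		goto keep_locked;	/* a mapping was missed, retry later */
	case SWAP_MLOCK:
		goto cull_mlocked;	/* the page is mlocked, set it aside */
	case SWAP_SUCCESS:
		break;			/* no mappings left, safe to reclaim */
	}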
try_to_munlock - try to munlock a page
Called from munlock code. Checks all of the VMAs mapping the page to make sure nobody else has this page mlocked. The page will be returned with PG_mlocked cleared if no other vmas have it mlocked.
Return values are:
SWAP_AGAIN - no vma is holding page mlocked, or,
SWAP_AGAIN - page mapped in mlocked vma -- couldn't acquire mmap sem
SWAP_FAIL - page cannot be located at present
SWAP_MLOCK - page is now mlocked.
lru_cache_add_lru - add a page to a page list
: the LRU list to which the page is added.
add_page_to_unevictable_list - add a page to the unevictable list
Add page directly to its zone's unevictable list. To avoid races with tasks that might be making the page evictable, through eg. munlock, munmap or exit, while it's not on the lru, we want to add the page while it's locked or otherwise "invisible" to other tasks. This is difficult to do when using the pagevec cache, so bypass that.
do_invalidatepage - invalidate part or all of a page
: the index of the truncation point
do_invalidatepage() is called when all or part of the page has become invalidated by a truncate operation.
do_invalidatepage() does not have to release all buffers, but it must ensure that no dirty buffer is left outside the truncation point and that no I/O is underway against any of the blocks beyond it, because the caller is about to free (and possibly reuse) those blocks on-disk.
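A sketch of the typical truncate-path call site (mirrors the pattern used by the truncate code; illustrative only):

	/* The whole page is being punched out, so let the filesystem drop
	 * any private data (e.g. buffer heads) attached to it first. */
	if (page_has_private(page))
		do_invalidatepage(page, 0);	/* offset 0 = invalidate the whole page */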