mm: introduce get_user_pages_fast
Introduce a new get_user_pages_fast mm API, which is basically a
get_user_pages with a less general API (but still tends to be suited to
the common case):

- task and mm are always current and current->mm
- force is always 0
- pages is always non-NULL
- don't pass back vmas

This restricted API can be implemented in a much more scalable way on many
architectures when the ptes are present, by walking the page tables
locklessly (no mmap_sem or page table locks). When the ptes are not
populated, get_user_pages_fast() could be slower.

This is implemented locklessly on x86, and used in some key direct IO call
sites, in later patches, which provides nearly 10% performance improvement
on a threaded database workload.

Lots of other code could use this too, depending on use cases (eg. grep
drivers/). And it might inspire some new and clever ways to use it.

[akpm@linux-foundation.org: build fix]
[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Nick Piggin <npiggin@suse.de>
Cc: Dave Kleikamp <shaggy@austin.ibm.com>
Cc: Andy Whitcroft <apw@shadowen.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Dave Kleikamp <shaggy@austin.ibm.com>
Cc: Badari Pulavarty <pbadari@us.ibm.com>
Cc: Zach Brown <zach.brown@oracle.com>
Cc: Jens Axboe <jens.axboe@oracle.com>
Reviewed-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
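As a reading aid (not part of the patch): per the commit message and the fallback #define added in the diff below, a call to get_user_pages_fast() with the restricted arguments is equivalent to the following get_user_pages() sequence. This is a minimal sketch; the name gup_fast_equivalent() is illustrative, not from the kernel:

    #include <linux/mm.h>
    #include <linux/sched.h>

    /*
     * Spells out what get_user_pages_fast(start, nr_pages, write, pages)
     * stands for in terms of the general API: task = current,
     * mm = current->mm, force = 0, no vmas passed back, and mmap_sem
     * held for read around the walk.
     */
    static int gup_fast_equivalent(unsigned long start, int nr_pages,
				   int write, struct page **pages)
    {
	    struct mm_struct *mm = current->mm;
	    int ret;

	    down_read(&mm->mmap_sem);
	    ret = get_user_pages(current, mm, start, nr_pages,
				 write, 0, pages, NULL);
	    up_read(&mm->mmap_sem);

	    return ret;
    }

The fast variant may skip the mmap_sem round trip entirely when the ptes are already present, which is where the scalability win described above comes from.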
parent a0a8f5364a
commit 21cc199baa
1 changed file with 33 additions and 0 deletions
include/linux/mm.h
@@ -833,6 +833,39 @@ extern int mprotect_fixup(struct vm_area_struct *vma,
 			  struct vm_area_struct **pprev, unsigned long start,
 			  unsigned long end, unsigned long newflags);
 
+#ifdef CONFIG_HAVE_GET_USER_PAGES_FAST
+/*
+ * get_user_pages_fast provides equivalent functionality to get_user_pages,
+ * operating on current and current->mm (force=0 and doesn't return any vmas).
+ *
+ * get_user_pages_fast may take mmap_sem and page tables, so no assumptions
+ * can be made about locking. get_user_pages_fast is to be implemented in a
+ * way that is advantageous (vs get_user_pages()) when the user memory area is
+ * already faulted in and present in ptes. However if the pages have to be
+ * faulted in, it may turn out to be slightly slower.
+ */
+int get_user_pages_fast(unsigned long start, int nr_pages, int write,
+			struct page **pages);
+
+#else
+/*
+ * Should probably be moved to asm-generic, and architectures can include it if
+ * they don't implement their own get_user_pages_fast.
+ */
+#define get_user_pages_fast(start, nr_pages, write, pages)	\
+({								\
+	struct mm_struct *mm = current->mm;			\
+	int ret;						\
+								\
+	down_read(&mm->mmap_sem);				\
+	ret = get_user_pages(current, mm, start, nr_pages,	\
+			     write, 0, pages, NULL);		\
+	up_read(&mm->mmap_sem);					\
+								\
+	ret;							\
+})
+#endif
+
 /*
  * A callback you can register to apply pressure to ageable caches.
  *
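For context, a sketch of how a direct-IO-style call site might use the new API. The helper name pin_user_buffer() and its error handling are illustrative assumptions, not taken from this commit:

    #include <linux/mm.h>
    #include <linux/pagemap.h>
    #include <linux/errno.h>

    /*
     * Illustrative helper (not from the patch): pin the pages backing a
     * user buffer so they can be accessed directly, dropping the page
     * references with put_page() on partial failure. Assumes len > 0.
     */
    static int pin_user_buffer(unsigned long uaddr, size_t len, int write,
			       struct page **pages, int max_pages)
    {
	    unsigned long first = uaddr >> PAGE_SHIFT;
	    unsigned long last = (uaddr + len - 1) >> PAGE_SHIFT;
	    int nr_pages = last - first + 1;
	    int pinned, i;

	    if (nr_pages > max_pages)
		    return -EINVAL;

	    /* Lockless page-table walk where the arch provides it;
	     * otherwise the mmap_sem-taking fallback above is used. */
	    pinned = get_user_pages_fast(uaddr & PAGE_MASK, nr_pages,
					 write, pages);
	    if (pinned < 0)
		    return pinned;
	    if (pinned < nr_pages) {
		    /* Partial pin: release what we got, report failure. */
		    for (i = 0; i < pinned; i++)
			    put_page(pages[i]);
		    return -EFAULT;
	    }
	    return nr_pages;
    }

The caller must drop every reference with put_page() once it is done with the buffer, exactly as it would after a successful get_user_pages().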