mm/userfaultfd: replace kmap/kmap_atomic() with kmap_local_page()
kmap() and kmap_atomic() are being deprecated in favor of kmap_local_page(), which is appropriate for any thread-local context.[1]

A recent locking bug report with userfaultfd showed that converting the kmap_atomic() calls in those code flows requires care with regard to preventing deadlock.[2] git archaeology implied that the recursion may not be an actual bug.[3] However, depending on the implementation of the mmap_lock and the condition of the call there may still be a deadlock.[4] So this is not purely a lockdep issue.

Considering a single-threaded call stack there are 3 options:

1) Different mm's are in play (no issue)
2) Readlock implementation is recursive and same mm is in play (no issue)
3) Readlock implementation is _not_ recursive (issue)

The mmap_lock is recursive, so with a single thread there is no issue. However, Matthew pointed out a deadlock scenario when you consider additional processes and threads:

	"The readlock implementation is only recursive if nobody else has
	taken a write lock.  If you have a multithreaded process, one of
	the other threads can call mmap() and that will prevent recursion
	(due to fairness).  Even if it's a different process that you're
	trying to acquire the mmap read lock on, you can still get into a
	deadly embrace.  eg:

	process A thread 1 takes read lock on own mmap_lock
	process A thread 2 calls mmap, blocks taking write lock
	process B thread 1 takes page fault, read lock on own mmap lock
	process B thread 2 calls mmap, blocks taking write lock
	process A thread 1 blocks taking read lock on process B
	process B thread 1 blocks taking read lock on process A

	Now all four threads are blocked waiting for each other."

Regardless, using pagefault_disable() ensures that no matter which locking implementation is used, a deadlock will not occur.

Complete the kmap conversion in userfaultfd by replacing the kmap() and kmap_atomic() calls with kmap_local_page(). When replacing the kmap_atomic() call, ensure page faults continue to be disabled to support the correct fallback behavior, and add a comment to inform future souls of the requirement.

[1] https://lore.kernel.org/all/20220813220034.806698-1-ira.weiny@intel.com/
[2] https://lore.kernel.org/all/Y1Mh2S7fUGQ%2FiKFR@iweiny-desk3/
[3] https://lore.kernel.org/all/Y1MymJ%2FINb45AdaY@iweiny-desk3/
[4] https://lore.kernel.org/lkml/Y1bXBtGTCym77%2FoD@casper.infradead.org/

[ira.weiny@intel.com: v2]
  Link: https://lkml.kernel.org/r/20221025220136.2366143-1-ira.weiny@intel.com
Link: https://lkml.kernel.org/r/20221024043452.1491677-1-ira.weiny@intel.com
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Axel Rasmussen <axelrasmussen@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
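As background for the diff below: kmap_atomic() disables page faults (and preemption) as a side effect, while kmap_local_page() does not, so a conversion that still wants copy_from_user() to fail fast while the mmap_lock is held has to disable page faults explicitly. A minimal sketch of that pattern, using an illustrative helper name (copy_user_page_locked is not a function in the kernel tree):

#include <linux/highmem.h>
#include <linux/uaccess.h>

/*
 * Illustrative only -- this helper does not exist in the kernel tree.
 * Copy PAGE_SIZE bytes from user space into @page while the caller holds
 * the mmap_lock for read.  kmap_atomic() used to disable page faults
 * implicitly; with kmap_local_page() that must be done explicitly so the
 * copy fails fast instead of faulting (and possibly deadlocking).
 */
static int copy_user_page_locked(struct page *page, unsigned long src_addr)
{
	void *kaddr = kmap_local_page(page);
	unsigned long left;

	pagefault_disable();
	left = copy_from_user(kaddr, (const void __user *)src_addr, PAGE_SIZE);
	pagefault_enable();
	kunmap_local(kaddr);

	/* Non-zero means a fault was needed; caller retries outside the lock. */
	return left ? -ENOENT : 0;
}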
This commit is contained in:
parent 78a498c3a2
commit 5521de7ddd

1 changed file with 21 additions and 4 deletions
mm/userfaultfd.c

@@ -157,11 +157,28 @@ static int mcopy_atomic_pte(struct mm_struct *dst_mm,
 		if (!page)
 			goto out;
 
-		page_kaddr = kmap_atomic(page);
+		page_kaddr = kmap_local_page(page);
+		/*
+		 * The read mmap_lock is held here.  Despite the
+		 * mmap_lock being read recursive a deadlock is still
+		 * possible if a writer has taken a lock.  For example:
+		 *
+		 * process A thread 1 takes read lock on own mmap_lock
+		 * process A thread 2 calls mmap, blocks taking write lock
+		 * process B thread 1 takes page fault, read lock on own mmap lock
+		 * process B thread 2 calls mmap, blocks taking write lock
+		 * process A thread 1 blocks taking read lock on process B
+		 * process B thread 1 blocks taking read lock on process A
+		 *
+		 * Disable page faults to prevent potential deadlock
+		 * and retry the copy outside the mmap_lock.
+		 */
+		pagefault_disable();
 		ret = copy_from_user(page_kaddr,
 				(const void __user *) src_addr,
 				PAGE_SIZE);
-		kunmap_atomic(page_kaddr);
+		pagefault_enable();
+		kunmap_local(page_kaddr);
 
 		/* fallback to copy_from_user outside mmap_lock */
 		if (unlikely(ret)) {

@@ -646,11 +663,11 @@ retry:
 		mmap_read_unlock(dst_mm);
 		BUG_ON(!page);
 
-		page_kaddr = kmap(page);
+		page_kaddr = kmap_local_page(page);
 		err = copy_from_user(page_kaddr,
 				(const void __user *) src_addr,
 				PAGE_SIZE);
-		kunmap(page);
+		kunmap_local(page_kaddr);
 		if (unlikely(err)) {
 			err = -EFAULT;
 			goto out;
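The second hunk sits on the fallback path, where the mmap_lock has already been dropped (note the mmap_read_unlock() in its context) before the copy is redone, so page faults are allowed there and no pagefault_disable() is needed; only the sleepable kmap()/kunmap() pair is swapped for kmap_local_page()/kunmap_local(). A condensed, hypothetical view of that unlocked retry (the helper name and structure are illustrative, not lifted from mm/userfaultfd.c):

#include <linux/highmem.h>
#include <linux/mm.h>
#include <linux/uaccess.h>

/*
 * Illustrative only: redo the copy after the locked attempt returned
 * -ENOENT.  The mmap_lock is released first, so faulting in src_addr is
 * fine here and no pagefault_disable() is required.
 */
static int copy_user_page_unlocked(struct mm_struct *dst_mm,
				   struct page *page, unsigned long src_addr)
{
	void *page_kaddr;
	unsigned long left;

	mmap_read_unlock(dst_mm);

	page_kaddr = kmap_local_page(page);
	left = copy_from_user(page_kaddr,
			      (const void __user *)src_addr, PAGE_SIZE);
	kunmap_local(page_kaddr);

	/* On success the caller re-takes the mmap_lock and retries the install. */
	return left ? -EFAULT : 0;
}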