linux-bl808/fs/gfs2
Andreas Gruenbacher 3d36e57ff7 gfs2: gfs2_create_inode rework
When gfs2_lookup_by_inum() calls gfs2_inode_lookup() for an uncached
inode, gfs2_inode_lookup() will place a new tentative inode into the
inode cache before verifying that there is a valid inode at the given
address.  This can race with gfs2_create_inode(), which doesn't check
for duplicate inodes: it will try to assign the new inode to the
corresponding inode glock, and glock_set_object() will complain that
the glock is still in use by gfs2_inode_lookup()'s tentative inode.
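
The interleaving looks roughly like this (a simplified sketch of the
sequence described above, not code from the tree):

    /*
     *  gfs2_lookup_by_inum()             gfs2_create_inode()
     *  ---------------------             -------------------
     *  gfs2_inode_lookup()
     *    iget5_locked()
     *      -> tentative inode hashed
     *    glock_set_object(tentative)
     *                                    insert_inode_hash()
     *                                      -> no duplicate check
     *                                    glock_set_object()
     *                                      -> complains: gl_object
     *                                         still points at the
     *                                         tentative inode
     *    verification fails
     *    iget_failed()
     *      -> tentative inode goes away
     */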

We noticed this bug after adding commit 486408d690 ("gfs2: Cancel
remote delete work asynchronously"), which allowed delete_work_func()
to race with gfs2_create_inode(), but the same race exists for
open-by-handle.

Fix that by switching from insert_inode_hash() to
insert_inode_locked4(), which does check for duplicate inodes.  We know
we've just managed to allocate the new inode, so any inode tentatively
created by gfs2_inode_lookup() will eventually go away and
insert_inode_locked4() will always succeed.
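
As a sketch (not the literal diff), the insertion step in
gfs2_create_inode() then looks something like this; iget_test() is
assumed to be the same address-comparison callback that
gfs2_inode_lookup() passes to iget5_locked():

    static int iget_test(struct inode *inode, void *opaque)
    {
            u64 no_addr = *(u64 *)opaque;

            return GFS2_I(inode)->i_no_addr == no_addr;
    }

            /*
             * Instead of insert_inode_hash(inode): a duplicate can
             * only be a dying tentative inode, which
             * insert_inode_locked4() waits out before retrying, so
             * the insert cannot ultimately fail.
             */
            error = insert_inode_locked4(inode, ip->i_no_addr,
                                         iget_test, &ip->i_no_addr);
            BUG_ON(error);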

In addition, stop flushing the inode glock work (this can now only make
things worse), and clean up glock_{set,clear}_object for the inode
glock somewhat.

Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2021-12-02 12:41:10 +01:00
acl.c
acl.h
aops.c
aops.h
bmap.c
bmap.h
dentry.c
dir.c
dir.h
export.c
file.c
gfs2.h
glock.c
glock.h
glops.c
glops.h
incore.h
inode.c
inode.h
Kconfig
lock_dlm.c
log.c
log.h
lops.c
lops.h
main.c
Makefile
meta_io.c
meta_io.h
ops_fstype.c
quota.c
quota.h
recovery.c
recovery.h
rgrp.c
rgrp.h
super.c
super.h
sys.c
sys.h
trace_gfs2.h
trans.c
trans.h
util.c
util.h
xattr.c
xattr.h