syzbot


possible deadlock in vfs_rename (2)

Status: upstream: reported on 2024/12/31 00:31
Reported-by: syzbot+e7f6162c3a4c0b0f3133@syzkaller.appspotmail.com
First crash: 94d, last: 20d
Similar bugs (4)
Kernel | Title | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
linux-5.15 | possible deadlock in vfs_rename | | | | 13 | 18d | 331d | 0/3 | upstream: reported on 2024/05/08 00:59
upstream | possible deadlock in vfs_rename kernel | | | | 1 | 786d | 786d | 0/28 | closed as invalid on 2023/02/08 16:28
linux-6.1 | possible deadlock in vfs_rename | | | | 8 | 200d | 235d | 0/3 | auto-obsoleted due to no activity on 2024/12/25 19:29
upstream | possible deadlock in vfs_rename (2) ntfs3 | | | | 3 | 77d | 125d | 0/28 | upstream: reported on 2024/11/30 00:07

Sample crash report:
======================================================
WARNING: possible circular locking dependency detected
6.1.131-syzkaller #0 Not tainted
------------------------------------------------------
syz.2.958/8442 is trying to acquire lock:
ffff888056305fa0 (&type->i_mutex_dir_key#8){++++}-{3:3}, at: inode_lock include/linux/fs.h:758 [inline]
ffff888056305fa0 (&type->i_mutex_dir_key#8){++++}-{3:3}, at: vfs_rename+0x814/0x10f0 fs/namei.c:4839

but task is already holding lock:
ffff8880710ed260 (&type->i_mutex_dir_key#8/2){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:793 [inline]
ffff8880710ed260 (&type->i_mutex_dir_key#8/2){+.+.}-{3:3}, at: vfs_rename+0x7a2/0x10f0 fs/namei.c:4837

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #2 (&type->i_mutex_dir_key#8/2){+.+.}-{3:3}:
       lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
       down_write_nested+0x39/0x60 kernel/locking/rwsem.c:1689
       inode_lock_nested include/linux/fs.h:793 [inline]
       xattr_rmdir fs/reiserfs/xattr.c:106 [inline]
       delete_one_xattr+0x102/0x2f0 fs/reiserfs/xattr.c:338
       reiserfs_for_each_xattr+0x9b2/0xb40 fs/reiserfs/xattr.c:311
       reiserfs_delete_xattrs+0x1b/0x80 fs/reiserfs/xattr.c:364
       reiserfs_evict_inode+0x20c/0x460 fs/reiserfs/inode.c:53
       evict+0x529/0x930 fs/inode.c:705
       d_delete_notify include/linux/fsnotify.h:267 [inline]
       vfs_rmdir+0x381/0x4b0 fs/namei.c:4204
       do_rmdir+0x3a2/0x590 fs/namei.c:4252
       __do_sys_unlinkat fs/namei.c:4432 [inline]
       __se_sys_unlinkat fs/namei.c:4426 [inline]
       __x64_sys_unlinkat+0xdc/0xf0 fs/namei.c:4426
       do_syscall_x64 arch/x86/entry/common.c:51 [inline]
       do_syscall_64+0x3b/0xb0 arch/x86/entry/common.c:81
       entry_SYSCALL_64_after_hwframe+0x68/0xd2

-> #1 (&type->i_mutex_dir_key#8/3){+.+.}-{3:3}:
       lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
       down_write_nested+0x39/0x60 kernel/locking/rwsem.c:1689
       inode_lock_nested include/linux/fs.h:793 [inline]
       open_xa_root fs/reiserfs/xattr.c:127 [inline]
       open_xa_dir+0x132/0x670 fs/reiserfs/xattr.c:152
       xattr_lookup+0x24/0x280 fs/reiserfs/xattr.c:395
       reiserfs_xattr_set_handle+0xf9/0xd70 fs/reiserfs/xattr.c:533
       reiserfs_xattr_set+0x44e/0x570 fs/reiserfs/xattr.c:633
       __vfs_setxattr+0x3e7/0x420 fs/xattr.c:182
       __vfs_setxattr_noperm+0x12a/0x5e0 fs/xattr.c:216
       vfs_setxattr+0x21d/0x420 fs/xattr.c:309
       ovl_do_setxattr fs/overlayfs/overlayfs.h:252 [inline]
       ovl_setxattr fs/overlayfs/overlayfs.h:264 [inline]
       ovl_make_workdir fs/overlayfs/super.c:1435 [inline]
       ovl_get_workdir+0xdfe/0x17b0 fs/overlayfs/super.c:1539
       ovl_fill_super+0x1b85/0x2a20 fs/overlayfs/super.c:2095
       mount_nodev+0x52/0xe0 fs/super.c:1489
       legacy_get_tree+0xeb/0x180 fs/fs_context.c:632
       vfs_get_tree+0x88/0x270 fs/super.c:1573
       do_new_mount+0x2ba/0xb40 fs/namespace.c:3056
       do_mount fs/namespace.c:3399 [inline]
       __do_sys_mount fs/namespace.c:3607 [inline]
       __se_sys_mount+0x2d5/0x3c0 fs/namespace.c:3584
       do_syscall_x64 arch/x86/entry/common.c:51 [inline]
       do_syscall_64+0x3b/0xb0 arch/x86/entry/common.c:81
       entry_SYSCALL_64_after_hwframe+0x68/0xd2

-> #0 (&type->i_mutex_dir_key#8){++++}-{3:3}:
       check_prev_add kernel/locking/lockdep.c:3090 [inline]
       check_prevs_add kernel/locking/lockdep.c:3209 [inline]
       validate_chain+0x1661/0x5950 kernel/locking/lockdep.c:3825
       __lock_acquire+0x125b/0x1f80 kernel/locking/lockdep.c:5049
       lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
       down_write+0x36/0x60 kernel/locking/rwsem.c:1573
       inode_lock include/linux/fs.h:758 [inline]
       vfs_rename+0x814/0x10f0 fs/namei.c:4839
       do_renameat2+0xde0/0x1440 fs/namei.c:5027
       __do_sys_renameat2 fs/namei.c:5060 [inline]
       __se_sys_renameat2 fs/namei.c:5057 [inline]
       __x64_sys_renameat2+0xce/0xe0 fs/namei.c:5057
       do_syscall_x64 arch/x86/entry/common.c:51 [inline]
       do_syscall_64+0x3b/0xb0 arch/x86/entry/common.c:81
       entry_SYSCALL_64_after_hwframe+0x68/0xd2
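
Each step of the chain above is entered from a different syscall: #2 from unlinkat(AT_REMOVEDIR) evicting a reiserfs directory and deleting its xattrs, #1 from mount() building an overlayfs workdir on a reiserfs upper layer, and #0 from renameat2(). The sketch below only illustrates those three entry points; every path and mount option in it is invented, and it is not a reproducer (syzbot lists none for this bug).

/*
 * Illustration only: the userspace entry points seen at the bottom of the
 * three lockdep chain elements above. Paths and mount options are
 * hypothetical; this is not a known reproducer for the report.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/mount.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	/* #1: mount() -> ovl_fill_super() -> ovl_make_workdir() ->
	 *     reiserfs_xattr_set(), which locks the reiserfs xattr root dir. */
	if (mount("overlay", "/mnt/merged", "overlay", 0,
		  "lowerdir=/mnt/lower,upperdir=/mnt/reiserfs/upper,"
		  "workdir=/mnt/reiserfs/work") != 0)
		perror("mount");

	/* #2: unlinkat(AT_REMOVEDIR) -> vfs_rmdir() -> evict() ->
	 *     reiserfs_delete_xattrs(), which locks per-inode xattr dirs. */
	if (unlinkat(AT_FDCWD, "/mnt/reiserfs/upper/dir", AT_REMOVEDIR) != 0)
		perror("unlinkat");

	/* #0: renameat2() -> vfs_rename(), which takes the source and target
	 *     directory inode locks. */
	if (syscall(SYS_renameat2, AT_FDCWD, "/mnt/merged/a",
		    AT_FDCWD, "/mnt/merged/b", 0) != 0)
		perror("renameat2");

	return 0;
}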

other info that might help us debug this:

Chain exists of:
  &type->i_mutex_dir_key#8 --> &type->i_mutex_dir_key#8/3 --> &type->i_mutex_dir_key#8/2

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&type->i_mutex_dir_key#8/2);
                               lock(&type->i_mutex_dir_key#8/3);
                               lock(&type->i_mutex_dir_key#8/2);
  lock(&type->i_mutex_dir_key#8);

 *** DEADLOCK ***
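
The interleaving lockdep prints is the usual lock-order inversion, here spread across nesting subclasses of the same i_mutex_dir_key class. Collapsed to two locks, the pattern is the userspace sketch below, with plain pthread mutexes standing in for the directory inode rwsems; names and timing are invented, and running it typically hangs, just as the rename task would.

/*
 * Lock-order inversion in miniature: one thread takes lock_2 then lock_0,
 * the other takes them in the opposite order. With the sleeps widening the
 * race window, both second acquisitions usually block forever, so the
 * program normally hangs -- the userspace analogue of the scenario above.
 * Purely illustrative; none of this is kernel code.
 */
#include <pthread.h>
#include <unistd.h>

static pthread_mutex_t lock_2 = PTHREAD_MUTEX_INITIALIZER; /* cf. i_mutex_dir_key#8/2 */
static pthread_mutex_t lock_0 = PTHREAD_MUTEX_INITIALIZER; /* cf. i_mutex_dir_key#8   */

static void *cpu0(void *arg)
{
	pthread_mutex_lock(&lock_2);   /* lock(&type->i_mutex_dir_key#8/2); */
	sleep(1);
	pthread_mutex_lock(&lock_0);   /* lock(&type->i_mutex_dir_key#8);   */
	pthread_mutex_unlock(&lock_0);
	pthread_mutex_unlock(&lock_2);
	return NULL;
}

static void *cpu1(void *arg)
{
	pthread_mutex_lock(&lock_0);   /* the order already recorded on the other CPU */
	sleep(1);
	pthread_mutex_lock(&lock_2);
	pthread_mutex_unlock(&lock_2);
	pthread_mutex_unlock(&lock_0);
	return NULL;
}

int main(void)
{
	pthread_t t0, t1;

	pthread_create(&t0, NULL, cpu0, NULL);
	pthread_create(&t1, NULL, cpu1, NULL);
	pthread_join(t0, NULL);
	pthread_join(t1, NULL);
	return 0;
}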

5 locks held by syz.2.958/8442:
 #0: ffff8880292d2460 (sb_writers#15){.+.+}-{0:0}, at: mnt_want_write+0x3b/0x80 fs/namespace.c:393
 #1: ffff8880292d2748 (&type->s_vfs_rename_key#2){+.+.}-{3:3}, at: lock_rename fs/namei.c:3038 [inline]
 #1: ffff8880292d2748 (&type->s_vfs_rename_key#2){+.+.}-{3:3}, at: do_renameat2+0x5a0/0x1440 fs/namei.c:4966
 #2: ffff8880710e82e0 (&type->i_mutex_dir_key#8/1){+.+.}-{3:3}, at: lock_rename fs/namei.c:3039 [inline]
 #2: ffff8880710e82e0 (&type->i_mutex_dir_key#8/1){+.+.}-{3:3}, at: do_renameat2+0x61e/0x1440 fs/namei.c:4966
 #3: ffff888056300980 (&type->i_mutex_dir_key#8/5){+.+.}-{3:3}, at: do_renameat2+0x65a/0x1440 fs/namei.c:4966
 #4: ffff8880710ed260 (&type->i_mutex_dir_key#8/2){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:793 [inline]
 #4: ffff8880710ed260 (&type->i_mutex_dir_key#8/2){+.+.}-{3:3}, at: vfs_rename+0x7a2/0x10f0 fs/namei.c:4837

stack backtrace:
CPU: 1 PID: 8442 Comm: syz.2.958 Not tainted 6.1.131-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2025
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
 check_noncircular+0x2fa/0x3b0 kernel/locking/lockdep.c:2170
 check_prev_add kernel/locking/lockdep.c:3090 [inline]
 check_prevs_add kernel/locking/lockdep.c:3209 [inline]
 validate_chain+0x1661/0x5950 kernel/locking/lockdep.c:3825
 __lock_acquire+0x125b/0x1f80 kernel/locking/lockdep.c:5049
 lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
 down_write+0x36/0x60 kernel/locking/rwsem.c:1573
 inode_lock include/linux/fs.h:758 [inline]
 vfs_rename+0x814/0x10f0 fs/namei.c:4839
 do_renameat2+0xde0/0x1440 fs/namei.c:5027
 __do_sys_renameat2 fs/namei.c:5060 [inline]
 __se_sys_renameat2 fs/namei.c:5057 [inline]
 __x64_sys_renameat2+0xce/0xe0 fs/namei.c:5057
 do_syscall_x64 arch/x86/entry/common.c:51 [inline]
 do_syscall_64+0x3b/0xb0 arch/x86/entry/common.c:81
 entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f9d6ad8d169
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f9d6bb3f038 EFLAGS: 00000246 ORIG_RAX: 000000000000013c
RAX: ffffffffffffffda RBX: 00007f9d6afa6080 RCX: 00007f9d6ad8d169
RDX: ffffffffffffff9c RSI: 0000400000000400 RDI: ffffffffffffff9c
RBP: 00007f9d6ae0e2a0 R08: 0000000000000000 R09: 0000000000000000
R10: 0000400000000440 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000001 R14: 00007f9d6afa6080 R15: 00007fff0ac0f728
 </TASK>
REISERFS warning (device loop2): vs-13060 reiserfs_update_sd_size: stat data of object [1 2 0x0 SD] (nlink == 1) not found (pos 2)

Crashes (3):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2025/03/15 15:28 | linux-6.1.y | 344a09659766 | e2826670 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan | possible deadlock in vfs_rename
2025/01/09 03:00 | linux-6.1.y | 7dc732d24ff7 | f3558dbf | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan | possible deadlock in vfs_rename
2024/12/31 00:30 | linux-6.1.y | 563edd786f0a | d3ccff63 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan-arm64 | possible deadlock in vfs_rename
* Struck through repros no longer work on HEAD.