syzbot


INFO: rcu detected stall in dev_ioctl (2)

Status: auto-obsoleted due to no activity on 2025/04/05 21:09
Subsystems: mm
First crash: 187d, last: 103d
Similar bugs (7)
Kernel Title Repro Cause bisect Fix bisect Count Last Reported Patched Status
linux-4.19 INFO: rcu detected stall in dev_ioctl 1 1690d 1690d 0/1 auto-closed as invalid on 2020/12/30 14:33
linux-4.19 INFO: rcu detected stall in dev_ioctl (2) 1 1328d 1328d 0/1 auto-closed as invalid on 2021/12/27 14:21
upstream INFO: rcu detected stall in dev_ioctl batman 18 1103d 1971d 0/28 auto-closed as invalid on 2022/08/09 13:29
linux-4.19 INFO: rcu detected stall in dev_ioctl (3) 1 1205d 1205d 0/1 auto-closed as invalid on 2022/04/29 15:27
linux-5.15 INFO: rcu detected stall in dev_ioctl 1 138d 138d 0/3 auto-obsoleted due to no activity on 2025/03/11 17:46
upstream BUG: soft lockup in dev_ioctl batman 2 338d 403d 0/28 auto-obsoleted due to no activity on 2024/08/13 06:45
android-5-15 BUG: soft lockup in dev_ioctl 7 266d 356d 0/2 auto-obsoleted due to no activity on 2024/10/24 08:58

Sample crash report:
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
rcu: 	Tasks blocked on level-0 rcu_node (CPUs 0-1): P9130/1:b..l
rcu: 	(detected by 1, t=10503 jiffies, g=19257, q=1333 ncpus=2)
task:syz.1.837       state:R  running task     stack:26752 pid:9130  tgid:9122  ppid:5828   flags:0x00004006
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5369 [inline]
 __schedule+0x1850/0x4c30 kernel/sched/core.c:6756
 preempt_schedule_irq+0xfb/0x1c0 kernel/sched/core.c:7078
 irqentry_exit+0x5e/0x90 kernel/entry/common.c:354
 asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:702
RIP: 0010:kasan_check_range+0xb/0x290 mm/kasan/generic.c:188
Code: 6f e3 ff 90 0f 0b 66 0f 1f 84 00 00 00 00 00 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 66 0f 1f 00 55 41 57 41 56 41 54 <53> b0 01 48 85 f6 0f 84 a0 01 00 00 4c 8d 04 37 49 39 f8 0f 82 56
RSP: 0018:ffffc9001b7cf5f8 EFLAGS: 00000246
RAX: 0000000000000000 RBX: 0000000000000001 RCX: ffffffff817ad6a0
RDX: 0000000000000000 RSI: 0000000000000008 RDI: ffffffff90198170
RBP: ffffc9001b7cf748 R08: ffffffff821164c6 R09: 1ffff11003b40920
R10: dffffc0000000000 R11: ffffed1003b40921 R12: 1ffff920036f9ed0
R13: ffffffff82116537 R14: ffff88801da04944 R15: dffffc0000000000
 instrument_atomic_read include/linux/instrumented.h:68 [inline]
 _test_bit include/asm-generic/bitops/instrumented-non-atomic.h:141 [inline]
 cpumask_test_cpu include/linux/cpumask.h:570 [inline]
 cpu_online include/linux/cpumask.h:1117 [inline]
 trace_lock_release include/trace/events/lock.h:69 [inline]
 lock_release+0xb0/0xa30 kernel/locking/lockdep.c:5860
 rcu_lock_release include/linux/rcupdate.h:347 [inline]
 rcu_read_unlock include/linux/rcupdate.h:880 [inline]
 page_ext_put+0xa3/0xc0 mm/page_ext.c:550
 __reset_page_owner+0x2de/0x430 mm/page_owner.c:300
 reset_page_owner include/linux/page_owner.h:25 [inline]
 free_pages_prepare mm/page_alloc.c:1127 [inline]
 free_unref_page+0xd3f/0x1010 mm/page_alloc.c:2659
 discard_slab mm/slub.c:2688 [inline]
 __put_partials+0x160/0x1c0 mm/slub.c:3157
 put_cpu_partial+0x17c/0x250 mm/slub.c:3232
 __slab_free+0x290/0x380 mm/slub.c:4483
 qlink_free mm/kasan/quarantine.c:163 [inline]
 qlist_free_all+0x9a/0x140 mm/kasan/quarantine.c:179
 kasan_quarantine_reduce+0x14f/0x170 mm/kasan/quarantine.c:286
 __kasan_kmalloc+0x23/0xb0 mm/kasan/common.c:385
 kasan_kmalloc include/linux/kasan.h:260 [inline]
 __kmalloc_cache_noprof+0x243/0x390 mm/slub.c:4329
 kmalloc_noprof include/linux/slab.h:901 [inline]
 kzalloc_noprof include/linux/slab.h:1037 [inline]
 call_usermodehelper_setup+0x8e/0x270 kernel/umh.c:362
 call_modprobe kernel/module/kmod.c:97 [inline]
 __request_module+0x3cd/0x640 kernel/module/kmod.c:172
 dev_ioctl+0x52e/0x1340 net/core/dev_ioctl.c:709
 sock_do_ioctl+0x240/0x460 net/socket.c:1223
 sock_ioctl+0x626/0x8e0 net/socket.c:1328
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:906 [inline]
 __se_sys_ioctl+0xf5/0x170 fs/ioctl.c:892
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f471f785d29
RSP: 002b:00007f4720695038 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007f471f975fa0 RCX: 00007f471f785d29
RDX: 0000000020000380 RSI: 0000000000008933 RDI: 0000000000000004
RBP: 00007f471f801b08 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007f471f975fa0 R15: 00007ffda0f8bbe8
 </TASK>
rcu: rcu_preempt kthread starved for 747 jiffies! g19257 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=0
rcu: 	Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
rcu: RCU grace-period kthread stack dump:
task:rcu_preempt     state:R  running task     stack:26264 pid:17    tgid:17    ppid:2      flags:0x00004000
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5369 [inline]
 __schedule+0x1850/0x4c30 kernel/sched/core.c:6756
 __schedule_loop kernel/sched/core.c:6833 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6848
 schedule_timeout+0x15a/0x290 kernel/time/sleep_timeout.c:99
 rcu_gp_fqs_loop+0x2df/0x1330 kernel/rcu/tree.c:2045
 rcu_gp_kthread+0xa7/0x3b0 kernel/rcu/tree.c:2247
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
rcu: Stack dump where RCU GP kthread last ran:
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 16 Comm: ksoftirqd/0 Not tainted 6.13.0-rc5-syzkaller-00142-g8ce4f287524c #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
RIP: 0010:stack_access_ok arch/x86/kernel/unwind_orc.c:393 [inline]
RIP: 0010:deref_stack_reg+0xb7/0x210 arch/x86/kernel/unwind_orc.c:403
Code: f3 48 c1 eb 03 0f b6 04 13 84 c0 0f 85 14 01 00 00 41 83 3e 00 74 1c 4c 39 c6 77 17 4d 39 c7 76 12 49 8d 40 08 48 39 f0 76 09 <4c> 39 f8 0f 86 b1 00 00 00 49 8d 7e 28 48 89 f8 48 c1 e8 03 80 3c
RSP: 0018:ffffc900001564f8 EFLAGS: 00000206
RAX: ffffc90000156a48 RBX: 1ffff9200002acc4 RCX: 0000000000000000
RDX: dffffc0000000000 RSI: ffffc90000150000 RDI: ffffc90000156620
RBP: ffffc90000156628 R08: ffffc90000156a40 R09: 0000000000000000
R10: ffffc90000156670 R11: fffff5200002acd0 R12: 1ffff9200002acc5
R13: 1ffff9200002acc6 R14: ffffc90000156620 R15: ffffc90000158000
FS:  0000000000000000(0000) GS:ffff8880b8600000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f52e0b452d8 CR3: 0000000044b54000 CR4: 00000000003526f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <NMI>
 </NMI>
 <TASK>
 unwind_next_frame+0x1799/0x22d0
 arch_stack_walk+0x11c/0x150 arch/x86/kernel/stacktrace.c:25
 stack_trace_save+0x118/0x1d0 kernel/stacktrace.c:122
 kasan_save_stack mm/kasan/common.c:47 [inline]
 kasan_save_track+0x3f/0x80 mm/kasan/common.c:68
 unpoison_slab_object mm/kasan/common.c:319 [inline]
 __kasan_slab_alloc+0x66/0x80 mm/kasan/common.c:345
 kasan_slab_alloc include/linux/kasan.h:250 [inline]
 slab_post_alloc_hook mm/slub.c:4119 [inline]
 slab_alloc_node mm/slub.c:4168 [inline]
 kmem_cache_alloc_noprof+0x1d9/0x380 mm/slub.c:4175
 skb_ext_maybe_cow net/core/skbuff.c:6942 [inline]
 skb_ext_add+0x1d6/0x910 net/core/skbuff.c:7016
 nf_bridge_unshare net/bridge/br_netfilter_hooks.c:169 [inline]
 br_nf_forward_ip+0xd8/0x7b0 net/bridge/br_netfilter_hooks.c:712
 nf_hook_entry_hookfn include/linux/netfilter.h:154 [inline]
 nf_hook_slow+0xc3/0x220 net/netfilter/core.c:626
 nf_hook include/linux/netfilter.h:269 [inline]
 NF_HOOK+0x2a7/0x460 include/linux/netfilter.h:312
 __br_forward+0x489/0x660 net/bridge/br_forward.c:115
 deliver_clone net/bridge/br_forward.c:131 [inline]
 maybe_deliver+0xb3/0x150 net/bridge/br_forward.c:190
 br_flood+0x2e4/0x660 net/bridge/br_forward.c:236
 br_handle_frame_finish+0x18ba/0x1fe0 net/bridge/br_input.c:215
 br_nf_hook_thresh+0x472/0x590
 br_nf_pre_routing_finish_ipv6+0xaa0/0xdd0
 NF_HOOK include/linux/netfilter.h:314 [inline]
 br_nf_pre_routing_ipv6+0x379/0x770 net/bridge/br_netfilter_ipv6.c:184
 nf_hook_entry_hookfn include/linux/netfilter.h:154 [inline]
 nf_hook_bridge_pre net/bridge/br_input.c:277 [inline]
 br_handle_frame+0x9fd/0x1530 net/bridge/br_input.c:424
 __netif_receive_skb_core+0x14eb/0x4690 net/core/dev.c:5568
 __netif_receive_skb_one_core net/core/dev.c:5672 [inline]
 __netif_receive_skb+0x12f/0x650 net/core/dev.c:5787
 process_backlog+0x662/0x15b0 net/core/dev.c:6119
 __napi_poll+0xcb/0x490 net/core/dev.c:6885
 napi_poll net/core/dev.c:6954 [inline]
 net_rx_action+0x89b/0x1240 net/core/dev.c:7076
 handle_softirqs+0x2d4/0x9b0 kernel/softirq.c:561
 run_ksoftirqd+0xca/0x130 kernel/softirq.c:950
 smpboot_thread_fn+0x544/0xa30 kernel/smpboot.c:164
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
net_ratelimit: 25446 callbacks suppressed
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:36:2a:ad:5b:c0:96, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
net_ratelimit: 29782 callbacks suppressed
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:36:2a:ad:5b:c0:96, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
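
The registers saved at the syscall boundary in the stalled task's trace (ORIG_RAX 0x10 is __NR_ioctl on x86-64, RDI 4 is the socket fd, RSI 0x8933 is SIOCGIFINDEX, RDX points at the ioctl argument) imply a userspace call roughly like the minimal sketch below. This is not the syzkaller reproducer, and the interface name is a stand-in: dev_ioctl() only reaches __request_module() (apparently via an inlined dev_load(), per net/core/dev_ioctl.c:709 in the trace) when no device matches the requested name and the caller has the needed capability. The stall is then reported while the kmalloc in call_usermodehelper_setup() drains the KASAN quarantine (qlist_free_all -> free_unref_page -> __reset_page_owner), as the trace shows.

    /* Hedged reconstruction of the triggering call, based only on the
     * register dump in the report above; the real reproducer and the
     * interface name it uses are unknown. */
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>          /* struct ifreq, IFNAMSIZ */

    int main(void)
    {
        struct ifreq ifr;
        int fd = socket(AF_INET, SOCK_DGRAM, 0);

        memset(&ifr, 0, sizeof(ifr));
        /* Hypothetical name: any name with no matching netdev should make
         * dev_ioctl() attempt module autoload ("netdev-<name>"). */
        strncpy(ifr.ifr_name, "nonexistent0", IFNAMSIZ - 1);

        /* cmd 0x8933 == SIOCGIFINDEX, matching RSI in the report. */
        ioctl(fd, SIOCGIFINDEX, &ifr);
        return 0;
    }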

Crashes (3):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2025/01/05 21:07 net 8ce4f287524c f3558dbf .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce INFO: rcu detected stall in dev_ioctl
2024/11/29 21:30 net f1cd565ce577 68914665 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce INFO: rcu detected stall in dev_ioctl
2024/10/13 22:22 net 174714f0e505 084d8178 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce INFO: rcu detected stall in dev_ioctl