qemu/accel-tcg-fix-race-in-cpu_exec_step_atomic-bug-18630.patch
Jiabo Feng e4214041dc QEMU update to version 4.1.0-80
- accel/tcg: fix race in cpu_exec_step_atomic (bug 1863025)
- pci: assert configuration access is within bounds
- io: remove io watch if TLS channel is closed during handshake

Signed-off-by: Jiabo Feng <fengjiabo1@huawei.com>
2023-09-11 19:54:51 +08:00

From a893da358918fb1f0fb0d243076bc63cd2f6e1e8 Mon Sep 17 00:00:00 2001
From: liuxiangdong <liuxiangdong5@huawei.com>
Date: Sun, 10 Sep 2023 21:27:42 +0800
Subject: [PATCH] accel/tcg: fix race in cpu_exec_step_atomic (bug 1863025)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The bug describes a race whereby cpu_exec_step_atomic can acquire a TB
which is invalidated by a tb_flush before we execute it. This doesn't
affect the other cpu_exec modes as a tb_flush by its nature can only
occur on a quiescent system. The race was described as:
  B2. tcg_cpu_exec => cpu_exec => tb_find => tb_gen_code
  B3. tcg_tb_alloc obtains a new TB

  C3. TB obtained with tb_lookup__cpu_state or tb_gen_code
      (same TB as B2)

  A3. start_exclusive critical section entered
  A4. do_tb_flush is called, TB memory freed/re-allocated
  A5. end_exclusive exits critical section

  B2. tcg_cpu_exec => cpu_exec => tb_find => tb_gen_code
  B3. tcg_tb_alloc reallocates TB from B2

  C4. start_exclusive critical section entered
  C5. cpu_tb_exec executes the TB code that was free in A4

The simplest fix is to widen the exclusive period to include the TB
lookup. As a result we can drop the complication of checking we are in
the exclusive region before we end it.
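
To make the reordering concrete, here is a minimal stand-alone sketch (not QEMU
code: a pthread mutex stands in for start_exclusive()/end_exclusive(), and a
heap-allocated string stands in for the TB pool; the names flusher and
step_atomic are illustrative). Because the lock is now taken before the lookup,
a concurrent flush can no longer free the block between the lookup and its
"execution". Build with cc -pthread if you want to run it.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static pthread_mutex_t exclusive = PTHREAD_MUTEX_INITIALIZER; /* start/end_exclusive stand-in */
static char *tb;                                              /* stand-in for a cached TB     */

/* Models do_tb_flush: frees and reallocates the cached TB under the exclusive lock. */
static void *flusher(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&exclusive);
        free(tb);
        tb = strdup("fresh tb");
        pthread_mutex_unlock(&exclusive);
    }
    return NULL;
}

/* Post-patch ordering: enter the exclusive region first, then look up and use the TB. */
static void step_atomic(void)
{
    pthread_mutex_lock(&exclusive);    /* widened exclusive period starts before the lookup */
    char *cur = tb;                    /* lookup (or codegen) happens inside the region ...  */
    volatile char first = cur[0];      /* ... so using the TB cannot race with a flush      */
    (void)first;
    pthread_mutex_unlock(&exclusive);
}

int main(void)
{
    pthread_t t;

    tb = strdup("initial tb");
    pthread_create(&t, NULL, flusher, NULL);
    for (int i = 0; i < 100000; i++) {
        step_atomic();
    }
    pthread_join(t, NULL);
    free(tb);
    puts("done: every lookup and use happened inside the exclusive region");
    return 0;
}
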
Cc: Yifan <me@yifanlu.com>
Buglink: https://bugs.launchpad.net/qemu/+bug/1863025
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20200214144952.15502-1-alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: liuxiangdong <liuxiangdong5@huawei.com>
---
 accel/tcg/cpu-exec.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
index 6c85c3ee1e..97803f0f7f 100644
--- a/accel/tcg/cpu-exec.c
+++ b/accel/tcg/cpu-exec.c
@@ -243,6 +243,8 @@ void cpu_exec_step_atomic(CPUState *cpu)
     volatile bool in_exclusive_region = false;
 
     if (sigsetjmp(cpu->jmp_env, 0) == 0) {
+        start_exclusive();
+
         tb = tb_lookup__cpu_state(cpu, &pc, &cs_base, &flags, cf_mask);
         if (tb == NULL) {
             mmap_lock();
@@ -250,8 +252,6 @@ void cpu_exec_step_atomic(CPUState *cpu)
             mmap_unlock();
         }
 
-        start_exclusive();
-
         /* Since we got here, we know that parallel_cpus must be true. */
         parallel_cpus = false;
         in_exclusive_region = true;
@@ -274,14 +274,14 @@ void cpu_exec_step_atomic(CPUState *cpu)
         assert_no_pages_locked();
     }
 
-    if (in_exclusive_region) {
-        /* We might longjump out of either the codegen or the
-         * execution, so must make sure we only end the exclusive
-         * region if we started it.
-         */
-        parallel_cpus = true;
-        end_exclusive();
-    }
+    /*
+     * As we start the exclusive region before codegen we must still
+     * be in the region if we longjump out of either the codegen or
+     * the execution.
+     */
+    g_assert(in_exclusive_region);
+    parallel_cpus = true;
+    end_exclusive();
 }
 
 struct tb_desc {
--
2.41.0.windows.1