author     Ilya Leoshkevich      2022-07-11 20:56:38 +0200
committer  Richard Henderson     2022-07-12 07:13:33 +0200
commit     b0f650f0477ae775e0915e3d60ab5110ad5e9157
tree       ae709274082e661508bfa4ea9879c702ba6b5d02
parent     tcg: Fix returned type in alloc_code_gen_buffer_splitwx_memfd()
accel/tcg: Fix unaligned stores to s390x low-address-protected lowcore
If low-address protection is active, unaligned stores to non-protected
parts of lowcore lead to protection exceptions. The reason is that in
such cases the tlb_fill() call in store_helper_unaligned() covers the
[0, addr + size) range, which contains the protected portion of
lowcore. This range is too large.
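
For illustration, here is a minimal standalone model of the over-wide
range (not the QEMU code; the 4 KiB page size and the lowcore offset
0x208 are assumptions chosen for the demo). On s390x, low-address
protection covers bytes 0-511 and 4096-4607, so a store at 0x208 is
itself unprotected, yet the filled range starts at the page base, which
for lowcore is 0:

    /* Model of the over-wide fill range described above: the range handed to
     * the TLB starts at the page base (0x0 for lowcore) and therefore
     * overlaps the protected low addresses, although the store does not. */
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_MASK (~(uint64_t)0xfff)          /* assume 4 KiB pages */

    int main(void)
    {
        uint64_t addr = 0x208, size = 8;          /* hypothetical lowcore store */
        uint64_t base = addr & PAGE_MASK;         /* 0x0: start of the lowcore page */

        printf("filled range: [0x%" PRIx64 ", 0x%" PRIx64 ")\n",
               base, addr + size);                /* prints [0x0, 0x210) */
        return 0;
    }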
The most straightforward fix would be to make sure we stay within the
original [addr, addr + size) range. However, if an unaligned access
affects only a single page, we don't need to call tlb_fill() in
store_helper_unaligned() at all, since it would be identical to
the previous tlb_fill() call in store_helper() and therefore a no-op.
If an unaligned access covers multiple pages, the problem does not
arise, because the second tlb_fill() is confined to the second page,
which lies entirely inside the accessed range.
Therefore simply skip TLB handling in store_helper_unaligned() if we
are dealing with a single page.
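
One way to express the single-page condition is to compare the page of
the first and of the last byte of the store. Below is a minimal
standalone sketch of that idea (not the actual patch to
accel/tcg/cputlb.c; the helper name and page-size constant are invented
for illustration):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_MASK (~(uint64_t)0xfff)          /* assume 4 KiB pages */

    /* Hypothetical helper: the unaligned slow path only needs to fill a
     * second TLB entry when the store actually spans two pages; for a
     * single-page store the earlier tlb_fill() in store_helper() already
     * covered it, so no extra fill (and no over-wide range) is issued. */
    static bool needs_second_fill(uint64_t addr, uint64_t size)
    {
        uint64_t page1 = addr & PAGE_MASK;
        uint64_t page2 = (addr + size - 1) & PAGE_MASK;

        return page1 != page2;
    }

    int main(void)
    {
        /* Single-page unaligned store in lowcore: skip the second fill. */
        printf("0x208/8: %s\n", needs_second_fill(0x208, 8) ? "fill" : "skip");
        /* Page-crossing store: the second page must still be filled. */
        printf("0xffc/8: %s\n", needs_second_fill(0xffc, 8) ? "fill" : "skip");
        return 0;
    }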
Fixes: 2bcf018340cb ("s390x/tcg: low-address protection support")
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Message-Id: <20220711185640.3558813-2-iii@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>