Currently, we have partial Spectre mitigation: we protect heap accesses with dynamic bounds checks. Specifically, we guard against errant accesses on the misspeculated path beyond the bounds-check conditional branch by adding a conditional move that is also dependent on the bounds-check condition. This data dependency on the condition is not speculated and thus will always pick the "safe" value (in the heap case, a NULL address) on the misspeculated path, until the pipeline flushes and recovers onto the correct path.

This PR uses the same technique both for table accesses -- used to implement Wasm tables -- and for jumptables, used to implement Wasm `br_table` instructions. In the case of Wasm tables, the cmove picks the table base address on the misspeculated path. This is equivalent to reading the first table entry. This prevents loads of arbitrary data addresses on the misspeculated path. In the case of `br_table`, the cmove picks index 0 on the misspeculated path. This is safer than allowing a branch to an address loaded from an index under misspeculation (i.e., it preserves control-flow integrity even under misspeculation).

The table mitigation is controlled by a Cranelift setting, on by default. The br_table mitigation is always on, because it is part of the single lowering pseudoinstruction. In both cases, the impact should be minimal: a single extra cmove in a (relatively) rarely-used operation.

The table mitigation is architecture-independent (happens during legalization); the br_table mitigation has been implemented for both x64 and aarch64. (I don't know enough about s390x to implement this confidently there, but would happily review a PR to do the same on that platform.)
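For readers less familiar with the technique, here is a minimal sketch in plain Rust of the two selection patterns described above. It is not the actual Cranelift lowering, and the function and parameter names are made up for illustration; in particular, the real mitigation emits a hardware conditional move, so the chosen value is data-dependent on the bounds-check condition rather than control-dependent on a (speculatable) branch, whereas the Rust `if`/`else` below only illustrates the value-selection logic.

```rust
/// Table case (sketch, hypothetical names): out-of-bounds indices --
/// including indices seen only on a misspeculated path -- resolve to the
/// table base, i.e. the address of the first table entry, so no arbitrary
/// data address can be loaded under misspeculation.
fn table_entry_addr(base: usize, index: u64, bound: u64, element_size: usize) -> usize {
    let computed = base + (index as usize) * element_size;
    // Cranelift emits this selection as a conditional move, not a branch.
    if index < bound { computed } else { base }
}

/// br_table case (sketch, hypothetical names): out-of-range indices are
/// clamped to 0, so even a misspeculated indirect branch can only target an
/// entry that is actually in the jump table.
fn jump_table_index(index: u64, num_targets: u64) -> u64 {
    if index < num_targets { index } else { 0 }
}
```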
Chris Fallin, 3 years ago (committed by GitHub)
12 changed files with 300 additions and 21 deletions
@@ -0,0 +1,38 @@
test compile precise-output
set enable_safepoints=true
set enable_table_access_spectre_mitigation=true

target x86_64

function %table_set(i32, r64, i64 vmctx) {
    gv0 = vmctx
    gv1 = load.i64 notrap aligned gv0
    gv2 = load.i32 notrap aligned gv0 +8
    table0 = dynamic gv1, element_size 1, bound gv2, index_type i32

block0(v0: i32, v1: r64, v2: i64):
    v3 = table_addr.i64 table0, v0, +0
    store.r64 notrap aligned v1, v3
    return
}

; pushq %rbp
; movq %rsp, %rbp
; block0:
; movl 8(%rdx), %eax
; cmpl %eax, %edi
; jb label1; j label2
; block1:
; movl %edi, %r9d
; movq 0(%rdx), %rdx
; movq %rdx, %r8
; addq %r8, %r9, %r8
; cmpl %eax, %edi
; cmovnbq %rdx, %r8, %r8
; movq %rsi, 0(%r8)
; movq %rbp, %rsp
; popq %rbp
; ret
; block2:
; ud2 table_oob
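In the expected output above, the first `cmpl %eax, %edi` / `jb` pair is the ordinary bounds check, which falls through to the `ud2 table_oob` trap in block2 when the index is out of bounds. The Spectre mitigation is the second `cmpl %eax, %edi` followed by `cmovnbq %rdx, %r8, %r8`: when the index in %edi is not below the bound in %eax, the computed entry address in %r8 is replaced by the table base in %rdx, so a misspeculated out-of-bounds access can only touch the first table entry.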