The immediate splitting code contained a bug that caused the low and high
halves to be equal for i128 values. This is the root cause of
bjorn3/rustc_codegen_cranelift#1097 and likely the only bug preventing
cg_clif from bootstrapping rustc.
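For reference, a correct split of a 128-bit immediate into its two 64-bit halves looks roughly like this (names are illustrative, not the actual cg_clif code):

```rust
// Illustrative only: split a 128-bit immediate into the two 64-bit halves
// that get materialized separately.
fn split_i128_immediate(imm: i128) -> (u64, u64) {
    let low = imm as u64;          // lower 64 bits
    let high = (imm >> 64) as u64; // upper 64 bits; the bug produced the same value for both halves
    (low, high)
}
```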
In particular, introduce initial support for the MOVI and MVNI
instructions, with 8-bit elements. Also, treat vector constants
as 32- or 64-bit floating-point numbers, if their value allows
it, by relying on the architectural zero extension. Finally,
stop generating literal loads for 32-bit constants.
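As a rough sketch of the decision being made for a 128-bit vector constant (illustrative code following the strategy described above, not the backend's actual lowering):

```rust
// Illustrative classification of a 128-bit vector constant `value`.
fn classify_vector_const(value: u128) -> &'static str {
    let bytes = value.to_le_bytes();
    if bytes.iter().all(|&b| b == bytes[0]) {
        // Every byte is identical, so MOVI with 8-bit elements can materialize it.
        "movi/mvni with 8-bit elements"
    } else if (value >> 64) == 0 {
        // Upper 64 bits are zero: loading/moving a 32- or 64-bit FP value
        // zero-extends the rest of the vector register architecturally.
        "treat as a 32- or 64-bit floating-point constant"
    } else {
        // The general case still needs a full 128-bit literal load.
        "literal load"
    }
}
```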
Copyright (c) 2020, Arm Limited.
This commit removes the binaryen support for fuzzing from wasmtime,
instead switching over to `wasm-smith`. In general it's great to have
what fuzzing we can, but our binaryen support suffers from a few issues:
* The Rust crate, binaryen-sys, seems largely unmaintained at this
point. While we could likely take ownership and/or send PRs to update
the crate, the maintenance would largely fall on us.
* Currently the binaryen-sys crate doesn't support fuzzing anything
beyond MVP wasm, but we're interested at least in features like bulk
memory and reference types, and will also be interested in features
like module-linking. Supporting new features would require
implementation work in either binaryen or the binaryen-sys crate.
* We have 4-5 fuzz-bugs right now related to timeouts simply in
generating a module for wasmtime to fuzz. One investigation along
these lines in the past revealed a bug in binaryen itself, and in any
case these bugs would otherwise need to be investigated, reported,
and possibly fixed by us in upstream binaryen.
Overall I'm not sure at this point if maintaining binaryen fuzzing is
worth it with the advent of `wasm-smith`, which has similar goals for
wasm module generation but is much more readily maintainable on our
end.
Additionally, in this commit I've added a fuzz target based on wasm-smith's
`SwarmConfig` module generator, which should expand the coverage of tested
modules.
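For illustration, such a target looks roughly like the following (a sketch assuming wasm-smith's `ConfiguredModule`/`SwarmConfig` types and its `to_bytes` encoder; the real fuzz target also wires the generated module into wasmtime):

```rust
#![no_main]

use libfuzzer_sys::fuzz_target;
use wasm_smith::{ConfiguredModule, SwarmConfig};

fuzz_target!(|module: ConfiguredModule<SwarmConfig>| {
    // `SwarmConfig` is itself derived from the fuzz input, so each run picks a
    // different point in the configuration space before generating a module.
    let wasm = module.to_bytes();
    // ... hand `wasm` to wasmtime for compilation/instantiation (elided here) ...
    let _ = wasm;
});
```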
Closes #2163
This was added in c4e10227de; I think the original
reason (which I'm not entirely familiar with) may no longer be
applicable? In any case this is a significant difference between Windows and
other platforms, because it makes loads/stores in wasm code use manual
checks instead of relying on the guard page, causing runtime and
compile-time slowdowns on Windows only.
I originally rediscovered this when investigating #2318 and saw that
both the compile time of the module in question and trap information
tables were much larger than they were on Linux. Removing this
Windows-specific configuration fixed the discrepancies and afterwards
Linux and Windows were basically the same.
The changes in https://github.com/bytecodealliance/wasmtime/pull/2278 added `SourceLoc`s to several x64 `Inst` variants; between when that PR was last run in CI and when it was merged, new instructions were added that require this new parameter. This change adds the parameter in order to fix CI.
Turns out this wasn't needed anywhere! Additionally we can construct it
from `InstructionAddressMap` anyway. There are so many pieces of trap
information that it's best to keep these structures small as well.
This commit compresses `FunctionAddressMap` by performing a simple
coalescing of adjacent `InstructionAddressMap` descriptors if they
describe the same source location. This is intended to handle the common
case where a sequence of machine instructions describes a high-level wasm
instruction.
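The coalescing itself is conceptually simple; a minimal sketch (with a simplified entry type, not the real `InstructionAddressMap`) is:

```rust
#[derive(Clone, Copy, PartialEq, Eq)]
struct SrcLoc(u32);

struct Entry {
    code_offset: u32,
    srcloc: SrcLoc,
}

// Keep only the first entry of each run that shares a source location; the
// repeated descriptors add no information to the offset-to-srcloc mapping.
fn coalesce(entries: Vec<Entry>) -> Vec<Entry> {
    let mut out: Vec<Entry> = Vec::with_capacity(entries.len());
    for entry in entries {
        match out.last() {
            Some(prev) if prev.srcloc == entry.srcloc => {}
            _ => out.push(entry),
        }
    }
    out
}
```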
For the module on #2318 this reduces the cache entry size from 306MB to
161MB.
Update `Module::{serialize,deserialize}` to use variable-length integers
with `bincode` to make the output artifacts smaller. Locally this
reduces the size of #2318 from 160 to 110 MB, a 30% decrease in size!
Deserialization performance is slightly slower, but seemingly within the
range of noise locally for me.
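Concretely, the change amounts to moving from bincode's default fixed-width integer encoding to an `Options`-based configuration along these lines (a sketch, not the exact call sites), with the same options used on both the serialize and deserialize paths:

```rust
use bincode::Options;

// Variable-length integer encoding shrinks artifacts dominated by many small
// integers (offsets, indices, lengths).
fn encode<T: serde::Serialize>(value: &T) -> Result<Vec<u8>, bincode::Error> {
    bincode::DefaultOptions::new()
        .with_varint_encoding()
        .serialize(value)
}

fn decode<'a, T: serde::Deserialize<'a>>(bytes: &'a [u8]) -> Result<T, bincode::Error> {
    bincode::DefaultOptions::new()
        .with_varint_encoding()
        .deserialize(bytes)
}
```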
This patch implements, for aarch64, the following wasm SIMD extensions:
* Floating-point rounding instructions
  https://github.com/WebAssembly/simd/pull/232
* Pseudo-Minimum and Pseudo-Maximum instructions
  https://github.com/WebAssembly/simd/pull/122
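For reference, the per-lane semantics of the pseudo-min/max operations, assuming they follow the wasm proposal's definition, are simply (in scalar form):

```rust
// Scalar sketch of pseudo-min/max: the comparison alone decides the result,
// so NaN and signed-zero handling differs from IEEE min/max.
fn pmin(a: f32, b: f32) -> f32 {
    if b < a { b } else { a }
}

fn pmax(a: f32, b: f32) -> f32 {
    if a < b { b } else { a }
}
```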
The changes are straightforward:
* `build.rs`: the relevant tests have been enabled
* `cranelift/codegen/meta/src/shared/instructions.rs`: new CLIF instructions
`fmin_pseudo` and `fmax_pseudo`. The wasm rounding instructions do not need
any new CLIF instructions.
* `cranelift/wasm/src/code_translator.rs`: translation into CLIF; this is
pretty much the same as any other unary or binary vector instruction (for
the rounding and the pmin/max respectively)
* `cranelift/codegen/src/isa/aarch64/lower_inst.rs`:
- `fmin_pseudo` and `fmax_pseudo` are converted into a two-instruction
sequence, `fcmpgt` followed by `bsl`
- the CLIF rounding instructions are converted to a suitable vector
`frint{n,z,p,m}` instruction.
* `cranelift/codegen/src/isa/aarch64/inst/mod.rs`: minor extension of `pub
enum VecMisc2` to handle the rounding operations. And corresponding `emit`
cases.
The `bitmask.{8x16,16x8,32x4}` instructions do not map neatly to any single
AArch64 SIMD instruction, and instead need a sequence of around ten
instructions. Because of this, this patch is somewhat longer and more complex
than it would be for (e.g.) x64.
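As a scalar reference for what these instructions compute (shown for the 8x16 case; 16x8 and 32x4 are analogous):

```rust
// bitmask.8x16 / CLIF `vhigh_bits`: gather the top (sign) bit of each of the
// 16 lanes into the low bits of a scalar integer result.
fn bitmask_8x16(lanes: [i8; 16]) -> u16 {
    let mut mask = 0u16;
    for (i, &lane) in lanes.iter().enumerate() {
        if lane < 0 {
            mask |= 1 << i;
        }
    }
    mask
}
```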
Main changes are:
* the relevant testsuite test (`simd_boolean.wast`) has been enabled on aarch64.
* at the CLIF level, add a new instruction `vhigh_bits`, into which these wasm
instructions are to be translated.
* in the wasm->CLIF translation (code_translator.rs), translate into
`vhigh_bits`. This is straightforward.
* in the CLIF->AArch64 translation (lower_inst.rs), translate `vhigh_bits`
into equivalent sequences of AArch64 instructions. There is a different
sequence for each of the `{8x16, 16x8, 32x4}` variants.
All other changes are AArch64-specific, and add instruction definitions needed
by the previous step:
* Add two new families of AArch64 instructions: `VecShiftImm` (vector shift by
immediate) and `VecExtract` (effectively a double-length vector shift)
* To the existing AArch64 family `VecRRR`, add a `zip1` variant. To the
`VecLanesOp` family add an `addv` variant.
* Add supporting code for the above changes to AArch64 instructions:
- getting the register uses (`aarch64_get_regs`)
- mapping the registers (`aarch64_map_regs`)
- printing instructions
- emitting instructions (`impl MachInstEmit for Inst`). The handling of
`VecShiftImm` is a bit complex.
- emission tests for new instructions and variants.
The test that triggers the rebuild of the peephole optimizers is in the
`cranelift-codegen` crate, not the umbrella `cranelift` crate. This check
previously ran zero tests successfully and then successfully reported no
`git diff`, because no peephole optimizers were ever rebuilt.
This change fixes it so that we run the correct test that triggers the
rebuilding of the peephole optimizers.
As with other data structures owned by the `Store`, there's no need
for `Instance` to hold a strong `Arc` reference; instead it's sufficient
for the `Store` to hold the owning reference.
In the current translation of wasm (128-bit) SIMD into CLIF, we work around differences in the
type system models of wasm vs CLIF by inserting `bitcast` (a no-op cast) CLIF instructions before
more or less every use of a SIMD value. Unfortunately this was not being done consistently, and
even a small example with a single if-then-else diamond that produces a SIMD value could cause a
verification failure downstream. In this case, the jump out of the "else" block needed a
bitcast, but didn't have one.
This patch wraps creation of CLIF jumps and conditional branches up into three functions,
`canonicalise_then_jump` and `canonicalise_then_br{z,nz}`, and uses them consistently. They
first cast the relevant block formal parameters, then generate the relevant kind of branch/jump.
Hence, provided they are also used consistently in future to generate branches/jumps in this
file, we are protected against such failures.
The patch also adds a large(ish) comment at the top explaining this in more detail.
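To illustrate the shape of these helpers, here is a simplified sketch of the jump case (assuming cranelift's `raw_bitcast` no-op cast and `I8X16` as the canonical vector type; the real helpers in code_translator.rs are more general):

```rust
use cranelift_codegen::ir::{types::I8X16, Block, Inst, InstBuilder, Value};
use cranelift_frontend::FunctionBuilder;

// Bitcast any vector arguments to the canonical I8X16 type expected by the
// destination block's formal parameters, then emit the jump.
fn canonicalise_then_jump(
    builder: &mut FunctionBuilder,
    destination: Block,
    params: &[Value],
) -> Inst {
    let canonicalised: Vec<Value> = params
        .iter()
        .map(|&value| {
            if builder.func.dfg.value_type(value).is_vector() {
                // A no-op cast: only the CLIF type changes, not the bits.
                builder.ins().raw_bitcast(I8X16, value)
            } else {
                value
            }
        })
        .collect();
    builder.ins().jump(destination, &canonicalised)
}
```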
There's no need to name each export since each synthetic instance we're
creating only has one export, so let's use the empty string, which
doesn't require any allocations.
Fixes the RUSTSEC-2020-0054 warning from cargo-audit/cargo-deny by following the recommendation to switch to the new, maintained `directories-next` crate fork.
This only affects how the cache directory is determined from the environment, and the switch was a simple search-and-replace to the fork, so I don't think behavior has changed.
https://rustsec.org/advisories/RUSTSEC-2020-0054