* Run a `cargo update` over our dependencies
This notably fixes a `cargo audit` error where we had a pinned version
of the `regex` crate that has a CVE assigned to it.
* Update the `object` and `hashbrown` crates
Prune some duplicate versions introduced by the previous `cargo update`
Currently, a variable can be named in two different ways in an ISLE
pattern. One can write a pattern like `(T x y)` that binds the two
args of `T` with the subpatterns `x` and `y`, each of which matches
anything and captures the value as a bound variable. Or, one can write
a pattern like `(T x =x)`, where the first arg pattern `x` captures
the value in `x` and the second arg pattern `=x` matches only the same
value that was already captured.
It turns out (thanks to @fitzgen for this insight here [1]) that this
distinction can actually be inferred easily: if `x` isn't bound, then
mentioning it binds it; otherwise, it matches only the already-bound
variable. There's no concern about ordering (one mention binding
vs. the other) because (i) the value is equal either way, and (ii) the
types at both sites must be the same.
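To make this concrete, a hypothetical before/after rule sketch (the term
`T` and the rule bodies are stand-ins, not real lowering rules):
```
;; Today: the second mention needs the explicit `=x` equality syntax.
(rule (lower (T x =x)) (do_something x))

;; With this change: a bare repeated `x` means the same thing, since
;; `x` is already bound by its first mention.
(rule (lower (T x x)) (do_something x))
```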
This language tweak seems like it should simplify things nicely! We
can remove the `=x` syntax later if we want, but this PR doesn't do
so.
[1] https://github.com/bytecodealliance/wasmtime/pull/4071#discussion_r859111513
Also fix and extend the current implementation:
- AtomicRMWOp::Clr != AtomicRmwOp::And, as the input needs to be
inverted first (see the sketch after this list).
- Inputs to the cmp for the RMWLoop case are sign-extended when
needed.
- Lower Xchg to Swp.
- Lower Sub to Add with a negated input.
- Added more runtests.
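The identities being relied on, as a minimal Rust sketch (plain
functions stand in for the AArch64 atomic instructions; this is not the
backend code itself):
```rust
// AArch64's LDCLR computes `mem & !op`, so lowering a CLIF atomic `and`
// must complement the input first; LDADD computes `mem + op`, so a CLIF
// atomic `sub` becomes an add of the negated input.
fn ldclr(mem: u32, op: u32) -> u32 { mem & !op }
fn ldadd(mem: u32, op: u32) -> u32 { mem.wrapping_add(op) }

fn atomic_and(mem: u32, x: u32) -> u32 { ldclr(mem, !x) } // invert first
fn atomic_sub(mem: u32, x: u32) -> u32 { ldadd(mem, x.wrapping_neg()) } // add -x

fn main() {
    assert_eq!(atomic_and(0b1100, 0b1010), 0b1000);
    assert_eq!(atomic_sub(10, 3), 7);
}
```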
Copyright (c) 2022, Arm Limited.
x64 backend: add lowerings with load-op-store fusion.
These lowerings use the `OP [mem], reg` forms (or in AT&T syntax, `OP
%reg, (mem)`) -- i.e., x86 instructions that load from memory, perform
an ALU operation, and store the result, all in one instruction. Using
these instruction forms, we can merge three CLIF ops together: a load,
an arithmetic operation, and a store.
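For example, roughly in CLIF terms (value names illustrative):
```
v2 = load.i64 v0            ; load
v3 = iadd v2, v1            ; ALU op
store v3, v0                ; store
; => a single `add [rdi], rsi`-style instruction, provided v2 and v3
;    have no other uses
```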
The recent work in #4061 introduced a notion of "unique uses" for CLIF
values that both simplified the load-op merging rules and allowed
loads to merge in some more places.
Unfortunately there's one factor that PR didn't account for: a unique
use at the CLIF level could become a multiple-use at the VCode level,
when a lowering uses a value multiple times!
Making this less error-prone in general is hard, because we don't know
the lowering in VCode until it's emitted, so we can't know ahead of
time that a value will be used multiple times and prevent its merging.
But we *can* know in the lowerings themselves when we're doing this. At
least we get a panic from regalloc when we get this
doing this. At least we get a panic from regalloc when we get this
wrong; no bad code (uninitialized register being read) should ever
come from a backend bug like this.
This is still a bit less than ideal, but for now the fix is: in
`cmp_and_choose` in the x64 backend (which compares values, then
picks one or the other with a cmove), explicitly put values in
registers.
Fixes #4067 (thanks @Mrmaxmeier for the report!).
* Cranelift: fix #3953: rework single/multiple-use logic in lowering.
This PR addresses the longstanding issue with loads trying to merge
into compares on x86-64, and more generally, with the lowering
framework falsely recognizing "single uses" of one op by
another (which would normally allow merging of side-effecting ops like
loads) when there is *indirect* duplication.
To fix this, we replace the direct `value_uses` count with a
transitive notion of uniqueness (not unlike Rust's `&`/`&mut` and how
a `&mut` downgrades to `&` when accessed through another `&`!). A
value is used multiple times transitively if it has multiple direct
uses, or is used by another op that is used multiple times
transitively.
The canonical example of badness is:
```
v1 := load
v2 := ifcmp v1, ...
v3 := selectif v2, ...
v4 := selectif v2, ...
```
both `v3` and `v4` effectively merge the `ifcmp` (`v2`), so even
though the use of `v1` is "unique", it is codegenned twice. This is
why we ~~can't have nice things~~ can't merge loads into
compares (#3953).
There is quite a subtle and interesting design space around this
problem and how we might solve it. See the long doc-comment on
`ValueUseState` in this PR for more justification for the particular
design here. In particular, this design deliberately simplifies a bit
relative to an "optimal" solution: some uses can *become* unique
depending on merging, but we don't design our data structures for such
updates because that would require significant extra costly
tracking (some sort of transitive refcounting). For example, in the
above, if `selectif` somehow did not merge `ifcmp`, then we would only
codegen the `ifcmp` once into its result register (and use that
register twice); then the load *is* uniquely used, and could be
merged. But that requires transitioning from "multiple use" back to
"unique use" with careful tracking as we do pattern-matching, which
I've chosen to make out-of-scope here for now. In practice, I don't
think it will matter too much (and we can always improve later).
With this PR, we can now re-enable load-op merging for compares. A
subsequent commit does this.
* Update x64 backend to allow load-op merging for `cmp`.
* Update filetests.
* Add test for cmp-mem merging on x64.
* Comment fixes.
* Rework ValueUseState analysis for better performance.
* Update s390x filetest: iadd_ifcout cannot merge loads anymore because it has multiple outputs (ValueUseState limitation)
* Address review comments.
* Update more workflows to run only in this repository
This adds `if: github.repository == 'bytecodealliance/wasmtime'` to a
few more workflows related to the release process which should only run
in this repository and no other (e.g. forks).
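For reference, such a gate sits at the job level of a workflow file; a
minimal sketch (job name and step are hypothetical):
```yaml
jobs:
  publish-release:
    # Skip this job entirely outside the upstream repository (e.g. on forks).
    if: github.repository == 'bytecodealliance/wasmtime'
    runs-on: ubuntu-latest
    steps:
      - run: echo "only runs in bytecodealliance/wasmtime"
```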
* Also only run verify-publish in the upstream repo
No need for local development to be burdened with ensuring everything is
actually publishable; that's just a concern for the main repository.
* Gate workflows which need secrets on this repository
* Fix the release process's latest step
The automated release of 0.36.0 was attempted last night but it failed
due to a [failure on CI][bad]. This failure comes about because it was
trying to change the release date of 0.35.0, which ended up not modifying
any files, so `git` failed to commit as no files were changed.
The original bug though was that 0.35.0 was being changed instead of
0.36.0. The reason for this is that the script used
`--sort=-committerdate` to determine the latest branch. I forgot,
though, that with backports it's possible for 0.35.0 to have a more
recent commit date than 0.36.0 (as is currently the case). This commit
updates the script to perform a numerical sort outside of git to get the
latest release branch.
Additionally this adds in some `set -ex` commands for the shell which
should help print out commands as they're run and assist in future
debugging.
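A sketch of the numeric-sort idea in Rust (the helper below is
hypothetical, not the actual script):
```rust
// Pick the newest release branch numerically (names like "release-0.36.0"),
// instead of trusting commit dates, which backports can reorder.
fn latest_release<'a>(branches: &[&'a str]) -> Option<&'a str> {
    branches
        .iter()
        .copied()
        .filter_map(|b| {
            let ver = b.strip_prefix("release-")?;
            let mut parts = ver.split('.').map(|p| p.parse::<u64>().ok());
            Some(((parts.next()??, parts.next()??, parts.next()??), b))
        })
        .max_by_key(|(ver, _)| *ver)
        .map(|(_, b)| b)
}

fn main() {
    // 0.35.0 may have the more recent commit, but 0.36.0 is the latest release.
    let branches = ["release-0.35.0", "release-0.36.0"];
    assert_eq!(latest_release(&branches), Some("release-0.36.0"));
}
```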
[bad]: https://github.com/bytecodealliance/wasmtime/runs/6087188708
* Replace sed with rust
* Reduce contention on the global module rwlock
This commit intends to close #4025 by reducing contention on the global
rwlock Wasmtime has for module information during instantiation and
dropping a store. Currently registration of a module into this global
map happens during instantiation, but this can be a hot path as
embeddings may want to instantiate modules in parallel.
Instead this switches to a strategy of inserting into the global module
map when a `Module` is created and then removing it from the map when
the `Module` is dropped. Registration in a `Store` now preserves the
entire `Module` within the store as opposed to trying to only save it
piecemeal. In reality the only piece that wasn't saved within a store
was the `TypeTables` which was pretty inconsequential for core wasm
modules anyway.
This means that instantiation should now clone a single `Arc` into a
`Store` per `Module` (previously it cloned two) with zero management of
the global rwlock, since that now happens at `Module` creation time.
Additionally dropping a `Store` again involves zero rwlock management
and only a single `Arc` drop per-instantiated module (previously it was
two).
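A minimal sketch of the scheme described above (names and the map key
are hypothetical, and the real `Module` is also cloneable, which this
ignores):
```rust
use std::collections::BTreeMap;
use std::sync::{Arc, OnceLock, RwLock};

struct ModuleInner { /* compiled code, type tables, ... */ }

fn registry() -> &'static RwLock<BTreeMap<usize, Arc<ModuleInner>>> {
    static R: OnceLock<RwLock<BTreeMap<usize, Arc<ModuleInner>>>> = OnceLock::new();
    R.get_or_init(Default::default)
}

struct Module(Arc<ModuleInner>);

impl Module {
    fn new() -> Module {
        let inner = Arc::new(ModuleInner {});
        // One rwlock write at creation time...
        let key = Arc::as_ptr(&inner) as usize;
        registry().write().unwrap().insert(key, inner.clone());
        Module(inner)
    }
}

impl Drop for Module {
    fn drop(&mut self) {
        // ...and one at drop; instantiation itself never touches the lock.
        let key = Arc::as_ptr(&self.0) as usize;
        registry().write().unwrap().remove(&key);
    }
}
```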
In the process of doing this I also went ahead and removed the
`Module::new_with_name` API. This has been difficult to support
historically with various variations on the internals of `ModuleInner`
because it involves mutating a `Module` after it's been created. My hope
is that this API is pretty rarely used and/or isn't super important, so
it's ok to remove.
Finally this change removes some internal `Arc` layerings that are no
longer necessary, attempting to use either `T` or `&T` where possible
without dealing with the overhead of an `Arc`.
Closes #4025
* Move back to a `BTreeMap` in `ModuleRegistry`
This commit implements an optimization to help improve concurrently
creating instances of a module on many threads simultaneously. One
bottleneck measured here has been the reference-count modification on
`Arc<HostFunc>`. Each host function stored within a `Linker<T>` is
wrapped in an `Arc<HostFunc>` structure, and when any of those host
functions are inserted into a store the reference count is incremented.
When the store is dropped the reference count is then decremented.
This means that when a module imports N functions, 2N atomic
modifications happen over the lifetime of each instance. For
embeddings where the `Linker<T>` is rarely modified but instances are
frequently created this can be a surprising bottleneck to creating many
instances.
A change implemented here is to optimize the instantiation process when
using an `InstancePre<T>`. An `InstancePre` serves as an opportunity to
take the list of items used to instantiate a module and wrap them all up
in an `Arc<[T]>`. Everything is going to get cloned into a `Store<T>`
anyway so to optimize this the `Arc<[T]>` is cloned at the top-level and
then nothing else is cloned internally. This continues to, however,
preserve a strong reference count for all contained items to prevent
them from being deallocated.
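In sketch form, the difference is one refcount bump per instantiation
instead of one per import (types heavily simplified):
```rust
use std::sync::Arc;

struct HostFunc;

// Per-import clones: 2N atomic ops over an instance's lifetime.
fn instantiate_naive(imports: &[Arc<HostFunc>]) -> Vec<Arc<HostFunc>> {
    imports.iter().cloned().collect()
}

// InstancePre-style: clone one Arc<[...]> at the top level; the contained
// items stay alive through it with no per-item refcount traffic.
fn instantiate_pre(imports: &Arc<[Arc<HostFunc>]>) -> Arc<[Arc<HostFunc>]> {
    Arc::clone(imports)
}
```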
A new variant of `FuncKind` was added for host functions which is
effectively stored via `*mut HostFunc`. This variant is unsafe to create
and manage and has been documented internally.
Performance-wise the overall impact of this change is somewhat minor.
It's already fairly esoteric for this atomic increment and decrement to
be a bottleneck, given how many concurrent instantiations that requires. In
my measurements I've seen that this can reduce instantiation time by up
to 10% for a module that imports two dozen functions. For larger modules
with more imports this is expected to have a larger win.
We should still get the same amount of fuzzing from libfuzzer's mutators and
from using `wasm-mutate` as a mutator, but now they can share the same corpus,
allowing mutations that one performed but the other didn't to reach new areas.
This tells Cranelift to run regalloc2's symbolic verifier on the results
of register allocation after compiling each function.
We already fuzz regalloc2 independently, but that provides coverage
using regalloc2's purpose-built (synthetic) `Function` implementation.
With this change, this fuzz target, in contrast, exercises regalloc2 with
the particular details of the code Cranelift actually generates.
Testing the whole pipeline together and ensuring that the register
allocation is still valid is at least as important as fuzzing regalloc2
independently, IMHO.
Fuzzed locally for a brief time (~10M inputs) to smoke-test; let's see
what oss-fuzz can find (hopefully it's boring)!
With these fixes, all this PR has to do is instantiate and run the
checker on the `regalloc2::Output`. This is off by default, and is
enabled by setting the `regalloc_checker` Cranelift option.
This restores the old functionality provided by e.g. the
`backtracking_checked` regalloc algorithm setting rather than
`backtracking` when we were still on regalloc.rs.
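Enabling it from embedder code might look like this sketch (only the
`regalloc_checker` flag name comes from this change):
```rust
use cranelift_codegen::settings::{self, Configurable, Flags};

fn flags_with_regalloc_checker() -> Flags {
    let mut builder = settings::builder();
    // Off by default; turns on regalloc2's symbolic verifier after
    // register allocation of each compiled function.
    builder.enable("regalloc_checker").expect("known bool flag");
    Flags::new(builder)
}
```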
Currently wasmtime's async tests use a mixture of `#[tokio::test]` and
`dummy_waker`. To be consistent this tries to move all tests possible to
`#[tokio::test]`; just two need to keep using `dummy_waker` (now
renamed to `noop_waker`) due to what they're testing.
regalloc2 is a bit pickier about critical edges than regalloc.rs was,
because of how it inserts moves. In particular, if a branch has any
arguments (e.g., a conditional branch or br_table), its successors must
all have only one predecessor, so we can do edge moves at the top of
successor blocks rather than at the end of this block. Otherwise, moves
that semantically must come after the block's last uses (the branch's
args) would be placed before it.
This is almost always the case, because crit-edge splitting ensures that
if we have more than one succ, all our succs will have only one pred.
This is because branch kinds that take arguments (fixed args, not the
blockparam args) tend to have more than one successor: conditionals and
br_tables.
However, a fuzzbug recently illuminated one corner case I had missed: a
br_table can have *one* successor only, if it has a default target and
an empty table. In this case, crit-edge splitting will happily skip a
split and assume that we can insert edge moves at the end of the block
with the br_table. But this will fail.
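Roughly, in CLIF terms (sketch):
```
    jt0 = jump_table []             ; empty table
block0(v0: i32):
    br_table v0, block1, jt0        ; default only: exactly one successor,
                                    ; but still a branch that uses v0
```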
regalloc2 explicitly checks this and bails with a panic, rather than
continue, so no miscompilation is possible; but without this fix, we
will get these panics on br_tables with empty tables.
This commit removes support for the `userfaultfd` or "uffd" syscall on
Linux. This support was originally added for users migrating from Lucet
to Wasmtime, but the recent developments of kernel-supported
copy-on-write support for memory initialization wound up being more
appropriate for these use cases than userfaultfd. The main reasons for
moving to copy-on-write initialization are:
* The `userfaultfd` feature was never necessarily intended for this
style of use case with wasm and was susceptible to subtle and rare
bugs that were extremely difficult to track down. We were never 100%
certain that there were kernel bugs related to userfaultfd but the
suspicion never went away.
* Handling faults with userfaultfd was always slow and single-threaded.
Only one thread could handle faults and traveling to user-space to
handle faults is inherently slower than handling them all in the
kernel. The single-threaded aspect in particular presented a
significant scaling bottleneck for embeddings that want to run many
wasm instances in parallel.
* One of the major benefits of userfaultfd was lazy initialization of
wasm linear memory which is also achieved with the copy-on-write
initialization support we have right now.
* One of the suspected benefits of userfaultfd was less frobbing of the
kernel vma structures when wasm modules are instantiated. Currently
the copy-on-write support has a mitigation where we attempt to reuse
the memory images where possible to avoid changing vma structures.
When comparing this to userfaultfd's performance it was found that
kernel modifications of vmas aren't a worrisome bottleneck so
copy-on-write is suitable for this as well.
Overall there are no remaining benefits that userfaultfd gives that
copy-on-write doesn't, and copy-on-write solves a major downside of
userfaultfd, the scaling issue with a single faulting thread.
Additionally copy-on-write support seems much more robust in terms of
kernel implementation since it's only using standard memory-management
syscalls which are heavily exercised. Finally copy-on-write support
provides a new bonus where read-only memory in WebAssembly can be mapped
directly to the same kernel cache page, even amongst many wasm instances
of the same module, which was never possible with userfaultfd.
In light of all this it's expected that all users of userfaultfd should
migrate to the copy-on-write initialization of Wasmtime (which is
enabled by default).
Previously, the block successor accumulation and the blockparam branch
arg setup were decoupled. The lowering backend implicitly specified
the order of successor edges via its `MachTerminator` enum on the last
instruction in the block, while the `Lower` toplevel
machine-independent driver set up blockparam branch args in the edge
order seen in CLIF.
In some cases, these orders did not match -- for example, when the
conditional branch depended on an FP condition that was implemented by
swapping taken/not-taken edges and inverting the condition code.
This PR refactors the successor handling to be centralized in `Lower`
rather than flow through the terminator `MachInst`, and adds a
successor block and its blockparam args at the same time, ensuring the
orders match.
Following the merge of regalloc2 support, `MachBuffer`'s invariant
validation became slower because we
are stricter about the critical-edge invariant, generating a separate
edge block for every out-edge even if two or more out-edges go to the
same successor (this is significant in cases of `br_table` with many
entries having the same target block, for example).
Many of those edge blocks are empty and end up collapsed by the
MachBuffer, which leads to a large set of aliased labels.
The invariant validation will dutifully iterate over all the data
structures at every step, validating all of our conditions. But this
gets way slower in the new context, to the point that we'll probably
have some fuzz timeouts.
This was pointed out in [1] but I missed removing it in #3989. Given
that `MachBuffer` has been around for nearly two years now, has been
fuzzed continuously with the invariant validation for that time, and
also has a correctness proof in the comments, it's probably reasonable
to remove this high (recently increased) cost from the fuzzing-specific
compilation configuration.
[1]
https://github.com/bytecodealliance/wasmtime/pull/3989#discussion_r847712263
Merge Mov32 and Mov64 into a single instruction parameterized by a new
OperandSize field. Also combine the Mov[K,N,Z] into a single instruction
with a new opcode to select between the operations.
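A sketch of the consolidated shape (variant and field names approximate):
```rust
struct Reg(u8);

enum OperandSize { Size32, Size64 }

// One move-wide opcode selector instead of three separate instructions.
enum MoveWideOp { MovK, MovN, MovZ }

enum Inst {
    // Replaces Mov32/Mov64: the width is now a parameter.
    Mov { size: OperandSize, rd: Reg, rm: Reg },
    // Replaces MovK/MovN/MovZ: the operation is now an opcode field.
    MovWide { op: MoveWideOp, size: OperandSize, rd: Reg, imm: u16, shift: u8 },
}
```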
Copyright (c) 2022, Arm Limited.
* Store the `ValRaw` type in little-endian format
This commit changes the internal representation of the `ValRaw` type to
an unconditionally little-endian format instead of its current
native-endian format. The documentation and various accessors here have
been updated as well as the associated trampolines that read `ValRaw`
to always work with little-endian values, converting to the host
endianness as necessary.
The motivation for this change originally comes from the implementation
of the component model that I'm working on. One aspect of the component
model's canonical ABI is how variants are passed to functions as
immediate arguments. For example for a component model function:
```
foo: function(x: expected<i32, f64>)
```
This translates to a core wasm function:
```wasm
(module
(func (export "foo") (param i32 i64)
;; ...
)
)
```
The first `i32` parameter to the core wasm function is the discriminant
of whether the result is an "ok" or an "err". The second `i64`, however,
is the "join" operation on the `i32` and `f64` payloads. Essentially
these two types are unioned into one type to get passed into the function.
Currently in the implementation of the component model my plan is to
construct a `*mut [ValRaw]` to pass through to WebAssembly, always
invoking component exports through host trampolines. This means that the
implementation for `Result<T, E>` needs to do the correct "join"
operation here when encoding a particular case into the corresponding
`ValRaw`.
I personally found this particularly tricky to do structurally. The
solution that I settled on with fitzgen was that if `ValRaw` was always
stored in a little-endian format then we could employ a trick: when
encoding a variant we first set all the `ValRaw` slots to zero, then
encode the associated case we have. Afterwards the `ValRaw` values are
already encoded into the correct format as if they'd been "join"ed.
For example if we were to encode `Ok(1i32)` then this would produce
`ValRaw { i32: 1 }`, which memory-wise is equivalent to `ValRaw { i64: 1 }`
if the other bytes in the `ValRaw` are guaranteed to be zero. Similarly
storing `ValRaw { f64 }` is equivalent to the storage required for
`ValRaw { i64 }` here in the join operation.
Note, though, that this equivalence relies on everything being
little-endian. Otherwise the in-memory representations of `ValRaw { i32: 1 }`
and `ValRaw { i64: 1 }` are different.
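A minimal sketch of the argument, with `ValRaw` reduced to a plain
8-byte slot (a hypothetical stand-in for the real type):
```rust
// Encode Ok(1i32): zero the slot, then write only the payload's bytes
// little-endian. The i64 "join" view then falls out of the representation.
fn encode_ok_i32(payload: i32) -> [u8; 8] {
    let mut slot = [0u8; 8];
    slot[..4].copy_from_slice(&payload.to_le_bytes());
    slot
}

fn main() {
    let slot = encode_ok_i32(1);
    // Reading the joined i64 yields 1 -- but only because both views
    // agree on little-endian byte order.
    assert_eq!(i64::from_le_bytes(slot), 1);
}
```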
That motivation is what leads to this change. It's expected that this is
a low-to-zero cost change in the sense that little-endian platforms will
see no change and big-endian platforms are already required to
efficiently byte-swap loads/stores as WebAssembly requires that.
Additionally the `ValRaw` type is an esoteric niche use case primarily
used for accelerating the C API right now, so it's expected that not
many users will have to update for this change.
* Track down some more endianness conversions
This PR switches Cranelift over to the new register allocator, regalloc2.
See [this document](https://gist.github.com/cfallin/08553421a91f150254fe878f67301801)
for a summary of the design changes. This switchover has implications for
core VCode/MachInst types and the lowering pass.
Overall, this change brings improvements to both compile time and speed of
generated code (runtime), as reported in #3942:
```
Benchmark             Compilation (wallclock)   Execution (wallclock)
blake3-scalar         25% faster                28% faster
blake3-simd           no diff                   no diff
meshoptimizer         19% faster                17% faster
pulldown-cmark        17% faster                no diff
bz2                   15% faster                no diff
SpiderMonkey fib(30)  21% faster                2% faster
clang.wasm            42% faster                N/A
```
Relevant to Wasmtime, this fixes undefined references to `utimensat` and
`futimens` on macOS 10.12 and earlier. See bytecodealliance/rustix#157
for details.
It also contains a fix for s390x which isn't currently needed by Wasmtime
itself, but which is needed to make rustix's own testsuite pass on s390x,
which helps people packaging rustix for use in Wasmtime. See
bytecodealliance/rustix#277 for details.
Issue #3963 identified a miscompilation with select in which the second
in the pair of `CMOV`s (one pair per `i128` register) used the wrong
flag. This change fixes the error in the x64 ISLE helper function
emitting these `CMOV` instructions.
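The intended semantics as a Rust sketch, where each `if` stands in for
one `CMOV` and both must key off the same compare result:
```rust
fn select_i128(cond: bool, a: u128, b: u128) -> u128 {
    let (a_lo, a_hi) = (a as u64, (a >> 64) as u64);
    let (b_lo, b_hi) = (b as u64, (b >> 64) as u64);
    let lo = if cond { a_lo } else { b_lo }; // first CMOV
    let hi = if cond { a_hi } else { b_hi }; // second CMOV: same flag,
                                             // not a stale or unrelated one
    ((hi as u128) << 64) | lo as u128
}
```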
This makes the generator more similar to `wasm-smith` where it is keeping track
of what is on the stack and making choices about what instructions are valid to
generate given the current stack state. This should in theory allow the
generator to emit GC calls while there are live refs on the stack.
Fixes #3917
This commit adds a method, `Module::initialize_copy_on_write_image`,
which will explicitly create the copy-on-write image during the method
call instead of lazily deferring it until the first instantiation of a
module. Given the relative expense of creating a memfd on Linux, lazy
creation otherwise risks unnaturally perturbing a module's
time-of-instantiation. Additionally whenever lazy
initialization is provided in an API it's typically good practice to
also have an optionally-called forced initialization.
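Usage might look like this sketch (error handling via `anyhow` and the
method's exact signature are assumptions; only the method name comes
from this commit):
```rust
use wasmtime::{Engine, Module};

fn load_eagerly(engine: &Engine, wasm: &[u8]) -> anyhow::Result<Module> {
    let module = Module::new(engine, wasm)?;
    // Pay the memfd-creation cost now rather than on first instantiation.
    module.initialize_copy_on_write_image()?;
    Ok(module)
}
```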
The documentation for the `wasm-spec-interpreter` was not up-to-date,
causing some confusion on non-Ubuntu machines. This change adds the
correct dependencies to install and includes the `libgmp` path for
Fedora by default (i.e., `/lib64`).
This change moves the majority of the lowerings for CLIF's `load`
instruction over to ISLE. To do so, it also migrates the previous
mechanism for creating an `Amode` (`lower_to_amode`) to several ISLE
rules (see `to_amode`).
* Add m1 to release process
This creates a pre-compiled binary for M1 Macs and adds
a link to review Embark Studios' CI for verification.
* Remove test for macOS ARM
Tests will not succeed for macOS ARM until GitHub provides an M1-hosted runner.
Co-authored-by: Bailey Hayes <bhayes@singlestore.com>