Currently wasmtime generates one `SignatureIndex` per type declaration in a
module, even if the module declares the same type multiple times. To make
matters worse, if the same type is declared across multiple modules in a
module-linking graph, then the signature is recorded once per declaration.
This commit adds a simple map to module translation to deduplicate these
function types. This should improve the performance of module-linking
graphs where the same function type may be declared in a number of
modules. For modules that don't use module linking this adds an extra
map that's not used too often, but the time spent managing it should be
dwarfed by other compile tasks.
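The shape of the deduplication is roughly the following sketch; `WasmFuncType`
and `SignatureIndex` here are simplified stand-ins for whatever the translation
code actually uses, the interning map is the point:

```rust
use std::collections::HashMap;

// Hypothetical stand-ins for the real types used during translation.
#[derive(Clone, PartialEq, Eq, Hash)]
struct WasmFuncType {
    params: Vec<&'static str>,
    returns: Vec<&'static str>,
}

#[derive(Copy, Clone, Debug, PartialEq)]
struct SignatureIndex(u32);

#[derive(Default)]
struct ModuleTranslation {
    signatures: Vec<WasmFuncType>,
    // Deduplication map: a type already seen maps back to its first index.
    interned: HashMap<WasmFuncType, SignatureIndex>,
}

impl ModuleTranslation {
    fn intern_signature(&mut self, ty: WasmFuncType) -> SignatureIndex {
        if let Some(&idx) = self.interned.get(&ty) {
            return idx; // Same type declared again: reuse the old index.
        }
        let idx = SignatureIndex(self.signatures.len() as u32);
        self.signatures.push(ty.clone());
        self.interned.insert(ty, idx);
        idx
    }
}

fn main() {
    let mut translation = ModuleTranslation::default();
    let ty = WasmFuncType { params: vec!["i32"], returns: vec!["i32"] };
    let a = translation.intern_signature(ty.clone());
    let b = translation.intern_signature(ty);
    assert_eq!(a, b); // Duplicate declarations share one SignatureIndex.
}
```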
* Increase allowances for values when fuzzing
The wasm-smith limits for generated modules are a bit higher than the limits
we specify, so sync those up to avoid too many false positives from limits
getting blown.
* Ensure fuzzing `*.wat` files are in sync
I keep looking at `*.wat` files that are actually stale, so remove the stale
`*.wat` file whenever we write out a `*.wasm` file but can't disassemble it.
* Enable shadowing in dummy_linker
Fixes an issue where the same name is imported twice and we generated
two values for it. We don't mind the shadowing here, we just want to
ignore the shadowing errors.
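In terms of the public wasmtime API this boils down to one call on the linker;
a minimal sketch using the current wasmtime API (the real `dummy_linker` helper
in the fuzzing crate carries more setup, so the rest here is just scaffolding):

```rust
use wasmtime::{Engine, Linker};

fn main() {
    let engine = Engine::default();
    // `()` stands in for whatever host state the fuzzers actually use.
    let mut linker: Linker<()> = Linker::new(&engine);
    // If the same import name is defined twice, let the later definition
    // silently replace the earlier one rather than returning an error.
    linker.allow_shadowing(true);
}
```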
Currently this exposes a bug where modules broken by module linking
cause failures in the fuzzer, but we want to fuzz those modules since
module linking isn't enabled when generating these modules.
This commit fixes an issue where, when module linking is enabled for
fuzzing (which it is), the import types of a module can show up as imports of
instances. In an attempt to satisfy the dummy values for such imports the
fuzzing integration would create an instance for each import. This would,
however, count towards instance limits and isn't always desired.
This commit refactors the creation of dummy import values to decompose
imports of instances into imports of each individual item. This should
retain the pre-module-linking behavior of dummy imports for various
fuzzers.
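A rough sketch of that decomposition, with made-up enum types standing in for
wasmtime's actual import/extern types, so only the shape of the recursion
should be read into it:

```rust
// Made-up, simplified mirror of the types involved; the real fuzzing glue
// works with wasmtime's `ImportType`/`ExternType`, but the recursion is alike.
enum ExternKind {
    Func,
    Memory,
    // An instance import is just a bag of named items.
    Instance(Vec<(String, ExternKind)>),
}

enum DummyValue {
    Func,
    Memory,
    // Instead of instantiating a real module (which would count against the
    // configured instance limits), synthesize the instance from dummy items.
    Instance(Vec<(String, DummyValue)>),
}

fn dummy_for(kind: &ExternKind) -> DummyValue {
    match kind {
        ExternKind::Func => DummyValue::Func,
        ExternKind::Memory => DummyValue::Memory,
        ExternKind::Instance(exports) => DummyValue::Instance(
            exports
                .iter()
                .map(|(name, kind)| (name.clone(), dummy_for(kind)))
                .collect(),
        ),
    }
}

fn main() {
    // An instance import with two items becomes a synthesized instance built
    // from two individual dummy items, not a real instantiation.
    let import = ExternKind::Instance(vec![
        ("f".to_string(), ExternKind::Func),
        ("m".to_string(), ExternKind::Memory),
    ]);
    match dummy_for(&import) {
        DummyValue::Instance(items) => assert_eq!(items.len(), 2),
        _ => unreachable!(),
    }
}
```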
* Use stable Rust on CI to test the x64 backend
This commit leverages the newly-released 1.51.0 compiler to test the
new backend on Windows and Linux with a stable compiler instead of a
nightly compiler. This isolates the nightly build to just the nightly
documentation generation and fuzzing, both of which rely on nightly for
the best results right now.
* Use updated stable in book build job
* Run rustfmt for new stable
* Silence new warnings for wasi-nn
* Allow some dead code in the x64 backend
Looks like the new rustc is better about emitting some dead-code warnings
* Update rust in peepmatic job
* Fix a test in the pooling allocator
* Remove `package.metadata.docs.rs` temporarily
Needs resolution of https://github.com/rust-lang/cargo/pull/9300 first
* Fix a warning in a wasi-nn example
The codegen for div/rem ops has two modes, depending on the
`avoid_div_traps` flag: it can either perform all checks for trapping
conditions explicitly, using explicit trap instructions, and then use a
hardware divide instruction that will not trap (`avoid_div_traps ==
true`); or it can run in a mode where a hardware fault (SIGFPE) on the divide
instruction implies a Wasm trap (`avoid_div_traps == false`). Wasmtime
uses the former while Lucet (for example) uses the latter.
It turns out that because all our spec tests run under Wasmtime, we missed a
spec corner case that fails in the latter mode: INT_MIN % -1 == 0
per the spec, but causes a trap with the x86 signed divide/remainder
instruction. Hence, in Lucet, this specific remainder computation would
incorrectly result in a Wasm trap.
This PR fixes the issue by just forcing use of the explicit-checks
implementation for `srem` even when `avoid_div_traps` is false.
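In plain Rust the explicit-checks shape of the signed remainder looks roughly
like this (a sketch of the semantics, not the actual Cranelift lowering code):

```rust
/// Semantics of wasm's i32.rem_s with explicit checks, the shape that the
/// `avoid_div_traps == true` lowering follows. A zero divisor still traps.
fn srem32(lhs: i32, rhs: i32) -> Result<i32, &'static str> {
    if rhs == 0 {
        return Err("integer divide by zero"); // explicit trap
    }
    if rhs == -1 {
        // i32::MIN % -1 must be 0 per the spec, but the x86 idiv instruction
        // faults on this input, so it's special-cased before the hardware op.
        return Ok(0);
    }
    Ok(lhs % rhs)
}

fn main() {
    assert_eq!(srem32(i32::MIN, -1), Ok(0));
    assert_eq!(srem32(7, 3), Ok(1));
    assert!(srem32(1, 0).is_err());
}
```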
* Combine stack-based cleanups for faster wasm calls
This commit is an extension of #2757 where the goal is to optimize entry
into WebAssembly. Currently wasmtime has two stack-based cleanups when
entering wasm, one for the externref activation table and another for
resetting stack limits. This commit fuses these two cleanups into one and
moves some code around, resulting in fewer closures with fewer captures, to
speed up calls into wasm a bit more.
Overall this drops the execution time from 88ns to 80ns locally for me.
This also updates the atomic ordering used when updating the stack limit
from `SeqCst` to `Relaxed`. While `SeqCst` is a reasonable starting
point, `Relaxed` should be safe to use here since we're not using the atomic
to actually protect any memory; it simply receives signals from other
threads.
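A tiny illustration of why `Relaxed` is enough (field and type names here are
hypothetical, not the runtime's actual layout): the atomic only carries a value
between threads and doesn't guard any other memory, so no ordering with
surrounding accesses is required.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

/// Hypothetical stand-in for the per-store state holding the stack limit.
struct Interrupts {
    stack_limit: AtomicUsize,
}

impl Interrupts {
    /// Called on entry to wasm: publish the current stack limit. No other
    /// memory is protected by this atomic, so `Relaxed` is sufficient; the
    /// value is merely a signal read by other threads.
    fn set_stack_limit(&self, limit: usize) -> usize {
        self.stack_limit.swap(limit, Ordering::Relaxed)
    }

    /// Called on exit from wasm: restore the previous limit.
    fn restore_stack_limit(&self, previous: usize) {
        self.stack_limit.store(previous, Ordering::Relaxed);
    }
}

fn main() {
    let interrupts = Interrupts { stack_limit: AtomicUsize::new(usize::MAX) };
    let prev = interrupts.set_stack_limit(0x7000_0000);
    // ... call into wasm ...
    interrupts.restore_stack_limit(prev);
    assert_eq!(interrupts.stack_limit.load(Ordering::Relaxed), usize::MAX);
}
```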
* Determine whether a pc is wasm via a global map
The macOS implementation of traps recently changed to use mach ports
instead of signal handlers. This broke a previously relied-upon invariant,
namely that each thread handles its own trap. The macOS implementation worked
around this by maintaining a global map from thread id to thread-local
information in order to solve the problem.
This global map is quite slow though. It involves taking a lock and
updating a hash map on all calls into WebAssembly. In my local testing
this accounts for >70% of the overhead of calling into WebAssembly on
macOS. Naturally it'd be great to remove this!
This commit fixes this issue and removes the global lock/map that is
updated on all calls into WebAssembly. The fix is to maintain a global
map of wasm modules and their trap addresses in the `wasmtime` crate.
Doing so is relatively simple since we're already tracking this
information at the `Store` level.
Once we've got a global map, the macOS implementation can use it from a
foreign thread and everything works out.
Locally this brings the overhead, on macOS specifically, of calling into
wasm from 80ns to ~20ns.
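Conceptually the new lookup works like this sketch (`register_module` and
`is_wasm_pc` are illustrative names, not the actual wasmtime internals):
compiled code ranges are inserted into a process-global map once per module,
and the trap handler only ever takes the read side.

```rust
use std::collections::BTreeMap;
use std::sync::RwLock;

// Process-global map from the start address of a module's compiled code to
// its end address. Illustrative only; names don't match wasmtime internals.
static WASM_CODE_RANGES: RwLock<BTreeMap<usize, usize>> =
    RwLock::new(BTreeMap::new());

/// Called once per compiled module, e.g. when it's registered with a `Store`.
fn register_module(start: usize, len: usize) {
    WASM_CODE_RANGES.write().unwrap().insert(start, start + len);
}

/// Called from the trap handler (possibly on a foreign thread on macOS) to
/// decide whether a faulting pc belongs to wasm code. Read-only, so no
/// per-call bookkeeping is needed when entering wasm.
fn is_wasm_pc(pc: usize) -> bool {
    let ranges = WASM_CODE_RANGES.read().unwrap();
    match ranges.range(..=pc).next_back() {
        Some((_start, end)) => pc < *end,
        None => false,
    }
}

fn main() {
    register_module(0x1000, 0x100);
    assert!(is_wasm_pc(0x1050));
    assert!(!is_wasm_pc(0x2000));
}
```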
* Fix compiles
* Review comments
This commit implements a few optimizations, mainly inlining, that should
improve the performance of calling a WebAssembly function. This code
path can be quite hot depending on the embedding, and we hadn't
really put much effort into optimizing its nitty-gritty details.
The predominant optimization here is adding `#[inline]` to trivial
functions so performance is improved without having to compile with LTO.
Another optimization is to call `lazy_per_thread_init` when traps are
initialized per-thread (when a `Store` is created) rather than each time
a function is called. The next optimization is to change the unwind
reason in the `CallThreadState` to `MaybeUninit` to avoid extra checks
in the default case about whether we need to drop its variants (since in
the happy path we never need to drop it). The final optimization is to
optimize out a few checks when `async` support is disabled for a small
speed boost.
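For the `MaybeUninit` change specifically, the pattern is roughly as follows
(type and method names are illustrative stand-ins): the unwind reason is only
written when a trap or panic actually unwinds, so the happy path neither
initializes nor drops it.

```rust
use std::cell::UnsafeCell;
use std::mem::MaybeUninit;

// Illustrative stand-ins for the real types.
#[allow(dead_code)] // `Panic` isn't exercised in this tiny demo
enum UnwindReason {
    Panic(String),
    Trap(&'static str),
}

struct CallThreadState {
    // With `MaybeUninit` the happy path never has to ask "is there an unwind
    // reason to drop?"; the field is only written when an unwind actually
    // happens, and read back exactly once afterwards.
    unwind: UnsafeCell<MaybeUninit<UnwindReason>>,
}

impl CallThreadState {
    fn new() -> CallThreadState {
        CallThreadState { unwind: UnsafeCell::new(MaybeUninit::uninit()) }
    }

    /// Record why we're unwinding, just before jumping out of wasm.
    unsafe fn set_unwind(&self, reason: UnwindReason) {
        (*self.unwind.get()).write(reason);
    }

    /// Take the reason back out; only called when we know an unwind occurred.
    unsafe fn take_unwind(&self) -> UnwindReason {
        (*self.unwind.get()).assume_init_read()
    }
}

fn main() {
    let state = CallThreadState::new();
    unsafe {
        state.set_unwind(UnwindReason::Trap("unreachable executed"));
        match state.take_unwind() {
            UnwindReason::Trap(msg) => assert_eq!(msg, "unreachable executed"),
            UnwindReason::Panic(_) => unreachable!(),
        }
    }
}
```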
In a small benchmark where wasmtime calls a simple wasm function my
macOS computer dropped from 110ns to 86ns overhead, a 20% decrease. The
macOS overhead is still largely dominated by the global lock acquisition
and hash table management for traps right now, but I suspect the Linux
overhead is much better (it should be on the order of 30ns or so).
We still have a long way to go to compete with SpiderMonkey which, in
testing, seems to have ~6ns of overhead calling the same wasm function on
my computer.
This logging step may be quite expensive, since logging has never been
optimized at all. Removing it is a clear win in compile times on my
machine for a large wasm module, for which parallel compilation drops
from 6 seconds to 1.5 seconds.
Co-authored-by: bjorn3 <bjorn3@users.noreply.github.com>
Add support for `poll_oneoff` calls which just sleep on a relative
timeout. This fixes a bug in handling code compiled with WASI libc's `sleep`
family of functions, which call `poll_oneoff` with a `CLOCK_REALTIME`
timer; that case wasn't previously implemented.
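A sketch of the "just sleep" fast path, with simplified stand-ins for the WASI
subscription types; only the shape of the special case is meant, the real
implementation works with the generated WASI types:

```rust
use std::thread;
use std::time::Duration;

// Simplified stand-ins for the wasi `subscription_clock` payload.
#[allow(dead_code)] // only Realtime is exercised in this demo
#[derive(Copy, Clone)]
enum ClockId {
    Realtime,
    Monotonic,
}

#[derive(Copy, Clone)]
struct ClockSubscription {
    clock: ClockId,
    timeout_ns: u64,
    absolute: bool,
}

/// Handle the "just sleep" case of poll_oneoff: exactly one relative timer and
/// no file-descriptor subscriptions. Returns false if the caller has to fall
/// back to the general polling path.
fn poll_oneoff_sleep_only(subs: &[ClockSubscription]) -> bool {
    match subs {
        // A relative timeout is just a sleep; WASI libc's sleep() family
        // issues a relative CLOCK_REALTIME subscription like this.
        [sub] if !sub.absolute => {
            let _ = sub.clock; // which clock doesn't matter for a relative wait
            thread::sleep(Duration::from_nanos(sub.timeout_ns));
            true
        }
        _ => false,
    }
}

fn main() {
    let subs = [ClockSubscription {
        clock: ClockId::Realtime,
        timeout_ns: 1_000_000,
        absolute: false,
    }];
    assert!(poll_oneoff_sleep_only(&subs));
}
```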
This bumps target-lexicon and adds support for the AppleAarch64 calling
convention. Specifically for WebAssembly support, we only have to worry
about the new stack slots convention. Stack slots don't need to be at
least 8 bytes; they can be as small as the data type's size. For
instance, if we need stack slots for (i32, i32), they can be located at
offsets (+0, +4). Note that they still need to be properly aligned for
the data type they contain, though, so if we need stack slots for
(i32, i64), we can't start the i64 slot at the +4 offset (it must start
at the +8 offset).
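The slot-assignment rule can be illustrated with a few lines of Rust (purely a
worked example of the alignment math, not the actual Cranelift ABI code):

```rust
/// Assign stack offsets for a list of argument sizes (in bytes), packing each
/// slot to the type's size while keeping it naturally aligned.
fn assign_stack_offsets(sizes: &[u32]) -> Vec<u32> {
    let mut offset = 0;
    let mut offsets = Vec::with_capacity(sizes.len());
    for &size in sizes {
        // Natural alignment == the type's size for the scalar types involved.
        offset = (offset + size - 1) / size * size;
        offsets.push(offset);
        offset += size;
    }
    offsets
}

fn main() {
    // Two i32s pack tightly: offsets +0 and +4.
    assert_eq!(assign_stack_offsets(&[4, 4]), vec![0, 4]);
    // An i32 followed by an i64: the i64 must be 8-byte aligned, so +0 and +8.
    assert_eq!(assign_stack_offsets(&[4, 8]), vec![0, 8]);
}
```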
Added one test that was failing on the Mac M1, as well as other tests
stressing different yet similar situations.
* Add assert to `StackPool::deallocate` to ensure the fiber stack given to it
comes from the pool.
* Remove outdated comment about windows and stacks as the allocator now returns
fiber stacks.
* Remove conditional compilation around `stack_size` in the allocators as it
was just clutter.
This commit splits out a `FiberStack` from `Fiber`, allowing the instance
allocator trait to return `FiberStack` rather than raw stack pointers. This
keeps the stack creation mostly in `wasmtime_fiber`, but now the on-demand
instance allocator can make use of it.
The instance allocators no longer have to return a "not supported" error to
indicate that the store should allocate its own fiber stack.
This includes a bunch of cleanup in the instance allocator to scope stacks to
the new "async" feature in the runtime.
Closes #2708.
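The resulting shape is roughly the following sketch; the trait and type names
are paraphrased rather than the exact `wasmtime_runtime`/`wasmtime_fiber` APIs,
and the real `FiberStack` mmaps its memory with a guard page instead of using a
boxed slice:

```rust
/// An owned stack that a fiber can later be created on top of.
struct FiberStack {
    mem: Box<[u8]>,
}

impl FiberStack {
    fn new(len: usize) -> std::io::Result<FiberStack> {
        Ok(FiberStack { mem: vec![0u8; len].into_boxed_slice() })
    }

    /// Stacks grow down, so the "top" is one past the end of the allocation.
    fn top(&self) -> *const u8 {
        self.mem.as_ptr().wrapping_add(self.mem.len())
    }
}

/// The allocator trait now hands back a `FiberStack` instead of a raw stack
/// pointer or a "not supported" error.
trait InstanceAllocator {
    fn allocate_fiber_stack(&self, size: usize) -> std::io::Result<FiberStack>;
}

struct OnDemandAllocator;

impl InstanceAllocator for OnDemandAllocator {
    fn allocate_fiber_stack(&self, size: usize) -> std::io::Result<FiberStack> {
        FiberStack::new(size)
    }
}

fn main() -> std::io::Result<()> {
    let stack = OnDemandAllocator.allocate_fiber_stack(64 * 1024)?;
    assert!(!stack.top().is_null());
    Ok(())
}
```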
* Switch macOS to using mach ports for trap handling
This commit moves macOS to using mach ports instead of signals for
handling traps. The motivation for this is listed in #2456, namely that
once mach ports are used in a process, traditional UNIX signal
handlers won't get used. This means that if Wasmtime is integrated with
Breakpad, for example, then Wasmtime's trap handler never fires and
traps don't work.
The `traphandlers` module is refactored as part of this commit to split
the platform-specific bits into their own files (it was growing quite a
lot for one inline `cfg_if!`). The `unix.rs` and `windows.rs` files
remain the same as they were before with a few minor tweaks for some
refactored interfaces. The `macos.rs` file is brand new and lifts almost
its entire implementation from SpiderMonkey, adapted for Wasmtime
though.
The main gotcha with mach ports is that a separate thread is what
services the exception. Some unsafe magic allows this separate thread to
read non-`Send` and temporary state from other threads, but this is hoped to
be safe in this context. The unfortunate downside is that calling wasm
on macOS now involves taking a global lock and modifying a global hash
map twice per call. I'm not entirely sure how to get out of this cost
for now, but hopefully for any embeddings on macOS it's not the end of
the world.
Closes #2456
* Add a sketch of arm64 apple support
* store: maintain CallThreadState mapping when switching fibers
* cranelift/aarch64: generate unwind directives to disable pointer auth
Aarch64 post ARMv8.3 has a feature called pointer authentication,
designed to fight ROP/JOP attacks: some pointers may be signed using new
instructions, adding payloads to the high (previously unused) bits of
the pointers. More on this here: https://lwn.net/Articles/718888/
Unwinders on aarch64 need to know whether pointers stored in the call
frame contain an authentication code or not, to be able to properly
authenticate them or use them directly. Since native code may have
pointer authentication enabled by default (as is the case on the Mac M1), and
this configuration value is inherited by default, we need to explicitly
disable it for the only kind of supported pointers (return addresses).
To do so, we set the value of a DWARF pseudo register (34), which doesn't
correspond to a real register, to 0, as documented in
https://github.com/ARM-software/abi-aa/blob/master/aadwarf64/aadwarf64.rst#note-8.
This is done at function granularity, in the spirit of Cranelift's
compilation model. Alternatively, a single directive could be generated
in the CIE, producing less unwind information per module.
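Concretely, the per-function directive amounts to a `DW_CFA_val_expression`
that assigns pseudo-register 34 the constant 0. A sketch of the raw CFI bytes,
independent of whichever unwind-info API Cranelift routes this through:

```rust
/// Build the CFI instruction `DW_CFA_val_expression reg34, {DW_OP_lit0}`,
/// which tells the unwinder that the return-address signing state is 0
/// (i.e. return addresses are not signed). Byte values are from the DWARF
/// spec; how these bytes are attached to the FDE is up to the embedder.
fn ra_sign_state_disable() -> Vec<u8> {
    const DW_CFA_VAL_EXPRESSION: u8 = 0x16;
    const DW_OP_LIT0: u8 = 0x30;
    const RA_SIGN_STATE_REG: u8 = 34; // AArch64 DWARF pseudo-register

    vec![
        DW_CFA_VAL_EXPRESSION,
        RA_SIGN_STATE_REG, // ULEB128 register number (34 fits in one byte)
        1,                 // ULEB128 length of the expression that follows
        DW_OP_LIT0,        // expression: push the literal 0
    ]
}

fn main() {
    assert_eq!(ra_sign_state_disable(), vec![0x16, 0x22, 0x01, 0x30]);
}
```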
* Make exception handling work on Mac aarch64 too
* fibers: use a breakpoint instruction after the final call in wasmtime_fiber_start
Co-authored-by: Alex Crichton <alex@alexcrichton.com>