* Reduce contention on the global module rwlock
This commit intends to close #4025 by reducing contention on the global
rwlock Wasmtime has for module information during instantiation and
dropping a store. Currently registration of a module into this global
map happens during instantiation, but this can be a hot path as
embeddings may want to instantiate modules in parallel.
Instead this switches to a strategy of inserting into the global module
map when a `Module` is created and then removing it from the map when
the `Module` is dropped. Registration in a `Store` now preserves the
entire `Module` within the store as opposed to trying to only save it
piecemeal. In reality the only piece that wasn't saved within a store
was the `TypeTables` which was pretty inconsequential for core wasm
modules anyway.
This means that instantiation should now clone a singular `Arc` into a
`Store` per `Module` (previously it cloned two) with zero management of
the global rwlock as that happened at `Module` creation time.
Additionally dropping a `Store` again involves zero rwlock management
and only a single `Arc` drop per-instantiated module (previously it was
two).
In the process of doing this I also went ahead and removed the
`Module::new_with_name` API. This has been difficult to support
historically with various variations on the internals of `ModuleInner`
because it involves mutating a `Module` after it's been created. My hope
is that this API is pretty rarely used and/or isn't super important, so
it's ok to remove.
Finally this change removes some internal `Arc` layerings that are no
longer necessary, attempting to use either `T` or `&T` where possible
without dealing with the overhead of an `Arc`.
Closes #4025
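Reduced to a minimal sketch with hypothetical types (not Wasmtime's actual
internals), the locking pattern described above looks roughly like this: the
write lock is taken once when a module is created and once when its last
`Arc` is dropped, while instantiation and store teardown only clone and drop
`Arc`s.
```rust
use std::collections::BTreeMap;
use std::sync::{Arc, RwLock};

// Hypothetical stand-ins for the real types; the point is only to show where
// the global rwlock is touched (module creation and drop) and where it is
// not (instantiation and store drop).
type Registry = Arc<RwLock<BTreeMap<usize, String>>>;

struct Module {
    id: usize,
    registry: Registry,
}

impl Module {
    fn new(registry: &Registry, id: usize, name: &str) -> Arc<Module> {
        // Write lock taken once, at `Module` creation time.
        registry.write().unwrap().insert(id, name.to_string());
        Arc::new(Module { id, registry: registry.clone() })
    }
}

impl Drop for Module {
    fn drop(&mut self) {
        // Write lock taken once more, when the last `Arc<Module>` goes away.
        self.registry.write().unwrap().remove(&self.id);
    }
}

struct Store {
    // Instantiation clones a single `Arc` per module; dropping the store
    // drops those clones without ever touching the global lock.
    modules: Vec<Arc<Module>>,
}
```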
* Move back to a `BTreeMap` in `ModuleRegistry`
Currently wasmtime's async tests use a mixture of `#[tokio::test]` and
`dummy_waker`. To be consistent this tries to move all tests possible to
`#[tokio::test]`, and just two need to keep using `dummy_waker` (now
renamed to `noop_waker`) due to what they're testing.
This commit removes support for the `userfaultfd` or "uffd" syscall on
Linux. This support was originally added for users migrating from Lucet
to Wasmtime, but the recent developments of kernel-supported
copy-on-write support for memory initialization wound up being more
appropriate for these use cases than userfaultfd. The main reasons for
moving to copy-on-write initialization are:
* The `userfaultfd` feature was never necessarily intended for this
style of use case with wasm and was susceptible to subtle and rare
bugs that were extremely difficult to track down. We were never 100%
certain that there were kernel bugs related to userfaultfd but the
suspicion never went away.
* Handling faults with userfaultfd was always slow and single-threaded.
Only one thread could handle faults and traveling to user-space to
handle faults is inherently slower than handling them all in the
kernel. The single-threaded aspect in particular presented a
significant scaling bottleneck for embeddings that want to run many
wasm instances in parallel.
* One of the major benefits of userfaultfd was lazy initialization of
wasm linear memory which is also achieved with the copy-on-write
initialization support we have right now.
* One of the suspected benefits of userfaultfd was less frobbing of the
kernel vma structures when wasm modules are instantiated. Currently
the copy-on-write support has a mitigation where we attempt to reuse
the memory images where possible to avoid changing vma structures.
When comparing this to userfaultfd's performance it was found that
kernel modifications of vmas aren't a worrisome bottleneck so
copy-on-write is suitable for this as well.
Overall there are no remaining benefits that userfaultfd gives that
copy-on-write doesn't, and copy-on-write solves a major downside of
userfaultfd: the scaling issue with a single faulting thread.
Additionally copy-on-write support seems much more robust in terms of
kernel implementation since it's only using standard memory-management
syscalls which are heavily exercised. Finally copy-on-write support
provides a new bonus where read-only memory in WebAssembly can be mapped
directly to the same kernel cache page, even amongst many wasm instances
of the same module, which was never possible with userfaultfd.
In light of all this it's expected that all users of userfaultfd should
migrate to the copy-on-write initialization of Wasmtime (which is
enabled by default).
* Store the `ValRaw` type in little-endian format
This commit changes the internal representation of the `ValRaw` type to
an unconditionally little-endian format instead of its current
native-endian format. The documentation and various accessors here have
been updated as well as the associated trampolines that read `ValRaw`
to always work with little-endian values, converting to the host
endianness as necessary.
The motivation for this change originally comes from the implementation
of the component model that I'm working on. One aspect of the component
model's canonical ABI is how variants are passed to functions as
immediate arguments. For example for a component model function:
```
foo: function(x: expected<i32, f64>)
```
This translates to a core wasm function:
```wasm
(module
  (func (export "foo") (param i32 i64)
    ;; ...
  )
)
```
The first `i32` parameter to the core wasm function is the discriminant
of whether the result is an "ok" or an "err". The second `i64`, however,
is the "join" operation on the `i32` and `f64` payloads. Essentially
these two types are unioned into one type to get passed into the function.
Currently in the implementation of the component model my plan is to
construct a `*mut [ValRaw]` to pass through to WebAssembly, always
invoking component exports through host trampolines. This means that the
implementation for `Result<T, E>` needs to do the correct "join"
operation here when encoding a particular case into the corresponding
`ValRaw`.
I personally found this particularly tricky to do structurally. The
solution that I settled on with fitzgen was that if `ValRaw` was always
stored in a little-endian format then we could employ a trick where, when
encoding a variant, we first set all the `ValRaw` slots to zero and then
encode the associated case we have. Afterwards the `ValRaw` values are
already encoded into the correct format as if they'd been "join"ed.
For example if we were to encode `Ok(1i32)` then this would produce
`ValRaw { i32: 1 }`, which memory-wise is equivalent to `ValRaw { i64: 1 }`
if the other bytes in the `ValRaw` are guaranteed to be zero. Similarly
storing `ValRaw { f64 }` is equivalent to the storage required for
`ValRaw { i64 }` here in the join operation.
Note, though, that this equivalence relies on everything being
little-endian. Otherwise the in-memory representations of `ValRaw { i32: 1 }`
and `ValRaw { i64: 1 }` are different.
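To make the join trick concrete, here is a small self-contained sketch using
a stand-in type rather than Wasmtime's actual `ValRaw`:
```rust
// Stand-in for `ValRaw`: 16 bytes of storage always interpreted as
// little-endian, regardless of the host's endianness.
#[derive(Copy, Clone)]
struct RawSlot([u8; 16]);

impl RawSlot {
    fn zeroed() -> RawSlot {
        RawSlot([0; 16])
    }
    fn set_i32(&mut self, x: i32) {
        self.0[..4].copy_from_slice(&x.to_le_bytes());
    }
    fn set_f64(&mut self, x: f64) {
        self.0[..8].copy_from_slice(&x.to_le_bytes());
    }
    fn get_i64(&self) -> i64 {
        i64::from_le_bytes(self.0[..8].try_into().unwrap())
    }
}

fn main() {
    // Encoding `Ok(1i32)` for `expected<i32, f64>`: zero the payload slot,
    // write the payload of the case we have, and the little-endian bytes
    // already form the "join"ed `i64` the core wasm signature expects.
    let mut payload = RawSlot::zeroed();
    payload.set_i32(1);
    assert_eq!(payload.get_i64(), 1); // same bytes as `ValRaw { i64: 1 }`

    // The `Err(f64)` case reuses the same storage, again read back as `i64`.
    let mut payload = RawSlot::zeroed();
    payload.set_f64(2.5);
    assert_eq!(payload.get_i64(), 2.5f64.to_bits() as i64);
}
```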
That motivation is what leads to this change. It's expected that this is
a low-to-zero cost change in the sense that little-endian platforms will
see no change and big-endian platforms are already required to
efficiently byte-swap loads/stores as WebAssembly requires that.
Additionally the `ValRaw` type is an esoteric niche use case primarily
used for accelerating the C API right now, so it's expected that not
many users will have to update for this change.
* Track down some more endianness conversions
* Refactor away the `Instantiator` type in Wasmtime
This internal type in Wasmtime was primarily used for the module linking
proposal to handle instantiation of many instances and refactor out the
sync and async parts to minimize duplication. With the removal of the
module linking proposal, however, this type isn't really necessary any
longer. In working to implement the component model proposal I was
looking already to refactor this and I figured it'd be good to land that
ahead of time on `main` separate of other refactorings.
This commit removes the `Instantiator` type in the `instance` module.
The type was already private to Wasmtime so this shouldn't have any
impact on consumers. This allows simplifying various code paths to avoid
another abstraction. The meat of instantiation is moved to
`Instance::new_raw` which should be reusable for the component model as
well.
One bug is actually fixed in this commit as well where
`Linker::instantiate` and `InstancePre::instantiate` failed to check
that async support was disabled on a store. This means that they could
have led to a panic if used with an async store and a start function
called an async import (or an async resource limiter yielded). A few
tests were updated accordingly.
* Review comments
* Upgrade all crates to the Rust 2021 edition
I've personally started using the new format strings for things like
`panic!("some message {foo}")` or similar and have been upgrading crates
on a case-by-case basis, but I think it probably makes more sense to go
ahead and blanket upgrade everything so 2021 features are always
available.
* Fix compile of the C API
* Fix a warning
* Fix another warning
* Run the GC smoketest with epoch support enabled as well.
* Handle safepoints in cold blocks properly.
Currently, the way that we find safepoint slots for a given instruction
relies on the instruction index order in the safepoint list matching the
order of instruction emission.
Previous to the introduction of cold-block support, this was trivially
satisfied by sorting the safepoint list: we emit instructions 0, 1, 2,
3, 4, ..., and so if we have safepoints at instructions 1 and 4, we will
encounter them in that order.
However, cold blocks are supported by swizzling the emission order at
the last moment (to avoid having to renumber instructions partway
through the compilation pipeline), so we actually emit instructions out
of index order when cold blocks are present.
Reference-type support in Wasm in particular uses cold blocks for
slowpaths, and has live refs and safepoints in these slowpaths, so we
could reliably "skip" a safepoint (emit no metadata for it) in the
presence of reftype usage.
This PR fixes the emission code by building a map from instruction index
to safepoint index first, then doing lookups through this map, rather
than following along in-order as it emits instructions.
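A rough sketch of that approach, with illustrative types rather than
Cranelift's actual data structures:
```rust
use std::collections::HashMap;

// Safepoints recorded per instruction index, sorted by index but possibly
// *emitted* in a different order once cold blocks are moved to the end.
struct Safepoint {
    inst_index: usize,
    live_ref_slots: Vec<u32>,
}

// Build the instruction-index -> safepoint-index map up front...
fn build_safepoint_map(safepoints: &[Safepoint]) -> HashMap<usize, usize> {
    safepoints
        .iter()
        .enumerate()
        .map(|(sp_idx, sp)| (sp.inst_index, sp_idx))
        .collect()
}

// ...then, at emission time, look each instruction up directly instead of
// walking the sorted list in lockstep, so out-of-order (cold-block) emission
// still finds its safepoint metadata.
fn safepoint_for(map: &HashMap<usize, usize>, inst_index: usize) -> Option<usize> {
    map.get(&inst_index).copied()
}
```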
* Remove the module linking implementation in Wasmtime
This commit removes the experimental implementation of the module
linking WebAssembly proposal from Wasmtime. The module linking is no
longer intended for core WebAssembly but is instead incorporated into
the component model now at this point. This means that very large parts
of Wasmtime's implementation of module linking are no longer applicable
and would change greatly with an implementation of the component model.
The main purpose of this is to remove Wasmtime's reliance on the support
for module-linking in `wasmparser` and tooling crates. With this
reliance removed we can move over to the `component-model` branch of
`wasmparser` and use the updated support for the component model.
Additionally given the trajectory of the component model proposal the
embedding API of Wasmtime will not look like what it looks like today
for WebAssembly. For example the core wasm `Instance` will not change;
instead a `Component` is likely to be added.
Some more rationale for this is in #3941, but the basic idea is that I
feel that it's not going to be viable to develop support for the
component model on a non-`main` branch of Wasmtime. Additionally I don't
think it's viable, for the same reasons as `wasm-tools`, to support the
old module linking proposal and the new component model at the same
time.
This commit takes a moment to not only delete the existing module
linking implementation but also to simplify some abstractions. For
example module serialization is a bit simpler now that there's only one
module. Additionally instantiation is much simpler since the only
initializers we have to deal with are imports and nothing else.
Closes #3941
* Fix doc link
* Update comments
* Instead of simply panicking, return an error when we attempt to resume on a dying fiber.
This situation should never occur in the existing code base, but can be
triggered once support for running async code in a call hook is added.
* Shift `async_cx()` to return an `Option`, reflecting if the fiber is dying.
This should never happen in the existing code base, but is a nice
forward-looking guard. The current implementations simply lift the
trap that would eventually be produced by such an operation into
a `Trap` (or similar) at the invocation of `async_cx()`.
* Add support for using `async` call hooks.
This retains the ability to do non-async hooks. Hooks end up being
implemented as an async trait with a handler call, to get around some
issues passing around async closures. This change requires some of
the prior changes to handle picking up blocked tasks during fiber
shutdown, to avoid some panics during timeouts and other such events.
* More fully specify a doc link, to avoid a doc-building error.
* Revert the use of catchable traps on cancellation of a fiber; turn them into expect()/unwrap().
The justification for this revert is that (a) these events shouldn't
happen, and (b) they wouldn't be catchable by wasm anyways.
* Replace a duplicated check in `async` hook evaluation with a single check.
This also moves the checks inside of their respective Async variants,
meaning that if you're using an async-enabled version of wasmtime but
using the synchronous versions of the callbacks, you won't pay any
penalty for validating the async context.
* Use `match &mut ...` instead of `ref mut`.
* Add some documentation on why/when `async_cx` can return None.
* Add two simple test cases for async call hooks.
* Fix async_cx() to check both the box and the value for current_poll_cx.
In the prior version, we only checked that the box had not been cleared,
but had not ensured that there was an actual context for us to use. This
updates the check to validate both, returning None if the inner context
is missing. This allows us to skip a validation check inside `block_on`,
since all callers will have run through the `async_cx` check prior to
arrival.
* Tweak the timeout test to address PR suggestions.
* Add a test about dropping async hooks while suspended
Should help exercise that the check for `None` is properly handled in a
few more locations.
Co-authored-by: Alex Crichton <alex@alexcrichton.com>
* Delete historical interruptable support in Wasmtime
This commit removes the `Config::interruptable` configuration along with
the `InterruptHandle` type from the `wasmtime` crate. The original
support for adding interruption to WebAssembly was added pretty early on
in the history of Wasmtime when there was no other method to prevent an
infinite loop from the host. Nowadays, however, there are alternative
methods for interruption such as fuel or epoch-based interruption.
One of the major downsides of `Config::interruptable` is that even when
it's not enabled it forces an atomic swap to happen when entering
WebAssembly code. This technically could be a non-atomic swap if the
configuration option isn't enabled but that produces even more branch-y
code on entry into WebAssembly which is already something we try to
optimize. Calling into WebAssembly is on the order of dozens of
nanoseconds at this time and an atomic swap, even uncontended, can add
up to 5ns on some platforms.
The main goal of this PR is to remove this atomic swap on entry into
WebAssembly. This is done by removing the `Config::interruptable` field
entirely, moving all existing consumers to epochs instead which are
suitable for the same purposes. This means that the stack overflow check
is no longer entangled with the interruption check and perhaps one day
we could continue to optimize that further as well.
Some consequences of this change are:
* Epochs are now the only method of remote-thread interruption.
* There are no more Wasmtime traps that produce the `Interrupted` trap
code, although we may wish to move future traps to this so I left it
in place.
* The C API support for interrupt handles was also removed and bindings
for epoch methods were added.
* Function-entry checks for interruption are a tiny bit less efficient
since one check is performed for the stack limit and a second is
performed for the epoch as opposed to the `Config::interruptable`
style of bundling the stack limit and the interrupt check in one. It's
expected though that this is likely to not really be measurable.
* The old `VMInterrupts` structure is renamed to `VMRuntimeLimits`.
* Implement runtime checks for compilation settings
This commit fills out a few FIXME annotations by implementing run-time
checks that when a `Module` is created it has compatible codegen
settings for the current host (as `Module` is proof of "this code can
run"). This is done by implementing new `Engine`-level methods which
validate compiler settings. These settings are validated on
`Module::new` as well as when loading serialized modules.
Settings are split into two categories, one for "shared" top-level
settings and one for ISA-specific settings. Both categories now have
allow-lists hardcoded into `Engine` which indicate the acceptable values
for each setting (if applicable). ISA-specific settings are checked with
the Rust standard library's `std::is_x86_feature_detected!` macro. Other
macros for other platforms are not stable at this time but can be added
here if necessary.
Closes #3897
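As a rough illustration of what an ISA-specific check can look like (the
setting names below are made up for the example, not Wasmtime's exact
allow-list):
```rust
// x86-only sketch: for an ISA setting the module was compiled with, confirm
// the host actually has the feature. `std::is_x86_feature_detected!` is the
// standard-library macro mentioned above.
#[cfg(target_arch = "x86_64")]
fn check_x86_setting(name: &str, enabled: bool) -> Result<(), String> {
    if !enabled {
        // A disabled setting never makes compiled code unrunnable on the host.
        return Ok(());
    }
    let host_has = match name {
        "has_sse3" => std::is_x86_feature_detected!("sse3"),
        "has_avx" => std::is_x86_feature_detected!("avx"),
        "has_avx2" => std::is_x86_feature_detected!("avx2"),
        other => return Err(format!("unknown ISA setting: {other}")),
    };
    if host_has {
        Ok(())
    } else {
        Err(format!("module was compiled with `{name}` but the host doesn't support it"))
    }
}
```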
* Fix fall-through logic to actually be correct
* Use a `OnceCell`, not an `AtomicBool`
* Fix some broken tests
This commit fixes calling the call hooks configured in a store for host
functions defined with `Func::new_unchecked` or similar. I believe that
this was just an accidental oversight and there's no fundamental reason
to not support this.
This commit aims to achieve the goal of being able to run the test suite
on Windows with `--test-threads 1`, or more notably allowing Wasmtime's
defaults to work better with the main thread on Windows, which appears
to have a smaller default stack than Linux. In decreasing the default
wasm stack size a test is also updated to probe for less
stack to work on Windows' main thread by default, ideally allowing the
full test suite to work with `--test-threads 1` (although this isn't
added to CI as it's not really critical).
Closes#3857
* Remove the `ModuleLimits` pooling configuration structure
This commit is an attempt to improve the usability of the pooling
allocator by removing the need to configure a `ModuleLimits` structure.
Internally this structure has limits on all forms of wasm constructs but
this largely bottoms out in the size of an allocation for an instance in
the instance pooling allocator. Maintaining this list of limits can be
cumbersome as modules may get tweaked over time and there's otherwise no
real reason to limit the number of globals in a module since the main
goal is to limit the memory consumption of a `VMContext` which can be
done with a memory allocation limit rather than fine-tuned control over
each maximum and minimum.
The new approach taken in this commit is to remove `ModuleLimits`. Some
fields, such as `tables`, `table_elements`, `memories`, and
`memory_pages` are moved to `InstanceLimits` since they're still
enforced at runtime. A new field `size` is added to `InstanceLimits`
which indicates, in bytes, the maximum size of the `VMContext`
allocation. If the size of a `VMContext` for a module exceeds this value
then instantiation will fail.
This involved adding a few more checks to `{Table, Memory}::new_static`
to ensure that the minimum size is able to fit in the allocation, since
this was previously validated when the module was compiled and that
validation no longer happens at compile time (it happens at runtime).
A consequence of this commit is that Wasmtime will have no built-in way
to reject modules at compile time if they'll fail to be instantiated
within a particular pooling allocator configuration. Instead a module
must attempt instantiation to see if a failure happens.
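To make the new knob concrete, here is a minimal sketch of the check it
implies; the struct and field names follow the description above but aren't
necessarily the exact public API:
```rust
// Illustrative only: the runtime check implied by the new `size` field.
struct InstanceLimits {
    /// Maximum size, in bytes, of the `VMContext` allocation per instance.
    size: usize,
    /// Runtime-enforced limits that moved over from `ModuleLimits`.
    tables: u32,
    table_elements: u32,
    memories: u32,
    memory_pages: u64,
}

fn check_instance_fits(vmcontext_size: usize, limits: &InstanceLimits) -> Result<(), String> {
    if vmcontext_size > limits.size {
        return Err(format!(
            "instance requires {vmcontext_size} bytes of VMContext but the limit is {} bytes",
            limits.size,
        ));
    }
    Ok(())
}
```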
* Fix benchmark compiles
* Fix some doc links
* Fix a panic by ensuring modules have limited tables/memories
* Review comments
* Add back validation at `Module` creation time that instantiation is possible
This allows for getting an early signal at compile time that a module
will never be instantiable in an engine with matching settings.
* Provide a better error message when sizes are exceeded
Improve the error message when an instance size exceeds the maximum by
providing a breakdown of where the bytes are all going and why the large
size is being requested.
* Try to fix test in qemu
* Flag new test as 64-bit only
Sizes are all specific to 64-bit right now
* Port fix for `CVE-2022-23636` to `main`.
This commit ports the fix for `CVE-2022-23636` to `main`, but performs a
refactoring that makes it unnecessary for the instance itself to track if it
has been initialized; such a change was not targeted enough for a security
patch.
The pooling allocator will now only initialize an instance if all of its
associated resource creation succeeds. If the resource creation fails, no
instance is dropped as none was initialized.
Also updates `RELEASES.md` to include the related patch releases.
* Add `Instance::new_at` to fully initialize an instance.
Added `Instance::new_at` to fully initialize an instance at a given address.
This will hopefully prevent the possibility that an `Instance` structure
doesn't have an initialized `VMContext` when it is dropped.
This commit corrects a few places where `MemoryIndex` was used and treated like
a `DefinedMemoryIndex` in the pooling instance allocator.
When the unstable `multi-memory` proposal is enabled, it is possible to cause a
newly allocated instance to use an incorrect base address for any defined
memories by having the module being instantiated also import a memory.
This requires enabling the unstable `multi-memory` proposal, configuring the
use of the pooling instance allocator (not the default), and then configuring
the module limits to allow imported memories (also not the default).
The fix is to replace all uses of `MemoryIndex` with `DefinedMemoryIndex` in
the pooling instance allocator.
Several `debug_assert!` have also been updated to `assert!` to sanity check the
state of the pooling allocator even in release builds.
* Consolidate methods of memory initialization
This commit consolidates the few locations that we have which are
performing memory initialization. Namely the uffd logic for creating
paged memory as well as the memfd logic for creating a memory image now
share an implementation to avoid duplicating bounds-checks or other
validation conditions. The main purpose of this commit is to fix a
fuzz-bug where a multiplication overflowed. The overflow itself was
benign but it seemed better to fix the overflow in only one place
instead of multiple.
The overflow in question is specifically when an initializer is checked
to be statically out-of-bounds and multiplies a memory's minimum size by
the wasm page size, returning the result as a `u64`. For
memory64-memories of size `1 << 48` this multiplication will overflow.
This was actually a preexisting bug with the `try_paged_init` function
which was copied for memfd, but cropped up here since memfd is used more
often than paged initialization. The fix here is to skip validation of
the `end` index if the size of memory is `1 << 64` since if the `end`
index can be represented as a `u64` then it's in-bounds. This is
somewhat of an esoteric case, though, since a memory of minimum size `1
<< 64` can't ever exist (we can't even ask the os for that much memory,
and even if we could it would fail).
* Fix memfd test
* Fix some tests
* Remove InitMemory enum
* Add an `is_segmented` helper method
* More clear variable name
* Make arguments to `init_memory` more descriptive
This fixes a bug in the memfd-related management of a linear memory
where for dynamic memories memfd wasn't informed of the extra room that
the dynamic memory could grow into, only the actual size of linear
memory, which ended up tripping an assert once the memory was grown. The
fix here is pretty simple which is to factor in this extra space when
passing the allocation size to the creation of the `MemFdSlot`.
This PR introduces a new way of performing cooperative timeslicing that
is intended to replace the "fuel" mechanism. The tradeoff is that this
mechanism interrupts with less precision: not at deterministic points
where fuel runs out, but rather when the Engine enters a new epoch. The
generated code instrumentation is substantially faster, however, because
it does not need to do as much work as when tracking fuel; it only loads
the global "epoch counter" and does a compare-and-branch at backedges
and function prologues.
This change has been measured as ~twice as fast as fuel-based
timeslicing for some workloads, especially control-flow-intensive
workloads such as the SpiderMonkey JS interpreter on Wasm/WASI.
The intended interface is that the embedder of the `Engine` performs an
`engine.increment_epoch()` call periodically, e.g. once per millisecond.
An async invocation of a Wasm guest on a `Store` can specify a number of
epoch-ticks that are allowed before an async yield back to the
executor's event loop. (The initial amount and automatic "refills" are
configured on the `Store`, just as for fuel.) This call does only
signal-safe work (it increments an `AtomicU64`) so could be invoked from
a periodic signal, or from a thread that wakes up once per period.
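A sketch of that embedding pattern using the synchronous flavor of the API
(assuming `Config::epoch_interruption`, `Engine::increment_epoch`, and
`Store::set_epoch_deadline`, where exceeding the deadline traps rather than
yields):
```rust
use std::{thread, time::Duration};
use wasmtime::{Config, Engine, Instance, Module, Store};

fn main() -> anyhow::Result<()> {
    let mut config = Config::new();
    config.epoch_interruption(true);
    let engine = Engine::new(&config)?;

    // Tick the epoch roughly once per millisecond from a background thread.
    // `increment_epoch` only bumps an `AtomicU64`, so it's cheap and could
    // just as well be driven from a periodic signal.
    let ticker = engine.clone();
    thread::spawn(move || loop {
        thread::sleep(Duration::from_millis(1));
        ticker.increment_epoch();
    });

    let module = Module::new(
        &engine,
        r#"(module (func (export "run") (loop $l (br $l))))"#,
    )?;
    let mut store = Store::new(&engine, ());
    // Allow one epoch tick before the guest is interrupted; with the async
    // APIs this deadline instead triggers a yield back to the executor.
    store.set_epoch_deadline(1);

    let instance = Instance::new(&mut store, &module, &[])?;
    let run = instance.get_func(&mut store, "run").unwrap();
    assert!(run.call(&mut store, &[], &mut []).is_err()); // interrupted
    Ok(())
}
```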
* fuzz: Refactor Wasmtime's fuzz targets
A recent fuzz bug found is related to timing out when compiling a
module. This timeout, however, is predominately because Cranelift's
debug verifier is enabled and taking up over half the compilation time.
I wanted to fix this by disabling the verifier when input modules might
have a lot of functions, but this was pretty difficult to implement.
Over time we've grown a number of various fuzzers. Most are
`wasm-smith`-based at this point but there's various entry points for
configuring the wasm-smith module, the wasmtime configuration, etc. I've
historically gotten quite lost in trying to change defaults and feeling
like I have to touch a lot of different places. This is the motivation
for this commit, simplifying fuzzer default configuration.
This commit removes the ability to create a default `Config` for
fuzzing, instead only supporting generating a configuration via
`Arbitrary`. This then involved refactoring all targets and fuzzers to
ensure that configuration is generated through `Arbitrary`. This should
actually expand the coverage of some existing fuzz targets since
`Arbitrary for Config` will tweak options that don't affect runtime,
such as memory configuration or jump veneers.
All existing fuzz targets are refactored to use this new method of
configuration. Some fuzz targets were also shuffled around or
reimplemented:
* `compile` - this now directly calls `Module::new` to skip all the
fuzzing infrastructure. This is mostly done because this fuzz target
isn't too interesting and is largely just seeing what happens when
things are thrown at the wall for Wasmtime.
* `instantiate-maybe-invalid` - this fuzz target now skips instantiation
and instead simply goes into `Module::new` like the `compile` target.
The rationale behind this is that most modules won't instantiate
anyway and this fuzz target is primarily fuzzing the compiler. This
skips having to generate arbitrary configuration since
wasm-smith-generated-modules (or valid ones at least) aren't used
here.
* `instantiate` - this fuzz target was removed. In general this fuzz
target isn't too interesting in isolation. Almost everything it deals
with likely won't pass compilation and is covered by the `compile`
fuzz target, and otherwise interesting modules being instantiated can
all theoretically be created by `wasm-smith` anyway.
* `instantiate-wasm-smith` and `instantiate-swarm` - these were both merged
into a new `instantiate` target (replacing the old one from above).
There wasn't really much need to keep these separate since they really
only differed at this point in methods of timeout. Otherwise we much
more heavily use `SwarmConfig` than wasm-smith's built-in options.
The intention is that we should still have basically the same coverage
of fuzzing as before, if not better because configuration is now
possible on some targets. Additionally there is one centralized point of
configuration for fuzzing for wasmtime, `Arbitrary for ModuleConfig`.
This internally creates an arbitrary `SwarmConfig` from `wasm-smith` and
then further tweaks it for Wasmtime's needs, such as enabling various
wasm proposals by default. In the future enabling a wasm proposal on
fuzzing should largely just be modifying this one trait implementation.
* fuzz: Sometimes disable the cranelift debug verifier
This commit disables the cranelift debug verifier if the input wasm
module might be "large", where "large" is defined as more than 10 functions.
While fuzzing we disable threads (set them to 1) and enable the
cranelift debug verifier. Coupled with a 20-30x slowdown this means that
a module with the maximum number of functions, 100, gives:
60s / 100 functions / 30x slowdown = 20ms
With only 20 milliseconds per function this is even further halved by
the `differential` fuzz target compiling a module twice, which means
that, when compiling with a normal release mode Wasmtime, if any
function takes more than 10ms to compile then it's a candidate for
timing out while fuzzing. Given that the cranelift debug verifier can
more than double compilation time in fuzzing mode this actually means
that the real time budget for function compilation is more like 4ms.
The `wasm-smith` crate can pretty easily generate a large function that
takes 4ms to compile, and then when that function is multiplied 100x in
the `differential` fuzz target we trivially time out the fuzz target.
The hope of this commit is to buy back half our budget by disabling the
debug verifier for modules that may have many functions. Further
refinements can be implemented in the future such as limiting functions
for just the differential target as well.
* Fix the single-function-module fuzz configuration
* Tweak how features work in differential fuzzing
* Disable everything for baseline differential fuzzing
* Enable selectively for each engine afterwards
* Also forcibly enable reference types and bulk memory for spec tests
* Log wasms when compiling
* Add reference types support to v8 fuzzer
* Fix timeouts via fuel
The default store has "infinite" fuel so that needs to be consumed
before fuel is added back in.
* Remove fuzzing-specific tests
These no longer compile and also haven't been added to in a long time.
Most of the time a reduced form of the original fuzz test case is added
when a fuzz bug is fixed.
* Add a compilation section to disable address maps
This commit adds a new `Config::generate_address_map` compilation
setting which is used to disable emission of the `.wasmtime.addrmap`
section of compiled artifacts. This section is currently around the size
of the entire `.text` section itself unfortunately and for size reasons
may wish to be omitted. Functionality-wise all that is lost is knowing
the precise wasm module offset of a faulting instruction, either for the
fault itself or for the frames of a backtrace. This also means that if
the module has DWARF
debugging information available with it Wasmtime isn't able to produce a
filename and line number in the backtrace.
This option remains enabled by default. This option may not be needed in
the future with #3547 perhaps, but in the meantime it seems reasonable
enough to support a configuration mode where the section is entirely
omitted if the smallest module possible is desired.
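For example, a size-sensitive embedding might configure this roughly as
follows (a sketch using the `Config::generate_address_map` method described
above):
```rust
use wasmtime::{Config, Engine, Module};

fn main() -> anyhow::Result<()> {
    let mut config = Config::new();
    // Skip emitting the `.wasmtime.addrmap` section; the trade-off is losing
    // precise wasm offsets (and DWARF file/line info) in trap backtraces.
    config.generate_address_map(false);
    let engine = Engine::new(&config)?;

    let module = Module::new(&engine, "(module)")?;
    // The serialized artifact is correspondingly smaller without the map.
    let _bytes = module.serialize()?;
    Ok(())
}
```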
* Fix some CI issues
* Update tests/all/traps.rs
Co-authored-by: Nick Fitzgerald <fitzgen@gmail.com>
* Do less work in compilation for address maps
But only when disabled
Co-authored-by: Nick Fitzgerald <fitzgen@gmail.com>
Following up on WebAssembly/wasi-sdk#210, this makes the trap message
for `unreachable` traps more descriptive of what actually caused the
trap, so that it doesn't sound like maybe Wasmtime itself executed a
`unreachable!()` macro in Rust.
Before:
```
wasm trap: unreachable
wasm backtrace:
[...]
```
After:
```
wasm trap: wasm `unreachable` instruction executed
wasm backtrace:
[...]
```
This pulls in a fix for Android, where Android's seccomp policy on older
versions is to make `openat2` irrecoverably crash the process, so we have
to do a version check up front rather than relying on `ENOSYS` to
determine if `openat2` is supported.
And it pulls in the fix for the link errors when multiple versions of
rsix/rustix are linked in.
And it has updates for two crate renamings: rsix has been renamed to
rustix, and unsafe-io has been renamed to io-extras.
When encountering a subprogram that is dead code (as indicated by the
dead code proposal
https://dwarfstd.org/ShowIssue.php?issue=200609.1), don't generate debug
output for the subprogram or any of its children.
* Add a configuration option to force "static" memories
In poking around at some things earlier today I realized that one
configuration option for memories we haven't exposed from embeddings
like the CLI is to forcibly limit the size of memory growth and force
using a static memory style. This means that the CLI, for example, can't
limit memory growth by default and memories are only limited in size by
what the OS can give and the wasm's own memory type. This configuration
option means that the CLI can artificially limit the size of wasm linear
memories.
Additionally another motivation for this is for testing out various
codegen ramifications of static/dynamic memories. This is the only way
to force a static memory, by default, for wasm64 memories with no
maximum size listed for example.
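A sketch of how an embedding might use this, where the `static_memory_forced`
method name is an assumption based on the description above:
```rust
use wasmtime::{Config, Engine, Module};

fn main() -> anyhow::Result<()> {
    let mut config = Config::new();
    // Force the "static" memory style even where Wasmtime would otherwise
    // choose a dynamic memory (e.g. a wasm64 memory with no listed maximum).
    // Method name assumed from the description above.
    config.static_memory_forced(true);
    // With static memories the reservation size also acts as a hard cap on
    // how far the memory can grow.
    config.static_memory_maximum_size(256 << 20); // 256 MiB
    let engine = Engine::new(&config)?;
    let _module = Module::new(&engine, "(module (memory 1))")?;
    Ok(())
}
```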
* Review feedback
When there is a linking error caused by an undefined instance, list all
of the instance's exports in the error message. This will clarify errors for
undefined two-level imports that get desugared to one-level instance
imports under the module-linking proposal.
This commit fixes an issue in Cranelift where legalization of
`heap_addr` instructions (used by wasm to represent heap accesses) could
be off-by-two where loads that should be valid were actually treated as
invalid. The bug here happened in an optimization where tests against
odd constants were being altered to tests against even constants by
subtracting one from the limit instead of adding one to the limit. The
comment around this area has been updated with a bit more of the
underlying math as well to help future readers.
* Update the spec reference testsuite submodule
This commit brings in recent updates to the spec test suite. Most of the
changes here were already fixed in `wasmparser` with some tweaks to
esoteric modules, but Wasmtime also gets a bug fix where import
matching for the size of tables/memories is based on the current runtime
size of the table/memory rather than the original type of the
table/memory. This means that during type matching the actual value is
consulted for its size rather than using the minimum size listed in its
type.
* Fix now-missing directories in build script
* Optimize `Func::call` and its C API
This commit is an alternative to #3298 which achieves effectively the
same goal of optimizing the `Func::call` API as well as its C API
sibling of `wasmtime_func_call`. The strategy taken here is different
than #3298 though where a new API isn't created, rather a small tweak to
an existing API is done. Specifically this commit handles the major
sources of slowness with `Func::call` with:
* Looking up the type of a function, used to typecheck the arguments and
to guide how the results should be loaded, no longer hits the rwlock
in the `Engine`; instead each `Func` contains its own
`FuncType`. This can be an unnecessary allocation for funcs not used
with `Func::call`, so this is a downside of this implementation
relative to #3298. A mitigating factor, though, is that instance
exports are loaded lazily into the `Store` and in theory not too many
funcs are active in the store as `Func` objects.
* Temporary storage is amortized with a long-lived `Vec` in the `Store`
rather than allocating a new vector on each call. This is basically
the same strategy as #3294 only applied to different types in
different places. Specifically `wasmtime::Store` now retains a
`Vec<u128>` for `Func::call`, and the C API retains a `Vec<Val>` for
calling `Func::call`.
* Finally, an API breaking change is made to `Func::call` and its type
signature (as well as `Func::call_async`). Instead of returning
`Box<[Val]>` as it did before this function now takes a
`results: &mut [Val]` parameter. This allows the caller to manage the
allocation and we can amortize-remove it in `wasmtime_func_call` by
using space after the parameters in the `Vec<Val>` we're passing in.
This change is naturally a breaking change and we'll want to consider
it carefully, but mitigating factors are that most embeddings are
likely using `TypedFunc::call` instead and this signature taking a
mutable slice better aligns with `Func::new` which receives a mutable
slice for the results.
Overall this change, in the benchmark of "call a nop function from the C
API" is not quite as good as #3298. It's still a bit slower, on the
order of 15ns, because there's lots of capacity checks around vectors
and the type checks are slightly less optimized than before. Overall
though this is still significantly better than today because allocations
and the rwlock to acquire the type information are both avoided. I
personally feel that this change is the best to do because it has less
of an API impact than #3298.
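A sketch of the resulting call shape, with the caller providing (and able to
reuse) the results buffer:
```rust
use wasmtime::{Engine, Instance, Module, Store, Val};

fn main() -> anyhow::Result<()> {
    let engine = Engine::default();
    let module = Module::new(
        &engine,
        r#"(module
            (func (export "add") (param i32 i32) (result i32)
              local.get 0
              local.get 1
              i32.add))"#,
    )?;
    let mut store = Store::new(&engine, ());
    let instance = Instance::new(&mut store, &module, &[])?;
    let add = instance.get_func(&mut store, "add").unwrap();

    // The caller owns (and can reuse) the results storage; nothing is
    // allocated and returned as a `Box<[Val]>` anymore.
    let mut results = [Val::I32(0)];
    add.call(&mut store, &[Val::I32(1), Val::I32(2)], &mut results)?;
    assert_eq!(results[0].i32(), Some(3));
    Ok(())
}
```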
* Rebase issues
* Use rsix to make system calls in Wasmtime.
`rsix` is a system call wrapper crate that we use in `wasi-common`,
which can provide the following advantages in the rest of Wasmtime:
- It eliminates some `unsafe` blocks in Wasmtime's code. There's
still an `unsafe` block in the library, but this way, the `unsafe`
is factored out and clearly scoped.
- And, it makes error handling more consistent, factoring out code for
checking return values and `io::Error::last_os_error()`, and code that
does `errno::set_errno(0)`.
This doesn't cover *all* system calls; `rsix` doesn't implement
signal-handling APIs, and this doesn't cover calls made through `std` or
crates like `userfaultfd`, `rand`, and `region`.
This test uses `rlimit` which can't be executed in parallel with other
tests. Previously this used `libc::fork` but the call afterwards to
`libc::wait` was racing all other child subprocesses since it would wait
for any child instead of the specific child we were interested in. There
was also difficulty getting the output of the child back to the parent
on failure, so this commit simplifies the situation by moving the test
to its own executable where it's the only test.
* Allow the `ResourceLimiter` to reject a memory grow before the
memory's own maximum.
* Add a hook so a `ResourceLimiter` can detect any reason that
a memory grow fails, including if the OS denies additional memory.
* Add tests for this new functionality. I only took the time to
test the OS denial on Linux; it should be possible on macOS
as well but I don't have a test setup. I have no idea how to
do this on Windows.
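A hedged sketch of a limiter built around these hooks; the real
`ResourceLimiter` trait methods (`memory_growing`, `memory_grow_failed`, etc.)
have version-specific signatures, so this only shows the decision logic
rather than the exact trait impl:
```rust
// A limiter that rejects growth beyond a budget and records OS-level
// failures. In a real embedding this logic would live in an
// `impl wasmtime::ResourceLimiter`; the exact trait signatures are elided
// here since they are version-specific.
struct MyLimiter {
    /// Maximum bytes of linear memory this store may use, possibly smaller
    /// than the memory's own declared maximum.
    memory_budget: usize,
    /// Set when a grow failed for reasons outside wasm's control, e.g. the
    /// OS refusing to provide more memory.
    saw_os_failure: bool,
}

impl MyLimiter {
    // Decision made before the grow is attempted: allow it only if the
    // desired size fits in our budget.
    fn allow_memory_grow(&mut self, _current: usize, desired: usize) -> bool {
        desired <= self.memory_budget
    }

    // Hook invoked when a grow that we allowed still failed (for example the
    // OS denied the additional memory), so the embedder can observe why.
    fn memory_grow_failed(&mut self, error: &dyn std::error::Error) {
        eprintln!("memory grow failed: {error}");
        self.saw_os_failure = true;
    }
}
```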
This can be useful for host functions that want to consume fuel to
reflect their relative cost. Additionally it's a relatively easy
addition to have and someone's asking for it!
Closes #3315
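A hedged sketch of what this enables, assuming a `consume_fuel`-style method
on `Caller` along with the fuel APIs of this era (`Config::consume_fuel`,
`Store::add_fuel`):
```rust
use wasmtime::{Caller, Config, Engine, Linker, Module, Store};

fn main() -> anyhow::Result<()> {
    let mut config = Config::new();
    config.consume_fuel(true);
    let engine = Engine::new(&config)?;

    let mut linker = Linker::new(&engine);
    linker.func_wrap("host", "expensive", |mut caller: Caller<'_, ()>| {
        // Bill extra fuel for this host call to reflect its relative cost.
        // (`consume_fuel` on `Caller` is assumed here; errors are ignored
        // for brevity in this sketch.)
        let _ = caller.consume_fuel(1_000);
    })?;

    let module = Module::new(
        &engine,
        r#"(module
            (import "host" "expensive" (func $e))
            (func (export "run") call $e))"#,
    )?;

    let mut store = Store::new(&engine, ());
    // Give the store an initial fuel budget (method name of this era).
    store.add_fuel(10_000)?;

    let instance = linker.instantiate(&mut store, &module)?;
    let run = instance.get_func(&mut store, "run").unwrap();
    run.call(&mut store, &[], &mut [])?;
    Ok(())
}
```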