The current definition of `ValueSlice` is not usable: any call to a
constructor returning a `ValueSlice` extends the mutable borrow on the
context taken by the constructor call, with the result that the slice
can never be passed to any other constructor.
Re-implement `ValueSlice` as a pair of a `ValueList` identifier and an
offset into the list. This type can simply be copied without requiring
a borrow on the context.
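A minimal sketch of the new shape, with stand-in types (the real list
identifier is Cranelift's `ir::ValueList`, an index into a
context-owned pool):
```
/// Stand-in for the identifier of a value list stored in a
/// context-owned pool.
#[derive(Clone, Copy, Debug)]
struct ValueList(u32);

/// A slice of values: a list identifier plus an offset into that list.
/// Because it borrows nothing from the context, it is `Copy` and can be
/// passed freely between constructors.
#[derive(Clone, Copy, Debug)]
struct ValueSlice {
    list: ValueList,
    offset: usize,
}
```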
This changes the output of the `lower` constructor from a
`ValueRegs` to a new `InstOutput` type, which is a vector
of `ValueRegs`.
Code in `lower_common` is updated to use this new type to
handle instructions with multiple outputs. All back-ends
are updated to use the new type.
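In simplified form (stand-in types; the real definition is generic and
likely uses a `SmallVec`):
```
struct Reg(u32);

/// The register(s) holding a single SSA value.
struct ValueRegs(Vec<Reg>);

/// The new `lower` output: one `ValueRegs` entry per instruction output.
type InstOutput = Vec<ValueRegs>;
```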
This PR makes use of the new implicit-conversion feature of the ISLE DSL
that was introduced in #3807 in order to make the lowering rules
significantly simpler and more concise.
The basic idea is to eliminate the repetitive and mechanical use of
terms that convert from one type to another when there is only one real
way to do the conversion -- for example, to go from a `WritableReg` to a
`Reg`, the only sensible way is to use `writable_reg_to_reg`.
This PR generally takes any term of the form "A_to_B" and makes it an
automatic conversion, as well as some others that are similar in spirit.
The notable exception to the pure-value-conversion category is the
`put_in_reg` family of operations, which actually do have side-effects.
However, as noted in the doc additions in #3807, this is fine as long as
the side-effects are idempotent. And on balance, making `put_in_reg`
automatic is a significant clarity win -- together with other operand
converters, it enables rules like:
```
;; Add two registers.
(rule (lower (has_type (fits_in_64 ty)
                       (iadd x y)))
      (add ty x y))
```
There may be other converters that we could define to make the rules
even simpler; we can make such improvements as we think of them, but
this should be a good start!
This commit fixes a potential issue in the fast path of instantiation
in `MemoryImageSlot`: when the previous image was compared against the
new image, only file descriptor equality was checked, but nowadays,
with images loaded from `*.cwasm` files, there might be multiple images
in the same file, so the offsets also need to be considered. I don't
think this is easy to hit today; it would require combining both module
linking and multi-memory, which gets into the realm of being pretty
esoteric, so I haven't added a test case here for this.
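As a sketch, with hypothetical field names, the corrected fast-path
check looks like:
```
/// Hypothetical shape of an image's backing-store identity.
struct MemoryImage {
    fd: i32,        // backing file descriptor
    fd_offset: u64, // offset of this image within that file
}

/// Two images only match if the descriptor *and* the offset agree;
/// descriptor equality alone is insufficient for `*.cwasm` files that
/// contain several images.
fn same_image(prev: &MemoryImage, new: &MemoryImage) -> bool {
    prev.fd == new.fd && prev.fd_offset == new.fd_offset
}
```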
It seems our `compile` fuzz target for ISLE has not been regularly
tested, as it was never updated for the `isle` -> `cranelift_isle` crate
renaming. This PR fixes it to compile again.
This also includes a simple fix in the typechecking: when verifying that
a term decl is valid, we might insert a term ID into the name-to-ID map
before fully checking that all of the types exist, and then skip
(for error-recovery purposes) the actual push onto the term-signature
vector if one of the types has an error. This phantom term ID can
later cause a panic. The fix is to avoid adding to the map until we have
fully verified the term decl.
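A sketch of the ordering fix (names hypothetical, error reporting
elided): resolve every type in the signature before touching the term
tables.
```
use std::collections::HashMap;

struct TypeId(usize);
struct TermId(usize);

struct Env {
    type_map: HashMap<String, TypeId>,
    term_map: HashMap<String, TermId>,
    term_sigs: Vec<Vec<usize>>,
}

fn check_term_decl(env: &mut Env, name: &str, arg_types: &[String]) {
    // Resolve all argument types first; on any unknown type, bail out
    // for error recovery *without* having registered anything.
    let mut sig = Vec::new();
    for ty in arg_types {
        match env.type_map.get(ty) {
            Some(TypeId(id)) => sig.push(*id),
            None => return, // no phantom term ID is left behind
        }
    }
    // Only once the signature is fully validated do we allocate the
    // term ID and push the signature, keeping map and vector in sync.
    let tid = TermId(env.term_sigs.len());
    env.term_map.insert(name.to_string(), tid);
    env.term_sigs.push(sig);
}
```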
Add support for implicit type conversions to ISLE.
This feature allows the DSL user to inform the compiler that a
particular term (used as a constructor or extractor) converts from one
type to another. The compiler will then *automatically* insert this term
whenever a type mismatch involving that specific pair of types occurs.
This significantly cleans up many uses of the ISLE DSL. For example,
when defining the compiler backends, we often have newtypes like `Gpr`
around `Reg` (signifying a particular type of register); we can define
a conversion from Gpr to Reg automatically.
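In the DSL, such a registration looks roughly like this (a sketch, not
the exact declarations from this PR):
```
;; `gpr_to_reg` is a term converting a `Gpr` to a `Reg`; the `convert`
;; declaration tells the compiler to insert it automatically wherever a
;; `Gpr` appears but a `Reg` is expected.
(decl gpr_to_reg (Gpr) Reg)
(extern constructor gpr_to_reg gpr_to_reg)
(convert Gpr Reg gpr_to_reg)
```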
Conversions can also have side-effects, as long as these side-effects
are idempotent. For example, `put_value_in_reg` in a compiler backend
has the effect of marking the value as used, causing codegen to produce
it, and assigning a register to the value; but multiple invocations
will return the same register for the same value. Thus it is safe
to use it as an implicit conversion that may be invoked multiple times.
This is documented in the ISLE-Cranelift integration document.
This PR also adds some testing infrastructure to the ISLE compiler,
checking that "pass" tests pass through the DSL compiler, "fail" tests
do not, and "link" tests are able to generate code and link that code
with corresponding Rust code.
Without async fuzzing, we won't be able to test the most interesting
aspects of epoch interruption, namely the
interrupt/update-deadline/resume flow. However, the "trap on epoch
change" behavior works even for synchronous stores, so we can fuzz with
this the same way we fuzz with the interrupt flag.
* fuzzing: Add a custom mutator based on `wasm-mutate`
* fuzz: Add a version of the `compile` fuzz target that uses `wasm-mutate`
* Update `wasmparser` dependencies
* Panic on resetting image slots back to anonymous memory
This commit updates `Drop for MemoryImageSlot` to panic instead of
ignoring errors when resetting memory back to a clean slate. On reading
some of this code again for a different change I realized that if an
error happens in `reset_with_anon_memory` it would be possible,
depending on where another error happened, to leak memory from one image
to another.
For example if `clear_and_remain_ready` failed its `madvise` (for
whatever reason) and didn't actually reset any memory, then if `Drop for
MemoryImageSlot` also hit an error trying to remap memory (for whatever
reason), then nothing about memory has changed and when the
`MemoryImageSlot` is recreated it'll think that it's 0-length when
actually it's a bit larger and may leak data.
I don't think this is a serious problem, since we don't know of any
situation under which the `madvise` or the reset with anonymous memory
would fail, but given that these aren't expected to fail I figure
it's best to be a bit more defensive and loud about failures here.
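In sketch form (mapping details elided), the change is from silently
discarding the error to failing loudly:
```
use std::io;

struct MemoryImageSlot {
    // mapping state elided in this sketch
}

impl MemoryImageSlot {
    fn reset_with_anon_memory(&mut self) -> io::Result<()> {
        // mmap/madvise details elided
        Ok(())
    }
}

impl Drop for MemoryImageSlot {
    fn drop(&mut self) {
        // Previously: `let _ = self.reset_with_anon_memory();`
        // A half-reset slot could leak one image's data into the next
        // instantiation, so any failure here is now a loud panic.
        self.reset_with_anon_memory()
            .expect("failed to reset image slot to anonymous memory");
    }
}
```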
* Update a comment
* x64: port `select` using an FP comparison to ISLE
This change includes quite a few interlocking parts, required mainly by
the current x64 conventions in ISLE:
- it adds a way to emit a `cmove` with multiple OR-ing conditions;
because x64 ISLE cannot currently safely emit a comparison followed
by several jumps, this adds `MachInst::CmoveOr` and
`MachInst::XmmCmoveOr` macro instructions. Unfortunately, these macro
instructions hide the multi-instruction sequence from `lower.isle`
- to properly keep track of what instructions consume and produce
flags, @cfallin added a way to pass around variants of
`ConsumesFlags` and `ProducesFlags`--these changes affect all
backends
- then, to lower the `fcmp + select` CLIF, this change adds several
`cmove*_from_values` helpers that perform all of the awkward
conversions between `Value`, `ValueReg`, `Reg`, and `Gpr/Xmm`; one
upside is that now these lowerings have much-improved documentation
explaining why the various `FloatCC` and `CC` choices are made the way
they are.
Co-authored-by: Chris Fallin <chris@cfallin.org>
* Enable copy-on-write heap initialization by default
This commit enables the `Config::memfd` feature by default now that it's
been fuzzed for a few weeks on oss-fuzz, and will continue to be fuzzed
leading up to the next release of Wasmtime in early March. The
documentation of the `Config` option has been updated as well as adding
a CLI flag to disable the feature.
* Remove ubiquitous "memfd" terminology
Switch instead to forms of "memory image" or "cow" or some combination
thereof.
* Update new option names
This ended up being a routine update, but it seemed good to go ahead and
hook things up. While I was at it I went ahead and hooked up multi-value
swarm fuzzing as well, now that wasm-smith implements it.
Triage is scheduled to run once every 5 minutes, but it's often queued up
during the day as builders are otherwise occupied with actual CI builds.
This can result in a lot of triage tasks queued up back-to-back. While
this doesn't seem to be a huge issue, one thing I suspect is that it's
perhaps somewhat related to API rate limits getting hit when recent
versions were published. In any case there's no need for each and every
triage run to do something; it's fine to only have one pending at a
time.
* Enable SSE 4.2 unconditionally
Fuzzing over the weekend found that `i64x2` comparison operators
require `pcmpgtq`, which is an SSE 4.2 instruction. Along the lines of #3816
this commit unconditionally enables and requires SSE 4.2 for compilation
and fuzzing. It will no longer be possible to create a compiler for
x86_64 with simd enabled if SSE 4.2 is disabled.
* Update comment
In #3820 we see an issue with the new heuristics that control use of
memfd: it's entirely possible for a reasonable Wasm module produced by a
snapshotting system to have a relatively sparse heap (less than 50%
filled). A system that avoids memfd because of this would have an
undesirable performance reduction on such modules.
Ultimately we should try to implement a hybrid scheme where we support
outlier/leftover initializers, but for now this PR makes the "always
allow dense" limit configurable. This way, embedders that want to ensure
that memfd is used can do so, if they have other knowledge about the
maximum heap size allowed in their system.
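A hypothetical sketch of the resulting heuristic (names invented for
illustration, not the actual `Config` API):
```
/// Images up to this embedder-configured size are always eligible for
/// dense (memfd-style) initialization, however sparsely they're filled.
struct MemoryInitConfig {
    guaranteed_dense_image_size: u64,
}

fn use_dense_image(cfg: &MemoryInitConfig, image_size: u64, bytes_filled: u64) -> bool {
    // Densely-filled heaps qualify on their own; below the configured
    // threshold, sparseness no longer disqualifies a module.
    bytes_filled * 2 >= image_size || image_size <= cfg.guaranteed_dense_image_size
}
```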
(Partially addresses #3820 but let's leave it open to track the hybrid
idea)
Looks like the 0.34.1 release is missing artifacts, and some jobs
building artifacts ended up being cancelled because API rate limits
were hit on the builders. Artifacts are uploaded to the job, however,
which means we can always go back and grab them to upload them, unless
the whole job was cancelled. For 0.34.1 it looks like the Linux builder
hit an error, and its error then subsequently cancelled the Windows
builders, so we don't actually have artifacts for Windows for the 0.34.1
release. This change will hopefully prevent such an error from causing
further issues in the future: if one builder hits an error while
uploading artifacts, the others will continue, and we can manually
upload what's missing if necessary.
cc #3812
* x64: enable VTune support by default
After significant work in the `ittapi-rs` crate, this dependency should
build without issue on Wasmtime's supported operating systems: Windows,
Linux, and macOS. The difference in the release binary is <20KB, so this
change makes `vtune` a default build feature. This change upgrades
`ittapi-rs` to v0.2.0 and updates the documentation.
* review: add configuration for defaults in more places
* review: remove OS conditional compilation, add architecture
* review: do not default vtune feature in wasmtime-jit
* Fix typo
* Move vmoffset field size and field name together
The previous code was quite confusing about what applied to which field.
The new code also makes it easier to move fields around and insert and
delete fields.
* Move builtin_functions before all variable sized fields
This allows the offset to be calculated at compile time
* Add cadd and cmul convenience functions
* Remove comment
* Change fields! syntax as per review
* Add implicit u32::from to fields!
This commit fixes the spectests fuzz target to set a lower bound of 10
memory pages on the arbitrary pooling allocator configurations so that
the limit doesn't interfere with what's required in the spec tests.
* Document the invoke argument of the run command.
* Update docs/cli-options.md
Co-authored-by: Kyle Brown <kyleb@liquidrocketry.com>
Co-authored-by: Nick Fitzgerald <fitzgen@gmail.com>
Per-`Store` allocations are already limited with the `StoreLimits`
structure while fuzzing to ensure fuzz targets don't allocate more than
1GB of memory, but the `instantiate-many` fuzzer created many separate
stores, each with its own limit, meaning that the 2GB limit of
fuzzing could be pretty easily reached.
This commit fixes the issue by making `StoreLimits` a shareable type via
`Rc` to ensure the same limits can be applied to all stores created
within a fuzz run, globally limiting the memory even across stores to 1GB.
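The shape of the fix, as a self-contained sketch (the real
`StoreLimits` hooks into wasmtime's resource limiter; this version just
counts bytes):
```
use std::cell::Cell;
use std::rc::Rc;

struct StoreLimits {
    remaining: Cell<usize>,
}

impl StoreLimits {
    fn take(&self, bytes: usize) -> bool {
        let rem = self.remaining.get();
        if rem >= bytes {
            self.remaining.set(rem - bytes);
            true
        } else {
            false
        }
    }
}

fn main() {
    let limits = Rc::new(StoreLimits { remaining: Cell::new(1 << 30) });
    // Each store receives a clone of the same `Rc`, so allocations in
    // any store draw down one global 1GB budget.
    let store_a = Rc::clone(&limits);
    let store_b = Rc::clone(&limits);
    assert!(store_a.take(512 << 20));
    assert!(store_b.take(512 << 20));
    assert!(!limits.take(1)); // budget exhausted across both stores
}
```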
In #3800 I added support for consuming fuzz input as a selection of
whether or not target features should be enabled. This was done in a
platform-specific manner, however, which means that I can no longer
reliably take the fuzz reproducer cases from oss-fuzz and reproduce them
locally on an aarch64 machine. This commit fixes this problem by
unconditionally pulling bytes from the input for fuzz features,
irrespective of the host platform. Features are then discarded if
they're not applicable.
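A sketch of the approach (flag names illustrative):
```
use arbitrary::{Arbitrary, Result, Unstructured};

/// Always consume the same bytes from the fuzz input, then drop
/// settings that don't apply to the current target, so a corpus input
/// replays identically on x86_64 and aarch64 hosts.
fn codegen_flags(u: &mut Unstructured<'_>) -> Result<Vec<(&'static str, bool)>> {
    let mut flags = Vec::new();
    for (name, applies_to_host) in [
        ("has_avx", cfg!(target_arch = "x86_64")),
        ("has_lse", cfg!(target_arch = "aarch64")),
    ] {
        let enable = bool::arbitrary(u)?; // bytes consumed on every host
        if applies_to_host {
            flags.push((name, enable));
        }
    }
    Ok(flags)
}
```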
Combine the two opcodes into one and add an `OperandSize` field to
these instructions, as well as an ISLE helper to perform the conversion
from `Type`.
This saves us from having to write ISLE helpers to select the correct
opcode based on type, and reduces the amount of code needed for
emission.
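The helper boils down to something like this sketch (simplified; the
real version maps CLIF `Type`s):
```
/// The combined instruction now carries this field instead of existing
/// as two separate opcodes.
enum OperandSize {
    Size32,
    Size64,
}

fn operand_size_from_ty_bits(bits: u32) -> OperandSize {
    if bits <= 32 {
        OperandSize::Size32
    } else {
        OperandSize::Size64
    }
}
```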
Copyright (c) 2022, Arm Limited.
* Update memfd image construction to avoid excessively large images
Previously memfd-based image construction had a hard limit of a 1GB
memory image, but this meant that tiny wasm modules could allocate up to
1GB of memory, which became a bit excessive especially in terms of memory
usage during fuzzing. To fix this the conversion to a static memory
image has been updated to first do a conversion to paged memory
initialization, which is sparse, followed by a second conversion to
static memory initialization.
The sparse construction for the paged step should make it such that the
upper/lower bounds of the initialization image are easily computed, and
then afterwards this limit can be checked against some heuristics to
determine if we're willing to commit to building up a whole static image
for that module. The heuristics have been tweaked from "must be less
than 1GB" to requiring that one of two conditions be true:
* Either the total memory image size is at most twice the size of the
original paged data itself.
* Otherwise the memory image size must be smaller than a reasonable
threshold, currently 1MB.
We'll likely need to tweak this over time and it's still possible to
cause a lot of extra memory consumption, but for now this should be
enough to appease the fuzzers.
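As a sketch, the new check amounts to (names illustrative):
```
const DENSE_IMAGE_THRESHOLD: u64 = 1 << 20; // 1MB

/// Commit to a static image only if it's not much larger than the
/// sparse paged data it replaces, or is small in absolute terms.
fn build_static_image(image_size: u64, paged_data_size: u64) -> bool {
    image_size <= paged_data_size.saturating_mul(2)
        || image_size <= DENSE_IMAGE_THRESHOLD
}
```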
Closes #3815
* Review comments
Addresses #3809: when we are asked to create a Cranelift backend with
shared flags that indicate support for SIMD, we should check that the
ISA level needed for our SIMD lowerings is present.
* Port fix for `CVE-2022-23636` to `main`.
This commit ports the fix for `CVE-2022-23636` to `main`, but performs a
refactoring that makes it unnecessary for the instance itself to track if it
has been initialized; such a change was not targeted enough for a security
patch.
The pooling allocator will now only initialize an instance if all of its
associated resource creation succeeds. If the resource creation fails, no
instance is dropped as none was initialized.
Also updates `RELEASES.md` to include the related patch releases.
* Add `Instance::new_at` to fully initialize an instance.
Added `Instance::new_at` to fully initialize an instance at a given address.
This will hopefully prevent the possibility that an `Instance` structure
doesn't have an initialized `VMContext` when it is dropped.
* Unconditionally enable sse3, ssse3, and sse4.1 when fuzzing
This commit unconditionally enables some x86_64 instructions when
fuzzing because the cranelift backend is known to not work if these
features are disabled. From discussion on the wasm simd proposal the
assumed general baseline for running simd code is SSE4.1 anyway.
At this time I haven't added any sort of checks to Wasmtime itself.
Wasmtime by default uses the native architecture's features, and when
enabling features explicitly these still need to be explicitly
specified.
Closes #3809
* Update crates/fuzzing/src/generators.rs
Co-authored-by: Andrew Brown <andrew.brown@intel.com>
* Automatically label Wasmtime `Config` object changes
* Add the "Label Messager" github action
This allows us to have custom messages left in comments on issues and pull
requests when they get labeled with a specific label.
* Add a message for `wasmtime:config`-labeled pull requests
* CI: Consolidate issue/PR triage workflows
This commit improves the stability of the fuzz targets by ensuring the
generated configs and modules are congruent, especially when the pooling
allocator is being used.
For the `differential` target, this means both configurations must use
the same allocation strategy for now, as one side generates a module
that might not be compatible with another arbitrary config now that we
fuzz the pooling allocator.
These changes also ensure that constraints put on the config are more
consistently applied, especially when using a fuel-based timeout.
* Fuzz cranelift cpu flag settings with Wasmtime
This commit updates the `Config` fuzz-generator to consume some of the
input as configuration settings for codegen flags we pass to cranelift.
This should ideally allow for some more coverage where settings are
disabled or enabled, finding possible bugs in feature-specific
implementations, or in generic implementations that are rarely used if
the feature-specific ones almost always take precedence.
The technique used in this commit is to weight the selection so that
explicitly-chosen codegen settings are used less frequently than the
native settings. Afterwards each
listed feature is individually enabled or disabled depending on the
input fuzz data, and if a feature is enabled but the host doesn't
actually support it then the fuzz input is rejected with a log message.
The goal here is to still have many fuzz inputs accepted but also ensure
determinism across hosts. If there's a bug specifically related to
enabling a flag then running it on a host without the flag should
indicate that the flag isn't supported rather than silently leaving it
disabled and reporting the fuzz case as a success.
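The determinism rule, in sketch form:
```
/// A fuzz input that enables a feature the host lacks is rejected
/// outright, never silently downgraded, so the same input behaves the
/// same on every machine.
fn apply_flag(requested: bool, host_supports: bool) -> Result<bool, &'static str> {
    if requested && !host_supports {
        return Err("flag enabled in fuzz input but unsupported on host");
    }
    Ok(requested)
}
```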
* Use built-in `Unstructured::ratio` method
* Tweak macro
* Bump arbitrary dep version
* Added 'add_fuel' command line option
* Added default value to 'add_fuel' config option
* Added 'add_fuel' to run command Store instantiation
* Added comment
* Added warning for add-fuel without consume-fuel
* Formatting
* Changed out --add-fuel and --consume-fuel to --fuel
* Formatting
* Update src/lib.rs
* Update src/commands/run.rs
Co-authored-by: Nick Fitzgerald <fitzgen@gmail.com>
In #3721, we have been discussing what to do about the ARM32 backend in
Cranelift. Currently, this backend supports only 32-bit types, which is
insufficient for full Wasm-MVP; it's missing other critical bits, like
floating-point support; and it has only ever been exercised, AFAIK, via
the filetests for the individual CLIF instructions that are implemented.
We were very very thankful for the original contribution of this
backend, even in its partial state, and we had hoped at the time that we
could eventually mature it in-tree until it supported e.g. Wasm and
other use-cases. But that hasn't yet happened -- through no one's
fault, to be clear; we just haven't had a contributor with sufficient
time.
Unfortunately, the existence of the backend and lack of active
maintainer now potentially pose a bit of a burden as we hope to make
continuing changes to the backend framework. For example, the ISLE
migration, and the use of regalloc2 that it will allow, would need all
of the existing lowering patterns in the hand-written ARM32 backend to
be rewritten as ISLE rules.
Given that we don't currently have the resources to do this, we think
it's probably best if we, sadly, for now remove this partial backend.
This is not in any way a statement of what we might accept in the
future, though. If, in the future, an ARM32 backend updated to our
latest codebase with an active maintainer were to appear, we'd be happy
to merge it (and likewise for any other architecture!). But for now,
this is probably the best path. Thanks again to the original contributor
@jmkrauz and we hope that this work can eventually be brought back and
reused if someone has the time to do so!
We already defined the `Gpr` newtype and used it in a few places, and we already
defined the `Xmm` newtype and used it extensively. This finishes the transition
to using the newtypes extensively in lowering by making use of `Gpr` in more
places.
Fixes #3685
This commit makes it such that the pooling allocator will be configured
with a much lower upper bound for memory pages, which will greatly
reduce the likelihood that the fuzzer's memory limits will be hit from
having too many memories committed across too many instances.
* Shrink the size of `FuncData`
Before this commit on a 64-bit system the `FuncData` type had a size of
88 bytes and after this commit it has a size of 32 bytes. A `FuncData`
is required for all host functions in a store, including those inserted
from a `Linker` into a store used during linking. This means that
instantiation ends up creating a nontrivial number of these types and
pushing them into the store. Looking at some profiles there were some
surprisingly expensive movements of `FuncData` from the stack to a
vector for moves-by-value generated by Rust. Shrinking this type enables
more efficient code to be generated and additionally means less storage
is needed in a store's function array.
For instantiating the spidermonkey and rustpython modules this improves
instantiation by 10% since they each import a fair number of host
functions and the speedup here is relative to the number of items
imported.
* Use `ptr::copy_nonoverlapping` during initialization
Previously `ptr::copy` was used for copying imports into place, which
translates to `memmove`, but `ptr::copy_nonoverlapping` can be used here
since it's statically known these areas don't overlap. While this
doesn't end up having a performance difference it's something I kept
noticing while looking at the disassembly of `initialize_vmcontext`, so
I figured I'd go ahead and implement it.
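In sketch form:
```
/// Imports are copied into a freshly allocated VMContext, so source and
/// destination can never overlap; `copy_nonoverlapping` (memcpy) is
/// therefore valid where `copy` (memmove) was used before.
unsafe fn copy_imports(src: *const u8, dst: *mut u8, len: usize) {
    // Previously: std::ptr::copy(src, dst, len);
    std::ptr::copy_nonoverlapping(src, dst, len);
}
```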
* Indirect shared signature ids in the VMContext
This commit is a small improvement for the instantiation time of modules
by avoiding copying a list of `VMSharedSignatureIndex` entries into each
`VMContext`, instead building one inside of a module and sharing that
amongst all instances. This involves fewer lookups at instantiation time
and less movement of data during instantiation. The downside is that
type-checks on `call_indirect` now involve an additional load, but I'm
assuming that these are already pessimized enough as-is that the
runtime impact won't be significant there.
For instantiation performance this is a 5-10% win with
rustpython/spidermonkey instantiation. This should also reduce the size of
each `VMContext` for an instantiation since signatures are no longer
stored inline but shared amongst all instances with one module.
Note that one subtle change here is that the array of
`VMSharedSignatureIndex` was previously indexed by `TypeIndex`, and now
it's indexed by `SignatureIndex`, which is a deduplicated form of
`TypeIndex`. This is done because we already had a list of those lying
around in `Module`, so it was easier to reuse that than to build a
separate array and store it somewhere.
* Reserve space in `Store<T>` with `InstancePre`
This commit updates the instantiation process to reserve space in a
`Store<T>` for the functions that an `InstancePre<T>`, as part of
instantiation, will insert into it. Using an `InstancePre<T>` to
instantiate allows pre-computing the number of host functions that will
be inserted into a store, and by pre-reserving space we can avoid costly
reallocations during instantiation by ensuring the function vector has
enough space to fit everything during the instantiation process.
Overall this makes instantiation of rustpython/spidermonkey about 8%
faster locally.
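The idea, as a simplified sketch (stand-in types):
```
#[derive(Clone)]
struct FuncData; // stand-in for the store's per-function state

struct InstancePre {
    host_funcs: Vec<FuncData>, // known ahead of instantiation
}

/// Size the store's function vector once up front so the per-import
/// pushes below never reallocate mid-instantiation.
fn push_host_funcs(store_funcs: &mut Vec<FuncData>, pre: &InstancePre) {
    store_funcs.reserve(pre.host_funcs.len());
    store_funcs.extend(pre.host_funcs.iter().cloned());
}
```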
* Fix tests
* Use checked arithmetic
* Skip memfd creation with precompiled modules
This commit updates the memfd support internally to not actually use a
memfd if a compiled module originally came from disk via the
`wasmtime::Module::deserialize_file` API. In this situation we already
have a file descriptor open and there's no need to copy a module's heap
image to a new file descriptor.
To facilitate a new source of `mmap` the currently-memfd-specific-logic
of creating a heap image is generalized to a new form of
`MemoryInitialization` which is attempted for all modules at
module-compile-time. This means that the serialized artifact to disk
will have the memory image in its entirety waiting for us. Furthermore
the memory image is ensured to be padded and aligned carefully to the
target system's page size, notably meaning that the data section in the
final object file is page-aligned and its size is also page-aligned.
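The alignment itself is the usual round-up-to-page-size computation,
e.g.:
```
/// Round `size` up to the next multiple of `page_size` (a power of two).
fn align_up(size: u64, page_size: u64) -> u64 {
    debug_assert!(page_size.is_power_of_two());
    (size + page_size - 1) & !(page_size - 1)
}
```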
This means that when a precompiled module is mapped from disk we can
reuse the underlying `File` to mmap all initial memory images. As a
result the offset-within-the-memory-mapped-file can differ for
memfd-vs-not, but that's just another piece of state to track in the
memfd implementation.
In the limit this waters down the term "memfd" for this technique of
quickly initializing memory because we no longer use memfd
unconditionally (only when the backing file isn't available).
This does however open up an avenue in the future to porting this
support to other OSes because while `memfd_create` is Linux-specific
both macOS and Windows support mapping a file with copy-on-write. This
porting isn't done in this PR and is left for a future refactoring.
Closes #3758
* Enable "memfd" support on all unix systems
Cordon off the Linux-specific bits and enable the memfd support to
compile and run on platforms like macOS which have a Linux-like `mmap`.
This only works if a module is mapped from a precompiled module file on
disk, but that's better than not supporting it at all!
* Fix linux compile
* Use `Arc<File>` instead of `MmapVecFileBacking`
* Use a named struct instead of mysterious tuples
* Comment about unsafety in `Module::deserialize_file`
* Fix tests
* Fix uffd compile
* Always align data segments
No need to have conditional alignment since their sizes are all aligned
anyway
* Update comment in build.rs
* Use rustix, not `region`
* Fix some confusing logic/names around memory indexes
These functions all work with memory indexes generally, not
specifically with *defined* memory indexes.