Addresses #3809: when we are asked to create a Cranelift backend with
shared flags that indicate support for SIMD, we should check that the
ISA level needed for our SIMD lowerings is present.
* Port fix for `CVE-2022-23636` to `main`.
This commit ports the fix for `CVE-2022-23636` to `main`, but performs a
refactoring that makes it unnecessary for the instance itself to track if it
has been initialized; such a change was not targeted enough for a security
patch.
The pooling allocator will now only initialize an instance if all of its
associated resource creation succeeds. If the resource creation fails, no
instance is dropped as none was initialized.
Also updates `RELEASES.md` to include the related patch releases.
* Add `Instance::new_at` to fully initialize an instance.
Added `Instance::new_at` to fully initialize an instance at a given address.
This will hopefully prevent the possibility that an `Instance` structure
doesn't have an initialized `VMContext` when it is dropped.
* Unconditionally enable sse3, ssse3, and sse4.1 when fuzzing
This commit unconditionally enables some x86_64 features when
fuzzing because the Cranelift backend is known not to work if these
features are disabled. From discussion on the Wasm SIMD proposal, the
assumed general baseline for running SIMD code is SSE4.1 anyway.
At this time I haven't added any sort of checks in Wasmtime itself.
Wasmtime by default uses the native architecture's features, and anyone
explicitly configuring features still needs to specify these manually.
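As a rough sketch of the flag-enabling side of this (illustrative only; the
helper name is made up and the real checks live elsewhere), using Cranelift's
`Configurable` settings API:
```
use cranelift_codegen::isa;
use cranelift_codegen::settings::{Configurable, SetError};

fn enable_simd_baseline(isa_builder: &mut isa::Builder) -> Result<(), SetError> {
    // The Wasm SIMD lowerings assume at least SSSE3/SSE4.1, so turn these on
    // unconditionally instead of inheriting whatever the host reports.
    for flag in ["has_sse3", "has_ssse3", "has_sse41"] {
        isa_builder.enable(flag)?;
    }
    Ok(())
}
```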
Closes #3809
* Update crates/fuzzing/src/generators.rs
Co-authored-by: Andrew Brown <andrew.brown@intel.com>
* Automatically label Wasmtime `Config` object changes
* Add the "Label Messager" github action
This allows us to have custom messages left in comments on issues and pull
requests when they get labeled with a specific label.
* Add a message for `wasmtime:config`-labeled pull requests
* CI: Consolidate issue/PR triage workflows
This commit improves the stability of the fuzz targets by ensuring the
generated configs and modules are congruent, especially when the pooling
allocator is being used.
For the `differential` target, this means both configurations must use the same
allocation strategy for now, as one side generates a module that might not be
compatible with another arbitrary config now that we fuzz the pooling
allocator.
These changes also ensure that constraints put on the config are more
consistently applied, especially when using a fuel-based timeout.
* Fuzz cranelift cpu flag settings with Wasmtime
This commit updates the `Config` fuzz-generator to consume some of the
input as configuration settings for codegen flags we pass to cranelift.
This should ideally give us more coverage of settings being disabled or
enabled, finding possible bugs in feature-specific implementations, or in
generic implementations that are rarely used because the feature-specific
ones almost always take precedence.
The technique used in this commit is to weight selection of codegen
settings less frequently than using the native settings. Afterwards each
listed feature is individually enabled or disabled depending on the
input fuzz data, and if a feature is enabled but the host doesn't
actually support it then the fuzz input is rejected with a log message.
The goal here is to still have many fuzz inputs accepted but also ensure
determinism across hosts. If there's a bug specifically related to
enabling a flag then running it on a host without the flag should
indicate that the flag isn't supported rather than silently leaving it
disabled and reporting the fuzz case as a success.
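A rough sketch of the selection technique described above (illustrative only:
the types, flag list, and `host_supports` helper are made up, not the actual
generator code):
```
use arbitrary::{Result, Unstructured};

/// Illustrative only: a list of Cranelift flag names and whether to enable them.
struct CodegenFlags(Vec<(&'static str, bool)>);

fn arbitrary_codegen_flags(u: &mut Unstructured<'_>) -> Result<Option<CodegenFlags>> {
    // Most of the time, stick with the host's native settings.
    if u.ratio(4u32, 5u32)? {
        return Ok(None);
    }
    let mut flags = Vec::new();
    for name in ["has_sse41", "has_avx", "has_bmi2", "has_lzcnt"] {
        let enable = u.arbitrary::<bool>()?;
        // If the input asks for a feature the host can't run, reject it so the
        // same input behaves identically on every host.
        if enable && !host_supports(name) {
            eprintln!("rejecting fuzz input: host does not support {name}");
            return Err(arbitrary::Error::IncorrectFormat);
        }
        flags.push((name, enable));
    }
    Ok(Some(CodegenFlags(flags)))
}

fn host_supports(_flag: &str) -> bool {
    // Stand-in for a real cpuid-based check.
    true
}
```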
* Use built-in `Unstructured::ratio` method
* Tweak macro
* Bump arbitrary dep version
* Added 'add_fuel' command line option
* Added default value to 'add_fuel' config option
* Added 'add_fuel' to run command Store instantiation
* Added comment
* Added warning for add-fuel without consume-fuel
* Formatting
* Replaced --add-fuel and --consume-fuel with --fuel
* Formatting
* Update src/lib.rs
* Update src/commands/run.rs
Co-authored-by: Nick Fitzgerald <fitzgen@gmail.com>
In #3721, we have been discussing what to do about the ARM32 backend in
Cranelift. Currently, this backend supports only 32-bit types, which is
insufficient for full Wasm-MVP; it's missing other critical bits, like
floating-point support; and it has only ever been exercised, AFAIK, via
the filetests for the individual CLIF instructions that are implemented.
We were very thankful for the original contribution of this
backend, even in its partial state, and we had hoped at the time that we
could eventually mature it in-tree until it supported e.g. Wasm and
other use-cases. But that hasn't yet happened -- through no one's fault,
to be clear; we just haven't had a contributor with sufficient time.
Unfortunately, the existence of the backend and lack of active
maintainer now potentially pose a bit of a burden as we hope to make
continuing changes to the backend framework. For example, the ISLE
migration, and the use of regalloc2 that it will allow, would need all
of the existing lowering patterns in the hand-written ARM32 backend to
be rewritten as ISLE rules.
Given that we don't currently have the resources to do this, we think
it's probably best if we, sadly, for now remove this partial backend.
This is not in any way a statement of what we might accept in the
future, though. If, in the future, an ARM32 backend updated to our
latest codebase with an active maintainer were to appear, we'd be happy
to merge it (and likewise for any other architecture!). But for now,
this is probably the best path. Thanks again to the original contributor
@jmkrauz and we hope that this work can eventually be brought back and
reused if someone has the time to do so!
We already defined the `Gpr` newtype and used it in a few places, and we already
defined the `Xmm` newtype and used it extensively. This finishes the transition
to using the newtypes extensively in lowering by making use of `Gpr` in more
places.
Fixes #3685
This commit makes it such that the pooling allocator will be configured with a
much lower upper bound for memory pages, which will greatly reduce the
likelihood that the fuzzer memory limits will be hit from having too many
memories from too many instances committed.
* Shrink the size of `FuncData`
Before this commit on a 64-bit system the `FuncData` type had a size of
88 bytes and after this commit it has a size of 32 bytes. A `FuncData`
is required for all host functions in a store, including those inserted
from a `Linker` into a store used during linking. This means that
instantiation ends up creating a nontrivial number of these types and
pushing them into the store. Looking at some profiles, there were some
surprisingly expensive moves of `FuncData` from the stack into a
vector, generated by Rust for the by-value moves. Shrinking this type enables
more efficient code to be generated and additionally means less storage
is needed in a store's function array.
For instantiating the spidermonkey and rustpython modules this improves
instantiation by 10% since they each import a fair number of host
functions and the speedup here is relative to the number of items
imported.
* Use `ptr::copy_nonoverlapping` during initialization
Previously `ptr::copy` was used for copying imports into place, which
translates to `memmove`, but `ptr::copy_nonoverlapping` can be used here
since it's statically known these areas don't overlap. While this
doesn't end up having a performance difference, it's something I kept
noticing while looking at the disassembly of `initialize_vmcontext`, so I
figured I'd go ahead and implement it.
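For illustration only (this is not the actual `initialize_vmcontext` code),
the change amounts to:
```
use std::ptr;

// The two regions never overlap (imports come from the caller, the destination
// is inside the new `VMContext`), so `memcpy` semantics are valid.
unsafe fn copy_imports<T>(src: *const T, dst: *mut T, len: usize) {
    ptr::copy_nonoverlapping(src, dst, len);
}
```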
* Indirect shared signature ids in the VMContext
This commit is a small improvement for the instantiation time of modules
by avoiding copying a list of `VMSharedSignatureIndex` entries into each
`VMContext`, instead building one inside of a module and sharing that
amongst all instances. This involves fewer lookups at instantiation time
and less movement of data during instantiation. The downside is that
type-checks on `call_indirect` now involve an additional load, but I'm
assuming that these are already pessimized enough as-is that the extra
runtime impact won't be significant.
For instantiation performance this is a 5-10% win with
rustpython/spidermonkey instantiation. This should also reduce the size of
each `VMContext` for an instantiation since signatures are no longer
stored inline but shared amongst all instances of one module.
Note that one subtle change here is that the array of
`VMSharedSignatureIndex` was previously indexed by `TypeIndex`, and now
it's indexed by `SignatureIndex`, which is a deduplicated form of
`TypeIndex`. This is done because we already had a list of those lying
around in `Module`, so it was easier to reuse that than to build a
separate array and store it somewhere.
* Reserve space in `Store<T>` with `InstancePre`
This commit updates the instantiation process to reserve space in a
`Store<T>` for the functions that an `InstancePre<T>`, as part of
instantiation, will insert into it. Using an `InstancePre<T>` to
instantiate allows pre-computing the number of host functions that will
be inserted into a store, and by pre-reserving space we can avoid costly
reallocations during instantiation by ensuring the function vector has
enough space to fit everything during the instantiation process.
Overall this makes instantiation of rustpython/spidermonkey about 8%
faster locally.
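As a sketch of the idea (not the real `Store` internals; the type here is made
up):
```
/// Illustrative sketch: an `InstancePre` knows up front how many host
/// functions instantiation will push into the store, so the store can reserve
/// capacity once instead of growing repeatedly.
struct StoreFuncs<F> {
    funcs: Vec<F>,
}

impl<F> StoreFuncs<F> {
    fn reserve_for_instantiation(&mut self, host_func_count: usize) {
        // A single reservation up front avoids reallocations mid-instantiation.
        self.funcs.reserve(host_func_count);
    }
}
```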
* Fix tests
* Use checked arithmetic
* Skip memfd creation with precompiled modules
This commit updates the memfd support internally to not actually use a
memfd if a compiled module originally came from disk via the
`wasmtime::Module::deserialize_file` API. In this situation we already
have a file descriptor open and there's no need to copy a module's heap
image to a new file descriptor.
To facilitate a new source of `mmap`, the currently memfd-specific logic
of creating a heap image is generalized to a new form of
`MemoryInitialization` which is attempted for all modules at
module-compile-time. This means that the artifact serialized to disk
will have the memory image in its entirety waiting for us. Furthermore
the memory image is ensured to be padded and aligned carefully to the
target system's page size, notably meaning that the data section in the
final object file is page-aligned and the size of the data section is
also page-aligned.
This means that when a precompiled module is mapped from disk we can
reuse the underlying `File` to mmap all initial memory images. The
offset-within-the-memory-mapped-file can then differ for
memfd-vs-not, but that's just another piece of state to track in the
memfd implementation.
In the limit this waters down the term "memfd" for this technique of
quickly initializing memory because we no longer use memfd
unconditionally (only when the backing file isn't available).
This does however open up an avenue in the future to porting this
support to other OSes because while `memfd_create` is Linux-specific
both macOS and Windows support mapping a file with copy-on-write. This
porting isn't done in this PR and is left for a future refactoring.
Closes #3758
* Enable "memfd" support on all unix systems
Cordon off the Linux-specific bits and enable the memfd support to
compile and run on platforms like macOS which have a Linux-like `mmap`.
This only works if a module is mapped from a precompiled module file on
disk, but that's better than not supporting it at all!
* Fix linux compile
* Use `Arc<File>` instead of `MmapVecFileBacking`
* Use a named struct instead of mysterious tuples
* Comment about unsafety in `Module::deserialize_file`
* Fix tests
* Fix uffd compile
* Always align data segments
No need to have conditional alignment since their sizes are all aligned
anyway
* Update comment in build.rs
* Use rustix, not `region`
* Fix some confusing logic/names around memory indexes
These functions all work with plain memory indexes, not specifically with
defined memory indexes.
This commit has a few minor updates and some improvements to the
instantiation benchmark harness:
* A `once_cell::unsync::Lazy` type is now used to guard creation of
modules/engines/etc (see the sketch below). This makes running individual
benchmarks much faster since they no longer compile the modules for all
the other benchmarks that are filtered out. Unfortunately I couldn't find
a way in criterion to test whether a `BenchmarkId` is filtered out or not,
so we rely on the runtime laziness to initialize on the first run for
benchmarks that actually run.
* All files located in `benches/instantiation` are now loaded for
benchmarking instead of a hardcoded list. This makes it a bit easier
to throw files into the directory and have them benchmarked instead of
having to recompile when working with new files.
* Finally a module deserialization benchmark was added to measure the
time it takes to deserialize a precompiled module from disk (inspired
by discussion on #3787)
While I was at it I also upped some limits to be able to instantiate
cfallin's `spidermonkey.wasm`.
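A sketch of the laziness trick (assumed shape, not the benchmark's exact
code):
```
use once_cell::unsync::Lazy;
use wasmtime::{Engine, Module};

fn bench_module(path: &str) {
    // Nothing is compiled until the first dereference, so benchmarks filtered
    // out by criterion never pay for engine or module creation.
    let state = Lazy::new(|| {
        let engine = Engine::default();
        let module = Module::from_file(&engine, path).expect("failed to compile module");
        (engine, module)
    });
    let (_engine, _module) = &*state;
    // ... run the instantiation benchmark with `_module` here ...
}
```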
* Move function names out of `Module`
This commit moves function names in a module out of the
`wasmtime_environ::Module` type and into separate sections stored in the
final compiled artifact. Spurred on by #3787 to look at module load
times I noticed that a huge amount of time was spent in deserializing
this map. The `spidermonkey.wasm` file, for example, has a 3MB name
section which is a lot of unnecessary data to deserialize at module load
time.
The names of functions are now split out into their own dedicated
section of the compiled artifact and metadata about them is stored in a
more compact format at runtime by avoiding a `BTreeMap` and instead
using a sorted array. Overall this improves deserialize times by up to
80% for modules with large name sections since the name section is no
longer deserialized at load time and it's lazily paged in as names are
actually referenced.
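A simplified sketch of the new lookup scheme (Wasmtime's actual types and
serialized layout differ):
```
struct FuncNames {
    /// (function index, offset of the name within the raw name section),
    /// sorted by function index at module-compile time.
    entries: Vec<(u32, u32)>,
    /// The raw name section, left untouched (and lazily paged in) at load time.
    names: Vec<u8>,
}

impl FuncNames {
    fn name_offset(&self, func_index: u32) -> Option<u32> {
        // A binary search over a sorted array replaces the `BTreeMap` that
        // previously had to be deserialized in full at module-load time.
        let i = self
            .entries
            .binary_search_by_key(&func_index, |&(idx, _)| idx)
            .ok()?;
        Some(self.entries[i].1)
    }
}
```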
* Fix a typo
* Fix compiled module determinism
We need to sort not only afterwards but also beforehand to ensure the data of
the name section is consistent.
* Add the instance allocation strategy to generated fuzzing configs.
This commit adds support for generating configs with arbitrary instance
allocation strategies.
With this, the pooling allocator will be fuzzed as part of the existing fuzz
targets.
* Refine maximum constants for arbitrary module limits.
* Add an `instantiate-many` fuzz target.
This commit adds a new `instantiate-many` fuzz target that will attempt to
instantiate and terminate modules in an arbitrary order.
It generates up to 5 modules, from which a random sequence of instances will be
created.
The primary beneficiary of this fuzz target is the pooling instance allocator.
* Allow no aliasing in generated modules when using the pooling allocator.
This commit prevents aliases in the generated modules as they might count
against the configured import limits of the pooling allocator.
As the existing module linking proposal implementation will eventually be
deprecated in favor of the component model proposal, it isn't very important
that we test aliases in generated modules with the pooling allocator.
* Improve distribution of memory config in fuzzing.
The previous commit attempted to provide a 32-bit upper bound to 64-bit
arbitrary values, which skewed the distribution heavily in favor of the upper
bound.
This commit removes the constraint and instead uses arbitrary 32-bit values
that are converted to 64-bit values in the `Arbitrary` implementation.
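For illustration, one way such a skew arises and the shape of the fix (this is
not the exact previous code):
```
use arbitrary::{Arbitrary, Result, Unstructured};

// Skewed: almost every arbitrary u64 exceeds u32::MAX, so clamping maps most
// inputs straight onto the upper bound.
fn skewed_limit(u: &mut Unstructured<'_>) -> Result<u64> {
    Ok(u64::arbitrary(u)?.min(u32::MAX as u64))
}

// Better: draw a 32-bit value and widen it, keeping the distribution uniform
// over the intended range.
fn uniform_limit(u: &mut Unstructured<'_>) -> Result<u64> {
    Ok(u32::arbitrary(u)? as u64)
}
```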
In working on #3787 I see now that our coverage of loading precompiled
files specifically is somewhat lacking, so this adds a config option to
the fuzzers which, if enabled, will round-trip all compiled modules
through the filesystem to test out the mmapped-file case.
Other than doc updates, this just contains bytecodealliance/cap-std#235,
a fix for compilation errors on Rust nightly that look like this:
```
error[E0308]: mismatched types
  --> cap-primitives/src/fs/via_parent/rename.rs:22:58
   |
22 |     let (old_dir, old_basename) = open_parent(old_start, &old_path)?;
   |                                                          ^^^^^^^^^ expected struct `Path`, found opaque type
   |
  ::: cap-primitives/src/rustix/fs/dir_utils.rs:67:48
   |
67 | pub(crate) fn strip_dir_suffix(path: &Path) -> impl Deref<Target = Path> + '_ {
   |                                                ------------------------------ the found opaque type
   |
   = note: expected struct `Path`
           found opaque type `impl Deref<Target = Path>`
```
During instance initialization, we build two sorts of arrays eagerly:
- We create an "anyfunc" (a `VMCallerCheckedAnyfunc`) for every function
in an instance.
- We initialize every element of a funcref table with an initializer to
a pointer to one of these anyfuncs.
Most instances will not touch (via call_indirect or table.get) all
funcref table elements. And most anyfuncs will never be referenced,
because most functions are never placed in tables or used with
`ref.func`. Thus, both of these initialization tasks are quite wasteful.
Profiling shows that a significant fraction of the remaining
instance-initialization time after our other recent optimizations is
going into these two tasks.
This PR implements two basic ideas:
- The anyfunc array can be lazily initialized as long as we retain the
information needed to do so. For now, in this PR, we just recreate the
anyfunc whenever a pointer is taken to it, because doing so is fast
enough; in the future we could keep some state to know whether the
anyfunc has been written yet and skip this work if redundant.
This technique allows us to leave the anyfunc array as uninitialized
memory, which can be a significant savings. Filling it with
initialized anyfuncs is very expensive, but even zeroing it is
expensive: e.g. in a large module, it can be >500KB.
- A funcref table can be lazily initialized as long as we retain a link
to its corresponding instance and function index for each element. A
zero in a table element means "uninitialized", and a slowpath does the
initialization.
Funcref tables are a little tricky because funcrefs can be null. We need
to distinguish "element was initially non-null, but user stored explicit
null later" from "element never touched" (ie the lazy init should not
blow away an explicitly stored null). We solve this by stealing the LSB
from every funcref (anyfunc pointer): when the LSB is set, the funcref
is initialized and we don't hit the lazy-init slowpath. We insert the
bit on storing to the table and mask it off after loading.
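A sketch of the tagging scheme (illustrative; the real code uses Wasmtime's
own types and relies on anyfunc pointers being aligned so the low bit is
free):
```
const FUNCREF_INIT_BIT: usize = 1;

/// On store into a funcref table: tag the (possibly null) anyfunc pointer so
/// that "explicitly stored, even if null" is distinguishable from the all-zero
/// "never initialized" state.
fn tag_for_table(anyfunc: *mut u8) -> usize {
    (anyfunc as usize) | FUNCREF_INIT_BIT
}

/// On load: zero means the element was never touched, so the caller takes the
/// lazy-initialization slow path; otherwise mask the bit off to recover the
/// real (possibly null) pointer.
fn untag_from_table(elem: usize) -> Option<*mut u8> {
    if elem == 0 {
        None
    } else {
        Some((elem & !FUNCREF_INIT_BIT) as *mut u8)
    }
}
```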
We do have to set up a precomputed array of `FuncIndex`s for the table
in order for this to work. We do this as part of the module compilation.
This PR also refactors the way that the runtime crate gains access to
information computed during module compilation.
Performance effect measured with in-tree benches/instantiation.rs, using
SpiderMonkey built for WASI, and with memfd enabled:
```
BEFORE:
sequential/default/spidermonkey.wasm
time: [68.569 us 68.696 us 68.856 us]
sequential/pooling/spidermonkey.wasm
time: [69.406 us 69.435 us 69.465 us]
parallel/default/spidermonkey.wasm: with 1 background thread
time: [69.444 us 69.470 us 69.497 us]
parallel/default/spidermonkey.wasm: with 16 background threads
time: [183.72 us 184.31 us 184.89 us]
parallel/pooling/spidermonkey.wasm: with 1 background thread
time: [69.018 us 69.070 us 69.136 us]
parallel/pooling/spidermonkey.wasm: with 16 background threads
time: [326.81 us 337.32 us 347.01 us]
WITH THIS PR:
sequential/default/spidermonkey.wasm
time: [6.7821 us 6.8096 us 6.8397 us]
change: [-90.245% -90.193% -90.142%] (p = 0.00 < 0.05)
Performance has improved.
sequential/pooling/spidermonkey.wasm
time: [3.0410 us 3.0558 us 3.0724 us]
change: [-95.566% -95.552% -95.537%] (p = 0.00 < 0.05)
Performance has improved.
parallel/default/spidermonkey.wasm: with 1 background thread
time: [7.2643 us 7.2689 us 7.2735 us]
change: [-89.541% -89.533% -89.525%] (p = 0.00 < 0.05)
Performance has improved.
parallel/default/spidermonkey.wasm: with 16 background threads
time: [147.36 us 148.99 us 150.74 us]
change: [-18.997% -18.081% -17.285%] (p = 0.00 < 0.05)
Performance has improved.
parallel/pooling/spidermonkey.wasm: with 1 background thread
time: [3.1009 us 3.1021 us 3.1033 us]
change: [-95.517% -95.511% -95.506%] (p = 0.00 < 0.05)
Performance has improved.
parallel/pooling/spidermonkey.wasm: with 16 background threads
time: [49.449 us 50.475 us 51.540 us]
change: [-85.423% -84.964% -84.465%] (p = 0.00 < 0.05)
Performance has improved.
```
So an improvement of something like 80-95% for a very large module (7420
functions in its one funcref table, 31928 functions total).
This commit corrects a few places where `MemoryIndex` was used and treated like
a `DefinedMemoryIndex` in the pooling instance allocator.
When the unstable `multi-memory` proposal is enabled, it is possible to cause a
newly allocated instance to use an incorrect base address for any defined
memories by having the module being instantiated also import a memory.
This requires enabling the unstable `multi-memory` proposal, configuring the
use of the pooling instance allocator (not the default), and then configuring
the module limits to allow imported memories (also not the default).
The fix is to replace all uses of `MemoryIndex` with `DefinedMemoryIndex` in
the pooling instance allocator.
Several `debug_assert!` have also been updated to `assert!` to sanity check the
state of the pooling allocator even in release builds.
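A minimal sketch of the index conversion, assuming `wasmtime_environ`'s
`defined_memory_index` helper:
```
use wasmtime_environ::{DefinedMemoryIndex, MemoryIndex, Module};

/// With imported memories, a `MemoryIndex` counts the imports first, so using
/// it directly to pick a slot in the pool of *defined* memories addresses the
/// wrong memory. Converting through the module yields the right slot, and
/// `None` for memories that are imported rather than defined.
fn pool_slot(module: &Module, memory: MemoryIndex) -> Option<DefinedMemoryIndex> {
    module.defined_memory_index(memory)
}
```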
Add accessors to prelude.isle to access data fields of
`func_addr` and `symbol_value` instructions.
These are based on similar versions I had added to the s390x
back-end, but are a bit more straightforward to use.
- func_ref_data: Extract SigRef, ExternalName, and RelocDistance
fields given a FuncRef.
- symbol_value_data: Extract ExternalName, RelocDistance, and
offset fields given a GlobalValue representing a Symbol.
- reloc_distance_near: Test for RelocDistance::Near.
The s390x back-end is changed to use these common versions.
Note that this exposed a bug in common ISLE code: this extractor:
  (extractor (load_sym inst)
             (and inst
                  (load _ (def_inst (symbol_value
                                     (symbol_value_data _
                                      (reloc_distance_near) offset)))
                        (i64_from_offset
                         (memarg_symbol_offset_sum <offset _)))))
would raise an assertion in sema.rs due to a supposed cycle in
extractor definitions. But there was no actual cycle; it was
simply that the extractor tree refers twice to the `insn_data`
extractor (once via the `load` and once via the `symbol_value`
extractor). Fixed by checking for pre-existing definitions only
along one path in the tree, not across the whole tree.
This adds support for all atomic operations that were unimplemented
so far in the s390x back end:
- atomic_rmw operations xchg, nand, smin, smax, umin, umax
- $I8 and $I16 versions of atomic_rmw and atomic_cas
- little endian versions of atomic_rmw and atomic_cas
All of these have to be implemented by a compare-and-swap loop;
and for the $I8 and $I16 versions the actual atomic instruction
needs to operate on the surrounding aligned 32-bit word.
Since we cannot emit new control flow during ISLE instruction
selection, these compare-and-swap loops are emitted as a single
meta-instruction to be expanded at emit time.
However, since there is a large number of different versions of
the loop required to implement all the above operations, I've
implemented a facility to allow specifying the loop bodies
from within ISLE after all, by creating a vector of MInst
structures that will be emitted as part of the meta-instruction.
There are still restrictions, in particular instructions that
are part of the loop body may not modify any virtual register.
But even so, this approach looks preferable to doing everything
in emit.rs.
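For reference, the shape of such a loop in plain Rust (conceptual only; the
backend emits s390x instructions, and being big-endian it computes the byte
lane differently than this little-endian sketch):
```
use std::sync::atomic::{AtomicU32, Ordering};

/// Atomically exchange one byte by looping with compare-and-swap on the
/// surrounding aligned 32-bit word.
unsafe fn atomic_xchg_u8(addr: *mut u8, new: u8) -> u8 {
    let word = &*((addr as usize & !3) as *const AtomicU32);
    let shift = ((addr as usize & 3) * 8) as u32;
    let mask = 0xffu32 << shift;
    let mut old = word.load(Ordering::Relaxed);
    loop {
        // Splice the new byte into its lane and try to publish the whole word.
        let desired = (old & !mask) | ((new as u32) << shift);
        match word.compare_exchange_weak(old, desired, Ordering::SeqCst, Ordering::Relaxed) {
            Ok(_) => return ((old & mask) >> shift) as u8,
            Err(actual) => old = actual,
        }
    }
}
```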
A few instructions needed in those compare-and-swap loop bodies
were added as well, in particular the RxSBG family of instructions
as well as the LOAD REVERSED in-register byte-swap instructions.
This patch also adds filetest runtests to verify the semantics
of all operations, in particular the subword and little-endian
variants (those are currently only executed on s390x).
This commit updates the `memfd` support in Wasmtime to have a runtime
toggle as to whether it's used or not. The compile-time feature gating
`memfd` support is now also re-enabled by default, but the new runtime
switch is still disabled-by-default.
Additionally this commit updates our fuzz oracle to turn on/off the
memfd flag to re-enable fuzzing with memfd on oss-fuzz.
We currently skip some tests when running our qemu-based tests for
aarch64 and s390x. Qemu has broken madvise(MADV_DONTNEED) semantics --
specifically, it just ignores madvise() [1].
We could continue to whack-a-mole the tests whenever we create new
functionality that relies on madvise() semantics, but ideally we'd just
have emulation that properly emulates!
The earlier discussions on the qemu mailing list [2] had a proposed
patch for this, but (i) this patch doesn't seem to apply cleanly anymore
(it's 3.5 years old) and (ii) it's pretty complex due to the need to
handle qemu's ability to emulate differing page sizes on host and guest.
It turns out that we only really need this for CI when host and guest
have the same page size (4KiB), so we *could* just pass the madvise()s
through. I wouldn't expect such a patch to ever land upstream in qemu,
but it satisfies our needs I think. So this PR modifies our CI setup to
patch qemu before building it locally with a little one-off patch.
[1]
https://github.com/bytecodealliance/wasmtime/pull/2518#issuecomment-747280133
[2]
https://lists.gnu.org/archive/html/qemu-devel/2018-08/msg05416.html
Currently the "sequential" and "parallel" benchmarks reports somewhat
different timings. For sequential it's time-to-instantiate, but for
parallel it's time-to-instantiate-10k instances. The parallelism in the
parallel benchmark can also theoretically be affected by rayon's
work-stealing. For example if rayon doesn't actually do any work
stealing at all then this ends up being a sequential test again.
Otherwise though it's possible for some threads to finish much earlier
as rayon isn't guaranteed to keep threads busy.
This commit applies a few updates to the benchmark:
* First an `InstancePre<T>` is now used instead of a `Linker<T>` to
front-load type-checking and avoid doing it on each instantiation (and
this is generally the fastest path to instantiate right now); a sketch
follows below.
* Next the instantiation benchmark is changed to measure one
instantiation per iteration, so per-instance instantiation can be
compared directly with the sequential numbers.
* Finally rayon is removed in favor of manually creating background
threads that do work in a loop until we tell them to stop. These
background threads are guaranteed to be working for the entire time
the benchmark is executing and should theoretically exhibit the
situation where there are N units of work all happening at once.
I also applied some minor updates here such as having the parallel
instantiation defined conditionally for multiple modules as well as
upping the limits of the pooling allocator to handle a large module
(rustpython.wasm) that I threw at it.
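A sketch of the measured loop, assuming the current
`Linker::instantiate_pre`/`InstancePre::instantiate` signatures (these have
shifted slightly across Wasmtime versions):
```
use wasmtime::{Engine, Linker, Module, Store};

fn bench_instantiation(engine: &Engine, module: &Module) -> anyhow::Result<()> {
    // Resolve and type-check imports once, outside the measured loop.
    let linker: Linker<()> = Linker::new(engine);
    let pre = linker.instantiate_pre(module)?;
    for _ in 0..1_000 {
        // One instantiation per iteration, directly comparable to the
        // sequential numbers.
        let mut store = Store::new(engine, ());
        let _instance = pre.instantiate(&mut store)?;
    }
    Ok(())
}
```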
This patch implements CLI options to insert pre-opened sockets.
`--listenfd`: Inherit environment variables and file descriptors following
the systemd listen fd specification (UNIX only).
`--tcplisten <SOCKET ADDRESS>`: Grant access to the given TCP listen socket.
Signed-off-by: Harald Hoyer <harald@profian.com>
Since memfd support just landed, and has had only ~0.5 weeks to bake
with fuzzing, we want to make release 0.34.0 of Wasmtime without it
enabled by default. This PR disables memfd by default; it can be enabled
by specifying the `memfd` feature for the `wasmtime` crate, or when
building the commandline binary.
We plan to explicitly add memfd-enabled fuzzing targets, let that go for
a while, then probably re-enable memfd in the subsequent release if no
issues come up.
The debuginfo analyses are written with the assumption that the order of
instructions in the VCode is the order of instructions in the final
machine code. This was previously a strong invariant, until we
introduced support for cold blocks. Cold blocks are implemented by
reordering during emission, because the VCode ordering has other
requirements related to lowering (respecting def-use dependencies in the
reverse pass), so it is much simpler to reorder instructions at the last
moment. Unfortunately, this causes the breakage we now see.
This commit fixes the issue by skipping all cold instructions when
emitting value-label ranges (which are translated into debuginfo). This
means that variables defined in cold blocks will not have DWARF
metadata. But cold blocks are usually compiler-inserted slowpaths, not
user code, so this is probably OK. Debuginfo is always best-effort, so
in any case this does not violate any correctness constraints.
* Consolidate methods of memory initialization
This commit consolidates the few locations that we have which are
performing memory initialization. Namely the uffd logic for creating
paged memory as well as the memfd logic for creating a memory image now
share an implementation to avoid duplicating bounds-checks or other
validation conditions. The main purpose of this commit is to fix a
fuzz-bug where a multiplication overflowed. The overflow itself was
benign but it seemed better to fix the overflow in only one place
instead of multiple.
The overflow in question is specifically when an initializer is checked
to be statically out-of-bounds and multiplies a memory's minimum size by
the wasm page size, returning the result as a `u64`. For
memory64-memories of size `1 << 48` this multiplication will overflow.
This was actually a preexisting bug with the `try_paged_init` function
which was copied for memfd, but cropped up here since memfd is used more
often than paged initialization. The fix here is to skip validation of
the `end` index if the size of memory is `1 << 64` since if the `end`
index can be represented as a `u64` then it's in-bounds. This is
somewhat of an esoteric case, though, since a memory of minimum size `1
<< 64` can't ever exist (we can't even ask the os for that much memory,
and even if we could it would fail).
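The overflowing arithmetic, for reference:
```
const WASM_PAGE_SIZE: u64 = 1 << 16; // 64 KiB

/// A memory64 memory may declare a minimum of 2^48 pages; 2^48 * 2^16 = 2^64
/// bytes, which does not fit in a `u64`, so the naive multiplication wraps.
fn memory_byte_size(minimum_pages: u64) -> Option<u64> {
    minimum_pages.checked_mul(WASM_PAGE_SIZE)
}

#[test]
fn the_overflowing_case() {
    assert_eq!(memory_byte_size(1 << 48), None);
}
```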
* Fix memfd test
* Fix some tests
* Remove InitMemory enum
* Add an `is_segmented` helper method
* More clear variable name
* Make arguments to `init_memory` more descriptive
The primary motivation of this large commit (apologies for its size!) is to
introduce `Gpr` and `Xmm` newtypes over `Reg`. This should help catch
difficult-to-diagnose register class mixup bugs in x64 lowerings.
But having a newtype for `Gpr` and `Xmm` themselves isn't enough to catch all of
our operand-with-wrong-register-class bugs, because about 50% of operands on x64
aren't just a register, but a register or memory address or even an
immediate! So we have `{Gpr,Xmm}Mem[Imm]` newtypes as well.
Unfortunately, `GprMem` et al can't be `enum`s and are therefore a little bit
noisier to work with from ISLE. They need to maintain the invariant that their
registers really are of the claimed register class, so they need to encapsulate
the inner data. If they exposed the underlying `enum` variants, then anyone
could just change register classes or construct a `GprMem` that holds an XMM
register, defeating the whole point of these newtypes. So when working with
these newtypes from ISLE, we rely on external constructors like `(gpr_to_gpr_mem
my_gpr)` instead of `(GprMem.Gpr my_gpr)`.
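A minimal sketch of the newtype idea (not the actual backend types):
```
#[derive(Clone, Copy, PartialEq)]
enum RegClass {
    Int,
    Float,
}

#[derive(Clone, Copy)]
struct Reg {
    class: RegClass,
    // ... hardware or virtual register number, etc.
}

/// A register known to be a general-purpose register.
#[derive(Clone, Copy)]
struct Gpr(Reg);

impl Gpr {
    /// Checking the register class once at construction means an ill-classed
    /// register can never be smuggled into GPR-only positions.
    fn new(reg: Reg) -> Option<Gpr> {
        if reg.class == RegClass::Int {
            Some(Gpr(reg))
        } else {
            None
        }
    }

    fn to_reg(self) -> Reg {
        self.0
    }
}
```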
A few extra lines of code are included to add support for register mapping
for all of these newtypes as well. Ultimately this is all a bit wordier than I'd
hoped it would be when I first started authoring this commit, but I think it is
all worth it nonetheless!
In the process of adding these newtypes, I didn't want to have to update both
the ISLE `extern` type definition of `MInst` and the Rust definition, so I moved
the definition fully into ISLE, similar to aarch64.
Finally, this process isn't complete. I've introduced the newtypes here, and
I've made most XMM-using instructions switch from `Reg` to `Xmm`, as well as
register class-converting instructions, but I haven't moved all of the GPR-using
instructions over to the newtypes yet. I figured this commit was big enough as
it was, and I can continue the adoption of these newtypes in follow up commits.
Part of #3685.
As a followup to the recent memfd allocator work, this PR makes the
memfd image creation occur on the first instantiation, rather than
immediately when the `Module` is loaded.
This shaves off a potentially surprising cost spike that would have
otherwise occurred: prior to the memfd work, no allocator eagerly read
the module's initial heap state into RAM. The behavior should now more
closely resemble what happened before (and the improvements in overall
instantiation time and performance, as compared to either eager init
with pure-mmap memory or user-mode pagefault handling with uffd,
remain).