Tree: 896e25e37f
cfallin/lucet-pr612-base
fitzgen-patch-1
main
pch/bound_tcp_userland_buffer
pch/bump_wasm_tools_210
pch/cli_wasi_legacy
pch/component_call_hooks
pch/resource_table
pch/resource_table_2
pch/upstream_wave
release-0.32.0
release-0.33.0
release-0.34.0
release-0.35.0
release-0.36.0
release-0.37.0
release-0.38.0
release-0.39.0
release-0.40.0
release-1.0.0
release-10.0.0
release-11.0.0
release-12.0.0
release-13.0.0
release-14.0.0
release-15.0.0
release-16.0.0
release-17.0.0
release-18.0.0
release-19.0.0
release-2.0.0
release-20.0.0
release-21.0.0
release-22.0.0
release-23.0.0
release-24.0.0
release-3.0.0
release-4.0.0
release-5.0.0
release-6.0.0
release-7.0.0
release-8.0.0
release-9.0.0
revert-9191-trevor/upgrade-regalloc
revert-union-find
stable-v0.26
trevor/fuzz-pcc
trevor/hyper-rc4
trevor/io-error-interface
0.2.0
0.3.0
cranelift-v0.31.0
cranelift-v0.32.0
cranelift-v0.33.0
cranelift-v0.34.0
cranelift-v0.35.0
cranelift-v0.36.0
cranelift-v0.37.0
cranelift-v0.39.0
cranelift-v0.40.0
cranelift-v0.41.0
cranelift-v0.42.0
cranelift-v0.43.0
cranelift-v0.43.1
cranelift-v0.44.0
cranelift-v0.45.0
cranelift-v0.46.0
cranelift-v0.46.1
cranelift-v0.60.0
cranelift-v0.61.0
cranelift-v0.62.0
cranelift-v0.69.0
dev
filecheck-v0.0.1
minimum-viable-wasi-proxy-serve
v0.10.0
v0.11.0
v0.12.0
v0.15.0
v0.16.0
v0.17.0
v0.18.0
v0.19.0
v0.2.0
v0.20.0
v0.21.0
v0.22.0
v0.22.1
v0.23.0
v0.24.0
v0.25.0
v0.26.0
v0.26.1
v0.27.0
v0.28.0
v0.29.0
v0.3.0
v0.30.0
v0.31.0
v0.32.0
v0.32.1
v0.33.0
v0.33.1
v0.34.0
v0.34.1
v0.34.2
v0.35.0
v0.35.1
v0.35.2
v0.35.3
v0.36.0
v0.37.0
v0.38.0
v0.38.1
v0.38.2
v0.38.3
v0.39.0
v0.39.1
v0.4.0
v0.40.0
v0.40.1
v0.6.0
v0.8.0
v0.9.0
v1.0.0
v1.0.1
v1.0.2
v10.0.0
v10.0.1
v10.0.2
v11.0.0
v11.0.1
v11.0.2
v12.0.0
v12.0.1
v12.0.2
v13.0.0
v13.0.1
v14.0.0
v14.0.1
v14.0.2
v14.0.3
v14.0.4
v15.0.0
v15.0.1
v16.0.0
v17.0.0
v17.0.1
v17.0.2
v17.0.3
v18.0.0
v18.0.1
v18.0.2
v18.0.3
v18.0.4
v19.0.0
v19.0.1
v19.0.2
v2.0.0
v2.0.1
v2.0.2
v20.0.0
v20.0.1
v20.0.2
v21.0.0
v21.0.1
v22.0.0
v23.0.0
v23.0.1
v23.0.2
v24.0.0
v3.0.0
v3.0.1
v4.0.0
v4.0.1
v5.0.0
v5.0.1
v6.0.0
v6.0.1
v6.0.2
v7.0.0
v7.0.1
v8.0.0
v8.0.1
v9.0.0
v9.0.1
v9.0.2
v9.0.3
v9.0.4
473 Commits (896e25e37fca56ac915b62b6329bcb4944b9a8a1)
Author | SHA1 | Message | Date |
---|---|---|---|
Alex Crichton | 99c6d7c083 |
components: Improve heuristic for splitting adapters (#4827)
This commit is a (second?) attempt at improving the generation of adapter modules to avoid excessively large functions for fuzz-generated inputs. The first iteration of adapters simply translated an entire type inline per-function. This proved problematic however since the size of the adapter function was on the order of the overall size of a type, which can be exponential for a type that is otherwise defined in linear size. The second iteration of adapters performed a split where memory-based types would always be translated with individual functions. The theory here was that once a type was memory-based it was large enough to not warrant inline translation in the original function and a separate outlined function could be shared and otherwise used to deduplicate portions of the original giant function. This again proved problematic, however, since the splitting heuristic was quite naive and didn't take into account large stack-based types. This third iteration in this commit replaces the previous system with a similar but slightly more general one. Each adapter function now has a concept of fuel which is decremented each time a layer of a type is translated. When fuel runs out further translations are deferred to outlined functions. The fuel counter should hopefully provide a sort of reasonable upper bound on the size of a function and the outlined functions should ideally provide the ability to be called from multiple places and therefore deduplicate what would otherwise be a massive function. This final iteration is another attempt at guaranteeing that an adapter module is linear in size with respect to the input type section of the original module. Additionally this iteration uniformly handles stack and memory-based translations which means that stack-based translations can't go wild in their function size and memory-based translations may benefit slightly from having at least a little bit of inlining internally. The immediate impact of this is that the `component_api` fuzzer seems to be running at a faster rate than before. Otherwise #4825 is sufficient to invalidate preexisting fuzz-bugs and this PR is hopefully the final nail in the coffin to prevent further timeouts for small inputs cropping up. Closes #4816 |
2 years ago |
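The fuel heuristic described in the commit above can be sketched roughly as follows. This is an illustrative stand-in, not Wasmtime's actual adapter compiler: the `Ty`/`Translator` types, the fuel budget of 4, and the pseudo-instruction strings are all invented for the example.

```rust
// Hypothetical sketch of a fuel-based splitting heuristic; the type and
// translator shapes here are illustrative, not Wasmtime's real internals.
#[derive(Clone)]
enum Ty {
    U32,
    Record(Vec<Ty>),
}

struct Translator {
    fuel: usize,
    outlined: Vec<String>, // bodies of helper functions generated so far
}

impl Translator {
    fn translate(&mut self, ty: &Ty, out: &mut String) {
        // Each layer of a type consumes one unit of fuel.
        if self.fuel == 0 {
            // Out of fuel: defer this subtree to an outlined helper, which can
            // be called from multiple places and therefore deduplicated.
            let idx = self.outline(ty);
            out.push_str(&format!("call $helper{idx};"));
            return;
        }
        self.fuel -= 1;
        match ty {
            Ty::U32 => out.push_str("copy_u32;"),
            Ty::Record(fields) => {
                for field in fields {
                    self.translate(field, out);
                }
            }
        }
    }

    fn outline(&mut self, ty: &Ty) -> usize {
        // The outlined helper gets a fresh fuel budget of its own, bounding
        // its size the same way the main function's size is bounded.
        let saved = self.fuel;
        self.fuel = 4;
        let mut body = String::new();
        self.translate(ty, &mut body);
        self.fuel = saved;
        self.outlined.push(body);
        self.outlined.len() - 1
    }
}

fn main() {
    // tuple<tuple<u32, u32>, tuple<u32, u32>> translated with a tiny budget.
    let pair = Ty::Record(vec![Ty::U32, Ty::U32]);
    let ty = Ty::Record(vec![pair.clone(), pair]);
    let mut t = Translator { fuel: 2, outlined: Vec::new() };
    let mut main_fn = String::new();
    t.translate(&ty, &mut main_fn);
    println!("main body: {main_fn}");
    println!("outlined helpers: {}", t.outlined.len());
}
```

The point of the sketch is only the shape of the idea: the per-function fuel counter keeps any single function's size bounded, while deeply nested types get pushed into shared helpers instead of being inlined repeatedly.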
Alex Crichton | 57dca934ad |
Upgrade wasm-tools crates, namely the component model (#4715)
* Upgrade wasm-tools crates, namely the component model This commit pulls in the latest versions of all of the `wasm-tools` family of crates. There were two major changes that happened in `wasm-tools` in the meantime: * bytecodealliance/wasm-tools#697 - this commit introduced a new API for more efficiently reading binary operators from a wasm binary. The old `Operator`-based reading was left in place, however, and continues to be what Wasmtime uses. I hope to update Wasmtime in a future PR to use this new API, but for now the biggest change is... * bytecodealliance/wasm-tools#703 - this commit was a major update to the component model AST. This commit almost entirely deals with the fallout of this change. The changes made to the component model were: 1. The `unit` type no longer exists. This was generally a simple change where the `Unit` case in a few different locations were all removed. 2. The `expected` type was renamed to `result`. This similarly was relatively lightweight and mostly just a renaming on the surface. I took this opportunity to rename `val::Result` to `val::ResultVal` and `types::Result` to `types::ResultType` to avoid clashing with the standard library types. The `Option`-based types were handled with this as well. 3. The payload type of `variant` and `result` types are now optional. This affected many locations that calculate flat type representations, ABI information, etc. The `#[derive(ComponentType)]` macro now specifically handles Rust-defined `enum` types which have no payload to the equivalent in the component model. 4. Functions can now return multiple parameters. This changed the signature of invoking component functions because the return value is now bound by `ComponentNamedList` (renamed from `ComponentParams`). This had a large effect in the tests, fuzz test case generation, etc. 5. Function types with 2-or-more parameters/results must uniquely name all parameters/results. This mostly affected the text format used throughout the tests. I haven't added specifically new tests for multi-return but I changed a number of tests to use it. Additionally I've updated the fuzzers to all exercise multi-return as well so I think we should get some good coverage with that. * Update version numbers * Use crates.io |
2 years ago |
Alex Crichton | 5add267b87 |
Fix a soundness issue with lowering variants (#4723)
* Fix a compile error on nightly Rust

  It looks like Rust nightly has gotten a bit more strict about attributes-on-expressions and previously accepted code is no longer accepted. This commit updates the generated code for a macro to a form which is accepted by rustc.

* Fix a soundness issue with lowering variants

  This commit fixes a soundness issue lowering variants in the component model where host memory could be leaked to the guest module by accident. In reviewing code recently for `Val::lower` I noticed that the variant lowering was extending the payload with `ValRaw::u32(0)` to appropriately fit the size of the variant. In reading this it appeared incorrect to me due to the fact that it should be `ValRaw::u64(0)` since up to 64 bits can be read. Additionally this implementation was also incorrect because the lowered representation of the payload itself was not necessarily zero-extended to 64 bits to accommodate other variants. It turned out these issues were benign because with the dynamic surface area to the component model the arguments were all initialized to 0 anyway. The static version of the API, however, does not initialize arguments to 0, and I wanted to initially align these two implementations, so I updated the variant implementation of lowering for dynamic values and removed the zero-ing of arguments. To test this change I updated the `debug` mode of adapter module generation to assert that the upper bits of values in wasm are always zero when the value is cast down (during `stack_get`, which only happens with variants). I then threaded the `debug` boolean configuration parameter through to the dynamic and static fuzzers. To my surprise this new assertion tripped even after the fix was applied. It turns out, though, that there was other leakage of bits through other means that I was previously unaware of. At the primitive level lowerings of types like `u32` will have a `Lower` representation of `ValRaw` and the lowering is simply `dst.write(ValRaw::i32(self))`, or the equivalent thereof. The problem with this pattern, which the fuzzers detected, is that the `ValRaw` type is 16 bytes and `ValRaw::i32(X)` only initializes the first 4. This meant that all the lowerings for all primitives were writing up to 12 bytes of garbage from the host for the wasm module to read. It turned out that this write of a `ValRaw` was sometimes 16 bytes and sometimes the appropriate size depending on the number of optimizations in play. With enough inlining, for example, `dst.write(ValRaw::i32(self))` would only write 4 bytes, as expected. In debug mode, though, without inlining, 16 bytes would be written, including the garbage from the upper bits. To solve this issue I ended up taking a somewhat different approach. I primarily updated the `ValRaw` constructors to simply always extend the values internally to 64 bits, meaning that the low 8 bytes of a `ValRaw` are always initialized. This prevents any undefined data from leaking from the host into a wasm module, and means that values are also zero-extended even if they're only used in 32-bit contexts outside of a variant. This felt like the best fix for now in terms of not really having a performance impact while additionally not requiring a rewrite of all lowerings. This solution also neatly removed the "zero out the entire payload" logic that was previously required. Now after a payload is lowered only the tail end of the payload, up to the size of the variant, is zeroed out.
This means that each lowered argument is written to at most once which should hopefully be a small performance boost for calling into functions as well. |
2 years ago |
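The leak described above boils down to writing only the low 4 bytes of a 16-byte raw-value slot and leaving the rest uninitialized. The sketch below illustrates the shape of the fix with a stand-in `RawSlot` type; it is not Wasmtime's actual `ValRaw` definition.

```rust
// Illustrative stand-in for a 16-byte raw-value slot; not Wasmtime's actual
// `ValRaw` type, just a demonstration of the before/after behavior.
#[derive(Copy, Clone)]
struct RawSlot {
    bytes: [u8; 16],
}

impl RawSlot {
    // Before the fix: only the low 4 bytes are written, so the rest of the
    // slot keeps whatever the host previously stored there.
    fn i32_leaky(x: i32, mut previous: RawSlot) -> RawSlot {
        previous.bytes[..4].copy_from_slice(&x.to_le_bytes());
        previous
    }

    // After the fix: the value is extended to 64 bits first, so the low
    // 8 bytes are always fully initialized and no host bytes leak.
    fn i32(x: i32) -> RawSlot {
        let mut bytes = [0u8; 16];
        bytes[..8].copy_from_slice(&(x as i64).to_le_bytes());
        RawSlot { bytes }
    }
}

fn main() {
    // Pretend the slot previously held host data.
    let stale = RawSlot { bytes: [0xAA; 16] };
    let leaky = RawSlot::i32_leaky(7, stale);
    let fixed = RawSlot::i32(7);
    // Stale host bytes are still visible in the leaky version...
    assert!(leaky.bytes[4..8].iter().all(|&b| b == 0xAA));
    // ...while the fixed constructor leaves the upper bits deterministically zero.
    assert!(fixed.bytes[4..8].iter().all(|&b| b == 0));
}
```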
Alex Crichton | bc8e36a6af |
Refactor and optimize the flat type calculations (#4708)
* Optimize flat type representation calculations

  Previously calculating the flat type representation would be done recursively for an entire type tree every time it was visited. Additionally the flat type representation was entirely built only to be thrown away if it was too large at the end. This chiefly presented a source of recursion based on the type structure in the component model, which fuzzing does not like as it reports stack overflows. This commit overhauls the representation of flat types in Wasmtime by caching the representation for each type in the compile-time `ComponentTypesBuilder` structure. This avoids recalculating each time the flat representation is queried and additionally allows more opportunities for short-circuiting to avoid building overly-large vectors.

* Remove duplicate flat count calculation in wasmtime

  Roughly share the infrastructure in the `wasmtime-environ` crate, namely the non-recursive and memoizing nature of the calculation.

* Fix component fuzz build
* Fix example compile |
2 years ago |
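The caching and short-circuiting idea above can be sketched as a memoized flat-count query with an upper limit. Everything here is hypothetical (type ids, `MAX_FLAT`, the `Shape` enum); it is also written recursively for brevity, whereas the commit notes the real calculation is non-recursive.

```rust
use std::collections::HashMap;

// Hypothetical type shapes; the real builder works over interned component
// type indices inside `ComponentTypesBuilder`.
#[derive(Clone)]
enum Shape {
    Prim,              // lowers to one core value
    Tuple(Vec<usize>), // references other types by id
}

const MAX_FLAT: u32 = 16; // illustrative limit, not the real constant

struct FlatCounts {
    shapes: Vec<Shape>,
    cache: HashMap<usize, Option<u32>>, // None = "too many values to flatten"
}

impl FlatCounts {
    // Returns the number of flat core values for `id`, or `None` once the
    // count exceeds MAX_FLAT. Memoizing per id means each type is computed
    // once instead of re-walking the whole tree at every query, and the
    // short-circuit avoids ever building an overly large representation.
    fn flat_count(&mut self, id: usize) -> Option<u32> {
        if let Some(hit) = self.cache.get(&id) {
            return *hit;
        }
        let result = match self.shapes[id].clone() {
            Shape::Prim => Some(1),
            Shape::Tuple(fields) => {
                let mut total = 0u32;
                let mut ok = true;
                for field in fields {
                    match self.flat_count(field) {
                        Some(n) if total + n <= MAX_FLAT => total += n,
                        _ => {
                            ok = false;
                            break;
                        }
                    }
                }
                if ok { Some(total) } else { None }
            }
        };
        self.cache.insert(id, result);
        result
    }
}

fn main() {
    // t0 = prim, t1 = (t0, t0), t2 = (t1, t1), ... doubling each level.
    let mut shapes = vec![Shape::Prim];
    for i in 1..8 {
        shapes.push(Shape::Tuple(vec![i - 1, i - 1]));
    }
    let mut counts = FlatCounts { shapes, cache: HashMap::new() };
    assert_eq!(counts.flat_count(3), Some(8));
    assert_eq!(counts.flat_count(7), None); // 128 flat values: too large
}
```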
Alex Crichton | 1e12645ab1 |
Fix a bad bounds check in component trampolines (#4716)
A `GtU` condition needed to actually be `GeU`, as the comment right above it stated but apparently I forgot to translate the comment to actual code. This fixes a fuzz bug that arose from oss-fuzz over the weekend. |
2 years ago |
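The bug class above is the classic `>` versus `>=` boundary error in an unsigned comparison. The commit does not show which quantity was being checked, so the example below is a generic illustration of why the boundary case matters, not the actual trampoline code.

```rust
// Minimal illustration of why GtU vs. GeU matters in a bounds check.
// Suppose an index into a table of `count` entries is validated before use.
fn index_ok_wrong(index: u32, count: u32) -> bool {
    // Trapping only on `index > count` (GtU) wrongly accepts index == count,
    // which is one past the last valid entry.
    !(index > count)
}

fn index_ok_right(index: u32, count: u32) -> bool {
    // The trap condition must be `index >= count` (GeU): valid indices are
    // 0..count, exclusive of count itself.
    !(index >= count)
}

fn main() {
    let count = 4;
    assert!(index_ok_wrong(4, count));  // boundary case slips through
    assert!(!index_ok_right(4, count)); // boundary case correctly rejected
}
```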
Benjamin Bouvier | 8a9b1a9025 |
Implement an incremental compilation cache for Cranelift (#4551)
This is the implementation of https://github.com/bytecodealliance/wasmtime/issues/4155, using the "inverted API" approach suggested by @cfallin (thanks!) in Cranelift, and a trait object to provide a backend for an all-included experience in Wasmtime. After the suggestion of Chris, `Function` has been split into mostly two parts:
- on the one hand, `FunctionStencil` contains all the fields required during compilation, and that act as a compilation cache key: if two function stencils are the same, then the result of their compilation (`CompiledCodeBase<Stencil>`) will be the same. This makes caching trivial, as the only thing to cache is the `FunctionStencil`.
- on the other hand, `FunctionParameters` contain the... function parameters that are required to finalize the result of compilation into a `CompiledCode` (aka `CompiledCodeBase<Final>`) with proper final relocations etc., by applying fixups and so on.
Most changes are here to accommodate those requirements, in particular that `FunctionStencil` should be `Hash`able to be used as a key in the cache:
- most source locations are now relative to a base source location in the function, and as such they're encoded as `RelSourceLoc` in the `FunctionStencil`. This required changes so that there's no need to explicitly mark a `SourceLoc` as the base source location; it's automatically detected instead the first time a non-default `SourceLoc` is set.
- user-defined external names in the `FunctionStencil` (aka before this patch `ExternalName::User { namespace, index }`) are now references into an external table of `UserExternalNameRef -> UserExternalName`, present in the `FunctionParameters`, and must be explicitly declared using `Function::declare_imported_user_function`.
- some refactorings have been made for function names:
  - `ExternalName` was used as the type for a `Function`'s name; while it thus allowed `ExternalName::Libcall` in this place, it would have been quite confusing to use it there. Instead, a new enum `UserFuncName` is introduced for this name, that's either a user-defined function name (the above `UserExternalName`) or a test case name.
  - The future of `ExternalName` is likely to become a full reference into the `FunctionParameters`'s mapping, instead of being "either a handle for user-defined external names, or the thing itself for other variants". I'm running out of time to do this, and this is not trivial as it implies touching ISLE which I'm less familiar with.
The cache computes a sha256 hash of the `FunctionStencil`, and uses this as the cache key. No equality check (using `PartialEq`) is performed in addition to the hash being the same, as we hope that this is sufficient data to avoid collisions. A basic fuzz target has been introduced that tries to do the bare minimum:
- check that a function successfully compiled and cached will be also successfully reloaded from the cache, and returns the exact same function.
- check that a trivial modification in the external mapping of `UserExternalNameRef -> UserExternalName` hits the cache, and that other modifications don't hit the cache.
- This last check is less efficient and less likely to happen, so probably should be rethought a bit.
Thanks to both @alexcrichton and @cfallin for your very useful feedback on Zulip. Some numbers show that for a large wasm module we're using internally, this is a 20% compile-time speedup, because so many `FunctionStencil`s are the same, even within a single module.
For a group of modules that have a lot of code in common, we get hit rates up to 70% when they're used together. When a single function changes in a wasm module, every other function is reloaded; that's still slower than I expect (between 10% and 50% of the overall compile time), so there's likely room for improvement. Fixes #4155. |
2 years ago |
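The stencil/parameters split above lends itself to a very small sketch: the stencil is the hashable cache key, the parameters only matter at finalization. The types below are simplified stand-ins, and a `DefaultHasher` stands in for the sha256 hash the commit describes.

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

// Everything that affects compilation, and therefore acts as the cache key.
// (Relative source locations, external-name *references*, etc. would live
// here too, in a form stable across otherwise-identical functions.)
#[derive(Hash)]
struct FunctionStencil {
    ir: String, // stand-in for the actual IR
}

// Everything needed only to finalize the result: the actual user-defined
// names behind the name refs, relocation targets, base source location, ...
struct FunctionParameters {
    names: Vec<String>,
}

#[derive(Clone)]
struct CompiledStencil {
    machine_code: Vec<u8>,
}

struct IncrementalCache {
    entries: HashMap<u64, CompiledStencil>,
}

impl IncrementalCache {
    fn get_or_compile(
        &mut self,
        stencil: &FunctionStencil,
        compile: impl FnOnce(&FunctionStencil) -> CompiledStencil,
    ) -> CompiledStencil {
        // Toy hash of the stencil as the cache key (the real cache uses sha256).
        let mut h = DefaultHasher::new();
        stencil.hash(&mut h);
        let key = h.finish();
        self.entries
            .entry(key)
            .or_insert_with(|| compile(stencil))
            .clone()
    }
}

fn finalize(code: CompiledStencil, _params: &FunctionParameters) -> Vec<u8> {
    // Apply relocations / name fixups from the parameters; elided here.
    code.machine_code
}

fn main() {
    let mut cache = IncrementalCache { entries: HashMap::new() };
    let stencil = FunctionStencil { ir: "iadd v0, v1".to_string() };
    let params = FunctionParameters { names: vec!["my_func".to_string()] };

    let first = cache.get_or_compile(&stencil, |_| CompiledStencil { machine_code: vec![0x90] });
    // A second function with an identical stencil but different parameters
    // hits the cache and only needs finalization.
    let again = cache.get_or_compile(&stencil, |_| unreachable!("cache hit expected"));
    let _bytes = finalize(again, &params);
    let _ = first;
}
```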
Alex Crichton | bd70dbebbd |
Deduplicate some size/align calculations (#4658)
This commit is an effort to reduce the amount of complexity around managing the size/alignment calculations of types in the canonical ABI. Previously the logic for the size/alignment of a type was spread out across a number of locations. While each individual calculation is not really the most complicated thing in the world having the duplication in so many places was constantly worrying me. I've opted in this commit to centralize all of this within the runtime at least, and now there's only one "duplicate" of this information in the fuzzing infrastructure which is to some degree less important to deduplicate. This commit introduces a new `CanonicalAbiInfo` type to house all abi size/align information for both memory32 and memory64. This new type is then used pervasively throughout fused adapter compilation, dynamic `Val` management, and typed functions. This type was also able to reduce the complexity of the macro-generated code meaning that even `wasmtime-component-macro` is performing less math than it was before. One other major feature of this commit is that this ABI information is now saved within a `ComponentTypes` structure. This avoids recursive querying of size/align information frequently and instead effectively caching it. This was a worry I had for the fused adapter compiler which frequently sought out size/align information and would recursively descend each type tree each time. The `fact-valid-module` fuzzer is now nearly 10x faster in terms of iterations/s which I suspect is due to this caching. |
2 years ago |
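A centralized size/alignment record of the kind described above might look roughly like the following. The field and constant names are illustrative (the real `CanonicalAbiInfo` carries more than this); the record layout rule shown (align each field, round the total up to the record's alignment) is the canonical ABI's.

```rust
// Illustrative size/alignment info for one type under both 32- and 64-bit
// memories; not the real CanonicalAbiInfo definition.
#[derive(Clone, Copy)]
struct AbiInfo {
    size32: u32,
    align32: u32,
    size64: u32,
    align64: u32,
}

impl AbiInfo {
    const U8: AbiInfo = AbiInfo { size32: 1, align32: 1, size64: 1, align64: 1 };
    const U32: AbiInfo = AbiInfo { size32: 4, align32: 4, size64: 4, align64: 4 };
    // Pointers (and therefore strings/lists, which are pointer + length pairs)
    // are what actually differ between the two memory configurations.
    const PTR: AbiInfo = AbiInfo { size32: 4, align32: 4, size64: 8, align64: 8 };

    // Canonical-ABI record layout: align each field to its own alignment,
    // then round the total size up to the record's overall alignment.
    fn record(fields: &[AbiInfo]) -> AbiInfo {
        fn layout(sizes: impl Iterator<Item = (u32, u32)>) -> (u32, u32) {
            let (mut size, mut align) = (0u32, 1u32);
            for (fsize, falign) in sizes {
                size = align_to(size, falign) + fsize;
                align = align.max(falign);
            }
            (align_to(size, align), align)
        }
        let (size32, align32) = layout(fields.iter().map(|f| (f.size32, f.align32)));
        let (size64, align64) = layout(fields.iter().map(|f| (f.size64, f.align64)));
        AbiInfo { size32, align32, size64, align64 }
    }
}

fn align_to(offset: u32, align: u32) -> u32 {
    (offset + align - 1) / align * align
}

fn main() {
    // record { a: u8, b: u32, c: string } — the string is ptr + length.
    let string = AbiInfo::record(&[AbiInfo::PTR, AbiInfo::PTR]);
    let rec = AbiInfo::record(&[AbiInfo::U8, AbiInfo::U32, string]);
    assert_eq!(rec.size32, 16); // 1 + pad(3) + 4 + 8
    assert_eq!(rec.size64, 24); // 1 + pad(3) + 4 + 16
}
```

Computing both memory32 and memory64 answers once and storing them on the type is what lets adapter compilation, `Val` handling, and typed functions all query the same cached numbers instead of re-walking type trees.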
Alex Crichton | 66025636fd |
Remove a layer of recursion in adapter compilation (#4657)
In #4640 a feature was added to adapter modules that whenever translation goes through memory it instead goes through a helper function as opposed to inlining it directly. The generation of the helper function happened recursively at compile time, however, and sure enough oss-fuzz has found an input which blows the host stack at compile time. This commit removes the compile-time recursion from the adapter compiler when translating these helper functions by deferring the translation to a worklist which is processed after the original function is translated. This makes the stack-based recursion instead heap-based, removing the stack overflow. |
2 years ago |
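The stack-to-heap transformation described above is the standard worklist pattern: instead of recursing when a nested helper is discovered, push it onto a queue and translate it after the current function is done. A minimal sketch with hypothetical names:

```rust
use std::collections::VecDeque;

// Hypothetical "helper function still to be translated" stand-in.
struct PendingHelper {
    depth: u32,
}

fn translate_all(root_depth: u32) -> usize {
    // The recursion now lives on the heap (the queue), so deeply nested
    // inputs cannot blow the host stack at compile time.
    let mut worklist = VecDeque::new();
    worklist.push_back(PendingHelper { depth: root_depth });
    let mut generated = 0;
    while let Some(helper) = worklist.pop_front() {
        generated += 1;
        if helper.depth > 0 {
            // "Translating" this helper discovers another helper it needs;
            // defer it instead of recursing into it.
            worklist.push_back(PendingHelper { depth: helper.depth - 1 });
        }
    }
    generated
}

fn main() {
    // A nesting depth that would overflow the stack if handled recursively
    // is handled fine by the worklist.
    assert_eq!(translate_all(1_000_000), 1_000_001);
}
```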
Alex Crichton | 867f5c1244 |
Update behavior of zero-length lists/strings (#4648)
The spec was expected to change to not bounds-check 0-byte lists/strings but has since been updated to match `memory.copy` which does indeed check the pointer for 0-byte copies. |
2 years ago |
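The behavioral difference above is small but observable: under the updated semantics a zero-byte copy still validates its pointer, the same way `memory.copy` does. A sketch of the two behaviors (generic code, not Wasmtime's):

```rust
// Sketch of the two possible semantics for zero-length list/string copies.
fn check_old(ptr: u32, len: u32, mem_size: u32) -> Result<(), &'static str> {
    // Old expectation: a 0-byte copy skipped validation entirely.
    if len == 0 {
        return Ok(());
    }
    bounds_check(ptr, len, mem_size)
}

fn check_new(ptr: u32, len: u32, mem_size: u32) -> Result<(), &'static str> {
    // Updated behavior (matching `memory.copy`): the pointer is still
    // bounds-checked even when no bytes are copied.
    bounds_check(ptr, len, mem_size)
}

fn bounds_check(ptr: u32, len: u32, mem_size: u32) -> Result<(), &'static str> {
    match ptr.checked_add(len) {
        Some(end) if end <= mem_size => Ok(()),
        _ => Err("out of bounds"),
    }
}

fn main() {
    let mem_size = 65536;
    assert!(check_old(u32::MAX, 0, mem_size).is_ok()); // silently accepted
    assert!(check_new(u32::MAX, 0, mem_size).is_err()); // now traps
}
```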
Alex Crichton | c816a52746 |
Reuse locals in adapter trampolines (#4646)
This commit implements a scheme I've been meaning to work on in the adapter compiler where instead of always generating a fresh local for all operations locals may now be reused. Locals generated are explicitly free'd when their lexical scope has ended, allowing reuse in translation of later types in the adapter. This also implements a new scheme for initializing locals where previously a local could simply be generated, but now the local must be fused with its initializer where a `local.{tee,set}` instruction is always generated. This should help prevent a bug I ran into with strings where one usage of a local was forgotten to be initialized which meant that when it was used during a loop it may have had a stale value from before. Modeling this in Rust isn't possible at compile time unfortunately so I opted for the next best thing, runtime panics. If a local is accidentally not released back to the pool of free locals then it will panic. The fuzzer for simply generating and validating adapter modules should be good at exercising this and it weeded out a few forgotten free's and should be good now. |
2 years ago |
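The local-reuse scheme above (locals handed out from a pool, always fused with an initializer, and caught at runtime if never returned) can be sketched like this. The `LocalPool`/`Local` types and the emitted instruction strings are hypothetical, not the adapter compiler's real API.

```rust
// Hypothetical local allocator for an adapter-function builder. Locals must
// be explicitly freed when their lexical scope ends; forgetting to free one
// is caught at runtime by the Drop assertion below.
struct LocalPool {
    next: u32,
    free: Vec<u32>,
    outstanding: usize,
}

struct Local {
    index: u32,
    released: bool,
}

impl LocalPool {
    fn new() -> LocalPool {
        LocalPool { next: 0, free: Vec::new(), outstanding: 0 }
    }

    // A local can only be created together with its initializer (a
    // `local.tee` here), so there is no window where it holds a stale value
    // from an earlier use.
    fn local_tee(&mut self, code: &mut Vec<String>, init: &str) -> Local {
        let index = self.free.pop().unwrap_or_else(|| {
            let i = self.next;
            self.next += 1;
            i
        });
        self.outstanding += 1;
        code.push(format!("{init} local.tee {index}"));
        Local { index, released: false }
    }

    fn free(&mut self, mut local: Local) {
        local.released = true;
        self.outstanding -= 1;
        self.free.push(local.index);
    }
}

impl Drop for Local {
    fn drop(&mut self) {
        // Mirrors the "runtime panic" strategy from the commit message.
        assert!(self.released, "local {} was never returned to the pool", self.index);
    }
}

fn main() {
    let mut pool = LocalPool::new();
    let mut code = Vec::new();
    let a = pool.local_tee(&mut code, "i32.const 1");
    pool.free(a);
    // Because `a` was freed, the next local reuses index 0.
    let b = pool.local_tee(&mut code, "i32.const 2");
    assert_eq!(b.index, 0);
    pool.free(b);
    assert_eq!(pool.outstanding, 0);
}
```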
Alex Crichton | 866ec46613 |
Implement roundtrip fuzzing of component adapters (#4640)
* Improve the `component_api` fuzzer on a few dimensions * Update the generated component to use an adapter module. This involves two core wasm instances communicating with each other to test that data flows through everything correctly. The intention here is to fuzz the fused adapter compiler. String encoding options have been plumbed here to exercise differences in string encodings. * Use `Cow<'static, ...>` and `static` declarations for each static test case to try to cut down on rustc codegen time. * Add `Copy` to derivation of fuzzed enums to make `derive(Clone)` smaller. * Use `Store<Box<dyn Any>>` to try to cut down on codegen by monomorphizing fewer `Store<T>` implementation. * Add debug logging to print out what's flowing in and what's flowing out for debugging failures. * Improve `Debug` representation of dynamic value types to more closely match their Rust counterparts. * Fix a variant issue with adapter trampolines Previously the offset of the payload was calculated as the discriminant aligned up to the alignment of a singular case, but instead this needs to be aligned up to the alignment of all cases to ensure all cases start at the same location. * Fix a copy/paste error when copying masked integers A 32-bit load was actually doing a 16-bit load by accident since it was copied from the 16-bit load-and-mask case. * Fix f32/i64 conversions in adapter modules The adapter previously erroneously converted the f32 to f64 and then to i64, where instead it should go from f32 to i32 to i64. * Fix zero-sized flags in adapter modules This commit corrects the size calculation for zero-sized flags in adapter modules. cc #4592 * Fix a variant size calculation bug in adapters This fixes the same issue found with variants during normal host-side fuzzing earlier where the size of a variant needs to align up the summation of the discriminant and the maximum case size. * Implement memory growth in libc bump realloc Some fuzz-generated test cases are copying lists large enough to exceed one page of memory so bake in a `memory.grow` to the bump allocator as well. * Avoid adapters of exponential size This commit is an attempt to avoid adapters being exponentially sized with respect to the type hierarchy of the input. Previously all adaptation was done inline within each adapter which meant that if something was structured as `tuple<T, T, T, T, ...>` the translation of `T` would be inlined N times. For very deeply nested types this can quickly create an exponentially sized adapter with types of the form: (type $t0 (list u8)) (type $t1 (tuple $t0 $t0)) (type $t2 (tuple $t1 $t1)) (type $t3 (tuple $t2 $t2)) ;; ... where the translation of `t4` has 8 different copies of translating `t0`. This commit changes the translation of types through memory to almost always go through a helper function. The hope here is that it doesn't lose too much performance because types already reside in memory. This can still lead to exponentially sized adapter modules to a lesser degree where if the translation all happens on the "stack", e.g. via `variant`s and their flat representation then many copies of one translation could still be made. For now this commit at least gets the problem under control for fuzzing where fuzzing doesn't trivially find type hierarchies that take over a minute to codegen the adapter module. One of the main tricky parts of this implementation is that when a function is generated the index that it will be placed at in the final module is not known at that time. 
To solve this the encoded form of the `Call` instruction is saved in a relocation-style format where the `Call` isn't encoded but instead saved into a different area for encoding later. When the entire adapter module is encoded to wasm these pseudo-`Call` instructions are encoded as real instructions at that time. * Fix some memory64 issues with string encodings Introduced just before #4623 I had a few mistakes related to 64-bit memories and mixing 32/64-bit memories. * Actually insert into the `translate_mem_funcs` map This... was the whole point of having the map! * Assert memory growth succeeds in bump allocator |
2 years ago |
Alex Crichton | 650979ae40 |
Implement strings in adapter modules (#4623)
* Implement strings in adapter modules

  This commit is a hefty addition to Wasmtime's support for the component model. This implements the final remaining type (in the current type hierarchy) unimplemented in adapter module trampolines: strings. Strings are the most complicated type to implement in adapter trampolines because they are highly structured chunks of data in memory (according to specific encodings). Additionally each lift/lower operation can choose its own encoding for strings, meaning that Wasmtime, the host, may have to convert between any pairwise ordering of string encodings. The `CanonicalABI.md` in the component-model repo in general specifies all the fiddly bits of string encoding so there's not a ton of wiggle room for Wasmtime to get creative. This PR largely "just" implements that. The high-level architecture of this implementation is:

  * Fused adapters are first identified to determine src/dst string encodings. This statically fixes what transcoding operation is being performed.
  * The generated adapter will be responsible for managing calls to `realloc` and performing bounds checks. The adapter itself does not perform memory copies or validation of string contents, however. Instead each transcoding operation is modeled as an imported function into the adapter module. This means that the adapter module dynamically, during compile time, determines what string transcoders are needed. Note that an imported transcoder is not only parameterized over the transcoding operation but additionally which memory is the source and which is the destination.
  * The imported core wasm functions are modeled as a new `CoreDef::Transcoder` structure. These transcoders end up being small Cranelift-compiled trampolines. The Cranelift-compiled trampoline will load the actual base pointer of memory and add it to the relative pointers passed as function arguments. This trampoline then calls a transcoder "libcall" which enters Rust-defined functions for actual transcoding operations.
  * Each possible transcoding operation is implemented in Rust with a unique name and a unique signature depending on the needs of the transcoder. I've tried to document inline what each transcoder does.

  This means that `Module::translate_string` in adapter modules is by far the largest translation method. The main reason for this is the management around calling the imported transcoder functions in the face of validating string pointers/lengths and performing the dance of `realloc`-vs-transcode at the right time. I've tried to ensure that each individual case in transcoding is documented well enough to understand what's going on as well. Additionally in this PR is a full implementation in the host for the `latin1+utf16` encoding, which means that both lifting and lowering host strings now work with this encoding. Currently the implementation of each transcoder function is likely far from optimal. Where possible I've leaned on the standard library itself and for latin1-related things I'm leaning on the `encoding_rs` crate. I initially tried to implement everything with `encoding_rs` but was unable to uniformly do so easily. For now I settled on trying to get a known-correct (even in the face of endianness) implementation for all of these transcoders. If and when performance becomes an issue it should be possible to implement more optimized versions of each of these transcoding operations.
Testing this commit has been somewhat difficult and my general plan, like with the `(list T)` type, is to rely heavily on fuzzing to cover the various cases here. In this PR though I've added a simple test that pushes some statically known strings through all the pairs of encodings between source and destination. I've attempted to pick "interesting" strings that one way or another stress the various paths in each transcoding operation to ideally get full branch coverage there. Additionally a suite of "negative" tests have also been added to ensure that validity of encoding is actually checked. * Fix a temporarily commented out case * Fix wasmtime-runtime tests * Update deny.toml configuration * Add `BSD-3-Clause` for the `encoding_rs` crate * Remove some unused licenses * Add an exemption for `encoding_rs` for now * Split up the `translate_string` method Move out all the closures and package up captured state into smaller lists of arguments. * Test out-of-bounds for zero-length strings |
2 years ago |
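Because each side of a fused adapter fixes its string encoding statically, the adapter can pick its transcoder import from the (source, destination) pair at compile time. The sketch below shows only that dispatch; the enum, import names, and the comment about rebasing pointers are illustrative, not Wasmtime's actual identifiers.

```rust
// Hypothetical sketch of picking a transcoding operation from the statically
// known pair of string encodings on either side of an adapter.
#[derive(Clone, Copy)]
enum StringEncoding {
    Utf8,
    Utf16,
    CompactUtf16, // the "latin1+utf16" option
}

// Name of the host transcoder the adapter would import for this pair
// (names invented for the example).
fn transcoder_import(src: StringEncoding, dst: StringEncoding) -> &'static str {
    use StringEncoding::*;
    match (src, dst) {
        (Utf8, Utf8) => "utf8-to-utf8",
        (Utf8, Utf16) => "utf8-to-utf16",
        (Utf8, CompactUtf16) => "utf8-to-latin1-or-utf16",
        (Utf16, Utf8) => "utf16-to-utf8",
        (Utf16, Utf16) => "utf16-to-utf16",
        (Utf16, CompactUtf16) => "utf16-to-latin1-or-utf16",
        (CompactUtf16, Utf8) => "latin1-or-utf16-to-utf8",
        (CompactUtf16, Utf16) => "latin1-or-utf16-to-utf16",
        (CompactUtf16, CompactUtf16) => "latin1-or-utf16-to-latin1-or-utf16",
    }
}

fn main() {
    // The adapter imports one such function per encoding pair (and per
    // source/destination memory) it actually needs; a Cranelift trampoline
    // rebases the relative pointers onto real memory before the libcall runs.
    println!("{}", transcoder_import(StringEncoding::Utf8, StringEncoding::Utf16));
}
```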
Alex Crichton | 1ce9e8aa5f |
Fix an issue in adapter module partitioning (#4622)
When an adapter module depends on a particular core wasm instance this means that it actually depends on not only that instance but all prior core wasm instances as well. This is because core wasm instances must be instantiated in the specified order within a component and that cannot change depending on the dataflow between adapters. This commit fixes a possible panic from linearizing the component dfg where an adapter module tried to depend on an instance that hadn't been instantiated yet because the ordering dependency between core wasm instances hadn't been modeled. |
2 years ago |
wasmtime-publish | 412fa04911 |
Bump Wasmtime to 0.41.0 (#4620)
Co-authored-by: Wasmtime Publish <wasmtime-publish@users.noreply.github.com> |
2 years ago |
Peter Huene | 42233e8eda |
components: ignore export aliases to types in translation. (#4604)
* components: ignore export aliases to types in translation.

  Currently, translation is ignoring type exports from components during translation by skipping over them before adding them to the exports map. If a component instantiates an inner component and aliases a type export of that instance, it will cause wasmtime to panic with a failure to find the export in the exports map. The fix is to add a representation for exported types to the map that is simply ignored when encountered. This also makes it easier to track places where we would have to support type exports in translation in the future.

* Keep type information for type exports.

  This commit keeps the type information for type exports so that types can be properly aliased from an instance export, thereby adjusting the type index space accordingly.

* Add a simple test case for type exports for the component model. |
2 years ago |
Alex Crichton | b4d7ab36f9 |
Add a dataflow-based representation of components (#4597)
* Add a dataflow-based representation of components This commit updates the inlining phase of compiling a component to creating a dataflow-based representation of a component instead of creating a final `Component` with a linear list of initializers. This dataflow graph is then linearized in a final step to create the actual final `Component`. The motivation for this commit stems primarily from my work implementing strings in fused adapters. In doing this my plan is to defer most low-level transcoding to the host itself rather than implementing that in the core wasm adapter modules. This means that small cranelift-generated trampolines will be used for adapter modules to call which then call "transcoding libcalls". The cranelift-generated trampolines will get raw pointers into linear memory and pass those to the libcall which core wasm doesn't have access to when passing arguments to an import. Implementing this with the previous representation of a `Component` was becoming too tricky to bear. The initialization of a transcoder needed to happen at just the right time: before the adapter module which needed it was instantiated but after the linear memories referenced had been extracted into the `VMComponentContext`. The difficulty here is further compounded by the current adapter module injection pass already being quite complicated. Adapter modules are already renumbering the index space of runtime instances and shuffling items around in the `GlobalInitializer` list. Perhaps the worst part of this was that memories could already be referenced by host function imports or exports to the host, and if adapters referenced the same memory it shouldn't be referenced twice in the component. This meant that `ExtractMemory` initializers ideally needed to be shuffled around in the initializer list to happen as early as possible instead of wherever they happened to show up during translation. Overall I did my best to implement the transcoders but everything always came up short. I have decided to throw my hands up in the air and try a completely different approach to this, namely the dataflow-based representation in this commit. This makes it much easier to edit the component after initial translation for injection of adapters, injection of transcoders, adding dependencies on possibly-already-existing items, etc. The adapter module partitioning pass in this commit was greatly simplified to something which I believe is functionally equivalent but is probably an order of magnitude easier to understand. The biggest downside of this representation I believe is having a duplicate representation of a component. The `component::info` was largely duplicated into the `component::dfg` module in this commit. Personally though I think this is a more appropriate tradeoff than before because it's very easy to reason about "convert representation A to B" code whereas it was very difficult to reason about shuffling around `GlobalInitializer` items in optimal fashions. This may also have a cost at compile-time in terms of shuffling data around, but my hope is that we have lots of other low-hanging fruit to optimize if it ever comes to that which allows keeping this easier-to-understand representation. Finally, to reiterate, the final representation of components is not changed by this PR. To the runtime internals everything is still the same. * Fix compile of factc |
2 years ago |
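The "build a dataflow graph, then linearize it at the end" approach above ultimately reduces to a dependency-respecting walk that emits each initializer exactly once, after everything it needs. A minimal sketch with invented node names (the real `component::dfg` module models memories, adapters, transcoders, and instances far more precisely):

```rust
use std::collections::HashMap;

// Hypothetical dataflow nodes for a component: each node depends on others
// and must be emitted as an initializer exactly once, after its dependencies.
struct Dfg {
    deps: HashMap<&'static str, Vec<&'static str>>,
}

fn linearize(dfg: &Dfg, roots: &[&'static str]) -> Vec<&'static str> {
    fn visit(node: &'static str, dfg: &Dfg, done: &mut Vec<&'static str>) {
        if done.contains(&node) {
            return; // already scheduled (e.g. a memory referenced twice)
        }
        if let Some(deps) = dfg.deps.get(node) {
            for &dep in deps {
                visit(dep, dfg, done);
            }
        }
        done.push(node);
    }
    let mut order = Vec::new();
    for &root in roots {
        visit(root, dfg, &mut order);
    }
    order
}

fn main() {
    // An adapter needs a memory extracted from instance0; instance1 needs the
    // adapter; exporting instance1 pulls everything else into the list.
    let mut deps = HashMap::new();
    deps.insert("extract-memory", vec!["instantiate-instance0"]);
    deps.insert("adapter", vec!["extract-memory"]);
    deps.insert("instantiate-instance1", vec!["adapter", "instantiate-instance0"]);
    let dfg = Dfg { deps };
    let order = linearize(&dfg, &["instantiate-instance1"]);
    assert_eq!(
        order,
        vec!["instantiate-instance0", "extract-memory", "adapter", "instantiate-instance1"]
    );
}
```

Editing a graph and deriving the ordering at the end is what makes injecting adapters and transcoders tractable compared to splicing items into an already-linear initializer list.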
Joel Dice | ed8908efcf |
implement fuzzing for component types (#4537)
This addresses #4307. For the static API we generate 100 arbitrary test cases at build time, each of which includes 0-5 parameter types, a result type, and a WAT fragment containing an imported function and an exported function. The exported function calls the imported function, which is implemented by the host. At runtime, the fuzz test selects a test case at random and feeds it zero or more sets of arbitrary parameters and results, checking that values which flow host-to-guest and guest-to-host make the transition unchanged. The fuzz test for the dynamic API follows a similar pattern, the only difference being that test cases are generated at runtime. Signed-off-by: Joel Dice <joel.dice@fermyon.com> |
2 years ago |
Nick Fitzgerald | 70ce288dc7 |
Save exit Wasm FP and PC in component-to-host trampolines (#4601)
* Wasmtime: Add a pointer to `VMRuntimeLimits` in component contexts
* Save exit Wasm FP and PC in component-to-host trampolines (fixes #4535)
* Add comment about why we deref the trampoline's FP
* Update some tests to use new `vmruntime_limits_*` methods |
2 years ago |
Alex Crichton | ff6082c0af |
Improve readability of memory64 compat in `fact` (#4581)
This commit aims to improve the readability of supporting the memory64 proposal in the `fact` adapter trampoline compiler. Previously there were a few sprinkled blocks that used `if` to generate different instructions inline, but as I've worked on support for strings this has become pretty unwieldy as strings do far more memory manipulation than other type conversions. A pattern that's easier to read is to have small instruction helpers that take the pointer width as an argument and internally dispatch to the correct instruction. This keeps the main translation code branch-free and a bit easier to follow. Additionally for more complicated branching logic it allows for deduplicating the main translation path by having lots of little branches instead of one large branch with everything duplicated on both halves. |
2 years ago |
Alex Crichton | 9f82644cc3 |
Some minor cleanups/refactorings in components (#4582)
This is a collection of some minor renamings, refactorings, sharing of code, etc. This was all discovered during my addition of string support to adapter functions and I figured it'd be best to frontload this and land it ahead of the full patch since it's getting complex. |
2 years ago |
Alex Crichton | 0a6baeddf4 |
Improve some support in `factc`: (#4580)
* Support CLI parameters for string encoding
* Fix `--skip-validate`
* Fix printing binary to stdout |
2 years ago |
Peter Huene | 43125aa994 |
components: fix trampoline compilation for lists. (#4579)
This commit fixes trampoline compilation for lists where the loop condition would only branch if the amount remaining was 0 instead of not 0. It resulted in only the first element of the list being copied. |
2 years ago |
Alex Crichton | fb59de15af |
Implement fused adapters for `(list T)` types (#4558)
* Implement fused adapters for `(list T)` types This commit implements one of the two remaining types for adapter fusion, lists. This implementation is particularly tricky for a number of reasons: * Lists have a number of validity checks which need to be carefully implemented. For example the byte length of the list passed to allocation in the destination module could overflow the 32-bit index space. Additionally lists in 32-bit memories need a check that their final address is in-bounds in the address space. * In the effort to go ahead and support memory64 at the lowest layers this is where much of the magic happens. Lists are naturally always stored in memory and shifting between 64/32-bit address spaces is done here. This notably required plumbing an `Options` around during flattening/size/alignment calculations due to the size/types of lists changing depending on the memory configuration. I've also added a small `factc` program in this commit which should hopefully assist in exploring and debugging adapter modules. This takes as input a component (text or binary format) and then generates an adapter module for all component function signatures found internally. This commit notably does not include tests for lists. I tried to figure out a good way to add these but I felt like there were too many cases to test and the tests would otherwise be extremely verbose. Instead I think the best testing strategy for this commit will be through #4537 which should be relatively extensible to testing adapters between modules in addition to host-based lifting/lowering. * Improve handling of lists of 0-size types * Skip overflow checks on byte sizes for 0-size types * Skip the copy loop entirely when src/dst are both 0 * Skip the increments of src/dst pointers if either is 0-size * Update semantics for zero-sized lists/strings When a list/string has a 0-byte-size the base pointer is no longer verified to be in-bounds to match the supposedly desired adapter semantics where no trap happens because no turn of the loop happens. |
2 years ago |
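The list validity checks called out above (the byte length overflowing the 32-bit index space, and the final address staying in bounds) amount to checked arithmetic before the copy loop. A sketch under the assumption of a 32-bit destination memory; the function and its error strings are invented for the example:

```rust
// Sketch of the validity checks for copying a `(list T)` into a 32-bit
// destination memory: the byte length must not overflow 32 bits, and the
// allocation returned by the destination's realloc must be in bounds.
fn validate_list_dst(
    len: u32,          // number of elements
    elem_size: u32,    // byte size of T in the destination memory
    dst_base: u32,     // pointer returned by the destination's realloc
    dst_mem_size: u32, // current byte size of the destination memory
) -> Result<u32, &'static str> {
    // `len * elem_size` could exceed the 32-bit index space.
    let byte_len = len
        .checked_mul(elem_size)
        .ok_or("list byte length overflows 32-bit address space")?;
    // The final address of the copy must still be inside the memory.
    let end = dst_base
        .checked_add(byte_len)
        .ok_or("list end address overflows 32-bit address space")?;
    if end > dst_mem_size {
        return Err("list out of bounds in destination memory");
    }
    Ok(byte_len)
}

fn main() {
    let mem = 2 * 65536; // two wasm pages
    assert_eq!(validate_list_dst(1000, 8, 1024, mem), Ok(8000));
    assert!(validate_list_dst(u32::MAX, 8, 0, mem).is_err());
    assert!(validate_list_dst(1000, 8, mem - 4, mem).is_err());
}
```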
Alex Crichton | 893fadb485 |
components: Fix support for 0-sized flags (#4560)
This commit goes through and updates support in the various argument passing routines to support 0-sized flags. A bit of a degenerate case but clarified in WebAssembly/component-model#76 as intentional. |
2 years ago |
Nick Fitzgerald | 46782b18c2 |
`wasmtime`: Implement fast Wasm stack walking (#4431)
* Always preserve frame pointers in Wasmtime This allows us to efficiently and simply capture Wasm stacks without maintaining and synchronizing any safety-critical side tables between the compiler and the runtime. * wasmtime: Implement fast Wasm stack walking Why do we want Wasm stack walking to be fast? Because we capture stacks whenever there is a trap and traps actually happen fairly frequently with short-lived programs and WASI's `exit`. Previously, we would rely on generating the system unwind info (e.g. `.eh_frame`) and using the system unwinder (via the `backtrace`crate) to walk the full stack and filter out any non-Wasm stack frames. This can, unfortunately, be slow for two primary reasons: 1. The system unwinder is doing `O(all-kinds-of-frames)` work rather than `O(wasm-frames)` work. 2. System unwind info and the system unwinder need to be much more general than a purpose-built stack walker for Wasm needs to be. It has to handle any kind of stack frame that any compiler might emit where as our Wasm frames are emitted by Cranelift and always have frame pointers. This translates into implementation complexity and general overhead. There can also be unnecessary-for-our-use-cases global synchronization and locks involved, further slowing down stack walking in the presence of multiple threads trying to capture stacks in parallel. This commit introduces a purpose-built stack walker for traversing just our Wasm frames. To find all the sequences of Wasm-to-Wasm stack frames, and ignore non-Wasm stack frames, we keep a linked list of `(entry stack pointer, exit frame pointer)` pairs. This linked list is maintained via Wasm-to-host and host-to-Wasm trampolines. Within a sequence of Wasm-to-Wasm calls, we can use frame pointers (which Cranelift preserves) to find the next older Wasm frame on the stack, and we keep doing this until we reach the entry stack pointer, meaning that the next older frame will be a host frame. The trampolines need to avoid a couple stumbling blocks. First, they need to be compiled ahead of time, since we may not have access to a compiler at runtime (e.g. if the `cranelift` feature is disabled) but still want to be able to call functions that have already been compiled and get stack traces for those functions. Usually this means we would compile the appropriate trampolines inside `Module::new` and the compiled module object would hold the trampolines. However, we *also* need to support calling host functions that are wrapped into `wasmtime::Func`s and there doesn't exist *any* ahead-of-time compiled module object to hold the appropriate trampolines: ```rust // Define a host function. let func_type = wasmtime::FuncType::new( vec![wasmtime::ValType::I32], vec![wasmtime::ValType::I32], ); let func = Func::new(&mut store, func_type, |_, params, results| { // ... Ok(()) }); // Call that host function. let mut results = vec![wasmtime::Val::I32(0)]; func.call(&[wasmtime::Val::I32(0)], &mut results)?; ``` Therefore, we define one host-to-Wasm trampoline and one Wasm-to-host trampoline in assembly that work for all Wasm and host function signatures. These trampolines are careful to only use volatile registers, avoid touching any register that is an argument in the calling convention ABI, and tail call to the target callee function. This allows forwarding any set of arguments and any returns to and from the callee, while also allowing us to maintain our linked list of Wasm stack and frame pointers before transferring control to the callee. 
These trampolines are not used in Wasm-to-Wasm calls, only when crossing the host-Wasm boundary, so they do not impose overhead on regular calls. (And if using one trampoline for all host-Wasm boundary crossing ever breaks branch prediction enough in the CPU to become any kind of bottleneck, we can do fun things like have multiple copies of the same trampoline and choose a random copy for each function, sharding the functions across branch predictor entries.) Finally, this commit also ends the use of a synthetic `Module` and allocating a stubbed out `VMContext` for host functions. Instead, we define a `VMHostFuncContext` with its own magic value, similar to `VMComponentContext`, specifically for host functions. <h2>Benchmarks</h2> <h3>Traps and Stack Traces</h3> Large improvements to taking stack traces on traps, ranging from shaving off 64% to 99.95% of the time it used to take. <details> ``` multi-threaded-traps/0 time: [2.5686 us 2.5808 us 2.5934 us] thrpt: [0.0000 elem/s 0.0000 elem/s 0.0000 elem/s] change: time: [-85.419% -85.153% -84.869%] (p = 0.00 < 0.05) thrpt: [+560.90% +573.56% +585.84%] Performance has improved. Found 8 outliers among 100 measurements (8.00%) 4 (4.00%) high mild 4 (4.00%) high severe multi-threaded-traps/1 time: [2.9021 us 2.9167 us 2.9322 us] thrpt: [341.04 Kelem/s 342.86 Kelem/s 344.58 Kelem/s] change: time: [-91.455% -91.294% -91.096%] (p = 0.00 < 0.05) thrpt: [+1023.1% +1048.6% +1070.3%] Performance has improved. Found 6 outliers among 100 measurements (6.00%) 1 (1.00%) high mild 5 (5.00%) high severe multi-threaded-traps/2 time: [2.9996 us 3.0145 us 3.0295 us] thrpt: [660.18 Kelem/s 663.47 Kelem/s 666.76 Kelem/s] change: time: [-94.040% -93.910% -93.762%] (p = 0.00 < 0.05) thrpt: [+1503.1% +1542.0% +1578.0%] Performance has improved. Found 5 outliers among 100 measurements (5.00%) 5 (5.00%) high severe multi-threaded-traps/4 time: [5.5768 us 5.6052 us 5.6364 us] thrpt: [709.68 Kelem/s 713.63 Kelem/s 717.25 Kelem/s] change: time: [-93.193% -93.121% -93.052%] (p = 0.00 < 0.05) thrpt: [+1339.2% +1353.6% +1369.1%] Performance has improved. multi-threaded-traps/8 time: [8.6408 us 9.1212 us 9.5438 us] thrpt: [838.24 Kelem/s 877.08 Kelem/s 925.84 Kelem/s] change: time: [-94.754% -94.473% -94.202%] (p = 0.00 < 0.05) thrpt: [+1624.7% +1709.2% +1806.1%] Performance has improved. multi-threaded-traps/16 time: [10.152 us 10.840 us 11.545 us] thrpt: [1.3858 Melem/s 1.4760 Melem/s 1.5761 Melem/s] change: time: [-97.042% -96.823% -96.577%] (p = 0.00 < 0.05) thrpt: [+2821.5% +3048.1% +3281.1%] Performance has improved. Found 1 outliers among 100 measurements (1.00%) 1 (1.00%) high mild many-modules-registered-traps/1 time: [2.6278 us 2.6361 us 2.6447 us] thrpt: [378.11 Kelem/s 379.35 Kelem/s 380.55 Kelem/s] change: time: [-85.311% -85.108% -84.909%] (p = 0.00 < 0.05) thrpt: [+562.65% +571.51% +580.76%] Performance has improved. Found 9 outliers among 100 measurements (9.00%) 3 (3.00%) high mild 6 (6.00%) high severe many-modules-registered-traps/8 time: [2.6294 us 2.6460 us 2.6623 us] thrpt: [3.0049 Melem/s 3.0235 Melem/s 3.0425 Melem/s] change: time: [-85.895% -85.485% -85.022%] (p = 0.00 < 0.05) thrpt: [+567.63% +588.95% +608.95%] Performance has improved. 
Found 8 outliers among 100 measurements (8.00%) 3 (3.00%) high mild 5 (5.00%) high severe many-modules-registered-traps/64 time: [2.6218 us 2.6329 us 2.6452 us] thrpt: [24.195 Melem/s 24.308 Melem/s 24.411 Melem/s] change: time: [-93.629% -93.551% -93.470%] (p = 0.00 < 0.05) thrpt: [+1431.4% +1450.6% +1469.5%] Performance has improved. Found 3 outliers among 100 measurements (3.00%) 3 (3.00%) high mild many-modules-registered-traps/512 time: [2.6569 us 2.6737 us 2.6923 us] thrpt: [190.17 Melem/s 191.50 Melem/s 192.71 Melem/s] change: time: [-99.277% -99.268% -99.260%] (p = 0.00 < 0.05) thrpt: [+13417% +13566% +13731%] Performance has improved. Found 4 outliers among 100 measurements (4.00%) 4 (4.00%) high mild many-modules-registered-traps/4096 time: [2.7258 us 2.7390 us 2.7535 us] thrpt: [1.4876 Gelem/s 1.4955 Gelem/s 1.5027 Gelem/s] change: time: [-99.956% -99.955% -99.955%] (p = 0.00 < 0.05) thrpt: [+221417% +223380% +224881%] Performance has improved. Found 2 outliers among 100 measurements (2.00%) 1 (1.00%) high mild 1 (1.00%) high severe many-stack-frames-traps/1 time: [1.4658 us 1.4719 us 1.4784 us] thrpt: [676.39 Kelem/s 679.38 Kelem/s 682.21 Kelem/s] change: time: [-90.368% -89.947% -89.586%] (p = 0.00 < 0.05) thrpt: [+860.23% +894.72% +938.21%] Performance has improved. Found 8 outliers among 100 measurements (8.00%) 5 (5.00%) high mild 3 (3.00%) high severe many-stack-frames-traps/8 time: [2.4772 us 2.4870 us 2.4973 us] thrpt: [3.2034 Melem/s 3.2167 Melem/s 3.2294 Melem/s] change: time: [-85.550% -85.370% -85.199%] (p = 0.00 < 0.05) thrpt: [+575.65% +583.51% +592.03%] Performance has improved. Found 8 outliers among 100 measurements (8.00%) 4 (4.00%) high mild 4 (4.00%) high severe many-stack-frames-traps/64 time: [10.109 us 10.171 us 10.236 us] thrpt: [6.2525 Melem/s 6.2925 Melem/s 6.3309 Melem/s] change: time: [-78.144% -77.797% -77.336%] (p = 0.00 < 0.05) thrpt: [+341.22% +350.38% +357.55%] Performance has improved. Found 7 outliers among 100 measurements (7.00%) 5 (5.00%) high mild 2 (2.00%) high severe many-stack-frames-traps/512 time: [126.16 us 126.54 us 126.96 us] thrpt: [4.0329 Melem/s 4.0461 Melem/s 4.0583 Melem/s] change: time: [-65.364% -64.933% -64.453%] (p = 0.00 < 0.05) thrpt: [+181.32% +185.17% +188.71%] Performance has improved. Found 4 outliers among 100 measurements (4.00%) 4 (4.00%) high severe ``` </details> <h3>Calls</h3> There is, however, a small regression in raw Wasm-to-host and host-to-Wasm call performance due the new trampolines. It seems to be on the order of about 2-10 nanoseconds per call, depending on the benchmark. I believe this regression is ultimately acceptable because 1. this overhead will be vastly dominated by whatever work a non-nop callee actually does, 2. we will need these trampolines, or something like them, when implementing the Wasm exceptions proposal to do things like translate Wasm's exceptions into Rust's `Result`s, 3. and because the performance improvements to trapping and capturing stack traces are of such a larger magnitude than this call regressions. <details> ``` sync/no-hook/host-to-wasm - typed - nop time: [28.683 ns 28.757 ns 28.844 ns] change: [+16.472% +17.183% +17.904%] (p = 0.00 < 0.05) Performance has regressed. Found 10 outliers among 100 measurements (10.00%) 1 (1.00%) low mild 4 (4.00%) high mild 5 (5.00%) high severe sync/no-hook/host-to-wasm - untyped - nop time: [42.515 ns 42.652 ns 42.841 ns] change: [+12.371% +14.614% +17.462%] (p = 0.00 < 0.05) Performance has regressed. 
Found 11 outliers among 100 measurements (11.00%) 1 (1.00%) high mild 10 (10.00%) high severe sync/no-hook/host-to-wasm - unchecked - nop time: [33.936 ns 34.052 ns 34.179 ns] change: [+25.478% +26.938% +28.369%] (p = 0.00 < 0.05) Performance has regressed. Found 9 outliers among 100 measurements (9.00%) 7 (7.00%) high mild 2 (2.00%) high severe sync/no-hook/host-to-wasm - typed - nop-params-and-results time: [34.290 ns 34.388 ns 34.502 ns] change: [+40.802% +42.706% +44.526%] (p = 0.00 < 0.05) Performance has regressed. Found 13 outliers among 100 measurements (13.00%) 5 (5.00%) high mild 8 (8.00%) high severe sync/no-hook/host-to-wasm - untyped - nop-params-and-results time: [62.546 ns 62.721 ns 62.919 ns] change: [+2.5014% +3.6319% +4.8078%] (p = 0.00 < 0.05) Performance has regressed. Found 12 outliers among 100 measurements (12.00%) 2 (2.00%) high mild 10 (10.00%) high severe sync/no-hook/host-to-wasm - unchecked - nop-params-and-results time: [42.609 ns 42.710 ns 42.831 ns] change: [+20.966% +22.282% +23.475%] (p = 0.00 < 0.05) Performance has regressed. Found 11 outliers among 100 measurements (11.00%) 4 (4.00%) high mild 7 (7.00%) high severe sync/hook-sync/host-to-wasm - typed - nop time: [29.546 ns 29.675 ns 29.818 ns] change: [+20.693% +21.794% +22.836%] (p = 0.00 < 0.05) Performance has regressed. Found 5 outliers among 100 measurements (5.00%) 3 (3.00%) high mild 2 (2.00%) high severe sync/hook-sync/host-to-wasm - untyped - nop time: [45.448 ns 45.699 ns 45.961 ns] change: [+17.204% +18.514% +19.590%] (p = 0.00 < 0.05) Performance has regressed. Found 14 outliers among 100 measurements (14.00%) 4 (4.00%) high mild 10 (10.00%) high severe sync/hook-sync/host-to-wasm - unchecked - nop time: [34.334 ns 34.437 ns 34.558 ns] change: [+23.225% +24.477% +25.886%] (p = 0.00 < 0.05) Performance has regressed. Found 12 outliers among 100 measurements (12.00%) 5 (5.00%) high mild 7 (7.00%) high severe sync/hook-sync/host-to-wasm - typed - nop-params-and-results time: [36.594 ns 36.763 ns 36.974 ns] change: [+41.967% +47.261% +52.086%] (p = 0.00 < 0.05) Performance has regressed. Found 12 outliers among 100 measurements (12.00%) 3 (3.00%) high mild 9 (9.00%) high severe sync/hook-sync/host-to-wasm - untyped - nop-params-and-results time: [63.541 ns 63.831 ns 64.194 ns] change: [-4.4337% -0.6855% +2.7134%] (p = 0.73 > 0.05) No change in performance detected. Found 8 outliers among 100 measurements (8.00%) 6 (6.00%) high mild 2 (2.00%) high severe sync/hook-sync/host-to-wasm - unchecked - nop-params-and-results time: [43.968 ns 44.169 ns 44.437 ns] change: [+18.772% +21.802% +24.623%] (p = 0.00 < 0.05) Performance has regressed. Found 15 outliers among 100 measurements (15.00%) 3 (3.00%) high mild 12 (12.00%) high severe async/no-hook/host-to-wasm - typed - nop time: [4.9612 us 4.9743 us 4.9889 us] change: [+9.9493% +11.911% +13.502%] (p = 0.00 < 0.05) Performance has regressed. Found 10 outliers among 100 measurements (10.00%) 6 (6.00%) high mild 4 (4.00%) high severe async/no-hook/host-to-wasm - untyped - nop time: [5.0030 us 5.0211 us 5.0439 us] change: [+10.841% +11.873% +12.977%] (p = 0.00 < 0.05) Performance has regressed. Found 10 outliers among 100 measurements (10.00%) 3 (3.00%) high mild 7 (7.00%) high severe async/no-hook/host-to-wasm - typed - nop-params-and-results time: [4.9273 us 4.9468 us 4.9700 us] change: [+4.7381% +6.8445% +8.8238%] (p = 0.00 < 0.05) Performance has regressed. 
[remaining criterion output trimmed for readability: across the async and async-pool configurations, host-to-wasm calls regress roughly 4–14%; for wasm-to-host calls, typed "nop" calls regress roughly 20–45%, unchecked "nop" calls regress roughly 70–92%, untyped and async-typed calls regress roughly 2–12%, untyped "nop-params-and-results" calls show no significant change, and unchecked "nop-params-and-results" calls improve roughly 25–31%]
```
</details>

* Add s390x support for frame pointer-based stack walking (a simplified frame-pointer walk is sketched after this entry)
* wasmtime: Allow `Caller::get_export` to get all exports
* fuzzing: Add a fuzz target to check that our stack traces are correct. We generate Wasm modules that keep track of their own stack as they call and return between functions, and then we periodically check that if the host captures a backtrace, it matches what the Wasm module has recorded.
* Remove VM offsets for `VMHostFuncContext` since it isn't used by JIT code
* Add doc comment with stack walking implementation notes
* Document the extra state that can be passed to `wasmtime_runtime::Backtrace` methods
* Add extensive comments for stack walking function
* Factor architecture-specific bits of stack walking out into modules
* Initialize store-related fields in a vmctx to null when there is no store yet, rather than leaving them as uninitialized data
* Use `set_callee` instead of manually setting the vmctx field
* Use a more informative compile error message for unsupported architectures
* Document unsafety of `prepare_host_to_wasm_trampoline`
* Use `bti c` instead of `hint #34` in inline aarch64 assembly
* Remove outdated TODO comment
* Remove setting of `last_wasm_exit_fp` in `set_jit_trap`; this is no longer needed as the value is plumbed through to the backtrace code directly now
* Only set the stack limit once, in the face of re-entrancy into Wasm
* Add comments for s390x-specific stack walking bits
* Use the helper macro for all libcalls. If we forget to use it and then trigger a GC from the libcall, we could miss stack frames when walking the stack, fail to find live GC refs, and then get use-after-free bugs. It is much less risky to always use the helper macro that takes care of all of that for us.
* Use the `asm_sym!` macro in Wasm-to-libcall trampolines; this macro handles the macOS-specific underscore prefix for us
* wasmtime: add size and align to `externref` assertion error message
* Extend the `stacks` fuzzer to have host frames in between Wasm frames, so we get one or more contiguous sequences of Wasm frames on the stack instead of exactly one
* Add documentation for aarch64-specific backtrace helpers
* Clarify that we only support little-endian aarch64 in trampoline comment
* Use `.machine z13` in the s390x assembly file, since our CI machines have assemblers too old for `.machine z14`; this should be fine since these trampolines don't use anything introduced in z14
* Fix aarch64 build
* Fix macOS build
* Document the `asm_sym!` macro
* Add windows support to the `wasmtime-asm-macros` crate
* Add windows support to host<--->Wasm trampolines
* Fix trap handler build on windows
* Run `rustfmt` on s390x trampoline source file
* Temporarily disable some assertions about a trap's backtrace in the component model tests; follow up to re-enable this and fix the associated issue: https://github.com/bytecodealliance/wasmtime/issues/4535
* Refactor libcall definitions with fewer macros: the `libcall!` macro now uses `foreach_builtin_function!` to define all of the trampolines, and the macro surrounding each individual libcall is no longer necessary
* Use `VMOpaqueContext::from_vm_host_func_context` in `VMHostFuncContext::new`
* Move the `backtrace` module to be a submodule of `traphandlers`, which avoids making some things `pub(crate)` in `traphandlers` that really shouldn't be
* Fix macOS aarch64 build
* Use "i64" instead of "word" in aarch64-specific file
* Save/restore entry SP and exit FP/return pointer in the face of panicking imported host functions; also clean up assertions surrounding our saved entry/exit registers
* Put "typed" vs "untyped" in the same position of call benchmark names, regardless of whether we are doing wasm-to-host or host-to-wasm
* Fix stacks test case generator build for new `wasm-encoder`
* Fix build for s390x
* Expand libcalls in s390x asm
* Disable more parts of component tests now that backtrace assertions are a bit tighter
* Remove assertion that can maybe fail on s390x

Co-authored-by: Ulrich Weigand <ulrich.weigand@de.ibm.com>
Co-authored-by: Alex Crichton <alex@alexcrichton.com> |
2 years ago |
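The frame pointer-based stack walking this commit describes can be sketched in a few lines of Rust. This is only a simplified illustration, not Wasmtime's actual `Backtrace` implementation: it assumes an architecture where the caller's frame pointer is saved at `*fp` and the return address one word above it (as on x86-64 and aarch64), and the `exit_fp`/`trampoline_fp` names are hypothetical.

```rust
/// Minimal sketch of frame-pointer-based stack walking, under the
/// assumptions noted above. `exit_fp` is the frame pointer recorded when
/// Wasm last exited to the host; `trampoline_fp` is the frame pointer of
/// the host-to-wasm trampoline that entered Wasm, which terminates the walk.
unsafe fn walk_wasm_frames(exit_fp: usize, trampoline_fp: usize, mut visit: impl FnMut(usize)) {
    let mut fp = exit_fp;
    while fp != trampoline_fp {
        // The return address is stored one word above the saved frame pointer.
        let return_addr = unsafe { *((fp + 8) as *const usize) };
        visit(return_addr);
        // Follow the chain: the caller's frame pointer was saved at `*fp`.
        let next = unsafe { *(fp as *const usize) };
        // Frame pointers must strictly move up the stack; bail out otherwise
        // rather than looping forever on a corrupted chain.
        if next <= fp {
            break;
        }
        fp = next;
    }
}
```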
Alex Crichton |
32979b2714
|
Implement flags in fused adapters (#4549)
This implements the `flags` type for fused adapters converting values between modules. The main logic handles the variable byte size of flags together with the masking that ignores unrelated bits when values pass through the canonical ABI (a small masking sketch follows this entry). |
2 years ago |
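As a rough illustration of the masking mentioned above: in the canonical ABI, flags are packed into one or more 32-bit integers depending on how many flags are defined, and bits beyond the defined flags must be ignored. The sketch below is a hedged outline of that rule, not the generated adapter code.

```rust
/// Number of 32-bit words needed to represent `count` flags.
fn flag_words(count: usize) -> usize {
    (count + 31) / 32
}

/// Mask for the last (possibly partial) 32-bit word of a flags value,
/// clearing any bits that don't correspond to a defined flag.
fn last_word_mask(count: usize) -> u32 {
    match count % 32 {
        0 => u32::MAX, // the final word is fully used
        n => (1u32 << n) - 1,
    }
}

fn main() {
    // A hypothetical flags type with 5 members occupies one word, and only
    // the low 5 bits survive a trip through an adapter.
    assert_eq!(flag_words(5), 1);
    assert_eq!(0b1110_0110_1010 & last_word_mask(5), 0b01010);
}
```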
Alex Crichton |
ce7bbef24d
|
Implement other variant-like types in adapter fusion (#4547)
This commit fills out the adapter fusion compiler for the `union`, `enum`, `option`, and `result` types. The preexisting support for `variant` types was refactored slightly so that it extends to all of these other types, and they all now call into the same common translation code (a sizing sketch follows this entry). |
2 years ago |
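The variant-like types named above (`union`, `enum`, `option`, `result`) all share the same basic shape in the canonical ABI: a discriminant whose byte width depends on the number of cases, followed by an optional payload. A hedged sketch of that shared sizing logic, not Wasmtime's internal representation:

```rust
/// Byte width of a variant discriminant, chosen from the number of cases.
/// The exact thresholds here are an assumption of this sketch; consult the
/// canonical ABI specification for the authoritative rule.
fn discriminant_size(cases: usize) -> usize {
    match cases {
        1..=0x100 => 1,
        0x101..=0x10000 => 2,
        _ => 4,
    }
}

fn main() {
    assert_eq!(discriminant_size(2), 1);   // option/result-like shapes
    assert_eq!(discriminant_size(300), 2); // a large enum
}
```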
Alex Crichton |
e1148e43be
|
Implement `char` type in adapter fusion (#4544)
This commit implements the translation of `char` which validates that it's in the valid range of unicode scalar values. The precise validation here is lifted from LLVM in the hopes that it's probably better than whatever I would concoct by hand. |
2 years ago |
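Validating a `char` amounts to checking that the 32-bit value is a Unicode scalar value, i.e. within `0..=0x10FFFF` and outside the surrogate range `0xD800..=0xDFFF`. A minimal Rust sketch of that check (Rust's standard library encodes the same rule in `char::from_u32`):

```rust
/// True when `v` is a Unicode scalar value and therefore a valid `char`.
fn is_unicode_scalar_value(v: u32) -> bool {
    v < 0xD800 || (0xE000..=0x10FFFF).contains(&v)
}

fn main() {
    assert!(is_unicode_scalar_value('é' as u32));
    assert!(!is_unicode_scalar_value(0xD800));   // surrogate
    assert!(!is_unicode_scalar_value(0x110000)); // beyond Unicode
    // The same answer via the standard library:
    assert!(char::from_u32(0xD800).is_none());
}
```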
Alex Crichton |
174b60dcf7
|
Add `*.wast` support for invoking components (#4526)
This commit builds on bytecodealliance/wasm-tools#690 to add support for executing component-model functions when running `*.wast` files. This support is all built on #4442, since functions are invoked through a "dynamic" API. Right now the testing and integration are fairly crude, but I'm hoping that we can improve them over time as necessary. For now this should provide a more convenient syntax for unit tests and the like. |
2 years ago |
Alex Crichton |
285bc5ce24
|
Implement variant translation in fused adapters (#4534)
* Implement variant translation in fused adapters

  This commit implements the most general case of variants for fused adapter trampolines. Additionally a number of other primitive types are filled out here to assist with testing variants. The implementation was internally relatively straightforward given the shape of variants, but there's room for future optimization as necessary, especially around converting locals to various types.

  This commit also introduces a "one off" fuzzer for adapters to ensure that the generated adapter is valid. I hope to extend this fuzz generator as more types are implemented to assist with various corner cases that might arise. For now the fuzzer simply tests that the output wasm module is valid, not that it actually executes correctly. I hope to integrate with a fuzzer along the lines of #4307 one day to test the run-time correctness of the generated adapters as well, at which point this fuzzer would become obsolete.

  Finally this commit also fixes an issue with `u8` translation where the upper bits weren't zeroed out and values were passed raw across modules. Instead, smaller-than-32-bit types now all mask out their upper bits or sign-extend as appropriate for their unsigned/signed variants (a sketch of this follows this entry).

* Fuzz memory64 in the new trampoline fuzzer

  Currently memory64 isn't supported elsewhere in the component model implementation of Wasmtime, but the trampoline compiler seems as good a place as any to ensure that it at least works in isolation. This plumbs fuzz input through into a `memory64` boolean which gets fed into compilation. Some miscellaneous bugs were fixed as a result to ensure that memory64 trampolines all validate correctly.

* Tweak manifest for doc build |
2 years ago |
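The `u8` fix described above — masking or sign-extending values smaller than 32 bits before they cross a module boundary — can be illustrated with plain Rust integer conversions. This is a sketch of the semantics only, not of the generated core-wasm instructions:

```rust
/// Zero out everything above the low 8 bits, as an adapter must do for `u8`.
fn mask_u8(raw: u32) -> u32 {
    raw & 0xff
}

/// Sign-extend the low 8 bits, as an adapter must do for `s8`.
fn extend_s8(raw: u32) -> u32 {
    raw as u8 as i8 as i32 as u32
}

fn main() {
    // A callee that left garbage in the upper bits of a "u8" slot must be
    // seen by the other module as 0xFF...
    assert_eq!(mask_u8(0x1_00FF), 0xFF);
    // ...and as -1 when the declared type is signed.
    assert_eq!(extend_s8(0x1_00FF), 0xFFFF_FFFF);
}
```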
Alex Crichton |
1321c234e5
|
Remove dependency on `more-asserts` (#4408)
* Remove dependency on `more-asserts`

  In my recent adventures to do a bit of gardening on our dependencies I noticed that there's a new major version for the `more-asserts` crate. Instead of updating to this though I've opted to instead remove the dependency since I don't think we heavily lean on this crate and otherwise one-off prints are probably sufficient to avoid the need for pulling in a whole crate for this.

* Remove exemption for `more-asserts` |
2 years ago |
Alex Crichton |
97894bc65e
|
Add initial support for fused adapter trampolines (#4501)
* Add initial support for fused adapter trampolines

  This commit lands a significant new piece of functionality in Wasmtime's implementation of the component model: fused adapter trampolines. Within a component, core wasm modules can communicate with each other by having their exports `canon lift`'d and then `canon lower`'d into a different component, which signifies that two components are communicating through a statically known interface via the canonical ABI. Previously Wasmtime was able to identify that this communication was happening but simply panicked with `unimplemented!` upon seeing it. This commit is the beginning of replacing that panic with an actual implementation.

  The implementation route chosen here for fused adapters is to use a WebAssembly module itself as the implementation. This means that, at compile time of a component, Wasmtime generates core WebAssembly modules which are then recursively compiled within Wasmtime as well. The choice to use WebAssembly itself as the implementation of fused adapters stems from a few motivations:

  * This does not represent a significant increase in the "trusted compiler base" of Wasmtime. Getting the Wasm -> CLIF translation correct once is hard enough, much less for an entirely different IR to CLIF. By generating WebAssembly no new interactions with Cranelift are added, which drastically reduces the possibilities for mistakes.
  * Using WebAssembly means that component adapters are insulated from miscompilations and mistakes. If something goes wrong, the WebAssembly specification defines precisely how it goes wrong and what happens as a result. This means that the "blast zone" for a wrong adapter is the component instance, not the entire host. Accesses to linear memory are guaranteed to be in-bounds or otherwise handled via well-defined traps.
  * A fully finished fused adapter compiler is expected to be a significant and quite complex component of Wasmtime. Functionality along these lines is expected to be needed for Web-based polyfills of the component model, and by using core WebAssembly there is an opportunity to share code between Wasmtime and those polyfills.
  * Finally, the runtime machinery for managing WebAssembly modules is already implemented and easy to integrate with, so representing fused adapters with WebAssembly requires very little extra runtime support for instantiating and managing a component.

  The compiler added in this commit is dubbed Wasmtime's Fused Adapter Compiler of Trampolines (FACT) because who doesn't like deriving a name from an acronym. Currently the trampoline compiler is limited in its support for interface types and only supports a few primitives. I plan on filing future PRs to flesh out the support here for all the variants of `InterfaceType`. For now this PR is primarily focused on all of the other infrastructure needed for the addition of a trampoline compiler.

  With the choice to use core WebAssembly to implement fused adapters, adapters need to be inserted into a module. Unfortunately adapters cannot all go into a single WebAssembly module because adapters themselves have dependencies which may be provided transitively through instances that were instantiated with other adapters. This means that a significant chunk of this PR (`adapt.rs`) is dedicated to determining precisely which adapters go into precisely which adapter modules. This partitioning process attempts to make modules as large as it can to cut down on core wasm instantiations, but it is likely not optimal as it's just a simple heuristic today (a simplified sketch of such a heuristic follows this entry).

  With all of this added together it's now possible to start writing `*.wast` tests that internally have adapted modules communicating with one another. A `fused.wast` test suite was added as part of this PR as the beginning of tests for the fused adapter compiler. Currently this primarily tests various topologies of adapters along with direct/indirect modes, and it will grow many more tests over time as more types are supported.

  Overall I'm not 100% satisfied with the testing story of this PR. When a test fails it's very difficult to debug, since everything is written in the WebAssembly text format, meaning there are no "conveniences" to print out the state of the world when things go wrong. I think this will become even more apparent as more tests are written for more types in subsequent PRs. At this time, though, I know of no better alternative other than leaning pretty heavily on fuzz-testing to ensure this is all exercised.

* Fix an unused field warning
* Fix tests in `wasmtime-runtime`
* Add some more tests for compiled trampolines
* Remap exports when injecting adapters: the exports of a component were accidentally left unmapped, which meant that they indexed the instance indexes from before adapter module insertion
* Fix typo
* Rebase conflicts |
2 years ago |
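The partitioning constraint described above — an adapter cannot live in a module that would be instantiated before the instances it depends on exist — can be approximated with a very simple grouping pass. The sketch below is a hypothetical illustration of such a heuristic, not the algorithm in `adapt.rs`; all names are invented.

```rust
use std::collections::BTreeMap;

/// A fused adapter plus the index of the instantiation step that first
/// imports it.
struct Adapter {
    name: &'static str,
    first_use: usize,
}

/// Trivial partitioning: adapters first needed by the same instantiation
/// step share one adapter module, instantiated just before that step. A
/// real partitioner also has to check that every dependency of the module
/// already exists at that point and split further when it doesn't.
fn partition(adapters: &[Adapter]) -> BTreeMap<usize, Vec<&'static str>> {
    let mut modules: BTreeMap<usize, Vec<&'static str>> = BTreeMap::new();
    for adapter in adapters {
        modules.entry(adapter.first_use).or_default().push(adapter.name);
    }
    modules
}

fn main() {
    let adapters = [
        Adapter { name: "a", first_use: 1 },
        Adapter { name: "b", first_use: 1 },
        Adapter { name: "c", first_use: 3 },
    ];
    // Two adapter modules: {a, b} before step 1 and {c} before step 3.
    assert_eq!(partition(&adapters).len(), 2);
}
```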
Peter Huene |
80bd650bab
|
Components: ignore type exports (for now). (#4488)
* Components: ignore type exports (for now).

  This commit updates component translation to ignore type exports for now. Components generated with `wit-component` contain type exports to give names to types used within the component's functions based on the component's wit definition. The intention is to allow bindings to be generated with meaningful names directly from a component. In the future, type exports (and imports) may be used for more than this purpose to support things like resource types. This commit effectively ignores type exports when translating the component as they are not useful for executing a component at this time. Closes #4415.

* Code review feedback. |
2 years ago |
Alex Crichton |
3032e3fcfb
|
Track type information during component translation (#4448)
This commit augments the current translation phase of components with extra machinery to track the type information of component items such as instances, components, and functions. The end goal of this commit is to enable the `Lower` instruction to know the type of the component function being lowered. Currently during the inlining pass where component fusion is detected the type of the lifted function is known, but to implement fusion entirely the type of the lowered function must be known. Note that these two types are expected to be different to allow for the subtyping rules specified by the component model. For now nothing is actually done with this information other than noting its presence in the face of a lifted-then-lowered function. My hope though was to split this out for a separate review to avoid making a future component-adapter-compiler-containing-PR too large. |
2 years ago |
Alex Crichton |
9ae060a12a
|
Update some dependency versions used by Wasmtime (#4405)
No major motivation here, mostly just dependency gardening. |
2 years ago |
Alex Crichton |
038383dc42
|
Implement support for outer core type aliases (#4385)
Fill in the gaps of the implementation left after #4380. |
2 years ago |
Nick Fitzgerald |
7000b0a4cf
|
wasmtime: Add criterion micro benchmarks for traps (#4398)
* wasmtime: Rename host->wasm trampolines. As we introduce new types of trampolines, having clear names for our existing trampolines will be helpful.
* Fix typo in docs for `VMCOMPONENT_MAGIC`
* wasmtime: Add criterion micro benchmarks for traps (a hedged sketch of such a benchmark follows this entry) |
2 years ago |
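A criterion micro benchmark for trap handling could look roughly like the following. This is a hedged sketch rather than the benchmark added by the PR, and the exact `wasmtime` call signatures have shifted across versions.

```rust
use criterion::{black_box, criterion_group, criterion_main, Criterion};
use wasmtime::{Engine, Instance, Module, Store};

fn bench_traps(c: &mut Criterion) {
    let engine = Engine::default();
    // A module whose exported function traps immediately.
    let module =
        Module::new(&engine, r#"(module (func (export "trap") unreachable))"#).unwrap();
    let mut store = Store::new(&engine, ());
    let instance = Instance::new(&mut store, &module, &[]).unwrap();
    let func = instance.get_func(&mut store, "trap").unwrap();

    c.bench_function("call a trapping export", |b| {
        b.iter(|| {
            // Every call traps, so this measures the cost of raising and
            // capturing the resulting error (including its backtrace).
            let err = func.call(&mut store, &[], &mut []).unwrap_err();
            black_box(err);
        })
    });
}

criterion_group!(benches, bench_traps);
criterion_main!(benches);
```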
Alex Crichton |
41ba851a95
|
Bump versions of wasm-tools crates (#4380)
* Bump versions of wasm-tools crates. Note that this leaves a new component-model feature, outer type aliases for core wasm types, unimplemented for now.
* Move to crates.io-based versions of tools |
2 years ago |
Alex Crichton |
76a2545a7f
|
Implement nested instance exports for components (#4364)
This commit adds support to Wasmtime for components which themselves export instances. The support here adds new APIs for how instance exports are accessed in the embedding API. For now this is mostly just a first-pass where the API is somewhat confusing and has a lot of lifetimes. I'm hoping that over time we can figure out how to simplify this but for now it should at least be expressive enough for exploring the exports of an instance. |
2 years ago |
wasmtime-publish |
7c428bbd62
|
Bump Wasmtime to 0.40.0 (#4378)
Co-authored-by: Wasmtime Publish <wasmtime-publish@users.noreply.github.com> |
2 years ago |
Alex Crichton |
f0278c5db7
|
Implement `canon lower` of a `canon lift` function in the same component (#4347)
* Implement `canon lower` of a `canon lift` function in the same component

  This commit implements the "degenerate" logic for a function within a component that is lifted and then immediately lowered again. In this situation the lowered function will immediately generate a trap and doesn't need to implement anything else. The implementation in this commit is somewhat heavyweight, but I think it's justified more by future additions to the component model than by what exactly is here right now. It's not expected that this "always trap" functionality will be used all that often, since it would generally mean a buggy component, but the functionality plumbed through here should be useful for implementing component-to-component adapter trampolines.

  Specifically this commit implements a strategy where the `canon.lower`'d function is generated by Cranelift and simply contains a single trap instruction, doing nothing else when called. The main complexity comes from juggling all the data associated with these functions, primarily plumbing the traps into the `ModuleRegistry` to ensure that the global `is_wasm_trap_pc` function returns `true` and that, at runtime, information about the trap is readily available (e.g. translating the trapping pc to a `TrapCode`).

* Fix non-component build
* Fix some offset calculations
* Only create one "always trap" per signature, using an internal map to deduplicate during compilation |
2 years ago |
Alex Crichton |
eef1758d19
|
Implement a first-class error for reexported component functions (#4348)
Currently I don't know how we can reasonably implement this. Given all the signatures of how we call functions and how functions are called on the host, there's no feasible way that I know of to hook these two up "seamlessly". This means that a component which reexports an imported function can't be run in Wasmtime. One of the main reasons for this is that when calling a component function Wasmtime wants to lower arguments first and then have them lifted when the host is called. With a reexport, though, there's not actually anything to lower into, so we'd need something similar to a side table or maybe a linear memory, and that seems like it'd get quite complicated quite quickly for not all that much benefit. As such, for now this simply returns a first-class error (rather than the current panic) in situations like this. |
2 years ago |
Alex Crichton |
c1b3962f7b
|
Implement lowered-then-lifted functions (#4327)
* Implement lowered-then-lifted functions

  This commit is a few features bundled into one, culminating in the implementation of lowered-then-lifted functions for the component model. It's probably not going to be used all that often but this is possible within a valid component so Wasmtime needs to do something relatively reasonable. The main things implemented in this commit are:

  * Component instances are now assigned a `RuntimeComponentInstanceIndex` to differentiate each one. This will be used in the future to detect fusion (one instance lowering a function from another instance). For now it's used to allocate separate `VMComponentFlags` for each internal component instance.
  * The `CoreExport<FuncIndex>` of lowered functions was changed to a `CoreDef` since technically a lowered function can use another lowered function as the callee. This ended up being not too difficult to plumb through as everything else was already in place.
  * A need arose to compile host-to-wasm trampolines which weren't already present. Currently wasm in a component is always entered through a host-to-wasm trampoline but core wasm modules are the source of all the trampolines. In the case of a lowered-then-lifted function there may not actually be any core wasm modules, so component objects now contain necessary trampolines not otherwise provided by the core wasm objects. This feature required splitting a new function into the `Compiler` trait for creating a host-to-wasm trampoline. After doing this core wasm compilation was also updated to leverage this which further enabled compiling trampolines in parallel as opposed to the previous synchronous compilation.

* Review comments |
2 years ago |
Alex Crichton |
3339dd1f01
|
Implement the post-return attribute (#4297)
This commit implements the `post-return` feature of the canonical ABI in the component model. This attribute is an optionally-specified function which is to be executed after the return value has been processed by the caller to optionally clean-up the return value. This enables, for example, returning an allocated string and the host then knows how to clean it up to prevent memory leaks in the original module. The API exposed in this PR changes the prior `TypedFunc::call` API in behavior but not in its signature. Previously the `TypedFunc::call` method would set the `may_enter` flag on the way out, but now that operation is deferred until a new `TypedFunc::post_return` method is called. This means that once a method on an instance is invoked then nothing else can be done on the instance until the `post_return` method is called. Note that the method must be called irrespective of whether the `post-return` canonical ABI option was specified or not. Internally wasm will be invoked if necessary. This is a pretty wonky and unergonomic API to work with. For now I couldn't think of a better alternative that improved on the ergonomics. In the theory that the raw Wasmtime bindings for a component may not be used all that heavily (instead `wit-bindgen` would largely be used) I'm hoping that this isn't too much of an issue in the future. cc #4185 |
2 years ago |
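In practice the flow described above looks roughly like the following from the embedder's perspective. This is a hedged sketch: it assumes a component exporting a string-returning function named `"greet"` (a hypothetical name), and exact signatures vary by Wasmtime version.

```rust
use wasmtime::component::{Component, Linker};
use wasmtime::{Config, Engine, Store};

fn call_twice(component_bytes: &[u8]) -> anyhow::Result<()> {
    let mut config = Config::new();
    config.wasm_component_model(true);
    let engine = Engine::new(&config)?;
    let component = Component::new(&engine, component_bytes)?;
    let mut store = Store::new(&engine, ());
    let linker = Linker::new(&engine);
    let instance = linker.instantiate(&mut store, &component)?;

    // Hypothetical export returning a string the guest allocated.
    let greet = instance.get_typed_func::<(), (String,)>(&mut store, "greet")?;

    let (msg,) = greet.call(&mut store, ())?;
    println!("{msg}");
    // The instance stays "locked" until post_return runs, even when the
    // component didn't declare a post-return function.
    greet.post_return(&mut store)?;

    // Only now may the same (or another) export be invoked again.
    let (_again,) = greet.call(&mut store, ())?;
    greet.post_return(&mut store)?;
    Ok(())
}
```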
Alex Crichton |
651f40855f
|
Add support for nested components (#4285)
* Add support for nested components

  This commit is an implementation of a number of features of the component model including:

  * Defining nested components
  * Outer aliases to components and modules
  * Instantiating nested components

  The implementation here is intended to be a foundational pillar of Wasmtime's component model support since recursion and nested components are the bread-and-butter of the component model. At a high level the intention for the component model implementation in Wasmtime has long been that the recursive nature of components is "erased" at compile time to something that's more optimized and efficient to process. This commit ended up exemplifying this quite well, where the vast majority of the internal changes are in the "compilation" phase of a component rather than the runtime instantiation phase. The support in the `wasmtime` crate, the runtime instantiation support, only had minor updates here while the internals of translation have seen heavy updates.

  The `translate` module was greatly refactored in this commit. Previously it would, as a component is parsed, create a final `Component` to hand off to trampoline compilation and get persisted at runtime. Instead, it is now a thin layer over `wasmparser` which simply records a list of `LocalInitializer` entries for how to instantiate the component, and its index spaces are built. This internal representation of the instantiation of a component is intentionally close to the binary format. Instead of performing dataflow legwork, the `translate` phase of a component is now responsible for two primary tasks:

  1. All components and modules are discovered within a component. They're assigned `Static{Component,Module}Index` depending on where they're found, and a `{Module,}Translation` is prepared for each one. This "flattens" the recursive structure of the binary into an indexed list processable later.
  2. The lexical scope of components is managed here to implement outer module and component aliases. This is a significant design consideration because, when closing over an outer component or module, that item may actually be imported or something like the result of a previous instantiation. This means that the capture of modules and components is both a lexical concern as well as a runtime concern. The "runtime" bits are handled in the next phase of compilation.

  The next and currently final phase of compilation is a new pass to which much of the historical code in `translate.rs` has been moved (but heavily refactored). The goal of compilation is to produce one "flat" list of initializers for a component (as happened prior to this PR), and to achieve this an "inliner" phase runs through the instantiation process at compile time to produce that list of initializers. This `inline` module is the main addition in this PR and is now the workhorse for dataflow analysis and tracking what's actually referring to what. During the `inline` phase the local initializers recorded in the `translate` phase are processed, in sequence, to instantiate a component. Definitions of items are tracked back to their root definition, which allows seeing across instantiation-argument boundaries and the like. Handling "upvars" for component outer aliases is handled in the `inline` phase as well by creating state for a component whenever a component is defined, as was recorded during the `translate` phase.

  Finally this phase is chiefly responsible for doing all the string-based name resolution it can at compile time. This means that at runtime no string maps will need to be consulted for item exports and such. The final result of inlining is a list of "global initializers", a flat list processed during instantiation time. These are almost identical to the initializers that were processed prior to this PR. There are certainly still more gaps of the component model to implement, but this should be a major leg up in terms of functionality that Wasmtime implements. This commit, however, leaves behind a "hole" which is not intended to be filled in at this time, namely importing and exporting components at the "root" level from and to the host. This is tracked and explained in more detail as part of #4283. cc #4185 as this completes a number of items there.

* Tweak code to work on stable without warning
* Review comments |
2 years ago |
Pure White |
258dc9de42
|
fix(wasmtime):`Config` methods should be idempotent (#4252)
This commit refactors `Config` to use a separate `CompilerConfig` field instead of operating on `CompilerBuilder` directly, making all of its methods idempotent (a sketch of this pattern follows this entry). Fixes #4189 |
2 years ago |
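The pattern this commit describes — recording settings in a plain data structure and only applying them to the compiler builder once — can be sketched generically. The names below are hypothetical; this is not the actual `CompilerConfig` type.

```rust
use std::collections::HashMap;

/// Settings are accumulated here instead of being pushed into a builder
/// immediately, so calling the same setter twice simply overwrites the
/// previous value rather than configuring the builder twice.
#[derive(Default, Clone)]
struct CompilerSettings {
    flags: HashMap<String, String>,
}

#[derive(Default)]
struct Config {
    compiler: CompilerSettings,
}

impl Config {
    /// Idempotent: repeated calls leave exactly one entry per flag name.
    fn cranelift_flag(&mut self, name: &str, value: &str) -> &mut Self {
        self.compiler.flags.insert(name.to_string(), value.to_string());
        self
    }

    /// The builder is only constructed and populated once, at engine-build
    /// time, from the final snapshot of settings.
    fn build(&self) -> Vec<(String, String)> {
        self.compiler.flags.iter().map(|(k, v)| (k.clone(), v.clone())).collect()
    }
}

fn main() {
    let mut config = Config::default();
    config.cranelift_flag("opt_level", "speed");
    config.cranelift_flag("opt_level", "speed"); // second call is a harmless overwrite
    assert_eq!(config.build().len(), 1);
}
```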
Andrew Brown |
0dcda643ea
|
runtime: vmoffsets must be checked in reverse order (#4253)
When adding shared memory, memories owned by the module were added to an `owned_memories` array placed immediately after the `defined_memories` array. When checking the size of each array with `region_sizes`, the sizes of `defined_memories` and `owned_memories` were checked in that order, but `region_sizes` iterates through the fields in reverse order. This change reverses the field order to fix the associated fuzz bug. |
2 years ago |
Alex Crichton |
7d7ddceb17
|
Update wasm-tools crates (#4246)
This commit updates the wasm-tools family of crates, notably pulling in the refactorings and updates from bytecodealliance/wasm-tools#621 for the latest iteration of the component model. This commit additionally updates all support for the component model for these changes, notably:

* Many bits and pieces of type information were refactored. Many `FooTypeIndex` namings are now `TypeFooIndex`. Additionally there is now `TypeIndex` as well as `ComponentTypeIndex` for the two type index spaces in a component.
* A number of new sections are now processed to handle the core and component variants.
* Internal maps were split, such as the `funcs` map into `component_funcs` and `funcs` (same for `instances`).
* Canonical options are now processed individually instead of in one bulk `into` definition.
* All text syntax used in tests was updated to the new syntax.

Overall this was not a major update to the internals of handling the component model in Wasmtime; instead it was mostly a surface-level refactoring to make sure that everything lines up with the new binary format for components. |
2 years ago |
Andrew Brown |
2b52f47b83
|
Add shared memories (#4187)
* Add shared memories

  This change adds the ability to use shared memories in Wasmtime when the [threads proposal] is enabled. Shared memories are annotated as `shared` in the WebAssembly syntax, e.g., `(memory 1 1 shared)`, and are protected from concurrent access during `memory.size` and `memory.grow`.

  [threads proposal]: https://github.com/WebAssembly/threads/blob/master/proposals/threads/Overview.md

  In order to implement this in Wasmtime, there are two main cases to cover:

  - a program may simply create a shared memory and possibly export it; this means that Wasmtime itself must be able to create shared memories
  - a user may create a shared memory externally and pass it in as an import during instantiation; this is the case when the program contains code like `(import "env" "memory" (memory 1 1 shared))`--this case is handled by a new Wasmtime API type--`SharedMemory`

  Because of the first case, this change allows any of the current memory-creation mechanisms to work as-is. Wasmtime can still create either static or dynamic memories in either on-demand or pooling modes, and any of these memories can be considered shared. When shared, the `Memory` runtime container will lock appropriately during `memory.size` and `memory.grow` operations; since all memories use this container, it is an ideal place for implementing the locking once and once only.

  The second case is covered by the new `SharedMemory` structure (a usage sketch follows this entry). It uses the same `Mmap` allocation under the hood as non-shared memories, but allows the user to perform the allocation externally to Wasmtime and share the memory across threads (via an `Arc`). The pointer address to the actual memory is carefully wired through and owned by the `SharedMemory` structure itself. This means that there are differing views of where to access the pointer (i.e., `VMMemoryDefinition`): for owned memories (the default), the `VMMemoryDefinition` is stored directly by the `VMContext`; in the `SharedMemory` case, however, the `VMContext` must point to this separate structure.

  To ensure that the `VMContext` can always point to the correct `VMMemoryDefinition`, this change alters the `VMContext` structure. Since a `SharedMemory` owns its own `VMMemoryDefinition`, the `defined_memories` table in the `VMContext` becomes a sequence of pointers--in the shared memory case, they point to the `VMMemoryDefinition` owned by the `SharedMemory`, and in the owned memory case (i.e., not shared) they point to `VMMemoryDefinition`s stored in a new table, `owned_memories`.

  This change adds an additional indirection (through the `*mut VMMemoryDefinition` pointer) that could add overhead. Using an imported memory as a proxy, we measured a 1-3% overhead of this approach on the `pulldown-cmark` benchmark. To avoid this, Cranelift-generated code will special-case the owned memory access (i.e., load a pointer directly to the `owned_memories` entry) for `memory.size` so that only shared memories (and imported memories, as before) incur the indirection cost.

* review: remove thread feature check
* review: swap wasmtime-types dependency for existing wasmtime-environ use
* review: remove unused VMMemoryUnion
* review: reword cross-engine error message
* review: improve tests
* review: refactor to separate prevent Memory <-> SharedMemory conversion
* review: into_shared_memory -> as_shared_memory
* review: remove commented out code
* review: limit shared min/max to 32 bits
* review: skip imported memories
* review: imported memories are not owned
* review: remove TODO
* review: document unsafe send + sync
* review: add limiter assertion
* review: remove TODO
* review: improve tests
* review: fix doc test
* fix: fixes based on discussion with Alex. This changes several key parts: adds memory indexes to imports and exports, and makes `VMMemoryDefinition::current_length` an atomic usize
* review: add `Extern::SharedMemory`
* review: remove TODO
* review: atomically load from VMMemoryDescription in JIT-generated code
* review: add test probing the last available memory slot across threads
* fix: move assertion to new location due to rebase
* fix: doc link
* fix: add TODOs to c-api
* fix: broken doc link
* fix: modify pooling allocator messages in tests
* review: make owned_memory_index panic instead of returning an option
* review: clarify calculation of num_owned_memories
* review: move 'use' to top of file
* review: change '*const [u8]' to '*mut [u8]'
* review: remove TODO
* review: avoid hard-coding memory index
* review: remove 'preallocation' parameter from 'Memory::_new'
* fix: component model memory length
* review: check that shared memory plans are static
* review: ignore growth limits for shared memory
* review: improve atomic store comment
* review: add FIXME for memory growth failure
* review: add comment about absence of bounds-checked 'memory.size'
* review: make 'current_length()' doc comment more precise
* review: more comments related to memory.size non-determinism
* review: make 'vmmemory' unreachable for shared memory
* review: move code around
* review: thread plan through to 'wrap()'
* review: disallow shared memory allocation with the pooling allocator |
2 years ago |
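From the embedder's side, the new `SharedMemory` type is used roughly as sketched below. This is hedged: the threads proposal must be enabled, and the exact constructor and accessor signatures have evolved since this commit landed.

```rust
use std::thread;
use wasmtime::{Config, Engine, MemoryType, SharedMemory};

fn main() -> anyhow::Result<()> {
    let mut config = Config::new();
    config.wasm_threads(true);
    let engine = Engine::new(&config)?;

    // Shared memories must declare a maximum size; this one is 1..=16 pages.
    let memory = SharedMemory::new(&engine, MemoryType::shared(1, 16))?;

    // Clones are cheap handles to the same underlying allocation, so the
    // memory can be handed to another thread (and, via a Linker, to an
    // instance importing `(memory 1 16 shared)`).
    let handle = memory.clone();
    let other_thread_size = thread::spawn(move || handle.size()).join().unwrap();
    assert_eq!(other_thread_size, memory.size());

    // Growth is observed by every handle, since they share one allocation.
    memory.grow(1)?;
    Ok(())
}
```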
wasmtime-publish |
55946704cb
|
Bump Wasmtime to 0.39.0 (#4225)
Co-authored-by: Wasmtime Publish <wasmtime-publish@users.noreply.github.com> |
2 years ago |