Branch: release-18.0.0

9 Commits (release-18.0.0)

Nick Fitzgerald | a34427a3d2 | 1 year ago

Wasmtime: refactor the pooling allocator for components (#6835)

* Wasmtime: Rename `IndexAllocator` to `ModuleAffinityIndexAllocator`. We will have multiple kinds of index allocators soon, so clarify which one this is.
* Wasmtime: Introduce a simple index allocator. This will be used in future commits refactoring the pooling allocator.
* Wasmtime: refactor the pooling allocator for components. We used to have one index allocator, an index per instance, and give out N tables and M memories to every instance regardless of how many tables and memories they need. Now we have an index allocator for memories and another for tables. An instance isn't associated with a single index; each of its memories and tables has its own index. We allocate exactly as many tables and memories as the instance actually needs. Ultimately, this gives us better component support, where a component instance might have varying numbers of internal tables and memories. Additionally, you can now limit the number of tables, memories, and core instances a single component can allocate from the pooling allocator, even if there is capacity for that many available. This gives embedders tools to limit individual component instances and prevent them from hogging too much of the pooling allocator's resources.
* Remove unused file. Messed up from rebasing; this code is actually just inline in the index allocator module.
* Address review feedback.
* Fix benchmarks build.
* Fix ignoring test under miri. The `async_functions` module is not even compiled-but-ignored with miri, it is completely `cfg`ed off, so we have to do the same with this test that imports stuff from that module.
* Fix doc links.
* Allow testing utilities to be unused. The exact `cfg`s that unlock the tests that use these are platform- and feature-dependent and end up being about five items and super long. It is simpler to just allow unused when we are testing on other platforms or don't have the compile-time features enabled.
* Debug assert that the pool is empty on drop, per Alex's suggestion. Also fix a couple of scenarios where we could leak indices if allocating an index for a memory/table succeeded but then creating the memory/table itself failed.
* Fix Windows compile errors.
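
The per-component limits described above are an embedder-facing knob. Below is a minimal, hedged sketch of setting them, assuming a recent `wasmtime` release whose `PoolingAllocationConfig` exposes per-component caps such as `max_memories_per_component`; the method names and all numbers are illustrative, not taken from this commit.

```rust
use wasmtime::{Config, Engine, InstanceAllocationStrategy, PoolingAllocationConfig};

fn main() -> anyhow::Result<()> {
    // All values are illustrative; method names follow a recent `wasmtime`
    // release and may differ in the release that first contained this change.
    let mut pool = PoolingAllocationConfig::default();
    pool.total_memories(200);                 // pool-wide cap on linear memories
    pool.total_tables(200);                   // pool-wide cap on tables
    pool.max_memories_per_component(4);       // cap for a single component instance
    pool.max_tables_per_component(8);
    pool.max_core_instances_per_component(10);

    let mut config = Config::new();
    config.allocation_strategy(InstanceAllocationStrategy::Pooling(pool));
    let _engine = Engine::new(&config)?;
    Ok(())
}
```
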
Alex Crichton | b0939f6626 | 2 years ago

Remove explicit `S` type parameters (#5275)

* Remove explicit `S` type parameters. This commit removes the explicit `S` type parameter on `Func::typed` and `Instance::get_typed_func`. Historical versions of Rust required that this be a type parameter, but recent versions of rustc support a mixture of explicit type parameters and `impl Trait`. This removes, at call sites, a superfluous `, _` argument which otherwise never needs specification.
* Fix mdbook examples.
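
A small sketch of the call-site change described above: with the `S` parameter gone, only the parameter and result types are spelled out. This assumes a recent `wasmtime` with the default `wat` feature enabled; the module and function names are made up for illustration.

```rust
use wasmtime::{Engine, Instance, Module, Store};

fn main() -> anyhow::Result<()> {
    let engine = Engine::default();
    let module = Module::new(
        &engine,
        r#"(module (func (export "add") (param i32 i32) (result i32)
               local.get 0
               local.get 1
               i32.add))"#,
    )?;
    let mut store = Store::new(&engine, ());
    let instance = Instance::new(&mut store, &module, &[])?;

    // Post-#5275 style: no trailing `, _` for the store's data type is needed.
    let add = instance.get_typed_func::<(i32, i32), i32>(&mut store, "add")?;
    assert_eq!(add.call(&mut store, (2, 3))?, 5);
    Ok(())
}
```
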
Andrew Brown | 8426904129 | 2 years ago

bench: benchmark several common WASI scenarios (#5274)

In order to properly understand the impact of providing thread-safe implementations of WASI contexts (#5235), we need benchmarks that measure the current performance of WASI calls using Wiggle. This change adds several common WASI scenarios as WAT files (see `benches/wasi/*.wat`) and benchmarks them with `criterion`. Using `criterion`'s `iter_custom`, the WAT file runs the desired number of benchmark iterations internally and the total duration of the runs is divided to get the average time for each loop iteration.

Why WAT? When compiling these benchmarks from Rust to `wasm32-wasi`, the output files are large, contain WASI imports other than the desired ones, and overall it is difficult to tell whether we are measuring what we expect. By hand-writing the WAT, it is (slightly) clearer what each benchmark is doing.
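
A hedged sketch of the `iter_custom` pattern the commit describes, not the repository's actual benchmark. The WAT path and the exported `run` function are hypothetical, and the preview-1 style `wasmtime_wasi` API shown here (`add_to_linker`, `WasiCtxBuilder`) has changed across releases.

```rust
use std::time::Instant;

use criterion::{criterion_group, criterion_main, Criterion};
use wasmtime::{Engine, Linker, Module, Store};
use wasmtime_wasi::{WasiCtx, WasiCtxBuilder};

fn bench_wasi(c: &mut Criterion) {
    let engine = Engine::default();
    // Hypothetical path standing in for one of the hand-written WAT scenarios;
    // the guest is assumed to export `run` (one i64 parameter) that performs
    // the WASI call under test `iters` times in a loop.
    let module = Module::from_file(&engine, "benches/wasi/clock_time_get.wat").unwrap();

    c.bench_function("clock_time_get", |b| {
        b.iter_custom(|iters| {
            let mut linker: Linker<WasiCtx> = Linker::new(&engine);
            wasmtime_wasi::add_to_linker(&mut linker, |ctx| ctx).unwrap();
            let mut store = Store::new(&engine, WasiCtxBuilder::new().build());
            let instance = linker.instantiate(&mut store, &module).unwrap();
            let run = instance
                .get_typed_func::<u64, ()>(&mut store, "run")
                .unwrap();

            // Time only the guest loop; criterion divides the returned
            // duration by `iters` to report a per-call figure.
            let start = Instant::now();
            run.call(&mut store, iters).unwrap();
            start.elapsed()
        })
    });
}

criterion_group!(benches, bench_wasi);
criterion_main!(benches);
```
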
Alex Crichton | b14551d7ca | 2 years ago

Refactor configuration for the pooling allocator (#5205)

This commit changes the APIs in the `wasmtime` crate for configuring the pooling allocator. I plan on adding a few more configuration options in the near future and the current structure was feeling unwieldy for adding these new abstractions. The previous `struct`-based API has been replaced with a builder-style API in a similar shape to `Config`. This is done to make it easier to add more configuration options in the future by adding more methods, as opposed to adding more fields, which could break prior initializations.
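
To make the builder-style shape concrete, here is a minimal sketch assuming a recent `wasmtime` release; the particular knobs (`total_core_instances`, `max_unused_warm_slots`) and values are illustrative and differ from the options available when this commit landed.

```rust
use wasmtime::{Config, Engine, InstanceAllocationStrategy, PoolingAllocationConfig};

fn main() -> anyhow::Result<()> {
    // Each option is a method on the config value rather than a struct field,
    // so new options can be added later without breaking existing code.
    let mut pool = PoolingAllocationConfig::default();
    pool.total_core_instances(1_000);
    pool.max_unused_warm_slots(10);

    let mut config = Config::new();
    config.allocation_strategy(InstanceAllocationStrategy::Pooling(pool));
    let _engine = Engine::new(&config)?;
    Ok(())
}
```
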
Alex Crichton | 82a31680d6 | 2 years ago

Use a `StoreOpaque` during backtraces for metadata (#4325)

Prior to this commit Wasmtime would use the `GlobalModuleRegistry` when learning information about a trap, such as its trap code, the symbols for each frame, etc. This has the downside, though, of holding a global read-write lock for the duration of the operation, which hinders registration of new modules in parallel. In addition there was a fair amount of internal duplication between this "global module registry" and the store-local module registry. Finally, relying on global state for information like this gets more brittle over time, as it seems best to scope global queries to precisely what's necessary rather than holding extra information.

With the refactoring of wasm backtraces done in #4183 it's now possible to always have a `StoreOpaque` reference when a backtrace is collected for symbolication and otherwise `Trap`-identification purposes. This commit adds a `StoreOpaque` parameter to the `Trap::from_runtime` constructor and then plumbs that everywhere.

Note that while doing this I changed the internal `traphandlers::lazy_per_thread_init` function to no longer return a `Result` and instead just `panic!` on Unix if memory couldn't be allocated for a stack. This removed quite a lot of error-handling code for a case that's expected to happen quite rarely. If necessary in the future we can add a fallible initialization point, but this feels like a better default balance for the code here.

With a `StoreOpaque` in use when a trap is being symbolicated, we have a `ModuleRegistry` which can be used for queries and such. This meant that the `GlobalModuleRegistry` state could largely be dismantled and moved to per-`Store` state (within the `ModuleRegistry`, mostly just moving methods around). The final state is that the global rwlock is now exclusively scoped around insertions/deletions/`is_wasm_trap_pc`, which is just a lookup and an atomic add. Otherwise symbolication for a backtrace exclusively uses store-local state now (as intended).

The original motivation for this commit was that frame-information lookup was getting somewhat complicated with the addition of components, which are a new vector of traps coming out of Cranelift-generated code. My hope is that by having a `Store` around for more operations it's easier to plumb all this through.
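
The user-visible surface of this machinery is trap symbolication. Below is a hedged sketch of inspecting a symbolicated backtrace from a trapping call, using the present-day `WasmBacktrace` API rather than the `Trap` type of this era; the module is a throwaway example.

```rust
use wasmtime::{Engine, Instance, Module, Store, WasmBacktrace};

fn main() -> anyhow::Result<()> {
    let engine = Engine::default();
    let module = Module::new(&engine, r#"(module (func (export "boom") unreachable))"#)?;
    let mut store = Store::new(&engine, ());
    let instance = Instance::new(&mut store, &module, &[])?;
    let boom = instance.get_typed_func::<(), ()>(&mut store, "boom")?;

    // Calling `boom` traps; the returned error carries a backtrace that is
    // symbolicated from store-local module information.
    let err = boom.call(&mut store, ()).unwrap_err();
    if let Some(bt) = err.downcast_ref::<WasmBacktrace>() {
        for frame in bt.frames() {
            println!("{:?}", frame.func_name());
        }
    }
    Ok(())
}
```
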
Alex Crichton | 15bb0c6903 | 3 years ago

Remove the `ModuleLimits` pooling configuration structure (#3837)

* Remove the `ModuleLimits` pooling configuration structure. This commit is an attempt to improve the usability of the pooling allocator by removing the need to configure a `ModuleLimits` structure. Internally this structure has limits on all forms of wasm constructs, but this largely bottoms out in the size of an allocation for an instance in the instance pooling allocator. Maintaining this list of limits can be cumbersome, as modules may get tweaked over time, and there's otherwise no real reason to limit the number of globals in a module since the main goal is to limit the memory consumption of a `VMContext`, which can be done with a memory allocation limit rather than fine-tuned control over each maximum and minimum. The new approach taken in this commit is to remove `ModuleLimits`. Some fields, such as `tables`, `table_elements`, `memories`, and `memory_pages`, are moved to `InstanceLimits` since they're still enforced at runtime. A new field, `size`, is added to `InstanceLimits` which indicates, in bytes, the maximum size of the `VMContext` allocation. If the size of a `VMContext` for a module exceeds this value then instantiation will fail. This involved adding a few more checks to `{Table, Memory}::new_static` to ensure that the minimum size is able to fit in the allocation, since previously modules were validated at compile time that everything fit, and that validation no longer happens there (it happens at runtime). A consequence of this commit is that Wasmtime will have no built-in way to reject modules at compile time if they'll fail to be instantiated within a particular pooling allocator configuration. Instead a module must attempt instantiation to see if a failure happens.
* Fix benchmark compiles.
* Fix some doc links.
* Fix a panic by ensuring modules have limited tables/memories.
* Review comments.
* Add back validation at `Module` time that instantiation is possible. This allows getting an early signal at compile time that a module will never be instantiable in an engine with matching settings.
* Provide a better error message when sizes are exceeded. Improve the error message when an instance size exceeds the maximum by providing a breakdown of where the bytes are all going and why the large size is being requested.
* Try to fix test in qemu.
* Flag new test as 64-bit only. Sizes are all specific to 64-bit right now.
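
A rough sketch of the post-#3837 configuration surface described above. The `size`, `tables`, `table_elements`, `memories`, and `memory_pages` field names come from the commit message, but the remaining shapes (`InstanceLimits::count`, `PoolingAllocationStrategy::NextAvailable`, the `Pooling { .. }` variant) are best-effort reconstructions of that era's pre-builder API, and the numbers are purely illustrative.

```rust
use wasmtime::{
    Config, Engine, InstanceAllocationStrategy, InstanceLimits, PoolingAllocationStrategy,
};

fn main() -> anyhow::Result<()> {
    // Best-effort reconstruction of the pre-#5205 API; values are illustrative.
    let mut limits = InstanceLimits::default();
    limits.count = 100;             // number of instance slots in the pool (assumed field)
    limits.size = 1 << 20;          // new in this change: max `VMContext` allocation, in bytes
    limits.tables = 1;
    limits.table_elements = 10_000;
    limits.memories = 1;
    limits.memory_pages = 160;

    let mut config = Config::new();
    config.allocation_strategy(InstanceAllocationStrategy::Pooling {
        strategy: PoolingAllocationStrategy::NextAvailable,
        instance_limits: limits,
    });
    let _engine = Engine::new(&config)?;
    Ok(())
}
```
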
Alex Crichton | 7ce46043dc | 3 years ago

Add guard pages to the front of linear memories (#2977)

* Add guard pages to the front of linear memories. This commit implements a safety feature for Wasmtime to place guard pages before the allocation of all linear memories. Guard pages placed after linear memories are typically present for performance (at least) because they can help elide bounds checks. Guard pages before a linear memory, however, are never strictly needed for performance or features. The intention of a preceding guard page is to help insulate against bugs in Cranelift or other code generators, such as CVE-2021-32629. This commit adds a `Config::guard_before_linear_memory` configuration option, defaulting to `true`, which indicates whether guard pages should be present before linear memories as well as afterwards. Guard regions continue to be controlled by the `{static,dynamic}_memory_guard_size` methods. The implementation here affects both on-demand allocated memories and the pooling allocator for memories. For on-demand memories this adjusts the size of the allocation as well as the calculations for the base pointer of the wasm memory. For the pooling allocator this places a single extra guard region at the very start of the allocation for memories. Since linear memories in the pooling allocator are contiguous, every memory already had a preceding guard region; it was just the previous memory's trailing guard region. Only the first memory needed this extra guard. I've attempted to write some tests to cover all this, but it is somewhat tricky to test because the settings are pretty far away from the actual behavior. I think, though, that the tests added here should help cover various use cases and give us confidence in tweaking the various `Config` settings beyond their defaults. Note that this also contains a semantic change where `InstanceLimits::memory_reservation_size` has been removed. Instead this field is now inferred from the `static_memory_maximum_size` and guard size settings. This should remove some duplication in these settings, canonicalizing on the guard-size/static-size settings as the way to control memory sizes and virtual reservations.
* Update config docs.
* Fix a typo.
* Fix benchmark.
* Fix wasmtime-runtime tests.
* Fix some more tests.
* Try to fix failing uffd test.
* Review items.
* Tweak 32-bit defaults. Makes the pooling allocator a bit more reasonable by default on 32-bit with these settings.
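
A short sketch of the configuration surface mentioned above, assuming a `wasmtime` release that still exposes `Config::guard_before_linear_memory` and the `{static,dynamic}_memory_guard_size` methods (these knobs have been reshuffled in newer releases); the sizes are illustrative, not the defaults.

```rust
use wasmtime::{Config, Engine};

fn main() -> anyhow::Result<()> {
    let mut config = Config::new();
    // Guard regions both before and after each linear memory (the default
    // after this change).
    config.guard_before_linear_memory(true);
    config.static_memory_guard_size(2 << 30);   // 2 GiB trailing guard for static memories
    config.dynamic_memory_guard_size(64 << 10); // 64 KiB guard for dynamic memories
    let _engine = Engine::new(&config)?;
    Ok(())
}
```
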
Pat Hickey | 035d541fd9 | 3 years ago

add docs

Pat Hickey | 2a4c51b77d | 3 years ago

switch eager vs lazy instantiation to a criterion bench

Pat Hickey | 0a96b6b60a | 3 years ago

overhead is on calls, not instantiation

Pat Hickey | bf1d1a4e7a | 3 years ago

add test for eager thread initialization