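# Cargo manifest for `wasmtime-cli`, which builds the `wasmtime` command-line
# executable on top of the in-tree Wasmtime crates.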
[package]
name = "wasmtime-cli"
version = "0.36.0"
authors = ["The Wasmtime Project Developers"]
description = "Command-line interface for Wasmtime"
license = "Apache-2.0 WITH LLVM-exception"
documentation = "https://bytecodealliance.github.io/wasmtime/cli.html"
categories = ["wasm"]
keywords = ["webassembly", "wasm"]
repository = "https://github.com/bytecodealliance/wasmtime"
readme = "README.md"
edition = "2021"
default-run = "wasmtime"
[lib]
doctest = false
[[bin]]
name = "wasmtime"
path = "src/bin/wasmtime.rs"
doc = false
[dependencies]
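# The Wasmtime crates below live in this repository and are consumed as path
# dependencies at version 0.36.0 (several pinned exactly with `=`).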
wasmtime = { path = "crates/wasmtime", version = "0.36.0", default-features = false, features = ['cache', 'cranelift'] }
wasmtime-cache = { path = "crates/cache", version = "=0.36.0" }
wasmtime-cranelift = { path = "crates/cranelift", version = "=0.36.0" }
wasmtime-environ = { path = "crates/environ", version = "=0.36.0" }
wasmtime-wast = { path = "crates/wast", version = "=0.36.0" }
wasmtime-wasi = { path = "crates/wasi", version = "0.36.0" }
wasmtime-wasi-crypto = { path = "crates/wasi-crypto", version = "0.36.0", optional = true }
wasmtime-wasi-nn = { path = "crates/wasi-nn", version = "0.36.0", optional = true }
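# Command-line parsing for the `wasmtime` binary and its subcommands.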
structopt = { version = "0.3.5", features = ["color", "suggestions"] }
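# Error handling throughout the CLI goes through `anyhow`.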
anyhow = "1.0.19"
target-lexicon = { version = "0.12.0", default-features = false }
pretty_env_logger = "0.4.0"
file-per-thread-logger = "0.1.1"
libc = "0.2.60"
rayon = "1.5.0"
humantime = "2.0.0"
wasmparser = "0.83.0"
lazy_static = "1.4.0"
listenfd = "0.3.5"
[target.'cfg(unix)'.dependencies]
rustix = "0.33.5"
[dev-dependencies]
# depend again on wasmtime to activate its default features for tests
wasmtime = { path = "crates/wasmtime", version = "0.36.0" }
env_logger = "0.8.1"
filecheck = "0.5.0"
more-asserts = "0.2.1"
tempfile = "3.1.0"
test-programs = { path = "crates/test-programs" }
wasmtime-runtime = { path = "crates/runtime" }
tokio = { version = "1.8.0", features = ["rt", "time", "macros", "rt-multi-thread"] }
tracing-subscriber = "0.3.1"
wast = "39.0.0"
criterion = "0.3.4"
num_cpus = "1.13.0"
winapi = { version = "0.3.9", features = ['memoryapi'] }
memchr = "2.4"
async-trait = "0.1"
wat = "1.0.41"
once_cell = "1.9.0"
[build-dependencies]
anyhow = "1.0.19"
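# Leave build scripts and proc macros unoptimized in release builds; this only
# affects build-time helpers, not the shipped binaries, and keeps release
# builds quicker.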
[profile.release.build-override]
opt-level = 0
[workspace]
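# Use Cargo's version 2 feature resolver, which avoids unifying features
# across build, dev, and target-specific dependencies.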
resolver = '2'
members = [
"cranelift",
"cranelift/isle/fuzz",
"cranelift/isle/islec",
"cranelift/serde",
"crates/bench-api",
"crates/c-api",
"crates/misc/run-examples",
"examples/fib-debug/wasm",
"examples/wasi/wasm",
"examples/tokio/wasm",
"fuzz",
]
exclude = [
'crates/wasi-common/WASI/tools/witx-cli',
'docs/rust_wasi_markdown_parser'
]
[features]
default = [
"jitdump",
"wasmtime/wat",
"wasmtime/parallel-compilation",
"vtune",
"wasi-nn",
"pooling-allocator",
"memory-init-cow",
"wasm-backtrace",
]
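# Profiling integrations: `jitdump` emits Linux `perf` jitdump files, `vtune`
# hooks into Intel VTune.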
jitdump = ["wasmtime/jitdump"]
vtune = ["wasmtime/vtune"]
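# Optional WASI proposal implementations; enabling these pulls in the matching
# optional `wasmtime-wasi-*` dependencies above.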
wasi-crypto = ["wasmtime-wasi-crypto"]
wasi-nn = ["wasmtime-wasi-nn"]
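# Linux `userfaultfd` support, forwarded to the `wasmtime` crate.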
uffd = ["wasmtime/uffd"]
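# Copy-on-write, memfd-backed initialization of linear memories on Linux, so
# repeated instantiations can share one read-only memory image.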
memory-init-cow = ["wasmtime/memory-init-cow"]
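# Pooled, reusable instance slots; combined with memory-init-cow this gives a
# cheap re-instantiation path (madvise to reset the CoW overlay rather than
# remapping address space).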
pooling-allocator = ["wasmtime/pooling-allocator"]
all-arch = ["wasmtime/all-arch"]
posix-signals-on-macos = ["wasmtime/posix-signals-on-macos"]
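# Collect Wasm backtraces when a trap occurs. Disabling this also drops unwind
# tables from compiled code and, because stack walking is then impossible,
# disables reference types at runtime.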
wasm-backtrace = ["wasmtime/wasm-backtrace"]
# Stub feature that does nothing, for Cargo-features compatibility: the new
# backend is the default now.
experimental_x64 = []
[badges]
maintenance = { status = "actively-developed" }
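# `harness = false`: the test provides its own `main`, since it exercises how
# genuine host segfaults are handled rather than running under libtest.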
[[test]]
name = "host_segfault"
harness = false
[[example]]
name = "tokio"
required-features = ["wasmtime-wasi/tokio"]
[profile.dev.package.backtrace]
debug = false # FIXME(#1813)
[[bench]]
name = "instantiation"
harness = false
[[bench]]
name = "thread_eager_init"
harness = false
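# Measures the overhead of host-to-Wasm and Wasm-to-host calls across typed,
# untyped, and unchecked APIs, with and without call hooks, sync and async.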
[[bench]]
name = "call"
harness = false