use anyhow::Result;
use std::cell::Cell;
use std::rc::Rc;
use std::sync::atomic::{AtomicUsize, Ordering::SeqCst};
use std::sync::Arc;
use wasmtime::*;
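
// Instantiating a module with a fresh (empty) linker should fail for every
// kind of unresolved import: function, global, memory, and table.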
#[test]
fn link_undefined() -> Result<()> {
    let mut store = Store::<()>::default();
    let linker = Linker::new(store.engine());
    let module = Module::new(store.engine(), r#"(module (import "" "" (func)))"#)?;
    assert!(linker.instantiate(&mut store, &module).is_err());
    let module = Module::new(store.engine(), r#"(module (import "" "" (global i32)))"#)?;
    assert!(linker.instantiate(&mut store, &module).is_err());
    let module = Module::new(store.engine(), r#"(module (import "" "" (memory 1)))"#)?;
    assert!(linker.instantiate(&mut store, &module).is_err());
    let module = Module::new(
        store.engine(),
        r#"(module (import "" "" (table 1 funcref)))"#,
    )?;
    assert!(linker.instantiate(&mut store, &module).is_err());
    Ok(())
}
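
// Defining the same (module, name) pair twice without enabling shadowing
// should fail for every item kind, whether or not the second type matches.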
#[test]
fn link_twice_bad() -> Result<()> {
    let mut store = Store::<()>::default();
    let mut linker = Linker::<()>::new(store.engine());
    // functions
    linker.func_wrap("f", "", || {})?;
    assert!(linker.func_wrap("f", "", || {}).is_err());
    assert!(linker
        .func_wrap("f", "", || -> Result<(), Trap> { loop {} })
        .is_err());
    // globals
    let ty = GlobalType::new(ValType::I32, Mutability::Const);
    let global = Global::new(&mut store, ty, Val::I32(0))?;
    linker.define("g", "1", global.clone())?;
    assert!(linker.define("g", "1", global.clone()).is_err());
    let ty = GlobalType::new(ValType::I32, Mutability::Var);
    let global = Global::new(&mut store, ty, Val::I32(0))?;
    linker.define("g", "2", global.clone())?;
    assert!(linker.define("g", "2", global.clone()).is_err());
    let ty = GlobalType::new(ValType::I64, Mutability::Const);
    let global = Global::new(&mut store, ty, Val::I64(0))?;
    linker.define("g", "3", global.clone())?;
    assert!(linker.define("g", "3", global.clone()).is_err());
    // memories
    let ty = MemoryType::new(1, None);
    let memory = Memory::new(&mut store, ty)?;
    linker.define("m", "", memory.clone())?;
    assert!(linker.define("m", "", memory.clone()).is_err());
    let ty = MemoryType::new(2, None);
    let memory = Memory::new(&mut store, ty)?;
    assert!(linker.define("m", "", memory.clone()).is_err());
    // tables
    let ty = TableType::new(ValType::FuncRef, 1, None);
    let table = Table::new(&mut store, ty, Val::FuncRef(None))?;
    linker.define("t", "", table.clone())?;
    assert!(linker.define("t", "", table.clone()).is_err());
    let ty = TableType::new(ValType::FuncRef, 2, None);
    let table = Table::new(&mut store, ty, Val::FuncRef(None))?;
    assert!(linker.define("t", "", table.clone()).is_err());
    Ok(())
}
#[test]
fn function_interposition() -> Result<()> {
    let mut store = Store::<()>::default();
    let mut linker = Linker::new(store.engine());
    linker.allow_shadowing(true);
    let mut module = Module::new(
        store.engine(),
        r#"(module (func (export "green") (result i32) (i32.const 7)))"#,
    )?;
    for _ in 0..4 {
        let instance = linker.instantiate(&mut store, &module)?;
        linker.define(
            "red",
            "green",
            instance.get_export(&mut store, "green").unwrap().clone(),
        )?;
        module = Module::new(
            store.engine(),
            r#"(module
                (import "red" "green" (func (result i32)))
                (func (export "green") (result i32) (i32.mul (call 0) (i32.const 2)))
            )"#,
        )?;
    }
    let instance = linker.instantiate(&mut store, &module)?;
    let func = instance
        .get_export(&mut store, "green")
        .unwrap()
        .into_func()
        .unwrap();
    let func = func.typed::<(), i32, _>(&store)?;
    assert_eq!(func.call(&mut store, ())?, 112);
    Ok(())
}

// Same as `function_interposition`, but the linker's name for the function
// differs from the module's name.
#[test]
fn function_interposition_renamed() -> Result<()> {
    let mut store = Store::<()>::default();
    let mut linker = Linker::new(store.engine());
    linker.allow_shadowing(true);
    let mut module = Module::new(
        store.engine(),
        r#"(module (func (export "export") (result i32) (i32.const 7)))"#,
    )?;
    for _ in 0..4 {
        let instance = linker.instantiate(&mut store, &module)?;
        linker.define(
            "red",
            "green",
            instance.get_export(&mut store, "export").unwrap().clone(),
        )?;
        module = Module::new(
            store.engine(),
            r#"(module
                (import "red" "green" (func (result i32)))
                (func (export "export") (result i32) (i32.mul (call 0) (i32.const 2)))
            )"#,
        )?;
    }
    let instance = linker.instantiate(&mut store, &module)?;
    let func = instance.get_func(&mut store, "export").unwrap();
    let func = func.typed::<(), i32, _>(&store)?;
    assert_eq!(func.call(&mut store, ())?, 112);
    Ok(())
}

// Similar to `function_interposition`, but use `Linker::instance` instead of
// `Linker::define`.
#[test]
fn module_interposition() -> Result<()> {
    let mut store = Store::<()>::default();
    let mut linker = Linker::new(store.engine());
    linker.allow_shadowing(true);
    let mut module = Module::new(
        store.engine(),
        r#"(module (func (export "export") (result i32) (i32.const 7)))"#,
    )?;
    for _ in 0..4 {
        let instance = linker.instantiate(&mut store, &module)?;
        linker.instance(&mut store, "instance", instance)?;
        module = Module::new(
            store.engine(),
            r#"(module
                (import "instance" "export" (func (result i32)))
                (func (export "export") (result i32) (i32.mul (call 0) (i32.const 2)))
            )"#,
        )?;
    }
    let instance = linker.instantiate(&mut store, &module)?;
    let func = instance
        .get_export(&mut store, "export")
        .unwrap()
        .into_func()
        .unwrap();
    let func = func.typed::<(), i32, _>(&store)?;
    assert_eq!(func.call(&mut store, ())?, 112);
    Ok(())
}
#[test]
fn allow_unknown_exports() -> Result<()> {
    let mut store = Store::<()>::default();
    let mut linker = Linker::new(store.engine());
    let module = Module::new(
        store.engine(),
        r#"(module (func (export "_start")) (global (export "g") i32 (i32.const 0)))"#,
    )?;
    assert!(linker.module(&mut store, "module", &module).is_err());
    let mut linker = Linker::new(store.engine());
    linker.allow_unknown_exports(true);
    linker.module(&mut store, "module", &module)?;
    Ok(())
}
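
// `Linker::module` must not keep the store alive: once the store is dropped,
// its data (`DropMe`) must be dropped too.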
#[test]
fn no_leak() -> Result<()> {
    struct DropMe(Rc<Cell<bool>>);

    impl Drop for DropMe {
        fn drop(&mut self) {
            self.0.set(true);
        }
    }

    let flag = Rc::new(Cell::new(false));
    {
        let mut store = Store::new(&Engine::default(), DropMe(flag.clone()));
        let mut linker = Linker::new(store.engine());
        let module = Module::new(
            store.engine(),
            r#"
                (module
                    (func (export "_start"))
                )
            "#,
        )?;
        linker.module(&mut store, "a", &module)?;
    }
    assert!(flag.get(), "store was leaked");
    Ok(())
}
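
// As above, but with a host function import whose closure owns a `DropMe` of
// its own: both the store's data and the captured value must be dropped,
// bringing the counter to exactly 2.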
#[test]
fn no_leak_with_imports() -> Result<()> {
    struct DropMe(Arc<AtomicUsize>);

    impl Drop for DropMe {
        fn drop(&mut self) {
            self.0.fetch_add(1, SeqCst);
        }
    }

    let flag = Arc::new(AtomicUsize::new(0));
    {
        let mut store = Store::new(&Engine::default(), DropMe(flag.clone()));
        let mut linker = Linker::new(store.engine());
        let drop_me = DropMe(flag.clone());
        linker.func_wrap("", "", move || drop(&drop_me))?;
        let module = Module::new(
            store.engine(),
            r#"
                (module
                    (import "" "" (func))
                    (func (export "_start"))
                )
            "#,
        )?;
        linker.module(&mut store, "a", &module)?;
    }
    assert!(flag.load(SeqCst) == 2, "something was leaked");
    Ok(())
}
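
// A host function defined on the linker can be found again through a module's
// import with `Linker::get_by_import`.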
#[test]
fn get_host_function() -> Result<()> {
    let engine = Engine::default();
    let module = Module::new(&engine, r#"(module (import "mod" "f1" (func)))"#)?;
    let mut linker = Linker::new(&engine);
    linker.func_wrap("mod", "f1", || {})?;
    let mut store = Store::new(&engine, ());
    assert!(linker
        .get_by_import(&mut store, &module.imports().nth(0).unwrap())
        .is_some());
    Ok(())
}
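
// Host functions defined in a linker are usable across many short-lived
// stores; the closure's captured state is dropped only when the linker itself
// is dropped, not when each store goes away.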
#[test]
fn funcs_live_on_to_fight_another_day() -> Result<()> {
    struct DropMe(Arc<AtomicUsize>);

    impl Drop for DropMe {
        fn drop(&mut self) {
            self.0.fetch_add(1, SeqCst);
        }
    }

    let flag = Arc::new(AtomicUsize::new(0));
    let engine = Engine::default();
    let mut linker = Linker::new(&engine);
    let drop_me = DropMe(flag.clone());
    linker.func_wrap("", "", move || drop(&drop_me))?;
    assert_eq!(flag.load(SeqCst), 0);
    let get_and_call = || -> Result<()> {
        assert_eq!(flag.load(SeqCst), 0);
        let mut store = Store::new(&engine, ());
        let func = linker.get(&mut store, "", "").unwrap();
        func.into_func().unwrap().call(&mut store, &[], &mut [])?;
        assert_eq!(flag.load(SeqCst), 0);
        Ok(())
    };
    get_and_call()?;
    get_and_call()?;
    drop(linker);
    assert_eq!(flag.load(SeqCst), 1);
    Ok(())
}
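
// `Linker::alias` fails when the source item is not defined yet; once it
// succeeds, the item is reachable under both the original and the aliased names.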
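// Aliasing an item that has not been defined yet should fail; once the
// source item exists, the alias can be created and both names resolve.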
#[test]
fn alias_one() -> Result<()> {
let mut store = Store::<()>::default();
let mut linker = Linker::new(store.engine());
assert!(linker.alias("a", "b", "c", "d").is_err());
linker.func_wrap("a", "b", || {})?;
assert!(linker.alias("a", "b", "c", "d").is_ok());
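    // Both the original name and its alias should now be resolvable.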
assert!(linker.get(&mut store, "a", "b").is_some());
assert!(linker.get(&mut store, "c", "d").is_some());
Ok(())
}
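// `Linker::instantiate_pre` resolves a module's imports up front; this test
// checks that the resulting `InstancePre` can be instantiated more than once.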
#[test]
fn instance_pre() -> Result<()> {
let engine = Engine::default();
let mut linker = Linker::new(&engine);
linker.func_wrap("", "", || {})?;
let module = Module::new(&engine, r#"(module (import "" "" (func)))"#)?;
let instance_pre = linker.instantiate_pre(&mut Store::new(&engine, ()), &module)?;
instance_pre.instantiate(&mut Store::new(&engine, ()))?;
instance_pre.instantiate(&mut Store::new(&engine, ()))?;
let mut store = Store::new(&engine, ());
let global = Global::new(
&mut store,
GlobalType::new(ValType::I32, Mutability::Const),
1.into(),
)?;
linker.define("", "g", global)?;
let module = Module::new(
&engine,
r#"(module
(import "" "" (func))
(import "" "g" (global i32))
)"#,
)?;
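    // This module imports the store-owned global, so it is instantiated
    // within the same store that owns it.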
let instance_pre = linker.instantiate_pre(&mut store, &module)?;
instance_pre.instantiate(&mut store)?;
instance_pre.instantiate(&mut store)?;
Ok(())
}