//! An example of how to interact with wasm memory.
//!
//! Here a small wasm module is used to show how memory is initialized, how to
//! read and write memory through the `Memory` object, and how wasm functions
//! can trap when dealing with out-of-bounds addresses.
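//!
//! The assertions in `main` assume a `memory.wat` that exports a memory named
//! `memory` with 2 initial pages and a maximum of 3, a data segment starting
//! at offset 0x1000, and `size`/`load`/`store` functions that report the page
//! count and read or write individual bytes (this description is inferred
//! from the assertions below, not copied from the module itself).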

// You can execute this example with `cargo run --example memory`

use wasmtime::*;

fn main() -> Result<()> {
    // Create our `Store` context and then compile a module and create an
    // instance from the compiled module all in one go.
    let mut store: Store<()> = Store::default();
    let module = Module::from_file(store.engine(), "examples/memory.wat")?;
    let instance = Instance::new(&mut store, &module, &[])?;

    // Load up our exports from the instance.
    let memory = instance
        .get_memory(&mut store, "memory")
        .ok_or(anyhow::format_err!("failed to find `memory` export"))?;
    let size = instance.get_typed_func::<(), i32>(&mut store, "size")?;
    let load_fn = instance.get_typed_func::<i32, i32>(&mut store, "load")?;
    let store_fn = instance.get_typed_func::<(i32, i32), ()>(&mut store, "store")?;

println!("Checking memory...");
assert_eq!(memory.size(&store), 2);
assert_eq!(memory.data_size(&store), 0x20000);
assert_eq!(memory.data_mut(&mut store)[0], 0);
assert_eq!(memory.data_mut(&mut store)[0x1000], 1);
assert_eq!(memory.data_mut(&mut store)[0x1003], 4);
assert_eq!(size.call(&mut store, ())?, 2);
assert_eq!(load_fn.call(&mut store, 0)?, 0);
assert_eq!(load_fn.call(&mut store, 0x1000)?, 1);
assert_eq!(load_fn.call(&mut store, 0x1003)?, 4);
assert_eq!(load_fn.call(&mut store, 0x1ffff)?, 0);
assert!(load_fn.call(&mut store, 0x20000).is_err()); // out of bounds trap
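    // (a wasm page is 64 KiB, so the 2 initial pages cover byte offsets
    // 0..0x20000; the out-of-bounds access above traps inside wasm, which
    // surfaces here as an `Err`)
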
println!("Mutating memory...");
memory.data_mut(&mut store)[0x1003] = 5;
store_fn.call(&mut store, (0x1002, 6))?;
assert!(store_fn.call(&mut store, (0x20000, 0)).is_err()); // out of bounds trap
assert_eq!(memory.data(&store)[0x1002], 6);
assert_eq!(memory.data(&store)[0x1003], 5);
assert_eq!(load_fn.call(&mut store, 0x1002)?, 6);
assert_eq!(load_fn.call(&mut store, 0x1003)?, 5);
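    // Writes made through `Memory::data_mut` and through the exported wasm
    // `store` function land in the same linear memory, so each side observes
    // the other's changes, as the assertions above demonstrate.
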
    // Grow memory.
    println!("Growing memory...");
    memory.grow(&mut store, 1)?;
    assert_eq!(memory.size(&store), 3);
    assert_eq!(memory.data_size(&store), 0x30000);

    assert_eq!(load_fn.call(&mut store, 0x20000)?, 0);
    store_fn.call(&mut store, (0x20000, 0))?;
    assert!(load_fn.call(&mut store, 0x30000).is_err());
    assert!(store_fn.call(&mut store, (0x30000, 0)).is_err());

    // Growing past the memory's declared maximum fails, while growing by 0
    // pages always succeeds.
    assert!(memory.grow(&mut store, 1).is_err());
    assert!(memory.grow(&mut store, 0).is_ok());
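
    // A `Memory` can also be created directly by the host from a `MemoryType`,
    // without going through an instance; with a minimum and maximum of 5 pages
    // it starts at 5 pages and can never grow by a nonzero amount.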
println!("Creating stand-alone memory...");
    let memorytype = MemoryType::new(5, Some(5));
    let memory2 = Memory::new(&mut store, memorytype)?;
    assert_eq!(memory2.size(&store), 5);
    assert!(memory2.grow(&mut store, 1).is_err());
    assert!(memory2.grow(&mut store, 0).is_ok());

    Ok(())
}