
Remove the native ABI calling convention from Wasmtime (#8629)

* Remove the native ABI calling convention from Wasmtime

This commit proposes removing the "native ABI" calling convention used in
Wasmtime. For background, this ABI dates back to the origins of Wasmtime.
Originally Wasmtime only had `Func::call`, and eventually I added `TypedFunc`
with `TypedFunc::call` and `Func::wrap` as a faster path. At the time, given
the state of trampolines, it was easiest to call WebAssembly code directly,
without any trampolines, using the native ABI that wasm used at the time. This
is the original source of the native ABI, and it has persisted over time under
the assumption that it's faster than the array ABI because it keeps arguments
in registers rather than spilling them to the stack.
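
As a rough sketch of the distinction between the two ABIs (the `VMContext` and
`ValRaw` definitions below are simplified stand-ins for Wasmtime's internal
types, and the concrete native-call signature is just one illustrative case):

```rust
// Simplified stand-ins for Wasmtime's internal types, only so the sketch is
// self-contained; the real definitions live in the `wasmtime` runtime.
#[repr(C)]
pub struct VMContext {
    _opaque: [u8; 0],
}
#[repr(C)]
pub union ValRaw {
    pub i32: i32,
    pub i64: i64,
    pub f32: u32,
    pub f64: u64,
}

// "Native" ABI (removed by this commit): parameters and a single result travel
// as ordinary machine arguments/returns. The `(i32, i64) -> f32` signature
// here is purely illustrative.
pub type NativeCall = unsafe extern "C" fn(
    callee_vmctx: *mut VMContext,
    caller_vmctx: *mut VMContext,
    arg0: i32,
    arg1: i64,
) -> f32;

// "Array" ABI (kept): one shape for every signature; parameters are spilled
// into a `ValRaw` array on the stack and results are read back out of it.
pub type ArrayCall = unsafe extern "C" fn(
    callee_vmctx: *mut VMContext,
    caller_vmctx: *mut VMContext,
    values_ptr: *mut ValRaw,
    values_len: usize,
);
```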

Over time, however, this design decision of using the native ABI has not aged
well. Trampolines have changed quite a lot in the meantime, and it's no longer
possible for the host to call wasm without a trampoline, for example.
Compilations nowadays maintain both native and array trampolines for wasm
functions in addition to host functions. There's a large split between
`Func::new` and `Func::wrap`. Overall, we're carrying quite a lot of weight
for the design decision of using the native ABI.

Functionally this hasn't ever really been the end of the world. Trampolines
aren't a known issue in terms of performance or code size. There's no known
faster way to invoke WebAssembly from the host (or vice versa). One major
downside of this design, however, is that `Func::new` requires Cranelift to
exist as a backend. This is because it needs to synthesize various entries in
the matrix of ABIs we have that aren't available at any other time. While this
is not the worst of issues in itself, it means that the C API cannot be built
without a compiler because the C API does not have access to `Func::wrap`.

Overall I'd like to reevaluate given where Wasmtime is today whether it
makes sense to keep the native ABI trampolines. Sure they're supposed to
be fast, but are they really that much faster than the array-call ABI as
an alternative? This commit is intended to measure this.

This commit removes the native ABI calling convention entirely. For
example `VMFuncRef` is now one pointer smaller. All of `TypedFunc` now
uses `*mut ValRaw` for loads/stores rather than dealing with ABI
business. The benchmarks with this PR are:

* `sync/no-hook/core - host-to-wasm - typed - nop` - 5% faster
* `sync/no-hook/core - host-to-wasm - typed - nop-params-and-results` - 10% slower
* `sync/no-hook/core - wasm-to-host - typed - nop` - no change
* `sync/no-hook/core - wasm-to-host - typed - nop-params-and-results` - 7% faster

These numbers are a bit surprising: I would have expected no change in both
"nop" benchmarks and for both params-and-results benchmarks to be slower.
Regardless, it is apparent that this is not a major change in terms of
performance given Wasmtime's current state. In general my hunch is that there
are more expensive sources of overhead than reads/writes from the stack when
dealing with wasm values (e.g. trap handling, store management, etc.).

Overall this commit feels like a large simplification of what we
currently do in `TypedFunc`:

* The number of ABIs that Wasmtime deals with is reduced by one. ABIs
  are pretty much always tricky and having fewer moving parts should
  help improve the understandability of the system.
* All of the `WasmTy` trait methods and the `TypedFunc` infrastructure are
  simplified. Traits now work with simple `load`/`store` methods rather than
  various other flavors of conversion.
* The native ABI's multi-return-value handling, which gave rise to significant
  complexity both in Wasmtime's Cranelift translation layer and in the traits
  backing `TypedFunc`, is now gone entirely.
* This aligns components and core wasm: components already always use the
  array ABI, and core wasm now additionally always uses the array ABI when
  communicating with the host.
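
As a loose illustration of the last two points: with the array ABI used
everywhere, the host-side conversion machinery reduces to reading and writing
`ValRaw` slots. The following is only a sketch of that idea under simplified
types, not Wasmtime's actual `WasmTy`/`WasmRet` traits:

```rust
use core::mem::MaybeUninit;

// Simplified stand-in for Wasmtime's `ValRaw` union of raw wasm values.
#[derive(Copy, Clone)]
pub union ValRaw {
    pub i32: i32,
    pub i64: i64,
}

// A load/store-style conversion trait: a value crossing the host/wasm boundary
// only needs to know how to read itself from, and write itself into, one
// `ValRaw` slot of the shared argument/result array.
pub trait WasmValue: Sized {
    unsafe fn load(slot: &ValRaw) -> Self;
    fn store(self, slot: &mut MaybeUninit<ValRaw>);
}

impl WasmValue for i32 {
    unsafe fn load(slot: &ValRaw) -> Self {
        slot.i32
    }
    fn store(self, slot: &mut MaybeUninit<ValRaw>) {
        slot.write(ValRaw { i32: self });
    }
}

impl WasmValue for i64 {
    unsafe fn load(slot: &ValRaw) -> Self {
        slot.i64
    }
    fn store(self, slot: &mut MaybeUninit<ValRaw>) {
        slot.write(ValRaw { i64: self });
    }
}

// Multiple results need no special return-pointer ABI: each result is simply
// stored into the next slot of the same array.
pub fn store_results(results: (i32, i64), slots: &mut [MaybeUninit<ValRaw>]) {
    results.0.store(&mut slots[0]);
    results.1.store(&mut slots[1]);
}
```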

I'll note that this still leaves one major ABI "complexity": host functions do
not have a wasm-ABI function pointer until they're "attached" to a `Store`
with a `Module`. That's required to avoid needing Cranelift to create host
functions, and that property is still true today. It's a bit simpler to
understand now, though, since `Func::new` and `Func::wrap` are treated
uniformly rather than one being special-cased.
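
To make that remaining wrinkle concrete: after this change a `VMFuncRef`
carries just the array-call and wasm-call entry points, and for a host-defined
function the wasm-call pointer may be absent until a module supplies a
wasm-to-array trampoline for the signature. A simplified sketch (types are
simplified relative to Wasmtime's actual definitions):

```rust
use core::ptr::NonNull;

// Illustrative stand-ins for Wasmtime's internal types.
pub struct VMOpaqueContext {
    _opaque: [u8; 0],
}
pub struct VMWasmCallFunction {
    _opaque: [u8; 0],
}
#[derive(Copy, Clone)]
pub union ValRaw {
    pub i32: i32,
    pub i64: i64,
}

// The uniform array-call signature used whenever the host is involved.
pub type VMArrayCallFunction = unsafe extern "C" fn(
    callee_vmctx: *mut VMOpaqueContext,
    caller_vmctx: *mut VMOpaqueContext,
    values_ptr: *mut ValRaw,
    values_len: usize,
);

// Post-change shape of a function reference: the separate `native_call`
// pointer is gone, which is what makes the struct one pointer smaller.
pub struct VMFuncRef {
    pub array_call: VMArrayCallFunction,
    // `None` for a host function that hasn't yet been paired with a module
    // providing a wasm-to-array trampoline for this signature.
    pub wasm_call: Option<NonNull<VMWasmCallFunction>>,
    pub type_index: u32,
    pub vmctx: *mut VMOpaqueContext,
}
```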

* Fix miri unsafety

prtest:full
Alex Crichton (committed by GitHub) · 1d11b265f9
56 changed files:

  1. 357  crates/cranelift/src/compiler.rs
  2.  26  crates/cranelift/src/compiler/component.rs
  3.  49  crates/cranelift/src/lib.rs
  4.  49  crates/environ/src/compile/mod.rs
  5.   8  crates/environ/src/compile/module_artifacts.rs
  6.  12  crates/environ/src/component/artifacts.rs
  7.   6  crates/environ/src/module_artifacts.rs
  8.  46  crates/environ/src/vmoffsets.rs
  9.  95  crates/wasmtime/src/compile.rs
 10.   9  crates/wasmtime/src/compile/runtime.rs
 11.  61  crates/wasmtime/src/lib.rs
 12.   9  crates/wasmtime/src/runtime/component/component.rs
 13.  57  crates/wasmtime/src/runtime/component/func.rs
 14.  10  crates/wasmtime/src/runtime/component/instance.rs
 15.   4  crates/wasmtime/src/runtime/component/mod.rs
 16. 294  crates/wasmtime/src/runtime/func.rs
 17. 297  crates/wasmtime/src/runtime/func/typed.rs
 18. 133  crates/wasmtime/src/runtime/gc/enabled/anyref.rs
 19. 133  crates/wasmtime/src/runtime/gc/enabled/externref.rs
 20.  79  crates/wasmtime/src/runtime/gc/enabled/i31.rs
 21.  43  crates/wasmtime/src/runtime/gc/noextern.rs
 22.  22  crates/wasmtime/src/runtime/instance.rs
 23.  25  crates/wasmtime/src/runtime/instantiate.rs
 24.  28  crates/wasmtime/src/runtime/module.rs
 25.   4  crates/wasmtime/src/runtime/module/registry.rs
 26.  12  crates/wasmtime/src/runtime/store/func_refs.rs
 27.  49  crates/wasmtime/src/runtime/trampoline/func.rs
 28.  22  crates/wasmtime/src/runtime/v128.rs
 29.  17  crates/wasmtime/src/runtime/vm.rs
 30.   7  crates/wasmtime/src/runtime/vm/component.rs
 31.   5  crates/wasmtime/src/runtime/vm/instance.rs
 32.  37  crates/wasmtime/src/runtime/vm/vmcontext.rs
 33.  67  crates/wasmtime/src/runtime/vm/vmcontext/vm_host_func_context.rs
 34.  26  crates/winch/src/compiler.rs
 35.  12  tests/disas/icall-loop.wat
 36.   6  tests/disas/icall-simd.wat
 37.   6  tests/disas/icall.wat
 38.   6  tests/disas/indirect-call-caching-exclude-0-index.wat
 39.   6  tests/disas/indirect-call-caching-exclude-table-export.wat
 40.   6  tests/disas/indirect-call-caching-exclude-table-writes.wat
 41.  22  tests/disas/indirect-call-caching-slot-limit-1.wat
 42.  22  tests/disas/indirect-call-caching-slot-limit-2.wat
 43.   8  tests/disas/indirect-call-caching.wat
 44.   6  tests/disas/indirect-call-no-caching.wat
 45.   6  tests/disas/readonly-funcrefs.wat
 46.   4  tests/disas/ref-func-0.wat
 47.  24  tests/disas/typed-funcrefs-eager-init.wat
 48.  24  tests/disas/typed-funcrefs.wat
 49.  12  tests/disas/winch/x64/call_indirect/call_indirect.wat
 50.   6  tests/disas/winch/x64/call_indirect/local_arg.wat
 51.   2  tests/disas/winch/x64/load/grow_load.wat
 52.   4  tests/disas/winch/x64/table/fill.wat
 53.   2  tests/disas/winch/x64/table/get.wat
 54.   2  tests/disas/winch/x64/table/grow.wat
 55.  30  tests/disas/winch/x64/table/init_copy_drop.wat
 56.   2  tests/disas/winch/x64/table/set.wat

357
crates/cranelift/src/compiler.rs

@ -1,8 +1,8 @@
use crate::debug::{DwarfSectionRelocTarget, ModuleMemoryOffset};
use crate::func_environ::FuncEnvironment;
use crate::{array_call_signature, native_call_signature, DEBUG_ASSERT_TRAP_CODE};
use crate::DEBUG_ASSERT_TRAP_CODE;
use crate::{array_call_signature, CompiledFunction, ModuleTextBuilder};
use crate::{builder::LinkOptions, wasm_call_signature, BuiltinFunctionSignatures};
use crate::{CompiledFunction, ModuleTextBuilder};
use anyhow::{Context as _, Result};
use cranelift_codegen::ir::{self, InstBuilder, MemFlags, UserExternalName, UserFuncName, Value};
use cranelift_codegen::isa::{
@ -311,77 +311,23 @@ impl wasmtime_environ::Compiler for Compiler {
Ok(Box::new(compiler.finish()?))
}
fn compile_native_to_wasm_trampoline(
&self,
translation: &ModuleTranslation<'_>,
types: &ModuleTypesBuilder,
def_func_index: DefinedFuncIndex,
) -> Result<Box<dyn Any + Send>, CompileError> {
let func_index = translation.module.func_index(def_func_index);
let sig = translation.module.functions[func_index].signature;
let wasm_func_ty = types[sig].unwrap_func();
let isa = &*self.isa;
let pointer_type = isa.pointer_type();
let func_index = translation.module.func_index(def_func_index);
let wasm_call_sig = wasm_call_signature(isa, wasm_func_ty, &self.tunables);
let native_call_sig = native_call_signature(isa, wasm_func_ty);
let mut compiler = self.function_compiler();
let func = ir::Function::with_name_signature(Default::default(), native_call_sig);
let (mut builder, block0) = compiler.builder(func);
let args = builder.func.dfg.block_params(block0).to_vec();
let vmctx = args[0];
// Since we are entering Wasm, save our SP.
//
// Assert that we were really given a core Wasm vmctx, since that's
// what we are assuming with our offsets below.
debug_assert_vmctx_kind(isa, &mut builder, vmctx, wasmtime_environ::VMCONTEXT_MAGIC);
let offsets = VMOffsets::new(isa.pointer_bytes(), &translation.module);
let vm_runtime_limits_offset = offsets.vmctx_runtime_limits();
save_last_wasm_entry_sp(
&mut builder,
pointer_type,
&offsets.ptr,
vm_runtime_limits_offset,
vmctx,
);
let ret = NativeRet::classify(isa, wasm_func_ty);
let wasm_args = ret.native_args(&args);
// Then call into Wasm.
let call = declare_and_call(&mut builder, wasm_call_sig, func_index.as_u32(), wasm_args);
// Forward the results along.
let results = builder.func.dfg.inst_results(call).to_vec();
ret.native_return(&mut builder, block0, &results);
builder.finalize();
Ok(Box::new(compiler.finish()?))
}
fn compile_wasm_to_native_trampoline(
fn compile_wasm_to_array_trampoline(
&self,
wasm_func_ty: &WasmFuncType,
) -> Result<Box<dyn Any + Send>, CompileError> {
let isa = &*self.isa;
let pointer_type = isa.pointer_type();
let wasm_call_sig = wasm_call_signature(isa, wasm_func_ty, &self.tunables);
let native_call_sig = native_call_signature(isa, wasm_func_ty);
let array_call_sig = array_call_signature(isa);
let mut compiler = self.function_compiler();
let func = ir::Function::with_name_signature(Default::default(), wasm_call_sig);
let (mut builder, block0) = compiler.builder(func);
let mut args = builder.func.dfg.block_params(block0).to_vec();
let args = builder.func.dfg.block_params(block0).to_vec();
let callee_vmctx = args[0];
let caller_vmctx = args[1];
let ret = NativeRet::classify(isa, wasm_func_ty);
// We are exiting Wasm, so save our PC and FP.
//
// Assert that the caller vmctx really is a core Wasm vmctx, since
@ -401,49 +347,31 @@ impl wasmtime_environ::Compiler for Compiler {
);
save_last_wasm_exit_fp_and_pc(&mut builder, pointer_type, &ptr, limits);
// If the native call signature for this function uses a return pointer
// then allocate the return pointer here on the stack and pass it as the
// last argument.
let slot = match &ret {
NativeRet::Bare => None,
NativeRet::Retptr { size, .. } => Some(builder.func.create_sized_stack_slot(
ir::StackSlotData::new(ir::StackSlotKind::ExplicitSlot, *size, 0),
)),
};
if let Some(slot) = slot {
args.push(builder.ins().stack_addr(pointer_type, slot, 0));
}
// Spill all wasm arguments to the stack in `ValRaw` slots.
let (args_base, args_len) =
self.allocate_stack_array_and_spill_args(wasm_func_ty, &mut builder, &args[2..]);
let args_len = builder.ins().iconst(pointer_type, i64::from(args_len));
// Load the actual callee out of the
// `VMNativeCallHostFuncContext::host_func`.
// `VMArrayCallHostFuncContext::host_func`.
let ptr_size = isa.pointer_bytes();
let callee = builder.ins().load(
pointer_type,
MemFlags::trusted(),
callee_vmctx,
ptr_size.vmnative_call_host_func_context_func_ref()
+ ptr_size.vm_func_ref_native_call(),
ptr_size.vmarray_call_host_func_context_func_ref() + ptr_size.vm_func_ref_array_call(),
);
// Do an indirect call to the callee.
let callee_signature = builder.func.import_signature(native_call_sig);
let call = builder.ins().call_indirect(callee_signature, callee, &args);
// Forward the results back to the caller. If a return pointer was in
// use for the native call then load the results from the return pointer
// to pass through as native return values in the wasm abi.
let mut results = builder.func.dfg.inst_results(call).to_vec();
if let NativeRet::Retptr { slots, .. } = ret {
let base = *args.last().unwrap();
assert_eq!(slots.len(), wasm_func_ty.returns().len() - 1);
for (offset, ty) in slots {
results.push(
builder
.ins()
.load(ty, ir::MemFlags::trusted(), base, offset),
);
}
}
let callee_signature = builder.func.import_signature(array_call_sig);
builder.ins().call_indirect(
callee_signature,
callee,
&[callee_vmctx, caller_vmctx, args_base, args_len],
);
let results =
self.load_values_from_array(wasm_func_ty.returns(), &mut builder, args_base, args_len);
builder.ins().return_(&results);
builder.finalize();
@ -491,35 +419,6 @@ impl wasmtime_environ::Compiler for Compiler {
Ok(ret)
}
fn emit_trampolines_for_array_call_host_func(
&self,
ty: &WasmFuncType,
host_fn: usize,
obj: &mut Object<'static>,
) -> Result<(FunctionLoc, FunctionLoc)> {
let mut wasm_to_array = self.wasm_to_array_trampoline(ty, host_fn)?;
let mut native_to_array = self.native_to_array_trampoline(ty, host_fn)?;
let mut builder = ModuleTextBuilder::new(obj, self, self.isa.text_section_builder(2));
let (_, wasm_to_array) =
builder.append_func("wasm_to_array", &mut wasm_to_array, |_| unreachable!());
let (_, native_to_array) =
builder.append_func("native_to_array", &mut native_to_array, |_| unreachable!());
let wasm_to_array = FunctionLoc {
start: u32::try_from(wasm_to_array.start).unwrap(),
length: u32::try_from(wasm_to_array.end - wasm_to_array.start).unwrap(),
};
let native_to_array = FunctionLoc {
start: u32::try_from(native_to_array.start).unwrap(),
length: u32::try_from(native_to_array.end - native_to_array.start).unwrap(),
};
builder.finish();
Ok((wasm_to_array, native_to_array))
}
fn triple(&self) -> &target_lexicon::Triple {
self.isa.triple()
}
@ -767,154 +666,6 @@ fn compile_uncached<'a>(
}
impl Compiler {
/// Creates a trampoline for calling a host function callee defined with the
/// "array" calling convention from a native calling convention caller.
///
/// This style of trampoline is used with `Func::new`-style callees and
/// `TypedFunc::call`-style callers.
///
/// Both callee and caller are on the host side, so there is no host/Wasm
/// transition and associated entry/exit state to maintain.
///
/// The `host_fn` is a function pointer in this process with the following
/// signature:
///
/// ```ignore
/// unsafe extern "C" fn(*mut VMContext, *mut VMContext, *mut ValRaw, usize)
/// ```
///
/// where the first two arguments are forwarded from the trampoline
/// generated here itself, and the second two arguments are a pointer/length
/// into stack-space of this trampoline with storage for both the arguments
/// to the function and the results.
///
/// Note that `host_fn` is an immediate which is an actual function pointer
/// in this process. As such this compiled trampoline is not suitable for
/// serialization.
fn native_to_array_trampoline(
&self,
ty: &WasmFuncType,
host_fn: usize,
) -> Result<CompiledFunction, CompileError> {
let isa = &*self.isa;
let pointer_type = isa.pointer_type();
let native_call_sig = native_call_signature(isa, ty);
let array_call_sig = array_call_signature(isa);
let mut compiler = self.function_compiler();
let func = ir::Function::with_name_signature(Default::default(), native_call_sig);
let (mut builder, block0) = compiler.builder(func);
let args = builder.func.dfg.block_params(block0).to_vec();
let ret = NativeRet::classify(isa, ty);
let wasm_args = &ret.native_args(&args)[2..];
let (values_vec_ptr, values_vec_len) =
self.allocate_stack_array_and_spill_args(ty, &mut builder, wasm_args);
let values_vec_len = builder
.ins()
.iconst(pointer_type, i64::from(values_vec_len));
let callee_args = [args[0], args[1], values_vec_ptr, values_vec_len];
let new_sig = builder.import_signature(array_call_sig);
let callee_value = builder.ins().iconst(pointer_type, host_fn as i64);
builder
.ins()
.call_indirect(new_sig, callee_value, &callee_args);
let results =
self.load_values_from_array(ty.returns(), &mut builder, values_vec_ptr, values_vec_len);
ret.native_return(&mut builder, block0, &results);
builder.finalize();
compiler.finish()
}
/// Creates a trampoline for WebAssembly to call a host function defined
/// with the "array" calling convention: where all the arguments are spilled
/// to an array on the stack and results are loaded from the stack array.
///
/// This style of trampoline is currently only used with the
/// `Func::new`-style created functions in the Wasmtime embedding API. The
/// generated trampoline has a function signature appropriate to the `ty`
/// specified (e.g. a System-V ABI) and will call a `host_fn` that has a
/// type signature of:
///
/// ```ignore
/// unsafe extern "C" fn(*mut VMContext, *mut VMContext, *mut ValRaw, usize)
/// ```
///
/// where the first two arguments are forwarded from the trampoline
/// generated here itself, and the second two arguments are a pointer/length
/// into stack-space of this trampoline with storage for both the arguments
/// to the function and the results.
///
/// Note that `host_fn` is an immediate which is an actual function pointer
/// in this process, and `limits` is a pointer to `VMRuntimeLimits`. As such
/// this compiled trampoline is not suitable for serialization, and only
/// valid for a particular store.
fn wasm_to_array_trampoline(
&self,
ty: &WasmFuncType,
host_fn: usize,
) -> Result<CompiledFunction, CompileError> {
let isa = &*self.isa;
let pointer_type = isa.pointer_type();
let wasm_call_sig = wasm_call_signature(isa, ty, &self.tunables);
let array_call_sig = array_call_signature(isa);
let mut compiler = self.function_compiler();
let func = ir::Function::with_name_signature(Default::default(), wasm_call_sig);
let (mut builder, block0) = compiler.builder(func);
let args = builder.func.dfg.block_params(block0).to_vec();
let caller_vmctx = args[1];
// Assert that we were really given a core Wasm vmctx, since that's
// what we are assuming with our offsets below.
debug_assert_vmctx_kind(
isa,
&mut builder,
caller_vmctx,
wasmtime_environ::VMCONTEXT_MAGIC,
);
let ptr_size = isa.pointer_bytes();
let limits = builder.ins().load(
pointer_type,
MemFlags::trusted(),
caller_vmctx,
ptr_size.vmcontext_runtime_limits(),
);
save_last_wasm_exit_fp_and_pc(&mut builder, pointer_type, &ptr_size, limits);
let (values_vec_ptr, values_vec_len) =
self.allocate_stack_array_and_spill_args(ty, &mut builder, &args[2..]);
let values_vec_len = builder
.ins()
.iconst(pointer_type, i64::from(values_vec_len));
let block_params = builder.func.dfg.block_params(block0);
let callee_args = [
block_params[0],
block_params[1],
values_vec_ptr,
values_vec_len,
];
let new_sig = builder.import_signature(array_call_sig);
let callee_value = builder.ins().iconst(pointer_type, host_fn as i64);
builder
.ins()
.call_indirect(new_sig, callee_value, &callee_args);
let results =
self.load_values_from_array(ty.returns(), &mut builder, values_vec_ptr, values_vec_len);
builder.ins().return_(&results);
builder.finalize();
compiler.finish()
}
/// This function will allocate a stack slot suitable for storing both the
/// arguments and return values of the function, and then the arguments will
/// all be stored in this block.
@ -1314,71 +1065,3 @@ fn save_last_wasm_exit_fp_and_pc(
ptr.vmruntime_limits_last_wasm_exit_pc(),
);
}
enum NativeRet {
Bare,
Retptr {
slots: Vec<(i32, ir::Type)>,
size: u32,
},
}
impl NativeRet {
fn classify(isa: &dyn TargetIsa, ty: &WasmFuncType) -> NativeRet {
fn align_to(val: i32, align: i32) -> i32 {
(val + (align - 1)) & !(align - 1)
}
match ty.returns() {
[] | [_] => NativeRet::Bare,
other => {
let mut offset = 0;
let mut offsets = Vec::new();
let mut max_align = 1;
for ty in other[1..].iter() {
let ty = crate::value_type(isa, *ty);
let size = ty.bytes();
let size = i32::try_from(size).unwrap();
offset = align_to(offset, size);
offsets.push((offset, ty));
offset += size;
max_align = max_align.max(size);
}
NativeRet::Retptr {
slots: offsets,
size: u32::try_from(align_to(offset, max_align)).unwrap(),
}
}
}
}
fn native_args<'a>(&self, args: &'a [ir::Value]) -> &'a [ir::Value] {
match self {
NativeRet::Bare => args,
NativeRet::Retptr { .. } => &args[..args.len() - 1],
}
}
fn native_return(
&self,
builder: &mut FunctionBuilder<'_>,
block0: ir::Block,
results: &[ir::Value],
) {
match self {
NativeRet::Bare => {
builder.ins().return_(&results);
}
NativeRet::Retptr { slots, .. } => {
let ptr = *builder.func.dfg.block_params(block0).last().unwrap();
let (first, rest) = results.split_first().unwrap();
assert_eq!(rest.len(), slots.len());
for (arg, (offset, ty)) in rest.iter().zip(slots) {
assert_eq!(builder.func.dfg.value_type(*arg), *ty);
builder.ins().store(MemFlags::trusted(), *arg, ptr, *offset);
}
builder.ins().return_(&[*first]);
}
}
}
}

26
crates/cranelift/src/compiler/component.rs

@ -1,9 +1,6 @@
//! Compilation support for the component model.
use crate::{
compiler::{Compiler, NativeRet},
ALWAYS_TRAP_CODE, CANNOT_ENTER_CODE,
};
use crate::{compiler::Compiler, ALWAYS_TRAP_CODE, CANNOT_ENTER_CODE};
use anyhow::Result;
use cranelift_codegen::ir::{self, InstBuilder, MemFlags};
use cranelift_codegen::isa::{CallConv, TargetIsa};
@ -28,7 +25,6 @@ struct TrampolineCompiler<'a> {
#[derive(Copy, Clone)]
enum Abi {
Wasm,
Native,
Array,
}
@ -48,7 +44,6 @@ impl<'a> TrampolineCompiler<'a> {
ir::UserFuncName::user(0, 0),
match abi {
Abi::Wasm => crate::wasm_call_signature(isa, ty, &compiler.tunables),
Abi::Native => crate::native_call_signature(isa, ty),
Abi::Array => crate::array_call_signature(isa),
},
);
@ -81,7 +76,7 @@ impl<'a> TrampolineCompiler<'a> {
}
// Transcoders can only actually be called by Wasm, so let's assert
// that here.
Abi::Native | Abi::Array => {
Abi::Array => {
self.builder
.ins()
.trap(ir::TrapCode::User(crate::DEBUG_ASSERT_TRAP_CODE));
@ -129,16 +124,10 @@ impl<'a> TrampolineCompiler<'a> {
let vmctx = args[0];
let wasm_func_ty = self.types[self.signature].unwrap_func();
// More handling is necessary here if this changes
assert!(matches!(
NativeRet::classify(self.isa, wasm_func_ty),
NativeRet::Bare
));
// Start off by spilling all the wasm arguments into a stack slot to be
// passed to the host function.
let (values_vec_ptr, values_vec_len) = match self.abi {
Abi::Wasm | Abi::Native => {
Abi::Wasm => {
let (ptr, len) = self.compiler.allocate_stack_array_and_spill_args(
wasm_func_ty,
&mut self.builder,
@ -255,7 +244,7 @@ impl<'a> TrampolineCompiler<'a> {
.call_indirect(host_sig, host_fn, &callee_args);
match self.abi {
Abi::Wasm | Abi::Native => {
Abi::Wasm => {
// After the host function has returned the results are loaded from
// `values_vec_ptr` and then returned.
let results = self.compiler.load_values_from_array(
@ -536,7 +525,7 @@ impl<'a> TrampolineCompiler<'a> {
// These trampolines can only actually be called by Wasm, so
// let's assert that here.
Abi::Native | Abi::Array => {
Abi::Array => {
self.builder
.ins()
.trap(ir::TrapCode::User(crate::DEBUG_ASSERT_TRAP_CODE));
@ -586,7 +575,7 @@ impl<'a> TrampolineCompiler<'a> {
match self.abi {
// Wasm and native ABIs pass parameters as normal function
// parameters.
Abi::Wasm | Abi::Native => block0_params,
Abi::Wasm => block0_params,
// The array ABI passes a pointer/length as the 3rd/4th arguments
// and those are used to load the actual wasm parameters.
@ -607,7 +596,7 @@ impl<'a> TrampolineCompiler<'a> {
fn abi_store_results(&mut self, results: &[ir::Value]) {
match self.abi {
// Wasm/native ABIs return values as usual.
Abi::Wasm | Abi::Native => {
Abi::Wasm => {
self.builder.ins().return_(results);
}
@ -684,7 +673,6 @@ impl ComponentCompiler for Compiler {
Ok(AllCallFunc {
wasm_call: compile(Abi::Wasm)?,
array_call: compile(Abi::Array)?,
native_call: compile(Abi::Native)?,
})
}
}

49
crates/cranelift/src/lib.rs

@ -131,55 +131,6 @@ fn value_type(isa: &dyn TargetIsa, ty: WasmValType) -> ir::types::Type {
}
}
/// Get the Cranelift signature with the native calling convention for the given
/// Wasm function type.
///
/// This parameters will start with the callee and caller VM contexts, followed
/// by the translation of each of the Wasm parameter types to native types. The
/// results are the Wasm result types translated to native types.
///
/// The signature uses the wasmtime variant of the target's default calling
/// convention. The only difference from the default calling convention is how
/// multiple results are handled.
///
/// When there is only a single result, or zero results, these signatures are
/// suitable for calling from the host via
///
/// ```ignore
/// unsafe extern "C" fn(
/// callee_vmctx: *mut VMOpaqueContext,
/// caller_vmctx: *mut VMOpaqueContext,
/// // ...wasm parameter types...
/// ) -> // ...wasm result type...
/// ```
///
/// When there are more than one results, these signatures are suitable for
/// calling from the host via
///
/// ```ignore
/// unsafe extern "C" fn(
/// callee_vmctx: *mut VMOpaqueContext,
/// caller_vmctx: *mut VMOpaqueContext,
/// // ...wasm parameter types...
/// retptr: *mut (),
/// ) -> // ...wasm result type 0...
/// ```
///
/// where the first result is returned directly and the rest via the return
/// pointer.
fn native_call_signature(isa: &dyn TargetIsa, wasm: &WasmFuncType) -> ir::Signature {
let mut sig = blank_sig(isa, CallConv::triple_default(isa.triple()));
let cvt = |ty: &WasmValType| ir::AbiParam::new(value_type(isa, *ty));
sig.params.extend(wasm.params().iter().map(&cvt));
if let Some(first_ret) = wasm.returns().get(0) {
sig.returns.push(cvt(first_ret));
}
if wasm.returns().len() > 1 {
sig.params.push(ir::AbiParam::new(isa.pointer_type()));
}
sig
}
/// Get the Cranelift signature for all array-call functions, that is:
///
/// ```ignore

49
crates/environ/src/compile/mod.rs

@ -209,24 +209,12 @@ pub trait Compiler: Send + Sync {
index: DefinedFuncIndex,
) -> Result<Box<dyn Any + Send>, CompileError>;
/// Compile a trampoline for a native-call host function caller calling the
/// `index`th Wasm function.
///
/// The trampoline should save the necessary state to record the
/// host-to-Wasm transition (e.g. registers used for fast stack walking).
fn compile_native_to_wasm_trampoline(
&self,
translation: &ModuleTranslation<'_>,
types: &ModuleTypesBuilder,
index: DefinedFuncIndex,
) -> Result<Box<dyn Any + Send>, CompileError>;
/// Compile a trampoline for a Wasm caller calling a native callee with the
/// Compile a trampoline for a Wasm caller calling a array callee with the
/// given signature.
///
/// The trampoline should save the necessary state to record the
/// Wasm-to-host transition (e.g. registers used for fast stack walking).
fn compile_wasm_to_native_trampoline(
fn compile_wasm_to_array_trampoline(
&self,
wasm_func_ty: &WasmFuncType,
) -> Result<Box<dyn Any + Send>, CompileError>;
@ -286,39 +274,6 @@ pub trait Compiler: Send + Sync {
resolve_reloc: &dyn Fn(usize, RelocationTarget) -> usize,
) -> Result<Vec<(SymbolId, FunctionLoc)>>;
/// Inserts two trampolines into `obj` for a array-call host function:
///
/// 1. A wasm-call trampoline: A trampoline that takes arguments in their
/// wasm-call locations, moves them to their array-call locations, calls
/// the array-call host function, and finally moves the return values
/// from the array-call locations to the wasm-call return
/// locations. Additionally, this trampoline manages the wasm-to-host
/// state transition for the runtime.
///
/// 2. A native-call trampoline: A trampoline that takes arguments in their
/// native-call locations, moves them to their array-call locations,
/// calls the array-call host function, and finally moves the return
/// values from the array-call locations to the native-call return
/// locations. Does not need to manage any wasm/host state transitions,
/// since both caller and callee are on the host side.
///
/// This will configure the same sections as `append_code`, but will likely
/// be much smaller.
///
/// The two returned `FunctionLoc` structures describe where to find these
/// trampolines in the text section, respectively.
///
/// These trampolines are only valid for in-process JIT usage. They bake in
/// the function pointer to the host code.
fn emit_trampolines_for_array_call_host_func(
&self,
ty: &WasmFuncType,
// Actually `host_fn: VMArrayCallFunction` but that type is not
// available in `wasmtime-environ`.
host_fn: usize,
obj: &mut Object<'static>,
) -> Result<(FunctionLoc, FunctionLoc)>;
/// Creates a new `Object` file which is used to build the results of a
/// compilation into.
///

8
crates/environ/src/compile/module_artifacts.rs

@ -106,8 +106,8 @@ impl<'a> ObjectBuilder<'a> {
/// as well as where the functions are located in the text section and any
/// associated trampolines.
///
/// * `wasm_to_native_trampolines` - list of all trampolines necessary for
/// Wasm callers calling native callees (e.g. `Func::wrap`). One for each
/// * `wasm_to_array_trampolines` - list of all trampolines necessary for
/// Wasm callers calling array callees (e.g. `Func::wrap`). One for each
/// function signature in the module. Must be sorted by `SignatureIndex`.
///
/// Returns the `CompiledModuleInfo` corresponding to this core Wasm module
@ -117,7 +117,7 @@ impl<'a> ObjectBuilder<'a> {
&mut self,
translation: ModuleTranslation<'_>,
funcs: PrimaryMap<DefinedFuncIndex, CompiledFunctionInfo>,
wasm_to_native_trampolines: Vec<(ModuleInternedTypeIndex, FunctionLoc)>,
wasm_to_array_trampolines: Vec<(ModuleInternedTypeIndex, FunctionLoc)>,
) -> Result<CompiledModuleInfo> {
let ModuleTranslation {
mut module,
@ -220,7 +220,7 @@ impl<'a> ObjectBuilder<'a> {
Ok(CompiledModuleInfo {
module,
funcs,
wasm_to_native_trampolines,
wasm_to_array_trampolines,
func_names,
meta: Metadata {
native_debug_info_present: self.tunables.generate_native_debuginfo,

12
crates/environ/src/component/artifacts.rs

@ -32,20 +32,19 @@ pub struct CompiledComponentInfo {
/// These are the
///
/// 1. Wasm-call,
/// 2. array-call, and
/// 3. native-call
/// 2. array-call
///
/// function pointers that end up in a `VMFuncRef` for each
/// lowering.
pub trampolines: PrimaryMap<TrampolineIndex, AllCallFunc<FunctionLoc>>,
/// The location of the wasm-to-native trampoline for the `resource.drop`
/// The location of the wasm-to-array trampoline for the `resource.drop`
/// intrinsic.
pub resource_drop_wasm_to_native_trampoline: Option<FunctionLoc>,
pub resource_drop_wasm_to_array_trampoline: Option<FunctionLoc>,
}
/// A triple of related functions/trampolines variants with differing calling
/// conventions: `{wasm,array,native}_call`.
/// conventions: `{wasm,array}_call`.
///
/// Generic so we can use this with either the `Box<dyn Any + Send>`s that
/// implementations of the compiler trait return or with `FunctionLoc`s inside
@ -56,8 +55,6 @@ pub struct AllCallFunc<T> {
pub wasm_call: T,
/// The function exposing the array calling convention.
pub array_call: T,
/// The function exposing the native calling convention.
pub native_call: T,
}
impl<T> AllCallFunc<T> {
@ -66,7 +63,6 @@ impl<T> AllCallFunc<T> {
AllCallFunc {
wasm_call: f(self.wasm_call),
array_call: f(self.array_call),
native_call: f(self.native_call),
}
}
}

6
crates/environ/src/module_artifacts.rs

@ -19,8 +19,6 @@ pub struct CompiledFunctionInfo {
pub wasm_func_loc: FunctionLoc,
/// A trampoline for array callers (e.g. `Func::new`) calling into this function (if needed).
pub array_to_wasm_trampoline: Option<FunctionLoc>,
/// A trampoline for native callers (e.g. `Func::wrap`) calling into this function (if needed).
pub native_to_wasm_trampoline: Option<FunctionLoc>,
}
/// Information about a function, such as trap information, address map,
@ -70,9 +68,9 @@ pub struct CompiledModuleInfo {
/// Sorted list, by function index, of names we have for this module.
pub func_names: Vec<FunctionName>,
/// Metadata about wasm-to-native trampolines. Used when exposing a native
/// Metadata about wasm-to-array trampolines. Used when exposing a native
/// callee (e.g. `Func::wrap`) to a Wasm caller. Sorted by signature index.
pub wasm_to_native_trampolines: Vec<(ModuleInternedTypeIndex, FunctionLoc)>,
pub wasm_to_array_trampolines: Vec<(ModuleInternedTypeIndex, FunctionLoc)>,
/// General compilation metadata.
pub meta: Metadata,

46
crates/environ/src/vmoffsets.rs

@ -119,40 +119,34 @@ pub trait PtrSize {
self.vmcontext_runtime_limits() + self.size()
}
/// The offset of the `native_call` field.
#[inline]
fn vm_func_ref_native_call(&self) -> u8 {
0 * self.size()
}
/// The offset of the `array_call` field.
#[inline]
fn vm_func_ref_array_call(&self) -> u8 {
1 * self.size()
0 * self.size()
}
/// The offset of the `wasm_call` field.
#[inline]
fn vm_func_ref_wasm_call(&self) -> u8 {
2 * self.size()
1 * self.size()
}
/// The offset of the `type_index` field.
#[inline]
fn vm_func_ref_type_index(&self) -> u8 {
3 * self.size()
2 * self.size()
}
/// The offset of the `vmctx` field.
#[inline]
fn vm_func_ref_vmctx(&self) -> u8 {
4 * self.size()
3 * self.size()
}
/// Return the size of `VMFuncRef`.
#[inline]
fn size_of_vm_func_ref(&self) -> u8 {
5 * self.size()
4 * self.size()
}
/// Return the size of `VMGlobalDefinition`; this is the size of the largest value type (i.e. a
@ -225,10 +219,8 @@ pub trait PtrSize {
// Offsets within `VMArrayCallHostFuncContext`.
// Offsets within `VMNativeCallHostFuncContext`.
/// Return the offset of `VMNativeCallHostFuncContext::func_ref`.
fn vmnative_call_host_func_context_func_ref(&self) -> u8 {
/// Return the offset of `VMArrayCallHostFuncContext::func_ref`.
fn vmarray_call_host_func_context_func_ref(&self) -> u8 {
u8::try_from(align(
u32::try_from(core::mem::size_of::<u32>()).unwrap(),
u32::from(self.size()),
@ -526,28 +518,22 @@ impl<P: PtrSize> VMOffsets<P> {
0 * self.pointer_size()
}
/// The offset of the `native_call` field.
#[inline]
pub fn vmfunction_import_native_call(&self) -> u8 {
1 * self.pointer_size()
}
/// The offset of the `array_call` field.
#[inline]
pub fn vmfunction_import_array_call(&self) -> u8 {
2 * self.pointer_size()
1 * self.pointer_size()
}
/// The offset of the `vmctx` field.
#[inline]
pub fn vmfunction_import_vmctx(&self) -> u8 {
3 * self.pointer_size()
2 * self.pointer_size()
}
/// Return the size of `VMFunctionImport`.
#[inline]
pub fn size_of_vmfunction_import(&self) -> u8 {
4 * self.pointer_size()
3 * self.pointer_size()
}
}
@ -860,12 +846,6 @@ impl<P: PtrSize> VMOffsets<P> {
self.vmctx_vmfunction_import(index) + u32::from(self.vmfunction_import_wasm_call())
}
/// Return the offset to the `native_call` field in `*const VMFunctionBody` index `index`.
#[inline]
pub fn vmctx_vmfunction_import_native_call(&self, index: FuncIndex) -> u32 {
self.vmctx_vmfunction_import(index) + u32::from(self.vmfunction_import_native_call())
}
/// Return the offset to the `array_call` field in `*const VMFunctionBody` index `index`.
#[inline]
pub fn vmctx_vmfunction_import_array_call(&self, index: FuncIndex) -> u32 {
@ -992,12 +972,6 @@ pub const VMCONTEXT_MAGIC: u32 = u32::from_le_bytes(*b"core");
/// and double-checked on `VMArrayCallHostFuncContext::from_opaque`.
pub const VM_ARRAY_CALL_HOST_FUNC_MAGIC: u32 = u32::from_le_bytes(*b"ACHF");
/// Equivalent of `VMCONTEXT_MAGIC` except for native-call host functions.
///
/// This is stored at the start of all `VMNativeCallHostFuncContext` structures
/// and double-checked on `VMNativeCallHostFuncContext::from_opaque`.
pub const VM_NATIVE_CALL_HOST_FUNC_MAGIC: u32 = u32::from_le_bytes(*b"NCHF");
#[cfg(test)]
mod tests {
use crate::vmoffsets::align;

95
crates/wasmtime/src/compile.rs

@ -46,8 +46,6 @@ pub use self::code_builder::{CodeBuilder, HashedEngineCompileEnv};
#[cfg(feature = "runtime")]
mod runtime;
#[cfg(feature = "runtime")]
pub use self::runtime::finish_object;
/// Converts an input binary-encoded WebAssembly module to compilation
/// artifacts and type information.
@ -186,8 +184,8 @@ pub(crate) fn build_component_artifacts<'a, T: FinishedObject>(
let info = CompiledComponentInfo {
component: component.component,
trampolines: compilation_artifacts.trampolines,
resource_drop_wasm_to_native_trampoline: compilation_artifacts
.resource_drop_wasm_to_native_trampoline,
resource_drop_wasm_to_array_trampoline: compilation_artifacts
.resource_drop_wasm_to_array_trampoline,
};
let artifacts = ComponentArtifacts {
info,
@ -231,9 +229,8 @@ impl CompileKey {
const WASM_FUNCTION_KIND: u32 = Self::new_kind(0);
const ARRAY_TO_WASM_TRAMPOLINE_KIND: u32 = Self::new_kind(1);
const NATIVE_TO_WASM_TRAMPOLINE_KIND: u32 = Self::new_kind(2);
const WASM_TO_NATIVE_TRAMPOLINE_KIND: u32 = Self::new_kind(3);
const WASM_TO_BUILTIN_TRAMPOLINE_KIND: u32 = Self::new_kind(4);
const WASM_TO_ARRAY_TRAMPOLINE_KIND: u32 = Self::new_kind(2);
const WASM_TO_BUILTIN_TRAMPOLINE_KIND: u32 = Self::new_kind(3);
const fn new_kind(kind: u32) -> u32 {
assert!(kind < (1 << Self::KIND_BITS));
@ -258,17 +255,9 @@ impl CompileKey {
}
}
fn native_to_wasm_trampoline(module: StaticModuleIndex, index: DefinedFuncIndex) -> Self {
debug_assert_eq!(module.as_u32() & Self::KIND_MASK, 0);
Self {
namespace: Self::NATIVE_TO_WASM_TRAMPOLINE_KIND | module.as_u32(),
index: index.as_u32(),
}
}
fn wasm_to_native_trampoline(index: ModuleInternedTypeIndex) -> Self {
fn wasm_to_array_trampoline(index: ModuleInternedTypeIndex) -> Self {
Self {
namespace: Self::WASM_TO_NATIVE_TRAMPOLINE_KIND,
namespace: Self::WASM_TO_ARRAY_TRAMPOLINE_KIND,
index: index.as_u32(),
}
}
@ -283,8 +272,8 @@ impl CompileKey {
#[cfg(feature = "component-model")]
impl CompileKey {
const TRAMPOLINE_KIND: u32 = Self::new_kind(5);
const RESOURCE_DROP_WASM_TO_NATIVE_KIND: u32 = Self::new_kind(6);
const TRAMPOLINE_KIND: u32 = Self::new_kind(4);
const RESOURCE_DROP_WASM_TO_ARRAY_KIND: u32 = Self::new_kind(5);
fn trampoline(index: wasmtime_environ::component::TrampolineIndex) -> Self {
Self {
@ -293,9 +282,9 @@ impl CompileKey {
}
}
fn resource_drop_wasm_to_native_trampoline() -> Self {
fn resource_drop_wasm_to_array_trampoline() -> Self {
Self {
namespace: Self::RESOURCE_DROP_WASM_TO_NATIVE_KIND,
namespace: Self::RESOURCE_DROP_WASM_TO_ARRAY_KIND,
index: 0,
}
}
@ -406,9 +395,9 @@ impl<'a> CompileInputs<'a> {
if let Some(sig) = types.find_resource_drop_signature() {
ret.push_input(move |compiler| {
let trampoline =
compiler.compile_wasm_to_native_trampoline(types[sig].unwrap_func())?;
compiler.compile_wasm_to_array_trampoline(types[sig].unwrap_func())?;
Ok(CompileOutput {
key: CompileKey::resource_drop_wasm_to_native_trampoline(),
key: CompileKey::resource_drop_wasm_to_array_trampoline(),
symbol: "resource_drop_trampoline".to_string(),
function: CompiledFunction::Function(trampoline),
info: None,
@ -513,25 +502,6 @@ impl<'a> CompileInputs<'a> {
info: None,
})
});
self.push_input(move |compiler| {
let func_index = translation.module.func_index(def_func_index);
let trampoline = compiler.compile_native_to_wasm_trampoline(
translation,
types,
def_func_index,
)?;
Ok(CompileOutput {
key: CompileKey::native_to_wasm_trampoline(module, def_func_index),
symbol: format!(
"wasm[{}]::native_to_wasm_trampoline[{}]",
module.as_u32(),
func_index.as_u32()
),
function: CompiledFunction::Function(trampoline),
info: None,
})
});
}
}
}
@ -544,11 +514,11 @@ impl<'a> CompileInputs<'a> {
}
let trampoline_func_ty = types[trampoline_type_index].unwrap_func();
self.push_input(move |compiler| {
let trampoline = compiler.compile_wasm_to_native_trampoline(trampoline_func_ty)?;
let trampoline = compiler.compile_wasm_to_array_trampoline(trampoline_func_ty)?;
Ok(CompileOutput {
key: CompileKey::wasm_to_native_trampoline(trampoline_type_index),
key: CompileKey::wasm_to_array_trampoline(trampoline_type_index),
symbol: format!(
"signatures[{}]::wasm_to_native_trampoline",
"signatures[{}]::wasm_to_array_trampoline",
trampoline_type_index.as_u32()
),
function: CompiledFunction::Function(trampoline),
@ -648,13 +618,10 @@ impl UnlinkedCompileOutputs {
CompiledFunction::AllCallFunc(f) => {
let array_call = compiled_funcs.len();
compiled_funcs.push((format!("{}_array_call", x.symbol), f.array_call));
let native_call = compiled_funcs.len();
compiled_funcs.push((format!("{}_native_call", x.symbol), f.native_call));
let wasm_call = compiled_funcs.len();
compiled_funcs.push((format!("{}_wasm_call", x.symbol), f.wasm_call));
CompiledFunction::AllCallFunc(wasmtime_environ::component::AllCallFunc {
array_call,
native_call,
wasm_call,
})
}
@ -662,7 +629,6 @@ impl UnlinkedCompileOutputs {
if x.key.kind() == CompileKey::WASM_FUNCTION_KIND
|| x.key.kind() == CompileKey::ARRAY_TO_WASM_TRAMPOLINE_KIND
|| x.key.kind() == CompileKey::NATIVE_TO_WASM_TRAMPOLINE_KIND
{
indices
.compiled_func_index_to_module
@ -824,16 +790,11 @@ impl FunctionIndices {
.remove(&CompileKey::ARRAY_TO_WASM_TRAMPOLINE_KIND)
.unwrap_or_default();
let mut native_to_wasm_trampolines = self
.indices
.remove(&CompileKey::NATIVE_TO_WASM_TRAMPOLINE_KIND)
.unwrap_or_default();
// NB: unlike the above maps this is not emptied out during iteration
// since each module may reach into different portions of this map.
let wasm_to_native_trampolines = self
let wasm_to_array_trampolines = self
.indices
.remove(&CompileKey::WASM_TO_NATIVE_TRAMPOLINE_KIND)
.remove(&CompileKey::WASM_TO_ARRAY_TRAMPOLINE_KIND)
.unwrap_or_default();
artifacts.modules = translations
@ -870,18 +831,10 @@ impl FunctionIndices {
))
.map(|x| symbol_ids_and_locs[x.unwrap_function()].1);
let native_to_wasm_trampoline = native_to_wasm_trampolines
.remove(&CompileKey::native_to_wasm_trampoline(
key.module(),
DefinedFuncIndex::from_u32(key.index),
))
.map(|x| symbol_ids_and_locs[x.unwrap_function()].1);
CompiledFunctionInfo {
wasm_func_info,
wasm_func_loc,
array_to_wasm_trampoline,
native_to_wasm_trampoline,
}
})
.collect();
@ -894,17 +847,17 @@ impl FunctionIndices {
.filter(|idx| types[*idx].is_func())
.map(|idx| types.trampoline_type(idx))
.collect::<BTreeSet<_>>();
let wasm_to_native_trampolines = unique_and_sorted_trampoline_sigs
let wasm_to_array_trampolines = unique_and_sorted_trampoline_sigs
.iter()
.map(|idx| {
let trampoline = types.trampoline_type(*idx);
let key = CompileKey::wasm_to_native_trampoline(trampoline);
let compiled = wasm_to_native_trampolines[&key];
let key = CompileKey::wasm_to_array_trampoline(trampoline);
let compiled = wasm_to_array_trampolines[&key];
(*idx, symbol_ids_and_locs[compiled.unwrap_function()].1)
})
.collect();
obj.append(translation, funcs, wasm_to_native_trampolines)
obj.append(translation, funcs, wasm_to_array_trampolines)
})
.collect::<Result<PrimaryMap<_, _>>>()?;
@ -919,10 +872,10 @@ impl FunctionIndices {
.collect();
let map = self
.indices
.remove(&CompileKey::RESOURCE_DROP_WASM_TO_NATIVE_KIND)
.remove(&CompileKey::RESOURCE_DROP_WASM_TO_ARRAY_KIND)
.unwrap_or_default();
assert!(map.len() <= 1);
artifacts.resource_drop_wasm_to_native_trampoline = map
artifacts.resource_drop_wasm_to_array_trampoline = map
.into_iter()
.next()
.map(|(_id, x)| symbol_ids_and_locs[x.unwrap_function()].1);
@ -948,7 +901,7 @@ struct Artifacts {
wasmtime_environ::component::AllCallFunc<wasmtime_environ::FunctionLoc>,
>,
#[cfg(feature = "component-model")]
resource_drop_wasm_to_native_trampoline: Option<wasmtime_environ::FunctionLoc>,
resource_drop_wasm_to_array_trampoline: Option<wasmtime_environ::FunctionLoc>,
}
impl Artifacts {

9
crates/wasmtime/src/compile/runtime.rs

@ -98,15 +98,6 @@ fn publish_mmap(mmap: MmapVec) -> Result<Arc<CodeMemory>> {
Ok(Arc::new(code))
}
/// Write an object out to an [`MmapVec`] so that it can be marked executable
/// before running.
///
/// The returned `MmapVec` will contain the serialized version of `obj`
/// and is sized appropriately to the exact size of the object serialized.
pub fn finish_object(obj: ObjectBuilder<'_>) -> Result<MmapVec> {
Ok(<MmapVecWrapper as FinishedObject>::finish_object(obj)?.0)
}
pub(crate) struct MmapVecWrapper(pub MmapVec);
impl FinishedObject for MmapVecWrapper {

61
crates/wasmtime/src/lib.rs

@ -278,6 +278,67 @@ extern crate alloc;
use wasmtime_environ::prelude;
/// A helper macro to safely map `MaybeUninit<T>` to `MaybeUninit<U>` where `U`
/// is a field projection within `T`.
///
/// This is intended to be invoked as:
///
/// ```ignore
/// struct MyType {
/// field: u32,
/// }
///
/// let initial: &mut MaybeUninit<MyType> = ...;
/// let field: &mut MaybeUninit<u32> = map_maybe_uninit!(initial.field);
/// ```
///
/// Note that array accesses are also supported:
///
/// ```ignore
///
/// let initial: &mut MaybeUninit<[u32; 2]> = ...;
/// let element: &mut MaybeUninit<u32> = map_maybe_uninit!(initial[1]);
/// ```
#[doc(hidden)]
#[macro_export]
macro_rules! map_maybe_uninit {
($maybe_uninit:ident $($field:tt)*) => ({
#[allow(unused_unsafe)]
{
unsafe {
use $crate::MaybeUninitExt;
let m: &mut core::mem::MaybeUninit<_> = $maybe_uninit;
// Note the usage of `addr_of_mut!` here which is an attempt to "stay
// safe" here where we never accidentally create `&mut T` where `T` is
// actually uninitialized, hopefully appeasing the Rust unsafe
// guidelines gods.
m.map(|p| core::ptr::addr_of_mut!((*p)$($field)*))
}
}
})
}
#[doc(hidden)]
pub trait MaybeUninitExt<T> {
/// Maps `MaybeUninit<T>` to `MaybeUninit<U>` using the closure provided.
///
/// Note that this is `unsafe` as there is no guarantee that `U` comes from
/// `T`.
unsafe fn map<U>(&mut self, f: impl FnOnce(*mut T) -> *mut U)
-> &mut core::mem::MaybeUninit<U>;
}
impl<T> MaybeUninitExt<T> for core::mem::MaybeUninit<T> {
unsafe fn map<U>(
&mut self,
f: impl FnOnce(*mut T) -> *mut U,
) -> &mut core::mem::MaybeUninit<U> {
let new_ptr = f(self.as_mut_ptr());
core::mem::transmute::<*mut U, &mut core::mem::MaybeUninit<U>>(new_ptr)
}
}
#[cfg(feature = "runtime")]
mod runtime;
#[cfg(feature = "runtime")]

9
crates/wasmtime/src/runtime/component/component.rs

@ -2,9 +2,7 @@ use crate::component::matching::InstanceType;
use crate::component::types;
use crate::prelude::*;
use crate::runtime::vm::component::ComponentRuntimeInfo;
use crate::runtime::vm::{
VMArrayCallFunction, VMFuncRef, VMFunctionBody, VMNativeCallFunction, VMWasmCallFunction,
};
use crate::runtime::vm::{VMArrayCallFunction, VMFuncRef, VMFunctionBody, VMWasmCallFunction};
use crate::{
code::CodeObject, code_memory::CodeMemory, type_registry::TypeCollection, Engine, Module,
ResourcesRequired,
@ -87,7 +85,6 @@ struct ComponentInner {
pub(crate) struct AllCallFuncPointers {
pub wasm_call: NonNull<VMWasmCallFunction>,
pub array_call: VMArrayCallFunction,
pub native_call: NonNull<VMNativeCallFunction>,
}
impl Component {
@ -450,7 +447,6 @@ impl Component {
let AllCallFunc {
wasm_call,
array_call,
native_call,
} = &self.inner.info.trampolines[index];
AllCallFuncPointers {
wasm_call: self.func(wasm_call).cast(),
@ -459,7 +455,6 @@ impl Component {
self.func(array_call),
)
},
native_call: self.func(native_call).cast(),
}
}
@ -505,7 +500,7 @@ impl Component {
let wasm_call = self
.inner
.info
.resource_drop_wasm_to_native_trampoline
.resource_drop_wasm_to_array_trampoline
.as_ref()
.map(|i| self.func(i).cast());
VMFuncRef {

57
crates/wasmtime/src/runtime/component/func.rs

@ -16,63 +16,6 @@ use wasmtime_environ::component::{
TypeFuncIndex, TypeTuple, MAX_FLAT_PARAMS, MAX_FLAT_RESULTS,
};
/// A helper macro to safely map `MaybeUninit<T>` to `MaybeUninit<U>` where `U`
/// is a field projection within `T`.
///
/// This is intended to be invoked as:
///
/// ```ignore
/// struct MyType {
/// field: u32,
/// }
///
/// let initial: &mut MaybeUninit<MyType> = ...;
/// let field: &mut MaybeUninit<u32> = map_maybe_uninit!(initial.field);
/// ```
///
/// Note that array accesses are also supported:
///
/// ```ignore
///
/// let initial: &mut MaybeUninit<[u32; 2]> = ...;
/// let element: &mut MaybeUninit<u32> = map_maybe_uninit!(initial[1]);
/// ```
#[doc(hidden)]
#[macro_export]
macro_rules! map_maybe_uninit {
($maybe_uninit:ident $($field:tt)*) => ({
#[allow(unused_unsafe)]
{
unsafe {
use $crate::component::__internal::MaybeUninitExt;
let m: &mut core::mem::MaybeUninit<_> = $maybe_uninit;
// Note the usage of `addr_of_mut!` here which is an attempt to "stay
// safe" here where we never accidentally create `&mut T` where `T` is
// actually uninitialized, hopefully appeasing the Rust unsafe
// guidelines gods.
m.map(|p| core::ptr::addr_of_mut!((*p)$($field)*))
}
}
})
}
#[doc(hidden)]
pub trait MaybeUninitExt<T> {
/// Maps `MaybeUninit<T>` to `MaybeUninit<U>` using the closure provided.
///
/// Note that this is `unsafe` as there is no guarantee that `U` comes from
/// `T`.
unsafe fn map<U>(&mut self, f: impl FnOnce(*mut T) -> *mut U) -> &mut MaybeUninit<U>;
}
impl<T> MaybeUninitExt<T> for MaybeUninit<T> {
unsafe fn map<U>(&mut self, f: impl FnOnce(*mut T) -> *mut U) -> &mut MaybeUninit<U> {
let new_ptr = f(self.as_mut_ptr());
core::mem::transmute::<*mut U, &mut MaybeUninit<U>>(new_ptr)
}
}
mod host;
mod options;
mod typed;

10
crates/wasmtime/src/runtime/component/instance.rs

@ -324,13 +324,9 @@ impl<'a> Instantiator<'a> {
None => panic!("found unregistered signature: {sig:?}"),
};
self.data.state.set_trampoline(
idx,
ptrs.wasm_call,
ptrs.native_call,
ptrs.array_call,
signature,
);
self.data
.state
.set_trampoline(idx, ptrs.wasm_call, ptrs.array_call, signature);
}
for initializer in env_component.initializers.iter() {

4
crates/wasmtime/src/runtime/component/mod.rs

@ -131,12 +131,12 @@ pub(crate) use self::resources::HostResourceData;
pub mod __internal {
pub use super::func::{
bad_type_info, format_flags, lower_payload, typecheck_enum, typecheck_flags,
typecheck_record, typecheck_variant, ComponentVariant, LiftContext, LowerContext,
MaybeUninitExt, Options,
typecheck_record, typecheck_variant, ComponentVariant, LiftContext, LowerContext, Options,
};
pub use super::matching::InstanceType;
pub use crate::map_maybe_uninit;
pub use crate::store::StoreOpaque;
pub use crate::MaybeUninitExt;
pub use alloc::boxed::Box;
pub use alloc::string::String;
pub use alloc::vec::Vec;

294
crates/wasmtime/src/runtime/func.rs

@ -1,7 +1,7 @@
use crate::prelude::*;
use crate::runtime::vm::{
ExportFunction, SendSyncPtr, StoreBox, VMArrayCallHostFuncContext, VMContext, VMFuncRef,
VMFunctionImport, VMNativeCallHostFuncContext, VMOpaqueContext,
VMFunctionImport, VMOpaqueContext,
};
use crate::runtime::Uninhabited;
use crate::store::{AutoAssertNoGc, StoreData, StoreOpaque, Stored};
@ -14,7 +14,7 @@ use alloc::sync::Arc;
use anyhow::{bail, Context as _, Error, Result};
use core::ffi::c_void;
use core::future::Future;
use core::mem;
use core::mem::{self, MaybeUninit};
use core::num::NonZeroUsize;
use core::pin::Pin;
use core::ptr::{self, NonNull};
@ -1297,18 +1297,17 @@ impl Func {
wasm_call: if let Some(wasm_call) = f.as_ref().wasm_call {
wasm_call
} else {
// Assert that this is a native-call function, since those
// Assert that this is a array-call function, since those
// are the only ones that could be missing a `wasm_call`
// trampoline.
let _ = VMNativeCallHostFuncContext::from_opaque(f.as_ref().vmctx);
let _ = VMArrayCallHostFuncContext::from_opaque(f.as_ref().vmctx);
let sig = self.type_index(store.store_data());
module.runtime_info().wasm_to_native_trampoline(sig).expect(
module.runtime_info().wasm_to_array_trampoline(sig).expect(
"if the wasm is importing a function of a given type, it must have the \
type's trampoline",
)
},
native_call: f.as_ref().native_call,
array_call: f.as_ref().array_call,
vmctx: f.as_ref().vmctx,
}
@ -1667,40 +1666,35 @@ fn exit_wasm<T>(store: &mut StoreContextMut<'_, T>, prev_stack: Option<usize>) {
///
/// For more information see [`Func::wrap`]
pub unsafe trait WasmRet {
// Same as `WasmTy::Abi`.
#[doc(hidden)]
type Abi: 'static + Copy;
#[doc(hidden)]
type Retptr: Copy;
// Same as `WasmTy::compatible_with_store`.
#[doc(hidden)]
fn compatible_with_store(&self, store: &StoreOpaque) -> bool;
// Similar to `WasmTy::into_abi_for_arg` but used when host code is
// returning a value into Wasm, rather than host code passing an argument to
// a Wasm call. Unlike `into_abi_for_arg`, implementors of this method can
// raise traps, which means that callers must ensure that
// `invoke_wasm_and_catch_traps` is on the stack, and therefore this method
// is unsafe.
/// Stores this return value into the `ptr` specified using the rooted
/// `store`.
///
/// Traps are communicated through the `Result<_>` return value.
///
/// # Unsafety
///
/// This method is unsafe as `ptr` must have the correct length to store
/// this result. This property is only checked in debug mode, not in release
/// mode.
#[doc(hidden)]
unsafe fn into_abi_for_ret(
unsafe fn store(
self,
store: &mut AutoAssertNoGc<'_>,
ptr: Self::Retptr,
) -> Result<Self::Abi>;
ptr: &mut [MaybeUninit<ValRaw>],
) -> Result<()>;
#[doc(hidden)]
fn func_type(engine: &Engine, params: impl Iterator<Item = ValType>) -> FuncType;
#[doc(hidden)]
unsafe fn wrap_trampoline(ptr: *mut ValRaw, f: impl FnOnce(Self::Retptr) -> Self::Abi);
// Utilities used to convert an instance of this type to a `Result`
// explicitly, used when wrapping async functions which always bottom-out
// in a function that returns a trap because futures can be cancelled.
#[doc(hidden)]
type Fallible: WasmRet<Abi = Self::Abi, Retptr = Self::Retptr>;
type Fallible: WasmRet;
#[doc(hidden)]
fn into_fallible(self) -> Self::Fallible;
#[doc(hidden)]
@ -1711,30 +1705,25 @@ unsafe impl<T> WasmRet for T
where
T: WasmTy,
{
type Abi = <T as WasmTy>::Abi;
type Retptr = ();
type Fallible = Result<T>;
fn compatible_with_store(&self, store: &StoreOpaque) -> bool {
<Self as WasmTy>::compatible_with_store(self, store)
}
unsafe fn into_abi_for_ret(
unsafe fn store(
self,
store: &mut AutoAssertNoGc<'_>,
_retptr: (),
) -> Result<Self::Abi> {
<Self as WasmTy>::into_abi(self, store)
ptr: &mut [MaybeUninit<ValRaw>],
) -> Result<()> {
debug_assert!(ptr.len() > 0);
<Self as WasmTy>::store(self, store, ptr.get_unchecked_mut(0))
}
fn func_type(engine: &Engine, params: impl Iterator<Item = ValType>) -> FuncType {
FuncType::new(engine, params, Some(<Self as WasmTy>::valtype()))
}
unsafe fn wrap_trampoline(ptr: *mut ValRaw, f: impl FnOnce(Self::Retptr) -> Self::Abi) {
T::abi_into_raw(f(()), ptr);
}
fn into_fallible(self) -> Result<T> {
Ok(self)
}
@ -1748,8 +1737,6 @@ unsafe impl<T> WasmRet for Result<T>
where
T: WasmRet,
{
type Abi = <T as WasmRet>::Abi;
type Retptr = <T as WasmRet>::Retptr;
type Fallible = Self;
fn compatible_with_store(&self, store: &StoreOpaque) -> bool {
@ -1759,22 +1746,18 @@ where
}
}
unsafe fn into_abi_for_ret(
unsafe fn store(
self,
store: &mut AutoAssertNoGc<'_>,
retptr: Self::Retptr,
) -> Result<Self::Abi> {
self.and_then(|val| val.into_abi_for_ret(store, retptr))
ptr: &mut [MaybeUninit<ValRaw>],
) -> Result<()> {
self.and_then(|val| val.store(store, ptr))
}
fn func_type(engine: &Engine, params: impl Iterator<Item = ValType>) -> FuncType {
T::func_type(engine, params)
}
unsafe fn wrap_trampoline(ptr: *mut ValRaw, f: impl FnOnce(Self::Retptr) -> Self::Abi) {
T::wrap_trampoline(ptr, f)
}
fn into_fallible(self) -> Result<T> {
self
}
@ -1790,10 +1773,7 @@ macro_rules! impl_wasm_host_results {
unsafe impl<$($t),*> WasmRet for ($($t,)*)
where
$($t: WasmTy,)*
($($t::Abi,)*): HostAbi,
{
type Abi = <($($t::Abi,)*) as HostAbi>::Abi;
type Retptr = <($($t::Abi,)*) as HostAbi>::Retptr;
type Fallible = Result<Self>;
#[inline]
@ -1803,14 +1783,20 @@ macro_rules! impl_wasm_host_results {
}
#[inline]
unsafe fn into_abi_for_ret(
unsafe fn store(
self,
_store: &mut AutoAssertNoGc<'_>,
ptr: Self::Retptr,
) -> Result<Self::Abi> {
_ptr: &mut [MaybeUninit<ValRaw>],
) -> Result<()> {
let ($($t,)*) = self;
let abi = ($($t.into_abi(_store)?,)*);
Ok(<($($t::Abi,)*) as HostAbi>::into_abi(abi, ptr))
let mut _cur = 0;
$(
debug_assert!(_cur < _ptr.len());
let val = _ptr.get_unchecked_mut(_cur);
_cur += 1;
WasmTy::store($t, _store, val)?;
)*
Ok(())
}
fn func_type(engine: &Engine, params: impl Iterator<Item = ValType>) -> FuncType {
@ -1821,15 +1807,6 @@ macro_rules! impl_wasm_host_results {
)
}
#[allow(unused_assignments)]
unsafe fn wrap_trampoline(mut _ptr: *mut ValRaw, f: impl FnOnce(Self::Retptr) -> Self::Abi) {
let ($($t,)*) = <($($t::Abi,)*) as HostAbi>::call(f);
$(
$t::abi_into_raw($t, _ptr);
_ptr = _ptr.add(1);
)*
}
#[inline]
fn into_fallible(self) -> Result<Self> {
Ok(self)
@ -1845,113 +1822,6 @@ macro_rules! impl_wasm_host_results {
for_each_function_signature!(impl_wasm_host_results);
// Internal trait representing how to communicate tuples of return values across
// an ABI boundary. This internally corresponds to the "wasmtime" ABI inside of
// cranelift itself. Notably the first element of each tuple is returned via the
// typical system ABI (e.g. systemv or fastcall depending on platform) and all
// other values are returned packed via the stack.
//
// This trait helps to encapsulate all the details of that.
#[doc(hidden)]
pub trait HostAbi {
// A value returned from native functions which return `Self`
type Abi: Copy;
// A return pointer, added to the end of the argument list, for native
// functions that return `Self`. Note that a 0-sized type here should get
// elided at the ABI level.
type Retptr: Copy;
// Converts a value of `self` into its components. Stores necessary values
// into `ptr` and then returns whatever needs to be returned from the
// function.
unsafe fn into_abi(self, ptr: Self::Retptr) -> Self::Abi;
// Calls `f` with a suitably sized return area and requires `f` to return
// the raw abi value of the first element of our tuple. This will then
// unpack the `Retptr` and assemble it with `Self::Abi` to return an
// instance of the whole tuple.
unsafe fn call(f: impl FnOnce(Self::Retptr) -> Self::Abi) -> Self;
}
macro_rules! impl_host_abi {
// Base case, everything is `()`
(0) => {
impl HostAbi for () {
type Abi = ();
type Retptr = ();
#[inline]
unsafe fn into_abi(self, _ptr: Self::Retptr) -> Self::Abi {}
#[inline]
unsafe fn call(f: impl FnOnce(Self::Retptr) -> Self::Abi) -> Self {
f(())
}
}
};
// In the 1-case the retptr is not present, so it's a 0-sized value.
(1 $a:ident) => {
impl<$a: Copy> HostAbi for ($a,) {
type Abi = $a;
type Retptr = ();
unsafe fn into_abi(self, _ptr: Self::Retptr) -> Self::Abi {
self.0
}
unsafe fn call(f: impl FnOnce(Self::Retptr) -> Self::Abi) -> Self {
(f(()),)
}
}
};
// This is where the more interesting case happens. The first element of the
// tuple is returned via `Abi` and all other elements are returned via
// `Retptr`. We create a `TupleRetNN` structure to represent all of the
// return values here.
//
// Also note that this isn't implemented for the old backend right now
// because the original author wasn't sure how to implement it there.
($n:tt $t:ident $($u:ident)*) => {paste::paste!{
#[doc(hidden)]
#[allow(non_snake_case)]
#[repr(C)]
pub struct [<TupleRet $n>]<$($u,)*> {
$($u: $u,)*
}
#[allow(non_snake_case, unused_assignments)]
impl<$t: Copy, $($u: Copy,)*> HostAbi for ($t, $($u,)*) {
type Abi = $t;
type Retptr = *mut [<TupleRet $n>]<$($u,)*>;
unsafe fn into_abi(self, ptr: Self::Retptr) -> Self::Abi {
let ($t, $($u,)*) = self;
// Store the tail of our tuple into the return pointer...
$((*ptr).$u = $u;)*
// ... and return the head raw.
$t
}
unsafe fn call(f: impl FnOnce(Self::Retptr) -> Self::Abi) -> Self {
// Create space to store all the return values and then invoke
// the function.
let mut space = core::mem::MaybeUninit::uninit();
let t = f(space.as_mut_ptr());
let space = space.assume_init();
// Use the return value as the head of the tuple and unpack our
// return area to get the rest of the tuple.
(t, $(space.$u,)*)
}
}
}};
}
for_each_function_signature!(impl_host_abi);
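
To make the removed multi-value return scheme concrete, here is a minimal standalone sketch of the idea behind `HostAbi` and the generated `TupleRetN` structs: the head of the tuple comes back as an ordinary return value, while the tail is written through a caller-provided return pointer. The tuple shape and names below are made up for illustration; this is not Wasmtime code.

```rust
use std::mem::MaybeUninit;

// Return area for everything past the first element of a (f32, i64, i32)
// return, mirroring the generated `TupleRetN` structs.
#[repr(C)]
struct TailRet {
    b: i64,
    c: i32,
}

// The callee returns the head of the tuple directly and writes the tail
// through the caller-provided return pointer (the `into_abi` half).
unsafe extern "C" fn callee(ret: *mut TailRet) -> f32 {
    unsafe {
        (*ret).b = 2;
        (*ret).c = 3;
    }
    1.0
}

// The caller reserves space for the tail, invokes the callee, and stitches
// the full tuple back together (the `call` half).
fn call_multi_value() -> (f32, i64, i32) {
    let mut tail = MaybeUninit::<TailRet>::uninit();
    let head = unsafe { callee(tail.as_mut_ptr()) };
    let tail = unsafe { tail.assume_init() };
    (head, tail.b, tail.c)
}

fn main() {
    assert_eq!(call_multi_value(), (1.0, 2, 3));
}
```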
/// Internal trait implemented for all arguments that can be passed to
/// [`Func::wrap`] and [`Linker::func_wrap`](crate::Linker::func_wrap).
///
@ -2133,7 +2003,7 @@ impl<T> AsContextMut for Caller<'_, T> {
}
}
// State stored inside a `VMNativeCallHostFuncContext`.
// State stored inside a `VMArrayCallHostFuncContext`.
struct HostFuncState<F> {
// The actual host function.
func: F,
@ -2174,22 +2044,22 @@ macro_rules! impl_into_func {
{
fn into_func(self, engine: &Engine) -> HostContext {
/// This shim is a regular, non-closure function we can stuff
/// inside `VMFuncRef::native_call`.
/// inside `VMFuncRef::array_call`.
///
/// It reads the actual callee closure out of
/// `VMNativeCallHostFuncContext::host_state`, forwards
/// arguments to that function, and finally forwards the results
/// back out to the caller. It also handles traps and panics
/// along the way.
unsafe extern "C" fn native_call_shim<T, F, $($args,)* R>(
vmctx: *mut VMOpaqueContext,
/// `VMArrayCallHostFuncContext::host_state`, forwards
/// arguments to that function, and finally forwards the
/// results back out to the caller. It also handles traps and
/// panics along the way.
unsafe extern "C" fn array_call_trampoline<T, F, $($args,)* R>(
callee_vmctx: *mut VMOpaqueContext,
caller_vmctx: *mut VMOpaqueContext,
$( $args: $args::Abi, )*
retptr: R::Retptr,
) -> R::Abi
args: *mut ValRaw,
args_len: usize
)
where
F: Fn(Caller<'_, T>, $( $args ),*) -> R + 'static,
$( $args: WasmTy, )*
$($args: WasmTy,)*
R: WasmRet,
{
// Note that this function is intentionally scoped into a
@ -2199,7 +2069,11 @@ macro_rules! impl_into_func {
// should be part of this closure, and the long-jmp-ing
// happens after the closure in handling the result.
let run = move |mut caller: Caller<'_, T>| {
let vmctx = VMNativeCallHostFuncContext::from_opaque(vmctx);
let args = core::slice::from_raw_parts_mut(
args.cast::<MaybeUninit<ValRaw>>(),
args_len,
);
let vmctx = VMArrayCallHostFuncContext::from_opaque(callee_vmctx);
let state = (*vmctx).host_state();
// Double-check ourselves in debug mode, but we control
@ -2215,7 +2089,13 @@ macro_rules! impl_into_func {
}
let mut store = AutoAssertNoGc::new(caller.store.0);
$(let $args = $args::from_abi($args, &mut store);)*
let mut _cur = 0;
$(let $args = {
debug_assert!(_cur < args.len());
let ptr = args.get_unchecked(_cur).assume_init_ref();
_cur += 1;
$args::load(&mut store, ptr)
};)*
let _ = &mut store;
drop(store);
@ -2233,7 +2113,7 @@ macro_rules! impl_into_func {
bail!("host function attempted to return cross-`Store` value to Wasm")
} else {
let mut store = AutoAssertNoGc::new(&mut **caller.store.0);
let ret = ret.into_abi_for_ret(&mut store, retptr)?;
let ret = ret.store(&mut store, args)?;
Ok(ret)
}
};
@ -2251,36 +2131,6 @@ macro_rules! impl_into_func {
}
}
/// This trampoline allows host code to indirectly call the
/// wrapped function (e.g. via `Func::call` on a `funcref` that
/// happens to reference our wrapped function).
///
/// It reads the arguments out of the incoming `args` array,
/// calls the given function pointer, and then stores the result
/// back into the `args` array.
unsafe extern "C" fn array_call_trampoline<T, F, $($args,)* R>(
callee_vmctx: *mut VMOpaqueContext,
caller_vmctx: *mut VMOpaqueContext,
args: *mut ValRaw,
_args_len: usize
)
where
F: Fn(Caller<'_, T>, $( $args ),*) -> R + 'static,
$($args: WasmTy,)*
R: WasmRet,
{
let mut _n = 0;
$(
debug_assert!(_n < _args_len);
let $args = $args::abi_from_raw(args.add(_n));
_n += 1;
)*
R::wrap_trampoline(args, |retptr| {
native_call_shim::<T, F, $( $args, )* R>(callee_vmctx, caller_vmctx, $( $args, )* retptr)
});
}
let ty = R::func_type(
engine,
None::<ValType>.into_iter()
@ -2289,12 +2139,10 @@ macro_rules! impl_into_func {
let type_index = ty.type_index();
let array_call = array_call_trampoline::<T, F, $($args,)* R>;
let native_call = NonNull::new(native_call_shim::<T, F, $($args,)* R> as *mut _).unwrap();
let ctx = unsafe {
VMNativeCallHostFuncContext::new(
VMArrayCallHostFuncContext::new(
VMFuncRef {
native_call,
array_call,
wasm_call: None,
type_index,
@ -2317,16 +2165,9 @@ for_each_function_signature!(impl_into_func);
#[doc(hidden)]
pub enum HostContext {
Native(StoreBox<VMNativeCallHostFuncContext>),
Array(StoreBox<VMArrayCallHostFuncContext>),
}
impl From<StoreBox<VMNativeCallHostFuncContext>> for HostContext {
fn from(ctx: StoreBox<VMNativeCallHostFuncContext>) -> Self {
HostContext::Native(ctx)
}
}
impl From<StoreBox<VMArrayCallHostFuncContext>> for HostContext {
fn from(ctx: StoreBox<VMArrayCallHostFuncContext>) -> Self {
HostContext::Array(ctx)
@ -2394,7 +2235,7 @@ impl HostFunc {
Ok(result)
})
};
let ctx = crate::trampoline::create_array_call_function(&ty, func, engine)
let ctx = crate::trampoline::create_array_call_function(&ty, func)
.expect("failed to create function");
HostFunc::_new(engine, ctx.into())
}
@ -2460,7 +2301,7 @@ impl HostFunc {
if rooted_func_ref.is_some() {
debug_assert!(self.func_ref().wasm_call.is_none());
debug_assert!(matches!(self.ctx, HostContext::Native(_)));
debug_assert!(matches!(self.ctx, HostContext::Array(_)));
}
Func::from_func_kind(
@ -2492,7 +2333,6 @@ impl HostFunc {
pub(crate) fn func_ref(&self) -> &VMFuncRef {
match &self.ctx {
HostContext::Native(ctx) => unsafe { (*ctx.get()).func_ref() },
HostContext::Array(ctx) => unsafe { (*ctx.get()).func_ref() },
}
}

crates/wasmtime/src/runtime/func/typed.rs (297 changed lines)

@ -1,5 +1,5 @@
use super::{invoke_wasm_and_catch_traps, HostAbi};
use crate::runtime::vm::{VMContext, VMFuncRef, VMNativeCallFunction, VMOpaqueContext};
use super::invoke_wasm_and_catch_traps;
use crate::runtime::vm::{VMFuncRef, VMOpaqueContext};
use crate::store::{AutoAssertNoGc, StoreOpaque};
use crate::{
AsContext, AsContextMut, Engine, Func, FuncType, HeapType, NoFunc, RefType, StoreContextMut,
@ -10,7 +10,7 @@ use core::ffi::c_void;
use core::marker;
use core::mem::{self, MaybeUninit};
use core::num::NonZeroUsize;
use core::ptr::{self, NonNull};
use core::ptr::{self};
use wasmtime_environ::VMSharedTypeIndex;
/// A statically typed WebAssembly function.
@ -190,33 +190,43 @@ where
// belong within this store, otherwise it would be unsafe for store
// values to cross each other.
let params = {
let mut store = AutoAssertNoGc::new(store.0);
params.into_abi(&mut store, ty)?
union Storage<T: Copy, U: Copy> {
params: MaybeUninit<T>,
results: U,
}
let mut storage = Storage::<Params::ValRawStorage, Results::ValRawStorage> {
params: MaybeUninit::uninit(),
};
{
let mut store = AutoAssertNoGc::new(store.0);
params.store(&mut store, ty, &mut storage.params)?;
}
// Try to capture only a single variable (a tuple) in the closure below.
// This means the size of the closure is one pointer and is much more
// efficient to move in memory. This closure is actually invoked on the
// other side of a C++ shim, so it can never be inlined enough to make
// the memory go away, so the size matters here for performance.
let mut captures = (func, MaybeUninit::uninit(), params, false);
let mut captures = (func, storage);
let result = invoke_wasm_and_catch_traps(store, |caller| {
let (func_ref, ret, params, returned) = &mut captures;
let (func_ref, storage) = &mut captures;
let func_ref = func_ref.as_ref();
let result =
Params::invoke::<Results>(func_ref.native_call, func_ref.vmctx, caller, *params);
ptr::write(ret.as_mut_ptr(), result);
*returned = true
(func_ref.array_call)(
func_ref.vmctx,
VMOpaqueContext::from_vmcontext(caller),
(storage as *mut Storage<_, _>) as *mut ValRaw,
mem::size_of_val::<Storage<_, _>>(storage) / mem::size_of::<ValRaw>(),
);
});
let (_, ret, _, returned) = captures;
debug_assert_eq!(result.is_ok(), returned);
let (_, storage) = captures;
result?;
let mut store = AutoAssertNoGc::new(store.0);
Ok(Results::from_abi(&mut store, ret.assume_init()))
Ok(Results::load(&mut store, &storage.results))
}
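
For readers less familiar with the new `TypedFunc::call` path, the following standalone sketch shows the idea behind the `Storage` union above: one buffer of raw value slots is sized to the larger of the parameter and result areas, parameters are written in, the array-call callee overwrites them with results, and the results are read back out. `Slot` and `array_call` here are stand-ins, not Wasmtime's `ValRaw` or its real trampoline.

```rust
use std::mem::{self, MaybeUninit};

// Stand-in for a raw value slot; the real `ValRaw` is a union of all Wasm
// value representations.
type Slot = u64;

// Parameters and results share one allocation: the union is as large as the
// bigger of the two, so a single pointer carries arguments in and results out.
union Storage {
    params: MaybeUninit<[Slot; 2]>,
    results: [Slot; 1],
}

// Hypothetical array-call callee: reads its two arguments from the slots and
// writes its single result back into slot 0.
unsafe fn array_call(slots: *mut Slot, len: usize) {
    let slots = unsafe { std::slice::from_raw_parts_mut(slots, len) };
    slots[0] = slots[0] + slots[1];
}

fn main() {
    let mut storage = Storage {
        params: MaybeUninit::new([40, 2]),
    };
    unsafe {
        // Capacity is derived from the union's size, as in the code above.
        let capacity = mem::size_of::<Storage>() / mem::size_of::<Slot>();
        array_call((&mut storage as *mut Storage).cast::<Slot>(), capacity);
        assert_eq!(storage.results[0], 42);
    }
}
```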
/// Purely a debug-mode assertion, not actually used in release builds.
@ -245,11 +255,6 @@ pub enum TypeCheckPosition {
///
/// For more information see [`Func::wrap`] and [`Func::typed`]
pub unsafe trait WasmTy: Send {
// The raw ABI type that values of this type can be converted to and passed
// to Wasm, or given from Wasm and converted back from.
#[doc(hidden)]
type Abi: 'static + Copy;
// Do a "static" (aka at time of `func.typed::<P, R>()`) ahead-of-time type
// check for this type at the given position. You probably don't need to
// override this trait method.
@ -342,15 +347,7 @@ pub unsafe trait WasmTy: Send {
Self::valtype().is_vmgcref_type_and_points_to_object()
}
// Construct a `Self::Abi` from the given `ValRaw`.
#[doc(hidden)]
unsafe fn abi_from_raw(raw: *mut ValRaw) -> Self::Abi;
// Stuff our given `Self::Abi` into a `ValRaw`.
#[doc(hidden)]
unsafe fn abi_into_raw(abi: Self::Abi, raw: *mut ValRaw);
// Convert `self` into `Self::Abi`.
// Store `self` into `ptr`.
//
// NB: We _must not_ trigger a GC when passing refs from host code into Wasm
// (e.g. returned from a host function or passed as arguments to a Wasm
@ -379,17 +376,21 @@ pub unsafe trait WasmTy: Send {
// In conclusion, to prevent uses-after-free bugs, we cannot GC while
// converting types into their raw ABI forms.
#[doc(hidden)]
fn into_abi(self, store: &mut AutoAssertNoGc<'_>) -> Result<Self::Abi>;
fn store(self, store: &mut AutoAssertNoGc<'_>, ptr: &mut MaybeUninit<ValRaw>) -> Result<()>;
// Convert back from `Self::Abi` into `Self`.
// Load a version of `Self` from the `ptr` provided.
//
// # Safety
//
// This function is unsafe as it's up to the caller to ensure that `ptr` is
// valid for this given type.
#[doc(hidden)]
unsafe fn from_abi(abi: Self::Abi, store: &mut AutoAssertNoGc<'_>) -> Self;
unsafe fn load(store: &mut AutoAssertNoGc<'_>, ptr: &ValRaw) -> Self;
}
macro_rules! integers {
($($primitive:ident/$get_primitive:ident => $ty:ident)*) => ($(
unsafe impl WasmTy for $primitive {
type Abi = $primitive;
#[inline]
fn valtype() -> ValType {
ValType::$ty
@ -403,21 +404,13 @@ macro_rules! integers {
unreachable!()
}
#[inline]
unsafe fn abi_from_raw(raw: *mut ValRaw) -> $primitive {
(*raw).$get_primitive()
}
#[inline]
unsafe fn abi_into_raw(abi: $primitive, raw: *mut ValRaw) {
*raw = ValRaw::$primitive(abi);
}
#[inline]
fn into_abi(self, _store: &mut AutoAssertNoGc<'_>) -> Result<Self::Abi>
{
Ok(self)
fn store(self, _store: &mut AutoAssertNoGc<'_>, ptr: &mut MaybeUninit<ValRaw>) -> Result<()> {
ptr.write(ValRaw::$primitive(self));
Ok(())
}
#[inline]
unsafe fn from_abi(abi: Self::Abi, _store: &mut AutoAssertNoGc<'_>) -> Self {
abi
unsafe fn load(_store: &mut AutoAssertNoGc<'_>, ptr: &ValRaw) -> Self {
ptr.$get_primitive()
}
}
)*)
@ -433,7 +426,6 @@ integers! {
macro_rules! floats {
($($float:ident/$int:ident/$get_float:ident => $ty:ident)*) => ($(
unsafe impl WasmTy for $float {
type Abi = $float;
#[inline]
fn valtype() -> ValType {
ValType::$ty
@ -447,21 +439,13 @@ macro_rules! floats {
unreachable!()
}
#[inline]
unsafe fn abi_from_raw(raw: *mut ValRaw) -> $float {
$float::from_bits((*raw).$get_float())
fn store(self, _store: &mut AutoAssertNoGc<'_>, ptr: &mut MaybeUninit<ValRaw>) -> Result<()> {
ptr.write(ValRaw::$float(self.to_bits()));
Ok(())
}
#[inline]
unsafe fn abi_into_raw(abi: $float, raw: *mut ValRaw) {
*raw = ValRaw::$float(abi.to_bits());
}
#[inline]
fn into_abi(self, _store: &mut AutoAssertNoGc<'_>) -> Result<Self::Abi>
{
Ok(self)
}
#[inline]
unsafe fn from_abi(abi: Self::Abi, _store: &mut AutoAssertNoGc<'_>) -> Self {
abi
unsafe fn load(_store: &mut AutoAssertNoGc<'_>, ptr: &ValRaw) -> Self {
$float::from_bits(ptr.$get_float())
}
}
)*)
@ -473,8 +457,6 @@ floats! {
}
unsafe impl WasmTy for NoFunc {
type Abi = NoFunc;
#[inline]
fn valtype() -> ValType {
ValType::Ref(RefType::new(false, HeapType::NoFunc))
@ -496,29 +478,17 @@ unsafe impl WasmTy for NoFunc {
}
#[inline]
unsafe fn abi_from_raw(_raw: *mut ValRaw) -> Self::Abi {
unreachable!("NoFunc is uninhabited")
}
#[inline]
unsafe fn abi_into_raw(_abi: Self::Abi, _raw: *mut ValRaw) {
unreachable!("NoFunc is uninhabited")
}
#[inline]
fn into_abi(self, _store: &mut AutoAssertNoGc<'_>) -> Result<Self::Abi> {
unreachable!("NoFunc is uninhabited")
fn store(self, _store: &mut AutoAssertNoGc<'_>, _ptr: &mut MaybeUninit<ValRaw>) -> Result<()> {
match self._inner {}
}
#[inline]
unsafe fn from_abi(_abi: Self::Abi, _store: &mut AutoAssertNoGc<'_>) -> Self {
unsafe fn load(_store: &mut AutoAssertNoGc<'_>, _ptr: &ValRaw) -> Self {
unreachable!("NoFunc is uninhabited")
}
}
unsafe impl WasmTy for Option<NoFunc> {
type Abi = *mut NoFunc;
#[inline]
fn valtype() -> ValType {
ValType::Ref(RefType::new(true, HeapType::NoFunc))
@ -545,29 +515,18 @@ unsafe impl WasmTy for Option<NoFunc> {
}
#[inline]
unsafe fn abi_from_raw(_raw: *mut ValRaw) -> Self::Abi {
ptr::null_mut()
}
#[inline]
unsafe fn abi_into_raw(_abi: Self::Abi, raw: *mut ValRaw) {
*raw = ValRaw::funcref(ptr::null_mut());
fn store(self, _store: &mut AutoAssertNoGc<'_>, ptr: &mut MaybeUninit<ValRaw>) -> Result<()> {
ptr.write(ValRaw::funcref(ptr::null_mut()));
Ok(())
}
#[inline]
fn into_abi(self, _store: &mut AutoAssertNoGc<'_>) -> Result<Self::Abi> {
Ok(ptr::null_mut())
}
#[inline]
unsafe fn from_abi(_abi: Self::Abi, _store: &mut AutoAssertNoGc<'_>) -> Self {
unsafe fn load(_store: &mut AutoAssertNoGc<'_>, _ptr: &ValRaw) -> Self {
None
}
}
unsafe impl WasmTy for Func {
type Abi = NonNull<crate::runtime::vm::VMFuncRef>;
#[inline]
fn valtype() -> ValType {
ValType::Ref(RefType::new(false, HeapType::Func))
@ -591,31 +550,21 @@ unsafe impl WasmTy for Func {
}
#[inline]
unsafe fn abi_from_raw(raw: *mut ValRaw) -> Self::Abi {
let p = (*raw).get_funcref();
debug_assert!(!p.is_null());
NonNull::new_unchecked(p.cast::<crate::runtime::vm::VMFuncRef>())
}
#[inline]
unsafe fn abi_into_raw(abi: Self::Abi, raw: *mut ValRaw) {
*raw = ValRaw::funcref(abi.cast::<c_void>().as_ptr());
fn store(self, store: &mut AutoAssertNoGc<'_>, ptr: &mut MaybeUninit<ValRaw>) -> Result<()> {
let abi = self.vm_func_ref(store);
ptr.write(ValRaw::funcref(abi.cast::<c_void>().as_ptr()));
Ok(())
}
#[inline]
fn into_abi(self, store: &mut AutoAssertNoGc<'_>) -> Result<Self::Abi> {
Ok(self.vm_func_ref(store))
}
#[inline]
unsafe fn from_abi(abi: Self::Abi, store: &mut AutoAssertNoGc<'_>) -> Self {
Func::from_vm_func_ref(store, abi.as_ptr()).unwrap()
unsafe fn load(store: &mut AutoAssertNoGc<'_>, ptr: &ValRaw) -> Self {
let p = ptr.get_funcref();
debug_assert!(!p.is_null());
Func::from_vm_func_ref(store, p.cast()).unwrap()
}
}
unsafe impl WasmTy for Option<Func> {
type Abi = *mut crate::runtime::vm::VMFuncRef;
#[inline]
fn valtype() -> ValType {
ValType::FUNCREF
@ -648,27 +597,19 @@ unsafe impl WasmTy for Option<Func> {
}
#[inline]
unsafe fn abi_from_raw(raw: *mut ValRaw) -> Self::Abi {
(*raw).get_funcref() as Self::Abi
}
#[inline]
unsafe fn abi_into_raw(abi: Self::Abi, raw: *mut ValRaw) {
*raw = ValRaw::funcref(abi.cast());
}
#[inline]
fn into_abi(self, store: &mut AutoAssertNoGc<'_>) -> Result<Self::Abi> {
Ok(if let Some(f) = self {
fn store(self, store: &mut AutoAssertNoGc<'_>, ptr: &mut MaybeUninit<ValRaw>) -> Result<()> {
let raw = if let Some(f) = self {
f.vm_func_ref(store).as_ptr()
} else {
ptr::null_mut()
})
};
ptr.write(ValRaw::funcref(raw.cast::<c_void>()));
Ok(())
}
#[inline]
unsafe fn from_abi(abi: Self::Abi, store: &mut AutoAssertNoGc<'_>) -> Self {
Func::from_vm_func_ref(store, abi)
unsafe fn load(store: &mut AutoAssertNoGc<'_>, ptr: &ValRaw) -> Self {
Func::from_vm_func_ref(store, ptr.get_funcref().cast())
}
}
@ -679,7 +620,7 @@ unsafe impl WasmTy for Option<Func> {
/// tuples of those types.
pub unsafe trait WasmParams: Send {
#[doc(hidden)]
type Abi: Copy;
type ValRawStorage: Copy;
#[doc(hidden)]
fn typecheck(
@ -692,15 +633,12 @@ pub unsafe trait WasmParams: Send {
fn vmgcref_pointing_to_object_count(&self) -> usize;
#[doc(hidden)]
fn into_abi(self, store: &mut AutoAssertNoGc<'_>, func_ty: &FuncType) -> Result<Self::Abi>;
#[doc(hidden)]
unsafe fn invoke<R: WasmResults>(
func: NonNull<VMNativeCallFunction>,
vmctx1: *mut VMOpaqueContext,
vmctx2: *mut VMContext,
abi: Self::Abi,
) -> R::ResultAbi;
fn store(
self,
store: &mut AutoAssertNoGc<'_>,
func_ty: &FuncType,
dst: &mut MaybeUninit<Self::ValRawStorage>,
) -> Result<()>;
}
// Forward an impl from `T` to `(T,)` for convenience if there's only one
@ -709,7 +647,7 @@ unsafe impl<T> WasmParams for T
where
T: WasmTy,
{
type Abi = <(T,) as WasmParams>::Abi;
type ValRawStorage = <(T,) as WasmParams>::ValRawStorage;
fn typecheck(
engine: &Engine,
@ -725,17 +663,13 @@ where
}
#[inline]
fn into_abi(self, store: &mut AutoAssertNoGc<'_>, func_ty: &FuncType) -> Result<Self::Abi> {
<(T,) as WasmParams>::into_abi((self,), store, func_ty)
}
unsafe fn invoke<R: WasmResults>(
func: NonNull<VMNativeCallFunction>,
vmctx1: *mut VMOpaqueContext,
vmctx2: *mut VMContext,
abi: Self::Abi,
) -> R::ResultAbi {
<(T,) as WasmParams>::invoke::<R>(func, vmctx1, vmctx2, abi)
fn store(
self,
store: &mut AutoAssertNoGc<'_>,
func_ty: &FuncType,
dst: &mut MaybeUninit<Self::ValRawStorage>,
) -> Result<()> {
<(T,) as WasmParams>::store((self,), store, func_ty, dst)
}
}
@ -743,7 +677,7 @@ macro_rules! impl_wasm_params {
($n:tt $($t:ident)*) => {
#[allow(non_snake_case)]
unsafe impl<$($t: WasmTy,)*> WasmParams for ($($t,)*) {
type Abi = ($($t::Abi,)*);
type ValRawStorage = [ValRaw; $n];
fn typecheck(
_engine: &Engine,
@ -781,11 +715,12 @@ macro_rules! impl_wasm_params {
#[inline]
fn into_abi(
fn store(
self,
_store: &mut AutoAssertNoGc<'_>,
_func_ty: &FuncType,
) -> Result<Self::Abi> {
_ptr: &mut MaybeUninit<Self::ValRawStorage>,
) -> Result<()> {
let ($($t,)*) = self;
let mut _i = 0;
@ -803,39 +738,12 @@ macro_rules! impl_wasm_params {
}
}
let $t = $t.into_abi(_store)?;
let dst = map_maybe_uninit!(_ptr[_i]);
$t.store(_store, dst)?;
_i += 1;
)*
Ok(($($t,)*))
}
unsafe fn invoke<R: WasmResults>(
func: NonNull<VMNativeCallFunction>,
vmctx1: *mut VMOpaqueContext,
vmctx2: *mut VMContext,
abi: Self::Abi,
) -> R::ResultAbi {
let fnptr = mem::transmute::<
NonNull<VMNativeCallFunction>,
unsafe extern "C" fn(
*mut VMOpaqueContext,
*mut VMContext,
$($t::Abi,)*
<R::ResultAbi as HostAbi>::Retptr,
) -> <R::ResultAbi as HostAbi>::Abi,
>(func);
let ($($t,)*) = abi;
// Use the `call` function to acquire a `retptr` which we'll
// forward to the native function. Once we have it we also
// convert all our arguments to abi arguments to go to the raw
// function.
//
// Upon returning `R::call` will convert all the returns back
// into `R`.
<R::ResultAbi as HostAbi>::call(|retptr| {
fnptr(vmctx1, vmctx2, $($t,)* retptr)
})
Ok(())
}
}
};
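
The tuple `store` implementation above initializes a `MaybeUninit<Self::ValRawStorage>` one element at a time through the `map_maybe_uninit!` projection. Below is a minimal sketch of that element-by-element initialization pattern, written with raw pointers instead of the macro; the array shape is illustrative only.

```rust
use std::mem::MaybeUninit;

// Fill a `MaybeUninit<[u64; 3]>` slot by slot without ever reading
// uninitialized memory, then assume the whole array is initialized.
fn fill() -> [u64; 3] {
    let mut dst = MaybeUninit::<[u64; 3]>::uninit();
    let base = dst.as_mut_ptr().cast::<u64>();
    for i in 0..3 {
        // Each slot is written exactly once before `assume_init`.
        unsafe { base.add(i).write((i as u64) + 1) };
    }
    unsafe { dst.assume_init() }
}

fn main() {
    assert_eq!(fill(), [1, 2, 3]);
}
```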
@ -847,36 +755,23 @@ for_each_function_signature!(impl_wasm_params);
/// results for wasm functions.
pub unsafe trait WasmResults: WasmParams {
#[doc(hidden)]
type ResultAbi: HostAbi;
#[doc(hidden)]
unsafe fn from_abi(store: &mut AutoAssertNoGc<'_>, abi: Self::ResultAbi) -> Self;
unsafe fn load(store: &mut AutoAssertNoGc<'_>, abi: &Self::ValRawStorage) -> Self;
}
// Forwards from a bare type `T` to the 1-tuple type `(T,)`
unsafe impl<T: WasmTy> WasmResults for T
where
(T::Abi,): HostAbi,
{
type ResultAbi = <(T,) as WasmResults>::ResultAbi;
unsafe fn from_abi(store: &mut AutoAssertNoGc<'_>, abi: Self::ResultAbi) -> Self {
<(T,) as WasmResults>::from_abi(store, abi).0
unsafe impl<T: WasmTy> WasmResults for T {
unsafe fn load(store: &mut AutoAssertNoGc<'_>, abi: &Self::ValRawStorage) -> Self {
<(T,) as WasmResults>::load(store, abi).0
}
}
macro_rules! impl_wasm_results {
($n:tt $($t:ident)*) => {
#[allow(non_snake_case, unused_variables)]
unsafe impl<$($t: WasmTy,)*> WasmResults for ($($t,)*)
where ($($t::Abi,)*): HostAbi
{
type ResultAbi = ($($t::Abi,)*);
#[inline]
unsafe fn from_abi(store: &mut AutoAssertNoGc<'_>, abi: Self::ResultAbi) -> Self {
let ($($t,)*) = abi;
($($t::from_abi($t, store),)*)
unsafe impl<$($t: WasmTy,)*> WasmResults for ($($t,)*) {
unsafe fn load(store: &mut AutoAssertNoGc<'_>, abi: &Self::ValRawStorage) -> Self {
let [$($t,)*] = abi;
($($t::load(store, $t),)*)
}
}
};

crates/wasmtime/src/runtime/gc/enabled/anyref.rs (133 changed lines)

@ -7,7 +7,7 @@ use crate::{
RootSet, Rooted, ValRaw, ValType, WasmTy, I31,
};
use core::mem;
use core::num::NonZeroU64;
use core::mem::MaybeUninit;
/// An `anyref` GC reference.
///
@ -245,10 +245,6 @@ impl AnyRef {
}
unsafe impl WasmTy for Rooted<AnyRef> {
// TODO: this should be `VMGcRef` but Cranelift currently doesn't support
// using r32 types when targeting 64-bit platforms.
type Abi = NonZeroU64;
#[inline]
fn valtype() -> ValType {
ValType::Ref(RefType::new(false, HeapType::Any))
@ -264,31 +260,20 @@ unsafe impl WasmTy for Rooted<AnyRef> {
unreachable!()
}
#[inline]
unsafe fn abi_from_raw(raw: *mut ValRaw) -> Self::Abi {
let raw = (*raw).get_externref();
debug_assert_ne!(raw, 0);
NonZeroU64::new_unchecked(u64::from(raw))
}
#[inline]
unsafe fn abi_into_raw(abi: Self::Abi, raw: *mut ValRaw) {
let externref = u32::try_from(abi.get()).unwrap();
*raw = ValRaw::externref(externref);
}
#[inline]
fn into_abi(self, store: &mut AutoAssertNoGc<'_>) -> Result<Self::Abi> {
fn store(self, store: &mut AutoAssertNoGc<'_>, ptr: &mut MaybeUninit<ValRaw>) -> Result<()> {
let gc_ref = self.inner.try_clone_gc_ref(store)?;
let r64 = gc_ref.as_r64();
store.gc_store_mut()?.expose_gc_ref_to_wasm(gc_ref);
debug_assert_ne!(r64, 0);
Ok(unsafe { NonZeroU64::new_unchecked(r64) })
let anyref = u32::try_from(r64).unwrap();
ptr.write(ValRaw::anyref(anyref));
Ok(())
}
#[inline]
unsafe fn from_abi(abi: Self::Abi, store: &mut AutoAssertNoGc<'_>) -> Self {
let gc_ref = VMGcRef::from_r64(abi.get())
unsafe fn load(store: &mut AutoAssertNoGc<'_>, ptr: &ValRaw) -> Self {
let raw = ptr.get_anyref();
debug_assert_ne!(raw, 0);
let gc_ref = VMGcRef::from_r64(raw.into())
.expect("valid r64")
.expect("non-null");
let gc_ref = store.unwrap_gc_store_mut().clone_gc_ref(&gc_ref);
@ -297,8 +282,6 @@ unsafe impl WasmTy for Rooted<AnyRef> {
}
unsafe impl WasmTy for Option<Rooted<AnyRef>> {
type Abi = u64;
#[inline]
fn valtype() -> ValType {
ValType::ANYREF
@ -319,38 +302,24 @@ unsafe impl WasmTy for Option<Rooted<AnyRef>> {
self.is_some()
}
#[inline]
unsafe fn abi_from_raw(raw: *mut ValRaw) -> Self::Abi {
let externref = (*raw).get_externref();
u64::from(externref)
}
#[inline]
unsafe fn abi_into_raw(abi: Self::Abi, raw: *mut ValRaw) {
let externref = u32::try_from(abi).unwrap();
*raw = ValRaw::externref(externref);
}
#[inline]
fn into_abi(self, store: &mut AutoAssertNoGc<'_>) -> Result<Self::Abi> {
Ok(if let Some(x) = self {
<Rooted<AnyRef> as WasmTy>::into_abi(x, store)?.get()
} else {
0
})
fn store(self, store: &mut AutoAssertNoGc<'_>, ptr: &mut MaybeUninit<ValRaw>) -> Result<()> {
match self {
Some(r) => r.store(store, ptr),
None => {
ptr.write(ValRaw::anyref(0));
Ok(())
}
}
}
#[inline]
unsafe fn from_abi(abi: Self::Abi, store: &mut AutoAssertNoGc<'_>) -> Self {
let gc_ref = VMGcRef::from_r64(abi).expect("valid r64")?;
unsafe fn load(store: &mut AutoAssertNoGc<'_>, ptr: &ValRaw) -> Self {
let gc_ref = VMGcRef::from_r64(ptr.get_anyref().into()).expect("valid r64")?;
let gc_ref = store.unwrap_gc_store_mut().clone_gc_ref(&gc_ref);
Some(AnyRef::from_cloned_gc_ref(store, gc_ref))
}
}
unsafe impl WasmTy for ManuallyRooted<AnyRef> {
type Abi = NonZeroU64;
#[inline]
fn valtype() -> ValType {
ValType::Ref(RefType::new(false, HeapType::Any))
@ -366,30 +335,20 @@ unsafe impl WasmTy for ManuallyRooted<AnyRef> {
unreachable!()
}
#[inline]
unsafe fn abi_from_raw(raw: *mut ValRaw) -> Self::Abi {
let externref = (*raw).get_externref();
debug_assert_ne!(externref, 0);
NonZeroU64::new_unchecked(u64::from(externref))
}
#[inline]
unsafe fn abi_into_raw(abi: Self::Abi, raw: *mut ValRaw) {
let externref = u32::try_from(abi.get()).unwrap();
*raw = ValRaw::externref(externref);
}
#[inline]
fn into_abi(self, store: &mut AutoAssertNoGc<'_>) -> Result<Self::Abi> {
fn store(self, store: &mut AutoAssertNoGc<'_>, ptr: &mut MaybeUninit<ValRaw>) -> Result<()> {
let gc_ref = self.inner.try_clone_gc_ref(store)?;
let r64 = gc_ref.as_r64();
store.gc_store_mut()?.expose_gc_ref_to_wasm(gc_ref);
Ok(unsafe { NonZeroU64::new_unchecked(r64) })
debug_assert_ne!(r64, 0);
let anyref = u32::try_from(r64).unwrap();
ptr.write(ValRaw::anyref(anyref));
Ok(())
}
#[inline]
unsafe fn from_abi(abi: Self::Abi, store: &mut AutoAssertNoGc<'_>) -> Self {
let gc_ref = VMGcRef::from_r64(abi.get())
unsafe fn load(store: &mut AutoAssertNoGc<'_>, ptr: &ValRaw) -> Self {
let raw = ptr.get_anyref();
debug_assert_ne!(raw, 0);
let gc_ref = VMGcRef::from_r64(raw.into())
.expect("valid r64")
.expect("non-null");
let gc_ref = store.unwrap_gc_store_mut().clone_gc_ref(&gc_ref);
@ -403,8 +362,6 @@ unsafe impl WasmTy for ManuallyRooted<AnyRef> {
}
unsafe impl WasmTy for Option<ManuallyRooted<AnyRef>> {
type Abi = u64;
#[inline]
fn valtype() -> ValType {
ValType::ANYREF
@ -426,30 +383,20 @@ unsafe impl WasmTy for Option<ManuallyRooted<AnyRef>> {
self.is_some()
}
#[inline]
unsafe fn abi_from_raw(raw: *mut ValRaw) -> Self::Abi {
let externref = (*raw).get_externref();
u64::from(externref)
}
#[inline]
unsafe fn abi_into_raw(abi: Self::Abi, raw: *mut ValRaw) {
let externref = u32::try_from(abi).unwrap();
*raw = ValRaw::externref(externref);
}
#[inline]
fn into_abi(self, store: &mut AutoAssertNoGc<'_>) -> Result<Self::Abi> {
Ok(if let Some(x) = self {
<ManuallyRooted<AnyRef> as WasmTy>::into_abi(x, store)?.get()
} else {
0
})
fn store(self, store: &mut AutoAssertNoGc<'_>, ptr: &mut MaybeUninit<ValRaw>) -> Result<()> {
match self {
Some(r) => r.store(store, ptr),
None => {
ptr.write(ValRaw::anyref(0));
Ok(())
}
}
}
#[inline]
unsafe fn from_abi(abi: Self::Abi, store: &mut AutoAssertNoGc<'_>) -> Self {
let gc_ref = VMGcRef::from_r64(abi).expect("valid r64")?;
unsafe fn load(store: &mut AutoAssertNoGc<'_>, ptr: &ValRaw) -> Self {
let raw = ptr.get_anyref();
debug_assert_ne!(raw, 0);
let gc_ref = VMGcRef::from_r64(raw.into()).expect("valid r64")?;
let gc_ref = store.unwrap_gc_store_mut().clone_gc_ref(&gc_ref);
RootSet::with_lifo_scope(store, |store| {
let rooted = AnyRef::from_cloned_gc_ref(store, gc_ref);

crates/wasmtime/src/runtime/gc/enabled/externref.rs (133 changed lines)

@ -10,7 +10,7 @@ use crate::{
use anyhow::Context;
use core::any::Any;
use core::mem;
use core::num::NonZeroU64;
use core::mem::MaybeUninit;
/// An opaque, GC-managed reference to some host data that can be passed to
/// WebAssembly.
@ -422,10 +422,6 @@ impl ExternRef {
}
unsafe impl WasmTy for Rooted<ExternRef> {
// TODO: this should be `VMGcRef` but Cranelift currently doesn't support
// using r32 types when targeting 64-bit platforms.
type Abi = NonZeroU64;
#[inline]
fn valtype() -> ValType {
ValType::Ref(RefType::new(false, HeapType::Extern))
@ -441,31 +437,20 @@ unsafe impl WasmTy for Rooted<ExternRef> {
unreachable!()
}
#[inline]
unsafe fn abi_from_raw(raw: *mut ValRaw) -> Self::Abi {
let raw = (*raw).get_externref();
debug_assert_ne!(raw, 0);
NonZeroU64::new_unchecked(u64::from(raw))
}
#[inline]
unsafe fn abi_into_raw(abi: Self::Abi, raw: *mut ValRaw) {
let externref = u32::try_from(abi.get()).unwrap();
*raw = ValRaw::externref(externref);
}
#[inline]
fn into_abi(self, store: &mut AutoAssertNoGc<'_>) -> Result<Self::Abi> {
fn store(self, store: &mut AutoAssertNoGc<'_>, ptr: &mut MaybeUninit<ValRaw>) -> Result<()> {
let gc_ref = self.inner.try_clone_gc_ref(store)?;
let r64 = gc_ref.as_r64();
store.gc_store_mut()?.expose_gc_ref_to_wasm(gc_ref);
debug_assert_ne!(r64, 0);
Ok(unsafe { NonZeroU64::new_unchecked(r64) })
let externref = u32::try_from(r64).unwrap();
ptr.write(ValRaw::externref(externref));
Ok(())
}
#[inline]
unsafe fn from_abi(abi: Self::Abi, store: &mut AutoAssertNoGc<'_>) -> Self {
let gc_ref = VMGcRef::from_r64(abi.get())
unsafe fn load(store: &mut AutoAssertNoGc<'_>, ptr: &ValRaw) -> Self {
let raw = ptr.get_externref();
debug_assert_ne!(raw, 0);
let gc_ref = VMGcRef::from_r64(raw.into())
.expect("valid r64")
.expect("non-null");
let gc_ref = store.unwrap_gc_store_mut().clone_gc_ref(&gc_ref);
@ -474,8 +459,6 @@ unsafe impl WasmTy for Rooted<ExternRef> {
}
unsafe impl WasmTy for Option<Rooted<ExternRef>> {
type Abi = u64;
#[inline]
fn valtype() -> ValType {
ValType::EXTERNREF
@ -496,38 +479,24 @@ unsafe impl WasmTy for Option<Rooted<ExternRef>> {
self.is_some()
}
#[inline]
unsafe fn abi_from_raw(raw: *mut ValRaw) -> Self::Abi {
let externref = (*raw).get_externref();
u64::from(externref)
}
#[inline]
unsafe fn abi_into_raw(abi: Self::Abi, raw: *mut ValRaw) {
let externref = u32::try_from(abi).unwrap();
*raw = ValRaw::externref(externref);
}
#[inline]
fn into_abi(self, store: &mut AutoAssertNoGc<'_>) -> Result<Self::Abi> {
Ok(if let Some(x) = self {
<Rooted<ExternRef> as WasmTy>::into_abi(x, store)?.get()
} else {
0
})
fn store(self, store: &mut AutoAssertNoGc<'_>, ptr: &mut MaybeUninit<ValRaw>) -> Result<()> {
match self {
Some(r) => r.store(store, ptr),
None => {
ptr.write(ValRaw::externref(0));
Ok(())
}
}
}
#[inline]
unsafe fn from_abi(abi: Self::Abi, store: &mut AutoAssertNoGc<'_>) -> Self {
let gc_ref = VMGcRef::from_r64(abi).expect("valid r64")?;
unsafe fn load(store: &mut AutoAssertNoGc<'_>, ptr: &ValRaw) -> Self {
let gc_ref = VMGcRef::from_r64(ptr.get_externref().into()).expect("valid r64")?;
let gc_ref = store.unwrap_gc_store_mut().clone_gc_ref(&gc_ref);
Some(ExternRef::from_cloned_gc_ref(store, gc_ref))
}
}
unsafe impl WasmTy for ManuallyRooted<ExternRef> {
type Abi = NonZeroU64;
#[inline]
fn valtype() -> ValType {
ValType::Ref(RefType::new(false, HeapType::Extern))
@ -548,30 +517,20 @@ unsafe impl WasmTy for ManuallyRooted<ExternRef> {
true
}
#[inline]
unsafe fn abi_from_raw(raw: *mut ValRaw) -> Self::Abi {
let externref = (*raw).get_externref();
debug_assert_ne!(externref, 0);
NonZeroU64::new_unchecked(u64::from(externref))
}
#[inline]
unsafe fn abi_into_raw(abi: Self::Abi, raw: *mut ValRaw) {
let externref = u32::try_from(abi.get()).unwrap();
*raw = ValRaw::externref(externref);
}
#[inline]
fn into_abi(self, store: &mut AutoAssertNoGc<'_>) -> Result<Self::Abi> {
fn store(self, store: &mut AutoAssertNoGc<'_>, ptr: &mut MaybeUninit<ValRaw>) -> Result<()> {
let gc_ref = self.inner.try_clone_gc_ref(store)?;
let r64 = gc_ref.as_r64();
store.gc_store_mut()?.expose_gc_ref_to_wasm(gc_ref);
Ok(unsafe { NonZeroU64::new_unchecked(r64) })
debug_assert_ne!(r64, 0);
let externref = u32::try_from(r64).unwrap();
ptr.write(ValRaw::externref(externref));
Ok(())
}
#[inline]
unsafe fn from_abi(abi: Self::Abi, store: &mut AutoAssertNoGc<'_>) -> Self {
let gc_ref = VMGcRef::from_r64(abi.get())
unsafe fn load(store: &mut AutoAssertNoGc<'_>, ptr: &ValRaw) -> Self {
let raw = ptr.get_externref();
debug_assert_ne!(raw, 0);
let gc_ref = VMGcRef::from_r64(raw.into())
.expect("valid r64")
.expect("non-null");
let gc_ref = store.unwrap_gc_store_mut().clone_gc_ref(&gc_ref);
@ -585,8 +544,6 @@ unsafe impl WasmTy for ManuallyRooted<ExternRef> {
}
unsafe impl WasmTy for Option<ManuallyRooted<ExternRef>> {
type Abi = u64;
#[inline]
fn valtype() -> ValType {
ValType::EXTERNREF
@ -608,30 +565,20 @@ unsafe impl WasmTy for Option<ManuallyRooted<ExternRef>> {
self.is_some()
}
#[inline]
unsafe fn abi_from_raw(raw: *mut ValRaw) -> Self::Abi {
let externref = (*raw).get_externref();
u64::from(externref)
}
#[inline]
unsafe fn abi_into_raw(abi: Self::Abi, raw: *mut ValRaw) {
let externref = u32::try_from(abi).unwrap();
*raw = ValRaw::externref(externref);
}
#[inline]
fn into_abi(self, store: &mut AutoAssertNoGc<'_>) -> Result<Self::Abi> {
Ok(if let Some(x) = self {
<ManuallyRooted<ExternRef> as WasmTy>::into_abi(x, store)?.get()
} else {
0
})
fn store(self, store: &mut AutoAssertNoGc<'_>, ptr: &mut MaybeUninit<ValRaw>) -> Result<()> {
match self {
Some(r) => r.store(store, ptr),
None => {
ptr.write(ValRaw::externref(0));
Ok(())
}
}
}
#[inline]
unsafe fn from_abi(abi: Self::Abi, store: &mut AutoAssertNoGc<'_>) -> Self {
let gc_ref = VMGcRef::from_r64(abi).expect("valid r64")?;
unsafe fn load(store: &mut AutoAssertNoGc<'_>, ptr: &ValRaw) -> Self {
let raw = ptr.get_externref();
debug_assert_ne!(raw, 0);
let gc_ref = VMGcRef::from_r64(raw.into()).expect("valid r64")?;
let gc_ref = store.unwrap_gc_store_mut().clone_gc_ref(&gc_ref);
RootSet::with_lifo_scope(store, |store| {
let rooted = ExternRef::from_cloned_gc_ref(store, gc_ref);

crates/wasmtime/src/runtime/gc/enabled/i31.rs (79 changed lines)

@ -10,6 +10,7 @@ use crate::{
HeapType, RefType, Result, ValType, WasmTy,
};
use core::fmt;
use core::mem::MaybeUninit;
/// A 31-bit integer.
///
@ -220,10 +221,6 @@ impl I31 {
}
unsafe impl WasmTy for I31 {
// TODO: This should eventually just be `VMGcRef`, but Cranelift doesn't
// currently support using its `r32` type on 64-bit platforms.
type Abi = u64;
#[inline]
fn valtype() -> ValType {
ValType::Ref(RefType::new(false, HeapType::I31))
@ -243,34 +240,21 @@ unsafe impl WasmTy for I31 {
unreachable!()
}
#[inline]
unsafe fn abi_from_raw(raw: *mut ValRaw) -> Self::Abi {
let raw = (*raw).get_anyref();
if cfg!(debug_assertions) {
let gc_ref = VMGcRef::from_raw_u32(raw).unwrap();
assert!(gc_ref.is_i31());
}
u64::from(raw)
fn store(self, _store: &mut AutoAssertNoGc<'_>, ptr: &mut MaybeUninit<ValRaw>) -> Result<()> {
let r64 = VMGcRef::from_i31(self.into()).into_r64();
let anyref = u32::try_from(r64).unwrap();
ptr.write(ValRaw::anyref(anyref));
Ok(())
}
#[inline]
unsafe fn abi_into_raw(abi: Self::Abi, raw: *mut ValRaw) {
unsafe fn load(_store: &mut AutoAssertNoGc<'_>, ptr: &ValRaw) -> Self {
let raw = ptr.get_anyref();
if cfg!(debug_assertions) {
let gc_ref = VMGcRef::from_r64(abi).unwrap().unwrap();
let gc_ref = VMGcRef::from_raw_u32(raw).unwrap();
assert!(gc_ref.is_i31());
}
let anyref = u32::try_from(abi).unwrap();
*raw = ValRaw::anyref(anyref)
}
#[inline]
fn into_abi(self, _store: &mut AutoAssertNoGc<'_>) -> Result<Self::Abi> {
Ok(VMGcRef::from_i31(self.into()).into_r64())
}
#[inline]
unsafe fn from_abi(abi: Self::Abi, _store: &mut AutoAssertNoGc<'_>) -> Self {
let gc_ref = VMGcRef::from_r64(abi)
let r64 = u64::from(raw);
let gc_ref = VMGcRef::from_r64(r64)
.expect("valid r64")
.expect("non-null");
gc_ref.unwrap_i31().into()
@ -278,10 +262,6 @@ unsafe impl WasmTy for I31 {
}
unsafe impl WasmTy for Option<I31> {
// TODO: This should eventually just be `VMGcRef`, but Cranelift doesn't
// currently support using its `r32` type on 64-bit platforms.
type Abi = u64;
#[inline]
fn valtype() -> ValType {
ValType::Ref(RefType::new(true, HeapType::I31))
@ -301,36 +281,19 @@ unsafe impl WasmTy for Option<I31> {
unreachable!()
}
#[inline]
unsafe fn abi_from_raw(raw: *mut ValRaw) -> Self::Abi {
let raw = (*raw).get_anyref();
if cfg!(debug_assertions) {
if let Some(gc_ref) = VMGcRef::from_raw_u32(raw) {
assert!(gc_ref.is_i31());
fn store(self, store: &mut AutoAssertNoGc<'_>, ptr: &mut MaybeUninit<ValRaw>) -> Result<()> {
match self {
Some(i) => i.store(store, ptr),
None => {
ptr.write(ValRaw::anyref(0));
Ok(())
}
}
u64::from(raw)
}
#[inline]
unsafe fn abi_into_raw(abi: Self::Abi, raw: *mut ValRaw) {
if cfg!(debug_assertions) {
if let Some(gc_ref) = VMGcRef::from_r64(abi).unwrap() {
assert!(gc_ref.is_i31());
}
}
let anyref = u32::try_from(abi).unwrap();
*raw = ValRaw::anyref(anyref)
}
#[inline]
fn into_abi(self, _store: &mut AutoAssertNoGc<'_>) -> Result<Self::Abi> {
Ok(self.map_or(0, |x| VMGcRef::from_i31(x.into()).into_r64()))
}
#[inline]
unsafe fn from_abi(abi: Self::Abi, _store: &mut AutoAssertNoGc<'_>) -> Self {
let gc_ref = VMGcRef::from_r64(abi).expect("valid r64");
gc_ref.map(|r| r.unwrap_i31().into())
unsafe fn load(_store: &mut AutoAssertNoGc<'_>, ptr: &ValRaw) -> Self {
let r64 = u64::from(ptr.get_anyref());
let gc_ref = VMGcRef::from_r64(r64).expect("valid r64")?;
Some(I31(gc_ref.unwrap_i31()))
}
}

crates/wasmtime/src/runtime/gc/noextern.rs (43 changed lines)

@ -2,6 +2,7 @@ use crate::{
store::{AutoAssertNoGc, StoreOpaque},
HeapType, Ref, RefType, Result, Uninhabited, Val, ValRaw, ValType, WasmTy,
};
use core::mem::MaybeUninit;
/// A reference to the abstract `noextern` heap value.
///
@ -89,8 +90,6 @@ impl NoExtern {
}
unsafe impl WasmTy for NoExtern {
type Abi = NoExtern;
#[inline]
fn valtype() -> ValType {
ValType::Ref(RefType::new(false, HeapType::NoExtern))
@ -111,30 +110,16 @@ unsafe impl WasmTy for NoExtern {
match self._inner {}
}
#[inline]
unsafe fn abi_from_raw(_raw: *mut ValRaw) -> Self::Abi {
unreachable!("NoExtern is uninhabited")
}
#[inline]
unsafe fn abi_into_raw(abi: Self::Abi, _raw: *mut ValRaw) {
match abi._inner {}
fn store(self, _store: &mut AutoAssertNoGc<'_>, _ptr: &mut MaybeUninit<ValRaw>) -> Result<()> {
match self._inner {}
}
#[inline]
fn into_abi(self, _store: &mut AutoAssertNoGc<'_>) -> Result<Self::Abi> {
unsafe fn load(_store: &mut AutoAssertNoGc<'_>, _ptr: &ValRaw) -> Self {
unreachable!("NoExtern is uninhabited")
}
#[inline]
unsafe fn from_abi(abi: Self::Abi, _store: &mut AutoAssertNoGc<'_>) -> Self {
match abi._inner {}
}
}
unsafe impl WasmTy for Option<NoExtern> {
type Abi = u32;
#[inline]
fn valtype() -> ValType {
ValType::Ref(RefType::new(true, HeapType::NoExtern))
@ -156,24 +141,14 @@ unsafe impl WasmTy for Option<NoExtern> {
}
#[inline]
unsafe fn abi_from_raw(_raw: *mut ValRaw) -> Self::Abi {
0
}
#[inline]
unsafe fn abi_into_raw(abi: Self::Abi, raw: *mut ValRaw) {
debug_assert_eq!(abi, 0);
*raw = ValRaw::externref(0);
}
#[inline]
fn into_abi(self, _store: &mut AutoAssertNoGc<'_>) -> Result<Self::Abi> {
Ok(0)
fn store(self, _store: &mut AutoAssertNoGc<'_>, ptr: &mut MaybeUninit<ValRaw>) -> Result<()> {
ptr.write(ValRaw::externref(0));
Ok(())
}
#[inline]
unsafe fn from_abi(abi: Self::Abi, _store: &mut AutoAssertNoGc<'_>) -> Self {
debug_assert_eq!(abi, 0);
unsafe fn load(_store: &mut AutoAssertNoGc<'_>, ptr: &ValRaw) -> Self {
debug_assert_eq!(ptr.get_externref(), 0);
None
}
}

crates/wasmtime/src/runtime/instance.rs (22 changed lines)

@ -1,8 +1,8 @@
use crate::linker::{Definition, DefinitionType};
use crate::prelude::*;
use crate::runtime::vm::{
Imports, InstanceAllocationRequest, StorePtr, VMContext, VMFuncRef, VMFunctionImport,
VMGlobalImport, VMMemoryImport, VMNativeCallFunction, VMOpaqueContext, VMTableImport,
Imports, InstanceAllocationRequest, StorePtr, VMFuncRef, VMFunctionImport, VMGlobalImport,
VMMemoryImport, VMOpaqueContext, VMTableImport,
};
use crate::store::{InstanceId, StoreOpaque, Stored};
use crate::types::matching;
@ -12,7 +12,6 @@ use crate::{
};
use alloc::sync::Arc;
use anyhow::{anyhow, bail, Context, Result};
use core::mem;
use core::ptr::NonNull;
use wasmparser::WasmFeatures;
use wasmtime_environ::{
@ -364,11 +363,13 @@ impl Instance {
let caller_vmctx = instance.vmctx();
unsafe {
super::func::invoke_wasm_and_catch_traps(store, |_default_caller| {
let func = mem::transmute::<
NonNull<VMNativeCallFunction>,
extern "C" fn(*mut VMOpaqueContext, *mut VMContext),
>(f.func_ref.as_ref().native_call);
func(f.func_ref.as_ref().vmctx, caller_vmctx)
let func = f.func_ref.as_ref().array_call;
func(
f.func_ref.as_ref().vmctx,
VMOpaqueContext::from_vmcontext(caller_vmctx),
[].as_mut_ptr(),
0,
)
})?;
}
Ok(())
@ -709,7 +710,6 @@ impl OwnedImports {
let f = f.func_ref.as_ref();
self.functions.push(VMFunctionImport {
wasm_call: f.wasm_call.unwrap(),
native_call: f.native_call,
array_call: f.array_call,
vmctx: f.vmctx,
});
@ -819,11 +819,11 @@ impl<T> InstancePre<T> {
if f.func_ref().wasm_call.is_none() {
// `f` needs its `VMFuncRef::wasm_call` patched with a
// Wasm-to-native trampoline.
debug_assert!(matches!(f.host_ctx(), crate::HostContext::Native(_)));
debug_assert!(matches!(f.host_ctx(), crate::HostContext::Array(_)));
func_refs.push(VMFuncRef {
wasm_call: module
.runtime_info()
.wasm_to_native_trampoline(f.sig_index()),
.wasm_to_array_trampoline(f.sig_index()),
..*f.func_ref()
});
}

crates/wasmtime/src/runtime/instantiate.rs (25 changed lines)

@ -19,7 +19,7 @@ use wasmtime_environ::{
pub struct CompiledModule {
module: Arc<Module>,
funcs: PrimaryMap<DefinedFuncIndex, CompiledFunctionInfo>,
wasm_to_native_trampolines: Vec<(ModuleInternedTypeIndex, FunctionLoc)>,
wasm_to_array_trampolines: Vec<(ModuleInternedTypeIndex, FunctionLoc)>,
meta: Metadata,
code_memory: Arc<CodeMemory>,
#[cfg(feature = "debug-builtins")]
@ -55,7 +55,7 @@ impl CompiledModule {
let mut ret = Self {
module: Arc::new(info.module),
funcs: info.funcs,
wasm_to_native_trampolines: info.wasm_to_native_trampolines,
wasm_to_array_trampolines: info.wasm_to_array_trampolines,
#[cfg(feature = "debug-builtins")]
dbg_jit_registration: None,
code_memory,
@ -173,34 +173,21 @@ impl CompiledModule {
Some(&self.text()[loc.start as usize..][..loc.length as usize])
}
/// Get the native-to-Wasm trampoline for the function `index` points to.
///
/// If the function `index` points to does not escape, then `None` is
/// returned.
///
/// These trampolines are used for native callers (e.g. `Func::wrap`)
/// calling Wasm callees.
#[inline]
pub fn native_to_wasm_trampoline(&self, index: DefinedFuncIndex) -> Option<&[u8]> {
let loc = self.funcs[index].native_to_wasm_trampoline?;
Some(&self.text()[loc.start as usize..][..loc.length as usize])
}
/// Get the Wasm-to-native trampoline for the given signature.
/// Get the Wasm-to-array trampoline for the given signature.
///
/// These trampolines are used for filling in
/// `VMFuncRef::wasm_call` for `Func::wrap`-style host funcrefs
/// that don't have access to a compiler when created.
pub fn wasm_to_native_trampoline(&self, signature: ModuleInternedTypeIndex) -> &[u8] {
pub fn wasm_to_array_trampoline(&self, signature: ModuleInternedTypeIndex) -> &[u8] {
let idx = match self
.wasm_to_native_trampolines
.wasm_to_array_trampolines
.binary_search_by_key(&signature, |entry| entry.0)
{
Ok(idx) => idx,
Err(_) => panic!("missing trampoline for {signature:?}"),
};
let (_, loc) = self.wasm_to_native_trampolines[idx];
let (_, loc) = self.wasm_to_array_trampolines[idx];
&self.text()[loc.start as usize..][..loc.length as usize]
}

crates/wasmtime/src/runtime/module.rs (28 changed lines)

@ -1,7 +1,7 @@
use crate::prelude::*;
use crate::runtime::vm::{
CompiledModuleId, MemoryImage, MmapVec, ModuleMemoryImages, VMArrayCallFunction,
VMNativeCallFunction, VMWasmCallFunction,
VMWasmCallFunction,
};
use crate::sync::OnceLock;
use crate::{
@ -1095,25 +1095,12 @@ impl crate::runtime::vm::ModuleRuntimeInfo for ModuleInner {
NonNull::new(ptr).unwrap()
}
fn native_to_wasm_trampoline(
&self,
index: DefinedFuncIndex,
) -> Option<NonNull<VMNativeCallFunction>> {
let ptr = self
.module
.native_to_wasm_trampoline(index)?
.as_ptr()
.cast::<VMNativeCallFunction>()
.cast_mut();
Some(NonNull::new(ptr).unwrap())
}
fn array_to_wasm_trampoline(&self, index: DefinedFuncIndex) -> Option<VMArrayCallFunction> {
let ptr = self.module.array_to_wasm_trampoline(index)?.as_ptr();
Some(unsafe { mem::transmute::<*const u8, VMArrayCallFunction>(ptr) })
}
fn wasm_to_native_trampoline(
fn wasm_to_array_trampoline(
&self,
signature: VMSharedTypeIndex,
) -> Option<NonNull<VMWasmCallFunction>> {
@ -1138,7 +1125,7 @@ impl crate::runtime::vm::ModuleRuntimeInfo for ModuleInner {
let ptr = self
.module
.wasm_to_native_trampoline(trampoline_module_ty)
.wasm_to_array_trampoline(trampoline_module_ty)
.as_ptr()
.cast::<VMWasmCallFunction>()
.cast_mut();
@ -1244,14 +1231,7 @@ impl crate::runtime::vm::ModuleRuntimeInfo for BareModuleInfo {
unreachable!()
}
fn native_to_wasm_trampoline(
&self,
_index: DefinedFuncIndex,
) -> Option<NonNull<VMNativeCallFunction>> {
unreachable!()
}
fn wasm_to_native_trampoline(
fn wasm_to_array_trampoline(
&self,
_signature: VMSharedTypeIndex,
) -> Option<NonNull<VMWasmCallFunction>> {

crates/wasmtime/src/runtime/module/registry.rs (4 changed lines)

@ -177,7 +177,7 @@ impl ModuleRegistry {
Some((info, module))
}
pub fn wasm_to_native_trampoline(
pub fn wasm_to_array_trampoline(
&self,
sig: VMSharedTypeIndex,
) -> Option<NonNull<VMWasmCallFunction>> {
@ -190,7 +190,7 @@ impl ModuleRegistry {
// See also the comment in `ModuleInner::wasm_to_native_trampoline`.
for (_, code) in self.loaded_code.values() {
for module in code.modules.values() {
if let Some(trampoline) = module.runtime_info().wasm_to_native_trampoline(sig) {
if let Some(trampoline) = module.runtime_info().wasm_to_array_trampoline(sig) {
return Some(trampoline);
}
}

crates/wasmtime/src/runtime/store/func_refs.rs (12 changed lines)

@ -3,7 +3,7 @@
use crate::module::ModuleRegistry;
use crate::prelude::*;
use crate::runtime::vm::{SendSyncPtr, VMFuncRef, VMNativeCallHostFuncContext};
use crate::runtime::vm::{SendSyncPtr, VMArrayCallHostFuncContext, VMFuncRef};
use alloc::sync::Arc;
use core::ptr::NonNull;
@ -55,9 +55,9 @@ impl FuncRefs {
/// `FuncRefs` and only while the store holding this `FuncRefs` exists.
pub unsafe fn push(&mut self, func_ref: VMFuncRef) -> NonNull<VMFuncRef> {
debug_assert!(func_ref.wasm_call.is_none());
// Debug assert that the vmctx is a `VMNativeCallHostFuncContext` as
// Debug assert that the vmctx is a `VMArrayCallHostFuncContext` as
// that is the only kind that can have holes.
let _ = unsafe { VMNativeCallHostFuncContext::from_opaque(func_ref.vmctx) };
let _ = unsafe { VMArrayCallHostFuncContext::from_opaque(func_ref.vmctx) };
let func_ref = self.bump.alloc(func_ref);
let unpatched = SendSyncPtr::from(func_ref);
@ -73,11 +73,11 @@ impl FuncRefs {
let func_ref = f.as_mut();
debug_assert!(func_ref.wasm_call.is_none());
// Debug assert that the vmctx is a `VMNativeCallHostFuncContext` as
// Debug assert that the vmctx is a `VMArrayCallHostFuncContext` as
// that is the only kind that can have holes.
let _ = VMNativeCallHostFuncContext::from_opaque(func_ref.vmctx);
let _ = VMArrayCallHostFuncContext::from_opaque(func_ref.vmctx);
func_ref.wasm_call = modules.wasm_to_native_trampoline(func_ref.type_index);
func_ref.wasm_call = modules.wasm_to_array_trampoline(func_ref.type_index);
func_ref.wasm_call.is_none()
}
});

crates/wasmtime/src/runtime/trampoline/func.rs (49 changed lines)

@ -4,14 +4,11 @@ use crate::runtime::vm::{
StoreBox, VMArrayCallHostFuncContext, VMContext, VMFuncRef, VMOpaqueContext,
};
use crate::type_registry::RegisteredType;
use crate::{code_memory::CodeMemory, Engine, FuncType, ValRaw};
use crate::{FuncType, ValRaw};
use anyhow::Result;
use core::ptr::NonNull;
struct TrampolineState<F> {
func: F,
#[allow(dead_code)]
code_memory: CodeMemory,
// Need to keep our `VMSharedTypeIndex` registered in the engine.
#[allow(dead_code)]
@ -74,66 +71,26 @@ unsafe extern "C" fn array_call_shim<F>(
pub fn create_array_call_function<F>(
ft: &FuncType,
func: F,
engine: &Engine,
) -> Result<StoreBox<VMArrayCallHostFuncContext>>
where
F: Fn(*mut VMContext, &mut [ValRaw]) -> Result<()> + Send + Sync + 'static,
{
use crate::compile::finish_object;
use crate::prelude::*;
use std::ptr;
let mut obj = engine
.compiler()
.object(wasmtime_environ::ObjectKind::Module)?;
let (wasm_call_range, native_call_range) = engine
.compiler()
.emit_trampolines_for_array_call_host_func(
ft.as_wasm_func_type(),
array_call_shim::<F> as usize,
&mut obj,
)?;
engine.append_bti(&mut obj);
let obj = finish_object(wasmtime_environ::ObjectBuilder::new(obj, engine.tunables()))?;
// Copy the results of JIT compilation into executable memory, and this will
// also take care of unwind table registration.
let mut code_memory = CodeMemory::new(obj)?;
code_memory.publish()?;
engine
.profiler()
.register_module(&code_memory.mmap()[..], &|_| None);
// Extract the host/wasm trampolines from the results of compilation since
// we know their start/length.
let text = code_memory.text();
let array_call = array_call_shim::<F>;
let wasm_call = text[wasm_call_range.start as usize..].as_ptr() as *mut _;
let wasm_call = Some(NonNull::new(wasm_call).unwrap());
let native_call = text[native_call_range.start as usize..].as_ptr() as *mut _;
let native_call = NonNull::new(native_call).unwrap();
let sig = ft.clone().into_registered_type();
unsafe {
Ok(VMArrayCallHostFuncContext::new(
VMFuncRef {
array_call,
wasm_call,
native_call,
wasm_call: None,
type_index: sig.index(),
vmctx: ptr::null_mut(),
},
Box::new(TrampolineState {
func,
code_memory,
sig,
}),
Box::new(TrampolineState { func, sig }),
))
}
}

crates/wasmtime/src/runtime/v128.rs (22 changed lines)

@ -8,6 +8,7 @@ use crate::store::{AutoAssertNoGc, StoreOpaque};
use crate::{Result, ValRaw, ValType, WasmTy};
use core::cmp::Ordering;
use core::fmt;
use core::mem::MaybeUninit;
/// Representation of a 128-bit vector type, `v128`, for WebAssembly.
///
@ -83,8 +84,6 @@ impl Ord for V128 {
// the documentation above in the `cfg_if!` for why this is conditional.
#[cfg(any(target_arch = "x86_64", target_arch = "aarch64"))]
unsafe impl WasmTy for V128 {
type Abi = V128Abi;
#[inline]
fn valtype() -> ValType {
ValType::V128
@ -105,22 +104,13 @@ unsafe impl WasmTy for V128 {
}
#[inline]
unsafe fn abi_from_raw(raw: *mut ValRaw) -> Self::Abi {
V128::from((*raw).get_v128()).0
}
#[inline]
unsafe fn abi_into_raw(abi: Self::Abi, raw: *mut ValRaw) {
*raw = ValRaw::v128(V128(abi).as_u128());
}
#[inline]
fn into_abi(self, _store: &mut AutoAssertNoGc<'_>) -> Result<Self::Abi> {
Ok(self.0)
fn store(self, _store: &mut AutoAssertNoGc<'_>, ptr: &mut MaybeUninit<ValRaw>) -> Result<()> {
ptr.write(ValRaw::v128(self.as_u128()));
Ok(())
}
#[inline]
unsafe fn from_abi(abi: Self::Abi, _store: &mut AutoAssertNoGc<'_>) -> Self {
V128(abi)
unsafe fn load(_store: &mut AutoAssertNoGc<'_>, ptr: &ValRaw) -> Self {
V128::from(ptr.get_v128())
}
}

crates/wasmtime/src/runtime/vm.rs (17 changed lines)

@ -69,8 +69,7 @@ pub use crate::runtime::vm::traphandlers::*;
pub use crate::runtime::vm::vmcontext::{
VMArrayCallFunction, VMArrayCallHostFuncContext, VMContext, VMFuncRef, VMFunctionBody,
VMFunctionImport, VMGlobalDefinition, VMGlobalImport, VMMemoryDefinition, VMMemoryImport,
VMNativeCallFunction, VMNativeCallHostFuncContext, VMOpaqueContext, VMRuntimeLimits,
VMTableImport, VMWasmCallFunction, ValRaw,
VMOpaqueContext, VMRuntimeLimits, VMTableImport, VMWasmCallFunction, ValRaw,
};
pub use send_sync_ptr::SendSyncPtr;
@ -194,16 +193,6 @@ pub trait ModuleRuntimeInfo: Send + Sync + 'static {
/// Returns the address, in memory, that the function `index` resides at.
fn function(&self, index: DefinedFuncIndex) -> NonNull<VMWasmCallFunction>;
/// Returns the address, in memory, of the trampoline that allows the given
/// defined Wasm function to be called by the native calling convention.
///
/// Returns `None` for Wasm functions which do not escape, and therefore are
/// not callable from outside the Wasm module itself.
fn native_to_wasm_trampoline(
&self,
index: DefinedFuncIndex,
) -> Option<NonNull<VMNativeCallFunction>>;
/// Returns the address, in memory, of the trampoline that allows the given
/// defined Wasm function to be called by the array calling convention.
///
@ -212,8 +201,8 @@ pub trait ModuleRuntimeInfo: Send + Sync + 'static {
fn array_to_wasm_trampoline(&self, index: DefinedFuncIndex) -> Option<VMArrayCallFunction>;
/// Return the address, in memory, of the trampoline that allows Wasm to
/// call a native function of the given signature.
fn wasm_to_native_trampoline(
/// call an array-call function of the given signature.
fn wasm_to_array_trampoline(
&self,
signature: VMSharedTypeIndex,
) -> Option<NonNull<VMWasmCallFunction>>;

crates/wasmtime/src/runtime/vm/component.rs (7 changed lines)

@ -8,7 +8,7 @@
use crate::runtime::vm::{
SendSyncPtr, Store, VMArrayCallFunction, VMFuncRef, VMGlobalDefinition, VMMemoryDefinition,
VMNativeCallFunction, VMOpaqueContext, VMWasmCallFunction, ValRaw,
VMOpaqueContext, VMWasmCallFunction, ValRaw,
};
use alloc::alloc::Layout;
use alloc::sync::Arc;
@ -389,7 +389,6 @@ impl ComponentInstance {
&mut self,
idx: TrampolineIndex,
wasm_call: NonNull<VMWasmCallFunction>,
native_call: NonNull<VMNativeCallFunction>,
array_call: VMArrayCallFunction,
type_index: VMSharedTypeIndex,
) {
@ -399,7 +398,6 @@ impl ComponentInstance {
let vmctx = VMOpaqueContext::from_vmcomponent(self.vmctx());
*self.vmctx_plus_offset_mut(offset) = VMFuncRef {
wasm_call: Some(wasm_call),
native_call,
array_call,
type_index,
vmctx,
@ -731,13 +729,12 @@ impl OwnedComponentInstance {
&mut self,
idx: TrampolineIndex,
wasm_call: NonNull<VMWasmCallFunction>,
native_call: NonNull<VMNativeCallFunction>,
array_call: VMArrayCallFunction,
type_index: VMSharedTypeIndex,
) {
unsafe {
self.instance_mut()
.set_trampoline(idx, wasm_call, native_call, array_call, type_index)
.set_trampoline(idx, wasm_call, array_call, type_index)
}
}

crates/wasmtime/src/runtime/vm/instance.rs (5 changed lines)

@ -719,10 +719,6 @@ impl Instance {
let func_ref = if let Some(def_index) = self.module().defined_func_index(index) {
VMFuncRef {
native_call: self
.runtime_info
.native_to_wasm_trampoline(def_index)
.expect("should have native-to-Wasm trampoline for escaping function"),
array_call: self
.runtime_info
.array_to_wasm_trampoline(def_index)
@ -734,7 +730,6 @@ impl Instance {
} else {
let import = self.imported_function(index);
VMFuncRef {
native_call: import.native_call,
array_call: import.array_call,
wasm_call: Some(import.wasm_call),
vmctx: import.vmctx,

crates/wasmtime/src/runtime/vm/vmcontext.rs (37 changed lines)

@ -3,7 +3,7 @@
mod vm_host_func_context;
pub use self::vm_host_func_context::{VMArrayCallHostFuncContext, VMNativeCallHostFuncContext};
pub use self::vm_host_func_context::VMArrayCallHostFuncContext;
use crate::runtime::vm::{GcStore, VMGcRef};
use core::cell::UnsafeCell;
use core::ffi::c_void;
@ -38,18 +38,6 @@ use wasmtime_environ::{
pub type VMArrayCallFunction =
unsafe extern "C" fn(*mut VMOpaqueContext, *mut VMOpaqueContext, *mut ValRaw, usize);
/// A function pointer that exposes the native calling convention.
///
/// Different Wasm function types end up mapping to different Rust function
/// types, so this isn't simply a type alias the way that `VMArrayCallFunction`
/// is.
///
/// This is the default calling convention for the target (e.g. System-V or
/// fast-call) except multiple return values are handled by returning the first
/// return value in a register and everything else through a return-pointer.
#[repr(transparent)]
pub struct VMNativeCallFunction(VMFunctionBody);
/// A function pointer that exposes the Wasm calling convention.
///
/// In practice, different Wasm function types end up mapping to different Rust
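To make the array calling convention concrete, here is a minimal sketch of a function matching the `VMArrayCallFunction` shape shown above: the `*mut ValRaw` buffer carries the parameters on entry and the results on return. The `ValRaw` union and the context parameters are simplified stand-ins here, not Wasmtime's real definitions.

use core::ffi::c_void;

// Simplified stand-in for Wasmtime's `ValRaw`: one Wasm value per fixed-size slot.
#[repr(C)]
union ValRaw {
    i32: i32,
    i64: i64,
    f32: u32,
    f64: u64,
}

// Shape of a `VMArrayCallFunction`: (callee vmctx, caller vmctx, values, len).
// This hypothetical host function implements `(func (param i32 i32) (result i32))`.
unsafe extern "C" fn add_i32(
    _callee_vmctx: *mut c_void, // `*mut VMOpaqueContext` in Wasmtime
    _caller_vmctx: *mut c_void,
    values: *mut ValRaw, // parameters on entry, results on return
    _len: usize,         // capacity of the buffer in `ValRaw` slots
) {
    let a = (*values.add(0)).i32;
    let b = (*values.add(1)).i32;
    (*values.add(0)).i32 = a.wrapping_add(b);
}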
@ -68,9 +56,6 @@ pub struct VMFunctionImport {
/// Function pointer to use when calling this imported function from Wasm.
pub wasm_call: NonNull<VMWasmCallFunction>,
/// Function pointer to use when calling this imported function from native code.
pub native_call: NonNull<VMNativeCallFunction>,
/// Function pointer to use when calling this imported function with the
/// "array" calling convention that `Func::new` et al use.
pub array_call: VMArrayCallFunction,
@ -108,10 +93,6 @@ mod test_vmfunction_import {
offset_of!(VMFunctionImport, wasm_call),
usize::from(offsets.vmfunction_import_wasm_call())
);
assert_eq!(
offset_of!(VMFunctionImport, native_call),
usize::from(offsets.vmfunction_import_native_call())
);
assert_eq!(
offset_of!(VMFunctionImport, array_call),
usize::from(offsets.vmfunction_import_array_call())
@ -646,10 +627,6 @@ mod test_vmshared_type_index {
#[derive(Debug, Clone)]
#[repr(C)]
pub struct VMFuncRef {
/// Function pointer for this funcref if being called via the native calling
/// convention.
pub native_call: NonNull<VMNativeCallFunction>,
/// Function pointer for this funcref if being called via the "array"
/// calling convention that `Func::new` et al use.
pub array_call: VMArrayCallFunction,
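Dropping `native_call` is what shifts every funcref field offset later in this diff. A hedged sketch of the before/after layout on a 64-bit target, with the field order inferred from the offsets in the disassembly expectations and pointer types approximated:

#[repr(C)]
struct VMFuncRefBefore {
    native_call: *const u8, // +0, removed by this PR
    array_call: *const u8,  // +8
    wasm_call: *const u8,   // +16
    type_index: u32,        // +24 (the `icall_null` signature check in the tests)
    // 4 bytes of padding
    vmctx: *const u8,       // +32
} // 40 bytes

#[repr(C)]
struct VMFuncRefAfter {
    array_call: *const u8, // +0
    wasm_call: *const u8,  // +8
    type_index: u32,       // +16
    // 4 bytes of padding
    vmctx: *const u8,      // +24
} // 32 bytes, one pointer smaller

Every `tests/disas/*.wat` expectation below changes only because these offsets drop by 8.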
@ -705,10 +682,6 @@ mod test_vm_func_ref {
size_of::<VMFuncRef>(),
usize::from(offsets.ptr.size_of_vm_func_ref())
);
assert_eq!(
offset_of!(VMFuncRef, native_call),
usize::from(offsets.ptr.vm_func_ref_native_call())
);
assert_eq!(
offset_of!(VMFuncRef, array_call),
usize::from(offsets.ptr.vm_func_ref_array_call())
@ -1326,12 +1299,4 @@ impl VMOpaqueContext {
) -> *mut VMOpaqueContext {
ptr.cast()
}
/// Helper function to clearly indicate that casts are desired.
#[inline]
pub fn from_vm_native_call_host_func_context(
ptr: *mut VMNativeCallHostFuncContext,
) -> *mut VMOpaqueContext {
ptr.cast()
}
}

67
crates/wasmtime/src/runtime/vm/vmcontext/vm_host_func_context.rs

@ -6,7 +6,7 @@ use super::VMOpaqueContext;
use crate::prelude::*;
use crate::runtime::vm::{StoreBox, VMFuncRef};
use core::any::Any;
use wasmtime_environ::{VM_ARRAY_CALL_HOST_FUNC_MAGIC, VM_NATIVE_CALL_HOST_FUNC_MAGIC};
use wasmtime_environ::VM_ARRAY_CALL_HOST_FUNC_MAGIC;
/// The `VM*Context` for array-call host functions.
///
@ -68,71 +68,12 @@ impl VMArrayCallHostFuncContext {
}
}
/// The `VM*Context` for native-call host functions.
///
/// Its `magic` field must always be
/// `wasmtime_environ::VM_NATIVE_CALL_HOST_FUNC_MAGIC`, and this is how you can
/// determine whether a `VM*Context` is a `VMNativeCallHostFuncContext` versus a
/// different kind of context.
#[repr(C)]
pub struct VMNativeCallHostFuncContext {
magic: u32,
// _padding: u32, // (on 64-bit systems)
func_ref: VMFuncRef,
host_state: Box<dyn Any + Send + Sync>,
}
#[test]
fn vmnative_call_host_func_context_offsets() {
fn vmarray_call_host_func_context_offsets() {
use memoffset::offset_of;
use wasmtime_environ::{HostPtr, PtrSize};
assert_eq!(
usize::from(HostPtr.vmnative_call_host_func_context_func_ref()),
offset_of!(VMNativeCallHostFuncContext, func_ref)
usize::from(HostPtr.vmarray_call_host_func_context_func_ref()),
offset_of!(VMArrayCallHostFuncContext, func_ref)
);
}
impl VMNativeCallHostFuncContext {
/// Create the context for the given host function.
///
/// # Safety
///
/// The `host_func` must be a pointer to a host (not Wasm) function and it
/// must be `Send` and `Sync`.
pub unsafe fn new(
func_ref: VMFuncRef,
host_state: Box<dyn Any + Send + Sync>,
) -> StoreBox<VMNativeCallHostFuncContext> {
let ctx = StoreBox::new(VMNativeCallHostFuncContext {
magic: wasmtime_environ::VM_NATIVE_CALL_HOST_FUNC_MAGIC,
func_ref,
host_state,
});
let vmctx = VMOpaqueContext::from_vm_native_call_host_func_context(ctx.get());
unsafe {
(*ctx.get()).func_ref.vmctx = vmctx;
}
ctx
}
/// Get the host state for this host function context.
#[inline]
pub fn host_state(&self) -> &(dyn Any + Send + Sync) {
&*self.host_state
}
/// Get this context's `VMFuncRef`.
#[inline]
pub fn func_ref(&self) -> &VMFuncRef {
&self.func_ref
}
/// Helper function to cast between context types using a debug assertion to
/// protect against some mistakes.
#[inline]
pub unsafe fn from_opaque(opaque: *mut VMOpaqueContext) -> *mut VMNativeCallHostFuncContext {
// See comments in `VMContext::from_opaque` for this debug assert
debug_assert_eq!((*opaque).magic, VM_NATIVE_CALL_HOST_FUNC_MAGIC);
opaque.cast()
}
}
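The removed context relied on the same magic-field discrimination that `VMArrayCallHostFuncContext` keeps using: every `VM*Context` begins with a `u32` magic so an opaque context pointer can be checked before it is cast. A hedged sketch of the pattern (the constant value and struct bodies are illustrative, not Wasmtime's):

const VM_ARRAY_CALL_HOST_FUNC_MAGIC: u32 = 0x61726179; // illustrative value only

#[repr(C)]
struct VMOpaqueContext {
    magic: u32,
}

#[repr(C)]
struct VMArrayCallHostFuncContext {
    magic: u32, // always VM_ARRAY_CALL_HOST_FUNC_MAGIC
    // func_ref, host_state, ...
}

unsafe fn from_opaque(opaque: *mut VMOpaqueContext) -> *mut VMArrayCallHostFuncContext {
    // Mirrors the debug assertion in the removed `from_opaque` above.
    debug_assert_eq!((*opaque).magic, VM_ARRAY_CALL_HOST_FUNC_MAGIC);
    opaque.cast()
}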

26
crates/winch/src/compiler.rs

@ -144,22 +144,12 @@ impl wasmtime_environ::Compiler for Compiler {
.compile_array_to_wasm_trampoline(translation, types, index)
}
fn compile_native_to_wasm_trampoline(
&self,
translation: &ModuleTranslation<'_>,
types: &ModuleTypesBuilder,
index: DefinedFuncIndex,
) -> Result<Box<dyn Any + Send>, CompileError> {
self.trampolines
.compile_native_to_wasm_trampoline(translation, types, index)
}
fn compile_wasm_to_native_trampoline(
fn compile_wasm_to_array_trampoline(
&self,
wasm_func_ty: &wasmtime_environ::WasmFuncType,
) -> Result<Box<dyn Any + Send>, CompileError> {
self.trampolines
.compile_wasm_to_native_trampoline(wasm_func_ty)
.compile_wasm_to_array_trampoline(wasm_func_ty)
}
fn append_code(
@ -197,18 +187,6 @@ impl wasmtime_environ::Compiler for Compiler {
Ok(ret)
}
fn emit_trampolines_for_array_call_host_func(
&self,
ty: &wasmtime_environ::WasmFuncType,
// Actually `host_fn: VMArrayCallFunction` but that type is not
// available in `wasmtime-environ`.
host_fn: usize,
obj: &mut Object<'static>,
) -> Result<(FunctionLoc, FunctionLoc)> {
drop((ty, host_fn, obj));
todo!()
}
fn triple(&self) -> &target_lexicon::Triple {
self.isa.triple()
}

12
tests/disas/icall-loop.wat

@ -61,7 +61,7 @@
;; @002b jump block5(v17)
;;
;; block5(v14: i64):
;; @002b v21 = load.i32 icall_null aligned readonly v14+24
;; @002b v21 = load.i32 icall_null aligned readonly v14+16
;; @002b v22 = icmp eq v21, v20
;; @002b brif v22, block7, block6
;;
@ -69,8 +69,8 @@
;; @002b trap bad_sig
;;
;; block7:
;; @002b v23 = load.i64 notrap aligned readonly v14+16
;; @002b v24 = load.i64 notrap aligned readonly v14+32
;; @002b v23 = load.i64 notrap aligned readonly v14+8
;; @002b v24 = load.i64 notrap aligned readonly v14+24
;; @002b v25 = call_indirect sig0, v23(v24, v0)
;; @002e jump block2
;; }
@ -111,7 +111,7 @@
;; @0038 jump block5(v16)
;;
;; block5(v13: i64):
;; @0038 v20 = load.i32 icall_null aligned readonly v13+24
;; @0038 v20 = load.i32 icall_null aligned readonly v13+16
;; @0038 v21 = icmp eq v20, v19
;; @0038 brif v21, block7, block6
;;
@ -119,8 +119,8 @@
;; @0038 trap bad_sig
;;
;; block7:
;; @0038 v22 = load.i64 notrap aligned readonly v13+16
;; @0038 v23 = load.i64 notrap aligned readonly v13+32
;; @0038 v22 = load.i64 notrap aligned readonly v13+8
;; @0038 v23 = load.i64 notrap aligned readonly v13+24
;; @0038 v24 = call_indirect sig0, v22(v23, v0)
;; @003b jump block2
;; }

6
tests/disas/icall-simd.wat

@ -42,11 +42,11 @@
;; @0033 v19 = global_value.i64 gv3
;; @0033 v20 = load.i64 notrap aligned readonly v19+80
;; @0033 v21 = load.i32 notrap aligned readonly v20
;; @0033 v22 = load.i32 icall_null aligned readonly v15+24
;; @0033 v22 = load.i32 icall_null aligned readonly v15+16
;; @0033 v23 = icmp eq v22, v21
;; @0033 trapz v23, bad_sig
;; @0033 v24 = load.i64 notrap aligned readonly v15+16
;; @0033 v25 = load.i64 notrap aligned readonly v15+32
;; @0033 v24 = load.i64 notrap aligned readonly v15+8
;; @0033 v25 = load.i64 notrap aligned readonly v15+24
;; @0033 v26 = call_indirect sig0, v24(v25, v0, v3)
;; @0036 jump block1(v26)
;;

6
tests/disas/icall.wat

@ -42,11 +42,11 @@
;; @0033 v19 = global_value.i64 gv3
;; @0033 v20 = load.i64 notrap aligned readonly v19+80
;; @0033 v21 = load.i32 notrap aligned readonly v20
;; @0033 v22 = load.i32 icall_null aligned readonly v15+24
;; @0033 v22 = load.i32 icall_null aligned readonly v15+16
;; @0033 v23 = icmp eq v22, v21
;; @0033 trapz v23, bad_sig
;; @0033 v24 = load.i64 notrap aligned readonly v15+16
;; @0033 v25 = load.i64 notrap aligned readonly v15+32
;; @0033 v24 = load.i64 notrap aligned readonly v15+8
;; @0033 v25 = load.i64 notrap aligned readonly v15+24
;; @0033 v26 = call_indirect sig0, v24(v25, v0, v3)
;; @0036 jump block1(v26)
;;

6
tests/disas/indirect-call-caching-exclude-0-index.wat

@ -94,11 +94,11 @@
;; @0050 v18 = global_value.i64 gv3
;; @0050 v19 = load.i64 notrap aligned readonly v18+80
;; @0050 v20 = load.i32 notrap aligned readonly v19
;; @0050 v21 = load.i32 icall_null aligned readonly v14+24
;; @0050 v21 = load.i32 icall_null aligned readonly v14+16
;; @0050 v22 = icmp eq v21, v20
;; @0050 trapz v22, bad_sig
;; @0050 v23 = load.i64 notrap aligned readonly v14+16
;; @0050 v24 = load.i64 notrap aligned readonly v14+32
;; @0050 v23 = load.i64 notrap aligned readonly v14+8
;; @0050 v24 = load.i64 notrap aligned readonly v14+24
;; @0050 v25 = call_indirect sig0, v23(v24, v0)
;; @0053 jump block1(v25)
;;

6
tests/disas/indirect-call-caching-exclude-table-export.wat

@ -93,11 +93,11 @@
;; @0054 v18 = global_value.i64 gv3
;; @0054 v19 = load.i64 notrap aligned readonly v18+80
;; @0054 v20 = load.i32 notrap aligned readonly v19
;; @0054 v21 = load.i32 icall_null aligned readonly v14+24
;; @0054 v21 = load.i32 icall_null aligned readonly v14+16
;; @0054 v22 = icmp eq v21, v20
;; @0054 trapz v22, bad_sig
;; @0054 v23 = load.i64 notrap aligned readonly v14+16
;; @0054 v24 = load.i64 notrap aligned readonly v14+32
;; @0054 v23 = load.i64 notrap aligned readonly v14+8
;; @0054 v24 = load.i64 notrap aligned readonly v14+24
;; @0054 v25 = call_indirect sig0, v23(v24, v0)
;; @0057 jump block1(v25)
;;

6
tests/disas/indirect-call-caching-exclude-table-writes.wat

@ -98,11 +98,11 @@
;; @0063 v18 = global_value.i64 gv3
;; @0063 v19 = load.i64 notrap aligned readonly v18+80
;; @0063 v20 = load.i32 notrap aligned readonly v19
;; @0063 v21 = load.i32 icall_null aligned readonly v14+24
;; @0063 v21 = load.i32 icall_null aligned readonly v14+16
;; @0063 v22 = icmp eq v21, v20
;; @0063 trapz v22, bad_sig
;; @0063 v23 = load.i64 notrap aligned readonly v14+16
;; @0063 v24 = load.i64 notrap aligned readonly v14+32
;; @0063 v23 = load.i64 notrap aligned readonly v14+8
;; @0063 v24 = load.i64 notrap aligned readonly v14+24
;; @0063 v25 = call_indirect sig0, v23(v24, v0)
;; @0066 jump block1(v25)
;;

22
tests/disas/indirect-call-caching-slot-limit-1.wat

@ -33,7 +33,7 @@
;;
;; block0(v0: i64, v1: i64, v2: i32):
;; @0033 v4 = global_value.i64 gv3
;; @0033 v5 = iadd_imm v4, 152
;; @0033 v5 = iadd_imm v4, 144
;; @0033 v6 = load.i32 notrap aligned v5+8
;; @0033 v7 = load.i64 notrap aligned v5
;; @0033 v8 = icmp eq v6, v2
@ -60,7 +60,7 @@
;; block3(v31: i64, v32: i64):
;; @0033 v33 = call_indirect sig0, v31(v32, v0)
;; @0036 v34 = global_value.i64 gv3
;; @0036 v35 = iadd_imm v34, 168
;; @0036 v35 = iadd_imm v34, 160
;; @0036 v36 = load.i32 notrap aligned v35+8
;; @0036 v37 = load.i64 notrap aligned v35
;; @0036 v38 = icmp eq v36, v33
@ -108,11 +108,11 @@
;; @0033 v23 = global_value.i64 gv3
;; @0033 v24 = load.i64 notrap aligned readonly v23+80
;; @0033 v25 = load.i32 notrap aligned readonly v24+4
;; @0033 v26 = load.i32 icall_null aligned readonly v19+24
;; @0033 v26 = load.i32 icall_null aligned readonly v19+16
;; @0033 v27 = icmp eq v26, v25
;; @0033 trapz v27, bad_sig
;; @0033 v28 = load.i64 notrap aligned readonly v19+16
;; @0033 v29 = load.i64 notrap aligned readonly v19+32
;; @0033 v28 = load.i64 notrap aligned readonly v19+8
;; @0033 v29 = load.i64 notrap aligned readonly v19+24
;; @0033 v30 = icmp eq v29, v4
;; @0033 brif v30, block4, block3(v28, v29)
;;
@ -126,11 +126,11 @@
;; @0036 v53 = global_value.i64 gv3
;; @0036 v54 = load.i64 notrap aligned readonly v53+80
;; @0036 v55 = load.i32 notrap aligned readonly v54+4
;; @0036 v56 = load.i32 icall_null aligned readonly v49+24
;; @0036 v56 = load.i32 icall_null aligned readonly v49+16
;; @0036 v57 = icmp eq v56, v55
;; @0036 trapz v57, bad_sig
;; @0036 v58 = load.i64 notrap aligned readonly v49+16
;; @0036 v59 = load.i64 notrap aligned readonly v49+32
;; @0036 v58 = load.i64 notrap aligned readonly v49+8
;; @0036 v59 = load.i64 notrap aligned readonly v49+24
;; @0036 v60 = icmp eq v59, v34
;; @0036 brif v60, block9, block8(v58, v59)
;;
@ -144,11 +144,11 @@
;; @0039 v78 = global_value.i64 gv3
;; @0039 v79 = load.i64 notrap aligned readonly v78+80
;; @0039 v80 = load.i32 notrap aligned readonly v79+4
;; @0039 v81 = load.i32 icall_null aligned readonly v74+24
;; @0039 v81 = load.i32 icall_null aligned readonly v74+16
;; @0039 v82 = icmp eq v81, v80
;; @0039 trapz v82, bad_sig
;; @0039 v83 = load.i64 notrap aligned readonly v74+16
;; @0039 v84 = load.i64 notrap aligned readonly v74+32
;; @0039 v83 = load.i64 notrap aligned readonly v74+8
;; @0039 v84 = load.i64 notrap aligned readonly v74+24
;; @0039 v85 = call_indirect sig0, v83(v84, v0)
;; @003c jump block1(v85)
;;

22
tests/disas/indirect-call-caching-slot-limit-2.wat

@ -38,7 +38,7 @@
;;
;; block0(v0: i64, v1: i64, v2: i32):
;; @0040 v4 = global_value.i64 gv3
;; @0040 v5 = iadd_imm v4, 192
;; @0040 v5 = iadd_imm v4, 176
;; @0040 v6 = load.i32 notrap aligned v5+8
;; @0040 v7 = load.i64 notrap aligned v5
;; @0040 v8 = icmp eq v6, v2
@ -65,7 +65,7 @@
;; block3(v31: i64, v32: i64):
;; @0040 v33 = call_indirect sig0, v31(v32, v0)
;; @0043 v34 = global_value.i64 gv3
;; @0043 v35 = iadd_imm v34, 208
;; @0043 v35 = iadd_imm v34, 192
;; @0043 v36 = load.i32 notrap aligned v35+8
;; @0043 v37 = load.i64 notrap aligned v35
;; @0043 v38 = icmp eq v36, v33
@ -103,11 +103,11 @@
;; @0040 v23 = global_value.i64 gv3
;; @0040 v24 = load.i64 notrap aligned readonly v23+80
;; @0040 v25 = load.i32 notrap aligned readonly v24+4
;; @0040 v26 = load.i32 icall_null aligned readonly v19+24
;; @0040 v26 = load.i32 icall_null aligned readonly v19+16
;; @0040 v27 = icmp eq v26, v25
;; @0040 trapz v27, bad_sig
;; @0040 v28 = load.i64 notrap aligned readonly v19+16
;; @0040 v29 = load.i64 notrap aligned readonly v19+32
;; @0040 v28 = load.i64 notrap aligned readonly v19+8
;; @0040 v29 = load.i64 notrap aligned readonly v19+24
;; @0040 v30 = icmp eq v29, v4
;; @0040 brif v30, block4, block3(v28, v29)
;;
@ -121,11 +121,11 @@
;; @0043 v53 = global_value.i64 gv3
;; @0043 v54 = load.i64 notrap aligned readonly v53+80
;; @0043 v55 = load.i32 notrap aligned readonly v54+4
;; @0043 v56 = load.i32 icall_null aligned readonly v49+24
;; @0043 v56 = load.i32 icall_null aligned readonly v49+16
;; @0043 v57 = icmp eq v56, v55
;; @0043 trapz v57, bad_sig
;; @0043 v58 = load.i64 notrap aligned readonly v49+16
;; @0043 v59 = load.i64 notrap aligned readonly v49+32
;; @0043 v58 = load.i64 notrap aligned readonly v49+8
;; @0043 v59 = load.i64 notrap aligned readonly v49+24
;; @0043 v60 = icmp eq v59, v34
;; @0043 brif v60, block9, block8(v58, v59)
;;
@ -167,11 +167,11 @@
;; @004b v18 = global_value.i64 gv3
;; @004b v19 = load.i64 notrap aligned readonly v18+80
;; @004b v20 = load.i32 notrap aligned readonly v19+4
;; @004b v21 = load.i32 icall_null aligned readonly v14+24
;; @004b v21 = load.i32 icall_null aligned readonly v14+16
;; @004b v22 = icmp eq v21, v20
;; @004b trapz v22, bad_sig
;; @004b v23 = load.i64 notrap aligned readonly v14+16
;; @004b v24 = load.i64 notrap aligned readonly v14+32
;; @004b v23 = load.i64 notrap aligned readonly v14+8
;; @004b v24 = load.i64 notrap aligned readonly v14+24
;; @004b v25 = call_indirect sig0, v23(v24, v0)
;; @004e jump block1(v25)
;;

8
tests/disas/indirect-call-caching.wat

@ -74,7 +74,7 @@
;;
;; block0(v0: i64, v1: i64, v2: i32):
;; @0050 v4 = global_value.i64 gv3
;; @0050 v5 = iadd_imm v4, 272
;; @0050 v5 = iadd_imm v4, 240
;; @0050 v6 = load.i32 notrap aligned v5+8
;; @0050 v7 = load.i64 notrap aligned v5
;; @0050 v8 = icmp eq v6, v2
@ -112,11 +112,11 @@
;; @0050 v23 = global_value.i64 gv3
;; @0050 v24 = load.i64 notrap aligned readonly v23+80
;; @0050 v25 = load.i32 notrap aligned readonly v24
;; @0050 v26 = load.i32 icall_null aligned readonly v19+24
;; @0050 v26 = load.i32 icall_null aligned readonly v19+16
;; @0050 v27 = icmp eq v26, v25
;; @0050 trapz v27, bad_sig
;; @0050 v28 = load.i64 notrap aligned readonly v19+16
;; @0050 v29 = load.i64 notrap aligned readonly v19+32
;; @0050 v28 = load.i64 notrap aligned readonly v19+8
;; @0050 v29 = load.i64 notrap aligned readonly v19+24
;; @0050 v30 = icmp eq v29, v4
;; @0050 brif v30, block4, block3(v28, v29)
;;

6
tests/disas/indirect-call-no-caching.wat

@ -96,11 +96,11 @@
;; @0050 v18 = global_value.i64 gv3
;; @0050 v19 = load.i64 notrap aligned readonly v18+80
;; @0050 v20 = load.i32 notrap aligned readonly v19
;; @0050 v21 = load.i32 icall_null aligned readonly v14+24
;; @0050 v21 = load.i32 icall_null aligned readonly v14+16
;; @0050 v22 = icmp eq v21, v20
;; @0050 trapz v22, bad_sig
;; @0050 v23 = load.i64 notrap aligned readonly v14+16
;; @0050 v24 = load.i64 notrap aligned readonly v14+32
;; @0050 v23 = load.i64 notrap aligned readonly v14+8
;; @0050 v24 = load.i64 notrap aligned readonly v14+24
;; @0050 v25 = call_indirect sig0, v23(v24, v0)
;; @0053 jump block1(v25)
;;

6
tests/disas/readonly-funcrefs.wat

@ -63,7 +63,7 @@
;; @0031 jump block3(v16)
;;
;; block3(v13: i64):
;; @0031 v20 = load.i32 icall_null aligned readonly v13+24
;; @0031 v20 = load.i32 icall_null aligned readonly v13+16
;; @0031 v18 = load.i64 notrap aligned readonly v0+80
;; @0031 v19 = load.i32 notrap aligned readonly v18
;; @0031 v21 = icmp eq v20, v19
@ -73,8 +73,8 @@
;; @0031 trap bad_sig
;;
;; block5:
;; @0031 v22 = load.i64 notrap aligned readonly v13+16
;; @0031 v23 = load.i64 notrap aligned readonly v13+32
;; @0031 v22 = load.i64 notrap aligned readonly v13+8
;; @0031 v23 = load.i64 notrap aligned readonly v13+24
;; @0031 call_indirect sig0, v22(v23, v0)
;; @0034 jump block1
;;

4
tests/disas/ref-func-0.wat

@ -30,9 +30,9 @@
;; @0091 v10 = iconst.i32 1
;; @0091 v11 = call fn0(v9, v10) ; v10 = 1
;; @0093 v12 = global_value.i64 gv3
;; @0093 v13 = load.i64 notrap aligned table v12+160
;; @0093 v13 = load.i64 notrap aligned table v12+144
;; @0095 v14 = global_value.i64 gv3
;; @0095 v15 = load.i64 notrap aligned table v14+176
;; @0095 v15 = load.i64 notrap aligned table v14+160
;; @0097 jump block1(v8, v11, v13, v15)
;;
;; block1(v2: r64, v3: r64, v4: i64, v5: i64):

24
tests/disas/typed-funcrefs-eager-init.wat

@ -140,14 +140,14 @@
;; v48 = iconst.i64 8
;; @0048 v14 = iadd v12, v48 ; v48 = 8
;; @0048 v17 = load.i64 table_oob aligned table v14
;; @004a v18 = load.i64 null_reference aligned readonly v17+16
;; @004a v19 = load.i64 notrap aligned readonly v17+32
;; @004a v18 = load.i64 null_reference aligned readonly v17+8
;; @004a v19 = load.i64 notrap aligned readonly v17+24
;; @004a v20 = call_indirect sig0, v18(v19, v0, v2, v3, v4, v5)
;; v56 = iconst.i64 16
;; @005b v28 = iadd v12, v56 ; v56 = 16
;; @005b v31 = load.i64 table_oob aligned table v28
;; @005d v32 = load.i64 null_reference aligned readonly v31+16
;; @005d v33 = load.i64 notrap aligned readonly v31+32
;; @005d v32 = load.i64 null_reference aligned readonly v31+8
;; @005d v33 = load.i64 notrap aligned readonly v31+24
;; @005d v34 = call_indirect sig0, v32(v33, v0, v2, v3, v4, v5)
;; @0066 jump block1
;;
@ -170,14 +170,14 @@
;; v48 = iconst.i64 8
;; @0075 v14 = iadd v12, v48 ; v48 = 8
;; @0075 v17 = load.i64 table_oob aligned table v14
;; @0075 v18 = load.i64 icall_null aligned readonly v17+16
;; @0075 v19 = load.i64 notrap aligned readonly v17+32
;; @0075 v18 = load.i64 icall_null aligned readonly v17+8
;; @0075 v19 = load.i64 notrap aligned readonly v17+24
;; @0075 v20 = call_indirect sig0, v18(v19, v0, v2, v3, v4, v5)
;; v56 = iconst.i64 16
;; @0087 v28 = iadd v12, v56 ; v56 = 16
;; @0087 v31 = load.i64 table_oob aligned table v28
;; @0087 v32 = load.i64 icall_null aligned readonly v31+16
;; @0087 v33 = load.i64 notrap aligned readonly v31+32
;; @0087 v32 = load.i64 icall_null aligned readonly v31+8
;; @0087 v33 = load.i64 notrap aligned readonly v31+24
;; @0087 v34 = call_indirect sig0, v32(v33, v0, v2, v3, v4, v5)
;; @0091 jump block1
;;
@ -196,12 +196,12 @@
;;
;; block0(v0: i64, v1: i64, v2: i32, v3: i32, v4: i32, v5: i32):
;; @009e v9 = load.i64 notrap aligned table v0+112
;; @00a0 v10 = load.i64 null_reference aligned readonly v9+16
;; @00a0 v11 = load.i64 notrap aligned readonly v9+32
;; @00a0 v10 = load.i64 null_reference aligned readonly v9+8
;; @00a0 v11 = load.i64 notrap aligned readonly v9+24
;; @00a0 v12 = call_indirect sig0, v10(v11, v0, v2, v3, v4, v5)
;; @00af v15 = load.i64 notrap aligned table v0+128
;; @00b1 v16 = load.i64 null_reference aligned readonly v15+16
;; @00b1 v17 = load.i64 notrap aligned readonly v15+32
;; @00b1 v16 = load.i64 null_reference aligned readonly v15+8
;; @00b1 v17 = load.i64 notrap aligned readonly v15+24
;; @00b1 v18 = call_indirect sig0, v16(v17, v0, v2, v3, v4, v5)
;; @00ba jump block1
;;

24
tests/disas/typed-funcrefs.wat

@ -153,8 +153,8 @@
;; @0048 jump block3(v22)
;;
;; block3(v19: i64):
;; @004a v23 = load.i64 null_reference aligned readonly v19+16
;; @004a v24 = load.i64 notrap aligned readonly v19+32
;; @004a v23 = load.i64 null_reference aligned readonly v19+8
;; @004a v24 = load.i64 notrap aligned readonly v19+24
;; @004a v25 = call_indirect sig1, v23(v24, v0, v2, v3, v4, v5)
;; v74 = iconst.i64 16
;; @005b v38 = iadd.i64 v12, v74 ; v74 = 16
@ -170,8 +170,8 @@
;; @005b jump block5(v46)
;;
;; block5(v43: i64):
;; @005d v47 = load.i64 null_reference aligned readonly v43+16
;; @005d v48 = load.i64 notrap aligned readonly v43+32
;; @005d v47 = load.i64 null_reference aligned readonly v43+8
;; @005d v48 = load.i64 notrap aligned readonly v43+24
;; @005d v49 = call_indirect sig1, v47(v48, v0, v2, v3, v4, v5)
;; @0066 jump block1
;;
@ -207,8 +207,8 @@
;; @0075 jump block3(v22)
;;
;; block3(v19: i64):
;; @0075 v23 = load.i64 icall_null aligned readonly v19+16
;; @0075 v24 = load.i64 notrap aligned readonly v19+32
;; @0075 v23 = load.i64 icall_null aligned readonly v19+8
;; @0075 v24 = load.i64 notrap aligned readonly v19+24
;; @0075 v25 = call_indirect sig0, v23(v24, v0, v2, v3, v4, v5)
;; v74 = iconst.i64 16
;; @0087 v38 = iadd.i64 v12, v74 ; v74 = 16
@ -224,8 +224,8 @@
;; @0087 jump block5(v46)
;;
;; block5(v43: i64):
;; @0087 v47 = load.i64 icall_null aligned readonly v43+16
;; @0087 v48 = load.i64 notrap aligned readonly v43+32
;; @0087 v47 = load.i64 icall_null aligned readonly v43+8
;; @0087 v48 = load.i64 notrap aligned readonly v43+24
;; @0087 v49 = call_indirect sig0, v47(v48, v0, v2, v3, v4, v5)
;; @0091 jump block1
;;
@ -244,12 +244,12 @@
;;
;; block0(v0: i64, v1: i64, v2: i32, v3: i32, v4: i32, v5: i32):
;; @009e v9 = load.i64 notrap aligned table v0+112
;; @00a0 v10 = load.i64 null_reference aligned readonly v9+16
;; @00a0 v11 = load.i64 notrap aligned readonly v9+32
;; @00a0 v10 = load.i64 null_reference aligned readonly v9+8
;; @00a0 v11 = load.i64 notrap aligned readonly v9+24
;; @00a0 v12 = call_indirect sig0, v10(v11, v0, v2, v3, v4, v5)
;; @00af v15 = load.i64 notrap aligned table v0+128
;; @00b1 v16 = load.i64 null_reference aligned readonly v15+16
;; @00b1 v17 = load.i64 notrap aligned readonly v15+32
;; @00b1 v16 = load.i64 null_reference aligned readonly v15+8
;; @00b1 v17 = load.i64 notrap aligned readonly v15+24
;; @00b1 v18 = call_indirect sig0, v16(v17, v0, v2, v3, v4, v5)
;; @00ba jump block1
;;

12
tests/disas/winch/x64/call_indirect/call_indirect.wat

@ -76,11 +76,11 @@
;; @0047 v26 = global_value.i64 gv3
;; @0047 v27 = load.i64 notrap aligned readonly v26+80
;; @0047 v28 = load.i32 notrap aligned readonly v27
;; @0047 v29 = load.i32 icall_null aligned readonly v22+24
;; @0047 v29 = load.i32 icall_null aligned readonly v22+16
;; @0047 v30 = icmp eq v29, v28
;; @0047 trapz v30, bad_sig
;; @0047 v31 = load.i64 notrap aligned readonly v22+16
;; @0047 v32 = load.i64 notrap aligned readonly v22+32
;; @0047 v31 = load.i64 notrap aligned readonly v22+8
;; @0047 v32 = load.i64 notrap aligned readonly v22+24
;; @0047 v33 = call_indirect sig0, v31(v32, v0, v10)
;; @004c v35 = iconst.i32 1
;; @004e v36 = isub.i32 v2, v35 ; v35 = 1
@ -107,11 +107,11 @@
;; @0051 v52 = global_value.i64 gv3
;; @0051 v53 = load.i64 notrap aligned readonly v52+80
;; @0051 v54 = load.i32 notrap aligned readonly v53
;; @0051 v55 = load.i32 icall_null aligned readonly v48+24
;; @0051 v55 = load.i32 icall_null aligned readonly v48+16
;; @0051 v56 = icmp eq v55, v54
;; @0051 trapz v56, bad_sig
;; @0051 v57 = load.i64 notrap aligned readonly v48+16
;; @0051 v58 = load.i64 notrap aligned readonly v48+32
;; @0051 v57 = load.i64 notrap aligned readonly v48+8
;; @0051 v58 = load.i64 notrap aligned readonly v48+24
;; @0051 v59 = call_indirect sig0, v57(v58, v0, v36)
;; @0054 v60 = iadd.i32 v33, v59
;; @0055 jump block3(v60)

6
tests/disas/winch/x64/call_indirect/local_arg.wat

@ -65,11 +65,11 @@
;; @003b v18 = global_value.i64 gv3
;; @003b v19 = load.i64 notrap aligned readonly v18+80
;; @003b v20 = load.i32 notrap aligned readonly v19
;; @003b v21 = load.i32 icall_null aligned readonly v14+24
;; @003b v21 = load.i32 icall_null aligned readonly v14+16
;; @003b v22 = icmp eq v21, v20
;; @003b trapz v22, bad_sig
;; @003b v23 = load.i64 notrap aligned readonly v14+16
;; @003b v24 = load.i64 notrap aligned readonly v14+32
;; @003b v23 = load.i64 notrap aligned readonly v14+8
;; @003b v24 = load.i64 notrap aligned readonly v14+24
;; @003b call_indirect sig0, v23(v24, v0, v2) ; v2 = 0
;; @003e jump block1
;;

2
tests/disas/winch/x64/load/grow_load.wat

@ -65,7 +65,7 @@
;; movq %r14, %rdi
;; movl 0xc(%rsp), %esi
;; movl $0, %edx
;; callq 0x2f7
;; callq 0x2b0
;; addq $0xc, %rsp
;; addq $4, %rsp
;; movq 0x48(%rsp), %r14

4
tests/disas/winch/x64/table/fill.wat

@ -113,7 +113,7 @@
;; movq %r14, %rdi
;; movl $0, %esi
;; movl 4(%rsp), %edx
;; callq 0x5f4
;; callq 0x4c9
;; addq $4, %rsp
;; addq $4, %rsp
;; movq 0x20(%rsp), %r14
@ -134,7 +134,7 @@
;; movl 0x14(%rsp), %edx
;; movq 0xc(%rsp), %rcx
;; movl 8(%rsp), %r8d
;; callq 0x636
;; callq 0x50b
;; addq $8, %rsp
;; addq $0x10, %rsp
;; movq 0x20(%rsp), %r14

2
tests/disas/winch/x64/table/get.wat

@ -65,7 +65,7 @@
;; movq %r14, %rdi
;; movl $0, %esi
;; movl 4(%rsp), %edx
;; callq 0x33f
;; callq 0x2da
;; addq $4, %rsp
;; addq $4, %rsp
;; movq 0x10(%rsp), %r14

2
tests/disas/winch/x64/table/grow.wat

@ -29,7 +29,7 @@
;; movl $0, %esi
;; movl $0xa, %edx
;; movq (%rsp), %rcx
;; callq 0x18c
;; callq 0x165
;; addq $8, %rsp
;; movq 0x10(%rsp), %r14
;; addq $0x18, %rsp

30
tests/disas/winch/x64/table/init_copy_drop.wat

@ -142,11 +142,11 @@
;; movl $7, %ecx
;; movl $0, %r8d
;; movl $4, %r9d
;; callq 0xaef
;; callq 0x8d5
;; movq 8(%rsp), %r14
;; movq %r14, %rdi
;; movl $1, %esi
;; callq 0xb38
;; callq 0x91e
;; movq 8(%rsp), %r14
;; movq %r14, %rdi
;; movl $0, %esi
@ -154,11 +154,11 @@
;; movl $0xf, %ecx
;; movl $1, %r8d
;; movl $3, %r9d
;; callq 0xaef
;; callq 0x8d5
;; movq 8(%rsp), %r14
;; movq %r14, %rdi
;; movl $3, %esi
;; callq 0xb38
;; callq 0x91e
;; movq 8(%rsp), %r14
;; movq %r14, %rdi
;; movl $0, %esi
@ -166,7 +166,7 @@
;; movl $0x14, %ecx
;; movl $0xf, %r8d
;; movl $5, %r9d
;; callq 0xb77
;; callq 0x95d
;; movq 8(%rsp), %r14
;; movq %r14, %rdi
;; movl $0, %esi
@ -174,7 +174,7 @@
;; movl $0x15, %ecx
;; movl $0x1d, %r8d
;; movl $1, %r9d
;; callq 0xb77
;; callq 0x95d
;; movq 8(%rsp), %r14
;; movq %r14, %rdi
;; movl $0, %esi
@ -182,7 +182,7 @@
;; movl $0x18, %ecx
;; movl $0xa, %r8d
;; movl $1, %r9d
;; callq 0xb77
;; callq 0x95d
;; movq 8(%rsp), %r14
;; movq %r14, %rdi
;; movl $0, %esi
@ -190,7 +190,7 @@
;; movl $0xd, %ecx
;; movl $0xb, %r8d
;; movl $4, %r9d
;; callq 0xb77
;; callq 0x95d
;; movq 8(%rsp), %r14
;; movq %r14, %rdi
;; movl $0, %esi
@ -198,7 +198,7 @@
;; movl $0x13, %ecx
;; movl $0x14, %r8d
;; movl $5, %r9d
;; callq 0xb77
;; callq 0x95d
;; movq 8(%rsp), %r14
;; addq $0x10, %rsp
;; popq %rbp
@ -224,12 +224,12 @@
;; movl (%rsp), %ecx
;; addq $4, %rsp
;; movq %r14, %rdx
;; movl 0x100(%rdx), %ebx
;; movl 0xd8(%rdx), %ebx
;; cmpl %ebx, %ecx
;; jae 0x39f
;; 305: movl %ecx, %r11d
;; imulq $8, %r11, %r11
;; movq 0xf8(%rdx), %rdx
;; movq 0xd0(%rdx), %rdx
;; movq %rdx, %rsi
;; addq %r11, %rdx
;; cmpl %ebx, %ecx
@ -243,7 +243,7 @@
;; movq %r14, %rdi
;; movl $0, %esi
;; movl 4(%rsp), %edx
;; callq 0xbc0
;; callq 0x9a6
;; addq $4, %rsp
;; addq $4, %rsp
;; movq 0x10(%rsp), %r14
@ -253,13 +253,13 @@
;; je 0x3a1
;; 366: movq 0x50(%r14), %r11
;; movl (%r11), %ecx
;; movl 0x18(%rax), %edx
;; movl 0x10(%rax), %edx
;; cmpl %edx, %ecx
;; jne 0x3a3
;; 378: pushq %rax
;; popq %rcx
;; movq 0x20(%rcx), %rbx
;; movq 0x10(%rcx), %rdx
;; movq 0x18(%rcx), %rbx
;; movq 8(%rcx), %rdx
;; subq $8, %rsp
;; movq %rbx, %rdi
;; movq %r14, %rsi

2
tests/disas/winch/x64/table/set.wat

@ -108,7 +108,7 @@
;; movq %r14, %rdi
;; movl $0, %esi
;; movl (%rsp), %edx
;; callq 0x517
;; callq 0x48c
;; addq $4, %rsp
;; movq 0x14(%rsp), %r14
;; jmp 0x16e
