3 Commits (34a9ad98264a351ae72928c1fb77d85c1c750d12)

---

**Nick Fitzgerald** · `0fa130131d` · 7 months ago

**Add `GcRuntime` and `GcCompiler` traits; `i31ref` support (#8196)**
### The `GcRuntime` and `GcCompiler` Traits

This commit factors out the details of the garbage collector away from the rest of the runtime and the compiler. It does this by introducing two new traits, very similar to a subset of [those proposed in the Wasm GC RFC], although not all equivalent functionality has been added yet because Wasmtime doesn't support, for example, GC structs yet:

[those proposed in the Wasm GC RFC]: https://github.com/bytecodealliance/rfcs/blob/main/accepted/wasm-gc.md#defining-the-pluggable-gc-interface

1. The `GcRuntime` trait: This trait defines how to create new GC heaps, run collections within them, and execute the various GC barriers the collector requires.

   Rather than monomorphize all of Wasmtime on this trait, we use it as a dynamic trait object. This does imply some virtual call overhead and missing some inlining (and resulting post-inlining) optimization opportunities. However, it is *much* less disruptive to the existing embedder API, results in a cleaner embedder API anyways, and we don't believe that VM runtime/embedder code is on the hot path for working with the GC at this time anyways (that would be the actual Wasm code, which has inlined GC barriers and direct calls and all of that). In the future, once we have optimized enough of the GC that such code is ever hot, we have options we can investigate at that time to avoid these dynamic virtual calls, like only enabling one single collector at build time and then creating a static type alias like `type TheOneGcImpl = ...;` based on the compile-time configuration, and using this type alias in the runtime rather than a dynamic trait object.

   The `GcRuntime` trait additionally defines a method to reset a GC heap, for use by the pooling allocator. This allows reuse of GC heaps across different stores. This integration is very rudimentary at the moment, and is missing all kinds of configuration knobs that we should have before deploying Wasm GC in production. This commit is large enough as it is already! Ideally, in the future, I'd like to make it so that GC heaps receive their memory region, rather than allocate/reserve it themselves, and let each slot in the pooling allocator's memory pool be *either* a linear memory or a GC heap. This would unask various capacity planning questions such as "what percent of memory capacity should we dedicate to linear memories vs GC heaps?". It also seems like basically all the same configuration knobs we have for linear memories apply equally to GC heaps (see also the "Indexed Heaps" section below).

2. The `GcCompiler` trait: This trait defines how to emit CLIF that implements GC barriers for various operations on GC-managed references. The Rust code calls into this trait dynamically via a trait object, but since it is customizing the CLIF that is generated for Wasm code, the Wasm code itself is not making dynamic, indirect calls for GC barriers. The `GcCompiler` implementation can inline the parts of a GC barrier that it believes should be inline, and leave out-of-line calls to rare slow paths.

All that said, there is still only a single implementation of each of these traits: the existing deferred reference-counting (DRC) collector. So there is a bunch of code motion in this commit as the DRC collector was further isolated from the rest of the runtime and moved to its own submodule. That said, this was not *purely* code motion (see "Indexed Heaps" below), so it is worth not simply skipping over the DRC collector's code in review.
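To make the shape of this split concrete, here is a minimal sketch of a pluggable-GC interface along the lines described above. Only the trait names `GcRuntime` and `GcCompiler` come from the commit; the `GcHeap` trait, the method names, and the signatures below are hypothetical stand-ins, not Wasmtime's actual definitions.

```rust
// Illustrative sketch only: the shape of a pluggable-GC interface like
// the one described above. Names and signatures are hypothetical, not
// Wasmtime's actual `GcRuntime`/`GcCompiler` definitions.

/// A 32-bit index into a GC heap (see "Indexed Heaps" below).
struct VMGcRef(u32);

/// Runtime half: creating heaps, running collections, and executing the
/// barriers the collector requires. Held as a dynamic trait object so
/// Wasmtime is not monomorphized over the collector.
trait GcRuntime {
    /// Allocate a fresh GC heap.
    fn new_gc_heap(&self) -> Box<dyn GcHeap>;
}

trait GcHeap {
    /// Run a collection within this heap.
    fn gc(&mut self);
    /// Example barrier hook: called when a GC ref is copied out of the heap.
    fn read_barrier(&mut self, gc_ref: &VMGcRef);
    /// Reset this heap so the pooling allocator can reuse it across stores.
    fn reset(&mut self);
}

/// Compiler half: emits the CLIF implementing GC barriers. Called
/// dynamically at compile time, but the generated Wasm code contains
/// inline barriers, not indirect calls.
trait GcCompiler {
    // In the real interface these methods take compiler context types
    // and emit CLIF; elided in this sketch.
}

// Because the collector is a trait object, the embedder-facing API stays
// unchanged; a virtual call is paid only on runtime (not Wasm) paths:
fn collect(heap: &mut dyn GcHeap) {
    heap.gc();
}
```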
### Indexed Heaps

This commit does bake in a couple of assumptions that must be shared across all collector implementations, such as a shared `VMGcHeader` that all objects allocated within a GC heap must begin with, but the most notable and far-reaching of these assumptions is that all collectors will use "indexed heaps".

What we are calling indexed heaps are basically the three following invariants:

1. All GC heaps will be a single contiguous region of memory, and all GC objects will be allocated within this region of memory. The collector may ask the system allocator for additional memory, e.g. to maintain its free lists, but GC objects themselves will never be allocated via `malloc`.

2. A pointer to a GC-managed object (i.e. a `VMGcRef`) is a 32-bit offset into the GC heap's contiguous region of memory. We never hold raw pointers to GC objects (although, of course, we have to compute them and use them temporarily when actually accessing objects). This means that deref'ing GC pointers is equivalent to deref'ing linear memory pointers: we need to add a base and we also check that the GC pointer/index is within the bounds of the GC heap. Furthermore, compressing 64-bit pointers into 32 bits is a fairly common technique among high-performance GC implementations[^compressed-oops][^v8-ptr-compression], so we are in good company.

3. Anything stored inside the GC heap is untrusted. Even each GC reference that is an element of an `(array (ref any))` is untrusted, and bounds checked on access. This means that, for example, we do not store the raw pointer to an `externref`'s host object inside the GC heap. Instead, an `externref` now stores an ID that can be used to index into a side table in the store that holds the actual `Box<dyn Any>` host object, and accessing that side table is always checked.

[^compressed-oops]: See ["Compressed OOPs" in OpenJDK](https://wiki.openjdk.org/display/HotSpot/CompressedOops).

[^v8-ptr-compression]: See [V8's pointer compression](https://v8.dev/blog/pointer-compression).

The good news with regards to all the bounds checking that this scheme implies is that we can use all the same virtual memory tricks that linear memories use to omit explicit bounds checks. Additionally, (2) means that the sizes of GC objects are that much smaller (and therefore that much more cache friendly) because they are only holding onto 32-bit, rather than 64-bit, references to other GC objects.

(We can, in the future, support GC heaps up to 16GiB in size without losing 32-bit GC pointers by taking advantage of `VMGcHeader` alignment and storing aligned indices rather than byte indices, while still leaving the bottom bit available for tagging as an `i31ref` discriminant. Should we ever need to support even larger GC heap capacities, we could go to full 64-bit references, but we would need explicit bounds checks.)

The biggest benefit of indexed heaps is that, because we are (explicitly or implicitly) bounds checking GC heap accesses, and because we are not otherwise trusting any data from inside the GC heap, we greatly reduce how badly things can go wrong in the face of collector bugs and GC heap corruption. We are essentially sandboxing the GC heap region, the same way that linear memory is a sandbox. GC bugs could lead to the guest program accessing the wrong GC object, or getting garbage data from within the GC heap. But only garbage data from within the GC heap, never outside it.
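To make invariant (2) concrete: deref'ing an indexed GC ref is a base-plus-offset computation with a bounds check, just like a linear memory access. Below is a minimal sketch with hypothetical names and an explicit check (which, as noted above, the same virtual memory tricks linear memories use can often elide).

```rust
// Illustrative sketch of "indexed heap" access: a GC ref is a 32-bit
// offset into one contiguous region, deref'd as base + index with a
// bounds check, exactly like a linear-memory access. Hypothetical names.
struct GcHeap {
    base: *mut u8,
    len: usize,
}

impl GcHeap {
    /// Temporarily materialize a raw pointer to an object's bytes.
    /// Returns `None` (rather than trusting heap data) if the access
    /// would fall outside the heap region.
    fn deref(&self, gc_ref: u32, size: usize) -> Option<*mut u8> {
        let start = gc_ref as usize;
        let end = start.checked_add(size)?;
        if end > self.len {
            return None; // out of bounds: never read outside the heap
        }
        // Safe with respect to the host: even a corrupt index can only
        // reach within [base, base + len), the sandboxed GC heap region.
        Some(unsafe { self.base.add(start) })
    }
}
```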
The worst that could happen would be if we decided not to zero out GC heaps between reuse across stores (which is a valid trade-off to make, since zeroing a GC heap is a defense-in-depth technique similar to zeroing a Wasm stack and not semantically visible in the absence of GC bugs) and then a GC bug would allow the current Wasm guest to read old GC data from the old Wasm guest that previously used this GC heap. But again, it could never access host data.

Taken altogether, this allows for collector implementations that are nearly free from `unsafe` code, and unsafety can otherwise be targeted and limited in scope, such as interactions with JIT code. Most importantly, we do not have to maintain critical invariants across the whole system -- invariants which can't be nicely encapsulated or abstracted -- to preserve memory safety. Such holistic invariants that refuse encapsulation are otherwise generally a huge safety problem with GC implementations.

### `VMGcRef` is *NOT* `Clone` or `Copy` Anymore

`VMGcRef` used to be `Clone` and `Copy`. It is not anymore. The motivation here was to be sure that I was actually calling GC barriers at all the correct places; I couldn't be sure before. Now, you can still explicitly copy a raw GC reference without running GC barriers if you need to and understand why that's okay (aka you are implementing the collector), but that is something you have to opt into explicitly by calling `unchecked_copy`. The default now is that you can't just copy the reference, and instead call an explicit `clone` method (not *the* `Clone` trait, because we need to pass in the GC heap context to run the GC barriers), and it is hard to forget to do that accidentally. This resulted in a pretty big amount of churn, but I am wayyyyyy more confident that the correct GC barriers are called at the correct times now than I was before. (A sketch of this pattern follows the commit message below.)

### `i31ref`

I started this commit by trying to add `i31ref` support, and it grew into the whole traits interface because I found that I needed to abstract GC barriers into helpers anyways to avoid running them for `i31ref`s, so I figured that I might as well add the whole traits interface. In comparison, `i31ref` support is much easier and smaller than that other part! But it was also difficult to pull apart from this commit, sorry about that!

---

Overall, I know this is a very large commit. I am super happy to have some synchronous meetings to walk through this all, give an overview of the architecture, answer questions directly, etc... to make review easier!

prtest:full
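As a sketch of the explicit-`clone` / `unchecked_copy` discipline described above: the types below are simplified, hypothetical stand-ins, with a toy reference count standing in for the DRC collector's real barriers.

```rust
// Illustrative sketch (hypothetical, simplified types) of a GC reference
// that is deliberately neither `Clone` nor `Copy`: duplicating it either
// runs the collector's barriers or is a loud, explicit opt-out.
use std::collections::HashMap;

struct VMGcRef(u32);

struct GcHeap {
    // Toy stand-in for a deferred reference-counting collector's state.
    ref_counts: HashMap<u32, usize>,
}

impl VMGcRef {
    /// Duplicate this reference, running the GC barrier. Note: an
    /// inherent method taking the heap context, not the `Clone` trait,
    /// which has no way to receive that context.
    fn clone(&self, heap: &mut GcHeap) -> VMGcRef {
        *heap.ref_counts.entry(self.0).or_insert(0) += 1; // the barrier
        VMGcRef(self.0)
    }

    /// Copy the raw reference *without* running barriers. Only for
    /// collector internals that know why skipping the barrier is sound.
    fn unchecked_copy(&self) -> VMGcRef {
        VMGcRef(self.0)
    }
}
```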
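The commit message also notes that the bottom bit of a GC reference is left available as the `i31ref` discriminant. Here is a sketch of one plausible low-bit tagging encoding; the shift-based scheme is an assumption for illustration, only the low-bit tag itself comes from the commit message.

```rust
// Hypothetical low-bit tagging for `i31ref`: a 31-bit integer is stored
// unboxed in the reference itself with bit 0 set as the discriminant;
// real heap indices keep bit 0 clear.
fn encode_i31(x: i32) -> u32 {
    debug_assert!(x >= -(1 << 30) && x < (1 << 30), "must fit in 31 bits");
    ((x as u32) << 1) | 1
}

fn decode_i31(raw: u32) -> Option<i32> {
    if raw & 1 == 1 {
        // Arithmetic shift right restores the sign of the 31-bit value.
        Some((raw as i32) >> 1)
    } else {
        None // bit 0 clear: an ordinary GC heap index, not an i31ref
    }
}
```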
---

**Nick Fitzgerald** · `89f1e02f1f` · 4 years ago

**Remove executable bits from a few Rust source files**
---

**Nick Fitzgerald** · `98e899f6b3` · 4 years ago

**fuzz: Add a fuzz target for `table.{get,set}` operations**

This new fuzz target exercises sequences of `table.get`s, `table.set`s, and GCs. It already found a couple of bugs:

* Some leaks due to ref-count cycles between stores and host-defined functions closing over those stores.
* If there are no live references at a PC, Cranelift can avoid emitting an associated stack map. This was running afoul of a debug assertion.
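For a sense of the general shape of such a fuzz target, here is a hedged sketch using `libfuzzer-sys` and `arbitrary` to derive random op sequences. The op names mirror the description above; this is not the actual Wasmtime fuzz target's code, and the replay logic is left as comments.

```rust
// Hypothetical sketch of a table-ops fuzz target: derive random
// sequences of `table.get`/`table.set`/GC operations and replay them
// against an engine, looking for leaks and stack-map bugs.
use arbitrary::Arbitrary;
use libfuzzer_sys::fuzz_target;

#[derive(Arbitrary, Debug)]
enum TableOp {
    Get { index: u32 },
    Set { index: u32 },
    Gc,
}

fuzz_target!(|ops: Vec<TableOp>| {
    // Replay each op against a table of `externref`s, checking that
    // gets/sets either succeed or fail cleanly on out-of-bounds indices,
    // and that collections (and their stack maps) never free a live ref.
    for op in ops {
        match op {
            TableOp::Get { index } => {
                // table.get(index): out-of-bounds should error, in-bounds
                // should return the last value stored at that slot.
                let _ = index;
            }
            TableOp::Set { index } => {
                // table.set(index, some externref): tracks host-side
                // liveness so leaks (e.g. ref-count cycles) are caught.
                let _ = index;
            }
            TableOp::Gc => {
                // Force a collection so stack maps get exercised.
            }
        }
    }
});
```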