4 Commits (dfc4358d0c7ced2f1e8f1b4913841ed6ce31ff46)
Pat Hickey | 2b00a541f3 | 9 months ago

Make wasi-common self-contained, deprecate exports from wasmtime-wasi (#7881)

* WIP: try to make wasi-common self-contained.
* rebase: Cargo.lock.
* Remove all dependencies between wasi-common and wasmtime-wasi.
* Use wasi-common directly throughout tests, benches, examples, and the CLI `run` command.
* wasi-threads: use wasi-common's maybe_exit_on_error in the spawned thread. Not a very modular design, but at this point wasi-common and wasi-threads are forever wed.
* Fix wasmtime's docs.
* Re-introduce wasmtime-wasi's exports of wasi-common definitions, marked deprecated.
* Factor out determining the i32 process exit code and remove the libc dependency, because rustix provides the same constant.
* commands/run: inline the logic for aborting on a trap, since this is the sole place in the codebase it is used.
* Add a high-level summary to wasi-common's top-level doc comment.
* c-api: fix uses of wasi_cap_std_sync => wasi_common::sync and wasmtime_wasi => wasi_common.
* Fix the tokio example.
* Think better of combining downcast and masking into one method.
* Fix references to wasmtime_wasi in the docs. prtest:full
* benches: use wasi-common.
* cfg-if around the use of rustix::process, because it does not exist on Windows.
* wasi-common: include tests; caught by verify-publish.
* Fix another bench.
* exit requires the wasmtime dep; caught by verify-publish.
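The deprecated compatibility exports mentioned above can be pictured as re-exports under the old names. The sketch below is illustrative only and not taken from the commit; the exact item names and the `wasi_common::sync` paths are assumptions.

```rust
// Hypothetical excerpt of a wasmtime-wasi compatibility layer: the old paths
// keep compiling, but users are nudged toward the wasi-common definitions.
#[deprecated(note = "use `wasi_common::WasiCtx` instead")]
pub type WasiCtx = wasi_common::WasiCtx;

#[deprecated(note = "use `wasi_common::sync::WasiCtxBuilder` instead")]
pub type WasiCtxBuilder = wasi_common::sync::WasiCtxBuilder;
```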
Alex Crichton | bb3734bd72 | 1 year ago

Change preview2 builder methods to use `&mut self` (#6770)

* Change preview2 builder methods to use `&mut self`. This commit changes the `WasiCtxBuilder` for preview2 to use a builder pattern more similar to `std::process::Command`, where methods take `&mut self` and return `&mut Self` instead of taking `self` and returning `Self`. This pattern makes it easier to build up configuration over time in code where ownership transfer might otherwise be awkward. A small caveat is that the ergonomics of this pattern only really work out well if the final "build" method takes `&mut self` as well. In that situation it is difficult to decide what should happen if the method is called twice, so for now it panics, leaving room to revisit the behavior later.
* Synchronize the preview1/preview2 builders.
* Move the preview1 builders to the `&mut`-style.
* Rename methods on the preview2 builder to match the names on the preview1 builders.
* Fix the C API.
* Fix more tests.
* Fix the benchmark build.
* Fix an unused variable.
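The builder style this commit describes can be sketched independently of the real `WasiCtxBuilder` API. The `CtxBuilder` below is a hypothetical stand-in that only demonstrates the `&mut self` pattern and the panic-on-second-build behavior.

```rust
// A minimal, self-contained sketch of the &mut-self builder style modeled on
// std::process::Command; not the actual WasiCtxBuilder.
pub struct Ctx {
    env: Vec<(String, String)>,
    inherit_stdio: bool,
}

pub struct CtxBuilder {
    env: Vec<(String, String)>,
    inherit_stdio: bool,
    built: bool,
}

impl CtxBuilder {
    pub fn new() -> Self {
        CtxBuilder { env: Vec::new(), inherit_stdio: false, built: false }
    }

    // Setters take `&mut self` and return `&mut Self`, so configuration can be
    // accumulated over time without handing ownership back and forth.
    pub fn env(&mut self, key: &str, value: &str) -> &mut Self {
        self.env.push((key.to_string(), value.to_string()));
        self
    }

    pub fn inherit_stdio(&mut self) -> &mut Self {
        self.inherit_stdio = true;
        self
    }

    // The final build also takes `&mut self`; a second call panics, mirroring
    // the behavior the commit message describes.
    pub fn build(&mut self) -> Ctx {
        assert!(!self.built, "build() may only be called once");
        self.built = true;
        Ctx {
            env: std::mem::take(&mut self.env),
            inherit_stdio: self.inherit_stdio,
        }
    }
}

fn main() {
    let mut builder = CtxBuilder::new();
    builder.env("RUST_LOG", "debug").inherit_stdio();
    let ctx = builder.build();
    assert_eq!(ctx.env.len(), 1);
    assert!(ctx.inherit_stdio);
}
```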
Andrew Brown | 4899537328 | 2 years ago

bench: add more WASI benchmarks (#5309)

* bench: add more WASI benchmarks. This follows up on #5274 to add several more scenarios with which to benchmark WASI performance:
  - `open-file.wat`: opens and closes a file
  - `read-file.wat`: opens a file, reads 4K bytes from it, then closes it
  - `read-dir.wat`: reads a directory's entries
  Each benchmark is hand-crafted WAT to more clearly control which WASI calls are made. As with #5274, each module's sole entry point takes a parameter indicating the number of iterations to run, in order to use `criterion`'s `iter_custom` feature.
* fix: reduce the expected size of directory entries.
Andrew Brown | 8426904129 | 2 years ago

bench: benchmark several common WASI scenarios (#5274)

In order to properly understand the impact of providing thread-safe implementations of WASI contexts (#5235), we need benchmarks that measure the current performance of WASI calls made through Wiggle. This change adds several common WASI scenarios as WAT files (see `benches/wasi/*.wat`) and benchmarks them with `criterion`. Using `criterion`'s `iter_custom`, each WAT file runs the desired number of benchmark iterations internally, and the total duration of the runs is divided to get the average time per loop iteration.

Why WAT? When compiling these benchmarks from Rust to `wasm32-wasi`, the output files are large, contain WASI imports beyond the desired ones, and overall it is difficult to tell whether we are measuring what we expect. By hand-writing the WAT, it is (slightly) clearer what each benchmark is doing.
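The `iter_custom` mechanism these two benchmark commits rely on can be sketched as follows. The `run_guest_iterations` helper is a hypothetical stand-in for instantiating one of the `benches/wasi/*.wat` modules with Wasmtime and calling its sole entry point with the iteration count; only the criterion wiring is shown.

```rust
// Sketch of a criterion harness using iter_custom: the guest loops `iters`
// times internally, and we report the wall-clock time for the whole batch.
use std::time::Instant;

use criterion::{criterion_group, criterion_main, Criterion};

// Hypothetical placeholder for "instantiate the WAT module and call its sole
// entry point with `iters`"; a plain host-side loop stands in for the guest.
fn run_guest_iterations(iters: u64) {
    for _ in 0..iters {
        std::hint::black_box(());
    }
}

fn bench_wasi_scenarios(c: &mut Criterion) {
    c.bench_function("open-file", |b| {
        b.iter_custom(|iters| {
            let start = Instant::now();
            run_guest_iterations(iters);
            // criterion divides this total by `iters` to get per-iteration time.
            start.elapsed()
        })
    });
}

criterion_group!(benches, bench_wasi_scenarios);
criterion_main!(benches);
```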