
WASI Preview 2: rewrite streams and pollable implementation (#6556)

* preview2: make everything but streams/io and poll/poll synchronous

* streams: get rid of as_any method, which is no longer used

* delete legacy sched and pollable concepts

* more code motion and renaming

* make tokio a workspace dep, because we need it directly in wasmtime-wasi

* HostPollable exists

* more fixes

* pollable can trap, and implement clock properly

* HostPollable is now a generator of futures

because we need to be able to poll a pollable many times
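
A rough sketch of the idea (the `Sketch*` names and exact signatures here are illustrative, not the types in this PR): the table stores something that can mint a fresh readiness future each time the guest polls, instead of a single future that can only be awaited once.

```rust
use std::future::Future;
use std::pin::Pin;

type ReadinessFuture = Pin<Box<dyn Future<Output = anyhow::Result<()>> + Send>>;

/// Illustrative only: a pollable as a factory of futures, so the same
/// resource can be polled repeatedly.
struct SketchPollable {
    make_future: Box<dyn Fn() -> ReadinessFuture + Send + Sync>,
}

impl SketchPollable {
    /// Each call produces a brand-new future to await readiness on.
    fn ready(&self) -> ReadinessFuture {
        (self.make_future)()
    }
}
```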

* explain various todo!s

* Synchronous version of the wasi-preview2-components tests

* Change with_tokio to accept the future as an argument

* Store futures in the PollOneoff struct instead, to avoid dropping them

* Remove TODO for HostOutputStream impl for WritePipe

* Implement pollable for ReadPipe

* Use a Notify when ReadPipe is ready

* wip

* wip

* Read/write pipe ends with tokio channels

* Empty reader/writer wrappers

* EmptyStream, and warning cleanup

* Wrapped reader/writer structs

* Rework stdio in terms of wrapped read/write

* Add MemoryOutputPipe and update tests
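
For reference, the pattern the updated tests use, as a sketch (the helper name `ctx_with_captured_stdout` is made up for illustration; `MemoryOutputPipe::new`, `set_stdout`, `contents`, and `try_into_inner` appear in the diffs below):

```rust
use wasmtime_wasi::preview2::{pipe::MemoryOutputPipe, Table, WasiCtx, WasiCtxBuilder};

// Capture guest stdout in memory; after the guest runs, inspect it with
// `contents()` (while clones are live) or `try_into_inner()` (sole reference).
fn ctx_with_captured_stdout(table: &mut Table) -> anyhow::Result<(WasiCtx, MemoryOutputPipe)> {
    let stdout = MemoryOutputPipe::new();
    let wasi = WasiCtxBuilder::new()
        .set_stdout(stdout.clone())
        .build(table)?;
    Ok((wasi, stdout))
}
```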

* Remove todo

* rewrite nearly everything

* implement the pipe stuff

* wibble

* fix MemoryOutputPipe just enough to make the tests compile

* Move the table iteration into a helper function

* AsyncFd stream implementation to fix stdin on unix

* Rename Wrapped{Read,Write} streams to Async{Read,Write}Stream

* Move async io wrappers into stream.rs

* Fix the sync tests

* fix test uses of pipes, juggle tokio context for stdin construction

* add some fixmes

* the future i named Never is defined in futures-util as pending

which is a better name

* i believe this is a correct implementation of one global stdin resource
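
The gist, as a hypothetical sketch (the real code lives in the stdio modules changed below): the process has exactly one stdin, so the host keeps a single shared handle that every context's stdin stream borrows.

```rust
use once_cell::sync::Lazy;
use std::sync::Mutex;

/// Hypothetical: bytes pulled off the real stdin but not yet handed to a
/// guest, plus whether the underlying handle has reached EOF.
struct GlobalStdinState {
    buffer: Vec<u8>,
    closed: bool,
}

static STDIN: Lazy<Mutex<GlobalStdinState>> = Lazy::new(|| {
    Mutex::new(GlobalStdinState { buffer: Vec::new(), closed: false })
});
```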

* move unix stdin to its own file

* make most of the mods private

* fix build - we are skipping rust 1.70

due to llvm regressions in s390x and riscv64 which are fixed in 1.71 and
will not be backported

* preview1-in-preview2: use async funcs for io, and the async io interface

prtest:full

* windows stdin support

* done!

* table ext functions: fix tests

* tests: expect poll_oneoff_{files,stdio} to pass on all platforms

* export the bindings under wasmtime_wasi::preview2::bindings

rather than preview2::wasi.

and command moves to wasmtime_wasi::preview2::command as well.
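
In embedder terms (simplified from the updated tests; the `link` function is an illustrative name, the paths are the ones introduced here):

```rust
use wasmtime_wasi::preview2::{command, WasiView};

fn link<T: WasiView>(linker: &mut wasmtime::component::Linker<T>) -> anyhow::Result<()> {
    // Link the whole async command world in one call...
    command::add_to_linker(linker)?;
    // ...or link individual interfaces from the relocated module, e.g.:
    // wasmtime_wasi::preview2::bindings::io::streams::add_to_linker(linker, |x| x)?;
    Ok(())
}
```

The synchronous flavor added later in this PR lives at `command::sync::add_to_linker` (see command.rs below).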

* fix renaming of wasi to bindings in tests

* use block_in_place throughout filesystem

and move block_on and block_in_place to be pub crate at the root
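
One plausible shape for such a helper (the name `block_on_in_place` is made up; `block_in_place` and `Handle::current().block_on` are the standard tokio APIs):

```rust
/// Run an async filesystem operation to completion from a synchronous host
/// call. `block_in_place` requires tokio's multi-thread runtime flavor, which
/// is why the tests further down switch to `flavor = "multi_thread"`.
pub(crate) fn block_on_in_place<F: std::future::Future>(f: F) -> F::Output {
    tokio::task::block_in_place(|| tokio::runtime::Handle::current().block_on(f))
}
```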

* AsyncFdStream: ensure file is nonblocking

* tests: block_in_place requires multi-threaded runtime

* actually, use fcntl_setfl to make the asyncfd file nonblocking

* fix windows block_on

* docs, remove unnecessary methods

* more docs

* Add a workspace dependency on bytes-1.4

* Remove vectored stream operations

* Rework the read/write stream traits

* Add a size parameter to `read`, and switch to usize for traits
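
A sketch of the shape this converges on (illustrative `Sketch*` names, not the exact trait in the crate): reads take a maximum size and report, alongside the data, whether the stream is still open.

```rust
use bytes::Bytes;

#[derive(Debug, PartialEq, Eq)]
enum SketchStreamState {
    Open,
    Closed,
}

#[async_trait::async_trait]
trait SketchInputStream: Send + Sync {
    /// Read at most `size` bytes without blocking; an empty buffer while
    /// still `Open` means "nothing available right now".
    fn read(&mut self, size: usize) -> anyhow::Result<(Bytes, SketchStreamState)>;

    /// Resolves once `read` is expected to make progress.
    async fn ready(&mut self) -> anyhow::Result<()>;
}
```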

* Pipe through the bool -> stream-status change in wit

* Plumb stream-status through write operations in wit

* write host trait also gives streamstate

* hook new stream host read/write back up to the wit bindgen

* sketchy AsyncReadStream impl

* Fill out implementations for AsyncReadStream and AsyncWriteStream
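
A much-simplified sketch of the read side (the adapter in this PR is more involved; this only shows the adaptation from `tokio::io::AsyncRead` to sized, `Bytes`-based reads, with made-up names):

```rust
use bytes::{Bytes, BytesMut};
use tokio::io::{AsyncRead, AsyncReadExt};

struct SketchAsyncReadStream<T> {
    inner: T,
    open: bool,
}

impl<T: AsyncRead + Unpin + Send> SketchAsyncReadStream<T> {
    async fn read(&mut self, size: usize) -> std::io::Result<(Bytes, bool /* still open */)> {
        if !self.open || size == 0 {
            return Ok((Bytes::new(), self.open));
        }
        let mut buf = BytesMut::with_capacity(size);
        // With nonzero capacity, `read_buf` returning 0 means EOF.
        if self.inner.read_buf(&mut buf).await? == 0 {
            self.open = false;
        }
        Ok((buf.freeze(), self.open))
    }
}
```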

* some reasonable read tests

* more

* first smoke test for AsyncWriteStream

* bunch of AsyncWriteStream tests

* half-baked idea that the output-stream interface will need a flush mechanism

* adapter: fixes for changes to stream wit

* fix new rust 1.71 warnings

* make stdin work on unix without using AsyncFdStream

inline the tokio docs example of how to impl AsyncRead for an AsyncFd,
except there are some "minor" changes because stdin doesn't impl Read on
&Stdin, whereas the TcpStream in the example does

* delete AsyncFdStream for now

it turns out to be kinda hard and we can always work on adding it back
in later.

* Implement some memory pipe operations, and move async wrappers to the pipe mod

* Make blocking_write actually block until everything is written
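
The intended semantics, sketched over a hypothetical non-blocking write (names and signatures here are illustrative):

```rust
use bytes::{Buf, Bytes};

#[async_trait::async_trait]
trait SketchOutputStream: Send {
    /// Accept as many bytes as possible right now; return how many were taken.
    fn write(&mut self, bytes: Bytes) -> anyhow::Result<usize>;
    /// Resolves when the sink can accept more bytes.
    async fn ready(&mut self) -> anyhow::Result<()>;
}

/// Blocking write: loop until every byte has been accepted.
async fn blocking_write_all(s: &mut dyn SketchOutputStream, mut data: Bytes) -> anyhow::Result<()> {
    while !data.is_empty() {
        s.ready().await?;
        let n = s.write(data.clone())?; // `Bytes` clones are cheap (refcounted)
        data.advance(n);
    }
    Ok(())
}
```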

* Remove debug print

* Adapter stdio should use blocking write

Rust guests will panic if a write to stdio returns fewer bytes than were
sent.

* Clean up implementations of {blocking_}write_zeros and skip

* Remove debug macro usage

* Move EmptyStream to pipe, and split it into four variants

Use EmptyInputStream and SinkOutputStream as the defaults for stdin and
stdout/stderr respectively.
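
For instance, the sink default might look roughly like this (sketch only, made-up name):

```rust
use bytes::Bytes;

/// Default stdout/stderr when nothing is inherited: accept and discard
/// everything, always ready.
struct SketchSinkOutput;

impl SketchSinkOutput {
    fn write(&mut self, bytes: Bytes) -> usize {
        bytes.len() // report everything as written, then drop it
    }
}
```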

* Add a big warning about resource lifetime tracking in pollables

* Start working through changes to the filesystem implementation

* Remove todos in the filesystem implementation

* Avoid lifetime errors by moving blocking operations to File and Dir

* Fix more lifetime issues with `block`

* Finish filling out translation impl

* fix warnings

* we can likely eliminate block_in_place in the stdin implementations

* sync command uses sync filesystem, start of translation layer

* sync filesystem: all the trait boilerplate is in place

just need to finish the from impl boilerplate

* finish type conversion boilerplate

* Revert "half-baked idea that the output-stream interface will need a flush mechanism"

This reverts commit 3eb762e333.

* cargo fmt

* test type fixes

* renames and comments

* refactor stream table internals so we can have a blocking variant...

* preview1 host adapter: stdout/stderr use blocking_write here too

* filesystem streams are blocking now

* fixes

* satisfy cargo doc

* cargo vet: dep upgrades taken care of by imports from mozilla

* unix stdio: eliminate block_in_place

* replace private in_tokio with spawn, since it's only used for spawning

* comments

* worker thread stdin implementation can be tested on linux, i guess

and start outlining a test plan
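
A hypothetical sketch of the worker-thread approach (the real code is in stdio/worker_thread_stdin.rs below; `spawn_stdin_worker` is an invented name): a dedicated OS thread does the always-blocking reads of the real stdin and forwards chunks over a channel, so the async host side never blocks a runtime worker.

```rust
use std::io::Read;
use tokio::sync::mpsc;

fn spawn_stdin_worker() -> mpsc::Receiver<Vec<u8>> {
    let (tx, rx) = mpsc::channel(16);
    std::thread::spawn(move || {
        let mut stdin = std::io::stdin();
        let mut buf = [0u8; 4096];
        loop {
            match stdin.read(&mut buf) {
                Ok(0) | Err(_) => break, // EOF or error: stop forwarding
                Ok(n) => {
                    // blocking_send is fine here: this is a plain OS thread,
                    // not a tokio worker.
                    if tx.blocking_send(buf[..n].to_vec()).is_err() {
                        break; // receiver dropped
                    }
                }
            }
        }
    });
    rx
}
```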

* eliminate tokio boilerplate - no longer using tokio's lock

* rename our private block_on to in_tokio

* fill in missing file input skip

* code review: fix MemoryInputPipe. Closed status is always available immediately.

* code review: empty input stream is not essential, closed input stream is a better fit for stdin

* code review: unreachable

* turn worker thread (windows) stdin off

* expect preview2-based poll_oneoff_stdio to fail on windows

* command directory_list test: no need to inherit stdin

* preview1 in preview2: turn off inherit_stdio except for poll_oneoff_stdio

* wasi-preview2-components: apparently inherit_stdio was on everywhere here as well. turn it off

except for poll_oneoff_stdio

* extend timeout for riscv64 i suppose

---------

Co-authored-by: Trevor Elliott <telliott@fastly.com>
Pat Hickey (committed via GitHub)
branch: pull/6751/head
commit: 0f9ac11bfb
Files changed (lines changed in parentheses):
  1. Cargo.lock (30)
  2. Cargo.toml (3)
  3. crates/test-programs/reactor-tests/Cargo.toml (2)
  4. crates/test-programs/tests/command.rs (67)
  5. crates/test-programs/tests/reactor.rs (48)
  6. crates/test-programs/tests/wasi-preview1-host-in-preview2.rs (102)
  7. crates/test-programs/tests/wasi-preview2-components-sync.rs (297)
  8. crates/test-programs/tests/wasi-preview2-components.rs (109)
  9. crates/wasi-common/tokio/Cargo.toml (4)
  10. crates/wasi-http/Cargo.toml (2)
  11. crates/wasi-preview1-component-adapter/src/descriptors.rs (2)
  12. crates/wasi-preview1-component-adapter/src/lib.rs (12)
  13. crates/wasi/Cargo.toml (12)
  14. crates/wasi/src/preview2/command.rs (91)
  15. crates/wasi/src/preview2/ctx.rs (21)
  16. crates/wasi/src/preview2/filesystem.rs (255)
  17. crates/wasi/src/preview2/mod.rs (178)
  18. crates/wasi/src/preview2/pipe.rs (906)
  19. crates/wasi/src/preview2/poll.rs (157)
  20. crates/wasi/src/preview2/preview1/mod.rs (308)
  21. crates/wasi/src/preview2/preview2/clocks.rs (75)
  22. crates/wasi/src/preview2/preview2/env.rs (29)
  23. crates/wasi/src/preview2/preview2/exit.rs (5)
  24. crates/wasi/src/preview2/preview2/filesystem.rs (281)
  25. crates/wasi/src/preview2/preview2/filesystem/sync.rs (613)
  26. crates/wasi/src/preview2/preview2/io.rs (427)
  27. crates/wasi/src/preview2/preview2/mod.rs (1)
  28. crates/wasi/src/preview2/preview2/poll.rs (83)
  29. crates/wasi/src/preview2/preview2/random.rs (17)
  30. crates/wasi/src/preview2/sched.rs (110)
  31. crates/wasi/src/preview2/sched/subscription.rs (105)
  32. crates/wasi/src/preview2/sched/sync.rs (156)
  33. crates/wasi/src/preview2/stdio.rs (221)
  34. crates/wasi/src/preview2/stdio/unix.rs (130)
  35. crates/wasi/src/preview2/stdio/worker_thread_stdin.rs (117)
  36. crates/wasi/src/preview2/stream.rs (394)
  37. crates/wasi/src/preview2/table.rs (38)
  38. crates/wasi/src/preview2/wasi/command.rs (43)
  39. crates/wasi/src/preview2/wasi/mod.rs (30)
  40. crates/wasi/wit/deps/io/streams.wit (73)
  41. supply-chain/imports.lock (24)

Cargo.lock (30 changed lines)

@ -295,9 +295,9 @@ checksum = "14c189c53d098945499cdfa7ecc63567cf3886b3332b312a5b4585d8d3a6a610"
[[package]]
name = "bytes"
version = "1.1.0"
version = "1.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c4872d67bab6358e59559027aa3b9157c53d9358c51423c17554809a8858e0f8"
checksum = "89b2fd2a0dcf38d7971e2194b6b6eebab45ae01067456a7fd93d5547a61b70be"
[[package]]
name = "camino"
@ -1438,6 +1438,20 @@ dependencies = [
"winapi",
]
[[package]]
name = "futures"
version = "0.3.27"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "531ac96c6ff5fd7c62263c5e3c67a603af4fcaee2e1a0ae5565ba3a11e69e549"
dependencies = [
"futures-channel",
"futures-core",
"futures-io",
"futures-sink",
"futures-task",
"futures-util",
]
[[package]]
name = "futures-channel"
version = "0.3.27"
@ -1445,6 +1459,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "164713a5a0dcc3e7b4b1ed7d3b433cabc18025386f9339346e8daf15963cf7ac"
dependencies = [
"futures-core",
"futures-sink",
]
[[package]]
@ -1453,6 +1468,12 @@ version = "0.3.27"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "86d7a0c1aa76363dac491de0ee99faf6941128376f1cf96f07db7603b7de69dd"
[[package]]
name = "futures-io"
version = "0.3.28"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4fff74096e71ed47f8e023204cfd0aa1289cd54ae5430a9523be060cdb849964"
[[package]]
name = "futures-sink"
version = "0.3.27"
@ -1472,6 +1493,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3ef6b17e481503ec85211fed8f39d1970f128935ca1f814cd32ac4a6842e84ab"
dependencies = [
"futures-core",
"futures-sink",
"futures-task",
"pin-project-lite",
"pin-utils",
@ -4373,16 +4395,20 @@ dependencies = [
"anyhow",
"async-trait",
"bitflags 2.3.3",
"bytes",
"cap-fs-ext",
"cap-rand",
"cap-std",
"cap-time-ext",
"fs-set-times",
"futures",
"io-extras",
"libc",
"once_cell",
"rustix 0.38.4",
"system-interface",
"thiserror",
"tokio",
"tracing",
"wasi-cap-std-sync",
"wasi-common",

Cargo.toml (3 changed lines)

@ -242,6 +242,9 @@ tempfile = "3.1.0"
filecheck = "0.5.0"
libc = "0.2.60"
file-per-thread-logger = "0.2.0"
tokio = { version = "1.26.0" }
bytes = "1.4"
futures = { version = "0.3.27", default-features = false }
indexmap = "2.0.0"
pretty_env_logger = "0.5.0"
syn = "2.0.25"

crates/test-programs/reactor-tests/Cargo.toml (2 changed lines)

@ -8,4 +8,4 @@ publish = false
crate-type=["cdylib"]
[dependencies]
wit-bindgen = { workspace = true }
wit-bindgen = { workspace = true, features = ["macros", "realloc"] }

crates/test-programs/tests/command.rs (67 changed lines)

@ -1,19 +1,15 @@
use anyhow::Result;
use cap_std::{ambient_authority, fs::Dir, time::Duration};
use std::{
io::{Cursor, Write},
sync::Mutex,
};
use std::{io::Write, sync::Mutex};
use wasmtime::{
component::{Component, Linker},
Config, Engine, Store,
};
use wasmtime_wasi::preview2::{
clocks::{HostMonotonicClock, HostWallClock},
pipe::ReadPipe,
wasi::command::add_to_linker,
wasi::command::Command,
DirPerms, FilePerms, Table, WasiCtx, WasiCtxBuilder, WasiView,
command::{add_to_linker, Command},
pipe::MemoryInputPipe,
DirPerms, FilePerms, HostMonotonicClock, HostWallClock, Table, WasiCtx, WasiCtxBuilder,
WasiView,
};
lazy_static::lazy_static! {
@ -63,7 +59,7 @@ async fn instantiate(
Ok((store, command))
}
#[test_log::test(tokio::test)]
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn hello_stdout() -> Result<()> {
let mut table = Table::new();
let wasi = WasiCtxBuilder::new()
@ -77,7 +73,7 @@ async fn hello_stdout() -> Result<()> {
.map_err(|()| anyhow::anyhow!("command returned with failing exit status"))
}
#[test_log::test(tokio::test)]
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn panic() -> Result<()> {
let mut table = Table::new();
let wasi = WasiCtxBuilder::new()
@ -100,7 +96,7 @@ async fn panic() -> Result<()> {
Ok(())
}
#[test_log::test(tokio::test)]
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn args() -> Result<()> {
let mut table = Table::new();
let wasi = WasiCtxBuilder::new()
@ -114,7 +110,7 @@ async fn args() -> Result<()> {
.map_err(|()| anyhow::anyhow!("command returned with failing exit status"))
}
#[test_log::test(tokio::test)]
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn random() -> Result<()> {
let mut table = Table::new();
let wasi = WasiCtxBuilder::new().build(&mut table)?;
@ -127,7 +123,7 @@ async fn random() -> Result<()> {
.map_err(|()| anyhow::anyhow!("command returned with failing exit status"))
}
#[test_log::test(tokio::test)]
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn time() -> Result<()> {
struct FakeWallClock;
@ -173,13 +169,13 @@ async fn time() -> Result<()> {
.map_err(|()| anyhow::anyhow!("command returned with failing exit status"))
}
#[test_log::test(tokio::test)]
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn stdin() -> Result<()> {
let mut table = Table::new();
let wasi = WasiCtxBuilder::new()
.set_stdin(ReadPipe::new(Cursor::new(
"So rested he by the Tumtum tree",
)))
.set_stdin(MemoryInputPipe::new(
"So rested he by the Tumtum tree".into(),
))
.build(&mut table)?;
let (mut store, command) =
@ -191,13 +187,13 @@ async fn stdin() -> Result<()> {
.map_err(|()| anyhow::anyhow!("command returned with failing exit status"))
}
#[test_log::test(tokio::test)]
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn poll_stdin() -> Result<()> {
let mut table = Table::new();
let wasi = WasiCtxBuilder::new()
.set_stdin(ReadPipe::new(Cursor::new(
"So rested he by the Tumtum tree",
)))
.set_stdin(MemoryInputPipe::new(
"So rested he by the Tumtum tree".into(),
))
.build(&mut table)?;
let (mut store, command) =
@ -209,7 +205,7 @@ async fn poll_stdin() -> Result<()> {
.map_err(|()| anyhow::anyhow!("command returned with failing exit status"))
}
#[test_log::test(tokio::test)]
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn env() -> Result<()> {
let mut table = Table::new();
let wasi = WasiCtxBuilder::new()
@ -226,7 +222,7 @@ async fn env() -> Result<()> {
.map_err(|()| anyhow::anyhow!("command returned with failing exit status"))
}
#[test_log::test(tokio::test)]
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn file_read() -> Result<()> {
let dir = tempfile::tempdir()?;
@ -248,7 +244,7 @@ async fn file_read() -> Result<()> {
.map_err(|()| anyhow::anyhow!("command returned with failing exit status"))
}
#[test_log::test(tokio::test)]
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn file_append() -> Result<()> {
let dir = tempfile::tempdir()?;
@ -280,7 +276,7 @@ async fn file_append() -> Result<()> {
Ok(())
}
#[test_log::test(tokio::test)]
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn file_dir_sync() -> Result<()> {
let dir = tempfile::tempdir()?;
@ -303,7 +299,7 @@ async fn file_dir_sync() -> Result<()> {
.map_err(|()| anyhow::anyhow!("command returned with failing exit status"))
}
#[test_log::test(tokio::test)]
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn exit_success() -> Result<()> {
let mut table = Table::new();
let wasi = WasiCtxBuilder::new().build(&mut table)?;
@ -320,7 +316,7 @@ async fn exit_success() -> Result<()> {
Ok(())
}
#[test_log::test(tokio::test)]
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn exit_default() -> Result<()> {
let mut table = Table::new();
let wasi = WasiCtxBuilder::new().build(&mut table)?;
@ -333,7 +329,7 @@ async fn exit_default() -> Result<()> {
Ok(())
}
#[test_log::test(tokio::test)]
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn exit_failure() -> Result<()> {
let mut table = Table::new();
let wasi = WasiCtxBuilder::new().build(&mut table)?;
@ -350,7 +346,7 @@ async fn exit_failure() -> Result<()> {
Ok(())
}
#[test_log::test(tokio::test)]
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn exit_panic() -> Result<()> {
let mut table = Table::new();
let wasi = WasiCtxBuilder::new().build(&mut table)?;
@ -367,7 +363,7 @@ async fn exit_panic() -> Result<()> {
Ok(())
}
#[test_log::test(tokio::test)]
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn directory_list() -> Result<()> {
let dir = tempfile::tempdir()?;
@ -382,7 +378,8 @@ async fn directory_list() -> Result<()> {
let mut table = Table::new();
let wasi = WasiCtxBuilder::new()
.inherit_stdio()
.inherit_stdout()
.inherit_stderr()
.push_preopened_dir(open_dir, DirPerms::all(), FilePerms::all(), "/")
.build(&mut table)?;
@ -395,7 +392,7 @@ async fn directory_list() -> Result<()> {
.map_err(|()| anyhow::anyhow!("command returned with failing exit status"))
}
#[test_log::test(tokio::test)]
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn default_clocks() -> Result<()> {
let mut table = Table::new();
let wasi = WasiCtxBuilder::new().build(&mut table)?;
@ -409,7 +406,7 @@ async fn default_clocks() -> Result<()> {
.map_err(|()| anyhow::anyhow!("command returned with failing exit status"))
}
#[test_log::test(tokio::test)]
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn export_cabi_realloc() -> Result<()> {
let mut table = Table::new();
let wasi = WasiCtxBuilder::new().build(&mut table)?;
@ -425,7 +422,7 @@ async fn export_cabi_realloc() -> Result<()> {
.map_err(|()| anyhow::anyhow!("command returned with failing exit status"))
}
#[test_log::test(tokio::test)]
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn read_only() -> Result<()> {
let dir = tempfile::tempdir()?;

crates/test-programs/tests/reactor.rs (48 changed lines)

@ -1,11 +1,10 @@
use anyhow::Result;
use std::sync::{Arc, RwLock};
use wasmtime::{
component::{Component, Linker},
Config, Engine, Store,
};
use wasmtime_wasi::preview2::wasi::clocks::wall_clock;
use wasmtime_wasi::preview2::wasi::filesystem::filesystem;
use wasmtime_wasi::preview2::bindings::clocks::wall_clock;
use wasmtime_wasi::preview2::bindings::filesystem::filesystem;
use wasmtime_wasi::preview2::{self, Table, WasiCtx, WasiCtxBuilder, WasiView};
lazy_static::lazy_static! {
@ -28,14 +27,14 @@ wasmtime::component::bindgen!({
world: "test-reactor",
async: true,
with: {
"wasi:io/streams": preview2::wasi::io::streams,
"wasi:filesystem/filesystem": preview2::wasi::filesystem::filesystem,
"wasi:cli-base/environment": preview2::wasi::cli_base::environment,
"wasi:cli-base/preopens": preview2::wasi::cli_base::preopens,
"wasi:cli-base/exit": preview2::wasi::cli_base::exit,
"wasi:cli-base/stdin": preview2::wasi::cli_base::stdin,
"wasi:cli-base/stdout": preview2::wasi::cli_base::stdout,
"wasi:cli-base/stderr": preview2::wasi::cli_base::stderr,
"wasi:io/streams": preview2::bindings::io::streams,
"wasi:filesystem/filesystem": preview2::bindings::filesystem::filesystem,
"wasi:cli-base/environment": preview2::bindings::cli_base::environment,
"wasi:cli-base/preopens": preview2::bindings::cli_base::preopens,
"wasi:cli-base/exit": preview2::bindings::cli_base::exit,
"wasi:cli-base/stdin": preview2::bindings::cli_base::stdin,
"wasi:cli-base/stdout": preview2::bindings::cli_base::stdout,
"wasi:cli-base/stderr": preview2::bindings::cli_base::stderr,
},
ownership: Borrowing {
duplicate_if_necessary: false
@ -69,14 +68,14 @@ async fn instantiate(
let mut linker = Linker::new(&ENGINE);
// All of the imports available to the world are provided by the wasi-common crate:
preview2::wasi::filesystem::filesystem::add_to_linker(&mut linker, |x| x)?;
preview2::wasi::io::streams::add_to_linker(&mut linker, |x| x)?;
preview2::wasi::cli_base::environment::add_to_linker(&mut linker, |x| x)?;
preview2::wasi::cli_base::preopens::add_to_linker(&mut linker, |x| x)?;
preview2::wasi::cli_base::exit::add_to_linker(&mut linker, |x| x)?;
preview2::wasi::cli_base::stdin::add_to_linker(&mut linker, |x| x)?;
preview2::wasi::cli_base::stdout::add_to_linker(&mut linker, |x| x)?;
preview2::wasi::cli_base::stderr::add_to_linker(&mut linker, |x| x)?;
preview2::bindings::filesystem::filesystem::add_to_linker(&mut linker, |x| x)?;
preview2::bindings::io::streams::add_to_linker(&mut linker, |x| x)?;
preview2::bindings::cli_base::environment::add_to_linker(&mut linker, |x| x)?;
preview2::bindings::cli_base::preopens::add_to_linker(&mut linker, |x| x)?;
preview2::bindings::cli_base::exit::add_to_linker(&mut linker, |x| x)?;
preview2::bindings::cli_base::stdin::add_to_linker(&mut linker, |x| x)?;
preview2::bindings::cli_base::stdout::add_to_linker(&mut linker, |x| x)?;
preview2::bindings::cli_base::stderr::add_to_linker(&mut linker, |x| x)?;
let mut store = Store::new(&ENGINE, wasi_ctx);
@ -109,14 +108,15 @@ async fn reactor_tests() -> Result<()> {
// `host` and `wasi-common` crate.
// Note, this works because of the add_to_linker invocations using the
// `host` crate for `streams`, not because of `with` in the bindgen macro.
let write_dest: Arc<RwLock<Vec<u8>>> = Arc::new(RwLock::new(Vec::new()));
let writepipe = wasmtime_wasi::preview2::pipe::WritePipe::from_shared(write_dest.clone());
let outputstream: Box<dyn wasmtime_wasi::preview2::OutputStream> = Box::new(writepipe);
let table_ix = store.data_mut().table_mut().push(Box::new(outputstream))?;
let writepipe = preview2::pipe::MemoryOutputPipe::new();
let table_ix = preview2::TableStreamExt::push_output_stream(
store.data_mut().table_mut(),
Box::new(writepipe.clone()),
)?;
let r = reactor.call_write_strings_to(&mut store, table_ix).await?;
assert_eq!(r, Ok(()));
assert_eq!(*write_dest.read().unwrap(), b"hellogussie");
assert_eq!(writepipe.contents().as_ref(), b"hellogussie");
// Show that the `with` invocation in the macro means we get to re-use the
// type definitions from inside the `host` crate for these structures:

crates/test-programs/tests/wasi-preview1-host-in-preview2.rs (102 changed lines)

@ -3,7 +3,7 @@ use anyhow::Result;
use tempfile::TempDir;
use wasmtime::{Config, Engine, Linker, Store};
use wasmtime_wasi::preview2::{
pipe::WritePipe,
pipe::MemoryOutputPipe,
preview1::{add_to_linker, WasiPreview1Adapter, WasiPreview1View},
DirPerms, FilePerms, Table, WasiCtx, WasiCtxBuilder, WasiView,
};
@ -30,8 +30,8 @@ pub fn prepare_workspace(exe_name: &str) -> Result<TempDir> {
async fn run(name: &str, inherit_stdio: bool) -> Result<()> {
let workspace = prepare_workspace(name)?;
let stdout = WritePipe::new_in_memory();
let stderr = WritePipe::new_in_memory();
let stdout = MemoryOutputPipe::new();
let stderr = MemoryOutputPipe::new();
let r = {
let mut linker = Linker::new(&ENGINE);
add_to_linker(&mut linker)?;
@ -102,17 +102,11 @@ async fn run(name: &str, inherit_stdio: bool) -> Result<()> {
};
r.map_err(move |trap: anyhow::Error| {
let stdout = stdout
.try_into_inner()
.expect("sole ref to stdout")
.into_inner();
let stdout = stdout.try_into_inner().expect("sole ref to stdout");
if !stdout.is_empty() {
println!("guest stdout:\n{}\n===", String::from_utf8_lossy(&stdout));
}
let stderr = stderr
.try_into_inner()
.expect("sole ref to stderr")
.into_inner();
let stderr = stderr.try_into_inner().expect("sole ref to stderr");
if !stderr.is_empty() {
println!("guest stderr:\n{}\n===", String::from_utf8_lossy(&stderr));
}
@ -129,124 +123,126 @@ async fn run(name: &str, inherit_stdio: bool) -> Result<()> {
// tests which fail.
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn big_random_buf() {
run("big_random_buf", true).await.unwrap()
run("big_random_buf", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn clock_time_get() {
run("clock_time_get", true).await.unwrap()
run("clock_time_get", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn close_preopen() {
run("close_preopen", true).await.unwrap()
run("close_preopen", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn dangling_fd() {
run("dangling_fd", true).await.unwrap()
run("dangling_fd", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn dangling_symlink() {
run("dangling_symlink", true).await.unwrap()
run("dangling_symlink", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn directory_seek() {
run("directory_seek", true).await.unwrap()
run("directory_seek", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn dir_fd_op_failures() {
run("dir_fd_op_failures", true).await.unwrap()
run("dir_fd_op_failures", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn fd_advise() {
run("fd_advise", true).await.unwrap()
run("fd_advise", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn fd_filestat_get() {
run("fd_filestat_get", true).await.unwrap()
run("fd_filestat_get", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn fd_filestat_set() {
run("fd_filestat_set", true).await.unwrap()
run("fd_filestat_set", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn fd_flags_set() {
run("fd_flags_set", true).await.unwrap()
run("fd_flags_set", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn fd_readdir() {
run("fd_readdir", true).await.unwrap()
run("fd_readdir", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn file_allocate() {
run("file_allocate", true).await.unwrap()
run("file_allocate", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn file_pread_pwrite() {
run("file_pread_pwrite", true).await.unwrap()
run("file_pread_pwrite", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn file_seek_tell() {
run("file_seek_tell", true).await.unwrap()
run("file_seek_tell", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn file_truncation() {
run("file_truncation", true).await.unwrap()
run("file_truncation", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn file_unbuffered_write() {
run("file_unbuffered_write", true).await.unwrap()
run("file_unbuffered_write", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
#[cfg_attr(windows, should_panic)]
async fn interesting_paths() {
run("interesting_paths", true).await.unwrap()
run("interesting_paths", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn isatty() {
run("isatty", true).await.unwrap()
run("isatty", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn nofollow_errors() {
run("nofollow_errors", true).await.unwrap()
run("nofollow_errors", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn overwrite_preopen() {
run("overwrite_preopen", true).await.unwrap()
run("overwrite_preopen", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn path_exists() {
run("path_exists", true).await.unwrap()
run("path_exists", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn path_filestat() {
run("path_filestat", true).await.unwrap()
run("path_filestat", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn path_link() {
run("path_link", true).await.unwrap()
run("path_link", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn path_open_create_existing() {
run("path_open_create_existing", true).await.unwrap()
run("path_open_create_existing", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn path_open_read_write() {
run("path_open_read_write", true).await.unwrap()
run("path_open_read_write", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn path_open_dirfd_not_dir() {
run("path_open_dirfd_not_dir", true).await.unwrap()
run("path_open_dirfd_not_dir", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn path_open_missing() {
run("path_open_missing", true).await.unwrap()
run("path_open_missing", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn path_open_nonblock() {
run("path_open_nonblock", true).await.unwrap()
run("path_open_nonblock", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn path_rename_dir_trailing_slashes() {
run("path_rename_dir_trailing_slashes", true).await.unwrap()
run("path_rename_dir_trailing_slashes", false)
.await
.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
#[should_panic]
@ -257,11 +253,11 @@ async fn path_rename_file_trailing_slashes() {
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn path_rename() {
run("path_rename", true).await.unwrap()
run("path_rename", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn path_symlink_trailing_slashes() {
run("path_symlink_trailing_slashes", true).await.unwrap()
run("path_symlink_trailing_slashes", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
#[should_panic]
@ -276,7 +272,7 @@ async fn poll_oneoff_stdio() {
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn readlink() {
run("readlink", true).await.unwrap()
run("readlink", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
#[should_panic]
@ -287,37 +283,37 @@ async fn remove_directory_trailing_slashes() {
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn remove_nonempty_directory() {
run("remove_nonempty_directory", true).await.unwrap()
run("remove_nonempty_directory", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn renumber() {
run("renumber", true).await.unwrap()
run("renumber", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn sched_yield() {
run("sched_yield", true).await.unwrap()
run("sched_yield", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn stdio() {
run("stdio", true).await.unwrap()
run("stdio", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn symlink_create() {
run("symlink_create", true).await.unwrap()
run("symlink_create", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn symlink_filestat() {
run("symlink_filestat", true).await.unwrap()
run("symlink_filestat", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn symlink_loop() {
run("symlink_loop", true).await.unwrap()
run("symlink_loop", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn unlink_file_trailing_slashes() {
run("unlink_file_trailing_slashes", true).await.unwrap()
run("unlink_file_trailing_slashes", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn path_open_preopen() {
run("path_open_preopen", true).await.unwrap()
run("path_open_preopen", false).await.unwrap()
}

crates/test-programs/tests/wasi-preview2-components-sync.rs (297 changed lines)

@ -0,0 +1,297 @@
#![cfg(feature = "test_programs")]
use anyhow::Result;
use tempfile::TempDir;
use wasmtime::{component::Linker, Config, Engine, Store};
use wasmtime_wasi::preview2::{
command::sync::{add_to_linker, Command},
pipe::MemoryOutputPipe,
DirPerms, FilePerms, Table, WasiCtx, WasiCtxBuilder, WasiView,
};
lazy_static::lazy_static! {
static ref ENGINE: Engine = {
let mut config = Config::new();
config.wasm_backtrace_details(wasmtime::WasmBacktraceDetails::Enable);
config.wasm_component_model(true);
config.async_support(false);
let engine = Engine::new(&config).unwrap();
engine
};
}
// uses ENGINE, creates a fn get_component(&str) -> Component
include!(concat!(env!("OUT_DIR"), "/wasi_tests_components.rs"));
pub fn prepare_workspace(exe_name: &str) -> Result<TempDir> {
let prefix = format!("wasi_components_{}_", exe_name);
let tempdir = tempfile::Builder::new().prefix(&prefix).tempdir()?;
Ok(tempdir)
}
fn run(name: &str, inherit_stdio: bool) -> Result<()> {
let workspace = prepare_workspace(name)?;
let stdout = MemoryOutputPipe::new();
let stderr = MemoryOutputPipe::new();
let r = {
let mut linker = Linker::new(&ENGINE);
add_to_linker(&mut linker)?;
// Create our wasi context.
// Additionally register any preopened directories if we have them.
let mut builder = WasiCtxBuilder::new();
if inherit_stdio {
builder = builder.inherit_stdio();
} else {
builder = builder
.set_stdout(stdout.clone())
.set_stderr(stderr.clone());
}
builder = builder.set_args(&[name, "."]);
println!("preopen: {:?}", workspace);
let preopen_dir =
cap_std::fs::Dir::open_ambient_dir(workspace.path(), cap_std::ambient_authority())?;
builder = builder.push_preopened_dir(preopen_dir, DirPerms::all(), FilePerms::all(), ".");
for (var, val) in test_programs::wasi_tests_environment() {
builder = builder.push_env(var, val);
}
let mut table = Table::new();
let wasi = builder.build(&mut table)?;
struct Ctx {
wasi: WasiCtx,
table: Table,
}
impl WasiView for Ctx {
fn ctx(&self) -> &WasiCtx {
&self.wasi
}
fn ctx_mut(&mut self) -> &mut WasiCtx {
&mut self.wasi
}
fn table(&self) -> &Table {
&self.table
}
fn table_mut(&mut self) -> &mut Table {
&mut self.table
}
}
let ctx = Ctx { wasi, table };
let mut store = Store::new(&ENGINE, ctx);
let (command, _instance) = Command::instantiate(&mut store, &get_component(name), &linker)?;
command
.call_run(&mut store)?
.map_err(|()| anyhow::anyhow!("run returned a failure"))?;
Ok(())
};
r.map_err(move |trap: anyhow::Error| {
let stdout = stdout.try_into_inner().expect("single ref to stdout");
if !stdout.is_empty() {
println!("guest stdout:\n{}\n===", String::from_utf8_lossy(&stdout));
}
let stderr = stderr.try_into_inner().expect("single ref to stderr");
if !stderr.is_empty() {
println!("guest stderr:\n{}\n===", String::from_utf8_lossy(&stderr));
}
trap.context(format!(
"error while testing wasi-tests {} with cap-std-sync",
name
))
})?;
Ok(())
}
// Below here is mechanical: there should be one test for every binary in
// wasi-tests. The only differences should be should_panic annotations for
// tests which fail.
#[test_log::test]
fn big_random_buf() {
run("big_random_buf", false).unwrap()
}
#[test_log::test]
fn clock_time_get() {
run("clock_time_get", false).unwrap()
}
#[test_log::test]
fn close_preopen() {
run("close_preopen", false).unwrap()
}
#[test_log::test]
fn dangling_fd() {
run("dangling_fd", false).unwrap()
}
#[test_log::test]
fn dangling_symlink() {
run("dangling_symlink", false).unwrap()
}
#[test_log::test]
fn directory_seek() {
run("directory_seek", false).unwrap()
}
#[test_log::test]
fn dir_fd_op_failures() {
run("dir_fd_op_failures", false).unwrap()
}
#[test_log::test]
fn fd_advise() {
run("fd_advise", false).unwrap()
}
#[test_log::test]
fn fd_filestat_get() {
run("fd_filestat_get", false).unwrap()
}
#[test_log::test]
fn fd_filestat_set() {
run("fd_filestat_set", false).unwrap()
}
#[test_log::test]
fn fd_flags_set() {
run("fd_flags_set", false).unwrap()
}
#[test_log::test]
fn fd_readdir() {
run("fd_readdir", false).unwrap()
}
#[test_log::test]
fn file_allocate() {
run("file_allocate", false).unwrap()
}
#[test_log::test]
fn file_pread_pwrite() {
run("file_pread_pwrite", false).unwrap()
}
#[test_log::test]
fn file_seek_tell() {
run("file_seek_tell", false).unwrap()
}
#[test_log::test]
fn file_truncation() {
run("file_truncation", false).unwrap()
}
#[test_log::test]
fn file_unbuffered_write() {
run("file_unbuffered_write", false).unwrap()
}
#[test_log::test]
#[cfg_attr(windows, should_panic)]
fn interesting_paths() {
run("interesting_paths", false).unwrap()
}
#[test_log::test]
fn isatty() {
run("isatty", false).unwrap()
}
#[test_log::test]
fn nofollow_errors() {
run("nofollow_errors", false).unwrap()
}
#[test_log::test]
fn overwrite_preopen() {
run("overwrite_preopen", false).unwrap()
}
#[test_log::test]
fn path_exists() {
run("path_exists", false).unwrap()
}
#[test_log::test]
fn path_filestat() {
run("path_filestat", false).unwrap()
}
#[test_log::test]
fn path_link() {
run("path_link", false).unwrap()
}
#[test_log::test]
fn path_open_create_existing() {
run("path_open_create_existing", false).unwrap()
}
#[test_log::test]
fn path_open_read_write() {
run("path_open_read_write", false).unwrap()
}
#[test_log::test]
fn path_open_dirfd_not_dir() {
run("path_open_dirfd_not_dir", false).unwrap()
}
#[test_log::test]
fn path_open_missing() {
run("path_open_missing", false).unwrap()
}
#[test_log::test]
fn path_open_nonblock() {
run("path_open_nonblock", false).unwrap()
}
#[test_log::test]
fn path_rename_dir_trailing_slashes() {
run("path_rename_dir_trailing_slashes", false).unwrap()
}
#[test_log::test]
#[should_panic]
fn path_rename_file_trailing_slashes() {
run("path_rename_file_trailing_slashes", false).unwrap()
}
#[test_log::test]
fn path_rename() {
run("path_rename", false).unwrap()
}
#[test_log::test]
fn path_symlink_trailing_slashes() {
run("path_symlink_trailing_slashes", false).unwrap()
}
#[test_log::test]
fn poll_oneoff_files() {
run("poll_oneoff_files", false).unwrap()
}
#[cfg_attr(windows, should_panic)]
#[test_log::test]
fn poll_oneoff_stdio() {
run("poll_oneoff_stdio", true).unwrap()
}
#[test_log::test]
fn readlink() {
run("readlink", false).unwrap()
}
#[test_log::test]
#[should_panic]
fn remove_directory_trailing_slashes() {
run("remove_directory_trailing_slashes", false).unwrap()
}
#[test_log::test]
fn remove_nonempty_directory() {
run("remove_nonempty_directory", false).unwrap()
}
#[test_log::test]
fn renumber() {
run("renumber", false).unwrap()
}
#[test_log::test]
fn sched_yield() {
run("sched_yield", false).unwrap()
}
#[test_log::test]
fn stdio() {
run("stdio", false).unwrap()
}
#[test_log::test]
fn symlink_create() {
run("symlink_create", false).unwrap()
}
#[test_log::test]
fn symlink_filestat() {
run("symlink_filestat", false).unwrap()
}
#[test_log::test]
fn symlink_loop() {
run("symlink_loop", false).unwrap()
}
#[test_log::test]
fn unlink_file_trailing_slashes() {
run("unlink_file_trailing_slashes", false).unwrap()
}
#[test_log::test]
fn path_open_preopen() {
run("path_open_preopen", false).unwrap()
}

crates/test-programs/tests/wasi-preview2-components.rs (109 changed lines)

@ -3,8 +3,8 @@ use anyhow::Result;
use tempfile::TempDir;
use wasmtime::{component::Linker, Config, Engine, Store};
use wasmtime_wasi::preview2::{
pipe::WritePipe,
wasi::command::{add_to_linker, Command},
command::{add_to_linker, Command},
pipe::MemoryOutputPipe,
DirPerms, FilePerms, Table, WasiCtx, WasiCtxBuilder, WasiView,
};
@ -30,8 +30,8 @@ pub fn prepare_workspace(exe_name: &str) -> Result<TempDir> {
async fn run(name: &str, inherit_stdio: bool) -> Result<()> {
let workspace = prepare_workspace(name)?;
let stdout = WritePipe::new_in_memory();
let stderr = WritePipe::new_in_memory();
let stdout = MemoryOutputPipe::new();
let stderr = MemoryOutputPipe::new();
let r = {
let mut linker = Linker::new(&ENGINE);
add_to_linker(&mut linker)?;
@ -89,17 +89,11 @@ async fn run(name: &str, inherit_stdio: bool) -> Result<()> {
};
r.map_err(move |trap: anyhow::Error| {
let stdout = stdout
.try_into_inner()
.expect("sole ref to stdout")
.into_inner();
let stdout = stdout.try_into_inner().expect("single ref to stdout");
if !stdout.is_empty() {
println!("guest stdout:\n{}\n===", String::from_utf8_lossy(&stdout));
}
let stderr = stderr
.try_into_inner()
.expect("sole ref to stderr")
.into_inner();
let stderr = stderr.try_into_inner().expect("single ref to stderr");
if !stderr.is_empty() {
println!("guest stderr:\n{}\n===", String::from_utf8_lossy(&stderr));
}
@ -116,124 +110,126 @@ async fn run(name: &str, inherit_stdio: bool) -> Result<()> {
// tests which fail.
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn big_random_buf() {
run("big_random_buf", true).await.unwrap()
run("big_random_buf", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn clock_time_get() {
run("clock_time_get", true).await.unwrap()
run("clock_time_get", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn close_preopen() {
run("close_preopen", true).await.unwrap()
run("close_preopen", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn dangling_fd() {
run("dangling_fd", true).await.unwrap()
run("dangling_fd", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn dangling_symlink() {
run("dangling_symlink", true).await.unwrap()
run("dangling_symlink", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn directory_seek() {
run("directory_seek", true).await.unwrap()
run("directory_seek", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn dir_fd_op_failures() {
run("dir_fd_op_failures", true).await.unwrap()
run("dir_fd_op_failures", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn fd_advise() {
run("fd_advise", true).await.unwrap()
run("fd_advise", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn fd_filestat_get() {
run("fd_filestat_get", true).await.unwrap()
run("fd_filestat_get", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn fd_filestat_set() {
run("fd_filestat_set", true).await.unwrap()
run("fd_filestat_set", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn fd_flags_set() {
run("fd_flags_set", true).await.unwrap()
run("fd_flags_set", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn fd_readdir() {
run("fd_readdir", true).await.unwrap()
run("fd_readdir", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn file_allocate() {
run("file_allocate", true).await.unwrap()
run("file_allocate", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn file_pread_pwrite() {
run("file_pread_pwrite", true).await.unwrap()
run("file_pread_pwrite", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn file_seek_tell() {
run("file_seek_tell", true).await.unwrap()
run("file_seek_tell", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn file_truncation() {
run("file_truncation", true).await.unwrap()
run("file_truncation", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn file_unbuffered_write() {
run("file_unbuffered_write", true).await.unwrap()
run("file_unbuffered_write", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
#[cfg_attr(windows, should_panic)]
async fn interesting_paths() {
run("interesting_paths", true).await.unwrap()
run("interesting_paths", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn isatty() {
run("isatty", true).await.unwrap()
run("isatty", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn nofollow_errors() {
run("nofollow_errors", true).await.unwrap()
run("nofollow_errors", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn overwrite_preopen() {
run("overwrite_preopen", true).await.unwrap()
run("overwrite_preopen", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn path_exists() {
run("path_exists", true).await.unwrap()
run("path_exists", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn path_filestat() {
run("path_filestat", true).await.unwrap()
run("path_filestat", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn path_link() {
run("path_link", true).await.unwrap()
run("path_link", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn path_open_create_existing() {
run("path_open_create_existing", true).await.unwrap()
run("path_open_create_existing", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn path_open_read_write() {
run("path_open_read_write", true).await.unwrap()
run("path_open_read_write", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn path_open_dirfd_not_dir() {
run("path_open_dirfd_not_dir", true).await.unwrap()
run("path_open_dirfd_not_dir", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn path_open_missing() {
run("path_open_missing", true).await.unwrap()
run("path_open_missing", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn path_open_nonblock() {
run("path_open_nonblock", true).await.unwrap()
run("path_open_nonblock", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn path_rename_dir_trailing_slashes() {
run("path_rename_dir_trailing_slashes", true).await.unwrap()
run("path_rename_dir_trailing_slashes", false)
.await
.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
#[should_panic]
@ -244,26 +240,25 @@ async fn path_rename_file_trailing_slashes() {
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn path_rename() {
run("path_rename", true).await.unwrap()
run("path_rename", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn path_symlink_trailing_slashes() {
run("path_symlink_trailing_slashes", true).await.unwrap()
run("path_symlink_trailing_slashes", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
#[cfg_attr(windows, should_panic)]
async fn poll_oneoff_files() {
run("poll_oneoff_files", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
// This is a known bug with the preview 2 implementation on Windows:
#[cfg_attr(windows, should_panic)]
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn poll_oneoff_stdio() {
run("poll_oneoff_stdio", true).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn readlink() {
run("readlink", true).await.unwrap()
run("readlink", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
#[should_panic]
@ -274,37 +269,37 @@ async fn remove_directory_trailing_slashes() {
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn remove_nonempty_directory() {
run("remove_nonempty_directory", true).await.unwrap()
run("remove_nonempty_directory", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn renumber() {
run("renumber", true).await.unwrap()
run("renumber", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn sched_yield() {
run("sched_yield", true).await.unwrap()
run("sched_yield", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn stdio() {
run("stdio", true).await.unwrap()
run("stdio", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn symlink_create() {
run("symlink_create", true).await.unwrap()
run("symlink_create", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn symlink_filestat() {
run("symlink_filestat", true).await.unwrap()
run("symlink_filestat", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn symlink_loop() {
run("symlink_loop", true).await.unwrap()
run("symlink_loop", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn unlink_file_trailing_slashes() {
run("unlink_file_trailing_slashes", true).await.unwrap()
run("unlink_file_trailing_slashes", false).await.unwrap()
}
#[test_log::test(tokio::test(flavor = "multi_thread"))]
async fn path_open_preopen() {
run("path_open_preopen", true).await.unwrap()
run("path_open_preopen", false).await.unwrap()
}

crates/wasi-common/tokio/Cargo.toml (4 changed lines)

@ -14,7 +14,7 @@ include = ["src/**/*", "LICENSE" ]
wasi-common = { workspace = true }
wasi-cap-std-sync = { workspace = true }
wiggle = { workspace = true }
tokio = { version = "1.8.0", features = [ "rt", "fs", "time", "io-util", "net", "io-std", "rt-multi-thread"] }
tokio = { workspace = true, features = [ "rt", "fs", "time", "io-util", "net", "io-std", "rt-multi-thread"] }
cap-std = { workspace = true }
anyhow = { workspace = true }
io-lifetimes = { workspace = true }
@ -27,5 +27,5 @@ io-extras = { workspace = true }
[dev-dependencies]
tempfile = "3.1.0"
tokio = { version = "1.8.0", features = [ "macros" ] }
tokio = { workspace = true, features = [ "macros" ] }
cap-tempfile = { workspace = true }

crates/wasi-http/Cargo.toml (2 changed lines)

@ -9,7 +9,7 @@ description = "Experimental HTTP library for WebAssembly in Wasmtime"
[dependencies]
anyhow = { workspace = true }
bytes = "1.1.0"
bytes = { workspace = true }
hyper = { version = "=1.0.0-rc.3", features = ["full"] }
tokio = { version = "1", default-features = false, features = ["net", "rt-multi-thread", "time"] }
http = { version = "0.2.9" }

crates/wasi-preview1-component-adapter/src/descriptors.rs (2 changed lines)

@ -305,7 +305,7 @@ impl Descriptors {
// Implementation of fd_renumber
pub fn renumber(&mut self, from_fd: Fd, to_fd: Fd) -> Result<(), Errno> {
// First, ensure from_fd is in bounds:
drop(self.get(from_fd)?);
let _ = self.get(from_fd)?;
// Expand table until to_fd is in bounds as well:
while self.table_len.get() as u32 <= to_fd as u32 {
self.push_closed()?;

crates/wasi-preview1-component-adapter/src/lib.rs (12 changed lines)

@ -839,7 +839,7 @@ pub unsafe extern "C" fn fd_read(
let read_len = u64::try_from(len).trapping_unwrap();
let wasi_stream = streams.get_read_stream()?;
let (data, end) = state
let (data, stream_stat) = state
.import_alloc
.with_buffer(ptr, len, || {
if blocking {
@ -861,7 +861,7 @@ pub unsafe extern "C" fn fd_read(
let len = data.len();
forget(data);
if !end && len == 0 {
if stream_stat == crate::streams::StreamStatus::Open && len == 0 {
Err(ERRNO_INTR)
} else {
*nread = len;
@ -1215,14 +1215,16 @@ pub unsafe extern "C" fn fd_write(
Descriptor::Streams(streams) => {
let wasi_stream = streams.get_write_stream()?;
let bytes = if let StreamType::File(file) = &streams.type_ {
let (bytes, _stream_stat) = if let StreamType::File(file) = &streams.type_ {
if file.blocking {
streams::blocking_write(wasi_stream, bytes)
} else {
streams::write(wasi_stream, bytes)
}
} else {
streams::write(wasi_stream, bytes)
// Use blocking writes on non-file streams (stdout, stderr, as sockets
// aren't currently used).
streams::blocking_write(wasi_stream, bytes)
}
.map_err(|_| ERRNO_IO)?;
@ -1376,7 +1378,7 @@ pub unsafe extern "C" fn path_open(
fdflags: Fdflags,
opened_fd: *mut Fd,
) -> Errno {
drop(fs_rights_inheriting);
let _ = fs_rights_inheriting;
let path = slice::from_raw_parts(path_ptr, path_len);
let at_flags = at_flags_from_lookupflags(dirflags);

crates/wasi/Cargo.toml (12 changed lines)

@ -20,7 +20,10 @@ wasi-cap-std-sync = { workspace = true, optional = true }
wasi-tokio = { workspace = true, optional = true }
wiggle = { workspace = true, optional = true }
libc = { workspace = true }
once_cell = { workspace = true }
tokio = { workspace = true, optional = true, features = ["time", "sync", "io-std", "io-util", "rt", "rt-multi-thread", "net"] }
bytes = { workspace = true }
thiserror = { workspace = true, optional = true }
tracing = { workspace = true, optional = true }
cap-std = { workspace = true, optional = true }
@ -31,10 +34,13 @@ fs-set-times = { workspace = true, optional = true }
bitflags = { workspace = true, optional = true }
async-trait = { workspace = true, optional = true }
system-interface = { workspace = true, optional = true}
rustix = { workspace = true, features = ["net", "event"], optional = true}
futures = { workspace = true, optional = true }
[dev-dependencies]
tokio = { workspace = true, features = ["time", "sync", "io-std", "io-util", "rt", "rt-multi-thread", "net", "macros"] }
[target.'cfg(unix)'.dependencies]
rustix = { workspace = true, features = ["fs"] }
rustix = { workspace = true, features = ["fs"], optional = true }
[target.'cfg(windows)'.dependencies]
io-extras = { workspace = true }
@ -59,6 +65,8 @@ preview2 = [
'dep:async-trait',
'dep:system-interface',
'dep:rustix',
'dep:tokio',
'dep:futures',
]
preview1-on-preview2 = [
"preview2",

crates/wasi/src/preview2/command.rs (91 changed lines)

@ -0,0 +1,91 @@
use crate::preview2::WasiView;
wasmtime::component::bindgen!({
world: "wasi:preview/command",
tracing: true,
async: true,
trappable_error_type: {
"filesystem"::"error-code": Error,
"streams"::"stream-error": Error,
},
with: {
"wasi:filesystem/filesystem": crate::preview2::bindings::filesystem::filesystem,
"wasi:clocks/monotonic_clock": crate::preview2::bindings::clocks::monotonic_clock,
"wasi:poll/poll": crate::preview2::bindings::poll::poll,
"wasi:io/streams": crate::preview2::bindings::io::streams,
"wasi:clocks/timezone": crate::preview2::bindings::clocks::timezone,
"wasi:clocks/wall_clock": crate::preview2::bindings::clocks::wall_clock,
"wasi:random/random": crate::preview2::bindings::random::random,
"wasi:cli_base/environment": crate::preview2::bindings::cli_base::environment,
"wasi:cli_base/exit": crate::preview2::bindings::cli_base::exit,
"wasi:cli_base/preopens": crate::preview2::bindings::cli_base::preopens,
"wasi:cli_base/stdin": crate::preview2::bindings::cli_base::stdin,
"wasi:cli_base/stdout": crate::preview2::bindings::cli_base::stdout,
"wasi:cli_base/stderr": crate::preview2::bindings::cli_base::stderr,
},
});
pub fn add_to_linker<T: WasiView>(l: &mut wasmtime::component::Linker<T>) -> anyhow::Result<()> {
crate::preview2::bindings::clocks::wall_clock::add_to_linker(l, |t| t)?;
crate::preview2::bindings::clocks::monotonic_clock::add_to_linker(l, |t| t)?;
crate::preview2::bindings::clocks::timezone::add_to_linker(l, |t| t)?;
crate::preview2::bindings::filesystem::filesystem::add_to_linker(l, |t| t)?;
crate::preview2::bindings::poll::poll::add_to_linker(l, |t| t)?;
crate::preview2::bindings::io::streams::add_to_linker(l, |t| t)?;
crate::preview2::bindings::random::random::add_to_linker(l, |t| t)?;
crate::preview2::bindings::cli_base::exit::add_to_linker(l, |t| t)?;
crate::preview2::bindings::cli_base::environment::add_to_linker(l, |t| t)?;
crate::preview2::bindings::cli_base::preopens::add_to_linker(l, |t| t)?;
crate::preview2::bindings::cli_base::stdin::add_to_linker(l, |t| t)?;
crate::preview2::bindings::cli_base::stdout::add_to_linker(l, |t| t)?;
crate::preview2::bindings::cli_base::stderr::add_to_linker(l, |t| t)?;
Ok(())
}
pub mod sync {
use crate::preview2::WasiView;
wasmtime::component::bindgen!({
world: "wasi:preview/command",
tracing: true,
async: false,
trappable_error_type: {
"filesystem"::"error-code": Error,
"streams"::"stream-error": Error,
},
with: {
"wasi:filesystem/filesystem": crate::preview2::bindings::sync_io::filesystem::filesystem,
"wasi:clocks/monotonic_clock": crate::preview2::bindings::clocks::monotonic_clock,
"wasi:poll/poll": crate::preview2::bindings::sync_io::poll::poll,
"wasi:io/streams": crate::preview2::bindings::sync_io::io::streams,
"wasi:clocks/timezone": crate::preview2::bindings::clocks::timezone,
"wasi:clocks/wall_clock": crate::preview2::bindings::clocks::wall_clock,
"wasi:random/random": crate::preview2::bindings::random::random,
"wasi:cli_base/environment": crate::preview2::bindings::cli_base::environment,
"wasi:cli_base/exit": crate::preview2::bindings::cli_base::exit,
"wasi:cli_base/preopens": crate::preview2::bindings::cli_base::preopens,
"wasi:cli_base/stdin": crate::preview2::bindings::cli_base::stdin,
"wasi:cli_base/stdout": crate::preview2::bindings::cli_base::stdout,
"wasi:cli_base/stderr": crate::preview2::bindings::cli_base::stderr,
},
});
pub fn add_to_linker<T: WasiView>(
l: &mut wasmtime::component::Linker<T>,
) -> anyhow::Result<()> {
crate::preview2::bindings::clocks::wall_clock::add_to_linker(l, |t| t)?;
crate::preview2::bindings::clocks::monotonic_clock::add_to_linker(l, |t| t)?;
crate::preview2::bindings::clocks::timezone::add_to_linker(l, |t| t)?;
crate::preview2::bindings::sync_io::filesystem::filesystem::add_to_linker(l, |t| t)?;
crate::preview2::bindings::sync_io::poll::poll::add_to_linker(l, |t| t)?;
crate::preview2::bindings::sync_io::io::streams::add_to_linker(l, |t| t)?;
crate::preview2::bindings::random::random::add_to_linker(l, |t| t)?;
crate::preview2::bindings::cli_base::exit::add_to_linker(l, |t| t)?;
crate::preview2::bindings::cli_base::environment::add_to_linker(l, |t| t)?;
crate::preview2::bindings::cli_base::preopens::add_to_linker(l, |t| t)?;
crate::preview2::bindings::cli_base::stdin::add_to_linker(l, |t| t)?;
crate::preview2::bindings::cli_base::stdout::add_to_linker(l, |t| t)?;
crate::preview2::bindings::cli_base::stderr::add_to_linker(l, |t| t)?;
Ok(())
}
}

crates/wasi/src/preview2/ctx.rs

@ -2,7 +2,7 @@ use crate::preview2::{
clocks::{self, HostMonotonicClock, HostWallClock},
filesystem::{Dir, TableFsExt},
pipe, random, stdio,
stream::{InputStream, OutputStream, TableStreamExt},
stream::{HostInputStream, HostOutputStream, TableStreamExt},
DirPerms, FilePerms, Table,
};
use cap_rand::{Rng, RngCore, SeedableRng};
@ -10,9 +10,9 @@ use cap_rand::{Rng, RngCore, SeedableRng};
use super::clocks::host::{monotonic_clock, wall_clock};
pub struct WasiCtxBuilder {
stdin: Box<dyn InputStream>,
stdout: Box<dyn OutputStream>,
stderr: Box<dyn OutputStream>,
stdin: Box<dyn HostInputStream>,
stdout: Box<dyn HostOutputStream>,
stderr: Box<dyn HostOutputStream>,
env: Vec<(String, String)>,
args: Vec<String>,
preopens: Vec<(Dir, String)>,
@ -35,11 +35,10 @@ impl WasiCtxBuilder {
// API.
let insecure_random_seed =
cap_rand::thread_rng(cap_rand::ambient_authority()).gen::<u128>();
Self {
stdin: Box::new(pipe::ReadPipe::new(std::io::empty())),
stdout: Box::new(pipe::WritePipe::new(std::io::sink())),
stderr: Box::new(pipe::WritePipe::new(std::io::sink())),
stdin: Box::new(pipe::ClosedInputStream),
stdout: Box::new(pipe::SinkOutputStream),
stderr: Box::new(pipe::SinkOutputStream),
env: Vec::new(),
args: Vec::new(),
preopens: Vec::new(),
@ -51,17 +50,17 @@ impl WasiCtxBuilder {
}
}
pub fn set_stdin(mut self, stdin: impl InputStream + 'static) -> Self {
pub fn set_stdin(mut self, stdin: impl HostInputStream + 'static) -> Self {
self.stdin = Box::new(stdin);
self
}
pub fn set_stdout(mut self, stdout: impl OutputStream + 'static) -> Self {
pub fn set_stdout(mut self, stdout: impl HostOutputStream + 'static) -> Self {
self.stdout = Box::new(stdout);
self
}
pub fn set_stderr(mut self, stderr: impl OutputStream + 'static) -> Self {
pub fn set_stderr(mut self, stderr: impl HostOutputStream + 'static) -> Self {
self.stderr = Box::new(stderr);
self
}
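// Sketch of how the renamed setters compose (hypothetical embedder code; the builder is
// then finished with its existing `build` method and a `Table`):
//
//     use wasmtime_wasi::preview2::pipe::{MemoryInputPipe, MemoryOutputPipe};
//     let stdout = MemoryOutputPipe::new();
//     let builder = WasiCtxBuilder::new()
//         .set_stdin(MemoryInputPipe::new("hello, guest!".into()))
//         .set_stdout(stdout.clone());
//     // After the guest runs, `stdout.contents()` holds everything it wrote.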

crates/wasi/src/preview2/filesystem.rs

@ -1,5 +1,5 @@
use crate::preview2::{InputStream, OutputStream, Table, TableError};
use std::any::Any;
use crate::preview2::{StreamState, Table, TableError};
use bytes::{Bytes, BytesMut};
use std::sync::Arc;
bitflags::bitflags! {
@ -11,6 +11,8 @@ bitflags::bitflags! {
}
pub(crate) struct File {
/// Wrapped in an Arc because the same underlying file is used for
/// implementing the stream types. Also needed for [`Self::spawn_blocking`].
pub file: Arc<cap_std::fs::File>,
pub perms: FilePerms,
}
@ -22,6 +24,17 @@ impl File {
perms,
}
}
/// Spawn a task on tokio's blocking thread for performing blocking
/// syscalls on the underlying [`cap_std::fs::File`].
pub(crate) async fn spawn_blocking<F, R>(&self, body: F) -> R
where
F: FnOnce(&cap_std::fs::File) -> R + Send + 'static,
R: Send + 'static,
{
let f = self.file.clone();
tokio::task::spawn_blocking(move || body(&f)).await.unwrap()
}
}
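// Usage sketch (hypothetical caller inside an async fn in this crate): run a blocking
// metadata syscall on tokio's blocking pool without stalling the async executor:
//
//     let meta = file.spawn_blocking(|f| f.metadata()).await?;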
pub(crate) trait TableFsExt {
fn push_file(&mut self, file: File) -> Result<u32, TableError>;
@ -72,7 +85,7 @@ bitflags::bitflags! {
}
pub(crate) struct Dir {
pub dir: cap_std::fs::Dir,
pub dir: Arc<cap_std::fs::Dir>,
pub perms: DirPerms,
pub file_perms: FilePerms,
}
@ -80,11 +93,22 @@ pub(crate) struct Dir {
impl Dir {
pub fn new(dir: cap_std::fs::Dir, perms: DirPerms, file_perms: FilePerms) -> Self {
Dir {
dir,
dir: Arc::new(dir),
perms,
file_perms,
}
}
/// Spawn a task on tokio's blocking thread for performing blocking
/// syscalls on the underlying [`cap_std::fs::Dir`].
pub(crate) async fn spawn_blocking<F, R>(&self, body: F) -> R
where
F: FnOnce(&cap_std::fs::Dir) -> R + Send + 'static,
R: Send + 'static,
{
let d = self.dir.clone();
tokio::task::spawn_blocking(move || body(&d)).await.unwrap()
}
}
pub(crate) struct FileInputStream {
@ -95,177 +119,98 @@ impl FileInputStream {
pub fn new(file: Arc<cap_std::fs::File>, position: u64) -> Self {
Self { file, position }
}
}
#[async_trait::async_trait]
impl InputStream for FileInputStream {
fn as_any(&self) -> &dyn Any {
self
}
#[cfg(unix)]
fn pollable_read(&self) -> Option<rustix::fd::BorrowedFd> {
use cap_std::io_lifetimes::AsFd;
Some(self.file.as_fd())
}
#[cfg(windows)]
fn pollable_read(&self) -> Option<io_extras::os::windows::BorrowedHandleOrSocket> {
use io_extras::os::windows::AsHandleOrSocket;
Some(self.file.as_handle_or_socket())
}
async fn read(&mut self, buf: &mut [u8]) -> anyhow::Result<(u64, bool)> {
pub async fn read(&mut self, size: usize) -> anyhow::Result<(Bytes, StreamState)> {
use system_interface::fs::FileIoExt;
let (n, end) = read_result(self.file.read_at(buf, self.position))?;
self.position = self.position.wrapping_add(n);
Ok((n, end))
}
async fn read_vectored<'a>(
&mut self,
bufs: &mut [std::io::IoSliceMut<'a>],
) -> anyhow::Result<(u64, bool)> {
use system_interface::fs::FileIoExt;
let (n, end) = read_result(self.file.read_vectored_at(bufs, self.position))?;
self.position = self.position.wrapping_add(n);
Ok((n, end))
}
fn is_read_vectored(&self) -> bool {
use system_interface::fs::FileIoExt;
self.file.is_read_vectored_at()
}
async fn num_ready_bytes(&self) -> anyhow::Result<u64> {
// FIXME we ought to be able to do better than this
Ok(0)
}
async fn readable(&self) -> anyhow::Result<()> {
// FIXME is this the spot to perform the permission check?
Ok(())
let f = Arc::clone(&self.file);
let p = self.position;
let (r, mut buf) = tokio::task::spawn_blocking(move || {
let mut buf = BytesMut::zeroed(size);
let r = f.read_at(&mut buf, p);
(r, buf)
})
.await
.unwrap();
let (n, state) = read_result(r)?;
buf.truncate(n);
self.position += n as u64;
Ok((buf.freeze(), state))
}
pub async fn skip(&mut self, nelem: usize) -> anyhow::Result<(usize, StreamState)> {
let mut nread = 0;
let mut state = StreamState::Open;
let (bs, read_state) = self.read(nelem).await?;
// TODO: handle the case where `bs.len()` is less than `nelem`
nread += bs.len();
if read_state.is_closed() {
state = read_state;
}
Ok((nread, state))
}
}
pub(crate) fn read_result(r: Result<usize, std::io::Error>) -> Result<(u64, bool), std::io::Error> {
pub(crate) fn read_result(
r: Result<usize, std::io::Error>,
) -> Result<(usize, StreamState), std::io::Error> {
match r {
Ok(0) => Ok((0, true)),
Ok(n) => Ok((n as u64, false)),
Err(e) if e.kind() == std::io::ErrorKind::Interrupted => Ok((0, false)),
Ok(0) => Ok((0, StreamState::Closed)),
Ok(n) => Ok((n, StreamState::Open)),
Err(e) if e.kind() == std::io::ErrorKind::Interrupted => Ok((0, StreamState::Open)),
Err(e) => Err(e),
}
}
pub(crate) struct FileOutputStream {
file: Arc<cap_std::fs::File>,
position: u64,
}
impl FileOutputStream {
pub fn new(file: Arc<cap_std::fs::File>, position: u64) -> Self {
Self { file, position }
pub(crate) fn write_result(
r: Result<usize, std::io::Error>,
) -> Result<(usize, StreamState), std::io::Error> {
match r {
Ok(0) => Ok((0, StreamState::Closed)),
Ok(n) => Ok((n, StreamState::Open)),
Err(e) => Err(e),
}
}
#[async_trait::async_trait]
impl OutputStream for FileOutputStream {
fn as_any(&self) -> &dyn Any {
self
}
/// If this stream is writing from a host file descriptor, return it so
/// that it can be polled with a host poll.
#[cfg(unix)]
fn pollable_write(&self) -> Option<rustix::fd::BorrowedFd> {
use cap_std::io_lifetimes::AsFd;
Some(self.file.as_fd())
}
/// If this stream is writing from a host file descriptor, return it so
/// that it can be polled with a host poll.
#[cfg(windows)]
fn pollable_write(&self) -> Option<io_extras::os::windows::BorrowedHandleOrSocket> {
use io_extras::os::windows::AsHandleOrSocket;
Some(self.file.as_handle_or_socket())
}
/// Write bytes. On success, returns the number of bytes written.
async fn write(&mut self, buf: &[u8]) -> anyhow::Result<u64> {
use system_interface::fs::FileIoExt;
let n = self.file.write_at(buf, self.position)? as i64 as u64;
self.position = self.position.wrapping_add(n);
Ok(n)
}
/// Vectored-I/O form of `write`.
async fn write_vectored<'a>(&mut self, bufs: &[std::io::IoSlice<'a>]) -> anyhow::Result<u64> {
use system_interface::fs::FileIoExt;
let n = self.file.write_vectored_at(bufs, self.position)? as i64 as u64;
self.position = self.position.wrapping_add(n);
Ok(n)
}
/// Test whether vectored I/O writes are known to be optimized in the
/// underlying implementation.
fn is_write_vectored(&self) -> bool {
use system_interface::fs::FileIoExt;
self.file.is_write_vectored_at()
}
/// Test whether this stream is writable.
async fn writable(&self) -> anyhow::Result<()> {
// FIXME perm check?
Ok(())
}
#[derive(Clone, Copy)]
pub(crate) enum FileOutputMode {
Position(u64),
Append,
}
pub(crate) struct FileAppendStream {
pub(crate) struct FileOutputStream {
file: Arc<cap_std::fs::File>,
mode: FileOutputMode,
}
impl FileAppendStream {
pub fn new(file: Arc<cap_std::fs::File>) -> Self {
Self { file }
}
}
#[async_trait::async_trait]
impl OutputStream for FileAppendStream {
fn as_any(&self) -> &dyn Any {
self
}
/// If this stream is writing from a host file descriptor, return it so
/// that it can be polled with a host poll.
#[cfg(unix)]
fn pollable_write(&self) -> Option<rustix::fd::BorrowedFd> {
use cap_std::io_lifetimes::AsFd;
Some(self.file.as_fd())
impl FileOutputStream {
pub fn write_at(file: Arc<cap_std::fs::File>, position: u64) -> Self {
Self {
file,
mode: FileOutputMode::Position(position),
}
}
/// If this stream is writing from a host file descriptor, return it so
/// that it can be polled with a host poll.
#[cfg(windows)]
fn pollable_write(&self) -> Option<io_extras::os::windows::BorrowedHandleOrSocket> {
use io_extras::os::windows::AsHandleOrSocket;
Some(self.file.as_handle_or_socket())
pub fn append(file: Arc<cap_std::fs::File>) -> Self {
Self {
file,
mode: FileOutputMode::Append,
}
}
/// Write bytes. On success, returns the number of bytes written.
async fn write(&mut self, buf: &[u8]) -> anyhow::Result<u64> {
use system_interface::fs::FileIoExt;
Ok(self.file.append(buf)? as i64 as u64)
}
/// Vectored-I/O form of `write`.
async fn write_vectored<'a>(&mut self, bufs: &[std::io::IoSlice<'a>]) -> anyhow::Result<u64> {
use system_interface::fs::FileIoExt;
let n = self.file.append_vectored(bufs)? as i64 as u64;
Ok(n)
}
/// Test whether vectored I/O writes are known to be optimized in the
/// underlying implementation.
fn is_write_vectored(&self) -> bool {
pub async fn write(&mut self, buf: Bytes) -> anyhow::Result<(usize, StreamState)> {
use system_interface::fs::FileIoExt;
self.file.is_write_vectored_at()
}
/// Test whether this stream is writable.
async fn writable(&self) -> anyhow::Result<()> {
// FIXME perm check?
Ok(())
let f = Arc::clone(&self.file);
let m = self.mode;
let r = tokio::task::spawn_blocking(move || match m {
FileOutputMode::Position(p) => f.write_at(buf.as_ref(), p),
FileOutputMode::Append => f.append(buf.as_ref()),
})
.await
.unwrap();
let (n, state) = write_result(r)?;
if let FileOutputMode::Position(ref mut position) = self.mode {
*position += n as u64;
}
Ok((n, state))
}
}

crates/wasi/src/preview2/mod.rs

@ -7,10 +7,6 @@
//!
//! Presently, this crate is experimental. We don't yet recommend you use it
//! in production. Specifically:
//! * it does not yet support a synchronous rust embedding
//! * polling and streams need a redesign. IO that currently should be
//! non-blocking may be blocking. poll probably doesn't work at all.
//! * its internal organization could use some love
//! * the wit files in tree describing preview 2 are not faithful to the
//! standards repos
//!
@ -19,26 +15,170 @@
//! `pub mod legacy` with an off-by-default feature flag, and after 2
//! releases, retire and remove that code from our tree.
pub mod clocks;
mod clocks;
pub mod command;
mod ctx;
mod error;
pub(crate) mod filesystem;
mod filesystem;
pub mod pipe;
mod poll;
#[cfg(feature = "preview1-on-preview2")]
pub mod preview1;
pub mod preview2;
pub mod random;
mod sched;
pub mod stdio;
pub mod stream;
pub mod table;
pub mod wasi;
mod preview2;
mod random;
mod stdio;
mod stream;
mod table;
pub use self::clocks::{HostMonotonicClock, HostWallClock};
pub use self::ctx::{WasiCtx, WasiCtxBuilder, WasiView};
pub use self::error::I32Exit;
pub use self::filesystem::{DirPerms, FilePerms};
pub use self::poll::{ClosureFuture, HostPollable, MakeFuture, PollableFuture, TablePollableExt};
pub use self::random::{thread_rng, Deterministic};
pub use self::stream::{HostInputStream, HostOutputStream, StreamState, TableStreamExt};
pub use self::table::{Table, TableError};
pub use cap_fs_ext::SystemTimeSpec;
pub use cap_rand::RngCore;
pub use clocks::{HostMonotonicClock, HostWallClock};
pub use ctx::{WasiCtx, WasiCtxBuilder, WasiView};
pub use error::I32Exit;
pub use filesystem::{DirPerms, FilePerms};
pub use stream::{InputStream, OutputStream};
pub use table::{Table, TableError};
pub mod bindings {
pub mod sync_io {
pub(crate) mod _internal {
wasmtime::component::bindgen!({
path: "wit",
interfaces: "
import wasi:poll/poll
import wasi:io/streams
import wasi:filesystem/filesystem
",
tracing: true,
trappable_error_type: {
"streams"::"stream-error": Error,
"filesystem"::"error-code": Error,
},
with: {
"wasi:clocks/wall-clock": crate::preview2::bindings::clocks::wall_clock,
}
});
}
pub use self::_internal::wasi::{filesystem, io, poll};
impl From<super::io::streams::StreamError> for io::streams::StreamError {
fn from(_other: super::io::streams::StreamError) -> Self {
// There are no cases for this record.
Self {}
}
}
impl From<super::io::streams::Error> for io::streams::Error {
fn from(other: super::io::streams::Error) -> Self {
match other.downcast() {
Ok(se) => io::streams::Error::from(io::streams::StreamError::from(se)),
Err(e) => io::streams::Error::trap(e),
}
}
}
}
pub(crate) mod _internal_clocks {
wasmtime::component::bindgen!({
path: "wit",
interfaces: "
import wasi:clocks/wall-clock
import wasi:clocks/monotonic-clock
import wasi:clocks/timezone
",
tracing: true,
});
}
pub use self::_internal_clocks::wasi::clocks;
pub(crate) mod _internal_io {
wasmtime::component::bindgen!({
path: "wit",
interfaces: "
import wasi:poll/poll
import wasi:io/streams
import wasi:filesystem/filesystem
",
tracing: true,
async: true,
trappable_error_type: {
"streams"::"stream-error": Error,
"filesystem"::"error-code": Error,
},
with: {
"wasi:clocks/wall-clock": crate::preview2::bindings::clocks::wall_clock,
}
});
}
pub use self::_internal_io::wasi::{filesystem, io, poll};
pub(crate) mod _internal_rest {
wasmtime::component::bindgen!({
path: "wit",
interfaces: "
import wasi:clocks/wall-clock
import wasi:clocks/monotonic-clock
import wasi:clocks/timezone
import wasi:random/random
import wasi:random/insecure
import wasi:random/insecure-seed
import wasi:cli-base/environment
import wasi:cli-base/preopens
import wasi:cli-base/exit
import wasi:cli-base/stdin
import wasi:cli-base/stdout
import wasi:cli-base/stderr
",
tracing: true,
trappable_error_type: {
"filesystem"::"error-code": Error,
"streams"::"stream-error": Error,
},
with: {
"wasi:clocks/wall-clock": crate::preview2::bindings::clocks::wall_clock,
"wasi:poll/poll": crate::preview2::bindings::poll::poll,
"wasi:io/streams": crate::preview2::bindings::io::streams,
"wasi:filesystem/filesystem": crate::preview2::bindings::filesystem::filesystem
}
});
}
pub use self::_internal_rest::wasi::*;
}
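// Lazily-initialized fallback tokio runtime. The helpers below prefer the embedder's own
// runtime when one has already been entered, and otherwise fall back to this one so that
// the synchronous bindings and the background pipe tasks still have an executor to run on.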
static RUNTIME: once_cell::sync::Lazy<tokio::runtime::Runtime> = once_cell::sync::Lazy::new(|| {
tokio::runtime::Builder::new_multi_thread()
.enable_time()
.enable_io()
.build()
.unwrap()
});
pub(crate) fn spawn<F, G>(f: F) -> tokio::task::JoinHandle<G>
where
F: std::future::Future<Output = G> + Send + 'static,
G: Send + 'static,
{
match tokio::runtime::Handle::try_current() {
Ok(_) => tokio::task::spawn(f),
Err(_) => {
let _enter = RUNTIME.enter();
tokio::task::spawn(f)
}
}
}
pub(crate) fn in_tokio<F: std::future::Future>(f: F) -> F::Output {
match tokio::runtime::Handle::try_current() {
Ok(h) => {
let _enter = h.enter();
h.block_on(f)
}
Err(_) => {
let _enter = RUNTIME.enter();
RUNTIME.block_on(f)
}
}
}

crates/wasi/src/preview2/pipe.rs

@ -7,227 +7,821 @@
//! Some convenience constructors are included for common backing types like `Vec<u8>` and `String`,
//! but the virtual pipes can be instantiated with any `Read` or `Write` type.
//!
use crate::preview2::stream::{InputStream, OutputStream};
use crate::preview2::{HostInputStream, HostOutputStream, StreamState};
use anyhow::Error;
use std::any::Any;
use std::convert::TryInto;
use std::io::{self, Read, Write};
use std::sync::{Arc, RwLock};
use system_interface::io::ReadReady;
/// A virtual pipe read end.
///
/// This reads from a source that implements the [`Read`] trait. It
/// also requires the [`ReadReady`] trait, which is implemented for many
/// popular `Read`-implementing types and is easy to implement for new
/// types.
///
/// A variety of `From` impls are provided so that common pipe types are
/// easy to create. For example:
///
/// ```
/// use wasmtime_wasi::preview2::{pipe::ReadPipe, WasiCtxBuilder};
/// let stdin = ReadPipe::from("hello from stdin!");
/// let builder = WasiCtxBuilder::new().set_stdin(stdin);
/// ```
use bytes::Bytes;
#[derive(Debug)]
pub struct ReadPipe<R: Read + ReadReady> {
reader: Arc<RwLock<R>>,
pub struct MemoryInputPipe {
buffer: std::io::Cursor<Bytes>,
}
impl<R: Read + ReadReady> Clone for ReadPipe<R> {
fn clone(&self) -> Self {
impl MemoryInputPipe {
pub fn new(bytes: Bytes) -> Self {
Self {
reader: self.reader.clone(),
buffer: std::io::Cursor::new(bytes),
}
}
}
impl<R: Read + ReadReady> ReadPipe<R> {
/// Create a new pipe from a `Read` type.
///
/// All `Handle` read operations delegate to reading from this underlying reader.
pub fn new(r: R) -> Self {
Self::from_shared(Arc::new(RwLock::new(r)))
}
/// Create a new pipe from a shareable `Read` type.
///
/// All `Handle` read operations delegate to reading from this underlying reader.
pub fn from_shared(reader: Arc<RwLock<R>>) -> Self {
Self { reader }
pub fn is_empty(&self) -> bool {
self.buffer.get_ref().len() as u64 == self.buffer.position()
}
}
/// Try to convert this `ReadPipe<R>` back to the underlying `R` type.
///
/// This will fail with `Err(self)` if multiple references to the underlying `R` exist.
pub fn try_into_inner(mut self) -> Result<R, Self> {
match Arc::try_unwrap(self.reader) {
Ok(rc) => Ok(RwLock::into_inner(rc).unwrap()),
Err(reader) => {
self.reader = reader;
Err(self)
}
#[async_trait::async_trait]
impl HostInputStream for MemoryInputPipe {
fn read(&mut self, size: usize) -> Result<(Bytes, StreamState), Error> {
if self.is_empty() {
return Ok((Bytes::new(), StreamState::Closed));
}
let mut dest = bytes::BytesMut::zeroed(size);
let nbytes = std::io::Read::read(&mut self.buffer, dest.as_mut())?;
dest.truncate(nbytes);
let state = if self.is_empty() {
StreamState::Closed
} else {
StreamState::Open
};
Ok((dest.freeze(), state))
}
fn borrow(&self) -> std::sync::RwLockWriteGuard<R> {
RwLock::write(&self.reader).unwrap()
async fn ready(&mut self) -> Result<(), Error> {
Ok(())
}
}
impl From<Vec<u8>> for ReadPipe<io::Cursor<Vec<u8>>> {
fn from(r: Vec<u8>) -> Self {
Self::new(io::Cursor::new(r))
}
#[derive(Debug, Clone)]
pub struct MemoryOutputPipe {
buffer: std::sync::Arc<std::sync::Mutex<bytes::BytesMut>>,
}
impl From<&[u8]> for ReadPipe<io::Cursor<Vec<u8>>> {
fn from(r: &[u8]) -> Self {
Self::from(r.to_vec())
impl MemoryOutputPipe {
pub fn new() -> Self {
MemoryOutputPipe {
buffer: std::sync::Arc::new(std::sync::Mutex::new(bytes::BytesMut::new())),
}
}
}
impl From<String> for ReadPipe<io::Cursor<String>> {
fn from(r: String) -> Self {
Self::new(io::Cursor::new(r))
pub fn contents(&self) -> bytes::Bytes {
self.buffer.lock().unwrap().clone().freeze()
}
}
impl From<&str> for ReadPipe<io::Cursor<String>> {
fn from(r: &str) -> Self {
Self::from(r.to_string())
pub fn try_into_inner(self) -> Option<bytes::BytesMut> {
std::sync::Arc::into_inner(self.buffer).map(|m| m.into_inner().unwrap())
}
}
#[async_trait::async_trait]
impl<R: Read + ReadReady + Any + Send + Sync> InputStream for ReadPipe<R> {
fn as_any(&self) -> &dyn Any {
self
impl HostOutputStream for MemoryOutputPipe {
fn write(&mut self, bytes: Bytes) -> Result<(usize, StreamState), anyhow::Error> {
let mut buf = self.buffer.lock().unwrap();
buf.extend_from_slice(bytes.as_ref());
Ok((bytes.len(), StreamState::Open))
}
async fn num_ready_bytes(&self) -> Result<u64, Error> {
Ok(self.borrow().num_ready_bytes()?)
async fn ready(&mut self) -> Result<(), Error> {
// This stream is always ready for writing.
Ok(())
}
}
async fn read(&mut self, buf: &mut [u8]) -> Result<(u64, bool), Error> {
match self.borrow().read(buf) {
Ok(0) => Ok((0, true)),
Ok(n) => Ok((n.try_into()?, false)),
Err(e) if e.kind() == io::ErrorKind::Interrupted => Ok((0, false)),
Err(e) => Err(e.into()),
/// Create an in-memory pipe: returns a connected ([`AsyncReadStream`], [`AsyncWriteStream`])
/// pair backed by a tokio duplex buffer of `size` bytes.
pub fn pipe(size: usize) -> (AsyncReadStream, AsyncWriteStream) {
let (a, b) = tokio::io::duplex(size);
let (_read_half, write_half) = tokio::io::split(a);
let (read_half, _write_half) = tokio::io::split(b);
(
AsyncReadStream::new(read_half),
AsyncWriteStream::new(write_half),
)
}
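// Usage sketch (hypothetical, e.g. in a test inside an async fn returning anyhow::Result):
// bytes handed to the write end become readable from the read end once the background
// tasks have run:
//
//     let (mut read_end, mut write_end) = pipe(1024);
//     let (n, _state) = write_end.write(Bytes::from_static(b"hi"))?;
//     read_end.ready().await?;
//     let (bytes, _state) = read_end.read(n)?;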
/// Provides a [`HostInputStream`] impl from a [`tokio::io::AsyncRead`] impl
pub struct AsyncReadStream {
state: StreamState,
buffer: Option<Result<Bytes, std::io::Error>>,
receiver: tokio::sync::mpsc::Receiver<Result<(Bytes, StreamState), std::io::Error>>,
}
impl AsyncReadStream {
/// Create an [`AsyncReadStream`]. In order to use the [`HostInputStream`] impl
/// provided by this struct, the argument must implement [`tokio::io::AsyncRead`].
pub fn new<T: tokio::io::AsyncRead + Send + Sync + Unpin + 'static>(mut reader: T) -> Self {
let (sender, receiver) = tokio::sync::mpsc::channel(1);
crate::preview2::spawn(async move {
loop {
use tokio::io::AsyncReadExt;
let mut buf = bytes::BytesMut::with_capacity(4096);
let sent = match reader.read_buf(&mut buf).await {
Ok(nbytes) if nbytes == 0 => {
sender.send(Ok((Bytes::new(), StreamState::Closed))).await
}
Ok(_) => sender.send(Ok((buf.freeze(), StreamState::Open))).await,
Err(e) => sender.send(Err(e)).await,
};
if sent.is_err() {
// no more receiver - stop trying to read
break;
}
}
});
AsyncReadStream {
state: StreamState::Open,
buffer: None,
receiver,
}
}
}
#[async_trait::async_trait]
impl HostInputStream for AsyncReadStream {
fn read(&mut self, size: usize) -> Result<(Bytes, StreamState), Error> {
use tokio::sync::mpsc::error::TryRecvError;
match self.buffer.take() {
Some(Ok(mut bytes)) => {
// TODO: de-duplicate the buffer management with the case below
let len = bytes.len().min(size);
let rest = bytes.split_off(len);
let return_state = if !rest.is_empty() {
self.buffer = Some(Ok(rest));
StreamState::Open
} else {
self.state
};
return Ok((bytes, return_state));
}
Some(Err(e)) => return Err(e.into()),
None => {}
}
match self.receiver.try_recv() {
Ok(Ok((mut bytes, state))) => {
self.state = state;
let len = bytes.len().min(size);
let rest = bytes.split_off(len);
let return_state = if !rest.is_empty() {
self.buffer = Some(Ok(rest));
StreamState::Open
} else {
self.state
};
async fn skip(&mut self, nelem: u64) -> Result<(u64, bool), Error> {
let num = io::copy(
&mut io::Read::take(&mut *self.borrow(), nelem),
&mut io::sink(),
)?;
Ok((num, num < nelem))
Ok((bytes, return_state))
}
Ok(Err(e)) => Err(e.into()),
Err(TryRecvError::Empty) => Ok((Bytes::new(), self.state)),
Err(TryRecvError::Disconnected) => Err(anyhow::anyhow!(
"AsyncReadStream sender died - should be impossible"
)),
}
}
async fn readable(&self) -> Result<(), Error> {
async fn ready(&mut self) -> Result<(), Error> {
if self.buffer.is_some() || self.state == StreamState::Closed {
return Ok(());
}
match self.receiver.recv().await {
Some(Ok((bytes, state))) => {
if state == StreamState::Closed {
self.state = state;
}
self.buffer = Some(Ok(bytes));
}
Some(Err(e)) => self.buffer = Some(Err(e)),
None => {
return Err(anyhow::anyhow!(
"no more sender for an open AsyncReadStream - should be impossible"
))
}
}
Ok(())
}
}
/// A virtual pipe write end.
///
/// ```no_run
/// use wasmtime_wasi::preview2::{pipe::WritePipe, WasiCtxBuilder, Table};
/// let mut table = Table::new();
/// let stdout = WritePipe::new_in_memory();
/// let mut ctx = WasiCtxBuilder::new().set_stdout(stdout.clone()).build(&mut table).unwrap();
/// // use ctx and table in an instance, then make sure it is dropped:
/// drop(ctx);
/// drop(table);
/// let contents: Vec<u8> = stdout.try_into_inner().expect("sole remaining reference to WritePipe").into_inner();
/// println!("contents of stdout: {:?}", contents);
/// ```
#[derive(Debug)]
pub struct WritePipe<W: Write> {
writer: Arc<RwLock<W>>,
enum WriteState {
Ready,
Pending,
Err(std::io::Error),
}
impl<W: Write> Clone for WritePipe<W> {
fn clone(&self) -> Self {
Self {
writer: self.writer.clone(),
}
}
/// Provides a [`HostOutputStream`] impl from a [`tokio::io::AsyncWrite`] impl
pub struct AsyncWriteStream {
state: Option<WriteState>,
sender: tokio::sync::mpsc::Sender<Bytes>,
result_receiver: tokio::sync::mpsc::Receiver<Result<StreamState, std::io::Error>>,
}
impl<W: Write> WritePipe<W> {
/// Create a new pipe from a `Write` type.
///
/// All `Handle` write operations delegate to writing to this underlying writer.
pub fn new(w: W) -> Self {
Self::from_shared(Arc::new(RwLock::new(w)))
impl AsyncWriteStream {
/// Create an [`AsyncWriteStream`]. In order to use the [`HostOutputStream`] impl
/// provided by this struct, the argument must implement [`tokio::io::AsyncWrite`].
pub fn new<T: tokio::io::AsyncWrite + Send + Sync + Unpin + 'static>(mut writer: T) -> Self {
let (sender, mut receiver) = tokio::sync::mpsc::channel::<Bytes>(1);
let (result_sender, result_receiver) = tokio::sync::mpsc::channel(1);
crate::preview2::spawn(async move {
'outer: loop {
use tokio::io::AsyncWriteExt;
match receiver.recv().await {
Some(mut bytes) => {
while !bytes.is_empty() {
match writer.write_buf(&mut bytes).await {
Ok(0) => {
let _ = result_sender.send(Ok(StreamState::Closed)).await;
break 'outer;
}
Ok(_) => {
if bytes.is_empty() {
match result_sender.send(Ok(StreamState::Open)).await {
Ok(_) => break,
Err(_) => break 'outer,
}
}
continue;
}
Err(e) => {
let _ = result_sender.send(Err(e)).await;
break 'outer;
}
}
}
}
// The other side of the channel hung up, the task can exit now
None => break 'outer,
}
}
});
AsyncWriteStream {
state: Some(WriteState::Ready),
sender,
result_receiver,
}
}
/// Create a new pipe from a shareable `Write` type.
///
/// All `Handle` write operations delegate to writing to this underlying writer.
pub fn from_shared(writer: Arc<RwLock<W>>) -> Self {
Self { writer }
fn send(&mut self, bytes: Bytes) -> anyhow::Result<(usize, StreamState)> {
use tokio::sync::mpsc::error::TrySendError;
debug_assert!(matches!(self.state, Some(WriteState::Ready)));
let len = bytes.len();
match self.sender.try_send(bytes) {
Ok(_) => {
self.state = Some(WriteState::Pending);
Ok((len, StreamState::Open))
}
Err(TrySendError::Full(_)) => {
unreachable!("task shouldnt be full when writestate is ready")
}
Err(TrySendError::Closed(_)) => unreachable!("task shouldn't die while not closed"),
}
}
}
#[async_trait::async_trait]
impl HostOutputStream for AsyncWriteStream {
fn write(&mut self, bytes: Bytes) -> Result<(usize, StreamState), anyhow::Error> {
use tokio::sync::mpsc::error::TryRecvError;
match self.state {
Some(WriteState::Ready) => self.send(bytes),
Some(WriteState::Pending) => match self.result_receiver.try_recv() {
Ok(Ok(StreamState::Open)) => {
self.state = Some(WriteState::Ready);
self.send(bytes)
}
/// Try to convert this `WritePipe<W>` back to the underlying `W` type.
///
/// This will fail with `Err(self)` if multiple references to the underlying `W` exist.
pub fn try_into_inner(mut self) -> Result<W, Self> {
match Arc::try_unwrap(self.writer) {
Ok(rc) => Ok(RwLock::into_inner(rc).unwrap()),
Err(writer) => {
self.writer = writer;
Err(self)
Ok(Ok(StreamState::Closed)) => {
self.state = None;
Ok((0, StreamState::Closed))
}
Ok(Err(e)) => {
self.state = None;
Err(e.into())
}
Err(TryRecvError::Empty) => {
self.state = Some(WriteState::Pending);
Ok((0, StreamState::Open))
}
Err(TryRecvError::Disconnected) => {
unreachable!("task shouldn't die while pending")
}
},
Some(WriteState::Err(_)) => {
// Move the error payload out of self.state, because errors are not Copy,
// and set self.state to None, because the stream is now closed.
if let Some(WriteState::Err(e)) = self.state.take() {
Err(e.into())
} else {
unreachable!("self.state shown to be Some(Err(e)) in match clause")
}
}
None => Ok((0, StreamState::Closed)),
}
}
fn borrow(&self) -> std::sync::RwLockWriteGuard<W> {
RwLock::write(&self.writer).unwrap()
async fn ready(&mut self) -> Result<(), Error> {
match &self.state {
Some(WriteState::Pending) => match self.result_receiver.recv().await {
Some(Ok(StreamState::Open)) => {
self.state = Some(WriteState::Ready);
}
Some(Ok(StreamState::Closed)) => {
self.state = None;
}
Some(Err(e)) => {
self.state = Some(WriteState::Err(e));
}
None => unreachable!("task shouldn't die while pending"),
},
Some(WriteState::Ready | WriteState::Err(_)) | None => {}
}
Ok(())
}
}
impl WritePipe<io::Cursor<Vec<u8>>> {
/// Create a new writable virtual pipe backed by a `Vec<u8>` buffer.
pub fn new_in_memory() -> Self {
Self::new(io::Cursor::new(vec![]))
/// An output stream that consumes all input written to it, and is always ready.
pub struct SinkOutputStream;
#[async_trait::async_trait]
impl HostOutputStream for SinkOutputStream {
fn write(&mut self, buf: Bytes) -> Result<(usize, StreamState), Error> {
Ok((buf.len(), StreamState::Open))
}
async fn ready(&mut self) -> Result<(), Error> {
Ok(())
}
}
/// A stream that is ready immediately, but will always report that it's closed.
pub struct ClosedInputStream;
#[async_trait::async_trait]
impl<W: Write + Any + Send + Sync> OutputStream for WritePipe<W> {
fn as_any(&self) -> &dyn Any {
self
impl HostInputStream for ClosedInputStream {
fn read(&mut self, _size: usize) -> Result<(Bytes, StreamState), Error> {
Ok((Bytes::new(), StreamState::Closed))
}
async fn write(&mut self, buf: &[u8]) -> Result<u64, Error> {
let n = self.borrow().write(buf)?;
Ok(n.try_into()?)
async fn ready(&mut self) -> Result<(), Error> {
Ok(())
}
}
// TODO: Optimize for pipes.
/*
async fn splice(
&mut self,
src: &mut dyn InputStream,
nelem: u64,
) -> Result<u64, Error> {
todo!()
}
*/
/// An output stream that is always closed.
pub struct ClosedOutputStream;
async fn write_zeroes(&mut self, nelem: u64) -> Result<u64, Error> {
let num = io::copy(
&mut io::Read::take(io::repeat(0), nelem),
&mut *self.borrow(),
)?;
Ok(num)
#[async_trait::async_trait]
impl HostOutputStream for ClosedOutputStream {
fn write(&mut self, _: Bytes) -> Result<(usize, StreamState), Error> {
Ok((0, StreamState::Closed))
}
async fn writable(&self) -> Result<(), Error> {
async fn ready(&mut self) -> Result<(), Error> {
Ok(())
}
}
#[cfg(test)]
mod test {
use super::*;
use tokio::io::{AsyncRead, AsyncReadExt, AsyncWrite, AsyncWriteExt};
// 10ms was enough for every CI platform except linux riscv64:
const REASONABLE_DURATION: std::time::Duration = std::time::Duration::from_millis(100);
pub fn simplex(size: usize) -> (impl AsyncRead, impl AsyncWrite) {
let (a, b) = tokio::io::duplex(size);
let (_read_half, write_half) = tokio::io::split(a);
let (read_half, _write_half) = tokio::io::split(b);
(read_half, write_half)
}
#[tokio::test(flavor = "multi_thread")]
async fn empty_read_stream() {
let mut reader = AsyncReadStream::new(tokio::io::empty());
let (bs, state) = reader.read(10).unwrap();
assert!(bs.is_empty());
// In a multi-threaded context, the value of state is not deterministic -- the spawned
// reader task may run on a different thread.
match state {
// The reader task ran before we tried to read, and noticed that the input was empty.
StreamState::Closed => {}
// The reader task hasn't run yet. Call `ready` to await and fill the buffer.
StreamState::Open => {
tokio::time::timeout(REASONABLE_DURATION, reader.ready())
.await
.expect("the reader should be ready instantly")
.expect("ready is ok");
let (bs, state) = reader.read(0).unwrap();
assert!(bs.is_empty());
assert_eq!(state, StreamState::Closed);
}
}
}
#[tokio::test(flavor = "multi_thread")]
async fn infinite_read_stream() {
let mut reader = AsyncReadStream::new(tokio::io::repeat(0));
let (bs, state) = reader.read(10).unwrap();
assert_eq!(state, StreamState::Open);
if bs.is_empty() {
// Reader task hasn't run yet. Call `ready` to await and fill the buffer.
tokio::time::timeout(REASONABLE_DURATION, reader.ready())
.await
.expect("the reader should be ready instantly")
.expect("ready is ok");
// Now a read should succeed
let (bs, state) = reader.read(10).unwrap();
assert_eq!(bs.len(), 10);
assert_eq!(state, StreamState::Open);
} else {
assert_eq!(bs.len(), 10);
}
// Subsequent reads should succeed
let (bs, state) = reader.read(10).unwrap();
assert_eq!(state, StreamState::Open);
assert_eq!(bs.len(), 10);
// Even 0-length reads should succeed and show it's still open
let (bs, state) = reader.read(0).unwrap();
assert_eq!(state, StreamState::Open);
assert_eq!(bs.len(), 0);
}
async fn finite_async_reader(contents: &[u8]) -> impl AsyncRead + Send + Sync + 'static {
let (r, mut w) = simplex(contents.len());
w.write_all(contents).await.unwrap();
r
}
#[tokio::test(flavor = "multi_thread")]
async fn finite_read_stream() {
let mut reader = AsyncReadStream::new(finite_async_reader(&[1; 123]).await);
let (bs, state) = reader.read(123).unwrap();
assert_eq!(state, StreamState::Open);
if bs.is_empty() {
// Reader task hasn't run yet. Call `ready` to await and fill the buffer.
tokio::time::timeout(REASONABLE_DURATION, reader.ready())
.await
.expect("the reader should be ready instantly")
.expect("ready is ok");
// Now a read should succeed
let (bs, state) = reader.read(123).unwrap();
assert_eq!(bs.len(), 123);
assert_eq!(state, StreamState::Open);
} else {
assert_eq!(bs.len(), 123);
}
// The AsyncRead should be empty now, but there is a race where the reader task hasn't
// yet sent that to the AsyncReadStream.
let (bs, state) = reader.read(0).unwrap();
assert!(bs.is_empty());
match state {
StreamState::Closed => {} // Correct!
StreamState::Open => {
// Need to await to give this side time to catch up
tokio::time::timeout(REASONABLE_DURATION, reader.ready())
.await
.expect("the reader should be ready instantly")
.expect("ready is ok");
// Now a read should show closed
let (bs, state) = reader.read(0).unwrap();
assert_eq!(bs.len(), 0);
assert_eq!(state, StreamState::Closed);
}
}
}
#[tokio::test(flavor = "multi_thread")]
// Test that you can write items into the stream, and they get read out in the order they were
// written, with the proper indications of readiness for reading:
async fn multiple_chunks_read_stream() {
let (r, mut w) = simplex(1024);
let mut reader = AsyncReadStream::new(r);
w.write_all(&[123]).await.unwrap();
let (bs, state) = reader.read(1).unwrap();
assert_eq!(state, StreamState::Open);
if bs.is_empty() {
// Reader task hasn't run yet. Call `ready` to await and fill the buffer.
tokio::time::timeout(REASONABLE_DURATION, reader.ready())
.await
.expect("the reader should be ready instantly")
.expect("ready is ok");
// Now a read should succeed
let (bs, state) = reader.read(1).unwrap();
assert_eq!(*bs, [123u8]);
assert_eq!(state, StreamState::Open);
} else {
assert_eq!(*bs, [123u8]);
}
// The stream should be empty and open now:
let (bs, state) = reader.read(1).unwrap();
assert!(bs.is_empty());
assert_eq!(state, StreamState::Open);
// We can wait on readiness and it will time out:
tokio::time::timeout(REASONABLE_DURATION, reader.ready())
.await
.err()
.expect("the reader should time out");
// Still open and empty:
let (bs, state) = reader.read(1).unwrap();
assert!(bs.is_empty());
assert_eq!(state, StreamState::Open);
// Put something else in the stream:
w.write_all(&[45]).await.unwrap();
// Wait for readiness (yes, we could possibly win the race and read it out sooner; leaving
// that out of the test for simplicity)
tokio::time::timeout(REASONABLE_DURATION, reader.ready())
.await
.expect("the reader should be ready instantly")
.expect("the ready is ok");
// read the something else back out:
let (bs, state) = reader.read(1).unwrap();
assert_eq!(*bs, [45u8]);
assert_eq!(state, StreamState::Open);
// nothing else in there:
let (bs, state) = reader.read(1).unwrap();
assert!(bs.is_empty());
assert_eq!(state, StreamState::Open);
// We can wait on readiness and it will time out:
tokio::time::timeout(REASONABLE_DURATION, reader.ready())
.await
.err()
.expect("the reader should time out");
// nothing else in there:
let (bs, state) = reader.read(1).unwrap();
assert!(bs.is_empty());
assert_eq!(state, StreamState::Open);
// Now close the pipe:
drop(w);
// Wait for readiness (yes, we could possibly win the race and read it out sooner; leaving
// that out of the test for simplicity)
tokio::time::timeout(REASONABLE_DURATION, reader.ready())
.await
.expect("the reader should be ready instantly")
.expect("the ready is ok");
// empty and now closed:
let (bs, state) = reader.read(1).unwrap();
assert!(bs.is_empty());
assert_eq!(state, StreamState::Closed);
}
#[tokio::test(flavor = "multi_thread")]
// At the moment we are restricting AsyncReadStream from buffering more than 4k. This isn't a
// suitable design for all applications, and we will probably make a knob or change the
// behavior at some point, but this test shows the behavior as it is implemented:
async fn backpressure_read_stream() {
let (r, mut w) = simplex(16 * 1024); // Make sure this buffer isn't a bottleneck
let mut reader = AsyncReadStream::new(r);
let writer_task = tokio::task::spawn(async move {
// Write twice as much as we can buffer up in an AsyncReadStream:
w.write_all(&[123; 8192]).await.unwrap();
w
});
tokio::time::timeout(REASONABLE_DURATION, reader.ready())
.await
.expect("the reader should be ready instantly")
.expect("ready is ok");
// Now we expect the reader task has sent 4k from the stream to the reader.
// Try to read out one bigger than the buffer available:
let (bs, state) = reader.read(4097).unwrap();
assert_eq!(bs.len(), 4096);
assert_eq!(state, StreamState::Open);
// Allow the crank to turn more:
tokio::time::timeout(REASONABLE_DURATION, reader.ready())
.await
.expect("the reader should be ready instantly")
.expect("ready is ok");
// Again we expect the reader task has sent 4k from the stream to the reader.
// Try to read out one bigger than the buffer available:
let (bs, state) = reader.read(4097).unwrap();
assert_eq!(bs.len(), 4096);
assert_eq!(state, StreamState::Open);
// The writer task is now finished - join with it:
let w = tokio::time::timeout(REASONABLE_DURATION, writer_task)
.await
.expect("the join should be ready instantly");
// And close the pipe:
drop(w);
// Allow the crank to turn more:
tokio::time::timeout(REASONABLE_DURATION, reader.ready())
.await
.expect("the reader should be ready instantly")
.expect("ready is ok");
// Now we expect the reader to be empty, and the stream closed:
let (bs, state) = reader.read(4097).unwrap();
assert_eq!(bs.len(), 0);
assert_eq!(state, StreamState::Closed);
}
#[tokio::test(flavor = "multi_thread")]
async fn sink_write_stream() {
let mut writer = AsyncWriteStream::new(tokio::io::sink());
let chunk = Bytes::from_static(&[0; 1024]);
// I can write whatever:
let (len, state) = writer.write(chunk.clone()).unwrap();
assert_eq!(len, chunk.len());
assert_eq!(state, StreamState::Open);
// But I expect this to block additional writes:
let (len, state) = writer.write(chunk.clone()).unwrap();
assert_eq!(len, 0);
assert_eq!(state, StreamState::Open);
tokio::time::timeout(REASONABLE_DURATION, writer.ready())
.await
.expect("the writer should be ready instantly")
.expect("ready is ok");
// Now additional writes will work:
let (len, state) = writer.write(chunk.clone()).unwrap();
assert_eq!(len, chunk.len());
assert_eq!(state, StreamState::Open);
}
#[tokio::test(flavor = "multi_thread")]
async fn closed_write_stream() {
let (reader, writer) = simplex(1024);
drop(reader);
let mut writer = AsyncWriteStream::new(writer);
// Without checking write readiness, perform a nonblocking write: this should succeed
// because we will buffer up the write.
let chunk = Bytes::from_static(&[0; 1]);
let (len, state) = writer.write(chunk.clone()).unwrap();
assert_eq!(len, chunk.len());
assert_eq!(state, StreamState::Open);
// Check write readiness:
tokio::time::timeout(REASONABLE_DURATION, writer.ready())
.await
.expect("the writer should be ready instantly")
.expect("ready is ok");
// Dropping the simplex reader (above) causes the simplex writer to return BrokenPipe on
// its write. Now that the buffering crank has turned, our next write will give BrokenPipe.
let err = writer.write(chunk.clone()).err().unwrap();
assert_eq!(
err.downcast_ref::<std::io::Error>().unwrap().kind(),
std::io::ErrorKind::BrokenPipe
);
// Now that we got the error out of the writer, it should be closed - subsequent writes
// will not work
let (len, state) = writer.write(chunk.clone()).unwrap();
assert_eq!(len, 0);
assert_eq!(state, StreamState::Closed);
}
#[tokio::test(flavor = "multi_thread")]
async fn multiple_chunks_write_stream() {
use std::ops::Deref;
let (mut reader, writer) = simplex(1024);
let mut writer = AsyncWriteStream::new(writer);
// Write a chunk:
let chunk = Bytes::from_static(&[123; 1]);
let (len, state) = writer.write(chunk.clone()).unwrap();
assert_eq!(len, chunk.len());
assert_eq!(state, StreamState::Open);
// After the write, still ready for more writing:
tokio::time::timeout(REASONABLE_DURATION, writer.ready())
.await
.expect("the writer should be ready instantly")
.expect("ready is ok");
let mut read_buf = vec![0; chunk.len()];
let read_len = reader.read_exact(&mut read_buf).await.unwrap();
assert_eq!(read_len, chunk.len());
assert_eq!(read_buf.as_slice(), chunk.deref());
// Write a second, different chunk:
let chunk2 = Bytes::from_static(&[45; 1]);
let (len, state) = writer.write(chunk2.clone()).unwrap();
assert_eq!(len, chunk2.len());
assert_eq!(state, StreamState::Open);
// After the write, still ready for more writing:
tokio::time::timeout(REASONABLE_DURATION, writer.ready())
.await
.expect("the writer should be ready instantly")
.expect("ready is ok");
let mut read2_buf = vec![0; chunk2.len()];
let read2_len = reader.read_exact(&mut read2_buf).await.unwrap();
assert_eq!(read2_len, chunk2.len());
assert_eq!(read2_buf.as_slice(), chunk2.deref());
}
#[tokio::test(flavor = "multi_thread")]
async fn backpressure_write_stream() {
// Stream can buffer up to 1k, plus one write chunk, before not
// accepting more input:
let (mut reader, writer) = simplex(1024);
let mut writer = AsyncWriteStream::new(writer);
// Write enough to fill the simplex buffer:
let chunk = Bytes::from_static(&[0; 1024]);
let (len, state) = writer.write(chunk.clone()).unwrap();
assert_eq!(len, chunk.len());
assert_eq!(state, StreamState::Open);
// turn the crank and it should be ready for writing again:
tokio::time::timeout(REASONABLE_DURATION, writer.ready())
.await
.expect("the writer should be ready instantly")
.expect("ready is ok");
// Now fill the buffer between here and the writer task:
let (len, state) = writer.write(chunk.clone()).unwrap();
assert_eq!(len, chunk.len());
assert_eq!(state, StreamState::Open);
// Try shoving even more down there, and it shouldn't accept more input:
let (len, state) = writer.write(chunk.clone()).unwrap();
assert_eq!(len, 0);
assert_eq!(state, StreamState::Open);
// turn the crank and it should Not become ready for writing until we read something out.
tokio::time::timeout(REASONABLE_DURATION, writer.ready())
.await
.err()
.expect("the writer should be not become ready");
// Still not ready from the .write interface either:
let (len, state) = writer.write(chunk.clone()).unwrap();
assert_eq!(len, 0);
assert_eq!(state, StreamState::Open);
// There is 2k in the buffer. I should be able to read all of it out:
let mut buf = [0; 2048];
reader.read_exact(&mut buf).await.unwrap();
// and no more:
tokio::time::timeout(REASONABLE_DURATION, reader.read(&mut buf))
.await
.err()
.expect("nothing more buffered in the system");
// Now the backpressure should be cleared, and an additional write should be accepted:
// the writer becomes ready for writing again immediately:
tokio::time::timeout(REASONABLE_DURATION, writer.ready())
.await
.expect("the writer should be ready instantly")
.expect("ready is ok");
// and the write succeeds:
let (len, state) = writer.write(chunk.clone()).unwrap();
assert_eq!(len, chunk.len());
assert_eq!(state, StreamState::Open);
}
}

crates/wasi/src/preview2/poll.rs

@ -0,0 +1,157 @@
use crate::preview2::{
bindings::poll::poll::{self, Pollable},
Table, TableError, WasiView,
};
use anyhow::Result;
use std::any::Any;
use std::collections::{hash_map::Entry, HashMap};
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};
pub type PollableFuture<'a> = Pin<Box<dyn Future<Output = Result<()>> + Send + 'a>>;
pub type MakeFuture = for<'a> fn(&'a mut dyn Any) -> PollableFuture<'a>;
pub type ClosureFuture = Box<dyn Fn() -> PollableFuture<'static> + Send + Sync + 'static>;
/// A host representation of the `wasi:poll/poll.pollable` resource.
///
/// A pollable is not the same thing as a Rust Future: the same pollable may be used to
/// repeatedly check for readiness of a given condition, e.g. if a stream is readable
/// or writable. So, rather than containing a Future, which can only become Ready once, a
/// HostPollable contains a way to create a Future in each call to poll_oneoff.
pub enum HostPollable {
/// Create a Future by calling a fn on another resource in the table. This
/// indirection means the created Future can use a mut borrow of another
/// resource in the Table (e.g. a stream)
///
/// FIXME: we currently aren't tracking the lifetime of the resource along
/// with this entry, which means that this index could be occupied by something
/// unrelated by the time we poll it again. This is a crash vector, because
/// the [`MakeFuture`] would panic if the type of the index has changed, and
/// would yield undefined behavior otherwise. We'll likely fix this by making
/// the parent resources of a pollable clean up their pollable entries when
/// they are destroyed (e.g. the HostInputStream would track the pollables it
/// has created).
///
/// WARNING: do not deploy this library to production until the above issue has
/// been fixed.
TableEntry { index: u32, make_future: MakeFuture },
/// Create a future by calling an owned, static closure. This is used for
/// pollables which do not share state with another resource in the Table
/// (e.g. a timer)
Closure(ClosureFuture),
}
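// Sketch (hypothetical, not part of this change): a pollable that shares no table state,
// such as a timer, fits the `Closure` variant; a fresh future is built on every
// `poll-oneoff` call. Given a mutable `table`:
//
//     let deadline = tokio::time::Instant::now() + std::time::Duration::from_millis(10);
//     let pollable = HostPollable::Closure(Box::new(move || {
//         let fut: PollableFuture<'static> = Box::pin(async move {
//             tokio::time::sleep_until(deadline).await;
//             Ok(())
//         });
//         fut
//     }));
//     let index = table.push_host_pollable(pollable)?;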
pub trait TablePollableExt {
fn push_host_pollable(&mut self, p: HostPollable) -> Result<u32, TableError>;
fn get_host_pollable_mut(&mut self, fd: u32) -> Result<&mut HostPollable, TableError>;
fn delete_host_pollable(&mut self, fd: u32) -> Result<HostPollable, TableError>;
}
impl TablePollableExt for Table {
fn push_host_pollable(&mut self, p: HostPollable) -> Result<u32, TableError> {
self.push(Box::new(p))
}
fn get_host_pollable_mut(&mut self, fd: u32) -> Result<&mut HostPollable, TableError> {
self.get_mut::<HostPollable>(fd)
}
fn delete_host_pollable(&mut self, fd: u32) -> Result<HostPollable, TableError> {
self.delete::<HostPollable>(fd)
}
}
#[async_trait::async_trait]
impl<T: WasiView> poll::Host for T {
async fn drop_pollable(&mut self, pollable: Pollable) -> Result<()> {
self.table_mut().delete_host_pollable(pollable)?;
Ok(())
}
async fn poll_oneoff(&mut self, pollables: Vec<Pollable>) -> Result<Vec<bool>> {
type ReadylistIndex = usize;
let table = self.table_mut();
let mut table_futures: HashMap<u32, (MakeFuture, Vec<ReadylistIndex>)> = HashMap::new();
let mut closure_futures: Vec<(PollableFuture<'_>, Vec<ReadylistIndex>)> = Vec::new();
for (ix, p) in pollables.iter().enumerate() {
match table.get_host_pollable_mut(*p)? {
HostPollable::Closure(f) => closure_futures.push((f(), vec![ix])),
HostPollable::TableEntry { index, make_future } => {
match table_futures.entry(*index) {
Entry::Vacant(v) => {
v.insert((*make_future, vec![ix]));
}
Entry::Occupied(mut o) => {
let (_, v) = o.get_mut();
v.push(ix);
}
}
}
}
}
for (entry, (make_future, readylist_indices)) in table.iter_entries(table_futures) {
let entry = entry?;
closure_futures.push((make_future(entry), readylist_indices));
}
struct PollOneoff<'a> {
elems: Vec<(PollableFuture<'a>, Vec<ReadylistIndex>)>,
}
impl<'a> Future for PollOneoff<'a> {
type Output = Result<Vec<bool>>;
fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
let mut any_ready = false;
let mut results = vec![false; self.elems.len()];
for (fut, readylist_indices) in self.elems.iter_mut() {
match fut.as_mut().poll(cx) {
Poll::Ready(Ok(())) => {
for r in readylist_indices {
results[*r] = true;
}
any_ready = true;
}
Poll::Ready(Err(e)) => {
return Poll::Ready(Err(
e.context(format!("poll_oneoff {readylist_indices:?}"))
));
}
Poll::Pending => {}
}
}
if any_ready {
Poll::Ready(Ok(results))
} else {
Poll::Pending
}
}
}
Ok(PollOneoff {
elems: closure_futures,
}
.await?)
}
}
pub mod sync {
use crate::preview2::{
bindings::poll::poll::Host as AsyncHost,
bindings::sync_io::poll::poll::{self, Pollable},
in_tokio, WasiView,
};
use anyhow::Result;
impl<T: WasiView> poll::Host for T {
fn drop_pollable(&mut self, pollable: Pollable) -> Result<()> {
in_tokio(async { AsyncHost::drop_pollable(self, pollable).await })
}
fn poll_oneoff(&mut self, pollables: Vec<Pollable>) -> Result<Vec<bool>> {
in_tokio(async { AsyncHost::poll_oneoff(self, pollables).await })
}
}
}

crates/wasi/src/preview2/preview1/mod.rs

@ -1,11 +1,10 @@
use crate::preview2::bindings::cli_base::{preopens, stderr, stdin, stdout};
use crate::preview2::bindings::clocks::{monotonic_clock, wall_clock};
use crate::preview2::bindings::filesystem::filesystem;
use crate::preview2::bindings::io::streams;
use crate::preview2::filesystem::TableFsExt;
use crate::preview2::preview2::filesystem::TableReaddirExt;
use crate::preview2::wasi::cli_base::{preopens, stderr, stdin, stdout};
use crate::preview2::wasi::clocks::monotonic_clock;
use crate::preview2::wasi::clocks::wall_clock;
use crate::preview2::wasi::filesystem::filesystem;
use crate::preview2::wasi::io::streams;
use crate::preview2::{wasi, TableError, WasiView};
use crate::preview2::{bindings, TableError, WasiView};
use anyhow::{anyhow, bail, Context};
use std::borrow::Borrow;
use std::cell::Cell;
@ -71,27 +70,23 @@ impl DerefMut for Descriptors {
impl Descriptors {
/// Initializes [Self] using `preopens`
async fn new(
fn new(
preopens: &mut (impl preopens::Host + stdin::Host + stdout::Host + stderr::Host + ?Sized),
) -> Result<Self, types::Error> {
let stdin = preopens
.get_stdin()
.await
.context("failed to call `get-stdin`")
.map_err(types::Error::trap)?;
let stdout = preopens
.get_stdout()
.await
.context("failed to call `get-stdout`")
.map_err(types::Error::trap)?;
let stderr = preopens
.get_stderr()
.await
.context("failed to call `get-stderr`")
.map_err(types::Error::trap)?;
let directories = preopens
.get_directories()
.await
.context("failed to call `get-directories`")
.map_err(types::Error::trap)?;
@ -272,17 +267,16 @@ impl<T: WasiPreview1View + ?Sized> Transaction<'_, T> {
}
}
#[wiggle::async_trait]
trait WasiPreview1ViewExt:
WasiPreview1View + preopens::Host + stdin::Host + stdout::Host + stderr::Host
{
/// Lazily initializes [`WasiPreview1Adapter`] returned by [`WasiPreview1View::adapter_mut`]
/// and returns [`Transaction`] on success
async fn transact(&mut self) -> Result<Transaction<'_, Self>, types::Error> {
fn transact(&mut self) -> Result<Transaction<'_, Self>, types::Error> {
let descriptors = if let Some(descriptors) = self.adapter_mut().descriptors.take() {
descriptors
} else {
Descriptors::new(self).await?
Descriptors::new(self)?
}
.into();
Ok(Transaction {
@ -293,8 +287,8 @@ trait WasiPreview1ViewExt:
/// Lazily initializes [`WasiPreview1Adapter`] returned by [`WasiPreview1View::adapter_mut`]
/// and returns [`filesystem::Descriptor`] corresponding to `fd`
async fn get_fd(&mut self, fd: types::Fd) -> Result<filesystem::Descriptor, types::Error> {
let mut st = self.transact().await?;
fn get_fd(&mut self, fd: types::Fd) -> Result<filesystem::Descriptor, types::Error> {
let mut st = self.transact()?;
let fd = st.get_fd(fd)?;
Ok(fd)
}
@ -302,8 +296,8 @@ trait WasiPreview1ViewExt:
/// Lazily initializes [`WasiPreview1Adapter`] returned by [`WasiPreview1View::adapter_mut`]
/// and returns [`filesystem::Descriptor`] corresponding to `fd`
/// if it describes a [`Descriptor::File`] of [`crate::preview2::filesystem::File`] type
async fn get_file_fd(&mut self, fd: types::Fd) -> Result<filesystem::Descriptor, types::Error> {
let mut st = self.transact().await?;
fn get_file_fd(&mut self, fd: types::Fd) -> Result<filesystem::Descriptor, types::Error> {
let mut st = self.transact()?;
let fd = st.get_file_fd(fd)?;
Ok(fd)
}
@ -312,8 +306,8 @@ trait WasiPreview1ViewExt:
/// and returns [`filesystem::Descriptor`] corresponding to `fd`
/// if it describes a [`Descriptor::File`] or [`Descriptor::PreopenDirectory`]
/// of [`crate::preview2::filesystem::Dir`] type
async fn get_dir_fd(&mut self, fd: types::Fd) -> Result<filesystem::Descriptor, types::Error> {
let mut st = self.transact().await?;
fn get_dir_fd(&mut self, fd: types::Fd) -> Result<filesystem::Descriptor, types::Error> {
let mut st = self.transact()?;
let fd = st.get_dir_fd(fd)?;
Ok(fd)
}
@ -323,15 +317,15 @@ impl<T: WasiPreview1View + preopens::Host> WasiPreview1ViewExt for T {}
pub fn add_to_linker<
T: WasiPreview1View
+ wasi::cli_base::environment::Host
+ wasi::cli_base::exit::Host
+ wasi::cli_base::preopens::Host
+ wasi::filesystem::filesystem::Host
+ wasi::poll::poll::Host
+ wasi::random::random::Host
+ wasi::io::streams::Host
+ wasi::clocks::monotonic_clock::Host
+ wasi::clocks::wall_clock::Host,
+ bindings::cli_base::environment::Host
+ bindings::cli_base::exit::Host
+ bindings::cli_base::preopens::Host
+ bindings::filesystem::filesystem::Host
+ bindings::sync_io::poll::poll::Host
+ bindings::random::random::Host
+ bindings::io::streams::Host
+ bindings::clocks::monotonic_clock::Host
+ bindings::clocks::wall_clock::Host,
>(
linker: &mut wasmtime::Linker<T>,
) -> anyhow::Result<()> {
@ -344,8 +338,16 @@ pub fn add_to_linker<
// to this module.
wiggle::from_witx!({
witx: ["$CARGO_MANIFEST_DIR/witx/wasi_snapshot_preview1.witx"],
async: {
wasi_snapshot_preview1::{
fd_advise, fd_close, fd_datasync, fd_fdstat_get, fd_filestat_get, fd_filestat_set_size,
fd_filestat_set_times, fd_read, fd_pread, fd_seek, fd_sync, fd_readdir, fd_write,
fd_pwrite, poll_oneoff, path_create_directory, path_filestat_get,
path_filestat_set_times, path_link, path_open, path_readlink, path_remove_directory,
path_rename, path_symlink, path_unlink_file
}
},
errors: { errno => trappable Error },
async: *,
});
impl wiggle::GuestErrorType for types::Errno {
@ -617,31 +619,30 @@ fn first_non_empty_iovec<'a>(
.transpose()
}
#[async_trait::async_trait]
// Implement the WasiSnapshotPreview1 trait using only the traits that are
// required for T, i.e., in terms of the preview 2 wit interface, and state
// stored in the WasiPreview1Adapter struct.
#[wiggle::async_trait]
impl<
T: WasiPreview1View
+ wasi::cli_base::environment::Host
+ wasi::cli_base::exit::Host
+ wasi::cli_base::preopens::Host
+ wasi::filesystem::filesystem::Host
+ wasi::poll::poll::Host
+ wasi::random::random::Host
+ wasi::io::streams::Host
+ wasi::clocks::monotonic_clock::Host
+ wasi::clocks::wall_clock::Host,
+ bindings::cli_base::environment::Host
+ bindings::cli_base::exit::Host
+ bindings::cli_base::preopens::Host
+ bindings::filesystem::filesystem::Host
+ bindings::sync_io::poll::poll::Host
+ bindings::random::random::Host
+ bindings::io::streams::Host
+ bindings::clocks::monotonic_clock::Host
+ bindings::clocks::wall_clock::Host,
> wasi_snapshot_preview1::WasiSnapshotPreview1 for T
{
#[instrument(skip(self))]
async fn args_get<'b>(
fn args_get<'b>(
&mut self,
argv: &GuestPtr<'b, GuestPtr<'b, u8>>,
argv_buf: &GuestPtr<'b, u8>,
) -> Result<(), types::Error> {
self.get_arguments()
.await
.context("failed to call `get-arguments`")
.map_err(types::Error::trap)?
.into_iter()
@ -663,10 +664,9 @@ impl<
}
#[instrument(skip(self))]
async fn args_sizes_get(&mut self) -> Result<(types::Size, types::Size), types::Error> {
fn args_sizes_get(&mut self) -> Result<(types::Size, types::Size), types::Error> {
let args = self
.get_arguments()
.await
.context("failed to call `get-arguments`")
.map_err(types::Error::trap)?;
let num = args.len().try_into().map_err(|_| types::Errno::Overflow)?;
@ -680,13 +680,12 @@ impl<
}
#[instrument(skip(self))]
async fn environ_get<'b>(
fn environ_get<'b>(
&mut self,
environ: &GuestPtr<'b, GuestPtr<'b, u8>>,
environ_buf: &GuestPtr<'b, u8>,
) -> Result<(), types::Error> {
self.get_environment()
.await
.context("failed to call `get-environment`")
.map_err(types::Error::trap)?
.into_iter()
@ -712,10 +711,9 @@ impl<
}
#[instrument(skip(self))]
async fn environ_sizes_get(&mut self) -> Result<(types::Size, types::Size), types::Error> {
fn environ_sizes_get(&mut self) -> Result<(types::Size, types::Size), types::Error> {
let environ = self
.get_environment()
.await
.context("failed to call `get-environment`")
.map_err(types::Error::trap)?;
let num = environ
@ -732,18 +730,13 @@ impl<
}
#[instrument(skip(self))]
async fn clock_res_get(
&mut self,
id: types::Clockid,
) -> Result<types::Timestamp, types::Error> {
fn clock_res_get(&mut self, id: types::Clockid) -> Result<types::Timestamp, types::Error> {
let res = match id {
types::Clockid::Realtime => wall_clock::Host::resolution(self)
.await
.context("failed to call `wall_clock::resolution`")
.map_err(types::Error::trap)?
.try_into()?,
types::Clockid::Monotonic => monotonic_clock::Host::resolution(self)
.await
.context("failed to call `monotonic_clock::resolution`")
.map_err(types::Error::trap)?,
types::Clockid::ProcessCputimeId | types::Clockid::ThreadCputimeId => {
@ -754,19 +747,17 @@ impl<
}
#[instrument(skip(self))]
async fn clock_time_get(
fn clock_time_get(
&mut self,
id: types::Clockid,
_precision: types::Timestamp,
) -> Result<types::Timestamp, types::Error> {
let now = match id {
types::Clockid::Realtime => wall_clock::Host::now(self)
.await
.context("failed to call `wall_clock::now`")
.map_err(types::Error::trap)?
.try_into()?,
types::Clockid::Monotonic => monotonic_clock::Host::now(self)
.await
.context("failed to call `monotonic_clock::now`")
.map_err(types::Error::trap)?,
types::Clockid::ProcessCputimeId | types::Clockid::ThreadCputimeId => {
@ -784,7 +775,7 @@ impl<
len: types::Filesize,
advice: types::Advice,
) -> Result<(), types::Error> {
let fd = self.get_file_fd(fd).await?;
let fd = self.get_file_fd(fd)?;
self.advise(fd, offset, len, advice.into())
.await
.map_err(|e| {
@ -797,13 +788,13 @@ impl<
/// Force the allocation of space in a file.
/// NOTE: This is similar to `posix_fallocate` in POSIX.
#[instrument(skip(self))]
async fn fd_allocate(
fn fd_allocate(
&mut self,
fd: types::Fd,
_offset: types::Filesize,
_len: types::Filesize,
) -> Result<(), types::Error> {
self.get_file_fd(fd).await?;
self.get_file_fd(fd)?;
Err(types::Errno::Notsup.into())
}
@ -812,22 +803,21 @@ impl<
#[instrument(skip(self))]
async fn fd_close(&mut self, fd: types::Fd) -> Result<(), types::Error> {
let desc = self
.transact()
.await?
.transact()?
.descriptors
.get_mut()
.remove(fd)
.ok_or(types::Errno::Badf)?
.clone();
match desc {
Descriptor::Stdin(stream) => self
.drop_input_stream(stream)
Descriptor::Stdin(stream) => streams::Host::drop_input_stream(self, stream)
.await
.context("failed to call `drop-input-stream`"),
Descriptor::Stdout(stream) | Descriptor::Stderr(stream) => self
.drop_output_stream(stream)
.await
.context("failed to call `drop-output-stream`"),
Descriptor::Stdout(stream) | Descriptor::Stderr(stream) => {
streams::Host::drop_output_stream(self, stream)
.await
.context("failed to call `drop-output-stream`")
}
Descriptor::File(File { fd, .. }) | Descriptor::PreopenDirectory((fd, _)) => self
.drop_descriptor(fd)
.await
@ -840,7 +830,7 @@ impl<
/// NOTE: This is similar to `fdatasync` in POSIX.
#[instrument(skip(self))]
async fn fd_datasync(&mut self, fd: types::Fd) -> Result<(), types::Error> {
let fd = self.get_file_fd(fd).await?;
let fd = self.get_file_fd(fd)?;
self.sync_data(fd).await.map_err(|e| {
e.try_into()
.context("failed to call `sync-data`")
@ -852,7 +842,7 @@ impl<
/// NOTE: This returns similar flags to `fsync(fd, F_GETFL)` in POSIX, as well as additional fields.
#[instrument(skip(self))]
async fn fd_fdstat_get(&mut self, fd: types::Fd) -> Result<types::Fdstat, types::Error> {
let (fd, blocking, append) = match self.transact().await?.get_descriptor(fd)? {
let (fd, blocking, append) = match self.transact()?.get_descriptor(fd)? {
Descriptor::Stdin(..) => {
let fs_rights_base = types::Rights::FD_READ;
return Ok(types::Fdstat {
@ -931,12 +921,12 @@ impl<
/// Adjust the flags associated with a file descriptor.
/// NOTE: This is similar to `fcntl(fd, F_SETFL, flags)` in POSIX.
#[instrument(skip(self))]
async fn fd_fdstat_set_flags(
fn fd_fdstat_set_flags(
&mut self,
fd: types::Fd,
flags: types::Fdflags,
) -> Result<(), types::Error> {
let mut st = self.transact().await?;
let mut st = self.transact()?;
let File {
append, blocking, ..
} = st.get_file_mut(fd)?;
@ -955,20 +945,20 @@ impl<
/// Does not do anything if `fd` corresponds to a valid descriptor and returns `[types::Errno::Badf]` error otherwise.
#[instrument(skip(self))]
async fn fd_fdstat_set_rights(
fn fd_fdstat_set_rights(
&mut self,
fd: types::Fd,
_fs_rights_base: types::Rights,
_fs_rights_inheriting: types::Rights,
) -> Result<(), types::Error> {
self.get_fd(fd).await?;
self.get_fd(fd)?;
Ok(())
}
/// Return the attributes of an open file.
#[instrument(skip(self))]
async fn fd_filestat_get(&mut self, fd: types::Fd) -> Result<types::Filestat, types::Error> {
let desc = self.transact().await?.get_descriptor(fd)?.clone();
let desc = self.transact()?.get_descriptor(fd)?.clone();
match desc {
Descriptor::Stdin(..) | Descriptor::Stdout(..) | Descriptor::Stderr(..) => {
Ok(types::Filestat {
@ -1023,7 +1013,7 @@ impl<
fd: types::Fd,
size: types::Filesize,
) -> Result<(), types::Error> {
let fd = self.get_file_fd(fd).await?;
let fd = self.get_file_fd(fd)?;
self.set_size(fd, size).await.map_err(|e| {
e.try_into()
.context("failed to call `set-size`")
@ -1052,7 +1042,7 @@ impl<
fst_flags.contains(types::Fstflags::MTIM_NOW),
)?;
let fd = self.get_fd(fd).await?;
let fd = self.get_fd(fd)?;
self.set_times(fd, atim, mtim).await.map_err(|e| {
e.try_into()
.context("failed to call `set-times`")
@ -1068,8 +1058,8 @@ impl<
fd: types::Fd,
iovs: &types::IovecArray<'a>,
) -> Result<types::Size, types::Error> {
let desc = self.transact().await?.get_descriptor(fd)?.clone();
let (mut buf, read, end) = match desc {
let desc = self.transact()?.get_descriptor(fd)?.clone();
let (mut buf, read, state) = match desc {
Descriptor::File(File {
fd,
blocking,
@ -1087,36 +1077,35 @@ impl<
.unwrap_or_else(types::Error::trap)
})?;
let max = buf.len().try_into().unwrap_or(u64::MAX);
let (read, end) = if blocking {
self.blocking_read(stream, max)
let (read, state) = if blocking {
streams::Host::blocking_read(self, stream, max).await
} else {
streams::Host::read(self, stream, max)
streams::Host::read(self, stream, max).await
}
.await
.map_err(|_| types::Errno::Io)?;
let n = read.len().try_into().or(Err(types::Errno::Overflow))?;
let pos = pos.checked_add(n).ok_or(types::Errno::Overflow)?;
position.store(pos, Ordering::Relaxed);
(buf, read, end)
(buf, read, state)
}
Descriptor::Stdin(stream) => {
let Some(buf) = first_non_empty_iovec(iovs)? else {
return Ok(0)
};
let (read, end) =
let (read, state) =
streams::Host::read(self, stream, buf.len().try_into().unwrap_or(u64::MAX))
.await
.map_err(|_| types::Errno::Io)?;
(buf, read, end)
(buf, read, state)
}
_ => return Err(types::Errno::Badf.into()),
};
if read.len() > buf.len() {
return Err(types::Errno::Range.into());
}
if !end && read.len() == 0 {
if state == streams::StreamStatus::Open && read.len() == 0 {
return Err(types::Errno::Intr.into());
}
let (buf, _) = buf.split_at_mut(read.len());
@ -1134,8 +1123,8 @@ impl<
iovs: &types::IovecArray<'a>,
offset: types::Filesize,
) -> Result<types::Size, types::Error> {
let desc = self.transact().await?.get_descriptor(fd)?.clone();
let (mut buf, read, end) = match desc {
let desc = self.transact()?.get_descriptor(fd)?.clone();
let (mut buf, read, state) = match desc {
Descriptor::File(File { fd, blocking, .. }) if self.table().is_file(fd) => {
let Some(buf) = first_non_empty_iovec(iovs)? else {
return Ok(0)
@ -1147,15 +1136,14 @@ impl<
.unwrap_or_else(types::Error::trap)
})?;
let max = buf.len().try_into().unwrap_or(u64::MAX);
let (read, end) = if blocking {
self.blocking_read(stream, max)
let (read, state) = if blocking {
streams::Host::blocking_read(self, stream, max).await
} else {
streams::Host::read(self, stream, max)
streams::Host::read(self, stream, max).await
}
.await
.map_err(|_| types::Errno::Io)?;
(buf, read, end)
(buf, read, state)
}
Descriptor::Stdin(..) => {
// NOTE: legacy implementation returns SPIPE here
@ -1166,7 +1154,7 @@ impl<
if read.len() > buf.len() {
return Err(types::Errno::Range.into());
}
if !end && read.len() == 0 {
if state == streams::StreamStatus::Open && read.len() == 0 {
return Err(types::Errno::Intr.into());
}
let (buf, _) = buf.split_at_mut(read.len());
@ -1183,7 +1171,7 @@ impl<
fd: types::Fd,
ciovs: &types::CiovecArray<'a>,
) -> Result<types::Size, types::Error> {
let desc = self.transact().await?.get_descriptor(fd)?.clone();
let desc = self.transact()?.get_descriptor(fd)?.clone();
let n = match desc {
Descriptor::File(File {
fd,
@ -1210,12 +1198,11 @@ impl<
})?;
(stream, position)
};
let n = if blocking {
self.blocking_write(stream, buf)
let (n, _stat) = if blocking {
streams::Host::blocking_write(self, stream, buf).await
} else {
streams::Host::write(self, stream, buf)
streams::Host::write(self, stream, buf).await
}
.await
.map_err(|_| types::Errno::Io)?;
if !append {
let pos = pos.checked_add(n).ok_or(types::Errno::Overflow)?;
@ -1227,14 +1214,14 @@ impl<
let Some(buf) = first_non_empty_ciovec(ciovs)? else {
return Ok(0)
};
streams::Host::write(self, stream, buf)
let (n, _stat) = streams::Host::blocking_write(self, stream, buf)
.await
.map_err(|_| types::Errno::Io)?
.map_err(|_| types::Errno::Io)?;
n
}
_ => return Err(types::Errno::Badf.into()),
}
.try_into()
.or(Err(types::Errno::Overflow))?;
};
let n = n.try_into().or(Err(types::Errno::Overflow))?;
Ok(n)
}
@ -1247,8 +1234,8 @@ impl<
ciovs: &types::CiovecArray<'a>,
offset: types::Filesize,
) -> Result<types::Size, types::Error> {
let desc = self.transact().await?.get_descriptor(fd)?.clone();
let n = match desc {
let desc = self.transact()?.get_descriptor(fd)?.clone();
let (n, _stat) = match desc {
Descriptor::File(File { fd, blocking, .. }) if self.table().is_file(fd) => {
let Some(buf) = first_non_empty_ciovec(ciovs)? else {
return Ok(0)
@ -1259,11 +1246,10 @@ impl<
.unwrap_or_else(types::Error::trap)
})?;
if blocking {
self.blocking_write(stream, buf)
streams::Host::blocking_write(self, stream, buf).await
} else {
streams::Host::write(self, stream, buf)
streams::Host::write(self, stream, buf).await
}
.await
.map_err(|_| types::Errno::Io)?
}
Descriptor::Stdout(..) | Descriptor::Stderr(..) => {
@ -1271,16 +1257,15 @@ impl<
return Err(types::Errno::Spipe.into());
}
_ => return Err(types::Errno::Badf.into()),
}
.try_into()
.or(Err(types::Errno::Overflow))?;
};
let n = n.try_into().or(Err(types::Errno::Overflow))?;
Ok(n)
}
/// Return a description of the given preopened file descriptor.
#[instrument(skip(self))]
async fn fd_prestat_get(&mut self, fd: types::Fd) -> Result<types::Prestat, types::Error> {
if let Descriptor::PreopenDirectory((_, p)) = self.transact().await?.get_descriptor(fd)? {
fn fd_prestat_get(&mut self, fd: types::Fd) -> Result<types::Prestat, types::Error> {
if let Descriptor::PreopenDirectory((_, p)) = self.transact()?.get_descriptor(fd)? {
let pr_name_len = p.len().try_into().or(Err(types::Errno::Overflow))?;
return Ok(types::Prestat::Dir(types::PrestatDir { pr_name_len }));
}
@ -1289,14 +1274,14 @@ impl<
/// Return a description of the given preopened file descriptor.
#[instrument(skip(self))]
async fn fd_prestat_dir_name<'a>(
fn fd_prestat_dir_name<'a>(
&mut self,
fd: types::Fd,
path: &GuestPtr<'a, u8>,
path_max_len: types::Size,
) -> Result<(), types::Error> {
let path_max_len = path_max_len.try_into().or(Err(types::Errno::Overflow))?;
if let Descriptor::PreopenDirectory((_, p)) = self.transact().await?.get_descriptor(fd)? {
if let Descriptor::PreopenDirectory((_, p)) = self.transact()?.get_descriptor(fd)? {
if p.len() > path_max_len {
return Err(types::Errno::Nametoolong.into());
}
@ -1308,8 +1293,8 @@ impl<
/// Atomically replace a file descriptor by renumbering another file descriptor.
#[instrument(skip(self))]
async fn fd_renumber(&mut self, from: types::Fd, to: types::Fd) -> Result<(), types::Error> {
let mut st = self.transact().await?;
fn fd_renumber(&mut self, from: types::Fd, to: types::Fd) -> Result<(), types::Error> {
let mut st = self.transact()?;
let descriptors = st.descriptors.get_mut();
let desc = descriptors.remove(from).ok_or(types::Errno::Badf)?;
descriptors.insert(to.into(), desc);
@ -1326,7 +1311,7 @@ impl<
whence: types::Whence,
) -> Result<types::Filesize, types::Error> {
let (fd, position) = {
let mut st = self.transact().await?;
let mut st = self.transact()?;
let File { fd, position, .. } = st.get_seekable(fd)?;
(*fd, Arc::clone(&position))
};
@ -1354,7 +1339,7 @@ impl<
/// NOTE: This is similar to `fsync` in POSIX.
#[instrument(skip(self))]
async fn fd_sync(&mut self, fd: types::Fd) -> Result<(), types::Error> {
let fd = self.get_file_fd(fd).await?;
let fd = self.get_file_fd(fd)?;
self.sync(fd).await.map_err(|e| {
e.try_into()
.context("failed to call `sync`")
@ -1365,10 +1350,9 @@ impl<
/// Return the current offset of a file descriptor.
/// NOTE: This is similar to `lseek(fd, 0, SEEK_CUR)` in POSIX.
#[instrument(skip(self))]
async fn fd_tell(&mut self, fd: types::Fd) -> Result<types::Filesize, types::Error> {
fn fd_tell(&mut self, fd: types::Fd) -> Result<types::Filesize, types::Error> {
let pos = self
.transact()
.await?
.transact()?
.get_seekable(fd)
.map(|File { position, .. }| position.load(Ordering::Relaxed))?;
Ok(pos)
@ -1382,7 +1366,7 @@ impl<
buf_len: types::Size,
cookie: types::Dircookie,
) -> Result<types::Size, types::Error> {
let fd = self.get_dir_fd(fd).await?;
let fd = self.get_dir_fd(fd)?;
let stream = self.read_directory(fd).await.map_err(|e| {
e.try_into()
.context("failed to call `read-directory`")
@ -1490,7 +1474,7 @@ impl<
dirfd: types::Fd,
path: &GuestPtr<'a, str>,
) -> Result<(), types::Error> {
let dirfd = self.get_dir_fd(dirfd).await?;
let dirfd = self.get_dir_fd(dirfd)?;
let path = read_string(path)?;
self.create_directory_at(dirfd, path).await.map_err(|e| {
e.try_into()
@ -1508,7 +1492,7 @@ impl<
flags: types::Lookupflags,
path: &GuestPtr<'a, str>,
) -> Result<types::Filestat, types::Error> {
let dirfd = self.get_dir_fd(dirfd).await?;
let dirfd = self.get_dir_fd(dirfd)?;
let path = read_string(path)?;
let filesystem::DescriptorStat {
device: dev,
@ -1563,7 +1547,7 @@ impl<
fst_flags.contains(types::Fstflags::MTIM_NOW),
)?;
let dirfd = self.get_dir_fd(dirfd).await?;
let dirfd = self.get_dir_fd(dirfd)?;
let path = read_string(path)?;
self.set_times_at(dirfd, flags.into(), path, atim, mtim)
.await
@ -1585,8 +1569,8 @@ impl<
target_fd: types::Fd,
target_path: &GuestPtr<'a, str>,
) -> Result<(), types::Error> {
let src_fd = self.get_dir_fd(src_fd).await?;
let target_fd = self.get_dir_fd(target_fd).await?;
let src_fd = self.get_dir_fd(src_fd)?;
let target_fd = self.get_dir_fd(target_fd)?;
let src_path = read_string(src_path)?;
let target_path = read_string(target_path)?;
self.link_at(src_fd, src_flags.into(), src_path, target_fd, target_path)
@ -1630,7 +1614,7 @@ impl<
flags |= filesystem::DescriptorFlags::REQUESTED_WRITE_SYNC;
}
let desc = self.transact().await?.get_descriptor(dirfd)?.clone();
let desc = self.transact()?.get_descriptor(dirfd)?.clone();
let dirfd = match desc {
Descriptor::PreopenDirectory((fd, _)) => fd,
Descriptor::File(File { fd, .. }) if self.table().is_dir(fd) => fd,
@ -1655,17 +1639,12 @@ impl<
.context("failed to call `open-at`")
.unwrap_or_else(types::Error::trap)
})?;
let fd = self
.transact()
.await?
.descriptors
.get_mut()
.push_file(File {
fd,
position: Default::default(),
append: fdflags.contains(types::Fdflags::APPEND),
blocking: !fdflags.contains(types::Fdflags::NONBLOCK),
})?;
let fd = self.transact()?.descriptors.get_mut().push_file(File {
fd,
position: Default::default(),
append: fdflags.contains(types::Fdflags::APPEND),
blocking: !fdflags.contains(types::Fdflags::NONBLOCK),
})?;
Ok(fd.into())
}
@ -1679,7 +1658,7 @@ impl<
buf: &GuestPtr<'a, u8>,
buf_len: types::Size,
) -> Result<types::Size, types::Error> {
let dirfd = self.get_dir_fd(dirfd).await?;
let dirfd = self.get_dir_fd(dirfd)?;
let path = read_string(path)?;
let mut path = self.readlink_at(dirfd, path).await.map_err(|e| {
e.try_into()
@ -1701,7 +1680,7 @@ impl<
dirfd: types::Fd,
path: &GuestPtr<'a, str>,
) -> Result<(), types::Error> {
let dirfd = self.get_dir_fd(dirfd).await?;
let dirfd = self.get_dir_fd(dirfd)?;
let path = read_string(path)?;
self.remove_directory_at(dirfd, path).await.map_err(|e| {
e.try_into()
@ -1720,8 +1699,8 @@ impl<
dest_fd: types::Fd,
dest_path: &GuestPtr<'a, str>,
) -> Result<(), types::Error> {
let src_fd = self.get_dir_fd(src_fd).await?;
let dest_fd = self.get_dir_fd(dest_fd).await?;
let src_fd = self.get_dir_fd(src_fd)?;
let dest_fd = self.get_dir_fd(dest_fd)?;
let src_path = read_string(src_path)?;
let dest_path = read_string(dest_path)?;
self.rename_at(src_fd, src_path, dest_fd, dest_path)
@ -1740,7 +1719,7 @@ impl<
dirfd: types::Fd,
dest_path: &GuestPtr<'a, str>,
) -> Result<(), types::Error> {
let dirfd = self.get_dir_fd(dirfd).await?;
let dirfd = self.get_dir_fd(dirfd)?;
let src_path = read_string(src_path)?;
let dest_path = read_string(dest_path)?;
self.symlink_at(dirfd, src_path, dest_path)
@ -1758,7 +1737,7 @@ impl<
dirfd: types::Fd,
path: &GuestPtr<'a, str>,
) -> Result<(), types::Error> {
let dirfd = self.get_dir_fd(dirfd).await?;
let dirfd = self.get_dir_fd(dirfd)?;
let path = path.as_cow().map_err(|_| types::Errno::Inval)?.to_string();
self.unlink_file_at(dirfd, path).await.map_err(|e| {
e.try_into()
@ -1775,41 +1754,40 @@ impl<
events: &GuestPtr<'a, types::Event>,
nsubscriptions: types::Size,
) -> Result<types::Size, types::Error> {
todo!()
todo!("preview1 poll_oneoff is not implemented")
}
#[instrument(skip(self))]
async fn proc_exit(&mut self, status: types::Exitcode) -> anyhow::Error {
fn proc_exit(&mut self, status: types::Exitcode) -> anyhow::Error {
let status = match status {
0 => Ok(()),
_ => Err(()),
};
match self.exit(status).await {
match self.exit(status) {
Err(e) => e,
Ok(()) => anyhow!("`exit` did not return an error"),
}
}
#[instrument(skip(self))]
async fn proc_raise(&mut self, _sig: types::Signal) -> Result<(), types::Error> {
fn proc_raise(&mut self, _sig: types::Signal) -> Result<(), types::Error> {
Err(types::Errno::Notsup.into())
}
#[instrument(skip(self))]
async fn sched_yield(&mut self) -> Result<(), types::Error> {
fn sched_yield(&mut self) -> Result<(), types::Error> {
// TODO: This is not yet covered in Preview2.
Ok(())
}
#[instrument(skip(self))]
async fn random_get<'a>(
fn random_get<'a>(
&mut self,
buf: &GuestPtr<'a, u8>,
buf_len: types::Size,
) -> Result<(), types::Error> {
let rand = self
.get_random_bytes(buf_len.into())
.await
.context("failed to call `get-random-bytes`")
.map_err(types::Error::trap)?;
write_bytes(buf, rand)?;
@ -1818,43 +1796,39 @@ impl<
#[allow(unused_variables)]
#[instrument(skip(self))]
async fn sock_accept(
fn sock_accept(
&mut self,
fd: types::Fd,
flags: types::Fdflags,
) -> Result<types::Fd, types::Error> {
todo!()
todo!("preview1 sock_accept is not implemented")
}
#[allow(unused_variables)]
#[instrument(skip(self))]
async fn sock_recv<'a>(
fn sock_recv<'a>(
&mut self,
fd: types::Fd,
ri_data: &types::IovecArray<'a>,
ri_flags: types::Riflags,
) -> Result<(types::Size, types::Roflags), types::Error> {
todo!()
todo!("preview1 sock_recv is not implemented")
}
#[allow(unused_variables)]
#[instrument(skip(self))]
async fn sock_send<'a>(
fn sock_send<'a>(
&mut self,
fd: types::Fd,
si_data: &types::CiovecArray<'a>,
_si_flags: types::Siflags,
) -> Result<types::Size, types::Error> {
todo!()
todo!("preview1 sock_send is not implemented")
}
#[allow(unused_variables)]
#[instrument(skip(self))]
async fn sock_shutdown(
&mut self,
fd: types::Fd,
how: types::Sdflags,
) -> Result<(), types::Error> {
todo!()
fn sock_shutdown(&mut self, fd: types::Fd, how: types::Sdflags) -> Result<(), types::Error> {
todo!("preview1 sock_shutdown is not implemented")
}
}
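A pattern that recurs throughout the adapter above is interpreting the `(bytes, stream-status)` pair now returned by preview2 stream reads. Factored out as a standalone helper it looks roughly like this (hypothetical helper, not part of this change; it relies only on the `StreamStatus::Open` comparison already used above):

async fn read_once<T: streams::Host>(
    host: &mut T,
    stream: streams::InputStream,
    max: u64,
) -> Result<Vec<u8>, types::Error> {
    // Ask the preview2 stream host for up to `max` bytes.
    let (bytes, status) = streams::Host::read(host, stream, max)
        .await
        .map_err(|_| types::Errno::Io)?;
    // An open stream that produced no bytes means "nothing ready yet":
    // the adapter reports EINTR instead of spinning in a retry loop.
    if status == streams::StreamStatus::Open && bytes.is_empty() {
        return Err(types::Errno::Intr.into());
    }
    Ok(bytes)
}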

crates/wasi/src/preview2/preview2/clocks.rs (75 lines changed)

@ -1,13 +1,12 @@
#![allow(unused_variables)]
use crate::preview2::preview2::poll::PollableEntry;
use crate::preview2::wasi::{
use crate::preview2::bindings::{
clocks::monotonic_clock::{self, Instant},
clocks::timezone::{self, Timezone, TimezoneDisplay},
clocks::wall_clock::{self, Datetime},
poll::poll::Pollable,
};
use crate::preview2::WasiView;
use crate::preview2::{HostPollable, TablePollableExt, WasiView};
use cap_std::time::SystemTime;
impl TryFrom<SystemTime> for Datetime {
@ -24,9 +23,8 @@ impl TryFrom<SystemTime> for Datetime {
}
}
#[async_trait::async_trait]
impl<T: WasiView> wall_clock::Host for T {
async fn now(&mut self) -> anyhow::Result<Datetime> {
fn now(&mut self) -> anyhow::Result<Datetime> {
let now = self.ctx().wall_clock.now();
Ok(Datetime {
seconds: now.as_secs(),
@ -34,7 +32,7 @@ impl<T: WasiView> wall_clock::Host for T {
})
}
async fn resolution(&mut self) -> anyhow::Result<Datetime> {
fn resolution(&mut self) -> anyhow::Result<Datetime> {
let res = self.ctx().wall_clock.resolution();
Ok(Datetime {
seconds: res.as_secs(),
@ -43,38 +41,67 @@ impl<T: WasiView> wall_clock::Host for T {
}
}
#[async_trait::async_trait]
impl<T: WasiView> monotonic_clock::Host for T {
async fn now(&mut self) -> anyhow::Result<Instant> {
fn now(&mut self) -> anyhow::Result<Instant> {
Ok(self.ctx().monotonic_clock.now())
}
async fn resolution(&mut self) -> anyhow::Result<Instant> {
fn resolution(&mut self) -> anyhow::Result<Instant> {
Ok(self.ctx().monotonic_clock.resolution())
}
async fn subscribe(&mut self, when: Instant, absolute: bool) -> anyhow::Result<Pollable> {
Ok(self
.table_mut()
.push(Box::new(PollableEntry::MonotonicClock(when, absolute)))?)
fn subscribe(&mut self, when: Instant, absolute: bool) -> anyhow::Result<Pollable> {
use std::time::Duration;
// Calculate time relative to clock object, which may not have the same zero
// point as tokio's Instant::now()
let clock_now = self.ctx().monotonic_clock.now();
if absolute && when < clock_now {
// Deadline is in the past, so pollable is always ready:
Ok(self
.table_mut()
.push_host_pollable(HostPollable::Closure(Box::new(|| {
Box::pin(async { Ok(()) })
})))?)
} else {
let duration = if absolute {
Duration::from_nanos(when - clock_now)
} else {
Duration::from_nanos(when)
};
let deadline = tokio::time::Instant::now()
.checked_add(duration)
.ok_or_else(|| anyhow::anyhow!("time overflow: duration {duration:?}"))?;
tracing::trace!(
"deadline = {:?}, now = {:?}",
deadline,
tokio::time::Instant::now()
);
Ok(self
.table_mut()
.push_host_pollable(HostPollable::Closure(Box::new(move || {
Box::pin(async move {
tracing::trace!(
"mkf: deadline = {:?}, now = {:?}",
deadline,
tokio::time::Instant::now()
);
Ok(tokio::time::sleep_until(deadline).await)
})
})))?)
}
}
}
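Storing a closure rather than a future is what lets a pollable be awaited more than once: each poll-oneoff call can mint a fresh future for the same deadline. A self-contained toy illustration of that generator-of-futures pattern (the names here are stand-ins, not the crate's actual definitions):

use std::future::Future;
use std::pin::Pin;

// Stand-in for the closure stored in HostPollable::Closure: a factory that can
// produce the "ready at deadline" future as many times as it is polled.
type MakeFuture =
    Box<dyn Fn() -> Pin<Box<dyn Future<Output = anyhow::Result<()>> + Send>> + Send>;

async fn poll_twice(make_future: MakeFuture) -> anyhow::Result<()> {
    // First poll-oneoff call: build and await a fresh future.
    make_future().await?;
    // A later poll-oneoff over the same pollable: build another one.
    make_future().await?;
    Ok(())
}

Because the closure in `subscribe` captures `deadline` by value, every future it produces sleeps until the same instant.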
#[async_trait::async_trait]
impl<T: WasiView> timezone::Host for T {
async fn display(
&mut self,
timezone: Timezone,
when: Datetime,
) -> anyhow::Result<TimezoneDisplay> {
todo!()
fn display(&mut self, timezone: Timezone, when: Datetime) -> anyhow::Result<TimezoneDisplay> {
todo!("timezone display is not implemented")
}
async fn utc_offset(&mut self, timezone: Timezone, when: Datetime) -> anyhow::Result<i32> {
todo!()
fn utc_offset(&mut self, timezone: Timezone, when: Datetime) -> anyhow::Result<i32> {
todo!("timezone utc_offset is not implemented")
}
async fn drop_timezone(&mut self, timezone: Timezone) -> anyhow::Result<()> {
todo!()
fn drop_timezone(&mut self, timezone: Timezone) -> anyhow::Result<()> {
todo!("timezone drop is not implemented")
}
}

crates/wasi/src/preview2/preview2/env.rs (29 lines changed)

@ -1,48 +1,37 @@
use crate::preview2::wasi::cli_base::environment;
use crate::preview2::wasi::cli_base::preopens;
use crate::preview2::wasi::cli_base::stderr;
use crate::preview2::wasi::cli_base::stdin;
use crate::preview2::wasi::cli_base::stdout;
use crate::preview2::wasi::filesystem::filesystem;
use crate::preview2::wasi::io::streams;
use crate::preview2::bindings::cli_base::{environment, preopens, stderr, stdin, stdout};
use crate::preview2::bindings::filesystem::filesystem;
use crate::preview2::bindings::io::streams;
use crate::preview2::WasiView;
#[async_trait::async_trait]
impl<T: WasiView> environment::Host for T {
async fn get_environment(&mut self) -> anyhow::Result<Vec<(String, String)>> {
fn get_environment(&mut self) -> anyhow::Result<Vec<(String, String)>> {
Ok(self.ctx().env.clone())
}
async fn get_arguments(&mut self) -> anyhow::Result<Vec<String>> {
fn get_arguments(&mut self) -> anyhow::Result<Vec<String>> {
Ok(self.ctx().args.clone())
}
}
#[async_trait::async_trait]
impl<T: WasiView> preopens::Host for T {
async fn get_directories(
&mut self,
) -> Result<Vec<(filesystem::Descriptor, String)>, anyhow::Error> {
fn get_directories(&mut self) -> Result<Vec<(filesystem::Descriptor, String)>, anyhow::Error> {
Ok(self.ctx().preopens.clone())
}
}
#[async_trait::async_trait]
impl<T: WasiView> stdin::Host for T {
async fn get_stdin(&mut self) -> Result<streams::InputStream, anyhow::Error> {
fn get_stdin(&mut self) -> Result<streams::InputStream, anyhow::Error> {
Ok(self.ctx().stdin)
}
}
#[async_trait::async_trait]
impl<T: WasiView> stdout::Host for T {
async fn get_stdout(&mut self) -> Result<streams::OutputStream, anyhow::Error> {
fn get_stdout(&mut self) -> Result<streams::OutputStream, anyhow::Error> {
Ok(self.ctx().stdout)
}
}
#[async_trait::async_trait]
impl<T: WasiView> stderr::Host for T {
async fn get_stderr(&mut self) -> Result<streams::OutputStream, anyhow::Error> {
fn get_stderr(&mut self) -> Result<streams::OutputStream, anyhow::Error> {
Ok(self.ctx().stderr)
}
}

crates/wasi/src/preview2/preview2/exit.rs (5 lines changed)

@ -1,8 +1,7 @@
use crate::preview2::{wasi::cli_base::exit, I32Exit, WasiView};
use crate::preview2::{bindings::cli_base::exit, I32Exit, WasiView};
#[async_trait::async_trait]
impl<T: WasiView> exit::Host for T {
async fn exit(&mut self, status: Result<(), ()>) -> anyhow::Result<()> {
fn exit(&mut self, status: Result<(), ()>) -> anyhow::Result<()> {
let status = match status {
Ok(()) => 0,
Err(()) => 1,

crates/wasi/src/preview2/preview2/filesystem.rs (281 lines changed)

@ -1,12 +1,13 @@
use crate::preview2::bindings::clocks::wall_clock;
use crate::preview2::bindings::filesystem::filesystem;
use crate::preview2::bindings::io::streams;
use crate::preview2::filesystem::{Dir, File, TableFsExt};
use crate::preview2::stream::TableStreamExt;
use crate::preview2::wasi::clocks::wall_clock;
use crate::preview2::wasi::filesystem::filesystem;
use crate::preview2::wasi::io::streams;
use crate::preview2::{DirPerms, FilePerms, Table, TableError, WasiView};
use filesystem::ErrorCode;
mod sync;
impl From<TableError> for filesystem::Error {
fn from(error: TableError) -> filesystem::Error {
match error {
@ -16,6 +17,12 @@ impl From<TableError> for filesystem::Error {
}
}
impl From<tokio::task::JoinError> for filesystem::Error {
fn from(error: tokio::task::JoinError) -> Self {
Self::trap(anyhow::anyhow!(error))
}
}
#[async_trait::async_trait]
impl<T: WasiView> filesystem::Host for T {
async fn advise(
@ -38,14 +45,16 @@ impl<T: WasiView> filesystem::Host for T {
};
let f = self.table().get_file(fd)?;
f.file.advise(offset, len, advice)?;
f.spawn_blocking(move |f| f.advise(offset, len, advice))
.await?;
Ok(())
}
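The `spawn_blocking` helper called on `File` and `Dir` table entries is defined elsewhere in the crate; only its call sites appear in these hunks (the new `From<tokio::task::JoinError>` impl above hints that join failures become traps). A plausible sketch of its shape, assuming the cap-std handle is held in an `Arc` as the `Arc::clone(&f.file)` calls later in this file imply (hypothetical, simplified):

impl File {
    async fn spawn_blocking<F, R>(&self, body: F) -> R
    where
        F: FnOnce(&cap_std::fs::File) -> R + Send + 'static,
        R: Send + 'static,
    {
        // Clone the shared handle so the blocking task owns a reference
        // independent of the table entry.
        let file = std::sync::Arc::clone(&self.file);
        tokio::task::spawn_blocking(move || body(&*file))
            .await
            .expect("spawn_blocking task panicked")
    }
}

Moving the cap-std call onto tokio's blocking pool keeps the async host methods from stalling the executor on slow filesystem syscalls.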
async fn sync_data(&mut self, fd: filesystem::Descriptor) -> Result<(), filesystem::Error> {
let table = self.table();
if table.is_file(fd) {
match table.get_file(fd)?.file.sync_data() {
let f = table.get_file(fd)?;
match f.spawn_blocking(|f| f.sync_data()).await {
Ok(()) => Ok(()),
// On windows, `sync_data` uses `FlushFileBuffers` which fails with
// `ERROR_ACCESS_DENIED` if the file is not open for writing. Ignore
@ -60,11 +69,9 @@ impl<T: WasiView> filesystem::Host for T {
Err(e) => Err(e.into()),
}
} else if table.is_dir(fd) {
Ok(table
.get_dir(fd)?
.dir
.open(std::path::Component::CurDir)?
.sync_data()?)
let d = table.get_dir(fd)?;
d.spawn_blocking(|d| Ok(d.open(std::path::Component::CurDir)?.sync_data()?))
.await
} else {
Err(ErrorCode::BadDescriptor.into())
}
@ -74,12 +81,10 @@ impl<T: WasiView> filesystem::Host for T {
&mut self,
fd: filesystem::Descriptor,
) -> Result<filesystem::DescriptorFlags, filesystem::Error> {
use cap_std::io_lifetimes::AsFilelike;
use filesystem::DescriptorFlags;
use system_interface::fs::{FdFlags, GetSetFdFlags};
fn get_from_fdflags(f: impl AsFilelike) -> std::io::Result<DescriptorFlags> {
let flags = f.as_filelike().get_fd_flags()?;
fn get_from_fdflags(flags: FdFlags) -> DescriptorFlags {
let mut out = DescriptorFlags::empty();
if flags.contains(FdFlags::DSYNC) {
out |= DescriptorFlags::REQUESTED_WRITE_SYNC;
@ -90,13 +95,14 @@ impl<T: WasiView> filesystem::Host for T {
if flags.contains(FdFlags::SYNC) {
out |= DescriptorFlags::FILE_INTEGRITY_SYNC;
}
Ok(out)
out
}
let table = self.table();
if table.is_file(fd) {
let f = table.get_file(fd)?;
let mut flags = get_from_fdflags(&*f.file)?;
let flags = f.spawn_blocking(|f| f.get_fd_flags()).await?;
let mut flags = get_from_fdflags(flags);
if f.perms.contains(FilePerms::READ) {
flags |= DescriptorFlags::READ;
}
@ -106,7 +112,8 @@ impl<T: WasiView> filesystem::Host for T {
Ok(flags)
} else if table.is_dir(fd) {
let d = table.get_dir(fd)?;
let mut flags = get_from_fdflags(&d.dir)?;
let flags = d.spawn_blocking(|d| d.get_fd_flags()).await?;
let mut flags = get_from_fdflags(flags);
if d.perms.contains(DirPerms::READ) {
flags |= DescriptorFlags::READ;
}
@ -126,7 +133,8 @@ impl<T: WasiView> filesystem::Host for T {
let table = self.table();
if table.is_file(fd) {
let meta = table.get_file(fd)?.file.metadata()?;
let f = table.get_file(fd)?;
let meta = f.spawn_blocking(|f| f.metadata()).await?;
Ok(descriptortype_from(meta.file_type()))
} else if table.is_dir(fd) {
Ok(filesystem::DescriptorType::Directory)
@ -144,7 +152,7 @@ impl<T: WasiView> filesystem::Host for T {
if !f.perms.contains(FilePerms::WRITE) {
Err(ErrorCode::NotPermitted)?;
}
f.file.set_len(size)?;
f.spawn_blocking(move |f| f.set_len(size)).await?;
Ok(())
}
@ -164,7 +172,7 @@ impl<T: WasiView> filesystem::Host for T {
}
let atim = systemtimespec_from(atim)?;
let mtim = systemtimespec_from(mtim)?;
f.file.set_times(atim, mtim)?;
f.spawn_blocking(|f| f.set_times(atim, mtim)).await?;
Ok(())
} else if table.is_dir(fd) {
let d = table.get_dir(fd)?;
@ -173,7 +181,7 @@ impl<T: WasiView> filesystem::Host for T {
}
let atim = systemtimespec_from(atim)?;
let mtim = systemtimespec_from(mtim)?;
d.dir.set_times(atim, mtim)?;
d.spawn_blocking(|d| d.set_times(atim, mtim)).await?;
Ok(())
} else {
Err(ErrorCode::BadDescriptor.into())
@ -196,11 +204,15 @@ impl<T: WasiView> filesystem::Host for T {
return Err(ErrorCode::NotPermitted.into());
}
let mut buffer = vec![0; len.try_into().unwrap_or(usize::MAX)];
let (bytes_read, end) = crate::preview2::filesystem::read_result(
f.file
.read_vectored_at(&mut [IoSliceMut::new(&mut buffer)], offset),
)?;
let (mut buffer, r) = f
.spawn_blocking(move |f| {
let mut buffer = vec![0; len.try_into().unwrap_or(usize::MAX)];
let r = f.read_vectored_at(&mut [IoSliceMut::new(&mut buffer)], offset);
(buffer, r)
})
.await;
let (bytes_read, state) = crate::preview2::filesystem::read_result(r)?;
buffer.truncate(
bytes_read
@ -208,7 +220,7 @@ impl<T: WasiView> filesystem::Host for T {
.expect("bytes read into memory as u64 fits in usize"),
);
Ok((buffer, end))
Ok((buffer, state.is_closed()))
}
async fn write(
@ -226,7 +238,9 @@ impl<T: WasiView> filesystem::Host for T {
return Err(ErrorCode::NotPermitted.into());
}
let bytes_written = f.file.write_vectored_at(&[IoSlice::new(&buf)], offset)?;
let bytes_written = f
.spawn_blocking(move |f| f.write_vectored_at(&[IoSlice::new(&buf)], offset))
.await?;
Ok(filesystem::Filesize::try_from(bytes_written).expect("usize fits in Filesize"))
}
@ -253,17 +267,30 @@ impl<T: WasiView> filesystem::Host for T {
}
}
let entries = d.dir.entries()?.map(|entry| {
let entry = entry?;
let meta = entry.full_metadata()?;
let inode = Some(meta.ino());
let type_ = descriptortype_from(meta.file_type());
let name = entry
.file_name()
.into_string()
.map_err(|_| ReaddirError::IllegalSequence)?;
Ok(filesystem::DirectoryEntry { inode, type_, name })
});
let entries = d
.spawn_blocking(|d| {
// Both `entries` and `full_metadata` perform syscalls, which is why they are done
// within this `spawn_blocking` call rather than delaying the full-metadata
// calculation until entries are demanded later in the iterator chain.
Ok::<_, std::io::Error>(
d.entries()?
.map(|entry| {
let entry = entry?;
let meta = entry.full_metadata()?;
let inode = Some(meta.ino());
let type_ = descriptortype_from(meta.file_type());
let name = entry
.file_name()
.into_string()
.map_err(|_| ReaddirError::IllegalSequence)?;
Ok(filesystem::DirectoryEntry { inode, type_, name })
})
.collect::<Vec<Result<filesystem::DirectoryEntry, ReaddirError>>>(),
)
})
.await?
.into_iter();
// On windows, filter out files like `C:\DumpStack.log.tmp` which we
// can't get full metadata for.
#[cfg(windows)]
@ -306,7 +333,8 @@ impl<T: WasiView> filesystem::Host for T {
async fn sync(&mut self, fd: filesystem::Descriptor) -> Result<(), filesystem::Error> {
let table = self.table();
if table.is_file(fd) {
match table.get_file(fd)?.file.sync_all() {
let f = table.get_file(fd)?;
match f.spawn_blocking(|f| f.sync_all()).await {
Ok(()) => Ok(()),
// On windows, `sync_data` uses `FlushFileBuffers` which fails with
// `ERROR_ACCESS_DENIED` if the file is not open for writing. Ignore
@ -321,11 +349,9 @@ impl<T: WasiView> filesystem::Host for T {
Err(e) => Err(e.into()),
}
} else if table.is_dir(fd) {
Ok(table
.get_dir(fd)?
.dir
.open(std::path::Component::CurDir)?
.sync_all()?)
let d = table.get_dir(fd)?;
d.spawn_blocking(|d| Ok(d.open(std::path::Component::CurDir)?.sync_all()?))
.await
} else {
Err(ErrorCode::BadDescriptor.into())
}
@ -341,7 +367,7 @@ impl<T: WasiView> filesystem::Host for T {
if !d.perms.contains(DirPerms::MUTATE) {
return Err(ErrorCode::NotPermitted.into());
}
d.dir.create_dir(&path)?;
d.spawn_blocking(move |d| d.create_dir(&path)).await?;
Ok(())
}
@ -353,12 +379,12 @@ impl<T: WasiView> filesystem::Host for T {
if table.is_file(fd) {
let f = table.get_file(fd)?;
// No permissions check on stat: if opened, allowed to stat it
let meta = f.file.metadata()?;
let meta = f.spawn_blocking(|f| f.metadata()).await?;
Ok(descriptorstat_from(meta))
} else if table.is_dir(fd) {
let d = table.get_dir(fd)?;
// No permissions check on stat: if opened, allowed to stat it
let meta = d.dir.dir_metadata()?;
let meta = d.spawn_blocking(|d| d.dir_metadata()).await?;
Ok(descriptorstat_from(meta))
} else {
Err(ErrorCode::BadDescriptor.into())
@ -378,9 +404,9 @@ impl<T: WasiView> filesystem::Host for T {
}
let meta = if symlink_follow(path_flags) {
d.dir.metadata(&path)?
d.spawn_blocking(move |d| d.metadata(&path)).await?
} else {
d.dir.symlink_metadata(&path)?
d.spawn_blocking(move |d| d.symlink_metadata(&path)).await?
};
Ok(descriptorstat_from(meta))
}
@ -403,17 +429,23 @@ impl<T: WasiView> filesystem::Host for T {
let atim = systemtimespec_from(atim)?;
let mtim = systemtimespec_from(mtim)?;
if symlink_follow(path_flags) {
d.dir.set_times(
&path,
atim.map(cap_fs_ext::SystemTimeSpec::from_std),
mtim.map(cap_fs_ext::SystemTimeSpec::from_std),
)?;
d.spawn_blocking(move |d| {
d.set_times(
&path,
atim.map(cap_fs_ext::SystemTimeSpec::from_std),
mtim.map(cap_fs_ext::SystemTimeSpec::from_std),
)
})
.await?;
} else {
d.dir.set_symlink_times(
&path,
atim.map(cap_fs_ext::SystemTimeSpec::from_std),
mtim.map(cap_fs_ext::SystemTimeSpec::from_std),
)?;
d.spawn_blocking(move |d| {
d.set_symlink_times(
&path,
atim.map(cap_fs_ext::SystemTimeSpec::from_std),
mtim.map(cap_fs_ext::SystemTimeSpec::from_std),
)
})
.await?;
}
Ok(())
}
@ -439,7 +471,10 @@ impl<T: WasiView> filesystem::Host for T {
if symlink_follow(old_path_flags) {
return Err(ErrorCode::Invalid.into());
}
old_dir.dir.hard_link(&old_path, &new_dir.dir, &new_path)?;
let new_dir_handle = std::sync::Arc::clone(&new_dir.dir);
old_dir
.spawn_blocking(move |d| d.hard_link(&old_path, &new_dir_handle, &new_path))
.await?;
Ok(())
}
@ -521,31 +556,58 @@ impl<T: WasiView> filesystem::Host for T {
Err(ErrorCode::Invalid)?;
}
}
let mut opened = d.dir.open_with(&path, &opts)?;
if opened.metadata()?.is_dir() {
Ok(table.push_dir(Dir::new(
cap_std::fs::Dir::from_std_file(opened.into_std()),
d.perms,
d.file_perms,
))?)
} else if oflags.contains(OpenFlags::DIRECTORY) {
Err(ErrorCode::NotDirectory)?
} else {
// FIXME cap-std needs a nonblocking open option so that file reads and writes
// are nonblocking. Instead we set it after opening here:
let set_fd_flags = opened.new_set_fd_flags(FdFlags::NONBLOCK)?;
opened.set_fd_flags(set_fd_flags)?;
// Represents each possible outcome from the spawn_blocking operation.
// This makes sure we don't have to give spawn_blocking any way to
// manipulate the table.
enum OpenResult {
Dir(cap_std::fs::Dir),
File(cap_std::fs::File),
NotDir,
}
let opened = d
.spawn_blocking::<_, std::io::Result<OpenResult>>(move |d| {
let mut opened = d.open_with(&path, &opts)?;
if opened.metadata()?.is_dir() {
Ok(OpenResult::Dir(cap_std::fs::Dir::from_std_file(
opened.into_std(),
)))
} else if oflags.contains(OpenFlags::DIRECTORY) {
Ok(OpenResult::NotDir)
} else {
// FIXME cap-std needs a nonblocking open option so that file reads and writes
// are nonblocking. Instead we set it after opening here:
let set_fd_flags = opened.new_set_fd_flags(FdFlags::NONBLOCK)?;
opened.set_fd_flags(set_fd_flags)?;
Ok(OpenResult::File(opened))
}
})
.await?;
match opened {
OpenResult::Dir(dir) => Ok(table.push_dir(Dir::new(dir, d.perms, d.file_perms))?),
OpenResult::File(file) => {
Ok(table.push_file(File::new(file, mask_file_perms(d.file_perms, flags)))?)
}
Ok(table.push_file(File::new(opened, mask_file_perms(d.file_perms, flags)))?)
OpenResult::NotDir => Err(ErrorCode::NotDirectory.into()),
}
}
async fn drop_descriptor(&mut self, fd: filesystem::Descriptor) -> anyhow::Result<()> {
let table = self.table_mut();
// The Drop will close the file/dir, but if the close syscall
// blocks the thread, I will face god and walk backwards into hell.
// tokio::fs::File just uses std::fs::File's Drop impl to close, so
// it doesn't appear anyone else has found this to be a problem.
// (Not that they could solve it without async drop...)
if table.delete_file(fd).is_err() {
table.delete_dir(fd)?;
}
Ok(())
}
@ -559,7 +621,7 @@ impl<T: WasiView> filesystem::Host for T {
if !d.perms.contains(DirPerms::READ) {
return Err(ErrorCode::NotPermitted.into());
}
let link = d.dir.read_link(&path)?;
let link = d.spawn_blocking(move |d| d.read_link(&path)).await?;
Ok(link
.into_os_string()
.into_string()
@ -576,7 +638,7 @@ impl<T: WasiView> filesystem::Host for T {
if !d.perms.contains(DirPerms::MUTATE) {
return Err(ErrorCode::NotPermitted.into());
}
Ok(d.dir.remove_dir(&path)?)
Ok(d.spawn_blocking(move |d| d.remove_dir(&path)).await?)
}
async fn rename_at(
@ -595,8 +657,10 @@ impl<T: WasiView> filesystem::Host for T {
if !new_dir.perms.contains(DirPerms::MUTATE) {
return Err(ErrorCode::NotPermitted.into());
}
old_dir.dir.rename(&old_path, &new_dir.dir, &new_path)?;
Ok(())
let new_dir_handle = std::sync::Arc::clone(&new_dir.dir);
Ok(old_dir
.spawn_blocking(move |d| d.rename(&old_path, &new_dir_handle, &new_path))
.await?)
}
async fn symlink_at(
@ -614,8 +678,8 @@ impl<T: WasiView> filesystem::Host for T {
if !d.perms.contains(DirPerms::MUTATE) {
return Err(ErrorCode::NotPermitted.into());
}
d.dir.symlink(&src_path, &dest_path)?;
Ok(())
Ok(d.spawn_blocking(move |d| d.symlink(&src_path, &dest_path))
.await?)
}
async fn unlink_file_at(
@ -630,8 +694,8 @@ impl<T: WasiView> filesystem::Host for T {
if !d.perms.contains(DirPerms::MUTATE) {
return Err(ErrorCode::NotPermitted.into());
}
d.dir.remove_file_or_symlink(&path)?;
Ok(())
Ok(d.spawn_blocking(move |d| d.remove_file_or_symlink(&path))
.await?)
}
async fn access_at(
@ -641,7 +705,7 @@ impl<T: WasiView> filesystem::Host for T {
_path: String,
_access: filesystem::AccessType,
) -> Result<(), filesystem::Error> {
todo!()
todo!("filesystem access_at is not implemented")
}
async fn change_file_permissions_at(
@ -651,7 +715,7 @@ impl<T: WasiView> filesystem::Host for T {
_path: String,
_mode: filesystem::Modes,
) -> Result<(), filesystem::Error> {
todo!()
todo!("filesystem change_file_permissions_at is not implemented")
}
async fn change_directory_permissions_at(
@ -661,36 +725,36 @@ impl<T: WasiView> filesystem::Host for T {
_path: String,
_mode: filesystem::Modes,
) -> Result<(), filesystem::Error> {
todo!()
todo!("filesystem change_directory_permissions_at is not implemented")
}
async fn lock_shared(&mut self, _fd: filesystem::Descriptor) -> Result<(), filesystem::Error> {
todo!()
todo!("filesystem lock_shared is not implemented")
}
async fn lock_exclusive(
&mut self,
_fd: filesystem::Descriptor,
) -> Result<(), filesystem::Error> {
todo!()
todo!("filesystem lock_exclusive is not implemented")
}
async fn try_lock_shared(
&mut self,
_fd: filesystem::Descriptor,
) -> Result<(), filesystem::Error> {
todo!()
todo!("filesystem try_lock_shared is not implemented")
}
async fn try_lock_exclusive(
&mut self,
_fd: filesystem::Descriptor,
) -> Result<(), filesystem::Error> {
todo!()
todo!("filesystem try_lock_exclusive is not implemented")
}
async fn unlock(&mut self, _fd: filesystem::Descriptor) -> Result<(), filesystem::Error> {
todo!()
todo!("filesystem unlock is not implemented")
}
async fn read_via_stream(
@ -698,6 +762,11 @@ impl<T: WasiView> filesystem::Host for T {
fd: filesystem::Descriptor,
offset: filesystem::Filesize,
) -> Result<streams::InputStream, filesystem::Error> {
use crate::preview2::{
filesystem::FileInputStream,
stream::{InternalInputStream, InternalTableStreamExt},
};
// Trap if fd lookup fails:
let f = self.table().get_file(fd)?;
@ -708,10 +777,12 @@ impl<T: WasiView> filesystem::Host for T {
let clone = std::sync::Arc::clone(&f.file);
// Create a stream view for it.
let reader = crate::preview2::filesystem::FileInputStream::new(clone, offset);
let reader = FileInputStream::new(clone, offset);
// Insert the stream view into the table. Trap if the table is full.
let index = self.table_mut().push_input_stream(Box::new(reader))?;
let index = self
.table_mut()
.push_internal_input_stream(InternalInputStream::File(reader))?;
Ok(index)
}
@ -721,6 +792,11 @@ impl<T: WasiView> filesystem::Host for T {
fd: filesystem::Descriptor,
offset: filesystem::Filesize,
) -> Result<streams::OutputStream, filesystem::Error> {
use crate::preview2::{
filesystem::FileOutputStream,
stream::{InternalOutputStream, InternalTableStreamExt},
};
// Trap if fd lookup fails:
let f = self.table().get_file(fd)?;
@ -732,10 +808,12 @@ impl<T: WasiView> filesystem::Host for T {
let clone = std::sync::Arc::clone(&f.file);
// Create a stream view for it.
let writer = crate::preview2::filesystem::FileOutputStream::new(clone, offset);
let writer = FileOutputStream::write_at(clone, offset);
// Insert the stream view into the table. Trap if the table is full.
let index = self.table_mut().push_output_stream(Box::new(writer))?;
let index = self
.table_mut()
.push_internal_output_stream(InternalOutputStream::File(writer))?;
Ok(index)
}
@ -744,6 +822,11 @@ impl<T: WasiView> filesystem::Host for T {
&mut self,
fd: filesystem::Descriptor,
) -> Result<streams::OutputStream, filesystem::Error> {
use crate::preview2::{
filesystem::FileOutputStream,
stream::{InternalOutputStream, InternalTableStreamExt},
};
// Trap if fd lookup fails:
let f = self.table().get_file(fd)?;
@ -754,10 +837,12 @@ impl<T: WasiView> filesystem::Host for T {
let clone = std::sync::Arc::clone(&f.file);
// Create a stream view for it.
let appender = crate::preview2::filesystem::FileAppendStream::new(clone);
let appender = FileOutputStream::append(clone);
// Insert the stream view into the table. Trap if the table is full.
let index = self.table_mut().push_output_stream(Box::new(appender))?;
let index = self
.table_mut()
.push_internal_output_stream(InternalOutputStream::File(appender))?;
Ok(index)
}

crates/wasi/src/preview2/preview2/filesystem/sync.rs (613 lines changed)

@ -0,0 +1,613 @@
use crate::preview2::bindings::filesystem::filesystem as async_filesystem;
use crate::preview2::bindings::sync_io::filesystem::filesystem as sync_filesystem;
use crate::preview2::bindings::sync_io::io::streams;
use crate::preview2::in_tokio;
impl<T: async_filesystem::Host> sync_filesystem::Host for T {
fn advise(
&mut self,
fd: sync_filesystem::Descriptor,
offset: sync_filesystem::Filesize,
len: sync_filesystem::Filesize,
advice: sync_filesystem::Advice,
) -> Result<(), sync_filesystem::Error> {
Ok(in_tokio(async {
async_filesystem::Host::advise(self, fd, offset, len, advice.into()).await
})?)
}
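`in_tokio` is the crate's sync-over-async bridge (defined at the crate root, not shown in this hunk): every synchronous host method simply blocks on the corresponding async implementation. A rough sketch of what such a bridge can look like, assuming a lazily-built shared runtime (illustrative names and details, not the crate's actual code):

fn in_tokio_sketch<F: std::future::Future>(future: F) -> F::Output {
    use std::sync::OnceLock;
    use tokio::runtime::Runtime;

    // Build one shared runtime the first time a synchronous caller needs it.
    static RUNTIME: OnceLock<Runtime> = OnceLock::new();
    let rt = RUNTIME.get_or_init(|| {
        tokio::runtime::Builder::new_multi_thread()
            .enable_time()
            .build()
            .expect("failed to build tokio runtime")
    });
    // Block the calling thread until the wrapped async host call completes.
    // (The real helper also has to cope with being called from inside an
    // already-running runtime; that handling is elided here.)
    rt.block_on(future)
}

Funneling every wrapper through one helper keeps these sync bindings as thin shims over the async host implementations.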
fn sync_data(&mut self, fd: sync_filesystem::Descriptor) -> Result<(), sync_filesystem::Error> {
Ok(in_tokio(async {
async_filesystem::Host::sync_data(self, fd).await
})?)
}
fn get_flags(
&mut self,
fd: sync_filesystem::Descriptor,
) -> Result<sync_filesystem::DescriptorFlags, sync_filesystem::Error> {
Ok(in_tokio(async { async_filesystem::Host::get_flags(self, fd).await })?.into())
}
fn get_type(
&mut self,
fd: sync_filesystem::Descriptor,
) -> Result<sync_filesystem::DescriptorType, sync_filesystem::Error> {
Ok(in_tokio(async { async_filesystem::Host::get_type(self, fd).await })?.into())
}
fn set_size(
&mut self,
fd: sync_filesystem::Descriptor,
size: sync_filesystem::Filesize,
) -> Result<(), sync_filesystem::Error> {
Ok(in_tokio(async {
async_filesystem::Host::set_size(self, fd, size).await
})?)
}
fn set_times(
&mut self,
fd: sync_filesystem::Descriptor,
atim: sync_filesystem::NewTimestamp,
mtim: sync_filesystem::NewTimestamp,
) -> Result<(), sync_filesystem::Error> {
Ok(in_tokio(async {
async_filesystem::Host::set_times(self, fd, atim.into(), mtim.into()).await
})?)
}
fn read(
&mut self,
fd: sync_filesystem::Descriptor,
len: sync_filesystem::Filesize,
offset: sync_filesystem::Filesize,
) -> Result<(Vec<u8>, bool), sync_filesystem::Error> {
Ok(in_tokio(async {
async_filesystem::Host::read(self, fd, len, offset).await
})?)
}
fn write(
&mut self,
fd: sync_filesystem::Descriptor,
buf: Vec<u8>,
offset: sync_filesystem::Filesize,
) -> Result<sync_filesystem::Filesize, sync_filesystem::Error> {
Ok(in_tokio(async {
async_filesystem::Host::write(self, fd, buf, offset).await
})?)
}
fn read_directory(
&mut self,
fd: sync_filesystem::Descriptor,
) -> Result<sync_filesystem::DirectoryEntryStream, sync_filesystem::Error> {
Ok(in_tokio(async {
async_filesystem::Host::read_directory(self, fd).await
})?)
}
fn read_directory_entry(
&mut self,
stream: sync_filesystem::DirectoryEntryStream,
) -> Result<Option<sync_filesystem::DirectoryEntry>, sync_filesystem::Error> {
Ok(
in_tokio(async { async_filesystem::Host::read_directory_entry(self, stream).await })?
.map(|e| e.into()),
)
}
fn drop_directory_entry_stream(
&mut self,
stream: sync_filesystem::DirectoryEntryStream,
) -> anyhow::Result<()> {
Ok(in_tokio(async {
async_filesystem::Host::drop_directory_entry_stream(self, stream).await
})?)
}
fn sync(&mut self, fd: sync_filesystem::Descriptor) -> Result<(), sync_filesystem::Error> {
Ok(in_tokio(async {
async_filesystem::Host::sync(self, fd).await
})?)
}
fn create_directory_at(
&mut self,
fd: sync_filesystem::Descriptor,
path: String,
) -> Result<(), sync_filesystem::Error> {
Ok(in_tokio(async {
async_filesystem::Host::create_directory_at(self, fd, path).await
})?)
}
fn stat(
&mut self,
fd: sync_filesystem::Descriptor,
) -> Result<sync_filesystem::DescriptorStat, sync_filesystem::Error> {
Ok(in_tokio(async { async_filesystem::Host::stat(self, fd).await })?.into())
}
fn stat_at(
&mut self,
fd: sync_filesystem::Descriptor,
path_flags: sync_filesystem::PathFlags,
path: String,
) -> Result<sync_filesystem::DescriptorStat, sync_filesystem::Error> {
Ok(in_tokio(async {
async_filesystem::Host::stat_at(self, fd, path_flags.into(), path).await
})?
.into())
}
fn set_times_at(
&mut self,
fd: sync_filesystem::Descriptor,
path_flags: sync_filesystem::PathFlags,
path: String,
atim: sync_filesystem::NewTimestamp,
mtim: sync_filesystem::NewTimestamp,
) -> Result<(), sync_filesystem::Error> {
Ok(in_tokio(async {
async_filesystem::Host::set_times_at(
self,
fd,
path_flags.into(),
path,
atim.into(),
mtim.into(),
)
.await
})?)
}
fn link_at(
&mut self,
fd: sync_filesystem::Descriptor,
// TODO delete the path flags from this function
old_path_flags: sync_filesystem::PathFlags,
old_path: String,
new_descriptor: sync_filesystem::Descriptor,
new_path: String,
) -> Result<(), sync_filesystem::Error> {
Ok(in_tokio(async {
async_filesystem::Host::link_at(
self,
fd,
old_path_flags.into(),
old_path,
new_descriptor,
new_path,
)
.await
})?)
}
fn open_at(
&mut self,
fd: sync_filesystem::Descriptor,
path_flags: sync_filesystem::PathFlags,
path: String,
oflags: sync_filesystem::OpenFlags,
flags: sync_filesystem::DescriptorFlags,
mode: sync_filesystem::Modes,
) -> Result<sync_filesystem::Descriptor, sync_filesystem::Error> {
Ok(in_tokio(async {
async_filesystem::Host::open_at(
self,
fd,
path_flags.into(),
path,
oflags.into(),
flags.into(),
mode.into(),
)
.await
})?)
}
fn drop_descriptor(&mut self, fd: sync_filesystem::Descriptor) -> anyhow::Result<()> {
Ok(in_tokio(async {
async_filesystem::Host::drop_descriptor(self, fd).await
})?)
}
fn readlink_at(
&mut self,
fd: sync_filesystem::Descriptor,
path: String,
) -> Result<String, sync_filesystem::Error> {
Ok(in_tokio(async {
async_filesystem::Host::readlink_at(self, fd, path).await
})?)
}
fn remove_directory_at(
&mut self,
fd: sync_filesystem::Descriptor,
path: String,
) -> Result<(), sync_filesystem::Error> {
Ok(in_tokio(async {
async_filesystem::Host::remove_directory_at(self, fd, path).await
})?)
}
fn rename_at(
&mut self,
fd: sync_filesystem::Descriptor,
old_path: String,
new_fd: sync_filesystem::Descriptor,
new_path: String,
) -> Result<(), sync_filesystem::Error> {
Ok(in_tokio(async {
async_filesystem::Host::rename_at(self, fd, old_path, new_fd, new_path).await
})?)
}
fn symlink_at(
&mut self,
fd: sync_filesystem::Descriptor,
src_path: String,
dest_path: String,
) -> Result<(), sync_filesystem::Error> {
Ok(in_tokio(async {
async_filesystem::Host::symlink_at(self, fd, src_path, dest_path).await
})?)
}
fn unlink_file_at(
&mut self,
fd: sync_filesystem::Descriptor,
path: String,
) -> Result<(), sync_filesystem::Error> {
Ok(in_tokio(async {
async_filesystem::Host::unlink_file_at(self, fd, path).await
})?)
}
fn access_at(
&mut self,
fd: sync_filesystem::Descriptor,
path_flags: sync_filesystem::PathFlags,
path: String,
access: sync_filesystem::AccessType,
) -> Result<(), sync_filesystem::Error> {
Ok(in_tokio(async {
async_filesystem::Host::access_at(self, fd, path_flags.into(), path, access.into())
.await
})?)
}
fn change_file_permissions_at(
&mut self,
fd: sync_filesystem::Descriptor,
path_flags: sync_filesystem::PathFlags,
path: String,
mode: sync_filesystem::Modes,
) -> Result<(), sync_filesystem::Error> {
Ok(in_tokio(async {
async_filesystem::Host::change_file_permissions_at(
self,
fd,
path_flags.into(),
path,
mode.into(),
)
.await
})?)
}
fn change_directory_permissions_at(
&mut self,
fd: sync_filesystem::Descriptor,
path_flags: sync_filesystem::PathFlags,
path: String,
mode: sync_filesystem::Modes,
) -> Result<(), sync_filesystem::Error> {
Ok(in_tokio(async {
async_filesystem::Host::change_directory_permissions_at(
self,
fd,
path_flags.into(),
path,
mode.into(),
)
.await
})?)
}
fn lock_shared(
&mut self,
fd: sync_filesystem::Descriptor,
) -> Result<(), sync_filesystem::Error> {
Ok(in_tokio(async {
async_filesystem::Host::lock_shared(self, fd).await
})?)
}
fn lock_exclusive(
&mut self,
fd: sync_filesystem::Descriptor,
) -> Result<(), sync_filesystem::Error> {
Ok(in_tokio(async {
async_filesystem::Host::lock_exclusive(self, fd).await
})?)
}
fn try_lock_shared(
&mut self,
fd: sync_filesystem::Descriptor,
) -> Result<(), sync_filesystem::Error> {
Ok(in_tokio(async {
async_filesystem::Host::try_lock_shared(self, fd).await
})?)
}
fn try_lock_exclusive(
&mut self,
fd: sync_filesystem::Descriptor,
) -> Result<(), sync_filesystem::Error> {
Ok(in_tokio(async {
async_filesystem::Host::try_lock_exclusive(self, fd).await
})?)
}
fn unlock(&mut self, fd: sync_filesystem::Descriptor) -> Result<(), sync_filesystem::Error> {
Ok(in_tokio(async {
async_filesystem::Host::unlock(self, fd).await
})?)
}
fn read_via_stream(
&mut self,
fd: sync_filesystem::Descriptor,
offset: sync_filesystem::Filesize,
) -> Result<streams::InputStream, sync_filesystem::Error> {
Ok(in_tokio(async {
async_filesystem::Host::read_via_stream(self, fd, offset).await
})?)
}
fn write_via_stream(
&mut self,
fd: sync_filesystem::Descriptor,
offset: sync_filesystem::Filesize,
) -> Result<streams::OutputStream, sync_filesystem::Error> {
Ok(in_tokio(async {
async_filesystem::Host::write_via_stream(self, fd, offset).await
})?)
}
fn append_via_stream(
&mut self,
fd: sync_filesystem::Descriptor,
) -> Result<streams::OutputStream, sync_filesystem::Error> {
Ok(in_tokio(async {
async_filesystem::Host::append_via_stream(self, fd).await
})?)
}
}
impl From<async_filesystem::ErrorCode> for sync_filesystem::ErrorCode {
fn from(other: async_filesystem::ErrorCode) -> Self {
use async_filesystem::ErrorCode;
match other {
ErrorCode::Access => Self::Access,
ErrorCode::WouldBlock => Self::WouldBlock,
ErrorCode::Already => Self::Already,
ErrorCode::BadDescriptor => Self::BadDescriptor,
ErrorCode::Busy => Self::Busy,
ErrorCode::Deadlock => Self::Deadlock,
ErrorCode::Quota => Self::Quota,
ErrorCode::Exist => Self::Exist,
ErrorCode::FileTooLarge => Self::FileTooLarge,
ErrorCode::IllegalByteSequence => Self::IllegalByteSequence,
ErrorCode::InProgress => Self::InProgress,
ErrorCode::Interrupted => Self::Interrupted,
ErrorCode::Invalid => Self::Invalid,
ErrorCode::Io => Self::Io,
ErrorCode::IsDirectory => Self::IsDirectory,
ErrorCode::Loop => Self::Loop,
ErrorCode::TooManyLinks => Self::TooManyLinks,
ErrorCode::MessageSize => Self::MessageSize,
ErrorCode::NameTooLong => Self::NameTooLong,
ErrorCode::NoDevice => Self::NoDevice,
ErrorCode::NoEntry => Self::NoEntry,
ErrorCode::NoLock => Self::NoLock,
ErrorCode::InsufficientMemory => Self::InsufficientMemory,
ErrorCode::InsufficientSpace => Self::InsufficientSpace,
ErrorCode::NotDirectory => Self::NotDirectory,
ErrorCode::NotEmpty => Self::NotEmpty,
ErrorCode::NotRecoverable => Self::NotRecoverable,
ErrorCode::Unsupported => Self::Unsupported,
ErrorCode::NoTty => Self::NoTty,
ErrorCode::NoSuchDevice => Self::NoSuchDevice,
ErrorCode::Overflow => Self::Overflow,
ErrorCode::NotPermitted => Self::NotPermitted,
ErrorCode::Pipe => Self::Pipe,
ErrorCode::ReadOnly => Self::ReadOnly,
ErrorCode::InvalidSeek => Self::InvalidSeek,
ErrorCode::TextFileBusy => Self::TextFileBusy,
ErrorCode::CrossDevice => Self::CrossDevice,
}
}
}
impl From<async_filesystem::Error> for sync_filesystem::Error {
fn from(other: async_filesystem::Error) -> Self {
match other.downcast() {
Ok(errorcode) => Self::from(sync_filesystem::ErrorCode::from(errorcode)),
Err(other) => Self::trap(other),
}
}
}
impl From<sync_filesystem::Advice> for async_filesystem::Advice {
fn from(other: sync_filesystem::Advice) -> Self {
use sync_filesystem::Advice;
match other {
Advice::Normal => Self::Normal,
Advice::Sequential => Self::Sequential,
Advice::Random => Self::Random,
Advice::WillNeed => Self::WillNeed,
Advice::DontNeed => Self::DontNeed,
Advice::NoReuse => Self::NoReuse,
}
}
}
impl From<async_filesystem::DescriptorFlags> for sync_filesystem::DescriptorFlags {
fn from(other: async_filesystem::DescriptorFlags) -> Self {
let mut out = Self::empty();
if other.contains(async_filesystem::DescriptorFlags::READ) {
out |= Self::READ;
}
if other.contains(async_filesystem::DescriptorFlags::WRITE) {
out |= Self::WRITE;
}
if other.contains(async_filesystem::DescriptorFlags::FILE_INTEGRITY_SYNC) {
out |= Self::FILE_INTEGRITY_SYNC;
}
if other.contains(async_filesystem::DescriptorFlags::DATA_INTEGRITY_SYNC) {
out |= Self::DATA_INTEGRITY_SYNC;
}
if other.contains(async_filesystem::DescriptorFlags::REQUESTED_WRITE_SYNC) {
out |= Self::REQUESTED_WRITE_SYNC;
}
if other.contains(async_filesystem::DescriptorFlags::MUTATE_DIRECTORY) {
out |= Self::MUTATE_DIRECTORY;
}
out
}
}
impl From<async_filesystem::DescriptorType> for sync_filesystem::DescriptorType {
fn from(other: async_filesystem::DescriptorType) -> Self {
use async_filesystem::DescriptorType;
match other {
DescriptorType::RegularFile => Self::RegularFile,
DescriptorType::Directory => Self::Directory,
DescriptorType::BlockDevice => Self::BlockDevice,
DescriptorType::CharacterDevice => Self::CharacterDevice,
DescriptorType::Fifo => Self::Fifo,
DescriptorType::Socket => Self::Socket,
DescriptorType::SymbolicLink => Self::SymbolicLink,
DescriptorType::Unknown => Self::Unknown,
}
}
}
impl From<async_filesystem::DirectoryEntry> for sync_filesystem::DirectoryEntry {
fn from(other: async_filesystem::DirectoryEntry) -> Self {
Self {
inode: other.inode,
type_: other.type_.into(),
name: other.name,
}
}
}
impl From<async_filesystem::DescriptorStat> for sync_filesystem::DescriptorStat {
fn from(other: async_filesystem::DescriptorStat) -> Self {
Self {
device: other.device,
inode: other.inode,
type_: other.type_.into(),
link_count: other.link_count,
size: other.size,
data_access_timestamp: other.data_access_timestamp,
data_modification_timestamp: other.data_modification_timestamp,
status_change_timestamp: other.status_change_timestamp,
}
}
}
impl From<sync_filesystem::PathFlags> for async_filesystem::PathFlags {
fn from(other: sync_filesystem::PathFlags) -> Self {
let mut out = Self::empty();
if other.contains(sync_filesystem::PathFlags::SYMLINK_FOLLOW) {
out |= Self::SYMLINK_FOLLOW;
}
out
}
}
impl From<sync_filesystem::NewTimestamp> for async_filesystem::NewTimestamp {
fn from(other: sync_filesystem::NewTimestamp) -> Self {
use sync_filesystem::NewTimestamp;
match other {
NewTimestamp::NoChange => Self::NoChange,
NewTimestamp::Now => Self::Now,
NewTimestamp::Timestamp(datetime) => Self::Timestamp(datetime),
}
}
}
impl From<sync_filesystem::OpenFlags> for async_filesystem::OpenFlags {
fn from(other: sync_filesystem::OpenFlags) -> Self {
let mut out = Self::empty();
if other.contains(sync_filesystem::OpenFlags::CREATE) {
out |= Self::CREATE;
}
if other.contains(sync_filesystem::OpenFlags::DIRECTORY) {
out |= Self::DIRECTORY;
}
if other.contains(sync_filesystem::OpenFlags::EXCLUSIVE) {
out |= Self::EXCLUSIVE;
}
if other.contains(sync_filesystem::OpenFlags::TRUNCATE) {
out |= Self::TRUNCATE;
}
out
}
}
impl From<sync_filesystem::DescriptorFlags> for async_filesystem::DescriptorFlags {
fn from(other: sync_filesystem::DescriptorFlags) -> Self {
let mut out = Self::empty();
if other.contains(sync_filesystem::DescriptorFlags::READ) {
out |= Self::READ;
}
if other.contains(sync_filesystem::DescriptorFlags::WRITE) {
out |= Self::WRITE;
}
if other.contains(sync_filesystem::DescriptorFlags::FILE_INTEGRITY_SYNC) {
out |= Self::FILE_INTEGRITY_SYNC;
}
if other.contains(sync_filesystem::DescriptorFlags::DATA_INTEGRITY_SYNC) {
out |= Self::DATA_INTEGRITY_SYNC;
}
if other.contains(sync_filesystem::DescriptorFlags::REQUESTED_WRITE_SYNC) {
out |= Self::REQUESTED_WRITE_SYNC;
}
if other.contains(sync_filesystem::DescriptorFlags::MUTATE_DIRECTORY) {
out |= Self::MUTATE_DIRECTORY;
}
out
}
}
impl From<sync_filesystem::Modes> for async_filesystem::Modes {
fn from(other: sync_filesystem::Modes) -> Self {
let mut out = Self::empty();
if other.contains(sync_filesystem::Modes::READABLE) {
out |= Self::READABLE;
}
if other.contains(sync_filesystem::Modes::WRITABLE) {
out |= Self::WRITABLE;
}
if other.contains(sync_filesystem::Modes::EXECUTABLE) {
out |= Self::EXECUTABLE;
}
out
}
}
impl From<sync_filesystem::AccessType> for async_filesystem::AccessType {
fn from(other: sync_filesystem::AccessType) -> Self {
use sync_filesystem::AccessType;
match other {
AccessType::Access(modes) => Self::Access(modes.into()),
AccessType::Exists => Self::Exists,
}
}
}
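The sync filesystem host above is a thin shim: every method enters the runtime via in_tokio, awaits the corresponding async host method, and converts the result through the From impls that follow. As a minimal sketch of that wrapper (the real in_tokio lives at the crate root and is not part of this hunk; RUNTIME, the builder flags, and the block_in_place fallback are assumptions):

use std::future::Future;
use once_cell::sync::Lazy;

static RUNTIME: Lazy<tokio::runtime::Runtime> = Lazy::new(|| {
    tokio::runtime::Builder::new_multi_thread()
        .enable_all()
        .build()
        .unwrap()
});

// Run an async host method to completion from a synchronous binding.
pub(crate) fn in_tokio<F: Future>(f: F) -> F::Output {
    match tokio::runtime::Handle::try_current() {
        // Already inside a runtime: block_in_place (multi-threaded runtimes
        // only) lets this worker thread block on the future.
        Ok(_) => tokio::task::block_in_place(|| RUNTIME.block_on(f)),
        // No ambient runtime: just block on the private one.
        Err(_) => RUNTIME.block_on(f),
    }
}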

427
crates/wasi/src/preview2/preview2/io.rs

@@ -1,11 +1,16 @@
use crate::preview2::{
preview2::poll::PollableEntry,
stream::TableStreamExt,
wasi::io::streams::{self, InputStream, OutputStream, StreamError},
wasi::poll::poll::Pollable,
TableError, WasiView,
bindings::io::streams::{self, InputStream, OutputStream, StreamError},
bindings::poll::poll::Pollable,
filesystem::{FileInputStream, FileOutputStream},
poll::PollableFuture,
stream::{
HostInputStream, HostOutputStream, InternalInputStream, InternalOutputStream,
InternalTableStreamExt, StreamState,
},
HostPollable, TableError, TablePollableExt, WasiView,
};
use anyhow::anyhow;
use std::any::Any;
impl From<anyhow::Error> for streams::Error {
fn from(error: anyhow::Error) -> streams::Error {
@@ -28,17 +33,26 @@ impl From<TableError> for streams::Error {
}
}
impl From<StreamState> for streams::StreamStatus {
fn from(state: StreamState) -> Self {
match state {
StreamState::Open => Self::Open,
StreamState::Closed => Self::Ended,
}
}
}
const ZEROS: &[u8] = &[0; 4 * 1024 * 1024];
#[async_trait::async_trait]
impl<T: WasiView> streams::Host for T {
async fn drop_input_stream(&mut self, stream: InputStream) -> anyhow::Result<()> {
self.table_mut()
.delete::<Box<dyn crate::preview2::InputStream>>(stream)?;
self.table_mut().delete_internal_input_stream(stream)?;
Ok(())
}
async fn drop_output_stream(&mut self, stream: OutputStream) -> anyhow::Result<()> {
self.table_mut()
.delete::<Box<dyn crate::preview2::OutputStream>>(stream)?;
self.table_mut().delete_internal_output_stream(stream)?;
Ok(())
}
@@ -46,88 +60,159 @@ impl<T: WasiView> streams::Host for T {
&mut self,
stream: InputStream,
len: u64,
) -> Result<(Vec<u8>, bool), streams::Error> {
let s: &mut Box<dyn crate::preview2::InputStream> =
self.table_mut().get_input_stream_mut(stream)?;
// Len could be any `u64` value, but we don't want to
// allocate too much up front, so make a wild guess
// of an upper bound for the buffer size.
let buffer_len = std::cmp::min(len, 0x400000) as _;
let mut buffer = vec![0; buffer_len];
) -> Result<(Vec<u8>, streams::StreamStatus), streams::Error> {
match self.table_mut().get_internal_input_stream_mut(stream)? {
InternalInputStream::Host(s) => {
let (bytes, state) = HostInputStream::read(s.as_mut(), len as usize)?;
debug_assert!(bytes.len() <= len as usize);
let (bytes_read, end) = s.read(&mut buffer).await?;
buffer.truncate(bytes_read as usize);
Ok((buffer, end))
Ok((bytes.into(), state.into()))
}
InternalInputStream::File(s) => {
let (bytes, state) = FileInputStream::read(s, len as usize).await?;
Ok((bytes.into(), state.into()))
}
}
}
async fn blocking_read(
&mut self,
stream: InputStream,
len: u64,
) -> Result<(Vec<u8>, bool), streams::Error> {
// TODO: When this is really async make this block.
self.read(stream, len).await
) -> Result<(Vec<u8>, streams::StreamStatus), streams::Error> {
match self.table_mut().get_internal_input_stream_mut(stream)? {
InternalInputStream::Host(s) => {
s.ready().await?;
let (bytes, state) = HostInputStream::read(s.as_mut(), len as usize)?;
debug_assert!(bytes.len() <= len as usize);
Ok((bytes.into(), state.into()))
}
InternalInputStream::File(s) => {
let (bytes, state) = FileInputStream::read(s, len as usize).await?;
Ok((bytes.into(), state.into()))
}
}
}
async fn write(&mut self, stream: OutputStream, bytes: Vec<u8>) -> Result<u64, streams::Error> {
let s: &mut Box<dyn crate::preview2::OutputStream> =
self.table_mut().get_output_stream_mut(stream)?;
let bytes_written: u64 = s.write(&bytes).await?;
Ok(u64::try_from(bytes_written).unwrap())
async fn write(
&mut self,
stream: OutputStream,
bytes: Vec<u8>,
) -> Result<(u64, streams::StreamStatus), streams::Error> {
match self.table_mut().get_internal_output_stream_mut(stream)? {
InternalOutputStream::Host(s) => {
let (bytes_written, status) = HostOutputStream::write(s.as_mut(), bytes.into())?;
Ok((u64::try_from(bytes_written).unwrap(), status.into()))
}
InternalOutputStream::File(s) => {
let (nwritten, state) = FileOutputStream::write(s, bytes.into()).await?;
Ok((nwritten as u64, state.into()))
}
}
}
async fn blocking_write(
&mut self,
stream: OutputStream,
bytes: Vec<u8>,
) -> Result<u64, streams::Error> {
// TODO: When this is really async make this block.
self.write(stream, bytes).await
) -> Result<(u64, streams::StreamStatus), streams::Error> {
match self.table_mut().get_internal_output_stream_mut(stream)? {
InternalOutputStream::Host(s) => {
let mut bytes = bytes::Bytes::from(bytes);
let mut nwritten: usize = 0;
loop {
s.ready().await?;
let (written, state) = HostOutputStream::write(s.as_mut(), bytes.clone())?;
let _ = bytes.split_to(written);
nwritten += written;
if bytes.is_empty() || state == StreamState::Closed {
return Ok((nwritten as u64, state.into()));
}
}
}
InternalOutputStream::File(s) => {
let (written, state) = FileOutputStream::write(s, bytes.into()).await?;
Ok((written as u64, state.into()))
}
}
}
async fn skip(&mut self, stream: InputStream, len: u64) -> Result<(u64, bool), streams::Error> {
let s: &mut Box<dyn crate::preview2::InputStream> =
self.table_mut().get_input_stream_mut(stream)?;
let (bytes_skipped, end) = s.skip(len).await?;
async fn skip(
&mut self,
stream: InputStream,
len: u64,
) -> Result<(u64, streams::StreamStatus), streams::Error> {
match self.table_mut().get_internal_input_stream_mut(stream)? {
InternalInputStream::Host(s) => {
// TODO: the cast to usize should be fallible, use `.try_into()?`
let (bytes_skipped, state) = HostInputStream::skip(s.as_mut(), len as usize)?;
Ok((bytes_skipped, end))
Ok((bytes_skipped as u64, state.into()))
}
InternalInputStream::File(s) => {
let (bytes_skipped, state) = FileInputStream::skip(s, len as usize).await?;
Ok((bytes_skipped as u64, state.into()))
}
}
}
async fn blocking_skip(
&mut self,
stream: InputStream,
len: u64,
) -> Result<(u64, bool), streams::Error> {
// TODO: When this is really async make this block.
self.skip(stream, len).await
) -> Result<(u64, streams::StreamStatus), streams::Error> {
match self.table_mut().get_internal_input_stream_mut(stream)? {
InternalInputStream::Host(s) => {
s.ready().await?;
// TODO: the cast to usize should be fallible, use `.try_into()?`
let (bytes_skipped, state) = HostInputStream::skip(s.as_mut(), len as usize)?;
Ok((bytes_skipped as u64, state.into()))
}
InternalInputStream::File(s) => {
let (bytes_skipped, state) = FileInputStream::skip(s, len as usize).await?;
Ok((bytes_skipped as u64, state.into()))
}
}
}
async fn write_zeroes(
&mut self,
stream: OutputStream,
len: u64,
) -> Result<u64, streams::Error> {
let s: &mut Box<dyn crate::preview2::OutputStream> =
self.table_mut().get_output_stream_mut(stream)?;
let bytes_written: u64 = s.write_zeroes(len).await?;
Ok(bytes_written)
) -> Result<(u64, streams::StreamStatus), streams::Error> {
let s = self.table_mut().get_internal_output_stream_mut(stream)?;
let mut bytes = bytes::Bytes::from_static(ZEROS);
bytes.truncate((len as usize).min(bytes.len()));
let (written, state) = match s {
InternalOutputStream::Host(s) => HostOutputStream::write(s.as_mut(), bytes)?,
InternalOutputStream::File(s) => FileOutputStream::write(s, bytes).await?,
};
Ok((written as u64, state.into()))
}
async fn blocking_write_zeroes(
&mut self,
stream: OutputStream,
len: u64,
) -> Result<u64, streams::Error> {
// TODO: When this is really async make this block.
self.write_zeroes(stream, len).await
) -> Result<(u64, streams::StreamStatus), streams::Error> {
let mut remaining = len as usize;
let s = self.table_mut().get_internal_output_stream_mut(stream)?;
loop {
if let InternalOutputStream::Host(s) = s {
HostOutputStream::ready(s.as_mut()).await?;
}
let mut bytes = bytes::Bytes::from_static(ZEROS);
bytes.truncate(remaining.min(bytes.len()));
let (written, state) = match s {
InternalOutputStream::Host(s) => HostOutputStream::write(s.as_mut(), bytes)?,
InternalOutputStream::File(s) => FileOutputStream::write(s, bytes).await?,
};
remaining -= written;
if remaining == 0 || state == StreamState::Closed {
return Ok((len - remaining as u64, state.into()));
}
}
}
async fn splice(
@@ -135,7 +220,7 @@ impl<T: WasiView> streams::Host for T {
_src: InputStream,
_dst: OutputStream,
_len: u64,
) -> Result<(u64, bool), streams::Error> {
) -> Result<(u64, streams::StreamStatus), streams::Error> {
// TODO: We can't get two streams at the same time because they both
// carry the exclusive lifetime of `ctx`. When [`get_many_mut`] is
// stabilized, that could allow us to add a `get_many_stream_mut` or
@@ -156,24 +241,25 @@ impl<T: WasiView> streams::Host for T {
Ok(bytes_spliced)
*/
todo!()
todo!("stream splice is not implemented")
}
async fn blocking_splice(
&mut self,
src: InputStream,
dst: OutputStream,
len: u64,
) -> Result<(u64, bool), streams::Error> {
// TODO: When this is really async make this block.
self.splice(src, dst, len).await
_src: InputStream,
_dst: OutputStream,
_len: u64,
) -> Result<(u64, streams::StreamStatus), streams::Error> {
// TODO: once splice is implemented, figure out what the blocking semantics are for waiting
// on src and dest here.
todo!("stream splice is not implemented")
}
async fn forward(
&mut self,
_src: InputStream,
_dst: OutputStream,
) -> Result<u64, streams::Error> {
) -> Result<(u64, streams::StreamStatus), streams::Error> {
// TODO: We can't get two streams at the same time because they both
// carry the exclusive lifetime of `ctx`. When [`get_many_mut`] is
// stabilized, that could allow us to add a `get_many_stream_mut` or
@@ -195,21 +281,220 @@ impl<T: WasiView> streams::Host for T {
Ok(bytes_spliced)
*/
todo!()
todo!("stream forward is not implemented")
}
async fn subscribe_to_input_stream(&mut self, stream: InputStream) -> anyhow::Result<Pollable> {
Ok(self
.table_mut()
.push(Box::new(PollableEntry::Read(stream)))?)
// Ensure that table element is an input-stream:
let pollable = match self.table_mut().get_internal_input_stream_mut(stream)? {
InternalInputStream::Host(_) => {
fn input_stream_ready<'a>(stream: &'a mut dyn Any) -> PollableFuture<'a> {
// FIXME: This downcast and match should be guaranteed by the checks above,
// however, the table element at index could be changed which would make this
// panic! This is a known problem with referring to other resources in the
// table which must be fixed.
let stream = stream
.downcast_mut::<InternalInputStream>()
.expect("downcast to InternalInputStream failed");
match *stream {
InternalInputStream::Host(ref mut hs) => hs.ready(),
_ => unreachable!(),
}
}
HostPollable::TableEntry {
index: stream,
make_future: input_stream_ready,
}
}
// Files are always "ready" immediately (because we have no way to actually wait on
// readiness in epoll)
InternalInputStream::File(_) => {
HostPollable::Closure(Box::new(|| Box::pin(futures::future::ready(Ok(())))))
}
};
Ok(self.table_mut().push_host_pollable(pollable)?)
}
async fn subscribe_to_output_stream(
&mut self,
stream: OutputStream,
) -> anyhow::Result<Pollable> {
Ok(self
.table_mut()
.push(Box::new(PollableEntry::Write(stream)))?)
// Ensure that table element is an output-stream:
let pollable = match self.table_mut().get_internal_output_stream_mut(stream)? {
InternalOutputStream::Host(_) => {
fn output_stream_ready<'a>(stream: &'a mut dyn Any) -> PollableFuture<'a> {
// FIXME: This downcast and match should be guaranteed by the checks above,
// however, the table element at index could be changed which would make this
// panic! This is a known problem with referring to other resources in the
// table which must be fixed.
let stream = stream
.downcast_mut::<InternalOutputStream>()
.expect("downcast to HostOutputStream failed");
match *stream {
InternalOutputStream::Host(ref mut hs) => hs.ready(),
_ => unreachable!(),
}
}
HostPollable::TableEntry {
index: stream,
make_future: output_stream_ready,
}
}
InternalOutputStream::File(_) => {
HostPollable::Closure(Box::new(|| Box::pin(futures::future::ready(Ok(())))))
}
};
Ok(self.table_mut().push_host_pollable(pollable)?)
}
}
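subscribe_to_input_stream and subscribe_to_output_stream above construct a HostPollable in one of two shapes: a TableEntry, which re-derives a fresh readiness future from the table element on every poll, or a Closure for resources (here, files) that are always ready. The enum itself is defined outside this hunk; a sketch consistent with the constructors used above (the type aliases and field names are assumptions):

use std::any::Any;
use std::future::Future;
use std::pin::Pin;

pub type PollableFuture<'a> = Pin<Box<dyn Future<Output = anyhow::Result<()>> + Send + 'a>>;
pub type MakeFuture = for<'a> fn(&'a mut dyn Any) -> PollableFuture<'a>;

pub enum HostPollable {
    // Re-borrow a table element by index and build a fresh readiness future
    // from it each time poll-oneoff needs one.
    TableEntry { index: u32, make_future: MakeFuture },
    // Self-contained pollables produce a future without touching the table.
    Closure(Box<dyn Fn() -> PollableFuture<'static> + Send + Sync>),
}

Because make_future is a plain fn pointer over &mut dyn Any, each readiness future borrows the table entry only for as long as a single poll, which is what lets a pollable be awaited many times.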
pub mod sync {
use crate::preview2::{
bindings::io::streams::{Host as AsyncHost, StreamStatus as AsyncStreamStatus},
bindings::sync_io::io::streams::{self, InputStream, OutputStream},
bindings::sync_io::poll::poll::Pollable,
in_tokio, WasiView,
};
impl From<AsyncStreamStatus> for streams::StreamStatus {
fn from(other: AsyncStreamStatus) -> Self {
match other {
AsyncStreamStatus::Open => Self::Open,
AsyncStreamStatus::Ended => Self::Ended,
}
}
}
impl<T: WasiView> streams::Host for T {
fn drop_input_stream(&mut self, stream: InputStream) -> anyhow::Result<()> {
in_tokio(async { AsyncHost::drop_input_stream(self, stream).await })
}
fn drop_output_stream(&mut self, stream: OutputStream) -> anyhow::Result<()> {
in_tokio(async { AsyncHost::drop_output_stream(self, stream).await })
}
fn read(
&mut self,
stream: InputStream,
len: u64,
) -> Result<(Vec<u8>, streams::StreamStatus), streams::Error> {
in_tokio(async { AsyncHost::read(self, stream, len).await })
.map(|(a, b)| (a, b.into()))
.map_err(streams::Error::from)
}
fn blocking_read(
&mut self,
stream: InputStream,
len: u64,
) -> Result<(Vec<u8>, streams::StreamStatus), streams::Error> {
in_tokio(async { AsyncHost::blocking_read(self, stream, len).await })
.map(|(a, b)| (a, b.into()))
.map_err(streams::Error::from)
}
fn write(
&mut self,
stream: OutputStream,
bytes: Vec<u8>,
) -> Result<(u64, streams::StreamStatus), streams::Error> {
in_tokio(async { AsyncHost::write(self, stream, bytes).await })
.map(|(a, b)| (a, b.into()))
.map_err(streams::Error::from)
}
fn blocking_write(
&mut self,
stream: OutputStream,
bytes: Vec<u8>,
) -> Result<(u64, streams::StreamStatus), streams::Error> {
in_tokio(async { AsyncHost::blocking_write(self, stream, bytes).await })
.map(|(a, b)| (a, b.into()))
.map_err(streams::Error::from)
}
fn skip(
&mut self,
stream: InputStream,
len: u64,
) -> Result<(u64, streams::StreamStatus), streams::Error> {
in_tokio(async { AsyncHost::skip(self, stream, len).await })
.map(|(a, b)| (a, b.into()))
.map_err(streams::Error::from)
}
fn blocking_skip(
&mut self,
stream: InputStream,
len: u64,
) -> Result<(u64, streams::StreamStatus), streams::Error> {
in_tokio(async { AsyncHost::blocking_skip(self, stream, len).await })
.map(|(a, b)| (a, b.into()))
.map_err(streams::Error::from)
}
fn write_zeroes(
&mut self,
stream: OutputStream,
len: u64,
) -> Result<(u64, streams::StreamStatus), streams::Error> {
in_tokio(async { AsyncHost::write_zeroes(self, stream, len).await })
.map(|(a, b)| (a, b.into()))
.map_err(streams::Error::from)
}
fn blocking_write_zeroes(
&mut self,
stream: OutputStream,
len: u64,
) -> Result<(u64, streams::StreamStatus), streams::Error> {
in_tokio(async { AsyncHost::blocking_write_zeroes(self, stream, len).await })
.map(|(a, b)| (a, b.into()))
.map_err(streams::Error::from)
}
fn splice(
&mut self,
src: InputStream,
dst: OutputStream,
len: u64,
) -> Result<(u64, streams::StreamStatus), streams::Error> {
in_tokio(async { AsyncHost::splice(self, src, dst, len).await })
.map(|(a, b)| (a, b.into()))
.map_err(streams::Error::from)
}
fn blocking_splice(
&mut self,
src: InputStream,
dst: OutputStream,
len: u64,
) -> Result<(u64, streams::StreamStatus), streams::Error> {
in_tokio(async { AsyncHost::blocking_splice(self, src, dst, len).await })
.map(|(a, b)| (a, b.into()))
.map_err(streams::Error::from)
}
fn forward(
&mut self,
src: InputStream,
dst: OutputStream,
) -> Result<(u64, streams::StreamStatus), streams::Error> {
in_tokio(async { AsyncHost::forward(self, src, dst).await })
.map(|(a, b)| (a, b.into()))
.map_err(streams::Error::from)
}
fn subscribe_to_input_stream(&mut self, stream: InputStream) -> anyhow::Result<Pollable> {
in_tokio(async { AsyncHost::subscribe_to_input_stream(self, stream).await })
}
fn subscribe_to_output_stream(&mut self, stream: OutputStream) -> anyhow::Result<Pollable> {
in_tokio(async { AsyncHost::subscribe_to_output_stream(self, stream).await })
}
}
}
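blocking_write and blocking_write_zeroes in the async impl earlier in this file share the same shape: await ready(), attempt a non-blocking write, and stop once the buffer is drained or the stream closes. Factored into a standalone helper (the name write_all is made up; HostOutputStream and StreamState are the types introduced by this PR), the loop is roughly:

use bytes::Bytes;
use crate::preview2::{HostOutputStream, StreamState};

async fn write_all(
    s: &mut dyn HostOutputStream,
    mut bytes: Bytes,
) -> anyhow::Result<(u64, StreamState)> {
    let mut nwritten = 0u64;
    loop {
        // Wait until the stream reports write readiness...
        s.ready().await?;
        // ...then perform a non-blocking write of whatever it will accept.
        let (written, state) = s.write(bytes.clone())?;
        let _ = bytes.split_to(written);
        nwritten += written as u64;
        if bytes.is_empty() || state == StreamState::Closed {
            return Ok((nwritten, state));
        }
    }
}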

1
crates/wasi/src/preview2/preview2/mod.rs

@@ -3,5 +3,4 @@ mod env;
mod exit;
pub(crate) mod filesystem;
mod io;
mod poll;
mod random;

83
crates/wasi/src/preview2/preview2/poll.rs

@@ -1,83 +0,0 @@
use crate::preview2::{
stream::TableStreamExt,
wasi::clocks::monotonic_clock::Instant,
wasi::io::streams::{InputStream, OutputStream},
wasi::poll::poll::{self, Pollable},
WasiView,
};
/// A pollable resource table entry.
#[derive(Copy, Clone)]
pub(crate) enum PollableEntry {
/// Poll for read events.
Read(InputStream),
/// Poll for write events.
Write(OutputStream),
/// Poll for a monotonic-clock timer.
MonotonicClock(Instant, bool),
/* FIXME: need to rebuild the poll interface to let pollables be created in different crates.
/// Poll for a tcp-socket.
TcpSocket(TcpSocket),
*/
}
// Implementations of the interface. The bodies had been pulled out into
// functions above to allow them to be shared between the two worlds, which
// used to require different traits. Features have been added to facilitate
// sharing between worlds, but I want to avoid the huge whitespace diff on
// this PR.
#[async_trait::async_trait]
impl<T: WasiView> poll::Host for T {
async fn drop_pollable(&mut self, pollable: Pollable) -> anyhow::Result<()> {
self.table_mut().delete::<PollableEntry>(pollable)?;
Ok(())
}
async fn poll_oneoff(&mut self, futures: Vec<Pollable>) -> anyhow::Result<Vec<bool>> {
use crate::preview2::sched::{sync::SyncSched, Poll, Userdata, WasiSched};
// Convert `futures` into `Poll` subscriptions.
let mut poll = Poll::new();
let len = futures.len();
for (index, future) in futures.into_iter().enumerate() {
let userdata = Userdata::from(index as u64);
match *self.table().get(future)? {
PollableEntry::Read(stream) => {
let wasi_stream: &dyn crate::preview2::InputStream =
self.table().get_input_stream(stream)?;
poll.subscribe_read(wasi_stream, userdata);
}
PollableEntry::Write(stream) => {
let wasi_stream: &dyn crate::preview2::OutputStream =
self.table().get_output_stream(stream)?;
poll.subscribe_write(wasi_stream, userdata);
}
PollableEntry::MonotonicClock(when, absolute) => {
poll.subscribe_monotonic_clock(
&*self.ctx().monotonic_clock,
when,
absolute,
userdata,
);
} /*
PollableEntry::TcpSocket(tcp_socket) => {
let wasi_tcp_socket: &dyn crate::WasiTcpSocket =
self.table().get_tcp_socket(tcp_socket)?;
poll.subscribe_tcp_socket(wasi_tcp_socket, userdata);
}
*/
}
}
// Do the poll.
SyncSched.poll_oneoff(&mut poll).await?;
let mut results = vec![false; len];
for (_result, data) in poll.results() {
results[u64::from(data) as usize] = true;
}
Ok(results)
}
}
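The PollableEntry/WasiSched-based poll_oneoff above is deleted. Its replacement (not shown in this hunk) instead materializes one readiness future per pollable via HostPollable and drives them until at least one completes. As a rough sketch of that shape, and not the PR's actual host code (the function name and the poll_fn approach are assumptions):

use std::future::Future;
use std::pin::Pin;
use std::task::Poll;

use anyhow::Result;

async fn wait_for_any(
    mut futs: Vec<Pin<Box<dyn Future<Output = Result<()>> + Send + '_>>>,
) -> Result<Vec<bool>> {
    let mut done = vec![false; futs.len()];
    futures::future::poll_fn(|cx| {
        let mut any_ready = false;
        for (i, fut) in futs.iter_mut().enumerate() {
            if done[i] {
                continue;
            }
            if let Poll::Ready(r) = fut.as_mut().poll(cx) {
                r?; // a pollable may fail; the real host turns this into a trap
                done[i] = true;
                any_ready = true;
            }
        }
        if any_ready {
            Poll::Ready(Ok(()))
        } else {
            Poll::Pending
        }
    })
    .await?;
    Ok(done)
}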

17
crates/wasi/src/preview2/preview2/random.rs

@@ -1,40 +1,35 @@
use crate::preview2::wasi::random::insecure;
use crate::preview2::wasi::random::insecure_seed;
use crate::preview2::wasi::random::random;
use crate::preview2::bindings::random::{insecure, insecure_seed, random};
use crate::preview2::WasiView;
use cap_rand::{distributions::Standard, Rng};
#[async_trait::async_trait]
impl<T: WasiView> random::Host for T {
async fn get_random_bytes(&mut self, len: u64) -> anyhow::Result<Vec<u8>> {
fn get_random_bytes(&mut self, len: u64) -> anyhow::Result<Vec<u8>> {
Ok((&mut self.ctx_mut().random)
.sample_iter(Standard)
.take(len as usize)
.collect())
}
async fn get_random_u64(&mut self) -> anyhow::Result<u64> {
fn get_random_u64(&mut self) -> anyhow::Result<u64> {
Ok(self.ctx_mut().random.sample(Standard))
}
}
#[async_trait::async_trait]
impl<T: WasiView> insecure::Host for T {
async fn get_insecure_random_bytes(&mut self, len: u64) -> anyhow::Result<Vec<u8>> {
fn get_insecure_random_bytes(&mut self, len: u64) -> anyhow::Result<Vec<u8>> {
Ok((&mut self.ctx_mut().insecure_random)
.sample_iter(Standard)
.take(len as usize)
.collect())
}
async fn get_insecure_random_u64(&mut self) -> anyhow::Result<u64> {
fn get_insecure_random_u64(&mut self) -> anyhow::Result<u64> {
Ok(self.ctx_mut().insecure_random.sample(Standard))
}
}
#[async_trait::async_trait]
impl<T: WasiView> insecure_seed::Host for T {
async fn insecure_seed(&mut self) -> anyhow::Result<(u64, u64)> {
fn insecure_seed(&mut self) -> anyhow::Result<(u64, u64)> {
let seed: u128 = self.ctx_mut().insecure_random_seed;
Ok((seed as u64, (seed >> 64) as u64))
}
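insecure_seed above splits the host's u128 seed into the (low, high) pair of u64s the wit interface expects; reassembly is the inverse shift-and-or. A small worked example (the constant is arbitrary):

#[test]
fn seed_halves_roundtrip() {
    let seed: u128 = 0x0123_4567_89ab_cdef_0011_2233_4455_6677;
    // Same split as `insecure_seed`: low 64 bits first, then the high 64 bits.
    let (lo, hi) = (seed as u64, (seed >> 64) as u64);
    assert_eq!(((hi as u128) << 64) | lo as u128, seed);
}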

110
crates/wasi/src/preview2/sched.rs

@@ -1,110 +0,0 @@
#![allow(dead_code)]
use crate::preview2::{
clocks::HostMonotonicClock,
stream::{InputStream, OutputStream},
};
use anyhow::Error;
pub(crate) mod subscription;
pub(crate) mod sync;
pub use cap_std::time::Duration;
pub(crate) use subscription::{
MonotonicClockSubscription, RwSubscription, Subscription, SubscriptionResult,
};
#[async_trait::async_trait]
pub(crate) trait WasiSched: Send + Sync {
async fn poll_oneoff<'a>(&self, poll: &mut Poll<'a>) -> Result<(), Error>;
async fn sched_yield(&self) -> Result<(), Error>;
async fn sleep(&self, duration: Duration) -> Result<(), Error>;
}
#[derive(Debug, Copy, Clone, PartialEq, Eq)]
pub(crate) struct Userdata(u64);
impl From<u64> for Userdata {
fn from(u: u64) -> Userdata {
Userdata(u)
}
}
impl From<Userdata> for u64 {
fn from(u: Userdata) -> u64 {
u.0
}
}
pub(crate) struct Poll<'a> {
subs: Vec<(Subscription<'a>, Userdata)>,
}
impl<'a> Poll<'a> {
pub fn new() -> Self {
Self { subs: Vec::new() }
}
pub fn subscribe_monotonic_clock(
&mut self,
clock: &'a dyn HostMonotonicClock,
deadline: u64,
absolute: bool,
ud: Userdata,
) {
let absolute_deadline: u64 = if absolute {
deadline
} else {
// Convert a relative deadline to an absolute one. Use a saturating
// add because there are no meaningful timeouts after the monotonic
// clock overflows.
clock.now().saturating_add(deadline)
};
self.subs.push((
Subscription::MonotonicClock(MonotonicClockSubscription {
clock,
absolute_deadline,
}),
ud,
));
}
pub fn subscribe_read(&mut self, stream: &'a dyn InputStream, ud: Userdata) {
self.subs.push((
Subscription::ReadWrite(RwSubscription::new_input(stream)),
ud,
));
}
pub fn subscribe_write(&mut self, stream: &'a dyn OutputStream, ud: Userdata) {
self.subs.push((
Subscription::ReadWrite(RwSubscription::new_output(stream)),
ud,
));
}
/* FIXME need to redo poll interface to support pollables defined in other crates
pub fn subscribe_tcp_socket(&mut self, tcp_socket: &'a dyn WasiTcpSocket, ud: Userdata) {
self.subs.push((
Subscription::ReadWrite(RwSubscription::new_tcp_socket(tcp_socket)),
ud,
));
}
*/
pub fn results(self) -> impl Iterator<Item = (SubscriptionResult, Userdata)> + 'a {
self.subs
.into_iter()
.filter_map(|(s, ud)| SubscriptionResult::from_subscription(s).map(|r| (r, ud)))
}
pub fn is_empty(&self) -> bool {
self.subs.is_empty()
}
pub fn earliest_clock_deadline(&self) -> Option<&MonotonicClockSubscription<'a>> {
self.subs
.iter()
.filter_map(|(s, _ud)| match s {
Subscription::MonotonicClock(t) => Some(t),
_ => None,
})
.min_by(|a, b| a.absolute_deadline.cmp(&b.absolute_deadline))
}
pub fn rw_subscriptions<'b>(&'b mut self) -> impl Iterator<Item = &'b mut RwSubscription<'a>> {
self.subs.iter_mut().filter_map(|sub| match &mut sub.0 {
Subscription::ReadWrite(rwsub) => Some(rwsub),
_ => None,
})
}
}

105
crates/wasi/src/preview2/sched/subscription.rs

@@ -1,105 +0,0 @@
use crate::preview2::{
clocks::HostMonotonicClock,
stream::{InputStream, OutputStream},
};
use anyhow::Error;
use bitflags::bitflags;
bitflags! {
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
pub struct RwEventFlags: u32 {
const HANGUP = 0b1;
}
}
pub enum RwStream<'a> {
// fixme: rename?
Read(&'a dyn InputStream),
Write(&'a dyn OutputStream),
/*
TcpSocket(&'a dyn WasiTcpSocket),
*/
}
pub struct RwSubscription<'a> {
pub stream: RwStream<'a>,
status: Option<Result<RwEventFlags, Error>>,
}
impl<'a> RwSubscription<'a> {
pub fn new_input(stream: &'a dyn InputStream) -> Self {
Self {
stream: RwStream::Read(stream),
status: None,
}
}
pub fn new_output(stream: &'a dyn OutputStream) -> Self {
Self {
stream: RwStream::Write(stream),
status: None,
}
}
/*
pub fn new_tcp_socket(tcp_socket: &'a dyn WasiTcpSocket) -> Self {
Self {
stream: RwStream::TcpSocket(tcp_socket),
status: None,
}
}
*/
pub fn complete(&mut self, flags: RwEventFlags) {
self.status = Some(Ok(flags))
}
pub fn error(&mut self, error: Error) {
self.status = Some(Err(error))
}
pub fn result(&mut self) -> Option<Result<RwEventFlags, Error>> {
self.status.take()
}
pub fn is_complete(&self) -> bool {
self.status.is_some()
}
}
pub struct MonotonicClockSubscription<'a> {
pub clock: &'a dyn HostMonotonicClock,
pub absolute_deadline: u64,
}
impl<'a> MonotonicClockSubscription<'a> {
pub fn now(&self) -> u64 {
self.clock.now()
}
pub fn duration_until(&self) -> Option<u64> {
self.absolute_deadline.checked_sub(self.now())
}
pub fn result(&self) -> Option<Result<(), Error>> {
if self.now() >= self.absolute_deadline {
Some(Ok(()))
} else {
None
}
}
}
pub enum Subscription<'a> {
ReadWrite(RwSubscription<'a>),
MonotonicClock(MonotonicClockSubscription<'a>),
}
#[derive(Debug)]
pub enum SubscriptionResult {
ReadWrite(Result<RwEventFlags, Error>),
MonotonicClock(Result<(), Error>),
}
impl SubscriptionResult {
pub fn from_subscription(s: Subscription) -> Option<SubscriptionResult> {
match s {
Subscription::ReadWrite(mut s) => {
s.result().map(|sub| SubscriptionResult::ReadWrite(sub))
}
Subscription::MonotonicClock(s) => s.result().map(SubscriptionResult::MonotonicClock),
}
}
}

156
crates/wasi/src/preview2/sched/sync.rs

@@ -1,156 +0,0 @@
use crate::preview2::sched::{
subscription::{RwEventFlags, RwStream},
Poll, WasiSched,
};
use rustix::event::{PollFd, PollFlags};
use std::thread;
use std::time::Duration;
use anyhow::Error;
pub(crate) async fn poll_oneoff<'a>(poll: &mut Poll<'a>) -> Result<(), Error> {
// Collect all stream I/O subscriptions. Clock subscriptions are handled
// separately below.
let mut ready = false;
let mut pollfds = Vec::new();
for rwsub in poll.rw_subscriptions() {
match rwsub.stream {
RwStream::Read(stream) => {
// Poll things that can be polled.
if let Some(fd) = stream.pollable_read() {
#[cfg(unix)]
{
pollfds.push(PollFd::from_borrowed_fd(fd, PollFlags::IN));
continue;
}
#[cfg(windows)]
{
if let Some(fd) = fd.as_socket() {
pollfds.push(PollFd::from_borrowed_fd(fd, PollFlags::IN));
continue;
}
}
}
// Allow in-memory buffers or other immediately-available
// sources to complete successfully.
if let Ok(nbytes) = stream.num_ready_bytes().await {
if nbytes != 0 {
rwsub.complete(RwEventFlags::empty());
ready = true;
continue;
}
}
return Err(anyhow::anyhow!("stream is not pollable for reading"));
}
RwStream::Write(stream) => {
let fd = stream
.pollable_write()
.ok_or_else(|| anyhow::anyhow!("stream is not pollable for writing"))?;
#[cfg(unix)]
{
pollfds.push(PollFd::from_borrowed_fd(fd, PollFlags::OUT));
}
#[cfg(windows)]
{
if let Some(fd) = fd.as_socket() {
pollfds.push(PollFd::from_borrowed_fd(fd, PollFlags::OUT));
} else {
return Err(anyhow::anyhow!(
"unimplemented: polling for writing to non-OS resources"
));
}
}
} /* FIXME redesign of sched to make it possible to define pollables out of crate
RwStream::TcpSocket(tcp_socket) => {
let fd = tcp_socket.pollable();
pollfds.push(PollFd::from_borrowed_fd(fd, PollFlags::IN | PollFlags::PRI));
}
*/
}
}
// If we didn't have any streams that are immediately available, do an OS
// `poll` to wait for streams to become available.
if !ready {
loop {
let poll_timeout = if let Some(t) = poll.earliest_clock_deadline() {
// Convert the timeout to milliseconds for `poll`, rounding up.
//
// TODO: On Linux and FreeBSD, we could use `ppoll` instead
// which takes a `timespec.`
((t.absolute_deadline.saturating_sub(t.clock.now()) + 999_999) / 1_000_000)
.try_into()
.map_err(|_| anyhow::anyhow!("overflow: poll timeout"))?
} else {
// A negative value requests an infinite timeout.
-1
};
tracing::debug!(
poll_timeout = tracing::field::debug(poll_timeout),
poll_fds = tracing::field::debug(&pollfds),
"poll"
);
match rustix::event::poll(&mut pollfds, poll_timeout) {
Ok(_num_ready) => {
ready = true;
break;
}
Err(rustix::io::Errno::INTR) => continue,
Err(err) => return Err(std::io::Error::from(err).into()),
}
}
assert_eq!(poll.rw_subscriptions().count(), pollfds.len());
// If the OS `poll` returned events, record them.
if ready {
// Iterate through the stream subscriptions, skipping those that
// were already completed due to being immediately available.
for (rwsub, pollfd) in poll.rw_subscriptions().zip(pollfds.into_iter()) {
let revents = pollfd.revents();
if revents.contains(PollFlags::NVAL) {
rwsub.error(anyhow::anyhow!("rw subscription badf"));
} else if revents.contains(PollFlags::ERR) {
rwsub.error(anyhow::anyhow!("rw subscription io error"));
} else if revents.contains(PollFlags::HUP) {
rwsub.complete(RwEventFlags::HANGUP);
} else {
rwsub.complete(RwEventFlags::empty());
};
}
}
};
// If we had no immediately-available events and no events becoming
// available in a `poll`, it means we timed out. Report that event.
if !ready {
poll.earliest_clock_deadline()
.expect("timed out")
.result()
.expect("timer deadline is past")
.unwrap()
}
Ok(())
}
pub(crate) struct SyncSched;
#[async_trait::async_trait]
impl WasiSched for SyncSched {
async fn poll_oneoff<'a>(&self, poll: &mut Poll<'a>) -> Result<(), Error> {
poll_oneoff(poll).await
}
async fn sched_yield(&self) -> Result<(), Error> {
thread::yield_now();
Ok(())
}
async fn sleep(&self, duration: Duration) -> Result<(), Error> {
std::thread::sleep(duration);
Ok(())
}
}

221
crates/wasi/src/preview2/stdio.rs

@@ -1,176 +1,67 @@
use anyhow::Error;
use std::any::Any;
use std::convert::TryInto;
use std::io::{self, Read, Write};
use system_interface::io::ReadReady;
use crate::preview2::pipe::AsyncWriteStream;
use crate::preview2::{InputStream, OutputStream};
#[cfg(unix)]
use cap_std::io_lifetimes::{AsFd, BorrowedFd};
#[cfg(windows)]
use cap_std::io_lifetimes::{AsHandle, BorrowedHandle};
#[cfg(windows)]
use io_extras::os::windows::{AsHandleOrSocket, BorrowedHandleOrSocket};
pub struct Stdin(std::io::Stdin);
pub fn stdin() -> Stdin {
Stdin(std::io::stdin())
}
#[async_trait::async_trait]
impl InputStream for Stdin {
fn as_any(&self) -> &dyn Any {
self
}
#[cfg(unix)]
fn pollable_read(&self) -> Option<rustix::fd::BorrowedFd> {
Some(self.0.as_fd())
}
#[cfg(windows)]
fn pollable_read(&self) -> Option<io_extras::os::windows::BorrowedHandleOrSocket> {
Some(self.0.as_handle_or_socket())
}
async fn read(&mut self, buf: &mut [u8]) -> Result<(u64, bool), Error> {
match Read::read(&mut self.0, buf) {
Ok(0) => Ok((0, true)),
Ok(n) => Ok((n as u64, false)),
Err(err) if err.kind() == io::ErrorKind::Interrupted => Ok((0, false)),
Err(err) => Err(err.into()),
}
}
async fn read_vectored<'a>(
&mut self,
bufs: &mut [io::IoSliceMut<'a>],
) -> Result<(u64, bool), Error> {
match Read::read_vectored(&mut self.0, bufs) {
Ok(0) => Ok((0, true)),
Ok(n) => Ok((n as u64, false)),
Err(err) if err.kind() == io::ErrorKind::Interrupted => Ok((0, false)),
Err(err) => Err(err.into()),
}
}
#[cfg(can_vector)]
fn is_read_vectored(&self) {
Read::is_read_vectored(&mut self.0)
}
async fn skip(&mut self, nelem: u64) -> Result<(u64, bool), Error> {
let num = io::copy(&mut io::Read::take(&mut self.0, nelem), &mut io::sink())?;
Ok((num, num < nelem))
}
async fn num_ready_bytes(&self) -> Result<u64, Error> {
Ok(self.0.num_ready_bytes()?)
}
async fn readable(&self) -> Result<(), Error> {
Err(anyhow::anyhow!("idk"))
}
}
#[cfg(windows)]
impl AsHandle for Stdin {
fn as_handle(&self) -> BorrowedHandle<'_> {
self.0.as_handle()
}
}
#[cfg(windows)]
impl AsHandleOrSocket for Stdin {
#[inline]
fn as_handle_or_socket(&self) -> BorrowedHandleOrSocket {
self.0.as_handle_or_socket()
}
}
mod unix;
#[cfg(unix)]
impl AsFd for Stdin {
fn as_fd(&self) -> BorrowedFd<'_> {
self.0.as_fd()
}
}
macro_rules! wasi_output_stream_impl {
($ty:ty, $ident:ident) => {
#[async_trait::async_trait]
impl OutputStream for $ty {
fn as_any(&self) -> &dyn Any {
self
}
pub use self::unix::{stdin, Stdin};
#[cfg(unix)]
fn pollable_write(&self) -> Option<rustix::fd::BorrowedFd> {
Some(self.0.as_fd())
}
#[cfg(windows)]
fn pollable_write(&self) -> Option<io_extras::os::windows::BorrowedHandleOrSocket> {
Some(self.0.as_handle_or_socket())
}
async fn write(&mut self, buf: &[u8]) -> Result<u64, Error> {
let n = Write::write(&mut self.0, buf)?;
Ok(n.try_into()?)
}
async fn write_vectored<'a>(&mut self, bufs: &[io::IoSlice<'a>]) -> Result<u64, Error> {
let n = Write::write_vectored(&mut self.0, bufs)?;
Ok(n.try_into()?)
}
#[cfg(can_vector)]
fn is_write_vectored(&self) {
Write::is_write_vectored(&mut self.0)
}
// TODO: Optimize for stdio streams.
/*
async fn splice(
&mut self,
src: &mut dyn InputStream,
nelem: u64,
) -> Result<u64, Error> {
todo!()
}
*/
async fn write_zeroes(&mut self, nelem: u64) -> Result<u64, Error> {
let num = io::copy(&mut io::Read::take(io::repeat(0), nelem), &mut self.0)?;
Ok(num)
}
async fn writable(&self) -> Result<(), Error> {
Ok(())
}
}
#[cfg(windows)]
impl AsHandle for $ty {
fn as_handle(&self) -> BorrowedHandle<'_> {
self.0.as_handle()
}
}
#[cfg(unix)]
impl AsFd for $ty {
fn as_fd(&self) -> BorrowedFd<'_> {
self.0.as_fd()
}
}
#[cfg(windows)]
impl AsHandleOrSocket for $ty {
#[inline]
fn as_handle_or_socket(&self) -> BorrowedHandleOrSocket {
self.0.as_handle_or_socket()
}
}
};
}
#[allow(dead_code)]
mod worker_thread_stdin;
#[cfg(windows)]
pub use self::worker_thread_stdin::{stdin, Stdin};
pub struct Stdout(std::io::Stdout);
pub type Stdout = AsyncWriteStream;
pub fn stdout() -> Stdout {
Stdout(std::io::stdout())
AsyncWriteStream::new(tokio::io::stdout())
}
wasi_output_stream_impl!(Stdout, Stdout);
pub struct Stderr(std::io::Stderr);
pub type Stderr = AsyncWriteStream;
pub fn stderr() -> Stderr {
Stderr(std::io::stderr())
AsyncWriteStream::new(tokio::io::stderr())
}
#[cfg(all(unix, test))]
mod test {
// This could even be parameterized somehow to use the worker thread stdin vs the asyncfd
// stdin.
#[test]
fn test_stdin_by_forking() {
// Make pipe for emulating stdin.
// Make pipe for getting results.
// Fork.
// When child:
// close stdin fd.
// use dup2 to turn the pipe recv end into the stdin fd.
// in a tokio runtime:
// let stdin = super::stdin();
// // Make sure the initial state is that stdin is not ready:
// if timeout(stdin.ready().await).is_timeout() {
// send "start\n" on result pipe.
// }
// loop {
// match timeout(stdin.ready().await) {
// Ok => {
// let bytes = stdin.read();
// if bytes == ending sentinel:
// exit
// if bytes == some other sentinel:
// return and go back to the thing where we start the tokio runtime,
// testing that when creating a new super::stdin() it works correctly
// send "got: {bytes:?}\n" on result pipe.
// }
// Err => {
// send "timed out\n" on result pipe.
// }
// }
// }
// When parent:
// wait to recv "start\n" on result pipe (or the child process exits)
// send some bytes to child stdin.
// make sure we get back "got {bytes:?}" on result pipe (or the child process exits)
// sleep for a while.
// make sure we get back "timed out" on result pipe (or the child process exits)
// send some bytes again. and etc.
//
}
}
wasi_output_stream_impl!(Stderr, Stderr);
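With this change stdout and stderr are AsyncWriteStream wrappers over tokio's handles; AsyncWriteStream itself lives in pipe.rs and is not part of this hunk. The general pattern such a wrapper must satisfy - a write that never blocks plus a ready that awaits capacity - can be illustrated with a channel-backed adapter (ChannelWriteStream is a made-up name, not the PR's implementation; it must be constructed inside a Tokio runtime because it spawns a task):

use anyhow::Error;
use bytes::Bytes;
use tokio::io::AsyncWriteExt;
use tokio::sync::mpsc;

use crate::preview2::{HostOutputStream, StreamState};

pub struct ChannelWriteStream {
    tx: mpsc::Sender<Bytes>,
}

impl ChannelWriteStream {
    pub fn new<W>(mut writer: W) -> Self
    where
        W: tokio::io::AsyncWrite + Unpin + Send + 'static,
    {
        let (tx, mut rx) = mpsc::channel::<Bytes>(16);
        // Background task performs the actual (possibly blocking) writes.
        tokio::spawn(async move {
            while let Some(chunk) = rx.recv().await {
                if writer.write_all(&chunk).await.is_err() {
                    break; // dropping `rx` closes the channel for senders
                }
            }
        });
        Self { tx }
    }
}

#[async_trait::async_trait]
impl HostOutputStream for ChannelWriteStream {
    fn write(&mut self, bytes: Bytes) -> Result<(usize, StreamState), Error> {
        let len = bytes.len();
        match self.tx.try_send(bytes) {
            Ok(()) => Ok((len, StreamState::Open)),
            // Channel full: accept nothing now; the caller should await `ready`.
            Err(mpsc::error::TrySendError::Full(_)) => Ok((0, StreamState::Open)),
            Err(mpsc::error::TrySendError::Closed(_)) => Ok((0, StreamState::Closed)),
        }
    }
    async fn ready(&mut self) -> Result<(), Error> {
        // Wait for capacity without consuming it; a later `write` may still
        // find the channel full again and simply report zero bytes written.
        let _ = self.tx.reserve().await;
        Ok(())
    }
}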

130
crates/wasi/src/preview2/stdio/unix.rs

@@ -0,0 +1,130 @@
use crate::preview2::{pipe::AsyncReadStream, HostInputStream, StreamState};
use anyhow::Error;
use bytes::Bytes;
use futures::ready;
use std::future::Future;
use std::io::{self, Read};
use std::pin::Pin;
use std::task::{Context, Poll};
use tokio::io::unix::AsyncFd;
use tokio::io::{AsyncRead, ReadBuf};
// wasmtime can't use std::sync::OnceLock yet because of an LLVM regression in
// 1.70. When 1.71 is released, we can switch to using std here.
use once_cell::sync::OnceCell as OnceLock;
use std::sync::Mutex;
// We need a single global instance of the AsyncFd<Stdin> because creating
// this instance registers the process's stdin fd with epoll, which will
// return an error if an fd is registered more than once.
struct GlobalStdin(Mutex<AsyncReadStream>);
static STDIN: OnceLock<GlobalStdin> = OnceLock::new();
impl GlobalStdin {
fn new() -> anyhow::Result<Self> {
Ok(Self(Mutex::new(AsyncReadStream::new(InnerStdin::new()?))))
}
fn read(&self, size: usize) -> Result<(Bytes, StreamState), Error> {
HostInputStream::read(&mut *self.0.lock().unwrap(), size)
}
fn ready<'a>(&'a self) -> impl Future<Output = Result<(), Error>> + 'a {
// Custom Future impl takes the std mutex in each invocation of poll.
// Required so we don't have to use a tokio mutex, which we can't take from
// inside a sync context in Self::read.
//
// Taking the lock, creating a fresh ready() future, polling it once, and
// then releasing the lock is acceptable here because the ready() future
// is only ever going to await on a single channel recv, plus some management
// of a state machine (for buffering).
struct Ready<'a>(&'a GlobalStdin);
impl<'a> Future for Ready<'a> {
type Output = Result<(), Error>;
fn poll(mut self: Pin<&mut Self>, cx: &mut Context) -> Poll<Self::Output> {
let mut locked = self.as_mut().0 .0.lock().unwrap();
let fut = locked.ready();
tokio::pin!(fut);
fut.poll(cx)
}
}
Ready(self)
}
}
pub struct Stdin;
impl Stdin {
fn get_global() -> &'static GlobalStdin {
// Creation must be running in a tokio context to succeed.
match tokio::runtime::Handle::try_current() {
Ok(_) => STDIN.get_or_init(|| {
GlobalStdin::new().expect("creating AsyncFd for stdin in existing tokio context")
}),
Err(_) => STDIN.get_or_init(|| {
crate::preview2::in_tokio(async {
GlobalStdin::new()
.expect("creating AsyncFd for stdin in internal tokio context")
})
}),
}
}
}
pub fn stdin() -> Stdin {
Stdin
}
#[async_trait::async_trait]
impl crate::preview2::HostInputStream for Stdin {
fn read(&mut self, size: usize) -> Result<(Bytes, StreamState), Error> {
Self::get_global().read(size)
}
async fn ready(&mut self) -> Result<(), Error> {
Self::get_global().ready().await
}
}
struct InnerStdin {
inner: AsyncFd<std::io::Stdin>,
}
impl InnerStdin {
pub fn new() -> anyhow::Result<Self> {
use rustix::fs::OFlags;
use std::os::fd::AsRawFd;
let stdin = std::io::stdin();
let borrowed_fd = unsafe { rustix::fd::BorrowedFd::borrow_raw(stdin.as_raw_fd()) };
let flags = rustix::fs::fcntl_getfl(borrowed_fd)?;
if !flags.contains(OFlags::NONBLOCK) {
rustix::fs::fcntl_setfl(borrowed_fd, flags.union(OFlags::NONBLOCK))?;
}
Ok(Self {
inner: AsyncFd::new(std::io::stdin())?,
})
}
}
impl AsyncRead for InnerStdin {
fn poll_read(
mut self: Pin<&mut Self>,
cx: &mut Context<'_>,
buf: &mut ReadBuf<'_>,
) -> Poll<io::Result<()>> {
loop {
let mut guard = ready!(self.inner.poll_read_ready_mut(cx))?;
let unfilled = buf.initialize_unfilled();
match guard.try_io(|inner| inner.get_mut().read(unfilled)) {
Ok(Ok(len)) => {
buf.advance(len);
return Poll::Ready(Ok(()));
}
Ok(Err(err)) => return Poll::Ready(Err(err)),
Err(_would_block) => continue,
}
}
}
}
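The singleton above only initializes correctly inside a Tokio context, read never blocks, and callers are expected to await ready before retrying. A hypothetical usage sketch written alongside this module (the function is illustrative, not part of the PR):

use crate::preview2::{HostInputStream, StreamState};

async fn drain_some_stdin() -> anyhow::Result<Vec<u8>> {
    let mut stdin = super::stdin();
    loop {
        // Non-blocking: returns whatever is buffered right now, possibly nothing.
        let (bytes, state) = stdin.read(4096)?;
        if !bytes.is_empty() || state == StreamState::Closed {
            return Ok(bytes.to_vec());
        }
        // Only this await blocks, until the AsyncFd reports read readiness.
        stdin.ready().await?;
    }
}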

117
crates/wasi/src/preview2/stdio/worker_thread_stdin.rs

@@ -0,0 +1,117 @@
use crate::preview2::{HostInputStream, StreamState};
use anyhow::{Context, Error};
use bytes::Bytes;
use tokio::sync::{mpsc, oneshot};
// wasmtime can't use std::sync::OnceLock yet because of an LLVM regression in
// 1.70. When 1.71 is released, we can switch to using std here.
use once_cell::sync::OnceCell as OnceLock;
use std::sync::Mutex;
// We need a single global stdin worker because the underlying process stdin
// handle is a single shared resource; every Stdin handle coordinates its
// readiness checks through the one worker thread spawned below.
struct GlobalStdin {
tx: mpsc::Sender<oneshot::Sender<anyhow::Result<()>>>,
// FIXME use a Watch to check for readiness instead of sending a oneshot sender
}
static STDIN: OnceLock<Mutex<GlobalStdin>> = OnceLock::new();
fn create() -> Mutex<GlobalStdin> {
let (tx, mut rx) = mpsc::channel::<oneshot::Sender<anyhow::Result<()>>>(1);
std::thread::spawn(move || {
use std::io::BufRead;
// A client is interested in stdin's readiness.
// Don't care about the None case - the GlobalStdin sender on the other
// end of this pipe will live forever, because it lives inside the OnceLock.
while let Some(msg) = rx.blocking_recv() {
// Fill buf - can we skip this if it's
// already filled?
// Also, this could block forever and the
// client could give up; in that case,
// another client may want to start waiting.
let r = std::io::stdin()
.lock()
.fill_buf()
.map(|_| ())
.map_err(anyhow::Error::from);
// tell the client stdin is ready for reading.
// don't care if the client happens to have died.
let _ = msg.send(r);
}
});
Mutex::new(GlobalStdin { tx })
}
pub struct Stdin;
impl Stdin {
fn get_global() -> &'static Mutex<GlobalStdin> {
STDIN.get_or_init(|| create())
}
}
pub fn stdin() -> Stdin {
// This implementation still needs to be fixed, and we need better test coverage.
// We are deferring that work to a future PR.
// https://github.com/bytecodealliance/wasmtime/pull/6556#issuecomment-1646232646
panic!("worker-thread based stdin is not yet implemented");
// Stdin
}
#[async_trait::async_trait]
impl HostInputStream for Stdin {
fn read(&mut self, size: usize) -> Result<(Bytes, StreamState), Error> {
use std::io::Read;
let mut buf = vec![0; size];
// FIXME: this is actually blocking. This whole implementation is likely bogus as a result
let nbytes = std::io::stdin().read(&mut buf)?;
buf.truncate(nbytes);
Ok((
buf.into(),
if nbytes > 0 {
StreamState::Open
} else {
StreamState::Closed
},
))
}
async fn ready(&mut self) -> Result<(), Error> {
use mpsc::error::TrySendError;
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};
// Custom Future impl takes the std mutex in each invocation of poll.
// Required so we don't have to use a tokio mutex, which we can't take from
// inside a sync context in Self::read.
//
// Take the lock, attempt to send on the channel; if the channel is full,
// stash the sender back and return Pending so the next poll can retry.
struct Send(Option<oneshot::Sender<anyhow::Result<()>>>);
impl Future for Send {
type Output = anyhow::Result<()>;
fn poll(mut self: Pin<&mut Self>, _: &mut Context) -> Poll<Self::Output> {
let locked = Stdin::get_global().lock().unwrap();
let to_send = self.as_mut().0.take().expect("to_send should be some");
match locked.tx.try_send(to_send) {
Ok(()) => Poll::Ready(Ok(())),
Err(TrySendError::Full(to_send)) => {
self.as_mut().0.replace(to_send);
Poll::Pending
}
Err(TrySendError::Closed(_)) => {
Poll::Ready(Err(anyhow::anyhow!("channel to GlobalStdin closed")))
}
}
}
}
let (result_tx, rx) = oneshot::channel::<anyhow::Result<()>>();
Box::pin(Send(Some(result_tx)))
.await
.context("sending message to worker thread")?;
rx.await.expect("channel is always alive")
}
}
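The FIXME above suggests a Watch instead of mailing oneshot senders to the worker. As an idea sketch only (not code from this PR): the worker publishes a readiness generation counter and clients await a change. A real version would still need to gate fill_buf on an outstanding consumer so the loop does not spin once data is buffered but unread.

use tokio::sync::watch;

fn spawn_stdin_watcher() -> watch::Receiver<u64> {
    let (tx, rx) = watch::channel(0u64);
    std::thread::spawn(move || {
        use std::io::BufRead;
        let mut generation = 0u64;
        loop {
            // Blocks this worker thread until stdin has buffered data (or errors).
            if std::io::stdin().lock().fill_buf().is_err() {
                break;
            }
            generation += 1;
            if tx.send(generation).is_err() {
                break; // every receiver has been dropped
            }
        }
    });
    rx
}

A client's ready() would then be roughly `rx.changed().await`.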

394
crates/wasi/src/preview2/stream.rs

@@ -1,210 +1,302 @@
use crate::preview2::filesystem::{FileInputStream, FileOutputStream};
use crate::preview2::{Table, TableError};
use anyhow::Error;
use std::any::Any;
use bytes::Bytes;
/// An input bytestream.
///
/// This is "pseudo" because the real streams will be a type in wit, and
/// built into the wit bindings, and will support async and type parameters.
/// This pseudo-stream abstraction is synchronous and only supports bytes.
#[async_trait::async_trait]
pub trait InputStream: Send + Sync {
fn as_any(&self) -> &dyn Any;
#[derive(Clone, Copy, Debug, PartialEq)]
pub enum StreamState {
Open,
Closed,
}
/// If this stream is reading from a host file descriptor, return it so
/// that it can be polled with a host poll.
#[cfg(unix)]
fn pollable_read(&self) -> Option<rustix::fd::BorrowedFd> {
None
impl StreamState {
pub fn is_closed(&self) -> bool {
*self == Self::Closed
}
}
/// If this stream is reading from a host file descriptor, return it so
/// that it can be polled with a host poll.
#[cfg(windows)]
fn pollable_read(&self) -> Option<io_extras::os::windows::BorrowedHandleOrSocket> {
None
}
/// Host trait for implementing the `wasi:io/streams.input-stream` resource: A
/// bytestream which can be read from.
#[async_trait::async_trait]
pub trait HostInputStream: Send + Sync {
/// Read bytes. On success, returns the bytes read (at most `size` of them)
/// together with a flag indicating whether the end of the stream was reached.
/// Important: this read must be non-blocking!
fn read(&mut self, size: usize) -> Result<(Bytes, StreamState), Error>;
/// Read bytes. On success, returns a pair holding the number of bytes read
/// and a flag indicating whether the end of the stream was reached.
async fn read(&mut self, _buf: &mut [u8]) -> Result<(u64, bool), Error> {
Err(anyhow::anyhow!("badf"))
}
/// Read bytes from a stream and discard them. Important: this method must
/// be non-blocking!
fn skip(&mut self, nelem: usize) -> Result<(usize, StreamState), Error> {
let mut nread = 0;
let mut state = StreamState::Open;
/// Vectored-I/O form of `read`.
async fn read_vectored<'a>(
&mut self,
_bufs: &mut [std::io::IoSliceMut<'a>],
) -> Result<(u64, bool), Error> {
Err(anyhow::anyhow!("badf"))
}
let (bs, read_state) = self.read(nelem)?;
// TODO: handle the case where `bs.len()` is less than `nelem`
nread += bs.len();
if read_state.is_closed() {
state = read_state;
}
/// Test whether vectored I/O reads are known to be optimized in the
/// underlying implementation.
fn is_read_vectored(&self) -> bool {
false
Ok((nread, state))
}
/// Read bytes from a stream and discard them.
async fn skip(&mut self, nelem: u64) -> Result<(u64, bool), Error> {
let mut nread = 0;
let mut saw_end = false;
// TODO: Optimize by reading more than one byte at a time.
for _ in 0..nelem {
let (num, end) = self.read(&mut [0]).await?;
nread += num;
if end {
saw_end = true;
break;
}
/// Check for read readiness: this method blocks until the stream is ready
/// for reading.
async fn ready(&mut self) -> Result<(), Error>;
}
/// Host trait for implementing the `wasi:io/streams.output-stream` resource:
/// A bytestream which can be written to.
#[async_trait::async_trait]
pub trait HostOutputStream: Send + Sync {
/// Write bytes. On success, returns the number of bytes written.
/// Important: this write must be non-blocking!
fn write(&mut self, bytes: Bytes) -> Result<(usize, StreamState), Error>;
/// Transfer bytes directly from an input stream to an output stream.
/// Important: this splice must be non-blocking!
fn splice(
&mut self,
src: &mut dyn HostInputStream,
nelem: usize,
) -> Result<(usize, StreamState), Error> {
let mut nspliced = 0;
let mut state = StreamState::Open;
// TODO: handle the case where `bs.len()` is less than `nelem`
let (bs, read_state) = src.read(nelem)?;
// TODO: handle the case where write returns less than `bs.len()`
let (nwritten, _write_state) = self.write(bs)?;
nspliced += nwritten;
if read_state.is_closed() {
state = read_state;
}
Ok((nread, saw_end))
Ok((nspliced, state))
}
/// Return the number of bytes that may be read without blocking.
async fn num_ready_bytes(&self) -> Result<u64, Error> {
Ok(0)
/// Repeatedly write a byte to a stream. Important: this write must be
/// non-blocking!
fn write_zeroes(&mut self, nelem: usize) -> Result<(usize, StreamState), Error> {
// TODO: We could optimize this to not allocate one big zeroed buffer, and instead write
// repeatedly from a 'static buffer of zeros.
let bs = Bytes::from_iter(core::iter::repeat(0 as u8).take(nelem));
let r = self.write(bs)?;
Ok(r)
}
/// Test whether this stream is readable.
async fn readable(&self) -> Result<(), Error>;
/// Check for write readiness: this method blocks until the stream is
/// ready for writing.
async fn ready(&mut self) -> Result<(), Error>;
}
/// An output bytestream.
///
/// This is "pseudo" because the real streams will be a type in wit, and
/// built into the wit bindings, and will support async and type parameters.
/// This pseudo-stream abstraction is synchronous and only supports bytes.
#[async_trait::async_trait]
pub trait OutputStream: Send + Sync {
fn as_any(&self) -> &dyn Any;
pub(crate) enum InternalInputStream {
Host(Box<dyn HostInputStream>),
File(FileInputStream),
}
/// If this stream is writing from a host file descriptor, return it so
/// that it can be polled with a host poll.
#[cfg(unix)]
fn pollable_write(&self) -> Option<rustix::fd::BorrowedFd> {
None
}
pub(crate) enum InternalOutputStream {
Host(Box<dyn HostOutputStream>),
File(FileOutputStream),
}
/// If this stream is writing from a host file descriptor, return it so
/// that it can be polled with a host poll.
#[cfg(windows)]
fn pollable_write(&self) -> Option<io_extras::os::windows::BorrowedHandleOrSocket> {
None
}
pub(crate) trait InternalTableStreamExt {
fn push_internal_input_stream(
&mut self,
istream: InternalInputStream,
) -> Result<u32, TableError>;
fn get_internal_input_stream_mut(
&mut self,
fd: u32,
) -> Result<&mut InternalInputStream, TableError>;
fn delete_internal_input_stream(&mut self, fd: u32) -> Result<InternalInputStream, TableError>;
/// Write bytes. On success, returns the number of bytes written.
async fn write(&mut self, _buf: &[u8]) -> Result<u64, Error> {
Err(anyhow::anyhow!("badf"))
fn push_internal_output_stream(
&mut self,
ostream: InternalOutputStream,
) -> Result<u32, TableError>;
fn get_internal_output_stream_mut(
&mut self,
fd: u32,
) -> Result<&mut InternalOutputStream, TableError>;
fn delete_internal_output_stream(
&mut self,
fd: u32,
) -> Result<InternalOutputStream, TableError>;
}
impl InternalTableStreamExt for Table {
fn push_internal_input_stream(
&mut self,
istream: InternalInputStream,
) -> Result<u32, TableError> {
self.push(Box::new(istream))
}
/// Vectored-I/O form of `write`.
async fn write_vectored<'a>(&mut self, _bufs: &[std::io::IoSlice<'a>]) -> Result<u64, Error> {
Err(anyhow::anyhow!("badf"))
fn get_internal_input_stream_mut(
&mut self,
fd: u32,
) -> Result<&mut InternalInputStream, TableError> {
self.get_mut(fd)
}
/// Test whether vectored I/O writes are known to be optimized in the
/// underlying implementation.
fn is_write_vectored(&self) -> bool {
false
fn delete_internal_input_stream(&mut self, fd: u32) -> Result<InternalInputStream, TableError> {
self.delete(fd)
}
/// Transfer bytes directly from an input stream to an output stream.
async fn splice(
fn push_internal_output_stream(
&mut self,
src: &mut dyn InputStream,
nelem: u64,
) -> Result<(u64, bool), Error> {
let mut nspliced = 0;
let mut saw_end = false;
// TODO: Optimize by splicing more than one byte at a time.
for _ in 0..nelem {
let mut buf = [0u8];
let (num, end) = src.read(&mut buf).await?;
self.write(&buf).await?;
nspliced += num;
if end {
saw_end = true;
break;
}
}
Ok((nspliced, saw_end))
ostream: InternalOutputStream,
) -> Result<u32, TableError> {
self.push(Box::new(ostream))
}
/// Repeatedly write a byte to a stream.
async fn write_zeroes(&mut self, nelem: u64) -> Result<u64, Error> {
let mut nwritten = 0;
// TODO: Optimize by writing more than one byte at a time.
for _ in 0..nelem {
let num = self.write(&[0]).await?;
if num == 0 {
break;
}
nwritten += num;
}
Ok(nwritten)
fn get_internal_output_stream_mut(
&mut self,
fd: u32,
) -> Result<&mut InternalOutputStream, TableError> {
self.get_mut(fd)
}
fn delete_internal_output_stream(
&mut self,
fd: u32,
) -> Result<InternalOutputStream, TableError> {
self.delete(fd)
}
/// Test whether this stream is writable.
async fn writable(&self) -> Result<(), Error>;
}
/// Extension trait for managing [`HostInputStream`]s and [`HostOutputStream`]s in the [`Table`].
pub trait TableStreamExt {
fn push_input_stream(&mut self, istream: Box<dyn InputStream>) -> Result<u32, TableError>;
fn get_input_stream(&self, fd: u32) -> Result<&dyn InputStream, TableError>;
fn get_input_stream_mut(&mut self, fd: u32) -> Result<&mut Box<dyn InputStream>, TableError>;
/// Push a [`HostInputStream`] into a [`Table`], returning the table index.
fn push_input_stream(&mut self, istream: Box<dyn HostInputStream>) -> Result<u32, TableError>;
/// Get a mutable reference to a [`HostInputStream`] in a [`Table`].
fn get_input_stream_mut(&mut self, fd: u32) -> Result<&mut dyn HostInputStream, TableError>;
/// Remove [`HostInputStream`] from table:
fn delete_input_stream(&mut self, fd: u32) -> Result<Box<dyn HostInputStream>, TableError>;
/// Push a [`HostOutputStream`] into a [`Table`], returning the table index.
fn push_output_stream(&mut self, ostream: Box<dyn HostOutputStream>)
-> Result<u32, TableError>;
/// Get a mutable reference to a [`HostOutputStream`] in a [`Table`].
fn get_output_stream_mut(&mut self, fd: u32) -> Result<&mut dyn HostOutputStream, TableError>;
fn push_output_stream(&mut self, ostream: Box<dyn OutputStream>) -> Result<u32, TableError>;
fn get_output_stream(&self, fd: u32) -> Result<&dyn OutputStream, TableError>;
fn get_output_stream_mut(&mut self, fd: u32) -> Result<&mut Box<dyn OutputStream>, TableError>;
/// Remove [`HostOutputStream`] from table:
fn delete_output_stream(&mut self, fd: u32) -> Result<Box<dyn HostOutputStream>, TableError>;
}
impl TableStreamExt for Table {
fn push_input_stream(&mut self, istream: Box<dyn InputStream>) -> Result<u32, TableError> {
self.push(Box::new(istream))
}
fn get_input_stream(&self, fd: u32) -> Result<&dyn InputStream, TableError> {
self.get::<Box<dyn InputStream>>(fd).map(|f| f.as_ref())
}
fn get_input_stream_mut(&mut self, fd: u32) -> Result<&mut Box<dyn InputStream>, TableError> {
self.get_mut::<Box<dyn InputStream>>(fd)
}
fn push_output_stream(&mut self, ostream: Box<dyn OutputStream>) -> Result<u32, TableError> {
self.push(Box::new(ostream))
}
fn get_output_stream(&self, fd: u32) -> Result<&dyn OutputStream, TableError> {
self.get::<Box<dyn OutputStream>>(fd).map(|f| f.as_ref())
}
fn get_output_stream_mut(&mut self, fd: u32) -> Result<&mut Box<dyn OutputStream>, TableError> {
self.get_mut::<Box<dyn OutputStream>>(fd)
}
fn push_input_stream(&mut self, istream: Box<dyn HostInputStream>) -> Result<u32, TableError> {
self.push_internal_input_stream(InternalInputStream::Host(istream))
}
fn get_input_stream_mut(&mut self, fd: u32) -> Result<&mut dyn HostInputStream, TableError> {
match self.get_internal_input_stream_mut(fd)? {
InternalInputStream::Host(ref mut h) => Ok(h.as_mut()),
_ => Err(TableError::WrongType),
}
}
fn delete_input_stream(&mut self, fd: u32) -> Result<Box<dyn HostInputStream>, TableError> {
let occ = self.entry(fd)?;
match occ.get().downcast_ref::<InternalInputStream>() {
Some(InternalInputStream::Host(_)) => {
let (_, any) = occ.remove_entry();
match *any.downcast().expect("downcast checked above") {
InternalInputStream::Host(h) => Ok(h),
_ => unreachable!("variant checked above"),
}
}
_ => Err(TableError::WrongType),
}
}
fn push_output_stream(
&mut self,
ostream: Box<dyn HostOutputStream>,
) -> Result<u32, TableError> {
self.push_internal_output_stream(InternalOutputStream::Host(ostream))
}
fn get_output_stream_mut(&mut self, fd: u32) -> Result<&mut dyn HostOutputStream, TableError> {
match self.get_internal_output_stream_mut(fd)? {
InternalOutputStream::Host(ref mut h) => Ok(h.as_mut()),
_ => Err(TableError::WrongType),
}
}
fn delete_output_stream(&mut self, fd: u32) -> Result<Box<dyn HostOutputStream>, TableError> {
let occ = self.entry(fd)?;
match occ.get().downcast_ref::<InternalOutputStream>() {
Some(InternalOutputStream::Host(_)) => {
let (_, any) = occ.remove_entry();
match *any.downcast().expect("downcast checked above") {
InternalOutputStream::Host(h) => Ok(h),
_ => unreachable!("variant checked above"),
}
}
_ => Err(TableError::WrongType),
}
}
}
#[cfg(test)]
mod test {
use super::*;
use crate::preview2::pipe::{ReadPipe, WritePipe};
#[test]
fn input_stream_in_table() {
let empty_pipe = ReadPipe::new(std::io::empty());
struct DummyInputStream;
#[async_trait::async_trait]
impl HostInputStream for DummyInputStream {
fn read(&mut self, _size: usize) -> Result<(Bytes, StreamState), Error> {
unimplemented!();
}
async fn ready(&mut self) -> Result<(), Error> {
unimplemented!();
}
}
let dummy = DummyInputStream;
let mut table = Table::new();
let ix = table.push_input_stream(Box::new(empty_pipe)).unwrap();
let _ = table.get_input_stream(ix).unwrap();
// Put it into the table:
let ix = table.push_input_stream(Box::new(dummy)).unwrap();
// Get a mut ref to it:
let _ = table.get_input_stream_mut(ix).unwrap();
// Fails at wrong type:
assert!(matches!(
table.get_output_stream_mut(ix),
Err(TableError::WrongType)
));
// Delete it:
let _ = table.delete_input_stream(ix).unwrap();
// Now absent from table:
assert!(matches!(
table.get_input_stream_mut(ix),
Err(TableError::NotPresent)
));
}
#[test]
fn output_stream_in_table() {
let dev_null = WritePipe::new(std::io::sink());
struct DummyOutputStream;
#[async_trait::async_trait]
impl HostOutputStream for DummyOutputStream {
fn write(&mut self, _: Bytes) -> Result<(usize, StreamState), Error> {
unimplemented!();
}
async fn ready(&mut self) -> Result<(), Error> {
unimplemented!();
}
}
let dummy = DummyOutputStream;
let mut table = Table::new();
let ix = table.push_output_stream(Box::new(dev_null)).unwrap();
let _ = table.get_output_stream(ix).unwrap();
// Put it in the table:
let ix = table.push_output_stream(Box::new(dummy)).unwrap();
// Get a mut ref to it:
let _ = table.get_output_stream_mut(ix).unwrap();
// Fails at wrong type:
assert!(matches!(
table.get_input_stream_mut(ix),
Err(TableError::WrongType)
));
// Delete it:
let _ = table.delete_output_stream(ix).unwrap();
// Now absent:
assert!(matches!(
table.get_output_stream_mut(ix),
Err(TableError::NotPresent)
));
}
}

crates/wasi/src/preview2/table.rs

@ -20,7 +20,7 @@ pub enum TableError {
/// up. Right now it is just an approximation.
#[derive(Debug)]
pub struct Table {
map: HashMap<u32, Box<dyn Any + Send + Sync>>,
pub(crate) map: HashMap<u32, Box<dyn Any + Send + Sync>>,
next_key: u32,
}
@ -87,6 +87,23 @@ impl Table {
}
}
/// Get an [`std::collections::hash_map::OccupiedEntry`] corresponding to
/// a table entry, if it exists. This allows you to remove or replace the
/// entry based on its contents.
pub fn entry(
&mut self,
key: u32,
) -> Result<
std::collections::hash_map::OccupiedEntry<u32, Box<dyn Any + Send + Sync + 'static>>,
TableError,
> {
use std::collections::hash_map::Entry;
match self.map.entry(key) {
Entry::Occupied(occ) => Ok(occ),
Entry::Vacant(_) => Err(TableError::NotPresent),
}
}
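This is the hook the stream `delete_*` methods above rely on for variant-checked removal: inspect the entry first, and only take it out of the map if it holds the expected type. A minimal sketch of the pattern using a plain `String` entry (the `take_string` helper is made up for illustration):

// Remove the value at `key` only if it actually holds a `String`; otherwise
// leave the table untouched and report a type mismatch, mirroring
// delete_input_stream above.
fn take_string(table: &mut Table, key: u32) -> Result<String, TableError> {
    let occ = table.entry(key)?;
    if occ.get().is::<String>() {
        let (_, any) = occ.remove_entry();
        Ok(*any.downcast::<String>().expect("type checked above"))
    } else {
        Err(TableError::WrongType)
    }
}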
/// Remove a resource at a given index from the table.
pub fn delete<T: Any + Sized>(&mut self, key: u32) -> Result<T, TableError> {
// Optimistically attempt to remove the value stored under key
@ -104,4 +121,23 @@ impl Table {
}
}
}
/// Zip the values of the map with mutable references to table entries corresponding to each
/// key. As the keys in the [HashMap] are unique, this iterator can give mutable references
/// with the same lifetime as the mutable reference to the [Table].
pub fn iter_entries<'a, T>(
&'a mut self,
map: HashMap<u32, T>,
) -> impl Iterator<Item = (Result<&'a mut dyn Any, TableError>, T)> {
map.into_iter().map(move |(k, v)| {
let item = self
.map
.get_mut(&k)
.map(Box::as_mut)
// Safety: extending the lifetime of the mutable reference.
.map(|item| unsafe { &mut *(item as *mut dyn Any) })
.ok_or(TableError::NotPresent);
(item, v)
})
}
}
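A rough sketch of how `iter_entries` is meant to be used: pair caller-supplied per-key data with a mutable view of each table slot in a single pass, which is the access pattern `poll-oneoff` needs. `Waiting`, `visit`, and the `String` downcast are illustrative only, not part of this change:

use std::any::Any;
use std::collections::HashMap;

// Illustrative per-entry payload carried alongside each table index.
struct Waiting {
    userdata: u64,
}

fn visit(table: &mut Table, keys: Vec<(u32, u64)>) -> Result<(), TableError> {
    // Build the key -> payload map that iter_entries consumes.
    let map: HashMap<u32, Waiting> = keys
        .into_iter()
        .map(|(k, userdata)| (k, Waiting { userdata }))
        .collect();

    // Each iteration yields a mutable `dyn Any` for the table slot plus the payload.
    for (entry, waiting) in table.iter_entries(map) {
        let entry: &mut dyn Any = entry?;
        // Downcast to whatever concrete type the slot is known to hold.
        if let Some(s) = entry.downcast_mut::<String>() {
            s.push_str(&format!(" (userdata {})", waiting.userdata));
        }
    }
    Ok(())
}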

crates/wasi/src/preview2/wasi/command.rs

@ -1,43 +0,0 @@
use crate::preview2::WasiView;
wasmtime::component::bindgen!({
world: "wasi:preview/command",
tracing: true,
async: true,
trappable_error_type: {
"filesystem"::"error-code": Error,
"streams"::"stream-error": Error,
},
with: {
"wasi:filesystem/filesystem": crate::preview2::wasi::filesystem::filesystem,
"wasi:clocks/monotonic_clock": crate::preview2::wasi::clocks::monotonic_clock,
"wasi:poll/poll": crate::preview2::wasi::poll::poll,
"wasi:io/streams": crate::preview2::wasi::io::streams,
"wasi:clocks/timezone": crate::preview2::wasi::clocks::timezone,
"wasi:clocks/wall_clock": crate::preview2::wasi::clocks::wall_clock,
"wasi:random/random": crate::preview2::wasi::random::random,
"wasi:cli_base/environment": crate::preview2::wasi::cli_base::environment,
"wasi:cli_base/exit": crate::preview2::wasi::cli_base::exit,
"wasi:cli_base/preopens": crate::preview2::wasi::cli_base::preopens,
"wasi:cli_base/stdin": crate::preview2::wasi::cli_base::stdin,
"wasi:cli_base/stdout": crate::preview2::wasi::cli_base::stdout,
"wasi:cli_base/stderr": crate::preview2::wasi::cli_base::stderr,
},
});
pub fn add_to_linker<T: WasiView>(l: &mut wasmtime::component::Linker<T>) -> anyhow::Result<()> {
crate::preview2::wasi::clocks::wall_clock::add_to_linker(l, |t| t)?;
crate::preview2::wasi::clocks::monotonic_clock::add_to_linker(l, |t| t)?;
crate::preview2::wasi::clocks::timezone::add_to_linker(l, |t| t)?;
crate::preview2::wasi::filesystem::filesystem::add_to_linker(l, |t| t)?;
crate::preview2::wasi::poll::poll::add_to_linker(l, |t| t)?;
crate::preview2::wasi::io::streams::add_to_linker(l, |t| t)?;
crate::preview2::wasi::random::random::add_to_linker(l, |t| t)?;
crate::preview2::wasi::cli_base::exit::add_to_linker(l, |t| t)?;
crate::preview2::wasi::cli_base::environment::add_to_linker(l, |t| t)?;
crate::preview2::wasi::cli_base::preopens::add_to_linker(l, |t| t)?;
crate::preview2::wasi::cli_base::stdin::add_to_linker(l, |t| t)?;
crate::preview2::wasi::cli_base::stdout::add_to_linker(l, |t| t)?;
crate::preview2::wasi::cli_base::stderr::add_to_linker(l, |t| t)?;
Ok(())
}

crates/wasi/src/preview2/wasi/mod.rs

@ -1,30 +0,0 @@
pub mod command;
wasmtime::component::bindgen!({
path: "wit",
interfaces: "
import wasi:clocks/wall-clock
import wasi:clocks/monotonic-clock
import wasi:clocks/timezone
import wasi:filesystem/filesystem
import wasi:random/random
import wasi:random/insecure
import wasi:random/insecure-seed
import wasi:poll/poll
import wasi:io/streams
import wasi:cli-base/environment
import wasi:cli-base/preopens
import wasi:cli-base/exit
import wasi:cli-base/stdin
import wasi:cli-base/stdout
import wasi:cli-base/stderr
",
tracing: true,
async: true,
trappable_error_type: {
"filesystem"::"error-code": Error,
"streams"::"stream-error": Error,
}
});
pub use wasi::*;

crates/wasi/wit/deps/io/streams.wit

@ -12,6 +12,22 @@ interface streams {
/// doesn't provide any additional information.
record stream-error {}
/// Streams provide a sequence of data and then end; once they end, they
/// no longer provide any further data.
///
/// For example, a stream reading from a file ends when the stream reaches
/// the end of the file. For another example, a stream reading from a
/// socket ends when the socket is closed.
enum stream-status {
/// The stream is open and may produce further data.
open,
/// When reading, this indicates that the stream will not produce
/// further data.
/// When writing, this indicates that the stream will no longer be read.
/// Further writes are still permitted.
ended,
}
/// An input bytestream. In the future, this will be replaced by handle
/// types.
///
@ -31,12 +47,12 @@ interface streams {
/// This [represents a resource](https://github.com/WebAssembly/WASI/blob/main/docs/WitInWasi.md#Resources).
type input-stream = u32
/// Read bytes from a stream.
/// Perform a non-blocking read from the stream.
///
/// This function returns a list of bytes containing the data that was
/// read, along with a bool which, when true, indicates that the end of the
/// stream was reached. The returned list will contain up to `len` bytes; it
/// may return fewer than requested, but not more.
/// read, along with a `stream-status` which indicates whether further
/// reads are expected to produce data. The returned list will contain up to
/// `len` bytes; it may return fewer than requested, but not more.
///
/// Once a stream has reached the end, subsequent calls to read or
/// `skip` will always report end-of-stream rather than producing more
@ -49,11 +65,17 @@ interface streams {
/// The len here is a `u64`, but some callees may not be able to allocate
/// a buffer as large as that would imply.
/// FIXME: describe what happens if allocation fails.
///
/// When the returned `stream-status` is `open`, the length of the returned
/// value may be less than `len`. When an empty list is returned, this
/// indicates that no more bytes were available from the stream at that
/// time. In that case the subscribe-to-input-stream pollable will indicate
/// when additional bytes are available for reading.
read: func(
this: input-stream,
/// The maximum number of bytes to read
len: u64
) -> result<tuple<list<u8>, bool>, stream-error>
) -> result<tuple<list<u8>, stream-status>, stream-error>
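On the host side this contract maps onto a read loop that only waits when a read comes back empty while the stream is still open. A rough sketch against the `HostInputStream` trait used elsewhere in this change (same assumptions as the earlier sketch: `Error` is `anyhow::Error`, `StreamState` has `Open`/`Closed` variants; `read_exactly` is an illustrative helper, not an API in this crate):

use anyhow::Error;
use bytes::{Bytes, BytesMut};

// Keep reading until we have gathered `want` bytes or the stream ends.
async fn read_exactly(
    stream: &mut dyn HostInputStream,
    want: usize,
) -> Result<(Bytes, StreamState), Error> {
    let mut acc = BytesMut::with_capacity(want);
    loop {
        let (chunk, state) = stream.read(want - acc.len())?;
        acc.extend_from_slice(&chunk);
        let closed = matches!(&state, StreamState::Closed);
        if acc.len() == want || closed {
            // Either we have what we wanted, or no more data will ever arrive.
            return Ok((acc.freeze(), state));
        }
        if chunk.is_empty() {
            // An empty, still-open read means "nothing available right now";
            // wait for readiness instead of spinning.
            stream.ready().await?;
        }
    }
}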
/// Read bytes from a stream, with blocking.
///
@ -63,7 +85,7 @@ interface streams {
this: input-stream,
/// The maximum number of bytes to read
len: u64
) -> result<tuple<list<u8>, bool>, stream-error>
) -> result<tuple<list<u8>, stream-status>, stream-error>
/// Skip bytes from a stream.
///
@ -81,7 +103,7 @@ interface streams {
this: input-stream,
/// The maximum number of bytes to skip.
len: u64,
) -> result<tuple<u64, bool>, stream-error>
) -> result<tuple<u64, stream-status>, stream-error>
/// Skip bytes from a stream, with blocking.
///
@ -91,7 +113,7 @@ interface streams {
this: input-stream,
/// The maximum number of bytes to skip.
len: u64,
) -> result<tuple<u64, bool>, stream-error>
) -> result<tuple<u64, stream-status>, stream-error>
/// Create a `pollable` which will resolve once either the specified stream
/// has bytes available to read or the other end of the stream has been
@ -113,7 +135,7 @@ interface streams {
/// always return promptly, after the number of bytes that can be written
/// promptly, which could even be zero. To wait for the stream to be ready to
/// accept data, use the `subscribe-to-output-stream` function to obtain a
/// `pollable` which can be polled for using `wasi_poll`.
/// `pollable` which can be polled for using `wasi:poll`.
///
/// And at present, it is a `u32` instead of being an actual handle, until
/// the wit-bindgen implementation of handles and resources is ready.
@ -121,15 +143,25 @@ interface streams {
/// This [represents a resource](https://github.com/WebAssembly/WASI/blob/main/docs/WitInWasi.md#Resources).
type output-stream = u32
/// Write bytes to a stream.
///
/// This function returns a `u64` indicating the number of bytes from
/// `buf` that were written; it may be less than the full list.
/// Perform a non-blocking write of bytes to a stream.
///
/// This function returns a `u64` and a `stream-status`. The `u64` indicates
/// the number of bytes from `buf` that were written, which may be less than
/// the length of `buf`. The `stream-status` indicates whether data written to
/// the stream is still expected to be read.
///
/// When the returned `stream-status` is `open`, the `u64` return value may
/// be less than the length of `buf`. This indicates that no more bytes may
/// be written to the stream promptly. In that case the
/// subscribe-to-output-stream pollable will indicate when additional bytes
/// may be promptly written.
///
/// TODO: document what happens when an empty list is written
write: func(
this: output-stream,
/// Data to write
buf: list<u8>
) -> result<u64, stream-error>
) -> result<tuple<u64, stream-status>, stream-error>
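The mirror-image loop for writers: keep writing, drop whatever was accepted, and await readiness whenever a write makes no progress, stopping early once the stream reports it will no longer be read. Same trait-shape assumptions as the read sketch above; `write_all` is illustrative only:

use anyhow::Error;
use bytes::Bytes;

// Write all of `buf`, waiting for writability whenever the stream only
// accepts part of it, and stopping early if the stream ends.
async fn write_all(
    stream: &mut dyn HostOutputStream,
    mut buf: Bytes,
) -> Result<StreamState, Error> {
    while !buf.is_empty() {
        let (n, state) = stream.write(buf.clone())?;
        let _ = buf.split_to(n); // discard the bytes that were accepted
        if matches!(&state, StreamState::Closed) {
            // The other end will not read any more data.
            return Ok(state);
        }
        if n == 0 {
            // Nothing could be written promptly; wait until the stream is ready.
            stream.ready().await?;
        }
    }
    Ok(StreamState::Open)
}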
/// Write bytes to a stream, with blocking.
///
@ -139,7 +171,7 @@ interface streams {
this: output-stream,
/// Data to write
buf: list<u8>
) -> result<u64, stream-error>
) -> result<tuple<u64, stream-status>, stream-error>
/// Write multiple zero bytes to a stream.
///
@ -149,7 +181,7 @@ interface streams {
this: output-stream,
/// The number of zero bytes to write
len: u64
) -> result<u64, stream-error>
) -> result<tuple<u64, stream-status>, stream-error>
/// Write multiple zero bytes to a stream, with blocking.
///
@ -159,7 +191,7 @@ interface streams {
this: output-stream,
/// The number of zero bytes to write
len: u64
) -> result<u64, stream-error>
) -> result<tuple<u64, stream-status>, stream-error>
/// Read from one stream and write to another.
///
@ -174,7 +206,7 @@ interface streams {
src: input-stream,
/// The number of bytes to splice
len: u64,
) -> result<tuple<u64, bool>, stream-error>
) -> result<tuple<u64, stream-status>, stream-error>
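For comparison with the byte-at-a-time default implementation removed earlier in this diff, here is a sketch of how splice can be phrased against the new host traits so that a `stream-status` is propagated instead of a bool. Same assumptions as the previous sketches; the handling of a closed destination is deliberately simplified:

use anyhow::Error;

// Move up to `len` bytes from `src` to `dst`, reporting how many bytes were
// spliced and the state of the source stream when we stopped.
async fn splice(
    src: &mut dyn HostInputStream,
    dst: &mut dyn HostOutputStream,
    len: usize,
) -> Result<(usize, StreamState), Error> {
    let mut spliced = 0;
    while spliced < len {
        let (chunk, state) = src.read(len - spliced)?;
        if chunk.is_empty() {
            if matches!(&state, StreamState::Closed) {
                // Source ended before `len` bytes were transferred.
                return Ok((spliced, state));
            }
            // Nothing available right now; wait for the source to produce data.
            src.ready().await?;
            continue;
        }
        // For brevity, assume `dst` accepts the whole chunk; a real host would
        // loop on partial writes as in the write sketch above.
        let (n, _dst_state) = dst.write(chunk)?;
        spliced += n;
        if matches!(&state, StreamState::Closed) {
            return Ok((spliced, state));
        }
    }
    Ok((spliced, StreamState::Open))
}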
/// Read from one stream and write to another, with blocking.
///
@ -186,7 +218,7 @@ interface streams {
src: input-stream,
/// The number of bytes to splice
len: u64,
) -> result<tuple<u64, bool>, stream-error>
) -> result<tuple<u64, stream-status>, stream-error>
/// Forward the entire contents of an input stream to an output stream.
///
@ -198,12 +230,13 @@ interface streams {
/// of the input stream is seen and all the data has been written to
/// the output stream.
///
/// This function returns the number of bytes transferred.
/// This function returns the number of bytes transferred, and the status of
/// the output stream.
forward: func(
this: output-stream,
/// The stream to read from
src: input-stream
) -> result<u64, stream-error>
) -> result<tuple<u64, stream-status>, stream-error>
/// Create a `pollable` which will resolve once either the specified stream
/// is ready to accept bytes or the other end of the stream has been closed.

supply-chain/imports.lock

@ -1936,6 +1936,24 @@ criteria = "safe-to-deploy"
delta = "2.2.1 -> 2.3.2"
aggregated-from = "https://hg.mozilla.org/mozilla-central/raw-file/tip/supply-chain/audits.toml"
[[audits.mozilla.audits.bytes]]
who = "Mike Hommey <mh+mozilla@glandium.org>"
criteria = "safe-to-deploy"
delta = "1.1.0 -> 1.2.1"
aggregated-from = "https://hg.mozilla.org/mozilla-central/raw-file/tip/supply-chain/audits.toml"
[[audits.mozilla.audits.bytes]]
who = "Mike Hommey <mh+mozilla@glandium.org>"
criteria = "safe-to-deploy"
delta = "1.2.1 -> 1.3.0"
aggregated-from = "https://hg.mozilla.org/mozilla-central/raw-file/tip/supply-chain/audits.toml"
[[audits.mozilla.audits.bytes]]
who = "Mike Hommey <mh+mozilla@glandium.org>"
criteria = "safe-to-deploy"
delta = "1.3.0 -> 1.4.0"
aggregated-from = "https://hg.mozilla.org/mozilla-central/raw-file/tip/supply-chain/audits.toml"
[[audits.mozilla.audits.crypto-common]]
who = "Mike Hommey <mh+mozilla@glandium.org>"
criteria = "safe-to-deploy"
@ -1992,6 +2010,12 @@ version = "1.0.7"
notes = "Simple hasher implementation with no unsafe code."
aggregated-from = "https://hg.mozilla.org/mozilla-central/raw-file/tip/supply-chain/audits.toml"
[[audits.mozilla.audits.futures-io]]
who = "Mike Hommey <mh+mozilla@glandium.org>"
criteria = "safe-to-deploy"
delta = "0.3.27 -> 0.3.28"
aggregated-from = "https://hg.mozilla.org/mozilla-central/raw-file/tip/supply-chain/audits.toml"
[[audits.mozilla.audits.fxhash]]
who = "Bobby Holley <bobbyholley@gmail.com>"
criteria = "safe-to-deploy"
