Since the latest updates to our release process, which transitioned to
merge queues, it appears that patch releases create incorrectly named
tarballs. The version in the tarball is based on the branch name, which
doesn't change for patch releases, so the version needs to come from
`Cargo.toml`. Thankfully there's already a helpful shell script to do
that, so use the shell script instead of the branch name.
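The shell script itself isn't reproduced here, but the idea is simply to read the crate version out of `Cargo.toml` rather than deriving it from the branch name. A minimal, hypothetical Rust sketch of that lookup (naive single-file parsing, not the actual CI script):

```rust
use std::fs;

fn main() -> std::io::Result<()> {
    // Naive illustration: grab the first `version = "..."` line from the
    // top-level Cargo.toml (the [package] version) instead of trusting the
    // release branch name.
    let manifest = fs::read_to_string("Cargo.toml")?;
    let version = manifest
        .lines()
        .find_map(|line| line.trim().strip_prefix("version = \""))
        .and_then(|rest| rest.split('"').next())
        .expect("Cargo.toml should contain a `version = \"...\"` line");
    // The tarball name would then be derived from this version string.
    println!("{version}");
    Ok(())
}
```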
* riscv64: Remove unused code
* riscv64: Add vector types
* riscv64: Initial Vector ABI Load/Stores
* riscv64: Vector Loads/Stores
* riscv64: Fix `vsetvli` encoding error
* riscv64: Add SIMD `iadd` runtests
* riscv64: Rename `VecSew`
The SEW name is correct, but only for VType. We also use this type
in loads/stores as the Effective Element Width, so the name isn't
quite correct in that case.
* ci: Add V extension to RISC-V QEMU
* riscv64: Misc Cleanups
* riscv64: Check V extension in `load`/`store` for SIMD
* riscv64: Fix `sumop` doc comment
* cranelift: Fix comment typo
* riscv64: Add convert for VType and VecElementWidth
* riscv64: Remove VecElementWidth converter
* cranelift-native: Move riscv to separate module
* cranelift-native: Read /proc/cpuinfo to parse RISC-V extensions
* ci: Add QEMU cpuinfo emulation patch
This patch emulates the /proc/cpuinfo interface for RISC-V. This
allows us to do feature detection for the RISC-V backend.
It has been queued for QEMU 8.1 so we should remove it as soon as
that is available.
* ci: Enable QEMU RISC-V extensions
* cranelift-native: Cleanup ISA string parsing
Co-Authored-By: Jamey Sharp <jsharp@fastly.com>
* cranelift-native: Rework `/proc/cpuinfo` parsing (a simplified sketch of the parsing idea appears after this commit list)
Co-Authored-By: Jamey Sharp <jsharp@fastly.com>
---------
Co-authored-by: Jamey Sharp <jsharp@fastly.com>
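As a rough illustration of the `/proc/cpuinfo`-based feature detection mentioned above, here is a simplified sketch; it is not the actual cranelift-native code, which handles more cases (multi-letter `_z*` extensions, multiple harts, and so on):

```rust
use std::fs;

// Read the `isa` line from /proc/cpuinfo on a RISC-V Linux host and check
// whether a single-letter extension such as `v` is advertised.
fn riscv_has_extension(ext: char) -> bool {
    let Ok(cpuinfo) = fs::read_to_string("/proc/cpuinfo") else {
        return false;
    };
    cpuinfo
        .lines()
        .filter_map(|line| {
            let (key, value) = line.split_once(':')?;
            (key.trim() == "isa").then(|| value.trim())
        })
        .any(|isa| {
            // e.g. "rv64imafdcv_zicsr": single-letter extensions come before
            // the first underscore-separated multi-letter extension.
            let base = isa.split('_').next().unwrap_or(isa);
            base.strip_prefix("rv64")
                .or_else(|| base.strip_prefix("rv32"))
                .map_or(false, |exts| exts.contains(ext))
        })
}

fn main() {
    println!("V extension available: {}", riscv_has_extension('v'));
}
```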
* ci: unpin the wasi-nn tasks from an older Ubuntu
Previously, OpenVINO's lack of APT packages for Ubuntu 22.04 (`jammy`)
prevented us from upgrading the GitHub runner to use `ubuntu-latest`. I
updated the `install-openvino-action` to substitute in the `focal`
packages in this case (this is what the OpenVINO team considers the fix),
so this pin should no longer be necessary. Fixes #5408.
(Run all CI actions: prtest:full)
* vet: audit the openvino version bump
* Integrate experimental HTTP into wasmtime.
* Reset Cargo.lock
* Switch to bail!, plumb options partially.
* Implement timeouts.
* Remove generated files & wasm, add Makefile
* Remove generated code textfile
* Update crates/wasi-http/Cargo.toml
Co-authored-by: Eduardo de Moura Rodrigues <16357187+eduardomourar@users.noreply.github.com>
* Update crates/wasi-http/Cargo.toml
Co-authored-by: Eduardo de Moura Rodrigues <16357187+eduardomourar@users.noreply.github.com>
* Extract streams from request/response.
* Fix read for len < buffer length.
* Formatting.
* types impl: swap todos for traps
* streams_impl: idioms, and swap todos for traps
* component impl: idioms, swap all unwraps for traps, swap all todos for traps
* http impl: idiom
* Remove an unnecessary mut.
* Remove an unsupported function.
* Switch to the tokio runtime for the HTTP request.
* Add a rust example.
* Update to latest wit definition
* Remove example code.
* wip: start writing a http test...
* finish writing the outbound request example
haven't executed it yet
* better debug output
* wasi-http: some stubs required for rust rewrite of the example
* add wasi_http tests to test-programs
* CI: run the http tests
* Fix some warnings.
* bump new deps to latest releases (#3)
* Add tests for wasi-http to test-programs (#2)
* wip: start writing a http test...
* finish writing the outbound request example
haven't executed it yet
* better debug output
* wasi-http: some stubs required for rust rewrite of the example
* add wasi_http tests to test-programs
* CI: run the http tests
* bump new deps to latest releases
h2 0.3.16
http 0.2.9
mio 0.8.6
openssl 0.10.48
openssl-sys 0.9.83
tokio 1.26.0
---------
Co-authored-by: Brendan Burns <bburns@microsoft.com>
* Update crates/test-programs/tests/http_tests/runtime/wasi_http_tests.rs
* Update crates/test-programs/tests/http_tests/runtime/wasi_http_tests.rs
* Update crates/test-programs/tests/http_tests/runtime/wasi_http_tests.rs
* wasi-http: fix cargo.toml file and publish script to work together (#4)
Unfortunately, the publish script doesn't use a proper TOML parser (in
order to not have any dependencies), so the whitespace has to be the
trivial expected case.
Then, add wasi-http to the list of crates to publish.
* Update crates/test-programs/build.rs
* Switch to rustls
* Cleanups.
* Merge switch to rustls.
* Formatting
* Remove libssl install
* Fix tests.
* Rename wasi-http -> wasmtime-wasi-http
* prtest:full
Conditionalize TLS on riscv64gc.
* prtest:full
Fix formatting, also disable tls on s390x
* prtest:full
Add a path parameter to wit-bindgen, remove symlink.
* prtest:full
Fix tests for places where SSL isn't supported.
* Update crates/wasi-http/Cargo.toml
---------
Co-authored-by: Eduardo de Moura Rodrigues <16357187+eduardomourar@users.noreply.github.com>
Co-authored-by: Pat Hickey <phickey@fastly.com>
Co-authored-by: Pat Hickey <pat@moreproductive.org>
This refactoring from #5766 accidentally broke the tag name calculation
for the `build-src-tarball.sh` script, so this fixes it by copying over
the same logic from `build-tarballs.sh`.
* Test all backends when a runtest is modified
* Check that this triggers all backend tests
* Revert "Check that this triggers all backend tests"
This reverts commit 1d12536d04.
* Don't run LLDB tests on PRs
These take an extra minute or so, so only run them on the full test
suite of a merge instead of on all PRs as well.
* Add a test for the x64 isa files
This guarantees that if cranelift's x64 backend is modified that the
tests will be run on a PR, even if other backends were also modified.
GitHub recently made its merge queue feature available for use in public
repositories owned by organizations meaning that the Wasmtime repository
is a candidate for using this. GitHub's Merge Queue feature is a system
that's similar to Rust's bors integration where PRs are tested before
merging and only passing PRs are merged. This implements the "not rocket
science" rule where the `main` branch of Wasmtime, for example, is
always tested and passes CI. This is in contrast to our current
implementation of CI where PRs are merged when they pass their own CI,
but the code that was tested is not guaranteed to be the state of `main`
when the PR is merged, meaning that we're at risk now of a failing
`main` branch despite all merged PRs being green. While this has
happened with Wasmtime, it is not a common occurrence.
The main motivation, instead, to use GitHub's Merge Queue feature is
that it will enable Wasmtime to greatly reduce the amount of CI running
on PRs themselves. Currently the full test suite runs on every push to
every PR, meaning that our workers on GitHub Actions are frequently
clogged throughout weekdays and PRs can take quite some time to come
back with a successful run. Through the use of a Merge Queue, however,
we're able to configure only a small handful of checks to run on PRs
while deferring the main body of checks to the
merge-via-the-queue itself. This is hoped to free up capacity on CI and
overall improve CI times for Wasmtime and Cranelift developers.
The implementation of all of this required quite a lot of plumbing and
retooling of our CI. I've been testing this in an [external
repository][testrepo] and I think everything is working now. The
changes made in this PR are:
* The `build.yml` workflow is merged back into the `main.yml` workflow
as the original reason to split it out is no longer applicable (it'll
run on all merges). This was also done to fit in the dependency graph
of jobs of one workflow.
* Publication of the `gh-pages` branch, the `dev` tag artifacts, and
release artifacts have been moved to a separate
`publish-artifacts.yml` workflow. This workflow runs on all pushes to
`main` and all tags. This workflow no longer actually performs any
builds, however, and relies on a merge queue or similar being used for
branches/tags where artifacts are downloaded from the workflow run to
be uploaded. For pushes to `main` this works because a merge queue is
run, meaning that by the time the push happens all artifacts are ready.
For release branches this is handled by the next point:
* The `push-tag.yml` workflow is subsumed by the `main.yml` workflow. CI
for a tag being pushed will upload artifacts to a release in GitHub,
meaning that all builds must finish first for the commit. The
`main.yml` workflow at the end now scans commits for the preexisting
magical marker and pushes a tag if necessary.
* CI was previously a flat list of "run all these jobs" and is now
rearchitected into a "fan out" approach where some jobs run to determine
the next jobs to run, which then get "joined" into a finish step. The
purpose for this is somewhat nuanced and has implications for CI
runtime as well. The Merge Queue feature requires branches to be
protected with "these checks must pass", and the same checks gate both
entering the merge queue and passing it. The saving grace, however, is
that a "skipped" check counts as passing, meaning checks can be skipped
on PRs but run to completion on the merge queue. A problem with this,
though, is the build matrix used for tests: ideally PRs would run only
one element of the build matrix, but there's currently no easy way on
GitHub Actions for skipped matrix entries to show up as skipped (or
none that I know of).
This means that the "join" step serves the purpose of being the single
gate for both PR and merge queue CI; there are just more inputs to it
for merge queue CI. The major consequence of this decision is that
GitHub's actions scheduling doesn't work out well here. Jobs are
scheduled in FIFO order, meaning that the job for "ok, complete the CI
run" is queued up after everything else has completed, possibly
after lots of other CI requests in the middle for other PRs. The hope
here is that by using a merge queue we can keep CI relatively under
control and this won't affect merge times too much.
* All jobs in the `main.yml` workflow will now automatically cancel the
entire run if they fail. Previously this fail-fast behavior was only
part of the matrix runs (and just for that matrix), but it is
required to make the merge queue expedient. The gate of the merge
queue is the final "join" step, which is only executed once all
dependencies have finished. This means, for example, that if rustfmt
fails quickly then the tests which take longer might run for quite
a while before the join step reports failure, meaning that the PR sits
in the queue longer than needed, being tested when we know it's
already going to fail. Having all jobs cancel the run means
that failures immediately bail out and mark the whole run as
cancelled.
* A new "determine" CI job was added to determine what CI actually needs
to run. This is a "choke point" which is scheduled at the start of CI
that quickly figures out what else needs to be run. This notably
indicates whether large swaths of CI, like the build matrix, are
executed (the `run-full` flag). Additionally this dynamically calculates a
matrix of tests to run based on a new `./ci/build-test-matrix.js`
script. Various inputs are considered for this such as:
1. All pushes, meaning merge queue branches or release-branch merges,
will run full CI.
2. PRs to release branches will run full CI.
3. PRs to `main`, the most common, determine what to run based on
what's modified and what's in the commit message.
Some examples for (3) above: if modifications are made to
`cranelift/codegen/src/isa/*` then the corresponding builder is
executed on CI, and if the `crates/c-api` directory is modified then the
CMake-based tests are run on PRs but are otherwise skipped.
Annotations in commit messages such as `prtest:*` can be used to
explicitly request testing. (A rough sketch of this selection logic is
shown below.)
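To make the path-based selection in point (3) concrete, here is a rough Rust rendering of the idea. The real logic lives in `./ci/build-test-matrix.js` (JavaScript) and differs in detail, so the backend list and annotation handling below are only illustrative:

```rust
// Decide which backend test suites to run based on the trigger, the commit
// message, and the modified paths. Backend names here mirror the ISA
// directories mentioned above; this is a sketch, not the CI script.
fn tests_to_run(modified_paths: &[&str], commit_message: &str, is_push: bool) -> Vec<&'static str> {
    let all = vec!["x64", "aarch64", "s390x", "riscv64"];

    // (1) pushes (merge queue branches, release merges) and explicit
    // `prtest:full` annotations run everything, as does (2) for release PRs.
    if is_push || commit_message.contains("prtest:full") {
        return all;
    }

    // (3) PRs to `main`: run a backend's tests only if its ISA directory was
    // touched.
    let mut selected = Vec::new();
    for backend in all {
        let isa_dir = format!("cranelift/codegen/src/isa/{backend}");
        if modified_paths.iter().any(|p| p.starts_with(isa_dir.as_str())) {
            selected.push(backend);
        }
    }
    selected
}

fn main() {
    let run = tests_to_run(
        &["cranelift/codegen/src/isa/x64/inst/mod.rs"],
        "x64: tweak lowering",
        false,
    );
    println!("{run:?}"); // ["x64"]
}
```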
Before this PR, merges to `main` would perform two full runs of CI: one
on the PR itself and one on the merge to `main`. Note that the run for
the merge to `main` was quite frequently cancelled due to a merge happening
later. Additionally, before this PR there was always the risk of a bad
merge where what was merged ended up creating a `main` that failed CI due
to a non-code-related merge conflict.
After this PR, merges to `main` will perform one full run of CI, the one
performed as part of the merge queue. PRs themselves will otherwise perform
one test job most of the time. The `main` branch is additionally always
guaranteed to pass tests via the merge queue feature.
For release branches, before this PR a merge would perform two full
builds (one for the PR and one for the merge), and a third build was then
required for the release tag itself. This is now cut down to two full
builds: one for the PR and one for the merge. The reason for this is
that the merge queue feature currently can't be used for our
wildcard-based `release-*` branch protections. It is now possible,
however, to turn on required CI checks for the `release-*` branch PRs, so
we can at least have a "hit the button and forget" strategy for merging
PRs.
Note that this change to CI is not without its risks. The Merge Queue
feature is still in beta and is quite new for GitHub. One bug that
Trevor and I uncovered is that if a PR is being tested in the merge
queue and a contributor pushes to their PR then the PR isn't removed
from the merge queue but is instead merged when CI is successful, losing
the changes that the contributor pushed (what's merged is what was
tested). We suspect that GitHub will fix this, however.
Additionally, though, there's the risk that this may increase merge time
for PRs to Wasmtime in practice. The Merge Queue feature has the ability
to "batch" PRs together for a merge but this is only done if concurrent
builds are allowed. This means that if 5 PRs are batched together then 5
separate merges would be created for the stack of 5 PRs. If the CI for
all 5 merged together passes then everything is merged, otherwise a PR
is kicked out. We can't easily do this, however, since a major purpose
of the merge queue for us would be to cut down on usage of CI builders,
so the max concurrency would be set to 1, meaning that only one PR
at a time will be merged. This means PRs may sit in the queue for a while:
previously many `main`-based builds were cancelled due to
subsequent merges of other PRs, but now they must all run to 100%
completion.
[testrepo]: https://github.com/bytecodealliance/wasmtime-merge-queue-testing
This commit includes a set of changes that add initial support for `wasi-threads` to Wasmtime:
* feat: remove mutability from the WasiCtx Table
This patch adds interior mutability to the WasiCtx Table and the Table elements.
Major pain points:
* `File` only needs `RwLock<cap_std::fs::File>` to implement
`File::set_fdflags()` on Windows, because of [1]
* Because `File` needs a `RwLock` and `RwLock*Guard` cannot
be held across an `.await`, the `async` from
`async fn num_ready_bytes(&self)` had to be removed
* Because `File` needs a `RwLock` and `RwLock*Guard` cannot
be dereferenced in `pollable`, the signature of
`fn pollable(&self) -> Option<rustix::fd::BorrowedFd>`
changed to `fn pollable(&self) -> Option<Arc<dyn AsFd + '_>>`
(a rough sketch of this pattern is shown below)
[1] da238e324e/src/fs/fd_flags.rs (L210-L217)
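A rough, Unix-only sketch of the interior-mutability pattern described in this item; the type below wraps `std::fs::File` rather than `cap_std::fs::File`, and the method names are only illustrative:

```rust
use std::os::fd::{AsFd, BorrowedFd};
use std::sync::{Arc, RwLock, RwLockReadGuard};

// Hypothetical stand-in for wasi-common's `File`, holding the OS file behind
// a lock so mutation can happen through `&self`.
struct File(RwLock<std::fs::File>);

// Wrapper that keeps the read guard alive while exposing the borrowed fd as a
// trait object.
struct Pollable<'a>(RwLockReadGuard<'a, std::fs::File>);

impl AsFd for Pollable<'_> {
    fn as_fd(&self) -> BorrowedFd<'_> {
        self.0.as_fd()
    }
}

impl File {
    // `&self` instead of `&mut self`: flag changes take the write lock only
    // for the duration of the call, never across an `.await`.
    fn set_fdflags_nonblocking(&self, nonblocking: bool) -> std::io::Result<()> {
        let file = self.0.write().unwrap();
        // (real code would apply the flag via cap-std/rustix here)
        let _ = (&*file, nonblocking);
        Ok(())
    }

    // Mirrors the reworked signature quoted above: the guard can't be handed
    // out as a plain `BorrowedFd`, so return a trait object that owns it.
    fn pollable(&self) -> Option<Arc<dyn AsFd + '_>> {
        Some(Arc::new(Pollable(self.0.read().unwrap())))
    }
}

fn main() -> std::io::Result<()> {
    let file = File(RwLock::new(std::fs::File::open("/dev/null")?));
    file.set_fdflags_nonblocking(true)?;
    assert!(file.pollable().is_some());
    Ok(())
}
```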
* wasi-threads: add an initial implementation
This change is a first step toward implementing `wasi-threads` in
Wasmtime. We may find that it has some missing pieces, but the core
functionality is there: when `wasi::thread_spawn` is called by a running
WebAssembly module, a function named `wasi_thread_start` is found in the
module's exports and called in a new instance. The shared memory of the
original instance is reused in the new instance (a guest-side sketch of
this interface appears at the end of this item).
This new WASI proposal is in its early stages and details are still
being hashed out in the [spec] and [wasi-libc] repositories. Due to its
experimental state, the `wasi-threads` functionality is hidden behind
both a compile-time and runtime flag: one must build with `--features
wasi-threads` but also run the Wasmtime CLI with `--wasm-features
threads` and `--wasi-modules experimental-wasi-threads`. One can
experiment with `wasi-threads` by running:
```console
$ cargo run --features wasi-threads -- \
--wasm-features threads --wasi-modules experimental-wasi-threads \
<a threads-enabled module>
```
Threads-enabled Wasm modules are not yet easy to build. Hopefully this
is resolved soon, but in the meantime see the use of
`THREAD_MODEL=posix` in the [wasi-libc] repository for some clues on
what is necessary. Wiggle complicates things by requiring the Wasm
memory to be exported with a certain name and `wasi-threads` also
expects that memory to be imported; this build-time obstacle can be
overcome with the `--import-memory --export-memory` flags only available
in the latest Clang tree. Due to all of this, the included tests are
written directly in WAT--run these with:
```console
$ cargo test --features wasi-threads -p wasmtime-cli -- cli_tests
```
[spec]: https://github.com/WebAssembly/wasi-threads
[wasi-libc]: https://github.com/WebAssembly/wasi-libc
This change does not protect the WASI implementations themselves from
concurrent access. This is already complete in previous commits or left
for future commits in certain cases (e.g., wasi-nn).
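As a rough guest-side sketch of the interface described above: the import/export names follow this message's description, while the exact parameter layout and the use of raw externs instead of wasi-libc are assumptions for illustration (targeting `wasm32-wasi`):

```rust
// Hypothetical raw bindings; real programs would normally get these from a
// threads-enabled wasi-libc (THREAD_MODEL=posix) rather than writing them by
// hand.
#[link(wasm_import_module = "wasi")]
extern "C" {
    // Host-provided: instantiate the module again, sharing this instance's
    // memory, and call the new instance's exported `wasi_thread_start`.
    fn thread_spawn(start_arg: i32) -> i32;
}

// The export the host looks up and calls in the new instance.
#[no_mangle]
pub extern "C" fn wasi_thread_start(tid: i32, start_arg: i32) {
    // Thread body: in practice this would set up a stack/TLS from
    // `start_arg` and then run the actual work.
    let _ = (tid, start_arg);
}

fn main() {
    // Ask the host to spawn a thread; a negative return value indicates an
    // error in this sketch.
    let tid = unsafe { thread_spawn(0) };
    assert!(tid >= 0, "thread_spawn failed");
}
```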
* wasi-threads: factor out process exit logic
As is being discussed [elsewhere], either calling `proc_exit` or
trapping in any thread should halt execution of all threads. The
Wasmtime CLI already has logic for adapting a WebAssembly error code to
a code expected in each OS. This change factors out this logic to a new
function, `maybe_exit_on_error`, for use within the `wasi-threads`
implementation.
This will work reasonably well for CLI users of Wasmtime +
`wasi-threads`, but embedders will want something better in the future:
when a `wasi-threads` thread fails, they may not want their application
to exit. Handling this is tricky, because it will require cancelling the
threads spawned by the `wasi-threads` implementation, something that is
not trivial to do in Rust. With this change, we defer that work until
later in order to provide a working implementation of `wasi-threads` for
experimentation.
[elsewhere]: https://github.com/WebAssembly/wasi-threads/pull/17
* review: work around `fd_fdstat_set_flags`
In order to make progress with wasi-threads, this change temporarily
works around limitations induced by `wasi-common`'s
`fd_fdstat_set_flags` to allow `&mut self` use in the implementation.
Eventual resolution is tracked in
https://github.com/bytecodealliance/wasmtime/issues/5643. This change
makes several related helper functions (e.g., `set_fdflags`) take `&mut
self` as well.
* test: use `wait`/`notify` to improve `threads.wat` test
Previously, the test simply executed in a loop for some hardcoded number
of iterations. This change uses `wait` and `notify` and atomic
operations to keep track of when the spawned threads are done and join
on the main thread appropriately.
* various fixes and tweaks due to the PR review
---------
Signed-off-by: Harald Hoyer <harald@profian.com>
Co-authored-by: Harald Hoyer <harald@profian.com>
Co-authored-by: Alex Crichton <alex@alexcrichton.com>
This fixes the build issue identified in #5664 at the toolchain level
rather than working around it in our own build. The next step in fixing
this will be to remove the nightly override once the toolchain becomes
stable.
* Add source tarballs to our releases
This commit adds a small script to create a source tarball as part of
the release process. This goes further than requested by #3808 by
vendoring all Rust dependencies as well to be more in line with
"download the source once then build somewhere without a network".
Vendoring the Rust dependencies makes the tarball pretty beefy (67M
compressed, 500M uncompressed). Unfortunately most of this size comes
from vendored crates such as v8, pqcrypto-kyber, winapi, capstone-sys,
plotters, and web-sys. Only `winapi` in this list is actually needed for
`wasmtime`-the-binary, and only on Windows, but for now this is
the state of things related to `cargo vendor`. If this becomes an issue
we could specifically remove the bulky contents of crates in the
`vendor` directory, such as `v8`, since it's only used for fuzzing.
Closes #3808
* Review feedback
* Review comments
* ci: replace OpenVINO installer action
To test wasi-nn, we currently use an OpenVINO backend. The Wasmtime CI
must install OpenVINO using a custom GitHub action. This CI action has
not been updated in some time and in the meantime OpenVINO (and the
OpenVINO crates) have released several new versions.
https://github.com/abrown/install-openvino-action is an external action
that we plan to keep up to date with the latest releases. This change
replaces the current CI action with that one.
* wasi-nn: upgrade openvino dependency to v0.4.1
This eliminates a `lazy_static` dependency and changes a few parameters
to pass by reference. Importantly, it enables support for the latest
versions of OpenVINO (v2022.*) in wasi-nn.
* ci: update wasi-nn script to source correct env script
* ci: really use the correct path for the env script
Also, clarify which directory OpenVINO is installed in (the symlink may
not be present).
* Refactor binary-compatible-builds for releases
I was poking around this yesterday and noticed a few things that could
be improved for our release builds:
* The centos container for the x86_64 builds contained a bunch of extra
tooling we no longer need such as python3 and a C++ compiler. Along
with custom toolchain things this could all get removed since the C we
include now is quite simple.
* The aarch64 and s390x cross-compiled builds had relatively high glibc
version requirements compared to the x86_64 build. This was because we
didn't use a container to build the cross-compiled binaries. I added
containers here, along the lines of the x86_64 build, that use an older
glibc to build the release binary and lower our version requirement.
This lowers the aarch64 version requirement from glibc 2.28 to 2.17.
Additionally the s390x requirement dropped from 2.28 to 2.16.
* To make the containers a bit easier to read/write I added
`Dockerfile`s for them in a new `ci/docker` directory instead of
hardcoding install commands in JS.
This isn't intended to be a really big change or anything for anyone,
but it's intended to keep our Linux-based builds consistent at least as
best we can.
* Remove temporary change
This PR fixes #4066: it modifies the Cranelift `build.rs` workflow to
invoke the ISLE DSL compiler on every compilation, rather than only
when the user specifies a special "rebuild ISLE" feature.
The main benefit of this change is that it vastly simplifies the mental
model required of developers, and removes a bunch of failure modes
we have tried to work around in other ways. There is now just one
"source of truth", the ISLE source itself, in the repository, and so there
is no need to understand a special "rebuild" step and how to handle
merge errors. There is no special process needed to develop the compiler
when modifying the DSL. And there is no "noise" in the git history produced
by constantly-regenerated files.
The two main downsides we discussed in #4066 are:
- Compile time could increase, by adding more to the "meta" step before the main build;
- It becomes less obvious where the source definitions are (everything becomes
more "magic"), which makes exploration and debugging harder.
This PR addresses each of these concerns:
1. To maintain reasonable compile time, it includes work to cut down the
dependencies of the `cranelift-isle` crate to *nothing* (only the Rust stdlib),
in the default build. It does this by putting the error-reporting bits
(`miette` crate) under an optional feature, and the logging (`log` crate) under
a feature-controlled macro, and manually writing an `Error` impl rather than
using `thiserror`. This completely avoids proc macros and the `syn` build slowness (a minimal sketch of the feature-gated logging idea appears after this list).
The user can still get nice errors out of `miette`: this is enabled by specifying
a Cargo feature `--features isle-errors`.
2. To allow the user to optionally inspect the generated source, which nominally
lives in a hard-to-find path inside `target/` now, this PR adds a feature `isle-in-source-tree`
that, as implied by the name, moves the target for ISLE generated source into
the source tree, at `cranelift/codegen/isle_generated_source/`. It seems reasonable
to do this when an explicit feature (opt-in) is specified because this is how ISLE regeneration
currently works as well. To prevent surprises, if the feature is *not* specified, the
build fails if this directory exists.
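For point 1 above, here is a minimal sketch of what a feature-controlled logging macro can look like; the feature name `logging` and the macro name are hypothetical, not necessarily what `cranelift-isle` uses. With the feature disabled the macro expands to nothing, so the crate builds with no `log` dependency at all; with the feature enabled it forwards to `log::trace!`.

```rust
// With the (hypothetical) `logging` feature enabled, forward to the `log`
// crate; this arm is only compiled when that dependency is actually present.
#[cfg(feature = "logging")]
macro_rules! isle_trace {
    ($($arg:tt)*) => { log::trace!($($arg)*) };
}

// With the feature disabled (the default), the macro expands to nothing, so
// no `log` dependency is needed and the call sites compile away.
#[cfg(not(feature = "logging"))]
macro_rules! isle_trace {
    ($($arg:tt)*) => {};
}

fn main() {
    isle_trace!("this compiles to nothing in the default build");
}
```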
* Fix the release process's latest step
The automated release of 0.36.0 was attempted last night but it failed
due to a [failure on CI][bad]. This failure comes about because it was
trying to change the release date of 0.35.0, which ended up not modifying
any files, so `git` failed to commit as no files were changed.
The original bug, though, was that 0.35.0 was being changed instead of
0.36.0. The reason for this is that the script used
`--sort=-committerdate` to determine the latest branch. I forgot,
though, that with backports it's possible for 0.35.0 to have a more
recent commit date than 0.36.0 (as is currently the case). This commit
updates the script to perform a numerical sort outside of git to get the
latest release branch (a rough sketch of that sort is shown after this list).
Additionally this adds in some `set -ex` commands for the shell which
should help print out commands as they're run and assist in future
debugging.
[bad]: https://github.com/bytecodealliance/wasmtime/runs/6087188708
* Replace sed with rust
* Add m1 to release process
This creates a pre-compiled binary for M1 Macs and adds
a link to review Embark Studios' CI for verification.
* remove test for macos arm
Tests will not succeed for macOS ARM until GitHub provides an M1-hosted runner.
Co-authored-by: Bailey Hayes <bhayes@singlestore.com>
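A rough sketch of the "numerical sort outside of git" idea from the first item above (a hypothetical helper written in Rust only for illustration; the real release tooling differs in detail):

```rust
// Given `release-X.Y.Z` branch names, pick the highest version rather than
// the most recently committed branch.
fn latest_release_branch<'a>(branches: &[&'a str]) -> Option<&'a str> {
    branches
        .iter()
        .copied()
        .filter_map(|name| {
            let version = name.strip_prefix("release-")?;
            let mut parts = version.split('.').map(|p| p.parse::<u64>().ok());
            // Sort key: (major, minor, patch), parsed numerically so that
            // e.g. 0.36.0 orders after 0.35.1.
            Some(((parts.next()??, parts.next()??, parts.next()??), name))
        })
        .max_by_key(|(key, _)| *key)
        .map(|(_, name)| name)
}

fn main() {
    let branches = ["release-0.35.0", "release-0.36.0", "main"];
    println!("{:?}", latest_release_branch(&branches)); // Some("release-0.36.0")
}
```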
* Bump to 0.36.0
* Add a two-week delay to Wasmtime's release process
This commit is a proposal to update Wasmtime's release process with a
two-week delay from branching a release until it's actually officially
released. We've had two issues lately that came up which led to this proposal:
* In #3915 it was realized that changes just before the 0.35.0 release
weren't enough for an embedding use case, but the PR didn't meet the
expectations for a full patch release.
* At Fastly we were about to start rolling out a new version of Wasmtime
when over the weekend the fuzz bug #3951 was found. This led to the
desire internally to have a "must have been fuzzed for this long"
period of time for Wasmtime changes which we felt were better
reflected in the release process itself rather than something about
Fastly's own integration with Wasmtime.
This commit updates the automation for releases to unconditionally
create a `release-X.Y.Z` branch on the 5th of every month. The actual
release from this branch is then performed on the 20th of every month,
roughly two weeks later. This should provide a period of time to ensure
that all changes in a release are fuzzed for at least two weeks and
avoid any further surprises. This should also help with any last-minute
changes made just before a release if they need tweaking since
backporting to a not-yet-released branch is much easier.
Overall there are some new properties about Wasmtime with this proposal
as well:
* The `main` branch will always have a section in `RELEASES.md` which is
listed as "Unreleased" for us to fill out.
* The `main` branch will always be a version ahead of the latest
release. For example it will be bumped pre-emptively as part of the
release process on the 5th: if `release-2.0.0` was created then
the `main` branch will have Wasmtime 3.0.0.
* Dates for major versions are automatically updated in the
`RELEASES.md` notes.
The associated documentation for our release process is updated and the
various scripts should all be updated now as well with this commit.
* Add notes on a security patch
* Clarify security fixes shouldn't be previewed early on CI
Updates the OpenVINO backend for wasi-nn to pre-configure all inputs as `NHWC`. This is not a long-term fix but no worse than the status quo, which configures all input tensors to the same layout. This change updates the CI script to use the latest version of OpenVINO. Closes #3948.
Co-authored-by: Andrew Brown <andrew.brown@intel.com>
We currently skip some tests when running our qemu-based tests for
aarch64 and s390x. Qemu has broken madvise(MADV_DONTNEED) semantics --
specifically, it just ignores madvise() [1].
We could continue to whack-a-mole the tests whenever we create new
functionality that relies on madvise() semantics, but ideally we'd just
have emulation that properly emulates!
The earlier discussions on the qemu mailing list [2] had a proposed
patch for this, but (i) this patch doesn't seem to apply cleanly anymore
(it's 3.5 years old) and (ii) it's pretty complex due to the need to
handle qemu's ability to emulate differing page sizes on host and guest.
It turns out that we only really need this for CI when host and guest
have the same page size (4KiB), so we *could* just pass the madvise()s
through. I wouldn't expect such a patch to ever land upstream in qemu,
but it satisfies our needs I think. So this PR modifies our CI setup to
patch qemu before building it locally with a little one-off patch.
[1]
https://github.com/bytecodealliance/wasmtime/pull/2518#issuecomment-747280133
[2]
https://lists.gnu.org/archive/html/qemu-devel/2018-08/msg05416.html
Peepmatic was an early attempt at a DSL for peephole optimizations, with the
idea that maybe sometime in the future we could use it for instruction
selection as well. It didn't really pan out, however:
* Peepmatic wasn't quite flexible enough, and adding new operators or snippets
of code implemented externally in Rust was a bit of a pain.
* The performance was never competitive with the hand-written peephole
optimizers. It was *very* size efficient, but that came at the cost of
run-time efficiency. Everything was table-based and interpreted, rather than
generating any Rust code.
Ultimately, because of these reasons, we never turned Peepmatic on by default.
These days, we just landed the ISLE domain-specific language, and it is better
suited than Peepmatic for all the things that Peepmatic was originally designed
to do. It is more flexible and easy to integrate with external Rust code. It is
has better time efficiency, meeting or even beating hand-written code. I think a
small part of the reason why ISLE excels in these things is because its design
was informed by Peepmatic's failures. I still plan on continuing Peepmatic's
mission to make Cranelift's peephole optimizer passes generated from DSL rewrite
rules, but using ISLE instead of Peepmatic.
Thank you Peepmatic, rest in peace!
* Change the bump-version workflow's schedule
Either I don't understand cron or GitHub doesn't understand cron. It's
not clear which. I think that
https://github.com/bytecodealliance/wasmtime/pull/3511 may have fallen
within our schedule but it was supposed to be on a weekday. Otherwise
https://github.com/bytecodealliance/wasmtime/pull/3499 was certainly
spurious. This commit moves to a simpler "just do it on the same day
each month" and we can manually figure out weekdays and such. Hopefully
this should reduce the number of spurious PRs we're getting to bump
versions.
This also removes the script to force a version bump since I found a
button on the GitHub UI to do the same thing. Additionally I've updated
the patch-release documentation to use this button. Note that this
button takes inputs as well which means we can further automate patch
releases to look even more like the normal release process, differing only
in one part of the argument used to trigger the workflow.
* Fix a typo
This commit removes the Lightbeam backend from Wasmtime as per [RFC 14].
This backend hasn't received maintenance in quite some time, and as [RFC
14] indicates this doesn't meet the threshold for keeping the code
in-tree, so this commit removes it.
A fast "baseline" compiler may still be added in the future. The
addition of such a backend should be in line with [RFC 14], though, with
the principles we now have for stable releases of Wasmtime. I'll close
out Lightbeam-related issues once this is merged.
[RFC 14]: https://github.com/bytecodealliance/rfcs/pull/14
Implement Wasmtime's new API as designed by RFC 11. This is quite a large commit which has had lots of discussion externally, so for more information it's best to read the RFC thread and the PR thread.
Apparently `powershell Compress-Archive` produces zip files with
backslashes in filenames, which makes them unable to be extracted by
some Unix variants of extraction. For example, in [this failure][build],
macOS's built-in unzip feature creates filenames with
backslashes in them rather than subdirectories.
[build]: https://github.com/bytecodealliance/wasmtime-go/runs/2680596219?check_suite_focus=true
This commit removes the publish step in GitHub actions, instead
folding all functionality into the release build steps. This avoids
having a separately scheduled job after all the release build jobs which
ends up getting delayed for quite a long time given the current
scheduling algorithm.
This involves refactoring the tarball assembly scripts and refactoring the
github asset upload script too. Tarball assembly now manages everything
internally and does platform-specific bits where necessary. The upload
script is restructured to be run in parallel (in theory) and hopefully
catches various errors and tries to not stomp over everyone else's work.
The main trickiness here is handling `dev`, which is less critical for
correctness than the tags themselves.
As a small tweak build-wise the QEMU build for cross-compiled builders
is now cached unlike before where it was unconditionally built, shaving
a minute or two off build time.
* wasi-nn: turn it on by default
This change makes the wasi-nn Cargo feature a default feature. Previously, a wasi-nn user would have to build a separate Wasmtime binary (e.g. `cargo build --features wasi-nn ...`) to use wasi-nn and the resulting binary would require OpenVINO shared libraries to be present in the environment in order to run (otherwise it would fail immediately with linking errors). With recent changes to the `openvino` crate, the wasi-nn implementation can defer the loading of the OpenVINO shared libraries until runtime (i.e., when the user Wasm program calls `wasi_ephemeral_nn::load`) and display a user-level error if anything goes wrong (e.g., the OpenVINO libraries are not present on the system). This runtime-linking addition allows the wasi-nn feature to be turned on by default and shipped with upcoming releases of Wasmtime. This change should be transparent for users who do not use wasi-nn: the `openvino` crate is small and the newly-available wasi-nn imports only affect programs in which they are used.
For those interested in reviewing the runtime linking approach added to the `openvino` crate, see https://github.com/intel/openvino-rs/pull/19.
* wasi-nn spec path: don't use canonicalize
* Allow dependencies using the ISC license
The ISC license should be [just as permissive](https://choosealicense.com/licenses/isc) as MIT, i.e., with no additional limitations.
* Add a `--wasi-modules` flag
This flag controls which WASI modules are made available to the Wasm program. This initial commit enables `wasi-common` by default (equivalent to `--wasi-modules=all`) and allows `wasi-nn` and `wasi-crypto` to be added in either individually (e.g., `--wasi-modules=wasi-nn`) or as a group (e.g., `--wasi-modules=all-experimental`).
* wasi-crypto: fix unused dependency
Co-authored-by: Pat Hickey <pat@moreproductive.org>
This PR switches the default backend on x86, for both the
`cranelift-codegen` crate and for Wasmtime, to the new
(`MachInst`-style, `VCode`-based) backend that has been under
development and testing for some time now.
The old backend is still available by default in builds with the
`old-x86-backend` feature, or by requesting `BackendVariant::Legacy`
from the appropriate APIs.
As part of that switch, it adds some more runtime-configurable plumbing
to the testing infrastructure so that tests can be run using the
appropriate backend. `clif-util test` is now capable of parsing a
backend selector option from filetests and instantiating the correct
backend.
CI has been updated so that the old x86 backend continues to run its
tests, just as we used to run the new x64 backend separately.
At some point, we will remove the old x86 backend entirely, once we are
satisfied that the new backend has not caused any unforeseen issues and
we do not need to revert.
* Use stable Rust on CI to test the x64 backend
This commit leverages the newly-released 1.51.0 compiler to test the
new backend on Windows and Linux with a stable compiler instead of a
nightly compiler. This isolates the nightly build to just the nightly
documentation generation and fuzzing, both of which rely on nightly for
the best results right now.
* Use updated stable in book build job
* Run rustfmt for new stable
* Silence new warnings for wasi-nn
* Allow some dead code in the x64 backend
Looks like new rustc is better about emitting some dead-code warnings
* Update rust in peepmatic job
* Fix a test in the pooling allocator
* Remove `package.metadata.docs.rs` temporarily
Needs resolution of https://github.com/rust-lang/cargo/pull/9300 first
* Fix a warning in a wasi-nn example
* Add support for the experimental wasi-crypto APIs
The sole purpose of the implementation is to allow bindings and
application developers to test the proposed APIs.
Rust and AssemblyScript bindings are also available as examples.
Like `wasi-nn`, it is currently disabled by default, and requires
the `wasi-crypto` feature flag to be compiled in.
* Rename the wasi-crypto/spec submodule
* Add a path dependency into the submodule for wasi-crypto
* Tell the publish script to vendor wasi-crypto
Since downloading the wasi-nn artifacts takes a bit of time, the example script's first argument serves as a directory to reuse for running this script. This change cleans up temporary directories only when a directory was not specified.
* Add an initial wasi-nn implementation for Wasmtime
This change adds a crate, `wasmtime-wasi-nn`, that uses `wiggle` to expose the current state of the wasi-nn API and `openvino` to implement the exposed functions. It includes an end-to-end test demonstrating how to do classification using wasi-nn:
- `crates/wasi-nn/tests/classification-example` contains Rust code that is compiled to the `wasm32-wasi` target and run with a Wasmtime embedding that exposes the wasi-nn calls
- the example uses Rust bindings for wasi-nn contained in `crates/wasi-nn/tests/wasi-nn-rust-bindings`; this crate contains code generated by `witx-bindgen` and eventually should be its own standalone crate
* Test wasi-nn as a CI step
This change adds:
- a GitHub action for installing OpenVINO
- a script, `ci/run-wasi-nn-example.sh`, to run the classification example
We do a `cargo fetch --locked` for most of our CI builds, but
`run-experimental-x64-ci.sh` was not doing this. As a result, some CI
runs seem to fail depending on which versions of crates they download. A
common failure mode is that two different versions of the `syn` crate
get into the build somehow, resulting in errors in wiggle/witx.
This change simply adds the `--locked` flag to the `cargo test` run for
the new x64 backend.