* Add source tarballs to our releases
This commit adds a small script to create a source tarball as part of
the release process. This goes further than requested by #3808 by
vendoring all Rust dependencies as well to be more in line with
"download the source once then build somewhere without a network".
Vendoring the Rust dependencies makes the tarball pretty beefy (67M
compressed, 500M uncompressed). Unfortunately most of this size comes
from vendored crates such as v8, pqcrypto-kyber, winapi, capstone-sys,
plotters, and web-sys. Of these, only `winapi` is actually needed for
`wasmtime`-the-binary, and even then only on Windows, but for now this is
simply how `cargo vendor` works. If this becomes an issue
we could specifically remove the bulky contents of crates in the
`vendor` directory such as `v8` since it's only used for fuzzing.
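A minimal sketch of the vendoring step (file and archive names here are
illustrative, not the exact ones used by the release script):

```sh
# Vendor all crates.io dependencies into ./vendor and capture the generated
# Cargo configuration so an offline build picks up the vendored sources.
mkdir -p .cargo
cargo vendor vendor > .cargo/config.toml
# Archive the source tree (including the vendor directory) for the release.
tar --exclude=.git -czf ../wasmtime-src.tar.gz .
```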
Closes #3808
* Review feedback
* Review comments
* ci: replace OpenVINO installer action
To test wasi-nn, we currently use an OpenVINO backend. The Wasmtime CI
must install OpenVINO using a custom GitHub action. This CI action has
not been updated in some time and in the meantime OpenVINO (and the
OpenVINO crates) have released several new versions.
https://github.com/abrown/install-openvino-action is an external action
that we plan to keep up to date with the latest releases. This change
replaces the current CI action with that one.
* wasi-nn: upgrade openvino dependency to v0.4.1
This eliminates a `lazy_static` dependency and changes a few parameters
to pass by reference. Importantly, it enables support for the latest
versions of OpenVINO (v2022.*) in wasi-nn.
* ci: update wasi-nn script to source correct env script
* ci: really use the correct path for the env script
Also, clarify which directory OpenVINO is installed in (the symlink may
not be present).
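For reference, the sequence CI needs looks roughly like this (the install
directory and version are illustrative of a typical OpenVINO 2022 layout,
not the exact CI paths; the script path is the one mentioned later in these
notes):

```sh
# Source OpenVINO's environment script from the versioned install directory
# (the /opt/intel/openvino symlink may not exist), then run the example.
source /opt/intel/openvino_2022/setupvars.sh
./ci/run-wasi-nn-example.sh
```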
* Refactor binary-compatible-builds for releases
I was poking around this yesterday and noticed a few things that could
be improved for our release builds:
* The CentOS container for the x86_64 builds contained a bunch of extra
tooling we no longer need, such as python3 and a C++ compiler. Along with
some custom toolchain bits, this could all be removed since the C code we
include now is quite simple.
* The aarch64 and s390x cross-compiled builds had relatively high glibc
version requirements compared to the x86_64 build. This was because we
didn't use a container to build the cross-compiled binaries. I added
containers here, along the lines of the x86_64 build, that use an older
glibc to build the release binary and lower our version requirement.
This lowers the aarch64 requirement from glibc 2.28 to 2.17, and the
s390x requirement from 2.28 to 2.16 (a quick way to verify such
requirements is sketched below).
* To make the containers a bit easier to read/write I added
`Dockerfile`s for them in a new `ci/docker` directory instead of
hardcoding install commands in JS.
This isn't intended to be a particularly big change for anyone; it's
mostly intended to keep our Linux-based builds as consistent as we can.
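As a reference for the glibc numbers above, one common way to check the
maximum glibc symbol version a binary requires (not necessarily the exact
command CI uses; the binary path is illustrative):

```sh
# List the highest-versioned glibc symbol the aarch64 release binary links
# against; this is the minimum glibc a user needs to run it.
readelf --dyn-syms target/aarch64-unknown-linux-gnu/release/wasmtime \
  | grep -o 'GLIBC_[0-9.]*' | sort -uV | tail -n 1
```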
* Remove temporary change
This PR fixes #4066: it modifies the Cranelift `build.rs` workflow to
invoke the ISLE DSL compiler on every compilation, rather than only
when the user specifies a special "rebuild ISLE" feature.
The main benefit of this change is that it vastly simplifies the mental
model required of developers, and removes a bunch of failure modes
we have tried to work around in other ways. There is now just one
"source of truth", the ISLE source itself, in the repository, and so there
is no need to understand a special "rebuild" step and how to handle
merge errors. There is no special process needed to develop the compiler
when modifying the DSL. And there is no "noise" in the git history produced
by constantly-regenerated files.
The two main downsides we discussed in #4066 are:
- Compile time could increase, by adding more to the "meta" step before the main build;
- It becomes less obvious where the source definitions are (everything becomes
more "magic"), which makes exploration and debugging harder.
This PR addresses each of these concerns:
1. To maintain reasonable compile time, it includes work to cut down the
dependencies of the `cranelift-isle` crate to *nothing* (only the Rust stdlib),
in the default build. It does this by putting the error-reporting bits
(`miette` crate) under an optional feature, and the logging (`log` crate) under
a feature-controlled macro, and manually writing an `Error` impl rather than
using `thiserror`. This completely avoids proc macros and the `syn` build slowness.
The user can still get nice errors out of `miette`: this is enabled by specifying
the Cargo feature via `--features isle-errors` (example invocations for both of
the features described in this list are sketched below).
2. To allow the user to optionally inspect the generated source, which nominally
lives in a hard-to-find path inside `target/` now, this PR adds a feature `isle-in-source-tree`
that, as implied by the name, moves the target for ISLE generated source into
the source tree, at `cranelift/codegen/isle_generated_source/`. It seems reasonable
to do this when an explicit feature (opt-in) is specified because this is how ISLE regeneration
currently works as well. To prevent surprises, if the feature is *not* specified, the
build fails if this directory exists.
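For example, the opt-in builds described above would be invoked roughly like
this (the `-p cranelift-codegen` package selection is an assumption about
where these features are exposed):

```sh
# Build with rich ISLE error reporting via the optional miette-backed feature.
cargo build -p cranelift-codegen --features isle-errors
# Regenerate the ISLE output into cranelift/codegen/isle_generated_source/
# so it can be inspected in the source tree.
cargo build -p cranelift-codegen --features isle-in-source-tree
```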
* Fix the release process's latest step
The automated release of 0.36.0 was attempted last night but it failed
due to a [failure on CI][bad]. This failure came about because the script
was trying to change the release date of 0.35.0, which ended up not
modifying any files, so `git` failed to commit as nothing had changed.
The original bug though was that 0.35.0 was being changed instead of
0.36.0. The reason for this is that the script used
`--sort=-committerdate` to determine the latest branch. I forgot,
though, that with backports it's possible for 0.35.0 to have a more
recent commit date than 0.36.0 (as is currently the case). This commit
updates the script to perform a numerical sort outside of git to get the
latest release branch.
Additionally this adds `set -ex` to the shell scripts involved, which
should print commands as they're run and assist in future debugging.
[bad]: https://github.com/bytecodealliance/wasmtime/runs/6087188708
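A rough sketch of the numerical sort (branch names are assumed to follow the
`release-X.Y.Z` pattern; this is not the exact script):

```sh
# Find the newest release branch by sorting version numbers numerically
# instead of relying on git's commit-date ordering.
latest=$(git branch -r --list 'origin/release-*' \
  | sed 's|.*origin/release-||' \
  | sort -t. -k1,1n -k2,2n -k3,3n \
  | tail -n 1)
echo "latest release branch: release-$latest"
```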
* Replace sed with rust
* Add m1 to release process
This creates a pre-compiled binary for M1 Macs and adds a link to
Embark Studios' CI for verification.
* remove test for macos arm
Tests will not succeed for macOS ARM until GitHub provides an M1 hosted runner.
Co-authored-by: Bailey Hayes <bhayes@singlestore.com>
* Bump to 0.36.0
* Add a two-week delay to Wasmtime's release process
This commit is a proposal to update Wasmtime's release process with a
two-week delay from branching a release until it's actually officially
released. Two issues that came up lately led to this proposal:
* In #3915 it was realized that changes just before the 0.35.0 release
weren't enough for an embedding use case, but the PR didn't meet the
expectations for a full patch release.
* At Fastly we were about to start rolling out a new version of Wasmtime
when over the weekend the fuzz bug #3951 was found. This led to the
desire internally to have a "must have been fuzzed for this long"
period of time for Wasmtime changes which we felt were better
reflected in the release process itself rather than something about
Fastly's own integration with Wasmtime.
This commit updates the automation for releases to unconditionally
create a `release-X.Y.Z` branch on the 5th of every month. The actual
release from this branch is then performed on the 20th of every month,
roughly two weeks later. This should provide a period of time to ensure
that all changes in a release are fuzzed for at least two weeks and
avoid any further surprises. This should also help with any last-minute
changes made just before a release if they need tweaking since
backporting to a not-yet-released branch is much easier.
Overall there are some new properties about Wasmtime with this proposal
as well:
* The `main` branch will always have a section in `RELEASES.md` which is
listed as "Unreleased" for us to fill out.
* The `main` branch will always be a version ahead of the latest
release. For example it will be bumped pre-emptively as part of the
release process on the 5th: if `release-2.0.0` is created, then the
`main` branch will move to Wasmtime 3.0.0.
* Dates for major versions are automatically updated in the
`RELEASES.md` notes.
The associated documentation for our release process is updated and the
various scripts should all be updated now as well with this commit.
* Add notes on a security patch
* Clarify security fixes shouldn't be previewed early on CI
Updates the OpenVINO backend for wasi-nn to pre-configure all inputs as `NHWC`. This is not a long-term fix but no worse than the status quo, which configures all input tensors to the same layout. This change updates the CI script to use the latest version of OpenVINO. Closes #3948.
Co-authored-by: Andrew Brown <andrew.brown@intel.com>
We currently skip some tests when running our qemu-based tests for
aarch64 and s390x. Qemu has broken madvise(MADV_DONTNEED) semantics --
specifically, it just ignores madvise() [1].
We could continue to whack-a-mole the tests whenever we create new
functionality that relies on madvise() semantics, but ideally we'd just
have emulation that properly emulates!
The earlier discussions on the qemu mailing list [2] had a proposed
patch for this, but (i) this patch doesn't seem to apply cleanly anymore
(it's 3.5 years old) and (ii) it's pretty complex due to the need to
handle qemu's ability to emulate differing page sizes on host and guest.
It turns out that we only really need this for CI when host and guest
have the same page size (4KiB), so we *could* just pass the madvise()s
through. I wouldn't expect such a patch to ever land upstream in qemu,
but it satisfies our needs I think. So this PR modifies our CI setup to
patch qemu before building it locally with a little one-off patch.
[1]
https://github.com/bytecodealliance/wasmtime/pull/2518#issuecomment-747280133
[2]
https://lists.gnu.org/archive/html/qemu-devel/2018-08/msg05416.html
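Roughly, the CI step now looks like this (the QEMU version and patch file
name are illustrative, not the exact values used in CI):

```sh
# Download a QEMU release, apply the one-off madvise(MADV_DONTNEED)
# passthrough patch, and build the user-mode emulators used on CI.
curl -LO https://download.qemu.org/qemu-6.1.0.tar.xz
tar xf qemu-6.1.0.tar.xz
cd qemu-6.1.0
patch -p1 < ../ci/qemu-madvise.patch
./configure --target-list=aarch64-linux-user,s390x-linux-user
make -j"$(nproc)"
```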
Peepmatic was an early attempt at a DSL for peephole optimizations, with the
idea that maybe sometime in the future we could use it for instruction
selection as well. It didn't really pan out, however:
* Peepmatic wasn't quite flexible enough, and adding new operators or snippets
of code implemented externally in Rust was a bit of a pain.
* The performance was never competitive with the hand-written peephole
optimizers. It was *very* size efficient, but that came at the cost of
run-time efficiency. Everything was table-based and interpreted, rather than
generating any Rust code.
Ultimately, for these reasons, we never turned Peepmatic on by default.
These days, we just landed the ISLE domain-specific language, and it is better
suited than Peepmatic for all the things that Peepmatic was originally designed
to do. It is more flexible and easier to integrate with external Rust code. It
has better time efficiency, meeting or even beating hand-written code. I think a
small part of the reason why ISLE excels in these things is because its design
was informed by Peepmatic's failures. I still plan on continuing Peepmatic's
mission to make Cranelift's peephole optimizer passes generated from DSL rewrite
rules, but using ISLE instead of Peepmatic.
Thank you Peepmatic, rest in peace!
* Change the bump-version workflow's schedule
Either I don't understand cron or GitHub doesn't understand cron. It's
not clear which. I think that
https://github.com/bytecodealliance/wasmtime/pull/3511 may have fallen
within our schedule but it was supposed to be on a weekday. Otherwise
https://github.com/bytecodealliance/wasmtime/pull/3499 was certainly
spurious. This commit moves to a simpler "just do it on the same day
each month" and we can manually figure out weekdays and such. Hopefully
this should reduce the number of spurious PRs we're getting to bump
versions.
This also removes the script to force a version bump since I found a
button on the GitHub UI to do the same thing. Additionally I've updated
the patch-release documentation to use this button. Note that this
button takes inputs as well, which means we can further automate patch
releases to look even more like the normal release process, differing only
in one part of the argument used to trigger the workflow.
* Fix a typo
This commit removes the Lightbeam backend from Wasmtime as per [RFC 14].
This backend hasn't received maintenance in quite some time, and as [RFC
14] indicates, it no longer meets the threshold for keeping the code
in-tree.
A fast "baseline" compiler may still be added in the future. The
addition of such a backend should be in line with [RFC 14], though, with
the principles we now have for stable releases of Wasmtime. I'll close
out Lightbeam-related issues once this is merged.
[RFC 14]: https://github.com/bytecodealliance/rfcs/pull/14
Implement Wasmtime's new API as designed by RFC 11. This is quite a large commit which has had lots of discussion externally, so for more information it's best to read the RFC thread and the PR thread.
Apparently `powershell Compress-Archive` produces zip files with
backslashes in filenames, which makes them impossible to extract with
some Unix extraction tools. For example, in [this failure][build],
macOS's built-in unzip feature creates filenames containing backslashes
rather than subdirectories.
[build]: https://github.com/bytecodealliance/wasmtime-go/runs/2680596219?check_suite_focus=true
This commit removes the publish step in GitHub Actions, instead
folding all functionality into the release build steps. This avoids
having a separately scheduled job after all the release build jobs which
ends up getting delayed for quite a long time given the current
scheduling algorithm.
This involves refactoring the tarball assembly scripts as well as the
github asset upload script. Tarball assembly now manages everything
internally and does platform-specific bits where necessary. The upload
script is restructured to be run in parallel (in theory) and hopefully
catches various errors and tries to not stomp over everyone else's work.
The main trickiness here is handling `dev`, which is less critical for
correctness than the tags themselves.
As a small build tweak, the QEMU build for cross-compiled builders is
now cached instead of being unconditionally rebuilt as before, shaving a
minute or two off build time.
* wasi-nn: turn it on by default
This change makes the wasi-nn Cargo feature a default feature. Previously, a wasi-nn user would have to build a separate Wasmtime binary (e.g. `cargo build --features wasi-nn ...`) to use wasi-nn and the resulting binary would require OpenVINO shared libraries to be present in the environment in order to run (otherwise it would fail immediately with linking errors). With recent changes to the `openvino` crate, the wasi-nn implementation can defer the loading of the OpenVINO shared libraries until runtime (i.e., when the user Wasm program calls `wasi_ephemeral_nn::load`) and display a user-level error if anything goes wrong (e.g., the OpenVINO libraries are not present on the system). This runtime-linking addition allows the wasi-nn feature to be turned on by default and shipped with upcoming releases of Wasmtime. This change should be transparent for users who do not use wasi-nn: the `openvino` crate is small and the newly-available wasi-nn imports only affect programs in which they are used.
For those interested in reviewing the runtime linking approach added to the `openvino` crate, see https://github.com/intel/openvino-rs/pull/19.
* wasi-nn spec path: don't use canonicalize
* Allow dependencies using the ISC license
The ISC license should be [just as permissive](https://choosealicense.com/licenses/isc) as MIT, e.g., with no additional limitations.
* Add a `--wasi-modules` flag
This flag controls which WASI modules are made available to the Wasm program. This initial commit enables `wasi-common` by default (equivalent to `--wasi-modules=all`) and allows `wasi-nn` and `wasi-crypto` to be added in either individually (e.g., `--wasi-modules=wasi-nn`) or as a group (e.g., `--wasi-modules=all-experimental`).
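For illustration, invocations would look roughly like this (the module file
name is a placeholder; the exact combination semantics are as described
above):

```sh
# Default behavior: the stable wasi-common modules are available.
wasmtime run demo.wasm
# Explicitly the same as the default.
wasmtime run --wasi-modules=all demo.wasm
# Opt in to an individual experimental module.
wasmtime run --wasi-modules=wasi-nn demo.wasm
# Opt in to the experimental modules as a group.
wasmtime run --wasi-modules=all-experimental demo.wasm
```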
* wasi-crypto: fix unused dependency
Co-authored-by: Pat Hickey <pat@moreproductive.org>
This PR switches the default backend on x86, for both the
`cranelift-codegen` crate and for Wasmtime, to the new
(`MachInst`-style, `VCode`-based) backend that has been under
development and testing for some time now.
The old backend is still available by default in builds with the
`old-x86-backend` feature, or by requesting `BackendVariant::Legacy`
from the appropriate APIs.
As part of that switch, it adds some more runtime-configurable plumbing
to the testing infrastructure so that tests can be run using the
appropriate backend. `clif-util test` is now capable of parsing a
backend selector option from filetests and instantiating the correct
backend.
CI has been updated so that the old x86 backend continues to run its
tests, just as we used to run the new x64 backend separately.
At some point, we will remove the old x86 backend entirely, once we are
satisfied that the new backend has not caused any unforeseen issues and
we do not need to revert.
* Use stable Rust on CI to test the x64 backend
This commit leverages the newly-released 1.51.0 compiler to test the
new backend on Windows and Linux with a stable compiler instead of a
nightly compiler. This isolates the nightly build to just the nightly
documentation generation and fuzzing, both of which rely on nightly for
the best results right now.
* Use updated stable in book build job
* Run rustfmt for new stable
* Silence new warnings for wasi-nn
* Allow some dead code in the x64 backend
Looks like new rustc is better about emitting some dead-code warnings
* Update rust in peepmatic job
* Fix a test in the pooling allocator
* Remove `package.metadata.docs.rs` temporarily
Needs resolution of https://github.com/rust-lang/cargo/pull/9300 first
* Fix a warning in a wasi-nn example
* Add support for the experimental wasi-crypto APIs
The sole purpose of the implementation is to allow bindings and
application developers to test the proposed APIs.
Rust and AssemblyScript bindings are also available as examples.
Like `wasi-nn`, it is currently disabled by default, and requires
the `wasi-crypto` feature flag to be compiled in.
* Rename the wasi-crypto/spec submodule
* Add a path dependency into the submodule for wasi-crypto
* Tell the publish script to vendor wasi-crypto
Since downloading the wasi-nn artifacts takes a bit of time, the example script's first argument now serves as a directory that can be reused across runs. This change cleans up temporary directories only when a directory was not specified.
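Assuming this refers to the `ci/run-wasi-nn-example.sh` script mentioned
elsewhere in these notes, usage would look roughly like this (the directory
path is illustrative):

```sh
# The first run downloads artifacts into the given directory; later runs reuse it.
./ci/run-wasi-nn-example.sh /tmp/wasi-nn-artifacts
```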
* Add an initial wasi-nn implementation for Wasmtime
This change adds a crate, `wasmtime-wasi-nn`, that uses `wiggle` to expose the current state of the wasi-nn API and `openvino` to implement the exposed functions. It includes an end-to-end test demonstrating how to do classification using wasi-nn:
- `crates/wasi-nn/tests/classification-example` contains Rust code that is compiled to the `wasm32-wasi` target and run with a Wasmtime embedding that exposes the wasi-nn calls
- the example uses Rust bindings for wasi-nn contained in `crates/wasi-nn/tests/wasi-nn-rust-bindings`; this crate contains code generated by `witx-bindgen` and eventually should be its own standalone crate
* Test wasi-nn as a CI step
This change adds:
- a GitHub action for installing OpenVINO
- a script, `ci/run-wasi-nn-example.sh`, to run the classification example
We do a `cargo fetch --locked` for most of our CI builds, but
`run-experimental-x64-ci.sh` was not doing this. As a result, some CI
runs seem to fail depending on which versions of crates they download. A
common failure mode is that two different versions of the `syn` crate
get into the build somehow, resulting in errors in wiggle/witx.
This change simply adds the `--locked` flag to the `cargo test` run for
the new x64 backend.
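The change itself is essentially the following (the script's backend-selection
flags are omitted here):

```sh
# Pin crate downloads and the test build to the versions in Cargo.lock so CI
# cannot drift to newer (and possibly conflicting) crate versions.
cargo fetch --locked
cargo test --locked
```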
This commit extends our CI to produce release artifacts for the
x86_64-pc-windows-gnu target. This was originally motivated by the [go
extension], where it looks like the Go toolchain primarily interoperates
with MinGW rather than natively with MSVC.
The support here turned out to be quite trivial, largely just adding the
configuration to make the release. I don't think we should necessarily
commit to this being a primary platform for Wasmtime at this time
though. If the support here regresses in the future for a
difficult-to-fix reason I think it would be fine to back out the
platform at least temporarily.
Note that this does not add a full test suite for the MinGW target, only
a release builder. This release builder does run tests, but not with
full debug assertions enabled.
[go extension]: https://github.com/bytecodealliance/wasmtime-go/issues/3

Closes #1535
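The core of the new builder is just a cross-target build along these lines
(a sketch; the real builder also installs the MinGW toolchain and packages
the artifacts):

```sh
# Add the Windows GNU target and build the release binary for it.
rustup target add x86_64-pc-windows-gnu
cargo build --release --target x86_64-pc-windows-gnu
```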
* Add AArch64 tests to CI
This commit enhances our CI with an AArch64 builder. Currently we have
no physical hardware to run on so for now we run all tests in an
emulator. The AArch64 build is cross-compiled from x86_64 Linux.
Tests all happen in release mode with a recent version of QEMU (recent
version because it's so much faster, and in release mode because debug
mode tests take quite a long time in an emulator).
The goal here was not to get all tests passing on CI, but rather to get
AArch64 running on CI and get it green at the same time. To achieve that
goal many tests are now ignored on aarch64 platforms. Many tests fail
due to unimplemented functionality in the aarch64 backend (#1521), and
all wasmtime tests involving compilation are also disabled due to a
panic when attempting to generate instruction offset information for
trap symbolication (#1523).
Despite this, though, all Cranelift tests and other wasmtime tests
should be running on AArch64 through QEMU with this PR. Additionally
we'll have an AArch64 binary release of Wasmtime for Linux, although it
won't be too useful just yet since it will panic on almost all wasm
modules.
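The emulated test runs boil down to something like the following (the QEMU
sysroot path is illustrative; the environment variable is Cargo's standard
per-target runner override):

```sh
# Cross-compile for aarch64 and run the resulting test binaries under QEMU.
export CARGO_TARGET_AARCH64_UNKNOWN_LINUX_GNU_RUNNER="qemu-aarch64 -L /usr/aarch64-linux-gnu"
cargo test --release --target aarch64-unknown-linux-gnu
```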
* Review comments
* Remove the wasmtime Python extension from this repo
This commit removes the `crates/misc/py` folder and all associated
doo-dads like CI. This module has been rewritten to use the C API
natively and now lives at
https://github.com/bytecodealliance/wasmtime-py as discussed on #1390
* Remove the old wast and wasm2obj commands.
These are subsumed by the `wasmtime wast` and `wasmtime wasm2obj` commands.
Fixes #827.
* Remove wasm2obj install commands.
* Move all examples to a top-level directory
This commit moves all API examples (Rust and C) to a top-level
`examples` directory. This is intended to make it more discoverable and
conventional as to where examples are located. Additionally all examples
are now available in both Rust and C to see how to execute the example
in the language you're familiar with. The intention is that as more
languages are supported we'd add more languages as examples here too.
Each example is also accompanied by either a `*.wat` file which is
parsed as input, or a Rust project in a `wasm` folder which is compiled
as input.
A simple driver crate was also added to `crates/misc` which executes all
the examples on CI, ensuring the C and Rust examples all execute
successfully.
* Move the C API to a separate crate
This commit moves the C API from `crates/api/src/wasm.rs` to
`crates/capi/src/lib.rs` to be located in a separate crate. There's a
number of reasons for this:
* When a Rust program depends on the `wasmtime` crate, there's no need
to compile in the C API.
* This should improve compile times of the `wasmtime` crate since it no
longer produces artifacts which aren't always needed.
* The development of the C API can be guaranteed to only use the public
API of the `wasmtime` crate itself.
Some CI pieces are tweaked, and overall this shouldn't have much impact
on users; it's intended as a cleanup/speedup for developers!
* Disable rustdoc/tests for capi
* Review feedback
* Add back in accidentally deleted comment
* More renamings
* Try to fix dotnet build
A small amount of cleanup for the top level of this repository: the
`installer` directory just had one miscellaneous file for the Windows
installer, which we can move into `ci`, another miscellaneous folder
that already has a few files in it.
This commit migrates wasmtime's CI infrastructure from Azure Pipelines
to Github Actions. Using Github Actions has a few benefits over other
offerings:
* Being natively integrated with Github means that there's no extra
user account configuration or access control management; it's all
inherited from already existing Github permissions.
* Github Actions gives 20 parallel builders instead of Azure's 10 by
default, which is a nice boost to have!
Overall I've found Github Actions to feel a bit cleaner than Azure
Pipelines as well. Subjectively I've found the configuration to be more
readable and more pleasant to work with, although they're both just as
"powerful" I think. Additionally Github Actions has been pretty solid in
my own personal testing for a number of other projects.
The main trickiness with wasmtime's CI is the rolling `dev` release of
the master branch as well as binary releases for tags. Github Actions
doesn't have quite as much built-in functionality as Azure Pipelines,
but Github Actions does have a nice feature where you can define the
code for an action locally rather than only using built-in actions.
This migration adds three local actions with some associated JS code to
run them (currently it looks like JS is basically required):
* An `install-rust` action papers over the gotchas about installing
Rust, allowing Rust installation to be a one-liner in the configuration.
* A `binary-compatible-builds` action allows easily configuring the
wheels and the binaries to be "more binary compatible", handling
things like compilation flags on OSX and Windows as well as the
`centos:6` container on Linux.
* The `github-release` action is the logic using the `@actions/github`
JS package to orchestrate the custom way we manage rolling releases,
ensuring that a new release is made for the master branch under `dev`
(deleting the previous tag/release ahead of time) and then also
manages tagged releases by uploading them there.
I'm hoping that most of the inline actions here will largely go away.
For example `install-rust` should be simply `rustup update $toolchain`
once various environment issues are fixed on Github Actions runner
images. Additionally `github-release` will ideally migrate to something
like https://github.com/actions/create-release or similar once it has
enough functionality. I'm also hoping that the maintenance in the
meantime of these actions is pretty low-cost, but if it becomes an issue
we can look into other solutions!