* Initial skeleton of some component model processing
This commit is the first of what will likely be many to implement the
component model proposal in Wasmtime. This will be structured as a
series of incremental commits, most of which haven't been written yet.
My hope is that landing the work incrementally will make it easier to
review and to test each step in isolation.
Here much of the skeleton of how components are going to work in
Wasmtime is sketched out. This is not a complete implementation of the
component model, so it's not all that useful yet, but some things you
can do (sketched in the example after this list) are:
* Process the type section into a representation amenable for working
with in Wasmtime.
* Process the module section and register core wasm modules.
* Process the instance section for core wasm modules.
* Process core wasm module imports.
* Process core wasm instance aliasing.
* Ability to compile a component with core wasm embedded.
* Ability to instantiate a component with no imports.
* Ability to get functions from this component.
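For a concrete picture, here is a hedged sketch of those embedding steps. The component API was brand new at this point, so treat the exact names (`Component::new`, `Linker::instantiate`, `get_func`) and the `component.wasm` path as approximations rather than the final API:

```rust
use wasmtime::{Engine, Store};
use wasmtime::component::{Component, Linker};

fn main() {
    let engine = Engine::default();

    // Compile a component (with core wasm embedded) from its binary encoding.
    let bytes = std::fs::read("component.wasm").expect("read component");
    let component = Component::new(&engine, &bytes).expect("compile component");

    // Instantiate a component that has no imports.
    let mut store = Store::new(&engine, ());
    let linker = Linker::new(&engine);
    let instance = linker
        .instantiate(&mut store, &component)
        .expect("instantiate component");

    // Get a function from the instance; actually calling it isn't
    // supported yet at this stage of the implementation.
    let _func = instance.get_func(&mut store, "f");
}
```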
This is already starting to diverge from the previous module linking
representation in that a `Component` will try to avoid unnecessary
metadata about the component and instead internally keep only the bare
minimum necessary to instantiate the module. My hope is that we can
avoid constructing most of the index spaces during instantiation only
for it all to get thrown away. Additionally I'm predicting that we'll
need to see through processing where possible to know how to generate
adapters and where they are fused.
At this time you can't actually call a component's functions, and that's
the next PR that I would like to make.
* Add tests for the component model support
This commit uses the recently updated wasm-tools crates to add tests for
the component model added in the previous commit. This involved updating
the `wasmtime-wast` crate for component-model changes. Currently the
component support there is quite primitive, but enough to at least
instantiate components and verify the internals of Wasmtime are all
working correctly. Additionally some simple tests for the embedding API
have also been added.
* Refactor binary-compatible-builds for releases
I was poking around this yesterday and noticed a few things that could
be improved for our release builds:
* The CentOS container for the x86_64 builds contained a bunch of extra
tooling we no longer need, such as python3 and a C++ compiler. Along
with custom toolchain bits, this could all be removed since the C we
include now is quite simple.
* The aarch64 and s390x cross-compiled builds had relatively high glibc
version requirements compared to the x86_64 build. This was because we
didn't use a container to build the cross-compiled binaries. I added
containers here, along the lines of the x86_64 build, that use an older
glibc to build the release binaries and lower our version requirement.
This lowers the aarch64 requirement from glibc 2.28 to 2.17 and the
s390x requirement from 2.28 to 2.16.
* To make the containers a bit easier to read/write I added
`Dockerfile`s for them in a new `ci/docker` directory instead of
hardcoding install commands in JS.
This isn't intended to be a big change for anyone, just to keep our
Linux-based builds as consistent as we reasonably can.
* Remove temporary change
This PR fixes #4066: it modifies the Cranelift `build.rs` workflow to
invoke the ISLE DSL compiler on every compilation, rather than only
when the user specifies a special "rebuild ISLE" feature.
The main benefit of this change is that it vastly simplifies the mental
model required of developers, and removes a bunch of failure modes
we have tried to work around in other ways. There is now just one
"source of truth", the ISLE source itself, in the repository, and so there
is no need to understand a special "rebuild" step and how to handle
merge errors. There is no special process needed to develop the compiler
when modifying the DSL. And there is no "noise" in the git history produced
by constantly-regenerated files.
The two main downsides we discussed in #4066 are:
- Compile time could increase, by adding more to the "meta" step before the main build;
- It becomes less obvious where the source definitions are (everything becomes
more "magic"), which makes exploration and debugging harder.
This PR addresses each of these concerns:
1. To maintain reasonable compile time, it includes work to cut down the
dependencies of the `cranelift-isle` crate to *nothing* (only the Rust stdlib),
in the default build. It does this by putting the error-reporting bits
(`miette` crate) under an optional feature, and the logging (`log` crate) under
a feature-controlled macro, and manually writing an `Error` impl rather than
using `thiserror`. This completely avoids proc macros and the `syn` build slowness.
The user can still get nice errors out of `miette`: this is enabled by specifying
a Cargo feature `--features isle-errors`.
2. To allow the user to optionally inspect the generated source, which nominally
lives in a hard-to-find path inside `target/` now, this PR adds a feature `isle-in-source-tree`
that, as implied by the name, moves the target for ISLE generated source into
the source tree, at `cranelift/codegen/isle_generated_source/`. It seems reasonable
to do this when an explicit feature (opt-in) is specified because this is how ISLE regeneration
currently works as well. To prevent surprises, if the feature is *not* specified, the
build fails if this directory exists (see the sketch after this list).
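A minimal sketch of what that build-script guard might look like (illustrative only; the real `build.rs` logic in cranelift-codegen is more involved, and the directory/feature names follow the text above):

```rust
// build.rs (sketch): fail the default build if the opt-in directory exists.
use std::env;
use std::path::Path;

fn main() {
    // Cargo exposes enabled features as CARGO_FEATURE_* environment
    // variables, with dashes mapped to underscores.
    let in_source_tree = env::var("CARGO_FEATURE_ISLE_IN_SOURCE_TREE").is_ok();
    let dir = Path::new("isle_generated_source");
    if !in_source_tree && dir.exists() {
        panic!(
            "`{}` exists but the `isle-in-source-tree` feature is not \
             enabled; remove the directory (or enable the feature) to avoid \
             building against stale generated code",
            dir.display()
        );
    }
    // ... otherwise run the ISLE compiler with its output under OUT_DIR ...
}
```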
* ISLE compiler: fix priority-trie interval bug. (#4093)
This PR fixes a bug in the ISLE compiler related to rule priorities.
An important note first: the bug did not affect the correctness of the
Cranelift backends, either in theory (because the rules should be
correct applied in any order, even contrary to the stated priorities)
or in practice (because the generated code actually does not change at
all with the DSL compiler fix, only with a separate minimized bug
example).
The issue was a simple swap of `min` for `max` (see first
commit). This is the minimal fix, I think, to get a correct
priority-trie with the minimized bug example in this commit.
However, while debugging this, I started to convince myself that the
complexity of merging multiple priority ranges using the sort of
hybrid interval tree / string-matching trie data structure was
unneeded. The original design was built with the assumption we might
have a bunch of different priority levels, and would need the
efficiency of merging where possible. But in practice we haven't used
priorities this way: the vast majority of lowering rules exist at the
default (priority 0), and just a few overrides are explicitly at prio
1, 2 or (rarely) 3.
So, it turns out to be a lot simpler to label trie edges with (prio,
symbol) rather than (prio-range, symbol), and delete the whole mess of
interval-splitting logic on insertion. It's easier (IMHO) to convince
oneself that the resulting insertion algorithm is correct.
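To illustrate, here's a self-contained sketch of the simplified labeling (illustrative types only, not the actual `cranelift-isle` internals):

```rust
// Each edge carries a single (prio, symbol) pair rather than a
// (prio-range, symbol) pair, so insertion needs no interval splitting.
struct TrieNode {
    edges: Vec<TrieEdge>,
}

struct TrieEdge {
    prio: i64,
    symbol: String, // stand-in for the real match-op symbol
    node: TrieNode,
}

impl TrieNode {
    fn new() -> TrieNode {
        TrieNode { edges: Vec::new() }
    }

    // Insertion: reuse an edge with an identical (prio, symbol) label,
    // otherwise append a fresh one. No range merging or splitting.
    fn insert(&mut self, prio: i64, symbol: &str) -> &mut TrieNode {
        let idx = match self
            .edges
            .iter()
            .position(|e| e.prio == prio && e.symbol == symbol)
        {
            Some(idx) => idx,
            None => {
                self.edges.push(TrieEdge {
                    prio,
                    symbol: symbol.to_string(),
                    node: TrieNode::new(),
                });
                self.edges.len() - 1
            }
        };
        &mut self.edges[idx].node
    }
}
```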
I was worried that this might impact the size of the generated Rust
code or its runtime, but in fact, to my initial surprise (though it
makes sense given the above "rarely used" factor), the generated code
with this compiler fix is *exactly the same*. I rebuilt with `--features
rebuild-isle,all-arch` but... there were no diffs to commit! To me this
is the simplest evidence that we didn't really need that complexity.
* Fix earlier commit from #4093: properly sort trie.
This commit fixes an in-hindsight-obvious bug in #4093: the trie's edges
must be sorted recursively, not just at the top level.
With this fix, the generated code differs only in one cosmetic way (a
let-binding moves) but otherwise is the same.
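Continuing the sketch from the previous commit message, the fix amounts to recursing when sorting edges by descending priority (an assumed shape, not the real code):

```rust
fn sort(node: &mut TrieNode) {
    // Higher-priority edges first; a stable sort keeps insertion order
    // among edges of equal priority.
    node.edges.sort_by_key(|e| std::cmp::Reverse(e.prio));
    for edge in &mut node.edges {
        sort(&mut edge.node); // the missing recursive step
    }
}
```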
This includes @fitzgen's fix to the CI (from the revert in #4102) that
deletes manifests to actually check that the checked-in source is
consistent with the checked-in compiler. The force-rebuild step is now
in a shell script for convenience: anyone hacking on the ISLE compiler
itself can use this script to more easily rebuild everything.
* Add a note to build.rs as a reminder to update force-rebuild-isle.sh
* Revert "ISLE compiler: fix priority-trie interval bug. (#4093)"
This reverts commit 2122337112.
* ci: Delete ISLE manifest files to force an ISLE rebuild
This makes it so that we actually rebuild ISLE rather than just check that the
manifests are up-to-date.
The automated release process failed last night because the `./publish
bump` process failed to automatically increment the version number. This
is fixed in #4097 but it also seems prudent to run this continually on
CI as well to ensure that this doesn't regress again.
* Update more workflows to only this repository
This adds `if: github.repository == 'bytecodealliance/wasmtime'` to a
few more workflows related to the release process which should only run
in this repository and no other (e.g. forks).
* Also only run verify-publish in the upstream repo
No need for local development to be burdened with ensuring everything
is actually publishable; that's a concern only for the main repository.
* Gate workflows which need secrets on this repository
* Fix the release process's latest step
The automated release of 0.36.0 was attempted last night but it failed
due to a [failure on CI][bad]. This failure came about because CI was
trying to change the release date of 0.35.0, which ended up not
modifying any files, so `git` failed to commit since nothing changed.
The original bug though was that 0.35.0 was being changed instead of
0.36.0. The reason for this is that the script used
`--sort=-committerdate` to determine the latest branch. I forgot,
though, that with backports it's possible for 0.35.0 to have a more
recent commit date than 0.36.0 (as is currently the case). This commit
updates the script to perform a numerical sort outside of git to get the
latest release branch.
Additionally this adds in some `set -ex` commands for the shell which
should help print out commands as they're run and assist in future
debugging.
[bad]: https://github.com/bytecodealliance/wasmtime/runs/6087188708
* Replace sed with rust
This commit removes support for the `userfaultfd` or "uffd" syscall on
Linux. This support was originally added for users migrating from Lucet
to Wasmtime, but the recent development of kernel-supported
copy-on-write memory initialization wound up being more appropriate for
these use cases than userfaultfd. The main reasons for moving to
copy-on-write initialization are:
* The `userfaultfd` feature was never necessarily intended for this
style of use case with wasm and was susceptible to subtle and rare
bugs that were extremely difficult to track down. We were never 100%
certain that there were kernel bugs related to userfaultfd but the
suspicion never went away.
* Handling faults with userfaultfd was always slow and single-threaded.
Only one thread could handle faults and traveling to user-space to
handle faults is inherently slower than handling them all in the
kernel. The single-threaded aspect in particular presented a
significant scaling bottleneck for embeddings that want to run many
wasm instances in parallel.
* One of the major benefits of userfaultfd was lazy initialization of
wasm linear memory which is also achieved with the copy-on-write
initialization support we have right now.
* One of the suspected benefits of userfaultfd was less frobbing of the
kernel vma structures when wasm modules are instantiated. Currently
the copy-on-write support has a mitigation where we attempt to reuse
the memory images where possible to avoid changing vma structures.
When comparing this to userfaultfd's performance it was found that
kernel modifications of vmas aren't a worrisome bottleneck so
copy-on-write is suitable for this as well.
Overall there are no remaining benefits that userfaultfd gives that
copy-on-write doesn't, and copy-on-write solves a major downside of
userfaultfd: the scaling bottleneck of a single faulting thread.
Additionally copy-on-write support seems much more robust in terms of
kernel implementation since it's only using standard memory-management
syscalls which are heavily exercised. Finally copy-on-write support
provides a new bonus where read-only memory in WebAssembly can be mapped
directly to the same kernel cache page, even amongst many wasm instances
of the same module, which was never possible with userfaultfd.
In light of all this it's expected that all users of userfaultfd should
migrate to the copy-on-write initialization of Wasmtime (which is
enabled by default).
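For embedders checking their configuration, the copy-on-write path is controlled by a `Config` knob. A minimal sketch, assuming the `Config::memory_init_cow` method name (it's on by default, so setting it is a no-op shown only for clarity):

```rust
use wasmtime::{Config, Engine};

fn main() {
    let mut config = Config::new();
    // On by default; shown explicitly for clarity. On Linux this uses
    // memfd-backed copy-on-write mappings for memory initialization.
    config.memory_init_cow(true);
    let engine = Engine::new(&config).expect("create engine");
    let _ = engine;
}
```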
* Add m1 to release process
This creates a pre-compiled binary for M1 Macs and adds
a link to review Embark Studios' CI for verification.
* remove test for macos arm
Tests will not succeed for macOS ARM until GitHub provides an M1-hosted runner.
Co-authored-by: Bailey Hayes <bhayes@singlestore.com>
* Bump to 0.36.0
* Add a two-week delay to Wasmtime's release process
This commit is a proposal to update Wasmtime's release process with a
two-week delay from branching a release until it's actually officially
released. We've had two issues lately that came up which led to this proposal:
* In #3915 it was realized that changes just before the 0.35.0 release
weren't enough for an embedding use case, but the PR didn't meet the
expectations for a full patch release.
* At Fastly we were about to start rolling out a new version of Wasmtime
when over the weekend the fuzz bug #3951 was found. This led to the
desire internally to have a "must have been fuzzed for this long"
period of time for Wasmtime changes which we felt were better
reflected in the release process itself rather than something about
Fastly's own integration with Wasmtime.
This commit updates the automation for releases to unconditionally
create a `release-X.Y.Z` branch on the 5th of every month. The actual
release from this branch is then performed on the 20th of every month,
roughly two weeks later. This should provide a period of time to ensure
that all changes in a release are fuzzed for at least two weeks and
avoid any further surprises. This should also help with any last-minute
changes made just before a release if they need tweaking since
backporting to a not-yet-released branch is much easier.
Overall there are some new properties about Wasmtime with this proposal
as well:
* The `main` branch will always have a section in `RELEASES.md` which is
listed as "Unreleased" for us to fill out.
* The `main` branch will always be a version ahead of the latest
release. For example it will be bumped pre-emptively as part of the
release process on the 5th: if `release-2.0.0` is created, then the
`main` branch will move to Wasmtime 3.0.0.
* Dates for major versions are automatically updated in the
`RELEASES.md` notes.
The associated documentation for our release process is updated and the
various scripts should all be updated now as well with this commit.
* Add notes on a security patch
* Clarify security fixes shouldn't be previewed early on CI
This commit limits the triage workflow to only `bytecodealliance/wasmtime`.
Hopefully this will prevent the labeler from adding the "do you even fuzz?"
comment to PRs to wasmtime's forks.
Updates the OpenVINO backend for wasi-nn to pre-configure all inputs as `NHWC`. This is not a long-term fix but no worse than the status quo, which configures all input tensors to the same layout. This change updates the CI script to use the latest version of OpenVINO. Closes #3948.
Co-authored-by: Andrew Brown <andrew.brown@intel.com>
* Support disabling backtraces at compile time
This commit adds support to Wasmtime to disable, at compile time, the
gathering of backtraces on traps. The `wasmtime` crate now sports a
`wasm-backtrace` feature which, when disabled, will mean that backtraces
are never collected at compile time nor are unwinding tables inserted
into compiled objects.
The motivation for this commit stems from the fact that generating a
backtrace is quite a slow operation. Currently backtrace generation is
done with libunwind and `_Unwind_Backtrace` typically found in glibc or
other system libraries. When thousands of modules are loaded into the
same process though this means that the initial backtrace can take
nearly half a second and all subsequent backtraces can take upwards of
hundreds of milliseconds. Relative to all other operations in Wasmtime
this is extremely expensive at this time. In the future we'd like to
implement a more performant backtrace scheme but such an implementation
would require coordination with Cranelift and is a big chunk of work
that may take some time, so in the meantime if embedders don't need a
backtrace they can still use this option to disable backtraces at
compile time and avoid the performance pitfalls of collecting
backtraces.
I originally tried to make this a runtime configuration
option but ended up opting for a compile-time option because `Trap::new`
otherwise has no arguments and always captures a backtrace. By making
this a compile-time option it was possible to configure, statically, the
behavior of `Trap::new`. Additionally I also tried to minimize the
amount of `#[cfg]` necessary by largely only having it at the producer
and consumer sites.
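The gating pattern looks roughly like the following. This is an illustrative sketch using the `backtrace` crate, not the actual wasmtime internals:

```rust
pub struct Trap {
    message: String,
    // The field itself only exists when the feature is enabled, so no
    // unwinding machinery is referenced in a backtrace-less build.
    #[cfg(feature = "wasm-backtrace")]
    backtrace: backtrace::Backtrace,
}

impl Trap {
    // `Trap::new` has no extra arguments, so the capture decision must be
    // made statically rather than via runtime configuration.
    pub fn new(message: impl Into<String>) -> Trap {
        Trap {
            message: message.into(),
            #[cfg(feature = "wasm-backtrace")]
            backtrace: backtrace::Backtrace::new(),
        }
    }
}
```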
Also a noteworthy restriction of this implementation is that if
backtrace support is disabled at compile time then reference types
support will be unconditionally disabled at runtime. With backtrace
support disabled there's no way to trace the stack of wasm frames which
means that GC can't happen given our current implementation.
* Always enable backtraces for the C API
After adding the `call`-oriented benchmark recently I noticed that
running benchmarks on CI is taking 30+ minutes, which is not intended.
Instead of doing a full benchmark run on CI (which I believe we're not
looking at anyway), only run the benchmarks for a single iteration to
ensure they still work, but otherwise don't collect statistics about
them.
Additionally cap the number of parallel instantiations to 16 to avoid
running tons of tests for machines with lots of cpus.
This avoids a situation such as the last point release where we had
different point releases all use the same branch name, which forced us
to make one release at a time. Putting the version number in the branch
name should hopefully fix this.
It seems our `compile` fuzz target for ISLE has not been regularly
tested, as it was never updated for the `isle` -> `cranelift_isle` crate
renaming. This PR fixes it to compile again.
This also includes a simple fix in the typechecking: when verifying that
a term decl is valid, we might insert a term ID into the name->ID map
before fully checking that all of the types exist, and then skip (for
error-recovery purposes) the actual push onto the term-signature vector
if one of the types has an error. This phantom TID can later cause a
panic. The fix is to avoid adding to the map until we have fully
verified the term decl.
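The shape of the fix, in a hedged sketch (simplified stand-in types, not the real `cranelift-isle` typechecker):

```rust
use std::collections::HashMap;

struct TypeEnv {
    term_ids: HashMap<String, usize>, // name -> term ID
    term_sigs: Vec<String>,           // signature per term ID
}

impl TypeEnv {
    fn add_term_decl(&mut self, name: &str, sig: Option<String>) {
        // Error recovery: bail out *before* touching the name->ID map, so a
        // bad decl can't leave behind a phantom TID.
        let Some(sig) = sig else { return };
        let tid = self.term_sigs.len();
        self.term_sigs.push(sig);
        self.term_ids.insert(name.to_string(), tid);
    }
}
```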
Triage is scheduled to run once every 5 minutes but it's often queued up
during the day as builders are otherwise occupied with actual CI builds.
This can end up in a lot of triage tasks queued up back-to-back. While
this doesn't seem to be a huge issue one thing I suspect is that this is
perhaps somewhat related to API rate limits getting hit when recent
versions were published. In any case there's no need for each and every
triage run to do something, it's fine to only have one at a time
pending.
Looks like the 0.34.1 release is missing artifacts and some jobs
building artifacts ended up being cancelled because of API rate limits
being hit on the builders. Artifacts are uploaded to the job, however,
which means we can always go back and grab them to upload them, unless
the whole job was cancelled. For 0.34.1 it looks like the Linux builder
hit an error but its error then subsequently cancelled the Windows
builders, so we don't actually have artifacts for Windows for the 0.34.1
release. This will hopefully prevent this from causing further issues in
the future where if one builder hits an error while uploading artifacts
the others will continue and we can manually upload what's missing if
necessary.
cc #3812
* Automatically label Wasmtime `Config` object changes
* Add the "Label Messager" github action
This allows us to have custom messages left in comments on issues and pull
requests when they get labeled with a specific label.
* Add a message for `wasmtime:config`-labeled pull requests
* CI: Consolidate issue/PR triage workflows
In #3721, we have been discussing what to do about the ARM32 backend in
Cranelift. Currently, this backend supports only 32-bit types, which is
insufficient for full Wasm-MVP; it's missing other critical bits, like
floating-point support; and it has only ever been exercised, AFAIK, via
the filetests for the individual CLIF instructions that are implemented.
We were very very thankful for the original contribution of this
backend, even in its partial state, and we had hoped at the time that we
could eventually mature it in-tree until it supported e.g. Wasm and
other use-cases. But that hasn't yet happened -- to the blame of no-one,
to be clear, we just haven't had a contributor with sufficient time.
Unfortunately, the existence of the backend and lack of active
maintainer now potentially pose a bit of a burden as we hope to make
continuing changes to the backend framework. For example, the ISLE
migration, and the use of regalloc2 that it will allow, would need all
of the existing lowering patterns in the hand-written ARM32 backend to
be rewritten as ISLE rules.
Given that we don't currently have the resources to do this, we think
it's probably best if we, sadly, for now remove this partial backend.
This is not in any way a statement of what we might accept in the
future, though. If, in the future, an ARM32 backend updated to our
latest codebase with an active maintainer were to appear, we'd be happy
to merge it (and likewise for any other architecture!). But for now,
this is probably the best path. Thanks again to the original contributor
@jmkrauz and we hope that this work can eventually be brought back and
reused if someone has the time to do so!
We currently skip some tests when running our qemu-based tests for
aarch64 and s390x. Qemu has broken madvise(MADV_DONTNEED) semantics --
specifically, it just ignores madvise() [1].
We could continue to whack-a-mole the tests whenever we create new
functionality that relies on madvise() semantics, but ideally we'd just
have emulation that properly emulates!
The earlier discussions on the qemu mailing list [2] had a proposed
patch for this, but (i) this patch doesn't seem to apply cleanly anymore
(it's 3.5 years old) and (ii) it's pretty complex due to the need to
handle qemu's ability to emulate differing page sizes on host and guest.
It turns out that we only really need this for CI when host and guest
have the same page size (4KiB), so we *could* just pass the madvise()s
through. I wouldn't expect such a patch to ever land upstream in qemu,
but it satisfies our needs I think. So this PR modifies our CI setup to
patch qemu before building it locally with a little one-off patch.
[1]
https://github.com/bytecodealliance/wasmtime/pull/2518#issuecomment-747280133
[2]
https://lists.gnu.org/archive/html/qemu-devel/2018-08/msg05416.html
As first suggested by Jan on the Zulip here [1], a cheap and effective
way to obtain copy-on-write semantics of a "backing image" for a Wasm
memory is to mmap a file with `MAP_PRIVATE`. The `memfd` mechanism
provided by the Linux kernel allows us to create anonymous,
in-memory-only files that we can use for this mapping, so we can
construct the image contents on-the-fly then effectively create a CoW
overlay. Furthermore, and importantly, `madvise(MADV_DONTNEED, ...)`
will discard the CoW overlay, returning the mapping to its original
state.
By itself this is almost enough for a very fast
instantiation-termination loop of the same image over and over,
without changing the address space mapping at all (which is
expensive). The only missing bit is how to implement
heap *growth*. But here memfds can help us again: if we create another
anonymous file and map it where the extended parts of the heap would
go, we can take advantage of the fact that a `mmap()` mapping can
be *larger than the file itself*, with accesses beyond the end
generating a `SIGBUS`, and the fact that we can cheaply resize the
file with `ftruncate`, even after a mapping exists. So we can map the
"heap extension" file once with the maximum memory-slot size and grow
the memfd itself as `memory.grow` operations occur.
The above CoW technique and heap-growth technique together allow us a
fastpath of `madvise()` and `ftruncate()` only when we re-instantiate
the same module over and over, as long as we can reuse the same
slot. This fastpath avoids all whole-process address-space locks in
the Linux kernel, which should mean it is highly scalable. It also
avoids the cost of copying data on read, as the `uffd` heap backend
does when servicing pagefaults; the kernel's own optimized CoW
logic (same as used by all file mmaps) is used instead.
[1] https://bytecodealliance.zulipchat.com/#narrow/stream/206238-general/topic/Copy.20on.20write.20based.20instance.20reuse/near/266657772
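A compressed sketch of the mechanism in Rust with the `libc` crate (Linux-only, error handling elided; the sizes and the memfd name are illustrative):

```rust
use std::ptr;

fn main() {
    unsafe {
        // 1. Build the heap image in an anonymous, in-memory-only file.
        let fd = libc::memfd_create(b"wasm-image\0".as_ptr().cast(), 0);
        libc::ftruncate(fd, 65536);
        let image = [1u8, 2, 3, 4];
        libc::write(fd, image.as_ptr().cast(), image.len());

        // 2. MAP_PRIVATE gives a copy-on-write overlay over the image.
        let addr = libc::mmap(
            ptr::null_mut(),
            65536,
            libc::PROT_READ | libc::PROT_WRITE,
            libc::MAP_PRIVATE,
            fd,
            0,
        );

        // ... an instance runs and dirties pages through `addr` ...

        // 3. Discard the overlay: the mapping snaps back to the original
        //    image without any address-space changes.
        libc::madvise(addr, 65536, libc::MADV_DONTNEED);

        // Heap growth (per the text above): a second memfd mapped at the
        // maximum slot size can be grown cheaply with ftruncate() as
        // memory.grow runs; accesses past the file's end raise SIGBUS.
    }
}
```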
This updates our `push-tag.yml` workflow to check all commits to
`release-*` branches in addition to the `main` branch for commits which
should be tagged.
Poking around the `github` context, it looks like for
workflow-dispatch-triggered events `base_ref` is blank but `ref_name`
has the branch name we're interested in.
This was accidentally omitted from our CI configuration, which meant
that release branches didn't get PR CI. They still won't get on-merge CI
but that shouldn't be an issue because the PR CI is the full CI.
* Update to cap-std 0.22.0.
The main change relevant to Wasmtime here is that this includes the
rustix fix for compilation errors on Rust nightly with the `asm!` macro.
* Add itoa to deny.toml.
* Update the doc and fuzz builds to the latest Rust nightly.
* Update to libc 0.2.112 to pick up the `POLLRDHUP` fix.
* Update to cargo-fuzz 0.11, for compatibility with Rust nightly.
This appears to be the fix for rust-fuzz/cargo-fuzz#277.
Peepmatic was an early attempt at a DSL for peephole optimizations, with the
idea that maybe sometime in the future we could use it for instruction
selection as well. It didn't really pan out, however:
* Peepmatic wasn't quite flexible enough, and adding new operators or snippets
of code implemented externally in Rust was a bit of a pain.
* The performance was never competitive with the hand-written peephole
optimizers. It was *very* size efficient, but that came at the cost of
run-time efficiency. Everything was table-based and interpreted, rather than
generating any Rust code.
Ultimately, because of these reasons, we never turned Peepmatic on by default.
These days, we just landed the ISLE domain-specific language, and it is better
suited than Peepmatic for all the things that Peepmatic was originally designed
to do. It is more flexible and easy to integrate with external Rust code. It is
has better time efficiency, meeting or even beating hand-written code. I think a
small part of the reason why ISLE excels in these things is because its design
was informed by Peepmatic's failures. I still plan on continuing Peepmatic's
mission to make Cranelift's peephole optimizer passes generated from DSL rewrite
rules, but using ISLE instead of Peepmatic.
Thank you Peepmatic, rest in peace!
- The Windows line-ending canonicalization was incomplete: we need to
canonicalize the manifest text itself too!
- The "meta deterministic check" runs the cranelift-codegen build script
N times outside of the source tree, examining what it produces to
ensure the output is always the same (is deterministic). This works fine
when everything comes from the internal DSL, but when reading ISLE,
this breaks because we no longer have the ISLE source paths.
The initial ISLE integration did not hit this because without the
`rebuild-isle` feature, it simply did nothing in the build script;
now, with the manifest check, we hit the issue.
The fix for now is just to turn off all ISLE-specific behavior in the
build script by setting a special-purpose Cargo feature in the
specific CI job. Eventually IMHO we should remove or find a better way
to do this check.
This commit adds the `pooling-allocator` feature to both the `wasmtime` and
`wasmtime-runtime` crates.
The feature controls whether or not the pooling allocator implementation is
built into the runtime and exposed as a supported instance allocation strategy
in the wasmtime API.
The feature is on by default for the `wasmtime` crate.
Closes #3513.
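With the feature enabled, opting in looks roughly like this (a sketch only: the exact shape of the pooling strategy type has varied across wasmtime versions):

```rust
use wasmtime::{Config, Engine, InstanceAllocationStrategy};

fn main() {
    let mut config = Config::new();
    // Only available when the `pooling-allocator` feature is enabled;
    // default limits are used here for brevity.
    config.allocation_strategy(InstanceAllocationStrategy::Pooling(
        Default::default(),
    ));
    let engine = Engine::new(&config).expect("create engine");
    let _ = engine;
}
```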
* Change the bump-version workflow's schedule
Either I don't understand cron or GitHub doesn't understand cron. It's
not clear which. I think that
https://github.com/bytecodealliance/wasmtime/pull/3511 may have fallen
within our schedule but it was supposed to be on a weekday. Otherwise
https://github.com/bytecodealliance/wasmtime/pull/3499 was certainly
spurious. This commit moves to a simpler "just do it on the same day
each month" and we can manually figure out weekdays and such. Hopefully
this should reduce the number of spurious PRs we're getting to bump
versions.
This also removes the script to force a version bump since I found a
button on the GitHub UI to do the same thing. Additionally I've updated
the patch-release documentation to use this button. Note that this
button takes inputs as well which means we can further automate patch
releases to look even more like normal release process, differing only
in one part of the argument used to trigger the workflow.
* Fix a typo
* Automate more of Wasmtime's release process
This change revamps the release process for Wasmtime and intends to make
it nearly 100% automated for major releases and hopefully still pretty
simple for patch releases. New workflows are introduced as part of
this commit:
* Once a month a PR is created with major version bumps
* Commits to the `main` branch with specially hinted commit messages
will get tagged and pushed to the main repository.
* On tags we'll now not only build releases after running CI but
additionally crates will be published to crates.io.
In conjunction with other changes this means that the release process
for a new major version of Wasmtime is simply merging a PR. Patch
releases will involve running some steps locally but most of the
nitty-gritty should be simply merging the PR that's generated.
* Use an anchor in a regex
Before this commit we actually had two builders checking for security
advisories on CI: one running `cargo audit` and one running `cargo deny`. The
`cargo deny` builder is slightly different in that it checks a few other
things about our dependency tree such as licenses, duplicates, etc. This
commit removes the advisory check from `cargo deny` on CI and then moves
the `cargo audit` check to a separate workflow.
The `cargo audit` check will now run nightly and will open an issue on
the Wasmtime repository when an advisory is found. This should help make
it such that our CI is never broken by the publication of an advisory
but we're still promptly notified whenever an advisory is made. I've
updated the release process notes to indicate that the open issues
should be double-checked to ensure that there are no open advisories
that we need to take care of.