This change implements a new "scheduler" for WebAssembly using binaryen's asyncify transform.
This is more reliable than the current "coroutines" transform, and works with non-Go code in the call stack.
runtime (js/wasm): handle scheduler nesting
If WASM calls into JS which calls back into WASM, it is possible for the scheduler to nest.
The event from the callback must be handled immediately, so the task cannot simply be deferred to the outer scheduler.
This creates a minimal scheduler loop which is used to handle such nesting.
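A minimal sketch of such a nested loop, assuming hypothetical task and
runqueue types (the real runtime code differs):

    type task struct {
        resume func() // continues the task where it left off
    }

    var runqueue []*task

    // minimalScheduler runs queued tasks until the queue is drained,
    // then returns control to the outer scheduler that the callback
    // interrupted.
    func minimalScheduler() {
        for len(runqueue) != 0 {
            t := runqueue[0]
            runqueue = runqueue[1:]
            t.resume()
        }
    }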
This matches Clang and, with that, adds support for inlining between
Go and C, because LLVM only allows inlining if the "target-cpu" and
"target-features" string attributes match.
For example, take a look at the following code:
    // int add(int a, int b) {
    //     return a + b;
    // }
    import "C"

    func main() {
        println(C.add(3, 5))
    }
Before this commit, the add function was not inlined into the main
function; after it, the call can be inlined and trivially optimized
to `println(8)`.
This is fake debug info. It doesn't point to a source location because
there is no source location. However, it helps to correctly attribute
code size usage to particular packages.
I've also updated builder/sizes.go with some debugging helpers.
Instead of doing everything in the interrupt lowering pass, generate
some more code in gen-device to declare interrupt handler functions and
do some work in the compiler so that interrupt lowering becomes a lot
simpler.
This has several benefits:
- Overall code is smaller, in particular the interrupt lowering pass.
- The code should be a bit less "magical" and a bit easier to read.
  In particular, instead of having a magic
  runtime.callInterruptHandler (that is fully written by the interrupt
  lowering pass), the runtime calls a generated function like
  device/sifive.InterruptHandler where this switch already exists in
  code (see the sketch after this list).
- Debug information is improved. This can be helpful during actual
debugging but is also useful for other uses of DWARF debug
information.
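As a sketch, the generated dispatch function might look roughly like
this (the interrupt numbers and handler names here are illustrative,
not the exact generated code):

    // In a generated file such as device/sifive/sifive.go.
    func InterruptHandler(num int32) {
        switch num {
        case 3: // hypothetical UART0 interrupt number
            uart0Handler()
        case 7: // hypothetical TIMER0 interrupt number
            timer0Handler()
        }
    }

    // Stubs standing in for the handlers registered from Go code.
    func uart0Handler()  {}
    func timer0Handler() {}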
As an example of the debug information improvement, this is what a
backtrace might look like before this commit:
    Breakpoint 1, 0x00000b46 in UART0_IRQHandler ()
    (gdb) bt
    #0  0x00000b46 in UART0_IRQHandler ()
    #1  <signal handler called>
    [..etc]
Notice that the debugger doesn't see the source code location where it
has stopped.
After this commit, breaking at the same line might look like this:
    Breakpoint 1, (*machine.UART).handleInterrupt (arg1=..., uart=<optimized out>) at /home/ayke/src/github.com/tinygo-org/tinygo/src/machine/machine_nrf.go:200
    200             uart.Receive(byte(nrf.UART0.RXD.Get()))
    (gdb) bt
    #0  (*machine.UART).handleInterrupt (arg1=..., uart=<optimized out>) at /home/ayke/src/github.com/tinygo-org/tinygo/src/machine/machine_nrf.go:200
    #1  UART0_IRQHandler () at /home/ayke/src/github.com/tinygo-org/tinygo/src/device/nrf/nrf51.go:176
    #2  <signal handler called>
    [..etc]
Now the debugger sees an actual source location for UART0_IRQHandler
(in the generated file) and an inlined function.
This matches the behavior of Clang, which uses optsize for -Os and adds
minsize for -Oz.
The code size change is all over the map, but using a hacked-together
size comparison tool I've found that there is a slight reduction in
binary size overall (-1.6% with the tinygo smoke tests and -0.8% for
the drivers smoke test).
This commit has a few related changes:
* It sets the optsize attribute immediately in the compiler instead of
adding it to each function afterwards in a loop. This seems to me
like the more appropriate way to do it.
* It centralizes setting the optsize attribute in the transform
package, to make later changes easier.
* It sets the optsize in a few more places: to runtime.initAll and to
WebAssembly i64 wrappers.
This commit does not affect the binary size of any of the smoke tests,
so should be risk-free.
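As a sketch, the centralized helper might look something like this
using the LLVM Go bindings (the helper name is an assumption, not the
actual transform package API):

    import "tinygo.org/x/go-llvm"

    // setOptsize marks a function for size optimization, like Clang
    // does for -Os. Sketch only; the real code may differ.
    func setOptsize(ctx llvm.Context, fn llvm.Value) {
        kind := llvm.AttributeKindID("optsize")
        fn.AddFunctionAttr(ctx.CreateEnumAttribute(kind, 0))
    }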
This layout parameter is currently always nil and ignored, but will
eventually contain a pointer to a memory layout.
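For reference, the runtime-side signature would then look roughly like
this (a sketch, assuming the parameter keeps the name layout):

    // alloc allocates a new object of the given size. The layout
    // parameter is always nil for now; eventually it will point to a
    // description of which words of the object contain pointers, for
    // use by a precise garbage collector.
    func alloc(size uintptr, layout unsafe.Pointer) unsafe.Pointer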
This commit also adds module verification to the transform tests, as I
found out that it didn't (and therefore didn't initially catch all
bugs).
This commit simplifies the IR a little bit: instead of calling
pseudo-functions runtime.interfaceImplements and
runtime.interfaceMethod, real declared functions are being called that
are then defined in the interface lowering pass. This should simplify
the interaction between various transformation passes. It also reduces
the number of lines of code, which is generally a good thing.
This adds support for a construct like this:
    type foo func(fn foo)
Unfortunately, LLVM cannot create function pointers that look like this.
LLVM only supports named types for structs (not for pointers) and thus
can't add a pointer to a function type of the same type to a parameter
of that function type.
The fix is simple: cast all function pointers to a void function, in
LLVM IR:
    void ()*
Raw function pointers are cast to this type before storing, and cast
back to the regular function type before calling. This means that
function parameters will never refer to their own type, because raw
function types are all fixed at that one type.
Somehow, this does have an effect on binary size in some cases. The
effect is small and goes both ways. On top of that, there is work
underway in LLVM to make all pointer types opaque (without a pointee
type). That would make this whole workaround unnecessary and should
therefore also fix any size increases that might happen.
https://llvm.org/docs/OpaquePointers.html
... instead of setting a special -target= value. This is more robust
and makes sure that the test actually tests the different
architectures as they would be compiled by TinyGo. As an example, the
bug fixed in the previous commit ("arm: use armv7 instead of
thumbv7") would have been caught if this change had been applied
earlier.
I've decided to put GOOS/GOARCH in compileopts.Options, as it makes
sense to me to treat them the same way as command line parameters.
This change fixes a bug in which `alloca` memory lifetimes would not extend past the suspend of an asynchronous tail call.
This would typically manifest as memory corruption, and could happen with or without normal suspending calls within the function.
Bug 1790 ("musttail call must precede a ret with an optional bitcast")
is caused by the GC stack slot pass inserting a store instruction
between a musttail call and a return instruction. This is not allowed in
LLVM IR.
One solution would be to remove the musttail. That would probably work,
but 1) the go-llvm API doesn't support this and 2) this might have
unforeseen consequences. What I've done in this commit is to move the
store instruction to a position earlier in the basic block, just after
the last access to the GC stack slot alloca.
Thanks to @fgsch for a very small repro, which I've used as a regression
test.
This patch adds a new pragma for functions and globals to set the
section name. This can be useful to place a function or global in a
special device specific section, for example:
* Functions may be placed in RAM to make them run faster, or in flash
(if RAM is the default) to not let them take up RAM.
* DMA memory may only be placed in a special memory area.
* Some RAM may be faster than other RAM, and some globals may be
performance-critical, so placing them in this faster RAM area can
help.
* Some (large) global variables may need to be placed in external RAM,
which can be done by placing them in a special section.
To use it, you have to place a function or global in a special section,
for example:
    //go:section .externalram
    var externalRAMBuffer [1024]byte
This can then be placed in a special section of the linker script, for
example something like this:
    .bss.extram (NOLOAD) : {
        *(.externalram)
    } > ERAM
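Functions work the same way. For example, to keep a small function in
RAM (the section name is illustrative and needs a matching output
section in the linker script):

    //go:section .ramfuncs
    func busyWait(cycles int) {
        for i := 0; i < cycles; i++ {
        }
    }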
This commit includes two changes:
* It makes unexported interface methods package-private, so that it's
not possible to type-assert on an unexported method in a different
package.
* It makes the globals used to identify interface methods defined
globals, so that they can eventually be left in the program for a
future non-LTO build mode.
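As an example of the first change (a sketch of the behavior, not code
from this commit):

    package logger

    type Logger interface {
        Log(msg string)
        flush() // unexported: only package logger can implement this
        //         interface or type-assert on the flush method
    }

After this commit, a type assertion like v.(interface{ flush() }) in
another package can no longer match this package's flush method, just
as in the main Go implementation.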
Closure variables are allocated in a parent function and are thus never
nil. Don't do a nil check before reading or modifying the value.
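For example (a sketch of the pattern this affects):

    // The variable n lives in counter's frame or on the heap, so the
    // closure's pointer to it is never nil: the nil check before the
    // load and store of n can be dropped.
    func counter() func() int {
        n := 0
        return func() int {
            n++
            return n
        }
    }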
This commit results in a slight reduction in code size in some test
cases: calls.go, channel.go, goroutines.go, json.go, sort.go -
presumably wherever closures are used.
These two passes are related, but can definitely work independently,
which is what this change does: it splits the two passes. This should
make it easier to change them in the future.
This change now also enables slightly better testing by testing these
two passes independently. In particular, the reflect lowering pass got
some actual tests: it was barely unit-tested before.
I have verified that this doesn't really change code size, at least not
on the microbit target. Two tests do change, but in a very minor way
(and in opposite direction).
In many cases, position information is not stored in Go SSA
instructions because they don't exist directly in the source code.
This includes implicit type conversions, implicit returns at the end
of a function, the creation of a (hidden) slice when calling a
variadic function, and many other cases. I'm not sure where this
information is supposed to come from, but this patch takes the
position (usually) from the value the instruction refers to. This
seems to work well for these implicit conversions.
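For example (a sketch of one such case):

    func sum(xs ...int) int {
        total := 0
        for _, x := range xs {
            total += x
        }
        return total
    }

    func main() {
        // This call creates a hidden []int slice for the variadic
        // argument. That slice instruction has no direct source
        // position of its own; with this patch it borrows the
        // position of a related value.
        println(sum(1, 2, 3))
    }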
I've also added a few extra tests to the heap-to-stack transform pass,
of which one requires this improved position information.
This allows better escape analysis even without being able to see the
entire program. This makes the stack allocation test case more complete
but probably won't have much of an effect outside of that (as the
compiler is able to infer these attributes in the whole-program
functionattrs pass).
This flag, if set, is a regexp matched against function names. If
there are heap allocations in matching functions, these heap
allocations will be printed with an explanation of why the heap
allocation exists (and why the object can't be stack allocated).
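Usage might then look something like this (the flag spelling
-print-allocs is an assumption, as the flag isn't named above):

    tinygo build -print-allocs=main\. -o blinky.elf ./examples/blinky1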
This allows for adding more advanced tests, for example tests that use
the compiler package so that test sources can be written in Go instead
of LLVM IR.
There is no good reason for func values to refer to interface type
codes. The only thing they need is a stable identifier for function
signatures, which is easily created as a new kind of global.
Decoupling the two makes it easier to change interface-related code.
The LLVM CoroFrame pass appears to be tripping over this zero-sized
alloca. Therefore, do what the runtime would do: return a pointer to
runtime.zeroSizedAlloc instead, or just don't deal with this case.
Either way, don't emit a zero-sized alloca, to avoid this LLVM bug.
More information: https://bugs.llvm.org/show_bug.cgi?id=49916
The func-lowering pass has started to fail in the dev branch, probably
as a result of replacing the ConstPropagation pass with the IPSCCP pass.
This commit makes the code a bit more robust and should be able to
handle all possible cases (not just ptrtoint).
The constant propagation pass is removed in LLVM 12, so this pass needs
to be replaced anyway. The direct replacement would be the SCCP (sparse
conditional constant propagation) pass, but perhaps a better replacement
is the IPSCCP pass, which is an interprocedural version of the SCCP
pass and propagates constants across function calls if possible.
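For example (a sketch of what the interprocedural version enables):

    func answer() int {
        return 42
    }

    func main() {
        // SCCP only propagates constants within a single function.
        // IPSCCP can propagate the constant 42 across the call,
        // folding this into println(43) and letting answer() be
        // removed entirely afterwards.
        println(answer() + 1)
    }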
This is not always a code size reduction, but it appears to reduce code
size in a majority of cases. It certainly reduces code size in almost
all WebAssembly tests I did.
This should result in a small compile time reduction for incremental
builds, somewhere around 5-9%.
This commit, while small, required many previous commits to not regress
binary size. Right now binary size is basically identical with very few
changes in size (the only baremetal program that changed in size did so
with a 4 byte increase).
This commit is one extra step towards doing as much work as possible in
the parallel and cached package build step, out of the serial LTO phase.
Later improvements in this area have this change as a prerequisite.
Sometimes, LLVM may rename named structs when merging modules.
Therefore, we can't rely on typecodeID structs to retain their struct
names.
This commit changes the interface lowering pass to not rely on these
names. The interp package does however still rely on them, but I hope
to fix that in the future.
This simplifies future changes. While the move itself is very simple,
it required some other changes to a few transforms that create new
functions, to add the optsize attribute manually. It also required
abstracting away the optimization level flags (based on the -opt
flag) so that they can easily be retrieved from the config object.
This commit does not impact binary size on baremetal and WebAssembly.
I've seen a few tests on linux/amd64 grow slightly in size, but I'm not
too worried about those.
In rare cases the signature might change as a result of LLVM renaming
some named struct types when multiple LLVM modules are merged. The
easiest workaround is to detect such mismatched signatures and add a
bitcast: this should be safe as the underlying data is effectively of
the same type.
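A sketch of the workaround using the LLVM Go bindings (the helper name
is illustrative):

    import "tinygo.org/x/go-llvm"

    // fixSignature bitcasts fn if its type no longer matches the
    // expected signature after module merging. Sketch only.
    func fixSignature(builder llvm.Builder, fn llvm.Value, expected llvm.Type) llvm.Value {
        if fn.Type() != expected {
            return builder.CreateBitCast(fn, expected, "")
        }
        return fn
    }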
This commit adds a new transform that converts reflect Implements()
calls to runtime.interfaceImplements. At the moment, the Implements()
method is not yet implemented (how ironic), but if the value passed to
Implements is known at compile time, the method call can be optimized
to runtime.interfaceImplements to make it a regular interface assert.
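For example, a call like this can be optimized because the interface
type is known at compile time (a sketch):

    import "reflect"

    var stringerType = reflect.TypeOf((*interface{ String() string })(nil)).Elem()

    func isStringer(v interface{}) bool {
        // stringerType is a compile-time constant here, so this call
        // can be rewritten into a regular interface assert via
        // runtime.interfaceImplements.
        return reflect.TypeOf(v).Implements(stringerType)
    }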
This commit is the last change necessary to add basic support for the
encoding/json package. The json package is certainly not yet fully
supported, but some trivial objects can be converted to JSON.
Previously there was code to avoid impossible type asserts, but it
wasn't great and was in fact too aggressive when combined with
reflection. This commit improves this by checking all types that
exist in the program and may appear in an interface (even struct
fields and the like), but without creating runtime.typecodeID objects
with the type assert. This has two advantages:
* As mentioned, it optimizes impossible type asserts away.
* It allows methods on types that were only asserted on (in
runtime.typeAssert) but never used in an interface to be optimized
away using GlobalDCE. This may have a cascading effect so that other
parts of the code can be further optimized.
This sometimes massively improves code size and mostly negates the code
size regression of the previous commit.
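For example (a sketch of an assert that can now be optimized away):

    type unused struct{}

    func (unused) String() string { return "unused" }

    func check(v interface{}) bool {
        // If unused is never stored in an interface anywhere in the
        // program, this assert can never succeed: the compiler can
        // replace it with false, and GlobalDCE can then drop the
        // String method (and possibly more code with it).
        _, ok := v.(unused)
        return ok
    }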
This distinction was useful before when reflect wasn't properly
supported. Back then it made sense to only include method sets that were
actually used in an interface. But now that it is possible to get to
other values (for example, by extracting fields from structs) and it is
possible to turn them back into interfaces, it is necessary to preserve
all method sets that can possibly be used in the program in a type
assert, interface assert or interface method call.
In the future, this logic will need to be revisited again when
reflect.New or reflect.Zero gets implemented.
Code size increases a bit in some cases, but usually in a very limited
way (except for one outlier in the drivers smoke tests). The next commit
will improve the situation significantly.
This commit switches from the previous behavior of compiling the
whole program at once to compiling every package in parallel and
linking the LLVM bitcode files together for further whole-program
optimization.
This is a small performance win, but it has several advantages in the
future:
- There are many more things that can be done per package in
parallel, avoiding the bottleneck at the end of the compiler phase.
This should speed up the compiler further.
- This change is a necessary step towards a non-LTO build mode for
fast incremental builds that only rebuild the changed package, when
compiler speed is more important than binary size.
- This change refactors the compiler in such a way that it will be
easier to inspect the IR for one package only. Inspecting this IR
will be very helpful for compiler developers.
This optimization level wasn't working before because some passes
expect certain globals to be cleaned up afterwards. Cleaning up these
globals is easy: just add the necessary pass. This shouldn't reduce
the usefulness of the -opt=0 build flag, as most optimizations are
still skipped.
A common error is when someone tries to export a blocking function. This
is not possible with the coroutines scheduler. Previously, it resulted
in an error like this:
    panic: trying to make exported function async: messageHandler
With this change, the error is much better and shows where it comes from
exactly:
    /home/ayke/tmp/export-async.go:8: blocking operation in exported function: messageHandler
    traceback:
    messageHandler
        /home/ayke/tmp/export-async.go:9:5
    main.foo
        /home/ayke/tmp/export-async.go:15:2
    runtime.chanSend
        /home/ayke/src/github.com/tinygo-org/tinygo/src/runtime/chan.go:494:12
This should make it easier to identify and fix the problem. And it
avoids a compiler panic, which is a really bad way of showing
diagnostics.
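A program like the one in the traceback above could be reconstructed
roughly as follows (a sketch, not the exact original):

    var ch = make(chan int)

    //export messageHandler
    func messageHandler() {
        foo()
    }

    func foo() {
        // A channel send may block, which an exported function can't
        // do with the coroutines scheduler.
        ch <- 1
    }

    func main() {}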
Moving settings to a separate config struct has two benefits:
- It decouples the compiler a bit from other packages, most
importantly the compileopts package. Decoupling is generally a good
thing.
- Perhaps more importantly, it precisely specifies which settings are
used while compiling and affect the resulting LLVM module. This will
be necessary for caching the LLVM module.
While it would have been possible to cache without this refactor, it
would have been very easy to miss a setting and thus let the
compiler work with invalid/stale data.
This fixes an issue where a normal suspending call followed by a plain tail call would result in the tail return value being written to the return pointer of the normal suspending call.
This is fixed by saving the return pointer at the start of the function and restoring it before initiating a plain tail call.
Unfortunately, the .rodata section can't be stored in flash. Instead, an
explicit .progmem section should be used, which is supported in LLVM as
address space 1 but not exposed to normal programs.
Eventually a pass should be written that converts trivial const globals
of which all loads are visible to be in addrspace 1, to get the benefits
of storing those globals directly in ROM.
This can be useful to test improvements in LLVM master and to make it
possible to support LLVM 11 for the most part already before the next
release. That also allows catching LLVM bugs early to fix them upstream.
Note that tests do not yet pass for this LLVM version, but the TinyGo
compiler can be built with the binaries from apt.llvm.org (at the time
of making this commit).
* initial commit for WASI support
* merge "time" package with wasi build tag
* override syscall package with wasi build tag
* create runtime_wasm_{js,wasi}.go files
* create syscall_wasi.go file
* create time/zoneinfo_wasi.go file as the replacement of zoneinfo_js.go
* add targets/wasi.json target
* set visibility hidden for runtime extern variables
According to the WASI docs
(https://github.com/WebAssembly/WASI/blob/master/design/application-abi.md#current-unstable-abi),
none of the exports of a WASI executable (a Command) should be
accessed. v0.19.0 of bytecodealliance/wasmtime, which is often
referred to as the reference implementation of WASI, does not accept
any exports other than functions and a limited set of variables like
"table" and "memory".
* merge syscall_{baremetal,wasi}.go
* fix js target build
* mv wasi functions to syscall/wasi && implement sleepTicks
* WASI: set visibility hidden for globals variables
* mv back syscall/wasi/* to runtime package
* WASI: add test
* unexport wasi types
* WASI test: fix wasmtime path
* stop changing visibility of runtime.alloc
* use GOOS=linux, GOARCH=arm for wasi target
Signed-off-by: mathetake <takeshi@tetrate.io>
* WASI: fix build tags for os/runtime packages
Signed-off-by: mathetake <takeshi@tetrate.io>
* run WASI test only on Linux
Signed-off-by: mathetake <takeshi@tetrate.io>
* set InternalLinkage instead of changing visibility
Signed-off-by: mathetake <takeshi@tetrate.io>
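With this merged, building and running a program for WASI looks
something like this (assuming wasmtime is installed):

    tinygo build -target=wasi -o main.wasm .
    wasmtime main.wasm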
This is a big change that will determine the stack size for many
goroutines automatically. Functions that aren't recursive and don't call
function pointers can in many cases have an automatically determined
worst case stack size. This is useful, as the stack size is usually much
lower than the previous hardcoded default of 1024 bytes: somewhere
around 200-500 bytes is common.
A side effect of this change is that the default stack sizes (including
the stack size for other architectures such as AVR) can now be changed
in the config JSON file, making it tunable per application.
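For example, a target JSON file could override the default like this
(a sketch; the inherits value is illustrative):

    {
        "inherits": ["cortex-m4"],
        "default-stack-size": 2048
    }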
It appears that LLVM is turning bitcasts into 0-index GEPs. This
caused such pointers to no longer be tracked, resulting in
use-after-free issues. This solution is sub-optimal, but it is the
most reasonable one I could come up with without redesigning the
stack slots pass.
This option was broken for a long time, in part because we didn't test
for it. This commit fixes that and adds a test to make sure it won't
break again unnoticed.