Previously, the typecode was passed via a direct reference, which resulted in invalid IR when the defer was not reached on all return paths.
It also caused incorrect behavior when the defer was inside a loop: every deferred call ended up using the typecode of the last iteration.
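To illustrate (a made-up example, not one of the existing test cases), each deferred interface method call below needs the typecode of its own iteration:

    package main

    type animal interface {
        speak()
    }

    type dog struct{}

    func (dog) speak() { println("woof") }

    type cat struct{}

    func (cat) speak() { println("meow") }

    func main() {
        for _, a := range []animal{dog{}, cat{}} {
            // The dynamic type (typecode) of a must be saved per iteration.
            // With the old scheme, both deferred calls ended up using the
            // typecode of the last iteration.
            defer a.speak()
        }
        // Deferred calls run in reverse order, so the correct output is
        // "meow" followed by "woof".
    }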
This is a fairly big commit, but it actually changes very little.
getValue should really be a property of the builder (or frame), where
the previously created instructions are kept.
Previously, using defer with an exported function generated an invalid function call due to the differences between TinyGo's calling convention and the C calling convention.
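For example, code along these lines used to miscompile (the function name is made up; any //export function will do):

    package main

    // The //export pragma gives this function the C calling convention so
    // that it can be called from C or JavaScript. Deferring it used to
    // generate a call with the wrong calling convention.
    //
    //export blink
    func blink() {
        println("blink")
    }

    func main() {
        defer blink()
        println("main")
    }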
This code is required by transformation passes that are being moved
into a separate package, but it is too complicated to simply copy.
Therefore, I decided to move it into a new package.
In Go, it is not possible to construct pointers that are out of bounds
(and not null), so let LLVM know about this fact.
This leads to a significant code size reduction, around 3% in many
cases.
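LLVM is presumably told about this via the `inbounds` flag on address calculations (getelementptr instructions); the Go-level guarantee it relies on looks like this:

    package main

    // index returns a pointer into buf. The bounds check guarantees that the
    // returned pointer is within the allocation (or the program panics), so
    // the address calculation can safely be marked as in-bounds for LLVM.
    func index(buf []int, i int) *int {
        return &buf[i]
    }

    func main() {
        buf := []int{1, 2, 3}
        println(*index(buf, 1))
    }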
Most of these errors are actually "todo" or "unimplemented" errors, so
the return type is known. This means that compilation can proceed (with
errors) even though the output will be incorrect. This is useful because
it lets all errors in a compilation unit be shown to the user together.
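A minimal sketch of the pattern (makeTodo, diagnostics, and the import path are illustrative, not the compiler's actual API): record the error, return an undef placeholder of the known type, and keep compiling.

    package main

    import (
        "fmt"

        "tinygo.org/x/go-llvm" // import path depends on the go-llvm fork in use
    )

    type compiler struct {
        diagnostics []error // collected so all errors can be reported at once
    }

    // makeTodo records a "todo"/"unimplemented" error and returns an undef
    // placeholder of the already-known result type so that compilation can
    // continue. The output is incorrect, but it is never used when there are
    // errors.
    func (c *compiler) makeTodo(msg string, resultType llvm.Type) llvm.Value {
        c.diagnostics = append(c.diagnostics, fmt.Errorf("todo: %s", msg))
        return llvm.Undef(resultType)
    }

    func main() {
        ctx := llvm.NewContext()
        c := &compiler{}
        v := c.makeTodo("some unimplemented construct", ctx.Int32Type())
        fmt.Println(v.IsUndef(), len(c.diagnostics))
    }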
This commit adds getValue, which gets a const, global, or the result of a
local SSA expression, and replaces (almost) all uses of parseExpr with
getValue. The only remaining use is in parseInstr, which makes sure an
instruction is only evaluated once.
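In outline, getValue looks like this (a sketch only: frame, locals, createConst, getGlobal, and getFunction are illustrative names, not the exact code):

    package main

    import (
        "golang.org/x/tools/go/ssa"

        "tinygo.org/x/go-llvm"
    )

    type frame struct {
        locals map[ssa.Value]llvm.Value // results of already-emitted instructions
    }

    type compiler struct{}

    // Stubs for the const/global/function helpers; illustrative only.
    func (c *compiler) createConst(expr *ssa.Const) llvm.Value    { return llvm.Value{} }
    func (c *compiler) getGlobal(expr *ssa.Global) llvm.Value     { return llvm.Value{} }
    func (c *compiler) getFunction(expr *ssa.Function) llvm.Value { return llvm.Value{} }

    // getValue returns the LLVM value of a constant, global, function, or the
    // result of a previously evaluated local SSA expression. Unlike parseExpr
    // it never re-evaluates an instruction: locals are only looked up here,
    // and only parseInstr evaluates and stores them.
    func (c *compiler) getValue(frame *frame, expr ssa.Value) llvm.Value {
        switch expr := expr.(type) {
        case *ssa.Const:
            return c.createConst(expr)
        case *ssa.Function:
            return c.getFunction(expr)
        case *ssa.Global:
            return c.getGlobal(expr)
        default:
            if value, ok := frame.locals[expr]; ok {
                return value
            }
            panic("SSA value not yet evaluated: " + expr.Name())
        }
    }

    func main() {} // empty: this file only illustrates the structure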
This commit replaces "unknown type" errors in getLLVMType with panics.
The main reason is that it simplifies the code *a lot*: many
`if err != nil` lines were there just because of type information.
Additionally, panicking is probably the better way to handle this error
anyway, as the only way it can be produced is either a big new language
feature or a serious compiler bug.
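As a sketch of the pattern (this is not the real getLLVMType, which handles far more types):

    package main

    import (
        "go/types"

        "tinygo.org/x/go-llvm"
    )

    type compiler struct {
        ctx llvm.Context
    }

    // getLLVMType panics on unknown types instead of returning an error,
    // which removes an `if err != nil` check from every caller. The panic can
    // only be reached by big new language features or a compiler bug.
    func (c *compiler) getLLVMType(goType types.Type) llvm.Type {
        switch t := goType.Underlying().(type) {
        case *types.Basic:
            switch t.Kind() {
            case types.Bool:
                return c.ctx.Int1Type()
            case types.Int32, types.Uint32:
                return c.ctx.Int32Type()
            default:
                panic("unknown basic type: " + t.String())
            }
        default:
            panic("unknown type: " + goType.String())
        }
    }

    func main() {
        c := &compiler{ctx: llvm.NewContext()}
        println(c.getLLVMType(types.Typ[types.Bool]).IntTypeWidth())
    }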
The LLVM library we use does not (yet) provide an llvm.Zero (the way it
provides llvm.Undef), so we have implemented our own. In theory, our
implementation might return an error in some cases. However, no
real-world errors have been seen in a while, and an error here would
likely indicate a serious compiler bug anyway (not an external error),
so make it panic instead of returning an error.
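A simplified sketch of such a zero-value constructor (the real one handles more type kinds), now panicking on anything unexpected:

    package main

    import (
        "fmt"

        "tinygo.org/x/go-llvm"
    )

    // getZeroValue is the illustrative equivalent of the missing llvm.Zero:
    // it builds a zero constant for the given LLVM type and panics on type
    // kinds it does not know about, instead of returning an error.
    func getZeroValue(t llvm.Type) llvm.Value {
        switch t.TypeKind() {
        case llvm.IntegerTypeKind:
            return llvm.ConstInt(t, 0, false)
        case llvm.FloatTypeKind, llvm.DoubleTypeKind:
            return llvm.ConstFloat(t, 0)
        case llvm.PointerTypeKind:
            return llvm.ConstPointerNull(t)
        case llvm.ArrayTypeKind:
            // Repeat the zero element for every array member.
            elems := make([]llvm.Value, t.ArrayLength())
            for i := range elems {
                elems[i] = getZeroValue(t.ElementType())
            }
            return llvm.ConstArray(t.ElementType(), elems)
        case llvm.StructTypeKind:
            // Zero every field recursively.
            fields := t.StructElementTypes()
            elems := make([]llvm.Value, len(fields))
            for i, field := range fields {
                elems[i] = getZeroValue(field)
            }
            if t.StructName() != "" {
                return llvm.ConstNamedStruct(t, elems)
            }
            return llvm.ConstStruct(elems, false)
        default:
            // Only reachable through a serious compiler bug, so panic.
            panic(fmt.Sprintf("unknown LLVM type kind: %d", t.TypeKind()))
        }
    }

    func main() {
        ctx := llvm.NewContext()
        s := ctx.StructType([]llvm.Type{ctx.Int32Type(), ctx.DoubleType()}, false)
        fmt.Println(getZeroValue(s).IsConstant())
    }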
Before this commit, goroutine support was spread throughout the
compiler. This commit changes that: the compiler itself now only
generates simple intrinsics and leaves the real support to a compiler
pass that runs as one of the TinyGo-specific optimization passes.
The biggest change, made together with the rewrite, is support for
goroutines in WebAssembly for JavaScript. The challenge with JavaScript
is that, in general, no blocking operations are allowed, which means
that programs that call time.Sleep() but do not start goroutines also
have to be scheduled by the scheduler.
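For example, even this program (which starts no goroutines) needs the scheduler on the WebAssembly/JavaScript target, because main has to be suspended without blocking the JavaScript event loop:

    package main

    import "time"

    func main() {
        println("sleeping")
        // On wasm/js this may not block the event loop, so the scheduler has
        // to suspend main and resume it after the timeout.
        time.Sleep(500 * time.Millisecond)
        println("done")
    }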
The rewrite reduces complexity in the compiler without affecting binary
sizes too much:
Cortex-M0: no changes
Linux x64: no changes
WebAssembly: some test cases (calls, coroutines, map) are slightly bigger
Implement defer in a different way, which results in smaller binaries.
The binary produced from testdata/calls.go (the only test case with
defer) is reduced a bit in size, but the savings in bytes vary greatly
by architecture:
Cortex-M0: -96 .text / flash
WebAssembly: -215 entire file
Linux x64: -32 .text
Deferred functions in TinyGo were implemented by creating a linked list
of struct objects that contain a function pointer to a thunk, a pointer
to the next object, and a list of parameters. When it was time to run
the deferred functions, a helper runtime function called each function
pointer (the thunk) with the struct pointer as a parameter. The thunk
would then extract the saved parameters from the struct and call the
real function.
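Written out by hand, the old scheme was roughly equivalent to this (struct layout and names are illustrative; the real structs and thunks were generated by the compiler):

    package main

    // deferFrame mirrors the kind of struct object the compiler generated for
    // each deferred call: a thunk pointer, a link to the next frame, and the
    // saved arguments.
    type deferFrame struct {
        callback func(*deferFrame) // thunk that unpacks the saved arguments
        next     *deferFrame
        arg      int // the saved parameter of the deferred call
    }

    func realDeferredFunc(x int) { println("deferred:", x) }

    // thunk is what the runtime helper called through the function pointer:
    // it extracts the saved arguments and calls the real function.
    func thunk(frame *deferFrame) { realDeferredFunc(frame.arg) }

    // rundefers stands in for the helper runtime function that walked the
    // linked list and called each thunk.
    func rundefers(head *deferFrame) {
        for frame := head; frame != nil; frame = frame.next {
            frame.callback(frame)
        }
    }

    func main() {
        // Equivalent of: defer realDeferredFunc(1); defer realDeferredFunc(2)
        first := &deferFrame{callback: thunk, arg: 1}
        second := &deferFrame{callback: thunk, next: first, arg: 2}
        rundefers(second)
    }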
What this commit changes is that the loop which calls the deferred
functions is moved to the end of the function (practically inlining it),
and the thunks are replaced with direct calls inside this loop. This
makes it much easier for LLVM to perform all kinds of optimizations,
like inlining and dead argument elimination.
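The new lowering is roughly equivalent to the following hand-written Go, again only a sketch of what the compiler generates (funcA, funcB, deferredCall, and body are made-up names): a small callback number selects a direct call inside a loop at the end of the function.

    package main

    func funcA(x int)    { println("A:", x) }
    func funcB(s string) { println("B:", s) }

    // deferredCall is the per-function defer record: a small callback number
    // plus the saved arguments, linked into a stack of pending defers.
    type deferredCall struct {
        callback int // selects which deferred function to call
        next     *deferredCall
        intArg   int
        strArg   string
    }

    // body sketches what a function containing two defers is lowered to: each
    // defer pushes a record onto a local stack, and a loop at the end of the
    // function calls the deferred functions directly, which LLVM can then
    // inline and strip dead arguments from.
    func body() {
        var stack *deferredCall
        stack = &deferredCall{callback: 0, intArg: 42, next: stack}   // defer funcA(42)
        stack = &deferredCall{callback: 1, strArg: "hi", next: stack} // defer funcB("hi")

        // ... rest of the function body ...

        // Inlined run-defers loop at the end of the function.
        for d := stack; d != nil; d = d.next {
            switch d.callback {
            case 0:
                funcA(d.intArg) // direct call, no thunk
            case 1:
                funcB(d.strArg)
            }
        }
    }

    func main() { body() }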