This change fixes a bug in which `alloca` memory lifetimes would not extend past the suspend of an asynchronous tail call.
This would typically manifest as memory corruption, and could happen with or without normal suspending calls within the function.
Bug 1790 ("musttail call must precede a ret with an optional bitcast")
is caused by the GC stack slot pass inserting a store instruction
between a musttail call and a return instruction. This is not allowed in
LLVM IR.
One solution would be to remove the musttail. That would probably work,
but 1) the go-llvm API doesn't support this and 2) this might have
unforeseen consequences. What I've done in this commit is to move the
store instruction to a position earlier in the basic block, just after
the last access to the GC stack slot alloca.
Thanks to @fgsch for a very small repro, which I've used as a regression
test.
This commit includes two changes:
* It makes unexported interface methods package-private, so that it's
not possible to type-assert on an unexported method in a different
package (see the sketch after this list).
* It makes the globals used to identify interface methods defined
globals, so that they can eventually be left in the program for a
non-LTO build mode.
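To illustrate the first point, here's a rough sketch with hypothetical package names: an unexported interface method can only be named, and therefore only asserted on, inside its defining package, so its identifier can safely be treated as package-private.

    // Package a (hypothetical) defines an interface with an unexported method.
    package a

    type secret interface {
        value() int // unexported: only types inside package a can satisfy it
    }

    // In any other package, this assertion can never succeed for values whose
    // dynamic type comes from package a: the unexported name value below
    // denotes a method local to this package, not a.value.
    package b

    func isSecret(x interface{}) bool {
        _, ok := x.(interface{ value() int })
        return ok
    }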
Closure variables are allocated in a parent function and are thus never
nil. Don't do a nil check before reading or modifying the value.
This commit results in a slight reduction in code size in some test
cases: calls.go, channel.go, goroutines.go, json.go, sort.go -
presumably wherever closures are used.
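A minimal sketch of why the check is redundant: the captured variable is allocated by the parent function, so the pointer the closure accesses it through can never be nil.

    package main

    // n is allocated by counter (the parent function); the returned closure
    // refers to it through a pointer that is always valid, so no nil check is
    // needed before the load/store of n inside the closure.
    func counter() func() int {
        n := 0
        return func() int {
            n++
            return n
        }
    }

    func main() {
        c := counter()
        println(c(), c(), c()) // 1 2 3
    }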
These two passes are related, but can work independently, which is
what this change does: it splits them into two separate passes. This
should make it easier to change either pass in the future.
This change now also enables slightly better testing by testing these
two passes independently. In particular, the reflect lowering pass got
some actual tests: it was barely unit-tested before.
I have verified that this doesn't really change code size, at least not
on the microbit target. Two tests do change, but in a very minor way
(and in opposite direction).
In many cases, position information is not stored in Go SSA instructions
because they don't exist directly in the source code. This includes
implicit type conversions, implicit returns at the end of a function,
the creation of a (hidden) slice when calling a variadic function, and
many other cases. I'm not sure where this information is supposed to
come from, but this patch takes the position (usually) from the value
the instruction refers to. This seems to work well for these implicit
conversions.
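Here is a small sketch of the kind of code that produces such position-less instructions (nothing beyond the standard library):

    package main

    import "fmt"

    // Both the conversion of n to interface{} and the creation of the hidden
    // []interface{} slice for the variadic call are implicit, so the
    // corresponding SSA instructions carry no source position of their own.
    // The implicit return at the end of the function is another example.
    func printNumber(n int) {
        fmt.Println("value:", n)
    }

    func main() {
        printNumber(42)
    }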
I've also added a few extra tests to the heap-to-stack transform pass,
of which one requires this improved position information.
This allows better escape analysis even without being able to see the
entire program. This makes the stack allocation test case more complete
but probably won't have much of an effect outside of that (as the
compiler is able to infer these attributes in the whole-program
functionattrs pass).
This flag, if set, is a regexp matched against function names. Any heap
allocations in the matching functions are printed with an explanation of
why the heap allocation exists (and why the object can't be stack
allocated).
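As an illustration (a sketch, with made-up names), this is the kind of allocation such a report points at:

    package main

    type point struct{ x, y int }

    // The object created by &point{...} is returned to the caller, so it
    // escapes and cannot be stack allocated; a report for this function would
    // flag this allocation and explain that the pointer escapes.
    func newPoint(x, y int) *point {
        return &point{x: x, y: y}
    }

    func main() {
        println(newPoint(1, 2).x)
    }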
There is no good reason for func values to refer to interface type
codes. The only thing they need is a stable identifier for function
signatures, which is easily created as a new kind of global. Decoupling
the two makes it easier to change interface-related code.
The LLVM CoroFrame pass appears to be tripping over this zero-sized
alloca. Therefore, do what the runtime would do and return a pointer to
runtime.zeroSizedAlloc, or simply avoid the case entirely; either way,
don't emit a zero-sized alloca, to work around this LLVM bug.
More information: https://bugs.llvm.org/show_bug.cgi?id=49916
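A sketch of Go code that can lead to a zero-sized alloca, assuming the allocation is proven not to escape and is lowered to the stack:

    package main

    // new(struct{}) needs a valid pointer but zero bytes of storage. If it
    // does not escape this function, it would normally be lowered to a
    // zero-sized stack allocation; with this change the compiler hands out a
    // pointer to a runtime dummy object instead of emitting the problematic
    // alloca.
    func hasZeroSized() bool {
        p := new(struct{})
        return p != nil
    }

    func main() {
        println(hasZeroSized())
    }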
Sometimes, LLVM may rename named structs when merging modules.
Therefore, we can't rely on typecodeID structs to retain their struct
names.
This commit changes the interface lowering pass to not rely on these
names. The interp package does, however, still rely on them, but I hope
to fix that in the future.
This commit adds a new transform that converts reflect Implements()
calls to runtime.interfaceImplements. At the moment, the Implements()
method is not yet implemented (how ironic), but if the value passed to
Implements is known at compile time, the method call can be optimized
into runtime.interfaceImplements, making it a regular interface assert.
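A sketch of the pattern this targets (the same pattern encoding/json uses): the interface type passed to Implements is a compile-time constant, so the call can be turned into an ordinary interface assert.

    package main

    import (
        "fmt"
        "reflect"
    )

    // The error interface type is known at compile time, so a call like
    // reflect.TypeOf(v).Implements(errorType) can be lowered to the
    // equivalent of a regular interface assert on v.
    var errorType = reflect.TypeOf((*error)(nil)).Elem()

    func implementsError(v interface{}) bool {
        return reflect.TypeOf(v).Implements(errorType)
    }

    func main() {
        fmt.Println(implementsError(fmt.Errorf("boom"))) // true
        fmt.Println(implementsError(42))                 // false
    }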
This commit is the last change necessary to add basic support for the
encoding/json package. The json package is certainly not yet fully
supported, but some trivial objects can be converted to JSON.
Previously there was code to avoid impossible type asserts, but it wasn't
great and in fact was too aggressive when combined with reflection.
This commit improves on that by checking all types in the program that
may appear in an interface (even struct fields and the like), but
without creating runtime.typecodeID objects for the type assert. This
has two advantages:
* As mentioned, it optimizes impossible type asserts away.
* It allows methods on types that were only asserted on (in
runtime.typeAssert) but never used in an interface to be optimized
away using GlobalDCE. This may have a cascading effect so that other
parts of the code can be further optimized.
This sometimes massively improves code size and mostly negates the code
size regression of the previous commit.
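A sketch of an impossible type assert as described in the first point: no type anywhere in the program implements the asserted interface, so the assert can be replaced with a constant false and the associated method sets can be dropped.

    package main

    type loud interface {
        Shout() string
    }

    type quiet struct{}

    func (quiet) Whisper() string { return "..." }

    // No type in this program has a Shout method, so this assert can never
    // succeed; with whole-program knowledge of interface types the compiler
    // can optimize it away entirely.
    func tryShout(v interface{}) (string, bool) {
        if l, ok := v.(loud); ok {
            return l.Shout(), true
        }
        return "", false
    }

    func main() {
        _, ok := tryShout(quiet{})
        println(ok) // false
    }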
This distinction was useful before when reflect wasn't properly
supported. Back then it made sense to only include method sets that were
actually used in an interface. But now that it is possible to get to
other values (for example, by extracting fields from structs) and it is
possible to turn them back into interfaces, it is necessary to preserve
all method sets that can possibly be used in the program in a type
assert, interface assert or interface method call.
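A sketch of why this is needed: the celsius value below only ever reaches an interface through reflection (a struct field extracted and converted back to an interface), so its method set must be preserved even though no direct interface conversion of celsius appears in the source.

    package main

    import (
        "fmt"
        "reflect"
    )

    type celsius float64

    func (c celsius) String() string { return fmt.Sprintf("%.1f degrees", float64(c)) }

    type reading struct {
        Temp celsius
    }

    func main() {
        r := reading{Temp: 21.5}
        // Extract the field via reflect and put it back into an interface; the
        // type assert below only works if celsius's method set was kept.
        v := reflect.ValueOf(r).Field(0).Interface()
        if s, ok := v.(fmt.Stringer); ok {
            fmt.Println(s.String())
        }
    }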
In the future, this logic will need to be revisited again when
reflect.New or reflect.Zero gets implemented.
Code size increases a bit in some cases, but usually in a very limited
way (except for one outlier in the drivers smoke tests). The next commit
will improve the situation significantly.
Unfortunately, the .rodata section can't be stored in flash. Instead, an
explicit .progmem section should be used, which is supported in LLVM as
address space 1 but not exposed to normal programs.
Eventually a pass should be written that moves trivial constant globals,
whose loads are all visible, into address space 1, to get the benefit
of storing those globals directly in ROM.
* initial commit for WASI support
* merge "time" package with wasi build tag
* override syscall package with wasi build tag
* create runtime_wasm_{js,wasi}.go files
* create syscall_wasi.go file
* create time/zoneinfo_wasi.go file as the replacement of zoneinfo_js.go
* add targets/wasi.json target
* set visibility hidden for runtime extern variables
According to the WASI docs (https://github.com/WebAssembly/WASI/blob/master/design/application-abi.md#current-unstable-abi),
the exports of a WASI executable (a Command) should not be accessed.
v0.19.0 of bytecodealliance/wasmtime, which is often referred to as the reference implementation of WASI,
does not accept any exports except functions and a limited set of variables such as "table" and "memory".
* merge syscall_{baremetal,wasi}.go
* fix js target build
* mv wasi functions to syscall/wasi && implement sleepTicks
* WASI: set visibility hidden for global variables
* mv back syscall/wasi/* to runtime package
* WASI: add test
* unexport wasi types
* WASI test: fix wasmtime path
* stop changing visibility of runtime.alloc
* use GOOS=linux, GOARCH=arm for wasi target
Signed-off-by: mathetake <takeshi@tetrate.io>
* WASI: fix build tags for os/runtime packages
Signed-off-by: mathetake <takeshi@tetrate.io>
* run WASI test only on Linux
Signed-off-by: mathetake <takeshi@tetrate.io>
* set InternalLinkage instead of changing visibility
Signed-off-by: mathetake <takeshi@tetrate.io>
This is a big change that will determine the stack size for many
goroutines automatically. Functions that aren't recursive and don't call
function pointers can in many cases have an automatically determined
worst case stack size. This is useful, as the stack size is usually much
lower than the previous hardcoded default of 1024 bytes: somewhere
around 200-500 bytes is common.
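Here's a sketch of the kind of goroutine this helps, assuming its whole call tree is statically known:

    package main

    import "time"

    // If this goroutine's call tree is non-recursive and free of calls
    // through function pointers, its worst-case stack usage can be computed
    // at link time, so a stack much smaller than the old 1024-byte default
    // can be reserved for it.
    func blink() {
        for {
            println("tick")
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        go blink()
        select {} // block forever
    }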
A side effect of this change is that the default stack sizes (including
the stack size for other architectures such as AVR) can now be changed
in the config JSON file, making it tunable per application.
It appears that LLVM is turning bitcasts into 0-index GEPs.
This caused some values to go untracked, resulting in use-after-free issues.
This solution is sub-optimal, but it is the most reasonable one I could come up with without redesigning the stack slots pass.
It was `avr-atmel-none`, which is incorrect. It must be
`avr-unknown-unknown`.
Additionally, there is no reason to specify the target triple per chip:
it can be set for all AVR chips at once, as it doesn't vary between
chips the way it does for Cortex-M.
I ran into an issue where I did a method call on a nil interface and it
resulted in a HardFault. Luckily I quickly realized what was going on so
I could fix it, but I think undefined behavior is definitely the wrong
behavior in this case. This commit therefore changes such calls to cause
a nil panic instead of introducing undefined behavior.
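A minimal example of the call in question:

    package main

    import "fmt"

    func main() {
        var s fmt.Stringer // nil interface: no type and no value
        // Previously this call was undefined behavior (a HardFault on
        // Cortex-M in the case described above); with this change it causes
        // a regular nil panic.
        fmt.Println(s.String())
    }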
This does have a code size impact. It's relatively minor, much lower
than I expected. When comparing the before and after of the drivers
smoke tests (probably the most representative sample available), I found
that most did not change at all and those that did change, normally not
more than 100 bytes (16 or 32 byte changes are typical).
Right now the pattern is the following:
switch typecode {
case 1:
call method 1
case 2:
call method 2
default:
nil panic
}
I also tried the following (in the hope that it would be easier to
optimize), but it didn't really result in a code size reduction:
switch typecode {
case 1:
call method 1
case 2:
call method 2
case 0:
nil panic
default:
unreachable
}
Some code got smaller, while other code (the majority) got bigger. Maybe
this can be improved once range[1] is finally allowed[2] on function
parameters, but it'll probably take a while before that is implemented.
[1]: https://llvm.org/docs/LangRef.html#range-metadata
[2]: https://github.com/rust-lang/rust/issues/50156
This is a common case, but it also complicates the code. Removing this
special case does have a negative effect on code size in rare cases, but
I don't think it's worth keeping around (and possibly causing bugs) for
such uncommon cases.
This should not result in functional changes, although the output (as
stated above) sometimes changes a little bit.
Previously, the function value lowering pass had special cases for when there were 0 or 1 function implementations.
However, the results of the pass were incorrect in both of these cases.
This change removes the specializations and fixes the transformation.
In the case that there was a single function implementation, the compiler emitted a select instruction to obtain the function pointer.
This selected between null and the implementing function pointer.
While this was technically correct, it failed to eliminate indirect function calls.
This prevented discovery of these calls by the coroutine lowering pass, and caused async function calls to be passed through unlowered.
As a result, the generated code had undefined behavior (usually resulting in a segfault).
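A sketch of the single-implementation case (names are illustrative): only one function in the program is ever stored in the handler func type, yet the old lowering still produced an indirect call through a select between null and that function.

    package main

    type handler func(int)

    // The only function in the program that is ever stored in a handler value.
    func onlyHandler(n int) {
        println("got", n)
    }

    // The old lowering selected between null and onlyHandler and called the
    // result indirectly, which later passes (such as coroutine lowering)
    // could not see through; the general lowering produces IR that LLVM can
    // simplify to a direct call.
    func dispatch(h handler, n int) {
        if h != nil {
            h(n)
        }
    }

    func main() {
        dispatch(onlyHandler, 42)
    }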
In the case of no function implementations, the lowering code was correct.
However, the lowering code was not run.
The discovery of function signatures was accomplished by scanning implementations, and when there were no implementations nothing was discovered or lowered.
For maintainability reasons, I have removed both specializations rather than fixing them.
This substantially simplifies the code, and reduces the amount of variation that we need to worry about for testing purposes.
The IR now generated in the cases of 0 or 1 function implementations can be efficiently simplified by LLVM's optimization passes.
Therefore, there should not be a substantial regression in terms of performance or machine code size.
This commit also adds a bit of version independence, in particular for
external commands. It also adds the LLVM version to the `tinygo version`
command, which might help while debugging.
This hack was originally introduced in
https://github.com/tinygo-org/tinygo/pull/251 to fix an escape analysis
regression after https://github.com/tinygo-org/tinygo/pull/222
introduced nil checks. Since a new optimization in LLVM (see
https://reviews.llvm.org/D60047) this hack is not necessary anymore and
can be removed.
I've compared all regular tests and smoke tests before and after to
check the size. In most cases this change was an improvement although
there are a few regressions.
This avoids problems with goroutines in WebAssembly, and is generally a
good thing. It fixes some cases of the following problem:
LLVM ERROR: Coroutines cannot handle non static allocas yet
This commit lets the compiler know about interrupts and allows
optimizations to be performed based on that: interrupts are eliminated
when they appear to be unused in a program. This is done with a new
pseudo-call (runtime/interrupt.New) that is treated specially by the
compiler.
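A sketch of the pseudo-call, assuming the runtime/interrupt API with New and Enable and a made-up IRQ number; real code would use a constant from the device package for the target chip. If the resulting interrupt were never enabled or otherwise used, the compiler could eliminate the handler entirely.

    package main

    import "runtime/interrupt"

    // Hypothetical IRQ number; in real code this comes from the device package.
    const uartIRQ = 2

    func handleUART(i interrupt.Interrupt) {
        // Handle the peripheral interrupt here.
    }

    func main() {
        // interrupt.New is treated specially by the compiler: the IRQ number
        // and handler are expected to be compile-time constants, which is
        // what lets unused interrupts be optimized away.
        intr := interrupt.New(uartIRQ, handleUART)
        intr.Enable()
        select {} // wait for interrupts
    }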
This commit makes a number of changes:
* It avoids a dependency on Compiler.emitStartGoroutine.
* It moves the func-lowering pass to the transform package.
* It adds testing to func lowering.
No functionality should have changed with this commit.
Unfortunately, while doing this I found that it doesn't actually apply
in any real-world programs (tested with `make smoketest`), apparently
because nil pointer checking messes with the functionattrs pass. I hope
to fix that after moving to LLVM 9, which has an optimization that makes
nil pointer checking easier to implement.
Also add unit tests.
This is the first of several transformation (optimization/lowering)
passes that I'd like to move to the new transform package. This
separates the compiler from the optimizer.
Also, it finally adds unit tests for the compiler, not just end-to-end
compilation tests. This should improve robustness and should make it
easier to change these transformation passes in the future.
While the heap-to-stack transform is relatively simple, other passes are
much more complex. Adding unit tests not only helps robustness over
time, but the tests also double as documentation of what these
transformation passes do exactly.
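For reference, a sketch of the kind of code the heap-to-stack transform handles: the slice never escapes, so the heap allocation the compiler emits for make can be replaced by a stack allocation.

    package main

    // buf never escapes sum, so the heap allocation behind make can be turned
    // into a stack allocation by the heap-to-stack pass.
    func sum() int {
        buf := make([]int, 4)
        total := 0
        for i := range buf {
            buf[i] = i
            total += buf[i]
        }
        return total
    }

    func main() {
        println(sum())
    }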