This is a big change: apart from removing LLVM 14, it also removes typed
pointer support (which was only fully supported in LLVM up to version
14). This removes about 200 lines of code but, more importantly, removes
a ton of special cases for LLVM 14.
This commit adds support for LLVM 16 and switches to it by default. That
means three LLVM versions are supported at the same time: LLVM 14, 15,
and 16.
This commit includes work by QuLogic:
* Part of this work was based on a PR by QuLogic:
https://github.com/tinygo-org/tinygo/pull/3649
But I also had parts of this already implemented in an old branch I
already made for LLVM 16.
* QuLogic also provided a CGo fix here, which is also incorporated in
this commit:
https://github.com/tinygo-org/tinygo/pull/3869
The difference with the original PR by QuLogic is that this commit is
more complete:
* It switches to LLVM 16 by default.
* It updates some things to also make it work with a self-built LLVM.
* It fixes the CGo bug in a slightly different way, and also fixes
another one not included in the original PR.
* It does not keep compiler tests passing on older LLVM versions. I
have found this to be quite burdensome and therefore don't generally
do this - the smoke tests should hopefully catch most regressions.
When a function was exported using //export but also had a
//go:wasm-module pragma, the //export name was ignored. The
//go:wasm-module pragma doesn't actually do anything here besides
breaking the export (exported functions don't have a module name).
I've refactored and cleaned up the code, and in the process removed this
weird edge case.
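For illustration, a function that combined both pragmas looked roughly
like this (hypothetical example; the module name is made up), and
before this cleanup its //export name was silently dropped:
//go:wasm-module foo
//export add
func add(a, b int32) int32 {
    return a + b
}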
This is a small change that's not really important in itself, but it
avoids duplicate errors in a future commit that adds error messages to
//go:wasmimport.
This gives a small improvement now, and is needed to be able to use the
Heap2Stack transform that's available in the Attributor pass. This
Heap2Stack transform could replace our custom OptimizeAllocs pass.
Most of the changes are just IR that changed, the actual change is
relatively small.
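As a rough illustration, such a transform turns a non-escaping heap
allocation into a stack allocation. The function below is a made-up
example: the slice created by make() never escapes, so the allocation
behind it can be moved to the stack:
func sum() int {
    // nums never leaves this function, so the underlying heap
    // allocation can be replaced with a stack allocation.
    nums := make([]int, 4)
    for i := range nums {
        nums[i] = i
    }
    total := 0
    for _, n := range nums {
        total += n
    }
    return total
}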
To give an example of why this is useful, here is the code size before
this change:
$ tinygo build -o test -size=short ./testdata/stdlib.go
   code    data     bss |   flash     ram
  95620    1812     968 |   97432    2780
And after this change:
$ tinygo build -o test -size=short ./testdata/stdlib.go
   code    data     bss |   flash     ram
  95380    1812     968 |   97192    2780
That's a 0.25% reduction. Not a whole lot, but nice for such a small
patch.
Previously we were using a really weird calculation to determine the
alignment in bits - written by me (no idea what I was thinking at the
time, it's obviously incorrect). Just replace it with the much more
obviously correct multiplication by 8 to get bits from bytes.
Found while working on properly dealing with alignment in `-size=full`.
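The corrected calculation amounts to nothing more than this
(illustrative helper, not the actual compiler code):
// bitsFromBytes converts an alignment in bytes (as reported by the
// target data layout) into the alignment in bits that DWARF expects.
func bitsFromBytes(alignBytes int) int {
    return alignBytes * 8
}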
ThinLTO results in a small code size reduction, which is nice
(especially on these very small chips). It also brings us one step
closer to using ThinLTO everywhere.
--allow-undefined can be a problem: it allows compiling code that will
fail when loaded. This change makes sure that if some symbols are
undefined, they are reported as an error by the linker.
Previously, people could get away with importing a function that was not
defined, like this:
func add(a, b int) int
func test() {
    println(add(3, 5))
}
This was always unintended but mostly worked. With this change, it isn't
possible anymore. Now every function needs to be marked with //export
explicitly:
//export add
func add(a, b int) int
func test() {
    println(add(3, 5))
}
As before, functions will be placed in the `env` module with the name
set from the `//export` tag. This can be overridden with
`//go:wasm-module`:
//go:wasm-module math
//export add
func add(a, b int) int
func test() {
    println(add(3, 5))
}
For the syscall/js package, I needed to give a list of symbols that are
undefined. This list is based on the JavaScript functions defined in
targets/wasm_exec.js.
This removes the parentHandle argument from the internal calling convention.
It was formerly used to implement coroutines.
Now that coroutines have been removed, it is no longer necessary.
This patch adds //go: pragmas directly to declared functions and
globals found during CGo processing. This simplifies the logic in the
compiler: it no longer has to consider special "C." prefixed function
names. It also makes the cgo pass more flexible in the pragmas it emits
for functions and global variables.
This matches Clang, and with that, it adds support for inlining between
Go and C because LLVM only allows inlining if the "target-cpu" and
"target-features" string attributes match.
For example, take a look at the following code:
// int add(int a, int b) {
// return a + b;
// }
import "C"
func main() {
    println(C.add(3, 5))
}
The 'add' function is not inlined into the main function before this
commit, but after it, it can be inlined and trivially be optimized to
`println(8)`.
Instead of doing everything in the interrupt lowering pass, generate
some more code in gen-device to declare interrupt handler functions and
do some work in the compiler so that interrupt lowering becomes a lot
simpler.
This has several benefits:
- Overall code is smaller, in particular the interrupt lowering pass.
- The code should be a bit less "magical" and a bit easier to read. In
particular, instead of having a magic runtime.callInterruptHandler
(which is written entirely by the interrupt lowering pass), the runtime
calls a generated function like device/sifive.InterruptHandler in which
this switch already exists in code (see the sketch after this list).
- Debug information is improved. This can be helpful during actual
debugging but is also useful for other uses of DWARF debug
information.
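A rough sketch of what such a generated dispatch function could look
like (illustrative only: the constants, the handler table, and the
names are made up, and the real code emitted by gen-device differs per
chip):
// Hypothetical sketch of a generated interrupt dispatcher. Because
// the switch is ordinary Go code, the debugger sees real source
// locations instead of machinery invented by the lowering pass.
const (
    irqUART0  = 2
    irqTimer0 = 3
)
var handlers [16]func()
func InterruptHandler(num int32) {
    switch num {
    case irqUART0:
        if h := handlers[irqUART0]; h != nil {
            h()
        }
    case irqTimer0:
        if h := handlers[irqTimer0]; h != nil {
            h()
        }
    }
}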
For an example on debug information improvement, this is what a
backtrace might look like before this commit:
Breakpoint 1, 0x00000b46 in UART0_IRQHandler ()
(gdb) bt
#0 0x00000b46 in UART0_IRQHandler ()
#1 <signal handler called>
[..etc]
Notice that the debugger doesn't see the source code location where it
has stopped.
After this commit, breaking at the same line might look like this:
Breakpoint 1, (*machine.UART).handleInterrupt (arg1=..., uart=<optimized out>) at /home/ayke/src/github.com/tinygo-org/tinygo/src/machine/machine_nrf.go:200
200 uart.Receive(byte(nrf.UART0.RXD.Get()))
(gdb) bt
#0 (*machine.UART).handleInterrupt (arg1=..., uart=<optimized out>) at /home/ayke/src/github.com/tinygo-org/tinygo/src/machine/machine_nrf.go:200
#1 UART0_IRQHandler () at /home/ayke/src/github.com/tinygo-org/tinygo/src/device/nrf/nrf51.go:176
#2 <signal handler called>
[..etc]
By now, the debugger sees an actual source location for UART0_IRQHandler
(in the generated file) and an inlined function.
This matches the behavior of Clang, which uses optsize for -Os and adds
minsize for -Oz.
The code size change is all over the map, but using a hacked together
size comparison tool I've found that there is a slight reduction in
binary size overall (-1.6% with the tinygo smoke tests and -0.8% for the
drivers smoke test).
This commit has a few related changes:
* It sets the optsize attribute immediately in the compiler instead of
adding it to each function afterwards in a loop. This seems to me
like the more appropriate way to do it.
* It centralizes setting the optsize attribute in the transform
package, to make later changes easier.
* It sets the optsize in a few more places: to runtime.initAll and to
WebAssembly i64 wrappers.
This commit does not affect the binary size of any of the smoke tests,
so should be risk-free.
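For illustration, centralizing this can be as small as a helper along
these lines in the transform package (a sketch using the Go LLVM
bindings, not the actual TinyGo code):
package transform
import "tinygo.org/x/go-llvm"
// setOptsize marks a function as optimize-for-size by adding the
// optsize attribute to it.
func setOptsize(ctx llvm.Context, fn llvm.Value) {
    kind := llvm.AttributeKindID("optsize")
    fn.AddFunctionAttr(ctx.CreateEnumAttribute(kind, 0))
}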
This attribute is also set by Clang when it compiles C source files
(unless -fexceptions is set). The advantage is that no unwind tables are
emitted on Linux (and perhaps other systems). It also avoids
__aeabi_unwind_cpp_pr0 on ARM when using the musl libc.
This patch adds a new pragma for functions and globals to set the
section name. This can be useful to place a function or global in a
special device specific section, for example:
* Functions may be placed in RAM to make them run faster, or in flash
(if RAM is the default) to not let them take up RAM.
* DMA memory may only be placed in a special memory area.
* Some RAM may be faster than other RAM, and some globals may be
performance critical thus placing them in this special RAM area can
help.
* Some (large) global variables may need to be placed in external RAM,
which can be done by placing them in a special section.
To use it, you have to place a function or global in a special section,
for example:
//go:section .externalram
var externalRAMBuffer [1024]byte
This can then be placed in a special section of the linker script, for
example something like this:
.bss.extram (NOLOAD) : {
    *(.externalram)
} > ERAM
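Functions work the same way; the section name below is just an example
and needs a matching rule in the linker script:
//go:section .ramfuncs
func timingCriticalHandler() {
    // performance critical code that should run from RAM
}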
This allows better escape analysis even without being able to see the
entire program. This makes the stack allocation test case more complete
but probably won't have much of an effect outside of that (as the
compiler is able to infer these attributes in the whole-program
functionattrs pass).
This commit optimizes string literals and globals by setting the
appropriate alignment and using a nil pointer in zero-length strings.
- Setting the alignment for string values has a surprisingly large
effect, up to around 2% in binary size. I suspect that LLVM picks some
default alignment for larger byte arrays if no alignment has been
specified, and that forcing an alignment of 1 packs all strings closer
together.
- Using nil for zero-length strings also has a positive effect, but
I'm not sure why. Perhaps it makes some optimizations more trivial.
- Always setting the alignment on globals improves code size slightly,
probably for the same reasons setting the alignment of string
literals improves code size. The effect is much smaller, however.
This commit might have an effect on performance, but if it does, that
should be tested separately; such a large win in binary size should
definitely not be ignored for small embedded systems.
This commit switches from the previous behavior of compiling the whole
program at once, to compiling every package in parallel and linking the
LLVM bitcode files together for further whole-program optimization.
This is a small performance win, but it has several advantages in the
future:
- There are many more things that can be done per package in parallel,
avoiding the bottleneck at the end of the compiler phase. This
should speed up the compiler further.
- This change is a necessary step towards a non-LTO build mode for
fast incremental builds that only rebuild the changed package, when
compiler speed is more important than binary size.
- This change refactors the compiler in such a way that it will be
easier to inspect the IR for one package only. Inspecting this IR
will be very helpful for compiler developers.
The SimpleDCE pass was previously used to only compile the parts of the
program that were in use. However, lately the only real purpose has been
to speed up the compiler a bit by only compiling the necessary
functions.
This pass however is a problem for compiling (and caching) packages in
parallel. Therefore, this commit removes it as a preparatory step
towards that goal.
This doesn't yet add support for actually making use of variadic
functions, but at least allows (unintended) variadic functions like the
following to work:
void foo();
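A minimal sketch of the kind of CGo code this enables, assuming the
function is called without any extra arguments:
// void foo();
// void foo() { }
import "C"
func main() {
    // foo has no prototype, so it shows up as an (unintended)
    // variadic function; calling it with no arguments now works.
    C.foo()
}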
Moving settings to a separate config struct has two benefits:
- It decouples the compiler a bit from other packages, most
importantly the compileopts package. Decoupling is generally a good
thing.
- Perhaps more importantly, it precisely specifies which settings are
used while compiling and affect the resulting LLVM module. This will
be necessary for caching the LLVM module.
While it would have been possible to cache without this refactor, it
would have been very easy to miss a setting and thus let the
compiler work with invalid/stale data.
This is a small refactor to move code away from compiler.CompileProgram,
with the goal that compiler.CompileProgram will eventually be removed
entirely in favor of compiler.CompilePackage.
This package was long making the design of the compiler more complicated
than it needs to be. Previously this package implemented several
optimization passes, but those passes have since moved to work directly
with LLVM IR instead of Go SSA. The only remaining pass is the SimpleDCE
pass.
This commit removes the *ir.Function type that permeated the whole
compiler and instead switches to use *ssa.Function directly. The
SimpleDCE pass is kept but is far less tightly coupled to the rest of
the compiler so that it can easily be removed once the switch to
building and caching packages individually happens.
This commit merges NewCompiler and Compile into one simplifying the
external interface. More importantly, it does away with the entire
Compiler object so the public API becomes a lot smaller.
The refactor is not complete: eventually, the compiler should just
compile a single package without trying to load it first (that should be
done by the builder package).
This is a fairly big commit, but it actually changes very little.
getValue should really be a property of the builder (or frame), where
the previously created instructions are kept.
This results in a link error in the following commit (undefined
reference to runtime.trackedGlobalsBitmap from .debug_info). Solution:
don't emit debug info for declared but not defined symbols.
This is part of a larger refactor that tries to shrink the ir package
and in general tries to shrink the amount of state that is kept around
in the compiler. The end goal is being able to compile packages
independent of each other, linking them together in a later stage. Along
the way, it cleans up lots of old cruft that has accumulated over the
months.
This refactor also results in globals being loaded lazily. This may be a
problem for some specific programs but will probably change back in a
commit in the near future.