The only thing that differs between all these chips is the flash
size, which can easily be passed as a linker flag instead. This removes
a bunch of duplicated code in an uncommon language (linker script).
I've also fixed a few boards with incorrect flash sizes:
* nano-rp2040 has 16MB instead of 2MB
* macropad-rp2040 has 8MB instead of 2MB
* gopher-badge has 8MB instead of 1MB
These functions can be implemented more efficiently using LLVM
intrinsics. That makes them the Go equivalent of C builtins like
__builtin_clz, which are also implemented using these LLVM intrinsics.
I believe the Go compiler does something very similar: IIRC it converts
calls to these functions into optimal instructions for the given
architecture.
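
For illustration, here is a minimal program using the kind of math/bits functions this applies to; the LLVM intrinsic names in the comments are my assumption about the likely mapping, not something taken from this change:

```go
package main

import (
	"fmt"
	"math/bits"
)

func main() {
	x := uint32(0x00ff0000)
	fmt.Println(bits.LeadingZeros32(x))  // likely lowered to llvm.ctlz.i32
	fmt.Println(bits.TrailingZeros32(x)) // likely lowered to llvm.cttz.i32
	fmt.Println(bits.OnesCount32(x))     // likely lowered to llvm.ctpop.i32
	fmt.Println(bits.ReverseBytes32(x))  // likely lowered to llvm.bswap.i32
}
```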
I tested these by running `tinygo test math/bits` after commenting out
the tests that would always fail (the *PanicZero and *PanicOverflow tests).
As discussed on Slack, I believe this property does more harm than good:
* I don't think it's used anywhere. None of the drivers use it.
* It is not fully implemented. While values <= 8 might work fine,
values larger than 8 result in extra zero bits (instead of anything
sensible).
* Worse, it doesn't return an error when the value is out of range. This is
not an optional property: if the SPI peripheral doesn't support a
particular number of bits, it should return an error instead of
silently limiting the number of bits. This will be confusing to
users.
Therefore, I propose we drop it. Maybe there are good uses for it
(perhaps for displays that use big endian 16-bit values?), but without a
good use case like a driver in tinygo.org/x/drivers, I think it's more
trouble than it's worth.
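
For contrast, here is a hedged sketch of what keeping such a property with proper error handling could look like; the SPIConfig/DataBits names and the 8-bit-only assumption are hypothetical, not the actual machine package API:

```go
package machine // hypothetical sketch, not the real machine package

import "errors"

// ErrUnsupportedDataBits reports a bit count the peripheral can't handle.
var ErrUnsupportedDataBits = errors.New("spi: unsupported number of data bits")

// SPI is a placeholder for a peripheral in this sketch.
type SPI struct{}

// SPIConfig is a hypothetical config struct with a DataBits field.
type SPIConfig struct {
	Frequency uint32
	DataBits  uint8 // bits per word; 0 means the default of 8
}

// Configure rejects unsupported bit counts instead of silently clamping.
func (spi *SPI) Configure(config SPIConfig) error {
	if config.DataBits != 0 && config.DataBits != 8 {
		// Assuming this peripheral only supports 8-bit transfers.
		return ErrUnsupportedDataBits
	}
	// ... program the peripheral registers here ...
	return nil
}
```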
The runtime.stringFromBytesTyped and runtime.stringToBytesTyped
functions aren't really necessary, because they have the same LLVM IR
signature as the functions the compiler already uses internally.
Therefore, remove them and link directly to those compiler-internal
functions.
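
As an illustration of the pattern, here is a hedged sketch of linking a wrapper directly to a compiler-internal symbol with go:linkname; the symbol name and signature below are assumptions for the example, not the actual TinyGo runtime definitions:

```go
package example

import _ "unsafe" // required for go:linkname

// Bind directly to a (hypothetical) runtime-internal conversion function
// instead of keeping a duplicate wrapper with the same LLVM IR signature.
//
//go:linkname stringFromBytes runtime.stringFromBytes
func stringFromBytes(b []byte) string

// BytesToString converts b to a string via the runtime-internal helper.
func BytesToString(b []byte) string {
	return stringFromBytes(b)
}
```

TinyGo resolves such body-less linkname declarations by symbol name; under the mainline gc toolchain the same pattern would additionally need an (empty) .s file in the package.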