These macros could in principle be (inline) functions, so it makes sense to
have them lower case to match the other C API functions.
The remaining macros that are upper case are:
- MP_OBJ_TO_PTR, MP_OBJ_FROM_PTR
- MP_OBJ_NEW_SMALL_INT, MP_OBJ_SMALL_INT_VALUE
- MP_OBJ_NEW_QSTR, MP_OBJ_QSTR_VALUE
- MP_OBJ_FUN_MAKE_SIG
- MP_DECLARE_CONST_xxx
- MP_DEFINE_CONST_xxx
These must remain macros because they are used when defining const data (at
least, MP_OBJ_NEW_SMALL_INT is, so it makes sense to keep
MP_OBJ_SMALL_INT_VALUE as a macro too).
For those macros that have been made lower case, compatibility macros are
provided for the old names so that users do not need to change their code
immediately.
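For example, compatibility shims of this form keep the old names working (a
minimal sketch; the exact set of renamed macros is not listed above):

    // Old upper-case names kept as aliases for the new lower-case ones.
    #define MP_OBJ_IS_SMALL_INT(o) mp_obj_is_small_int(o)
    #define MP_OBJ_IS_TYPE(o, t)   mp_obj_is_type(o, t)
    #define MP_OBJ_IS_INT(o)       mp_obj_is_int(o)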
Changes to the layout of the bytecode header meant that this debug code was
no longer compiling. This is now fixed and a new compile-time option is
introduced, MICROPY_DEBUG_VM_STACK_OVERFLOW, to turn on this feature (which
is disabled by default). This option is needed because more than one file
needs to cooperate to make this check work.
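For example, a port that wants this check can enable it in its
mpconfigport.h (a minimal sketch using the option named above):

    #define MICROPY_DEBUG_VM_STACK_OVERFLOW (1)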
This commit adds first class support for yield and yield-from in the native
emitter, including send and throw support, and yields enclosed in exception
handlers (which requires pulling down the NLR stack before yielding, then
rebuilding it when resuming).
This has been fully tested and is working on unix x86 and x86-64, and
stm32. Also basic tests have been done with the esp8266 port. Performance
of existing native code is unchanged.
The exception object is now stored at the start of the state, instead of at
the end (at index n_state - 1).  It was originally (way back in v1.0) put at
the end of the state because the VM didn't have a pointer to the start.  But
now that the VM takes a mp_code_state_t pointer it does have a pointer to
the start of the state, so the exception object can be put there.
This commit saves about 30 bytes of code on all architectures, and, more
importantly, reduces C-stack usage by a couple of words (8 bytes on Thumb2
and 16 bytes on x86-64) for every (non-generator) call of a bytecode
function because fun_bc_call no longer needs to remember the n_state
variable.
This commit makes viper functions have the same signature as native
functions, at the level of the emitter/assembler. This means that viper
functions can now be wrapped in the same uPy object as native functions.
Viper functions are now responsible for parsing their arguments (before it
was done by the runtime), and this makes calling them more efficient (in
most cases) because the viper entry code can be custom generated to suit
the signature of the function.
This change also opens the way forward for viper functions to take
arbitrary numbers of arguments, and for them to handle globals correctly,
among other things.
With 5 arguments to mp_arg_check_num(), some architectures need to pass
values on the stack. So compressing n_args_min, n_args_max, takes_kw into
a single word and passing only 3 arguments makes the call more efficient,
because almost all calls to this function pass in constant values. Code
size is also reduced by a decent amount:
bare-arm: -116
minimal x86: -64
unix x64: -256
unix nanbox: -112
stm32: -324
cc3200: -192
esp8266: -192
esp32: -144
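A sketch of the packing idea (the bit layout shown is illustrative and may
not match what MP_OBJ_FUN_MAKE_SIG actually uses):

    #include <stdint.h>

    // Pack n_args_min, n_args_max and takes_kw into a single word so that
    // the argument-checking call needs only 3 parameters, all constants at
    // most call sites.
    #define EXAMPLE_MAKE_SIG(n_args_min, n_args_max, takes_kw) \
        ((uint32_t)(((uint32_t)(n_args_min) << 17) \
                    | ((uint32_t)(n_args_max) << 1) \
                    | ((takes_kw) ? 1 : 0)))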
DEBUG_printf and MICROPY_DEBUG_PRINTER are now used instead of normal
printf, and a fault is fixed in mp_obj_class_lookup with debugging enabled;
see issue #3999. Debugging can now be enabled on all ports including when
nan-boxing is used.
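For example, a port can enable this output in its mpconfigport.h (a hedged
sketch: MICROPY_DEBUG_PRINTER is named above, while MICROPY_DEBUG_VERBOSE
and mp_plat_print are assumptions here):

    #define MICROPY_DEBUG_VERBOSE (1)
    #define MICROPY_DEBUG_PRINTER (&mp_plat_print)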
With the recent change b488a4a848, a
generator function now has the same layout in memory as a normal bytecode
function, and so can reuse the latter's attribute accessor code to
implement __name__.
This patch fixes the macro so you can pass any name in, and the macro will
make more sense if you're reading it on its own. It worked previously
because n_state is always passed in as n_state_out_var.
This is the second part of the fun_bc_call() vs mp_obj_fun_bc_prepare_codestate()
common-code refactor.  It factors out the code that initializes the codestate
object.  After this patch, mp_obj_fun_bc_prepare_codestate() is effectively
DECODE_CODESTATE_SIZE() followed by allocation followed by
INIT_CODESTATE(), and fun_bc_call() starts with that too.
fun_bc_call() starts with almost the same code as
mp_obj_fun_bc_prepare_codestate(); the only difference is the way the
codestate object is allocated (heap vs stack with heap fallback).
Still, it would be nice to avoid the code duplication to make further
refactoring easier.
So, this commit factors out the common code before the allocation:
decoding and calculating the codestate size.  It produces two values,
so it is structured as a macro which writes to the 2 variables passed as
arguments.
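A minimal, self-contained sketch of that pattern (the names and the size
formula are hypothetical, not the real DECODE_CODESTATE_SIZE):

    #include <stddef.h>
    #include <stdio.h>

    // A macro that "returns" two values by writing to the variables passed
    // as its last two arguments.
    #define DECODE_SIZES(n_state, n_exc, n_state_out_var, state_size_out_var) \
        do { \
            (n_state_out_var) = (n_state); \
            (state_size_out_var) = sizeof(void *) * ((n_state) + 2 * (n_exc)); \
        } while (0)

    int main(void) {
        size_t n_state, state_size;
        DECODE_SIZES(10, 2, n_state, state_size);
        printf("n_state=%zu state_size=%zu\n", n_state, state_size);
        return 0;
    }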
Header files that are considered internal to the py core and should not
normally be included directly are:
py/nlr.h - internal nlr configuration and declarations
py/bc0.h - contains bytecode macro definitions
py/runtime0.h - contains basic runtime enums
Instead, the top-level header files to include are one of:
py/obj.h - includes runtime0.h and defines everything to use the
mp_obj_t type
py/runtime.h - includes mpstate.h and hence nlr.h, obj.h, runtime0.h,
and defines everything to use the general runtime support functions
Additional, specific headers (eg py/objlist.h) can be included if needed.
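For example, extension code would typically start with:

    #include "py/runtime.h"  // brings in mpstate.h, nlr.h, obj.h and runtime0.h
    #include "py/objlist.h"  // a specific header, included only when needed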
Taking the address of a local variable leads to increased stack usage, so
the mp_decode_uint_skip() function is added to reduce the need for taking
addresses. The changes in this patch reduce stack usage of a Python call
by 8 bytes on ARM Thumb, by 16 bytes on non-windowing Xtensa archs, and by
16 bytes on x86-64. Code size is also slightly reduced on most archs by
around 32 bytes.
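A hedged sketch of such a skip helper, assuming the usual 7-bits-per-byte
encoding with the top bit as a continuation flag (not necessarily the exact
mp_decode_uint_skip() implementation):

    #include <stdint.h>

    // Skip over an encoded unsigned integer and return the pointer just
    // past it, so callers that don't need the value no longer have to take
    // the address of a local just to advance the pointer.
    static const uint8_t *example_decode_uint_skip(const uint8_t *ptr) {
        while ((*ptr++ & 0x80) != 0) {
            // continuation bit set, keep skipping
        }
        return ptr;
    }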
This patch changes mp_uint_t to size_t for the len argument of the
following public facing C functions:
mp_obj_tuple_get
mp_obj_list_get
mp_obj_get_array
These functions take a pointer to the len argument (to be filled in by the
function) and callers of these functions should update their code so the
type of len is changed to size_t.  For ports that don't use nan-boxing
there should be no change in generated code because the size of the type
remains the same (word sized), and in a lot of cases there won't even be a
compiler warning if the type remains as mp_uint_t.
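A hedged example of the caller-side change (list_obj is a stand-in for any
list object):

    size_t len;  // previously mp_uint_t
    mp_obj_t *items;
    mp_obj_list_get(list_obj, &len, &items);
    for (size_t i = 0; i < len; i++) {
        // ... use items[i] ...
    }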
The reason for this change is to standardise on the use of size_t for
variables that count memory (or memory related) sizes/lengths. It helps
builds that use nan-boxing.
Instead of caching data that is constant (code_info, const_table and
n_state), store just a pointer to the underlying function object from which
this data can be derived.
This helps reduce stack usage for the case when the mp_code_state_t
structure is stored on the stack, as well as heap usage when it's stored
on the heap.
The downside is that the VM becomes a little more complex because it now
needs to derive the data from the underlying function object. But this
doesn't impact the performance by much (if at all) because most of the
decoding of data is done outside the main opcode loop. Measurements using
pystone show that little to no performance is lost.
This patch also fixes a nasty bug whereby the bytecode can be reclaimed by
the GC during execution. With this patch there is always a pointer to the
function object held by the VM during execution, since it's stored in the
mp_code_state_t structure.
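A simplified sketch of the idea (the type and field names are illustrative
stand-ins, not the exact mp_code_state_t definition):

    #include <stddef.h>

    typedef void *obj_t;
    typedef unsigned char byte;

    // Before: constant data was cached in every code state.
    typedef struct {
        const byte *code_info;     // derivable from the function object
        const obj_t *const_table;  // derivable from the function object
        size_t n_state;            // derivable from the function object
        const byte *ip;
        obj_t *sp;
        obj_t state[1];
    } code_state_before_t;

    // After: only a pointer to the function object is kept; the VM derives
    // code_info, const_table and n_state from it, and the pointer also
    // keeps the bytecode reachable by the GC while the function executes.
    typedef struct {
        void *fun_bc;  // the underlying bytecode function object
        const byte *ip;
        obj_t *sp;
        obj_t state[1];
    } code_state_after_t;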
Builtin functions with a fixed number of arguments (0, 1, 2 or 3) are
quite common. Before this patch the wrapper for such a function cost
3 machine words. After this patch it only takes 2, which can reduce the
code size by quite a bit (and pays off even more, the more functions are
added). It also makes function dispatch slightly more efficient in CPU
usage, and furthermore reduces stack usage for these cases. On x86 and
Thumb archs the dispatch functions are now tail-call optimised by the
compiler.
The bare-arm port's code size increases by 76 bytes, but stmhal's drops by
904 bytes.  Stack usage by these builtin functions is decreased by 48
bytes on Thumb2 archs.
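A hedged example of defining such a fixed-arity builtin with the standard
macros (the function itself is made up for illustration):

    #include "py/runtime.h"

    static mp_obj_t example_double(mp_obj_t x_in) {
        return mp_obj_new_int(2 * mp_obj_get_int(x_in));
    }
    static MP_DEFINE_CONST_FUN_OBJ_1(example_double_obj, example_double);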
Before this patch, the native types for uint and ptr/ptr8/ptr16/ptr32
all overlapped and it was possible to make a mistake in casting. Now,
these types are all separate and any coding mistakes will be raised
as runtime errors.
This patch changes the type signature of the .make_new and .call object
method slots to use size_t for n_args and n_kw (previously mp_uint_t).
This makes code more efficient when mp_uint_t is larger than a machine
word, and doesn't affect ports where size_t and mp_uint_t have the same size.
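A hedged sketch of the updated slot signatures (the mytype_* names are
illustrative):

    static mp_obj_t mytype_make_new(const mp_obj_type_t *type,
                                    size_t n_args, size_t n_kw,  // was mp_uint_t
                                    const mp_obj_t *args);
    static mp_obj_t mytype_call(mp_obj_t self_in,
                                size_t n_args, size_t n_kw,      // was mp_uint_t
                                const mp_obj_t *args);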
This allows the mp_obj_t type to be configured to something other than a
pointer-sized primitive type.
This patch also includes additional changes to allow the code to compile
when sizeof(mp_uint_t) != sizeof(void*), such as using size_t instead of
mp_uint_t, and various casts.
Main changes when MICROPY_PERSISTENT_CODE is enabled are:
- qstrs are encoded as 2-byte fixed width in the bytecode
- all pointers are removed from bytecode and put in const_table (this
includes const objects and raw code pointers)
Ultimately this option will enable persistence for not just bytecode but
also native code.
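For example (a hedged sketch; only MICROPY_PERSISTENT_CODE itself is named
above, and the load-specific option shown here is an assumption), a port
might enable this in its mpconfigport.h:

    #define MICROPY_PERSISTENT_CODE_LOAD (1)  // enable loading of persistent code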
If heap allocation for the Python-stack of a function fails then we may
as well allocate the Python-stack on the C stack.  This allows more code
to run without using the heap.
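A hedged sketch of the fallback pattern (not the actual fun_bc_call() code):

    #include <alloca.h>
    #include <stdlib.h>

    void call_with_py_stack(size_t state_size) {
        int state_on_heap = 1;
        void *state = malloc(state_size);  // try the heap first
        if (state == NULL) {
            state = alloca(state_size);    // fall back to the C stack;
            state_on_heap = 0;             // valid until this function returns
        }
        // ... execute the bytecode using `state` as its Python stack ...
        if (state_on_heap) {
            free(state);
        }
    }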
Hashing is now done using the mp_unary_op function with MP_UNARY_OP_HASH as
the operator argument.  Hashing for int, str and bytes still goes via a
fast path in mp_unary_op since they are the most common objects which
need to be hashed.
This led to quite a bit of code cleanup, and should be more efficient
if anything.  It saves 176 bytes of code space on Thumb2, and 360 bytes on
x86.
The only loss is that the error message "unhashable type" is now the
more generic "unsupported type for __hash__".
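A hedged usage example (obj stands for any hashable object):

    mp_obj_t hash_obj = mp_unary_op(MP_UNARY_OP_HASH, obj);
    mp_int_t hash = mp_obj_get_int(hash_obj);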
Previous to this patch the printing mechanism was a bit of a tangled
mess. This patch attempts to consolidate printing into one interface.
All (non-debug) printing now uses the mp_print* family of functions,
mainly mp_printf. All these functions take an mp_print_t structure as
their first argument, and this structure defines the printing backend
through the "print_strn" function of said structure.
Printing from the uPy core can reach the platform-defined print code via
two paths: either through mp_sys_stdout_obj (defined per port) in
conjunction with mp_stream_write; or through the mp_plat_print structure
which uses the MP_PLAT_PRINT_STRN macro to define how strings are printed
on the platform.  The former is only used when MICROPY_PY_IO is defined.
With this new scheme printing is generally more efficient (fewer layers
to go through, fewer arguments to pass), and, given an mp_print_t*
structure, one can call mp_print_str for efficiency instead of
mp_printf("%s", ...).  Code size is also reduced by around 200 bytes on
Thumb2 archs.
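A hedged usage example, given an mp_print_t *print (the values are
illustrative):

    mp_printf(print, "count=%d\n", 42);
    mp_print_str(print, "done\n");  // cheaper than mp_printf("%s", ...)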
This simplifies the API for objects and reduces code size (by around 400
bytes on Thumb2, and around 2k on x86).  Performance impact was measured
with the Pystone benchmark, and the change was barely noticeable.