/*
* This file is part of the MicroPython project, http://micropython.org/
*
* The MIT License (MIT)
*
* Copyright (c) 2013-2016 Damien P. George
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#ifndef MICROPY_INCLUDED_PY_PERSISTENTCODE_H
#define MICROPY_INCLUDED_PY_PERSISTENTCODE_H

#include "py/mpprint.h"
#include "py/reader.h"
#include "py/emitglue.h"

// The current version of .mpy files
#define MPY_VERSION 6
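// Version 6 reworked the format so that the bytecode and most data are
// static: qstrs are referenced indirectly through a per-module qstr table,
// which allows .mpy files located in addressable flash/ROM to be executed
// largely in place.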
// Macros to encode/decode flags to/from the feature byte
#define MPY_FEATURE_ENCODE_FLAGS(flags) (flags)
#define MPY_FEATURE_DECODE_FLAGS(feat) ((feat) & 3)
// Macros to encode/decode native architecture to/from the feature byte
#define MPY_FEATURE_ENCODE_ARCH(arch) ((arch) << 2)
#define MPY_FEATURE_DECODE_ARCH(feat) ((feat) >> 2)
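// The feature byte therefore has the layout:
//   bits 0-1: feature flags (MPY_FEATURE_FLAGS, defined below)
//   bits 2-7: native architecture (one of the MP_NATIVE_ARCH_* values below)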
// The feature flag bits encode the compile-time config options that affect
// the generated bytecode. Note: position 0 is now unused
// (formerly MICROPY_OPT_CACHE_MAP_LOOKUP_IN_BYTECODE).
#define MPY_FEATURE_FLAGS ( \
((MICROPY_PY_BUILTINS_STR_UNICODE) << 1) \
)
// This is a version of the flags that can be configured at runtime.
#define MPY_FEATURE_FLAGS_DYNAMIC ( \
((MICROPY_PY_BUILTINS_STR_UNICODE_DYNAMIC) << 1) \
)
// Define the host architecture
#if MICROPY_EMIT_X86
#define MPY_FEATURE_ARCH (MP_NATIVE_ARCH_X86)
#elif MICROPY_EMIT_X64
#define MPY_FEATURE_ARCH (MP_NATIVE_ARCH_X64)
#elif MICROPY_EMIT_THUMB
#if defined(__thumb2__)
#if defined(__ARM_FP) && (__ARM_FP & 8) == 8
#define MPY_FEATURE_ARCH (MP_NATIVE_ARCH_ARMV7EMDP)
#elif defined(__ARM_FP) && (__ARM_FP & 4) == 4
#define MPY_FEATURE_ARCH (MP_NATIVE_ARCH_ARMV7EMSP)
#else
#define MPY_FEATURE_ARCH (MP_NATIVE_ARCH_ARMV7EM)
#endif
#else
#define MPY_FEATURE_ARCH (MP_NATIVE_ARCH_ARMV7M)
#endif
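// Accept .mpy files built for this or any lower Thumb architecture
// (e.g. ARMV6M code can run on an ARMV7EM core).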
#define MPY_FEATURE_ARCH_TEST(x) (MP_NATIVE_ARCH_ARMV6M <= (x) && (x) <= MPY_FEATURE_ARCH)
#elif MICROPY_EMIT_ARM
#define MPY_FEATURE_ARCH (MP_NATIVE_ARCH_ARMV6)
#elif MICROPY_EMIT_XTENSA
#define MPY_FEATURE_ARCH (MP_NATIVE_ARCH_XTENSA)
#elif MICROPY_EMIT_XTENSAWIN
#define MPY_FEATURE_ARCH (MP_NATIVE_ARCH_XTENSAWIN)
#else
#define MPY_FEATURE_ARCH (MP_NATIVE_ARCH_NONE)
#endif
#ifndef MPY_FEATURE_ARCH_TEST
#define MPY_FEATURE_ARCH_TEST(x) ((x) == MPY_FEATURE_ARCH)
#endif
// The second and third bytes of a supported .mpy file, read as a 16-bit little-endian integer
#define MPY_FILE_HEADER_INT (MPY_VERSION \
| (MPY_FEATURE_ENCODE_FLAGS(MPY_FEATURE_FLAGS) | MPY_FEATURE_ENCODE_ARCH(MPY_FEATURE_ARCH)) << 8)
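// For example (illustrative values, assuming unicode is enabled and no
// native emitter is configured): MPY_VERSION 6 with feature byte 0x02 gives
// 0x0206, i.e. the bytes 0x06 0x02 following the leading 'M' in the file.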
enum {
MP_NATIVE_ARCH_NONE = 0,
MP_NATIVE_ARCH_X86,
MP_NATIVE_ARCH_X64,
MP_NATIVE_ARCH_ARMV6,
MP_NATIVE_ARCH_ARMV6M,
MP_NATIVE_ARCH_ARMV7M,
MP_NATIVE_ARCH_ARMV7EM,
MP_NATIVE_ARCH_ARMV7EMSP,
MP_NATIVE_ARCH_ARMV7EMDP,
MP_NATIVE_ARCH_XTENSA,
MP_NATIVE_ARCH_XTENSAWIN,
};
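// These values are stored in the feature byte of .mpy files, so existing
// entries must keep their positions; new architectures should be appended.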
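// Load a compiled module from a .mpy image: via a generic reader, from a
// buffer in memory, or from a file on the filesystem.  Example (sketch only;
// allocating and initialising the mp_module_context_t is the caller's job
// and is elided here):
//
//     mp_module_context_t *ctx = ...;  // fresh module context
//     mp_compiled_module_t cm = mp_raw_code_load_file("example.mpy", ctx);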
mp_compiled_module_t mp_raw_code_load(mp_reader_t *reader, mp_module_context_t *ctx);
mp_compiled_module_t mp_raw_code_load_mem(const byte *buf, size_t len, mp_module_context_t *ctx);
mp_compiled_module_t mp_raw_code_load_file(const char *filename, mp_module_context_t *ctx);
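// Save a compiled module as a .mpy image, either through a generic printer
// or directly to a file (.mpy files are built this way by mpy-cross).
// Example (sketch): mp_raw_code_save_file(&cm, "example.mpy");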
void mp_raw_code_save(mp_compiled_module_t *cm, mp_print_t *print);
void mp_raw_code_save_file(mp_compiled_module_t *cm, const char *filename);
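// Apply relocations to native machine code loaded from a .mpy file (only
// relevant when the file contains native/viper code).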
void mp_native_relocate(void *reloc, uint8_t *text, uintptr_t reloc_text);
#endif // MICROPY_INCLUDED_PY_PERSISTENTCODE_H