1 Commit (8e048d2548867aac743866ca5a4c244b7b5aac09)
Author | SHA1 | Message | Date |
---|---|---|---|
Jim Mussared | 154b4eb354 | py: Implement "common word" compression scheme for error messages. | 5 years ago |

Commit message:

The idea here is that there's a moderate amount of ROM used up by exception text. Obviously we try to keep the messages short, and the code can enable terse errors, but it still adds up. Listed below is the total string data size for various ports:

- bare-arm: 2860
- minimal: 2876
- stm32: 8926 (PYBV11)
- cc3200: 3751
- esp32: 5721

This commit implements compression of these strings. It takes advantage of the fact that these strings are all 7-bit ASCII: it extracts the 128 most frequently used words from the messages and stores them packed (dropping their null terminators), then uses (0x80 | index) inside strings to refer to these common words. Spaces are automatically added around words, saving more bytes. This happens transparently in the build process, mirroring the steps used to generate the QSTR data. The MP_COMPRESSED_ROM_TEXT macro wraps any literal string that should be compressed, and it is automatically decompressed in mp_decompress_rom_string.

There are many schemes that could be used for the compression, and some are included in py/makecompresseddata.py for reference (space, Huffman, ngram, common word). Results showed that the common-word compression performs best, even before counting the added cost of a Huffman decoder. This might be slightly counter-intuitive, but the data is extremely repetitive at the word level, and a byte-level entropy coder can't exploit that as efficiently. Ideally one would combine both approaches, but for now the common-word approach is the one that is used.

For additional comparison, the size of the raw data compressed with gzip and zlib is calculated, as a sort of proxy for a lower entropy bound. With this scheme we come within 15% on stm32, and 30% on bare-arm (i.e. we use x% more bytes than the data compressed with gzip -- not counting the code overhead of a decoder, and how this would hypothetically be implemented).

The feature is disabled by default and can be enabled by setting MICROPY_ROM_TEXT_COMPRESSION at the Makefile level.
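To make the scheme concrete, below is a minimal, self-contained Python sketch of the common-word idea. It is not the code in py/makecompresseddata.py; the function names (`make_dictionary`, `compress`, `decompress`) and the exact rules for the implied spaces are assumptions chosen for illustration. It does follow the byte layout the commit message describes: a dictionary of at most 128 common words, single bytes of the form (0x80 | index) referring to them, and spaces around dictionary words left implicit.

```python
# Illustrative sketch only -- not the MicroPython implementation.
# A dictionary entry is referenced by one byte (0x80 | index), so the
# dictionary holds at most 128 words; spaces adjacent to a dictionary
# reference are implied rather than stored.
from collections import Counter


def make_dictionary(messages, size=128):
    # Keep the `size` most frequent words (the real tool could weight by
    # bytes saved; plain frequency is enough for illustration).
    counts = Counter(word for msg in messages for word in msg.split(" ") if word)
    return [word for word, _ in counts.most_common(size)]


def compress(msg, dictionary):
    index = {word: i for i, word in enumerate(dictionary)}
    out = bytearray()
    prev_was_ref = False
    for i, word in enumerate(msg.split(" ")):
        if word in index:
            out.append(0x80 | index[word])    # one byte replaces the whole word
            prev_was_ref = True
        else:
            # Only store a space between two literal words; spaces next to a
            # dictionary reference are implied.
            if i > 0 and not prev_was_ref:
                out.append(ord(" "))
            out.extend(word.encode("ascii"))  # messages are 7-bit ASCII
            prev_was_ref = False
    return bytes(out)


def decompress(data, dictionary):
    out = []
    prev_was_ref = False
    for b in data:
        if b & 0x80:
            if out and out[-1] != " ":
                out.append(" ")               # implied space before the word
            out.append(dictionary[b & 0x7F])
            prev_was_ref = True
        else:
            if prev_was_ref:
                out.append(" ")               # implied space after the word
                prev_was_ref = False
            out.append(chr(b))
    return "".join(out)


if __name__ == "__main__":
    messages = [
        "object is not callable",
        "object is not iterable",
        "can't convert to int",
    ]
    common = make_dictionary(messages)
    packed = compress("object is not iterable", common)
    assert decompress(packed, common) == "object is not iterable"
    print(len(packed), "bytes instead of", len("object is not iterable"))
```

In the real build, the dictionary and compressed strings are generated at build time (mirroring the QSTR steps), and strings wrapped in MP_COMPRESSED_ROM_TEXT are decompressed on demand via mp_decompress_rom_string, as the commit message above describes.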