This was previously implemented by adding additional members to the
mp_obj_type_t defined for each NIC, which is difficult to do cleanly with
the new object type slots mechanism. The way this works is also not
supported on GCC 8.x and below.
Instead replace it with the type protocol, which is a much simpler way of
achieving the same thing.
This affects the WizNet (in non-LWIP mode) and Nina NIC drivers.
This work was funded through GitHub Sponsors.
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
Changes in this commit:
- Clear and mask D2 EXTIs.
- Set correct voltage scaling level for standby mode.
- Disable the debug MCU (if debugging is disabled) for all MCU series.
It keeps compatibility with the XIAO bootloader by:
- using SoftDevice 7.3.0
- reserving 48k of memory for the bootloader.
So on double reset a drive pops up for uploading a UF2 image or an nrfutil
zip package file. Instructions to create it from a hex file are included.
The bootloader can also be activated with the touch-1200 option of nrfutil.
The script download_ble_stack.sh has been adapted to get the version 7.3.0
SoftDevice files. It may have to be executed once before building.
The file system is set to 256k and the pin definitions are adapted.
Besides that, it has the common functionality and omissions. The on-board
sensors and additional flash can be supported by Python scripts, for
example along the lines of the sketch below.
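As a purely hypothetical illustration of such a script, a first step could
be probing the on-board I2C sensors; the bus pins below are placeholders
and must be taken from the board's pin definitions:

    # Hypothetical sketch only: probe the on-board I2C sensors.
    # The pin numbers are placeholders; use the ones from the board's
    # pin definitions.
    from machine import SoftI2C, Pin

    i2c = SoftI2C(scl=Pin(27), sda=Pin(26))
    print(i2c.scan())  # prints the addresses of any sensors found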
This was introduced by 35fb90bd57, but
it is much simpler and essentially the same to just use
`tud_cdc_n_connected()`.
The only difference is that tud_cdc_n_connected() only checks for DTR,
but this is correct anyway: DTR indicates device presence, RTS indicates
that the host wants to receive data.
Signed-off-by: Damien Tournoud <damien@platform.sh>
usocket_events_deinit will only be available if MICROPY_PY_USOCKET_EVENTS
is enabled (which is only enabled when webrepl is enabled).
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
This adds a concurrency section to all GitHub workflows to cancel any
in-progress workflow when a branch is updated. This should cancel any
ongoing or queued workflows, e.g. when a pull request is updated.
Signed-off-by: David Lechner <david@pybricks.com>
The compiler is not picky right now, but these are actually all syntax
errors:
- await is only valid in an async function
- async functions that use yield are actually async generators (a construct
not supported by the compiler right now)
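A quick illustration of both cases, shown with CPython semantics for
reference (this snippet is only for demonstration and is not part of the
change):

    # 'await' outside an 'async def' is rejected at compile time:
    try:
        compile("def f():\n    await g()\n", "<demo>", "exec")
    except SyntaxError as err:
        print("await outside async function:", err)

    # An 'async def' containing 'yield' compiles to an async generator,
    # a construct the MicroPython compiler does not currently support:
    import inspect

    async def agen():
        yield 1

    print(inspect.isasyncgenfunction(agen))  # True in CPython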
ADC: The argument of vref=num is an integer. Values for num are:
SAMD21:
0 INT1V 1.0V voltage reference
1 INTVCC0 1/1.48 Analog voltage supply
2 INTVCC1 1/2 Analog voltage supply (only for VDDANA > 2.0V)
3 VREFA External reference
4 VREFB External reference
SAMD51:
0 INTREF internal bandgap reference
1 INTVCC1 Analog voltage supply
2 INTVCC0 1/2 Analog voltage supply (only for VDDANA > 2.0V)
3 AREFA External reference A
4 AREFB External reference B
5 AREFC External reference C (ADC1 only)
DAC: The argument of vref=num is an integer. Suitable values:
SAMD21:
0 INT1V Internal voltage reference
1 VDDANA Analog voltage supply
2 VREFA External reference
SAMD51:
0 INTREF Internal bandgap reference
1 VDDANA Analog voltage supply
2 VREFAU Unbuffered external voltage reference (not buffered in DAC)
4 VREFAB Buffered external voltage reference (buffered in DAC).
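As a usage sketch, using the SAMD51 values from the tables above (the ADC
pin name and the DAC id are assumptions for illustration, not part of the
change):

    # Sketch only: select the analog voltage supply as reference.
    from machine import ADC, DAC

    adc = ADC("A2", vref=1)   # SAMD51: 1 = INTVCC1, analog voltage supply
    dac = DAC(0, vref=1)      # SAMD51: 1 = VDDANA, analog voltage supply
    print(adc.read_u16())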
Changes in this commit:
- Do not deinit IRQ when uart.deinit() is called with an inactive object.
- Remove its use for the finaliser. There is another mechanism for soft
reset, and it is not needed otherwise.
- Do not tag the UART buffers with MP_STATE_PORT; it is not required.
Clearing the DRE flag for the transmit interrupt at the end of a
uart.write() also cleared the RXC flag, disabling the receive interrupt.
This commit changes the flag set/clear mechanism in the SPI driver as
well, even though it did not cause a problem there; at least it saves a
few bytes of code.
Applies to both SPI and I2C. The underflow caused high baudrate settings
to result in the lowest possible baudrate. The overflow resulted in
erratic baudrates, not just the lowest possible; the sketch below
illustrates the failure mode.
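The following is an illustration only, not the actual driver formula; the
reference clock and the 8-bit register width are assumptions made to show
how an unclamped divisor misbehaves at both ends:

    # Illustration: a divisor of the form f_ref // (2 * baudrate) - 1
    # stored in an 8-bit register underflows for very high baudrates
    # and overflows for very low ones.
    f_ref = 48_000_000

    for baudrate in (30_000_000, 10_000):
        raw = f_ref // (2 * baudrate) - 1
        reg = raw & 0xFF  # what actually ends up in the 8-bit register
        print(baudrate, raw, reg)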
The datasheet on page 55 shows PF0 (SDA) and PF1 (SCL) are the pins for
I2C2, but these pins do not work. Checking the MBED pinout for the
NUCLEO-F429ZI shows:
I2C1: PB8 (SCL) and PB9 (SDA).
I2C2: PB10 (SCL) and PB11 (SDA).
Both of these work; scanning them finds the devices connected to them.
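A quick check from the REPL might look like the following sketch, assuming
firmware built with the fixed pin definitions:

    # Sketch: scan both hardware I2C buses on the NUCLEO-F429ZI.
    from machine import I2C

    for bus in (1, 2):           # I2C1 on PB8/PB9, I2C2 on PB10/PB11
        i2c = I2C(bus)
        print(bus, i2c.scan())   # lists the addresses of attached devices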
Signed-off-by: Dale Weber <hybotics.sd@gmail.com>
This changes the signatures of QSPI write_cmd_data, write_cmd_addr_data and
read_cmd_qaddr_qdata so they return an error code. The softqspi and stm32
hardware qspi drivers are updated to follow this new signature. Also the
spiflash driver is updated to use these new return values.
Signed-off-by: Damien George <damien@micropython.org>
The assertion that is added here (to gc.c) fails when running this new test
if ALLOC_TABLE_GAP_BYTE is set to 0.
Signed-off-by: Jeff Epler <jepler@gmail.com>
Signed-off-by: Damien George <damien@micropython.org>
Prior to this fix the following crash occurred, with a GC layout of:
GC layout:
alloc table at 0x3fd80428, length 32001 bytes, 128004 blocks
finaliser table at 0x3fd88129, length 16001 bytes, 128008 blocks
pool at 0x3fd8bfc0, length 2048064 bytes, 128004 blocks
Block 128003 is an AT_HEAD and eventually is passed to gc_mark_subtree.
This causes gc_mark_subtree to call ATB_GET_KIND(128004). When block 1 is
created with a finaliser, the first byte of the finaliser table becomes
0x2, but ATB_GET_KIND(128004) reads these bits as AT_TAIL, and then
gc_mark_subtree references past the end of the heap, which happened to be
past the end of PSRAM on the esp32-s2.
The fix in this commit is to ensure there is a one-byte gap after the ATB
filled permanently with AT_FREE.
Fixes issue #7116.
See also https://github.com/adafruit/circuitpython/issues/5021
Signed-off-by: Jeff Epler <jepler@gmail.com>
Signed-off-by: Damien George <damien@micropython.org>
When you want to use the valgrind memory analysis tool on MicroPython, you
can arrange to define MICROPY_DEBUG_VALGRIND to enable use of special
valgrind macros. For now, this only fixes `gc_get_ptr` so that it never
emits the diagnostic "Conditional jump or move depends on uninitialised
value(s)".
Signed-off-by: Jeff Epler <jepler@gmail.com>
This change makes it so the compiler and persistent code loader take a
mp_compiled_module_t* as their last argument, instead of returning this
struct. This eliminates a duplicate context variable for all callers of
these functions (because the context is now stored in the
mp_compiled_module_t by the caller), and also eliminates any confusion
about which context to use after the mp_compile_to_raw_code or
mp_raw_code_load function returns (because there is now only one context,
that stored in mp_compiled_module_t.context).
Reduces code size by 16 bytes on ARM Cortex-based ports.
Signed-off-by: Damien George <damien@micropython.org>
This fixes the case for the code size comment action where there is no
matching artifact. Apparently, the result of the github-script action was
not treating `false` as a boolean value. To fix the problem we change the
result to use a string. Also add some logging to make the step a bit less
cryptic.
Signed-off-by: David Lechner <david@pybricks.com>
These unimplemented features may never be implemented, and having the word
"yet" there takes up space.
Signed-off-by: Damien George <damien@micropython.org>
This module is useful, but it is not always needed. Disabling it saves
several kilobytes of build size, depending on other config options.
Signed-off-by: Laurens Valk <laurens@pybricks.com>
The STM32H7xx HAL LPUART AF macros are missing the number; this HAL is the
only one that's inconsistent in the way it defines LPUART AF macros, so we
only need to define them for H7.
Prior to this commit, only sector 0 was erase/write protected, which may
not be enough to protect all of mboot (especially if mboot lives at a
higher address than the start of flash).
This commit makes sure all internal flash sectors that mboot lives in are
protected from erasing and writing. The linker script must define
_mboot_writable_flash_start for this to work.
Signed-off-by: Damien George <damien@micropython.org>
This changes the code size workflow to post a comment on pull requests with
the code size report. It also removes the error threshold so that the test
won't fail if code size increases.
Allowable code size changes are subjective, so shouldn't cause CI to fail.
In addition, failing CI tests can cause other hooks like code coverage
reports to be suppressed, so this fixes that problem as well.
Fixes issue #8464.
Signed-off-by: David Lechner <david@pybricks.com>
The code was already checking for duplicate kwargs for named parameters,
but if `**kwargs` was given as a parameter, it did not check for multiple
uses of the same argument name.
This fixes the issue by adding an additional check to catch duplicates,
and adds a test to exercise the code; see the sketch below.
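A sketch of the behaviour being enforced, shown with CPython for
reference (not the test added by this commit):

    # Passing the same keyword both explicitly and via ** must raise
    # TypeError, even when the function only declares a **kwargs parameter.
    def f(**kwargs):
        return kwargs

    try:
        f(a=1, **{"a": 2})
    except TypeError as err:
        print(err)  # ... got multiple values for keyword argument 'a'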
Fixes issue #10083.
Signed-off-by: David Lechner <david@pybricks.com>