The f-string `{x=}` feature was introduced in Python 3.8, so this test needs a
separate .exp file to run on earlier Python versions.
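For reference, a minimal illustration of the `{x=}` syntax this test exercises
(output shown as a comment):
    x = 1
    print(f"{x=}")  # prints: x=1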
See https://bugs.python.org/issue36817
Signed-off-by: Damien George <damien@micropython.org>
This achieves a substantial performance improvement when rendering glyphs
to color displays, the benefit increasing in proportion to the number of
pixels in the glyph.
Prevents the finaliser from being missed if there's a dangling reference
on the stack to one of the blocks for the files (this test checks that
they get finalised).
See github.com/micropython/micropython/pull/7659#issuecomment-899479793
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
This tests both sending indications/notifications from a server to
subscribed clients via gatts_write(..., send_update=True) and subscribing
from a client.
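A hedged sketch of the server-side call being tested (the UUIDs and payload
are illustrative, and a client must already be subscribed for the update to
be delivered):
    import bluetooth

    ble = bluetooth.BLE()
    ble.active(True)

    # Register one service with a single characteristic that supports notify.
    SERVICE_UUID = bluetooth.UUID(0x181A)
    CHAR_UUID = bluetooth.UUID(0x2A6E)
    CHAR = (CHAR_UUID, bluetooth.FLAG_READ | bluetooth.FLAG_NOTIFY)
    ((char_handle,),) = ble.gatts_register_services(((SERVICE_UUID, (CHAR,)),))

    # Writing with send_update=True sends a notification/indication to any
    # subscribed clients, which is the behaviour this test exercises.
    ble.gatts_write(char_handle, b"\x01\x02", send_update=True)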
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
This implements (most of) the PEP-498 spec for f-strings and is based on
https://github.com/micropython/micropython/pull/4998 by @klardotsh.
It is implemented in the lexer as a syntax translation to `str.format`:
f"{a}" --> "{}".format(a)
It also supports:
f"{a=}" --> "a={}".format(a)
This is done by extracting the arguments into a temporary vstr buffer,
then after the string has been tokenized, the lexer input queue is saved
and the contents of the temporary vstr buffer are injected into the lexer
instead.
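For example, a format spec is carried through unchanged by this translation
(a hedged illustration, assuming the usual PEP-498 spec syntax):
f"{x:08.2f} {y}" --> "{:08.2f} {}".format(x, y)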
There are four main limitations:
- raw f-strings (`fr` or `rf` prefixes) are not supported and will raise
`SyntaxError: raw f-strings are not supported`.
- literal concatenation of f-strings with adjacent strings will fail
"{}" f"{a}" --> "{}{}".format(a) (str.format will incorrectly use
the braces from the non-f-string)
f"{a}" f"{a}" --> "{}".format(a) "{}".format(a) (cannot concatenate)
- PEP-498 requires the full parser to understand the interpolated
argument, however because this entirely runs in the lexer it cannot
resolve nested braces in expressions like
f"{'}'}"
- The !r, !s, and !a conversions are not supported.
Includes tests and cpydiffs.
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
The correct callback-deregister functions must be called dependent on the
socket type, otherwise resources may not be freed correctly.
Signed-off-by: Damien George <damien@micropython.org>
Test instances can now use the following methods to synchronise their
execution:
multitest.broadcast("sync message")
multitest.wait("sync message")
Signed-off-by: Damien George <damien@micropython.org>
Previously a subclass of a type that didn't implement unary_op, or didn't
handle MP_UNARY_OP_BOOL, would raise TypeError on bool conversion.
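A hypothetical illustration (the choice of native base type here is only an
assumption about an affected case):
    # A user subclass of a native type whose unary_op does not handle
    # MP_UNARY_OP_BOOL.
    class MyExc(Exception):
        pass

    # Previously this could raise TypeError; with this fix it falls back to
    # the default truthiness and prints True.
    print(bool(MyExc()))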
Fixes #5677.
Replace "master" with "controller" and "slave" with "peripheral" in
comments, errors, and debug messages.
Add CONTROLLER and PERIPHERAL constants to pyb.SPI and pyb.I2C classes;
retain MASTER and SLAVE constants for backward compatibility.
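A hedged sketch of the new constants alongside the retained ones (the bus
number and baudrate are illustrative):
    from pyb import SPI

    # New preferred spelling ...
    spi = SPI(1, SPI.CONTROLLER, baudrate=600000)
    # ... and the old constant still works for backward compatibility.
    spi = SPI(1, SPI.MASTER, baudrate=600000)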
The MP_OBJ_STOP_ITERATION optimisation is a shortcut for creating a
StopIteration() exception object, and means that heap memory does not need
to be allocated for the exception (in cases where it can be used). This
commit allows this optimised object to take an optional argument (before,
it could only have no argument).
The commit also adds some new tests to cover corner cases with
StopIteration and generators that previously did not work.
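For example, a generator's return value is carried by StopIteration, and with
this change the optimised object can carry it without allocating on the heap
(the optimisation itself is internal; this just shows the visible behaviour):
    def gen():
        yield 1
        return 42                    # return value travels in StopIteration

    g = gen()
    print(next(g))                   # 1
    try:
        next(g)
    except StopIteration as er:
        print(er.args)               # (42,)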
Signed-off-by: Damien George <damien@micropython.org>
This test snuck through without proper formatting and is causing CI for
other unrelated changes to fail.
Signed-off-by: David Lechner <david@pybricks.com>
Currently when using uasyncio.start_server() the socket configuration is
done inside a uasyncio.create_task() background function. If the address
and port are already in use, however, this throws an OSError which cannot be
cleanly caught behind the create_task().
This commit moves the getaddrinfo and socket binding to the start_server()
function, and only creates the task if that succeeds. This means that any
OSError from the initial socket configuration is propagated directly up the
call stack, compatible with CPython behaviour.
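With this change, code like the following can catch the binding error
directly (a hedged sketch; the handler, address and port are illustrative):
    import uasyncio as asyncio

    async def handler(reader, writer):
        writer.close()

    async def main():
        try:
            await asyncio.start_server(handler, "0.0.0.0", 8080)
        except OSError:
            # The getaddrinfo/bind error now propagates here directly.
            print("address/port already in use")
            return
        await asyncio.sleep(60)

    asyncio.run(main())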
See #7444.
Signed-off-by: Damien George <damien@micropython.org>
2.5 can be represented correctly in object representation C, but 2.3 cannot
(it is slightly truncated).
Signed-off-by: Damien George <damien@micropython.org>
This commit fixes a problem with a race between cancellation of task A and
completion of task B, when A waits on B. If task B completes just before
task A is cancelled then the cancellation of A does not work. Instead,
the CancelledError meant to cancel A gets passed through to B (that's
expected behaviour) but B handles it as a "Task exception wasn't retrieved"
scenario, printing out such a message (this is because finished tasks point
their "coro" attribute to themselves to indicate they are done, and
implement the throw() method, but that method inadvertently catches the
CancelledError). The correct behaviour is for B to bounce that
CancelledError back out.
This bug is mainly seen when wait_for() is used, and in that context the
symptoms are:
- occurs when using wait_for(T, S), if the task T being waited on finishes
at exactly the same time as the wait-for timeout S expires
- task T will have run to completion
- the "Task exception wasn't retrieved message" is printed with
"<class 'CancelledError'>" as the error (ie no traceback)
- the wait_for(T, S) call never returns (it's never put back on the
uasyncio run queue) and all tasks waiting on this are blocked forever
from running
- uasyncio otherwise continues to function and other tasks continue to be
scheduled as normal
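A hedged sketch of the pattern involved (whether the race actually triggers
depends on the task completion and the timeout coinciding exactly):
    import uasyncio as asyncio

    async def worker():
        await asyncio.sleep(1)       # finishes right as the timeout expires

    async def main():
        try:
            await asyncio.wait_for(worker(), 1)
        except asyncio.TimeoutError:
            pass
        # Previously, when completion and timeout coincided, this line could
        # never be reached because wait_for never returned.
        print("wait_for finished")

    asyncio.run(main())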
The fix here reworks the "waiting" attribute of Task to be called "state"
and uses it to indicate whether a task is: running and not awaited on,
running and awaited on, finished and not awaited on, or finished and
awaited on. This means the task does not need to point "coro" to itself to
indicate finished, and also allows removal of the throw() method.
A benefit of this is that "Task exception wasn't retrieved" messages can go
back to being able to print the name of the coroutine function.
Fixes issue #7386.
Signed-off-by: Damien George <damien@micropython.org>
With docs and a multi-test using TCP server/client.
This method is a MicroPython extension, although there is discussion of
adding it to CPython: https://bugs.python.org/issue41305
Signed-off-by: Mike Teachman <mike.teachman@gmail.com>
The random module's getrandbits() method didn't give a proper error message
when called with a value outside the range 1-32, which could lead to
confusion when using this function (under CPython it can accept numbers
larger than 32). Now, instead of simply giving a bare ValueError, the error
message states that the number of bits is constrained.
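For example (the exact message wording may differ):
    import random

    print(random.getrandbits(8))     # OK: returns a number in range(256)
    try:
        random.getrandbits(33)       # outside the supported range of 1-32
    except ValueError as er:
        print(er)                    # message now mentions the bits constraint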
Also, since the random module's getrandbits() and randint() functions differ
from CPython, tests have been added to describe these differences, and the
relevant notes have been added to the docs for both methods so that the
information is more easily found.
Finally, since the int object lacks the bit_length() method, a test covering
that is also included in the docs, showing the difference to CPython.
If digest() is called then the hash object is put into a "final" state, and
calling update() or digest() again will raise a ValueError (instead of
silently producing the wrong result).
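For example (a minimal sketch):
    import hashlib

    h = hashlib.sha256()
    h.update(b"abc")
    print(h.digest())
    try:
        h.update(b"def")             # the hash is finalised; this now raises
    except ValueError as er:
        print("ValueError:", er)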
See issue #4119.
Signed-off-by: Damien George <damien@micropython.org>
MicroPython does not store any reference from a function object to the
module it was defined in, but there is a way to use function.__globals__ to
indirectly get the module.
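A minimal illustration (the module `foo` and function `bar` are hypothetical
names used only for this sketch):
    import foo

    # The function's __globals__ dict is the globals of the module it was
    # defined in, so the module can be identified indirectly, e.g. by name.
    print(foo.bar.__globals__["__name__"])   # prints: foo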
See issue #7259.
Signed-off-by: Damien George <damien@micropython.org>
This significantly reduces the time taken to run the test suite (on the
unix port). Use `-j1` to disable this feature.
Signed-off-by: Jeff Epler <jepler@gmail.com>
Array equality is defined as each element being equal, but to keep
code size down MicroPython implements it as a binary comparison. This
can only be used correctly for elements with the same binary layout,
so raise NotImplementedError when comparing types for which the binary
comparison would yield incorrect results: types with different sizes,
and floating-point numbers (because nan != nan).
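A hedged illustration of the comparisons affected, per the behaviour
described above:
    from array import array

    # Same element layout: the binary comparison is valid.
    print(array('i', [1]) == array('i', [1]))    # True
    try:
        array('b', [1]) == array('i', [1])       # different element sizes
    except NotImplementedError:
        print("comparison of differing layouts is not supported")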
The RP2040 has 2 cores and supports running at most 2 Python threads (the
main one plus another), and will raise OSError if a thread cannot be
created because core1 is already in use. This commit adjusts some thread
tests to be robust against such OSErrors.  These tests now pass on rp2
boards.
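A hedged sketch of the pattern the adjusted tests use (exact handling may
differ per test):
    import _thread

    def work():
        pass

    try:
        _thread.start_new_thread(work, ())
    except OSError:
        # On rp2, core1 may already be in use so a second thread cannot be
        # created; the adjusted tests tolerate this instead of failing.
        print("could not create thread")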
Signed-off-by: Damien George <damien@micropython.org>
This commit makes gc_lock_depth have one counter per thread, instead of one
global counter. This makes threads properly independent with respect to
the GC, in particular threads can now independently lock the GC for
themselves without locking it for other threads. It also means a given
thread can run a hard IRQ without temporarily locking the GC for all other
threads and potentially making them have MemoryError exceptions at random
locations (this really only occurs on MCUs with multiple cores and no GIL,
eg on the rp2 port).
The commit also removes protection of the GC lock/unlock functions, which
is no longer needed when the counter is per thread (and this also fixes the
case where a hard IRQ calling gc_lock() may stall waiting for the mutex).
It also puts the check for `gc_lock_depth > 0` outside the GC mutex in
gc_alloc, gc_realloc and gc_free, to potentially prevent a hard IRQ from
waiting on a mutex if it does attempt to allocate heap memory (and putting
the check outside the GC mutex is safe now that there is a
gc_lock_depth per thread).
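Assuming the per-thread lock depth is visible at the Python level through
micropython.heap_lock()/heap_unlock() (an assumption for this sketch), the
new behaviour can be pictured as:
    import _thread, micropython, time

    def other_thread():
        time.sleep(1)                # let the main thread lock its heap first
        # gc_lock_depth is now per-thread, so this thread can still allocate
        # even though the main thread has locked the GC for itself.
        buf = bytearray(100)
        print("allocated", len(buf), "bytes in the other thread")

    _thread.start_new_thread(other_thread, ())
    micropython.heap_lock()          # locks allocation for this thread only
    time.sleep(2)
    micropython.heap_unlock()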
Signed-off-by: Damien George <damien@micropython.org>
So a lock can be acquired on one Python thread and then released on
another. A test for this is added.
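For example (a minimal sketch):
    import _thread, time

    lock = _thread.allocate_lock()
    lock.acquire()                   # acquired on the main thread

    def releaser():
        lock.release()               # released on a different thread

    _thread.start_new_thread(releaser, ())
    time.sleep(1)
    print("released on another thread:", not lock.locked())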
Signed-off-by: Damien George <damien@micropython.org>
uctypes.FLOAT32 has a special value representation and
uctypes_struct_scalar_size() should be used instead of GET_SCALAR_SIZE().
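A minimal illustration of the kind of layout affected (field names and
offsets are illustrative):
    import uctypes

    desc = {
        "f32": uctypes.FLOAT32 | 0,  # FLOAT32 has a special value representation
        "u32": uctypes.UINT32 | 4,
    }
    buf = bytearray(8)
    s = uctypes.struct(uctypes.addressof(buf), desc, uctypes.LITTLE_ENDIAN)
    s.f32 = 2.5
    print(s.f32, s.u32)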
Signed-off-by: Damien George <damien@micropython.org>