Changes are:
- use ssl.SSLContext.wrap_socket instead of ssl.wrap_socket
- disable check_hostname and call load_default_certs() where appropriate,
to get CPython to run the tests correctly
- pass socket.AF_INET to getaddrinfo and socket.socket(), to force IPv4
- change tests to use github.com instead of google.com, because certificate
validation was failing with google.com
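A minimal sketch of the resulting connection pattern (host, port and the
hasattr guards here are illustrative; the actual tests differ in detail):

    import socket
    import ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    if hasattr(ctx, "check_hostname"):
        ctx.check_hostname = False      # relaxed so the same test runs under CPython
    if hasattr(ctx, "load_default_certs"):
        ctx.load_default_certs()        # CPython needs CA roots loaded explicitly
    addr = socket.getaddrinfo("github.com", 443, socket.AF_INET)[0][-1]
    s = socket.socket(socket.AF_INET)   # AF_INET forces IPv4
    s.connect(addr)
    s = ctx.wrap_socket(s, server_hostname="github.com")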
Signed-off-by: Damien George <damien@micropython.org>
And only enable this method when the relevant feature is available in
mbedtls. Otherwise, if mbedtls doesn't support getting the peer
certificate, this method always returns None and it's confusing why it does
that. It's better to remove the method altogether, so the error trying to
use it is more obvious.
Signed-off-by: Damien George <damien@micropython.org>
This commit adds:
1) Methods to SSLContext class that match CPython signature:
- `SSLContext.load_cert_chain(certfile, keyfile)`
- `SSLContext.load_verify_locations(cafile=, cadata=)`
- `SSLContext.get_ciphers()` --> ["CIPHERSUITE"]
- `SSLContext.set_ciphers(["CIPHERSUITE"])`
2) `sslsocket.cipher()` to get current ciphersuite and protocol
version.
3) `ssl.MBEDTLS_VERSION` string constant.
4) Certificate verification error information, instead of just
`MBEDTLS_ERR_X509_CERT_VERIFY_FAILED`.
5) Tests in `net_inet` and `multi_net` to test these new methods.
The `SSLContext.load_cert_chain` method allows loading the key and cert from
disk by passing a filepath in the `certfile` or `keyfile` options. The
`cafile` option of `SSLContext.load_verify_locations` enables the same
functionality for CA files.
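A hedged sketch of how the new methods fit together (file names and the
ciphersuite are placeholders):

    import ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.load_cert_chain("client_cert.der", "client_key.der")  # cert/key from disk
    ctx.load_verify_locations(cafile="ca_cert.der")           # or cadata=<bytes/str>
    print(ctx.get_ciphers())       # e.g. ["TLS-ECDHE-ECDSA-WITH-AES-128-GCM-SHA256", ...]
    ctx.set_ciphers(["TLS-ECDHE-ECDSA-WITH-AES-128-GCM-SHA256"])
    print(ssl.MBEDTLS_VERSION)     # e.g. "Mbed TLS 3.x.x"
    # After wrapping and connecting a socket, sslsocket.cipher() reports the
    # negotiated ciphersuite and protocol version.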
Signed-off-by: Carlos Gil <carlosgilglez@gmail.com>
Otherwise passing in a non-integer can lead to an invalid memory access.
Thanks to Junwha Hong and Wonil Jang @S2Lab, UNIST for finding the issue.
Fixes issue #13007.
Signed-off-by: Damien George <damien@micropython.org>
Set the position of new line tokens as the end of the preceding line
instead of the beginning of the next line. This is done by first moving
the pointer to the end of the current line to skip any whitespace, record
the position for the token, then finally skip any further new lines and
whitespace.
The previous behavior was to skip every new line and whitespace, including
the indent of the next line, before recording the token position.
(Note that both lex->emit_dent and lex->nested_bracket_level equal 0 if
had_physical_newline == true, which allows simplifying the if-logic for
MP_TOKEN_NEWLINE.)
And update the cmd_parsetree.py test expected output, because the position
of the new-line token has changed.
Fixes issue #12792.
Signed-off-by: Mathieu Serandour <mathieu.serandour@numworks.fr>
This updates a small number of files that change with ruff-format's (vs
black's) rules.
This work was funded through GitHub Sponsors.
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
This test depends on the order in which qstrs are stored in ROM, which
affects the order in which `dir()` will probe the object to see what it
supports. Because of the lazy-loading in asyncio/__init__.py, if it
tries to look up e.g. `wait_for_ms` before `funcs`, then it will import funcs,
making the later `funcs` lookup succeed. But the other way around, `funcs`
will initially not be found.
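For context, a simplified sketch of that lazy-loading hook (abridged from the
shape of asyncio/__init__.py; the real attribute table is longer):

    _attrs = {
        "wait_for": "funcs",
        "wait_for_ms": "funcs",
        "gather": "funcs",
    }

    def __getattr__(attr):
        mod = _attrs.get(attr, None)
        if mod is None:
            raise AttributeError(attr)
        value = getattr(__import__(mod, None, None, True, 1), attr)
        globals()[attr] = value  # cache it, so later probes find it directly
        return value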
This work was funded through GitHub Sponsors.
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
These tests are designed to measure changes in performance relating to:
- string interning / searching for existing strings
- map lookup
- string operations
- string hashing
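A rough sketch (not the actual perf_bench harness) of the kind of workload
these measurements target:

    def work(n):
        d = {}
        for i in range(n):
            key = "item_" + str(i)  # string ops, search for existing strings
            d[key] = i              # map insert, string hashing
        total = 0
        for i in range(n):
            total += d["item_" + str(i)]  # map lookup
        return total

    work(1000)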
This work was funded through GitHub Sponsors.
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
The unwritten API contract expected of a VFS.getcwd() by mp_vfs_getcwd()
is that its return value should be either "" or "/" when the CWD is at
the root of the VFS and otherwise start with a slash and not end with a
slash. This was not correctly implemented in VfsPosix for instances with
a non-empty root - the required leading slash, if any, was cut off
because the root length includes a trailing slash. This would result in
missing slashes in the middle of the return value of os.getcwd() or in
uninitialized garbage from beyond a string's null terminator when the
CWD was at the VFS root.
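An illustration of that contract, assuming the unix port's `os.VfsPosix` and a
hypothetical pre-existing directory as the root:

    import os

    fs = os.VfsPosix("/tmp/vfsroot")  # instance with a non-empty root
    os.mount(fs, "/data")
    os.chdir("/data")
    print(os.getcwd())    # "/data" - here fs.getcwd() returns "" or "/"
    os.mkdir("sub")
    os.chdir("/data/sub")
    print(os.getcwd())    # "/data/sub" - here fs.getcwd() returns "/sub"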
Signed-off-by: Christian Walther <cwalther@gmx.ch>
The unwritten API contract expected of a VFS by mp_vfs_lookup_path() is
that paths passed in are relative to the root of the VFS if they start
with '/' and relative to the current directory of the VFS otherwise.
This was not correctly implemented in VfsPosix for instances with a
non-empty root - all paths were interpreted relative to the root. Fix
that. Since VfsPosix tracks its CWD using the "external" CWD of the Unix
process, the correct handling for relative paths is to pass them through
unmodified.
Also, when concatenating absolute paths, fix an off-by-one resulting in
a harmless double slash (the root path already has a trailing slash).
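A sketch of the contract in terms of what the VFS methods receive (again
assuming `os.VfsPosix` on the unix port and a hypothetical root directory):

    import os

    fs = os.VfsPosix("/tmp/vfsroot")  # hypothetical pre-existing directory
    os.mount(fs, "/data")
    os.chdir("/data")
    os.mkdir("/data/abs")   # fs.mkdir() receives "/abs" - relative to the VFS root
    os.mkdir("rel")         # fs.mkdir() receives "rel" - relative to the VFS CWD,
                            # and is now passed through to the OS unmodified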
Signed-off-by: Christian Walther <cwalther@gmx.ch>
These tests exercise an unrealistic situation and only pass by accident due
to a bug. The upcoming fix for the bug would make them fail.
The unrealistic situation is that VfsPosix methods are called with
relative paths while the current working directory is somewhere outside
of the root of the VFS. In the intended use of VFS objects via
os.mount() (as opposed to calling methods directly as the tests do),
this never happens, as mp_vfs_lookup_path() directs incoming calls to
the VFS that contains the CWD.
Make the testing situation realistic by changing the working directory
to the root of the VFS before calling methods on it, as the subsequent
relative path accesses expect.
Thanks to the preceding commit, the tests still pass, although for the
wrong reason. The following commit "Fix relative paths on non-root VFS"
will make them pass for the correct reason.
Signed-off-by: Christian Walther <cwalther@gmx.ch>
A VfsPosix created with a relative root path would get confused when
chdir() was called on it and become unable to properly resolve absolute
paths, because changing directories effectively shifted its root. The
simplest fix for that would be to say "don't do that", but since the
unit tests themselves do it, fix it by making a relative path absolute
before storing it.
Signed-off-by: Christian Walther <cwalther@gmx.ch>
This wasn't correctly accounting for the bits-per-pixel and was returning a
bufinfo struct with the incorrect length. Instead, just forward directly
to the underlying buffer object.
Fixes issue #12563.
This work was funded through GitHub Sponsors.
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
This ensures that the buffer is large enough for the specified width,
height, bits-per-pixel, and stride.
Also makes the legacy FrameBuffer1 constructor re-use the FrameBuffer
make_new to save some code size.
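For example (a hedged sketch; the exact error raised for an undersized buffer
may differ):

    import framebuf

    width, height = 128, 64
    buf = bytearray(width * height // 8)   # MONO_HLSB needs 1 bit per pixel
    fb = framebuf.FrameBuffer(buf, width, height, framebuf.MONO_HLSB)

    # With the new check an undersized buffer is rejected up front:
    # framebuf.FrameBuffer(bytearray(8), width, height, framebuf.MONO_HLSB)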
Fixes issue #12562.
This work was funded through GitHub Sponsors.
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
This commit adds tests for bound method comparison and hashing to support
the changes in the previous commit.
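The behaviour being covered matches CPython, e.g.:

    class A:
        def m(self):
            pass

    a = A()
    b = A()
    print(a.m == a.m)              # True: same function bound to the same instance
    print(a.m == b.m)              # False: bound to different instances
    print(hash(a.m) == hash(a.m))  # True: equal bound methods hash equally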
Signed-off-by: Ned Konz <ned@productcreationstudio.com>
If a non-string buffer was passed to execfile, then it would be passed
as a non-null-terminated char* to mp_lexer_new_from_file.
This changes mp_lexer_new_from_file to take a qstr instead (as in almost
all cases a qstr will be created from this input anyway to set the
`__file__` attribute on the module).
This now makes execfile require a string (not generic buffer) argument,
which is probably a good fix to make anyway.
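A sketch of the new behaviour on the unix port (`execfile` is a MicroPython
extension, and the filename here is a placeholder):

    execfile("hello.py")      # OK: a str path, interned as a qstr for __file__
    # execfile(b"hello.py")   # now raises TypeError instead of passing a
                              # non-null-terminated buffer to the lexer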
Fixes issue #12522.
This work was funded through GitHub Sponsors.
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
"Raise SomeException() from None" is a common Python idiom to suppress
chained exceptions and thus shouldn't trigger a warning on a version of
Python that doesn't support them in the first place.
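For reference, the idiom in question:

    def to_int(s):
        try:
            return int(s)
        except ValueError:
            # "from None" deliberately suppresses the chained-exception context.
            raise TypeError("expected a number, got %r" % (s,)) from None

    print(to_int("42"))   # 42
    # to_int("abc") raises a bare TypeError, without "During handling of..." context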
The following code:
    server = await asyncio.start_server(...)
    async with server:
        ... code that raises ...
would lose the original exception because the server's task would not have
had a chance to be scheduled yet, and so awaiting the task in wait_closed
would raise the cancellation instead of the original exception.
Additionally, this ensures that explicitly cancelling the parent task delivers
the cancellation correctly: previously it was masked by the server loop, but
now it is only masked if the server was closed, not when the task was
cancelled.
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
1. Remove the skip for detecting support for polling user-defined objects
as this is always possible now on all ports.
2. Don't print when the scheduled task runs as the ordering of this
relative to the other prints is dependent on other factors (e.g. if
using the native emitter).
This work was funded through GitHub Sponsors.
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
Update the expired certificate, increase its validity period to five years,
and fix typos in the command arguments given in the comments.
Signed-off-by: Carlos Gil <carlosgilglez@gmail.com>
The PYBD_SF2 is right on the limit of being able to run this test and so
it succeeds the first two cases and fails the next two with MemoryError.
This causes it to SKIP, but that only works if it's the first thing
printed. So reverse the order of the tests so it fails on the biggest
one first.
This work was funded through GitHub Sponsors.
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
Increase asyncio tests timeouts to account for different WiFi modules and
CPU clocks on different boards.
Signed-off-by: iabdalkader <i.abdalkader@gmail.com>
In CPython, `_thread.start_new_thread()` returns an ID that is the same ID
that is returned by `_thread.get_ident()`. The current MicroPython
implementation of `_thread.start_new_thread()` always returns `None`.
This modifies the required functions to return a value. The native thread
id is returned since this can be used for interop with other functions, for
example, `pthread_kill()` on *nix. `_thread.get_ident()` is also modified
to return the native thread id so that the values match and avoids the need
for a separate `native_id` attribute.
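A small example of the resulting behaviour:

    import _thread
    import time

    def worker():
        print("ident inside thread:", _thread.get_ident())

    tid = _thread.start_new_thread(worker, ())
    print("returned thread id:  ", tid)   # same native id that get_ident() reports
    time.sleep(1)                         # give the worker a chance to run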
Fixes issue #12153.
Signed-off-by: David Lechner <david@pybricks.com>
Add the buffer attribute to sys.stdin, sys.stdout and sys.stderr. This
provides raw access to underlying stdio streams for the unix port (and
others that use VfsPosix).
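For example:

    import sys

    sys.stdout.buffer.write(b"\xde\xad\xbe\xef\n")  # raw bytes, bypassing text encoding
    data = sys.stdin.buffer.read(4)                 # raw bytes straight from stdin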
Signed-off-by: stephanelsmith <stephane.smith@titansensor.com>
Because mpy_ld.py doesn't know the target object representation, it emits
instances of `MP_OBJ_NEW_QSTR(MP_QSTR_Foo)` as const string objects, rather
than qstrs. However this doesn't work for map keys (e.g. for a locals dict)
because the map's all_keys_are_qstrs flag is set (and also auto-complete
requires the map keys to be qstrs).
Instead, emit them as regular qstrs, and make a functioning MP_OBJ_NEW_QSTR
function available (via `native_to_obj`, also used for e.g. making
integers).
Remove the code from mpy_ld.py to emit qstrs as constant strings, but leave
behind the scaffolding to emit constant objects in case we want to use this
in the future.
Strictly this should be a .mpy sub-version bump: even though the function
table isn't changing, it does lead to a change in behavior for a new .mpy
running against old MicroPython, where `mp_native_to_obj` will incorrectly
return the qstr value directly as an `mp_obj_t`, leading to unexpected
results. But given that it's broken at the moment, it seems unlikely that
anyone is relying on this, so it's not worth the other downsides of a
sub-version bump (i.e. breaking pure-Python modules that use @native). The
opposite case of
running an old .mpy on new MicroPython is unchanged, and remains broken in
exactly the same way.
This work was funded through GitHub Sponsors.
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
This adds support for the x format code in struct.pack and struct.unpack.
The primary use case for this is ignoring bytes while unpacking. When
interfacing with existing systems, it may often happen that you either have
fields in a struct that aren't properly specified or you simply don't care
about them. Being able to easily skip them is useful.
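For example, skipping two uninteresting bytes between fields:

    import struct

    fmt = "<Hxxb"                       # uint16, two pad/ignore bytes, int8
    data = struct.pack(fmt, 500, -3)    # pad bytes are packed as zeros
    print(data)                         # b'\xf4\x01\x00\x00\xfd'
    print(struct.unpack(fmt, data))     # (500, -3): the padding is skipped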
Signed-off-by: Daniël van de Giessen <daniel@dvdgiessen.nl>
Eliminate `TypeError` when format string contains no named conversions.
This matches CPython behavior.
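Assuming this refers to %-formatting with a mapping on the right-hand side, the
CPython behaviour being matched is, illustratively:

    print("no named fields here" % {"unused": 1})  # prints the string; no TypeError
    print("%(a)s and %(b)s" % {"a": 1, "b": 2})    # named conversions unchanged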
Signed-off-by: mcskatkat <mc_skatkat@hotmail.com>
The "true" command by default is unavailable on windows so use
an equivalent which works on both unix and windows.
Signed-off-by: stijn <stijn@ignitron.net>
The only reason that const had to be disabled was to make the test output
match CPython when const was involved. Instead, this commit fixes the test
to handle the lines where const is used.
Also:
- remove the special handling for MICROPY_PERSISTENT_CODE_SAVE in
unix/mpconfigport.h, and make this automatic.
- move the check for MICROPY_PERSISTENT_CODE_SAVE to where it's used (like
we do for other similar checks) and add a comment explaining it.
This work was funded through GitHub Sponsors.
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
This test was failing on CPython 3.11 as it now emits `0` as the line
number for the "call" event corresponding to import, whereas in 3.6 it had
`1` as the line number.
We maintain the old behavior, but in order to make this test pass on both
CPython versions, the trace handler now converts the `0` to a `1`.
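A sketch of that normalisation in a trace handler (not the test's exact code):

    import sys

    def trace(frame, event, arg):
        line = frame.f_lineno or 1   # map CPython 3.11's 0 back to the old value 1
        print(event, line)
        return trace

    sys.settrace(trace)
    def f():
        return 42
    f()
    sys.settrace(None)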
This work was funded through GitHub Sponsors.
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>