* Use the same "proxy rejected" error message for all Proxy invariant rejections.
* #ifdef guard for proxy policy checks.
* Test coverage for 'preventExtensions' trap.
* Convert many duk_hobject.h macro calls into function calls.
* Add functions to interact with idx_props allocation.
* Compile warning fixes.
* Remove lazy charlen support. Since we need to WTF-8 sanitize the entire
input string, charlen can be computed while validating (avoiding extra
book-keeping for ASCII eventually).
* Improve WTF-8 search forwards/backwards performance (no substring operations)
when the search string is valid UTF-8. Use the reference implementation for
the non-UTF-8 case for now; to be optimized later.
* Minor testcase improvements.
While setting up an environment for running the catch clause, the
bytecode executor could trigger a memory allocation and thus a GC
while still in the "error handling" state.
Fix this by splitting catch clause handling into two parts (sketched
below):
- Part 1: unwind activation(s) and catcher(s) until the desired
catcher is found (this should be side effect free). Then write
the error value and longjmp type to the value stack (at idx_base),
and set a calling scope flag to indicate we need to run part 2 too.
Finally, deactivate the "catch active" flag of the catcher.
- Part 2: runs after we're out of error handling and inside a new
setjmp. Set up the environment needed to run the catch clause and
execute it with the bytecode executor. If an error (such as out
of memory) happens here, it propagates out (or into a finally
clause) because the catcher no longer has "catch active" set.
It's as if the throw came from inside the catch code.
(Part 2 is only needed if a catch clause needs an environment for
the catch variable; currently this is always the case.)
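As a rough, standalone illustration of the two-phase pattern (names,
fields, and control flow below are hypothetical simplifications, not
Duktape's actual executor code): part 1 only flips flags inside the
longjmp handler, while part 2 runs under a fresh setjmp so any error it
raises simply propagates.

    #include <setjmp.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical, condensed state; the real flags live in duk_hthread,
     * the activation and the catcher. */
    typedef struct {
        jmp_buf jb;
        int catch_active;       /* catcher still handles errors */
        int need_catch_part2;   /* "calling scope" flag set by part 1 */
    } exec_state;

    /* Part 1: runs in "error handling" state, so it must be side effect
     * free (no allocation, nothing that could trigger GC). */
    static void error_handling_part1(exec_state *st) {
        if (st->catch_active) {
            st->catch_active = 0;      /* further errors bypass this catcher */
            st->need_catch_part2 = 1;  /* finish up outside error handling */
        }
        /* else: the error keeps propagating (e.g. to a finally clause) */
    }

    /* Part 2: runs under a fresh setjmp; an allocation failure here just
     * longjmps again and behaves like a throw from inside the catch body. */
    static void catch_setup_part2(exec_state *st) {
        st->need_catch_part2 = 0;
        void *env = malloc(64);        /* stand-in for the catch variable env */
        if (env == NULL) {
            longjmp(st->jb, 1);
        }
        printf("catch clause would execute here\n");
        free(env);
    }

    int main(void) {
        static exec_state st;          /* static: safe across longjmp */
        st.catch_active = 1;
        st.need_catch_part2 = 0;
        if (setjmp(st.jb) != 0) {
            error_handling_part1(&st);
        }
        if (st.need_catch_part2) {
            catch_setup_part2(&st);
        }
        return 0;
    }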
Also add a sanity check to mark-and-sweep object compaction: in case
mark-and-sweep was called inside error handling (which should ideally
never happen), refuse to compact objects, because compaction causes
side effects.
Also rename debug format code %!C to %!X, and add %!C (duk_catcher *)
and %!A (duk_activation *) format codes so that activations and
catchers can be debug printed conveniently.
* Rework duk_heaphdr and subclass assertions into functions to
reduce debug build size.
* Add explicit object validity assert passes to mark-and-sweep.
This allows detection of invalid internal structures especially
when used with GC torture.
* Rename assertion macros for consistency, e.g. from
DUK_ASSERT_HSTRING_VALID to DUK_HSTRING_ASSERT_VALID.
* Invalidate the literal cache on every mark-and-sweep round, and allow
collection of pinned literals. This keeps the literal cache simple,
with no invalidation mechanism needed in normal operation (between
mark-and-sweep rounds), while still allowing eventual collection of
pinned literals if they stop occurring.
* Log level fixes.
* Automatically pin C literals interned into heap strings; if a literal
maps to an already interned string, pin that string too. Pinning is
implemented using a duk_hstring flag and a one-off refcount bump.
Mark-and-sweep avoids sweeping pinned strings based on the flag.
* Add a lookup cache for quickly mapping a C literal address (which is
assumed stable) into a duk_hstring pointer. Once a mapping has been
formed, it never needs to be invalidated because the duk_hstring is
always pinned if the cache is used. Only heap destruction will free
the pinned duk_hstrings.
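A rough sketch of the idea, assuming a simple direct-mapped table; the
names, the table size and the intern_and_pin() helper below are made up
for illustration and are not Duktape's internals:

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    #define LITCACHE_SIZE 256u            /* illustrative; a power of two */

    typedef struct {
        const char *addr;                 /* C literal address, assumed stable */
        void *hstr;                       /* pinned interned string
                                           * (duk_hstring * in practice) */
    } litcache_entry;

    static litcache_entry litcache[LITCACHE_SIZE];

    /* Stand-in for "intern into the string table and pin the result". */
    static void *intern_and_pin(const char *lit, size_t len) {
        char *copy = (char *) malloc(len + 1);
        if (copy != NULL) {
            memcpy(copy, lit, len + 1);
        }
        return copy;
    }

    static unsigned int litcache_slot(const char *p) {
        /* Literal addresses share low bits due to alignment, so shift a
         * little before masking. */
        return (unsigned int) (((uintptr_t) p >> 4) & (LITCACHE_SIZE - 1u));
    }

    void *lookup_literal(const char *lit, size_t len) {
        litcache_entry *e = &litcache[litcache_slot(lit)];
        if (e->addr == lit) {
            return e->hstr;               /* hit: pinned, never invalidated */
        }
        /* Miss (or slot reused by another literal): refresh the mapping.
         * The previously cached string stays pinned until heap destruction. */
        e->addr = lit;
        e->hstr = intern_and_pin(lit, len);
        return e->hstr;
    }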
* More internal call site conversion for literals.
* Wording trivia.
* Change duk_bool_t from duk_small_int_t to duk_small_uint_t. This may
cause some sign warnings in calling code.
* Reject an attempt to unpack an array-like value whose length is 2G or over;
previously this was not checked explicitly, the length was cast to duk_idx_t
with a sign change, and the unpack would then fail later. Now it fails with
a clean RangeError.
* Add wrap check for Node.js Buffer.concat().
* API DUK_TYPE_xxx, DUK_TYPE_MASK_xxx, flag constants etc are now unsigned.
Both the duk_hthread and duk_context typedefs resolve to struct
duk_hthread internally. In the external API duk_context resolves to
struct duk_hthread, which is intentionally left undefined because the
struct itself is never dereferenced. Change internal code to use
duk_hthread exclusively, which removes unnecessary and awkward
thr <-> ctx casts from the internals.
The basic guidelines are (see the sketch after this list):
* Public API uses duk_context in prototype declarations. The intent is to
hide the internal type, and there's already a wide dependency on the
type name.
* All internal code, both declarations and definitions, use duk_hthread
exclusively. This is done even for API functions, i.e. an API function
declared as "void duk_foo(duk_context *ctx);" is then defined as
"void duk_foo(duk_hthread *thr);".
* Make value stack and call stack limits configurable via DUK_USE_xxx
options. Also make value stack grow/shrink constants configurable.
* Rewrite value stack grow/shrink check primitives for better hot/cold path
handling.
* Use a proportional spare for grow and shrink sizes so that applications
needing a large value stack have fewer value stack resizes (see the
sketch after this list).
* Grow value stack allocation when entering a call or when explicitly requested
via e.g. duk_require_stack().
* Never shrink the value stack when entering a call, so that the unwind path
is guaranteed to have value stack to handle a protected call return. This
guarantee is only needed for protected call but is now applied to all calls
for simplicity.
* Don't perform a value stack shrink check on function return at all
anymore. It would be OK from a protected call semantics perspective to
attempt a shrink without throwing if it fails.
* Perform a value stack shrink check in mark-and-sweep only for now. When
emergency GC is running, shrink to a minimal size respecting current value
stack reserve.
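For illustration only, a proportional grow spare could be computed
roughly like this (the constants are placeholders, not the actual
DUK_USE_xxx option names or values):

    #include <stddef.h>

    #define VS_GROW_SPARE_MIN 256u   /* placeholder minimum absolute spare */
    #define VS_GROW_SPARE_PCT 25u    /* placeholder spare, percent of need */

    /* Compute a grow target with a proportional spare so that large value
     * stacks are resized less often.  Clamping against the configured
     * value stack maximum (and overflow checking) is left to the caller. */
    static size_t vs_grow_target(size_t needed) {
        size_t spare = (needed / 100u) * VS_GROW_SPARE_PCT;
        if (spare < VS_GROW_SPARE_MIN) {
            spare = VS_GROW_SPARE_MIN;
        }
        return needed + spare;
    }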
With ROM objects having REACHABLE always set, mark-and-sweep doesn't need a
specific ROM object check when deciding whether to mark an object or not:
the REACHABLE check causes the object not to be marked.
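In other words, the mark step can rely on a single flag test, roughly
as follows (flag and field names are illustrative):

    #include <stddef.h>

    #define SKETCH_FLAG_REACHABLE 0x01u

    typedef struct sketch_heaphdr sketch_heaphdr;
    struct sketch_heaphdr {
        unsigned int flags;
        /* ... refcount, heap links, type bits ... */
    };

    static void mark_heaphdr(sketch_heaphdr *h) {
        if (h == NULL || (h->flags & SKETCH_FLAG_REACHABLE)) {
            return;  /* already marked, or a ROM object (REACHABLE always set) */
        }
        h->flags |= SKETCH_FLAG_REACHABLE;
        /* ... recurse into the object's references ... */
    }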
Remove thr->callstack as a monolithic array and replace it with a linked list
of duk_activations. thr->callstack_curr is the current call (or NULL if no
call is in progress), and act->parent chains to a previous call or NULL.
thr->callstack_top is kept because it's needed by some internals at present;
it may be removed in the future.
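A small sketch of walking the new call stack (the parent and
callstack_curr names follow the description above; all other
duk_activation fields are elided):

    #include <stddef.h>

    /* Only the chaining field is shown; the real struct has many more. */
    typedef struct duk_activation duk_activation;
    struct duk_activation {
        duk_activation *parent;   /* caller's activation, or NULL */
        /* ... function reference, program counter, bookkeeping ... */
    };

    /* Walk from the current call towards the first one, e.g. to compute
     * the call depth. */
    static long callstack_depth(const duk_activation *callstack_curr) {
        long depth = 0;
        const duk_activation *act;
        for (act = callstack_curr; act != NULL; act = act->parent) {
            depth++;
        }
        return depth;
    }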
Tweak mark-and-sweep so that if finalizers are present (heap->finalize_list
is not NULL), rescue decisions are postponed (free decisions are not).
In concrete terms this means that objects normally rescued keep their
FINALIZED flag so that their finalizer won't be called again if the object
turns out to be unreachable in a later run.
This wasn't necessary before: finalize_list only contained unreachable
objects so nothing could point to them while we marked heap_allocated.
But when duk_push_heapptr() is allowed to push unreachable pointers
(which are pending finalization), it's possible for an object in
heap_allocated to point to an object on finalize_list, which also means
that the latter object can get a TEMPROOT flag.
* Add an h_assert_refcount field to duk_heaphdr when assertions are enabled.
* When doing mark-and-sweep, clear h_assert_refcount, perform mark-and-sweep
processing normally, and assert for correct refcounts for objects that
remain in heap_allocated after sweeping. (Refcounts for objects prior to
sweeping won't match those computed via reachability roots.)
* Improve FASTREFS asserts for refcounting and mark-and-sweep.
When the flag is set, there is either no subclass C struct for the
duk_hobject, or there is a subclass C struct but there are no references
needing DECREF/marking in the struct.
This allows DECREF and mark-and-sweep to handle duk_hobjects with less
overhead for the common cases of plain objects and arrays (and some other
less commonly collected structs like duk_hnatfunc).
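A condensed sketch of the resulting dispatch (flag and struct names are
illustrative, not the actual internal macros):

    #define SKETCH_FLAG_FASTREFS 0x01u

    typedef struct {
        unsigned int flags;
        /* ... property table, internal prototype, class number ... */
    } sketch_obj_header;

    static void mark_or_decref_object_refs(sketch_obj_header *h) {
        /* Fields shared by all duk_hobjects (property table entries,
         * internal prototype) are always processed here. */

        if (h->flags & SKETCH_FLAG_FASTREFS) {
            return;  /* no subclass C struct, or no references inside it */
        }

        /* Slow path: walk subclass-specific references, e.g. a duk_hthread's
         * value stack and callstack, or a duk_hcompfunc's constants. */
    }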
Also change the Duktape.Thread.prototype internal class from Thread to
Object: with the other changes, internal code now assumes that if an
object's class is Thread, it has the duk_hthread memory layout, which
wouldn't be the case for Duktape.Thread.prototype.
One bottleneck in refzero and mark-and-sweep handling is checking whether an
object has an own or inherited _Finalizer property. This check walked the
prototype chain and did a property lookup for every object. Because a
finalizer is usually not present, the prototype chain would almost always be
walked to completion.
Improve this behavior by:
* Adding a DUK_HOBJECT_FLAG_HAVE_FINALIZER flag. The flag is set when the
object has an own _Finalizer property with a callable value, and cleared
otherwise. The flag is *only* set by duk_set_finalizer(), so any other
means of changing the internal _Finalizer property will leave the flag out
of sync (which causes a finalizer run to be skipped).
* Adding duk_hobject_has_finalizer_fast() which checks for finalizer
existence by walking the prototype chain, but only checking the flag,
not the property table (see the sketch after this list).
* Use the fast finalizer check in refzero and mark-and-sweep.
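A minimal sketch of the fast check, assuming illustrative flag and limit
values (the real flag is DUK_HOBJECT_FLAG_HAVE_FINALIZER; everything
else below is a placeholder):

    #include <stddef.h>

    #define SKETCH_FLAG_HAVE_FINALIZER 0x01u
    #define SKETCH_PROTOTYPE_SANITY    10000L   /* placeholder loop guard */

    typedef struct sketch_hobject sketch_hobject;
    struct sketch_hobject {
        unsigned int flags;           /* holds the HAVE_FINALIZER bit */
        sketch_hobject *prototype;    /* internal prototype, or NULL */
    };

    /* Walk the prototype chain checking only the flag, never the
     * property table. */
    static int has_finalizer_fast(const sketch_hobject *h) {
        long sanity = SKETCH_PROTOTYPE_SANITY;
        while (h != NULL) {
            if (h->flags & SKETCH_FLAG_HAVE_FINALIZER) {
                return 1;
            }
            if (--sanity < 0) {
                return 0;   /* chain too long or looped: give up */
            }
            h = h->prototype;
        }
        return 0;
    }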
Out-of-sync cases:
* If the flag is set but there is no actual finalizer, the object will go
through finalizer processing when garbage collecting. This is harmless:
the finalizer call will fail and the object will be garbage collected, but
with some potential delay (especially for mark-and-sweep).
* If the flag is cleared but there is an actual finalizer, the finalizer will
be ignored.
Related changes:
* When duk_dump_function() is called, zero DUK_HOBJECT_FLAG_HAVE_FINALIZER on
serialization, so it won't be set when the function is loaded back. If this
is not done, the loaded function will (harmlessly) go through finalizer
processing when garbage collected.
* Update debugger artificial properties to include "have_finalizer" flag.
Other changes:
* A few DUK_UNLIKELY() attributes for prototype sanity limits which are
almost never hit.
* Replace the two alternative algorithms with a single one which works for
both desktop and low memory cases.
* The basic algorithm is a hash table with a size of 2^N; the hash mask
is simply (size - 1), e.g. if the size is 0x100, the mask is 0xFF.
duk_hstring has a 'next' pointer (singly linked list) for chaining
strings that map to the same slot (sketched below).
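A sketch of a lookup under that scheme (the entry layout is simplified;
the real table chains duk_hstring values directly):

    #include <stddef.h>
    #include <string.h>

    /* Simplified stand-in for duk_hstring: hash, data, length, chain link. */
    typedef struct strentry strentry;
    struct strentry {
        unsigned int hash;
        const char *data;
        size_t len;
        strentry *next;               /* next string hashing to the same slot */
    };

    typedef struct {
        strentry **slots;             /* array of 'size' chain heads */
        unsigned int size;            /* always a power of two (2^N) */
    } strtable;

    static strentry *strtable_find(const strtable *st, unsigned int hash,
                                   const char *data, size_t len) {
        unsigned int mask = st->size - 1u;    /* size 0x100 -> mask 0xFF */
        strentry *e;
        for (e = st->slots[hash & mask]; e != NULL; e = e->next) {
            if (e->hash == hash && e->len == len &&
                memcmp(e->data, data, len) == 0) {
                return e;
            }
        }
        return NULL;
    }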
* Change plain buffers to inherit from Uint8Array. This affects a lot of
small things like Object.prototype.toString() output, enumeration of plain
buffers, etc. It also changes JSON serialization for plain buffers because
the index properties are enumerable as with Uint8Array instances.
* Disable JSON stringify fastpath for plain buffers for now so that the
virtual index properties get serialized correctly.
* Remove ArrayBuffer non-standard virtual properties.
* Remove DataView non-standard virtual properties.
* Move .byteLength, .byteOffset, .BYTES_PER_ELEMENT, and .buffer into
inherited getters as required in ES6. However, the .length property
remains a virtual own property for now (it too is an inherited getter
in ES6).
* Move ArrayBuffer.allocPlain() and ArrayBuffer.plainOf() to
Uint8Array.allocPlain() and Uint8Array.plainOf() to match the
semantics change for plain buffers.
* Fix Node.js Buffer .slice() behavior: the returned Node.js Buffer
would have ArrayBuffer.isView() == false, which doesn't match the
revised Node.js behavior (Buffers being Uint8Array instances).
* Reject ArrayBuffers with a view offset/length in Node.js Buffer .slice()
rather than accept such ArrayBuffers without actually respecting the
view offset/length.
* Allow a plain buffer or a lightfunc as a constructor "replacement object"
return value.
These can be used whenever we're 100% certain that the value stack index
exists and the type matches the expected type. When both conditions
hold, a duk_hstring, duk_hbuffer, or duk_hobject pointer fetch can be
inlined into small code.
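For illustration, such a "known index and type" accessor can reduce to a
plain indexed read plus a debug-build assert (the tagged value layout
below is purely illustrative, not the real duk_tval):

    #include <assert.h>

    #define SKETCH_TAG_STRING 1

    /* Purely illustrative tagged value; the real duk_tval layout differs. */
    typedef struct {
        int tag;
        void *heapptr;
    } sketch_tval;

    /* Caller guarantees idx is a valid stack index holding a string, so a
     * release build compiles this down to a single indexed load. */
    static void *get_hstring_known(sketch_tval *valstack_bottom, int idx) {
        assert(valstack_bottom[idx].tag == SKETCH_TAG_STRING);
        return valstack_bottom[idx].heapptr;
    }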