Mirror of https://github.com/svaarala/duktape.git
Commit by Sami Vaarala, 8 years ago (committed via GitHub).
66 changed files with 3707 additions and 2305 deletions.
@@ -0,0 +1,11 @@
define: DUK_USE_FINALIZER_TORTURE
introduced: 2.1.0
default: false
tags:
  - gc
  - memory
  - development
  - torture
description: >
  Development time option: simulate a fake finalizer call whenever
  finalizers might be executed (even if the actual finalize_list is empty).
@@ -0,0 +1,7 @@
define: DUK_USE_INJECT_HEAP_ALLOC_ERROR
introduced: 2.1.0
default: false
tags:
  - development
description: >
  Force heap allocation to fail; the value indicates the desired error
  position.
@@ -0,0 +1,48 @@
=========================
Duktape 2.1 release notes
=========================

Release overview
================

Main changes in this release (see RELEASES.rst for full details):

* TBD.

Upgrading from Duktape 2.0
==========================

No action (other than recompiling) should be needed for most users to upgrade
from Duktape v2.0.x.  Note the following:

* The Duktape thread used for finalizer calls is now always the initial thread
  (heap_thread), for both reference counting and mark-and-sweep triggered
  finalization.  This should be taken into account in finalizer functions; in
  particular, if there are multiple global environments, finalizers will
  execute in the first global environment created for the heap.

  Prior to 2.1 the finalizer thread could also be heap_thread, but usually
  the current thread would be used.

Known issues
============

TBD.

Raw issues from test runs
=========================

API tests
---------

TBD.

Ecmascript tests
----------------

TBD.

test262
-------

TBD.
@@ -0,0 +1,533 @@
============
Side effects
============

Overview
========

Duktape is a single threaded interpreter, so when the internal C code deals
with memory allocations, pointers, and internal data structures it is safe
to assume, for example, that pointers are stable while they're being used and
that internal state and data structures are not modified simultaneously from
other threads.

However, many internal operations trigger quite extensive side effects such
as resizing the value stack (invalidating any pointers to it) or clobbering
the current heap error handling (longjmp) state.  There are a few primary
causes for the side effects, such as memory management reallocating data
structures, finalizer invocation, and Proxy trap invocation.  The primary
causes are also triggered by a lot of secondary causes.  The practical effect
is that any internal helper should be assumed to potentially invoke arbitrary
side effects unless there's a specific reason to assume otherwise.

Some of the side effects can be surprising when simply looking at calling
code, which makes side effects an error-prone element when maintaining
Duktape internals.  Incorrect call site assumptions can cause immediate
issues like segfaults, assert failures, or valgrind warnings.  But it's also
common for an incorrect assumption to work out fine in practice, only to be
triggered by rare conditions like a voluntary mark-and-sweep or an
unrecoverable out-of-memory error happening in just the right place.  Such
bugs have crept into the code base several times -- they're easy to make and
hard to catch with tests or code review.

This document describes the different side effects, how they may be
triggered, what mechanisms are in place to deal with them internally, and
how tests try to cover side effects.

Basic side effect categories
============================

Primary causes
--------------

Side effects are ultimately caused by:

* A refcount dropping to zero, causing a "refzero cascade" where a set of
  objects is refcount finalized and freed.  If any objects in the cascade
  have finalizers, the finalizer calls have a lot of side effects.  Object
  freeing itself is nearly side effect free, but it does invalidate any
  pointers to unreachable but not-yet-freed objects, which are held at
  times.

* Mark-and-sweep similarly frees objects and can make finalizer calls.
  Mark-and-sweep may also resize/compact the string table and object
  property tables.  The set of mark-and-sweep side effects is likely to
  slowly change over time (e.g. better emergency GC capabilities).

* Error throwing overwrites heap-wide error handling state and causes a
  long control transfer.  The concrete impact on a call site is that e.g.
  calling code may not be able to store/restore internal flags or counters
  if an error gets thrown.  Almost anything involving a memory allocation,
  property operation, etc. may throw.

Any operation doing a DECREF may thus have side effects.  Any operation
doing anything to cause a mark-and-sweep (like allocating memory) may
similarly have side effects.  Finalizers cause the most wide ranging side
effects, but even with finalizers disabled there are significant side
effects in mark-and-sweep.

Full side effects
-----------------

The most extensive type of side effect is arbitrary code execution, caused
by e.g. a finalizer or a Proxy trap call (and a number of indirect causes).
The potential side effects are very wide:

* Because a call is made, the value, call, and catch stacks may be grown
  (but not shrunk) and their base pointers may change.  As a result, any
  duk_tval pointers to the value stack, duk_activation pointers to the call
  stack, and duk_catcher pointers to the catch stack are (potentially)
  invalidated.

* An error throw may happen, clobbering heap longjmp state.  This is a
  problem particularly in error handling where we're dealing with a previous
  throw.  A long control transfer may skip intended cleanup code.

* A new thread may be resumed and yielded from.  The resumed thread may even
  duk_suspend().

* A native thread switch may occur, for an arbitrarily long time, if any
  function called uses duk_suspend() and duk_resume().  This is not
  currently supported for finalizers, but may happen, for example, for
  Proxy trap calls.

* Because called code may operate on any object (except those we're certain
  not to be reachable), objects may undergo arbitrary mutation.  For
  example, object properties may be added, deleted, or modified; dynamic and
  external buffer data pointers may change; external buffer length may
  change.  An object's property table may be resized and its base pointer
  may change, invalidating any pointers into the property table.  Object
  property slot indices may also be invalidated due to object
  resize/compaction.

The following will be stable at all times:

* Value stack entries in the current activation won't be unwound or
  modified.  Similarly, the current call stack and catch stack entries and
  entries below them won't be unwound or modified.

* All heap object (duk_heaphdr) pointers are valid and stable regardless of
  any side effects, provided that the objects in question are reachable and
  correctly refcounted for.  Called code cannot (in the absence of bugs)
  remove references from previous activations in the call stack and thread
  resume chain.

* In particular, while duk_tval pointers to the value stack may change, if
  an object pointer is encapsulated in a duk_tval, the pointer to the actual
  object is still stable.

* All string data pointers, including external strings, are stable.  String
  data is immutable and can't be reallocated or relocated.

* All fixed buffer data pointers are stable, because fixed buffer data
  follows the stable duk_heaphdr directly.  Dynamic and external buffer data
  pointers are not stable.

Side effects without finalizers, but with mark-and-sweep allowed
----------------------------------------------------------------

If code execution side effects (finalizer calls, Proxy traps, getter/setter
calls, etc.) are avoided, most of the side effects are avoided.  In
particular, refzero situations are then side effect free because object
freeing has no side effects beyond memory free calls.

The following side effects still remain:

* Refzero processing still frees objects whose refcount reaches zero.
  Any pointers to such objects will thus be invalidated.  This may happen
  e.g. when a borrowed pointer is used and that pointer loses its backing
  reference.

* Mark-and-sweep may reallocate/compact the string table.  This affects
  the string table data structure pointers and indices/offsets into them.
  Strings themselves are not affected (but unreachable strings may be
  freed).

* Mark-and-sweep may reallocate/compact object property tables.  All
  property keys and values will remain reachable, but pointers and indices
  into an object property table may be invalidated.  This mostly affects
  property code which often finds a property's "slot index" and then
  operates on the index.

* Mark-and-sweep may free unreachable objects, invalidating any pointers to
  them.  This affects only objects which have been allocated and added to
  the heap_allocated list.  Objects not on the heap_allocated list are not
  affected because mark-and-sweep isn't aware of them; such objects are
  thus safe from collection, but at risk of leaking if an error is thrown,
  so such situations are usually very short lived.

Other side effects don't happen with the current mark-and-sweep
implementation.  For example, the following don't happen (but could, if
mark-and-sweep scope and side effect lockouts are changed):

* Thread value stacks, call stacks, and catch stacks are never reallocated,
  and all pointers to duk_tvals, duk_activations, and duk_catchers remain
  valid.  (This could easily change if mark-and-sweep were to "compact"
  the stacks in an emergency GC.)

The mark-and-sweep side effects listed above are not fundamental to the
engine and could be removed if they became inconvenient.  For example, it's
nice that emergency GC can compact objects in an attempt to free memory, but
it's not a critical feature (and many other engines don't do it either).

Side effects with finalizers and mark-and-sweep disabled
--------------------------------------------------------

When both finalizers and mark-and-sweep are disabled, the only remaining
side effects come from DECREF (plain or NORZ):

* Refzero processing still frees objects whose refcount reaches zero.
  Any pointers to such objects will thus be invalidated.  This may happen
  e.g. when a borrowed pointer is used and that pointer loses its backing
  reference.

When DECREF operations happen during mark-and-sweep they get handled
specially: the refcounts are updated normally, but the objects are never
freed or even queued to refzero_list.  This is done because mark-and-sweep
will free any unreachable objects; DECREF still gets called because
mark-and-sweep finalizes the refcounts of any freed objects (or rather the
other objects they point to) so that refcounts remain in sync.

Controls in place
=================

Finalizer execution, pf_prevent_count
-------------------------------------

Objects with finalizers are queued to finalize_list and are processed later
by duk_heap_process_finalize_list().  The queueing doesn't need any side
effect protection as it is side effect free.

duk_heap_process_finalize_list() is guarded by heap->pf_prevent_count, which
prevents recursive finalize_list processing.  If the count is zero on entry,
it's bumped and finalize_list is processed until it becomes empty.  New
finalizable objects may be queued while the list is being processed, but
only the first call will process the list.  If the count is non-zero on
entry, the call is a no-op.

The count can also be bumped upwards to prevent finalizer execution in the
first place, even if no call site is currently processing finalizers.  If
the count is bumped, there must be a reliable mechanism for unbumping the
count, or finalizer execution will be prevented permanently.

Because only the first finalizer processing site processes the
finalize_list, using duk_suspend() from a finalizer or anything called by a
finalizer is not currently supported.  If duk_suspend() were called in a
finalizer, finalization would be stuck until duk_resume() was called.
Processing finalizers from multiple call sites would by itself be relatively
straightforward (each call site would just process the list head or notice
it is NULL and finish); however, at present mark-and-sweep also needs to be
disabled while finalizers run.

Mark-and-sweep prevent count, ms_prevent_count
----------------------------------------------

A stacking counter to prevent mark-and-sweep.  Also used to prevent
recursive mark-and-sweep entry when mark-and-sweep runs.

Mark-and-sweep running, ms_running
----------------------------------

This flag is set only when mark-and-sweep is actually running, and doesn't
stack because recursive mark-and-sweep is not allowed.

The flag is used by DECREF macros to detect that mark-and-sweep is running
and that objects must not be queued to refzero_list or finalize_list; their
refcounts must still be updated.

Mark-and-sweep flags, ms_base_flags
-----------------------------------

Mark-and-sweep base flags from duk_heap are ORed into the mark-and-sweep
argument flags.  This allows a section of code to avoid e.g. object
compaction regardless of how mark-and-sweep gets triggered.

Using the base flags is useful when mark-and-sweep by itself is desirable
but e.g. object compaction is not.  Finalizers are prevented using a
separate flag.

Calling code must restore the flags reliably -- e.g. by catching errors or
by having assurance that no errors can be thrown in any situation.  It might
be nice to make this easier by allowing flags to be modified, the
modification flagged, and for error throw handling to do the restoration
automatically.

Creating an error object, creating_error
----------------------------------------

This flag is set when Duktape internals are creating an error to be thrown.
If an error happens during that process (which includes a user errCreate()
callback), the flag is set and avoids recursion.  A pre-allocated "double
error" object is thrown instead.

Call stack unwind or handling an error, error_not_allowed
---------------------------------------------------------

This flag is only enabled when using assertions.  It is set in code sections
which must be protected against an error being thrown.  This is a concern
because currently the error state is global in duk_heap and doesn't stack,
so an error throw (even a caught and handled one) clobbers the state, which
may be fatal in code sections working to handle an error.

DECREF NORZ (no refzero) macros
-------------------------------

DECREF NORZ (no refzero) macro variants behave the same as plain DECREF
macros except that they don't trigger side effects.  Since Duktape 2.1 NORZ
macros handle refzero cascades inline (freeing all the memory directly);
however, objects with finalizers are placed in finalize_list without
finalizer calls being made.

Once a code segment with NORZ macros is complete,
DUK_REFZERO_CHECK_{SLOW,FAST}() should be called.  The macro checks for any
pending finalizers and processes them.  No error catcher is necessary: the
error throw path also calls the macros and processes pending finalizers.
(The NORZ name is a bit of a misnomer since the Duktape 2.1 reworks.)

Mitigation, test coverage
=========================

There are several torture test options to exercise side effect handling:

* Triggering a mark-and-sweep for every allocation (and in a few other
  places like DECREF too).

* Causing a simulated finalizer run, with error throwing and call side
  effects, every time a finalizer might have executed.

Some specific cold paths like out-of-memory errors in critical places are
difficult to exercise with black box testing.  There is a small set of
DUK_USE_INJECT_xxx config options which allow errors to be injected into
specific critical functions.  These can be combined with e.g. valgrind and
asserts to cover assertions, memory leaks, and memory safety.

Operations causing side effects
===============================

The main reasons and controls for side effects are covered above.  Below is
a (non-exhaustive) list of common operations with side effects.  Any
internal helper may invoke some of these primitives and thus also have side
effects.

DUK_ALLOC()

* May trigger a voluntary or emergency mark-and-sweep, with arbitrary
  code execution side effects.

DUK_REALLOC()

* May trigger a voluntary or emergency mark-and-sweep, with arbitrary
  code execution side effects.

* In particular, if reallocating e.g. the value stack, the triggered
  mark-and-sweep may change the base pointer being reallocated!  To avoid
  this, the DUK_REALLOC_INDIRECT() call queries the base pointer from the
  caller for every realloc() attempt.

DUK_FREE()

* No side effects at present.

Property read, write, delete, existence check

* May invoke getters, setters, and Proxy traps with arbitrary code
  execution side effects.

* Memory allocation is potentially required for every operation, thus
  causing arbitrary code execution side effects.  Memory allocation is
  obviously needed for property writes, but any other operation may also
  allocate memory, e.g. to coerce a number to a string.

Value stack pushes

* Pushing to the value stack is side effect free.  The space must be
  allocated beforehand, and a pushed value is INCREF'd if it isn't
  primitive; INCREF is side effect free.

* A duk_check_stack() / duk_require_stack() + push has arbitrary side
  effects because of a potential reallocation.

Value stack pops

* Popping a value may invoke a finalizer, and thus may cause arbitrary code
  execution side effects.

Value stack coercions

* Value stack type coercions may, depending on the coercion, invoke methods
  like .toString() and .valueOf(), and thus have arbitrary code execution
  side effects.  Even failed attempts may cause side effects due to memory
  allocation attempts.

* In specific cases it may be safe to conclude that a coercion is side
  effect free; for example, doing a ToNumber() conversion on a plain string
  is side effect free at present.  (This may not always be the case in the
  future, e.g. if numbers become heap allocated.)

* Some coercions not involving an explicit method call may require an
  allocation call -- which may then trigger a voluntary or emergency
  mark-and-sweep, leading to arbitrary code execution side effects.

INCREF

* No side effects at present.  The object is never freed or queued
  anywhere.

DECREF_NORZ

* No side effects other than freeing one or more objects, strings, and
  buffers.  The freed objects don't have finalizers; objects with
  finalizers are queued to finalize_list but finalizers are not executed.

* Queries finalizer existence, which is side effect free.

* When mark-and-sweep is running, DECREF_NORZ adjusts the target refcount
  but won't do anything else like queue the object to refzero_list or free
  it; that's up to mark-and-sweep.

DECREF

* If the refcount doesn't reach zero, no side effects.

* If the refcount reaches zero, one or more objects, strings, and buffers
  are freed, which is side effect free.  Objects with finalizers are queued
  to finalize_list, and the list is processed when the cascade of objects
  without finalizers has been freed.  Finalizer execution has arbitrary
  code execution side effects.

* Queries finalizer existence, which is side effect free.

* When mark-and-sweep is running, DECREF adjusts the target refcount but
  won't do anything else.

duk__refcount_free_pending()

* As of Duktape 2.1 no side effects; just frees objects without a finalizer
  until refzero_list is empty.  (The equivalent in Duktape 2.0 and prior
  would process finalizers inline.)

* Recursive entry is prevented; the first caller processes a cascade until
  it's done.  Pending finalizers are executed after the refzero_list is
  empty (unless prevented).  Finalizers are executed outside of
  refzero_list processing protection so DECREF freeing may happen normally
  during finalizer execution.

Mark-and-sweep

* Queries finalizer existence, which is side effect free.

* Object compaction.

* String table compaction.

* Recursive entry is prevented.

* Executes finalizers after mark-and-sweep is complete (unless prevented),
  which has arbitrary code execution side effects.  Finalizer execution
  happens outside of mark-and-sweep protection, but currently finalizer
  execution explicitly prevents mark-and-sweep to avoid incorrect
  rescue/free decisions when the finalize_list is only partially processed.

Error throw

* Overwrites heap longjmp state, so an error throw while handling a
  previous one is a fatal error.

* Because finalizer calls may involve error throws, finalizers cannot be
  executed in error handling (at least without storing/restoring longjmp
  state).

* Memory allocation may involve side effects or fail with out-of-memory, so
  it must be used carefully in error handling.  For example, creating an
  object may potentially fail, throwing an error inside error handling.
  The error that is thrown is therefore constructed *before* the error
  throwing critical section begins.

* Protected call error handling must also never throw (without catching)
  for sandboxing reasons: the error handling path of a protected call is
  assumed to never throw.

* Ecmascript try-catch handling isn't currently fully protected against out
  of memory: if setting up the catch execution fails, an out-of-memory
  error is propagated from the try-catch block.  Try-catch isn't as safe as
  protected calls for sandboxing.  Even if catch execution setup didn't
  allocate memory, it's difficult to write script code that is fully memory
  allocation free (whereas writing C code which is allocation free is much
  easier).

* Mark-and-sweep without error throwing or (finalizer) call side effects is
  OK.

Debugger message writes

* Code writing a debugger message to the current debug client transport
  must ensure, somehow, that it will never happen when another function
  is doing the same (including a nested call to itself).

* If nesting happens, memory unsafe behavior won't happen, but the debug
  connection becomes corrupted.

* There are some current issues in debugger message handling, e.g. debugger
  code uses duk_safe_to_string() which may have side effects or even busy
  loop.

Call sites needing side effect protection
=========================================

Error throw and resulting unwind

* Must protect against another error: longjmp state doesn't nest.

* Prevent finalizers; avoid Proxy traps and getter/setter calls.

* Avoid out-of-memory error throws; trial allocation is OK.

* Refzero with pure memory freeing is OK.

* Mark-and-sweep without finalizer execution is OK.  Object and string
  table compaction is OK, at least at present.

* Error code must be very careful not to throw an error in any part of the
  error unwind process.  Otherwise sandboxing/protected call guarantees are
  broken, and some of the side effect prevention changes are not correctly
  undone (e.g. pf_prevent_count is bumped again!).  There are asserts in
  place for the entire critical part (heap->error_not_allowed).

Success unwind

* Must generally avoid (or protect against) error throws: otherwise state
  may be only partially unwound.  Same issues as with error unwind.

* However, if the callstack state is consistent, it may be safe to throw in
  specific places in the success unwind code path.

String table resize

* String table resize must be protected against string interning.

* Prevent finalizers; avoid Proxy traps.

* Avoid any throws, so that state is not left incomplete.

* Refzero with pure memory freeing is OK.

* Mark-and-sweep without finalizer execution is OK.  As of Duktape 2.1
  string interning is OK because it no longer causes a recursive string
  table resize regardless of the interned string count.  The string table
  itself protects against recursive resizing, so both object and string
  table compaction attempts are OK.

Object property table resize

* Prevent compaction of the object being resized.

* In practice, prevent finalizers (they may mutate objects) and Proxy
  traps.  Prevent compaction of all objects because there's no fine grained
  control now (this could be changed).

JSON fast path

* Prevent all side effects affecting property tables, which are walked by
  the fast path.

* Prevent object and string table compaction; mark-and-sweep is otherwise
  OK.

Object property slot updates (e.g. data -> accessor conversion)

* The property slot index being modified must not change.

* Prevent finalizers and Proxy traps/getters (which may operate on the
  object).

* Prevent object compaction, which affects slot indices even when
  properties are not deleted.

* In practice, use NORZ macros, which avoid all relevant side effects.
@ -0,0 +1,444 @@ |
|||
/*
|
|||
* Finalizer handling |
|||
*/ |
|||
|
|||
#include "duk_internal.h" |
|||
|
|||
#if defined(DUK_USE_FINALIZER_SUPPORT) |
|||
|
|||
/*
|
|||
* Fake torture finalizer. |
|||
*/ |
|||
|
|||
#if defined(DUK_USE_FINALIZER_TORTURE) |
|||
DUK_LOCAL duk_ret_t duk__fake_global_finalizer(duk_context *ctx) { |
|||
DUK_DD(DUK_DDPRINT("fake global torture finalizer executed")); |
|||
|
|||
/* Require a lot of stack to force a value stack grow/shrink. */ |
|||
duk_require_stack(ctx, 100000); |
|||
|
|||
/* XXX: do something to force a callstack grow/shrink, perhaps
|
|||
* just a manual forced resize or a forced relocating realloc? |
|||
*/ |
|||
|
|||
/* Inner function call, error throw. */ |
|||
duk_eval_string_noresult(ctx, |
|||
"(function dummy() {\n" |
|||
" dummy.prototype = null; /* break reference loop */\n" |
|||
" try {\n" |
|||
" throw 'fake-finalizer-dummy-error';\n" |
|||
" } catch (e) {\n" |
|||
" void e;\n" |
|||
" }\n" |
|||
"})()"); |
|||
|
|||
/* The above creates garbage (e.g. a function instance). Because
|
|||
* the function/prototype reference loop is broken, it gets collected |
|||
* immediately by DECREF. If Function.prototype has a _Finalizer |
|||
* property (happens in some test cases), the garbage gets queued to |
|||
* finalize_list. This still won't cause an infinite loop because |
|||
* the torture finalizer is called once per finalize_list run and |
|||
* the garbage gets handled in the same run. (If the garbage needs |
|||
* mark-and-sweep collection, an infinite loop might ensue.) |
|||
*/ |
|||
return 0; |
|||
} |
|||
|
|||
DUK_LOCAL void duk__run_global_torture_finalizer(duk_hthread *thr) { |
|||
DUK_ASSERT(thr != NULL); |
|||
|
|||
/* Avoid fake finalization when callstack limit has been reached.
|
|||
* Otherwise a callstack limit error will be created, then refzero'ed. |
|||
*/ |
|||
if (thr->heap->call_recursion_depth >= thr->heap->call_recursion_limit || |
|||
thr->callstack_size + 2 * DUK_CALLSTACK_GROW_STEP >= thr->callstack_max /*approximate*/) { |
|||
DUK_D(DUK_DPRINT("skip global torture finalizer because of call recursion or call stack size limit")); |
|||
return; |
|||
} |
|||
|
|||
/* Run fake finalizer. Avoid creating unnecessary garbage. */ |
|||
duk_push_c_function((duk_context *) thr, duk__fake_global_finalizer, 0 /*nargs*/); |
|||
(void) duk_pcall((duk_context *) thr, 0 /*nargs*/); |
|||
duk_pop((duk_context *) thr); |
|||
} |
|||
#endif /* DUK_USE_FINALIZER_TORTURE */ |
|||
|
|||
/*
|
|||
* Process the finalize_list to completion. |
|||
* |
|||
* An object may be placed on finalize_list by either refcounting or |
|||
* mark-and-sweep. The refcount of objects placed by refcounting will be |
|||
* zero; the refcount of objects placed by mark-and-sweep is > 0. |
|||
* |
|||
* List processing assumes refcounts are kept up-to-date at all times, so |
|||
* that once the finalizer returns, a zero refcount is a reliable reason to |
|||
* free the object immediately rather than place it back to the heap. This |
|||
* is the case because we run outside of refzero_list processing so that |
|||
* DECREF cascades are handled fully inline. |
|||
* |
|||
* For mark-and-sweep queued objects (had_zero_refcount false) the object |
|||
* may be freed immediately if its refcount is zero after the finalizer call |
|||
* (i.e. finalizer removed the reference loop for the object). If not, the |
|||
* next mark-and-sweep will collect the object unless it has become reachable |
|||
* (i.e. rescued) by that time and its refcount hasn't fallen to zero before |
|||
* that. Mark-and-sweep detects these objects because their FINALIZED flag |
|||
* is set. |
|||
* |
|||
* There's an inherent limitation for mark-and-sweep finalizer rescuing: an |
|||
* object won't get refinalized if (1) it's rescued, but (2) becomes |
|||
* unreachable before mark-and-sweep has had time to notice it. The next |
|||
* mark-and-sweep round simply doesn't have any information of whether the |
|||
* object has been unreachable the whole time or not (the only way to get |
|||
* that information would be a mark-and-sweep pass for *every finalized |
|||
* object*). This is awkward for the application because the mark-and-sweep |
|||
* round is not generally visible or under full application control. |
|||
* |
|||
* For refcount queued objects (had_zero_refcount true) the object is either |
|||
* immediately freed or rescued, and waiting for a mark-and-sweep round is not |
|||
* necessary (or desirable); FINALIZED is cleared when a rescued object is |
|||
* queued back to heap_allocated. The object is eligible for finalization |
|||
* again (either via refcounting or mark-and-sweep) immediately after being |
|||
* rescued. If a refcount finalized object is placed into an unreachable |
|||
* reference loop by its finalizer, it will get collected by mark-and-sweep |
|||
* and currently the finalizer will execute again. |
|||
* |
|||
* There's a special case where: |
|||
* |
|||
* - Mark-and-sweep queues an object to finalize_list for finalization. |
|||
* - The finalizer is executed, FINALIZED is set, and object is queued |
|||
* back to heap_allocated, waiting for a new mark-and-sweep round. |
|||
* - The object's refcount drops to zero before mark-and-sweep has a |
|||
* chance to run another round and make a rescue/free decision. |
|||
* |
|||
* This is now handled by refzero code: if an object has a finalizer but |
|||
* FINALIZED is already set, the object is freed without finalizer processing. |
|||
* The outcome is the same as if mark-and-sweep was executed at that point; |
|||
* mark-and-sweep would also free the object without another finalizer run. |
|||
* This could also be changed so that the refzero-triggered finalizer *IS* |
|||
* executed: being refzero collected implies someone has operated on the |
|||
* object so it hasn't been totally unreachable the whole time. This would |
|||
* risk a finalizer loop however. |
|||
*/ |
|||
|
|||
DUK_INTERNAL void duk_heap_process_finalize_list(duk_heap *heap) { |
|||
duk_heaphdr *curr; |
|||
#if defined(DUK_USE_DEBUG) |
|||
duk_size_t count = 0; |
|||
#endif |
|||
|
|||
DUK_DDD(DUK_DDDPRINT("duk_heap_process_finalize_list: %p", (void *) heap)); |
|||
|
|||
if (heap->pf_prevent_count != 0) { |
|||
DUK_DDD(DUK_DDDPRINT("skip finalize_list processing: pf_prevent_count != 0")); |
|||
return; |
|||
} |
|||
|
|||
/* Heap alloc prevents mark-and-sweep before heap_thread is ready. */ |
|||
DUK_ASSERT(heap != NULL); |
|||
DUK_ASSERT(heap->heap_thread != NULL); |
|||
DUK_ASSERT(heap->heap_thread->valstack != NULL); |
|||
DUK_ASSERT(heap->heap_thread->callstack != NULL); |
|||
DUK_ASSERT(heap->heap_thread->catchstack != NULL); |
|||
#if defined(DUK_USE_REFERENCE_COUNTING) |
|||
DUK_ASSERT(heap->refzero_list == NULL); |
|||
#endif |
|||
|
|||
DUK_ASSERT(heap->pf_prevent_count == 0); |
|||
heap->pf_prevent_count = 1; |
|||
|
|||
/* Bump ms_prevent_count to prevent mark-and-sweep while we execute
|
|||
* finalizers. It's important for no mark-and-sweep passes to happen |
|||
* while we process the finalize_list. If a part of the finalize_list |
|||
* has been processed and mark-and-sweep runs, it will incorrectly |
|||
* consider the processed objects rescued if they are in a reference |
|||
* relationship with objects still in finalize_list. This happens |
|||
* because mark-and-sweep treats the whole finalize_list as being |
|||
* "reachable". |
|||
*/ |
|||
heap->ms_prevent_count++; |
|||
DUK_ASSERT(heap->ms_prevent_count != 0); /* Wrap. */ |
|||
|
|||
/* When finalizer torture is enabled, make a fake finalizer call with
|
|||
* maximum side effects regardless of whether finalize_list is empty. |
|||
*/ |
|||
#if defined(DUK_USE_FINALIZER_TORTURE) |
|||
duk__run_global_torture_finalizer(heap->heap_thread); |
|||
#endif |
|||
|
|||
/* Process finalize_list until it becomes empty. There's currently no
|
|||
* protection against a finalizer always creating more garbage. |
|||
*/ |
|||
while ((curr = heap->finalize_list) != NULL) { |
|||
#if defined(DUK_USE_REFERENCE_COUNTING) |
|||
duk_bool_t queue_back; |
|||
#endif |
|||
|
|||
DUK_DD(DUK_DDPRINT("processing finalize_list entry: %p -> %!iO", (void *) curr, curr)); |
|||
|
|||
DUK_ASSERT(DUK_HEAPHDR_GET_TYPE(curr) == DUK_HTYPE_OBJECT); /* Only objects have finalizers. */ |
|||
DUK_ASSERT(!DUK_HEAPHDR_HAS_REACHABLE(curr)); |
|||
DUK_ASSERT(!DUK_HEAPHDR_HAS_TEMPROOT(curr)); |
|||
DUK_ASSERT(DUK_HEAPHDR_HAS_FINALIZABLE(curr)); /* All objects on finalize_list will have this flag. */ |
|||
DUK_ASSERT(!DUK_HEAPHDR_HAS_FINALIZED(curr)); /* Queueing code ensures. */ |
|||
DUK_ASSERT(!DUK_HEAPHDR_HAS_READONLY(curr)); /* ROM objects never get freed (or finalized). */ |
|||
|
|||
/* Clear FINALIZABLE for object being finalized, so that
|
|||
* duk_push_heapptr() can properly ignore the object. |
|||
*/ |
|||
DUK_HEAPHDR_CLEAR_FINALIZABLE(curr); |
|||
|
|||
/* XXX: Instead of an artificial refcount bump, could also
|
|||
* push/pop the object for reachability. This doesn't really |
|||
* matter much because the object is still on finalize_list |
|||
* and treated as reachable by mark-and-sweep. |
|||
*/ |
|||
|
|||
if (DUK_LIKELY(!heap->pf_skip_finalizers)) { |
|||
/* Run the finalizer, duk_heap_run_finalizer() sets
|
|||
* and checks for FINALIZED to prevent the finalizer |
|||
* from executing multiple times per finalization cycle. |
|||
* (This safeguard shouldn't be actually needed anymore). |
|||
*/ |
|||
|
|||
#if defined(DUK_USE_REFERENCE_COUNTING) |
|||
duk_bool_t had_zero_refcount; |
|||
#endif |
|||
|
|||
/* Ensure object's refcount is >0 throughout so it
|
|||
* won't be refzero processed prematurely. |
|||
*/ |
|||
#if defined(DUK_USE_REFERENCE_COUNTING) |
|||
had_zero_refcount = (DUK_HEAPHDR_GET_REFCOUNT(curr) == 0); |
|||
DUK_HEAPHDR_PREINC_REFCOUNT(curr); |
|||
#endif |
|||
|
|||
DUK_ASSERT(!DUK_HEAPHDR_HAS_FINALIZED(curr)); |
|||
duk_heap_run_finalizer(heap, (duk_hobject *) curr); /* must never longjmp */ |
|||
DUK_ASSERT(DUK_HEAPHDR_HAS_FINALIZED(curr)); |
|||
/* XXX: assert that object is still in finalize_list
|
|||
* when duk_push_heapptr() allows automatic rescue. |
|||
*/ |
|||
|
|||
#if defined(DUK_USE_REFERENCE_COUNTING) |
|||
DUK_ASSERT(DUK_HEAPHDR_GET_REFCOUNT(curr) >= 1); /* ref on value stack at least */ |
|||
DUK_HEAPHDR_PREDEC_REFCOUNT(curr); |
|||
DUK_DD(DUK_DDPRINT("refcount after finalizer: %ld", (long) DUK_HEAPHDR_GET_REFCOUNT(curr))); |
|||
if (DUK_HEAPHDR_GET_REFCOUNT(curr) == 0) { |
|||
#if defined(DUK_USE_DEBUG) |
|||
if (had_zero_refcount) { |
|||
DUK_DD(DUK_DDPRINT("finalized object's refcount is zero -> free immediately (refcount queued)")); |
|||
} else { |
|||
DUK_DD(DUK_DDPRINT("finalized object's refcount is zero -> free immediately (mark-and-sweep queued)")); |
|||
} |
|||
#endif |
|||
queue_back = 0; |
|||
} else |
|||
#endif |
|||
{ |
|||
#if defined(DUK_USE_REFERENCE_COUNTING) |
|||
queue_back = 1; |
|||
if (had_zero_refcount) { |
|||
/* When finalization is triggered
|
|||
* by refzero and we queue the object |
|||
* back, clear FINALIZED right away |
|||
* so that the object can be refinalized |
|||
* immediately if necessary. |
|||
*/ |
|||
DUK_HEAPHDR_CLEAR_FINALIZED(curr); |
|||
} |
|||
#endif |
|||
} |
|||
} else { |
|||
/* Used during heap destruction: don't actually run finalizers
|
|||
* because we're heading into forced finalization. Instead, |
|||
* queue finalizable objects back to the heap_allocated list. |
|||
*/ |
|||
DUK_D(DUK_DPRINT("skip finalizers flag set, queue object to heap_allocated without finalizing")); |
|||
DUK_ASSERT(!DUK_HEAPHDR_HAS_FINALIZED(curr)); |
|||
#if defined(DUK_USE_REFERENCE_COUNTING) |
|||
queue_back = 1; |
|||
#endif |
|||
} |
|||
|
|||
/* Dequeue object from finalize_list. Note that 'curr' may no
|
|||
* longer be finalize_list head because new objects may have |
|||
* been queued to the list. As a result we can't optimize for |
|||
* the single-linked heap case and must scan the list for |
|||
* removal, typically the scan is very short however. |
|||
*/ |
|||
DUK_HEAP_REMOVE_FROM_FINALIZE_LIST(heap, curr); |
|||
|
|||
/* Queue back to heap_allocated or free immediately. */ |
|||
#if defined(DUK_USE_REFERENCE_COUNTING) |
|||
if (queue_back) { |
|||
/* FINALIZED is only cleared if object originally
|
|||
* queued for finalization by refcounting. For |
|||
* mark-and-sweep FINALIZED is left set, so that |
|||
* next mark-and-sweep round can make a rescue/free |
|||
* decision. |
|||
*/ |
|||
DUK_HEAPHDR_CLEAR_FINALIZABLE(curr); |
|||
DUK_HEAP_INSERT_INTO_HEAP_ALLOCATED(heap, curr); |
|||
} else { |
|||
DUK_ASSERT(DUK_HEAPHDR_GET_TYPE(curr) == DUK_HTYPE_OBJECT); /* currently, always the case */ |
|||
DUK_DD(DUK_DDPRINT("refcount finalize after finalizer call: %!O", curr)); |
|||
duk_hobject_refcount_finalize_norz(heap, (duk_hobject *) curr); |
|||
duk_free_hobject(heap, (duk_hobject *) curr); |
|||
DUK_DD(DUK_DDPRINT("freed hobject after finalization: %p", (void *) curr)); |
|||
} |
|||
#else |
|||
DUK_HEAPHDR_CLEAR_FINALIZABLE(curr); |
|||
DUK_HEAP_INSERT_INTO_HEAP_ALLOCATED(heap, curr); |
|||
#endif |
|||
|
|||
#if defined(DUK_USE_DEBUG) |
|||
count++; |
|||
#endif |
|||
} |
|||
|
|||
/* finalize_list will always be processed completely. */ |
|||
DUK_ASSERT(heap->finalize_list == NULL); |
|||
|
|||
#if 0 |
|||
/* While NORZ macros are used above, this is unnecessary because the
|
|||
* only pending side effects are now finalizers, and finalize_list is |
|||
* empty. |
|||
*/ |
|||
DUK_REFZERO_CHECK_SLOW(heap->heap_thread); |
|||
#endif |
|||
|
|||
DUK_ASSERT(heap->ms_prevent_count > 0); |
|||
heap->ms_prevent_count--; |
|||
|
|||
/* Prevent count may be bumped while finalizers run, but should always
|
|||
* be reliably unbumped by the time we get here. |
|||
*/ |
|||
DUK_ASSERT(heap->pf_prevent_count == 1); |
|||
heap->pf_prevent_count = 0; |
|||
|
|||
#if defined(DUK_USE_DEBUG) |
|||
DUK_DD(DUK_DDPRINT("duk_heap_process_finalize_list: %ld finalizers called", (long) count)); |
|||
#endif |
|||
} |
|||
|
|||
/*
|
|||
* Run an duk_hobject finalizer. Must never throw an uncaught error |
|||
* (but may throw caught errors). |
|||
* |
|||
* There is no return value. Any return value or error thrown by |
|||
* the finalizer is ignored (although errors are debug logged). |
|||
* |
|||
* Notes: |
|||
* |
|||
* - The finalizer thread 'top' assertions are there because it is |
|||
* critical that strict stack policy is observed (i.e. no cruft |
|||
* left on the finalizer stack). |
|||
*/ |
|||
|
|||
DUK_LOCAL duk_ret_t duk__finalize_helper(duk_context *ctx, void *udata) { |
|||
duk_hthread *thr; |
|||
|
|||
DUK_ASSERT(ctx != NULL); |
|||
thr = (duk_hthread *) ctx; |
|||
DUK_UNREF(udata); |
|||
|
|||
DUK_DDD(DUK_DDDPRINT("protected finalization helper running")); |
|||
|
|||
/* [... obj] */ |
|||
|
|||
/* _Finalizer property is read without checking if the value is
|
|||
* callable or even exists. This is intentional, and handled |
|||
* by throwing an error which is caught by the safe call wrapper. |
|||
* |
|||
* XXX: Finalizer lookup should traverse the prototype chain (to allow |
|||
* inherited finalizers) but should not invoke accessors or proxy object |
|||
* behavior. At the moment this lookup will invoke proxy behavior, so |
|||
* caller must ensure that this function is not called if the target is |
|||
* a Proxy. |
|||
*/ |
|||
duk_get_prop_stridx_short(ctx, -1, DUK_STRIDX_INT_FINALIZER); /* -> [... obj finalizer] */ |
|||
duk_dup_m2(ctx); |
|||
duk_push_boolean(ctx, DUK_HEAP_HAS_FINALIZER_NORESCUE(thr->heap)); |
|||
DUK_DDD(DUK_DDDPRINT("calling finalizer")); |
|||
duk_call(ctx, 2); /* [ ... obj finalizer obj heapDestruct ] -> [ ... obj retval ] */ |
|||
DUK_DDD(DUK_DDDPRINT("finalizer returned successfully")); |
|||
return 0; |
|||
|
|||
/* Note: we rely on duk_safe_call() to fix up the stack for the caller,
|
|||
* so we don't need to pop stuff here. There is no return value; |
|||
* caller determines rescued status based on object refcount. |
|||
*/ |
|||
} |
|||
|
|||
DUK_INTERNAL void duk_heap_run_finalizer(duk_heap *heap, duk_hobject *obj) { |
|||
duk_context *ctx; |
|||
duk_ret_t rc; |
|||
#if defined(DUK_USE_ASSERTIONS) |
|||
duk_idx_t entry_top; |
|||
#endif |
|||
|
|||
DUK_DD(DUK_DDPRINT("running duk_hobject finalizer for object: %p", (void *) obj)); |
|||
|
|||
DUK_ASSERT(heap != NULL); |
|||
DUK_ASSERT(heap->heap_thread != NULL); |
|||
ctx = (duk_context *) heap->heap_thread; |
|||
DUK_ASSERT(obj != NULL); |
|||
DUK_ASSERT_VALSTACK_SPACE(heap->heap_thread, 1); |
|||
|
|||
#if defined(DUK_USE_ASSERTIONS) |
|||
entry_top = duk_get_top(ctx); |
|||
#endif |
|||
/*
|
|||
* Get and call the finalizer. All of this must be wrapped |
|||
* in a protected call, because even getting the finalizer |
|||
* may trigger an error (getter may throw one, for instance). |
|||
*/ |
|||
|
|||
/* ROM objects could inherit a finalizer, but they are never deemed
|
|||
* unreachable by mark-and-sweep, and their refcount never falls to 0. |
|||
*/ |
|||
DUK_ASSERT(!DUK_HEAPHDR_HAS_READONLY((duk_heaphdr *) obj)); |
|||
|
|||
/* Duktape 2.1: finalize_list never contains objects with FINALIZED
|
|||
* set, so no need to check here. |
|||
*/ |
|||
DUK_ASSERT(!DUK_HEAPHDR_HAS_FINALIZED((duk_heaphdr *) obj)); |
|||
#if 0 |
|||
if (DUK_HEAPHDR_HAS_FINALIZED((duk_heaphdr *) obj)) { |
|||
DUK_D(DUK_DPRINT("object already finalized, avoid running finalizer twice: %!O", obj)); |
|||
return; |
|||
} |
|||
#endif |
|||
DUK_HEAPHDR_SET_FINALIZED((duk_heaphdr *) obj); /* ensure never re-entered until rescue cycle complete */ |
|||
|
|||
if (DUK_HOBJECT_HAS_EXOTIC_PROXYOBJ(obj)) { |
|||
/* This may happen if duk_set_finalizer() or Duktape.fin() is
|
|||
* called for a Proxy object. In such cases the fast finalizer |
|||
* flag will be set on the Proxy, not the target, and neither |
|||
* will be finalized. |
|||
*/ |
|||
DUK_D(DUK_DPRINT("object is a proxy, skip finalizer call")); |
|||
return; |
|||
} |
|||
|
|||
duk_push_hobject(ctx, obj); /* this also increases refcount by one */ |
|||
rc = duk_safe_call(ctx, duk__finalize_helper, NULL /*udata*/, 0 /*nargs*/, 1 /*nrets*/); /* -> [... obj retval/error] */ |
|||
DUK_ASSERT_TOP(ctx, entry_top + 2); /* duk_safe_call discipline */ |
|||
|
|||
if (rc != DUK_EXEC_SUCCESS) { |
|||
/* Note: we ask for one return value from duk_safe_call to get this
|
|||
* error debugging here. |
|||
*/ |
|||
DUK_D(DUK_DPRINT("wrapped finalizer call failed for object %p (ignored); error: %!T", |
|||
(void *) obj, (duk_tval *) duk_get_tval(ctx, -1))); |
|||
} |
|||
duk_pop_2(ctx); /* -> [...] */ |
|||
|
|||
DUK_ASSERT_TOP(ctx, entry_top); |
|||
} |
|||
|
|||
#else /* DUK_USE_FINALIZER_SUPPORT */ |
|||
|
|||
/* nothing */ |
|||
|
|||
#endif /* DUK_USE_FINALIZER_SUPPORT */ |
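The processing loop above (drain the queue to completion, bump the refcount across the finalizer call, then make a rescue/free decision from the resulting refcount) can be sketched outside of Duktape with a minimal stand-in.  Note `obj_t`, `process_finalize_list` and the finalizer callbacks below are hypothetical simplifications for illustration only, not Duktape internals:

```c
#include <assert.h>
#include <stddef.h>

typedef struct obj {
	struct obj *next;  /* singly linked finalize list */
	int refcount;
	int finalized;
	int freed;
} obj_t;

/* A finalizer may "rescue" the object by adding a reference to it. */
typedef void (*finalizer_fn)(obj_t *o);

/* Drain the finalize list to completion; returns the number of objects
 * freed.  Mirrors the pattern: artificial refcount bump over the finalizer
 * call, then free immediately on zero refcount, else treat as rescued.
 */
static int process_finalize_list(obj_t **list, finalizer_fn fin) {
	int freed = 0;
	obj_t *curr;
	while ((curr = *list) != NULL) {
		*list = curr->next;   /* dequeue */
		curr->refcount++;     /* artificial bump: no premature refzero */
		fin(curr);            /* must not unwind past this point */
		curr->finalized = 1;
		curr->refcount--;     /* undo bump */
		if (curr->refcount == 0) {
			curr->freed = 1;  /* zero refcount -> free immediately */
			freed++;
		}
		/* Else: rescued; real code queues back to heap_allocated. */
	}
	return freed;
}

static void fin_noop(obj_t *o) { (void) o; }
static void fin_rescue(obj_t *o) { o->refcount++; }  /* simulate a rescue */
```

A finalizer that adds a reference leaves the object with a nonzero refcount after the bump is undone, so it is queued back instead of freed, matching the `queue_back` logic above.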
@ -1,113 +0,0 @@
/*
 *  Run a duk_hobject finalizer.  Used for both reference counting
 *  and mark-and-sweep algorithms.  Must never throw an error.
 *
 *  There is no return value.  Any return value or error thrown by
 *  the finalizer is ignored (although errors are debug logged).
 *
 *  Notes:
 *
 *    - The thread used for calling the finalizer is the same as the
 *      'thr' argument.  This may need to change later.
 *
 *    - The finalizer thread 'top' assertions are there because it is
 *      critical that strict stack policy is observed (i.e. no cruft
 *      left on the finalizer stack).
 */

#include "duk_internal.h"

#if defined(DUK_USE_FINALIZER_SUPPORT)
DUK_LOCAL duk_ret_t duk__finalize_helper(duk_context *ctx, void *udata) {
	duk_hthread *thr;

	DUK_ASSERT(ctx != NULL);
	thr = (duk_hthread *) ctx;
	DUK_UNREF(udata);

	DUK_DDD(DUK_DDDPRINT("protected finalization helper running"));

	/* [... obj] */

	/* XXX: Finalizer lookup should traverse the prototype chain (to allow
	 * inherited finalizers) but should not invoke accessors or proxy object
	 * behavior.  At the moment this lookup will invoke proxy behavior, so
	 * the caller must ensure that this function is not called if the target
	 * is a Proxy.
	 */

	duk_get_prop_stridx_short(ctx, -1, DUK_STRIDX_INT_FINALIZER);  /* -> [... obj finalizer] */
	if (!duk_is_callable(ctx, -1)) {
		DUK_DDD(DUK_DDDPRINT("-> no finalizer or finalizer not callable"));
		return 0;
	}
	duk_dup_m2(ctx);
	duk_push_boolean(ctx, DUK_HEAP_HAS_FINALIZER_NORESCUE(thr->heap));
	DUK_DDD(DUK_DDDPRINT("-> finalizer found, calling finalizer"));
	duk_call(ctx, 2);  /* [ ... obj finalizer obj heapDestruct ] -> [ ... obj retval ] */
	DUK_DDD(DUK_DDDPRINT("finalizer finished successfully"));
	return 0;

	/* Note: we rely on duk_safe_call() to fix up the stack for the caller,
	 * so we don't need to pop stuff here.  There is no return value;
	 * the caller determines rescued status based on object refcount.
	 */
}

DUK_INTERNAL void duk_hobject_run_finalizer(duk_hthread *thr, duk_hobject *obj) {
	duk_context *ctx = (duk_context *) thr;
	duk_ret_t rc;
#if defined(DUK_USE_ASSERTIONS)
	duk_idx_t entry_top;
#endif

	DUK_DDD(DUK_DDDPRINT("running object finalizer for object: %p", (void *) obj));

	DUK_ASSERT(thr != NULL);
	DUK_ASSERT(ctx != NULL);
	DUK_ASSERT(obj != NULL);
	DUK_ASSERT_VALSTACK_SPACE(thr, 1);

#if defined(DUK_USE_ASSERTIONS)
	entry_top = duk_get_top(ctx);
#endif
	/*
	 *  Get and call the finalizer.  All of this must be wrapped
	 *  in a protected call, because even getting the finalizer
	 *  may trigger an error (a getter may throw one, for instance).
	 */

	DUK_ASSERT(!DUK_HEAPHDR_HAS_READONLY((duk_heaphdr *) obj));
	if (DUK_HEAPHDR_HAS_FINALIZED((duk_heaphdr *) obj)) {
		DUK_D(DUK_DPRINT("object already finalized, avoid running finalizer twice: %!O", obj));
		return;
	}
	DUK_HEAPHDR_SET_FINALIZED((duk_heaphdr *) obj);  /* ensure never re-entered until rescue cycle complete */
	if (DUK_HOBJECT_HAS_EXOTIC_PROXYOBJ(obj)) {
		/* This shouldn't happen; call sites should avoid looking up
		 * _Finalizer "through" a Proxy, but ignore if we come here
		 * with a Proxy to avoid finalizer re-entry.
		 */
		DUK_D(DUK_DPRINT("object is a proxy, skip finalizer call"));
		return;
	}

	/* XXX: use a NULL error handler for the finalizer call? */

	DUK_DDD(DUK_DDDPRINT("-> finalizer found, calling wrapped finalize helper"));
	duk_push_hobject(ctx, obj);  /* this also increases refcount by one */
	rc = duk_safe_call(ctx, duk__finalize_helper, NULL /*udata*/, 0 /*nargs*/, 1 /*nrets*/);  /* -> [... obj retval/error] */
	DUK_ASSERT_TOP(ctx, entry_top + 2);  /* duk_safe_call discipline */

	if (rc != DUK_EXEC_SUCCESS) {
		/* Note: we ask for one return value from duk_safe_call to get this
		 * error debugging here.
		 */
		DUK_D(DUK_DPRINT("wrapped finalizer call failed for object %p (ignored); error: %!T",
		                 (void *) obj, (duk_tval *) duk_get_tval(ctx, -1)));
	}
	duk_pop_2(ctx);  /* -> [...] */

	DUK_ASSERT_TOP(ctx, entry_top);
}
#endif  /* DUK_USE_FINALIZER_SUPPORT */
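The requirement stated above, that a finalizer call must never propagate an error to the caller, is what the `duk_safe_call()` wrapper provides.  The same protected-call shape can be sketched portably with `setjmp`/`longjmp`; the names here (`protected_call`, `fin_throw`) are illustrative, not Duktape API:

```c
#include <assert.h>
#include <setjmp.h>

static jmp_buf fin_jb;

/* "Throw" analogue: unwinds back to the protected call boundary. */
static void fin_throw(void) {
	longjmp(fin_jb, 1);
}

/* Protected call: returns 0 on success, nonzero if fn "threw".
 * Errors never propagate past this boundary, mirroring how a wrapped
 * finalizer error is caught, ignored and only debug logged.
 */
static int protected_call(void (*fn)(void)) {
	if (setjmp(fin_jb) != 0) {
		return 1;  /* error caught here; swallow it */
	}
	fn();
	return 0;
}

static void fin_ok(void) { /* a finalizer that succeeds */ }
static void fin_fail(void) { fin_throw(); }  /* a finalizer that "throws" */
```

Whatever the callback does, control returns to the caller of `protected_call()` with only a status code, which is why the surrounding GC code can treat the finalizer call as one that "must never longjmp" past it.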
@ -0,0 +1,698 @@ |
|||
/*
|
|||
* Reference counting helper macros. The macros take a thread argument |
|||
* and must thus always be executed in a specific thread context. The |
|||
* thread argument is not really needed anymore: DECREF can operate with |
|||
* a heap pointer only, and INCREF needs neither. |
|||
*/ |
|||
|
|||
#if !defined(DUK_REFCOUNT_H_INCLUDED) |
|||
#define DUK_REFCOUNT_H_INCLUDED |
|||
|
|||
#if defined(DUK_USE_REFERENCE_COUNTING) |
|||
|
|||
#if defined(DUK_USE_ROM_OBJECTS) |
|||
/* With ROM objects "needs refcount update" is true when the value is
|
|||
* heap allocated and is not a ROM object. |
|||
*/ |
|||
/* XXX: double evaluation for 'tv' argument. */ |
|||
#define DUK_TVAL_NEEDS_REFCOUNT_UPDATE(tv) \ |
|||
(DUK_TVAL_IS_HEAP_ALLOCATED((tv)) && !DUK_HEAPHDR_HAS_READONLY(DUK_TVAL_GET_HEAPHDR((tv)))) |
|||
#define DUK_HEAPHDR_NEEDS_REFCOUNT_UPDATE(h) (!DUK_HEAPHDR_HAS_READONLY((h))) |
|||
#else /* DUK_USE_ROM_OBJECTS */ |
|||
/* Without ROM objects "needs refcount update" == is heap allocated. */ |
|||
#define DUK_TVAL_NEEDS_REFCOUNT_UPDATE(tv) DUK_TVAL_IS_HEAP_ALLOCATED((tv)) |
|||
#define DUK_HEAPHDR_NEEDS_REFCOUNT_UPDATE(h) 1 |
|||
#endif /* DUK_USE_ROM_OBJECTS */ |
|||
|
|||
/* Fast variants, inline refcount operations except for refzero handling.
|
|||
* Can be used explicitly when speed is always more important than size. |
|||
* For a good compiler and a single file build, these are basically the |
|||
* same as a forced inline. |
|||
*/ |
|||
#define DUK_TVAL_INCREF_FAST(thr,tv) do { \ |
|||
duk_tval *duk__tv = (tv); \ |
|||
DUK_ASSERT(duk__tv != NULL); \ |
|||
if (DUK_TVAL_NEEDS_REFCOUNT_UPDATE(duk__tv)) { \ |
|||
duk_heaphdr *duk__h = DUK_TVAL_GET_HEAPHDR(duk__tv); \ |
|||
DUK_ASSERT(duk__h != NULL); \ |
|||
DUK_ASSERT(DUK_HEAPHDR_HTYPE_VALID(duk__h)); \ |
|||
DUK_HEAPHDR_PREINC_REFCOUNT(duk__h); \ |
|||
DUK_ASSERT(DUK_HEAPHDR_GET_REFCOUNT(duk__h) != 0); /* No wrapping. */ \ |
|||
} \ |
|||
} while (0) |
|||
#define DUK_TVAL_DECREF_FAST(thr,tv) do { \ |
|||
duk_tval *duk__tv = (tv); \ |
|||
DUK_ASSERT(duk__tv != NULL); \ |
|||
if (DUK_TVAL_NEEDS_REFCOUNT_UPDATE(duk__tv)) { \ |
|||
duk_heaphdr *duk__h = DUK_TVAL_GET_HEAPHDR(duk__tv); \ |
|||
DUK_ASSERT(duk__h != NULL); \ |
|||
DUK_ASSERT(DUK_HEAPHDR_HTYPE_VALID(duk__h)); \ |
|||
DUK_ASSERT(DUK_HEAPHDR_GET_REFCOUNT(duk__h) > 0); \ |
|||
if (DUK_HEAPHDR_PREDEC_REFCOUNT(duk__h) == 0) { \ |
|||
duk_heaphdr_refzero((thr), duk__h); \ |
|||
} \ |
|||
} \ |
|||
} while (0) |
|||
#define DUK_TVAL_DECREF_NORZ_FAST(thr,tv) do { \ |
|||
duk_tval *duk__tv = (tv); \ |
|||
DUK_ASSERT(duk__tv != NULL); \ |
|||
if (DUK_TVAL_NEEDS_REFCOUNT_UPDATE(duk__tv)) { \ |
|||
duk_heaphdr *duk__h = DUK_TVAL_GET_HEAPHDR(duk__tv); \ |
|||
DUK_ASSERT(duk__h != NULL); \ |
|||
DUK_ASSERT(DUK_HEAPHDR_HTYPE_VALID(duk__h)); \ |
|||
DUK_ASSERT(DUK_HEAPHDR_GET_REFCOUNT(duk__h) > 0); \ |
|||
if (DUK_HEAPHDR_PREDEC_REFCOUNT(duk__h) == 0) { \ |
|||
duk_heaphdr_refzero_norz((thr), duk__h); \ |
|||
} \ |
|||
} \ |
|||
} while (0) |
|||
#define DUK_HEAPHDR_INCREF_FAST(thr,h) do { \ |
|||
duk_heaphdr *duk__h = (duk_heaphdr *) (h); \ |
|||
DUK_ASSERT(duk__h != NULL); \ |
|||
DUK_ASSERT(DUK_HEAPHDR_HTYPE_VALID(duk__h)); \ |
|||
if (DUK_HEAPHDR_NEEDS_REFCOUNT_UPDATE(duk__h)) { \ |
|||
DUK_HEAPHDR_PREINC_REFCOUNT(duk__h); \ |
|||
DUK_ASSERT(DUK_HEAPHDR_GET_REFCOUNT(duk__h) != 0); /* No wrapping. */ \ |
|||
} \ |
|||
} while (0) |
|||
#define DUK_HEAPHDR_DECREF_FAST_RAW(thr,h,rzcall,rzcast) do { \ |
|||
duk_heaphdr *duk__h = (duk_heaphdr *) (h); \ |
|||
DUK_ASSERT(duk__h != NULL); \ |
|||
DUK_ASSERT(DUK_HEAPHDR_HTYPE_VALID(duk__h)); \ |
|||
DUK_ASSERT(DUK_HEAPHDR_GET_REFCOUNT(duk__h) > 0); \ |
|||
if (DUK_HEAPHDR_NEEDS_REFCOUNT_UPDATE(duk__h)) { \ |
|||
if (DUK_HEAPHDR_PREDEC_REFCOUNT(duk__h) == 0) { \ |
|||
(rzcall)((thr), (rzcast) duk__h); \ |
|||
} \ |
|||
} \ |
|||
} while (0) |
|||
#define DUK_HEAPHDR_DECREF_FAST(thr,h) \ |
|||
DUK_HEAPHDR_DECREF_FAST_RAW((thr),(h),duk_heaphdr_refzero,duk_heaphdr *) |
|||
#define DUK_HEAPHDR_DECREF_NORZ_FAST(thr,h) \ |
|||
DUK_HEAPHDR_DECREF_FAST_RAW((thr),(h),duk_heaphdr_refzero_norz,duk_heaphdr *) |
|||
|
|||
/* Slow variants, call to a helper to reduce code size.
|
|||
* Can be used explicitly when size is always more important than speed. |
|||
*/ |
|||
#define DUK_TVAL_INCREF_SLOW(thr,tv) do { duk_tval_incref((tv)); } while (0) |
|||
#define DUK_TVAL_DECREF_SLOW(thr,tv) do { duk_tval_decref((thr), (tv)); } while (0) |
|||
#define DUK_TVAL_DECREF_NORZ_SLOW(thr,tv) do { duk_tval_decref_norz((thr), (tv)); } while (0) |
|||
#define DUK_HEAPHDR_INCREF_SLOW(thr,h) do { duk_heaphdr_incref((duk_heaphdr *) (h)); } while (0) |
|||
#define DUK_HEAPHDR_DECREF_SLOW(thr,h) do { duk_heaphdr_decref((thr), (duk_heaphdr *) (h)); } while (0) |
|||
#define DUK_HEAPHDR_DECREF_NORZ_SLOW(thr,h) do { duk_heaphdr_decref_norz((thr), (duk_heaphdr *) (h)); } while (0) |
|||
#define DUK_HSTRING_INCREF_SLOW(thr,h) do { duk_heaphdr_incref((duk_heaphdr *) (h)); } while (0) |
|||
#define DUK_HSTRING_DECREF_SLOW(thr,h) do { duk_heaphdr_decref((thr), (duk_heaphdr *) (h)); } while (0) |
|||
#define DUK_HSTRING_DECREF_NORZ_SLOW(thr,h) do { duk_heaphdr_decref_norz((thr), (duk_heaphdr *) (h)); } while (0) |
|||
#define DUK_HBUFFER_INCREF_SLOW(thr,h) do { duk_heaphdr_incref((duk_heaphdr *) (h)); } while (0) |
|||
#define DUK_HBUFFER_DECREF_SLOW(thr,h) do { duk_heaphdr_decref((thr), (duk_heaphdr *) (h)); } while (0) |
|||
#define DUK_HBUFFER_DECREF_NORZ_SLOW(thr,h) do { duk_heaphdr_decref_norz((thr), (duk_heaphdr *) (h)); } while (0) |
|||
#define DUK_HOBJECT_INCREF_SLOW(thr,h) do { duk_heaphdr_incref((duk_heaphdr *) (h)); } while (0) |
|||
#define DUK_HOBJECT_DECREF_SLOW(thr,h) do { duk_heaphdr_decref((thr), (duk_heaphdr *) (h)); } while (0) |
|||
#define DUK_HOBJECT_DECREF_NORZ_SLOW(thr,h) do { duk_heaphdr_decref_norz((thr), (duk_heaphdr *) (h)); } while (0) |
|||
|
|||
/* Default variants. Selection depends on speed/size preference.
|
|||
* Concretely: with gcc 4.8.1 -Os x64 the difference in final binary |
|||
* is about +1kB for _FAST variants. |
|||
*/ |
|||
#if defined(DUK_USE_FAST_REFCOUNT_DEFAULT) |
|||
/* XXX: It would be nice to specialize for specific duk_hobject subtypes
|
|||
* but current refzero queue handling prevents that. |
|||
*/ |
|||
#define DUK_TVAL_INCREF(thr,tv) DUK_TVAL_INCREF_FAST((thr),(tv)) |
|||
#define DUK_TVAL_DECREF(thr,tv) DUK_TVAL_DECREF_FAST((thr),(tv)) |
|||
#define DUK_TVAL_DECREF_NORZ(thr,tv) DUK_TVAL_DECREF_NORZ_FAST((thr),(tv)) |
|||
#define DUK_HEAPHDR_INCREF(thr,h) DUK_HEAPHDR_INCREF_FAST((thr),(h)) |
|||
#define DUK_HEAPHDR_DECREF(thr,h) DUK_HEAPHDR_DECREF_FAST_RAW((thr),(h),duk_heaphdr_refzero,duk_heaphdr *) |
|||
#define DUK_HEAPHDR_DECREF_NORZ(thr,h) DUK_HEAPHDR_DECREF_FAST_RAW((thr),(h),duk_heaphdr_refzero_norz,duk_heaphdr *) |
|||
#define DUK_HSTRING_INCREF(thr,h) DUK_HEAPHDR_INCREF((thr),(duk_heaphdr *) (h)) |
|||
#define DUK_HSTRING_DECREF(thr,h) DUK_HEAPHDR_DECREF_FAST_RAW((thr),(h),duk_hstring_refzero,duk_hstring *) |
|||
#define DUK_HSTRING_DECREF_NORZ(thr,h) DUK_HEAPHDR_DECREF_FAST_RAW((thr),(h),duk_hstring_refzero,duk_hstring *) /* no 'norz' variant */ |
|||
#define DUK_HOBJECT_INCREF(thr,h) DUK_HEAPHDR_INCREF((thr),(duk_heaphdr *) (h)) |
|||
#define DUK_HOBJECT_DECREF(thr,h) DUK_HEAPHDR_DECREF_FAST_RAW((thr),(h),duk_hobject_refzero,duk_hobject *) |
|||
#define DUK_HOBJECT_DECREF_NORZ(thr,h) DUK_HEAPHDR_DECREF_FAST_RAW((thr),(h),duk_hobject_refzero_norz,duk_hobject *) |
|||
#define DUK_HBUFFER_INCREF(thr,h) DUK_HEAPHDR_INCREF((thr),(duk_heaphdr *) (h)) |
|||
#define DUK_HBUFFER_DECREF(thr,h) DUK_HEAPHDR_DECREF_FAST_RAW((thr),(h),duk_hbuffer_refzero,duk_hbuffer *) |
|||
#define DUK_HBUFFER_DECREF_NORZ(thr,h) DUK_HEAPHDR_DECREF_FAST_RAW((thr),(h),duk_hbuffer_refzero,duk_hbuffer *) /* no 'norz' variant */ |
|||
#define DUK_HCOMPFUNC_INCREF(thr,h) DUK_HEAPHDR_INCREF((thr),(duk_heaphdr *) &(h)->obj) |
|||
#define DUK_HCOMPFUNC_DECREF(thr,h) DUK_HEAPHDR_DECREF_FAST_RAW((thr),(h),duk_hobject_refzero,duk_hobject *) |
|||
#define DUK_HCOMPFUNC_DECREF_NORZ(thr,h) DUK_HEAPHDR_DECREF_FAST_RAW((thr),(h),duk_hobject_refzero_norz,duk_hobject *) |
|||
#define DUK_HNATFUNC_INCREF(thr,h) DUK_HEAPHDR_INCREF((thr),(duk_heaphdr *) &(h)->obj) |
|||
#define DUK_HNATFUNC_DECREF(thr,h) DUK_HEAPHDR_DECREF_FAST_RAW((thr),(h),duk_hobject_refzero,duk_hobject *) |
|||
#define DUK_HNATFUNC_DECREF_NORZ(thr,h) DUK_HEAPHDR_DECREF_FAST_RAW((thr),(h),duk_hobject_refzero_norz,duk_hobject *) |
|||
#define DUK_HBUFOBJ_INCREF(thr,h) DUK_HEAPHDR_INCREF((thr),(duk_heaphdr *) &(h)->obj) |
|||
#define DUK_HBUFOBJ_DECREF(thr,h) DUK_HEAPHDR_DECREF_FAST_RAW((thr),(h),duk_hobject_refzero,duk_hobject *) |
|||
#define DUK_HBUFOBJ_DECREF_NORZ(thr,h) DUK_HEAPHDR_DECREF_FAST_RAW((thr),(h),duk_hobject_refzero_norz,duk_hobject *) |
|||
#define DUK_HTHREAD_INCREF(thr,h) DUK_HEAPHDR_INCREF((thr),(duk_heaphdr *) &(h)->obj) |
|||
#define DUK_HTHREAD_DECREF(thr,h) DUK_HEAPHDR_DECREF_FAST_RAW((thr),(h),duk_hobject_refzero,duk_hobject *) |
|||
#define DUK_HTHREAD_DECREF_NORZ(thr,h) DUK_HEAPHDR_DECREF_FAST_RAW((thr),(h),duk_hobject_refzero_norz,duk_hobject *) |
|||
#else |
|||
#define DUK_TVAL_INCREF(thr,tv) DUK_TVAL_INCREF_SLOW((thr),(tv)) |
|||
#define DUK_TVAL_DECREF(thr,tv) DUK_TVAL_DECREF_SLOW((thr),(tv)) |
|||
#define DUK_TVAL_DECREF_NORZ(thr,tv) DUK_TVAL_DECREF_NORZ_SLOW((thr),(tv)) |
|||
#define DUK_HEAPHDR_INCREF(thr,h) DUK_HEAPHDR_INCREF_SLOW((thr),(h)) |
|||
#define DUK_HEAPHDR_DECREF(thr,h) DUK_HEAPHDR_DECREF_SLOW((thr),(h)) |
|||
#define DUK_HEAPHDR_DECREF_NORZ(thr,h) DUK_HEAPHDR_DECREF_NORZ_SLOW((thr),(h)) |
|||
#define DUK_HSTRING_INCREF(thr,h) DUK_HEAPHDR_INCREF((thr),(duk_heaphdr *) (h)) |
|||
#define DUK_HSTRING_DECREF(thr,h) DUK_HSTRING_DECREF_SLOW((thr),(h)) |
|||
#define DUK_HSTRING_DECREF_NORZ(thr,h) DUK_HSTRING_DECREF_NORZ_SLOW((thr),(h)) |
|||
#define DUK_HOBJECT_INCREF(thr,h) DUK_HEAPHDR_INCREF((thr),(duk_heaphdr *) (h)) |
|||
#define DUK_HOBJECT_DECREF(thr,h) DUK_HOBJECT_DECREF_SLOW((thr),(h)) |
|||
#define DUK_HOBJECT_DECREF_NORZ(thr,h) DUK_HOBJECT_DECREF_NORZ_SLOW((thr),(h)) |
|||
#define DUK_HBUFFER_INCREF(thr,h) DUK_HEAPHDR_INCREF((thr),(duk_heaphdr *) (h)) |
|||
#define DUK_HBUFFER_DECREF(thr,h) DUK_HBUFFER_DECREF_SLOW((thr),(h)) |
|||
#define DUK_HBUFFER_DECREF_NORZ(thr,h) DUK_HBUFFER_DECREF_NORZ_SLOW((thr),(h)) |
|||
#define DUK_HCOMPFUNC_INCREF(thr,h) DUK_HEAPHDR_INCREF((thr),(duk_heaphdr *) &(h)->obj) |
|||
#define DUK_HCOMPFUNC_DECREF(thr,h) DUK_HOBJECT_DECREF_SLOW((thr),(duk_hobject *) &(h)->obj) |
|||
#define DUK_HCOMPFUNC_DECREF_NORZ(thr,h) DUK_HOBJECT_DECREF_NORZ_SLOW((thr),(duk_hobject *) &(h)->obj) |
|||
#define DUK_HNATFUNC_INCREF(thr,h) DUK_HEAPHDR_INCREF((thr),(duk_heaphdr *) &(h)->obj) |
|||
#define DUK_HNATFUNC_DECREF(thr,h) DUK_HOBJECT_DECREF_SLOW((thr),(duk_hobject *) &(h)->obj) |
|||
#define DUK_HNATFUNC_DECREF_NORZ(thr,h) DUK_HOBJECT_DECREF_NORZ_SLOW((thr),(duk_hobject *) &(h)->obj) |
|||
#define DUK_HBUFOBJ_INCREF(thr,h) DUK_HEAPHDR_INCREF((thr),(duk_heaphdr *) &(h)->obj) |
|||
#define DUK_HBUFOBJ_DECREF(thr,h) DUK_HOBJECT_DECREF_SLOW((thr),(duk_hobject *) &(h)->obj) |
|||
#define DUK_HBUFOB_DECREF_NORZ(thr,h) DUK_HOBJECT_DECREF_NORZ_SLOW((thr),(duk_hobject *) &(h)->obj) |
|||
#define DUK_HTHREAD_INCREF(thr,h) DUK_HEAPHDR_INCREF((thr),(duk_heaphdr *) &(h)->obj) |
|||
#define DUK_HTHREAD_DECREF(thr,h) DUK_HOBJECT_DECREF_SLOW((thr),(duk_hobject *) &(h)->obj) |
|||
#define DUK_HTHREAD_DECREF_NORZ(thr,h) DUK_HOBJECT_DECREF_NORZ_SLOW((thr),(duk_hobject *) &(h)->obj) |
|||
#endif |
|||
|
|||
/* Convenience for some situations; the above macros don't allow NULLs
|
|||
* for performance reasons. Macros cover only actually needed cases. |
|||
*/ |
|||
#define DUK_HEAPHDR_INCREF_ALLOWNULL(thr,h) do { \
    if ((h) != NULL) { \
        DUK_HEAPHDR_INCREF((thr), (duk_heaphdr *) (h)); \
    } \
} while (0)
#define DUK_HEAPHDR_DECREF_ALLOWNULL(thr,h) do { \
    if ((h) != NULL) { \
        DUK_HEAPHDR_DECREF((thr), (duk_heaphdr *) (h)); \
    } \
} while (0)
#define DUK_HEAPHDR_DECREF_NORZ_ALLOWNULL(thr,h) do { \
    if ((h) != NULL) { \
        DUK_HEAPHDR_DECREF_NORZ((thr), (duk_heaphdr *) (h)); \
    } \
} while (0)
#define DUK_HOBJECT_INCREF_ALLOWNULL(thr,h) do { \
    if ((h) != NULL) { \
        DUK_HOBJECT_INCREF((thr), (h)); \
    } \
} while (0)
#define DUK_HOBJECT_DECREF_ALLOWNULL(thr,h) do { \
    if ((h) != NULL) { \
        DUK_HOBJECT_DECREF((thr), (h)); \
    } \
} while (0)
#define DUK_HOBJECT_DECREF_NORZ_ALLOWNULL(thr,h) do { \
    if ((h) != NULL) { \
        DUK_HOBJECT_DECREF_NORZ((thr), (h)); \
    } \
} while (0)
#define DUK_HBUFFER_INCREF_ALLOWNULL(thr,h) do { \
    if ((h) != NULL) { \
        DUK_HBUFFER_INCREF((thr), (h)); \
    } \
} while (0)
#define DUK_HBUFFER_DECREF_ALLOWNULL(thr,h) do { \
    if ((h) != NULL) { \
        DUK_HBUFFER_DECREF((thr), (h)); \
    } \
} while (0)
#define DUK_HBUFFER_DECREF_NORZ_ALLOWNULL(thr,h) do { \
    if ((h) != NULL) { \
        DUK_HBUFFER_DECREF_NORZ((thr), (h)); \
    } \
} while (0)
#define DUK_HTHREAD_INCREF_ALLOWNULL(thr,h) do { \
    if ((h) != NULL) { \
        DUK_HTHREAD_INCREF((thr), (h)); \
    } \
} while (0)
#define DUK_HTHREAD_DECREF_ALLOWNULL(thr,h) do { \
    if ((h) != NULL) { \
        DUK_HTHREAD_DECREF((thr), (h)); \
    } \
} while (0)
#define DUK_HTHREAD_DECREF_NORZ_ALLOWNULL(thr,h) do { \
    if ((h) != NULL) { \
        DUK_HTHREAD_DECREF_NORZ((thr), (h)); \
    } \
} while (0)
/* Called after one or more DECREF NORZ calls to handle pending side effects.
 * At present DECREF NORZ does freeing inline but doesn't execute finalizers,
 * so these macros check for pending finalizers and execute them.  The FAST
 * variant is performance critical.
 */
#if defined(DUK_USE_FINALIZER_SUPPORT)
#define DUK_REFZERO_CHECK_FAST(thr) do { \
    duk_refzero_check_fast((thr)); \
} while (0)
#define DUK_REFZERO_CHECK_SLOW(thr) do { \
    duk_refzero_check_slow((thr)); \
} while (0)
#else /* DUK_USE_FINALIZER_SUPPORT */
#define DUK_REFZERO_CHECK_FAST(thr) do { } while (0)
#define DUK_REFZERO_CHECK_SLOW(thr) do { } while (0)
#endif /* DUK_USE_FINALIZER_SUPPORT */

/*
 *  Macros to set a duk_tval and update refcount of the target (decref the
 *  old value and incref the new value if necessary).  This is both performance
 *  and footprint critical; any changes made should be measured for size/speed.
 */

#define DUK_TVAL_SET_UNDEFINED_UPDREF_ALT0(thr,tvptr_dst) do { \
    duk_tval *tv__dst; duk_tval tv__tmp; tv__dst = (tvptr_dst); \
    DUK_TVAL_SET_TVAL(&tv__tmp, tv__dst); \
    DUK_TVAL_SET_UNDEFINED(tv__dst); \
    DUK_TVAL_DECREF((thr), &tv__tmp);  /* side effects */ \
} while (0)

#define DUK_TVAL_SET_UNDEFINED_UPDREF_NORZ_ALT0(thr,tvptr_dst) do { \
    duk_tval *tv__dst; duk_tval tv__tmp; tv__dst = (tvptr_dst); \
    DUK_TVAL_SET_TVAL(&tv__tmp, tv__dst); \
    DUK_TVAL_SET_UNDEFINED(tv__dst); \
    DUK_TVAL_DECREF_NORZ((thr), &tv__tmp); \
} while (0)

#define DUK_TVAL_SET_UNUSED_UPDREF_ALT0(thr,tvptr_dst) do { \
    duk_tval *tv__dst; duk_tval tv__tmp; tv__dst = (tvptr_dst); \
    DUK_TVAL_SET_TVAL(&tv__tmp, tv__dst); \
    DUK_TVAL_SET_UNUSED(tv__dst); \
    DUK_TVAL_DECREF((thr), &tv__tmp);  /* side effects */ \
} while (0)

#define DUK_TVAL_SET_NULL_UPDREF_ALT0(thr,tvptr_dst) do { \
    duk_tval *tv__dst; duk_tval tv__tmp; tv__dst = (tvptr_dst); \
    DUK_TVAL_SET_TVAL(&tv__tmp, tv__dst); \
    DUK_TVAL_SET_NULL(tv__dst); \
    DUK_TVAL_DECREF((thr), &tv__tmp);  /* side effects */ \
} while (0)

#define DUK_TVAL_SET_BOOLEAN_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
    duk_tval *tv__dst; duk_tval tv__tmp; tv__dst = (tvptr_dst); \
    DUK_TVAL_SET_TVAL(&tv__tmp, tv__dst); \
    DUK_TVAL_SET_BOOLEAN(tv__dst, (newval)); \
    DUK_TVAL_DECREF((thr), &tv__tmp);  /* side effects */ \
} while (0)

#define DUK_TVAL_SET_NUMBER_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
    duk_tval *tv__dst; duk_tval tv__tmp; tv__dst = (tvptr_dst); \
    DUK_TVAL_SET_TVAL(&tv__tmp, tv__dst); \
    DUK_TVAL_SET_NUMBER(tv__dst, (newval)); \
    DUK_TVAL_DECREF((thr), &tv__tmp);  /* side effects */ \
} while (0)
#define DUK_TVAL_SET_NUMBER_CHKFAST_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
    duk_tval *tv__dst; duk_tval tv__tmp; tv__dst = (tvptr_dst); \
    DUK_TVAL_SET_TVAL(&tv__tmp, tv__dst); \
    DUK_TVAL_SET_NUMBER_CHKFAST_FAST(tv__dst, (newval)); \
    DUK_TVAL_DECREF((thr), &tv__tmp);  /* side effects */ \
} while (0)
#define DUK_TVAL_SET_DOUBLE_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
    duk_tval *tv__dst; duk_tval tv__tmp; tv__dst = (tvptr_dst); \
    DUK_TVAL_SET_TVAL(&tv__tmp, tv__dst); \
    DUK_TVAL_SET_DOUBLE(tv__dst, (newval)); \
    DUK_TVAL_DECREF((thr), &tv__tmp);  /* side effects */ \
} while (0)
#define DUK_TVAL_SET_NAN_UPDREF_ALT0(thr,tvptr_dst) do { \
    duk_tval *tv__dst; duk_tval tv__tmp; tv__dst = (tvptr_dst); \
    DUK_TVAL_SET_TVAL(&tv__tmp, tv__dst); \
    DUK_TVAL_SET_NAN(tv__dst); \
    DUK_TVAL_DECREF((thr), &tv__tmp);  /* side effects */ \
} while (0)
#if defined(DUK_USE_FASTINT)
#define DUK_TVAL_SET_I48_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
    duk_tval *tv__dst; duk_tval tv__tmp; tv__dst = (tvptr_dst); \
    DUK_TVAL_SET_TVAL(&tv__tmp, tv__dst); \
    DUK_TVAL_SET_I48(tv__dst, (newval)); \
    DUK_TVAL_DECREF((thr), &tv__tmp);  /* side effects */ \
} while (0)
#define DUK_TVAL_SET_I32_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
    duk_tval *tv__dst; duk_tval tv__tmp; tv__dst = (tvptr_dst); \
    DUK_TVAL_SET_TVAL(&tv__tmp, tv__dst); \
    DUK_TVAL_SET_I32(tv__dst, (newval)); \
    DUK_TVAL_DECREF((thr), &tv__tmp);  /* side effects */ \
} while (0)
#define DUK_TVAL_SET_U32_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
    duk_tval *tv__dst; duk_tval tv__tmp; tv__dst = (tvptr_dst); \
    DUK_TVAL_SET_TVAL(&tv__tmp, tv__dst); \
    DUK_TVAL_SET_U32(tv__dst, (newval)); \
    DUK_TVAL_DECREF((thr), &tv__tmp);  /* side effects */ \
} while (0)
#else
#define DUK_TVAL_SET_DOUBLE_CAST_UPDREF(thr,tvptr_dst,newval) \
    DUK_TVAL_SET_DOUBLE_UPDREF((thr), (tvptr_dst), (duk_double_t) (newval))
#endif /* DUK_USE_FASTINT */

#define DUK_TVAL_SET_LIGHTFUNC_UPDREF_ALT0(thr,tvptr_dst,lf_v,lf_fp,lf_flags) do { \
    duk_tval *tv__dst; duk_tval tv__tmp; tv__dst = (tvptr_dst); \
    DUK_TVAL_SET_TVAL(&tv__tmp, tv__dst); \
    DUK_TVAL_SET_LIGHTFUNC(tv__dst, (lf_v), (lf_fp), (lf_flags)); \
    DUK_TVAL_DECREF((thr), &tv__tmp);  /* side effects */ \
} while (0)

#define DUK_TVAL_SET_STRING_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
    duk_tval *tv__dst; duk_tval tv__tmp; tv__dst = (tvptr_dst); \
    DUK_TVAL_SET_TVAL(&tv__tmp, tv__dst); \
    DUK_TVAL_SET_STRING(tv__dst, (newval)); \
    DUK_HSTRING_INCREF((thr), (newval)); \
    DUK_TVAL_DECREF((thr), &tv__tmp);  /* side effects */ \
} while (0)

#define DUK_TVAL_SET_OBJECT_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
    duk_tval *tv__dst; duk_tval tv__tmp; tv__dst = (tvptr_dst); \
    DUK_TVAL_SET_TVAL(&tv__tmp, tv__dst); \
    DUK_TVAL_SET_OBJECT(tv__dst, (newval)); \
    DUK_HOBJECT_INCREF((thr), (newval)); \
    DUK_TVAL_DECREF((thr), &tv__tmp);  /* side effects */ \
} while (0)

#define DUK_TVAL_SET_BUFFER_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
    duk_tval *tv__dst; duk_tval tv__tmp; tv__dst = (tvptr_dst); \
    DUK_TVAL_SET_TVAL(&tv__tmp, tv__dst); \
    DUK_TVAL_SET_BUFFER(tv__dst, (newval)); \
    DUK_HBUFFER_INCREF((thr), (newval)); \
    DUK_TVAL_DECREF((thr), &tv__tmp);  /* side effects */ \
} while (0)

#define DUK_TVAL_SET_POINTER_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
    duk_tval *tv__dst; duk_tval tv__tmp; tv__dst = (tvptr_dst); \
    DUK_TVAL_SET_TVAL(&tv__tmp, tv__dst); \
    DUK_TVAL_SET_POINTER(tv__dst, (newval)); \
    DUK_TVAL_DECREF((thr), &tv__tmp);  /* side effects */ \
} while (0)

/* DUK_TVAL_SET_TVAL_UPDREF() is used a lot in executor, property lookups,
 * etc, so it's very important for performance.  Measure when changing.
 *
 * NOTE: the source and destination duk_tval pointers may be the same, and
 * the macros MUST deal with that correctly.
 */

/* Original idiom used, minimal code size. */
#define DUK_TVAL_SET_TVAL_UPDREF_ALT0(thr,tvptr_dst,tvptr_src) do { \
    duk_tval *tv__dst, *tv__src; duk_tval tv__tmp; \
    tv__dst = (tvptr_dst); tv__src = (tvptr_src); \
    DUK_TVAL_SET_TVAL(&tv__tmp, tv__dst); \
    DUK_TVAL_SET_TVAL(tv__dst, tv__src); \
    DUK_TVAL_INCREF((thr), tv__src); \
    DUK_TVAL_DECREF((thr), &tv__tmp);  /* side effects */ \
} while (0)

/* Faster alternative: avoid making a temporary copy of tvptr_dst and use
 * fast incref/decref macros.
 */
#define DUK_TVAL_SET_TVAL_UPDREF_ALT1(thr,tvptr_dst,tvptr_src) do { \
    duk_tval *tv__dst, *tv__src; duk_heaphdr *h__obj; \
    tv__dst = (tvptr_dst); tv__src = (tvptr_src); \
    DUK_TVAL_INCREF_FAST((thr), tv__src); \
    if (DUK_TVAL_NEEDS_REFCOUNT_UPDATE(tv__dst)) { \
        h__obj = DUK_TVAL_GET_HEAPHDR(tv__dst); \
        DUK_ASSERT(h__obj != NULL); \
        DUK_TVAL_SET_TVAL(tv__dst, tv__src); \
        DUK_HEAPHDR_DECREF_FAST((thr), h__obj);  /* side effects */ \
    } else { \
        DUK_TVAL_SET_TVAL(tv__dst, tv__src); \
    } \
} while (0)

/* XXX: no optimized variants yet */
#define DUK_TVAL_SET_UNDEFINED_UPDREF DUK_TVAL_SET_UNDEFINED_UPDREF_ALT0
#define DUK_TVAL_SET_UNDEFINED_UPDREF_NORZ DUK_TVAL_SET_UNDEFINED_UPDREF_NORZ_ALT0
#define DUK_TVAL_SET_UNUSED_UPDREF DUK_TVAL_SET_UNUSED_UPDREF_ALT0
#define DUK_TVAL_SET_NULL_UPDREF DUK_TVAL_SET_NULL_UPDREF_ALT0
#define DUK_TVAL_SET_BOOLEAN_UPDREF DUK_TVAL_SET_BOOLEAN_UPDREF_ALT0
#define DUK_TVAL_SET_NUMBER_UPDREF DUK_TVAL_SET_NUMBER_UPDREF_ALT0
#define DUK_TVAL_SET_NUMBER_CHKFAST_UPDREF DUK_TVAL_SET_NUMBER_CHKFAST_UPDREF_ALT0
#define DUK_TVAL_SET_DOUBLE_UPDREF DUK_TVAL_SET_DOUBLE_UPDREF_ALT0
#define DUK_TVAL_SET_NAN_UPDREF DUK_TVAL_SET_NAN_UPDREF_ALT0
#if defined(DUK_USE_FASTINT)
#define DUK_TVAL_SET_I48_UPDREF DUK_TVAL_SET_I48_UPDREF_ALT0
#define DUK_TVAL_SET_I32_UPDREF DUK_TVAL_SET_I32_UPDREF_ALT0
#define DUK_TVAL_SET_U32_UPDREF DUK_TVAL_SET_U32_UPDREF_ALT0
#else
#define DUK_TVAL_SET_I48_UPDREF DUK_TVAL_SET_DOUBLE_CAST_UPDREF  /* XXX: fast int-to-double */
#define DUK_TVAL_SET_I32_UPDREF DUK_TVAL_SET_DOUBLE_CAST_UPDREF
#define DUK_TVAL_SET_U32_UPDREF DUK_TVAL_SET_DOUBLE_CAST_UPDREF
#endif /* DUK_USE_FASTINT */
#define DUK_TVAL_SET_FASTINT_UPDREF DUK_TVAL_SET_I48_UPDREF  /* convenience */
#define DUK_TVAL_SET_LIGHTFUNC_UPDREF DUK_TVAL_SET_LIGHTFUNC_UPDREF_ALT0
#define DUK_TVAL_SET_STRING_UPDREF DUK_TVAL_SET_STRING_UPDREF_ALT0
#define DUK_TVAL_SET_OBJECT_UPDREF DUK_TVAL_SET_OBJECT_UPDREF_ALT0
#define DUK_TVAL_SET_BUFFER_UPDREF DUK_TVAL_SET_BUFFER_UPDREF_ALT0
#define DUK_TVAL_SET_POINTER_UPDREF DUK_TVAL_SET_POINTER_UPDREF_ALT0

#if defined(DUK_USE_FAST_REFCOUNT_DEFAULT)
/* Optimized for speed. */
#define DUK_TVAL_SET_TVAL_UPDREF DUK_TVAL_SET_TVAL_UPDREF_ALT1
#define DUK_TVAL_SET_TVAL_UPDREF_FAST DUK_TVAL_SET_TVAL_UPDREF_ALT1
#define DUK_TVAL_SET_TVAL_UPDREF_SLOW DUK_TVAL_SET_TVAL_UPDREF_ALT0
#else
/* Optimized for size. */
#define DUK_TVAL_SET_TVAL_UPDREF DUK_TVAL_SET_TVAL_UPDREF_ALT0
#define DUK_TVAL_SET_TVAL_UPDREF_FAST DUK_TVAL_SET_TVAL_UPDREF_ALT0
#define DUK_TVAL_SET_TVAL_UPDREF_SLOW DUK_TVAL_SET_TVAL_UPDREF_ALT0
#endif

#else /* DUK_USE_REFERENCE_COUNTING */

#define DUK_TVAL_NEEDS_REFCOUNT_UPDATE(tv) 0
#define DUK_HEAPHDR_NEEDS_REFCOUNT_UPDATE(h) 0

#define DUK_TVAL_INCREF_FAST(thr,v) do {} while (0)  /* nop */
#define DUK_TVAL_DECREF_FAST(thr,v) do {} while (0)  /* nop */
#define DUK_TVAL_DECREF_NORZ_FAST(thr,v) do {} while (0)  /* nop */
#define DUK_TVAL_INCREF_SLOW(thr,v) do {} while (0)  /* nop */
#define DUK_TVAL_DECREF_SLOW(thr,v) do {} while (0)  /* nop */
#define DUK_TVAL_DECREF_NORZ_SLOW(thr,v) do {} while (0)  /* nop */
#define DUK_TVAL_INCREF(thr,v) do {} while (0)  /* nop */
#define DUK_TVAL_DECREF(thr,v) do {} while (0)  /* nop */
#define DUK_TVAL_DECREF_NORZ(thr,v) do {} while (0)  /* nop */
#define DUK_HEAPHDR_INCREF_FAST(thr,h) do {} while (0)  /* nop */
#define DUK_HEAPHDR_DECREF_FAST(thr,h) do {} while (0)  /* nop */
#define DUK_HEAPHDR_DECREF_NORZ_FAST(thr,h) do {} while (0)  /* nop */
#define DUK_HEAPHDR_INCREF_SLOW(thr,h) do {} while (0)  /* nop */
#define DUK_HEAPHDR_DECREF_SLOW(thr,h) do {} while (0)  /* nop */
#define DUK_HEAPHDR_DECREF_NORZ_SLOW(thr,h) do {} while (0)  /* nop */
#define DUK_HEAPHDR_INCREF(thr,h) do {} while (0)  /* nop */
#define DUK_HEAPHDR_DECREF(thr,h) do {} while (0)  /* nop */
#define DUK_HEAPHDR_DECREF_NORZ(thr,h) do {} while (0)  /* nop */
#define DUK_HSTRING_INCREF_FAST(thr,h) do {} while (0)  /* nop */
#define DUK_HSTRING_DECREF_FAST(thr,h) do {} while (0)  /* nop */
#define DUK_HSTRING_DECREF_NORZ_FAST(thr,h) do {} while (0)  /* nop */
#define DUK_HSTRING_INCREF_SLOW(thr,h) do {} while (0)  /* nop */
#define DUK_HSTRING_DECREF_SLOW(thr,h) do {} while (0)  /* nop */
#define DUK_HSTRING_DECREF_NORZ_SLOW(thr,h) do {} while (0)  /* nop */
#define DUK_HSTRING_INCREF(thr,h) do {} while (0)  /* nop */
#define DUK_HSTRING_DECREF(thr,h) do {} while (0)  /* nop */
#define DUK_HSTRING_DECREF_NORZ(thr,h) do {} while (0)  /* nop */
#define DUK_HOBJECT_INCREF_FAST(thr,h) do {} while (0)  /* nop */
#define DUK_HOBJECT_DECREF_FAST(thr,h) do {} while (0)  /* nop */
#define DUK_HOBJECT_DECREF_NORZ_FAST(thr,h) do {} while (0)  /* nop */
#define DUK_HOBJECT_INCREF_SLOW(thr,h) do {} while (0)  /* nop */
#define DUK_HOBJECT_DECREF_SLOW(thr,h) do {} while (0)  /* nop */
#define DUK_HOBJECT_DECREF_NORZ_SLOW(thr,h) do {} while (0)  /* nop */
#define DUK_HOBJECT_INCREF(thr,h) do {} while (0)  /* nop */
#define DUK_HOBJECT_DECREF(thr,h) do {} while (0)  /* nop */
#define DUK_HOBJECT_DECREF_NORZ(thr,h) do {} while (0)  /* nop */
#define DUK_HBUFFER_INCREF_FAST(thr,h) do {} while (0)  /* nop */
#define DUK_HBUFFER_DECREF_FAST(thr,h) do {} while (0)  /* nop */
#define DUK_HBUFFER_DECREF_NORZ_FAST(thr,h) do {} while (0)  /* nop */
#define DUK_HBUFFER_INCREF_SLOW(thr,h) do {} while (0)  /* nop */
#define DUK_HBUFFER_DECREF_SLOW(thr,h) do {} while (0)  /* nop */
#define DUK_HBUFFER_DECREF_NORZ_SLOW(thr,h) do {} while (0)  /* nop */
#define DUK_HBUFFER_INCREF(thr,h) do {} while (0)  /* nop */
#define DUK_HBUFFER_DECREF(thr,h) do {} while (0)  /* nop */
#define DUK_HBUFFER_DECREF_NORZ(thr,h) do {} while (0)  /* nop */

#define DUK_HCOMPFUNC_INCREF(thr,h) do {} while (0)  /* nop */
#define DUK_HCOMPFUNC_DECREF(thr,h) do {} while (0)  /* nop */
#define DUK_HCOMPFUNC_DECREF_NORZ(thr,h) do {} while (0)  /* nop */
#define DUK_HNATFUNC_INCREF(thr,h) do {} while (0)  /* nop */
#define DUK_HNATFUNC_DECREF(thr,h) do {} while (0)  /* nop */
#define DUK_HNATFUNC_DECREF_NORZ(thr,h) do {} while (0)  /* nop */
#define DUK_HBUFOBJ_INCREF(thr,h) do {} while (0)  /* nop */
#define DUK_HBUFOBJ_DECREF(thr,h) do {} while (0)  /* nop */
#define DUK_HBUFOBJ_DECREF_NORZ(thr,h) do {} while (0)  /* nop */
#define DUK_HTHREAD_INCREF(thr,h) do {} while (0)  /* nop */
#define DUK_HTHREAD_DECREF(thr,h) do {} while (0)  /* nop */
#define DUK_HTHREAD_DECREF_NORZ(thr,h) do {} while (0)  /* nop */
#define DUK_HOBJECT_INCREF_ALLOWNULL(thr,h) do {} while (0)  /* nop */
#define DUK_HOBJECT_DECREF_ALLOWNULL(thr,h) do {} while (0)  /* nop */
#define DUK_HOBJECT_DECREF_NORZ_ALLOWNULL(thr,h) do {} while (0)  /* nop */
#define DUK_HBUFFER_INCREF_ALLOWNULL(thr,h) do {} while (0)  /* nop */
#define DUK_HBUFFER_DECREF_ALLOWNULL(thr,h) do {} while (0)  /* nop */
#define DUK_HBUFFER_DECREF_NORZ_ALLOWNULL(thr,h) do {} while (0)  /* nop */

#define DUK_REFZERO_CHECK_FAST(thr) do {} while (0)  /* nop */
#define DUK_REFZERO_CHECK_SLOW(thr) do {} while (0)  /* nop */

#define DUK_TVAL_SET_UNDEFINED_UPDREF_ALT0(thr,tvptr_dst) do { \
    duk_tval *tv__dst; tv__dst = (tvptr_dst); \
    DUK_TVAL_SET_UNDEFINED(tv__dst); \
    DUK_UNREF((thr)); \
} while (0)

#define DUK_TVAL_SET_UNUSED_UPDREF_ALT0(thr,tvptr_dst) do { \
    duk_tval *tv__dst; tv__dst = (tvptr_dst); \
    DUK_TVAL_SET_UNUSED(tv__dst); \
    DUK_UNREF((thr)); \
} while (0)

#define DUK_TVAL_SET_NULL_UPDREF_ALT0(thr,tvptr_dst) do { \
    duk_tval *tv__dst; tv__dst = (tvptr_dst); \
    DUK_TVAL_SET_NULL(tv__dst); \
    DUK_UNREF((thr)); \
} while (0)

#define DUK_TVAL_SET_BOOLEAN_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
    duk_tval *tv__dst; tv__dst = (tvptr_dst); \
    DUK_TVAL_SET_BOOLEAN(tv__dst, (newval)); \
    DUK_UNREF((thr)); \
} while (0)

#define DUK_TVAL_SET_NUMBER_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
    duk_tval *tv__dst; tv__dst = (tvptr_dst); \
    DUK_TVAL_SET_NUMBER(tv__dst, (newval)); \
    DUK_UNREF((thr)); \
} while (0)
#define DUK_TVAL_SET_NUMBER_CHKFAST_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
    duk_tval *tv__dst; tv__dst = (tvptr_dst); \
    DUK_TVAL_SET_NUMBER_CHKFAST_FAST(tv__dst, (newval)); \
    DUK_UNREF((thr)); \
} while (0)
#define DUK_TVAL_SET_DOUBLE_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
    duk_tval *tv__dst; tv__dst = (tvptr_dst); \
    DUK_TVAL_SET_DOUBLE(tv__dst, (newval)); \
    DUK_UNREF((thr)); \
} while (0)
#define DUK_TVAL_SET_NAN_UPDREF_ALT0(thr,tvptr_dst) do { \
    duk_tval *tv__dst; tv__dst = (tvptr_dst); \
    DUK_TVAL_SET_NAN(tv__dst); \
    DUK_UNREF((thr)); \
} while (0)
#if defined(DUK_USE_FASTINT)
#define DUK_TVAL_SET_I48_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
    duk_tval *tv__dst; tv__dst = (tvptr_dst); \
    DUK_TVAL_SET_I48(tv__dst, (newval)); \
    DUK_UNREF((thr)); \
} while (0)
#define DUK_TVAL_SET_I32_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
    duk_tval *tv__dst; tv__dst = (tvptr_dst); \
    DUK_TVAL_SET_I32(tv__dst, (newval)); \
    DUK_UNREF((thr)); \
} while (0)
#define DUK_TVAL_SET_U32_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
    duk_tval *tv__dst; tv__dst = (tvptr_dst); \
    DUK_TVAL_SET_U32(tv__dst, (newval)); \
    DUK_UNREF((thr)); \
} while (0)
#else
#define DUK_TVAL_SET_DOUBLE_CAST_UPDREF(thr,tvptr_dst,newval) \
    DUK_TVAL_SET_DOUBLE_UPDREF((thr), (tvptr_dst), (duk_double_t) (newval))
#endif /* DUK_USE_FASTINT */

#define DUK_TVAL_SET_LIGHTFUNC_UPDREF_ALT0(thr,tvptr_dst,lf_v,lf_fp,lf_flags) do { \
    duk_tval *tv__dst; tv__dst = (tvptr_dst); \
    DUK_TVAL_SET_LIGHTFUNC(tv__dst, (lf_v), (lf_fp), (lf_flags)); \
    DUK_UNREF((thr)); \
} while (0)

#define DUK_TVAL_SET_STRING_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
    duk_tval *tv__dst; tv__dst = (tvptr_dst); \
    DUK_TVAL_SET_STRING(tv__dst, (newval)); \
    DUK_UNREF((thr)); \
} while (0)

#define DUK_TVAL_SET_OBJECT_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
    duk_tval *tv__dst; tv__dst = (tvptr_dst); \
    DUK_TVAL_SET_OBJECT(tv__dst, (newval)); \
    DUK_UNREF((thr)); \
} while (0)

#define DUK_TVAL_SET_BUFFER_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
    duk_tval *tv__dst; tv__dst = (tvptr_dst); \
    DUK_TVAL_SET_BUFFER(tv__dst, (newval)); \
    DUK_UNREF((thr)); \
} while (0)

#define DUK_TVAL_SET_POINTER_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
    duk_tval *tv__dst; tv__dst = (tvptr_dst); \
    DUK_TVAL_SET_POINTER(tv__dst, (newval)); \
    DUK_UNREF((thr)); \
} while (0)

#define DUK_TVAL_SET_TVAL_UPDREF_ALT0(thr,tvptr_dst,tvptr_src) do { \
    duk_tval *tv__dst, *tv__src; \
    tv__dst = (tvptr_dst); tv__src = (tvptr_src); \
    DUK_TVAL_SET_TVAL(tv__dst, tv__src); \
    DUK_UNREF((thr)); \
} while (0)

#define DUK_TVAL_SET_UNDEFINED_UPDREF DUK_TVAL_SET_UNDEFINED_UPDREF_ALT0
#define DUK_TVAL_SET_UNDEFINED_UPDREF_NORZ DUK_TVAL_SET_UNDEFINED_UPDREF_ALT0
#define DUK_TVAL_SET_UNUSED_UPDREF DUK_TVAL_SET_UNUSED_UPDREF_ALT0
#define DUK_TVAL_SET_NULL_UPDREF DUK_TVAL_SET_NULL_UPDREF_ALT0
#define DUK_TVAL_SET_BOOLEAN_UPDREF DUK_TVAL_SET_BOOLEAN_UPDREF_ALT0
#define DUK_TVAL_SET_NUMBER_UPDREF DUK_TVAL_SET_NUMBER_UPDREF_ALT0
#define DUK_TVAL_SET_NUMBER_CHKFAST_UPDREF DUK_TVAL_SET_NUMBER_CHKFAST_UPDREF_ALT0
#define DUK_TVAL_SET_DOUBLE_UPDREF DUK_TVAL_SET_DOUBLE_UPDREF_ALT0
#define DUK_TVAL_SET_NAN_UPDREF DUK_TVAL_SET_NAN_UPDREF_ALT0
#if defined(DUK_USE_FASTINT)
#define DUK_TVAL_SET_I48_UPDREF DUK_TVAL_SET_I48_UPDREF_ALT0
#define DUK_TVAL_SET_I32_UPDREF DUK_TVAL_SET_I32_UPDREF_ALT0
#define DUK_TVAL_SET_U32_UPDREF DUK_TVAL_SET_U32_UPDREF_ALT0
#else
#define DUK_TVAL_SET_I48_UPDREF DUK_TVAL_SET_DOUBLE_CAST_UPDREF  /* XXX: fast int-to-double */
#define DUK_TVAL_SET_I32_UPDREF DUK_TVAL_SET_DOUBLE_CAST_UPDREF
#define DUK_TVAL_SET_U32_UPDREF DUK_TVAL_SET_DOUBLE_CAST_UPDREF
#endif /* DUK_USE_FASTINT */
#define DUK_TVAL_SET_FASTINT_UPDREF DUK_TVAL_SET_I48_UPDREF  /* convenience */
#define DUK_TVAL_SET_LIGHTFUNC_UPDREF DUK_TVAL_SET_LIGHTFUNC_UPDREF_ALT0
#define DUK_TVAL_SET_STRING_UPDREF DUK_TVAL_SET_STRING_UPDREF_ALT0
#define DUK_TVAL_SET_OBJECT_UPDREF DUK_TVAL_SET_OBJECT_UPDREF_ALT0
#define DUK_TVAL_SET_BUFFER_UPDREF DUK_TVAL_SET_BUFFER_UPDREF_ALT0
#define DUK_TVAL_SET_POINTER_UPDREF DUK_TVAL_SET_POINTER_UPDREF_ALT0

#define DUK_TVAL_SET_TVAL_UPDREF DUK_TVAL_SET_TVAL_UPDREF_ALT0
#define DUK_TVAL_SET_TVAL_UPDREF_FAST DUK_TVAL_SET_TVAL_UPDREF_ALT0
#define DUK_TVAL_SET_TVAL_UPDREF_SLOW DUK_TVAL_SET_TVAL_UPDREF_ALT0

#endif /* DUK_USE_REFERENCE_COUNTING */

#if defined(DUK_USE_REFERENCE_COUNTING)
#if defined(DUK_USE_FINALIZER_SUPPORT)
DUK_INTERNAL_DECL void duk_refzero_check_slow(duk_hthread *thr);
DUK_INTERNAL_DECL void duk_refzero_check_fast(duk_hthread *thr);
#endif
DUK_INTERNAL_DECL void duk_heaphdr_refcount_finalize_norz(duk_heap *heap, duk_heaphdr *hdr);
DUK_INTERNAL_DECL void duk_hobject_refcount_finalize_norz(duk_heap *heap, duk_hobject *h);
#if 0  /* Not needed: fast path handles inline; slow path uses duk_heaphdr_decref() which is needed anyway. */
DUK_INTERNAL_DECL void duk_hstring_decref(duk_hthread *thr, duk_hstring *h);
DUK_INTERNAL_DECL void duk_hstring_decref_norz(duk_hthread *thr, duk_hstring *h);
DUK_INTERNAL_DECL void duk_hbuffer_decref(duk_hthread *thr, duk_hbuffer *h);
DUK_INTERNAL_DECL void duk_hbuffer_decref_norz(duk_hthread *thr, duk_hbuffer *h);
DUK_INTERNAL_DECL void duk_hobject_decref(duk_hthread *thr, duk_hobject *h);
DUK_INTERNAL_DECL void duk_hobject_decref_norz(duk_hthread *thr, duk_hobject *h);
#endif
DUK_INTERNAL_DECL void duk_heaphdr_refzero(duk_hthread *thr, duk_heaphdr *h);
DUK_INTERNAL_DECL void duk_heaphdr_refzero_norz(duk_hthread *thr, duk_heaphdr *h);
#if defined(DUK_USE_FAST_REFCOUNT_DEFAULT)
DUK_INTERNAL_DECL void duk_hstring_refzero(duk_hthread *thr, duk_hstring *h);  /* no 'norz' variant */
DUK_INTERNAL_DECL void duk_hbuffer_refzero(duk_hthread *thr, duk_hbuffer *h);  /* no 'norz' variant */
DUK_INTERNAL_DECL void duk_hobject_refzero(duk_hthread *thr, duk_hobject *h);
DUK_INTERNAL_DECL void duk_hobject_refzero_norz(duk_hthread *thr, duk_hobject *h);
#else
DUK_INTERNAL_DECL void duk_tval_incref(duk_tval *tv);
DUK_INTERNAL_DECL void duk_tval_decref(duk_hthread *thr, duk_tval *tv);
DUK_INTERNAL_DECL void duk_tval_decref_norz(duk_hthread *thr, duk_tval *tv);
DUK_INTERNAL_DECL void duk_heaphdr_incref(duk_heaphdr *h);
DUK_INTERNAL_DECL void duk_heaphdr_decref(duk_hthread *thr, duk_heaphdr *h);
DUK_INTERNAL_DECL void duk_heaphdr_decref_norz(duk_hthread *thr, duk_heaphdr *h);
#endif
#else /* DUK_USE_REFERENCE_COUNTING */
/* no refcounting */
#endif /* DUK_USE_REFERENCE_COUNTING */

#endif /* DUK_REFCOUNT_H_INCLUDED */
@ -0,0 +1,22 @@
/*
 *  Pure JS repro for one issue in https://github.com/svaarala/duktape/issues/1311.
 */

/*===
Error: a pig ate it
done
===*/

function Foo() {
    throw new Error('a pig ate it');
}

Duktape.fin(Foo.prototype, function(o) {});

try {
    new Foo();
} catch (e) {
    print(e);
}

print('done');
@ -0,0 +1,108 @@
/*
 *  Bug test for a refzero finalizer limitation in Duktape 2.0 and prior:
 *  a finalizer creating new garbage could cause an endless finalization
 *  loop.  This has been fixed in Duktape 2.1.
 *
 *  When the finalizer runs and calls Object.getOwnPropertyNames(), an
 *  enumerator object referencing the object being finalized is created.
 *  When the refzero finalizer exits, the object will then have a refcount
 *  > 0, but will actually be unreachable because it's only reachable via
 *  the enumerator which gets collected right after the finalizer call
 *  returns.
 *
 *  So, when the finalizer exits, the enumerator object's refcount falls
 *  to zero and it is queued to the refzero_list for processing.  But
 *  it's not yet processed when the decision to rescue/free the finalized
 *  object is made, so the object is rescued and queued back to the heap.
 *
 *  When the enumerator object is refzero processed, it gets freed and
 *  refcount finalized, which causes the finalized object's refcount to
 *  fall to zero, too -- and the object is queued to refzero_list again.
 *
 *  This loop then continues forever.  The same happens with any object
 *  which references the object being finalized without being in a reference
 *  loop.
 *
 *  Duktape 2.1 fixes the issue by running finalizers outside of refzero_list
 *  processing, so that the newly created enumerator (or other "wrapper"
 *  object) gets DECREF processed immediately, and the keep/rescue decision
 *  is then made with up-to-date refcounts.
 */

/*===
no finalizer
caught Error: thrown by constructor
add finalizer
finalizer called for object
caught Error: thrown by constructor
done
finalizer called for prototype
===*/

var thisPointer;
var sanityCount = 0;

function finalizer(o) {
    if (o === Foo.prototype) {
        print('finalizer called for prototype');
        return;
    }

    print('finalizer called for object');
    if (++sanityCount > 1000) {
        // Break the finalizer loop for testing.  Without this, the loop
        // would go on forever.
        print('sanity limit reached');
        return;
    }

    // Create a temporary object referencing the object being finalized.
    // When temp is assigned null, the temporary object gets refzero queued
    // to refzero_list, but won't be actually processed because we're already
    // processing the finalizer for the current object inside refzero_list
    // handling.
    var temp = { name: 'temp', ref: o };
    //temp.foo = { bar: temp };  // ref loop
    temp = null;

    // If 'temp' was in a reference loop, it would only be collectable via
    // mark-and-sweep, and the *second* finalization round would then be
    // mark-and-sweep driven, avoiding the infinite loop.

    // This would cause the same issue.
    // void Object.getOwnPropertyNames(o);
}

function Foo() {
    thisPointer = String(Duktape.Pointer(this));

    // If the object is placed into a reference loop, the finalization will
    // be handled via mark-and-sweep which works fine.

    /*
    this.foo = {};
    this.foo.bar = this;
    */

    this.name = 'Foo instance';

    throw new Error('thrown by constructor');
}

print('no finalizer');
try {
    new Foo();
} catch (e) {
    print('caught', e);
}

print('add finalizer');
Duktape.fin(Foo.prototype, finalizer);
Duktape.gc();

try {
    new Foo();
} catch (e) {
    print('caught', e);
}

print('done');
@ -0,0 +1,63 @@
/*
 *  Special case in finalization:
 *
 *  - Object in reference cycle is queued for finalization by mark-and-sweep.
 *  - Finalizer is executed, FINALIZED is set, object is queued back to
 *    heap_allocated.
 *  - While waiting for the next mark-and-sweep round to make a rescue/free
 *    decision, the object's refcount drops to zero.
 *
 *  Current handling (Duktape 2.1) is to detect the situation in REFZERO
 *  handling and free the object without going through finalize_list and
 *  mark-and-sweep unnecessarily.  Prior to 2.1, the object would be queued
 *  back for a new mark-and-sweep round.
 */

/*===
gc 1
gc 2, finalizer
finalizer called
call func()
func called
set func to null
gc 3, nop
done
===*/

function test() {
    var obj = {};
    obj.ref = {};
    obj.ref.ref = obj;  // cycle

    var func;

    Duktape.fin(obj, function (v) {
        print('finalizer called');
        // When the finalizer finishes, the object is reachable via 'func'.
        // When func() is called and then set to null, it gets a REFZERO.
        func = function () {
            print('func called');
            v.ref = null;  // break cycle
        };
        func.prototype = null;  // break cycle
    });

    print('gc 1');
    Duktape.gc();
    obj = null;
    print('gc 2, finalizer');
    Duktape.gc();  // finalizer execution
    print('call func()');
    func();
    print('set func to null');
    func = null;  // DECREF
    print('gc 3, nop');
    Duktape.gc();  // should no longer see object
    print('done');
}

try {
    test();
} catch (e) {
    print(e.stack || e);
}
@ -0,0 +1,39 @@
/*
 *  If an object is mark-and-sweep finalized and the finalizer breaks the
 *  reference cycle where the object participates, and the object has a zero
 *  refcount after the finalizer returns, the object gets freed immediately.
 */

/*===
gc 1
gc 2, finalizer execution
finalizer called
gc 3, nop
done
===*/

function test() {
    var obj = {};
    obj.ref = {};
    obj.ref.ref = obj;  // cycle

    Duktape.fin(obj, function (v) {
        print('finalizer called');
        v.ref = null;
    });

    print('gc 1');
    Duktape.gc();
    obj = null;
    print('gc 2, finalizer execution');
    Duktape.gc();
    print('gc 3, nop');
    Duktape.gc();
    print('done');
}

try {
    test();
} catch (e) {
    print(e.stack || e);
}
@ -0,0 +1,33 @@
/*
 *  Create a very long refzero cascade.  Ensures that such cascades are
 *  handled without a deep C stack.
 */

/*===
build object
start cascade
end cascade
done
===*/

function test() {
    var obj;
    var i;

    print('build object');
    obj = {};
    for (i = 0; i < 1e7; i++) {
        obj = { ref: obj };
    }

    // On return 'obj' becomes unreachable and is freed.
    print('start cascade');
}

try {
    test();
    print('end cascade');
} catch (e) {
    print(e.stack || e);
}
print('done');
@ -0,0 +1,23 @@
#!/bin/bash
#
#  Some error injection tests.
#

set -x

for i in 1 2 3 4; do
    echo "Heap alloc inject: $i"
    rm -f duk; rm -rf ./prep
    python2 tools/configure.py --output-directory ./prep --source-directory ./src-input --config-metadata config --line-directives -DDUK_USE_INJECT_HEAP_ALLOC_ERROR=$i -DDUK_USE_ASSERTIONS
    gcc -o duk -std=c99 -Wall -g -ggdb -DDUK_CMDLINE_PRINTALERT_SUPPORT -I./prep -Iextras/print-alert prep/duktape.c examples/cmdline/duk_cmdline.c extras/print-alert/duk_print_alert.c -lm
    valgrind --leak-check=full --error-exitcode=123 ./duk
    EXITCODE=$?
    echo "Exit code is $EXITCODE"
    if [ $EXITCODE -eq 1 ]; then echo "OK: 'duk' returned error, but no fatal error";
    elif [ $EXITCODE -eq 134 ]; then echo "FAILED: ABORTED, fatal error, should not happen"; exit 1;
    elif [ $EXITCODE -eq 123 ]; then echo "FAILED: valgrind reported error, probably a leak, should not happen"; exit 1;
    else echo "Unexpected exit code $EXITCODE, should not happen"; exit 1;
    fi
done

# FIXME: object resize