
Merge pull request #1427 from svaarala/rework-side-effect-handling

Rework refzero, finalizer, and side effect handling
pull/1452/head
Sami Vaarala 8 years ago
committed by GitHub
commit 074321639f
66 changed files (number of changed lines in parentheses):

  1. .travis.yml (2)
  2. Makefile (5)
  3. RELEASES.rst (43)
  4. config/config-options/DUK_USE_FINALIZER_TORTURE.yaml (11)
  5. config/config-options/DUK_USE_INJECT_HEAP_ALLOC_ERROR.yaml (7)
  6. config/config-options/DUK_USE_MARKANDSWEEP_FINALIZER_TORTURE.yaml (1)
  7. config/config-options/DUK_USE_REFZERO_FINALIZER_TORTURE.yaml (1)
  8. doc/bytecode.rst (2)
  9. doc/compiler.rst (10)
  10. doc/hobject-design.rst (9)
  11. doc/lightweight-functions.rst (4)
  12. doc/memory-management.rst (301)
  13. doc/release-checklist.rst (4)
  14. doc/release-notes-v2-1.rst (48)
  15. doc/side-effects.rst (533)
  16. examples/cmdline/duk_cmdline.c (2)
  17. src-input/duk_api_debug.c (34)
  18. src-input/duk_api_heap.c (27)
  19. src-input/duk_api_object.c (5)
  20. src-input/duk_api_stack.c (115)
  21. src-input/duk_bi_duktape.c (7)
  22. src-input/duk_bi_json.c (15)
  23. src-input/duk_debugger.c (81)
  24. src-input/duk_debugger.h (8)
  25. src-input/duk_error.h (5)
  26. src-input/duk_error_longjmp.c (31)
  27. src-input/duk_error_misc.c (118)
  28. src-input/duk_error_throw.c (52)
  29. src-input/duk_heap.h (243)
  30. src-input/duk_heap_alloc.c (205)
  31. src-input/duk_heap_finalize.c (444)
  32. src-input/duk_heap_markandsweep.c (524)
  33. src-input/duk_heap_memory.c (81)
  34. src-input/duk_heap_misc.c (117)
  35. src-input/duk_heap_refcount.c (570)
  36. src-input/duk_heap_stringtable.c (21)
  37. src-input/duk_heaphdr.h (679)
  38. src-input/duk_hobject.h (21)
  39. src-input/duk_hobject_alloc.c (1)
  40. src-input/duk_hobject_finalizer.c (113)
  41. src-input/duk_hobject_props.c (254)
  42. src-input/duk_hthread.h (2)
  43. src-input/duk_hthread_misc.c (1)
  44. src-input/duk_hthread_stacks.c (27)
  45. src-input/duk_internal.h (1)
  46. src-input/duk_js_call.c (108)
  47. src-input/duk_js_compiler.c (2)
  48. src-input/duk_js_executor.c (79)
  49. src-input/duk_refcount.h (698)
  50. tests/ecmascript/test-bug-finalizer-repro-gh1311.js (22)
  51. tests/ecmascript/test-bug-proxy-finalizer-double-call.js (2)
  52. tests/ecmascript/test-bug-refcount-finalizer-garbage-loop.js (108)
  53. tests/ecmascript/test-dev-finalizer-markandsweep-refzero.js (63)
  54. tests/ecmascript/test-dev-finalizer-markandsweep-zero-refcount.js (39)
  55. tests/ecmascript/test-dev-finalizer-skip.js (7)
  56. tests/ecmascript/test-dev-lightfunc-finalizer.js (2)
  57. tests/ecmascript/test-dev-markandsweep-finalizer-3.js (10)
  58. tests/ecmascript/test-dev-refzero-long-cascade.js (33)
  59. tests/ecmascript/test-dev-yield-from-getset.js (2)
  60. tools/configure.py (3)
  61. util/dist.py (3)
  62. util/error_inject_test.sh (23)
  63. util/makeduk_base.yaml (13)
  64. website/api/duk_set_finalizer.yaml (4)
  65. website/guide/finalization.html (4)
  66. website/guide/limitations.html (2)

.travis.yml (2)

@ -4,6 +4,7 @@ before_install:
- sudo apt-get update -qq
- sudo apt-get install python-yaml
- sudo apt-get install tidy
- sudo apt-get install valgrind
#compiler:
# - clang
# - gcc
@ -14,3 +15,4 @@ script:
- make tidy-site
- make apitest
- make qecmatest
- make injectiontest

Makefile (5)

@ -544,6 +544,11 @@ checkalign:
.PHONY: test
test: qecmatest apitest regfuzztest underscoretest lodashtest emscriptentest test262test
# Error injection tests.
.PHONY: injectiontest
injectiontest:
bash util/error_inject_test.sh
# Runtests-based Ecmascript and API tests.
.PHONY: runtestsdeps
runtestsdeps: runtests/node_modules UglifyJS2

RELEASES.rst (43)

@ -2580,6 +2580,37 @@ Planned
necessary because all free operations decrement the voluntary GC counter and
all allocs/reallocs check for voluntary GC (GH-1355)
* Remove voluntary GC trigger counter decrement from memory free calls; the
decrement is unnecessary because alloc and free calls are ultimately in a
rough balance and it suffices to update the counter in allocation only
(GH-1427)
* Rework zero refcount (refzero) handling: memory frees triggered by a cascade
of zero refcounts are now never postponed for objects that don't have a
finalizer (and freeing the cascade has no side effects other than freeing
blocks of memory) (GH-1427)
* Rework finalizer handling: always use the heap thread (heap->heap_thread)
for finalizer calls, regardless of whether finalization is refcount or
mark-and-sweep triggered; previously the current thread would be used for
refcount finalization and current thread or heap thread (if no current
thread exists) for mark-and-sweep finalization (GH-1427)
* Rework finalizer handling: if a mark-and-sweep triggered finalizer removes
the object from a reference cycle so that its refcount is zero after
finalizer execution, the object gets freed immediately rather than waiting
for mark-and-sweep to confirm its status (GH-1427)
* Rework finalizer handling: finalizer execution is now outside of refzero
processing and mark-and-sweep; however, mark-and-sweep is still disabled
while finalizers are being executed to avoid incorrect rescue decisions
caused by a partially processed finalize_list (GH-1427)
* Improve side effect protections: prevent finalizer execution between an
error throw point and its catch point; add asserts for catching any cases
where an error would be thrown when handling a previously thrown error
(GH-1427)
* Use a 32-bit refcount field by default (even on 64-bit systems) which saves
8 bytes for each heap object and can only wrap if the Duktape heap is
larger than 64GB; disable DUK_USE_REFCOUNT32 to use size_t for refcounts
@ -2596,7 +2627,8 @@ Planned
* Avoid a harmless GC refcount assert when abandoning an object's array part
(GH-1408)
* More assertion coverage for GC (GH-1411)
* More assertion and torture test coverage for GC, finalizers, and error
handling (GH-1411, GH-1427)
* Avoid relying on the value stack when handling a double error (error which
happened during handling of a previous error); this is cleaner but relying
@ -2606,6 +2638,15 @@ Planned
allowed typos like "-DFOO bar" to be accepted silently (here as "-DFOO" and
an ignored plain "bar" argument) (GH-1425)
* Fix unintuitive refcount triggered finalizer behavior where a finalizer loop
would happen if the finalizer created a (garbage) object referencing the
object being finalized (GH-1396, GH-1427)
* Fix out-of-memory handling for object property table resize, previously
an out-of-memory during property table resize could leave internal state
in a state which prevented mark-and-sweep from fully working afterwards
(GH-1427)
* Fix a garbage collection bug where a finalizer triggered by mark-and-sweep
could cause a recursive entry into mark-and-sweep (leading to memory unsafe
behavior) if the voluntary GC trigger counter dropped to zero during

config/config-options/DUK_USE_FINALIZER_TORTURE.yaml (11)

@ -0,0 +1,11 @@
define: DUK_USE_FINALIZER_TORTURE
introduced: 2.1.0
default: false
tags:
- gc
- memory
- development
- torture
description: >
Development time option: simulate a fake finalizer call every time
finalizers might be executed (even if the actual finalize_list is empty).

config/config-options/DUK_USE_INJECT_HEAP_ALLOC_ERROR.yaml (7)

@ -0,0 +1,7 @@
define: DUK_USE_INJECT_HEAP_ALLOC_ERROR
introduced: 2.1.0
default: false
tags:
- development
description: >
Force a heap allocation to fail; the value indicates the desired error position.

config/config-options/DUK_USE_MARKANDSWEEP_FINALIZER_TORTURE.yaml (1)

@ -1,5 +1,6 @@
define: DUK_USE_MARKANDSWEEP_FINALIZER_TORTURE
introduced: 1.3.0
removed: 2.1.0
related:
- DUK_USE_GC_TORTURE
- DUK_USE_REFZERO_FINALIZER_TORTURE

config/config-options/DUK_USE_REFZERO_FINALIZER_TORTURE.yaml (1)

@ -1,5 +1,6 @@
define: DUK_USE_REFZERO_FINALIZER_TORTURE
introduced: 1.3.0
removed: 2.1.0
related:
- DUK_USE_MARKANDSWEEP_FINALIZER_TORTURE
default: false

doc/bytecode.rst (2)

@ -284,7 +284,7 @@ Finalizer on the function is lost
---------------------------------
A finalizer on the function being serialized is lost, no finalizer will
exist on bytecode load.
exist after a bytecode load.
Only specific function object properties are kept
-------------------------------------------------

doc/compiler.rst (10)

@ -222,9 +222,13 @@ resources must thus be visible to the garbage collector and correctly reference
counted at nearly all times. The current approach to deal with this is to use
the current thread's value stack to stash token values, intermediate values,
identifier names, etc. Slots are allocated from the value stack as necessary.
This is a bit complicated but the alternatives are not simple either. (One
alternative would be to make the compiler state a traversable object type for
the garbage collector.)
This is a bit complicated; some alternatives:
* Finalization (and possibly mark-and-sweep) could be prevented during
compilation.
* Make the compiler state a traversable object type visible to garbage
collection.
Ivalue example
--------------

doc/hobject-design.rst (9)

@ -1163,11 +1163,10 @@ Some complications:
"emergency GC" compaction. This needs to be prevented for the current
object (or in general, for any object being concurrently resized).
The current solution is to use the ``heap->mark_and_sweep_base_flags``
mechanism to prevent finalizers from running (= prevents attempts to add,
remove, or modify properties in the middle of a resize) and to prevent
object compaction (so that a certain object won't be resized when it is
already being resized).
The current solution is to prevent finalizers from running (which prevents
attempts to add, remove, or modify properties in the middle of a resize)
and to prevent object compaction (so that a certain object won't be resized
when it is already being resized).
Enumeration
===========

doc/lightweight-functions.rst (4)

@ -122,7 +122,9 @@ in garbage collection like actual objects.
Hypothetically, even if lightfuncs were garbage collected somehow, they
don't have space for a virtual ``_Finalizer`` property. It would be
possible to set a finalizer on ``Function.prototype`` though and let that
finalize the lightfuncs.
finalize the lightfuncs. But because lightfuncs are not really objects,
it's not clear when the finalizer should be called (e.g. every time a
lightfunc ``duk_tval`` is DECREF'd?).
Implementation notes
====================

doc/memory-management.rst (301)

@ -79,8 +79,8 @@ Heap element
Only ``duk_hobject`` contains further internal references to other heap
elements. These references are kept in the object property table and the
object internal prototype pointer. Currently only ``duk_hobject`` may
have a finalizer.
object internal prototype pointer. Currently only ``duk_hobject`` or its
subtypes may have a finalizer.
Heap elements have a **stable pointer** which means that the (main) heap
element is not relocated during its lifetime. Auxiliary allocations
@ -351,7 +351,7 @@ ownership relationships::
* "heap allocated"
* "refzero work list"
* "mark-and-sweep finalization work list"
* "finalization work list"
+-------------+ h_next +-------------+ h_next
.------>| duk_hobject |<-------->| duk_hbuffer |<--------> ...
@ -828,44 +828,32 @@ The ``DECREF`` algorithm is a bit more complicated:
analysis and have correct reference counts at the end of the
mark-and-sweep algorithm.)
b. If the target is a string:
1. Remove the string from the string table.
2. Remove any references to the string from the "string access cache"
(which accelerates character index to byte index conversions).
Note that this is a special, internal "weak" reference.
3. Free the string. There are no auxiliary allocations to free
for strings.
4. Return.
b. If the target is a string, remove the string from the string table,
remove any weak references (e.g. from string access cache), and
then free the string structure.
c. If the target is a buffer:
1. Remove the buffer from the "heap allocated" list.
2. If the buffer is dynamic, free the auxiliary buffer (which is
allocated separately).
3. Free the buffer.
4. Return.
1. Remove the buffer from the "heap allocated" list, free any related
allocations (if the buffer is dynamic, the separately allocated
buffer), and then free the buffer structure.
d. Else the target is an object:
1. Move the object from the "heap allocated" list to the "refzero" work
list. Note that this prevents the mark-and-sweep algorithm from
freeing the object (the "sweep" phase does not affect objects in the
"refzero" work list).
1. This case is relatively complicated, see code for details:
2. If the "refzero" algorithm is already running, return.
* If the object doesn't have a finalizer, queue it to "refzero list".
If no-one is processing refzero_list now, process it until it
becomes empty; new objects may be queued as previous ones are
refcount finalized and freed. When the list is empty, run any
pending finalizers queued up during the process. If a previous
call is already processing the list, just queue the object and
finish.
3. Else, call the "refzero" algorithm to free pending objects.
The refzero algorithm returns when the entire work list has
been successfully cleared.
4. Return.
* If the object has a finalizer, queue it to finalize_list. If
no-one is processing the refzero_list or finalize_list, process
the finalize_list directly. Otherwise just queue the object and
finish.
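A minimal sketch of this dispatch, with hypothetical helper names (the real
logic lives in ``duk_refcount.h``/``duk_heap_refcount.c`` and differs in
detail)::

    /* Sketch: object 'h' just reached a zero refcount; this path is not
     * taken while mark-and-sweep is running.
     */
    if (duk__obj_has_finalizer(heap, h)) {        /* hypothetical helper */
        duk__queue_to_finalize_list(heap, h);     /* side effect free */
        duk__maybe_process_finalizers(heap);      /* no-op if already running */
    } else {
        duk_bool_t was_empty = (heap->refzero_list == NULL);
        duk__queue_to_refzero_list(heap, h);
        if (was_empty) {
            /* Only the outermost DECREF churns the list; nested DECREFs
             * just queue objects, which bounds C recursion depth.
             */
            duk__refcount_free_pending(heap);
        }
    }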
The REFZERO algorithm
---------------------
@ -875,109 +863,7 @@ algorithm may run at any given time. The "refzero" work list model is used
to avoid an unbounded C call stack depth caused by a cascade of reference
counts which drop to zero.
The algorithm is as follows:
1. While the "refzero" work list is not empty:
a. Let ``O`` be the element at the head of the work list.
Note:
* ``O`` is always an object, because only objects are placed in the work list.
* ``O`` must not be removed from the work list yet. ``O`` must be on the
work list in case a finalizer is executed, so that a mark-and-sweep
triggered by the finalizer works correctly (concretely: to be able to
clear the ``DUK_HEAPHDR_FLAG_REACHABLE`` of the object.)
b. If ``O`` is an object (this is always the case, currently), and has a
finalizer (i.e. has a ``_Finalizer`` internal property):
1. Create a ``setjmp()`` catchpoint.
2. Increase the reference count of ``O`` temporarily by one (back to 1).
3. Note: the presence of ``O`` in the "refzero" work list is enough to
guarantee that the mark-and-sweep algorithm will not free the object
despite it not being reachable.
4. Call the finalizer method. Ignore the return value and a possible
error thrown by the finalizer (except for debug logging an error).
Any error or other ``longjmp()`` is caught by the ``setjmp()``
catchpoint. Note:
* The thread used for finalization is currently the thread which
executed ``DECREF``. *This is liable to be changed later.*
5. Regardless of how the finalizer finishes, decrease the reference
count of ``O`` by one.
6. If the reference count of ``O`` is non-zero, the object has been
"rescued" and:
a. Place the object back into the "heap allocated" list (and debug
log the object as "rescued").
b. Continue the while-loop with the next object.
c. Remove ``O`` from the work list.
d. Call ``DECREF`` for any references that ``O`` contains (this is
called "refcount finalization" in the source). Concretely:
* String: no internal references.
* Buffer: no internal references.
* Object: properties contain references; specific sub-types (like
``duk_hthread``) contain further references.
* Note: this step is recursive with respect to ``DECREF`` but not
the "refzero" algorithm: a ``DECREF`` is executed inside a
``DECREF`` which started the "refzero" algorithm, but the inner
``DECREF`` doesn't restart the "refzero" algorithm. Recursion is
thus limited to two levels.
e. Free any auxiliary references (such as object properties) contained
in ``O``, and finally ``O`` itself.
Notes:
* "Churning" the work list requires that the type of a heap element can be
determined by looking at the heap header.
+ This is one of the rare places where this would be necessary: usually the
tagged type of a ``duk_tval`` is sufficient to type an arbitrary value,
and when following pointer references from one heap element to another,
the pointers themselves are typed.
+ Right now, this type determination is not actually needed because only
object (``duk_hobject``) values will be placed in the work list.
* The finalizer thread selection is not a trivial issue, especially for
mark-and-sweep. See discussion under mark-and-sweep.
* Because the reference count is artificially increased by one during finalization,
the object being finalized cannot encounter a "refcount drops to zero"
situation while being finalized (assuming of course that all ``INCREF`` and
``DECREF`` calls are properly "nested").
* If mark-and-sweep is triggered during finalization, the target may or
may not be reachable, but will have a non-zero reference count in
either case due to the artificial ``INCREF`` in the finalization
algorithm. The reference count is inconsistent with the actual reference
count in the reachability graph but this is not an issue for mark-and-sweep.
In any case, mark-and-sweep will not free any object in the "refzero" work
list, regardless of its reachability status, so mark-and-sweep during
REFZERO is not a problem.
* Although finalization increases C call stack size, another finalization
triggered by reference counting cannot occur while finalization for one
object is in progress: any objects whose refcounts drop to zero during
finalization are simply placed in the refzero work list and dealt with
when the object being finalized has been fully processed. However,
there can still be **two** active finalizers at the same time, one initiated
by reference counting and another by a mark-and-sweep triggered inside
REFZERO.
See code for details, also see ``doc/side-effects.rst``.
Background on the refzero algorithm, limiting C recursion depth
---------------------------------------------------------------
@ -1012,10 +898,7 @@ which is called just before the object is freed either by reference counting
or by the mark-and-sweep collector. The finalizer gets a reference to the
object in question, and may "rescue" the reference.
Mark-and-sweep may be triggered during the "refzero" algorithm, currently
only by finalization. If mark-and-sweep is triggered, it must not touch any
object in the "refzero" work list (i.e. any object whose reference count is
zero, but which has not yet been processed).
There are many side effects to consider, see ``doc/side-effects.rst``.
Mark-and-sweep
==============
@ -1066,20 +949,20 @@ Mark-and-sweep control flags are defined in ``duk_heap.h``, e.g.:
* ``DUK_MS_FLAG_NO_OBJECT_COMPACTION``
In addition to the explicitly requested flags, the bit mask in
``mark_and_sweep_base_flags`` in ``duk_heap`` is bitwise ORed into the
requested flags to form effective flags. The flags added to the "base
flags" control restrictions on mark-and-sweep side effects, and are used
for certain critical sections.
``ms_base_flags`` in ``duk_heap`` is bitwise ORed into the requested flags
to form effective flags. The flags added to the "base flags" control
restrictions on mark-and-sweep side effects, and are used for certain
critical sections.
To protect against such side effects, the critical algorithms:
* Store the original value of ``heap->mark_and_sweep_base_flags``
* Store the original value of ``heap->ms_base_flags``
* Set the suitable restriction flags into ``heap->mark_and_sweep_base_flags``
* Set the suitable restriction flags into ``heap->ms_base_flags``
* Attempt the allocation / reallocation operation, *without throwing errors*
* Restore the ``heap->mark_and_sweep_base_flags`` to its previous value
* Restore the ``heap->ms_base_flags`` to its previous value
* Examine the allocation result and act accordingly
@ -1125,9 +1008,9 @@ either in "normal" mode or "emergency" mode. Emergency mode is used if
a normal mark-and-sweep pass did not resolve the allocation failure; the
emergency mode is a more aggressive attempt to free memory. Mark-and-sweep
is controlled by a set of flags. The effective flags set is a bitwise OR
of explicit flags and "base flags" stored in ``heap->mark_and_sweep_base_flags``.
of explicit flags and "base flags" stored in ``heap->ms_base_flags``.
The "base flags" essentially prohibit specific garbage collection operations
(like finalizers) when a certain critical code section is active.
when a certain critical code section is active.
The mark-and-sweep algorithm is as follows:
@ -1254,30 +1137,7 @@ The mark-and-sweep algorithm is as follows:
a. Compact and rehash the string table. This can be controlled by build
flags as it may not be appropriate in all environments.
8. Run finalizers:
a. While the "to be finalized" work queue is not empty:
1. Select object from head of the list.
2. Set up a ``setjmp()`` catchpoint.
3. Execute finalizer. Note:
* The thread used for this is the currently running thread
(``heap->curr_thread``), or if no thread is running,
``heap->heap_thread``. This is liable to change in the future.
4. Ignore finalizer result (except for logging errors).
5. Mark the object ``FINALIZED``.
6. Move the object back to the "heap allocated" list. The object will
be collected on the next pass if it is still unreachable. (Regardless
of actual reachability, the ``REACHABLE`` flag of the object is clear
at this point.)
9. Finish.
8. Finish.
a. All ``TEMPROOT`` and ``REACHABLE`` flags are clear at this point.
@ -1289,6 +1149,10 @@ The mark-and-sweep algorithm is as follows:
d. No object in the "refzero" work list has been freed.
9. Execute pending finalizers unless finalizer execution is prevented or an
earlier call site is already finalizing objects (currently mark-and-sweep
is not allowed during finalization, but that may change).
Notes:
* Elements on the refzero list are considered reachability roots, as we need
@ -1308,22 +1172,18 @@ Notes:
+ Another mark-and-sweep cannot execute.
+ A ``DECREF`` resulting in a zero reference count is not processed at all.
The object is not placed into the "refzero" work list, as mark-and-sweep
is assumed to be a comprehensive pass, including running finalizers.
+ A ``DECREF`` resulting in a zero reference count is not processed at all
(other than updating the refcount). The object is not placed into the
"refzero" work list, as mark-and-sweep is assumed to be a comprehensive
pass, including running finalizers.
* Finalizers are executed after the sweep phase to ensure that finalizers
have as much available memory as possible. Since mark-and-sweep is
running, if a finalizer runs out of memory, no memory can be reclaimed
as recursive mark-and-sweep is explicitly blocked. This is probably a
very minor issue in practice.
* Finalizers could be executed from their work list after the mark-and-sweep
has finished to allow mark-and-sweep to run if mark-and-sweep is required
by a finalizer. The mark-and-sweep could then append more objects to be
finalized into the "to be finalized" work list; this is not a problem.
However, since finalizers are used with a rather limited scope, this is not
currently done.
have as much available memory as possible. While finalizers execute outside
the mark-and-sweep algorithm (since Duktape 2.1), mark-and-sweep is
explicitly prevented during finalization because it may cause incorrect
rescue/free decisions when the finalize_list is only partially processed.
As a result, no memory can be reclaimed while the finalize_list is being
processed. This is probably a very minor issue in practice.
* The sweep phase is divided into two separate scans: one to adjust refcounts
and one to actually free the objects. If these were performed in a single
@ -1341,14 +1201,9 @@ Notes:
``REACHABLE`` flag set at the end of the algorithm. At first it might seem
that this can never happen if reference counts are correct: all objects in
the "refzero" work list are unreachable by definition. However, this is not
the case for objects with finalizers.
+ A finalizer call made by the "refzero" algorithm makes the object reachable
again (through the finalizer thread value stack; the finalizer method itself
can also create reachable references for the target). If a mark-and-sweep
is triggered during finalization, the target will be marked ``REACHABLE``
during the mark phase. Thus, ``REACHABLE`` flags of "refzero" work list
elements must be cleared explicitly after or during the sweep phase.
the case for objects with finalizers. (As of Duktape 2.1 refzero_list is
freed inline without side effects, so it's always NULL when mark-and-sweep
runs.)
Note that there is a small "hole" in the reclamation right now, when
mark-and-sweep finalizers are used:
@ -1382,35 +1237,9 @@ phase:
mark-and-sweep is expected to deal with the object directly.
If the "refzero" algorithm is triggered first (with some objects in the
"refzero" work list), mark-and-sweep may be triggered while the "refzero"
algorithm is running. In more detail:
* A ``DECREF`` happens while neither mark-and-sweep nor "refzero" algorithm
is running.
* A reference count reaches zero, and the object is placed on the "refzero"
work list and the "refzero" algorithm is invoked.
* The "refzero" algorithm cannot trigger another "refzero" algorithm to
execute recursively. Instead, the work list is churned until it becomes
empty. Any objects whose reference count reaches zero are added to the
work list, though, so will be processed eventually.
* The "refzero" algorithm may trigger a mark-and-sweep while it is running,
e.g. by running a finalizer which runs out of memory:
+ This mark-and-sweep will mark any elements in the "refzero" work list
but will not free them.
+ While the mark-and-sweep is running, no new elements are placed into
the "refzero" work list, even if their reference count reaches zero.
Instead, the mark-and-sweep algorithm is assumed to deal with them.
+ The mark-and-sweep algorithm may also execute finalizers, so two
finalizers (but no more) can be running simultaneously, though on
different objects.
+ Another recursive mark-and-sweep run cannot happen.
"refzero" work list), since Duktape 2.1 mark-and-sweep is not triggered while
the refzero_list is being processed as refzero_list handling is side effect
free.
Finalizer behavior
==================
@ -1446,15 +1275,9 @@ General notes:
error, this is only debug logged but is considered to be a successful
finalization.
* The thread running a finalizer is not very logical right now and is liable
to change:
+ Reference counting: the thread which executed ``DECREF`` is used as the
finalizer thread.
+ Mark-and-sweep: the thread which caused mark-and-sweep is used as the
finalizer thread; if there is no active thread, ``heap->heap_thread``
is used instead.
* Finalizers are always executed using ``heap->heap_thread`` in Duktape 2.1.
Before Duktape 2.1 the thread used depended on whether the object was
finalized via refcounting or mark-and-sweep.
* The finalizer may technically launch other threads and do arbitrary things
in general, but it is a good practice to make the finalizer very simple and
@ -1465,18 +1288,6 @@ General notes:
stack, in particular the thread which triggered a finalization or the
finalizer thread (if these are different).
Finalizer thread selection is currently not optimal; there are several
approaches:
* The thread triggering mark-and-sweep is not a good thread for finalization,
as it may be from a different conceptual virtual machine, and may thus have
a different global context (global object) than where the finalized object
was created.
* A heap-level dedicated finalizer thread has a similar problem: the finalizer
will run in a different global context than where the finalized object was
created.
Voluntary mark-and-sweep interval
=================================

doc/release-checklist.rst (4)

@ -133,9 +133,9 @@ Checklist for ordinary releases
- DUK_USE_SHUFFLE_TORTURE
- DUK_USE_REFZERO_FINALIZER_TORTURE
- DUK_USE_FINALIZER_TORTURE
- DUK_USE_MARKANDSWEEP_FINALIZER_TORTURE + DUK_USE_GC_TORTURE
- DUK_USE_FINALIZER_TORTURE + DUK_USE_GC_TORTURE
- DUK_USE_STRTAB_TORTURE

doc/release-notes-v2-1.rst (48)

@ -0,0 +1,48 @@
=========================
Duktape 2.1 release notes
=========================
Release overview
================
Main changes in this release (see RELEASES.rst for full details):
* TBD.
Upgrading from Duktape 2.0
==========================
No action (other than recompiling) should be needed for most users to upgrade
from Duktape v2.0.x. Note the following:
* The Duktape thread used for finalizer calls is now always the initial thread
(heap_thread), for both reference counting and mark-and-sweep triggered
finalization. This should be taken into account in finalizer functions;
in particular, if there are multiple global environments, finalizers will
execute in the first global environment created for the heap.
Prior to 2.1 the finalizer thread could also be heap_thread but usually the
current thread would be used.
Known issues
============
TBD.
Raw issues from test runs
=========================
API tests
---------
TBD.
Ecmascript tests
----------------
TBD.
test262
-------
TBD.

doc/side-effects.rst (533)

@ -0,0 +1,533 @@
============
Side effects
============
Overview
========
Duktape is a single threaded interpreter, so when the internal C code deals
with memory allocations, pointers, and internal data structures it is safe
to assume, for example, that pointers are stable while they're being used and
that internal state and data structures are not modified simultaneously from
other threads.
However, many internal operations trigger quite extensive side effects such
as resizing the value stack (invalidating any pointers to it) or clobbering
the current heap error handling (longjmp) state. There are a few primary
causes for the side effects, such as memory management reallocating data
structures, finalizer invocation, and Proxy trap invocation. The primary
causes are also triggered by a lot of secondary causes. The practical effect
is that any internal helper should be assumed to potentially invoke arbitrary
side effects unless there's a specific reason to assume otherwise.
Some of the side effects can be surprising when simply looking at calling
code, which makes side effects an error prone element when maintaining Duktape
internals. Incorrect call site assumptions can cause immediate issues like
segfaults, assert failures, or valgrind warnings. But it's also common for
an incorrect assumption to work out fine in practice, only to be triggered by
rare conditions like voluntary mark-and-sweep or an unrecoverable out-of-memory
error happening in just the right place. Such bugs have crept into the code
base several times -- they're easy to make and hard to catch with tests or
code review.
This document describes the different side effects, how they may be triggered,
what mechanisms are in place to deal with them internally, and how tests try
to cover side effects.
Basic side effect categories
============================
Primary causes
--------------
Side effects are ultimately caused by:
* A refcount dropping to zero, causing a "refzero cascade" where a set of
objects is refcount finalized and freed. If any objects in the cascade
have finalizers, the finalizer calls have a lot of side effects. Object
freeing itself is nearly side effect free, but does invalidate any pointers
to unreachable but not-yet-freed objects which are held at times.
* Mark-and-sweep similarly frees objects and can make finalizer calls.
Mark-and-sweep may also resize/compact the string table and object property
tables. The set of mark-and-sweep side effects is likely to change slowly
over time (e.g. better emergency GC capabilities).
* Error throwing overwrites heap-wide error handling state, and causes a long
control transfer. The concrete impact on a call site is that e.g. calling
code may not be able to store/restore internal flags or counters if an
error gets thrown. Almost anything involving a memory allocation, property
operation, etc. may throw.
Any operation doing a DECREF may thus have side effects. Any operation doing
anything to cause a mark-and-sweep (like allocating memory) may similarly have
side effects. Finalizers cause the most wide ranging side effects, but even
with finalizers disabled there are significant side effects in mark-and-sweep.
Full side effects
-----------------
The most extensive type of side effect is arbitrary code execution, caused
by e.g. a finalizer or a Proxy trap call (and a number of indirect causes).
The potential side effects are very wide:
* Because a call is made, value, call, and catch stacks may be grown (but
not shrunk) and their base pointers may change. As a result, any duk_tval
pointers to the value stack, duk_activation pointers to the call stack, and
duk_catcher pointers to the catch stack are (potentially) invalidated.
* An error throw may happen, clobbering heap longjmp state. This is a
problem particularly in error handling where we're dealing with a previous
throw. A long control transfer may skip intended cleanup code.
* A new thread may be resumed and yielded from. The resumed thread may even
duk_suspend().
* A native thread switch may occur, for an arbitrarily long time, if any
function called uses duk_suspend() and duk_resume(). This is not currently
supported for finalizers, but may happen, for example, for Proxy trap calls.
* Because called code may operate on any object (except those we're certain
not to be reachable), objects may undergo arbitrary mutation. For example,
object properties may be added, deleted, or modified; dynamic and external
buffer data pointers may change; external buffer length may change. An
object's property table may be resized and its base pointer may change,
invalidating any pointers into the property table. Object property slot
indices may also be invalidated due to object resize/compaction.
The following will be stable at all times:
* Value stack entries in the current activation won't be unwound or modified.
Similarly, the current call stack and catch stack entries and entries below
them won't be unwound or modified.
* All heap object (duk_heaphdr) pointers are valid and stable regardless of
any side effects, provided that the objects in question are reachable and
correctly refcounted for. Called code cannot (in the absence of bugs)
remove references from previous activations in the call stack and thread
resume chain.
* In particular, while duk_tval pointers to the value stack may change, if
an object pointer is encapsulated in a duk_tval, the pointer to the actual
object is still stable.
* All string data pointers, including external strings. String data is
immutable, and can't be reallocated or relocated.
* All fixed buffer data pointers, because fixed buffer data follows the stable
duk_heaphdr directly. Dynamic and external buffer data pointers are not
stable.
Side effects without finalizers, but with mark-and-sweep allowed
----------------------------------------------------------------
If code execution side effects (finalizer calls, Proxy traps, getter/setter
calls, etc) are avoided, most of the side effects are avoided. In particular,
refzero situations are then side effect free because object freeing has no
side effects beyond memory free calls.
The following side effects still remain:
* Refzero processing still frees objects whose refcount reaches zero.
Any pointers to such objects will thus be invalidated. This may happen
e.g. when a borrowed pointer is used and that pointer loses its backing
reference.
* Mark-and-sweep may reallocate/compact the string table. This affects
the string table data structure pointers and indices/offsets into them.
Strings themselves are not affected (but unreachable strings may be freed).
* Mark-and-sweep may reallocate/compact object property tables. All property
keys and values will remain reachable, but pointers and indices to an object
property table may be invalidated. This mostly affects property code which
often finds a property's "slot index" and then operates on the index.
* Mark-and-sweep may free unreachable objects, invalidating any pointers to
them. This affects only objects which have been allocated and added to
heap_allocated list. Objects not on heap_allocated list are not affected
because mark-and-sweep isn't aware of them; such objects are thus safe from
collection, but at risk for leaking if an error is thrown, so such
situations are usually very short lived.
Other side effects don't happen with the current mark-and-sweep implementation.
For example, the following don't happen (but could, if mark-and-sweep scope
and side effect lockouts are changed):
* Thread value stack, call stack, and catch stack are never reallocated
and all pointers to duk_tvals, duk_activations, and duk_catchers remain
valid. (This could easily change if mark-and-sweep were to "compact"
the stacks in an emergency GC.)
The mark-and-sweep side effects listed above are not fundamental to the
engine and could be removed if they became inconvenient. For example, it's
nice that emergency GC can compact objects in an attempt to free memory, but
it's not a critical feature (and many other engines don't do it either).
Side effects with finalizers and mark-and-sweep disabled
--------------------------------------------------------
When both finalizers and mark-and-sweep are disabled, the only remaining side
effects come from DECREF (plain or NORZ):
* Refzero processing still frees objects whose refcount reaches zero.
Any pointers to such objects will thus be invalidated. This may happen
e.g. when a borrowed pointer is used and that pointer loses its backing
reference.
When DECREF operations happen during mark-and-sweep they get handled specially:
the refcounts are updated normally, but the objects are never freed or even
queued to refzero_list. This is done because mark-and-sweep will free any
unreachable objects; DECREF still gets called because mark-and-sweep finalizes
refcounts of any freed objects (or rather other objects they point to) so that
refcounts remain in sync.
Controls in place
=================
Finalizer execution, pf_prevent_count
-------------------------------------
Objects with finalizers are queued to finalize_list and are processed later
by duk_heap_process_finalize_list(). The queueing doesn't need any side
effect protection as it is side effect free.
duk_heap_process_finalize_list() is guarded by heap->pf_prevent_count which
prevents recursive finalize_list processing. If the count is zero on entry,
it's bumped and finalize_list is processed until it becomes empty. New
finalizable objects may be queued while the list is being processed, but
only the first call will process the list. If the count is non-zero on entry,
the call is a no-op.
The count can also be bumped upwards to prevent finalizer execution in the
first place, even if no call site is currently processing finalizers. If the
count is bumped, there must be a reliable mechanism of unbumping the count or
finalizer execution will be prevented permanently.
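A simplified sketch of the guard (the actual function in
``duk_heap_finalize.c`` handles more details, such as mark-and-sweep
prevention and rescue/free decisions)::

    void duk_heap_process_finalize_list(duk_heap *heap) {
        duk_heaphdr *curr;

        if (heap->pf_prevent_count != 0) {
            return;  /* already processing, or explicitly prevented */
        }
        heap->pf_prevent_count++;
        while ((curr = heap->finalize_list) != NULL) {
            /* Run the finalizer for 'curr' using heap->heap_thread,
             * then either rescue the object back to heap_allocated or
             * free it.  New objects may be appended to finalize_list
             * while this loop runs.
             */
        }
        heap->pf_prevent_count--;
    }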
Because only the first finalizer processing site processes the finalize_list,
using duk_suspend() from a finalizer or anything called by a finalizer is not
currently supported. If duk_suspend() were called in a finalizer, finalization
would be stuck until duk_resume() was called. Processing finalizers from
multiple call sites would by itself be relatively straightforward (each call
site would just process the list head or notice it is NULL and finish);
however, at present mark-and-sweep also needs to be disabled while finalizers
run.
Mark-and-sweep prevent count, ms_prevent_count
----------------------------------------------
Stacking counter to prevent mark-and-sweep. Also used to prevent recursive
mark-and-sweep entry when mark-and-sweep runs.
Mark-and-sweep running, ms_running
----------------------------------
This flag is set only when mark-and-sweep is actually running, and doesn't
stack because recursive mark-and-sweep is not allowed.
The flag is used by DECREF macros to detect that mark-and-sweep is running
and that objects must not be queued to refzero_list or finalize_list; their
refcounts must still be updated.
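For example, a DECREF whose refcount reaches zero behaves roughly as follows
(macro and helper names approximate)::

    if (DUK_HEAPHDR_PREDEC_REFCOUNT(h) != 0) {
        return;  /* common case: refcount still non-zero */
    }
    if (heap->ms_running) {
        return;  /* refcount updated; the sweep phase handles freeing */
    }
    duk_heaphdr_refzero(thr, h);  /* queue to refzero_list/finalize_list */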
Mark-and-sweep flags, ms_base_flags
-----------------------------------
Mark-and-sweep base flags from duk_heap are ORed to mark-and-sweep argument
flags. This allows a section of code to avoid e.g. object compaction
regardless of how mark-and-sweep gets triggered.
Using the base flags is useful when mark-and-sweep by itself is desirable
but e.g. object compaction is not. Finalizers are prevented using a
separate flag.
Calling code must restore the flags reliably -- e.g. catching errors or having
assurance of no errors being thrown in any situation. It might be nice to
make this easier by allowing flags to be modified, the modification flagged,
and for error throw handling to do the restoration automatically.
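The usage pattern is a save/OR/restore sequence; a sketch, assuming a
critical section which must avoid object compaction::

    duk_small_uint_t prev_ms_base_flags;

    prev_ms_base_flags = heap->ms_base_flags;
    heap->ms_base_flags |= DUK_MS_FLAG_NO_OBJECT_COMPACTION;
    /* ... critical section; must not throw, or the flags are never
     * restored ...
     */
    heap->ms_base_flags = prev_ms_base_flags;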
Creating an error object, creating_error
----------------------------------------
This flag is set when Duktape internals are creating an error to be thrown.
If an error happens during that process (which includes a user errCreate()
callback), the flag is set and avoids recursion. A pre-allocated "double
error" object is thrown instead.
Call stack unwind or handling an error, error_not_allowed
---------------------------------------------------------
This flag is only enabled when using assertions. It is set in code sections
which must be protected against an error being thrown. This is a concern
because currently the error state is global in duk_heap and doesn't stack,
so an error throw (even a caught and handled one) clobbers the state which
may be fatal in code sections working to handle an error.
DECREF NORZ (no refzero) macros
-------------------------------
DECREF NORZ (no refzero) macro variants behave the same as plain DECREF macros
except that they don't trigger side effects. Since Duktape 2.1 NORZ macros
will handle refzero cascades inline (freeing all the memory directly); however,
objects with finalizers will be placed in finalize_list without finalizer
calls being made.
Once a code segment with NORZ macros is complete, DUK_REFZERO_CHECK_{SLOW,FAST}()
should be called. The macro checks for any pending finalizers and processes
them. No error catcher is necessary: the error throw path also calls the
macros and processes pending finalizers. (The NORZ name is a bit of a
misnomer since the Duktape 2.1 rework.)
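A typical usage sketch (macro names as described above; the surrounding
code is illustrative)::

    /* Remove two references without running finalizers in between, so
     * that internal state stays consistent throughout the update.
     */
    DUK_HOBJECT_DECREF_NORZ(thr, h1);  /* may free memory, no finalizer calls */
    DUK_HOBJECT_DECREF_NORZ(thr, h2);
    /* State is consistent again; safe to run queued finalizers. */
    DUK_REFZERO_CHECK_FAST(thr);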
Mitigation, test coverage
=========================
There are several torture test options to exercise side effect handling:
* Triggering a mark-and-sweep for every allocation (and in a few other places
like DECREF too).
* Causing a simulated finalizer run with error throwing and call side effects
every time a finalizer might have executed.
Some specific cold paths like out-of-memory errors in critical places are
difficult to exercise with black box testing. There is a small set of
DUK_USE_INJECT_xxx config options which allow errors to be injected into
specific critical functions. These can be combined with e.g. valgrind and
asserts, to cover assertions, memory leaks, and memory safety.
Operations causing side effects
===============================
The main reasons and controls for side effects are covered above. Below is
a (non-exhaustive) list of common operations with side effects. Any internal
helper may invoke some of these primitives and thus also have side effects.
DUK_ALLOC()
* May trigger voluntary or emergency mark-and-sweep, with arbitrary
code execution side effects.
DUK_REALLOC()
* May trigger voluntary or emergency mark-and-sweep, with arbitrary
code execution side effects.
* In particular, if reallocating e.g. the value stack, the triggered
mark-and-sweep may change the base pointer being reallocated!
To avoid this, the DUK_REALLOC_INDIRECT() call queries the base pointer
from the caller for every realloc() attempt.
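A sketch of the indirect pattern, modeled on value stack resizing (the
callback shape is approximate)::

    /* The callback re-reads the current base pointer on every attempt,
     * so a mark-and-sweep triggered by an earlier attempt can't leave
     * the realloc operating on a stale pointer.
     */
    static void *duk__get_valstack_ptr(duk_heap *heap, void *ud) {
        duk_hthread *thr = (duk_hthread *) ud;
        return (void *) thr->valstack;
    }

    new_ptr = DUK_REALLOC_INDIRECT(thr->heap, duk__get_valstack_ptr,
                                   (void *) thr, new_size);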
DUK_FREE()
* No side effects at present.
Property read, write, delete, existence check
* May invoke getters, setters, and Proxy traps with arbitrary code execution
side effects.
* Memory allocation is potentially required for every operation, thus causing
arbitrary code execution side effects. Memory allocation is obviously
needed for property writes, but any other operations may also allocate
memory e.g. to coerce a number to a string.
Value stack pushes
* Pushing to the value stack is side effect free. The space must be allocated
beforehand; a pushed value is INCREF'd if it isn't primitive, and INCREF
itself is side effect free.
* A duk_check_stack() / duk_require_stack() + push has arbitrary side effects
because of a potential reallocation; see the example below.
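For example, using the public API::

    duk_require_stack(ctx, 2);  /* may grow the value stack: arbitrary
                                 * side effects, may also throw */
    duk_push_int(ctx, 123);     /* side effect free */
    duk_push_true(ctx);         /* side effect free */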
Value stack pops
* Popping a value may invoke a finalizer, and thus may cause arbitrary code
execution side effects.
Value stack coercions
* Value stack type coercions may, depending on the coercion, invoke methods
like .toString() and .valueOf(), and thus have arbitrary code execution
side effects. Even failed attempts may cause side effects due to memory
allocation attempts.
* In specific cases it may be safe to conclude that a coercion is side effect
free; for example, doing a ToNumber() conversion on a plain string is side
effect free at present. (This may not always be the case in the future,
e.g. if numbers become heap allocated.)
* Some coercions not involving an explicit method call may require an
allocation call -- which may then trigger a voluntary or emergency
mark-and-sweep leading to arbitrary code execution side effects.
INCREF
* No side effects at present. Object is never freed or queued anywhere.
DECREF_NORZ
* No side effects other than freeing one or more objects, strings, and
buffers. The freed objects don't have finalizers; objects with finalizers
are queued to finalize_list but finalizers are not executed.
* Queries finalizer existence which is side effect free.
* When mark-and-sweep is running, DECREF_NORZ adjusts target refcount but
won't do anything else like queue object to refzero_list or free it; that's
up to mark-and-sweep.
DECREF
* If refcount doesn't reach zero, no side effects.
* If refcount reaches zero, one or more objects, strings, and buffers are
freed which is side effect free. Objects with finalizers are queued to
finalize_list, and the list is processed when the cascade of objects without
finalizers has been freed. Finalizer execution has arbitrary code execution
side effects.
* Queries finalizer existence which is side effect free.
* When mark-and-sweep is running, DECREF adjusts target refcount but won't
do anything else.
duk__refcount_free_pending()
* As of Duktape 2.1 no side effects, just frees objects without a finalizer
until refzero_list is empty. (Equivalent in Duktape 2.0 and prior would
process finalizers inline.)
* Recursive entry is prevented; first caller processes a cascade until it's
done. Pending finalizers are executed after the refzero_list is empty
(unless prevented). Finalizers are executed outside of refzero_list
processing protection so DECREF freeing may happen normally during finalizer
execution.
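A sketch of the cascade loop (helper names approximate)::

    static void duk__refcount_free_pending(duk_heap *heap) {
        duk_heaphdr *curr;

        while ((curr = heap->refzero_list) != NULL) {
            heap->refzero_list = DUK_HEAPHDR_GET_NEXT(heap, curr);
            /* DECREF everything 'curr' references; objects reaching a
             * zero refcount are appended to refzero_list rather than
             * freed recursively, keeping C stack usage constant.
             */
            duk__refcount_finalize_object(heap, curr);  /* hypothetical */
            duk_heap_free_heaphdr_raw(heap, curr);
        }
    }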
Mark-and-sweep
* Queries finalizer existence which is side effect free.
* Object compaction.
* String table compaction.
* Recursive entry prevented.
* Executes finalizers after mark-and-sweep is complete (unless prevented),
which has arbitrary code execution side effects. Finalizer execution
happens outside of mark-and-sweep protection, but currently finalizer
execution explicitly prevents mark-and-sweep to avoid incorrect rescue/free
decisions when the finalize_list is only partially processed.
Error throw
* Overwrites heap longjmp state, so an error throw while handling a previous
one is a fatal error.
* Because finalizer calls may involve error throws, finalizers cannot be
executed in error handling (at least without storing/restoring longjmp
state).
* Memory allocation may involve side effects or fail with out-of-memory, so
it must be used carefully in error handling. For example, creating an object
may potentially fail, throwing an error inside error handling. The error
that is thrown is constructed *before* the error throwing critical section
begins.
* Protected call error handling must also never throw (without catching) for
sandboxing reasons: the error handling path of a protected call is assumed
to never throw.
* Ecmascript try-catch handling isn't currently fully protected against out of
memory: if setting up the catch execution fails, an out-of-memory error is
propagated from the try-catch block. Try-catch isn't as safe as protected
calls for sandboxing. Even if catch execution setup didn't allocate memory,
it's difficult to write script code that is fully memory allocation free
(whereas writing C code which is allocation free is much easier).
* Mark-and-sweep without error throwing or (finalizer) call side effects is
OK.
Debugger message writes
* Code writing a debugger message to the current debug client transport
must ensure, somehow, that it will never happen when another function
is doing the same (including a nested call to itself).
* If nesting happens, memory unsafe behavior won't happen, but the debug
connection becomes corrupted.
* There are some current issues for debugger message handling, e.g. debugger
code uses duk_safe_to_string() which may have side effects or even busy
loop.
Call sites needing side effect protection
=========================================
Error throw and resulting unwind
* Must protect against another error: longjmp state doesn't nest.
* Prevent finalizers, avoid Proxy traps and getter/setter calls.
* Avoid out-of-memory error throws, trial allocation is OK.
* Refzero with pure memory freeing is OK.
* Mark-and-sweep without finalizer execution is OK. Object and string
table compaction is OK, at least at present.
* Error code must be very careful not to throw an error in any part of the
error unwind process. Otherwise sandboxing/protected call guarantees are
broken, and some of the side effect prevention changes are not correctly
undone (e.g. pf_prevent_count is bumped again!). There are asserts in place
for the entire critical part (heap->error_not_allowed).
Success unwind
* Must generally avoid (or protect against) error throws: otherwise state may
be only partially unwound. Same issues as with error unwind.
* However, if the callstack state is consistent, it may be safe to throw in
specific places in the success unwind code path.
String table resize
* String table resize must be protected against string interning.
* Prevent finalizers, avoid Proxy traps.
* Avoid any throws, so that state is not left incomplete.
* Refzero with pure memory freeing is OK.
* Mark-and-sweep without finalizer execution is OK. As of Duktape 2.1
string interning is OK because it no longer causes a recursive string
table resize regardless of interned string count. String table itself
protects against recursive resizing, so both object and string table
compaction attempts are OK.
Object property table resize
* Prevent compaction of the object being resized.
* In practice, prevent finalizers (they may mutate objects) and proxy
traps. Prevent compaction of all objects because there's no fine-grained
control now (could be changed).
JSON fast path
* Prevent all side effects affecting property tables which are walked
by the fast path.
* Prevent object and string table compaction, mark-and-sweep otherwise OK.
Object property slot updates (e.g. data -> accessor conversion)
* Property slot index being modified must not change.
* Prevent finalizers and proxy traps/getters (which may operate on the object).
* Prevent object compaction which affects slot indices even when properties
are not deleted.
* In practice, use NORZ macros, which avoid all relevant side effects.

examples/cmdline/duk_cmdline.c (2)

@ -1123,7 +1123,7 @@ static duk_context *create_duktape_heap(int alloc_provider, int debugger, int aj
if (!ctx) {
fprintf(stderr, "Failed to create Duktape heap\n");
fflush(stderr);
exit(-1);
exit(1);
}
#if defined(DUK_CMDLINE_AJSHEAP)

src-input/duk_api_debug.c (34)

@ -84,16 +84,16 @@ DUK_EXTERNAL void duk_debugger_attach(duk_context *ctx,
/* Start in paused state. */
heap->dbg_processing = 0;
DUK_HEAP_SET_DEBUGGER_PAUSED(heap);
heap->dbg_state_dirty = 1;
heap->dbg_state_dirty = 0;
heap->dbg_force_restart = 0;
heap->dbg_step_type = 0;
heap->dbg_step_type = DUK_STEP_TYPE_NONE;
heap->dbg_step_thread = NULL;
heap->dbg_step_csindex = 0;
heap->dbg_step_startline = 0;
heap->dbg_exec_counter = 0;
heap->dbg_last_counter = 0;
heap->dbg_last_time = 0.0;
duk_debug_set_paused(heap); /* XXX: overlap with fields above */
/* Send version identification and flush right afterwards. Note that
* we must write raw, unframed bytes here.
@ -133,7 +133,7 @@ DUK_EXTERNAL void duk_debugger_cooperate(duk_context *ctx) {
DUK_ASSERT(thr != NULL);
DUK_ASSERT(thr->heap != NULL);
if (!DUK_HEAP_IS_DEBUGGER_ATTACHED(thr->heap)) {
if (!duk_debug_is_attached(thr->heap)) {
return;
}
if (thr->callstack_top > 0 || thr->heap->dbg_processing) {
@ -166,7 +166,7 @@ DUK_EXTERNAL duk_bool_t duk_debugger_notify(duk_context *ctx, duk_idx_t nvalues)
DUK_ERROR_RANGE(thr, "not enough stack values for notify");
return ret; /* unreachable */
}
if (DUK_HEAP_IS_DEBUGGER_ATTACHED(thr->heap)) {
if (duk_debug_is_attached(thr->heap)) {
duk_debug_write_notify(thr, DUK_DBG_CMD_APPNOTIFY);
for (idx = top - nvalues; idx < top; idx++) {
duk_tval *tv = DUK_GET_TVAL_POSIDX(ctx, idx);
@ -179,7 +179,7 @@ DUK_EXTERNAL duk_bool_t duk_debugger_notify(duk_context *ctx, duk_idx_t nvalues)
* a transport error was not indicated by the transport write
* callback. This is not a 100% guarantee of course.
*/
if (DUK_HEAP_IS_DEBUGGER_ATTACHED(thr->heap)) {
if (duk_debug_is_attached(thr->heap)) {
ret = 1;
}
}
@ -198,15 +198,19 @@ DUK_EXTERNAL void duk_debugger_pause(duk_context *ctx) {
DUK_D(DUK_DPRINT("application called duk_debugger_pause()"));
/* Treat like a debugger statement: ignore when not attached. */
if (DUK_HEAP_IS_DEBUGGER_ATTACHED(thr->heap)) {
DUK_HEAP_SET_PAUSED(thr->heap);
/* Pause on the next opcode executed. This is always safe to do even
* inside the debugger message loop: the interrupt counter will be reset
* to its proper value when the message loop exits.
*/
thr->interrupt_init = 1;
thr->interrupt_counter = 0;
if (duk_debug_is_attached(thr->heap)) {
if (duk_debug_is_paused(thr->heap)) {
DUK_D(DUK_DPRINT("duk_debugger_pause() called when already paused; ignoring"));
} else {
duk_debug_set_paused(thr->heap);
/* Pause on the next opcode executed. This is always safe to do even
* inside the debugger message loop: the interrupt counter will be reset
* to its proper value when the message loop exits.
*/
thr->interrupt_init = 1;
thr->interrupt_counter = 0;
}
}
}

src-input/duk_api_heap.c (27)

@ -8,7 +8,7 @@ typedef struct duk_internal_thread_state duk_internal_thread_state;
struct duk_internal_thread_state {
duk_ljstate lj;
duk_bool_t handling_error;
duk_bool_t creating_error;
duk_hthread *curr_thread;
duk_int_t call_recursion_depth;
};
@ -89,14 +89,27 @@ DUK_EXTERNAL void duk_suspend(duk_context *ctx, duk_thread_state *state) {
DUK_ASSERT(thr->heap != NULL);
DUK_ASSERT(state != NULL); /* unvalidated */
/* Currently not supported when called from within a finalizer.
* If that is done, the finalizer will remain running indefinitely,
* preventing other finalizers from executing. The assert is a bit
* wider, checking that it would be OK to run pending finalizers.
*/
DUK_ASSERT(thr->heap->pf_prevent_count == 0);
/* Currently not supported to duk_suspend() from an errCreate()
* call.
*/
DUK_ASSERT(thr->heap->creating_error == 0);
heap = thr->heap;
lj = &heap->lj;
duk_push_tval(ctx, &lj->value1);
duk_push_tval(ctx, &lj->value2);
/* XXX: creating_error == 0 is asserted above, so no need to store. */
DUK_MEMCPY((void *) &snapshot->lj, (const void *) lj, sizeof(duk_ljstate));
snapshot->handling_error = heap->handling_error;
snapshot->creating_error = heap->creating_error;
snapshot->curr_thread = heap->curr_thread;
snapshot->call_recursion_depth = heap->call_recursion_depth;
@ -104,7 +117,7 @@ DUK_EXTERNAL void duk_suspend(duk_context *ctx, duk_thread_state *state) {
lj->type = DUK_LJ_TYPE_UNKNOWN;
DUK_TVAL_SET_UNDEFINED(&lj->value1);
DUK_TVAL_SET_UNDEFINED(&lj->value2);
heap->handling_error = 0;
heap->creating_error = 0;
heap->curr_thread = NULL;
heap->call_recursion_depth = 0;
}
@ -119,10 +132,16 @@ DUK_EXTERNAL void duk_resume(duk_context *ctx, const duk_thread_state *state) {
DUK_ASSERT(thr->heap != NULL);
DUK_ASSERT(state != NULL); /* unvalidated */
/* Shouldn't be necessary if duk_suspend() is called before
* duk_resume(), but assert in case API sequence is incorrect.
*/
DUK_ASSERT(thr->heap->pf_prevent_count == 0);
DUK_ASSERT(thr->heap->creating_error == 0);
heap = thr->heap;
DUK_MEMCPY((void *) &heap->lj, (const void *) &snapshot->lj, sizeof(duk_ljstate));
heap->handling_error = snapshot->handling_error;
heap->creating_error = snapshot->creating_error;
heap->curr_thread = snapshot->curr_thread;
heap->call_recursion_depth = snapshot->call_recursion_depth;
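As a usage sketch of the snapshot being extended here: duk_suspend()/duk_resume() bracket a blocking native section, and the snapshot now also carries creating_error so an error-creation phase can't leak across the suspension. The blocking call below is a placeholder.

#include "duktape.h"

extern void do_blocking_io(void);  /* hypothetical; must not re-enter ctx */

static void call_with_suspend(duk_context *ctx) {
    duk_thread_state st;

    duk_suspend(ctx, &st);   /* snapshot and reset lj/error state */
    do_blocking_io();
    duk_resume(ctx, &st);    /* restore the snapshot */
}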

5
src-input/duk_api_object.c

@ -726,6 +726,11 @@ DUK_EXTERNAL void duk_set_finalizer(duk_context *ctx, duk_idx_t idx) {
* a very quick finalizer check by walking the prototype chain
* and checking the flag alone. (Note that this means that just
* setting _Finalizer on an object won't affect finalizer checks.)
*
* NOTE: if the argument is a Proxy object, this flag will be set
* on the Proxy, not the target. As a result, the target won't get
* a finalizer flag and the Proxy also won't be finalized as there's
* an explicit Proxy check in finalization now.
*/
if (callable) {
DUK_HOBJECT_SET_HAVE_FINALIZER(h);
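A sketch of what the new comment implies for API users, assuming a hypothetical native finalizer: set the finalizer on the target object itself, since setting it through a Proxy only flags the Proxy, which finalization now explicitly skips.

#include "duktape.h"

static duk_ret_t my_finalizer(duk_context *ctx) {
    /* index 0: object being finalized; index 1: heap-destruction flag */
    return 0;
}

static void attach_finalizer(duk_context *ctx) {
    duk_push_object(ctx);                      /* target, not a Proxy wrapper */
    duk_push_c_function(ctx, my_finalizer, 2);
    duk_set_finalizer(ctx, -2);                /* pops the finalizer */
    duk_pop(ctx);
}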

115
src-input/duk_api_stack.c

@ -4607,45 +4607,46 @@ DUK_LOCAL void duk__validate_push_heapptr(duk_context *ctx, void *ptr) {
* by seeing that X's FINALIZED flag is set (which is done before
* the finalizer starts executing).
*/
#if defined(DUK_USE_FINALIZER_SUPPORT)
for (curr = thr->heap->finalize_list;
curr != NULL;
curr = DUK_HEAPHDR_GET_NEXT(thr->heap, curr)) {
/* FINALIZABLE is set for all objects on finalize_list
* except for an object being finalized right now. So
* can't assert here.
*/
#if 0
DUK_ASSERT(DUK_HEAPHDR_HAS_FINALIZABLE(curr));
#endif
if (curr == h) {
if (DUK_HEAPHDR_HAS_FINALIZED((duk_heaphdr *) h)) {
/* Object is currently being finalized. */
DUK_ASSERT(found == 0); /* Would indicate corrupted lists. */
found = 1;
} else {
#if 1
DUK_ASSERT(0);
}
}
}
/* Also check for the refzero_list; must not be there unless it is
* being finalized when duk_push_heapptr() is called.
*
* Corner case: similar to finalize_list.
*/
#if defined(DUK_USE_REFERENCE_COUNTING)
for (curr = thr->heap->refzero_list;
curr != NULL;
curr = DUK_HEAPHDR_GET_NEXT(thr->heap, curr)) {
if (curr == h) {
if (DUK_HEAPHDR_HAS_FINALIZED((duk_heaphdr *) h)) {
/* Object is currently being finalized. */
#else /* Enable when duk_push_heapptr() allowed for object on finalize_list. */
DUK_ASSERT(found == 0); /* Would indicate corrupted lists. */
found = 1;
} else {
DUK_ASSERT(0);
#endif
}
}
}
#endif /* DUK_USE_FINALIZER_SUPPORT */
#if defined(DUK_USE_REFERENCE_COUNTING)
/* Because refzero_list is now processed to completion inline with
* no side effects, it's always empty here.
*/
DUK_ASSERT(thr->heap->refzero_list == NULL);
#endif
/* If not present in finalize_list or refzero_list, the pointer
/* If not present in finalize_list (or refzero_list), it
* must be either in heap_allocated or the string table.
*/
if (DUK_HEAPHDR_GET_TYPE(h) == DUK_HTYPE_STRING) {
if (DUK_HEAPHDR_IS_STRING(h)) {
duk_uint32_t i;
duk_hstring *str;
duk_heap *heap = thr->heap;
@ -4684,6 +4685,7 @@ DUK_LOCAL void duk__validate_push_heapptr(duk_context *ctx, void *ptr) {
DUK_EXTERNAL duk_idx_t duk_push_heapptr(duk_context *ctx, void *ptr) {
duk_hthread *thr = (duk_hthread *) ctx;
duk_idx_t ret;
duk_tval *tv;
DUK_ASSERT_CTX_VALID(ctx);
@ -4698,29 +4700,75 @@ DUK_EXTERNAL duk_idx_t duk_push_heapptr(duk_context *ctx, void *ptr) {
duk__validate_push_heapptr(ctx, ptr);
#endif
DUK__CHECK_SPACE();
ret = (duk_idx_t) (thr->valstack_top - thr->valstack_bottom);
tv = thr->valstack_top++;
if (ptr == NULL) {
goto push_undefined;
DUK_ASSERT(DUK_TVAL_IS_UNDEFINED(tv));
return ret;
}
DUK_ASSERT_HEAPHDR_VALID((duk_heaphdr *) ptr);
#if 0
/* If the argument is on finalize_list it has technically been
* unreachable before duk_push_heapptr() but it's still safe to
* push it. Starting from Duktape 2.1 allow application code to
* do so. There are two main cases:
*
* (1) The object is on the finalize_list and we're called by
* the finalizer for the object being finalized. In this
* case do nothing: finalize_list handling will deal with
* the object queueing. This is detected by the object not
* having a FINALIZABLE flag despite being on the finalize_list;
* the flag is cleared for the object being finalized only.
*
* (2) The object is on the finalize_list but is not currently
* being processed. In this case the object can be queued
* back to heap_allocated with a few flags cleared, in effect
* cancelling the finalizer.
*/
if (DUK_UNLIKELY(DUK_HEAPHDR_HAS_FINALIZABLE((duk_heaphdr *) ptr))) {
duk_heaphdr *curr;
DUK_D(DUK_DPRINT("duk_push_heapptr() with a pointer on finalize_list, autorescue"));
curr = (duk_heaphdr *) ptr;
DUK_HEAPHDR_CLEAR_FINALIZABLE(curr);
/* Because FINALIZED is set prior to finalizer call, will be
* set for the object being currently finalized, but not for
* other objects on finalize_list.
*/
DUK_HEAPHDR_CLEAR_FINALIZED(curr);
/* Dequeue object from finalize_list and queue it back to
* heap_allocated.
*/
DUK_HEAP_REMOVE_FROM_FINALIZE_LIST(thr->heap, curr);
DUK_HEAP_INSERT_INTO_HEAP_ALLOCATED(thr->heap, curr);
/* Continue with the rest. */
}
#endif
switch (DUK_HEAPHDR_GET_TYPE((duk_heaphdr *) ptr)) {
case DUK_HTYPE_STRING:
duk_push_hstring(ctx, (duk_hstring *) ptr);
DUK_TVAL_SET_STRING(tv, (duk_hstring *) ptr);
break;
case DUK_HTYPE_OBJECT:
duk_push_hobject(ctx, (duk_hobject *) ptr);
break;
case DUK_HTYPE_BUFFER:
duk_push_hbuffer(ctx, (duk_hbuffer *) ptr);
DUK_TVAL_SET_OBJECT(tv, (duk_hobject *) ptr);
break;
default:
goto push_undefined;
DUK_ASSERT(DUK_HEAPHDR_GET_TYPE((duk_heaphdr *) ptr) == DUK_HTYPE_BUFFER);
DUK_TVAL_SET_BUFFER(tv, (duk_hbuffer *) ptr);
break;
}
return ret;
push_undefined:
duk_push_undefined(ctx);
DUK_HEAPHDR_INCREF(thr, (duk_heaphdr *) ptr);
return ret;
}
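For context, a sketch of the borrowed-pointer pattern duk_push_heapptr() serves (the stash key is hypothetical): the pointer stays valid only while the object remains reachable, which is what the validation helper above checks against the heap lists.

#include "duktape.h"

static void *stash_object(duk_context *ctx) {
    void *ptr;

    duk_push_object(ctx);
    ptr = duk_get_heapptr(ctx, -1);         /* borrowed, not a strong reference */
    duk_push_global_stash(ctx);
    duk_dup(ctx, -2);
    duk_put_prop_string(ctx, -2, "myRef");  /* keep reachable; key hypothetical */
    duk_pop_2(ctx);
    return ptr;
}

static void use_stashed(duk_context *ctx, void *ptr) {
    duk_push_heapptr(ctx, ptr);  /* cheap: tag the pointer and INCREF */
    /* ... use the value ... */
    duk_pop(ctx);
}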
@ -5042,6 +5090,7 @@ DUK_INTERNAL void duk_unpack(duk_context *ctx) {
DUK_EXTERNAL void duk_throw_raw(duk_context *ctx) {
duk_hthread *thr = (duk_hthread *) ctx;
duk_tval *tv_val;
DUK_ASSERT(thr->valstack_bottom >= thr->valstack);
DUK_ASSERT(thr->valstack_top >= thr->valstack_bottom);
@ -5068,7 +5117,11 @@ DUK_EXTERNAL void duk_throw_raw(duk_context *ctx) {
#endif
DUK_DDD(DUK_DDDPRINT("THROW ERROR (API): %!dT (after throw augment)", (duk_tval *) duk_get_tval(ctx, -1)));
duk_err_setup_heap_ljstate(thr, DUK_LJ_TYPE_THROW);
tv_val = DUK_GET_TVAL_NEGIDX(ctx, -1);
duk_err_setup_ljstate1(thr, DUK_LJ_TYPE_THROW, tv_val);
#if defined(DUK_USE_DEBUGGER_SUPPORT)
duk_err_check_debugger_integration(thr);
#endif
/* thr->heap->lj.jmpbuf_ptr is checked by duk_err_longjmp() so we don't
* need to check that here. If the value is NULL, a fatal error occurs

7
src-input/duk_bi_duktape.c

@ -29,15 +29,14 @@ DUK_INTERNAL duk_ret_t duk_bi_duktape_object_act(duk_context *ctx) {
DUK_INTERNAL duk_ret_t duk_bi_duktape_object_gc(duk_context *ctx) {
duk_hthread *thr = (duk_hthread *) ctx;
duk_small_uint_t flags;
duk_bool_t rc;
flags = (duk_small_uint_t) duk_get_uint(ctx, 0);
rc = duk_heap_mark_and_sweep(thr->heap, flags);
duk_heap_mark_and_sweep(thr->heap, flags);
/* XXX: Not sure what the best return value would be in the API.
* Return a boolean for now. Note that rc == 0 is success (true).
* Return true for now.
*/
duk_push_boolean(ctx, !rc);
duk_push_true(ctx);
return 1;
}

15
src-input/duk_bi_json.c

@ -3115,7 +3115,7 @@ void duk_bi_json_stringify_helper(duk_context *ctx,
if (js_ctx->h_replacer == NULL && /* replacer is a mutation risk */
js_ctx->idx_proplist == -1) { /* proplist is very rare */
duk_int_t pcall_rc;
duk_small_uint_t prev_mark_and_sweep_base_flags;
duk_small_uint_t prev_ms_base_flags;
DUK_DD(DUK_DDPRINT("try JSON.stringify() fast path"));
@ -3137,14 +3137,17 @@ void duk_bi_json_stringify_helper(duk_context *ctx,
duk_dup(ctx, idx_value);
/* Must prevent finalizers which may have arbitrary side effects. */
prev_mark_and_sweep_base_flags = thr->heap->mark_and_sweep_base_flags;
thr->heap->mark_and_sweep_base_flags |=
DUK_MS_FLAG_NO_FINALIZERS | /* avoid attempts to add/remove object keys */
DUK_MS_FLAG_NO_OBJECT_COMPACTION; /* avoid attempt to compact any objects */
prev_ms_base_flags = thr->heap->ms_base_flags;
thr->heap->ms_base_flags |=
DUK_MS_FLAG_NO_OBJECT_COMPACTION; /* Avoid attempt to compact any objects. */
thr->heap->pf_prevent_count++; /* Prevent finalizers. */
DUK_ASSERT(thr->heap->pf_prevent_count != 0); /* Wrap. */
pcall_rc = duk_safe_call(ctx, duk__json_stringify_fast, (void *) js_ctx /*udata*/, 1 /*nargs*/, 0 /*nret*/);
thr->heap->mark_and_sweep_base_flags = prev_mark_and_sweep_base_flags;
DUK_ASSERT(thr->heap->pf_prevent_count > 0);
thr->heap->pf_prevent_count--;
thr->heap->ms_base_flags = prev_ms_base_flags;
if (pcall_rc == DUK_EXEC_SUCCESS) {
DUK_DD(DUK_DDPRINT("fast path successful"));

81
src-input/duk_debugger.c

@ -66,7 +66,6 @@ DUK_LOCAL void duk__debug_do_detach1(duk_heap *heap, duk_int_t reason) {
/* heap->dbg_detached_cb: keep */
/* heap->dbg_udata: keep */
/* heap->dbg_processing: keep on purpose to avoid debugger re-entry in detaching state */
DUK_HEAP_CLEAR_DEBUGGER_PAUSED(heap);
heap->dbg_state_dirty = 0;
heap->dbg_force_restart = 0;
heap->dbg_step_type = 0;
@ -74,6 +73,8 @@ DUK_LOCAL void duk__debug_do_detach1(duk_heap *heap, duk_int_t reason) {
heap->dbg_step_csindex = 0;
heap->dbg_step_startline = 0;
heap->dbg_have_next_byte = 0;
duk_debug_clear_paused(heap); /* XXX: some overlap with field inits above */
heap->dbg_state_dirty = 0; /* XXX: clear_paused sets dirty; rework? */
/* Ensure there are no stale active breakpoint pointers.
* Breakpoint list is currently kept - we could empty it
@ -92,7 +93,10 @@ DUK_LOCAL void duk__debug_do_detach2(duk_heap *heap) {
duk_context *ctx;
thr = heap->heap_thread;
DUK_ASSERT(thr != NULL);
if (thr == NULL) {
DUK_ASSERT(heap->dbg_detached_cb == NULL);
return;
}
ctx = (duk_context *) thr;
/* Safe to call multiple times. */
@ -126,6 +130,9 @@ DUK_INTERNAL void duk_debug_do_detach(duk_heap *heap) {
*/
DUK_LOCAL void duk__debug_null_most_callbacks(duk_hthread *thr) {
duk_heap *heap;
DUK_ASSERT(thr != NULL);
heap = thr->heap;
DUK_D(DUK_DPRINT("transport read/write error, NULL all callbacks expected detached"));
heap->dbg_read_cb = NULL;
@ -1214,7 +1221,11 @@ DUK_LOCAL void duk__debug_handle_trigger_status(duk_hthread *thr, duk_heap *heap
DUK_LOCAL void duk__debug_handle_pause(duk_hthread *thr, duk_heap *heap) {
DUK_D(DUK_DPRINT("debug command Pause"));
DUK_HEAP_SET_PAUSED(heap);
if (duk_debug_is_paused(heap)) {
DUK_D(DUK_DPRINT("Pause requested when already paused, ignore"));
} else {
duk_debug_set_paused(heap);
}
duk_debug_write_reply(thr);
duk_debug_write_eom(thr);
}
@ -1222,7 +1233,7 @@ DUK_LOCAL void duk__debug_handle_pause(duk_hthread *thr, duk_heap *heap) {
DUK_LOCAL void duk__debug_handle_resume(duk_hthread *thr, duk_heap *heap) {
DUK_D(DUK_DPRINT("debug command Resume"));
DUK_HEAP_CLEAR_PAUSED(heap);
duk_debug_clear_paused(heap);
duk_debug_write_reply(thr);
duk_debug_write_eom(thr);
}
@ -1244,7 +1255,7 @@ DUK_LOCAL void duk__debug_handle_step(duk_hthread *thr, duk_heap *heap, duk_int3
line = duk_debug_curr_line(thr);
if (line > 0) {
DUK_HEAP_CLEAR_DEBUGGER_PAUSED(heap);
duk_debug_clear_paused(heap); /* XXX: overlap with fields below; separate macro/helper? */
heap->dbg_step_type = step_type;
heap->dbg_step_thread = thr;
heap->dbg_step_csindex = thr->callstack_top - 1;
@ -2648,10 +2659,11 @@ DUK_INTERNAL void duk_debug_halt_execution(duk_hthread *thr, duk_bool_t use_prev
DUK_ASSERT(thr != NULL);
DUK_ASSERT(thr->heap != NULL);
DUK_ASSERT(DUK_HEAP_IS_DEBUGGER_ATTACHED(thr->heap));
DUK_ASSERT(duk_debug_is_attached(thr->heap));
DUK_ASSERT(thr->heap->dbg_processing == 0);
DUK_ASSERT(!duk_debug_is_paused(thr->heap));
DUK_HEAP_SET_PAUSED(thr->heap);
duk_debug_set_paused(thr->heap);
act = thr->callstack_curr;
@ -2686,8 +2698,8 @@ DUK_INTERNAL void duk_debug_halt_execution(duk_hthread *thr, duk_bool_t use_prev
thr->heap->dbg_state_dirty = 1;
while (DUK_HEAP_HAS_DEBUGGER_PAUSED(thr->heap)) {
DUK_ASSERT(DUK_HEAP_IS_DEBUGGER_ATTACHED(thr->heap));
DUK_ASSERT(thr->heap->dbg_processing);
DUK_ASSERT(duk_debug_is_attached(thr->heap));
DUK_ASSERT(thr->heap->dbg_processing == 0);
duk_debug_process_messages(thr, 0 /*no_block*/);
}
@ -2749,7 +2761,7 @@ DUK_INTERNAL duk_bool_t duk_debug_remove_breakpoint(duk_hthread *thr, duk_small_
DUK_ASSERT(thr != NULL);
heap = thr->heap;
DUK_ASSERT(heap != NULL);
DUK_ASSERT(DUK_HEAP_IS_DEBUGGER_ATTACHED(thr->heap));
DUK_ASSERT(duk_debug_is_attached(thr->heap));
DUK_ASSERT_DISABLE(breakpoint_index >= 0); /* unsigned */
if (breakpoint_index >= heap->dbg_breakpoint_count) {
@ -2778,6 +2790,55 @@ DUK_INTERNAL duk_bool_t duk_debug_remove_breakpoint(duk_hthread *thr, duk_small_
return 1;
}
/*
* Misc state management
*/
DUK_INTERNAL duk_bool_t duk_debug_is_attached(duk_heap *heap) {
return (heap->dbg_read_cb != NULL);
}
DUK_INTERNAL duk_bool_t duk_debug_is_paused(duk_heap *heap) {
return (DUK_HEAP_HAS_DEBUGGER_PAUSED(heap) != 0);
}
DUK_INTERNAL void duk_debug_set_paused(duk_heap *heap) {
if (duk_debug_is_paused(heap)) {
DUK_D(DUK_DPRINT("trying to set paused state when already paused, ignoring"));
} else {
DUK_HEAP_SET_DEBUGGER_PAUSED(heap);
heap->dbg_state_dirty = 1;
duk_debug_clear_step_state(heap);
DUK_ASSERT(heap->ms_running == 0); /* debugger can't be triggered within mark-and-sweep */
heap->ms_running = 1; /* prevent mark-and-sweep, prevent refzero queueing */
heap->ms_prevent_count++;
DUK_ASSERT(heap->ms_prevent_count != 0); /* Wrap. */
DUK_ASSERT(heap->heap_thread != NULL);
}
}
DUK_INTERNAL void duk_debug_clear_paused(duk_heap *heap) {
if (duk_debug_is_paused(heap)) {
DUK_HEAP_CLEAR_DEBUGGER_PAUSED(heap);
heap->dbg_state_dirty = 1;
duk_debug_clear_step_state(heap);
DUK_ASSERT(heap->ms_running == 1);
DUK_ASSERT(heap->ms_prevent_count > 0);
heap->ms_prevent_count--;
heap->ms_running = 0;
DUK_ASSERT(heap->heap_thread != NULL);
} else {
DUK_D(DUK_DPRINT("trying to clear paused state when not paused, ignoring"));
}
}
DUK_INTERNAL void duk_debug_clear_step_state(duk_heap *heap) {
heap->dbg_step_type = DUK_STEP_TYPE_NONE;
heap->dbg_step_thread = NULL;
heap->dbg_step_csindex = 0;
heap->dbg_step_startline = 0;
}
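The new helpers are deliberately symmetric: duk_debug_set_paused() bumps ms_prevent_count and sets ms_running so neither mark-and-sweep nor refzero queueing runs while paused, and duk_debug_clear_paused() undoes exactly that. A sketch of the intended pairing at a call site (internal API):

static void pause_roundtrip(duk_hthread *thr) {
    duk_heap *heap = thr->heap;

    if (!duk_debug_is_paused(heap)) {
        duk_debug_set_paused(heap);  /* bumps ms_prevent_count, sets ms_running */
    }
    /* ... debugger message loop: Eval, GetVar, etc. ... */
    duk_debug_clear_paused(heap);    /* decrements/clears the same state */
}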
#else /* DUK_USE_DEBUGGER_SUPPORT */
/* No debugger support. */

8
src-input/duk_debugger.h

@ -140,6 +140,12 @@ DUK_INTERNAL_DECL duk_bool_t duk_debug_process_messages(duk_hthread *thr, duk_bo
DUK_INTERNAL_DECL duk_small_int_t duk_debug_add_breakpoint(duk_hthread *thr, duk_hstring *filename, duk_uint32_t line);
DUK_INTERNAL_DECL duk_bool_t duk_debug_remove_breakpoint(duk_hthread *thr, duk_small_uint_t breakpoint_index);
#endif
DUK_INTERNAL_DECL duk_bool_t duk_debug_is_attached(duk_heap *heap);
DUK_INTERNAL_DECL duk_bool_t duk_debug_is_paused(duk_heap *heap);
DUK_INTERNAL_DECL void duk_debug_set_paused(duk_heap *heap);
DUK_INTERNAL_DECL void duk_debug_clear_paused(duk_heap *heap);
DUK_INTERNAL_DECL void duk_debug_clear_step_state(duk_heap *heap);
#endif /* DUK_USE_DEBUGGER_SUPPORT */
#endif /* DUK_DEBUGGER_H_INCLUDED */

5
src-input/duk_error.h

@ -475,7 +475,10 @@ DUK_NORETURN(DUK_INTERNAL_DECL void duk_err_longjmp(duk_hthread *thr));
DUK_NORETURN(DUK_INTERNAL_DECL void duk_default_fatal_handler(void *udata, const char *msg));
DUK_INTERNAL_DECL void duk_err_setup_heap_ljstate(duk_hthread *thr, duk_small_int_t lj_type);
DUK_INTERNAL_DECL void duk_err_setup_ljstate1(duk_hthread *thr, duk_small_uint_t lj_type, duk_tval *tv_val);
#if defined(DUK_USE_DEBUGGER_SUPPORT)
DUK_INTERNAL_DECL void duk_err_check_debugger_integration(duk_hthread *thr);
#endif
DUK_INTERNAL_DECL duk_hobject *duk_error_prototype_from_code(duk_hthread *thr, duk_errcode_t err_code);

31
src-input/duk_error_longjmp.c

@ -38,18 +38,37 @@ DUK_LOCAL void duk__uncaught_error_aware(duk_hthread *thr) {
DUK_INTERNAL void duk_err_longjmp(duk_hthread *thr) {
DUK_ASSERT(thr != NULL);
DUK_ASSERT(thr->heap != NULL);
DUK_DD(DUK_DDPRINT("longjmp error: type=%d iserror=%d value1=%!T value2=%!T",
(int) thr->heap->lj.type, (int) thr->heap->lj.iserror,
&thr->heap->lj.value1, &thr->heap->lj.value2));
/* Perform a refzero check before throwing: this catches cases where
* some internal code uses no-refzero (NORZ) macro variants but an
* error occurs before it has the chance to DUK_REFZERO_CHECK_xxx()
* explicitly. Refzero'ed objects would otherwise remain pending
* until the next refzero (which is not a big issue but still).
/* Prevent finalizer execution during error handling. All error
* handling sites will process pending finalizers once error handling
* is complete and we're ready for the side effects. Does not prevent
* refzero freeing or mark-and-sweep during error handling.
*
* NOTE: when we come here some calling code may have used DECREF
* NORZ macros without an explicit DUK_REFZERO_CHECK_xxx() call.
* We don't want to do it here because it would just check for
* pending finalizers and we prevent that explicitly. Instead,
* the error catcher will run the finalizers once error handling
* is complete.
*/
DUK_REFZERO_CHECK_SLOW(thr);
DUK_ASSERT_LJSTATE_SET(thr->heap);
thr->heap->pf_prevent_count++;
DUK_ASSERT(thr->heap->pf_prevent_count != 0); /* Wrap. */
#if defined(DUK_USE_ASSERTIONS)
/* XXX: set this immediately when longjmp state is set */
DUK_ASSERT(thr->heap->error_not_allowed == 0); /* Detect error within critical section. */
thr->heap->error_not_allowed = 1;
#endif
DUK_DD(DUK_DDPRINT("about to longjmp, pf_prevent_count=%ld", (long) thr->heap->pf_prevent_count));
#if !defined(DUK_USE_CPP_EXCEPTIONS)
/* If we don't have a jmpbuf_ptr, there is little we can do except

118
src-input/duk_error_misc.c

@ -61,11 +61,16 @@ DUK_INTERNAL duk_hobject *duk_error_prototype_from_code(duk_hthread *thr, duk_er
}
/*
* Exposed helper for setting up heap longjmp state.
* Helper for debugger throw notify and pause-on-uncaught integration.
*/
DUK_INTERNAL void duk_err_setup_heap_ljstate(duk_hthread *thr, duk_small_int_t lj_type) {
#if defined(DUK_USE_DEBUGGER_SUPPORT)
#if defined(DUK_USE_DEBUGGER_THROW_NOTIFY) || defined(DUK_USE_DEBUGGER_PAUSE_UNCAUGHT)
DUK_INTERNAL void duk_err_check_debugger_integration(duk_hthread *thr) {
duk_context *ctx = (duk_context *) thr;
duk_bool_t fatal;
duk_tval *tv_obj;
/* If something is thrown with the debugger attached and nobody will
* catch it, execution is paused before the longjmp, turning over
* control to the debug client. This allows local state to be examined
@ -73,53 +78,100 @@ DUK_INTERNAL void duk_err_setup_heap_ljstate(duk_hthread *thr, duk_small_int_t l
* message loop is active (e.g. for Eval).
*/
DUK_ASSERT(thr != NULL);
DUK_ASSERT(thr->heap != NULL);
/* XXX: Allow customizing the pause and notify behavior at runtime
* using debugger runtime flags. For now the behavior is fixed using
* config options.
*/
#if defined(DUK_USE_DEBUGGER_THROW_NOTIFY) || defined(DUK_USE_DEBUGGER_PAUSE_UNCAUGHT)
if (DUK_HEAP_IS_DEBUGGER_ATTACHED(thr->heap) &&
!thr->heap->dbg_processing &&
lj_type == DUK_LJ_TYPE_THROW) {
duk_context *ctx = (duk_context *) thr;
duk_bool_t fatal;
duk_hobject *h_obj;
/* Don't intercept a DoubleError, we may have caused the initial double
* fault and attempting to intercept it will cause us to be called
* recursively and exhaust the C stack.
*/
h_obj = duk_get_hobject(ctx, -1);
if (h_obj == thr->builtins[DUK_BIDX_DOUBLE_ERROR]) {
DUK_D(DUK_DPRINT("built-in DoubleError instance thrown, not intercepting"));
goto skip_throw_intercept;
}
DUK_D(DUK_DPRINT("throw with debugger attached, report to client"));
if (!duk_debug_is_attached(thr->heap) ||
thr->heap->dbg_processing ||
thr->heap->lj.type != DUK_LJ_TYPE_THROW ||
thr->heap->creating_error) {
DUK_D(DUK_DPRINT("skip debugger error integration; not attached, debugger processing, not THROW, or error thrown while creating error"));
return;
}
/* Don't intercept a DoubleError, we may have caused the initial double
* fault and attempting to intercept it will cause us to be called
* recursively and exhaust the C stack. (This should no longer happen
* for the initial throw because DoubleError path doesn't do a debugger
* integration check, but it might happen for rethrows.)
*/
tv_obj = &thr->heap->lj.value1;
if (DUK_TVAL_IS_OBJECT(tv_obj) && DUK_TVAL_GET_OBJECT(tv_obj) == thr->builtins[DUK_BIDX_DOUBLE_ERROR]) {
DUK_D(DUK_DPRINT("built-in DoubleError instance (re)thrown, not intercepting"));
return;
}
fatal = !duk__have_active_catcher(thr);
/* Debugger code expects the value at stack top. This also serves
* as a backup: we need to store/restore the longjmp state because
* when the debugger is paused Eval commands may be executed and
* they can arbitrarily clobber the longjmp state.
*/
duk_push_tval(ctx, tv_obj);
fatal = !duk__have_active_catcher(thr);
/* Store and reset longjmp state. */
DUK_ASSERT_LJSTATE_SET(thr->heap);
DUK_TVAL_DECREF_NORZ(thr, tv_obj);
DUK_ASSERT(DUK_TVAL_IS_UNDEFINED(&thr->heap->lj.value2)); /* Always for THROW type. */
DUK_TVAL_SET_UNDEFINED(tv_obj);
thr->heap->lj.type = DUK_LJ_TYPE_UNKNOWN;
DUK_ASSERT_LJSTATE_UNSET(thr->heap);
#if defined(DUK_USE_DEBUGGER_THROW_NOTIFY)
/* Report it to the debug client */
duk_debug_send_throw(thr, fatal);
/* Report it to the debug client */
DUK_D(DUK_DPRINT("throw with debugger attached, report to client"));
duk_debug_send_throw(thr, fatal);
#endif
#if defined(DUK_USE_DEBUGGER_PAUSE_UNCAUGHT)
if (fatal) {
DUK_D(DUK_DPRINT("throw will be fatal, halt before longjmp"));
duk_debug_halt_execution(thr, 1 /*use_prev_pc*/);
}
#endif
if (fatal) {
DUK_D(DUK_DPRINT("throw will be fatal, halt before longjmp"));
duk_debug_halt_execution(thr, 1 /*use_prev_pc*/);
}
#endif
skip_throw_intercept:
/* Restore longjmp state. */
DUK_ASSERT_LJSTATE_UNSET(thr->heap);
thr->heap->lj.type = DUK_LJ_TYPE_THROW;
tv_obj = DUK_GET_TVAL_NEGIDX(ctx, -1);
DUK_ASSERT(DUK_TVAL_IS_UNDEFINED(&thr->heap->lj.value1));
DUK_ASSERT(DUK_TVAL_IS_UNDEFINED(&thr->heap->lj.value2));
DUK_TVAL_SET_TVAL(&thr->heap->lj.value1, tv_obj);
DUK_TVAL_INCREF(thr, tv_obj);
DUK_ASSERT_LJSTATE_SET(thr->heap);
duk_pop(ctx);
}
#else /* DUK_USE_DEBUGGER_THROW_NOTIFY || DUK_USE_DEBUGGER_PAUSE_UNCAUGHT */
DUK_INTERNAL void duk_err_check_debugger_integration(duk_hthread *thr) {
DUK_UNREF(thr);
}
#endif /* DUK_USE_DEBUGGER_THROW_NOTIFY || DUK_USE_DEBUGGER_PAUSE_UNCAUGHT */
#endif /* DUK_USE_DEBUGGER_SUPPORT */
thr->heap->lj.type = lj_type;
/*
* Helpers for setting up heap longjmp state.
*/
DUK_INTERNAL void duk_err_setup_ljstate1(duk_hthread *thr, duk_small_uint_t lj_type, duk_tval *tv_val) {
duk_heap *heap;
DUK_ASSERT(thr != NULL);
heap = thr->heap;
DUK_ASSERT(heap != NULL);
DUK_ASSERT(tv_val != NULL);
DUK_ASSERT_LJSTATE_UNSET(heap);
DUK_ASSERT(thr->valstack_top > thr->valstack);
DUK_TVAL_SET_TVAL_UPDREF(thr, &thr->heap->lj.value1, thr->valstack_top - 1); /* side effects */
heap->lj.type = lj_type;
DUK_TVAL_SET_TVAL(&heap->lj.value1, tv_val);
DUK_TVAL_INCREF(thr, tv_val);
duk_pop((duk_context *) thr);
DUK_ASSERT_LJSTATE_SET(heap);
}

52
src-input/duk_error_throw.c

@ -16,7 +16,7 @@
*
* If an error occurs while we're dealing with the current error, we might
* enter an infinite recursion loop. This is prevented by detecting a
* "double fault" through the heap->handling_error flag; the recursion
* "double fault" through the heap->creating_error flag; the recursion
* then stops at the second level.
*/
@ -26,7 +26,6 @@ DUK_INTERNAL void duk_err_create_and_throw(duk_hthread *thr, duk_errcode_t code,
DUK_INTERNAL void duk_err_create_and_throw(duk_hthread *thr, duk_errcode_t code) {
#endif
duk_context *ctx = (duk_context *) thr;
duk_bool_t double_error = thr->heap->handling_error;
#if defined(DUK_USE_VERBOSE_ERRORS)
DUK_DD(DUK_DDPRINT("duk_err_create_and_throw(): code=%ld, msg=%s, filename=%s, line=%ld",
@ -39,7 +38,11 @@ DUK_INTERNAL void duk_err_create_and_throw(duk_hthread *thr, duk_errcode_t code)
DUK_ASSERT(thr != NULL);
DUK_ASSERT(ctx != NULL);
thr->heap->handling_error = 1;
/* Even though nested call is possible because we throw an error when
* trying to create an error, the potential errors must happen before
* the longjmp state is configured.
*/
DUK_ASSERT_LJSTATE_UNSET(thr->heap);
/* Sync so that augmentation sees up-to-date activations, NULL
* thr->ptr_curr_pc so that it's not used if side effects occur
@ -55,34 +58,39 @@ DUK_INTERNAL void duk_err_create_and_throw(duk_hthread *thr, duk_errcode_t code)
* to avoid further trouble.
*/
if (DUK_UNLIKELY(double_error)) {
if (thr->heap->creating_error) {
duk_tval tv_val;
duk_hobject *h_err;
duk_tval *tv_dst;
thr->heap->lj.type = DUK_LJ_TYPE_THROW;
tv_dst = &thr->heap->lj.value1;
DUK_TVAL_DECREF_NORZ(thr, tv_dst); /* XXX: shouldn't be necessary without side effects */
#if 0 /* XXX: not always true because the second throw may come from a different coroutine */
DUK_ASSERT(thr->callstack_max == DUK_CALLSTACK_DEFAULT_MAX + DUK_CALLSTACK_GROW_STEP + 11);
#endif
thr->callstack_max = DUK_CALLSTACK_DEFAULT_MAX;
thr->heap->creating_error = 0;
h_err = thr->builtins[DUK_BIDX_DOUBLE_ERROR];
if (h_err != NULL) {
DUK_D(DUK_DPRINT("double fault detected -> use built-in fixed 'double error' instance"));
DUK_TVAL_SET_OBJECT(tv_dst, h_err);
DUK_HOBJECT_INCREF(thr, h_err);
DUK_TVAL_SET_OBJECT(&tv_val, h_err);
} else {
DUK_D(DUK_DPRINT("double fault detected; there is no built-in fixed 'double error' instance "
"-> use the error code as a number"));
DUK_TVAL_SET_I32(tv_dst, (duk_int32_t) code);
DUK_TVAL_SET_I32(&tv_val, (duk_int32_t) code);
}
DUK_D(DUK_DPRINT("double error: skip throw augmenting to avoid further trouble"));
duk_err_setup_ljstate1(thr, DUK_LJ_TYPE_THROW, &tv_val);
/* No augmentation to avoid any allocations or side effects. */
} else {
/* Allow headroom for calls during error handling (see GH-191).
* We allow space for 10 additional recursions, with one extra
* for, e.g. a print() call at the deepest level.
*/
#if 0 /* XXX: not always true, second throw may come from a different coroutine */
DUK_ASSERT(thr->callstack_max == DUK_CALLSTACK_DEFAULT_MAX);
#endif
thr->callstack_max = DUK_CALLSTACK_DEFAULT_MAX + DUK_CALLSTACK_GROW_STEP + 11;
thr->heap->creating_error = 1;
duk_require_stack(ctx, 1);
@ -116,17 +124,25 @@ DUK_INTERNAL void duk_err_create_and_throw(duk_hthread *thr, duk_errcode_t code)
duk_err_augment_error_throw(thr);
#endif
duk_err_setup_heap_ljstate(thr, DUK_LJ_TYPE_THROW);
}
duk_err_setup_ljstate1(thr, DUK_LJ_TYPE_THROW, DUK_GET_TVAL_NEGIDX(ctx, -1));
thr->callstack_max = DUK_CALLSTACK_DEFAULT_MAX;
thr->heap->creating_error = 0;
thr->callstack_max = DUK_CALLSTACK_DEFAULT_MAX; /* reset callstack limit */
/* Error is now created and we assume no errors can occur any
* more. Check for debugger Throw integration only when the
* error is complete. If we enter debugger message loop,
* creating_error must be 0 so that errors can be thrown in
* the paused state, e.g. in Eval commands.
*/
#if defined(DUK_USE_DEBUGGER_SUPPORT)
duk_err_check_debugger_integration(thr);
#endif
}
/*
* Finally, longjmp
*/
thr->heap->handling_error = 0;
DUK_DDD(DUK_DDDPRINT("THROW ERROR (INTERNAL): %!iT, %!iT (after throw augment)",
(duk_tval *) &thr->heap->lj.value1, (duk_tval *) &thr->heap->lj.value2));
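In outline, the creating_error protocol above reduces to the following shape (a simplified paraphrase of the hunk, not a verbatim excerpt; tv_double is hypothetical shorthand for the preallocated DoubleError value):

if (thr->heap->creating_error) {
    /* Second fault while creating an error: reset the flag, throw the
     * fixed DoubleError (or a plain error code), skip augmentation and
     * any other side effects.
     */
    thr->heap->creating_error = 0;
    duk_err_setup_ljstate1(thr, DUK_LJ_TYPE_THROW, &tv_double);
} else {
    thr->heap->creating_error = 1;
    /* ... create and augment the error value normally ... */
    duk_err_setup_ljstate1(thr, DUK_LJ_TYPE_THROW, DUK_GET_TVAL_NEGIDX(ctx, -1));
    thr->heap->creating_error = 0;
    /* Debugger Throw integration only once the error is complete. */
}
duk_err_longjmp(thr);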

243
src-input/duk_heap.h

@ -14,12 +14,11 @@
* Heap flags
*/
#define DUK_HEAP_FLAG_MARKANDSWEEP_RUNNING (1 << 0) /* mark-and-sweep is currently running */
#define DUK_HEAP_FLAG_MARKANDSWEEP_RECLIMIT_REACHED (1 << 1) /* mark-and-sweep marking reached a recursion limit and must use multi-pass marking */
#define DUK_HEAP_FLAG_ERRHANDLER_RUNNING (1 << 2) /* an error handler (user callback to augment/replace error) is running */
#define DUK_HEAP_FLAG_INTERRUPT_RUNNING (1 << 3) /* executor interrupt running (used to avoid nested interrupts) */
#define DUK_HEAP_FLAG_FINALIZER_NORESCUE (1 << 4) /* heap destruction ongoing, finalizer rescue no longer possible */
#define DUK_HEAP_FLAG_DEBUGGER_PAUSED (1 << 5) /* debugger is paused: talk with debug client until step/resume */
#define DUK_HEAP_FLAG_MARKANDSWEEP_RECLIMIT_REACHED (1 << 0) /* mark-and-sweep marking reached a recursion limit and must use multi-pass marking */
#define DUK_HEAP_FLAG_ERRHANDLER_RUNNING (1 << 1) /* an error handler (user callback to augment/replace error) is running */
#define DUK_HEAP_FLAG_INTERRUPT_RUNNING (1 << 2) /* executor interrupt running (used to avoid nested interrupts) */
#define DUK_HEAP_FLAG_FINALIZER_NORESCUE (1 << 3) /* heap destruction ongoing, finalizer rescue no longer possible */
#define DUK_HEAP_FLAG_DEBUGGER_PAUSED (1 << 4) /* debugger is paused: talk with debug client until step/resume */
#define DUK__HEAP_HAS_FLAGS(heap,bits) ((heap)->flags & (bits))
#define DUK__HEAP_SET_FLAGS(heap,bits) do { \
@ -29,21 +28,18 @@
(heap)->flags &= ~(bits); \
} while (0)
#define DUK_HEAP_HAS_MARKANDSWEEP_RUNNING(heap) DUK__HEAP_HAS_FLAGS((heap), DUK_HEAP_FLAG_MARKANDSWEEP_RUNNING)
#define DUK_HEAP_HAS_MARKANDSWEEP_RECLIMIT_REACHED(heap) DUK__HEAP_HAS_FLAGS((heap), DUK_HEAP_FLAG_MARKANDSWEEP_RECLIMIT_REACHED)
#define DUK_HEAP_HAS_ERRHANDLER_RUNNING(heap) DUK__HEAP_HAS_FLAGS((heap), DUK_HEAP_FLAG_ERRHANDLER_RUNNING)
#define DUK_HEAP_HAS_INTERRUPT_RUNNING(heap) DUK__HEAP_HAS_FLAGS((heap), DUK_HEAP_FLAG_INTERRUPT_RUNNING)
#define DUK_HEAP_HAS_FINALIZER_NORESCUE(heap) DUK__HEAP_HAS_FLAGS((heap), DUK_HEAP_FLAG_FINALIZER_NORESCUE)
#define DUK_HEAP_HAS_DEBUGGER_PAUSED(heap) DUK__HEAP_HAS_FLAGS((heap), DUK_HEAP_FLAG_DEBUGGER_PAUSED)
#define DUK_HEAP_SET_MARKANDSWEEP_RUNNING(heap) DUK__HEAP_SET_FLAGS((heap), DUK_HEAP_FLAG_MARKANDSWEEP_RUNNING)
#define DUK_HEAP_SET_MARKANDSWEEP_RECLIMIT_REACHED(heap) DUK__HEAP_SET_FLAGS((heap), DUK_HEAP_FLAG_MARKANDSWEEP_RECLIMIT_REACHED)
#define DUK_HEAP_SET_ERRHANDLER_RUNNING(heap) DUK__HEAP_SET_FLAGS((heap), DUK_HEAP_FLAG_ERRHANDLER_RUNNING)
#define DUK_HEAP_SET_INTERRUPT_RUNNING(heap) DUK__HEAP_SET_FLAGS((heap), DUK_HEAP_FLAG_INTERRUPT_RUNNING)
#define DUK_HEAP_SET_FINALIZER_NORESCUE(heap) DUK__HEAP_SET_FLAGS((heap), DUK_HEAP_FLAG_FINALIZER_NORESCUE)
#define DUK_HEAP_SET_DEBUGGER_PAUSED(heap) DUK__HEAP_SET_FLAGS((heap), DUK_HEAP_FLAG_DEBUGGER_PAUSED)
#define DUK_HEAP_CLEAR_MARKANDSWEEP_RUNNING(heap) DUK__HEAP_CLEAR_FLAGS((heap), DUK_HEAP_FLAG_MARKANDSWEEP_RUNNING)
#define DUK_HEAP_CLEAR_MARKANDSWEEP_RECLIMIT_REACHED(heap) DUK__HEAP_CLEAR_FLAGS((heap), DUK_HEAP_FLAG_MARKANDSWEEP_RECLIMIT_REACHED)
#define DUK_HEAP_CLEAR_ERRHANDLER_RUNNING(heap) DUK__HEAP_CLEAR_FLAGS((heap), DUK_HEAP_FLAG_ERRHANDLER_RUNNING)
#define DUK_HEAP_CLEAR_INTERRUPT_RUNNING(heap) DUK__HEAP_CLEAR_FLAGS((heap), DUK_HEAP_FLAG_INTERRUPT_RUNNING)
@ -71,25 +67,19 @@
* field and the GC caller can impose further flags.
*/
/* Emergency mode: try extra hard, even at the cost of performance. */
/* Emergency mark-and-sweep: try extra hard, even at the cost of
* performance.
*/
#define DUK_MS_FLAG_EMERGENCY (1 << 0)
/* Voluntary mark-and-sweep: triggered periodically. */
#define DUK_MS_FLAG_VOLUNTARY (1 << 1)
/* Don't compact objects; needed during object property table resize
* to prevent a recursive resize. It would suffice to protect only the
* current object being resized, but this is not yet implemented.
*/
#define DUK_MS_FLAG_NO_OBJECT_COMPACTION (1 << 1)
/* Don't run finalizers, leave finalizable objects in finalize_list for
* next mark-and-sweep round. Finalizers may have arbitrary side effects.
*/
#define DUK_MS_FLAG_NO_FINALIZERS (1 << 2)
/* Don't run finalizers, queue finalizable objects back to heap_allocated.
* This is used during heap destruction to deal with finalizers that keep
* on creating more finalizable garbage.
*/
#define DUK_MS_FLAG_SKIP_FINALIZERS (1 << 3)
#define DUK_MS_FLAG_NO_OBJECT_COMPACTION (1 << 2)
/*
* Thread switching
@ -144,8 +134,15 @@
#define DUK_HEAP_STRCACHE_SIZE 4
#define DUK_HEAP_STRINGCACHE_NOCACHE_LIMIT 16 /* strings up to the this length are not cached */
/* helper to insert a (non-string) heap object into heap allocated list */
#define DUK_HEAP_INSERT_INTO_HEAP_ALLOCATED(heap,hdr) duk_heap_insert_into_heap_allocated((heap),(hdr))
/* Some list management macros. */
#define DUK_HEAP_INSERT_INTO_HEAP_ALLOCATED(heap,hdr) duk_heap_insert_into_heap_allocated((heap), (hdr))
#if defined(DUK_USE_REFERENCE_COUNTING)
#define DUK_HEAP_REMOVE_FROM_HEAP_ALLOCATED(heap,hdr) duk_heap_remove_from_heap_allocated((heap), (hdr))
#endif
#if defined(DUK_USE_FINALIZER_SUPPORT)
#define DUK_HEAP_INSERT_INTO_FINALIZE_LIST(heap,hdr) duk_heap_insert_into_finalize_list((heap), (hdr))
#define DUK_HEAP_REMOVE_FROM_FINALIZE_LIST(heap,hdr) duk_heap_remove_from_finalize_list((heap), (hdr))
#endif
/*
* Built-in strings
@ -267,27 +264,6 @@ struct duk_breakpoint {
duk_uint32_t line;
};
#if defined(DUK_USE_DEBUGGER_SUPPORT)
#define DUK_HEAP_IS_DEBUGGER_ATTACHED(heap) ((heap)->dbg_read_cb != NULL)
#define DUK_HEAP_CLEAR_STEP_STATE(heap) do { \
(heap)->dbg_step_type = DUK_STEP_TYPE_NONE; \
(heap)->dbg_step_thread = NULL; \
(heap)->dbg_step_csindex = 0; \
(heap)->dbg_step_startline = 0; \
} while (0)
#define DUK_HEAP_SET_PAUSED(heap) do { \
DUK_HEAP_SET_DEBUGGER_PAUSED(heap); \
(heap)->dbg_state_dirty = 1; \
DUK_HEAP_CLEAR_STEP_STATE((heap)); \
} while (0)
#define DUK_HEAP_CLEAR_PAUSED(heap) do { \
DUK_HEAP_CLEAR_DEBUGGER_PAUSED(heap); \
(heap)->dbg_state_dirty = 1; \
DUK_HEAP_CLEAR_STEP_STATE((heap)); \
} while (0)
#define DUK_HEAP_IS_PAUSED(heap) (DUK_HEAP_HAS_DEBUGGER_PAUSED((heap)))
#endif /* DUK_USE_DEBUGGER_SUPPORT */
/*
* String cache should ideally be at duk_hthread level, but that would
* cause string finalization to slow down relative to the number of
@ -316,6 +292,18 @@ struct duk_ljstate {
duk_tval value2; /* 2nd related value (type specific) */
};
#define DUK_ASSERT_LJSTATE_UNSET(heap) do { \
DUK_ASSERT(heap != NULL); \
DUK_ASSERT(heap->lj.type == DUK_LJ_TYPE_UNKNOWN); \
DUK_ASSERT(heap->lj.iserror == 0); \
DUK_ASSERT(DUK_TVAL_IS_UNDEFINED(&heap->lj.value1)); \
DUK_ASSERT(DUK_TVAL_IS_UNDEFINED(&heap->lj.value2)); \
} while (0)
#define DUK_ASSERT_LJSTATE_SET(heap) do { \
DUK_ASSERT(heap != NULL); \
DUK_ASSERT(heap->lj.type != DUK_LJ_TYPE_UNKNOWN); \
} while (0)
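The two macros encode the longjmp state lifecycle; a sketch of the intended sequence at a throw site:

DUK_ASSERT_LJSTATE_UNSET(thr->heap);  /* idle: type UNKNOWN, values undefined */
duk_err_setup_ljstate1(thr, DUK_LJ_TYPE_THROW, tv_val);
DUK_ASSERT_LJSTATE_SET(thr->heap);    /* armed for longjmp */
duk_err_longjmp(thr);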
/*
* Main heap structure
*/
@ -333,11 +321,6 @@ struct duk_heap {
*/
void *heap_udata;
/* Precomputed pointers when using 16-bit heap pointer packing. */
#if defined(DUK_USE_HEAPPTR16)
duk_uint16_t heapptr_null16;
#endif
/* Fatal error handling, called e.g. when a longjmp() is needed but
* lj.jmpbuf_ptr is NULL. fatal_func must never return; it's not
* declared as "noreturn" because doing that for typedefs is a bit
@ -345,54 +328,110 @@ struct duk_heap {
*/
duk_fatal_function fatal_func;
/* allocated heap objects */
/* Main list of allocated heap objects. Objects are either here,
* in finalize_list waiting for processing, or in refzero_list
* temporarily while a DECREF refzero cascade finishes.
*/
duk_heaphdr *heap_allocated;
/* work list for objects whose refcounts are zero but which have not been
* "finalized"; avoids recursive C calls when refcounts go to zero in a
* chain of objects.
/* Temporary work list for freeing a cascade of objects when a DECREF
* (or DECREF_NORZ) encounters a zero refcount. Using a work list
* allows fixed C stack size when refcounts go to zero for a chain of
* objects. Outside of DECREF this is always a NULL because DECREF is
* processed without side effects (only memory free calls).
*/
#if defined(DUK_USE_REFERENCE_COUNTING)
duk_heaphdr *refzero_list;
duk_heaphdr *refzero_list_tail;
duk_bool_t refzero_free_running;
#endif
/* mark-and-sweep control */
/* Work list for objects to be finalized (by mark-and-sweep). */
#if defined(DUK_USE_FINALIZER_SUPPORT)
duk_heaphdr *finalize_list;
#endif
/* Voluntary mark-and-sweep trigger counter. Intentionally signed
* because we continue decreasing the value when voluntary GC cannot
* run.
*/
#if defined(DUK_USE_VOLUNTARY_GC)
duk_int_t mark_and_sweep_trigger_counter;
duk_int_t ms_trigger_counter;
#endif
duk_int_t mark_and_sweep_recursion_depth;
/* mark-and-sweep flags automatically active (used for critical sections) */
duk_small_uint_t mark_and_sweep_base_flags;
/* Mark-and-sweep recursion control: too deep recursion causes
* multi-pass processing to avoid growing C stack without bound.
*/
duk_uint_t ms_recursion_depth;
/* work list for objects to be finalized (by mark-and-sweep) */
duk_heaphdr *finalize_list;
/* Mark-and-sweep flags automatically active (used for critical sections). */
duk_small_uint_t ms_base_flags;
/* longjmp state */
duk_ljstate lj;
/* Mark-and-sweep running flag. Prevents re-entry, and also causes
* refzero events to be ignored (= objects won't be queued to refzero_list).
*/
duk_uint_t ms_running;
/* marker for detecting internal "double faults", see duk_error_throw.c */
duk_bool_t handling_error;
/* Mark-and-sweep prevent count, stacking. Used to avoid M&S side
* effects (besides finalizers which are controlled separately) such
* as compacting the string table or object property tables. This
* is also bumped when ms_running is set to prevent recursive re-entry.
* Can also be bumped when mark-and-sweep is not running.
*/
duk_uint_t ms_prevent_count;
/* heap thread, used internally and for finalization */
/* Finalizer processing prevent count, stacking. Bumped when finalizers
* are processed to prevent recursive finalizer processing (first call site
* processing finalizers handles all finalizers until the list is empty).
* Can also be bumped explicitly to prevent finalizer execution.
*/
duk_uint_t pf_prevent_count;
/* When processing finalize_list, don't actually run finalizers but
* queue finalizable objects back to heap_allocated as is. This is
* used during heap destruction to deal with finalizers that keep
* on creating more finalizable garbage.
*/
duk_uint_t pf_skip_finalizers;
#if defined(DUK_USE_ASSERTIONS)
/* Set when we're in a critical path where an error throw would cause
* e.g. sandboxing/protected call violations or state corruption. This
* is just used for asserts.
*/
duk_bool_t error_not_allowed;
#endif
#if defined(DUK_USE_ASSERTIONS)
/* Set when heap is still being initialized, helps with writing
* some assertions.
*/
duk_bool_t heap_initializing;
#endif
/* Marker for detecting internal "double faults", errors thrown when
* we're trying to create an error object, see duk_error_throw.c.
*/
duk_bool_t creating_error;
/* Longjmp state. */
duk_ljstate lj;
/* Heap thread, used internally and for finalization. */
duk_hthread *heap_thread;
/* current thread */
duk_hthread *curr_thread; /* currently running thread */
/* Current running thread. */
duk_hthread *curr_thread;
/* heap level "stash" object (e.g., various reachability roots) */
/* Heap level "stash" object (e.g., various reachability roots). */
duk_hobject *heap_object;
/* duk_handle_call / duk_handle_safe_call recursion depth limiting */
duk_int_t call_recursion_depth;
duk_int_t call_recursion_limit;
/* mix-in value for computing string hashes; should be reasonably unpredictable */
/* Mix-in value for computing string hashes; should be reasonably unpredictable. */
duk_uint32_t hash_seed;
/* rnd_state for duk_util_tinyrandom.c */
/* Random number state for duk_util_tinyrandom.c. */
#if !defined(DUK_USE_GET_RANDOM_DOUBLE)
#if defined(DUK_USE_PREFER_SIZE) || !defined(DUK_USE_64BIT_OPS)
duk_uint32_t rnd_state; /* State for Shamir's three-op algorithm */
@ -401,7 +440,7 @@ struct duk_heap {
#endif
#endif
/* counter for unique local symbol creation */
/* Counter for unique local symbol creation. */
/* XXX: When 64-bit types are available, it would be more efficient to
* use a duk_uint64_t at least for incrementing but maybe also for
* string formatting in the Symbol constructor.
@ -417,10 +456,9 @@ struct duk_heap {
duk_int_t inst_count_interrupt;
#endif
/* debugger */
/* Debugger state. */
#if defined(DUK_USE_DEBUGGER_SUPPORT)
/* callbacks and udata; dbg_read_cb != NULL is used to indicate attached state */
/* Callbacks and udata; dbg_read_cb != NULL is used to indicate attached state. */
duk_debug_read_function dbg_read_cb; /* required, NULL implies detached */
duk_debug_write_function dbg_write_cb; /* required */
duk_debug_peek_function dbg_peek_cb;
@ -430,7 +468,7 @@ struct duk_heap {
duk_debug_detached_function dbg_detached_cb;
void *dbg_udata;
/* debugger state, only relevant when attached */
/* The following are only relevant when debugger is attached. */
duk_bool_t dbg_processing; /* currently processing messages or breakpoints: don't enter message processing recursively (e.g. no breakpoints when processing debugger eval) */
duk_bool_t dbg_state_dirty; /* resend state next time executor is about to run */
duk_bool_t dbg_force_restart; /* force executor restart to recheck breakpoints; used to handle function returns (see GH-303) */
@ -454,7 +492,7 @@ struct duk_heap {
duk_uint8_t dbg_next_byte;
#endif
/* string intern table (weak refs) */
/* String intern table (weak refs). */
#if defined(DUK_USE_STRTAB_PTRCOMP)
duk_uint16_t *strtable16;
#else
@ -467,12 +505,12 @@ struct duk_heap {
#endif
duk_bool_t st_resizing; /* string table is being resized; avoid recursive resize */
/* string access cache (codepoint offset -> byte offset) for fast string
/* String access cache (codepoint offset -> byte offset) for fast string
* character looping; 'weak' reference which needs special handling in GC.
*/
duk_strcache strcache[DUK_HEAP_STRCACHE_SIZE];
/* built-in strings */
/* Built-in strings. */
#if defined(DUK_USE_ROM_STRINGS)
/* No field needed when strings are in ROM. */
#else
@ -501,8 +539,12 @@ DUK_INTERNAL_DECL void duk_free_hstring(duk_heap *heap, duk_hstring *h);
DUK_INTERNAL_DECL void duk_heap_free_heaphdr_raw(duk_heap *heap, duk_heaphdr *hdr);
DUK_INTERNAL_DECL void duk_heap_insert_into_heap_allocated(duk_heap *heap, duk_heaphdr *hdr);
#if defined(DUK_USE_DOUBLE_LINKED_HEAP) && defined(DUK_USE_REFERENCE_COUNTING)
DUK_INTERNAL_DECL void duk_heap_remove_any_from_heap_allocated(duk_heap *heap, duk_heaphdr *hdr);
#if defined(DUK_USE_REFERENCE_COUNTING)
DUK_INTERNAL_DECL void duk_heap_remove_from_heap_allocated(duk_heap *heap, duk_heaphdr *hdr);
#endif
#if defined(DUK_USE_FINALIZER_SUPPORT)
DUK_INTERNAL_DECL void duk_heap_insert_into_finalize_list(duk_heap *heap, duk_heaphdr *hdr);
DUK_INTERNAL_DECL void duk_heap_remove_from_finalize_list(duk_heap *heap, duk_heaphdr *hdr);
#endif
#if defined(DUK_USE_INTERRUPT_COUNTER)
DUK_INTERNAL_DECL void duk_heap_switch_thread(duk_heap *heap, duk_hthread *new_thr);
@ -539,37 +581,12 @@ DUK_INTERNAL_DECL void *duk_heap_mem_realloc(duk_heap *heap, void *ptr, duk_size
DUK_INTERNAL_DECL void *duk_heap_mem_realloc_indirect(duk_heap *heap, duk_mem_getptr cb, void *ud, duk_size_t newsize);
DUK_INTERNAL_DECL void duk_heap_mem_free(duk_heap *heap, void *ptr);
#if defined(DUK_USE_REFERENCE_COUNTING)
DUK_INTERNAL_DECL void duk_refzero_free_pending(duk_hthread *thr);
DUK_INTERNAL_DECL void duk_heaphdr_refcount_finalize(duk_hthread *thr, duk_heaphdr *hdr);
#if 0 /* Not needed: fast path handles inline; slow path uses duk_heaphdr_decref() which is needed anyway. */
DUK_INTERNAL_DECL void duk_hstring_decref(duk_hthread *thr, duk_hstring *h);
DUK_INTERNAL_DECL void duk_hstring_decref_norz(duk_hthread *thr, duk_hstring *h);
DUK_INTERNAL_DECL void duk_hbuffer_decref(duk_hthread *thr, duk_hbuffer *h);
DUK_INTERNAL_DECL void duk_hbuffer_decref_norz(duk_hthread *thr, duk_hbuffer *h);
DUK_INTERNAL_DECL void duk_hobject_decref(duk_hthread *thr, duk_hobject *h);
DUK_INTERNAL_DECL void duk_hobject_decref_norz(duk_hthread *thr, duk_hobject *h);
#endif
DUK_INTERNAL_DECL void duk_heaphdr_refzero(duk_hthread *thr, duk_heaphdr *h);
DUK_INTERNAL_DECL void duk_heaphdr_refzero_norz(duk_hthread *thr, duk_heaphdr *h);
#if defined(DUK_USE_FAST_REFCOUNT_DEFAULT)
DUK_INTERNAL_DECL void duk_hstring_refzero(duk_hthread *thr, duk_hstring *h); /* no 'norz' variant */
DUK_INTERNAL_DECL void duk_hbuffer_refzero(duk_hthread *thr, duk_hbuffer *h); /* no 'norz' variant */
DUK_INTERNAL_DECL void duk_hobject_refzero(duk_hthread *thr, duk_hobject *h);
DUK_INTERNAL_DECL void duk_hobject_refzero_norz(duk_hthread *thr, duk_hobject *h);
#else
DUK_INTERNAL_DECL void duk_tval_incref(duk_tval *tv);
DUK_INTERNAL_DECL void duk_tval_decref(duk_hthread *thr, duk_tval *tv);
DUK_INTERNAL_DECL void duk_tval_decref_norz(duk_hthread *thr, duk_tval *tv);
DUK_INTERNAL_DECL void duk_heaphdr_incref(duk_heaphdr *h);
DUK_INTERNAL_DECL void duk_heaphdr_decref(duk_hthread *thr, duk_heaphdr *h);
DUK_INTERNAL_DECL void duk_heaphdr_decref_norz(duk_hthread *thr, duk_heaphdr *h);
#endif
#else /* DUK_USE_REFERENCE_COUNTING */
/* no refcounting */
#endif /* DUK_USE_REFERENCE_COUNTING */
#if defined(DUK_USE_FINALIZER_SUPPORT)
DUK_INTERNAL_DECL void duk_heap_run_finalizer(duk_heap *heap, duk_hobject *obj);
DUK_INTERNAL_DECL void duk_heap_process_finalize_list(duk_heap *heap);
#endif /* DUK_USE_FINALIZER_SUPPORT */
DUK_INTERNAL_DECL duk_bool_t duk_heap_mark_and_sweep(duk_heap *heap, duk_small_uint_t flags);
DUK_INTERNAL_DECL void duk_heap_mark_and_sweep(duk_heap *heap, duk_small_uint_t flags);
DUK_INTERNAL_DECL duk_uint32_t duk_heap_hashstring(duk_heap *heap, const duk_uint8_t *str, duk_size_t len);
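Taken together, the renamed and added fields split the old mark_and_sweep_* state into explicit gates. A simplified sketch of how the entry points consult them (illustrative only; the real checks live in duk_heap_markandsweep.c and duk_heap_finalize.c):

static void maybe_collect(duk_heap *heap, duk_small_uint_t flags) {
    if (heap->ms_prevent_count == 0) {
        /* Sets ms_running and bumps ms_prevent_count internally; may
         * queue finalizable objects onto finalize_list.
         */
        duk_heap_mark_and_sweep(heap, flags | heap->ms_base_flags);
    }
    if (heap->pf_prevent_count == 0) {
        duk_heap_process_finalize_list(heap);  /* actually runs finalizers */
    }
}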

205
src-input/duk_heap_alloc.c

@ -92,11 +92,9 @@ DUK_INTERNAL void duk_heap_free_heaphdr_raw(duk_heap *heap, duk_heaphdr *hdr) {
case DUK_HTYPE_OBJECT:
duk_free_hobject(heap, (duk_hobject *) hdr);
break;
case DUK_HTYPE_BUFFER:
duk_free_hbuffer(heap, (duk_hbuffer *) hdr);
break;
default:
DUK_UNREACHABLE();
DUK_ASSERT(DUK_HEAPHDR_GET_TYPE(hdr) == DUK_HTYPE_BUFFER);
duk_free_hbuffer(heap, (duk_hbuffer *) hdr);
}
}
@ -132,23 +130,8 @@ DUK_LOCAL void duk__free_allocated(duk_heap *heap) {
}
}
#if defined(DUK_USE_REFERENCE_COUNTING)
DUK_LOCAL void duk__free_refzero_list(duk_heap *heap) {
duk_heaphdr *curr;
duk_heaphdr *next;
curr = heap->refzero_list;
while (curr) {
DUK_DDD(DUK_DDDPRINT("FINALFREE (refzero_list): %!iO",
(duk_heaphdr *) curr));
next = DUK_HEAPHDR_GET_NEXT(heap, curr);
duk_heap_free_heaphdr_raw(heap, curr);
curr = next;
}
}
#endif
DUK_LOCAL void duk__free_markandsweep_finalize_list(duk_heap *heap) {
#if defined(DUK_USE_FINALIZER_SUPPORT)
DUK_LOCAL void duk__free_finalize_list(duk_heap *heap) {
duk_heaphdr *curr;
duk_heaphdr *next;
@ -161,6 +144,7 @@ DUK_LOCAL void duk__free_markandsweep_finalize_list(duk_heap *heap) {
curr = next;
}
}
#endif /* DUK_USE_FINALIZER_SUPPORT */
DUK_LOCAL void duk__free_stringtable(duk_heap *heap) {
/* strings are only tracked by stringtable */
@ -169,7 +153,6 @@ DUK_LOCAL void duk__free_stringtable(duk_heap *heap) {
#if defined(DUK_USE_FINALIZER_SUPPORT)
DUK_LOCAL void duk__free_run_finalizers(duk_heap *heap) {
duk_hthread *thr;
duk_heaphdr *curr;
duk_uint_t round_no;
duk_size_t count_all;
@ -177,25 +160,31 @@ DUK_LOCAL void duk__free_run_finalizers(duk_heap *heap) {
duk_size_t curr_limit;
DUK_ASSERT(heap != NULL);
DUK_ASSERT(heap->heap_thread != NULL);
#if defined(DUK_USE_REFERENCE_COUNTING)
DUK_ASSERT(heap->refzero_list == NULL); /* refzero not running -> must be empty */
#endif
DUK_ASSERT(heap->finalize_list == NULL); /* mark-and-sweep not running -> must be empty */
DUK_ASSERT(heap->finalize_list == NULL); /* mark-and-sweep last pass */
/* XXX: here again finalizer thread is the heap_thread which needs
* to be coordinated with finalizer thread fixes.
*/
thr = heap->heap_thread;
DUK_ASSERT(thr != NULL);
if (heap->heap_thread == NULL) {
/* May happen when heap allocation fails right off. There
* cannot be any finalizable objects in this case.
*/
DUK_D(DUK_DPRINT("no heap_thread in heap destruct, assume no finalizable objects"));
return;
}
/* Prevent mark-and-sweep for the pending finalizers, also prevents
* refzero handling from moving objects away from the heap_allocated
* list. (The flag meaning is slightly abused here.)
/* Prevent finalize_list processing and mark-and-sweep entirely.
* Setting ms_running = 1 also prevents refzero handling from moving
* objects away from the heap_allocated list (the flag name is a bit
* misleading here).
*/
DUK_ASSERT(!DUK_HEAP_HAS_MARKANDSWEEP_RUNNING(heap));
DUK_HEAP_SET_MARKANDSWEEP_RUNNING(heap);
DUK_ASSERT(heap->pf_prevent_count == 0);
heap->pf_prevent_count = 1;
DUK_ASSERT(heap->ms_running == 0);
heap->ms_running = 1;
DUK_ASSERT(heap->ms_prevent_count == 0);
heap->ms_prevent_count = 1; /* Bump, because mark-and-sweep assumes it's bumped when ms_running is set. */
curr_limit = 0; /* suppress warning, not used */
for (round_no = 0; ; round_no++) {
@ -204,18 +193,17 @@ DUK_LOCAL void duk__free_run_finalizers(duk_heap *heap) {
count_finalized = 0;
while (curr) {
count_all++;
if (DUK_HEAPHDR_GET_TYPE(curr) == DUK_HTYPE_OBJECT) {
if (DUK_HEAPHDR_IS_OBJECT(curr)) {
/* Only objects in heap_allocated may have finalizers. Check that
* the object itself has a _Finalizer property (own or inherited)
* so that we don't execute finalizers for e.g. Proxy objects.
*/
DUK_ASSERT(thr != NULL);
DUK_ASSERT(curr != NULL);
if (duk_hobject_has_finalizer_fast(thr, (duk_hobject *) curr)) {
if (DUK_HOBJECT_HAS_FINALIZER_FAST(heap, (duk_hobject *) curr)) {
if (!DUK_HEAPHDR_HAS_FINALIZED((duk_heaphdr *) curr)) {
DUK_ASSERT(DUK_HEAP_HAS_FINALIZER_NORESCUE(heap)); /* maps to finalizer 2nd argument */
duk_hobject_run_finalizer(thr, (duk_hobject *) curr);
duk_heap_run_finalizer(heap, (duk_hobject *) curr);
count_finalized++;
}
}
@ -256,8 +244,10 @@ DUK_LOCAL void duk__free_run_finalizers(duk_heap *heap) {
}
}
DUK_ASSERT(DUK_HEAP_HAS_MARKANDSWEEP_RUNNING(heap));
DUK_HEAP_CLEAR_MARKANDSWEEP_RUNNING(heap);
DUK_ASSERT(heap->ms_running == 1);
heap->ms_running = 0;
DUK_ASSERT(heap->pf_prevent_count == 1);
heap->pf_prevent_count = 0;
}
#endif /* DUK_USE_FINALIZER_SUPPORT */
@ -279,32 +269,47 @@ DUK_INTERNAL void duk_heap_free(duk_heap *heap) {
#endif
/* Execute finalizers before freeing the heap, even for reachable
* objects, and regardless of whether or not mark-and-sweep is
* enabled. This gives finalizers the chance to free any native
* objects. This gives finalizers the chance to free any native
* resources like file handles, allocations made outside Duktape,
* etc. This is quite tricky to get right, so that all finalizer
* guarantees are honored.
*
* XXX: this perhaps requires an execution time limit.
*/
DUK_D(DUK_DPRINT("execute finalizers before freeing heap"));
/* Run mark-and-sweep a few times just in case (unreachable object
* Run mark-and-sweep a few times just in case (unreachable object
* finalizers run already here). The last round must rescue objects
* from the previous round without running any more finalizers. This
* ensures rescued objects get their FINALIZED flag cleared so that
* their finalizer is called once more in forced finalization to
* satisfy finalizer guarantees. However, we don't want to run any
* more finalizer because that'd required one more loop, and so on.
* more finalizers because that'd require one more loop, and so on.
*
* XXX: this perhaps requires an execution time limit.
*/
DUK_D(DUK_DPRINT("execute finalizers before freeing heap"));
DUK_ASSERT(heap->pf_skip_finalizers == 0);
DUK_D(DUK_DPRINT("forced gc #1 in heap destruction"));
duk_heap_mark_and_sweep(heap, 0);
DUK_D(DUK_DPRINT("forced gc #2 in heap destruction"));
duk_heap_mark_and_sweep(heap, 0);
DUK_D(DUK_DPRINT("forced gc #3 in heap destruction (don't run finalizers)"));
duk_heap_mark_and_sweep(heap, DUK_MS_FLAG_SKIP_FINALIZERS); /* skip finalizers; queue finalizable objects to heap_allocated */
heap->pf_skip_finalizers = 1;
duk_heap_mark_and_sweep(heap, 0); /* Skip finalizers; queue finalizable objects to heap_allocated. */
/* There are never objects in refzero_list at this point, or at any
* point beyond a DECREF (even a DECREF_NORZ). Since Duktape 2.1
* refzero_list processing is side effect free, so it is always
* processed to completion by a DECREF initially triggering a zero
* refcount.
*/
#if defined(DUK_USE_REFERENCE_COUNTING)
DUK_ASSERT(heap->refzero_list == NULL); /* Always processed to completion inline. */
#endif
#if defined(DUK_USE_FINALIZER_SUPPORT)
DUK_ASSERT(heap->finalize_list == NULL); /* Last mark-and-sweep with skip_finalizers. */
#endif
#if defined(DUK_USE_FINALIZER_SUPPORT)
DUK_HEAP_SET_FINALIZER_NORESCUE(heap); /* rescue no longer supported */
DUK_D(DUK_DPRINT("run finalizers for remaining finalizable objects"));
DUK_HEAP_SET_FINALIZER_NORESCUE(heap); /* Rescue no longer supported. */
duk__free_run_finalizers(heap);
#endif /* DUK_USE_FINALIZER_SUPPORT */
@ -312,16 +317,17 @@ DUK_INTERNAL void duk_heap_free(duk_heap *heap) {
* are on the heap allocated list.
*/
DUK_D(DUK_DPRINT("freeing heap objects of heap: %p", (void *) heap));
DUK_D(DUK_DPRINT("freeing heap_allocated of heap: %p", (void *) heap));
duk__free_allocated(heap);
#if defined(DUK_USE_REFERENCE_COUNTING)
DUK_D(DUK_DPRINT("freeing refzero list of heap: %p", (void *) heap));
duk__free_refzero_list(heap);
DUK_ASSERT(heap->refzero_list == NULL); /* Always processed to completion inline. */
#endif
DUK_D(DUK_DPRINT("freeing mark-and-sweep finalize list of heap: %p", (void *) heap));
duk__free_markandsweep_finalize_list(heap);
#if defined(DUK_USE_FINALIZER_SUPPORT)
DUK_D(DUK_DPRINT("freeing finalize_list of heap: %p", (void *) heap));
duk__free_finalize_list(heap);
#endif
DUK_D(DUK_DPRINT("freeing string table of heap: %p", (void *) heap));
duk__free_stringtable(heap);
@ -435,7 +441,7 @@ DUK_LOCAL duk_bool_t duk__init_heap_strings(duk_heap *heap) {
DUK_LOCAL duk_bool_t duk__init_heap_thread(duk_heap *heap) {
duk_hthread *thr;
DUK_DD(DUK_DDPRINT("heap init: alloc heap thread"));
DUK_D(DUK_DPRINT("heap init: alloc heap thread"));
thr = duk_hthread_alloc_unchecked(heap,
DUK_HOBJECT_FLAG_EXTENSIBLE |
DUK_HOBJECT_CLASS_AS_FLAGS(DUK_HOBJECT_CLASS_THREAD));
@ -459,6 +465,7 @@ DUK_LOCAL duk_bool_t duk__init_heap_thread(duk_heap *heap) {
/* 'thr' is now reachable */
DUK_D(DUK_DPRINT("heap init: init heap thread stacks"));
if (!duk_hthread_init_stacks(heap, thr)) {
return 0;
}
@ -709,6 +716,10 @@ duk_heap *duk_heap_alloc(duk_alloc_function alloc_func,
DUK_ASSERT(DUK_USE_STRTAB_MINSIZE >= 64);
DUK_ASSERT((DUK_HTYPE_STRING & 0x01U) == 0);
DUK_ASSERT((DUK_HTYPE_BUFFER & 0x01U) == 0);
DUK_ASSERT((DUK_HTYPE_OBJECT & 0x01U) == 1); /* DUK_HEAPHDR_IS_OBJECT() relies on this. */
/*
* Debug dump type sizes
*/
@ -724,9 +735,11 @@ duk_heap *duk_heap_alloc(duk_alloc_function alloc_func,
*/
#if defined(DUK_USE_SELF_TESTS)
DUK_D(DUK_DPRINT("run self tests"));
if (duk_selftest_run_tests(alloc_func, realloc_func, free_func, heap_udata) > 0) {
fatal_func(heap_udata, "self test(s) failed");
}
DUK_D(DUK_DPRINT("self tests passed"));
#endif
/*
@ -783,6 +796,10 @@ duk_heap *duk_heap_alloc(duk_alloc_function alloc_func,
* Use a raw call, all macros expect the heap to be initialized
*/
#if defined(DUK_USE_INJECT_HEAP_ALLOC_ERROR) && (DUK_USE_INJECT_HEAP_ALLOC_ERROR == 1)
goto failed;
#endif
DUK_D(DUK_DPRINT("alloc duk_heap object"));
res = (duk_heap *) alloc_func(heap_udata, sizeof(duk_heap));
if (!res) {
goto failed;
@ -793,6 +810,9 @@ duk_heap *duk_heap_alloc(duk_alloc_function alloc_func,
*/
DUK_MEMZERO(res, sizeof(*res));
#if defined(DUK_USE_ASSERTIONS)
res->heap_initializing = 1;
#endif
/* explicit NULL inits */
#if defined(DUK_USE_EXPLICIT_NULL_INIT)
@ -800,9 +820,10 @@ duk_heap *duk_heap_alloc(duk_alloc_function alloc_func,
res->heap_allocated = NULL;
#if defined(DUK_USE_REFERENCE_COUNTING)
res->refzero_list = NULL;
res->refzero_list_tail = NULL;
#endif
#if defined(DUK_USE_FINALIZER_SUPPORT)
res->finalize_list = NULL;
#endif
res->heap_thread = NULL;
res->curr_thread = NULL;
res->heap_object = NULL;
@ -843,12 +864,21 @@ duk_heap *duk_heap_alloc(duk_alloc_function alloc_func,
res->heap_udata = heap_udata;
res->fatal_func = fatal_func;
#if defined(DUK_USE_HEAPPTR16)
/* XXX: zero assumption */
res->heapptr_null16 = DUK_USE_HEAPPTR_ENC16(res->heap_udata, (void *) NULL);
#endif
/* XXX: for now there's a pointer packing zero assumption, i.e.
* NULL <=> compressed pointer 0. If this is removed, may need
* to precompute e.g. null16 here.
*/
/* res->ms_trigger_counter == 0 -> now causes immediate GC; which is OK */
/* res->mark_and_sweep_trigger_counter == 0 -> now causes immediate GC; which is OK */
/* Prevent mark-and-sweep and finalizer execution until heap is completely
* initialized.
*/
DUK_ASSERT(res->ms_prevent_count == 0);
DUK_ASSERT(res->pf_prevent_count == 0);
res->ms_prevent_count = 1;
res->pf_prevent_count = 1;
DUK_ASSERT(res->ms_running == 0);
res->call_recursion_depth = 0;
res->call_recursion_limit = DUK_USE_NATIVE_CALL_RECLIMIT;
@ -876,10 +906,12 @@ duk_heap *duk_heap_alloc(duk_alloc_function alloc_func,
res->lj.jmpbuf_ptr = NULL;
#endif
DUK_ASSERT(res->lj.type == DUK_LJ_TYPE_UNKNOWN); /* zero */
DUK_ASSERT(res->lj.iserror == 0);
DUK_TVAL_SET_UNDEFINED(&res->lj.value1);
DUK_TVAL_SET_UNDEFINED(&res->lj.value2);
DUK_ASSERT_LJSTATE_UNSET(res);
/*
* Init stringtable: fixed variant
*/
@ -941,7 +973,10 @@ duk_heap *duk_heap_alloc(duk_alloc_function alloc_func,
* Init built-in strings
*/
DUK_DD(DUK_DDPRINT("HEAP: INIT STRINGS"));
#if defined(DUK_USE_INJECT_HEAP_ALLOC_ERROR) && (DUK_USE_INJECT_HEAP_ALLOC_ERROR == 2)
goto failed;
#endif
DUK_D(DUK_DPRINT("heap init: initialize heap strings"));
if (!duk__init_heap_strings(res)) {
goto failed;
}
@ -950,7 +985,10 @@ duk_heap *duk_heap_alloc(duk_alloc_function alloc_func,
* Init the heap thread
*/
DUK_DD(DUK_DDPRINT("HEAP: INIT HEAP THREAD"));
#if defined(DUK_USE_INJECT_HEAP_ALLOC_ERROR) && (DUK_USE_INJECT_HEAP_ALLOC_ERROR == 3)
goto failed;
#endif
DUK_D(DUK_DPRINT("heap init: initialize heap thread"));
if (!duk__init_heap_thread(res)) {
goto failed;
}
@ -959,7 +997,10 @@ duk_heap *duk_heap_alloc(duk_alloc_function alloc_func,
* Init the heap object
*/
DUK_DD(DUK_DDPRINT("HEAP: INIT HEAP OBJECT"));
#if defined(DUK_USE_INJECT_HEAP_ALLOC_ERROR) && (DUK_USE_INJECT_HEAP_ALLOC_ERROR == 4)
goto failed;
#endif
DUK_D(DUK_DPRINT("heap init: initialize heap object"));
DUK_ASSERT(res->heap_thread != NULL);
res->heap_object = duk_hobject_alloc_unchecked(res, DUK_HOBJECT_FLAG_EXTENSIBLE |
DUK_HOBJECT_FLAG_FASTREFS |
@ -1012,7 +1053,21 @@ duk_heap *duk_heap_alloc(duk_alloc_function alloc_func,
#endif
/*
* All done
* Allow finalizer and mark-and-sweep processing.
*/
DUK_D(DUK_DPRINT("heap init: allow finalizer/mark-and-sweep processing"));
DUK_ASSERT(res->ms_prevent_count == 1);
DUK_ASSERT(res->pf_prevent_count == 1);
res->ms_prevent_count = 0;
res->pf_prevent_count = 0;
DUK_ASSERT(res->ms_running == 0);
#if defined(DUK_USE_ASSERTIONS)
res->heap_initializing = 0;
#endif
/*
* All done.
*/
DUK_D(DUK_DPRINT("allocated heap: %p", (void *) res));
@ -1021,14 +1076,26 @@ duk_heap *duk_heap_alloc(duk_alloc_function alloc_func,
failed:
DUK_D(DUK_DPRINT("heap allocation failed"));
if (res) {
/* assumes that allocated pointers and alloc funcs are valid
* if res exists
if (res != NULL) {
/* Assumes that allocated pointers and alloc funcs are valid
* if res exists.
*/
DUK_ASSERT(res->ms_prevent_count == 1);
DUK_ASSERT(res->pf_prevent_count == 1);
DUK_ASSERT(res->ms_running == 0);
if (res->heap_thread != NULL) {
res->ms_prevent_count = 0;
res->pf_prevent_count = 0;
}
#if defined(DUK_USE_ASSERTIONS)
res->heap_initializing = 0;
#endif
DUK_ASSERT(res->alloc_func != NULL);
DUK_ASSERT(res->realloc_func != NULL);
DUK_ASSERT(res->free_func != NULL);
duk_heap_free(res);
}
return NULL;
}
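For context, the failure path above is what an embedder sees as a NULL return from duk_create_heap(). A minimal sketch assuming standard malloc-based allocators; my_alloc, my_realloc, my_free and my_fatal are illustrative names, not part of this change:

#include <stdio.h>
#include <stdlib.h>
#include "duktape.h"

static void *my_alloc(void *udata, duk_size_t size) {
	(void) udata;
	return malloc(size);
}
static void *my_realloc(void *udata, void *ptr, duk_size_t size) {
	(void) udata;
	return realloc(ptr, size);
}
static void my_free(void *udata, void *ptr) {
	(void) udata;
	free(ptr);
}
static void my_fatal(void *udata, const char *msg) {
	(void) udata;
	fprintf(stderr, "fatal error: %s\n", msg ? msg : "(no message)");
	abort();
}

int main(void) {
	duk_context *ctx = duk_create_heap(my_alloc, my_realloc, my_free, NULL, my_fatal);
	if (ctx == NULL) {
		/* duk_heap_alloc() hit its 'failed' label: the partially
		 * initialized heap was freed and NULL was returned. */
		return 1;
	}
	duk_destroy_heap(ctx);
	return 0;
}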

444
src-input/duk_heap_finalize.c

@ -0,0 +1,444 @@
/*
* Finalizer handling
*/
#include "duk_internal.h"
#if defined(DUK_USE_FINALIZER_SUPPORT)
/*
* Fake torture finalizer.
*/
#if defined(DUK_USE_FINALIZER_TORTURE)
DUK_LOCAL duk_ret_t duk__fake_global_finalizer(duk_context *ctx) {
DUK_DD(DUK_DDPRINT("fake global torture finalizer executed"));
/* Require a lot of stack to force a value stack grow/shrink. */
duk_require_stack(ctx, 100000);
/* XXX: do something to force a callstack grow/shrink, perhaps
* just a manual forced resize or a forced relocating realloc?
*/
/* Inner function call, error throw. */
duk_eval_string_noresult(ctx,
"(function dummy() {\n"
" dummy.prototype = null; /* break reference loop */\n"
" try {\n"
" throw 'fake-finalizer-dummy-error';\n"
" } catch (e) {\n"
" void e;\n"
" }\n"
"})()");
/* The above creates garbage (e.g. a function instance). Because
* the function/prototype reference loop is broken, it gets collected
* immediately by DECREF. If Function.prototype has a _Finalizer
* property (happens in some test cases), the garbage gets queued to
* finalize_list. This still won't cause an infinite loop because
* the torture finalizer is called once per finalize_list run and
* the garbage gets handled in the same run. (If the garbage needs
* mark-and-sweep collection, an infinite loop might ensue.)
*/
return 0;
}
DUK_LOCAL void duk__run_global_torture_finalizer(duk_hthread *thr) {
DUK_ASSERT(thr != NULL);
/* Avoid fake finalization when callstack limit has been reached.
* Otherwise a callstack limit error will be created, then refzero'ed.
*/
if (thr->heap->call_recursion_depth >= thr->heap->call_recursion_limit ||
thr->callstack_size + 2 * DUK_CALLSTACK_GROW_STEP >= thr->callstack_max /*approximate*/) {
DUK_D(DUK_DPRINT("skip global torture finalizer because of call recursion or call stack size limit"));
return;
}
/* Run fake finalizer. Avoid creating unnecessary garbage. */
duk_push_c_function((duk_context *) thr, duk__fake_global_finalizer, 0 /*nargs*/);
(void) duk_pcall((duk_context *) thr, 0 /*nargs*/);
duk_pop((duk_context *) thr);
}
#endif /* DUK_USE_FINALIZER_TORTURE */
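/* A sketch, not part of this diff: one way to enable the torture finalizer
 * above, assuming a hand-edited duk_config.h.  DUK_USE_FINALIZER_TORTURE
 * can equally be set through the config tooling; the manual define is
 * shown for illustration only.
 */
#undef DUK_USE_FINALIZER_TORTURE
#define DUK_USE_FINALIZER_TORTURE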
/*
* Process the finalize_list to completion.
*
* An object may be placed on finalize_list by either refcounting or
* mark-and-sweep. The refcount of objects placed by refcounting will be
* zero; the refcount of objects placed by mark-and-sweep is > 0.
*
* List processing assumes refcounts are kept up-to-date at all times, so
* that once the finalizer returns, a zero refcount is a reliable reason to
* free the object immediately rather than place it back to the heap. This
* is the case because we run outside of refzero_list processing so that
* DECREF cascades are handled fully inline.
*
* For mark-and-sweep queued objects (had_zero_refcount false) the object
* may be freed immediately if its refcount is zero after the finalizer call
* (i.e. finalizer removed the reference loop for the object). If not, the
* next mark-and-sweep will collect the object unless it has become reachable
* (i.e. rescued) by that time and its refcount hasn't fallen to zero before
* that. Mark-and-sweep detects these objects because their FINALIZED flag
* is set.
*
* There's an inherent limitation for mark-and-sweep finalizer rescuing: an
* object won't get refinalized if (1) it's rescued, but (2) becomes
* unreachable before mark-and-sweep has had time to notice it. The next
* mark-and-sweep round simply doesn't have any information of whether the
* object has been unreachable the whole time or not (the only way to get
* that information would be a mark-and-sweep pass for *every finalized
* object*). This is awkward for the application because the mark-and-sweep
* round is not generally visible or under full application control.
*
* For refcount queued objects (had_zero_refcount true) the object is either
* immediately freed or rescued, and waiting for a mark-and-sweep round is not
* necessary (or desirable); FINALIZED is cleared when a rescued object is
* queued back to heap_allocated. The object is eligible for finalization
* again (either via refcounting or mark-and-sweep) immediately after being
* rescued. If a refcount finalized object is placed into an unreachable
* reference loop by its finalizer, it will get collected by mark-and-sweep
* and currently the finalizer will execute again.
*
* There's a special case where:
*
* - Mark-and-sweep queues an object to finalize_list for finalization.
* - The finalizer is executed, FINALIZED is set, and object is queued
* back to heap_allocated, waiting for a new mark-and-sweep round.
* - The object's refcount drops to zero before mark-and-sweep has a
* chance to run another round and make a rescue/free decision.
*
* This is now handled by refzero code: if an object has a finalizer but
* FINALIZED is already set, the object is freed without finalizer processing.
* The outcome is the same as if mark-and-sweep was executed at that point;
* mark-and-sweep would also free the object without another finalizer run.
* This could also be changed so that the refzero-triggered finalizer *IS*
* executed: being refzero collected implies someone has operated on the
* object so it hasn't been totally unreachable the whole time. This would
* risk a finalizer loop however.
*/
DUK_INTERNAL void duk_heap_process_finalize_list(duk_heap *heap) {
duk_heaphdr *curr;
#if defined(DUK_USE_DEBUG)
duk_size_t count = 0;
#endif
DUK_DDD(DUK_DDDPRINT("duk_heap_process_finalize_list: %p", (void *) heap));
if (heap->pf_prevent_count != 0) {
DUK_DDD(DUK_DDDPRINT("skip finalize_list processing: pf_prevent_count != 0"));
return;
}
/* Heap alloc prevents mark-and-sweep before heap_thread is ready. */
DUK_ASSERT(heap != NULL);
DUK_ASSERT(heap->heap_thread != NULL);
DUK_ASSERT(heap->heap_thread->valstack != NULL);
DUK_ASSERT(heap->heap_thread->callstack != NULL);
DUK_ASSERT(heap->heap_thread->catchstack != NULL);
#if defined(DUK_USE_REFERENCE_COUNTING)
DUK_ASSERT(heap->refzero_list == NULL);
#endif
DUK_ASSERT(heap->pf_prevent_count == 0);
heap->pf_prevent_count = 1;
/* Bump ms_prevent_count to prevent mark-and-sweep while we execute
* finalizers. It's important for no mark-and-sweep passes to happen
* while we process the finalize_list. If a part of the finalize_list
* has been processed and mark-and-sweep runs, it will incorrectly
* consider the processed objects rescued if they are in a reference
* relationship with objects still in finalize_list. This happens
* because mark-and-sweep treats the whole finalize_list as being
* "reachable".
*/
heap->ms_prevent_count++;
DUK_ASSERT(heap->ms_prevent_count != 0); /* Wrap. */
/* When finalizer torture is enabled, make a fake finalizer call with
* maximum side effects regardless of whether finalize_list is empty.
*/
#if defined(DUK_USE_FINALIZER_TORTURE)
duk__run_global_torture_finalizer(heap->heap_thread);
#endif
/* Process finalize_list until it becomes empty. There's currently no
* protection against a finalizer always creating more garbage.
*/
while ((curr = heap->finalize_list) != NULL) {
#if defined(DUK_USE_REFERENCE_COUNTING)
duk_bool_t queue_back;
#endif
DUK_DD(DUK_DDPRINT("processing finalize_list entry: %p -> %!iO", (void *) curr, curr));
DUK_ASSERT(DUK_HEAPHDR_GET_TYPE(curr) == DUK_HTYPE_OBJECT); /* Only objects have finalizers. */
DUK_ASSERT(!DUK_HEAPHDR_HAS_REACHABLE(curr));
DUK_ASSERT(!DUK_HEAPHDR_HAS_TEMPROOT(curr));
DUK_ASSERT(DUK_HEAPHDR_HAS_FINALIZABLE(curr)); /* All objects on finalize_list will have this flag. */
DUK_ASSERT(!DUK_HEAPHDR_HAS_FINALIZED(curr)); /* Queueing code ensures. */
DUK_ASSERT(!DUK_HEAPHDR_HAS_READONLY(curr)); /* ROM objects never get freed (or finalized). */
/* Clear FINALIZABLE for object being finalized, so that
* duk_push_heapptr() can properly ignore the object.
*/
DUK_HEAPHDR_CLEAR_FINALIZABLE(curr);
/* XXX: Instead of an artificial refcount bump, could also
* push/pop the object for reachability. This doesn't really
* matter much because the object is still on finalize_list
* and treated as reachable by mark-and-sweep.
*/
if (DUK_LIKELY(!heap->pf_skip_finalizers)) {
/* Run the finalizer, duk_heap_run_finalizer() sets
* and checks for FINALIZED to prevent the finalizer
* from executing multiple times per finalization cycle.
* (This safeguard shouldn't actually be needed anymore.)
*/
#if defined(DUK_USE_REFERENCE_COUNTING)
duk_bool_t had_zero_refcount;
#endif
/* Ensure object's refcount is >0 throughout so it
* won't be refzero processed prematurely.
*/
#if defined(DUK_USE_REFERENCE_COUNTING)
had_zero_refcount = (DUK_HEAPHDR_GET_REFCOUNT(curr) == 0);
DUK_HEAPHDR_PREINC_REFCOUNT(curr);
#endif
DUK_ASSERT(!DUK_HEAPHDR_HAS_FINALIZED(curr));
duk_heap_run_finalizer(heap, (duk_hobject *) curr); /* must never longjmp */
DUK_ASSERT(DUK_HEAPHDR_HAS_FINALIZED(curr));
/* XXX: assert that object is still in finalize_list
* when duk_push_heapptr() allows automatic rescue.
*/
#if defined(DUK_USE_REFERENCE_COUNTING)
DUK_ASSERT(DUK_HEAPHDR_GET_REFCOUNT(curr) >= 1); /* ref on value stack at least */
DUK_HEAPHDR_PREDEC_REFCOUNT(curr);
DUK_DD(DUK_DDPRINT("refcount after finalizer: %ld", (long) DUK_HEAPHDR_GET_REFCOUNT(curr)));
if (DUK_HEAPHDR_GET_REFCOUNT(curr) == 0) {
#if defined(DUK_USE_DEBUG)
if (had_zero_refcount) {
DUK_DD(DUK_DDPRINT("finalized object's refcount is zero -> free immediately (refcount queued)"));
} else {
DUK_DD(DUK_DDPRINT("finalized object's refcount is zero -> free immediately (mark-and-sweep queued)"));
}
#endif
queue_back = 0;
} else
#endif
{
#if defined(DUK_USE_REFERENCE_COUNTING)
queue_back = 1;
if (had_zero_refcount) {
/* When finalization is triggered
* by refzero and we queue the object
* back, clear FINALIZED right away
* so that the object can be refinalized
* immediately if necessary.
*/
DUK_HEAPHDR_CLEAR_FINALIZED(curr);
}
#endif
}
} else {
/* Used during heap destruction: don't actually run finalizers
* because we're heading into forced finalization. Instead,
* queue finalizable objects back to the heap_allocated list.
*/
DUK_D(DUK_DPRINT("skip finalizers flag set, queue object to heap_allocated without finalizing"));
DUK_ASSERT(!DUK_HEAPHDR_HAS_FINALIZED(curr));
#if defined(DUK_USE_REFERENCE_COUNTING)
queue_back = 1;
#endif
}
/* Dequeue object from finalize_list. Note that 'curr' may no
* longer be finalize_list head because new objects may have
* been queued to the list. As a result we can't optimize for
* the single-linked heap case and must scan the list for
* removal; typically the scan is very short, however.
*/
DUK_HEAP_REMOVE_FROM_FINALIZE_LIST(heap, curr);
/* Queue back to heap_allocated or free immediately. */
#if defined(DUK_USE_REFERENCE_COUNTING)
if (queue_back) {
/* FINALIZED is only cleared if object originally
* queued for finalization by refcounting. For
* mark-and-sweep FINALIZED is left set, so that
* next mark-and-sweep round can make a rescue/free
* decision.
*/
DUK_HEAPHDR_CLEAR_FINALIZABLE(curr);
DUK_HEAP_INSERT_INTO_HEAP_ALLOCATED(heap, curr);
} else {
DUK_ASSERT(DUK_HEAPHDR_GET_TYPE(curr) == DUK_HTYPE_OBJECT); /* currently, always the case */
DUK_DD(DUK_DDPRINT("refcount finalize after finalizer call: %!O", curr));
duk_hobject_refcount_finalize_norz(heap, (duk_hobject *) curr);
duk_free_hobject(heap, (duk_hobject *) curr);
DUK_DD(DUK_DDPRINT("freed hobject after finalization: %p", (void *) curr));
}
#else
DUK_HEAPHDR_CLEAR_FINALIZABLE(curr);
DUK_HEAP_INSERT_INTO_HEAP_ALLOCATED(heap, curr);
#endif
#if defined(DUK_USE_DEBUG)
count++;
#endif
}
/* finalize_list will always be processed completely. */
DUK_ASSERT(heap->finalize_list == NULL);
#if 0
/* While NORZ macros are used above, this is unnecessary because the
* only pending side effects are now finalizers, and finalize_list is
* empty.
*/
DUK_REFZERO_CHECK_SLOW(heap->heap_thread);
#endif
DUK_ASSERT(heap->ms_prevent_count > 0);
heap->ms_prevent_count--;
/* Prevent count may be bumped while finalizers run, but should always
* be reliably unbumped by the time we get here.
*/
DUK_ASSERT(heap->pf_prevent_count == 1);
heap->pf_prevent_count = 0;
#if defined(DUK_USE_DEBUG)
DUK_DD(DUK_DDPRINT("duk_heap_process_finalize_list: %ld finalizers called", (long) count));
#endif
}
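/* A sketch, not part of this diff: what a rescuing finalizer looks like
 * when written against the public API (the stash key name is illustrative).
 * Making the object reachable again keeps its refcount above zero when the
 * finalizer returns, so the loop above queues it back to heap_allocated
 * instead of freeing it; for a refzero-queued object FINALIZED is also
 * cleared so the object can be finalized again later.
 */
#include "duktape.h"

static duk_ret_t rescuing_finalizer(duk_context *ctx) {
	/* arg 0: object being finalized, arg 1: true during forced heap destruction */
	if (duk_get_boolean(ctx, 1)) {
		return 0;  /* Heap destruction: don't bother rescuing. */
	}
	/* Rescue: make the object reachable again via the global stash. */
	duk_push_global_stash(ctx);
	duk_dup(ctx, 0);
	duk_put_prop_string(ctx, -2, "rescued");  /* illustrative key */
	return 0;
}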
/*
* Run a duk_hobject finalizer. Must never throw an uncaught error
* (but may throw caught errors).
*
* There is no return value. Any return value or error thrown by
* the finalizer is ignored (although errors are debug logged).
*
* Notes:
*
* - The finalizer thread 'top' assertions are there because it is
* critical that strict stack policy is observed (i.e. no cruft
* left on the finalizer stack).
*/
DUK_LOCAL duk_ret_t duk__finalize_helper(duk_context *ctx, void *udata) {
duk_hthread *thr;
DUK_ASSERT(ctx != NULL);
thr = (duk_hthread *) ctx;
DUK_UNREF(udata);
DUK_DDD(DUK_DDDPRINT("protected finalization helper running"));
/* [... obj] */
/* _Finalizer property is read without checking if the value is
* callable or even exists. This is intentional, and handled
* by throwing an error which is caught by the safe call wrapper.
*
* XXX: Finalizer lookup should traverse the prototype chain (to allow
* inherited finalizers) but should not invoke accessors or proxy object
* behavior. At the moment this lookup will invoke proxy behavior, so
* caller must ensure that this function is not called if the target is
* a Proxy.
*/
duk_get_prop_stridx_short(ctx, -1, DUK_STRIDX_INT_FINALIZER); /* -> [... obj finalizer] */
duk_dup_m2(ctx);
duk_push_boolean(ctx, DUK_HEAP_HAS_FINALIZER_NORESCUE(thr->heap));
DUK_DDD(DUK_DDDPRINT("calling finalizer"));
duk_call(ctx, 2); /* [ ... obj finalizer obj heapDestruct ] -> [ ... obj retval ] */
DUK_DDD(DUK_DDDPRINT("finalizer returned successfully"));
return 0;
/* Note: we rely on duk_safe_call() to fix up the stack for the caller,
* so we don't need to pop stuff here. There is no return value;
* caller determines rescued status based on object refcount.
*/
}
DUK_INTERNAL void duk_heap_run_finalizer(duk_heap *heap, duk_hobject *obj) {
duk_context *ctx;
duk_ret_t rc;
#if defined(DUK_USE_ASSERTIONS)
duk_idx_t entry_top;
#endif
DUK_DD(DUK_DDPRINT("running duk_hobject finalizer for object: %p", (void *) obj));
DUK_ASSERT(heap != NULL);
DUK_ASSERT(heap->heap_thread != NULL);
ctx = (duk_context *) heap->heap_thread;
DUK_ASSERT(obj != NULL);
DUK_ASSERT_VALSTACK_SPACE(heap->heap_thread, 1);
#if defined(DUK_USE_ASSERTIONS)
entry_top = duk_get_top(ctx);
#endif
/*
* Get and call the finalizer. All of this must be wrapped
* in a protected call, because even getting the finalizer
* may trigger an error (getter may throw one, for instance).
*/
/* ROM objects could inherit a finalizer, but they are never deemed
* unreachable by mark-and-sweep, and their refcount never falls to 0.
*/
DUK_ASSERT(!DUK_HEAPHDR_HAS_READONLY((duk_heaphdr *) obj));
/* Duktape 2.1: finalize_list never contains objects with FINALIZED
* set, so no need to check here.
*/
DUK_ASSERT(!DUK_HEAPHDR_HAS_FINALIZED((duk_heaphdr *) obj));
#if 0
if (DUK_HEAPHDR_HAS_FINALIZED((duk_heaphdr *) obj)) {
DUK_D(DUK_DPRINT("object already finalized, avoid running finalizer twice: %!O", obj));
return;
}
#endif
DUK_HEAPHDR_SET_FINALIZED((duk_heaphdr *) obj); /* ensure never re-entered until rescue cycle complete */
if (DUK_HOBJECT_HAS_EXOTIC_PROXYOBJ(obj)) {
/* This may happen if duk_set_finalizer() or Duktape.fin() is
* called for a Proxy object. In such cases the fast finalizer
* flag will be set on the Proxy, not the target, and neither
* will be finalized.
*/
DUK_D(DUK_DPRINT("object is a proxy, skip finalizer call"));
return;
}
duk_push_hobject(ctx, obj); /* this also increases refcount by one */
rc = duk_safe_call(ctx, duk__finalize_helper, NULL /*udata*/, 0 /*nargs*/, 1 /*nrets*/); /* -> [... obj retval/error] */
DUK_ASSERT_TOP(ctx, entry_top + 2); /* duk_safe_call discipline */
if (rc != DUK_EXEC_SUCCESS) {
/* Note: we ask for one return value from duk_safe_call so that
* the error value is available here for debug logging.
*/
DUK_D(DUK_DPRINT("wrapped finalizer call failed for object %p (ignored); error: %!T",
(void *) obj, (duk_tval *) duk_get_tval(ctx, -1)));
}
duk_pop_2(ctx); /* -> [...] */
DUK_ASSERT_TOP(ctx, entry_top);
}
#else /* DUK_USE_FINALIZER_SUPPORT */
/* nothing */
#endif /* DUK_USE_FINALIZER_SUPPORT */
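A minimal end-to-end sketch of this file's machinery, using only public API calls (my_finalizer is an illustrative name): a finalizer is registered from C, and dropping the last reference routes the object through refzero to finalize_list. This assumes a default refcounting build, where the finalizer runs as soon as the refcount hits zero:

#include <stdio.h>
#include "duktape.h"

static duk_ret_t my_finalizer(duk_context *ctx) {
	/* arg 0: object, arg 1: true during forced heap destruction */
	printf("finalizer ran, heap destruct: %d\n", (int) duk_get_boolean(ctx, 1));
	return 0;
}

int main(void) {
	duk_context *ctx = duk_create_heap_default();
	if (ctx == NULL) {
		return 1;
	}
	duk_push_object(ctx);                       /* target object */
	duk_push_c_function(ctx, my_finalizer, 2);  /* finalizer(obj, heapDestruct) */
	duk_set_finalizer(ctx, -2);                 /* pops the function, sets it on the object */
	duk_pop(ctx);  /* last reference dropped: refzero queues the object, finalizer runs */
	duk_destroy_heap(ctx);
	return 0;
}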

524
src-input/duk_heap_markandsweep.c

@ -11,19 +11,8 @@ DUK_LOCAL_DECL void duk__mark_tval(duk_heap *heap, duk_tval *tv);
* Misc
*/
/* Select a thread for mark-and-sweep use.
*
* XXX: This needs to change later.
*/
DUK_LOCAL duk_hthread *duk__get_temp_hthread(duk_heap *heap) {
if (heap->curr_thread) {
return heap->curr_thread;
}
return heap->heap_thread; /* may be NULL, too */
}
/*
* Marking functions for heap types: mark children recursively
* Marking functions for heap types: mark children recursively.
*/
DUK_LOCAL void duk__mark_hstring(duk_heap *heap, duk_hstring *h) {
@ -187,7 +176,7 @@ DUK_LOCAL void duk__mark_heaphdr(duk_heap *heap, duk_heaphdr *h) {
}
DUK_HEAPHDR_SET_REACHABLE(h);
if (heap->mark_and_sweep_recursion_depth >= DUK_USE_MARK_AND_SWEEP_RECLIMIT) {
if (heap->ms_recursion_depth >= DUK_USE_MARK_AND_SWEEP_RECLIMIT) {
/* log this with a normal debug level because this should be relatively rare */
DUK_D(DUK_DPRINT("mark-and-sweep recursion limit reached, marking as temproot: %p", (void *) h));
DUK_HEAP_SET_MARKANDSWEEP_RECLIMIT_REACHED(heap);
@ -195,7 +184,8 @@ DUK_LOCAL void duk__mark_heaphdr(duk_heap *heap, duk_heaphdr *h) {
return;
}
heap->mark_and_sweep_recursion_depth++;
heap->ms_recursion_depth++;
DUK_ASSERT(heap->ms_recursion_depth != 0); /* Wrap. */
switch (DUK_HEAPHDR_GET_TYPE(h)) {
case DUK_HTYPE_STRING:
@ -212,7 +202,8 @@ DUK_LOCAL void duk__mark_heaphdr(duk_heap *heap, duk_heaphdr *h) {
DUK_UNREACHABLE();
}
heap->mark_and_sweep_recursion_depth--;
DUK_ASSERT(heap->ms_recursion_depth > 0);
heap->ms_recursion_depth--;
}
DUK_LOCAL void duk__mark_tval(duk_heap *heap, duk_tval *tv) {
@ -252,38 +243,13 @@ DUK_LOCAL void duk__mark_roots_heap(duk_heap *heap) {
#endif
}
/*
* Mark refzero_list objects.
*
* Objects on the refzero_list have no inbound references. They might have
* outbound references to objects that we might free, which would invalidate
* any references held by the refzero objects. A refzero object might also
* be rescued by refcount finalization. Refzero objects are treated as
* reachability roots to ensure they (or anything they point to) are not
* freed in mark-and-sweep.
*/
#if defined(DUK_USE_REFERENCE_COUNTING)
DUK_LOCAL void duk__mark_refzero_list(duk_heap *heap) {
duk_heaphdr *hdr;
DUK_DD(DUK_DDPRINT("duk__mark_refzero_list: %p", (void *) heap));
hdr = heap->refzero_list;
while (hdr) {
duk__mark_heaphdr(heap, hdr);
hdr = DUK_HEAPHDR_GET_NEXT(heap, hdr);
}
}
#endif
/*
* Mark unreachable, finalizable objects.
*
* Such objects will be moved aside and their finalizers run later. They have
* to be treated as reachability roots for their properties etc to remain
* allocated. This marking is only done for unreachable values which would
* be swept later (refzero_list is thus excluded).
* be swept later.
*
* Objects are first marked FINALIZABLE and only then marked as reachability
* roots; otherwise circular references might be handled inconsistently.
@ -291,31 +257,30 @@ DUK_LOCAL void duk__mark_refzero_list(duk_heap *heap) {
#if defined(DUK_USE_FINALIZER_SUPPORT)
DUK_LOCAL void duk__mark_finalizable(duk_heap *heap) {
duk_hthread *thr;
duk_heaphdr *hdr;
duk_size_t count_finalizable = 0;
DUK_DD(DUK_DDPRINT("duk__mark_finalizable: %p", (void *) heap));
thr = duk__get_temp_hthread(heap);
DUK_ASSERT(thr != NULL);
DUK_ASSERT(heap->heap_thread != NULL);
hdr = heap->heap_allocated;
while (hdr) {
/* A finalizer is looked up from the object and up its prototype chain
* (which allows inherited finalizers). The finalizer is checked for
* using a duk_hobject flag which is kept in sync with the presence and
* callability of a _Finalizer hidden symbol.
while (hdr != NULL) {
/* A finalizer is looked up from the object and up its
* prototype chain (which allows inherited finalizers).
* The finalizer is checked for using a duk_hobject flag
* which is kept in sync with the presence and callability
* of a _Finalizer hidden symbol.
*/
if (!DUK_HEAPHDR_HAS_REACHABLE(hdr) &&
DUK_HEAPHDR_GET_TYPE(hdr) == DUK_HTYPE_OBJECT &&
DUK_HEAPHDR_IS_OBJECT(hdr) &&
!DUK_HEAPHDR_HAS_FINALIZED(hdr) &&
duk_hobject_has_finalizer_fast(thr, (duk_hobject *) hdr)) {
DUK_HOBJECT_HAS_FINALIZER_FAST(heap, (duk_hobject *) hdr)) {
/* heaphdr:
* - is not reachable
* - is an object
* - is not a finalized object
* - is not a finalized object waiting for rescue/keep decision
* - has a finalizer
*/
@ -325,7 +290,7 @@ DUK_LOCAL void duk__mark_finalizable(duk_heap *heap) {
(void *) hdr));
DUK_ASSERT(!DUK_HEAPHDR_HAS_READONLY(hdr));
DUK_HEAPHDR_SET_FINALIZABLE(hdr);
count_finalizable ++;
count_finalizable++;
}
hdr = DUK_HEAPHDR_GET_NEXT(heap, hdr);
@ -339,7 +304,7 @@ DUK_LOCAL void duk__mark_finalizable(duk_heap *heap) {
(long) count_finalizable));
hdr = heap->heap_allocated;
while (hdr) {
while (hdr != NULL) {
if (DUK_HEAPHDR_HAS_FINALIZABLE(hdr)) {
duk__mark_heaphdr(heap, hdr);
}
@ -353,7 +318,6 @@ DUK_LOCAL void duk__mark_finalizable(duk_heap *heap) {
/*
* Mark objects on finalize_list.
*
*/
#if defined(DUK_USE_FINALIZER_SUPPORT)
@ -366,7 +330,7 @@ DUK_LOCAL void duk__mark_finalize_list(duk_heap *heap) {
DUK_DD(DUK_DDPRINT("duk__mark_finalize_list: %p", (void *) heap));
hdr = heap->finalize_list;
while (hdr) {
while (hdr != NULL) {
duk__mark_heaphdr(heap, hdr);
hdr = DUK_HEAPHDR_GET_NEXT(heap, hdr);
#if defined(DUK_USE_DEBUG)
@ -446,19 +410,6 @@ DUK_LOCAL void duk__mark_temproots_by_heap_scan(duk_heap *heap) {
hdr = DUK_HEAPHDR_GET_NEXT(heap, hdr);
}
/* must also check refzero_list */
#if defined(DUK_USE_REFERENCE_COUNTING)
hdr = heap->refzero_list;
while (hdr) {
#if defined(DUK_USE_DEBUG)
duk__handle_temproot(heap, hdr, &count);
#else
duk__handle_temproot(heap, hdr);
#endif
hdr = DUK_HEAPHDR_GET_NEXT(heap, hdr);
}
#endif /* DUK_USE_REFERENCE_COUNTING */
#if defined(DUK_USE_DEBUG)
DUK_DD(DUK_DDPRINT("temproot mark heap scan processed %ld temp roots", (long) count));
#endif
@ -476,14 +427,11 @@ DUK_LOCAL void duk__mark_temproots_by_heap_scan(duk_heap *heap) {
#if defined(DUK_USE_REFERENCE_COUNTING)
DUK_LOCAL void duk__finalize_refcounts(duk_heap *heap) {
duk_hthread *thr;
duk_heaphdr *hdr;
thr = duk__get_temp_hthread(heap);
DUK_ASSERT(thr != NULL);
DUK_ASSERT(heap->heap_thread != NULL);
DUK_DD(DUK_DDPRINT("duk__finalize_refcounts: heap=%p, hthread=%p",
(void *) heap, (void *) thr));
DUK_DD(DUK_DDPRINT("duk__finalize_refcounts: heap=%p", (void *) heap));
hdr = heap->heap_allocated;
while (hdr) {
@ -499,30 +447,14 @@ DUK_LOCAL void duk__finalize_refcounts(duk_heap *heap) {
*/
DUK_DDD(DUK_DDDPRINT("unreachable object, refcount finalize before sweeping: %p", (void *) hdr));
duk_heaphdr_refcount_finalize(thr, hdr);
}
hdr = DUK_HEAPHDR_GET_NEXT(heap, hdr);
}
}
#endif /* DUK_USE_REFERENCE_COUNTING */
/*
* Clear (reachable) flags of refzero work list.
*/
#if defined(DUK_USE_REFERENCE_COUNTING)
DUK_LOCAL void duk__clear_refzero_list_flags(duk_heap *heap) {
duk_heaphdr *hdr;
DUK_DD(DUK_DDPRINT("duk__clear_refzero_list_flags: %p", (void *) heap));
/* Finalize using heap->heap_thread; DECREF has a
* suppress check for mark-and-sweep which is based
* on heap->ms_running.
*/
duk_heaphdr_refcount_finalize_norz(heap, hdr);
}
hdr = heap->refzero_list;
while (hdr) {
DUK_HEAPHDR_CLEAR_REACHABLE(hdr);
DUK_ASSERT(!DUK_HEAPHDR_HAS_FINALIZABLE(hdr));
/* DUK_HEAPHDR_HAS_FINALIZED may or may not be set. */
DUK_ASSERT(!DUK_HEAPHDR_HAS_TEMPROOT(hdr));
hdr = DUK_HEAPHDR_GET_NEXT(heap, hdr);
}
}
@ -548,8 +480,8 @@ DUK_LOCAL void duk__clear_finalize_list_flags(duk_heap *heap) {
hdr = heap->finalize_list;
while (hdr) {
DUK_HEAPHDR_CLEAR_REACHABLE(hdr);
DUK_ASSERT(!DUK_HEAPHDR_HAS_FINALIZABLE(hdr));
DUK_ASSERT(!DUK_HEAPHDR_HAS_FINALIZED(hdr));
DUK_ASSERT(DUK_HEAPHDR_HAS_FINALIZABLE(hdr)); /* Currently true, may change if mark-and-sweep during finalization allowed. */
/* DUK_HEAPHDR_FLAG_FINALIZED may be set. */
DUK_ASSERT(!DUK_HEAPHDR_HAS_TEMPROOT(hdr));
hdr = DUK_HEAPHDR_GET_NEXT(heap, hdr);
}
@ -669,6 +601,7 @@ DUK_LOCAL void duk__sweep_heap(duk_heap *heap, duk_int_t flags, duk_size_t *out_
DUK_DDD(DUK_DDDPRINT("sweep, reachable: %p", (void *) curr));
#if defined(DUK_USE_FINALIZER_SUPPORT)
if (DUK_UNLIKELY(DUK_HEAPHDR_HAS_FINALIZABLE(curr))) {
/*
* If object has been marked finalizable, move it to the
@ -681,19 +614,14 @@ DUK_LOCAL void duk__sweep_heap(duk_heap *heap, duk_int_t flags, duk_size_t *out_
DUK_ASSERT(DUK_HEAPHDR_GET_TYPE(curr) == DUK_HTYPE_OBJECT);
DUK_DDD(DUK_DDDPRINT("object has finalizer, move to finalization work list: %p", (void *) curr));
#if defined(DUK_USE_DOUBLE_LINKED_HEAP)
if (heap->finalize_list != NULL) {
DUK_HEAPHDR_SET_PREV(heap, heap->finalize_list, curr);
}
DUK_HEAPHDR_SET_PREV(heap, curr, NULL);
#endif
DUK_HEAPHDR_SET_NEXT(heap, curr, heap->finalize_list);
DUK_ASSERT_HEAPHDR_LINKS(heap, curr);
heap->finalize_list = curr;
DUK_HEAP_INSERT_INTO_FINALIZE_LIST(heap, curr);
#if defined(DUK_USE_DEBUG)
count_finalize++;
#endif
} else {
}
else
#endif /* DUK_USE_FINALIZER_SUPPORT */
{
/*
* Object will be kept; queue object back to heap_allocated (to tail)
*/
@ -734,12 +662,13 @@ DUK_LOCAL void duk__sweep_heap(duk_heap *heap, duk_int_t flags, duk_size_t *out_
}
DUK_HEAPHDR_CLEAR_REACHABLE(curr);
#if defined(DUK_USE_FINALIZER_SUPPORT)
DUK_HEAPHDR_CLEAR_FINALIZED(curr);
DUK_HEAPHDR_CLEAR_FINALIZABLE(curr);
#endif
/* Keep FINALIZABLE for objects on finalize_list. */
DUK_ASSERT(!DUK_HEAPHDR_HAS_REACHABLE(curr));
DUK_ASSERT(!DUK_HEAPHDR_HAS_FINALIZED(curr));
DUK_ASSERT(!DUK_HEAPHDR_HAS_FINALIZABLE(curr));
curr = next;
} else {
@ -758,9 +687,11 @@ DUK_LOCAL void duk__sweep_heap(duk_heap *heap, duk_int_t flags, duk_size_t *out_
#endif
DUK_ASSERT(!DUK_HEAPHDR_HAS_FINALIZABLE(curr));
#if defined(DUK_USE_DEBUG)
if (DUK_HEAPHDR_HAS_FINALIZED(curr)) {
DUK_DDD(DUK_DDDPRINT("finalized object not rescued: %p", (void *) curr));
}
#endif
/* Note: object cannot be a finalizable unreachable object, as
* such objects have been marked temporarily reachable for this round,
@ -793,81 +724,6 @@ DUK_LOCAL void duk__sweep_heap(duk_heap *heap, duk_int_t flags, duk_size_t *out_
*out_count_keep = count_keep;
}
/*
* Run (object) finalizers in the "to be finalized" work list.
*/
#if defined(DUK_USE_FINALIZER_SUPPORT)
DUK_LOCAL void duk__run_object_finalizers(duk_heap *heap, duk_small_uint_t flags) {
duk_heaphdr *curr;
duk_heaphdr *next;
#if defined(DUK_USE_DEBUG)
duk_size_t count = 0;
#endif
duk_hthread *thr;
DUK_DD(DUK_DDPRINT("duk__run_object_finalizers: %p", (void *) heap));
thr = duk__get_temp_hthread(heap);
DUK_ASSERT(thr != NULL);
curr = heap->finalize_list;
while (curr) {
DUK_DDD(DUK_DDDPRINT("mark-and-sweep finalize: %p", (void *) curr));
DUK_ASSERT(DUK_HEAPHDR_GET_TYPE(curr) == DUK_HTYPE_OBJECT); /* only objects have finalizers */
DUK_ASSERT(!DUK_HEAPHDR_HAS_REACHABLE(curr)); /* flags have been already cleared */
DUK_ASSERT(!DUK_HEAPHDR_HAS_TEMPROOT(curr));
DUK_ASSERT(!DUK_HEAPHDR_HAS_FINALIZABLE(curr));
DUK_ASSERT(!DUK_HEAPHDR_HAS_FINALIZED(curr));
DUK_ASSERT(!DUK_HEAPHDR_HAS_READONLY(curr)); /* No finalizers for ROM objects */
/* Keep heap->finalize_list up-to-date during the list walk.
* This has no functional impact, but does matter e.g. for
* duk_push_heapptr() asserts when assertions are enabled.
*/
heap->finalize_list = curr;
if (DUK_LIKELY((flags & DUK_MS_FLAG_SKIP_FINALIZERS) == 0)) {
/* Run the finalizer, duk_hobject_run_finalizer() sets FINALIZED.
* Next mark-and-sweep will collect the object unless it has
* become reachable (i.e. rescued). FINALIZED prevents the
* finalizer from being executed again before that.
*/
duk_hobject_run_finalizer(thr, (duk_hobject *) curr); /* must never longjmp */
DUK_ASSERT(DUK_HEAPHDR_HAS_FINALIZED(curr));
/* XXX: could clear FINALIZED already here; now cleared in
* next mark-and-sweep.
*/
} else {
/* Used during heap destruction: don't actually run finalizers
* because we're heading into forced finalization. Instead,
* queue finalizable objects back to the heap_allocated list.
*/
DUK_D(DUK_DPRINT("skip finalizers flag set, queue object to heap_allocated without finalizing"));
DUK_ASSERT(!DUK_HEAPHDR_HAS_FINALIZED(curr));
}
/* queue back to heap_allocated */
next = DUK_HEAPHDR_GET_NEXT(heap, curr);
DUK_HEAP_INSERT_INTO_HEAP_ALLOCATED(heap, curr);
curr = next;
#if defined(DUK_USE_DEBUG)
count++;
#endif
}
/* finalize_list will always be processed completely */
heap->finalize_list = NULL;
#if defined(DUK_USE_DEBUG)
DUK_D(DUK_DPRINT("mark-and-sweep finalize objects: %ld finalizers called", (long) count));
#endif
}
#endif /* DUK_USE_FINALIZER_SUPPORT */
/*
* Object compaction.
*
@ -943,26 +799,25 @@ DUK_LOCAL void duk__compact_objects(duk_heap *heap) {
duk_size_t count_compact = 0;
duk_size_t count_bytes_saved = 0;
#endif
duk_hthread *thr;
DUK_DD(DUK_DDPRINT("duk__compact_objects: %p", (void *) heap));
thr = duk__get_temp_hthread(heap);
DUK_ASSERT(thr != NULL);
DUK_ASSERT(heap->heap_thread != NULL);
#if defined(DUK_USE_DEBUG)
duk__compact_object_list(heap, thr, heap->heap_allocated, &count_check, &count_compact, &count_bytes_saved);
duk__compact_object_list(heap, thr, heap->finalize_list, &count_check, &count_compact, &count_bytes_saved);
#if defined(DUK_USE_REFERENCE_COUNTING)
duk__compact_object_list(heap, thr, heap->refzero_list, &count_check, &count_compact, &count_bytes_saved);
duk__compact_object_list(heap, heap->heap_thread, heap->heap_allocated, &count_check, &count_compact, &count_bytes_saved);
#if defined(DUK_USE_FINALIZER_SUPPORT)
duk__compact_object_list(heap, heap->heap_thread, heap->finalize_list, &count_check, &count_compact, &count_bytes_saved);
#endif
#else
duk__compact_object_list(heap, thr, heap->heap_allocated);
duk__compact_object_list(heap, thr, heap->finalize_list);
#if defined(DUK_USE_REFERENCE_COUNTING)
duk__compact_object_list(heap, thr, heap->refzero_list);
duk__compact_object_list(heap, heap->heap_thread, heap->heap_allocated);
#if defined(DUK_USE_FINALIZER_SUPPORT)
duk__compact_object_list(heap, heap->heap_thread, heap->finalize_list);
#endif
#endif
#if defined(DUK_USE_REFERENCE_COUNTING)
DUK_ASSERT(heap->refzero_list == NULL); /* Always handled to completion inline in DECREF. */
#endif
#if defined(DUK_USE_DEBUG)
DUK_D(DUK_DPRINT("mark-and-sweep compact objects: %ld checked, %ld compaction attempts, %ld bytes saved by compaction",
@ -988,47 +843,32 @@ DUK_LOCAL void duk__assert_heaphdr_flags(duk_heap *heap) {
}
#if defined(DUK_USE_REFERENCE_COUNTING)
hdr = heap->refzero_list;
while (hdr) {
DUK_ASSERT(!DUK_HEAPHDR_HAS_REACHABLE(hdr));
DUK_ASSERT(!DUK_HEAPHDR_HAS_TEMPROOT(hdr));
DUK_ASSERT(!DUK_HEAPHDR_HAS_FINALIZABLE(hdr));
/* DUK_HEAPHDR_HAS_FINALIZED may be set if we're doing a
* refzero finalization and mark-and-sweep gets triggered
* during the finalizer.
*/
/* DUK_HEAPHDR_HAS_FINALIZED may or may not be set. */
hdr = DUK_HEAPHDR_GET_NEXT(heap, hdr);
}
#endif /* DUK_USE_REFERENCE_COUNTING */
DUK_ASSERT(heap->refzero_list == NULL); /* Always handled to completion inline in DECREF. */
#endif
}
#if defined(DUK_USE_REFERENCE_COUNTING)
DUK_LOCAL void duk__assert_valid_refcounts(duk_heap *heap) {
duk_heaphdr *hdr = heap->heap_allocated;
while (hdr) {
/* Cannot really assert much w.r.t. refcounts now. */
if (DUK_HEAPHDR_GET_REFCOUNT(hdr) == 0 &&
DUK_HEAPHDR_HAS_FINALIZED(hdr)) {
/* An object may be in heap_allocated list with a zero
* refcount if it has just been finalized and is waiting
* to be collected by the next cycle.
* (This doesn't currently happen however.)
*/
} else if (DUK_HEAPHDR_GET_REFCOUNT(hdr) == 0) {
/* An object may be in heap_allocated list with a zero
* refcount also if it is a temporary object created by
* a finalizer; because finalization now runs inside
* mark-and-sweep, such objects will not be queued to
* refzero_list and will thus appear here with refcount
* zero.
* refcount also if it is a temporary object created
* during debugger paused state. It will get collected
* by mark-and-sweep based on its reachability status
* (presumably not reachable because refcount is 0).
*/
#if 0 /* this case can no longer occur because refcount is unsigned */
} else if (DUK_HEAPHDR_GET_REFCOUNT(hdr) < 0) {
DUK_D(DUK_DPRINT("invalid refcount: %ld, %p -> %!O",
(hdr != NULL ? (long) DUK_HEAPHDR_GET_REFCOUNT(hdr) : (long) 0),
(void *) hdr, (duk_heaphdr *) hdr));
DUK_ASSERT(DUK_HEAPHDR_GET_REFCOUNT(hdr) > 0);
#endif
}
DUK_ASSERT_DISABLE(DUK_HEAPHDR_GET_REFCOUNT(hdr) >= 0); /* Unsigned. */
hdr = DUK_HEAPHDR_GET_NEXT(heap, hdr);
}
}
@ -1040,12 +880,16 @@ DUK_LOCAL void duk__clear_assert_refcounts(duk_heap *heap) {
for (curr = heap->heap_allocated; curr != NULL; curr = DUK_HEAPHDR_GET_NEXT(heap, curr)) {
curr->h_assert_refcount = 0;
}
#if defined(DUK_USE_FINALIZER_SUPPORT)
for (curr = heap->finalize_list; curr != NULL; curr = DUK_HEAPHDR_GET_NEXT(heap, curr)) {
curr->h_assert_refcount = 0;
}
#endif
#if defined(DUK_USE_REFERENCE_COUNTING)
for (curr = heap->refzero_list; curr != NULL; curr = DUK_HEAPHDR_GET_NEXT(heap, curr)) {
curr->h_assert_refcount = 0;
}
#endif
for (i = 0; i < heap->st_size; i++) {
duk_hstring *h;
@ -1114,125 +958,61 @@ DUK_LOCAL void duk__check_assert_refcounts(duk_heap *heap) {
#endif /* DUK_USE_REFERENCE_COUNTING */
#endif /* DUK_USE_ASSERTIONS */
/*
* Finalizer torture. Do one fake finalizer call which causes side effects
* similar to one or more finalizers on actual objects.
*/
#if defined(DUK_USE_FINALIZER_SUPPORT)
#if defined(DUK_USE_MARKANDSWEEP_FINALIZER_TORTURE)
DUK_LOCAL duk_ret_t duk__markandsweep_fake_finalizer(duk_context *ctx) {
DUK_D(DUK_DPRINT("fake mark-and-sweep torture finalizer executed"));
/* Require a lot of stack to force a value stack grow/shrink.
* Recursive mark-and-sweep is prevented by allocation macros
* so this won't trigger another mark-and-sweep.
*/
duk_require_stack(ctx, 100000);
/* XXX: do something to force a callstack grow/shrink, perhaps
* just a manual forced resize or a forced relocating realloc?
*/
return 0;
}
DUK_LOCAL void duk__markandsweep_torture_finalizer(duk_hthread *thr) {
duk_context *ctx;
duk_int_t rc;
DUK_ASSERT(thr != NULL);
ctx = (duk_context *) thr;
/* Avoid fake finalization when callstack limit has been reached.
* Otherwise a callstack limit error will be created, then refzero'ed.
*/
if (thr->heap->call_recursion_depth >= thr->heap->call_recursion_limit ||
thr->callstack_size + 2 * DUK_CALLSTACK_GROW_STEP >= thr->callstack_max /*approximate*/) {
DUK_D(DUK_DPRINT("call recursion depth reached, avoid fake mark-and-sweep torture finalizer"));
return;
}
/* Run fake finalizer. Avoid creating unnecessary garbage. */
duk_push_c_function(ctx, duk__markandsweep_fake_finalizer, 0 /*nargs*/);
rc = duk_pcall(ctx, 0 /*nargs*/);
DUK_UNREF(rc); /* ignored */
duk_pop(ctx);
}
#endif /* DUK_USE_MARKANDSWEEP_FINALIZER_TORTURE */
#endif /* DUK_USE_FINALIZER_SUPPORT */
/*
* Main mark-and-sweep function.
*
* 'flags' represents the features requested by the caller. The current
* heap->mark_and_sweep_base_flags is ORed automatically into the flags;
* the base flags mask typically prevents certain mark-and-sweep operations
* to avoid trouble.
* heap->ms_base_flags is ORed automatically into the flags; the base flags
* mask typically prevents certain mark-and-sweep operations to avoid trouble.
*/
DUK_INTERNAL duk_bool_t duk_heap_mark_and_sweep(duk_heap *heap, duk_small_uint_t flags) {
duk_hthread *thr;
DUK_INTERNAL void duk_heap_mark_and_sweep(duk_heap *heap, duk_small_uint_t flags) {
duk_size_t count_keep_obj;
duk_size_t count_keep_str;
#if defined(DUK_USE_VOLUNTARY_GC)
duk_size_t tmp;
#endif
if (DUK_HEAP_HAS_MARKANDSWEEP_RUNNING(heap)) {
DUK_D(DUK_DPRINT("refuse to do a recursive mark-and-sweep"));
return 0;
}
/* XXX: thread selection for mark-and-sweep is currently a hack.
* If we don't have a thread, the entire mark-and-sweep is now
* skipped (although we could just skip finalizations).
/* If debugger is paused, garbage collection is disabled by default.
* This is achieved by bumping ms_prevent_count when becoming paused.
*/
DUK_ASSERT(!DUK_HEAP_HAS_DEBUGGER_PAUSED(heap) || heap->ms_prevent_count > 0);
/* If thr != NULL, the thr may still be in the middle of
* initialization.
* XXX: Improve the thread viability test.
/* Prevention/recursion check as soon as possible because we may
* be called a number of times when voluntary mark-and-sweep is
* pending.
*/
thr = duk__get_temp_hthread(heap);
if (thr == NULL) {
DUK_D(DUK_DPRINT("gc skipped because we don't have a temp thread"));
/* reset voluntary gc trigger count */
#if defined(DUK_USE_VOLUNTARY_GC)
heap->mark_and_sweep_trigger_counter = DUK_HEAP_MARK_AND_SWEEP_TRIGGER_SKIP;
#endif
return 0; /* OK */
if (heap->ms_prevent_count != 0) {
DUK_DD(DUK_DDPRINT("reject recursive mark-and-sweep"));
return;
}
DUK_ASSERT(heap->ms_running == 0); /* ms_prevent_count is bumped when ms_running is set */
/* If debugger is paused, garbage collection is disabled by default. */
/* XXX: will need a force flag if garbage collection is triggered
* explicitly during paused state.
/* Heap_thread is used during mark-and-sweep for refcount finalization
* (it's also used for finalizer execution once mark-and-sweep is
* complete). Heap allocation code ensures heap_thread is set and
* properly initialized before setting ms_prevent_count to 0.
*/
#if defined(DUK_USE_DEBUGGER_SUPPORT)
if (DUK_HEAP_IS_PAUSED(heap)) {
/* Checking this here rather that in memory alloc primitives
* reduces checking code there but means a failed allocation
* will go through a few retries before giving up. That's
* fine because this only happens during debugging.
*/
DUK_D(DUK_DPRINT("gc skipped because debugger is paused"));
return 0;
}
#endif
DUK_ASSERT(heap->heap_thread != NULL);
DUK_ASSERT(heap->heap_thread->valstack != NULL);
DUK_ASSERT(heap->heap_thread->callstack != NULL);
DUK_ASSERT(heap->heap_thread->catchstack != NULL);
DUK_D(DUK_DPRINT("garbage collect (mark-and-sweep) starting, requested flags: 0x%08lx, effective flags: 0x%08lx",
(unsigned long) flags, (unsigned long) (flags | heap->mark_and_sweep_base_flags)));
(unsigned long) flags, (unsigned long) (flags | heap->ms_base_flags)));
flags |= heap->mark_and_sweep_base_flags;
flags |= heap->ms_base_flags;
/*
* Assertions before
*/
#if defined(DUK_USE_ASSERTIONS)
DUK_ASSERT(!DUK_HEAP_HAS_MARKANDSWEEP_RUNNING(heap));
DUK_ASSERT(heap->ms_prevent_count == 0);
DUK_ASSERT(heap->ms_running == 0);
DUK_ASSERT(!DUK_HEAP_HAS_DEBUGGER_PAUSED(heap));
DUK_ASSERT(!DUK_HEAP_HAS_MARKANDSWEEP_RECLIMIT_REACHED(heap));
DUK_ASSERT(heap->mark_and_sweep_recursion_depth == 0);
DUK_ASSERT(heap->ms_recursion_depth == 0);
duk__assert_heaphdr_flags(heap);
#if defined(DUK_USE_REFERENCE_COUNTING)
/* Note: heap->refzero_free_running may be true; a refcount
@ -1246,7 +1026,10 @@ DUK_INTERNAL duk_bool_t duk_heap_mark_and_sweep(duk_heap *heap, duk_small_uint_t
* Begin
*/
DUK_HEAP_SET_MARKANDSWEEP_RUNNING(heap);
DUK_ASSERT(heap->ms_prevent_count == 0);
DUK_ASSERT(heap->ms_running == 0);
heap->ms_prevent_count = 1;
heap->ms_running = 1;
/*
* Mark roots, hoping that recursion limit is not normally hit.
@ -1267,17 +1050,17 @@ DUK_INTERNAL duk_bool_t duk_heap_mark_and_sweep(duk_heap *heap, duk_small_uint_t
#if defined(DUK_USE_ASSERTIONS) && defined(DUK_USE_REFERENCE_COUNTING)
duk__clear_assert_refcounts(heap);
#endif
duk__mark_roots_heap(heap); /* main reachability roots */
duk__mark_roots_heap(heap); /* Main reachability roots. */
#if defined(DUK_USE_REFERENCE_COUNTING)
duk__mark_refzero_list(heap); /* refzero_list treated as reachability roots */
DUK_ASSERT(heap->refzero_list == NULL); /* Always handled to completion inline in DECREF. */
#endif
duk__mark_temproots_by_heap_scan(heap); /* temproots */
duk__mark_temproots_by_heap_scan(heap); /* Temproots. */
#if defined(DUK_USE_FINALIZER_SUPPORT)
duk__mark_finalizable(heap); /* mark finalizable as reachability roots */
duk__mark_finalizable(heap); /* Mark finalizable as reachability roots. */
duk__mark_finalize_list(heap); /* mark finalizer work list as reachability roots */
#endif
duk__mark_temproots_by_heap_scan(heap); /* temproots */
duk__mark_temproots_by_heap_scan(heap); /* Temproots. */
/*
* Sweep garbage and remove marking flags, and move objects with
@ -1304,7 +1087,7 @@ DUK_INTERNAL duk_bool_t duk_heap_mark_and_sweep(duk_heap *heap, duk_small_uint_t
duk__check_assert_refcounts(heap);
#endif
#if defined(DUK_USE_REFERENCE_COUNTING)
duk__clear_refzero_list_flags(heap);
DUK_ASSERT(heap->refzero_list == NULL); /* Always handled to completion inline in DECREF. */
#endif
#if defined(DUK_USE_FINALIZER_SUPPORT)
duk__clear_finalize_list_flags(heap);
@ -1348,71 +1131,23 @@ DUK_INTERNAL duk_bool_t duk_heap_mark_and_sweep(duk_heap *heap, duk_small_uint_t
duk_heap_strtable_force_resize(heap);
}
/*
* Finalize objects in the finalization work list. Finalized
* objects are queued back to heap_allocated with FINALIZED set.
*
* Since finalizers may cause arbitrary side effects, they are
* prevented during string table and object property allocation
* resizing using the DUK_MS_FLAG_NO_FINALIZERS flag in
* heap->mark_and_sweep_base_flags. In this case the objects
* remain in the finalization work list after mark-and-sweep
* exits and they may be finalized on the next pass.
*
* Finalization currently happens inside "MARKANDSWEEP_RUNNING"
* protection (no mark-and-sweep may be triggered by the
* finalizers). As a side effect:
*
* 1) an out-of-memory error inside a finalizer will not
* cause a mark-and-sweep and may cause the finalizer
* to fail unnecessarily
*
* 2) any temporary objects whose refcount decreases to zero
* during finalization will not be put into refzero_list;
* they can only be collected by another mark-and-sweep
*
* This is not optimal, but since the sweep for this phase has
* already happened, this is probably good enough for now.
*/
#if defined(DUK_USE_FINALIZER_SUPPORT)
#if defined(DUK_USE_MARKANDSWEEP_FINALIZER_TORTURE)
/* Cannot simulate individual finalizers because finalize_list only
* contains objects with actual finalizers. But simulate side effects
* from finalization by doing a bogus function call and resizing the
* stacks.
*/
if (flags & DUK_MS_FLAG_NO_FINALIZERS) {
DUK_D(DUK_DPRINT("skip mark-and-sweep torture finalizer, DUK_MS_FLAG_NO_FINALIZERS is set"));
} else if (!(thr->valstack != NULL && thr->callstack != NULL && thr->catchstack != NULL)) {
DUK_D(DUK_DPRINT("skip mark-and-sweep torture finalizer, thread not yet viable"));
} else {
DUK_D(DUK_DPRINT("run mark-and-sweep torture finalizer"));
duk__markandsweep_torture_finalizer(thr);
}
#endif /* DUK_USE_MARKANDSWEEP_FINALIZER_TORTURE */
if (flags & DUK_MS_FLAG_NO_FINALIZERS) {
DUK_D(DUK_DPRINT("finalizer run skipped because DUK_MS_FLAG_NO_FINALIZERS is set"));
} else {
duk__run_object_finalizers(heap, flags);
}
#endif /* DUK_USE_FINALIZER_SUPPORT */
/*
* Finish
*/
DUK_HEAP_CLEAR_MARKANDSWEEP_RUNNING(heap);
DUK_ASSERT(heap->ms_prevent_count == 1);
heap->ms_prevent_count = 0;
DUK_ASSERT(heap->ms_running == 1);
heap->ms_running = 0;
/*
* Assertions after
*/
#if defined(DUK_USE_ASSERTIONS)
DUK_ASSERT(!DUK_HEAP_HAS_MARKANDSWEEP_RUNNING(heap));
DUK_ASSERT(heap->ms_prevent_count == 0);
DUK_ASSERT(!DUK_HEAP_HAS_MARKANDSWEEP_RECLIMIT_REACHED(heap));
DUK_ASSERT(heap->mark_and_sweep_recursion_depth == 0);
DUK_ASSERT(heap->ms_recursion_depth == 0);
duk__assert_heaphdr_flags(heap);
#if defined(DUK_USE_REFERENCE_COUNTING)
/* Note: heap->refzero_free_running may be true; a refcount
@ -1428,15 +1163,52 @@ DUK_INTERNAL duk_bool_t duk_heap_mark_and_sweep(duk_heap *heap, duk_small_uint_t
#if defined(DUK_USE_VOLUNTARY_GC)
tmp = (count_keep_obj + count_keep_str) / 256;
heap->mark_and_sweep_trigger_counter = (duk_int_t) (
heap->ms_trigger_counter = (duk_int_t) (
(tmp * DUK_HEAP_MARK_AND_SWEEP_TRIGGER_MULT) +
DUK_HEAP_MARK_AND_SWEEP_TRIGGER_ADD);
DUK_D(DUK_DPRINT("garbage collect (mark-and-sweep) finished: %ld objects kept, %ld strings kept, trigger reset to %ld",
(long) count_keep_obj, (long) count_keep_str, (long) heap->mark_and_sweep_trigger_counter));
(long) count_keep_obj, (long) count_keep_str, (long) heap->ms_trigger_counter));
#else
DUK_D(DUK_DPRINT("garbage collect (mark-and-sweep) finished: %ld objects kept, %ld strings kept, no voluntary trigger",
(long) count_keep_obj, (long) count_keep_str));
#endif
return 0; /* OK */
/*
* Finalize objects in the finalization work list. Finalized
* objects are queued back to heap_allocated with FINALIZED set.
*
* Since finalizers may cause arbitrary side effects, they are
* prevented e.g. during string table and object property allocation
* resizing using heap->pf_prevent_count. In this case the objects
* remain in the finalization work list after mark-and-sweep exits
* and they may be finalized on the next pass or by any DECREF that
* checks for finalize_list.
*
* As of Duktape 2.1 finalization happens outside mark-and-sweep
* protection. Even so, mark-and-sweep is prevented while finalizers
* run: if mark-and-sweep runs when the finalize_list has only been
* partially processed, incorrect rescue decisions are made because
* finalize_list is considered a reachability root. As a side effect:
*
* * An out-of-memory error inside a finalizer will not
* cause a mark-and-sweep and may cause the finalizer
* to fail unnecessarily.
*
* This is not optimal, but since the sweep for this phase has
* already happened, this is probably good enough for now.
*
* There are at least two main fixes to this limitation: (1) a better
* notion of reachability for rescue/free decisions, and (2) skipping
* rescue/free decisions when mark-and-sweep runs and finalize_list
* is not empty.
*
* XXX: avoid finalizer execution when doing emergency GC?
*/
#if defined(DUK_USE_FINALIZER_SUPPORT)
/* Attempt to process finalize_list, pf_prevent_count check
* is inside the target.
*/
duk_heap_process_finalize_list(heap);
#endif /* DUK_USE_FINALIZER_SUPPORT */
}
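As a worked example of the voluntary GC trigger reset above: with illustrative constants (the real DUK_HEAP_MARK_AND_SWEEP_TRIGGER_MULT/ADD values live in duk_heap.h; the numbers below are assumptions), keeping 500 objects and 1500 strings gives:

#include <stdio.h>

int main(void) {
	long count_keep_obj = 500, count_keep_str = 1500;
	long mult = 256, add = 1024;  /* assumed values, not the actual defaults */
	long tmp = (count_keep_obj + count_keep_str) / 256;  /* 2000 / 256 = 7 */
	long trigger = tmp * mult + add;  /* 7 * 256 + 1024 = 2816 */
	/* ms_trigger_counter is reset to 'trigger'; roughly that many
	 * alloc/realloc operations happen before the next voluntary GC. */
	printf("ms_trigger_counter reset to %ld\n", trigger);
	return 0;
}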

81
src-input/duk_heap_memory.c

@ -5,33 +5,28 @@
#include "duk_internal.h"
/*
* Helpers
*
* The fast path checks are done within a macro to ensure "inlining"
* while the slow path actions use a helper (which won't typically be
* inlined in size optimized builds).
* Voluntary GC check
*/
#if defined(DUK_USE_VOLUNTARY_GC)
#define DUK__VOLUNTARY_PERIODIC_GC(heap) do { \
if (DUK_UNLIKELY(--(heap)->mark_and_sweep_trigger_counter < 0)) { \
duk__run_voluntary_gc(heap); \
} \
} while (0)
DUK_LOCAL void duk__run_voluntary_gc(duk_heap *heap) {
if (DUK_HEAP_HAS_MARKANDSWEEP_RUNNING(heap)) {
DUK_DD(DUK_DDPRINT("mark-and-sweep in progress -> skip voluntary mark-and-sweep now"));
} else {
duk_small_uint_t flags;
duk_bool_t rc;
DUK_LOCAL DUK_INLINE void duk__check_voluntary_gc(duk_heap *heap) {
if (DUK_UNLIKELY(--(heap)->ms_trigger_counter < 0)) {
#if defined(DUK_USE_DEBUG)
if (heap->ms_prevent_count == 0) {
DUK_D(DUK_DPRINT("triggering voluntary mark-and-sweep"));
} else {
DUK_DD(DUK_DDPRINT("gc blocked -> skip voluntary mark-and-sweep now"));
}
#endif
DUK_D(DUK_DPRINT("triggering voluntary mark-and-sweep"));
flags = 0;
rc = duk_heap_mark_and_sweep(heap, flags);
DUK_UNREF(rc);
/* Prevention checks in the call target handle cases where
* voluntary GC is not allowed. The voluntary GC trigger
* counter is only rewritten if mark-and-sweep actually runs.
*/
duk_heap_mark_and_sweep(heap, DUK_MS_FLAG_VOLUNTARY /*flags*/);
}
}
#define DUK__VOLUNTARY_PERIODIC_GC(heap) do { duk__check_voluntary_gc((heap)); } while (0)
#else
#define DUK__VOLUNTARY_PERIODIC_GC(heap) /* no voluntary gc */
#endif /* DUK_USE_VOLUNTARY_GC */
@ -42,7 +37,6 @@ DUK_LOCAL void duk__run_voluntary_gc(duk_heap *heap) {
DUK_INTERNAL void *duk_heap_mem_alloc(duk_heap *heap, duk_size_t size) {
void *res;
duk_bool_t rc;
duk_small_int_t i;
DUK_ASSERT(heap != NULL);
@ -60,7 +54,7 @@ DUK_INTERNAL void *duk_heap_mem_alloc(duk_heap *heap, duk_size_t size) {
#if defined(DUK_USE_GC_TORTURE)
/* simulate alloc failure on every alloc (except when mark-and-sweep is running) */
if (!DUK_HEAP_HAS_MARKANDSWEEP_RUNNING(heap)) {
if (heap->ms_prevent_count == 0) {
DUK_DDD(DUK_DDDPRINT("gc torture enabled, pretend that first alloc attempt fails"));
res = NULL;
DUK_UNREF(res);
@ -78,16 +72,22 @@ DUK_INTERNAL void *duk_heap_mem_alloc(duk_heap *heap, duk_size_t size) {
DUK_D(DUK_DPRINT("first alloc attempt failed, attempt to gc and retry"));
#if 0
/*
* Avoid a GC if GC is already running. This can happen at a late
* stage in a GC when we try to e.g. resize the stringtable
* or compact objects.
*
* NOTE: explicit handling isn't actually needed: if the GC is
* not allowed, duk_heap_mark_and_sweep() will reject it for every
* attempt in the loop below, resulting in NULL, same as here.
*/
if (DUK_HEAP_HAS_MARKANDSWEEP_RUNNING(heap)) {
if (heap->ms_prevent_count != 0) {
DUK_D(DUK_DPRINT("duk_heap_mem_alloc() failed, gc in progress (gc skipped), alloc size %ld", (long) size));
return NULL;
}
#endif
/*
* Retry with several GC attempts. Initial attempts are made without
@ -103,8 +103,7 @@ DUK_INTERNAL void *duk_heap_mem_alloc(duk_heap *heap, duk_size_t size) {
flags |= DUK_MS_FLAG_EMERGENCY;
}
rc = duk_heap_mark_and_sweep(heap, flags);
DUK_UNREF(rc);
duk_heap_mark_and_sweep(heap, flags);
res = heap->alloc_func(heap->heap_udata, size);
if (res) {
@ -162,7 +161,6 @@ DUK_INTERNAL void *duk_heap_mem_alloc_checked_zeroed(duk_hthread *thr, duk_size_
DUK_INTERNAL void *duk_heap_mem_realloc(duk_heap *heap, void *ptr, duk_size_t newsize) {
void *res;
duk_bool_t rc;
duk_small_int_t i;
DUK_ASSERT(heap != NULL);
@ -181,7 +179,7 @@ DUK_INTERNAL void *duk_heap_mem_realloc(duk_heap *heap, void *ptr, duk_size_t ne
#if defined(DUK_USE_GC_TORTURE)
/* simulate alloc failure on every realloc (except when mark-and-sweep is running) */
if (!DUK_HEAP_HAS_MARKANDSWEEP_RUNNING(heap)) {
if (heap->ms_prevent_count == 0) {
DUK_DDD(DUK_DDDPRINT("gc torture enabled, pretend that first realloc attempt fails"));
res = NULL;
DUK_UNREF(res);
@ -199,14 +197,16 @@ DUK_INTERNAL void *duk_heap_mem_realloc(duk_heap *heap, void *ptr, duk_size_t ne
DUK_D(DUK_DPRINT("first realloc attempt failed, attempt to gc and retry"));
#if 0
/*
* Avoid a GC if GC is already running. See duk_heap_mem_alloc().
*/
if (DUK_HEAP_HAS_MARKANDSWEEP_RUNNING(heap)) {
if (heap->ms_prevent_count != 0) {
DUK_D(DUK_DPRINT("duk_heap_mem_realloc() failed, gc in progress (gc skipped), alloc size %ld", (long) newsize));
return NULL;
}
#endif
/*
* Retry with several GC attempts. Initial attempts are made without
@ -222,8 +222,7 @@ DUK_INTERNAL void *duk_heap_mem_realloc(duk_heap *heap, void *ptr, duk_size_t ne
flags |= DUK_MS_FLAG_EMERGENCY;
}
rc = duk_heap_mark_and_sweep(heap, flags);
DUK_UNREF(rc);
duk_heap_mark_and_sweep(heap, flags);
res = heap->realloc_func(heap->heap_udata, ptr, newsize);
if (res || newsize == 0) {
@ -245,7 +244,6 @@ DUK_INTERNAL void *duk_heap_mem_realloc(duk_heap *heap, void *ptr, duk_size_t ne
DUK_INTERNAL void *duk_heap_mem_realloc_indirect(duk_heap *heap, duk_mem_getptr cb, void *ud, duk_size_t newsize) {
void *res;
duk_bool_t rc;
duk_small_int_t i;
DUK_ASSERT(heap != NULL);
@ -263,7 +261,7 @@ DUK_INTERNAL void *duk_heap_mem_realloc_indirect(duk_heap *heap, duk_mem_getptr
#if defined(DUK_USE_GC_TORTURE)
/* simulate alloc failure on every realloc (except when mark-and-sweep is running) */
if (!DUK_HEAP_HAS_MARKANDSWEEP_RUNNING(heap)) {
if (heap->ms_prevent_count == 0) {
DUK_DDD(DUK_DDDPRINT("gc torture enabled, pretend that first indirect realloc attempt fails"));
res = NULL;
DUK_UNREF(res);
@ -281,14 +279,16 @@ DUK_INTERNAL void *duk_heap_mem_realloc_indirect(duk_heap *heap, duk_mem_getptr
DUK_D(DUK_DPRINT("first indirect realloc attempt failed, attempt to gc and retry"));
#if 0
/*
* Avoid a GC if GC is already running. See duk_heap_mem_alloc().
*/
if (DUK_HEAP_HAS_MARKANDSWEEP_RUNNING(heap)) {
if (heap->ms_prevent_count != 0) {
DUK_D(DUK_DPRINT("duk_heap_mem_realloc_indirect() failed, gc in progress (gc skipped), alloc size %ld", (long) newsize));
return NULL;
}
#endif
/*
* Retry with several GC attempts. Initial attempts are made without
@ -312,8 +312,7 @@ DUK_INTERNAL void *duk_heap_mem_realloc_indirect(duk_heap *heap, duk_mem_getptr
flags |= DUK_MS_FLAG_EMERGENCY;
}
rc = duk_heap_mark_and_sweep(heap, flags);
DUK_UNREF(rc);
duk_heap_mark_and_sweep(heap, flags);
#if defined(DUK_USE_ASSERTIONS)
ptr_post = cb(heap, ud);
if (ptr_pre != ptr_post) {
@ -352,12 +351,8 @@ DUK_INTERNAL void duk_heap_mem_free(duk_heap *heap, void *ptr) {
*/
heap->free_func(heap->heap_udata, ptr);
/* Count free operations toward triggering a GC but never actually trigger
* a GC from a free. Otherwise code which frees internal structures would
* need to put in NULLs at every turn to ensure the object is always in
* consistent state for a mark-and-sweep.
/* Never perform a GC (even voluntary) in a memory free, otherwise
* all call sites doing frees would need to deal with the side effects.
* No need to update voluntary GC counter either.
*/
#if defined(DUK_USE_VOLUNTARY_GC)
heap->mark_and_sweep_trigger_counter--;
#endif
}
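The three allocation entry points in this file share the same retry discipline; below is a condensed sketch of that shape, with hypothetical names (my_heap, run_gc, NUM_ATTEMPTS) standing in for the internals:

#include <stddef.h>

#define NUM_ATTEMPTS 3  /* hypothetical; the real retry count is a Duktape internal */

struct my_heap {
	void *(*alloc)(void *udata, size_t size);
	void *udata;
};

/* Stand-in for duk_heap_mark_and_sweep(); the emergency flag requests a
 * more aggressive collection (e.g. object compaction). */
void run_gc(struct my_heap *heap, int emergency);

void *alloc_with_gc_retry(struct my_heap *heap, size_t size) {
	void *res = heap->alloc(heap->udata, size);
	int i;
	for (i = 0; res == NULL && i < NUM_ATTEMPTS; i++) {
		int emergency = (i >= NUM_ATTEMPTS - 1);  /* final round: emergency GC */
		run_gc(heap, emergency);
		res = heap->alloc(heap->udata, size);
	}
	return res;  /* NULL only if every GC-and-retry round also failed */
}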

117
src-input/duk_heap_misc.c

@ -4,16 +4,49 @@
#include "duk_internal.h"
#if defined(DUK_USE_DOUBLE_LINKED_HEAP) && defined(DUK_USE_REFERENCE_COUNTING)
/* Arbitrary remove only works with double linked heap, and is only required by
* reference counting so far.
*/
DUK_INTERNAL void duk_heap_remove_any_from_heap_allocated(duk_heap *heap, duk_heaphdr *hdr) {
DUK_INTERNAL void duk_heap_insert_into_heap_allocated(duk_heap *heap, duk_heaphdr *hdr) {
duk_heaphdr *root;
DUK_ASSERT(DUK_HEAPHDR_GET_TYPE(hdr) != DUK_HTYPE_STRING);
root = heap->heap_allocated;
#if defined(DUK_USE_DOUBLE_LINKED_HEAP)
if (root != NULL) {
DUK_ASSERT(DUK_HEAPHDR_GET_PREV(heap, root) == NULL);
DUK_HEAPHDR_SET_PREV(heap, root, hdr);
}
DUK_HEAPHDR_SET_PREV(heap, hdr, NULL);
#endif
DUK_HEAPHDR_SET_NEXT(heap, hdr, root);
DUK_ASSERT_HEAPHDR_LINKS(heap, hdr);
DUK_ASSERT_HEAPHDR_LINKS(heap, root);
heap->heap_allocated = hdr;
}
#if defined(DUK_USE_REFERENCE_COUNTING)
DUK_INTERNAL void duk_heap_remove_from_heap_allocated(duk_heap *heap, duk_heaphdr *hdr) {
duk_heaphdr *prev;
duk_heaphdr *next;
/* Strings are in string table. */
DUK_ASSERT(hdr != NULL);
DUK_ASSERT(DUK_HEAPHDR_GET_TYPE(hdr) != DUK_HTYPE_STRING);
/* Target 'hdr' must be in heap_allocated (not e.g. finalize_list).
* If not, heap lists will become corrupted so assert early for it.
*/
#if defined(DUK_USE_ASSERTIONS)
{
duk_heaphdr *tmp;
for (tmp = heap->heap_allocated; tmp != NULL; tmp = DUK_HEAPHDR_GET_NEXT(heap, tmp)) {
if (tmp == hdr) {
break;
}
}
DUK_ASSERT(tmp == hdr);
}
#endif
/* Read/write only once to minimize pointer compression calls. */
prev = DUK_HEAPHDR_GET_PREV(heap, hdr);
next = DUK_HEAPHDR_GET_NEXT(heap, hdr);
@ -30,27 +63,73 @@ DUK_INTERNAL void duk_heap_remove_any_from_heap_allocated(duk_heap *heap, duk_he
} else {
;
}
/* The prev/next pointers of the removed duk_heaphdr are left as garbage.
* It's up to the caller to ensure they're written before inserting the
* object back.
*/
}
#endif
#endif /* DUK_USE_REFERENCE_COUNTING */
DUK_INTERNAL void duk_heap_insert_into_heap_allocated(duk_heap *heap, duk_heaphdr *hdr) {
DUK_ASSERT(DUK_HEAPHDR_GET_TYPE(hdr) != DUK_HTYPE_STRING);
#if defined(DUK_USE_FINALIZER_SUPPORT)
DUK_INTERNAL void duk_heap_insert_into_finalize_list(duk_heap *heap, duk_heaphdr *hdr) {
duk_heaphdr *root;
root = heap->finalize_list;
#if defined(DUK_USE_DOUBLE_LINKED_HEAP)
if (heap->heap_allocated) {
DUK_ASSERT(DUK_HEAPHDR_GET_PREV(heap, heap->heap_allocated) == NULL);
DUK_HEAPHDR_SET_PREV(heap, heap->heap_allocated, hdr);
}
DUK_HEAPHDR_SET_PREV(heap, hdr, NULL);
if (root != NULL) {
DUK_ASSERT(DUK_HEAPHDR_GET_PREV(heap, root) == NULL);
DUK_HEAPHDR_SET_PREV(heap, root, hdr);
}
#endif
DUK_HEAPHDR_SET_NEXT(heap, hdr, root);
DUK_ASSERT_HEAPHDR_LINKS(heap, hdr);
DUK_ASSERT_HEAPHDR_LINKS(heap, root);
heap->finalize_list = hdr;
}
#endif /* DUK_USE_FINALIZER_SUPPORT */
#if defined(DUK_USE_FINALIZER_SUPPORT)
DUK_INTERNAL void duk_heap_remove_from_finalize_list(duk_heap *heap, duk_heaphdr *hdr) {
#if defined(DUK_USE_DOUBLE_LINKED_HEAP)
duk_heaphdr *next;
duk_heaphdr *prev;
next = DUK_HEAPHDR_GET_NEXT(heap, hdr);
prev = DUK_HEAPHDR_GET_PREV(heap, hdr);
if (next != NULL) {
DUK_ASSERT(DUK_HEAPHDR_GET_PREV(heap, next) == hdr);
DUK_HEAPHDR_SET_PREV(heap, next, prev);
}
if (prev == NULL) {
DUK_ASSERT(hdr == heap->finalize_list);
heap->finalize_list = next;
} else {
DUK_ASSERT(hdr != heap->finalize_list);
DUK_HEAPHDR_SET_NEXT(heap, prev, next);
}
#else
duk_heaphdr *next;
duk_heaphdr *curr;
/* Random removal is expensive: we need to locate the previous element
* because we don't have a 'prev' pointer.
*/
curr = heap->finalize_list;
if (curr == hdr) {
heap->finalize_list = DUK_HEAPHDR_GET_NEXT(heap, curr);
} else {
DUK_ASSERT(hdr != heap->finalize_list);
for (;;) {
DUK_ASSERT(curr != NULL); /* Caller responsibility. */
next = DUK_HEAPHDR_GET_NEXT(heap, curr);
if (next == hdr) {
next = DUK_HEAPHDR_GET_NEXT(heap, hdr);
DUK_HEAPHDR_SET_NEXT(heap, curr, next);
break;
}
}
}
#endif
DUK_HEAPHDR_SET_NEXT(heap, hdr, heap->heap_allocated);
heap->heap_allocated = hdr;
}
#endif /* DUK_USE_FINALIZER_SUPPORT */
#if defined(DUK_USE_INTERRUPT_COUNTER)
DUK_INTERNAL void duk_heap_switch_thread(duk_heap *heap, duk_hthread *new_thr) {

570
src-input/duk_heap_refcount.c

@@ -1,5 +1,9 @@
/*
* Reference counting implementation.
*
* INCREF/DECREF, finalization and freeing of objects whose refcount reaches
* zero (refzero). These operations are very performance sensitive, so
* various small tricks are used in an attempt to maximize speed.
*/
#include "duk_internal.h"
@@ -10,36 +14,6 @@
#error internal error, reference counting requires a double linked heap
#endif
/*
* Misc
*/
DUK_LOCAL void duk__queue_refzero(duk_heap *heap, duk_heaphdr *hdr) {
/* Tail insert: don't disturb head in case refzero is running. */
if (heap->refzero_list != NULL) {
duk_heaphdr *hdr_prev;
hdr_prev = heap->refzero_list_tail;
DUK_ASSERT(hdr_prev != NULL);
DUK_ASSERT(DUK_HEAPHDR_GET_NEXT(heap, hdr_prev) == NULL);
DUK_HEAPHDR_SET_NEXT(heap, hdr, NULL);
DUK_HEAPHDR_SET_PREV(heap, hdr, hdr_prev);
DUK_HEAPHDR_SET_NEXT(heap, hdr_prev, hdr);
DUK_ASSERT_HEAPHDR_LINKS(heap, hdr);
DUK_ASSERT_HEAPHDR_LINKS(heap, hdr_prev);
heap->refzero_list_tail = hdr;
} else {
DUK_ASSERT(heap->refzero_list_tail == NULL);
DUK_HEAPHDR_SET_NEXT(heap, hdr, NULL);
DUK_HEAPHDR_SET_PREV(heap, hdr, NULL);
DUK_ASSERT_HEAPHDR_LINKS(heap, hdr);
heap->refzero_list = hdr;
heap->refzero_list_tail = hdr;
}
}
/*
* Heap object refcount finalization.
*
@@ -48,16 +22,18 @@ DUK_LOCAL void duk__queue_refzero(duk_heap *heap, duk_heaphdr *hdr) {
* allocations (mark-and-sweep shares these helpers), it just manipulates
* the refcounts.
*
* Note that any of the decref's may cause a refcount to drop to zero, BUT
* it will not be processed inline. If refcount finalization is triggered
* by refzero processing, the objects will be just queued to the refzero
* list and processed later which eliminates C recursion. If refcount
* finalization is triggered by mark-and-sweep, any refzero situations are
* ignored because mark-and-sweep will deal with them. NORZ variants can
* be used here in both cases.
* Note that any of the DECREFs may cause a refcount to drop to zero. If so,
* the object won't be refzero processed inline, but will just be queued to
* refzero_list and processed by an earlier caller working on refzero_list,
* eliminating C recursion from even long refzero cascades. If refzero
* finalization is triggered by mark-and-sweep, refzero conditions are ignored
* (objects are not even queued to refzero_list) because mark-and-sweep deals
* with them; refcounts are still updated so that they remain in sync with
* actual references.
*/
DUK_LOCAL void duk__refcount_finalize_hobject(duk_hthread *thr, duk_hobject *h) {
DUK_INTERNAL void duk_hobject_refcount_finalize_norz(duk_heap *heap, duk_hobject *h) {
duk_hthread *thr;
duk_uint_fast32_t i;
duk_uint_fast32_t n;
duk_propvalue *p_val;
@@ -66,22 +42,27 @@ DUK_LOCAL void duk__refcount_finalize_hobject(duk_hthread *thr, duk_hobject *h)
duk_uint8_t *p_flag;
duk_hobject *h_proto;
DUK_ASSERT(heap != NULL);
DUK_ASSERT(heap->heap_thread != NULL);
DUK_ASSERT(h);
DUK_ASSERT(DUK_HEAPHDR_GET_TYPE((duk_heaphdr *) h) == DUK_HTYPE_OBJECT);
p_key = DUK_HOBJECT_E_GET_KEY_BASE(thr->heap, h);
p_val = DUK_HOBJECT_E_GET_VALUE_BASE(thr->heap, h);
p_flag = DUK_HOBJECT_E_GET_FLAGS_BASE(thr->heap, h);
thr = heap->heap_thread;
DUK_ASSERT(thr != NULL);
p_key = DUK_HOBJECT_E_GET_KEY_BASE(heap, h);
p_val = DUK_HOBJECT_E_GET_VALUE_BASE(heap, h);
p_flag = DUK_HOBJECT_E_GET_FLAGS_BASE(heap, h);
n = DUK_HOBJECT_GET_ENEXT(h);
while (n-- > 0) {
duk_hstring *key;
key = p_key[n];
if (!key) {
if (DUK_UNLIKELY(key == NULL)) {
continue;
}
DUK_HSTRING_DECREF_NORZ(thr, key);
if (p_flag[n] & DUK_PROPDESC_FLAG_ACCESSOR) {
if (DUK_UNLIKELY(p_flag[n] & DUK_PROPDESC_FLAG_ACCESSOR)) {
duk_hobject *h_getset;
h_getset = p_val[n].a.get;
DUK_ASSERT(h_getset == NULL || DUK_HEAPHDR_IS_OBJECT((duk_heaphdr *) h_getset));
@@ -96,7 +77,7 @@ DUK_LOCAL void duk__refcount_finalize_hobject(duk_hthread *thr, duk_hobject *h)
}
}
p_tv = DUK_HOBJECT_A_GET_BASE(thr->heap, h);
p_tv = DUK_HOBJECT_A_GET_BASE(heap, h);
n = DUK_HOBJECT_GET_ASIZE(h);
while (n-- > 0) {
duk_tval *tv_val;
@@ -104,9 +85,9 @@ DUK_LOCAL void duk__refcount_finalize_hobject(duk_hthread *thr, duk_hobject *h)
DUK_TVAL_DECREF_NORZ(thr, tv_val);
}
/* hash part is a 'weak reference' and does not contribute */
/* Hash part is a 'weak reference' and doesn't contribute to refcounts. */
h_proto = (duk_hobject *) DUK_HOBJECT_GET_PROTOTYPE(thr->heap, h);
h_proto = (duk_hobject *) DUK_HOBJECT_GET_PROTOTYPE(heap, h);
DUK_ASSERT(h_proto == NULL || DUK_HEAPHDR_IS_OBJECT((duk_heaphdr *) h_proto));
DUK_HOBJECT_DECREF_NORZ_ALLOWNULL(thr, h_proto);
@@ -133,19 +114,20 @@ DUK_LOCAL void duk__refcount_finalize_hobject(duk_hthread *thr, duk_hobject *h)
duk_tval *tv, *tv_end;
duk_hobject **funcs, **funcs_end;
if (DUK_HCOMPFUNC_GET_DATA(thr->heap, f) != NULL) {
tv = DUK_HCOMPFUNC_GET_CONSTS_BASE(thr->heap, f);
tv_end = DUK_HCOMPFUNC_GET_CONSTS_END(thr->heap, f);
if (DUK_LIKELY(DUK_HCOMPFUNC_GET_DATA(heap, f) != NULL)) {
tv = DUK_HCOMPFUNC_GET_CONSTS_BASE(heap, f);
tv_end = DUK_HCOMPFUNC_GET_CONSTS_END(heap, f);
while (tv < tv_end) {
DUK_TVAL_DECREF_NORZ(thr, tv);
tv++;
}
funcs = DUK_HCOMPFUNC_GET_FUNCS_BASE(thr->heap, f);
funcs_end = DUK_HCOMPFUNC_GET_FUNCS_END(thr->heap, f);
funcs = DUK_HCOMPFUNC_GET_FUNCS_BASE(heap, f);
funcs_end = DUK_HCOMPFUNC_GET_FUNCS_END(heap, f);
while (funcs < funcs_end) {
duk_hobject *h_func;
h_func = *funcs;
DUK_ASSERT(h_func != NULL);
DUK_ASSERT(DUK_HEAPHDR_IS_OBJECT((duk_heaphdr *) h_func));
DUK_HCOMPFUNC_DECREF_NORZ(thr, (duk_hcompfunc *) h_func);
funcs++;
@@ -155,9 +137,9 @@ DUK_LOCAL void duk__refcount_finalize_hobject(duk_hthread *thr, duk_hobject *h)
DUK_D(DUK_DPRINT("duk_hcompfunc 'data' is NULL, skipping decref"));
}
DUK_HEAPHDR_DECREF_ALLOWNULL(thr, (duk_heaphdr *) DUK_HCOMPFUNC_GET_LEXENV(thr->heap, f));
DUK_HEAPHDR_DECREF_ALLOWNULL(thr, (duk_heaphdr *) DUK_HCOMPFUNC_GET_VARENV(thr->heap, f));
DUK_HEAPHDR_DECREF_ALLOWNULL(thr, (duk_hbuffer *) DUK_HCOMPFUNC_GET_DATA(thr->heap, f));
DUK_HEAPHDR_DECREF_ALLOWNULL(thr, (duk_heaphdr *) DUK_HCOMPFUNC_GET_LEXENV(heap, f));
DUK_HEAPHDR_DECREF_ALLOWNULL(thr, (duk_heaphdr *) DUK_HCOMPFUNC_GET_VARENV(heap, f));
DUK_HEAPHDR_DECREF_ALLOWNULL(thr, (duk_hbuffer *) DUK_HCOMPFUNC_GET_DATA(heap, f));
} else if (DUK_HOBJECT_IS_DECENV(h)) {
duk_hdecenv *e = (duk_hdecenv *) h;
DUK_ASSERT_HDECENV_VALID(e);
@@ -215,253 +197,257 @@ DUK_LOCAL void duk__refcount_finalize_hobject(duk_hthread *thr, duk_hobject *h)
}
}
DUK_INTERNAL void duk_heaphdr_refcount_finalize(duk_hthread *thr, duk_heaphdr *hdr) {
DUK_ASSERT(hdr);
DUK_INTERNAL void duk_heaphdr_refcount_finalize_norz(duk_heap *heap, duk_heaphdr *hdr) {
DUK_ASSERT(heap != NULL);
DUK_ASSERT(heap->heap_thread != NULL);
DUK_ASSERT(hdr != NULL);
if (DUK_HEAPHDR_GET_TYPE(hdr) == DUK_HTYPE_OBJECT) {
duk__refcount_finalize_hobject(thr, (duk_hobject *) hdr);
if (DUK_HEAPHDR_IS_OBJECT(hdr)) {
duk_hobject_refcount_finalize_norz(heap, (duk_hobject *) hdr);
}
/* DUK_HTYPE_BUFFER: nothing to finalize */
/* DUK_HTYPE_STRING: nothing to finalize */
}
#if defined(DUK_USE_FINALIZER_SUPPORT)
#if defined(DUK_USE_REFZERO_FINALIZER_TORTURE)
DUK_LOCAL duk_ret_t duk__refcount_fake_finalizer(duk_context *ctx) {
DUK_UNREF(ctx);
DUK_D(DUK_DPRINT("fake refcount torture finalizer executed"));
#if 0
DUK_DD(DUK_DDPRINT("fake torture finalizer for: %!T", duk_get_tval(ctx, 0)));
#endif
/* Require a lot of stack to force a value stack grow/shrink. */
duk_require_stack(ctx, 100000);
/* XXX: do something to force a callstack grow/shrink, perhaps
* just a manual forced resize?
*/
return 0;
}
DUK_LOCAL void duk__refcount_run_torture_finalizer(duk_hthread *thr, duk_hobject *obj) {
duk_context *ctx;
duk_int_t rc;
DUK_ASSERT(thr != NULL);
DUK_ASSERT(obj != NULL);
ctx = (duk_context *) thr;
/* Avoid fake finalization for the duk__refcount_fake_finalizer function
* itself, otherwise we're in infinite recursion.
*/
if (DUK_HOBJECT_HAS_NATFUNC(obj)) {
if (((duk_hnatfunc *) obj)->func == duk__refcount_fake_finalizer) {
DUK_DD(DUK_DDPRINT("avoid fake torture finalizer for duk__refcount_fake_finalizer itself"));
return;
}
}
/* Avoid fake finalization when callstack limit has been reached.
* Otherwise a callstack limit error will be created, then refzero'ed,
* and we're in an infinite loop.
*/
if (thr->heap->call_recursion_depth >= thr->heap->call_recursion_limit ||
thr->callstack_size + 2 * DUK_CALLSTACK_GROW_STEP >= thr->callstack_max /*approximate*/) {
DUK_D(DUK_DPRINT("call recursion depth reached, avoid fake torture finalizer"));
return;
}
/* Run fake finalizer. Avoid creating new refzero queue entries
* so that we are not forced into a forever loop.
*/
duk_push_c_function(ctx, duk__refcount_fake_finalizer, 1 /*nargs*/);
duk_push_hobject(ctx, obj);
rc = duk_pcall(ctx, 1);
DUK_UNREF(rc); /* ignored */
duk_pop(ctx);
}
#endif /* DUK_USE_REFZERO_FINALIZER_TORTURE */
#endif /* DUK_USE_FINALIZER_SUPPORT */
/*
* Refcount memory freeing loop.
* Refzero processing for duk_hobject: queue a refzero'ed object to either
* finalize_list or refzero_list and process the relevant list(s) if
* necessary.
*
* Refzero_list is singly linked, with only 'prev' pointers set and valid.
* All 'next' pointers are intentionally left as garbage. This doesn't
* matter because refzero_list is processed to completion before any other
* code (like mark-and-sweep) might walk the list.
*
* In more detail:
*
* Frees objects in the refzero_pending list until the list becomes
* empty. When an object is freed, its references get decref'd and
* may cause further objects to be queued for freeing.
* - On first insert refzero_list is NULL and the new object becomes the
* first and only element on the list; duk__refcount_free_pending() is
* called and it starts processing the list from the initial element,
* i.e. the list tail.
*
* - As each object is refcount finalized, new objects may be queued to
* refzero_list head. Their 'next' pointers are left as garbage, but
* 'prev' pointers are set correctly, with the element at refzero_list
* having a NULL 'prev' pointer. The fact that refzero_list is non-NULL
* is used to reject (1) recursive duk__refcount_free_pending() and
* (2) finalize_list processing calls.
*
* - When we're done with the current object, read its 'prev' pointer and
* free the object. If 'prev' is NULL, we've reached head of list and are
* done: set refzero_list to NULL and process pending finalizers. Otherwise
* continue processing the list.
*
* A refzero cascade is free of side effects because it only involves
* queueing more objects and freeing memory; finalizer execution is blocked
* in the code path queueing objects to finalize_list. As a result the
* initial refzero call (which triggers duk__refcount_free_pending()) must
* check finalize_list so that finalizers are executed snappily.
*
* If finalize_list processing starts first, refzero may occur while we're
* processing finalizers. That's fine: that particular refzero cascade is
* handled to completion without side effects. Once the cascade is complete,
* we'll run pending finalizers but notice that we're already doing that and
* return.
*
* This could be expanded to allow incremental freeing: just bail out
* early and resume at a future alloc/decref/refzero.
* early and resume at a future alloc/decref/refzero. However, if that
* were done, the list structure would need to be kept consistent at all
* times, mark-and-sweep would need to handle refzero_list, etc.
*/
DUK_INTERNAL void duk_refzero_free_pending(duk_hthread *thr) {
duk_heaphdr *h1, *h2;
duk_heap *heap;
DUK_LOCAL void duk__refcount_free_pending(duk_heap *heap) {
duk_heaphdr *curr;
#if defined(DUK_USE_DEBUG)
duk_int_t count = 0;
#endif
DUK_ASSERT(thr != NULL);
DUK_ASSERT(thr->heap != NULL);
heap = thr->heap;
DUK_ASSERT(heap != NULL);
/*
* Detect recursive invocation
*/
curr = heap->refzero_list;
DUK_ASSERT(curr != NULL);
DUK_ASSERT(DUK_HEAPHDR_GET_PREV(heap, curr) == NULL); /* We're called on initial insert only. */
/* curr->next is GARBAGE. */
if (heap->refzero_free_running) {
DUK_DDD(DUK_DDDPRINT("refzero free running, skip run"));
return;
}
do {
duk_heaphdr *prev;
/*
* Churn refzero_list until empty
*/
DUK_DDD(DUK_DDDPRINT("refzero processing %p: %!O", (void *) curr, (duk_heaphdr *) curr));
DUK_ASSERT(heap->refzero_free_running == 0);
heap->refzero_free_running = 1;
#if defined(DUK_USE_DEBUG)
count++;
#endif
while (heap->refzero_list) {
duk_hobject *obj;
#if defined(DUK_USE_FINALIZER_SUPPORT)
duk_bool_t rescued = 0;
#endif /* DUK_USE_FINALIZER_SUPPORT */
DUK_ASSERT(curr != NULL);
DUK_ASSERT(DUK_HEAPHDR_GET_TYPE(curr) == DUK_HTYPE_OBJECT); /* currently, always the case */
/* FINALIZED may be set; don't care about flags here. */
/*
* Pick an object from the head (don't remove yet).
/* Refcount finalize 'curr'. Refzero_list must be non-NULL
* here to prevent recursive entry to duk__refcount_free_pending().
*/
DUK_ASSERT(heap->refzero_list != NULL);
duk_hobject_refcount_finalize_norz(heap, (duk_hobject *) curr);
h1 = heap->refzero_list;
obj = (duk_hobject *) h1;
DUK_DD(DUK_DDPRINT("refzero processing %p: %!O", (void *) h1, (duk_heaphdr *) h1));
DUK_ASSERT(DUK_HEAPHDR_GET_PREV(heap, h1) == NULL);
DUK_ASSERT(DUK_HEAPHDR_GET_TYPE(h1) == DUK_HTYPE_OBJECT); /* currently, always the case */
prev = DUK_HEAPHDR_GET_PREV(heap, curr);
DUK_ASSERT((prev == NULL && heap->refzero_list == curr) || \
(prev != NULL && heap->refzero_list != curr));
/* prev->next is intentionally not updated and is garbage. */
#if defined(DUK_USE_FINALIZER_SUPPORT)
#if defined(DUK_USE_REFZERO_FINALIZER_TORTURE)
/* Torture option to shake out finalizer side effect issues:
* make a bogus function call for every finalizable object,
* essentially simulating the case where everything has a
* finalizer.
*/
DUK_DD(DUK_DDPRINT("refzero torture enabled, fake finalizer"));
DUK_ASSERT(DUK_HEAPHDR_GET_REFCOUNT(h1) == 0);
DUK_HEAPHDR_PREINC_REFCOUNT(h1); /* bump refcount to prevent refzero during finalizer processing */
DUK_ASSERT(DUK_HEAPHDR_GET_REFCOUNT(h1) != 0); /* No wrapping; always true because initial refcount was 0. */
duk__refcount_run_torture_finalizer(thr, obj); /* must never longjmp */
DUK_HEAPHDR_PREDEC_REFCOUNT(h1); /* remove artificial bump */
DUK_ASSERT_DISABLE(h1->h_refcount >= 0); /* refcount is unsigned, so always true */
#endif /* DUK_USE_REFZERO_FINALIZER_TORTURE */
#endif /* DUK_USE_FINALIZER_SUPPORT */
duk_free_hobject(heap, (duk_hobject *) curr); /* Invalidates 'curr'. */
/*
* Finalizer check.
*
* Note: running a finalizer may have arbitrary side effects, e.g.
* queue more objects on refzero_list (tail), or even trigger a
* mark-and-sweep.
*
* Note: quick reject check should match vast majority of
* objects and must be safe (not throw any errors, ever).
*
* An object may have FINALIZED here if it was finalized by mark-and-sweep
* on a previous run and refcount then decreased to zero. We won't run the
* finalizer again here.
*
* A finalizer is looked up from the object and up its prototype chain
* (which allows inherited finalizers), but using a duk_hobject flag
* to avoid actual property table lookups.
*/
curr = prev;
} while (curr != NULL);
#if defined(DUK_USE_FINALIZER_SUPPORT)
if (DUK_UNLIKELY(duk_hobject_has_finalizer_fast(thr, obj))) {
DUK_DDD(DUK_DDDPRINT("object has a finalizer, run it"));
heap->refzero_list = NULL;
DUK_DD(DUK_DDPRINT("refzero processed %ld objects", (long) count));
}
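The list discipline described above can be condensed into a standalone sketch (hypothetical model_* names, not the internal API): insertion touches only the list head and sets only 'prev'; whoever queued the first element drains the whole cascade iteratively by following 'prev' pointers.

#include <stdlib.h>

typedef struct model_obj {
    struct model_obj *prev;  /* only 'prev' is valid while on the list */
    /* 'next' is intentionally left as garbage in this discipline */
} model_obj;

static model_obj *model_refzero_list = NULL;

static void model_refzero(model_obj *obj) {
    model_obj *root = model_refzero_list;
    model_obj *curr;

    obj->prev = NULL;
    model_refzero_list = obj;
    if (root != NULL) {
        /* Old head gains a 'prev' link to the new head.  A caller deeper
         * in the C stack is already draining the list, so just return.
         */
        root->prev = obj;
        return;
    }

    /* We queued the first element: drain the whole cascade here, with no
     * C recursion no matter how long the cascade gets.
     */
    curr = model_refzero_list;
    do {
        model_obj *prev;
        /* Decref members here; that may re-enter model_refzero(), which
         * only pushes new elements to the head and returns.
         */
        prev = curr->prev;  /* read before freeing */
        free(curr);
        curr = prev;
    } while (curr != NULL);
    model_refzero_list = NULL;
}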
DUK_LOCAL DUK_INLINE void duk__refcount_refzero_hobject(duk_heap *heap, duk_hobject *obj, duk_bool_t skip_free_pending) {
duk_heaphdr *hdr;
duk_heaphdr *root;
DUK_ASSERT(DUK_HEAPHDR_GET_REFCOUNT(h1) == 0);
DUK_HEAPHDR_PREINC_REFCOUNT(h1); /* bump refcount to prevent refzero during finalizer processing */
DUK_ASSERT(DUK_HEAPHDR_GET_REFCOUNT(h1) != 0); /* No wrapping; always true because initial refcount was 0. */
DUK_ASSERT(heap != NULL);
DUK_ASSERT(heap->heap_thread != NULL);
DUK_ASSERT(obj != NULL);
DUK_ASSERT(DUK_HEAPHDR_GET_TYPE((duk_heaphdr *) obj) == DUK_HTYPE_OBJECT);
duk_hobject_run_finalizer(thr, obj); /* must never longjmp */
DUK_ASSERT(DUK_HEAPHDR_HAS_FINALIZED(h1)); /* duk_hobject_run_finalizer() sets */
hdr = (duk_heaphdr *) obj;
DUK_HEAPHDR_PREDEC_REFCOUNT(h1); /* remove artificial bump */
DUK_ASSERT_DISABLE(h1->h_refcount >= 0); /* refcount is unsigned, so always true */
DUK_HEAP_REMOVE_FROM_HEAP_ALLOCATED(heap, hdr);
if (DUK_HEAPHDR_GET_REFCOUNT(h1) != 0) {
DUK_DDD(DUK_DDDPRINT("-> object refcount after finalization non-zero, object will be rescued"));
rescued = 1;
} else {
DUK_DDD(DUK_DDDPRINT("-> object refcount still zero after finalization, object will be freed"));
#if defined(DUK_USE_FINALIZER_SUPPORT)
/* This finalizer check MUST BE side effect free. It should also be
* as fast as possible because it's applied to every object freed.
*/
if (DUK_UNLIKELY(DUK_HOBJECT_HAS_FINALIZER_FAST(heap, (duk_hobject *) hdr))) {
/* Special case: FINALIZED may be set if mark-and-sweep queued
* object for finalization, the finalizer was executed (and
* FINALIZED set), mark-and-sweep hasn't yet processed the
* object again, but its refcount drops to zero. Free without
* running the finalizer again.
*/
if (DUK_HEAPHDR_HAS_FINALIZED(hdr)) {
DUK_D(DUK_DPRINT("refzero'd object has finalizer and FINALIZED is set -> free"));
} else {
/* Set FINALIZABLE flag so that all objects on finalize_list
* will have it set and are thus detectable based on the
* flag alone.
*/
DUK_HEAPHDR_SET_FINALIZABLE(hdr);
DUK_ASSERT(!DUK_HEAPHDR_HAS_FINALIZED(hdr));
DUK_HEAP_INSERT_INTO_FINALIZE_LIST(heap, hdr);
/* Process finalizers unless skipping is explicitly
* requested (NORZ) or refzero_list is being processed
* (avoids side effects during a refzero cascade).
* If refzero_list is processed, the initial refzero
* call will run pending finalizers when refzero_list
* is done.
*/
if (!skip_free_pending && heap->refzero_list == NULL) {
duk_heap_process_finalize_list(heap);
}
return;
}
}
#endif /* DUK_USE_FINALIZER_SUPPORT */
/* Refzero head is still the same. This is the case even if finalizer
* inserted more refzero objects; they are inserted to the tail.
*/
DUK_ASSERT(h1 == heap->refzero_list);
DUK_ASSERT(DUK_HEAPHDR_GET_PREV(heap, h1) == NULL);
/*
* Remove the object from the refzero list. This cannot be done
* before a possible finalizer has been executed; the finalizer
* may trigger a mark-and-sweep, and mark-and-sweep must be able
* to traverse a complete refzero_list.
/* No need to finalize, free object via refzero_list. */
root = heap->refzero_list;
DUK_HEAPHDR_SET_PREV(heap, hdr, NULL);
/* 'next' is left as GARBAGE. */
heap->refzero_list = hdr;
if (root == NULL) {
/* Object is now queued. Refzero_list was NULL so
* no-one is currently processing it; do it here.
* With refzero processing just doing a cascade of
* free calls, we can process it directly even when
* NORZ macros are used: there are no side effects.
*/
duk__refcount_free_pending(heap);
DUK_ASSERT(heap->refzero_list == NULL);
h2 = DUK_HEAPHDR_GET_NEXT(heap, h1);
if (h2 != NULL) {
DUK_HEAPHDR_SET_PREV(heap, h2, NULL);
heap->refzero_list = h2;
} else {
heap->refzero_list = NULL;
heap->refzero_list_tail = NULL;
/* Process finalizers only after the entire cascade
* is finished. In most cases there's nothing to
* finalize, so fast path check to avoid a call.
*/
#if defined(DUK_USE_FINALIZER_SUPPORT)
if (!skip_free_pending && DUK_UNLIKELY(heap->finalize_list != NULL)) {
duk_heap_process_finalize_list(heap);
}
#endif
} else {
DUK_ASSERT(DUK_HEAPHDR_GET_PREV(heap, root) == NULL);
DUK_HEAPHDR_SET_PREV(heap, root, hdr);
/*
* Rescue or free.
/* Object is now queued. Because refzero_list was
* non-NULL, it's already being processed by someone
* in the C call stack, so we're done.
*/
}
}
#if defined(DUK_USE_FINALIZER_SUPPORT)
if (DUK_UNLIKELY(rescued)) {
/* yes -> move back to heap allocated */
DUK_DD(DUK_DDPRINT("object rescued during refcount finalization: %p", (void *) h1));
DUK_ASSERT(!DUK_HEAPHDR_HAS_FINALIZABLE(h1));
DUK_ASSERT(DUK_HEAPHDR_HAS_FINALIZED(h1));
DUK_HEAPHDR_CLEAR_FINALIZED(h1);
h2 = heap->heap_allocated;
DUK_ASSERT(DUK_HEAPHDR_GET_PREV(heap, h1) == NULL); /* Policy for head of list. */
if (h2 != NULL) {
DUK_HEAPHDR_SET_PREV(heap, h2, h1);
}
DUK_HEAPHDR_SET_NEXT(heap, h1, h2);
DUK_ASSERT_HEAPHDR_LINKS(heap, h1);
DUK_ASSERT_HEAPHDR_LINKS(heap, h2);
heap->heap_allocated = h1;
} else
#endif /* DUK_USE_FINALIZER_SUPPORT */
{
/* no -> decref members, then free */
duk__refcount_finalize_hobject(thr, obj);
DUK_ASSERT(DUK_HEAPHDR_GET_TYPE(h1) == DUK_HTYPE_OBJECT); /* currently, always the case */
duk_free_hobject(heap, (duk_hobject *) h1);
}
DUK_INTERNAL DUK_ALWAYS_INLINE void duk_refzero_check_fast(duk_hthread *thr) {
DUK_ASSERT(thr != NULL);
DUK_ASSERT(thr->heap != NULL);
DUK_ASSERT(thr->heap->refzero_list == NULL); /* Processed to completion inline. */
#if defined(DUK_USE_DEBUG)
count++;
#endif
if (DUK_UNLIKELY(thr->heap->finalize_list != NULL)) {
duk_heap_process_finalize_list(thr->heap);
}
}
DUK_ASSERT(heap->refzero_free_running == 1);
heap->refzero_free_running = 0;
DUK_INTERNAL void duk_refzero_check_slow(duk_hthread *thr) {
DUK_ASSERT(thr != NULL);
DUK_ASSERT(thr->heap != NULL);
DUK_ASSERT(thr->heap->refzero_list == NULL); /* Processed to completion inline. */
DUK_DDD(DUK_DDDPRINT("refzero processed %ld objects", (long) count));
if (DUK_UNLIKELY(thr->heap->finalize_list != NULL)) {
duk_heap_process_finalize_list(thr->heap);
}
}
#endif /* DUK_USE_FINALIZER_SUPPORT */
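A hedged usage sketch of the resulting NORZ pattern (the function below is hypothetical; the macro and helper are the ones defined in this patch, assuming a refcount- and finalizer-enabled build): perform a batch of side-effect-free DECREFs, then flush queued finalizers once at a stable point.

static void example_wipe_slots(duk_hthread *thr, duk_tval *tv_base, duk_size_t count) {
    duk_size_t i;

    for (i = 0; i < count; i++) {
        /* NORZ: memory of refzero'ed values is reclaimed immediately, but
         * finalizers are only queued, so no arbitrary code runs while the
         * slots are in an intermediate state.
         */
        DUK_TVAL_SET_UNDEFINED_UPDREF_NORZ(thr, tv_base + i);
    }

    /* Stable state reached: run any finalizers queued during the loop. */
    duk_refzero_check_slow(thr);
}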
/*
* Refzero processing for duk_hstring.
*/
DUK_LOCAL DUK_INLINE void duk__refcount_refzero_hstring(duk_heap *heap, duk_hstring *str) {
DUK_ASSERT(heap != NULL);
DUK_ASSERT(heap->heap_thread != NULL);
DUK_ASSERT(str != NULL);
DUK_ASSERT(DUK_HEAPHDR_GET_TYPE((duk_heaphdr *) str) == DUK_HTYPE_STRING);
duk_heap_strcache_string_remove(heap, str);
duk_heap_strtable_unlink(heap, str);
duk_free_hstring(heap, str);
}
/*
* Refzero processing for duk_hbuffer.
*/
DUK_LOCAL DUK_INLINE void duk__refcount_refzero_hbuffer(duk_heap *heap, duk_hbuffer *buf) {
DUK_ASSERT(heap != NULL);
DUK_ASSERT(heap->heap_thread != NULL);
DUK_ASSERT(buf != NULL);
DUK_ASSERT(DUK_HEAPHDR_GET_TYPE((duk_heaphdr *) buf) == DUK_HTYPE_BUFFER);
DUK_HEAP_REMOVE_FROM_HEAP_ALLOCATED(heap, (duk_heaphdr *) buf);
duk_free_hbuffer(heap, buf);
}
/*
* Incref and decref functions.
*
* Decref may trigger immediate refzero handling, which may free and finalize
* an arbitrary number of objects.
* an arbitrary number of objects (a "DECREF cascade").
*
* Refzero handling is skipped entirely if (1) mark-and-sweep is running or
* (2) execution is paused in the debugger. The objects are left in the heap,
@@ -474,46 +460,67 @@ DUK_INTERNAL void duk_refzero_free_pending(duk_hthread *thr) {
* mark-and-sweep also calls finalizers which would use the ordinary decref
* macros anyway.
*
* The DUK__RZ_SUPPRESS_CHECK() must be enabled also when mark-and-sweep
* support has been disabled: the flag is also used in heap destruction when
* running finalizers for remaining objects, and the flag prevents objects
* from being moved around in heap linked lists.
* We can't process refzeros (= free objects) when the debugger is running
* as the debugger might make an object unreachable but still continue
* inspecting it (or even cause it to be pushed back). So we must rely on
* mark-and-sweep to collect them.
*
* The DUK__RZ_SUPPRESS_CHECK() condition is also used in heap destruction
* when running finalizers for remaining objects: the flag prevents objects
* from being moved around in heap linked lists while that's being done.
*
* The suppress condition is important to performance.
*/
/* The suppress condition is important to performance. The flags being tested
* are in the same duk_heap field so a single TEST instruction (on x86) tests
* for them.
*/
#define DUK__RZ_SUPPRESS_ASSERT1() do { \
DUK_ASSERT(thr != NULL); \
DUK_ASSERT(thr->heap != NULL); \
/* When mark-and-sweep runs, heap_thread must exist. */ \
DUK_ASSERT(thr->heap->ms_running == 0 || thr->heap->heap_thread != NULL); \
/* When mark-and-sweep runs, the 'thr' argument always matches heap_thread. \
* This could be used to e.g. suppress check against 'thr' directly (and \
* knowing it would be heap_thread); not really used now. \
*/ \
DUK_ASSERT(thr->heap->ms_running == 0 || thr == thr->heap->heap_thread); \
/* We may be called when the heap is initializing and we process \
* refzeros normally, but mark-and-sweep and finalizers are prevented \
* if that's the case. \
*/ \
DUK_ASSERT(thr->heap->heap_initializing == 0 || thr->heap->ms_prevent_count > 0); \
DUK_ASSERT(thr->heap->heap_initializing == 0 || thr->heap->pf_prevent_count > 0); \
} while (0)
#if defined(DUK_USE_DEBUGGER_SUPPORT)
#define DUK__RZ_SUPPRESS_COND() \
(DUK_HEAP_HAS_MARKANDSWEEP_RUNNING(heap) || DUK_HEAP_IS_PAUSED(heap))
#define DUK__RZ_SUPPRESS_ASSERT2() do { \
/* When debugger is paused, ms_running is set. */ \
DUK_ASSERT(!DUK_HEAP_HAS_DEBUGGER_PAUSED(thr->heap) || thr->heap->ms_running != 0); \
} while (0)
#define DUK__RZ_SUPPRESS_COND() (heap->ms_running != 0)
#else
#define DUK__RZ_SUPPRESS_COND() \
(DUK_HEAP_HAS_MARKANDSWEEP_RUNNING(heap))
#endif
#define DUK__RZ_SUPPRESS_ASSERT2() do { } while (0)
#define DUK__RZ_SUPPRESS_COND() (heap->ms_running != 0)
#endif /* DUK_USE_DEBUGGER_SUPPORT */
#define DUK__RZ_SUPPRESS_CHECK() do { \
DUK__RZ_SUPPRESS_ASSERT1(); \
DUK__RZ_SUPPRESS_ASSERT2(); \
if (DUK_UNLIKELY(DUK__RZ_SUPPRESS_COND())) { \
DUK_DDD(DUK_DDDPRINT("refzero handling suppressed when mark-and-sweep running, object: %p", (void *) h)); \
DUK_DDD(DUK_DDDPRINT("refzero handling suppressed (not even queued) when mark-and-sweep running, object: %p", (void *) h)); \
return; \
} \
} while (0)
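As a standalone sketch of the suppression rule (hypothetical model_* names): the refcount is still decremented while mark-and-sweep runs so counts stay in sync with actual references, but a zero count is not acted on; the sweep phase frees the object instead.

typedef struct {
    int ms_running;  /* nonzero while mark-and-sweep is active */
} model_ms_heap;

typedef struct {
    unsigned int refcount;
} model_hdr;

static void model_decref(model_ms_heap *heap, model_hdr *h) {
    if (--h->refcount == 0) {
        if (heap->ms_running) {
            /* Not even queued: mark-and-sweep either rescues the object
             * (still reachable) or frees it in the sweep phase.
             */
            return;
        }
        /* ... normal path: queue to finalize_list or refzero_list ... */
    }
}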
#define DUK__RZ_STRING() do { \
duk_heap_strcache_string_remove(thr->heap, (duk_hstring *) h); \
duk_heap_strtable_unlink(heap, (duk_hstring *) h); \
duk_free_hstring(heap, (duk_hstring *) h); \
duk__refcount_refzero_hstring(heap, (duk_hstring *) h); \
} while (0)
#define DUK__RZ_BUFFER() do { \
duk_heap_remove_any_from_heap_allocated(heap, (duk_heaphdr *) h); \
duk_free_hbuffer(heap, (duk_hbuffer *) h); \
duk__refcount_refzero_hbuffer(heap, (duk_hbuffer *) h); \
} while (0)
#define DUK__RZ_OBJECT() do { \
duk_heap_remove_any_from_heap_allocated(heap, (duk_heaphdr *) h); \
duk__queue_refzero(heap, (duk_heaphdr *) h); \
if (!skip_free_pending) { \
duk_refzero_free_pending(thr); \
} \
duk__refcount_refzero_hobject(heap, (duk_hobject *) h, skip_free_pending); \
} while (0)
/* XXX: test the effect of inlining here vs. NOINLINE in refzero helpers */
#if defined(DUK_USE_FAST_REFCOUNT_DEFAULT)
#define DUK__RZ_INLINE DUK_ALWAYS_INLINE
#else
@@ -583,42 +590,39 @@ DUK_LOCAL DUK__RZ_INLINE void duk__heaphdr_refzero_helper(duk_hthread *thr, duk_
DUK__RZ_OBJECT();
break;
case DUK_HTYPE_BUFFER:
default:
/* Buffers have no internal references. However, a dynamic
* buffer has a separate allocation for the buffer. This is
* freed by duk_heap_free_heaphdr_raw().
*/
DUK_ASSERT(DUK_HEAPHDR_GET_TYPE(h) == DUK_HTYPE_BUFFER);
DUK__RZ_BUFFER();
break;
default:
DUK_D(DUK_DPRINT("invalid heap type in decref: %ld", (long) DUK_HEAPHDR_GET_TYPE(h)));
DUK_UNREACHABLE();
}
}
DUK_INTERNAL void duk_heaphdr_refzero(duk_hthread *thr, duk_heaphdr *h) {
DUK_INTERNAL DUK_NOINLINE void duk_heaphdr_refzero(duk_hthread *thr, duk_heaphdr *h) {
duk__heaphdr_refzero_helper(thr, h, 0 /*skip_free_pending*/);
}
DUK_INTERNAL void duk_heaphdr_refzero_norz(duk_hthread *thr, duk_heaphdr *h) {
DUK_INTERNAL DUK_NOINLINE void duk_heaphdr_refzero_norz(duk_hthread *thr, duk_heaphdr *h) {
duk__heaphdr_refzero_helper(thr, h, 1 /*skip_free_pending*/);
}
DUK_INTERNAL void duk_hstring_refzero(duk_hthread *thr, duk_hstring *h) {
DUK_INTERNAL DUK_NOINLINE void duk_hstring_refzero(duk_hthread *thr, duk_hstring *h) {
duk__hstring_refzero_helper(thr, h);
}
DUK_INTERNAL void duk_hbuffer_refzero(duk_hthread *thr, duk_hbuffer *h) {
DUK_INTERNAL DUK_NOINLINE void duk_hbuffer_refzero(duk_hthread *thr, duk_hbuffer *h) {
duk__hbuffer_refzero_helper(thr, h);
}
DUK_INTERNAL void duk_hobject_refzero(duk_hthread *thr, duk_hobject *h) {
DUK_INTERNAL DUK_NOINLINE void duk_hobject_refzero(duk_hthread *thr, duk_hobject *h) {
duk__hobject_refzero_helper(thr, h, 0 /*skip_free_pending*/);
}
DUK_INTERNAL void duk_hobject_refzero_norz(duk_hthread *thr, duk_hobject *h) {
DUK_INTERNAL DUK_NOINLINE void duk_hobject_refzero_norz(duk_hthread *thr, duk_hobject *h) {
duk__hobject_refzero_helper(thr, h, 1 /*skip_free_pending*/);
}

21
src-input/duk_heap_stringtable.c

@@ -4,15 +4,6 @@
#include "duk_internal.h"
/* Avoid finalizers during string table resize because a finalizer may e.g.
* resize a dynamic buffer whose data area is used as the source for a string
* intern operation. String table traffic (interning strings) is fine.
*/
#define DUK__PREVENT_MS_SIDE_EFFECTS(heap) do { \
(heap)->mark_and_sweep_base_flags |= \
DUK_MS_FLAG_NO_FINALIZERS; \
} while (0)
/* Resize checks not needed if minsize == maxsize, typical for low memory
* targets.
*/
@@ -556,7 +547,6 @@ DUK_LOCAL duk_hstring *duk__strtable_do_intern(duk_heap *heap, const duk_uint8_t
#else
duk_hstring **slot;
#endif
duk_small_uint_t prev_mark_and_sweep_base_flags;
DUK_DDD(DUK_DDDPRINT("do_intern: heap=%p, str=%p, blen=%lu, strhash=%lx, st_size=%lu, st_count=%lu, load=%lf",
(void *) heap, (const void *) str, (unsigned long) blen, (unsigned long) strhash,
@@ -570,10 +560,13 @@ DUK_LOCAL duk_hstring *duk__strtable_do_intern(duk_heap *heap, const duk_uint8_t
* the caller provided str/blen from a dynamic buffer, a finalizer
* might resize or modify that dynamic buffer, invalidating the call
* arguments.
*
* While finalizers must be prevented, mark-and-sweep itself is fine.
* Recursive string table resize is prevented explicitly here.
*/
prev_mark_and_sweep_base_flags = heap->mark_and_sweep_base_flags;
DUK__PREVENT_MS_SIDE_EFFECTS(heap);
heap->pf_prevent_count++;
DUK_ASSERT(heap->pf_prevent_count != 0); /* Wrap. */
#if defined(DUK_USE_STRTAB_TORTURE) && defined(DUK__STRTAB_RESIZE_CHECK)
duk__strtable_resize_torture(heap);
@@ -617,8 +610,8 @@ DUK_LOCAL duk_hstring *duk__strtable_do_intern(duk_heap *heap, const duk_uint8_t
/* Allow side effects again: GC must be avoided until duk_hstring
* result (if successful) has been INCREF'd.
*/
heap->mark_and_sweep_base_flags = prev_mark_and_sweep_base_flags;
DUK_ASSERT(heap->pf_prevent_count > 0);
heap->pf_prevent_count--;
/* Alloc error handling. */
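The counter-based guard generalizes the flag save/restore it replaces; a sketch of the pattern (the field is the one added in this patch, the function name hypothetical):

static void example_guarded_operation(duk_heap *heap) {
    /* A counter nests naturally across recursive call sites, unlike a
     * saved-and-restored flag set.
     */
    heap->pf_prevent_count++;
    DUK_ASSERT(heap->pf_prevent_count != 0);  /* detect wrap */

    /* ... operation whose inputs must stay stable, e.g. interning from a
     * dynamic buffer that a finalizer could resize ...
     */

    DUK_ASSERT(heap->pf_prevent_count > 0);
    heap->pf_prevent_count--;
}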

679
src-input/duk_heaphdr.h

@@ -24,6 +24,10 @@
*
* Heap header size on 32-bit platforms: 8 bytes without reference counting,
* 16 bytes with reference counting.
*
* Note that 'raw' macros such as DUK_HEAPHDR_GET_REFCOUNT() are not
* defined without DUK_USE_REFERENCE_COUNTING, so caller must #if defined()
* around them.
*/
/* XXX: macro for shared header fields (avoids some padding issues) */
@@ -262,12 +266,15 @@ struct duk_heaphdr_string {
* Type tests
*/
#define DUK_HEAPHDR_IS_OBJECT(h) \
(DUK_HEAPHDR_GET_TYPE((h)) == DUK_HTYPE_OBJECT)
#define DUK_HEAPHDR_IS_STRING(h) \
(DUK_HEAPHDR_GET_TYPE((h)) == DUK_HTYPE_STRING)
#define DUK_HEAPHDR_IS_BUFFER(h) \
(DUK_HEAPHDR_GET_TYPE((h)) == DUK_HTYPE_BUFFER)
/* Take advantage of the fact that for DUK_HTYPE_xxx numbers the lowest bit
* is only set for DUK_HTYPE_OBJECT (= 1).
*/
#if 0
#define DUK_HEAPHDR_IS_OBJECT(h) (DUK_HEAPHDR_GET_TYPE((h)) == DUK_HTYPE_OBJECT)
#endif
#define DUK_HEAPHDR_IS_OBJECT(h) ((h)->h_flags & 0x01UL)
#define DUK_HEAPHDR_IS_STRING(h) (DUK_HEAPHDR_GET_TYPE((h)) == DUK_HTYPE_STRING)
#define DUK_HEAPHDR_IS_BUFFER(h) (DUK_HEAPHDR_GET_TYPE((h)) == DUK_HTYPE_BUFFER)
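A self-contained illustration of the bit trick (assuming, per the comment above, DUK_HTYPE_OBJECT == 1 with the string and buffer type numbers even):

#include <assert.h>

#define HTYPE_STRING 0UL  /* even: bit 0 clear */
#define HTYPE_OBJECT 1UL  /* odd:  bit 0 set */
#define HTYPE_BUFFER 2UL  /* even: bit 0 clear */
#define HTYPE_MASK   0x03UL

int main(void) {
    unsigned long flags;
    for (flags = HTYPE_STRING; flags <= HTYPE_BUFFER; flags++) {
        int slow = ((flags & HTYPE_MASK) == HTYPE_OBJECT);  /* mask + compare */
        int fast = (flags & 0x01UL) != 0;                   /* single AND */
        assert(slow == fast);
    }
    return 0;
}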
/*
* Assert helpers
@@ -290,663 +297,9 @@ struct duk_heaphdr_string {
#define DUK_ASSERT_HEAPHDR_LINKS(heap,h) do {} while (0)
#endif
/*
* Reference counting helper macros. The macros take a thread argument
* and must thus always be executed in a specific thread context. The
* thread argument is needed for features like finalization. Currently
* it is not required for INCREF, but it is included just in case.
*
* Note that 'raw' macros such as DUK_HEAPHDR_GET_REFCOUNT() are not
* defined without DUK_USE_REFERENCE_COUNTING, so caller must #if defined()
* around them.
*/
#if defined(DUK_USE_REFERENCE_COUNTING)
#if defined(DUK_USE_ROM_OBJECTS)
/* With ROM objects "needs refcount update" is true when the value is
* heap allocated and is not a ROM object.
*/
/* XXX: double evaluation for 'tv' argument. */
#define DUK_TVAL_NEEDS_REFCOUNT_UPDATE(tv) \
(DUK_TVAL_IS_HEAP_ALLOCATED((tv)) && !DUK_HEAPHDR_HAS_READONLY(DUK_TVAL_GET_HEAPHDR((tv))))
#define DUK_HEAPHDR_NEEDS_REFCOUNT_UPDATE(h) (!DUK_HEAPHDR_HAS_READONLY((h)))
#else /* DUK_USE_ROM_OBJECTS */
/* Without ROM objects "needs refcount update" == is heap allocated. */
#define DUK_TVAL_NEEDS_REFCOUNT_UPDATE(tv) DUK_TVAL_IS_HEAP_ALLOCATED((tv))
#define DUK_HEAPHDR_NEEDS_REFCOUNT_UPDATE(h) 1
#endif /* DUK_USE_ROM_OBJECTS */
/* Fast variants, inline refcount operations except for refzero handling.
* Can be used explicitly when speed is always more important than size.
* For a good compiler and a single file build, these are basically the
* same as a forced inline.
*/
#define DUK_TVAL_INCREF_FAST(thr,tv) do { \
duk_tval *duk__tv = (tv); \
DUK_ASSERT(duk__tv != NULL); \
if (DUK_TVAL_NEEDS_REFCOUNT_UPDATE(duk__tv)) { \
duk_heaphdr *duk__h = DUK_TVAL_GET_HEAPHDR(duk__tv); \
DUK_ASSERT(duk__h != NULL); \
DUK_ASSERT(DUK_HEAPHDR_HTYPE_VALID(duk__h)); \
DUK_HEAPHDR_PREINC_REFCOUNT(duk__h); \
DUK_ASSERT(DUK_HEAPHDR_GET_REFCOUNT(duk__h) != 0); /* No wrapping. */ \
} \
} while (0)
#define DUK_TVAL_DECREF_FAST(thr,tv) do { \
duk_tval *duk__tv = (tv); \
DUK_ASSERT(duk__tv != NULL); \
if (DUK_TVAL_NEEDS_REFCOUNT_UPDATE(duk__tv)) { \
duk_heaphdr *duk__h = DUK_TVAL_GET_HEAPHDR(duk__tv); \
DUK_ASSERT(duk__h != NULL); \
DUK_ASSERT(DUK_HEAPHDR_HTYPE_VALID(duk__h)); \
DUK_ASSERT(DUK_HEAPHDR_GET_REFCOUNT(duk__h) > 0); \
if (DUK_HEAPHDR_PREDEC_REFCOUNT(duk__h) == 0) { \
duk_heaphdr_refzero((thr), duk__h); \
} \
} \
} while (0)
#define DUK_TVAL_DECREF_NORZ_FAST(thr,tv) do { \
duk_tval *duk__tv = (tv); \
DUK_ASSERT(duk__tv != NULL); \
if (DUK_TVAL_NEEDS_REFCOUNT_UPDATE(duk__tv)) { \
duk_heaphdr *duk__h = DUK_TVAL_GET_HEAPHDR(duk__tv); \
DUK_ASSERT(duk__h != NULL); \
DUK_ASSERT(DUK_HEAPHDR_HTYPE_VALID(duk__h)); \
DUK_ASSERT(DUK_HEAPHDR_GET_REFCOUNT(duk__h) > 0); \
if (DUK_HEAPHDR_PREDEC_REFCOUNT(duk__h) == 0) { \
duk_heaphdr_refzero_norz((thr), duk__h); \
} \
} \
} while (0)
#define DUK_HEAPHDR_INCREF_FAST(thr,h) do { \
duk_heaphdr *duk__h = (duk_heaphdr *) (h); \
DUK_ASSERT(duk__h != NULL); \
DUK_ASSERT(DUK_HEAPHDR_HTYPE_VALID(duk__h)); \
if (DUK_HEAPHDR_NEEDS_REFCOUNT_UPDATE(duk__h)) { \
DUK_HEAPHDR_PREINC_REFCOUNT(duk__h); \
DUK_ASSERT(DUK_HEAPHDR_GET_REFCOUNT(duk__h) != 0); /* No wrapping. */ \
} \
} while (0)
#define DUK_HEAPHDR_DECREF_FAST_RAW(thr,h,rzcall,rzcast) do { \
duk_heaphdr *duk__h = (duk_heaphdr *) (h); \
DUK_ASSERT(duk__h != NULL); \
DUK_ASSERT(DUK_HEAPHDR_HTYPE_VALID(duk__h)); \
DUK_ASSERT(DUK_HEAPHDR_GET_REFCOUNT(duk__h) > 0); \
if (DUK_HEAPHDR_NEEDS_REFCOUNT_UPDATE(duk__h)) { \
if (DUK_HEAPHDR_PREDEC_REFCOUNT(duk__h) == 0) { \
(rzcall)((thr), (rzcast) duk__h); \
} \
} \
} while (0)
#define DUK_HEAPHDR_DECREF_FAST(thr,h) \
DUK_HEAPHDR_DECREF_FAST_RAW((thr),(h),duk_heaphdr_refzero,duk_heaphdr *)
#define DUK_HEAPHDR_DECREF_NORZ_FAST(thr,h) \
DUK_HEAPHDR_DECREF_FAST_RAW((thr),(h),duk_heaphdr_refzero_norz,duk_heaphdr *)
/* Slow variants, call to a helper to reduce code size.
* Can be used explicitly when size is always more important than speed.
*/
#define DUK_TVAL_INCREF_SLOW(thr,tv) do { duk_tval_incref((tv)); } while (0)
#define DUK_TVAL_DECREF_SLOW(thr,tv) do { duk_tval_decref((thr), (tv)); } while (0)
#define DUK_TVAL_DECREF_NORZ_SLOW(thr,tv) do { duk_tval_decref_norz((thr), (tv)); } while (0)
#define DUK_HEAPHDR_INCREF_SLOW(thr,h) do { duk_heaphdr_incref((duk_heaphdr *) (h)); } while (0)
#define DUK_HEAPHDR_DECREF_SLOW(thr,h) do { duk_heaphdr_decref((thr), (duk_heaphdr *) (h)); } while (0)
#define DUK_HEAPHDR_DECREF_NORZ_SLOW(thr,h) do { duk_heaphdr_decref_norz((thr), (duk_heaphdr *) (h)); } while (0)
#define DUK_HSTRING_INCREF_SLOW(thr,h) do { duk_heaphdr_incref((duk_heaphdr *) (h)); } while (0)
#define DUK_HSTRING_DECREF_SLOW(thr,h) do { duk_heaphdr_decref((thr), (duk_heaphdr *) (h)); } while (0)
#define DUK_HSTRING_DECREF_NORZ_SLOW(thr,h) do { duk_heaphdr_decref_norz((thr), (duk_heaphdr *) (h)); } while (0)
#define DUK_HBUFFER_INCREF_SLOW(thr,h) do { duk_heaphdr_incref((duk_heaphdr *) (h)); } while (0)
#define DUK_HBUFFER_DECREF_SLOW(thr,h) do { duk_heaphdr_decref((thr), (duk_heaphdr *) (h)); } while (0)
#define DUK_HBUFFER_DECREF_NORZ_SLOW(thr,h) do { duk_heaphdr_decref_norz((thr), (duk_heaphdr *) (h)); } while (0)
#define DUK_HOBJECT_INCREF_SLOW(thr,h) do { duk_heaphdr_incref((duk_heaphdr *) (h)); } while (0)
#define DUK_HOBJECT_DECREF_SLOW(thr,h) do { duk_heaphdr_decref((thr), (duk_heaphdr *) (h)); } while (0)
#define DUK_HOBJECT_DECREF_NORZ_SLOW(thr,h) do { duk_heaphdr_decref_norz((thr), (duk_heaphdr *) (h)); } while (0)
/* Default variants. Selection depends on speed/size preference.
* Concretely: with gcc 4.8.1 -Os x64 the difference in final binary
* is about +1kB for _FAST variants.
*/
#if defined(DUK_USE_FAST_REFCOUNT_DEFAULT)
/* XXX: It would be nice to specialize for specific duk_hobject subtypes
* but current refzero queue handling prevents that.
*/
#define DUK_TVAL_INCREF(thr,tv) DUK_TVAL_INCREF_FAST((thr),(tv))
#define DUK_TVAL_DECREF(thr,tv) DUK_TVAL_DECREF_FAST((thr),(tv))
#define DUK_TVAL_DECREF_NORZ(thr,tv) DUK_TVAL_DECREF_NORZ_FAST((thr),(tv))
#define DUK_HEAPHDR_INCREF(thr,h) DUK_HEAPHDR_INCREF_FAST((thr),(h))
#define DUK_HEAPHDR_DECREF(thr,h) DUK_HEAPHDR_DECREF_FAST_RAW((thr),(h),duk_heaphdr_refzero,duk_heaphdr *)
#define DUK_HEAPHDR_DECREF_NORZ(thr,h) DUK_HEAPHDR_DECREF_FAST_RAW((thr),(h),duk_heaphdr_refzero_norz,duk_heaphdr *)
#define DUK_HSTRING_INCREF(thr,h) DUK_HEAPHDR_INCREF((thr),(duk_heaphdr *) (h))
#define DUK_HSTRING_DECREF(thr,h) DUK_HEAPHDR_DECREF_FAST_RAW((thr),(h),duk_hstring_refzero,duk_hstring *)
#define DUK_HSTRING_DECREF_NORZ(thr,h) DUK_HEAPHDR_DECREF_FAST_RAW((thr),(h),duk_hstring_refzero,duk_hstring *) /* no 'norz' variant */
#define DUK_HOBJECT_INCREF(thr,h) DUK_HEAPHDR_INCREF((thr),(duk_heaphdr *) (h))
#define DUK_HOBJECT_DECREF(thr,h) DUK_HEAPHDR_DECREF_FAST_RAW((thr),(h),duk_hobject_refzero,duk_hobject *)
#define DUK_HOBJECT_DECREF_NORZ(thr,h) DUK_HEAPHDR_DECREF_FAST_RAW((thr),(h),duk_hobject_refzero_norz,duk_hobject *)
#define DUK_HBUFFER_INCREF(thr,h) DUK_HEAPHDR_INCREF((thr),(duk_heaphdr *) (h))
#define DUK_HBUFFER_DECREF(thr,h) DUK_HEAPHDR_DECREF_FAST_RAW((thr),(h),duk_hbuffer_refzero,duk_hbuffer *)
#define DUK_HBUFFER_DECREF_NORZ(thr,h) DUK_HEAPHDR_DECREF_FAST_RAW((thr),(h),duk_hbuffer_refzero,duk_hbuffer *) /* no 'norz' variant */
#define DUK_HCOMPFUNC_INCREF(thr,h) DUK_HEAPHDR_INCREF((thr),(duk_heaphdr *) &(h)->obj)
#define DUK_HCOMPFUNC_DECREF(thr,h) DUK_HEAPHDR_DECREF_FAST_RAW((thr),(h),duk_hobject_refzero,duk_hobject *)
#define DUK_HCOMPFUNC_DECREF_NORZ(thr,h) DUK_HEAPHDR_DECREF_FAST_RAW((thr),(h),duk_hobject_refzero_norz,duk_hobject *)
#define DUK_HNATFUNC_INCREF(thr,h) DUK_HEAPHDR_INCREF((thr),(duk_heaphdr *) &(h)->obj)
#define DUK_HNATFUNC_DECREF(thr,h) DUK_HEAPHDR_DECREF_FAST_RAW((thr),(h),duk_hobject_refzero,duk_hobject *)
#define DUK_HNATFUNC_DECREF_NORZ(thr,h) DUK_HEAPHDR_DECREF_FAST_RAW((thr),(h),duk_hobject_refzero_norz,duk_hobject *)
#define DUK_HBUFOBJ_INCREF(thr,h) DUK_HEAPHDR_INCREF((thr),(duk_heaphdr *) &(h)->obj)
#define DUK_HBUFOBJ_DECREF(thr,h) DUK_HEAPHDR_DECREF_FAST_RAW((thr),(h),duk_hobject_refzero,duk_hobject *)
#define DUK_HBUFOBJ_DECREF_NORZ(thr,h) DUK_HEAPHDR_DECREF_FAST_RAW((thr),(h),duk_hobject_refzero_norz,duk_hobject *)
#define DUK_HTHREAD_INCREF(thr,h) DUK_HEAPHDR_INCREF((thr),(duk_heaphdr *) &(h)->obj)
#define DUK_HTHREAD_DECREF(thr,h) DUK_HEAPHDR_DECREF_FAST_RAW((thr),(h),duk_hobject_refzero,duk_hobject *)
#define DUK_HTHREAD_DECREF_NORZ(thr,h) DUK_HEAPHDR_DECREF_FAST_RAW((thr),(h),duk_hobject_refzero_norz,duk_hobject *)
#else
#define DUK_TVAL_INCREF(thr,tv) DUK_TVAL_INCREF_SLOW((thr),(tv))
#define DUK_TVAL_DECREF(thr,tv) DUK_TVAL_DECREF_SLOW((thr),(tv))
#define DUK_TVAL_DECREF_NORZ(thr,tv) DUK_TVAL_DECREF_NORZ_SLOW((thr),(tv))
#define DUK_HEAPHDR_INCREF(thr,h) DUK_HEAPHDR_INCREF_SLOW((thr),(h))
#define DUK_HEAPHDR_DECREF(thr,h) DUK_HEAPHDR_DECREF_SLOW((thr),(h))
#define DUK_HEAPHDR_DECREF_NORZ(thr,h) DUK_HEAPHDR_DECREF_NORZ_SLOW((thr),(h))
#define DUK_HSTRING_INCREF(thr,h) DUK_HEAPHDR_INCREF((thr),(duk_heaphdr *) (h))
#define DUK_HSTRING_DECREF(thr,h) DUK_HSTRING_DECREF_SLOW((thr),(h))
#define DUK_HSTRING_DECREF_NORZ(thr,h) DUK_HSTRING_DECREF_NORZ_SLOW((thr),(h))
#define DUK_HOBJECT_INCREF(thr,h) DUK_HEAPHDR_INCREF((thr),(duk_heaphdr *) (h))
#define DUK_HOBJECT_DECREF(thr,h) DUK_HOBJECT_DECREF_SLOW((thr),(h))
#define DUK_HOBJECT_DECREF_NORZ(thr,h) DUK_HOBJECT_DECREF_NORZ_SLOW((thr),(h))
#define DUK_HBUFFER_INCREF(thr,h) DUK_HEAPHDR_INCREF((thr),(duk_heaphdr *) (h))
#define DUK_HBUFFER_DECREF(thr,h) DUK_HBUFFER_DECREF_SLOW((thr),(h))
#define DUK_HBUFFER_DECREF_NORZ(thr,h) DUK_HBUFFER_DECREF_NORZ_SLOW((thr),(h))
#define DUK_HCOMPFUNC_INCREF(thr,h) DUK_HEAPHDR_INCREF((thr),(duk_heaphdr *) &(h)->obj)
#define DUK_HCOMPFUNC_DECREF(thr,h) DUK_HOBJECT_DECREF_SLOW((thr),(duk_hobject *) &(h)->obj)
#define DUK_HCOMPFUNC_DECREF_NORZ(thr,h) DUK_HOBJECT_DECREF_NORZ_SLOW((thr),(duk_hobject *) &(h)->obj)
#define DUK_HNATFUNC_INCREF(thr,h) DUK_HEAPHDR_INCREF((thr),(duk_heaphdr *) &(h)->obj)
#define DUK_HNATFUNC_DECREF(thr,h) DUK_HOBJECT_DECREF_SLOW((thr),(duk_hobject *) &(h)->obj)
#define DUK_HNATFUNC_DECREF_NORZ(thr,h) DUK_HOBJECT_DECREF_NORZ_SLOW((thr),(duk_hobject *) &(h)->obj)
#define DUK_HBUFOBJ_INCREF(thr,h) DUK_HEAPHDR_INCREF((thr),(duk_heaphdr *) &(h)->obj)
#define DUK_HBUFOBJ_DECREF(thr,h) DUK_HOBJECT_DECREF_SLOW((thr),(duk_hobject *) &(h)->obj)
#define DUK_HBUFOBJ_DECREF_NORZ(thr,h) DUK_HOBJECT_DECREF_NORZ_SLOW((thr),(duk_hobject *) &(h)->obj)
#define DUK_HTHREAD_INCREF(thr,h) DUK_HEAPHDR_INCREF((thr),(duk_heaphdr *) &(h)->obj)
#define DUK_HTHREAD_DECREF(thr,h) DUK_HOBJECT_DECREF_SLOW((thr),(duk_hobject *) &(h)->obj)
#define DUK_HTHREAD_DECREF_NORZ(thr,h) DUK_HOBJECT_DECREF_NORZ_SLOW((thr),(duk_hobject *) &(h)->obj)
#endif
/* Convenience for some situations; the above macros don't allow NULLs
* for performance reasons. Macros cover only actually needed cases.
*/
#define DUK_HEAPHDR_INCREF_ALLOWNULL(thr,h) do { \
if ((h) != NULL) { \
DUK_HEAPHDR_INCREF((thr), (duk_heaphdr *) (h)); \
} \
} while (0)
#define DUK_HEAPHDR_DECREF_ALLOWNULL(thr,h) do { \
if ((h) != NULL) { \
DUK_HEAPHDR_DECREF((thr), (duk_heaphdr *) (h)); \
} \
} while (0)
#define DUK_HEAPHDR_DECREF_NORZ_ALLOWNULL(thr,h) do { \
if ((h) != NULL) { \
DUK_HEAPHDR_DECREF_NORZ((thr), (duk_heaphdr *) (h)); \
} \
} while (0)
#define DUK_HOBJECT_INCREF_ALLOWNULL(thr,h) do { \
if ((h) != NULL) { \
DUK_HOBJECT_INCREF((thr), (h)); \
} \
} while (0)
#define DUK_HOBJECT_DECREF_ALLOWNULL(thr,h) do { \
if ((h) != NULL) { \
DUK_HOBJECT_DECREF((thr), (h)); \
} \
} while (0)
#define DUK_HOBJECT_DECREF_NORZ_ALLOWNULL(thr,h) do { \
if ((h) != NULL) { \
DUK_HOBJECT_DECREF_NORZ((thr), (h)); \
} \
} while (0)
#define DUK_HBUFFER_INCREF_ALLOWNULL(thr,h) do { \
if ((h) != NULL) { \
DUK_HBUFFER_INCREF((thr), (h)); \
} \
} while (0)
#define DUK_HBUFFER_DECREF_ALLOWNULL(thr,h) do { \
if ((h) != NULL) { \
DUK_HBUFFER_DECREF((thr), (h)); \
} \
} while (0)
#define DUK_HBUFFER_DECREF_NORZ_ALLOWNULL(thr,h) do { \
if ((h) != NULL) { \
DUK_HBUFFER_DECREF_NORZ((thr), (h)); \
} \
} while (0)
#define DUK_HTHREAD_INCREF_ALLOWNULL(thr,h) do { \
if ((h) != NULL) { \
DUK_HTHREAD_INCREF((thr), (h)); \
} \
} while (0)
#define DUK_HTHREAD_DECREF_ALLOWNULL(thr,h) do { \
if ((h) != NULL) { \
DUK_HTHREAD_DECREF((thr), (h)); \
} \
} while (0)
#define DUK_HTHREAD_DECREF_NORZ_ALLOWNULL(thr,h) do { \
if ((h) != NULL) { \
DUK_HTHREAD_DECREF_NORZ((thr), (h)); \
} \
} while (0)
/* Free pending refzero entries; quick check to avoid call because often
* the queue is empty.
*/
#define DUK_REFZERO_CHECK_FAST(thr) do { \
if ((thr)->heap->refzero_list != NULL) { \
duk_refzero_free_pending((thr)); \
} \
#define DUK_ASSERT_HEAPHDR_VALID(h) do { \
DUK_ASSERT((h) != NULL); \
DUK_ASSERT(DUK_HEAPHDR_HTYPE_VALID((h))); \
} while (0)
#define DUK_REFZERO_CHECK_SLOW(thr) do { \
duk_refzero_free_pending((thr)); \
} while (0)
/*
* Macros to set a duk_tval and update refcount of the target (decref the
* old value and incref the new value if necessary). This is both performance
* and footprint critical; any changes made should be measured for size/speed.
*/
#define DUK_TVAL_SET_UNDEFINED_UPDREF_ALT0(thr,tvptr_dst) do { \
duk_tval *tv__dst; duk_tval tv__tmp; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_TVAL(&tv__tmp, tv__dst); \
DUK_TVAL_SET_UNDEFINED(tv__dst); \
DUK_TVAL_DECREF((thr), &tv__tmp); /* side effects */ \
} while (0)
#define DUK_TVAL_SET_UNDEFINED_UPDREF_NORZ_ALT0(thr,tvptr_dst) do { \
duk_tval *tv__dst; duk_tval tv__tmp; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_TVAL(&tv__tmp, tv__dst); \
DUK_TVAL_SET_UNDEFINED(tv__dst); \
DUK_TVAL_DECREF_NORZ((thr), &tv__tmp); \
} while (0)
#define DUK_TVAL_SET_UNUSED_UPDREF_ALT0(thr,tvptr_dst) do { \
duk_tval *tv__dst; duk_tval tv__tmp; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_TVAL(&tv__tmp, tv__dst); \
DUK_TVAL_SET_UNUSED(tv__dst); \
DUK_TVAL_DECREF((thr), &tv__tmp); /* side effects */ \
} while (0)
#define DUK_TVAL_SET_NULL_UPDREF_ALT0(thr,tvptr_dst) do { \
duk_tval *tv__dst; duk_tval tv__tmp; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_TVAL(&tv__tmp, tv__dst); \
DUK_TVAL_SET_NULL(tv__dst); \
DUK_TVAL_DECREF((thr), &tv__tmp); /* side effects */ \
} while (0)
#define DUK_TVAL_SET_BOOLEAN_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
duk_tval *tv__dst; duk_tval tv__tmp; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_TVAL(&tv__tmp, tv__dst); \
DUK_TVAL_SET_BOOLEAN(tv__dst, (newval)); \
DUK_TVAL_DECREF((thr), &tv__tmp); /* side effects */ \
} while (0)
#define DUK_TVAL_SET_NUMBER_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
duk_tval *tv__dst; duk_tval tv__tmp; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_TVAL(&tv__tmp, tv__dst); \
DUK_TVAL_SET_NUMBER(tv__dst, (newval)); \
DUK_TVAL_DECREF((thr), &tv__tmp); /* side effects */ \
} while (0)
#define DUK_TVAL_SET_NUMBER_CHKFAST_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
duk_tval *tv__dst; duk_tval tv__tmp; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_TVAL(&tv__tmp, tv__dst); \
DUK_TVAL_SET_NUMBER_CHKFAST_FAST(tv__dst, (newval)); \
DUK_TVAL_DECREF((thr), &tv__tmp); /* side effects */ \
} while (0)
#define DUK_TVAL_SET_DOUBLE_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
duk_tval *tv__dst; duk_tval tv__tmp; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_TVAL(&tv__tmp, tv__dst); \
DUK_TVAL_SET_DOUBLE(tv__dst, (newval)); \
DUK_TVAL_DECREF((thr), &tv__tmp); /* side effects */ \
} while (0)
#define DUK_TVAL_SET_NAN_UPDREF_ALT0(thr,tvptr_dst) do { \
duk_tval *tv__dst; duk_tval tv__tmp; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_TVAL(&tv__tmp, tv__dst); \
DUK_TVAL_SET_NAN(tv__dst); \
DUK_TVAL_DECREF((thr), &tv__tmp); /* side effects */ \
} while (0)
#if defined(DUK_USE_FASTINT)
#define DUK_TVAL_SET_I48_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
duk_tval *tv__dst; duk_tval tv__tmp; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_TVAL(&tv__tmp, tv__dst); \
DUK_TVAL_SET_I48(tv__dst, (newval)); \
DUK_TVAL_DECREF((thr), &tv__tmp); /* side effects */ \
} while (0)
#define DUK_TVAL_SET_I32_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
duk_tval *tv__dst; duk_tval tv__tmp; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_TVAL(&tv__tmp, tv__dst); \
DUK_TVAL_SET_I32(tv__dst, (newval)); \
DUK_TVAL_DECREF((thr), &tv__tmp); /* side effects */ \
} while (0)
#define DUK_TVAL_SET_U32_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
duk_tval *tv__dst; duk_tval tv__tmp; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_TVAL(&tv__tmp, tv__dst); \
DUK_TVAL_SET_U32(tv__dst, (newval)); \
DUK_TVAL_DECREF((thr), &tv__tmp); /* side effects */ \
} while (0)
#else
#define DUK_TVAL_SET_DOUBLE_CAST_UPDREF(thr,tvptr_dst,newval) \
DUK_TVAL_SET_DOUBLE_UPDREF((thr), (tvptr_dst), (duk_double_t) (newval))
#endif /* DUK_USE_FASTINT */
#define DUK_TVAL_SET_LIGHTFUNC_UPDREF_ALT0(thr,tvptr_dst,lf_v,lf_fp,lf_flags) do { \
duk_tval *tv__dst; duk_tval tv__tmp; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_TVAL(&tv__tmp, tv__dst); \
DUK_TVAL_SET_LIGHTFUNC(tv__dst, (lf_v), (lf_fp), (lf_flags)); \
DUK_TVAL_DECREF((thr), &tv__tmp); /* side effects */ \
} while (0)
#define DUK_TVAL_SET_STRING_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
duk_tval *tv__dst; duk_tval tv__tmp; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_TVAL(&tv__tmp, tv__dst); \
DUK_TVAL_SET_STRING(tv__dst, (newval)); \
DUK_HSTRING_INCREF((thr), (newval)); \
DUK_TVAL_DECREF((thr), &tv__tmp); /* side effects */ \
} while (0)
#define DUK_TVAL_SET_OBJECT_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
duk_tval *tv__dst; duk_tval tv__tmp; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_TVAL(&tv__tmp, tv__dst); \
DUK_TVAL_SET_OBJECT(tv__dst, (newval)); \
DUK_HOBJECT_INCREF((thr), (newval)); \
DUK_TVAL_DECREF((thr), &tv__tmp); /* side effects */ \
} while (0)
#define DUK_TVAL_SET_BUFFER_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
duk_tval *tv__dst; duk_tval tv__tmp; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_TVAL(&tv__tmp, tv__dst); \
DUK_TVAL_SET_BUFFER(tv__dst, (newval)); \
DUK_HBUFFER_INCREF((thr), (newval)); \
DUK_TVAL_DECREF((thr), &tv__tmp); /* side effects */ \
} while (0)
#define DUK_TVAL_SET_POINTER_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
duk_tval *tv__dst; duk_tval tv__tmp; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_TVAL(&tv__tmp, tv__dst); \
DUK_TVAL_SET_POINTER(tv__dst, (newval)); \
DUK_TVAL_DECREF((thr), &tv__tmp); /* side effects */ \
} while (0)
/* DUK_TVAL_SET_TVAL_UPDREF() is used a lot in executor, property lookups,
* etc, so it's very important for performance. Measure when changing.
*
* NOTE: the source and destination duk_tval pointers may be the same, and
* the macros MUST deal with that correctly.
*/
/* Original idiom used, minimal code size. */
#define DUK_TVAL_SET_TVAL_UPDREF_ALT0(thr,tvptr_dst,tvptr_src) do { \
duk_tval *tv__dst, *tv__src; duk_tval tv__tmp; \
tv__dst = (tvptr_dst); tv__src = (tvptr_src); \
DUK_TVAL_SET_TVAL(&tv__tmp, tv__dst); \
DUK_TVAL_SET_TVAL(tv__dst, tv__src); \
DUK_TVAL_INCREF((thr), tv__src); \
DUK_TVAL_DECREF((thr), &tv__tmp); /* side effects */ \
} while (0)
/* Faster alternative: avoid making a temporary copy of tvptr_dst and use
* fast incref/decref macros.
*/
#define DUK_TVAL_SET_TVAL_UPDREF_ALT1(thr,tvptr_dst,tvptr_src) do { \
duk_tval *tv__dst, *tv__src; duk_heaphdr *h__obj; \
tv__dst = (tvptr_dst); tv__src = (tvptr_src); \
DUK_TVAL_INCREF_FAST((thr), tv__src); \
if (DUK_TVAL_NEEDS_REFCOUNT_UPDATE(tv__dst)) { \
h__obj = DUK_TVAL_GET_HEAPHDR(tv__dst); \
DUK_ASSERT(h__obj != NULL); \
DUK_TVAL_SET_TVAL(tv__dst, tv__src); \
DUK_HEAPHDR_DECREF_FAST((thr), h__obj); /* side effects */ \
} else { \
DUK_TVAL_SET_TVAL(tv__dst, tv__src); \
} \
} while (0)
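Why both variants tolerate aliasing (the call below is a hypothetical example): if dst == src and the old value were DECREF'd before the new value is INCREF'd, a refcount of 1 would reach zero and the value could be freed before it is re-referenced. ALT0 copies the old value aside and DECREFs it last; ALT1 INCREFs the source before touching the destination.

static void example_self_assign(duk_hthread *thr, duk_tval *slot) {
    /* dst and src alias each other; safe with both ALT0 and ALT1 because
     * the old value's DECREF (and possible refzero) happens only after
     * the new value has been INCREF'd.
     */
    DUK_TVAL_SET_TVAL_UPDREF(thr, slot, slot);
}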
/* XXX: no optimized variants yet */
#define DUK_TVAL_SET_UNDEFINED_UPDREF DUK_TVAL_SET_UNDEFINED_UPDREF_ALT0
#define DUK_TVAL_SET_UNDEFINED_UPDREF_NORZ DUK_TVAL_SET_UNDEFINED_UPDREF_NORZ_ALT0
#define DUK_TVAL_SET_UNUSED_UPDREF DUK_TVAL_SET_UNUSED_UPDREF_ALT0
#define DUK_TVAL_SET_NULL_UPDREF DUK_TVAL_SET_NULL_UPDREF_ALT0
#define DUK_TVAL_SET_BOOLEAN_UPDREF DUK_TVAL_SET_BOOLEAN_UPDREF_ALT0
#define DUK_TVAL_SET_NUMBER_UPDREF DUK_TVAL_SET_NUMBER_UPDREF_ALT0
#define DUK_TVAL_SET_NUMBER_CHKFAST_UPDREF DUK_TVAL_SET_NUMBER_CHKFAST_UPDREF_ALT0
#define DUK_TVAL_SET_DOUBLE_UPDREF DUK_TVAL_SET_DOUBLE_UPDREF_ALT0
#define DUK_TVAL_SET_NAN_UPDREF DUK_TVAL_SET_NAN_UPDREF_ALT0
#if defined(DUK_USE_FASTINT)
#define DUK_TVAL_SET_I48_UPDREF DUK_TVAL_SET_I48_UPDREF_ALT0
#define DUK_TVAL_SET_I32_UPDREF DUK_TVAL_SET_I32_UPDREF_ALT0
#define DUK_TVAL_SET_U32_UPDREF DUK_TVAL_SET_U32_UPDREF_ALT0
#else
#define DUK_TVAL_SET_I48_UPDREF DUK_TVAL_SET_DOUBLE_CAST_UPDREF /* XXX: fast int-to-double */
#define DUK_TVAL_SET_I32_UPDREF DUK_TVAL_SET_DOUBLE_CAST_UPDREF
#define DUK_TVAL_SET_U32_UPDREF DUK_TVAL_SET_DOUBLE_CAST_UPDREF
#endif /* DUK_USE_FASTINT */
#define DUK_TVAL_SET_FASTINT_UPDREF DUK_TVAL_SET_I48_UPDREF /* convenience */
#define DUK_TVAL_SET_LIGHTFUNC_UPDREF DUK_TVAL_SET_LIGHTFUNC_UPDREF_ALT0
#define DUK_TVAL_SET_STRING_UPDREF DUK_TVAL_SET_STRING_UPDREF_ALT0
#define DUK_TVAL_SET_OBJECT_UPDREF DUK_TVAL_SET_OBJECT_UPDREF_ALT0
#define DUK_TVAL_SET_BUFFER_UPDREF DUK_TVAL_SET_BUFFER_UPDREF_ALT0
#define DUK_TVAL_SET_POINTER_UPDREF DUK_TVAL_SET_POINTER_UPDREF_ALT0
#if defined(DUK_USE_FAST_REFCOUNT_DEFAULT)
/* Optimized for speed. */
#define DUK_TVAL_SET_TVAL_UPDREF DUK_TVAL_SET_TVAL_UPDREF_ALT1
#define DUK_TVAL_SET_TVAL_UPDREF_FAST DUK_TVAL_SET_TVAL_UPDREF_ALT1
#define DUK_TVAL_SET_TVAL_UPDREF_SLOW DUK_TVAL_SET_TVAL_UPDREF_ALT0
#else
/* Optimized for size. */
#define DUK_TVAL_SET_TVAL_UPDREF DUK_TVAL_SET_TVAL_UPDREF_ALT0
#define DUK_TVAL_SET_TVAL_UPDREF_FAST DUK_TVAL_SET_TVAL_UPDREF_ALT0
#define DUK_TVAL_SET_TVAL_UPDREF_SLOW DUK_TVAL_SET_TVAL_UPDREF_ALT0
#endif
#else /* DUK_USE_REFERENCE_COUNTING */
#define DUK_TVAL_NEEDS_REFCOUNT_UPDATE(tv) 0
#define DUK_HEAPHDR_NEEDS_REFCOUNT_UPDATE(h) 0
#define DUK_TVAL_INCREF_FAST(thr,v) do {} while (0) /* nop */
#define DUK_TVAL_DECREF_FAST(thr,v) do {} while (0) /* nop */
#define DUK_TVAL_DECREF_NORZ_FAST(thr,v) do {} while (0) /* nop */
#define DUK_TVAL_INCREF_SLOW(thr,v) do {} while (0) /* nop */
#define DUK_TVAL_DECREF_SLOW(thr,v) do {} while (0) /* nop */
#define DUK_TVAL_DECREF_NORZ_SLOW(thr,v) do {} while (0) /* nop */
#define DUK_TVAL_INCREF(thr,v) do {} while (0) /* nop */
#define DUK_TVAL_DECREF(thr,v) do {} while (0) /* nop */
#define DUK_TVAL_DECREF_NORZ(thr,v) do {} while (0) /* nop */
#define DUK_HEAPHDR_INCREF_FAST(thr,h) do {} while (0) /* nop */
#define DUK_HEAPHDR_DECREF_FAST(thr,h) do {} while (0) /* nop */
#define DUK_HEAPHDR_DECREF_NORZ_FAST(thr,h) do {} while (0) /* nop */
#define DUK_HEAPHDR_INCREF_SLOW(thr,h) do {} while (0) /* nop */
#define DUK_HEAPHDR_DECREF_SLOW(thr,h) do {} while (0) /* nop */
#define DUK_HEAPHDR_DECREF_NORZ_SLOW(thr,h) do {} while (0) /* nop */
#define DUK_HEAPHDR_INCREF(thr,h) do {} while (0) /* nop */
#define DUK_HEAPHDR_DECREF(thr,h) do {} while (0) /* nop */
#define DUK_HEAPHDR_DECREF_NORZ(thr,h) do {} while (0) /* nop */
#define DUK_HSTRING_INCREF_FAST(thr,h) do {} while (0) /* nop */
#define DUK_HSTRING_DECREF_FAST(thr,h) do {} while (0) /* nop */
#define DUK_HSTRING_DECREF_NORZ_FAST(thr,h) do {} while (0) /* nop */
#define DUK_HSTRING_INCREF_SLOW(thr,h) do {} while (0) /* nop */
#define DUK_HSTRING_DECREF_SLOW(thr,h) do {} while (0) /* nop */
#define DUK_HSTRING_DECREF_NORZ_SLOW(thr,h) do {} while (0) /* nop */
#define DUK_HSTRING_INCREF(thr,h) do {} while (0) /* nop */
#define DUK_HSTRING_DECREF(thr,h) do {} while (0) /* nop */
#define DUK_HSTRING_DECREF_NORZ(thr,h) do {} while (0) /* nop */
#define DUK_HOBJECT_INCREF_FAST(thr,h) do {} while (0) /* nop */
#define DUK_HOBJECT_DECREF_FAST(thr,h) do {} while (0) /* nop */
#define DUK_HOBJECT_DECREF_NORZ_FAST(thr,h) do {} while (0) /* nop */
#define DUK_HOBJECT_INCREF_SLOW(thr,h) do {} while (0) /* nop */
#define DUK_HOBJECT_DECREF_SLOW(thr,h) do {} while (0) /* nop */
#define DUK_HOBJECT_DECREF_NORZ_SLOW(thr,h) do {} while (0) /* nop */
#define DUK_HOBJECT_INCREF(thr,h) do {} while (0) /* nop */
#define DUK_HOBJECT_DECREF(thr,h) do {} while (0) /* nop */
#define DUK_HOBJECT_DECREF_NORZ(thr,h) do {} while (0) /* nop */
#define DUK_HBUFFER_INCREF_FAST(thr,h) do {} while (0) /* nop */
#define DUK_HBUFFER_DECREF_FAST(thr,h) do {} while (0) /* nop */
#define DUK_HBUFFER_DECREF_NORZ_FAST(thr,h) do {} while (0) /* nop */
#define DUK_HBUFFER_INCREF_SLOW(thr,h) do {} while (0) /* nop */
#define DUK_HBUFFER_DECREF_SLOW(thr,h) do {} while (0) /* nop */
#define DUK_HBUFFER_DECREF_NORZ_SLOW(thr,h) do {} while (0) /* nop */
#define DUK_HBUFFER_INCREF(thr,h) do {} while (0) /* nop */
#define DUK_HBUFFER_DECREF(thr,h) do {} while (0) /* nop */
#define DUK_HBUFFER_DECREF_NORZ(thr,h) do {} while (0) /* nop */
#define DUK_HCOMPFUNC_INCREF(thr,h) do {} while (0) /* nop */
#define DUK_HCOMPFUNC_DECREF(thr,h) do {} while (0) /* nop */
#define DUK_HCOMPFUNC_DECREF_NORZ(thr,h) do {} while (0) /* nop */
#define DUK_HNATFUNC_INCREF(thr,h) do {} while (0) /* nop */
#define DUK_HNATFUNC_DECREF(thr,h) do {} while (0) /* nop */
#define DUK_HNATFUNC_DECREF_NORZ(thr,h) do {} while (0) /* nop */
#define DUK_HBUFOBJ_INCREF(thr,h) do {} while (0) /* nop */
#define DUK_HBUFOBJ_DECREF(thr,h) do {} while (0) /* nop */
#define DUK_HBUFOBJ_DECREF_NORZ(thr,h) do {} while (0) /* nop */
#define DUK_HTHREAD_INCREF(thr,h) do {} while (0) /* nop */
#define DUK_HTHREAD_DECREF(thr,h) do {} while (0) /* nop */
#define DUK_HTHREAD_DECREF_NORZ(thr,h) do {} while (0) /* nop */
#define DUK_HOBJECT_INCREF_ALLOWNULL(thr,h) do {} while (0) /* nop */
#define DUK_HOBJECT_DECREF_ALLOWNULL(thr,h) do {} while (0) /* nop */
#define DUK_HOBJECT_DECREF_NORZ_ALLOWNULL(thr,h) do {} while (0) /* nop */
#define DUK_HBUFFER_INCREF_ALLOWNULL(thr,h) do {} while (0) /* nop */
#define DUK_HBUFFER_DECREF_ALLOWNULL(thr,h) do {} while (0) /* nop */
#define DUK_HBUFFER_DECREF_NORZ_ALLOWNULL(thr,h) do {} while (0) /* nop */
#define DUK_REFZERO_CHECK_FAST(thr) do {} while (0) /* nop */
#define DUK_REFZERO_CHECK_SLOW(thr) do {} while (0) /* nop */
#define DUK_TVAL_SET_UNDEFINED_UPDREF_ALT0(thr,tvptr_dst) do { \
duk_tval *tv__dst; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_UNDEFINED(tv__dst); \
DUK_UNREF((thr)); \
} while (0)
#define DUK_TVAL_SET_UNUSED_UPDREF_ALT0(thr,tvptr_dst) do { \
duk_tval *tv__dst; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_UNUSED(tv__dst); \
DUK_UNREF((thr)); \
} while (0)
#define DUK_TVAL_SET_NULL_UPDREF_ALT0(thr,tvptr_dst) do { \
duk_tval *tv__dst; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_NULL(tv__dst); \
DUK_UNREF((thr)); \
} while (0)
#define DUK_TVAL_SET_BOOLEAN_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
duk_tval *tv__dst; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_BOOLEAN(tv__dst, (newval)); \
DUK_UNREF((thr)); \
} while (0)
#define DUK_TVAL_SET_NUMBER_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
duk_tval *tv__dst; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_NUMBER(tv__dst, (newval)); \
DUK_UNREF((thr)); \
} while (0)
#define DUK_TVAL_SET_NUMBER_CHKFAST_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
duk_tval *tv__dst; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_NUMBER_CHKFAST_FAST(tv__dst, (newval)); \
DUK_UNREF((thr)); \
} while (0)
#define DUK_TVAL_SET_DOUBLE_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
duk_tval *tv__dst; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_DOUBLE(tv__dst, (newval)); \
DUK_UNREF((thr)); \
} while (0)
#define DUK_TVAL_SET_NAN_UPDREF_ALT0(thr,tvptr_dst) do { \
duk_tval *tv__dst; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_NAN(tv__dst); \
DUK_UNREF((thr)); \
} while (0)
#if defined(DUK_USE_FASTINT)
#define DUK_TVAL_SET_I48_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
duk_tval *tv__dst; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_I48(tv__dst, (newval)); \
DUK_UNREF((thr)); \
} while (0)
#define DUK_TVAL_SET_I32_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
duk_tval *tv__dst; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_I32(tv__dst, (newval)); \
DUK_UNREF((thr)); \
} while (0)
#define DUK_TVAL_SET_U32_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
duk_tval *tv__dst; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_U32(tv__dst, (newval)); \
DUK_UNREF((thr)); \
} while (0)
#else
#define DUK_TVAL_SET_DOUBLE_CAST_UPDREF(thr,tvptr_dst,newval) \
DUK_TVAL_SET_DOUBLE_UPDREF((thr), (tvptr_dst), (duk_double_t) (newval))
#endif /* DUK_USE_FASTINT */
#define DUK_TVAL_SET_LIGHTFUNC_UPDREF_ALT0(thr,tvptr_dst,lf_v,lf_fp,lf_flags) do { \
duk_tval *tv__dst; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_LIGHTFUNC(tv__dst, (lf_v), (lf_fp), (lf_flags)); \
DUK_UNREF((thr)); \
} while (0)
#define DUK_TVAL_SET_STRING_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
duk_tval *tv__dst; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_STRING(tv__dst, (newval)); \
DUK_UNREF((thr)); \
} while (0)
#define DUK_TVAL_SET_OBJECT_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
duk_tval *tv__dst; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_OBJECT(tv__dst, (newval)); \
DUK_UNREF((thr)); \
} while (0)
#define DUK_TVAL_SET_BUFFER_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
duk_tval *tv__dst; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_BUFFER(tv__dst, (newval)); \
DUK_UNREF((thr)); \
} while (0)
#define DUK_TVAL_SET_POINTER_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
duk_tval *tv__dst; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_POINTER(tv__dst, (newval)); \
DUK_UNREF((thr)); \
} while (0)
#define DUK_TVAL_SET_TVAL_UPDREF_ALT0(thr,tvptr_dst,tvptr_src) do { \
duk_tval *tv__dst, *tv__src; \
tv__dst = (tvptr_dst); tv__src = (tvptr_src); \
DUK_TVAL_SET_TVAL(tv__dst, tv__src); \
DUK_UNREF((thr)); \
} while (0)
#define DUK_TVAL_SET_UNDEFINED_UPDREF DUK_TVAL_SET_UNDEFINED_UPDREF_ALT0
#define DUK_TVAL_SET_UNDEFINED_UPDREF_NORZ DUK_TVAL_SET_UNDEFINED_UPDREF_ALT0
#define DUK_TVAL_SET_UNUSED_UPDREF DUK_TVAL_SET_UNUSED_UPDREF_ALT0
#define DUK_TVAL_SET_NULL_UPDREF DUK_TVAL_SET_NULL_UPDREF_ALT0
#define DUK_TVAL_SET_BOOLEAN_UPDREF DUK_TVAL_SET_BOOLEAN_UPDREF_ALT0
#define DUK_TVAL_SET_NUMBER_UPDREF DUK_TVAL_SET_NUMBER_UPDREF_ALT0
#define DUK_TVAL_SET_NUMBER_CHKFAST_UPDREF DUK_TVAL_SET_NUMBER_CHKFAST_UPDREF_ALT0
#define DUK_TVAL_SET_DOUBLE_UPDREF DUK_TVAL_SET_DOUBLE_UPDREF_ALT0
#define DUK_TVAL_SET_NAN_UPDREF DUK_TVAL_SET_NAN_UPDREF_ALT0
#if defined(DUK_USE_FASTINT)
#define DUK_TVAL_SET_I48_UPDREF DUK_TVAL_SET_I48_UPDREF_ALT0
#define DUK_TVAL_SET_I32_UPDREF DUK_TVAL_SET_I32_UPDREF_ALT0
#define DUK_TVAL_SET_U32_UPDREF DUK_TVAL_SET_U32_UPDREF_ALT0
#else
#define DUK_TVAL_SET_I48_UPDREF DUK_TVAL_SET_DOUBLE_CAST_UPDREF /* XXX: fast-int-to-double */
#define DUK_TVAL_SET_I32_UPDREF DUK_TVAL_SET_DOUBLE_CAST_UPDREF
#define DUK_TVAL_SET_U32_UPDREF DUK_TVAL_SET_DOUBLE_CAST_UPDREF
#endif /* DUK_USE_FASTINT */
#define DUK_TVAL_SET_FASTINT_UPDREF DUK_TVAL_SET_I48_UPDREF /* convenience */
#define DUK_TVAL_SET_LIGHTFUNC_UPDREF DUK_TVAL_SET_LIGHTFUNC_UPDREF_ALT0
#define DUK_TVAL_SET_STRING_UPDREF DUK_TVAL_SET_STRING_UPDREF_ALT0
#define DUK_TVAL_SET_OBJECT_UPDREF DUK_TVAL_SET_OBJECT_UPDREF_ALT0
#define DUK_TVAL_SET_BUFFER_UPDREF DUK_TVAL_SET_BUFFER_UPDREF_ALT0
#define DUK_TVAL_SET_POINTER_UPDREF DUK_TVAL_SET_POINTER_UPDREF_ALT0
#define DUK_TVAL_SET_TVAL_UPDREF DUK_TVAL_SET_TVAL_UPDREF_ALT0
#define DUK_TVAL_SET_TVAL_UPDREF_FAST DUK_TVAL_SET_TVAL_UPDREF_ALT0
#define DUK_TVAL_SET_TVAL_UPDREF_SLOW DUK_TVAL_SET_TVAL_UPDREF_ALT0
#endif /* DUK_USE_REFERENCE_COUNTING */
#endif /* DUK_HEAPHDR_H_INCLUDED */
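The no-op variants above exist so call sites can stay unconditional when refcounting is disabled. In the refcounting configuration the same UPDREF macros must write the new value and incref it before decref'ing the old one, so a refzero cascade triggered by the decref cannot observe a half-updated slot. A hedged sketch of that ordering (the macro name is illustrative; the real ALT0/ALT1 definitions appear earlier in this header):
#define SKETCH_TVAL_SET_TVAL_UPDREF(thr,tvptr_dst,tvptr_src) do { \
        duk_tval *tv__dst, *tv__src; duk_tval tv__old; \
        tv__dst = (tvptr_dst); tv__src = (tvptr_src); \
        DUK_TVAL_SET_TVAL(&tv__old, tv__dst);  /* stash old value */ \
        DUK_TVAL_SET_TVAL(tv__dst, tv__src);   /* write new value first */ \
        DUK_TVAL_INCREF((thr), tv__dst);       /* then ref the new value */ \
        DUK_TVAL_DECREF((thr), &tv__old);      /* unref old last; may run refzero side effects */ \
    } while (0)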

21
src-input/duk_hobject.h

@ -645,6 +645,16 @@
/* note: this updates refcounts */
#define DUK_HOBJECT_SET_PROTOTYPE_UPDREF(thr,h,p) duk_hobject_set_prototype_updref((thr), (h), (p))
/*
* Finalizer check
*/
#if defined(DUK_USE_HEAPPTR16)
#define DUK_HOBJECT_HAS_FINALIZER_FAST(heap,h) duk_hobject_has_finalizer_fast_raw((heap), (h))
#else
#define DUK_HOBJECT_HAS_FINALIZER_FAST(heap,h) duk_hobject_has_finalizer_fast_raw((h))
#endif
/*
* Resizing and hash behavior
*/
@ -873,7 +883,11 @@ DUK_INTERNAL_DECL duk_bool_t duk_hobject_hasprop_raw(duk_hthread *thr, duk_hobje
DUK_INTERNAL_DECL void duk_hobject_define_property_internal(duk_hthread *thr, duk_hobject *obj, duk_hstring *key, duk_small_uint_t flags);
DUK_INTERNAL_DECL void duk_hobject_define_property_internal_arridx(duk_hthread *thr, duk_hobject *obj, duk_uarridx_t arr_idx, duk_small_uint_t flags);
DUK_INTERNAL_DECL duk_size_t duk_hobject_get_length(duk_hthread *thr, duk_hobject *obj);
DUK_INTERNAL_DECL duk_bool_t duk_hobject_has_finalizer_fast(duk_hthread *thr, duk_hobject *obj);
#if defined(DUK_USE_HEAPPTR16)
DUK_INTERNAL_DECL duk_bool_t duk_hobject_has_finalizer_fast_raw(duk_heap *heap, duk_hobject *obj);
#else
DUK_INTERNAL_DECL duk_bool_t duk_hobject_has_finalizer_fast_raw(duk_hobject *obj);
#endif
/* helpers for defineProperty() and defineProperties() */
DUK_INTERNAL_DECL
@ -920,11 +934,6 @@ DUK_INTERNAL_DECL duk_bool_t duk_hobject_enumerator_next(duk_context *ctx, duk_b
/* macros */
DUK_INTERNAL_DECL void duk_hobject_set_prototype_updref(duk_hthread *thr, duk_hobject *h, duk_hobject *p);
/* finalization */
#if defined(DUK_USE_FINALIZER_SUPPORT)
DUK_INTERNAL_DECL void duk_hobject_run_finalizer(duk_hthread *thr, duk_hobject *obj);
#endif
/* pc2line */
#if defined(DUK_USE_PC2LINE)
DUK_INTERNAL_DECL void duk_hobject_pc2line_pack(duk_hthread *thr, duk_compiler_instr *instrs, duk_uint_fast32_t length);
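For call sites the macro reads the same either way; the conditional signature exists because with DUK_USE_HEAPPTR16 prototype pointers are stored compressed and decoding them requires the duk_heap. A hedged call-site sketch ('h' is an arbitrary object):
if (DUK_HOBJECT_HAS_FINALIZER_FAST(thr->heap, h)) {
        /* 'h' or something on its prototype chain has a finalizer;
         * route it to finalization instead of freeing it outright.
         */
}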

1
src-input/duk_hobject_alloc.c

@ -77,6 +77,7 @@ DUK_INTERNAL duk_hobject *duk_hobject_alloc_unchecked(duk_heap *heap, duk_uint_t
if (DUK_UNLIKELY(res == NULL)) {
return NULL;
}
DUK_ASSERT(!DUK_HOBJECT_IS_THREAD(res));
duk__init_object_parts(heap, hobject_flags, res);

113
src-input/duk_hobject_finalizer.c

@ -1,113 +0,0 @@
/*
* Run a duk_hobject finalizer. Used for both reference counting
* and mark-and-sweep algorithms. Must never throw an error.
*
* There is no return value. Any return value or error thrown by
* the finalizer is ignored (although errors are debug logged).
*
* Notes:
*
* - The thread used for calling the finalizer is the same as the
* 'thr' argument. This may need to change later.
*
* - The finalizer thread 'top' assertions are there because it is
* critical that strict stack policy is observed (i.e. no cruft
* left on the finalizer stack).
*/
#include "duk_internal.h"
#if defined(DUK_USE_FINALIZER_SUPPORT)
DUK_LOCAL duk_ret_t duk__finalize_helper(duk_context *ctx, void *udata) {
duk_hthread *thr;
DUK_ASSERT(ctx != NULL);
thr = (duk_hthread *) ctx;
DUK_UNREF(udata);
DUK_DDD(DUK_DDDPRINT("protected finalization helper running"));
/* [... obj] */
/* XXX: Finalizer lookup should traverse the prototype chain (to allow
* inherited finalizers) but should not invoke accessors or proxy object
* behavior. At the moment this lookup will invoke proxy behavior, so
* caller must ensure that this function is not called if the target is
* a Proxy.
*/
duk_get_prop_stridx_short(ctx, -1, DUK_STRIDX_INT_FINALIZER); /* -> [... obj finalizer] */
if (!duk_is_callable(ctx, -1)) {
DUK_DDD(DUK_DDDPRINT("-> no finalizer or finalizer not callable"));
return 0;
}
duk_dup_m2(ctx);
duk_push_boolean(ctx, DUK_HEAP_HAS_FINALIZER_NORESCUE(thr->heap));
DUK_DDD(DUK_DDDPRINT("-> finalizer found, calling finalizer"));
duk_call(ctx, 2); /* [ ... obj finalizer obj heapDestruct ] -> [ ... obj retval ] */
DUK_DDD(DUK_DDDPRINT("finalizer finished successfully"));
return 0;
/* Note: we rely on duk_safe_call() to fix up the stack for the caller,
* so we don't need to pop stuff here. There is no return value;
* caller determines rescued status based on object refcount.
*/
}
DUK_INTERNAL void duk_hobject_run_finalizer(duk_hthread *thr, duk_hobject *obj) {
duk_context *ctx = (duk_context *) thr;
duk_ret_t rc;
#if defined(DUK_USE_ASSERTIONS)
duk_idx_t entry_top;
#endif
DUK_DDD(DUK_DDDPRINT("running object finalizer for object: %p", (void *) obj));
DUK_ASSERT(thr != NULL);
DUK_ASSERT(ctx != NULL);
DUK_ASSERT(obj != NULL);
DUK_ASSERT_VALSTACK_SPACE(thr, 1);
#if defined(DUK_USE_ASSERTIONS)
entry_top = duk_get_top(ctx);
#endif
/*
* Get and call the finalizer. All of this must be wrapped
* in a protected call, because even getting the finalizer
* may trigger an error (getter may throw one, for instance).
*/
DUK_ASSERT(!DUK_HEAPHDR_HAS_READONLY((duk_heaphdr *) obj));
if (DUK_HEAPHDR_HAS_FINALIZED((duk_heaphdr *) obj)) {
DUK_D(DUK_DPRINT("object already finalized, avoid running finalizer twice: %!O", obj));
return;
}
DUK_HEAPHDR_SET_FINALIZED((duk_heaphdr *) obj); /* ensure never re-entered until rescue cycle complete */
if (DUK_HOBJECT_HAS_EXOTIC_PROXYOBJ(obj)) {
/* This shouldn't happen; call sites should avoid looking up
* _Finalizer "through" a Proxy, but ignore if we come here
* with a Proxy to avoid finalizer re-entry.
*/
DUK_D(DUK_DPRINT("object is a proxy, skip finalizer call"));
return;
}
/* XXX: use a NULL error handler for the finalizer call? */
DUK_DDD(DUK_DDDPRINT("-> finalizer found, calling wrapped finalize helper"));
duk_push_hobject(ctx, obj); /* this also increases refcount by one */
rc = duk_safe_call(ctx, duk__finalize_helper, NULL /*udata*/, 0 /*nargs*/, 1 /*nrets*/); /* -> [... obj retval/error] */
DUK_ASSERT_TOP(ctx, entry_top + 2); /* duk_safe_call discipline */
if (rc != DUK_EXEC_SUCCESS) {
/* Note: we ask for one return value from duk_safe_call to get this
* error debugging here.
*/
DUK_D(DUK_DPRINT("wrapped finalizer call failed for object %p (ignored); error: %!T",
(void *) obj, (duk_tval *) duk_get_tval(ctx, -1)));
}
duk_pop_2(ctx); /* -> [...] */
DUK_ASSERT_TOP(ctx, entry_top);
}
#endif /* DUK_USE_FINALIZER_SUPPORT */
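The deleted helper leaned on duk_safe_call()'s stack discipline: with nrets == 1, exactly one value sits on the stack afterwards whether the helper returned normally or threw. A minimal public-API sketch of that contract (my_helper is hypothetical):
static duk_ret_t my_helper(duk_context *ctx, void *udata) {
        (void) udata;
        /* ... anything here may throw ... */
        return 0;
}

duk_int_t rc;
rc = duk_safe_call(ctx, my_helper, NULL /*udata*/, 0 /*nargs*/, 1 /*nrets*/);
if (rc != DUK_EXEC_SUCCESS) {
        /* Error value is at the stack top. */
}
duk_pop(ctx);  /* exactly one value either way: undefined or the error */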

254
src-input/duk_hobject_props.c

@ -507,29 +507,26 @@ DUK_LOCAL duk_bool_t duk__proxy_check_prop(duk_hthread *thr, duk_hobject *obj, d
/*
* Reallocate property allocation, moving properties to the new allocation.
*
* Includes key compaction, rehashing, and can also optionally abandoning
* Includes key compaction, rehashing, and can also optionally abandon
* the array part, 'migrating' array entries into the beginning of the
* new entry part. Arguments are not validated here, so e.g. new_h_size
* MUST be a valid prime.
* new entry part.
*
* There is no support for in-place reallocation or just compacting keys
* without resizing the property allocation. This is intentional to keep
* code size minimal.
* code size minimal; adding such support would be useful future work.
*
* The implementation is relatively straightforward, except for the array
* abandonment process. Array abandonment requires that new string keys
* are interned, which may trigger GC. All keys interned so far must be
* reachable for GC at all times; valstack is used for that now.
* reachable for GC at all times and correctly refcounted; the valstack is
* used for that now.
*
* Also, a GC triggered during this reallocation process must not interfere
* with the object being resized. This is currently controlled by using
* heap->mark_and_sweep_base_flags to indicate that no finalizers will be
* executed (as they can affect ANY object) and no objects are compacted
* (it would suffice to protect this particular object only, though).
*
* Note: a non-checked variant would be nice but is a bit tricky to
* implement for the array abandonment process. It's easy for
* everything else.
* with the object being resized. This is currently controlled by preventing
* finalizers (as they may affect ANY object) and object compaction in
* mark-and-sweep. It would suffice to protect only this particular object
* from compaction, however. DECREF refzero cascades are side effect free
* and OK.
*
* Note: because we need to potentially resize the valstack (as part
* of abandoning the array part), any tval pointers to the valstack
@ -543,7 +540,7 @@ DUK_INTERNAL void duk_hobject_realloc_props(duk_hthread *thr,
duk_uint32_t new_h_size,
duk_bool_t abandon_array) {
duk_context *ctx = (duk_context *) thr;
duk_small_uint_t prev_mark_and_sweep_base_flags;
duk_small_uint_t prev_ms_base_flags;
duk_uint32_t new_alloc_size;
duk_uint32_t new_e_size_adjusted;
duk_uint8_t *new_p;
@ -554,6 +551,10 @@ DUK_INTERNAL void duk_hobject_realloc_props(duk_hthread *thr,
duk_uint32_t *new_h;
duk_uint32_t new_e_next;
duk_uint_fast32_t i;
duk_size_t array_copy_size;
#if defined(DUK_USE_ASSERTIONS)
duk_bool_t prev_error_not_allowed;
#endif
DUK_ASSERT(thr != NULL);
DUK_ASSERT(ctx != NULL);
@ -623,9 +624,8 @@ DUK_INTERNAL void duk_hobject_realloc_props(duk_hthread *thr,
/*
* Property count check. This is the only point where we ensure that
* we don't get more (allocated) property space than we can handle.
* There aren't hard limits as such, but some algorithms fail (e.g.
* finding next higher prime, selecting hash part size) if we get too
* close to the 4G property limit.
* There aren't hard limits as such, but some algorithms may fail
* if we get too close to the 4G property limit.
*
* Since this works based on allocation size (not actually used size),
* the limit is a bit approximate but good enough in practice.
@ -638,43 +638,46 @@ DUK_INTERNAL void duk_hobject_realloc_props(duk_hthread *thr,
/*
* Compute new alloc size and alloc new area.
*
* The new area is allocated as a dynamic buffer and placed into the
* valstack for reachability. The actual buffer is then detached at
* the end.
*
* Note: heap_mark_and_sweep_base_flags are altered here to ensure
* no-one touches this object while we're resizing and rehashing it.
* The flags must be reset on every exit path after it. Finalizers
* and compaction are currently prevented for all objects while it
* would be enough to restrict it only for the current object.
* The new area is not tracked in the heap at all, so it's critical
* we free or keep it in a controlled manner.
*/
prev_mark_and_sweep_base_flags = thr->heap->mark_and_sweep_base_flags;
thr->heap->mark_and_sweep_base_flags |=
DUK_MS_FLAG_NO_FINALIZERS | /* avoid attempts to add/remove object keys */
DUK_MS_FLAG_NO_OBJECT_COMPACTION; /* avoid attempt to compact the current object */
#if defined(DUK_USE_ASSERTIONS)
/* Whole path must be error throw free, but we may be called from
* within error handling so can't assert for error_not_allowed == 0.
*/
prev_error_not_allowed = thr->heap->error_not_allowed;
thr->heap->error_not_allowed = 1;
#endif
prev_ms_base_flags = thr->heap->ms_base_flags;
thr->heap->ms_base_flags |=
DUK_MS_FLAG_NO_OBJECT_COMPACTION; /* Avoid attempt to compact the current object (all objects really). */
thr->heap->pf_prevent_count++; /* Avoid finalizers. */
DUK_ASSERT(thr->heap->pf_prevent_count != 0); /* Wrap. */
new_alloc_size = DUK_HOBJECT_P_COMPUTE_SIZE(new_e_size_adjusted, new_a_size, new_h_size);
DUK_DDD(DUK_DDDPRINT("new hobject allocation size is %ld", (long) new_alloc_size));
if (new_alloc_size == 0) {
/* for zero size, don't push anything on valstack */
DUK_ASSERT(new_e_size_adjusted == 0);
DUK_ASSERT(new_a_size == 0);
DUK_ASSERT(new_h_size == 0);
new_p = NULL;
} else {
/* This may trigger mark-and-sweep with arbitrary side effects,
* including an attempted resize of the object we're resizing,
* executing a finalizer which may add or remove properties of
* the object we're resizing etc.
*/
/* Note: buffer is dynamic so that we can 'steal' the actual
* allocation later.
/* Alloc may trigger mark-and-sweep but no compaction, and
* cannot throw.
*/
new_p = (duk_uint8_t *) duk_push_dynamic_buffer(ctx, new_alloc_size); /* errors out if out of memory */
DUK_ASSERT(new_p != NULL); /* since new_alloc_size > 0 */
#if 0 /* XXX: inject test */
if (1) {
goto alloc_failed;
}
#endif
new_p = (duk_uint8_t *) DUK_ALLOC(thr->heap, new_alloc_size);
if (new_p == NULL) {
/* NULL always indicates alloc failure because
* new_alloc_size > 0.
*/
goto alloc_failed;
}
}
/* Set up pointers to the new property area: this is hidden behind a macro
@ -699,23 +702,23 @@ DUK_INTERNAL void duk_hobject_realloc_props(duk_hthread *thr,
*
* Note: from an enumeration perspective the order of entry keys matters.
* Array keys should appear wherever they appeared before the array abandon
* operation.
* operation. (This no longer matters much because keys are enumerated in ES2015 order.)
*/
if (abandon_array) {
/*
* Note: assuming new_a_size == 0, and that entry part contains
* no conflicting keys, refcounts do not need to be adjusted for
* the values, as they remain exactly the same.
/* Assuming new_a_size == 0, and that entry part contains
* no conflicting keys, refcounts do not need to be adjusted for
* the values, as they remain exactly the same.
*
* The keys, however, need to be interned, incref'd, and be
* reachable for GC. Any intern attempt may trigger a GC and
* claim any non-reachable strings, so every key must be reachable
* at all times.
* The keys, however, need to be interned, incref'd, and be
* reachable for GC. Any intern attempt may trigger a GC and
* claim any non-reachable strings, so every key must be reachable
* at all times. Refcounts must be correct to satisfy refcount
* assertions.
*
* A longjmp must not occur here, as the new_p allocation would
* be freed without these keys being decref'd, hence the messy
* decref handling if intern fails.
* A longjmp must not occur here, as the new_p allocation would
* leak. Refcounts would come out correctly as the interned
* strings are valstack tracked.
*/
DUK_ASSERT(new_a_size == 0);
@ -744,13 +747,20 @@ DUK_INTERNAL void duk_hobject_realloc_props(duk_hthread *thr,
* must be careful.
*/
/* never shrinks; auto-adds DUK_VALSTACK_INTERNAL_EXTRA, which is generous */
#if 0 /* XXX: inject test */
if (1) {
goto abandon_error;
}
#endif
/* Never shrinks; auto-adds DUK_VALSTACK_INTERNAL_EXTRA, which
* is generous.
*/
if (!duk_check_stack(ctx, 1)) {
goto abandon_error;
}
DUK_ASSERT_VALSTACK_SPACE(thr, 1);
key = duk_heap_strtable_intern_u32(thr->heap, i);
if (!key) {
if (key == NULL) {
goto abandon_error;
}
duk_push_hstring(ctx, key); /* keep key reachable for GC etc; guaranteed not to fail */
@ -788,7 +798,7 @@ DUK_INTERNAL void duk_hobject_realloc_props(duk_hthread *thr,
DUK_ASSERT(DUK_HOBJECT_GET_PROPS(thr->heap, obj) != NULL);
key = DUK_HOBJECT_E_GET_KEY(thr->heap, obj, i);
if (!key) {
if (key == NULL) {
continue;
}
@ -803,53 +813,46 @@ DUK_INTERNAL void duk_hobject_realloc_props(duk_hthread *thr,
/* the entries [new_e_next, new_e_size_adjusted[ are left uninitialized on purpose (ok, not gc reachable) */
/*
* Copy array elements to new array part.
* Copy array elements to new array part. If the new array part is
* larger, initialize the unused entries as UNUSED because they are
* GC reachable.
*/
if (new_a_size > DUK_HOBJECT_GET_ASIZE(obj)) {
/* copy existing entries as is */
DUK_ASSERT(new_p != NULL && new_a != NULL);
if (DUK_HOBJECT_GET_ASIZE(obj) > 0) {
/* Avoid zero copy with an invalid pointer. If obj->p is NULL,
* the 'new_a' pointer will be invalid which is not allowed even
* when copy size is zero.
*/
DUK_ASSERT(DUK_HOBJECT_GET_PROPS(thr->heap, obj) != NULL);
DUK_ASSERT(DUK_HOBJECT_GET_ASIZE(obj) > 0);
DUK_MEMCPY((void *) new_a, (void *) DUK_HOBJECT_A_GET_BASE(thr->heap, obj), sizeof(duk_tval) * DUK_HOBJECT_GET_ASIZE(obj));
}
/* fill new entries with -unused- (required, gc reachable) */
for (i = DUK_HOBJECT_GET_ASIZE(obj); i < new_a_size; i++) {
duk_tval *tv = &new_a[i];
DUK_TVAL_SET_UNUSED(tv);
}
} else {
#if defined(DUK_USE_ASSERTIONS)
/* caller must have decref'd values above new_a_size (if that is necessary) */
if (!abandon_array) {
for (i = new_a_size; i < DUK_HOBJECT_GET_ASIZE(obj); i++) {
duk_tval *tv;
tv = DUK_HOBJECT_A_GET_VALUE_PTR(thr->heap, obj, i);
/* current assertion is quite strong: decref's and set to unused */
DUK_ASSERT(DUK_TVAL_IS_UNUSED(tv));
}
/* Caller must have decref'd values above new_a_size (if that is necessary). */
if (!abandon_array) {
for (i = new_a_size; i < DUK_HOBJECT_GET_ASIZE(obj); i++) {
duk_tval *tv;
tv = DUK_HOBJECT_A_GET_VALUE_PTR(thr->heap, obj, i);
DUK_ASSERT(DUK_TVAL_IS_UNUSED(tv));
}
}
#endif
if (new_a_size > 0) {
/* Avoid zero copy with an invalid pointer. If obj->p is NULL,
* the 'new_a' pointer will be invalid which is not allowed even
* when copy size is zero.
*/
DUK_ASSERT(DUK_HOBJECT_GET_PROPS(thr->heap, obj) != NULL);
DUK_ASSERT(new_a_size > 0);
DUK_MEMCPY((void *) new_a, (void *) DUK_HOBJECT_A_GET_BASE(thr->heap, obj), sizeof(duk_tval) * new_a_size);
}
if (new_a_size > DUK_HOBJECT_GET_ASIZE(obj)) {
array_copy_size = sizeof(duk_tval) * DUK_HOBJECT_GET_ASIZE(obj);
} else {
array_copy_size = sizeof(duk_tval) * new_a_size;
}
if (array_copy_size > 0) {
/* Avoid zero copy with an invalid pointer. If obj->p is NULL,
* the 'new_a' pointer would be invalid, which is not allowed even
* when copy size is zero.
*/
DUK_ASSERT(new_a != NULL);
DUK_ASSERT(DUK_HOBJECT_GET_PROPS(thr->heap, obj) != NULL);
DUK_ASSERT(DUK_HOBJECT_GET_ASIZE(obj) > 0);
DUK_MEMCPY((void *) new_a,
(const void *) DUK_HOBJECT_A_GET_BASE(thr->heap, obj),
array_copy_size);
}
for (i = DUK_HOBJECT_GET_ASIZE(obj); i < new_a_size; i++) {
duk_tval *tv = &new_a[i];
DUK_TVAL_SET_UNUSED(tv);
}
/*
* Rebuild the hash part always from scratch (guaranteed to finish).
* Rebuild the hash part always from scratch (guaranteed to finish
* as long as caller gave consistent parameters).
*
* Any resize of the hash part requires rehashing. In addition, rehashing
* gets rid of any elements marked deleted (DUK__HASH_DELETED), which is critical
@ -857,7 +860,9 @@ DUK_INTERNAL void duk_hobject_realloc_props(duk_hthread *thr,
*/
#if defined(DUK_USE_HOBJECT_HASH_PART)
if (DUK_UNLIKELY(new_h_size > 0)) {
if (new_h_size == 0) {
DUK_DDD(DUK_DDDPRINT("no hash part, no rehash"));
} else {
duk_uint32_t mask;
DUK_ASSERT(new_h != NULL);
@ -891,8 +896,6 @@ DUK_INTERNAL void duk_hobject_realloc_props(duk_hthread *thr,
/* Guaranteed to finish (hash is larger than #props). */
}
}
} else {
DUK_DDD(DUK_DDDPRINT("no hash part, no rehash"));
}
#endif /* DUK_USE_HOBJECT_HASH_PART */
@ -931,30 +934,20 @@ DUK_INTERNAL void duk_hobject_realloc_props(duk_hthread *thr,
DUK_HOBJECT_SET_ASIZE(obj, new_a_size);
DUK_HOBJECT_SET_HSIZE(obj, new_h_size);
if (new_p) {
/*
* Detach actual buffer from dynamic buffer in valstack, and
* pop it from the stack.
*
* XXX: the buffer object is certainly not reachable at this point,
* so it would be nice to free it forcibly even with only
* mark-and-sweep enabled. Not a big issue though.
*/
(void) duk_steal_buffer(ctx, -1, NULL);
duk_pop(ctx);
} else {
DUK_ASSERT(new_alloc_size == 0);
/* no need to pop, nothing was pushed */
}
/* clear array part flag only after switching */
/* Clear array part flag only after switching. */
if (abandon_array) {
DUK_HOBJECT_CLEAR_ARRAY_PART(obj);
}
DUK_DDD(DUK_DDDPRINT("resize result: %!O", (duk_heaphdr *) obj));
thr->heap->mark_and_sweep_base_flags = prev_mark_and_sweep_base_flags;
DUK_ASSERT(thr->heap->pf_prevent_count > 0);
thr->heap->pf_prevent_count--;
thr->heap->ms_base_flags = prev_ms_base_flags;
#if defined(DUK_USE_ASSERTIONS)
DUK_ASSERT(thr->heap->error_not_allowed == 1);
thr->heap->error_not_allowed = prev_error_not_allowed;
#endif
/*
* Post resize assertions.
@ -974,9 +967,17 @@ DUK_INTERNAL void duk_hobject_realloc_props(duk_hthread *thr,
*/
abandon_error:
DUK_D(DUK_DPRINT("hobject resize failed during abandon array, decref keys"));
alloc_failed:
DUK_D(DUK_DPRINT("object property table resize failed"));
thr->heap->mark_and_sweep_base_flags = prev_mark_and_sweep_base_flags;
DUK_FREE(thr->heap, new_p); /* OK for NULL. */
thr->heap->pf_prevent_count--;
thr->heap->ms_base_flags = prev_ms_base_flags;
#if defined(DUK_USE_ASSERTIONS)
DUK_ASSERT(thr->heap->error_not_allowed == 1);
thr->heap->error_not_allowed = prev_error_not_allowed;
#endif
DUK_ERROR_ALLOC_FAILED(thr);
}
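The rewritten error path reflects the commit's new allocation discipline: the property area is a raw, heap-untracked DUK_ALLOC() that must be released manually on every failure path; since DUK_FREE() accepts NULL, a single cleanup label suffices. A condensed, hedged sketch of the shape used above:
duk_uint8_t *new_p = NULL;

if (new_alloc_size > 0) {
        new_p = (duk_uint8_t *) DUK_ALLOC(thr->heap, new_alloc_size);
        if (new_p == NULL) {
                goto alloc_failed;  /* size > 0, so NULL means failure */
        }
}
/* ... fill the new area; this region must not throw ... */
return;

alloc_failed:
DUK_FREE(thr->heap, new_p);  /* OK for NULL */
DUK_ERROR_ALLOC_FAILED(thr);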
@ -4796,12 +4797,14 @@ DUK_INTERNAL duk_size_t duk_hobject_get_length(duk_hthread *thr, duk_hobject *ob
* in sync with the actual property when setting/removing the finalizer.
*/
DUK_INTERNAL duk_bool_t duk_hobject_has_finalizer_fast(duk_hthread *thr, duk_hobject *obj) {
#if defined(DUK_USE_HEAPPTR16)
DUK_INTERNAL duk_bool_t duk_hobject_has_finalizer_fast_raw(duk_heap *heap, duk_hobject *obj) {
#else
DUK_INTERNAL duk_bool_t duk_hobject_has_finalizer_fast_raw(duk_hobject *obj) {
#endif
duk_uint_t sanity;
DUK_ASSERT(thr != NULL);
DUK_ASSERT(obj != NULL);
DUK_UNREF(thr);
sanity = DUK_HOBJECT_PROTOTYPE_CHAIN_SANITY;
do {
@ -4812,7 +4815,12 @@ DUK_INTERNAL duk_bool_t duk_hobject_has_finalizer_fast(duk_hthread *thr, duk_hob
DUK_D(DUK_DPRINT("prototype loop when checking for finalizer existence; returning false"));
return 0;
}
obj = DUK_HOBJECT_GET_PROTOTYPE(thr->heap, obj);
#if defined(DUK_USE_HEAPPTR16)
DUK_ASSERT(heap != NULL);
obj = DUK_HOBJECT_GET_PROTOTYPE(heap, obj);
#else
obj = DUK_HOBJECT_GET_PROTOTYPE(NULL, obj); /* 'heap' arg ignored */
#endif
} while (obj != NULL);
return 0;

2
src-input/duk_hthread.h

@ -379,9 +379,11 @@ DUK_INTERNAL_DECL void duk_hthread_terminate(duk_hthread *thr);
DUK_INTERNAL_DECL void duk_hthread_callstack_grow(duk_hthread *thr);
DUK_INTERNAL_DECL void duk_hthread_callstack_shrink_check(duk_hthread *thr);
DUK_INTERNAL_DECL void duk_hthread_callstack_unwind_norz(duk_hthread *thr, duk_size_t new_top);
DUK_INTERNAL_DECL void duk_hthread_callstack_unwind(duk_hthread *thr, duk_size_t new_top);
DUK_INTERNAL_DECL void duk_hthread_catchstack_grow(duk_hthread *thr);
DUK_INTERNAL_DECL void duk_hthread_catchstack_shrink_check(duk_hthread *thr);
DUK_INTERNAL_DECL void duk_hthread_catchstack_unwind_norz(duk_hthread *thr, duk_size_t new_top);
DUK_INTERNAL_DECL void duk_hthread_catchstack_unwind(duk_hthread *thr, duk_size_t new_top);
DUK_INTERNAL_DECL void *duk_hthread_get_valstack_ptr(duk_heap *heap, void *ud); /* indirect allocs */

1
src-input/duk_hthread_misc.c

@ -10,7 +10,6 @@ DUK_INTERNAL void duk_hthread_terminate(duk_hthread *thr) {
/* Order of unwinding is important */
duk_hthread_catchstack_unwind(thr, 0);
duk_hthread_callstack_unwind(thr, 0); /* side effects, possibly errors */
thr->valstack_bottom = thr->valstack;

27
src-input/duk_hthread_stacks.c

@ -128,7 +128,7 @@ DUK_INTERNAL void duk_hthread_callstack_shrink_check(duk_hthread *thr) {
duk__hthread_do_callstack_shrink(thr);
}
DUK_INTERNAL void duk_hthread_callstack_unwind(duk_hthread *thr, duk_size_t new_top) {
DUK_INTERNAL void duk_hthread_callstack_unwind_norz(duk_hthread *thr, duk_size_t new_top) {
duk_size_t idx;
DUK_DDD(DUK_DDDPRINT("unwind callstack top of thread %p from %ld to %ld",
@ -224,8 +224,12 @@ DUK_INTERNAL void duk_hthread_callstack_unwind(duk_hthread *thr, duk_size_t new_
/* Pause for all step types: step into, step over, step out.
* This is the only place explicitly handling a step out.
*/
DUK_HEAP_SET_PAUSED(heap);
DUK_ASSERT(heap->dbg_step_thread == NULL);
if (duk_debug_is_paused(heap)) {
DUK_D(DUK_DPRINT("step pause trigger but already paused, ignoring"));
} else {
duk_debug_set_paused(heap);
DUK_ASSERT(heap->dbg_step_thread == NULL);
}
}
#endif
@ -311,11 +315,11 @@ DUK_INTERNAL void duk_hthread_callstack_unwind(duk_hthread *thr, duk_size_t new_
* Also topmost activation idx_retval is garbage (not zeroed), and must
* be ignored.
*/
}
/* Check for pending refzero entries, many places in the unwind
* use NORZ macros.
*/
DUK_REFZERO_CHECK_SLOW(thr);
DUK_INTERNAL void duk_hthread_callstack_unwind(duk_hthread *thr, duk_size_t new_top) {
duk_hthread_callstack_unwind_norz(thr, new_top);
DUK_REFZERO_CHECK_FAST(thr);
}
DUK_LOCAL DUK_COLD DUK_NOINLINE void duk__hthread_do_catchstack_grow(duk_hthread *thr) {
@ -412,7 +416,7 @@ DUK_INTERNAL void duk_hthread_catchstack_shrink_check(duk_hthread *thr) {
duk__hthread_do_catchstack_shrink(thr);
}
DUK_INTERNAL void duk_hthread_catchstack_unwind(duk_hthread *thr, duk_size_t new_top) {
DUK_INTERNAL void duk_hthread_catchstack_unwind_norz(duk_hthread *thr, duk_size_t new_top) {
duk_size_t idx;
DUK_DDD(DUK_DDDPRINT("unwind catchstack top of thread %p from %ld to %ld",
@ -469,7 +473,7 @@ DUK_INTERNAL void duk_hthread_catchstack_unwind(duk_hthread *thr, duk_size_t new
DUK_ASSERT(env != NULL); /* must be, since env was created when catcher was created */
act->lex_env = DUK_HOBJECT_GET_PROTOTYPE(thr->heap, env); /* prototype is lex_env before catcher created */
DUK_HOBJECT_INCREF(thr, act->lex_env);
DUK_HOBJECT_DECREF(thr, env);
DUK_HOBJECT_DECREF_NORZ(thr, env);
/* There is no need to decref anything other than 'env': if 'env'
* becomes unreachable, refzero will handle decref'ing its prototype.
@ -481,3 +485,8 @@ DUK_INTERNAL void duk_hthread_catchstack_unwind(duk_hthread *thr, duk_size_t new
/* note: any entries above the catchstack top are garbage and not zeroed */
}
DUK_INTERNAL void duk_hthread_catchstack_unwind(duk_hthread *thr, duk_size_t new_top) {
duk_hthread_catchstack_unwind_norz(thr, new_top);
DUK_REFZERO_CHECK_FAST(thr);
}
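The wrapper pattern above is the commit's central idiom: unwind and decref using _norz ("no refzero") variants, which queue zero-refcount objects instead of processing them immediately, then flush the queue once at a stable point. A hedged call-site sketch (the local names are illustrative):
duk_hthread_catchstack_unwind_norz(thr, new_cat_top);   /* queues refzero work */
duk_hthread_callstack_unwind_norz(thr, new_call_top);   /* queues more, still no side effects */
DUK_HOBJECT_DECREF_NORZ(thr, h_env);
DUK_REFZERO_CHECK_FAST(thr);  /* now process queued refzero work; finalizers may run here */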

1
src-input/duk_internal.h

@ -55,6 +55,7 @@ DUK_USE_USER_DECLARE()
#include "duk_js_compiler.h"
#include "duk_regexp.h"
#include "duk_heaphdr.h"
#include "duk_refcount.h"
#include "duk_api_internal.h"
#include "duk_hstring.h"
#include "duk_hobject.h"

108
src-input/duk_js_call.c

@ -21,6 +21,8 @@
#include "duk_internal.h"
/* XXX: heap->error_not_allowed for success path too? */
/*
* Forward declarations.
*/
@ -1055,16 +1057,6 @@ DUK_INTERNAL duk_int_t duk_handle_call_protected(duk_hthread *thr,
*/
duk__handle_call_inner(thr, num_stack_args, call_flags, idx_func);
/* Success path handles */
DUK_ASSERT(thr->heap->call_recursion_depth == entry_call_recursion_depth);
DUK_ASSERT(thr->ptr_curr_pc == entry_ptr_curr_pc);
/* Longjmp state is kept clean in success path */
DUK_ASSERT(thr->heap->lj.type == DUK_LJ_TYPE_UNKNOWN);
DUK_ASSERT(thr->heap->lj.iserror == 0);
DUK_ASSERT(DUK_TVAL_IS_UNDEFINED(&thr->heap->lj.value1));
DUK_ASSERT(DUK_TVAL_IS_UNDEFINED(&thr->heap->lj.value2));
thr->heap->lj.jmpbuf_ptr = old_jmpbuf_ptr;
return DUK_EXEC_SUCCESS;
@ -1091,11 +1083,6 @@ DUK_INTERNAL duk_int_t duk_handle_call_protected(duk_hthread *thr,
idx_func,
old_jmpbuf_ptr);
/* Longjmp state is cleaned up by error handling */
DUK_ASSERT(thr->heap->lj.type == DUK_LJ_TYPE_UNKNOWN);
DUK_ASSERT(thr->heap->lj.iserror == 0);
DUK_ASSERT(DUK_TVAL_IS_UNDEFINED(&thr->heap->lj.value1));
DUK_ASSERT(DUK_TVAL_IS_UNDEFINED(&thr->heap->lj.value2));
return DUK_EXEC_ERROR;
}
#if defined(DUK_USE_CPP_EXCEPTIONS)
@ -1121,6 +1108,7 @@ DUK_INTERNAL duk_int_t duk_handle_call_protected(duk_hthread *thr,
entry_ptr_curr_pc,
idx_func,
old_jmpbuf_ptr);
return DUK_EXEC_ERROR;
}
} catch (...) {
@ -1141,6 +1129,7 @@ DUK_INTERNAL duk_int_t duk_handle_call_protected(duk_hthread *thr,
entry_ptr_curr_pc,
idx_func,
old_jmpbuf_ptr);
return DUK_EXEC_ERROR;
}
}
@ -1581,9 +1570,9 @@ DUK_LOCAL void duk__handle_call_inner(duk_hthread *thr,
DUK_ASSERT(thr->catchstack_top >= entry_catchstack_top); /* may need unwind */
DUK_ASSERT(thr->callstack_top == entry_callstack_top + 1);
duk_hthread_catchstack_unwind(thr, entry_catchstack_top);
duk_hthread_catchstack_unwind_norz(thr, entry_catchstack_top);
duk_hthread_catchstack_shrink_check(thr);
duk_hthread_callstack_unwind(thr, entry_callstack_top);
duk_hthread_callstack_unwind_norz(thr, entry_callstack_top); /* XXX: may now fail */
duk_hthread_callstack_shrink_check(thr);
thr->valstack_bottom = thr->valstack + entry_valstack_bottom_index;
@ -1645,7 +1634,7 @@ DUK_LOCAL void duk__handle_call_inner(duk_hthread *thr,
DUK_ASSERT(thr->catchstack_top == entry_catchstack_top); /* no need to unwind */
DUK_ASSERT(thr->callstack_top == entry_callstack_top + 1);
duk_hthread_callstack_unwind(thr, entry_callstack_top);
duk_hthread_callstack_unwind_norz(thr, entry_callstack_top);
duk_hthread_callstack_shrink_check(thr);
thr->valstack_bottom = thr->valstack + entry_valstack_bottom_index;
@ -1700,9 +1689,12 @@ DUK_LOCAL void duk__handle_call_inner(duk_hthread *thr,
DUK_HEAP_SWITCH_THREAD(thr->heap, entry_curr_thread); /* may be NULL */
thr->state = (duk_uint8_t) entry_thread_state;
/* Disabled assert: triggered with some torture tests. */
#if 0
DUK_ASSERT((thr->state == DUK_HTHREAD_STATE_INACTIVE && thr->heap->curr_thread == NULL) || /* first call */
(thr->state == DUK_HTHREAD_STATE_INACTIVE && thr->heap->curr_thread != NULL) || /* other call */
(thr->state == DUK_HTHREAD_STATE_RUNNING && thr->heap->curr_thread == thr)); /* current thread */
#endif
thr->heap->call_recursion_depth = entry_call_recursion_depth;
@ -1715,7 +1707,7 @@ DUK_LOCAL void duk__handle_call_inner(duk_hthread *thr,
* on every return should have no ill effect.
*/
#if defined(DUK_USE_DEBUGGER_SUPPORT)
if (DUK_HEAP_IS_DEBUGGER_ATTACHED(thr->heap)) {
if (duk_debug_is_attached(thr->heap)) {
DUK_DD(DUK_DDPRINT("returning with debugger enabled, force interrupt"));
DUK_ASSERT(thr->interrupt_counter <= thr->interrupt_init);
thr->interrupt_init -= thr->interrupt_counter;
@ -1728,6 +1720,14 @@ DUK_LOCAL void duk__handle_call_inner(duk_hthread *thr,
duk__interrupt_fixup(thr, entry_curr_thread);
#endif
/* Restored by success path. */
DUK_ASSERT(thr->heap->call_recursion_depth == entry_call_recursion_depth);
DUK_ASSERT(thr->ptr_curr_pc == entry_ptr_curr_pc);
DUK_ASSERT_LJSTATE_UNSET(thr->heap);
DUK_REFZERO_CHECK_FAST(thr);
return;
thread_state_error:
@ -1759,6 +1759,7 @@ DUK_LOCAL void duk__handle_call_error(duk_hthread *thr,
* the error here.
*/
DUK_ASSERT(thr->heap->lj.type == DUK_LJ_TYPE_THROW);
DUK_ASSERT_LJSTATE_SET(thr->heap);
DUK_ASSERT(thr->callstack_top >= entry_callstack_top);
DUK_ASSERT(thr->catchstack_top >= entry_catchstack_top);
@ -1780,9 +1781,9 @@ DUK_LOCAL void duk__handle_call_error(duk_hthread *thr,
* scopes; this is a sandboxing issue, described in:
* https://github.com/svaarala/duktape/issues/476
*/
duk_hthread_catchstack_unwind(thr, entry_catchstack_top);
duk_hthread_catchstack_unwind_norz(thr, entry_catchstack_top);
duk_hthread_catchstack_shrink_check(thr);
duk_hthread_callstack_unwind(thr, entry_callstack_top);
duk_hthread_callstack_unwind_norz(thr, entry_callstack_top);
duk_hthread_callstack_shrink_check(thr);
thr->valstack_bottom = thr->valstack + entry_valstack_bottom_index;
@ -1833,9 +1834,12 @@ DUK_LOCAL void duk__handle_call_error(duk_hthread *thr,
DUK_HEAP_SWITCH_THREAD(thr->heap, entry_curr_thread); /* may be NULL */
thr->state = (duk_uint8_t) entry_thread_state;
/* Disabled assert: triggered with some torture tests. */
#if 0
DUK_ASSERT((thr->state == DUK_HTHREAD_STATE_INACTIVE && thr->heap->curr_thread == NULL) || /* first call */
(thr->state == DUK_HTHREAD_STATE_INACTIVE && thr->heap->curr_thread != NULL) || /* other call */
(thr->state == DUK_HTHREAD_STATE_RUNNING && thr->heap->curr_thread == thr)); /* current thread */
#endif
thr->heap->call_recursion_depth = entry_call_recursion_depth;
@ -1848,7 +1852,7 @@ DUK_LOCAL void duk__handle_call_error(duk_hthread *thr,
* on every return should have no ill effect.
*/
#if defined(DUK_USE_DEBUGGER_SUPPORT)
if (DUK_HEAP_IS_DEBUGGER_ATTACHED(thr->heap)) {
if (duk_debug_is_attached(thr->heap)) {
DUK_DD(DUK_DDPRINT("returning with debugger enabled, force interrupt"));
DUK_ASSERT(thr->interrupt_counter <= thr->interrupt_init);
thr->interrupt_init -= thr->interrupt_counter;
@ -1860,6 +1864,21 @@ DUK_LOCAL void duk__handle_call_error(duk_hthread *thr,
#if defined(DUK_USE_INTERRUPT_COUNTER) && defined(DUK_USE_DEBUG)
duk__interrupt_fixup(thr, entry_curr_thread);
#endif
/* Error handling complete, remove side effect protections and
* process pending finalizers.
*/
#if defined(DUK_USE_ASSERTIONS)
DUK_ASSERT(thr->heap->error_not_allowed == 1);
thr->heap->error_not_allowed = 0;
#endif
DUK_ASSERT(thr->heap->pf_prevent_count > 0);
thr->heap->pf_prevent_count--;
DUK_DD(DUK_DDPRINT("call error handled, pf_prevent_count updated to %ld", (long) thr->heap->pf_prevent_count));
DUK_ASSERT_LJSTATE_UNSET(thr->heap);
DUK_REFZERO_CHECK_SLOW(thr);
}
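The error path flushes with DUK_REFZERO_CHECK_SLOW() while hot success paths use the _FAST flavor. A hypothetical sketch of the split (illustrative names; the real definitions live in duk_refcount.h, assuming a pending-work list reachable from the heap):
#define SKETCH_REFZERO_CHECK_FAST(thr) do { \
        if ((thr)->heap->refzero_list != NULL) { \
                duk_refzero_check_slow((thr)); /* out-of-line worker */ \
        } \
    } while (0)
#define SKETCH_REFZERO_CHECK_SLOW(thr) do { \
        duk_refzero_check_slow((thr)); /* plain call: smaller code for cold paths */ \
    } while (0)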
/*
@ -1957,12 +1976,6 @@ DUK_INTERNAL duk_int_t duk_handle_safe_call(duk_hthread *thr,
entry_callstack_top,
entry_catchstack_top);
/* Longjmp state is kept clean in success path */
DUK_ASSERT(thr->heap->lj.type == DUK_LJ_TYPE_UNKNOWN);
DUK_ASSERT(thr->heap->lj.iserror == 0);
DUK_ASSERT(DUK_TVAL_IS_UNDEFINED(&thr->heap->lj.value1));
DUK_ASSERT(DUK_TVAL_IS_UNDEFINED(&thr->heap->lj.value2));
/* Note: either pointer may be NULL (at entry), so don't assert */
thr->heap->lj.jmpbuf_ptr = old_jmpbuf_ptr;
@ -1982,12 +1995,6 @@ DUK_INTERNAL duk_int_t duk_handle_safe_call(duk_hthread *thr,
entry_catchstack_top,
old_jmpbuf_ptr);
/* Longjmp state is cleaned up by error handling */
DUK_ASSERT(thr->heap->lj.type == DUK_LJ_TYPE_UNKNOWN);
DUK_ASSERT(thr->heap->lj.iserror == 0);
DUK_ASSERT(DUK_TVAL_IS_UNDEFINED(&thr->heap->lj.value1));
DUK_ASSERT(DUK_TVAL_IS_UNDEFINED(&thr->heap->lj.value2));
retval = DUK_EXEC_ERROR;
}
#if defined(DUK_USE_CPP_EXCEPTIONS)
@ -2032,6 +2039,8 @@ DUK_INTERNAL duk_int_t duk_handle_safe_call(duk_hthread *thr,
DUK_ASSERT(thr->heap->lj.jmpbuf_ptr == old_jmpbuf_ptr); /* success/error path both do this */
DUK_ASSERT_LJSTATE_UNSET(thr->heap);
duk__handle_safe_call_shared(thr,
idx_retbase,
num_stack_rets,
@ -2146,6 +2155,10 @@ DUK_LOCAL void duk__handle_safe_call_inner(duk_hthread *thr,
DUK_ASSERT(thr->callstack_top == entry_callstack_top);
duk__safe_call_adjust_valstack(thr, idx_retbase, num_stack_rets, rc);
DUK_ASSERT_LJSTATE_UNSET(thr->heap);
DUK_REFZERO_CHECK_FAST(thr);
return;
thread_state_error:
@ -2180,6 +2193,7 @@ DUK_LOCAL void duk__handle_safe_call_error(duk_hthread *thr,
* the error here.
*/
DUK_ASSERT(thr->heap->lj.type == DUK_LJ_TYPE_THROW);
DUK_ASSERT_LJSTATE_SET(thr->heap);
DUK_ASSERT(thr->callstack_top >= entry_callstack_top);
DUK_ASSERT(thr->catchstack_top >= entry_catchstack_top);
@ -2188,9 +2202,9 @@ DUK_LOCAL void duk__handle_safe_call_error(duk_hthread *thr,
DUK_ASSERT(thr->catchstack_top >= entry_catchstack_top);
DUK_ASSERT(thr->callstack_top >= entry_callstack_top);
duk_hthread_catchstack_unwind(thr, entry_catchstack_top);
duk_hthread_catchstack_unwind_norz(thr, entry_catchstack_top);
duk_hthread_catchstack_shrink_check(thr);
duk_hthread_callstack_unwind(thr, entry_callstack_top);
duk_hthread_callstack_unwind_norz(thr, entry_callstack_top);
duk_hthread_callstack_shrink_check(thr);
thr->valstack_bottom = thr->valstack + entry_valstack_bottom_index;
@ -2222,6 +2236,21 @@ DUK_LOCAL void duk__handle_safe_call_error(duk_hthread *thr,
thr->heap->lj.iserror = 0;
DUK_TVAL_SET_UNDEFINED_UPDREF(thr, &thr->heap->lj.value1); /* side effects */
DUK_TVAL_SET_UNDEFINED_UPDREF(thr, &thr->heap->lj.value2); /* side effects */
/* Error handling complete, remove side effect protections and
* process pending finalizers.
*/
#if defined(DUK_USE_ASSERTIONS)
DUK_ASSERT(thr->heap->error_not_allowed == 1);
thr->heap->error_not_allowed = 0;
#endif
DUK_ASSERT(thr->heap->pf_prevent_count > 0);
thr->heap->pf_prevent_count--;
DUK_DD(DUK_DDPRINT("safe call error handled, pf_prevent_count updated to %ld", (long) thr->heap->pf_prevent_count));
DUK_ASSERT_LJSTATE_UNSET(thr->heap);
DUK_REFZERO_CHECK_SLOW(thr);
}
DUK_LOCAL void duk__handle_safe_call_shared(duk_hthread *thr,
@ -2268,6 +2297,8 @@ DUK_LOCAL void duk__handle_safe_call_shared(duk_hthread *thr,
#if defined(DUK_USE_INTERRUPT_COUNTER) && defined(DUK_USE_DEBUG)
duk__interrupt_fixup(thr, entry_curr_thread);
#endif
DUK_ASSERT_LJSTATE_UNSET(thr->heap);
}
/*
@ -2503,11 +2534,11 @@ DUK_INTERNAL duk_bool_t duk_handle_ecma_call_setup(duk_hthread *thr,
break;
}
}
duk_hthread_catchstack_unwind(thr, i_stk + 1);
duk_hthread_catchstack_unwind_norz(thr, i_stk + 1);
/* Unwind the topmost callstack entry before reusing it */
DUK_ASSERT(thr->callstack_top > 0);
duk_hthread_callstack_unwind(thr, thr->callstack_top - 1);
duk_hthread_callstack_unwind_norz(thr, thr->callstack_top - 1);
/* Then reuse the unwound activation; callstack was not shrunk so there is always space */
DUK_ASSERT(thr->callstack_top < thr->callstack_size);
@ -2736,5 +2767,6 @@ DUK_INTERNAL duk_bool_t duk_handle_ecma_call_setup(duk_hthread *thr,
* the topmost activation.
*/
DUK_REFZERO_CHECK_FAST(thr);
return 1;
}
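From the public API all of this machinery surfaces as the DUK_EXEC_SUCCESS/DUK_EXEC_ERROR contract of protected calls. A hedged usage sketch (my_native_func is hypothetical):
duk_push_c_function(ctx, my_native_func, 0 /*nargs*/);
if (duk_pcall(ctx, 0 /*nargs*/) != DUK_EXEC_SUCCESS) {
        /* Error value is at the stack top; by this point the error
         * handling above has already released the side effect
         * protections and processed pending finalizers.
         */
        printf("error: %s\n", duk_safe_to_string(ctx, -1));
}
duk_pop(ctx);  /* pop the result or the error value */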

2
src-input/duk_js_compiler.c

@ -1998,6 +1998,8 @@ DUK_LOCAL duk_bool_t duk__const_needs_refcount(duk_compiler_ctx *comp_ctx, duk_r
duk_pop(ctx);
return ret;
#else
DUK_UNREF(comp_ctx);
DUK_UNREF(rc);
DUK_ASSERT((rc & DUK__CONST_MARKER) == 0); /* caller removes const marker */
return 0;
#endif
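DUK_UNREF() exists to silence unused-argument warnings when an argument is consumed only in some configurations, as here; its conventional shape is a cast to void (a sketch, the real definition lives in Duktape's configuration layer):
#define DUK_UNREF(x) do { (void) (x); } while (0)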

79
src-input/duk_js_executor.c

@ -884,8 +884,8 @@ DUK_LOCAL void duk__handle_catch(duk_hthread *thr, duk_size_t cat_idx, duk_tval
duk__set_catcher_regs(thr, cat_idx, tv_val_unstable, lj_type);
duk_hthread_catchstack_unwind(thr, cat_idx + 1);
duk_hthread_callstack_unwind(thr, thr->catchstack[cat_idx].callstack_index + 1);
duk_hthread_catchstack_unwind_norz(thr, cat_idx + 1);
duk_hthread_callstack_unwind_norz(thr, thr->catchstack[cat_idx].callstack_index + 1);
DUK_ASSERT(thr->callstack_top >= 1);
DUK_ASSERT(thr->callstack_curr != NULL);
@ -936,6 +936,12 @@ DUK_LOCAL void duk__handle_catch(duk_hthread *thr, duk_size_t cat_idx, duk_tval
DUK_ASSERT(DUK_ACT_GET_FUNC(act) != NULL);
DUK_UNREF(act); /* unreferenced without assertions */
/* XXX: If an out-of-memory error happens here, longjmp state asserts
* will currently be triggered and a try-catch will fail to catch it.
* That's not fatal for sandboxing (C API protected calls are what
* matters), and script catch code can immediately throw anyway
* for almost any operation.
*/
new_env = duk_hdecenv_alloc(thr,
DUK_HOBJECT_FLAG_EXTENSIBLE |
DUK_HOBJECT_CLASS_AS_FLAGS(DUK_HOBJECT_CLASS_DECENV));
@ -950,6 +956,7 @@ DUK_LOCAL void duk__handle_catch(duk_hthread *thr, duk_size_t cat_idx, duk_tval
* record regbases etc.
*/
/* XXX: duk_xdef_prop() may cause an out-of-memory, see above. */
DUK_ASSERT(thr->catchstack[cat_idx].h_varname != NULL);
duk_push_hstring(ctx, thr->catchstack[cat_idx].h_varname);
duk_push_tval(ctx, thr->valstack + thr->catchstack[cat_idx].idx_base);
@ -982,8 +989,8 @@ DUK_LOCAL void duk__handle_finally(duk_hthread *thr, duk_size_t cat_idx, duk_tva
duk__set_catcher_regs(thr, cat_idx, tv_val_unstable, lj_type);
duk_hthread_catchstack_unwind(thr, cat_idx + 1); /* cat_idx catcher is kept, even for finally */
duk_hthread_callstack_unwind(thr, thr->catchstack[cat_idx].callstack_index + 1);
duk_hthread_catchstack_unwind_norz(thr, cat_idx + 1); /* cat_idx catcher is kept, even for finally */
duk_hthread_callstack_unwind_norz(thr, thr->catchstack[cat_idx].callstack_index + 1);
DUK_ASSERT(thr->callstack_top >= 1);
DUK_ASSERT(thr->callstack_curr != NULL);
@ -1017,7 +1024,7 @@ DUK_LOCAL void duk__handle_label(duk_hthread *thr, duk_size_t cat_idx, duk_small
act->curr_pc = thr->catchstack[cat_idx].pc_base + (lj_type == DUK_LJ_TYPE_CONTINUE ? 1 : 0);
act = NULL; /* invalidated */
duk_hthread_catchstack_unwind(thr, cat_idx + 1); /* keep label catcher */
duk_hthread_catchstack_unwind_norz(thr, cat_idx + 1); /* keep label catcher */
/* no need to unwind callstack */
/* valstack should not need changes */
@ -1046,7 +1053,7 @@ DUK_LOCAL void duk__handle_yield(duk_hthread *thr, duk_hthread *resumer, duk_siz
tv1 = resumer->valstack + resumer->callstack[act_idx].idx_retval; /* return value from Duktape.Thread.resume() */
DUK_TVAL_SET_TVAL_UPDREF(thr, tv1, tv_val_unstable); /* side effects */
duk_hthread_callstack_unwind(resumer, act_idx + 1); /* unwind to 'resume' caller */
duk_hthread_callstack_unwind_norz(resumer, act_idx + 1); /* unwind to 'resume' caller */
/* no need to unwind catchstack */
duk__reconfig_valstack_ecma_return(resumer, act_idx);
@ -1173,7 +1180,7 @@ duk_small_uint_t duk__handle_longjmp(duk_hthread *thr,
tv2 = &thr->heap->lj.value1;
DUK_TVAL_SET_TVAL_UPDREF(thr, tv, tv2); /* side effects */
duk_hthread_callstack_unwind(resumee, act_idx + 1); /* unwind to 'yield' caller */
duk_hthread_callstack_unwind_norz(resumee, act_idx + 1); /* unwind to 'yield' caller */
/* no need to unwind catchstack */
@ -1373,9 +1380,9 @@ duk_small_uint_t duk__handle_longjmp(duk_hthread *thr,
* final catcher unwind everything
*/
#if 0
duk_hthread_catchstack_unwind(thr, (cat - thr->catchstack) + 1); /* leave 'cat' as top catcher (also works if catchstack exhausted) */
duk_hthread_callstack_unwind(thr, entry_callstack_index + 1);
duk_hthread_catchstack_unwind_norz(thr, (cat - thr->catchstack) + 1); /* leave 'cat' as top catcher (also works if catchstack exhausted) */
duk_hthread_callstack_unwind_norz(thr, entry_callstack_index + 1);
DUK_REFZERO_CHECK_SLOW(thr);
#endif
DUK_D(DUK_DPRINT("-> throw propagated up to entry level, rethrow and exit bytecode executor"));
retval = DUK__LONGJMP_RETHROW;
@ -1643,8 +1650,8 @@ DUK_LOCAL duk_small_uint_t duk__handle_return(duk_hthread *thr,
(long) (thr->callstack_curr - 1)->idx_retval,
(duk_tval *) (thr->valstack + (thr->callstack_curr - 1)->idx_retval)));
duk_hthread_catchstack_unwind(thr, new_cat_top); /* leave 'cat' as top catcher (also works if catchstack exhausted) */
duk_hthread_callstack_unwind(thr, thr->callstack_top - 1);
duk_hthread_catchstack_unwind_norz(thr, new_cat_top); /* leave 'cat' as top catcher (also works if catchstack exhausted) */
duk_hthread_callstack_unwind_norz(thr, thr->callstack_top - 1);
duk__reconfig_valstack_ecma_return(thr, thr->callstack_top - 1);
DUK_DD(DUK_DDPRINT("-> return not intercepted, restart execution in caller"));
@ -1761,8 +1768,7 @@ DUK_LOCAL void duk__interrupt_handle_debugger(duk_hthread *thr, duk_bool_t *out_
(line != thr->heap->dbg_step_startline)) {
DUK_D(DUK_DPRINT("STEP STATE TRIGGERED PAUSE at line %ld",
(long) line));
DUK_HEAP_SET_PAUSED(thr->heap);
duk_debug_set_paused(thr->heap);
}
/* Check for breakpoints only on line transition.
@ -1788,8 +1794,7 @@ DUK_LOCAL void duk__interrupt_handle_debugger(duk_hthread *thr, duk_bool_t *out_
if (act->prev_line != bp->line && line == bp->line) {
DUK_D(DUK_DPRINT("BREAKPOINT TRIGGERED at %!O:%ld",
(duk_heaphdr *) bp->filename, (long) bp->line));
DUK_HEAP_SET_PAUSED(thr->heap);
duk_debug_set_paused(thr->heap);
}
}
} else {
@ -1876,7 +1881,7 @@ DUK_LOCAL void duk__interrupt_handle_debugger(duk_hthread *thr, duk_bool_t *out_
* above, so we must recheck attach status.
*/
if (DUK_HEAP_IS_DEBUGGER_ATTACHED(thr->heap)) {
if (duk_debug_is_attached(thr->heap)) {
act = thr->callstack_curr; /* relookup, may have changed */
DUK_ASSERT(act != NULL);
if (act->flags & DUK_ACT_FLAG_BREAKPOINT_ACTIVE ||
@ -2130,7 +2135,7 @@ DUK_LOCAL void duk__executor_recheck_debugger(duk_hthread *thr, duk_activation *
(thr->heap->dbg_step_thread != thr ||
thr->heap->dbg_step_csindex != thr->callstack_top - 1)) {
DUK_D(DUK_DPRINT("STEP INTO ACTIVE, FORCE PAUSED"));
DUK_HEAP_SET_PAUSED(thr->heap);
duk_debug_set_paused(thr->heap);
}
/* Force interrupt right away if we're paused or in "checked mode".
@ -2319,15 +2324,27 @@ DUK_LOCAL void duk__handle_executor_error(duk_heap *heap,
lj_ret = duk__handle_longjmp(heap->curr_thread, entry_thread, entry_callstack_top);
/* Error handling complete, remove side effect protections.
*/
#if defined(DUK_USE_ASSERTIONS)
DUK_ASSERT(heap->error_not_allowed == 1);
heap->error_not_allowed = 0;
#endif
DUK_ASSERT(heap->pf_prevent_count > 0);
heap->pf_prevent_count--;
DUK_DD(DUK_DDPRINT("executor error handled, pf_prevent_count updated to %ld", (long) heap->pf_prevent_count));
if (lj_ret == DUK__LONGJMP_RESTART) {
/* Restart bytecode execution, possibly with a changed thread. */
;
DUK_REFZERO_CHECK_SLOW(heap->curr_thread);
} else {
/* Rethrow error to calling state. */
DUK_ASSERT(lj_ret == DUK__LONGJMP_RETHROW);
/* If an error is propagated, don't run refzero checks here.
* The next catcher will deal with that. Pf_prevent_count
* will be re-bumped by the longjmp.
*/
/* Longjmp handling has restored jmpbuf_ptr. */
DUK_ASSERT(heap->lj.jmpbuf_ptr == entry_jmpbuf_ptr);
DUK_ASSERT(lj_ret == DUK__LONGJMP_RETHROW); /* Rethrow error to calling state. */
DUK_ASSERT(heap->lj.jmpbuf_ptr == entry_jmpbuf_ptr); /* Longjmp handling has restored jmpbuf_ptr. */
/* Thread may have changed, e.g. YIELD converted to THROW. */
duk_err_longjmp(heap->curr_thread);
@ -2540,6 +2557,8 @@ DUK_LOCAL DUK_NOINLINE DUK_HOT void duk__js_execute_bytecode_inner(duk_hthread *
DUK_ASSERT(DUK_ACT_GET_FUNC(thr->callstack_curr) != NULL);
DUK_ASSERT(DUK_HOBJECT_IS_COMPFUNC(DUK_ACT_GET_FUNC(thr->callstack_curr)));
DUK_GC_TORTURE(thr->heap);
thr->ptr_curr_pc = &curr_pc;
/* Relookup and initialize dispatch loop variables. Debugger check. */
@ -2561,7 +2580,7 @@ DUK_LOCAL DUK_NOINLINE DUK_HOT void duk__js_execute_bytecode_inner(duk_hthread *
DUK_ASSERT(consts != NULL);
#if defined(DUK_USE_DEBUGGER_SUPPORT)
if (DUK_HEAP_IS_DEBUGGER_ATTACHED(thr->heap) && !thr->heap->dbg_processing) {
if (duk_debug_is_attached(thr->heap) && !thr->heap->dbg_processing) {
duk__executor_recheck_debugger(thr, act, fun);
act = thr->callstack_curr; /* relookup after side effects (no side effects currently however) */
DUK_ASSERT(act != NULL);
@ -3924,6 +3943,7 @@ DUK_LOCAL DUK_NOINLINE DUK_HOT void duk__js_execute_bytecode_inner(duk_hthread *
thr->valstack_top++;
DUK__RETURN_SHARED();
}
/* This will be unused without refcounting. */
case DUK_OP_RETCONST: {
duk_tval *tv;
@ -3941,6 +3961,7 @@ DUK_LOCAL DUK_NOINLINE DUK_HOT void duk__js_execute_bytecode_inner(duk_hthread *
tv = DUK__CONSTP_BC(ins);
DUK_TVAL_SET_TVAL(thr->valstack_top, tv);
#if defined(DUK_USE_REFERENCE_COUNTING)
/* Without refcounting only RETCONSTN is used. */
DUK_ASSERT(!DUK_TVAL_IS_HEAP_ALLOCATED(tv)); /* no INCREF for this constant */
#endif
thr->valstack_top++;
@ -4380,7 +4401,8 @@ DUK_LOCAL DUK_NOINLINE DUK_HOT void duk__js_execute_bytecode_inner(duk_hthread *
duk_push_tval(ctx, thr->valstack + cat->idx_base);
duk_err_setup_heap_ljstate(thr, (duk_small_int_t) cont_type);
duk_err_setup_ljstate1(thr, (duk_small_int_t) cont_type, thr->valstack + cat->idx_base);
/* No debugger Throw notify check on purpose (rethrow). */
DUK_ASSERT(thr->heap->lj.jmpbuf_ptr != NULL); /* always in executor */
duk_err_longjmp(thr);
@ -4417,7 +4439,10 @@ DUK_LOCAL DUK_NOINLINE DUK_HOT void duk__js_execute_bytecode_inner(duk_hthread *
(duk_tval *) duk_get_tval(ctx, -1)));
#endif
duk_err_setup_heap_ljstate(thr, DUK_LJ_TYPE_THROW);
duk_err_setup_ljstate1(thr, DUK_LJ_TYPE_THROW, DUK_GET_TVAL_NEGIDX(ctx, -1));
#if defined(DUK_USE_DEBUGGER_SUPPORT)
duk_err_check_debugger_integration(thr);
#endif
DUK_ASSERT(thr->heap->lj.jmpbuf_ptr != NULL); /* always in executor */
duk_err_longjmp(thr);
@ -4954,7 +4979,7 @@ DUK_LOCAL DUK_NOINLINE DUK_HOT void duk__js_execute_bytecode_inner(duk_hthread *
* from precompiled bytecode.
*/
#if defined(DUK_USE_DEBUGGER_SUPPORT)
if (DUK_HEAP_IS_DEBUGGER_ATTACHED(thr->heap)) {
if (duk_debug_is_attached(thr->heap)) {
DUK_D(DUK_DPRINT("DEBUGGER statement encountered, halt execution"));
DUK__SYNC_AND_NULL_CURR_PC();
duk_debug_halt_execution(thr, 1 /*use_prev_pc*/);

698
src-input/duk_refcount.h

@ -0,0 +1,698 @@
/*
* Reference counting helper macros. The macros take a thread argument
* and must thus always be executed in a specific thread context. The
* thread argument is not really needed anymore: DECREF can operate with
* a heap pointer only, and INCREF needs neither.
*/
#if !defined(DUK_REFCOUNT_H_INCLUDED)
#define DUK_REFCOUNT_H_INCLUDED
#if defined(DUK_USE_REFERENCE_COUNTING)
#if defined(DUK_USE_ROM_OBJECTS)
/* With ROM objects "needs refcount update" is true when the value is
* heap allocated and is not a ROM object.
*/
/* XXX: double evaluation for 'tv' argument. */
#define DUK_TVAL_NEEDS_REFCOUNT_UPDATE(tv) \
(DUK_TVAL_IS_HEAP_ALLOCATED((tv)) && !DUK_HEAPHDR_HAS_READONLY(DUK_TVAL_GET_HEAPHDR((tv))))
#define DUK_HEAPHDR_NEEDS_REFCOUNT_UPDATE(h) (!DUK_HEAPHDR_HAS_READONLY((h)))
#else /* DUK_USE_ROM_OBJECTS */
/* Without ROM objects "needs refcount update" == is heap allocated. */
#define DUK_TVAL_NEEDS_REFCOUNT_UPDATE(tv) DUK_TVAL_IS_HEAP_ALLOCATED((tv))
#define DUK_HEAPHDR_NEEDS_REFCOUNT_UPDATE(h) 1
#endif /* DUK_USE_ROM_OBJECTS */
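/* The double-evaluation XXX above matters because the ROM variant
 * expands 'tv' twice, so an argument with side effects would run them
 * twice; e.g. DUK_TVAL_NEEDS_REFCOUNT_UPDATE(tv++) would advance 'tv'
 * by two slots (hypothetical misuse). Call sites therefore pass a
 * plain, side effect free expression.
 */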
/* Fast variants, inline refcount operations except for refzero handling.
* Can be used explicitly when speed is always more important than size.
* For a good compiler and a single file build, these are basically the
* same as a forced inline.
*/
#define DUK_TVAL_INCREF_FAST(thr,tv) do { \
duk_tval *duk__tv = (tv); \
DUK_ASSERT(duk__tv != NULL); \
if (DUK_TVAL_NEEDS_REFCOUNT_UPDATE(duk__tv)) { \
duk_heaphdr *duk__h = DUK_TVAL_GET_HEAPHDR(duk__tv); \
DUK_ASSERT(duk__h != NULL); \
DUK_ASSERT(DUK_HEAPHDR_HTYPE_VALID(duk__h)); \
DUK_HEAPHDR_PREINC_REFCOUNT(duk__h); \
DUK_ASSERT(DUK_HEAPHDR_GET_REFCOUNT(duk__h) != 0); /* No wrapping. */ \
} \
} while (0)
#define DUK_TVAL_DECREF_FAST(thr,tv) do { \
duk_tval *duk__tv = (tv); \
DUK_ASSERT(duk__tv != NULL); \
if (DUK_TVAL_NEEDS_REFCOUNT_UPDATE(duk__tv)) { \
duk_heaphdr *duk__h = DUK_TVAL_GET_HEAPHDR(duk__tv); \
DUK_ASSERT(duk__h != NULL); \
DUK_ASSERT(DUK_HEAPHDR_HTYPE_VALID(duk__h)); \
DUK_ASSERT(DUK_HEAPHDR_GET_REFCOUNT(duk__h) > 0); \
if (DUK_HEAPHDR_PREDEC_REFCOUNT(duk__h) == 0) { \
duk_heaphdr_refzero((thr), duk__h); \
} \
} \
} while (0)
#define DUK_TVAL_DECREF_NORZ_FAST(thr,tv) do { \
duk_tval *duk__tv = (tv); \
DUK_ASSERT(duk__tv != NULL); \
if (DUK_TVAL_NEEDS_REFCOUNT_UPDATE(duk__tv)) { \
duk_heaphdr *duk__h = DUK_TVAL_GET_HEAPHDR(duk__tv); \
DUK_ASSERT(duk__h != NULL); \
DUK_ASSERT(DUK_HEAPHDR_HTYPE_VALID(duk__h)); \
DUK_ASSERT(DUK_HEAPHDR_GET_REFCOUNT(duk__h) > 0); \
if (DUK_HEAPHDR_PREDEC_REFCOUNT(duk__h) == 0) { \
duk_heaphdr_refzero_norz((thr), duk__h); \
} \
} \
} while (0)
#define DUK_HEAPHDR_INCREF_FAST(thr,h) do { \
duk_heaphdr *duk__h = (duk_heaphdr *) (h); \
DUK_ASSERT(duk__h != NULL); \
DUK_ASSERT(DUK_HEAPHDR_HTYPE_VALID(duk__h)); \
if (DUK_HEAPHDR_NEEDS_REFCOUNT_UPDATE(duk__h)) { \
DUK_HEAPHDR_PREINC_REFCOUNT(duk__h); \
DUK_ASSERT(DUK_HEAPHDR_GET_REFCOUNT(duk__h) != 0); /* No wrapping. */ \
} \
} while (0)
#define DUK_HEAPHDR_DECREF_FAST_RAW(thr,h,rzcall,rzcast) do { \
duk_heaphdr *duk__h = (duk_heaphdr *) (h); \
DUK_ASSERT(duk__h != NULL); \
DUK_ASSERT(DUK_HEAPHDR_HTYPE_VALID(duk__h)); \
DUK_ASSERT(DUK_HEAPHDR_GET_REFCOUNT(duk__h) > 0); \
if (DUK_HEAPHDR_NEEDS_REFCOUNT_UPDATE(duk__h)) { \
if (DUK_HEAPHDR_PREDEC_REFCOUNT(duk__h) == 0) { \
(rzcall)((thr), (rzcast) duk__h); \
} \
} \
} while (0)
#define DUK_HEAPHDR_DECREF_FAST(thr,h) \
DUK_HEAPHDR_DECREF_FAST_RAW((thr),(h),duk_heaphdr_refzero,duk_heaphdr *)
#define DUK_HEAPHDR_DECREF_NORZ_FAST(thr,h) \
DUK_HEAPHDR_DECREF_FAST_RAW((thr),(h),duk_heaphdr_refzero_norz,duk_heaphdr *)
/* Slow variants, call to a helper to reduce code size.
* Can be used explicitly when size is always more important than speed.
*/
#define DUK_TVAL_INCREF_SLOW(thr,tv) do { duk_tval_incref((tv)); } while (0)
#define DUK_TVAL_DECREF_SLOW(thr,tv) do { duk_tval_decref((thr), (tv)); } while (0)
#define DUK_TVAL_DECREF_NORZ_SLOW(thr,tv) do { duk_tval_decref_norz((thr), (tv)); } while (0)
#define DUK_HEAPHDR_INCREF_SLOW(thr,h) do { duk_heaphdr_incref((duk_heaphdr *) (h)); } while (0)
#define DUK_HEAPHDR_DECREF_SLOW(thr,h) do { duk_heaphdr_decref((thr), (duk_heaphdr *) (h)); } while (0)
#define DUK_HEAPHDR_DECREF_NORZ_SLOW(thr,h) do { duk_heaphdr_decref_norz((thr), (duk_heaphdr *) (h)); } while (0)
#define DUK_HSTRING_INCREF_SLOW(thr,h) do { duk_heaphdr_incref((duk_heaphdr *) (h)); } while (0)
#define DUK_HSTRING_DECREF_SLOW(thr,h) do { duk_heaphdr_decref((thr), (duk_heaphdr *) (h)); } while (0)
#define DUK_HSTRING_DECREF_NORZ_SLOW(thr,h) do { duk_heaphdr_decref_norz((thr), (duk_heaphdr *) (h)); } while (0)
#define DUK_HBUFFER_INCREF_SLOW(thr,h) do { duk_heaphdr_incref((duk_heaphdr *) (h)); } while (0)
#define DUK_HBUFFER_DECREF_SLOW(thr,h) do { duk_heaphdr_decref((thr), (duk_heaphdr *) (h)); } while (0)
#define DUK_HBUFFER_DECREF_NORZ_SLOW(thr,h) do { duk_heaphdr_decref_norz((thr), (duk_heaphdr *) (h)); } while (0)
#define DUK_HOBJECT_INCREF_SLOW(thr,h) do { duk_heaphdr_incref((duk_heaphdr *) (h)); } while (0)
#define DUK_HOBJECT_DECREF_SLOW(thr,h) do { duk_heaphdr_decref((thr), (duk_heaphdr *) (h)); } while (0)
#define DUK_HOBJECT_DECREF_NORZ_SLOW(thr,h) do { duk_heaphdr_decref_norz((thr), (duk_heaphdr *) (h)); } while (0)
/* Default variants. Selection depends on speed/size preference.
* Concretely: with gcc 4.8.1 -Os x64 the difference in final binary
* is about +1kB for _FAST variants.
*/
#if defined(DUK_USE_FAST_REFCOUNT_DEFAULT)
/* XXX: It would be nice to specialize for specific duk_hobject subtypes
* but current refzero queue handling prevents that.
*/
#define DUK_TVAL_INCREF(thr,tv) DUK_TVAL_INCREF_FAST((thr),(tv))
#define DUK_TVAL_DECREF(thr,tv) DUK_TVAL_DECREF_FAST((thr),(tv))
#define DUK_TVAL_DECREF_NORZ(thr,tv) DUK_TVAL_DECREF_NORZ_FAST((thr),(tv))
#define DUK_HEAPHDR_INCREF(thr,h) DUK_HEAPHDR_INCREF_FAST((thr),(h))
#define DUK_HEAPHDR_DECREF(thr,h) DUK_HEAPHDR_DECREF_FAST_RAW((thr),(h),duk_heaphdr_refzero,duk_heaphdr *)
#define DUK_HEAPHDR_DECREF_NORZ(thr,h) DUK_HEAPHDR_DECREF_FAST_RAW((thr),(h),duk_heaphdr_refzero_norz,duk_heaphdr *)
#define DUK_HSTRING_INCREF(thr,h) DUK_HEAPHDR_INCREF((thr),(duk_heaphdr *) (h))
#define DUK_HSTRING_DECREF(thr,h) DUK_HEAPHDR_DECREF_FAST_RAW((thr),(h),duk_hstring_refzero,duk_hstring *)
#define DUK_HSTRING_DECREF_NORZ(thr,h) DUK_HEAPHDR_DECREF_FAST_RAW((thr),(h),duk_hstring_refzero,duk_hstring *) /* no 'norz' variant */
#define DUK_HOBJECT_INCREF(thr,h) DUK_HEAPHDR_INCREF((thr),(duk_heaphdr *) (h))
#define DUK_HOBJECT_DECREF(thr,h) DUK_HEAPHDR_DECREF_FAST_RAW((thr),(h),duk_hobject_refzero,duk_hobject *)
#define DUK_HOBJECT_DECREF_NORZ(thr,h) DUK_HEAPHDR_DECREF_FAST_RAW((thr),(h),duk_hobject_refzero_norz,duk_hobject *)
#define DUK_HBUFFER_INCREF(thr,h) DUK_HEAPHDR_INCREF((thr),(duk_heaphdr *) (h))
#define DUK_HBUFFER_DECREF(thr,h) DUK_HEAPHDR_DECREF_FAST_RAW((thr),(h),duk_hbuffer_refzero,duk_hbuffer *)
#define DUK_HBUFFER_DECREF_NORZ(thr,h) DUK_HEAPHDR_DECREF_FAST_RAW((thr),(h),duk_hbuffer_refzero,duk_hbuffer *) /* no 'norz' variant */
#define DUK_HCOMPFUNC_INCREF(thr,h) DUK_HEAPHDR_INCREF((thr),(duk_heaphdr *) &(h)->obj)
#define DUK_HCOMPFUNC_DECREF(thr,h) DUK_HEAPHDR_DECREF_FAST_RAW((thr),(h),duk_hobject_refzero,duk_hobject *)
#define DUK_HCOMPFUNC_DECREF_NORZ(thr,h) DUK_HEAPHDR_DECREF_FAST_RAW((thr),(h),duk_hobject_refzero_norz,duk_hobject *)
#define DUK_HNATFUNC_INCREF(thr,h) DUK_HEAPHDR_INCREF((thr),(duk_heaphdr *) &(h)->obj)
#define DUK_HNATFUNC_DECREF(thr,h) DUK_HEAPHDR_DECREF_FAST_RAW((thr),(h),duk_hobject_refzero,duk_hobject *)
#define DUK_HNATFUNC_DECREF_NORZ(thr,h) DUK_HEAPHDR_DECREF_FAST_RAW((thr),(h),duk_hobject_refzero_norz,duk_hobject *)
#define DUK_HBUFOBJ_INCREF(thr,h) DUK_HEAPHDR_INCREF((thr),(duk_heaphdr *) &(h)->obj)
#define DUK_HBUFOBJ_DECREF(thr,h) DUK_HEAPHDR_DECREF_FAST_RAW((thr),(h),duk_hobject_refzero,duk_hobject *)
#define DUK_HBUFOBJ_DECREF_NORZ(thr,h) DUK_HEAPHDR_DECREF_FAST_RAW((thr),(h),duk_hobject_refzero_norz,duk_hobject *)
#define DUK_HTHREAD_INCREF(thr,h) DUK_HEAPHDR_INCREF((thr),(duk_heaphdr *) &(h)->obj)
#define DUK_HTHREAD_DECREF(thr,h) DUK_HEAPHDR_DECREF_FAST_RAW((thr),(h),duk_hobject_refzero,duk_hobject *)
#define DUK_HTHREAD_DECREF_NORZ(thr,h) DUK_HEAPHDR_DECREF_FAST_RAW((thr),(h),duk_hobject_refzero_norz,duk_hobject *)
#else
#define DUK_TVAL_INCREF(thr,tv) DUK_TVAL_INCREF_SLOW((thr),(tv))
#define DUK_TVAL_DECREF(thr,tv) DUK_TVAL_DECREF_SLOW((thr),(tv))
#define DUK_TVAL_DECREF_NORZ(thr,tv) DUK_TVAL_DECREF_NORZ_SLOW((thr),(tv))
#define DUK_HEAPHDR_INCREF(thr,h) DUK_HEAPHDR_INCREF_SLOW((thr),(h))
#define DUK_HEAPHDR_DECREF(thr,h) DUK_HEAPHDR_DECREF_SLOW((thr),(h))
#define DUK_HEAPHDR_DECREF_NORZ(thr,h) DUK_HEAPHDR_DECREF_NORZ_SLOW((thr),(h))
#define DUK_HSTRING_INCREF(thr,h) DUK_HEAPHDR_INCREF((thr),(duk_heaphdr *) (h))
#define DUK_HSTRING_DECREF(thr,h) DUK_HSTRING_DECREF_SLOW((thr),(h))
#define DUK_HSTRING_DECREF_NORZ(thr,h) DUK_HSTRING_DECREF_NORZ_SLOW((thr),(h))
#define DUK_HOBJECT_INCREF(thr,h) DUK_HEAPHDR_INCREF((thr),(duk_heaphdr *) (h))
#define DUK_HOBJECT_DECREF(thr,h) DUK_HOBJECT_DECREF_SLOW((thr),(h))
#define DUK_HOBJECT_DECREF_NORZ(thr,h) DUK_HOBJECT_DECREF_NORZ_SLOW((thr),(h))
#define DUK_HBUFFER_INCREF(thr,h) DUK_HEAPHDR_INCREF((thr),(duk_heaphdr *) (h))
#define DUK_HBUFFER_DECREF(thr,h) DUK_HBUFFER_DECREF_SLOW((thr),(h))
#define DUK_HBUFFER_DECREF_NORZ(thr,h) DUK_HBUFFER_DECREF_NORZ_SLOW((thr),(h))
#define DUK_HCOMPFUNC_INCREF(thr,h) DUK_HEAPHDR_INCREF((thr),(duk_heaphdr *) &(h)->obj)
#define DUK_HCOMPFUNC_DECREF(thr,h) DUK_HOBJECT_DECREF_SLOW((thr),(duk_hobject *) &(h)->obj)
#define DUK_HCOMPFUNC_DECREF_NORZ(thr,h) DUK_HOBJECT_DECREF_NORZ_SLOW((thr),(duk_hobject *) &(h)->obj)
#define DUK_HNATFUNC_INCREF(thr,h) DUK_HEAPHDR_INCREF((thr),(duk_heaphdr *) &(h)->obj)
#define DUK_HNATFUNC_DECREF(thr,h) DUK_HOBJECT_DECREF_SLOW((thr),(duk_hobject *) &(h)->obj)
#define DUK_HNATFUNC_DECREF_NORZ(thr,h) DUK_HOBJECT_DECREF_NORZ_SLOW((thr),(duk_hobject *) &(h)->obj)
#define DUK_HBUFOBJ_INCREF(thr,h) DUK_HEAPHDR_INCREF((thr),(duk_heaphdr *) &(h)->obj)
#define DUK_HBUFOBJ_DECREF(thr,h) DUK_HOBJECT_DECREF_SLOW((thr),(duk_hobject *) &(h)->obj)
#define DUK_HBUFOBJ_DECREF_NORZ(thr,h) DUK_HOBJECT_DECREF_NORZ_SLOW((thr),(duk_hobject *) &(h)->obj)
#define DUK_HTHREAD_INCREF(thr,h) DUK_HEAPHDR_INCREF((thr),(duk_heaphdr *) &(h)->obj)
#define DUK_HTHREAD_DECREF(thr,h) DUK_HOBJECT_DECREF_SLOW((thr),(duk_hobject *) &(h)->obj)
#define DUK_HTHREAD_DECREF_NORZ(thr,h) DUK_HOBJECT_DECREF_NORZ_SLOW((thr),(duk_hobject *) &(h)->obj)
#endif
/* Convenience for some situations; the above macros don't allow NULLs
* for performance reasons. Macros cover only actually needed cases.
*/
#define DUK_HEAPHDR_INCREF_ALLOWNULL(thr,h) do { \
if ((h) != NULL) { \
DUK_HEAPHDR_INCREF((thr), (duk_heaphdr *) (h)); \
} \
} while (0)
#define DUK_HEAPHDR_DECREF_ALLOWNULL(thr,h) do { \
if ((h) != NULL) { \
DUK_HEAPHDR_DECREF((thr), (duk_heaphdr *) (h)); \
} \
} while (0)
#define DUK_HEAPHDR_DECREF_NORZ_ALLOWNULL(thr,h) do { \
if ((h) != NULL) { \
DUK_HEAPHDR_DECREF_NORZ((thr), (duk_heaphdr *) (h)); \
} \
} while (0)
#define DUK_HOBJECT_INCREF_ALLOWNULL(thr,h) do { \
if ((h) != NULL) { \
DUK_HOBJECT_INCREF((thr), (h)); \
} \
} while (0)
#define DUK_HOBJECT_DECREF_ALLOWNULL(thr,h) do { \
if ((h) != NULL) { \
DUK_HOBJECT_DECREF((thr), (h)); \
} \
} while (0)
#define DUK_HOBJECT_DECREF_NORZ_ALLOWNULL(thr,h) do { \
if ((h) != NULL) { \
DUK_HOBJECT_DECREF_NORZ((thr), (h)); \
} \
} while (0)
#define DUK_HBUFFER_INCREF_ALLOWNULL(thr,h) do { \
if ((h) != NULL) { \
DUK_HBUFFER_INCREF((thr), (h)); \
} \
} while (0)
#define DUK_HBUFFER_DECREF_ALLOWNULL(thr,h) do { \
if ((h) != NULL) { \
DUK_HBUFFER_DECREF((thr), (h)); \
} \
} while (0)
#define DUK_HBUFFER_DECREF_NORZ_ALLOWNULL(thr,h) do { \
if ((h) != NULL) { \
DUK_HBUFFER_DECREF_NORZ((thr), (h)); \
} \
} while (0)
#define DUK_HTHREAD_INCREF_ALLOWNULL(thr,h) do { \
if ((h) != NULL) { \
DUK_HTHREAD_INCREF((thr), (h)); \
} \
} while (0)
#define DUK_HTHREAD_DECREF_ALLOWNULL(thr,h) do { \
if ((h) != NULL) { \
DUK_HTHREAD_DECREF((thr), (h)); \
} \
} while (0)
#define DUK_HTHREAD_DECREF_NORZ_ALLOWNULL(thr,h) do { \
if ((h) != NULL) { \
DUK_HTHREAD_DECREF_NORZ((thr), (h)); \
} \
} while (0)
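/* Illustrative sketch only: a typical ALLOWNULL call site is a field that
 * may legitimately be NULL, e.g. an object's internal prototype reference.
 * The direct field access below is hypothetical and bypasses the usual
 * accessor macros for brevity.
 */
#if 0
DUK_LOCAL void duk__example_clear_proto(duk_hthread *thr, duk_hobject *obj) {
	duk_hobject *proto = obj->prototype;  /* may be NULL */
	obj->prototype = NULL;
	DUK_HOBJECT_DECREF_ALLOWNULL(thr, proto);  /* NULL check folded into the macro */
}
#endif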
/* Called after one or more DECREF NORZ calls to handle pending side effects.
* At present DECREF NORZ does freeing inline but doesn't execute finalizers,
* so these macros check for pending finalizers and execute them. The FAST
* variant is performance critical.
*/
#if defined(DUK_USE_FINALIZER_SUPPORT)
#define DUK_REFZERO_CHECK_FAST(thr) do { \
duk_refzero_check_fast((thr)); \
} while (0)
#define DUK_REFZERO_CHECK_SLOW(thr) do { \
duk_refzero_check_slow((thr)); \
} while (0)
#else /* DUK_USE_FINALIZER_SUPPORT */
#define DUK_REFZERO_CHECK_FAST(thr) do { } while (0)
#define DUK_REFZERO_CHECK_SLOW(thr) do { } while (0)
#endif /* DUK_USE_FINALIZER_SUPPORT */
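/* Illustrative sketch (hypothetical function): the NORZ pattern defers
 * finalizer side effects while internal state may be inconsistent, then
 * runs them at a single explicit safe point.
 */
#if 0
DUK_LOCAL void duk__example_wipe_slots(duk_hthread *thr, duk_tval *tv, duk_size_t count) {
	while (count-- > 0) {
		DUK_TVAL_DECREF_NORZ(thr, tv);  /* memory freed inline, finalizers deferred */
		DUK_TVAL_SET_UNDEFINED(tv);
		tv++;
	}
	DUK_REFZERO_CHECK_FAST(thr);  /* safe point: execute pending finalizers */
}
#endif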
/*
* Macros to set a duk_tval and update refcount of the target (decref the
* old value and incref the new value if necessary). This is both performance
* and footprint critical; any changes made should be measured for size/speed.
*/
#define DUK_TVAL_SET_UNDEFINED_UPDREF_ALT0(thr,tvptr_dst) do { \
duk_tval *tv__dst; duk_tval tv__tmp; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_TVAL(&tv__tmp, tv__dst); \
DUK_TVAL_SET_UNDEFINED(tv__dst); \
DUK_TVAL_DECREF((thr), &tv__tmp); /* side effects */ \
} while (0)
#define DUK_TVAL_SET_UNDEFINED_UPDREF_NORZ_ALT0(thr,tvptr_dst) do { \
duk_tval *tv__dst; duk_tval tv__tmp; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_TVAL(&tv__tmp, tv__dst); \
DUK_TVAL_SET_UNDEFINED(tv__dst); \
DUK_TVAL_DECREF_NORZ((thr), &tv__tmp); \
} while (0)
#define DUK_TVAL_SET_UNUSED_UPDREF_ALT0(thr,tvptr_dst) do { \
duk_tval *tv__dst; duk_tval tv__tmp; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_TVAL(&tv__tmp, tv__dst); \
DUK_TVAL_SET_UNUSED(tv__dst); \
DUK_TVAL_DECREF((thr), &tv__tmp); /* side effects */ \
} while (0)
#define DUK_TVAL_SET_NULL_UPDREF_ALT0(thr,tvptr_dst) do { \
duk_tval *tv__dst; duk_tval tv__tmp; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_TVAL(&tv__tmp, tv__dst); \
DUK_TVAL_SET_NULL(tv__dst); \
DUK_TVAL_DECREF((thr), &tv__tmp); /* side effects */ \
} while (0)
#define DUK_TVAL_SET_BOOLEAN_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
duk_tval *tv__dst; duk_tval tv__tmp; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_TVAL(&tv__tmp, tv__dst); \
DUK_TVAL_SET_BOOLEAN(tv__dst, (newval)); \
DUK_TVAL_DECREF((thr), &tv__tmp); /* side effects */ \
} while (0)
#define DUK_TVAL_SET_NUMBER_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
duk_tval *tv__dst; duk_tval tv__tmp; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_TVAL(&tv__tmp, tv__dst); \
DUK_TVAL_SET_NUMBER(tv__dst, (newval)); \
DUK_TVAL_DECREF((thr), &tv__tmp); /* side effects */ \
} while (0)
#define DUK_TVAL_SET_NUMBER_CHKFAST_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
duk_tval *tv__dst; duk_tval tv__tmp; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_TVAL(&tv__tmp, tv__dst); \
DUK_TVAL_SET_NUMBER_CHKFAST_FAST(tv__dst, (newval)); \
DUK_TVAL_DECREF((thr), &tv__tmp); /* side effects */ \
} while (0)
#define DUK_TVAL_SET_DOUBLE_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
duk_tval *tv__dst; duk_tval tv__tmp; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_TVAL(&tv__tmp, tv__dst); \
DUK_TVAL_SET_DOUBLE(tv__dst, (newval)); \
DUK_TVAL_DECREF((thr), &tv__tmp); /* side effects */ \
} while (0)
#define DUK_TVAL_SET_NAN_UPDREF_ALT0(thr,tvptr_dst) do { \
duk_tval *tv__dst; duk_tval tv__tmp; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_TVAL(&tv__tmp, tv__dst); \
DUK_TVAL_SET_NAN(tv__dst); \
DUK_TVAL_DECREF((thr), &tv__tmp); /* side effects */ \
} while (0)
#if defined(DUK_USE_FASTINT)
#define DUK_TVAL_SET_I48_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
duk_tval *tv__dst; duk_tval tv__tmp; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_TVAL(&tv__tmp, tv__dst); \
DUK_TVAL_SET_I48(tv__dst, (newval)); \
DUK_TVAL_DECREF((thr), &tv__tmp); /* side effects */ \
} while (0)
#define DUK_TVAL_SET_I32_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
duk_tval *tv__dst; duk_tval tv__tmp; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_TVAL(&tv__tmp, tv__dst); \
DUK_TVAL_SET_I32(tv__dst, (newval)); \
DUK_TVAL_DECREF((thr), &tv__tmp); /* side effects */ \
} while (0)
#define DUK_TVAL_SET_U32_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
duk_tval *tv__dst; duk_tval tv__tmp; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_TVAL(&tv__tmp, tv__dst); \
DUK_TVAL_SET_U32(tv__dst, (newval)); \
DUK_TVAL_DECREF((thr), &tv__tmp); /* side effects */ \
} while (0)
#else
#define DUK_TVAL_SET_DOUBLE_CAST_UPDREF(thr,tvptr_dst,newval) \
DUK_TVAL_SET_DOUBLE_UPDREF((thr), (tvptr_dst), (duk_double_t) (newval))
#endif /* DUK_USE_FASTINT */
#define DUK_TVAL_SET_LIGHTFUNC_UPDREF_ALT0(thr,tvptr_dst,lf_v,lf_fp,lf_flags) do { \
duk_tval *tv__dst; duk_tval tv__tmp; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_TVAL(&tv__tmp, tv__dst); \
DUK_TVAL_SET_LIGHTFUNC(tv__dst, (lf_v), (lf_fp), (lf_flags)); \
DUK_TVAL_DECREF((thr), &tv__tmp); /* side effects */ \
} while (0)
#define DUK_TVAL_SET_STRING_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
duk_tval *tv__dst; duk_tval tv__tmp; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_TVAL(&tv__tmp, tv__dst); \
DUK_TVAL_SET_STRING(tv__dst, (newval)); \
DUK_HSTRING_INCREF((thr), (newval)); \
DUK_TVAL_DECREF((thr), &tv__tmp); /* side effects */ \
} while (0)
#define DUK_TVAL_SET_OBJECT_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
duk_tval *tv__dst; duk_tval tv__tmp; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_TVAL(&tv__tmp, tv__dst); \
DUK_TVAL_SET_OBJECT(tv__dst, (newval)); \
DUK_HOBJECT_INCREF((thr), (newval)); \
DUK_TVAL_DECREF((thr), &tv__tmp); /* side effects */ \
} while (0)
#define DUK_TVAL_SET_BUFFER_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
duk_tval *tv__dst; duk_tval tv__tmp; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_TVAL(&tv__tmp, tv__dst); \
DUK_TVAL_SET_BUFFER(tv__dst, (newval)); \
DUK_HBUFFER_INCREF((thr), (newval)); \
DUK_TVAL_DECREF((thr), &tv__tmp); /* side effects */ \
} while (0)
#define DUK_TVAL_SET_POINTER_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
duk_tval *tv__dst; duk_tval tv__tmp; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_TVAL(&tv__tmp, tv__dst); \
DUK_TVAL_SET_POINTER(tv__dst, (newval)); \
DUK_TVAL_DECREF((thr), &tv__tmp); /* side effects */ \
} while (0)
/* DUK_TVAL_SET_TVAL_UPDREF() is used a lot in executor, property lookups,
* etc, so it's very important for performance. Measure when changing.
*
* NOTE: the source and destination duk_tval pointers may be the same, and
* the macros MUST deal with that correctly.
*/
/* Original idiom used, minimal code size. */
#define DUK_TVAL_SET_TVAL_UPDREF_ALT0(thr,tvptr_dst,tvptr_src) do { \
duk_tval *tv__dst, *tv__src; duk_tval tv__tmp; \
tv__dst = (tvptr_dst); tv__src = (tvptr_src); \
DUK_TVAL_SET_TVAL(&tv__tmp, tv__dst); \
DUK_TVAL_SET_TVAL(tv__dst, tv__src); \
DUK_TVAL_INCREF((thr), tv__src); \
DUK_TVAL_DECREF((thr), &tv__tmp); /* side effects */ \
} while (0)
/* Faster alternative: avoid making a temporary copy of tvptr_dst and use
* fast incref/decref macros.
*/
#define DUK_TVAL_SET_TVAL_UPDREF_ALT1(thr,tvptr_dst,tvptr_src) do { \
duk_tval *tv__dst, *tv__src; duk_heaphdr *h__obj; \
tv__dst = (tvptr_dst); tv__src = (tvptr_src); \
DUK_TVAL_INCREF_FAST((thr), tv__src); \
if (DUK_TVAL_NEEDS_REFCOUNT_UPDATE(tv__dst)) { \
h__obj = DUK_TVAL_GET_HEAPHDR(tv__dst); \
DUK_ASSERT(h__obj != NULL); \
DUK_TVAL_SET_TVAL(tv__dst, tv__src); \
DUK_HEAPHDR_DECREF_FAST((thr), h__obj); /* side effects */ \
} else { \
DUK_TVAL_SET_TVAL(tv__dst, tv__src); \
} \
} while (0)
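/* Both variants above tolerate tvptr_dst == tvptr_src: ALT0 snapshots the
 * old value before overwriting, ALT1 increfs the source before the decref.
 * Illustrative counter-example of why that ordering matters (hypothetical,
 * not used anywhere):
 */
#if 0
	/* WRONG: with tv_dst == tv_src and refcount 1, the DECREF frees the
	 * value before it is copied and incref'd again.
	 */
	DUK_TVAL_DECREF(thr, tv_dst);
	DUK_TVAL_SET_TVAL(tv_dst, tv_src);
	DUK_TVAL_INCREF(thr, tv_src);
#endif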
/* XXX: no optimized variants yet */
#define DUK_TVAL_SET_UNDEFINED_UPDREF DUK_TVAL_SET_UNDEFINED_UPDREF_ALT0
#define DUK_TVAL_SET_UNDEFINED_UPDREF_NORZ DUK_TVAL_SET_UNDEFINED_UPDREF_NORZ_ALT0
#define DUK_TVAL_SET_UNUSED_UPDREF DUK_TVAL_SET_UNUSED_UPDREF_ALT0
#define DUK_TVAL_SET_NULL_UPDREF DUK_TVAL_SET_NULL_UPDREF_ALT0
#define DUK_TVAL_SET_BOOLEAN_UPDREF DUK_TVAL_SET_BOOLEAN_UPDREF_ALT0
#define DUK_TVAL_SET_NUMBER_UPDREF DUK_TVAL_SET_NUMBER_UPDREF_ALT0
#define DUK_TVAL_SET_NUMBER_CHKFAST_UPDREF DUK_TVAL_SET_NUMBER_CHKFAST_UPDREF_ALT0
#define DUK_TVAL_SET_DOUBLE_UPDREF DUK_TVAL_SET_DOUBLE_UPDREF_ALT0
#define DUK_TVAL_SET_NAN_UPDREF DUK_TVAL_SET_NAN_UPDREF_ALT0
#if defined(DUK_USE_FASTINT)
#define DUK_TVAL_SET_I48_UPDREF DUK_TVAL_SET_I48_UPDREF_ALT0
#define DUK_TVAL_SET_I32_UPDREF DUK_TVAL_SET_I32_UPDREF_ALT0
#define DUK_TVAL_SET_U32_UPDREF DUK_TVAL_SET_U32_UPDREF_ALT0
#else
#define DUK_TVAL_SET_I48_UPDREF DUK_TVAL_SET_DOUBLE_CAST_UPDREF /* XXX: fast int-to-double */
#define DUK_TVAL_SET_I32_UPDREF DUK_TVAL_SET_DOUBLE_CAST_UPDREF
#define DUK_TVAL_SET_U32_UPDREF DUK_TVAL_SET_DOUBLE_CAST_UPDREF
#endif /* DUK_USE_FASTINT */
#define DUK_TVAL_SET_FASTINT_UPDREF DUK_TVAL_SET_I48_UPDREF /* convenience */
#define DUK_TVAL_SET_LIGHTFUNC_UPDREF DUK_TVAL_SET_LIGHTFUNC_UPDREF_ALT0
#define DUK_TVAL_SET_STRING_UPDREF DUK_TVAL_SET_STRING_UPDREF_ALT0
#define DUK_TVAL_SET_OBJECT_UPDREF DUK_TVAL_SET_OBJECT_UPDREF_ALT0
#define DUK_TVAL_SET_BUFFER_UPDREF DUK_TVAL_SET_BUFFER_UPDREF_ALT0
#define DUK_TVAL_SET_POINTER_UPDREF DUK_TVAL_SET_POINTER_UPDREF_ALT0
#if defined(DUK_USE_FAST_REFCOUNT_DEFAULT)
/* Optimized for speed. */
#define DUK_TVAL_SET_TVAL_UPDREF DUK_TVAL_SET_TVAL_UPDREF_ALT1
#define DUK_TVAL_SET_TVAL_UPDREF_FAST DUK_TVAL_SET_TVAL_UPDREF_ALT1
#define DUK_TVAL_SET_TVAL_UPDREF_SLOW DUK_TVAL_SET_TVAL_UPDREF_ALT0
#else
/* Optimized for size. */
#define DUK_TVAL_SET_TVAL_UPDREF DUK_TVAL_SET_TVAL_UPDREF_ALT0
#define DUK_TVAL_SET_TVAL_UPDREF_FAST DUK_TVAL_SET_TVAL_UPDREF_ALT0
#define DUK_TVAL_SET_TVAL_UPDREF_SLOW DUK_TVAL_SET_TVAL_UPDREF_ALT0
#endif
#else /* DUK_USE_REFERENCE_COUNTING */
#define DUK_TVAL_NEEDS_REFCOUNT_UPDATE(tv) 0
#define DUK_HEAPHDR_NEEDS_REFCOUNT_UPDATE(h) 0
#define DUK_TVAL_INCREF_FAST(thr,v) do {} while (0) /* nop */
#define DUK_TVAL_DECREF_FAST(thr,v) do {} while (0) /* nop */
#define DUK_TVAL_DECREF_NORZ_FAST(thr,v) do {} while (0) /* nop */
#define DUK_TVAL_INCREF_SLOW(thr,v) do {} while (0) /* nop */
#define DUK_TVAL_DECREF_SLOW(thr,v) do {} while (0) /* nop */
#define DUK_TVAL_DECREF_NORZ_SLOW(thr,v) do {} while (0) /* nop */
#define DUK_TVAL_INCREF(thr,v) do {} while (0) /* nop */
#define DUK_TVAL_DECREF(thr,v) do {} while (0) /* nop */
#define DUK_TVAL_DECREF_NORZ(thr,v) do {} while (0) /* nop */
#define DUK_HEAPHDR_INCREF_FAST(thr,h) do {} while (0) /* nop */
#define DUK_HEAPHDR_DECREF_FAST(thr,h) do {} while (0) /* nop */
#define DUK_HEAPHDR_DECREF_NORZ_FAST(thr,h) do {} while (0) /* nop */
#define DUK_HEAPHDR_INCREF_SLOW(thr,h) do {} while (0) /* nop */
#define DUK_HEAPHDR_DECREF_SLOW(thr,h) do {} while (0) /* nop */
#define DUK_HEAPHDR_DECREF_NORZ_SLOW(thr,h) do {} while (0) /* nop */
#define DUK_HEAPHDR_INCREF(thr,h) do {} while (0) /* nop */
#define DUK_HEAPHDR_DECREF(thr,h) do {} while (0) /* nop */
#define DUK_HEAPHDR_DECREF_NORZ(thr,h) do {} while (0) /* nop */
#define DUK_HSTRING_INCREF_FAST(thr,h) do {} while (0) /* nop */
#define DUK_HSTRING_DECREF_FAST(thr,h) do {} while (0) /* nop */
#define DUK_HSTRING_DECREF_NORZ_FAST(thr,h) do {} while (0) /* nop */
#define DUK_HSTRING_INCREF_SLOW(thr,h) do {} while (0) /* nop */
#define DUK_HSTRING_DECREF_SLOW(thr,h) do {} while (0) /* nop */
#define DUK_HSTRING_DECREF_NORZ_SLOW(thr,h) do {} while (0) /* nop */
#define DUK_HSTRING_INCREF(thr,h) do {} while (0) /* nop */
#define DUK_HSTRING_DECREF(thr,h) do {} while (0) /* nop */
#define DUK_HSTRING_DECREF_NORZ(thr,h) do {} while (0) /* nop */
#define DUK_HOBJECT_INCREF_FAST(thr,h) do {} while (0) /* nop */
#define DUK_HOBJECT_DECREF_FAST(thr,h) do {} while (0) /* nop */
#define DUK_HOBJECT_DECREF_NORZ_FAST(thr,h) do {} while (0) /* nop */
#define DUK_HOBJECT_INCREF_SLOW(thr,h) do {} while (0) /* nop */
#define DUK_HOBJECT_DECREF_SLOW(thr,h) do {} while (0) /* nop */
#define DUK_HOBJECT_DECREF_NORZ_SLOW(thr,h) do {} while (0) /* nop */
#define DUK_HOBJECT_INCREF(thr,h) do {} while (0) /* nop */
#define DUK_HOBJECT_DECREF(thr,h) do {} while (0) /* nop */
#define DUK_HOBJECT_DECREF_NORZ(thr,h) do {} while (0) /* nop */
#define DUK_HBUFFER_INCREF_FAST(thr,h) do {} while (0) /* nop */
#define DUK_HBUFFER_DECREF_FAST(thr,h) do {} while (0) /* nop */
#define DUK_HBUFFER_DECREF_NORZ_FAST(thr,h) do {} while (0) /* nop */
#define DUK_HBUFFER_INCREF_SLOW(thr,h) do {} while (0) /* nop */
#define DUK_HBUFFER_DECREF_SLOW(thr,h) do {} while (0) /* nop */
#define DUK_HBUFFER_DECREF_NORZ_SLOW(thr,h) do {} while (0) /* nop */
#define DUK_HBUFFER_INCREF(thr,h) do {} while (0) /* nop */
#define DUK_HBUFFER_DECREF(thr,h) do {} while (0) /* nop */
#define DUK_HBUFFER_DECREF_NORZ(thr,h) do {} while (0) /* nop */
#define DUK_HCOMPFUNC_INCREF(thr,h) do {} while (0) /* nop */
#define DUK_HCOMPFUNC_DECREF(thr,h) do {} while (0) /* nop */
#define DUK_HCOMPFUNC_DECREF_NORZ(thr,h) do {} while (0) /* nop */
#define DUK_HNATFUNC_INCREF(thr,h) do {} while (0) /* nop */
#define DUK_HNATFUNC_DECREF(thr,h) do {} while (0) /* nop */
#define DUK_HNATFUNC_DECREF_NORZ(thr,h) do {} while (0) /* nop */
#define DUK_HBUFOBJ_INCREF(thr,h) do {} while (0) /* nop */
#define DUK_HBUFOBJ_DECREF(thr,h) do {} while (0) /* nop */
#define DUK_HBUFOBJ_DECREF_NORZ(thr,h) do {} while (0) /* nop */
#define DUK_HTHREAD_INCREF(thr,h) do {} while (0) /* nop */
#define DUK_HTHREAD_DECREF(thr,h) do {} while (0) /* nop */
#define DUK_HTHREAD_DECREF_NORZ(thr,h) do {} while (0) /* nop */
#define DUK_HOBJECT_INCREF_ALLOWNULL(thr,h) do {} while (0) /* nop */
#define DUK_HOBJECT_DECREF_ALLOWNULL(thr,h) do {} while (0) /* nop */
#define DUK_HOBJECT_DECREF_NORZ_ALLOWNULL(thr,h) do {} while (0) /* nop */
#define DUK_HBUFFER_INCREF_ALLOWNULL(thr,h) do {} while (0) /* nop */
#define DUK_HBUFFER_DECREF_ALLOWNULL(thr,h) do {} while (0) /* nop */
#define DUK_HBUFFER_DECREF_NORZ_ALLOWNULL(thr,h) do {} while (0) /* nop */
#define DUK_REFZERO_CHECK_FAST(thr) do {} while (0) /* nop */
#define DUK_REFZERO_CHECK_SLOW(thr) do {} while (0) /* nop */
#define DUK_TVAL_SET_UNDEFINED_UPDREF_ALT0(thr,tvptr_dst) do { \
duk_tval *tv__dst; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_UNDEFINED(tv__dst); \
DUK_UNREF((thr)); \
} while (0)
#define DUK_TVAL_SET_UNUSED_UPDREF_ALT0(thr,tvptr_dst) do { \
duk_tval *tv__dst; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_UNUSED(tv__dst); \
DUK_UNREF((thr)); \
} while (0)
#define DUK_TVAL_SET_NULL_UPDREF_ALT0(thr,tvptr_dst) do { \
duk_tval *tv__dst; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_NULL(tv__dst); \
DUK_UNREF((thr)); \
} while (0)
#define DUK_TVAL_SET_BOOLEAN_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
duk_tval *tv__dst; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_BOOLEAN(tv__dst, (newval)); \
DUK_UNREF((thr)); \
} while (0)
#define DUK_TVAL_SET_NUMBER_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
duk_tval *tv__dst; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_NUMBER(tv__dst, (newval)); \
DUK_UNREF((thr)); \
} while (0)
#define DUK_TVAL_SET_NUMBER_CHKFAST_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
duk_tval *tv__dst; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_NUMBER_CHKFAST_FAST(tv__dst, (newval)); \
DUK_UNREF((thr)); \
} while (0)
#define DUK_TVAL_SET_DOUBLE_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
duk_tval *tv__dst; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_DOUBLE(tv__dst, (newval)); \
DUK_UNREF((thr)); \
} while (0)
#define DUK_TVAL_SET_NAN_UPDREF_ALT0(thr,tvptr_dst) do { \
duk_tval *tv__dst; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_NAN(tv__dst); \
DUK_UNREF((thr)); \
} while (0)
#if defined(DUK_USE_FASTINT)
#define DUK_TVAL_SET_I48_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
duk_tval *tv__dst; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_I48(tv__dst, (newval)); \
DUK_UNREF((thr)); \
} while (0)
#define DUK_TVAL_SET_I32_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
duk_tval *tv__dst; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_I32(tv__dst, (newval)); \
DUK_UNREF((thr)); \
} while (0)
#define DUK_TVAL_SET_U32_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
duk_tval *tv__dst; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_U32(tv__dst, (newval)); \
DUK_UNREF((thr)); \
} while (0)
#else
#define DUK_TVAL_SET_DOUBLE_CAST_UPDREF(thr,tvptr_dst,newval) \
DUK_TVAL_SET_DOUBLE_UPDREF((thr), (tvptr_dst), (duk_double_t) (newval))
#endif /* DUK_USE_FASTINT */
#define DUK_TVAL_SET_LIGHTFUNC_UPDREF_ALT0(thr,tvptr_dst,lf_v,lf_fp,lf_flags) do { \
duk_tval *tv__dst; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_LIGHTFUNC(tv__dst, (lf_v), (lf_fp), (lf_flags)); \
DUK_UNREF((thr)); \
} while (0)
#define DUK_TVAL_SET_STRING_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
duk_tval *tv__dst; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_STRING(tv__dst, (newval)); \
DUK_UNREF((thr)); \
} while (0)
#define DUK_TVAL_SET_OBJECT_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
duk_tval *tv__dst; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_OBJECT(tv__dst, (newval)); \
DUK_UNREF((thr)); \
} while (0)
#define DUK_TVAL_SET_BUFFER_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
duk_tval *tv__dst; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_BUFFER(tv__dst, (newval)); \
DUK_UNREF((thr)); \
} while (0)
#define DUK_TVAL_SET_POINTER_UPDREF_ALT0(thr,tvptr_dst,newval) do { \
duk_tval *tv__dst; tv__dst = (tvptr_dst); \
DUK_TVAL_SET_POINTER(tv__dst, (newval)); \
DUK_UNREF((thr)); \
} while (0)
#define DUK_TVAL_SET_TVAL_UPDREF_ALT0(thr,tvptr_dst,tvptr_src) do { \
duk_tval *tv__dst, *tv__src; \
tv__dst = (tvptr_dst); tv__src = (tvptr_src); \
DUK_TVAL_SET_TVAL(tv__dst, tv__src); \
DUK_UNREF((thr)); \
} while (0)
#define DUK_TVAL_SET_UNDEFINED_UPDREF DUK_TVAL_SET_UNDEFINED_UPDREF_ALT0
#define DUK_TVAL_SET_UNDEFINED_UPDREF_NORZ DUK_TVAL_SET_UNDEFINED_UPDREF_ALT0
#define DUK_TVAL_SET_UNUSED_UPDREF DUK_TVAL_SET_UNUSED_UPDREF_ALT0
#define DUK_TVAL_SET_NULL_UPDREF DUK_TVAL_SET_NULL_UPDREF_ALT0
#define DUK_TVAL_SET_BOOLEAN_UPDREF DUK_TVAL_SET_BOOLEAN_UPDREF_ALT0
#define DUK_TVAL_SET_NUMBER_UPDREF DUK_TVAL_SET_NUMBER_UPDREF_ALT0
#define DUK_TVAL_SET_NUMBER_CHKFAST_UPDREF DUK_TVAL_SET_NUMBER_CHKFAST_UPDREF_ALT0
#define DUK_TVAL_SET_DOUBLE_UPDREF DUK_TVAL_SET_DOUBLE_UPDREF_ALT0
#define DUK_TVAL_SET_NAN_UPDREF DUK_TVAL_SET_NAN_UPDREF_ALT0
#if defined(DUK_USE_FASTINT)
#define DUK_TVAL_SET_I48_UPDREF DUK_TVAL_SET_I48_UPDREF_ALT0
#define DUK_TVAL_SET_I32_UPDREF DUK_TVAL_SET_I32_UPDREF_ALT0
#define DUK_TVAL_SET_U32_UPDREF DUK_TVAL_SET_U32_UPDREF_ALT0
#else
#define DUK_TVAL_SET_I48_UPDREF DUK_TVAL_SET_DOUBLE_CAST_UPDREF /* XXX: fast-int-to-double */
#define DUK_TVAL_SET_I32_UPDREF DUK_TVAL_SET_DOUBLE_CAST_UPDREF
#define DUK_TVAL_SET_U32_UPDREF DUK_TVAL_SET_DOUBLE_CAST_UPDREF
#endif /* DUK_USE_FASTINT */
#define DUK_TVAL_SET_FASTINT_UPDREF DUK_TVAL_SET_I48_UPDREF /* convenience */
#define DUK_TVAL_SET_LIGHTFUNC_UPDREF DUK_TVAL_SET_LIGHTFUNC_UPDREF_ALT0
#define DUK_TVAL_SET_STRING_UPDREF DUK_TVAL_SET_STRING_UPDREF_ALT0
#define DUK_TVAL_SET_OBJECT_UPDREF DUK_TVAL_SET_OBJECT_UPDREF_ALT0
#define DUK_TVAL_SET_BUFFER_UPDREF DUK_TVAL_SET_BUFFER_UPDREF_ALT0
#define DUK_TVAL_SET_POINTER_UPDREF DUK_TVAL_SET_POINTER_UPDREF_ALT0
#define DUK_TVAL_SET_TVAL_UPDREF DUK_TVAL_SET_TVAL_UPDREF_ALT0
#define DUK_TVAL_SET_TVAL_UPDREF_FAST DUK_TVAL_SET_TVAL_UPDREF_ALT0
#define DUK_TVAL_SET_TVAL_UPDREF_SLOW DUK_TVAL_SET_TVAL_UPDREF_ALT0
#endif /* DUK_USE_REFERENCE_COUNTING */
#if defined(DUK_USE_REFERENCE_COUNTING)
#if defined(DUK_USE_FINALIZER_SUPPORT)
DUK_INTERNAL_DECL void duk_refzero_check_slow(duk_hthread *thr);
DUK_INTERNAL_DECL void duk_refzero_check_fast(duk_hthread *thr);
#endif
DUK_INTERNAL_DECL void duk_heaphdr_refcount_finalize_norz(duk_heap *heap, duk_heaphdr *hdr);
DUK_INTERNAL_DECL void duk_hobject_refcount_finalize_norz(duk_heap *heap, duk_hobject *h);
#if 0 /* Not needed: fast path handles inline; slow path uses duk_heaphdr_decref() which is needed anyway. */
DUK_INTERNAL_DECL void duk_hstring_decref(duk_hthread *thr, duk_hstring *h);
DUK_INTERNAL_DECL void duk_hstring_decref_norz(duk_hthread *thr, duk_hstring *h);
DUK_INTERNAL_DECL void duk_hbuffer_decref(duk_hthread *thr, duk_hbuffer *h);
DUK_INTERNAL_DECL void duk_hbuffer_decref_norz(duk_hthread *thr, duk_hbuffer *h);
DUK_INTERNAL_DECL void duk_hobject_decref(duk_hthread *thr, duk_hobject *h);
DUK_INTERNAL_DECL void duk_hobject_decref_norz(duk_hthread *thr, duk_hobject *h);
#endif
DUK_INTERNAL_DECL void duk_heaphdr_refzero(duk_hthread *thr, duk_heaphdr *h);
DUK_INTERNAL_DECL void duk_heaphdr_refzero_norz(duk_hthread *thr, duk_heaphdr *h);
#if defined(DUK_USE_FAST_REFCOUNT_DEFAULT)
DUK_INTERNAL_DECL void duk_hstring_refzero(duk_hthread *thr, duk_hstring *h); /* no 'norz' variant */
DUK_INTERNAL_DECL void duk_hbuffer_refzero(duk_hthread *thr, duk_hbuffer *h); /* no 'norz' variant */
DUK_INTERNAL_DECL void duk_hobject_refzero(duk_hthread *thr, duk_hobject *h);
DUK_INTERNAL_DECL void duk_hobject_refzero_norz(duk_hthread *thr, duk_hobject *h);
#else
DUK_INTERNAL_DECL void duk_tval_incref(duk_tval *tv);
DUK_INTERNAL_DECL void duk_tval_decref(duk_hthread *thr, duk_tval *tv);
DUK_INTERNAL_DECL void duk_tval_decref_norz(duk_hthread *thr, duk_tval *tv);
DUK_INTERNAL_DECL void duk_heaphdr_incref(duk_heaphdr *h);
DUK_INTERNAL_DECL void duk_heaphdr_decref(duk_hthread *thr, duk_heaphdr *h);
DUK_INTERNAL_DECL void duk_heaphdr_decref_norz(duk_hthread *thr, duk_heaphdr *h);
#endif
#else /* DUK_USE_REFERENCE_COUNTING */
/* no refcounting */
#endif /* DUK_USE_REFERENCE_COUNTING */
#endif /* DUK_REFCOUNT_H_INCLUDED */
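
A rough sketch of the refzero flow behind the declarations above, consistent with the behavior exercised by the tests below; every helper name except duk_heaphdr_refzero() is hypothetical and the real implementation differs in detail. Cascades are drained iteratively off refzero_list so arbitrarily long chains never recurse into the C stack, and finalizers run outside refzero_list processing so rescue/free decisions see up-to-date refcounts.

#if 0  /* simplified model only */
DUK_INTERNAL void duk_heaphdr_refzero(duk_hthread *thr, duk_heaphdr *h) {
	duk_heap *heap = thr->heap;

	queue_to_refzero_list(heap, h);                  /* hypothetical */
	while ((h = pop_refzero_list(heap)) != NULL) {   /* iterative, no C recursion */
		if (has_finalizer(h)) {
			move_to_finalize_list(heap, h);  /* finalizer runs later, outside this loop */
		} else {
			decref_fields_norz(thr, h);      /* may queue more objects onto refzero_list */
			free_heaphdr_raw(heap, h);
		}
	}
	run_pending_finalizers(thr);                     /* DUK_REFZERO_CHECK at a safe point */
}
#endif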

22
tests/ecmascript/test-bug-finalizer-repro-gh1311.js

@@ -0,0 +1,22 @@
/*
* Pure JS repro for one issue in https://github.com/svaarala/duktape/issues/1311.
*/
/*===
Error: a pig ate it
done
===*/
function Foo() {
throw new Error('a pig ate it');
}
Duktape.fin(Foo.prototype, function(o) {});
try {
new Foo();
} catch (e) {
print(e);
}
print('done');

2
tests/ecmascript/test-bug-proxy-finalizer-double-call.js

@@ -2,7 +2,7 @@
* Duktape 0.11.0 bug: if a Proxy is reachable at heap destruction and the
* target object has a finalizer, the finalizer will be executed both for
* the target object and the Proxy object. Normal run-time refcount and
* mark-and-sweep finalizer code will not finalize a Proxy.
* mark-and-sweep triggered finalizer code will not finalize a Proxy.
*
* See: https://github.com/svaarala/duktape/issues/45
*/

108
tests/ecmascript/test-bug-refcount-finalizer-garbage-loop.js

@@ -0,0 +1,108 @@
/*
* Bug test for a Duktape 2.0 and prior refzero finalizer limitation for
* creating new garbage. This has been fixed in Duktape 2.1.
*
* When the finalizer runs and calls Object.getOwnPropertyNames(), an
* enumerator object referencing the object being finalized is created.
* When the refzero finalizer exits, the object will then have a refcount
* > 0, but will actually be unreachable because it's only reachable via
* the enumerator which gets collected right after the finalizer call
* returns.
*
* So, when the finalizer exits, the enumerator object's refcount falls
* to zero and it is queued to the refzero_list for processing. But
* it's not yet processed when the decision to rescue/free the finalized
* object is made, so the object is rescued and queued back to the heap.
*
* When the enumerator object is refzero processed, it gets freed and
* refcount finalized, which causes the finalized object's refcount to
* fall to zero, too -- and the object is queued to refzero_list again.
*
* This loop then continues forever. The same happens with any object
* which references the object being finalized without being in a reference
* loop.
*
* Duktape 2.1 fixes the issue by running finalizers outside of refzero_list
* processing, so that the newly created enumerator (or other "wrapper"
* object) gets DECREF processed immediately, and the keep/rescue decision
* is then made with up-to-date refcounts.
*/
/*===
no finalizer
caught Error: thrown by constructor
add finalizer
finalizer called for object
caught Error: thrown by constructor
done
finalizer called for prototype
===*/
var thisPointer;
var sanityCount = 0;
function finalizer(o) {
if (o === Foo.prototype) {
print('finalizer called for prototype');
return;
}
print('finalizer called for object');
if (++sanityCount > 1000) {
// Break the finalizer loop for testing. Without this, the loop
// would go on forever.
print('sanity limit reached');
return;
}
// Create a temporary object referencing the object being finalized.
// When temp is assigned null, the temporary object gets refzero queued
// to refzero_list, but won't be actually processed because we're already
// processing the finalizer for the current object inside refzero_list
// handling.
var temp = { name: 'temp', ref: o };
//temp.foo = { bar: temp }; // ref loop
temp = null;
// If 'temp' was in a reference loop, it would only be collectable via
// mark-and-sweep, and the *second* finalization round would then be
// mark-and-sweep driven, avoiding the infinite loop.
// This would cause the same issue.
// void Object.getOwnPropertyNames(o);
}
function Foo() {
thisPointer = String(Duktape.Pointer(this));
// If the object is placed into a reference loop, the finalization will
// be handled via mark-and-sweep which works fine.
/*
this.foo = {};
this.foo.bar = this;
*/
this.name = 'Foo instance';
throw new Error('thrown by constructor');
}
print('no finalizer');
try {
new Foo();
} catch (e) {
print('caught', e);
}
print('add finalizer');
Duktape.fin(Foo.prototype, finalizer);
Duktape.gc();
try {
new Foo();
} catch (e) {
print('caught', e);
}
print('done');

63
tests/ecmascript/test-dev-finalizer-markandsweep-refzero.js

@@ -0,0 +1,63 @@
/*
* Special case in finalization:
*
* - Object in reference cycle is queued for finalization by mark-and-sweep.
* - Finalizer is executed, FINALIZED is set, object is queued back to
* heap_allocated.
* - While waiting for the next mark-and-sweep round to make a rescue/free
* decision, the object's refcount drops to zero.
*
* Current handling (Duktape 2.1) is to detect the situation in REFZERO
* handling and free the object without going through finalize_list and
* mark-and-sweep unnecessarily. Prior to 2.1, the object would be queued
* back for a new mark-and-sweep round.
*/
/*===
gc 1
gc 2, finalizer
finalizer called
call func()
func called
set func to null
gc 3, nop
done
===*/
function test() {
var obj = {};
obj.ref = {};
obj.ref.ref = obj; // cycle
var func;
Duktape.fin(obj, function (v) {
print('finalizer called');
// When the finalizer finishes, the object is reachable via 'func'.
// When func() is called and then set to null, func gets a REFZERO.
func = function () {
print('func called');
v.ref = null; // break cycle
};
func.prototype = null; // break cycle
});
print('gc 1');
Duktape.gc();
obj = null;
print('gc 2, finalizer');
Duktape.gc(); // finalizer execution
print('call func()');
func();
print('set func to null');
func = null; // DECREF
print('gc 3, nop');
Duktape.gc(); // should no longer see object
print('done');
}
try {
test();
} catch (e) {
print(e.stack || e);
}
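
In C terms, the Duktape 2.1 handling described in the test comment boils down to a FINALIZED-flag check on the zero-refcount path. A rough sketch, assuming the internal DUK_HEAPHDR_HAS_FINALIZED() flag test; the free/queue helpers are hypothetical:

#if 0  /* simplified model only */
if (DUK_HEAPHDR_HAS_FINALIZED(h)) {
	/* Finalizer already ran and the object was not rescued: free it
	 * directly, skipping finalize_list and an extra mark-and-sweep round.
	 */
	free_heaphdr_raw(heap, h);
} else if (has_finalizer(h)) {
	move_to_finalize_list(heap, h);
}
#endif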

39
tests/ecmascript/test-dev-finalizer-markandsweep-zero-refcount.js

@@ -0,0 +1,39 @@
/*
* If an object is mark-and-sweep finalized and the finalizer breaks the
* reference cycle where the object participates, and the object has a zero
* refcount after the finalizer returns, the object gets freed immediately.
*/
/*===
gc 1
gc 2, finalizer execution
finalizer called
gc 3, nop
done
===*/
function test() {
var obj = {};
obj.ref = {};
obj.ref.ref = obj; // Cycle
Duktape.fin(obj, function (v) {
print('finalizer called');
v.ref = null;
});
print('gc 1');
Duktape.gc();
obj = null;
print('gc 2, finalizer execution');
Duktape.gc();
print('gc 3, nop');
Duktape.gc();
print('done');
}
try {
test();
} catch (e) {
print(e.stack || e);
}

7
tests/ecmascript/test-dev-finalizer-skip.js

@@ -1,11 +1,14 @@
/*
* Test that finalizers are executed correctly if they are skipped by
* one GC round.
*
* Marked 'skip' because no longer relevant in Duktape 2.1.
*/
/*---
{
"custom": true
"custom": true,
"skip": true
}
---*/
@@ -36,8 +39,6 @@ try {
// Mark-and-sweep without finalizers. (1 << 3) = 8 is a flag from
// duk_heap.h (this is a fragile dependency):
//
// #define DUK_MS_FLAG_NO_FINALIZERS (1 << 3) /* don't run finalizers; leave finalizable objects in finalize_list for next round */
print('gc without finalizers');
Duktape.gc(8);

2
tests/ecmascript/test-dev-lightfunc-finalizer.js

@@ -43,6 +43,8 @@ function lightfuncFinalizerTest() {
Duktape.fin(Function.prototype, function (v) {
if (valueIsLightFunc(v)) {
print('inherited finalizer for lightfunc');
} else {
//print('inherited finalizer, not for lightfunc: ' + v);
}
});

10
tests/ecmascript/test-dev-markandsweep-finalizer-3.js

@@ -1,9 +1,9 @@
/*
* If a mark-and-sweep finalizer creates a new reachable object, the object
* must not be collected even though it has not been marked reachable.
* This is handled correctly by the implementation: finalizers run after
* the sweep pass has been completed and new elements and references created
* by the finalizer are considered in the next pass.
* If a mark-and-sweep triggered finalizer creates a new reachable object,
* the object must not be collected even though it has not been marked
* reachable. This is handled correctly by the implementation: finalizers
* run after the sweep pass has been completed and new elements and references
* created by the finalizer are considered in the next pass.
*/
/*===

33
tests/ecmascript/test-dev-refzero-long-cascade.js

@@ -0,0 +1,33 @@
/*
* Create a very long refzero cascade. Ensures that such cascades are
* handled without a deep C stack.
*/
/*===
build object
start cascade
end cascade
done
===*/
function test() {
var obj;
var i;
print('build object');
obj = {};
for (i = 0; i < 1e7; i++) {
obj = { ref: obj };
}
// On return 'obj' becomes unreachable and is freed.
print('start cascade');
}
try {
test();
print('end cascade');
} catch (e) {
print(e.stack || e);
}
print('done');

2
tests/ecmascript/test-dev-yield-from-getset.js

@@ -39,11 +39,13 @@ function test_set() {
try {
test_get();
} catch (e) {
//print(e.stack);
print(e.name);
}
try {
test_set();
} catch (e) {
//print(e.stack);
print(e.name);
}

3
tools/configure.py

@@ -456,6 +456,7 @@ def main():
'duk_heap.h',
'duk_heap_hashstring.c',
'duk_heaphdr.h',
'duk_heap_finalize.c',
'duk_heap_markandsweep.c',
'duk_heap_memory.c',
'duk_heap_misc.c',
@@ -466,7 +467,6 @@ def main():
'duk_hobject_alloc.c',
'duk_hobject_class.c',
'duk_hobject_enum.c',
'duk_hobject_finalizer.c',
'duk_hobject.h',
'duk_hobject_misc.c',
'duk_hobject_pc2line.c',
@@ -496,6 +496,7 @@ def main():
'duk_lexer.h',
'duk_numconv.c',
'duk_numconv.h',
'duk_refcount.h',
'duk_regexp_compiler.c',
'duk_regexp_executor.c',
'duk_regexp.h',

3
util/dist.py

@@ -366,6 +366,7 @@ def main():
'duk_heap.h',
'duk_heap_hashstring.c',
'duk_heaphdr.h',
'duk_heap_finalize.c',
'duk_heap_markandsweep.c',
'duk_heap_memory.c',
'duk_heap_misc.c',
@@ -377,7 +378,6 @@ def main():
'duk_hobject_alloc.c',
'duk_hobject_class.c',
'duk_hobject_enum.c',
'duk_hobject_finalizer.c',
'duk_hobject.h',
'duk_hobject_misc.c',
'duk_hobject_pc2line.c',
@@ -405,6 +405,7 @@ def main():
'duk_lexer.h',
'duk_numconv.c',
'duk_numconv.h',
'duk_refcount.h',
'duk_regexp_compiler.c',
'duk_regexp_executor.c',
'duk_regexp.h',

23
util/error_inject_test.sh

@@ -0,0 +1,23 @@
#!/bin/bash
#
# Some error injection tests
#
set -x
for i in 1 2 3 4; do
echo "Heap alloc inject: $i"
rm -f duk; rm -rf ./prep
python2 tools/configure.py --output-directory ./prep --source-directory ./src-input --config-metadata config --line-directives -DDUK_USE_INJECT_HEAP_ALLOC_ERROR=$i -DDUK_USE_ASSERTIONS
gcc -o duk -std=c99 -Wall -g -ggdb -DDUK_CMDLINE_PRINTALERT_SUPPORT -I./prep -Iextras/print-alert prep/duktape.c examples/cmdline/duk_cmdline.c extras/print-alert/duk_print_alert.c -lm
valgrind --leak-check=full --error-exitcode=123 ./duk
EXITCODE=$?
echo "Exit code is $EXITCODE"
if [ $EXITCODE -eq 1 ]; then echo "OK: 'duk' returned error, but no fatal error";
elif [ $EXITCODE -eq 134 ]; then echo "FAILED: ABORTED, fatal error, should not happen"; exit 1;
elif [ $EXITCODE -eq 123 ]; then echo "FAILED: valgrind reported error, probably a leak, should not happen"; exit 1;
else echo "Unexpected exit code $EXITCODE, should not happen"; exit 1;
fi
done
# FIXME: object resize

13
util/makeduk_base.yaml

@@ -12,11 +12,17 @@ DUK_USE_FATAL_HANDLER:
DUK_USE_SELF_TESTS: true
#DUK_USE_ASSERTIONS: true
#DUK_USE_GC_TORTURE: true
#DUK_USE_SHUFFLE_TORTURE: true
#DUK_USE_FINALIZER_TORTURE: true
#DUK_USE_STRTAB_TORTURE: true
#DUK_USE_INJECT_HEAP_ALLOC_ERROR: 1 # 1 to 4
#DUK_USE_PACKED_TVAL: false
#DUK_USE_ALIGN_BY: 4
#DUK_USE_ALIGN_BY: 8
#DUK_USE_REFERENCE_COUNTING: false
#DUK_USE_DOUBLE_LINKED_HEAP: false
#DUK_USE_MARK_AND_SWEEP: false
#DUK_USE_VOLUNTARY_GC: false
@@ -25,11 +31,6 @@ DUK_USE_SELF_TESTS: true
#DUK_USE_VERBOSE_ERRORS: false
#DUK_USE_PARANOID_ERRORS: true
#DUK_USE_AUGMENT_ERRORS: false
#DUK_USE_GC_TORTURE: true
#DUK_USE_SHUFFLE_TORTURE: true
#DUK_USE_REFZERO_FINALIZER_TORTURE: true
#DUK_USE_MARKANDSWEEP_FINALIZER_TORTURE: true
#DUK_USE_STRTAB_TORTURE: true
#DUK_USE_STRICT_DECL: false
#DUK_USE_REGEXP_SUPPORT: false
#DUK_USE_ES6_REGEXP_SYNTAX: false
@@ -42,7 +43,7 @@ DUK_USE_DEBUGGER_SUPPORT: true
DUK_USE_DEBUGGER_DUMPHEAP: true
DUK_USE_DEBUGGER_INSPECT: true
#DUK_USE_DEBUGGER_THROW_NOTIFY: false
#DUK_USE_DEBUGGER_PAUSE_UNCAUGHT: false
#DUK_USE_DEBUGGER_PAUSE_UNCAUGHT: true
#DUK_USE_DEBUGGER_TRANSPORT_TORTURE: true
#DUK_USE_JX: false

4
website/api/duk_set_finalizer.yaml

@@ -13,6 +13,10 @@ summary: |
as if no finalizer was set. To delete a finalizer from an object, set it
to <code>undefined</code>.</p>
<div class="note">
Finalizer on a Proxy object is currently unsupported.
</div>
example: |
duk_ret_t my_finalizer(duk_context *ctx) {
/* Object being finalized is at stack index 0. */

4
website/guide/finalization.html

@@ -62,8 +62,8 @@ finalizer are also silently ignored.</p>
<li>A finalizer is called exactly once, at the latest when the heap is
destroyed, unless the object is rescued by making it reachable again.
An object may be rescued by its own finalizer, or by another object's
finalizer when mark-and-sweep finalizes an object. For example, if
<code>X.ref = Y</code>, and both X and Y become unreachable, it's
finalizer when mark-and-sweep finalizes a set of objects. For example,
if <code>X.ref = Y</code>, and both X and Y become unreachable, it's
possible for Y's finalizer to run, and later on X's finalizer to rescue
both X and Y.</li>
<li>An object may be rescued an arbitrary number of times; the finalizer
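
Rescue works the same way for C-registered finalizers: making the object reachable again before the finalizer returns rescues it. A minimal sketch following the documented duk_set_finalizer() example pattern; the "rescued" global property name is arbitrary:

duk_ret_t rescue_finalizer(duk_context *ctx) {
	/* Object being finalized is at stack index 0. */
	duk_dup(ctx, 0);
	duk_put_global_string(ctx, "rescued");  /* reachable again -> rescued */
	return 0;
}

/* Attach to an object: */
duk_push_object(ctx);
duk_push_c_function(ctx, rescue_finalizer, 1 /*nargs*/);
duk_set_finalizer(ctx, -2);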

2
website/guide/limitations.html

@@ -163,7 +163,7 @@ g = null; // collected immediately, finalizer runs
h.prototype.constructor = null;
h = null; // collected immediately, finalizer runs
// no-op with refcount only, with mark-and-sweep finalizer for 'f' runs
// mark-and-sweep triggers finalizer for 'f'
Duktape.gc();
</pre>
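
The same forced collection is available to embedders through the public duk_gc() API; calling it twice is the commonly documented way to make sure objects whose finalizers ran in the first pass are actually freed in the second:

duk_gc(ctx, 0 /*flags*/);  /* mark-and-sweep; finalizers may run */
duk_gc(ctx, 0 /*flags*/);  /* second pass frees objects finalized above */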
