The array part of a table wastes too much space, due to padding.
To avoid that, we need to store values in the array as something
different from a TValue. Therefore, the API for table access
should not assume that any value in a table lives in a *TValue.
This commit is the first step toward removing that assumption: the
luaH_get* functions, instead of returning a *TValue where the value
lives, now receive a *TValue in which to put the value being accessed.
(The luaH_set* functions still have to be changed.)
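A minimal sketch of the difference, with hypothetical, simplified
stand-ins for the internal types (the real TValue and Table in
lobject.h are more involved):

    typedef struct TValue { int tt; double n; } TValue;
    typedef struct Table { TValue slot[8]; } Table;

    /* old style: the result points into the table, so the accessed
       value must already live in an addressable TValue there */
    static const TValue *get_old (Table *t, int k) {
      return &t->slot[k];
    }

    /* new style: the caller says where to put the value, so the table
       may keep it internally in any compact form and only build a
       full TValue on demand */
    static void get_new (Table *t, int k, TValue *res) {
      *res = t->slot[k];  /* copy out instead of exposing the slot */
    }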
When integer keys do not form a sequence, it is better to use all their
bits to compute their hashes. (The previous implementation was quite bad
for integer keys with common lower bits, and disastrous for integer keys
changing only in their upper 32 bits.)
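The idea, as an illustrative sketch (not Lua's actual 'hashint'):
reduce the key modulo an odd number derived from the number of hash
buckets, so that every bit of the key can influence the chosen bucket;
a power-of-two mask such as 'key & (nbuckets - 1)' would ignore the
upper bits entirely.

    #include <stdint.h>

    static unsigned int hash_int_sketch (uint64_t key,
                                         unsigned int nbuckets) {
      return (unsigned int)(key % ((nbuckets - 1) | 1u));
    }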
More uses of macros 'likely'/'unlikely' (renamed to
'l_likely'/'l_unlikely'), both in range (extended to the
libraries) and in scope (extended to hooks, stack growth).
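A sketch of how such macros are commonly defined (Lua's own versions
live in llimits.h and carry extra guards not reproduced here):

    #if defined(__GNUC__)
    #define l_likely(x)    (__builtin_expect(((x) != 0), 1))
    #define l_unlikely(x)  (__builtin_expect(((x) != 0), 0))
    #else
    #define l_likely(x)    (x)
    #define l_unlikely(x)  (x)
    #endif

    /* typical use: annotate the rare path, e.g.
       if (l_unlikely(n > limit)) ... handle the error ... */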
Fixed the bug that a key removed from a table might not be found
again by 'next'. (This is needed to allow keys to be removed during a
traversal.) This bug was introduced in commit 73ec04fc.
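A sketch with the public C API of the pattern this guarantees
('clear_table' is a hypothetical helper; 't' is an absolute stack
index of a table):

    #include "lua.h"

    static void clear_table (lua_State *L, int t) {
      lua_pushnil(L);                /* first key */
      while (lua_next(L, t) != 0) {  /* pushes key and value */
        lua_pop(L, 1);               /* discard value, keep key */
        lua_pushvalue(L, -1);        /* duplicate the key ... */
        lua_pushnil(L);              /* ... and pair it with nil */
        lua_settable(L, t);          /* t[key] = nil removes the entry */
      }                              /* key left on top feeds lua_next */
    }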
Tables were using bit 7 of their 'marked' field to indicate that their
array sizes were real ('isrealasize'), but that bit can be useful for
tests. Instead, they can use bit 7 of their 'flags' field for that
purpose. (There are only six fast-access metamethods, so that bit is
free.) The 'flags' field only exists in tables, so this use does not
affect other types.
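An illustrative sketch of the new arrangement (simplified
declarations, not the exact ones from the sources): the six low bits
of 'flags' cache the absence of the fast-access metamethods, leaving
bit 7 free.

    typedef unsigned char lu_byte;

    typedef struct Table {
      lu_byte flags;  /* bits 0-5: "no metamethod" cache; bit 7: size flag */
      /* ... other fields omitted ... */
    } Table;

    #define BITRAS             (1 << 7)
    #define isrealasize(t)     (!((t)->flags & BITRAS))
    #define setrealasize(t)    ((t)->flags &= (lu_byte)~BITRAS)
    #define setnorealasize(t)  ((t)->flags |= BITRAS)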
Instead of an explicit value (field 'b'), true and false use different
tag variants. This avoids reading an extra field and results in more
direct code. (Most code that uses booleans needs to distinguish between
true and false anyway.)
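A simplified sketch of the tag-variant scheme (reduced to the
essentials; the real definitions are in lobject.h): the basic type
sits in the low four bits of the tag and the variant in the bits above
it, so true and false become two tags of LUA_TBOOLEAN and the value
carries no boolean payload.

    #define LUA_TBOOLEAN      1
    #define makevariant(t,v)  ((t) | ((v) << 4))

    #define LUA_VFALSE  makevariant(LUA_TBOOLEAN, 0)
    #define LUA_VTRUE   makevariant(LUA_TBOOLEAN, 1)

    typedef struct TValue { unsigned char tt_; /* no 'b' field */ } TValue;

    /* truth tests now read only the tag */
    #define ttisfalse(o)  ((o)->tt_ == LUA_VFALSE)
    #define ttistrue(o)   ((o)->tt_ == LUA_VTRUE)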
- Several details in 'lcode.c'
- A few more tests for code generation
- Bug in an assert in 'lcode.c' ("=" instead of "==")
- Comments in 'lopcodes.h' and 'ltable.c'
- The preprocessor must work with at least 'long', and therefore must
do shifts of up to 31 bits correctly.
- Whenever possible, use unsigned types in shifts.
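Two illustrative fragments for the points above (not taken from the
Lua sources). C89 already requires '#if' arithmetic to be done with at
least the range of 'long'/'unsigned long', so a 31-bit shift is safe
there; in ordinary code, an unsigned operand keeps the same shift from
overflowing a signed type:

    #if (1UL << 31) != 0      /* evaluated with at least 'long' range */
    /* ... */
    #endif

    static unsigned long mask = 1UL << 31;  /* ok: unsigned shift */
    /* static long bad = 1L << 31;  undefined if 'long' has 32 bits */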
Checks of the form '1 <= x && x <= M' were rewritten in the form
'(unsigned)x - 1 < (unsigned)M', which is usually more efficient.
(Other similar checks have similar translations.) Although
some compilers do these optimizations, that does not happen
for all compilers or all cases.
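A sketch of the transformation (a standard strength-reduction trick,
written here in a generic form, not copied from the Lua sources): for
a positive limit M, subtracting 1 in unsigned arithmetic sends x == 0
and negative x to very large values, so one unsigned comparison
replaces the two signed ones.

    static int in_range_old (long x, long M) {
      return 1 <= x && x <= M;                          /* two tests */
    }

    static int in_range_new (long x, long M) {
      return (unsigned long)x - 1u < (unsigned long)M;  /* one test */
    }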
If the key is dead, it cannot be given to 'next'. Instead, we now use
a 'table' tag without the collectable bit, which makes it a unique
tag, good enough to reserve the entry's space.
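An illustrative sketch of why that tag works as a marker (LUA_TTABLE,
BIT_ISCOLLECTABLE, and ctb are as in lobject.h; the two marker names
are only for illustration). Every live collectable value carries the
collectable bit in its tag, so a table tag with that bit cleared can
never belong to a valid key, yet it still keeps the entry occupied:

    #define LUA_TTABLE         5
    #define BIT_ISCOLLECTABLE  (1 << 6)
    #define ctb(t)             ((t) | BIT_ISCOLLECTABLE)

    #define LIVE_TABLE_TAG  ctb(LUA_TTABLE)  /* tag of a real table value */
    #define DEAD_KEY_TAG    LUA_TTABLE       /* same type, collectable bit off */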