This is required for the Windows configuration to succeed at all. It
should also be beneficial when we start sharing object code between
the bootstrap and the actual executable.
The C standard defines signbit() as a macro returning a "non-zero
value" for negative arguments (see 7.12.3.6 of the C11 standard).
SRFI 144's flsign-bit is defined to return exactly 1.
Make sure to convert the result of the signbit() call into a "boolean
int", which is either 0 or 1.
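A minimal sketch of the conversion, assuming a wrapper around
signbit() (flsign_bit is an illustrative name, not necessarily the
one used in the tree):

    #include <math.h>

    /* Normalize signbit()'s "non-zero for negative" result to exactly
       0 or 1, as SRFI 144 requires. */
    static int flsign_bit(double x)
    {
        return signbit(x) != 0;  /* equivalently: !!signbit(x) */
    }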
This is not a theoretical issue: it causes the SRFI 144 test suite to
fail on many architectures other than x86_64.
GCC on x86_64 compiles signbit() as
movmskpd %xmm0, %eax
andl $1, %eax
which indeed returns either 0 or 1. movmskpd extracts a 2-bit sign
mask from the packed FP values in the src register and stores it in
the low-order bits of the dst register. The unneeded extra bit is
then masked out, leaving only the lowest bit set or unset.
However, other architectures don't have such conveniences and take a
more direct approach. For example, GCC on ARMv7 produces this:
sub sp, sp, #8
vstr.64 d0, [sp]
ldr r0, [sp, #4]
and r0, r0, #0x80000000
add sp, sp, #8
bx lr
which effectively returns either 0 or INT_MIN. The generated code
masks out everything but the sign bit and returns the result as is;
the value 0x80000000, read as a signed 32-bit integer, is INT_MIN,
which is certainly not 1.
Even on i386, signbit() is compiled as
fldl 4(%esp)
fxam
fnstsw %ax
fstp %st(0)
andl $512, %eax
ret
which effectively returns either 0 or 512: fxam sets the C1 bit of
the FPU status word to the sign of the FP value, then the status word
is copied to %eax, everything except the C1 "sign bit" (bit 9, i.e.
512) is masked away, and the result is returned as is.
These are used to compute averages. If they are not initialized to
zero, they might contain garbage. In fact, they almost always do on
platforms other than x86_64, failing the FFI tests. If optimizations
are enabled, these tests usually fail on x86_64 too. The reason this
went unnoticed is a contrived set of coincidences.
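A sketch of the failure mode with illustrative names (the actual
accumulators live in the FFI test code):

    #include <stdio.h>

    int main(void)
    {
        double samples[] = { 1.0, 2.0, 3.0 };
        int n = sizeof samples / sizeof samples[0];

        /* Without the explicit zero initializers these would hold
           whatever garbage happens to be on the stack. */
        double sum = 0.0;
        int count = 0;

        for (int i = 0; i < n; i++) {
            sum += samples[i];
            count++;
        }
        printf("average = %f\n", count ? sum / count : 0.0);
        return 0;
    }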