Make sure that we don't miss anything and cover the rest of bytevector
accessors with tests for unaligned memory access. Include both integers
and floats, in little- and big-endian flavors.
Native code implementing bytevector accessors uses the following access
pattern:
*(intNN_t*)(base+offset)
This can result in a so-called "unaligned memory access" if the offset
is not a multiple of 2, 4, or 8, or if the base address has not been
allocated at an aligned address (unlikely).
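To make this concrete, a cast-based accessor has roughly this shape (an
illustrative sketch with made-up names, not the actual chibi-scheme
code):

#include <stddef.h>
#include <stdint.h>

/* Hypothetical example of the problematic pattern: when base+offset is
   not a multiple of 4, this dereference is an unaligned access. */
static uint32_t ref_u32_cast(const unsigned char* base, size_t offset) {
  return *(const uint32_t*)(base + offset);
}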
Most popular modern architectures--x86 and ARM--allow unaligned memory
accesses at the instruction level, but they are typically a bit slower
than properly aligned accesses.
On the other hand, there are architectures which do not allow unaligned
memory accesses. On those architectures, each load or store of a value
longer than 1 byte must use a properly aligned address. That is, u16
should be loaded from even addresses, s32 should only be stored at an
address which is a multiple of 4, and f64 (aka double) can be located
only at addresses which are multiples of 8. If the address is not
aligned, the CPU raises an exception, which typically results in the
process being killed by the operating system with a SIGBUS signal.
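For example, a pointer's alignment can be checked by testing the low
bits of its address. Here is an illustrative helper (not part of the
codebase), assuming the alignment is a power of two:

#include <stddef.h>
#include <stdint.h>

/* Illustrative: returns non-zero if p is aligned to the given
   power-of-two boundary (2, 4, 8, ...). */
static inline int is_aligned(const void* p, size_t alignment) {
  return ((uintptr_t)p & (alignment - 1)) == 0;
}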
SPARC is one of those architectures that are strict about alignment.
The current access pattern in bytevector native code can result in
unaligned accesses, which in turn result in crashes. This issue was
found because the Chibi test suite includes some tests for unaligned
accesses, and they failed on SPARC.
In order to avoid unaligned accesses, loads and stores need to be
performed a bit differently, doing 'type punning' in a safe way rather
than just casting pointers, which also breaks strict aliasing rules.
The most portable and efficient way to do this is to use memcpy().
Compilers know about this trick and generate very efficient code here,
avoiding the function call and picking the best instructions available.
(Obviously, only when optimizations are enabled.)
That is, given
#include <stdint.h>
#include <string.h>

static inline uint32_t ref_u32(const void* p) {
  uint32_t v;
  memcpy(&v, p, sizeof(v));  /* safe for any alignment of p */
  return v;
}
on x86 this will be compiled into a single "movl" instruction because
x86 allows unaligned accesses; similarly on ARM, where this becomes a
single "ldr" instruction. However, on RISC-V--another platform with
strict alignment rules--this code compiles into 4 "lbu" instructions
fetching 4 bytes, plus some more arithmetic to stitch those bytes into
a single 32-bit value.
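The store direction works the same way. A sketch of the matching
writer, with a made-up name:

#include <stdint.h>
#include <string.h>

static inline void set_u32(void* p, uint32_t v) {
  memcpy(p, &v, sizeof(v));  /* safe for any alignment of p */
}

With optimizations enabled this likewise compiles down to a single
store instruction on platforms that permit unaligned access.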
While at it, spell out the empty CHIBI_MODULE_PATH case. It's obvious
if you really think about it, but it's even better if I don't have to
read between the lines.
I grepped for getenv to check if anything was missing. There is
CHIBI_MAX_ALLOC, but I think that one is more about debugging
out-of-memory conditions than something to be customized by the user.
This fix is rather dumb, but it prevents things from crashing when
forking a lot and creating file handles. I assume that this is where
the file handles go, but I don't have a good guess.
Uploading a package is an irreversible operation. It's not even about
accidentally leaking your secret sauce to the internet: you could upload
a package to snow-fort.org by accident and pollute the package
namespace [1].
So let's ask the user first before going ahead and uploading stuff. We
only ask once even if we're going to upload a dozen packages, so it's
not that annoying. The target repo is also shown in case you want to
upload to a custom repo and want to make sure that's where it goes.
[1] I did (while attempting to upload to a local snow-fort instance
    during testing). I guess `(chibi snow commands)` is forever mine
    now.
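The ask-once behavior is roughly this shape; a C sketch of the idea
only (the actual implementation lives in Scheme, in `(chibi snow
commands)`, and the function name here is made up):

#include <stdio.h>

/* Hypothetical sketch: prompt once, cache the answer for the rest of
   the uploads in this run. */
static int confirm_upload(const char* repo) {
  static int asked = 0, ok = 0;
  char buf[16];
  if (!asked) {
    printf("Upload to %s? [y/N] ", repo);
    fflush(stdout);
    ok = fgets(buf, sizeof(buf), stdin) && (buf[0] == 'y' || buf[0] == 'Y');
    asked = 1;
  }
  return ok;
}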
Tested with Gauche. It's mostly about not importing (chibi test)
unconditionally, and importing (scheme write). And in one case I need to
exclude some tests because Gauche catches invalid call forms at compile
time. I'm not sure if that can be caught...