8 bytes that’ll drive you mad

Today I discovered a very strange bug in my software: after using the internal viewer in Files on a PkgInfo file inside an application package, Files crashes randomly, complaining about memory corruption.
Something like this:
Thread 1: EXC_BAD_ACCESS (code=1, address=0x0)
or this:
Files(85331,0x7fff7e8cc310) malloc: *** error for object 0x600000018a70: Heap corruption detected, free list canary is damaged
*** set a breakpoint in malloc_error_break to debug

The full story is below.


It was funny, since this viewer can easily handle gigabyte-scale files. So I blamed the latest refactoring and checked previous releases. They crashed too, with the same symptoms.
OK, I started investigating.
PkgInfo is an 8-byte file containing the text APPL????.
Copied this file to another place (some system-level locks maybe?) and viewed it. Crash.
Renamed file. Crash.
Created another 8-byte file with content 01234567 and tried it. Crash.
Created 7-byte and 9-byte files and tried them. Worked like a charm. Hmm…
It was definitely something related to alignment logic somewhere in the low-level modules.
My suspicion fell on the data analysis module, which answers questions like “what is the likely encoding of this chunk of data?” or “is this chunk of data binary or text?”.
Turned off this module and tried the 8-byte file again. Crash.
Things got worse: sometimes the viewer crashed even before its own logic (layout, rendering, navigation, etc.) got to work.
In desperation I started tracking everything down from the very beginning of the viewer’s initialization and came to my string decoders. At least that was a place where memory could get corrupted, so I switched the encoding from UTF-8 to Western (Mac OS Roman) to see if it made any difference. Worked like a charm. Hmm…
A few minutes later I finally stopped at a relatively old UTF-8 conversion function:
void InterpretUTF8BufferAsIndexedUniChar(
    const unsigned char* _input,
    size_t _input_size,
    unsigned short *_output_buf, // should be at least _input_size 16b words long
    uint32_t *_indexes_buf,      // should be at least _input_size 32b words long
    size_t *_output_sz,          // size of the output
    unsigned short _bad_symb     // something like '?' or U+FFFD
    )
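Judging by the comments, the caller is expected to pre-allocate both output buffers – at least one element per input byte. Roughly like this (my own sketch of the intended contract, not the actual calling code; the variable names are made up):

unsigned char bytes[] = "01234567";   // the 8 bytes from the test file
size_t bytes_count = 8;               // not counting the literal's trailing '\0'

// per the comments: one 16-bit word and one 32-bit index per input byte
unsigned short *unichars = (unsigned short *) calloc(bytes_count, sizeof(unsigned short));
uint32_t *indexes = (uint32_t *) calloc(bytes_count, sizeof(uint32_t));
size_t unichars_count = 0;

InterpretUTF8BufferAsIndexedUniChar(bytes, bytes_count,
                                    unichars, indexes, &unichars_count,
                                    0xFFFD); // replacement for invalid sequences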

And after tracing the decoding of the 8 bytes “01234567” I finally stopped at a humble line:
    *_output_buf = 0;
}
That’s the zero-termination of the resulting UniChar buffer. But remember the comment: // should be at least _input_size 16b words long
When I looked into the upper-level calling code, there was an exact-size memory allocation for this buffer:
m_DecodeBuffer = (UniChar*) calloc(m_FileWindow->WindowSize(), sizeof(UniChar));
So when the file is 8 bytes long, the allocated buffer holds exactly 8 UniChars – precisely what the comment asks for and not one element more (when it was 7 or 9 bytes, the allocation got rounded up to a malloc boundary, so the stray terminator landed in unused padding and had no visible effect). Zeroing one element past the end (m_DecodeBuffer[8] = 0) corrupted the heap bookkeeping structures that follow the block, which caused a crash later.
Of course this zero-termination was unnecessary, since none of the later functions operating on the UniChar buffer rely on a zero terminator.
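Since nothing relies on the terminator, the fix is trivial. A quick sketch of the two obvious options, against the snippets above – either drop the unneeded write or give it room:

// option 1: drop the unneeded zero-termination inside the decoder
//     *_output_buf = 0;    <-- simply delete this line

// option 2: keep the terminator, but then the caller has to reserve room for it
m_DecodeBuffer = (UniChar*) calloc(m_FileWindow->WindowSize() + 1, sizeof(UniChar));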
The quest was complete.

A “million files” test

Writing a file manager is definitely quite a special kind of fun. Despite its seeming simplicity, there are a lot of details to consider when implementing one. Efficient structures, algorithms and architecture can mean a lot, even though all of it stays under the hood and is invisible to the user. Since the main purpose of a file management app is navigating between folders and showing their contents, I ran some tests to show how different implementations handle this (quite simple?) task. At the moment my collection of Mac OS X file managers consists of 11 different ones (all of this software can easily be found via Google; I don’t claim to have tested every file manager for Mac OS X – there are others).
OK, to be precise – this is a stress test. And by stress I mean not the usual kind of stress, but STRESS.
The job is very simple: reading and showing the contents of a directory with 1,000,000 files. Nothing more, just that. I took my old 8 GB USB stick, plugged it into my MacBook Pro running OS X Mavericks, formatted it as HFS+ (journaled) and turned Spotlight off on this volume. Then I ran a tiny app, which I called fs_killer 🙂
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/param.h>
#include <sys/stat.h>

int main(int argc, const char * argv[]) {
    char tmp[MAXPATHLEN];
    for(int i = 0; i < 1000000; ++i) {
        // creates empty files 000000.txt .. 999999.txt
        sprintf(tmp, "/Volumes/test/%6.6d.txt", i);
        close(open(tmp, O_WRONLY|O_CREAT, S_IRUSR|S_IWUSR|S_IRGRP));
    }
    return 0;
}

Here’s the test itself: run an app, check whether it is able to open this folder, record how much memory it consumes and check whether the app is still usable (cursor movement and scrolling). Every app was given plenty of time to load the directory listing; loading time itself wasn’t measured. The comparison table, in alphabetical order, is below:
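By the way, resident memory can also be read programmatically on OS X 10.9+ via proc_pid_rusage – a minimal sketch below (the numbers in the table are approximate observations, not the output of this exact tool; reading another user’s process may fail with a permission error):

#include <libproc.h>
#include <sys/resource.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, const char *argv[]) {
    if( argc < 2 )
        return 1;
    int pid = atoi(argv[1]);

    struct rusage_info_v2 info;
    if( proc_pid_rusage(pid, RUSAGE_INFO_V2, (rusage_info_t *)&info) != 0 ) {
        perror("proc_pid_rusage");
        return 1;
    }
    // both counters are reported in bytes
    printf("resident: %llu MB, physical footprint: %llu MB\n",
           info.ri_resident_size / (1024ULL * 1024ULL),
           info.ri_phys_footprint / (1024ULL * 1024ULL));
    return 0;
}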

Results
Application name        Opened   Memory    Usable
DCommander              yes      ~1.7 GB   yes
FastCommander           yes      ~1.1 GB   yes
Nimble Commander        yes      150 MB    yes
Finder                  yes      ~2 GB     yes
ForkLift                yes      ~1.7 GB   yes
Macintosh Explorer      yes      ~1 GB     yes
Midnight Commander      yes      133 MB    yes
Moroshka File Manager   yes      160 MB    no
Mover                   yes      ~2 GB     no
muCommander             no       n/a       n/a
ZCommander              yes      850 MB    yes
A few words in conclusion. Two things can be clearly seen:
1) Midnight Commander (mc) is the winner (and the only one that runs in a console), and muCommander did very badly – it was the only one to fail the opening test.
2) These file managers can be divided into two groups by memory consumption: roughly below 200 MB and above it. I suppose this difference is a consequence of the internal data storage structure – while the first group relies on plain C/C++ memory management, the others use the Objective-C/Cocoa infrastructure to handle directory listing data, which is considerably less efficient in this respect.
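To illustrate what I mean (a rough sketch of my assumption about the storage layouts, not code taken from any of the apps above): a directory entry kept as a plain fixed-size record costs a few dozen bytes, while a per-file NSDictionary with every field boxed into an NSString/NSNumber/NSDate costs many times more once object headers, reference counting and hash tables are paid for.

#include <stdint.h>
#include <time.h>

// Compact, C-style storage: one fixed-size record per file plus a shared
// pool of filename characters. sizeof(listing_entry) is 32 bytes here, so a
// million entries is ~32 MB before the name pool - the same order of
// magnitude as the first group in the table.
struct listing_entry {
    uint32_t name_offset; // where this entry's filename starts in the pool
    uint16_t name_len;
    uint16_t mode;
    uint64_t size;
    uint64_t inode;
    time_t   mtime;
};

// The Cocoa-style alternative would keep an NSArray of NSDictionary objects,
// one per file, with every field being a separate heap-allocated,
// reference-counted object. Multiplied by a million files, that overhead is
// my explanation for the gigabyte-scale figures above.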
As a bottom line I can only give a link to Nimble Commander – it’s free now: http://magnumbytes.com/