CoreFoundation and memory allocators – why bother?

Any seasoned C++ programmer knows that object allocation costs CPU cycles, and may cost lots of them. The language itself provides various ways to allocate objects. Such variety might surprise folks who come from friendlier languages, especially ones with garbage collection. But that’s only the beginning. Any C++ Jedi knows about custom allocation strategies, such as memory pools, buddy allocation or grow-only allocators, can write a general-purpose memory allocator (probably quite a crappy one) and so forth.
Does it help? Sometimes a custom allocator allows tuning up an application’s performance by exploiting specific knowledge about the properties of the system.
Does it mean that it might be a good idea to write your own malloc() implementation? Absolutely not. It’s a good challenge for educational purposes, but it will almost never bring any performance benefits.

So what about Cocoa in this respect?

At the Foundation level, Objective-C once had some options to customize the allocation process via NSZone, but they were discarded with the transition to ARC. Swift, on the other hand, AFAIK doesn’t even pretend to provide any allocation options.

At the CoreFoundation level, many APIs accept a pointer to a memory allocator (CFAllocatorRef) as the first parameter. kCFAllocatorDefault or NULL is passed to use the default allocator, i.e. CFAllocatorGetDefault(), which in most cases is kCFAllocatorSystemDefault. CoreFoundation also provides a set of APIs to manipulate the allocation process:
– CFAllocatorCreate
– CFAllocatorAllocate
– CFAllocatorReallocate
– CFAllocatorDeallocate
– CFAllocatorGetPreferredSizeForSize
– CFAllocatorGetContext
The overall mechanics of CFAllocatorRef are quite well documented and, even better, it’s always possible to take a look at the source code of CoreFoundation. So it’s perfectly fine to use a custom memory allocator at the CoreFoundation level.
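
As a quick illustration of that API surface, here’s a minimal sketch (assuming <CoreFoundation/CoreFoundation.h> is included; the function name and sizes are arbitrary, chosen just for this example) that allocates, reallocates and frees a block through the default allocator:

void AllocatorApiExample()
{
  // NULL and kCFAllocatorDefault both mean "use CFAllocatorGetDefault()".
  void *block = CFAllocatorAllocate(kCFAllocatorDefault, 256, 0);

  // An allocator may round a request up; this reports the likely allocated size.
  const CFIndex preferred = CFAllocatorGetPreferredSizeForSize(kCFAllocatorDefault, 256, 0);
  (void)preferred;

  block = CFAllocatorReallocate(kCFAllocatorDefault, block, 512, 0);
  CFAllocatorDeallocate(kCFAllocatorDefault, block);
}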

“What for?” might be a reasonable question here. Introducing any additional low-level component also implies a maintenance burden in the future, so there should be some heavy pros to bother with custom memory allocation. Traditionally, the Achilles’ heel of general-purpose memory allocators is dealing with many allocations and subsequent deallocations of small amounts of memory. There are plenty of optimization techniques developed for such workloads, so why not try them on Cocoa?

Suppose we want to spend as little time on memory allocation as possible. Nothing is faster than allocating memory on the stack, obviously. But there are some issues with stack-based allocation:
• The stack is not limitless. A typical program that does nothing tricky is very unlikely to hit the stack limit, but that’s not an excuse to carelessly use alloca() everywhere – it will strike back eventually.
• Deallocating stack-based memory in an arbitrary order is painful and takes time to manage. In a perfect world, however, both allocation and deallocation would have O(1) time complexity.
• All allocated objects must be freed before execution escapes the allocator’s visibility scope, otherwise any access to a “leaked” object leads to undefined behavior.
To mitigate these issues, a compromise strategy exists:
• Use stack memory when possible, fall back to a general-purpose memory allocator otherwise.
• Do increase the stack pointer on allocations, but don’t decrease it upon deallocations.
With this approach, allocations are blazingly fast most of the time, while it’s still possible to serve requests for big memory chunks. The third issue, however, falls onto the developer, since the memory allocator can only help with some diagnostics. It’s incredibly easy to write such a memory allocator; the main steps are described below.

The stack-based allocator is conceptually a classic C++ RAII object. It’s assumed that the client source code is compiled as C++ or as Objective-C++. The only public method, apart from the constructor and the destructor, provides a CFAllocatorRef pointer to pass into CoreFoundation APIs. The internal state of the allocator consists of the stack buffer itself, a stack pointer, two allocation counters for diagnostic purposes and the CFAllocatorRef pointer.

struct CFStackAllocator
{
  CFStackAllocator() noexcept;
  ~CFStackAllocator() noexcept;
  inline CFAllocatorRef Alloc() const noexcept { return m_Alloc; }
private:
  CFAllocatorRef Construct() noexcept;
  static void *DoAlloc(CFIndex _alloc_size, CFOptionFlags _hint, void *_info);
  static void DoDealloc(void *_ptr, void *_info);

  static const int m_Size = 4096 - 16;
  char m_Buffer[m_Size];         // the stack buffer itself
  int m_Left;                    // free bytes left in m_Buffer
  short m_StackObjects;          // live objects allocated from the buffer
  short m_HeapObjects;           // live objects that fell back to malloc()
  const CFAllocatorRef m_Alloc;  // the CoreFoundation-facing frontend
};

To initialize the object, the constructor fills counters with defaults and creates a CFAllocatorRef frontend. Only two callbacks are required to build a working CFAllocatorRef: CFAllocatorAllocateCallBack and CFAllocatorDeallocateCallBack.

CFStackAllocator::CFStackAllocator() noexcept:
  m_Left(m_Size),
  m_StackObjects(0),
  m_HeapObjects(0),
  m_Alloc(Construct())
{}

CFAllocatorRef CFStackAllocator::Construct() noexcept
{
  CFAllocatorContext context = {
    0,          // version
    this,       // info - passed back into the callbacks below
    nullptr,    // retain
    nullptr,    // release
    nullptr,    // copyDescription
    DoAlloc,    // allocate
    nullptr,    // reallocate
    DoDealloc,  // deallocate
    nullptr     // preferredSize
  };
  return CFAllocatorCreate(kCFAllocatorUseContext, &context);
}
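
The destructor isn’t shown above. A minimal sketch (an assumption, not necessarily the exact code from the repository, and relying on <cstdio> for the diagnostic output) only has to release the CFAllocatorRef, and can use the counters to report objects that escaped the allocator’s scope:

CFStackAllocator::~CFStackAllocator() noexcept
{
  CFRelease(m_Alloc);
  // Purely diagnostic: any object still alive at this point will dangle
  // once the stack buffer is gone.
  if( m_StackObjects != 0 || m_HeapObjects != 0 )
    fprintf(stderr, "CFStackAllocator: %d object(s) leaked!\n",
            m_StackObjects + m_HeapObjects);
}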

To allocate a memory block, it’s only necessary to check whether the requested block fits into the stack buffer. If it does, the allocation itself consists of nothing more than updating the free-space counter. Otherwise, the allocation falls back to the generic malloc().

void *CFStackAllocator::DoAlloc
  (CFIndex _alloc_size, CFOptionFlags _hint, void *_info)
{
  auto me = (CFStackAllocator *)_info;
  if( _alloc_size <= me->m_Left ) {
    void *v = me->m_Buffer + m_Size - me->m_Left;
    me->m_Left -= _alloc_size;
    me->m_StackObjects++;
    return v;
  }
  else {
    me->m_HeapObjects++;
    return malloc(_alloc_size);
  }
}

To deallocate a previously allocated block, it’s only necessary to check whether that allocation was dispatched to malloc() and to call free() accordingly. The check is a simple pointer comparison against the bounds of the stack buffer.

void CFStackAllocator::DoDealloc(void *_ptr, void *_info)
{
  auto me = (CFStackAllocator *)_info;
  if( _ptr < me->m_Buffer || _ptr >= me->m_Buffer + m_Size ) {
    free(_ptr);
    me->m_HeapObjects--;
  }
  else {
    me->m_StackObjects--;
  }
}
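
To tie these pieces together, here’s a minimal usage sketch (illustration only): the allocator lives on the stack, CoreFoundation objects are created through it, and every such object must be released before the allocator goes out of scope, per the third caveat above.

{
  CFStackAllocator alloc; // ~4KB buffer on the stack
  const auto str = CFStringCreateWithCString(alloc.Alloc(),
                                             "hello world",
                                             kCFStringEncodingUTF8);
  // ... use str with any CoreFoundation API ...
  CFRelease(str); // must happen before alloc is destroyed
}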

To measure the performance difference between the default Objective-C allocation, the default CoreFoundation allocator and the CFStackAllocator, the following task was executed:
Given N UTF-8 strings, calculate the hash values of their lowercased and canonically normalized counterparts.

An Objective-C variant of the computation:

unsigned long Hash_NSString( const vector<string> &_data )
{
  unsigned long hash = 0;
  @autoreleasepool {
    for( const auto &s: _data ) {
      const auto nsstring = [[NSString alloc] initWithBytes:s.data()
                                                     length:s.length()
                                                   encoding:NSUTF8StringEncoding];
      hash += nsstring.lowercaseString.decomposedStringWithCanonicalMapping.hash;
    }
  }
  return hash;
}

A CoreFoundation counterpart of this task:

unsigned long Hash_CFString( const vector<string> &_data )
{
  unsigned long hash = 0;
  const auto locale = CFLocaleCopyCurrent();
  for( const auto &s: _data ) {
    const auto cfstring = CFStringCreateWithBytes(0,
                                                  (UInt8*)s.data(),
                                                  s.length(),
                                                  kCFStringEncodingUTF8,
                                                  false);
    const auto cfmstring = CFStringCreateMutableCopy(0, 0, cfstring);
    CFStringLowercase(cfmstring, locale);
    CFStringNormalize(cfmstring, kCFStringNormalizationFormD);
    hash += CFHash(cfmstring);
    CFRelease(cfmstring);
    CFRelease(cfstring);
  }
  CFRelease(locale);
  return hash;
}

A CoreFoundation counterpart using a stack-based memory allocation:

unsigned long Hash_CFString_SA( const vector<string> &_data )
{
  unsigned long hash = 0;
  const auto locale = CFLocaleCopyCurrent();
  for( const auto &s: _data ) {
    CFStackAllocator alloc;
    const auto cfstring = CFStringCreateWithBytes(alloc.Alloc(),
                                                  (UInt8*)s.data(),
                                                  s.length(),
                                                  kCFStringEncodingUTF8,
                                                  false);
    const auto cfmstring = CFStringCreateMutableCopy(alloc.Alloc(), 0, cfstring);
    CFStringLowercase(cfmstring, locale);
    CFStringNormalize(cfmstring, kCFStringNormalizationFormD);
    hash += CFHash(cfmstring);
    CFRelease(cfmstring);
    CFRelease(cfstring);
  }
  CFRelease(locale);
  return hash;
}

And here are the results. These functions were called on the same data set consisting of 1,000,000 randomly generated strings of varying lengths.

On the provided range of data sets, the CoreFoundation+CFStackAllocator variant is 20%–50% faster than the pure Objective-C implementation and 7%–20% faster than the pure CoreFoundation implementation. It’s easy to observe that the Δ between the timings is almost constant and represents the difference in time spent on memory-management tasks. To be precise, the CoreFoundation+CFStackAllocator variant spends ~800ms less on those tasks than the Objective-C variant and ~270ms less than the pure CoreFoundation variant. Divided by the number of strings, this Δ is ~800ns and ~270ns per string accordingly.
Stack-based memory allocation is a micro-optimization, of course, but it might be very useful in a time-critical execution path. The complete source code of the CFStackAllocator and of the benchmark is available in this repository: https://github.com/mikekazakov/CFStackAllocator.

Hacking NSMenu keyboard navigation

AppKit’s menu subsystem is perhaps one of the most rigid and monolithic components in Cocoa. While it has a quite clear API, it contradicts many idioms assumed in Cocoa, especially regarding customization support.

The NSMenu subsystem provides, basically, two major customization points:

  • You can set your own NSMenuDelegate implementation to update the structure of a specific menu instance. All of NSMenuDelegate’s hooks are called before the menu is displayed.
  • You can put a custom view inside an NSMenuItem to present a non-standard or area-specific type of information. Microsoft Office for Mac, for example, has loads of such custom menu elements.

Apart from these points, you can’t control the menu’s behavior. Apparently, there are good reasons for that rigidity. A lot of under-the-hood work is done without even involving the app side, for instance the Help->Search functionality triggered via the Cmd+Shift+? hotkey. Generally speaking, it’s the principle of least astonishment in action.

What if, however, you want to slightly alter the default input-processing routine? The universal answer is – “then go and write your own implementation from scratch”. This WWDC session contains more details: “Key Event Handling in Cocoa Applications“. Of course, writing an additional UI component of several KSLoC might be an entertaining adventure, but polishing every detail of such a generic metaphor can easily eat up weeks of work. And, it turns out, there are usually more important tasks to do.

In my specific case, I wanted to optimize the keyboard navigation of one particular UX pattern in Nimble Commander. It provides hotkeys to show popup menus with various places to navigate to, like favorite locations, mounted volumes, network connections etc. After such a popup menu has been shown, the user can choose an appropriate location via arrow keys or via the mouse. NC also sets up hotkeys for the first 12 elements: [1, 2, …, 0, -, =]. So for short lists everything is fine, and a required location can be chosen with two keystrokes – the first to pop up the menu and the second to trigger a menu item. Longer lists, however, can become hard to navigate, since a required item can be reached only with the arrow keys. Yes, there’s some basic letter-based navigation implemented in menus, but it’s absolutely rudimentary and won’t help much. What I wanted was plain and simple – to type a few letters while the menu is open and to show only the items which contain this substring in their titles. Something similar to this:

No googling gave me any hints on possible approaches to intercepting NSMenu input. Cocoa’s menu subsystem doesn’t use the regular NSWindow/NSView event routing, so the usual hooks can’t be applied in this situation. [NSEvent addLocalMonitorForEventsMatchingMask:handler:] doesn’t do a thing – there’s simply no NSEvent object to process. Perhaps [NSEvent addGlobalMonitorForEventsMatchingMask:handler:] could work, but it’s too heavy a tool for a mere UI element.
What else? Anyone who has debugged their code after it was called from a menu item might have noticed that the call stack was full of identifiers containing the word “Carbon”. While the majority of the Carbon API was marked as deprecated years ago, the menu subsystem still heavily relies on that code.
Well, that’s something at least. Luckily, the event-processing part of Carbon is still available and hasn’t even been marked as deprecated.

So, with that tested and confirmed, it’s possible to move on to the actual “hack” implementation. Here are the steps to intercept an incoming NSMenu event:
1. Make a custom menu.
2. Put an item with a custom view inside it.
3. Override the [NSView viewDidMoveToWindow] method of that custom view.
4. Retrieve the current Carbon event dispatcher via GetEventDispatcherTarget().
5. Install a new event handler for the appropriate event kinds via InstallEventHandler().

Here’s the code snippet (Objective-C++):

- (void) viewDidMoveToWindow
{
  [super viewDidMoveToWindow];

  if( m_EventHandler != nullptr ) {
    RemoveEventHandler(m_EventHandler);
    m_EventHandler = nullptr;
  }

  if( const auto window = self.window ) {
    if( ![window.className isEqualToString:@"NSCarbonMenuWindow"] ) {
      NSLog(@"Sorry, but MGKMenuWithFilter was designed to work with NSCarbonMenuWindow.");
      return;
    }
 
    const auto dispatcher = GetEventDispatcherTarget();
    if( !dispatcher ) {
      NSLog(@"GetEventDispatcherTarget() failed");
      return;
    }

    EventTypeSpec evts[2];
    evts[0].eventClass = kEventClassKeyboard;
    evts[0].eventKind = kEventRawKeyDown;
    evts[1].eventClass = kEventClassKeyboard;
    evts[1].eventKind = kEventRawKeyRepeat;
    const auto result = InstallEventHandler(dispatcher,
                                            CarbonCallback,
                                            2,
                                            &evts[0],
                                            (__bridge void*)self,
                                            &m_EventHandler);
    if( result != noErr ) {
      NSLog(@"InstallEventHandler() failed");
    }
  }
}

At this point, most keyboard events are passed to the custom Carbon callback, except for previously set key equivalents. The callback can hand these events over to the custom processing and, if an event wasn’t consumed there, send it back along the default route:

static OSStatus CarbonCallback(EventHandlerCallRef _handler,
                               EventRef _event,
                               void *_user_data)
{
  if( !_event || !_user_data )
    return noErr;

  const auto menu_item = (__bridge MGKFilterMenuItem*)_user_data;
 
  const auto processed = [menu_item processInterceptedEvent:_event];
 
  if( processed )
    return noErr;
  else
    return CallNextEventHandler( _handler, _event );
}

It’s possible to convert an EventRef into an NSEvent object with the [NSEvent eventWithEventRef:] initializer. Once the NSEvent is obtained, any appropriate handling can be applied (a sketch of such a handler follows the list below). In the current implementation, the top menu item contains an NSTextField with the search criteria, which is shown only when some filter has been entered. After the string inside this text field is updated, control is handed to the menu object so it can hide or show matching items; this process is quite straightforward. There are some details worth mentioning here:

  • It’s quite convenient to automatically select the first matching menu item after applying a filter if there was no selection before. NSMenu does not provide such an interface, but it can be added with another dirty hack:
- (void) higlightCustomItem:(NSMenuItem*)_item
{
  static const auto selHighlightItem = NSSelectorFromString(@"highlightItem:");
  static const auto hack_works = (bool)[self respondsToSelector:selHighlightItem];
  if( hack_works ) {
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Warc-performSelector-leaks"
    [self performSelector:selHighlightItem withObject:_item];
#pragma clang diagnostic pop
  }
}

 

  • After a filter has been applied, any previously set key equivalents must be cleared, and vice versa. Otherwise, they can be triggered unintentionally instead of altering the filtering criteria.
  • These hacks are quite dirty and, basically, nothing guarantees that they won’t stop working at any moment in the future. If (or when) the underlying NSMenu infrastructure changes radically, MGKMenuWithFilter will gracefully fall back to regular menu behavior. At the moment, however, this infrastructure seems to be pretty stable and hasn’t had major changes for years.
    Current hack compatibility at the moment:
    macOS 10.12 “Sierra” – works
    OS X 10.11 “El Capitan” – works
    OS X 10.10 “Yosemite” – works
    OS X 10.9 “Mavericks” – works
    OS X 10.8 “Mountain Lion” – works
    Mac OS X 10.7 “Lion” – works partially, there’s no [NSMenu highlightItem:]
  • The current implementation assumes a programmatic approach to populating the menu, but with minor changes it can be extended to support NIB-based menus made in Interface Builder as well.
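
Here’s a hypothetical sketch of such a handler, invoked from CarbonCallback above. The name processInterceptedEvent: matches the callback’s usage, while handleKeyDown: is purely an illustrative stand-in for the filtering logic, not something taken from the actual repository:

- (bool) processInterceptedEvent:(EventRef)_event
{
  // Convert the Carbon event into a regular NSEvent object.
  const auto event = [NSEvent eventWithEventRef:_event];
  if( !event || event.type != NSEventTypeKeyDown )
    return false;

  // Leave Cmd-modified presses alone so that key equivalents keep working.
  if( event.modifierFlags & NSEventModifierFlagCommand )
    return false;

  // Hand the typed characters to the filtering logic; returning true tells
  // CarbonCallback that the event was consumed.
  return [self handleKeyDown:event.charactersIgnoringModifiers];
}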

The source code is available in this repository.
It also includes example projects written in Swift and Objective-C, plus a project in Objective-C for pre-Yosemite versions of Mac OS X.

Updates on fixed_eytzinger_map

This is a brief report on some progress regarding fixed_eytzinger_map and code around it:

  • SFINAE expressions used in heterogeneous lookup were altered to be supported by Microsoft’s compiler (VS2015+).
  • The corresponding test suite was integrated with the Catch testing framework.
    The source code now comes with a makefile/CMakeLists.txt build script, which can compile and run the tests, e.g. “make && make test”.
  • The source code is now being automatically tested by:
    • Travis CI:
      • Linux, x86-64
      • GCC 4.8/4.9/5.0 and clang 3.6/3.7/3.8
      • C++11/C++14
    • AppVeyor:
      • Windows, x86/x86-64
      • VisualStudio 2015/2017
      • C++11
  • A corresponding GitHub repository has got a decent readme page.
  • fixed_eytzinger_map is successfully used in Nimble Commander to manage various relationships between UI elements and corresponding action shortcuts.

Currently it seems that this project can be marked as “finished”, as there’s no obvious direction for evolution.

Cache-friendly associative container

A few weeks ago I stumbled upon a nice blog post describing a flat tree layout optimized for cache locality. Another profound work, in the form of a scientific paper entitled “Array layouts for comparison-based searching”, also deserves mentioning. The basic idea is that the commonly used layout of flat maps, i.e. an array of sorted keys, has the negative side effect of suboptimal cache locality. When doing a binary search, memory is accessed at distant positions, and every such access might cause a cache miss.

The level-order layout (aka BFS or Eytzinger layout) takes a different approach. Instead of placing keys in non-descending order, it lays them out in the order of binary search traversal (a small illustration follows the list below). This leads to fewer cache misses and thus to better lookup times, especially on big trees. Of course, nothing comes for free, and there are two major drawbacks to this data structure:

  • Terrible complexity of per-element insertion/deletion, which makes such modifying operations barely usable. However, many containers are modified only during an initialization stage and accessed in read-only mode afterwards. This plays especially well with the immutable-data ideology.
  • Lack of “cheap” ordered traversal, like std::map/boost::flat_map have. A simple answer to this issue might be to accept that traversal of such a structure is not ordered, exactly as in the case of std::unordered_map.
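
To make the layout concrete, here’s a small illustrative sketch (not the library’s actual code, and assuming <vector> is included): it builds a BFS/Eytzinger layout from a sorted array by an in-order walk of the implicit tree, then searches it with the usual 2i+1 / 2i+2 index arithmetic.

static void FillEytzinger(const std::vector<int> &_sorted, std::vector<int> &_out,
                          size_t &_in_pos, size_t _node)
{
  if( _node >= _out.size() )
    return;
  FillEytzinger(_sorted, _out, _in_pos, 2 * _node + 1); // left subtree
  _out[_node] = _sorted[_in_pos++];                     // this node
  FillEytzinger(_sorted, _out, _in_pos, 2 * _node + 2); // right subtree
}

static std::vector<int> MakeEytzinger(const std::vector<int> &_sorted)
{
  std::vector<int> out(_sorted.size());
  size_t pos = 0;
  FillEytzinger(_sorted, out, pos, 0);
  return out;
}

static bool EytzingerContains(const std::vector<int> &_layout, int _key)
{
  size_t i = 0;
  while( i < _layout.size() ) {
    if( _layout[i] == _key )
      return true;
    i = _key < _layout[i] ? 2 * i + 1 : 2 * i + 2; // descend exactly like a binary search
  }
  return false;
}

// For the sorted input {1, 2, 3, 4, 5, 6, 7} the resulting layout is {4, 2, 6, 1, 3, 5, 7}:
// the hottest first levels of the tree sit at the front of the array and stay cached.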

That said, I decided to try to implement a generic STL-like associative map with the Eytzinger layout, providing an interface as close to a drop-in replacement as it can be. Here’s the fixed_eytzinger_map source code.

 

Design considerations

  • This map is supposed to be a read-only structure, i.e. it supports only construction and per-element access. However, while the order of the keys is fixed upon construction, it’s still possible to alter the values these keys map to.
  • Construction of a new map can be done with std::initializer_list or with a range of InputIterators.
  • operator= makes it possible to change the map’s content, filling it with another map or with a pair of iterators as the source.
  • operator[] has different semantics compared with std::map/std::unordered_map. It can’t add new entries, but [] is too handy for element access to discard it. In case of an absent key, the map simply throws an exception, just as at() does.
  • This map uses two separate memory blocks to store keys and values. Compared with storing std::pair<key, value> elements, which puts gaps between the keys and thus lowers the signal-to-noise ratio of every cache line fetched during a lookup, this approach leads to better memory locality.
  • Since the data is stored in separate locations, iterators can’t provide references to “real” std::pair<key, value> objects. Instead, they have to return proxy std::pair<key&, value&> objects.
  • C++14’s heterogeneous lookup can be enabled via the std::less<> comparator or any other comparator declaring an is_transparent type. Heterogeneous lookup is available for all methods involving search: at, operator[], count, find, equal_range, lower_bound and upper_bound.
  • There are no rbegin/rend methods, as iteration is not ordered by the “less” operator and reversing the internal order wouldn’t make any sense.
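
For illustration, here’s a short usage sketch of the interface described above. The exact template signature and header name should be checked against the repository; a fixed_eytzinger_map<Key, Value, Compare> declaration and a “fixed_eytzinger_map.hpp” header are assumed here:

#include "fixed_eytzinger_map.hpp" // header name assumed
#include <iostream>
#include <string>

int main()
{
  // Construction from an initializer list; std::less<> enables heterogeneous lookup.
  fixed_eytzinger_map<std::string, int, std::less<>> m {
    {"one", 1}, {"two", 2}, {"three", 3}
  };

  // Read-only lookups; operator[] throws on a missing key, just like at().
  std::cout << m["two"] << std::endl;        // prints 2
  std::cout << m.count("five") << std::endl; // prints 0, no temporary std::string is built

  // Values (but not keys) can still be modified after construction.
  m.at("three") = 33;

  // Iteration is unordered with respect to the keys and yields proxy pairs.
  for( auto p : m )
    std::cout << p.first << " -> " << p.second << std::endl;
  return 0;
}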

 

Performance tests

Various measurements were done to compare fixed_eytzinger_map with the following containers: std::map (libc++ 3.7), std::unordered_map (libc++ 3.7) and boost::flat_map (1.55).
Compilation was done with clang-800.0.42.1, x64 target, -Os.
Executed on an i7-3615QM running macOS 10.12.2.
The X axis represents the number of elements in the container, the Y axis shows μs/bytes per element.

Lookup times.
This measures how much time is spent in the .count(..) method. The sequence of keys to search for is semi-random with a fixed seed.

 

Lookup & fetch times.
Similar to the previous graph, but uses the .at(..) method to actually access the values pointed to by the keys.

 

Construction times.
This measures the time spent on the construction of an associative container from a semi-random std::vector<std::pair<Key,Value>>, performing a single lookup and cleaning up afterwards.

 

Memory consumption.
How much memory is used to construct a container. It’s a bit hard to measure directly, so what is shown here instead is the delta between the memory consumed by the test process before and after building the container.

 

Results

It’s not easy to compare these data structures directly, as each has its own unique properties. However, what can be said is that the Eytzinger layout always outperforms a sorted array (boost::flat_map) and consumes 2x to 5x less memory than std::unordered_map does. If immutability is not an issue, fixed_eytzinger_map might be worth trying at least.