IO2D demo: CPULoad

Introduction
It’s no secret that standard C++ is stuck in the ‘70s in terms of human-machine interaction. There is console input-output with a handful of control characters and that’s basically it. You can use tabulation, a carriage return and, if you’re lucky, a bell signal. Even such “advanced” VT100 interaction techniques as text blinking, underscoring or coloring are out of reach. Of course, it’s possible to access some platform-specific API directly, but those APIs differ quite a lot across platforms and are usually rather hostile to C++ idioms. BeOS’s InterfaceKit was, AFAIK, the only native C++ graphics API. A third-party library and/or middleware could serve as an abstraction layer, but this automatically brings a bunch of building and integration problems, especially for cross-platform software. So, displaying a simple chart or a cat photo becomes an interesting quest instead of a routine action.

At this moment there’s a proposal to add standardized 2D graphics support to C++, known as P0267 or simply IO2D. It hasn’t been published as a TS yet and there’s some controversy around it, but the proposal has been proven to be implementable on different platforms and the reference implementation is available for test usage. The paper introduces concepts of entities like surfaces, colors, paths and brushes, and defines a set of drawing operations. A combination of the available drawing primitives with a set of drawing properties already allows building quite sophisticated visualizations. Capabilities of the drawing operations generally resemble Microsoft’s GDI+/System.Drawing or Apple’s Quartz/CoreGraphics. The major difference is that IO2D employs a stateless drawing model instead of sequential state setup and execution.
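As a rough illustration of that stateless style (a sketch built from the same calls used later in this post, relying on the defaulted optional parameters), every drawing request carries its brush, geometry and properties explicitly instead of mutating a shared context state:

// Hypothetical draw callback: nothing is configured beforehand,
// the stroke() call itself receives the brush and the path.
auto draw = [](output_surface& surface) {
  auto pb = path_builder{};
  pb.new_figure({10.f, 10.f});
  pb.line({90.f, 90.f});
  surface.stroke(brush{rgba_color::red}, interpreted_path{pb});
};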

The implementation of the proposed graphics library consists of two major components: a public library interface and a platform-specific backend (or multiple backends). The public interface provides a stable set of user-facing classes like “image_surface”, “brush” or “path_builder” and does not contain any details about the actual rendering process. Instead, it delegates all requests down to the specified graphics backend. The backend has to provide the actual geometry processing, rendering and interaction with a windowing system. To do that, ideally, the backend should talk directly to an underlying operating system and its graphics interfaces. Or, as a “fallback solution”, it can translate requests to some cross-platform library or middleware.

The CPULoad demo
There are several sample projects available in the RefImpl repository; their purpose is to demonstrate the capabilities of the library and to show various usage techniques. The rest of this post contains a step-by-step walkthrough of the CPULoad example. This demo shows graphs of CPU usage on a per-core basis, which looks like this:

The sample code fetches the CPU usage information every 100ms and redraws these “Y=Usage(X)” graphs upon each frame update. The DataSource class provides functionality to fetch new data and to access existing entries via this interface:

class DataSource {
public: 
  void Fetch();
  int CoresCount() const noexcept;
  int SamplesCount() const noexcept;
  float At(int core, int sample) const noexcept;
  […]
};

The profiler routine is the only platform-specific part; everything else is cross-platform and runs identically on Windows, Mac and Linux.

The data presentation consists of several parts:
– Window creation and redraw cycle;
– Clearing the window background;
– Drawing the vertical grid lines;
– Drawing the horizontal grid lines;
– Filling the graphs with gradients;
– Outlining the graph contours.

Window creation and redraw cycle
This sample uses a so-called “managed output surface”, which means that the caller doesn’t need to worry about window management and can simply delegate these tasks to IO2D. Only 3 steps are required to get windowed output:
– Create an output_surface object with properties like desired size, pixel format, scaling and redrawing scheme.
– Provide a callback which does the visualization. In this case, the callback tells the DataSource object to fetch new data and then it calls the drawing procedures one by one.
– Start the message cycle by calling begin_show().

void CPUMeter::Run() {
  auto display = output_surface{400, 400, format::argb32, scaling::letterbox, refresh_style::fixed, 30};
  display.draw_callback([&](output_surface& surface){
    Update();
    Display(surface);
  });
  display.begin_show(); 
}

Clearing the window background
The paint() operation fills the surface using a given brush. There are 4 kinds of brushes – a solid color brush, a surface (i.e. texture) brush and two gradient brushes: linear and radial. A solid color brush is made by simply providing a color to the constructor:

brush m_BackgroundFill{rgba_color::alice_blue};

Thus, filling the background requires only a single method call, as shown below. paint() has other parameters like brush properties, render properties and clipping properties. They all have default values, so these parameters can be omitted in many cases.

void CPUMeter::DrawBackground(output_surface& surface) const {
  surface.paint(m_BackgroundFill);
}
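For illustration, the same fill with the normally-omitted parameters spelled out might look roughly like this (a sketch which assumes paint() accepts the same kind of optional parameters that the stroke() calls later in this post pass as nullopt):

void CPUMeter::DrawBackground(output_surface& surface) const {
  // No custom brush properties, render properties or clipping –
  // same effect as the single-argument call above.
  surface.paint(m_BackgroundFill, nullopt, nullopt, nullopt);
}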

The outcome is a blank window filled with the Alice Blue color (240, 248, 255):

Drawing the vertical grid lines
Drawing lines is a somewhat more complex operation. First of all, there has to be a path which describes the geometry to draw. Paths are defined by a sequence of commands given to an instance of the path_builder class. A line can be defined by two commands: start a new figure (.new_figure()) and make a line (.line()).
Since it might be costly to transform a path into the specific format of an underlying graphics API, it’s possible to create an interpreted_path object only once and then use this “baked” representation for every subsequent drawing. In the snippet below, the vertical line is defined only once; transformation matrices are then used to draw the line at different positions.
Two methods can draw arbitrary paths: stroke() and fill(). The first draws a line along the path, while the latter fills the interior of the figure defined by the path. The grid is drawn via the stroke() method. In addition to brushes, this method also accepts parameters like stroke_props and dashes, which define properties of the drawn line. In the following snippet, those parameters set a width of 1 pixel and a dotted pattern.

stroke_props m_GridStrokeProps{1.f};
brush m_VerticalLinesBrush{rgba_color::cornflower_blue};
dashes m_VerticalLinesDashes{0.f, {1.f, 3.f}};

void CPUMeter::DrawVerticalGridLines(output_surface& surface) const {
  auto pb = path_builder{}; 
  pb.new_figure({0.f, 0.f});
  pb.line({0.f, float(surface.dimensions().y())});
  auto ip = interpreted_path{pb};
 
  for( auto x = surface.dimensions().x() - 1; x >= 0; x -= 10 ) {
    auto rp = render_props{};
    rp.surface_matrix(matrix_2d::init_translate({x + 0.5f, 0}));
    surface.stroke(m_VerticalLinesBrush, ip, nullopt, m_GridStrokeProps, m_VerticalLinesDashes, rp);
  }
}

The result of this stage looks like this:

Drawing the horizontal grid lines
The process of drawing the horizontal lines is very similar to the previous one, with a single exception: since horizontal lines are solid, there’s no dash pattern – nullopt is passed instead.

brush m_HorizontalLinesBrush{rgba_color::blue};

void CPUMeter::DrawHorizontalGridLines(output_surface& surface) const {
  auto cpus = m_Source.CoresCount();
  auto dimensions = surface.dimensions();
  auto height_per_cpu = float(dimensions.y()) / cpus;
 
  auto pb = path_builder{};
  pb.new_figure({0.f, 0.f});
  pb.line({float(dimensions.x()), 0.f});
  auto ip = interpreted_path{pb};
 
  for( auto cpu = 0; cpu < cpus; ++cpu ) {
    auto rp = render_props{};
    rp.surface_matrix(matrix_2d::init_translate({0.f, floorf((cpu+1)*height_per_cpu) + 0.5f}));
    surface.stroke(m_HorizontalLinesBrush, ip, nullopt, m_GridStrokeProps, nullopt, rp);
  }
}

A fully drawn grid looks like this:

Filling the graphs with gradients
Filling the graph’s interior requires another kind of brush – the linear gradient brush. This kind of brush smoothly interpolates colors along some line. The linear brush is defined by two parameters: a line to interpolate along and a set of colors to interpolate. The gradient in the snippet consists of three colors: green, yellow and red, which represent different levels of usage: low, medium and high. An artificially degenerate line from {0, 0} to {0, 1} is used when the gradient is constructed; this makes it easy to translate and scale the gradient later.
Each data point is used as a Y coordinate in a path, which is built from right to left until either the left border is reached or no data remains. Both the path and the gradient are then translated and scaled with the same transformation matrix. In the first case, the coordinates of the path are transformed, while in the second case the anchor points of the gradient are transformed.

brush m_FillBrush{ {0, 0}, {0, 1}, { {0.f, rgba_color::green}, {0.4f, rgba_color::yellow}, {1.0f, rgba_color::red}}};

void CPUMeter::DrawGraphs(output_surface& surface) const {
  auto cpus = m_Source.CoresCount(); 
  auto dimensions = surface.dimensions();
  auto height_per_cpu = float(dimensions.y()) / cpus;
 
  for( auto cpu = 0; cpu < cpus; ++cpu ) {
    auto m = matrix_2d{1, 0, 0, -height_per_cpu, 0, (cpu+1) * height_per_cpu};
 
    auto graph = path_builder{};
    graph.matrix(m);
    auto x = float(dimensions.x()); 
    graph.new_figure({x, 0.f}); 
    for( auto i = m_Source.SamplesCount() - 1; i >= 0 && x >= 0; --i, --x )
      graph.line({x, m_Source.At(cpu, i) }); 
    graph.line({x, 0.f});
    graph.line({float(dimensions.x()), 0.f});
    graph.close_figure();
 
    auto bp = brush_props{};
    bp.brush_matrix(m.inverse());
    surface.fill(m_FillBrush, graph, bp);
  }
}

The filled graphs look like this:

Outlining the graph contours
The graph looks unfinished without its contour, so the final touch is to stroke the outline. There is no need to build the same path twice, as the previous one works just fine. The only difference is that the contour should not be closed, so the path is simply copied before the last two commands. A brush with transparency is used to give the outline some smoothness.

brush m_ContourBrush{ rgba_color{0, 0, 255, 128} };
stroke_props m_ContourStrokeProps{1.f};
[…]
    graph.line({x, 0.f});
    auto contour = graph; 
    graph.line({float(dimensions.x()), 0.f});
    […] 
    surface.stroke(m_ContourBrush, contour, nullopt, m_ContourStrokeProps);
  } 
}

And this last touch gives us the final look of the CPU activity monitor:

Conclusion
In my humble opinion, the 2D graphics proposal might give C++ a solid foundation for visualization support. It’s powerful enough to build complex structures on top of it – here I can refer to the sample SVG renderer as an example. At the same time, it’s not built around some particular low-level graphics API (e.g. OpenGL/DirectX/Mantle/Metal/Vulkan), which come and go over time (who remembers Glide?). What is also very important about the proposal is its implementability – I wrote the CoreGraphics backend in about 3 months on a part-time basis. It can be assumed that writing a theoretical Direct2D backend might take about the same time. While it’s easy to propose “just” support for PostScript, SVG or even HTML5, the practical implementability of such extensive standards is very doubtful. Having said that, I do think that the proposal, while being a valid direction, is far from perfect and needs a lot of polishing.

Here’s the link to the IO2D implementation:
https://github.com/mikebmcl/P0267_RefImpl
Sample code:
https://github.com/mikebmcl/P0267_RefImpl/tree/master/P0267_RefImpl/Samples
Sample screenshots:
https://github.com/mikebmcl/P0267_RefImpl/tree/master/P0267_RefImpl/Samples/Screenshots

Cryptocurrency mining on iOS devices


XMR-STAK-CPU running on iPad

Disclaimer

This post should not be treated as advice to use iOS devices as cryptocurrency mining machines. Doing so can destroy the battery, fry the CPU/SoC, ruin the system’s responsiveness, etc. This is purely academic research driven by sheer curiosity.

Reasons

Ever since I got my hands on the latest iPad, I have been eager to write something to check the horsepower of that machine. Thanks to the recent bubble in cryptocurrency prices, this ridiculous idea appeared. Of course, there’s no sense in trying to mine Bitcoin or similar currencies, since CPUs can’t compete with specialized solutions like ASICs there. On the other hand, cryptocurrencies based on CryptoNote, like Monero (ticker: XMR), have memory-bound properties which make them a poor fit for small specialized devices. That brings at least some amount of sense into solving these crypto puzzles on CPUs. I chose the XMR-STAK-CPU mining software, which is available in source code, and tried to run it on iOS – first in the Simulator and then on a real device.
As part of this porting experiment, I aimed to keep the original source code untouched and to use the files right out of the repository. Oddly enough, the endeavor was successful and within a few days I got a complete solution. The challenges of porting and the outcome are described below.

Challenges

SSE vs. NEON
The source code of xmr-stak-cpu contains tons of SIMD instructions. Fortunately, there is no inline assembly and all calls are made through _mm_XXX intrinsics. That means it’s possible to mimic these calls with C-style functions and macros. The same applies to the data type definitions.
Thanks to the SSE2NEON project, the lion’s share of the work was already done and I basically only needed to fiddle properly with the source code. A trick with a precompiled header was used to do it: when the source was built for a real iOS device, SSE2 was mimicked with NEON and the original includes (<x86intrin.h>, <intrin.h>, <immintrin.h>) were suppressed by defining their include guards in advance. Nothing was substituted for iOS Simulator builds, since the Simulator runs on an x86 machine and there are no NEON instructions there.
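A minimal sketch of that prefix header might look like this (the guard-macro names are illustrative and have to match the real Xcode-shipped headers):

#include "TargetConditionals.h"
#if !TARGET_OS_SIMULATOR
    // Pretend the x86 intrinsic headers were already included by defining
    // their include guards in advance (names shown here are illustrative).
    #define __X86INTRIN_H
    #define __IMMINTRIN_H
    // Provide the _mm_* types and calls on top of NEON instead.
    #include "SSE2NEON.h"
#endif
// Simulator builds run on x86, so the real intrinsic headers are used there.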

But of course, it could not be absolutely smooth. A handful of x86 intrinsics were missing in SSE2NEON: _mm_prefetch, _mm_set_epi64x, _mm_cvtsi128_si64, _mm_aesenc_si128 and _mm_aeskeygenassist_si128.

_mm_set_epi64x and _mm_cvtsi128_si64 are trivial to implement on NEON, with a 1:1 mapping to SSE.
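For illustration, rough NEON equivalents might look like this (a sketch assuming __m128i maps onto a 64×2 NEON vector; the actual SSE2NEON typedefs and reinterpret-casts differ):

#include <arm_neon.h>

static inline __attribute__((always_inline))
int64x2_t _mm_set_epi64x(int64_t hi, int64_t lo)
{
    // SSE puts the first argument into the high 64-bit lane, the second into the low one.
    return vcombine_s64(vcreate_s64((uint64_t)lo), vcreate_s64((uint64_t)hi));
}

static inline __attribute__((always_inline))
int64_t _mm_cvtsi128_si64(int64x2_t v)
{
    // Return the low 64 bits of the vector.
    return vgetq_lane_s64(v, 0);
}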

_mm_prefetch is a bit trickier, since Intel and ARM have different approaches to controlling prefetching and there’s no 1:1 mapping between them. I ended up with the __builtin_prefetch(p) intrinsic to mimic _mm_prefetch, which is only a rough approximation.
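In code, that can be as simple as the following sketch (the SSE locality hint is simply dropped):

static inline __attribute__((always_inline))
void _mm_prefetch(const void *p, int /*hint ignored*/)
{
    __builtin_prefetch(p);
}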

The most interesting instructions were the cryptographic _mm_aesenc_si128 and _mm_aeskeygenassist_si128. Intel and ARM have different ideas of how to split AES encryption into a set of commands. Here’s a good visualization of the issue:

It takes several instructions to mimic _mm_aesenc_si128 on ARM. The trick is to neutralize the AddRoundKey stage of vaeseq_u8() by providing a key of zeros and to add the actual key at the end with a manual XOR. This yields 3 instructions instead of one on SSE, but the semantics remain the same. Here’s the code:

static inline __attribute__((always_inline))
__m128i _mm_aesenc_si128( __m128i v, __m128i rkey )
{
    // An all-zero round key makes the AddRoundKey step inside vaeseq_u8 a no-op,
    // leaving only SubBytes and ShiftRows; MixColumns and the real key are applied afterwards.
    const __attribute__((aligned(16))) __m128i zero = {0};
    return veorq_u8( vaesmcq_u8( vaeseq_u8(v, zero) ), rkey );
}

AFAIK there’s no support for encryption key expansion in NEON, so _mm_aeskeygenassist_si128 had to be implemented manually. I used the software implementation from xmr-stak-cpu’s soft_aes.c and packed it to fake a single instruction call:

static inline __attribute__((always_inline))
__m128i _mm_aeskeygenassist_si128(__m128i key, const int rcon)
{
    static const uint8_t sbox[256] = {
    0x63, 0x7c, 0x77, 0x7b, 0xf2, 0x6b, 0x6f, 0xc5, 0x30, 0x01, 0x67, 0x2b, 0xfe, 0xd7, 0xab, 0x76,
    0xca, 0x82, 0xc9, 0x7d, 0xfa, 0x59, 0x47, 0xf0, 0xad, 0xd4, 0xa2, 0xaf, 0x9c, 0xa4, 0x72, 0xc0,
    0xb7, 0xfd, 0x93, 0x26, 0x36, 0x3f, 0xf7, 0xcc, 0x34, 0xa5, 0xe5, 0xf1, 0x71, 0xd8, 0x31, 0x15,
    0x04, 0xc7, 0x23, 0xc3, 0x18, 0x96, 0x05, 0x9a, 0x07, 0x12, 0x80, 0xe2, 0xeb, 0x27, 0xb2, 0x75,
    0x09, 0x83, 0x2c, 0x1a, 0x1b, 0x6e, 0x5a, 0xa0, 0x52, 0x3b, 0xd6, 0xb3, 0x29, 0xe3, 0x2f, 0x84,
    0x53, 0xd1, 0x00, 0xed, 0x20, 0xfc, 0xb1, 0x5b, 0x6a, 0xcb, 0xbe, 0x39, 0x4a, 0x4c, 0x58, 0xcf,
    0xd0, 0xef, 0xaa, 0xfb, 0x43, 0x4d, 0x33, 0x85, 0x45, 0xf9, 0x02, 0x7f, 0x50, 0x3c, 0x9f, 0xa8,
    0x51, 0xa3, 0x40, 0x8f, 0x92, 0x9d, 0x38, 0xf5, 0xbc, 0xb6, 0xda, 0x21, 0x10, 0xff, 0xf3, 0xd2,
    0xcd, 0x0c, 0x13, 0xec, 0x5f, 0x97, 0x44, 0x17, 0xc4, 0xa7, 0x7e, 0x3d, 0x64, 0x5d, 0x19, 0x73,
    0x60, 0x81, 0x4f, 0xdc, 0x22, 0x2a, 0x90, 0x88, 0x46, 0xee, 0xb8, 0x14, 0xde, 0x5e, 0x0b, 0xdb,
    0xe0, 0x32, 0x3a, 0x0a, 0x49, 0x06, 0x24, 0x5c, 0xc2, 0xd3, 0xac, 0x62, 0x91, 0x95, 0xe4, 0x79,
    0xe7, 0xc8, 0x37, 0x6d, 0x8d, 0xd5, 0x4e, 0xa9, 0x6c, 0x56, 0xf4, 0xea, 0x65, 0x7a, 0xae, 0x08,
    0xba, 0x78, 0x25, 0x2e, 0x1c, 0xa6, 0xb4, 0xc6, 0xe8, 0xdd, 0x74, 0x1f, 0x4b, 0xbd, 0x8b, 0x8a,
    0x70, 0x3e, 0xb5, 0x66, 0x48, 0x03, 0xf6, 0x0e, 0x61, 0x35, 0x57, 0xb9, 0x86, 0xc1, 0x1d, 0x9e,
    0xe1, 0xf8, 0x98, 0x11, 0x69, 0xd9, 0x8e, 0x94, 0x9b, 0x1e, 0x87, 0xe9, 0xce, 0x55, 0x28, 0xdf,
    0x8c, 0xa1, 0x89, 0x0d, 0xbf, 0xe6, 0x42, 0x68, 0x41, 0x99, 0x2d, 0x0f, 0xb0, 0x54, 0xbb, 0x16};
    uint32_t X1 = _mm_cvtsi128_si32(_mm_shuffle_epi32(key, 0x55));
    uint32_t X3 = _mm_cvtsi128_si32(_mm_shuffle_epi32(key, 0xFF));
    for( int i = 0; i < 4; ++i ) {
        ((uint8_t*)&X1)[i] = sbox[ ((uint8_t*)&X1)[i] ];
        ((uint8_t*)&X3)[i] = sbox[ ((uint8_t*)&X3)[i] ];
    }
    return _mm_set_epi32(((X3 >> 8) | (X3 << 24)) ^ rcon, X3, ((X1 >> 8) | (X1 << 24)) ^ rcon, X1);
}

cpuid
xmr-stak-cpu uses the cpuid instruction to determine whether SSE and AES instructions are supported by the CPU. The problem is that the <cpuid.h> shipped with Xcode doesn’t have an include guard, so it’s not possible to suppress its inclusion the way it was done with <x86intrin.h>. Instead, <cpuid.h> had to be faked entirely by fiddling with the header search paths. Here’s the fake header which makes xmr-stak-cpu believe that the ARM chip supports everything:

#pragma once
#include "TargetConditionals.h"
#if TARGET_OS_SIMULATOR
#define __cpuid_count(__level, __count, __eax, __ebx, __ecx, __edx) \
    __asm(" xchgq %%rbx,%q1\n" \
          " cpuid\n" \
          " xchgq %%rbx,%q1" \
        : "=a"(__eax), "=r" (__ebx), "=c"(__ecx), "=d"(__edx) \
        : "0"(__level), "2"(__count))
#else
// On a real ARM device, report every feature bit as set (-1 in every register),
// so the miner takes the hardware AES/SSE code path backed by the NEON shims.
static inline __attribute__((always_inline))
void __cpuid_count(uint32_t __level, int32_t __count,
                   int32_t &__eax, int32_t &__ebx, int32_t &__ecx, int32_t &__edx)
{
    __eax = __ebx = __ecx = __edx = -1;
}
#endif

stdout capture
xmr-stak-cpu is console-based software and I wanted to keep it that way, regardless of what Apple thinks about stdout on iOS. A simple dup2 syscall does the job – stdout can be redirected into a pipe, while the other end of that pipe is connected to some UI control like a UITextView. Here’s the snippet:

let pipe = Pipe()
var fileHandle: FileHandle!
var source: DispatchSourceRead!

func setupStdout() {
    fileHandle = pipe.fileHandleForReading
    fflush(stdout)
    dup2(pipe.fileHandleForWriting.fileDescriptor, fileno(stdout))
    setvbuf(stdout, nil, _IONBF, 0)
    source = DispatchSource.makeReadSource(fileDescriptor: fileHandle.fileDescriptor,
                                           queue: DispatchQueue.global())
    source.setEventHandler {
        self.readStdout()
    };
    source.resume()
}

func readStdout() {
    let buffer = malloc(4096)!
    let read_ret = read(fileHandle.fileDescriptor, buffer, 4096)
    if read_ret > 0 {
        let data = UnsafeBufferPointer(start: buffer.assumingMemoryBound(to: UInt8.self),
                                       count: read_ret)
        if let str = String(bytes: data, encoding: String.Encoding.utf8) {
            DispatchQueue.main.async {
                self.acceptLog(str: str)
            }
        }
    }
    free(buffer)
 }

Unlimited execution in background
That’s something Apple doesn’t like at all and tries to prevent at any cost. Of course, that makes sense from the perspective of battery life, but when a device is connected to a power source these restrictions look ridiculous. After all, it’s my device and I want it to be able to perform any computations, no matter how time-consuming and complex they are. There’s no universal solution to this problem, but at least one particular combination worked for me on iOS 11:
– Creating a background task upon switching to background mode via UIApplication.shared.beginBackgroundTask, and creating the next task in the expiration handler.
– Infinitely looped playback of an empty sound file at the same time. I used this solution as a starting point and made a few performance-wise tweaks afterwards.
This hack lets the application run indefinitely and prevents it from being put to sleep or having its network connections closed. During my tests it was absolutely fine to leave the miner app working for 12+ hours, and that didn’t lead to any terminations, suspensions or dropped connections.

Results

I benchmarked the performance on three Macs from 2012 and two iOS devices. To be fair, all of these Macs have notebook-level hardware, so it wouldn’t be correct to make assumptions about desktop-level Intel CPUs based on the gathered data. The tests were run with the low_power_mode=false and no_prefetch=true flags, for at least 15 minutes each.
The results were surprising – despite using an almost brute-force method of instruction translation and the lack of any hardware-specific optimizations for Apple CPUs, the iPad 2017 showed pretty solid performance. The A9 shows the same hashrate as the Core i5-3427U, which cost $225 when it was introduced in 2012 (the A9 was introduced in 2015) and has a TDP of 17 W (the A9 has about 4 W). This graph also clearly shows the memory-bound limitations of CryptoNote.

The source code and build instructions are available in this repository.

Abnormal programming at CppCon2017

During the last CppCon, a set of C++ puzzles was presented within the SCM Challenge (thanks for the iPad, BTW). While most of those puzzles were quite generic and simple, one really stood out. The one I’m talking about was called “Don’t Repeat Yourself” and was introduced by Arthur O’Dwyer.

Here is the problem statement:
Write a program that prints the numbers from 1 to N, then back down to 1, separated by whitespace. But Don’t Repeat Yourself: each word in your submitted source code must be unique! A “word” is defined as in Part 1. How large of an N can you achieve?

And that is the definition of a “word” from Part 1:
Write a function that returns true if every word in its input is unique. A “word” is defined as any maximal string of consecutive word characters, and a “word character” is defined as a digit, letter, or underscore. For example, the string “5+hello-world 0_9/a7” contains five words, all unique; so your function should return true. The string “abc+abc_+abc” contains three words, of which the first and last are both “abc”, so your function should return false.
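Just to illustrate that definition (this is not part of any challenge submission), a straightforward checker could look roughly like this:

#include <cctype>
#include <string>
#include <unordered_set>

// Returns true if every word (maximal run of [A-Za-z0-9_]) in the input is unique.
bool all_words_unique(const std::string &text)
{
    std::unordered_set<std::string> seen;
    std::string word;
    auto flush = [&]() -> bool {
        if( word.empty() )
            return true;
        bool is_new = seen.insert(word).second;
        word.clear();
        return is_new;
    };
    for( unsigned char c : text ) {
        if( std::isalnum(c) || c == '_' )
            word += char(c);
        else if( !flush() )
            return false;    // a duplicate word was found
    }
    return flush();
}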

This task, harmless at first sight, turns into a ridiculously hard riddle. “Uniqueness” means that it’s not possible to access any identifier – it can only be declared, but never touched again. The same applies to preprocessor statements, language keywords and so forth. Unfortunately, there’s no known “right” solution to this puzzle (at least not to me). Arthur showed 3 possible solutions made by CppCon participants, including mine, during his lightning talk, but all of them were more or less hacks. One exploited the trick with escape sequences in string literals, another injected a plain assembled binary into the source code. Mine used a stack hack to get read-write access to the function parameter.

The cleaned-up variant of my solution is shown below. It assumes a libstdc++ environment and the gcc compiler. Of course, there’s no compatibility with other compilers and/or standard libraries. A few notes on the solution:
– Optimization is turned off by a compiler-specific attribute to get access to the stack; otherwise the parameter passing would be optimized away.
– A user-defined literal is used to mimic the function call syntax; otherwise it would not be possible to call the function.
– The function parameter N is accessed via a stack-frame-address hack and via a zero-sized alloca() hack.
– errno is used as another counter variable, thanks to the library-specific alias.

#include <iostream>
using namespace std;
__attribute__((optnone)) void operator "" _print( decltype(0ULL) )
{
    for(; errno <= ((uint64_t*)__builtin_alloca(0L))[3L];) 
        cerr << ++*__errno_location() << " ";
    while( ((int64_t*)__builtin_frame_address(0))[-1L] )
        cout << ((long*)alloca(0LL))[3]-- << " ";
}
int main()
{
    // print numbers up to 500 and back
    500_print;
}

Painless localization of bundles with Xcode

In the Cocoa world, NSLocalizedString() is the “official” way to localize user-facing string data. It has been around for many years, and most of the time this mechanism works pretty well. Even better, since the release of Xcode 6 it’s possible to use the XLIFF file format throughout the localization process. The IDE is smart enough to gather string resources defined in source files and to expose them via XLIFF. This support from the IDE and external tools makes the localization process mostly smooth for small projects.

The process gets clumsier when custom tables (other than “Localizable.strings”) or frameworks come into play. At that moment a relatively slender
NSLocalizedString(@"String", @"comment")
transforms into
NSLocalizedStringFromTable(@"String", @"Custom", @"comment")
or, even worse, into
NSLocalizedStringFromTableInBundle(@"String", @"Localizable", [NSBundle bundleForClass:[self class]], @"comment").
With this amount of boilerplate code, the signal-to-noise ratio drops below 25% and such inclusions make it hard to reason about the functional logic.

Some projects mitigate this issue by introducing another macro on top of the existing one, but this approach breaks Xcode’s ability to gather localizable strings from the source code. During the refactoring of Nimble Commander’s file operations subsystem (i.e. moving a bunch of code into a separate framework), I found it relatively easy to deal with this issue in the Objective-C++ world.

Since NSLocalizedString is a macro, nothing stops us from removing it via a simple #undef. After that, NSLocalizedString can be defined again as a regular function, without any preprocessor magic:

// Internal.h
#pragma once
#undef NSLocalizedString
namespace project {
...
NSString *NSLocalizedString(NSString *_key, const char *_comment);
...
}

This small workaround preserves Xcode’s ability to find and extract localizable strings and is syntactically identical to the macro, i.e. no changes are required in the existing code. The implementation of the redefined NSLocalizedString is pretty straightforward:

// Internal.mm
#include "Internal.h"
namespace project {
...
static NSBundle *Bundle()
{
  static const auto bundle_id = @"com.company.project.framework";
  static const auto bundle = [NSBundle bundleWithIdentifier:bundle_id];
  return bundle;
}

NSString *NSLocalizedString(NSString *_key, const char *_comment)
{
  // The comment parameter exists only for Xcode's string extraction and is ignored here.
  return [Bundle() localizedStringForKey:_key value:@"" table:@"Localizable"];
}
...
}

Obviously, “Internal.h” must be included only inside the framework’s source code.