June 1, 2021

Struggles with Limescan

In 2018 I built a small BLE (Bluetooth Low Energy) RFID reader to enable our small iPad PoS app that I called Limetree. It’s been about 3 years since I created this and looking back I’ve come far from this very simple design. This was my first 3D printed project and at the time I was still designing the parts in SketchUp. I needed the NRF52832 Feather from this device for an unrelated project, so I pulled it out somewhere in mid-2020 as I had no need for the scanner during COVID. As there is a high chance I get to travel to Sweden again this year (fully vaccinated now!!!), it was time to reassemble the scanner!

Well, it didn’t work out as planned. For some reason the exact same wiring and code just refused to work, and in frustration I soldered and desoldered until I damaged the Feather, which led to the NRF52 chip refusing to boot. Oh well, guess it was time to upgrade to the NRF52840 anyway. A quick primer on the differences between these two Feathers: the latter (hereafter referred to as the 840) has a native USB stack, which means the main chip handles the USB interface with the host computer. On the 832, there is a separate microcontroller handling USB and bootstrapping the device, which prohibits you from having fancy functions such as U2F. This sounded great to me (usually with microcontrollers, "the fewer components, the better") and I decided to buy the 840 instead of the 832 as a replacement.

The NRF52840 arrived and I started updating the code to run on it. The device has a new RGB LED which I of course wanted to take advantage of. Once I felt reasonably certain that the code accounted for the differences, I soldered on the wires to the PN532 breakout that I had from the earlier version and booted it up. It didn’t work. After tinkering back and forth with the code I got really frustrated because it seemed to work SOMETIMES. I would change something, flash the firmware, have it work for a few minutes, and then restart it only for it to fail again. If you’ve ever done programming, you know the worst problems to debug are the unpredictable ones, because there is usually very little indication of what’s going wrong.

I don’t own a SWD debugger so most of my debugging here is print statements and blind debugging using the oscilloscope. However, the problem I seemed to be having happened before the serial line initialized properly, meaning that whatever output I wrote to the computer from the microcontroller was lost into the void. I quickly realized that I could use the on-board status LED to show different colors upon reaching certain parts of the code and wrote a quick "status" function which incremented colors every time it was called, allowing me to see where it stopped. In theory, this should yield the same color every time I started the chip, which would allow me to see where it got stuck on boot. Of course, that didn’t work out either. It stopped at random points during the bringup of the chip. Almost out of ideas I resorted to really harsh "cold turkey" debugging: starting with all code commented out and bringing piece after piece back in.

Eventually it became clear: the Adafruit Neopixel library I was using for easy control of the LEDs was somehow interfering with the BLE shim that Adafruit provides, which sits on top of Nordic Semiconductor’s BLE stack. To understand this it’s important to know that while all these microcontrollers can execute code within the "Arduino framework" (which I wrote my shit in because I couldn’t be bothered in 2018), the "framework" is implemented differently depending on which microcontroller you’re using. Coming from using x86 processors for the past 25 years this is a bit hard to wrap my head around, but the "Arduino environment" is more like libc, a set of common functions you can compile against which has stuff like ports, GPIO and hardware functions mapped for you. It also provides a setup() function which is called once and a loop() function which is called in a loop as long as the device is powered.

On an old AVR Arduino, from which the framework stems, you essentially controlled the entire CPU. Your code ran in a loop and the loop was bootstrapped by the framework. This means that once the processor enters loop(), you are in control of the execution of the entire processor. There is no operating system or multitasking going on, which allows you to do a lot of nifty tricks with the hardware, something that other libraries often exploited.

// This code is really from the Arduino AVR core (main.cpp), abridged
int main(void) {
    init();      // low-level hardware init
    setup();     // user setup(), called once
    for (;;) {
        loop();  // user loop(), called forever
        if (serialEventRun) serialEventRun();
    }
    return 0;
}

On a microcontroller like the NRF52840, or another popular chip like Espressif’s ESP32, this is no longer true. These devices need to keep a BLE or WiFi radio active, and these radios are far too complex for user code to control while also meeting the timing requirements of the protocol stacks. For that reason these devices often run an RTOS (real-time operating system), the NRF52840 on Zephyr RTOS and the ESP32 on FreeRTOS. What this means in reality is that your loop() function is scheduled onto the OS with the lowest priority, allowing the core to deal with other, more important tasks for the radio in between the calls to your function. As these cores often run at significantly higher speeds (64 MHz/240 MHz vs 16 MHz for the AVR) there is enough headroom to run normal Arduino code and schedule the radio without impacting the Arduino libraries. The delay() function that on AVR halted execution instead just yields back to the scheduler with a request to be woken up after X milliseconds.

Why does all this matter? I wanted to re-use the code for this project even though I prefer the ESP32 (its dual cores make it easier to schedule tasks), but since the NRF52840 already ran an RTOS I figured that I could just schedule tasks exactly like the rest of the libraries do and yield() in the Arduino-bootstrapped loop. However, every time I tried to create one additional task alongside the Neopixel and the BLE stack, the microprocessor froze. After a lot of searching I stumbled onto a lead on the Adafruit Forums where one person seems to acknowledge that combining the Neopixel library with the BLE stack causes a conflict which leads to the crash. Digging even deeper, it seems that the Adafruit Neopixel library is one of those libraries that try to make the most out of the AVR core. Specifically, the task it gets scheduled on runs for a long time, which is "fine" when you only have one task to defer to, but as soon as you start introducing multiple tasks, the BLE stack gets "starved" and crashes the execution as it fails to maintain realtime. After having said FML a couple of times while realizing this, I switched out the Neopixel library for another library I use on the ESP32 named FastLED, and this worked without a problem.


I connect it all back up to start testing against the iPad and see that the status LED still doesn’t fully go to green (the color I selected for success). The device fails to connect to the PN532 (RFID reader)??? Down the rabbit hole again it seems. However, this is yet again one of those "sometimes" issues where every reboot seems like a gamble on whether it will come up. I connect the Feather and breakout to the oscilloscope and start trying to identify what’s going on.



There is a clock signal and there is a data signal going in both directions, so it’s clear that the chip is at least speaking SPI. I switch back and forth between the computer and the chip to try to figure out what’s going on. Eventually I see that the signal only bangs out "sometimes". Thinking back to what we had before, the PN532 is wired over SPI and I have a suspicion that the earlier problem is related. But why did this work so well with the NRF52832? After searching for a long time I think I eventually found the issue in the unlikely place of Adafruit’s CircuitPython implementation, as the developers of that runtime also dealt with this. You see, the NRF52832 had an erratum (a faulty hardware implementation) in its SPI peripheral, which led the developers at Adafruit to use a software SPI implementation for the older chip, documented here. So essentially what I’m discovering here is that the hardware SPI on the NRF52840 is weird and fails to sync with the PN532 for some unknown reason. I’m guessing it’s some sort of assumption in the Adafruit library for the breakout, as many people have complained about similar problems with the ESP32 in the GitHub repo.

At this point I couldn’t be bothered to go deeper and decided to switch the hardware protocol to I2C instead. The benefits of I2C vs SPI can be debated, but for my use case they are interchangeable apart from having to solder the additional pullup resistors. I soldered this together, configured the code to use I2C, and it worked on the first try. The board comes up with BLE and the device now works exactly as the NRF52832 did. At this point I’ve massacred the code a fair bit and it was never in good shape to begin with. One should always leave code in better shape than one found it, so I ended up spending some time cleaning up the codebase and utilizing the Zephyr RTOS task scheduling instead of the loop-based programming model I used earlier.

Because I had spent so much time digging into the code I also finally learned how BLE "works". My previous mental model of BLE was that it’s a "wireless serial port", which is very far from the truth. A much better way to think about BLE is that you write values into the void and tag them as representing specific characteristics; anyone who happens to listen to those messages can easily decode them. With that knowledge I could rewrite a lot of the earlier code that was built around BLEUart and just make my own characteristic. Code is of course on GitHub.


With my luck throughout this "simple switch", of course the first thing that happens when I reassemble it into the old 3D printed case is that I crack the pins in the case. This design was my first and in hindsight it’s so badly designed that I’m almost ashamed of it. Sure, everyone begins somewhere, but this case was too dumb of a design to be SLA printed. Only one thing to do: open up Fusion and design a new one. I re-used the form factor from the previous design as I couldn’t be bothered to buy a new piece of frosted plexiglass.



This was also an opportunity to fix up the cabling from the earlier version, which relied on adhesive tape to hold the Feather to the center of the board for the 4x8 LED array. This time I 3D printed a separate module to hold the LED FeatherWing. Mounted it together using some screws and the fit is night and day compared to the older version. I do not want to re-experience the pain of debugging this microcontroller, but improving physical designs is really fun.



Closing words for this one: it’s easy to assume that other people have figured out your specific use case before and designed for it. This might be true in the world of x86 where there are millions of users, but for smaller microcontrollers the likelihood of someone solving a problem the way you do is very low. I’ve also realized why most developers end up transitioning away from the Arduino framework; while it’s a great way to bootstrap learning, it just makes too many assumptions about logic flow on more modern processors. I hate to "blame the hardware" and I should probably just bite the bullet and buy an SWD debugger at some point. My digital caliper also broke while designing the case so I got a new one, icing on the cake I guess.

May 25, 2021

Biking in the Bay Area

Before moving to San Francisco I heard a lot about many different parts of the Bay Area: tech culture, bars, commute, cost of living, hiking, the Sierras, etc., but no one mentioned the insane biking that the surrounding areas give you. There are absolutely golden roads to bike less than 10 kilometers from San Francisco. For anyone who lives in SF and regularly uses Strava, these images are almost a meme at this point; it’s obligatory for any cyclist to tag their Strava uploads with them over and over. But I realized that not everyone lives in SF and uses Strava, so these document my three favorite routes: Paradise Loop (90 KM), Reyes Loop (132 KM) and the Mt. Tam climb through Fairfax (105 KM).

Paradise loop

Reyes Loop

Reyes Loop

Climbing up to Hawk Hill

Descending Hawk Hill

Heading out with Nikola

Climbing up to Mt. Tam through the Lagunitas watershed.

Up on the ridge.

Stopping for water


Looking out from the ridge.

March 8, 2021


Working remotely is draining in many ways, but endless video conferencing is probably what one will miss the least about this entire experience. I’ve also learned that very few people seem to have grown up with long form voice conferencing and are as a result absolutely terrible at muting when not speaking. I’ve sat through so many meetings where people fail at muting and have microphones that sound like they are screaming while skydiving next to a helicopter.

My theory around this is that personal video conferencing quality is tied to one’s organizational level. In general: the higher up you are in the company organizational structure, the shittier your AV setup is. (Coral’s law for short.)


Growing up with Ventrilo -> Teamspeak -> Discord taught me the importance of using PTT (Push To Talk). It might sound annoying to have to hold a button every time you want to speak, but after a short time it becomes second nature, even to the point where I will press this button even though the application I’m using at the moment doesn’t support PTT.

Using PTT can be annoying at times; trying to type while talking, or not having your hands close to the PTT button, is frustrating. A friend gave me the advice to try a foot pedal, however I couldn’t find one that fit my requirements. I wanted one that could map to any obscure button without active software triggering the button, and the ones that could were too expensive. Which leaves only one option: building one myself.

I’m using what I had on the shelf for this one.

I designed a very simple enclosure that printed in two parts: one serving as both the base plate and the enclosure for the electronics, the second as the hood. I took inspiration from the pedals commonly seen on sewing machines and expression controls for synthesizers.





Wiring it up was easy (essentially one pushbutton) and I’m using NicoHood’s HID library, which made the code short. I’ve uploaded the project to my GitHub along with the STL files if you want to build one yourself.

March 6, 2021


Sweden has this absolutely terrific yet horrifying and awful liquor named "Bäska Droppar" which I make a point of introducing to as many people as possible. On top of being almost undrinkable, there is a Swedish studentesque tradition of drinking it in weird ways at weird times. The bottle doesn’t beat around the bush and proudly states:

"No one is indifferent to the characteristic taste of Bäska Droppar, which has made it a classic."

- The tagline on the back of the bottle. Really.


There are three major milestones to keep in mind when consuming this trash liquor:

23 Besken

This is a true Swedish classic. When turning 23, one is supposed to drink 23 cl of what used to be article number 23 at Systembolaget (the Swedish liquor monopoly). 23 cl is just above a half marathon (21.0975 cl), which leads us to the next milestone.

The Marathon

If 23 cl wasn’t enough for you, you can always go the full marathon. This means drinking exactly 42.194988 cl of this experience. I do promise that after having consumed this volume of Bäsk, you’re going to wish that you didn’t.

The Victory Lap

After consuming 42.194988 cl you might experience a rush of adrenaline from having cleared this much of the horrible taste. If you still feel in good shape you should go for the victory lap, meaning clearing the remaining 7.805012 cl still left in the bottle.

If you get the chance to consume this bottle of pure horror, do take the opportunity. It’s after all a Swedish classic.

March 1, 2021





February 8, 2021

Leaving Twitch

In February of 2021 I resigned from Twitch after 6 years of employment, and I’m joining Discord tomorrow after writing this post. It’s scary leaving any company where you’ve spent a sixth of your life. The question has come up enough times now about why I’m leaving, and to explain that I first have to talk about how it all started.

To understand why I joined Twitch I need to start with how I met Trance. Cristian Tamas, also known as "Trance" on the internet, was one of the founders of "The GD Studio", which back in 2012 was one of the few influential broadcasters within the Starcraft 2 esport scene. I got introduced to Trance through my work at DreamHack and helped him and James "2GD" Harding with some of the shows up until 2014. Trance was one of the first employees of Twitch in Europe, with the intent of acting as a tech/BD arm there, as esport was heavily European at the time. In 2014, the studio was awarded the contract from Valve Software to produce a "pre-show" for their DOTA2 tournament, with its $10 million USD prize pool, that would stretch over 2 weeks. The GD Studio’s idea was to produce a 24/7 show, flying in all the talent to their house up in Tyresö, Stockholm.


Trance reached out to me to help assemble the production behind the show, as there were tons of parts the studio needed to make this concept work. Although the budget was significant for the studio, they had somehow still managed to bite off more than they could chew. Apart from shuttling 20-30 people from Stockholm to Lidingö at all hours of the day, they had to cook, commentate games, entertain the people and somehow also produce a show. On top of this they also had 10 newborn kittens behind the sofa. In reality I have no idea how they pulled it off, but the technical architecture of the show worked and I became good friends with Trance as a result of this event.

I ended up leaving my job at DreamHack in September to pursue more freelancing gigs similar to the one above. I kept talking to Trance a lot and helping him out with streaming configurations and whatnot. In November of 2014 I got a call from a man named Cyrus Hall in the middle of the night. A recruiter had reached out a while earlier, on Trance’s recommendation, and asked if I was interested in working for Twitch, and I said sure, knowing that nothing would come of it. Cyrus had scheduled to talk to me for about an hour but we ended up spending closer to 2-3 hours on the phone, talking about networks, video and my general thinking about where esport broadcasting was headed. It’s hard to remember today, when esport broadcasting often rivals or beats traditional broadcasting, what a scrappy operation it used to be, and I felt that my role in all this was to help the esport orgs with the broadcast knowledge I had. At the same time I stood on the other side of the fence, communicating from the esport orgs to the streaming platforms about the requirements that a "broadcast" has in comparison to a UGC (user generated content) gaming stream.

In February of 2015 I accepted the offer to join Twitch to help them figure out how to make esport broadcasters happy from a technical standpoint. As both esport and Twitch grew at a rapid pace, the scope of my work expanded quicker than I was able to personally tackle, which had me build up a team of people who could help me build the relationships and architect the tooling. There were a lot of onsite events and meetings during this period, mostly to build relationships with the organisations on the other end and understand their problems.


One needs to mention all the great colleagues I met at Twitch. Since Twitch encompassed my sober year and I had a lot of time on my hands, I took up playing DOTA2 with coworkers while visiting San Francisco. Staying in the office until 04.00 in the morning while bashing out game after game, dreading the meeting starting in less than 5 hours or the flight to some random location. I had the absolute pleasure of meeting one of the best directors (now VP), who served partly as my boss but more as my mentor and great friend. If it wasn’t for this individual I wouldn’t have stayed at Twitch past 2016. When he flew out to Sweden, me and Trance took him to the Twitch afterparty in Jönköping, and while I don’t have a great picture of him, I saved this gem taken at 5AM with Trance.

How the sausage is made

Everyone who has met me knows that I never shy away from solving time sensitive problems. It’s a skill that comes from my years of working with live shows, touring gigs and broadcast. One gets used to the high pressure and develops systematic ways of approaching situations like that after a while. That’s the same reason why first responders can act swiftly but stay calm: they’ve practiced it so many times that it’s not foreign. The same goes for working with live shows; the tone is often tense and aggressive, but people are only interested in resolving issues as fast as possible.

Of all the weird things that happened over my time at Twitch, the best "how the sausage is made" moment I experienced was during E3 2018. During one keynote that was broadcast on Twitch with a large number of viewers, the feed started exhibiting visual artifacts. Since the viewership was significant, people started getting really worked up, with viewers spamming the chat about it. I was in the production studio for another reason and got asked by an engineer if I had any idea what could be going on. In broadcast you don’t get a redo. There is no "let’s fix that one for next time" when you have a show that runs once a year with massive viewership.

When approaching broadcast problems I’ve found that it helps to always approach them "left to right". You start by verifying the absolute source and then work your way "to the right", as in down the chain, to figure out where the issue is introduced. Baseband video is funny in the way that you can essentially tap the feed at any given point and view it without having to do complex network routing or subscribe to a livestream; just route the matrix to an SDI output and watch the source. This goes against the common debug strategy people often take, working right to left from where the problem is exhibited towards where the problem is no longer visible. My problem with that approach is that while it seems good on paper, it fails to account for all the branching paths a signal can take, which makes it hard to trace backwards. After stepping through the signals it was clear that the issue happened before entering the SDI matrix. Tracing the cable backwards I found that the actual input source was looped through a scan converter (broadcast term for video scaler). The actual fiber feed into the building didn’t have these artifacts; the scan converter was used to split the signal into primary/backup, and in this case both were affected.

To solve this I realized that we needed to replace the scan converter that drove the entire feed for the studio (a single point of failure), mid-show, without affecting the video. On top of this we had about 3 minutes to solve it. The content being shown at the time was trailers for upcoming games, all ending on a black slate for about a second. If we could time the switchover to occur exactly during that second, no one would notice.

Doing this however required three physical cables to be switched in less than a second: the fiber input needed to be moved to a new scan converter, and the lines going out from the old scan converter had to be swapped over to the new one. Here I also realized that the feed we got had about 1.2 seconds of latency, meaning that if I dialed into the production bridge of the company whose feed we were tapping and heard their countdown, we would be able to monkey patch it and notify our production exactly in time.

Switching in the AXS studio

Here I am, sitting on the floor while waiting to switch the SDI cable. Once we felt that it was time, I notified our production that it was going to happen in less than 30 seconds and waited for the cue on my other headphones. The switch went flawlessly: the new scan converter locked onto the signal and no one watching saw the switch happen. Of course this was dumb and the show should never have split the signal in that way, but this is sadly the way it goes a lot of the time in broadcast. It’s a balance between budget and time, and often the right decisions get made but on the wrong risk profile.

Moving on


Twitch realized that we had built one of the world’s best platforms for live video on the internet and went on to build Amazon IVS, which I was lucky to be part of. This was a big undertaking and has taken a lot of my focus for the past years, working in a variety of different functions but primarily managing other people and eventually managers. IVS almost speaks for itself: it’s the battle tested video system that drives Twitch, packaged up as an AWS service for customers to use in new and unexpected ways. As I had spent the years prior meeting customers, I helped the person leading the product on a number of customer meetings for IVS.

So why leave Twitch? Isn’t life great? Exciting work on the horizon? Sure, that’s the problem.

I joined Twitch to be part of the transformation from linear TV to online media, and that transformation has already happened. Society went from "old media" with news anchors and linear programming to Netflix, YouTube and Twitch. Streamers, influencers and individual content producers have become part of our daily life and have started overshadowing the traditional media machine. I’m sure there are tons of exciting new developments happening, but the large ship has already sailed. It might not seem like it, but consider where you get the majority of your entertainment today; chances are it’s OTT services or UGC services mixed and matched to your interests.

COVID was probably an eye-opener for many people but what this worldwide event has highlighted is a glimpse of how our social fabric would look if it moved onto the internet. Zoom has become a behemoth of video conferencing and the online media services have seen tremendous growth over 2020. What I think has passed under the radar for many people though is how the concept of hanging out with your friends has started to move onto the internet. Someone like me grew up using software like Ventrilo and Mumble to connect to VOIP chat rooms with friends all the way back in 2005 but that experience never really went mainstream.

Over the winter break I realized that Discord had finally done just that: taken the experience I had in Ventrilo when I was 15 and built a product that enables people to actually drop into a "space" and hang out with whomever happens to be there for the moment. It’s all transient, which tends to eliminate the pressure of formally scheduling a call. I communicate with my friends back home in Sweden using Discord daily, so I personally believe that they are onto something no other service currently manages to fill. For that reason I decided to pursue an opportunity at Discord, ending my tenure at Twitch.

In the words of Primal Scream:

"I’m movin’ on up now.
Getting out of the darkness."


December 26, 2020

On Rust (Pixelcube Part 6)

If you work in programming or adjacent to programming, chances are high that you’ve either heard or read about Rust. Probably one of the worst named programming languages in the world for SEO purposes, it’s Mozilla’s attempt at redefining how we write safe, compiled code in a world where safety isn’t optional. It’s also a non-garbage collected language with almost zero interop overhead, making it a great language to wrap C and C++ with. I personally came in contact with the language as early as 2013, when rawrafox wrote an audio decoder in Rust at Spotify’s WOWHACK at Way Out West Festival.

Rust has been one of those languages where I for a long time just didn’t see the point. If you’re going to take on the pain of writing non-GC code, why not just go full C++ and get all the flexibility that comes with writing C++ in the first place? Why bother with a language that has long compile times and imposes concepts that make certain programming flows hard? What I came to realize is that these arguments miss the point about why programming languages succeed.

Why does a language like Node.js succeed despite being untyped and slow? If anyone remembers 2012, telling developers that you were writing JavaScript as a server-side language was more of a bar joke than an actual future people imagined. A language like Go? Why do developers accept the odd design decisions, lack of generics and annoying limitations? There are countless threads on the internet debating the technicalities of why these languages are bad compared to language X, yet that doesn’t seem to matter.

I think these languages succeed because they have good ecosystems. Using a third-party library in Node.js is almost imperative to the language, compared to C++ where a third-party library sometimes takes longer to get linking properly than the time it saves you. Node makes building on top of the ecosystem easier than building your own code; it’s just a matter of typing npm install whatever. This makes it easy to iterate on ideas and try out approaches that would have taken significant investment in a language like C++. Go takes this even further by defining pragmatic interfaces such as the Reader/Writer pair, which makes chaining third-party libraries a breeze.

These things are what make a developer like me productive. I write code because I envision what the end result should look like and write code to get to that end result; the act of programming itself doesn’t excite me. While I appreciate the work that goes into challenges like "Advent of Code", those kinds of problems never excite me as they do not move me closer to the desired end result of my projects. I sometimes refer to myself as a "glue developer", where I try to glue together as many external things as possible and rewrite only the stressed parts of the glue. Being able to quickly try a library from the package ecosystem is almost more important than the language itself, which is why I gravitate towards languages like Node, Go, PHP etc.


In the spring of 2014 I had just finished building the DreamHack Studio and was trying to ship a new "overlay" for DreamLeague Season 1. We had high ambitions for this overlay and relied on a ton of scraping coupled with DOTA replay processing to create a very data heavy overlay with realtime statistics. This image is me and a former co-worker nicknamed "Lulle" pushing against a deadline. The clock is correct; it’s 05:58 in the morning and we had to ship the code by 14:00 that day.

Being able to easily add the libraries needed to overcome these problems, and not spending 4 hours on making a library link, was how we pulled this off. Glue programming enables one to iterate extremely fast; for example, in this case I had to write an integration that would send a DMX signal to the LED controller whenever a piece of graphics was triggered, and I could try out 3 different DMX libraries in less than 20 minutes to find the one with the perfect surface area for my problem. Sure, the code was dependency spaghetti, but we delivered a very innovative broadcast graphics overlay.

Why does this matter?

The reason I’m talking about Rust here is that I’ve hit a roadblock with Go that’s so fundamental to Go itself that it’s hard to overcome without spending a significant amount of time. My goal with the software (nocube) for Pixelcube was to be able to iterate on patterns as easily as you can on a Pixelblaze. Being able to just save and see the result makes it much easier to iterate towards something nice. Since Go doesn’t support hot code reloading (only loading, short of hacky runtime modification), the only viable approach is to either interpret another language or call into another runtime.

I’ve experimented with a couple of JS interpreters in Go and quickly realized that they break the performance I was after, which led me to experimenting with V8 bindings for Go. This is where it starts getting really messy. Go has no user-definable build system for packages, which makes it impossible for a package to configure and build a complex runtime like V8. So to use V8, most approaches either pull a pre-built binding (in one case extracted from a Ruby gem, of all things) or build V8 from scratch. Building V8 from scratch is not an option either due to the time it takes.

ben0x539 comments: I think cgo mostly takes care of that, the hard part is escaping the Go runtime’s small, dynamically-resized per-goroutine stacks because C code expects big stacks since it can’t resize them

Once that hurdle is out of the way, the next problem is that the Go runtime only provides one way of linking out against other runtimes, which is through Cgo. Since Go’s memory layout is unlikely to match a C struct, Go has to prepare the memory for each and every function call in both directions, leading to a lot of memory indirection and overhead. To be fair, this approach is common for GC languages such as .NET, so this is not really a failure on Go’s part, just a general problem that GC languages struggle with. This FFI (foreign function interface) call has to be made for each pattern update, which means that if you’re rendering 5 patterns at 240 FPS with bidirectional data flow, you’re eating that overhead 2400 times per second.
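For contrast, and as a hedged sketch of why I later looked elsewhere: calling a C function from a non-GC language like Rust is a plain function call with no per-call marshalling. This toy example links against libm, which is obviously not the actual nocube FFI surface, just the smallest thing that demonstrates the mechanism:

```rust
// Toy FFI declaration: `cos` comes from the C math library, which is
// already linked on most platforms. No stack switching or argument
// repacking happens per call; this compiles down to a direct call.
extern "C" {
    fn cos(x: f64) -> f64;
}

fn main() {
    // `unsafe` because the compiler cannot verify the C side.
    let y = unsafe { cos(0.0) };
    assert!((y - 1.0).abs() < 1e-12);
}
```

The per-call cost difference, multiplied by thousands of calls per second, is exactly where the Cgo approach falls down.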

I actually did manage to get this working, though not at the performance I wanted, but I was happy to have some sort of hybrid model. The issue really started to creep up when I tried to include another library called “Colorchord”. Colorchord is a library that maps notes to colors, something I had wanted to use from the start but didn’t have time to include last year. Writing the Cgo bindings around it turned out to be a nightmare, really pushing at the build system issues of Cgo, and it almost had me giving up. Eventually I realized it would be easier to just rewrite the entire Notefinder part of the library in Go, and I even started the work, but this left a sour taste in my mouth about using Go for this project in general.

The straw that broke the camel’s back was writing the wrapper functions for packing data into V8 and fighting Go’s lack of generics. Starting out as a Go dev it’s fun that everything is very simple, but after writing functionXInt, functionXFloat32, functionXFloat64 etc. for the 100th time, you realize that generics exist for a reason. It’s not a meme.
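To make that concrete with a hedged sketch (the names here are made up, not the actual nocube wrappers): with generics, the whole family of per-type functions collapses into one.

```rust
// Hypothetical stand-in for the functionXInt/functionXFloat32/...
// family: one generic function handles every type that converts
// losslessly into f64.
fn pack_value<T: Into<f64>>(v: T) -> f64 {
    v.into()
}

fn main() {
    assert_eq!(pack_value(3i32), 3.0);
    assert_eq!(pack_value(1.5f32), 1.5);
    assert_eq!(pack_value(2u8), 2.0);
}
```

One definition, checked at compile time, instead of 30 copies that drift apart over time.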

All these examples highlight something that should have been clear to me earlier: I was using the wrong tool for the job. The project requires high performance on shitty devices, lots of FFI into libraries like Aubio, V8 and Colorchord, and should preferably not be garbage collected because of the realtime audio processing. It was clear that I had to reconsider my decision making to avoid the sunk cost fallacy.

Evaluating Rust

So back to Rust. The language never seemed interesting to me: it was technically complex and didn’t fit my workflow because the ecosystem seemed to have a lot of friction. While this was probably true at the time, the ecosystem has since developed significantly, along with Rust’s package manager, Cargo. Trying out Rust in 2020 is almost like trying a new language compared to the one I experimented with back in 2013. I decided to spend some time seeing how easy it would be to link against V8 in Rust, and less than 10 minutes later I had basically re-created what took me hours to get right in Go.

My next project in Rust was to see if I could build an OPC (OpenPixelControl) implementation. I had used an old project written in C to visualize the cube, and it would be fun to see how easily I could render this in Rust. To my surprise I finished faster than expected, and the 3D libraries were mature enough that I could render the cube into a window. Seeing how simple this was to get right, I got eager to try solving the problem that was hard in Go: using the Notefinder from Colorchord.


Around here I started streaming my attempt at creating bindings for Colorchord and had a lot of help from ben0x539, a coworker at Twitch. He’s a true RUSTLING and provided a lot of insight into how the language works, which was invaluable to getting the bindings done. I want to give him a big shoutout, as Rust is a complex language and it really helps to have someone providing training wheels when you’re starting out.

The first step was figuring out how to create the bindings. I started out writing them by hand and then searched Google to see if there was an automated way, and there absolutely is. Rust has a project called bindgen which builds Rust bindings from C headers, shaving off tons of time. There are always edge cases where automated bindings aren’t the right thing, but C makes it hard to write functions more complex than the obvious, so here it worked. Unlike Go, Rust allows you to define exactly how the project should be built. By default Cargo looks for a build.rs file, compiles it, and runs the result as a preamble to building the rest of the project. With this I was able to run bindgen as part of the build pipeline. That means the project itself just includes the Colorchord git repository, generates the bindings, and builds Colorchord as a library, all from Rust.
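As a rough sketch of what such a build script can look like (the paths, header name and link line here are assumptions for illustration, not the actual colorchord-rs layout):

```rust
// build.rs -- Cargo compiles and runs this before building the crate.
// Assumed layout: the colorchord C sources live in a `colorchord/` subdir.
fn main() {
    // Generate Rust declarations from the C header at build time.
    let bindings = bindgen::Builder::default()
        .header("colorchord/colorchord.h")
        .generate()
        .expect("unable to generate colorchord bindings");

    // Write them into OUT_DIR so the crate can include!() them.
    let out = std::path::PathBuf::from(std::env::var("OUT_DIR").unwrap());
    bindings
        .write_to_file(out.join("bindings.rs"))
        .expect("unable to write bindings");

    // Tell Cargo to link the compiled C library.
    println!("cargo:rustc-link-lib=static=colorchord");
}
```

Because this runs on every build, the generated bindings can never drift out of sync with the vendored C sources.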

The bindings worked almost on the first attempt once the build file was correct and I got the first result out of the Notefinder. It was then trivial to use the same graphics library I used for rendering the 3D visualization of the cube above to draw a copy of the colorchord UI.


As the Notefinder struct is just a raw C struct with floats and ints that act as magic numbers for configuration, I thought it would be nice if the library provided a safe interface for setting these, with valid ranges. I quickly found myself writing a Go-style solution with one function per setting for each of the different types; after writing function 4 out of 30 I stopped to think whether there was a better way to do this in Rust. Enter macros! Rust allows you to write macros for metaprogramming, which means I can write one macro that expands into these functions instead of writing them 30 times.

macro_rules! notefinder_configuration {
    ($func_name:ident, $setting:ident, $v:ty, $name:ident, $min:expr, $max:expr) => {
        pub fn $func_name(&self, $name: $v) -> Result<(), NoteFinderValidationError<$v>> {
            if $name < $min || $name > $max {
                return Err(NoteFinderValidationError::OutsideValidRange {
                    expected_min: $min,
                    expected_max: $max,
                    found: $name,
                });
            }

            // Write the validated value straight into the underlying C struct.
            unsafe { (*self.nf).$setting = $name }

            Ok(())
        }
    };
}
When called, this macro creates a function named after the $func_name argument and even lets you specify which concrete type the generic NoteFinderValidationError should carry. The result is that instead of 30 roughly identical code blocks like the one above, I write the macro once and invoke it for the rest.

notefinder_configuration!(set_octaves, octaves, i32, octaves, 0, 8);
notefinder_configuration!(set_frequency_bins, freqbins, i32, frequency_bins, 12, 48);
notefinder_configuration!(set_base_hz, base_hz, f32, base_hz, 0., 20000.);

In the end I was so happy with how this binding turned out that I decided to publish it as a Rust crate for other developers to use.

What’s bad about Rust?


The syntax is the worst part of Rust. It feels like you asked a Haskell professor at a university to design the syntax for a language and then asked a kernel developer with only C experience to make the language more palatable. The syntax differs from many other languages, often for no good reason. Take closures, for example: defining an anonymous function in most other languages uses the same syntax as a normal function, just without the function identifier. Rust, however, decided that closures should look like this:

|num| {
    println!("{}", num);
}

Why on earth would you use the vertical bar? Has any Rust developer ever used a non-US keyboard layout? The vertical bar or “pipe” sign is uncomfortable to type. On top of this it feels misplaced in the language itself: Rust is already really busy with special characters signifying a variety of features, and this one could have been a fn() {} or something similar. Speaking of busy syntax, here is one of my function declarations:

pub fn get_folded<'a>(&'a self) -> &'a [f32] {

Note how many different characters are used here. The ' signifies a lifetime, the a is the lifetime’s name, and <> declares it. The result is something very busy to read, with ' being another good example of a character that developers with non-US keyboards often get wrong.


As I mentioned earlier, Rust’s largest problem is really being named “Rust”. I was trying to find a library named palette and typed rust palette into Google, which gave me the result to the right. On top of this there is also a game named Rust (which is actually a terrific game) that further confuses the language’s SEO when trying to look up certain concepts. Sure, this is a dumb piece of feedback that can easily be solved by adding the language name to the query, but coming from other languages it’s annoying.


On top of this, the documentation has two other problems. The first is that the main documentation hub, docs.rs, suffers from some really bad UX; for example, the link from a documentation page to the source repository is hidden in a dropdown menu. Secondly, since every published version gets a permalink which Google indexes, you often end up on outdated documentation pages and have to watch for the small warning at the top. In my opinion the right way to solve this is to have Google index a “latest” tag and provide permalinks for specific use cases. If I search for tokio async tcpstream I don’t want to know how the function worked 3 years ago; if that was what I was looking for, I would add the library version to the query.

It also seems that a lot of crates neglect to specify exactly what you need to include in Cargo.toml for the library to expose all the functions you want. Often a library gates functionality behind feature flags, but the first example in the documentation requires one or two features to be explicitly enabled. Developers need to be upfront about exactly how to configure a library to achieve what the documentation shows. I’ve heard the argument that Rust’s helpful community makes up for this, and while that is true to a certain degree, this approach never scales. Being able to search for problem x php and get more than 200 results explaining solutions to your problem is part of what makes a language like PHP so easy for beginners to grasp.
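To make the complaint concrete, here is a hedged example of the kind of line a crate’s README often omits (tokio’s feature names are real, but which exact features a given doc example needs is my assumption):

```toml
[dependencies]
# Many doc examples silently assume features like these are enabled;
# with a bare `tokio = "1"` the same example fails to compile.
tokio = { version = "1", features = ["rt-multi-thread", "net", "io-util", "macros"] }
```

One explicit dependency line next to the first example would save every new user the same round of compile errors.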

Going forward

This small evaluation of Rust was more successful than predicted. Two of the major problems I’ve had with Go were basically solved, and on top of that you can do real hot swapping in Rust, which unlocks the use case that was impossible in standard Go. For that reason it makes a lot of sense to rewrite the existing nocube code in Rust, and at the same time fix some of the design mistakes I made in the first iteration.

Doing a rewrite of a program is something one should always approach with caution. It’s easy to look at a new language and think it solves all your issues, without understanding that every language comes with its own problems. This, however, is one of those times where the tool wasn’t right from the start, and to me it just means I should have been less lazy when starting out.