December 25, 2020


My friend, who goes by the nickname of “Chicken”, has done this advent series on Instagram where he finds one alcoholic beverage in an odd place every day for the 24 days leading up to Christmas. This year I decided to cut all of the 2020 ones together into a compilation just to share it with the world.

November 28, 2020


A while back I was researching ancient MIDI sound modules such as the legendary Roland MT-32. The sound that these older synthesizers produce is extremely dated, but in the right context it functions almost as a time capsule into what used to be state of the art. Take for example the iconic opening of The Secret of Monkey Island played back on the Roland MT-32 and you’ll hear what I mean: these units have a specific sound that has gone out of fashion.

Synthesizers of today sound amazing. There are tons of analog synths and newer digital wavetable synths with oversampled filters on the market, both of which overcome and emulate the essential characteristics of older synthesis. Compared to these, the MT-32 produces a sound that is a relic of the technology of its time. It’s based on the same synthesis approach as the Roland D-50, namely “Linear Arithmetic synthesis”. The idea is basically that you combine short PCM samples with subtractive synthesis to create sounds, using the subtractive synthesis as a base layer and the PCM samples as an embellishment to create a more interesting sound. This technology was mainly invented due to the memory limitations of the time, which made it very expensive to produce synths based solely on PCM samples.

The MT-32 was a consumer version of the D-50 and quickly found use in games at the time. Back then it was not feasible to ship streaming audio (again due to the size limitations of floppies), which meant that games took it upon themselves to interface with soundcards and modules like the MT-32 to generate audio. Due to the complexity of manually figuring out which sounds every channel would have on each popular sound module, General MIDI was introduced in 1991 as a standard set of instruments that you could expect from every sound module. Roland followed with the Sound Canvas SC-55, which shipped with General MIDI support and a large variety of new sounds; while the device lacked some of the programming flexibility that the MT-32 offered, it improved on the clarity of the sound.

Shortly afterwards the CD-ROM became popular, which allowed for CD-quality streaming audio on personal computers, essentially removing the need for the separate audio synthesis that the sound modules had offered. On top of that, the soundcards that shipped with PCs for PCM audio contained stripped-down versions of these synthesizers, which eventually got replaced by software-only synthesis when CPUs became fast enough.

Roland did however manufacture the weirdest variation of the MT series, starting with the MT-80. The Roland MT-80 was essentially a small microprocessor coupled with the sound engine from the SC-55 and a floppy drive. Aimed at music teachers and students, this device allowed users to insert a floppy disk loaded with MIDI files and play them back, portable-stereo style. It also offered muting groups of instruments and adjusting the tempo.

Roland MT-80S

The MT-80 was followed by the Roland MT-300, an even more pimped-out version with added features and stereo sound, as the concept became fairly popular as a way of practicing music with virtual instruments.

Roland MT-300

Roland MT-300

Just seeing these horrible devices made me instantly scour eBay to see if I too could acquire a piece of history with these boxes of grey plastic anno 1996. Sadly they seem to start at around $200-$300 on eBay, and that’s before you factor in the cost of a replacement for the most likely non-working floppy drive. I have hosted a couple of cassette listening sessions at ATP, as a tongue-in-cheek response to the desire to play vinyl only on rotary mixers, and thought it would be a great sequel to play some horrible tracks from floppies instead. Since the price was a bit above my taste, the only thing left to do was to build one.

Refining the concept

Starting out, I’ve always found it helpful to decide where the boundaries are. If you set out to build something from the inspiration above there are tons of directions you could take, but intentionally setting limits makes it clearer what tools and processes you need. For this project it was clear early on that:

  • The device should have a physical interface with real buttons. No touchscreens.
  • The device needed to use floppies.
  • Bitmap displays were out of the question; if the device needed a display it would be either segment or character LCDs.
  • The device needed to hide the fact that it would be a repackaged Raspberry Pi.
  • The device should not have built-in speakers, focusing only on the playback as if it were a component of a stereo system in 1995.


With this in mind I drew some absolutely terrible sketches to figure out what the interface would look like in order to settle on components. Once I had this done it was trivial to think about what components I would use to build something that worked like the sketches. An LCD would be vital for easier file selection from the floppy, the buttons should be prominent and if possible in color, and the device chassis should be in a grey or black material.

It was then just a matter of finding some nice components that seemed fun to use. I wanted an encoder and stumbled on a rotary encoder with an RGB LED inside; while fancy, I felt it would be a great way to signal the state of the device so I ended up ordering it. For the buttons and LCD, Adafruit had both for a reasonable price so I used those while sketching. I also found a really cheap USB floppy drive on Amazon which I bought along with a 10-pack of “old stock” Maxell floppies. Sadly 9 out of 10 of them did not work, which had me order a 10-pack of Sony floppies from eBay that worked better. I think this shows there was an actual quality difference between floppies back in the day, as the Maxells just haven’t held up as well as the Sonys.

Knowing this, it was just a matter of laying the components out with reasonable accuracy in Illustrator to get a feel for how it would end up. For this version I imagined that the rotary encoder would sit to the right of the LCD and that the device would be a compact square. I tried some of the above sketches but a more compact package felt more “90s integrated mini-stereo”, with focus on being a device you could have in the kitchen.


Designing the enclosure

Starting from this concept it was just a matter of opening Fusion360 and getting to work on fitting the components. As usual I would be 3D printing this enclosure, and to retain structural integrity I tried to minimize the printed part count, instead opting for larger printed pieces and being smart about how I designed the enclosure to fit them.



I started with the front panel as it would hold all the buttons and the encoder. As you can see I deviated somewhat from the concept as I realized the LCD has tons of baseboard padding that I would have to hide somehow. For that reason I couldn’t place the encoder to the right of the display without making the device unreasonably wide. I experimented with a couple of different aspect ratios and settled on something that’s “almost” square with visibly rounded edges. The reasoning here is that the device will have more depth due to the floppy, so if the front panel were square it would break the “cube”, hence why it’s a tad wider than tall.


One thing that was really hard to figure out was how to properly mount the encoder and buttons without any visible screws on the front. Since the user presses the buttons inwards, they need to rest against something that holds, either an internal bracket or the front panel itself. I settled on a sandwich concept that locks each button in place against the front panel. While this has some issues during mounting, I think it was a reasonable compromise, and it could be printed in one part. It was also easy to design screw brackets into the PETG on the front panel.

The panel went through a lot more iterations than what’s shown here, but it’s hard to show every iteration in a great way. I had to consider how the electronics would fit and where to put the Raspberry Pi for it all to make sense. Since the floppy drive is the deepest part of the device I placed it at the bottom, in order to build the rest of the inner workings on top of it.

The next problem was how to accurately represent the USB floppy drive in Fusion360. The drive has some weird curves, and since I wanted it properly sandwiched into the case so that it feels like part of the box, I had to come up with a creative way to capture the curve. A real designer probably has great tools or techniques for this; however, since I’m not a real designer I ended up placing the drive on the iPad and just tracing it in Concepts. From there I exported it into Illustrator, cleaned it up and imported it into Fusion360, where I generated the volume from the drawing. Weirdly enough this worked.


With that done I sketched out a baseplate with walls that would hold the floppy drive in place and a layer plate mounted on top of it. Although this took a lot of iterations to get right with the fit and all the screws, it was a pretty straightforward piece once I knew what components I wanted to use. Initially I envisioned the Raspberry Pi mounted straight against the back panel with exposed connectors, but it felt weird to go through all this effort only to expose that the device is just a Raspberry Pi with an ugly screen, so I added a buck converter together with a 3.5mm connector in order to fully enclose the Raspberry Pi.



The last part was to add a cover with the correct sinks for M nuts and make sure that the rounded corners extrude all the way through the device. Here I experimented with a slanted case as it would be faster to print, but the device ended up looking too much like an 80s Toyota Supra with pop-up headlights, so I skipped that idea and came to terms with the fact that this lid was going to take a good amount of time to print.


Making the panel work

Initially I envisioned this connecting straight to the GPIO of the RPi. It’s just 4 buttons, a serial display and an encoder with an RGB LED inside. On paper the RPi ticks all of these boxes: hardware PWM, interrupts, GPIO and serial output. Using all of these from Go is another story, however. It turns out that interrupt/PWM support for the RPi is actually quite “new” in Linux, with two different ways of approaching it: the old way was basically to memory-map straight against the hardware and manually define the interrupts, while the new way provides kernel interrupt support for the GPIO pins. Go is somewhere in between; the libraries either have great stability with no interrupts, or really bad stability (as in userspace crashes when not run as root) using the interrupts.

Why do interrupts matter so much? Can’t we just poll the GPIO fast enough for the user not to notice? The buttons aren’t the problem here, but anyone who has used an encoder without interrupts knows how weird they can feel. Encoders use something called quadrature encoding (also known as Gray code) to represent the distance and direction the encoder has moved when rotated. This means that we have to poll the GPIO really fast to catch the nuances of scrolling through a list without the encoder feeling like it’s “skipping”. Using interrupts allows the microcontroller to instead tell us when the encoder’s Gray code has changed, which we can decode into perfectly tracked rotation.
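To make that concrete, here is a minimal sketch of the decoding itself in Go. The lookup table and state handling are generic quadrature logic, not tied to any particular library or to the MT-420 code:

```go
package main

import "fmt"

// Each quadrature step moves the two-bit Gray code (A,B) to an adjacent
// state. Indexing a table with (previous state << 2 | current state)
// yields -1, 0 or +1 quarter-steps; invalid jumps (e.g. a missed poll)
// decode as 0 instead of a wrong direction.
var qdec = [16]int{0, 1, -1, 0, -1, 0, 0, 1, 1, 0, 0, -1, 0, -1, 1, 0}

type Encoder struct {
	prev     uint8
	Position int // accumulated quarter-steps
}

// Update takes the current levels of the A and B pins, typically read
// inside a pin-change interrupt, and accumulates the position.
func (e *Encoder) Update(a, b uint8) {
	cur := a<<1 | b
	e.Position += qdec[e.prev<<2|cur]
	e.prev = cur
}

func main() {
	e := &Encoder{}
	// One full clockwise detent: 00 -> 01 -> 11 -> 10 -> 00.
	for _, s := range [][2]uint8{{0, 1}, {1, 1}, {1, 0}, {0, 0}} {
		e.Update(s[0], s[1])
	}
	fmt.Println(e.Position) // 4 quarter-steps = one detent
}
```

Miss one of those intermediate states while polling and the table returns 0, which is exactly the “skipping” feel described above; an interrupt per edge never misses a state.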


from Wikipedia

After battling this back and forth for a couple of hours, with the only viable solution looking like writing my own library, using an Arduino just seemed… easier. Wiring up an Arduino to do encoder interrupts and PWM is barely any code. I played around with using Firmata but, as before, the support in Go was lackluster, so I defined a simple bi-directional serial protocol that I use to report buttons and set the LED color from Go. Serial support in Go is luckily extremely easy. On top of this, the LCD “Backpack” that Adafruit shipped with the LCD had USB, so I got a nice USB TTY to write against for the LCD. This made the Go code quite clean.
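The protocol itself is nothing fancy. This sketch shows the general shape of it, newline-delimited ASCII lines in both directions; the message names here are illustrative, not the exact ones in the repo:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// LEDCommand renders a "set LED color" line for the Arduino to parse.
// (Message names are illustrative, not the actual protocol in the repo.)
func LEDCommand(r, g, b uint8) string {
	return fmt.Sprintf("LED %d %d %d\n", r, g, b)
}

// ParseEvent decodes one line reported by the Arduino: either a button
// press ("BTN <n>") or an encoder delta ("ENC <d>").
func ParseEvent(line string) (kind string, value int, err error) {
	fields := strings.Fields(line)
	if len(fields) != 2 {
		return "", 0, fmt.Errorf("malformed event: %q", line)
	}
	value, err = strconv.Atoi(fields[1])
	if err != nil {
		return "", 0, err
	}
	switch fields[0] {
	case "BTN", "ENC":
		return fields[0], value, nil
	}
	return "", 0, fmt.Errorf("unknown event: %q", fields[0])
}

func main() {
	fmt.Print(LEDCommand(255, 0, 0))
	kind, v, _ := ParseEvent("ENC -2\n")
	fmt.Println(kind, v)
}
```

In the real device these lines travel over the USB serial port rather than stdout, but keeping the protocol this dumb means both sides stay trivial to debug with a terminal emulator.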

With this done I could wire up the encoder and the buttons on a breadboard to have a prototype panel to play with. It’s not a pretty sight, but it worked for developing the serial protocol. The Arduino framework has tons of problems, but it makes it easy to move code from one microcontroller to another. Here I’m running on an Uno, later ported to an Arduino Nano for the actual package.

panel proto

Here it is, soldered down on a “hat” for the RPi with the Arduino Nano.

panel proto

Replicating the audio synthesis

Now the challenge was how to generate the actual audio. There were a number of options here. The first was using a hardware MIDI chip such as the VS1053B, but after hearing the audio quality I quickly realized that it didn’t sound at all close to what I wanted. Discovering this made it clear that I wanted the sounds of the MT-32/SC-55 in another package, which narrowed the scope to using SoundFonts. SoundFonts were introduced by Creative for their Sound Blaster AWE32 as a way of switching out the sounds on the soundcard: you would load a soundfont onto the device, much like you load a text font, and the device would use the bank of samples with the surrounding settings to generate audio. Today there’s an array of soundfonts available as the format has taken on a life beyond Creative, and there are many MT-32/SC-55 soundfonts among them.

There is a great softsynth that uses soundfonts called FluidSynth. After experimenting with it, it was clear that the sound generation was spot on; it was able to re-create masterpieces like Neil Young - Rockin’ In The Free World from some jank MIDI file that contains the least tight re-creation of this song I’ve ever experienced. To clarify: FluidSynth is fantastic, but the MIDI file of this track is really bad, and paired with the SC-55 sound it becomes golden. Just listen to it and you will be convinced.

Knowing this, the next step was figuring out how to interact with FluidSynth. FluidSynth has a CLI that I could use, but it quickly turned out that the CLI was hard to interface with programmatically. FluidSynth also ships with a C API, which seemed easy enough to interface with. Go has a reasonably easy way of interfacing with C known as cgo. I found some rudimentary bindings on GitHub that lacked basically all the features I wanted, so I forked the repo and started building my own bindings with a more Go-centric API. I added a ton of functionality, which is now in my repo called fluidsynth2 on GitHub for anyone who ever finds themselves wanting to use this from Go.

Writing the controller software

With that issue out of the way, the remaining part was to write the MT-420 code, which ended up being abstractions for the various hardware devices, a bunch of specific logic around mounting a floppy in Linux in the year 2020 (harder than you might think) and general state management. The code is absolutely horrible as I wrote most of it before knowing how the hardware actually worked, mocking a lot of it against the terminal and later replacing the modules with actual implementations, discovering peculiarities about the hardware that had me refactor code in sad ways. It feels good to see this initialization of the floppy interface though.

// Floppy
delayWriter("Warming up floppy", delay, display)
var storage storage.Storage
if *mockFS {
    storage = mock.New(*mockPath)
} else {
    storage = floppy.New(lconfig.Floppy.Device, lconfig.Floppy.Mountpoint)
}


The code is structured in two main parts: the first is to create the “devices” for the “controllers” to use. To make development easier I created “mock” versions of all devices (using interfaces in Go), which means you can run the entire application in the terminal and use the keyboard to “emulate” the panel. I’m glad I did this as I almost lost my sanity multiple times trying to wrangle the LCD display. These “devices” are then passed to the controller, which is a state machine pattern that I’ve written a couple of times in Golang at this point. Instead of writing a defined “list” of states, I create an interface that the controller host can call on.

type Option interface {
    Run(c *Controller, events <-chan string, end chan bool) string
    Name() string
}

The controller then has an index of all available “modules” and calls the entrypoint module in the map with Run() and the required data. Each module is responsible for executing its loop and returning a string containing the name of the next module. This means that a module can say “go back to X” and the controller will just call that module on the next loop. This pattern is simple, but I’ve found it effective when using “blocking interfaces” such as humans interacting with the device.
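Stripped of the hardware, that loop amounts to something like the sketch below. The module names are made up for illustration, and the real interface also passes event channels, which this sketch drops for brevity:

```go
package main

import "fmt"

// Option is a simplified version of the interface above, minus the
// event channels.
type Option interface {
	Run(c *Controller) string // returns the name of the next module
	Name() string
}

type Controller struct {
	modules map[string]Option
	Trace   []string // visited module names, kept so the flow is visible
}

// Run keeps handing control to whichever module the previous one named,
// starting from entry, until a module returns an empty next-name.
func (c *Controller) Run(entry string) {
	for next := entry; next != ""; {
		m := c.modules[next]
		c.Trace = append(c.Trace, m.Name())
		next = m.Run(c)
	}
}

// menu and browser are stand-in modules: in the real device menu blocks
// on panel input, then points the controller at the file browser.
type menu struct{}

func (menu) Name() string             { return "menu" }
func (menu) Run(c *Controller) string { return "browser" }

type browser struct{}

func (browser) Name() string             { return "browser" }
func (browser) Run(c *Controller) string { return "" } // done, stop the loop

func main() {
	c := &Controller{modules: map[string]Option{"menu": menu{}, "browser": browser{}}}
	c.Run("menu")
	fmt.Println(c.Trace) // [menu browser]
}
```

Because each module blocks until it knows where to go next, the controller itself stays a dumb dispatcher, which is exactly what you want when the “events” are a human pressing buttons.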

The question here though is “Why use Golang?”. To be honest, I think the real reason is that I was lazy and thought that the scope of this was likely to be really small. That turned out not to be the case, especially with all the workarounds I had to do due to the pains of interacting with the kernel from Go. Go is really good at solving problems where the hardware is irrelevant, and less good at dealing with devices like a USB floppy drive. While I appreciate the fast cross-compilation, approaching this again I would write it in Rust instead, as the C interop alone would have saved me the time spent on the FluidSynth bindings. It’s good to do one of these projects though, really pushing the bar of one’s comfort-zone language to understand where the weaknesses are.


You can find all the code for this at my GitHub. The repo also contains the STL files if you want to go ahead and print this yourself.

November 1, 2020







September 13, 2020


The only positive thing to come out of this absolute trash year is the extra time that I’ve had to work on dumb projects. This project in particular has no purpose whatsoever other than being the realization of an in-joke about file formats. The last few years I’ve switched to almost exclusively buying new music in FLAC or other lossless formats to get closer to technical medium transparency for audio (Bandcamp is great). Of course, the times you hear any difference between an MP3 encoded at 320 or V0 and a FLAC are quite few, but we thought it would be funny to only play FLAC at ANDERSTORPSFESTIVALEN and highlight it in a dumb way.

The idea is simple: display what format is currently playing (not what song is currently playing) in the ATP bar using some 90s-style display technology.

There is this dumb movie prop from Tomorrow Never Dies that is supposed to be a “master GPS encoder”. What struck me about this prop is not the idea that a box would hold some offline cryptography key, rather that it comes with a huge nonsense segment display. It might be the dumbest prop I’ve ever seen, which is why I wanted to use segment displays to show the format.


Reading the current track

To start with, we need some sort of API that we can poll to get the currently playing track from the playback MacBook Pro. I personally use Swinsian, but Spotify is often used in the bar as well, so supporting both is a must. Neither Swinsian nor Spotify has a REST/DBus/whatever API to get song data out of, but it turns out that they both expose AppleScript variables to be compatible with iTunes plugins. Knowing that, creating some sort of shim around AppleScript to publish the variables over an HTTP API would solve a lot of problems.

AppleScript is absolutely terrible. It’s the worst scripting language I’ve had the pleasure of dealing with, but after some massaging we have this code that gets the state of both Swinsian and Spotify and outputs it as some sort of hand-rolled JSON.

on is_running(appName)
    tell application "System Events" to (name of processes) contains appName
end is_running

on psm(state)
    if state is «constant ****kPSP» then
        set ps to "playing"
    else if state is «constant ****kPSp» then
        set ps to "paused"
    else if state is «constant ****kPSS» then
        set ps to "stopped"
    else
        set ps to "unknown"
    end if
    return ps
end psm

-- default to empty objects in case an app is not running
set sws to "{}"
set spf to "{}"

if is_running("Swinsian") then
    tell application "Swinsian"
        set wab to my psm(player state)
        set sfileformat to kind of current track
        set strackname to name of current track
        set strackartist to artist of current track
        set strackalbum to album of current track
        set sws to "{\"format\": \"" & sfileformat & "\",\"state\": \"" & wab & "\",\"song\": \"" & strackname & "\",\"artist\": \"" & strackartist & "\",\"album\": \"" & strackalbum & "\"}"
    end tell
end if

if is_running("Spotify") then
    tell application "Spotify"
        set playstate to my psm(player state)
        set trackname to name of current track
        set trackartist to artist of current track
        set trackalbum to album of current track
        set spf to "{\"format\": \"" & "OGG" & "\",\"state\": \"" & playstate & "\",\"song\": \"" & trackname & "\",\"artist\": \"" & trackartist & "\",\"album\": \"" & trackalbum & "\"}"
    end tell
end if

set output to "{ \"spotify\": " & spf & ", \"swinsian\": " & sws & "}"

With this out of the way, we can easily invoke the AppleScript from Go with osascript, capture its stdout and serve it over HTTP with gin. I’ve published the end result as go-swinsian-state on my GitHub if you ever find yourself needing to do something this ugly.

Displaying this on alphanumeric segment displays

Working with segment displays is annoying. It’s essentially one LED per segment, wired as a crosspoint matrix, meaning that you need tons of pins to drive them. Luckily for me, people have been annoyed by this before and created driver chips such as the MAX7219, which lets you control an entire display over a single serial connection. This takes a lot of the headache off the table and allows me to use microcontrollers with much less headroom.
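To make “one serial connection” concrete: driving a chip like the MAX7219 is just clocking out 16-bit frames, an address byte for one of the chip’s registers followed by a data byte. A small Go sketch of how those frames pack (register addresses are from the MAX7219 datasheet; the init values are just one typical configuration, not from any specific project):

```go
package main

import "fmt"

// MAX7219 control register addresses (per the datasheet); the digit
// registers sit below these at 0x01-0x08.
const (
	regDecodeMode  = 0x09
	regIntensity   = 0x0A
	regScanLimit   = 0x0B
	regShutdown    = 0x0C
	regDisplayTest = 0x0F
)

// frame packs a register address and a data byte into the 16-bit word
// that gets shifted out over the serial interface, MSB first.
func frame(register, data uint8) uint16 {
	return uint16(register)<<8 | uint16(data)
}

// initFrames is a typical wake-up sequence: leave display-test mode,
// scan all 8 digits, no BCD decoding (we drive raw segments),
// mid intensity, then out of shutdown.
func initFrames() []uint16 {
	return []uint16{
		frame(regDisplayTest, 0x00),
		frame(regScanLimit, 0x07),
		frame(regDecodeMode, 0x00),
		frame(regIntensity, 0x08),
		frame(regShutdown, 0x01),
	}
}

func main() {
	for _, f := range initFrames() {
		fmt.Printf("%04X\n", f)
	}
	// Light every segment of digit 3 (digit registers are 1-based).
	fmt.Printf("%04X\n", frame(0x03+1, 0xFF))
}
```

After that init sequence, updating the display is just one frame per digit, which is why a single SPI-style bus replaces the dozens of pins a raw matrix would need.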

For this project, a key feature is that the display can’t be connected over serial/SPI/I2C to the host computer; rather, it has to pull data over WLAN. There’s an array of microcontrollers out there with WiFi support, but my personal favorites are the Espressif series. The ESP8266/ESP32 is in my mind almost a revolution in microcontrollers, offering a huge amount of connectivity with an insane amount of I/O at an unbeatable price (around $3-$4 per chip). Since this project is a one-off, designing a logic board seemed like too much effort, so I shopped around for an ESP32 in a nice form factor.

Adafruit has a nice series of “Feathers”, essentially a smaller-format Arduino with a focus on battery power and size. They ship the ESP32-WROOM model for around $20, which is a fair price for the design and ecosystem. It also turns out they have this “FeatherWing” (akin to Arduino shields) with both the LED segment driver and displays. $14 is a reasonable price for that as well, so the BOM so far is $34, which gives us a microcontroller with WiFi and an alphanumeric segment display that can fit the words FLAC/MP3/OGG/AAC.

The fun thing about working with the Arduino framework is that there are good libraries for these components, so stringing all this together is less than 125 lines of code. I published the PlatformIO project on my GitHub as FORMATDISPLAY, but the main code is so short that I’ll include it here.

#include <Arduino.h>
#include <Wire.h>
#include <SPI.h>
#include <Adafruit_I2CDevice.h>
#include <Adafruit_GFX.h>
#include "Adafruit_LEDBackpack.h"
#include <WiFi.h>
#include <HTTPClient.h>
#include <ArduinoJson.h>
#include "Settings.h"

Adafruit_AlphaNum4 alpha4 = Adafruit_AlphaNum4();

IPAddress ip;
HTTPClient http;
String url = String(storedURL);
int errCount = 0;

void display(uint8_t a, uint8_t b, uint8_t c, uint8_t d) {
    alpha4.writeDigitAscii(0, a);
    alpha4.writeDigitAscii(1, b);
    alpha4.writeDigitAscii(2, c);
    alpha4.writeDigitAscii(3, d);
    alpha4.writeDisplay();
}

void displayC(const char *s) {
    for (int i = 0; i < 4; i++) {
        if (s[i] == 0x00) {
            alpha4.writeDigitAscii(i, ' ');
        } else {
            alpha4.writeDigitAscii(i, s[i]);
        }
    }
    alpha4.writeDisplay();
}

void setup() {
    // Initialize serial and alphanumeric driver.
    Serial.begin(115200);
    alpha4.begin(0x70);

    display('C', 'O', 'N', 'N');
    WiFi.begin(storedSSID, storedPASSWORD);

    int wifiRetry = 0;
    while (WiFi.status() != WL_CONNECTED) {
        delay(100);
        if (wifiRetry > 100) {
            ESP.restart();
        }
        wifiRetry++;
    }

    // PRINT IP: show the last octet so the endpoint is findable.
    display('W', 'I', 'F', 'I');
    ip = WiFi.localIP();

    char fip[4];
    itoa(ip[3], fip, 10);
    char veb[4] = {'-', fip[0], fip[1], fip[2]};
    displayC(veb);
    delay(2000);
}

void loop() {
    if (WiFi.status() == WL_CONNECTED) {
        http.begin(url);
        int httpCode = http.GET();
        if (httpCode > 0) {
            const size_t capacity = JSON_OBJECT_SIZE(2) + 2 * JSON_OBJECT_SIZE(5) + 940;
            DynamicJsonDocument doc(capacity);

            deserializeJson(doc, http.getString());

            JsonObject spotify = doc["spotify"];
            const char *spotify_format = spotify["format"]; // "OGG"
            const char *spotify_state = spotify["state"];   // "paused"

            JsonObject swinsian = doc["swinsian"];
            const char *swinsian_format = swinsian["format"]; // "MP3"
            const char *swinsian_state = swinsian["state"];   // "playing"

            if (strcmp(swinsian_state, "playing") == 0) {
                displayC(swinsian_format);
            } else if (strcmp(spotify_state, "playing") == 0) {
                displayC(spotify_format);
            } else {
                display(' ', ' ', ' ', ' ');
            }
            errCount = 0;
        } else {
            errCount++;
            display('E', 'N', 'E', 'T');
        }
        http.end();
    }

    if (errCount > 10) {
        display('E', 'R', 'R', 'D');
    }
    delay(1000);
}


It’s as simple as that, and it displays the currently playing format. The code is basically 4 distinct parts: the first is bootstrapping the WiFi and the segment display, the second is acquiring an IP address from the network, the third is making an HTTP request against the chosen endpoint, and the last is parsing the JSON that’s returned and sending it to the display.


There is of course a discussion to be had about the wastefulness of storing an entire JSON blob on the heap of a microcontroller. The design above returns a pretty massive JSON blob with a lot of unwanted data which the microcontroller has to poll. When considering these tradeoffs I think it’s important to remember just how fast the ESP32-WROOM actually is. It is a dual-core design running at 240 MHz; compare this to an Arduino Uno (ATmega328P), which runs at 16 MHz, and it’s obvious that we don’t have to be as careful with wasting cycles here. Building solutions this way makes it easy to prototype the end result and experiment, since everything is just JSON APIs.
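As an example of that prototyping loop, the “whoever is playing wins” selection the firmware does can be tried out on the desktop in a few lines of Go first. The struct fields here mirror the JSON the AppleScript shim emits:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// PlayerState matches the per-app objects the AppleScript emits.
type PlayerState struct {
	Format string `json:"format"`
	State  string `json:"state"`
	Song   string `json:"song"`
	Artist string `json:"artist"`
	Album  string `json:"album"`
}

type NowPlaying struct {
	Spotify  PlayerState `json:"spotify"`
	Swinsian PlayerState `json:"swinsian"`
}

// ActiveFormat prefers Swinsian over Spotify, the same priority the
// firmware uses; an empty string means nothing is playing.
func ActiveFormat(blob []byte) (string, error) {
	var np NowPlaying
	if err := json.Unmarshal(blob, &np); err != nil {
		return "", err
	}
	switch {
	case np.Swinsian.State == "playing":
		return np.Swinsian.Format, nil
	case np.Spotify.State == "playing":
		return np.Spotify.Format, nil
	}
	return "", nil
}

func main() {
	blob := []byte(`{"spotify": {"format": "OGG", "state": "paused"},
	                 "swinsian": {"format": "FLAC", "state": "playing"}}`)
	format, _ := ActiveFormat(blob)
	fmt.Println(format) // FLAC
}
```

Once this behaves against the real endpoint, porting the logic to the C code above is mostly mechanical, which is the whole appeal of gluing everything together with JSON APIs.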


Even though it already looks pretty neat, you really only want to see the segment display and hide away the rest of the Feather. I designed an extremely simple enclosure in Fusion360 in which the segment display pushes through an opening and is constrained by the footprint of the FeatherWing. The back cover snaps into the chassis and stays in place using friction from the PLA, a design I usually use for smaller enclosures like this. There is a platform extruded in the middle to push the Feather into the hole.


I 3D printed this small enclosure in about 2 hours on the Prusa using Prusament Galaxy Black.


This project is really meaningless. At the same time, these projects sometimes don’t have to be more than fun to work on. If you think this project is meaningless, just wait until my next post on the other project.

September 9, 2020

California Wildfires

When I moved to San Francisco, dealing with wildfires on a yearly cadence was not one of the challenges I expected, but that has turned out to be a regular experience at this point. Waking up today, however, was an experience unlike any other. A thick layer of smoke was trapped higher up in the atmosphere, blocking the majority of sunlight from shining through; it reminded me of waking up in Luleå in winter. Sara took some great photos that I wanted to share. All of these are taken with quite a long exposure to capture light, as it was almost nighttime darkness around 11 in the morning.

Wildfire smoke in San Francisco 1

Wildfire smoke in San Francisco 2

Wildfire smoke in San Francisco 3

Wildfire smoke in San Francisco 4

terrythethunder then cut together drone shots from San Francisco with the soundtrack to Blade Runner 2049 and the result was stunning to see.

July 26, 2020

Feeling Better

On April 30, 2020 at 15:20 I went out for a nice bike ride in the sun, hoping to get some exercise as COVID had changed my life habits pretty wildly. I remember coming down Quintara Street and turning off onto 36th Ave, after which I don’t remember much more. My next memory is waking up in a CT machine somewhere in immense pain, then spending the next 2 days going in and out of surgeries of different kinds.


I had been hit by a car and broken my arm in two places, broken my pelvis and broken my nose. On top of that I had fractured my back and my neck, and lost 4 of my front teeth. It took a while to take this in due to the lack of consciousness from the concussion. The only positive part about the hospital visit was the dose of ketamine that the doctors administered. After injecting the normal dose for my weight they asked me “do you feel anything?”. As a true K-legend I was obliged to answer “No, Doctor, I do not feel anything” although the effect was present, which had them increase the dose further, leading my thoughts to the legend Partiboi69.

Since that day I have spent most of my time slowly recovering from the accident, which is partly why there has been a lack of updates on projects and general writing the last few months. It’s been a long time since I had a major injury in my life, so this absolutely added some perspective. At some point around my birthday in April I had stated that “well, this year can’t get any worse at this point”, which has proven wrong multiple times since.

I waited to post this as I didn’t see a reason to talk about it before I actually felt better, which I finally, to a certain extent, do. My arm is still healing and I have yet to get all the teeth replaced due to COVID. As my condition has improved over the last few weeks, I drove up with Sara to Yosemite and spent 2 weeks hiking and enjoying the sun.




night sky

July 20, 2020

Mastering Monsquaz Swap 11

One problem that arises from the relay-writing approach of the Swap is that every participant lacks a reference for the final round. You only know how the song you have right now sounds and how previous rounds sounded, which makes it impossible to create coherence across the entire album. People do their best mixing the tracks, but due to differences in skill and subjective taste the results end up sounding wildly different in terms of bias and emphasis.

For that reason we try to have one or two people do a “mastering pass”, in which we import the “stems” (each track played back individually to a file) into a DAW and apply corrections, touching up parts with EQ. This time the task fell on me, mostly because everyone else seemed burned out from Swap 10, which seems to have been quite messy.

Mastering a swap is really hard because authors use multiple instruments per track and exporting “instrument”-based stems from OpenMPT is buggy, so you end up having to piece the tracks together by slicing & dicing the stems into new tracks in the DAW (in my case Ableton), which makes the timeline look ridiculous.


Because authors also tend to create new channels for similar instruments rather than reusing lead/drum tracks when switching instruments, you also get a huge number of channels that play sounds at any time in any order. Some of the songs ended up having 80+ tracks when drums were split out from reused tracks.


That’s not the worst part. In order to play chords from instruments in OpenMPT you essentially have to play the sound on multiple tracks at the same time so as not to replace the sample player. This means that when mastering you have to group these “chorded” stems together in order to process them individually. This is a tedious process that takes a lot of time, especially when someone adds a 4th voice to a chord on track 43 while the rest of the chord sits on tracks 3-6. Eventually you learn to easily spot these patterns visually, as the image to the right shows.


There are also 18 tracks to actually go through, which is quite a challenge to keep interest in when the stems are as unorganized as this. It also makes it hard to do really meaningful balancing, so you have to settle for touches that clear up mud in the mix. As the samples are pretty wide-range, the busier parts tend to get really muddy, so creating a couple of main audio busses with selective EQing helps clear up the mix and get the bassline out of the mud.

After going over all of this we’re finally able to release the tracks both as FLAC and on SoundCloud. Now I hopefully won’t have to master a swap again for a long time. If you’re only going to listen to one of these tracks, listen to the breakdown in “Super Mike Adriano 64” around 3 minutes and 50 seconds. It’s signature Dusthillguy and shows the unexpected directions a track can take.