November 28, 2020


A while back I was researching ancient MIDI sound modules such as the legendary Roland MT-32. The sound that these older synthesizers produce is extremely dated, but in the right context it functions almost as a time capsule into what used to be state of the art. Take, for example, the iconic opening of The Secret of Monkey Island played back on the Roland MT-32 and you’ll hear what I mean: these modules have a specific sound that has gone out of fashion.

Synthesizers of today sound amazing. There are tons of analog synths and newer digital wavetable synths with oversampled filters on the market, which both overcome and emulate the essential characteristics of older synthesis. Compared to this, the MT-32 produces a sound that is a relic of the technology of its time. It’s based on the same synthesis approach as the Roland D-50, namely “Linear Arithmetic synthesis”. The idea is basically that you combine short PCM samples with subtractive synthesis to create sounds: using the subtractive synthesis as a base layer and the PCM samples as an embellishment creates a more interesting sound. This technology was mainly invented due to the memory limitations of the time, which made it very expensive to produce synths based solely on PCM samples.

The MT-32 was a consumer version of the D-50 and quickly found use in games at the time. Back then it was not feasible to ship streaming audio (again due to the size limitations of floppies), which meant that games took it upon themselves to interface with soundcards and modules like the MT-32 to generate audio. Because manually figuring out what sounds every channel would have on each popular sound module was so complex, General MIDI was introduced in 1991 as a standard set of instruments that you could expect from every sound module. Roland followed with the Sound Canvas SC-55, which shipped with General MIDI and a large variety of new sounds; while the device lacked some of the programming flexibility that the MT-32 offered, it improved on the clarity of the sound.

Shortly afterwards the CD-ROM became popular which allowed for CD quality streaming audio on personal computers, essentially removing the need for separate audio synthesis in the form that the sound modules had offered. On top of that the soundcards that shipped with PCs for PCM audio contained stripped down versions of these synthesizers which eventually got replaced by software only synthesis when CPUs became fast enough.

Roland did however manufacture the weirdest variation of the MT series, starting with the MT-80. The Roland MT-80 was essentially a small microprocessor coupled with the sound engine from the SC-55 and a floppy drive. Aimed at music teachers and students, the device let users insert a floppy disk loaded with MIDI files and play them back, portable-stereo style. It also offered muting groups of instruments and adjusting the tempo.

Roland MT-80S

The MT-80 was followed by the Roland MT-300, an even more pimped-out version with added features and stereo sound, as the concept became fairly popular as a way of practicing music with virtual instruments.

Roland MT-300


Just seeing these horrible devices made me instantly scour eBay to see if I too could acquire a piece of history, these boxes of grey plastic anno 1996. Sadly they seem to start at around $200-$300 on eBay, and that’s before you even factor in the cost of replacing the most likely non-working floppy drive. I hosted a couple of cassette listening sessions at ATP as a tongue-in-cheek protest against the desire to play vinyl only on rotary mixers, and thought it would be a great sequel to play some horrible tracks from floppies instead. Since the price was a bit above my taste, the only thing left to do was to build it.

Refining the concept

Starting out, I’ve always found it helpful to decide where the boundaries are. If you set out to build something from the inspiration above there are tons of directions you could take, but intentionally setting limits makes it clearer what tools and processes you need to use. For this project it was clear early on that:

  • The device should have a physical interface with real buttons. No touchscreens.
  • The device needed to use floppies.
  • Bitmap displays were out of the question; if the device needed a display it would be either a segment or character LCD.
  • The device needed to hide the fact that it would be a repackaged Raspberry Pi.
  • The device should not have built-in speakers, focusing only on playback as if it were a component of a stereo system in 1995.


With this in mind I drew some absolutely terrible sketches to figure out what the interface would look like in order to settle on components. Once I had this done it was trivial to think about which components I would use to build something that worked like the sketches. An LCD would be vital to make selecting files from the floppy easier, the buttons should be prominent and if possible colorful, and the device chassis should be in a grey or black material.

It was then just a matter of finding some nice components that seemed fun to use. I wanted an encoder and stumbled on a rotary encoder with an RGB LED inside; while fancy, I felt it would be a great way to signal the state of the device, so I ended up ordering it. For the buttons and LCD, Adafruit had both at a reasonable price, so I used these while sketching. I also found a really cheap USB floppy drive on Amazon, which I bought along with a 10-pack of “old stock” Maxell floppies. Sadly 9 out of 10 of those floppies did not work, which had me order a 10-pack of Sony floppies from eBay that worked better. I think this shows that there was an actual quality difference between floppies back in the day, as the Maxells just haven’t held up as well as the Sonys.

Knowing this, it was just a matter of laying the components out with reasonable accuracy in Illustrator to get a feel for how it would end up. For this version I imagined that the rotary encoder would sit to the right of the LCD and that the device would be a compact square. I tried some of the above sketches, but a more compact package felt more “90s integrated mini-stereo”, with focus on being a device you could have in the kitchen.


Designing the enclosure

Starting from this concept it was just a matter of opening Fusion360 and getting to work on fitting the components. As usual I would be 3D printing this enclosure, and to retain structural integrity I tried to minimize the printed part count, instead opting for larger printed pieces and being smart about how I designed the enclosure to fit them.



I started with the front panel as it would hold all the buttons and the encoder. As you can see I deviated somewhat from the concept, as I realized the LCD has tons of baseboard padding that I would have to hide somehow. For that reason I couldn’t place the encoder to the right of the display without making the device unreasonably wide. I experimented with a couple of different aspect ratios and settled on something that’s “almost” square with visibly rounded edges. The reasoning here is that the device will have more depth due to the floppy, so if the front panel were square it would break the “cube”, hence why it’s a tad wider than tall as it stands.


One thing that was really hard to figure out was how to properly mount the encoder and buttons without any visible screws on the front. Since the user presses the buttons inwards, the buttons need to rest against something that holds to either an internal bracket or the front panel. I settled on a sandwich concept that locks the buttons in place against the front panel. While this has some issues during assembly, I think it was a reasonable compromise, and it could be printed in one part. It was also easy to design screw brackets into the PETG on the front panel.

The panel went through a lot more iterations than what’s shown here, but it’s hard to present every iteration in a good way. I had to consider how the electronics would fit and where to put the Raspberry Pi for it all to make sense. Since the floppy is the deepest part of the device, I placed it at the bottom in order to build the rest of the inner workings on top of it.

The next problem was how to accurately represent the USB floppy in Fusion360. Since the floppy had some weird curves, and I wanted it properly sandwiched into the case so that it would feel like part of the box, I had to come up with a creative way to capture the curve. A real designer probably has great tools or techniques for this; since I’m not a real designer, I ended up placing the floppy on the iPad and just tracing it in Concepts. From there I exported the trace into Illustrator, cleaned it up and imported it into Fusion360, where I generated the volume from the drawing. Weirdly enough, this worked.


With that done I sketched out a baseplate with walls that would hold the floppy in place, and a layer plate which mounted on top of the floppy. Although this took a lot of iterations to get the fit and all the screws right, it was a pretty straightforward piece once I knew what components I wanted to use. Initially I envisioned the Raspberry Pi being mounted straight against the back panel with exposed connectors, but it felt weird to go through all this effort only to expose that the device is just a Raspberry Pi with an ugly screen, so I added a buck converter together with a 3.5mm connector in order to fully enclose the Raspberry Pi.



The last part was to add a cover with the correct sinks for M nuts and make sure that the rounded corners extrude all the way through the device. Here I experimented with a slanted case, as it would be faster to print, but the device ended up looking too much like an 80s Toyota Supra with popup headlights, so I skipped that idea and came to terms with the fact that this lid was going to take a good amount of time to print.


Making the panel work

Initially I envisioned this connecting straight to the GPIO of the RPI. It’s just 4 buttons, a serial display and an encoder with an RGB LED inside. On paper the RPI ticks all of these boxes: hardware PWM, interrupts, GPIO and serial output. Using all of these in Go is another story, however. It turns out that interrupt/PWM support for the RPI is actually quite “new” in Linux, with two different ways of approaching it: the old way was to basically memory map straight against the hardware and manually define the interrupts, while the new way provides kernel interrupt support for the GPIO pins. Go is somewhere in between; the libraries either have great stability with no interrupts, or really bad stability (as in userspace crashes when not run as root) using the interrupts.

Why do interrupts matter so much? Can’t we just poll the GPIO fast enough for the user not to notice? The buttons aren’t the problem here, but anyone who has used an encoder without interrupts knows how weird they can feel. Encoders use something called quadrature encoding (also known as gray code) to represent the distance and direction the encoder has moved when rotated. This means we would have to poll the GPIO really fast to catch the nuances of scrolling through a list without it feeling like the encoder is “skipping”. Using interrupts instead lets the microcontroller tell us when the encoder gray code has changed, which we can decode into perfectly tracked rotation.


from Wikipedia
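To make the decoding concrete, here is a sketch of the quadrature logic in Go. This is illustrative only, not the actual MT-420 firmware (which ended up on an Arduino): the whole trick is a 16-entry lookup table indexed by the previous and current 2-bit pin state.

```go
package main

import "fmt"

// Lookup table indexed by (previous state << 2 | current state), where a
// state is the 2-bit pin reading (A << 1 | B). Valid single gray-code steps
// yield +1 or -1; bounces and skipped steps yield 0.
var qdec = [16]int{
	0, 1, -1, 0,
	-1, 0, 0, 1,
	1, 0, 0, -1,
	0, -1, 1, 0,
}

// Encoder accumulates quadrature counts; a typical detented encoder
// produces 4 counts per detent.
type Encoder struct {
	prev     uint8
	Position int
}

// Update is called on every pin change (from an interrupt, in firmware)
// with the current A and B levels.
func (e *Encoder) Update(a, b uint8) {
	cur := a<<1 | b
	e.Position += qdec[e.prev<<2|cur]
	e.prev = cur
}

func main() {
	e := &Encoder{}
	// One full clockwise cycle: 00 -> 01 -> 11 -> 10 -> 00.
	for _, s := range [][2]uint8{{0, 1}, {1, 1}, {1, 0}, {0, 0}} {
		e.Update(s[0], s[1])
	}
	fmt.Println(e.Position) // 4 counts for one full cycle
}
```

The nice property of the table approach is that an invalid transition (two bits flipping at once, i.e. a missed sample) contributes nothing instead of a wrong direction, which is exactly why slow polling makes encoders feel like they skip.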

After battling this back and forth for a couple of hours, with the only viable solution looking like writing my own library, using an Arduino just seemed… easier. Wiring up an Arduino to do encoder interrupts and PWM takes basically no lines of code. I played around with using Firmata, but as before, the support in Go was lackluster, so I defined a simple bi-directional serial protocol that I used to report buttons and set the LED color from Go. Serial support in Go is luckily extremely easy. On top of this, the LCD “Backpack” that Adafruit shipped with the LCD had USB, so I got a nice USB TTY to write against for the LCD. This made the Go code quite clean.
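The protocol itself isn’t documented in detail here, so as an illustration, a minimal newline-framed protocol could look like this in Go. The message names (BTN/ENC/LED) are hypothetical, not the MT-420’s actual wire format:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// ParseEvent decodes one newline-framed message from the Arduino,
// e.g. "BTN:2" for a button press or "ENC:-3" for encoder movement.
func ParseEvent(line string) (kind string, value int, err error) {
	parts := strings.SplitN(strings.TrimSpace(line), ":", 2)
	if len(parts) != 2 {
		return "", 0, fmt.Errorf("malformed message: %q", line)
	}
	value, err = strconv.Atoi(parts[1])
	return parts[0], value, err
}

// LEDCommand encodes a host-to-Arduino message setting the encoder's RGB LED.
func LEDCommand(r, g, b uint8) string {
	return fmt.Sprintf("LED:%d,%d,%d\n", r, g, b)
}

func main() {
	kind, v, _ := ParseEvent("ENC:-3\n")
	fmt.Println(kind, v)             // ENC -3
	fmt.Print(LEDCommand(0, 255, 0)) // LED:0,255,0
}
```

Text framing like this is trivially debuggable with a serial monitor on both ends, which matters a lot more than efficiency at 4 buttons and one encoder.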

With this done I could wire up the encoder and the buttons on a breadboard to have a prototype panel to play with. It’s not a pretty sight, but it worked for developing the serial protocol. The Arduino framework has tons of problems, but it makes it easy to move code from one microcontroller to another. Here I’m running on an Uno, later ported to an Arduino Nano for the actual package.

panel proto

Here it is soldered down on a “hat” for the RPI with the Arduino Nano.

panel proto

Replicating the audio synthesis

Now the challenge was how to go about generating actual audio. There were a number of options here. The first was using a hardware MIDI chip such as the VS1053B, but after hearing the audio quality I quickly realized that it didn’t sound at all close to what I wanted. Discovering this made it clear that I wanted the sounds of the MT-32/SC-55 but in another package, which narrowed the scope to using SoundFonts. SoundFonts were introduced by Creative for their SoundBlaster AWE32 as a way of switching out the sounds on the soundcard: you load a soundfont onto the device much like you load a text font, and the device uses the bank of samples and the surrounding settings to generate audio. Today there’s a wide array of soundfonts available, as the format has taken on a life beyond Creative, including many MT-32/SC-55 soundfonts.

There is a great softsynth that uses soundfonts called FluidSynth. After experimenting with FluidSynth it was clear that the sound generation was spot on: it was able to re-create masterpieces like Neil Young’s “Rockin’ In The Free World” from some jank MIDI file that is the least tight re-creation of this song I’ve ever experienced. To clarify: FluidSynth is fantastic and the MIDI file of this track is really bad, but paired with the SC-55 sound it becomes golden. Just listen to it and you will be convinced.

Knowing this, the next step was figuring out how to interact with FluidSynth. FluidSynth has a CLI, but it quickly turned out that the CLI was hard to interface with programmatically. FluidSynth also ships with a C API, which seemed easy enough to work against. Go has a reasonably easy way of interfacing with C known as cgo. I found some rudimentary bindings on GitHub that lacked basically all the features I wanted, so I forked the repo and started building my own bindings with a more Go-centric API. I added a ton of functionality, which is now in my repo called fluidsynth2 on GitHub for anyone who ever finds themselves wanting to use FluidSynth from Go.

Writing the controller software

With that issue out of the way, the remaining part was to write the MT-420 code, which ended up being abstractions for the various hardware devices, a bunch of specific logic around mounting a floppy in Linux in the year 2020 (harder than you might think) and general state management. The code is absolutely horrible, as I wrote most of it before knowing how the hardware actually worked and mocked a lot of it against the terminal, later replacing the modules with actual implementations and discovering peculiarities about the hardware that had me refactor code in sad ways. It feels good to see this initialization of the floppy interface though.

// Floppy
delayWriter("Warming up floppy", delay, display)
var storage storage.Storage
if *mockFS {
    storage = mock.New(*mockPath)
} else {
    storage = floppy.New(lconfig.Floppy.Device, lconfig.Floppy.Mountpoint)
}
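As a sketch of the floppy-mounting part: a USB floppy shows up in Linux as an ordinary block device, so mounting it boils down to a vfat mount. The helper below just builds the command; the device path, mountpoint and mount options are examples, not the actual MT-420 configuration.

```go
package main

import (
	"fmt"
	"os/exec"
)

// mountCmd builds the mount invocation for a FAT-formatted floppy.
// Read-only is enough here since the device only reads MIDI files;
// running this for real requires sufficient privileges.
func mountCmd(device, mountpoint string) *exec.Cmd {
	return exec.Command("mount", "-t", "vfat", "-o", "ro,sync", device, mountpoint)
}

func main() {
	cmd := mountCmd("/dev/sda", "/mnt/floppy")
	fmt.Println(cmd.Args)
}
```

The annoying parts in practice are the ones around this call: detecting disk insertion and removal, retrying while the drive spins up, and unmounting cleanly when the user yanks the disk, which is where most of that "harder than you might think" logic lives.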


The code is structured in two main parts: the first creates the “devices” for the “controllers” to use. To make development easier I created “mock” versions of all devices (using interfaces in Go), which means you can run the entire application in the terminal and use the keyboard to “emulate” the panel. I’m glad I did this, as I almost lost my sanity multiple times trying to wrangle the LCD display. These “devices” are then passed to the controller, which follows a state machine pattern that I’ve written a couple of times in Go at this point. Instead of writing a defined “list” of states, I create an interface that the controller host can call on.

type Option interface {
    Run(c *Controller, events <-chan string, end chan bool) string
    Name() string
}
The controller then has an index of all available “modules”: it looks up the entrypoint module in the map and calls Run() with the required data. Each module is responsible for executing its own loop and returns a string containing the name of the next module. This means that a module can say “go back to X” and the controller will just call that module on the next loop. This pattern is simple, but I’ve found it effective when dealing with “blocking interfaces” such as humans interacting with the device.
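A minimal sketch of this dispatch loop in Go. The module names and the Trace field are illustrative additions, not the actual MT-420 code:

```go
package main

import "fmt"

// Option is the module interface from the post.
type Option interface {
	Run(c *Controller, events <-chan string, end chan bool) string
	Name() string
}

// Controller holds the module index and dispatches between modules.
type Controller struct {
	modules map[string]Option
	Trace   []string // visited module names, handy for testing
}

// Loop starts at the entrypoint module and keeps calling whatever module
// the previous one named, until a module returns "".
func (c *Controller) Loop(entry string, events <-chan string, end chan bool) {
	next := entry
	for next != "" {
		m, ok := c.modules[next]
		if !ok {
			return
		}
		c.Trace = append(c.Trace, next)
		next = m.Run(c, events, end)
	}
}

// screen is a stand-in module that announces itself and hands over control.
type screen struct {
	name, next string
}

func (s screen) Name() string { return s.name }
func (s screen) Run(c *Controller, events <-chan string, end chan bool) string {
	fmt.Println("entering", s.name)
	return s.next
}

func main() {
	c := &Controller{modules: map[string]Option{
		"splash":  screen{name: "splash", next: "browser"},
		"browser": screen{name: "browser", next: ""},
	}}
	c.Loop("splash", nil, nil)
}
```

Because each Run() blocks until the human does something, the host stays a dumb trampoline and all the interaction logic lives inside the modules.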

The question here though is “why use Go?”. To be honest, I think the real reason is that I was lazy and thought the scope of this was likely to be really small. That turned out not to be the case, especially with all the workarounds I had to do due to the pains of interacting with the kernel from Go. Go is really good at solving problems where the hardware is irrelevant, and less good at dealing with devices like a USB floppy. While I appreciate the fast cross-compilation, approaching this again I would have written it in Rust instead, as the C interop alone would have saved me the time spent on the FluidSynth bindings. It’s good to do one of these projects though, really pushing the boundaries of one’s comfort-zone language to understand where the weaknesses are.


You can find all the code for this on my GitHub. The repo also contains the STL files if you want to go ahead and print this yourself.


September 13, 2020


The only positive thing to come out of this absolute trash year is the extra time I’ve had to work on dumb projects. This project in particular has no purpose whatsoever, other than being the realization of an in-joke about file formats. In the last few years I’ve switched to almost exclusively buying new music in FLAC or other lossless formats to get closer to technical medium transparency for audio (Bandcamp is great). Of course, the times you hear any difference between an MP3 encoded at 320 or V0 and a FLAC are quite few, but we thought it would be funny to only play FLAC at ANDERSTORPSFESTIVALEN and highlight it in a dumb way.

The idea is simple: display what format is currently playing (not what song is currently playing) in the ATP bar using some 90s-style display technology.

There is this dumb movie prop from Tomorrow Never Dies that is supposed to be a “master GPS encoder”. What struck me about this prop is not the idea that a box would hold some offline cryptography key, but rather that it comes with a huge nonsense segment display. It’s the dumbest prop I’ve ever seen, which is why I wanted to use segment displays to show the format.


Reading the current track

To start with, we need some sort of API that we can poll to get the currently playing track from the playback MacBook Pro. I personally use Swinsian, but Spotify is often used in the bar as well, so supporting both is a must. Neither Swinsian nor Spotify has a REST/DBUS/whatever API to get song data out of, but it turns out they both expose AppleScript variables to be compatible with iTunes plugins. Knowing that, a shim around AppleScript that publishes the variables over an HTTP API would solve a lot of problems.

AppleScript is absolutely terrible. It’s the worst scripting language I’ve had the pleasure of dealing with, but after some massaging we have this code that gets the state of both Swinsian and Spotify and outputs it as some sort of hand-rolled JSON.

on is_running(appName)
    tell application "System Events" to (name of processes) contains appName
end is_running

on psm(state)
    if state is «constant ****kPSP» then
        set ps to "playing"
    else if state is «constant ****kPSp» then
        set ps to "paused"
    else if state is «constant ****kPSS» then
        set ps to "stopped"
    else
        set ps to "unknown"
    end if
    return ps
end psm

if is_running("Swinsian") then
    tell application "Swinsian"
        set wab to my psm(player state)
        set sfileformat to kind of current track
        set strackname to name of current track
        set strackartist to artist of current track
        set strackalbum to album of current track
        set sws to "{\"format\": \"" & sfileformat & "\",\"state\": \"" & wab & "\",\"song\": \"" & strackname & "\",\"artist\": \"" & strackartist & "\",\"album\": \"" & strackalbum & "\"}"
    end tell
end if

if is_running("Spotify") then
    tell application "Spotify"
        set playstate to my psm(player state)
        set trackname to name of current track
        set trackartist to artist of current track
        set trackalbum to album of current track
        set spf to "{\"format\": \"" & "OGG" & "\",\"state\": \"" & playstate & "\",\"song\": \"" & trackname & "\",\"artist\": \"" & trackartist & "\",\"album\": \"" & trackalbum & "\"}"
    end tell
end if

set output to "{ \"spotify\": " & spf & ", \"swinsian\": " & sws & "}"

With this out of the way, we can easily invoke the AppleScript from Go with osascript, capture its stdout, and serve it over HTTP with gin. I’ve published the end result as go-swinsian-state on my GitHub if you ever find yourself needing to do something this ugly.

Displaying this on alphanumeric segment displays

Working with segment displays is annoying. It’s essentially one LED per segment, and they are wired as a crosspoint matrix, meaning that you need tons of pins to drive them. Luckily for me, people have been annoyed at this before and created driver chips, such as the MAX7219, which lets you control the entire display over a single serial connection. This takes a lot of the headache off the table and allows me to use microcontrollers with much less headroom.

For this project, a key feature is that the display can’t be connected to the host computer over Serial/SPI/I2C; rather, it has to pull data over WLAN. There’s an array of microcontrollers out there now with WiFi support, but my personal favorite is the Espressif series. The ESP8266/ESP32 is in my mind almost a revolution in microcontrollers, offering a huge amount of connectivity with an insane amount of I/O at an unbeatable price (around $3-$4 per chip). Since this project is a one-off, designing a logic board seemed like too much effort, so I shopped around for an ESP32 in a nice form factor.

Adafruit has a nice series of “Feathers”, essentially a smaller-format Arduino with focus on battery power and size. They ship the ESP32-WROOM model for around $20, which is a fair price for the design and ecosystem. It also turns out they have a “FeatherWing” (akin to Arduino shields) with both the LED segment driver and displays. $14 is a reasonable price for this as well, so the BOM so far is $34, which gives us a microcontroller with WiFi and an alphanumeric segment display that can fit the words FLAC/MP3/OGG/AAC.

The fun thing about working with the Arduino framework is that there are good libraries for these components, so stringing all this together is less than 125 lines of code. I published the PlatformIO project on my GitHub as FORMATDISPLAY, but the main code is so short that I’ll include it here.

#include <Arduino.h>
#include <Wire.h>
#include <SPI.h>
#include <Adafruit_I2CDevice.h>
#include <Adafruit_GFX.h>
#include "Adafruit_LEDBackpack.h"
#include <WiFi.h>
#include <HTTPClient.h>
#include <ArduinoJson.h>
#include "Settings.h"

Adafruit_AlphaNum4 alpha4 = Adafruit_AlphaNum4();

IPAddress ip;
HTTPClient http;
String url = String(storedURL);
int errCount = 0;

void display(uint8_t a, uint8_t b, uint8_t c, uint8_t d) {
    alpha4.writeDigitAscii(0, a);
    alpha4.writeDigitAscii(1, b);
    alpha4.writeDigitAscii(2, c);
    alpha4.writeDigitAscii(3, d);
    alpha4.writeDisplay();
}

void displayC(const char *s) {
    for (int i = 0; i < 4; i++) {
        if (s[i] == 0x00) {
            alpha4.writeDigitAscii(i, ' ');
        } else {
            alpha4.writeDigitAscii(i, s[i]);
        }
    }
    alpha4.writeDisplay();
}

void setup() {
    // Initialize serial and alphanumeric driver.
    Serial.begin(115200);
    alpha4.begin(0x70);

    display('C', 'O', 'N', 'N');
    WiFi.begin(storedSSID, storedPASSWORD);

    int wifiRetry = 0;
    while (WiFi.status() != WL_CONNECTED) {
        delay(100);
        if (wifiRetry > 100) {
            ESP.restart();
        }
        wifiRetry++;
    }

    // Print the last octet of the IP on the display.
    display('W', 'I', 'F', 'I');
    ip = WiFi.localIP();

    char fip[4];
    itoa(ip[3], fip, 10);
    char veb[4] = {'-', fip[0], fip[1], fip[2]};
    displayC(veb);
}

void loop() {
    if (WiFi.status() == WL_CONNECTED) {
        http.begin(url);
        int httpCode = http.GET();
        if (httpCode > 0) {
            const size_t capacity = JSON_OBJECT_SIZE(2) + 2 * JSON_OBJECT_SIZE(5) + 940;
            DynamicJsonDocument doc(capacity);

            deserializeJson(doc, http.getString());

            JsonObject spotify = doc["spotify"];
            const char *spotify_format = spotify["format"]; // "OGG"
            const char *spotify_state = spotify["state"];   // "paused"

            JsonObject swinsian = doc["swinsian"];
            const char *swinsian_format = swinsian["format"]; // "MP3"
            const char *swinsian_state = swinsian["state"];   // "playing"

            if (strcmp(swinsian_state, "playing") == 0) {
                displayC(swinsian_format);
            } else if (strcmp(spotify_state, "playing") == 0) {
                displayC(spotify_format);
            } else {
                display(' ', ' ', ' ', ' ');
            }
            errCount = 0;
        } else {
            errCount++;
            display('E', 'N', 'E', 'T');
        }
        http.end();
    }

    if (errCount > 10) {
        display('E', 'R', 'R', 'D');
    }
    delay(1000);
}
It’s as simple as that, and it displays the currently playing format. The code has four distinct parts: the first bootstraps the WiFi and the segment display, the second acquires an IP address from the network, the third generates an HTTP request against the chosen endpoint, and the last parses the JSON that’s returned and sends the result to the display.


There is of course a discussion to be had about the wastefulness of storing an entire JSON blob on the heap of a microcontroller. The design above returns a pretty massive JSON blob with a lot of unwanted data that the microcontroller has to poll. When considering these tradeoffs I think it’s important to remember just how fast the ESP32-WROOM actually is. It’s a dual-core design running at 240 MHz; compare that to an Arduino Uno (ATmega328P) running at 16 MHz and it’s obvious that we don’t have to be as careful about wasting cycles here. Building solutions this way lets you easily prototype the end result and experiment, since everything is just JSON APIs.


Even though it already looks pretty neat, you really only want to see the segment display and hide away the rest of the Feather. I designed an extremely simple enclosure in Fusion360 in which the segment display pushes through an opening and is constrained by the footprint of the FeatherWing. The back cover is designed to snap into the chassis and stay in place using the friction of the PLA, a design I usually use for smaller enclosures like this. There is a platform extruded in the middle to push the Feather into the hole.


I 3D printed this small enclosure in about 2 hours on the Prusa, using Prusament Galaxy Black.


This project is really meaningless. At the same time, these projects don’t have to be more than fun to work on. If you think this project is meaningless, just wait until my next post on the other project.

September 9, 2020

California Wildfires

When I moved to San Francisco, dealing with wildfires on a yearly cadence was not one of the challenges I expected, but it has turned out to be a regular experience at this point. Waking up today, however, was an experience unlike any other. A thick layer of smoke was trapped high up in the atmosphere, blocking the majority of sunlight from shining through; it reminded me of waking up in Luleå in winter. Sara took some great photos that I wanted to share. All of these were taken with quite a long exposure to capture light, as it was almost nighttime-dark around 11 in the morning.

Wildfire smoke in San Francisco 1

Wildfire smoke in San Francisco 2

Wildfire smoke in San Francisco 3

Wildfire smoke in San Francisco 4

terrythethunder then cut together drone shots from San Francisco with the soundtrack to Blade Runner 2049 and the result was stunning to see.

July 26, 2020

Feeling Better

On April 30, 2020 at 15:20 I went out for a nice bike ride in the sun, hoping to get some exercise as COVID had changed my life habits pretty wildly. I remember coming down Quintara Street and turning off to 36th Ave, after which I don’t remember much more. My next memory is waking up in a CT machine somewhere, in immense pain, then spending the next 2 days going in and out of surgeries of different kinds.


I had gotten hit by a car and broken my arm in two places, broken my pelvis and broken my nose. On top of that I had fractured my back and my neck and lost 4 of my front teeth. It took a while to take this in due to the lack of consciousness from the concussion. The only positive part of the hospital visit was the dose of ketamine that the doctors administered. After injecting the normal dose for my weight they asked “do you feel anything?”. As a true K-legend I was obliged to answer “No, Doctor, I do not feel anything”, although the effect was present, which had them increase the dose further, leading my thoughts to the legend Partiboi69.

Since that day I have spent most of my time slowly recovering from the accident, which is partly why there has been a lack of updates on projects and general writing the last few months. It’s been a long time since I had a major injury in my life, so this absolutely added some perspective. At some point around my birthday in April I had stated that “well, this year can’t get any worse at this point”, which has proven wrong multiple times since.

I waited to post this as I didn’t see a reason to talk about it before I actually felt better, which I finally, to a certain extent, do. My arm is still healing and I have yet to get all the teeth replaced due to COVID. As my condition has improved over the last few weeks, Sara and I drove up to Yosemite and spent 2 weeks hiking and enjoying the sun.




night sky

July 20, 2020

Mastering Monsquaz Swap 11

One problem with the relay-writing approach of the Swap is that every participant lacks a reference for the final round. You only know how the song you have right now sounds and how previous rounds sounded, which makes it impossible to create coherence across the entire album. People do their best mixing the tracks, but due to differences in skill and subjective taste the results end up sounding wildly different in terms of bias and emphasis.

For that reason we try to have one or two people do a “mastering pass”, in which we import the “stems” (each track played back individually to a file) into a DAW and apply corrections and touch up parts with EQ. This time the task fell on me, mostly because everyone else seemed burned out from Swap 10, which seems to have been quite messy.

Mastering a swap is really hard because authors use multiple instruments per track and exporting “instrument”-based stems from OpenMPT is buggy, so you end up having to piece the tracks together by slicing and dicing the stems into new tracks in the DAW (in my case Ableton), which makes the timeline look ridiculous.


Because authors also tend to create new channels for similar instruments, rather than reusing lead/drum tracks when switching instruments, you also get a huge number of channels that can play sounds at any time in any order. Some of the songs ended up having 80+ tracks once drums were split out from reused tracks.


That’s not the worst part. In order to play chords from instruments in OpenMPT you essentially have to play the sound on multiple tracks at the same time so as not to replace the sample player. This means that when mastering you have to group these “chorded” stems together in order to apply processing to them as a group. This is a tedious process that takes a lot of time, especially when someone adds a 4th voice to a chord on track 43 while the rest of the chord is on tracks 3-6. Eventually you learn to easily spot these patterns visually, as the image to the right shows.


There are also 18 tracks to actually go through, which is quite a challenge to keep interest in when the stems are as unorganized as this. It also makes it hard to do really meaningful balancing, so you need to settle for touches that clear up mud in the mix. As the samples are pretty wide-range, the busier parts tend to get really muddy, so creating a couple of main audio busses where you apply selective EQing helps to clear up the mix and get the bassline out of the mud.

After going over these we’re finally able to release them both as FLAC and on SoundCloud. Hopefully I now don’t have to master a swap again for a long time. If you’re only going to listen to one of these tracks, listen to the breakdown in “Super Mike Adriano 64” around 3 minutes and 50 seconds. This is signature Dusthillguy and shows the unexpected directions a track can take.

June 24, 2020

The process of Monsquaz Swap Album #11

I wrote earlier about the Monsquaz albums and it’s time again. Due to some constraints last year with work I couldn’t contribute much to Swap #10, but I’m back for Swap #11. This is the 8th year that the collective known as “Monsquaz” creates another album using the same concept. Swap album #9 turned out to be a real high-water mark for the series, as a lot of the participants have matured musically over the years, so it’s exciting to do this again. This time around I am going to document each round here and post this entry once I’m done with all entries. I’ll also add a before and after MP3 to hear the changes I made per week.

To quickly recap, the idea is that everyone creates a seed based on a preselected list of samples, which is passed back to the host. The host anonymizes the seed and sends you the seed of someone else, which you iterate on for 24 hours. The swap follows a Latin squares pattern, which means that everyone works on every song exactly once. What you end up with are these artifacts that have no distinct style, often moving in different directions, but with tons of musical experiences that are a joy to listen to afterwards. The most enjoyable part of participating is submitting something really weird in the middle of the compo and hearing it resolve into something fantastic once all the songs are released.
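The Latin squares rotation can be sketched in a few lines of Python. This is a toy illustration of the scheduling idea only, not the actual tool the host uses (the real swap also anonymizes the assignments, and the names here are made up):

```python
def swap_schedule(participants):
    """Latin-square schedule: one round per song, and every participant
    works on every song exactly once, a different one each round."""
    n = len(participants)
    rounds = []
    for r in range(n):
        # rotate the song assignment by one position each round
        rounds.append({participants[i]: (i + r) % n for i in range(n)})
    return rounds

# with 4 participants there are 4 songs and 4 rounds
for rnd in swap_schedule(["anna", "ben", "cleo", "dave"]):
    print(rnd)
```

Each printed dictionary is one round; following a single participant across rounds shows them visiting every song number once, and within any round no two participants share a song.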

To do this we are all using software called OpenMPT, as it provides a very stable way of moving the project files between different participants without causing issues: no VSTs or sound banks are needed to play back the files. OpenMPT is a music tracker, which is best shortly explained as making music in Microsoft Excel 97.


If you’ve ever seen a DAW before, imagine rotating it 90 degrees and letting the channels flow towards the bottom instead of towards the right. Every column represents a voice and every row in that column is a place where the tracker can trigger a sample with a pitch instruction, volume instruction and effect modifier. The format per row works like this:

G#4     08              v46         X67
Note    Instrument      Volume      Effect

Sounds limited? It absolutely is, which is part of the beauty of working with this software. A lot of the creativity stems from figuring out ways of noting down musical ideas in this way of thinking about music. Each column can technically contain any instrument and switch seamlessly back and forth, but it only has one “sample player”. This means that if you trigger a long violin sample and then a drum hit on the same column, the drum hit will “choke” the earlier sample and take over the sample playback. Achieving polyphony means using multiple columns, even for something as simple as playing a chord.
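The single sample player per column can be modelled roughly like this. This is a toy model of the choking behaviour described above, not OpenMPT’s actual playback engine:

```python
class Column:
    """Toy tracker column: one sample player, so any new trigger
    replaces (chokes) whatever sample was playing before."""

    def __init__(self):
        self.playing = None

    def trigger(self, instrument, note):
        choked = self.playing  # whatever was sounding is cut off
        self.playing = (instrument, note)
        return choked

col = Column()
col.trigger("violin", "G#4")       # long violin note starts
cut = col.trigger("kick", "C5")    # drum hit chokes the violin
print(cut)                         # ('violin', 'G#4')

# polyphony needs one column per simultaneous voice, even for a chord
chord = [Column() for _ in range(3)]
for voice, note in zip(chord, ["C4", "E4", "G4"]):
    voice.trigger("piano", note)
```

This is why chords in a swap end up smeared across many channels: each simultaneous voice needs its own column.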

With that said, here are my notes in chronological order, taken daily while participating. I streamed everything to Twitch as well, so I’ve gone ahead and linked to each of the VODs if you are curious about how this actually works and want to see me struggle terribly.

The experimentation stage

In this stage the tracks are changing pretty rapidly, shifting tempo and structure as creators rework the initial seeds into new ideas and add on patterns. This is the wildest stage of the compo to hear a track in, as the end result is almost unrecognizable.

Round 1 (The Seed)


For this swap I mimicked a lot of how I began Swap 9, as the result from that was absolutely amazing. I started out with a very simple bassline melody and moved on to setting the tempo by creating a double-tracked kick track. I ended up adding some build-up just for the sake of it but could just as well have left the seed at this point. Last compo I never figured out filters in OpenMPT, so I gave it a try this time by filtering elements back and forth, used specifically on the last build. It turns out that after I got the filters working I also realized that I had used them the wrong way. Hope someone fixes that next round…

Round 2



A short loop with a lot of potential but absolutely zero song structure, so that is what I’m attacking with this seed. I created a dub delay effect by panning the snare back and forth and slowly fading it out, emulating a delay. The X effect allows you to pan on top of the volume adjustment message, which I took advantage of. Lastly it was a matter of just finishing up the build-up and introducing the melody. Pretty happy with how this seed transformed; the melody has a lot of potential but it will need a continuation of the theme.


Round 3



Round 3 was a slower song, playing at 90 BPM with some beautiful guitar melodies that almost felt like a town theme from a game. Just based off the different edits, I think the majority of the track was made by the first author and the second author added the “Somewhere Over the Rainbow”-esque melody at the end. The guitar is nicely tracked, which is what I screenshotted from this piece. This is a bit too slow I feel, but the melody has a lot of potential.

I therefore upped the base tempo from 90 to 105 and added a further increase to an even faster part after the last breakdown in the seed. From there I sequenced faster drums and reworked the melody to fit a faster tempo, while adding in a new bassline that follows the arpeggiated guitar. I did not want to spend too much time sequencing my part further here, as there are 15 rounds to go for this track, which means it will probably change dramatically at least twice. My changes come in after 1:35 in the submitted song.


Round 4



This seed starts out so weird, with an ominous feeling and broken-down samples with a distorted kick drum. Initially I wasn’t sure, but the track quickly goes to a really good place with an arpeggiated lead and a nice modulation to a new key. The issue, however, is that the last author to work on this track before me decided to ignore all of this and start a new track after the really good layers that already existed. Is this accepted? In a way yes; the swap allows you to create parts that don’t fit together and hope that someone resolves them later on. What irks me here is that when doing that you usually start by providing the transition rather than another track. As the track stands seeded, it’s basically just two tracks playing in sequence.

So in true swap fashion I spent about an hour working on the first part of the track, as I thought that was the most interesting. I didn’t delete the later parts, but I prepared for someone to create a transition to the new track that was lying around, which should give the next compo participant something to work with. On top of this I extended the nice arps with a bassline breakdown (almost my swap signature addition at this point) and added some weird percussion with the soda can opening sample.

Round 5



This song, like so many of the other tracks, has an identity crisis. It starts out in a very slow and moody Final Fantasy SNES style and mid-song transitions into a much faster track. Both of these parts have problems: the first part lacked density, so I added some layers to the earlier parts. The second part goes basically nowhere, so I spent some time trying to bridge it into something new.

I wasn’t really feeling that inspired for this round even if the song is good. OpenMPT is so annoying to work with. I understand a lot of people appreciate trackers, but it is clear to me why they aren’t favored by a lot of musicians any longer: there are just too many intricate details to work around in order to get the effects you want. Really, I think the tracker style of composing makes you develop signature tracking patterns and repeat them many times because the software is so hard to experiment with. Want to space the notes out and repeat it fast? Copy-paste that a million times and hope you got it right. Either I’m using the software wrong or I’m just used to good UX design, which this is not.

Round 6



This one was really fun to work on. The seed was short and contained a lot of discarded ideas, so I think that someone during a previous round took the initial seed and reworked it into the 4 bars that existed in the seed. A lot to do here for me today, so I started out with giving the song some structure as usual and polishing the general soundscape of the track. I ended up adding tons of patterns to this track, going from 5 to 15.


The seed had a really beautiful progression where the string lead played Cm11 -> C#maj7 -> F#9sus4 -> D#m7 (I think) with some notes removed for parts of the chords which I think augmented the arpeggiated bell very nicely. I ended up changing these up towards the end to work the track towards something new. To do that I broke the track down and started filtering in element after element in a very house music fashion and ended my contribution by reworking a leftover idea from the seed that hadn’t gotten used.

I decided not to add anything else to the track at this point, as I am really hyped to see where it will end up during the next 12 rounds; there is so much potential here and I feel that my contribution created a great path for someone to continue on. It’s almost impossible not to continue the ending, which hopefully should keep someone from tracking in something completely different.

Round 7



Nice day today! I was out walking in the sun and meeting a friend for the first time since the COVID-19 experience started; this might as well be a pseudo-diary at this point. The reason this matters is that I was pretty tired today, so I had a hard time convincing myself to get started on tracking. Opening the track made it clear that there is an identity crisis in the song. The song I got has three different major themes going that are close to each other in style but not in melodic feeling, so bridging these was the focus for today. It took a while before I found anything to work on with this track, but eventually I extended the end of the track, which got the creativity rolling. It’s funny that one can be so empty of creativity and have it flow back as soon as you start doing the work. Tons of shitty lifehacks describe this as a method that works, and after this day I’m inclined to believe it.

I fixed the intro of this track, as it’s pretty much bam on with a filter fade of the lead over the drums. After that I added a sweet bassline solo and worked on bridging the two earlier themes together by fixing the breakdown that someone else had added earlier to the song. That’s mostly it for today; I probably contributed 5-8 patterns to this track but nothing groundbreaking this time around.

The progression stage

In this stage the seeds have major themes and melodies but are usually disjointed and lacking overall song progression. The goal here is to work on existing patterns and add variation while working within the spirit of the track. Some trackers take this opportunity to change the track dramatically by adjusting the tempo or reworking a major part.

Round 8



This track is weird. Great intro that slowly works itself towards something else, and then the track changes completely. Without a doubt the weakest track I’ve gotten so far in terms of ideas and polish. It’s hard to even know what to do with this track. These rounds are generally really difficult, as there is so much to tackle that it’s hard to know where to start. I can try to fix a transition, make a completely new pattern or just mess around with edits.

Eventually I came up with a cool addition. I discovered that some of the patterns sounded great when played in a “shuffle” style by stepping through them with CTRL + Enter, so I decided to actually make that a change to the pattern. I accomplished it by using the tempo control and manually changing the playback speed up and down to create the varying tempo. Who knows if this survives, but after adjusting the effect a bit the edit actually improved the style of the track a lot.
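The tempo trick boils down to alternating the playback tempo above and below the base so that every other step is held longer. A rough sketch of the idea, with made-up numbers rather than the actual effect values tracked into the song:

```python
def shuffle_tempos(base, swing, steps):
    """Alternate tempo around `base`: even steps run slower (held
    longer), odd steps run faster (rushed), giving a crude shuffle
    feel. `swing` is a ratio, e.g. 0.2 for a 20% deviation."""
    slow = round(base * (1 - swing))
    fast = round(base * (1 + swing))
    return [slow if i % 2 == 0 else fast for i in range(steps)]

# tempo commands you would track in, one per step
print(shuffle_tempos(125, 0.2, 8))
```

In the tracker itself this means placing a tempo effect command on every step, which is exactly the kind of tedious manual work described elsewhere in this post.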

On an unrelated topic, there has been a viewer named Sir_Kane who has watched basically every day of me tracking. I need to do a shoutout here, as I probably would have given up on streaming at this point due to the fear of not coming up with ideas. With MR KANE waiting, you feel obligated to go back and start tracking on stream.

Post-round addition: Gotta spoil this track: the shuffle made it through the entire compo, and someone added an absolutely amazing saxophone solo on top of it that just made the part work. I laughed so much that I felt the need to add the extract here.

Round 9



Initially it was very hard to figure out what to do with this track. The track has two really strong themes going throughout. The first is early in the track, where someone has used the “You got me burning up” sample in a very creative way with some fantastic harmonies and a good bassline. The second strong theme is in the middle of the track, where it builds up towards a strong melody.

After being uncreative for a couple of minutes I noticed what seemed to be a bug in some later patterns, where the author had copied something wrong by mistake, creating a shift in rhythm that most likely isn’t supposed to be there. Fixing this issue also led me to think “hmm, what if it goes to this part afterwards”, giving me some ideas on what to work on with the track.

Ended up using the bassline in the middle to extend what I think is one of the stronger parts and created 7 new patterns in total for the track, much more than I had expected. All in all a good day tracking.

Round 10



They keep getting weirder. This is even messier than the one I got yesterday, with multiple themes and tons of strong melodies that don’t really fit together. I’m not really going to try to fix that, but rather provide some framework around the concept. Sir_Kane commented that it sounded like a “SNES fighting game”, so I created a new intro to mimic a game console startup sound, which should sort of hint that this tries to be a videogame tune. I took inspiration from the startup sound of the PlayStation and created something that had the same slow lush pads with glimmering bells in the background.

On top of that I added some backing melodies to one of the patterns and completely destroyed another one. There is just one pattern that doesn’t fit and I think the transition is going to have to be one of the classic SWAP COMPO SPEED UP transitions. Not much else to say about this track other than that it was really hard to work with.

Round 11



This track is very different. So far almost every track has been quite unique, which is different from some of the other swaps where tracks end up repeating themes. Probably due to the number of people involved, you get more variety in the tracks this time around. This track is much more chill compared to many of the others but suffers from a lack of balancing.

Layering can be a good thing, but in this case there are parts that are just too heavily layered, where elements fight each other for attention rather than harmonizing. I’m not going to tackle that, so I decided to add flair to the empty parts instead without trying to overdo it. Enter the Timbale Rim Sample! I started scattering this one around, playing some accents, which really worked with the theme.

After that I tackled extending what I think is the strongest melody of the track by duplicating some patterns and reworking the melody. It took me longer than it should have to figure out that the melody was in F major. Note to self: definitely spend more time next year on just memorizing the circle of fifths so I don’t have to consult Scale Finder every single time I run into these melodies.

Round 12



Finally a track that starts out differently from the other tracks. This one actually goes for a high-BPM, kick-drum-driven beat and the filtering sounds great. It gets a bit messy mid-track and eventually someone prior to me decided “nah” and did a classic “swap transition”.

In many other Monsquaz Compo songs, when swappers don’t know what to do, a common pattern I see is that they just speed up the track, make a stupid snare arrangement and jump to something completely new, hoping that someone else figures the transition out. That rarely happens, because the new track is so different. Instead of attacking this problem head-on, I decided to start transitioning the track back to the part that I felt was cohesive for the majority of the track. I did this by slowly rebuilding the same percussion that the track stood on and transitioning the tempo back towards what it used to be by the end of the track. Hopefully someone can fill in here and polish it up.

12 days of creating music in a row is actually more tiring than I expected. I’ve done this before but never this seriously. In previous years I’ve sometimes skipped rounds or added small amounts of percussion when I didn’t have ideas, but this time I try to actually contribute real patterns for every iteration, which is mentally exhausting. I am learning a lot though; my sense for good melody is improving with every day that passes, and I’m starting to actually become friendlier with the arcane tracker UI. I’m likely going to revisit some of these melodies afterwards, as there is a lot to learn from the other authors.

The “fixing transitions” stage

When compos are as long as this the stages are more fluid but normally this is the point where creators start trying to fix harsh transitions and make the track flow more seamlessly between ideas. Often this means creating transitions between parts that are different.

Round 13



The best part about being in this compo is when you get to a part and instantly know who tracked it from the style. This song has a really strong breakdown with some absolutely fantastic chords that just scream one particular author. Very well composed, and a good break compared to the other breaks in this track. Apart from enjoying this I spent time fixing transitions left hanging by previous authors.

As OpenMPT has jump instructions, the composer is able to build pretty advanced chains of music throughout the track. With this added complexity comes the potential to introduce “bugs” into the song. One that I found today was an author who had created a hard pointer to position 11. In OpenMPT, positions form an ordered list, so if you insert something prior to the pattern, the jump instruction breaks. This is probably fine when authors work alone on a track, but when you’re swapping, these instructions become pretty dangerous if passed to someone who misses that they suddenly skip patterns. To fix this I converted all of these occurrences to relative jump instructions, which should work regardless of where they end up in the position list.
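The difference between the two jump styles can be sketched with a plain list standing in for the position/order list. The function names and toy representation here are mine; only the idea (absolute targets break on insertion, relative offsets survive insertions made before both endpoints) comes from the fix described above:

```python
def to_relative(jumps):
    """Convert {source_position: absolute_target} jumps into
    {source_position: offset} so they survive insertions that
    happen before both the source and the target."""
    return {src: target - src for src, target in jumps.items()}

order = ["P0", "P1", "P2", "P3"]
jumps = {3: 1}               # at position 3, jump back to position 1
rel = to_relative(jumps)     # {3: -2}

# someone inserts a new pattern near the front of the order list
order.insert(1, "NEW")       # ["P0", "NEW", "P1", "P2", "P3"]

# the absolute jump target now lands on the wrong pattern...
print(order[1])              # NEW (was P1 before the insert)
# ...while the relative offset still reaches the intended one
src = 4                      # the old position 3 shifted to 4
print(order[src + rel[3]])   # P1
```

Note the caveat: a relative offset still breaks if a pattern is inserted *between* the jump and its target, so it is a mitigation rather than a complete fix.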

Today I also found out that OpenMPT actually has a visual effect editor called the “Parameter editor”!

parameter editor

It’s hard to describe how much happier I would have been had I found this earlier in the process. I’ve spent too much time at this point hand-editing envelopes for filters and effects, and OpenMPT just happens to have this secret visual parameter editor that takes that pain away. Very frustrating to discover this so late, but I guess it helps for the last 5 rounds.

Round 14



Oh no. I dreaded a track like this the entire compo. This track is basically all the issues of doing a music swap combined into one. At one point in the seed there is an ear-piercing sound that someone tracked in like this:


Cleaning this track up is hard; there are a lot of repeating patterns that just serve as filler for no reason, and the last part absolutely doesn’t fit in. My solution was to drastically re-arrange the track by redoing the outro part as an intro that transitions over to the original intro, to adhere to the “don’t delete” spirit of the swap. On top of that I removed all “pattern repetitions” where a pattern is played twice in a row as filler, as I felt that served no purpose most of the time. Also, not going to lie, I flat out deleted the ear-piercing part. I’m usually pretty strict about rarely deleting in swaps, but this was just bad shitposting.

This is probably the first track in this compo where I didn’t add anything: I just fixed mixing, mistakes, bad chords and structure. We are reaching the end of the compo, so now is the time to solve these lingering issues.

Mixing / Mastering

At this point it’s hard to add anything substantial without reworking large parts of the track. For that reason mixing and mastering become the main focus, where the contribution you can make is to polish and fix issues while balancing the different channels.

Round 15



This track is a joy! It really shows off the complex compositions that can come out of the compo while remaining decently coherent and fused together. There are no stray patterns, harsh transitions, song shifts or weird chords. Just a really good track with some great composition in an odd time signature.

Since this is round 15 I mainly focused on fixing bugs and wrapping up the track. I spent some time creating an actual outro, as the previous swapper had tried to add a disco beat towards the end that didn’t fit in, so I reworked that into a real outro. Apart from the outro I scattered some timbale rim sounds here and there to give some dynamics to the otherwise pretty static beat.

The track was absolutely awesome to hear, but at the same time there was not much to work on. This late it’s almost more powerful not to add anything, which is why this one got the small polish.

Round 16



A very polished and solid track here. It’s obvious that we’re in round 16 just based on the coherent tracking and nice mixing. With that said, there are always some mistakes to fix. At this point in time, focusing on mixing and panning helps the track escape muddiness. I added some panning to the crazy timbale pattern and double-tracked some of the basslines for punch. Apart from that there wasn’t really much to do other than sprinkle percussion at certain points to vary the track.

Unrelated side note: The built-in filter in OpenMPT sounds absolutely terrible. When I started out making music I never believed there was a big enough difference between filters, but after spending time with the Elektron Analog Four/Rytm series I am convinced that filter character is an extremely important part of filter sound. The problem with OpenMPT is that it’s neither “clean” nor “musical”: sweeping the filter sounds harsh and boosting the resonance gives a lot of unwanted artifacts. I’m happy that this is the final stretch of me using this software. Ableton does this really well with their filter emulation types that allow you to get the filter singing in a musical way.

Round 17



The track was pretty much done: minor mixing and panning of instruments. The only things I did were remove a pattern and balance instruments. I found a couple of smaller bugs with filters that were easy to resolve. These usually happen when an author adds new parts in the middle without considering what happens afterwards, and this time it was obvious that it was an unintended effect. I didn’t really feel it this day in general, so it was pretty nice to get a track with that amount of polish.

Round 18



What a journey these 2.5 weeks have been. I forgot how draining it is to try to summon some sort of creative energy every day, and some days have been challenging to get going. Today was one of them, amidst a flurry of things that have occurred around me the last few days. Luckily nothing that directly affects me, but when these events affect friends the emotional burden spills over. To remedy this I started the tracking session by listening to the Athletic Theme from Yoshi’s Island a couple of times to get in the mood. I just love the way the S-SMP sounds and how the limited samples really create an awesome composition.

This track actually needed a lot of work, more than I expected for the last round: a lot of pretty harsh transitions, and mixing that needed improvement in several places. I started out by listening through the track and taking notes of every place that had something that bothered me. After the first listen I just went through the list and solved part after part until it all felt more coherent and even. There was a pretty harsh transition towards the end that I initially didn’t know how to approach. What ended up working was reworking the patterns from the next segment into a similar style as the pattern I was transitioning from and aggressively using the filter to make it fade. The last part was adding an outro, as the track was trying to pick up speed again with a really nice arpeggiated lead. I faced a dilemma here: either pace the track back up or slow it down. After experimenting with a couple of different approaches I decided to end it there and slowly fade the parts out. The ending really could have been better in hindsight.

One thing I find helpful when trying to master tracks is listening to them in a ‘new context’. After staring at OpenMPT for 18 days it helps to just break the visual connection to the track and listen to it in darkness while standing up. This allows me to hear the track differently and find sounds that stick out. Depriving myself of the visual stimuli is surprisingly effective, at least for someone like me who is more fluent in visual communication. This is a nice screenshot of me doing this at the end of R18.

listening in darkness

Part of the tradition is that the person who finishes the track also names it. I settled on “Sir Kane’s Cinnamon Party” as a tribute to the loyal Twitch viewer who sat through most of this experimentation.

Reflecting on the experience

After doing all the small fixes I spent some time reflecting on things that I’ve learned from this experience:

  1. Setting a schedule like this for myself works. The forcing function of this compo means that together we produce 18 tracks (of varying quality), but you get actual output. When making music myself I end up spending endless amounts of time on perfecting sounds rather than pushing out tracks to iterate on. Building the Pixelcube was a great showcase of this, where I had an actual deadline to meet: the cube HAD to be done by July 4th, which meant I finished it by then. Using this technique on my personal music projects is obviously the way to go after doing this.

  2. I wish that I had spent more time on creating melodies early. I deferred this in favour of helping create structure in the tracks, but my weak side is clearly creating catchy melodies. I naturally gravitated towards working on the things that I was strong at, and next time I should push myself harder to take the opportunity to work on the things I am not as strong at.

  3. Next time I should listen to the track earlier in the day and edit after giving it some time to grow on me. This happens naturally over the course of a session anyway, but it makes my first minutes more about taking in the track than effective editing. Applying this to my personal work would be really valuable as well.

  4. OpenMPT is a horrible way to make music, for me at least. There are great learnings from fighting it, and I’ve gotten so much better at both using the software and making music, but I’m really happy that this is finally over.


I got the opportunity to once again host the SYNC LISTEN, in which everyone hears the result of each track for the first time. We usually do this by streaming the result, so I leveraged my knowledge of streaming once again. Here is a VOD from the sync listen that covers each track’s seed and result.

This ends the last 3 weeks of tracking, and I’m on the fence about whether I’ll participate in the next swap. For a long time it felt like a very small group, so my impact was pretty large, but the group has so much talent now that I feel it’s hard to make a meaningful contribution across all days. Hopefully I can take the learnings from this compo into my own work. Who knows, chances are I’ll feel excited to do this again in 2021, but right now I am over using OpenMPT.