On Rust (Pixelcube Part 6)
If you work in programming or adjacent to it, chances are high that you’ve heard or read about Rust. Probably one of the worst-named programming languages in the world for SEO purposes, it’s Mozilla’s attempt at redefining how we write safe, compiled code in a world where safety isn’t optional. It’s also a non-garbage-collected language with almost zero interop overhead, making it a great language to wrap C and C++ with. I first came in contact with the language as early as 2013, when rawrafox wrote an audio decoder in Rust at Spotify’s “WOWHACK” at Way Out West Festival.
Rust has been one of those languages where I for a long time just didn’t see the point. If you’re going to take on the pain of writing non-GC code, why not just go full C++ and get all the flexibility that comes with writing C++ in the first place? Why bother with a language that has long compile times and imposes concepts that make certain programming flows hard? What I came to realize is that these arguments miss the point about why programming languages succeed.
Why does a language like Node.js succeed despite being dynamically typed and slow? If anyone remembers 2012, telling developers that you were writing JavaScript as a server-side language was more of a bar joke than an actual future people imagined. Or a language like Go? Why do developers accept the odd design decisions, the lack of generics and the annoying limitations? There are countless threads on the internet debating why these languages are technically worse than language X, and yet that doesn’t seem to matter.
I think these languages succeed because they have good ecosystems. Using a third-party library is almost second nature in Node.js, whereas in C++ a third-party library can take longer to get linking properly than the time it saves you. Node makes building on top of the ecosystem easier than building your own code; it’s just a matter of typing npm install whatever. This makes it easy to iterate on ideas and try out approaches that would have required significant investment in a language like C++. Go takes this even further with pragmatic conventions such as the Reader/Writer interface pair, which makes chaining third-party libraries a breeze.
These things are what make a developer like me productive. I write code because I envision what the end result should look like and then write whatever gets me there; the act of programming itself doesn’t excite me. While I appreciate the work that goes into challenges like “Advent of Code”, those kinds of problems never excite me as they don’t move me closer to the desired end result of my projects. I sometimes refer to myself as a “glue developer”: I try to glue together as many external things as possible and rewrite only the stressed parts of the glue. Being able to quickly try a library from the package ecosystem is almost more important than the language itself, which is why I gravitate towards languages like Node, Go and PHP.
In the spring of 2014 I had just finished building the DreamHack Studio and was trying to ship a new “overlay” for DreamLeague Season 1. We had high ambitions for this overlay and relied on a ton of scraping coupled with DOTA replay processing to create a very data-heavy overlay with realtime statistics. This image shows me and a former co-worker, nicknamed “Lulle”, pushing against a deadline. The clock is correct: it’s 05:58 in the morning, and we had to ship the code by 14:00 that day.
Being able to easily pull in the libraries needed to overcome problems, instead of spending 4 hours making a library link, was how we pulled this off. Glue programming lets you iterate extremely fast. In this case, for example, I had to write an integration that sent a DMX signal to the LED controller whenever a piece of graphics was triggered, and I was able to try out 3 different DMX libraries in less than 20 minutes to find the one with the right surface area for my problem. Sure, the code was dependency spaghetti, but we delivered a very innovative broadcast graphics overlay.
Why does this matter?
The reason I’m talking about Rust here is that I’ve hit a roadblock with Go that is so fundamental to the language that it’s hard to overcome without a significant investment of time. My goal with the software for Pixelcube (nocube) was to be able to iterate on patterns as easily as you can on a Pixelblaze. Being able to just save and see the result makes it much easier to iterate towards something nice. Since Go doesn’t support hot code reloading (plugins can be loaded but not reloaded, short of hacky runtime modifications), the only viable approach is to either interpret another language or call into another runtime.
I experimented with a couple of JS interpreters in Go and quickly realized that they broke the performance I was after, which led me to experimenting with V8 bindings for Go. This is where it starts getting really messy. Go has no user-definable build system for packages, which makes it impossible for a package to configure a complex runtime like V8. So to use V8, most approaches either pull a pre-built binding (in one case by extracting it from a Ruby gem, of all things) or build V8 from scratch. Building V8 from scratch is not an option either, due to the time it takes.
ben0x539 comments: I think cgo mostly takes care of that; the hard part is escaping the Go runtime’s small, dynamically-resized per-goroutine stacks, because C code expects big stacks since it can’t resize them.
Once that hurdle is out of the way, the next problem is that the Go runtime only provides one way of linking against other runtimes: Cgo. Since Go’s memory layout is unlikely to match a C struct, Go has to prepare the memory for each and every function call in both directions, leading to a lot of memory indirection and overhead. To be fair, this approach is common for GC languages such as .NET, so this is not really a failure on Go’s part, just a general problem that GC languages struggle with. This FFI (foreign function interface) call has to be made for each pattern update, which means that if you’re rendering 5 patterns at 240 FPS, you’re eating that overhead 2,400 times per second with the bidirectional data flow.
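To make the contrast concrete, here is a small hedged sketch (the Pixel struct is my own illustration, not nocube’s actual type) of why Rust largely avoids this marshalling: a #[repr(C)] struct is laid out exactly like its C counterpart, so a pointer to it can cross the FFI boundary as-is, with no per-call conversion step.

```rust
use std::mem::size_of;

// Hypothetical example struct. With #[repr(C)], its layout matches the
// equivalent C struct `struct Pixel { float r, g, b; };` exactly, so a
// *mut Pixel can be handed straight to C code without copying or
// converting anything on each call.
#[repr(C)]
pub struct Pixel {
    pub r: f32,
    pub g: f32,
    pub b: f32,
}

fn main() {
    // Three packed f32 fields, no hidden GC metadata or headers.
    assert_eq!(size_of::<Pixel>(), 3 * size_of::<f32>());
    println!("Pixel is {} bytes, same as the C struct", size_of::<Pixel>());
}
```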
I actually did manage to get this working, though not at the performance I wanted, but I was happy to have some sort of hybrid model. The issues really started to creep up when I tried to include another library called “Colorchord”. Colorchord is a library that maps notes to colors, something I had wanted to use from the start but didn’t have time to include last year. Writing the Cgo bindings for it turned out to be a nightmare that really pushed at the build-system limitations of Cgo, and it almost had me give up. Eventually I realized it would be easier to just rewrite the entire Notefinder part of the library in Go, and I even started that work, but it left a sour taste in my mouth about using Go for this project at all.
The straw that broke the camel’s back was when I wrote the wrapper functions for packing data into V8 and ended up fighting the lack of generics in Go. Starting out as a Go developer it’s fun that everything is very simple, but after writing functionXInt, functionXFloat32, functionXFloat64 and so on for the 100th time, you realize that generics exist for a reason. It’s not a meme.
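For comparison, a single generic function in Rust collapses that whole family of near-identical wrappers into one definition. This is just an illustrative sketch; pack_value and its stringify body are made up for the example, not my actual V8 packing code:

```rust
use std::fmt::Display;

// One generic function instead of functionXInt, functionXFloat32,
// functionXFloat64, and so on. In a real V8 wrapper the body would
// convert `v` into a V8 value; here we just stringify it to keep the
// sketch self-contained.
fn pack_value<T: Display>(values: &mut Vec<String>, v: T) {
    values.push(v.to_string());
}

fn main() {
    let mut values = Vec::new();
    pack_value(&mut values, 42i32);  // int
    pack_value(&mut values, 3.5f32); // float32
    pack_value(&mut values, 2.25f64); // float64
    assert_eq!(values, ["42", "3.5", "2.25"]);
    println!("{:?}", values);
}
```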
All these examples highlight something that should have been clear to me earlier: I was using the wrong tool for the job. The project requires high performance on shitty devices, lots of FFI into libraries like Aubio, V8 and Colorchord, and should preferably not be GC’d because of the realtime audio processing. It was clear that I had to reconsider my decision making to avoid the sunk-cost fallacy.
Evaluating Rust
So, back to Rust. The language never seemed interesting to me, as it was technically complex and didn’t fit my workflow; the ecosystem just seemed to have a lot of friction. While that was probably true at the time, the ecosystem has since developed significantly, in large part thanks to Rust’s package manager, Cargo. Trying out Rust in 2020 is almost like trying out a new language compared to the one I experimented with back in 2013. I decided to spend some time on seeing how easy it would be to link against V8 in Rust, and less than 10 minutes later I had basically re-created what took me hours to get right in Go.
My next project in Rust was to see if I could build an OPC (OpenPixelControl) implementation. I had used an old project written in C to visualize the cube, and it seemed like a fun test to render this in Rust instead. To my surprise I finished faster than expected, and the 3D libraries were mature enough that I could render the cube into a window. Seeing how simple this was to get right, I got eager to try solving the problem that was hard in Go: using the Notefinder from Colorchord.
Around here I started streaming my attempts at creating bindings for Colorchord and had a lot of help from ben0x539, a coworker at Twitch. He’s a true RUSTLING and provided a lot of insight into how the language works, which was invaluable in getting the bindings done. I want to give him a big shoutout, as Rust is a complex language and it really helps to have someone who can provide training wheels when you’re starting out.
The first step was figuring out how to create the bindings. I started out writing them by hand, then searched to see if there was an automated way, and there absolutely is. Rust has a project called bindgen which generates Rust bindings from C headers, shaving off tons of time. There are always edge cases where automated bindings aren’t the right thing, but with C it’s hard to make functions more complex than the obvious, so for this it worked. Unlike Go, Rust allows you to define exactly how the project should be built. By default Rust looks for a build.rs file, builds it and runs the result as a preamble to compiling the rest of the project. With this I was able to run bindgen as part of the build pipeline, which means the project itself just includes the Colorchord git repository, generates the bindings and builds Colorchord as a library, all in Rust.
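A build.rs for this kind of pipeline might look roughly like the following sketch. It uses the real bindgen and cc crates (declared as [build-dependencies] in Cargo.toml), but the file paths are hypothetical; don’t treat this as the exact build file of my bindings.

```rust
// build.rs — a sketch of the approach, with hypothetical paths.
// Requires `bindgen` and `cc` under [build-dependencies] in Cargo.toml.
fn main() {
    // Compile the vendored C sources into a static library that the
    // crate links against.
    cc::Build::new()
        .file("colorchord/colorchord2/notefinder.c")
        .compile("colorchord");

    // Generate Rust bindings from the C header at build time.
    let bindings = bindgen::Builder::default()
        .header("colorchord/colorchord2/notefinder.h")
        .generate()
        .expect("failed to generate bindings");

    // Write them into OUT_DIR so the crate can include!() them.
    let out = std::path::PathBuf::from(std::env::var("OUT_DIR").unwrap());
    bindings
        .write_to_file(out.join("bindings.rs"))
        .expect("failed to write bindings");
}
```

The crate source then pulls the generated file in with `include!(concat!(env!("OUT_DIR"), "/bindings.rs"));`.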
The bindings worked almost on the first attempt once the build file was correct and I got the first result out of the Notefinder. It was then trivial to use the same graphics library I used for rendering the 3D visualization of the cube above to draw a copy of the colorchord UI.
As the Notefinder struct is just a raw C struct with floats and ints that act as magic numbers for configuration, I thought it would be nice if the library provided a safe interface for setting these with valid ranges. I quickly ended up writing a Go-style solution with one function per setting for each of the different types; after writing function 4 out of 30, I stopped to think about whether there was a better way to do this in Rust. Enter macros! Rust supports macros for metaprogramming, which means I can write a macro that expands into those functions instead of writing them 30 times by hand.
macro_rules! notefinder_configuration {
    ($func_name:ident, $setting:ident, $v:ty, $name:ident, $min:expr, $max:expr) => {
        pub fn $func_name(&self, $name: $v) -> Result<(), NoteFinderValidationError<$v>> {
            if $name < $min || $name > $max {
                return Err(NoteFinderValidationError::OutsideValidRange {
                    expected_min: $min,
                    expected_max: $max,
                    found: $name,
                });
            }
            unsafe { (*self.nf).$setting = $name }
            Ok(())
        }
    };
}
When called, this macro creates a function named after the func_name argument and even lets you specify which type the generic NoteFinderValidationError should carry. The result is that instead of 30 roughly identical code blocks like the one above, I write the macro once and invoke it for each setting:
notefinder_configuration!(set_octaves, octaves, i32, octaves, 0, 8);
notefinder_configuration!(set_frequency_bins, freqbins, i32, frequency_bins, 12, 48);
notefinder_configuration!(set_base_hz, base_hz, f32, base_hz, 0., 20000.);
In the end I was so happy with how this binding turned out that I decided to publish it as a Rust crate for other developers to use.
What’s bad about Rust?
Syntax
The syntax is the worst part of Rust. It feels like someone asked a Haskell professor at a university to design the syntax for a language and then asked a kernel developer with only C experience to make it more palatable. The syntax differs from many other languages, often for no good reason. Take closures, for example: in most languages, defining an anonymous function uses the same syntax as a normal function minus the identifier. Rust, however, decided that closures should look like this:
|num| {
    println!("{}", num);
};
Why on earth would you use the vertical bar? Has any Rust developer ever used a non-US keyboard layout? The vertical bar or “pipe sign” is uncomfortable to type, and on top of that it feels misplaced: the language is already really busy with special characters signifying a variety of features, and this one could just have been a fn() {} or something similar. Speaking of busy syntax, here is one of my function declarations:
pub fn get_folded<'a>(&'a self) -> &'a [f32] {
Note how many different characters are used here. The ' introduces a lifetime, the a is the lifetime’s name, and <> declares it on the function. The result is just very busy to read, with ' being another good example of a character that developers on non-US keyboards often get wrong.
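In fairness, Rust’s lifetime elision rules mean these annotations can often be dropped entirely. A small sketch (Buffer is a stand-in I made up for the wrapper struct) showing an equivalent signature without any explicit lifetime:

```rust
// Hypothetical stand-in for the Notefinder wrapper struct.
struct Buffer {
    folded: Vec<f32>,
}

impl Buffer {
    // The elision rules make this equivalent to:
    //   pub fn get_folded<'a>(&'a self) -> &'a [f32]
    // because a lone &self reference lends its lifetime to the output.
    pub fn get_folded(&self) -> &[f32] {
        &self.folded
    }
}

fn main() {
    let b = Buffer { folded: vec![0.1, 0.2] };
    assert_eq!(b.get_folded().len(), 2);
    println!("{:?}", b.get_folded());
}
```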
Documentation
As I mentioned earlier, Rust’s largest problem is really being named “Rust”. I was trying to find a library named palette and typed rust palette into Google, which gave me the result to the right. On top of this there’s a game named Rust (which is actually a terrific game) that further confuses the language’s SEO when you try to look up certain concepts. Sure, this is a dumb piece of feedback that can be solved by adding something like lang to the query, but coming from other languages it’s annoying.
On top of this, the documentation has two other problems. The first is that the main documentation hub, docs.rs, suffers from some really bad UX; for example, getting from a documentation page to the source code repository means finding a link hidden in a dropdown menu. Secondly, since every version publishes a permalink which Google indexes, you often end up on outdated documentation pages and have to watch out for the small warning at the top. In my opinion the right way to solve this is to have Google index a “latest” tag and provide permalinks for specific use cases. If I search for tokio async tcpstream I don’t want to know how the function worked 3 years ago; if that were what I was looking for, I would add the library version to the query.
It also seems that a lot of crates neglect to specify exactly what you need to include in Cargo.toml for the library to expose all the functions you want. Often a library gates functionality behind feature flags, but the first example in the documentation requires one or two of those features to be explicitly enabled. Developers need to be upfront about exactly how to configure a library to achieve what the documentation shows. I’ve heard the argument that Rust’s helpful community makes up for this, and while that’s true to a degree, it never scales. Being able to search for problem x php and find more than 200 results explaining solutions to your problem is what makes a language like PHP so easy for beginners to grasp.
Going forward
This small evaluation of Rust was more successful than predicted. Two of the major problems I’ve had with Go were basically solved, and on top of that you can do real hot-swapping in Rust, which unlocks the use case that was impossible in standard Go. For those reasons it makes a lot of sense to rewrite the existing nocube code in Rust, and to fix some of the design mistakes I made in the first iteration at the same time.
Rewriting a program is something one should always approach with caution. It’s easy to look at a new language and think it solves all your issues, without understanding that every language comes with problems of its own. This, however, is one of those times where the tool wasn’t right from the start, and to me that just means I should have been less lazy at the outset.