A few weeks have passed since my last entry, mostly because I've had a lot of "actual" work on my plate that has taken priority over the project. Humans tend to see progress as an exponential or skipping process. You see the result of the creator, and the result has to stand on its own, meaning the time that went into the project is invisible to the viewer. This is probably the hardest part of building a project: for a good project to work, a lot of scaffolding has to be built that advances the project but not the result. Hence projects progress on a linear timescale while results progress on an exponential one. The project shows no result for a long time, until all the pieces align and the result evolves quickly. The same goes for this project: spending tons of time designing the small parts does advance the state, but it feels further away than when the prototype was mounted in my living room. It's almost like the result has progressed backwards, but knowing this and pushing through it is what will take me out of the local minimum.
With that said, there has still been a ton of progress in the few off hours that have surfaced over the weeks. It turns out that repeating a task multiple times just multiplies the odds of something going wrong at each step. As an example, the cheap 3D printer I'm printing the parts on has broken down in every imaginable way, and I have had to spend time replacing components, driving me to the point of purchasing a different 3D printer (a Prusa i3). The first thing to break down was the build plate. Adhesive tape just didn't cut it for me, so I decided to print on glass with thermal pads, which has been very successful for my projects so far; while making prints stick is harder, the print quality is better. The second thing to break down was the stepper motor, quickly followed by the hot-end fan. Both were replaced with better components and modded onto the printer in place.
I guess you just can't assume consistently great quality from a $159 (with shipping) printer from Monoprice. It's clear that the printer hit that price point by cutting tons of corners, and while I still recommend it as a way to get into 3D printing, be aware that it will not survive printing more than 3 kilos of PLA before something breaks.
The printed drill guides I showed last post turned out to work very well for the purpose. They fit perfectly on top of the part and made drilling identical holes a breeze. As you can see here, the first one fits over the entire bar and mounts to a plank to ensure equal holes on both sides, whereas the second fits over one of the corners to allow drilling 3 perfect holes per outlet.
With all the parts printed (24+ bar holders) and bolts glued into the holders, I was able to assemble the physical parts, without the actual LED bars inside them, to test the rigidity of the solution so far.
Turns out it fits extremely snugly. There was some extra drilling for alignment purposes, but the design works 99% for the use case. Huge relief, since I've been very worried about whether the actual thing would come together in the end. This leaves 2 remaining large tasks instead of 3, namely electronics and software. There have been a lot of changes in how I'm attacking the electronics, but I will leave that for the next post, as I believe there are better conclusions to be had by explaining it in more detail. So that leaves the software part.
In an earlier post I discussed what the optimal solution would be for having a lot of CPU horsepower yet a dead simple SPI solution to feed bits to the APA102 LED strips. After playing around with USB-to-SPI bridges and a variety of other solutions, all working but none working perfectly, the most obvious solution surfaced by itself. Sometimes I forget that most computers today are outfitted with a standard high-speed communication bus that can be switched between devices and easily extended: hardwired Ethernet. Even if the Raspberry Pi is too slow to drive pattern generation and FFT, it can without a problem consume network traffic and convert it into APA102 data at the desired framerate. So I spent time building a Raspberry Pi bridge that connects using ZeroConf/Avahi and consumes pixel traffic from the nocube host.
Another problem that quickly surfaced was the need for a way to test patterns and code without having the cube set up, leading me to build a "simulator". The simulator was surprisingly easy: since all the "pixels" are mapped in a Cartesian coordinate system between 0 and 1 (float64), rendering them in Three.js took an hour or two, with the largest problem being figuring out how to efficiently stream the color updates per pixel. Here's how the rendering currently looks in Chrome.
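The normalized coordinate mapping is what makes both the simulator and the volumetric patterns cheap to build: every LED is just a point in a unit cube. A sketch of how pixel positions along one edge could be laid out; centering pixels at (i+0.5)/n is my assumption, the real project's exact spacing may differ:

```go
package main

import "fmt"

// Point is a pixel position in the cube's normalized [0,1] space.
type Point struct{ X, Y, Z float64 }

// edgePixels spreads n pixels evenly along the edge from a to b.
// Centering each pixel at (i+0.5)/n keeps every coordinate strictly
// inside (0,1), so patterns evaluated on the unit cube always have a
// defined value for each LED.
func edgePixels(a, b Point, n int) []Point {
	pts := make([]Point, n)
	for i := 0; i < n; i++ {
		t := (float64(i) + 0.5) / float64(n)
		pts[i] = Point{
			X: a.X + (b.X-a.X)*t,
			Y: a.Y + (b.Y-a.Y)*t,
			Z: a.Z + (b.Z-a.Z)*t,
		}
	}
	return pts
}

func main() {
	// One 75-LED rod along the bottom-front edge of the cube.
	edge := edgePixels(Point{0, 0, 0}, Point{1, 0, 0}, 75)
	fmt.Printf("first: %+v last: %+v\n", edge[0], edge[74])
}
```

The simulator then only needs this list of points plus a stream of per-pixel colors to draw the cube in Three.js.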
After this I started on the actual audio processing part of this project with aubio. I resurrected a seemingly abandoned project on GitHub and added the bindings to the C code I needed that didn't exist in the current bindings (now available in my fork). With this in place I wrote a simple, naive FFT pattern to test how it would look. In this case the pattern just renders the 1024 bins summed down to the nearest pixel per bin. It doesn't really look great rendered naively at 5 FPS in a web browser, but here it is regardless.
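The "1024 bins summed down to the nearest pixel" step is simple enough to sketch. This is a guess at the binning logic, not the project's actual pattern code:

```go
package main

import "fmt"

// binSpectrum sums FFT magnitude bins down to nPixels buckets, so a
// 1024-bin spectrum can drive a rod with far fewer LEDs. Source bin i
// lands in bucket i*nPixels/len(bins) -- the "nearest pixel" for that
// bin when the spectrum is divided evenly across the strip.
func binSpectrum(bins []float64, nPixels int) []float64 {
	out := make([]float64, nPixels)
	for i, v := range bins {
		out[i*nPixels/len(bins)] += v
	}
	return out
}

func main() {
	// A flat spectrum of 1024 bins, summed down to 4 pixels.
	spectrum := make([]float64, 1024)
	for i := range spectrum {
		spectrum[i] = 1.0
	}
	fmt.Println(binSpectrum(spectrum, 4)) // each pixel sums 256 bins
}
```

Each output value would then be normalized into the pipeline's 0.0-1.0 intensity range before hitting the color stage.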
Apart from this there has been general work on the processing pipeline, which now allows multiple processing pipelines to run simultaneously in a threaded manner, each feeding into the result per frame. There is a new rendering loop that runs the internal generation at a fixed framerate and lets outputs lock onto that framerate, instead of having the output dictate the framerate of the rendering. This makes sure the patterns render at full speed on the device I'm using whether or not an output is connected, allowing me to benchmark the performance of the patterns.
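The fixed-framerate loop can be sketched with a `time.Ticker` pacing the generation. This is a simplified illustration of the idea; the real pipeline fans each frame out to any connected outputs rather than calling a single render function:

```go
package main

import (
	"fmt"
	"time"
)

// renderLoop generates frames at a fixed rate, decoupled from any
// output: the ticker paces generation, and outputs can lock onto that
// cadence or be absent entirely. It runs for a fixed number of frames
// here so the sketch terminates.
func renderLoop(fps, frames int, render func(frame int)) {
	ticker := time.NewTicker(time.Second / time.Duration(fps))
	defer ticker.Stop()
	for i := 0; i < frames; i++ {
		render(i) // generate the frame regardless of who consumes it
		<-ticker.C
	}
}

func main() {
	start := time.Now()
	renderLoop(200, 20, func(frame int) {})
	fmt.Println("20 frames at 200 FPS took", time.Since(start))
}
```

Because the loop's timing doesn't depend on an output draining frames, measuring how long `render` takes per tick gives a direct benchmark of pattern performance.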
Seeing the demo come together has me convinced that this will look really good once up and running as a complete project. There is still a lot of work to do but every day gets me one step closer.
Although I've written about how I feel the Internet has changed over the past years, there are still glimmering moments of brilliance that outshine a lot of the negative self-centeredness we have today. FatalFarm's "LasagnaCat" is one of them: a series of weird videos published in 2008, exploring the digital tropes of the time, only to go into a long hibernation. 9 years later, LasagnaCat reappeared on the internet with an array of new videos, all published in Netflix-esque fashion by dumping them all online at once. Of all these videos, one sticks out above the pack. 07/27/1978 starts out in the classic LasagnaCat format: a reenactment of a Garfield strip followed by a capture of the original strip for comparison. After the sketch ends, LasagnaCat provides its own commentary on the strip, and in this video the commentary is a 1-hour monologue read by John Blyth Barrymore. The monologue, cut to look like a one-take shot, is backed by fantastic production value and a nonsensical yet intriguing script set to the "Kundun" score by Philip Glass.
This is the perfect embodiment of internet mashup culture, unlike anything I've seen before: a deadpan commitment to the joke, creating the impression that the creators are completely serious about the intent of the video. In a world of short-form video, with episodic content striving to hit the magic barrier of 10 minutes to meet YouTube's ad density requirements, this video goes against all the established formats to provide one hour of monotone, yet unique and well-produced content. It's hard to actually recommend this video, as the enjoyment I got didn't come from the video itself. Rather, the gratification it delivers is the meta commentary on what I believe is the authors' opinion of the state of YouTube in general.
The last two weeks have mostly been a repetitive process of sketching in Fusion 360, slicing, printing, verifying and realizing I need to tweak something, then repeating the entire process for a couple of iterations. Since the tolerances of the fittings need to be perfect, the whole thing becomes an interplay between designing around the limitations and tolerances of the 3D printer and having the parts come out the way I want them. All these iterations have produced quite a few prototypes:
This is also the reason I decided to make the lamp in part 2 as a separate project. There was a lot of prototyping I needed to do to verify that the approach was sane, and building a lamp allowed me to verify some of these ideas without printing parts I intended to throw away afterwards. Sort of like a stepping stone to reaching the final design.
After designing quite a few variations of the end caps for the rods, I settled on a design where three M4 hex nuts can be slotted in, locking the rod in place through pre-drilled holes while holding the L angle in place through a slide-in fit.
With this done (a relief), I had two remaining problems:
Deciding how the rods connect to each other
How to drill exact holes in the 3-way fitting and the rod while keeping it secured
After looking into a bunch of options for connectors and solutions, it was clear there had to be an actual connector; a terminal block would not work due to the restricted space within the fittings. Since the budget had already gone out the window for this project, going for Molex connectors seemed like the right choice. I purchased a bunch of Molex SL and Molex 0.93 2-pin connectors from Mouser, the problem being that Molex has maybe a million different connectors, so figuring out exactly which ones to use was hard. For example, this is the spec for a Molex pin; figuring out what goes with it is not obvious if you haven't used the ecosystem before. Someone should build a digital Molex compatibility guide. The Molex SL connectors are perfect for the SPI, and the 0.93 plug can sustain up to 15A, making it perfect for the use case. You also have to buy the, for some reason, ultra-expensive Molex tools (sad face).
With all this done, a rod could finally be powered up in the form it will have when the cube is done, using the proper connectors and a simulator to drive the APA102.
As for the second problem, this was again one of those situations where Love had great feedback on how to approach it: "Just 3D print a fitting for the fitting with pre-printed drill holes 4Head". It sounded like a wild idea, but I had to at least try, so I designed a hood for the 3-way PVC fittings:
Printing this also turned out to work (after some tweaking of the model). Sliding it onto the 3-way fitting ensures a snug fit while making it easy to drill perfectly aligned holes. I ended up sketching the same approach for the rods, with two symmetrical drill mounts that I can screw to a table, with a rod mounted, and use as a drill guide.
With all this done, what remains of building the cube is actually just manufacturing 11 more rods, which, due to the printing time (9 hours per rod), will take a fair amount of time on the smaller 3D printer I have. Thanks to this, though, I can get back to writing the software. Software-wise I've changed my mind about some minor design implementations, but that is something I will cover in the next blog post.
Hope you enjoyed this update, I will try to post the next update within a few weeks!
I needed a way to test my setup and have something small and visible while working on the Pixelcube, in order to see what sort of mounting would work for all this. Since I already had all the material needed for a small one-off project, I fired up Fusion 360 and started sketching. The point of this was also to get more familiar with Fusion 360 for the other sketching; I understood the basics already, but there were a lot of "workflow" related approaches I needed to experiment with to get to a point where I could build the other parts faster.
To mount the rod, there needed to be a solid base that could hold the PixelBlaze controller. It also needed to hold the L angle on which the LED strips were to be mounted, something I solved by drawing a 45-degree angled L into the print.
The fascinating thing about 3D printing is that the 3D printer doesn't do any smart operations at all. It follows orders in a large text document telling it where to move the nozzle and how much to extrude. All the smart layout work is done by the "slicer", which divides the model into "layers" that can then be printed. The slicer also understands where it needs to build supports and how to build the object up in the most optimal way. Below is a video of the layers that Cura sliced for me.
Printing this foot took 10 hours and 13 minutes. The printer I’m using (Monoprice MP Select Mini 3D Printer V2) isn’t the fastest so I’m sure this could have been done much quicker with a faster printer.
Soldering it all together and connecting the PixelBlaze resulted in this, which is a fair lamp, built only from spare parts.
So with all this done, there are a couple of things I need to figure out for the Pixelcube:
How do I mount the LED strips in a better way? The strips turned out to take up too much real estate within the rod. Is there a smaller option? Can I use electrical tape?
The L angle mounting system works well. I need to tweak the measurements of the angle for a better fit, but the general approach is solid so far; an appreciated positive development in this project.
Soldering the APA102 LEDs is going to be pain and suffering. I need to order connectors ASAP and start soldering the Molex connectors as soon as possible.
I've spent the last few weeks building a cube out of PVC vertices and acrylic edges, with no faces, where the edges are filled with over 60 individually addressable LEDs per meter. It's not done yet, but I felt that sharing an update on why I'm doing this, and why it turned out to be harder than I thought, would be nice.
Idea & Background
To be honest, I don't know how this really started. Possibly I was watching something that, together with an influx of other ideas, led me to the conclusion that I just had to build a large light fixture. I talked with my friends from Sweden about building something interesting for ANDERSTORPSFESTIVALEN, and I guess this is the idea I subconsciously came up with. This is in a way very representative of how my mind works: once it is clear what I want, the result is almost finished inside my head. The end result is clear to me early on, but mapping it onto a process of development is the hard part.
This cube would be a huge physical light fixture that behaves like an LED wall, but without the somewhat boring LED wall form factor. If you've ever worked with LED walls, you know that after a while the 2-dimensional plane tends to feel very flat, even if you build intricate shapes with mapping, as light is only projected at a 180-degree angle. There is no volume to it, meaning perspective is hard to achieve. Since an LED wall maps to a screen, content tends to be 2-dimensional clips, pre-rendered and then applied in a manner synced to the music. While there is some software available for generative work, most of it caps out at 60 FPS and has visible latency due to the processing chain: generate -> feed over HDMI -> go into the processor -> get fed to the wall.
I wanted to build something different: a volumetric cube with the same LEDs you'd find in a good LED wall. High dynamic range, fast dimming and aggressive driving allow for refresh rates over 240 FPS, meaning you're able to do aggressive PoV (persistence of vision) effects and stroboscopic patterns. All of this while not projecting 2D content onto a 3D fixture, but instead building something that's inherently volumetric.
Here’s an early concept around how the cube could look:
Building this monster
Here's the thing: I had no idea how to construct this cube when starting. Absolutely zero knowledge about building anything physical except electronics. So in order to prototype how this would even look, I glued together an old Amazon cardboard box and hung it in my kitchen to make it easier to visualize what would be needed to build it.
I went through a couple of iterations on building materials and eventually decided to buy the acrylic rods needed and just commit to experimenting instead of trying to solve everything on paper. After a long internal debate, I decided the cube would be 50 inches per side (plus some for the fittings), since 50 inches is exactly 127 cm (very pleasing). I ordered the acrylic rods, and once they arrived I immediately realized I had a couple of issues.
It turns out that these acrylic tubes are measured by outside diameter, whereas PVC pipe fittings are measured by inside diameter with a standard wall thickness, meaning my 1/2 rods had no great counterpart. I spent a fairly long time trying to solve this, with heat shrink tubing, electrical tape and some other experiments to adapt them, with no good solution.
No idea how to make the LED strip "float" inside the acrylic tube. I thought I could mount it on an L angle aluminium bar, but how would I fixate the bar with such tight clearances inside the fittings?
No good way of actually fixating the rods inside the fittings. Since the cube is to be suspended at a 45/45-degree rotation, it needs strong rigidity in the edges, something friction alone can't provide. I experimented with a bunch of different solutions to this problem without a really good outcome.
With all these problems stacking up I started to get a bit negative about the project. It was hard to solve, at least if I wanted it to be close to what I imagined and not take shortcuts on build quality. I did not want multiple support wires inside the cube, and I could not find a good way of actually mounting it. In times like these, an outside perspective is key. I had two good friends (Love & Pajlada) come over to my house to hang out and look at the cube. Love immediately suggested that I should 3D print the parts I needed, both to hold the fittings and to hold the L angle. I knew basically nothing about 3D printing, apart from a small part I had sent away for printing in a previous project. That did not stop me, however: I purchased a Monoprice MP Select Mini 3D Printer V2 and started printing some test parts I modelled. Turns out this was exactly what I needed.
This just shows how important it is to have an outside perspective when you get stuck on a project. Having Love vet my ideas and add his perspective basically made this project possible. It's possible I would have come to this conclusion eventually, but never as fast as I did here. The fittings are now tight, and I've modelled in holders for bolts so each part can act as a mounting plate for the screws. On top of this, the parts also hold the L angle on which the LED strip is mounted.
After experimenting together over the weekend, we put the cube together temporarily, just feeding the strips through the rods and taping it with electrical tape to hold its form. After the first power-up it was obvious that this was what I had been striving for from the start:
LED & Processing hardware
The vision puts a couple of requirements in place, namely a driver that can generate patterns at 240 FPS and do music analysis, and an LED strip powerful enough to deliver on this. With these restrictions in place, I had to start looking into what could be used to build this piece. For the LEDs it turned out to be rather obvious: the chip/strip that has what I need is the APA102, also referred to as the "superled". The APA102 has a lot of features compared to the WS2812, the primary one being separate data/clock lines, meaning the data timing is far less sensitive. You also get a much higher dimming frequency plus a global intensity, allowing for LUT lookups to achieve a higher dynamic range. There's really nothing negative about this LED except the price.
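One of the nice things about the APA102 is how simple its wire format is to encode by hand. A sketch of the frame layout in Go (the end-frame sizing convention varies between implementations; this one errs on the generous side):

```go
package main

import "fmt"

// apa102Frame encodes RGB pixels into the APA102 wire format: a
// 32-bit zero start frame, one 32-bit frame per LED (a 0b111 header,
// the 5-bit global brightness, then blue, green, red), and an end
// frame of extra clock bytes -- at least one clock edge per two LEDs
// -- so the last pixels latch.
func apa102Frame(pixels [][3]byte, brightness byte) []byte {
	buf := make([]byte, 0, 4+4*len(pixels)+len(pixels)/16+1)
	buf = append(buf, 0x00, 0x00, 0x00, 0x00) // start frame
	for _, p := range pixels {
		// p is [R, G, B]; the wire order is B, G, R.
		buf = append(buf, 0xE0|(brightness&0x1F), p[2], p[1], p[0])
	}
	for i := 0; i <= len(pixels)/16; i++ {
		buf = append(buf, 0xFF) // end frame
	}
	return buf
}

func main() {
	// One red pixel at full global brightness.
	frame := apa102Frame([][3]byte{{255, 0, 0}}, 31)
	fmt.Printf("% x\n", frame)
}
```

The separate 5-bit global brightness field is what enables the LUT trick mentioned above: you can trade it off against the 8-bit color channels to extend the effective dynamic range at the low end.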
To drive these LEDs I need a platform that's fast enough to do super granular FFT analysis, has audio input and on-board networking, and preferably has some groundwork already done. There are a bunch of directions that could be taken here:
Since I wanted this to be a no-hassle setup, the TouchDesigner option goes away immediately. On top of that, TouchDesigner has a node-based layout, and for some reason that never plays well with my way of thinking. I've tried these node-based UIs multiple times and never felt they were powerful compared to just writing the code.
PixelBlaze is a great alternative. Ben Hencke has built a very neat little controller on the ESP8266 platform (with an ESP32 upgrade in the works) that's the easiest to set up and get started with. It basically runs itself; however, I felt a tad limited by the CPU speed and the general pattern blending. It was hard to re-use components of patterns or mix multiple patterns.
The Snickerdoodle is a good alternative, but coding an FPGA is a nightmare. On top of that, the ARM core isn't that fast: it's an 866 MHz dual-core ARM Cortex-A9, slower than the Raspberry Pi, so a lot of functionality would have to be shifted to the FPGA. Even more headache. Working with FPGAs also doesn't lend itself to rapid prototyping. I have one of these sitting around, but it's an "in case of emergency" solution.
With all this considered, this is the actual solution:
This is the route I will take with the finished cube. It gives me enough CPU speed in a small package, less of a headache in terms of setup, and can be mounted inside enclosures. The FT232H chip speaks SPI at up to 30 MHz, which is perfect since I only need 20. However, this requires me to write most of the software to drive it.
Wiring and electrical
How does this cube even get powered? Looking at the LEDs' datasheet, it turns out they can pull up to 50 mA per LED. With 75 LEDs per rod and 12 rods, that's about 900 LEDs to drive at 5V, not exactly something you can do from a regular USB port.
The PSU to drive this needs to deliver at minimum 45A at 5V to handle the LEDs at peak load. Mean Well has exactly what I need, and these PSUs are commonly sighted inside LED walls, so I felt safe with this option. Still, pushing 45A at 5V requires thick cable, since the voltage drop over distance will be substantial. Then there is the question of how the cube gets wired, since the APA102 LEDs also need to be connected in sequence. To figure this out, I drew it in SketchUp to visualize the connection order. Once I drew it, I realized this is a common graph theory problem: how do you walk all the edges in the fewest trips? (entry -> a - ab - bc - cd - da - ae - ef - fg - gh - he - ef (through skip) - fb - bc (through skip) - cg - gh (through skip) - hd)
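The 45A figure falls straight out of the datasheet numbers. A quick back-of-the-envelope in Go, using integer milliamps to keep the arithmetic exact:

```go
package main

import "fmt"

// peakDraw computes the worst-case supply requirement: every LED at
// full white, drawing the datasheet ceiling per LED. Returns the LED
// count, peak amps and peak watts at 5V.
func peakDraw(ledsPerRod, rods, mAPerLED int) (leds, amps, watts int) {
	leds = ledsPerRod * rods
	mA := leds * mAPerLED
	return leds, mA / 1000, mA * 5 / 1000
}

func main() {
	leds, amps, watts := peakDraw(75, 12, 50)
	fmt.Printf("%d LEDs, peak %d A, %d W at 5V\n", leds, amps, watts)
	// 900 LEDs at 50 mA each: 45 A peak, 225 W
}
```

In practice patterns rarely drive every LED at full white, but sizing the PSU and cabling for the worst case is what keeps the 5V rail from sagging.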
When building the rods I add an extra signal wire and a 5V carrier on the back side of the L angle; that way I can create a common voltage rail for the 5V and backfeed the signal in order to wire it all up as shown in the SketchUp.
To connect all this, I'll use regular Molex connectors at the ends, with a custom split one for 5V and serial ones for data.
Software & Design philosophy
As mentioned at the beginning of this post, in order to break free from 2D video mapped onto 3D fixtures, the cube has to support volumetric patterns. Think of volumetric patterns as resolving pixels inside an XYZ space; you could, for example, use a 3D model and have the intersecting LEDs light up. The software should be able to seamlessly mix between patterns and needs input from multiple sources, such as WebSocket, OSC, audio and time. After discussing the structure with Love & Pajlada, the processing pipeline ended up looking like this:
This is the layout of the intended processing pipeline. The pipeline is almost like a node-based editor, in the sense that it uses multiple generators producing a float between 0.0 and 1.0 for every single pixel; the operators you chain together are what create the effect. You could, for example, take a stroboscope generator and chain it to a color lookup that changes the RGB output to red, creating a red strobe. A hue shift or colorizer becomes powerful especially when using pre-defined palettes. The generator and color lookup can also make use of the FFT data and the beat to generate dynamic, music-driven effects with high resolution.
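To make the generator/operator idea concrete, here's a toy version of the red-strobe example in Go. The type names and signatures are mine for illustration, not the project's actual API:

```go
package main

import "fmt"

// Generator produces an intensity in [0.0, 1.0] for a pixel at a frame.
type Generator func(pixel, frame int) float64

// strobe is fully on every period-th frame and off otherwise.
func strobe(period int) Generator {
	return func(pixel, frame int) float64 {
		if frame%period == 0 {
			return 1.0
		}
		return 0.0
	}
}

// colorize chains a generator into an RGB operator by scaling a base
// color with the generator's intensity -- a minimal "color lookup".
func colorize(g Generator, r, gr, b byte) func(pixel, frame int) [3]byte {
	return func(pixel, frame int) [3]byte {
		v := g(pixel, frame)
		return [3]byte{byte(v * float64(r)), byte(v * float64(gr)), byte(v * float64(b))}
	}
}

func main() {
	redStrobe := colorize(strobe(4), 255, 0, 0)
	for frame := 0; frame < 5; frame++ {
		fmt.Println(frame, redStrobe(0, frame))
	}
}
```

A beat-reactive version would simply swap `strobe` for a generator that reads the FFT or onset data, while the color stage stays unchanged; that composability is the point of the design.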
It helps to start with the layout of the flow, since it makes it easy to figure out what software has to be written and what tools to use. In this case I decided on Go for a couple of reasons.
I know Go. Use what you know, the intent here is not to build distributed software, but to make this cube look good.
Go has great SPI support. Through periph.io, Go supports the SPI chip on the Raspberry Pi, the FT232H chip and the APA102 LED. This makes it easy to prototype and move between different hardware stacks: I can prototype on the Raspberry Pi, develop on my Mac and effortlessly move to an Intel-based Linux box without having to deal with compiling for different archs.
Go is fast enough. Compared to Node.js and the other languages I evaluated, Go hits the sweet spot between rapid iteration and execution speed, which is needed to achieve a solid 200 FPS.
There are bindings for aubio and PortAudio. Getting audio in and performing analysis on it turns out to be very easy with good bindings available for these two great libraries. aubio provides tools to analyze and make sense of a PCM audio stream, detecting interesting data like tempo, onset, beat and loudness; on top of that you get a good FFT implementation. PortAudio makes it easy to acquire audio across all platforms, serving the goal of portability.
I’ve been working on this since late October and intend to finish this before June 30th 2019 in order to bring it to ANDERSTORPSFESTIVALEN. As you can see, at the time of writing this post the cube is not yet done. I will post updates here on the blog once more progress has been made. I will end this post with a video that I shot when we tested the cube: