January 22, 2019

Building a weird lamp (Pixelcube Part 2)

I needed a way to test my setup and have something small to look at while I work on the Pixelcube, in order to see what sort of mounting will work for all this. Since I already had all the material needed for a small one-off project, I started Fusion 360 and began sketching. The point of this was also to get more familiar with Fusion 360 for the other sketches; I understood the basics already, but there were a lot of “workflow” related approaches I needed to experiment with in order to get to a point where I could build the other parts faster.

In order to mount the rod, there needed to be a solid base that could hold the PixelBlaze controller. It also needed to hold the L angle on which the LED strips were to be mounted, something I solved by drawing out a 45 degree angled L in the print.

Base

Top

The fascinating thing about 3D printing is that the printer itself doesn’t do any smart operations at all. It follows orders in a large text document telling it where to move the nozzle and how much to extrude. All the smart layout work is done by the “slicer”, which divides the model into “layers” that can then be printed. The slicer also figures out where it needs to build supports and how to build the object up in the most optimal way. Below is a video of the layers that CURA sliced for me.

Printing this foot took 10 hours and 13 minutes. The printer I’m using (Monoprice MP Select Mini 3D Printer V2) isn’t the fastest so I’m sure this could have been done much quicker with a faster printer.

Printing

Soldering it all together and connecting the Pixelblaze ended up looking like this. It makes for a fair lamp, considering it was built only from spare parts.

Living

Living2

So with all this done, there are a couple of things I need to figure out for the Pixelcube:

  • How do I mount the LED strips in a better way? The strips turned out to take up too much real estate within the rod. Is there a better way that’s smaller? Can I use electrical tape?

  • The L angle mounting system works well. I need to tweak the measurements of the angle to fit better within the rod, but the general approach is solid so far. This is a welcome positive development for this project.

  • Soldering the APA102 LEDs is going to be pain and suffering. I need to order connectors ASAP and start soldering the Molex connectors as soon as possible.

January 21, 2019

Building the PIXELCUBE (Part 1)

I’ve spent the last few weeks building a cube out of a bunch of PVC vertices and acrylic edges, but with no faces, where the edges are filled with over 60 individually addressable LEDs per meter. It’s not done yet, but I felt it would be nice to share an update on why I’m doing this and why it turned out to be harder than I thought.

Idea & Background

To be honest, I don’t know how this really started. Possibly I was watching something that, together with an influx of other ideas, led me to the conclusion that I just had to build a large light fixture. I talked with my friends from Sweden about building something interesting for ANDERSTORPSFESTIVALEN and I guess this is the idea I subconsciously came up with. This is in a way very representative of how my mind works: once it is clear what I want, the result is almost finished inside my head. The end result is clear to me early on, but mapping it onto a process of actually developing it is the hard part.

This cube would be a huge physical light fixture that behaved like an LED wall, but without the somewhat boring LED wall form factor. If you’ve ever worked with LED walls you know that after a while the 2-dimensional plate tends to feel very flat, even if you build intricate shapes with mapping, as light is only projected at a 180 degree angle. There is no volume to it, meaning that perspective is hard to achieve. Since an LED wall maps to a screen, content tends to be 2-dimensional clips, pre-rendered and then applied in a manner that syncs to the music. While there is some software available for generative work, most of it caps out at 60 FPS and has visible latency due to the processing needed to generate -> feed over HDMI -> go into a processor -> get fed to the wall.

I wanted to build something different to this: a volumetric cube with the same LEDs that are in a good LED wall. Having high dynamic range, fast dimming and an aggressive driving speed allows for update rates over 240 FPS, meaning you’re able to do aggressive PoV (persistence of vision) effects and stroboscopic patterns. All of this while not projecting 2D content onto a 3D fixture, but rather building something that’s inherently volumetric.

Here’s an early concept around how the cube could look: early rendering

Building this monster

Here’s the thing: I had no idea how to construct this cube when starting. Absolutely zero knowledge about building something physical, except for electronics. So in order to prototype how this would even look, I glued together an old Amazon cardboard box and hung it in my kitchen to make it easier to visualize what’s needed to build it.

Prototype Cube

I went through a couple of iterations on building materials and eventually decided to buy the acrylic rods needed and just commit to experimenting instead of trying to solve this on paper. After a long internal debate, I ended up deciding the cube would be 50 inches per side (+ some for the fittings), since 50 inches is exactly 127 cm (very pleasing). I ordered the acrylic rods and once they arrived I immediately realized I had a couple of issues.

  • Turns out that these acrylic tubes are measured by outside diameter, whereas PVC pipe fittings are measured by inside diameter with a standard thickness, meaning my 1/2 inch rods had no great counterpart. I spent a fairly long time trying to solve this, with different approaches involving heat shrink tubing, electrical tape and some other experiments to adapt them, with no good solution.

  • I had no idea how to make the LED strip “float” inside the acrylic tube. I thought I could mount it on an L angle aluminium bar, but how would I fixate the bar with such tight clearances inside the fittings?

  • There was no good way of actually fixating the rods inside the fittings. Since the cube is to be suspended at a 45/45 degree rotation, it needs strong rigidity in the edges, something that friction alone can’t provide. I experimented with a bunch of different solutions to this problem without any really good outcome.

With all these problems stacking up I started to get a bit negative about this project. It was hard to solve, at least if I wanted it to be close to what I imagined and not take shortcuts with regard to build quality. I did not want multiple support wires inside the cube and I could not find a good way of actually mounting it. In times like these it turns out that an outside perspective is key. I had two good friends visiting (Love & Pajlada) and they came over to my house to hang out and look at the cube. Love immediately suggested that I should 3D print the parts I needed, both to hold the fittings and to hold the L angle. I knew basically nothing about 3D printing, except for a small part that I sent away to be 3D printed for a previous project. This did not stop me, however: I purchased a Monoprice MP Select Mini 3D Printer V2 and started printing some test parts I modelled. Turns out this is exactly what I needed.

3D printer

tube model

3D printed fittings

This just shows how important it is to have an outside perspective when getting stuck in projects. Having Love vet my ideas and add his perspective basically made this project possible. It’s possible that I would have come to this conclusion eventually, but never as fast as I did here. The fittings are now tight and I’ve modelled in holders for bolts so I can use the part as a mounting plate for the screws. On top of this, the parts also hold the L angle which has the LED strip mounted on it.

After experimenting together over the weekend, we put the cube together temporarily, just feeding the strips through the rods and taping it with electrical tape to hold its form. After the first power-up it was obvious that this was what I had been striving for from the start:

cubedemo

LED & Processing hardware

The vision puts a couple of different requirements in place, namely a driver that can generate patterns at 240 FPS and do music analysis, and an LED strip that’s powerful enough to deliver on this. With these restrictions in place, I had to start looking into what could be used to build this piece. For LEDs it turned out to be rather obvious: the chip / strip that has what I need is the APA102, also referred to as the “superled”. The APA102 has a lot of features in contrast to the WS2812, the primary one being separate Data / Clock lines, meaning the timing of the data is less sensitive. You also get a much higher dimming frequency plus a global intensity, allowing for a LUT lookup to get a higher dynamic range. There’s really no downside to this LED except for the price.
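
To make the data format concrete, here’s a minimal sketch (in Go, which is where the project ends up later in this post; this is not the cube’s actual driver code) of how an APA102 frame is assembled for SPI: a start frame of zeros, one 4-byte slot per LED carrying a 3-bit header, the 5-bit global intensity and the blue, green and red bytes, and finally an end frame with enough extra clock edges to push the data through a long chain. The Pixel type and the exact end-frame length are my own simplifications.

  package main

  import "fmt"

  // Pixel is an illustrative RGB value plus the APA102's 5-bit global
  // intensity (0-31), the field the WS2812 lacks.
  type Pixel struct {
    R, G, B   byte
    Intensity byte
  }

  // apa102Frame assembles the raw bytes shifted out over SPI for one update:
  // a 4-byte start frame of zeros, then 0b111xxxxx|intensity, Blue, Green, Red
  // for every LED, then end-frame filler (roughly n/2 extra clock bits) so the
  // last LEDs in a long chain latch their data.
  func apa102Frame(pixels []Pixel) []byte {
    frame := make([]byte, 0, 4+4*len(pixels)+4+len(pixels)/16+1)
    frame = append(frame, 0x00, 0x00, 0x00, 0x00)
    for _, p := range pixels {
      frame = append(frame, 0xE0|(p.Intensity&0x1F), p.B, p.G, p.R)
    }
    for i := 0; i < 4+len(pixels)/16+1; i++ {
      frame = append(frame, 0xFF)
    }
    return frame
  }

  func main() {
    pixels := make([]Pixel, 900) // 75 LEDs x 12 rods, as used later in this post
    fmt.Printf("frame size for %d LEDs: %d bytes\n", len(pixels), len(apa102Frame(pixels)))
  }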

In order to drive these LEDs I need a platform that’s fast enough to do super granular FFT analysis, has audio input, has networking on-board and preferably has some groundwork already done. There are a bunch of directions that could be taken here:

  • TouchDesigner

    Since I wanted this to be a no-hassle setup, the TouchDesigner option goes away immediately. On top of this, TouchDesigner has a node based layout and for some reason this never plays well with my way of thinking. I’ve tried these node based UIs multiple times and never felt that they were as powerful as just writing the code.

  • Arduino

    An Arduino is just too slow.

  • Raspberry Pi

    The Raspberry Pi is what I ended up prototyping all this on. It’s a great device but sadly it just doesn’t hold up when you start trying to push multiple volumetric patterns at 200 FPS.

  • PixelBlaze (ESP8266 & ESP32)

    PixelBlaze is a great alternative. Ben Hencke has built a very neat small controller on the ESP8266 platform (with an ESP32 upgrade in the works) that’s the easiest to set up and get started with. It basically runs itself; however, I felt a tad limited by the CPU speed and the general pattern blending. It was hard to re-use components of patterns or mix multiple patterns.

  • SnickerDoodle (ARM/FPGA)

    The SnickerDoodle is a good alternative, but coding an FPGA is a nightmare. On top of this, the ARM core isn’t that fast: it runs an 866 MHz dual-core ARM Cortex-A9, which is slower than the Raspberry Pi, so a lot of functionality would have to be shifted to the FPGA, which is even more of a headache. Working with FPGAs doesn’t really lend itself to rapid prototyping either. I have one of these sitting around, but this is an “in case of emergency” solution.

With all this considered, this is the actual solution:

  • Intel NUC with an FT232H chip attached

    This is the route I will end up taking with the finished cube. It gives me enough CPU speed in a small package, is less of a headache in terms of setup, and can be mounted inside an enclosure. The FT232H chip speaks SPI at up to 30 MHz, which is perfect since I only need 20. However, this requires me to write most of the software to drive it.

Wiring and electrical

How does this cube even get powered? Looking at the datasheet, each LED can pull up to 50 mA. With 75 LEDs per rod and 12 rods, that’s about 900 LEDs to drive at 5V, not exactly something you can do with a regular USB port.

(75 pixels × 12 rods) × 50 mA = 45 000 mA = 45 A

The PSU to drive this needs to deliver at minimum 45A at 5V in order to drive the LEDs at peak load. Mean Well has exactly what I need. These PSUs are commonly seen inside LED walls, so I felt safe with this option. Still, pushing 45A at 5V requires thicker cable, since the voltage drop over distance will be substantial. Now this becomes a question of how the cube gets wired, since the APA102 LEDs also need to be connected in sequence. In order to figure this out, I drew it out in Sketchup to visualize the connection order. Once I drew it, I realized this is a common graph theory problem: how can you walk all the edges in the fewest trips? Every corner of the cube has three edges meeting in it, an odd number at all eight vertices, so a single pass can’t cover every edge exactly once and at least three edges have to be re-traversed, which is what the skips do. (entry -> a - ab - bc - cd - da - ae - ef - fg - gh - he - ef (through skip) - fb - bc (through skip) - cg - gh (through skip) - hd )

wiring
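
Since a walk like this is easy to get wrong on paper, here’s a small sketch (in Go, purely illustrative) that encodes the hops from the drawing and checks that every one of the twelve edges is lit exactly once, with the three skip hops only carrying power and data back. The edge names mirror the labels above.

  package main

  import "fmt"

  // hop is one traversal in the wiring run; skip hops reuse an already-lit
  // edge purely to carry 5V and data back (the "through skip" entries above).
  type hop struct {
    edge string
    skip bool
  }

  func main() {
    // The walk from the drawing, starting at corner a.
    walk := []hop{
      {"ab", false}, {"bc", false}, {"cd", false}, {"da", false},
      {"ae", false}, {"ef", false}, {"fg", false}, {"gh", false},
      {"he", false}, {"ef", true}, {"fb", false}, {"bc", true},
      {"cg", false}, {"gh", true}, {"hd", false},
    }

    lit, skips := map[string]int{}, 0
    for _, h := range walk {
      if h.skip {
        skips++
        continue
      }
      lit[h.edge]++
    }
    fmt.Printf("edges lit: %d (want 12), skip hops: %d\n", len(lit), skips)
    for edge, n := range lit {
      if n != 1 {
        fmt.Printf("edge %s is lit %d times\n", edge, n)
      }
    }
  }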

When building the rods I add an extra signal wire and a 5V carrier on the back side of the L angle. That way I can create a common voltage rail for the 5V side and have a backfeed for the signal, in order to wire it up as seen in the Sketchup.

backfeed

In order to connect all this, I’ll use regular Molex connectors at the ends, with a custom split one for 5V and serial ones for data, in order to wire it all up.

Software & Design philosophy

So as mentioned in the beginning of this post, in order to break free from 2D video mapped onto a 3D fixture, the cube has to support volumetric patterns. Think of volumetric patterns as resolving pixels inside an XYZ space; you could, for example, use a 3D model and have the intersecting LEDs light up. The software should be able to seamlessly mix between patterns and needs input from multiple sources, such as Websocket, OSC, Audio and Time. After discussing the structure with Love & Pajlada, the processing pipeline ended up looking like this:

software layout

This is the layout of the intended processing pipeline. The pipeline is almost like a node based editor, in the sense that it uses multiple generators producing a float between 0.0 and 1.0 for every single pixel; the operators you chain on top of these are what create the effect. You could, for example, take a stroboscope effect and chain it to a color lookup that changes the RGB output to red, and in that way create a red strobe; a hue shift or colorizer becomes powerful especially when using pre-defined palettes. The generators and color lookups can also make use of the FFT data and the beat in order to generate dynamic, music-driven effects with high resolution.
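
Here’s a rough sketch of what that structure could look like in code; the names and types are mine, not the finished software. A generator resolves an intensity between 0.0 and 1.0 for each pixel from its XYZ position and the current time, operators transform that intensity, and a color lookup turns it into RGB. A sphere generator stands in for the “use a 3D model and light the intersecting LEDs” idea, and the chain below builds the red strobe example.

  package main

  import (
    "fmt"
    "math"
  )

  // Vec3 is a pixel position in the cube's XYZ space, normalised to [-1, 1].
  type Vec3 struct{ X, Y, Z float64 }

  // Generator resolves an intensity in [0, 1] for a pixel at a point in time.
  type Generator func(p Vec3, t float64) float64

  // Operator transforms an intensity, e.g. gating or strobing it.
  type Operator func(v, t float64) float64

  // ColorLookup maps the final intensity to RGB.
  type ColorLookup func(v float64) (r, g, b float64)

  // Sphere lights every LED that falls inside a sphere of the given radius,
  // standing in for "use a 3D model and light the intersecting LEDs".
  func Sphere(radius float64) Generator {
    return func(p Vec3, _ float64) float64 {
      if math.Sqrt(p.X*p.X+p.Y*p.Y+p.Z*p.Z) <= radius {
        return 1.0
      }
      return 0.0
    }
  }

  // Strobe gates the intensity on and off at the given frequency in Hz.
  func Strobe(hz float64) Operator {
    return func(v, t float64) float64 {
      if math.Mod(t*hz, 1.0) < 0.5 {
        return v
      }
      return 0.0
    }
  }

  // Red is a trivial color lookup: the intensity drives the red channel only.
  func Red(v float64) (float64, float64, float64) { return v, 0, 0 }

  func main() {
    gen, ops, lut := Sphere(0.5), []Operator{Strobe(4)}, ColorLookup(Red)

    // Evaluate one pixel at one instant; the real pipeline does this for every
    // LED on every frame and hands the result to the LED driver.
    p, t := Vec3{X: 0.1, Y: 0.2, Z: 0}, 0.1
    v := gen(p, t)
    for _, op := range ops {
      v = op(v, t)
    }
    r, g, b := lut(v)
    fmt.Printf("pixel %+v -> r=%.2f g=%.2f b=%.2f\n", p, r, g, b)
  }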

It helps to start with the layout of the flow, since it makes it easy to figure out what software has to be written and what tools to use. In this case I decided on Go for a couple of different reasons.

  • I know Go. Use what you know; the intent here is not to build distributed software, but to make this cube look good.

  • Go has great SPI support. Go supports the SPI chip on the Raspberry Pi, the FT232H chip and the APA102 LED through periph.io. This makes it easy to prototype and move between different hardware stacks. I can do the prototyping on the Raspberry Pi, develop on my Mac and effortlessly move this to an Intel based Linux box without having to deal with compiling for different archs (a small sketch of this follows after the list).

  • Go is fast enough. Compared to Node.js and the other languages I evaluated, Go hits the sweet spot between rapid iteration and execution speed, which is needed to achieve a solid 200 FPS.

  • There are bindings for aubio and PortAudio. Getting audio in and performing analysis on it turns out to be very easy, with good bindings available for these two great libraries. Aubio itself provides the tools to analyze and make sense of a PCM audio stream, detecting interesting data like tempo, onset, beat and loudness. On top of that you also get a good FFT implementation. PortAudio makes it easy to acquire audio across all platforms, which helps the goal of portability between platforms.
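
As a rough idea of what the Raspberry Pi prototyping path looks like with periph.io, here’s a sketch that opens the first available SPI port at 20 MHz and shifts out one all-off frame for the 900-LED chain. The import paths and calls are the 2019-era periph.io ones as I understand them, so treat the exact names as an assumption to check against the docs; periph.io also has a dedicated apa102 driver, but raw SPI keeps the frame format from the earlier sketch explicit.

  package main

  import (
    "log"

    "periph.io/x/periph/conn/physic"
    "periph.io/x/periph/conn/spi"
    "periph.io/x/periph/conn/spi/spireg"
    "periph.io/x/periph/host"
  )

  func main() {
    // Load the host drivers (the Raspberry Pi's SPI in the prototype).
    if _, err := host.Init(); err != nil {
      log.Fatal(err)
    }

    // Open the first available SPI port and clock it at 20 MHz.
    port, err := spireg.Open("")
    if err != nil {
      log.Fatal(err)
    }
    defer port.Close()

    conn, err := port.Connect(20*physic.MegaHertz, spi.Mode0, 8)
    if err != nil {
      log.Fatal(err)
    }

    // Build an all-off frame for 900 LEDs using the APA102 layout sketched
    // earlier: start frame, 0xE0 headers, end-frame filler.
    const n = 900
    frame := make([]byte, 0, 4+4*n+64)
    frame = append(frame, 0x00, 0x00, 0x00, 0x00)
    for i := 0; i < n; i++ {
      frame = append(frame, 0xE0, 0x00, 0x00, 0x00)
    }
    for i := 0; i < 4+n/16; i++ {
      frame = append(frame, 0xFF)
    }

    // Shift the frame out; in the real pipeline this happens on every frame.
    if err := conn.Tx(frame, nil); err != nil {
      log.Fatal(err)
    }
  }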

Timeline

I’ve been working on this since late October and intend to finish this before June 30th 2019 in order to bring it to ANDERSTORPSFESTIVALEN. As you can see, at the time of writing this post the cube is not yet done. I will post updates here on the blog once more progress has been made. I will end this post with a video that I shot when we tested the cube:

January 19, 2019

Mt. Pepega

Pepega2

January 10, 2019

Adapting old lenses

While home in Sweden I stumbled upon a bunch of weird old lenses. Although these lenses are bad in every single regard in comparison to modern primes, their flaws actually contribute to a more interesting image. One thing I find lacking in modern lenses is character. The expensive Sony lenses are virtually perfect; the image comes out clean all throughout, and that turns out to be a bit boring. One lens in particular, the Meyer-Optik Görlitz Oreston 1.8 50, turned out to have an interesting profile to it. I’m not the first person to discover that old lenses take on a new life on modern sensors; there’s an entire community revolving around using old lenses.

A shot from earlier this morning in the San Francisco morning fog:

Morning Fog

Look at the distorted edges, weird bokeh and general color rendering. It’s not great, but it definitely has a unique profile to it.

Here it is mounted to the Sony A7 III with a K&F Concept adapter:

Meyer-Optik Görlitz Oreston 1.8 50

Two pictures which highlight the busy bokeh:

Busy Bokeh

Tree

December 22, 2018

ANDERSTORPSFESTIVALEN 4 Concept

Since the third iteration of this “imaginary festival” almost drowned in a pastel palette, it felt proper to try the opposite for this iteration, which is why we decided to go for an all black / white / gray theme this year. Last year had a much darker feeling than two years ago, with the party going much longer into the night and a harder theme overall, so representing this visually would be key to communicating our intentions.

Starting out with a simple moodboard, I put together something on the train in August that looked something like this:

affischtest

The goal here was to go for a true #000000, even if this is generally advised against. After considering this theme for a while, I asked Johanna to draw something that represented the previous year. One memory that stood out was when a visitor cut down a smaller tree with a large knife, under less than clear circumstances. Johanna started drawing this tree, with multiple iterations of the parts we wanted in the picture.

tree_demo

tree_demo2

After settling on a specific art style with a heavy focus on clean lines and dotted shadows, Johanna drew the final version of the tree and scanned it, and I then processed it in Illustrator to give it a cleaner pop, ending up like this:

dekandensen

The tree sort of unlocked the rest of the theme. We decided to double down on the “retro-computing” feeling for the festival theme, together with hand drawn illustrations representing the festival. So for the website I wanted to capture the feeling of just being there. I asked Johanna to draw me a nice moon and a bunch of different clouds in a similar style. After some experimentation it turned out the moon would work in a similar style, but the clouds ended up better in a shaded style. Johanna painted a bunch of clouds on a larger canvas, which I inverted and processed.

clouds

I put it all together on the webpage: moon, clouds, an animated star background and the tree. I found a kind of nifty way of animating/scaling these clouds. By defining the aspect ratio per cloud and applying an animation timeline, they scale perfectly all the way from mobile up to a full size desktop:

  // Cloud 1: a width relative to the viewport plus the image's aspect ratio.
  $x1-size: 50vw;
  $x1-ratio: 0.66;
  .x1 {
    @extend %arrow;  // shared placeholder selector, defined elsewhere in the stylesheet
    left: -20vw;     // start off-screen to the left
    animation: move-cloud 250s linear -5s infinite;  // move-cloud keyframes defined elsewhere

    // The height follows from the width, so the cloud keeps its proportions
    // at any viewport size.
    width: $x1-size;
    height: calc(#{$x1-size} * #{$x1-ratio});
    background-image: url("/assets/img/clouds/1.png");
  }

Each cloud has an offset and an animation timer, which gives the animation a fixed order but still a natural feel.

Here’s a demo of how it looks:

You can view the result at anderstorpsfestivalen.se.

November 21, 2018

DESERT GOLFING

desertheader

Desert Golfing is brilliant.

It’s brilliant in a way I don’t think I even fully comprehend yet, so here I am writing about it. Desert Golfing is a game about playing golf in a desert, basically exactly what the name indicates. It’s a procedurally generated map that goes on to what’s supposed to be “infinity” (even if it’s around 22500 holes in reality). What’s great about Desert Golfing is the fact that it’s one of the few mobile games that dare to be truly minimalistic. In an age of endless eye candy and glossy animations designed to hook you into consuming digital goods, Desert Golfing is a bland, silent and minimalistic experience. There is no menu, no music, no save/load, no leaderboards and no tutorial. The game lets the player figure it out without interrupting.


There’s something about playing this game alone, pulling off the sickest bouncing shot and watching as the ball perfectly bounces into the hole, while being unable to share it. We’re so used to the gamification of sharing that it’s almost unnatural to play a game where progress and success aren’t celebrated in some form. The most the game does to reward you is throw in a cactus at some point, without any direct functional impact on the game other than taking the joke further.

Desert Golfing shows that minimalism in game design works and is available on iOS, Android and PC. I highly recommend it on a mobile platform, as the brilliance really shines through when played in small doses, and the game has virtually no startup time.

November 4, 2018

grandview park