Stage Pieces for Bensbeendead.

There’s an unspoken tenet of the maker movement that demands calls for bespoke engineering work from friends always be answered. Maker projects for friends pay dividends in the net happiness injected into the world.

My ol’ pal Ben (a.k.a. Bensbeendead.) knows this kind of work is a favorite of mine. So when he asked, of course I jumped at the opportunity to design and manufacture some “elbows” that mount his laptop and controller atop RGB stage lighting.

Continue reading →

Won Pound by Won Pound is released!

This post is one in a series about GANce

Close-readers, twitter-followers and corporeal-comrades will have already beheld the good news that Won Pound by Won Pound has been released! This is Won’s second album-length project (the first, of course, being Post Space, released in 2018), and it graces listeners’ ears courtesy of Minaret Records, a California jazz label.

The record is accompanied by an album-length music video synthesized with GANce, containing a completely unique video for each track. These 93,960 frames have been the ultimate goal of this project since its inception, and serve as the best demonstration of what GANce can do. Within the full piece (linked below), the video for ‘buzzz’ is a personal favorite, demonstrating the three core elements of a projection file blend:

Continue reading →

GANce Overlays

This post is one in a series about GANce

As it stood, the three main features that would comprise the upcoming collaboration with Won Pound (slated for release mid-April) were:

  • Projection Files (using a styleGAN2 network to project each of the individual frames in a source video, resulting in a series of latent vectors that can be manipulated and fed back into the network to create synthetic videos)
  • Audio Blending (using alpha compositing to combine a frequency domain representation of an audio signal with a series of projected vectors; see the sketch just after this list)
  • Network Switching (feeding the same latent vector into multiple networks produced in the same training run, resulting in visually similar results)
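To make the blending step a bit more concrete, here is a minimal NumPy sketch of the idea. This is not GANce’s actual API: the latent size, blend weight, and function names are placeholders, and the real tool works with projection files rather than raw arrays.

```python
# Minimal sketch of the "audio blending" idea above -- NOT GANce's actual API.
# A projected latent vector per video frame is alpha-composited with a
# frequency-domain representation of the audio around that frame.
import numpy as np

LATENT_DIM = 512  # styleGAN2's default latent size (assumption)
ALPHA = 0.35      # blend weight for the audio term (hypothetical value)

def audio_spectrum_for_frame(samples: np.ndarray) -> np.ndarray:
    """Reduce one frame's worth of audio samples to a LATENT_DIM-long spectrum."""
    spectrum = np.abs(np.fft.rfft(samples))
    # Resample the spectrum onto the latent dimension and normalize it.
    resampled = np.interp(
        np.linspace(0, len(spectrum) - 1, LATENT_DIM),
        np.arange(len(spectrum)),
        spectrum,
    )
    return resampled / (np.max(resampled) + 1e-9)

def blend(projected_latents: np.ndarray, audio_frames: np.ndarray) -> np.ndarray:
    """Alpha-composite audio spectra onto the projection-file latents, frame by frame."""
    blended = [
        (1 - ALPHA) * latent + ALPHA * audio_spectrum_for_frame(samples)
        for latent, samples in zip(projected_latents, audio_frames)
    ]
    return np.stack(blended)

# Each blended vector is then fed back into the trained network, and (for
# "network switching") into sibling networks produced in the same training run.
```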

These features are detailed in the previous post; their combined effect can be seen in this demo:


Knowing we had enough runway to add another large feature to the project, and feeling particularly inspired following a visit to Clifford Ross’ exhibit at the Portland Museum of Art, I began exploring the relationship between the projection source video and the output images synthesized by the network.

Continue reading →

Introducing GANce

This post is one in a series about GANce

In collaboration with Won Pound for his forthcoming album release via minaret records, I was recently commissioned to lead an expedition into latent space, encountering intelligences of my own haphazard creation.

A word of warning:

This and subsequent posts, as well as the GitHub repositories etc., should be considered toy projects. Development thus far has been results-oriented, with my git HEAD following whatever was confusing and exciting. The goal was to make interesting artistic assets for Won’s release, with as little bandwidth as possible devoted to overthinking the engineering side. This is a fun role-reversal: typically the things that leave my studio look more like brushes than paintings. In publishing this work, the expected outcome is also inverted from my usual desire to share engineering techniques and methods; I hope that sharing the results shifts your perspective on the possible ways to bushwhack through latent space.

So, with that out of the way, the following post is a summary of development progress thus far. Here’s a demo:

There are a few repositories associated with this work:

  • GANce, the tool that creates the output images seen throughout this post.
  • Pitraiture, the utility to capture portraits for training.

If you’re uninterested in the hardware/software configurations for image capture and GPU work, feel free to skip to Synthesizing Images.

Continue reading →

The Silent Dripper

The first revision of this project was shipped in November of 2020, but the subsequent redesign was commissioned and completed the following summer, in 2021. This post is primarily a journey through that second revision, and its publication comes some time after the deliverable was shipped to the client.

Engineering requirements that arrive downstream from artistic intent are my favorite constraints to work within. They force the engineer to assume the role of the artist, considering the feelings and ideas the piece will communicate to its audience. The engineer also has to become an audience member, understanding how viewing will take place and whether the environment will change in ways the piece must respond to. The space between these two roles then needs to be projected into the standard space of product requirements: the weights, tolerances, latencies, etc. that are common in the profession.

As a part of my freelance practice, interdisciplinary artist Sara Dittrich and I recently collaborated on a series of projects, adding to our shared body of work. The most technically challenging part of these recent works was a component of her piece called The Tender Interval. I urge you to go read her documentation on this project; there is a great video overview as well.

Two performers sit at a table across from each other; above them is an IV stand with two containers full of water. Embedded in the table are two fingerprint sensors, one for each of the people seated at the table. Performers place their hands on the table, with their index fingers covering the sensors. Each time their heart beats, their container emits a single drop of water, which falls from above them into a glass placed next to them on the table. Once their glass fills, they drink the water. Optionally, virtual viewers can take the place of the second performer by sending commands on twitch that deposit water droplets into the second glass.

Design and manufacture of table and this insert were completed by Sara Dittrich
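For a sense of how the interaction comes together, here is a rough Python sketch of the heartbeat-to-drop loop. This is an assumption-laden illustration, not the piece’s actual firmware: it presumes a Raspberry Pi, a pulse sensor that raises a GPIO pin on each detected heartbeat, and a dripper that releases one drop when its control pin is pulsed; pin numbers and timings are placeholders.

```python
# Rough sketch of the interaction described above -- not the actual firmware.
# Assumes a Raspberry Pi, a pulse sensor that drives a GPIO pin high on each
# detected heartbeat, and a dripper that emits a single drop when its control
# pin is pulsed. Pin numbers and timing values are placeholders.
import time
import RPi.GPIO as GPIO

HEARTBEAT_PIN = 17   # input: goes high once per detected heartbeat (assumption)
DRIP_PIN = 27        # output: pulsing this releases a single drop (assumption)
DROP_PULSE_S = 0.05  # how long to energize the dripper per drop (placeholder)

GPIO.setmode(GPIO.BCM)
GPIO.setup(HEARTBEAT_PIN, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)
GPIO.setup(DRIP_PIN, GPIO.OUT, initial=GPIO.LOW)

def dispense_drop() -> None:
    """Release exactly one drop of water."""
    GPIO.output(DRIP_PIN, GPIO.HIGH)
    time.sleep(DROP_PULSE_S)
    GPIO.output(DRIP_PIN, GPIO.LOW)

def on_heartbeat(channel: int) -> None:
    # One drop per heartbeat; a twitch-command listener could call
    # dispense_drop() the same way to fill the second glass.
    dispense_drop()

GPIO.add_event_detect(HEARTBEAT_PIN, GPIO.RISING, callback=on_heartbeat, bouncetime=200)

try:
    while True:
        time.sleep(1)
finally:
    GPIO.cleanup()
```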

The device responsible for creating the water droplets (the dripper) ended up being a very technically demanding object to create. The principal cause of this difficulty was the requirement that it operate in complete silence. Since the first showings of this piece were done virtually due to the pandemic, we were able to punt on this problem and work around the noisy operating levels of V1 with strategic microphone placement. However, this piece would eventually be shown in a gallery setting, which would require totally silent operation.

The following is a feature overview and demonstration of the completed silent dripper:

If you’re interested in building one of these to add to your own projects, there is a GitHub organization that contains the relevant repositories.

Per usual, please send along photos of rebuilds of this project. Submit PRs if you have improvements, or open issues if you run into problems along the way.

The rest of this post will be a deep dive into earlier iterations of this project, and a closer look at the design details and challenges of the final revision. It’s easier to understand why a second iteration was needed after reviewing the shortcomings of version 1, so that’s where we’ll start.

Continue reading →