This post is one in a series about GANce.
As it stood, the three main features that would comprise the upcoming collaboration with Won Pound (slated for release mid-April) were:
- Projection Files (using a StyleGAN2 network to project each frame of a source video, yielding a series of latent vectors that can be manipulated and fed back into the network to create synthetic videos)
- Audio Blending (using alpha compositing to combine a frequency domain representation of an audio signal with a series of projected vectors)
- Network Switching (feeding the same latent vector into multiple networks produced in the same training run, producing visually similar outputs)
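The audio-blending step can be sketched as a simple alpha composite in latent space. This is a minimal illustration with toy data, not the GANce implementation: the function name, array shapes, and the assumption that the per-frame audio spectrum has been resampled to the latent dimension are all hypothetical.

```python
import numpy as np

def blend_audio_with_latents(latents, audio_fft, alpha=0.5):
    """Alpha-composite a frequency-domain audio signal onto projected latents.

    latents:   (num_frames, 512) array of projected latent vectors
    audio_fft: (num_frames, 512) array of per-frame spectral magnitudes,
               assumed resampled to the latent dimension (hypothetical shape)
    alpha:     blend weight; 0 -> pure latents, 1 -> pure audio
    """
    return (1.0 - alpha) * latents + alpha * audio_fft

# Toy data standing in for real projection files and audio analysis.
rng = np.random.default_rng(0)
latents = rng.standard_normal((10, 512))
audio = rng.standard_normal((10, 512))
blended = blend_audio_with_latents(latents, audio, alpha=0.25)
print(blended.shape)  # (10, 512)
```

Each blended row can then be fed back into the network like any other latent vector.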
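Network switching can likewise be sketched as choosing, per frame, which of several networks synthesizes the image from the same latent. The stand-in "networks" below are toy functions; in practice they would be checkpoints from the same StyleGAN2 training run, and the scheduling scheme shown is purely illustrative.

```python
import numpy as np

# Stand-in "networks": toy functions of the latent. Real networks would be
# StyleGAN2 checkpoints mapping a latent vector to an image.
net_a = lambda z: np.tanh(z)
net_b = lambda z: np.tanh(z * 2.0)

def render_with_switching(latents, networks, schedule):
    """For each frame, synthesize from the network index given by the schedule."""
    return [networks[schedule[i]](z) for i, z in enumerate(latents)]

latents = np.random.default_rng(1).standard_normal((4, 512))
frames = render_with_switching(latents, [net_a, net_b], [0, 0, 1, 1])
print(len(frames))  # 4
```

Because the checkpoints come from one training run, adjacent frames stay visually coherent even when the schedule switches networks mid-sequence.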
These features are detailed in the previous post; their combined effect can be seen in this demo:
Knowing we had enough runway to add another large feature to the project, and feeling particularly inspired following a visit to Clifford Ross’ exhibit at the Portland Museum of Art, I began exploring the relationship between the projection source video and the output images synthesized by the network.