In the conclusion of my previous homelab post, I pled to the eBay gods, begging for a 4xP100 system. My prayers were heard, possibly by a malevolent spirit, as a V100 16GB for $400 surfaced. It was more money than I’d be willing to spend on a P100, but the cheapest I’d ever seen a V100, so I fell to temptation. To use all four cards, I needed something bigger than the Rosewill RSV-R4100U. Enter the OpenBenchTable, and some 3D-printed parts I designed to securely mount four compute GPUs:
Hardware for Engineering Stream
My beloved blog post still sits on the throne as the most effective format for documenting engineering projects. To me, inlining code, photographs, CAD models and schematics in an interactive way trumps other mediums. This level of interactivity closes the gap between reader and material, allowing an independent relationship with the subject outside of the story being told by the author.
Working on stream to an online audience has a similar effect, the unedited and interactive format yielding a real understanding of the creator’s process and technique.
For a while there, I’d settled into a nice habit of broadcasting project development live on Twitch. Two moves later, things have settled down enough in my personal life that I feel it’s time to try to get back into this habit.
Before we get started again, I took some time to improve the ergonomics (both hardware and software) of my stream setup. The following documents a few smaller projects, all in service of these upcoming broadcasts.
Won Pound by Won Pound is released!
This post is one in a series about GANce
Close-readers, Twitter-followers and corporeal-comrades will have already beheld the good news that Won Pound by Won Pound has been released! This is Won’s second album-length project (the first of course being Post Space, released in 2018), and it graces listeners’ ears courtesy of Minaret Records, a California jazz label.
The record is accompanied by an album-length music video synthesized with GANce, containing a completely unique video for each track. These 93,960 frames have been the ultimate goal of this project since its inception, and serve as the best demonstration of what GANce can do. Within the video (linked below), the video for ‘buzzz’ is a personal favorite, demonstrating the three core elements of a projection file blend:
GANce Overlays
This post is one in a series about GANce
As it stood, the three main features that would comprise the upcoming collaboration with Won Pound (slated for release mid-April) were:
- Projection Files (using a StyleGAN2 network to project each of the individual frames in a source video, resulting in a series of latent vectors that can be manipulated and fed back into the network to create synthetic videos)
- Audio Blending (using alpha compositing to combine a frequency-domain representation of an audio signal with a series of projected vectors; see the sketch after this list)
- Network Switching (feeding the same latent vector into multiple networks produced in the same training run, resulting in visually similar results)
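To make the audio blending idea a little more concrete, here is a rough sketch of the kind of operation involved. The array names, shapes, and alpha value are illustrative assumptions, not GANce’s actual implementation:

import numpy as np

# Hypothetical shapes: one 512-element latent vector per frame of video.
projected_latents = np.random.rand(1000, 512)  # read from a projection file
audio_features = np.random.rand(1000, 512)     # per-frame frequency-domain audio, scaled to [0, 1]

# Alpha-composite the audio signal onto the projected latent vectors, frame by frame.
alpha = 0.35
blended_latents = (1 - alpha) * projected_latents + alpha * audio_features

# Each row of blended_latents would then be fed into the network (or one of
# several networks, in the case of network switching) to synthesize an output frame.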
These features are detailed in the previous post, and their combined effect can be seen in this demo:
Knowing we had enough runway to add another large feature to the project, and feeling particularly inspired following a visit to Clifford Ross’ exhibit at the Portland Museum of Art, I began exploring the relationship between the projection source video and the output images synthesized by the network.
Introducing GANce
This post is one in a series about GANce
In collaboration with Won Pound for his forthcoming album release via Minaret Records, I was recently commissioned to lead an expedition into latent space, encountering intelligences of my own haphazard creation.
A word of warning:
This and subsequent posts, as well as the GitHub etc., should be considered toy projects. Development thus far has been results-oriented, with my git HEAD chasing whatever seemed confusing and exciting. The goal was to make interesting artistic assets for Won’s release, with as little bandwidth as possible devoted to overthinking the engineering side. This is a fun role-reversal; typically, the things that leave my studio look more like brushes than paintings. In publishing this work, the expected outcome is also inverted from my typical desire to share engineering techniques and methods; I hope that sharing the results shifts your perspective on the possible ways to bushwhack through latent space.
So, with that out of the way, the following post is a summary of development progress thus far. Here’s a demo:
There are a few repositories associated with this work:
- GANce, the tool that creates the output images seen throughout this post.
- Pitraiture, the utility to capture portraits for training.
If you’re uninterested in the hardware/software configurations for image capture and GPU work, you should skip to Synthesizing Images.
Forcing a screen resolution of an Ubuntu guest OS in VirtualBox
I figured that doing this would be trivial, but it turns out it took a little work:
I’m trying to emulate an official 7″ Raspberry Pi Touch Display in a VM, so for this post the target resolution is 800 x 480. If you want a different resolution, swap in yours for the rest of this guide.
First, make sure Auto-resize Guest Display is deselected in VirtualBox:
Run the following command in your terminal:
cvt 800 480 60
The output should look something like the following, starting with Modeline:
…
Copy the text after Modeline, so in this case it would be:
"800x480_60.00" 29.50 800 824 896 992 480 483 493 500 -hsync +vsync
And paste it after the following command:
xrandr --newmode
NOTE! You may want to change the 800x480_60.00 to something without an underscore in it; the underscore was causing problems on my system. I changed it to pidisplay. The resulting command for this example is:
xrandr --newmode "pidisplay" 29.50 800 824 896 992 480 483 493 500 -hsync +vsync
You should be able to run the above command without error. Next, run:
xrandr -q
You’ll be greeted with output similar to this. Note the name of the display device, in this case VGA-1.
With that output name, enter the following two commands:
xrandr --addmode VGA-1 pidisplay
xrandr --output VGA-1 --mode pidisplay
After running that second command, the window should jump to its new resolution! You’re done!
Automatically run Electron application at reboot on Raspberry Pi
Here is a quick way to have an application built on Electron run at boot on a Raspberry Pi. This worked for me running Raspbian Stretch with Desktop.
Edit /home/pi/.config/lxsession/LXDE-pi/autostart with nano:
sudo nano /home/pi/.config/lxsession/LXDE-pi/autostart
Add the following line:
@npm start --prefix /path/to/project/
The file should now look something like this:
@lxpanel --profile LXDE-pi
@pcmanfm --desktop --profile LXDE-pi
@xscreensaver -no-splash
@point-rpi
@npm start --prefix /path/to/project/
Save and exit nano and reboot. Your app should open after the desktop environment loads. Yay!
If you want access to the terminal output of your application, install screen with:
sudo apt-get install screen
And then swap:
@npm start --prefix /path/to/project/
For:
@screen -d -m npm start --prefix /path/to/project/
In the above code snippets.
After the Pi boots, you can run screen -list to see what screens are available to attach to, then attach to yours with screen -r yourscreen. Here’s an example:
Press enter, and then see your terminal output.
For more info on how to use screen, check out this link:
https://www.gnu.org/software/screen/manual/screen.html
Electron cannot be started from an SSH session
Update: If you run export DISPLAY=:0 in the terminal prior to npm start, the application runs just fine on the remote device. Thank you, Alex!
https://twitter.com/alexbragdon/status/915601277752573954
In working on a project for work, I figured out the hard way that Electron has to be started from a terminal session on your target device (i.e. the computer it is to be viewed on). I am developing an embedded system based on the Raspberry Pi that does not take user input but displays information on a screen.
Upon downloading the electron-quick-start example, everything installs correctly without error, and this can be done remotely via SSH. Upon running npm start, the following error is thrown:
> electron-quick-start@1.0.0 start /home/pi/electron-quick-start
> electron .

npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! electron-quick-start@1.0.0 start: `electron .`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the electron-quick-start@1.0.0 start script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /home/pi/.npm/_logs/2017-10-04T15_11_16_398Z-debug.log
I spent most of the evening trying to debug npm ERR! code ELIFECYCLE to no avail. On a lark, I connected a keyboard to the device and ran npm start, and it ran without error. Sigh.
The remote development alternative is to use Remote Desktop Connection, a client that comes bundled with Windows. The server software can be installed on the remote system (the Raspberry Pi) using apt-get install xrdp. Upon connecting, opening up a shell in the RDP client, and running npm start, the example application works just fine.
StripPi – Software Demo, Roadkill Electronics
I’m constantly losing the remote for my RGB LED strip lights, and I have a few days off for spring break, so it’s time to get hacking. Here’s a demo and explanation video:
I don’t mention it in the video, but the cool part of this project is how the different processes communicate with each other. Rather than interacting with the different processes through pipes, or something like stdin, I’ve decided to use a TCP socket server:
Processes on the device send RGB values to the Strip Server via a TCP packet. This is very easy to implement, and almost all of the hard work is taken care of by the socketserver module included in Python 3. This also allows interactions with the main process (the StripPi Server process) to take place off of the Raspberry Pi as well. I plan on writing an Alexa integration for this project moving forward, and this should make that a lot easier.
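As a rough illustration of that architecture, here is a minimal socketserver sketch, assuming a simple line-based "r,g,b" protocol and a hypothetical set_strip_color() stand-in for the code that actually drives the LEDs. This is not the actual StripPi implementation:

import socketserver


def set_strip_color(red, green, blue):
    # Placeholder for the code that actually drives the LED strip.
    print("Setting strip to ({}, {}, {})".format(red, green, blue))


class RGBHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Expect a single line like "255,128,0" per connection.
        line = self.rfile.readline().decode().strip()
        try:
            red, green, blue = (int(part) for part in line.split(","))
        except ValueError:
            return  # ignore malformed packets
        set_strip_color(red, green, blue)


if __name__ == "__main__":
    # Any process, local or remote, can push a color by connecting to port 9999
    # and writing a single "r,g,b" line.
    server = socketserver.TCPServer(("0.0.0.0", 9999), RGBHandler)
    server.serve_forever()

Because clients only need to open a socket and write one line, moving an integration (like the planned Alexa skill) off the Pi doesn’t change anything about the server.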
The analog-to-digital conversion is handled by an MCP3008, exactly the same way as I did it here.
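For context, the usual way to read an MCP3008 from Python is over SPI with the spidev module. The sketch below is a generic version of that pattern (the SPI bus/device numbers are assumptions), not the exact code from the linked post:

import spidev


def read_mcp3008(channel, bus=0, device=0):
    # Read one of the MCP3008's eight channels (0-7), returning a value from 0-1023.
    spi = spidev.SpiDev()
    spi.open(bus, device)
    spi.max_speed_hz = 1350000

    # Standard MCP3008 transaction: start bit, single-ended mode + channel, then padding.
    reply = spi.xfer2([1, (8 + channel) << 4, 0])
    spi.close()

    # The 10-bit result is split across the last two bytes of the reply.
    return ((reply[1] & 3) << 8) | reply[2]


print(read_mcp3008(channel=0))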
Thanks for reading, more soon.
Raspberry Pi Digital Hourglass
Trying to get the most out of a day has been a big theme of my life lately, as I’m sure it is for many people. I’ve found that I always manage my time better when things are urgent; I’m considerably more productive when I have to be.
I want an aesthetically pleasing way to represent how much time is left in the day at a granular scale, like an hourglass. Watching individual seconds disappear will look cool and (hopefully) create that sense of urgency that I want to induce.
Technically, this is a really simple thing to accomplish thanks to python and pygame. Here’s a video of a proof of concept running on my laptop:
At the start of each day, the display is filled with squares at random locations, with a random color. As each second elapses, a square will vanish.
To make it easier to see for the video, I’ve made the squares much bigger than they will actually be for the final build. This is what the display looks like with the squares at their actual size:
The code is really, really simple, like less-than-100-lines simple. Here’s how it works:
Here’s the version of the code running on my computer in the video:
import pygame
from random import randint
from apscheduler.schedulers.blocking import BlockingScheduler
import datetime


class random_square(object):

    def __init__(self, max_x_location, max_y_location):
        self.x_loc = randint(0, max_x_location)
        self.y_loc = randint(0, max_y_location)

        max_color_value = 255
        red = randint(0, max_color_value)
        green = randint(0, max_color_value)
        blue = randint(0, max_color_value)
        self.color = [red, green, blue]


class clock(object):

    def __init__(self, initial_count, max_count, screen_w, screen_h):
        self.max_count = max_count
        self.screen_w = screen_w
        self.screen_h = screen_h

        # create the screen object, force pygame fullscreen mode
        self.screen = pygame.display.set_mode([screen_w, screen_h], pygame.FULLSCREEN)

        # the screen's width in pixels is stored in the 0th element of the array
        self.square_size = screen_w / 200

        # create the list of squares, initially as empty
        self.squares = []

        # fill the squares with the initial seconds until midnight
        for second in range(initial_count):
            self.squares.append(random_square(screen_w, screen_h))

    # starts ticking the clock
    def start(self):
        scheduler = BlockingScheduler()
        scheduler.add_job(self.tick, 'interval', seconds=1)
        try:
            scheduler.start()
        except (KeyboardInterrupt, SystemExit):
            pass

    # this occurs once every time a unit of time elapses
    def tick(self):
        # this will happen once per "day"
        if len(self.squares) == 0:
            # fill the list of squares to be drawn
            for second in range(self.max_count):
                self.squares.append(random_square(self.screen_w, self.screen_h))

        # draw a blank screen
        self.screen.fill([0, 0, 0])

        # draw the squares
        for square in self.squares:
            rect = (square.x_loc, square.y_loc, self.square_size, self.square_size)
            pygame.draw.rect(self.screen, square.color, rect, 0)

        pygame.display.update()

        # remove a single square from the list as one tick has elapsed
        self.squares.pop()


# initialize pygame
pygame.init()

# figure out the parameters of the display we're connected to
screen_width = pygame.display.Info().current_w
screen_height = pygame.display.Info().current_h
screen_size = screen_width, screen_height

# determine the number of seconds until midnight
seconds_in_a_day = 86400
now = datetime.datetime.now()
midnight = now.replace(hour=0, minute=0, second=0, microsecond=0)
seconds_until_midnight = seconds_in_a_day - (now - midnight).seconds

# create and start the clock!
cl = clock(seconds_until_midnight, seconds_in_a_day, screen_width, screen_height)
cl.start()
Let’s walk through some of the design decisions of this code. The first thing that’s worth talking about is how the data for the squares is handled:
class random_square(object):

    def __init__(self, max_x_location, max_y_location):
        self.x_loc = randint(0, max_x_location)
        self.y_loc = randint(0, max_y_location)

        max_color_value = 255
        red = randint(0, max_color_value)
        green = randint(0, max_color_value)
        blue = randint(0, max_color_value)
        self.color = [red, green, blue]
It’s just an object with no methods, and on initialization, all the parameters of the square (location and color) are generated randomly, as opposed to just floating the raw numbers around in arrays (even though that’s basically what is happening). This lets us fill the squares array very easily later on in the file here:
# fill the squares with the initial seconds until midnight
for second in range(initial_count):
    self.squares.append(random_square(screen_w, screen_h))
and here:
# this will happen once per "day"
if len(self.squares) == 0:
    # fill the list of squares to be drawn
    for second in range(self.max_count):
        self.squares.append(random_square(self.screen_w, self.screen_h))
When it comes time to draw these squares, it also makes that pretty intuitive:
# draw the squares
for square in self.squares:
    rect = (square.x_loc, square.y_loc, self.square_size, self.square_size)
    pygame.draw.rect(self.screen, square.color, rect, 0)
Again, very simple stuff, but worth talking about.
I’ll soon be back at my place, where the Raspberry Pi and display I’d like to use for this project live, so more on this then.
Thanks for reading!