First here’s a video of me demonstrating a few of the new features:
Compared to the original version of this project, the changes are as follows:
- Added a function that takes an image from an incoming tweet, finds the most common color in the image, and writes it to the LEDs.
- Added fading between colors instead of just jumping between them.
- Added a routine to respond to users when there's an error in their tweet, like a missing color or a misspelled word.
- Rewrote most of the code as an object with methods on it, to get rid of global variables.
A few notes on the new features:
The image ingestion feature is simple to use. All you have to do is tweet an image at @heywpi just like you would with text. The bot finds the most common color in the image and then writes it to the LEDs. Here's an example:
Input:
@heywpi pic.twitter.com/YkalJFSQmS
— Devon Bray (@eso_logic) April 2, 2015
Output:
Thanks, @eso_logic! The image you sent had R: 0, G: 1, B: 32 As the most common color. Writing this to the LEDs! pic.twitter.com/XleQk39vUO — HeyWPI (@heywpi) April 2, 2015
It works pretty well. If you look at the code, you'll see that I tried to make it as modular as I could, so I can improve the color detection algorithm down the road without making major changes to the rest of the code. This required the system to have some kind of memory to keep track of the current values written to the LEDs. Originally I used global variables for this, but that wasn't very clean, so I restructured things to be more object oriented.
As for the fading: you can sort of see it in the video, but the fades between colors look really nice, especially when fading between contrasting colors like purple and orange.
A big problem I had with other people using the project was that sometimes they would request an invalid color. I implemented a default message to send back whenever a received tweet had neither a recognizable color in the text nor an image attached.