YangGAN: Putting Andrew Yang’s Essence into AI

This motion you see of Andrew, squishing around above, is what's called a latent space walk of a generative adversarial network, or GAN. GANs are super duper powerful sponges of data: points plotted in 512-dimensional space. Recall that most of us fail to think well in 3-dimensional space most of the time. So 512 dimensions is way out there. (Don't think about it too long, or blood will shoot out your nose.)

Effectively, by "spatially" moving from one Andrew-Yang-head data point in 512-dimensional space, along a Yang vector, to another Andrew-Yang-head data point, the image morphs. It's how we will move virtual characters soon enough.
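If you're curious what that walk looks like in code, here's a minimal sketch of the interpolation step, assuming NumPy; the `generator` call in the comment is hypothetical, standing in for the trained StyleGAN2 model.

```python
import numpy as np

LATENT_DIM = 512  # StyleGAN2's default latent size

def latent_walk(z_start, z_end, steps=60):
    """Yield latent vectors linearly interpolated between two endpoints.

    Feeding each interpolated vector to the generator produces one
    frame of the morphing animation.
    """
    for t in np.linspace(0.0, 1.0, steps):
        yield (1.0 - t) * z_start + t * z_end

rng = np.random.default_rng(seed=0)
z_a = rng.standard_normal(LATENT_DIM)  # one "Andrew head" point
z_b = rng.standard_normal(LATENT_DIM)  # another "Andrew head" point

for frame_index, z in enumerate(latent_walk(z_a, z_b)):
    # image = generator(z)  # hypothetical call into the trained model
    pass
```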

This particular flavor of GAN is called StyleGAN2, from NVIDIA. I lovingly trained it with 6,000 headshots from Andrew Yang interviews. All I needed to do to collect this data was scrape his YouTube channel and then batch export the videos as individual frames. Using the Python tool autocrop, I was able to very quickly amass 15,000 frames of Andrew Yang from the chest up. I culled that down to about 6,000, throwing away images that didn't fit nicely in the box with most of his face facing the screen.
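Here's a rough sketch of that pipeline, assuming OpenCV for the frame dump; the file names are placeholders, and the Cropper arguments are from memory, so double-check autocrop's docs.

```python
import cv2
from autocrop import Cropper  # pip install autocrop

def dump_frames(video_path, out_dir, every_n=30):
    """Save every Nth frame of a video as a numbered PNG."""
    cap = cv2.VideoCapture(video_path)
    index = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % every_n == 0:
            cv2.imwrite(f"{out_dir}/frame_{saved:05d}.png", frame)
            saved += 1
        index += 1
    cap.release()

dump_frames("yang_interview.mp4", "frames")  # placeholder paths

# Crop a frame to a square centered on the detected face.
cropper = Cropper(width=512, height=512)
face = cropper.crop("frames/frame_00000.png")  # returns None if no face found
if face is not None:
    cv2.imwrite("cropped/frame_00000.png", cv2.cvtColor(face, cv2.COLOR_RGB2BGR))
```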

In prototype versions of the GANs I've made, I discovered the color palette was all over the place. On this GAN, I effectively had to "normalize," or limit, the range of the colors. After experimenting, I stumbled on a look and batch processed the training frames with a Nintendo Game Boy color filter.

Game Boy colors because I love Super Mario Land.
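If you want to reproduce that kind of palette clamp, here's a minimal sketch using Pillow; the four RGB triples approximate the original Game Boy's green shades, and the paths are placeholders.

```python
from PIL import Image  # pip install Pillow

# Approximate RGB values for the original Game Boy's four green shades.
GAMEBOY_PALETTE = [(15, 56, 15), (48, 98, 48), (139, 172, 15), (155, 188, 15)]

def gameboy_filter(path_in, path_out):
    """Quantize an image down to the four-shade Game Boy palette."""
    # Build a palette image Pillow can quantize against (must be mode "P").
    pal_img = Image.new("P", (1, 1))
    flat = [channel for rgb in GAMEBOY_PALETTE for channel in rgb]
    pal_img.putpalette(flat + [0, 0, 0] * (256 - len(GAMEBOY_PALETTE)))

    img = Image.open(path_in).convert("RGB")
    img.quantize(palette=pal_img).convert("RGB").save(path_out)

gameboy_filter("frames/frame_00000.png", "filtered/frame_00000.png")
```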

I did my GAN development with RunwayML, a nifty little program that lets me focus on running models instead of drinking my face off when my Python dependencies don't install.

OK, so why the YangGAN?

Jokes aside…

Generative adversarial networks, computer vision, and networks of computers collectively rendering will revolutionize computer graphics. We will create near-reality very soon, and while we're doing it, destroy the need for human labor under our current ways of creating value. We should be talking openly about the dangers of artificial intelligence and economic collapse.

I believe Universal Basic Income is the most realistic thing we can collectively do as a country to save ourselves.

Our life expectancy is dropping, we're fighting our neighbors, and we're letting our worst selves consume us. We actually need to do something.

I'm asking you to please investigate universal basic income. Andrew's organization is the best one I see going right now. #yanggang baby.

Links and References

For more information about Andrew Yang and his efforts at Humanity Forward, please visit: https://movehumanityforward.com/

Time did a semi-OK mainstream piece on it:

https://time.com/4737956/universal-basic-income/

This is a bit heady, but Ian Goodfellow is the guy who more or less put a generator and a discriminator together to create the concept of generative adversarial networks: https://www.youtube.com/watch?v=Z6rxF…
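For the mathematically inclined, the core idea from Goodfellow's 2014 paper is a two-player minimax game between the generator G and the discriminator D:

```latex
\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}[\log D(x)]
  + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]
```

D tries to tell real images from fakes, while G tries to fool it; training pushes G(z) toward the real data distribution.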

You should download RunwayML and play with models yourself: http://runwayml.com

Also, I stole the "blood shoot out of your nose" bit from Lewis Black.

More of my Machine Learning work will eventually be at http://nytrogen.ai
