Discovering Machine Learning Art Production with RunwayML

Here are my first experiments with machine learning art in RunwayML. I’ve also been taking Artificial Images: RunwayML Deep Dive with Derrick Schultz, which has been both a helpful community and a great resource. If you’re interested in this stuff, you should follow his YouTube channel.

This represents part-time work over the last two weeks. Some of my experiments, with hypotheses, are below. It’s super fun!


Incredible nyeGAN

Hypothesis/Purpose:

“I wonder if I could make one of those ‘smooshy-things’ I see on Twitter?”

Approach:

I recorded 3 minutes of funny faces at the camera, using OBS Studio to capture the video. I brought the footage into After Effects at 30 frames per second and exported the frames into a “data” folder in my AE project structure.
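If you’d rather skip the After Effects round trip for this step, the same frame dump can be done with ffmpeg. Here’s a minimal sketch in Python (the input filename and output folder are placeholders, not the actual project files):

```python
# Minimal sketch: dump a recorded video to 30 fps PNG frames for training.
# Assumes ffmpeg is installed and on the PATH; "funny_faces.mp4" and "data/"
# are placeholder names, not files from this project.
import subprocess
from pathlib import Path

src = "funny_faces.mp4"               # OBS Studio recording (placeholder)
out_dir = Path("data")                # mirrors the "data" folder in the AE project
out_dir.mkdir(exist_ok=True)

subprocess.run([
    "ffmpeg",
    "-i", src,                        # input video
    "-r", "30",                       # resample output to 30 frames per second
    str(out_dir / "frame_%05d.png"),  # numbered PNG sequence
], check=True)
```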

I uploaded the frames to Runway’s StyleGAN2 training, starting from the Faces dataset. I let it run for 5,000 steps, which took about 5 hours on their servers.

I then edited the opening moments of The Incredibles and output the frames from After Effects. There were about 1,200 frames; I roughly edited out some of the parts where Mr. Incredible wasn’t directly on camera.

I fed those frames into my nyeGAN using the training section. I let it cook for about 2 hours, but stopped it when I saw that Mr. Incredible was more “overwriting” than “blending” with my original frames.

Data:

Footage A: 3 minutes of me making funny faces at the camera

Footage B: The opening interview sequence of Disney/Pixar’s The Incredibles.

Conclusion:

Training on a lot of very similar data, like my funny-faces video, doesn’t really make for interesting output. I was also surprised that the two datasets didn’t blend more.

For my next GAN experiment, I want to scrape datasets of similar things and actually think about their diversity and makeup.


Some Pumped Up Motion Tests

Hypothesis/Purpose:

I want to see what comes out of the motion models.

I’ve seen work online using DensePose and the First Order Motion Model that I wanted to replicate. Eventually, I want to use something like pix2pix to “puppet” characters.

Approach:

Using the awesome work of Marquese Scott (Twitter: https://twitter.com/officialwhzgud), I ripped his “Pumped Up” video from YouTube using 4K Video Download. I exported a section of frames from After Effects and ran them through a workspace in Runway.

I also took imagery of Goofy from the internet and painted over a single frame of the video to test First Order Motion with a full body, plus Liquid Warping, which seemed worth a shot.

Conclusion:

OpenPifPaf tracked the video well. PoseNet doesn’t render video out of Runway, but I found its exported JSON data was a bit better. First Order Motion seems to be tuned in Runway for faces and didn’t quite work for the full body.

I like working with PoseNet, though I’d love for it to render video straight out of Runway.

My most successful takeaway was the PoseNet export. The timecode and positions are normalized between 0 and 1, and within that space it gives a series of 17 keypoint positions as X and Y data.

How do I get that data into an animation program?
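I don’t have a clean answer yet, but here’s a rough sketch of the idea: read the JSON, scale the normalized keypoints back up to pixels, and write something an animation package can ingest (a plain CSV here). The field names below are a guess at the export’s layout, not Runway’s documented format:

```python
# Rough sketch only: convert a PoseNet-style JSON export into a CSV of
# per-frame keypoint positions in pixels. The layout assumed here (a list of
# frames, each holding 17 {"part", "x", "y"} keypoints normalized 0-1) is a
# guess at the export format, not a documented spec.
import csv
import json

COMP_W, COMP_H = 512, 512                    # comp size in pixels (assumption)

with open("posenet_export.json") as f:       # placeholder filename
    frames = json.load(f)

with open("keypoints.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["frame", "part", "x_px", "y_px"])
    for i, frame in enumerate(frames):
        for kp in frame["keypoints"]:
            writer.writerow([
                i,
                kp["part"],                  # e.g. "leftWrist"
                kp["x"] * COMP_W,            # de-normalize to pixels
                kp["y"] * COMP_H,
            ])
```

From a CSV like that, the positions could in principle drive nulls in After Effects through scripting or expressions, but I haven’t wired that part up yet.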

BalletDancin’

Hypothesis/Purpose:

Can I normalize erratic movement? Can I render Runway output into my After Effects pipeline? If I can get the X and Y positions, what ways can I get Z-axis data?

Approach:

I found a ballet clip on YouTube, “The Top 15 Female Ballet Dancers.” I wanted to isolate an individual who was moving around the stage. I used basic tracking in After Effects and adjusted some keys to pin her to a red crosshair in a 512 x 512 area. Basically, I centered her in the frame to be normalized.

I then ran it through PifPaf and DenseDepth. My purpose with DenseDepth was to see if I could get any sort of Z data.
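One hedged idea for the Z question: if DenseDepth hands back a depth image per frame, sample it at each PoseNet keypoint. A minimal sketch, assuming the depth export is a grayscale image with the same aspect as the source and that pixel value maps monotonically to depth (both assumptions on my part, not documented behavior):

```python
# Hedged sketch: pair normalized PoseNet keypoints with a DenseDepth frame
# to get a rough Z value per joint. Filenames and the grayscale-depth
# assumption are mine, not Runway's spec.
from PIL import Image

def keypoint_depth(depth_image_path, keypoints):
    """keypoints: list of (x, y) pairs normalized 0-1."""
    depth = Image.open(depth_image_path).convert("L")   # grayscale depth map
    w, h = depth.size
    samples = []
    for x, y in keypoints:
        px = min(int(x * w), w - 1)                     # clamp to image bounds
        py = min(int(y * h), h - 1)
        z = depth.getpixel((px, py)) / 255.0            # crude 0-1 "Z" value
        samples.append((x, y, z))
    return samples

# Example with made-up keypoints for one frame:
print(keypoint_depth("densedepth_frame_0001.png", [(0.5, 0.2), (0.48, 0.6)]))
```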

Conclusion:

The pipeline works on the Runway side. I still need to figure out how to get the data into animation software.

Spiderman and his amazing GANs

Hello World!

Purpose:

StyleGANs (with Picasso or Van Gogh) are kind of the “Hello World” of machine learning art. Runway makes them easy to try.

Approach:

I ripped three ’80s cartoon openings from YouTube. I then chose preset StyleGANs in Runway and fired them through. They took about 10 to 15 minutes apiece.

Conclusion:

A necessary exercise to get into the software, and a great way to understand what the models are doing behind the scenes. Here are a couple of images from this little exercise that I felt were fairly successful. The other two cartoon opens are on the nytrogen YouTube channel.

Machine Learning is a scream!