Category: Portfolio

Select Works.


  • Game Development

    As an animator who started in the Flash days, I have always been interested in game development and gamification; animation naturally aligns with it. Here are a few projects I have worked on independently.


    Bloopco (2015)

    In early 2015, I was prototyping game mechanics that I believed had the potential to reduce anxiety. My technical partner Josh Reynolds and I began prototyping a biofeedback game mechanic. We discovered that by reading heart-rate input, we could recognize a deep, mindful breath. This became the basis for Bloopco, a health and mindfulness game company. We began building our first game, deciding to develop for the newly launched Apple Watch. Using Zen and archery as a narrative backdrop, we built a game called ‘Way of the Bow.’
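    How do you recognize a mindful breath from a watch’s heart-rate sensor? The short version: heart rate climbs on a slow inhale and dips on the exhale (an effect called respiratory sinus arrhythmia), so a pronounced rise-then-fall swing over several seconds is a decent proxy. Our actual detection code isn’t public, so below is only a minimal sketch of the idea, with hypothetical thresholds and an assumed one-sample-per-second feed.

    ```python
    # Minimal sketch of spotting a deep, mindful breath in heart-rate data.
    # Heart rate climbs on a slow inhale and dips on the exhale, so we look
    # for a rise-then-fall swing. Thresholds are hypothetical, not Bloopco's.
    from collections import deque

    WINDOW_SECONDS = 10   # assume ~1 heart-rate sample per second
    SWING_BPM = 6.0       # hypothetical amplitude for a "deep" breath

    def is_deep_breath(samples) -> bool:
        """True if the window shows a rise-then-fall swing of SWING_BPM."""
        if len(samples) < WINDOW_SECONDS:
            return False
        values = list(samples)
        peak = max(range(len(values)), key=lambda i: values[i])
        rise = values[peak] - min(values[:peak + 1])
        fall = values[peak] - min(values[peak:])
        return rise >= SWING_BPM and fall >= SWING_BPM

    # Feed it a rolling window of BPM readings from the watch:
    window = deque(maxlen=WINDOW_SECONDS)
    for bpm in [62, 63, 65, 68, 71, 73, 70, 66, 63, 61]:  # synthetic breath arc
        window.append(bpm)
    print(is_deep_breath(window))  # True for this synthetic curve
    ```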

    With medical consulting from Sarah Lopez, a USC physician and researcher, and art concepts from Bill Green, we were able to complete the basic game and launch it on the App Store in the fall of 2015.

    In addition to UX design and project management, I handled business development, fundraising, and strategy.

    Way of the Bow was deprecated in 2017.

    Way of the Bow


    PulseCat (2016)

    In 2016, I was still working with heart-rate gaming from my previous startup, Bloopco. By chance, I found an unusual audience: golfers. Working with them, I pivoted the Bloopco technology into PulseCat, a stress management system for athletes. After reworking the graphics and branding, I piped the heart-rate data into a database for integration with workout systems.
    After 18 months of work and research, and some convincing test cases, I ran out of money and energy. Below are some sketches and concept work, along with a demo of the prototype.

    Commander Cluck (2012)

    Commander Cluck is a procedurally generated side-scrolling game built around a jump-flap mechanic. I adapted the simple endless runner to include a jump with a ‘Joust-like’ flapping descent. I created a space rooster adventurer to match the mechanic, and a world of alien bad guys, called Zorcanians, for him to squish.
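    The mechanic itself is tiny: a normal jump, after which each flap cancels downward velocity and adds a small hop, which is what produces that floaty, Joust-style descent. The real game was built in Cocos2D; this is just an illustrative Python sketch with made-up constants.

    ```python
    # Toy version of the jump-flap mechanic. All constants are invented
    # for illustration; the shipped game was written in Cocos2D.
    GRAVITY = -1200.0      # px/s^2
    JUMP_VELOCITY = 600.0  # px/s on the initial jump
    FLAP_VELOCITY = 150.0  # px/s upward kick per mid-air flap

    class Rooster:
        def __init__(self):
            self.y, self.vy, self.airborne = 0.0, 0.0, False

        def jump(self):
            if not self.airborne:
                self.vy, self.airborne = JUMP_VELOCITY, True

        def flap(self):
            # Only works mid-air: cancel any downward speed, add a small hop.
            if self.airborne:
                self.vy = max(self.vy, 0.0) + FLAP_VELOCITY

        def update(self, dt):
            if self.airborne:
                self.vy += GRAVITY * dt
                self.y += self.vy * dt
                if self.y <= 0.0:  # landed
                    self.y, self.vy, self.airborne = 0.0, 0.0, False
    ```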

    Concept art: Haephaesteus and Lunara

    I developed all the art assets in Adobe Photoshop and After Effects, and partnered with Kiran Rao and XDev Studios for the development in Cocos2D. Commander Cluck was deprecated in 2017.

    Here is the trailer:

    Agent Kickback (2015)

    As a kid, I grew up playing dozens of platformers on a Commodore 64. For a long time, I imagined a side-scrolling open-world platformer about a secret agent who has to uncover a government alien conspiracy. The game’s core mechanic was a fun “kickback” the main character got when he fired his plasma cannon.


    I prototyped and built the game in Construct 2, an HTML5 engine, and created a system for sectioning and designing levels, as well as a map and mission system. I built all of the art in Adobe Photoshop and After Effects. I particularly like the art style, as it feels creepy and alien but still fun and cartoony. The awesomely talented Jennifer Kes Remmington did the music track, which I feel nails the mood of the character.

    You can play the desktop web prototype by clicking here.

    Here is a video playthrough:

    Here are various odds and ends from development.

    Sketchbook and Prototypes

    Prototype sketches 01–03

  • YangGAN: Putting Andrew Yang’s Essence into AI


    This motion that you see of Andrew squishing around above is what’s called a latent space walk of a generative adversarial network, or GAN. GANs are super duper powerful sponges of data points, plotted in 512-dimensional space. Recall that we fail to think well in three-dimensional space most of the time, so 512 dimensions is way out there. (Don’t think about it too long, or blood will shoot out your nose.)

    Effectively, by “spatially” moving from one Andrew-Yang-head data point in 512-dimensional space, along a Yang vector, to another Andrew-Yang-head data point, the image morphs. It’s how we will move virtual characters soon enough.
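    If you want to see how little code a latent space walk really is: pick two latent codes, interpolate between them, and render a frame at each step. RunwayML handled the rendering for me, so the generator call below is a hypothetical stand-in; the interpolation is the whole trick.

    ```python
    # A latent space walk in a nutshell: slide between two 512-dimensional
    # latent codes and render a frame per step. `generate` is a stand-in
    # for whatever StyleGAN2 frontend you use (RunwayML, in my case).
    import numpy as np

    rng = np.random.default_rng(seed=42)
    z_start = rng.standard_normal(512)  # one Andrew-Yang-head point
    z_end = rng.standard_normal(512)    # another one

    def walk(z_a, z_b, steps=60):
        """Yield latent codes linearly interpolated from z_a to z_b."""
        for t in np.linspace(0.0, 1.0, steps):
            yield (1.0 - t) * z_a + t * z_b

    for frame_index, z in enumerate(walk(z_start, z_end)):
        pass  # frame = generate(z)  # hypothetical call into the model
    ```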

    This particular flavor of GAN is called StyleGAN2, from NVIDIA. I lovingly trained it with 6000 headshots from Andrew Yang interviews. All I needed to do to collect this data was scrape his YouTube channel and batch-export the videos as individual frames. Using the Python tool autocrop, I very quickly amassed 15000 frames of Andrew Yang from the chest up. I culled that down to about 6000, throwing away images where his face didn’t fit nicely in the box, facing the screen.
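    Roughly, the collection pipeline looked like this: ffmpeg explodes the footage into frames, then autocrop face-crops each one. The paths, frame rate, and crop size below are placeholders rather than my original settings.

    ```python
    # Sketch of the dataset pipeline: video -> frames -> face crops.
    # Placeholder paths and settings; not the original recipe.
    import glob
    import os
    import subprocess

    from autocrop import Cropper
    from PIL import Image

    os.makedirs("frames", exist_ok=True)
    os.makedirs("cropped", exist_ok=True)

    # 1. Explode the downloaded interview into still frames (2 fps here).
    subprocess.run(
        ["ffmpeg", "-i", "yang_interview.mp4", "-vf", "fps=2", "frames/%05d.png"],
        check=True,
    )

    # 2. Face-crop every frame into a square training image.
    cropper = Cropper(width=512, height=512)
    for path in glob.glob("frames/*.png"):
        face = cropper.crop(path)  # numpy array, or None when no face is found
        if face is not None:
            Image.fromarray(face).save(path.replace("frames", "cropped", 1))
    ```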

    In prototype versions of the GANs I’d made, I discovered the color palette was all over the place. For this GAN, I effectively had to “normalize,” or limit, the range of the colors. After experimenting, I stumbled on a look and batch-processed the training frames with a Nintendo Game Boy color filter.
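    If you want to reproduce the normalization idea, one way is to bucket each frame’s luminance into the four classic Game Boy greens. This is a stand-in for the batch filter I actually ran, whose exact recipe I no longer have.

    ```python
    # Quantize a frame to the four classic Game Boy (DMG) greens.
    # A stand-in for my original batch filter, not the exact recipe.
    from PIL import Image

    GB_PALETTE = [(15, 56, 15), (48, 98, 48), (139, 172, 15), (155, 188, 15)]

    def gameboyify(path_in, path_out):
        gray = Image.open(path_in).convert("L")
        # Bucket 0-255 luminance into 4 bands, one Game Boy green per band.
        lut = [GB_PALETTE[min(v // 64, 3)] for v in range(256)]
        rgb = Image.new("RGB", gray.size)
        rgb.putdata([lut[p] for p in gray.getdata()])
        rgb.save(path_out)

    gameboyify("cropped/00001.png", "training/00001.png")
    ```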

    Game Boy colors, because I love Super Mario Land.

    I did my GAN development with RunwayML, a nifty little program that lets me focus on running models instead of drinking my face off when my Python dependencies won’t install.

    OK, so why the YangGAN?

    Jokes aside…

    Generative Adversarial Networks, computer vision, and networks of computers collectively rendering will revolutionize computer graphics. We will create near-reality very soon, and in doing so, destroy the need for human labor under our current methods of creating value. We should be talking openly about the dangers of artificial intelligence and economic collapse.

    I believe Universal Basic Income is the most realistic thing we can collectively do as a country to save ourselves.

    Our life expectancy is dropping, we’re fighting our neighbors, and we are letting our worst selves consume us. We actually need to do something.

    I’m asking you to please investigate universal basic income. Andrew’s organization is the best one I see going right now. #yanggang baby.

    Links and References

    For more information about Andrew Yang and his efforts at Humanity Forward, please visit: https://movehumanityforward.com/

    Time did a semi-OK mainstream piece on it:

    https://time.com/4737956/universal-basic-income/

    This is a bit heady, but Ian Goodfellow is the guy who more or less put a generator and a discriminator together to create the concept of Generative Adversarial Networks. https://www.youtube.com/watch?v=Z6rxF…

    You should download RunwayML and play with models yourself: http://runwayml.com

    Also, I stole the “blood shoot out of your nose” bit from Lewis Black.

    More of my Machine Learning work will eventually be at http://nytrogen.ai


  • Machine Learning Experiments

    AI is rapidly advancing into computer graphics. It is moving far faster than I imagined it would.

    I believe AI should be taught and experimented with in the educational process. For more of this thinking, please read my post here.

    Otherwise, have fun exploring some of the nonsense below.

    Stable Diffusion and DreamStudio

    I have been using Stable Diffusion fairly obsessively since its release in the fall of 2022. Below are some samples of my work as of November 2022.

    Imagery and Photography Experiments
    Self Portraiture with Stable Diffusion / DreamStudio
    Science Fiction, Mechs, and Technology Experimentation

    ShatnerGAN

    For whatever reason, I spent a week ripping Captain Kirk shots from the original Star Trek series and training StyleGAN2 from NVIDIA. This experiment used 15000 close-ups of James T. Kirk, and the model was trained for 5000 steps.
    Software: 4K Video Downloader, Autocrop, and RunwayML

    Put a GAN on it! – Stealing Beyonce’s Motion Data

    During my class with Derek Shultz, I used Beyonce’s “Put a Ring on It” to experiment with a number of models available in RunwayML, an AI model bridge. This was the first time I integrated machine learning models into my After Effects workflow. While it’s essentially pure fun, it let me experiment with a number of the models and get a sense of their capabilities.
    Software: Runway, After Effects, Photoshop

    YangGAN: Andrew Yang’s Essence in AI #yanggang #humanityforward

    After my Captain Kirk GAN, I decided to try another human-trained GAN. I found a clip of Andrew Yang speaking about the advancement of AI journalists, and was inspired to match the audio with a latent space walk of a trained GAN. It was trained on about 4000 images of Andrew Yang that I scraped from various interviews. The heads were cropped and run through an image adjustment recipe I developed with Python and ImageMagick, sketched below. I trained the GAN in Runway using StyleGAN2.
    Software: 4K Video Downloader, Python (autocrop and ImageMagick), and RunwayML
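    That adjustment recipe amounted to Python driving ImageMagick over every cropped frame. The switches below (a forced resize plus a slight desaturation) are representative of the idea, not the original values.

    ```python
    # Batch-adjust cropped frames by shelling out to ImageMagick.
    # Representative switches only; the original recipe wasn't saved.
    import glob
    import os
    import subprocess

    os.makedirs("adjusted", exist_ok=True)
    for path in glob.glob("cropped/*.png"):
        out = path.replace("cropped", "adjusted", 1)
        subprocess.run(
            [
                "convert", path,
                "-resize", "512x512!",      # force the training resolution
                "-modulate", "100,85,100",  # desaturate a touch to tame colors
                out,
            ],
            check=True,
        )
    ```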

    Ride the Train! – Experiments with Image Segmentation

    This was an experiment playing with image segmentation mapping. I had seen a number of experiments with image mapping, but little using it as a renderer. I used shaders in Maya that were matched to the image segmentation setup in RunwayML, rendered each layer through Runway, and composited the results in After Effects. The technology is far from functional, but the promise is there.
    Software: RunwayML, Autodesk Maya, After Effects
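    The glue of that experiment was making Maya’s flat shader colors line up with the label colors the segmentation setup expects, with a recoloring pass something like the sketch below. Both color maps here are hypothetical.

    ```python
    # Recolor a flat-shaded Maya render into a segmentation label palette.
    # Both color mappings are hypothetical examples.
    import numpy as np
    from PIL import Image

    # Maya shader color -> label color the segmentation setup expects.
    COLOR_MAP = {
        (255, 0, 0): (128, 64, 128),  # "train" shader -> vehicle-ish label
        (0, 255, 0): (70, 130, 180),  # "sky" shader -> sky label
    }

    frame = np.array(Image.open("maya_render.png").convert("RGB"))
    for shader_rgb, label_rgb in COLOR_MAP.items():
        mask = np.all(frame == shader_rgb, axis=-1)
        frame[mask] = label_rgb

    Image.fromarray(frame).save("segmentation_input.png")
    ```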

    Machine Learning Motion Model Experiments

    My primary interest in machine learning is experimenting with animation data and motion. These were some experiments I ran to see which motion model got which result. My takeaway was that the clips needed to be “normalized” to get a good read, which is why I created a template to track the video.
    Software: 4K Video Downloader, Autocrop, and RunwayML

    Fun with CheapFakes

    This is a fun model and easy to use. I scraped some Arnold Schwarzenegger clips from YouTube and had a friend, Daron Jennings, improvise some lines. It was simply a matter of running the model with the appropriate inputs and then compositing the result in After Effects. It might be something fun to use in the future.
    Software: Wav2Lip, After Effects
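    For reference, running Wav2Lip really is about one command: point its inference script at the source footage and the replacement audio. The invocation below follows the Wav2Lip README as I remember it, so double-check the flags against the repo.

    ```python
    # Drive Wav2Lip's inference script from Python. Flags are as I recall
    # them from the Wav2Lip README; verify against the repo before running.
    import subprocess

    subprocess.run(
        [
            "python", "inference.py",
            "--checkpoint_path", "checkpoints/wav2lip_gan.pth",
            "--face", "arnold_clip.mp4",    # the scraped Schwarzenegger footage
            "--audio", "daron_improv.wav",  # my friend's improvised lines
        ],
        check=True,
    )
    # The lip-synced output then went into After Effects for compositing.
    ```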