Category: Nytrogen

Thoughts on the Future of Computer Graphics: Animation, Production, Real Time Technology, AI, and Distributed Networks.


  • Blockchains and Animation Production

    Blockchains and Animation Production

    This is a three part series on Blockchains and Animation Production. The Nytrogen Newsletter follows my thoughts on the evolution of real time production in computer graphics.

    Part 01: F***** on the First One

    Agents and Lawyers, Oh My…

    When I was 29, I sold a show idea to a television network. I was simply a guy who enjoyed animating things.

    At the time, I was working on a studio lot on a film. I used the internal studio index to find the people to pitch to, and set myself up to make a deal with this mighty corporation’s animation development department.

    My writing partner tactfully let the world know that we had set up a deal, and suddenly, a lawyer and an agent magically arrived. They told me, as experts, they would take care of the business dealings. I was removed from negotiations, and kept on the sidelines.


    Three months later, the lawyer placed a stack of papers on the desk in front of me. It outlined a deal in which I needed to work my ass off for imaginary outcomes: the network would own every character, joke, technical solution, and story point until (should the show be green-lit) the second season. Even then, my writing partner, our production team, and I would receive only a small percentage.

    “I’m being f*****,” I said out loud.

    My lawyer nodded. “This is a network deal, you always get f***** on the first one.”

    Intellectual property is the lifeblood of entertainment, and it is systematically controlled through corporate legal systems. Whether intellectual property began as an ethical concept is immaterial; it has evolved into a mechanism for large, expensive legal teams to steal from artists.


    Set the Ideas Free

    The internet has proliferated ideas that create magical things only when *compounded* (as in, smashed together). Going viral comes from modulation: an idea proliferates because people change it and make it their own. A single idea may resonate with others, but sealing that idea in a box and stamping ownership on it limits its ability to evolve.

    This means, in the networked world, we all need to let go of our ideas.

    Give them away. 

    The value we will gain from our collective creative network will outperform the gain we will get from a traditional, legally structured, intellectual property system. 

    I must admit I still have trouble accepting this line of thought. It is counter to what business and law tell you about how you should conduct your business dealings. The reality is that our business dealings have become unacceptably corroded. Corporate interest has become too powerful.

    No artist should ever be “f*****” on the first one.

    Is there another way?

    Still, there needs to be a system.

    Artists need a way to be paid: compensated for their efforts and rewarded for their devotion to their craft. The ideas, the rates, and the work will need to be protected by more than something as clumsy as a legal team.

    This is why I believe animators need to know about the development of blockchains. It may just be a way for artists to work together without a system composed of lawyers and agents. Decentralized computing is pretty complex stuff, but I believe it is important for artists and creators to understand. I will begin sharing my thoughts on blockchains as they pertain to computer graphics next week.


    Part 02: The Lawyer in the Database

    Blockwhat?

    I’ve read hundreds of self-proclaimed “simple explanations” of blockchains. They usually begin by talking about Bitcoin and Satoshi Nakamoto. These explanations tend to get a bit “mathy” and spend a lot of time on game-theoretic problems about trust and governance. It’s no wonder Silicon Valley types love to tweet about this stuff.

    For our understanding of blockchains as they pertain to computer graphics, I’d like to clear your mind of any preconceived ideas of decentralization or smart contracts and just focus on three simple words:

    Copy and Paste

    Because we all work on computers, every Photoshop file, Maya file, Python script, or Word document contains our ideas, our designs, and our stories. If I like something I create, I feel it has value. But if I can make a copy of it, I have, in essence, reduced the scarcity (and the real-world value) of the idea. Simply put, it’s no longer unique.

    In economics, the concept that an idea (or artwork) can be duplicated digitally like this, with no additional labor, is called “zero marginal cost.” When the cost of making an infinite number of copies is the same as the cost of making one, it begins to challenge conventional capitalist thinking. This is why I believe our value systems need a realignment to reflect network value rather than individual value.

    And the data we create, should belong to … us.

    So, just what are Blockchains?

    Blockchains are trusted networks. They are trusted because no one owns them, and, if built correctly, they encode the ethical practices that define the collective. The nodes on the network support each other, not by deciding to, but by being incentivized to.

    Instead of using a centralized company or service to mediate conflict, the governance of a blockchain is designed so that the collective benefits. Those who follow the practices that enable the trust of the network are rewarded with tokens. In a decentralized, blockchain-enabled network, the collective protects its own data.
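    To make the trust mechanism concrete, here is a toy sketch of the core idea: each block commits to the hash of the block before it, so any attempt to rewrite history breaks the links and is detectable by every node. This is a minimal illustration, not a real blockchain; the data strings are invented for the example.

```python
import hashlib
import json

def block_hash(block):
    # Deterministic hash of a block's contents
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, data):
    # Each new block commits to the hash of the previous block
    prev = chain[-1]
    chain.append({"index": prev["index"] + 1,
                  "data": data,
                  "prev_hash": block_hash(prev)})

def is_valid(chain):
    # The chain is valid only if every link still matches
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = [{"index": 0, "data": "genesis", "prev_hash": ""}]
add_block(chain, "Alice licenses a character rig to Bob")
add_block(chain, "Bob pays Alice 10 tokens")

print(is_valid(chain))                 # True
chain[1]["data"] = "Bob pays nothing"  # try to rewrite history...
print(is_valid(chain))                 # False
```

    Real networks add consensus and incentives on top of this linking, but the tamper-evidence shown here is the foundation of the trust.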

    I saw Zavain Dar give a fairly clear presentation on blockchain economics at the Blockstack Summit. (below)

    I like to say that a blockchain creates its own “lawyer in a database.”

    Mistaking cryptocurrency for simply a currency to be bought and traded misses the point of blockchains. All too often we worry about the price of Bitcoin and Ethereum instead of understanding the value they really have. The network is built as a means for collective groups to come to a consensus. How we track and pay for value (cryptographic tokens) is merely a device for execution.

    By using blockchains to track intellectual property, we enable the collective to protect its own value. Things can be copied and pasted only if conditions are met. The collective ethics, or “governance,” can be programmed into the chain, so we don’t need an agent or lawyer to “handle” it for us.
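    As a thought experiment, that “lawyer in a database” could be as simple as a license record whose conditions are enforced by code rather than by counsel. Every field, name, and fee below is hypothetical, invented purely to illustrate programmable governance:

```python
# Hypothetical on-chain license record for a CG asset.
# All fields and values are invented for illustration.
LICENSE = {
    "asset": "hero_rig.ma",
    "owner": "alice",
    "fee": 5,                              # tokens required per copy
    "allowed_uses": {"previs", "education"},
}

def request_copy(license_rec, use, offered_tokens):
    """Grant a copy only when the encoded conditions are met --
    the 'lawyer in the database'."""
    if use not in license_rec["allowed_uses"]:
        return False                       # use not permitted by the collective
    if offered_tokens < license_rec["fee"]:
        return False                       # not enough tokens offered
    return True

print(request_copy(LICENSE, "previs", 5))        # True
print(request_copy(LICENSE, "feature_film", 5))  # False
```

    A real smart contract would also transfer the tokens and record the grant on-chain, but the gatekeeping logic is this simple at heart.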

    Next week, I will outline how I think a blockchain network could work for a computer graphics collective. It isn’t so much about the technical components as the concept that an artistic collective can govern itself more ethically and efficiently than the centralized corporate system we have now.


    Part 03: Blockchain Guilds

    Associations of Craftsmen

    In the medieval era, artists and builders formed group associations to protect their “tricks of the trade.” As opposed to artists who were owned by monarchies and religious organizations, the founders of these organizations, or ‘guilds,’ were independent masters who cultivated apprenticeship programs, created documentation, and standardized methods for protecting intellectual property. These collectives were the precursor to universities. 

    The industrial era saw the rise of organized labor unions to protect workers from centralized interests. Many of these initiatives were centralized themselves and were ineffective, either too small to combat large corporate power or (worse) becoming big interests themselves.

    Computer graphics artists have no such protection now. They are scattered around the globe, unorganized, and often undervalued for their contributions to the craft.

    My hope is a new form of guild will arise, thanks to some fancy computer science.

    Incentivizing Distributed Networks

    The company Otoy has a unique vision for computer graphics. As one of the leaders in rendering with their Octane system, they have begun to think about how distributed rendering would benefit everyone.

    They have proposed a blockchain solution aptly named “Render Token” that compensates users on a network who contribute their processing power. By assigning compensation to donated GPUs, it allows users to govern how their contribution can be used and (potentially) assign an accurate market value to it.

    From their website:

    Ethereum’s widespread adoption was the key to realizing our vision. Instead of GPUs being used to only mine currencies, we use their intrinsic function to render and leverage the features of the blockchain: network, incentives and trust.

    Otoy has gained a small amount of traction in the entertainment community (mainly among motion graphics artists), with support from Bad Robot leader J.J. Abrams himself.

    They are not the only ones who see this blockchain benefit.

    The Golem network also provides a token for contributed processing power. A newer chain called Helium rewards users for buying and maintaining independent wireless network hardware. Others, like IPFS (the InterPlanetary File System) and Storj, pay tokens in exchange for storage space. Whether these networks use large crypto networks like Ethereum or develop their own side chains to scale, the truth about blockchains is starting to become clear.

    Blockchains are coming, and the effect will be incredibly and entirely disruptive.

    When networks of creatives can share their processing power and their storage, and give their intellectual property openly, with full knowledge that they will be accurately compensated for their contributions, the need for a centralized company or organization suddenly diminishes.

    Decentralized Guilds

    My hope is that computer graphics artists will begin to form guilds on the blockchain to protect their intellectual property, gain access to shared assets, and get paid fairly for their contributions. Should this actually work, artists will flock to the networks with the most ethical governance, and thus create global network value.

    The governance of these networks will be like the guilds of the medieval era, but instead of being confined to a city or area, they will propagate to wherever the network will reach. (Everywhere on the planet.)

    Blockchains are still very much in their infancy. It is still uncertain whether projects like Bitcoin, Ethereum or even Otoy will truly scale, but now that the idea has been set free in our collective consciousness, it is only a matter of time before some form of decentralized network becomes a reality. I’m hopeful the technology will rise to the need.

    Thanks for reading!


    Reference and Links for Part I

    The Wealth of Networks by Yochai Benkler: http://www.benkler.org/Benkler_Wealth_Of_Networks.pdf

    Skin in the Game by Nassim Nicholas Taleb: https://en.wikipedia.org/wiki/Skin_in_the_Game_(book)

    (Google) Classification of Intellectual Property Rights: https://scholar.google.com/scholar?q=classification+of+intellectual+property+rights&hl=en&as_sdt=0&as_vis=1&oi=scholart

    Here’s how the average film student is taught about this system: https://www.lightsfilmschool.com/blog/agents-managers-lawyers-film-industry-aek

    Reference and Links for Part II

    The Blockchain and the New Architecture of Trust, Kevin Werbach:

    https://mitpress.mit.edu/books/blockchain-and-new-architecture-trust

    The Zero Marginal Cost Society, Jeremy Rifkin:

    https://thezeromarginalcostsociety.com/

    PostCapitalism, Paul Mason:

    Reference and Links for Part III

    Otoy and Render Token

    https://www.rendertoken.com/

    Golem

    https://golem.network/

    Helium

    http://helium.com

    IPFS

    https://ipfs.io/

    Storj

    https://storj.io/


  • The Ants Go Marching Open Source

    The Ants Go Marching Open Source

    The surprising effects of open source computer graphics development

    Ever Knock Over an Ant Hill?

    I’d like to bring up a comedy routine from one of my favorites, Brian Regan.

    Do you ever knock over an ant hill? Ever notice how they just start building it again?

    You’d think there would be at least one of the ants who’d go:

    “OH MAN!!!!! I DON’T BELIEVE THIS!!!!!”

    We are that one angry ant and that’s why it’s funny. We care about the things we build, and we get upset when someone knocks the whole thing down. Ant behavior seems counter to who we are. Instead of a single controlling interest, a collective hive mind just builds, without any drive but the creation of the ant hill itself.

    I think this is the perfect analogy to think about open source. Brian Regan is also hilarious.

    Open source?

    In my day job, I use pretty fancy pieces of software to do computer graphics. These days, it’s mainly Autodesk’s Maya, Adobe’s After Effects, and the super duper Unreal Engine from Epic. I’m amazed at the advances these pieces of software make every year.

    However, when a community rallies around a free piece of software, the effects can be even more astounding.

    Blender is an open source 3D package and production suite that, for free, allows for the creation of models, rigs, animation, textures, compositing, and editing! Every major part of the animation pipeline has an independent group of developers solving a critical production problem. The community also shares videos about how to build things, provides plug-ins and updates, and contributes to countless chat rooms, websites, and documentation.

    Projects like Blender, the Godot engine, Open Broadcaster Software, and the painting application Krita are part of a growing world of open source computer graphics software. Essentially, a quality graphics pipeline can be built with software that requires no license fees.

    At its core, an open source project stays independent and free, which allows others to adopt it more readily. When community pain points are discovered, the users themselves can simply take it upon themselves to fix them.

    This is key.

    See, if I want an update to the Unreal Engine, I have to wait for the developer, Epic, to get around to it. (Here’s the roadmap: https://trello.com/b/TTAVI7Ny/ue4-roadmap) Even if there are hundreds of world-class developers working on the problem, because the system is closed, relatively few people are working on it.

    • I have been informed by an Unreal expert that the above is not entirely true. Unreal provides a semi-open source license which allows non-Epic developers to contribute to the code.
    • k. Back to the Rambles.

    For an open source project that I use, there are usually communities working on the same problem sets I have. The bigger and more active that community becomes, the more powerful the tool becomes. The users aren’t boxed out of the development in order to be monetized. The users (and the knowledge they have) become part of the development process itself.

    Below is a visualization of Python’s development. You can see how it twists and turns with the needs of the community. What closed company development pipeline would ever create a library like this?

    Open Source for the Ecosystem

    For the time being, the software packages and systems I use in my graphics work are closed. I work in companies, and business models are tied to a mechanism to control scarcity. Most software-focused companies will continue to license, use subscriptions, or sell SaaS, because that’s how you make 20th-century money.

    What I wonder is:

    How long will these closed systems be able to maintain their lead on the rest of the pack?

    How can a localized graphics pipeline compete with an infinite group of user-developers and an ever-increasing collection of models, animation, and art? Yes, it’s true that our graphics ecosystem may end up controlled by Epic, or a titan like Amazon or Microsoft Azure.

    It may also be possible that people will want a free ecosystem, filled with free software, where the value comes from the singular hive mind that is set on building with it.

    Thanks for reading. We’ll see you next week.

    Reference and Links:

    Software –

    Autodesk Maya: http://autodesk.com

    Adobe After Effects: http://adobe.com

    Epic Unreal Engine: http://unrealengine.com

    Blender: http://blender.org

    Godot: http://godotengine.org

    OBS: https://obsproject.com/

    Krita: https://krita.org/en/

    Reading –

    Yochai Benkler, The Wealth of Networks: http://www.benkler.org/Benkler_Wealth_Of_Networks.pdf

    The Agile Manifesto: http://agilemanifesto.org/

    Comedy –

    Brian Regan Official Site: http://brianregan.com/

    And I found his “Ant” routine here: http://inviewmedia.org/index.php/media-gallery/1408-brian-regan-ants-fishing?category_id=12


  • Virtual Beings: Responsive Characters with Intent and Action

    Virtual Beings: Responsive Characters with Intent and Action

    The Nytrogen newsletter follows the disruptions happening to the computer graphics industry. Each week, I send out my thoughts on the technology, work flow, and artistry in the evolution of real time animation production.

    #VirtualBeings

    Last week, the remnants of Oculus Story Studio (rebranded “Fable”) hosted a small conference to show the world the concept of an animated character driven by predictive models (AI). Despite the small group, Unreal Engine CTO Kim Libreri was in attendance, along with a few other notables. They came up with a fancy hashtag and everything! #virtualbeings

    Here’s their video teaser:

    I hadn’t heard of this virtual being summit until recently, so I was a bit bummed I missed it. It’s a concept that I’ve been thinking about for a while now, and would have loved to hear what others in the space are working on.

    The propagation of voice recognition, chatbots, micro-functions, cloud computing, and the like has enabled an ecosystem in which virtual characters could thrive. Yet we barely have them among us. I think it’s because the design of a virtual character straddles two broad development areas.

    I will call each of these development areas “Intent” and “Action.”

    Intent

    Intent is essentially chatbot-developer speak for the user’s motivation for interacting with the virtual character. When I initiate an interaction with a human, or a bot, I generally have a reason to be speaking with them. I want to ask the weather forecast for tomorrow, get help on my calculus homework, or find out whether they prefer the Rock to Stone Cold.

    In order for a virtual being to react to a user properly, it first has to learn what the user wants. The development of intent detection is a landscape filled with machine learning nuts showing off their fancy computer vision algorithms and face detection classifiers. Intent can also be broken up into context, and given long- and short-term properties. The subject gets complicated very quickly.

    Action

    On the other side of the system, an action is an event (or function) that is triggered when the intent has been classified by the virtual character. The action is powered by entities, or recognized components in the user’s communication.

    In the example above, the action, motivated by intent, would trigger a series of functions that move the character to say “hello.” Actions have as much complexity as intents. In the short term, these actions will be “canned,” using pre-animated pieces of content. Undoubtedly, these actions will increasingly be generated in real time, allowing characters to produce the corresponding action on the fly.
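    A minimal sketch of the intent-to-action loop might look like the following. The keyword rules stand in for a real trained classifier, and the animation names are placeholders I made up; the point is only the shape of the dispatch:

```python
import re

# Naive intent detector: keyword rules stand in for a trained model.
INTENT_RULES = {
    "greeting": r"\b(hello|hi|hey)\b",
    "weather":  r"\b(weather|forecast)\b",
}

def detect_intent(utterance):
    for intent, pattern in INTENT_RULES.items():
        if re.search(pattern, utterance.lower()):
            return intent
    return "unknown"

# Actions: each classified intent triggers a (placeholder) animation.
ACTIONS = {
    "greeting": lambda: "play_animation('wave_hello')",
    "weather":  lambda: "play_animation('look_at_sky')",
    "unknown":  lambda: "play_animation('idle_confused')",
}

def respond(utterance):
    return ACTIONS[detect_intent(utterance)]()

print(respond("Hey there!"))                 # play_animation('wave_hello')
print(respond("What's the forecast like?"))  # play_animation('look_at_sky')
```

    In a real system the detector would be a learned model and the actions would drive a rig in an engine, but the Intent-side and Action-side split is exactly this boundary.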

    Bridging the Worlds

    Around 2 minutes and 30 seconds into the video at the top of the post, Edward Saatchi (the CEO of Fable) says that there is a division between the AI community and the filmmaker community. This is true, and the split between Intent and Action cleanly maps the knowledge chasm that will need to be crossed before we see the mass propagation of virtual characters. Animators are going to have to learn how to check their code into GitHub, and engineers are going to have to learn to carry a sketchbook. Once both sides are communicating, the rest should come with time and focus.

    Let’s leave the ethics of all of it for another time!

    That’s it for this time. Thanks for reading.

    Reference and Links:

    Fable Studio: https://fable-studio.com/

    Oculus Story Studio: https://www.oculus.com/story-studio/

    Virtual Beings Summit: https://www.virtual-beings-summit.com/

    Chatbot vocabulary (like User Intent): https://chatbotsmagazine.com/chatbot-vocabulary-10-chatbot-terms-you-need-to-know-3911b1ef31b4

    Dwayne Johnson (the Rock): https://en.wikipedia.org/wiki/Dwayne_Johnson


  • Graphics Hacking and Neural Nets

    Graphics Hacking and Neural Nets

    Artists are going to be chaining networks together to do some interesting stuff…

    My hope was to write up some initial findings from my machine learning experiments this week. But I generated so many thoughts and sketches (30 pages’ worth) that it became a mass (and mess) of information. In short, I am in a bit of shock.

    It’s hard for me to process the reality of what I am seeing as I study machine learning.

    I am excited to parse some of the information, but first I felt it necessary to define my excitement about it. I see machine learning models as the opportunity to create the ultimate graphics hack.

    Previs Hackers

    Previsualization is a bit different from the rest of animation production. I have a previs friend who likes to say that we are “the first into combat”.

    When a big-budget movie fires up, a previs artist has to figure out what the hell the damn thing is. A good storyboard can fix story problems, but the discussion of “how to build it” happens during previsualization. These days, major budgetary and creative conversations depend on previsualization.

    Therefore, since previs is throwaway work for problem solving, it’s OK to “cheat.” By cheat, I mean change the scale, use zoetrope cards for effects, fake the camera rack focus with a Gaussian blur in After Effects. Large production pipelines blow up when they depend on badly scaled, messy assets. Perfection hates rampant creativity, so previs is often pushed to the sidelines.

    One of my favorite things about being a previs artist is creative “cheating,” or non-standardized problem solving. The job really gets good when you stumble on these little hacks. I love to take a break with a fellow artist and go for coffee, or a beer, or a walk around the block. If you tell them what you are working on, the good ones will tell you how they would solve the problem.

    “You know what you should do?” they might say in the Starbucks line, “I’d render the character on green, and then make an offset layer in comp for their position.”

    “Dude, why worry about the ramp while you are blocking the action?” they say while deciding what kind of beans to get in the Chipotle line. “You should animate normally and time-remap it.”

    I call this “graphics hacking.”

    It’s a mind set where it’s ok to cheat, to reuse things, to break them. Anything goes, to get the shot. And the graphics hacking conversations are the ones I live for.

    Previs with Networks?

    I’ve started to imagine the graphics hacking conversations that artists will be having a few years from now when they use the things I am now discovering. I think the conversations will be incredibly different.

    “You know what you should try?” they might say while checking their bitcoin balance on their phone. “Maybe train a network to classify and remove all the leaves.”

    “You should try scraping the color sets from that aerial photography set,” they might say as they enter the automated Uber, “and then I’d use that new StyleGAN to make it look washed out.”

    I am about halfway through an online class on machine learning, and about a quarter of the way through a book on the subject. This past weekend, I created my first neural network hack: in a matter of minutes, I generated a cat in Runway ML!

    If these networks can do what I think they can do, and if we can get the data right, the ability to make unimaginable things a reality will be pretty insane: an art form based on smashing large portions of data together to yield a fake reality.

    Again, I’m reeling. They will be the ultimate graphics hack.

    I am falling down the rabbit hole. Anyone not taking this technology seriously is in for a shock. I hope to start aggregating my thoughts into work flows in future posts.

    As always, please feel free to comment or reach out with thoughts. Thanks for reading, see you next week.

    Reference:

    I’m having an absolute blast learning from @genekogan: http://genekogan.com/

    ITP@NYU / Machine Learning for Artists: https://ml4a.github.io/classes/

    I‘m primarily in RunWay ML: https://runwayml.com/

    Towards Data Science – Creating Art with GANs: https://towardsdatascience.com/gangogh-creating-art-with-gans-8d087d8f74a1


  • Artists Painting with Artificial Intelligence

    Artists Painting with Artificial Intelligence

    GauGAN, peanutsGAN, and DataPaint


    The Nytrogen newsletter follows the disruptions happening to the computer graphics industry. Each week, I send out my thoughts on the technology, work flow, and artistry in the evolution of real time animation production.

    A few months back, NVIDIA showed off a product called GauGAN. Using a pair of neural networks (a generative adversarial network, or GAN), they trained a system on hundreds of thousands of images of outdoor scenes. Armed with this network, trained on the enormous data set, they created a graphical user interface, or GUI. Users can draw “segmentation maps,” sketching out the merest indication of a landscape and having it recreated with the AI’s best guess.

    As shown in the video below, its guesses were pretty damn good.

    This is a strong indication of where the artistic use case for machine learning will go. It’s going to be like wielding a super pen. We will be able to indicate where the tree goes, but the look and style of said tree will probably be generated on the spot.
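    A segmentation map is easier to picture than it sounds: it’s just an image whose pixels store class labels instead of colors. Here is a tiny sketch (the class names and preview palette are entirely made up; a GauGAN-style model would turn the label image into a photoreal one):

```python
# A 4x8 "segmentation map": each cell holds a class label, not a color.
SKY, TREE, WATER = 0, 1, 2
HEIGHT, WIDTH = 4, 8

seg = [[SKY] * WIDTH for _ in range(HEIGHT)]  # start with all sky
for x in range(WIDTH):                        # bottom half is water
    seg[2][x] = seg[3][x] = WATER
seg[1][3] = seg[1][4] = TREE                  # a small tree at the horizon

# A GauGAN-style model maps this label image to a photoreal image;
# here we only map labels to flat preview colors (made-up RGB palette).
PALETTE = {SKY: (135, 206, 235), TREE: (34, 139, 34), WATER: (30, 90, 160)}
preview = [[PALETTE[label] for label in row] for row in seg]

print(len(preview), len(preview[0]))  # 4 8
print(preview[1][3])                  # (34, 139, 34)
```

    The “super pen” part is that the artist only authors the cheap label image; the network invents all the expensive pixels.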

    Very soon, anyone who can squiggle a few marks (and has access to an enormous visual data set) will be able to make near replicas of some of our greatest visual works. Here are a couple of fun ideas I had.

    peanutsGAN

    According to Wikipedia:

    Charles M. Schulz created a total of 17,897 Peanuts strips of which there are 15,391 daily strips and 2,506 Sunday strips.

    That’s a lot of drawings of our favorite bald blockhead that could theoretically be fed to a generative adversarial network.

    The average Disney film runs about 80 minutes; at 24 frames per second, that’s roughly 115,000 frames (and, animated on twos, well over 50,000 drawings). There are at least 10 hand-drawn animated classics that I can think of, plus weeks’ worth of shorts, books, and loads and loads of marketing materials. If someone were looking to extract the style of a Disney film, there is plenty to train on.
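    The frame count is worth checking with quick back-of-envelope arithmetic (assuming a feature animated “on twos,” i.e., one drawing held for two frames):

```python
minutes, fps = 80, 24
frames = minutes * 60 * fps     # total frames in an 80-minute feature
drawings_on_twos = frames // 2  # one drawing per two frames

print(frames)            # 115200
print(drawings_on_twos)  # 57600
```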

    Where’s the DataPaint?

    If these nets are coming with GUIs that let artists wield them, then it raises the question:

    Where can we get, non-licensed, large data sets of imagery?

    I typed “Batman” into Google Images. With the millions of results that came back, I’m sure a network could learn exactly what he looks like. But is this legal under our current copyright laws?


    Will our aggregate online imagery eventually be scraped and squished into gigantic mathematical systems? And in the short term, what’s going to happen when networks like this get into Photoshop? Or Unity?

    As news begins to trickle in on the merger of machine learning and animation graphics, I will be sure to keep tabs on the development.

    Please consider subscribing if this is of interest to you. I welcome thoughts and feedback.

    Thanks for reading, we’ll see you next week.

    Links & Reference:

    StyleGANs explained: NVIDIA’s novel architecture for generative adversarial networks, at Towards Data Science:

    https://towardsdatascience.com/explained-a-style-based-generator-architecture-for-gans-generating-and-tuning-realistic-6cb2be0f431

    Google QuickDraw: https://quickdraw.withgoogle.com/

    Nvidia AI turns sketches into photorealistic Landscapes in Seconds:

    Search Google Images for Batman:

    https://www.google.com/search?q=batman&tbm=isch