Tag: animation


  • Discovering Disco Diffusion and Prompt Design


    Dalle2 is a generative image model by OpenAI. It's a transformative step in computer graphics.

    Dalle2 is in its early invite stage, which means (as of July 2022) I don't have access. (Hurry up, OpenAI, please?) Powered by a need to understand these models, I soon stumbled upon a model called "Disco Diffusion." While not as powerful as Dalle, Disco Diffusion is indicative of the future of generative media. And enormously fun to play with as well. Everything here was created with the Disco Diffusion Google Colab notebook and a free account.

    https://colab.research.google.com/github/alembics/disco-diffusion/blob/main/Disco_Diffusion.ipynb

    My first animation test using Warp in Disco Diffusion

    AI Comics

    Like most things, I start by clearing my creative blocks with comics. So, to find my footing, I thought about trying to get an AI to generate its own comics.

    Since the joke of a comic is somewhat separate from the art, I "prompted" new sentences with the AI-powered InferKit. This let me iterate on the writing until I felt the AI had the joke it wanted to tell.

    I took the response prompt from InferKit and fed it into a freely available model called Craiyon (previously Dalle-mini), using other tags for cartoonists and comics to shape the image.

    text_prompts = {
        0: [
            # Subject
            "--- Output copied from InferKit sentence",
            # Description
            "two character", "single panel", "cross hatching",
            # Artist
            "gary larson", "new yorker",
        ]
    }

    The first few felt weird. Still, liking the possibilities, I experimented with a "publishable" version by merging panels in Photoshop. It felt more authentic and gave structure to the result.

    I then tried the same exercise, but with Disco Diffusion. While the output came out smushier, the options for experimentation within the Colab notebook were numerous, and being open source, it inspired a deeper dive. Soon I had the knack of it.

    AI Watercolor Paintings

    While working, I set up another instance to do some landscape watercolors. I wandered through prompts of different places, like Savannah or Newport, but didn't see much evidence of the locations. I grew bored of it pretty quickly.

    AI SciFi Concept Art

    I decided to move into concepting science fiction: first, because why not? And second, I discovered Disco Diffusion includes a trained PulpSciFi model, which I switched to for a bit.

    I experimented with key words like “Ralph McQuarrie” and “Star Wars.”

    Finding other artists who are playing with the model online, I discovered the practice of adding "trending on artstation." It helped the results, but made me feel a little ill for the artists who are actually trending on ArtStation. It also led me toward a recognizable paradigm; I felt my prompts pulled into "cyberpunk" and "mech."

    AI SciFi Space Photography

    I then hit a really interesting breakthrough: I found someone who had started listing camera lenses in their prompts.

    This inspired me to take the more conceptual ideas I was working with and make them more photographic. I chose prompts that focused on realism, with references like "NASA" or "Star Trek." I also set detailed instructions for camera lenses, like "27mm" and "tilt shift."

    AI Alien Mech Technology Photography

    But eventually I wanted to create something… alien I guess.

    I began to organize my prompts into sections, and mess with individual variables to get more of what I was looking to generate.

    I experimented with a number of variants of lenses and lighting, even trying tilt shift. The best results, in my opinion, are above.

    Most of the imagery you see came from what is becoming a bit of a “base prompt” for me:

        "Mech Suit with military grade weapon systems in an alien world",
        "Sci fi, Iron Man, technology, attack",
        "photography",
        "27mm",
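
    For reference, this fragment drops into the same `text_prompts` structure as the earlier comics example (key 0 is the starting frame, following the Disco Diffusion notebook's convention):

```python
# The base prompt assembled into Disco Diffusion's text_prompts dictionary.
# Key 0 is the starting frame; each string is one piece of the prompt.
text_prompts = {
    0: [
        "Mech Suit with military grade weapon systems in an alien world",
        "Sci fi, Iron Man, technology, attack",
        "photography",
        "27mm",
    ]
}
```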
      

    AI Animation? AI Shorts?

    I’ve only just begun to explore!

    If you want to see some extraordinary work, you should consider joining the DD Discord.

    https://discord.gg/fzevz8Z4

    Mind Blown

    The implications of the democratization of AI art will be extraordinary. The architecture of CLIP and the diffusion models, the design process of prompt engineering, and the intellectual property implications are probably more than this blog post can hold.

    As I wrap my head around the use, I’ll update here.

    As always, thanks for reading. Happy prompting!


  • Machine Learning and Animation


    As the tools become more accessible, neural networks may soon be driving the performance of characters

    A machine… can’t possibly… ?!?

    I find myself involved in conversations where I ask graphics artists if they feel threatened by AI (artificial intelligence). Many animators think that being in a “creative” occupation means that they are safe. For many, it’s okay to think automation will wipe out auto-callers and Uber drivers, but keying a character performance is something that we won’t have to worry about for a while.

    I used to be one of these people, but now, I’m not so sure.

    Fake it til’ ya Make it

    The internet continues to show us examples of "deep fakes," where mathematical image recognition models have allowed for the creation of fully manufactured digital characters. Upon viewing this, there is an initial reaction of "that's crazy," followed by a dismissive "wave of the hand." I think there is a misunderstanding of what the neural network is really doing.

    It's easy to mistake this "puppeting" of Vlad Putin or Elon Musk for a simple real-time, one-to-one mapping, like that of a performance capture system. Or to assume the video is composited or mixed into the pixels in some way.

    But this technology is actually generating an entirely new performance.

    Neural Networks

    Honestly, I have only just begun my journey to understand how neural networks work. In many ways they are still a black box of mathematics full of PhD-level vocabulary that a humble unfrozen cave man animator like me can't wrap their mind around. I can't tell the mathematical difference between a Lambert and a Phong, but I absolutely know the difference in the way they look and when to use them. Similarly, it will be hard for me to describe the calculations in a Generative Adversarial Network, but I'm starting to see the possibilities of what it can produce.

    With only a few weeks of reading – and the help of classes like ITP@NYU (https://ml4a.github.io/classes/) – I'm fairly confident that, at its core, a neural network uses a whole lot of data to learn how to make an input on one side come out as something generative on the other. It's a system that makes guesses (or predictions), and the more data you cram into it, the better those guesses get.
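
    That guess-and-correct loop can be sketched in a few lines. This toy (one weight, made-up data, no relation to any real model) just shows the mechanic: predict, measure the error, nudge.

```python
# A one-weight "network": guess = w * x.
# It predicts, measures how wrong it was, and nudges w to be less wrong.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, target) pairs: y = 2x

w = 0.0    # the single learnable weight, starting from a bad guess
lr = 0.05  # learning rate: the size of each corrective nudge

for epoch in range(200):
    for x, y in data:
        guess = w * x        # prediction
        error = guess - y    # how far off it was
        w -= lr * error * x  # nudge w in the direction that shrinks the error

print(round(w, 3))  # converges to 2.0, the pattern hidden in the data
```

    Cramming in more (x, y) pairs makes the recovered pattern more robust; that is the "more data, better guesses" trade in miniature.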

    The whole point of a neural network is to recognize features, or patterns that are not explicitly called out in the data set. These patterns are what the network uses to construct wholly new entities (guesses), that look and feel almost entirely like the original data set. Essentially, with enough data, it can fake a new entity that is nearly indistinguishable from the original. The question becomes:

    What data do we use to animate?

    From Light Cycles to Dinosaurs

    In 1982, to create the illusion that 3D objects moved across the screen, the animators on Tron had to stand over the shoulders of engineers and instruct them to plot out thousands of individual x, y, z coordinates. By the mid-1990s, instead of entering individual coordinates, software UIs had become accessible enough to allow animators to key and refine the motion of an object. This made moving dinosaurs in the computer a reality.

    The curve editor, now commonplace in animation software like Maya and Blender, took 15 years to imagine and develop. It is a way for artists to visualize the acceleration data of 3D objects, and a way for those artists to communicate their intent to the software.

    Whether the data is from captured human performance or “hand keyed,” curve editors are the common language of motion data in the animation world.
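
    In code, that common language reduces to keys and interpolation. A minimal sketch (linear interpolation only; real curve editors layer Bézier tangents and easing on top of this):

```python
# Keys: (frame, value) pairs, exactly what an animator sets in a curve editor.
keys = [(0, 0.0), (12, 10.0), (24, 10.0), (36, 0.0)]  # rise, hold, fall

def evaluate(keys, frame):
    """Value of the animation curve at a frame (linear interpolation)."""
    if frame <= keys[0][0]:
        return keys[0][1]
    if frame >= keys[-1][0]:
        return keys[-1][1]
    for (f0, v0), (f1, v1) in zip(keys, keys[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)  # 0..1 position within the segment
            return v0 + t * (v1 - v0)

print(evaluate(keys, 6))   # 5.0: halfway through the rise
print(evaluate(keys, 18))  # 10.0: inside the hold
```

    Every curve in a Maya or Blender graph editor is, at bottom, a table like `keys` plus a fancier `evaluate`.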

    Animator Intent = X, Animated Character = Y

    So, if a neural network could be fed motion data and “learn” how a human moves, it might be possible for it to learn what the animator is trying to communicate and generate predictions of what makes a good animated performance.

    Actually, the development of this "animator intent = X" variable might be the easy part. We have enormous databases of "scrapable" human motion, considering existing deep learning models like PoseNet could pull data from an effectively infinite library on YouTube.

    The features that lead to the Y variable are harder. The question I'm going to scratch at is:

    Could a model be developed where the features generate a great animated character performance?

    From Academics to Artists

    Up until now, machine learning was something only academics played with – and now big tech, as these early scientists have been gobbled up by tech titans like Uber, Facebook, Google and Tesla.

    Like the engineers and their light cycle inputs, only TensorFlow python-heads could enter the parameters for an ML model. However, a new wave of UI thinking is now making this approachable to us unfrozen cave man animators. Unity, a powerhouse 3D game engine, has jammed "ML-Agents" into its software to facilitate ingesting TensorFlow models. And a standalone GUI called Weka allows for non-coding explorations of deep learning models.

    However, a new visual interface called Runway ML is the first of what I see as a wave of artist-friendly machine learning tools. There will increasingly be less need for artists to get their hands dirty in the code of neural networks, as services like this will provide an accessible way for artists to integrate the thinking directly into their existing animation workflows.

    Maybe even accessible to a cave man animator like me.

    That’s it for this week, thanks for reading. Please subscribe and join the conversation.

    Reference:

    ITP@NYU / Machine Learning for Artists: https://ml4a.github.io/classes/

    Daniel Shiffman’s Coding Train: https://www.youtube.com/channel/UCvjgXvBlbQiydffZU7m1_aw

    Machine Learning Mastery:

    https://machinelearningmastery.com/machine-learning-mastery-weka/

    RunWay ML: https://runwayml.com/

    The TensorFlow Playground: https://playground.tensorflow.org/

    PoseNet Machine Learning Model: https://github.com/tensorflow/tfjs-models/tree/master/posenet

    Andrew Price (the blender guru) made a similar argument: https://youtu.be/FlgLxSLsYWQ


  • The Democratization of Animation Production

    Imagine the hundreds of people that a computer graphics animated feature requires. (Think: Pixar or a VFX-heavy superhero blockbuster.)

    Now imagine the entire undertaking of these projects being done by a handful of people. Instead of a pipeline of specialized workers, this handful of people are unique, multi-talented "librarians." Like DJs who sample electronic music, animated storytelling will mix streams of data, creating visualizations for a variety of new platforms.


    Hypothesis:

    Computer Graphics Production – as it exists in the movie business – will be disrupted by peer based, real time networks.


    Increasingly, collectives of creative developers are sharing new ideas, code and workflows. By sharing powerful tools and know-how, communities are growing at a pace that will soon outperform closed systems in both quality and market fit. Essentially, the open networks will outperform the closed companies. The advances of these creative networks will make the computer graphics artists who work within them mind-bogglingly productive.

    Most interesting to me, is that visualizations might not be rendered on a centralized farm of computers, but by an infinitely scalable, distributed network. The libraries, the labor and the processing power will be shared by all who participate. The more who join in, the more powerful the network will become.

    This is enormously exciting for the art form. Just as YouTube empowered content creators and Instagram made everyone a photographer, new networked technologies will democratize and enhance the animated storytelling process for anyone with an internet connection. Admittedly, it is also threatening to those who exist in the industry today.


    This is what my research and writing has been focused on for the last year. I’ve spent this time exploring engines and new workflows, playing with ways to develop content, and then writing my thoughts over and over. I want to understand this evolution.


    The best way for me to internalize my learning is to write about it, teach it, and share it. And so, it is my hope that the self-imposed pressure of a weekly newsletter will keep me diligent in these explorations.

    Every week, I will write a new post discussing my thoughts on technology like game engines, distributed networks, machine learning, agile storytelling, and most importantly, the evolution of the networked artist.

    If you are a computer graphics artist, producer, student, or thinker, I welcome you to subscribe and join in. If you find this useful, please pass it on to others who might as well.



  • Keys & State Machines

    Design patterns for character animation are about to get really complex

    Storytelling Moments

    Character animation is hard. And with real time systems, it’s going to get a lot harder.

    In order to plan through the creation of a character's performance, an animator uses design strategies to construct the motion. Since the days of Walt Disney and his Nine Old Men, animators have relied on a method of quantifying the actions of characters into storytelling moments. These moments are often referred to as "keys."

    By drawing a handful of story moments and then "popping" between them, the animator can explore the timing and readability of the shot. Below is an example from Richard Williams's Animator's Survival Kit, showing the key drawings of a character walking to a chalkboard and starting to write. The story of the performance can be conveyed in three simple moments.

    It's sometimes very difficult to determine these keys. Given this difficulty, and the energy required to flesh out the action, it's astounding that we use the resulting sequence of frames only once. Entire movies (which are massive undertakings) are animated once, and then thrown away! The energy the animator puts in is equivalent to the visible experience they get out of it.

    This linear output looks something like this:

    While the art form in this sequential logic form is beautiful, it is highly inefficient.

    In a real time system, such as that in an engine, the character's actions are reusable. These actions can be changed based on the dynamic nature of the environment. Simply thinking of a character in terms of linear keys is too limiting. We need a way to quantify a character performance beyond its single use.

    Finite State Machines

    A state machine is a mathematical design pattern where an entity exists in bracketed conceptual moments, called "states." States are an architecture that allows a predetermined series of actions to be triggered, provided conditions are met.

    For example, a character entity in an engine may be in a state of "walking" until it is confronted with a street to cross, at which point it will change its state to "wait for the light." States aren't just visibly physical, like walking or jumping. Characters can be in a state of hunger, or a state of anger, or a state of existential crisis. When you begin to imagine states for characters, you start to understand how a character performs outside of linear time. Designing keys in this mindset might look something more like this:


    Video games already do this in a limited capacity to satisfy the requirements of a character's actions during gameplay. A character will begin in an "idle" state, and when the user input commands it to run left, it will change its state to "run left." By changing its state, the engine knows to play an animation clip of the character running to the left.
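
    The underlying pattern is compact. Here is a hypothetical character state machine in Python (the state and input names are mine, not taken from any particular engine):

```python
# Each state lists which inputs trigger which next state; the engine plays
# whatever animation clip is associated with the current state.
transitions = {
    "idle":      {"left": "run_left", "right": "run_right"},
    "run_left":  {"stop": "idle"},
    "run_right": {"stop": "idle"},
}

class Character:
    def __init__(self):
        self.state = "idle"

    def handle(self, user_input):
        # Move only along a defined transition; otherwise stay put.
        self.state = transitions[self.state].get(user_input, self.state)

hero = Character()
hero.handle("left")
print(hero.state)    # run_left
hero.handle("jump")  # undefined from run_left: state is unchanged
print(hero.state)    # run_left
hero.handle("stop")
print(hero.state)    # idle
```

    The whole design problem lives in the `transitions` table: richer characters mean more states and more conditions, not more complicated machinery.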

    Increasingly, game engines are providing UI systems that allow for the development and design of character state machines. The example below is taken from Unity's state machine system, which allows you to import animation clips and arrange them into a pattern that triggers at runtime.

    My hunch is that, as real time systems become more and more integral to the animation production process, character work will increasingly become reliant on the development of complex state machines. These massive state machines will not only drive the actions of the character, but the motivational nature of them as well.

    Thanks for Reading. See you next week!

    Here are some references to keep you going on Animation Keys and State Machines.

    The Animator's Survival Kit by Richard Williams:

    https://www.amazon.com/Animators-Survival-Kit-Richard-Williams/dp/0571202284

    Game Programming Patterns by Robert Nystrom on State Machines:
    https://gameprogrammingpatterns.com/state.html

    Unity’s Documentation on State Machines:

    https://docs.unity3d.com/Manual/StateMachineBasics.html

    Unreal’s Documentation on Animation Blueprints:

    https://docs.unrealengine.com/en-US/Engine/Animation/AnimBlueprints/index.html


    I began this newsletter to start a conversation with the computer graphics industry. Should you have thoughts or comments, please feel free to reach out. I can be found on Twitter @nyewarburton.


  • ITGM 310: Animation for Games

    Studio, Savannah College of Art and Design, Interactive Design and Game Development, 2021

    This is a course designed to teach the process of building animation systems for games and real time characters.

    We use Unreal Engine and its Animation Blueprint system, with Autodesk Maya for animation keying, retargeting and clean up. The content currently reflects a 3D animation workflow, but I have hopes of integrating 2D animation systems into future content.

    This is the demo reel for Animation for Games.

    This class is for game developers and animators who wish to understand how to build real time animation systems. I try to build the class so it's accessible to animators with no development experience, but extensible so those with Unreal Blueprint experience can learn animation systems for NPCs and combat. Students start by individually building a side scrolling platformer, and can potentially move into directed or collaborative groups for a third person player RPG type project.

    Here is the Final Project Presentation:

    You can see the course website at this link.

    This class has been in development since Fall 2020. A new update for this class is set for Fall 2022.


  • Machine Learning Experiments

    AI is rapidly advancing into computer graphics. It is moving far faster than I imagined it would.

    I believe AI should be taught and experimented with in the educational process. For more of this thinking, please read my post here.

    Otherwise, have fun exploring some of the nonsense below.

    Stable Diffusion and Dreamstudio

    I have been using Stable Diffusion fairly obsessively since its release in the fall of 2022. Below are some samples of my work as of November 2022.

    Imagery and Photography Experiments
    Self Portraiture with Stable Diffusion / Dreamstudio
    Science Fiction, Mechs, and Technology Experimentation

    ShatnerGAN

    For whatever reason, I spent a week ripping Captain Kirk shots from the original Star Trek series and training NVIDIA's StyleGAN2. This experiment used 15,000 close-ups of James T. Kirk, and the model was trained for 5,000 steps.
    Software: 4K Video Downloader, Autocrop, and RunwayML

    Put a GAN on it! – Stealing Beyonce’s Motion Data

    During my class with Derek Shultz, I used Beyonce's "Put a Ring on It" to experiment with a number of models that existed within the AI model bridge software called RunwayML. This is the first time I integrated machine learning models into my After Effects workflow. While it's essentially pure fun, it allowed me to experiment with a number of the models and get a sense of their capabilities.
    Software: Runway, After Effects, Photoshop

    YangGAN: Andrew Yang’s Essence in AI #yanggang #humanityforward

    After my Captain Kirk GAN, I decided to try another human-trained GAN. I found a clip of Andrew Yang speaking about the advancement of AI journalists, and was inspired to match the audio with a latent space walk of a trained GAN. This was trained on about 4,000 images of Andrew Yang that I scraped from various interviews. The head was cropped and run through an image adjustment recipe I developed with Python and ImageMagick. I trained the GAN in Runway using StyleGAN2.
    Software: 4K Video Downloader, Python: Autocrop and ImageMagick, and RunwayML
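
    I can't reproduce the exact recipe here, but the shape of that preprocessing step is: crop each frame to a square around the head, then resample it to the fixed resolution the GAN expects. A pure-Python stand-in on a toy pixel grid (a real pipeline would hand this to ImageMagick or Pillow):

```python
def square_crop_resize(pixels, cx, cy, half, out_size):
    """Crop a (2*half) x (2*half) square centred on (cx, cy) from a 2D
    pixel grid, then resize to out_size x out_size by nearest-neighbour."""
    crop = [row[cx - half:cx + half] for row in pixels[cy - half:cy + half]]
    n = len(crop)
    return [
        [crop[y * n // out_size][x * n // out_size] for x in range(out_size)]
        for y in range(out_size)
    ]

# A fake 8x8 "frame" whose pixel values encode their own (x, y) coordinates.
frame = [[(x, y) for x in range(8)] for y in range(8)]

# Pretend a face detector put the head at (4, 4): crop 6x6, shrink to 3x3.
face = square_crop_resize(frame, cx=4, cy=4, half=3, out_size=3)
print(face[0][0])  # (1, 1): the top-left of the crop window
```

    Running every frame through the same crop-and-resize is what gives the GAN a uniform dataset to train on.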

    Ride the Train! – Experiments with Image Segmentation

    This was an experiment playing with image segmentation mapping. I had seen a number of experiments with image mapping, but little using it as a renderer. I used shaders in Maya that were matched to the image mapping setup in RunwayML. I rendered each layer through Runway and composited it in After Effects. The technology is far from functional, but the promise is there.
    Software: RunwayML, Autodesk Maya, After Effects

    Machine Learning Motion Model Experiments

    My primary interest in machine learning is experimentation with animation data and motion. These were some experiments I ran to see which motion model got which result. My takeaway was that the clips needed to be "normalized" to get a good read; that's why I created a template to track the video.
    Software: 4K Video Downloader, Autocrop, and RunwayML
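
    Concretely, "normalizing" a motion clip usually means removing position and scale so the model sees the pose rather than the framing. A sketch with a hypothetical keypoint format (real models like PoseNet emit named keypoints with confidence scores):

```python
def normalize_pose(keypoints, root=0, ref=1):
    """Translate so keypoints[root] sits at the origin, then scale so the
    root-to-ref distance (say, hip to neck) is exactly 1."""
    rx, ry = keypoints[root]
    centred = [(x - rx, y - ry) for x, y in keypoints]
    nx, ny = centred[ref]
    scale = (nx * nx + ny * ny) ** 0.5 or 1.0  # guard against zero length
    return [(x / scale, y / scale) for x, y in centred]

# The "same" pose captured at a different position and zoom level:
frame_a = [(100, 200), (100, 150), (120, 160)]  # hip, neck, wrist
frame_b = [(400, 420), (400, 320), (440, 340)]

print(normalize_pose(frame_a) == normalize_pose(frame_b))  # True
```

    After this step, two clips of the same move read identically to the model no matter where the camera was.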

    Fun with CheapFakes

    This is a fun model and easy to use. I scraped some Arnold Schwarzenegger clips from YouTube, and had a friend, Daron Jennings, improvise some clips. It was simply a matter of running the model with the appropriate components, and then compositing in After Effects. It might be something fun to use in the future.
    Software: Wav2Lip, After Effects


  • Blockchains and Animation Production


    This is a three part series on Blockchains and Animation Production. The Nytrogen Newsletter follows my thoughts on the evolution of real time production in computer graphics.

    Part 01: F***** on the First One

    Agents and Lawyers, Oh My…

    When I was 29, I sold a show idea to a television network. I was simply a guy who enjoyed animating things.

    I was working on a studio lot on a film. I used the internal studio index to find the people to pitch to, and set myself up to make a deal with this mighty corporation's animation development department.

    My writing partner tactfully let the world know that we had set up a deal, and suddenly, a lawyer and an agent magically arrived. They told me, as experts, they would take care of the business dealings. I was removed from negotiations, and kept on the sidelines.


    Three months later, the lawyer placed a stack of papers on the desk in front of me. It outlined a deal where I needed to work my ass off for imaginary outcomes, in which the network would own every character, joke, technical solution and story point until (should the show be green-lit) the second season. And then myself, my writing partner, and our production team would receive only a small percentage.

    “I’m being f*****.” I said out loud.

    My lawyer nodded. “This is a network deal, you always get f***** on the first one.”

    Intellectual property is the lifeblood of entertainment, and it is systematically controlled through corporate legal systems. Whether intellectual property began as an ethical concept is immaterial; it's evolved into a mechanism for large, expensive legal teams to steal from artists.


    Set the Ideas Free

    The internet has proliferated ideas that, only when *compounded* – as in, smashed together – create magical things. Going viral comes from modulation. An idea proliferates because people change it and make it their own. It may be possible for a single idea to resonate with others, but encapsulating that idea in a box and stamping ownership on it will limit its ability to evolve.

    This means, in the networked world, we all need to let go of our ideas.

    Give them away. 

    The value we will gain from our collective creative network will outperform the gain we will get from a traditional, legally structured, intellectual property system. 

    I must admit I still have trouble accepting this line of thought. It is counter to what business and law tell you about how you should conduct your business dealings. The reality is that our business dealings have become unacceptably corroded. Corporate interest has become too powerful.

    No artist should ever be “f****** on the first one.”

    Is there another way?

    However, there still needs to be a system.

    Artists need a way to be paid: compensated for their efforts and rewarded for their devotion to their craft. The ideas, the rates, and the work will need to be protected by more than something as clumsy as a legal team.

    This is why I believe animators need to know about the development of blockchains. It may just be a way for artists to work together without a system composed of lawyers and agents. Decentralized computing is pretty complex stuff, but I believe it's important for artists and creators to understand. I will begin to share my thoughts on blockchains as they pertain to computer graphics next week.


    Part 02: The Lawyer in the Database

    Blockwhat?

    I’ve read hundreds of self proclaimed “simple explanations” about blockchains. They usually begin by talking about Bitcoin and Satoshi Nakamoto. These explanations tend to get a bit “mathy” and spend a lot of time on game theoretical problems about trust and governance. It’s no wonder Silicon Valley types love to tweet about this stuff.

    For our understanding of blockchains as it pertains to computer graphics, I’d like to clear your mind of any preconceived ideas of decentralization or smart contracts and just focus on two simple words:

    Copy and Paste

    Because we all work on computers, every Photoshop file, Maya file, Python script, or Word document contains our ideas, our designs, and our stories. If I like something I create, then I feel it has value. If I can make a copy of it, in essence, I have reduced the scarcity (and the real-world value) of the idea. Simply put, it's no longer unique.

    In economics, the concept that an idea (or artwork) can be duplicated digitally like this, with no additional labor, is called "zero marginal cost." When the cost of making an infinite amount of something is the same as making one of something, it begins to challenge conventional capitalist thinking. This is why I believe that our value systems need a realignment to reflect network value rather than individual value.

    And the data we create, should belong to … us.

    So, just what are Blockchains?

    Blockchains are trusted networks. They are trusted because no one owns them and, if built correctly, they provide the necessary ethical practices that define the collective. The nodes on the network support each other, not by deciding to, but by being incentivized to.

    As opposed to using a centralized company or service to mediate conflict, the governance of blockchains is designed for the collective to benefit. Those that follow the practices that enable the trust of the network are rewarded with tokens. In a decentralized, or blockchain-enabled, network, the collective protects its own data.

    I saw Zavain Dar give a fairly clear presentation of blockchain economics at the Blockstack summit. (below)

    I like to say that a blockchain creates its own "lawyer in a database."

    To mistake cryptocurrency for simply a currency to be bought and traded is to miss the point of blockchains. All too often we worry about the price of Bitcoin and Ethereum instead of understanding the value they really have. The network is built as a means for collective groups to come to a consensus. How we track and pay for value (cryptographic tokens) is merely a device for execution.

    By using blockchains to track intellectual property, we enable the collective to protect its own value. Things can be copied and pasted only if conditions are met. The collective ethics, or "governance," can be programmed into the chain, so we don't need an agent or lawyer to "handle" it for us.
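
    As a toy illustration of that "lawyer in a database": the copy rule lives in the ledger's code, and every copy is an entry in the ledger. (Entirely hypothetical; a real chain enforces this with consensus and cryptography, not a Python list.)

```python
# An append-only ledger whose copy rule is enforced in code, not by counsel.
ledger = [{"asset": "robot_design.ma", "owner": "artist_a", "copies_allowed": 2}]

def request_copy(ledger, asset, requester):
    record = next(e for e in ledger if e.get("asset") == asset)
    copies_so_far = sum(1 for e in ledger if e.get("copy_of") == asset)
    if copies_so_far >= record["copies_allowed"]:
        return False  # the condition in the "contract" blocks the paste
    ledger.append({"copy_of": asset, "holder": requester})  # record the copy
    return True

print(request_copy(ledger, "robot_design.ma", "artist_b"))  # True
print(request_copy(ledger, "robot_design.ma", "artist_c"))  # True
print(request_copy(ledger, "robot_design.ma", "artist_d"))  # False: limit hit
```

    Every paste is either permitted and recorded, or refused, with no middleman making the call.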

    Next week, I will define how I think a blockchain network could work for a computer graphics collective. It isn't so much about the technical components as the concept that an artistic collective can govern itself more ethically and efficiently than the centralized corporate system we have now.


    Part 03: Blockchain Guilds

    Associations of Craftsmen

    In the medieval era, artists and builders formed group associations to protect their "tricks of the trade." As opposed to artists who were owned by monarchies and religious organizations, the founders of these organizations, or "guilds," were independent masters who cultivated apprenticeship programs, created documentation, and standardized methods for protecting intellectual property. These collectives were the precursors to universities.

    The industrial era saw the rise of organized labor unions to protect workers from centralized interests. Many of these initiatives were centralized themselves, and were ineffective: either too small to combat large corporate power, or (worse) becoming big interests themselves.

    Computer graphics artists have no protection now. They are scattered around the globe, unorganized, and often misvalued for their contribution to the craft.

    My hope is a new form of guild will arise, thanks to some fancy computer science.

    Incentivizing Distributed Networks

    The company Otoy has a unique vision for computer graphics. As one of the leaders in rendering with their Octane system, they have begun to think about how distributed rendering would benefit everyone.

    They have proposed a blockchain solution aptly named "Render Token" that compensates users on a network who contribute their processing power. By assigning compensation to the donation of GPUs, it allows users to govern how their contribution can be used and (potentially) assigns an accurate market value to it.

    From their website:

    Ethereum’s widespread adoption was the key to realizing our vision. Instead of GPUs being used to only mine currencies, we use their intrinsic function to render and leverage the features of the blockchain: network, incentives and trust.

    Otoy has gained a small amount of traction in the entertainment community (mainly among motion graphics artists), with support from Bad Robot leader J.J. Abrams himself.

    They are not the only ones who see this blockchain benefit.

    The network Golem also provides a token for processing-power contribution. A new chain called Helium rewards users for buying and maintaining independent wireless network hardware. Others, like IPFS (the InterPlanetary File System) and Storj, pay tokens in exchange for storage space. Whether these networks use large crypto-networks like Ethereum, or develop their own side chains to scale, the truth about blockchains is starting to become clear.

    Blockchains are coming, and the effect will be entirely disruptive.

    When networks of creatives can share their processing power and storage, and give their intellectual property openly, with full knowledge that they will be accurately compensated for their contributions, the need for a centralized company or organization suddenly diminishes.

    Decentralized Guilds

    My hope is that computer graphics artists will begin to form guilds on the blockchain to protect their intellectual property, gain access to shared assets, and get paid fairly for their contributions. Should this actually work, artists will flock to the networks with the most ethical governance, and thus create global network value.

    The governance of these networks will be like the guilds of the medieval era, but instead of being confined to a city or area, they will propagate to wherever the network will reach. (Everywhere on the planet.)

    Blockchains are still very much in their infancy. It is still uncertain whether projects like Bitcoin, Ethereum or even Otoy will truly scale, but now that the idea has been set free in our collective consciousness, it is only a matter of time before some form of decentralized network becomes a reality. I’m hopeful the technology will rise to the need.

    Thanks for reading!


    Reference and Links for Part I

    The Wealth of Networks by Yochai Benkler: http://www.benkler.org/Benkler_Wealth_Of_Networks.pdf

    Skin in the Game by Nassim Nicholas Taleb: https://en.wikipedia.org/wiki/Skin_in_the_Game_(book)

    (Google) Classification of Intellectual Property Rights: https://scholar.google.com/scholar?q=classification+of+intellectual+property+rights&hl=en&as_sdt=0&as_vis=1&oi=scholart

    Here’s how the average film student is taught about this system: https://www.lightsfilmschool.com/blog/agents-managers-lawyers-film-industry-aek

    Reference and Links for Part II

    The Blockchain and the New Architecture of Trust, Kevin Werbach:

    https://mitpress.mit.edu/books/blockchain-and-new-architecture-trust

    The Zero Marginal Cost Society, Jeremy Rifkin:

    https://thezeromarginalcostsociety.com/

    PostCapitalism, Paul Mason:

    Reference and Links for Part III

    Otoy and Render Token

    https://www.rendertoken.com/

    Golem

    https://golem.network/

    Helium

    http://helium.com

    IPFS

    https://ipfs.io/

    Storj

    https://storj.io/