Tag: Game Development


  • Which Game Engine Should I Use?

    Which Game Engine Should I Use?

By far, this is the question I am asked most often by anyone diving into game development.

Be it a startup team or a small group of students, every project needs to decide on its tech stack. It's definitely a hard question, but the good news is that there are a number of options.

    Below, I’ve listed a selection of game engines that are, in my opinion, a good place to start.

    Unreal Engine

The first engine that is often spoken of for AAA gaming, animation and visual effects is the Unreal Engine by Epic Games. You should consider this engine the "go to" for AAA-style video game development. It's a tank. Its strength is "realism" for third person, first person or strategy games. Major game studios use it to scale teams of hundreds of people.

    Flooded with money, Epic is the Mongol Horde of the Game Engine world. From their marketing assault, it is clear they wish to own cinematic production in Hollywood, or really anything where there’s a camera and subject. With real time workflows being so disruptive, they might just win. Many, if not all, of the major entertainment computer graphics firms have integrated Unreal, or will integrate Unreal, into their pipeline. Software has eaten film production, and Unreal is the mouth.

    Unreal will most likely be the dominant player in many real time interactive experiences from gaming, to architecture, virtual production, and many other fields that require high fidelity graphics.

    Unreal Engine 5 tech demo running on PS5

    Pro

    If you are interested in creating AAA quality games within the fairly known design paradigms of the console gaming world, then this is a good choice. Even if you don’t wish to use it, you might find yourself sucked into a team or project that is dependent on it. 

Unreal has made real strides in making programming accessible to artists with their Blueprint system. This node-based scripting has been a good way to learn, and an even better way to get designers more "hands on" in the system.

If you are a visual effects artist, or a feature film animator, it is also a very good choice to pick up. It's becoming something of an industry standard. Every movie shop is looking for people skilled in its use right now.

    Con

    Much like Microsoft for business and personal computing was in the 90’s, Unreal will most likely be the corporate operating system for real time 3D content development. It will grow relentlessly, and probably won’t listen to the little voices of the independents.

If you are an indie developer, interactive artist, data visualizer, or are dependent on rapid prototyping, there are other engines that might suit you a bit better.

    For Me

Since most of my work is entertainment industry facing, Unreal is in the "must learn" category. The animation systems are robust and powerful, and there is no arguing with the asset development pipeline, especially with the acquisition of Quixel Megascans, the development of MetaHumans, and the development of Nanite in Unreal 5.0. It's not without its frustrations, which for me come down to the material and lighting rebuild loading times.

I don't use it to prototype unless I am experimenting with animation systems. Its size makes it hard to work with GitHub, and there is very little thinking or support for blockchain networks or AI models. They recently launched a Python interpreter, but I've barely found a support network for developing content with it yet.

    Unity


    Unity applications currently account for three billion installs around the world. As a 3d interactive development package, it is THE dominant player. It is also the major player in independent and mid-tier game development. It is fairly standard in interactive design and commercial application development.

That means, if you are an independent game developer, this has been your engine for a while, since Unity holds the controlling interest in this sector. If you are an interactive developer at an agency or working on a location-based experience of some sort, this is also most likely the engine you would use.

Everything in Unity is a Class. You make an asset and attach a script, and then, it's interactive. By making it a Prefab, you can use it again and again. This is the core value of game engines, and this is something Unity has done very well to democratize the technology for creatives. It uses C#, which has a lot of similarities to Java; it can be somewhat difficult for new programmers, but easier to pick up for those with a touch of development experience. Unity, probably inspired by Epic's Blueprints, has begun to integrate Bolt, their own visual scripting system. (Though, I have not used it.)
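The asset-plus-script idea is worth pausing on, because it is the component pattern that nearly every engine builds on. Here is a minimal sketch of that pattern in Python; the names (`GameObject`, `Spin`, `prefab_coin`) are hypothetical for illustration, and Unity's actual API is in C# and works differently in detail.

```python
# A minimal sketch of the component pattern: an asset becomes interactive
# when a script (component) is attached, and a "prefab" is a factory that
# stamps out identical configured copies. Illustrative only; not Unity's API.

class GameObject:
    def __init__(self, name):
        self.name = name
        self.components = []

    def attach(self, component):
        # Attaching a script is what makes the object "do" something.
        component.owner = self
        self.components.append(component)
        return self

    def update(self, dt):
        for c in self.components:
            c.update(dt)

class Spin:
    """A 'script' component that rotates its owner over time."""
    def __init__(self, speed):
        self.speed = speed    # degrees per second
        self.angle = 0.0
        self.owner = None

    def update(self, dt):
        self.angle += self.speed * dt

def prefab_coin():
    """A 'prefab': build the same configured object again and again."""
    return GameObject("Coin").attach(Spin(speed=90.0))

coins = [prefab_coin() for _ in range(3)]
for coin in coins:
    coin.update(1.0 / 60)   # one frame at 60 fps
```

The point is the reuse: define the behavior once, attach it to anything, and instantiate as many copies as the scene needs.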

They are not without their movie-making ambitions, though. A recent deal with New Zealand-based Weta shows their claim to the visual effects and narrative content world. I watch with a keen eye to see what happens.

Unity also has a weird licensing system that kicks in when you actually start making real money. Since their focus is more diverse (the larger interactive market vs. the AAA game market), they tend to have plugins for some of the more innovative trends like augmented reality or machine learning. Unity is also forming initiatives with auto companies, technology visualization initiatives, and developing an ecosystem of educational content. It's clear they are positioning themselves as an interactive artist's tool, more than just a game engine. There will always be games developed with it, but many other things will be built with it as well.

    The Unity Interface is fairly straightforward.

    Pro

Wide adoption. Unity can really be used to do a lot of things, and is a bit of a "Swiss Army knife" for interactive and independent game experiences. There is a very active community, and the learning resources on YouTube and in educational communities are extremely rich.

    Con

Unity's corporate-ness is beginning to show. They are clearly a little wary of Epic's dominance of some sectors, and are also a little "nickel-and-dimey" with their pricing models. Yes, the engine is free for the most part, but the upgrades for AR and machine learning, priced at $50/month, are a little worrisome.

    For Me

    Most of my experience is focused on the Unreal Engine, but I have recently opened my brain to more development work with Unity.

I'm also in the middle of a reinforcement learning obsession, which has me interested in the accessibility of Unity's Machine Learning Agents. Stay tuned.

    Godot


As an advocate of open source, this engine is a darling of mine. It's the "Blender of the game engine world." There is a small team of developers, led by the remarkably talented Juan Linietsky. With a passionate open source community rallying behind it, it is gaining traction at an astounding rate. Right now, it is best for 2d games, though their recent foray into 3d, and the projected development of the Vulkan renderer, will most likely change that.

They use a custom scripting language called GDScript which, much like Python, is an easy-to-read language. Relative to Unity and Unreal, Godot has a much smaller base, but that base is extremely rabid.

    I feel that Godot is uniquely positioned when it comes to innovative gaming and decentralized development. When we start distributing our networks and commerce, are we really going to cut in Unreal or Unity?

Godot, as open source, is the natural choice for teams that are looking to create autonomous or community-driven game systems. The community, not a centralized player, will shape its functional use. (Whatever that turns out to be.)

    The Open Source Engine, Godot shows loads of promise.

    Pro

A great open source community supports learning and creation with the engine. The more the community grows and adapts, the more robust and creative the engine becomes.

GDScript is also very much like Python, and very readable. You can look at it, squint a bit, and for the most part read what is happening. This makes getting very simple stuff up and running in the engine very quick.

    Con

Open source software can also have some sharp edges. Fancy, well-funded software development always tends to look polished, even if the usability is frustrating. Open source tends to be the opposite: functionality is the priority, but that doesn't always mean the usability or interface is quite figured out.

    The 3d content in Godot is still a little early. As a result, there aren’t as many support systems in place.

    For Me

I had a brief and romantic affair with Godot as I played with a number of 2d pixel art prototypes. For learning game development, it is one of my favorites, but for competing with the mainstream big boys above, it is a few years away.

    O3DE

There once was a company that sold books online and turned into a global juggernaut of a tech company. Amazon has decided to get into the high-end engine game, and their entry, while a bit rough and young, may be an interesting one.

Game developer Crytek, facing financial difficulties, licensed their CryEngine to Amazon several years ago. Amazon now had a real time renderer that looked amazing, but the accessibility of the engine left much to be desired if it were to be a mainstream consumer product. They built their own GUI and UX on top and renamed the engine Lumberyard. But recently, Amazon partnered with the Linux Foundation to open source the engine, rechristening it "O3DE."

    Too early, but watching closely

    I have had very little interaction with it. In honesty, I just downloaded the update and have begun poking at it. I mention it because I have been watching the development of Lumberyard for a while, and I really can not discount the efforts of Amazon as they move into the gaming space.

In a lot of ways the engine looks and feels like Unreal or Unity, but Amazon is designing for the future. Without feeling the obligation to support the technical needs of today, they are trying to keep the engine more modular, instituting a system called "Gems."

These allow developers to pick and choose the kinds of plug-ins they need for the experience they are building. In the release I downloaded, they offered motion matching systems, which is surprising since neither of the other two major engines offers it out of the box. It's a forward-looking move.

There haven't been many times when Amazon has seriously entered a space and not, in short order, made itself a major competitor. Having AWS so readily available to integrate into the engine, plus the fact that they have pushed firmly down the open source route, shows that they have major plans to position themselves in whatever metaverse they think is coming.

    It’s not ready for prime time yet, but in 18 months or so, they might be battle ready.

    The Gateway Engines


    For the “just getting started” type, here are some of my recommendations.

I tried game engines a number of times. I bounced out of Unity when I first tried it. I struggled through Xcode development with Cocos, barely understanding the process. I then tried Love2D for a short time, but grew tired of Lua. I was lost in the world of game engines.

    Then, I discovered Construct 2. I used it to build a Metroidvania style game called “Agent Kickback.” It was the first time I felt like I could build the entire thing — on my own.

The software is built around an HTML5 engine, which allowed fast loading times for browser-based content, but the real value was the visual coding interface. In Construct, you snap functions together like Lego pieces. This was the first time I was able to get my mind around concepts like functions, variables, classes, optimization and state machines. It was a huge unlock.

    After this point, I returned to the industry to use engines like Unity and Unreal, and found that I had much more confidence and direction.

    Construct 2 uses a visual coding system that got me going fast.

I have heard that Game Maker offers a similar approachability, but I have never used it. (Its popularity among some students makes me include it here.) These engines are a great way to get into game development. Essentially, they have a low enough technical overhead to let the artists in.

    Once you are in, however, you get it. And when you have reached a point that you want to actually build something a bit more than a starter level, you move on. In that case, move up to one of the engines I have listed above.

    And What about the Web?

    The web is also a wonderful place to play with 3d and game engines.

There are a lot of JavaScript frameworks. For example, Babylon.js is an open source engine from a nifty team at Microsoft. PlayCanvas is a more engine-like interface for developing web-based interactive work, which was recently bought by Snap. Another favorite of mine is A-Frame, a framework that uses HTML tags to place 3d objects, add a little animation, and even do VR!

Doing VR in A-Frame is actually a lot of fun!

The world these engines render is built on a wonderful little library called three.js, a JavaScript core that actually renders 3d objects in the browser. Chances are you have seen 3d content somewhere around the internet, and I'm pretty sure three.js was behind it.

    While these frameworks show a lot of promise, (I, personally, love spending time with some of these programs) my feeling is that they are very much in the early days of development. Could the web be used for high fidelity real time graphics and rendering? Well, that debate might best be suited for another post.


As always, I welcome feedback, thoughts or suggestions. I am always open to discussing with other developers and educators any of the things I list in my writing. You can always reach out to me on Twitter @nyewarburton; my DMs are always open.

    Thanks for reading, and happy game making!

    Links and Resources

    Unreal Engine by Epic Games

    Unity Game Engine

    Godot Game Engine

    Construct Game Engine

    Game Maker Engine

    Three.js

    A Frame

    Babylon.js

    PlayCanvas


  • Keys & State Machines

    Design patterns for character animation are about to get really complex

    Story Telling Moments

    Character animation is hard. And with real time systems, it’s going to get a lot harder.

In order to plan the creation of a character's performance, an animator uses design strategies to construct the motion. Since the days of Walt Disney and his Nine Old Men, animators have relied on a method of quantifying the actions of characters into storytelling moments. These moments are often referred to as "Keys."

By drawing a handful of story moments and then "popping" between them, the animator can explore the timing and readability of the shot. Below is an example from Richard Williams's The Animator's Survival Kit, showing the key drawings of a character walking to a chalkboard and starting to write. The story of the performance can be conveyed in three simple moments.

    It’s sometimes very difficult to determine these keys. Realizing this difficulty and then the energy required to flesh out the action, it’s astounding that we only use the sequence of frames once. Entire movies (which are massive undertakings) are animated once, and then thrown away! The energy the animator puts in is equivalent to the visible experience they get out of it.

    This linear output looks something like this:

    While the art form in this sequential logic form is beautiful, it is highly inefficient.

In a real time system, such as a game engine, the character's actions are reusable. These actions can be changed based on the dynamic nature of the environment. Simply thinking of a character in terms of linear keys is too limiting. We need a way to quantify a character performance beyond its single use.

    Finite State Machines

A state machine is a mathematical design pattern in which an entity exists in bracketed conceptual moments, called "states." States are an architecture that allows a predetermined series of actions to be triggered, provided conditions are met.

For example, a character entity in an engine may be in a state of "walking" until it is confronted with a street to cross, at which point it will change its state to "wait for the light." States aren't just visibly physical, like walking or jumping. Characters can be in a state of hunger, or a state of anger, or a state of existential crisis. When you begin to imagine states for characters, you start to understand how a character performs outside of linear time. Designing keys in this mindset might look something more like this:


Video games already do this in a limited capacity to satisfy the requirements of a character's actions during gameplay. A character will begin in an "idle" state, and when user input commands it to run left, it will change its state to "run left." By changing its state, the engine knows to play an animation clip of the character running to the left.
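The idle/run example maps directly onto a small finite state machine. Here is a minimal sketch in Python; the state and input names are illustrative, not any engine's actual API.

```python
# A tiny finite state machine for the idle / run-left example.
# Each state maps allowed inputs to the next state; an engine would
# play the animation clip associated with whichever state is current.

TRANSITIONS = {
    "idle":      {"press_left": "run_left", "press_right": "run_right"},
    "run_left":  {"release": "idle"},
    "run_right": {"release": "idle"},
}

class CharacterFSM:
    def __init__(self):
        self.state = "idle"   # every character starts in an idle state

    def handle(self, user_input):
        # Change state only if this input is valid for the current state;
        # anything else is ignored, which keeps the character's behavior
        # predictable no matter what the player mashes.
        next_state = TRANSITIONS[self.state].get(user_input)
        if next_state is not None:
            self.state = next_state
        return self.state

fsm = CharacterFSM()
fsm.handle("press_left")   # idle -> run_left
fsm.handle("release")      # run_left -> idle
```

Notice that the "animation" is just a consequence of the state: the machine decides what the character is doing, and the clip follows.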

Increasingly, game engines are providing UI systems that allow for the development and design of character state machines. The example below is taken from Unity's state machine editor, which allows you to import animation clips and arrange them into a pattern that triggers at run time.

    My hunch is that, as real time systems become more and more integral to the animation production process, character work will increasingly become reliant on the development of complex state machines. These massive state machines will not only drive the actions of the character, but the motivational nature of them as well.

    Thanks for Reading. See you next week!

    Here are some references to keep you going on Animation Keys and State Machines.

The Animator's Survival Kit by Richard Williams:

    https://www.amazon.com/Animators-Survival-Kit-Richard-Williams/dp/0571202284

    Game Programming Patterns by Robert Nystrom on State Machines:
    https://gameprogrammingpatterns.com/state.html

    Unity’s Documentation on State Machines:

    https://docs.unity3d.com/Manual/StateMachineBasics.html

    Unreal’s Documentation on Animation Blueprints:

    https://docs.unrealengine.com/en-US/Engine/Animation/AnimBlueprints/index.html


    I began this newsletter to begin a conversation with the computer graphics industry. Should you have thoughts or comments, please feel free to reach out. I can be found on twitter @nyewarburton.


  • Optimization & Leverage

    Optimization & Leverage

    Real Time animation production should start with a change in mindset

    Computer graphics can make gorgeous, high resolution stuff. However, that isn’t always the point.

    The graphics of the game industry evolved on a parallel track to the 3d techniques of the movie business. Instead of focusing entirely on high resolution images for the screen, they focused on reusing things, packing them, and limiting the color palettes. In order to play in real time, the content needed to be optimized. Because of this focus, games have always been looked at as less graphically impressive.

    That’s because most in the movie industry don’t understand the real art of game design.

    Instead of thinking:

    “How do I make this really high quality?”

    Start thinking:

    “What’s the most efficient thing I can build to get the most use out of it?”

    A Space Chicken Showed me the Way

    In 2012, I was an animator, but a novice game designer. After three failed attempts at building a mobile game, I decided to simplify my learning process and rip off what everyone else was doing at that time — build an endless runner.

After roping in some development help, the result was Commander Cluck, a demo of a running and jumping space chicken with a single-touch mechanic. This was a triumph for me as my first game that actually worked. What threw me about the development process, however, was learning and seeing the potential of something called procedural generation.

    I had started to write unique levels out but, after watching the talks and readings from the independent gaming world, I decided to try something new.

I divided a single level into seven variable "chunks" of content and made four different background sets of pieces. Then, I tied the content variables and the speed to the performance of the player. At run time, the chunks were randomly selected and placed based on the changing variables.

The result was a game that generated its levels and dynamically adjusted its difficulty.
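That chunk-based approach can be sketched in a few lines. This is an illustrative Python reconstruction under my own assumptions, not the game's actual code: pre-built chunks are drawn at random, and the player's performance score filters which chunks qualify and scales the scroll speed.

```python
import random

# Illustrative sketch of chunk-based procedural level generation.
# Chunk names, difficulty numbers, and formulas are made up for the example.

CHUNKS = [
    {"name": "flat",      "difficulty": 1},
    {"name": "gaps",      "difficulty": 2},
    {"name": "spikes",    "difficulty": 3},
    {"name": "platforms", "difficulty": 2},
    {"name": "moving",    "difficulty": 4},
    {"name": "enemies",   "difficulty": 4},
    {"name": "gauntlet",  "difficulty": 5},
]

def build_level(performance, length=7, rng=random):
    """Pick `length` chunks no harder than the player's current performance."""
    max_difficulty = max(1, min(5, performance))
    eligible = [c for c in CHUNKS if c["difficulty"] <= max_difficulty]
    return [rng.choice(eligible) for _ in range(length)]

def scroll_speed(performance, base=200.0):
    """Speed also scales with performance, ramping the challenge."""
    return base * (1.0 + 0.1 * performance)

level = build_level(performance=2)
```

Seven hand-built chunks yield effectively endless level layouts: that is the "reusing things mathematically" payoff in miniature.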

    A fairly simple thing to uncover for most college level game developers, but for me it was like figuring out my first animated walk cycle. I remember my mind exploding at the possibilities.

    Because of this odd space chicken game, I had learned the value of reusing things mathematically.

    Leverage to Infinity and Beyond

Engines are collections of workflows (tools) and reusable elements (assets) that the game industry has standardized to leverage these kinds of opportunities. Every engine comes with the ability to generate levels, set up UI, and create a player controller, and many have things like gravity, starter templates, or scoring systems.

All of these developments allow you to get up to speed and experiment with the game content much faster. As you continue to develop your processes, engines allow you to build more tools for duplicating work, offsetting it, and (most importantly) enhancing it. The better the infrastructure below you gets, the faster you can improvise.

Experimenting with content in engines is about efficiently leveraging optimized content. This is the genesis of creating compelling procedurally generated content, a subject I will be speaking about at length in this Nytrogen newsletter.

Animators should begin to internalize the optimization & leverage mindset: don't think of an animated story as a linear progression, but as a collection of animated pieces. These pieces can be reused and assembled in mathematical ways that I'll soon be discussing right here.

    Thanks for reading. See you next week.

    Some reading on:

    Dynamic Difficulty Adjustment: https://www.hindawi.com/journals/ahci/2018/5681652/

    Procedural Generation: https://thenewstack.io/new-crop-games-built-procedurally-generated-universes/

    and a 10 year old blog post about randomness in games –

    https://boingboing.net/2009/10/12/my-generation-how-in.html

    I began this newsletter to begin a conversation with the computer graphics industry. Should you have thoughts or comments, please feel free to reach out. I can be found on twitter @nyewarburton.


  • SCAD Pro – Delta TechOps Game Design

    Undergraduate course, Savannah College of Art and Design, SCADPro, 2021

    A mobile game design class partnership with SCADPro and Delta Airlines TechOps.


  • ITGM 236: Core Tech

    Studio, Savannah College of Art and Design, Interactive Design and Game Development, 2021

    This is a core curriculum course on the structure of game engines and blueprinting for the Unreal Engine.

    I used the curriculum developed by Aram Cookson and adapted it to define two projects for the students. The first was a first person camera dungeon explorer. The second project was a top down shooter with a playable character.

    This is a selection of works from the Spring 2021 Final Projects:

    This is work from Adria Graham, Griffin Long, Church Leu and Pierce Rudman.

This class is being adapted to include new content on material shaders and basic AI systems.