Virtual Beings: Responsive Characters with Intent and Action

#VirtualBeings

Last week, the remnants of Oculus Story Studio (rebranded as Fable Studio) hosted a small conference to show the world the concept of an animated character driven by predictive models (AI). Despite the small group, Unreal Engine CTO Kim Libreri was in attendance, along with a few other notables. They came up with a fancy hashtag and everything! #virtualbeings

Here’s their video teaser:

I hadn’t heard of this Virtual Beings Summit until recently, so I was a bit bummed I missed it. It’s a concept I’ve been thinking about for a while now, and I would have loved to hear what others in the space are working on.

The propagation of voice recognition, chatbots, micro-functions, cloud computing, and the like has enabled an ecosystem for virtual characters to thrive. Yet we barely have them among us. I think it’s because the design of a virtual character straddles two broad development areas.

I will call these two development areas “Intent” and “Action.”

Intent

Intent is essentially chatbot-developer speak for what the user wants from an interaction with a virtual character. When I initiate an interaction with a human or a bot, I generally have a motivation for speaking with them. I want to ask for tomorrow’s weather forecast, get help on my calculus homework, or find out whether they prefer the Rock or Stone Cold.

In order for a virtual being to react to a user properly, it first has to learn what the user wants. The development of intent detection is a landscape filled with machine learning nuts showing off their fancy computer vision algorithms and face detection classifiers. Intent can also be broken down by context and given long- and short-term properties. The subject gets complicated very quickly.
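To make the idea concrete, here is a minimal sketch of text-based intent classification. The intent labels, training utterances, and choice of scikit-learn are my own assumptions for illustration; a production system would use far more data, richer models, and context tracking.

```python
# A minimal intent classifier sketch. The labels and training utterances
# below are hypothetical; real systems train on far more data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

utterances = [
    "what's the weather like tomorrow",
    "will it rain this weekend",
    "help me with my calculus homework",
    "how do I solve this integral",
    "hello there",
    "hey, how are you",
]
intents = [
    "get_weather", "get_weather",
    "homework_help", "homework_help",
    "greeting", "greeting",
]

# Bag-of-words features feeding a simple linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(utterances, intents)

print(model.predict(["hello, what's the weather tomorrow?"])[0])  # likely "get_weather"
```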

Action

On the other side of the system, an action is an event (or function) that is triggered once the virtual character has classified the intent. The action is powered by entities: recognized components in the user’s communication.
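As a toy illustration of entity recognition (the entity names and patterns below are my own, not anything from the summit), a first pass can be as crude as pattern matching:

```python
import re

# Toy entity extractor: pull recognizable components out of an utterance.
# Real systems use trained NER models instead of hand-written patterns.
ENTITY_PATTERNS = {
    "date": re.compile(r"\b(today|tomorrow|tonight|weekend)\b", re.IGNORECASE),
    "subject": re.compile(r"\b(calculus|algebra|physics)\b", re.IGNORECASE),
}

def extract_entities(text):
    """Return a dict mapping entity name to the matched value."""
    entities = {}
    for name, pattern in ENTITY_PATTERNS.items():
        match = pattern.search(text)
        if match:
            entities[name] = match.group(0).lower()
    return entities

print(extract_entities("Help me with my calculus homework tomorrow"))
# {'date': 'tomorrow', 'subject': 'calculus'}
```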

If a user greets the character, for example, the classified intent would trigger a series of functions that move the character to say “hello.” Actions have as much complexity as intents. In the short term, these actions will be “canned,” using pre-animated pieces of content. Undoubtedly, these actions will become increasingly generated in real time, allowing the character to produce the corresponding response on the fly.
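Gluing intent to action can then be as simple as a dispatch table from intent labels to “canned” action functions. Everything below is a hypothetical sketch of that shape (the stubs stand in for a real classifier, entity extractor, and animation engine), not Fable’s actual architecture:

```python
# Hypothetical intent -> action dispatch. The classifier and entity
# extractor are stubbed so this sketch runs on its own.
def classify_intent(text):
    return "get_weather" if "weather" in text.lower() else "greeting"

def extract_entities(text):
    return {"date": "tomorrow"} if "tomorrow" in text.lower() else {}

def play_greeting(entities):
    # Stand-in for triggering a pre-animated "hello" clip.
    print("character: waves and says 'hello'")

def report_weather(entities):
    when = entities.get("date", "today")
    print(f"character: looks up and reads the forecast for {when}")

ACTIONS = {
    "greeting": play_greeting,
    "get_weather": report_weather,
}

def respond(utterance):
    intent = classify_intent(utterance)
    entities = extract_entities(utterance)
    ACTIONS.get(intent, play_greeting)(entities)  # fall back to a safe default

respond("what's the weather like tomorrow?")
# character: looks up and reads the forecast for tomorrow
```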

Bridging the Worlds

Around 2 minutes and 30 seconds into the video at the top of the post, Edward Saatchi (the CEO of Fable) says that there is a division between the AI community and the filmmaker community. This is true, and the split between Intent and Action maps cleanly onto the knowledge chasm that will need to be crossed before virtual characters propagate widely. Animators are going to have to learn how to check their code into GitHub, and engineers are going to have to learn to carry a sketchbook. Once both sides are communicating, the rest should come with time and focus.

Let’s leave the ethics of all of it for another time!

That’s it for this time. Thanks for reading.

References and Links:

Fable Studio: https://fable-studio.com/

Oculus Story Studio: https://www.oculus.com/story-studio/

Virtual Beings Summit: https://www.virtual-beings-summit.com/

Chatbot vocabulary (like User Intent): https://chatbotsmagazine.com/chatbot-vocabulary-10-chatbot-terms-you-need-to-know-3911b1ef31b4

Dwayne Johnson (the Rock): https://en.wikipedia.org/wiki/Dwayne_Johnson
