
Background

What are "virtual humans"?

In short, virtual humans are pieces of software that look like, act like, and interact with humans but exist in virtual environments. They are any artificial cognitive system that achieves (or comes near to) human-level performance. Just as human cognitive systems are immensely complex, these artificial intelligence systems can include a vast array of features, from "speech recognition, natural language understanding and generation, dialogue modeling, nonverbal communication, task modeling, social reasoning, and emotion modeling"[1] to advanced graphics and behavioral patterns. The more human-like features a system contains, the more convincingly human it appears. The goal for most virtual humans is to make them believable, indistinguishable from a real human. [2] We can consider different levels of virtual humans using these characteristics. Animated characters in films can be considered virtual humans since they look like humans, but they lack interaction capabilities. Chat-bots, used frequently these days, especially on online shopping sites, have interaction capabilities but don't look like humans. Video game characters offer some level of both: limited interaction capabilities as well as limited visuals. With the huge amount of processing power available to us now, we are beginning to push these levels further and create virtual characters that we can interact with fully and that still look remarkably realistic.
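As a very rough illustration of how the components quoted above might fit together, here is a minimal sketch (in Python) of a single conversational turn. Every function and class name in it is a hypothetical placeholder, not the design of any real system cited on this page.

```python
# A minimal sketch, assuming hypothetical stub components; it only shows how
# the quoted pieces (speech recognition, language understanding, dialogue
# modeling, emotion modeling, nonverbal behavior) could chain together
# for one conversational turn.

from dataclasses import dataclass


@dataclass
class Turn:
    text: str      # what the character says (natural language generation)
    gesture: str   # nonverbal communication, e.g. a nod or a smile
    emotion: str   # output of an emotion model, could drive facial animation


# Hypothetical stubs so the sketch runs end to end.
def recognize_speech(audio: bytes) -> str: return "hello there"
def understand(text: str) -> dict: return {"intent": "greet"}
def plan_dialogue(meaning: dict) -> str: return "Hello! Nice to meet you."
def appraise_emotion(meaning: dict) -> str: return "friendly"
def pick_gesture(reply: str, emotion: str) -> str: return "smile and wave"


def respond(user_audio: bytes) -> Turn:
    """One interaction turn, stepping through the components listed above."""
    text = recognize_speech(user_audio)     # speech recognition
    meaning = understand(text)              # natural language understanding
    reply = plan_dialogue(meaning)          # dialogue / task modeling
    emotion = appraise_emotion(meaning)     # emotion + social reasoning
    return Turn(text=reply, gesture=pick_gesture(reply, emotion), emotion=emotion)


if __name__ == "__main__":
    print(respond(b"raw microphone audio would go here"))
```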

Who are some examples?
White man in a black tux, top hat and white gloves juggles a red cone, a blue cube, and a green sphere. He stands on an empty checkered floor with a sunset over stars in the sky.
Screenshot of a conversation with the ELIZA chat-bot.

ELIZA

One of the first chat-bots ever created, ELIZA was built at MIT by Joseph Weizenbaum, and one of its most popular uses was to simulate a psychiatrist you could talk to [3]. To see a transcript of a sample conversation, click here.

Adam Powers, the Juggler

A computer animation released by Triple-I (Information International, Inc.), Adam Powers was one of the first anthropomorphic computer-generated characters and one of the first to use motion capture [4].

New Dimensions in Testimony

Sponsored by the University of Southern California's Shoah Foundation, this program captures interviews with witnesses to genocide and produces a system that allows people to have full conversations with them [5].

Four thumbnails of survivors sitting in a red armchair against an entirely green background. Three of the images have complex camera equipment visible.
1966 (ELIZA) · 1981 (Adam Powers, the Juggler) · 2015 (New Dimensions in Testimony)
How do we evaluate these characters?

Alan Turing:

Alan Turing was a British mathematician and scientist who, during the Second World War, most famously worked as a code-breaker at Britain's Bletchley Park. While he was instrumental in breaking the code behind Germany's Enigma machine, he was also a groundbreaking theoretical computer scientist who hypothesized extensively about the future of computing. Among his ideas was the possibility of computers as intelligent as humans, an idea we now see realized around us. [6]


The Turing Test:

In 1950, when computers were still in their infancy, Turing created the Turing Test, a test to determine whether an agent is a human or a computer. In other words, it is a test of how indistinguishable a computer's intelligence is from a human's. In the standard interpretation of the Turing Test, a human (the interrogator) is allowed to communicate with another human as well as a computer through a terminal, and the interrogator attempts to identify which is the computer. "The computer attempts to achieve the deception by imitating human verbal behavior. If an interrogator does not make the right identification... then the computer passes the test" [7]. This 'imitation game', as Turing dubbed it, is how virtual characters are evaluated even today. Although there is some dispute about whether this interpretation is actually what Turing meant in his original paper, it remains the gold standard in the artificial intelligence community.
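To make the setup concrete, here is a toy sketch (in Python) of the standard interpretation described above: an interrogator exchanges text with two hidden respondents, one human and one program, and then guesses which is which. The respondent functions are hypothetical stand-ins, not any real system.

```python
import random


def human_respondent(question: str) -> str:
    # In a real test this would be a person at another terminal.
    return input(f"[hidden human] {question}\n> ")


def machine_respondent(question: str) -> str:
    # Hypothetical placeholder for the program under test.
    return "That's an interesting question. Why do you ask?"


def run_imitation_game(num_questions: int = 3) -> bool:
    """One round of the game; returns True if the machine 'passes',
    i.e. the interrogator fails to identify it."""
    # Randomly hide which respondent sits behind each anonymous label.
    assignment = {"X": human_respondent, "Y": machine_respondent}
    if random.random() < 0.5:
        assignment = {"X": machine_respondent, "Y": human_respondent}

    for _ in range(num_questions):
        question = input("Interrogator, ask a question: ")
        for label in ("X", "Y"):
            print(f"{label}: {assignment[label](question)}")

    guess = input("Which respondent is the machine, X or Y? ").strip().upper()
    machine_label = "X" if assignment["X"] is machine_respondent else "Y"
    # A wrong identification is exactly the passing condition quoted above.
    return guess != machine_label


if __name__ == "__main__":
    print("The machine passes." if run_imitation_game() else "The machine was identified.")
```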


The Loebner Prize:

Originally sponsored by Hugh Loebner, an American inventor, this competition has been held every year since 1991. Each year, virtual humans take the Turing Test with a panel of judges, and the one judged "the most human" comes away with the Loebner Prize. While the general public sees this as a way of finding the best virtual humans we have to date, the Loebner Prize draws a lot of scorn from the academic community. Researchers tend to view it as a publicity stunt and see the judges as unqualified, the questions as "whimsical", and the interactions as far too short (2.5 minutes on average) to support accurate determinations. [8] [9]


However you think of the Turing Test and the Loebner Prize, they remain the only mainstream ways of evaluating and comparing virtual humans.

Black and white portrait of Alan Turing
Alan Turing
Loebner Prize gold medal. Gold disk with Hugh Loebner's name and portrait engraved in the middle.
Loebner Prize Gold Medal

Where are we now?

From ELIZA, the first chat-bot, to the New Dimensions in Testimony project at USC that documents Holocaust survivors' testimony in 3D, we have come a long way. The first systems were only text-based and had very limited vocabularies; they got confused easily and could not discuss many topics. Now, chat-bots use the internet to access a nearly unlimited range of topics and vocabulary, and they use machine learning to develop a more realistic tone by watching humans talk to each other. (See the Theory section for more details.) We can also combine this linguistic prowess with advanced computer-generated imagery (CGI) to create virtual characters that not only sound human but look human too. Interacting with these characters reveals new rules for social behavior, along with some unsettling reactions. These modern systems are hugely effective for a variety of applications (see the Applications section for more) but still have room to grow. Real-time rendered graphics, the kind necessary for true social interaction, are still too computationally expensive for store-bought systems to handle, so the virtual humans we see still look visibly virtual. More research is also needed into the biases that can be built into these virtual humans. These computer-generated characters will act the way they are programmed to act, and the more public they become, the more power they have to effect change, both positively and negatively. If developers are not aware of their own biases, the virtual humans they develop will reflect those biases as well. Overall, progress in the field of virtual humans has grown exponentially and continues today. But we are still in the infancy of this research, and I predict more advancements will soon make these characters a part of our everyday lives.
