1. Why Autonomous Virtual Characters?
Virtual characters, also known as virtual actors, have become very popular in the last ten years, mainly through three-dimensional (3D) movies and games. In movies, they now have very realistic physical and emotional characteristics, including their facial expressions, hair, clothes, and motions. But everything about these characters is rendered offline; they have no minds of their own.
By contrast, games such as Quake and Tomb
Raider create virtual worlds where the characters dynamically
interact with the users.
Moreover, crowds of virtual characters have appeared not only in movies but also in games. Such characters are autonomous.
Autonomous virtual characters (AVCs) are not just for movies
and games. In the future, they can be at the heart of the
simulation of activities for a variety of purposes, including
education and training, treatment of psychological problems,
emergency preparedness, and socialization. Imagine the following scenarios:
- A user is being trained to perform some complex task, such as repairing a copy machine. He uses an interactive user manual, where an autonomous character plays an expert, showing him how to proceed. At every stage, the user is able to see what to do next, even when mistakes are made.
- A therapist is helping a patient overcome a fear of public speaking. To overcome this fear, the patient performs while immersed in a virtual environment consisting of a seminar room and a virtual audience, which can react to the user in an autonomous way. The therapist can choose the type of virtual audience (for instance, one that is aggressive or sexist) that will result in the most effective treatment for the patient.
- A user is learning basic life support (BLS) procedures. She is immersed in a virtual setting and discovers a victim lying on the ground. She has to give him BLS through her proxy, a virtual assistant (VA). The user navigates the scene, assesses the situation, and makes decisions by issuing natural voice commands. The VA waits for commands and executes the actions. If the user's commands are correct, the victim recovers. If the user issues incorrect commands, the VA may refuse to harm the victim; in such situations, it may prompt the user to retry or suggest an alternative.
- A real person is socializing with others in a virtual community, such as Second Life, via an avatar, even when he is not actively present: his AVC can act on his behalf, following instructions he provided in advance.
2. Properties of Autonomous Virtual Characters
Autonomy is generally the quality or state of being self-governing. Rather than acting from a script, an AVC is aware of its changing virtual environment and makes its own decisions in real time in a coherent and effective way. An AVC should appear spontaneous and unpredictable, making the audience believe that the character is really alive and has a will of its own.
To be autonomous, an AVC must be able to perceive its environment
and decide what to do to reach an intended goal. The decisions
are then transformed into motor control actions, which are
animated so that the behavior is believable. Therefore, an
AVC's behavior consists of continuously repeating the following sequence: perception of the environment, action selection, and action execution.
The problem with designing AVCs is determining how to decide
on the appropriate actions at each point in time, to work
toward the satisfaction of the current goal, which
represents the AVC's most urgent need. At the same time,
there is a need to pay attention to the demands and opportunities
coming from the environment, without neglecting, in the long
term, the satisfaction of other active needs.
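The perceive-decide-act cycle and the need-driven action selection described above can be sketched as follows. This is a minimal illustration, not any particular AVC system; the need names, growth rates, and actions are invented for the example.

```python
# Minimal sketch of an AVC's perceive-decide-act cycle driven by
# competing needs. Need names and rates are illustrative assumptions.

class AVC:
    def __init__(self):
        # Each need has an urgency level in [0, 1]; higher is more urgent.
        self.needs = {"safety": 0.1, "hunger": 0.3, "social": 0.2}

    def perceive(self, environment):
        # Environmental events raise the urgency of related needs.
        if "threat" in environment:
            self.needs["safety"] = 1.0

    def select_action(self):
        # Serve the most urgent need; other needs keep accumulating
        # and will eventually win the selection in later cycles.
        urgent = max(self.needs, key=self.needs.get)
        return f"satisfy_{urgent}"

    def act(self, action):
        # Acting on a need resets it; unattended needs grow slowly,
        # so no active need is neglected in the long term.
        need = action.removeprefix("satisfy_")
        self.needs[need] = 0.0
        for other in self.needs:
            if other != need:
                self.needs[other] = min(1.0, self.needs[other] + 0.05)

avc = AVC()
avc.perceive({"threat"})
action = avc.select_action()   # safety is now the most urgent need
avc.act(action)
print(action)                  # satisfy_safety
```

Running the loop repeatedly makes the character alternate between needs as their urgencies grow and reset, which is the balancing behavior the paragraph above describes.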
There are four properties that determine how AVCs make their decisions: perception, adaptation and intelligence, memory, and emotions.
Perception. Perception of the elements
in the environment is essential for AVCs, as it gives them
an awareness of what is changing. An AVC continuously modifies
its environment, which, in turn, influences its perceptions.
Therefore, sensory information drastically influences AVC
behavior. This means that we cannot build believable AVCs
without considering the way they perceive the world and each
other. Imagine if, in the real world, everyone could see
the whole world, and could hear everything. This is the situation
in virtual reality, so the issue for AVCs is how to filter
this information correctly. To realize believable perception,
AVCs should have sensors that simulate the functionality
of their organic counterparts, mainly for vision, audition,
and tactile sensation. These sensors should be used as a
basis for implementing everyday behaviors, such as visually
directed locomotion, responses to sounds and utterances,
and the handling of objects.
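The filtering problem above can be illustrated with two toy sensor functions: a vision cone and a hearing test that decide which objects an AVC actually perceives. This is a sketch under simple 2D assumptions; the field-of-view, range, and threshold values are arbitrary.

```python
import math

# Sketch of perceptual filtering: an AVC senses only objects inside
# its vision cone or hearing range, rather than the whole virtual
# world. All parameter values are illustrative assumptions.

def visible(agent_pos, agent_dir, obj_pos, fov_deg=120.0, max_dist=20.0):
    """True if obj_pos lies in the agent's vision cone.
    agent_dir is assumed to be a unit direction vector."""
    dx, dy = obj_pos[0] - agent_pos[0], obj_pos[1] - agent_pos[1]
    dist = math.hypot(dx, dy)
    if dist == 0.0 or dist > max_dist:
        return dist == 0.0
    cos_a = max(-1.0, min(1.0, (dx * agent_dir[0] + dy * agent_dir[1]) / dist))
    return math.degrees(math.acos(cos_a)) <= fov_deg / 2.0

def audible(agent_pos, sound_pos, loudness, threshold=0.05):
    """True if the sound's intensity at the agent exceeds a threshold."""
    dist = math.hypot(sound_pos[0] - agent_pos[0], sound_pos[1] - agent_pos[1])
    return loudness / (1.0 + dist * dist) >= threshold

# An object straight ahead is seen; one behind the agent is not.
print(visible((0, 0), (1, 0), (5, 1)))    # True
print(visible((0, 0), (1, 0), (-5, 0)))   # False
```

Behaviors such as visually directed locomotion can then be built on top of these filters, so the AVC reacts only to what it has plausibly sensed.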
Adaptation and intelligence. Adaptation
and intelligence define how the character is capable of reasoning
about what it perceives, especially when unpredictable events
happen. An AVC should constantly choose the best action so
that it can survive in its environment and accomplish its
goals. As the environment changes, the AVC should be able
to react dynamically to new elements, so its beliefs and
goals may evolve over time. An AVC determines its next action
by reasoning about what it knows to be true at a specific
time. Its knowledge is decomposed into its beliefs and internal
states, goals, and plans, which specify a sequence of actions
required to achieve a specific goal. When simulating large
groups or communities of AVCs, it is possible to use bottom-up
solutions that use artificial life techniques, rather than
top-down, plan-based approaches, such as those that are common
in artificial intelligence. This allows new, unplanned behaviors to emerge.
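A bottom-up, artificial-life rule can be illustrated in the spirit of Reynolds' boids: each agent follows only a local steering rule, yet coherent group behavior emerges without any global plan. The cohesion-only update below is a deliberately minimal sketch, and the step weight is an arbitrary choice.

```python
# Sketch of a bottom-up artificial-life rule: each agent steers toward
# the center of its group (cohesion). No agent holds a global plan,
# yet the group converges; the weight 0.1 is an arbitrary choice.

def cohesion_step(positions, weight=0.1):
    cx = sum(p[0] for p in positions) / len(positions)
    cy = sum(p[1] for p in positions) / len(positions)
    return [(x + weight * (cx - x), y + weight * (cy - y))
            for x, y in positions]

agents = [(0.0, 0.0), (10.0, 0.0), (5.0, 10.0)]
for _ in range(50):
    agents = cohesion_step(agents)
# After many steps all agents cluster near the centroid (5, 10/3).
```

A full boids-style model would add separation and alignment rules, but even this single rule shows how group-level behavior can emerge from purely local decisions.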
Memory. It is necessary for an AVC to have
a memory so that similar behaviors can be selected when predictable
elements reappear. Memory plays an important role in the
modeling of autonomy, as actions are often decided based
on memories. But imagine an AVC in a room containing 100
different objects. Which objects can be considered memorized
by the virtual character? It is tempting to decide that whenever
an object is seen by the AVC, it should be stored in its
memory. But if you consider humans, nobody is able to remember
every single object in a room. Therefore, the memory of a
realistic AVC should not be perfect either.
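One simple way to model this kind of imperfect memory, sketched below under illustrative assumptions (the capacity, decay rate, and threshold are invented), is to store each seen object with a salience score that fades over time:

```python
# Sketch of an imperfect AVC memory: observed objects are stored with
# a salience score that decays each time step, and weak traces are
# forgotten. Capacity, decay, and threshold are illustrative choices.

class Memory:
    def __init__(self, capacity=5, decay=0.8, threshold=0.1):
        self.capacity = capacity
        self.decay = decay
        self.threshold = threshold
        self.traces = {}          # object name -> salience in (0, 1]

    def observe(self, obj, salience=1.0):
        self.traces[obj] = salience
        if len(self.traces) > self.capacity:
            # Forget the weakest trace when capacity is exceeded.
            del self.traces[min(self.traces, key=self.traces.get)]

    def tick(self):
        # Unrehearsed memories fade and eventually disappear.
        self.traces = {o: s * self.decay for o, s in self.traces.items()
                       if s * self.decay >= self.threshold}

    def recalls(self, obj):
        return obj in self.traces

mem = Memory()
mem.observe("red_vase")
for _ in range(15):
    mem.tick()
print(mem.recalls("red_vase"))   # False: the trace has decayed away
```

With a bounded capacity and decaying traces, an AVC in the 100-object room remembers only the most salient and most recently seen objects, much as a person would.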
Emotions. The believability of an AVC is
made possible by the emergence of emotions clearly expressed
at the right moment. An
emotion is an emotive reaction to a perception that induces
a character to assume a physical response, facial expression,
or gesture, or select a specific behavior. The apparent emotions
of an AVC and the way it reacts are what give it the appearance
of a living being with needs and desires. Without them, an
actor would just look like an automaton. Apart from making them appear more realistic, AVCs' visible emotions can provide designers with a direct way of influencing the user's emotional state.
3. The Impact of Research on Autonomous Virtual Characters
The four properties above are very important in creating
believable autonomous characters. Modeling these properties
accurately in real time requires research efforts from various
branches of computer science. We emphasize a few of them below.
Behavior Planning. Behavior planning involves
the selection of appropriate actions for the AVC to execute.
These decisions should reflect the individual characteristics
of the AVC, including its intelligence, its motivations,
and its social behavior. Besides being individual, action
selection architectures for AVCs should be both reactive
and proactive to be efficient in real time. The transitions
between reactions and planning should be rapid and continuous
in order to elicit coherent and appropriate behaviors in
changed or unexpected situations. The design of a behavior
planner satisfying these criteria is a research challenge.
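One common arbitration scheme, sketched here under invented event and action names, lets a reactive layer preempt the current deliberative plan whenever an urgent stimulus is perceived, and otherwise continues the plan:

```python
from collections import deque

# Sketch of a hybrid behavior planner: a deliberative plan (a queue of
# actions) runs step by step, but a reactive rule preempts it when an
# urgent event is perceived. Event and action names are hypothetical.

REFLEXES = {"falling_object": "dodge", "fire": "flee"}

def next_action(plan, percepts):
    # Reactive layer: urgent percepts override the plan immediately.
    for event in percepts:
        if event in REFLEXES:
            return REFLEXES[event]
    # Deliberative layer: otherwise continue the current plan.
    return plan.popleft() if plan else "idle"

plan = deque(["walk_to_door", "open_door", "exit"])
print(next_action(plan, set()))               # walk_to_door
print(next_action(plan, {"falling_object"}))  # dodge (plan preserved)
print(next_action(plan, set()))               # open_door
```

Because the plan queue is untouched during a reactive override, the character resumes its goal seamlessly after the interruption, giving the rapid, continuous transitions the paragraph above calls for.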
Virtual Sensor Design. It is tempting
to simulate perception by directly retrieving the location
of each perceived object straight from the environment. For
realistic AVCs, this is not appropriate; instead, virtual
sensors are used to simulate an AVC's perception of its virtual
environment. What is important is the functionality of a
sensor and how it filters the information flow from the environment.
It is not necessary or efficient to model sensors with biological
accuracy. Therefore, virtual eyes may be represented by a
Z-buffered color image representing a character's vision.
A virtual nose, or other tactile point-like sensors, may
be represented by a simple function evaluating the global
force field at the sensor's location. The virtual ear of
a character may be represented by a function returning the
ongoing sound events. With these virtual sensors, AVCs should
be able to perceive the virtual world in a way that is very
similar to the way they would perceive the real one.
Research advances introduce proprioception--the unconscious perception of movement and spatial orientation arising from stimuli within the body itself--into a unified perception
concept for an AVC in a situated virtual world. The motivation is to achieve persistence and to obtain a cognitive map of the perceived virtual world.
It is also possible to integrate a perception approach by including the faculty
of prediction, for example, the orientation of the AVC’s attention.
In addition to perceiving the virtual world, some AVCs have
to be aware of certain events and characteristics of the
real world. It takes real devices, such as cameras, microphones,
and haptic devices, to capture this information and bring
it to the virtual characters. The information they provide
must be integrated with that of virtual sensors.
Modeling Emotions. To allow AVCs
to respond emotionally to a situation, they could be equipped
with a computational model of emotional behavior. Emotionally
related behavior, such as facial expressions and posture,
can be coupled with this computational model, which can be
used to influence their actions. The development of a good
computational model is a challenge.
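A minimal computational emotion model can be sketched as follows: appraised events shift a single valence value, which decays toward neutral over time and drives the displayed expression. The event appraisals and thresholds are invented for illustration; real models (e.g., appraisal-based ones) are far richer.

```python
# Sketch of a minimal computational emotion model: appraised events
# shift a valence value, which decays toward neutral and selects the
# displayed expression. Appraisal values are illustrative assumptions.

APPRAISAL = {"compliment": 0.6, "insult": -0.7, "goal_achieved": 0.8}

class EmotionModel:
    def __init__(self, decay=0.9):
        self.valence = 0.0        # -1 (distressed) .. +1 (joyful)
        self.decay = decay

    def appraise(self, event):
        shift = APPRAISAL.get(event, 0.0)
        self.valence = max(-1.0, min(1.0, self.valence + shift))

    def tick(self):
        self.valence *= self.decay   # emotions fade without new stimuli

    def expression(self):
        if self.valence > 0.3:
            return "smile"
        if self.valence < -0.3:
            return "frown"
        return "neutral"

em = EmotionModel()
em.appraise("insult")
print(em.expression())   # frown
for _ in range(20):
    em.tick()
print(em.expression())   # neutral: the emotion has decayed
```

Coupling `expression()` to the facial animation layer gives the emotionally related behavior described above, with emotions appearing at the moment of appraisal and fading naturally afterward.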
Modeling Attention and Gaze. When we walk through a city, we look at other people, at objects, or even at nothing in particular. An important aspect that can greatly enhance the realism of crowd and group animation is for characters to be aware of their environment and of the other characters in it. When adding attention behaviors to crowds, we are confronted with two issues: detecting the points of interest the characters are looking at, and editing the characters' motions to model the gaze behavior.
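The first of these two issues, detecting points of interest, can be sketched with a simple scoring function: nearby and fast-moving candidates attract more attention. The weights and candidate data below are illustrative assumptions, not any published gaze model.

```python
import math

# Sketch of gaze-target selection for crowd characters: each candidate
# point of interest is scored by proximity and speed (moving things
# draw attention), and the character looks at the highest-scoring one.
# The scoring formula and weights are illustrative assumptions.

def pick_gaze_target(agent_pos, candidates):
    """candidates: list of (name, position, speed) tuples."""
    def score(candidate):
        _, pos, speed = candidate
        dist = math.hypot(pos[0] - agent_pos[0], pos[1] - agent_pos[1])
        return speed + 1.0 / (1.0 + dist)   # near and fast-moving wins
    return max(candidates, key=score)[0]

candidates = [("statue", (2.0, 0.0), 0.0),
              ("jogger", (6.0, 1.0), 2.5)]
print(pick_gaze_target((0.0, 0.0), candidates))   # jogger
```

The second issue, editing the characters' motions so the head and eyes actually turn toward the chosen target within joint limits, is the harder animation problem.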
Modeling Memory-based Emotions. Some researchers mathematically model emotions, behavior, mood, and personality for virtual characters. These models can be used to create an emotionally responsive AVC. However, such models lack the critical component of memory--a memory of not just events but also of past emotional interaction. A memory-based emotion model is needed to take into account the memory of past interactions in order to build long-term relationships between the virtual character and users.
4. Ethical concerns
There are ethical concerns regarding the use of AVCs. One
such concern involves decisions made by real people that
are based on the advice of autonomous characters. Autonomy
means that the AVC makes decisions based on its understanding of the environment and of the rules of the surrounding world.
In a simulated world, how can we be sure that this information
corresponds to reality? As with any computer program, AVCs
are not immune to bugs or tampering. Their advice on critical
matters should always be validated.
Another ethical concern is the fact that an autonomous virtual
human may be indiscernible from a real existing person. For
example, a terrorist group could create a TV spot showing
democratic leaders promoting nondemocratic values. To avoid
misleading and manipulating the public, we will need to use
technology, such as watermarking, to reliably indicate to
the viewer that the human is virtual.
Some autonomous characters promote violence, terrorism,
abuse, or crime, in the context of games or other interactive
situations. If their behavior is realistic, they are likely
to exert a strong negative influence, even when they are
known to be virtual. New laws and regulations will have to
be developed in this area.
When avatars and AVCs are together in the same virtual community, it becomes very difficult for a member of the community to know whether he or she is interacting with an avatar or an AVC. In the near future, when AVCs can replace avatars while the user is away, the problem could become even trickier.
5. Scalability and mobility
With the advent of wearable devices, advanced PDAs, and smartphones,
AVCs can be with people all the time to guide and help them. Moreover,
with light see-through head-mounted displays, it becomes easy to add virtual
characters to a real scene. Applications of this technology include showing people how to use electronic devices or how to find their way through a particular area without the use of a map.
6. Possibilities and challenges
AVCs will be essential in many applications of the future.
They will be our playmates, teachers, therapists, and pets.
Because of their logic and memory, autonomous characters
will bring skills and abilities that complement those of
humans, rather than replace them. The next generation--children
and young adults--is very open to virtual communities and
e-learning, and is expected to have no problems interacting
with virtual humans.
How far are we from such a situation? Current AVCs are becoming
more realistic in terms of their appearance and animation,
and they are able to perceive the virtual world, and the
people living in that world. They may act based on their
perception in an autonomous manner. However, their intelligence
is constrained and limited.
In the near future, we may expect to have AVCs that are
able to learn or understand a few situations, due to the
development of new methods of artificial intelligence. However,
a great deal of research effort is still needed to reach
the point at which AVCs can behave autonomously and interact
naturally, like real creatures. This is especially the case
for simulating autonomous virtual humans. True imitation
of the full complexity of human behavior may not be accomplished
even by the end of this century.
Feb 14 2006
Last updated: Aug 19 2009
Center for Human Modeling and Simulation:
investigates computer graphics modeling and animation
techniques for embodied agents, virtual humans,
and their applications.
MIRALab: a pluridisciplinary lab at the University of Geneva that is working on virtual human simulation and virtual worlds.
Reynolds Engineering & Design:
a Web site created by Craig Reynolds that contains
links to research in computer graphics and animation.
Virtual Reality Lab (VRLab):
a laboratory that is mainly involved in the modelling
and animation of three-dimensional inhabited virtual environments.
Making them remember: emotional virtual characters with memory Kasap, Z.; Benmoussa, M.; Chaudhuri, P.; Magnenat-Thalmann, N. IEEE Computer Graphics and Applications 29, 2 (2009), 20-29.
Simulating gaze attention behaviors for crowds Grillon, H.; Thalmann, D.
Computer Animation and Virtual Worlds 20, 3-4 (2009), 111-119.
Intelligent virtual humans with autonomy and personality: state-of-the-art Kasap, Z.;
Magnenat-Thalmann, N. Intelligent Decision Technologies 1, 1-2 (2007), 3-15.
An integrated perception
for autonomous virtual agents: active and predictive perception Conde, T.; Thalmann, D.
Computer Animation and Virtual Worlds 17, 3-4, (2006), 457-468.
Generic personality and emotion simulation for conversational agents Egges, A.; Kshirsagar, S.;
Magnenat-Thalmann, N. Computer Animation and Virtual Worlds 15, 1 (2004), 1-13.
The Thing Growing: autonomous characters in virtual reality interactive fiction Anstey, J.; Pape, D.; Sandin, D. In Proc. of the IEEE Virtual Reality Conference 2000 (VR '00) (New Brunswick, NJ, Mar. 18-22, 2000), 71-78.
Autonomous animated interactive characters: do we need them? Blumberg, B. In Proc. of Computer Graphics International 1997 (CGI '97) (Hasselt-Diepenbeek, Belgium, June 23-27, 1997), 29-37.
Crowd simulation Thalmann, D.; Musse, S. R.
Handbook of virtual humans Magnenat-Thalmann, N.; Thalmann, D. (Eds.)
Stepping into virtual reality Gutierrez, M.; Vexo, F.; Thalmann, D., 2008.
Virtual storytelling: using virtual reality technologies for storytelling Subsol, G. (Ed.), 2006.
Fast multi-level adaptation for interactive autonomous characters Dinerstein, J.; Egbert, P. ACM Transactions on Graphics 24, 2 (2005), 262-288.
Agent culture: human-agent interaction in a multicultural world Payr, S.; Trappl, R. (Eds.), 2004.