Narrator: This is Science Today. Researchers at a computer graphics lab at the University of California, Merced, are working to create more realistic human motions for virtual reality avatars. Justin Matthews, a doctoral student in cognitive and information sciences, says they're teaming up with computer scientists and engineers.
Matthews: This research tackles the question of what people do when they're pointing at objects and trying to teach someone about a specific object: how do they reference that object, not only with their hands while pointing, but also with their head or eyes while speaking about it?
Narrator: Matthews explains that the research could have applications in learning and teaching.
Matthews: Maybe something like telemedicine, where you're teaching an individual to do a certain task but you're not physically present in their environment. You might be controlling an avatar in a remote location while you're somewhere else, actively controlling this avatar either live, through your own movements, or by having a computer model those movements for you.
Narrator: For Science Today, I'm Larissa Branin.