How do we interact in immersive virtual reality?

What does the research tell us about the importance of avatars in VR experiences?


When NASA began using VR to train astronauts in the 1980s, it recognised just how important immersion is to achieving the best results.

This video of NASA VR training is over 30 years old.  It shows that the basic form of head-mounted VR displays hasn't changed that much in years.  Professor Anthony Steed of UCL thinks that in the rush to deliver more content, developers and engineers are in danger of overlooking some of the basic science about what works in VR environments.  He shared his thoughts on the importance of 'embodied cognition' in virtual environments in the first 2017 Whitehead Lecture, held at Goldsmiths College, University of London.

There are many ways to assess the 'success' of VR environments.  You can measure people's cognitive and emotional responses.  You can observe their behaviour or monitor their performance of tasks. Or you can simply ask them how they felt.  And you can measure the impact that 'self-representation' (or the use of avatars or other representations of the physical self) has on people’s performance in virtual environments.

Embodiment and the rubber hand illusion

The rubber hand illusion (in which a person can be tricked into believing a rubber hand is their own – and will react if the hand is harmed or threatened) can be replicated in virtual reality.  A significant proportion of people will attempt to pull away a threatened limb in a virtual environment.

Avatars help cognition

Research shows that in cognitive/memory tests, the use of a self-avatar and the ability to move [virtual] hands significantly enhance task performance.

Why does VR work – and what next?

The research shows that VR works because the brain is plastic and can cope with – and adapt to – unrealistic environments.  And that avatars are important to help immerse us in the environment. Yet there's still a lot of work to be done to address a range of technical challenges.

  • Engineering – we need higher-quality displays, lower latency and more accurate tracking
  • User experience – more work to be done on mixed reality, augmented reality and environmental capture
  • Content capture – we must get to a point where capturing 3D content is as easy (and democratic) as capturing video is now

Professor Anthony Steed is head of the Virtual Environments and Computer Graphics (VECG) group at University College London.

The Whitehead Lectures are organised by Goldsmiths College's Departments of Computing and Psychology to explore diverse aspects of cognition, computation and culture.