Functional Similarities in Spatial Representations Between Real and Virtual Environments
Betsy Williams, Gayathri Narasimham, Claire Westerman, John Rieser, Bobby Bodenheimer
Transactions on Applied Perception
Abstract
The two experiments in this paper demonstrate similarities in what
people know about the spatial layout of objects in familiar places
whether their knowledge resulted from exploring the physical
environment on foot or exploring a virtual rendering of it with a
tethered head-mounted display. In both experiments, subjects were asked to study the
locations of eight targets in the physical or virtual environment, then
close their eyes, walk (or imagine walking) to a new point of
observation, and then turn to face some of the remembered objects.
In Experiment 1, the results were functionally similar after learning
by exploring the virtual environment and after learning by exploring
the physical environment: the new points of observation were simple
rotations of the point of learning. As with learning in the physical
environment, turning judgments after learning the virtual environment
showed significantly greater latencies and errors after imagined
movements than after physical movements, and both measures worsened
with larger degrees of physical or imagined rotation. In Experiment 2,
the new points of observation differed by simple rotations in one
condition versus simple translations in the other, and again the
results were functionally similar after learning the physical versus
the virtual environment: in both learning conditions, errors and
latencies were worse after rotations than after translations and
varied as a function of the disparity between the facing direction at
learning and the facing direction at responding.