Computing Reviews
Using virtual environments to assess time-to-contact judgments from pedestrian viewpoints
Seward A., Ashmead D., Bodenheimer B. ACM Transactions on Applied Perception 4(3): 18-es, 2007. Type: Article
Date Reviewed: Feb 11 2008

A pedestrian's street-crossing decisions are based on time-to-contact (TTC) estimates of approaching vehicles. People estimate TTC through a very complicated process, and much of this judgment process is still unknown to researchers.

Without appropriate mathematical models to precisely describe the transfer of motion information, perception, and cognition, researchers have treated this unknown process as a black box. They have concentrated on experimental methodologies that reveal some of the key factors involved in the estimation process and have tried to quantify their impact.

Seward et al. present four categories of experiments. The experimental, statistical, and analytical methods used are similar to those found in the TTC judgment research literature. The main contribution of this paper is its use of three-dimensional (3D) graphics-rendered video, generated on a desktop graphics workstation, to simulate a typical pedestrian street-crossing situation (as opposed to conducting the experiments in the real world or using recorded film). The field of view (FOV) is confined to the desired viewing range in the desktop virtual environment by an enclosure apparatus. The experiments are designed so that the results can be correlated with those of prior work in the field, and the flexibility of 3D rendering is used to extend the experiments to cover more subtle cases. TTC is realized by having a vehicle model move down a fixed length of road at different speeds, from 10 to 30 miles per hour (mph).
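
To make the geometry of this setup concrete, here is a minimal sketch (my own illustration, not code or data from the paper) of the starting distance implied by a constant-speed vehicle and a target TTC:

MPH_TO_MPS = 0.44704  # miles per hour -> meters per second

def start_distance_m(speed_mph, ttc_s):
    """Distance (meters) from the crossing point for a constant-speed vehicle."""
    return speed_mph * MPH_TO_MPS * ttc_s

for ttc in (4, 7, 10):            # TTC values the experiments center on
    for speed in (10, 20, 30):    # vehicle speeds (mph)
        print(f"TTC {ttc:2d} s at {speed:2d} mph -> starts {start_distance_m(speed, ttc):5.1f} m away")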

The results of the experiments, centered on four-, seven-, and ten-second TTC judgments in the virtual environment, show that neither the vehicle model (two models are tested in the first category of experiments) nor the pedestrian viewpoint (left, head-on, and right viewpoints are tested in the second category of experiments) has a significant influence on a subject’s TTC judgment. The third category of experiments evaluates a subject’s judgment of short- and long-range TTC in a desktop virtual environment. The results show that the accuracy of estimating TTC declines for longer TTCs. They also show a consistent bias toward overestimating TTC, which is inconsistent with some of the previous results in the literature. The fourth category of experiments shows that using an immersive head-mounted display (HMD), as opposed to the desktop virtual environment, has no significant influence on TTC judgment.

From a 3D graphics researcher’s perspective, my opinion is that the rendering model adopted by current desktop graphics workstations (the local illumination model) has three main limitations for TTC judgment research.

First, in this model, the camera is simplified to a pinhole system with an infinitely small aperture. Therefore, everything rendered on the screen is in focus to the same degree (infinite depth of field), and the circle-of-confusion optical phenomenon cannot be captured. Viewers in the desktop virtual environment do not need to focus their eyes on the moving vehicles; instead, they focus on an everything-in-focus image at a fixed distance (from eye to screen). The effect of objects being out of focus might not be significant in single-vehicle TTC judgment, but when multiple vehicles or a more complex scene are presented, using a desktop virtual environment might introduce bias into a subject’s TTC judgment.
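
As a rough sketch (my own illustration, assuming a thin-lens eye/camera model and made-up viewing distances), the blur that a finite aperture produces for objects away from the focus distance, and which a pinhole model cannot reproduce, can be estimated as follows:

def circle_of_confusion_m(aperture_m, focal_m, focus_dist_m, object_dist_m):
    """Blur-circle diameter on the image plane for a thin-lens model (meters)."""
    return (aperture_m * abs(object_dist_m - focus_dist_m) / object_dist_m
            * focal_m / (focus_dist_m - focal_m))

# Assumed, eye-like numbers: ~17 mm focal length, 4 mm pupil, fixating a
# vehicle 30 m away while a second vehicle sits 60 m away.
print(circle_of_confusion_m(0.004, 0.017, 30.0, 60.0))   # finite aperture: nonzero blur
print(circle_of_confusion_m(1e-9, 0.017, 30.0, 60.0))    # pinhole limit: blur ~ 0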

Second, the FOV enclosure apparatus is very helpful in confining the viewing range, but in the desktop virtual environment, the subject’s eye position does not necessarily coincide with the imaginary camera position from which the final image is generated, so the perspective the subject experiences can differ from the perspective that was rendered.
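
Here is a minimal sketch (my own illustration, with assumed numbers) of how such a mismatch scales the visual angle of an on-screen object, and therefore its expansion rate, which is a key TTC cue:

import math

def visual_angle_deg(on_screen_size_m, eye_to_screen_m):
    """Angle subtended at the eye by an object of a given size on the screen."""
    return math.degrees(2 * math.atan(on_screen_size_m / (2 * eye_to_screen_m)))

vehicle_image_width = 0.05                          # assumed on-screen width, meters
print(visual_angle_deg(vehicle_image_width, 0.57))  # distance the camera geometry assumes
print(visual_angle_deg(vehicle_image_width, 0.45))  # distance where the subject actually sits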

Last, the desktop virtual environment does not necessarily provide stereoscopic vision to the viewer. In the real world, the different perspectives of the two eyes result in slight relative displacements (disparities) of objects in the two monocular views of the scene. When one’s viewpoint is not head-on, the difference between the monocular views may become very important, because the two eyes observe the fixated object from different distances. When the real 3D scene is rendered to a single two-dimensional (2D) image, the depth information that stereoscopic vision would provide is permanently lost.
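
For a sense of the magnitude of the lost cue, a minimal sketch (my own illustration, with an assumed interpupillary distance and made-up object distances) of the relative binocular disparity between two scene points:

import math

IPD_M = 0.064  # assumed typical interpupillary distance, meters

def relative_disparity_arcmin(near_m, far_m, ipd_m=IPD_M):
    """Relative binocular disparity between two points (small-angle approximation)."""
    return math.degrees(ipd_m * (1.0 / near_m - 1.0 / far_m)) * 60.0

# Approaching vehicle at 30 m versus background at 60 m:
print(relative_disparity_arcmin(30.0, 60.0))  # a few arcminutes of disparity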

Limitations one and three, along with the experiment on different viewpoints, partially explain why there is no significant difference in TTC judgment between binocular and monocular vision in a desktop virtual environment.

Seward et al. designed their TTC judgment experiments in a virtual environment to correlate with the experiments in prior TTC research, and some of the results are consistent with previous ones. With its two separate display systems, the HMD used in the fourth category of experiments is especially effective at displaying stereoscopic images and tracking the viewer’s head motion. Consequently, the HMD can provide an immersive experience that simulates a real TTC experimental environment. Some experiments were designed to correlate TTC judgment in the desktop virtual environment with judgment in the HMD virtual environment; this is a good approach, and the use of the HMD addresses the third limitation listed above. However, limitations one and two are left somewhat open and should be addressed in a future study. If further experiments can clarify or rule them out, we will have enough confidence to use the desktop virtual environment, which is much easier to access and design for, to conduct TTC judgment research.

Reviewer:  Xin (Bob) Long Review #: CR135250 (0812-1219)
Virtual Reality (I.3.7 ... )