Comparing Depth From Motion With Depth From Binocular Disparity

Citation data:

Journal of Experimental Psychology: Human Perception and Performance, ISSN: 0096-1523, Vol: 21, Issue: 3, Page: 679-699

Publication Year:
Usage: 9,034
  Abstract Views: 8,208
  Full Text Views: 544
  Link-outs: 282
Captures: 1,223
  Readers: 935
  Exports-Saves: 288
Social Media: 2
  Tweets: 2
Citations: 142
  Citation Indexes: 142
Repository URL:
Authors: Frank H. Durgin; Dennis R. Proffitt; Thomas J. Olson; Karen S. Reinke
Publishers: Elsevier BV; Springer Nature; American Psychological Association (APA); Association for Computing Machinery (ACM); SAGE Publications; Oxford University Press
Keywords: Psychology; Arts and Humanities; Medicine; Neuroscience; Social Sciences; Computer Science; Mathematics; surface layout; ground plane; extent anisotropy; height perception; size perception; adaptation; aftereffects; contrast; density; texture; slant perception; haptic perception; proprioception; geographical slant; geographical slant perception; visually guided action; Social and Behavioral Sciences; Geography
Article description:
The accuracy of depth judgments based upon binocular disparity or relative motion (motion parallax and object rotation) was compared in two experiments. A third experiment on stereoscopic depth constancy (the scaling of disparity information with distance) is also reported. In the first experiment, depth judgments were recorded for computer simulations of cones specified by binocular disparity, motion parallax, or stereokinesis (illusory structure from motion). In the second experiment, judgments were recorded for real cones in a structured environment, with depth information from binocular disparity, motion parallax, or object rotation. In both experiments, judgments from binocular disparity information were quite accurate, but judgments based upon geometrically equivalent or more robust motion information reflected poor recovery of quantitative depth information. In the third experiment, stereoscopic depth constancy was demonstrated for distances of 1 to 3 m using real objects in a well-illuminated, structured viewing environment in which monocular depth cues (e.g., shading) were minimized.