Modelling human visual navigation using multi-view scene reconstruction

Text - Published Version. Available under License Creative Commons Attribution.

It is advisable to refer to the publisher's version if you intend to cite from this work. See Guidance on citing.

Pickup, L. C., Fitzgibbon, A. W. and Glennerster, A. (ORCID: https://orcid.org/0000-0002-8674-2763) (2013) Modelling human visual navigation using multi-view scene reconstruction. Biological Cybernetics, 107 (4). pp. 449-464. ISSN 0340-1200. doi: 10.1007/s00422-013-0558-2

Abstract/Summary

It is often assumed that humans generate a 3D reconstruction of the environment, either in egocentric or world-based coordinates, but the steps involved are unknown. Here, we propose two reconstruction-based models, evaluated using data from two tasks in immersive virtual reality. We model the observer’s prediction of landmark location based on standard photogrammetric methods and then combine location predictions to compute likelihood maps of navigation behaviour. In one model, each scene point is treated independently in the reconstruction; in the other, the pertinent variable is the spatial relationship between pairs of points. Participants viewed a simple environment from one location, were transported (virtually) to another part of the scene and were asked to navigate back. Error distributions varied substantially with changes in scene layout; we compared these directly with the likelihood maps to quantify the success of the models. We also measured error distributions when participants manipulated the location of a landmark to match the preceding interval, providing a direct test of the landmark-location stage of the navigation models. Models such as this, which start with scenes and end with a probabilistic prediction of behaviour, are likely to be increasingly useful for understanding 3D vision.
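To illustrate the kind of pipeline the abstract describes, the sketch below shows one way a navigation likelihood map could be assembled: each candidate position is scored by how well the bearings to the landmarks match the bearings remembered from the original viewpoint, treating each landmark independently. This is not the authors' implementation (the paper uses photogrammetric reconstruction of full 3D scenes viewed in immersive virtual reality); the 2D setting, the independent Gaussian bearing-noise model, the grid resolution and all function names are illustrative assumptions.

    import numpy as np

    def navigation_likelihood_map(landmarks, goal, grid_x, grid_y, sigma_deg=5.0):
        """Score each candidate 2D position by how well the bearings to the
        landmarks match the bearings remembered from the goal viewpoint.

        landmarks : (N, 2) array of landmark positions (illustrative 2D stand-in
                    for the paper's 3D scene points).
        goal      : (2,) position from which the scene was originally viewed.
        grid_x, grid_y : 1D arrays defining the candidate-position grid.
        sigma_deg : assumed angular noise on each remembered bearing (a hedged
                    stand-in for a full photogrammetric error model).
        """
        landmarks = np.asarray(landmarks, dtype=float)
        goal = np.asarray(goal, dtype=float)
        sigma = np.deg2rad(sigma_deg)

        # Bearings to each landmark as remembered from the goal viewpoint.
        remembered = np.arctan2(landmarks[:, 1] - goal[1],
                                landmarks[:, 0] - goal[0])

        xx, yy = np.meshgrid(grid_x, grid_y)          # candidate positions
        log_like = np.zeros_like(xx)

        for lm, b0 in zip(landmarks, remembered):
            b = np.arctan2(lm[1] - yy, lm[0] - xx)    # bearing from each grid cell
            d = np.angle(np.exp(1j * (b - b0)))       # angular error, wrapped to [-pi, pi]
            log_like += -0.5 * (d / sigma) ** 2       # independent Gaussian terms per landmark

        like = np.exp(log_like - log_like.max())
        return like / like.sum()                      # normalised likelihood map

    # Example: three landmarks, goal at the origin, 2 m x 2 m search area.
    lm = [(1.0, 2.0), (-1.5, 1.0), (0.5, -2.0)]
    grid = np.linspace(-1.0, 1.0, 101)
    L = navigation_likelihood_map(lm, goal=(0.0, 0.0), grid_x=grid, grid_y=grid)
    row, col = np.unravel_index(L.argmax(), L.shape)
    print(L.shape, (grid[col], grid[row]))            # peak lies at the goal position

Because each landmark contributes an independent term, this corresponds to the first of the two model classes mentioned in the abstract (each scene point treated independently); a pairwise model would instead score relationships between pairs of points.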

Item Type: Article
URI: https://reading-clone.eprints-hosting.org/id/eprint/34003
Identification Number/DOI: 10.1007/s00422-013-0558-2
Refereed: Yes
Divisions: Interdisciplinary Research Centres (IDRCs) > Centre for Integrative Neuroscience and Neurodynamics (CINN)
           Life Sciences > School of Psychology and Clinical Language Sciences > Department of Psychology
           Life Sciences > School of Psychology and Clinical Language Sciences > Neuroscience
           Interdisciplinary Research Centres (IDRCs) > Centre for Cognition Research (CCR)
           Life Sciences > School of Psychology and Clinical Language Sciences > Perception and Action
Publisher: Springer