Title: Wide-Area Visual Modeling and Tracking for Mobile Augmented Reality
Year of Publication: 2012
Academic Department: Computer Science
Number of Pages: 157
University: University of California
City: Santa Barbara, CA
The vision of "Anywhere Augmentation" is to enable a ubiquitous augmented reality experience, indoors and out, by leveraging existing geographic resources and providing tools to easily map new spaces with cheap, off-the-shelf hardware. My work considers the use of panoramic datasets to provide complete and detailed environment models for accurate mobile localization and tracking. I present an end-to-end system that covers modeling, localization, and tracking for AR. The implementation is demonstrated with the Apple iPad 2 as a mobile AR tablet, using a remote server for computationally heavy localization queries.
This dissertation discusses the details of my methods and my contributions to outdoor modeling and tracking for AR. Outdoor visual modeling is improved by using the "upright" constraint, which requires that all images be aligned to a common vertical. The resulting model is used to determine the location of a mobile device by matching the camera image to the point cloud. I consider the problem of latency between the mobile device and the localization server, and present strategies to handle this latency. Continuous device localization is achieved by real-time tracking on the mobile device, which is shown to be robust to illumination change. In addition, I evaluate the relationship between modeling effort and localization performance, to determine the feasibility of in situ reconstruction of large spaces by a mobile user. The work leads to insights and future directions for pursuing a state-of-the-art "Anywhere Augmentation" system.