SEAR: Scaling Experiences in Multi-user Augmented Reality

IEEE Trans Vis Comput Graph. 2022 May;28(5):1982-1992. doi: 10.1109/TVCG.2022.3150467. Epub 2022 Apr 8.

Abstract

In this paper, we present the design, implementation, and evaluation of SEAR, a collaborative framework for Scaling Experiences in multi-user Augmented Reality (AR). Most AR systems rely on computer vision (CV) algorithms to detect, classify, or recognize physical objects for augmentation. A widely used acceleration method for mobile AR is to offload compute-intensive tasks (e.g., CV algorithms) to the network edge. However, we show that the end-to-end latency, an important metric of mobile AR, may increase dramatically when a large number of concurrent users offload AR tasks to the edge. SEAR tackles this scalability issue with a lightweight collaborative local caching scheme. Our key observation is that nearby AR users may share common interests and may even have overlapping views to augment (e.g., when playing a multi-user AR game). Thus, SEAR opportunistically exchanges the results of offloaded AR tasks among users when feasible and, when necessary, leverages compute resources on mobile devices to relieve the edge workload by intelligently reusing these results. We build a prototype of SEAR to demonstrate its efficacy in scaling AR experiences. We conduct extensive evaluations through both real-world experiments and trace-driven simulations. We observe that SEAR not only reduces end-to-end latency by up to 130× compared to a state-of-the-art adaptive edge offloading scheme, but also achieves high object-recognition accuracy for mobile AR.
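The core idea, reusing peers' offloaded recognition results before contacting the edge, can be illustrated with a minimal sketch. This is a hypothetical toy, not SEAR's actual implementation: the class and function names, the time-to-live policy, and the string-keyed cache are all assumptions made for illustration (a real system would key on visual features and handle peer exchange over the network).

```python
import time

class CollaborativeCache:
    """Illustrative local cache of recognition results shared among
    nearby AR users (hypothetical sketch, not SEAR's implementation)."""

    def __init__(self, ttl_s=5.0):
        self.ttl_s = ttl_s   # cached results go stale as users' views change
        self.entries = {}    # feature_key -> (label, timestamp)

    def put(self, feature_key, label):
        self.entries[feature_key] = (label, time.monotonic())

    def lookup(self, feature_key):
        hit = self.entries.get(feature_key)
        if hit is None:
            return None
        label, ts = hit
        if time.monotonic() - ts > self.ttl_s:
            del self.entries[feature_key]  # expired: view likely changed
            return None
        return label

def recognize(cache, feature_key, offload_to_edge):
    """Reuse a cached peer result when available; otherwise offload to
    the edge and share the new result through the cache."""
    label = cache.lookup(feature_key)
    if label is not None:
        return label, "cache"          # edge server is not contacted
    label = offload_to_edge(feature_key)
    cache.put(feature_key, label)      # make result reusable by peers
    return label, "edge"
```

Under this sketch, a cache hit avoids an edge round trip entirely, which is the mechanism behind the latency reduction the abstract reports when many co-located users augment overlapping views.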