http://dx.doi.org/10.13088/jiis.2012.18.1.001

A Mobile Landmarks Guide: Outdoor Augmented Reality based on LOD and Contextual Device

Zhao, Bi-Cheng (Dept. of Information Engineering, Inha University)
Rosli, Ahmad Nurzid (Dept. of Information Engineering, Inha University)
Jang, Chol-Hee (Dept. of Information Engineering, Inha University)
Lee, Kee-Sung (Dept. of Information Engineering, Inha University)
Jo, Geun-Sik (Dept. of Computer Science & Information Engineering, Inha University)
Publication Information
Journal of Intelligence and Information Systems / v.18, no.1, 2012, pp. 1-21
Abstract
In recent years, the mobile phone has evolved extremely quickly. It is now equipped with a high-quality color display, a high-resolution camera, and real-time accelerated 3D graphics, along with additional sensors such as GPS and a digital compass. This evolution helps application developers harness the power of smartphones to create rich environments offering a wide range of services and exciting possibilities. Among outdoor mobile AR research to date, there are many popular location-based AR services, such as Layar and Wikitude. These systems have a significant limitation: the AR content is rarely overlaid accurately on the real target. Another line of research concerns context-based AR services using image recognition and tracking, where the AR content is precisely overlaid on the real target; however, real-time performance is limited by the retrieval time, and such systems are hard to deploy over a large-scale area. In our work, we combine the advantages of location-based AR with those of context-based AR. The system first finds the surrounding landmarks and then performs recognition and tracking on them. The proposed system consists of two major parts: a landmark browsing module and an annotation module. In the landmark browsing module, users can view augmented virtual information (media such as text, pictures, and video) on their smartphone viewfinder when they point the smartphone at a certain building or landmark. For this, a landmark recognition technique is applied. SURF point-based features are used in the matching process because of their robustness. To ensure that the image retrieval and matching processes are fast enough for real-time tracking, we exploit contextual device information (GPS and digital compass) to select from the database only the landmarks nearest to the user and in the pointed direction. The query image is matched only against this selected data, so the matching speed increases significantly.
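The contextual pre-filtering step described above (selecting only nearby landmarks in the pointed direction before any image matching) can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function names, the 500 m radius, and the 60° field-of-view value are assumptions chosen for the example.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS points, in metres."""
    r = 6371000.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing from point 1 to point 2, in degrees [0, 360)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

def candidate_landmarks(user_lat, user_lon, heading_deg, landmarks,
                        max_dist_m=500.0, fov_deg=60.0):
    """Keep only landmarks within max_dist_m of the user AND inside the
    camera's horizontal field of view around the compass heading.
    `landmarks` is an iterable of (name, lat, lon) tuples."""
    result = []
    for name, lat, lon in landmarks:
        if haversine_m(user_lat, user_lon, lat, lon) > max_dist_m:
            continue
        # smallest signed angular difference between bearing and heading
        diff = abs((bearing_deg(user_lat, user_lon, lat, lon)
                    - heading_deg + 180.0) % 360.0 - 180.0)
        if diff <= fov_deg / 2.0:
            result.append(name)
    return result
```

Only the images of the landmarks returned by such a filter would then enter the SURF matching stage, which is what keeps retrieval fast enough for real-time tracking.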
The second part is the annotation module. Instead of only viewing the augmented information media, users can create virtual annotations based on Linked Data. Full knowledge of the landmark is not required: users can simply look up an appropriate topic by searching for a keyword in Linked Data, which helps the system find the target URI needed to generate the correct AR content. To recognize the target landmarks, images of each selected building or landmark are captured from different angles and distances. This procedure effectively builds a connection between the real building and the virtual information that exists in the Linked Open Data. In our experiments, the search range in the database is reduced by clustering images into groups according to their coordinates; a grid-based clustering method together with the user's location information restricts the retrieval range. Compared with existing research using clustering and GPS information, whose retrieval time is around 70~80 ms, our approach reduces the retrieval time to around 18~20 ms on average. The total processing time is therefore reduced from 490~540 ms to 438~480 ms, and the improvement becomes more pronounced as the database grows. This demonstrates that the proposed system is efficient and robust in many cases.
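The grid-based restriction of the retrieval range can be sketched as below: landmark images are bucketed into fixed-size coordinate cells, and a query only searches the user's cell plus its eight neighbours. The cell size (0.005°, roughly 500 m at mid-latitudes) and all function names are illustrative assumptions, not values taken from the paper.

```python
from collections import defaultdict

def grid_key(lat, lon, cell_deg=0.005):
    """Map a GPS coordinate to its integer grid-cell index."""
    return (int(lat // cell_deg), int(lon // cell_deg))

def build_grid_index(images, cell_deg=0.005):
    """Group landmark images into grid cells by their coordinates.
    `images` is an iterable of (image_id, lat, lon) tuples."""
    grid = defaultdict(list)
    for image_id, lat, lon in images:
        grid[grid_key(lat, lon, cell_deg)].append(image_id)
    return grid

def retrieval_candidates(grid, user_lat, user_lon, cell_deg=0.005):
    """Return image ids in the user's cell and the 8 neighbouring cells,
    so feature matching never scans the whole database."""
    ci, cj = grid_key(user_lat, user_lon, cell_deg)
    out = []
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out.extend(grid.get((ci + di, cj + dj), []))
    return out
```

Because lookup cost depends only on the handful of images in nearby cells rather than on the total database size, a scheme like this is consistent with the paper's observation that the performance gain grows as the database grows.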
Keywords
Augmented Reality; Semantic Web; Linked Data; Collaborative Annotation
Citations & Related Records
Times Cited By KSCI: 3