• Title/Summary/Keyword: Virtual Reality video


Study On Online Platform For Personal Exhibition In Metaverse Environment (메타버스환경에서 온라인 개인 전시 방법 연구)

  • Park, Yu Mi;Shin, Choon Sung
    • Smart Media Journal / v.11 no.6 / pp.37-50 / 2022
  • This study proposes a direction for building and providing exhibition spaces on a metaverse platform so that artists can independently open their own exhibitions in an online environment. Although many new artists emerge every year, offline exhibitions are difficult to arrange because of the limited number of art galleries, and they became impossible during the COVID-19 pandemic that began at the end of 2019, so online exhibitions have emerged as an alternative. We analyze existing online exhibition methods (web, video, and virtual reality) together with the metaverse environment, and then present a plan for personal exhibitions that takes the recent metaverse environment into account. In the proposed metaverse-based personal exhibition method, artists construct a space on the metaverse and place their works in it, and viewers can then freely explore it from a remote location. Based on the proposed exhibition direction, a representative metaverse platform was applied to examine the characteristics and possibilities of the exhibition space, the composition of the works, and the users' exhibition experience. With the rise of online exhibitions, spaces can be constructed in the direction the artist pursues online as well as offline, and we hope that online exhibitions will become another genre of exhibition, rather than something incidental, after the end of COVID-19.

A Mobile Landmarks Guide : Outdoor Augmented Reality based on LOD and Contextual Device (모바일 랜드마크 가이드 : LOD와 문맥적 장치 기반의 실외 증강현실)

  • Zhao, Bi-Cheng;Rosli, Ahmad Nurzid;Jang, Chol-Hee;Lee, Kee-Sung;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems / v.18 no.1 / pp.1-21 / 2012
  • In recent years, the mobile phone has evolved extremely fast. It is now equipped with high-quality color displays, high-resolution cameras, and real-time accelerated 3D graphics, along with other features such as a GPS sensor and a digital compass. This evolution helps application developers harness the power of smartphones to create a rich environment that offers a wide range of services and exciting possibilities. In outdoor mobile AR research to date, there are many popular location-based AR services, such as Layar and Wikitude; their big limitation is that the AR content is rarely overlaid precisely on the real target. Another line of research is context-based AR using image recognition and tracking, in which AR content is precisely overlaid on the real target, but real-time performance is restricted by the retrieval time and such systems are hard to deploy over large areas. In our work, we combine the advantages of location-based AR with those of context-based AR: the system can easily find the surrounding landmarks first and then perform recognition and tracking on them. The proposed system mainly consists of two major parts, a landmark browsing module and an annotation module. In the landmark browsing module, users can view augmented virtual information media, such as text, pictures, and video, in their smartphone viewfinder when they point the phone at a certain building or landmark. For this, a landmark recognition technique is applied, with SURF point-based features used in the matching process because of their robustness. To ensure that the image retrieval and matching processes are fast enough for real-time tracking, we exploit contextual device information (GPS and digital compass) to select from the database only the nearest landmarks in the pointed direction; the query image is matched against this selected data only, so the matching speed increases significantly.
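The contextual-device filtering the abstract describes can be illustrated with a minimal sketch. The helper names, field-of-view value, and landmark coordinates below are hypothetical (for illustration only, not the authors' implementation): candidates are kept only if they are within a GPS radius of the user and their compass bearing falls inside the phone's assumed field of view.

```python
import math

# Hypothetical landmark records: (name, latitude, longitude). Illustrative data only.
LANDMARKS = [
    ("City Hall", 35.1601, 126.8514),
    ("Museum", 35.1525, 126.8900),
    ("Old Tower", 35.1700, 126.8400),
]

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing (0-360 degrees) from the user toward a landmark."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

def candidate_landmarks(user_lat, user_lon, heading_deg, radius_m=500.0, fov_deg=60.0):
    """Keep only landmarks within radius_m and inside the phone's field of view."""
    hits = []
    for name, lat, lon in LANDMARKS:
        if haversine_m(user_lat, user_lon, lat, lon) > radius_m:
            continue  # too far away for the pointed scene
        # Smallest signed angle between compass heading and landmark bearing
        diff = abs((bearing_deg(user_lat, user_lon, lat, lon) - heading_deg + 180) % 360 - 180)
        if diff <= fov_deg / 2:
            hits.append(name)
    return hits
```

Only the images of the surviving candidates would then go through SURF matching, which is how the pre-filter cuts retrieval time: the expensive feature comparison runs over a handful of landmarks instead of the whole database.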
The second part is the annotation module. Instead of only viewing the augmented information media, users can create virtual annotations based on linked data. Full knowledge of a landmark is not required: users can simply look up the appropriate topic by searching with a keyword in the linked data, which helps the system find the target URI needed to generate correct AR content. To recognize target landmarks, images of each selected building or landmark are captured from different angles and distances; this procedure in effect builds a connection between the real building and the virtual information in the Linked Open Data. In our experiments, the search range in the database is reduced by clustering images into groups according to their coordinates, using a grid-based clustering method together with the user's location information to restrict the retrieval range. Compared with existing research using clusters and GPS information, where the retrieval time is around 70~80 ms, our approach reduces the retrieval time to around 18~20 ms on average, so the total processing time drops from 490~540 ms to 438~480 ms. The performance improvement becomes more pronounced as the database grows, demonstrating that the proposed system is efficient and robust in many cases.
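The grid-based restriction of the retrieval range can also be sketched briefly. The cell size and class names below are assumptions for illustration, not the paper's parameters: database images are bucketed by a coarse latitude/longitude grid, and a query scans only the user's cell plus its eight neighbors instead of the whole database.

```python
from collections import defaultdict

CELL = 0.005  # grid cell size in degrees; an assumed value for illustration

def cell_of(lat, lon):
    """Map a coordinate to its integer grid cell."""
    return (int(lat // CELL), int(lon // CELL))

class GridIndex:
    """Toy coordinate-bucketed image index (hypothetical, not the authors' code)."""
    def __init__(self):
        self.cells = defaultdict(list)

    def add(self, image_id, lat, lon):
        """Register a database image under its grid cell."""
        self.cells[cell_of(lat, lon)].append(image_id)

    def nearby(self, lat, lon):
        """Return images in the user's cell and the 8 surrounding cells."""
        ci, cj = cell_of(lat, lon)
        out = []
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                out.extend(self.cells[(ci + di, cj + dj)])
        return out
```

Searching the 3x3 neighborhood rather than the single cell avoids missing images that sit just across a cell boundary from the user's GPS fix, at the cost of a slightly larger candidate set.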