• Title/Summary/Keyword: reality show

Registration of Video Avatar by Comparing Real and Synthetic Images (실제와 합성영상의 비교에 의한 비디오 아바타의 정합)

  • Park Moon-Ho;Ko Hee-Dong;Byun Hye-Ran
    • Journal of KIISE:Computer Systems and Theory / v.33 no.8 / pp.477-485 / 2006
  • In this paper, a video avatar, built from live video streams captured from a real participant, is used to represent that participant in a virtual environment. Representing participants with video avatars increases the sense of reality, but correct registration between the avatar and the virtual scene becomes an important issue. We configured the real and virtual cameras to have the same characteristics so that the video avatar, captured from the real environment, could be registered with the virtual environment by comparing real and synthetic images, which is possible because of the similarity between the two cameras. The degree of misregistration is expressed as an energy term, which is then minimized to produce a seamless registration. Experimental results show that the proposed method can be used effectively for registration of video avatars.
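
A minimal Python sketch of registration by image-comparison energy minimization in the spirit of this abstract. The sum-of-squared-differences energy, the translation-only pose model, and the render_synthetic() helper are illustrative assumptions, not the authors' actual formulation.

```python
import numpy as np

def energy(real_img, synthetic_img):
    """Sum-of-squared-differences energy between real and synthetic views."""
    diff = real_img.astype(np.float64) - synthetic_img.astype(np.float64)
    return np.sum(diff ** 2)

def render_synthetic(avatar_img, shift):
    """Hypothetical renderer: place the avatar in the virtual view with a 2D shift."""
    return np.roll(avatar_img, shift, axis=(0, 1))

def register(real_img, avatar_img, search_range=5):
    """Brute-force search for the shift that minimizes the registration energy."""
    best_shift, best_e = (0, 0), np.inf
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            e = energy(real_img, render_synthetic(avatar_img, (dy, dx)))
            if e < best_e:
                best_shift, best_e = (dy, dx), e
    return best_shift, best_e

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    real = rng.random((64, 64))
    avatar = np.roll(real, (2, -3), axis=(0, 1))  # synthetic view misaligned by (2, -3)
    print(register(real, avatar))                 # recovers roughly (-2, 3)
```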

Virtual Reality based Total Station Training Content Development (가상현실 기반 토탈스테이션 훈련 콘텐츠 개발)

  • Im, Tami;Kim, Sang-Youn
    • Journal of Digital Contents Society / v.18 no.4 / pp.631-639 / 2017
  • The development and implementation of virtual training content have been increasing along with the emphasis on experience and practice in engineering education. Virtual training makes repeatable practice sessions possible within a safe learning environment that closely resembles the real workplace, which helps learners when they later operate real machines at work after studying with the virtual training content. The purpose of this study is to develop "Total Station and GPSS surveying" virtual training content covering both theory and surveying practice in various circumstances, and to explore learners' experiences. Results show high interest, immersion, perceived learning effectiveness, and satisfaction with the content.

Multi-View Supporting VR/AR Visualization System for Supercomputing-based Engineering Analysis Services (슈퍼컴퓨팅 기반의 공학해석 서비스 제공을 위한 멀티 뷰 지원 VR/AR 가시화 시스템 개발)

  • Seo, Dong Woo;Lee, Jae Yeol;Lee, Sang Min;Kim, Jae Seong;Park, Hyung Wook
    • Korean Journal of Computational Design and Engineering / v.18 no.6 / pp.428-438 / 2013
  • The demand for high-performance visualization of engineering analyses of digital products is increasing, since current analysis problems are ever larger and more complex and therefore require high-performance codes as well as high-performance computing systems. On the other hand, many companies and customers do not have such facilities or have difficulty accessing those computing resources. In this paper, we present a multi-view VR/AR system for providing supercomputing-based engineering analysis services. The proposed system is designed to provide different views supporting VR/AR visualization services depending on the requirements of the customers: it offers sophisticated VR rendering directly backed by a supercomputing resource as well as remotely accessible AR visualization. By providing multi-view-centric analysis services, the proposed system can be applied more easily to various customers requiring different levels of high-performance computing resources. We show the scalability and vision of the proposed approach by demonstrating illustrative examples with different levels of complexity.
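
A minimal Python sketch of the multi-view dispatching idea described in this abstract: a request is routed either to a supercomputing-backed VR rendering path or to a lightweight remote AR path. The client profile fields, the two renderer stubs, and the selection rule are assumptions for illustration, not the paper's architecture.

```python
from dataclasses import dataclass

@dataclass
class ClientProfile:
    device: str           # e.g. "cave", "workstation", "mobile"
    has_hpc_access: bool  # whether the client can reach the supercomputing resource

def vr_render_on_supercomputer(analysis_result: str) -> str:
    return f"high-fidelity VR rendering of {analysis_result} on HPC resource"

def ar_render_remotely(analysis_result: str) -> str:
    return f"lightweight AR overlay of {analysis_result} streamed to remote client"

def dispatch_view(client: ClientProfile, analysis_result: str) -> str:
    """Choose a VR or AR visualization path based on the client's capability."""
    if client.has_hpc_access and client.device in ("cave", "workstation"):
        return vr_render_on_supercomputer(analysis_result)
    return ar_render_remotely(analysis_result)

print(dispatch_view(ClientProfile("mobile", False), "stress field of a bracket model"))
```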

AR-based Message Annotation System for Personalized Assistance (개인화된 도움을 위한 증강현실기반 메시지 주석시스템)

  • Vinh, Nguyen Van;Jun, Hee-Sung
    • The KIPS Transactions:PartB / v.16B no.6 / pp.435-442 / 2009
  • We propose an annotation system that allows users moving through an environment to receive personalized messages generated from contextual information. In the system, the context is defined as an entity comprising the user's identity, location, and time, where the user's identity is the key to personalizing the generated messages. To sense the context, the proposed system uses AR (augmented reality) technology: markers attached to real objects are used to track the user's location. AR provides an effective annotation method that enhances human perception and interaction abilities. A received message can be a virtual post-it or a three-dimensional virtual model of an object overlaid onto the real-world view. Experimental results show that the proposed system works well in real time with high performance and can be used as a mobile service for personalized messaging.
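
A minimal Python sketch of context-driven message selection for an AR annotation system like the one described in this abstract. The Context and Message structures and the matching rule (identity, tracked marker, time window) are illustrative assumptions, not the paper's design.

```python
from dataclasses import dataclass
from datetime import datetime, time

@dataclass
class Context:
    user_id: str      # identity of the recognized user
    marker_id: str    # location inferred from the AR marker currently in view
    now: datetime     # current time

@dataclass
class Message:
    recipient: str
    marker_id: str
    valid_from: time
    valid_to: time
    text: str         # rendered as a virtual post-it over the real object

def messages_for(context: Context, messages: list) -> list:
    """Return the personalized messages anchored to the tracked marker."""
    return [
        m.text for m in messages
        if m.recipient == context.user_id
        and m.marker_id == context.marker_id
        and m.valid_from <= context.now.time() <= m.valid_to
    ]

inbox = [Message("alice", "marker_07", time(9, 0), time(18, 0), "Lab meeting moved to 3 pm")]
ctx = Context("alice", "marker_07", datetime(2009, 11, 2, 14, 30))
print(messages_for(ctx, inbox))   # ['Lab meeting moved to 3 pm']
```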

Augmented System for Immersive 3D Expansion and Interaction

  • Yang, Ungyeon;Kim, Nam-Gyu;Kim, Ki-Hong
    • ETRI Journal / v.38 no.1 / pp.149-158 / 2016
  • In the field of augmented reality technologies, commercial optical see-through-type wearable displays have difficulty providing immersive visual experiences, because users perceive different depths between virtual views on display surfaces and see-through views to the real world. Many cases of augmented reality applications have adopted eyeglasses-type displays (EGDs) for visualizing simple 2D information, or video see-through-type displays for minimizing virtual- and real-scene mismatch errors. In this paper, we introduce an innovative optical see-through-type wearable display hardware, called an EGD. In contrast to common head-mounted displays, which are intended for a wide field of view, our EGD provides more comfortable visual feedback at close range. Users of an EGD device can accurately manipulate close-range virtual objects and expand their view to distant real environments. To verify the feasibility of the EGD technology, subject-based experiments and analysis are performed. The analysis results and EGD-related application examples show that EGD is useful for visually expanding immersive 3D augmented environments consisting of multiple displays.

Augmented Visualization of Modeling & Simulation Analysis Results (모델링 & 시뮬레이션 해석 결과 증강가시화)

  • Kim, Minseok;Seo, Dong Woo;Lee, Jae Yeol;Kim, Jae Sung
    • Korean Journal of Computational Design and Engineering / v.22 no.2 / pp.202-214 / 2017
  • Augmented visualization of analysis results can play an important role as a post-processing tool for modeling & simulation (M&S) technology. In particular, it is essential to develop an M&S tool that can run on various devices. This paper presents an augmented reality (AR) approach to visualizing and interacting with M&S post-processing results on mobile devices. The proposed approach imports M&S data, extracts the analysis information, and converts the extracted information into a form suitable for AR-based visualization. Finally, the result is displayed on the mobile device through AR marker tracking and shader-based realistic rendering. In particular, the proposed method can seamlessly superimpose AR-based realistic scenes onto physical objects such as 3D-printed physical prototypes, which provides more immersive visualization and more natural interaction with M&S results than conventional VR- or AR-based approaches. A user study was performed to analyze the qualitative usability, and implementation results are given to show the advantages and effectiveness of the proposed approach.
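
A minimal Python sketch of the conversion step described in this abstract: turning extracted analysis results into data an AR renderer can consume. Mapping nodal scalar values to per-vertex colors with a simple blue-to-red colormap is an assumed simplification of the paper's pipeline.

```python
import numpy as np

def scalars_to_vertex_colors(values: np.ndarray) -> np.ndarray:
    """Normalize nodal scalars and map them to a simple blue-to-red colormap."""
    span = values.max() - values.min()
    v = (values - values.min()) / (span if span > 0 else 1.0)  # scale to [0, 1]
    colors = np.zeros((len(v), 3))
    colors[:, 0] = v          # red grows with the value
    colors[:, 2] = 1.0 - v    # blue shrinks with the value
    return colors             # per-vertex RGB for a shader-based AR renderer

nodal_stress = np.array([12.0, 85.5, 40.2, 99.9])   # illustrative analysis output
print(scalars_to_vertex_colors(nodal_stress))
```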

Quality Assessment and Analysis of Stereoscopic 3D Television Pictures (양안식 3D 텔레비전 영상의 화질 평가와 분석)

  • Park, Dae-Chul
    • Journal of the Institute of Convergence Signal Processing / v.11 no.4 / pp.278-288 / 2010
  • In this paper, we carried out quality assessment and analysis of stereoscopic 3D television pictures according to ITU-R contributions and recommendations, using rating scales based on the DSCQS (Double-Stimulus Continuous Quality Scale) method. The evaluation results show that the overall quality and sharpness of stereoscopic pictures differed little from those of mono pictures for natural outdoor scenes, graphic images, and indoor scenes (scores of about 3.0 to 4.0), whereas the depth perception and sensation of reality of the stereoscopic images scored above 4.0 out of 5.0, outperforming the mono images. The evaluation results suggest that human factors such as disparity should be considered when shooting and/or editing 3DTV content.
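
A minimal Python sketch of DSCQS-style scoring as referenced in this abstract: viewers rate a reference and a test presentation on a continuous scale, and the per-viewer differences are averaged. The example ratings are made up for illustration and do not come from the paper.

```python
def dscqs_difference(reference_scores, test_scores):
    """Mean of per-viewer (reference - test) quality differences."""
    diffs = [r - t for r, t in zip(reference_scores, test_scores)]
    return sum(diffs) / len(diffs)

mono_ratings   = [72, 80, 68, 75]   # hypothetical ratings of mono pictures
stereo_ratings = [74, 79, 70, 77]   # hypothetical ratings of stereoscopic pictures
print(dscqs_difference(mono_ratings, stereo_ratings))   # near zero => little difference
```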

Multi-homing in Heterogeneous Wireless Access Networks: A Stackelberg Game for Pricing

  • Lee, Joohyung
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.5 / pp.1973-1991 / 2018
  • Multimedia applications over wireless networks have been evolving toward augmented reality and virtual reality services. However, their rich data sizes compared to conventional multimedia services cause bandwidth bottlenecks over wireless networks, which is one of the main reasons why those applications are not yet widely used. To overcome this limitation, bandwidth aggregation techniques that exploit multi-path transmission have been considered to maximize link utilization. Most conventional research has focused on user-end problems, improving the quality of service (QoS) through optimal load distribution. In this paper, we address the joint pricing and load-distribution problem for multi-homing in heterogeneous wireless access networks (ANs), considering the interests of both users and service providers. Specifically, we consider the profit from resource allocation and the power-consumption cost of operation in each service provider's utility. Users decide how much resource to request and how to split it over heterogeneous wireless ANs to minimize their cost while supporting the required QoS; service providers then compete with each other by setting prices to maximize their utilities over the users' reactions. We study the behaviors of users and service providers by analyzing their hierarchical decision-making process as a multi-leader, multi-follower Stackelberg game, show that both the user and service-provider strategies admit closed-form solutions, and discuss how the proposed scheme converges to equilibrium points.
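
A minimal Python sketch of a multi-leader (provider pricing), multi-follower (user load splitting) Stackelberg interaction in the spirit of this abstract. The quadratic user cost, the linear provider unit cost, the fixed demand, and the repeated best-response iteration are illustrative assumptions; the paper's actual utilities, constraints, and solution differ.

```python
def user_split(prices, demand):
    """Follower best response: split demand to minimize sum(p_i*x_i + x_i**2)."""
    n = len(prices)
    lam = (2 * demand + sum(prices)) / n          # Lagrange multiplier of the demand constraint
    return [(lam - p) / 2 for p in prices]

def provider_best_price(other_prices_sum, n, demand, unit_cost):
    """Leader best response to the followers' closed-form reaction (needs n >= 2)."""
    return (2 * demand + other_prices_sum + (n - 1) * unit_cost) / (2 * (n - 1))

def iterate_to_equilibrium(n=2, demand=10.0, unit_cost=1.0, rounds=50):
    prices = [1.0] * n
    for _ in range(rounds):                        # repeated simultaneous best responses
        prices = [provider_best_price(sum(prices) - prices[i], n, demand, unit_cost)
                  for i in range(n)]
    return prices, user_split(prices, demand)

print(iterate_to_equilibrium())   # prices converge; the split sums to the demand
```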

Real-time 3D Audio Downmixing System based on Sound Rendering for the Immersive Sound of Mobile Virtual Reality Applications

  • Hong, Dukki;Kwon, Hyuck-Joo;Kim, Cheong Ghil;Park, Woo-Chan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.12 / pp.5936-5954 / 2018
  • Eight of the ten largest technology companies in the world have been involved in some way with the coming mobile VR revolution since Facebook acquired Oculus. This trend has allowed mobile VR technology to achieve remarkable growth in both academia and industry. Accordingly, reproducing acoustic cues so that users can experience more realistic scenes is increasingly important, because auditory cues can enhance the perception of a complicated surrounding environment beyond what the visual system provides in VR. This paper presents a hardware-based audio downmixing system for auralization, a stage of the sound-rendering pipeline that can reproduce reality-like sound but requires high computation costs. The proposed system is verified on an FPGA platform, with special focus on hardware architectural designs for low power and real-time operation. The results show that the proposed system on an FPGA can downmix up to five sources at a real-time rate (52 FPS) with a low power consumption of 382 mW. Furthermore, the 3D sound generated by the proposed system achieved satisfactory sound quality in a user evaluation.
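
A minimal Python sketch of downmixing several spatialized sources into one stereo buffer, as a software analogue of the hardware stage described in this abstract. The 1/d distance attenuation and constant-power panning gains are assumptions for illustration, not the paper's auralization pipeline.

```python
import numpy as np

def downmix(sources, azimuths, distances):
    """Mix mono sources into stereo with simple panning and 1/d attenuation."""
    length = max(len(s) for s in sources)
    out = np.zeros((length, 2))
    for s, az, d in zip(sources, azimuths, distances):
        pan = (az + 90.0) / 180.0                       # map [-90, 90] degrees to [0, 1]
        gain_l = np.cos(pan * np.pi / 2) / max(d, 1.0)  # constant-power panning, left
        gain_r = np.sin(pan * np.pi / 2) / max(d, 1.0)  # constant-power panning, right
        out[:len(s), 0] += gain_l * s
        out[:len(s), 1] += gain_r * s
    return out

rng = np.random.default_rng(1)
srcs = [rng.standard_normal(48000) for _ in range(5)]   # five 1-second sources at 48 kHz
stereo = downmix(srcs, azimuths=[-60, -30, 0, 30, 60], distances=[1, 2, 1, 3, 2])
print(stereo.shape)   # (48000, 2)
```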

A Sound Interpolation Method Using Deep Neural Network for Virtual Reality Sound (가상현실 음향을 위한 심층신경망 기반 사운드 보간 기법)

  • Choi, Jaegyu;Choi, Seung Ho
    • Journal of Broadcast Engineering / v.24 no.2 / pp.227-233 / 2019
  • In this paper, we propose a deep neural network-based sound interpolation method for realizing virtual reality sound. The sound at a point between two positions is generated from the acoustic signals obtained at those two positions. Sound interpolation can be performed by statistical methods such as the arithmetic or geometric mean, but these are insufficient to reflect actual nonlinear acoustic characteristics. To solve this problem, we perform sound interpolation by training a deep neural network on the acoustic signals of the two points and the target point. Experimental results show that the deep neural network-based sound interpolation method is superior to the statistical methods.
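
A minimal NumPy sketch of learning an interpolation from two measurement-point signals to a target-point signal, in the spirit of this abstract. The tiny one-hidden-layer network, the sample-wise inputs, and the synthetic nonlinear mix used as training data are illustrative assumptions, not the paper's network, features, or recordings.

```python
import numpy as np

rng = np.random.default_rng(0)
x1, x2 = rng.standard_normal(2048), rng.standard_normal(2048)
target = 0.6 * x1 + 0.4 * x2 + 0.05 * np.tanh(x1 * x2)   # synthetic nonlinear mix

X = np.stack([x1, x2], axis=1)   # inputs: corresponding samples from the two points
y = target[:, None]              # output: sample at the target point

# One-hidden-layer MLP trained with plain batch gradient descent.
W1 = rng.standard_normal((2, 16)) * 0.1
b1 = np.zeros(16)
W2 = rng.standard_normal((16, 1)) * 0.1
b2 = np.zeros(1)
lr = 0.01
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)
    err = (h @ W2 + b2) - y
    gW2 = h.T @ err / len(X)
    gb2 = err.mean(0)
    gh = err @ W2.T * (1 - h ** 2)
    gW1 = X.T @ gh / len(X)
    gb1 = gh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse_mlp = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
mse_mean = float(np.mean((X.mean(axis=1, keepdims=True) - y) ** 2))   # arithmetic-mean baseline
print(f"MLP MSE: {mse_mlp:.4f}  vs  arithmetic-mean MSE: {mse_mean:.4f}")
```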