• Title/Summary/Keyword: Immersive 3D View


Development of Immersive Augmented Reality interface for Minimally Invasive Surgery (증강현실 기반의 최소침습수술용 인터페이스의 개발)

  • Moon, Jin-Ki;Park, Shin-Suk;Kim, Eugene;Kim, Jin-Wook
    • The Journal of Korea Robotics Society / v.3 no.1 / pp.58-67 / 2008
  • This study developed a novel augmented reality interface for minimally invasive surgery. The augmented reality technique can alleviate the sensory feedback problem inherent in laparoscopic surgery. The augmented reality system merges the real laparoscope image with a 3D patient model reconstructed from diagnostic medical images such as CT and MRI data. Using the reconstructed 3D patient model, the AR interface can display structures of the patient's body that lie outside the visual field of the laparoscope, thereby extending the limited visual information the laparoscope provides. In our augmented reality system, the laparoscopic view is located at the center of a wide-angle concave screen and the reconstructed 3D patient model is displayed around it. Using a joystick, the laparoscopic view and the reconstructed 3D patient model view are changed concurrently. With this system, the surgeon can see the peritoneal cavity from a wide angle of view without having to move the laparoscope. Since the concave screen provides an immersive environment, the surgeon feels as if she is inside the patient's body. For these reasons, a surgeon can easily recognize the depth of inner structures of the patient and the positions of surgical instruments without moving the laparoscope, and can manipulate surgical instruments more accurately and quickly. The immersive augmented reality interface for minimally invasive surgery is therefore expected to reduce the physical and environmental load on the surgeon and increase the efficiency of MIS.

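A minimal sketch, not the paper's implementation, of the display idea described in the abstract above: the live laparoscope frame is placed at the center of a wider rendered view of the reconstructed patient model, so the two move together when the virtual camera is steered. Frame sizes and the simple paste-in compositing rule are assumptions.

```python
import numpy as np

def composite_center(model_view: np.ndarray, scope_frame: np.ndarray) -> np.ndarray:
    """Paste the laparoscope frame into the center of the rendered model view."""
    out = model_view.copy()
    H, W = model_view.shape[:2]
    h, w = scope_frame.shape[:2]
    y0, x0 = (H - h) // 2, (W - w) // 2
    out[y0:y0 + h, x0:x0 + w] = scope_frame   # real image replaces the virtual one
    return out

# Toy usage: a 720p rendered model view and a 480x640 laparoscope frame.
model_view = np.zeros((720, 1280, 3), dtype=np.uint8)
scope_frame = np.full((480, 640, 3), 128, dtype=np.uint8)
merged = composite_center(model_view, scope_frame)
```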

Light ID and HMD-AR Based Interactive Exhibition Design for Jeonju Hanok Village Immersive 3D View (전주 한옥마을의 실감 3D View를 위한 Light ID 및 HMD-AR 기반 인터렉티브 전시 설계)

  • Min, Byung-Jun;Mariappan, Vinayagam;Cha, Jae-Sang;Kim, Dae-Young;Cho, Ju-Phil
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.11 no.4 / pp.414-420 / 2018
  • Digital convergence is looking for new ways to engage visitors by superimposing virtual content on media captured from the real world. This paper proposes a Light ID based interactive immersive 3D exhibit view using HMD-AR technology. The approach requires no additional infrastructure in the exhibit area; it uses the lighting or display devices already installed there. The Light ID serves both as a location identifier and as a communication medium for accessing content, unlike a QR tag, which only provides download information through a web interface. The method exploits camera-based optical wireless communication (OWC) to receive media content on a smart device and deliver an immersive 3D visualization using AR. The proposed exhibition method was emulated on a GALAXY S8 smartphone and its visual performance was evaluated for Jeonju Hanok Village. The experimental results show that the proposed method can provide an immersive 3D view of exhibited objects in real time.
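A highly simplified sketch, under assumed names, of how a decoded Light ID might be mapped to exhibit content for AR display; the actual Light ID decoding over camera-based OWC and the HMD rendering are outside this illustration, and the IDs and URLs below are hypothetical.

```python
CONTENT_BY_LIGHT_ID = {                      # hypothetical ID-to-content table
    0x1A2B: "https://example.org/hanok/gate_3d_model",
    0x1A2C: "https://example.org/hanok/courtyard_3d_model",
}

def resolve_content(light_id: int):
    """Use the received Light ID as a location identifier and return its content URL."""
    return CONTENT_BY_LIGHT_ID.get(light_id)

assert resolve_content(0x1A2B) is not None
```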

Augmented System for Immersive 3D Expansion and Interaction

  • Yang, Ungyeon;Kim, Nam-Gyu;Kim, Ki-Hong
    • ETRI Journal / v.38 no.1 / pp.149-158 / 2016
  • In the field of augmented reality technologies, commercial optical see-through-type wearable displays have difficulty providing immersive visual experiences, because users perceive different depths between virtual views on display surfaces and see-through views to the real world. Many cases of augmented reality applications have adopted eyeglasses-type displays (EGDs) for visualizing simple 2D information, or video see-through-type displays for minimizing virtual- and real-scene mismatch errors. In this paper, we introduce an innovative optical see-through-type wearable display hardware, called an EGD. In contrast to common head-mounted displays, which are intended for a wide field of view, our EGD provides more comfortable visual feedback at close range. Users of an EGD device can accurately manipulate close-range virtual objects and expand their view to distant real environments. To verify the feasibility of the EGD technology, subject-based experiments and analysis are performed. The analysis results and EGD-related application examples show that EGD is useful for visually expanding immersive 3D augmented environments consisting of multiple displays.

A Method of Hole Filling for Atlas Generation in Immersive Video Coding (몰입형 비디오 부호화의 아틀라스 생성을 위한 홀 채움 기법)

  • Lim, Sung-Gyun;Lee, Gwangsoon;Kim, Jae-Gon
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2021.06a / pp.75-77 / 2021
  • The MPEG video group is developing the MIV (MPEG Immersive Video) standard for efficient coding of immersive video, together with a test model called TMIV (Test Model for Immersive Video), to render a desired view while providing motion parallax within a limited 3D space. Because a large number of view videos is needed to provide an immersive visual experience, highly efficient compression of this massive amount of video is unavoidable. TMIV converts the multiple input view videos into a small number of atlas videos to reduce the number of pixels to be coded. An atlas is a video in which a few selected basic view videos and patches cut from the regions of the remaining additional views that cannot be synthesized from the basic views are packed together. This paper proposes a method for filling the small holes that appear inside patches, for more efficient coding of the atlas video. The proposed method improves BD-rate performance by 1.2% compared to TMIV 8.0.

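A minimal sketch of the kind of small-hole filling described in the abstract above: invalid pixels inside a patch are filled from valid neighbours so the packed texture is smoother to encode. The iterative 3x3 averaging rule is an assumption for illustration, not the algorithm used in TMIV.

```python
import numpy as np

def fill_small_holes(texture: np.ndarray, valid: np.ndarray, max_iter: int = 8):
    """Iteratively fill invalid pixels with the mean of their valid 3x3 neighbours.

    texture: 2D array of sample values; valid: boolean mask of valid pixels.
    """
    tex = texture.astype(np.float32).copy()
    val = valid.copy()
    for _ in range(max_iter):
        holes = np.argwhere(~val)
        if holes.size == 0:
            break
        for y, x in holes:
            y0, y1 = max(y - 1, 0), min(y + 2, tex.shape[0])
            x0, x1 = max(x - 1, 0), min(x + 2, tex.shape[1])
            neigh_valid = val[y0:y1, x0:x1]
            if neigh_valid.any():                 # fill only if a valid neighbour exists
                tex[y, x] = tex[y0:y1, x0:x1][neigh_valid].mean()
                val[y, x] = True
    return tex, val
```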

Spatial Audio Technologies for Immersive Media Services (체감형 미디어 서비스를 위한 공간음향 기술 동향)

  • Lee, Y.J.;Yoo, J.;Jang, D.;Lee, M.;Lee, T.
    • Electronics and Telecommunications Trends / v.34 no.3 / pp.13-22 / 2019
  • Although virtual reality technology may not be deemed as having a satisfactory quality for all users, it tends to incite interest because of the expectation that the technology can allow one to experience something that they may never experience in real life. The most important aspect of this indirect experience is the provision of immersive 3D audio and video, which interacts naturally with every action of the user. The immersive audio faithfully reproduces an acoustic scene in a space corresponding to the position and movement of the listener, and this technology is also called spatial audio. In this paper, we briefly introduce the trend of spatial audio technology in view of acquisition, analysis, reproduction, and the concept of MPEG-I audio standard technology, which is being promoted for spatial audio services.

An Atlas Generation Method with Tiny Blocks Removal for Efficient 3DoF+ Video Coding (효율적인 3DoF+ 비디오 부호화를 위한 작은 블록 제거를 통한 아틀라스 생성 기법)

  • Lim, Sung-Gyun;Kim, Hyun-Ho;Kim, Jae-Gon
    • Journal of Broadcast Engineering / v.25 no.5 / pp.665-671 / 2020
  • MPEG-I is actively working on the standardization of coding for immersive video, which provides up to six degrees of freedom (6DoF) in terms of viewpoint. 3DoF+ video, which adds motion parallax to the omnidirectional view of 360 video, renders a view at any desired viewpoint using multiple view videos acquired in a limited 3D space covering upper-body motion at a fixed position. The MPEG-I visual group is developing a test model called TMIV (Test Model for Immersive Video) in the course of developing the 3DoF+ video coding standard. In the TMIV, the redundancy among the set of input view videos is removed, and several atlases are generated by packing patches containing the remaining texture and depth regions into frames as compactly as possible before coding. This paper presents an atlas generation method that removes small blocks in the atlas for more efficient 3DoF+ video coding. The proposed method shows BD-rate bit savings of 0.7% and 1.4% in natural and graphic sequences, respectively, compared to TMIV.
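A minimal sketch of the idea in the abstract above: before patches are packed into the atlas, isolated clusters of kept pixels in the pruning mask that are smaller than a threshold are dropped, since packing them costs more than they contribute. The threshold value and the use of SciPy connected-component labelling are assumptions, not the TMIV implementation.

```python
import numpy as np
from scipy import ndimage

def remove_tiny_blocks(prune_mask: np.ndarray, min_pixels: int = 64) -> np.ndarray:
    """Zero out connected regions of a boolean keep-mask smaller than min_pixels."""
    labels, n = ndimage.label(prune_mask)
    sizes = ndimage.sum(prune_mask, labels, index=np.arange(1, n + 1))
    keep = np.isin(labels, 1 + np.flatnonzero(sizes >= min_pixels))
    return prune_mask & keep
```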

Adaptive Spatio-Temporal Prediction for Multi-view Coding in 3D-Video (3차원 비디오 압축에서의 다시점 부호화를 위한 적응적 시공간적 예측 부호화)

  • 성우철;이영렬
    • Journal of Broadcast Engineering / v.9 no.3 / pp.214-224 / 2004
  • In this paper, an adaptive spatio-temporal predictive coding scheme based on H.264 is proposed for the encoding of 3D immersive media such as 3D image processing, 3DTV, and 3D videoconferencing. First, we propose spatio-temporal predictive coding that uses both same-view and inter-view images for the two GOP (group of pictures) structures IPPP and IBBP, in contrast to the conventional simulcast method. Second, a 2D inter-view direct mode is proposed for efficient prediction when the proposed spatio-temporal prediction uses the IBBP structure. The 2D inter-view direct mode is applied when the temporal direct mode in a B (bi-predictive) picture of H.264 refers to an inter-view image, since the temporal direct mode of the H.264 standard cannot be applied to inter-view images. The proposed method is compared to the conventional simulcast method in terms of PSNR (peak signal-to-noise ratio) for various 3D test video sequences, and shows better PSNR results than the simulcast method.
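A simplified sketch of the adaptive spatio-temporal idea above: for each block, the encoder may predict either from the previous frame of the same view (temporal) or from the same frame of a neighbouring view (inter-view), picking whichever gives the lower cost. The block-level SAD decision below is an illustration; real H.264-based coding also involves motion/disparity search, rate-distortion optimization, and mode syntax.

```python
import numpy as np

def sad(a: np.ndarray, b: np.ndarray) -> float:
    """Sum of absolute differences between two equally sized blocks."""
    return float(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def choose_reference(block, temporal_ref_block, interview_ref_block):
    """Return ('temporal' | 'inter-view', cost) for the cheaper prediction source."""
    cost_t = sad(block, temporal_ref_block)
    cost_v = sad(block, interview_ref_block)
    return ("temporal", cost_t) if cost_t <= cost_v else ("inter-view", cost_v)
```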

Efficient Representation of Patch Packing Information for Immersive Video Coding (몰입형 비디오 부호화를 위한 패치 패킹 정보의 효율적인 표현)

  • Lim, Sung-Gyun;Yoon, Yong-Uk;Kim, Jae-Gon
    • Proceedings of the Korean Society of Broadcast Engineers Conference / fall / pp.126-128 / 2021
  • The MPEG (Moving Picture Experts Group) video group is standardizing MIV (MPEG Immersive Video), an immersive video coding standard for 6DoF (Degrees of Freedom) that enables rendering a view at an arbitrary position and orientation in 3D space while providing the user with motion parallax. As part of the MIV standardization, the reference software TMIV (Test Model for Immersive Video) is also being developed and its coding performance is being improved progressively. To compress the massive 6DoF video composed of multiple views, TMIV removes the redundancy among the input view videos, turns the remaining regions into individual patches, and packs them into atlases to reduce the number of coded pixels. The positions of the patches packed into the atlas video are transmitted as metadata along with the compressed bitstream, and this paper proposes a method for representing this packing information more efficiently. The proposed method reduces the metadata by about 10% and improves end-to-end BD-rate performance by 0.1% compared to TMIV 10.0.

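A minimal sketch of the general idea of compacting patch packing metadata: instead of sending an absolute atlas position for every patch, positions can be sent as deltas from the previous patch, which tend to be smaller and cheaper to entropy-code. This is an illustrative scheme, not the syntax proposed in the paper above.

```python
def delta_encode_positions(positions):
    """positions: list of (x, y) atlas positions -> list of (dx, dy) deltas."""
    deltas, prev = [], (0, 0)
    for x, y in positions:
        deltas.append((x - prev[0], y - prev[1]))
        prev = (x, y)
    return deltas

def delta_decode_positions(deltas):
    """Inverse of delta_encode_positions."""
    out, x, y = [], 0, 0
    for dx, dy in deltas:
        x, y = x + dx, y + dy
        out.append((x, y))
    return out

positions = [(0, 0), (0, 128), (64, 128), (64, 256)]
assert delta_decode_positions(delta_encode_positions(positions)) == positions
```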

Neural Network-Based Post Filtering of Atlas for Immersive Video Coding (몰입형 비디오 부호화를 위한 신경망 기반 아틀라스 후처리 필터링)

  • Lim, Sung-Gyun;Lee, Kun-Woo;Kim, Jeong-Woo;Yoon, Yong-Uk;Kim, Jae-Gon
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2022.06a / pp.239-241 / 2022
  • The MIV (MPEG Immersive Video) standard efficiently compresses views captured at various positions in a limited 3D space and provides the user with six degrees of freedom (6DoF) of immersion at an arbitrary position and orientation. TMIV (Test Model for Immersive Video), the reference software of MIV, removes the redundant regions among the multiple input views, generates an atlas by packing the remaining regions as patches, and compresses and transmits it. Unlike ordinary video, an atlas video contains many discontinuities, which greatly degrade coding efficiency. This paper presents a neural network-based post-filtering method for reducing the coding loss of atlas video. Compared to the existing TMIV, the proposed method improves the reconstruction quality of the atlas.

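A minimal PyTorch sketch of a residual post-filter of the kind described above: a few convolutional layers predict a correction that is added to the decoded atlas frame. The layer sizes and the residual structure are assumptions for illustration; the paper's network architecture, training, and TMIV integration are not reproduced here.

```python
import torch
import torch.nn as nn

class AtlasPostFilter(nn.Module):
    """Small residual CNN applied to a decoded atlas frame."""
    def __init__(self, channels: int = 3, features: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(features, features, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(features, channels, 3, padding=1),
        )

    def forward(self, decoded_atlas: torch.Tensor) -> torch.Tensor:
        return decoded_atlas + self.body(decoded_atlas)   # residual correction

# Toy usage on a random "decoded atlas" tile.
model = AtlasPostFilter()
restored = model(torch.rand(1, 3, 64, 64))
```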

3D Image Capturing and 3D Content Generation for Realistic Broadcasting (실감방송을 위한 3차원 영상 촬영 및 3차원 콘텐츠 제작 기술)

  • Kang, Y.S.;Ho, Y.S.
    • Smart Media Journal / v.1 no.1 / pp.10-16 / 2012
  • Stereo and multi-view cameras have been used to capture three-dimensional (3D) scenes for 3D content generation. In addition, depth sensors are frequently used to obtain 3D information of the captured scene in real time. In order to generate 3D content from captured images, several preprocessing operations are needed to reduce noise and distortion in the images. 3D content is considered the basic medium for realistic broadcasting that provides a photo-realistic and immersive experience to users. In this paper, we review technical trends in 3D image capturing and content generation, and explain some core techniques of 3D image processing for realistic 3DTV broadcasting.

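A minimal sketch of one core step mentioned in the abstract above, turning a captured depth map into 3D points with a pinhole camera model; the intrinsic parameters below are made-up example values, and real pipelines additionally involve calibration, rectification, and denoising.

```python
import numpy as np

def depth_to_points(depth: np.ndarray, fx: float, fy: float, cx: float, cy: float):
    """Back-project an HxW depth map (metres) to an HxWx3 array of 3D points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)

# Toy usage with a flat 2 m depth map and example intrinsics.
points = depth_to_points(np.full((480, 640), 2.0), fx=525.0, fy=525.0, cx=320.0, cy=240.0)
```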