• Title/Summary/Keyword: 3-D video


Video Scene Change Detection Using a 3-D DCT (3-D DCT를 이용한 비디오 장면 전환 검출)

  • 우석훈;원치선
    • Proceedings of the IEEK Conference / 2003.11a / pp.157-160 / 2003
  • In this paper, we propose a simple and effective video scene change detection algorithm using a 3-D DCT. The 3-D DCT we employ is a 2$\times$2$\times$2 DCT whose computation consists only of addition and shift operations. The simple average values of the multiresolution-represented video obtained with the 2$\times$2$\times$2 DCT are used as the detection feature vector. (A toy sketch of this block-average feature follows the entry.)

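The abstract only states that the 2$\times$2$\times$2 DCT needs nothing beyond additions and shifts and that block averages serve as the feature vector. The sketch below illustrates that idea under stated assumptions: it computes only the DC term of each 2$\times$2$\times$2 cube with a sum and a right shift, and flags a scene change when consecutive feature vectors differ by more than a fixed threshold; the block layout and the decision rule are illustrative choices, not the authors' exact algorithm.

```python
import numpy as np

def block_dc_features(frame_pair):
    """Average (DC term) of every 2x2x2 cube spanning two consecutive gray frames.
    The sum of the 8 samples is divided by 8 with a single right shift, mirroring
    the add/shift-only claim. `frame_pair` is a (2, H, W) uint8 array, H and W even."""
    t, h, w = frame_pair.shape
    cubes = frame_pair.reshape(t, h // 2, 2, w // 2, 2).astype(np.int32)
    sums = cubes.sum(axis=(0, 2, 4))     # add the 8 samples of each cube
    return (sums >> 3).ravel()           # >> 3 divides by 8 -> block average

def detect_scene_changes(gray_video, threshold=12.0):
    """Flag a change between consecutive frame pairs whose feature vectors differ
    by more than `threshold` on average (the threshold is an illustrative value)."""
    feats = [block_dc_features(gray_video[i:i + 2])
             for i in range(0, len(gray_video) - 1, 2)]
    return [2 * k for k in range(1, len(feats))
            if np.abs(feats[k] - feats[k - 1]).mean() > threshold]
```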

Video based Point Cloud Compression with Versatile Video Coding (Versatile Video Coding을 활용한 Video based Point Cloud Compression 방법)

  • Gwon, Daeheyok;Han, Heeji;Choi, Haechul
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2020.07a / pp.497-499 / 2020
  • A point cloud is one way of representing 3D data using a large number of 3D points, and with advances in multimedia acquisition and processing technology it is drawing attention in many fields. In particular, point clouds can capture and represent 3D data with high precision. However, point clouds carry an enormous amount of data, so efficient compression is essential. Accordingly, the international standardization body Moving Picture Experts Group is developing standards for Video based Point Cloud Compression (V-PCC) and Geometry based Point Cloud Coding for the efficient compression of point cloud data. Of these, V-PCC has the advantage of high applicability because it compresses point clouds using the existing High Efficiency Video Coding (HEVC) standard. In this paper, we show that the compression performance of V-PCC can be further improved by replacing the HEVC codec used in V-PCC with Versatile Video Coding, whose standardization was scheduled for completion in July 2020.


Technical Improvement Using a Three-Dimensional Video System for Laparoscopic Partial Nephrectomy

  • Komatsuda, Akari;Matsumoto, Kazuhiro;Miyajima, Akira;Kaneko, Gou;Mizuno, Ryuichi;Kikuchi, Eiji;Oya, Mototsugu
    • Asian Pacific Journal of Cancer Prevention / v.17 no.5 / pp.2475-2478 / 2016
  • Background: Laparoscopic partial nephrectomy is one of the major surgical techniques for small renal masses. However, it is difficult to complete the cutting and suturing procedures within acceptable time periods. To overcome this difficulty, we applied a three-dimensional (3D) video system to laparoscopic partial nephrectomy and evaluated its utility. Materials and Methods: We retrospectively enrolled 31 patients who underwent laparoscopic partial nephrectomy between November 2009 and June 2014. A conventional two-dimensional (2D) video system was used in 20 patients, and a 3D video system in 11. Patient characteristics and video system type (2D or 3D) were recorded, and correlations with perioperative outcomes were analyzed. Results: Mean patient age was $55.8{\pm}12.4$ years, mean body mass index was $25.7{\pm}3.9kg/m^2$, mean tumor size was $2.0{\pm}0.8cm$, mean R.E.N.A.L nephrometry score was $6.9{\pm}1.9$, and clinical stage was T1a in all patients. There were no significant differences in operative time (p=0.348), pneumoperitoneum time (p=0.322), cutting time (p=0.493), estimated blood loss (p=0.335), or the rate of complications of Clavien grade >II (p=0.719) between the two groups. However, warm ischemic time was significantly shorter in the 3D group than in the 2D group (16.1 min vs. 21.2 min, p=0.021), which resulted from a shorter suturing time (9.1 min vs. 15.2 min, p=0.008). No open conversion occurred in either group. Conclusions: A 3D video system shortens warm ischemic time in laparoscopic partial nephrectomy and thus may be useful in improving the procedure.

Implementation of AR Remote Rendering Techniques for Real-time Volumetric 3D Video

  • Lee, Daehyeon;Lee, Munyong;Lee, Sang-ha;Lee, Jaehyun;Kwon, Soonchul
    • International Journal of Internet, Broadcasting and Communication / v.12 no.2 / pp.90-97 / 2020
  • Recently, with the growth of the mixed reality industrial infrastructure, related convergence research has been proposed. Real-time mixed reality services such as remote video conferencing require research on real-time acquisition, processing, and transfer methods. This paper aims to implement an AR remote rendering method for volumetric 3D video data. We propose and implement two modules: one that parses the volumetric 3D video into a game engine, and one that performs server-side rendering. In our experiment, the volumetric 3D video sequence data of about 15 MB was compressed by 6-7%, and the remote module streamed at 27 fps at a resolution of 1200 by 1200. The results of this paper are expected to be applied to AR cloud services. (A minimal sketch of a server-side streaming loop follows the entry.)
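
The entry does not specify how the server rendering module packages frames, so the following is only a minimal sketch of a server-side render-encode-send loop paced to the reported 27 fps and 1200 by 1200 resolution. The socket framing, the zlib compression (standing in for whatever codec the real pipeline uses), and the hypothetical `render_frame` callback are assumptions for illustration.

```python
import socket
import struct
import time
import zlib

def stream_rendered_frames(render_frame, host="127.0.0.1", port=9000,
                           fps=27, width=1200, height=1200):
    """Send server-rendered frames to a client as length-prefixed, zlib-compressed
    raw RGBA buffers, paced to the target frame rate. zlib stands in for a real
    video codec; `render_frame(width, height)` is a hypothetical callback assumed
    to return a bytes object of size width * height * 4."""
    period = 1.0 / fps
    with socket.create_connection((host, port)) as sock:
        while True:
            start = time.monotonic()
            payload = zlib.compress(render_frame(width, height))
            header = struct.pack("!I", len(payload))      # 4-byte big-endian length
            sock.sendall(header + payload)
            time.sleep(max(0.0, period - (time.monotonic() - start)))  # hold ~27 fps
```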

Media GIS Web Service Architecture using Three-Dimensional GIS Database

  • Kim, Sung-Soo;Kim, Kyong-Ho;Kim, Kyung-Ok
    • Proceedings of the KSRS Conference / 2002.10a / pp.117-122 / 2002
  • In this paper, we propose a Media GIS web service architecture using a 3D geographical database and GPS-related data obtained from the 4S-Van. We introduce a novel interoperable geographical data service concept, so-called Virtual World Mapping (VWM), which maps the 3D graphics world onto real-world video. The proposed method can easily retrieve geographical information and attributes to reconstruct the 3D virtual space corresponding to a given frame of a video sequence. The proposed system architecture also has the advantage of providing a geographical information service over a video stream without any image processing. In addition to describing the details of our components, we present a Media GIS web service system using GeoVideo Server, which performs the VWM technique. (A pinhole-projection sketch of this frame-to-GIS mapping follows the entry.)

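The abstract does not detail how Virtual World Mapping aligns the 3D GIS database with a video frame. The sketch below only illustrates the kind of pose-based pinhole projection that would let geo-referenced attributes be overlaid on a frame using recorded camera position and orientation (e.g., GPS/INS data from the 4S-Van) instead of image processing; the camera model, focal length, and principal point are illustrative assumptions.

```python
import numpy as np

def project_gis_point(point_world, cam_pos, R_world_to_cam, focal_px,
                      principal=(640.0, 360.0)):
    """Project a geo-referenced 3D point into a video frame with a pinhole model,
    given the camera position and world-to-camera rotation recorded with the
    footage. Returns pixel coordinates (u, v), or None if the point is behind
    the camera. All parameter values here are illustrative."""
    p = np.asarray(point_world, float) - np.asarray(cam_pos, float)
    x, y, z = np.asarray(R_world_to_cam, float) @ p
    if z <= 0:                    # point lies behind the image plane
        return None
    return (focal_px * x / z + principal[0],
            focal_px * y / z + principal[1])
```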

Web-Based Media GIS Architecture Using the Virtual World Mapping Technique

  • Kim, Sung-Soo;Kim, Kyong-Ho;Kim, Kyoung-Ok
    • Korean Journal of Remote Sensing / v.19 no.1 / pp.71-80 / 2003
  • In this paper, we propose a web-based Media GIS architecture using a 3D geographical database and GPS-related data obtained from the 4S-Van. We introduce a novel interoperable geographical data service concept, so-called Virtual World Mapping (VWM), which maps the 3D graphics world onto real-world video. The proposed method can easily retrieve geographical information and attributes to reconstruct the 3D virtual space corresponding to a given frame of a video sequence. The proposed system architecture also has the advantage of providing a geographical information service over a video stream without any image processing. In addition to describing the details of our components, we present a Media GIS web service system using GeoVideoServer, which performs the VWM technique.

Video Augmentation by Image-based Rendering

  • Seo, Yong-Duek;Kim, Seung-Jin;Sang, Hong-Ki
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 1998.06b / pp.147-153 / 1998
  • This paper provides a method for video augmentation using image interpolation. In computer graphics or augmented reality, 3D information about a model object is necessary to generate 2D views of the model, which are then inserted into or overlaid on environmental views or real video frames. However, we require no three-dimensional model; instead, images of the model object taken at a few locations are used to render views according to the motion of the video camera, which is calculated by an SFM algorithm using point matches under a weak-perspective (scaled-orthographic) projection model. Thus, a linear view interpolation algorithm, rather than a 3D ray-tracing method, is applied to obtain views of the model at viewpoints different from those of the model views. In order to obtain novel views that agree with the camera motion, the camera coordinate system is embedded into the model coordinate system at initialization time, on the basis of 3D information recovered from the video images and the model views, respectively. During the sequence, motion parameters from the video frames are used to compute interpolation parameters, and the rendered model views are overlaid on the corresponding video frames. Experimental results for real video frames and model views are given. Finally, the limitations of the method and subjects for future research are discussed. (A minimal sketch of linear view interpolation follows the entry.)

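The core computational step named in the abstract is linear view interpolation under a weak-perspective model. The following minimal sketch shows that operation on matched 2D points of two model views; how the interpolation parameter `alpha` is derived from the estimated camera motion is not shown, and the function name is illustrative.

```python
import numpy as np

def interpolate_view(points_a, points_b, alpha):
    """Linear view interpolation: each point of the novel view is a convex
    combination of its positions in two reference model views. Under a
    weak-perspective (scaled-orthographic) camera such in-between views are
    geometrically consistent, which is what lets view interpolation replace
    3D ray tracing. `alpha` in [0, 1] would come from the estimated camera
    motion; here it is just a free parameter."""
    a = np.asarray(points_a, dtype=float)   # (N, 2) matched points in view A
    b = np.asarray(points_b, dtype=float)   # (N, 2) the same points in view B
    return (1.0 - alpha) * a + alpha * b    # (N, 2) points of the interpolated view
```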

Video Mosaics in 3D Space

  • Chon, Jaechoon;Fuse, Takashi;Shimizu, Eihan
    • Proceedings of the KSRS Conference / 2003.11a / pp.390-392 / 2003
  • Video mosaicing techniques have been widely used in virtual reality environments. Especially in the GIS field, video mosaics are becoming more and more common for representing urban environments. Such applications mainly use spherical or panoramic mosaics, which are based on images taken from a camera rotating around its nodal point; the viewpoint, however, is limited to locations within a small area. On the other hand, 2D mosaics, which are based on images taken from a translating camera, can acquire data over a wide area, but they still have a problem: they cannot be applied to images taken from a camera rotating through a large angle. To compensate for these problems, we propose a novel method for creating video mosaics in 3D space. The proposed algorithm consists of four steps: feature-based optical flow detection, camera orientation, 2D image projection, and image registration in 3D space. All of the processes are fully automatic and were successfully implemented and tested on real images. (A sketch of the flow-tracking and registration steps follows the entry.)

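Of the four steps listed, the feature-based optical flow and the registration step can be sketched with standard tools. The snippet below is a simplified stand-in, assuming OpenCV: it tracks corners with pyramidal Lucas-Kanade flow and fits a RANSAC homography between consecutive frames; the paper's camera-orientation and 3D-space registration steps are more elaborate than a single homography.

```python
import cv2

def register_consecutive_frames(prev_gray, next_gray):
    """Track sparse corners with pyramidal Lucas-Kanade optical flow and fit a
    RANSAC homography between two consecutive gray frames. This is only a
    stand-in for the paper's camera-orientation and 3D registration steps."""
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                       qualityLevel=0.01, minDistance=7)
    pts_next, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray,
                                                      pts_prev, None)
    ok = status.ravel() == 1                       # keep successfully tracked points
    H, _mask = cv2.findHomography(pts_prev[ok], pts_next[ok], cv2.RANSAC, 3.0)
    return H
```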

A Study on Generation of Free Stereo Mosaic Image Using Video Sequences (비디오 프레임 영상을 이용한 자유 입체 모자이크 영상 제작에 관한 연구)

  • Noh, Myoung-Jong;Cho, Woo-Sug;Park, June-Ku
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.27 no.4 / pp.453-460 / 2009
  • To construct 3D information from aerial photographs or video sequences, left and right stereo images with different viewing angles must be prepared over the overlapping area. For video, left and right stereo images can be generated by mosaicking left and right slice images extracted from consecutive frames. This paper therefore focuses on generating left and right stereo mosaic images from which 3D information can be constructed, making the best use of the video sequences. In stereo mosaic generation, the motion parameters between consecutive frames must first be determined. To determine them, a free mosaic method is applied that uses geometric relationships between consecutive frame images, such as relative orientation parameters, without GPS/INS geo-data. After the motion parameters are determined, the mosaic image is generated in four steps: image registration, image slicing, determination of the stitching line, and 3D image mosaicking. As experimental results, the generated stereo mosaic images are shown and their x- and y-parallax are analyzed. (A strip-slicing sketch of the stereo mosaic idea follows the entry.)
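
The slicing-and-mosaicking idea can be illustrated with a toy pushbroom-style sketch: fixed vertical strips taken ahead of and behind the frame centre are concatenated over time into two mosaics with different viewing angles. The fixed strip width and offset are assumptions for illustration; the paper instead registers frames and places the stitching lines using the estimated relative orientation.

```python
import numpy as np

def pushbroom_stereo_mosaics(frames, offset=80, strip=4):
    """Toy stereo mosaic pair from a translating-camera video: from every frame,
    one narrow vertical strip is cut `offset` pixels ahead of the frame centre
    and one the same distance behind it; concatenating each set over time gives
    two mosaics that see the scene from different viewing angles (which one acts
    as the left eye depends on the motion direction)."""
    cx = frames[0].shape[1] // 2
    fwd = slice(cx + offset, cx + offset + strip)          # strip ahead of centre
    bwd = slice(cx - offset - strip, cx - offset)          # strip behind centre
    mosaic_fwd = np.concatenate([f[:, fwd] for f in frames], axis=1)
    mosaic_bwd = np.concatenate([f[:, bwd] for f in frames], axis=1)
    return mosaic_fwd, mosaic_bwd
```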

Effect of 2D Forest Video Viewing and Virtual Reality Forest Video Viewing on Stress Reduction in Adults (2D 숲동영상 및 Virtual Reality 숲동영상 시청이 성인의 스트레스 감소에 미치는 영향)

  • Hong, Sungjun;Joung, Dawou;Lee, Jeongdo;Kim, Da-young;Kim, Soojin;Park, Bum-Jin
    • Journal of Korean Society of Forest Science / v.108 no.3 / pp.440-453 / 2019
  • This study was carried out to investigate the effect of watching a two-dimensional (2D) forest video and a virtual reality (VR) forest video on stress reduction in adults. Experiments were conducted in an artificial climate room with 40 participating subjects. After stress was induced, each subject watched a 2D gray video, a 2D forest video, or a VR forest video for 5 minutes. Autonomic nervous system activity was evaluated continuously during the experiment by measuring heart rate variability, and after each experiment the subject's psychological state was evaluated with a questionnaire. The 2D forest video decreased the viewer's stress index, increased HF, and reduced heart rate compared with the 2D gray video. The VR forest video had a greater stress-index reduction effect, LF/HF increase effect, and heart-rate reduction effect than the 2D gray video. Psychological measurements showed that subjects felt more comfortable, natural, and calm when watching the 2D forest video or the VR forest video than when watching the 2D gray video. We also found that the 2D forest video and the VR forest video increased positive emotions and reduced negative emotions compared with the 2D gray video. Based on these results, it can be concluded that watching the 2D forest and VR forest videos reduces the stress index and heart rate compared with watching the 2D gray video. It is therefore considered that the 2D forest video increases the activity of the parasympathetic nervous system, while the VR forest video increases the activity of the sympathetic nervous system. The increased sympathetic activity while watching the VR forest video is judged to reflect positive sympathetic responses, such as novelty and curiosity, rather than negative ones, such as stress and tension. The results of this study are expected to provide a basis for examining the visual effects of forest healing and to broaden the use of VR, a fourth-industrial-revolution technology, in the forestry field. (A sketch of how an LF/HF ratio is typically computed from heart-rate data follows the entry.)
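
The study reports HF power and the LF/HF ratio from heart rate variability. As context, the sketch below shows a conventional way such an index is computed from RR intervals (resample the tachogram, estimate a Welch power spectrum, integrate the standard LF and HF bands); the band limits and resampling rate are conventional choices and are not taken from the paper.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.interpolate import interp1d
from scipy.signal import welch

def lf_hf_ratio(rr_ms, fs=4.0):
    """LF/HF ratio from a series of RR intervals in milliseconds: resample the
    tachogram to an even 4 Hz grid, estimate its power spectrum with Welch's
    method, and integrate power over the conventional LF (0.04-0.15 Hz) and
    HF (0.15-0.40 Hz) bands."""
    rr = np.asarray(rr_ms, dtype=float)
    t = np.cumsum(rr) / 1000.0                             # beat times in seconds
    grid = np.arange(t[0], t[-1], 1.0 / fs)
    rr_even = interp1d(t, rr, kind="cubic")(grid)          # evenly sampled tachogram
    f, psd = welch(rr_even - rr_even.mean(), fs=fs,
                   nperseg=min(256, len(rr_even)))
    lf_band = (f >= 0.04) & (f < 0.15)
    hf_band = (f >= 0.15) & (f < 0.40)
    lf = trapezoid(psd[lf_band], f[lf_band])
    hf = trapezoid(psd[hf_band], f[hf_band])
    return lf / hf
```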