• Title/Summary/Keyword: 3D video


Effect of 2D Forest Video Viewing and Virtual Reality Forest Video Viewing on Stress Reduction in Adults (2D 숲동영상 및 Virtual Reality 숲동영상 시청이 성인의 스트레스 감소에 미치는 영향)

  • Hong, Sungjun;Joung, Dawou;Lee, Jeongdo;Kim, Da-young;Kim, Soojin;Park, Bum-Jin
    • Journal of Korean Society of Forest Science
    • /
    • v.108 no.3
    • /
    • pp.440-453
    • /
    • 2019
  • This study was carried out to investigate the effect of watching a two-dimensional (2D) forest video and a virtual reality (VR) forest video on stress reduction in adults. Experiments were conducted in an artificial climate room with 40 participating subjects. After stress was induced, each subject watched a 2D gray video, a 2D forest video, or a VR forest video for 5 minutes. Autonomic nervous system activity was evaluated continuously during the experiment by measuring heart rate variability, and after each session the subject's psychological state was assessed with a questionnaire. The 2D forest video decreased the viewer's stress index, increased HF, and reduced heart rate compared with the 2D gray video. The VR forest video showed greater stress-index reduction, LF/HF increase, and heart-rate reduction than the 2D gray video. Psychological measurements showed that subjects felt more comfortable, natural, and calm when watching the 2D forest video or the VR forest video than when watching the 2D gray video, and both forest videos increased positive emotions and reduced negative emotions relative to the 2D gray video. Based on these results, it can be concluded that watching the 2D forest and VR forest videos reduces the stress index and heart rate compared with watching the 2D gray video. It is therefore considered that the 2D forest video increases parasympathetic nervous system activity, while the VR forest video increases sympathetic nervous system activity. The increased sympathetic activity while watching the VR forest video is judged to be positive sympathetic activity, associated with novelty and curiosity, rather than negative sympathetic activity, such as stress and tension. These results are expected to serve as a basis for examining the visual effects of forest healing, with the hope that the use of VR, a fourth-industrial-revolution technology, in the forestry field will broaden.
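
As a rough illustration of the heart-rate-variability metrics this abstract relies on (HF power and the LF/HF ratio), the sketch below computes LF/HF from RR intervals with NumPy/SciPy. It is not the study's analysis pipeline; the 4 Hz resampling rate, the standard LF (0.04-0.15 Hz) and HF (0.15-0.40 Hz) bands, and the simulated RR data are assumptions.

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import welch

def lf_hf_ratio(rr_ms, fs=4.0):
    """Estimate the LF/HF ratio from RR intervals (in ms) via a Welch PSD.

    Hypothetical helper, not the study's pipeline: RR intervals are
    resampled to an evenly spaced series before spectral analysis.
    """
    t = np.cumsum(rr_ms) / 1000.0                    # beat times in seconds
    t_even = np.arange(t[0], t[-1], 1.0 / fs)        # uniform 4 Hz time grid
    rr_even = interp1d(t, rr_ms, kind="cubic")(t_even)
    f, pxx = welch(rr_even - rr_even.mean(), fs=fs, nperseg=min(256, len(rr_even)))
    lf = np.trapz(pxx[(f >= 0.04) & (f < 0.15)], f[(f >= 0.04) & (f < 0.15)])
    hf = np.trapz(pxx[(f >= 0.15) & (f < 0.40)], f[(f >= 0.15) & (f < 0.40)])
    return lf / hf

# Example on simulated RR intervals around 800 ms (~75 bpm)
rr = 800 + 30 * np.random.randn(400)
print(f"LF/HF = {lf_hf_ratio(rr):.2f}")
```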

Interactive Super Multi-view Content Technology (인터랙티브 초다시점 콘텐츠 제작 기술)

  • Cheong, J.S.;Ghyme, S.;Heo, G.S.;Jeong, I.K.
    • Electronics and Telecommunications Trends
    • /
    • v.32 no.5
    • /
    • pp.39-48
    • /
    • 2017
  • Since the world's first commercial 3D film using red-blue glasses was introduced in 1922, remarkable progress has been made in the field of 3D video. 3D video content gained enormous popularity with the movie "Avatar," which greatly increased sales of 3D TVs. This momentum has weakened owing to a lack of 3D content. However, the recent trend toward virtual reality (VR) and augmented reality (AR) has made 360 VR video and 3D games using head-mounted displays widespread. All of the experiences mentioned above require wearing glasses to enjoy 3D content. Super multi-view content technology, on the other hand, enables viewers to enjoy 3D content without glasses on a super multi-view display. In this article, we introduce the technologies developed by ETRI for creating super multi-view content, interacting with it, and authoring it.

Video Subband Coding using Quad-Tree Algorithm (쿼드트리 알고리즘을 이용한 비디오 서브밴드 코딩)

  • An, Chong-Koo;Chu, Hyung-Suk
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.6 no.3
    • /
    • pp.120-126
    • /
    • 2005
  • This paper presents a 3D wavelet-based video compression system using a quad-tree algorithm. The system removes the temporal correlation of the input sequences using a motion compensation filter and decomposes them into spatio-temporal subbands using a spatial wavelet transform. The proposed system allocates a higher bit rate to the low-frequency image of the 3D wavelet sequences and improves the PSNR of the reconstructed image by 0.64 dB compared with H.263. In addition to limiting the propagation of motion compensation errors through the 3D wavelet transform, the proposed system progressively transmits the input sequence according to resolution and rate scalability.
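
For readers unfamiliar with spatio-temporal subband decomposition, here is a minimal sketch of a one-level 3D wavelet split using PyWavelets. The paper's motion compensation filter and quad-tree bit allocation are omitted, and the Haar temporal filter and frame dimensions are assumptions.

```python
import numpy as np
import pywt

def wavelet_3d(frames, wavelet="haar"):
    """One-level spatio-temporal wavelet decomposition of a video block.

    Simplified sketch: a temporal Haar split of consecutive frame pairs,
    followed by a spatial 2D DWT of each frame in both temporal bands.
    Motion-compensated temporal filtering is deliberately left out.
    """
    frames = np.asarray(frames, dtype=np.float64)
    even, odd = frames[0::2], frames[1::2]
    t_low = (even + odd) / np.sqrt(2.0)      # temporal low-pass band
    t_high = (even - odd) / np.sqrt(2.0)     # temporal high-pass band
    low_bands = [pywt.dwt2(f, wavelet) for f in t_low]
    high_bands = [pywt.dwt2(f, wavelet) for f in t_high]
    return low_bands, high_bands

# Example: eight random 64x64 frames
low, high = wavelet_3d(np.random.rand(8, 64, 64))
cA, (cH, cV, cD) = low[0]
print(cA.shape)    # (32, 32) approximation subband of the first low-pass frame
```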


No-reference quality assessment of dynamic sports videos based on a spatiotemporal motion model

  • Kim, Hyoung-Gook;Shin, Seung-Su;Kim, Sang-Wook;Lee, Gi Yong
    • ETRI Journal
    • /
    • v.43 no.3
    • /
    • pp.538-548
    • /
    • 2021
  • This paper proposes an approach to improve the performance of no-reference video quality assessment for sports videos with dynamic motion scenes using an efficient spatiotemporal model. In the proposed method, we divide the video sequences into video blocks and apply a 3D shearlet transform that can efficiently extract primary spatiotemporal features to capture dynamic natural motion scene statistics from the incoming video blocks. The concatenation of a deep residual bidirectional gated recurrent neural network and logistic regression is used to learn the spatiotemporal correlation more robustly and predict the perceptual quality score. In addition, conditional video block-wise constraints are incorporated into the objective function to improve quality estimation performance for the entire video. The experimental results show that the proposed method extracts spatiotemporal motion information more effectively and predicts the video quality with higher accuracy than the conventional no-reference video quality assessment methods.
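
A minimal sketch of the recurrent regression stage such a model might use is shown below in PyTorch. The 3D shearlet features, residual connections, and exact logistic-regression head of the paper are not reproduced; the feature dimension, hidden size, and mean pooling over blocks are assumptions.

```python
import torch
import torch.nn as nn

class BlockQualityRegressor(nn.Module):
    """Bidirectional GRU over per-block features with a sigmoid score head.

    Illustrative stand-in: each video block is assumed to be summarized
    by a generic 256-dimensional feature vector.
    """
    def __init__(self, feat_dim=256, hidden=128):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden, num_layers=2,
                          batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, x):                      # x: (batch, blocks, feat_dim)
        h, _ = self.gru(x)                     # (batch, blocks, 2 * hidden)
        block_scores = torch.sigmoid(self.head(h)).squeeze(-1)
        return block_scores.mean(dim=1)        # pooled video-level quality score

model = BlockQualityRegressor()
features = torch.randn(4, 20, 256)             # 4 videos, 20 blocks each
print(model(features).shape)                   # torch.Size([4])
```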

A Study of Video-Based Abnormal Behavior Recognition Model Using Deep Learning

  • Lee, Jiyoo;Shin, Seung-Jung
    • International journal of advanced smart convergence
    • /
    • v.9 no.4
    • /
    • pp.115-119
    • /
    • 2020
  • Recently, CCTV installations have been increasing rapidly in both the public and private sectors to prevent various crimes. As the number of CCTVs grows, video-based abnormal behavior detection in control systems has become one of the key technologies for safety, because surveillance personnel monitoring multiple CCTVs cannot manually track every abnormal behavior in the video. To solve this problem, research on recognizing abnormal behavior using deep learning is being actively conducted. In this paper, we propose a model for detecting abnormal behavior based on deep learning models that are currently widely used. Using the abnormal behavior video data provided by AI Hub, we performed a comparative experiment to detect abnormal behaviors such as violence and fainting in videos using 2D CNN-LSTM, 3D CNN, and I3D models. We hope that the experimental results of this abnormal behavior learning model will be helpful in developing intelligent CCTV.
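
As a toy illustration of the 3D CNN family compared in the paper, the PyTorch sketch below classifies short clips into two classes. The layer sizes, clip shape, and class count are assumptions, and the network is far smaller than I3D.

```python
import torch
import torch.nn as nn

class Tiny3DCNN(nn.Module):
    """Minimal 3D CNN for binary abnormal-behavior classification (toy model)."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, clips):                  # clips: (batch, 3, T, H, W)
        x = self.features(clips).flatten(1)
        return self.classifier(x)

model = Tiny3DCNN()
clips = torch.randn(2, 3, 16, 112, 112)        # two 16-frame RGB clips
print(model(clips).shape)                      # torch.Size([2, 2])
```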

Real-time Temporal Synchronization and Compensation in Stereoscopic Video (3D 입체 영상시스템의 좌-우 영상에 대한 실시간 동기 에러 검출 및 보정)

  • Kim, Giseok;Cho, Jae-Soo;Lee, Gwangsoon;Lee, Eung-Don
    • Journal of Broadcast Engineering
    • /
    • v.18 no.5
    • /
    • pp.680-690
    • /
    • 2013
  • In this paper, we propose a real-time temporal synchronization and compensation algorithm for stereoscopic video. Temporal asynchronies arise in the video editing stage and from differing transmission delays, and they can degrade the perceived 3D quality. The goal of temporal alignment is to detect and measure the temporal asynchrony and to recover synchronization of the two video streams. To this end, we developed a method that detects asynchronies between the left and right video streams based on a novel spatiogram, a richer representation that captures not only the values of the pixels but also their spatial relationships. The proposed spatiogram additionally incorporates changes in the spatial color distribution. Furthermore, we propose a block-based method for detecting the paired frame instead of a single-frame-based method. Various 3D experiments demonstrate the effectiveness of the proposed method.
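
To make the spatiogram idea concrete, the sketch below computes a simple second-order spatiogram (bin counts plus normalized spatial means) and a similarity score between two frames. It is an illustration only, not the paper's extended spatiogram with color-distribution changes; the bin count, the Gaussian spatial weighting, and sigma are assumptions.

```python
import numpy as np

def spatiogram(gray, bins=16):
    """Bin counts and mean pixel positions per intensity bin of a grayscale frame."""
    h, w = gray.shape
    ys, xs = np.mgrid[0:h, 0:w]
    idx = np.clip((gray.astype(float) / 256.0 * bins).astype(int), 0, bins - 1)
    counts = np.zeros(bins)
    means = np.zeros((bins, 2))
    for b in range(bins):
        mask = idx == b
        counts[b] = mask.sum()
        if counts[b] > 0:
            means[b] = [ys[mask].mean() / h, xs[mask].mean() / w]
    return counts / counts.sum(), means

def spatiogram_similarity(sg1, sg2, sigma=0.1):
    """Bhattacharyya-style similarity, down-weighted when bin positions disagree."""
    (c1, m1), (c2, m2) = sg1, sg2
    spatial = np.exp(-np.sum((m1 - m2) ** 2, axis=1) / (2 * sigma ** 2))
    return float(np.sum(np.sqrt(c1 * c2) * spatial))

frame = (np.random.rand(120, 160) * 255).astype(np.uint8)
print(spatiogram_similarity(spatiogram(frame), spatiogram(frame)))   # ~1.0 for identical frames
```

Searching candidate temporal offsets for the one that maximizes this similarity between left and right frames would be one way to locate, and then compensate, the misalignment.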

Video Augmentation of Virtual Object by Uncalibrated 3D Reconstruction from Video Frames (비디오 영상에서의 비보정 3차원 좌표 복원을 통한 가상 객체의 비디오 합성)

  • Park Jong-Seung;Sung Mee-Young
    • Journal of Korea Multimedia Society
    • /
    • v.9 no.4
    • /
    • pp.421-433
    • /
    • 2006
  • This paper proposes a method for inserting virtual objects into a real video stream based on feature tracking and camera pose estimation from a set of single-camera video frames. To insert or modify 3D shapes in target video frames, the transformation from the 3D objects to their projection onto the video frames must be recovered. It is shown that, without a camera calibration process, 3D reconstruction is possible using multiple images from a single camera with fixed internal parameters. The proposed approach is based on a simplification of the intrinsic camera matrix and the use of projective geometry, which makes it particularly useful for augmented reality applications that insert or modify models in a real video stream. The method uses a linear parameter estimation approach for the auto-calibration step, which enhances stability and reduces execution time. Several experimental results on real-world video streams demonstrate the usefulness of our method for augmented reality applications.
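
A rough sketch of the front end such a pipeline needs, namely feature tracking followed by uncalibrated two-view geometry estimation with OpenCV, is given below. The auto-calibration and metric-upgrade steps of the paper are not shown, and the detector and RANSAC parameters are assumptions.

```python
import cv2
import numpy as np

def track_and_estimate_geometry(frame0, frame1):
    """Track corners between two frames and estimate the fundamental matrix.

    The fundamental matrix encodes the projective two-view constraint and
    needs no camera calibration; upgrading to a metric reconstruction (as
    in the paper) is a separate step not covered here.
    """
    g0 = cv2.cvtColor(frame0, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)
    pts0 = cv2.goodFeaturesToTrack(g0, maxCorners=500,
                                   qualityLevel=0.01, minDistance=7)
    pts1, status, _ = cv2.calcOpticalFlowPyrLK(g0, g1, pts0, None)
    ok = status.ravel() == 1
    p0, p1 = pts0[ok].reshape(-1, 2), pts1[ok].reshape(-1, 2)
    F, inliers = cv2.findFundamentalMat(p0, p1, cv2.FM_RANSAC, 1.0, 0.999)
    keep = inliers.ravel() == 1
    return F, p0[keep], p1[keep]
```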


Video Processing of MPEG Compressed Data For 3D Stereoscopic Conversion (3차원 입체 변환을 위한 MPEG 압축 데이터에서의 영상 처리 기법)

  • 김만배
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 1998.06a
    • /
    • pp.3-8
    • /
    • 1998
  • The conversion of monoscopic video to 3D stereoscopic video has been studied by some pioneering researchers. In spite of the commercial potential of the technology, two problems have hindered progress in this research area: vertical motion parallax and high computational complexity. The former lowers 3D perception, while the latter demands complex hardware. Previous research has dealt with NTSC video, thus requiring complex processing steps, one of which is motion estimation. This paper proposes a 3D stereoscopic conversion method for MPEG-encoded data. The proposed method has the advantage that motion estimation can be avoided, because motion data are extracted directly from the MPEG compressed data, and that camera and object motion in arbitrary directions can be handled.
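
The general idea of reusing decoded motion as disparity can be sketched as follows. This is a hypothetical illustration rather than the paper's method; the macroblock size, the disparity gain, and the availability of per-macroblock horizontal motion vectors are assumptions.

```python
import numpy as np

def synthesize_right_view(frame, mv_x, block=16, gain=0.5):
    """Shift each macroblock horizontally in proportion to its motion vector.

    Hypothetical toy renderer: frame dimensions are assumed to be
    multiples of the macroblock size, and occlusions are ignored.
    """
    h, w = frame.shape[:2]
    right = np.zeros_like(frame)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            d = int(round(gain * mv_x[by // block, bx // block]))
            x0 = int(np.clip(bx + d, 0, w - block))
            right[by:by + block, x0:x0 + block] = frame[by:by + block, bx:bx + block]
    return right

# Example: 320x240 frame with random macroblock motion
frame = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)
mv_x = np.random.randint(-8, 9, (240 // 16, 320 // 16))
print(synthesize_right_view(frame, mv_x).shape)    # (240, 320, 3)
```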


A Study on Video Effects Using 3D Alpha (3D Alpha를 이용한 비디오 효과에 관한 연구)

  • Joo, Heon-Sik
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2014.07a
    • /
    • pp.277-278
    • /
    • 2014
  • This paper proposes several video effects that use 3D Alpha. First, changing the position of an image with 3D Alpha can produce a 3D effect and convey a sense of depth in the image. It can also apply a wide range of transformations to text, so its usefulness is considerable, and it can therefore be applied in a variety of fields.


Study of Capturing Real-Time 360 VR 3D Game Video for 360 VR E-Sports Broadcast (360 VR E-Sports 중계를 위한 실시간 360 VR 3D Stereo 게임 영상 획득에 관한 연구)

  • Kim, Hyun Wook;Lee, Jun Suk;Yang, Sung Hyun
    • Journal of Broadcast Engineering
    • /
    • v.23 no.6
    • /
    • pp.876-885
    • /
    • 2018
  • Although the e-sports broadcasting market based on VR (Virtual Reality) is growing these days, technology development for securing market competitiveness is quite inadequate in Korea. Global companies such as SLIVER and Facebook have already developed, and are trying to commercialize, 360 VR broadcasting technology that can broadcast e-sports as 4K 30 FPS VR video. However, 2D video is a poor fit for 360 VR video in that it provides a less immersive experience, induces dizziness, and has low resolution within the scene. In this paper, we propose and implement a virtual camera technology that captures the in-game space as 4K 3D 360 video at 60 FPS for e-sports VR broadcasting, and we verify the feasibility of obtaining stereo 360 video at up to 4K/60 FPS through experiments in which the virtual camera was set up in sample games from a game engine and in commercial games.
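
A 360 virtual camera ultimately writes into an equirectangular frame. The sketch below maps a 3D viewing direction to pixel coordinates in such a frame; the 4K-wide equirectangular layout and the y-up, right-handed coordinate convention are assumptions, and this is not the capture pipeline described in the paper.

```python
import numpy as np

def direction_to_equirect(d, width=3840, height=1920):
    """Map a 3D view direction to equirectangular pixel coordinates."""
    x, y, z = d / np.linalg.norm(d)
    lon = np.arctan2(x, z)                  # longitude in [-pi, pi]
    lat = np.arcsin(y)                      # latitude  in [-pi/2, pi/2]
    u = (lon / (2 * np.pi) + 0.5) * (width - 1)
    v = (0.5 - lat / np.pi) * (height - 1)
    return u, v

# The forward direction lands at the image centre
print(direction_to_equirect(np.array([0.0, 0.0, 1.0])))
```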