• Title/Summary/Keyword: 3D video

Search results: 1,152

The Effects of Recording Distance and Viewing Distance on Presence, Perceptual Characteristics, and Negative Experiences in Stereoscopic 3D Video

  • Lee, Sanguk; Chung, Donghun
    • Journal of Broadcast Engineering / v.24 no.7 / pp.1189-1198 / 2019
  • The study explores the effects of recording and viewing distance in stereoscopic 3D on presence, perceptual characteristics, and negative experiences. Groups of 20 participants were randomly assigned to each of three viewing distances, and all participants watched five versions of a stereoscopic 3D music video that differed in recording distance. The results showed that, first, viewers felt a stronger sense of presence and perceived objects better when the objects were positioned near the cameras. Second, viewers perceived greater screen transmission as the viewing distance increased. Finally, viewers reported greater negative experiences due to the joint effects of recording and viewing distance. By investigating how stereoscopic 3D content and viewing environments influence psychological factors, the study aims to provide guidelines on human factors in 3D.

A Study of Fire Detection Algorithm for Efficient 4D System (효율적 4D 시스템을 위한 화염 검출 알고리즘 연구)

  • Cho, Kyoung-woo; Wang, Ki-cho; Oh, Chang-heon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2013.10a / pp.1003-1005 / 2013
  • 4D technology adds physical effects to conventional or 3D videos. To implement it, 4D metadata must be produced according to the video play time and frame data. In this paper, we propose a method that provides physical effects by judging the temperature in a video from its color information. The proposed method delivers physical effects to the viewer by recognizing the color information in the video when a disaster such as a fire occurs. With this method, 4D metadata for sensory effects, such as driving a heater device, is expected to be produced automatically without programmers.
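The abstract above does not give the color model or thresholds it uses; a minimal sketch of the color-based detection idea, assuming an illustrative red-dominant RGB heuristic and hypothetical function names:

```python
# Sketch of color-based fire detection for 4D metadata generation.
# The thresholds and the RGB rule (R dominant, R > G > B) are
# illustrative assumptions, not the paper's exact color model.

def looks_like_fire(pixel, r_min=180):
    """Heuristic: flame pixels tend to be red-dominant (R > G > B)."""
    r, g, b = pixel
    return r >= r_min and r > g > b

def heat_effect_frames(frames, ratio=0.05):
    """Return indices of frames whose fire-colored pixel ratio exceeds
    `ratio`, i.e. frames that could trigger a heater effect."""
    hits = []
    for i, frame in enumerate(frames):
        fire = sum(1 for px in frame if looks_like_fire(px))
        if fire / len(frame) >= ratio:
            hits.append(i)
    return hits

# Toy frames: each frame is a flat list of (R, G, B) pixels.
cool = [(30, 40, 50)] * 100
hot = [(220, 120, 40)] * 10 + [(30, 40, 50)] * 90  # 10% flame-colored
print(heat_effect_frames([cool, hot]))  # -> [1]
```

A real pipeline would map each flagged frame index to a timestamped effect entry in the 4D metadata.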

Coding Technique using Depth Map in 3D Scalable Video Codec (확장된 스케일러블 비디오 코덱에서 깊이 영상 정보를 활용한 부호화 기법)

  • Lee, Jae-Yung; Lee, Min-Ho; Chae, Jin-Kee; Kim, Jae-Gon; Han, Jong-Ki
    • Journal of Broadcast Engineering / v.21 no.2 / pp.237-251 / 2016
  • Conventional 3D-HEVC uses the depth data of the other view instead of that of the current view, because the texture data must be encoded before the corresponding depth data of the current view; the depth data of the other view then serves as the predicted depth for the current view. Whereas conventional 3D-HEVC has no candidate for the predicted depth other than that of the other view, scalable 3D-HEVC can utilize the depth data of the lower spatial layer whose view ID equals that of the current picture. The depth data of the lower spatial layer is up-scaled to the resolution of the current picture, and the enlarged depth data is used as the predicted depth. Because the quality of the enlarged depth is much higher than that of the other view's depth, the proposed scheme increases the coding efficiency of the scalable 3D-HEVC codec. Computer simulation results show that scalable 3D-HEVC is useful and that the proposed scheme of using the enlarged depth data for the current picture provides a significant coding gain.
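The inter-layer prediction step described above can be sketched as up-scaling the lower layer's depth map to the current picture's resolution. Nearest-neighbor scaling is an illustrative assumption here; 3D-HEVC defines its own resampling filters:

```python
# Sketch of the inter-layer depth prediction idea: the lower spatial
# layer's depth map is up-scaled to the current picture's resolution
# and used as the depth predictor for the enhancement layer.

def upscale_depth(depth, out_h, out_w):
    """Nearest-neighbor up-scaling of a 2D depth map (list of lists)."""
    in_h, in_w = len(depth), len(depth[0])
    return [[depth[y * in_h // out_h][x * in_w // out_w]
             for x in range(out_w)]
            for y in range(out_h)]

# 2x2 lower-layer depth, up-scaled to 4x4 for the enhancement layer.
base = [[10, 20],
        [30, 40]]
pred = upscale_depth(base, 4, 4)
for row in pred:
    print(row)
```

The encoder would then code only the residual between the actual depth of the current picture and this predictor.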

A New Scanning Method for Network-adaptive Scalable Streaming Video Coding (네트워크에 적응적인 스케일러블 스트리밍 비디오 코딩을 위한 새로운 스캔 방법)

  • Park, Gwang-Hoon; Cheong, Won-Sik
    • Journal of KIISE: Computing Practices and Letters / v.8 no.3 / pp.318-327 / 2002
  • This paper introduces a new scanning method for network-adaptive scalable streaming video coding methodologies such as MPEG-4 Fine Granularity Scalability (FGS) coding. The proposed scanning method guarantees subjectively improved picture quality in the region of interest of the decoded video by ensuring that the image information of that region is encoded, transmitted, and decoded with the highest priority. The proposed method enables FGS coding to achieve improved picture quality, by about 1 dB to 3 dB, especially in the region of interest.
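The priority idea above can be sketched as a scan reordering: blocks inside the region of interest come first, so if the network truncates the scalable bitstream, the ROI is refined before the rest. The block layout and ROI mask are illustrative assumptions, not the paper's exact scheme:

```python
# Sketch of ROI-preferential scanning for a scalable (FGS-like) layer.

def roi_scan_order(num_blocks, roi_blocks):
    """Return a scan order with ROI block indices first (in raster
    order), followed by the remaining blocks."""
    roi = [b for b in range(num_blocks) if b in roi_blocks]
    rest = [b for b in range(num_blocks) if b not in roi_blocks]
    return roi + rest

# 8 blocks in raster order; the ROI covers blocks 2, 3, and 6.
order = roi_scan_order(8, {2, 3, 6})
print(order)  # -> [2, 3, 6, 0, 1, 4, 5, 7]
```

Under truncation, a decoder following this order receives all ROI refinement bits before any background bits.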

A Study for properties of Subdivision to 3D game character education (3D 게임 캐릭터 교육을 위한 Subdivision 특성 연구 (3ds Max의 Open subdivision을 중심으로))

  • Cho, Hyung-ik
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2016.10a / pp.210-212 / 2016
  • Today, video games created with 3D software form a core part of the video game content field because they can be produced more easily than 2D games and at lower cost. Reducing the polygon counts of 3D characters and environments is very important for game optimization. Thanks to technological advancements, we can build elaborate 3D game models with low polygon counts, and these technologies continue to evolve. In 2012, Pixar released OpenSubdiv, a new technology for making high-quality 3D models with low polygon counts, and distributed it as open source. This paper compares and analyzes the characteristics, merits, and demerits of these various techniques (MeshSmooth, TurboSmooth, OpenSubdiv) and examines which method is the most efficient for making 3D video games.

Efficient Browsing Method based on Metadata of Video Contents (동영상 컨텐츠의 메타데이타에 기반한 효율적인 브라우징 기법)

  • Chun, Soo-Duck; Shin, Jung-Hoon; Lee, Sang-Jun
    • Journal of KIISE: Computing Practices and Letters / v.16 no.5 / pp.513-518 / 2010
  • The advancement of information technology, along with the proliferation of communication and multimedia, has increased the demand for digital content. Video data in digital content such as VOD, NOD, digital libraries, IPTV, and UCC are permeating various application fields. Video data are sequential and carry spatial and temporal information in 3D form, which makes searching or browsing ineffective due to long turnaround times. In this paper, we propose ATVC (Authoring Tool for Video Contents) to address this issue. ATVC is a video editing tool that detects key frames using visual rhythm and inserts metadata such as keywords into key frames via XML tagging. Visual rhythm maps 3D spatial and temporal information to 2D information; its processing is fast because pixel information can be obtained without an IDCT, and it can classify edit effects such as cuts, wipes, and dissolves. Since the XML data store key frame and keyword information via XML tags, they enable efficient browsing.
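Visual rhythm, as used above, reduces each frame to a line of pixels (commonly the main diagonal) so that the whole video becomes a 2D image in which shot boundaries appear as discontinuities. A minimal sketch of cut detection on that idea; the sampling pattern and difference threshold are illustrative assumptions:

```python
# Sketch of visual-rhythm-based cut detection: sample each frame's
# main diagonal (mapping 3D video data to a 2D "rhythm" image) and
# flag a candidate cut where consecutive signatures differ sharply.

def diagonal_signature(frame):
    """Sample the main-diagonal pixels of a square frame (2D list)."""
    return [frame[i][i] for i in range(len(frame))]

def detect_cuts(frames, threshold=100):
    """Return frame indices where the summed absolute difference of
    diagonal signatures exceeds `threshold` (candidate cuts)."""
    cuts = []
    prev = diagonal_signature(frames[0])
    for i in range(1, len(frames)):
        cur = diagonal_signature(frames[i])
        diff = sum(abs(a - b) for a, b in zip(prev, cur))
        if diff > threshold:
            cuts.append(i)
        prev = cur
    return cuts

dark = [[10] * 4 for _ in range(4)]
bright = [[200] * 4 for _ in range(4)]
print(detect_cuts([dark, dark, bright, bright]))  # -> [2]
```

Gradual transitions such as wipes and dissolves show up as characteristic slanted or blended patterns in the rhythm image rather than abrupt jumps, which is how they can be classified.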

Design of Metaverse for Two-Way Video Conferencing Platform Based on Virtual Reality

  • Yoon, Dongeon; Oh, Amsuk
    • Journal of Information and Communication Convergence Engineering / v.20 no.3 / pp.189-194 / 2022
  • As non-face-to-face activities have become commonplace, online video conferencing platforms have become popular collaboration tools. However, existing video conferencing platforms have a structure in which one side unilaterally exchanges information, potentially increasing the fatigue of meeting participants. In this study, we designed a video conferencing platform utilizing virtual reality (VR), a metaverse technology, to enable various interactions. A virtual conferencing space and a support system for a realistic VR video conferencing content authoring tool were designed using Meta's Oculus Quest 2 hardware, the Unity engine, and 3ds Max software. With the Photon software development kit, voice recognition was designed to perform automatic text translation with the Watson application programming interface, allowing online video conferencing participants to communicate smoothly even when using different languages. The proposed video conferencing platform is expected to enable conference participants to interact and improve their work efficiency.

3D conversion of 2D video using depth layer partition (Depth layer partition을 이용한 2D 동영상의 3D 변환 기법)

  • Kim, Su-Dong; Yoo, Ji-Sang
    • Journal of Broadcast Engineering / v.16 no.1 / pp.44-53 / 2011
  • In this paper, we propose a 3D conversion algorithm for 2D video using a depth layer partition method. In the proposed algorithm, we first form frame groups using a cut detection algorithm; dividing the frames into groups reduces the possibility of error propagation during motion estimation. Depth image generation is the core technique in 2D/3D conversion, so we use two depth map generation algorithms: one based on segmentation and motion information, and the other on an edge directional histogram. After applying the depth layer partition algorithm, which separates objects (foreground) and the background in the original image, the two extracted depth maps are properly merged. Experiments verify that the proposed algorithm generates reliable depth maps and good conversion results.
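The merging step above can be sketched as a per-pixel selection: one depth map supplies foreground depths, the other background depths, and a binary mask from the depth layer partition chooses between them. All inputs here are illustrative toy data, not the paper's actual maps:

```python
# Sketch of merging two depth maps under a foreground/background
# partition mask, as in the abstract's depth layer partition step.

def merge_depth(fg_depth, bg_depth, fg_mask):
    """Per pixel: take fg_depth where the mask marks foreground,
    bg_depth elsewhere."""
    return [[fg if m else bg
             for fg, bg, m in zip(fr, br, mr)]
            for fr, br, mr in zip(fg_depth, bg_depth, fg_mask)]

fg = [[200, 200], [200, 200]]   # near objects (large depth value)
bg = [[50, 50], [50, 50]]       # far background
mask = [[1, 0], [0, 1]]         # 1 = foreground pixel
print(merge_depth(fg, bg, mask))  # -> [[200, 50], [50, 200]]
```

The merged map would then drive depth-image-based rendering of the second (virtual) view.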

Spatial-temporal texture features for 3D human activity recognition using laser-based RGB-D videos

  • Ming, Yue; Wang, Guangchao; Hong, Xiaopeng
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.3 / pp.1595-1613 / 2017
  • An IR camera paired with a laser-based IR projector provides an effective solution for real-time capture of moving targets in RGB-D videos. Unlike traditional RGB videos, the captured depth videos are not affected by illumination variation. In this paper, we propose a novel feature extraction framework for describing human activities based on this optical capture method, namely spatial-temporal texture features for 3D human activity recognition. Spatial-temporal texture features with depth information are insensitive to illumination and occlusion and are efficient for describing fine motion. Our framework begins with video acquisition based on laser projection and video preprocessing with visual background extraction to obtain spatial-temporal key images. Texture features encoded from the key images are then used to generate discriminative features for human activity recognition. Experimental results on different databases and practical scenarios demonstrate the effectiveness of the proposed algorithm on large-scale data sets.
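A common building block for texture encoding of this kind is the local binary pattern (LBP), computed here on a depth key-image patch. Using plain 8-neighbor LBP is an illustrative assumption; the paper's spatial-temporal descriptor extends such codes over sequences of key images:

```python
# Sketch of an LBP texture code on a depth key image: each of the 8
# neighbors contributes one bit when it is >= the center pixel.

def lbp_code(img, y, x):
    """8-neighbor LBP code for the pixel at (y, x), clockwise from
    the top-left neighbor."""
    c = img[y][x]
    neighbors = [img[y - 1][x - 1], img[y - 1][x], img[y - 1][x + 1],
                 img[y][x + 1], img[y + 1][x + 1], img[y + 1][x],
                 img[y + 1][x - 1], img[y][x - 1]]
    return sum((1 << i) for i, n in enumerate(neighbors) if n >= c)

depth_patch = [[9, 9, 9],
               [1, 5, 9],
               [1, 1, 1]]
print(lbp_code(depth_patch, 1, 1))  # -> 15 (top row and right neighbor set)
```

Histograms of such codes over spatial-temporal volumes give the discriminative feature vectors fed to a classifier.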