• Title/Summary/Keyword: 입체감 (sense of depth / stereoscopic effect)


2D to 3D Anaglyph Image Conversion using Linear Curve in HTML5 (HTML5에서 직선의 기울기를 이용한 2D to 3D 입체 이미지 변환)

  • Park, Young Soo
    • Journal of Digital Convergence / v.12 no.12 / pp.521-528 / 2014
  • In this paper, we propose a method for converting a 2D image into a 3D image using linear curves in HTML5. Only a single image is used, with no additional depth-map information, to create the 3D image. The original image is filtered to extract the RGB colors for the left and right eyes. After selecting ready-made control points on the linear curves, end users can set and modify the depth values. An anaglyph 3D image is then generated automatically from the global and partial depth information reflected from the user-selected values. Because the entire process is designed and implemented in a Web environment using HTML5, it is easy and convenient, and end users can create any 3D image they want.
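
A minimal sketch, under stated assumptions, of the general anaglyph idea the abstract describes: a single 2D image plus a user-defined linear depth ramp (two control points) is turned into a red/cyan anaglyph by shifting the red channel in proportion to depth. The function names, the row-wise depth model, and the `max_shift` parameter are illustrative, not the paper's implementation.

```python
# Hypothetical sketch: single image + linear depth ramp -> red/cyan anaglyph.
import numpy as np
from PIL import Image

def linear_depth(height, p0, p1):
    """Per-row depth from a line through two control points (row, depth)."""
    rows = np.arange(height)
    slope = (p1[1] - p0[1]) / (p1[0] - p0[0])
    return np.clip(p0[1] + slope * (rows - p0[0]), 0.0, 1.0)

def make_anaglyph(img, depth_rows, max_shift=12):
    """Red channel = left-eye view shifted by depth; green/blue = right eye."""
    arr = np.asarray(img.convert("RGB"))
    out = arr.copy()
    for y, d in enumerate(depth_rows):
        shift = int(round(d * max_shift))
        out[y, :, 0] = np.roll(arr[y, :, 0], shift)  # shift only the red channel
    return Image.fromarray(out)

if __name__ == "__main__":
    img = Image.open("scene.jpg")                    # any 2D source image (placeholder name)
    depth = linear_depth(img.height, (0, 0.1), (img.height - 1, 0.9))
    make_anaglyph(img, depth).save("scene_anaglyph.png")
```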

Enhancement of the 3D Sound's Performance using Perceptual Characteristics and Loudness (지각 특성 및 라우드니스를 이용한 입체음향의 성능 개선)

  • Koo, Kyo-Sik;Cha, Hyung-Tai
    • Journal of Broadcast Engineering / v.16 no.5 / pp.846-860 / 2011
  • The human binaural auditory system can determine the direction and distance of sound sources using the inter-aural intensity difference (IID), inter-aural time difference (ITD), and spectral shape difference (SSD). This information arises from the acoustic transfer of a sound source to the pinna (the outer ear) and is captured by the head-related transfer function (HRTF), which allows a virtual sound system to be created. However, the performance of 3D sound is not always satisfactory because the HRTF is non-individualized. In this paper, we propose an algorithm that exploits human auditory characteristics for more accurate perception: the excitation energy of the HRTF, the global masking threshold, and loudness are applied in the proposed algorithm. Informal listening tests show that the proposed method improves sound localization much more than conventional methods.
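
A short sketch of the basic HRTF idea the abstract builds on: a mono source convolved with left/right head-related impulse responses (HRIRs) yields a binaural signal. The file names and the azimuth are illustrative assumptions, not the paper's algorithm.

```python
# Sketch: place a mono source in virtual 3D space by convolving with an HRIR pair.
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

def render_binaural(mono, hrir_left, hrir_right):
    """Convolve a mono signal with left/right HRIRs to form a binaural pair."""
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    out = np.stack([left, right], axis=1)
    return out / np.max(np.abs(out))                 # normalize to avoid clipping

if __name__ == "__main__":
    fs, mono = wavfile.read("source_mono.wav")       # dry mono source (placeholder name)
    _, hl = wavfile.read("hrir_az30_left.wav")       # HRIR pair for 30 deg azimuth (assumed files)
    _, hr = wavfile.read("hrir_az30_right.wav")
    binaural = render_binaural(mono.astype(float), hl.astype(float), hr.astype(float))
    wavfile.write("binaural_az30.wav", fs, (binaural * 32767).astype(np.int16))
```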

Movie Factors Affecting Satisfaction of 3D Stereoscopic Movie Audiences (3D입체영화관객의 만족에 영향을 미치는 영화적 요인)

  • Yun, Jae-Sun;Lim, Chan
    • The Journal of the Korea Contents Association / v.11 no.9 / pp.106-117 / 2011
  • The purpose of this study is to examine the factors that affect the satisfaction of 3D stereoscopic movie audiences in Korea and to provide strategic solutions for improving that satisfaction. Previous studies on movie audience satisfaction separate the influencing factors into movie-internal factors, movie-environmental factors, and audience factors, with the internal factors having the greatest leverage. However, 3D stereoscopic movies differ from ordinary movies in how they are shot, screened, viewed, and perceived, so the environmental and audience factors are expected to be more influential than the internal factors. This study therefore surveyed 3D stereoscopic movie audiences. The main results are as follows. First, methods to reduce visual fatigue are necessary for satisfaction. Second, seat arrangement must be improved to deliver high-quality 3D stereoscopic content. Third, the ticket price of 3D stereoscopic movies needs to be adjusted to a reasonable level.

Improvement of 3D Sound Using Psychoacoustic Characteristics (인간의 청각 특성을 이용한 입체음향의 방향감 개선)

  • Koo, Kyo-Sik;Cha, Hyung-Tai
    • The Journal of the Acoustical Society of Korea / v.30 no.5 / pp.255-264 / 2011
  • The head-related transfer function (HRTF) describes the acoustic transmission from a point in 3D space to the listener's ear; in other words, it contains the information that allows humans to perceive the locations of sound sources. Using the HRTF, we can create virtual 3D sound for sources that do not actually exist. However, the three-dimensional effect can deteriorate through front-back confusion because non-individualized HRTFs do not match each listener. In this paper, we propose a new algorithm that uses human auditory characteristics to reduce confusion in sound image localization. The frequency spectrum and global masking threshold of HRTF-based 3D sounds are used to calculate the psychoacoustic differences among directions, and the perceptible cues in each critical band are boosted to create an effective 3D sound. As a result, the improved 3D sound performs much better than conventional methods.
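
An illustrative sketch, not the paper's exact method, of the idea of boosting spectral cues per critical band: bands of an HRTF-filtered signal whose energy exceeds a masking threshold are amplified before resynthesis. The band edges, thresholds, and gain here are assumptions for the example.

```python
# Sketch: emphasize frequency bands that exceed a given masking threshold.
import numpy as np

def boost_perceptible_bands(spectrum, band_edges, masking_thresh, gain=1.5):
    """Amplify FFT bins in bands whose mean energy exceeds the threshold."""
    boosted = spectrum.copy()
    for (lo, hi), thr in zip(band_edges, masking_thresh):
        band_energy = np.mean(np.abs(spectrum[lo:hi]) ** 2)
        if band_energy > thr:                        # cue is audible: emphasize it
            boosted[lo:hi] *= gain
    return boosted

if __name__ == "__main__":
    signal = np.random.randn(1024)                   # stand-in for HRTF-filtered audio
    spec = np.fft.rfft(signal)
    edges = [(0, 64), (64, 128), (128, 256), (256, 513)]   # toy "critical bands"
    thresh = [0.5, 0.5, 0.5, 0.5]                    # toy global masking thresholds
    out = np.fft.irfft(boost_perceptible_bands(spec, edges, thresh), n=len(signal))
```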

Design and Implementation of Multi-View 3D Video Player (다시점 3차원 비디오 재생 시스템 설계 및 구현)

  • Heo, Young-Su;Park, Gwang-Hoon
    • Journal of Broadcast Engineering / v.16 no.2 / pp.258-273 / 2011
  • This paper designs and implements a multi-view 3D video player system that runs faster than existing video players. To process large volumes of multi-view image data at high speed, we propose a structure that achieves near-optimal speed in a multi-processor environment by parallelizing the component modules. To exploit concurrency around the bottleneck, the image decoding, synthesis, and rendering modules are arranged in a pipeline. For load balancing, the decoder module is split per viewpoint, and the image-synthesis module is partitioned geometrically over the synthesized images. In our experiments, the multi-view images were synthesized correctly and a 3D effect was perceived when they were viewed on a multi-view autostereoscopic display. The proposed processing structure can handle large volumes of multi-view image data at high speed, using the available processors to their full capacity.
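
A hypothetical sketch of the pipelined player structure the abstract describes: decoding, view synthesis, and rendering run as separate stages connected by queues, with per-viewpoint decoder workers for load balancing. The stage bodies are placeholders, not the paper's implementation.

```python
# Sketch: decode -> synthesize -> render pipeline with per-view decoder threads.
import queue
import threading

NUM_VIEWS = 4
decoded_q = queue.Queue(maxsize=8)
synth_q = queue.Queue(maxsize=8)

def decoder(view_id, frames):
    for f in frames:
        decoded_q.put((view_id, f"decoded view{view_id} frame{f}"))

def synthesizer():
    while True:
        item = decoded_q.get()
        if item is None:                      # end-of-stream sentinel
            synth_q.put(None)
            break
        synth_q.put(("synthesized", item))

def renderer():
    while True:
        item = synth_q.get()
        if item is None:
            break
        print("render:", item)

if __name__ == "__main__":
    workers = [threading.Thread(target=decoder, args=(v, range(3))) for v in range(NUM_VIEWS)]
    stages = [threading.Thread(target=synthesizer), threading.Thread(target=renderer)]
    for t in workers + stages:
        t.start()
    for t in workers:
        t.join()
    decoded_q.put(None)                       # signal end of stream to the pipeline
    for t in stages:
        t.join()
```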

Beginnings of Mixed Reality : 20th Century Visual and Interactive Art (혼합현실의 단초 - 20세기 영상예술과 인터랙티브 아트를 중심으로)

  • Kim, Hee-Young
    • Cartoon and Animation Studies / s.32 / pp.315-333 / 2013
  • This study aims to show that today's mixed reality technology did not appear suddenly but has its beginnings in 20th-century visual and interactive art. First, photographic art expressed three dimensions on a two-dimensional plane and mixed images of reality and virtuality: the photogram let people experience two-dimensional images and three-dimensional effects at the same time, and the photomontage combined various photos, mixing reality and virtuality. Next, cinema tried to combine virtuality and reality using objets and CG. Early cinema composited film with real objets; as computer technology developed, cinema composited object CG onto real footage and then attempted background CG compositing. Finally, telepresence art explored a new possibility of mixed reality by breaking the boundary between reality and virtuality, subject and object: it oscillates between virtual space in reality and real space in virtuality, or represents mixed reality through the remote control of distant participants. For the future development and direction of mixed reality, there will be a growing need to refer to visual and interactive art.

3D Cloud Animation using Cloud Modeling Method of 2D Meteorological Satellite Images (2차원 기상 위성 영상의 구름 모델링 기법을 이용한 3차원 구름 애니메이션)

  • Lee, Jeong-Jin;Kang, Moon-Koo;Lee, Ho;Shin, Byeong-Seok
    • Journal of Korea Game Society / v.10 no.1 / pp.147-156 / 2010
  • In this paper, we propose a 3D cloud animation based on a cloud modeling method for 2D meteorological satellite images. First, numerous control points are placed on the satellite images, and thin-plate spline warping is performed between consecutive frames to model cloud motion. In addition, the visible and infrared spectral channels are used to determine the amount and altitude of clouds for 3D cloud reconstruction. A pre-integrated volume rendering method achieves seamless inter-laminar shading in real time using a small number of slices of the volume data. The proposed method successfully constructs continuously moving 3D clouds from 2D satellite images at an acceptable speed and image quality.
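
A rough sketch, under stated assumptions, of the thin-plate spline step: matched control points in two consecutive satellite frames define a TPS deformation field that can be sampled at any pixel to interpolate cloud motion. The control-point coordinates and grid size here are toy values.

```python
# Sketch: TPS displacement field between two frames, sampled for an in-between frame.
import numpy as np
from scipy.interpolate import RBFInterpolator

# control points tracked from frame t to frame t+1, as (row, col) coordinates
src = np.array([[10.0, 10.0], [10.0, 90.0], [90.0, 10.0], [90.0, 90.0], [50.0, 50.0]])
dst = np.array([[12.0, 14.0], [11.0, 95.0], [88.0, 12.0], [92.0, 93.0], [55.0, 57.0]])

# thin-plate spline interpolator for the 2D displacement field
tps = RBFInterpolator(src, dst - src, kernel="thin_plate_spline")

# sample the displacement at an intermediate animation time alpha in [0, 1]
alpha = 0.5
rows, cols = np.mgrid[0:100, 0:100]
grid = np.stack([rows.ravel(), cols.ravel()], axis=1).astype(float)
displacement = tps(grid) * alpha                  # partial motion for the in-between frame
warped_positions = grid + displacement            # where each pixel moves to
print(warped_positions.reshape(100, 100, 2).shape)
```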

Face Tracking for Multi-view Display System (다시점 영상 시스템을 위한 얼굴 추적)

  • Han, Chung-Shin;Jang, Se-Hoon;Bae, Jin-Woo;Yoo, Ji-Sang
    • The Journal of Korean Institute of Communications and Information Sciences / v.30 no.2C / pp.16-24 / 2005
  • In this paper, we propose a face tracking algorithm for a viewpoint-adaptive multi-view synthesis system. The original scene captured by a depth camera consists of a texture image and an 8-bit gray-scale depth map. From this original image, multi-view images corresponding to the viewer's position can be synthesized using geometric transformations such as rotation and translation. The proposed face tracking technique provides a motion parallax cue across different viewpoints and view angles. In the proposed algorithm, the viewer's dominant face, initially detected from the camera, is tracked using the statistical characteristics of face colors and deformable templates. As a result, we can provide a motion parallax cue by detecting and tracking the viewer's dominant face area even against a heterogeneous background, and can successfully display the synthesized sequences.
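
An illustrative sketch (assumed, not the paper's exact implementation) of the skin-color statistics step: a face region is found by thresholding in YCrCb space and keeping the largest connected component, which a tracker could then refine with deformable templates. The Cr/Cb range is a common rule-of-thumb value, not a parameter from the paper.

```python
# Sketch: locate the dominant skin-colored region as a face candidate.
import cv2
import numpy as np

def dominant_face_region(frame_bgr):
    """Return the bounding box (x, y, w, h) of the largest skin-colored blob."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))   # rough skin-tone range
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    return cv2.boundingRect(largest)

if __name__ == "__main__":
    frame = cv2.imread("viewer.jpg")                 # placeholder input frame
    print(dominant_face_region(frame))
```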

Implementation of Stereoscopic 3D Video Player System Having Less Visual Fatigue and Its Computational Complexity Analysis for Real-Time Processing (시청피로 저감형 S3D 영상 재생 시스템 구현 및 실시간 처리를 위한 알고리즘 연산량 분석)

  • Lee, Jaesung
    • Journal of the Korea Institute of Information and Communication Engineering / v.17 no.12 / pp.2865-2874 / 2013
  • Recently, most top-ranked movies at the box office have been screened in stereoscopic 3D, and the world's leading electronics companies such as Samsung and LG are pushing 3DTV sales hard. However, each person has a different binocular distance and viewing distance, so watching 3D content with the same fixed binocular disparity, which differs greatly from what is experienced in the real world, causes severe visual fatigue and headaches. To solve this problem, this paper proposes and implements a 3D rendering system that corrects the disparity of 3D content according to the viewer's binocular distance and viewing distance, and then analyzes its computational complexity. The optical-flow and warping algorithms turn out to consume 732 seconds and 5.7 seconds per frame, respectively; therefore, a dedicated chipset for both blocks is strongly required for a real-time HD 3D display.
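
A hedged sketch of the general disparity-correction idea: horizontal disparity is estimated between the left/right views with dense optical flow, scaled by a factor derived from the viewer's settings, and the right view is re-warped. The scale factor and its use are placeholder assumptions, not the paper's model.

```python
# Sketch: reduce stereo disparity by flow-guided warping of the right view.
import cv2
import numpy as np

def correct_disparity(left_gray, right_gray, scale):
    """Warp the right view so only a fraction `scale` of the disparity remains."""
    flow = cv2.calcOpticalFlowFarneback(left_gray, right_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = left_gray.shape
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    map_x = xs + flow[..., 0] * (1.0 - scale)        # keep `scale` of the horizontal shift
    return cv2.remap(right_gray, map_x, ys, cv2.INTER_LINEAR)

if __name__ == "__main__":
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)      # placeholder stereo pair
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
    corrected = correct_disparity(left, right, scale=0.7)    # retain 70% of original depth
    cv2.imwrite("right_corrected.png", corrected)
```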

A Study on the Production of Perspective Images using Drone (드론을 이용한 다시점 투영 이미지 제작 연구)

  • Choi, Ki-chang;Kwon, Soon-chul;Lee, Seung-hyun
    • The Journal of the Convergence on Culture Technology / v.8 no.6 / pp.953-958 / 2022
  • A holographic stereogram can provide depth perception without visual fatigue or dizziness because it uses multiple images acquired from multiple viewpoints. To produce a holographic stereogram, perspective images of a live object must be obtained and recorded on film using a digital hologram printer. When acquiring the perspective images, a distortion-free hologram can be produced only if the distance between the camera and the target remains constant. Keeping this distance constant is easy for a small target but difficult for a large one. In this study, we photograph a large object using the POI (Point of Interest) function, one of the smart flight modes of a drone, to produce the perspective images required for hologram production. Afterwards, problems such as unexpected shaking and changes in the camera-to-object distance are corrected in post-production, yielding the final perspective images.
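
A small geometric sketch of the constant-distance idea behind a POI orbit: camera positions are placed on a circle of fixed radius around the point of interest, so every perspective image is taken from the same distance. The coordinates, altitude, and view count are illustrative values, not flight parameters from the study.

```python
# Sketch: evenly spaced camera waypoints on a circle around the point of interest.
import math

def poi_orbit_waypoints(poi_x, poi_y, altitude, radius, num_views):
    """Camera positions at constant distance from the POI, each facing it."""
    waypoints = []
    for i in range(num_views):
        theta = 2.0 * math.pi * i / num_views
        x = poi_x + radius * math.cos(theta)
        y = poi_y + radius * math.sin(theta)
        heading = math.degrees(math.atan2(poi_y - y, poi_x - x))  # point camera at the POI
        waypoints.append((x, y, altitude, heading))
    return waypoints

if __name__ == "__main__":
    for wp in poi_orbit_waypoints(0.0, 0.0, altitude=30.0, radius=25.0, num_views=8):
        print("x=%.1f m, y=%.1f m, alt=%.1f m, heading=%.1f deg" % wp)
```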