• Title/Summary/Keyword: stereoscopic camera

A New Mapping Algorithm for Depth Perception in 3D Screen and Its Implementation (3차원 영상의 깊이 인식에 대한 매핑 알고리즘 구현)

  • Ham, Woon-Chul;Kim, Seung-Hwan
    • Journal of the Institute of Electronics Engineers of Korea SC / v.45 no.6 / pp.95-101 / 2008
  • In this paper, we present a new smoothing algorithm for variable depth mapping in real-time stereoscopic imaging on 3D displays. The proposed algorithm is based on a physical concept, the Laplace equation, and we also discuss the mapping of depth from the scene to the displayed image. Our approach to the stereoscopic mapping problem is similar to the multi-region algorithm proposed by N. Holliman; the main difference is that our Laplace-equation formulation takes the distance between the viewer and the object into account. We implement real-time stereoscopic image generation in OpenGL on a circularly polarized LCD screen to demonstrate its effect on the human visual system. Although the proposed algorithm is simulated with artificial objects rendered in OpenGL, the technique can be applied to stereoscopic camera systems for personal computers as well as public broadcasting; a sketch of the smoothing step follows.
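
The abstract describes smoothing a depth map by solving a Laplace equation. The following is a minimal sketch of that general idea, not the authors' implementation: a Jacobi relaxation that drives free pixels of a depth map toward the discrete Laplace solution while object pixels stay fixed (the function name, boundary handling, and iteration count are illustrative assumptions).

```python
import numpy as np

def smooth_depth_laplacian(depth, fixed_mask, iterations=500):
    """Relax free pixels of a depth map toward a solution of the
    discrete Laplace equation (hypothetical sketch, not the paper's code).

    depth:      2D float array of initial depth values.
    fixed_mask: boolean array, True where depth is held fixed
                (e.g., pixels on the object), False where it is smoothed.
    """
    d = depth.astype(np.float64).copy()
    for _ in range(iterations):
        # Jacobi update: each free pixel becomes the mean of its four
        # neighbors (np.roll wraps at the border; a real implementation
        # would treat the image edges explicitly).
        avg = 0.25 * (np.roll(d, 1, 0) + np.roll(d, -1, 0) +
                      np.roll(d, 1, 1) + np.roll(d, -1, 1))
        d[~fixed_mask] = avg[~fixed_mask]
    return d
```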

A New Depth and Disparity Visualization Algorithm for Stereoscopic Camera Rig

  • Ramesh, Rohit;Shin, Heung-Sub;Jeong, Shin-Il;Chung, Wan-Young
    • Journal of information and communication convergence engineering / v.8 no.6 / pp.645-650 / 2010
  • In this paper, we present the effect of binocular cues, which play a crucial role in the visualization of a stereoscopic or 3D image. This study is useful for extracting depth and disparity information by image-processing techniques. A linear relation between the object distance and the image distance is presented to discuss the cause of cybersickness. In the experimental results, a three-dimensional view of the depth map between the 2D images is shown. A median filter is used to reduce the noise in the disparity map. After median filtering, two filters, a Gabor filter and a Canny filter, are tested for disparity visualization between the two images. The Gabor filter estimates the disparity by texture extraction and discrimination between the two images, while the Canny filter visualizes the disparity by detecting edges in the two color images obtained from the stereoscopic cameras. The Canny filter is the better choice for estimating disparity because it is much more efficient at detecting edges: it converts the color images directly into color edges without a grayscale conversion, yielding clearer edges in the stereo images than the Gabor filter. Since the main goal of the research is to estimate the horizontal disparity of all possible regions or edges of the images, the Canny filter is proposed for clear visualization of the disparity; a sketch of this pipeline appears below.
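
As an illustration of the pipeline the abstract describes (disparity estimation, median filtering, and edge-based visualization), here is a hedged OpenCV sketch; the file names, block-matching parameters, and the per-channel Canny approximation of color edge detection are assumptions, not the authors' code.

```python
import cv2
import numpy as np

# Rectified stereo pair (placeholder paths).
left = cv2.imread("left.png")
right = cv2.imread("right.png")

# Block-matching disparity; OpenCV's StereoBM expects grayscale input.
gray_l = cv2.cvtColor(left, cv2.COLOR_BGR2GRAY)
gray_r = cv2.cvtColor(right, cv2.COLOR_BGR2GRAY)
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(gray_l, gray_r).astype(np.float32) / 16.0

# Median filter to suppress speckle noise in the disparity map.
disparity = cv2.medianBlur(disparity, 5)

# Edge-based visualization: run Canny on each color channel and take the
# per-pixel maximum, approximating edge detection without a grayscale pass.
edges = np.max([cv2.Canny(left[:, :, c], 100, 200) for c in range(3)], axis=0)
```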

Comparison of Velocity Fields of Wake behind a Propeller Using 2D PIV and stereoscopic PIV (2D PIV와 stereoscopic PIV 기법으로 측정한 프로펠러 후류의 속도장 비교 연구)

  • Paik Bu-Geun;Lee Sang-Joon
    • Proceedings of the Korean Society of Visualization / 2002.11a / pp.23-26 / 2002
  • The phase-averaged velocity fields of the three-dimensional turbulent wake behind a marine propeller measured by 2D PIV and stereoscopic PIV (SPIV) were compared directly. In-plane velocity fields obtained from consecutive particle images captured by a single camera in 2D PIV contain perspective errors due to out-of-plane motion. These errors can be removed by measuring all three velocity components with the two-camera SPIV method. Measuring three-component velocity fields is also necessary for investigating the complicated near-wake behind the propeller for proper propeller design. 400 instantaneous velocity fields were measured at each of four blade phases (0°, 18°, 36°, and 54°) and ensemble-averaged to investigate the spatial evolution of the propeller wake downstream; a sketch of this phase-averaging step follows. The phase-averaged velocity fields show the viscous wake developed along the blade surfaces and tip vortices formed periodically. The perspective error caused by out-of-plane motion was estimated by comparing the 2D PIV and SPIV results. The difference in the axial mean velocity fields measured by the two techniques is nearly proportional to the mean out-of-plane velocity component, which is large in the regions of the tip and trailing vortices. The axial turbulence intensity measured by 2D PIV was overestimated because out-of-plane velocity fluctuations contaminate the in-plane velocity vectors and inflate the in-plane turbulence intensities.
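
Phase-averaged statistics of this kind are typically computed as an ensemble mean over the instantaneous fields at one blade phase, with turbulence intensity as the RMS of the fluctuations. A minimal NumPy sketch, with a hypothetical array layout (the abstract does not specify one):

```python
import numpy as np

def phase_average(fields):
    """Ensemble statistics at one blade phase.

    fields: array of shape (N, H, W, 3) holding N instantaneous
            three-component velocity fields (hypothetical layout).
    """
    mean = fields.mean(axis=0)                      # phase-averaged velocity
    fluct = fields - mean                           # fluctuating component
    intensity = np.sqrt((fluct ** 2).mean(axis=0))  # RMS turbulence intensity
    return mean, intensity
```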

3D Stereoscopic Augmented Reality with a Monocular Camera (단안카메라 기반 삼차원 입체영상 증강현실)

  • Rho, Seungmin;Lee, Jinwoo;Hwang, Jae-In;Kim, Junho
    • Journal of the Korea Computer Graphics Society / v.22 no.3 / pp.11-20 / 2016
  • This paper introduces an effective method for generating 3D stereoscopic images that give viewers an immersive 3D experience on mobile binocular HMDs. Most previous AR systems with monocular cameras share a common limitation: the same real-world image is presented to both of the viewer's eyes, without parallax. Based on the assumption that viewers focus on the marker in marker-based AR, we recover the binocular disparity of the camera image and the virtual object using the pose information of the marker. The basic idea is to generate binocular disparity for the real-world image and the virtual object by placing the image on a 2D plane in 3D space defined by the marker pose; a sketch of the per-eye view setup follows. For non-marker areas of the image, we apply a blur effect, reducing visual discomfort by decreasing their sharpness. Our user studies show that, compared to previous binocular AR systems, the proposed method provides a strong sense of depth as well as a high sense of reality and visual comfort.
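
A hedged sketch of one way to realize the per-eye rendering the abstract describes: offsetting a tracked view matrix by half the interocular distance along the camera's x-axis for each eye. The matrix convention (camera-from-marker) and the IPD value are assumptions for illustration.

```python
import numpy as np

def stereo_view_matrices(camera_from_marker, ipd=0.064):
    """Left/right view matrices from a single tracked pose.

    camera_from_marker: 4x4 transform from the AR tracker (assumed to
                        map marker coordinates to camera coordinates).
    ipd:                assumed interocular distance in meters.
    """
    views = []
    for s in (-0.5, +0.5):  # left eye, right eye
        offset = np.eye(4)
        offset[0, 3] = s * ipd  # shift along the camera's x-axis
        views.append(offset @ camera_from_marker)
    return views
```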

Design of range measurement systems using a sonar and a camera (초음파 센서와 카메라를 이용한 거리측정 시스템 설계)

  • Moon, Chang-Soo;Do, Yong-Tae
    • Journal of Sensor Science and Technology / v.14 no.2 / pp.116-124 / 2005
  • In this paper, range measurement systems are designed using an ultrasonic sensor and a camera. An ultrasonic sensor provides the range to a target quickly and simply, but its low resolution is a disadvantage. We tackle this problem by employing a camera. Instead of using a stereoscopic sensor, which is widely used for 3D sensing but requires computationally intensive stereo matching, the range is measured by focusing and by structured lighting. For focusing, a straightforward focus measure named MMDH (min-max difference in histogram) is proposed and compared with existing techniques. In the structured-lighting method, light stripes projected by a beam projector are used; compared with a laser projector, the designed system can be built easily on a low budget. The system equation is derived by analyzing the sensor geometry. The sensing scenario proceeds in two steps. First, when better accuracy is required, the ultrasonic and camera-focus measurements are fused by maximum likelihood estimation (MLE), as sketched below. Second, when the target is within a range of particular interest, a range map of the target scene is obtained with the structured-lighting technique. In experiments, the designed systems showed a measurement accuracy of approximately 0.3 mm.
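
For independent Gaussian range measurements, the maximum likelihood fusion mentioned in the abstract reduces to inverse-variance weighting. A minimal sketch under that standard assumption (the variances and function name are illustrative, not the paper's):

```python
def fuse_ranges_mle(z_sonar, var_sonar, z_focus, var_focus):
    """Fuse two independent Gaussian range estimates by maximum likelihood.

    Each measurement is weighted by its inverse variance; the fused
    variance is the harmonic combination of the two.
    """
    w_s = 1.0 / var_sonar
    w_f = 1.0 / var_focus
    z_fused = (w_s * z_sonar + w_f * z_focus) / (w_s + w_f)
    var_fused = 1.0 / (w_s + w_f)
    return z_fused, var_fused

# Example: sonar reads 1.52 m (sigma 2 cm), focus reads 1.50 m (sigma 5 mm);
# the fused estimate lands close to the more certain focus measurement.
z, var = fuse_ranges_mle(1.52, 0.02**2, 1.50, 0.005**2)
```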

Rigorous Modeling of the First Generation of the Reconnaissance Satellite Imagery

  • Shin, Sung-Woong;Schenk, Tony
    • Korean Journal of Remote Sensing / v.24 no.3 / pp.223-233 / 2008
  • In the mid-1990s, the U.S. government released images acquired by the first generation of photo-reconnaissance satellite missions between 1960 and 1972. The Declassified Intelligence Satellite Photographs (DISP) from the Corona mission are of high quality, with an astounding ground resolution of about 2 m. The KH-4A panoramic camera system employed a scan angle of 70°, producing film strips with dimensions of 55 mm × 757 mm. Since GPS/INS did not exist at the time of data acquisition, the exterior orientation must be established in the traditional way, using control information and the interior orientation of the camera; detailed information about the camera is not available, however. To reconstruct points in object space from DISP imagery to an accuracy comparable to the imagery's resolution (a few meters), a precise camera model is essential. This paper is concerned with the derivation of a rigorous mathematical model for the KH-4A/B panoramic camera. The proposed model is compared with generic sensor models, such as affine transformations and rational functions, and the paper concludes with experimental results on the precision of reconstructed object points. The rigorous mathematical model for the KH-4A camera system is based on extended collinearity equations, assuming that the satellite trajectory during one scan is smooth and the attitude remains unchanged. As a result, the collinearity equations express the perspective center as a function of scan time; with the known satellite velocity, this translates into an along-track shift, so the exterior orientation contains seven parameters to be estimated (see the equations below). The reconstruction of object points can then be performed with the exterior orientation parameters, either by intersecting bundle rays with a known surface or by using the stereoscopic KH-4A arrangement with fore and aft cameras mounted at an angle of 30°.
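
A plausible reading of the extended collinearity model described above, in assumed notation (the symbols below are ours, not the paper's): the perspective center translates along-track with scan time, and the standard collinearity condition is applied around the moving center.

```latex
% Perspective center as a function of scan time t, with satellite speed v
% and unit along-track direction \mathbf{u} (assumed notation):
\mathbf{X}_0(t) = \mathbf{X}_0 + v\,t\,\mathbf{u}
% Collinearity condition for image point (x, y), focal length f,
% rotation matrix \mathbf{R}, scale \lambda, and object point \mathbf{X}:
\begin{pmatrix} x \\ y \\ -f \end{pmatrix}
  = \lambda\,\mathbf{R}^{\top}\bigl(\mathbf{X} - \mathbf{X}_0(t)\bigr)
% Exterior orientation: three position parameters in X_0, three attitude
% angles in R, plus the along-track shift rate -- seven in total.
```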

Stereoscopic Conversion of Mobile Camera Video (모바일 카메라 영상의 입체 변환)

  • Gil, Jong-In;Jang, Seungeun;Kim, Manbae
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2011.07a / pp.337-338 / 2011
  • In this paper, we propose a method for generating 3D stereoscopic images in real time on Android smartphones using the Camera Preview. A stereoscopic 3D image is a 2D image converted so that depth is added and perceived as stereoscopic when viewed. On mobile devices, however, hardware constraints make it difficult to generate such 3D stereoscopic images with performance that satisfies users. We first describe the configuration and procedure for using the camera under the Android operating system, and then propose a corresponding 2D-to-3D conversion algorithm. The proposed method analyzes edge extraction and depth map generation methods that produce good results within the device's capabilities, and generates left and right views from the obtained depth map; a sketch of this view-synthesis step follows. Finally, the two views are merged and displayed on the screen.
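
Left/right view synthesis from a depth map is commonly done by shifting pixels horizontally in proportion to depth (depth-image-based rendering). A simplified NumPy sketch of that idea, with an assumed 8-bit depth map and an illustrative maximum shift; disocclusion holes are left unfilled:

```python
import numpy as np

def synthesize_stereo(image, depth, max_shift=8):
    """Forward-warp a single image into left/right views using a depth map.

    image: (H, W, 3) color image.
    depth: (H, W) 8-bit depth map, larger = closer (assumed convention).
    """
    h, w = depth.shape
    shift = (depth.astype(np.float32) / 255.0 * max_shift).astype(np.int32)
    cols = np.arange(w)
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    for r in range(h):
        # Shift each pixel right/left by its per-pixel disparity;
        # holes from disocclusions remain black in this sketch.
        left[r, np.clip(cols + shift[r], 0, w - 1)] = image[r]
        right[r, np.clip(cols - shift[r], 0, w - 1)] = image[r]
    return left, right
```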

Correcting 3D camera tracking data for video composition (정교한 매치무비를 위한 3D 카메라 트래킹 기법에 관한 연구)

  • Lee, Jun-Sang;Lee, Imgeun
    • Proceedings of the Korean Society of Computer Information Conference / 2012.07a / pp.105-106 / 2012
  • In general, CG compositing is judged well done when it looks 'natural'. The captured footage is not always a static shot: when the camera moves, the CG must be registered precisely to the live-action camera movement for the composite to look natural. This requires 3D camera tracking at the compositing stage. Camera tracking includes reconstructing the 3D space at the time of shooting, such as the camera's 3D motion and its optical parameters, from the live-action footage alone. Errors arising in camera tracking during this process cause significant productivity problems when compositing live action with CG. To solve this problem, this paper proposes a method for correcting the tracking data in software; a generic illustration of such a correction follows.
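
The abstract does not detail the correction method, so the following is only a generic illustration of one common cleanup applied to solved camera paths: moving-average smoothing of jittery translation keys (the window size and data layout are assumptions).

```python
import numpy as np

def smooth_camera_path(translations, window=5):
    """Smooth per-frame camera translation keys with a moving average.

    translations: (N, 3) array of solved camera positions per frame.
    window:       odd window size; larger values remove more jitter
                  at the cost of rounding off fast camera moves.
    """
    kernel = np.ones(window) / window
    pad = window // 2
    padded = np.pad(translations, ((pad, pad), (0, 0)), mode="edge")
    return np.stack(
        [np.convolve(padded[:, i], kernel, mode="valid") for i in range(3)],
        axis=1,
    )
```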

Stereoscopic Video Compositing with a DSLR and Depth Information by Kinect (키넥트 깊이 정보와 DSLR을 이용한 스테레오스코픽 비디오 합성)

  • Kwon, Soon-Chul;Kang, Won-Young;Jeong, Yeong-Hu;Lee, Seung-Hyun
    • The Journal of Korean Institute of Communications and Information Sciences / v.38C no.10 / pp.920-927 / 2013
  • The chroma key technique, which composites images by separating an object from a background of a specific color, imposes restrictions on color and space. In particular, unlike ordinary chroma keying, image compositing for stereo 3D display requires a compositing method that looks natural in 3D space. This paper attempts to composite images in 3D space using a depth-keying method based on high-resolution depth information. A high-resolution depth map was obtained through camera calibration between the DSLR camera and the Kinect sensor. A 3D mesh model was created from the high-resolution depth information and mapped with RGB color values. The object was separated from its background according to depth and converted into a point cloud in 3D space; a sketch of this depth-keying step follows. The composite of the 3D virtual background and the object was then rendered and played back as stereoscopic 3D using a virtual camera.
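
A hedged sketch of the depth-keying idea: threshold the aligned depth map to isolate the foreground, then back-project the kept pixels to a colored point cloud with the pinhole model. The intrinsics and the assumption that depth is registered to the color image are illustrative, not taken from the paper.

```python
import numpy as np

def depth_key_point_cloud(depth, rgb, z_near, z_far, fx, fy, cx, cy):
    """Separate a foreground object by depth and back-project it.

    depth: (H, W) depth in meters, assumed aligned to the color image.
    rgb:   (H, W, 3) color image.
    z_near, z_far: depth band (meters) that contains the object.
    fx, fy, cx, cy: pinhole intrinsics of the color camera.
    """
    mask = (depth > z_near) & (depth < z_far)  # depth key instead of color key
    v, u = np.nonzero(mask)
    z = depth[v, u]
    x = (u - cx) * z / fx  # back-project through the pinhole model
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=1)
    colors = rgb[v, u]
    return points, colors
```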

Influence of Depth Differences by Setting 3D Stereoscopic Convergence Point on Presence, Display Perception, and Negative Experiences (스테레오 영상의 깊이감에 따른 프레즌스, 지각된 특성, 부정적 경험의 차이)

  • Lee, SangWook;Chung, Donghun
    • Journal of Broadcast Engineering / v.19 no.1 / pp.44-55 / 2014
  • The goal of 3D stereoscopy is not only to maximize positive experiences (such as a sense of realism) by adding depth information to 2D video but also to minimize negative experiences (such as fatigue). This study examines how different depth levels, induced by adjusting the 3D camera convergence, affect positive and negative experiences, and seeks an optimal parameter for viewers. The results show significant differences among depth levels in spatial involvement, realistic immersion, presence, depth perception, screen transmission, materiality, shape perception, spatial extension, and display perception, as well as significant differences in fatigue and unnaturalness. The study suggests that reducing the camera convergence angle so that the convergence point falls 0.17° behind the object is the optimal setting for 3D stereoscopic shooting.