• Title/Summary/Keyword: 3D image sensor

Stereoscopic Video Compositing with a DSLR and Depth Information by Kinect (키넥트 깊이 정보와 DSLR을 이용한 스테레오스코픽 비디오 합성)

  • Kwon, Soon-Chul;Kang, Won-Young;Jeong, Yeong-Hu;Lee, Seung-Hyun
    • The Journal of Korean Institute of Communications and Information Sciences / v.38C no.10 / pp.920-927 / 2013
  • The chroma key technique, which composites images by separating an object from a background of a specific color, imposes restrictions on color and space. In particular, unlike general chroma keying, image composition for stereoscopic 3D display requires a natural composition method in 3D space. This paper attempts to composite images in 3D space using a depth keying method based on high-resolution depth information. A high-resolution depth map was obtained through camera calibration between the DSLR and the Kinect sensor. A 3D mesh model was created from the high-resolution depth information and mapped with RGB color values. The object was separated from its background according to its depth and converted into a point cloud in 3D space. The composite of the object and a virtual 3D background was then rendered and played back as stereoscopic 3D images using a virtual camera.
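
The depth-keying idea in the abstract above can be sketched in a few lines: threshold the depth map to separate the foreground object, then back-project the surviving pixels into a colored point cloud. This is a minimal sketch with a pinhole model and made-up intrinsics, not the paper's DSLR–Kinect calibration pipeline.

```python
import numpy as np

def depth_key_to_point_cloud(depth, rgb, z_near, z_far, fx, fy, cx, cy):
    """Separate the foreground by a depth range (depth keying instead of
    chroma keying) and back-project it to a colored 3D point cloud."""
    mask = (depth > z_near) & (depth < z_far)   # keep pixels inside the depth window
    v, u = np.nonzero(mask)                     # pixel coordinates of the foreground
    z = depth[v, u]
    x = (u - cx) * z / fx                       # pinhole back-projection
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=1)
    colors = rgb[v, u]
    return points, colors

# toy example: a 4x4 depth map with a near object (z = 1) in the center
depth = np.full((4, 4), 5.0)
depth[1:3, 1:3] = 1.0
rgb = np.zeros((4, 4, 3), dtype=np.uint8)
pts, cols = depth_key_to_point_cloud(depth, rgb, 0.5, 2.0, fx=1.0, fy=1.0, cx=2.0, cy=2.0)
```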

Automatic Registration Method for Multiple 3D Range Data Sets (다중 3차원 거리정보 데이타의 자동 정합 방법)

  • 김상훈;조청운;홍현기
    • Journal of KIISE:Software and Applications / v.30 no.12 / pp.1239-1246 / 2003
  • Registration is the process of aligning range data sets captured from different views in a common coordinate system. To achieve a complete 3D model, the data sets must be refined after coarse registration. One of the most popular refinement techniques is the iterative closest point (ICP) algorithm, which starts from pre-estimated overlapping regions. This paper presents an improved ICP algorithm that can automatically register multiple 3D data sets from unknown viewpoints. The sensor projection, which represents the mapping of the 3D data onto its associated range image, is used to determine the overlapping region of two range data sets. By combining the ICP algorithm with the sensor projection constraint, multiple 3D data sets can be registered automatically, without error-prone pre-processing, mechanical positioning devices, or manual assistance. Experimental results on several 3D data sets show that the proposed method outperforms previous methods.
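
The refinement step that ICP iterates has a closed-form core: once correspondences are fixed, the best rigid transform between the two point sets follows from an SVD (the Kabsch solution). The sketch below shows only that core and omits the correspondence search and the paper's sensor-projection constraint.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Closed-form rotation R and translation t minimizing ||R@src + t - dst||,
    the per-iteration step of ICP once correspondences are fixed (Kabsch/SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

# verify by recovering a known rotation about z and a translation
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
src = np.random.default_rng(0).normal(size=(50, 3))
dst = src @ R_true.T + t_true
R, t = best_rigid_transform(src, dst)
```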

Updating Smartphone's Exterior Orientation Parameters by Image-based Localization Method Using Geo-tagged Image Datasets and 3D Point Cloud as References

  • Wang, Ying Hsuan;Hong, Seunghwan;Bae, Junsu;Choi, Yoonjo;Sohn, Hong-Gyoo
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.37 no.5 / pp.331-341 / 2019
  • With the spread of sensor-rich environments, smartphones have become one of the major platforms for obtaining and sharing information. Since GNSS (Global Navigation Satellite System) signals are difficult to use in areas with many buildings, smartphone localization there is a challenging task. To resolve this, a four-step image-based localization procedure is proposed. To improve the localization accuracy of the smartphone datasets, MMS (Mobile Mapping System) data and Google Street View were used as references. First, candidate matching images are searched using the GNSS observation attached to the smartphone query image. Second, SURF (Speeded-Up Robust Features) matching between the smartphone image and the reference dataset is performed, and wrong matching points are eliminated. Third, a geometric transformation is estimated from the matching points using a 2D affine transformation. Finally, the smartphone location and attitude are estimated with the PnP (Perspective-n-Point) algorithm. The smartphone location error is improved from the original GNSS observation of 10.204 m to a mean error of 3.575 m. The attitude estimation error is below 25 degrees for 92.4% of the adjusted images, with an average of 5.1973 degrees.
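
The third step, fitting a 2D affine transformation to matched points, reduces to a linear least-squares problem. A minimal sketch, with synthetic points standing in for SURF matches (real pipelines add RANSAC-style outlier rejection, as the paper's second step suggests):

```python
import numpy as np

def fit_affine_2d(src, dst):
    """Least-squares 2D affine transform mapping src points to dst points.
    Each match gives two linear equations in the six affine parameters."""
    n = len(src)
    A = np.zeros((2 * n, 6))
    A[0::2, 0:2] = src        # a*x + b*y + tx = dst_x
    A[0::2, 2] = 1.0
    A[1::2, 3:5] = src        # c*x + d*y + ty = dst_y
    A[1::2, 5] = 1.0
    rhs = dst.reshape(-1)     # interleaved [x0, y0, x1, y1, ...]
    params, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return params.reshape(2, 3)   # [[a, b, tx], [c, d, ty]]

# synthetic matches generated from a known affine transform
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
M_true = np.array([[1.2, -0.3, 5.0], [0.4, 0.9, -2.0]])
dst = src @ M_true[:, :2].T + M_true[:, 2]
M = fit_affine_2d(src, dst)
```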

3D Brain-Endoscopy Using VRML and 2D CT images (VRML을 이용한 3차원 Brain-endoscopy와 2차원 단면 영상)

  • Kim, D.O.;Ahn, J.Y.;Lee, D.H.;Kim, N.K.;Kim, J.H.;Min, B.G.
    • Proceedings of the KOSOMBE Conference / v.1998 no.11 / pp.285-286 / 1998
  • Virtual brain endoscopy is an effective method for detecting lesions in the brain. The brain is among the most important parts of the human body and is not easy to operate on, so reconstructing it in 3D can be very helpful to doctors. To increase reliability, this paper proposes a method of matching the 3D object with the corresponding 2D CT slice. The 3D brain endoscopy model is reconstructed from 35 slices of 2D CT images. A plate in the 3D scene can be dragged upward or downward to display the matching 2D CT image, which guides the user in recognizing the exact part being investigated. A VRML Script node is used to change the images, and a PlaneSensor node transmits the y-coordinate value associated with the CT image. The result was tested on a PC with a 400 MHz CPU, 512 MB of RAM, and a FireGL 3000 3D accelerator. The VRML file size is 3.83 MB. There was no delay in navigating the 3D world and no conflict when changing the CT images. This brain endoscopy system can also be put to practical use in medical education over the Internet.

Image Quality Evaluation and Tolerance Analysis for Camera Lenses with Diffractive Element

  • Lee, Sang-Hyuck;Jeong, Ho-Seop;Jin, Young-Su;Song, Seok-Ho;Park, Woo-Je
    • Journal of the Optical Society of Korea / v.10 no.3 / pp.105-111 / 2006
  • A novel image quality evaluation method based on a combination of rigorous grating diffraction theory and the ray-optic method is proposed. It is applied to the design optimization and tolerance analysis of optical imaging systems incorporating diffractive optical elements (DOEs). The method can predict the quality and resolution of the image formed on the image sensor plane by the optical imaging system. In particular, it can simulate the effect of the diffraction efficiency of the DOE in a camera lens module, which is very effective for predicting color differences and MTF performance. Using this method, we can effectively determine the fabrication tolerances of diffractive and refractive optical elements, such as variations in the profile thickness and shoulder of the DOE, as well as conventional parameters such as decenter and tilt in optical-surface alignment. A DOE-based 2-megapixel camera lens module designed with the optimization process based on the proposed evaluation method shows approximately 15% MTF improvement over a design without such optimization.
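
As a rough illustration of why DOE diffraction efficiency affects color rendering and MTF, scalar theory gives a closed form for a kinoform's efficiency versus wavelength: it is 100% only at the design wavelength, and off-design light leaks into other orders as stray background. This is only the scalar approximation (the paper uses rigorous grating diffraction theory), and the wavelengths below are illustrative.

```python
import numpy as np

def scalar_doe_efficiency(wavelength, design_wavelength, order=1):
    """Scalar-theory diffraction efficiency of a kinoform DOE:
    eta_m = sinc^2(m - lambda0/lambda), equal to 1 only at lambda = lambda0."""
    detuning = design_wavelength / wavelength
    return np.sinc(order - detuning) ** 2    # np.sinc(x) = sin(pi*x)/(pi*x)

# full efficiency at the design wavelength, reduced efficiency off-design
eta_green = scalar_doe_efficiency(550e-9, 550e-9)
eta_blue = scalar_doe_efficiency(450e-9, 550e-9)
```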

A Study on Automatic Seam Tracking using Vision Sensor (비전센서를 이용한 자동추적장치에 관한 연구)

  • 전진환;조택동;양상민
    • Proceedings of the Korean Society of Precision Engineering Conference / 1995.10a / pp.1105-1109 / 1995
  • A CCD camera integrated into a vision system was used to realize an automatic seam-tracking system, and the 3D information needed to generate the torch path was obtained using a laser strip beam. An adaptive Hough transform was used to extract the laser stripe and obtain the welding-specific point. Although the basic Hough transform takes too much time for on-line image processing, it tends to be robust to noise such as spatter. For that reason, it was complemented with the adaptive Hough transform to gain the on-line processing ability needed for locating a welding-specific point. The dead zone, where sensing of the weld line is impossible, is eliminated by rotating the camera about an axis centered at the welding torch. The camera angle is controlled to capture the minimum image data needed for weld-line sensing, which reduces image processing time. A fuzzy controller is adopted to control the camera angle.
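
The Hough voting that locates the laser stripe can be sketched as a plain (non-adaptive) accumulator over the line parameters (theta, rho); the adaptive variant in the paper reduces the on-line cost by narrowing this search. The grid sizes and the synthetic stripe below are illustrative.

```python
import numpy as np

def hough_line_peak(points, img_diag, n_theta=180, n_rho=200):
    """Minimal Hough line transform: each edge point votes for every line
    x*cos(theta) + y*sin(theta) = rho passing through it; the accumulator
    peak gives the dominant line (here, the laser stripe)."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rhos = np.linspace(-img_diag, img_diag, n_rho)
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)   # one rho per theta
        idx = np.clip(np.searchsorted(rhos, rho), 0, n_rho - 1)
        acc[np.arange(n_theta), idx] += 1
    ti, ri = np.unravel_index(acc.argmax(), acc.shape)
    return thetas[ti], rhos[ri]

# synthetic stripe: points on the horizontal line y = 10
# expected peak near theta = 90 degrees, rho = 10
pts = [(float(x), 10.0) for x in range(50)]
theta, rho = hough_line_peak(pts, img_diag=100.0)
```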

THE DEVELOPMENT OF CIRCULARLY POLARIZED SYNTHETIC APERTURE RADAR SENSOR MOUNTED ON UNMANNED AERIAL VEHICLE

  • Baharuddin, Merna;Akbar, Prilando Rizki;Sumantyo, Josaphat Tetuko Sri;Kuze, Hiroaki
    • Proceedings of the KSRS Conference / 2008.10a / pp.441-444 / 2008
  • This paper describes the development of a circularly polarized microstrip antenna as part of the Circularly Polarized Synthetic Aperture Radar (CP-SAR) sensor currently under development at the Microwave Remote Sensing Laboratory (MRSL) at Chiba University. CP-SAR is a new type of sensor developed for remote sensing. It yields lower-noise data/images because it avoids the depolarization problems encountered during propagation with linearly polarized synthetic aperture radar. The data/images obtained will also be investigated as the Axial Ratio Image (ARI), a new data product that is expected to reveal various unique backscattering characteristics. The sensor will be mounted on an Unmanned Aerial Vehicle (UAV) for fundamental research and applications. The microstrip antenna operates at 1.27 GHz (L-band) and uses a proximity-coupled feed. Initially, the single-patch design is optimized by modifying the microstrip feed line to yield high gain (above 5 dBi) and low return loss (below -10 dB). A minimum bandwidth of 10 MHz is targeted at an axial ratio below 3 dB for the circularly polarized antenna. A planar array is then formed from the single patch. The array design considers the beam radiation pattern in the azimuth and elevation planes, which is specified by the electrical and mechanical constraints of the UAV CP-SAR system. This research contributes to the field of radar remote sensing technology, with potential applications in land cover, disaster, and snow cover monitoring, and oceanography mapping.
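
The axial ratio behind the 3 dB target (and each pixel of the proposed Axial Ratio Image) can be computed from the two complex field components via their circular-polarization decomposition. A small sketch, using one common sign convention; the field values are illustrative, not measured antenna data.

```python
import numpy as np

def axial_ratio_db(e_x, e_y):
    """Axial ratio (dB) of the polarization ellipse from complex phasors
    e_x, e_y, via the right/left circular components (one sign convention)."""
    e_r = (e_x - 1j * e_y) / np.sqrt(2)   # right-hand circular component
    e_l = (e_x + 1j * e_y) / np.sqrt(2)   # left-hand circular component
    ar = (abs(e_r) + abs(e_l)) / abs(abs(e_r) - abs(e_l))
    return 20.0 * np.log10(ar)

# ideal circular polarization (equal magnitudes, 90 deg phase): AR = 0 dB;
# a slightly elliptical field gives a small positive AR
ar_ideal = axial_ratio_db(1.0, 1j)
ar_elliptical = axial_ratio_db(1.0, 0.9j)
```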

Active Shape Model-based Object Tracking using Depth Sensor (깊이 센서를 이용한 능동형태모델 기반의 객체 추적 방법)

  • Jung, Hun Jo;Lee, Dong Eun
    • Journal of Korea Society of Digital Industry and Information Management / v.9 no.1 / pp.141-150 / 2013
  • This study proposes a method of tracking an object with an Active Shape Model after separating it using a depth sensor. Unlike a common visual camera, a depth sensor is not affected by illumination intensity, so objects can be extracted more robustly. The proposed algorithm removes the horizontal component from the initial depth map and separates the object using the vertical component. Morphology operations and labeling are then performed for image correction and object extraction, which makes the process more efficient. Applying the Active Shape Model to the extracted object enables more robust tracking, since the Active Shape Model is robust to object occlusion. Compared with visual camera-based object tracking algorithms, the proposed method, which exploits the depth sensor, is more efficient and more robust. Experimental results show that the proposed ASM-based algorithm using a depth sensor can robustly track objects in real time.
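
The separate-then-label stage can be sketched with a depth threshold followed by a 4-connected flood fill. This is a simplified stand-in for the morphology and labeling the paper describes; the convention that a depth of 0 means "no reading", and the blob layout, are assumptions for the example.

```python
from collections import deque

import numpy as np

def label_foreground(depth, max_dist):
    """Threshold the depth map to a foreground mask, then label each
    4-connected blob with BFS flood fill. Returns (labels, blob count)."""
    mask = (depth > 0) & (depth < max_dist)   # 0 is treated as 'no return'
    labels = np.zeros(depth.shape, dtype=int)
    h, w = depth.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j] and labels[i, j] == 0:
                count += 1
                labels[i, j] = count
                queue = deque([(i, j)])
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = count
                            queue.append((ny, nx))
    return labels, count

# two near blobs (z = 1.0 and 1.5) in front of a far background (z = 4.0)
depth = np.full((5, 5), 4.0)
depth[0:2, 0:2] = 1.0
depth[3:5, 3:5] = 1.5
labels, n = label_foreground(depth, max_dist=3.0)
```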

Development of Multi-Laser Vision System For 3D Surface Scanning (3 차원 곡면 데이터 획득을 위한 멀티 레이져 비젼 시스템 개발)

  • Lee, J.H.;Kwon, K.Y.;Lee, H.C.;Doe, Y.C.;Choi, D.J.;Park, J.H.;Kim, D.K.;Park, Y.J.
    • Proceedings of the KSME Conference / 2008.11a / pp.768-772 / 2008
  • Various scanning systems have been studied in many industrial areas to acquire range data or to reconstruct explicit 3D models. Optical technology is now widely used by virtue of its non-contact operation and high accuracy. This paper describes a 3D laser scanning system developed to reconstruct the 3D surface of a large-scale object such as a curved plate of a ship hull. The scanning system comprises four parallel laser vision modules based on a triangulation technique. For the multi-laser vision system, a calibration method based on the least squares technique is applied. For global scanning, an effective method is presented that avoids the difficult problem of matching the scanning results of each camera. A minimal image processing algorithm and a robot-based calibration technique are also applied. A prototype was implemented for testing.
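
The triangulation principle behind each laser-vision module reduces, in 2D, to intersecting the camera ray through a pixel with the known laser sheet. A toy sketch with illustrative geometry (camera at the origin looking along +z, emitter offset along x), not the paper's calibrated 4-channel setup:

```python
import numpy as np

def triangulate_depth(u, f, baseline, laser_angle):
    """Depth at the pixel u where the laser stripe is detected.
    Camera ray: x = u*z/f.  Laser sheet: x = baseline - z*tan(laser_angle).
    Intersecting the two gives z = baseline*f / (u + f*tan(laser_angle))."""
    return baseline * f / (u + f * np.tan(laser_angle))

# sanity check: a point at depth z = 2.0 on an untilted laser sheet (x = 0.1)
# projects to pixel u = f*x/z = 500*0.1/2 = 25, and the depth is recovered
z = triangulate_depth(25.0, f=500.0, baseline=0.1, laser_angle=0.0)
```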

A Landmark Based Localization System using a Kinect Sensor (키넥트 센서를 이용한 인공표식 기반의 위치결정 시스템)

  • Park, Kwiwoo;Chae, JeongGeun;Moon, Sang-Ho;Park, Chansik
    • The Transactions of The Korean Institute of Electrical Engineers / v.63 no.1 / pp.99-107 / 2014
  • In this paper, a landmark-based localization system using a Kinect sensor is proposed and evaluated with an implemented system, for the precise and autonomous navigation of low-cost robots. The proposed method finds the positions of landmarks on the image plane and their depth values using the color and depth images. Coordinate transformations are defined using the depth value; through them, a position on the image plane is transformed into a position in the body frame. The ranges between the landmarks and the Kinect sensor are the norms of the landmark positions in the body frame. The Kinect sensor position is computed by trilateration, whose inputs are the ranges and the known landmark positions. In addition, a new matching method using the pinhole model is proposed to reduce the mismatch between the depth and color images. Furthermore, a height error compensation method using the relationship between the body frame and real-world coordinates is proposed to reduce the effect of incorrect leveling. An error analysis is also given to determine the effect of the focal length, principal point, and depth value on the range. Experiments with 2D barcodes and the implemented system show that positions with less than 3 cm error are obtained in an enclosed space (3,500 mm × 3,000 mm × 2,500 mm).
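
The trilateration step admits a simple linear least-squares solution once the quadratic terms are cancelled by subtracting one range equation from the rest. A sketch with made-up landmark positions, not the paper's experimental layout:

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Position from known landmark positions p_i and measured ranges r_i.
    Subtracting |x - p_0|^2 = r_0^2 from |x - p_i|^2 = r_i^2 removes |x|^2,
    leaving the linear system 2(p_i - p_0)·x = |p_i|^2 - |p_0|^2 - r_i^2 + r_0^2."""
    p0, r0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - p0)
    b = (np.sum(anchors[1:] ** 2, axis=1) - np.sum(p0 ** 2)
         - ranges[1:] ** 2 + r0 ** 2)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# four landmarks and noise-free ranges to a known position
anchors = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0],
                    [0.0, 3.0, 0.0], [0.0, 0.0, 3.0]])
true_pos = np.array([1.0, 1.0, 1.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1)
pos = trilaterate(anchors, ranges)
```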