• Title/Summary/Keyword: stereo image

Search Results: 1,065

Comparison and Analysis of Matching DEM Using KOMPSAT-3 In/Cross-track Stereo Pair (KOMPSAT-3 In/Cross-track 입체영상을 이용한 매칭 DEM 비교 분석)

  • Oh, Kwan-Young; Jeong, Eui-Cheon; Lee, Kwang-Jae; Kim, Youn-Soo; Lee, Won-Jin
    • Korean Journal of Remote Sensing, v.34 no.6_3, pp.1445-1456, 2018
  • The purpose of this study is to compare the quality and characteristics of matching DEMs generated from KOMPSAT-3 stereo pairs captured in in-track and cross-track modes. For this purpose, two KOMPSAT-3 stereo pairs taken over the same area were collected. The two stereo pairs have similar stereo geometry, such as B/H ratio and convergence angle. Sensor modeling for DEM production was performed with RFM affine calibration using multiple GCPs. The GCPs used in the study were extracted from the 0.25 m ortho-image and the 5 m DEM provided by NGII. In addition, the matching DEMs were produced at the same resolution as the reference DEM for comparative analysis. As a result of the experiment, the horizontal and vertical errors at the CPs indicated an accuracy of 1 to 3 pixels. In addition, the shapes and accuracy of the two DEMs were nearly identical in areas where natural or artificial land-cover effects were small.
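The RFM affine calibration described above is, in essence, a least-squares fit of a small affine correction in image space between the GCP image coordinates predicted by the rational function model and the measured ones. A minimal sketch of that fitting step, assuming the RPC-projected and measured pixel coordinates are already available as arrays (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def fit_affine_bias(projected_xy, measured_xy):
    """Fit a 2x3 affine correction that maps RPC-projected GCP image
    coordinates onto their measured image coordinates (least squares,
    requires at least three well-distributed GCPs).

    projected_xy, measured_xy: (N, 2) arrays of (column, row) pixel coordinates.
    """
    n = projected_xy.shape[0]
    design = np.hstack([projected_xy, np.ones((n, 1))])            # (N, 3)
    coeffs, *_ = np.linalg.lstsq(design, measured_xy, rcond=None)  # (3, 2)
    return coeffs.T                                                # (2, 3)

def apply_affine_bias(A, xy):
    """Apply the fitted correction to any RPC-projected image coordinates."""
    n = xy.shape[0]
    return np.hstack([xy, np.ones((n, 1))]) @ A.T
```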

Stereoscopic Video Compositing with a DSLR and Depth Information by Kinect (키넥트 깊이 정보와 DSLR을 이용한 스테레오스코픽 비디오 합성)

  • Kwon, Soon-Chul; Kang, Won-Young; Jeong, Yeong-Hu; Lee, Seung-Hyun
    • The Journal of Korean Institute of Communications and Information Sciences, v.38C no.10, pp.920-927, 2013
  • The chroma key technique, which composites images by separating an object from a background of a specific color, has restrictions on color and space. In particular, unlike general chroma keying, image composition for stereo 3D display requires a natural composition method in 3D space. This study composites images in 3D space using a depth keying method based on high-resolution depth information. A high-resolution depth map was obtained through camera calibration between the DSLR and the Kinect sensor. A 3D mesh model was created from the high-resolution depth information and mapped with RGB color values. The object was separated from its background according to depth and converted into a point cloud in 3D space. The composite of the 3D virtual background and the object was then rendered and played back as stereo 3D images using a virtual camera.
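The depth keying step, separating the foreground by a depth threshold and back-projecting it into a colored point cloud, can be sketched roughly as below; the intrinsics and the near/far thresholds are illustrative assumptions, not values from the paper.

```python
import numpy as np

def depth_key_to_point_cloud(depth_m, rgb, fx, fy, cx, cy, near=0.5, far=1.5):
    """Keep only pixels whose depth lies within [near, far] metres (the depth
    key) and back-project them into a colored 3D point cloud.

    depth_m: (H, W) depth map in metres, registered to the RGB image.
    rgb:     (H, W, 3) color image.
    Returns (points Nx3, colors Nx3).
    """
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    mask = (depth_m > near) & (depth_m < far)      # foreground selected by depth
    z = depth_m[mask]
    x = (u[mask] - cx) * z / fx                    # pinhole back-projection
    y = (v[mask] - cy) * z / fy
    return np.stack([x, y, z], axis=1), rgb[mask]
```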

Relative Localization for Mobile Robot using 3D Reconstruction of Scale-Invariant Features (스케일불변 특징의 삼차원 재구성을 통한 이동 로봇의 상대위치추정)

  • Kil, Se-Kee; Lee, Jong-Shill; Ryu, Je-Goon; Lee, Eung-Hyuk; Hong, Seung-Hong; Shen, Dong-Fan
    • The Transactions of the Korean Institute of Electrical Engineers D, v.55 no.4, pp.173-180, 2006
  • A key component of autonomous navigation for an intelligent home robot is localization and map building with features recognized from the environment. To make this feasible, accurate measurement of the relative location between the robot and the features is essential. In this paper, we propose a relative localization algorithm based on 3D reconstruction of scale-invariant features from two images captured by two parallel cameras. We capture two images from parallel cameras attached to the front of the robot and detect scale-invariant features in each image using SIFT (scale-invariant feature transform). We then match the feature points of the two images and obtain the relative location by 3D reconstruction of the matched points. A conventional stereo camera requires very precise extrinsic calibration and pixel-level matching between the two views; because we use two ordinary cameras together with scale-invariant feature points, the extrinsic parameters are easy to set up. Furthermore, the 3D reconstruction does not need any other sensor, and the results can simultaneously be used for obstacle avoidance, map building, and localization. We set the distance between the two cameras to 20 cm and captured 3 frames per second. The experimental results show a maximum error of ±6 cm at ranges under 2 m and ±15 cm at ranges between 2 m and 4 m.
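A rough OpenCV sketch of the pipeline described above (SIFT detection, ratio-test matching, and triangulation for an ideal rectified parallel rig with a 20 cm baseline); the camera intrinsics are placeholders and square pixels are assumed, so this is not the authors' exact setup.

```python
import cv2
import numpy as np

def relative_points_3d(img_left, img_right, fx, cx, cy, baseline_m=0.20):
    """Detect SIFT features in both grayscale images, match them with a ratio
    test, and triangulate the matches for an ideal rectified parallel pair."""
    sift = cv2.SIFT_create()
    kp_l, des_l = sift.detectAndCompute(img_left, None)
    kp_r, des_r = sift.detectAndCompute(img_right, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des_l, des_r, k=2)
            if m.distance < 0.75 * n.distance]              # Lowe's ratio test

    pts_l = np.float32([kp_l[m.queryIdx].pt for m in good]).T   # 2xN
    pts_r = np.float32([kp_r[m.trainIdx].pt for m in good]).T

    K = np.array([[fx, 0, cx], [0, fx, cy], [0, 0, 1]], float)  # square pixels assumed
    P_l = K @ np.hstack([np.eye(3), np.zeros((3, 1))])          # left camera at origin
    P_r = K @ np.hstack([np.eye(3), [[-baseline_m], [0.0], [0.0]]])

    pts4 = cv2.triangulatePoints(P_l, P_r, pts_l, pts_r)        # 4xN homogeneous
    return (pts4[:3] / pts4[3]).T                               # Nx3, metres
```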

Implementation of the SLAM System Using a Single Vision and Distance Sensors (단일 영상과 거리센서를 이용한 SLAM시스템 구현)

  • Yoo, Sung-Goo; Chong, Kil-To
    • Journal of the Institute of Electronics Engineers of Korea SC, v.45 no.6, pp.149-156, 2008
  • A SLAM (Simultaneous Localization and Mapping) system finds the robot's global position and builds a map from sensing data while an unmanned robot navigates an unknown environment. Two kinds of systems have been developed: one uses distance measurement sensors such as ultrasonic and laser sensors, and the other uses a stereo vision system. Distance-sensor SLAM has low computing time and low cost, but its precision can be degraded by measurement error or sensor non-linearity. In contrast, a stereo vision system can accurately measure 3D space, but it needs a high-end system for the complex calculation and is an expensive tool. In this paper, we implement a SLAM system using a single camera image and PSD sensors. The system detects obstacles with the front PSD sensor and then perceives the size and features of the obstacles by image processing. Probabilistic SLAM was implemented using the sensor and image data, and we verify the performance of the system by real experiments.
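The step that combines the PSD range reading with image processing to estimate obstacle size can be sketched with a simple pinhole relation; the focal length and pixel values below are illustrative, not taken from the paper.

```python
def obstacle_width_m(bbox_width_px, distance_m, focal_px):
    """Estimate the physical width of an obstacle from its pixel width in the
    image and the range measured by the PSD sensor, via the pinhole relation
    width_world = width_pixels * distance / focal_length."""
    return bbox_width_px * distance_m / focal_px

# For example, a 120-pixel-wide blob at 0.8 m with a 600-pixel focal length
# corresponds to roughly 0.16 m of physical width.
print(obstacle_width_m(120, 0.8, 600))
```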

Research for Generation of Accurate DEM using High Resolution Satellite Image and Analysis of Accuracy (고해상도 위성영상을 이용한 정밀 DEM 생성 및 정확도 분석에 관한 연구)

  • Jeong, Jae-Hoon; Lee, Tae-Yoon; Kim, Tae-Jung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.26 no.4, pp.359-365, 2008
  • This paper focuses on the generation of more accurate DEMs and the analysis of their accuracy. For this, we applied a suitable sensor modeling technique for each satellite image together with automatic coarse-to-fine matching using image pyramids. A matching algorithm based on epipolarity and scene geometry was also applied for stereo matching. IKONOS, QuickBird, SPOT-5, and KOMPSAT-2 images were used for the experiments. In particular, we applied an orbit-attitude sensor modeling technique to KOMPSAT-2 and performed DEM generation successfully. All of the generated DEMs show good quality. Assessment was carried out using USGS DTED, and the DEMs generated in this research were also compared with DEMs generated by commonly used software. All DEMs had a mean absolute error of 9-12 m and an RMS error of 13-16 m. The experimental results show that the generated DEMs perform similarly to or better than the DEMs produced by commonly used software.
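The coarse-to-fine pyramid matching idea, locating a conjugate point by normalized cross-correlation at a reduced resolution and refining it at full resolution, might look roughly like the following sketch; the two-level pyramid, window size, and search ranges are illustrative choices, not the authors' parameters.

```python
import cv2
import numpy as np

def pyramid_ncc_match(left, right, pt, win=15, search=40):
    """Find the conjugate of `pt` (x, y in the left grayscale image) in the
    right image by NCC template matching: first on half-resolution images
    with a wide search range, then refined on the full-resolution images."""
    def ncc_search(L, R, p_left, p_right, w, s):
        xl, yl = int(p_left[0]), int(p_left[1])
        xr, yr = int(p_right[0]), int(p_right[1])
        tmpl = L[yl - w:yl + w + 1, xl - w:xl + w + 1]           # template around left point
        roi = R[yr - w - s:yr + w + s + 1, xr - w - s:xr + w + s + 1]
        score = cv2.matchTemplate(roi, tmpl, cv2.TM_CCOEFF_NORMED)
        _, _, _, loc = cv2.minMaxLoc(score)                      # location of best NCC
        return (xr - s + loc[0], yr - s + loc[1])                # match centre in image coords

    # level 1: half resolution, wide search starting at the same position
    l1, r1 = cv2.pyrDown(left), cv2.pyrDown(right)
    half_pt = (pt[0] / 2, pt[1] / 2)
    coarse = ncc_search(l1, r1, half_pt, half_pt, win, search)
    # level 0: full resolution, narrow search around the upscaled prediction
    pred = (coarse[0] * 2, coarse[1] * 2)
    return ncc_search(left, right, pt, pred, win, 5)
```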

Whole Mount Preparation of Primary Cultured Neuron for HVEM Observation (배양된 신경세포 관찰을 위한 초고압전자현미경 홀마운트 시료제작기법)

  • Kim, Hyun-Wook; Hong, Soon-Taek; Oh, Seung-Hak; Park, Chang-Hyun; Kim, Hyun; Rhyu, Im-Joo
    • Applied Microscopy, v.41 no.1, pp.69-73, 2011
  • A high-voltage electron microscope (HVEM) has higher resolution and penetration power than a conventional transmission electron microscope, so it can be loaded with thick specimens. Some researchers have taken advantage of the HVEM to explore the three-dimensional configuration of biological structures, including tissues and cells. Whole mount preparation has been employed to study some cell lines and primary cultured cells. In this study, we introduce a useful whole mount preparation method for neuronal studies. Plastic coverslips were punched, covered with a formvar membrane, and coated with carbon. Neurons obtained from embryonic day 18 rat hippocampus were seeded on the prepared coverslips. The coverslips were fixed, dried in a freeze drier, and kept in a desiccator until HVEM observation. We could observe detailed neuronal structures such as the soma, dendrites, and spines under the HVEM without conventional thin sectioning and heavy-metal staining. The anaglyph image based on a stereo pair (−8°, +8°) provides three-dimensional perception of the neuronal dendrites and their spines. This method could be applied to sophisticated analysis of dendritic spines under various experimental conditions.
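The anaglyph mentioned at the end is simply a per-channel recombination of the two tilt views of the stereo pair. A minimal sketch, assuming registered RGB images with one tilt view feeding the red channel and the other the green/blue channels:

```python
import numpy as np

def make_anaglyph(view_a: np.ndarray, view_b: np.ndarray) -> np.ndarray:
    """Compose a red-cyan anaglyph from a registered stereo pair: the red
    channel comes from one tilt view, green and blue from the other.
    Both inputs are (H, W, 3) arrays in RGB channel order (an assumption)."""
    anaglyph = view_b.copy()
    anaglyph[..., 0] = view_a[..., 0]   # red channel taken from the other view
    return anaglyph
```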

A Study on Disparity Correction of Occlusion using Occluding Patterns (가려짐 패턴을 이용한 가려짐 영역의 시차 교정에 관한 연구)

  • Kim, Dae-Hyun; Choi, Jong-Soo
    • Journal of the Institute of Electronics Engineers of Korea SP, v.42 no.4 s.304, pp.13-20, 2005
  • In this paper, we propose new smoothing filters, called occluding patterns, that can accurately correct the disparities of occluded areas in an estimated disparity map. An image is composed of several layers, and each layer presents similar disparities. Furthermore, the distribution of the estimated disparities has a specific direction around the boundary of an occlusion, and this distribution shows different directions in the left-based and right-based disparity maps. Typical smoothing filters, such as the mean filter and the median filter, do not take these characteristics into account; they can reduce some errors, but they cannot guarantee the accuracy of the corrected disparities. In contrast, occluding patterns can accurately correct the disparities of occluded areas because they consider both where occlusion occurs and the range over which the occluded disparities are distributed, derived from the disparity maps estimated with respect to the left and right images. We tested the occluding patterns on real stereo image sets and, as a result, corrected the disparities of occluded areas more accurately than typical smoothing filters.
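For reference, the typical median-filter correction that the paper argues against can be sketched as follows: occluded pixels are flagged by a left-right consistency check and then replaced by the median of valid neighboring disparities. The window size and consistency threshold are illustrative assumptions, and this is the baseline, not the proposed occluding patterns.

```python
import numpy as np

def median_fill_occlusions(disp_left, disp_right, win=5, tol=1.0):
    """Baseline correction: flag occluded pixels with a left-right consistency
    check, then replace each flagged disparity with the median of the valid
    disparities in its (2*win+1)^2 neighbourhood."""
    h, w = disp_left.shape
    cols = np.arange(w)[None, :].repeat(h, axis=0)
    # pixel in the right view that the left disparity points to
    target = np.clip(cols - np.round(disp_left).astype(int), 0, w - 1)
    back = disp_right[np.arange(h)[:, None], target]
    occluded = np.abs(disp_left - back) > tol        # inconsistent -> likely occluded

    corrected = disp_left.copy()
    for y, x in zip(*np.where(occluded)):
        y0, y1 = max(0, y - win), min(h, y + win + 1)
        x0, x1 = max(0, x - win), min(w, x + win + 1)
        patch = disp_left[y0:y1, x0:x1]
        valid = patch[~occluded[y0:y1, x0:x1]]
        if valid.size:
            corrected[y, x] = np.median(valid)
    return corrected
```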

Estimation of Traffic Volume Using Deep Learning in Stereo CCTV Image (스테레오 CCTV 영상에서 딥러닝을 이용한 교통량 추정)

  • Seo, Hong Deok; Kim, Eui Myoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.38 no.3, pp.269-279, 2020
  • Traffic volume estimation mainly relies on surveying equipment such as automatic vehicle classification, vehicle detection systems, toll collection systems, and personnel surveys through CCTV (Closed Circuit TeleVision), but this requires considerable manpower and cost. In this study, we propose a method of estimating traffic volume using deep learning and stereo CCTV to overcome the limitation of a single CCTV, which cannot detect every vehicle. The COCO (Common Objects in Context) dataset was used to train the deep learning model that detects vehicles, and vehicles were detected in the left and right CCTV images in real time. Then, vehicles that could not be detected in one image were additionally recovered using an affine transformation to improve the accuracy of the traffic volume. Experiments were conducted separately for a normal road environment and for foggy weather conditions. In the normal road environment, vehicle detection improved by 6.75% and 5.92% in the left and right images, respectively, compared with a single CCTV image. In the foggy road environment, vehicle detection improved by 10.79% and 12.88% in the left and right images, respectively.
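The affine cross-check between the two views, mapping detections found in one CCTV image into the other so that vehicles missed there can still be counted, can be sketched as below; the correspondence points and the box format are illustrative assumptions, not the paper's implementation.

```python
import cv2
import numpy as np

def transfer_boxes(src_pts, dst_pts, boxes_src):
    """Estimate an affine transform between the two CCTV views from matched
    road points, then map detected box centres from one view into the other.

    src_pts, dst_pts: (N, 2) corresponding pixel coordinates, N >= 3.
    boxes_src: (M, 4) detections [x1, y1, x2, y2] in the source view.
    Returns an (M, 2) array of predicted box centres in the destination view.
    """
    A, _ = cv2.estimateAffine2D(np.float32(src_pts), np.float32(dst_pts))
    centres = np.column_stack([(boxes_src[:, 0] + boxes_src[:, 2]) / 2.0,
                               (boxes_src[:, 1] + boxes_src[:, 3]) / 2.0])
    ones = np.ones((centres.shape[0], 1))
    return np.hstack([centres, ones]) @ A.T
```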

AUTOMATIC PRECISION CORRECTION OF SATELLITE IMAGES

  • Im, Yong-Jo; Kim, Tae-Jung
    • Proceedings of the KSRS Conference, 2002.10a, pp.40-44, 2002
  • Precision correction is the process of geometrically aligning images to a reference coordinate system using GCPs (Ground Control Points). Many applications of remote sensing data, such as change detection, mapping, and environmental monitoring, rely on the accuracy of precision correction. However, it is a very time-consuming and laborious process: it requires GCP collection, that is, the identification of image points and their corresponding reference coordinates. At typical satellite ground stations, GCP collection takes most of the manpower spent on processing satellite images, so a method for automatic registration of satellite images is in demand. In this paper, we propose a new algorithm for automatic precision correction using GCP chips and RANSAC (Random Sample Consensus). The algorithm is divided into two major steps. The first is the automated generation of ground control points, using automated stereo matching based on normalized cross-correlation. We improved the accuracy of stereo matching by determining the size and shape of the match windows according to the incidence angle and scene orientation from ancillary data. The second is the robust estimation of the mapping function from control points. We used the RANSAC algorithm for this step and effectively removed the outliers among the matching results. We carried out experiments with SPOT images over three test sites, taken at different times and with different look angles. The left image was used to select GCP chips, and the right image was matched against the GCP chips to perform automatic registration. As a result, we showed that our approach of automated matching and robust estimation worked well for automated registration.
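The two steps described, NCC matching of GCP chips against the input scene and robust estimation of the mapping function with RANSAC, can be roughly sketched with OpenCV as follows; the chip-rejection threshold, the affine model standing in for the mapping function, and the RANSAC threshold are assumptions for illustration.

```python
import cv2
import numpy as np

def match_gcp_chips(scene, chips, chip_refs, min_ncc=0.6):
    """Locate each GCP chip in the scene image by normalized cross-correlation
    and pair its reference coordinates with the matched scene coordinates."""
    ref_pts, scene_pts = [], []
    for chip, ref_xy in zip(chips, chip_refs):
        score = cv2.matchTemplate(scene, chip, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, loc = cv2.minMaxLoc(score)
        if max_val < min_ncc:                        # reject weak correlations
            continue
        h, w = chip.shape[:2]
        scene_pts.append([loc[0] + w / 2.0, loc[1] + h / 2.0])
        ref_pts.append(ref_xy)
    return np.float32(scene_pts), np.float32(ref_pts)

def robust_mapping(scene_pts, ref_pts):
    """Estimate the scene-to-reference mapping with RANSAC, discarding outlier
    matches (a 2D affine model stands in for the paper's mapping function)."""
    M, inliers = cv2.estimateAffine2D(scene_pts, ref_pts,
                                      method=cv2.RANSAC,
                                      ransacReprojThreshold=3.0)
    return M, inliers.ravel().astype(bool)
```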


A System for Acquiring and Providing 3D Images Corresponding to the User's Viewpoint (사용자 시점에 대응 3차원 영상 획득 및 제공 시스템)

  • Lee, Seung-Jae; Jeon, Yeong-Mi; Kim, Nam-Woo; Jeong, Do-Un
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2016.10a, pp.835-837, 2016
  • In this research, we detect the user's viewpoint to obtain its coordinates and provide corresponding stereo images acquired from different positions, implementing a system that allows remote observation free of spatial limits. The system was designed on the basis that physical actions such as left-right movement and rotation of the head are the largest factors in changes of the human viewpoint. Accordingly, the system analyzes and calculates the user's viewpoint, controls the acquisition of the corresponding three-dimensional images, and uses network communication for data transmission. Because a stereo image corresponding to the viewpoint is acquired and presented as a three-dimensional image, the user can freely observe a target at a remote location as if observing the object in the same space, and the implemented system provides the same visual effect as direct observation.
