• Title/Summary/Keyword: Depth Information Matching

182 search results

Prediction of potential Landslide Sites Using GIS (지리정보시스템에 기반한 산지재해 예측)

  • Cha, Kyung Seob;Kim, Tae Hoon;Kim, Young Jin
• Journal of Korean Society of Societal Security
    • /
    • v.1 no.4
    • /
    • pp.57-64
    • /
    • 2008
  • Korea has suffered serious loss of life and property due to landslides triggered by heavy rains in every monsoon season. This study developed a physically based landslide prediction model consisting of three parts: a slope stability analysis model, a groundwater flow model, and a soil depth model. To evaluate its applicability to the prediction of landslides, data on actual landslides were plotted on the areas predicted on the GIS map. The matching rate of the model to the actual data was 84.8%. The relationships between hydrological and landform factors and potential landslides were also analyzed.

  • PDF

Precise Detection of Coplanar Checkerboard Corner Points for Stereo Camera Calibration Using a Single Frame (스테레오 카메라 캘리브레이션을 위한 동일평면 체커보드 코너점 정밀검출)

  • Park, Jeong-Min;Lee, Jong-In;Cho, Joon-Bum;Lee, Joon-Woong
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.7
    • /
    • pp.602-608
    • /
    • 2015
  • This paper proposes an algorithm for precise detection of corner points on a coplanar checkerboard in order to perform stereo camera calibration using a single frame. Considering the conditions of automobile production lines, where a stereo camera is attached to the windshield of a vehicle, this research focuses on a coplanar calibration methodology. To obtain accurate values of the stereo camera parameters with this methodology, precise localization of a large number of feature points on the calibration target image must be ensured. To meet this requirement, ideas concerning the checkerboard pattern design and the use of a homography matrix are presented. The calibration result obtained by the proposed method is also verified by comparing the depth information from stereo matching with that from a laser scanner.
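The core operation behind the homography-based corner localization mentioned in the abstract is mapping planar points through a 3×3 homography. A minimal sketch, using plain nested lists rather than the authors' actual pipeline:

```python
def apply_homography(H, pt):
    """Map a 2D point through a 3x3 homography (row-major nested lists).

    Coplanar checkerboard corners in one view map to another view (or to
    the board's own coordinate frame) by x' ~ H x in homogeneous
    coordinates; dividing by the third component returns to pixel units.
    """
    x, y = pt
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    wh = H[2][0] * x + H[2][1] * y + H[2][2]
    return (xh / wh, yh / wh)
```

Fitting H from four or more corner correspondences, and then re-projecting the ideal grid through it, is one common way to refine noisy corner detections on a planar target.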

Weighted Census Transform and Guide Filtering based Depth Map Generation Method (가중치를 이용한 센서스 변환과 가이드 필터링 기반깊이지도 생성 방법)

  • Mun, Ji-Hun;Ho, Yo-Sung
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.54 no.2
    • /
    • pp.92-98
    • /
    • 2017
  • In general, images contain geometric and radiometric errors. The census transform can solve stereo mismatching problems caused by radiometric distortion. Since the conventional census transform compares the center pixel of a window with its neighboring pixels, it is hard to obtain an accurate matching result when the differences between pixel values are small. To solve this problem, we propose a census transform method that applies a different four-step weight to each pixel value difference, using an assistance window inside the window kernel. If the current pixel value is larger than the average pixel value of the assistance window, a high weight is assigned; otherwise, a low weight is assigned, yielding a differential census transform. After generating an initial disparity map using the weighted census transform and the input images, gradient information is additionally used to model a cost function for generating the final disparity map. To find the optimal cost value, we use guided filtering. Since the filtering is performed using both the input image and the disparity image, object boundary regions are preserved. Experimental results confirm that the performance of the proposed stereo matching method is improved compared to the conventional method.
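The weighted census idea can be sketched roughly as follows. This is an illustrative simplification, not the authors' exact scheme: the "assistance window" average is stood in for by the whole-window mean, and a two-level weight replaces the paper's four-step weights.

```python
def weighted_census(window, low=1, high=2):
    """Weighted census transform of one square window (illustrative sketch).

    Each neighbor is encoded as (bit, weight): the bit compares the
    neighbor with the center pixel, as in the ordinary census transform,
    while the weight is high when the pixel exceeds the window mean
    (standing in for the paper's assistance-window average) and low
    otherwise, so small and large intensity differences are weighted
    differently in the matching cost.
    """
    n = len(window)
    c = n // 2
    flat = [v for row in window for v in row]
    mean = sum(flat) / len(flat)
    center = window[c][c]
    codes = []
    for i in range(n):
        for j in range(n):
            if i == c and j == c:
                continue  # skip the center pixel itself
            bit = 1 if window[i][j] >= center else 0
            weight = high if window[i][j] > mean else low
            codes.append((bit, weight))
    return codes
```

Matching costs between two windows would then be a weighted Hamming-style distance over these codes, rather than the unweighted bit count of the plain census transform.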

Fuzzy Tracking Control Based on Stereo Images for Tracking of Moving Robot (이동 로봇 추적을 위한 스테레오 영상기반 퍼지 추적제어)

  • Min, Hyun-Hong;Yoo, Dong-Sang;Kim, Yong-Tae
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.22 no.2
    • /
    • pp.198-204
    • /
    • 2012
  • Tracking and recognition of robots are required for cooperative robot tasks in various environments. In this paper, a tracking control system for a moving robot using stereo image processing, a codebook model, and a fuzzy controller is proposed. First, foreground and background images are separated using the codebook model method. A candidate region is selected based on color information in the separated foreground image, and the real distance of the robot is estimated by matching with the depth image acquired through stereo image processing. Morphological opening and closing are applied, and labeling according to the size of the mobile robot is used to recognize the moving robot effectively. A fuzzy tracking controller using distance and movement information from stereo image processing is designed for effective tracking according to the movement velocity of the target robot. The proposed fuzzy tracking control method is verified through tracking experiments with mobile robots equipped with a stereo camera.
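A fuzzy controller of the kind described maps a crisp input (here, the stereo-estimated distance) through membership functions and a rule base to a crisp command. A toy sketch with assumed memberships, rules, and output speeds (the paper's actual rule base is not given in the abstract):

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_speed(distance):
    """Toy two-rule fuzzy controller (illustrative, assumed parameters).

    Rule 1: IF distance is NEAR THEN speed is SLOW (0.1 m/s)
    Rule 2: IF distance is FAR  THEN speed is FAST (0.5 m/s)
    Defuzzification by weighted average of the rule consequents.
    """
    near = tri(distance, -1.0, 0.0, 1.0)
    far = tri(distance, 0.5, 1.5, 2.5)
    num = near * 0.1 + far * 0.5
    den = near + far
    return num / den if den else 0.0
```

A real tracker would add a second input (the target's lateral offset or velocity) and a rule table over both, but the membership-rule-defuzzify structure is the same.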

Development of an Image Processing Algorithm for Paprika Recognition and Coordinate Information Acquisition using Stereo Vision (스테레오 영상을 이용한 파프리카 인식 및 좌표 정보 획득 영상처리 알고리즘 개발)

  • Hwa, Ji-Ho;Song, Eui-Han;Lee, Min-Young;Lee, Bong-Ki;Lee, Dae-Weon
    • Journal of Bio-Environment Control
    • /
    • v.24 no.3
    • /
    • pp.210-216
    • /
    • 2015
  • The purpose of this study was the development of an image processing algorithm to recognize paprika and acquire its 3D coordinates from stereo images, in order to precisely control the end-effector of an automatic paprika harvester. First, H and S thresholds were set using HSI histogram analysis to extract the ROI (region of interest) from raw paprika cultivation images. Next, the fundamental matrix of the stereo camera system was calculated to perform matching between the extracted ROIs of corresponding images. Epipolar lines were acquired using the F matrix, and an 11×11 mask was used to compare pixels along the lines. Distances between extracted corresponding points were calibrated using the 3D coordinates of a calibration board. Nonlinear regression analysis was used to model the relation between the pixel disparity of corresponding points and depth (Z). Finally, the program could calculate the horizontal (X) and vertical (Y) coordinates using the stereo camera's geometry. The average error of the horizontal coordinate was 5.3 mm, of the vertical coordinate 18.8 mm, and of the depth 5.4 mm. Most of the error occurred at depths of 400~450 mm and in distorted regions of the image.
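The disparity-to-depth relation that the study fits by nonlinear regression approximates the ideal rectified pinhole model, which can be sketched directly (focal length, baseline, and principal point values below are illustrative, not the paper's):

```python
def stereo_point(f, baseline, cx, cy, xl, xr, y):
    """Recover (X, Y, Z) for a matched stereo pair (ideal pinhole sketch).

    For rectified cameras with focal length f (in pixels) and baseline b,
    the disparity d = xl - xr gives depth Z = f * b / d; X and Y then
    follow from the left camera's geometry. This closed form is the
    ideal model that an empirical disparity-depth regression approximates.
    """
    d = xl - xr
    if d <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    Z = f * baseline / d
    X = (xl - cx) * Z / f
    Y = (y - cy) * Z / f
    return (X, Y, Z)
```

Note the 1/d dependence: at larger depths (smaller disparities) a one-pixel matching error causes a much larger depth error, consistent with the study finding most error at the 400~450 mm range.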

Robust Real-Time Visual Odometry Estimation for 3D Scene Reconstruction (3차원 장면 복원을 위한 강건한 실시간 시각 주행 거리 측정)

  • Kim, Joo-Hee;Kim, In-Cheol
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.4 no.4
    • /
    • pp.187-194
    • /
    • 2015
  • In this paper, we present an effective visual odometry estimation system to track the real-time pose of a camera moving in 3D space. To meet the real-time requirement while making full use of the rich information in color and depth images, our system adopts a feature-based sparse odometry estimation method. After matching features extracted across image frames, it repeats both additional inlier set refinement and motion refinement to obtain a more accurate estimate of camera odometry. Moreover, even when the remaining inlier set is insufficient, our system computes the final odometry estimate in proportion to the size of the inlier set, which greatly improves the tracking success rate. Through experiments with the TUM benchmark datasets and implementation of a 3D scene reconstruction application, we confirmed the high performance of the proposed visual odometry estimation method.

A study on the fabrication of Y-branch for optical power distribution and its coupling properties with optical fiber (광분배를 위한 Y-branch 제작과 광파이버와의 결합특성에 관한 연구)

  • 김상덕;박수봉;윤중현;이재규;김종빈
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.21 no.12
    • /
    • pp.3277-3285
    • /
    • 1996
  • In this paper, we designed an optical power distribution device for application to optical switching and optical subscriber loops. We fabricated PSG thin film by LPCVD. Based on the measured index of the fabricated thin film, the rib-type waveguide was reduced to a two-dimensional structure by the effective index method, and we simulated its dispersion property to find the single-mode condition. We found that the optimum design parameters of the rib-type waveguide are: a cladding layer of 3 μm, a core layer of 3 μm, a buffer layer of 10 μm, and a core width of 4 μm. Each side of the guiding region was etched down to 4 μm to shape the core. Using these optimum parameters with a branching angle of 0.5°, we simulated the Y-branch waveguide by BPM simulation. The numerical loss in the branching area was calculated to be 0.1581 dB, equal to the total loss of the Y-branch. The loss of the Y-branch waveguide fabricated on PSG film was 1.6 dB at λ = 1.3 μm before annealing, and 1.2 dB after annealing at 1000°C for 10 minutes. Consequently, the loss of the branching area from 3000 μm to 6000 μm in the z-direction was 0.8 dB, and single-mode propagation was confirmed by measuring the near-field pattern. For coupling the fabricated Y-branch waveguide with an optical fiber, we fabricated a V-groove to be used as the holder of the optical fiber. The etching angle was 54°, and the width and depth of the guiding groove were 150 μm and 70 μm, respectively. The optical fiber was inserted into the V-groove, and the Y-branch and V-groove were connected through index matching oil. The coupling loss after connecting the Y-branch and the optical fiber on the V-groove was 0.34 dB, and that after injecting the index matching oil was 0.14 dB.

  • PDF

Markerless camera pose estimation framework utilizing construction material with standardized specification

  • Harim Kim;Heejae Ahn;Sebeen Yoon;Taehoon Kim;Thomas H.-K. Kang;Young K. Ju;Minju Kim;Hunhee Cho
    • Computers and Concrete
    • /
    • v.33 no.5
    • /
    • pp.535-544
    • /
    • 2024
  • In the rapidly advancing landscape of computer vision (CV) technology, there is burgeoning interest in its integration with the construction industry. Camera calibration is the process of deriving the intrinsic and extrinsic parameters that determine how 3D real-world coordinates are projected onto the 2D image plane, where the intrinsic parameters are internal factors of the camera and the extrinsic parameters are external factors such as the camera's position and rotation. Camera pose estimation, or extrinsic calibration, which estimates the extrinsic parameters, provides essential information for CV applications in construction, since it can be used for indoor navigation of construction robots and for field monitoring by restoring depth information. Traditionally, camera pose estimation methods relied on target objects such as markers or patterns. However, these marker- or pattern-based methods are often time-consuming because a target object must be installed for estimation. As a solution to this challenge, this study introduces a novel framework that facilitates camera pose estimation using standardized materials commonly found at construction sites, such as concrete forms. The proposed framework obtains 3D real-world coordinates by referring to construction materials with certain specifications, extracts the corresponding 2D image-plane coordinates through keypoint detection, and derives the camera's coordinates through the perspective-n-point (PnP) method, which computes the extrinsic parameters by matching 3D-2D coordinate pairs. This framework presents a substantial advancement, as it streamlines the extrinsic calibration process and thereby potentially enhances the efficiency of CV technology application and data collection at construction sites. This approach holds promise for expediting and optimizing various construction-related tasks by automating and simplifying the calibration procedure.
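The intrinsic/extrinsic split described above corresponds to the pinhole projection model, which is the forward model that PnP inverts: given known 3D points (e.g. corners of a concrete form with standard dimensions) and their detected 2D keypoints, PnP solves for the R, t that make this projection agree with the detections. A minimal sketch with illustrative intrinsics:

```python
def project(K, R, t, Xw):
    """Project a 3D world point to pixel coordinates (pinhole model sketch).

    K (3x3 intrinsics), R (3x3 rotation) are row-major nested lists;
    t (translation) and Xw (world point) are length-3 lists.
    The extrinsics (R, t) move the point into the camera frame, and
    the intrinsics K map camera-frame rays to pixels.
    """
    # camera-frame coordinates: Xc = R @ Xw + t
    Xc = [sum(R[i][k] * Xw[k] for k in range(3)) + t[i] for i in range(3)]
    # homogeneous pixel coordinates: p = K @ Xc
    p = [sum(K[i][k] * Xc[k] for k in range(3)) for i in range(3)]
    return (p[0] / p[2], p[1] / p[2])
```

A PnP solver minimizes the reprojection error between `project(K, R, t, Xw_i)` and each detected keypoint over R and t, with K held fixed from a prior intrinsic calibration.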

Integrated Geospatial Information Construction of Ocean and Terrain Using Multibeam Echo Sounder Data and Airborne Lidar Data (항공 Lidar와 멀티빔 음향측심 자료를 이용한 해상과 육상의 통합 지형공간정보 구축)

  • Lee, Jae-One;Choi, Hye-Won;Yun, Bu-Yeol;Park, Chi-Young
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.17 no.4
    • /
    • pp.28-39
    • /
    • 2014
  • Several studies have been performed globally on the construction of integrated systems for the combined use of 3D geographic information on terrain and oceans. Research on 3D geographic modeling is also facilitated by Lidar surveying, which enables the highly accurate realization of 3D geographic information over wide land areas. In addition, a few marine research organizations have been investigating and surveying diverse ocean information for building and applying MGIS (Marine Geographic Information Systems). However, the construction of integrated geographic information systems for both terrain and oceans has certain limitations resulting from the inconsistency in reference systems and datum levels between the two datasets. Therefore, in this investigation, integrated geospatial information was realized by producing a combined topographical map after matching the reference systems and datum levels of airborne Lidar data and multibeam echo sounder data. To verify the accuracy of the integrated geospatial data, ten randomly selected samples from the study areas were analyzed. The results show that the ten data samples have an RMSE of 0.46 m, which meets the IHO standard (0.5 m) for depth accuracy in hydrographic surveys.

Stereo-based Robust Human Detection on Pose Variation Using Multiple Oriented 2D Elliptical Filters (방향성 2차원 타원형 필터를 이용한 스테레오 기반 포즈에 강인한 사람 검출)

  • Cho, Sang-Ho;Kim, Tae-Wan;Kim, Dae-Jin
    • Journal of KIISE:Software and Applications
    • /
    • v.35 no.10
    • /
    • pp.600-607
    • /
    • 2008
  • This paper proposes a human detection method that is robust to pose variation, using multiple oriented 2D elliptical filters (MO2DEFs). Unlike the existing object-oriented scale-adaptive filter (OOSAF), the MO2DEFs can detect humans regardless of their poses. To overcome OOSAF's limitation, we introduce the MO2DEFs, whose shapes are oriented ellipses. We perform human detection by applying four 2D elliptical filters with specific orientations to the 2D spatial-depth histogram and then thresholding the filtered histograms. In addition, we determine the human pose using the convolution results computed with the MO2DEFs. We verify the human candidates by either detecting the face or matching head-shoulder shapes over the estimated rotation. The experimental results showed that the accuracy of pose angle estimation was about 88%, and that human detection using the MO2DEFs outperformed that using the OOSAF by 15~20%, especially for posed humans.
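The oriented elliptical filters applied to the spatial-depth histogram can be sketched as rotated elliptical masks. A minimal construction with assumed sizes (the paper's filter dimensions and four orientations are not specified in the abstract):

```python
import math

def elliptical_mask(w, h, a, b, theta):
    """Boolean mask of an ellipse with semi-axes a, b, rotated by theta.

    Rotating the coordinate frame by theta before the ellipse test
    yields one oriented filter; evaluating four such masks at different
    theta values and convolving each with the 2D spatial-depth histogram
    gives orientation-specific responses, as in the MO2DEF idea.
    """
    cx, cy = w / 2, h / 2
    mask = []
    for y in range(h):
        row = []
        for x in range(w):
            dx, dy = x - cx, y - cy
            # rotate into the ellipse's own axes
            u = dx * math.cos(theta) + dy * math.sin(theta)
            v = -dx * math.sin(theta) + dy * math.cos(theta)
            row.append((u / a) ** 2 + (v / b) ** 2 <= 1.0)
        mask.append(row)
    return mask
```

Thresholding the filtered histogram then yields candidate human regions, with the best-responding orientation serving as a pose-angle estimate.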