• Title/Summary/Keyword: Depth Information Matching

Search Results: 181

Spatial Gap-Filling of Hourly AOD Data from Himawari-8 Satellite Using DCT (Discrete Cosine Transform) and FMM (Fast Marching Method)

  • Youn, Youjeong; Kim, Seoyeon; Jeong, Yemin; Cho, Subin; Kang, Jonggu; Kim, Geunah; Lee, Yangwon
    • Korean Journal of Remote Sensing / v.37 no.4 / pp.777-788 / 2021
  • Since aerosol has a relatively short duration and significant spatial variation, satellite observations become more important for the spatially and temporally continuous quantification of aerosol. However, optical remote sensing cannot retrieve AOD (Aerosol Optical Depth) for regions covered by clouds or regions with extremely high concentrations. Such missing values can increase data uncertainty in analyses of the Earth's environment. This paper presents a spatial gap-filling framework using two univariate statistical methods: DCT-PLS (Discrete Cosine Transform-based Penalized Least Square regression) and FMM (Fast Marching Method) inpainting. We conducted a feasibility test for the hourly AOD product from AHI (Advanced Himawari Imager) between January 1 and December 31, 2019, and compared the accuracy statistics of the two spatial gap-filling methods. When the null-pixel area was not very large (null-pixel ratio < 0.6), the validation statistics of the DCT-PLS and FMM techniques showed high accuracy, with CC=0.988 (MAE=0.020) and CC=0.980 (MAE=0.028), respectively. Together with AI-based gap-filling methods that use extra explanatory variables, the DCT-PLS and FMM techniques can soon be tested on the low-resolution images from the AMI (Advanced Meteorological Imager) of GK2A (Geostationary Korea Multi-purpose Satellite 2A), the GEMS (Geostationary Environment Monitoring Spectrometer) and GOCI2 (Geostationary Ocean Color Imager) of GK2B (Geostationary Korea Multi-purpose Satellite 2B), and the high-resolution images from the CAS500 (Compact Advanced Satellite) series.
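
As a rough illustration of the FMM side of this framework, the sketch below fills missing pixels of a 2D AOD array with OpenCV's Fast-Marching-based inpainting (cv2.INPAINT_TELEA). It is not the authors' implementation: the array names, the 8-bit rescaling, and the inpainting radius are assumptions.

```python
import numpy as np
import cv2

def fill_gaps_fmm(aod, radius=3):
    """Fill NaN pixels of a 2D AOD field by Fast-Marching-Method inpainting."""
    missing = np.isnan(aod)
    mask = missing.astype(np.uint8)                 # 1 where AOD is missing
    valid = aod[~missing]
    lo, hi = float(valid.min()), float(valid.max())
    # cv2.inpaint expects an 8-bit single-channel image, so rescale AOD to 0-255.
    scaled = np.zeros(aod.shape, dtype=np.uint8)
    scaled[~missing] = np.round(255.0 * (valid - lo) / (hi - lo + 1e-9))
    filled = cv2.inpaint(scaled, mask, radius, cv2.INPAINT_TELEA)
    return lo + filled.astype(np.float32) / 255.0 * (hi - lo)

# Example: a synthetic AOD field with a cloud-shaped hole
aod = np.random.uniform(0.05, 0.5, (200, 200)).astype(np.float32)
aod[80:120, 90:140] = np.nan
print(fill_gaps_fmm(aod)[100, 115])                 # value recovered inside the hole
```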

Distinction of Real Face and Photo using Stereo Vision (스테레오비전을 이용한 실물 얼굴과 사진의 구분)

  • Shin, Jin-Seob; Kim, Hyun-Jung; Won, Il-Yong
    • Journal of the Korea Society of Computer and Information / v.19 no.7 / pp.17-25 / 2014
  • For devices that leave video records, it is important to distinguish whether the input image shows a real object or a photograph when securing an identifying image. Simple approaches that use a single image and a distance sensor have many weaknesses. This paper therefore proposes a way to distinguish a flat photograph from a real object by using stereo images. The method not only measures the distance to the target but also checks for a three-dimensional effect by building a depth map of the face area. Stereo pictures of both photographs and real faces are taken, and the measured depth-map values are fed into a learning algorithm, which finds through iterative learning the patterns that separate real faces from photographs. The usefulness of the proposed algorithm was verified experimentally.
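
The core check described above can be sketched as follows: compute a disparity map of the face region from a rectified stereo pair and test whether it shows genuine 3D relief. The use of OpenCV's StereoBM and the flatness threshold are illustrative assumptions, not the paper's learning-based classifier.

```python
import cv2
import numpy as np

def looks_three_dimensional(left_gray, right_gray, face_rect, flat_thresh=4.0):
    x, y, w, h = face_rect                        # face region from a detector
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0
    face_disp = disparity[y:y + h, x:x + w]
    face_disp = face_disp[face_disp > 0]          # keep valid disparities only
    if face_disp.size == 0:
        return False
    # A printed photo is (nearly) planar, so its disparity spread is small;
    # a real face shows noticeably larger variation (nose vs. cheeks, etc.).
    return np.std(face_disp) > flat_thresh
```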

Location Estimation for Multiple Targets Using Tree Search Algorithms under Cooperative Surveillance of Multiple Robots (다중로봇 협업감시 시스템에서 트리 탐색 기법을 활용한 다중표적 위치 좌표 추정)

  • Park, So Ryoung; Noh, Sanguk
    • The Journal of Korean Institute of Communications and Information Sciences / v.38A no.9 / pp.782-791 / 2013
  • This paper proposes location estimation techniques for distributed targets using multi-sensor data perceived through the IR sensors of military robots. In order to match targets with the measured azimuths, we apply maximum likelihood (ML), depth-first, and breadth-first tree search algorithms, in which the measured azimuths and the number of pixels on the IR screen are used for pruning branches and selecting candidates. After matching targets with azimuths, we estimate the coordinates of each target by obtaining the intersection point of the azimuths with a least square error (LSE) algorithm. The experimental results report the probability of missing a target, the mean number of calculated nodes, and the mean error of the estimated coordinates for the proposed algorithms.
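
The coordinate-estimation step (after azimuths have been matched to targets) can be illustrated with a small least-squares bearing intersection. The ML/depth-first/breadth-first matching and the pixel-count pruning rule are not reproduced here, and the robot positions below are made-up values.

```python
import numpy as np

def lse_target_position(sensor_positions, azimuths):
    """Least-squares intersection point of 2D bearing lines, one per sensor."""
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, theta in zip(sensor_positions, azimuths):
        d = np.array([np.cos(theta), np.sin(theta)])    # bearing direction
        P = np.eye(2) - np.outer(d, d)                  # projector onto the line normal
        A += P
        b += P @ np.asarray(p, dtype=float)
    return np.linalg.solve(A, b)                        # point minimizing squared distances

# Three robots at known positions observe the same target along noisy bearings.
robots = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
bearings = [np.deg2rad(45.0), np.deg2rad(135.5), np.deg2rad(-44.5)]
print(lse_target_position(robots, bearings))            # close to (5, 5)
```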

A Fast and Accurate Face Detection and Tracking Method by using Depth Information (깊이정보를 이용한 고속 고정밀 얼굴검출 및 추적 방법)

  • Bae, Yun-Jin; Choi, Hyun-Jun; Seo, Young-Ho; Kim, Dong-Wook
    • The Journal of Korean Institute of Communications and Information Sciences / v.37 no.7A / pp.586-599 / 2012
  • This paper proposes a fast face detection and tracking method that uses depth images as well as RGB images. It consists of a face detection procedure and a face tracking procedure. The face detection method is based on an existing method, Adaboost, but reduces the size of the search area by using the depth image. The proposed face tracking method uses template matching and incorporates an early-termination scheme to further reduce the execution time. Implementation and experiments showed that the proposed face detection method takes only about 39% of the execution time of the existing method, and the proposed tracking method takes only 2.48 ms per frame at 640×480 resolution. Regarding accuracy, the proposed detection method showed a slightly lower detection ratio, but its error ratio (the cases in which a detected region is not really a face) was only about 38% of that of the previous method. The proposed face tracking method turned out to have a trade-off between execution time and accuracy; in all cases except one special case, the tracking error ratio is as low as about 1%. Therefore, we expect the proposed face detection and tracking methods can be used individually or in combination for many applications that need fast execution and exact detection or tracking.
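
A rough sketch of the two speed-ups mentioned in the abstract, under assumed depth thresholds and with a stock OpenCV Haar cascade standing in for the Adaboost detector: the depth image first bounds the search region, and template matching abandons a candidate position as soon as its running SAD exceeds the best score found so far.

```python
import cv2
import numpy as np

def detect_face_in_depth_roi(bgr, depth_mm, near=400, far=1500):
    # Keep only pixels whose depth suggests a person at close range.
    mask = (depth_mm > near) & (depth_mm < far)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return []
    x0, x1, y0, y1 = xs.min(), xs.max(), ys.min(), ys.max()
    roi = cv2.cvtColor(bgr[y0:y1 + 1, x0:x1 + 1], cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(roi, 1.1, 4)
    return [(x + x0, y + y0, w, h) for (x, y, w, h) in faces]

def track_template_early_stop(frame_gray, template, search_rect):
    """SAD template matching with per-row early termination."""
    x0, y0, w, h = search_rect
    th, tw = template.shape
    best, best_pos = np.inf, None
    for y in range(y0, y0 + h - th + 1):
        for x in range(x0, x0 + w - tw + 1):
            sad = 0.0
            for r in range(th):                   # abort as soon as SAD is too large
                sad += np.abs(frame_gray[y + r, x:x + tw].astype(np.int32)
                              - template[r].astype(np.int32)).sum()
                if sad >= best:
                    break
            if sad < best:
                best, best_pos = sad, (x, y)
    return best_pos, best
```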

Semantic Image Retrieval Using Color Distribution and Similarity Measurement in WordNet (컬러 분포와 WordNet상의 유사도 측정을 이용한 의미적 이미지 검색)

  • Choi, Jun-Ho; Cho, Mi-Young; Kim, Pan-Koo
    • The KIPS Transactions: Part B / v.11B no.4 / pp.509-516 / 2004
  • Semantic interpretation of an image is incomplete without some mechanism for understanding semantic content that is not directly visible. For this reason, human-assisted content annotation through natural language attaches a textual description to the image. However, keyword-based retrieval remains at the level of syntactic pattern matching: dissimilarity between terms is usually computed by string matching, not concept matching. In this paper, we propose a method for computing semantic similarity in WordNet space. We consider the edges, depth, link type, and density, as well as the existence of common ancestors. We also show how this similarity measure is applied to semantic image retrieval. To combine it with low-level features, we use a spatial color distribution model. When tested on an image set from Microsoft's 'Design Gallery Line', the proposed method outperforms other approaches.
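
For comparison, NLTK's WordNet interface already exposes two edge/depth-based similarity measures (path similarity, and Wu-Palmer, which uses the depth of the least common subsumer). The paper's own measure additionally weighs link type, density, and common ancestors and is not reproduced here; the synsets below are arbitrary examples.

```python
# Requires the WordNet corpus: import nltk; nltk.download('wordnet')
from nltk.corpus import wordnet as wn

cat = wn.synset('cat.n.01')
dog = wn.synset('dog.n.01')
car = wn.synset('car.n.01')

print(cat.path_similarity(dog))   # edge-distance based, roughly 0.2
print(cat.wup_similarity(dog))    # depth of least common subsumer, roughly 0.86
print(cat.path_similarity(car))   # a semantically distant pair scores much lower
```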

Disparity estimation using wavelet transformation and reference points (웨이블릿 변환과 기준점을 이용한 변위 추정)

  • 노윤향; 고병철; 변혜란; 유지상
    • The Journal of Korean Institute of Communications and Information Sciences / v.27 no.2A / pp.137-145 / 2002
  • In 3D modeling, stereo matching obtains three-dimensional depth information from two images taken from different viewpoints. In general, estimating the exact disparity by finding conjugate pixel pairs in the left and right images is essential for 3D modeling from 2D stereo images. To address the problems of stereo disparity estimation, this paper introduces a novel approach that improves the accuracy and efficiency of the estimated disparity. First, we apply a wavelet transform to the stereo images and set reference points in the image using a feature-based matching method. These reference points are matched correctly with a very high probability, over 95%. Based on these reference points, we determine the size of the variable block-search windows for area-based dense disparity estimation and apply an ordering constraint to prevent mismatching. In this way, the disparity can be estimated in a short time, while alleviating the occlusion problems caused by fixed-size windows and the errors caused by repeating patterns.
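
A minimal sketch of disparity estimation on the wavelet approximation band, assuming PyWavelets, a Haar basis, a fixed block size, and SAD matching; the paper's reference points, variable window sizes, and ordering constraint are not reproduced.

```python
import numpy as np
import pywt

def coarse_disparity(left, right, block=8, max_disp=16):
    # Level-1 2D wavelet transform; keep only the approximation (LL) bands.
    ll_l, _ = pywt.dwt2(left.astype(np.float32), 'haar')
    ll_r, _ = pywt.dwt2(right.astype(np.float32), 'haar')
    H, W = ll_l.shape
    disp = np.zeros((H // block, W // block))
    for by in range(H // block):
        for bx in range(W // block):
            y, x = by * block, bx * block
            ref = ll_l[y:y + block, x:x + block]
            best, best_d = np.inf, 0
            for d in range(min(max_disp, x) + 1):     # search along the scanline
                cand = ll_r[y:y + block, x - d:x - d + block]
                cost = np.abs(ref - cand).sum()       # SAD matching cost
                if cost < best:
                    best, best_d = cost, d
            disp[by, bx] = best_d
    return disp    # disparity at half resolution; multiply by 2 for full scale
```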

Catadioptric Omnidirectional Stereo Imaging System and Reconstruction of 3-dimensional Coordinates (Catadioptric 전방향 스테레오 영상시스템 및 3차원 좌표 복원)

  • Kim, Soon-Cheol; Yi, Soo-Yeong
    • Journal of the Korea Academia-Industrial cooperation Society / v.16 no.6 / pp.4108-4114 / 2015
  • Image acquisition using an optical mirror is called the catadioptric method. Catadioptric imaging is generally used to acquire 360-degree omnidirectional visual information in a single image; a typical omnidirectional mirror is the bowl-shaped hyperbolic mirror. In this paper, a single-camera omnidirectional stereo imaging method with an additional concave lens is studied. Three-dimensional coordinates of objects in the environment can be obtained by matching the two views of different viewpoints contained in the omnidirectional stereo image. The omnidirectional stereo imaging system in this paper is cost-effective and relatively easy for correspondence matching because the camera intrinsic parameters are consistent across the stereo image. The parameters of the imaging system are extracted through a three-step calibration, and the performance of 3D coordinate reconstruction is verified through experiments. The measurable range of the proposed imaging system is also presented through a depth-resolution analysis.
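
The closing depth-resolution analysis can be illustrated with the conventional stereo relation Z = fB/d: the depth step corresponding to a one-pixel disparity change grows roughly quadratically with distance, which bounds the measurable range. This is a generic pinhole-stereo sketch, not the paper's catadioptric geometry, and the focal length and baseline are assumed values.

```python
def depth_resolution(Z, focal_px=800.0, baseline_m=0.10):
    """Depth change (m) caused by a one-pixel disparity error at depth Z (m)."""
    d = focal_px * baseline_m / Z                  # disparity (pixels) at depth Z
    return focal_px * baseline_m / (d - 1) - Z     # depth step for one pixel less

for Z in (0.5, 1.0, 2.0, 4.0):
    print(f"at {Z:4.1f} m, a one-pixel disparity error ~ {depth_resolution(Z):.3f} m")
```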

LiDAR Data Interpolation Algorithm for 3D-2D Motion Estimation (3D-2D 모션 추정을 위한 LiDAR 정보 보간 알고리즘)

  • Jeon, Hyun Ho; Ko, Yun Ho
    • Journal of Korea Multimedia Society / v.20 no.12 / pp.1865-1873 / 2017
  • Feature-based visual SLAM requires 3D positions for the extracted feature points to perform 3D-2D motion estimation. LiDAR can provide reliable and accurate 3D position information with low computational burden, whereas a stereo camera suffers from the impossibility of stereo matching in low-texture image regions, inaccurate depth values due to errors in the intrinsic and extrinsic camera parameters, and a limited number of depth values restricted by the permissible stereo disparity. However, the sparsity of LiDAR data may increase the inaccuracy of motion estimation and can even lead to motion estimation failure. Therefore, in this paper, we propose three interpolation methods that can be applied to interpolate sparse LiDAR data. Simulation results obtained by applying these three methods to a visual odometry algorithm demonstrate that selective bilinear interpolation shows better performance in terms of computation speed and accuracy.
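
A brief sketch of densifying sparse projected LiDAR depth before 3D-2D motion estimation. SciPy's triangulation-based linear interpolation is used as a simple stand-in for the paper's selective bilinear interpolation, and the image size and point layout are assumptions.

```python
import numpy as np
from scipy.interpolate import griddata

def densify_lidar_depth(uv, depth, image_shape):
    """uv: (N, 2) pixel coordinates of projected LiDAR points; depth: (N,)."""
    H, W = image_shape
    grid_u, grid_v = np.meshgrid(np.arange(W), np.arange(H))
    dense = griddata(uv, depth, (grid_u, grid_v), method='linear')
    return dense        # NaN outside the convex hull of the LiDAR points

# Example with a few synthetic projected points
uv = np.array([[10, 10], [300, 20], [20, 200], [310, 230], [160, 120]], float)
depth = np.array([5.0, 7.0, 4.5, 6.0, 5.5])
dense = densify_lidar_depth(uv, depth, (240, 320))
print(dense[120, 160])  # interpolated depth near the central point
```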

Convenient View Calibration of Multiple RGB-D Cameras Using a Spherical Object (구형 물체를 이용한 다중 RGB-D 카메라의 간편한 시점보정)

  • Park, Soon-Yong; Choi, Sung-In
    • KIPS Transactions on Software and Data Engineering / v.3 no.8 / pp.309-314 / 2014
  • To generate a complete 3D model from the depth images of multiple RGB-D cameras, it is necessary to find the 3D transformations between the RGB-D cameras. This paper proposes a convenient view calibration technique using a spherical object. Conventional view calibration methods use either planar checkerboards or 3D objects with coded patterns; in these methods, detection and matching of pattern features and codes take significant time. In this paper, we propose a convenient view calibration method that uses the 3D depth and 2D texture images of a spherical object simultaneously. First, while the spherical object is moved freely in the modeling space, depth and texture images of the object are acquired from all RGB-D cameras simultaneously. Then, the external parameters of each RGB-D camera are calibrated so that the coordinates of the sphere center coincide in the world coordinate system.
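
Two hedged building blocks for this idea are sketched below: a least-squares sphere-center fit from one camera's depth points on the ball, and a Kabsch/Procrustes estimate of the rigid transform aligning two cameras' sequences of sphere centers. The variable layout and the choice of these particular solvers are assumptions, not the paper's exact calibration procedure.

```python
import numpy as np

def fit_sphere_center(points):
    """Least-squares sphere fit; returns the center from surface points (N, 3)."""
    # ||x||^2 = 2 c.x + (r^2 - ||c||^2) is linear in [c, k].
    A = np.hstack([2 * points, np.ones((len(points), 1))])
    b = (points ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:3]

def rigid_transform(src_centers, dst_centers):
    """R, t such that R @ src + t ~= dst, for matched center trajectories (N, 3)."""
    src_mean, dst_mean = src_centers.mean(0), dst_centers.mean(0)
    H = (src_centers - src_mean).T @ (dst_centers - dst_mean)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                        # fix improper rotation (reflection)
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_mean - R @ src_mean
    return R, t
```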

A Fast and Accurate Face Detection and Tracking Method by using Depth Information and color information (깊이정보와 컬러정보를 이용한 고속 고정밀 얼굴검출 및 추적 방법)

  • Kim, Woo-Youl; Seo, Young-Ho; Kim, Dong-Wook
    • Journal of the Korea Institute of Information and Communication Engineering / v.16 no.9 / pp.1825-1838 / 2012
  • This paper proposes a fast face detection and tracking method that uses depth images as well as RGB images. It consists of a face detection procedure and a face tracking procedure. The face detection method is based on an existing method, Adaboost, but reduces the size of the search area by using the depth information and skin color. The proposed face tracking method uses template matching and incorporates an early-termination scheme to further reduce the execution time. Implementation and experiments showed that the proposed face detection method takes only about 39% of the execution time of the existing method, and the proposed tracking method takes only 2.48 ms per frame. Regarding accuracy, the proposed detection method showed the same detection ratio as the previous method, while its error ratio of about 0.66% is a considerable improvement. In all cases except one special case, the tracking error ratio is as low as about 1%. Therefore, we expect the proposed face detection and tracking methods can be used individually or in combination for many applications that need fast execution and exact detection or tracking.
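
The extra skin-color constraint can be sketched as a YCrCb mask intersected with the depth mask, shrinking the Adaboost search area further than depth alone. The Cr/Cb thresholds and the morphological cleanup below are commonly used illustrative values, not the ones reported in the paper.

```python
import cv2
import numpy as np

def skin_and_depth_mask(bgr, depth_mm, near=400, far=1500):
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))   # common Cr/Cb skin band
    depth_ok = ((depth_mm > near) & (depth_mm < far)).astype(np.uint8) * 255
    mask = cv2.bitwise_and(skin, depth_ok)
    # Remove small speckles so the remaining region gives a tight search area.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```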