• Title/Summary/Keyword: Vision-based Position Estimation (영상기반 위치 추정)

Application of Remote Sensing Techniques to Survey and Estimate the Standing-Stock of Floating Debris in the Upper Daecheong Lake (원격탐사 기법 적용을 통한 대청호 상류 유입 부유쓰레기 조사 및 현존량 추정 연구)

  • Youngmin Kim;Seon Woong Jang;Heung-Min Kim;Tak-Young Kim;Suho Bak
    • Korean Journal of Remote Sensing / v.39 no.5_1 / pp.589-597 / 2023
  • Floating debris that flows in from land in large quantities during heavy rainfall has adverse social, economic, and environmental impacts, but monitoring of where it concentrates and how much accumulates is insufficient. In this study, we proposed an efficient monitoring method for floating debris entering the river during heavy rainfall in Daecheong Lake, the largest water supply source in the central region, and applied remote sensing techniques to estimate the standing-stock of floating debris. To investigate the status of floating debris in the upper reaches of Daecheong Lake, we used tracking buoys equipped with low-orbit satellite communication terminals to identify movement routes and behavior characteristics, and used a drone to estimate the potential concentration areas and standing-stock of floating debris. The location-tracking buoys moved rapidly during periods when the 3-day cumulative rainfall increased by more than 200 to 300 mm. The buoy released at Hotan Bridge, which traveled the longest distance, moved about 72.8 km in one day, with a maximum speed of 5.71 km/h. The standing-stock of floating debris calculated from drone surveys after heavy rainfall was 658.8 to 9,165.4 tons, with the largest amount occurring in the Seokhori area. In this study, we were able to identify the main concentration areas of floating debris by using location-tracking buoys and drones. Remote sensing-based monitoring methods, which are more mobile and quicker than traditional monitoring methods, are expected to contribute to reducing the cost of collecting and processing the large amounts of floating debris that flow in during future heavy rainfall events.
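
As a rough illustration of how buoy tracks can be turned into travel distances and speeds like the 72.8 km/day and 5.71 km/h figures above, the following sketch computes great-circle distances between hypothetical GPS fixes; the timestamps and coordinates are invented, not the study's data.

```python
# Hypothetical sketch: distance and speed from the GPS fixes of a tracking buoy.
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS-84 points in kilometres."""
    r = 6371.0  # mean Earth radius, km
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * r * asin(sqrt(a))

# (timestamp, latitude, longitude) fixes downlinked over the satellite terminal (made up)
fixes = [
    (datetime(2023, 7, 14, 0, 0), 36.475, 127.570),
    (datetime(2023, 7, 14, 6, 0), 36.512, 127.611),
    (datetime(2023, 7, 14, 12, 0), 36.548, 127.655),
]

total_km, max_speed = 0.0, 0.0
for (t0, la0, lo0), (t1, la1, lo1) in zip(fixes, fixes[1:]):
    d = haversine_km(la0, lo0, la1, lo1)
    hours = (t1 - t0).total_seconds() / 3600.0
    total_km += d
    max_speed = max(max_speed, d / hours)

print(f"distance: {total_km:.1f} km, max speed: {max_speed:.2f} km/h")
```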

Refinement of Building Boundary using Airborne LiDAR and Airphoto (항공 LiDAR와 항공사진을 이용한 건물 경계 정교화)

  • Kim, Hyung-Tae;Han, Dong-Yeob
    • Journal of the Korean Association of Geographic Information Studies / v.11 no.3 / pp.136-150 / 2008
  • Many studies have been carried out on the automatic extraction of buildings from LiDAR data or airphotos. Combining the 3D positional information of LiDAR with the shape information of imagery can improve accuracy. In this research, a contour-based building recognition algorithm was therefore used to improve the accuracy of building recognition from LiDAR data, and the building boundary was refined using the airphoto. The contour-based building recognition algorithm can generate both the building boundary and roof structure information, and it detects buildings more accurately than existing recognition methods based on TIN or nDSM. By creating fixed-size buffers around the building boundary estimated from the contours, this research limits the search area in the airphoto and refines the building boundary to fit the airphoto edges using a double active contour. Based on these results, 3D building boundaries should be detectable in the future by optimal matching within the constrained range of the extracted boundary.
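
The boundary refinement step could look roughly like the following sketch, which snaps an initial LiDAR-derived outline to airphoto edges with a single active contour from scikit-image; the file name, the stand-in circular initial contour, and the snake parameters are assumptions, and the paper's own double active contour and buffer construction are not reproduced here.

```python
# Hedged sketch: refine a LiDAR-derived building outline against an airphoto.
import numpy as np
from skimage import io, color, filters
from skimage.segmentation import active_contour

airphoto = color.rgb2gray(io.imread("airphoto_tile.png"))   # hypothetical RGB tile
smoothed = filters.gaussian(airphoto, sigma=2)               # suppress fine texture

# Stand-in initial boundary from the LiDAR contour: a circle of (row, col) vertices
s = np.linspace(0, 2 * np.pi, 200)
init = np.column_stack([100 + 60 * np.sin(s), 120 + 60 * np.cos(s)])

# The snake is pulled toward airphoto edges (w_edge) while staying smooth (alpha, beta),
# loosely playing the role of searching within a buffer around the LiDAR boundary.
refined = active_contour(smoothed, init, alpha=0.015, beta=10.0,
                         w_line=0, w_edge=1.0, gamma=0.001)
print(refined.shape)   # (200, 2) refined boundary vertices in (row, col)
```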

Optimal Facial Emotion Feature Analysis Method based on ASM-LK Optical Flow (ASM-LK Optical Flow 기반 최적 얼굴정서 특징분석 기법)

  • Ko, Kwang-Eun;Park, Seung-Min;Park, Jun-Heong;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.21 no.4 / pp.512-517 / 2011
  • In this paper, we propose an Active Shape Model (ASM) and Lucas-Kanade (LK) optical flow-based feature extraction and analysis method for analyzing emotional features in facial images. Since the facial emotion feature regions are described by the Facial Action Coding System, we construct feature-related shape models from combinations of landmarks and extract the LK optical flow vectors at each landmark using a motion vector window centred on the landmark pixel. The facial emotion features are modelled as combinations of the optical flow vectors, and the emotional state of a facial image can be estimated by a probabilistic estimation technique such as a Bayesian classifier. We also extract the optimal emotional features, i.e., those showing high correlation between feature points and emotional states, by using common spatial pattern (CSP) analysis in order to improve the efficiency and accuracy of the emotional feature extraction process.
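
The landmark-wise LK flow extraction might be sketched as follows with OpenCV's pyramidal Lucas-Kanade tracker; the frame file names, landmark coordinates, and window size are placeholders rather than the paper's ASM output or tuned settings.

```python
# Hedged sketch: track facial landmarks between two frames with pyramidal LK flow.
import cv2
import numpy as np

prev = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)   # hypothetical frames
curr = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)

# ASM landmarks (x, y) on the previous frame, shaped (N, 1, 2) float32 for OpenCV
landmarks = np.array([[120, 80], [140, 82], [130, 110], [125, 150]],
                     dtype=np.float32).reshape(-1, 1, 2)

# Flow vectors are evaluated in a window centred on each landmark pixel
new_pts, status, err = cv2.calcOpticalFlowPyrLK(
    prev, curr, landmarks, None, winSize=(15, 15), maxLevel=2)

flow = (new_pts - landmarks).reshape(-1, 2)     # per-landmark motion vectors
flow = flow[status.ravel() == 1]                # keep successfully tracked points
print(flow)  # feature vectors that would feed the Bayesian classifier / CSP step
```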

Gaussian Noise Reduction Algorithm using Self-similarity (자기 유사성을 이용한 가우시안 노이즈 제거 알고리즘)

  • Jeon, Yougn-Eun;Eom, Min-Young;Choe, Yoon-Sik
    • Journal of the Institute of Electronics Engineers of Korea SP / v.44 no.5 / pp.1-10 / 2007
  • Most natural images have a special property, known as self-similarity, which is the basis of fractal image coding. Even though an image has local stationarity in several homogeneous regions, it is generally a non-stationary signal, especially in edge regions. This is the main reason that linear techniques give poor results. To overcome this difficulty, we propose a non-linear technique that uses self-similarity in the image. In our work, an image is classified into stationary and non-stationary regions according to sample variance. In a stationary region, de-noising is performed by simply averaging the neighborhood. If the region is non-stationary, it is first stationarized by building a set of centre pixels through similarity matching with respect to the bMSE (block Mean Square Error); de-noising is then performed by Gaussian-weighted averaging of the centre pixels of the similar blocks, because this set of centre pixels can be regarded as nearly stationary. The true image value is estimated as a weighted average of the elements of the set. The experimental results show that, as an estimator, our method has better performance and smaller variance than other methods.
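
A minimal sketch of the idea, assuming illustrative block sizes, search ranges, and thresholds rather than the paper's tuned values, is shown below: pixels in low-variance regions are smoothed by plain averaging, while pixels in high-variance regions are estimated from a Gaussian-weighted average of the centre pixels of blocks matched by block MSE.

```python
# Hedged sketch of variance-based classification plus block-matched averaging.
import numpy as np

def denoise_self_similarity(img, block=7, search=10, var_thresh=25.0, sigma=10.0):
    h = block // 2
    pad = np.pad(img.astype(float), h + search, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            cy, cx = y + h + search, x + h + search
            ref = pad[cy - h:cy + h + 1, cx - h:cx + h + 1]
            if ref.var() < var_thresh:                      # stationary: plain mean
                out[y, x] = ref.mean()
                continue
            centres, weights = [], []
            for dy in range(-search, search + 1, 2):        # non-stationary: block matching
                for dx in range(-search, search + 1, 2):
                    cand = pad[cy + dy - h:cy + dy + h + 1, cx + dx - h:cx + dx + h + 1]
                    bmse = np.mean((ref - cand) ** 2)        # block Mean Square Error
                    centres.append(pad[cy + dy, cx + dx])
                    weights.append(np.exp(-bmse / (2 * sigma ** 2)))  # Gaussian weight
            out[y, x] = np.average(centres, weights=weights)
    return out
```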

Vision Based Estimation of 3-D Position of Target for Target Following Guidance/Control of UAV (무인 항공기의 목표물 추적을 위한 영상 기반 목표물 위치 추정)

  • Kim, Jong-Hun;Lee, Dae-Woo;Cho, Kyeum-Rae;Jo, Seon-Yeong;Kim, Jung-Ho;Han, Dong-In
    • Journal of Institute of Control, Robotics and Systems / v.14 no.12 / pp.1205-1211 / 2008
  • This paper describes methods to estimate the 3-D position of a target with respect to a reference frame from the monocular image of an unmanned aerial vehicle (UAV). The 3-D position of the target is used as information for surveillance, recognition, and attack. In this paper, the 3-D position of the target is estimated in order to build guidance and control laws that can follow a target of interest to the user. To solve for the 3-D position of the target, its position must first be measured in the image; a Kalman filter is used to track the target and output its position in the image. The 3-D position of the target can then be estimated from the image tracking result and the UAV and camera information. Two algorithms are used for this estimation: one is an arithmetic derivation of the geometry between the UAV, camera, and target, and the other is LPV (Linear Parameter Varying). Both methods were run in simulation and are compared in this paper.
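
The arithmetic (non-LPV) route can be illustrated by intersecting the back-projected pixel ray with a flat ground plane, as in the sketch below; the intrinsics, camera mounting, attitude angles, and UAV position are made-up example values, and the paper's exact derivation may differ.

```python
# Hedged sketch: pixel ray intersected with a flat ground plane in NED coordinates.
import numpy as np

def rotation_zyx(yaw, pitch, roll):
    """Body-to-NED rotation matrix from Z-Y-X (yaw-pitch-roll) Euler angles in radians."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    return np.array([[cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
                     [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
                     [-sp,     cp * sr,                cp * cr]])

K = np.array([[800.0, 0.0, 320.0],          # hypothetical pinhole intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
uav_pos = np.array([0.0, 0.0, -120.0])      # UAV position in NED, 120 m above flat ground
R_nb = rotation_zyx(np.radians(10.0), np.radians(-20.0), 0.0)   # assumed UAV attitude
R_bc = np.array([[0.0, 0.0, 1.0],           # camera axes (x right, y down, z forward)
                 [1.0, 0.0, 0.0],           # mapped to body (x forward, y right, z down),
                 [0.0, 1.0, 0.0]])          # assuming a forward-looking camera

u, v = 350.0, 260.0                                  # pixel from the Kalman image tracker
ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # viewing ray in the camera frame
ray_ned = R_nb @ (R_bc @ ray_cam)                    # same ray in the reference (NED) frame

t = -uav_pos[2] / ray_ned[2]                         # scale the ray to reach the ground (z = 0)
target_ned = uav_pos + t * ray_ned
print(target_ned)                                    # estimated 3-D target position
```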

Localization Using 3D-Lidar Based Road Reflectivity Map and IPM Image (3D-Lidar 기반 도로 반사도 지도와 IPM 영상을 이용한 위치추정)

  • Jung, Tae-Ki;Song, Jong-Hwa;Im, Jun-Hyuck;Lee, Byung-Hyun;Jee, Gyu-In
    • Journal of Institute of Control, Robotics and Systems / v.22 no.12 / pp.1061-1067 / 2016
  • The position of the vehicle is essential for autonomous driving. In downtown areas, however, GPS position errors arise due to multipath caused by tall buildings. In this paper, the GPS position error is corrected by using a camera sensor and a highly accurate map built with a 3D-Lidar. The input image is converted into a top-view image through inverse perspective mapping and is map-matched against the map containing the 3D-Lidar intensity. The performance of this method was compared with the traditional approach, which converts the map into a pinhole-camera image and performs map matching against the input image. As a result, the longitudinal error declined by 49% and the computational complexity by 90%.
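
The inverse perspective mapping step can be sketched with a ground-plane homography as below; the four point correspondences, the output scale, and the file names are placeholders, since the paper derives the transform from the calibrated camera geometry.

```python
# Hedged sketch: warp a front-camera frame to a top-view image (IPM) with OpenCV.
import cv2
import numpy as np

frame = cv2.imread("front_camera.png")            # hypothetical input frame

# Four points on the road surface in the image (e.g. lane-marking corners) ...
src = np.float32([[480, 420], [800, 420], [1100, 700], [180, 700]])
# ... and where they should land in the top-view (assumed 10 px per metre grid)
dst = np.float32([[300, 0], [500, 0], [500, 600], [300, 600]])

H = cv2.getPerspectiveTransform(src, dst)         # ground-plane homography
top_view = cv2.warpPerspective(frame, H, (800, 600))

# top_view can then be matched against the 3D-Lidar intensity map rendered at the
# same metres-per-pixel resolution (e.g. by correlation over candidate offsets).
cv2.imwrite("ipm_top_view.png", top_view)
```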

A Study on Automatic Crack Detection Process for Wall-Climbing Robot based on Vacuum Absorption Method (진공흡착방식 기반의 벽면 이동로봇을 위한 자동 균열검출 프로세스에 관한 연구)

  • Park, Jae-Min;Shin, Dong-Ho;Kim, Hyun-Seop;Kim, Hyung-Hoon;Kim, Sang-Hoon
    • Proceedings of the Korea Information Processing Society Conference / 2019.10a / pp.1034-1037 / 2019
  • This paper studies the configuration of a wall-climbing robot that uses vacuum absorption and wheeled locomotion, together with the crack detection and processing pipeline running on the robot. To implement machine-learning-based crack detection on an embedded system, a modified YOLO v3 was deployed; images of detected cracks were stored and their location information was estimated. In addition, a server with a fixed IP was built to collect the crack information, and an efficient communication network was configured among the devices. Because this technology can be used not only for crack detection but also for repair work, it is expected to contribute to the safety inspection of large structures and buildings as well as to improving their safety.
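
A possible shape for the reporting step, once the on-board detector has flagged a crack, is sketched below; the server URL, payload fields, and detector outputs are illustrative assumptions, not the system's actual interface.

```python
# Hedged sketch: store the crack image and push detection metadata to a fixed-IP server.
import cv2
import json
import requests
from datetime import datetime

SERVER = "http://192.0.2.10:8080/cracks"          # placeholder fixed-IP endpoint

def report_crack(frame, bbox, confidence, robot_pose):
    """Save the crack image locally and send detection metadata to the collection server."""
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    image_path = f"crack_{stamp}.jpg"
    cv2.imwrite(image_path, frame)                 # keep the evidence image on-board
    payload = {
        "time": stamp,
        "bbox": bbox,                              # [x, y, w, h] from the detector
        "confidence": confidence,
        "pose": robot_pose,                        # estimated position of the robot on the wall
    }
    with open(image_path, "rb") as img:
        requests.post(SERVER, data={"meta": json.dumps(payload)},
                      files={"image": img}, timeout=5)
```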

Real-Time 3D Ultrasound Imaging Method Using a Cross Array Based on Synthetic Aperture Focusing: II. Linear Wave Front Transmission Approach (합성구경 기반의 교차어레이를 이용한 실시간 3차원 초음파 영상화 기법 : II. 선형파면 송신 방법)

  • 김강식;송태경
    • Journal of Biomedical Engineering Research / v.25 no.5 / pp.403-414 / 2004
  • In the accompanying paper, we proposed a real-time volumetric imaging method using a cross array based on receive dynamic focusing and synthetic aperture focusing along the lateral and elevational directions, respectively. However, synthetic aperture methods using spherical waves are subject to beam spreading with increasing depth due to wave diffraction. Moreover, since the proposed method uses only one element for each transmission, its transmit power is limited. To overcome these limitations, we propose a new real-time volumetric imaging method using cross arrays based on a synthetic aperture technique with linear wave fronts. In the proposed method, linear wave fronts with different angles on the horizontal plane are transmitted successively from all transmit array elements. On receive, by employing conventional dynamic focusing and synthetic aperture methods along the lateral and elevational directions, respectively, the ultrasound waves can be focused effectively at all imaging points. Mathematical analysis and computer simulation results show that the proposed method provides uniform elevational resolution over a large depth of field. In particular, since the new method can construct a volume image with a limited number of transmit-receive events using the full transmit aperture, it is suitable for real-time 3D imaging with high transmit power and volume rate.
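
The core plane-wave (linear wave front) focusing idea can be sketched in one imaging plane as a delay-and-sum operation, as below; the array geometry, sampling values, and RF data are placeholders, and the cross-array elevational synthesis of the paper is not reproduced.

```python
# Hedged sketch: delay-and-sum focusing for one plane-wave transmit in a single plane.
import numpy as np

c = 1540.0                     # speed of sound, m/s
fs = 40e6                      # sampling rate, Hz
pitch = 0.3e-3                 # element pitch, m
n_elem = 64
elem_x = (np.arange(n_elem) - (n_elem - 1) / 2) * pitch   # element x positions

# rf[channel, sample]: echoes recorded after one plane-wave transmit at angle theta
rf = np.random.randn(n_elem, 2048)                         # placeholder RF data
theta = np.radians(5.0)                                    # transmit steering angle

def das_pixel(x, z):
    """Delay-and-sum value at image point (x, z) for one plane-wave transmit."""
    t_tx = (x * np.sin(theta) + z * np.cos(theta)) / c     # plane-wave transmit delay
    t_rx = np.sqrt((x - elem_x) ** 2 + z ** 2) / c         # per-element receive delay
    idx = np.round((t_tx + t_rx) * fs).astype(int)
    idx = np.clip(idx, 0, rf.shape[1] - 1)
    return rf[np.arange(n_elem), idx].sum()

print(das_pixel(0.0, 0.03))    # focused sample 30 mm deep on the array axis
```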

Development of a real-time surface image velocimeter using an android smartphone (스마트폰을 이용한 실시간 표면영상유속계 개발)

  • Yu, Kwonkyu;Hwang, Jeong-Geun
    • Journal of Korea Water Resources Association / v.49 no.6 / pp.469-480 / 2016
  • The present study aims to develop a real-time surface image velocimeter (SIV) using an Android smartphone, which can measure river surface velocity with its built-in sensors and processors. The SIV system first determines the location of the site using the phone's GPS. It also measures the pitch and roll angles of the device with its orientation sensors to determine the coordinate transform from real-world coordinates to image coordinates; the only parameter to be entered manually is the height of the phone above the water surface. After this setup, the phone's camera takes a series of images. With the help of OpenCV, an open-source computer vision library, the video is split into frames, and the image frames are analyzed to obtain the water-surface velocity field. The image processing algorithm, similar to the traditional STIV (Spatio-Temporal Image Velocimeter), is based on a correlation analysis of spatio-temporal images. The SIV system can measure an instantaneous (1-second averaged) velocity field once every 11 seconds, and averaging these instantaneous measurements over a sufficient amount of time yields a mean velocity field. A series of tests performed in an experimental flume showed that the measurement system is highly effective and convenient: compared with measurements from a traditional propeller velocimeter, the results showed a maximum error of 13.9% and an average error of less than 10%.
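
The correlation step of the velocity estimation can be sketched as a one-dimensional cross-correlation of grey-level profiles sampled along a search line in two frames, as below; the ground resolution, frame interval, and profiles are synthetic stand-ins rather than values from the flume tests.

```python
# Hedged sketch: surface displacement along a search line from two frames, then velocity.
import numpy as np

gsd = 0.02          # metres per pixel on the water surface (assumed geometric setup)
dt = 1 / 30.0       # frame interval, s

# Grey-level profiles sampled along the same search line in two consecutive frames
profile_t0 = np.random.rand(200)                            # placeholder data
shift_true = 6
profile_t1 = np.roll(profile_t0, shift_true) + 0.05 * np.random.rand(200)

# Full cross-correlation of zero-mean profiles; the lag of the peak is the displacement
a = profile_t0 - profile_t0.mean()
b = profile_t1 - profile_t1.mean()
corr = np.correlate(b, a, mode="full")
lag = corr.argmax() - (len(a) - 1)                          # pixels moved between frames

velocity = lag * gsd / dt                                   # m/s along the search line
print(f"estimated shift: {lag} px, surface velocity: {velocity:.2f} m/s")
```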

The Relationship Analysis between the Epicenter and Lineaments in the Odaesan Area using Satellite Images and Shaded Relief Maps (위성영상과 음영기복도를 이용한 오대산 지역 진앙의 위치와 선구조선의 관계 분석)

  • CHA, Sung-Eun;CHI, Kwang-Hoon;JO, Hyun-Woo;KIM, Eun-Ji;LEE, Woo-Kyun
    • Journal of the Korean Association of Geographic Information Studies / v.19 no.3 / pp.61-74 / 2016
  • The purpose of this paper is to analyze the relationship between the location of the epicenter of a medium-sized earthquake (magnitude 4.8) that occurred on January 20, 2007 in the Odaesan area and lineament features, using a shaded relief map (1/25,000 scale) and satellite images from LANDSAT-8 and KOMPSAT-2. Previous studies have analyzed lineament features in tectonic settings primarily by examining two-dimensional satellite images and shaded relief maps. These methods, however, limit the visual interpretation of relief features, long considered the major component of lineament extraction. To overcome some of these limitations of two-dimensional images, this study examined three-dimensional images produced from a Digital Elevation Model and a drainage network map for lineament extraction, which reduces the mapping errors introduced by visual interpretation. In addition, spline interpolation was conducted to produce the density maps of lineament frequency, intersection, and length required to estimate the lineament density at the epicenter of the earthquake. An algorithm was developed to compute the Value of the Relative Density (VRD), representing the relative lineament density from each map: the VRD is the lineament density of each map grid cell divided by the maximum density value of the map, a quantified value indicating the concentration level of lineament density across the area affected by the earthquake. Using this algorithm, the VRD calculated at the earthquake epicenter from the frequency, intersection, and length density maps ranged from approximately 0.60 (minimum) to 0.90 (maximum). However, because the mapped images differ in conditions such as solar altitude and azimuth, the mean VRD was used rather than the values from individual images. The results show that the average VRD from the frequency map was approximately 0.85, about 21% higher than the VRDs from the intersection and length maps, demonstrating the close relationship between lineaments and the epicenter. Therefore, it is concluded that the density map analysis described in this study, based on lineament extraction, is valid and can be used as a primary data analysis tool for future earthquake research.
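
The VRD definition quoted above reduces to a simple normalization of a density grid, as in the sketch below; the density values and epicenter grid indices are synthetic.

```python
# Hedged sketch: VRD = cell density divided by the map's maximum density.
import numpy as np

density = np.random.rand(100, 100) * 4.0      # e.g. a lineament frequency density grid
vrd_map = density / density.max()             # VRD in [0, 1] for every grid cell

epicenter_row, epicenter_col = 42, 57         # grid indices of the epicenter (placeholder)
vrd_at_epicenter = vrd_map[epicenter_row, epicenter_col]
print(f"VRD at the epicenter: {vrd_at_epicenter:.2f}")

# In the study, the frequency, intersection, and length maps each yield a VRD, and the
# mean over images with different illumination conditions is the reported value.
```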