• Title/Summary/Keyword: 영상취득시스템 (Image Acquisition System)

Search Results: 384

Definition of flash drought and analysis of hydrometeorological characteristics in South Korea (국내 돌발가뭄의 정의 및 수문기상학적 특성 분석)

  • Lee, Hee-Jin;Nam, Won-Ho;Yoon, Dong-Hyun;Svoboda, Mark D.
    • Proceedings of the Korea Water Resources Association Conference / 2021.06a / pp.72-72 / 2021
  • Droughts develop and persist gradually over months to years, and are difficult to recognize with certainty until damage to vegetation occurs. Recently, climate change has increased the frequency of droughts, and meteorological anomalies are producing droughts in various forms. 'Flash drought', as defined in the United States, is a drought in which rising surface temperatures and abnormally low, rapidly decreasing soil moisture over a relatively short period cause severe stress on vegetation, leading to widespread crop loss and reduced water supply. Because flash drought monitoring has not yet been established in South Korea, this study proposes the new Korean term '돌발가뭄' (flash drought) and seeks to detect and analyze flash droughts that have occurred in Korea using key drought-related variables such as soil moisture, evapotranspiration, precipitation, and temperature. To analyze flash droughts, we applied classification types based on the key drought variables proposed in previous studies, as well as types based on evapotranspiration-based drought indices, using ground observations from 76 Korean synoptic weather stations. Soil moisture data were obtained from GRACE (Gravity Recovery and Climate Experiment) satellite imagery and used at a 5 km spatial resolution. For drought indices, evapotranspiration-based indices including the Standardized Precipitation Evapotranspiration Index (SPEI) and the Evaporative Stress Index (ESI) were applied to the analysis of flash drought types in Korea. Through this study, we aim to analyze, by type, flash droughts, which have not yet been clearly defined, and the hydrometeorological characteristics of flash droughts in South Korea.

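The index-based detection outlined in the abstract above can be sketched in miniature: a standardized water-balance series plus a rapid-drop onset rule. This is an illustrative toy, not the paper's method; the function names, the two-step window, and the 1.0 drop threshold are assumptions, and the actual definitions combine soil moisture, SPEI, and ESI.

```python
import statistics

def standardized_index(series):
    """Standardize a monthly water-balance series (precipitation minus
    reference evapotranspiration): a simplified stand-in for SPEI."""
    mu = statistics.mean(series)
    sigma = statistics.stdev(series)
    return [(x - mu) / sigma for x in series]

def flash_drought_onsets(index, drop=1.0, window=2):
    """Return time steps where the index fell by at least `drop`
    within `window` steps: a toy rapid-intensification criterion."""
    return [t for t in range(window, len(index))
            if index[t - window] - index[t] >= drop]
```

A sharp fall in the standardized series, rather than its absolute level, is what distinguishes a flash drought from a conventional slowly developing one.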

Study on Big Data Linkage Method for Managing Port Infrastructure Disasters and Aging (항만 인프라 재해 및 노후화 관리를 위한 빅데이터 연계 방안 연구)

  • Choi, Woo-geun;Park, Sun-ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.10a / pp.134-137 / 2021
  • This study aims to develop a digital twin and big data-based port infrastructure control system that reflects smart maintenance technology. The technology converts heterogeneous data, such as sensing and image data acquired from port infrastructure, into big data, visualizes it in a digital twin-based control system, and analyzes it comprehensively to evaluate aging and disaster risk. We also examine what big data means for representing physical-world objects and processes by combining data, the core components of the virtual world; the issues to be addressed at each stage of acquiring, processing, storing, analyzing, and utilizing the necessary big data; and methods for linking it with IT resources.


Improvement of Mid-Wave Infrared Image Visibility Using Edge Information of KOMPSAT-3A Panchromatic Image (KOMPSAT-3A 전정색 영상의 윤곽 정보를 이용한 중적외선 영상 시인성 개선)

  • Jinmin Lee;Taeheon Kim;Hanul Kim;Hongtak Lee;Youkyung Han
    • Korean Journal of Remote Sensing / v.39 no.6_1 / pp.1283-1297 / 2023
  • Mid-wave infrared (MWIR) imagery, owing to its ability to capture the temperature of land cover and objects, serves as a crucial data source in various fields including environmental monitoring and defense. The KOMPSAT-3A satellite acquires MWIR imagery with high spatial resolution compared to other satellites. However, the limited spatial resolution of MWIR imagery, in comparison to electro-optical (EO) imagery, constrains the optimal utilization of the KOMPSAT-3A data. This study aims to create a highly visible MWIR fusion image by leveraging the edge information of the KOMPSAT-3A panchromatic (PAN) image. Preprocessing is implemented to mitigate the relative geometric errors between the PAN and MWIR images. Subsequently, we employ a pre-trained pixel difference network (PiDiNet), a deep learning-based edge extraction technique, to extract the boundaries of objects from the preprocessed PAN images. The MWIR fusion imagery is then generated by emphasizing the brightness values corresponding to the edge information of the PAN image. To evaluate the proposed method, MWIR fusion images were generated for three different sites. As a result, the boundaries of terrain and objects in the MWIR fusion images were emphasized, providing detailed thermal information for the areas of interest. In particular, the MWIR fusion images provided thermal information for objects such as airplanes and ships that are hard to detect in the original MWIR images. This study demonstrates that the proposed method can generate a single image combining the visible details of an EO image with the thermal information of an MWIR image, which contributes to increasing the usability of MWIR imagery.
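The fusion step described above, brightening MWIR pixels along PAN-derived edges, can be sketched as follows. The function name and the gain value are illustrative assumptions, not the paper's actual weighting scheme, and the edge map is assumed to be already registered to the MWIR grid (e.g. PiDiNet output after the preprocessing step).

```python
def fuse_mwir_with_edges(mwir, edge, gain=60.0):
    """Brighten MWIR pixels along object boundaries from the PAN image.
    `mwir` is a 2-D list of 8-bit brightness values and `edge` a 2-D list
    of edge probabilities in [0, 1]; output is clipped back to [0, 255]."""
    return [[min(255, max(0, round(m + gain * e)))
             for m, e in zip(mrow, erow)]
            for mrow, erow in zip(mwir, edge)]
```

Because the edge map only adds brightness where boundaries exist, homogeneous thermal regions keep their original MWIR values.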

Building a Web-Based Undesignated Cultural Heritages Management Information System - A Case Study of the Namsan Area in Kyeongju - (웹을 이용한 비지정 문화재 관리 시스템 구축 - 경주 남산 지역을 중심으로 -)

  • Jo, Myung-Hee;Jang, Sung-Hyun;Kim, Hyoung-Sub
    • Journal of the Korean Association of Geographic Information Studies / v.15 no.4 / pp.151-161 / 2012
  • The purpose of this study was to build a web-based cultural heritage management information system to efficiently manage and safely preserve the undesignated cultural properties in the Namsan area of Kyeongju, which have been neglected so far. To this end, data were collected on the undesignated cultural properties in the study area: GPS was used to acquire their locations and extents, and spatial data including geographic coordinates, visual materials, and structured interviews were gathered through field surveys. In addition, DGPS (Differential Global Positioning System) was used to obtain reliable and accurate locations of the scattered undesignated cultural properties. A spatial database was constructed based on the cultural properties standard, and attribute data were linked to geo-spatial information (digital maps and aerial photographs). The system was built in a web-server environment: it shows a detailed description of the selected item, and property information can be located on the map. In particular, a database for searching the status and modification history of cultural properties will provide useful information to users.

Gaze Detection System using Real-time Active Vision Camera (실시간 능동 비전 카메라를 이용한 시선 위치 추적 시스템)

• Park, Kang-Ryoung
    • Journal of KIISE: Software and Applications / v.30 no.12 / pp.1228-1238 / 2003
  • This paper presents a new and practical computer-vision method for detecting the monitor position at which the user is looking. In general, the user moves both the face and the eyes to gaze at a certain monitor position. Previous research used only one wide-view camera that captures the whole face; in such cases the image resolution is too low, and fine movements of the user's eyes cannot be detected exactly. We therefore implement a gaze detection system with a dual-camera setup (a wide-view and a narrow-view camera). To locate the user's eye position accurately, the narrow-view camera performs auto focusing and auto panning/tilting based on the 3D facial feature positions detected by the wide-view camera. In addition, we use dual IR-LED illuminators to detect facial features, especially eye features. Experimental results show that the system runs in real time, and the RMS error between the computed gaze positions and the real ones is about 3.44 cm.
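The accuracy figure reported above is a root-mean-square error over on-screen gaze positions. A minimal sketch of that metric (the function name is an assumption, positions are in cm on the monitor plane):

```python
import math

def rms_error(estimated, actual):
    """Root-mean-square Euclidean distance between estimated and
    true (x, y) gaze positions on the screen, in the same units."""
    sq = [(ex - ax) ** 2 + (ey - ay) ** 2
          for (ex, ey), (ax, ay) in zip(estimated, actual)]
    return math.sqrt(sum(sq) / len(sq))
```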

Comparison of Multi-angle TerraSAR-X Staring Mode Image Registration Method through Coarse to Fine Step (Coarse to Fine 단계를 통한 TerraSAR-X Staring Mode 다중 관측각 영상 정합기법 비교 분석)

  • Lee, Dongjun;Kim, Sang-Wan
    • Korean Journal of Remote Sensing / v.37 no.3 / pp.475-491 / 2021
  • With the recent increase in the availability of high-resolution (< ~1 m) satellite SAR images, the demand for precise registration of SAR images is increasing in various fields, including change detection. Registration between high-resolution SAR images acquired at different look angles is difficult due to speckle noise and geometric distortion inherent to SAR imagery. In this study, registration is performed in two stages, coarse and fine, using X-band SAR data acquired in the staring spotlight mode of TerraSAR-X. For coarse registration, a method combining adaptive sampling and SAR-SIFT (Scale Invariant Feature Transform) is applied. For the fine registration stage, three rigid methods (NCC: Normalized Cross Correlation, Phase Congruency-NCC, and MI: Mutual Information) and one non-rigid method (Gefolki: Geoscience extended Flow Optical Flow Lucas-Kanade Iterative) were compared. The results were evaluated using the RMSE (Root Mean Square Error) and FSIM (Feature Similarity) indices; the rigid models showed poor results in all image combinations and were confirmed to have large registration errors in rugged terrain. Applying the Gefolki algorithm yielded the best RMSE, at 1-3 pixels, with FSIM values 0.02-0.03 higher than those of the rigid methods. This confirms that mis-registration due to terrain effects can be sufficiently reduced by the Gefolki algorithm.
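The NCC similarity measure compared above can be sketched on flattened patches. This is a minimal illustration of the measure itself, not the paper's registration pipeline; the function name is assumed.

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two equal-length patches
    (flattened pixel lists); 1.0 for identical patterns, -1.0 for
    inverted ones, and invariant to brightness and contrast shifts."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    da = [x - ma for x in a]
    db = [x - mb for x in b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den
```

The invariance to affine brightness changes is what makes NCC attractive for matching patches across acquisitions; its rigidity, however, is exactly why it struggles with the terrain-induced distortions noted in the abstract.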

Real-Time Implementation of the Relative Position Estimation Algorithm Using the Aerial Image Sequence (항공영상에서 상대 위치 추정 알고리듬의 실시간 구현)

  • Park, Jae-Hong;Kim, Gwan-Seok;Kim, In-Cheol;Park, Rae-Hong;Lee, Sang-Uk
    • Journal of the Institute of Electronics Engineers of Korea SP / v.39 no.3 / pp.66-77 / 2002
  • This paper deals with an implementation of a navigation parameter extraction technique using the TMS320C80 multimedia video processor (MVP). In particular, it focuses on the relative position estimation algorithm, which plays an important role in the real-time operation of the overall system. Based on the relative position estimation algorithm using images obtained at two locations, we develop a fast algorithm that greatly reduces computation time and fits fixed-point processors. The algorithm is then reconfigured for parallel processing using the four parallel processors in the MVP. As a result, we demonstrate that the navigation parameter extraction system employing the MVP can operate at full frame rate, satisfying the real-time requirement of the overall system.
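Fitting the estimation arithmetic to a fixed-point processor, as mentioned above, typically relies on Q-format scaling. The sketch below assumes a Q16.16 format and helper names of my own; the paper does not specify its actual number format.

```python
def to_fixed(x, frac_bits=16):
    """Quantize a float to Q-format fixed point with `frac_bits`
    fractional bits (Q16.16 by default)."""
    return int(round(x * (1 << frac_bits)))

def fixed_mul(a, b, frac_bits=16):
    """Multiply two fixed-point values and rescale the result back
    to the same Q format (arithmetic right shift)."""
    return (a * b) >> frac_bits
```

Keeping all intermediate products within the accumulator width, and choosing `frac_bits` so the dynamic range of the position estimates fits, is the main porting concern on such DSPs.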

Road Crack Detection based on Object Detection Algorithm using Unmanned Aerial Vehicle Image (드론영상을 이용한 물체탐지알고리즘 기반 도로균열탐지)

  • Kim, Jeong Min;Hyeon, Se Gwon;Chae, Jung Hwan;Do, Myung Sik
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.18 no.6 / pp.155-163 / 2019
  • This paper proposes a new methodology for recognizing cracks on asphalt road surfaces using image data obtained with drones. The target section was Yuseong-daero, the main highway of Daejeon. Two object detection algorithms, Tiny-YOLO-V2 and Faster-RCNN, were used to recognize cracks on road surfaces and classify the crack types, and their results were compared. The mean average precision of Faster-RCNN and Tiny-YOLO-V2 was 71% and 33%, respectively: the two-stage Faster-RCNN detector identified and separated road surface cracks better than the one-stage YOLO detector. In the future, this approach can support an infrastructure asset-management system based on drones and AI crack detection, establishing an efficient and economical road-maintenance decision-support system and its operating environment.
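The mAP comparison above rests on matching detections to ground truth by intersection-over-union. A minimal sketch of that overlap criterion (the function name and box convention are assumptions):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as
    (x1, y1, x2, y2); 0.0 when the boxes do not overlap."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```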

Real-time Implementation and Application of Pointing Region Estimation System using 3D Geometric Information in Real World (실세계 3차원 기하학 정보를 이용한 실시간 지시영역 추정 시스템의 구현 및 응용)

  • Han, Yun-Sang;Seo, Yung-Ho;Doo, Kyoung-Soo;Kim, Jin-Tae;Choi, Jong-Soo
    • Journal of the Institute of Electronics Engineers of Korea SP / v.45 no.2 / pp.29-36 / 2008
  • In this paper, we propose a real-time method to estimate a pointing region from two camera images. In general, when a human points at something, the pointing target lies in the direction of the face. We therefore regard the pointing direction as the straight line connecting the face position with the fingertip position. First, the method extracts points in the face and fingertip regions by detecting human skin color. We then use 3D geometric information to detect the pointing direction and its target region. To evaluate the performance, we built an ICIGS (Interactive Cinema Information Guiding System) with two cameras and a beam projector.
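The face-to-fingertip ray described above, extended to a target plane, can be sketched as follows. The function name and the assumption that the target is the plane z = `plane_z` (e.g. the screen) are illustrative, not the paper's exact formulation.

```python
def pointing_target(face, fingertip, plane_z):
    """Extend the ray from the 3-D face position through the 3-D
    fingertip position to the plane z = plane_z, and return the
    pointed-at (x, y). Assumes fingertip and face differ in z."""
    fx, fy, fz = face
    tx, ty, tz = fingertip
    s = (plane_z - fz) / (tz - fz)  # ray parameter at the plane
    return (fx + s * (tx - fx), fy + s * (ty - fy))
```

In practice the face and fingertip positions come from triangulating the two camera views, which is where the system's 3D geometric information enters.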

Sorghum Panicle Detection using YOLOv5 based on RGB Image Acquired by UAV System (무인기로 취득한 RGB 영상과 YOLOv5를 이용한 수수 이삭 탐지)

  • Min-Jun, Park;Chan-Seok, Ryu;Ye-Seong, Kang;Hye-Young, Song;Hyun-Chan, Baek;Ki-Su, Park;Eun-Ri, Kim;Jin-Ki, Park;Si-Hyeong, Jang
    • Korean Journal of Agricultural and Forest Meteorology / v.24 no.4 / pp.295-304 / 2022
  • The purpose of this study is to detect sorghum panicles using YOLOv5 based on RGB images acquired by an unmanned aerial vehicle (UAV) system. The high-resolution images acquired on September 2, 2022 with the RGB camera mounted on the UAV were split into 512×512 tiles for YOLOv5 analysis, and sorghum panicles were labeled as bounding boxes in the split images. 2,000 images of 512×512 size were divided at a ratio of 6:2:2 to train, validate, and test the YOLOv5 model, respectively. When trained with YOLOv5s, which has the fewest parameters among the YOLOv5 models, sorghum panicles were detected with mAP@50 = 0.845; with YOLOv5m, which has more parameters, they were detected with mAP@50 = 0.844. Although the performance of the two models is similar, YOLOv5s (4 hours 35 minutes) trains faster than YOLOv5m (5 hours 15 minutes), so in terms of time cost the YOLOv5s model was considered more efficient for detecting sorghum panicles. As an important step toward predicting sorghum yield, this study presents a technique for detecting sorghum panicles using high-resolution RGB images and the YOLOv5 model.
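The 512×512 split described above can be sketched as a tiling of crop origins. This is an illustrative sketch (function name assumed); how the paper handles edge remainders is not stated, so this version simply drops partial tiles at the right and bottom borders.

```python
def split_tiles(width, height, tile=512):
    """Return the top-left (x, y) corners of non-overlapping
    `tile`-by-`tile` crops covering an image; partial tiles at the
    right/bottom edges are dropped."""
    return [(x, y)
            for y in range(0, height - tile + 1, tile)
            for x in range(0, width - tile + 1, tile)]
```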