• Title/Summary/Keyword: Cloud detection

Search Results: 372

Oil Spill Monitoring in Norilsk, Russia Using Google Earth Engine and Sentinel-2 Data (Google Earth Engine과 Sentinel-2 위성자료를 이용한 러시아 노릴스크 지역의 기름 유출 모니터링)

  • Minju Kim;Chang-Uk Hyun
    • Korean Journal of Remote Sensing / v.39 no.3 / pp.311-323 / 2023
  • Oil spill accidents can cause various environmental problems, so it is important to quickly assess the extent of the spilled oil and changes in its area and location. When detecting oil spills with satellite imagery, wide areas can be monitored by exploiting the information collected by the various sensors carried on the satellite. Previous studies have analyzed the reflectance of oil at specific wavelengths and developed oil spill indices using bands within those wavelength ranges. Analyzing multiple images before and after an oil spill for monitoring purposes consumes a significant amount of time and computing resources because of the large data volume. Google Earth Engine, which allows large volumes of satellite imagery to be analyzed through a web browser, makes it possible to detect oil spills efficiently. In this study, we evaluated the applicability of four oil spill indices over areas with various land cover types using Sentinel-2 MultiSpectral Instrument data and the cloud-based Google Earth Engine platform, and assessed the separability of oil spill areas by comparing index values across land cover classes. The results demonstrate the efficient use of Google Earth Engine in oil spill detection research and indicate that oil spill index B ((B3+B4)/B2) and oil spill index C (R: B3/B2, G: (B3+B4)/B2, B: (B6+B7)/B5) can contribute to effective oil spill monitoring in other regions with complex land cover.
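
Since the two indices are simple band ratios, they are easy to reproduce; the sketch below shows how they could be computed with the Google Earth Engine Python API, assuming an illustrative Sentinel-2 surface-reflectance scene, approximate Norilsk coordinates, and placeholder dates rather than the paper's exact workflow.

```python
import ee

ee.Initialize()

# Approximate Norilsk location and an illustrative date range (placeholders).
aoi = ee.Geometry.Point(88.2, 69.35)
image = (ee.ImageCollection('COPERNICUS/S2_SR_HARMONIZED')
         .filterBounds(aoi)
         .filterDate('2020-06-01', '2020-06-30')
         .sort('CLOUDY_PIXEL_PERCENTAGE')
         .first())

# Oil spill index B: (B3 + B4) / B2
index_b = image.expression(
    '(B3 + B4) / B2',
    {'B2': image.select('B2'), 'B3': image.select('B3'), 'B4': image.select('B4')},
).rename('oil_index_B')

# Oil spill index C is a three-band composite:
#   R: B3/B2, G: (B3+B4)/B2, B: (B6+B7)/B5
index_c = ee.Image.cat([
    image.select('B3').divide(image.select('B2')),
    index_b,
    image.select('B6').add(image.select('B7')).divide(image.select('B5')),
]).rename(['R', 'G', 'B'])
```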

Scaling Attack Method for Misalignment Error of Camera-LiDAR Calibration Model (카메라-라이다 융합 모델의 오류 유발을 위한 스케일링 공격 방법)

  • Yi-ji Im;Dae-seon Choi
    • Journal of the Korea Institute of Information Security & Cryptology / v.33 no.6 / pp.1099-1110 / 2023
  • The recognition systems of autonomous driving and robot navigation perform vision tasks such as object recognition, tracking, and lane detection after multi-sensor fusion to improve performance. Research on deep learning models based on the fusion of camera and LiDAR sensors is currently very active. However, deep learning models are vulnerable to adversarial attacks that modulate the input data. Existing attacks on multi-sensor autonomous driving recognition systems focus on inducing obstacle detection failures by lowering the confidence score of the object recognition model, but they are limited in that the attack works only on the targeted model. For attacks on the sensor fusion stage, errors can cascade into the vision tasks performed after fusion, a risk that needs to be considered. In addition, an attack on LiDAR point cloud data, which is difficult to inspect visually, makes it hard to determine whether an attack has occurred. In this study, we propose an image scaling-based attack method that reduces the accuracy of LCCNet, a camera-LiDAR fusion (calibration) model. The proposed method performs a scaling attack on the points of the input LiDAR data. In attack performance experiments with different scaling sizes, the attack induced fusion errors at an average rate of more than 77%.
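
As a rough illustration of the attack surface described above (not the authors' implementation), the sketch below applies a uniform scaling perturbation to the spatial coordinates of a LiDAR point cloud before it is passed to a calibration model; `calibration_model` and `misalignment_error` are hypothetical stand-ins.

```python
import numpy as np

def scaling_attack(points: np.ndarray, scale: float) -> np.ndarray:
    """Scale the XYZ coordinates of a LiDAR point cloud by a single factor.

    points: (N, 4) array of [x, y, z, intensity] returns.
    scale:  multiplicative factor applied to the spatial coordinates only.
    """
    attacked = points.copy()
    attacked[:, :3] *= scale  # geometric distortion; intensity left untouched
    return attacked

# Hypothetical evaluation loop: sweep scale factors and measure the induced
# misalignment of the predicted extrinsics.
# for s in (0.9, 0.95, 1.05, 1.1):
#     pred = calibration_model(image, scaling_attack(lidar_points, s))
#     err = misalignment_error(pred, ground_truth_extrinsics)
```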

Fog Detection over the Korean Peninsula Derived from Satellite Observations of Polar-orbit (MODIS) and Geostationary (GOES-9) (극궤도(MODIS) 및 정지궤도(GOES-9) 위성 관측을 이용한 한반도에서의 안개 탐지)

  • Yoo, Jung-Moon;Yun, Mi-Young;Jeong, Myeong-Jae;Ahn, Myoung-Hwan
    • Journal of the Korean earth science society / v.27 no.4 / pp.450-463 / 2006
  • Seasonal threshold values for fog detection over ten airport areas within the Korean Peninsula have been derived from two years of data from the polar-orbiting Aqua/Terra MODIS and the geostationary GOES-9. The values are obtained from the reflectance at 0.65 µm (R_0.65) and the brightness temperature difference between 3.7 µm and 11 µm (T_3.7-11). To examine the discrepancy between the threshold values of the two kinds of satellites, the following four parameters were analyzed under daytime/nighttime and fog/clear-sky conditions, using their simultaneous observations over the Seoul metropolitan area: the brightness temperature at 3.7 µm, the temperature at 11 µm, T_3.7-11 for day and night, and R_0.65 for daytime. The parameters show significant correlations (r<0.5) in spatial distribution between the two kinds of satellites. The discrepancy between their infrared thresholds is mainly due to the disagreement in their spatial resolutions and spectral bands, particularly at 3.7 µm. Fog detection from GOES-9 over the nine airport areas, excluding Cheongju airport, showed an accuracy of 60% in the daytime and 70% in the nighttime based on statistical verification. The accuracy decreases in foggy cases with twilight, precipitation, short persistence, or higher cloud above the fog. The sensitivity of radiance and reflectance to wavelength was analyzed in numerical experiments under various meteorological conditions to investigate the optical characteristics of the three channels.
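
A minimal sketch of the dual-threshold test described above is given below; the threshold values are placeholders, since the paper derives them per season and per satellite.

```python
import numpy as np

def fog_mask(r_065, bt_37, bt_11, daytime,
             r_thresh=0.25, dbt_day=10.0, dbt_night=-2.0):
    """Boolean fog mask from reflectance and brightness temperatures.

    r_065   : 0.65 um reflectance (R_0.65), used for daytime pixels
    bt_37   : 3.7 um brightness temperature [K]
    bt_11   : 11 um brightness temperature [K]
    daytime : boolean array, True for daytime pixels
    Threshold values are placeholders; the paper derives them per season and
    per satellite (MODIS vs. GOES-9).
    """
    dbt = bt_37 - bt_11                         # T_3.7-11
    day_fog = daytime & (r_065 > r_thresh) & (dbt > dbt_day)
    night_fog = ~daytime & (dbt < dbt_night)    # low-cloud emissivity makes dbt negative at night
    return day_fog | night_fog
```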

Detection of Forest Fire Damage from Sentinel-1 SAR Data through the Synergistic Use of Principal Component Analysis and K-means Clustering (Sentinel-1 SAR 영상을 이용한 주성분분석 및 K-means Clustering 기반 산불 탐지)

  • Lee, Jaese;Kim, Woohyeok;Im, Jungho;Kwon, Chunguen;Kim, Sungyong
    • Korean Journal of Remote Sensing / v.37 no.5_3 / pp.1373-1387 / 2021
  • Forest fires pose a significant threat to the environment and society, affecting the carbon cycle and surface energy balance and resulting in socioeconomic losses. Widely used multi-spectral satellite approaches for burned area detection have the problem that they do not work under cloudy conditions. Therefore, in this study, Sentinel-1 Synthetic Aperture Radar (SAR) data from the European Space Agency, which can be collected in all weather conditions, were used to identify forest fire damaged areas through a series of processes including Principal Component Analysis (PCA) and K-means clustering. Four forest fire cases, which occurred on April 4, 2019 in Gangneung·Donghae and Goseong·Sokcho in Gangwon-do, South Korea, and in two areas of North Korea, were examined. The estimated burned areas were evaluated using fire reference data provided by the National Institute of Forest Science (NIFOS) for the two South Korean cases, and the differenced normalized burn ratio (dNBR) for all four cases. The average accuracy using the NIFOS reference data was 86% for the Gangneung·Donghae and Goseong·Sokcho fires, and evaluation using dNBR showed an average accuracy of 84% for all four cases. It was also confirmed that the stronger the burn severity, the higher the detection accuracy, and vice versa. Given the advantages of SAR remote sensing, the proposed statistical processing and K-means clustering-based approach can be used to quickly identify forest fire damaged areas across the Korean Peninsula, where cloud cover is frequent and small-scale forest fires occur often.
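
The core of the approach, PCA for dimensionality reduction followed by two-cluster K-means, can be sketched as follows with scikit-learn; the input layout and standardization step are assumptions, not the authors' exact processing chain.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def detect_burned_area(stack: np.ndarray, n_components: int = 3) -> np.ndarray:
    """stack: (bands, rows, cols) array of calibrated VV/VH backscatter (dB)
    from before and after the fire. Returns a (rows, cols) label image with two
    clusters; which cluster is 'burned' must be decided afterwards, e.g. from
    the mean backscatter change within each cluster."""
    bands, rows, cols = stack.shape
    X = stack.reshape(bands, -1).T                        # (pixels, bands)
    X = (X - X.mean(axis=0)) / X.std(axis=0)              # standardize each band
    pcs = PCA(n_components=n_components).fit_transform(X)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pcs)
    return labels.reshape(rows, cols)
```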

Study on the Possibility of Estimating Surface Soil Moisture Using Sentinel-1 SAR Satellite Imagery Based on Google Earth Engine (Google Earth Engine 기반 Sentinel-1 SAR 위성영상을 이용한 지표 토양수분량 산정 가능성에 관한 연구)

  • Younghyun Cho
    • Korean Journal of Remote Sensing / v.40 no.2 / pp.229-241 / 2024
  • With the advancement of big-data processing technology on cloud platforms, access to, processing, and analysis of large-volume data such as satellite imagery have recently improved significantly. In this study, the change detection method, a relatively simple technique for retrieving soil moisture, was applied to the backscattering coefficient values of pre-processed Sentinel-1 synthetic aperture radar (SAR) imagery on Google Earth Engine (GEE), one of those platforms, to estimate surface soil moisture at six observatories within the Yongdam Dam watershed in South Korea for the period 2015 to 2023, as well as the watershed average. A correlation analysis was then conducted between the estimated values and in-situ measurements, along with an examination of the applicability of GEE. The results revealed that the surface soil moisture estimated for the small areas around the soil moisture observatories exhibited low correlations of 0.1 to 0.3 for both VH and VV polarizations, likely due to the inherent measurement accuracy of the SAR imagery and differences in data characteristics. However, the watershed-average surface soil moisture, derived by extracting the average SAR backscattering coefficient over the entire watershed and applying moving averages to mitigate data uncertainty and variability, showed markedly improved correlations of about 0.5. Although the pre-processed SAR data limit how directly the desired analyses can be carried out, estimating soil moisture with GEE proved useful: the efficient processing of extensive satellite imagery allows soil moisture to be estimated and evaluated over broad scales, such as long-term watershed averages. This highlights the effectiveness of GEE in handling vast satellite imagery datasets for soil moisture assessment. Based on this, GEE is expected to be useful for assessing long-term variations in average soil moisture in major dam watersheds, in conjunction with soil moisture observation data from various locations across the country.
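
A minimal sketch of the change detection idea is shown below: each backscatter observation is rescaled between the driest and wettest reference values in the time series, with a moving average applied first; the window length and the use of the series minimum/maximum as references are assumptions.

```python
import numpy as np

def relative_soil_moisture(sigma0_db: np.ndarray, window: int = 5) -> np.ndarray:
    """sigma0_db: 1-D time series of Sentinel-1 backscatter coefficients (dB)
    averaged over the watershed (or over a station footprint).
    Returns relative soil moisture scaled to [0, 1]."""
    kernel = np.ones(window) / window
    smoothed = np.convolve(sigma0_db, kernel, mode='same')   # moving average
    s_dry, s_wet = smoothed.min(), smoothed.max()            # driest / wettest references
    return (smoothed - s_dry) / (s_wet - s_dry)
```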

Automatic gasometer reading system using selective optical character recognition (관심 문자열 인식 기술을 이용한 가스계량기 자동 검침 시스템)

  • Lee, Kyohyuk;Kim, Taeyeon;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.1-25 / 2020
  • In this paper, we suggest an application system architecture which provides an accurate, fast, and efficient automatic gasometer reading function. The system captures a gasometer image using a mobile device camera, transmits the image to a cloud server over a private LTE network, and analyzes the image to extract the device ID and gas usage amount through selective optical character recognition based on deep learning. In general, an image contains many types of characters, and optical character recognition extracts all of them, but some applications need to ignore characters that are not of interest and focus only on specific types. For example, an automatic gasometer reading system only needs to extract the device ID and gas usage amount from gasometer images in order to bill users; strings that are not of interest, such as the device type, manufacturer, manufacturing date, and specifications, are not valuable to the application. Thus, the application has to analyze only the region of interest and the specific types of characters within it. We adopted CNN (convolutional neural network)-based object detection and CRNN (convolutional recurrent neural network) technology for selective optical character recognition, which analyzes only the region of interest for character extraction. We built three neural networks for the application system: the first is a convolutional neural network that detects the regions of interest containing the gas usage amount and device ID strings, the second is another convolutional neural network that transforms the spatial information of a region of interest into a sequence of spatial feature vectors, and the third is a bidirectional long short-term memory network that converts the sequential information into character strings by mapping feature vectors to characters through time-series analysis. In this research, the character strings of interest are the device ID and the gas usage amount; the device ID consists of 12 Arabic numerals and the gas usage amount consists of 4-5 Arabic numerals. All system components are implemented on Amazon Web Services with Intel Xeon E5-2686 v4 CPUs and NVIDIA Tesla V100 GPUs. The architecture adopts a master-slave processing structure for efficient and fast parallel processing, handling about 700,000 requests per day. A mobile device captures a gasometer image and transmits it to the master process in the AWS cloud. The master process runs on the Intel Xeon CPU and pushes the reading request into an input queue with a FIFO (first in, first out) structure. The slave process consists of the three deep neural networks that perform character recognition and runs on an NVIDIA GPU module. The slave process continuously polls the input queue for recognition requests; when a request arrives, it converts the image into the device ID string, the gas usage amount string, and the position information of the strings, returns this information to the output queue, and switches back to idle mode to poll the input queue. The master process gets the final information from the output queue and delivers it to the mobile device. We used a total of 27,120 gasometer images for training, validation, and testing of the three deep neural networks: 22,985 images were used for training and validation and 4,135 images for testing. The 22,985 images were randomly split with an 8:2 ratio into training and validation sets for each training epoch. The 4,135 test images were categorized into five types (normal, noise, reflex, scale, and slant): normal data are clean images, noise means images with noise, reflex means images with light reflection in the gasometer region, scale means images with small object size due to long-distance capture, and slant means images that are not horizontally flat. The final character string recognition accuracies for the device ID and gas usage amount on normal data are 0.960 and 0.864, respectively.
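
The master-slave queueing pattern described above can be sketched as follows; this is a simplified single-machine illustration rather than the deployed AWS system, and `recognize_device_id_and_usage` is a hypothetical stand-in for the three-network pipeline.

```python
import queue
import threading

input_queue: "queue.Queue[bytes]" = queue.Queue()   # FIFO of uploaded gasometer images
output_queue: "queue.Queue[dict]" = queue.Queue()   # recognized strings and positions

def recognize_device_id_and_usage(image_bytes: bytes) -> dict:
    # Placeholder for the three-network pipeline (ROI detection -> spatial
    # feature sequence -> bidirectional LSTM decoding).
    return {"device_id": "000000000000", "usage": "0000"}

def slave_worker() -> None:
    while True:
        image_bytes = input_queue.get()             # blocks until a request arrives
        output_queue.put(recognize_device_id_and_usage(image_bytes))
        input_queue.task_done()

threading.Thread(target=slave_worker, daemon=True).start()

# Master side: enqueue an uploaded image and wait for the recognition result.
input_queue.put(b"...jpeg bytes from the mobile device...")
print(output_queue.get())
```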

Risk Assessment and Safety Measures for Methanol Separation Process in BPA Plant (BPA 공장의 메탄올 분리공정에서 위험성 평가 및 안전대책)

  • Woo, In-Sung;Lee, Joong-Hee;Lee, In-Bok;Chon, Young-Woo;Park, Hee-Chul;Hwang, Seong-Min;Kim, Tae-Ok
    • Journal of the Korean Institute of Gas / v.16 no.3 / pp.22-28 / 2012
  • For a methanol separation column of a BPA (bisphenol A) plant, a HAZOP (hazard and operability) assessment was performed and damage ranges were predicted from accident scenarios for fire and explosion. As a result, the damage range of a jet fire was 20 m in the case of rupture of the safety valve discharge pipe (50 mm diameter), and that of a flash fire was 267 m in the case of catastrophic rupture. The damage ranges of an unconfined vapor cloud explosion (UVCE) for the discharge pipe rupture and the catastrophic rupture were 22 m and 542 m, respectively. For the worst-case release scenario, the following safety measures were suggested: pressure instruments that can detect an abnormal rise of the internal pressure in the methanol separation column should be installed in the top section of the column using a 2-out-of-3 voting scheme, and upon detection they should simultaneously close the control valve and the emergency shut-off valve.
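
The 2-out-of-3 voting logic proposed as a safety measure can be sketched as follows; the trip pressure is an arbitrary placeholder, not a value from the study.

```python
def two_out_of_three_trip(pressures_barg, trip_point_barg=5.0):
    """Return True if at least two of the three pressure transmitters on the
    column top read at or above the high-pressure trip point (placeholder value),
    i.e. the condition for closing the control and emergency shut-off valves."""
    votes = sum(p >= trip_point_barg for p in pressures_barg)
    return votes >= 2

print(two_out_of_three_trip([5.4, 3.1, 2.9]))  # False: a single faulty reading does not trip
print(two_out_of_three_trip([5.4, 5.2, 2.9]))  # True: two independent high readings trip the system
```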

Intelligent Motion Pattern Recognition Algorithm for Abnormal Behavior Detections in Unmanned Stores (무인 점포 사용자 이상행동을 탐지하기 위한 지능형 모션 패턴 인식 알고리즘)

  • Young-june Choi;Ji-young Na;Jun-ho Ahn
    • Journal of Internet Computing and Services / v.24 no.6 / pp.73-80 / 2023
  • The recent steep increase in the minimum hourly wage has raised the burden of labor costs, and the share of unmanned stores is growing in the aftermath of COVID-19. As a result, theft targeting unmanned stores is also increasing. To prevent such theft, "Just Walk Out"-style systems have been introduced that rely on LiDAR sensors, weight sensors, and similar hardware, or on manual checking through continuous CCTV monitoring. However, the more expensive the sensors used, the higher the initial and operating costs of the store, while CCTV verification is limited because managers cannot monitor it around the clock. In this paper, we propose an AI image-processing fusion algorithm that reduces this dependence on sensors and human monitoring, detects customers who perform abnormal behaviors such as theft at a cost low enough for unmanned stores, and provides cloud-based notifications. Based on behavior pattern data collected from unmanned stores, we verify the accuracy of each component algorithm, motion capture using MediaPipe, object detection using YOLO, and the fusion algorithm, and demonstrate the performance of the fusion algorithm through various scenario designs.
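
A per-frame combination of MediaPipe pose estimation and YOLO object detection, the kind of fusion the abstract describes, could look like the sketch below; the model weights, thresholds, and downstream rules are assumptions, not the authors' implementation.

```python
import cv2
import mediapipe as mp
from ultralytics import YOLO

pose = mp.solutions.pose.Pose(static_image_mode=False)
detector = YOLO("yolov8n.pt")   # pretrained COCO weights, for illustration only

def analyze_frame(frame_bgr):
    """Return pose landmarks and object detections for one CCTV frame."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    landmarks = pose.process(rgb).pose_landmarks
    detections = detector(frame_bgr, verbose=False)[0].boxes
    # A downstream fusion rule would track hand/wrist landmarks against product
    # bounding boxes over time to flag concealment-like motion patterns.
    return landmarks, detections
```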

Study on Applicability of Cloth Simulation Filtering Algorithm for Segmentation of Ground Points from Drone LiDAR Point Clouds in Mountainous Areas (산악지형 드론 라이다 데이터 점군 분리를 위한 CSF 알고리즘 적용에 관한 연구)

  • Seul Koo;Eon Taek Lim;Yong Han Jung;Jae Wook Suk;Seong Sam Kim
    • Korean Journal of Remote Sensing / v.39 no.5_2 / pp.827-835 / 2023
  • Drone light detection and ranging (LiDAR) is a state-of-the-art surveying technology that enables close investigation of the tops of mountain slopes or inaccessible slopes, and it is being used for field surveys in mountainous terrain. To build topographic information from drone LiDAR, a preprocessing step is required to effectively separate ground and non-ground points in the acquired point cloud. In this study, point cloud data of mountainous terrain were acquired using an aerial LiDAR mounted on a commercial drone, and the applicability and accuracy of the cloth simulation filtering algorithm, one of the ground separation techniques, were verified. When the algorithm was applied, the separation accuracy between ground and non-ground points was 84.3% with a kappa coefficient of 0.71, showing that drone LiDAR data can be used effectively for landslide field surveys in mountainous terrain.
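
A minimal sketch of applying cloth simulation filtering to a drone LiDAR point cloud is shown below, assuming the open-source CSF Python bindings (the `cloth-simulation-filter` package) and `laspy` for file I/O; the file name and parameter values are illustrative, not those tuned in the study.

```python
import CSF          # cloth-simulation-filter bindings
import laspy
import numpy as np

las = laspy.read("drone_survey.las")                # placeholder file name
xyz = np.vstack((las.x, las.y, las.z)).T

csf = CSF.CSF()
csf.params.bSloopSmooth = True        # helps on steep mountain slopes
csf.params.cloth_resolution = 0.5     # metres; site-dependent choice
csf.setPointCloud(xyz)

ground_idx, non_ground_idx = CSF.VecInt(), CSF.VecInt()
csf.do_filtering(ground_idx, non_ground_idx)

ground_points = xyz[np.array(ground_idx)]           # bare-earth returns
non_ground_points = xyz[np.array(non_ground_idx)]   # vegetation, structures, noise
```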

A Design of Authentication Mechanism for Secure Communication in Smart Factory Environments (스마트 팩토리 환경에서 안전한 통신을 위한 인증 메커니즘 설계)

  • Joong-oh Park
    • Journal of Industrial Convergence / v.22 no.4 / pp.1-9 / 2024
  • Smart factories are production facilities in which cutting-edge information and communication technologies are fused with manufacturing processes, reflecting rapid advancements and changes in the global manufacturing sector. They maximize production efficiency in various manufacturing environments by integrating robotics and automation, the Internet of Things (IoT), and artificial intelligence technologies. However, the smart factory environment is prone to security threats and vulnerabilities arising from various attack techniques. When security incidents occur in smart factories, they can lead to financial losses, damage to corporate reputation, and even human casualties, so an appropriate security response is required. This paper therefore proposes a security authentication mechanism for safe communication in the smart factory environment. The components of the proposed mechanism are smart devices, an internal operation management system, an authentication system, and a cloud storage server. The smart device registration process, the authentication procedure, and the detailed design of the anomaly detection and update procedures were developed in detail. The safety of the proposed mechanism was analyzed, and a performance comparison with existing authentication mechanisms confirmed an efficiency improvement of approximately 8%. This paper also presents directions for future research on lightweight protocols and security strategies for applying the proposed technology, aiming to further enhance security.
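
The abstract does not disclose the cryptographic details of the proposed mechanism, so the sketch below is only a generic illustration of device-to-server challenge-response authentication with a pre-shared key, not the paper's design.

```python
import hashlib
import hmac
import os

def issue_challenge() -> bytes:
    """Authentication server: generate a fresh nonce for the device."""
    return os.urandom(16)

def device_response(psk: bytes, device_id: str, challenge: bytes) -> bytes:
    """Smart device: prove knowledge of the pre-shared key without sending it."""
    return hmac.new(psk, device_id.encode() + challenge, hashlib.sha256).digest()

def verify(psk: bytes, device_id: str, challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(psk, device_id.encode() + challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# Example exchange (the key would be established during device registration).
key = os.urandom(32)
nonce = issue_challenge()
assert verify(key, "sensor-001", nonce, device_response(key, "sensor-001", nonce))
```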