• Title/Summary/Keyword: AI 영상인식 (AI image recognition)


Detection of Wildfire Smoke Plumes Using GEMS Images and Machine Learning (GEMS 영상과 기계학습을 이용한 산불 연기 탐지)

  • Jeong, Yemin;Kim, Seoyeon;Kim, Seung-Yeon;Yu, Jeong-Ah;Lee, Dong-Won;Lee, Yangwon
    • Korean Journal of Remote Sensing / v.38 no.5_3 / pp.967-977 / 2022
  • The occurrence and intensity of wildfires are increasing with climate change. Emissions from forest fire smoke are recognized as one of the major factors affecting air quality and the greenhouse effect. The use of satellite products and machine learning is essential for the detection of forest fire smoke. To date, research on forest fire smoke detection has been hampered by difficulties in cloud identification and by vague standards for smoke boundaries. The purpose of this study is to detect forest fire smoke using machine learning with Level 1 and Level 2 data from the Geostationary Environment Monitoring Spectrometer (GEMS), a Korean environmental satellite sensor. The forest fire that occurred in Gangwon-do in March 2022 was selected as the case study. Smoke pixel classification modeling was performed by producing wildfire smoke label images and feeding GEMS Level 1 and Level 2 data into a random forest model. In the trained model, the input variables ranked by importance were Aerosol Optical Depth (AOD), the 380 nm and 340 nm radiance difference, Ultra-Violet Aerosol Index (UVAI), Visible Aerosol Index (VisAI), Single Scattering Albedo (SSA), formaldehyde (HCHO), nitrogen dioxide (NO2), 380 nm radiance, and 340 nm radiance, in that order. In addition, in the estimation of the forest fire smoke probability (0 ≤ p ≤ 1) for 2,704 pixels, the Mean Bias Error (MBE) was -0.002, the Mean Absolute Error (MAE) 0.026, the Root Mean Square Error (RMSE) 0.087, and the Correlation Coefficient (CC) 0.981.
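The error metrics reported above (MBE, MAE, RMSE, CC) follow standard definitions; a minimal sketch, with illustrative variable names not taken from the paper, of how such scores could be computed for predicted smoke probabilities:

```python
import math

def evaluation_metrics(pred, obs):
    """Compute MBE, MAE, RMSE, and Pearson correlation (CC)
    between predicted and observed values."""
    n = len(pred)
    mbe = sum(p - o for p, o in zip(pred, obs)) / n
    mae = sum(abs(p - o) for p, o in zip(pred, obs)) / n
    rmse = math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / n)
    mp, mo = sum(pred) / n, sum(obs) / n
    cov = sum((p - mp) * (o - mo) for p, o in zip(pred, obs))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    cc = cov / (sp * so)
    return mbe, mae, rmse, cc
```

A near-zero MBE with a larger RMSE, as reported above, indicates errors that are small on average but not uniformly zero per pixel.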

Cat Behavior Pattern Analysis and Disease Prediction System of Home CCTV Images using AI (AI를 이용한 홈CCTV 영상의 반려묘 행동 패턴 분석 및 질병 예측 시스템 연구)

  • Han, Su-yeon;Park, Dea-Woo
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.9 / pp.1266-1271 / 2022
  • Cats retain strong wild instincts and therefore tend to hide illness well. By the time the guardian notices that the cat has a disease, it may already have worsened. Being able to recognize a cat's polydipsia, polyuria, and frequent urination earlier is of great help in treating its disease. In this paper, an artificial intelligence device runs 1) an efficient version of DeepLabCut for pose estimation, 2) YOLO v4 for object detection, 3) LSTM for behavior prediction, and 4) BoT-SORT for object tracking. Using these artificial intelligence technologies, the system predicts the cat's polydipsia, polyuria, and frequency of urination by analyzing the cat's behavior patterns from home CCTV video together with a weight sensor in the water bowl. Based on this behavior pattern analysis, we propose an application that reports disease predictions and abnormal behavior to the guardian, delivering them to the guardian's mobile device and to the server system.
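The water-bowl weight sensor mentioned above lends itself to a simple event-extraction step before any behavior modeling. A hypothetical sketch (the threshold and function name are assumptions, not from the paper) that counts drinking events from successive bowl-weight readings:

```python
def count_drinking_events(weights, min_drop_g=2.0):
    """Count drinking events, where an event is a run of consecutive
    weight drops of at least `min_drop_g` grams between readings."""
    events = 0
    in_event = False
    for prev, cur in zip(weights, weights[1:]):
        if prev - cur >= min_drop_g:
            if not in_event:   # start of a new drinking event
                events += 1
                in_event = True
        else:
            in_event = False   # weight stable or refilled: event over
    return events
```

An event frequency well above the cat's own baseline could then be flagged to the guardian as possible polydipsia.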

Escape Route Prediction and Tracking System using Artificial Intelligence (인공지능을 활용한 도주경로 예측 및 추적 시스템)

  • Yang, Bum-suk;Park, Dea-woo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.225-227 / 2022
  • In Seoul, about 75,000 CCTV cameras are installed across the 25 district offices. Each district office operates a control center for CCTV monitoring and, through 24-hour AI-based intelligent image analysis, builds information such as person detection, vehicle type, license plate recognition, and color classification into big data. The Seoul Metropolitan Government has signed MOUs with the Ministry of Land, Infrastructure and Transport, the National Police Agency, the Fire Service, the Ministry of Justice, and the military to enable rapid response to emergency situations. In other words, by sharing the CCTV images of each district office, a smart city is being built that is safe and can prevent disasters. In this paper, a system is designed that uses artificial intelligence to extract the characteristics of vehicles and persons from CCTV images when an incident occurs, and on that basis predicts the escape route and enables continuous tracking. The AI automatically selects and displays the CCTV images along the predicted route. When the escape route of a person or vehicle related to an incident is expected to extend beyond the relevant jurisdiction, the system provides the image information and extracted features to the adjacent district office, thereby expanding the smart city integration platform. This paper will contribute basic data to research on the development of a smart city integrated platform.


A Study on the Density Analysis of Multi-objects Using Drone Imaging (드론 영상을 활용한 다중객체의 밀집도 분석 연구)

  • WonSeok Jang;HyunSu Kim;JinMan Park;MiSeon Han;SeongChae Baek;JeJin Park
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.23 no.2 / pp.69-78 / 2024
  • Recently, the use of CCTV to prevent crowd accidents has been promoted, but research is needed to compensate for the spatial limitations of CCTV. In this study, pedestrian density was measured using drone footage, and based on a review of existing literature, a threshold of 6.7 people/m2 was selected as the cutoff risk level for crowd accidents. In addition, we conducted a preliminary study to determine drone parameters and found that the pedestrian recognition rate was high at a drone altitude of 20 meters and a camera angle of 60°. Based on a previous study, we selected a target area with a high concentration of pedestrians and measured pedestrian density, which was found to be 0.27 to 0.30 people/m2. The study shows it is possible to measure risk levels by determining pedestrian densities in target areas using drone images. We believe drone surveillance will be utilized for crowd safety management in the near future.
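The risk check described above reduces to a people count per unit area compared against the 6.7 people/m2 threshold; a minimal sketch (function names are illustrative, not from the paper):

```python
CROWD_RISK_THRESHOLD = 6.7  # people per square meter, from the cited literature

def pedestrian_density(count, area_m2):
    """Density (people/m2) for `count` detected pedestrians in `area_m2`."""
    return count / area_m2

def is_crowd_risk(count, area_m2, threshold=CROWD_RISK_THRESHOLD):
    """True when density reaches the crowd-accident risk cutoff."""
    return pedestrian_density(count, area_m2) >= threshold
```

At the densities measured in the study (0.27 to 0.30 people/m2), this check stays far below the risk cutoff.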

Development of a deep-learning based tunnel incident detection system on CCTVs (딥러닝 기반 터널 영상유고감지 시스템 개발 연구)

  • Shin, Hyu-Soung;Lee, Kyu-Beom;Yim, Min-Jin;Kim, Dong-Gyou
    • Journal of Korean Tunnelling and Underground Space Association / v.19 no.6 / pp.915-936 / 2017
  • In this study, the current status of the Korean hazard mitigation guideline for tunnel operation is summarized. It shows that requirements for CCTV installation have gradually become stricter and that the need for tunnel incident detection systems working in conjunction with in-tunnel CCTVs has greatly increased. Despite this, the mathematical-algorithm-based incident detection systems commonly applied in current tunnel operation show very low detection rates of less than 50%. The putative major reasons are (1) very weak illumination, (2) dust in the tunnel, and (3) the low installation height of CCTVs, about 3.5 m. Therefore, this study attempts to develop a deep-learning-based tunnel incident detection system that is relatively insensitive to very poor visibility conditions. Its theoretical background is given, and validation experiments are undertaken focusing on moving vehicles and persons outside vehicles in tunnels, which are the official major objects to be detected. Two scenarios are set up: (1) training and prediction in the same tunnel, and (2) training in one tunnel and prediction in another. In both cases, detection rates higher than 80% are achieved in prediction mode when the training and prediction periods are close in time, but the rate drops to about 40% when the prediction time is far from the training time and no further training takes place. However, it is believed that the predictability of the AI-based system would be enhanced automatically as further training follows with accumulated CCTV big data, without any revision or recalibration of the incident detection system.

Training Performance Analysis of Semantic Segmentation Deep Learning Model by Progressive Combining Multi-modal Spatial Information Datasets (다중 공간정보 데이터의 점진적 조합에 의한 의미적 분류 딥러닝 모델 학습 성능 분석)

  • Lee, Dae-Geon;Shin, Young-Ha;Lee, Dong-Cheon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.40 no.2 / pp.91-108 / 2022
  • In most cases, optical images have been used as training data for DL (Deep Learning) models for object detection, recognition, identification, classification, semantic segmentation, and instance segmentation. However, the properties of 3D objects in the real world cannot be fully explored with 2D images. One of the major sources of 3D geospatial information is the DSM (Digital Surface Model). In this regard, characteristic information derived from a DSM is effective for analyzing 3D terrain features. In particular, man-made objects such as buildings, which have geometrically distinctive shapes, can be described by geometric elements obtained from 3D geospatial data. The background and motivation of this paper are drawn from the concept of the intrinsic image, which is involved in high-level visual information processing. This paper aims to extract buildings after classifying terrain features by training a DL model with DSM-derived information including slope, aspect, and SRI (Shaded Relief Image). The experiments were carried out using the DSM and label dataset provided by ISPRS (International Society for Photogrammetry and Remote Sensing) for the CNN-based SegNet model. In particular, the experiments focus on combining multi-source information to improve training performance and on the synergistic effect of the DL model. The results demonstrate that buildings were effectively classified and extracted by the proposed approach.
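Slope and aspect, two of the DSM-derived inputs named above, are computed from local elevation differences. A minimal sketch using central differences at an interior cell; the grid spacing, function name, and angle convention are assumptions (GIS tools typically report aspect clockwise from north, which depends on the DSM's row orientation and would require a conversion):

```python
import math

def slope_aspect(dsm, row, col, cell_size=1.0):
    """Slope (degrees) and downslope direction (degrees, counterclockwise
    from the +column axis in grid coordinates) at an interior cell of a
    DSM given as a 2-D list of elevations."""
    dz_dx = (dsm[row][col + 1] - dsm[row][col - 1]) / (2 * cell_size)
    dz_dy = (dsm[row + 1][col] - dsm[row - 1][col]) / (2 * cell_size)
    slope = math.degrees(math.atan(math.hypot(dz_dx, dz_dy)))
    # steepest descent points opposite the gradient
    aspect = math.degrees(math.atan2(-dz_dy, -dz_dx)) % 360
    return slope, aspect
```

Rasters of these two values, together with a shaded relief image, would form per-pixel input channels for the segmentation model.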

Transfer Learning-based Object Detection Algorithm Using YOLO Network (YOLO 네트워크를 활용한 전이학습 기반 객체 탐지 알고리즘)

  • Lee, Donggu;Sun, Young-Ghyu;Kim, Soo-Hyun;Sim, Issac;Lee, Kye-San;Song, Myoung-Nam;Kim, Jin-Young
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.20 no.1 / pp.219-223 / 2020
  • To guarantee a high recognition rate and recognition precision for an AI model, obtaining a large amount of data is essential. In this paper, we propose a transfer-learning-based object detection algorithm that maintains outstanding performance even when the volume of training data is small. We also propose a transfer learning network combining ResNet-50 and the YOLO (You Only Look Once) network. The transfer learning network uses the Leeds Sports Pose dataset to train the network to detect the person who occupies the largest part of each image. Simulation results yield a detection rate of 84% and a detection precision of 97%.
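The two reported figures correspond to standard detection metrics; a small sketch (the counts below are made up for illustration) of how detection rate (recall) and precision would be derived from true positives, false positives, and missed targets:

```python
def detection_metrics(tp, fp, fn):
    """Detection rate (recall) and precision from detection counts:
    tp = correct detections, fp = spurious detections, fn = missed targets."""
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    return recall, precision
```

A precision well above the detection rate, as reported here, means the detector rarely fires falsely but still misses some persons.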

Improved CNN Algorithm for Object Detection in Large Images

  • Yang, Seong Bong;Lee, Soo Jin
    • Journal of the Korea Society of Computer and Information / v.25 no.1 / pp.45-53 / 2020
  • Conventional Convolutional Neural Network (CNN) algorithms have limitations in detecting small objects in large images. In this paper, we propose an improved model based on Region Of Interest (ROI) selection and an image dividing technique. We prepared YOLOv3 and Faster R-CNN algorithms transfer-learned on airfield and aircraft datasets, along with large images for testing. To verify our model, we first selected the airfield area from a large image as the ROI and then divided it into 2^n parts. We then compared the aircraft detection rates by number of divisions. From this comparison, we obtained the best size of divided image piece for efficient small-object detection. As a result, we verified that the improved CNN algorithm can detect small objects in large images.
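One plausible reading of the dividing step above is a regular 2^n x 2^n grid over the ROI; a minimal sketch producing the tile bounding boxes that would each be fed to the detector (names and the exact tiling scheme are assumptions, not from the paper):

```python
def divide_roi(width, height, n):
    """Split a width x height ROI into a 2^n x 2^n grid of tiles,
    returning (left, top, right, bottom) pixel boxes."""
    k = 2 ** n
    boxes = []
    for i in range(k):          # tile rows
        for j in range(k):      # tile columns
            left = j * width // k
            top = i * height // k
            right = (j + 1) * width // k
            bottom = (i + 1) * height // k
            boxes.append((left, top, right, bottom))
    return boxes
```

Sweeping n and comparing aircraft detection rates per tile size is then exactly the comparison the abstract describes.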

A Development of Marine Observation Buoy Monitoring System Using Trail Camera and AtoN AIS (트레일 카메라 및 AIS를 이용한 해양관측부이용 감시시스템의 개발)

  • Gang, Yong-Soo;Wong, Chii-Lok;Hwang, Hun-Gyu;Kang, Seok-Sun;Kim, Hyen-Woo
    • Proceedings of the Korean Institute of Navigation and Port Research Conference / 2018.05a / pp.306-307 / 2018
  • This paper reviews the current domestic and international video surveillance systems and technologies used to protect marine observation buoys and to perform maritime observation, and examines the requirements for safety surveillance systems for offshore public facilities using next-generation maritime communication networks and satellites, along with the state of related technology development at home and abroad. It also covers the development of a surveillance system for marine observation buoys that performs vessel recognition and tracking, and further collision prediction, in order to prevent maritime accidents. The system under development is mounted on a marine observation buoy, operates at low power, and applies a newly developed seawater-resistant trail surveillance camera. In addition, it carries a collision-warning module that uses AIS information and provides an onshore alarm function based on a communication module applying next-generation maritime mobile communication such as LTE-M and satellite M2M networks. Through this, the system secures reliability and collects comprehensive data for recognizing the possibility of maritime accidents with large vessels (such as ship collisions and oil spills) and of facility damage (vandalism) by small vessels, thereby contributing to the safety of critical facilities and the protection of the marine environment by predicting and preventing accidents and disaster situations.
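Collision prediction from AIS tracks is commonly based on closest-point-of-approach geometry; the paper does not specify its algorithm, so the following is a minimal sketch under that assumption, with illustrative names and units:

```python
import math

def cpa_tcpa(own_pos, own_vel, tgt_pos, tgt_vel):
    """Time to closest point of approach (TCPA, seconds) and the
    separation at that time (DCPA, meters) for two vessels assumed
    to hold constant velocity. Positions/velocities are (x, y) tuples
    in meters and m/s, e.g. decoded from AIS position reports."""
    rx, ry = tgt_pos[0] - own_pos[0], tgt_pos[1] - own_pos[1]
    vx, vy = tgt_vel[0] - own_vel[0], tgt_vel[1] - own_vel[1]
    v2 = vx * vx + vy * vy
    if v2 == 0:                       # no relative motion
        return 0.0, math.hypot(rx, ry)
    tcpa = max(0.0, -(rx * vx + ry * vy) / v2)  # clamp: past CPA is no threat
    dcpa = math.hypot(rx + vx * tcpa, ry + vy * tcpa)
    return tcpa, dcpa
```

A warning module would raise the onshore alarm when both DCPA and TCPA fall below configured safety limits for the buoy.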


CNN3D-Based Bus Passenger Prediction Model Using Skeleton Keypoints (Skeleton Keypoints를 활용한 CNN3D 기반의 버스 승객 승하차 예측모델)

  • Jang, Jin;Kim, Soo Hyung
    • Smart Media Journal / v.11 no.3 / pp.90-101 / 2022
  • Buses are a popular means of transportation, so thorough preparation is needed for passenger safety management. However, the current safety systems are insufficient: for example, in 2018 a fatal accident occurred when a bus departed without recognizing an elderly passenger approaching to board. There is a safety system that prevents pinching accidents through sensors on the back-door stairs, but such a system does not prevent accidents that occur during boarding and alighting, like the one above. If the intention of bus passengers to get on and off could be predicted, it would help in developing a safety system to prevent such accidents. However, studies predicting passengers' boarding and alighting intentions are scarce. Therefore, in this paper, we propose a 1×1 CNN3D-based boarding/alighting intention prediction model that uses passengers' skeleton keypoints, extracted via UDP-Pose from the images of a camera mounted on the bus. The proposed model shows approximately 1 to 2% higher accuracy than RNN and LSTM models in predicting passengers' boarding and alighting intentions.
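Before any 3D convolution, the per-frame keypoints must be stacked into a fixed-shape clip tensor; a hypothetical sketch (shapes and names are assumptions, not from the paper) that stacks T frames of J (x, y) keypoints and reports the resulting shape:

```python
def stack_keypoint_clip(frames):
    """Stack per-frame keypoint lists [[(x, y), ...], ...] into a
    nested T x J x 2 structure and return it with its shape tuple."""
    t = len(frames)
    j = len(frames[0])
    if any(len(f) != j for f in frames):
        raise ValueError("all frames must have the same keypoint count")
    clip = [[[float(x), float(y)] for (x, y) in frame] for frame in frames]
    return clip, (t, j, 2)
```

A tensor of this shape (plus a batch axis) is the kind of input a keypoint-based 3D convolutional classifier would consume.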