• Title/Summary/Keyword: learning through the image

Search Results: 925

Detecting Greenhouses from the Planetscope Satellite Imagery Using the YOLO Algorithm (YOLO 알고리즘을 활용한 Planetscope 위성영상 기반 비닐하우스 탐지)

  • Seongsu KIM;Youn-In CHUNG;Yun-Jae CHOUNG
    • Journal of the Korean Association of Geographic Information Studies / v.26 no.4 / pp.27-39 / 2023
  • Detecting greenhouses from remote sensing datasets is useful for identifying illegal agricultural facilities and predicting the agricultural output of greenhouses. This research proposed a methodology for automatically detecting greenhouses from Planetscope satellite imagery acquired over Gimje City using a deep learning technique, through a series of steps. First, multiple fixed-size training images containing greenhouse features were generated from five training Planetscope satellite images. Next, a YOLO (You Only Look Once) model was trained using the generated training images. Finally, greenhouse features were detected from the input Planetscope satellite image. Statistical results showed that 76.4% of the greenhouse features were detected from the input Planetscope satellite imagery by the trained YOLO model. In future research, high-resolution satellite imagery with a spatial resolution of less than 1 m should be used to detect more greenhouse features.
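
A rough sketch of the tile-then-train workflow described above follows. It is a hedged illustration only: the 640-pixel tile size, the file names, and the use of the Ultralytics YOLO package (rather than whichever YOLO variant the authors trained) are assumptions, not details from the paper.

```python
import numpy as np

def tile_scene(scene: np.ndarray, tile: int = 640, stride: int = 640):
    """Cut fixed-size training chips out of a large H x W x C satellite scene."""
    h, w = scene.shape[:2]
    for y in range(0, h - tile + 1, stride):
        for x in range(0, w - tile + 1, stride):
            yield (y, x), scene[y:y + tile, x:x + tile]

# Assumed Ultralytics API for the training/detection steps; the chips and
# YOLO-format greenhouse labels are assumed to be on disk, listed in greenhouses.yaml.
# from ultralytics import YOLO
# model = YOLO("yolov8n.pt")
# model.train(data="greenhouses.yaml", imgsz=640, epochs=100)
# detections = model.predict("gimje_chip_0001.png", conf=0.25)
```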

Proposed TATI Model for Predicting the Traffic Accident Severity (교통사고 심각 정도 예측을 위한 TATI 모델 제안)

  • Choo, Min-Ji;Park, So-Hyun;Park, Young-Ho
    • KIPS Transactions on Software and Data Engineering / v.10 no.8 / pp.301-310 / 2021
  • The TATI model is a Traffic Accident Text to RGB Image model, a methodology proposed in this paper for predicting the severity of traffic accidents. Traffic fatalities are decreasing every year, but they still rank low among OECD member countries. Many studies have been conducted to reduce the death rate from traffic accidents, including steady work on reducing the incidence and mortality rate by predicting accident severity. In this regard, research using statistical models and deep learning models to predict the severity of traffic accidents has recently been active. In this paper, a traffic accident dataset is converted into color images to predict the severity of traffic accidents via CNN models. For performance comparison, we train the proposed model and other models on the same data and compare their prediction results. Through 10 experiments, we compare the accuracy and error range of four deep learning models. Experimental results show that the proposed model achieved the highest accuracy at 0.85 and the second-lowest error range at 0.03, confirming its superior performance.
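
As a rough illustration of the record-to-image conversion the CNN consumes, the sketch below min-max scales a numeric accident record to 0-255 and packs it into a small 3-channel image. The 8x8 size and the feature-to-pixel layout are illustrative assumptions, not the paper's actual TATI encoding.

```python
import numpy as np

def record_to_rgb(features: np.ndarray, lo: np.ndarray, hi: np.ndarray,
                  side: int = 8) -> np.ndarray:
    """Map a 1-D accident-feature vector to a (side, side, 3) uint8 image."""
    scaled = np.clip((features - lo) / (hi - lo + 1e-9), 0.0, 1.0) * 255.0
    pixels = np.zeros(side * side * 3, dtype=np.float32)
    pixels[:scaled.size] = scaled              # zero-pad the unused pixels
    return pixels.reshape(side, side, 3).astype(np.uint8)

# Example: a 12-feature record (values already numeric/encoded) becomes an 8x8 RGB image.
record = np.random.rand(12).astype(np.float32)
image = record_to_rgb(record, lo=np.zeros(12), hi=np.ones(12))
print(image.shape)  # (8, 8, 3)
```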

Weather Recognition Based on 3C-CNN

  • Tan, Ling;Xuan, Dawei;Xia, Jingming;Wang, Chao
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.8 / pp.3567-3582 / 2020
  • Human activities are often affected by weather conditions. Automatic weather recognition is meaningful for traffic alerting, driving assistance, and intelligent traffic. With the boost of deep learning and AI, deep convolutional neural networks (CNN) are utilized to identify weather situations. In this paper, a three-channel convolutional neural network (3C-CNN) model is proposed on the basis of ResNet50. The model extracts global weather features from the whole image through the ResNet50 branch, and extracts sky and ground features from the top and bottom regions through two CNN5 branches. The global features and the local features are then merged by the Concat function. Finally, the weather image is classified by a Softmax classifier and the identification result is output. In addition, a medium-scale dataset containing 6,185 outdoor weather images, named WeatherDataset-6, is established. 3C-CNN is trained and tested on both the Two-class Weather Images dataset and WeatherDataset-6. The experimental results show that 3C-CNN achieves the best results on both datasets, with average recognition accuracy up to 94.35% and 95.81% respectively, which is superior to other classic convolutional neural networks such as AlexNet, VGG16, and ResNet50. With further improvement, the method is also expected to work well for images taken at night.
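
A hedged PyTorch sketch of the three-branch layout described above: a ResNet50 branch sees the whole image, two small branches see the top (sky) and bottom (ground) halves, and the concatenated features are classified. The small-branch layer sizes and the 6-class output are assumptions rather than the paper's exact CNN5 configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class SmallBranch(nn.Module):
    """Stand-in for a CNN5-style branch; the layer sizes are assumptions."""
    def __init__(self, out_dim: int = 256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, out_dim)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

class ThreeChannelCNN(nn.Module):
    def __init__(self, num_classes: int = 6):
        super().__init__()
        self.global_branch = resnet50(weights=None)
        self.global_branch.fc = nn.Identity()      # 2048-d global weather features
        self.sky_branch = SmallBranch()
        self.ground_branch = SmallBranch()
        self.classifier = nn.Linear(2048 + 256 + 256, num_classes)

    def forward(self, x):                          # x: (B, 3, H, W)
        h = x.shape[2]
        top, bottom = x[:, :, : h // 2], x[:, :, h // 2 :]
        feats = torch.cat([self.global_branch(x),
                           self.sky_branch(top),
                           self.ground_branch(bottom)], dim=1)
        return self.classifier(feats)              # softmax is applied in the loss

logits = ThreeChannelCNN()(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 6])
```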

Effectiveness Evaluation of Peer Education Program on Smoking Prevention and Cessation for Elementary School Students (아동 금연 도우미 교육프로그램 개발 및 효과평가)

  • Kim, Young-Bok;Kim, Shin-Woel;Shin, Jun-Ho
    • Journal of agricultural medicine and community health / v.29 no.1 / pp.15-28 / 2004
  • Objectives: This study was performed to evaluate the effectiveness of a peer education program on smoking prevention and cessation for elementary school students. Methods: Data were collected from 60 students in a rural area through self-administered questionnaires. Child leaders participated in the peer education program for 4 weeks to help their friends, parents, and adults in the community quit smoking. Results and Conclusions: Major conclusions were as follows: 1. The peer education program on smoking prevention and cessation for elementary school students reinforced tobacco knowledge and cessation skills, taught communication skills, and improved empowerment. 2. Image of tobacco, intention to smoke in the future, recommendation of smoking cessation, pros of smoking, cons of smoking, and level of assertiveness in the post-test were higher than in the pre-test. 3. There were significant differences by grade in image of tobacco, cons of smoking, and level of assertiveness between the pre-test and the post-test of the peer education program, but intention to smoke in the future, recommendation of smoking cessation, and pros of smoking were not related to the effectiveness of the program. 4. Child leaders for smoking prevention and cessation performed their task with an average of 1.4 persons per student. 5. Participating students were satisfied with the program contents, the usefulness of the educational materials, and parental preference, but not with the usefulness of the task notebook, the learning time, or the lecture room.

Estimation of Sweet Pepper Crop Fresh Weight with Convolutional Neural Network (합성곱 신경망을 이용한 온실 파프리카의 작물 생체중 추정)

  • Moon, Taewon;Park, Junyoung;Son, Jung Eek
    • Journal of Bio-Environment Control / v.29 no.4 / pp.381-387 / 2020
  • Various studies have attempted to estimate and measure the fresh weight of crops; however, no studies have used raw images of sweet peppers to estimate fresh weight. Recently, image processing research using convolutional neural networks (CNN), which can use raw data, has been increasing. In this study, crop fresh weight was estimated by using images of sweet peppers as inputs to a CNN. The experiment was performed in a greenhouse growing sweet pepper (Capsicum annuum L.). The fresh weight, the output of the CNN, was regressed against data collected through destructive investigation. The highest coefficient of determination (R2) of the trained CNN was 0.95. The estimated fresh weight showed a very similar trend to the actual measured values.
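
A minimal sketch of the image-to-fresh-weight regression setup, assuming a ResNet18 backbone, MSE loss, and dummy data; these choices are illustrative and not stated in the abstract.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

model = resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 1)   # single output: fresh weight (g)
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on dummy images and destructively measured weights.
images = torch.randn(4, 3, 224, 224)
measured = torch.tensor([[120.0], [95.5], [143.2], [88.7]])
optimizer.zero_grad()
loss = criterion(model(images), measured)
loss.backward()
optimizer.step()
```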

A Study on Automatically Information Collection of Underground Facility Using R-CNN Techniques (R-CNN 기법을 이용한 지중매설물 제원 정보 자동 추출 연구)

  • Hyunsuk Park;Kiman Hong;Yongsung Cho
    • Journal of the Society of Disaster Information / v.19 no.3 / pp.689-697 / 2023
  • Purpose: The purpose of this study is to automatically extract information on underground facilities using a general-purpose smartphone in the process of applying the mini-trenching method. Method: Datasets for image learning were collected under various conditions such as day and night, height, and angle, and the R-CNN algorithm was used for object detection. Result: The F1-score, which considers precision and recall at the same time, was applied as the performance evaluation index, and the resulting F1-score was 0.76. Conclusion: The results of this study showed that it is possible to extract information on underground buried facilities with a smartphone, but the precision and accuracy of the algorithm need to be improved by securing additional training data and conducting on-site demonstrations.
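
For reference, the reported metric is the harmonic mean of precision and recall; the short sketch below computes it from detection counts. The counts are made up to land near 0.76 and are not the study's actual confusion counts.

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 from true positives, false positives, and false negatives."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

print(round(f1_score(tp=76, fp=28, fn=20), 2))  # 0.76 with these assumed counts
```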

Video Camera Model Identification System Using Deep Learning (딥 러닝을 이용한 비디오 카메라 모델 판별 시스템)

  • Kim, Dong-Hyun;Lee, Soo-Hyeon;Lee, Hae-Yeoun
    • The Journal of Korean Institute of Information Technology / v.17 no.8 / pp.1-9 / 2019
  • With the development of imaging and information communication technology in modern society, image acquisition and mass production technology have developed rapidly. However, crimes using these technologies have increased, and forensic studies are being conducted to prevent them. Identification techniques for image acquisition devices have been widely studied, but the field has been limited to still images. In this paper, a camera model identification technique for video, rather than still images, is proposed. We analyzed video frames using a model trained with images. Through training and analysis considering the frame characteristics of video, we showed the superiority of a model using P frames. We then presented a video camera model identification system by applying a majority-based decision algorithm. In experiments with 5 video camera models, we obtained a maximum of 96.18% accuracy for per-frame identification, and the proposed video camera model identification system achieved a 100% identification rate for each camera model.
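
The majority-based decision step can be sketched as follows: each extracted P frame receives a per-frame camera-model prediction from the image-trained CNN (stubbed out here), and the video-level label is the most frequent one. The model names in the example are placeholders, not the five camera models used in the paper.

```python
from collections import Counter

def identify_video(frame_predictions: list[str]) -> str:
    """Return the camera model predicted for the largest number of frames."""
    return Counter(frame_predictions).most_common(1)[0][0]

# Placeholder per-P-frame predictions from an image-trained classifier.
predictions = ["CameraA", "CameraA", "CameraB", "CameraA", "CameraC"]
print(identify_video(predictions))  # CameraA
```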

Augmented Reality-based Billiards Training System (AR을 이용한 당구 학습 시스템)

  • Kang, Seung-Woo;Choi, Kang-Sun
    • Journal of Practical Engineering Education / v.12 no.2 / pp.309-319 / 2020
  • Billiards is a fun and popular sport, but both route planning and cueing prevent beginners from becoming skillful. A beginner in billiards requires constant concentration and training to reach the right level, but without the right motivating factor, it is easy to lose interest. This study aims to induce interest in billiards and accelerate learning by utilizing billiard path prediction and visualization on a highly immersive augmented reality platform that combines a stereo camera and a VR headset. For implementation, the placement of billiard balls is recognized through OpenCV image processing, and physics simulation, path search, and visualization are performed in the Unity Engine. As a result, accurate path prediction can be achieved. This makes it possible for beginners to reduce the psychological burden of planning the path, focus only on accurate cueing, and gradually increase their billiard proficiency by getting used to the paths suggested by the algorithm over time. We confirm that the proposed AR billiards system is remarkably effective as a learning assistant tool.
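
A hedged sketch of the ball-recognition step: detecting circular balls in a table image with OpenCV's Hough circle transform, yielding (x, y, radius) positions that a physics/path module (Unity, in this system) could consume. The parameters and the synthetic test image are illustrative assumptions, not the authors' pipeline.

```python
import cv2
import numpy as np

def detect_balls(frame_bgr: np.ndarray):
    """Return detected circles as [x, y, radius] lists."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=30,
                               param1=100, param2=30, minRadius=10, maxRadius=40)
    return [] if circles is None else np.round(circles[0]).astype(int).tolist()

# Synthetic green "table" with two bright balls for a quick check.
table = np.full((400, 800, 3), (30, 100, 30), dtype=np.uint8)
cv2.circle(table, (200, 200), 20, (255, 255, 255), -1)
cv2.circle(table, (500, 150), 20, (0, 0, 200), -1)
print(detect_balls(table))  # e.g. [[200, 200, 20], [500, 150, 20]]
```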

Density map estimation based on deep-learning for pest control drone optimization (드론 방제의 최적화를 위한 딥러닝 기반의 밀도맵 추정)

  • Baek-gyeom Seong;Xiongzhe Han;Seung-hwa Yu;Chun-gu Lee;Yeongho Kang;Hyun Ho Woo;Hunsuk Lee;Dae-Hyun Lee
    • Journal of Drive and Control / v.21 no.2 / pp.53-64 / 2024
  • Global population growth has resulted in an increased demand for food production. Simultaneously, aging rural communities have led to a decrease in the workforce, thereby increasing the demand for automation in agriculture. Drones are particularly useful in the field of unmanned pest control. However, the current method of uniform spraying leads to environmental damage due to overuse of pesticides and drift by wind. To address this issue, it is necessary to enhance spraying performance through precise performance evaluation. Therefore, as a foundational study aimed at optimizing drone-based pest control technologies, this research evaluated water-sensitive paper (WSP) via density map estimation using a convolutional neural network (CNN) with an encoder-decoder structure. To achieve more accurate estimation, this study implemented multi-task learning, incorporating an additional classifier for image segmentation alongside the density map estimation classifier. The proposed model achieved an R-squared (R2) of 0.976 for coverage area on the evaluation dataset, demonstrating satisfactory performance in evaluating WSP at various density levels. Further research is needed to improve the accuracy of spray result estimation and to develop a real-time, in-field assessment technology.
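
A minimal PyTorch sketch of the multi-task idea, assuming a small shared encoder-decoder with one head regressing the droplet density map and one predicting a WSP segmentation mask; the layer sizes and loss setup are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MultiTaskDensityNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
        )
        self.density_head = nn.Conv2d(16, 1, 1)       # droplet density map
        self.segmentation_head = nn.Conv2d(16, 1, 1)  # WSP-vs-background mask

    def forward(self, x):
        feats = self.decoder(self.encoder(x))
        return self.density_head(feats), torch.sigmoid(self.segmentation_head(feats))

density, mask = MultiTaskDensityNet()(torch.randn(1, 3, 128, 128))
print(density.shape, mask.shape)  # both torch.Size([1, 1, 128, 128])
```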

Object Detection Based on Deep Learning Model for Two Stage Tracking with Pest Behavior Patterns in Soybean (Glycine max (L.) Merr.)

  • Yu-Hyeon Park;Junyong Song;Sang-Gyu Kim;Tae-Hwan Jun
    • Proceedings of the Korean Society of Crop Science Conference / 2022.10a / pp.89-89 / 2022
  • Soybean (Glycine max (L.) Merr.) is a representative food resource. To preserve the integrity of soybean, it is necessary to protect soybean yield and seed quality from threats of various pests and diseases. Riptortus pedestris is a well-known insect pest that causes the greatest loss of soybean yield in South Korea. This pest not only directly reduces yields but also causes disorders and diseases in plant growth. Unfortunately, no resistant soybean resources have been reported. Therefore, it is necessary to identify the distribution and movement of Riptortus pedestris at an early stage to reduce the damage caused by the pest. Conventionally, the diagnosis of agronomic traits related to pest outbreaks has been performed by the human eye. However, due to the subjectivity and impermanence of human vision, this is time-consuming, labor-intensive, and requires the assistance of specialists. Therefore, the responses and behavior patterns of Riptortus pedestris to the scent of mixture R were visualized with a 3D model from the perspective of artificial intelligence. The movement patterns of Riptortus pedestris were analyzed using time-series image data, and classification was performed through visual analysis based on a deep learning model. In object tracking implemented with a YOLO-series model, the movement paths of the pests show a negative reaction to mixture R in the video scenes. As a result of 3D modeling using the x, y, and z axes of the tracked objects, 80% of the subjects showed behavioral patterns consistent with the mixture R treatment. These studies are also being conducted in soybean fields, and applying a pest control platform at the early growth stage will make it possible to preserve soybean yield.
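
One hedged way to turn tracked coordinates into an avoidance statistic is sketched below: for each insect's time series of (x, y, z) positions, check whether its distance from the mixture-R source tends to increase over the clip. The source position and drift threshold are illustrative assumptions, not values from the study.

```python
import numpy as np

def avoids_source(track_xyz: np.ndarray, source_xyz: np.ndarray,
                  min_drift: float = 5.0) -> bool:
    """True if the mean distance to the source grows by more than min_drift."""
    dists = np.linalg.norm(track_xyz - source_xyz, axis=1)
    half = len(dists) // 2
    return dists[half:].mean() - dists[:half].mean() > min_drift

# Illustrative trajectory drifting away from an assumed source at the origin.
t = np.arange(60, dtype=float)
track = np.stack([t * 0.5, t * 0.3, np.zeros_like(t)], axis=1)  # tracked (x, y, z)
print(avoids_source(track, source_xyz=np.zeros(3)))  # True
```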
