• Title/Summary/Keyword: Deep Learning System


Calculated Damage of Italian Ryegrass in Abnormal Climate Based World Meteorological Organization Approach Using Machine Learning

  • Jae Seong Choi;Ji Yung Kim;Moonju Kim;Kyung Il Sung;Byong Wan Kim
    • Journal of The Korean Society of Grassland and Forage Science
    • /
    • v.43 no.3
    • /
    • pp.190-198
    • /
    • 2023
  • This study was conducted to calculate the damage to Italian ryegrass (IRG) caused by abnormal climate using machine learning, and to present the damage on a map. A total of 1,384 IRG records were collected. The climate data were collected from the Korea Meteorological Administration's open meteorological data portal. A machine learning model called xDeepFM was used to detect IRG damage. The damage was calculated by machine learning using climate data from the Automated Synoptic Observing System (95 sites). Damage was defined as the difference between the dry matter yield under normal climate (DMYnormal) and under abnormal climate (DMYabnormal). The normal climate was set as the 40 years of climate data corresponding to the years of the IRG data (1986~2020). The level of abnormal climate was set as a multiple of the standard deviation, applying the World Meteorological Organization (WMO) standard. DMYnormal ranged from 5,678 to 15,188 kg/ha. The damage to IRG differed by region and by level of abnormal climate, ranging from -1,380 to 1,176 kg/ha for abnormal temperature, -3 to 2,465 kg/ha for abnormal precipitation, and -830 to 962 kg/ha for abnormal wind speed. The maximum damage was 1,176 kg/ha when the abnormal temperature was at the -2 level (+1.04℃), 2,465 kg/ha when the abnormal precipitation was at all levels, and 962 kg/ha when the abnormal wind speed was at the -2 level (+1.60 m/s). The damage calculated through the WMO method was presented as a map using QGIS. Some areas were left blank because no climate data were available there. To calculate the damage in these blank areas, the automatic weather system (AWS), which provides data from more sites than the automated synoptic observing system (ASOS), could be used.
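The core calculation in the abstract, damage as the difference between normal and abnormal dry matter yield, with abnormality graded in multiples of the standard deviation per the WMO convention, can be sketched as follows. The threshold logic and function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def abnormal_level(value, normal_series):
    """Grade a climate observation as a signed multiple of the standard
    deviation of the 40-year normal series (WMO-style levels).
    A hypothetical sketch: the paper's exact level boundaries may differ."""
    mu = np.mean(normal_series)
    sigma = np.std(normal_series)
    z = (value - mu) / sigma
    return int(np.trunc(z))  # e.g. z = 2.5 -> level +2, z = -2.5 -> level -2

def damage(dmy_normal, dmy_abnormal):
    """Damage (kg/ha) = DMYnormal - DMYabnormal; positive means yield loss."""
    return dmy_normal - dmy_abnormal
```

With a 40-year series of mean 1 and standard deviation 1, a value of 3.5 falls at level +2, and a yield drop from 10,000 to 9,000 kg/ha gives a damage of 1,000 kg/ha.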

Individual Pig Detection Using Kinect Depth Information and Convolutional Neural Network (키넥트 깊이 정보와 컨볼루션 신경망을 이용한 개별 돼지의 탐지)

  • Lee, Junhee;Lee, Jonguk;Park, Daihee;Chung, Yongwha
    • The Journal of the Korea Contents Association
    • /
    • v.18 no.2
    • /
    • pp.1-10
    • /
    • 2018
  • Aggression among pigs adversely affects economic returns and animal welfare in intensive pigsties. Recently, some studies have applied information technology to livestock management systems to minimize the damage resulting from such anomalies. Nonetheless, detecting each pig in a crowded pigsty is still a challenging problem. In this paper, we propose a new Kinect camera and deep learning-based monitoring system for the detection of individual pigs. The proposed system is characterized as follows. 1) The background subtraction method and a depth threshold are used to detect only standing pigs in the Kinect depth image. 2) The standing pigs are detected using YOLO (You Only Look Once), one of the fastest and most accurate deep learning detection models. Our experimental results show that this method is effective for detecting individual pigs in real time in terms of both cost-effectiveness (using a low-cost Kinect depth sensor) and accuracy (an average detection accuracy of 99.40%).
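The first stage described above, background subtraction plus a depth threshold to keep only standing pigs, can be sketched as below. The 250 mm height threshold is a hypothetical value, not taken from the paper.

```python
import numpy as np

def standing_pig_mask(depth_frame, background_depth, height_thresh_mm=250):
    """Background subtraction on Kinect depth data: pixels that are at
    least `height_thresh_mm` closer to the overhead camera than the
    empty-pen background are treated as standing pigs.
    The threshold is an assumed value for illustration."""
    # Cast to signed ints so the subtraction cannot underflow uint16 depth.
    diff = background_depth.astype(np.int32) - depth_frame.astype(np.int32)
    return diff >= height_thresh_mm
```

The resulting boolean mask would then be passed to the YOLO detector as the region of interest.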

Leakage Prevention System of Mobile Data using Object Recognition and Beacon (사물인식과 비콘을 활용한 모바일 내부정보 유출방지 시스템)

  • Chae, Geonhui;Choi, Seongmin;Seol, Jihwan;Lee, Jaeheung
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.18 no.5
    • /
    • pp.17-23
    • /
    • 2018
  • The rapid development of mobile technology has increased the use of mobile devices, and the possibility of security incidents is also increasing. Information leakage through photos is the most representative case. Previous methods for preventing this have the disadvantage that photos cannot be taken for other, legitimate purposes. In this paper, we design and implement a system to prevent information leakage through photos using object recognition and beacons. The system inspects pictures through deep learning-based object recognition and verifies whether security policies are violated. In addition, the location of the mobile device is identified through beacons and the appropriate rules are applied. A web application for administrators allows photo-taking rules to be set per location. As soon as a user takes a photo, the system applies the rules appropriate to that location, automatically detecting photos that do not conform to security policies.
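The policy check described above, matching the labels returned by the object recognizer against the rules for the beacon-identified location, can be sketched as follows. The beacon IDs, rule table, and forbidden labels are all hypothetical examples, not the paper's configuration.

```python
# Hypothetical location-based photo policy: the nearest beacon selects a
# rule set, and the detector's labels are checked against it.
RULES = {  # beacon_id -> object labels forbidden in photos at that location
    "lab-3f": {"monitor", "document"},
    "lobby": set(),
}

def violates_policy(beacon_id, detected_labels):
    """Return True if any recognized object is forbidden at this location.
    Unknown beacons default to no restrictions in this sketch."""
    forbidden = RULES.get(beacon_id, set())
    return bool(forbidden & set(detected_labels))
```

A photo of a monitor taken near the "lab-3f" beacon would be flagged, while the same photo in the lobby would pass.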

A Research on V2I-based Accident Prevention System for the Prevention of Unexpected Accident of Autonomous Vehicle (자율주행 차량의 돌발사고 방지를 위한 V2I 기반의 사고 방지체계 연구)

  • Han, SangYong;Kim, Myeong-jun;Kang, Dongwan;Baek, Sunwoo;Shin, Hee-seok;Kim, Jungha
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.20 no.3
    • /
    • pp.86-99
    • /
    • 2021
  • This research proposes an accident prevention system that uses V2I communication to prevent collision accidents that can occur in blind spots such as intersections or school zones. Vision and LiDAR sensors installed in the intersection infrastructure recognize objects and warn vehicles at risk of accidents, preventing accidents in advance. The system uses deep learning-based YOLOv4 to recognize objects entering the intersection, and uses Manhattan distance values from the LiDAR sensor to calculate the expected collision time and a braking-distance weight, securing a safe distance. The V2I communication uses ROS (Robot Operating System) messaging to prevent accidents in advance by conveying various information to the vehicle, including the class, distance, and speed of entering objects, in addition to the collision warning.
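The two distance computations named above, a Manhattan (L1) distance from LiDAR point positions and an expected collision time derived from it, can be sketched as below. This is a simplified illustration and omits the paper's braking-distance weighting.

```python
def manhattan_distance(p, q):
    """Manhattan (L1) distance between two planar points, in meters."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def time_to_collision(dist_m, closing_speed_mps):
    """Expected collision time from distance and closing speed.
    A simplified sketch; the paper additionally weights braking distance."""
    if closing_speed_mps <= 0:
        return float("inf")  # not closing: no collision expected
    return dist_m / closing_speed_mps
```

For example, an object 14 m away (L1) closing at 7 m/s yields an expected collision time of 2 s, which the infrastructure would broadcast to approaching vehicles over ROS.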

Video Camera Model Identification System Using Deep Learning (딥 러닝을 이용한 비디오 카메라 모델 판별 시스템)

  • Kim, Dong-Hyun;Lee, Soo-Hyeon;Lee, Hae-Yeoun
    • The Journal of Korean Institute of Information Technology
    • /
    • v.17 no.8
    • /
    • pp.1-9
    • /
    • 2019
  • With the development of imaging and information communication technology in modern society, image acquisition and mass production technologies have developed rapidly. However, crimes exploiting these technologies have increased, and forensic studies are conducted to prevent them. Identification techniques for image acquisition devices have been widely studied, but the field has been limited to still images. In this paper, a camera model identification technique for video, rather than still images, is proposed. We analyzed video frames using a model trained on images. Through training and analysis that considered the frame characteristics of video, we showed the superiority of the model using P frames. We then present a video camera model identification system that applies a majority-based decision algorithm. In an experiment using 5 video camera models, we obtained a maximum accuracy of 96.18% for per-frame identification, and the proposed video camera model identification system achieved a 100% identification rate for each camera model.
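The majority-based decision step described above, aggregating per-frame predictions into one verdict per video, can be sketched as follows; the function name is an illustrative assumption.

```python
from collections import Counter

def identify_camera_model(frame_predictions):
    """Majority-based decision: the camera model predicted most often
    across a video's analyzed (e.g. P) frames is the final answer."""
    votes = Counter(frame_predictions)
    model, _count = votes.most_common(1)[0]
    return model
```

This is why per-frame accuracy of 96.18% can still yield a 100% per-video identification rate: occasional frame-level errors are outvoted.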

Mask Wearing Detection System using Deep Learning (딥러닝을 이용한 마스크 착용 여부 검사 시스템)

  • Nam, Chung-hyeon;Nam, Eun-jeong;Jang, Kyung-Sik
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.25 no.1
    • /
    • pp.44-49
    • /
    • 2021
  • Recently, due to COVID-19, studies applying neural networks to automatic mask-wearing detection systems have become popular. Such systems use 1-stage or 2-stage detection methods, and when data cannot be collected in sufficient quantity, pretrained neural network models are fine-tuned. In this paper, the system consists of a 2-stage detection method comprising an MTCNN model for face recognition and a ResNet model for mask detection. The mask detector was evaluated with five ResNet variants to improve accuracy and fps in various environments. The training data comprised 17,217 images collected using a web crawler; for inference, 1,913 images and two one-minute videos were used. The experiment showed a high accuracy of 96.39% for images and 92.98% for video, and the inference speed for video was 10.78 fps.
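The 2-stage structure described above, a face detector cropping candidates for a mask classifier, can be sketched as below. `detect_faces` and `classify_mask` are hypothetical stand-ins for the trained MTCNN and ResNet models.

```python
def check_mask_wearing(image, detect_faces, classify_mask):
    """2-stage detection sketch: stage 1 (MTCNN-style) returns face
    boxes; stage 2 (ResNet-style) labels each crop, e.g. 'mask' or
    'no_mask'. Returns a list of (box, label) pairs.
    Both model callables are assumed interfaces, not the paper's API."""
    results = []
    for box in detect_faces(image):
        x1, y1, x2, y2 = box
        # Crop the face region (image as a row-major 2-D array of pixels).
        crop = [row[x1:x2] for row in image[y1:y2]]
        results.append((box, classify_mask(crop)))
    return results
```

The design choice of the 2-stage method is visible here: the classifier only ever sees tight face crops, which is what makes fine-tuning on modest crawled data practical.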

A Systematic Review of Toxicological Studies to Identify the Association between Environmental Diseases and Environmental Factors (환경성질환과 환경유해인자의 연관성을 규명하기 위한 독성 연구 고찰)

  • Ka, Yujin;Ji, Kyunghee
    • Journal of Environmental Health Sciences
    • /
    • v.47 no.6
    • /
    • pp.505-512
    • /
    • 2021
  • Background: The occurrence of environmental diseases is known to be associated with chronic exposure to toxic chemicals, including waterborne contaminants, air/indoor pollutants, asbestos, ingredients in humidifier disinfectants, etc. Objectives: In this study, we reviewed toxicological studies relating the environmental diseases defined by the Environmental Health Act in Korea to toxic chemicals. We also suggest a direction for the future toxicological research needed for the prevention and management of environmental diseases. Methods: Trends in previous studies related to environmental diseases were investigated through PubMed and Web of Science. A detailed review was provided of toxicological studies related to humidifier disinfectants. We identified adverse outcome pathways (AOPs) that can be linked to the induction of environmental diseases, and proposed a chemical screening system that uses AOPs, chemical toxicity big data, and deep learning models to select chemicals that induce environmental diseases. Results: Research on chemical toxicity is increasing every year, but establishing a clear causal relationship between chemical exposure and the occurrence of environmental diseases remains difficult. It is necessary to develop various exposure and effect biomarkers related to disease occurrence and to conduct toxicokinetic studies. A novel chemical screening system that uses AOPs and chemical toxicity big data could be useful for selecting chemicals that cause environmental diseases. Conclusions: From a toxicological point of view, developing AOPs related to environmental diseases and a deep learning-based chemical screening system will contribute to preventing environmental diseases in advance.

Deep Learning-based Text Summarization Model for Explainable Personalized Movie Recommendation Service (설명 가능한 개인화 영화 추천 서비스를 위한 딥러닝 기반 텍스트 요약 모델)

  • Chen, Biyao;Kang, KyungMo;Kim, JaeKyeong
    • Journal of Information Technology Services
    • /
    • v.21 no.2
    • /
    • pp.109-126
    • /
    • 2022
  • The number and variety of products and services offered by companies have increased dramatically, providing customers with more choices to meet their needs. As a solution to this information overload problem, providing services tailored to individuals has become increasingly important, and personalized recommender systems have been widely studied and used in both academia and industry. Existing recommender systems face important problems in practical applications, the most important being that they cannot clearly explain why they recommend particular products. In recent years, researchers have found that explanations in recommender systems can be very useful: users generally show higher conversion rates, satisfaction, and trust in a recommender system when it explains why particular items are recommended. Therefore, this study presents a methodology for providing an explanatory function in a recommender system using the review texts left by users. The basic idea is not to use all of a user's reviews, but to provide, as the explanation accompanying a recommended item, a summary built only from the reviews left by the similar users (neighbors) involved in recommending that item. To achieve this goal, this study generates a product recommendation list using user-based collaborative filtering, combines the reviews left by neighboring users for each product, and builds a model that applies deep learning-based text summarization methods from natural language processing. Using the IMDb movie database, the text reviews of all of a target user's neighbors are collected and summarized to present descriptions of the recommended movies. Among the several available text summarization methods, this study evaluates whether the review summaries are well produced by training the sequence-to-sequence + attention model, a representative abstractive summarization method, and the BertSum model, an extractive summarization model.
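The user-based collaborative filtering step described above, finding similar neighbors and scoring unrated items by similarity-weighted neighbor ratings, can be sketched as follows. The rating vectors and parameter `k` are illustrative assumptions; in the paper, the neighbors' reviews of each recommended item would then be summarized as the explanation.

```python
import math

def cosine_sim(u, v):
    """Cosine similarity between two users' rating vectors (0 = unrated)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def recommend(target, others, k=2):
    """User-based CF sketch: score the target user's unrated items by the
    similarity-weighted ratings of the k most similar neighbors, and
    return item indices sorted by predicted score (highest first)."""
    neighbors = sorted(others, key=lambda o: cosine_sim(target, o), reverse=True)[:k]
    scores = {}
    for i, r in enumerate(target):
        if r == 0:  # only predict for items the target has not rated
            num = sum(cosine_sim(target, o) * o[i] for o in neighbors)
            den = sum(cosine_sim(target, o) for o in neighbors)
            scores[i] = num / den if den else 0.0
    return sorted(scores, key=scores.get, reverse=True)
```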

Design and Implementation of Human and Object Classification System Using FMCW Radar Sensor (FMCW 레이다 센서 기반 사람과 사물 분류 시스템 설계 및 구현)

  • Sim, Yunsung;Song, Seungjun;Jang, Seonyoung;Jung, Yunho
    • Journal of IKEEE
    • /
    • v.26 no.3
    • /
    • pp.364-372
    • /
    • 2022
  • This paper presents the design and implementation of a human and object classification system utilizing a frequency modulated continuous wave (FMCW) radar sensor. Such a system requires radar signal processing for multi-target detection and deep learning for classifying humans and objects. Since deep learning requires a great amount of computation and data processing, a lightweight model is essential. Therefore, a binary neural network (BNN) structure was adopted, performing the convolutional neural network (CNN) computations with binary values. In addition, for real-time operation, a hardware accelerator was implemented and verified on an FPGA platform. The performance evaluation confirmed a multi-target classification accuracy of 90.5%, a 96.87% reduction in memory usage compared to a CNN, and a run time of 5 ms.
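The reason a BNN is so much lighter than a CNN is that its {-1, +1} weights and activations can be packed into bitmasks, turning each dot product into an XNOR plus a popcount, which is what the FPGA accelerator exploits. A minimal sketch of that binarized dot product, assuming bit = 1 encodes +1 and bit = 0 encodes -1 (an illustration of the general XNOR-popcount technique, not the paper's hardware):

```python
def binary_dot(w_bits, x_bits, n):
    """Dot product of two length-n vectors over {-1, +1}, each packed
    into an integer bitmask (bit = 1 means +1). XNOR counts positions
    where the signs match; result = 2 * matches - n."""
    xnor = ~(w_bits ^ x_bits) & ((1 << n) - 1)  # 1 where signs agree
    matches = bin(xnor).count("1")              # popcount
    return 2 * matches - n
```

For example, w = 0b1011 and x = 0b1001 encode (+1, +1, -1, +1) and (+1, -1, -1, +1); three positions agree, so the dot product is 2·3 - 4 = 2, computed without a single multiplication.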

Development of a Web Platform System for Worker Protection using EEG Emotion Classification (뇌파 기반 감정 분류를 활용한 작업자 보호를 위한 웹 플랫폼 시스템 개발)

  • Ssang-Hee Seo
    • Journal of Internet of Things and Convergence
    • /
    • v.9 no.6
    • /
    • pp.37-44
    • /
    • 2023
  • As a primary technology of Industry 4.0, human-robot collaboration (HRC) requires additional measures to ensure worker safety. Previous studies on avoiding collisions between collaborative robots and workers mainly detect collisions based on sensors and cameras attached to the robot. This method requires complex algorithms to continuously track robots, people, and objects and has the disadvantage of not being able to respond quickly to changes in the work environment. The present study was conducted to implement a web-based platform that manages collaborative robots by recognizing the emotions of workers - specifically their perception of danger - in the collaborative process. To this end, we developed a web-based application that collects and stores emotion-related brain waves via a wearable device; a deep-learning model that extracts and classifies the characteristics of neutral, positive, and negative emotions; and an Internet-of-things (IoT) interface program that controls motor operation according to classified emotions. We conducted a comparative analysis of our system's performance using a public open dataset and a dataset collected through actual measurement, achieving validation accuracies of 96.8% and 70.7%, respectively.
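The final IoT control step described above, mapping the classified emotion to a motor command, can be sketched as below. The label set follows the paper (neutral, positive, negative), but the command names and fail-safe behavior are hypothetical assumptions, not the paper's protocol.

```python
# Hypothetical mapping from the EEG classifier's output to a
# collaborative-robot motor command; commands are illustrative.
EMOTION_TO_COMMAND = {
    "negative": "stop",  # worker perceives danger -> halt the robot
    "neutral": "run",
    "positive": "run",
}

def motor_command(emotion_label):
    """Map a classified emotion to a motor command.
    Unknown labels fail safe to 'stop' (an assumed design choice)."""
    return EMOTION_TO_COMMAND.get(emotion_label, "stop")
```

Failing safe on unrecognized labels matches the system's purpose: when the classifier's output is unreliable, the worker-protective action is to stop the motor.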