• Title/Summary/Keyword: 합성곱 신경망 모델 (convolutional neural network model)

Search Results: 312

A Study on the Artificial Intelligence-Based Soybean Growth Analysis Method (인공지능 기반 콩 생장분석 방법 연구)

  • Moon-Seok Jeon;Yeongtae Kim;Yuseok Jeong;Hyojun Bae;Chaewon Lee;Song Lim Kim;Inchan Choi
    • Journal of Korea Society of Industrial Information Systems / v.28 no.5 / pp.1-14 / 2023
  • Soybeans are one of the world's top five staple crops and a major source of plant-based protein. Because they are susceptible to climate change, which can significantly impact grain production, the National Agricultural Science Institute is conducting research on crop phenotypes through growth analysis of various soybean varieties. While the capture of growth-progression photos of soybeans is automated, the verification, recording, and analysis of growth stages are currently done manually. In this paper, we designed and trained a YOLOv5s model to detect soybean leaf objects in image data of soybean plants and a Convolutional Neural Network (CNN) model to judge the unfolding status of the detected leaves. We combined these two models and implemented an algorithm that distinguishes leaf layers based on the coordinates of the detected leaves. As a result, we developed a program that takes time-series data of soybeans as input and performs growth analysis. The program can accurately determine the growth stages of soybeans up to the second or third compound leaf.
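
A minimal sketch of a two-stage pipeline of the kind described above: a YOLOv5s detector (loaded here from the ultralytics/yolov5 torch.hub entry point) finds leaf boxes, a separate CNN judges whether each leaf is unfolded, and leaves are grouped into layers by vertical position. The `unfold_cnn.pt` classifier, the grouping threshold, and the overall flow are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical two-stage pipeline: YOLOv5s leaf detection followed by a small
# CNN that judges whether each detected leaf is unfolded, then a simple
# y-coordinate grouping to separate leaf layers. Names and thresholds are
# illustrative assumptions, not the paper's actual algorithm.
import torch
import torchvision.transforms as T
from PIL import Image

detector = torch.hub.load("ultralytics/yolov5", "yolov5s")  # pretrained detector
unfold_classifier = torch.load("unfold_cnn.pt")             # hypothetical binary CNN
unfold_classifier.eval()

to_tensor = T.Compose([T.Resize((64, 64)), T.ToTensor()])

def analyze(image_path, layer_gap=80):
    img = Image.open(image_path).convert("RGB")
    det = detector(img).xyxy[0]            # rows: x1, y1, x2, y2, conf, class
    leaves = []
    for x1, y1, x2, y2, conf, cls in det.tolist():
        crop = img.crop((int(x1), int(y1), int(x2), int(y2)))
        with torch.no_grad():
            unfolded = unfold_classifier(to_tensor(crop).unsqueeze(0)).argmax(1).item()
        leaves.append({"y": (y1 + y2) / 2, "unfolded": bool(unfolded)})
    # Group leaves into layers by vertical position (top of the image = upper layer).
    leaves.sort(key=lambda l: l["y"])
    layers, current = [], []
    for leaf in leaves:
        if current and leaf["y"] - current[-1]["y"] > layer_gap:
            layers.append(current)
            current = []
        current.append(leaf)
    if current:
        layers.append(current)
    return layers
```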

Deep Learning for Automatic Change Detection: Real-Time Image Analysis for Cherry Blossom State Classification (자동 변화 감지를 위한 딥러닝: 벚꽃 상태 분류를 위한 실시간 이미지 분석)

  • Seung-Bo Park;Min-Jun Kim;Guen-Mi Kim;Jeong-Tae Kim;Da-Ye Kim;Dong-Gyun Ham
    • Proceedings of the Korean Society of Computer Information Conference / 2023.07a / pp.493-494 / 2023
  • This paper introduces a study that classifies the state of cherry blossoms (blooming, full bloom, falling) in real time using video data of cherry trees. The goal of the study is to classify cherry-tree images acquired in real time according to blossom state with a pre-trained CNN-based image classification model. A CNN model was trained on roughly 1,000 cherry-tree images, and how accurately it classifies the blossom state of new images was evaluated. The data were split into training and validation sets and managed in separate folders for each state (blooming, full bloom, falling). In addition, transfer learning using ResNet50 weights pre-trained on the ImageNet dataset was applied to make training more efficient and improve model performance.

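A minimal transfer-learning sketch along the lines described in the abstract: an ImageNet-pretrained ResNet50 backbone is frozen and a new three-class head is trained on images organized in per-state folders. The directory layout, hyperparameters, and training loop are assumptions for illustration.

```python
# Transfer learning with an ImageNet-pretrained ResNet50 for three blossom
# states (blooming, full bloom, falling). Directory names and hyperparameters
# are assumptions for illustration.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_ds = datasets.ImageFolder("cherry/train", transform=tfm)  # one folder per state
val_ds = datasets.ImageFolder("cherry/val", transform=tfm)
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True)
val_dl = DataLoader(val_ds, batch_size=32)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for p in model.parameters():          # freeze the pretrained backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, len(train_ds.classes))  # new 3-way head

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for epoch in range(5):
    model.train()
    for x, y in train_dl:
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    model.eval()
    with torch.no_grad():
        correct = sum((model(x).argmax(1) == y).sum().item() for x, y in val_dl)
    print(f"epoch {epoch}: val acc {correct / len(val_ds):.3f}")
```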

A Study on Similar Trademark Search Model Using Convolutional Neural Networks (합성곱 신경망(Convolutional Neural Network)을 활용한 지능형 유사상표 검색 모형 개발)

  • Yoon, Jae-Woong;Lee, Suk-Jun;Song, Chil-Yong;Kim, Yeon-Sik;Jung, Mi-Young;Jeong, Sang-Il
    • Management & Information Systems Review / v.38 no.3 / pp.55-80 / 2019
  • Recently, many companies have improved their management performance by building strong brand value protected by trademark rights. However, as the online commerce market grows, infringement of trademark rights is increasing. According to various studies and reports, cases of foreign and domestic companies infringing trademark rights have risen. Because the manpower and cost required to protect a trademark are substantial, small and medium-sized enterprises (SMEs) often cannot conduct the preliminary investigations needed to protect their trademark rights. Moreover, since no trademark image search service exists, many domestic companies must investigate huge numbers of trademarks manually when conducting preliminary investigations to protect their rights. Therefore, we developed an intelligent similar-trademark search model to reduce the manpower and cost of preliminary investigation. To measure the performance of the model developed in this study, test data selected by intellectual property experts were used, and ResNet V1 101 achieved the highest performance. The significance of this study is as follows. The experimental results empirically demonstrate that image classification algorithms show high performance not only in object recognition but also in image retrieval. Since the model developed in this study was trained on actual trademark image data, it is expected to be applicable in real industrial environments.
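
The abstract describes retrieving visually similar trademarks with a CNN. A minimal sketch of that general idea, using a torchvision ResNet-101 (with its classification head removed) as a stand-in feature extractor and cosine similarity for ranking; the file paths and indexing scheme are assumptions, not the paper's pipeline.

```python
# Similar-trademark retrieval sketch: embed trademark images with a pretrained
# ResNet-101 (final classification layer removed) and rank registered marks by
# cosine similarity to a query mark. Paths are illustrative assumptions.
import glob
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
backbone = models.resnet101(weights=models.ResNet101_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()   # keep the 2048-d pooled feature
backbone.eval()

@torch.no_grad()
def embed(path):
    x = tfm(Image.open(path).convert("RGB")).unsqueeze(0)
    return F.normalize(backbone(x), dim=1).squeeze(0)

# Build an embedding index over the registered trademark images (assumed folder).
paths = sorted(glob.glob("trademarks/*.png"))
index = torch.stack([embed(p) for p in paths])

def search(query_path, top_k=5):
    q = embed(query_path)
    scores = index @ q                      # cosine similarity (vectors are unit-norm)
    best = torch.topk(scores, top_k)
    return [(paths[int(i)], float(s)) for s, i in zip(best.values, best.indices)]

# Example usage: search("query_mark.png") returns the top-5 most similar marks.
```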

Binary classification of bolts with anti-loosening coating using transfer learning-based CNN (전이학습 기반 CNN을 통한 풀림 방지 코팅 볼트 이진 분류에 관한 연구)

  • Noh, Eunsol;Yi, Sarang;Hong, Seokmoo
    • Journal of the Korea Academia-Industrial cooperation Society / v.22 no.2 / pp.651-658 / 2021
  • Because bolts with anti-loosening coatings are used mainly for joining safety-related components in automobiles, accurate automatic screening of these coatings is essential for efficient defect detection. The performance of the convolutional neural network (CNN) used in a previous study [Identification of bolt coating defects using CNN and Grad-CAM] improved as the amount of data available for analyzing image patterns and characteristics increased. On the other hand, obtaining the necessary amount of data for coated bolts is difficult, which makes training time-consuming. In this paper, using the same VGG16 model as the previous study, transfer learning was applied to reduce training time and achieve equal or better accuracy with fewer data. The classifier was trained considering the amount of training data in this study and its similarity to the ImageNet data. The highest accuracy (95%) was achieved in conjunction with the fully connected layer. To enhance performance further, the last convolution layer and the classifier were fine-tuned, which increased accuracy by 2%, to 97%. This shows that training time can be reduced by transfer learning and fine-tuning while maintaining high screening accuracy.
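
A minimal sketch of the two-phase procedure outlined above: first train a new binary classifier on top of a frozen VGG16, then unfreeze the last convolution block and fine-tune it together with the classifier. The data loader, layer indices, and learning rates are illustrative assumptions.

```python
# Two-phase transfer learning with VGG16 for binary bolt-coating screening:
# (1) train only a new classifier on frozen convolutional features,
# (2) unfreeze the last convolution block and fine-tune it with a small
# learning rate. Layer indices and hyperparameters are illustrative.
import torch
import torch.nn as nn
from torchvision import models

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False
model.classifier[6] = nn.Linear(4096, 2)   # defective vs. normal coating

def train(params, lr, epochs, loader):
    opt = torch.optim.Adam(params, lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

# Phase 1: classifier only (assumes a DataLoader named train_dl exists).
# train(model.classifier.parameters(), lr=1e-3, epochs=10, loader=train_dl)

# Phase 2: also fine-tune the last conv block (features[24:] in torchvision's VGG16).
for p in model.features[24:].parameters():
    p.requires_grad = True
# train([p for p in model.parameters() if p.requires_grad], lr=1e-5, epochs=5, loader=train_dl)
```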

An Improved CNN-LSTM Hybrid Model for Predicting UAV Flight State (무인항공기 비행 상태 예측을 위한 개선된 CNN-LSTM 혼합모델)

  • Hyun Woo Seo;Eun Ju Choi;Byoung Soo Kim;Yong Ho Moon
    • Journal of Aerospace System Engineering / v.18 no.3 / pp.48-55 / 2024
  • In recent years, as the commercialization of unmanned aerial vehicles (UAVs) has been actively promoted, much attention has been focused on developing technology to ensure UAV safety. In general, a UAV can enter an uncontrollable state because of sudden maneuvers, disturbances, or pilot error. To prevent entering such an uncontrolled situation, it is essential to predict the flight state of the UAV. In this paper, we propose a flight-state prediction technique based on an improved CNN-LSTM hybrid model to enhance flight-state prediction performance. Simulation results show that the proposed technique offers better state-prediction performance than the existing prediction technique and can operate in real time in an on-board environment.
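
A minimal sketch of a generic CNN-LSTM hybrid for multivariate flight-state sequences: 1-D convolutions extract local temporal features and an LSTM models longer-range dynamics. Layer sizes, the number of state channels, and the prediction head are assumptions, not the paper's improved architecture.

```python
# Generic CNN-LSTM hybrid for predicting the next flight state from a window
# of past multivariate sensor samples. 1-D convolutions extract local temporal
# features, an LSTM models longer-range dynamics, and a linear head regresses
# the next state vector. Sizes are illustrative assumptions.
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, n_channels=8, hidden=64, horizon=1):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_channels * horizon)

    def forward(self, x):                 # x: (batch, time, channels)
        f = self.cnn(x.transpose(1, 2))   # -> (batch, 64, time)
        out, _ = self.lstm(f.transpose(1, 2))
        return self.head(out[:, -1])      # predict from the last time step

model = CNNLSTM()
window = torch.randn(16, 100, 8)          # 16 sequences of 100 samples, 8 state channels
print(model(window).shape)                # torch.Size([16, 8])
```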

Epileptic Seizure Detection Using CNN Ensemble Models Based on Overlapping Segments of EEG Signals (뇌파의 중첩 분할에 기반한 CNN 앙상블 모델을 이용한 뇌전증 발작 검출)

  • Kim, Min-Ki
    • KIPS Transactions on Software and Data Engineering / v.10 no.12 / pp.587-594 / 2021
  • As diagnosis using electroencephalography (EEG) has expanded, various studies have been actively performed on classifying EEG automatically. This paper proposes a CNN model that can effectively classify EEG signals acquired from healthy persons and patients with epilepsy. We segment the EEG signals into lower-dimensional sub-signals to augment the EEG data needed to train the CNN model. The sub-signals are then segmented again with overlap and used for training. We also propose an ensemble strategy to improve the classification accuracy. Experimental results on the public Bonn dataset show that the CNN can detect epileptic seizures with an accuracy above 99.0% and that the ensemble method improves the accuracy of 3-class and 5-class EEG classification.
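
A minimal sketch of the overlapping-segmentation and ensemble idea described above: a long EEG recording is cut into windows with 50% overlap, each window is classified by a small 1-D CNN, and the softmax outputs are averaged. The window length, overlap, and CNN architecture are assumptions.

```python
# Sketch of seizure detection by ensembling CNN predictions over overlapping
# EEG sub-signals: a recording is cut into windows with 50% overlap, each
# window is classified by a small 1-D CNN, and the softmax outputs of several
# models and windows are averaged. Sizes and the CNN are illustrative.
import numpy as np
import torch
import torch.nn as nn

class SmallEEGCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def overlapping_segments(signal, win=512, step=256):
    """Cut a 1-D EEG signal into windows of length `win` with 50% overlap."""
    return np.stack([signal[i:i + win] for i in range(0, len(signal) - win + 1, step)])

def ensemble_predict(models, signal):
    segs = torch.tensor(overlapping_segments(signal), dtype=torch.float32).unsqueeze(1)
    with torch.no_grad():
        probs = [torch.softmax(m(segs), dim=1) for m in models]   # per model, per segment
    return torch.stack(probs).mean(dim=(0, 1))                    # average over models and segments

models = [SmallEEGCNN().eval() for _ in range(3)]   # untrained stand-ins for the ensemble
eeg = np.random.randn(4096)                         # stand-in for one Bonn recording
print(ensemble_predict(models, eeg))                # class-probability vector
```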

Non-intrusive Calibration for User Interaction based Gaze Estimation (사용자 상호작용 기반의 시선 검출을 위한 비강압식 캘리브레이션)

  • Lee, Tae-Gyun;Yoo, Jang-Hee
    • Journal of Software Assessment and Valuation / v.16 no.1 / pp.45-53 / 2020
  • In this paper, we describe a new method for acquiring calibration data from the user-interaction process that occurs continuously during web browsing, and for performing calibration naturally while estimating the user's gaze. The proposed non-intrusive calibration is a tuning process that adapts the pre-trained gaze estimation model to a new user using the obtained data. To achieve this, a generalized CNN model for gaze estimation is trained, and the non-intrusive calibration is then employed to adapt quickly to new users through online learning. In experiments, the gaze estimation model was calibrated with combinations of various user interactions to compare performance, and improved accuracy was achieved compared with existing methods.
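
A minimal sketch of the online-adaptation idea: (eye image, screen point) pairs harvested from user interactions are buffered and used to fine-tune only the output layer of a pre-trained gaze CNN. The `pretrained_gaze_cnn.pt` model, the assumption that the last child module is the output layer, and the buffer size are hypothetical stand-ins.

```python
# Non-intrusive calibration sketch: as the user browses, (eye-image, screen
# point) pairs harvested from interactions such as clicks are appended to a
# small buffer, and the last layer of a pre-trained gaze CNN is updated online.
# The pre-trained model file and buffer contents are hypothetical.
import torch
import torch.nn as nn

gaze_model = torch.load("pretrained_gaze_cnn.pt")   # hypothetical generalized CNN
head = list(gaze_model.children())[-1]              # assume the last child is the output layer
opt = torch.optim.SGD(head.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

calib_buffer = []   # filled during browsing with (eye_patch_tensor, gaze_xy_tensor) pairs

def on_user_interaction(eye_patch, click_xy):
    """Called whenever an interaction (e.g. a click) gives a likely gaze target."""
    calib_buffer.append((eye_patch, click_xy))
    if len(calib_buffer) >= 16:                     # small online update step
        x = torch.stack([e for e, _ in calib_buffer])
        y = torch.stack([g for _, g in calib_buffer])
        opt.zero_grad()
        loss_fn(gaze_model(x), y).backward()
        opt.step()
        calib_buffer.clear()
```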

Face Frontalization Model with A.I. Based on U-Net using Convolutional Neural Network (합성곱 신경망(CNN)을 이용한 U-Net 기반의 인공지능 안면 정면화 모델)

  • Lee, Sangmin;Son, Wonho;Jin, ChangGyun;Kim, Ji-Hyun;Kim, JiYun;Park, Naeun;Kim, Gaeun;Kwon, Jin young;Lee, Hye Yi;Kim, Jongwan;Oh, Dukshin
    • Proceedings of the Korea Information Processing Society Conference / 2020.11a / pp.685-688 / 2020
  • Face recognition is being adopted in areas such as Face ID, finding missing children, and tracking criminals. Recognition rates have recently improved through deep learning, but the recognition rate for side views remains relatively low because feature extraction is more difficult than for frontal views. This becomes a drawback when only a side view of a person exists and no frontal view is available, making identification through face recognition difficult. In this paper, we implement an AI-based face frontalization model that generates a frontal face from a side-view image, expanding the situations in which face recognition can be applied. The model uses VGG-Face to extract facial features and a U-Net structure to prevent the information loss that can occur during feature extraction.
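
A compact U-Net-style encoder-decoder with skip connections, of the kind the abstract describes for preserving spatial information. Channel sizes are assumptions, and the VGG-Face feature extraction the authors mention is omitted for brevity.

```python
# Compact U-Net-style generator: the encoder downsamples the side-view face,
# the decoder upsamples back to image resolution, and skip connections carry
# encoder features to the decoder so spatial detail is not lost. Channel
# sizes are illustrative; the VGG-Face component is not shown.
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU())

class SmallUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2, self.enc3 = block(3, 32), block(32, 64), block(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = block(64, 32)
        self.out = nn.Conv2d(32, 3, 1)

    def forward(self, x):
        e1 = self.enc1(x)                       # full resolution
        e2 = self.enc2(self.pool(e1))           # 1/2 resolution
        e3 = self.enc3(self.pool(e2))           # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return torch.sigmoid(self.out(d1))      # frontal-face estimate in [0, 1]

side_view = torch.randn(1, 3, 128, 128)
print(SmallUNet()(side_view).shape)             # torch.Size([1, 3, 128, 128])
```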

Implementation of Interactive Media Content Production Framework based on Gesture Recognition (제스처 인식 기반의 인터랙티브 미디어 콘텐츠 제작 프레임워크 구현)

  • Koh, You-jin;Kim, Tae-Won;Kim, Yong-Goo;Choi, Yoo-Joo
    • Journal of Broadcast Engineering / v.25 no.4 / pp.545-559 / 2020
  • In this paper, we propose a content-creation framework that enables users without programming experience to easily create interactive media content that responds to user gestures. In the proposed framework, users define the gestures they use and the media effects that respond to them by number and link them in a text-based configuration file. The interactive media content is also linked with a dynamic projection mapping module that tracks the user's location and projects the media effects onto the user. To reduce the processing time and memory burden of gesture recognition, the user's movement is expressed as a grayscale motion history image. We designed a convolutional neural network model for gesture recognition using motion history images as input data. The number of network layers and the hyperparameters of the model were determined through experiments recognizing five gestures and then applied to the proposed framework. In the gesture recognition experiment, we obtained a recognition accuracy of 97.96% and a processing speed of 12.04 FPS. In the experiment connecting the three media effects, we confirmed that the intended media effect was appropriately displayed in real time according to the user's gesture.
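
A minimal sketch of building a grayscale motion history image from frame differences and classifying it with a small CNN. The difference threshold, decay rate, and five-class CNN are illustrative assumptions, not the framework's actual parameters.

```python
# Motion history image (MHI) sketch: each new frame's motion (thresholded
# difference from the previous frame) is stamped at full brightness, while
# older motion decays, giving one grayscale image that summarizes recent
# movement. A small CNN then classifies the MHI into one of five gestures.
import numpy as np
import torch
import torch.nn as nn

def update_mhi(mhi, prev_gray, cur_gray, thresh=30, decay=16):
    motion = np.abs(cur_gray.astype(int) - prev_gray.astype(int)) > thresh
    mhi = np.clip(mhi.astype(int) - decay, 0, 255)    # fade old motion
    mhi[motion] = 255                                 # stamp new motion
    return mhi.astype(np.uint8)

class GestureCNN(nn.Module):
    def __init__(self, n_gestures=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(32, n_gestures),
        )

    def forward(self, x):
        return self.net(x)

# Accumulate an MHI over a short clip of random stand-in frames, then classify it.
frames = [np.random.randint(0, 256, (120, 160), dtype=np.uint8) for _ in range(10)]
mhi = np.zeros((120, 160), dtype=np.uint8)
for prev, cur in zip(frames, frames[1:]):
    mhi = update_mhi(mhi, prev, cur)
x = torch.tensor(mhi / 255.0, dtype=torch.float32)[None, None]   # (1, 1, H, W)
print(GestureCNN()(x).argmax(1))                                  # predicted gesture index
```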

Automated Construction Progress Management Using Computer Vision-based CNN Model and BIM (이미지 기반 기계 학습과 BIM을 활용한 자동화된 시공 진도 관리 - 합성곱 신경망 모델(CNN)과 실내측위기술, 4D BIM을 기반으로 -)

  • Rho, Juhee;Park, Moonseo;Lee, Hyun-Soo
    • Korean Journal of Construction Engineering and Management / v.21 no.5 / pp.11-19 / 2020
  • Daily progress monitoring and subsequent schedule management of a construction project have a significant impact on the construction manager's decisions on schedule changes and field-operation control. However, current site monitoring relies heavily on the daily log book recorded manually by the person in charge of the work. For this reason, it is difficult to obtain an objective view, and human errors such as omission of contents may occur. To resolve these problems, previous research has developed automated site-monitoring methods based on object-recognition-driven visualization or BIM data creation. Despite these research results and the related technology development, application to practical construction projects is limited because the experimental methods assume fixed equipment at a specific location. To overcome these limitations, smart devices carried by field workers can be employed as a medium for data creation. Specifically, information extracted from site pictures by the object-recognition technology of a CNN model, together with positional information from GIPS, is used to update 4D BIM data. A standard CNN model is developed, and BIM data modification experiments are conducted with the collected data to validate the research suggestion. Based on the experimental results, it is confirmed that the methods and performance are applicable to construction site management, and the application of automated progress-monitoring methods is expected to contribute to speedy and precise data creation.
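
A highly simplified sketch of the data flow the abstract describes: a site photo is classified into a work status by a CNN, the photo's indoor position is resolved to a BIM element, and that element's progress record is updated. The classifier, position lookup, and progress record below are hypothetical stand-ins; no real BIM API is used.

```python
# Hypothetical progress-update flow: classify a site photo into a work status
# with a CNN, resolve the photo's indoor position to a BIM element, and update
# that element's progress field with the status and capture date. The model,
# position lookup, and BIM record are illustrative stand-ins, not a BIM API.
from datetime import date
import torch
from torchvision import transforms
from PIL import Image

STATUSES = ["not_started", "framing", "finishing", "completed"]

cnn = torch.load("progress_cnn.pt")    # hypothetical trained classifier
cnn.eval()
tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

# Toy 4D BIM progress record: element id -> schedule/progress attributes.
bim_progress = {"wall_3F_012": {"zone": (3, "A"), "status": "not_started", "updated": None}}

def element_at(position):
    """Map an indoor position (floor, zone) to a BIM element id (stand-in lookup)."""
    return next(e for e, rec in bim_progress.items() if rec["zone"] == position)

def update_progress(photo_path, position, when=None):
    with torch.no_grad():
        x = tfm(Image.open(photo_path).convert("RGB")).unsqueeze(0)
        status = STATUSES[cnn(x).argmax(1).item()]
    element = element_at(position)
    bim_progress[element].update(status=status, updated=when or date.today())
    return element, status

# Example usage: update_progress("site_photo.jpg", position=(3, "A"))
```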