• Title/Summary/Keyword: neural network algorithm


Application of POD reduced-order algorithm on data-driven modeling of rod bundle

  • Kang, Huilun;Tian, Zhaofei;Chen, Guangliang;Li, Lei;Wang, Tianyu
    • Nuclear Engineering and Technology
    • /
    • v.54 no.1
    • /
    • pp.36-48
    • /
    • 2022
  • As a valid numerical method for obtaining high-resolution flow-field results, computational fluid dynamics (CFD) has been widely used to study coolant flow and heat transfer characteristics in fuel rod bundles. However, the time-consuming, iterative solution of the Navier-Stokes equations makes CFD unsuitable for scenarios that require efficient simulation, such as sensitivity analysis and uncertainty quantification. To solve this problem, a reduced-order model (ROM) based on proper orthogonal decomposition (POD) and machine learning (ML) is proposed to simulate the flow field efficiently. Firstly, a validated CFD model is established to generate the flow-field data set of the rod bundle. Secondly, the POD modes and corresponding coefficients of the flow field are extracted. Then, a deep feed-forward neural network, chosen for its efficiency in approximating arbitrary functions and its ability to handle high-dimensional, strongly nonlinear problems, is used to model the nonlinear relationship between the mode coefficients and the boundary conditions. A trained surrogate model for mode-coefficient prediction is obtained after a certain number of training iterations. Finally, the flow field is reconstructed by combining the POD basis with the predicted coefficients. The ROM is evaluated on the test dataset. The evaluation results show that the proposed POD-ROM accurately describes the flow status of the flow field in rod bundles with high resolution in only a few milliseconds.
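
The POD-plus-network workflow described above can be sketched briefly. The snippet below is a minimal illustration, not the paper's implementation: the snapshot data are synthetic, and the mode count and network sizes are arbitrary assumptions.

```python
# Minimal POD + neural-network reduced-order model sketch (illustrative only;
# the paper's CFD snapshots and network settings are not reproduced here).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for CFD snapshots: each column is a flattened flow field
# obtained at one boundary condition (e.g., inlet velocity).
n_cells, n_snapshots = 2000, 60
bc = rng.uniform(1.0, 5.0, size=(n_snapshots, 1))           # boundary conditions
snapshots = np.sin(np.outer(np.linspace(0, 3, n_cells), bc.ravel())) \
            + 0.01 * rng.standard_normal((n_cells, n_snapshots))

# POD via thin SVD of the snapshot matrix: columns of U are the POD modes.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
r = 10                                                       # number of retained modes
modes = U[:, :r]
coeffs = modes.T @ snapshots                                 # (r, n_snapshots) mode coefficients

# Surrogate: feed-forward network mapping boundary condition -> mode coefficients.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
surrogate.fit(bc, coeffs.T)

# Online stage: predict coefficients for a new boundary condition and
# reconstruct the field as a linear combination of the POD modes.
bc_new = np.array([[2.7]])
field_pred = modes @ surrogate.predict(bc_new).ravel()
print(field_pred.shape)                                      # (2000,) reconstructed flow field
```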

Application of CCTV Image and Semantic Segmentation Model for Water Level Estimation of Irrigation Channel (관개용수로 CCTV 이미지를 이용한 CNN 딥러닝 이미지 모델 적용)

  • Kim, Kwi-Hoon;Kim, Ma-Ga;Yoon, Pu-Reun;Bang, Je-Hong;Myoung, Woo-Ho;Choi, Jin-Yong;Choi, Gyu-Hoon
    • Journal of The Korean Society of Agricultural Engineers
    • /
    • v.64 no.3
    • /
    • pp.63-73
    • /
    • 2022
  • A more accurate understanding of the irrigation water supply is necessary for efficient agricultural water management. Although water levels in an irrigation canal are measured with ultrasonic water level gauges, errors occur due to malfunctions or the surrounding environment. This study aims to apply CNN (Convolutional Neural Network) deep-learning-based image classification and segmentation models to CCTV (Closed-Circuit Television) images of an irrigation canal. The CCTV images were acquired from the irrigation canal of an agricultural reservoir in Cheorwon-gun, Gangwon-do. We used the ResNet-50 model for image classification and the U-Net model for image segmentation. Using the Natural Breaks algorithm, the water level data were divided into 2, 4, and 8 groups for the image classification models. The classification models with 2, 4, and 8 groups achieved accuracies of 1.000, 0.987, and 0.634, respectively. The image segmentation model showed a Dice score of 0.998, and the predicted water levels showed an R² of 0.97 and an MAE (Mean Absolute Error) of 0.02 m. The image classification models can be applied to automatic gate controllers using four water-level divisions, and the image segmentation results can serve as an alternative measurement to ultrasonic water gauges. We expect that the results of this study can provide a more scientific and efficient approach to agricultural water management.
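
As a rough illustration of how a segmentation mask can be turned into a water-level reading, the sketch below assumes a hypothetical pixel-to-metre calibration and a known canal-bottom row; neither value comes from the paper.

```python
# Illustrative sketch of turning a U-Net water mask into a water-level reading.
# The mask, calibration, and canal-bottom row are assumptions, not paper values.
import numpy as np

def water_level_from_mask(mask: np.ndarray,
                          m_per_pixel: float,
                          canal_bottom_row: int) -> float:
    """mask: (H, W) binary array, 1 = water pixel predicted by the segmentation model."""
    water_rows = np.where(mask.any(axis=1))[0]
    if water_rows.size == 0:
        return 0.0
    water_surface_row = water_rows.min()             # highest image row containing water
    depth_px = canal_bottom_row - water_surface_row  # water depth in pixels
    return max(depth_px, 0) * m_per_pixel

# Toy example: a 100x120 mask whose lower 40 rows are water.
mask = np.zeros((100, 120), dtype=np.uint8)
mask[60:, :] = 1
print(water_level_from_mask(mask, m_per_pixel=0.01, canal_bottom_row=99))  # ~0.39 m
```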

Damaged cable detection with statistical analysis, clustering, and deep learning models

  • Son, Hyesook;Yoon, Chanyoung;Kim, Yejin;Jang, Yun;Tran, Linh Viet;Kim, Seung-Eock;Kim, Dong Joo;Park, Jongwoong
    • Smart Structures and Systems
    • /
    • v.29 no.1
    • /
    • pp.17-28
    • /
    • 2022
  • The cable components of cable-stayed bridges are gradually impacted by weather conditions, vehicle loads, and material corrosion. The stay cable is a critical load-carrying part that closely affects the operational stability of a cable-stayed bridge. Damaged cables might lead to bridge collapse due to their reduced tension capacity. Thus, it is necessary to develop structural health monitoring (SHM) techniques that accurately identify damaged cables. In this work, a combined identification method based on three efficient techniques, statistical analysis, clustering, and neural network models, is proposed to detect damaged cables in a cable-stayed bridge. The measured dataset from the bridge was initially preprocessed to remove the outlier channels. Then, the theory and application of each technique for damage detection were introduced. In general, the statistical approach extracts parameters representing the damage within time series, the clustering approach identifies outliers in the data signals as damaged members, and the deep learning approach exploits the nonlinear data dependencies in SHM to train the model. The performance of these approaches in classifying the damaged cable was assessed, and the combined identification method was obtained using a voting ensemble. Finally, the combined method was compared with an existing outlier detection algorithm, support vector machines (SVM). The results demonstrate that the proposed method is robust and provides higher accuracy for damaged cable detection in the cable-stayed bridge.
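
A minimal sketch of the voting idea is shown below; the three detectors are generic stand-ins (z-score statistics, k-means, an MLP classifier) for the paper's specific statistical, clustering, and deep learning models, and the thresholds are assumptions.

```python
# Hedged sketch of combining three damage detectors with a majority vote.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

def statistical_flags(features, z_thresh=3.0):
    """Flag samples whose z-score exceeds a threshold in any feature."""
    z = np.abs((features - features.mean(axis=0)) / (features.std(axis=0) + 1e-12))
    return (z > z_thresh).any(axis=1).astype(int)

def clustering_flags(features, n_clusters=2):
    """Flag members of the smaller cluster as potential damaged cables."""
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(features)
    minority = np.argmin(np.bincount(labels))
    return (labels == minority).astype(int)

def network_flags(features, features_train, labels_train):
    """Supervised neural-network classifier trained on labeled examples."""
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    clf.fit(features_train, labels_train)
    return clf.predict(features)

def majority_vote(*flag_arrays):
    """Each detector casts one vote per cable; a majority marks it damaged."""
    votes = np.vstack(flag_arrays)
    return (votes.sum(axis=0) >= (votes.shape[0] + 1) // 2).astype(int)

# Usage (F = cable features, F_tr / y_tr = labeled training data):
# flags = majority_vote(statistical_flags(F), clustering_flags(F),
#                       network_flags(F, F_tr, y_tr))
```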

Comparison of Anomaly Detection Performance Based on GRU Model Applying Various Data Preprocessing Techniques and Data Oversampling (다양한 데이터 전처리 기법과 데이터 오버샘플링을 적용한 GRU 모델 기반 이상 탐지 성능 비교)

  • Yoo, Seung-Tae;Kim, Kangseok
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.32 no.2
    • /
    • pp.201-211
    • /
    • 2022
  • In line with the recent change in the cybersecurity paradigm, research on anomaly detection methods using machine learning and deep learning, the implementation technologies of AI, is increasing. In this study, a comparative study was conducted on data preprocessing techniques that can improve the anomaly detection performance of a GRU (Gated Recurrent Unit) neural-network-based intrusion detection model, using the open NGIDS-DS (Next Generation IDS Dataset). In addition, to solve the class imbalance problem arising from the ratio of normal data to attack data, the detection performance at different oversampling ratios was compared and analyzed using an oversampling technique based on DCGAN (Deep Convolutional Generative Adversarial Networks). The experiments showed that preprocessing the system call and process execution path features with the Doc2Vec algorithm performed well, and oversampling with DCGAN further improved detection performance.
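
A pipeline of Doc2Vec feature preprocessing followed by a GRU classifier might look roughly like the sketch below; the toy traces, vector size, and network layout are assumptions, not the study's settings.

```python
# Rough sketch (not the paper's exact pipeline): system-call traces embedded
# with gensim Doc2Vec, then classified by a small GRU network in Keras.
import numpy as np
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from tensorflow import keras

# Toy traces: each trace is a sequence of system-call tokens (illustrative data).
traces = [["open", "read", "write", "close"],
          ["open", "read", "read", "close"],
          ["socket", "connect", "send", "recv"]]
labels = np.array([0, 0, 1])  # 0 = normal, 1 = attack (illustrative)

# Doc2Vec embedding of each trace.
docs = [TaggedDocument(words=t, tags=[str(i)]) for i, t in enumerate(traces)]
d2v = Doc2Vec(vector_size=16, min_count=1, epochs=50)
d2v.build_vocab(docs)
d2v.train(docs, total_examples=d2v.corpus_count, epochs=d2v.epochs)
vectors = np.array([d2v.infer_vector(t) for t in traces])

# GRU classifier over sequences of embedded windows (sequence length 1 here).
X = vectors[:, np.newaxis, :]                       # (samples, timesteps, features)
model = keras.Sequential([
    keras.Input(shape=(None, 16)),
    keras.layers.GRU(32),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, labels, epochs=5, verbose=0)
```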

Deep learning-based Human Action Recognition Technique Considering the Spatio-Temporal Relationship of Joints (관절의 시·공간적 관계를 고려한 딥러닝 기반의 행동인식 기법)

  • Choi, Inkyu;Song, Hyok
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.05a
    • /
    • pp.413-415
    • /
    • 2022
  • Since human joints, as components of the human body, provide useful information for analyzing human behavior, many studies have been conducted on human action recognition using joint information. However, recognizing human actions that change every moment using only independent joint information is a very complex problem. Therefore, an additional information extraction method for learning and an algorithm that considers the current state based on past states are needed. In this paper, we propose a human action recognition technique that considers the positional relationship between connected joints and the change of each joint's position over time. Using a pre-trained joint extraction model, the position information of each joint is obtained, and bone information is extracted using the difference vectors between connected joints. In addition, a simplified neural network is constructed according to the two types of inputs, and spatio-temporal features are extracted by adding an LSTM. In an experiment using a dataset consisting of 9 behaviors, measuring action recognition accuracy with the temporal and spatial relationship features of each joint showed superior performance compared to using only single joint information.
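
The two-input idea, joint coordinates plus bone vectors formed from differences between connected joints, fed into an LSTM, can be sketched as follows; the skeleton topology, clip length, and layer sizes are illustrative assumptions.

```python
# Illustrative two-input skeleton model: joint positions and bone vectors -> LSTM.
import numpy as np
from tensorflow import keras

NUM_JOINTS = 15
# (parent, child) pairs defining bones; a hypothetical skeleton topology.
BONES = [(0, 1), (1, 2), (2, 3), (1, 4), (4, 5), (1, 6), (6, 7),
         (0, 8), (8, 9), (9, 10), (0, 11), (11, 12), (12, 13), (13, 14)]

def bone_features(joints):
    """joints: (T, NUM_JOINTS, 2) coordinates -> (T, len(BONES), 2) bone vectors."""
    return np.stack([joints[:, c] - joints[:, p] for p, c in BONES], axis=1)

T = 30                                              # frames per clip (assumed)
joint_in = keras.Input(shape=(T, NUM_JOINTS * 2), name="joints")
bone_in = keras.Input(shape=(T, len(BONES) * 2), name="bones")
x = keras.layers.Concatenate()([joint_in, bone_in])
x = keras.layers.TimeDistributed(keras.layers.Dense(64, activation="relu"))(x)
x = keras.layers.LSTM(64)(x)                        # temporal modelling over frames
out = keras.layers.Dense(9, activation="softmax")(x)  # 9 behavior classes
model = keras.Model([joint_in, bone_in], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```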


Implementation of an alarm system with AI image processing to detect whether a helmet is worn or not and a fall accident (헬멧 착용 여부 및 쓰러짐 사고 감지를 위한 AI 영상처리와 알람 시스템의 구현)

  • Yong-Hwa Jo;Hyuek-Jae Lee
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.23 no.3
    • /
    • pp.150-159
    • /
    • 2022
  • This paper presents an implementation that detects whether a helmet is worn and whether a fall accident has occurred by analyzing, in real time, the image objects of individual workers extracted from footage of an industrial site. To detect the image objects of workers, YOLO, a deep-learning-based computer vision model, was used; to determine whether a helmet is worn, the extracted images were evaluated with a model trained on 5,000 different helmet training images. To determine whether a fall accident occurred, the position of the head was tracked using MediaPipe's real-time Pose body-tracking algorithm, and the movement speed was calculated to decide whether the person had fallen. In addition, to add reliability to the fall-accident result, a method that infers the posture of an object from the size of YOLO's bounding box was proposed and implemented. Finally, a Telegram API bot and a Firebase DB server were implemented to provide a notification service for administrators.
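
The two fall cues described above, downward head movement speed from the pose landmarks and the shape of the detector's bounding box, can be combined with a simple heuristic such as the sketch below; the thresholds and frame rate are illustrative, not the paper's values.

```python
# Hedged sketch of a fall heuristic using head speed and bounding-box shape.
def head_drop_speed(head_y_prev, head_y_curr, fps=30):
    """Vertical head speed in pixels/second (image y grows downward)."""
    return (head_y_curr - head_y_prev) * fps

def looks_fallen(bbox, head_speed, speed_thresh=400.0):
    """bbox: (x, y, w, h) from the person detector; thresholds are assumptions."""
    x, y, w, h = bbox
    lying_posture = w > h                  # wider than tall suggests a lying posture
    fast_drop = head_speed > speed_thresh  # rapid downward head movement
    return lying_posture and fast_drop

# Example: head dropped 20 px in one frame and the box is wider than tall.
print(looks_fallen((100, 300, 180, 90), head_drop_speed(250, 270)))  # True
```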

Robust Scheme of Segmenting Characters of License Plate on Irregular Illumination Condition (불규칙 조명 환경에 강인한 번호판 문자 분리 기법)

  • Kim, Byoung-Hyun;Han, Young-Joon;Hahn, Hern-Soo
    • Journal of the Korea Society of Computer and Information
    • /
    • v.14 no.11
    • /
    • pp.61-71
    • /
    • 2009
  • The vehicle license plate is the only way to check the registered information of a vehicle. Many works have been devoted to vision systems for recognizing the license plate, which have been widely used to control illegal parking. However, it is difficult to correctly segment the characters on the license plate, since illumination is affected by weather changes and neighboring obstacles. This paper proposes a robust method of segmenting the characters of the license plate under irregular illumination conditions. The proposed method enhances the contrast of license plate images using the chi-square probability density function. To segment the characters on the license plate, high-quality binary images are obtained by applying an adaptive threshold. Preprocessing and a labeling algorithm are used to eliminate the noise that appears during the segmentation process. Finally, a profiling method is applied to segment the characters on the license plate from the binary images.
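
An OpenCV sketch of a comparable pipeline is shown below. Note that CLAHE is used here as a generic contrast-enhancement stand-in for the paper's chi-square-based enhancement, and the size filters are illustrative assumptions.

```python
# Sketch: contrast enhancement, adaptive threshold, labeling, left-to-right ordering.
import cv2
import numpy as np

def segment_plate_characters(plate_bgr):
    gray = cv2.cvtColor(plate_bgr, cv2.COLOR_BGR2GRAY)
    # Contrast enhancement to reduce the effect of irregular illumination
    # (CLAHE here; the paper uses a chi-square-based enhancement instead).
    gray = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(gray)
    # Adaptive thresholding handles locally varying lighting across the plate.
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY_INV, 31, 10)
    # Connected-component labeling removes small noise blobs.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    chars = []
    for i in range(1, n):                                # label 0 is the background
        x, y, w, h, area = stats[i]
        if area > 50 and h > 0.3 * plate_bgr.shape[0]:   # keep character-sized blobs
            chars.append((x, binary[y:y + h, x:x + w]))
    chars.sort(key=lambda c: c[0])                       # left-to-right order
    return [c[1] for c in chars]
```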

Identification of Multiple Cancer Cell Lines from Microscopic Images via Deep Learning (심층 학습을 통한 암세포 광학영상 식별기법)

  • Park, Jinhyung;Choe, Se-woon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2021.05a
    • /
    • pp.374-376
    • /
    • 2021
  • For the diagnosis of cancer-related diseases in clinical practice, pathological examination using a biopsy is essential after a basic diagnosis using imaging equipment. Proceeding with such a biopsy requires the assistance of an oncologist, clinical pathologist, or other specialist with expert knowledge, and a minimum amount of time is needed for confirmation. In recent years, research on building systems capable of automatically classifying cancer cells using artificial intelligence has been actively conducted. However, previous studies show limitations in the range of cell types covered and in accuracy, owing to the limited algorithms used. In this study, we propose a method to identify a total of four cancer cell lines through a convolutional neural network, a kind of deep learning. The optical images obtained through cell culture were trained with EfficientNet after preprocessing such as locating the cells and segmenting the images using OpenCV. The model used various hyperparameters based on EfficientNet, and InceptionV3 was also trained to compare and analyze the performance. As a result, cells were classified with a high accuracy of 96.8%, and this analysis method is expected to be helpful in confirming cancer.
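
A minimal transfer-learning sketch with EfficientNet for a four-class cell classifier might look like the following; the directory layout and training settings are placeholders rather than the study's configuration.

```python
# Transfer-learning sketch: EfficientNetB0 backbone plus a four-class head.
from tensorflow import keras

base = keras.applications.EfficientNetB0(include_top=False, weights="imagenet",
                                         input_shape=(224, 224, 3))
base.trainable = False                                  # start with a frozen backbone

model = keras.Sequential([
    base,
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dropout(0.2),
    keras.layers.Dense(4, activation="softmax"),        # four cancer cell lines
])
model.compile(optimizer=keras.optimizers.Adam(1e-3),
              loss="categorical_crossentropy", metrics=["accuracy"])

# Hypothetical directory layout: cells/<class_name>/*.png
train_ds = keras.utils.image_dataset_from_directory(
    "cells", image_size=(224, 224), batch_size=32, label_mode="categorical")
# model.fit(train_ds, epochs=10)
```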


DNN-based Binary Classification Model by Particulate Matter Concentration (DNN 기반의 미세먼지 농도별 이진 분류 모델)

  • Lee, Jong-sung;Jung, Yong-jin;Oh, Chang-heon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2021.05a
    • /
    • pp.277-279
    • /
    • 2021
  • Learning of a particulate matter prediction model is often not performed well, depending on the characteristics of each concentration range. To solve this problem, prediction models for low and high concentrations need to be designed separately, which in turn requires a classification model that separates particulate matter concentrations into low and high. This paper proposes a classification model that classifies low and high concentrations based on the particulate matter concentration. A DNN was used as the classification algorithm, and the model was designed with the optimal parameters found through a hyperparameter search. In the performance evaluation, the model achieved an accuracy of 97.54% for low-concentration classification and 85.51% for high-concentration classification.
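
A small DNN binary classifier of the kind described could be sketched as below; the input features and layer sizes are assumptions for illustration, not the paper's configuration.

```python
# Sketch of a DNN binary classifier separating low/high PM concentration samples.
import numpy as np
from tensorflow import keras

n_features = 8                                   # e.g., weather and pollutant inputs (assumed)
model = keras.Sequential([
    keras.Input(shape=(n_features,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"), # 0 = low, 1 = high concentration
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Toy training data standing in for the measured dataset.
X = np.random.rand(256, n_features).astype("float32")
y = (X[:, 0] > 0.5).astype("float32")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```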


Artificial Neural Network-based Thermal Environment Prediction Model for Energy Saving of Data Center Cooling Systems (데이터센터 냉각 시스템의 에너지 절약을 위한 인공신경망 기반 열환경 예측 모델)

  • Chae-Young Lim;Chae-Eun Yeo;Seong-Yool Ahn;Sang-Hyun Lee
    • The Journal of the Convergence on Culture Technology
    • /
    • v.9 no.6
    • /
    • pp.883-888
    • /
    • 2023
  • Since data centers provide IT services 24 hours a day, 365 days a year, data center power consumption is expected to grow to approximately 10% by 2030, and the introduction of high-density IT equipment will gradually increase. To ensure the stable operation of IT equipment, various kinds of research are required to save cooling energy and improve energy management. This study proposes the following process for energy saving in data centers: we conducted CFD modeling of the data center, proposed an artificial-intelligence-based thermal environment prediction model, compared the actual measured data, the prediction model, and the CFD results, and finally evaluated the data center's thermal management performance. The predicted values of RCI, RTI, and PUE are also similar to the reference values, depending on the normalization method used. Therefore, the algorithm proposed in this study can be applied as a thermal environment prediction model for data centers.
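
A rough sketch of an ANN-based thermal prediction step, together with the standard PUE definition, is given below; the input features and data are synthetic assumptions, not measurements from the study.

```python
# Illustrative MLP mapping cooling conditions to rack inlet temperature, plus PUE.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
# Hypothetical inputs: [supply air temp (C), airflow (m^3/s), IT load (kW)]
X = rng.uniform([16.0, 5.0, 50.0], [24.0, 15.0, 200.0], size=(500, 3))
# Synthetic target standing in for CFD/measured rack inlet temperature.
y = X[:, 0] + 0.02 * X[:, 2] - 0.3 * X[:, 1] + rng.normal(0, 0.2, 500)

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
model.fit(X, y)
print(model.predict([[20.0, 10.0, 120.0]]))      # predicted inlet temperature (C)

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness = total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

print(pue(total_facility_kw=150.0, it_equipment_kw=120.0))  # 1.25
```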