• Title/Abstract/Keywords: long-memory


A Systems Engineering Approach for Predicting NPP Response under Steam Generator Tube Rupture Conditions using Machine Learning

  • Tran Canh Hai Nguyen; Aya Diab
    • 시스템엔지니어링학술지 / Vol. 18, No. 2 / pp. 94-107 / 2022
  • Accident prevention and mitigation are the highest priorities of nuclear power plant (NPP) operation, particularly in the aftermath of the Fukushima Daiichi accident, which reignited public anxiety and skepticism about nuclear energy. To deal with accident scenarios more effectively, operators must have ample and precise information about key safety parameters as well as their future trajectories. This work investigates the potential of machine learning for forecasting NPP response in real time, providing an additional validation method and helping to reduce human error, especially in accident situations where operators are under heavy stress. First, a base-case SGTR simulation is carried out with the best-estimate code RELAP5/MOD3.4 to confirm the validity of the model against results reported in the APR1400 Design Control Document (DCD). Then, uncertainty quantification is performed by coupling RELAP5/MOD3.4 with the statistical tool DAKOTA to generate a dataset large enough for constructing and training neural machine learning (ML) models, namely LSTM, GRU, and a hybrid CNN-LSTM. Finally, the accuracy and reliability of these models in forecasting system response are tested on unseen data. To facilitate and oversee the development of the ML models, a Systems Engineering (SE) methodology is used to ensure that the work stays consistent with the originating mission statement and that the findings obtained at each phase are valid.
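The neural forecasters named above (LSTM, GRU, CNN-LSTM) are built from gated recurrent cells. As a minimal, framework-free sketch of the recurrence involved (not the authors' RELAP5/DAKOTA pipeline; the weights here are illustrative placeholders), a single scalar LSTM cell step can be written as:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class ScalarLSTMCell:
    """Single-unit LSTM cell with scalar input and state.

    A real forecaster would learn vector-valued weights from the
    simulation dataset; these scalars only show the gate mechanics.
    """

    def __init__(self, weights):
        self.w = weights  # keys: wi, ui, bi, wf, uf, bf, wo, uo, bo, wg, ug, bg

    def step(self, x, h, c):
        i = sigmoid(self.w["wi"] * x + self.w["ui"] * h + self.w["bi"])   # input gate
        f = sigmoid(self.w["wf"] * x + self.w["uf"] * h + self.w["bf"])   # forget gate
        o = sigmoid(self.w["wo"] * x + self.w["uo"] * h + self.w["bo"])   # output gate
        g = math.tanh(self.w["wg"] * x + self.w["ug"] * h + self.w["bg"])  # candidate
        c = f * c + i * g      # new cell state: keep part of old memory, add new info
        h = o * math.tanh(c)   # new hidden state (the cell's output)
        return h, c

# Roll the cell over a short time series of one monitored safety parameter
weights = {k: 0.5 for k in ("wi", "ui", "bi", "wf", "uf", "bf",
                            "wo", "uo", "bo", "wg", "ug", "bg")}
cell = ScalarLSTMCell(weights)
h = c = 0.0
for x in [0.1, 0.4, 0.35, 0.2]:
    h, c = cell.step(x, h, c)
```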

Consistency check algorithm for validation and re-diagnosis to improve the accuracy of abnormality diagnosis in nuclear power plants

  • Kim, Geunhee;Kim, Jae Min;Shin, Ji Hyeon;Lee, Seung Jun
    • Nuclear Engineering and Technology / Vol. 54, No. 10 / pp. 3620-3630 / 2022
  • The diagnosis of abnormalities in a nuclear power plant is essential to maintain plant safety. When an abnormal event occurs, the operator diagnoses the event and selects the appropriate abnormal operating procedures (AOPs) and sub-procedures to implement the necessary measures. To support this, abnormality diagnosis systems using data-driven methods such as artificial neural networks and convolutional neural networks have been developed. However, data-driven models cannot always guarantee an accurate diagnosis because they cannot simulate all possible abnormal events. Therefore, abnormality diagnosis systems should be able to detect their own potential misdiagnoses. This paper proposes a rule-based diagnostic validation algorithm using a previously developed two-stage diagnosis model for abnormal situations. We analyzed the diagnostic results of the sub-procedure stage when the first diagnostic results were inaccurate and derived a rule to filter out inconsistent sub-procedure diagnostic results, which may indicate inaccurate diagnoses. In a case study, two abnormality diagnosis models were built using gated recurrent units and long short-term memory cells, and consistency checks on the diagnostic results from both models were performed to detect any inconsistencies. Based on this, a re-diagnosis was performed by selecting the label with the second-best value in the first diagnosis, after which the diagnosis accuracy increased. That is, the proposed model makes it possible to detect diagnostic failures through the developed consistency check of the sub-procedure diagnostic results. The consistency check process has the advantage that the operator can review the results and increase the diagnosis success rate by performing additional re-diagnoses. The developed model is expected to have increased applicability as an operator support system for selecting the appropriate AOPs and sub-procedures with re-diagnosis, thereby further increasing abnormal-event diagnostic accuracy.
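The consistency check and second-best re-diagnosis described above can be sketched as follows. This is a hypothetical simplification: the label names and probability dictionaries stand in for the softmax outputs of the two trained GRU/LSTM models.

```python
def top_label(probs):
    """Label with the highest predicted probability."""
    return max(probs, key=probs.get)

def is_consistent(probs_gru, probs_lstm):
    """Consistency check: do the two models' first diagnoses agree?"""
    return top_label(probs_gru) == top_label(probs_lstm)

def rediagnose(probs):
    """On inconsistency, fall back to the second-best label of the first diagnosis."""
    ranked = sorted(probs, key=probs.get, reverse=True)
    return ranked[1]

# Hypothetical outputs for one abnormal event (labels are illustrative)
gru = {"AOP-01": 0.55, "AOP-07": 0.40, "AOP-12": 0.05}
lstm = {"AOP-01": 0.30, "AOP-07": 0.65, "AOP-12": 0.05}

diagnosis = top_label(gru)
if not is_consistent(gru, lstm):
    diagnosis = rediagnose(gru)  # inconsistency flagged for operator review
```

In this sketch the two models disagree, so the second-best label of the first diagnosis is selected, mirroring the re-diagnosis step in the abstract.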

저 사양 IoT 장치간의 암호화 알고리즘 성능 비교 (Comparison of encryption algorithm performance between low-spec IoT devices)

  • 박정규;김재호
    • 사물인터넷융복합논문지 / Vol. 8, No. 1 / pp. 79-85 / 2022
  • The Internet of Things (IoT) connects devices with diverse platforms, computing capabilities, and functions. The heterogeneity of the network and the ubiquity of IoT devices increase the demand for security and privacy. Cryptographic mechanisms must therefore be strong enough to meet these increased requirements while remaining efficient enough to be implemented on low-spec devices. This paper presents the performance and memory limitations of state-of-the-art cryptographic primitives and schemes for the various types of devices usable in the IoT. In addition, a detailed performance evaluation of the most commonly used encryption algorithms is carried out on the low-spec devices frequently deployed in IoT networks. To provide data protection, asymmetric fully homomorphic encryption over a binary ring and symmetric AES 128-bit encryption were used. The experimental results show that the IoT devices had sufficient performance to implement the symmetric cipher, whereas performance degraded in the asymmetric cipher implementation.
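Python's standard library includes neither AES nor homomorphic encryption, so the harness below uses clearly labeled stand-ins: an XOR stream cipher for a cheap symmetric operation and big-integer modular exponentiation (the core primitive of many asymmetric schemes) for an expensive one. It only illustrates the kind of timing comparison the paper describes, not its actual algorithms or measurements.

```python
import time

def time_op(fn, reps=50):
    """Average wall-clock seconds per call of fn."""
    t0 = time.perf_counter()
    for _ in range(reps):
        fn()
    return (time.perf_counter() - t0) / reps

MSG = bytes(range(256)) * 4           # 1 KiB test message
KEY = bytes(reversed(range(256)))     # toy symmetric key

def xor_encrypt(msg, key):
    """Toy symmetric stand-in (NOT secure): keystream XOR, self-inverse."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(msg))

# Toy asymmetric stand-in: modular exponentiation with a large exponent/modulus
N = (1 << 2048) - 159                 # arbitrary large odd modulus (not a real key)
E = (1 << 2047) + 1

def modexp_op():
    pow(1234567891011, E, N)

sym_t = time_op(lambda: xor_encrypt(MSG, KEY))
asym_t = time_op(modexp_op, reps=3)
```

On typical hardware the modular-exponentiation stand-in is orders of magnitude slower per operation, mirroring the paper's finding that low-spec devices handle symmetric ciphers comfortably but struggle with asymmetric ones.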

BIS(Bus Information System) 정확도 향상을 위한 머신러닝 적용 방안 연구 (A Study on the Application of Machine Learning to Improve BIS (Bus Information System) Accuracy)

  • 장준용;박준태
    • 한국ITS학회 논문지 / Vol. 21, No. 3 / pp. 42-52 / 2022
  • BIS (Bus Information System) services are expanding nationwide, from metropolitan areas to small and medium-sized cities, and user satisfaction continues to improve. Alongside this expansion, technology development to improve the reliability of bus arrival times and research to minimize errors continue, and above all the importance of information accuracy is being emphasized. In this study, prediction accuracy was evaluated using LSTM, a machine learning method, and compared with existing methodologies such as the Kalman filter and neural networks. Analysis of the standard error between actual travel times and predicted values showed that the LSTM method was about 1% more accurate, with a standard error about 10 seconds lower, than the existing algorithms. On the other hand, LSTM was superior in only 109 of 162 sections (67.3%), so it was not superior in every case. Even more accurate prediction should be possible if the algorithms are fused based on an analysis of section characteristics.
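The comparison above rests on two error measures: percentage accuracy and the standard error of the residuals. A minimal sketch of their computation (the travel times are hypothetical, not the study's data):

```python
import math

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

def standard_error(actual, predicted):
    """Sample standard deviation of the prediction residuals (here, seconds)."""
    res = [a - p for a, p in zip(actual, predicted)]
    mean = sum(res) / len(res)
    return math.sqrt(sum((r - mean) ** 2 for r in res) / (len(res) - 1))

# Hypothetical travel times (seconds) on one bus section
actual = [300.0, 320.0, 310.0, 290.0]
lstm_pred = [305.0, 315.0, 312.0, 288.0]
err_pct = mape(actual, lstm_pred)
err_sec = standard_error(actual, lstm_pred)
```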

LSTM 인공신경망을 이용한 자동차 A/S센터 수리 부품 수요 예측 모델 연구 (A Study on the Demand Prediction Model for Repair Parts of Automotive After-sales Service Center Using LSTM Artificial Neural Network)

  • 정동균;박영식
    • 한국정보시스템학회지:정보시스템연구 / Vol. 31, No. 3 / pp. 197-220 / 2022
  • Purpose The purpose of this study is to identify demand-pattern categories for repair parts in automotive after-sales service (A/S) and to propose a demand prediction model for auto repair parts using the Long Short-Term Memory (LSTM) artificial neural network (ANN). The optimal parts-inventory prediction model is implemented by applying daily, weekly, and monthly parts-demand data to the LSTM model for lumpy demand, which occurs irregularly in specific periods among the repair parts of the automotive A/S service. Design/methodology/approach This study classified repair parts into four demand-pattern categories using two years of demand time-series data, according to the average demand interval (ADI) and the squared coefficient of variation (CV2) of demand size. Of the 16,295 parts in the A/S service shop studied, 96.5% had a lumpy demand pattern, in which large quantities occur at specific periods. The lumpy-demand repair parts of the last three years were predicted by applying daily, weekly, and monthly time-series data to the LSTM. As model-performance evaluation indices, MAPE, RMSE, and RMSLE, which measure the error between predicted and actual values, were used. Findings Daily time-series data were predicted best, with the lowest MAPE, RMSE, and RMSLE values, followed by weekly and monthly time-series data; this is due to the decrease in training data at weekly and monthly resolutions. Even if the demand period is extended to obtain more training data, prediction performance remains low because of the discontinuation of current vehicle models and the use of alternative parts, both of which lead to no further demand. Therefore, sufficient training data is important, but the selection of the prediction demand period is also a critical factor.
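The ADI/CV² categorization above follows the common four-quadrant scheme (smooth, erratic, intermittent, lumpy). A sketch using the widely cited cut-offs ADI = 1.32 and CV² = 0.49 (the paper may use different thresholds):

```python
def classify_demand(series):
    """Classify a demand series by average demand interval (ADI) and CV^2 of demand size."""
    nonzero = [d for d in series if d > 0]
    if not nonzero:
        return "no demand"
    adi = len(series) / len(nonzero)     # mean number of periods per demand occurrence
    mean = sum(nonzero) / len(nonzero)
    var = sum((d - mean) ** 2 for d in nonzero) / len(nonzero)
    cv2 = var / mean ** 2                # squared coefficient of variation of demand size
    if adi <= 1.32:
        return "smooth" if cv2 <= 0.49 else "erratic"
    return "intermittent" if cv2 <= 0.49 else "lumpy"

# Monthly demand for one repair part: rare and highly variable -> lumpy
monthly = [0, 0, 9, 0, 0, 0, 50, 0, 0, 1, 0, 0]
pattern = classify_demand(monthly)
```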

Personal Driving Style based ADAS Customization using Machine Learning for Public Driving Safety

  • Giyoung Hwang;Dongjun Jung;Yunyeong Goh;Jong-Moon Chung
    • 인터넷정보학회논문지 / Vol. 24, No. 1 / pp. 39-47 / 2023
  • The development of autonomous driving and Advanced Driver Assistance System (ADAS) technology has grown rapidly in recent years. As most traffic accidents occur due to human error, self-driving vehicles can drastically reduce the number of accidents and crashes on the roads today, so technical advancements in autonomous driving can lead to improved public driving safety. However, due to current limitations in technology and a lack of public trust in self-driving cars (and drones), actual use of Autonomous Vehicles (AVs) is still significantly low. According to prior studies, people's acceptance of an AV is mainly determined by trust, and people feel much more comfortable with a personalized ADAS designed around the way they themselves drive. Based on this need, a new attempt at a customized ADAS that considers each driver's driving style is proposed in this paper. Each driver's behavior is divided into two categories: assertive and defensive. A novel customized ADAS algorithm with high classification accuracy is designed, which classifies each driver by driving style. Each driver's driving data is collected and simulated using CARLA, an open-source autonomous driving simulator. In addition, Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) machine learning algorithms are used to optimize the ADAS parameters, and the proposed scheme achieves high classification accuracy on time-series driving data. Furthermore, from the large amount of CARLA-based feature data extracted from the drivers, distinguishable driving features are selected using Support Vector Machine (SVM) technology by comparing each feature's influence on the classification of the two categories. By extracting distinguishable features and eliminating outliers with the SVM, the classification accuracy is significantly improved. Based on this classification, the ADAS sensors can be made more sensitive for assertive drivers, enabling more advanced driving-safety support. This is especially important because the current state of the art in autonomous driving is level 3 (per the SAE International driving automation standards), which requires advanced functions that assist drivers through ADAS technology.

Tunnel wall convergence prediction using optimized LSTM deep neural network

  • Arsalan Mahmoodzadeh; Mohammadreza Taghizadeh; Adil Hussein Mohammed; Hawkar Hashim Ibrahim; Hanan Samadi; Mokhtar Mohammadi; Shima Rashidi
    • Geomechanics and Engineering / Vol. 31, No. 6 / pp. 545-556 / 2022
  • Evaluation and optimization of tunnel wall convergence (TWC) plays a vital role in preventing potential problems during the tunnel construction and utilization stages. When convergence occurs at a high rate, it can lead to significant problems such as a reduced advance rate and lower safety, which in turn increase operating costs. In order to design an effective solution, it is important to accurately predict the degree of TWC; this can reduce the level of concern and have a positive effect on the design. With the development of soft computing methods, the use of deep learning algorithms and neural networks in tunnel construction has expanded in recent years. The current study employs a long short-term memory (LSTM) deep neural network predictor model to predict TWC, based on 550 data points of observed parameters collected from different tunneling projects. Of the data collected during the pre-construction and construction phases, 80% is randomly used to train the model and the rest is used to test it. The root mean square error (RMSE) and the coefficient of determination (R2) were used to assess the performance and precision of the applied method. The results indicate acceptable and reliable accuracy: the predicted values are in good agreement with the observed data. The proposed model can be considered for use in similar ground and tunneling conditions. This work has the potential to reduce tunneling uncertainties significantly and make deep learning a valuable tool for tunnel planning.
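The two evaluation measures named above can be computed as follows (a generic sketch with hypothetical convergence values; note that R² is an agreement metric rather than a loss function):

```python
import math

def rmse(observed, predicted):
    """Root mean square error."""
    n = len(observed)
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)

def r2(observed, predicted):
    """Coefficient of determination: 1 - residual sum of squares / total sum of squares."""
    mean = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

# Hypothetical tunnel wall convergence (mm): observed vs. model output
obs = [12.0, 15.5, 14.2, 18.0]
pred = [11.5, 15.0, 14.8, 17.4]
```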

Prediction of pollution loads in agricultural reservoirs using LSTM algorithm: case study of reservoirs in Nonsan City

  • Heesung Lim;Hyunuk An;Gyeongsuk Choi;Jaenam Lee;Jongwon Do
    • 농업과학연구 / Vol. 49, No. 2 / pp. 193-202 / 2022
  • The recurrent neural network (RNN) algorithm has been widely used in water-related research areas, such as water-level and water-quality prediction, due to its excellent time-series learning capabilities. However, studies on water-quality prediction using RNN algorithms are limited by the scarcity of water-quality data, so most previous studies were based on monthly predictions. In this study, the quality of the water in a reservoir in Nonsan, Chungcheongnam-do, Republic of Korea was predicted using the RNN-LSTM algorithm, after constructing a dataset by linearly interpolating the observations to daily values. Rather than making daily predictions of water-quality factors, we attempt to predict the water quality on the 7th, 15th, 30th, 45th, and 60th days. The linearly interpolated daily water-quality data and daily weather data (rainfall, average temperature, and average wind speed) were used as inputs. The results of predicting water-quality concentrations (chemical oxygen demand [COD], dissolved oxygen [DO], suspended solids [SS], total nitrogen [T-N], and total phosphorus [T-P]) with the LSTM algorithm indicated that predictive accuracy was high on the 7th and 15th days. In the 30th-day predictions, the COD and DO items showed R2 values exceeding 0.6 at all points, whereas the SS, T-N, and T-P items showed differences depending on the factor being assessed. In the 45th-day predictions, the accuracy of all water-quality predictions except DO dropped sharply.
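The daily dataset above was built by linear interpolation of sparse observations. A minimal sketch of that preprocessing step (the sampling days and COD values are hypothetical, not the study's data):

```python
def interpolate_daily(samples):
    """Linearly interpolate (day, value) observations to a daily series.

    samples: list of (day_index, value) pairs sorted by day_index.
    Returns one value per day from the first to the last observation, inclusive.
    """
    daily = []
    for (d0, v0), (d1, v1) in zip(samples, samples[1:]):
        for d in range(d0, d1):
            daily.append(v0 + (v1 - v0) * (d - d0) / (d1 - d0))
    daily.append(samples[-1][1])  # final observation closes the series
    return daily

# Monthly COD observations (mg/L) on days 0, 30, 60 -> 61 daily values
cod_daily = interpolate_daily([(0, 4.0), (30, 7.0), (60, 5.5)])
```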

제주도 동부 중산간지역 지하수위 예측에 적합한 인공신경망 모델의 활성화함수 연구 (A study on activation functions of Artificial Neural Network model suitable for prediction of the groundwater level in the mid-mountainous area of eastern Jeju island)

  • 신문주;김정훈;강수연;이정한;강경구
    • 한국수자원학회:학술대회논문집 / 한국수자원학회 2023년도 학술발표회 / pp. 520-520 / 2023
  • In the mid-mountainous area of eastern Jeju Island, the subsurface geology composed of volcanic rock causes large and complex groundwater-level fluctuations, making prediction with models such as Artificial Neural Networks (ANN) difficult. Since prediction performance can vary with the activation function applied to the ANN, an appropriate activation function must be selected after comparative analysis. This study compares five activation functions (sigmoid, hyperbolic tangent (tanh), Rectified Linear Unit (ReLU), Leaky Rectified Linear Unit (Leaky ReLU), and Exponential Linear Unit (ELU)) on two groundwater wells located in the mid-mountainous area of eastern Jeju Island, aiming to derive the optimal activation function. In addition, to evaluate the applicability of an ANN using the optimal activation function, it was compared with the Long Short-Term Memory (LSTM) model, a widely used recurrent neural network model. As a result, the ELU function was appropriate for the well with relatively large groundwater-level fluctuations, and the Leaky ReLU function for the well with relatively small fluctuations. The activation function with the lowest prediction performance was the sigmoid function, so its use should be avoided when predicting peak and minimum groundwater levels. The ANN-ELU and ANN-Leaky ReLU models using the derived optimal activation functions showed prediction performance comparable to the LSTM model. This means that even a feed-forward ANN, given an appropriate activation function, can produce results comparable to a state-of-the-art recurrent neural network and has ample potential for application. Finally, the LSTM model showed the most appropriate prediction performance and can serve as a reference model when comparing the prediction performance of various artificial intelligence models. The method presented in this study can be usefully applied to various time-series prediction and analysis problems, such as river water-level prediction as well as groundwater-level prediction.
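The five activation functions compared in the study have simple closed forms; a reference sketch:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))         # output in (0, 1); saturates for large |x|

def tanh(x):
    return math.tanh(x)                        # output in (-1, 1); zero-centered

def relu(x):
    return max(0.0, x)                         # zero gradient for x < 0 ("dying ReLU" risk)

def leaky_relu(x, alpha=0.01):
    return x if x > 0 else alpha * x           # small negative slope keeps gradients alive

def elu(x, alpha=1.0):
    return x if x > 0 else alpha * (math.exp(x) - 1.0)  # smooth, bounded below by -alpha
```

The saturation of the sigmoid near its extremes is one plausible reason it predicted peak and minimum groundwater levels worst in the study, while the unbounded positive range of ELU and Leaky ReLU suits large fluctuations.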


역학적 모델과 딥러닝 모델을 결합한 저수지 수온 및 수질 예측 (Predicting water temperature and water quality in a reservoir using a hybrid of mechanistic model and deep learning model)

  • 김성진;정세웅
    • 한국수자원학회:학술대회논문집 / 한국수자원학회 2023년도 학술발표회 / pp. 150-150 / 2023
  • Mechanistic (process-based) models and data-driven deep learning models are widely applied to water-quality prediction, but each has strengths and weaknesses arising from its own structure and assumptions. In particular, despite excellent predictive performance, deep learning models suffer from error and overfitting-driven variance problems when training data are scarce, and unlike mechanistic models they can produce predictions that violate physical laws. The purpose of this study is to develop a PGDL (Process-Guided Deep Learning) model that combines the strengths of mechanistic and data-driven models to predict depth-wise water temperature and turbidity in a dam reservoir serving as a major water source, and to evaluate its consistency with physical laws and its predictive performance. The mechanistic and data-driven models used to develop the PGDL model are CE-QUAL-W2 and the recurrent deep learning model LSTM (Long Short-Term Memory), respectively. Each model was calibrated and trained using water temperature and turbidity measured from January to December 2020 at the K-water automatic monitoring station in front of the Soyanggang Dam. The key algorithm of the PGDL model adds the energy and mass balance terms of the mechanistic model as constraints to the LSTM objective (loss) function, in addition to the error term between measured and predicted values, imposing a penalty when predictions violate physical conservation laws and optimizing the parameters accordingly. In addition, to overcome the degradation of LSTM prediction performance caused by data scarcity, a pre-training technique that uses simulation results from the uncalibrated mechanistic model as training data was employed, and prediction performance was evaluated as a function of the proportion of measured data. The results show that the PGDL model outperformed LSTM in reproducing the energy and mass changes through the boundary conditions and the spatial energy and mass changes accompanying temperature and turbidity variations within the reservoir. Moreover, the PGDL model that used mechanistic model results as part of the LSTM training data showed better predictive performance than both CE-QUAL-W2 and LSTM even with a small amount of measured data. These results demonstrate the applicability of a new modeling technique that combines the strengths of multidimensional mechanistic hydrodynamic water-quality models and data-driven deep learning models, and suggest that mechanistic models can be usefully employed to overcome the performance degradation caused by insufficient training data for data-driven models.
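The core of the PGDL objective described above is an ordinary data-fit loss plus a penalty on violations of conservation laws. A schematic sketch (illustrative variable names; the actual model couples CE-QUAL-W2 energy and mass budget terms to an LSTM):

```python
def pgdl_loss(predicted, observed, inflow, outflow, storage_change, lam=1.0):
    """Data-fit MSE plus a mass-balance penalty.

    The penalty is zero when the predicted storage change equals the net
    inflow, i.e. when the prediction respects conservation of mass; lam
    weights physical consistency against data fit.
    """
    n = len(observed)
    mse = sum((p - o) ** 2 for p, o in zip(predicted, observed)) / n
    balance_residual = storage_change - (inflow - outflow)
    return mse + lam * balance_residual ** 2

# Perfect data fit and a closed mass budget -> zero loss
loss = pgdl_loss([1.0, 2.0], [1.0, 2.0], inflow=5.0, outflow=3.0, storage_change=2.0)
```

In training, the same idea is applied per time step through the network's loss function, so gradient descent is steered toward physically consistent parameters.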
