• Title/Summary/Keyword: improved algorithm

Search Results: 4,961

Role of unstructured data on water surface elevation prediction with LSTM: case study on Jamsu Bridge, Korea (LSTM 기법을 활용한 수위 예측 알고리즘 개발 시 비정형자료의 역할에 관한 연구: 잠수교 사례)

  • Lee, Seung Yeon;Yoo, Hyung Ju;Lee, Seung Oh
    • Journal of Korea Water Resources Association / v.54 no.spc1 / pp.1195-1204 / 2021
  • Recently, local torrential rains have become more frequent and severe due to abnormal climate conditions, causing a surge in damage to people and property, including infrastructure along rivers. In this study, a water surface elevation prediction algorithm was developed using LSTM (Long Short-Term Memory), a machine learning technique specialized for time series data, to estimate and prevent flooding of such facilities. The study area is Jamsu Bridge, the study period covers June, July, and August of six years (2015~2020), and the water surface elevation of Jamsu Bridge three hours ahead was predicted. The input data set is composed of the water surface elevation at Jamsu Bridge (EL.m), the discharge from Paldang Dam (m3/s), the tide level at Ganghwa Bridge (cm), and the number of tweets in Seoul. Complementary data were constructed using not only the structured data common in precedent research but also unstructured data processed through word clouds, and the role of the unstructured data was assessed by comparing predictions made with and without it. When predicting the water surface elevation of Jamsu Bridge, prediction accuracy improved, showing that complementary data can support conservative alerts that reduce casualties. This study concludes that the use of complementary data is relatively effective in providing safety and convenience to users of riverside infrastructure. In the future, more accurate water surface elevation prediction is expected through additional types of unstructured data or more detailed pre-processing of the input data.
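
As a rough illustration of the approach described above, the sketch below builds a 3-hour-ahead water level predictor in PyTorch. The lookback window, hidden size, and layer layout are assumptions for illustration, not the paper's configuration.

```python
import torch
import torch.nn as nn

class WaterLevelLSTM(nn.Module):
    def __init__(self, n_features=4, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                # x: (batch, lookback, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])     # water level 3 h ahead (EL.m)

# Four inputs per time step: bridge water level, dam discharge, tide level, tweet count
x = torch.rand(32, 24, 4)                # a 24-step lookback window is an assumption
pred = WaterLevelLSTM()(x)               # (32, 1)
```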

Water Segmentation Based on Morphologic and Edge-enhanced U-Net Using Sentinel-1 SAR Images (형태학적 연산과 경계추출 학습이 강화된 U-Net을 활용한 Sentinel-1 영상 기반 수체탐지)

  • Kim, Hwisong;Kim, Duk-jin;Kim, Junwoo
    • Korean Journal of Remote Sensing / v.38 no.5_2 / pp.793-810 / 2022
  • Synthetic Aperture Radar (SAR) is considered suitable for near real-time inundation monitoring. The distinctly different intensity between water and land makes SAR adequate for waterbody detection, but the intrinsic speckle noise and variable intensity of SAR images decrease the accuracy of waterbody detection. In this study, we propose two modules, the 'morphology module' and the 'edge-enhanced module', which are combinations of pooling layers and convolutional layers that improve the accuracy of waterbody detection. The morphology module is composed of min-pooling and max-pooling layers, which produce the effect of a morphological transformation. The edge-enhanced module is composed of convolution layers with the fixed weights of a traditional edge detection algorithm. After comparing the accuracy of various versions of each module on U-Net, we found the optimal combination: a morphology module consisting of a min-pooling layer plus successive min-pooling and max-pooling layers, and an edge-enhanced module using the Scharr filter, both fed as inputs to conv9. This morphologic and edge-enhanced U-Net improved the F1-score by 9.81% over the original U-Net. Qualitative inspection showed that our model can detect small waterbodies and detailed water edges, which are distinct advances of the model presented in this research compared to the original U-Net.
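
The two modules lend themselves to a compact PyTorch sketch: min-pooling can be expressed as negated max-pooling (approximating erosion), and the Scharr filter as a frozen convolution. The exact arrangement that feeds conv9 in the full U-Net is not reproduced here; this is only a hedged reading of the description above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def min_pool2d(x, k=3):
    # Min-pooling as negated max-pooling; acts like grayscale erosion.
    return -F.max_pool2d(-x, k, stride=1, padding=k // 2)

class MorphologyModule(nn.Module):
    """Erosion branch plus erosion-then-dilation (opening-like) branch."""
    def forward(self, x):
        eroded = min_pool2d(x)
        opened = F.max_pool2d(min_pool2d(x), 3, stride=1, padding=1)
        return torch.cat([eroded, opened], dim=1)

class ScharrEdgeModule(nn.Module):
    """Convolution with frozen Scharr kernels (horizontal/vertical gradients)."""
    def __init__(self):
        super().__init__()
        kx = torch.tensor([[3., 0., -3.], [10., 0., -10.], [3., 0., -3.]])
        self.conv = nn.Conv2d(1, 2, 3, padding=1, bias=False)
        self.conv.weight = nn.Parameter(
            torch.stack([kx.unsqueeze(0), kx.t().unsqueeze(0)]), requires_grad=False)

    def forward(self, x):
        return self.conv(x)

sar = torch.rand(1, 1, 64, 64)        # single-channel SAR patch
feats = MorphologyModule()(sar)       # (1, 2, 64, 64)
edges = ScharrEdgeModule()(sar)       # (1, 2, 64, 64)
```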

Application of convolutional autoencoder for spatiotemporal bias-correction of radar precipitation (CAE 알고리즘을 이용한 레이더 강우 보정 평가)

  • Jung, Sungho;Oh, Sungryul;Lee, Daeeop;Le, Xuan Hien;Lee, Giha
    • Journal of Korea Water Resources Association / v.54 no.7 / pp.453-462 / 2021
  • As the frequency of localized heavy rainfall has increased in recent years, the importance of high-resolution radar data has also increased. This study aims to correct the bias of dual-polarization radar, which still exhibits spatial and temporal bias. In many studies, various statistical techniques have been attempted to correct the bias of radar rainfall. In this study, bias correction of the S-band dual-polarization radar used in flood forecasting by the ME was implemented with a Convolutional Autoencoder (CAE) algorithm, a type of Convolutional Neural Network (CNN). The CAE model was trained on radar data sets with a 10-min temporal resolution for the July 2017 flood event in Cheongju. The results showed that the newly developed CAE model improved the simulation in both time and space by reducing the bias of the raw radar rainfall. Therefore, the CAE model, which learns the spatial relationship between adjacent grid cells, can be used for real-time updates of grid-based climate data generated by radar and satellites.
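
A minimal convolutional autoencoder of the kind described above might look as follows in PyTorch; the layer counts and channel widths are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class CAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1),
        )

    def forward(self, x):                 # raw radar rainfall grid -> corrected grid
        return self.decoder(self.encoder(x))

field = torch.rand(8, 1, 128, 128)        # batch of 10-min radar rainfall grids
corrected = CAE()(field)                  # trained against bias-free reference targets
```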

Data Augmentation using a Kernel Density Estimation for Motion Recognition Applications (움직임 인식응용을 위한 커널 밀도 추정 기반 학습용 데이터 증폭 기법)

  • Jung, Woosoon;Lee, Hyung Gyu
    • Journal of Korea Society of Industrial Information Systems / v.27 no.4 / pp.19-27 / 2022
  • In general, the performance of an ML (Machine Learning) application is determined by various factors such as the type of ML model, the size of the model (number of parameters), hyperparameter settings during training, and the training data. In particular, recognition accuracy may deteriorate, or the model may overfit, if the amount of data used for training is insufficient. Existing studies focusing on image recognition have widely used open datasets for training and evaluating proposed ML models. However, for specific applications where the sensor used, the target of recognition, and the recognition situation differ, the dataset must be built manually. In this case, the performance of ML depends largely on the quantity and quality of the data. In this paper, the training data for a motion recognition application are augmented using a kernel density estimation algorithm, a type of non-parametric estimation method. We then compare and analyze the recognition accuracy of the ML application while varying the number of original data, the kernel type, and the augmentation rate used for data augmentation. Finally, experimental results show that recognition accuracy is improved by up to 14.31% when using the narrow-bandwidth tophat kernel.
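
The augmentation step maps naturally onto scikit-learn's KernelDensity, which supports sampling for the gaussian and tophat kernels. The sketch below, with assumed feature dimensions and augmentation rate, fits a narrow-bandwidth tophat KDE to the original samples and draws synthetic ones.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
X_orig = rng.normal(size=(100, 6))   # stand-in for motion sensor feature vectors

# Fit a narrow-bandwidth tophat kernel density to the original data
kde = KernelDensity(kernel="tophat", bandwidth=0.1).fit(X_orig)

# Draw synthetic samples (here a 3x augmentation rate) and merge for training
X_aug = kde.sample(n_samples=300, random_state=0)
X_train = np.vstack([X_orig, X_aug])
```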

Development of Graph based Deep Learning methods for Enhancing the Semantic Integrity of Spaces in BIM Models (BIM 모델 내 공간의 시멘틱 무결성 검증을 위한 그래프 기반 딥러닝 모델 구축에 관한 연구)

  • Lee, Wonbok;Kim, Sihyun;Yu, Youngsu;Koo, Bonsang
    • Korean Journal of Construction Engineering and Management / v.23 no.3 / pp.45-55 / 2022
  • BIM models allow building spaces to be instantiated and recognized as unique objects, independently of model elements. These instantiated spaces provide the semantics that can be leveraged for building code checking, energy analysis, and evacuation route analysis. However, these spaces or rooms need to be designated manually, which in practice leads to errors and omissions. Thus, most BIM models today do not guarantee the semantic integrity of space designations, limiting their potential applicability. Recent studies have explored ways to automate space allocation in BIM models using artificial intelligence algorithms, but they are limited in scope and achieve relatively low classification accuracy. This study explored the use of Graph Convolutional Networks (GCN), an algorithm tailored specifically to graph data structures. The goal was to utilize not only geometry information but also the semantic relational data between spaces and elements in the BIM model. Results of the study confirmed that accuracy improved by about 8% compared to algorithms that used only geometric distinctions between individual spaces.
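
For orientation, a single Kipf-and-Welling-style GCN layer with symmetric normalization is sketched below; the node features (geometry plus semantics) and adjacency matrix are placeholders, not the paper's BIM graph encoding.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, X, A):
        # Add self-loops, then symmetric normalization D^-1/2 (A+I) D^-1/2
        A_hat = A + torch.eye(A.size(0))
        d = A_hat.sum(dim=1)
        D_inv_sqrt = torch.diag(d.pow(-0.5))
        return torch.relu(self.linear(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X))

# X: node features (e.g., space geometry + semantics), A: space/element adjacency
X = torch.rand(5, 8)
A = (torch.rand(5, 5) > 0.5).float()
A = ((A + A.t()) > 0).float()            # symmetrize the random placeholder graph
out = GCNLayer(8, 4)(X, A)               # per-node space-class logits
```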

A Study on the Optimization of Main Dimensions of a Ship by Design Search Techniques based on the AI (AI 기반 설계 탐색 기법을 통한 선박의 주요 치수 최적화)

  • Dong-Woo Park;Inseob Kim
    • Journal of the Korean Society of Marine Environment & Safety / v.28 no.7 / pp.1231-1237 / 2022
  • In the present study, the optimization of the main particulars of a ship using AI-based design search techniques was investigated. For the design search, the SHERPA algorithm in HEEDS was applied, and CFD analysis with STAR-CCM+ was used to calculate resistance performance. The main particulars were transformed automatically by modifying them at the preprocessing stage using Java scripts and Python. A small catamaran was chosen for the present study, and the length, breadth, draft of the demi-hull, and distance between demi-hulls were taken as design variables. Total resistance was the objective function, and the range of displaced volume, considering the arrangement of the outfitting system, was chosen as the constraint. As a result, the changes in the individual design variables were within ±5%, and the total resistance of the optimized hull form decreased by 11% compared with that of the existing hull form. Throughout the present study, the resistance performance of the small catamaran could be improved by optimizing the main dimensions without directly modifying the hull shape. In addition, applying optimization with design search techniques is expected to improve the resistance performance of other ships.
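
SHERPA and STAR-CCM+ are commercial tools, so the sketch below substitutes scipy's differential evolution as a generic design-search stand-in, with a dummy surrogate in place of the CFD resistance evaluation. Only the overall loop structure (bounded variables, volume constraint, resistance objective) mirrors the description above.

```python
import numpy as np
from scipy.optimize import differential_evolution, NonlinearConstraint

# Design variables: length, breadth, draft, demi-hull separation (normalized to 1.0)
bounds = [(0.95, 1.05)] * 4              # +/-5% around the parent hull

def total_resistance(x):                 # placeholder for the CFD evaluation
    L, B, T, S = x
    return (B * T) / (L * S) + 0.1 * (x - 1.0) @ (x - 1.0)

def displaced_volume(x):                 # placeholder constraint function
    L, B, T, _ = x
    return L * B * T

# Keep displaced volume within a band dictated by outfitting arrangement
vol_con = NonlinearConstraint(displaced_volume, 0.98, 1.02)
res = differential_evolution(total_resistance, bounds, constraints=(vol_con,), seed=0)
print(res.x, res.fun)                    # optimized dimensions and resistance
```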

Developing a New Algorithm for Conversational Agent to Detect Recognition Error and Neologism Meaning: Utilizing Korean Syllable-based Word Similarity (대화형 에이전트 인식오류 및 신조어 탐지를 위한 알고리즘 개발: 한글 음절 분리 기반의 단어 유사도 활용)

  • Jung-Won Lee;Il Im
    • Journal of Intelligence and Information Systems / v.29 no.3 / pp.267-286 / 2023
  • Conversational agents such as AI speakers use voice conversation for human-computer interaction, and voice recognition errors often occur in conversational situations. Recognition errors in user utterance records can be categorized into two types. The first is misrecognition, where the agent fails to recognize the user's speech entirely. The second is misinterpretation, where the user's speech is recognized and a service is provided, but the interpretation differs from the user's intention. Of these, misinterpretation errors require separate detection because they are recorded as successful service interactions. In this study, various text separation methods were applied to detect misinterpretation. For each method, the similarity of consecutive utterance pairs was calculated using word embedding and document embedding techniques, which convert words and documents into vectors. This approach goes beyond simple word-based similarity calculation to explore a new way of detecting misinterpretation errors. The research method involved using real user utterance records to train and develop a detection model based on patterns of misinterpretation error causes. The results revealed that the most significant gains came from initial consonant extraction when detecting misinterpretation errors caused by unregistered neologisms, and comparison with the other separation methods showed that different error types could be observed. This study has two main implications. First, for misinterpretation errors that are difficult to detect for lack of recognition cues, the study proposed diverse text separation methods and found a novel method that improved performance remarkably. Second, if applied to conversational agents or voice recognition services that require neologism detection, patterns of errors arising at the voice recognition stage can be specified. The study proposed and verified that, even for utterances not categorized as errors, services can be provided according to the user's desired results.
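
Initial consonant (choseong) extraction follows directly from Unicode arithmetic on precomposed Hangul syllables. The sketch below shows that step plus a simple Jaccard similarity; the similarity measure is illustrative, not the embedding-based one used in the study.

```python
# The 19 initial consonants (choseong) in Unicode order
CHOSEONG = "ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ"

def initial_consonants(text):
    out = []
    for ch in text:
        code = ord(ch) - 0xAC00
        if 0 <= code <= 11171:                 # precomposed Hangul syllable block
            out.append(CHOSEONG[code // 588])  # 588 = 21 medials * 28 finals
        else:
            out.append(ch)                     # pass non-Hangul through unchanged
    return "".join(out)

def similarity(a, b):
    # Illustrative Jaccard similarity over extracted initial consonants
    sa, sb = set(initial_consonants(a)), set(initial_consonants(b))
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

print(initial_consonants("잠수교"))            # -> "ㅈㅅㄱ"
```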

A Study on Machine Learning of the Drivetrain Simulation Model for Development of Wind Turbine Digital Twin (풍력발전기 디지털트윈 개발을 위한 드라이브트레인 시뮬레이션 모델의 기계학습 연구)

  • Yonadan Choi;Tag Gon Kim
    • Journal of the Korea Society for Simulation / v.32 no.3 / pp.33-41 / 2023
  • As carbon-free energy has been gaining interest, renewable energy sources have been increasing. However, renewable energy is intermittent and variable, so it is difficult to predict the electrical energy produced from a renewable source. In this study, the digital twin concept is applied to address these difficulties. Considering that the rotation of a wind turbine is highly correlated with the electrical energy produced, a model that simulates rotation in the drivetrain of a wind turbine is developed. The drivetrain simulation model is based on the well-known state equation for a rotating system in mechanical engineering. Simulation-based machine learning is conducted to obtain unknown parameters not provided by the manufacturer: the simulation is repeated, and the parameters in the simulation model are corrected after each run by an optimization algorithm. The trained simulation model is validated against 27 real wind turbine operation data sets, showing an average error of 4.41%. It is therefore assessed that the drivetrain simulation model represents the real wind turbine drivetrain system well. Wind energy prediction accuracy is expected to improve as a wind turbine digital twin including the developed drivetrain simulation model is applied.
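
A one-mass rotating-system state equation, J*dw/dt = T_net - b*w, is a common minimal form of such a drivetrain model. The sketch below simulates it and fits the unknown parameters (J, b) to a measured rotor speed by repeated simulation, in the spirit of the loop described above; the torque signal and parameter values are synthetic, not turbine data.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

t = np.linspace(0, 300, 301)                      # 5 minutes of data, 1 s steps
T_net = 1.0e5 * (1 + 0.2 * np.sin(0.05 * t))      # synthetic net torque [N*m]

def simulate(params):
    J, b = params                                  # inertia, damping (unknowns)
    rhs = lambda ti, w: (np.interp(ti, t, T_net) - b * w) / J
    return solve_ivp(rhs, (t[0], t[-1]), [2.0], t_eval=t).y[0]

# "Measured" rotor speed from a hidden ground truth plus sensor noise
w_meas = simulate([4.0e6, 5.0e4]) + np.random.default_rng(0).normal(0, 0.01, t.size)

# Repeated simulation with parameter correction, here via least squares
fit = least_squares(lambda p: simulate(p) - w_meas, x0=[1.0e6, 1.0e4],
                    bounds=([1e5, 1e3], [1e8, 1e6]))
print(fit.x)                                       # recovered (J, b)
```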

Performance Analysis of DoS/DDoS Attack Detection Algorithms using Different False Alarm Rates (False Alarm Rate 변화에 따른 DoS/DDoS 탐지 알고리즘의 성능 분석)

  • Jang, Beom-Soo;Lee, Joo-Young;Jung, Jae-Il
    • Journal of the Korea Society for Simulation / v.19 no.4 / pp.139-149 / 2010
  • The Internet was designed for network scalability and best-effort service, which makes all hosts connected to it vulnerable to attack. Many papers have proposed detection algorithms against attacks using IP spoofing and DoS/DDoS attacks. The purpose of a DoS/DDoS attack is achieved within a short period after the attack begins, so such attacks should be detected as soon as possible. Attack detection algorithms are characterized by their false alarm rates, which consist of the false negative rate and the false positive rate; these are important metrics for evaluating attack detection. In this paper, we analyze by simulation how variations in the false negative rate and false positive rate affect normal traffic and attack traffic. We find that the number of passed attack packets is proportional to the false negative rate, and the number of passed normal packets is inversely proportional to the false positive rate. We also analyze the limits of attack detection arising from the relation between the false negative rate and the false positive rate. Finally, we propose a solution that minimizes these limits by defining the network state using the ratio between the number of packets classified as attack packets and the number classified as normal packets. We find that the performance of the attack detection algorithm improves when the packets classified as attacks are passed.
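
The proportionality findings above can be reproduced with a toy traffic simulation: missed attacks grow with the false negative rate, passed normal packets shrink as the false positive rate grows, and the attack/normal classification ratio can serve as a network-state indicator. All counts below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n_attack, n_normal = 10_000, 50_000

for fnr, fpr in [(0.01, 0.01), (0.05, 0.01), (0.01, 0.10)]:
    missed_attack = rng.binomial(n_attack, fnr)     # attacks passed as normal
    false_alarms = rng.binomial(n_normal, fpr)      # normal dropped as attack
    classified_attack = (n_attack - missed_attack) + false_alarms
    classified_normal = missed_attack + (n_normal - false_alarms)
    print(f"FNR={fnr:.2f} FPR={fpr:.2f} "
          f"passed_attacks={missed_attack} passed_normal={n_normal - false_alarms} "
          f"state_ratio={classified_attack / classified_normal:.3f}")
```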

A study on the application of the agricultural reservoir water level recognition model using CCTV image data (농업용 저수지 CCTV 영상자료 기반 수위 인식 모델 적용성 검토)

  • Kwon, Soon Ho;Ha, Changyong;Lee, Seungyub
    • Journal of Korea Water Resources Association / v.56 no.4 / pp.245-259 / 2023
  • Agricultural reservoirs are a critical water supply system in South Korea, providing approximately 60% of agricultural water demand. However, reservoirs face several issues that jeopardize their efficient operation and management. To address these issues, we propose a novel deep-learning-based water level recognition model that uses CCTV image data to accurately estimate water levels in agricultural reservoirs. The model consists of three main parts: (1) dataset construction, (2) image segmentation using the U-Net algorithm, and (3) CCTV-based water level recognition using either CNN or ResNet. The model was applied to two reservoirs, G-reservoir and M-reservoir, with observed CCTV images and water level time series data. The results show that the performance of the image segmentation model is superior, while the performance of the water level recognition model varies from 50 to 80% depending on the water level classification criteria (i.e., the classification guideline) and the complexity of the image data (i.e., the variability of the image pixels). The performance of the model could be further improved if more data were collected.
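
Stage (3) of the pipeline, classifying a segmented frame into discrete water level classes, might be sketched as below in PyTorch; the class count and architecture are assumptions, not the study's CNN/ResNet configuration.

```python
import torch
import torch.nn as nn

N_CLASSES = 10   # assumed number of water level bins per the classification guideline

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, N_CLASSES),
)

mask = torch.rand(4, 1, 128, 128)   # batch of U-Net water masks from stage (2)
logits = model(mask)                # (4, N_CLASSES) water level class scores
```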