• Title/Abstract/Keyword: Machine learning (ML)

300 results found (processing time: 0.023 s)

Hybrid machine learning with HHO method for estimating ultimate shear strength of both rectangular and circular RC columns

  • Quang-Viet Vu;Van-Thanh Pham;Dai-Nhan Le;Zhengyi Kong;George Papazafeiropoulos;Viet-Ngoc Pham
    • Steel and Composite Structures / Vol. 52, No. 2 / pp. 145-163 / 2024
  • This paper presents six novel hybrid machine learning (ML) models that combine support vector machines (SVM), decision trees (DT), random forests (RF), gradient boosting (GB), extreme gradient boosting (XGB), and categorical gradient boosting (CGB) with the Harris Hawks Optimization (HHO) algorithm. These models, namely HHO-SVM, HHO-DT, HHO-RF, HHO-GB, HHO-XGB, and HHO-CGB, are designed to predict the ultimate shear strength of both rectangular and circular reinforced concrete (RC) columns. The prediction models are established using a comprehensive database consisting of 325 experimental data points for rectangular columns and 172 for circular columns. The ML model hyperparameters are optimized through a combination of cross-validation and HHO. The performance of the hybrid ML models is evaluated and compared using various metrics, ultimately identifying the HHO-CGB model as the top performer for predicting the ultimate shear strength of both rectangular and circular RC columns. The mean R-value and mean a20-index are relatively high, reaching 0.991 and 0.959, respectively, while the mean absolute error and root mean square error are low (10.302 kN and 27.954 kN, respectively). Another comparison is conducted with four existing formulas to further validate the efficiency of the proposed HHO-CGB model. The Shapley Additive Explanations (SHAP) method is applied to analyze the contribution of each variable to the output of the HHO-CGB model, providing insights into the local and global influence of variables. The analysis reveals that the depth of the column, the length of the column, and the axial loading exert the most significant influence on the ultimate shear strength of RC columns. A user-friendly graphical interface tool is then developed based on the HHO-CGB model to facilitate practical and cost-effective usage.
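The tuning loop described above, a metaheuristic proposing hyperparameters that k-fold cross-validation then scores, can be sketched in plain Python. This is a hypothetical miniature, not the paper's implementation: it tunes k for a hand-rolled k-NN regressor on synthetic data, and uses plain random search where the paper uses HHO's hawk-inspired update rules.

```python
import random

def knn_predict(train_X, train_y, x, k):
    # predict by averaging the k nearest training targets (1-D feature)
    nbrs = sorted(range(len(train_X)), key=lambda i: abs(train_X[i] - x))[:k]
    return sum(train_y[i] for i in nbrs) / k

def cv_score(X, y, k, folds=5):
    # mean absolute error over `folds`-fold cross-validation
    idx, err = list(range(len(X))), 0.0
    for f in range(folds):
        test = idx[f::folds]
        train = [i for i in idx if i not in test]
        tX, ty = [X[i] for i in train], [y[i] for i in train]
        err += sum(abs(knn_predict(tX, ty, X[i], k) - y[i]) for i in test) / len(test)
    return err / folds

random.seed(0)
X = [random.uniform(0.0, 10.0) for _ in range(80)]   # stand-in input feature
y = [3.0 * x + random.gauss(0.0, 0.5) for x in X]    # synthetic "strength" target

# optimisation loop: random search proposes candidates here; HHO would
# instead move a population of candidates with its exploration/exploitation rules
best_k, best_err = None, float("inf")
for _ in range(20):
    k = random.randint(1, 15)
    e = cv_score(X, y, k)
    if e < best_err:
        best_k, best_err = k, e
print("best k:", best_k, "CV MAE:", round(best_err, 3))
```

The same skeleton applies to any of the six base learners: only `knn_predict` and the hyperparameter proposal change.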

Prediction of intensive care unit admission using machine learning in patients with odontogenic infection

  • Joo-Ha Yoon;Sung Min Park
    • Journal of the Korean Association of Oral and Maxillofacial Surgeons / Vol. 50, No. 4 / pp. 216-221 / 2024
  • Objectives: This study aimed to develop and validate a model to predict the need for intensive care unit (ICU) admission in patients with dental infections using an automated machine learning (ML) program called H2O-AutoML. Materials and Methods: Two models were created using only the information available at the initial examination. Model 1 was parameterized with only clinical symptoms and blood tests, excluding the contrast-enhanced multi-detector computed tomography (MDCT) images available at the initial visit, whereas model 2 was created by adding the MDCT information to the model 1 parameters. Although model 2 was expected to be superior to model 1, we wanted to verify this independently. A total of 210 patients who visited the Department of Oral and Maxillofacial Surgery at the Dankook University Dental Hospital from March 2013 to August 2023 were included in this study. The patients' demographic characteristics (sex, age, and place of residence), systemic factors (hypertension, diabetes mellitus [DM], kidney disease, liver disease, heart disease, anticoagulation therapy, and osteoporosis), local factors (smoking status, site of infection, postoperative wound infection, dysphagia, odynophagia, and trismus), and factors known from initial blood tests were obtained from their medical charts and retrospectively reviewed. Results: The generalized linear model algorithm provided the best diagnostic accuracy, with area under the receiver operating characteristic curve values of 0.8289 in model 1 and 0.8415 in model 2. In both models, the C-reactive protein level was the most important variable, followed by DM. Conclusion: This study provides unprecedented data on the use of ML for successful prediction of ICU admission based on initial examination results. These findings will considerably contribute to the development of the field of dentistry, especially oral and maxillofacial surgery.

손가락 움직임 인식을 위한 웨어러블 디바이스 설계 및 ML 기법별 성능 분석 (Design and Performance Analysis of ML Techniques for Finger Motion Recognition)

  • 정우순;이형규
    • 한국산업정보학회논문지 / Vol. 25, No. 2 / pp. 129-136 / 2020
  • Control through finger motion recognition is one of the most intuitive human-computer interaction methods. In this study, we implement a wearable device for efficient finger motion recognition using several machine learning (ML) techniques. We compare and analyze the efficiency and accuracy of finger motion recognition by applying not only the HMM (Hidden Markov Model) and DTW (Dynamic Time Warping) techniques traditionally used for time-series data analysis in motion recognition, but also an NN (Neural Network) technique. In the proposed system, a preprocessing pipeline optimized for each ML technique is applied to obtain a lightweight ML model. Experimental results show that the optimized NN-, HMM-, and DTW-based finger motion recognition systems achieve accuracies of 99.1%, 96.6%, and 95.9%, respectively.
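The DTW technique mentioned above can be sketched in a few lines: compute an elastic alignment distance between a query signal and per-gesture templates, then classify by the nearest template. The gesture names and sensor values below are invented for illustration; real input would come from the wearable's sensors.

```python
def dtw(a, b):
    # classic dynamic-time-warping distance between two 1-D sequences
    inf = float("inf")
    n, m = len(a), len(b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # best of insertion, deletion, and match moves
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# hypothetical sensor templates for two finger gestures
templates = {"flex": [0, 1, 3, 5, 5, 3, 1, 0], "tap": [0, 4, 0, 4, 0]}
query = [0, 1, 2, 5, 5, 2, 1, 0]          # unseen "flex"-like signal
label = min(templates, key=lambda g: dtw(query, templates[g]))
print(label)
```

DTW's elastic alignment is what makes it robust to the speed variations of finger motions, at the cost of O(nm) time per comparison, which is why the paper pairs it with a lightweight preprocessing step.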

IoT Security and Machine Learning

  • Almalki, Sarah;Alsuwat, Hatim;Alsuwat, Emad
    • International Journal of Computer Science & Network Security / Vol. 22, No. 5 / pp. 103-114 / 2022
  • The Internet of Things (IoT) is one of the fastest-growing technologies and is used in various applications and fields. The concept of IoT will not be limited to scientific and technical life but will gradually spread to become an essential part of our daily life and routine. IoT was once a complex term unknown to many, but it will soon become commonplace: a natural and indispensable part of daily routine in which smart devices and sensors are connected, wirelessly or wired, over the Internet to exchange and process data. Despite all the benefits and advantages offered by the IoT, it faces many security and privacy challenges because current traditional security protocols are not suitable for IoT technologies. In this paper, we present a comprehensive survey of the latest studies, from 2018 to 2021, related to the security of the IoT and the use of machine learning (ML) and deep learning in addressing security and privacy in the IoT. An introduction is presented first, followed by a comprehensive overview of the IoT, its applications, and the basic safety requirements of confidentiality, integrity, and availability and their application in the IoT. We then review the attacks and challenges facing the IoT, and focus on ML and its applications in addressing IoT security problems.

A Network Packet Analysis Method to Discover Malicious Activities

  • Kwon, Taewoong;Myung, Joonwoo;Lee, Jun;Kim, Kyu-il;Song, Jungsuk
    • Journal of Information Science Theory and Practice / Vol. 10, Special Issue / pp. 143-153 / 2022
  • With the development of networks and the increase in the number of network devices, the number of cyber attacks targeting them is also increasing. Since these cyber attacks aim to steal important information and destroy systems, it is necessary to minimize social and economic damage through early detection and rapid response. Many studies using machine learning (ML) and artificial intelligence (AI) have been conducted, among which payload learning is one of the most intuitive and effective methods of detecting malicious behavior. In this study, we propose a preprocessing method that maximizes model performance when learning the payload in term units. The proposed method constructs a high-quality learning data set by eliminating unnecessary noise (stopwords) and preserving important features, in consideration of both the machine-language and natural-language characteristics of the packet payload. Our method consists of three steps: preserving significant special characters, generating a stopword list, and refining class labels. By processing packets of various and complex structures through these three steps, it is possible to produce high-quality training data that helps build high-performance ML/AI models for security monitoring. We prove the effectiveness of the proposed method by comparing the performance of AI models trained with and without it. Furthermore, by evaluating an AI model built with the proposed method in a real-world Security Operations Center (SOC) environment with live network traffic, we demonstrate its applicability to real environments.
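The first two preprocessing steps, preserving payload-significant special characters and dropping a stopword list, might look like the following minimal sketch. The keep-set and stopword list here are illustrative guesses; the paper derives its stopword list from the data rather than fixing it by hand.

```python
# hypothetical stopword list of common protocol terms; the paper
# generates its list from corpus statistics rather than hand-picking it
STOPWORDS = {"http", "1.1", "host", "connection", "keep-alive", "gzip"}
KEEP = set("/=?.-_")   # special characters preserved as term features

def preprocess(payload: str) -> list[str]:
    # lowercase, blank out characters outside the keep-set, split into
    # terms, then drop stopwords
    cleaned = "".join(c if c.isalnum() or c in KEEP or c.isspace() else " "
                      for c in payload.lower())
    return [t for t in cleaned.split() if t not in STOPWORDS]

tokens = preprocess("GET /admin.php?cmd=cat%20/etc/passwd HTTP/1.1\r\nHost: victim")
print(tokens)
```

Note how keeping `/`, `?`, and `=` preserves the attack-revealing term `/admin.php?cmd=cat` as a single feature, while noise terms such as `host` are removed.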

Assessment of maximum liquefaction distance using soft computing approaches

  • Kishan Kumar;Pijush Samui;Shiva S. Choudhary
    • Geomechanics and Engineering / Vol. 37, No. 4 / pp. 395-418 / 2024
  • Liquefaction-related damage typically occurs in the epicentral region of an earthquake. To determine the maximum distance, such as the maximum epicentral distance (Re), maximum fault distance (Rf), or maximum hypocentral distance (Rh), at which an earthquake of a given magnitude can inflict damage, this study builds multiple ML models on a recently updated global liquefaction database to predict these limiting distances (Re, Rf, or Rh). Four machine learning models, LSTM (Long Short-Term Memory), BiLSTM (Bidirectional Long Short-Term Memory), CNN (Convolutional Neural Network), and XGB (Extreme Gradient Boosting), are developed using the Python programming language. All four proposed ML models performed better than empirical models for limiting-distance assessment, and among them the XGB model outperformed all others. A number of statistical parameters were studied to determine how well the suggested models predict limiting distances, and rank analysis, an error matrix, and a Taylor diagram were developed to compare their accuracy. The ML models proposed in this paper are more robust than other current models and may be used to assess the minimal energy of a liquefaction disaster caused by an earthquake, or to estimate the maximum distance of a liquefied site for a given earthquake in rapid disaster mapping.

Study on failure mode prediction of reinforced concrete columns based on class imbalanced dataset

  • Mingyi Cai;Guangjun Sun;Bo Chen
    • Earthquakes and Structures / Vol. 27, No. 3 / pp. 177-189 / 2024
  • Accurately predicting the failure modes of reinforced concrete (RC) columns is essential for structural design and assessment. In this study, the challenges of imbalanced datasets and complex feature selection in machine learning (ML) methods were addressed through an optimized ML approach. By combining feature selection and oversampling techniques, the prediction of seismic failure modes in rectangular RC columns was improved. Two feature selection methods were used to identify six input parameters. To tackle class imbalance, the Borderline-SMOTE1 algorithm was employed, enhancing the learning capabilities of the models for minority classes. Eight ML algorithms were trained and fine-tuned using k-fold shuffle split cross-validation and grid search. The results showed that the artificial neural network model achieved 96.77% accuracy, while k-nearest neighbor, support vector machine, and random forest models each achieved 95.16% accuracy. The balanced dataset led to significant improvements, particularly in predicting the flexure-shear failure mode, with accuracy increasing by 6%, recall by 8%, and F1 scores by 7%. The use of the Borderline-SMOTE1 algorithm significantly improved the recognition of samples at failure mode boundaries, enhancing the classification performance of models like k-nearest neighbor and decision tree, which are highly sensitive to data distribution and decision boundaries. This method effectively addressed class imbalance and selected relevant features without requiring complex simulations like traditional methods, proving applicable for discerning failure modes in various concrete members under seismic action.
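A simplified, pure-Python version of the Borderline-SMOTE1 idea used above: find minority points whose k-nearest neighbourhoods are dominated (but not entirely occupied) by the majority class, then synthesize new samples by interpolating from those borderline points towards minority neighbours. The 2-D data and parameters below are toy values, not the paper's column dataset.

```python
import random

def borderline_smote1(minority, majority, k=5, n_new=20, seed=0):
    """Sketch of Borderline-SMOTE1: oversample only 'DANGER' minority
    points, i.e. those with at least k/2 but fewer than k majority-class
    points among their k nearest neighbours over both classes."""
    rng = random.Random(seed)
    labeled = [(p, 0) for p in minority] + [(p, 1) for p in majority]

    def neighbours(p, pts):
        # sort labelled points by squared Euclidean distance to p
        return sorted(pts, key=lambda q: sum((a - b) ** 2 for a, b in zip(q[0], p)))

    danger = []
    for p in minority:
        others = [q for q in labeled if q[0] is not p]
        maj = sum(lab for _, lab in neighbours(p, others)[:k])
        if k / 2 <= maj < k:
            danger.append(p)

    synthetic = []
    for _ in range(n_new):
        if not danger:
            break
        p = rng.choice(danger)
        # interpolate towards a random minority-class neighbour
        nb = rng.choice(neighbours(p, [(q, 0) for q in minority if q is not p])[:k])[0]
        gap = rng.random()
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(p, nb)))
    return synthetic

# toy imbalanced 2-D data: a small minority cluster overlapping the majority
minority = [(1.0, 1.0), (1.2, 0.9), (2.0, 2.0), (5.0, 5.0)]
majority = [(1.5, 1.1), (1.1, 1.4), (0.8, 1.2), (1.6, 0.8),
            (2.1, 1.9), (1.9, 2.2), (6.0, 1.0), (1.0, 6.0)]
new_pts = borderline_smote1(minority, majority, k=3, n_new=10)
print(len(new_pts), new_pts[0])
```

Focusing generation on the decision boundary is what improves recall for the flexure-shear class: the new samples densify exactly the region where classifiers like k-NN and decision trees are most sensitive.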

Using machine learning for anomaly detection on a system-on-chip under gamma radiation

  • Eduardo Weber Wachter;Server Kasap;Sefki Kolozali;Xiaojun Zhai;Shoaib Ehsan;Klaus D. McDonald-Maier
    • Nuclear Engineering and Technology / Vol. 54, No. 11 / pp. 3985-3995 / 2022
  • The emergence of new nanoscale technologies has imposed significant challenges on designing reliable electronic systems for radiation environments. Some types of radiation, such as Total Ionizing Dose (TID), can cause permanent damage to nanoscale electronic devices, and current state-of-the-art approaches to tackling TID rely on expensive radiation-hardened devices. This paper focuses on a novel and different approach: using machine learning algorithms on consumer-grade Field Programmable Gate Arrays (FPGAs) to monitor TID effects so that boards can be replaced before they stop working. A key research challenge is anticipating when a board will fail completely due to TID effects. We observed internal measurements of FPGA boards under gamma radiation and used three different anomaly detection machine learning (ML) algorithms to detect anomalies in the sensor measurements in the gamma-irradiated environment. The statistical results show a highly significant relationship between the gamma radiation exposure levels and the board measurements. Moreover, our anomaly detection results show that a One-Class SVM with a Radial Basis Function kernel has an average recall score of 0.95. All anomalies can be detected before the boards become entirely inoperative, i.e., before voltages drop to zero, which was confirmed with a sanity check.
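The monitoring idea, learn a baseline from known-healthy board readings and flag measurements that drift away from it, can be sketched as follows. For brevity this uses a per-channel z-score rule as a simpler stand-in for the paper's One-Class SVM with RBF kernel, and the channel names and values are hypothetical.

```python
import statistics

class BaselineAnomalyDetector:
    """Fit on known-healthy sensor readings; flag a reading as anomalous
    when any channel deviates from its baseline mean by more than
    `z` standard deviations. A stand-in for a trained One-Class SVM."""
    def __init__(self, z=3.0):
        self.z = z
        self.stats = []

    def fit(self, rows):
        # per-channel mean and standard deviation of the healthy data
        cols = list(zip(*rows))
        self.stats = [(statistics.mean(c), statistics.stdev(c)) for c in cols]
        return self

    def is_anomaly(self, row):
        return any(abs(v - m) > self.z * s for v, (m, s) in zip(row, self.stats))

# hypothetical FPGA board channels: core voltage, aux voltage, temperature
healthy = [(1.00, 1.80, 45.0), (1.01, 1.79, 46.0), (0.99, 1.81, 44.5),
           (1.00, 1.80, 45.5), (1.02, 1.80, 45.2), (0.98, 1.79, 44.8)]
det = BaselineAnomalyDetector(z=3.0).fit(healthy)
print(det.is_anomaly((1.00, 1.80, 45.1)))   # nominal reading
print(det.is_anomaly((0.45, 1.80, 45.0)))   # core voltage sagging under TID
```

The One-Class SVM generalizes this idea: instead of an axis-aligned threshold per channel, the RBF kernel learns a flexible boundary around the healthy region in the joint sensor space.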

기계학습 기법을 적용한 고압 인젝터의 분사율 예측 (Machine-Learning Based Prediction of Rate of Injection in High-Pressure Injector)

  • 윤린;박지호;심형섭
    • 한국분무공학회지 / Vol. 29, No. 3 / pp. 147-154 / 2024
  • This study explores the rate of injection (ROI) and injection quantities of a solenoid-type high-pressure injector under varying conditions by integrating experimental methods with machine learning (ML) techniques. Experimental data for fuel injection were obtained using a Zeuch-based HDA Moehwald injection rate measurement system, which served as the foundation for developing a machine learning model. An artificial neural network (ANN) was employed to predict the ROI, ensuring accurate representation of injection behaviors and patterns. The present study examines the impact of ambient conditions, including chamber temperature, chamber pressure, and injection pressure, on the transient profiles of the ROI, quasi-steady ROI, and injection duration. Results indicate that increasing the injection pressure significantly increases ROI, with chamber pressure affecting its initial rising peak. However, the chamber temperature effect on ROI is minimal. The trained ANN model, incorporating three input conditions, accurately reflected experimental measurements and demonstrated expected trends and patterns. This model facilitates the prediction of various ROI profiles without the need for additional experiments, significantly reducing the cost and time required for developing injection control systems in next-generation aero-engine combustors.

A Lightweight Software-Defined Routing Scheme for 5G URLLC in Bottleneck Networks

  • 맛사;담프로힘;김석훈
    • 인터넷정보학회논문지 / Vol. 23, No. 2 / pp. 1-7 / 2022
  • Machine learning (ML) algorithms are intended to collaborate seamlessly to enable intelligent networking in terms of massive service differentiation and prediction, and to provide high-accuracy recommendation systems. Mobile edge computing (MEC) servers are located close to the edge networks to handle massive requests from user devices and to perform local service offloading. Moreover, lightweight methods are required for handling real-time Internet of Things (IoT) communication, especially for ultra-reliable low-latency communication (URLLC) and optimal resource utilization. To address these issues, this paper proposes an intelligent traffic-steering scheme based on the integration of MEC and lightweight ML, namely a support vector machine (SVM), for effective routing in lightweight, resource-constrained networks. The scheme provides dynamic resource handling for real-time IoT user systems based on awareness of the observed network status. The system was evaluated using computer software simulations, and the proposed approach remarkably outperformed conventional schemes in terms of significant QoS metrics, including communication latency, reliability, and communication throughput.
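The steering decision such a scheme learns, classify each flow into "offload via the MEC edge path" or "keep the default path" from simple per-flow features, can be illustrated with a linear classifier. A perceptron is used here as a minimal stand-in for the paper's SVM (both learn a linear decision boundary), and the features and labels are invented for illustration.

```python
def train_perceptron(data, epochs=50, lr=0.1):
    # learn a linear decision boundary w.x + b; a minimal stand-in
    # for the SVM used in the paper
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in data:          # y in {-1, +1}
            if y * (w[0] * x1 + w[1] * x2 + b) <= 0:   # misclassified
                w[0] += lr * y * x1
                w[1] += lr * y * x2
                b += lr * y
    return w, b

# hypothetical per-flow features: (normalised urgency of the delay budget,
# current default-link load); label +1 = steer via MEC, -1 = default path
flows = [((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.7, 0.85), 1),
         ((0.2, 0.1), -1), ((0.1, 0.3), -1), ((0.3, 0.2), -1)]
w, b = train_perceptron(flows)
steer = lambda f: 1 if w[0] * f[0] + w[1] * f[1] + b > 0 else -1
print(steer((0.85, 0.9)), steer((0.15, 0.2)))
```

An SVM would place the same kind of boundary but with maximum margin, which matters for URLLC: flows near the boundary are exactly those where a misrouted packet costs the most latency.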