• Title/Summary/Keyword: Machine learning (ML)


The Investigation of Employing Supervised Machine Learning Models to Predict Type 2 Diabetes Among Adults

  • Alhmiedat, Tareq; Alotaibi, Mohammed
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.9 / pp.2904-2926 / 2022
  • Currently, diabetes is the most common chronic disease in the world, affecting 23.7% of the population in the Kingdom of Saudi Arabia. Diabetes can cause lower-limb amputations, kidney failure, and blindness among adults. Therefore, diagnosing the disease in its early stages is essential in order to save human lives. With the revolution in technology, Artificial Intelligence (AI) could play a central role in the early prediction of diabetes by employing Machine Learning (ML) technology. In this paper, we developed a diagnosis system using machine learning models for the detection of type 2 diabetes among adults, adopting two different diabetes datasets, one for training and the other for testing, to analyze and enhance the prediction accuracy. This work achieves enhanced classification accuracy by employing several pre-processing methods before applying the ML models. According to the obtained results, the implemented Random Forest (RF) classifier offers the best classification accuracy, with a classification score of 98.95%.
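
  As an illustration of the kind of pipeline this paper describes (pre-processing followed by a Random Forest classifier trained and tested on separate datasets), here is a minimal scikit-learn sketch; the file names and the "Outcome" column are hypothetical placeholders, not the paper's actual datasets.

```python
# Minimal sketch: pre-processing + Random Forest for type 2 diabetes
# prediction. File names and column names are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.metrics import accuracy_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Separate train and test datasets, as in the paper's two-dataset setup.
train = pd.read_csv("diabetes_train.csv")   # hypothetical file
test = pd.read_csv("diabetes_test.csv")     # hypothetical file

X_train, y_train = train.drop(columns="Outcome"), train["Outcome"]
X_test, y_test = test.drop(columns="Outcome"), test["Outcome"]

# Pre-processing (imputation, scaling) applied before the classifier.
model = Pipeline([
    ("impute", SimpleImputer(strategy="median")),
    ("scale", StandardScaler()),
    ("rf", RandomForestClassifier(n_estimators=200, random_state=42)),
])
model.fit(X_train, y_train)
print("Accuracy:", accuracy_score(y_test, model.predict(X_test)))
```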

Study on Accelerating Distributed ML Training in Orchestration

  • Su-Yeon Kim; Seok-Jae Moon
    • International Journal of Advanced Smart Convergence / v.13 no.3 / pp.143-149 / 2024
  • As the size of data and models in machine learning training continues to grow, training on a single server is becoming increasingly challenging. Consequently, distributed machine learning, which spreads the computational load across multiple machines, is becoming more prominent. However, several unresolved issues remain in improving the performance of distributed machine learning, including communication overhead, inter-node synchronization, data imbalance and bias, and resource management and scheduling. In this paper, we propose ParamHub, which uses orchestration to accelerate training. The system monitors the performance of each node after the first iteration and reallocates resources to slow nodes, speeding up the training process. This ensures that resources are allocated to the nodes that need them, maximizing overall resource utilization and letting all nodes progress uniformly, which yields faster training overall. The method also improves the system's scalability and flexibility, allowing effective application to clusters of various sizes.
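
  ParamHub's code is not given in the abstract; the following is a hypothetical sketch of its core idea, detecting straggler nodes after an iteration and shifting resource shares toward them. All names and the reallocation rule are assumptions.

```python
# Illustrative sketch (not ParamHub's actual code): after an iteration,
# detect straggler nodes from their step times and shift CPU shares
# toward them, taking share from fast nodes.
from statistics import mean

def rebalance(step_times: dict[str, float],
              cpu_shares: dict[str, float],
              slack: float = 1.2) -> dict[str, float]:
    """Give more CPU share to nodes slower than `slack` times the
    mean step time, funded by reducing the fast nodes' shares."""
    avg = mean(step_times.values())
    slow = [n for n, t in step_times.items() if t > slack * avg]
    fast = [n for n in step_times if n not in slow]
    if not slow or not fast:
        return cpu_shares
    # Move 10% of each fast node's share, split evenly among slow nodes.
    moved = sum(0.10 * cpu_shares[n] for n in fast)
    for n in fast:
        cpu_shares[n] *= 0.90
    for n in slow:
        cpu_shares[n] += moved / len(slow)
    return cpu_shares

# Example: node B is a straggler after the first iteration.
times = {"A": 1.0, "B": 2.6, "C": 1.1}
shares = rebalance(times, {"A": 1.0, "B": 1.0, "C": 1.0})
print(shares)  # B receives extra share taken from A and C
```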

A machine learning assisted optical multistage interconnection network: Performance analysis and hardware demonstration

  • Sangeetha Rengachary Gopalan; Hemanth Chandran; Nithin Vijayan; Vikas Yadav; Shivam Mishra
    • ETRI Journal / v.45 no.1 / pp.60-74 / 2023
  • Integrating machine learning (ML) techniques into all-optical networks can enhance the effectiveness of resource utilization, quality-of-service assurances, and scalability in optical networks. All-optical multistage interconnection networks (MINs) are implicitly designed to withstand the increasing high-volume traffic demands at data centers. However, the contention resolution mechanism in MINs becomes a bottleneck in handling such data traffic. In this paper, a select list of ML algorithms replaces the traditional electronic signal processing methods used to resolve contention in a MIN. The suitability of these algorithms for improving the performance of the entire network is assessed in terms of injection rate, average latency, and latency distribution. Our findings show that the ML module improves the performance of the network. The improved performance and traffic grooming capabilities of the module are also validated on a hardware testbed.
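
  The abstract does not specify which ML algorithms were selected; as a loose illustration of ML-based contention prediction, the sketch below trains a decision tree on synthetic per-packet features. Every feature and label here is a placeholder, not the paper's setup.

```python
# Illustrative sketch only: a classifier standing in for ML-based
# contention resolution in a multistage interconnection network (MIN).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 5000
# Hypothetical per-packet features: injection rate, destination port,
# switch stage, and queue occupancy at arrival.
X = np.column_stack([
    rng.uniform(0, 1, n),        # injection rate
    rng.integers(0, 8, n),       # destination port
    rng.integers(0, 3, n),       # MIN stage
    rng.uniform(0, 1, n),        # queue occupancy
])
# Synthetic label: contention more likely under high load and full queues.
y = ((X[:, 0] + X[:, 3]) / 2 + rng.normal(0, 0.1, n) > 0.6).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=5).fit(X_tr, y_tr)
print("Contention-prediction accuracy:", clf.score(X_te, y_te))
```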

SEQUENTIAL MINIMAL OPTIMIZATION WITH RANDOM FOREST ALGORITHM (SMORF) USING TWITTER CLASSIFICATION TECHNIQUES

  • J. Uma; K. Prabha
    • International Journal of Computer Science & Network Security / v.23 no.4 / pp.116-122 / 2023
  • Sentiment classification techniques are commonly divided into three significant categories: machine learning (ML) methods, lexicon-based (LB) methods, and hybrid methods. ML methods utilize linguistic features and apply well-known ML algorithms. In this paper, classification and identification are performed with an optimization technique called Sequential Minimal Optimization with the Random Forest algorithm (SMORF), aimed at expanding the performance and proficiency of the sentiment classification framework. Three existing classification algorithms are compared with the proposed SMORF algorithm. The experimental evaluation reports precision (P), recall (R), F-measure (F), and accuracy. The proposed SMORF provides the highest accuracy.
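
  The abstract does not spell out how SMORF fuses its two components. One plausible reading, sketched below with scikit-learn, combines an SMO-trained SVM (scikit-learn's SVC uses libsvm's SMO solver) with a Random Forest in a voting ensemble over TF-IDF tweet features; the fusion scheme and toy data are assumptions.

```python
# Illustrative sketch of pairing an SMO-trained SVM with a Random
# Forest for tweet sentiment; how SMORF actually combines the two is
# an assumption here, not the paper's published design.
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Toy data: 1 = positive, 0 = negative.
tweets = ["great phone, love it", "worst service ever",
          "battery life is amazing", "totally disappointed"]
labels = [1, 0, 1, 0]

model = make_pipeline(
    TfidfVectorizer(),
    VotingClassifier(
        estimators=[("svm", SVC()),  # libsvm's SMO solver underneath
                    ("rf", RandomForestClassifier(n_estimators=100))],
        voting="hard",  # majority vote over the two classifiers
    ),
)
model.fit(tweets, labels)
print(model.predict(["I love this", "this is awful"]))
```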

External vs. Internal: An Essay on Machine Learning Agents for Autonomous Database Management Systems

  • Fatima Khalil Aljwari
    • International Journal of Computer Science & Network Security / v.23 no.10 / pp.164-168 / 2023
  • Database management systems (DBMSs) expose many possible configurations, which makes them challenging to manage and tune. The problem grows in large-scale deployments with thousands or millions of individual DBMSs, each with its own configuration requirements. Recent research has explored using machine learning (ML)-based agents to overcome this problem through automated tuning of DBMSs. These agents extract performance metrics and behavioral information from the DBMS and then train models on this data to select the tuning actions they predict will yield the most benefit. This paper discusses two engineering approaches for integrating ML agents into a DBMS. The first is to build an external tuning controller that treats the DBMS as a black box. The second is to incorporate the ML agents natively into the DBMS's architecture.
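
  A minimal sketch of the first approach follows: an external controller that treats the DBMS as a black box, reading a metric and applying knob changes through its public interface. The DBMS stub, knob names, and greedy tuning rule are hypothetical stand-ins.

```python
# Sketch of an "external controller": the tuner only sees the DBMS
# through its metrics and configuration interfaces. Everything below
# (stub, knob names, tuning rule) is hypothetical.
import random

class BlackBoxDBMS:
    """Stand-in for a real DBMS reached over SQL/CLI interfaces."""
    def __init__(self):
        self.knobs = {"buffer_pool_mb": 128, "max_connections": 100}

    def metrics(self) -> float:
        # Toy model: throughput roughly tracks buffer pool size.
        return self.knobs["buffer_pool_mb"] * random.uniform(0.9, 1.1)

    def apply(self, knob: str, value: int) -> None:
        self.knobs[knob] = value  # e.g., via "ALTER SYSTEM SET ..."

def external_tuner(db: BlackBoxDBMS, rounds: int = 5) -> None:
    """Greedy hill-climbing: try a knob change, keep it if metrics improve."""
    best = db.metrics()
    for _ in range(rounds):
        old = db.knobs["buffer_pool_mb"]
        db.apply("buffer_pool_mb", old * 2)      # candidate action
        new = db.metrics()
        if new > best:
            best = new                           # keep the change
        else:
            db.apply("buffer_pool_mb", old)      # roll back

external_tuner(BlackBoxDBMS())
```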

Development of benthic macroinvertebrate species distribution models using the Bayesian optimization

  • Go, ByeongGeon; Shin, Jihoon; Cha, Yoonkyung
    • Journal of Korean Society of Water and Wastewater / v.35 no.4 / pp.259-275 / 2021
  • This study explored the usefulness and implications of Bayesian hyperparameter optimization in developing species distribution models (SDMs). A variety of machine learning (ML) algorithms, namely support vector machine (SVM), random forest (RF), boosted regression tree (BRT), XGBoost (XGB), and multilayer perceptron (MLP), were used for predicting the occurrence of four benthic macroinvertebrate species. The Bayesian optimization method successfully tuned the model hyperparameters, with all ML models achieving an area under the curve (AUC) > 0.7. The searched hyperparameter values generally clustered around the optima, suggesting the efficiency of Bayesian optimization in finding optimal sets of hyperparameters. Tree-based ensemble algorithms (BRT, RF, and XGB) tended to show higher performance than SVM and MLP. Important hyperparameters and their optimal values differed by species and ML model, indicating the necessity of hyperparameter tuning for improving individual model performance. The optimization results demonstrate that, for all macroinvertebrate species, SVM and RF required fewer trials to obtain optimal hyperparameter sets, leading to reduced computational cost compared with the other ML algorithms. These results suggest that Bayesian optimization is an efficient method for hyperparameter optimization of machine learning algorithms.
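
  As a concrete illustration of Bayesian hyperparameter optimization for one of the listed models, here is a sketch using Optuna (a TPE-based library; the paper's exact tooling is not stated) to tune a Random Forest by cross-validated AUC on synthetic data.

```python
# Bayesian hyperparameter optimization sketch with Optuna; the data
# and search space are placeholders, not the paper's SDM setup.
import optuna
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=12, random_state=1)

def objective(trial: optuna.Trial) -> float:
    clf = RandomForestClassifier(
        n_estimators=trial.suggest_int("n_estimators", 50, 500),
        max_depth=trial.suggest_int("max_depth", 2, 16),
        min_samples_leaf=trial.suggest_int("min_samples_leaf", 1, 10),
        random_state=1,
    )
    # AUC as the objective, matching the paper's evaluation metric.
    return cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=30)
print(study.best_params, study.best_value)
```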

Implementation of Target Object Tracking Method using Unity ML-Agent Toolkit

  • Han, Seok Ho; Lee, Yong-Hwan
    • Journal of the Semiconductor & Display Technology / v.21 no.3 / pp.110-113 / 2022
  • Non-playable characters (NPCs) play an important role in improving the immersion of a game and the interest of the user, and implementing NPCs with reinforcement learning has recently been in the spotlight. In this paper, we present an AI target-tracking method based on reinforcement learning and implement an AI agent that tracks a specific target object while avoiding traps, using the Unity ML-Agents Toolkit. The implementation is built in the Unity game engine, and simulations are conducted through a number of experiments. The experimental results show that the agent tracks the target while avoiding traps with strong performance.
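
  On the Python side, an ML-Agents environment exported from Unity can be driven through the mlagents_envs low-level API roughly as sketched below; the build name is hypothetical, continuous actions are assumed, and random actions stand in for the trained tracking policy.

```python
# Sketch of driving a Unity ML-Agents build from Python. The build
# file name is hypothetical; random actions replace a trained policy.
import numpy as np
from mlagents_envs.base_env import ActionTuple
from mlagents_envs.environment import UnityEnvironment

env = UnityEnvironment(file_name="TrackerBuild")  # hypothetical build
env.reset()
behavior_name = list(env.behavior_specs)[0]
spec = env.behavior_specs[behavior_name]

for episode in range(3):
    env.reset()
    decision_steps, terminal_steps = env.get_steps(behavior_name)
    while len(terminal_steps) == 0:
        # Random continuous actions stand in for the tracking policy.
        n = len(decision_steps)
        action = ActionTuple(continuous=np.random.uniform(
            -1, 1, (n, spec.action_spec.continuous_size)
        ).astype(np.float32))
        env.set_actions(behavior_name, action)
        env.step()
        decision_steps, terminal_steps = env.get_steps(behavior_name)
env.close()
```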

Predicting the maximum lateral load of reinforced concrete columns with traditional machine learning, deep learning, and structural analysis software

  • Pelin Canbay; Sila Avgin; Mehmet M. Kose
    • Computers and Concrete / v.33 no.3 / pp.285-299 / 2024
  • Recently, many engineering computations have undergone a digital transformation to Machine Learning (ML)-based systems. Predicting the behavior of a structure, traditionally computed with structural analysis software, is an essential step before construction for efficient structural analysis. In the seismic design procedure in particular, predicting the lateral load capacity of reinforced concrete (RC) columns is a vital factor. In this study, a novel ML-based model is proposed to predict the maximum lateral load capacity of RC columns under varying axial or cyclic loadings. The proposed model is built with a Deep Neural Network (DNN) and compared with traditional ML techniques as well as a popular commercial structural analysis software package. In the design and test phases of the proposed model, 319 columns with rectangular and square cross-sections are incorporated, and 33 parameters are used to predict the maximum lateral load capacity of each RC column. While some traditional ML techniques predict better than the compared commercial software, the proposed DNN model provides the best prediction results in the analysis. The experimental results indicate that the proposed DNN approach can be applied to other engineering problems as well.
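
  A minimal Keras sketch of a DNN regressor of this shape follows; the 33 inputs and 319 samples mirror the counts quoted above, but the architecture, training data, and hyperparameters are assumptions, not the paper's model.

```python
# Illustrative DNN regressor for maximum lateral load prediction.
# Input/sample counts mirror the abstract; everything else is assumed.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
X = rng.normal(size=(319, 33)).astype("float32")    # 319 columns, 33 params
y = (X @ rng.normal(size=(33, 1))).astype("float32")  # synthetic target

model = tf.keras.Sequential([
    tf.keras.Input(shape=(33,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),  # predicted maximum lateral load
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X, y, epochs=20, validation_split=0.2, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [MSE, MAE]
```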

Comparison of machine learning algorithms for regression and classification of ultimate load-carrying capacity of steel frames

  • Kim, Seung-Eock; Vu, Quang-Viet; Papazafeiropoulos, George; Kong, Zhengyi; Truong, Viet-Hung
    • Steel and Composite Structures / v.37 no.2 / pp.193-209 / 2020
  • In this paper, the efficiency of five Machine Learning (ML) methods, consisting of Deep Learning (DL), Support Vector Machine (SVM), Random Forest (RF), Decision Tree (DT), and Gradient Tree Boosting (GTB), for regression and classification of the Ultimate Load Factor (ULF) of nonlinear inelastic steel frames is compared. For this purpose, a two-story, a six-story, and a twenty-story space frame are considered. An advanced nonlinear inelastic analysis is carried out for the steel frames to generate datasets for training the considered ML methods. In each dataset, the input variables are the geometric features of W-sections and the output variable is the ULF of the frame. The five ML methods are compared in terms of mean squared error (MSE) for the regression models and accuracy for the classification models. Moreover, the ULF distribution curve is calculated for each frame and the strength failure probability is estimated. It is found that the GTB method has the best efficiency in both regression and classification of ULF, regardless of the number of training samples and the space frames considered.
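
  The comparison setup can be sketched as below: several regressors scored by MSE on a shared dataset. The data are synthetic stand-ins for the W-section geometry and ULF targets used in the paper, and SVM/RF/DT/GTB are represented by their scikit-learn counterparts (DL omitted for brevity).

```python
# Sketch of an MSE-based regressor comparison; data are synthetic
# placeholders for W-section geometry and ULF targets.
import numpy as np
from sklearn.ensemble import (GradientBoostingRegressor,
                              RandomForestRegressor)
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(1000, 6))               # stand-in section geometry
y = X.sum(axis=1) + rng.normal(0, 0.1, 1000)  # stand-in ULF

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
models = {
    "SVM": SVR(),
    "RF": RandomForestRegressor(random_state=0),
    "DT": DecisionTreeRegressor(random_state=0),
    "GTB": GradientBoostingRegressor(random_state=0),
}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    print(name, "MSE:", mean_squared_error(y_te, m.predict(X_te)))
```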

Identification of shear transfer mechanisms in RC beams by using machine-learning technique

  • Zhang, Wei; Lee, Deuckhang; Ju, Hyunjin; Wang, Lei
    • Computers and Concrete / v.30 no.1 / pp.43-74 / 2022
  • Machine learning techniques are opening new opportunities to identify the complex shear transfer mechanisms of reinforced concrete (RC) beam members. This study employed 1224 shear test specimens to train decision tree-based machine learning (ML) programs, which affirmed strong correlations between the shear capacity of RC beams and key input parameters. In addition, the shear contributions of concrete and shear reinforcement (the so-called Vc and Vs) were identified by establishing three independent ML models trained under different strategies with various combinations of datasets. Detailed parametric studies were then conducted by utilizing the well-trained ML models. It appeared that the presence of shear reinforcement can make the predicted shear contribution from concrete in RC beams larger than the pure shear contribution of concrete, due to the interaction between shear reinforcement and concrete. On the other hand, the size effect also has a significant impact on the shear contribution of concrete (Vc), whereas the addition of shear reinforcement can effectively mitigate the size effect. It was also found that concrete tends to be the primary source of shear resistance when the shear span-depth ratio a/d < 1.0, while shear reinforcement becomes the primary source of shear resistance when a/d > 2.0.
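
  As a rough illustration of the decision tree-based approach, the sketch below fits a gradient-boosted tree ensemble to synthetic beam data and prints feature importances. The four features and the synthetic capacity formula are placeholders, not the paper's 1224-specimen database.

```python
# Illustrative sketch: tree-based ensemble for RC beam shear capacity,
# with feature importances hinting at which parameters drive the
# prediction. All features and data are synthetic placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 1224  # same count as the paper's specimen database
# Hypothetical inputs: a/d ratio, depth d, f'c, shear reinf. ratio.
X = np.column_stack([
    rng.uniform(0.5, 4.0, n),    # shear span-depth ratio a/d
    rng.uniform(200, 1200, n),   # effective depth d (mm)
    rng.uniform(20, 80, n),      # concrete strength f'c (MPa)
    rng.uniform(0.0, 0.01, n),   # shear reinforcement ratio
])
# Synthetic capacity: concrete term decays with a/d, steel term adds on.
y = X[:, 2] * 0.2 / X[:, 0] + 4000 * X[:, 3] + rng.normal(0, 0.5, n)

model = GradientBoostingRegressor(random_state=0).fit(X, y)
for name, imp in zip(["a/d", "d", "f'c", "rho_v"],
                     model.feature_importances_):
    print(f"{name}: {imp:.2f}")
```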