Title/Summary/Keyword: machine learning model

Method of Analyzing Important Variables using Machine Learning-based Golf Putting Direction Prediction Model

  • Kim, Yeon Ho; Cho, Seung Hyun; Jung, Hae Ryun; Lee, Ki Kwang
    • Korean Journal of Applied Biomechanics / v.32 no.1 / pp.1-8 / 2022
  • Objective: This study proposes a methodology for analyzing the variables that most strongly influence putting direction prediction, using a machine learning-based putting direction prediction model trained on IMU sensor data. Method: Putting data were collected with an IMU sensor measuring 12 variables from 6 adult males in their 20s at K University who had no golf experience. The data were preprocessed for machine learning, and models were built using five machine learning algorithms. The performance of the resulting models was compared, the best-performing model was selected as the proposed model, and the 12 IMU sensor variables were then applied one by one to identify the variables most important to learning performance. Results: Comparing the performance of the five algorithms (K-NN, Naive Bayes, Decision Tree, Random Forest, and LightGBM), the LightGBM-based prediction model achieved higher prediction accuracy than the other algorithms. Using the LightGBM algorithm, an experiment was performed to rank the variables by their influence on the model's direction prediction. Conclusion: Among the five machine learning algorithms, LightGBM best predicted the putting direction. The variable with the greatest influence on the model's putting direction prediction was the left-right inclination (Roll).
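
A minimal sketch of the one-variable-at-a-time analysis this abstract describes, under assumptions: the IMU variable names and the synthetic stand-in data below are illustrative, not the study's.

```python
# Train a LightGBM classifier on each IMU variable alone and rank variables
# by cross-validated accuracy; data and feature names are synthetic stand-ins.
import numpy as np
from lightgbm import LGBMClassifier
from sklearn.model_selection import cross_val_score

FEATURES = ["roll", "pitch", "yaw", "acc_x", "acc_y", "acc_z",
            "gyro_x", "gyro_y", "gyro_z", "mag_x", "mag_y", "mag_z"]

rng = np.random.default_rng(0)
X = rng.normal(size=(600, len(FEATURES)))                    # stand-in IMU readings
y = (X[:, 0] + 0.3 * rng.normal(size=600) > 0).astype(int)   # toy direction label

scores = {}
for i, name in enumerate(FEATURES):
    model = LGBMClassifier(n_estimators=100)
    scores[name] = cross_val_score(model, X[:, [i]], y, cv=5).mean()

for name, acc in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name:>7}: accuracy = {acc:.3f}")
```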

A Case Study of Rapid AI Service Deployment - Iris Classification System

  • Yonghee LEE
    • Korean Journal of Artificial Intelligence / v.11 no.4 / pp.29-34 / 2023
  • The flow from developing a machine learning model to deploying it in a production environment faces many challenges. Efficient and reliable deployment is critical for realizing the true value of machine learning models, and bridging this gap between development and deployment has become a pivotal concern in the machine learning community. FastAPI, a modern and fast web framework for building APIs with Python, has gained substantial popularity for its speed, ease of use, and asynchronous capabilities. This paper focuses on leveraging FastAPI for deploying machine learning models, addressing considerations of integration, scalability, and performance in a production setting. In this work, we explored the seamless integration of machine learning models into FastAPI applications, enabling real-time predictions and demonstrating the possibility of scaling to a more diverse range of use cases. We discussed the intricacies of integrating popular machine learning frameworks with FastAPI, ensuring smooth interactions between data processing, model inference, and API responses. The study elucidates the integration of machine learning models into production environments using FastAPI, exploring its capabilities, features, and best practices, and delves into FastAPI's potential to provide a robust and efficient solution for deploying machine learning systems, handling real-time predictions, managing input/output data, and ensuring optimal performance and reliability.
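
A minimal serving sketch in the spirit of this case study, assuming scikit-learn for the iris model; the endpoint path and payload schema are illustrative choices, not the paper's.

```python
# Serve a scikit-learn iris classifier as a FastAPI endpoint.
from fastapi import FastAPI
from pydantic import BaseModel
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

iris = load_iris()
model = RandomForestClassifier(n_estimators=100).fit(iris.data, iris.target)

app = FastAPI()

class IrisFeatures(BaseModel):
    sepal_length: float
    sepal_width: float
    petal_length: float
    petal_width: float

@app.post("/predict")
def predict(features: IrisFeatures):
    row = [[features.sepal_length, features.sepal_width,
            features.petal_length, features.petal_width]]
    label = int(model.predict(row)[0])
    return {"class_index": label, "class_name": str(iris.target_names[label])}

# Run with, e.g.: uvicorn main:app --reload
```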

Prediction of Weight of Spiral Molding Using Injection Molding Analysis and Machine Learning

  • Bum-Soo Kim; Seong-Yeol Han
    • Design & Manufacturing / v.17 no.1 / pp.27-32 / 2023
  • In this paper, we predict the weight of a spiral molded part using injection molding CAE and machine learning. First, we generated 125 data points for the experiment through a full factorial design with 3 factors and 5 levels. Next, the weight data were obtained by performing molding analysis through CAE, and machine learning was carried out using a machine learning tool. To select the optimal model among those trained on the learning data, accuracy was evaluated using RMSE. The evaluation confirmed that the support vector machine had good predictive performance. To assess the predictive performance of this model, we randomly generated 10 non-overlapping data points within the existing injection molding condition levels and compared the CAE and support vector machine results on them. The comparison confirmed good performance, with a MAPE of 0.48%.
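
An illustrative reconstruction under assumptions: a 3-factor, 5-level full factorial design (125 runs), an assumed toy response standing in for the CAE results, and SVR scored with RMSE and MAPE.

```python
# SVR weight prediction over a full factorial design, scored on 10 random
# verification points; the response function here is a synthetic placeholder.
import numpy as np
from itertools import product
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_squared_error, mean_absolute_percentage_error

levels = np.linspace(0.0, 1.0, 5)                 # 5 normalized factor levels
X = np.array(list(product(levels, repeat=3)))     # 125 factorial runs
rng = np.random.default_rng(0)
y = 10 + 3*X[:, 0] - 2*X[:, 1] + X[:, 2] + 0.05*rng.normal(size=len(X))

model = make_pipeline(StandardScaler(), SVR(C=10.0)).fit(X, y)

X_val = rng.uniform(0.0, 1.0, size=(10, 3))       # 10 random verification points
y_val = 10 + 3*X_val[:, 0] - 2*X_val[:, 1] + X_val[:, 2]
pred = model.predict(X_val)
rmse = mean_squared_error(y_val, pred) ** 0.5
mape = mean_absolute_percentage_error(y_val, pred) * 100
print(f"RMSE = {rmse:.4f}, MAPE = {mape:.2f}%")
```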

Machine Learning Prediction for the Recurrence After Electrical Cardioversion of Patients With Persistent Atrial Fibrillation

  • Soonil Kwon; Eunjung Lee; Hojin Ju; Hyo-Jeong Ahn; So-Ryoung Lee; Eue-Keun Choi; Jangwon Suh; Seil Oh; Wonjong Rhee
    • Korean Circulation Journal / v.53 no.10 / pp.677-689 / 2023
  • Background and Objectives: There is limited evidence regarding machine-learning prediction of the recurrence of atrial fibrillation (AF) after electrical cardioversion (ECV). This study aimed to predict the recurrence of AF after ECV using machine learning on clinical features and electrocardiograms (ECGs) in persistent AF patients. Methods: We analyzed patients who underwent successful ECV for persistent AF. Machine learning was designed to predict patients with 1-month recurrence. Individual 12-lead ECGs were collected before and after ECV. Various clinical features were collected and used to train an extreme gradient boosting (XGBoost)-based model. Ten-fold cross-validation was used to evaluate the performance of the model, and the performance was compared to the C-statistics of the selected clinical features. Results: Among 718 patients (mean age 63.5±9.3 years, men 78.8%), AF recurred in 435 (60.6%) patients after 1 month. With the XGBoost-based model, the areas under the receiver operating characteristic curves (AUROCs) were 0.57, 0.60, and 0.63 when the model was trained on clinical features, ECGs, and both (the final model), respectively. For the final model, the sensitivity, specificity, and F1-score were 84.7%, 28.2%, and 0.73, respectively. Although AF duration showed the best predictive performance (AUROC, 0.58) among the clinical features, it was significantly lower than that of the final machine-learning model (p<0.001). Additional training on extended monitoring data of 15-minute single-lead ECG and photoplethysmography in available patients (n=261) did not significantly improve the model's performance. Conclusions: Machine learning showed modest performance in predicting AF recurrence after ECV in persistent AF patients, warranting further validation studies.
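
A hedged sketch of the evaluation loop this abstract describes: an XGBoost classifier scored by 10-fold cross-validated AUROC. The random arrays below are placeholders for the clinical and ECG features, which are not public.

```python
# 10-fold cross-validated AUROC for an XGBoost recurrence classifier.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.normal(size=(718, 30))     # placeholder: clinical + ECG features
y = rng.integers(0, 2, size=718)   # placeholder: 1 = recurrence at 1 month

model = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
aurocs = cross_val_score(model, X, y, cv=10, scoring="roc_auc")
print(f"10-fold AUROC: {aurocs.mean():.2f} +/- {aurocs.std():.2f}")
```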

Determination of Optimal Adhesion Conditions for FDM Type 3D Printer Using Machine Learning

  • Woo Young Lee; Jong-Hyeok Yu; Kug Weon Kim
    • Journal of Practical Engineering Education / v.15 no.2 / pp.419-427 / 2023
  • In this study, machine learning is used to find adhesion conditions that alleviate defects caused by heat shrinkage in FDM-type 3D printers. Machine learning comprises statistical methods for extracting laws from data and can be classified into supervised learning, unsupervised learning, and reinforcement learning. Here, supervised learning is used to build a function model for the adhesion between the bed and the printed part; by deriving conditions for optimum adhesion, this model can be expected to reduce output defects in FDM-type 3D printers. Machine learning code written in Python generates a function model that predicts the effect of operating variables on adhesion, using data obtained through adhesion testing. The adhesion predictions were shown to be very consistent with the verification data, and the conclusions discuss the potential of this method.
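
A minimal sketch of such a Python "function model", assuming two operating variables (bed temperature and first-layer speed) and a toy adhesion response; the study's actual variables, model form, and data are not reproduced here.

```python
# Fit a quadratic response surface predicting adhesion from operating
# variables, then score it on held-out verification data (all synthetic).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(1)
bed_temp = rng.uniform(40, 80, 200)   # assumed bed temperature, deg C
speed = rng.uniform(10, 40, 200)      # assumed first-layer speed, mm/s
adhesion = 5 + 0.10*bed_temp - 0.05*speed + rng.normal(0, 0.2, 200)

X = np.column_stack([bed_temp, speed])
X_tr, X_te, y_tr, y_te = train_test_split(X, adhesion, random_state=0)

model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(X_tr, y_tr)
print(f"R^2 on held-out verification data: {model.score(X_te, y_te):.3f}")
```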

Fault Diagnosis Management Model using Machine Learning

  • Yang, Xitong; Lee, Jaeseung; Jung, Heokyung
    • Journal of Information and Communication Convergence Engineering / v.17 no.2 / pp.128-134 / 2019
  • Based on the concept of Industry 4.0, various sensors are attached to facilities and equipment to collect data in real time and diagnose faults using analysis techniques. Diagnostic technology continuously monitors faults or performance degradation of facilities and equipment in operation and diagnoses abnormal symptoms, ensuring safety and availability through maintenance before failure occurs. In this paper, we propose a model that analyzes these data and diagnoses the state or failure of equipment using machine learning. The proposal combines a support vector machine (SVM)-based diagnosis model with a self-learning one-class SVM-based diagnostic model. In the future, we expect this model to be applicable to facilities across the entire industry, by applying actual data to the proposed diagnostic model, conducting experiments, and verifying it through model performance evaluation indices.
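
A sketch of the one-class SVM side of this idea: fit only on normal-operation readings, then flag outliers as potential faults. The sensor data below is simulated, and the hyperparameters are illustrative.

```python
# One-class SVM anomaly detector trained on healthy sensor readings.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(7)
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 4))   # healthy operation
faulty = rng.normal(loc=4.0, scale=1.0, size=(10, 4))    # simulated faults

detector = make_pipeline(StandardScaler(), OneClassSVM(nu=0.05, kernel="rbf"))
detector.fit(normal)

# predict() returns +1 for inliers (normal) and -1 for outliers (faults).
print(detector.predict(np.vstack([normal[:5], faulty])))
```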

Development of ensemble machine learning model considering the characteristics of input variables and the interpretation of model performance using explainable artificial intelligence

  • Park, Jungsu
    • Journal of Korean Society of Water and Wastewater / v.36 no.4 / pp.239-248 / 2022
  • The prediction of algal blooms is an important field of study in algal bloom management, and chlorophyll-a concentration (Chl-a) is commonly used to represent the status of an algal bloom. In recent years, advanced machine learning algorithms have been increasingly used for algal bloom prediction. In this study, XGBoost (XGB), an ensemble machine learning algorithm, was used to develop a model to predict Chl-a in a reservoir. Daily observations of water quality and climate data were used for training and testing the model. In the first step of the study, the input data were clustered into two groups (low- and high-value groups) based on the observed values of water temperature (TEMP), total organic carbon concentration (TOC), total nitrogen concentration (TN), and total phosphorus concentration (TP). For each of the four water quality items, two XGB models were developed using only the data in each clustered group (Model 1). The results were compared to the predictions of an XGB model developed using the entire dataset before clustering (Model 2). Model performance was evaluated using three indices, including the root mean squared error to observation standard deviation ratio (RSR). Model 1 improved performance for TEMP, TN, and TP, with RSRs of 0.503, 0.477, and 0.493, respectively, compared with 0.521 for Model 2. On the other hand, Model 2 showed better performance than Model 1 for TOC, with an RSR of 0.532. Explainable artificial intelligence (XAI) is an ongoing field of research in machine learning, and Shapley value analysis, a novel XAI algorithm, was also used for quantitative interpretation of the XGB models developed in this study.
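
A hedged sketch of the Model 1 idea for one item (TEMP): split the data at the median into low/high groups, fit an XGBoost regressor per group, and score with RSR. The columns and data are synthetic placeholders, and the study's proper train/test split is omitted for brevity.

```python
# Per-group XGBoost Chl-a models evaluated with the RSR index.
import numpy as np
import pandas as pd
from xgboost import XGBRegressor

def rsr(obs, pred):
    """RMSE divided by the standard deviation of the observations."""
    return float(np.sqrt(np.mean((obs - pred) ** 2)) / np.std(obs))

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "TEMP": rng.uniform(5, 30, 400),
    "TOC": rng.uniform(1, 8, 400),
    "TN": rng.uniform(0.5, 5, 400),
    "TP": rng.uniform(0.01, 0.3, 400),
})
df["chl_a"] = 2 + 0.5*df["TEMP"] + 10*df["TP"] + rng.normal(0, 1, 400)

preds = pd.Series(index=df.index, dtype=float)
median = df["TEMP"].median()
for mask in (df["TEMP"] < median, df["TEMP"] >= median):
    sub = df[mask]
    model = XGBRegressor(n_estimators=200)
    model.fit(sub.drop(columns="chl_a"), sub["chl_a"])
    preds[mask] = model.predict(sub.drop(columns="chl_a"))

print(f"RSR: {rsr(df['chl_a'], preds):.3f}")
# Shapley values for a fitted model (requires the `shap` package):
#   import shap; shap.TreeExplainer(model).shap_values(df.drop(columns="chl_a"))
```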

Distributed In-Memory Caching Method for ML Workload in Kubernetes (쿠버네티스에서 ML 워크로드를 위한 분산 인-메모리 캐싱 방법)

  • Dong-Hyeon Youn; Seokil Song
    • Journal of Platform Technology / v.11 no.4 / pp.71-79 / 2023
  • In this paper, we analyze the characteristics of machine learning workloads and, based on them, propose a distributed in-memory caching technique to improve their performance. The core of a machine learning workload is model training, which is a computationally intensive task. Performing machine learning workloads in a Kubernetes-based cloud environment, in which the computing framework and storage are separated, allows resources to be allocated effectively, but delays can occur because I/O must be performed over network communication. We propose a distributed in-memory caching technique to improve the performance of machine learning workloads in such an environment. In particular, we propose a new method of precaching the data required by machine learning workloads into the distributed in-memory cache by considering Kubeflow Pipelines, a Kubernetes-based machine learning pipeline management tool.
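
A toy sketch of the precaching idea only, with Redis standing in for the paper's distributed in-memory cache; the paths, service name, and cache API shape are assumptions, not the proposed system.

```python
# Warm a shared cache with dataset shards before a training step runs, so
# training reads hit memory instead of remote storage.
import redis

def precache_shards(cache: redis.Redis, shard_paths: list[str]) -> None:
    """Read each shard from slow storage once and keep it in the cache."""
    for path in shard_paths:
        if not cache.exists(path):
            with open(path, "rb") as f:
                cache.set(path, f.read())

def read_shard(cache: redis.Redis, path: str) -> bytes:
    """Serve training reads from the cache, falling back to storage."""
    data = cache.get(path)
    if data is None:
        with open(path, "rb") as f:
            data = f.read()
        cache.set(path, data)
    return data

# Example (assumed in-cluster service name):
#   cache = redis.Redis(host="cache-service", port=6379)
#   precache_shards(cache, ["/data/shard-000.bin", "/data/shard-001.bin"])
```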

Development of Medical Cost Prediction Model Based on the Machine Learning Algorithm

  • Han Bi KIM; Dong Hoon HAN
    • Journal of Korea Artificial Intelligence Association / v.1 no.1 / pp.11-16 / 2023
  • Accurate hospital case modeling and prediction are crucial for efficient healthcare. In this study, we demonstrate the implementation of regression analysis methods in machine learning systems, combining mathematical statistics and machine learning techniques. The developed models include Bayesian linear, artificial neural network, decision tree, decision forest, and linear regression models. Through the application of these algorithms, corresponding regression models were constructed and analyzed; the results suggest the potential of leveraging machine learning systems for medical research. The experiment aimed to create an Azure Machine Learning Studio tool for the speedy evaluation of multiple regression models. The tool facilitates the comparison of five types of regression models in a unified experiment and presents the assessment results with performance metrics. Evaluation of the regression models highlighted the advantages of boosted decision tree regression and decision forest regression in hospital case prediction. These findings could lay the groundwork for new directions in medical data processing and decision making, and future research may explore methods such as clustering, classification, and anomaly detection in healthcare systems.
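
A sketch of the five-model comparison in plain scikit-learn (the study itself used Azure Machine Learning Studio); the medical-cost data below is synthetic and the metrics are illustrative.

```python
# Cross-validated comparison of the five regression model families named
# in the abstract, on a synthetic cost dataset.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import BayesianRidge, LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(5)
X = rng.normal(size=(500, 6))                                  # placeholder features
y = 3000 + 800*X[:, 0] + 500*X[:, 1] + rng.normal(0, 200, 500) # placeholder cost

models = {
    "Bayesian linear":   BayesianRidge(),
    "Neural network":    make_pipeline(StandardScaler(),
                                       MLPRegressor(hidden_layer_sizes=(32,),
                                                    max_iter=2000)),
    "Decision tree":     DecisionTreeRegressor(max_depth=5),
    "Decision forest":   RandomForestRegressor(n_estimators=200),
    "Linear regression": LinearRegression(),
}
for name, m in models.items():
    r2 = cross_val_score(m, X, y, cv=5, scoring="r2").mean()
    print(f"{name:>17}: R^2 = {r2:.3f}")
```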

Deep Learning-based Delinquent Taxpayer Prediction: A Scientific Administrative Approach

  • YongHyun Lee; Eunchan Kim
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.1 / pp.30-45 / 2024
  • This study introduces an effective method for predicting individual local tax delinquencies using prevalent machine learning and deep learning algorithms. The evaluation of credit risk holds great significance in the financial realm, impacting both companies and individuals. While credit risk prediction has been explored using statistical and machine learning techniques, its application to tax arrears prediction remains underexplored. We forecast individual local tax defaults in the Republic of Korea using machine and deep learning algorithms, including convolutional neural networks (CNN), long short-term memory (LSTM), and sequence-to-sequence (seq2seq) models. Our model incorporates diverse credit and public information, such as loan history, delinquency records, credit card usage, and public taxation data, offering richer insights than prior studies. The results highlight the superior predictive accuracy of the CNN model. Anticipating local tax arrears more effectively could lead to more efficient allocation of administrative resources. By leveraging advanced machine learning, this research offers a promising avenue for refining tax collection strategies and resource management.
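
A hedged sketch of a CNN variant in PyTorch, treating a taxpayer's record as a multichannel monthly sequence; the feature layout, shapes, and field meanings are assumptions, not the paper's actual architecture.

```python
# 1-D CNN producing a delinquency logit from a sequence of monthly features.
import torch
import torch.nn as nn

class DelinquencyCNN(nn.Module):
    def __init__(self, n_features: int = 8, n_filters: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_features, n_filters, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # pool over the time axis
            nn.Flatten(),
            nn.Linear(n_filters, 1),   # logit for "will become delinquent"
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_features, months), e.g., 24 months of credit features
        return self.net(x)

model = DelinquencyCNN()
dummy = torch.randn(4, 8, 24)    # 4 taxpayers, 8 features, 24 months
probs = torch.sigmoid(model(dummy)).squeeze(1)
print(probs)                     # placeholder delinquency probabilities
```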