• Title/Abstract/Keyword: machine learning (ML)

Search results: 280 items (processing time: 0.028 seconds)

Development of ML and IoT Enabled Disease Diagnosis Model for a Smart Healthcare System

  • Mehra, Navita;Mittal, Pooja
    • International Journal of Computer Science & Network Security / Vol. 22, No. 7 / pp.1-12 / 2022
  • Recent progress in Internet of Things (IoT) and Machine Learning (ML) based technologies has transformed the traditional healthcare system into a smart healthcare system. The incorporation of IoT and ML has changed the way patients are treated and offers many opportunities in the healthcare domain. In this view, this research article presents a new IoT- and ML-based disease diagnosis model for the diagnosis of different diseases. In the proposed model, vital signs are collected via IoT-based smart medical devices and analyzed using different data mining techniques to detect possible risks to a person's health status. Recommendations are made based on the results generated by the data mining techniques; for high-risk patients, an emergency alert is sent to healthcare service providers and family members. The model is implemented in an Anaconda Jupyter notebook using various Python libraries. The results show that, among all the data mining techniques, SVM achieved the highest accuracy of 0.897 on the dataset for the classification of Parkinson's disease.
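
A minimal sketch of an SVM classification step of the kind described above, using scikit-learn; the features and labels are random placeholders standing in for the IoT vital-signs data, and nothing here reproduces the paper's actual pipeline.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 10))              # placeholder vital-sign features
y = (X[:, 0] + X[:, 1] > 0).astype(int)     # placeholder disease label

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Scale features, then fit an RBF-kernel SVM and score it on held-out data
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print("Accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```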

트랜잭션 기반 머신러닝에서 특성 추출 자동화를 위한 딥러닝 응용 (A Deep Learning Application for Automated Feature Extraction in Transaction-based Machine Learning)

  • 우덕채;문현실;권순범;조윤호
    • 한국IT서비스학회지 / Vol. 18, No. 2 / pp.143-159 / 2019
  • Machine learning (ML) is a method of fitting given data to a mathematical model in order to derive insights or make predictions. In the age of big data, where the amount of available data increases exponentially thanks to information technology and smart devices, ML shows high prediction performance by detecting patterns without bias. Feature engineering, which generates the features that explain the problem to be solved, has a great influence on performance in the ML process, and its importance is continually emphasized. Despite this importance, it is still considered a difficult task, as it requires a thorough understanding of the domain characteristics and the source data, as well as an iterative procedure. We therefore propose methods that apply deep learning to reduce the complexity and difficulty of feature extraction and to improve the performance of ML models. A major reason for the superior performance of deep learning on complex unstructured data is that, unlike other techniques, it can extract features from the source data itself. To bring this advantage to business problems, we propose deep learning based methods that automatically extract features from transaction data or directly predict and classify target variables. In particular, we applied techniques that show high performance in text processing, based on the structural similarity between transaction data and text data, and verified the suitability of each method according to the characteristics of the transaction data. Our study not only explores the possibility of automated feature extraction but also provides a benchmark model that attains a certain level of performance before a human performs the feature extraction task. It is also expected to provide guidelines for choosing a suitable deep learning model based on the business problem and the data characteristics.
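
The idea of treating a transaction history like a text sequence can be sketched with an embedding-plus-LSTM network. This is an illustrative stand-in, not the authors' architecture; the vocabulary size, sequence length, and labels below are assumptions.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

num_items, max_len = 5000, 50          # hypothetical item vocabulary / sequence length

inputs = layers.Input(shape=(max_len,), dtype="int32")
x = layers.Embedding(input_dim=num_items, output_dim=64, mask_zero=True)(inputs)
x = layers.LSTM(32)(x)                                # 32-dim learned feature vector
outputs = layers.Dense(1, activation="sigmoid")(x)    # e.g., a binary target variable

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Dummy data just to show the shapes; real transaction sequences would be
# integer-encoded item IDs padded to max_len.
X = np.random.randint(1, num_items, size=(1000, max_len))
y = np.random.randint(0, 2, size=(1000,))
model.fit(X, y, epochs=2, batch_size=64, verbose=0)

# The intermediate LSTM output can be reused as automatically extracted features.
feature_extractor = Model(inputs, model.layers[-2].output)
features = feature_extractor.predict(X[:5])
```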

Analysis of Open-Source Hyperparameter Optimization Software Trends

  • Lee, Yo-Seob;Moon, Phil-Joo
    • International Journal of Advanced Culture Technology / Vol. 7, No. 4 / pp.56-62 / 2019
  • Recently, research using artificial neural networks has expanded from improving inference accuracy to neural network optimization and automatic structuring. The performance of a machine learning algorithm depends on how its hyperparameters are configured, so open-source hyperparameter optimization software can be an important step toward improving the performance of machine learning algorithms. In this paper, we review open-source hyperparameter optimization software packages.
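
One example of the kind of open-source software this paper surveys is Optuna; the sketch below tunes an SVM on a bundled scikit-learn dataset purely for illustration, with an arbitrarily chosen search space.

```python
import optuna
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

def objective(trial):
    # Sample hyperparameters on a log scale and score them by cross-validation
    C = trial.suggest_float("C", 1e-3, 1e3, log=True)
    gamma = trial.suggest_float("gamma", 1e-4, 1e1, log=True)
    model = SVC(C=C, gamma=gamma)
    return cross_val_score(model, X, y, cv=3).mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=30)
print(study.best_params, study.best_value)
```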

New Approaches to Xerostomia with Salivary Flow Rate Based on Machine Learning Algorithm

  • Yeon-Hee Lee;Q-Schick Auh;Hee-Kyung Park
    • Journal of Korean Dental Science / Vol. 16, No. 1 / pp.47-62 / 2023
  • Purpose: We aimed to investigate objective cutoff values of the unstimulated salivary flow rate (UFR) and stimulated salivary flow rate (SFR) in patients with xerostomia and to present an optimal machine learning model based on a classification and regression tree (CART) for all ages. Materials and Methods: A total of 829 patients with oral diseases were enrolled (591 females; mean age, 59.29±16.40 years; range, 8~95 years), comprising 199 patients with xerostomia and 630 without xerostomia. Salivary and clinical characteristics were collected and analyzed. Results: Patients with xerostomia had significantly lower UFR (0.29±0.22 vs. 0.41±0.24 ml/min) and SFR (1.12±0.55 vs. 1.39±0.94 ml/min) than those without xerostomia (P<0.001). The presence of xerostomia was significantly negatively correlated with UFR (r=-0.603, P=0.002) and SFR (r=-0.301, P=0.017). In the diagnosis of xerostomia based on the CART algorithm, the presence of stomatitis, candidiasis, halitosis, psychiatric disorder, and hyperlipidemia were significant predictors of xerostomia, and the cutoff ranges for UFR and SFR were 0.03~0.18 ml/min and 0.85~1.6 ml/min, respectively. Conclusion: Xerostomia was correlated with decreases in UFR and SFR, and their cutoff values varied depending on the patient's underlying oral and systemic conditions.
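
A CART-style tree of the kind used in the study can be sketched with scikit-learn's DecisionTreeClassifier (a CART implementation). The feature names echo the abstract, but the data and labelling rule below are synthetic placeholders, not the clinical records.

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "UFR": rng.uniform(0.0, 0.8, n),        # unstimulated flow rate (ml/min)
    "SFR": rng.uniform(0.2, 3.0, n),        # stimulated flow rate (ml/min)
    "stomatitis": rng.integers(0, 2, n),
    "candidiasis": rng.integers(0, 2, n),
    "halitosis": rng.integers(0, 2, n),
})
# Toy labelling rule loosely mimicking low-flow-rate xerostomia
y = ((df["UFR"] < 0.18) | (df["SFR"] < 0.85)).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(df, y)
print(export_text(tree, feature_names=list(df.columns)))   # inspect learned cutoffs
```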

Prediction of medication-related osteonecrosis of the jaw (MRONJ) using automated machine learning in patients with osteoporosis associated with dental extraction and implantation: a retrospective study

  • Da Woon Kwack;Sung Min Park
    • Journal of the Korean Association of Oral and Maxillofacial Surgeons / Vol. 49, No. 3 / pp.135-141 / 2023
  • Objectives: This study aimed to develop and validate machine learning (ML) models using H2O-AutoML, an automated ML program, for predicting medication-related osteonecrosis of the jaw (MRONJ) in patients with osteoporosis undergoing tooth extraction or implantation. Patients and Methods: We conducted a retrospective chart review of 340 patients who visited Dankook University Dental Hospital between January 2019 and June 2022 and met the following inclusion criteria: female, age ≥55 years, osteoporosis treated with antiresorptive therapy, and recent dental extraction or implantation. Medication administration and duration, demographics, and systemic factors (age and medical history) were considered, together with local factors such as the surgical method, the number of operated teeth, and the operation area. Six algorithms were used to generate the MRONJ prediction model. Results: Gradient boosting demonstrated the best diagnostic accuracy, with an area under the receiver operating characteristic curve (AUC) of 0.8283. Validation with the test dataset yielded a stable AUC of 0.7526. Variable importance analysis identified the duration of medication as the most important variable, followed by age, the number of operated teeth, and the operation site. Conclusion: ML models can help predict the occurrence of MRONJ in patients with osteoporosis undergoing tooth extraction or implantation based on questionnaire data acquired at the first visit.
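
A minimal H2O-AutoML run of the sort described might look like the following; the file name, column names, and settings are hypothetical placeholders rather than the study's chart-review data.

```python
import h2o
from h2o.automl import H2OAutoML

h2o.init()

# "mronj.csv" and the "MRONJ" outcome column are hypothetical placeholders.
data = h2o.import_file("mronj.csv")
data["MRONJ"] = data["MRONJ"].asfactor()          # treat the binary outcome as a factor

train, test = data.split_frame(ratios=[0.8], seed=1)
predictors = [c for c in data.columns if c != "MRONJ"]

aml = H2OAutoML(max_models=20, max_runtime_secs=600, seed=1)
aml.train(x=predictors, y="MRONJ", training_frame=train)

print(aml.leaderboard)                             # candidate models ranked by metric
print(aml.leader.model_performance(test).auc())    # hold-out AUC of the best model
```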

Machine learning-based techniques to facilitate the production of stone nano powder-reinforced manufactured-sand concrete

  • Zanyu Huang;Qiuyue Han;Adil Hussein Mohammed;Arsalan Mahmoodzadeh;Nejib Ghazouani;Shtwai Alsubai;Abed Alanazi;Abdullah Alqahtani
    • Advances in nano research / Vol. 15, No. 6 / pp.533-539 / 2023
  • This study examines four machine learning (ML)-based models for their potential to estimate the splitting tensile strength (STS) of manufactured sand concrete (MSC). The ML models were trained and tested on 310 experimental data points. The effects of stone nanopowder content (SNPC), curing age (CA), and water-to-cement (W/C) ratio on the STS of MSC were also studied. According to the results, the support vector regression (SVR) model had the highest correlation with the experimental data, although all of the optimized ML models showed promise in estimating the STS of MSC. Both the ML and the laboratory results showed that adding 10% SNPC improved the STS of MSC.
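
A hedged sketch of support vector regression on the three inputs named in the abstract (SNPC, curing age, W/C ratio); the data below are synthetic placeholders, not the 310 experimental points.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
# Columns: SNPC (%), curing age (days), W/C ratio -- ranges are assumptions
X = rng.uniform([0, 1, 0.3], [20, 90, 0.6], size=(310, 3))
y = 2.0 + 0.05 * X[:, 0] + 0.02 * X[:, 1] - 1.5 * X[:, 2] + rng.normal(0, 0.1, 310)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0)).fit(X_train, y_train)
print("R2:", r2_score(y_test, model.predict(X_test)))
```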

Surface-Engineered Graphene Surface-Enhanced Raman Scattering Platform with Machine-Learning Enabled Classification of Mixed Analytes

  • Jae Hee Cho;Garam Bae;Ki-Seok An
    • 센서학회지 / Vol. 33, No. 3 / pp.139-146 / 2024
  • Surface-enhanced Raman scattering (SERS) enables the detection of various types of π-conjugated biological and chemical molecules owing to its exceptional sensitivity in obtaining unique spectra, offering nondestructive classification capabilities for target analytes. Herein, we demonstrate an innovative strategy that provides machine learning (ML)-enabled predictive SERS platforms through surface-engineered graphene complementarily hybridized with Au nanoparticles (NPs). The hybridized Au NP/graphene SERS platforms showed exceptional sensitivity (10⁻⁷ M) due to the strong synergy between the localized electromagnetic effect and the enhanced chemical bonding reactivity. The chemical and physical properties of the demonstrated SERS platform were systematically investigated using microscopy and spectroscopic analysis. Furthermore, an ML-based strategy is proposed to predict various analytes from a featured Raman spectral database. Using a customized data-preprocessing algorithm, feature data for ML were extracted from Raman peak characteristics, such as intensity, position, and width, in the SERS spectra. Various types of ML classification models were then evaluated using k-fold cross-validation (k = 5), showing 99% prediction accuracy.
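
The k-fold (k = 5) evaluation mentioned above might be set up as follows, assuming a feature matrix already built from peak intensity, position, and width; the authors' custom preprocessing is not reproduced, and the arrays are placeholders.

```python
import numpy as np
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

X = np.random.rand(200, 30)            # placeholder peak features per spectrum
y = np.random.randint(0, 4, 200)       # placeholder analyte / mixture labels

# Stratified 5-fold cross-validation for two example classifiers
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for name, model in [("SVM", SVC()), ("RandomForest", RandomForestClassifier())]:
    scores = cross_val_score(model, X, y, cv=cv)
    print(name, scores.mean())
```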

Path Loss Prediction Using an Ensemble Learning Approach

  • Beom Kwon;Eonsu Noh
    • 한국컴퓨터정보학회논문지 / Vol. 29, No. 2 / pp.1-12 / 2024
  • Predicting path loss is one of the important factors in wireless network design, such as selecting base station installation sites in cellular networks. Conventionally, path loss values were measured through numerous field tests to determine the optimal installation site of a base station, which had the drawback of being very time-consuming. To address this problem, this study proposes a machine learning (ML)-based path loss prediction method. In particular, an ensemble learning approach is applied to improve path loss prediction performance. Bootstrap datasets are used to obtain models with different hyperparameter configurations, and these models are ensembled to build the final model. Using a publicly available path loss dataset, the performance of the proposed ensemble-based path loss prediction method was evaluated and compared with that of various ML-based methods. Experimental results demonstrate that the proposed method outperforms existing methods and predicts path loss values most accurately.
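
The ensemble idea in the abstract, training models on bootstrap resamples with different hyperparameter configurations and averaging their predictions, can be sketched as follows; the path-loss features are placeholders, not the public dataset used in the paper.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(size=(1000, 4))                    # e.g., distance, frequency, antenna heights
y = 120 + 30 * X[:, 0] + 10 * X[:, 1] + rng.normal(0, 2, 1000)   # toy path loss (dB)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = []
for depth in (2, 3, 4):                            # different hyperparameter configurations
    idx = rng.integers(0, len(X_tr), len(X_tr))    # bootstrap resample of the training set
    m = GradientBoostingRegressor(max_depth=depth, random_state=0)
    models.append(m.fit(X_tr[idx], y_tr[idx]))

y_pred = np.mean([m.predict(X_te) for m in models], axis=0)   # ensemble average
print("RMSE:", np.sqrt(mean_squared_error(y_te, y_pred)))
```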

Development of ensemble machine learning models for evaluating seismic demands of steel moment frames

  • Nguyen, Hoang D.;Kim, JunHee;Shin, Myoungsu
    • Steel and Composite Structures / Vol. 44, No. 1 / pp.49-63 / 2022
  • This study aims to develop ensemble machine learning (ML) models for estimating the peak floor acceleration and maximum top drift of steel moment frames. For this purpose, random forest, adaptive boosting, gradient boosting regression tree (GBRT), and extreme gradient boosting (XGBoost) models were considered. A total of 621 steel moment frames were analyzed under 240 ground motions using OpenSees software to generate the dataset for the ML models. From the results, the GBRT and XGBoost models exhibited the highest performance for predicting peak floor acceleration and maximum top drift, respectively. The significance of each input variable for the prediction was examined using the best-performing models and the Shapley additive explanations (SHAP) approach. The peak ground acceleration had the most significant impact on the peak floor acceleration prediction, while the spectral accelerations at 1 and 2 s had the most considerable influence on the maximum top drift prediction. Finally, a graphical user interface module was created as a pioneering step toward applying ML to estimate the seismic demands of building structures in practical design.
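
A minimal sketch of the XGBoost-plus-SHAP workflow the abstract describes; the features and target are placeholders rather than the 621-frame OpenSees dataset, and the model settings are assumptions.

```python
import numpy as np
import xgboost as xgb
import shap

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 6))     # e.g., PGA, Sa(1 s), Sa(2 s), and frame properties
y = 0.8 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 0.05, 500)   # toy demand measure

model = xgb.XGBRegressor(n_estimators=300, max_depth=4).fit(X, y)

explainer = shap.TreeExplainer(model)        # Shapley additive explanations for trees
shap_values = explainer.shap_values(X)
print(np.abs(shap_values).mean(axis=0))      # mean |SHAP| as a feature-importance measure
```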

Optimizing shallow foundation design: A machine learning approach for bearing capacity estimation over cavities

  • Kumar Shubham;Subhadeep Metya;Abdhesh Kumar Sinha
    • Geomechanics and Engineering / Vol. 37, No. 6 / pp.629-641 / 2024
  • The presence of excavations or cavities beneath the foundations of a building can have a significant impact on their stability and cause extensive damage. Traditional methods for calculating the bearing capacity and subsidence of foundations over cavities can be complex and time-consuming, particularly under varying conditions. In such situations, machine learning (ML) and deep learning (DL) techniques provide effective alternatives. This study concentrates on constructing a prediction model, based on the performance of ML and DL algorithms, that can be applied in real-world settings. The efficacy of eight algorithms, including Regression Analysis, k-Nearest Neighbor, Decision Tree, Random Forest, Multivariate Regression Spline, Artificial Neural Network, and Deep Neural Network, was evaluated. Using a Python-assisted automation technique integrated with the PLAXIS 2D platform, a dataset containing 272 cases with eight input parameters and one target variable was generated. In general, the DL model performed better than the ML models, and all models except the regression models attained outstanding results with an R² greater than 0.90. These models can also be used as surrogate models in reliability analysis to evaluate failure risks and probabilities.
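
The model-comparison step can be sketched with a subset of the listed algorithms, assuming a table with eight input columns and one target as in the abstract; the data below are synthetic placeholders, not the PLAXIS-generated cases.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.uniform(size=(272, 8))                        # 272 cases, 8 input parameters
y = X @ rng.uniform(0.5, 2.0, 8) + rng.normal(0, 0.05, 272)   # toy bearing-capacity target
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "Regression": LinearRegression(),
    "kNN": KNeighborsRegressor(),
    "DecisionTree": DecisionTreeRegressor(random_state=0),
    "RandomForest": RandomForestRegressor(random_state=0),
    "ANN": MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, round(r2_score(y_te, model.predict(X_te)), 3))
```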