• Title/Abstract/Keyword: machine learning for regression

Search results: 581 (processing time: 0.031 sec)

Application of machine learning in optimized distribution of dampers for structural vibration control

  • Li, Luyu;Zhao, Xuemeng
    • Earthquakes and Structures / Vol. 16, No. 6 / pp.679-690 / 2019
  • This paper presents machine learning methods using a Support Vector Machine (SVM) and a Multilayer Perceptron (MLP) to analyze optimal damper distribution for structural vibration control. For different building structures, a genetic-algorithm-based optimization method is used to determine optimal damper distributions, which are then used as training samples. The structural features, the objective function, the number of dampers, etc. serve as input features, and the distribution of dampers is the output. When the number of candidate damper distributions is small, multi-class prediction can be performed using SVM and MLP, respectively; when the distribution scheme is uncountable, MLP can be used for regression prediction. After suitable post-processing, good results are obtained. Numerical results show that the proposed method obtains optimized damper distributions for different structures under different objective functions, achieves better control than the traditional uniform distribution, and greatly improves optimization efficiency.
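The train-on-optimizer-output idea above can be sketched in a few lines: pairs of (structure features, optimal distribution label) play the role of GA-generated training samples, and a classifier predicts the distribution class for a new structure. A nearest-centroid classifier stands in for the paper's SVM/MLP, and the features and labels below are invented for illustration.

```python
# Multi-class prediction of a damper-distribution class from structure features.
# Nearest-centroid stand-in for SVM/MLP; toy data, not the paper's.

def train_centroids(samples):
    """samples: list of (feature_vector, label). Returns label -> centroid."""
    sums, counts = {}, {}
    for x, y in samples:
        acc = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [s / counts[y] for s in acc] for y, acc in sums.items()}

def predict(centroids, x):
    """Return the label whose centroid is closest in Euclidean distance."""
    return min(centroids, key=lambda y: sum((a - b) ** 2
                                            for a, b in zip(centroids[y], x)))

# (storeys, objective weight) -> optimal distribution class, as a GA might yield
training = [
    ([3.0, 0.2], "uniform"), ([3.0, 0.3], "uniform"),
    ([10.0, 0.8], "top-heavy"), ([12.0, 0.9], "top-heavy"),
]
centroids = train_centroids(training)
print(predict(centroids, [11.0, 0.85]))  # → top-heavy
```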

A machine learning framework for performance anomaly detection

  • Hasnain, Muhammad;Pasha, Muhammad Fermi;Ghani, Imran;Jeong, Seung Ryul;Ali, Aitizaz
    • 인터넷정보학회논문지 / Vol. 23, No. 2 / pp.97-105 / 2022
  • Web services evolve and integrate rapidly to meet growing user requirements, so they undergo frequent updates and may suffer performance degradation from undetected faults in the updated versions. These faults can cause many performance and regression anomalies in real-world web services. This paper proposes applying a deep learning model within an explainable framework to detect performance and regression anomalies in web services. The study shows that upper- and lower-bound values on performance metrics provide a simple means of detecting such anomalies in updated versions of a web service, and the explainable deep learning method makes those detections interpretable. The evaluation results demonstrate detection of unusual web-service behavior, and the proposed approach is more efficient and straightforward at detecting regression anomalies than existing approaches.
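The upper/lower-bound detection idea described above can be illustrated with a minimal sketch: derive bounds for a performance metric (here, response time in ms) from a baseline window, then flag observations from the updated service that fall outside them. The k-sigma bounds are an assumption for this sketch; the paper derives its bounds via a deep learning model.

```python
# Flag performance anomalies as values outside baseline-derived bounds.
import statistics

def bounds(baseline, k=3.0):
    """Return (lower, upper) as mean +/- k standard deviations (assumed rule)."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return mu - k * sigma, mu + k * sigma

def anomalies(observed, lo, hi):
    """Return (index, value) pairs that violate the bounds."""
    return [(i, v) for i, v in enumerate(observed) if not lo <= v <= hi]

baseline = [102, 98, 101, 99, 100, 103, 97, 100]   # pre-update response times
lo, hi = bounds(baseline)
print(anomalies([101, 99, 180, 100], lo, hi))      # the 180 ms spike is flagged
```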

Low-GloSea6 기상 예측 소프트웨어의 머신러닝 기법 적용 연구 (A Study of the Application of Machine Learning Methods in the Low-GloSea6 Weather Prediction Solution)

  • 박혜성;조예린;신대영;윤은옥;정성욱
    • 한국정보전자통신기술학회논문지 / Vol. 16, No. 5 / pp.307-314 / 2023
  • As supercomputing and hardware technology advance, climate prediction models are becoming more sophisticated. The Korea Meteorological Administration adopted GloSea5 from the UK Met Office and now operates GloSea6, updated for the Korean meteorological environment. Universities and research institutes use Low-GloSea6, a low-resolution coupled model built to run on small and medium-sized servers rather than supercomputers. In this paper, to improve the efficiency of meteorological research on such servers, we profile the Low-GloSea6 software and identify the tri_sor_dp_dp subroutine of the atmospheric model's tri_sor.F90 module, which occupies the most CPU time, as a hotspot. We apply a linear regression model, a machine learning technique, to this subroutine to assess the method's feasibility. After removing outlier data and training the linear regression model, it achieved an RMSE of 2.7665e-08 and an MAE of 1.4958e-08, outperforming Lasso regression and ElasticNet regression. This confirms the applicability of machine learning techniques to the tri_sor.F90 module detected as a hotspot during Low-GloSea6 execution.
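The workflow in the abstract above (remove outliers, fit a linear regression, report RMSE/MAE, compare against Lasso/ElasticNet) can be sketched in pure Python. The 1.5×IQR outlier rule and the toy data are assumptions for the sketch; the study's inputs and outputs come from the tri_sor_dp_dp subroutine.

```python
# Outlier removal + ordinary least squares + RMSE/MAE on toy 1-D data.
import math
import statistics

def remove_outliers(pairs):
    """Drop points whose target lies outside the 1.5*IQR fences (assumed rule)."""
    ys = sorted(y for _, y in pairs)
    q1, _, q3 = statistics.quantiles(ys, n=4)
    lo, hi = q1 - 1.5 * (q3 - q1), q3 + 1.5 * (q3 - q1)
    return [(x, y) for x, y in pairs if lo <= y <= hi]

def fit_linear(pairs):
    """Least-squares fit y = a + b*x; returns (a, b)."""
    xs = [x for x, _ in pairs]
    ys = [y for _, y in pairs]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    b = sum((x - mx) * (y - my) for x, y in pairs) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

data = [(x, 2.0 * x + 1.0) for x in range(10)] + [(5, 500.0)]  # one outlier
clean = remove_outliers(data)
a, b = fit_linear(clean)
residuals = [y - (a + b * x) for x, y in clean]
rmse = math.sqrt(sum(r * r for r in residuals) / len(residuals))
mae = sum(abs(r) for r in residuals) / len(residuals)
print(rmse, mae)
```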

설명 가능한 인공지능을 이용한 지역별 출산율 차이 요인 분석 (Analysis of Regional Fertility Gap Factors Using Explainable Artificial Intelligence)

  • 이동우;김미경;윤정윤;류동원;송재욱
    • 산업경영시스템학회지 / Vol. 47, No. 1 / pp.41-50 / 2024
  • Korea is facing a significant problem with historically low fertility rates, which is becoming a major social issue affecting the economy, labor force, and national security. This study analyzes the factors contributing to the regional gap in fertility rates and derives policy implications. The government and local authorities are implementing a range of policies to address the issue of low fertility. To establish an effective strategy, it is essential to identify the primary factors that contribute to regional disparities. This study identifies these factors and explores policy implications through machine learning and explainable artificial intelligence. The study also examines the influence of media and public opinion on childbirth in Korea by incorporating news and online community sentiment, as well as sentiment fear indices, as independent variables. To establish the relationship between regional fertility rates and factors, the study employs four machine learning models: multiple linear regression, XGBoost, Random Forest, and Support Vector Regression. Support Vector Regression, XGBoost, and Random Forest significantly outperform linear regression, highlighting the importance of machine learning models in explaining non-linear relationships with numerous variables. A factor analysis using SHAP is then conducted. The unemployment rate, Regional Gross Domestic Product per Capita, Women's Participation in Economic Activities, Number of Crimes Committed, Average Age of First Marriage, and Private Education Expenses significantly impact regional fertility rates. However, the degree of impact of the factors affecting fertility may vary by region, suggesting the need for policies tailored to the characteristics of each region, not just an overall ranking of factors.
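The factor analysis above ranks features by their contribution to the fitted model. SHAP itself requires the shap package; as a dependency-free stand-in, this sketch ranks features by permutation importance (how much the error grows when one feature column is shuffled) on a toy linear model. The data and the two-feature model are assumptions, not the study's regional indicators.

```python
# Permutation importance: error increase when a feature column is shuffled.
import random

def mse(model, X, y):
    return sum((model(x) - t) ** 2 for x, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature, rng):
    """Return the MSE increase after shuffling one feature column."""
    base = mse(model, X, y)
    column = [x[feature] for x in X]
    rng.shuffle(column)
    X_perm = [list(x) for x in X]
    for row, v in zip(X_perm, column):
        row[feature] = v
    return mse(model, X_perm, y) - base

rng = random.Random(0)
X = [[rng.random(), rng.random()] for _ in range(50)]
model = lambda x: 3.0 * x[0] + 0.1 * x[1]   # feature 0 dominates by design
y = [model(x) for x in X]
imp = [permutation_importance(model, X, y, f, rng) for f in (0, 1)]
print(imp)  # feature 0 should rank far above feature 1
```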

Machine learning modeling of irradiation embrittlement in low alloy steel of nuclear power plants

  • Lee, Gyeong-Geun;Kim, Min-Chul;Lee, Bong-Sang
    • Nuclear Engineering and Technology / Vol. 53, No. 12 / pp.4022-4032 / 2021
  • In this study, machine learning (ML) techniques were used to model surveillance test data of nuclear power plants from an international database of the ASTM E10.02 committee. Regression modeling was conducted using various techniques, including Cubist, XGBoost, and a support vector machine. The root mean square deviation of each ML model for the baseline dataset was less than that of the ASTM E900-15 nonlinear regression model. With respect to interpolation, the ML methods provided excellent predictions with relatively little computation when applied within the given data range. The effect of the explanatory variables on the transition temperature shift (TTS) was analyzed for the ML methods, and the trends differed slightly from those of the ASTM E900-15 model. The ML methods showed some weakness in extrapolating beyond the fluence range compared with ASTM E900-15, although the Cubist method extrapolated to a certain extent. It was confirmed that, to achieve a more reliable prediction of the TTS, advanced techniques should be considered for extrapolation when applying ML modeling.
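The extrapolation weakness noted above is easy to reproduce in miniature: a nearest-neighbour (lazy) regressor predicts a constant once queried outside the training range, while a linear fit keeps extending the trend. The toy linear target below is an assumption for illustration; it is not the ASTM E900-15 TTS model.

```python
# Lazy learners saturate outside the training range; linear fits extrapolate.

def nn_predict(train, x):
    """1-NN regression: return the target of the closest training input."""
    return min(train, key=lambda p: abs(p[0] - x))[1]

def linear_fit(train):
    """Least-squares line through (x, y) pairs; returns a predict function."""
    mx = sum(x for x, _ in train) / len(train)
    my = sum(y for _, y in train) / len(train)
    b = (sum((x - mx) * (y - my) for x, y in train)
         / sum((x - mx) ** 2 for x, _ in train))
    return lambda x: my + b * (x - mx)

train = [(x, 2.0 * x) for x in range(11)]   # observed range: x in 0..10
lin = linear_fit(train)
# Inside the range both do well; at x=20 the 1-NN answer is frozen at the edge.
print(nn_predict(train, 20), lin(20))       # → 20.0 40.0
```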

Extreme Learning Machine 기반 퍼지 패턴 분류기 설계 (Design of Fuzzy Pattern Classifier based on Extreme Learning Machine)

  • 안태천;노석범;황국연;왕계홍;김용수
    • 한국지능시스템학회논문지 / Vol. 25, No. 5 / pp.509-514 / 2015
  • This paper proposes a new pattern classifier that combines the learning algorithm of the Extreme Learning Machine, a type of artificial neural network, with fuzzy set theory, which is robust to noise. The Extreme Learning Machine's learning algorithm, known for far faster training and better generalization than conventional neural networks, is applied to a fuzzy pattern classifier to improve its training speed and classification generalization performance. Various machine learning datasets are used to evaluate the training speed and generalization performance of the proposed fuzzy pattern classifier.
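The speed of the ELM training algorithm mentioned above comes from its structure: hidden-layer weights are drawn at random and never trained, and only the output weights are solved in closed form. A pure-Python, single-output sketch under that scheme follows, using ridge-regularised normal equations; the toy sine-fitting dataset is an assumption, and the fuzzy extension of the paper is not included.

```python
# Minimal Extreme Learning Machine: random hidden layer + closed-form output.
import math
import random

def solve(A, t):
    """Gauss-Jordan elimination with partial pivoting for A @ beta = t."""
    n = len(A)
    M = [row[:] + [t[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * b for a, b in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def elm_train(X, y, hidden=8, ridge=1e-6, seed=0):
    rng = random.Random(seed)
    W = [[rng.uniform(-1, 1) for _ in X[0]] for _ in range(hidden)]
    b = [rng.uniform(-1, 1) for _ in range(hidden)]
    # Hidden activations: fixed random tanh features, never trained.
    H = [[math.tanh(sum(w * v for w, v in zip(W[j], x)) + b[j])
          for j in range(hidden)] for x in X]
    # Output weights in closed form: (H^T H + ridge*I) beta = H^T y.
    A = [[sum(H[r][i] * H[r][j] for r in range(len(H)))
          + (ridge if i == j else 0.0) for j in range(hidden)]
         for i in range(hidden)]
    t = [sum(H[r][i] * y[r] for r in range(len(H))) for i in range(hidden)]
    return W, b, solve(A, t)

def elm_predict(model, x):
    W, b, beta = model
    return sum(bb * math.tanh(sum(w * v for w, v in zip(Wj, x)) + bj)
               for Wj, bj, bb in zip(W, b, beta))

X = [[i / 10.0] for i in range(20)]
y = [math.sin(x[0]) for x in X]
model = elm_train(X, y)
err = max(abs(elm_predict(model, x) - t) for x, t in zip(X, y))
print(err)
```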

데이터마이닝 기법들을 통한 제주 안개 예측 방안 연구 (A Study on Fog Forecasting Method through Data Mining Techniques in Jeju)

  • 이영미;배주현;박다빈
    • 한국환경과학회지 / Vol. 25, No. 4 / pp.603-613 / 2016
  • Fog may have a significant impact on road conditions. In an attempt to improve fog predictability in Jeju, we conducted machine learning with various data mining techniques such as tree models, conditional inference trees, random forest, multinomial logistic regression, neural networks, and support vector machines. To validate the machine learning models, the simulation results were compared with the fog data observed over Jeju (ASOS site 184) and Gosan (ASOS site 185). The predictive rates of all six data mining methods are above 92% at both regions. Additionally, we validated the performance of the machine learning models with WRF (Weather Research and Forecasting) model meteorological outputs and found that it is still not good enough for operational fog forecasting. According to the model assessment by confusion-matrix metrics, fog prediction using a neural network is the most effective method.
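The confusion-matrix assessment mentioned above reduces each model to a few scores. A minimal sketch of those scores for binary fog / no-fog predictions follows; the accuracy plus the forecast-verification pair POD (probability of detection) and FAR (false alarm ratio) are common choices, and the labels below are made up, not the Jeju ASOS observations.

```python
# Confusion-matrix scores for a binary fog / no-fog forecast.

def confusion(obs, pred):
    """Counts of true positives, true negatives, false positives, false negatives."""
    tp = sum(1 for o, p in zip(obs, pred) if o and p)
    tn = sum(1 for o, p in zip(obs, pred) if not o and not p)
    fp = sum(1 for o, p in zip(obs, pred) if not o and p)
    fn = sum(1 for o, p in zip(obs, pred) if o and not p)
    return tp, tn, fp, fn

def scores(tp, tn, fp, fn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    pod = tp / (tp + fn)   # probability of detection (hit rate)
    far = fp / (tp + fp)   # false alarm ratio
    return accuracy, pod, far

obs  = [1, 0, 0, 1, 0, 0, 1, 0]   # 1 = fog observed
pred = [1, 0, 1, 1, 0, 0, 0, 0]   # 1 = fog forecast
acc, pod, far = scores(*confusion(obs, pred))
print(acc, pod, far)
```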

Classification of COVID-19 Disease: A Machine Learning Perspective

  • Kinza Sardar
    • International Journal of Computer Science & Network Security / Vol. 24, No. 3 / pp.107-112 / 2024
  • The deadly virus known as COVID-19 spread all over the world, starting from Wuhan, China in 2019, and infected millions of people in a very short time. COVID-19 has many symptoms, and identifying whether a person is infected, that is, whether their test is positive or negative, is a difficult and challenging task. We develop a framework that applies machine learning techniques to this diagnosis problem. The proposed method uses Decision Tree, K-Nearest Neighbors, Gaussian Naive Bayes, Logistic Regression, Bernoulli Naive Bayes, and Random Forest classifiers, with 5-fold and 10-fold cross-validation applied throughout the classification process. Data preprocessing techniques were applied to improve classification performance, and recall, accuracy, precision, and F-score were used as evaluation metrics. The experimental results show that the best accuracy, 93 percent, was obtained with the Decision Tree classifier. In future work we will apply further techniques to improve on this accuracy.
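The 5-fold and 10-fold cross-validation used above can be sketched in a few lines: split the sample indices into k folds, train on k-1 folds and score on the held-out fold each round, then average. A trivial majority-class predictor stands in for the paper's classifiers, and the toy labels are an assumption.

```python
# k-fold cross-validation with a majority-class stand-in classifier.

def k_fold_indices(n, k):
    """Yield (train_idx, test_idx) pairs for k contiguous folds of n samples."""
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in sizes:
        test = list(range(start, start + size))
        train = [i for i in range(n) if i < start or i >= start + size]
        yield train, test
        start += size

def majority_class(labels):
    return max(set(labels), key=labels.count)

labels = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]   # toy positive/negative test results
accs = []
for train, test in k_fold_indices(len(labels), 5):
    pred = majority_class([labels[i] for i in train])
    accs.append(sum(labels[i] == pred for i in test) / len(test))
mean_acc = sum(accs) / len(accs)
print(accs, mean_acc)
```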

Prediction of concrete compressive strength using non-destructive test results

  • Erdal, Hamit;Erdal, Mursel;Simsek, Osman;Erdal, Halil Ibrahim
    • Computers and Concrete / Vol. 21, No. 4 / pp.407-417 / 2018
  • Concrete, a composite material, is one of the most important construction materials. Compressive strength is a commonly used parameter for the assessment of concrete quality, so its accurate prediction is an important issue. In this study, we utilized an experimental procedure for the assessment of concrete quality. First, the concrete mix was prepared as C 20 type concrete, and the slump of the fresh concrete was about 20 cm. After placement of the fresh concrete in formworks, compaction was achieved using a vibrating screed. After a 28-day period, a total of 100 core samples of 75 mm diameter were extracted, on which pulse velocity tests and compressive strength tests were performed; Windsor probe penetration tests and Schmidt hammer tests were also carried out. After setting up the dataset, twelve artificial intelligence (AI) models were compared for predicting the concrete compressive strength. These models fall into three categories: (i) functions (Linear Regression, Simple Linear Regression, Multilayer Perceptron, Support Vector Regression), (ii) lazy-learning algorithms (IBk Linear NN Search, KStar, Locally Weighted Learning), and (iii) tree-based learning algorithms (Decision Stump, Model Trees Regression, Random Forest, Random Tree, Reduced Error Pruning Tree). Four validation procedures (10-fold cross-validation, 5-fold cross-validation, 10% split-sample validation, and 20% split-sample validation) are used to examine the performance of the predictive models. This study shows that machine learning regression techniques are promising tools for predicting the compressive strength of concrete.

Emerging Machine Learning in Wearable Healthcare Sensors

  • Gandha Satria Adi;Inkyu Park
    • 센서학회지 / Vol. 32, No. 6 / pp.378-385 / 2023
  • Human biosignals provide essential information for diagnosing diseases such as dementia and Parkinson's disease. Owing to the shortcomings of current clinical assessments, noninvasive solutions are required. Machine learning (ML) on wearable sensor data is a promising method for the real-time monitoring and early detection of abnormalities. ML facilitates disease identification, severity measurement, and remote rehabilitation by providing continuous feedback. In the context of wearable sensor technology, ML involves training on observed data for tasks such as classification and regression with applications in clinical metrics. Although supervised ML presents challenges in clinical settings, unsupervised learning, which focuses on tasks such as cluster identification and anomaly detection, has emerged as a useful alternative. This review examines and discusses a variety of ML algorithms such as Support Vector Machines (SVM), Random Forests (RF), Decision Trees (DT), Neural Networks (NN), and Deep Learning for the analysis of complex clinical data.