• Title/Summary/Keyword: Regression Algorithms


Robust Regression and Stratified Residuals for Left-Truncated and Right-Censored Data

  • Kim, Chul-Ki
    • Journal of the Korean Statistical Society
    • /
    • v.26 no.3
    • /
    • pp.333-354
    • /
    • 1997
  • Computational algorithms to calculate M-estimators and rank estimators of regression parameters from left-truncated and right-censored data are developed herein. In the case of M-estimators, new statistical methods are also introduced to incorporate leverage assessments and concomitant scale estimation in the presence of left truncation and right censoring on the observed response. Furthermore, graphical methods to examine the residuals from these data are presented. Two real data sets are used for illustration.
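As a rough illustration of the M-estimation side only, the sketch below fits a Huber M-estimator by iteratively reweighted least squares (IRLS) on synthetic, heavy-tailed data; it does not implement the paper's handling of left truncation and right censoring, and all data and constants here are assumptions.

```python
# Minimal IRLS sketch of a Huber M-estimator for regression.
# Does NOT handle left truncation or right censoring (the paper's actual
# contribution); the data below are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + rng.standard_t(df=3, size=n)          # heavy-tailed noise

def huber_weights(r, c=1.345):
    a = np.abs(r)
    w = np.ones_like(r)
    w[a > c] = c / a[a > c]
    return w

beta = np.linalg.lstsq(X, y, rcond=None)[0]               # OLS starting point
for _ in range(50):
    r = y - X @ beta
    scale = 1.4826 * np.median(np.abs(r - np.median(r)))  # robust scale (MAD)
    w = huber_weights(r / scale)
    W = np.diag(w)
    beta_new = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    if np.max(np.abs(beta_new - beta)) < 1e-8:
        break
    beta = beta_new

print("robust estimate:", beta)
```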


Selecting Machine Learning Model Based on Natural Language Processing for Shanghanlun Diagnostic System Classification (자연어 처리 기반 『상한론(傷寒論)』 변병진단체계(辨病診斷體系) 분류를 위한 기계학습 모델 선정)

  • Young-Nam Kim
    • 대한상한금궤의학회지
    • /
    • v.14 no.1
    • /
    • pp.41-50
    • /
    • 2022
  • Objective: The purpose of this study is to explore the most suitable machine learning algorithm for Shanghanlun diagnostic system classification using natural language processing (NLP). Methods: A total of 201 data items were collected from 『Shanghanlun』 and 『Clinical Shanghanlun』; 'Taeyangbyeong-gyeolhyung' and 'Eumyangyeokchahunobokbyeong' were excluded to prevent oversampling or undersampling. The data were preprocessed using the Twitter Korean tokenizer and trained with logistic regression, ridge regression, lasso regression, naive Bayes classifier, decision tree, and random forest algorithms, and the accuracies of the models were compared. Results: Ridge regression and the naive Bayes classifier showed an accuracy of 0.843, logistic regression and random forest showed an accuracy of 0.804, the decision tree showed an accuracy of 0.745, and lasso regression showed an accuracy of 0.608. Conclusions: Ridge regression and the naive Bayes classifier are suitable NLP machine learning models for Shanghanlun diagnostic system classification.
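A minimal sketch of the kind of comparison described, using scikit-learn: the texts, labels, and class names are synthetic placeholders, the default tokenizer stands in for the Twitter Korean tokenizer (Okt) used in the study, and L1-penalized logistic regression stands in for lasso regression.

```python
# Hypothetical sketch: compare the listed classifiers on tokenized clause
# text with scikit-learn. `texts`/`labels` are synthetic placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression, RidgeClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

texts = ["fever chills floating pulse"] * 20 + ["abdominal fullness constipation"] * 20
labels = ["taeyangbyeong"] * 20 + ["yangmyeongbyeong"] * 20   # placeholder classes

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "ridge": RidgeClassifier(alpha=1.0),
    "lasso (L1 logistic stand-in)": LogisticRegression(penalty="l1", solver="liblinear"),
    "naive_bayes": MultinomialNB(),
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

for name, clf in models.items():
    pipe = make_pipeline(TfidfVectorizer(), clf)
    acc = cross_val_score(pipe, texts, labels, cv=5, scoring="accuracy").mean()
    print(f"{name}: accuracy {acc:.3f}")
```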


Super Resolution Technique Through Improved Neighbor Embedding (개선된 네이버 임베딩에 의한 초해상도 기법)

  • Eum, Kyoung-Bae
    • Journal of Digital Contents Society
    • /
    • v.15 no.6
    • /
    • pp.737-743
    • /
    • 2014
  • For single-image super resolution (SR), interpolation-based and example-based algorithms are extensively used. Interpolation algorithms have the strength of theoretical simplicity, but they tend to produce high-resolution images with jagged edges because they cannot exploit additional prior information. Example-based algorithms have been studied in the past few years; for example-based SR, nearest-neighbor-based algorithms are extensively considered. Among them, neighbor embedding (NE) is inspired by manifold learning, particularly locally linear embedding. However, the local training sets are always too small, so the NE algorithm is weak in visual quality and quantitative measures owing to the poor generalization of nearest-neighbor estimation. An improved NE algorithm with support vector regression (SVR) is proposed to solve this problem. Given a low-resolution image, the pixel values of its high-resolution version are estimated by the improved NE. Compared with bicubic interpolation and NE, improvements of 1.25 dB and 2.33 dB in PSNR are achieved. Experimental results show that the proposed method is quantitatively and visually more effective than prior work using bicubic interpolation and NE.
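A minimal sketch of the core neighbor-embedding step only: locally-linear-embedding-style weights are computed from the k nearest low-resolution patches and applied to the paired high-resolution patches. The paper's SVR refinement is not reproduced, and the patch data and sizes below are random placeholders.

```python
# Core NE step for example-based SR: reconstruct an HR patch as a weighted
# combination of the HR patches whose LR counterparts are nearest to the
# input LR patch (LLE-style weights). Data are random placeholders.
import numpy as np

rng = np.random.default_rng(1)
n_train, lr_dim, hr_dim, k = 500, 9, 36, 5   # e.g. 3x3 LR / 6x6 HR patches
lr_train = rng.normal(size=(n_train, lr_dim))
hr_train = rng.normal(size=(n_train, hr_dim))

def ne_reconstruct(lr_patch, lr_train, hr_train, k=5, reg=1e-6):
    # 1. find the k nearest LR training patches
    d = np.linalg.norm(lr_train - lr_patch, axis=1)
    idx = np.argsort(d)[:k]
    # 2. solve for reconstruction weights that sum to one (local Gram system)
    Z = lr_train[idx] - lr_patch            # centered neighbors
    G = Z @ Z.T + reg * np.eye(k)           # regularized Gram matrix
    w = np.linalg.solve(G, np.ones(k))
    w /= w.sum()
    # 3. apply the same weights to the corresponding HR patches
    return w @ hr_train[idx]

hr_est = ne_reconstruct(rng.normal(size=lr_dim), lr_train, hr_train, k)
print(hr_est.shape)   # (36,): the estimated HR patch
```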

A Noble Decoding Algorithm Using MLLR Adaptation for Speaker Verification (MLLR 화자적응 기법을 이용한 새로운 화자확인 디코딩 알고리듬)

  • 김강열;김지운;정재호
    • The Journal of the Acoustical Society of Korea
    • /
    • v.21 no.2
    • /
    • pp.190-198
    • /
    • 2002
  • In general, the Viterbi algorithm used in speech recognition is employed for decoding, but a decoder for speaker verification has to distinguish the same word spoken by different speakers. In this paper, we propose a novel decoding algorithm that can replace the conventional Viterbi algorithm in a speaker verification system. The proposed algorithm employs the speaker adaptation techniques used in speech recognition, which transform feature vectors toward the client's characteristics. Among the many adaptation algorithms, we adopt MLLR (Maximum Likelihood Linear Regression) and MAP (Maximum A Posteriori) adaptation. Using the proposed algorithm instead of the conventional Viterbi algorithm, we achieved a performance improvement of about 30% in EER (Equal Error Rate).
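The decoder builds on MLLR mean adaptation; below is a heavily simplified sketch of a single global MLLR mean transform for a diagonal-covariance GMM, not the paper's full decoding algorithm. The "world model", adaptation data, and dimensions are synthetic assumptions.

```python
# Global MLLR mean transform for a diagonal-covariance GMM (standard
# row-wise solution). The GMM is a stand-in world model trained on random
# data, and X_adapt is synthetic client adaptation data.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
d = 4
ubm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0)
ubm.fit(rng.normal(size=(2000, d)))                 # stand-in world model
X_adapt = rng.normal(loc=0.5, size=(300, d))        # client adaptation frames

gamma = ubm.predict_proba(X_adapt)                  # (T, M) responsibilities
xi = np.hstack([np.ones((ubm.n_components, 1)), ubm.means_])   # extended means
occ = gamma.sum(axis=0)                             # (M,) soft occupancy counts
first = gamma.T @ X_adapt                           # (M, d) weighted data sums

W = np.zeros((d, d + 1))                            # global mean transform
for i in range(d):
    inv_var = 1.0 / ubm.covariances_[:, i]          # (M,) inverse variances
    G = (xi * (occ * inv_var)[:, None]).T @ xi      # (d+1, d+1) accumulator
    k = (first[:, i] * inv_var) @ xi                # (d+1,) accumulator
    W[i] = np.linalg.solve(G, k)

adapted_means = xi @ W.T                            # (M, d) adapted means
print(adapted_means.shape)
```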

DEVELOPMENT OF A MAJORITY VOTE DECISION MODULE FOR A SELF-DIAGNOSTIC MONITORING SYSTEM FOR AN AIR-OPERATED VALVE SYSTEM

  • KIM, WOOSHIK;CHAI, JANGBOM;KIM, INTAEK
    • Nuclear Engineering and Technology
    • /
    • v.47 no.5
    • /
    • pp.624-632
    • /
    • 2015
  • A self-diagnostic monitoring system is a system that measures various physical quantities such as temperature, pressure, or acceleration from sensors scattered over a mechanical system such as a power plant, in order to monitor its various states and to make a decision about its health status. We have developed a self-diagnostic monitoring system for an air-operated valve system to be used in a nuclear power plant. In this study, we tried to improve the self-diagnostic monitoring system to increase its reliability. We implemented three different machine learning algorithms, i.e., logistic regression, an artificial neural network, and a support vector machine. After each algorithm performs the decision process independently, the decision-making module collects these individual decisions and makes a final decision using a majority vote scheme. We performed simulations and present some of the results. The contribution of this study is that, by employing more robust and stable algorithms, each algorithm performs the recognition task more accurately; moreover, by integrating these results through the majority vote scheme, a definite decision can be made, which makes the self-diagnostic monitoring system more reliable.
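A minimal sketch of the majority-vote module described above, using scikit-learn's VotingClassifier with the three listed algorithm families; the features and labels here are synthetic stand-ins for the valve sensor data and health states.

```python
# Majority-vote decision module: three classifiers decide independently and
# a hard vote gives the final health-status decision. Synthetic data only.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

vote = VotingClassifier(
    estimators=[
        ("logreg", LogisticRegression(max_iter=1000)),
        ("ann", MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)),
        ("svm", SVC(kernel="rbf")),
    ],
    voting="hard",          # majority vote over the three individual decisions
)
vote.fit(X_tr, y_tr)
print("ensemble accuracy:", vote.score(X_te, y_te))
```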

Prediction of Academic Performance of College Students with Bipolar Disorder using different Deep learning and Machine learning algorithms

  • Peerbasha, S.;Surputheen, M. Mohamed
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.7
    • /
    • pp.350-358
    • /
    • 2021
  • In recent years, analysing the performance of students has become an important and difficult problem for academic institutions. The main idea of this paper is to analyse and evaluate the academic performance of college students with bipolar disorder by applying data mining classification algorithms in a Jupyter Notebook with Python, a tool widely used for decision-making about students' academic performance. The classifiers used are logistic regression, random forest (Gini criterion), random forest (entropy criterion), decision tree, k-nearest neighbours, AdaBoost, extra trees, Gaussian naive Bayes, and Bernoulli naive Bayes. The classification models are evaluated with measures including accuracy, precision, recall, F1 measure, sensitivity, specificity, R squared, mean absolute error, mean squared error, root mean squared error, TPR, TNR, FPR, and FNR. The conclusion is that the decision tree classifier performs better than the other algorithms.
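Most of the listed measures derive from the confusion matrix; the sketch below shows how they can be computed for one of the classifiers. The student records are not reproduced, so a synthetic dataset stands in.

```python
# Derive the confusion-matrix-based measures (sensitivity/TPR, specificity/
# TNR, FPR, FNR, precision, F1, accuracy) for one classifier on synthetic data.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

X, y = make_classification(n_samples=400, n_features=8, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
y_pred = DecisionTreeClassifier(random_state=1).fit(X_tr, y_tr).predict(X_te)

tn, fp, fn, tp = confusion_matrix(y_te, y_pred).ravel()
tpr = tp / (tp + fn)          # sensitivity / recall
tnr = tn / (tn + fp)          # specificity
fpr = fp / (fp + tn)
fnr = fn / (fn + tp)
precision = tp / (tp + fp)
f1 = 2 * precision * tpr / (precision + tpr)
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(dict(accuracy=accuracy, precision=precision, recall=tpr,
           specificity=tnr, fpr=fpr, fnr=fnr, f1=f1))
```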

Prediction of concrete compressive strength using non-destructive test results

  • Erdal, Hamit;Erdal, Mursel;Simsek, Osman;Erdal, Halil Ibrahim
    • Computers and Concrete
    • /
    • v.21 no.4
    • /
    • pp.407-417
    • /
    • 2018
  • Concrete, which is a composite material, is one of the most important construction materials. Compressive strength is a commonly used parameter for the assessment of concrete quality, and its accurate prediction is an important issue. In this study, we used an experimental procedure for the assessment of concrete quality. First, the concrete mix was prepared as C20-type concrete, with a fresh-concrete slump of about 20 cm. After placing the fresh concrete in formworks, compaction was achieved using a vibrating screed. After a 28-day period, a total of 100 core samples of 75 mm diameter were extracted. Pulse velocity determination tests and compressive strength tests were performed on the core samples, along with Windsor probe penetration tests and Schmidt hammer tests. After setting up the data set, twelve artificial intelligence (AI) models were compared for predicting the concrete compressive strength. These models fall into three categories: (i) functions (i.e., Linear Regression, Simple Linear Regression, Multilayer Perceptron, Support Vector Regression), (ii) lazy-learning algorithms (i.e., IBk Linear NN Search, KStar, Locally Weighted Learning), and (iii) tree-based learning algorithms (i.e., Decision Stump, Model Trees Regression, Random Forest, Random Tree, Reduced Error Pruning Tree). Four validation procedures (10-fold cross-validation, 5-fold cross-validation, 10% split-sample validation, and 20% split-sample validation) are used to examine the performance of the predictive models. This study shows that machine learning regression techniques are promising tools for predicting the compressive strength of concrete.
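A minimal sketch of the validation scheme described: a few of the listed model families evaluated with 10-fold and 5-fold cross-validation and a split-sample holdout. The non-destructive test readings and core strengths are replaced by synthetic data, and scikit-learn models stand in for the original implementations.

```python
# Compare several regressor families with k-fold CV and split-sample
# validation on a synthetic stand-in for the NDT / core-strength data.
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.neighbors import KNeighborsRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=100, n_features=3, noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "linear": LinearRegression(),
    "svr": SVR(),
    "knn (IBk-like)": KNeighborsRegressor(n_neighbors=5),
    "random_forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "tree": DecisionTreeRegressor(random_state=0),
}

for name, model in models.items():
    for k in (10, 5):                      # 10-fold and 5-fold CV
        r2 = cross_val_score(model, X, y, cv=k, scoring="r2").mean()
        print(f"{name:16s} {k}-fold CV R^2: {r2:.3f}")
    # split-sample validation (20% held out)
    print(f"{name:16s} 20% holdout R^2: {model.fit(X_tr, y_tr).score(X_te, y_te):.3f}")
```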

A Comparative Study of Medical Data Classification Methods Based on Decision Tree and System Reconstruction Analysis

  • Tang, Tzung-I;Zheng, Gang;Huang, Yalou;Shu, Guangfu;Wang, Pengtao
    • Industrial Engineering and Management Systems
    • /
    • v.4 no.1
    • /
    • pp.102-108
    • /
    • 2005
  • This paper studies medical data classification methods, comparing decision tree and system reconstruction analysis as applied to heart disease medical data mining. The data studied were collected from patients with coronary heart disease and comprise 1,723 records with 71 attributes each. We use the system reconstruction method to weight the data. We use decision tree algorithms such as ID3 (induction of decision trees), C4.5, classification and regression trees (CART), chi-squared automatic interaction detection (CHAID), and exhaustive CHAID, and compare the correct classification rate, leaf number, and tree depth of the different decision-tree algorithms. According to the experiments, weighting the data can improve the correct classification rate on the coronary heart disease data but has little effect on tree depth and leaf number.
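A minimal sketch of the kind of comparison reported: fit a decision tree on weighted and unweighted records and report accuracy, leaf count, and tree depth. Uniform weights stand in for the system-reconstruction weights, the heart-disease records are replaced by synthetic data, and scikit-learn's CART-style tree stands in for ID3/C4.5/CHAID.

```python
# Compare accuracy, leaf count, and depth of a decision tree fitted with and
# without record weights. Weights and data are placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1723, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
weights = np.ones(len(y_tr))          # placeholder for reconstruction weights

for label, w in [("unweighted", None), ("weighted", weights)]:
    tree = DecisionTreeClassifier(random_state=0)
    tree.fit(X_tr, y_tr, sample_weight=w)
    print(f"{label}: accuracy={tree.score(X_te, y_te):.3f}, "
          f"leaves={tree.get_n_leaves()}, depth={tree.get_depth()}")
```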

Recent deep learning methods for tabular data

  • Yejin Hwang;Jongwoo Song
    • Communications for Statistical Applications and Methods
    • /
    • v.30 no.2
    • /
    • pp.215-226
    • /
    • 2023
  • Deep learning has made great strides in the field of unstructured data such as text, images, and audio. For tabular data analysis, however, machine learning algorithms such as ensemble methods are still better than deep learning. To keep up with the predictive power of these machine learning algorithms, several deep learning methods for tabular data have been proposed recently. In this paper, we review the latest deep learning models for tabular data and compare the performance of these models on several datasets. In addition, we compare the latest boosting methods to these deep learning methods and suggest guidelines for users who analyze tabular datasets. For regression, the machine learning methods are better than the deep learning methods, but for classification problems the deep learning methods perform better in some cases.
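A minimal sketch of the kind of head-to-head comparison reviewed: a boosting model versus a simple neural network on the same tabular data. A scikit-learn MLP stands in for the specialized tabular deep learning architectures, and the dataset is synthetic.

```python
# Boosting vs. a simple neural network on the same tabular regression task.
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.ensemble import HistGradientBoostingRegressor

X, y = make_regression(n_samples=2000, n_features=20, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "boosting": HistGradientBoostingRegressor(random_state=0),
    "neural net": MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=500,
                               random_state=0),
}
for name, model in models.items():
    print(name, "R^2:", round(model.fit(X_tr, y_tr).score(X_te, y_te), 3))
```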

Comparison of Data Mining Classification Algorithms for Categorical Feature Variables (범주형 자료에 대한 데이터 마이닝 분류기법 성능 비교)

  • Sohn, So-Young;Shin, Hyung-Won
    • IE interfaces
    • /
    • v.12 no.4
    • /
    • pp.551-556
    • /
    • 1999
  • In this paper, we compare the performance of three data mining classification algorithms (neural network, decision tree, and logistic regression) under various characteristics of categorical input and output data. A $2^{4-1} \times 3$ fractional factorial design is used to simulate the comparison, where the factors are (1) the categorical ratio of the input variables, (2) the complexity of the functional relationship between the output and input variables, (3) the size of the randomness in the relationship, (4) the categorical ratio of the output variable, and (5) the classification algorithm. The experimental results indicate the following: the decision tree performs better than the others when the relationship between the output and input variables is simple, while logistic regression is better when the relationship is complex; and the neural network appears to be a better choice when the randomness in the relationship is relatively large. We also use a Taguchi design to improve the practicality of the results by treating the relationship between the output and input variables as a noise factor. As a result, the classification accuracy of the neural network and decision tree turns out to be higher than that of logistic regression when the categorical proportion of the output variable is even.
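A minimal sketch of one simulation cell: generate categorical inputs with a known relationship to a categorical output and compare the three algorithms' cross-validated accuracy. The factor levels (category ratios, functional complexity, amount of randomness) are fixed to arbitrary illustrative values, not those of the study.

```python
# One cell of the simulation: categorical predictors, a known relationship
# plus noise, and a comparison of the three classifiers' accuracies.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import OneHotEncoder
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 1000
X = rng.integers(0, 3, size=(n, 4))                  # 4 categorical inputs
noise = rng.normal(scale=0.5, size=n)                # "size of randomness"
score = X[:, 0] + (X[:, 1] == X[:, 2]) + noise       # simple relationship
y = (score > np.median(score)).astype(int)           # categorical output

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "tree": DecisionTreeClassifier(random_state=0),
    "neural net": MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                                random_state=0),
}
for name, clf in models.items():
    pipe = make_pipeline(OneHotEncoder(handle_unknown="ignore"), clf)
    acc = cross_val_score(pipe, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: {acc:.3f}")
```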
