• Title/Summary/Keyword: support vector machines (SVM)


Automatic Email Multi-category Classification Using Dynamic Category Hierarchy and Non-negative Matrix Factorization (비음수 행렬 분해와 동적 분류 체계를 사용한 자동 이메일 다원 분류)

  • Park, Sun; An, Dong-Un
    • Journal of KIISE: Software and Applications / v.37 no.5 / pp.378-385 / 2010
  • The explosive growth in the use of email has created a need for efficient and accurate email classification. Existing work on email classification has mainly focused on binary classification that filters out spam, using methods based on Support Vector Machines, Bayesian classifiers, and rule-based classifiers. These methods are supervised in the sense that the user is required to manually describe the rules and keyword lists used to recognize the relevant email. Other, unsupervised methods use clustering techniques for multi-category classification and create category labels from a set of incoming messages. In this paper, we propose a new automatic email multi-category classification method that uses non-negative matrix factorization (NMF) to construct category labels automatically and a dynamic category hierarchy to reorganize email messages within those labels. With the proposed method, a large number of emails can be managed efficiently by classifying multi-category email automatically, and the messages in each category are reorganized to enhance accuracy whenever users want to classify all their email messages.
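
As a rough illustration of the label-construction step described above, the following sketch factorizes a TF-IDF matrix with NMF and reads category labels off the top-weighted terms. This is not the authors' implementation; the toy corpus and the number of categories are assumptions made for the example.

```python
# Minimal sketch: deriving category labels for emails with NMF on TF-IDF features.
# Not the authors' code; the corpus and number of categories are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

emails = [
    "meeting schedule for the project review on friday",
    "invoice attached for your recent purchase order",
    "project milestones and review agenda for next week",
    "payment receipt and invoice details for order 1234",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(emails)            # non-negative document-term matrix

n_categories = 2
nmf = NMF(n_components=n_categories, init="nndsvd", random_state=0)
W = nmf.fit_transform(X)                        # document-to-category weights
H = nmf.components_                             # category-to-term weights

terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(H):
    top_terms = [terms[i] for i in topic.argsort()[-3:][::-1]]
    print(f"category {k} label terms: {top_terms}")

# Assign each email to its strongest category.
print("assignments:", W.argmax(axis=1))
```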

Compressive strength prediction of CFRP confined concrete using data mining techniques

  • Camoes, Aires; Martins, Francisco F.
    • Computers and Concrete / v.19 no.3 / pp.233-241 / 2017
  • During the last two decades, CFRP composites have been extensively used for the repair and rehabilitation of existing structures as well as in new construction. For rehabilitation purposes, CFRP is currently used to increase the load and energy absorption capacities and the shear strength of concrete columns. Thus, the effect of CFRP confinement on the strength and deformation capacity of concrete columns has been extensively studied. However, the majority of such studies rely on empirical relationships based on correlation analysis, because to date there is no general law describing such a hugely complex phenomenon. Moreover, these studies have focused on the performance of columns with circular cross sections, and the data available for square or rectangular cross sections are still scarce. Therefore, the existing relationships may not be sufficiently accurate to provide satisfactory results. That is why intelligent models that can learn from examples should be tested and their accuracy for confined compressive strength prediction evaluated. In this study, the compressive strength of wrapped CFRP-confined concrete was forecast using different data mining techniques, taking into account the specimens' cross section: circular or rectangular. Based on the results obtained, CFRP-confined concrete compressive strength can be accurately predicted for circular cross sections using SVM with five or six input parameters and modest computation time. The results for rectangular sections were not as good as those obtained for circular sections. It seems that the prediction can only be obtained with reasonable accuracy for certain values of the lateral confinement coefficient, owing to the lower efficiency of lateral confinement for rectangular cross sections.
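
The SVM model in this study is a regression on tabular specimen parameters. The sketch below shows the general pattern with scikit-learn's SVR on synthetic data; the six features named in the comments and the toy strength relation are assumptions, not the paper's dataset or tuned model.

```python
# Minimal sketch: SVM regression for confined-concrete strength (not the paper's model).
# The six hypothetical features and the synthetic data below are illustrative assumptions.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 200
# Hypothetical features: unconfined strength, diameter, height, CFRP thickness,
# CFRP elastic modulus, lateral confinement coefficient.
X = rng.uniform([20, 100, 200, 0.1, 200, 0.05],
                [60, 400, 800, 1.5, 260, 0.60], size=(n, 6))
y = X[:, 0] * (1.0 + 3.0 * X[:, 5]) + rng.normal(0, 2, n)   # toy strength relation

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=1.0))
model.fit(X_tr, y_tr)
print("MAE (MPa):", mean_absolute_error(y_te, model.predict(X_te)))
```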

Damaged cable detection with statistical analysis, clustering, and deep learning models

  • Son, Hyesook; Yoon, Chanyoung; Kim, Yejin; Jang, Yun; Tran, Linh Viet; Kim, Seung-Eock; Kim, Dong Joo; Park, Jongwoong
    • Smart Structures and Systems / v.29 no.1 / pp.17-28 / 2022
  • The cable components of cable-stayed bridges are gradually affected by weather conditions, vehicle loads, and material corrosion. The stay cable is a critical load-carrying part that closely affects the operational stability of a cable-stayed bridge, and damaged cables might lead to bridge collapse because of their reduced tension capacity. Thus, it is necessary to develop structural health monitoring (SHM) techniques that accurately identify damaged cables. In this work, a combined identification method using three efficient techniques, namely statistical analysis, clustering, and neural network models, is proposed to detect damaged cables in a cable-stayed bridge. The measured dataset from the bridge was initially preprocessed to remove outlier channels. Then, the theory and application of each technique for damage detection were introduced. In general, the statistical approach extracts the parameters representing the damage within time series, and the clustering approach identifies the outliers among the data signals as damaged members, while the deep learning approach exploits the nonlinear data dependencies in SHM to train the model. The performance of these approaches in classifying the damaged cable was assessed, and the combined identification method was obtained using a voting ensemble. Finally, the combined method was compared with an existing outlier detection algorithm, support vector machines (SVM). The results demonstrate that the proposed method is robust and provides higher accuracy for damaged cable detection in the cable-stayed bridge.
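
The final combination step is a majority vote over the three detectors' per-cable damage flags. The sketch below illustrates that voting logic with deliberately simplified stand-in detectors; the feature values and decision rules are illustrative assumptions, not the authors' statistical, clustering, or deep learning models.

```python
# Minimal sketch of a hard-voting ensemble over three damage detectors (not the authors' code).
# Each detector returns 1 (damaged) or 0 (healthy) per cable; the inputs are illustrative.
import numpy as np

def statistical_detector(features, z_thresh=3.0):
    """Flag cables whose damage-sensitive feature deviates strongly from the mean."""
    z = (features - features.mean()) / features.std()
    return (np.abs(z) > z_thresh).astype(int)

def clustering_detector(features):
    """Flag the cable farthest from the bulk of 'normal' feature values (toy rule)."""
    dist = np.abs(features - np.median(features))
    flags = np.zeros_like(features, dtype=int)
    flags[np.argmax(dist)] = 1
    return flags

def deep_learning_detector(features):
    """Placeholder for a trained network's per-cable damage probabilities."""
    probs = (features - features.min()) / (np.ptp(features) + 1e-9)
    return (probs > 0.8).astype(int)

features = np.array([1.0, 1.1, 0.9, 1.05, 3.2, 1.0])   # toy per-cable feature
votes = (statistical_detector(features)
         + clustering_detector(features)
         + deep_learning_detector(features))
damaged = (votes >= 2).astype(int)                       # majority vote
print("damaged cables:", np.where(damaged == 1)[0])
```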

Hybrid machine learning with moth-flame optimization methods for strength prediction of CFDST columns under compression

  • Quang-Viet Vu; Dai-Nhan Le; Thai-Hoan Pham; Wei Gao; Sawekchai Tangaramvong
    • Steel and Composite Structures / v.51 no.6 / pp.679-695 / 2024
  • This paper presents a novel technique that combines machine learning (ML) with moth-flame optimization (MFO) methods to predict the axial compressive strength (ACS) of concrete-filled double-skin steel tube (CFDST) columns. The proposed model is trained and tested with a dataset containing 125 tests of CFDST columns subjected to compressive loading. Five ML models, including extreme gradient boosting (XGBoost), gradient tree boosting (GBT), categorical gradient boosting (CAT), support vector machines (SVM), and decision tree (DT) algorithms, are utilized in this work. The MFO algorithm is applied to find optimal hyperparameters of these ML models and to determine the most effective model for predicting the ACS of CFDST columns. Predictive results evaluated with several performance metrics reveal that the MFO-CAT model provides superior accuracy compared to the other considered models. The accuracy of the MFO-CAT model is validated by comparing its predictions with existing design codes and formulae. Moreover, the significance and contribution of each feature in the dataset are examined by employing the SHapley Additive exPlanations (SHAP) method. A comprehensive uncertainty quantification of the probabilistic characteristics of the ACS of CFDST columns is conducted for the first time to examine the models' responses to variations of the input variables in stochastic environments. Finally, a web-based application is developed to predict the ACS of CFDST columns, enabling rapid practical use without requiring any programming or machine learning expertise.
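
To make the MFO-plus-ML idea concrete, the sketch below tunes an SVM regressor's C and gamma with a much simplified moth-flame-style spiral search on synthetic data. It is an illustrative toy, not the paper's MFO-CAT pipeline: the search ranges, population size, and regression data are assumptions.

```python
# Minimal sketch: tuning SVR hyperparameters (C, gamma) with a simplified moth-flame-style
# optimizer. Illustrative only; the synthetic data stand in for the 125-test CFDST dataset.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=125, n_features=6, noise=10.0, random_state=0)

def fitness(log_c, log_gamma):
    model = SVR(C=10.0 ** log_c, gamma=10.0 ** log_gamma)
    return cross_val_score(model, X, y, cv=3, scoring="r2").mean()

rng = np.random.default_rng(0)
n_moths, n_iter = 10, 20
moths = rng.uniform([-1, -4], [3, 0], size=(n_moths, 2))     # search in log10 space
scores = np.array([fitness(*m) for m in moths])

for t in range(n_iter):
    flames = moths[np.argsort(-scores)]                       # best solutions become flames
    b, r = 1.0, -1.0 + t * (-1.0 / n_iter)                    # shrinking spiral constant
    for i in range(n_moths):
        flame = flames[min(i, len(flames) - 1)]
        dist = np.abs(flame - moths[i])
        tt = rng.uniform(r, 1.0, size=2)
        moths[i] = dist * np.exp(b * tt) * np.cos(2 * np.pi * tt) + flame   # spiral move
        moths[i] = np.clip(moths[i], [-1, -4], [3, 0])
        scores[i] = fitness(*moths[i])

best = moths[np.argmax(scores)]
print(f"best C=10^{best[0]:.2f}, gamma=10^{best[1]:.2f}, CV R2={scores.max():.3f}")
```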

Development of Knee Pain Diagnosis Questionnaire and Clinical Study of Diagnostic Correspondent Rate (슬통 진단용 설문지개발 및 진단 일치도 평가연구)

  • Hwang, Ji-Hoo; Kim, Yu-Jong; Kim, Eun-Jung; Lee, Cham-Kyul; Lee, Eun-Yong; Lee, Seung-Deok; Kim, Kap-Sung
    • Journal of Acupuncture Research / v.29 no.5 / pp.61-74 / 2012
  • Objectives: This study was performed to prepare oriental medicine clinical guidelines by drawing up standards for oriental medicine pattern identification and diagnostic classification of knee pain. Methods: Experts' opinions on knee pain patients, classified by the Delphi method into Crane's-knee wind (鶴膝風), arthralgia syndrome (痺症), knee injury (膝傷), gout arthritis (痛風), and Youk jeol poung (歷節風), were analyzed statistically using an oriental medicine diagnosis questionnaire. The responses were classified using linear discriminant analysis (LDA), diagonal linear discriminant analysis (DLDA), diagonal quadratic discriminant analysis (DQDA), K-nearest neighbor classification (KNN), classification and regression trees (CART), and support vector machines (SVM). Results: The results, expressed as hit rates in comparison with the original diagnosis, are summarized as follows. 1. LDA had a hit rate of 81.65%. 2. DLDA had a hit rate of 63.3%. 3. DQDA had a hit rate of 65.14%. 4. KNN had a hit rate of 74.31%. 5. CART had a hit rate of 75.23% when tested on the 13 significant questions selected by analysis of variance. 6. SVM had a hit rate of 87.16%. Conclusions: Statistical analysis of the oriental medicine diagnosis questionnaire on knee pain generally produced significant results.
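
The comparison of classifiers by hit rate against the original diagnosis can be reproduced in outline with scikit-learn, as in the sketch below. The synthetic five-class questionnaire data stand in for the study's patient data, so the percentages reported above cannot be reproduced from it.

```python
# Minimal sketch: comparing classifier agreement ("hit rate") with the original diagnosis.
# Not the study's data; a synthetic 5-class questionnaire stand-in is used instead.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=150, n_features=13, n_informative=8,
                           n_classes=5, random_state=0)

classifiers = {
    "LDA": LinearDiscriminantAnalysis(),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "CART": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(kernel="rbf", C=1.0),
}
for name, clf in classifiers.items():
    acc = cross_val_score(clf, X, y, cv=5).mean()   # cross-validated hit rate
    print(f"{name}: hit rate {acc:.1%}")
```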

Response Modeling for the Marketing Promotion with Weighted Case Based Reasoning Under Imbalanced Data Distribution (불균형 데이터 환경에서 변수가중치를 적용한 사례기반추론 기반의 고객반응 예측)

  • Kim, Eunmi; Hong, Taeho
    • Journal of Intelligence and Information Systems / v.21 no.1 / pp.29-45 / 2015
  • Response modeling is a well-known research issue for those who seek better performance in predicting customers' responses to marketing promotions. A response model reduces marketing cost by identifying prospective customers in a very large customer database and predicting the purchasing intention of the selected customers, whereas a promotion derived from an undifferentiated marketing strategy results in unnecessary cost. In addition, the big data environment has accelerated the development of response models with data mining techniques such as case-based reasoning (CBR), neural networks, and support vector machines. CBR is one of the major tools in business because it is simple and robust to apply to response modeling, even though it has not shown high performance compared to other machine learning techniques. Thus, many studies have tried to improve CBR for business data mining with enhanced algorithms or the support of other techniques such as genetic algorithms, decision trees, and the Analytic Hierarchy Process (AHP). Ahn and Kim (2008) utilized logit, neural networks, and CBR to predict which customers would purchase the items promoted by the marketing department, and optimized k for the k-nearest neighbor search with a genetic algorithm to improve the performance of the integrated model. Hong and Park (2009) noted that an approach integrating CBR with logit, neural networks, and Support Vector Machines (SVM) predicted customers' responses to marketing promotion better than each of those data mining models alone. This paper presents an approach to predicting customers' responses to marketing promotion with case-based reasoning in which a different weight is applied to each feature. We fitted a logit model to a database containing the promotion and purchasing data for bath soap and then used the coefficients as the feature weights for CBR. We empirically compared the performance of the proposed weighted CBR-based model with neural networks and a pure CBR-based model and found that the weighted CBR-based model outperformed the pure CBR model. Imbalanced data is a common problem when building classification models on real data, for example in bankruptcy prediction, intrusion detection, fraud detection, churn management, and response modeling: the number of instances in one class is remarkably small or large compared to the number in the other classes. A classification model such as a response model has trouble learning the pattern from such data because it tends to ignore the minority class while classifying the majority class correctly. Sampling, divided into under-sampling and over-sampling, is one of the most representative approaches to the problem caused by an imbalanced data distribution. However, CBR is not sensitive to the data distribution because, unlike other machine learning algorithms, it does not learn a model from the data.
In this study, we investigated the robustness of the proposed model while changing the ratio of response to nonresponse customers, because in the real world the customers who respond to a promotion are always a small fraction of those who do not. We simulated the proposed model 100 times to validate its robustness under different response ratios and imbalanced data distributions, and found that the proposed CBR-based model outperformed the compared models on the imbalanced data sets. Our study is expected to improve the performance of response models for promotion programs with CBR under imbalanced data distributions in the real world.
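
The core mechanism, logit coefficients reused as feature weights for CBR retrieval, can be sketched as a weighted k-nearest-neighbor search. The example below uses synthetic imbalanced response data and balanced accuracy as a convenience metric; both are assumptions for illustration, not the paper's data or evaluation design.

```python
# Minimal sketch: feature-weighted CBR (k-NN retrieval) with weights from logit coefficients.
# Not the paper's implementation; the synthetic, imbalanced response data are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score

# Synthetic, imbalanced response data: roughly 10% responders.
X, y = make_classification(n_samples=2000, n_features=8, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

# Step 1: fit a logit model and take the |coefficients| as feature weights.
logit = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
w = np.abs(logit.coef_).ravel()

# Step 2: CBR as k-NN retrieval, with and without the logit-derived feature weights.
pure_cbr = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
weighted_cbr = KNeighborsClassifier(n_neighbors=5).fit(X_tr * w, y_tr)

print("pure CBR     :", balanced_accuracy_score(y_te, pure_cbr.predict(X_te)))
print("weighted CBR :", balanced_accuracy_score(y_te, weighted_cbr.predict(X_te * w)))
```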

Development of Sentiment Analysis Model for the hot topic detection of online stock forums (온라인 주식 포럼의 핫토픽 탐지를 위한 감성분석 모형의 개발)

  • Hong, Taeho; Lee, Taewon; Li, Jingjing
    • Journal of Intelligence and Information Systems / v.22 no.1 / pp.187-204 / 2016
  • Document classification based on emotional polarity has become a welcome emerging task owing to the great explosion of data on the Web. In the big data age, there are too many information sources to refer to when making decisions. For example, when considering travel to a city, a person may search for reviews through a search engine such as Google or on social networking services (SNSs) such as blogs, Twitter, and Facebook. The emotional polarity of positive and negative reviews helps a user decide whether or not to make the trip. Sentiment analysis of customer reviews has become an important research topic as data mining technology is widely accepted for text mining on the Web. Sentiment analysis has been used to classify documents through machine learning techniques such as decision trees, neural networks, and support vector machines (SVMs), and to determine the attitude, position, and sensibility of people who write articles about various topics published on the Web. Regardless of their polarity, emotional reviews are very helpful material for analyzing customers' opinions, and sentiment analysis helps us understand what customers really want, instantly, with the help of automated text mining techniques. It applies text mining to text on the Web to extract subjective information and to determine the attitude or position of the person who wrote an article and presented an opinion about a particular topic. In this study, we developed a model that selects hot topics from user posts on a Chinese online stock forum by using the k-means algorithm and self-organizing maps (SOM). In addition, we developed a detection model that predicts hot topics by using machine learning techniques such as logit, decision trees, and SVM. We employed sentiment analysis to develop our model for the selection and detection of hot topics from the forum: a sentiment value is calculated for each document by matching and classifying words against a polarity sentiment dictionary (positive or negative). The online stock forum was an attractive site because of its information about stock investment; users post numerous texts about stock movements, analyzing the market in light of government policy announcements, market reports, reports from economic research institutes, and even rumors. We divided the forum's topics into 21 categories for sentiment analysis and selected 144 topics among these categories. The posts were crawled to build a positive and negative text database, and after preprocessing the text from March 2013 to February 2015 we ultimately obtained 21,141 posts on 88 topics. An interest index was defined to select the hot topics, and the k-means algorithm and SOM produced equivalent results on these data. We developed decision tree models to detect hot topics with three algorithms, CHAID, CART, and C4.5; the results of CHAID were subpar compared to the others. We also employed SVM to detect the hot topics from the negative data; the SVM models were trained with the radial basis function (RBF) kernel, tuned by a grid search, to detect the hot topics.
The detection of hot topics using sentiment analysis provides the latest trends and hot topics in the stock forum to investors, so that they no longer need to search through the vast amounts of information on the Web. Our proposed model is also helpful for rapidly determining customers' signals or attitudes towards government policy and firms' products and services.
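
The SVM part of the pipeline, an RBF-kernel classifier tuned by grid search over TF-IDF features, can be sketched as follows. The tiny post corpus, hot-topic labels, and parameter grid are illustrative assumptions, not the paper's 21,141-post dataset.

```python
# Minimal sketch: grid-searching an RBF-kernel SVM for hot-topic detection from post text.
# Not the paper's model; the tiny labeled corpus below is illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

posts = [
    "policy announcement lifts market sentiment sharply",
    "rumor of rate hike drags the index down",
    "research institute upgrades growth forecast",
    "earnings miss triggers heavy selling",
    "new stimulus package boosts construction shares",
    "fraud probe sinks small-cap stocks",
]
is_hot = [1, 1, 0, 1, 0, 0]   # hypothetical hot-topic labels

pipe = Pipeline([("tfidf", TfidfVectorizer()), ("svm", SVC(kernel="rbf"))])
grid = GridSearchCV(pipe,
                    {"svm__C": [0.1, 1, 10], "svm__gamma": ["scale", 0.1, 1.0]},
                    cv=3)
grid.fit(posts, is_hot)
print("best params:", grid.best_params_)
print("prediction :", grid.predict(["government policy sparks heated discussion"]))
```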

Performance Improvement on Short Volatility Strategy with Asymmetric Spillover Effect and SVM (비대칭적 전이효과와 SVM을 이용한 변동성 매도전략의 수익성 개선)

  • Kim, Sun Woong
    • Journal of Intelligence and Information Systems / v.26 no.1 / pp.119-133 / 2020
  • Fama asserted that in an efficient market we cannot devise a trading rule that consistently outperforms the average stock market return. This study aims to suggest a machine learning algorithm that improves the trading performance of an intraday short-volatility strategy exploiting the asymmetric volatility spillover effect, and to analyze the resulting improvement. In general, stock market volatility is negatively related to stock market returns, and Korean stock market volatility is influenced by US stock market volatility. This volatility spillover effect is asymmetric: upward and downward moves in US stock market volatility influence the next day's volatility of the Korean stock market differently. We collected the S&P 500 index, the VIX, the KOSPI 200 index, and the V-KOSPI 200 from 2008 to 2018. We found a negative relation between the S&P 500 and the VIX, and between the KOSPI 200 and the V-KOSPI 200, and documented a strong volatility spillover from the VIX to the V-KOSPI 200. Interestingly, the spillover was also asymmetric: whereas a VIX rise is fully reflected in the opening volatility of the V-KOSPI 200, a VIX fall is only partially reflected at the open and its influence lasts until the Korean market close. If the stock market were efficient, there would be no reason for the asymmetric volatility spillover effect to exist; it is a counterexample to the efficient market hypothesis. To exploit this anomalous spillover pattern, we analyzed an intraday short-volatility strategy (SVS) that sells the Korean volatility market short in the morning after US stock market volatility closes down and takes no position after the VIX closes up. It produced a profit every year between 2008 and 2018, and the percentage of profitable trades is 68%. The strategy's average annual return of 129% is higher than the benchmark's 33%. The maximum drawdown (MDD) is -41%, compared with -101% for the benchmark. The Sharpe ratio of the SVS strategy, 0.32, is much greater than that of the benchmark, 0.08. The Sharpe ratio considers return and risk simultaneously and is calculated as return divided by risk, so a high Sharpe ratio indicates high performance when comparing strategies with different risk and return structures. Real-world trading incurs trading costs, including brokerage and slippage; when these are considered, the performance difference (average annual returns of 76% versus -10%) becomes clear. To improve the performance of the suggested volatility trading strategy, we used the well-known SVM algorithm. The input variables are the VIX close-to-close return on day t-1, the VIX open-to-close return on day t-1, and the VK (V-KOSPI 200) open return on day t, and the output is the up/down classification of the VK open-to-close return on day t. The training period is from 2008 to 2014 and the testing period is from 2015 to 2018. The kernel functions are the linear, radial basis, and polynomial functions. We suggest a modified short-volatility (m-SVS) strategy that sells the VK in the morning when the SVM output is Down and takes no position when the SVM output is Up. The trading performance improved remarkably: over the testing period, the m-SVS strategy showed very high profit and low risk relative to the benchmark SVS strategy.
The annual return of the m-SVS strategy is 123%, higher than that of the SVS strategy, and the risk measure, the MDD, also improved significantly, from -41% to -29%.
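
A minimal sketch of the m-SVS signal generation follows the abstract's feature description: yesterday's VIX returns and today's VK open return feed an SVM whose Down prediction triggers a short-volatility position. The synthetic series and the toy label rule are assumptions; this is not the paper's data or backtest.

```python
# Minimal sketch: SVM classifier for the modified short-volatility (m-SVS) signal.
# Not the paper's data or code; the features mirror the abstract's description on synthetic series.
import numpy as np
import pandas as pd
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "vix_cc_ret_t1": rng.normal(0, 0.05, n),   # VIX close-to-close return, day t-1
    "vix_oc_ret_t1": rng.normal(0, 0.04, n),   # VIX open-to-close return, day t-1
    "vk_open_ret_t": rng.normal(0, 0.03, n),   # VK open return, day t
})
# Toy label: VK open-to-close direction loosely linked to yesterday's VIX fall.
df["vk_up"] = ((-0.5 * df["vix_cc_ret_t1"] + rng.normal(0, 0.03, n)) > 0).astype(int)

train, test = df.iloc[:400], df.iloc[400:]
features = ["vix_cc_ret_t1", "vix_oc_ret_t1", "vk_open_ret_t"]
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(train[features], train["vk_up"])

pred = model.predict(test[features])
# m-SVS rule: sell VK at the open only when the SVM predicts "down"; otherwise stay flat.
signal = np.where(pred == 0, -1, 0)
print("short-volatility days in test set:", (signal == -1).sum())
```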

Robo-Advisor Algorithm with Intelligent View Model (지능형 전망모형을 결합한 로보어드바이저 알고리즘)

  • Kim, Sunwoong
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.39-55 / 2019
  • Recently, banks and large financial institutions have introduced many Robo-Advisor products. A Robo-Advisor produces an optimal asset allocation portfolio for investors using financial engineering algorithms, without any human intervention. Since its first introduction on Wall Street in 2008, the market has grown to 60 billion dollars and is expected to expand to 2,000 billion dollars by 2020. Because Robo-Advisor algorithms suggest asset allocations to investors, mathematical or statistical asset allocation strategies are applied. The mean-variance optimization model developed by Markowitz is the typical asset allocation model: a simple but intuitive portfolio strategy in which assets are allocated so as to minimize portfolio risk while maximizing expected portfolio return, using optimization techniques. Despite its theoretical background, both academics and practitioners find that the standard mean-variance optimization portfolio is very sensitive to the expected returns calculated from past price data, and corner solutions allocated to only a few assets are often found. The Black-Litterman optimization model overcomes these problems by choosing a neutral Capital Asset Pricing Model equilibrium point: implied equilibrium returns for each asset are derived from the equilibrium market portfolio through reverse optimization. The Black-Litterman model uses a Bayesian approach to combine subjective views on the price forecasts of one or more assets with the implied equilibrium returns, resulting in new estimates of risk and expected returns, which can then produce an optimal portfolio via the well-known Markowitz mean-variance optimization algorithm. If the investor does not have any views on the asset classes, the Black-Litterman optimization model produces the same portfolio as the market portfolio. But what if the subjective views are incorrect? Surveys of the stock performance recommendations made by securities analysts show very poor results, so incorrect views combined with implied equilibrium returns may produce very poor portfolio output for Black-Litterman model users. This paper suggests an objective investor views model based on Support Vector Machines (SVM), which have shown good performance in stock price forecasting. An SVM is a discriminative classifier defined by a separating hyperplane; linear, radial basis, and polynomial kernel functions are used to learn the hyperplanes. The input variables for the SVM are the returns, standard deviations, Stochastics %K, and price parity degree of each asset class. The SVM outputs the expected stock price movements and their probabilities, which are used as input variables in the intelligent views model. The stock price movements are categorized into three phases: down, neutral, and up. The expected stock returns form the P matrix, and their probability results are used in the Q matrix. The implied equilibrium returns vector is combined with the intelligent views, resulting in the Black-Litterman optimal portfolio. For comparison, the Markowitz mean-variance optimization model and the risk parity model are used, and the value-weighted and equal-weighted market portfolios serve as benchmark indexes. We collected the 8 KOSPI 200 sector indexes from January 2008 to December 2018, comprising 132 monthly index values. The training period is from 2008 to 2015 and the testing period is from 2016 to 2018.
Our suggested intelligent views model combined with implied equilibrium returns produced the optimal Black-Litterman portfolio. Out of sample, this portfolio outperformed the well-known Markowitz mean-variance optimization portfolio, the risk parity portfolio, and the market portfolio. The total return of the Black-Litterman portfolio over the 3-year period is 6.4%, the highest value, and its maximum drawdown of -20.8% is the lowest. Its Sharpe ratio, which measures the return-to-risk ratio, shows the highest value, 0.17. Overall, our suggested views model shows the possibility of replacing subjective analysts' views with an objective view model for practitioners applying Robo-Advisor asset allocation algorithms in real trading.
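
The combination step, merging implied equilibrium returns with model-generated views, follows the standard Black-Litterman posterior formula. The sketch below applies it to toy inputs; the covariance matrix, market weights, tau, and the two example views are placeholders for the SVM-derived P and Q matrices described above.

```python
# Minimal sketch of the Black-Litterman combination step with model-generated views.
# Not the paper's implementation; covariance, market weights, views, and tau are toy values.
import numpy as np

Sigma = np.array([[0.04, 0.01, 0.00, 0.00],
                  [0.01, 0.05, 0.01, 0.00],
                  [0.00, 0.01, 0.06, 0.02],
                  [0.00, 0.00, 0.02, 0.07]])      # asset covariance (toy)
w_mkt = np.array([0.4, 0.3, 0.2, 0.1])            # market-cap weights (toy)
delta, tau = 2.5, 0.05                            # risk aversion, scaling factor

# Reverse optimization: implied equilibrium returns.
pi = delta * Sigma @ w_mkt

# Views: e.g. "asset 0 will return 3%" and "asset 2 will outperform asset 3 by 1%".
P = np.array([[1, 0, 0, 0],
              [0, 0, 1, -1]])
Q = np.array([0.03, 0.01])
Omega = np.diag(np.diag(P @ (tau * Sigma) @ P.T))  # view uncertainty

# Black-Litterman posterior expected returns.
inv = np.linalg.inv
mu_bl = inv(inv(tau * Sigma) + P.T @ inv(Omega) @ P) @ (
        inv(tau * Sigma) @ pi + P.T @ inv(Omega) @ Q)

# Unconstrained mean-variance weights from the posterior returns.
w_bl = inv(delta * Sigma) @ mu_bl
print("posterior returns:", np.round(mu_bl, 4))
print("BL weights       :", np.round(w_bl / w_bl.sum(), 3))
```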

Utilizing Unlabeled Documents in Automatic Classification with Inter-document Similarities (문헌간 유사도를 이용한 자동분류에서 미분류 문헌의 활용에 관한 연구)

  • Kim, Pan-Jun; Lee, Jae-Yun
    • Journal of the Korean Society for Information Management / v.24 no.1 s.63 / pp.251-271 / 2007
  • This paper studies the problem of classifying documents with labeled and unlabeled learning data, especially with regard to the use of document similarity features. The problem of using unlabeled data is practically important because in many information systems obtaining training labels is expensive, while large quantities of unlabeled documents are readily available. A general semi-supervised learning algorithm has two steps. First, it trains a classifier using the available labeled documents and classifies the unlabeled documents. Then, it trains a new classifier using all the training documents, whether labeled manually or automatically. We suggest two types of semi-supervised learning algorithms with regard to document similarity features. One is one-step semi-supervised learning, which uses unlabeled documents only to generate document similarity features; the other is two-step semi-supervised learning, which uses unlabeled documents as learning examples as well as for similarity features. Experimental results obtained with support vector machines and a naive Bayes classifier show that, with a small set of labeled documents and a large set of unlabeled documents, we can achieve better performance than supervised learning that uses labeled data only. Considering the efficiency of a classifier system, the one-step semi-supervised learning algorithm suggested in this study could be a good solution for improving classification performance with unlabeled documents.
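
The two-step idea, training on labeled documents, pseudo-labeling the unlabeled ones, and retraining on everything, can be sketched with scikit-learn's self-training wrapper around an SVM, as below. This is an assumption-laden illustration, not the paper's similarity-feature method: the tiny corpus, labels, and confidence threshold are made up for the example.

```python
# Minimal sketch: self-training (pseudo-labeling) with an SVM base classifier.
# Not the paper's similarity-feature method; the corpus and threshold are illustrative.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

docs = [
    "support vector machines for text categorization",          # labeled: class 0
    "kernel methods and margin based classifiers",               # labeled: class 0
    "training discriminative classifiers on document vectors",   # labeled: class 0
    "library cataloging and metadata standards",                 # labeled: class 1
    "indexing and retrieval in digital libraries",               # labeled: class 1
    "thesaurus construction for subject indexing",               # labeled: class 1
    "semi supervised learning with unlabeled text",              # unlabeled
    "classification of bibliographic records by subject",        # unlabeled
]
labels = np.array([0, 0, 0, 1, 1, 1, -1, -1])   # -1 marks unlabeled documents

X = TfidfVectorizer().fit_transform(docs)
base = SVC(kernel="linear", probability=True)    # probabilities needed for self-training
model = SelfTrainingClassifier(base, threshold=0.55)
model.fit(X, labels)

# Labels the wrapper assigned to the unlabeled documents (-1 if never confident enough).
print("pseudo-labels:", model.transduction_[labels == -1])
```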