Genetic Outlier Detection for a Robust Support Vector Machine

  • Lee, Heesung (Department of Railroad Electrical and Electronics Engineering, Korea National University of Transportation, Gyeonggi-do)
  • Kim, Euntai (School of Electrical and Electronic Engineering, Yonsei University)
  • Received : 2015.04.08
  • Accepted : 2015.06.25
  • Published : 2015.09.30

Abstract

Support vector machine (SVM) has a strong theoretical foundation and has also achieved excellent empirical success; it has been widely used in a variety of pattern recognition applications. Unfortunately, SVM has the drawback that it is sensitive to outliers, and its performance is degraded by their presence. In this paper, a new outlier detection method based on a genetic algorithm (GA) is proposed for a robust SVM. The proposed method parallels GA-based feature selection and removes the outliers that would be treated as support vectors by the conventional soft margin SVM. The proposed algorithm is applied to various data sets from the UCI repository to demonstrate its performance.

1. Introduction

Support vector machine (SVM) was proposed by Vapnik et al. [1, 2]; it implements structural risk minimization [3]. Beginning with its early success in optical character recognition [1], SVM has been widely applied to a range of areas [4-6]. SVM possesses a strong theoretical foundation and enjoys excellent empirical success in pattern recognition problems and industrial applications [7]. However, SVM also has the drawback of sensitivity to outliers, and its performance can be degraded by their presence. Even though slack variables are introduced to suppress outliers [8, 9], outliers continue to influence the determination of the decision hyperplane because they incur a relatively high margin loss compared to the other data points [10]. Furthermore, when a quadratic margin loss is employed, the influence of outliers increases [11]. Previous research has considered this problem [8-10, 12-14].

In [12], an adaptive margin was proposed to reduce the margin losses (and hence the influence) of data far from their class centroids: the margin loss was scaled based on the distance between each data point and the center of its class. In [13, 14], a robust loss function was employed to limit the maximal margin loss of an outlier. Further, a robust SVM based on a smooth ramp loss was proposed in [8]; it suppresses the influence of outliers by employing the Huber loss function. Most works have aimed at reducing the effect of outliers by changing the margin loss function; only a small number have aimed at identifying the outliers and removing them from the training set. For example, Xu et al. proposed an outlier detection method using convex optimization in [10]. However, their method is complex, and relaxation is employed to approximate the optimization.

In this paper, a new robust SVM based on a genetic algorithm (GA) [15] is proposed. The proposed method locates the outliers among the training samples and removes them from the training set. The basic idea parallels that of genetic feature selection, wherein GAs locate irrelevant or redundant features and remove them by mimicking natural evolution. In the proposed method, the GA detects and removes outliers that would be considered support vectors by the conventional soft margin SVM.

The remainder of this paper is organized as follows. In Section 2, we offer preliminary information on GAs. In Section 3, we describe the proposed method. Section 4 details the experimental results that demonstrate its performance, and our conclusions are presented in Section 5.

 

2. Genetic Algorithms

Genetic algorithms (GAs) are engineering models derived from the natural mechanisms of genetics and evolution, and they are applicable to a wide range of problems. GAs typically maintain and manipulate a population of individuals that represents a set of candidate solutions for a given problem. The viability of each candidate solution is evaluated based on its fitness, and the population evolves better solutions via selection, crossover, and mutation. In the selection process, some individuals are copied to produce a tentative offspring population; the number of copies of an individual in the next generation is proportional to the individual's relative fitness value, so promising individuals are more likely to be present in the next generation. The selected individuals are then modified in the search for a globally optimal solution using crossover and mutation. GAs provide a simple yet robust optimization methodology [16].
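To make the loop described above concrete, here is a minimal generational GA sketch (illustrative only, not the authors' implementation) with fitness-proportional selection, one-point crossover, and bit-flip mutation; the one-max fitness used at the end is a toy assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def evolve(fitness, n_bits=20, pop_size=30, generations=50,
           p_crossover=0.9, p_mutation=0.01):
    """Minimal generational GA: fitness-proportional (roulette wheel)
    selection, one-point crossover, and bit-flip mutation.
    `fitness` must return a non-negative score for a binary chromosome."""
    pop = rng.integers(0, 2, size=(pop_size, n_bits))
    for _ in range(generations):
        f = np.array([fitness(c) for c in pop], dtype=float)
        # Selection: expected copies proportional to relative fitness.
        probs = f / f.sum() if f.sum() > 0 else np.full(pop_size, 1 / pop_size)
        parents = pop[rng.choice(pop_size, size=pop_size, p=probs)]
        # Crossover: exchange the tails of paired parents at a random cut.
        for i in range(0, pop_size - 1, 2):
            if rng.random() < p_crossover:
                cut = rng.integers(1, n_bits)
                parents[i, cut:], parents[i + 1, cut:] = \
                    parents[i + 1, cut:].copy(), parents[i, cut:].copy()
        # Mutation: flip each bit independently with a small probability.
        mask = rng.random(parents.shape) < p_mutation
        pop = np.where(mask, 1 - parents, parents)
    return max(pop, key=fitness)

# Toy usage: maximize the number of ones in the bit string.
best = evolve(fitness=lambda c: c.sum())
```

The same loop applies to any binary-encoded problem once a suitable fitness function is supplied, which is how it is used for support vector selection in Section 3.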

 

3. Genetic Outlier Selection for Support Vector Machines

In this section, a new outlier detection method based on a genetic algorithm is proposed. First, the dual quadratic optimization of the soft margin SVM is formulated, and support vector candidates are selected from the training set based on their Lagrange multipliers. Then, the candidates are partitioned into support vectors and outliers using the GA. Figure 1 presents the overall procedure of the proposed method.

Figure 1. Procedure of the proposed method.

Suppose that M data points {x1, x2, ..., xM} (xi ∈ Rn) are given, each of which is labeled with a binary class yi ∈ {−1, 1}. The goal of the SVM is to design a decision hyperplane

$$f(\mathbf{x}) = \mathbf{W}^{T}\mathbf{x} + w_{0} = 0$$

that maximally separates the two classes, that is,

$$\mathbf{W}^{T}\mathbf{x}_{i} + w_{0} \geq +1 \quad \text{for} \quad y_{i} = +1$$

and

$$\mathbf{W}^{T}\mathbf{x}_{i} + w_{0} \leq -1 \quad \text{for} \quad y_{i} = -1,$$

where W and w0 are the weight and bias of the decision function, respectively. The SVM is trained by solving

$$\min_{\mathbf{W},\, w_{0},\, \Xi} \;\; \frac{1}{2}\mathbf{W}^{T}\mathbf{W} + C\,\mathbf{1}^{T}\Xi$$

subject to

$$Y(X\mathbf{W} + w_{0}\mathbf{1}) \geq \mathbf{1} - \Xi, \qquad \Xi \geq \mathbf{0},$$

where X = [x1, x2, ..., xM]T, Y = diag(y1, y2, ..., yM), 1 = [1, 1, ..., 1]T, and 0 = [0, 0, ..., 0]T. Ξ = [ξ1, ..., ξM]T is a vector of slack variables representing the margin loss at each data point, and C is a constant that denotes the penalty for a misclassification. The above formulation can be recast into the dual problem

$$\max_{\Lambda} \;\; \mathbf{1}^{T}\Lambda - \frac{1}{2}\Lambda^{T} Y X X^{T} Y \Lambda$$

subject to

$$\mathbf{1}^{T} Y \Lambda = 0, \qquad \mathbf{0} \leq \Lambda \leq C\mathbf{1},$$

where Λ = [λ1, λ2, ..., λM]T is a Lagrange multiplier vector and the nonnegative number λi is the Lagrange multiplier associated with xi. In a standard SVM, the data points with positive λi are support vectors and contribute to the decision hyperplane according to

$$\mathbf{W} = X^{T} Y \Lambda = \sum_{\lambda_{i} > 0} \lambda_{i} y_{i} \mathbf{x}_{i}.$$
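As a concrete illustration (the paper does not specify an implementation), the data points with positive λi can be read off a trained soft margin SVM; a minimal sketch using scikit-learn, which is an assumed stand-in here:

```python
import numpy as np
from sklearn.svm import SVC

def candidate_set(X_train, y_train, C=1.0):
    """Train a linear soft margin SVM and return the indices of the
    training points with positive Lagrange multipliers (lambda_i > 0),
    together with the multipliers themselves."""
    svm = SVC(kernel="linear", C=C)
    svm.fit(X_train, y_train)
    # scikit-learn stores y_i * lambda_i in dual_coef_ and the indices
    # of the corresponding training points in support_.
    lambdas = np.abs(svm.dual_coef_).ravel()
    return svm.support_, lambdas
```

The returned index set corresponds to the candidate set S introduced below.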
The interesting point is that if outliers are included in the training set, the outliers are likely to have positive margin loss and thus contribute to the hyperplane. Further, outliers tend to have a relatively large margin loss and strongly influence the determination of the hyperplane, making the SVM sensitive to their presence. In this paper, a robust SVM design scheme based on GA is proposed. First, a set of support vector candidates

$$S = \{\mathbf{x}_{i} \mid \lambda_{i} > 0\}$$

is prepared by collecting the data points with positive Lagrange multipliers. As stated, not only the support vectors but also some outliers may be included in S.

The goal of support vector selection is to determine a subset Sv ⊆ S that includes only support vectors, such that the classification accuracy of the SVM is maximized while the number of data points in the subset, card(Sv), is minimized, where card(·) denotes the cardinality. This is a bi-criteria combinatorial optimization problem and is usually intractable because the search space grows exponentially with card(S). The implementation of the support vector selection parallels the feature selection method, and the GA is a promising approach to this bi-criteria optimization problem because GA-based feature selection methods outperform non-GA feature selection methods [16-18].

To retain the support vectors and discard the outliers in the subset Sv, the GA chromosome is represented as a binary string of ones and zeros, as illustrated in Figure 2. In this figure, "1" and "0" indicate whether the associated data point should be retained in or discarded from the set of support vectors, respectively.

Figure 2. Chromosome used in the support vector selection.

Genetic operators are applied to generate new chromosomes in each generation. There are two types of genetic operators: crossover and mutation. The purpose of crossover is to exchange information among different potential solutions, whereas mutation introduces genetic material that may have been missing from the initial population or lost during the crossover operations [19]. In this paper, one-point crossover and bit-flip mutation [20] are employed as the genetic operators. When the validation set is denoted as V = {v1, v2, ..., vm}, the fitness of a chromosome is computed using

$$F = \frac{1}{m}\sum_{j=1}^{m} c(v_{j}) - \alpha\,\frac{\mathrm{card}(S_{v})}{\mathrm{card}(S)},$$

where

$$c(v_{j}) = \begin{cases} 1, & \text{if } v_{j} \text{ is classified correctly}, \\ 0, & \text{otherwise}. \end{cases}$$

In this equation, m is the number of validation data points and α is a design coefficient. The fitness function embodies the bi-criteria requirement that the classification accuracy of the SVM be maximized while the number of data points in the subset card(Sv) be minimized: the first term promotes classification performance, and the second term promotes the compactness of the SVM. The coefficient α plays an essential role in striking a balance between the classification performance and the classification cost. The parameters of the GA and SVM are given in Table 1.

Table 1. Experiment parameters

In Table 1, α is set to 0.1 to emphasize the classification accuracy over the classification cost.
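As an illustration (the paper gives no code), the fitness evaluation can be sketched as follows, with α = 0.1 as in Table 1. Here train_fn, a callable that retrains the SVM on the retained candidates and returns a predictor, is a hypothetical placeholder:

```python
import numpy as np

def sv_selection_fitness(chrom, S_x, S_y, V_x, V_y, train_fn, alpha=0.1):
    """Bi-criteria fitness: validation accuracy minus a penalty on the
    fraction of retained candidates, card(S_v) / card(S)."""
    keep = chrom.astype(bool)
    if not keep.any():
        return 0.0  # an empty subset cannot define a classifier
    predict = train_fn(S_x[keep], S_y[keep])  # hypothetical SVM trainer
    accuracy = np.mean(predict(V_x) == V_y)   # first term: performance
    return accuracy - alpha * keep.sum() / len(chrom)  # second: compactness
```

Chromosomes scored in this way can be evolved with a generational loop like the one sketched in Section 2.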

 

4. Experimental Results

In this section, the validity of the proposed scheme is demonstrated by applying it to five databases from the UCI repository [21]. The UCI repository has been widely used within the pattern recognition community as a benchmark for machine learning algorithms. The five databases are the Wine, Haberman, Transfusion, German, and Pima sets. All the sets except the Wine set are binary; the first and second classes of the Wine set are used. The databases used in the experiments are summarized in Table 2.

Table 2. Datasets used in the experiments

In this experiment, each database is randomly divided into four equal-sized subsets: two subsets are used for training, and the remaining two are used for validation and testing. The training and validation sets are used to design the robust SVM, and the test sets are used to evaluate the performance of the algorithms. To demonstrate the robustness of the proposed method against outliers, approximately 5% and 10% of the training samples were randomly selected from each class and their labels were reversed. Five independent runs were performed for statistical verification, and the linear kernel was used for the SVM.

The performances of the proposed method and the general soft margin SVM were compared in terms of average testing accuracy and the number of support vectors; the results are summarized in Tables 3 and 4, where the proposed robust SVM is denoted as GASVM. The standard SVM exhibits marginally better performance than the proposed method only for the Australian database in the non-outlier case. In the majority of the cases, the proposed method achieves superior classification accuracy using a smaller number of support vectors than the standard SVM; that is, the proposed method is less sensitive to outliers and requires fewer support vectors.

Further, comparing the cases with 5% and 10% outliers in Figure 3 and Figure 4, it can be observed that for the standard SVM, the more outliers are included in the training set, the more support vectors are generated and hence the more the performance is degraded. The proposed method, in contrast, is less sensitive to the outliers, and the increase in support vectors is limited. The reason for the improved performance is that only useful and discriminatory support vectors are selected, so the brunt of the outlier influence on the SVM training is removed.

To highlight the robustness of the proposed method, the test accuracy of the GASVM was normalized with respect to that of the standard SVM, and the relative performances of the two SVMs are presented in Figure 5.

Table 3. Comparison of the proposed method (GASVM) and a previous method (SVM) in terms of testing accuracy

Table 4. Comparison of the proposed method (GASVM) and a previous method (SVM) in terms of the number of support vectors

Figure 3. Correct classification ratio of the SVM and GASVM.

Figure 4. Number of support vectors of the SVM and GASVM.

Figure 5. Relative performance of the proposed method compared to a general SVM.

In this figure, the length of the bar l denotes

$$l = \frac{C_{GASVM}}{C_{SVM}},$$

where $C_{GASVM}$ and $C_{SVM}$ are the correct classification rates of the GASVM and the standard SVM, respectively. From this figure, it is clear that the more outliers are included, the greater the relative advantage of the proposed method over the standard method.
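To make the experimental protocol concrete, here is a minimal sketch (illustrative; the function name is hypothetical) of the per-class label reversal used to create the 5% and 10% outlier conditions:

```python
import numpy as np

def inject_label_noise(y, ratio=0.05, seed=None):
    """Randomly reverse the labels of `ratio` of the samples in each
    class, mimicking the outlier injection described above.
    Assumes binary labels in {-1, +1}."""
    rng = np.random.default_rng(seed)
    y_noisy = y.copy()
    for cls in np.unique(y):
        idx = np.flatnonzero(y == cls)
        n_flip = int(round(ratio * len(idx)))
        flip = rng.choice(idx, size=n_flip, replace=False)
        y_noisy[flip] = -y_noisy[flip]  # reverse the label
    return y_noisy
```

For the 10% condition, ratio=0.10 would be passed instead.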

 

5. Conclusions

In this paper, we presented a new outlier detection method to improve the robustness of the SVM. The proposed method uses a GA to detect outliers among the support vector candidates produced by the soft margin SVM, and it achieved recognition performance and a total number of support vectors superior to those of previous methods. Using the proposed method, the robustness of the SVM was improved and the SVM was simplified by outlier deletion. The validity of the suggested method was demonstrated through experiments with five databases from the UCI repository.

References

  1. C. Cortes and V. Vapnik, “Support-vector networks,” Machine Learning, vol. 20, no. 3, pp. 273-297, Sep. 1995. https://doi.org/10.1007/BF00994018
  2. V. N. Vapnik, Statistical Learning Theory. Wiley, 1998.
  3. S. Jun, “An outlier data analysis using support vector regression,” Journal of The Korean Institute of Intelligent Systems, vol. 18, no. 6, pp. 876-880, 2008. https://doi.org/10.5391/JKIIS.2008.18.6.876
  4. V. Hoang, M. Le, and K. Jo, “Hybrid cascade boosting machine using variant scale blocks based HOG features for pedestrian detection,” Neurocomputing, vol. 135, pp. 357-366, 2014. https://doi.org/10.1016/j.neucom.2013.12.017
  5. S. Seo, H. Yang, and K. Sim, “Behavior learning and evolution of swarm robot system using support vector machine,” Journal of The Korean Institute of Intelligent Systems, vol. 18, no. 5, pp. 712-717, 2008. https://doi.org/10.5391/JKIIS.2008.18.5.712
  6. H. Shin, H. Jung, K. Cho, and J. Lee, “A prediction method of learning outcomes based on regression model for effective peer review learning,” Journal of The Korean Institute of Intelligent Systems, vol. 22, no. 5, pp. 624-630, 2012. https://doi.org/10.5391/JKIIS.2012.22.5.624
  7. S. Kumar, Neural Networks: A Classroom Approach. McGraw-Hill, 2005.
  8. L. Wang, H. Jia, and J. Li, “Training robust support vector machine with smooth ramp loss in primal space,” Neurocomputing, vol. 71, pp. 3020-3025, 2008. https://doi.org/10.1016/j.neucom.2007.12.032
  9. H. Lee, S. Hong, B. Lee, and E. Kim, “Design of robust support vector machine using genetic algorithm,” Journal of The Korean Institute of Intelligent Systems, vol. 20, no. 3, pp. 375-379, 2010. https://doi.org/10.5391/JKIIS.2010.20.3.375
  10. L. Xu, K. Crammer, and D. Schuurmans, “Robust support vector machine training via convex outlier ablation,” in Proc. of the 21st National Conference on Artificial Intelligence, pp. 536-542, 2006.
  11. J. A. K. Suykens and J. Vandewalle, “Least squares support vector machine classifiers,” Neural Processing Letters, vol. 9, no. 3, pp. 293-300, 1999. https://doi.org/10.1023/A:1018628609742
  12. Q. Song, W. Hu, and W. Xie, “Robust support vector machine with bullet hole image classification,” IEEE Trans. Systems, Man, and Cybernetics-Part C: Applications and Reviews, vol. 32, no. 4, pp. 440–448, 2002. https://doi.org/10.1109/TSMCC.2002.807277
  13. N. Krause and Y. Singer, “Leveraging the margin more carefully,” in Proc. of the 21st International Conference on Machine Learning, vol. 69, 2004.
  14. P. Bartlett and S. Mendelson, “Rademacher and Gaussian complexities: risk bounds and structural results,” Journal of Machine Learning Research, vol. 3, pp. 463-482, 2002.
  15. L. Davis, Handbook of Genetic Algorithms. Van Nostrand Reinhold, 1991.
  16. H. Lee, E. Kim, and M. Park, “A genetic feature weighting scheme for pattern recognition,” Integrated Computer-Aided Engineering, vol. 14, pp. 161-171, 2007.
  17. L. Kuncheva and L. Jain, “Nearest neighbor classifier: simultaneous editing and feature selection,” Pattern Recognition Letters, vol. 20, pp. 1149-1156, 1999. https://doi.org/10.1016/S0167-8655(99)00082-3
  18. I. Oh, J. Lee, and B. Moon, “Hybrid genetic algorithms for feature selection,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 26, no. 11, pp. 1424-1437, 2004. https://doi.org/10.1109/TPAMI.2004.105
  19. H. Juo and H. Chang, “A new symbiotic evolution-based fuzzy-neural approach to fault diagnosis of marine propulsion systems,” Artificial Intelligence, vol. 17, pp. 919-930, 2004.
  20. Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs. Springer, 1996.
  21. P. M. Murphy and D. W. Aha, “UCI Repository for Machine Learning Databases,” Technical report, Dept. of Information and Computer Science, Univ. of California, Irvine, Calif., 1994.
