• Title/Abstract/Keyword: vector data

Search results: 3,324 items

구형자료(球型資料)에 대(對)한 부트스트랩 신뢰원추체(信賴圓錐體) (Bootstrap Confidence Cones for Spherical Data)

  • 신양규
    • Journal of the Korean Data and Information Science Society / Vol. 3, No. 1 / pp.33-46 / 1992
  • The eigenvectors of the second moment matrix and the mean vector are measures of orientation for a distribution supported on the unit sphere. A bootstrap confidence cone for the eigenvector is constructed and the consistency of the method is discussed. By Monte Carlo simulation, the performance of the bootstrap cone for the eigenvector is compared with that of the asymptotic confidence cones for the two measures under parametric assumptions on the underlying distribution, and with that of the bootstrap cone for the mean vector. (A minimal numerical sketch of the bootstrap-cone idea follows this entry.)

  • PDF
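
A minimal numerical sketch of the bootstrap-cone idea described above, not the paper's exact construction: the toy spherical sample, the resample count, and the 95% cutoff are illustrative assumptions. The cone's half-angle is taken as a quantile of the angles between bootstrap replicates of the leading eigenvector of the second moment matrix and the original estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

def principal_axis(X):
    """Leading eigenvector of the second moment matrix (1/n) X^T X."""
    M = X.T @ X / len(X)
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, -1]          # eigenvector of the largest eigenvalue

# Toy spherical data: unit vectors clustered around the z-axis (assumption).
Z = rng.normal([0, 0, 3], 1.0, size=(200, 3))
X = Z / np.linalg.norm(Z, axis=1, keepdims=True)

axis_hat = principal_axis(X)

# Bootstrap the angle between each resampled axis and the original estimate.
B = 2000
angles = np.empty(B)
for b in range(B):
    Xb = X[rng.integers(0, len(X), len(X))]
    vb = principal_axis(Xb)
    c = abs(vb @ axis_hat)      # axes are sign-ambiguous, so use |cos|
    angles[b] = np.arccos(np.clip(c, -1.0, 1.0))

# 95% bootstrap confidence cone: half-angle = 95th percentile of the angles.
half_angle = np.quantile(angles, 0.95)
print("estimated axis:", axis_hat)
print("cone half-angle (deg):", np.degrees(half_angle))
```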

A Kernel Approach to Discriminant Analysis for Binary Classification

  • 신양규
    • Journal of the Korean Data and Information Science Society / Vol. 12, No. 2 / pp.83-93 / 2001
  • We investigate a kernel approach to discriminant analysis for binary classification from a machine learning point of view. Our view of the kernel approach follows the support vector method, one of the most promising techniques in the area of machine learning. Like ordinary discriminant analysis, the kernel method can determine the class to which an object most likely belongs. Moreover, it has some advantages over ordinary discriminant analysis, such as data compression and reduced computing time. (A minimal kernel-classification sketch follows this entry.)

  • PDF
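
The paper's kernel discriminant is not reproduced here; as a hedged stand-in, the sketch below uses scikit-learn's RBF-kernel support vector classifier to show the two points the abstract makes: kernel-based discrimination for a binary problem, and data compression in the sense that only the support vectors are kept for prediction. The data set and kernel parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Toy two-class data that a linear rule cannot separate (assumption).
X, y = make_moons(n_samples=400, noise=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# RBF-kernel support vector classifier: discrimination happens in the
# feature space induced by the kernel, not in the original input space.
clf = SVC(kernel="rbf", C=1.0, gamma=1.0).fit(X_tr, y_tr)

# "Data compression": only the support vectors are needed at prediction time.
print("training points:", len(X_tr), "support vectors:", clf.n_support_.sum())
print("test accuracy:", clf.score(X_te, y_te))
```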

Forecasting volatility via conditional autoregressive value at risk model based on support vector quantile regression

  • Shim, Joo-Yong; Hwang, Chang-Ha
    • Journal of the Korean Data and Information Science Society / Vol. 22, No. 3 / pp.589-596 / 2011
  • The conditional autoregressive value at risk (CAViaR) model is useful for risk management; it does not require the assumption that the conditional distribution stays the same over time while only the volatility changes. However, it does not provide volatility forecasts, which are needed for several important applications such as option pricing and portfolio management. For a variety of probability distributions, it is known that there is a constant relationship between the standard deviation and the distance between symmetric quantiles in the tails of the distribution. This inspires us to use support vector quantile regression (SVQR) to obtain volatility forecasts from the distance between CAViaR forecasts of symmetric quantiles. A simulated example and a real example are provided to indicate the usefulness of the proposed method for volatility forecasting. (A minimal sketch of the quantile-distance-to-volatility conversion follows this entry.)
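
A minimal sketch of the conversion the abstract relies on: for a location-scale family, the distance between symmetric tail quantiles is a constant multiple of the standard deviation. Empirical quantiles of simulated Gaussian returns stand in for the CAViaR/SVQR quantile forecasts; the tail level and the normality assumption are illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
alpha = 0.05                       # tail level (assumption)

# For any location-scale family, (q_{1-a} - q_a) / sigma is a constant.
# Under normality that constant is 2 * z_{1-a}.
scale_const = 2 * norm.ppf(1 - alpha)

# Stand-in for CAViaR-style quantile forecasts: empirical 5% / 95% quantiles
# of simulated returns with true volatility 2.0 (illustrative, not SVQR).
true_sigma = 2.0
returns = rng.normal(0.0, true_sigma, size=5000)
q_lo, q_hi = np.quantile(returns, [alpha, 1 - alpha])

sigma_hat = (q_hi - q_lo) / scale_const
print("true volatility:", true_sigma,
      "forecast from quantile distance:", round(sigma_hat, 3))
```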

3D 벡터 데이터를 이용한 효과적인 내부문양 표현 (Effective Internal Pattern Expression Using 3D Vector Data)

  • 박성준;조진수;황보택근
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2008년도 하계종합학술대회 / pp.645-646 / 2008
  • Silhouette extraction is widely used in many computer graphics applications. In this paper, we propose a method for extracting the 3D silhouette and internal pattern from 3D vector data. To do this, we first build an edge list, then define the silhouette, and finally remove hidden lines. After obtaining the silhouette, we extract the internal pattern using the dihedral angles of adjacent edges. The proposed method not only effectively improves the performance of extracting the 3D silhouette and internal pattern from 3D vector data but also reduces the computational complexity. (A minimal geometric sketch of the silhouette and dihedral-angle tests follows this entry.)

  • PDF
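
A minimal geometric sketch of the two edge tests mentioned above, under illustrative assumptions (a toy tetrahedron mesh, a fixed view direction, a 40° dihedral cutoff) and with hidden-line removal omitted: a silhouette edge separates a front-facing and a back-facing triangle, while an internal-pattern edge is a sharp crease whose adjacent faces meet at a large dihedral angle.

```python
import numpy as np
from collections import defaultdict

# Toy mesh: a tetrahedron (vertices, triangle faces) -- illustrative only.
V = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
F = [(0, 2, 1), (0, 1, 3), (1, 2, 3), (0, 3, 2)]   # outward-facing winding
view_dir = np.array([0.0, 0.0, -1.0])              # viewer looks along -z

def normal(face):
    a, b, c = V[list(face)]
    n = np.cross(b - a, c - a)
    return n / np.linalg.norm(n)

# Edge list: map each undirected edge to the faces sharing it.
edge_faces = defaultdict(list)
for f in F:
    for i in range(3):
        e = tuple(sorted((f[i], f[(i + 1) % 3])))
        edge_faces[e].append(f)

silhouette, internal = [], []
angle_threshold = np.radians(40)                   # dihedral cutoff (assumption)
for e, faces in edge_faces.items():
    n1, n2 = normal(faces[0]), normal(faces[1])
    front1 = np.dot(n1, view_dir) < 0
    front2 = np.dot(n2, view_dir) < 0
    if front1 != front2:                           # one front-, one back-facing
        silhouette.append(e)
    elif np.arccos(np.clip(np.dot(n1, n2), -1, 1)) > angle_threshold:
        internal.append(e)                         # sharp crease: internal pattern

print("silhouette edges:", silhouette)
print("internal-pattern edges:", internal)
```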

고차원 데이터의 분류를 위한 서포트 벡터 머신을 이용한 피처 감소 기법 (Feature reduction for classifying high dimensional data sets using support vector machine)

  • 고석하;이현주
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2008년도 하계종합학술대회 / pp.877-878 / 2008
  • We suggest a feature reduction method to classify mouse function data sets, which integrate several biological data sets represented as high-dimensional vectors. To increase classification accuracy and decrease computational overhead, it is important to reduce the dimension of the features. To do this, we employed the Hybrid Huberized Support Vector Machine with the kernels used for a kernel logistic regression method. Compared to the support vector machine, this approach shows better accuracy and yields useful features for each mouse function. (A sparse-SVM feature-reduction sketch follows this entry.)

  • PDF
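
The Hybrid Huberized SVM used in the paper is not available in scikit-learn, so the sketch below substitutes an L1-penalized linear SVM as a stand-in for sparsity-driven feature reduction, on synthetic high-dimensional data in place of the mouse-function sets. Sample sizes, penalty strength, and feature counts are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

# Synthetic high-dimensional data: 500 features, only 15 informative (assumption).
X, y = make_classification(n_samples=300, n_features=500, n_informative=15,
                           n_redundant=0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# L1 penalty drives most coefficients to exactly zero -> feature reduction.
sparse_svm = LinearSVC(penalty="l1", dual=False, C=0.1, max_iter=5000)
sparse_svm.fit(X_tr, y_tr)

selected = np.flatnonzero(sparse_svm.coef_.ravel())
print("features kept:", selected.size, "of", X.shape[1])

# Retrain a plain linear SVM on the reduced feature set and compare accuracy.
reduced_svm = LinearSVC(dual=False, max_iter=5000).fit(X_tr[:, selected], y_tr)
print("accuracy on reduced features:", reduced_svm.score(X_te[:, selected], y_te))
```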

객체지향 데이터 모델을 이용한 다물체 동역학 해석 시스템 개발 (Development of a Multibody Dynamics Analysis System Using the Object-Oriented Data Model)

  • 박태원;송현석;서종휘;한형석;이재경
    • 한국정밀공학회:학술대회논문집 / 한국정밀공학회 2003년도 춘계학술대회 논문집 / pp.1487-1490 / 2003
  • In this paper, the application of an object-oriented data model to develop a multibody dynamics analysis system, called O-DYN, is introduced. Mechanical components such as bodies, joints, and forces are modeled as objects that have data and methods, using an object-oriented modeling methodology. O-DYN, a dynamics analysis system based on the object-oriented modeling concept, is implemented in C++. One example is analyzed with O-DYN. It is expected that the analysis program, or the individual modules constructed in this paper, will be useful for mechanical engineers in predicting the dynamic responses of multibody systems and in developing analysis programs. (A minimal object-model sketch follows this entry.)

  • PDF
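
O-DYN itself is a C++ system and its class design is not shown in the abstract; the Python sketch below only illustrates the modeling idea described above: bodies, joints, and forces as objects carrying their own data and methods, assembled into a system object. All class names, attributes, and the degrees-of-freedom count are hypothetical simplifications, not O-DYN's API.

```python
from dataclasses import dataclass, field

@dataclass
class Body:
    """Rigid body: carries its own inertial data."""
    name: str
    mass: float
    position: tuple = (0.0, 0.0, 0.0)

@dataclass
class Joint:
    """Kinematic constraint between two bodies."""
    kind: str                     # e.g. "revolute", "translational"
    body_a: Body
    body_b: Body

@dataclass
class Force:
    """Applied force element acting on a body."""
    body: Body
    vector: tuple

@dataclass
class MultibodySystem:
    bodies: list = field(default_factory=list)
    joints: list = field(default_factory=list)
    forces: list = field(default_factory=list)

    def degrees_of_freedom(self):
        # Gruebler-style count assuming each joint removes 5 DOF
        # (a simplification; the real count depends on joint type).
        return 6 * len(self.bodies) - 5 * len(self.joints)

# Assemble a minimal pendulum-like model.
ground = Body("ground", mass=0.0)
link = Body("link", mass=2.0, position=(0.5, 0.0, 0.0))
system = MultibodySystem(bodies=[link],
                         joints=[Joint("revolute", ground, link)],
                         forces=[Force(link, (0.0, -9.81 * 2.0, 0.0))])
print("DOF:", system.degrees_of_freedom())
```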

Estimating Fuzzy Regression with Crisp Input-Output Using Quadratic Loss Support Vector Machine

  • 황창하;홍덕헌;이상복
    • 한국데이터정보과학회:학술대회논문집 / 한국데이터정보과학회 2004년도 추계학술대회 / pp.53-59 / 2004
  • The support vector machine (SVM) approach to regression can be found in the information science literature. SVM implements the regularization technique, which has been introduced as a way of controlling the smoothness properties of the regression function. In this paper, we propose a new estimation method based on the quadratic loss SVM for Tanaka's linear fuzzy regression model, and furthermore propose an estimation method for nonlinear fuzzy regression. This is a very attractive approach to evaluating a nonlinear fuzzy model with crisp input and output data. (A quadratic-loss kernel regression sketch follows this entry.)

  • PDF
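
The fuzzy-spread estimation of the paper is not reproduced here; the sketch only shows the quadratic-loss (least-squares) kernel regression ingredient, using scikit-learn's KernelRidge, which solves a regularized quadratic problem of the same type. The data, kernel, and regularization parameter are illustrative assumptions.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)

# Crisp input-output data from a noisy nonlinear target (illustrative).
X = np.sort(rng.uniform(-3, 3, size=(80, 1)), axis=0)
y = np.sinc(X).ravel() + rng.normal(0, 0.05, size=80)

# Quadratic (squared) loss + L2 regularization with an RBF kernel: the
# regularization parameter alpha controls the smoothness of the fit.
model = KernelRidge(kernel="rbf", gamma=1.0, alpha=0.1).fit(X, y)

X_new = np.linspace(-3, 3, 7).reshape(-1, 1)
print(np.round(model.predict(X_new), 3))
```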

Adaptive ridge procedure for L0-penalized weighted support vector machines

  • Kim, Kyoung Hee; Shin, Seung Jun
    • Journal of the Korean Data and Information Science Society / Vol. 28, No. 6 / pp.1271-1278 / 2017
  • Although the $L_0$-penalty is the most natural choice for identifying the sparsity structure of a model, it has not been widely used due to its computational bottleneck. Recently, the adaptive ridge procedure was developed to efficiently approximate an $L_q$-penalized problem by an iterative $L_2$-penalized one. In this article, we propose to apply the adaptive ridge procedure to the $L_0$-penalized weighted support vector machine (WSVM) to facilitate the corresponding optimization. Our numerical investigation shows the advantageous performance of the $L_0$-penalized WSVM compared to the conventional WSVM with the $L_2$ penalty for both simulated and real data sets. (A minimal adaptive-ridge iteration sketch follows this entry.)
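
A minimal sketch of the adaptive ridge iteration, shown on a least-squares surrogate rather than the paper's weighted SVM objective: each outer step solves an $L_2$-penalized problem whose per-coefficient weights $1/(\beta_j^2 + \varepsilon)$ push small coefficients toward zero, approximating the $L_0$ penalty. The data, $\lambda$, and $\varepsilon$ values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse linear model: 30 features, only 3 truly nonzero (assumption).
n, p = 100, 30
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[[2, 7, 19]] = [1.5, -2.0, 1.0]
y = X @ beta_true + rng.normal(0, 0.1, size=n)

lam, eps = 1.0, 1e-4
beta = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)   # plain ridge start

for _ in range(50):
    # Adaptive ridge step: solve (X'X + lam * D) beta = X'y, where
    # D = diag(1 / (beta_j^2 + eps)) is built from the previous iterate.
    D = np.diag(1.0 / (beta ** 2 + eps))
    beta_new = np.linalg.solve(X.T @ X + lam * D, X.T @ y)
    if np.max(np.abs(beta_new - beta)) < 1e-8:
        beta = beta_new
        break
    beta = beta_new

print("estimated support:", np.flatnonzero(np.abs(beta) > 1e-3))
```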

Transductive SVM을 위한 분지-한계 알고리즘 (A Branch-and-Bound Algorithm for Finding an Optimal Solution of Transductive Support Vector Machines)

  • 박찬규
    • 한국경영과학회지 / Vol. 31, No. 2 / pp.69-85 / 2006
  • The transductive support vector machine (TSVM) is a semi-supervised learning algorithm that exploits the domain structure of the whole data set by considering labeled and unlabeled data together. Although it was proposed several years ago, there has been no efficient algorithm that can handle problems with more than a few hundred training examples. In this paper, we propose an efficient branch-and-bound algorithm that can solve large-scale TSVM problems with thousands of training examples. The proposed algorithm uses two bounding techniques: a min-cut bound and a reduced SVM bound. The min-cut bound is derived from a capacitated graph whose cuts represent a lower bound on the optimal objective function value of the dual problem. The reduced SVM bound is obtained by constructing the SVM problem with only the labeled data. Experimental results show that the accuracy rate of TSVM can be significantly improved by learning from the optimal solution of TSVM rather than an approximate solution. (A toy branch-and-bound sketch follows this entry.)
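
A toy illustration, not the paper's algorithm: a depth-first branch-and-bound over the labels of a handful of unlabeled points, pruned with the idea behind the reduced SVM bound (the optimal objective using only the labeled and already-fixed points cannot exceed that of any completion, because the remaining loss terms are non-negative). The min-cut bound, the usual balance constraint on the unlabeled labels, and any large-scale machinery are omitted; the smooth squared-hinge objective and the tiny data set are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
C = 1.0

def svm_objective(X, y):
    """Optimal primal value of 0.5||w||^2 + C * sum of squared-hinge losses."""
    d = X.shape[1]
    def obj(wb):
        w, b = wb[:d], wb[d]
        margins = np.maximum(0.0, 1.0 - y * (X @ w + b))
        return 0.5 * w @ w + C * np.sum(margins ** 2)
    return minimize(obj, np.zeros(d + 1), method="BFGS").fun

# Two labeled clusters plus a few unlabeled points between them (illustrative).
X_lab = np.vstack([rng.normal([-2, 0], 0.3, (5, 2)), rng.normal([2, 0], 0.3, (5, 2))])
y_lab = np.array([-1] * 5 + [1] * 5)
X_unl = rng.normal([0, 0], 1.0, (6, 2))

best = {"value": np.inf, "labels": None}

def branch(fixed):
    """Depth-first search over label assignments of the unlabeled points."""
    k = len(fixed)
    # Bound: objective using only labeled + already-fixed points. Remaining
    # (non-negative) loss terms can only increase it -> valid lower bound.
    bound = svm_objective(np.vstack([X_lab, X_unl[:k]]),
                          np.concatenate([y_lab, fixed]))
    if bound >= best["value"]:
        return                                      # prune this subtree
    if k == len(X_unl):
        best["value"], best["labels"] = bound, list(fixed)
        return
    for lab in (+1, -1):
        branch(fixed + [lab])

branch([])
print("best TSVM objective:", round(best["value"], 3))
print("inferred labels for unlabeled points:", best["labels"])
```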

Sparse Kernel Regression using IRWLS Procedure

  • Park, Hye-Jung
    • Journal of the Korean Data and Information Science Society / Vol. 18, No. 3 / pp.735-744 / 2007
  • The support vector machine (SVM) is capable of providing a more complete description of the linear and nonlinear relationships among random variables. In this paper we propose a sparse kernel regression (SKR) to overcome a weak point of SVM, namely the steep growth of the number of support vectors as the number of training data increases. The iteratively reweighted least squares (IRWLS) procedure is used to solve the optimization problem of SKR with a Laplacian prior. Furthermore, the generalized cross-validation (GCV) function is introduced to select the hyperparameters which affect the performance of SKR. Experimental results are then presented which illustrate the performance of the proposed procedure. (An IRWLS sketch with a Laplacian prior follows this entry.)

  • PDF
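
A minimal sketch of the IRWLS idea for a Laplacian ($L_1$) prior on the kernel coefficients: each iteration solves a weighted ridge system in which small coefficients receive large penalties and are driven toward zero, yielding sparsity in the number of active kernel terms. The Gaussian kernel, the fixed $\lambda$ (no GCV selection), and the $\varepsilon$ smoothing are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data from a noisy nonlinear target (illustrative).
n = 60
x = np.sort(rng.uniform(-3, 3, n))
y = np.sin(2 * x) + rng.normal(0, 0.1, n)

# RBF (Gaussian) kernel matrix.
gamma = 2.0
K = np.exp(-gamma * (x[:, None] - x[None, :]) ** 2)

lam, eps = 0.5, 1e-6
alpha = np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ y)   # ridge start

# IRWLS for 0.5||y - K a||^2 + lam * sum|a_j|: each |a_j| is majorized by
# a_j^2 / (2|a_j_old|) + const, giving a weighted ridge problem per step.
for _ in range(100):
    D = np.diag(1.0 / (np.abs(alpha) + eps))
    alpha_new = np.linalg.solve(K.T @ K + lam * D, K.T @ y)
    if np.max(np.abs(alpha_new - alpha)) < 1e-8:
        alpha = alpha_new
        break
    alpha = alpha_new

print("nonzero kernel coefficients:", int(np.sum(np.abs(alpha) > 1e-4)), "of", n)
```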