• Title/Summary/Keyword: Sample Vector


Study of Personal Credit Risk Assessment Based on SVM

  • LI, Xin;XIA, Han
    • The Journal of Industrial Distribution & Business
    • /
    • v.13 no.10
    • /
    • pp.1-8
    • /
    • 2022
  • Purpose: Support vector machine (SVM) ensembles have recently been proposed to improve the classification performance of credit risk models. However, currently used fusion strategies do not evaluate the importance of each component SVM classifier's output when combining the component predictions into the final decision. To deal with this problem, this paper designs an SVM ensemble method based on the fuzzy integral, which aggregates the outputs of the separate component SVMs weighted by the importance of each component. Research design, data, and methodology: This paper designs a personal credit risk evaluation index system including 16 indicators and discusses a fuzzy-integral-based SVM ensemble method for designing a credit risk assessment system that discriminates good creditors from bad ones. This paper randomly selects 1500 sample records of personal loan customers of a commercial bank in China from 2015 to 2020 for simulation experiments. Results: Comparing the experimental results of the SVM ensemble with those of the single SVM and the neural network ensemble, the proposed method outperforms both in terms of classification accuracy. Conclusions: The results show that the method proposed in this paper has higher classification accuracy than the other classification methods, which confirms its feasibility and effectiveness.
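The fuzzy-integral fusion described above can be illustrated with a Choquet integral over component-classifier confidences. This is a minimal sketch, not the paper's implementation: the scores and the fuzzy measure `mu` (which the paper would derive from each component SVM's importance) are made-up values for illustration.

```python
def choquet_integral(scores, mu):
    """Aggregate component-classifier confidences with a Choquet integral.

    scores: dict classifier-name -> confidence in [0, 1]
    mu: dict frozenset-of-names -> fuzzy measure, with mu of the full
        set equal to 1; mu encodes each subset's joint importance.
    """
    items = sorted(scores.items(), key=lambda kv: kv[1])  # ascending scores
    names = [k for k, _ in items]
    total, prev = 0.0, 0.0
    for i, (_, s) in enumerate(items):
        subset = frozenset(names[i:])       # classifiers scoring at least s
        total += (s - prev) * mu[subset]
        prev = s
    return total

# Hypothetical outputs of three component SVMs and their joint importances.
scores = {"svm_a": 0.9, "svm_b": 0.6, "svm_c": 0.4}
mu = {frozenset({"svm_a", "svm_b", "svm_c"}): 1.0,
      frozenset({"svm_a", "svm_b"}): 0.8,
      frozenset({"svm_a"}): 0.5}
fused = choquet_integral(scores, mu)   # 0.4*1.0 + 0.2*0.8 + 0.3*0.5 = 0.71
```

The measure rewards agreement among the more important classifiers: a high score only counts fully if the subset of classifiers supporting it carries high joint importance.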

High Bit-Rates Quantization of the First-Order Markov Process Based on a Codebook-Constrained Sample-Adaptive Product Quantizers (부호책 제한을 가지는 표본 적응 프로덕트 양자기를 이용한 1차 마르코프 과정의 고 전송률 양자화)

  • Kim, Dong-Sik
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.49 no.1
    • /
    • pp.19-30
    • /
    • 2012
  • For digital data compression, quantization is the main part of lossy source coding. In order to improve quantization performance, the vector quantizer (VQ) can be employed. The encoding complexity, however, increases exponentially as the vector dimension or bit rate gets large. Much research has been conducted to alleviate such problems of VQ. Especially for high bit rates, a constrained VQ, called the sample-adaptive product quantizer (SAPQ), has been proposed to reduce the huge encoding complexity of regular VQs. SAPQ has a structure very similar to that of the product VQ (PQ), but its performance can be better than in the PQ case. Further, the encoding complexity and the memory requirement for the codebooks are lower than in the regular full-search VQ case. Among SAPQs, 1-SAPQ has a simple quantizer structure, where each product codebook is symmetric with respect to the diagonal line in the underlying vector space. It is known that 1-SAPQ performs well for i.i.d. sources. In this paper, the design of 1-SAPQ for the first-order Markov process is studied. For an efficient design of 1-SAPQ, an algorithm for the initial codebook is proposed. Through numerical analysis, it is shown that 1-SAPQ yields lower quantizer distortion than a VQ of similar encoding complexity, and achieves distortions close to those of the DPCM (differential pulse code modulation) scheme with the Lloyd-Max quantizer.
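SAPQ itself is structurally involved, but the Lloyd-Max-style scalar quantizer the abstract uses as its reference point can be sketched with Lloyd's iteration: nearest-neighbor partition, then centroid update. The samples and level count below are made up for illustration.

```python
def lloyd_scalar_quantizer(samples, levels, iters=50):
    """Design a scalar codebook with Lloyd's iteration."""
    samples = sorted(samples)
    n = len(samples)
    # initialize codewords at sample quantiles
    code = [samples[(2 * i + 1) * n // (2 * levels)] for i in range(levels)]
    for _ in range(iters):
        cells = [[] for _ in range(levels)]
        for x in samples:                      # nearest-neighbor partition
            j = min(range(levels), key=lambda k: (x - code[k]) ** 2)
            cells[j].append(x)
        # centroid (conditional-mean) update; keep old codeword if cell empty
        code = [sum(c) / len(c) if c else code[j] for j, c in enumerate(cells)]
    return code

# Two tight clusters: the two codewords settle on the cluster means.
code = lloyd_scalar_quantizer([0.0, 0.1, 0.2, 1.0, 1.1, 1.2], levels=2)
```

Both steps can only decrease mean squared distortion, which is why the iteration converges to a locally optimal codebook.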

Particle Filtering based Object Tracking Method using Feedback and Tracking Box Correction (피드백과 박스 보정을 이용한 Particle Filtering 객체추적 방법론)

  • Ahn, Jung-Ho
    • Journal of Satellite, Information and Communications
    • /
    • v.8 no.1
    • /
    • pp.77-82
    • /
    • 2013
  • The object tracking method using particle filtering has proved successful, since it is based on Monte Carlo simulation to estimate the posterior distribution of a state vector that is nonlinear and non-Gaussian in real-world situations. In this paper, we present two novel methods that improve the performance of particle-filter-based object tracking. The first is a feedback method that replaces the lowest-weighted tracking sample with the state vector estimated in the previous frame. The second is a tracking box correction method that finds a confidence interval of the back-projection probability on the estimated candidate object area. A sample propagation equation, obtained from experiments, is also presented. We designed a well-organized test data set that reflects various challenging circumstances, and experimental results on it prove that the proposed methods improve the traditional particle-filter-based object tracking method.
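The feedback idea, replacing the lowest-weight sample with the previous frame's estimate, can be sketched on a 1-D toy state. Everything here (the Gaussian observation model, particle count, noise levels) is an illustrative assumption, not the paper's propagation equation:

```python
import math
import random

def pf_step(particles, observe, motion_std, prev_estimate):
    """One bootstrap-filter step with the feedback replacement."""
    parts = [p + random.gauss(0, motion_std) for p in particles]  # propagate
    w = [observe(p) for p in parts]                # observation likelihoods
    i_min = min(range(len(w)), key=lambda i: w[i])
    parts[i_min] = prev_estimate                   # feedback: recycle estimate
    w[i_min] = observe(prev_estimate)
    s = sum(w)
    w = [x / s for x in w]
    estimate = sum(p * wi for p, wi in zip(parts, w))  # posterior mean
    return parts, estimate

random.seed(0)
true_pos = 5.0
observe = lambda p: math.exp(-0.5 * ((p - true_pos) / 0.5) ** 2)
particles = [i * 0.05 for i in range(200)]         # spread over [0, 10)
particles, est = pf_step(particles, observe, 0.1, prev_estimate=5.0)
```

The replacement keeps a known-good hypothesis alive even when propagation noise scatters every particle away from the object.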

The Optimal Ellipse Estimation Method for Chromosome Bands Extraction (염색체 마디 추출을 위한 최적타원 추정기법)

  • Lee, Sang-Yeol;Lee, Kwon-Soon;Jeon, Gye-Rok;Chang, Yong-Hoon;Eom, Sang-Hui
    • Proceedings of the KOSOMBE Conference
    • /
    • v.1995 no.05
    • /
    • pp.227-229
    • /
    • 1995
  • This paper examines an optimal method for extracting the chromosome feature vector. Conventional methods use line segmentation on a chromosome image, which is not only inaccurate but also requires a long analysis time. This paper proposes to acquire the feature vector from the image using an optimal ellipse estimation method. Chromosome shapes are normally curved and very difficult to analyze automatically. A chromosome has many bands, each of which looks like an ellipse. If these bands can be estimated with ellipses, the sample can be reconstructed in a straightened form that is easy to analyze. We rearranged a chromosome image with the proposed method; the result shows a reconstructed sample that is simple to use for chromosome analysis.


Optimal SVM learning method based on adaptive sparse sampling and granularity shift factor

  • Wen, Hui;Jia, Dongshun;Liu, Zhiqiang;Xu, Hang;Hao, Guangtao
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.4
    • /
    • pp.1110-1127
    • /
    • 2022
  • To improve the training efficiency and generalization performance of a support vector machine (SVM) on large-scale data sets, an optimal SVM learning method based on adaptive sparse sampling and a granularity shift factor is presented. The proposed method combines sampling optimization with learner optimization. First, an adaptive sparse sampling method based on potential-function density clustering is designed to adaptively obtain sparse samples, which reduces the training sample set while effectively approximating the spatial structure distribution of the original sample set. A granularity shift factor method is then constructed to optimize the SVM decision hyperplane, which fully considers the neighborhood information of each granularity region in the sparse sampling set. Experiments on an artificial dataset and three benchmark datasets show that the proposed method achieves relatively higher training efficiency while ensuring good generalization performance of the learner, verifying its effectiveness.
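The sampling side can be illustrated with a potential-function density and a greedy pick that keeps high-density points while suppressing near-duplicates. This is a sketch under made-up data and parameters; the paper's actual clustering and the downstream SVM step are not reproduced.

```python
import math

def density(points, radius=1.0):
    """Potential-function density: sum of Gaussian kernels to all points."""
    return [sum(math.exp(-((px - qx) ** 2 + (py - qy) ** 2) / radius ** 2)
                for qx, qy in points)
            for px, py in points]

def sparse_sample(points, keep, min_dist=1.0):
    """Greedily keep the densest points that are pairwise min_dist apart."""
    dens = density(points)
    order = sorted(range(len(points)), key=lambda i: -dens[i])
    chosen = []
    for i in order:
        if len(chosen) == keep:
            break
        if all((points[i][0] - points[j][0]) ** 2 +
               (points[i][1] - points[j][1]) ** 2 >= min_dist ** 2
               for j in chosen):
            chosen.append(i)
    return [points[i] for i in chosen]

# Two tight clusters: one representative survives per cluster.
pts = [(0, 0), (0.1, 0), (0, 0.1), (5, 5), (5.1, 5), (5, 5.1)]
reps = sparse_sample(pts, keep=2, min_dist=1.0)
```

The min-distance constraint is what makes the reduced set approximate the original spatial structure rather than collapse onto the single densest cluster.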

A Rao-Robson Chi-Square Test for Multivariate Normality Based on the Mahalanobis Distances

  • Park, Cheolyong
    • Communications for Statistical Applications and Methods
    • /
    • v.7 no.2
    • /
    • pp.385-392
    • /
    • 2000
  • Many tests for multivariate normality are based on the spherical coordinates of the scaled residuals of multivariate observations. Moore and Stubblebine's (1981) Pearson chi-square test is based on the radii of the scaled residuals, or equivalently the sample Mahalanobis distances of the observations from the sample mean vector. The chi-square statistic does not have a limiting chi-square distribution, since the unknown parameters are estimated from ungrouped data. We derive a simple closed form of the Rao-Robson chi-square test statistic and provide a self-contained proof that it has a limiting chi-square distribution. We then provide an illustrative example of application to real data, with a simulation study showing the finite-sample accuracy of the limiting distribution.
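The sample Mahalanobis distances these tests are built on can be computed directly; for 2-D data the covariance inverse is closed-form. A small sketch with made-up data, using the divisor-n covariance (matching scaled residuals); a convenient sanity check is that the squared distances always sum to n·p:

```python
def mahalanobis_sq(data):
    """Squared Mahalanobis distances of 2-D points from the sample mean,
    using the divisor-n sample covariance (as for scaled residuals)."""
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    sxx = sum((x - mx) ** 2 for x, _ in data) / n
    syy = sum((y - my) ** 2 for _, y in data) / n
    sxy = sum((x - mx) * (y - my) for x, y in data) / n
    det = sxx * syy - sxy * sxy                       # 2x2 determinant
    ixx, ixy, iyy = syy / det, -sxy / det, sxx / det  # covariance inverse
    return [(x - mx) ** 2 * ixx + 2 * (x - mx) * (y - my) * ixy
            + (y - my) ** 2 * iyy
            for x, y in data]

d = mahalanobis_sq([(1, 2), (2, 3), (3, 5), (4, 4), (5, 7)])
```

Grouping these squared distances into cells and comparing observed with expected chi-square cell probabilities gives the Pearson-type statistic the abstract starts from.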


Fast Stitching Algorithm by using Feature Tracking (특징점 추적을 통한 다수 영상의 고속 스티칭 기법)

  • Park, Siyoung;Kim, Jongho;Yoo, Jisang
    • Journal of Broadcast Engineering
    • /
    • v.20 no.5
    • /
    • pp.728-737
    • /
    • 2015
  • A stitching algorithm obtains descriptors of the feature points extracted from multiple images and creates a single image through a matching process between the feature points. In this paper, feature extraction and matching techniques for high-speed panorama creation from video input are proposed. Features from Accelerated Segment Test (FAST) is used for high-speed feature extraction. A new feature-point matching process, different from the conventional method, is also proposed: the Mean Shift vector required for matching is obtained by tracking the region containing each feature point, and the obtained vector is used to match the extracted feature points. To remove outliers, the RANdom SAmple Consensus (RANSAC) method is used. By obtaining a homography transformation matrix between the two input images, a single panoramic image is generated. Experimental results show that the proposed algorithm generates panoramic images faster than the existing method.
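The RANSAC outlier-rejection step can be sketched on the simplest motion model, a pure 2-D translation (the paper estimates a full homography; the correspondences below are made up):

```python
import random

def ransac_translation(matches, iters=200, thresh=1.0):
    """RANSAC for a 2-D translation between matched point pairs.
    matches: list of ((x, y), (u, v)) correspondences."""
    random.seed(1)
    best_t, best_in = (0.0, 0.0), []
    for _ in range(iters):
        (x, y), (u, v) = random.choice(matches)   # minimal sample: 1 match
        t = (u - x, v - y)
        inl = [m for m in matches
               if abs(m[1][0] - m[0][0] - t[0]) < thresh
               and abs(m[1][1] - m[0][1] - t[1]) < thresh]
        if len(inl) > len(best_in):               # keep largest consensus set
            best_t, best_in = t, inl
    # least-squares refit on the consensus set
    tx = sum(u - x for (x, _), (u, _) in best_in) / len(best_in)
    ty = sum(v - y for (_, y), (_, v) in best_in) / len(best_in)
    return (tx, ty), best_in

matches = [((0, 0), (3, 2)), ((1, 1), (4, 3)), ((2, 0), (5, 2)),
           ((0, 2), (3, 4)),                      # inliers: shift (3, 2)
           ((1, 0), (9, 9)), ((2, 2), (0, 0))]    # outliers
t, inliers = ransac_translation(matches)
```

A homography needs four correspondences per hypothesis instead of one, but the hypothesize-score-refit loop is the same.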

A pilot study using machine learning methods about factors influencing prognosis of dental implants

  • Ha, Seung-Ryong;Park, Hyun Sung;Kim, Eung-Hee;Kim, Hong-Ki;Yang, Jin-Yong;Heo, Junyoung;Yeo, In-Sung Luke
    • The Journal of Advanced Prosthodontics
    • /
    • v.10 no.6
    • /
    • pp.395-400
    • /
    • 2018
  • PURPOSE. This study tried to find the most significant factors predicting implant prognosis using machine learning methods. MATERIALS AND METHODS. The data used in this study were based on a systematic search of chart files at Seoul National University Bundang Hospital over one year. In this period, oral and maxillofacial surgeons inserted 667 implants in 198 patients after consultation with a prosthodontist. Traditional statistical methods were inappropriate for this study, which analyzed data from a small sample to find factors affecting the prognosis. Machine learning methods were used instead, since they have analyzing power for small sample sizes and are able to find new factors not previously known to affect the result. A decision tree model and a support vector machine were used for the analysis. RESULTS. The results identified the mesio-distal position of the inserted implant as the most significant factor determining its prognosis. Both machine learning methods, the decision tree model and the support vector machine, yielded similar results. CONCLUSION. Dental clinicians should be careful in locating implants in patients' mouths, especially mesio-distally, to minimize complications threatening implant survival.

Reliability-based combined high and low cycle fatigue analysis of turbine blade using adaptive least squares support vector machines

  • Ma, Juan;Yue, Peng;Du, Wenyi;Dai, Changping;Wriggers, Peter
    • Structural Engineering and Mechanics
    • /
    • v.83 no.3
    • /
    • pp.293-304
    • /
    • 2022
  • In this work, a novel reliability approach for combined high and low cycle fatigue (CCF) estimation is developed by combining an active learning strategy with a least squares support vector machine (LS-SVM) surrogate model (named ALS-SVM) to address multi-source uncertainties, including working loads, material properties, and the model itself. Initially, a new active learning function combining the LS-SVM approach with Monte Carlo simulation (MCS) is presented to improve computational efficiency with fewer calls to the performance function. To consider the uncertainty of the surrogate model at candidate sample points, the learning function employs the k-fold cross-validation method and introduces the predicted variance to sequentially select sampling points. Following that, low cycle fatigue (LCF) loads and high cycle fatigue (HCF) loads are first estimated from training samples extracted from finite element (FE) simulations, and their simulated responses, together with the sample points of the model parameters in the Coffin-Manson formula, are selected as the MC samples to establish the ALS-SVM model. In this analysis, the MC samples are substituted in to predict the CCF reliability of turbine blades using the built ALS-SVM model. A comparison of the two approaches indicates that the reliability model based on the linear cumulative damage rule gives a non-conservative result compared with the proposed one. In addition, the results demonstrate that ALS-SVM is an effective analysis method with high computational efficiency, requiring only small training samples to obtain accurate fatigue reliability.
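The k-fold-variance selection idea can be sketched with a deliberately simplified learner: plain least-squares lines standing in for LS-SVM, one per fold, with the next sample chosen where the fold predictions disagree most. All data and the learner swap are illustrative assumptions, not the paper's method:

```python
def kfold_models(xs, ys, k=3):
    """Fit one least-squares line per leave-one-fold-out split."""
    models = []
    for f in range(k):
        tx = [x for i, x in enumerate(xs) if i % k != f]
        ty = [y for i, y in enumerate(ys) if i % k != f]
        n = len(tx)
        mx, my = sum(tx) / n, sum(ty) / n
        b = (sum((x - mx) * (y - my) for x, y in zip(tx, ty))
             / sum((x - mx) ** 2 for x in tx))
        models.append((b, my - b * mx))           # slope, intercept
    return models

def select_next(models, candidates):
    """Pick the candidate where the fold models disagree the most."""
    def pred_var(x):
        preds = [b * x + a for b, a in models]
        m = sum(preds) / len(preds)
        return sum((p - m) ** 2 for p in preds) / len(preds)
    return max(candidates, key=pred_var)

xs = list(range(9))
noise = [0.1, -0.2, 0.05, 0.3, -0.1, 0.0, 0.2, -0.3, 0.15]
ys = [2 * x + e for x, e in zip(xs, noise)]       # noisy linear response
models = kfold_models(xs, ys)
nxt = select_next(models, [1, 4, 20])             # far extrapolation wins
```

Cross-fold disagreement grows fastest away from the training data, so the loop spends its expensive performance-function calls exactly where the surrogate is least trustworthy.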

High Bit Rate Image Coder Using DPCM based on Sample-Adaptive Product Quantizer (표본 적응 프러덕트 양자기에 기초한 DPCM을 이용한 고 전송률 영상 압축)

  • 김동식;이상욱
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.24 no.12B
    • /
    • pp.2382-2390
    • /
    • 1999
  • In this paper, we employed a new quantization scheme called the sample-adaptive product quantizer (SAPQ) to quantize image data in a differential pulse code modulation (DPCM) coder with fixed-length outputs and high bit rates. In order to improve the performance of traditional DPCM coders, the scalar quantizer should be replaced by the vector quantizer (VQ). As the bit rate increases, however, it becomes nearly impossible to implement a conventional VQ or a modified VQ, such as the tree-structured VQ, even though the modified VQ can significantly reduce the encoding complexity. SAPQ has the form of a feed-forward adaptive scalar quantizer with a short adaptation period. However, since SAPQ is a structurally constrained VQ, it can achieve VQ-level performance with low encoding complexity. Since SAPQ has a scalar quantizer structure, it can easily be applied to DPCM coders using traditional scalar-value predictors. For synthetic data and real images, by employing SAPQ as the quantizer part of DPCM coders, we obtained a 2~3 dB improvement over DPCM coders based on Lloyd-Max scalar quantizers for data rates above 4 b/point.
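The DPCM loop itself, predict from the previous reconstruction, quantize the prediction error, feed the reconstruction back, can be sketched with any scalar codebook in the quantizer slot (here a made-up five-level error codebook; the paper puts SAPQ or a Lloyd-Max quantizer there):

```python
def dpcm_encode(samples, codebook):
    """DPCM: quantize prediction errors against the previous reconstruction."""
    pred, indices, recon = 0.0, [], []
    for s in samples:
        e = s - pred                                   # prediction error
        i = min(range(len(codebook)), key=lambda k: abs(codebook[k] - e))
        indices.append(i)
        pred += codebook[i]          # decoder-matched reconstruction
        recon.append(pred)
    return indices, recon

def dpcm_decode(indices, codebook):
    """Rebuild the signal by accumulating quantized errors."""
    pred, out = 0.0, []
    for i in indices:
        pred += codebook[i]
        out.append(pred)
    return out

codebook = [-1.0, -0.5, 0.0, 0.5, 1.0]   # illustrative 5-level error codebook
indices, recon = dpcm_encode([1.0, 1.5, 2.5, 2.0], codebook)
```

Because the encoder predicts from its own reconstruction rather than the original samples, the decoder tracks it exactly and quantization errors never accumulate.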
