• Title/Summary/Keyword: Statistical feature

An approach of evaluation and mechanism study on the high and steep rock slope in water conservancy project

  • Yang, Meng;Su, Huaizhi;Wen, Zhiping
    • Computers and Concrete
    • /
    • v.19 no.5
    • /
    • pp.527-535
    • /
    • 2017
  • In this study, an aging deformation statistical model was proposed for a unique high and steep rock slope so that the aging characteristic of the slope deformation could be better reflected. The slope displacement was affected by multiple environmental factors at multiple scales and showed the same tendency as the rising water level. A statistical model of the high and steep rock slope including non-aging factors was set up based on previous analyses and on the study of the deformation and residual tendencies, and the role and importance of the water level factor as a non-aging component were analyzed. A partitioned statistical model and a mutation model were established for the comprehensive cumulative displacement velocity using monitoring data under multiple factors and parameters. A spatial model was also developed to describe and predict the overall and sectional deformation behavior by combining aging, deformation, and spatial coordinates. A neural network model was built to fit and predict the deformation with high precision by capturing its complexity and randomness. A three-dimensional finite element model of the slope was used to investigate its structural behavior through numerical simulation, and a further three-dimensional finite element model of the slope and dam was developed to analyze the overall deformation state. This study is expected to provide a powerful and systematic method for analyzing very high, important, and dangerous slopes.
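As a rough illustration of what an aging deformation statistical model of this kind can look like, the sketch below fits a conventional hydrostatic-seasonal-time style regression (water-level terms as the non-aging component, logarithmic and linear time terms as the aging component) by least squares. The model form, the synthetic monitoring series, and all coefficients are assumptions for illustration, not the authors' actual model.

```python
# Minimal sketch of an aging-deformation statistical model with hydrostatic
# (non-aging), seasonal, and time/aging terms, fitted by ordinary least squares.
# The synthetic data and coefficients are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(1, 1001)                                   # monitoring days
h = 100 + 10 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 1, t.size)  # water level

# Design matrix: water-level (non-aging) terms, seasonal terms, aging terms.
X = np.column_stack([
    h, h**2, h**3,                                       # hydrostatic component
    np.sin(2 * np.pi * t / 365), np.cos(2 * np.pi * t / 365),  # seasonal component
    np.log(t), t,                                        # aging (time-effect) component
    np.ones_like(t, dtype=float),                        # intercept
])
true_coef = np.array([0.05, 1e-4, -1e-7, 0.8, -0.3, 0.6, 1e-3, 2.0])
displacement = X @ true_coef + rng.normal(0, 0.2, t.size)

coef, *_ = np.linalg.lstsq(X, displacement, rcond=None)
fitted = X @ coef
print("residual std:", np.std(displacement - fitted))
```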

Three-dimensional Distortion-tolerant Object Recognition using Computational Integral Imaging and Statistical Pattern Analysis (집적 영상의 복원과 통계적 패턴분석을 이용한 왜곡에 강인한 3차원 물체 인식)

  • Yeom, Seok-Won;Lee, Dong-Su;Son, Jung-Young;Kim, Shin-Hwan
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.34 no.10B
    • /
    • pp.1111-1116
    • /
    • 2009
  • In this paper, we discuss distortion-tolerant pattern recognition using computational integral imaging reconstruction. Three-dimensional object information is captured by the integral imaging pick-up process. The captured information is numerically reconstructed at arbitrary depth levels by averaging the corresponding pixels. We apply Fisher linear discriminant analysis combined with principal component analysis to the computationally reconstructed images for distortion-tolerant recognition. Fisher linear discriminant analysis maximizes the discrimination capability between classes, and principal component analysis reduces the dimensionality with the minimum mean squared error between the original and restored images. The presented methods provide promising results for the classification of out-of-plane rotated objects.
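The PCA-plus-Fisher-LDA classification step described above can be sketched roughly as below with scikit-learn; the random arrays stand in for the computationally reconstructed depth-plane images, and the image size, class count, and number of components are illustrative assumptions.

```python
# Minimal sketch of PCA followed by Fisher LDA on flattened reconstructed images.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
n_per_class, n_classes, img_size = 40, 3, 32 * 32
# Flattened "reconstructed images", one cluster per object class.
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, img_size))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

# PCA reduces dimensionality with minimum MSE; LDA maximizes class separability.
clf = make_pipeline(PCA(n_components=20), LinearDiscriminantAnalysis())
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```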

Image Registration Based On Statistical Descriptors In Frequency Domain

  • Chang, Min-hyuk;Ahmad, Muhammad-Bilal;Lee, Cheul-hee;Chun, Jong-hoon;Park, Seung-jin;Park, Jong-an
    • Proceedings of the IEEK Conference
    • /
    • 2002.07c
    • /
    • pp.1531-1534
    • /
    • 2002
  • Shape description and its corresponding matching algorithm are among the main concerns in MPEG-7. In this paper, a new method is proposed for shape registration of 2D objects for MPEG-7. Shapes are recognized using the Hu statistical moments in the frequency domain. The Hu moments are moment-based descriptors of planar shapes that are invariant under general translation, rotation, scaling, and reflection transformations. The image is transformed into the frequency domain using the Fourier transform, and annular and radial wedge distributions of the power spectrum are extracted. Statistical features (Hu moments) are computed for the power spectrum of each selected feature. The Euclidean distances between the extracted moment descriptors and those of the shapes in the database are then computed, and the shape with the minimum Euclidean distance is taken as the matched shape. Simulation results are reported on the MPEG-7 test shapes.
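A rough sketch of the matching idea (Hu moments of the Fourier power spectrum compared by minimum Euclidean distance) is given below; the synthetic shape masks, the log-spectrum normalisation, and the use of OpenCV's HuMoments are illustrative choices, not the paper's exact pipeline.

```python
# Rough sketch: Hu moments of the Fourier power spectrum as shape descriptors,
# matched by minimum Euclidean distance against a small shape database.
import numpy as np
import cv2

def spectrum_hu(img):
    """Hu moment descriptor of the log power spectrum of a shape image."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img.astype(np.float64)))) ** 2
    log_spec = np.log1p(spectrum).astype(np.float32)
    return cv2.HuMoments(cv2.moments(log_spec)).ravel()

# Tiny synthetic shape database (binary masks standing in for MPEG-7 test shapes).
yy, xx = np.mgrid[:64, :64]
database = {
    "circle": ((xx - 32) ** 2 + (yy - 32) ** 2 < 20 ** 2).astype(np.float32),
    "square": ((np.abs(xx - 32) < 18) & (np.abs(yy - 32) < 18)).astype(np.float32),
    "bar": (np.abs(yy - 32) < 6).astype(np.float32),
}
db_desc = {name: spectrum_hu(img) for name, img in database.items()}

# A translated square: the power spectrum magnitude (and hence the descriptor)
# is insensitive to the shift, so the minimum-distance match is still "square".
query = np.roll(database["square"], (5, -7), axis=(0, 1))
q = spectrum_hu(query)
best = min(db_desc, key=lambda name: float(np.linalg.norm(db_desc[name] - q)))
print("matched shape:", best)
```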

Recognition of Numeric Characters in License Plate based on Independent Component Analysis (독립성분 분석을 이용한 번호판 숫자 인식)

  • Jeong, Byeong-Jun;Kang, Hyun-Chul
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.46 no.2
    • /
    • pp.99-107
    • /
    • 2009
  • This paper presents an enhanced hybrid model based on Independent Component Analysis (ICA) for extracting features of numeric characters in license plates. ICA, which captures only high-dimensional statistical features, does not consider low-dimensional statistical features or the correlation between numeric characters. To overcome these drawbacks, we propose an improved ICA hybrid model that also uses Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). Experimental results show that the proposed model outperforms ICA alone, as well as other hybrid models, in both feature extraction and recognition.
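One way such a PCA/ICA/LDA hybrid could be assembled is sketched below with scikit-learn; the synthetic numeral data, the chosen dimensionalities, and the simple concatenation of the ICA and LDA projections are assumptions for illustration rather than the paper's exact model.

```python
# Minimal sketch of a PCA/ICA/LDA hybrid feature extractor for numeral images,
# followed by a nearest-class-mean classifier on the combined features.
import numpy as np
from sklearn.decomposition import PCA, FastICA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import NearestCentroid

rng = np.random.default_rng(3)
n_per_digit, img_size = 30, 28 * 28
# Flattened "numeral images", one cluster per digit class 0..9.
X = np.vstack([rng.normal(loc=d * 0.1, scale=1.0, size=(n_per_digit, img_size))
               for d in range(10)])
y = np.repeat(np.arange(10), n_per_digit)

pca = PCA(n_components=40).fit(X)
Xp = pca.transform(X)
ica = FastICA(n_components=20, random_state=0).fit(Xp)          # ICA on PCA-reduced data
lda = LinearDiscriminantAnalysis(n_components=9).fit(Xp, y)     # class-discriminant projection

# Concatenate the complementary feature sets into one hybrid descriptor.
features = np.hstack([ica.transform(Xp), lda.transform(Xp)])
clf = NearestCentroid().fit(features, y)
print("training accuracy:", clf.score(features, y))
```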

Classification-Based Approach for Hybridizing Statistical and Rule-Based Machine Translation

  • Park, Eun-Jin;Kwon, Oh-Woog;Kim, Kangil;Kim, Young-Kil
    • ETRI Journal
    • /
    • v.37 no.3
    • /
    • pp.541-550
    • /
    • 2015
  • In this paper, we propose a classification-based approach for hybridizing statistical machine translation and rule-based machine translation. Both the training dataset used in the learning of our proposed classifier and our feature extraction method affect the hybridization quality. To create one such training dataset, a previous approach used auto-evaluation metrics to determine, from a set of component machine translation (MT) systems, which gave the more accurate translation (by a comparative method). The most accurate translation was then labelled so as to indicate the MT system from which it came. In this previous approach, when the metric evaluation scores were low, there was a high level of uncertainty as to which of the component MT systems actually produced the better translation. To relax such uncertainty or error in classification, we propose an alternative labeling approach, namely a cut-off method. In our experiments, using this cut-off method in our proposed classifier, we achieved a translation accuracy of 81.5%, a 5.0% improvement over existing methods.
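The cut-off labelling idea can be sketched roughly as follows: training sentences are labelled with the winning engine only when the metric score gap exceeds a threshold, and near-ties are discarded before the selection classifier is trained. The scores, features, threshold value, and classifier below are all illustrative placeholders, not the paper's actual setup.

```python
# Minimal sketch of cut-off labelling for an SMT/RBMT selection classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 500
smt_scores = rng.uniform(0, 1, n)          # e.g. sentence-level metric score of the SMT output
rbmt_scores = rng.uniform(0, 1, n)         # same metric for the RBMT output
features = rng.normal(size=(n, 10))        # source-sentence features (length, POS counts, ...)

cutoff = 0.15                              # assumed cut-off on the score gap
gap = smt_scores - rbmt_scores
keep = np.abs(gap) >= cutoff               # discard uncertain (near-tie) training examples
labels = (gap[keep] > 0).astype(int)       # 1 = SMT wins, 0 = RBMT wins

selector = LogisticRegression(max_iter=1000).fit(features[keep], labels)
print("kept", int(keep.sum()), "of", n, "training sentences")
```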

Hybrid Fuzzy Least Squares Support Vector Machine Regression for Crisp Input and Fuzzy Output

  • Shim, Joo-Yong;Seok, Kyung-Ha;Hwang, Chang-Ha
    • Communications for Statistical Applications and Methods
    • /
    • v.17 no.2
    • /
    • pp.141-151
    • /
    • 2010
  • Hybrid fuzzy regression analysis is used to integrate randomness and fuzziness into a regression model. The least squares support vector machine (LS-SVM) has been very successful in pattern recognition and function estimation problems for crisp data. This paper proposes a new method to evaluate hybrid fuzzy linear and nonlinear regression models with crisp inputs and fuzzy output using weighted fuzzy arithmetic (WFA) and LS-SVM. LS-SVM allows us to perform fuzzy nonlinear regression analysis by constructing a fuzzy linear regression function in a high-dimensional feature space. The proposed method is not computationally expensive since its solution is obtained from a simple linear equation system. In particular, this method is a very attractive approach to modeling nonlinear data, and it is nonparametric in the sense that we do not have to assume an underlying model function for the fuzzy nonlinear regression model with crisp inputs and fuzzy output. Experimental results are then presented which indicate the performance of this method.
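The "simple linear equation system" character of LS-SVM regression can be sketched as below for crisp inputs and a crisp target (the fuzzy-output weighting of the paper is omitted); the RBF kernel, regularisation value, and toy data are illustrative assumptions.

```python
# Minimal sketch of LS-SVM regression: the dual solution (b, alpha) comes from
# one linear system rather than a quadratic programme.
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

rng = np.random.default_rng(5)
X = rng.uniform(-3, 3, (60, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, 60)

gamma = 10.0                               # regularisation parameter
K = rbf_kernel(X, X)
n = X.shape[0]
# Linear system:  [0   1^T      ] [b    ]   [0]
#                 [1   K + I/gam] [alpha] = [y]
A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
              [np.ones((n, 1)), K + np.eye(n) / gamma]])
sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
b, alpha = sol[0], sol[1:]

X_test = np.linspace(-3, 3, 5).reshape(-1, 1)
y_pred = rbf_kernel(X_test, X) @ alpha + b
print(np.round(y_pred, 3))
```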

The Efficient Feature Extraction of Handwritten Numerals in GLVQ Clustering Network (GLVQ클러스터링을 위한 필기체 숫자의 효율적인 특징 추출 방법)

  • Jeon, Jong-Won;Min, Jun-Yeong
    • The Transactions of the Korea Information Processing Society
    • /
    • v.2 no.6
    • /
    • pp.995-1001
    • /
    • 1995
  • A typical pattern recognition system consists of pre-processing, feature extraction, and classification or recognition. In classification, when widely varying patterns exist in the same category, clustering is needed to group similar patterns. There are two approaches to clustering: statistical approaches such as k-means and the ISODATA algorithm, and neural network approaches such as T. Kohonen's LVQ (Learning Vector Quantization). Nikhil R. Pal et al. proposed GLVQ (Generalized LVQ, 1993). This paper suggests efficient feature extraction methods for handwritten numerals in a GLVQ clustering network. We use handwritten numeral data from 21 writers (i.e., 200 patterns) and compare the proportion of misclassified patterns for each feature extraction method. With the projection combination method, the classification ratio reaches 98.5%.
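A minimal sketch of a projection-based feature, of the kind that could feed such a clustering network, is given below; the 16x16 bitmap and the particular horizontal/vertical projection combination are illustrative assumptions, not the paper's exact feature set.

```python
# Minimal sketch of projection-combination features for a binary numeral image:
# row and column projection profiles concatenated into one feature vector.
import numpy as np

def projection_features(binary_img):
    """Concatenate normalised horizontal and vertical projection profiles."""
    rows = binary_img.sum(axis=1).astype(float)
    cols = binary_img.sum(axis=0).astype(float)
    total = float(binary_img.sum()) or 1.0
    return np.concatenate([rows, cols]) / total

rng = np.random.default_rng(6)
digit = (rng.random((16, 16)) > 0.7).astype(np.uint8)   # stand-in for a 16x16 numeral bitmap
print(projection_features(digit).shape)                  # -> (32,)
```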

Retrieval System Adopting Statistical Feature of MPEG Video (MPEG 비디오의 통계적 특성을 이용한 검색 시스템)

  • Yu, Young-Dal;Kang, Dae-Seong;Kim, Dai-Jin
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.38 no.5
    • /
    • pp.58-64
    • /
    • 2001
  • Much information is now transmitted and stored as video data, and its volume is increasing rapidly with the popularization of high-performance computers and the internet. In this paper, to retrieve video data, shots are found by analyzing the video stream and a method for detecting key frames is studied, so that users can retrieve video efficiently. The paper suggests a new feature that is robust to object movement within a shot and insensitive to color changes in shot boundary detection, and proposes a characterizing value that reflects the kind of video (movie, drama, news, music video, etc.). Key frames are selected from the many frames by using the local minima and maxima of the differential of this value. After the original frames (rather than DC images) are reconstructed for the key frames, indexing is performed by computing parameters, and key frames similar to the user's query image are retrieved using the same parameters. Experiments show that the proposed methods outperform the conventional method, with a high retrieval accuracy rate.
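The key-frame selection step (local minima and maxima of the differential of the characterizing value) can be sketched roughly as below; the synthetic per-frame signal and the extrema-detection window (order=5) are illustrative stand-ins for the feature actually computed from the MPEG stream.

```python
# Minimal sketch of key-frame selection: frames at local minima/maxima of the
# derivative of a per-frame feature signal are kept as key-frame candidates.
import numpy as np
from scipy.signal import argrelextrema

rng = np.random.default_rng(7)
feature_signal = np.cumsum(rng.normal(0, 1, 300))   # stand-in for the per-frame characterizing value
derivative = np.gradient(feature_signal)

minima = argrelextrema(derivative, np.less, order=5)[0]
maxima = argrelextrema(derivative, np.greater, order=5)[0]
key_frames = np.sort(np.concatenate([minima, maxima]))
print("candidate key frames:", key_frames[:10])
```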

Iris Recognition Using Zernike Moment and Wavelet (Zernike 모멘트와 Wavelet을 이용한 홍채인식)

  • Choi, Chang-Soo;Park, Jong-Cheon;Jun, Byoung-Min
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.11 no.11
    • /
    • pp.4568-4575
    • /
    • 2010
  • Iris recognition is a biometric technology that uses iris pattern information, which offers stability, security, and other advantages, making it especially appropriate in situations requiring high security. Iris information is now used in a variety of access control and information security applications. When extracting iris features, it is desirable to obtain features that are invariant to size, illumination, and rotation. Size and illumination are easily handled by pre-processing, but extracting iris features invariant to rotation remains a problem. In this paper, to improve the recognition rate without the speed penalty of rotation compensation, an iris recognition method using Zernike moments and the Daubechies wavelet is proposed. In the first step, the proposed method groups rotated irises by the statistical, rotation-invariant features of the Zernike moments, which shortens the processing time of iris recognition while matching the recognition performance of an established method. The proposed method therefore demonstrates the possibility of effective application to large-scale iris recognition systems.
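The two-stage feature idea (rotation-invariant Zernike moments for coarse grouping, a Daubechies wavelet decomposition for the detailed feature) might be sketched as below, assuming the mahotas and PyWavelets packages; the synthetic 64x64 array stands in for a normalised iris region, and the radius, degree, and wavelet level are illustrative choices.

```python
# Minimal sketch: Zernike moments (rotation-invariant magnitudes) for coarse
# grouping, then Daubechies wavelet coefficients as the finer matching feature.
import numpy as np
import mahotas
import pywt

rng = np.random.default_rng(8)
iris = rng.random((64, 64))                          # stand-in for a normalised iris region

# Stage 1: Zernike moment magnitudes for rotation-invariant grouping.
zernike = mahotas.features.zernike_moments(iris, radius=32, degree=8)

# Stage 2: Daubechies (db4) wavelet decomposition as the detailed feature.
approx, *details = pywt.wavedec2(iris, "db4", level=2)
wavelet_feature = approx.ravel()

print(len(zernike), wavelet_feature.shape)
```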

Feature selection and prediction modeling of drug responsiveness in Pharmacogenomics (약물유전체학에서 약물반응 예측모형과 변수선택 방법)

  • Kim, Kyuhwan;Kim, Wonkuk
    • The Korean Journal of Applied Statistics
    • /
    • v.34 no.2
    • /
    • pp.153-166
    • /
    • 2021
  • A main goal of pharmacogenomics studies is to predict an individual's drug responsiveness based on high-dimensional genetic variables. Because of the large number of variables, feature selection is required to reduce their number, and the selected features are then used to construct a predictive model with machine learning algorithms. In the present study, we applied several hybrid feature selection methods, such as combinations of logistic regression, ReliefF, TuRF, random forest, and LASSO, to a next-generation sequencing data set of 400 epilepsy patients. We then applied the selected features to machine learning methods including random forest, gradient boosting, and support vector machines, as well as a stacking ensemble method. Our results showed that the stacking model with a hybrid feature selection of random forest and ReliefF performs better than the other combinations of approaches. Based on a 5-fold cross-validation partition, the best model achieved a mean test accuracy of 0.727 and a mean test AUC of 0.761. The stacking models also outperformed single machine learning predictive models when using the same selected features.
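A rough sketch of the overall pipeline shape (feature selection followed by a stacking ensemble of random forest, gradient boosting, and SVM) is given below with scikit-learn; synthetic data replace the sequencing set, and importance-based selection stands in for the paper's ReliefF step (available, for example, in the skrebate package).

```python
# Minimal sketch: feature selection + stacking ensemble, evaluated by 5-fold CV.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier, RandomForestClassifier,
                              StackingClassifier)
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Synthetic stand-in for the high-dimensional genetic data (400 patients).
X, y = make_classification(n_samples=400, n_features=500, n_informative=20,
                           random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                ("gb", GradientBoostingClassifier(random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000))

model = make_pipeline(
    # Keep the 50 features ranked highest by random-forest importance.
    SelectFromModel(RandomForestClassifier(n_estimators=100, random_state=0),
                    threshold=-np.inf, max_features=50),
    stack)
print("5-fold CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
```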