• Title/Summary/Keyword: probability transformation method


A Study On the Simulation Model of the Transformation of Random Variables Using FBI (Fortran Based Interpreter) (FBI(Fortran Based Interpreter)를 이용한 확률변수 변환의 시뮬레이션 모델에 관한 연구)

  • Kim, Won-Gyeong
    • Journal of Korean Institute of Industrial Engineers / v.13 no.2 / pp.105-115 / 1987
  • Although there are many theoretical methods for the transformation of random variables, it is often difficult to find the probability density functions of the new random variables because of mathematical complexity. The author developed a simulation model that resolves these difficulties using FBI (Fortran Based Interpreter) routines. The FBI is a kind of language interpreter that analyzes arithmetic statements in character-data form. In this paper, the FBI routines are explained, and the structure and applications of the simulation model are demonstrated. A polynomial curve-fitting method is applied to define probability density functions that cannot be described by a well-known pdf. The program can also be used for teaching mathematical statistics and for identifying the distribution of simulated data.

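The transformation-by-simulation idea in the abstract (without the Fortran interpreter itself) can be sketched in a few lines: sample the source variable, apply the transformation, and approximate the new pdf with a normalized histogram, which the paper then summarizes by polynomial curve fitting. A minimal sketch; all names and the example transformation are illustrative.

```python
import random

def simulate_transform(g, sampler, n=100_000, bins=30):
    """Estimate the pdf of Y = g(X) by simulation: sample X, apply g,
    then approximate the density of Y with a normalized histogram."""
    ys = [g(sampler()) for _ in range(n)]
    lo, hi = min(ys), max(ys)
    width = (hi - lo) / bins
    counts = [0] * bins
    for y in ys:
        idx = min(int((y - lo) / width), bins - 1)  # clamp y == hi into last bin
        counts[idx] += 1
    centers = [lo + (i + 0.5) * width for i in range(bins)]
    density = [c / (n * width) for c in counts]     # normalize so it integrates to 1
    return centers, density

random.seed(1)
# Example: Y = X^2 with X ~ N(0, 1) is chi-square with 1 degree of freedom.
centers, density = simulate_transform(lambda x: x * x,
                                      lambda: random.gauss(0.0, 1.0))
```

A curve (e.g. a polynomial in the paper's approach) can then be fitted to the `(centers, density)` pairs to obtain a smooth pdf estimate.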

Multi-time probability density functions of the dynamic non-Gaussian response of structures

  • Falsone, Giovanni;Laudani, Rossella
    • Structural Engineering and Mechanics / v.76 no.5 / pp.631-641 / 2020
  • In the present work, an approach for the multiple-time probabilistic characterization of the response of linear structural systems subjected to random non-Gaussian processes is presented. Its fundamental property is working directly on the multiple-time probability density functions of the actions and of the response. This avoids passing through the evaluation of the response statistical moments or correlations at multiple times, reducing the computational effort considerably. The approach is the extension to the multiple-time case of a previously published dynamic Probability Transformation Method (PTM) working on a single evolution of the response statistics. Application to some simple examples has shown the efficiency of the method, both in terms of computational effort and in terms of accuracy.
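At its single-time core, a Probability Transformation Method rests on the change-of-variables rule for a monotone response map, p_Y(y) = p_X(g⁻¹(y)) |dg⁻¹/dy|. A minimal scalar sketch of that rule (not the paper's multi-time, non-Gaussian extension):

```python
import math

def transform_pdf(pdf_x, g_inv, g_inv_prime):
    """Change-of-variables rule for a monotone map Y = g(X):
    p_Y(y) = p_X(g_inv(y)) * |g_inv'(y)|."""
    return lambda y: pdf_x(g_inv(y)) * abs(g_inv_prime(y))

# Example: X ~ N(0, 1) and Y = 2X + 1, so Y ~ N(1, 4).
pdf_x = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
pdf_y = transform_pdf(pdf_x, lambda y: (y - 1) / 2, lambda y: 0.5)
```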

Voice Personality Transformation Using a Probabilistic Method (확률적 방법을 이용한 음성 개성 변환)

  • Lee Ki-Seung
    • The Journal of the Acoustical Society of Korea / v.24 no.3 / pp.150-159 / 2005
  • This paper addresses a voice personality transformation algorithm which makes one person's voice sound like another person's. In the proposed method, a speaker's voice is represented by LPC cepstrum, pitch period, and speaking rate, and appropriate transformation rules are constructed for each parameter. A Gaussian Mixture Model (GMM) is used to model one speaker's LPC cepstra, and a conditional probability is used to model the relationship between the two speakers' LPC cepstra. To obtain the parameters of each probabilistic model, Maximum Likelihood (ML) estimation is employed. The transformed LPC cepstra are obtained using a Minimum Mean Square Error (MMSE) criterion. Pitch period and speaking rate are used as the parameters for prosody transformation, which is implemented using the ratio of their average values. The proposed method shows performance superior to the previous VQ-based method in objective measures, including average cepstrum distance reduction ratio and likelihood increase ratio. In subjective tests, we obtained almost the same correct identification ratio as the previous method, and we also confirmed that high-quality transformed speech is obtained, owing to the smoothly evolving spectral contours over time.

Prediction of the Probability of Job Loss due to Digitalization and Comparison by Industry: Using Machine Learning Methods

  • Park, Heedae;Lee, Kiyoul
    • Journal of Korea Trade / v.25 no.5 / pp.110-128 / 2021
  • Purpose - The essential purpose of this study is to analyze the possibility that individual jobs will be substituted as a result of the technological development represented by the 4th Industrial Revolution, considering the differing effects of digital transformation on the labor market. Design/methodology - To estimate the substitution probability, this study used two data sets: the job characteristics data for individual occupations provided by KEIS, and the information on the substitution status of occupations provided by Frey and Osborne (2013). In total, 665 occupations were considered; of these, 80 had data labeled with substitution status. The primary goal of estimation was to predict the degree of substitution for 607 of the 665 occupations (excluding 58 with markers). Three methods were utilized: principal component analysis, an unsupervised machine learning method; and Ridge and Lasso, from supervised learning. After extracting significant variables based on the three methods, this study carried out logistic regression to estimate the probability of substitution for each occupation. Findings - The probability of substitution for the occupational groups did not vary significantly across the individual models, and the rank order of the probabilities across occupational groups was similar across models. The mean substitution probability over the three methods was 45.3%. The highest value was obtained using the PCA method, and the lowest was derived from the LASSO method. The average substitution probability of the trading industry was 45.1%, very similar to the overall average. Originality/value - This study is significant in that it estimates the job substitution probability using various machine learning methods and compares the estimates by industry sector; in addition, it compares the trade business with other industry sectors.
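The final estimation step, logistic regression on the labeled occupations, can be sketched minimally; the single hypothetical feature and plain gradient descent below stand in for the paper's KEIS job-characteristics variables and its PCA/Ridge/Lasso variable selection:

```python
import math

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Logistic regression fit by plain stochastic gradient descent;
    returns a function mapping a feature vector to a substitution
    probability in (0, 1)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return lambda x: 1.0 / (1.0 + math.exp(-(b + sum(wj * xj
                                                     for wj, xj in zip(w, x)))))

# Toy labeled occupations: one feature (routine-task intensity),
# label 1 = substitutable.  Values are hypothetical.
X = [[0.1], [0.2], [0.8], [0.9]]
y = [0, 0, 1, 1]
predict = train_logistic(X, y)
```

The trained `predict` can then be applied to the unlabeled occupations to rank them by substitution probability.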

Balanced Simultaneous Confidence Intervals in Logistic Regression Models

  • Lee, Kee-Won
    • Journal of the Korean Statistical Society / v.21 no.2 / pp.139-151 / 1992
  • Simultaneous confidence intervals for the parameters in logistic regression models with random regressors are considered. A method based on the bootstrap and its stochastic approximation is developed. A key idea in using the bootstrap to construct simultaneous confidence intervals is the concept of prepivoting, which transforms a root by its estimated cumulative distribution function. Repeated use of prepivoting makes the overall coverage probability asymptotically correct and the coverage probabilities of the individual confidence statements asymptotically equal. The method is compared, through Monte Carlo simulation, with ordinary asymptotic methods based on Scheffé's and Bonferroni's procedures.

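The root-plus-estimated-CDF construction that prepivoting builds on can be sketched for a simple mean; the paper applies it to logistic regression parameters and iterates the CDF transformation, whereas this sketch stops at one percentile-type step:

```python
import random
from statistics import mean

def bootstrap_root_interval(data, level=0.95, B=2000, seed=0):
    """Bootstrap interval for the mean via the estimated distribution of
    the root R = mean(resample) - mean(data).  The empirical CDF of R is
    what prepivoting would transform the root by; here it is used once,
    to read off quantiles."""
    rng = random.Random(seed)
    m = mean(data)
    roots = sorted(mean(rng.choices(data, k=len(data))) - m
                   for _ in range(B))
    alpha = (1 - level) / 2
    lo_root = roots[int(alpha * B)]
    hi_root = roots[int((1 - alpha) * B) - 1]
    return m - hi_root, m - lo_root     # basic bootstrap interval
```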

Transformation in Kernel Density Estimation (변환(變換)을 이용(利用)한 커널함수추정법(函數推定法))

  • Seog, Kyung-Ha
    • Journal of the Korean Data and Information Science Society / v.3 no.1 / pp.17-24 / 1992
  • The problem of estimating a symmetric probability density with high kurtosis is considered. Such densities are often estimated poorly by a global-bandwidth kernel estimator, since good estimation of the peak of the distribution leads to unsatisfactory estimation of the tails, and vice versa. In this paper, we propose applying a transformation before using a global-bandwidth kernel estimator. The performance of the density estimator based on the proposed transformation is investigated through a simulation study. Our method offers a substantial improvement for densities with high kurtosis; however, its performance is slightly worse than that of the ordinary kernel estimator when the kurtosis is not high.

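The transform-then-smooth idea can be sketched as: estimate the density on a transformed scale with an ordinary global-bandwidth kernel estimator, then map back by the change-of-variables factor. The paper's specific transformation family is not reproduced here; `t` is any smooth monotone map supplied by the caller.

```python
import math

def kde(sample, x, h):
    """Gaussian-kernel density estimate with global bandwidth h."""
    const = 1.0 / (len(sample) * h * math.sqrt(2 * math.pi))
    return const * sum(math.exp(-0.5 * ((x - xi) / h) ** 2)
                       for xi in sample)

def transformed_kde(sample, x, h, t, t_prime):
    """Estimate the density on the scale y = t(x), then map back:
    f_X(x) = f_Y(t(x)) * |t'(x)|."""
    y_sample = [t(xi) for xi in sample]
    return kde(y_sample, t(x), h) * abs(t_prime(x))
```

With `t` chosen to spread out a sharp peak, a single global bandwidth can serve both the peak and the tails.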

WAVELET-BASED FOREST AREAS CLASSIFICATION BY USING HIGH RESOLUTION IMAGERY

  • Yoon Bo-Yeol;Kim Choen
    • Proceedings of the KSRS Conference / 2005.10a / pp.698-701 / 2005
  • This paper examines how certain information can be extracted from forest areas in high-resolution imagery using wavelet transformation. First, study areas are selected as spots where one or more species are distributed, with reference to the forest type map. Next, each study area is cropped to 256 x 256 pixels to avoid the image-processing problems of large data volumes. Prior to the wavelet transformation, five texture parameters (contrast, dissimilarity, entropy, homogeneity, and Angular Second Moment (ASM)) are calculated using the Gray Level Co-occurrence Matrix (GLCM). The five texture images are computed with a 3x3 shifting window, a distance of 1 pixel, and an angle of 45 degrees. The Daubechies-4 wavelet basis function is selected. The results are summarized in three points. First, wavelet-transformed images derived from the contrast and dissimilarity texture parameters are effective for detecting edge elements and can probably be used for forest road detection. Second, wavelet-fusion images derived from the texture parameters and the original image can be applied to forest area classification, because of the clustering in homogeneous forest type structures. Third, for grading evaluation of forest fire damaged areas, high-accuracy results can be obtained if the established classification method, the GLCM texture extraction concept, and the wavelet transformation technique are effectively applied together to forest areas (and other areas) with data fusion.

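The GLCM texture parameters mentioned above can be computed directly; a small sketch for two of them (contrast and ASM) at the distance-1, 45-degree offset used in the paper, on a small integer-valued image (the window sliding and wavelet steps are omitted):

```python
def glcm_features(img, levels, dx=1, dy=1):
    """Gray Level Co-occurrence Matrix at pixel offset (dx, dy)
    (dx = dy = 1 gives the distance-1, 45-degree neighbour), plus two
    of the paper's texture parameters: contrast and Angular Second
    Moment (ASM)."""
    glcm = [[0] * levels for _ in range(levels)]
    rows, cols = len(img), len(img[0])
    for r in range(rows - dy):
        for c in range(cols - dx):
            glcm[img[r][c]][img[r + dy][c + dx]] += 1
    total = sum(sum(row) for row in glcm) or 1
    p = [[v / total for v in row] for row in glcm]   # normalize to probabilities
    contrast = sum(p[i][j] * (i - j) ** 2
                   for i in range(levels) for j in range(levels))
    asm = sum(v * v for row in p for v in row)
    return contrast, asm
```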

Correspondence Matching of Stereo Images by Sampling of Planar Region in the Scene Based on RANSAC (RANSAC에 기초한 화면내 평면 영역 샘플링에 의한 스테레오 화상의 대응 매칭)

  • Jung, Nam-Chae
    • Journal of the Institute of Convergence Signal Processing / v.12 no.4 / pp.242-249 / 2011
  • In this paper, a correspondence matching method for stereo images is proposed based on sampling the projective transformation matrices of planar regions in the scene. Although the method is based on RANSAC, it does not use the uniform distribution of ordinary random sampling in RANSAC; instead, it uses multiple non-uniform distributions computed from differences in the positions of image feature points or from template matching. Existing methods sample correspondences presumed to be correct using conditions that correct correspondences almost always satisfy, and apply RANSAC after matching correspondences one to one; in the proposed method, by sampling in stages from the multiple probability distributions computed for the image, correct correspondences of high probability can be sampled effectively from among many correspondence candidates. As a result, many correct correspondences were obtained, and the effectiveness of the proposed method was verified in simulations and in experiments on real images.
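The core idea, biasing RANSAC draws by a matching-quality probability instead of drawing uniformly, can be illustrated on a simpler model than a projective transformation: a 2-D line, which needs 2-point samples where a homography would need 4-point samples but follows the same pattern. A sketch with hypothetical data:

```python
import random

def ransac_line(points, weights, iters=200, thresh=0.2, seed=0):
    """RANSAC for a 2-D line y = a*x + b, drawing candidate point pairs
    from a non-uniform distribution (weights) as a stand-in for the
    paper's matching-quality-weighted sampling.  Returns the best (a, b)
    and its inlier count."""
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.choices(points, weights=weights, k=2)
        if x1 == x2:               # degenerate sample (vertical or repeated)
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = sum(abs(y - (a * x + b)) < thresh for x, y in points)
        if inliers > best_inliers:
            best, best_inliers = (a, b), inliers
    return best, best_inliers
```

Replacing the uniform `weights` with per-correspondence quality scores concentrates the draws on likely-correct matches, which is the effect the paper exploits.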

Function Embedding and Projective Measurement of Quantum Gate by Probability Amplitude Switch (확률진폭 스위치에 의한 양자게이트의 함수 임베딩과 투사측정)

  • Park, Dong-Young
    • The Journal of the Korea institute of electronic communication sciences / v.12 no.6 / pp.1027-1034 / 2017
  • In this paper, we propose a new function embedding method that can measure the mathematical projections of the probability amplitude, probability, average expectation, and matrix elements of the stationary-state unit matrix at every control operation point of a quantum gate. The function embedding method embeds the orthogonal normalization condition of the probability amplitude at each control operating point into a binary scalar operator, using the Dirac symbol and the Kronecker delta symbol. Such a function embedding method is a very effective means of controlling the arithmetic power function of a unitary gate in a unitary transformation that expresses the quantum gate function as a tensor product of single quanta. We present the results of the evolution operation and projective measurement obtained when the proposed function embedding method is applied to the ternary 2-qutrit cNOT gate, and compare them with existing methods.
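For concreteness, a ternary cNOT can be written as a 9x9 permutation (hence unitary) matrix; the convention |a, b⟩ → |a, (a + b) mod 3⟩ used below is one common generalization of cNOT to qutrits and may differ from the paper's gate definition:

```python
def qutrit_cnot():
    """9x9 matrix of a ternary cNOT acting as |a, b> -> |a, (a+b) mod 3>,
    with basis states indexed by 3*a + b (a = control, b = target)."""
    m = [[0] * 9 for _ in range(9)]
    for a in range(3):
        for b in range(3):
            m[3 * a + ((a + b) % 3)][3 * a + b] = 1
    return m

def is_unitary_permutation(m):
    """A real 0/1 permutation matrix is unitary iff every row and every
    column contains exactly one 1."""
    n = len(m)
    return (all(sum(row) == 1 for row in m)
            and all(sum(m[i][j] for i in range(n)) == 1 for j in range(n)))
```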

A Three Dimensional Study on the Probability of Slope Failure(II) (사면(斜面)의 삼차원(三次元) 파괴확률(破壞確率)에 관(關)한 연구(硏究)(II))

  • Kim, Young-Su;Tcha, Hong-Jun;Jung, Sung-Kwan
    • Journal of Industrial Technology / v.3 / pp.53-63 / 1983
  • The probability of failure is used to analyze the reliability of three-dimensional slope failure, instead of the conventional factor of safety. The strength parameters are assumed to be normal and beta variates, interval-estimated under a specified confidence level using maximum likelihood estimation. Pseudo-normal and beta random variables are generated using the uniform probability transformation method, according to the central limit theorem and the rejection method. By means of a Monte Carlo simulation, the probability of failure is defined as Pf = M/N, where N is the total number of trials and M is the total number of failures. Some of the conclusions derived from the case study include: 1. Three-dimensional factors of safety are generally much higher than 2-D factors of safety; however, situations appear to exist where the 3-D factor of safety can be lower than the 2-D factor of safety. 2. The F3/F2 ratio appears to be quite sensitive to c and ${\phi}$ and to the shape of the 3-D shear surface and the slope, but not to the unit weight of soil. 3. Whether the strength parameters are assumed to be normal or beta variates, the relationships between the safety factor and the probability of failure are fairly consistent, regardless of the shape of the 3-D shear surface and the slope. 4. As the c-value increases, the probability of failure for the same safety factor increases; as the ${\phi}$-value increases, the probability of failure for the same safety factor decreases.

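The Monte Carlo definition Pf = M/N from the abstract can be sketched directly; the safety-factor model and the parameter distributions below are purely illustrative stand-ins for the paper's 3-D slope model and its normal/beta variates:

```python
import random

def failure_probability(n_trials, safety_factor, seed=0):
    """Monte Carlo estimate Pf = M / N: draw random strength parameters
    (simple normal variates for c and phi here; the paper also uses beta
    variates) and count trials with safety factor F < 1 as failures."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_trials):
        c = rng.gauss(20.0, 4.0)     # cohesion, assumed distribution
        phi = rng.gauss(30.0, 3.0)   # friction angle (deg), assumed
        if safety_factor(c, phi) < 1.0:
            failures += 1
    return failures / n_trials       # Pf = M / N

# A purely illustrative safety-factor model, increasing in c and phi:
pf = failure_probability(20_000, lambda c, phi: (c / 20.0 + phi / 30.0) / 2)
```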