• Title/Abstract/Keywords: lognormal distribution

Search results: 248

An Empirical Study on the Technology Innovation Distribution, Technology Imitation Distribution and New International Trade Theory (기술혁신분포, 기술모방분포 그리고 신 국제무역이론에 대한 실증연구)

  • Cho, Sang Sup; Min, Kyung Se; Cho, Byung Sun; Hwang, Ho Young
    • Journal of Korea Technology Innovation Society / v.21 no.2 / pp.860-874 / 2018
  • This study provides an empirical analysis of the new international trade theory (Melitz, 2012, 2014, 2015). The theory centers on how the technological competitiveness of heterogeneous firms drives trade effects, and it rests on the key assumption that the shape of the firm-level technology distribution determines those effects. We empirically estimated the distribution of firm technology in Korean manufacturing. For this purpose, Korea's overall firm technology distribution was divided into a technological innovation distribution and a technological imitation distribution; the distribution types were then statistically verified and the appropriateness of the new international trade theory was evaluated. Based on the empirical results, directions for technology policy are briefly suggested.
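
The distribution-verification step described above can be sketched briefly. The following is a minimal, hypothetical example rather than the authors' code: the abstract does not say which candidate families or firm-level proxy were used, so the sketch simply fits lognormal and Pareto forms to a synthetic technology proxy by maximum likelihood and compares them with a Kolmogorov-Smirnov test.

```python
# Minimal sketch (not the authors' code): fit candidate forms to a synthetic
# firm-level technology proxy and compare them with a KS goodness-of-fit test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
tech = rng.lognormal(mean=1.0, sigma=0.8, size=2000)   # synthetic stand-in for firm data

for label, dist in [("lognormal", stats.lognorm), ("Pareto", stats.pareto)]:
    params = dist.fit(tech)                            # maximum-likelihood estimates
    ks = stats.kstest(tech, dist.name, args=params)
    print(f"{label:10s} KS D = {ks.statistic:.3f}, p = {ks.pvalue:.3f}")
```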

Succession and Heterogeneity of Plant Community in Mt. Yongam, Kwangnung Experimental Forest (광릉내 용암산 식물군집의 천이와 이질성)

  • You, Young-Han; Kwang-Je Gi; Dong-Uk Han; Young-se Kwak; Joon-He Kim
    • The Korean Journal of Ecology / v.18 no.1 / pp.89-97 / 1995
  • In order to study the successional trend and heterogeneity of a forest community, we investigated the DBH frequency distribution of the dominant tree species and the changes in several community indices, including β-diversity (β_t), along a belt transect in Mt. Yongam, Kwangnung Experimental Forest, which has been preserved for about 530 years. Quercus serrata, Carpinus laxiflora, and C. cordata were the three dominant species, and their DBH frequency distributions showed a reverse J-shaped form, so these species appear to be self-maintaining. The dominance-diversity curve followed a lognormal distribution. The values of d and H' for pooled quadrats were 0.13 and 1.09, respectively, but within individual quadrats these indices varied over the ranges 0.13 to 0.57 and 0.5 to 1.09, respectively. The value of β_t along the belt transect ranged from 0.14 to 0.42. These results suggest that this forest community is in a stable climax stage, but its components undergo a heterogeneous microsuccession.
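
A small sketch of the two quantitative checks mentioned above, using synthetic abundances rather than the field data: fitting a lognormal to per-species abundances, as in a dominance-diversity check, and computing the Shannon index H' used as one of the community indices.

```python
# Illustrative sketch only (synthetic abundances, not the field data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
abundances = rng.lognormal(mean=3.0, sigma=1.2, size=40)        # individuals per species

shape, loc, scale = stats.lognorm.fit(abundances, floc=0)       # lognormal abundance model
ks = stats.kstest(abundances, "lognorm", args=(shape, loc, scale))
print(f"lognormal fit: sigma = {shape:.2f}, KS p = {ks.pvalue:.3f}")

p = abundances / abundances.sum()
H = -(p * np.log(p)).sum()                                      # Shannon diversity H'
print(f"H' = {H:.2f}")
```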


Speckle Removal of SAR Imagery Using a Point-Jacobian Iteration MAP Estimation

  • Lee, Sang-Hoon
    • Korean Journal of Remote Sensing / v.23 no.1 / pp.33-42 / 2007
  • In this paper, an iterative MAP approach using a Bayesian model, based on a lognormal distribution for image intensity and a Gibbs random field (GRF) for image texture, is proposed for despeckling SAR images corrupted by multiplicative speckle noise. When the image intensity is logarithmically transformed, the speckle becomes approximately additive Gaussian noise and converges to a normal distribution much faster than the intensity distribution does. MRFs have been used to model spatially correlated, signal-dependent phenomena in speckled SAR images. The MRF is incorporated into digital image analysis by viewing pixel types as states of molecules in a lattice-like physical system defined on a GRF. Because of the MRF-GRF equivalence, assigning an energy function to the physical system determines its Gibbs measure, which is used to model molecular interactions. The proposed Point-Jacobian iterative MAP estimation method was first evaluated using simulation data generated by the Monte Carlo method, and the methodology was then applied to data acquired by ESA's ERS satellite over the Nonsan area of the Korean Peninsula. In the extensive experiments of this study, the proposed method demonstrated its capability to suppress speckle noise and estimate the noise-free intensity.
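
The role of the log transform can be illustrated with a short sketch. This is not the paper's Point-Jacobian MAP iteration; a plain box filter stands in for the Bayesian estimator, and the scene and multi-look speckle are synthetic.

```python
# Sketch of the log-transform idea only: multiplicative speckle becomes roughly
# additive Gaussian noise after a log transform, so even simple smoothing in the
# log domain already despeckles.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
clean = np.full((128, 128), 100.0)
clean[32:96, 32:96] = 200.0                                    # bright square as "scene"
speckle = rng.gamma(shape=4.0, scale=0.25, size=clean.shape)   # 4-look speckle, mean 1
observed = clean * speckle                                     # multiplicative noise model

log_img = np.log(observed)                                     # noise now ~additive Gaussian
smoothed = ndimage.uniform_filter(log_img, size=5)             # crude stand-in for the MAP smoother
despeckled = np.exp(smoothed)

print("RMSE before:", np.sqrt(np.mean((observed - clean) ** 2)).round(1))
print("RMSE after :", np.sqrt(np.mean((despeckled - clean) ** 2)).round(1))
```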

A Study on Cost Rate Analysis Methodology of Credit Card Value Proposition (신용카드 부가서비스 요율 분석 방법론에 대한 연구)

  • Lee, Chan-Kyung; Roh, Hyung-Bong
    • Journal of Korean Society for Quality Management / v.46 no.4 / pp.797-820 / 2018
  • Purpose: This study seeks an appropriate cost rate analysis methodology for credit card value propositions in Korea. It is argued that methodologies based on probability distributions are more suitable for this problem than methodologies based on data mining. The analysis model constructed for cost rate estimation is called the VCPM model. Methods: The model includes two major variables, denoted S and P. S is the monthly credit card usage amount, and P is the proportion of the monthly usage amount spent at special merchants. The distributions assumed for P are positively skewed distributions such as the exponential, gamma, and lognormal. The major inputs to the model are also derived from S and P, namely E(S) and the aggregate proportion of usage at special merchants within the total monthly usage amount. Results: When the credit card's value proposition is a general discount, the VCPM model fits well and generates a reasonable cost rate (denoted R). However, the model does not appear to work well for other types of credit cards. Conclusion: The VCPM model is reliable for calculating the cost rate of credit cards with a positively skewed distribution of P, that is, general discount cards. Another model should be built for cards with other types of distributions of P.
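
The candidate-distribution step can be sketched as follows. This is a minimal example with synthetic data; the VCPM model itself and the actual card data are not reproduced, only the comparison of exponential, gamma, and lognormal fits for a positively skewed share variable P.

```python
# Sketch of the candidate-fit step only (synthetic data, not the VCPM model).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
P = np.clip(rng.gamma(shape=1.5, scale=0.1, size=5000), 1e-4, 1.0)   # skewed shares in (0, 1]

for name, dist in [("exponential", stats.expon), ("gamma", stats.gamma), ("lognormal", stats.lognorm)]:
    params = dist.fit(P, floc=0)                  # MLE with the location fixed at 0
    k = len(params) - 1                           # free parameters (loc is not estimated)
    aic = 2 * k - 2 * dist.logpdf(P, *params).sum()
    print(f"{name:12s} AIC = {aic:.1f}")
```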

Probabilistic analysis of gust factors and turbulence intensities of measured tropical cyclones

  • Tianyou Tao; Zao Jin; Hao Wang
    • Wind and Structures / v.38 no.4 / pp.309-323 / 2024
  • The gust factor and turbulence intensity are two crucial parameters that characterize the properties of turbulence. In tropical cyclones (TCs), these parameters exhibit significant variability, yet there is a lack of established formulas that account for their probabilistic characteristics while considering their inherent connection. Accordingly, a probabilistic analysis of the gust factors and turbulence intensities of TCs is conducted based on fourteen sets of wind data collected at the Sutong Cable-stayed Bridge site. Initially, the turbulence intensities and gust factors of the recorded data are computed, followed by an analysis of their probability densities across different ranges categorized by mean wind speed. The Gaussian, lognormal, and generalized extreme value (GEV) distributions are employed to fit the measured probability densities, and their effectiveness is subsequently evaluated. The Gumbel distribution, a specific instance of the GEV distribution, is identified as an optimal choice for the probabilistic characterization of turbulence intensity and gust factor in TCs, and the corresponding empirical models are established through curve fitting. Using the Gumbel distribution as a template, the nexus between the probability density functions of turbulence intensity and gust factor is built, leading to a generalized probabilistic model that statistically describes turbulence intensity and gust factor in TCs. Finally, these empirical models are validated against measured data and compared with the recommendations of design specifications.
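
A minimal sketch of the first processing step, under conventional definitions (10-min mean wind for turbulence intensity, a 3-s moving-average gust for the gust factor) and a synthetic wind record; the paper's exact windowing may differ. A Gumbel distribution is then fitted to the block turbulence intensities.

```python
# Sketch with synthetic data: block-wise turbulence intensity and gust factor,
# followed by a Gumbel (gumbel_r) fit to the turbulence intensities.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
fs = 1.0                                          # 1 Hz sampling (assumption)
u = 20.0 + 2.5 * rng.standard_normal(int(6 * 3600 * fs))   # 6 h synthetic wind record

block = int(600 * fs)                             # 10-min blocks
TI, G = [], []
for seg in u[: len(u) // block * block].reshape(-1, block):
    mean_u = seg.mean()
    gust3s = np.convolve(seg, np.ones(3) / 3, mode="valid").max()   # 3-s moving-average gust
    TI.append(seg.std(ddof=1) / mean_u)
    G.append(gust3s / mean_u)

loc, scale = stats.gumbel_r.fit(TI)               # Gumbel fit to turbulence intensity
print(f"Gumbel fit to TI: loc = {loc:.4f}, scale = {scale:.4f}; mean gust factor = {np.mean(G):.3f}")
```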

Estimation of sewer deterioration by Weibull distribution function (와이블 분포함수를 이용한 하수관로 노후도 추정)

  • Kang, Byongjun; Yoo, Soonyu; Park, Kyoohong
    • Journal of Korean Society of Water and Wastewater / v.34 no.4 / pp.251-258 / 2020
  • Sewer deterioration models are needed to forecast the remaining life expectancy of sewer networks by assessing their condition. In this study, the occurrence probability of a serious defect (condition state 3, CS3), at which a sewer rehabilitation program should be implemented, was evaluated using four probability distribution functions: the normal, lognormal, exponential, and Weibull distributions. A sample of 252 km of CCTV-inspected sewer pipe data from city Z was collected first. The effective data with reliable information (284 sewer sections totaling 8.15 km) were then extracted and classified into three groups by sub-catchment area, sewer material, and sewer pipe size. The Anderson-Darling test identified the Weibull distribution as the best-fitting probability distribution for sewer defect occurrence. The shape parameters (β) and scale parameters (η) of the Weibull distribution were estimated for the three groups, together with standard errors, 95% confidence intervals, and log-likelihood values. Plots of the probability density function and cumulative distribution function were obtained from the estimated parameter values and can be used to indicate the quantitative level of risk of CS3 occurrence. Groups 1, 2, and 3 were estimated to exceed a 50% CS3 occurrence probability at 13, 11, and 16 years after installation, respectively. For every data group, the time at which the CS3 occurrence probability exceeds 90% was predicted to be 27 to 30 years after installation.
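
The Weibull step can be sketched as follows, with synthetic time-to-CS3 ages standing in for the CCTV data set: fit the shape (β) and scale (η) parameters, read off the years at which the CS3 occurrence probability exceeds 50% and 90%, and run an Anderson-Darling check via the log-transform link between the Weibull and the Gumbel-of-minima distribution.

```python
# Sketch with synthetic ages (not the study's CCTV data): Weibull fit, exceedance
# times from the inverse CDF, and an Anderson-Darling goodness-of-fit check.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
ages_at_cs3 = rng.weibull(2.2, size=300) * 15.0               # years to serious defect (synthetic)

beta, loc, eta = stats.weibull_min.fit(ages_at_cs3, floc=0)   # shape β, scale η
print(f"shape beta = {beta:.2f}, scale eta = {eta:.1f} years")

for p in (0.5, 0.9):
    t_p = stats.weibull_min.ppf(p, beta, loc=0, scale=eta)    # inverse CDF at probability p
    print(f"P(CS3) exceeds {p:.0%} at about {t_p:.1f} years after installation")

# log of a Weibull variable follows a Gumbel (minima) law, so test that instead
ad = stats.anderson(np.log(ages_at_cs3), dist="gumbel_l")
print("Anderson-Darling statistic:", round(ad.statistic, 3))
```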

Design of Random Number Generator for Simulation of Speech-Waveform Coders (음성엔코더 시뮬레이션에 사용되는 난수발생기 설계)

  • 박중후
    • The Journal of the Acoustical Society of Korea / v.20 no.2 / pp.3-9 / 2001
  • In this paper, a random number generator for the simulation of speech-waveform coders was designed. A random number generator having a desired probability density function and a desired power spectral density is discussed, and experimental results are presented. The technique is based on the Sondhi algorithm, which consists of a linear filter followed by a memoryless nonlinearity. Several methods of obtaining memoryless nonlinearities for some typical continuous distributions are discussed. The Sondhi algorithm is analyzed in the time domain using the diagonal expansion of the bivariate Gaussian probability density function. It is shown that the algorithm gives satisfactory results when the memoryless nonlinearity takes an antisymmetric form, as for the uniform, Cauchy, binary, and gamma distributions. It is also shown that the algorithm does not perform well when the corresponding memoryless nonlinearity cannot be obtained analytically, as for the Student-t and F distributions, or when it cannot be expressed in an antisymmetric form, as for the chi-squared and lognormal distributions.
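
The filter-plus-nonlinearity structure can be sketched as below. This shows only the basic structure, not the Sondhi design: it does not compensate the linear filter for the correlation change introduced by the nonlinearity, and the inverse-CDF map is just one convenient choice of memoryless nonlinearity.

```python
# Sketch of the structure only: shape white Gaussian noise with an AR(1) filter,
# then apply a memoryless inverse-CDF map to impose a desired marginal (lognormal).
import numpy as np
from scipy import stats, signal

rng = np.random.default_rng(6)
white = rng.standard_normal(20000)

rho = 0.9
colored = signal.lfilter([np.sqrt(1 - rho**2)], [1.0, -rho], white)   # AR(1) Gaussian, ~unit variance

u = stats.norm.cdf(colored)          # Gaussian CDF -> uniform in (0, 1)
x = stats.lognorm.ppf(u, s=0.5)      # inverse CDF of the target marginal

print("output mean/var:", x.mean().round(3), x.var().round(3))
print("lag-1 correlation of output:", np.corrcoef(x[:-1], x[1:])[0, 1].round(3))
```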


Frequency Analysis of Daily Rainfall in Han River Basin Based on Regional L-moments Algorithm (L-모멘트법을 이용한 한강유역 일강우량자료의 지역빈도해석)

  • Lee, Dong-Jin; Heo, Jun-Haeng
    • Journal of Korea Water Resources Association / v.34 no.2 / pp.119-130 / 2001
  • At-site and regional frequency analyses of annual maximum 1-, 2-, and 3-day rainfall in the Han River basin were performed and compared based on the regional L-moments algorithm. To perform the regional frequency analysis, the Han River basin was subdivided into three sub-basins: the South Han River, North Han River, and downstream regions. For each sub-basin, discordancy and homogeneity tests were performed. As a result of the goodness-of-fit tests, the lognormal model was selected as an appropriate probability distribution for both the South Han River and downstream regions, and the gamma-3 model for the North Han River region. From Monte Carlo simulation, the RBIAS and RRMSE of the quantiles estimated by regional and at-site frequency analysis were calculated and compared with each other. Regional frequency analysis shows a smaller RRMSE of the estimated quantiles than at-site frequency analysis over all return periods, and the difference in RRMSE between the two approaches increases as the return period increases. As a result, regional frequency analysis is shown to perform better than at-site analysis for annual maximum rainfall data in the Han River basin.
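
The first step of an L-moment analysis, computing sample L-moments from probability-weighted moments, can be sketched directly; the regional algorithm (discordancy, homogeneity, and goodness-of-fit measures) is not reproduced here, and the annual maxima below are synthetic.

```python
# Sample L-moments via probability-weighted moments (synthetic annual maxima).
import numpy as np

def sample_lmoments(x):
    """Return (l1, l2, t3): L-mean, L-scale, and L-skewness of a 1-D sample."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    j = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((j - 1) / (n - 1) * x) / n
    b2 = np.sum((j - 1) * (j - 2) / ((n - 1) * (n - 2)) * x) / n
    l1 = b0
    l2 = 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    return l1, l2, l3 / l2

rng = np.random.default_rng(7)
annual_max = rng.lognormal(mean=4.5, sigma=0.35, size=40)   # synthetic annual maxima (mm)
l1, l2, t3 = sample_lmoments(annual_max)
print(f"l1 = {l1:.1f}, l2 = {l2:.1f}, L-skewness = {t3:.3f}")
```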


Despeckling and Classification of High Resolution SAR Imagery (고해상도 SAR 영상 Speckle 제거 및 분류)

  • Lee, Sang-Hoon
    • Korean Journal of Remote Sensing / v.25 no.5 / pp.455-464 / 2009
  • Lee (2009) proposed a boundary-adaptive despeckling method using a Bayesian model based on a lognormal distribution for image intensity and a Markov random field (MRF) for image texture. The method employs Point-Jacobian iteration to obtain a maximum a posteriori (MAP) estimate of the despeckled imagery. The boundary-adaptive algorithm is designed to use less information from more distant neighbors as a pixel gets closer to a boundary, which reduces the chance of mixing in pixel values from adjacent regions with different characteristics. The boundary-adaptive scheme was comprehensively evaluated using simulation data, and the effectiveness of boundary adaptation was demonstrated in Lee (2009). This study, as an extension of Lee (2009), suggests a modified MAP estimation iteration to enhance computational efficiency and to incorporate classification. Experiments on simulation data show that boundary adaptation yields clear boundaries as well as reduced classification error. The boundary-adaptive scheme was also applied to high-resolution Terra-SAR data acquired over the west coast of Youngjong-do, and the results imply that it can improve analytical accuracy in SAR applications.
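
The boundary-adaptation idea, drawing less on neighbors that likely belong to a different region, can be illustrated with a simple difference-weighted smoothing pass in the log domain. This is an illustration only, not Lee's algorithm, and the scene and speckle below are synthetic.

```python
# Illustration of the boundary-adaptation idea: neighbor weights shrink as the
# log-intensity difference grows, so pixels across a likely boundary count less.
import numpy as np

def edge_aware_smooth(log_img, sigma=0.3):
    """One pass of difference-weighted 3x3 averaging."""
    pad = np.pad(log_img, 1, mode="edge")
    num = np.zeros_like(log_img)
    den = np.zeros_like(log_img)
    h, w = log_img.shape
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            nb = pad[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
            wgt = np.exp(-((nb - log_img) ** 2) / (2 * sigma**2))   # small across edges
            num += wgt * nb
            den += wgt
    return num / den

rng = np.random.default_rng(8)
scene = np.where(np.arange(64)[:, None] < 32, 100.0, 220.0) * np.ones((64, 64))
noisy = scene * rng.gamma(4.0, 0.25, size=scene.shape)              # 4-look speckle
despeckled = np.exp(edge_aware_smooth(np.log(noisy)))
print("RMSE noisy / filtered:",
      np.sqrt(np.mean((noisy - scene) ** 2)).round(1),
      np.sqrt(np.mean((despeckled - scene) ** 2)).round(1))
```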

Predicting claim size in the auto insurance with relative error: a panel data approach (상대오차예측을 이용한 자동차 보험의 손해액 예측: 패널자료를 이용한 연구)

  • Park, Heungsun
    • The Korean Journal of Applied Statistics / v.34 no.5 / pp.697-710 / 2021
  • Relative error prediction is preferred over ordinary prediction methods when relative or percentile errors are regarded as important, especially in econometrics, software engineering, and government official statistics. Relative error prediction techniques have been developed for linear and nonlinear regression, nonparametric regression using kernel smoothers, and stationary time series models; however, random effect models have not yet been used in relative error prediction. The purpose of this article is to extend relative error prediction to some generalized linear mixed models (GLMMs) with panel data, that is, random effect models based on the gamma, lognormal, or inverse Gaussian distribution. For better understanding, real auto insurance data are used to predict the claim size, and the best predictor and the best relative error predictor are comparatively illustrated.
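
The relative-error criterion itself can be illustrated without the panel GLMM. Under squared relative error, the optimal predictor is E(1/Y | X) divided by E(1/Y² | X); for a lognormal response this equals exp(μ - 1.5σ²), which is smaller than the usual conditional mean exp(μ + 0.5σ²). The sketch below checks this numerically on synthetic claim sizes; the parameters are illustrative, not from the paper.

```python
# Compare the squared-error-optimal and relative-error-optimal predictors for a
# lognormal claim size (synthetic data; parameters are illustrative only).
import numpy as np

rng = np.random.default_rng(9)
mu, sigma = 7.0, 0.9                          # log-scale parameters of claim size
y = rng.lognormal(mu, sigma, size=100_000)    # synthetic claim sizes

pred_mse = np.exp(mu + 0.5 * sigma**2)        # optimal under squared error
pred_rel = np.exp(mu - 1.5 * sigma**2)        # optimal under squared relative error

for name, d in [("squared-error optimal", pred_mse), ("relative-error optimal", pred_rel)]:
    msre = np.mean(((y - d) / y) ** 2)
    print(f"{name:22s} predictor = {d:9.1f}, mean squared relative error = {msre:.3f}")
```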