• Title/Summary/Keyword: random parameter


Analysis of golf putting for Elite & Novice golfers Using Jerk Cost Function (저크비용함수를 이용한 골프 숙련자와 초보자간의 퍼팅 동작 분석)

  • Lim, Young-Tae;Choi, Jin-Sung;Han, Young-Min;Kim, Hyung-Sik;Yi, Jeong-Han;Jun, Jae-Hun;Tack, Gye-Rae
    • Korean Journal of Applied Biomechanics
    • /
    • v.16 no.1
    • /
    • pp.1-10
    • /
    • 2006
  • The purpose of this study was to identify critical parameters of putting performance using a jerk cost function. Jerk is the time rate of change of acceleration, and it has been suggested that skilled performance is characterized by decreased jerk magnitude. Four elite golfers (handicap ≤ 2) and 4 novice golfers participated in this study for the comparison. 3D kinematic data were collected for each subject performing 5 trials of putts at each of three distances (in random order): 1 m, 3 m, and 5 m. The putting stroke was divided into 3 phases: back swing, down swing, and follow-through. It was assumed that a smoothness difference exists between elite and novice golfers during putting. The distance and jerk cost function of the putting stroke were analyzed for each phase. Results showed a significant difference in the jerk cost function at the putter toe (in the medio-lateral direction) and at the center of mass between the two groups as putting distance increased. From these results it could be concluded that jerk can be used as a kinematic parameter for distinguishing elite from novice golfers.
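As a rough illustration (not the authors' code), the jerk cost of a movement phase can be computed by differentiating position three times and integrating the squared jerk; the function name and sampling scheme below are hypothetical:

```python
# Minimal sketch of a jerk cost function: jerk is the third time
# derivative of position, and the cost is the integral of squared
# jerk over the phase -- lower cost means a smoother stroke.

def jerk_cost(positions, dt):
    """Integral of squared jerk for a 1-D position trace sampled at dt."""
    # Third-order finite differences approximate the jerk signal.
    jerk = [(positions[i + 3] - 3 * positions[i + 2]
             + 3 * positions[i + 1] - positions[i]) / dt ** 3
            for i in range(len(positions) - 3)]
    # Riemann sum of squared jerk over the phase.
    return sum(j * j for j in jerk) * dt
```

For constant-velocity (or constant-acceleration) motion the jerk is zero, so the cost vanishes; a jerkier trace accumulates a larger cost.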

Core Demand Market by Visitor's Characteristics of Mountain Types of a National Park -focused on Demographic and Social Economical Factors- (국립공원 방문객 특성을 이용한 핵심수요시장연구 -인구통계학적 변인과 사회경제학적 변인을 중심으로-)

  • Gwak, Gang-Hee
    • The Journal of the Korea Contents Association
    • /
    • v.13 no.7
    • /
    • pp.361-368
    • /
    • 2013
  • This research aims to offer the information required to increase demand at the marketing-strategy level by investigating Mudeungsan visitors' demographic characteristics and socioeconomic variables. A proper analysis model must be applied: when the dependent variable is a discrete random variable with a discrete probability distribution, applying a regression model suited to continuous probability variables leads to grave errors in the parameter estimates. Therefore data analysis was performed with a Poisson model. However, as the data showed overdispersion, the parameters were estimated with the binomial Poisson model, which can handle this problem. As a result, some explanatory variables turned out to be significant, such as the visitor's age, occupation, preferred season to visit, type of company, five-day working week, and preferred type of tourism. Based on these influential variables, the author could offer the national park information about the characteristics of the revealed core market and a marketing strategy for it.
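The overdispersion check that motivates moving beyond the plain Poisson model can be sketched simply: for a Poisson variable the variance equals the mean, so a variance-to-mean ratio well above 1 signals overdispersion. The function and the visit counts below are hypothetical, not the paper's data:

```python
def dispersion_index(counts):
    """Sample variance-to-mean ratio of count data.

    For a Poisson-distributed variable this ratio is about 1;
    a value well above 1 indicates overdispersion."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)
    return var / mean

# Hypothetical per-visitor visit counts: a ratio >> 1 would justify
# switching from the Poisson model to one that allows overdispersion.
visits = [0, 1, 1, 2, 9, 0, 12, 1, 0, 14]
```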

Rank transformation analysis for 4 $\times$ 4 balanced incomplete block design (4 $\times$ 4 균형불완전블럭모형의 순위변환분석)

  • Choi, Young-Hun
    • Journal of the Korean Data and Information Science Society
    • /
    • v.21 no.2
    • /
    • pp.231-240
    • /
    • 2010
  • If only fixed effects exist in a 4 $\times$ 4 balanced incomplete block design, the power of the FR statistic for testing a main effect reaches its highest level with only a few replications. Under the exponential and double exponential distributions, the FR statistic shows relatively high power, with large differences compared with the F statistic. Further, in a traditional balanced incomplete block design with a fixed main effect and a random block effect, the power of the FR statistic is superior in all situations, regardless of the effect size of the main effect, the parameter size, and the type of population distribution of the block effect. The power of the FR statistic increases rapidly as replications increase. The overall power advantage of the FR statistic for testing a main effect is caused by a unique characteristic of the balanced incomplete block design, which has one main effect and one block effect with missing observations and thus responds sensitively to small increases in the main effect and the sample size.
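The FR statistic is the ordinary F statistic computed after replacing the pooled observations by their ranks. A minimal sketch on the simplest one-way case (helper names are hypothetical, and this omits the incomplete-block structure of the actual design):

```python
def midranks(values):
    """Ranks of pooled observations, averaging ties (midranks)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # midrank of the tied run
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def f_on_ranks(groups):
    """One-way F statistic computed on ranks (a rank-transform FR)."""
    pooled = [x for g in groups for x in g]
    r = midranks(pooled)
    # Re-split the ranks back into their groups.
    ranked, pos = [], 0
    for g in groups:
        ranked.append(r[pos:pos + len(g)])
        pos += len(g)
    n = len(pooled)
    grand = sum(r) / n
    ssb = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in ranked)
    ssw = sum((x - sum(g) / len(g)) ** 2 for g in ranked for x in g)
    dfb, dfw = len(groups) - 1, n - len(groups)
    return (ssb / dfb) / (ssw / dfw)
```

Because only the ranks enter the statistic, heavy-tailed populations (exponential, double exponential) cost the rank-transform version much less power than they cost the raw-data F.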

Ferroelectric Property of Bi3.75La0.25Ti3O12 Ceramic Sintered in Ambient Atmosphere (분위기 소결공정에 의한 Bi3.75La0.25Ti3O12세라믹의 강유전특성)

  • 김응권;박춘배;박기엽;송준태
    • Journal of the Korean Institute of Electrical and Electronic Material Engineers
    • /
    • v.15 no.9
    • /
    • pp.783-787
    • /
    • 2002
  • In recent years, Bi$_{4-x}$La$_x$Ti$_3$O$_{12}$ (BLT) has been one of the promising substitute materials for ferroelectric random access memory (FRAM) applications. However, systematic composition studies are still insufficient, so this experiment investigated an ambient-controlled ceramic sintering process that yields very good ferroelectric properties. Samples were prepared in bulk form, and their suitability for thin-film applications was estimated. The density of Bi$_{3.75}$La$_{0.25}$Ti$_3$O$_{12}$ was high, and the XRD pattern showed that the intensity of the main (117) peak increased with argon-ambient sintering. By controlling the quantity of oxygen, the crystallization showed a thin, long plate-like morphology, and excellent dielectric and polarization properties were obtained with argon-atmosphere sintering. This sintering process was also effective for the bulk sample. The argon-ambient sintered sample produced a higher permittivity of 154 and a remanent polarization (2Pr) of 6.8 μC/cm$^2$ compared with samples sintered in air and oxygen ambients. This sintering process thus shows promise for application to thin-film processing.

A hybrid self-adaptive Firefly-Nelder-Mead algorithm for structural damage detection

  • Pan, Chu-Dong;Yu, Ling;Chen, Ze-Peng;Luo, Wen-Feng;Liu, Huan-Lin
    • Smart Structures and Systems
    • /
    • v.17 no.6
    • /
    • pp.957-980
    • /
    • 2016
  • Structural damage detection (SDD) is a challenging task in the field of structural health monitoring (SHM). As an exploratory attempt at the SDD problem, a hybrid self-adaptive Firefly-Nelder-Mead (SA-FNM) algorithm is proposed in this study. First of all, the basic principle of the firefly algorithm (FA) is introduced. The Nelder-Mead (NM) algorithm is incorporated into the FA to improve its local searching ability. A new strategy for exchanging information within the firefly group is introduced into the SA-FNM to reduce the computation cost. A random walk strategy for the best firefly and a self-adaptive control strategy for three key parameters, namely light absorption, the randomization parameter, and the critical distance, are proposed to better balance the exploitation and exploration abilities of the SA-FNM. The computing performance of the SA-FNM is evaluated and compared with the basic FA on three benchmark functions. Secondly, the SDD problem is mathematically converted into a constrained optimization problem, which is then solved by the SA-FNM algorithm. A multi-step method is proposed for finding the minimum fitness with high probability. In order to assess the accuracy and feasibility of the proposed method, a two-storey rigid frame structure without finite element model (FEM) error and a steel beam with model error are taken as examples for numerical simulations. Finally, a series of experimental studies on damage detection of a steel beam with four damage patterns is performed in the laboratory. The illustrated results show that the proposed method can accurately identify the structural damage. Some valuable conclusions are drawn, and related issues are discussed as well.
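A minimal sketch of the basic FA that the SA-FNM builds on, without the Nelder-Mead hybridisation or the self-adaptive parameter control described in the paper; all parameter values and the benchmark function are illustrative:

```python
import math
import random

def firefly_minimize(f, dim=2, n=15, iters=100,
                     alpha=0.2, beta0=1.0, gamma=1.0, seed=1):
    """Basic firefly algorithm sketch: a firefly moves toward every
    brighter (lower-cost) firefly, with attractiveness decaying with
    distance via the light-absorption coefficient gamma, plus a small
    random-walk term scaled by alpha."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        vals = [f(x) for x in X]  # brightness evaluated once per sweep
        for i in range(n):
            for j in range(n):
                if vals[j] < vals[i]:  # move firefly i toward brighter j
                    r2 = sum((X[i][d] - X[j][d]) ** 2 for d in range(dim))
                    beta = beta0 * math.exp(-gamma * r2)
                    for d in range(dim):
                        X[i][d] += (beta * (X[j][d] - X[i][d])
                                    + alpha * (rng.random() - 0.5))
        alpha *= 0.97  # shrink the random-walk term over time
    return min(X, key=f)

# Sphere benchmark: global minimum 0 at the origin.
best = firefly_minimize(lambda x: sum(v * v for v in x))
```

The exploration/exploitation trade-off the paper tunes adaptively appears here as the fixed constants alpha, beta0, and gamma.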

Statistical Tests and Applications for the Stability of an Estimated Cointegrating Vector (공적분벡터의 안정성에 대한 실증연구)

  • Kim, Tae-Ho;Hwang, Sung-Hye;Kim, Mi-Yun
    • The Korean Journal of Applied Statistics
    • /
    • v.18 no.3
    • /
    • pp.503-519
    • /
    • 2005
  • Cointegration tests are usually performed under the assumption that the cointegrating vector is constant over the whole sample period. Most previous studies have used conventional cointegration methods to test for a stable long-run equilibrium relation among related variables. However, they have overlooked that the long-run equilibrium may not be unique and that a stable relation may not be guaranteed. This study develops additional statistical tests for the stability of the estimated cointegrating vector. Three tests for the parameter stability of a cointegrated regression model are utilized and applied to identify the types of variation in the long-run relation between domestic unemployment and the related macroeconomic variables of interest. The present paper finds that there exists a stable but time-varying long-run relation between them. The observed variation in the cointegrating relation is generally characterized by a discrete one-time shift, attributable to the IMF financial and economic crisis, rather than by a gradually evolving random walk process.
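As a crude, hypothetical illustration of the stability question (far simpler than the formal tests the paper develops), one can compare the long-run coefficient estimated over two subsamples; a discrete one-time shift shows up as two clearly different subsample estimates:

```python
def ols_slope(x, y):
    """Slope of a simple OLS regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

def split_slopes(x, y):
    """Estimate the long-run coefficient on each half of the sample.

    A large discrepancy between the two estimates is the informal
    symptom that formal parameter-stability tests quantify."""
    h = len(x) // 2
    return ols_slope(x[:h], y[:h]), ols_slope(x[h:], y[h:])
```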

Analysis of Slope Failure Patterns Using FracSys and UDEC: Slope Stability Analysis through a Statistical Joint Network Generation Technique and Monte Carlo Simulation (FracSys와 UDEC을 이용한 사면 파괴 양상 분석 통계적 절리망 생성 기법 및 Monte Carlo Simulation을 통한 사면 안정성 해석)

  • 김태희;최재원;윤운상;김춘식
    • Proceedings of the Korean Geotechical Society Conference
    • /
    • 2002.03a
    • /
    • pp.651-656
    • /
    • 2002
  • In general, the most important problem in slope stability analysis is that there is no definite way to describe the natural three-dimensional joint network, and many approaches have been tried to analyze slope stability. Numerical modeling is one branch of efforts to resolve the complexity of the natural system. UDEC, FLAC, and SWEDGE are widely used commercial codes for stability analysis. For these codes to be applied appropriately, however, the three-dimensional distribution of the joint network must be identified more explicitly. The remaining problem is to describe the three-dimensional network of joints and bedding definitively, which is almost impossible in a practical sense. A three-dimensional joint generation method based on random number generation, with the generated networks passed to UDEC, has been applied to settle these problems at field sites. However, this approach also has an important limitation: the joint network is generated only once, which restricts its application to field cases in practice. To overcome this limitation, Monte Carlo simulation is proposed in this study: 1) statistical analysis of the input values and definition of the applied system with statistical parameters; 2) instead of regarding a single generated network as the real system, each generated system is taken as just one plausible realization; 3) presentation of the design parameters through statistical analysis of the output values. The results of this study are not only the probability of failure but also the area of the failure block, shear strength, normal strength, and failure pattern, all described by statistical parameters. These results, such as shear strength, failure area, and failure pattern, can provide a direct basis for design: cutoff angle, support pattern, support strength, and so on.
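The Monte Carlo idea can be sketched for the simplest planar-sliding model: sample the joint strength from an assumed distribution, compute a factor of safety per trial, and report the fraction of failing trials. All distributions and parameter values below are hypothetical, not the study's field data:

```python
import math
import random

def prob_failure(n_trials=20000, seed=7):
    """Monte Carlo sketch of probabilistic slope stability for a dry
    planar slide: FS = tan(phi) / tan(theta), with the joint friction
    angle phi treated as a random variable.  The distribution and the
    slope geometry here are hypothetical."""
    rng = random.Random(seed)
    theta = math.radians(35.0)          # dip of the sliding plane
    failures = 0
    for _ in range(n_trials):
        phi = rng.gauss(40.0, 5.0)      # friction angle in degrees
        fs = math.tan(math.radians(phi)) / math.tan(theta)
        if fs < 1.0:                    # factor of safety below unity
            failures += 1
    return failures / n_trials
```

Each trial plays the role of one generated joint system; the output is a probability of failure rather than a single deterministic verdict, which is the point of the proposed procedure.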


Statistical Convergence Properties of an Adaptive Normalized LMS Algorithm with Gaussian Signals (가우시안 신호를 갖는 적응 정규화 LMS 앨고리듬의 통계학적 수렴 성질)

  • Sung Ho CHO;Iickho SONG;Kwang Ho PARK
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.16 no.12
    • /
    • pp.1274-1285
    • /
    • 1991
  • This paper presents a statistical convergence analysis of the normalized least mean square (NLMS) algorithm that employs a single-pole lowpass filter. In this algorithm the lowpass filter recursively adjusts its output towards the estimated value of the input signal power. The estimated input signal power so obtained at each time step is then used to normalize the convergence parameter. Under the assumption that the primary and reference inputs to the adaptive filter are zero-mean, wide-sense stationary Gaussian random processes, and further making use of the independence assumption, we derive expressions that characterize the mean and mean-squared behavior of the filter coefficients as well as the mean-squared estimation error. Conditions for mean and mean-squared convergence are explored. Comparisons are also made between the performance of the NLMS algorithm and that of the popular least mean square (LMS) algorithm. Finally, experimental results are presented that show very good agreement between the analytical and empirical results.
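A sketch of the recursion analysed here, assuming the standard NLMS update with the step size normalised by a single-pole lowpass power estimate; the variable names and parameter values are illustrative:

```python
def nlms_lowpass(x, d, order=4, mu=0.2, rho=0.9, eps=1e-6):
    """NLMS adaptive filter where the input power used for
    normalisation is tracked by a single-pole lowpass filter:
    P[n] = rho * P[n-1] + (1 - rho) * x[n]^2.
    Returns the error sequence and the final coefficients."""
    w = [0.0] * order
    power = 0.0
    errors = []
    for n in range(len(x)):
        # Most recent `order` input samples, zero-padded at the start.
        u = [x[n - k] if n - k >= 0 else 0.0 for k in range(order)]
        # Single-pole lowpass estimate of the input signal power.
        power = rho * power + (1.0 - rho) * x[n] * x[n]
        y = sum(wi * ui for wi, ui in zip(w, u))
        e = d[n] - y
        errors.append(e)
        step = mu / (order * power + eps)   # normalised step size
        w = [wi + step * e * ui for wi, ui in zip(w, u)]
    return errors, w
```

Feeding it a reference signal filtered through a known FIR response lets one watch the error decay and the coefficients converge toward that response.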


Classification and Generator Polynomial Estimation Method for BCH Codes (BCH 부호 식별 및 생성 파라미터 추정 기법)

  • Lee, Hyun;Park, Cheol-Sun;Lee, Jae-Hwan;Song, Young-Joon
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.38A no.2
    • /
    • pp.156-163
    • /
    • 2013
  • The use of an error-correcting code is essential in communication systems where the channel is noisy. When the channel coding parameters are unknown at the receiver side, decoding becomes difficult; to perform decoding without the channel coding information, we must estimate the parameters. In this paper, we introduce a method to reconstruct the generator polynomial of BCH (Bose-Chaudhuri-Hocquenghem) codes, based on the facts that the generator polynomial is composed of minimal polynomials and that BCH codes are cyclic codes. We present a probability compensation method to improve the reconstruction performance, based on the observation that a random data pattern can also happen to be divisible by a minimal polynomial of the generator polynomial. We confirm the performance improvement through intensive computer simulation.
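The divisibility idea behind the reconstruction can be sketched with GF(2) polynomial arithmetic: a true codeword polynomial leaves remainder zero when divided by any minimal-polynomial factor of the generator polynomial, while a bit error almost always breaks the divisibility. The polynomials below are illustrative:

```python
def gf2_mod(dividend, divisor):
    """Remainder of GF(2) polynomial division.  Polynomials are ints
    whose bits are coefficients (bit k = coefficient of x^k)."""
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        dividend ^= divisor << (dividend.bit_length() - dlen)
    return dividend

def gf2_mul(a, b):
    """Carry-less (GF(2)) polynomial multiplication."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

# m1(x) = x^4 + x + 1, a minimal polynomial used by length-15 BCH codes.
m1 = 0b10011
```

In the identification setting, many received blocks are reduced modulo each candidate minimal polynomial; candidates that divide (nearly) all blocks are taken as factors of the generator polynomial.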

Discrete-Time Analysis of Throughput and Response Time for LAP Derivative Protocols under Markovian Block-Error Pattern (마르코프 오류모델 하에서의 LAP 계열 프로토콜들의 전송성능과 반응시간에 대한 이산-시간 해석)

  • Cho, Young-Jong;Choi, Dug-Kyoo
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.11
    • /
    • pp.2786-2800
    • /
    • 1997
  • In this paper, we investigate how well channel memory (statistical dependence in the occurrence of transmission errors) can be used in the evaluation of widely used error control schemes. For this we assume a special case, the simplest two-state Markovian block-error pattern, in which each block is classified according to whether its transmission is in error or not. We apply the derived pattern to the performance evaluation of practical link-level procedures, LAPB/D/M with multi-reject options, and investigate both throughput and user-perceived response time behavior in the discrete-time domain to determine how much error recovery performance improves under burst-error conditions. Through numerical examples, we show that the simplest Markovian block-error pattern tends to yield superior throughput and delay characteristics compared with the random error case. Also, instead of the mean alone, we propose a new measure of response time, specified as the mean plus two standard deviations so as to capture user-perceived worst cases, and show that it is much more sensitive to parameter variations than the mean alone.
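The simplest two-state Markovian block-error pattern can be sketched as a Gilbert-type model: a "good" and a "bad" channel state with state-dependent block-error probabilities, so errors arrive in bursts rather than independently. The transition and error probabilities below are hypothetical:

```python
import random

def gilbert_errors(n_blocks, p_gb=0.05, p_bg=0.4, p_err_bad=0.5,
                   p_err_good=0.0, seed=3):
    """Two-state Markovian block-error sketch: in the 'bad' state a
    block is in error with probability p_err_bad, in the 'good' state
    with p_err_good; p_gb and p_bg are the good->bad and bad->good
    transition probabilities per block."""
    rng = random.Random(seed)
    bad = False
    errs = []
    for _ in range(n_blocks):
        p = p_err_bad if bad else p_err_good
        errs.append(rng.random() < p)
        # State transition applied before the next block.
        if bad:
            if rng.random() < p_bg:
                bad = False
        elif rng.random() < p_gb:
            bad = True
    return errs
```

Compared with i.i.d. errors at the same mean rate, the clustered pattern leaves long error-free runs, which is why go-back-N / reject-based recovery fares better under this model than under the random-error assumption.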
