• Title/Summary/Keyword: 가중함수 (weighting function)

Search Results: 340

μ-Synthesis Controller Design and Experimental Verification for a Seismic-excited MDOF Building (지진을 받는 다자유도 건물의 μ합성 제어기 설계 및 검증실험)

  • 민경원;주석준;이영철
    • Journal of the Earthquake Engineering Society of Korea
    • /
    • v.6 no.6
    • /
    • pp.41-48
    • /
    • 2002
  • This study concerns a structural control experiment on a small-scale three-story building structure employing an active mass damper subjected to earthquake loading. $\mu$-synthesis controllers, which belong to the robust control strategies, were designed and their performance was experimentally verified. Frequency-dependent weighting functions corresponding to the disturbance input and the controlled output were defined and combined to produce optimal $\mu$-synthesis controllers. The experimental results show a 60-70% reduction in RMS responses under band-limited white noise excitation and a 30-45% reduction in peak responses under scaled earthquake excitations. Good agreement was obtained between simulations based on the identified mathematical model and the experimental results. Simulations for the system with uncertainties also show that the designed controllers are robust within a specified range of uncertainties.
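
Frequency-dependent weighting functions of the kind the abstract mentions are commonly first-order frequency-shaped filters. A minimal sketch of evaluating the magnitude of a standard first-order sensitivity weight (all parameter values here are illustrative assumptions, not taken from the paper):

```python
def ws_mag(w, M=2.0, wb=10.0, eps=1e-3):
    """Magnitude of Ws(jw) for Ws(s) = (s/M + wb) / (s + wb*eps):
    gain ~1/eps below the bandwidth wb (demanding strong disturbance
    rejection there) and ~1/M above it (relaxing the demand)."""
    num = complex(wb, w / M)      # jw/M + wb
    den = complex(wb * eps, w)    # jw + wb*eps
    return abs(num / den)
```

In a μ-synthesis design, weights of this shape multiply the disturbance-input and controlled-output channels before the controller synthesis.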

Evaluation of Weighted Correlator for Multipath Mitigation in GPS Receiver (GPS수신기의 다중경로 오차 제거를 위한 가중 상관기의 성능평가)

  • Shin, Mi-Young;Jang, Han-Jin;Suh, Sang-Hyun;Park, Chan-Sik;Hwang, Dong-Hwan;Lee, Sang-Jeong
    • Journal of Navigation and Port Research
    • /
    • v.31 no.5 s.121
    • /
    • pp.409-414
    • /
    • 2007
  • The effect of multipath is especially serious in urban areas and on the sea surface, where buildings and water reflect the GPS signal. Multipath degrades performance in many GPS applications, because its presence reduces the accuracy of the pseudorange measurement and, in turn, of the position. In this paper, a multipath mitigation technique, the weighted correlation method, is implemented on a software GPS receiver; the asymmetric correlation function is compensated by modifying the late correlation value. Asymmetry compensation is obtained as a weighted sum of two correlators with different early-late chip spacings. This structure is adopted to lower the computational load while maintaining comparable performance. The implemented multipath mitigation technique is evaluated using GPS and multipath signals generated by a GPS signal generator and a software GPS receiver. The test results show that the weighted correlation method gives better performance than the standard correlator and the narrow correlator.
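
The weighted sum of two early-late correlators can be sketched as follows; the triangular autocorrelation model, the multipath parameters, the chip spacings, and the weight value are illustrative assumptions, not the paper's implementation:

```python
def corr(tau, amp=1.0, mp_delay=0.0, mp_amp=0.0):
    """Triangular C/A-code autocorrelation plus one multipath replica."""
    tri = lambda t: max(0.0, 1.0 - abs(t))
    return amp * tri(tau) + mp_amp * tri(tau - mp_delay)

def el_discriminator(track_err, spacing, **mp):
    """Early-minus-late discriminator for a given chip spacing."""
    early = corr(track_err - spacing / 2, **mp)
    late = corr(track_err + spacing / 2, **mp)
    return early - late

def weighted_discriminator(track_err, w=0.7, d_narrow=0.1, d_wide=0.5, **mp):
    """Weighted sum of a narrow and a wide early-late discriminator,
    which partially cancels the multipath-induced asymmetry."""
    return (w * el_discriminator(track_err, d_narrow, **mp)
            + (1 - w) * el_discriminator(track_err, d_wide, **mp))
```

With a multipath replica present, the weighted discriminator at zero tracking error is smaller in magnitude than the wide-spacing discriminator alone, i.e. the tracking bias shrinks.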

Analysis of influencing on Inefficiencies of Korean Banking Industry using Weighted Russell Directional Distance Model (가중평균 러셀(Russell) 방향거리함수모형을 이용한 은행산업의 비효율성 분석)

  • Yang, Dong-Hyun;Chang, Young-Jae
    • Journal of Digital Convergence
    • /
    • v.17 no.5
    • /
    • pp.117-125
    • /
    • 2019
  • This study measured the inefficiency of Korean banks for the years 2004-2013 using the weighted Russell directional distance model (WRDDM). Examining the contributions of inputs and outputs to this inefficiency, we found that non-performing loans, as an undesirable output, were the most influential factor. The annual average inefficiency of Korean banks was 0.3912, consisting of non-performing loans (0.1883), output factors other than non-performing loans (0.098), and input factors (0.098). The annual average inefficiency rose sharply from 0.2995 to 0.4829, mainly due to a sharp increase in the inefficiency of non-performing loans from 0.1088 to 0.2678, before and after the 2007-2008 global financial crisis. We showed empirically that non-performing loans need to be considered, since they were the most important of the factors influencing technical inefficiency, such as manpower, total deposits, securities, and non-performing loans. This study has some limitations, since we did not control for financial environment factors in the WRDDM.

Directional headphone design based on delay-weight-sum beamforming technique (지연-가중-합 빔형성 기반의 지향성 헤드폰 설계)

  • Jeong, Jihyeon;Noh, Jeein;Park, Youngjin
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference
    • /
    • 2014.10a
    • /
    • pp.712-712
    • /
    • 2014
  • This paper describes the microphone-placement design of a directional headphone employing the delay-weight-sum beamforming method. The number of microphones and the spacing between them are the design variables affecting the headphone's directivity. For practicality, this paper targets an array 10 cm long with at most four microphones. The goal is to maximize the front-to-back sound pressure difference by amplifying sound arriving from the front and attenuating sound from the rear. The optimal microphone placement was determined through simulations using a spherical-head head-related transfer function. The designed headphone, using three microphones, showed an average front-to-back sound pressure difference of 34.6 dB over the 300-3000 Hz band, 8.8 dB better than the result obtained with the delay-and-sum beamforming method in a previous study.
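
A delay-weight-sum line-array beamformer of the kind described can be sketched in the free field (ignoring the head-related transfer function used in the paper); the microphone positions, weights, and test frequency below are illustrative:

```python
import cmath
import math

SPEED_OF_SOUND = 343.0  # m/s

def array_response(mic_pos, weights, freq, angle_deg):
    """Far-field response of a delay-weight-sum endfire array steered
    to the front (0 deg): each microphone is delayed so on-axis sound
    adds in phase, then the weighted outputs are summed."""
    k = 2 * math.pi * freq / SPEED_OF_SOUND
    theta = math.radians(angle_deg)
    total = 0.0
    for x, w in zip(mic_pos, weights):
        # incident-wave phase k*x*cos(theta) minus the steering delay k*x
        total += w * cmath.exp(1j * k * x * (math.cos(theta) - 1.0))
    return abs(total) / sum(weights)

def front_back_db(mic_pos, weights, freq):
    """Front-to-back pressure ratio in dB, the quantity maximized above."""
    return 20.0 * math.log10(array_response(mic_pos, weights, freq, 0.0)
                             / array_response(mic_pos, weights, freq, 180.0))
```

For three microphones on a 10 cm aperture the steered front response is unity by construction, and the rear response is attenuated.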

A Nonparametric Combined Estimator of Reliability under the Random Censorship Model (임의중단모형에서 신뢰도의 비모수적 통합형 추정량)

  • 이재만;차영준;장덕준
    • Communications for Statistical Applications and Methods
    • /
    • v.5 no.3
    • /
    • pp.685-694
    • /
    • 1998
  • In clinical trials and reliability engineering, the Kaplan-Meier estimator and the Nelson-type estimator are widely used as nonparametric reliability estimators with randomly censored data. However, while the Nelson-type estimator outperforms the Kaplan-Meier estimator in terms of mean squared error, it has the small-sample property that its bias grows in the positive direction as reliability decreases. Because of this property, when estimating quantities expressed as functions of reliability, such as the residual-life quantile function, the Nelson-type estimator performs worse than the Kaplan-Meier estimator in terms of mean squared error. Taking this into account, we propose a new nonparametric reliability estimator that combines the two estimators as a weighted average, and we compare and analyze its properties.
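
A weighted-average combination of the Kaplan-Meier and Nelson-type survival estimators, in the spirit of the abstract, can be sketched as follows; the constant weight `w` and the tie-free handling of event times are illustrative simplifications:

```python
import math

def combined_reliability(times, events, t, w=0.5):
    """Weighted average of the Kaplan-Meier and Nelson-Aalen-based
    survival estimates at time t for right-censored data.
    events[i] is 1 for an observed failure, 0 for censoring."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    km = 1.0       # Kaplan-Meier product
    cum_haz = 0.0  # Nelson-Aalen cumulative hazard
    for time, event in data:
        if time > t:
            break
        if event:
            km *= 1.0 - 1.0 / at_risk
            cum_haz += 1.0 / at_risk
        at_risk -= 1
    nelson = math.exp(-cum_haz)  # Nelson-type survival estimate
    return w * km + (1.0 - w) * nelson
```

Setting `w=1` recovers the Kaplan-Meier estimate and `w=0` the Nelson-type estimate; the combined value always lies between them, since the Nelson-type estimate exceeds the Kaplan-Meier product at every point.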

Testing of a discontinuity point in the log-variance function based on likelihood (가능도함수를 이용한 로그분산함수의 불연속점 검정)

  • Huh, Jib
    • Journal of the Korean Data and Information Science Society
    • /
    • v.20 no.1
    • /
    • pp.1-9
    • /
    • 2009
  • Consider a regression model whose variance function has a discontinuity (change) point at an unknown location. Yu and Jones (2004) proposed a local polynomial fit for estimating the log-variance function, which avoids violating the positivity of the variance. Using the local polynomial fit, Huh (2008) estimated the discontinuity point of the log-variance function. We propose a test for the existence of a discontinuity point in the log-variance function based on the jump size estimated in Huh (2008). The proposed method is based on the asymptotic distribution of the estimated jump size. Numerical studies demonstrate the performance of the method.
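
A minimal sketch of such a test: estimate the jump in the log-variance at a candidate point and standardize it by its estimated standard error, relying on approximate normality. The sketch uses local constant means of log squared residuals instead of the local polynomial fit of Yu and Jones (2004), which is a deliberate simplification:

```python
import math

def logvar_jump_z(x, resid, point, h):
    """z statistic for a jump in the log-variance at `point`:
    difference of local means of log(residual^2) on either side
    within bandwidth h, divided by its estimated standard error."""
    left = [math.log(r * r) for xi, r in zip(x, resid)
            if point - h <= xi < point]
    right = [math.log(r * r) for xi, r in zip(x, resid)
             if point <= xi <= point + h]

    def mean_var(v):
        m = sum(v) / len(v)
        s2 = sum((u - m) ** 2 for u in v) / (len(v) - 1)
        return m, s2

    ml, s2l = mean_var(left)
    mr, s2r = mean_var(right)
    jump = mr - ml  # estimated jump in the log-variance
    se = math.sqrt(s2l / len(left) + s2r / len(right))
    return jump / se  # compare with a standard normal quantile
```

On data whose residual variance actually jumps at the candidate point, the statistic is large; with no jump it stays near zero.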

A Parallel Equalization Algorithm with Weighted Updating by Two Error Estimation Functions (두 오차 추정 함수에 의해 가중 갱신되는 병렬 등화 알고리즘)

  • Oh, Kil-Nam
    • Journal of the Institute of Electronics Engineers of Korea TC
    • /
    • v.49 no.7
    • /
    • pp.32-38
    • /
    • 2012
  • In this paper, a parallel equalization algorithm using two error estimation functions is proposed to eliminate the intersymbol interference of the received signal caused by multipath propagation. In the proposed algorithm, multilevel two-dimensional signals are treated as equivalent binary signals, and error signals are then estimated using a sigmoid nonlinearity, which is effective in the initial phase of equalization, and a threshold nonlinearity, which has good steady-state performance. The two errors are scaled by a weight depending on the relative accuracy of the two error estimates, and the two filters are updated differentially. As a result, the combined output of the two filters approaches the optimum: fast convergence in the initial stage of equalization and a low steady-state error level are achieved simultaneously, thanks to the smooth combination of the two operating modes. The usefulness of the proposed algorithm was verified through computer simulations and compared with the conventional method.
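
The two error estimators and their weighted combination can be sketched as below; the 4-PAM levels, the particular sigmoid soft slicer, and the fixed weight are illustrative assumptions rather than the paper's exact nonlinearities:

```python
import math

LEVELS = (-3.0, -1.0, 1.0, 3.0)  # illustrative 4-PAM levels per dimension

def threshold_error(y):
    """Hard-decision error: distance to the nearest symbol level
    (good in the steady state, when decisions are mostly correct)."""
    decision = min(LEVELS, key=lambda s: abs(y - s))
    return y - decision

def sigmoid_error(y, beta=1.0):
    """Soft error: a slicer built from a sum of sigmoids at the
    decision boundaries, smoother when errors are still large."""
    soft = -3.0 + 2.0 * sum(1.0 / (1.0 + math.exp(-beta * (y - b)))
                            for b in (-2.0, 0.0, 2.0))
    return y - soft

def weighted_error(y, w):
    """Weighted combination of the two error estimates, used to
    update the two parallel filters differentially."""
    return w * sigmoid_error(y) + (1.0 - w) * threshold_error(y)
```

In a full equalizer, `w` would evolve with the relative accuracy of the two estimators, shifting the update from the sigmoid mode toward the threshold mode as the eye opens.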

Parameter Estimation and Confidence Limits for the Log-Gumbel Distribution (대수(對數)-Gumbel 확률분포함수(確率分布函數)의 매개변수(媒介變數) 추정(推定)과 신뢰한계(信賴限界) 유도(誘導))

  • Heo, Jun Haeng
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.13 no.4
    • /
    • pp.151-161
    • /
    • 1993
  • The log-Gumbel distribution in real space is defined by transforming the conventional log-Gumbel distribution in log space. For this model, parameter estimation techniques based on the methods of moments, maximum likelihood, and probability weighted moments are applied. The asymptotic variances of the quantile estimators for each estimation method are derived to find the confidence limits for a given return period. Finally, the log-Gumbel model is applied to actual flood data to estimate the parameters, quantiles, and confidence limits.
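
Of the three estimation methods named, the method of moments is the easiest to sketch: fit a Gumbel distribution to the log-transformed data and map quantiles back to real space. This is a generic sketch of that route, not the paper's derivation:

```python
import math

EULER_GAMMA = 0.5772156649015329

def gumbel_moments_fit(log_data):
    """Method-of-moments Gumbel fit in log space: the Gumbel mean is
    mu + gamma*beta and its variance is (pi*beta)^2 / 6."""
    n = len(log_data)
    mean = sum(log_data) / n
    var = sum((y - mean) ** 2 for y in log_data) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi  # scale
    mu = mean - EULER_GAMMA * beta         # location
    return mu, beta

def log_gumbel_quantile(mu, beta, return_period):
    """T-year quantile of the log-Gumbel distribution in real space."""
    p = 1.0 - 1.0 / return_period
    y = mu - beta * math.log(-math.log(p))  # Gumbel quantile in log space
    return math.exp(y)                      # transform back to real space
```

The confidence limits in the paper come from the asymptotic variances of these quantile estimators; the sketch stops at the point estimates.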

Analysis on Optimality of Proportional Navigation Based on Nonlinear Formulation (비선형 운동방정식에 근거한 비례항법유도의 최적성에 관한 해석)

  • Jeon, In-Soo;Lee, Jin-Ik
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.37 no.4
    • /
    • pp.367-371
    • /
    • 2009
  • An analysis of the optimality of the proportional navigation guidance (PNG) law is presented in this paper. While most previous studies on the optimality of PNG relied on a linear formulation, this paper is based on the nonlinear formulation. The analysis shows that PNG is an optimal solution minimizing a range-weighted control energy, where the weighting function is the inverse of the $\alpha$ power of the distance to target. We show that the navigation constant N is directly related to $\alpha$. The conditions required to ensure this result are also investigated.
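
The PNG law under analysis, commanded acceleration $a = N V_c \dot\lambda$, can be sketched in a simple planar engagement; the geometry, speeds, and time step below are hypothetical, and the sketch only illustrates the law, not the optimality proof:

```python
import math

def pn_intercept(N=3.0, dt=0.001):
    """Planar pursuit with proportional navigation: the pursuer turns
    at rate a/V where a = N * Vc * lambda_dot. Returns the closest
    approach (miss distance) over a 20 s engagement."""
    mx, my, mvx, mvy = 0.0, 0.0, 300.0, 0.0        # pursuer state
    tx, ty, tvx, tvy = 2000.0, 500.0, -100.0, 0.0  # target state
    lam_prev = math.atan2(ty - my, tx - mx)        # line-of-sight angle
    miss = float("inf")
    for _ in range(int(20.0 / dt)):
        rx, ry = tx - mx, ty - my
        r = math.hypot(rx, ry)
        miss = min(miss, r)
        if r < 1.0:
            break
        lam = math.atan2(ry, rx)
        lam_dot = (lam - lam_prev) / dt            # LOS rate
        lam_prev = lam
        vc = -(rx * (tvx - mvx) + ry * (tvy - mvy)) / r  # closing velocity
        a = N * vc * lam_dot                       # PN lateral command
        vm = math.hypot(mvx, mvy)
        psi = math.atan2(mvy, mvx) + (a / vm) * dt  # constant-speed turn
        mvx, mvy = vm * math.cos(psi), vm * math.sin(psi)
        mx += mvx * dt
        my += mvy * dt
        tx += tvx * dt
        ty += tvy * dt
    return miss
```

Against this non-maneuvering target the commanded acceleration decays as the line-of-sight rate is driven to zero, which is the behavior the range-weighted-energy interpretation formalizes.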

Classification of Epilepsy Using Distance-Based Feature Selection (거리 기반의 특징 선택을 이용한 간질 분류)

  • Lee, Sang-Hong
    • Journal of Digital Convergence
    • /
    • v.12 no.8
    • /
    • pp.321-327
    • /
    • 2014
  • Feature selection is a technique that improves classification performance by using a minimal feature set, removing features that are unrelated to one another or redundant. This study proposed a new feature selection method using the distance between the centers of gravity of the bounded sums of weighted fuzzy membership functions (BSWFMs) provided by the neural network with weighted fuzzy membership functions (NEWFM), in order to improve classification performance. The distance-based feature selection builds a minimal set by removing, one at a time, the worst feature, that with the shortest distance between the centers of gravity of its BSWFMs, from the 24 initial features; the 22 remaining features give the highest performance. With these 22 features, the proposed methodology achieves a sensitivity, specificity, and accuracy of 97.7%, 99.7%, and 98.7%, respectively.
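
The greedy distance-based removal can be sketched as follows. Since the BSWFM centers of gravity come from a trained NEWFM, this sketch substitutes the distance between per-class feature means as the ranking score, an assumption made purely for illustration:

```python
def distance_based_selection(X, y, keep):
    """Rank features by the distance between the two class centroids
    (a stand-in for the BSWFM center-of-gravity distance) and keep
    the `keep` most separable features, dropping the worst one by one."""
    n_features = len(X[0])

    def score(j):
        a = [row[j] for row, label in zip(X, y) if label == 0]
        b = [row[j] for row, label in zip(X, y) if label == 1]
        return abs(sum(a) / len(a) - sum(b) / len(b))

    ranked = sorted(range(n_features), key=score, reverse=True)
    return sorted(ranked[:keep])
```

In the paper the same greedy loop runs with the fuzzy-membership distance, shrinking 24 features to the 22 that classify best.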