• Title/Summary/Keyword: Error sum

The Effect of Uncertainty in Roughness and Discharge on Flood Inundation Mapping (조도계수와 유량의 불확실성이 홍수범람도 구축에 미치는 영향)

  • Jung, Younghun;Yeo, Kyu Dong;Kim, Soo Young;Lee, Seung Oh
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.33 no.3
    • /
    • pp.937-945
    • /
    • 2013
  • The accuracy of flood inundation maps is determined by the uncertainty propagated from all variables involved in the overall process, including input data, model parameters, and modeling approaches. This study investigated the uncertainty arising from two key variables (flow condition and Manning's n) in flood inundation mapping for the Missouri River near Boonville, Missouri, USA. The methodology employs generalized likelihood uncertainty estimation (GLUE) to quantify the uncertainty bounds of the flood inundation area. Uncertainty bounds in the GLUE procedure are evaluated with two likelihood functions, the inverse of the sum of absolute errors (1/SAE) and the inverse of the sum of squared errors (1/SSE), computed from an observed water surface elevation and the simulated water surface elevations. The results from GLUE show that the likelihood measure based on 1/SSE is more sensitive to the observation than the one based on 1/SAE, and that the uncertainty propagated from the two variables produces an uncertainty bound of about 2% in the inundation area relative to the observed inundation. The results obtained from this study are expected to be useful for characterizing flood behavior.
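
As a minimal illustration of the GLUE step described above, the sketch below computes the two likelihood measures (1/SAE and 1/SSE) from Monte Carlo simulations and derives likelihood-weighted bounds on an output such as inundation area. The array shapes and the 5%-95% quantile bounds are assumptions for illustration, not the paper's exact settings.

```python
import numpy as np

def glue_likelihoods(observed, simulated):
    """Compute the two GLUE likelihood measures: the inverse sum of
    absolute errors (1/SAE) and the inverse sum of squared errors
    (1/SSE) between observed and simulated water-surface elevations.

    observed:  (n_points,) observed water-surface elevations
    simulated: (n_samples, n_points) simulated elevations, one row per
               Monte Carlo sample of (discharge, Manning's n)
    """
    err = simulated - observed                 # residuals per sample
    sae = np.abs(err).sum(axis=1)              # sum of absolute errors
    sse = (err ** 2).sum(axis=1)               # sum of squared errors
    return 1.0 / sae, 1.0 / sse

def uncertainty_bounds(values, likelihood, lo=0.05, hi=0.95):
    """Likelihood-weighted quantile bounds (here 5%-95%) of a model
    output such as inundation area, in the GLUE manner."""
    order = np.argsort(values)
    v, w = values[order], likelihood[order]
    cdf = np.cumsum(w) / w.sum()               # weighted empirical CDF
    return np.interp([lo, hi], cdf, v)
```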

An analysis of optimal design conditions of LDPC decoder for IEEE 802.11n Wireless LAN Standard (IEEE 802.11n 무선랜 표준용 LDPC 복호기의 최적 설계조건 분석)

  • Jung, Sang-Hyeok;Na, Young-Heon;Shin, Kyung-Wook
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.14 no.4
    • /
    • pp.939-947
    • /
    • 2010
  • The LDPC (Low-Density Parity-Check) code, one of the channel coding schemes in the IEEE 802.11n wireless LAN standard, has superior error-correcting capability. Since the hardware complexity of an LDPC decoder is high, it is very important to consider the trade-off between hardware complexity and decoding performance. In this paper, the effects of LLR (Log-Likelihood Ratio) approximation on the performance of an MSA (Min-Sum Algorithm)-based LDPC decoder are analyzed, and optimal design conditions are derived. The parity check matrix with a block length of 1,944 bits and a code rate of 1/2 from the IEEE 802.11n WLAN standard is used. At $BER=10^{-3}$, the $E_b/N_o$ difference between LLR bit-widths (6,4) and (7,5) is 0.62 dB, and the $E_b/N_o$ difference between 6 and 7 iteration cycles is 0.3 dB. The simulation results show that optimal BER performance is achieved with an LLR bit-width of (7,5) and 7 decoding iterations.
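
Below is a sketch of the two ingredients the analysis above varies: the min-sum check-node update and saturating fixed-point LLR quantization. Interpreting the bit-width pairs such as (7,5) as (total bits, fractional bits) is an assumption made here purely for illustration; the paper's exact fixed-point convention may differ.

```python
import numpy as np

def quantize_llr(llr, total_bits, frac_bits):
    """Saturating fixed-point quantization of LLR values."""
    scale = 2 ** frac_bits
    max_q = 2 ** (total_bits - 1) - 1          # symmetric saturation level
    q = np.clip(np.round(np.asarray(llr) * scale), -max_q, max_q)
    return q / scale

def min_sum_check_node(llrs):
    """Min-sum check-node update: each outgoing magnitude is the
    minimum of the *other* incoming magnitudes, and each outgoing
    sign is the product of the *other* incoming signs."""
    llrs = np.asarray(llrs, dtype=float)
    signs = np.sign(llrs); signs[signs == 0] = 1.0
    total_sign = np.prod(signs)
    mags = np.abs(llrs)
    i_min = np.argmin(mags)
    m1 = mags[i_min]                           # smallest magnitude
    m2 = np.min(np.delete(mags, i_min))        # second-smallest magnitude
    out = np.where(np.arange(llrs.size) == i_min, m2, m1)
    return total_sign * signs * out            # exclude-self sign and min
```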

Moving Object Detection Algorithm for Surveillance System (무인 감시 시스템을 위한 이동물체 검출 알고리즘)

  • Lim Kang-mo;Lee Joo-shin
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.30 no.1C
    • /
    • pp.44-53
    • /
    • 2005
  • In this paper, an improved moving object detection algorithm is proposed for stable surveillance performance under repetitive motion in a limited area and rapid illumination changes in the background scene. In the proposed algorithm, background scenes are first sampled to initialize the background image; the sampled frames are divided into blocks, and the sum of the gray-level values of the pixels in each block is calculated. For initialization, background frames are reconstructed by selecting only the maximum and minimum gray-level sums of the blocks located at the same position across adjacent frames, and the two reconstructed images serve as the background reference images. For detection, the current frame is divided into blocks and the gray-level sum of each block is calculated; if the value falls outside the gray-level range spanned by the two reference images, the block is classified as part of a moving object, otherwise as background. The evaluation shows that the error rate of the proposed method is 0.01% to 20.33% lower than that of the existing methods, and its detection rate is 0.17% to 22.83% higher.
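
A minimal sketch of the block-based min/max background model described above, assuming a 16x16 block size and an optional tolerance margin (both are assumptions; the abstract does not state them):

```python
import numpy as np

def block_sums(frame, bs=16):
    """Sum of gray levels in each bs x bs block of a 2-D frame."""
    h, w = frame.shape
    f = frame[:h - h % bs, :w - w % bs].astype(np.int64)
    return f.reshape(h // bs, bs, w // bs, bs).sum(axis=(1, 3))

def build_references(background_frames, bs=16):
    """Per-block min and max of the block sums over the sampled
    background frames -- the two reference images described above."""
    sums = np.stack([block_sums(f, bs) for f in background_frames])
    return sums.min(axis=0), sums.max(axis=0)

def detect_moving_blocks(frame, ref_min, ref_max, bs=16, margin=0):
    """Flag a block as 'moving object' when its gray-level sum falls
    outside the [min, max] range of the reference images; `margin`
    is an assumed tolerance for robustness."""
    s = block_sums(frame, bs)
    return (s < ref_min - margin) | (s > ref_max + margin)
```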

Analysis of Communication Performance According to Detection Sequence of MMSE Soft Decision Interference Cancellation Scheme for MIMO System (다중 입출력 시스템 MMSE 연판정 간섭 제거 기법의 검출 순서에 따른 통신 성능 분석)

  • Lee, Hee-Kwon;Kim, Deok-Chan;Kim, Tae-Hyeong;Kim, Yong-Kab
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.12 no.6
    • /
    • pp.636-642
    • /
    • 2019
  • In this paper, we analyzed the BER (Bit Error Rate) performance of MMSE (Minimum Mean Square Error)-based soft decision interference cancellation according to the detection order. Four ordering methods are considered: antenna index order, the order of the absolute values of the channel elements, the order of the sums of absolute values of the channel elements, and SNR (Signal-to-Noise Ratio) order. BER performance for each scheme was measured and analyzed. The simulation uses 16-QAM (Quadrature Amplitude Modulation) in an uncoded M×M multiple-input multiple-output system over independent Rayleigh fading channels. The simulation results show a performance gain of about 1.5 dB for the SNR-based detection order at M=4, and about 3.5 dB at M=8 and M=16. The results confirm that ordering the detection of the received signals suppresses the interference and error propagation that occur during the detection process.
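
The sketch below illustrates two of the ordering criteria compared above: post-MMSE SINR order and the order of column sums of channel-element magnitudes. This is a one-shot ordering sketch; a full soft-decision SIC receiver would re-derive the MMSE filter after each cancellation.

```python
import numpy as np

def mmse_snr_order(H, noise_var):
    """Rank transmit streams by post-MMSE SINR, strongest first.
    H: (Nr, Nt) complex channel matrix."""
    Nt = H.shape[1]
    G = H.conj().T @ H + noise_var * np.eye(Nt)
    Ginv = np.linalg.inv(G)
    # Post-detection SINR of stream k: 1/(noise_var * Ginv[k,k]) - 1
    sinr = 1.0 / (noise_var * np.real(np.diag(Ginv))) - 1.0
    return np.argsort(sinr)[::-1]              # detect strongest first

def abs_sum_order(H):
    """Alternative criterion: order streams by the sum of absolute
    values of their channel elements (column magnitude)."""
    return np.argsort(np.abs(H).sum(axis=0))[::-1]
```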

A Study on Sample Frequency Channel Selection of Near-Field Receiving Measurement for the Active Phased Array Antenna for Mono-Pulse Accuracy (모노펄스 정확도를 위한 능동배열위상레이다의 근접전계 수신시험 표본 주파수 채널 선택에 대한 연구)

  • Kwon, Yong-Wook;Yoon, Jae-Bok;Yoo, Woo-Sung;Jang, Heon-Soon;Kim, Do-Yeol
    • The Journal of Korean Institute of Electromagnetic Engineering and Science
    • /
    • v.28 no.4
    • /
    • pp.318-327
    • /
    • 2017
  • Near-field receiving measurement is essential for forming beam patterns and verifying the performance of an active phased array antenna system. Compensation values for the mono-pulse function can also be obtained through the near-field receiving test; however, if the radar has many frequency channels, testing all of them takes a long time and considerable effort. It is therefore desirable to measure only selected frequency channels and calculate the values for the remaining channels, improving efficiency in development and manufacture. In that case, the phase variations of the sum and delta channels must be checked. The phase measurement exhibits nonlinear behavior because of the wrapping effect. Radars generally have similar path lengths in the sum and delta channels, but if there is an electrical length gap between the two channels, the phase-wrapping effect can introduce errors. In this paper, the interpolation error caused by such an electrical length gap is examined, and an effective frequency channel selection method that avoids the wrapping effect is introduced.
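
A minimal sketch of the interpolation step discussed above: unwrap the phase measured at the sampled channels, then interpolate to the remaining channels. The back-of-envelope bound shows why an electrical length gap constrains the channel spacing; the 0.5 m gap is an assumed illustrative value.

```python
import numpy as np

def interpolate_channel_phase(f_sampled, phase_sampled, f_all):
    """Interpolate measured phase from sampled frequency channels to
    all channels. Unwrapping removes 2*pi jumps first; if the sampled
    channels are spaced too widely relative to the phase slope, the
    slope aliases and unwrapping fails -- the wrapping-effect error
    discussed above."""
    unwrapped = np.unwrap(phase_sampled)      # remove 2*pi discontinuities
    return np.interp(f_all, f_sampled, unwrapped)

# An electrical length gap dL between sum and delta channels adds a
# phase slope of 2*pi*dL/c per Hz to their phase difference, so the
# sampled-channel spacing must keep the phase step below pi:
c = 3.0e8                                     # speed of light (m/s)
dL = 0.5                                      # assumed gap (m), illustrative
max_spacing_hz = c / (2 * dL)                 # here ~300 MHz
```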

A Study on Methods for Optimizing the Supply of Oriental Medical Doctors (OMD) (한의사인력(韓醫師人力) 공급(供給)의 적정화방안(適定化方案) 연구(硏究))

  • Lee, Jong-Soo
    • The Journal of Korean Medicine
    • /
    • v.19 no.1
    • /
    • pp.299-326
    • /
    • 1998
  • 1. Comparison of demand and supply. A. Assumptions of the demand and supply estimates: before comparing the estimation results examined previously, we briefly restate the assumptions used. 1) Supply: the average rate at which graduates apply for the state examination, $\alpha = 1.03109$; the pass rate of state-examination applicants, $\beta = 0.97091$; age-specific mortality taken from Bureau of Statistics estimates; an emigration rate of 0%; retirement age not considered; army doctors not treated separately but counted among employed oriental medicine doctors; and a 1995 baseline of 8,195 surviving oriental medicine doctors, 7,419 of them employed. 2) Demand (derived-demand method): the average daily treatment volume follows Medical Insurance Federation data of 16.6 insured patients; counting non-covered patients, the practical volume is taken as 22 persons, since 21.94 persons (found by actual examination) convert to covered care in the coming demand. The proper daily treatment volume is 35 patients for a 7-hour day or 28 patients for a 5.5-hour day; yearly treatment days are 229, 255, or 269; the growth rate of hospital-visit days covers 1996-1998; and the insurance application rate averages 71.51% per year among the investigated patients. B. Comparison of the aggregate results. 1) Supply: Table IV-1 below shows the estimated future number of oriental medicine doctors.
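
A simplified sketch of the supply-side projection implied by the stated rates: each year's graduates enter practice at the application rate $\alpha$ times the pass rate $\beta$. The annual number of graduates is an assumed placeholder, and mortality and retirement are omitted here (the paper draws mortality from national statistics).

```python
# Supply-side projection under the stated assumptions; mortality and
# retirement are ignored in this simplified sketch.
ALPHA = 1.03109   # average state-exam application rate of graduates
BETA = 0.97091    # pass rate of state-exam applicants

def project_supply(base_stock, graduates_per_year, years):
    """Project the number of oriental medicine doctors forward."""
    stock = float(base_stock)
    for _ in range(years):
        stock += graduates_per_year * ALPHA * BETA
    return stock

# From the 1995 base of 8,195 surviving doctors; the 750 annual
# graduates figure is a placeholder assumption, not from the paper.
print(round(project_supply(8195, 750, 10)))
```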

Accuracy Comparisons between Traditional Adjustment and Least Square Method (최소제곱법을 적용한 지적도근점측량 계산의 정확도 분석)

  • Lee, Jong-Min;Jung, Wan-Suk;Lee, Sa-Hyung
    • Journal of Cadastre & Land InformatiX
    • /
    • v.45 no.2
    • /
    • pp.117-130
    • /
    • 2015
  • A least squares method for adjusting a horizontal network minimizes the sum of the squares of the errors, a condition grounded in probability theory. This research compared the accuracy of third-order cadastral control points adjusted by the traditional method and by least squares against Network-RTK results. The test results showed that the least squares method distributes closure error more evenly than the traditional method; the mean errors of the least squares and traditional adjustment methods were 2.7 cm and 2.2 cm, respectively. In addition, blunders in angle observations can be detected by comparing the position errors calculated from forward and backward initial coordinates. A distance blunder, however, cannot be localized to a specific observation line, because the distance error propagates across several observation lines with similar directions.
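
A minimal sketch of the adjustment criterion described above: solve the linearized observation equations by the normal equations, minimizing the (weighted) sum of squared residuals. The matrix names follow common adjustment-computation notation and are illustrative.

```python
import numpy as np

def least_squares_adjust(A, l, P=None):
    """Solve the linearized observation equations v = A @ x - l by
    minimizing v^T P v (the weighted sum of squared residuals).
    A: design matrix, l: misclosure vector, P: weight matrix
    (identity if omitted)."""
    if P is None:
        P = np.eye(len(l))
    N = A.T @ P @ A                            # normal matrix
    x = np.linalg.solve(N, A.T @ P @ l)        # adjusted parameters
    v = A @ x - l                              # residuals
    return x, v
```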

Quantitative Evaluation of a Noninvasive Myocardial Perfusion Model with Cardiac Motion Correction in Myocardial Contrast Echocardiography (심근조영심초음파에서 심장의 움직임을 보정한 비침습적 심근관류모델의 정량적 평가)

  • 이재훈;김희중;정남식;임세중;김기황
    • Proceedings of the Korean Society of Medical Physics Conference
    • /
    • 2003.09a
    • /
    • pp.49-49
    • /
    • 2003
  • Purpose: Echocardiography is noninvasive and can therefore be repeated to track the course of heart disease accurately, making it clinically very useful for assessing treatment effect and determining the timing of surgery. In real-time myocardial contrast echocardiography, time-intensity analysis is performed region by region, so the intensity of consecutively located regions of interest is affected by cardiac motion. For optimal fitting of the time-intensity curve, we propose a stable measurement method that corrects the conventional model by incorporating periodic cardiac-motion parameters. Methods: To capture the characteristic information produced by cardiac motion, a sinusoidal function of time constructed from the given heart rate is added to the exponential function presented in the literature: $C(t) = A[1 - e^{-\beta t}] + D\sin(2\pi f t + \theta)$, where $C(t)$ is the video intensity; $A$ is the plateau video intensity (blood volume); $\beta$ is the capillary blood velocity (rate constant of the rise in video intensity); $t$ is the pulsing interval (ms); $D$ is the displacement due to the periodic variation of the curve (the motion field estimated from the ejection point for the systole/diastole ratio); $f$ is the heart rate; $\theta$ accounts for transit time; and $A \times \beta$ is the myocardial blood flow. Experiments on coronary perfusion data were performed with video intensity versus pulsing interval, and the results were evaluated with the sum of squares due to error, R square, and root mean squared error. Results: The model described the periodic cardiac motion and the displacement from the ejection point well, and the measured points on the curve were successfully mapped according to the predicted cardiac motion. Moreover, the corrected model showed a marked improvement in goodness of fit. Conclusion: The proposed approach is independent of changes in the cardiac motion field in each measurement and enables stable measurement of myocardial perfusion unaffected by the measurement time point. By fitting curves with a model that incorporates cardiac-motion parameters, quantitative perfusion information can be obtained more accurately, which is expected to enable clinical use.
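
A minimal sketch of fitting the motion-corrected replenishment model above with SciPy; the sample data, noise level, and initial parameter guesses are synthetic assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def perfusion_model(t, A, beta, D, f, theta):
    """Motion-corrected replenishment model described above:
    C(t) = A*(1 - exp(-beta*t)) + D*sin(2*pi*f*t + theta).
    A*beta estimates myocardial blood flow."""
    return A * (1 - np.exp(-beta * t)) + D * np.sin(2 * np.pi * f * t + theta)

# Illustrative fit to synthetic (pulsing interval, intensity) samples.
t = np.linspace(0, 10, 50)                      # pulsing intervals (assumed units)
intensity = perfusion_model(t, 20, 0.8, 2, 1.2, 0.3)
intensity += np.random.normal(0, 0.5, t.size)   # synthetic measurement noise
p0 = [15, 0.5, 1, 1.0, 0.0]                     # assumed initial guess
popt, _ = curve_fit(perfusion_model, t, intensity, p0=p0)
A_fit, beta_fit = popt[0], popt[1]
print("estimated flow A*beta =", A_fit * beta_fit)
```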

A Univariate Loss Function Approach to Multiple Response Surface Optimization: An Interactive Procedure-Based Weight Determination (다중반응표면 최적화를 위한 단변량 손실함수법: 대화식 절차 기반의 가중치 결정)

  • Jeong, In-Jun
    • Knowledge Management Research
    • /
    • v.21 no.1
    • /
    • pp.27-40
    • /
    • 2020
  • Response surface methodology (RSM) empirically studies the relationship between a response variable and input variables in the product or process development phase. The ultimate goal of RSM is to find an optimal condition of the input variables that optimizes (maximizes or minimizes) the response variable. RSM can be seen as a knowledge management tool in terms of creating and utilizing data, information, and knowledge about product production and service operations. In product or process development, most real-world problems involve the simultaneous consideration of multiple response variables; this is called a multiple response surface (MRS) problem. Various approaches have been proposed for MRS optimization, which can be classified into the loss function approach, priority-based approach, desirability function approach, process capability approach, and probability-based approach. The loss function approach, in particular, divides broadly into univariate and multivariate approaches. This paper focuses on the univariate approach, which first obtains the mean square error (MSE) for each individual response variable, then aggregates the MSEs into a single objective function, commonly using the weighted sum or the Tchebycheff metric, and finally finds an optimal condition of the input variables that minimizes the objective function. When aggregating, the relative weights on the MSEs should be taken into account; however, there are few studies on how to determine the weights systematically. In this study, we propose an interactive procedure that determines the weights by incorporating the decision maker's preferences. The proposed method is illustrated with the 'colloidal gas aphrons' problem, a typical MRS problem. We also discuss extending the proposed method to the weighted MSE (WMSE).
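
A minimal sketch of the univariate aggregation step described above: per-response MSE as squared bias plus variance, combined by a weighted sum or the Tchebycheff metric. The function names and example numbers are illustrative assumptions, not from the paper.

```python
import numpy as np

def mse(y_mean, y_std, target):
    """MSE of an estimated response at a candidate input setting:
    squared bias from the target value plus the variance term."""
    return (y_mean - target) ** 2 + y_std ** 2

def weighted_sum(mses, weights):
    """Weighted-sum aggregation of the per-response MSEs."""
    return float(np.dot(weights, mses))

def tchebycheff(mses, weights):
    """Tchebycheff metric: the largest weighted MSE dominates,
    which favors balanced compromises across responses."""
    return float(np.max(np.asarray(weights) * np.asarray(mses)))

# e.g. two responses with illustrative estimates, targets, and weights:
m = [mse(52.1, 1.3, 50.0), mse(9.4, 0.8, 10.0)]
print(weighted_sum(m, [0.7, 0.3]), tchebycheff(m, [0.7, 0.3]))
```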

Understanding of F2 Metrics Used to Evaluate Similarity of Dissolution Profiles (유사인자를 사용하여 용출양상 유사성을 비교하는 방법에 대한 고찰)

  • Cho, Mi-Hyun;Kim, Jeong-Ho;Lee, Hyeon-Tae;Sah, Hong-Kee
    • Journal of Pharmaceutical Investigation
    • /
    • v.33 no.3
    • /
    • pp.245-253
    • /
    • 2003
  • Dissolution profile comparisons can be made using the similarity factor $f_2$, a logarithmic reciprocal square-root transformation of the sum of squared errors of the % dissolution differences between two profiles at several time points. It indicates the degree of similarity between the two profiles: an $f_2$ value between 50 and 100 suggests the similarity/equivalence of the two dissolution curves being compared. The objective of this report was to examine the $f_2$ metric carefully and in detail. It was shown that $f_2$ values exceeded 50 when the relative differences in % dissolved between two products were less than 15% at all time points. The similarity factor was also greater than 50 when the absolute % dissolution differences were below 10% at all time points. Interestingly, the $f_2$ value changed with the number of time points selected for the calculation; in particular, $f_2$ tended to be higher when the calculation included many time points at which % dissolved had reached a plateau. Finally, since the similarity factor is a sample statistic, it is impossible to infer type I/II errors or sampling error from it. Despite certain limitations inherent in the $f_2$ metric, it was easy and convenient for evaluating how similar two dissolution profiles were.
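
The verbal description above corresponds to the standard similarity-factor formula,

$$ f_2 = 50 \cdot \log_{10}\left\{\left[1 + \frac{1}{n}\sum_{t=1}^{n}\left(R_t - T_t\right)^2\right]^{-0.5} \times 100\right\} $$

where $R_t$ and $T_t$ are the cumulative % dissolved of the reference and test products at time point $t$, and $n$ is the number of time points. When the two profiles are identical, $f_2 = 100$; an average point-wise difference of about 10% yields $f_2 \approx 50$.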

