• Title/Summary/Keyword: Minimum Error

Optimal Monitoring Frequency Estimation Using Confidence Intervals for the Temporal Model of a Zooplankton Species Number Based on Operational Taxonomic Units at the Tongyoung Marine Science Station

  • Cho, Hong-Yeon;Kim, Sung;Lee, Youn-Ho;Jung, Gila;Kim, Choong-Gon;Jeong, Dageum;Lee, Yucheol;Kang, Mee-Hye;Kim, Hana;Choi, Hae-Young;Oh, Jina;Myong, Jung-Goo;Choi, Hee-Jung
    • Ocean and Polar Research / v.39 no.1 / pp.13-21 / 2017
  • Temporal changes in the number of zooplankton species are important information for understanding the basic characteristics and species diversity of marine ecosystems. The aim of the present study was to estimate the optimal monitoring frequency (OMF) required to guarantee and predict the minimum number of species occurrences in marine ecosystem studies. The OMF is estimated from the temporal record of zooplankton species numbers obtained by bi-weekly monitoring, with species counted as operational taxonomic units, in the Tongyoung coastal sea. The optimal model comprises two terms, a constant (optimal mean) and a cosine function with a one-year period. The confidence interval (CI) range of the model as a function of monitoring frequency was estimated using a bootstrap method, and this CI range was used as the reference for estimating the optimal monitoring frequency. In general, the minimum monitoring frequency (times per year) depends directly on the target (acceptable) estimation error: as the acceptable error (the CI range) increases, the required monitoring frequency decreases, because a large acceptable error implies a rougher estimate. If the acceptable error in the number of zooplankton species is set to 3 species, the minimum monitoring frequency is 24 times per year. The residual distribution of the model followed a normal distribution. Because the model provides the estimation error of the zooplankton species numbers as a function of monitoring frequency, it can be applied to estimate the minimum monitoring frequency that satisfies given target error bounds.
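
The abstract describes a model consisting of a constant plus a one-year-period cosine term, with a bootstrap used to relate confidence-interval width to monitoring frequency. The following is a minimal sketch of that idea, not the authors' code: the synthetic bi-weekly record, the subsampling scheme, and all names are illustrative assumptions.

```python
import numpy as np

def fit_annual_model(t_days, counts):
    """Least-squares fit of counts ~ a + b*cos(2*pi*t/365) + c*sin(2*pi*t/365)."""
    w = 2.0 * np.pi * np.asarray(t_days) / 365.0
    X = np.column_stack([np.ones_like(w), np.cos(w), np.sin(w)])
    coef, *_ = np.linalg.lstsq(X, counts, rcond=None)
    return coef

def ci_width_vs_frequency(t_days, counts, freq_per_year, n_boot=1000, seed=0):
    """Subsampling bootstrap: refit the annual model on surveys thinned to the
    given frequency and return the width of the 95% interval of the mean term."""
    rng = np.random.default_rng(seed)
    n_obs = max(3, int(freq_per_year))
    means = []
    for _ in range(n_boot):
        idx = np.sort(rng.choice(len(t_days), size=n_obs, replace=False))
        means.append(fit_annual_model(t_days[idx], counts[idx])[0])
    lo, hi = np.percentile(means, [2.5, 97.5])
    return hi - lo

# Hypothetical bi-weekly record of species numbers over one year (26 surveys).
rng = np.random.default_rng(1)
t = np.arange(0.0, 365.0, 14.0)
y = 20 + 5 * np.cos(2 * np.pi * t / 365.0) + rng.normal(0, 3, t.size)
for f in (6, 12, 24):
    print(f"{f:2d} surveys/year -> CI width {ci_width_vs_frequency(t, y, f):.2f}")
```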

Efficient Link Adaptation Scheme using Precoding for LTE-Advanced Uplink MIMO (LTE-Advanced에서 프리코딩에 의한 효율적인 상향링크 적응 방식)

  • Park, Ok-Sun;Ahn, Jae-Min
    • The Journal of Korean Institute of Communications and Information Sciences / v.36 no.2B / pp.159-167 / 2011
  • The LTE-Advanced system requires uplink multi-antenna transmission in order to achieve a peak spectral efficiency of 15 bps/Hz. In this paper, an uplink MIMO system model for LTE-Advanced is proposed, and an efficient link adaptation scheme using precoding is considered that provides error-rate reduction and system-capacity enhancement. In particular, the proposed scheme determines a transmission rank by selecting the optimal wideband precoding matrix, based on the signal-to-interference-and-noise ratio (SINR) derived for minimum mean squared error (MMSE) receivers in a 2×4 multiple-input multiple-output (MIMO) configuration. The proposed scheme is verified by simulation with a practical MIMO channel model. The average block error rate (BLER) results show that the gain of the proposed rank-adapted transmission over full-rank transmission is evident particularly for lower modulation and coding schemes (MCS) and high mobility, that is, in severe channel-fading environments.
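
The scheme described selects a wideband precoding matrix, and hence a transmission rank, from the SINR derived for an MMSE receiver. Below is a toy illustration of that selection step for a 2×4 channel; the two-entry codebook, the sum-rate selection metric, and the channel are assumptions for illustration, not the LTE-A codebook or the paper's exact criterion.

```python
import numpy as np

def mmse_sinr_per_layer(H, W, noise_var):
    """Per-layer post-detection SINR of an MMSE receiver for the
    effective channel Heff = H @ W (unit-power layers, white noise)."""
    Heff = H @ W
    n_layers = Heff.shape[1]
    E = np.linalg.inv(np.eye(n_layers) + Heff.conj().T @ Heff / noise_var)
    return 1.0 / np.real(np.diag(E)) - 1.0

def select_precoder(H, codebook, noise_var):
    """Pick the codebook entry (and its rank) maximizing sum(log2(1 + SINR_k))."""
    best = None
    for idx, W in enumerate(codebook):
        sinr = mmse_sinr_per_layer(H, W, noise_var)
        rate = np.sum(np.log2(1.0 + sinr))
        if best is None or rate > best[0]:
            best = (rate, idx, W.shape[1])
    return best  # (rate, codebook index, rank)

# Hypothetical 2x4 uplink channel (2 Tx antennas, 4 Rx antennas).
rng = np.random.default_rng(0)
H = (rng.normal(size=(4, 2)) + 1j * rng.normal(size=(4, 2))) / np.sqrt(2)
codebook = [
    np.array([[1.0], [0.0]]),        # rank-1 precoder
    np.eye(2) / np.sqrt(2),          # rank-2 precoder
]
print(select_precoder(H, codebook, noise_var=0.5))
```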

The Analysis of Changma Structure using Radiosonde Observational Data from KEOP-2007: Part I. the Assessment of the Radiosonde Data (KEOP-2007 라디오존데 관측자료를 이용한 장마 특성 분석: Part I. 라디오존데 관측 자료 평가 분석)

  • Kim, Ki-Hoon;Kim, Yeon-Hee;Chang, Dong-Eon
    • Atmosphere / v.19 no.2 / pp.213-226 / 2009
  • In order to investigate the characteristics of Changma over the Korean peninsula, the KEOP-2007 IOP (Intensive Observing Period) was conducted from 15 June 2007 to 15 July 2007. KEOP-2007 IOP comprised high spatial- and temporal-resolution radiosonde observations (RAOB) from three special stations (Munsan, Haenam, and Ieodo) of the National Institute of Meteorological Research, five operational stations (Sokcho, Baengnyeongdo, Pohang, Heuksando, and Gosan) of the Korea Meteorological Administration (KMA), and two operational stations (Osan and Gwangju) of the Korean Air Force (KAF), using four different types of radiosonde sensors. The error statistics of each radiosonde sensor were investigated using a quality control check. The minimum and maximum error frequencies appear for the RS92-SGP and RS1524L sensors, respectively. The error frequency of DFM-06 tends to increase below 200 hPa, whereas RS80-15L and RS1524L show the opposite tendency; in particular, the error frequency of RS1524L increases rapidly above 200 hPa. The systematic biases of the radiosondes are warm in temperature and dry in relative humidity compared with ECMWF (European Centre for Medium-Range Weather Forecasts) analysis data and GPS precipitable water vapor. The maximum and minimum systematic biases appear for the DFM-06 and RS92-SGP sensors in temperature, and for the RS80-15L and DFM-06 sensors in relative humidity. The systematic warm and dry biases of all sensors are larger during daytime than nighttime because the air temperature around the sensor increases due to solar heating during the day. The systematic biases of the radiosondes are affected by the sensor type and the height of the sun, whereas the random errors are more strongly correlated with the moisture conditions at each observation station.
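
The error analysis described here amounts to computing, for each sensor type, the systematic bias (mean difference from a reference such as the ECMWF analysis) and the random error (spread of that difference). A small sketch of that bookkeeping follows; the arrays, sensor labels, and values are purely illustrative, not the KEOP-2007 data.

```python
import numpy as np

def bias_and_random_error(obs, ref):
    """Systematic bias = mean(obs - ref); random error = std of (obs - ref)."""
    d = np.asarray(obs, dtype=float) - np.asarray(ref, dtype=float)
    return d.mean(), d.std(ddof=1)

# Hypothetical soundings: observed temperature (K) per sensor vs. a reference analysis.
reference = np.array([285.1, 280.4, 272.8, 260.2, 245.9])
soundings = {
    "RS92-SGP": np.array([285.2, 280.5, 273.0, 260.3, 246.1]),
    "DFM-06":   np.array([285.9, 281.2, 273.9, 261.0, 246.8]),
}
for sensor, obs in soundings.items():
    bias, rnd = bias_and_random_error(obs, reference)
    print(f"{sensor}: bias={bias:+.2f} K, random error={rnd:.2f} K")
```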

Fast Motion Estimation Algorithm via Minimum Error for Each Step (단계별 최소에러를 통한 고속 움직임 예측 알고리즘)

  • Kim, Jong Nam
    • Journal of the Korea Institute of Information and Communication Engineering / v.20 no.8 / pp.1531-1536 / 2016
  • In this paper, we propose a fast motion estimation algorithm, which is important to the performance of video encoding. Even though many fast motion estimation algorithms have been published because of the tremendous computational load of the full search algorithm, efforts to reduce the computation of motion estimation are still needed. We propose an algorithm that removes only unnecessary computations while keeping the prediction quality the same as that of the full search. Rather than computing the block matching error of each candidate all at once, the proposed algorithm divides the calculation into several steps and accumulates partial sums of the block error. In this way, the minimum error point can be identified early, and the calculation speed is enhanced by skipping the remaining unnecessary computations. The proposed algorithm requires fewer computations than conventional fast search algorithms while providing the same prediction quality as the full search.
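
The described speedup splits the block-matching error into partial sums and abandons a candidate as soon as its accumulated error exceeds the current minimum, which yields the same motion vector as a full search. Below is a minimal sketch of that early-termination idea; the block size, number of partial steps, SAD criterion, and synthetic frame are illustrative choices, not the paper's exact procedure.

```python
import numpy as np

def sad_with_early_exit(block, cand, best_so_far, n_steps=4):
    """Accumulate the SAD row-group by row-group; stop once it exceeds the
    current minimum, so the candidate is rejected without full computation."""
    total = 0.0
    for rows in np.array_split(np.arange(block.shape[0]), n_steps):
        total += np.abs(block[rows] - cand[rows]).sum()
        if total >= best_so_far:
            return None                      # cannot beat the current best
    return total

def motion_search(block, ref, center, radius=7):
    """Full-search motion estimation with stepwise minimum-error rejection."""
    cy, cx = center
    B = block.shape[0]
    best, best_mv = np.inf, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = cy + dy, cx + dx
            if y < 0 or x < 0 or y + B > ref.shape[0] or x + B > ref.shape[1]:
                continue
            sad = sad_with_early_exit(block, ref[y:y + B, x:x + B], best)
            if sad is not None and sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv, best

# Hypothetical 8x8 block search in a synthetic reference frame.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64)).astype(float)
blk = ref[20:28, 30:38].copy()               # true displacement is (0, 0)
print(motion_search(blk, ref, center=(20, 30)))
```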

Classification accuracy measures with minimum error rate for normal mixture (정규혼합분포에서 최소오류의 분류정확도 측도)

  • Hong, C.S.;Lin, Meihua;Hong, S.W.;Kim, G.C.
    • Journal of the Korean Data and Information Science Society / v.22 no.4 / pp.619-630 / 2011
  • In order to estimate an appropriate threshold and evaluate its performance for data mixed from two different distributions, nine well-known classification accuracy measures, such as MVD, Youden's index, the closest-to-(0,1) criterion, the amended closest-to-(0,1) criterion, SSS, the symmetry point, accuracy area, TA, and TR, are clustered into five categories on the basis of their characteristics. In credit evaluation studies, the score random variable is assumed to follow a normal mixture distribution of the default and non-default states. For various normal mixtures, optimal cut-off points are obtained for the classification measures belonging to each category, and the type I and type II error rates corresponding to these cut-off points are calculated. We then explore the cases in which these error rates are minimized. If normal mixtures can be estimated for such real data, the results of this study can be used to select the classification accuracy measure with the minimum error rate.
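
For a score that is normal within each of the default and non-default groups, a measure such as Youden's index reduces to a one-dimensional search over the cut-off, with the type I and type II error rates given by normal CDFs. The following small sketch shows that computation for one measure only; the group parameters and the grid search are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.stats import norm

def error_rates(cutoff, mu_def, sd_def, mu_non, sd_non):
    """Type I error: default classified as non-default (score above cutoff).
    Type II error: non-default classified as default (score at or below cutoff).
    Assumes defaults score lower on average than non-defaults."""
    alpha = 1.0 - norm.cdf(cutoff, mu_def, sd_def)   # P(score > c | default)
    beta = norm.cdf(cutoff, mu_non, sd_non)          # P(score <= c | non-default)
    return alpha, beta

def youden_cutoff(mu_def, sd_def, mu_non, sd_non):
    """Grid search for the cutoff maximizing Youden's index J = 1 - alpha - beta."""
    grid = np.linspace(min(mu_def, mu_non) - 4 * max(sd_def, sd_non),
                       max(mu_def, mu_non) + 4 * max(sd_def, sd_non), 10001)
    a, b = error_rates(grid, mu_def, sd_def, mu_non, sd_non)
    k = int(np.argmax(1.0 - a - b))
    return grid[k], a[k], b[k]

# Hypothetical credit-score mixture: defaults ~ N(40, 10^2), non-defaults ~ N(65, 12^2).
c, alpha, beta = youden_cutoff(40, 10, 65, 12)
print(f"cutoff={c:.1f}, type I={alpha:.3f}, type II={beta:.3f}")
```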

Scheduling Algorithm to Minimize Total Error for Imprecise On-Line Tasks

  • Song, Gi-Hyeon
    • Journal of Korea Multimedia Society / v.10 no.12 / pp.1741-1751 / 2007
  • The imprecise computation technique ensures that all time-critical tasks produce their results before their deadlines by trading off the quality of the results against the computation time requirements of the tasks. In imprecise computation, most scheduling problems that must satisfy both 0/1 constraints and timing constraints while minimizing the total error are NP-complete when the optional tasks have arbitrary processing times. Previous studies have proposed reasonable strategies for scheduling tasks with 0/1 constraints on uniprocessors and multiprocessors so as to minimize the total error, but these are all off-line algorithms. For on-line scheduling, the NORA (No Off-line tasks and on-line tasks Ready upon Arrival) algorithm can find a schedule with the minimum total error; in NORA, the EDF (Earliest Deadline First) strategy is adopted for scheduling the optional tasks. For task systems with 0/1 constraints, however, the NORA algorithm may no longer be suitable for minimizing the total error of the imprecise tasks. Therefore, in this paper, an on-line algorithm is proposed to minimize the total error for imprecise real-time task systems with 0/1 constraints. To evaluate its performance, a series of experiments was conducted. The performance comparison shows that the proposed IOSMTE (Imprecise On-line Scheduling to Minimize Total Error) algorithm outperforms the LOF (Longest Optional First) and SOF (Shortest Optional First) strategies in most cases.
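
In the imprecise-computation model each task has a mandatory part that must finish by its deadline and an optional part that, under the 0/1 constraint, is either executed in full or skipped entirely; the total error is the sum of skipped optional times. The sketch below is only a much-simplified single-processor illustration of that accounting, using a greedy EDF rule with all tasks ready at time zero, and is not the paper's IOSMTE algorithm (nor optimal in general).

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Task:
    name: str
    mandatory: float   # mandatory processing time
    optional: float    # optional processing time (0/1: run in full or skip)
    deadline: float

def mandatory_parts_feasible(start, tasks):
    """EDF feasibility of the remaining mandatory parts from time `start`
    (all tasks ready; every prefix sum must meet its deadline)."""
    t = start
    for task in tasks:
        t += task.mandatory
        if t > task.deadline:
            return False
    return True

def schedule_total_error(tasks: List[Task]):
    """Greedy EDF sketch: run each mandatory part, then keep the optional
    part only if all later mandatory parts still meet their deadlines.
    The total error is the sum of the skipped optional times."""
    tasks = sorted(tasks, key=lambda k: k.deadline)   # EDF order
    t, error, kept = 0.0, 0.0, []
    for i, task in enumerate(tasks):
        t += task.mandatory
        rest = tasks[i + 1:]
        if t + task.optional <= task.deadline and mandatory_parts_feasible(t + task.optional, rest):
            t += task.optional
            kept.append(task.name)
        else:
            error += task.optional
    return error, kept

# Hypothetical task set, all ready at time 0.
jobs = [Task("T1", 2, 3, 6), Task("T2", 1, 4, 9), Task("T3", 2, 2, 14)]
print(schedule_total_error(jobs))
```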

Minimum-Distance Decoding of Linear Block Codes with Soft-Decision (연판정에 의한 선형 블록 부호의 최소 거리 복호법)

  • 심용걸;이충웅
    • Journal of the Korean Institute of Telematics and Electronics A / v.30A no.7 / pp.12-18 / 1993
  • We propose a soft-decision decoding method for block codes. By carefully examining the results of an initial hard-decision decoding, the candidate codewords are searched for efficiently. Thus, we can reduce the decoding complexity (the number of hard-decision decodings) and lower the block error probability. Computer simulation results are presented for the (23,12) Golay code. They show that the decoding complexity is considerably reduced and that the block error probability is close to that of the maximum likelihood decoder.
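
The method described starts from a hard-decision decoding and then searches a small set of candidate codewords, keeping the one closest to the received soft values. The compact illustration below uses the (7,4) Hamming code with least-reliable-bit flipping to generate candidates; the Golay code and the paper's specific candidate-search rule are not reproduced, so this is only a sketch of the general soft-decision idea.

```python
import itertools
import numpy as np

# Systematic (7,4) Hamming code: G = [I | P], H = [P^T | I].
P = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])
H = np.hstack([P.T, np.eye(3, dtype=int)])

def hard_decode(bits):
    """Single-error-correcting syndrome decoder; returns a valid codeword."""
    s = H @ bits % 2
    c = bits.copy()
    if s.any():
        for j in range(7):                    # flip the bit whose column matches s
            if np.array_equal(H[:, j], s):
                c[j] ^= 1
                break
    return c

def soft_decode(y, n_flip=2):
    """Chase-like search: flip combinations of the n_flip least reliable bits,
    hard-decode each pattern, keep the codeword closest to y (BPSK: 0->+1, 1->-1)."""
    hard = (y < 0).astype(int)
    weak = np.argsort(np.abs(y))[:n_flip]     # least reliable positions
    best, best_d = None, np.inf
    for r in range(n_flip + 1):
        for pos in itertools.combinations(weak, r):
            trial = hard.copy()
            trial[list(pos)] ^= 1
            c = hard_decode(trial)
            d = np.sum((y - (1 - 2 * c)) ** 2)   # Euclidean distance to the codeword
            if d < best_d:
                best, best_d = c, d
    return best

# Hypothetical noisy reception of the all-zero codeword (+1 on every position).
y = np.array([0.9, 0.2, -0.1, 1.1, 0.8, -0.3, 1.0])
print(soft_decode(y))
```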

Estimation of the Lorenz Curve of the Pareto Distribution

  • Kang, Suk-Bok;Cho, Young-Suk
    • Communications for Statistical Applications and Methods / v.6 no.1 / pp.285-292 / 1999
  • In this paper we propose several estimators of the Lorenz curve of the Pareto distribution and obtain the bias and mean squared error of each estimator. We compare the proposed estimators with the uniformly minimum variance unbiased estimator (UMVUE) and the maximum likelihood estimator (MLE) in terms of the mean squared error (MSE) through Monte Carlo methods and discuss the results.
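
For the Pareto distribution with shape α > 1 the Lorenz curve is L(p) = 1 − (1 − p)^(1 − 1/α), so any estimator of α yields a plug-in estimator of L(p). The small Monte Carlo sketch below evaluates bias and MSE of the MLE plug-in only; the paper's other estimators, including the UMVUE, are not reproduced, and the sample size and parameter values are illustrative.

```python
import numpy as np

def lorenz_pareto(p, alpha):
    """Lorenz curve of the Pareto distribution with shape alpha > 1."""
    return 1.0 - (1.0 - p) ** (1.0 - 1.0 / alpha)

def mle_alpha(x, x_min):
    """Maximum likelihood estimator of the Pareto shape (known scale x_min)."""
    return len(x) / np.sum(np.log(x / x_min))

def mc_bias_mse(alpha=3.0, x_min=1.0, n=50, p=0.5, n_rep=10000, seed=0):
    """Monte Carlo bias and MSE of the plug-in estimator L(p; alpha_hat)."""
    rng = np.random.default_rng(seed)
    true_val = lorenz_pareto(p, alpha)
    est = np.empty(n_rep)
    for r in range(n_rep):
        x = x_min * (1.0 - rng.random(n)) ** (-1.0 / alpha)   # inverse-CDF sampling
        est[r] = lorenz_pareto(p, mle_alpha(x, x_min))
    return est.mean() - true_val, np.mean((est - true_val) ** 2)

bias, mse = mc_bias_mse()
print(f"bias={bias:.4f}, MSE={mse:.6f}")
```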

SIMULTANEOUS RANDOM ERROR CORRECTION AND BURST ERROR DETECTION IN LEE WEIGHT CODES

  • Jain, Sapna
    • Honam Mathematical Journal / v.30 no.1 / pp.33-45 / 2008
  • The Lee weight is more appropriate than the Hamming weight for some practical situations, as it takes into account the magnitude of each digit of the word. In this paper, we obtain a sufficient condition on the number of parity check digits for codes that correct random errors and simultaneously detect burst errors under the Lee weight.
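
The Lee weight of a digit a in Z_q is min(a, q − a), so unlike the Hamming weight it grows with the magnitude of the digit, and the Lee distance between two words is the Lee weight of their digit-wise difference. A short illustration follows; the alphabet size and the example words are chosen arbitrarily.

```python
def lee_weight(word, q):
    """Lee weight over Z_q: each digit a contributes min(a, q - a)."""
    return sum(min(a % q, q - a % q) for a in word)

def lee_distance(u, v, q):
    """Lee distance = Lee weight of the digit-wise difference mod q."""
    return lee_weight([(a - b) % q for a, b in zip(u, v)], q)

# Example over Z_5: the Hamming distance of these words is 2, the Lee distance is 3.
u, v = [1, 4, 0, 2], [1, 2, 0, 3]
print(lee_distance(u, v, 5))
```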

Speech Enhancement Using Lip Information and SFM (입술정보 및 SFM을 이용한 음성의 음질향상알고리듬)

  • Baek, Seong-Joon;Kim, Jin-Young
    • Speech Sciences / v.10 no.2 / pp.77-84 / 2003
  • In this research, we locate the beginning of speech and detect the stationary speech region using lip information. By applying a running average to the estimated speech signal in the stationary region, we reduce the musical noise inherent in the conventional MMSE (Minimum Mean Square Error) speech enhancement algorithm. In addition, the SFM (Spectral Flatness Measure) is incorporated to reduce the speech signal estimation error caused by speaking habits and missing lip information. The proposed algorithm combined with Wiener filtering shows performance superior to the conventional methods according to MOS (Mean Opinion Score) tests.
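
The spectral flatness measure referred to here is commonly defined as the ratio of the geometric mean to the arithmetic mean of the power spectrum, with values near 1 indicating a noise-like (flat) spectrum and values near 0 a tonal one. A minimal per-frame computation is sketched below; the frame length, FFT size, and test signals are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def spectral_flatness(frame, eps=1e-12):
    """SFM = geometric mean / arithmetic mean of the power spectrum (in (0, 1])."""
    power = np.abs(np.fft.rfft(frame)) ** 2 + eps
    geometric = np.exp(np.mean(np.log(power)))
    arithmetic = np.mean(power)
    return geometric / arithmetic

# White noise yields a much larger SFM than a pure tone.
rng = np.random.default_rng(0)
fs, n = 8000, 256
noise = rng.normal(size=n)
tone = np.sin(2 * np.pi * 440 * np.arange(n) / fs)
print(round(spectral_flatness(noise), 3), round(spectral_flatness(tone), 3))
```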
