• Title/Summary/Keyword: Error distribution


Consideration of Image Quality of Dithered Picture by Constrained Average Method Using Various Probability Distribution Models

  • Sato, Mitsuhiro;Hasegawa, Madoka;Kato, Shigeo
    • Proceedings of the IEEK Conference
    • /
    • 2002.07c
    • /
    • pp.1495-1498
    • /
    • 2002
  • The constrained average method is one of the dither methods that combine edge emphasis and grayscale rendition to provide legibility in textual regions and proper quality in continuous-tone regions. However, the image quality of continuous-tone regions is insufficient compared to other dither methods, such as ordered dither methods or the error diffusion method. The constrained average method uses a uniform distribution function to decide the number of lit pixels from the average intensity in a picture area. However, the actual distribution of continuous-tone regions is closer to a Laplacian or triangular distribution. In this paper, we introduce various probability distributions and the actual luminance distribution to decide the threshold value of the constrained average method in order to improve the image quality of the dithered image.
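One way to read the proposal: pick each block's binarization threshold from the quantile function of an assumed intensity model (here Laplacian) instead of a uniform one, so the lit-pixel count still tracks the block average. A hypothetical sketch, not the paper's algorithm (`laplace_ppf`, `dither_block`, and the scale `b` are our own names):

```python
import numpy as np

def laplace_ppf(p, mu, b):
    # quantile (inverse CDF) of the Laplacian distribution with mean mu, scale b
    p = np.clip(p, 1e-6, 1.0 - 1e-6)
    return np.where(p < 0.5,
                    mu + b * np.log(2.0 * p),
                    mu - b * np.log(2.0 * (1.0 - p)))

def dither_block(tile, b=24.0):
    """Binarize one grayscale block (values in 0..255): light the pixels at
    or above a Laplacian quantile chosen so that brighter blocks light
    proportionally more pixels."""
    tile = np.asarray(tile, dtype=float)
    mu = tile.mean()
    thresh = laplace_ppf(1.0 - mu / 255.0, mu, b)
    return tile >= thresh
```

A uniformly bright block is fully lit, a dark one is not lit, and a gradient block is lit in proportion to its mean.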


The Error Rate Performance of APK System in the Presence of Interference and Noise (간섭과 잡음의 존재하에서 APK 시스템의 오율 특성)

  • Chae, Jong-Won;Gong, Byeong-Ok;Jo, Seong-Jun
    • Journal of the Korean Institute of Telematics and Electronics
    • /
    • v.21 no.3
    • /
    • pp.66-72
    • /
    • 1984
  • In this paper, the error rate performance of L-level amplitude shift keying (ASK), M-ary phase shift keying (PSK), and amplitude phase keying (APK) systems has been studied in the presence of interference and noise. Using the derived error probability equations, the error rate performance of the L-level ASK and M-ary PSK systems has been evaluated in terms of the carrier-to-noise power ratio (CNR), the carrier-to-interference power ratio (CIR), and the envelope distribution of the interferer. These results are combined to obtain the error rate performance of the APK signal. Finally, the error rate performances are compared and discussed.
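For the noise-only baseline (interference omitted), L-level ASK/PAM in AWGN has the standard symbol error probability P_e = 2(L-1)/L · Q(sqrt(6·SNR/(L²-1))); a small numerical sketch of that textbook case (the paper's derived expressions additionally account for the interferer's envelope distribution):

```python
from math import erfc, sqrt

def q_func(x):
    # Gaussian tail probability Q(x) = P(N(0,1) > x)
    return 0.5 * erfc(x / sqrt(2.0))

def ask_symbol_error(L, snr_db):
    """Symbol error probability of L-level ASK (PAM) in AWGN alone:
    P_e = 2(L-1)/L * Q(sqrt(6*SNR/(L^2-1)))."""
    snr = 10.0 ** (snr_db / 10.0)
    return 2.0 * (L - 1) / L * q_func(sqrt(6.0 * snr / (L * L - 1.0)))
```

As expected, the error rate falls with CNR and rises with the number of levels at a fixed CNR.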


Genetic Parameter Estimation with Normal and Poisson Error Mixed Models for Teat Number of Swine

  • Lee, C.;Wang, C.D.
    • Asian-Australasian Journal of Animal Sciences
    • /
    • v.14 no.7
    • /
    • pp.910-914
    • /
    • 2001
  • The teat number of a sow plays an important role in weaning piglets and has been utilized in the selection of swine breeding stock. Various linear models have been employed for genetic analyses of teat number, although teat number can be considered a count trait. Theoretically, Poisson error mixed models are more appropriate for count traits than Normal error mixed models. In this study, the two models were compared by analyzing data simulated with Poisson error. Considering the mean square errors and the correlation coefficients between observed and fitted values, the Poisson generalized linear mixed model (PGLMM) fit the simulated data better than the Normal error mixed model. These two models were also applied to teat numbers collected in China from four breeds of swine (Landrace, Yorkshire, a crossbred of Landrace and Yorkshire, and a crossbred of Landrace, Yorkshire, and the Chinese indigenous Min pig). With the field data, however, the Normal error mixed model fit better for all the breeds than the PGLMM. The results from both simulated and field data indicate that the teat numbers of swine might not have a variance equal to their mean and thus might not follow a Poisson distribution.
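The equidispersion property that the conclusion questions (variance equal to mean for a Poisson trait) can be checked with the variance-to-mean ratio; a minimal sketch on simulated counts (the data and thresholds here are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

def dispersion_index(counts):
    """Variance-to-mean ratio: ~1 for Poisson counts, well below 1 for an
    underdispersed trait whose values cluster tightly around the mean."""
    counts = np.asarray(counts, dtype=float)
    return counts.var(ddof=1) / counts.mean()

poisson_like = rng.poisson(lam=14.0, size=5000)
# teat-number-like counts clustered on {13, 14, 15}
clustered = rng.integers(13, 16, size=5000)
```

A dispersion index well below 1, as for the clustered sample, is exactly the situation in which a Poisson error model is questionable.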

Delaunay mesh generation technique adaptive to the mesh density using the optimization technique (최적화 방법을 이용한 Delaunay 격자의 내부 격자밀도 적응 방법)

  • Hong J. T.;Lee S. R.;Park C. H.;Yang D. Y.
    • Proceedings of the Korean Society for Technology of Plasticity Conference
    • /
    • 2004.10a
    • /
    • pp.75-78
    • /
    • 2004
  • A mesh generation algorithm adapted to a mesh density map using the Delaunay mesh generation technique is developed. In finite element analyses of forging processes, the numerical error increases as the process goes on because of the discrete nature of the finite elements and the severe distortion of elements. Especially in regions where stresses and strains are concentrated, the numerical discretization error is highly increased. However, it is too time-consuming to use a uniformly fine mesh over the whole domain to reduce the expected numerical error. Therefore, it is necessary to construct a locally refined mesh in regions where the error is concentrated, such as at the die corner. In this study, the point insertion algorithm is used, and the mesh size is controlled by moving nodes to optimized positions according to a mesh density map constructed with a posteriori error estimation. An optimization technique is adopted to obtain good node positions, and optimized smoothing techniques are also adopted to obtain a smooth mesh distribution and to improve the element quality.
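The node-movement idea can be illustrated in one dimension: place nodes so that spacing is inversely proportional to a prescribed density map. The paper works with two-dimensional Delaunay meshes and a genuine optimization step; this equidistribution sketch and its density function are illustrative only:

```python
import numpy as np

def equidistribute(x0, x1, density, n_nodes):
    """Place n_nodes on [x0, x1] so that local spacing is inversely
    proportional to density(x): nodes sit at equal increments of the
    cumulative density, refining the mesh where density is high."""
    xs = np.linspace(x0, x1, 2000)
    cdf = np.cumsum(density(xs))
    cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])
    return np.interp(np.linspace(0.0, 1.0, n_nodes), cdf, xs)

# density peaked near x = 0.8, e.g. around a die-corner stress concentration
nodes = equidistribute(0.0, 1.0,
                       lambda x: 1.0 + 20.0 * np.exp(-200.0 * (x - 0.8) ** 2),
                       21)
spacing = np.diff(nodes)
```

The smallest element spacing ends up inside the high-density region, mimicking local refinement at the die corner.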


Sensitivity Analysis of Long Baseline System with Three Transponders (세 개의 트랜스폰더로 이루어진 장기선 위치추적장치의 민감도 해석)

  • Kim, Sea-Moon;Lee, Pan-Mook;Lee, Chong-Moo;Lim, Yong-Kon
    • Proceedings of the Korea Committee for Ocean Resources and Engineering Conference
    • /
    • 2003.05a
    • /
    • pp.27-31
    • /
    • 2003
  • Underwater acoustic navigation systems are classified into three types: ultra-short baseline (USBL), short baseline (SBL), and long baseline (LBL). Because the USBL system estimates the angle to a submersible, the estimation error becomes large when the submersible is far from the USBL transducer array mounted under a support vessel. SBL and LBL systems estimate the submersible's location more accurately because their measuring sensors are more widely distributed. LBL systems in particular are widely used for navigation in deep-ocean applications. Although the LBL system is the most accurate, it still has estimation errors because of noise, measurement error, refraction and multi-path of the acoustic signal, or wrong information about the distributed transponders. In this paper, the estimation error of the LBL system is analyzed from the viewpoint of sensitivity. It is assumed that the error exists only in the distances between the submersible and the transponders. For this purpose, the sensitivity of the estimated position with respect to the relative distances between them is analyzed. The result shows that the estimation error is small when the submersible is close to the transponders but not near the ocean bottom.
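A minimal numerical sketch of the sensitivity idea: solve for a position from three range measurements by Gauss-Newton least squares, and measure the worst-case gain from range error to position error via the smallest singular value of the range Jacobian (the 2-D geometry, beacon layout, and function names are our own illustration, not the paper's setup):

```python
import numpy as np

beacons = np.array([[0.0, 0.0], [100.0, 0.0], [50.0, 90.0]])  # transponders (m)

def locate(ranges, guess=(50.0, 30.0), iters=20):
    """Gauss-Newton least-squares position fix from three measured ranges."""
    p = np.array(guess, dtype=float)
    for _ in range(iters):
        diff = p - beacons                    # vectors transponder -> estimate
        dist = np.linalg.norm(diff, axis=1)
        J = diff / dist[:, None]              # Jacobian of ranges w.r.t. p
        p -= np.linalg.lstsq(J, dist - ranges, rcond=None)[0]
    return p

def range_error_gain(p):
    """Worst-case position change per unit range error at position p:
    the reciprocal of the smallest singular value of the range Jacobian."""
    diff = p - beacons
    J = diff / np.linalg.norm(diff, axis=1)[:, None]
    return 1.0 / np.linalg.svd(J, compute_uv=False)[-1]
```

Inside the transponder triangle the gain is near one; far outside it the line-of-sight directions become nearly parallel and the gain grows, matching the abstract's conclusion.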


Gaussian noise addition approaches for ensemble optimal interpolation implementation in a distributed hydrological model

  • Manoj Khaniya;Yasuto Tachikawa;Kodai Yamamoto;Takahiro Sayama;Sunmin Kim
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2023.05a
    • /
    • pp.25-25
    • /
    • 2023
  • The ensemble optimal interpolation (EnOI) scheme is a sub-optimal alternative to the ensemble Kalman filter (EnKF) with a reduced computational demand, making it potentially more suitable for operational applications. Since only one model is integrated forward instead of an ensemble of model realizations, online estimation of the background error covariance matrix is not possible in the EnOI scheme. In this study, we investigate two Gaussian-noise-based ensemble generation strategies to produce dynamic covariance matrices for the assimilation of water level observations into a distributed hydrological model. In the first approach, spatially correlated noise, sampled from a normal distribution with a fixed fractional error parameter (which controls its standard deviation), is added to the model forecast state vector to prepare the ensembles. In the second method, we use an adaptive error estimation technique based on innovation diagnostics to estimate this error parameter within the assimilation framework. The results from a real experiment and a set of synthetic experiments indicate that the EnOI scheme can provide better results when an optimal EnKF is not identified, but it performs worse than the ensemble filter when the true error characteristics are known. Furthermore, while the adaptive approach reduces the sensitivity to the fractional error parameter that affects the first (non-adaptive) approach, its results are usually worse at ungauged locations.
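A minimal sketch of one EnOI analysis step built on the first (non-adaptive) strategy: Gaussian noise scaled by a fractional error parameter perturbs the single forecast to form the ensemble. For brevity the noise here is spatially uncorrelated, unlike the paper's spatially correlated sampling; all names and values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

def enoi_update(x_f, H, y, obs_var, frac_err=0.1, n_ens=50):
    """One EnOI analysis step: perturb the single forecast x_f with Gaussian
    noise of standard deviation frac_err * |x_f| to form the ensemble, take
    the background covariance from those perturbations, and apply the
    Kalman update for observations y = H x + noise."""
    sig = frac_err * np.abs(x_f)
    X = x_f[None, :] + rng.normal(size=(n_ens, x_f.size)) * sig
    A = X - X.mean(axis=0)                        # ensemble anomalies
    B = A.T @ A / (n_ens - 1)                     # background covariance
    S = H @ B @ H.T + obs_var * np.eye(len(y))    # innovation covariance
    K = B @ H.T @ np.linalg.inv(S)                # Kalman gain
    return x_f + K @ (y - H @ x_f)

x_a = enoi_update(np.array([1.0, 2.0, 3.0]),
                  np.array([[0.0, 0.0, 1.0]]),   # observe the third state only
                  np.array([4.0]), obs_var=0.01)
```

The observed state is pulled toward the water-level observation while unobserved states move only through the sampled covariance.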


Estimation of the Random Error of Eddy Covariance Data from Two Towers during Daytime (주간에 두 타워로부터 관측된 에디 공분산 자료의 확률 오차의 추정)

  • Lim, Hee-Jeong;Lee, Young-Hee;Cho, Changbum;Kim, Kyu Rang;Kim, Baek-Jo
    • Atmosphere
    • /
    • v.26 no.3
    • /
    • pp.483-492
    • /
    • 2016
  • We have examined the random error of eddy covariance (EC) measurements on the basis of the two-tower approach during daytime. Two EC towers were placed on grassland with different vegetation density near Gumi-weir. We calculated the random error using three different methods. The first method (M1) is the two-tower method suggested by Hollinger and Richardson (2005), in which the random error is based on differences between simultaneous flux measurements from two towers in very similar environmental conditions. The second (M2), suggested by Kessomkiat et al. (2013), is an extended procedure for estimating the random error of EC data from two towers in more heterogeneous environmental conditions; it removes the systematic flux difference due to the energy balance deficit and the evaporative fraction difference between the two sites before determining the random error with M1. Here, we introduce a third method (M3) in which we additionally remove the systematic flux difference due to the available energy difference between the two sites. Compared to M1 and M2, applying M3 results in a more symmetric random error distribution. The magnitude of the estimated random error is smallest with M3 because it leaves the least systematic flux difference between the two sites among the three methods. An empirical formula for the random error is developed as a function of flux magnitude, wind speed, and measurement height for use at single-tower sites near the Nakdong River. This study suggests that correcting for the available energy difference between two sites is also required when calculating the random error of EC data from two towers at a heterogeneous site where vegetation density is low.
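Method M1 reduces to a one-line estimator: if both towers observe the same true flux with independent random errors, the standard deviation of the simultaneous difference F1 - F2 is sqrt(2) times the single-tower random error. A sketch on synthetic fluxes (the signal and noise level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def two_tower_random_error(f1, f2):
    """M1-style estimate: with the same true flux at both towers and
    independent random errors, std(F1 - F2) = sqrt(2) * sigma_random."""
    d = np.asarray(f1) - np.asarray(f2)
    return d.std(ddof=1) / np.sqrt(2.0)

# synthetic daytime fluxes: shared signal plus independent noise, sigma = 20
signal = 300.0 + 50.0 * np.sin(np.linspace(0.0, 6.28, 2000))
f1 = signal + rng.normal(0.0, 20.0, 2000)
f2 = signal + rng.normal(0.0, 20.0, 2000)
```

The estimator recovers the injected noise level; M2 and M3 differ only in what systematic difference is removed from F1 - F2 first.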

Comparative study on dynamic analyses of non-classically damped linear systems

  • Greco, Annalisa;Santini, Adolfo
    • Structural Engineering and Mechanics
    • /
    • v.14 no.6
    • /
    • pp.679-698
    • /
    • 2002
  • In this paper, some techniques for the dynamic analysis of non-classically damped linear systems are reviewed and compared. All these methods are based on a transformation of the governing equations using a basis of complex or real vectors. Complex and real vector bases are presented and compared. The complex vector basis is formed by the eigenvectors of the complex eigenproblem obtained by considering the non-classical damping matrix of the system. The real vector basis is a set of Ritz vectors derived either as the undamped normal modes of vibration of the system or by the load-dependent vector algorithm (Lanczos vectors); in the latter case the vector basis includes the static correction concept. The rate of convergence of these bases, with reference to a parametric structural system subjected to a fixed spatial distribution of forces, is evaluated. To this aim, two error norms are considered, the first based on the spatial distribution of the load and the second on the base shear force due to impulsive loading. Both error norms show that the rate of convergence is strongly influenced by the spatial distribution of the applied forces.
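The complex vector basis comes from the eigenproblem of the first-order (state-space) form of the equations of motion; a minimal sketch for a two-DOF system with non-proportional damping (the matrices are illustrative, not the paper's parametric system):

```python
import numpy as np

# two-DOF system with non-proportional (non-classical) damping:
# a damper acts on the first mass only
M = np.eye(2)
K = np.array([[20.0, -10.0], [-10.0, 10.0]])
C = np.array([[0.5, 0.0], [0.0, 0.0]])

# first-order (state-space) form z = [x, x_dot]; the complex modes are
# the eigenvectors of A
Minv = np.linalg.inv(M)
A = np.block([[np.zeros((2, 2)), np.eye(2)],
              [-Minv @ K, -Minv @ C]])
lam, phi = np.linalg.eig(A)

# the damping is classical iff C M^-1 K equals K M^-1 C
classical = np.allclose(C @ Minv @ K, K @ Minv @ C)
```

Because C does not commute with K here, the eigenvectors cannot be made real: this is exactly the case where a complex basis (or a real Ritz basis with static correction) is needed.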

Extraction of a crack opening from a continuous approach using regularized damage models

  • Dufour, Frederic;Pijaudier-Cabot, Gilles;Choinska, Marta;Huerta, Antonio
    • Computers and Concrete
    • /
    • v.5 no.4
    • /
    • pp.375-388
    • /
    • 2008
  • Crack opening governs many transfer properties that play a pivotal role in durability analyses. Instead of trying to combine continuum and discrete models in computational analyses, it would be attractive to derive from the continuum approach an estimate of the crack opening, without an explicit description of a discontinuous displacement field in the computational model. This is the prime objective of this contribution. The derivation is based on the comparison between two continuous variables: the distribution of the effective non-local strain that controls damage and an analytical distribution of the effective non-local variable that derives from a strong discontinuity analysis. Close to complete failure, these distributions should be very close to each other. Their comparison provides two quantities: the displacement jump across the crack [U] and the distance between the two profiles. This distance is an error indicator defining how close the damage distribution is to that corresponding to a crack surrounded by a fracture process zone. It may subsequently serve in continuous/discrete models to define the threshold below which the continuum approach is close enough to the discrete one to switch descriptions. The estimation of the crack opening is illustrated on a one-dimensional example, and the error between the profiles obtained from discontinuous and finite element analyses is found to be of a few percent close to complete failure.

Privacy Amplification of Quantum Key Distribution Systems Using Dual Universal Hash Function (듀얼 유니버셜 해쉬 함수를 이용한 양자 키 분배 시스템의 보안성 증폭)

  • Lee, Sun Yui;Kim, Jin Young
    • Journal of Satellite, Information and Communications
    • /
    • v.12 no.1
    • /
    • pp.38-42
    • /
    • 2017
  • This paper introduces the concept of a dual universal hash function for amplifying security in a quantum key distribution (QKD) system. We show how the relationship between quantum error correction and security can be used to provide security amplification, and, in terms of security amplification, that phase error correction offers better security. We describe the process of enhancing security with the universal hash function using the BB84 protocol, a typical example of QKD. Finally, the deterministic universal hash function allows the security to be evaluated over the quantum Pauli channel independently of the message length.
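Privacy amplification by universal hashing is commonly implemented with a random Toeplitz matrix over GF(2); a minimal sketch of that standard construction (the paper's dual universal hash family is a refinement of this idea; the key sizes and names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

def toeplitz_hash(key_bits, out_len, seed_bits):
    """Compress an n-bit reconciled key to out_len bits with a random
    Toeplitz matrix over GF(2); the matrix is defined by n + out_len - 1
    seed bits, the standard universal-hash construction for privacy
    amplification."""
    n = len(key_bits)
    i, j = np.indices((out_len, n))
    T = seed_bits[i - j + n - 1]          # T[i, j] depends only on i - j: Toeplitz
    return T @ np.asarray(key_bits) % 2

n, m = 32, 8
key = rng.integers(0, 2, n)
seed = rng.integers(0, 2, n + m - 1)
digest = toeplitz_hash(key, m, seed)
```

The hash is linear over GF(2), which is what makes its security analysis against the quantum Pauli channel tractable.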