• Title/Summary/Keyword: Gaussian density

Search Results: 364

A Study on the Development of an Ultrasonic Imaging System for Tissue Characterization (조직의 정량화를 위한 초음파 영상시스템의 개발에 관한 연구)

  • Choe, Jong-Ho;Choe, Jong-Su
    • The Journal of the Acoustical Society of Korea
    • /
    • v.6 no.3
    • /
    • pp.31-42
    • /
    • 1987
  • An ultrasonic pulse-echo diagnostic system for tissue characterization based on the estimation of attenuation coefficients is developed, and its performance is examined through system implementation. The system consists of an ultrasonic generator, an A/D converter, a data-communication interface, and a computer for signal processing. Methods for estimating the spatial distribution of acoustic attenuation coefficients using moment analysis are proposed. The experimental results indicate the potential of the methods for tissue characterization.

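The abstract above mentions moment analysis of backscattered echoes for attenuation estimation but gives no formulas. As an illustrative sketch only (not the paper's implementation), a common moment-based approach tracks the downshift of the spectral centroid (first spectral moment) of windowed RF segments with depth; for a Gaussian transmit pulse the downshift is approximately proportional to the attenuation slope, the spectral variance, and the depth. The function names, window sizes, and sound speed below are illustrative assumptions.

```python
# Illustrative sketch: estimate an attenuation-related slope from the depth-dependent
# downshift of the spectral centroid of backscattered RF echoes. Not the paper's code.
import numpy as np

def spectral_centroid(rf_segment, fs):
    """First moment of the magnitude spectrum of one windowed RF segment."""
    spec = np.abs(np.fft.rfft(rf_segment * np.hanning(len(rf_segment))))
    freqs = np.fft.rfftfreq(len(rf_segment), d=1.0 / fs)
    return np.sum(freqs * spec) / np.sum(spec)

def centroid_slope(rf_line, fs, c=1540.0, win=256, hop=128):
    """Fit centroid vs. depth with a line; for a Gaussian pulse the slope is
    proportional to the attenuation slope of the medium."""
    centroids, depths = [], []
    for start in range(0, len(rf_line) - win, hop):
        seg = rf_line[start:start + win]
        centroids.append(spectral_centroid(seg, fs))
        depths.append((start + win / 2) / fs * c / 2.0)   # two-way travel -> depth [m]
    slope, _ = np.polyfit(depths, centroids, 1)           # Hz per metre of depth
    return slope
```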

A Study on the Characteristics of Surface Roughness Generated in Machining Graphite/Epoxy Composites with PCD Tools (PCD 공구에 의한 Graphite/Epoxy 복합재료 가공시 발생하는 표면조도의 특성 연구)

  • Wang, Duck-Hyun (왕덕현)
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 1992.10a
    • /
    • pp.101-105
    • /
    • 1992
  • Machined graphite/epoxy composite surfaces were studied using SEM (scanning electron microscopy), surface profilometry, and profile analysis to determine suitable surface-describing parameters for machined unidirectional and laminate composite surfaces. The surface roughness and profile are found to be highly dependent on the fiber layup direction and the measurement direction. Machined unidirectional and 0°, 45°, and 90° plies in the laminate composite surface profiles are found to be Gaussian in the direction of machining. Since bare fibers without matrix smearing exist in the 0° ply, higher surface roughness values were found in this orientation. Machining the 90° and -45° plies was possible because the adjacent plies held them in place. It was found that the microgeometrical variations expressed by the roughness parameter Ra excluding the Dy (maximum damage depth) region, together with Dy itself, describe the machined laminate composite surface better than the commonly used roughness parameters. The characteristics of surface profiles in the laminate composite are well represented in the CHD (cumulative height distribution) and PPD (percentage probability density) plots. Also, the power spectral density function is shown to be capable of identifying the wavelength distribution of the machining damage.
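
The abstract above relies on two standard profile statistics, the arithmetic-mean roughness Ra and the power spectral density of the measured profile. A minimal sketch of both follows; the profile array and sampling step are placeholders, and this is not the authors' analysis code.

```python
# Minimal sketch of the profile statistics mentioned above: arithmetic-mean roughness Ra
# and a one-sided power spectral density. `profile` (heights) and `dx` (sampling step, mm)
# are placeholders.
import numpy as np

def roughness_ra(profile):
    """Ra: mean absolute deviation of the profile from its mean line."""
    z = np.asarray(profile, dtype=float)
    return np.mean(np.abs(z - z.mean()))

def profile_psd(profile, dx):
    """One-sided PSD; peaks reveal dominant wavelengths (e.g. machining-damage spacing)."""
    z = np.asarray(profile, dtype=float) - np.mean(profile)
    n = len(z)
    spec = np.fft.rfft(z * np.hanning(n))
    psd = (np.abs(spec) ** 2) * dx / n
    freqs = np.fft.rfftfreq(n, d=dx)          # spatial frequency [1/mm]
    return freqs, psd
```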

Constraining primordial non-Gaussianity with the 3-point correlation function of the SDSS-IV eBOSS DR14 quasar sample

  • Choi, Peter D.;Rossi, Graziano;Slepian, Zachary;Eisenstein, Daniel;Ho, Shirley;Schlegel, David
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.42 no.1
    • /
    • pp.53.3-53.3
    • /
    • 2017
  • While quasars are sparse in number density, they reside at relatively high redshift compared to galaxies. Hence, they are likely to be less non-linearly evolved than the galaxy population, and thus have a distribution that more closely mirrors the primordial density field. Therefore, they offer an intriguing opportunity to search for primordial non-Gaussianity (PNG). To this end, the 3-point correlation function (3PCF) is an excellent statistical tool for detecting departures from Gaussianity, as it vanishes for a Gaussian field. In this work, we will make the first-ever measurement of the large-scale quasar 3PCF from the SDSS-IV DR14 quasar sample (spanning the largest volume to date) to place constraints on PNG through the usual f_NL-type parametrization. This work will use the order N^2-time 3PCF algorithm of Slepian & Eisenstein (2015), with N the number of objects.

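To make the statistic concrete, here is a toy, brute-force triplet count binned by triangle side lengths. This is only to illustrate what the 3PCF measures: a real analysis such as the one above would use the O(N^2) multipole algorithm of Slepian & Eisenstein (2015) and combine data and random catalogues (e.g. a Szapudi-Szalay-type estimator) rather than the raw O(N^3) count shown here; the mock catalogue and bin edges are placeholders.

```python
# Toy illustration only: brute-force O(N^3) triplet count over (r1, r2) bins,
# where r1 <= r2 are the two shortest triangle sides.
import numpy as np
from itertools import combinations

def triplet_counts(points, r_edges):
    nbins = len(r_edges) - 1
    counts = np.zeros((nbins, nbins), dtype=np.int64)
    for a, b, c in combinations(points, 3):
        sides = sorted((np.linalg.norm(a - b),
                        np.linalg.norm(b - c),
                        np.linalg.norm(a - c)))
        i = np.searchsorted(r_edges, sides[0]) - 1
        j = np.searchsorted(r_edges, sides[1]) - 1
        if 0 <= i < nbins and 0 <= j < nbins:
            counts[i, j] += 1
    return counts

# Tiny mock catalogue (positions purely illustrative):
rng = np.random.default_rng(0)
mock = rng.uniform(0.0, 100.0, size=(100, 3))
print(triplet_counts(mock, r_edges=np.linspace(0.0, 50.0, 6)))
```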

Construction of Multiple-Rate Quasi-Cyclic LDPC Codes via the Hyperplane Decomposing

  • Jiang, Xueqin;Yan, Yier;Lee, Moon-Ho
    • Journal of Communications and Networks
    • /
    • v.13 no.3
    • /
    • pp.205-210
    • /
    • 2011
  • This paper presents an approach to the construction of multiple-rate quasi-cyclic low-density parity-check (LDPC) codes. Parity-check matrices of the proposed codes consist of $q \times q$ square submatrices. The block rows and block columns of the parity-check matrix correspond to the hyperplanes (${\mu}$-flats) and points in Euclidean geometries, respectively. By decomposing the ${\mu}$-flats, we obtain LDPC codes of different code rates and a constant code length. The code performance is investigated in terms of the bit error rate and compared with those of LDPC codes given in IEEE standards. Simulation results show that our codes perform very well and have low error floors over the additive white Gaussian noise channel.
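
The block structure described above (a parity-check matrix tiled with q x q square submatrices) is the defining feature of quasi-cyclic LDPC codes. The sketch below assembles such a matrix from circulant permutation submatrices driven by an exponent matrix; this generic construction is only an assumption for illustration and is not the paper's Euclidean-geometry (μ-flat) decomposition.

```python
# Generic quasi-cyclic structure: each entry of an exponent matrix E selects a q x q
# circulant permutation submatrix (cyclic shift by E[i][j]); -1 marks an all-zero block.
import numpy as np

def circulant_permutation(q, shift):
    """q x q identity with columns cyclically shifted by `shift`."""
    return np.roll(np.eye(q, dtype=np.uint8), shift, axis=1)

def qc_ldpc_parity_matrix(exponents, q):
    """Assemble H from the block exponent matrix; -1 -> zero block."""
    blocks = [[np.zeros((q, q), dtype=np.uint8) if e < 0 else circulant_permutation(q, e)
               for e in row]
              for row in exponents]
    return np.block(blocks)

# Tiny example: a 2 x 4 block layout with q = 5 gives a 10 x 20 parity-check matrix.
E = [[0, 1, -1, 2],
     [3, -1, 4, 0]]
H = qc_ldpc_parity_matrix(E, q=5)
print(H.shape)  # (10, 20)
```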

Pointwise Estimation of Density of Heteroscedastic Response in Regression

  • Hyun, Ji-Hoon;Kim, Si-Won;Lee, Sung-Dong;Byun, Wook-Jae;Son, Mi-Kyoung;Kim, Choong-Rak
    • The Korean Journal of Applied Statistics
    • /
    • v.25 no.1
    • /
    • pp.197-203
    • /
    • 2012
  • In fitting a regression model, we often encounter data sets which do not follow a Gaussian distribution and/or do not have equal variance. In this case, estimating the conditional density of the response variable at a given design point can hardly be done with a standard least squares method. To solve this problem, we propose a simple method to estimate the distribution of the fitted values under heteroscedasticity using the idea of quantile regression and histogram techniques. An application of this method to a real data set is given.
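
A minimal sketch of the general idea described above, assuming statsmodels and a linear quantile model (this is not the authors' code): fit quantile regressions on a grid of levels, evaluate the fitted quantiles at a design point, and difference adjacent levels to approximate the conditional density there.

```python
# Sketch: conditional density at a design point x0 via a grid of quantile regressions.
import numpy as np
import statsmodels.api as sm

def conditional_density_at(x, y, x0, taus=np.linspace(0.05, 0.95, 19)):
    X = sm.add_constant(x)
    q_hat = []
    for tau in taus:
        res = sm.QuantReg(y, X).fit(q=tau)
        q_hat.append(res.params[0] + res.params[1] * x0)   # fitted tau-quantile at x0
    q_hat = np.asarray(q_hat)
    # density ~ d(tau)/d(quantile); crossing quantiles would need monotonization first
    midpoints = 0.5 * (q_hat[1:] + q_hat[:-1])
    density = np.diff(taus) / np.maximum(np.diff(q_hat), 1e-12)
    return midpoints, density

# Heteroscedastic toy data: spread grows with x, so the density at x0 = 2.0 is wider
# than at x0 = 0.0.
rng = np.random.default_rng(1)
x = rng.uniform(0.0, 3.0, 400)
y = 1.0 + 2.0 * x + (0.3 + 0.5 * x) * rng.standard_normal(400)
print(conditional_density_at(x, y, x0=2.0))
```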

A Study on an Improved LBG Algorithm to Design the Code Book of VQ (VQ의 코드북 생성을 위한 LBG 알고리즘의 개선에 관한 연구)

  • Kim, Jang-Han (김장한)
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.25 no.1A
    • /
    • pp.48-55
    • /
    • 2000
  • In this paper, an assumption for designing a quantizer is proposed: if one small region of a probability density function carries a larger probability and a larger total error than a neighbouring region, then the quantizer is not optimal. This is tested by computer simulation for Gaussian, Laplacian, and uniform density functions. A new LBG algorithm, which builds on this assumption in addition to the original LBG algorithm, is designed for the vector quantizer. The new LBG algorithm gives better performance than the original LBG algorithm in terms of the average error and the variance of the error.

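For reference, the baseline LBG (generalized Lloyd) iteration that the improved algorithm above builds on alternates a nearest-neighbour partition with a centroid update. The sketch below shows that baseline only, trained on Gaussian data; the paper's refinement (re-splitting regions that carry both larger probability and larger total error than a neighbour) is not reproduced.

```python
# Baseline LBG codebook training: nearest-neighbour partition, then centroid update.
import numpy as np

def lbg(train, codebook_size, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    codebook = train[rng.choice(len(train), codebook_size, replace=False)].copy()
    for _ in range(iters):
        # Partition: assign each training vector to its nearest codeword.
        d2 = ((train[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # Centroid update: move each codeword to the mean of its region.
        for k in range(codebook_size):
            members = train[labels == k]
            if len(members):
                codebook[k] = members.mean(axis=0)
    return codebook

train = np.random.default_rng(2).standard_normal((2000, 2))
cb = lbg(train, codebook_size=16)
print(cb.shape)  # (16, 2)
```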

An Improvement of UMP-BP Decoding Algorithm Using the Minimum Mean Square Error Linear Estimator

  • Kim, Nam-Shik;Kim, Jae-Bum;Park, Hyun-Cheol;Suh, Seung-Bum
    • ETRI Journal
    • /
    • v.26 no.5
    • /
    • pp.432-436
    • /
    • 2004
  • In this paper, we propose a modified uniformly most powerful (UMP) belief-propagation (BP)-based decoding algorithm which utilizes multiplicative and additive factors to diminish the errors introduced by the approximation of the soft values in a previously proposed UMP BP-based algorithm. The modified UMP BP-based algorithm outperforms the normalized UMP BP-based algorithm; that is, its error performance is closer to that of BP on the additive white Gaussian noise channel for low-density parity-check codes. Also, the algorithm has the same implementation complexity as the normalized UMP BP-based algorithm.

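The UMP BP-based algorithm above is the min-sum approximation of BP, and the multiplicative/additive factors act on the check-node magnitudes, as in normalized and offset min-sum decoding. A minimal sketch of one such check-node update follows; the factor values are placeholders, and the paper's MMSE-based derivation of those factors is not reproduced here.

```python
# Check-node update for min-sum (UMP BP-based) decoding with a multiplicative factor
# alpha and additive offset beta applied to the extrinsic magnitude.
import numpy as np

def check_node_update(llr_in, alpha=0.8, beta=0.0):
    """llr_in: incoming variable-to-check LLRs at one check node.
    Returns the check-to-variable LLR per edge (extrinsic: excludes that edge)."""
    llr_in = np.asarray(llr_in, dtype=float)
    signs = np.sign(llr_in)
    signs[signs == 0] = 1.0
    mags = np.abs(llr_in)
    out = np.empty_like(llr_in)
    for i in range(len(llr_in)):
        mag = alpha * np.min(np.delete(mags, i)) - beta   # corrected min-sum magnitude
        out[i] = np.prod(np.delete(signs, i)) * max(mag, 0.0)
    return out

print(check_node_update([1.2, -0.4, 2.5, -3.1]))
```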

Improved Upper Bounds on Low Density Parity Check Codes Performance for the Input Binary AWGN Channel

  • Yu Yi;Lee, Moon-Ho
    • Proceedings of the IEEK Conference
    • /
    • 2002.06a
    • /
    • pp.323-326
    • /
    • 2002
  • In this paper, we study improved bounds on the performance of low-density parity-check (LDPC) codes over binary-input additive white Gaussian noise (AWGN) channels with belief-propagation (BP) decoding in the log domain. We define an extended Gallager ensemble based on a new method of constructing the parity-check matrix and use it to improve the upper bound for LDPC codes. Many simulation results are also presented. These results indicate that extended Gallager ensembles based on Hamming codes have a typical minimum distance ratio very close to the asymptotic Gilbert-Varshamov bound, and superior performance compared with the original Gallager ensembles.

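The asymptotic Gilbert-Varshamov bound mentioned above gives, for a binary code of rate R, the relative distance delta solving H2(delta) = 1 - R with 0 <= delta <= 1/2, where H2 is the binary entropy function. A short sketch of that computation by bisection:

```python
# Asymptotic Gilbert-Varshamov relative distance for a binary code of rate R.
import math

def binary_entropy(p):
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def gv_relative_distance(rate, tol=1e-12):
    lo, hi = 0.0, 0.5
    target = 1.0 - rate
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if binary_entropy(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(gv_relative_distance(0.5))   # ~0.11 for a rate-1/2 code
```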

Separate Reconstruction of Speed of Sound, Density, and Absorption Parameters in Ultrasound Inverse Scattering Tomography

  • Kwon, Sung-Jae
    • The Journal of the Acoustical Society of Korea
    • /
    • v.18 no.2E
    • /
    • pp.18-23
    • /
    • 1999
  • This paper proposes a method of separately determining three intrinsic mechanical parameters of an unknown object in the framework of ultrasound inverse scattering tomography. Those parameters are the speed of sound, density, and absorption, whose values are given as the solution of an inhomogeneous Helmholtz wave equation. The separate reconstruction method is mathematically formulated, the integral equations are discretized using sinc basis functions, and the Newton-Raphson method is adopted as the numerical solver in a measurement configuration where the object is insonified by an incident plane wave over 360˚ and the scattered field is measured by detectors arranged in a rectangular fashion around it. Two distinct frequencies are used to separate each parameter of three Gaussian objects that are located either at the same position or apart from each other. Computer simulation results show that the separate reconstruction method is able to reconstruct the three mechanical parameters separately. The absorption parameter turns out to be somewhat more difficult to reconstruct than the other two parameters.

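The abstract above names Newton-Raphson as the numerical solver for the discretized inverse problem. Below is a generic Newton-Raphson iteration for a vector residual F(x) = 0 with a finite-difference Jacobian, shown on a toy residual; the paper's actual unknowns (speed of sound, density, absorption on a sinc basis) and its Helmholtz forward model are not reproduced here.

```python
# Generic Newton-Raphson for a vector residual F(x) = 0 with a numerical Jacobian.
import numpy as np

def newton_raphson(residual, x0, iters=20, eps=1e-7, tol=1e-10):
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        f = residual(x)
        if np.linalg.norm(f) < tol:
            break
        # Finite-difference Jacobian, column by column.
        jac = np.empty((len(f), len(x)))
        for j in range(len(x)):
            xp = x.copy()
            xp[j] += eps
            jac[:, j] = (residual(xp) - f) / eps
        x -= np.linalg.solve(jac, f)      # Newton step (use least squares if non-square)
    return x

# Toy residual: intersect a circle of radius 2 with the line x = y.
res = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 4.0, v[0] - v[1]])
print(newton_raphson(res, [1.0, 0.5]))    # -> approximately [1.414, 1.414]
```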

Blind Signal Processing for Impulsive Noise Channels

  • Kim, Nam-Yong;Byun, Hyung-Gi;You, Young-Hwan;Kwon, Ki-Hyeon
    • Journal of Communications and Networks
    • /
    • v.14 no.1
    • /
    • pp.27-33
    • /
    • 2012
  • In this paper, a new blind signal processing scheme for equalization in fading and impulsive-noise channel environments is introduced, based on a probability density function matching method and a set of Dirac delta functions. The Gaussian kernel of the proposed blind algorithm has the effect of cutting out outliers in the difference between the desired level values and the impulse-infected outputs. The proposed algorithm also has relatively low sensitivity to the channel eigenvalue ratio and reduced computational complexity compared to the recently introduced correntropy algorithm. Consistent with these characteristics, simulation results show that the proposed blind algorithm yields superior performance in multi-path communication channels corrupted by impulsive noise.
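
To make the outlier-suppression effect described above concrete, here is a heavily simplified sketch in the same spirit, not the authors' exact algorithm: the equalizer output y = w^T x is pushed toward a set of desired levels (Dirac deltas smoothed by a Gaussian kernel), and the kernel weight G_sigma(y - d) automatically down-weights impulse-driven outliers. The step size, kernel width, tap length, and PAM levels below are illustrative choices.

```python
# Sketch of a Gaussian-kernel, PDF-matching style blind equalizer update.
import numpy as np

def gaussian_kernel(e, sigma):
    return np.exp(-0.5 * (e / sigma) ** 2) / (np.sqrt(2.0 * np.pi) * sigma)

def blind_update(w, x, levels, mu=0.01, sigma=0.6):
    """One stochastic gradient-ascent step on the kernel-matching criterion.
    Large errors get a near-zero kernel weight, so impulsive outliers barely move w."""
    y = np.dot(w, x)
    grad = np.zeros_like(w)
    for d in levels:
        e = y - d
        grad += gaussian_kernel(e, sigma) * (-e / sigma ** 2) * x
    return w + mu * grad / len(levels)

# Example: 11-tap equalizer, 4-PAM desired levels, one received-sample window.
levels = np.array([-3.0, -1.0, 1.0, 3.0])
w = np.zeros(11); w[5] = 1.0                 # centre-spike initialisation
x = np.random.default_rng(3).standard_normal(11)
w = blind_update(w, x, levels)
```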