• Title/Summary/Keyword: 가우스법 (Gauss method)


Performance Analysis of MFSK Signal using Reed-Solomon / Convolutional Concatenated Coding and MRC Diversity Techniques in m-distributed Fading Environment (m-분포 페이딩 환경에서 Reed-Solomon/컨벌루션 연접 부호화 기법과 MRC 다이버시티 기법을 함께 이용하는 MFSK 신호의 성능 해석)

  • 이희덕;강희조;조성준
    • The Proceeding of the Korean Institute of Electromagnetic Engineering and Science / v.5 no.2 / pp.10-19 / 1994
  • The error rate equation of a Reed-Solomon/convolutional concatenated coded MFSK signal transmitted over an m-distributed fading channel with additive white Gaussian noise (AWGN) and received with maximal ratio combining (MRC) diversity has been derived. The bit error probability has been evaluated using the derived equation and shown in figures as a function of the signal-to-noise ratio, the fading index, and the number of diversity branches. From the results obtained, we have shown that the bit error probability of the MFSK signal is improved by using a coding technique in a fading environment. The concatenated coding technique is found to be very effective. When concatenated coding and MRC diversity reception are used together in a fading environment, the improvement in error performance is about 6.6 dB in terms of SNR compared with the case that employs concatenated coding alone.

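The entry above derives a closed-form error-rate expression; as a loose illustration of the same kind of evaluation, the sketch below Monte Carlo averages the standard noncoherent orthogonal-MFSK error formula over simulated Nakagami-m (m-distributed) branch SNRs combined by MRC. It is not the paper's concatenated-coding analysis, and the values of M, m, and L are illustrative assumptions.

```python
import numpy as np
from math import comb

def mfsk_ser_awgn(es_n0, M):
    """Symbol error rate of noncoherent orthogonal MFSK at a given Es/N0 (linear)."""
    return sum((-1) ** (k + 1) * comb(M - 1, k) / (k + 1)
               * np.exp(-k * es_n0 / (k + 1)) for k in range(1, M))

def mfsk_ber_nakagami_mrc(es_n0_db, M=4, m=2.0, L=2, trials=200_000, rng=None):
    """Monte Carlo average of the MFSK error rate over Nakagami-m fading
    with L-branch maximal ratio combining (MRC), uncoded for simplicity."""
    rng = np.random.default_rng(rng)
    es_n0 = 10 ** (es_n0_db / 10)
    # Squared Nakagami-m envelope is Gamma(m, 1/m) with unit mean power;
    # MRC adds the instantaneous SNRs of the L branches.
    gamma_mrc = rng.gamma(shape=m, scale=1.0 / m, size=(trials, L)).sum(axis=1) * es_n0
    ser = mfsk_ser_awgn(gamma_mrc, M).mean()
    # Standard symbol-to-bit error relation for orthogonal MFSK
    return ser * (M / 2) / (M - 1)

if __name__ == "__main__":
    for snr_db in (5, 10, 15):
        print(snr_db, "dB ->", mfsk_ber_nakagami_mrc(snr_db))
```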

An Improved Reconstruction Algorithm of Convolutional Codes Based on Channel Error Rate Estimation (채널 오류율 추정에 기반을 둔 길쌈부호의 개선된 재구성 알고리즘)

  • Seong, Jinwoo;Chung, Habong
    • The Journal of Korean Institute of Communications and Information Sciences / v.42 no.5 / pp.951-958 / 2017
  • In an attack context, the adversary wants to retrieve the message from an intercepted noisy bit stream without any prior knowledge of the channel code used. The process of finding the code parameters, such as the code length, dimension, and generator, is called blind recognition (or reconstruction) of channel codes. In this paper, we suggest an improved algorithm for the blind recovery of rate k/n convolutional encoders in a noisy environment. The suggested algorithm improves on the existing algorithm by Marazin et al. by setting the threshold value through an estimate of the error probability of the binary symmetric channel (BSC). By also applying the soft-decision method of Shaojing et al., we considerably enhance the success rate of the reconstruction.
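
Reconstruction methods of the kind this entry builds on typically detect code parameters from the rank deficiency of matrices formed from the intercepted stream, which in turn rests on Gaussian elimination over GF(2). The sketch below shows only that generic rank-profile step under a simple folding assumption; the paper's threshold rule and soft-decision refinement are not reproduced.

```python
import numpy as np

def gf2_rank(mat):
    """Rank over GF(2) by Gaussian elimination (row reduction modulo 2)."""
    a = np.array(mat, dtype=np.uint8) % 2
    rank = 0
    rows, cols = a.shape
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if a[r, col]), None)
        if pivot is None:
            continue
        a[[rank, pivot]] = a[[pivot, rank]]      # move pivot row up
        hit = a[:, col].astype(bool)
        hit[rank] = False
        a[hit] ^= a[rank]                        # clear the column elsewhere
        rank += 1
    return rank

def rank_profile(bits, max_cols=20):
    """Fold the intercepted bit stream into n-column matrices and report the
    rank deficiency for each candidate n; without noise, deficient ranks
    hint at the code parameters, which is why a noise-aware threshold matters."""
    bits = np.asarray(bits, dtype=np.uint8)
    out = {}
    for n in range(2, max_cols + 1):
        rows = len(bits) // n
        m = bits[: rows * n].reshape(rows, n)
        out[n] = n - gf2_rank(m)
    return out
```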

Comparison of Analysis Performance of Additive Noise Signals by Independent Component Analysis (독립성분분석법에 의한 잡음첨가신호의 분석성능비교)

  • Cho Yong-Hyun;Park Yong-Soo
    • Journal of the Korean Institute of Intelligent Systems / v.15 no.3 / pp.294-299 / 2005
  • This paper presents the separation performance of linearly mixed image signals with additive noise, using independent component analysis (ICA) with fixed-point (FP) algorithms based on the Newton and secant methods, respectively. The Newton-type FP-ICA uses the derivative (slope) of the objective function, while the secant-type FP-ICA approximates this derivative with a secant line. The two kinds of ICA were applied to two two-dimensional images of $512\times512$ pixels, after which Gaussian noise and Laplacian noise were added to the mixed images, respectively. The experimental results show that the Newton FP-ICA has better separation speed than the secant FP-ICA, while the secant FP-ICA has a better separation rate than the Newton FP-ICA. In particular, both methods give a relatively larger degree of improvement in separation speed and rate as the noise increases.
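
As a point of reference for the fixed-point ICA discussed above, here is a minimal symmetric FastICA-style Newton iteration with a tanh contrast in NumPy. It is a generic sketch, not the authors' specific Newton or secant variants, and the whitening and convergence test are the usual textbook choices.

```python
import numpy as np

def whiten(x):
    """Center and whiten mixed signals x (channels x samples)."""
    xc = x - x.mean(axis=1, keepdims=True)
    cov = xc @ xc.T / xc.shape[1]
    d, e = np.linalg.eigh(cov)
    wm = e @ np.diag(1.0 / np.sqrt(d)) @ e.T
    return wm @ xc, wm

def fastica_newton(x, n_iter=200, tol=1e-6, seed=0):
    """Symmetric Newton-type fixed-point ICA with a tanh contrast."""
    z, _ = whiten(x)
    n = z.shape[0]
    w = np.random.default_rng(seed).standard_normal((n, n))
    for _ in range(n_iter):
        wz = w @ z
        g, g_prime = np.tanh(wz), 1.0 - np.tanh(wz) ** 2
        w_new = (g @ z.T) / z.shape[1] - np.diag(g_prime.mean(axis=1)) @ w
        # Symmetric decorrelation keeps the unmixing rows orthogonal
        u, _, vt = np.linalg.svd(w_new)
        w_new = u @ vt
        converged = np.max(np.abs(np.abs(np.diag(w_new @ w.T)) - 1.0)) < tol
        w = w_new
        if converged:
            break
    return w @ z, w          # estimated sources and unmixing matrix
```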

Noise Modeling for CR Images of High-strength Materials (고강도매질 CR 영상의 잡음 모델링)

  • Hwang, Jung-Won;Hwang, Jae-Ho
    • Journal of the Institute of Electronics Engineers of Korea SP / v.45 no.5 / pp.95-102 / 2008
  • This paper presents an appropriate approach for modeling noise in computed radiography (CR) images of high-strength materials. The approach is specifically designed for types of noise with statistical and nonlinear properties. CR images are degraded even before they are encoded by the computer process. Various types of noise often contaminate the radiographic image, and they are detected upon digitization. Quantum noise, which is Poisson distributed, is a shot noise, but the photon distribution on the image plate (IP) of a CR system does not always follow a Poisson process; its statistical properties are relative and case-dependent owing to the material characteristics. The usual assumptions of Poisson, binomial, and Gaussian statistics are considered. Nonlinear effects are also represented in the statistical noise model, which leads to estimating the noise variance in regions from high to low intensity and to specifying an analytical model. The approach is tested on a database of steel-tube step-wedge CR images. The results are available for comparative parameter studies that measure noise coherence, distribution, signal-to-noise ratio (SNR), and nonlinear interpolation.
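
A common way to characterize the kind of signal-dependent (Poisson-like plus Gaussian-like) noise described above is to fit a linear mean-variance model from locally flat regions. The sketch below does that on a synthetic step-wedge image; the block size and the var = a*mean + b model are assumptions for illustration, not the paper's analytical model.

```python
import numpy as np

def local_mean_variance(img, block=16):
    """Split the image into square blocks and return (mean, variance) per block."""
    h, w = (np.array(img.shape) // block) * block
    tiles = img[:h, :w].reshape(h // block, block, w // block, block)
    tiles = tiles.transpose(0, 2, 1, 3).reshape(-1, block * block)
    return tiles.mean(axis=1), tiles.var(axis=1)

def fit_noise_model(img, block=16):
    """Least-squares fit of var = a*mean + b, a linear signal-dependent model
    covering a Poisson-like part (a) plus a Gaussian-like part (b)."""
    mu, var = local_mean_variance(np.asarray(img, dtype=float), block)
    a, b = np.polyfit(mu, var, 1)
    return a, b

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    step = np.repeat(np.linspace(10.0, 200.0, 16), 16)    # 16 flat steps
    clean = np.tile(step, (256, 1))                        # synthetic step wedge
    noisy = rng.poisson(clean) + rng.normal(0, 2, clean.shape)
    print(fit_noise_model(noisy))   # slope near 1 (Poisson), intercept near 4 (sigma=2)
```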

Image Interpolation Using Linear Modeling for the Absolute Values of Wavelet Coefficients Across Scale (스케일간 웨이블릿 계수 절대치의 선형 모델링을 이용한 영상 보간)

  • Kim Sang-Soo;Eom Il-Kyu;Kim Yoo-Shin
    • Journal of the Institute of Electronics Engineers of Korea SP / v.42 no.6 / pp.19-26 / 2005
  • Image interpolation in the wavelet domain usually takes advantage of probabilistic models for the intrascale statistics and the interscale dependency. In this paper, we adopt a linear model for the absolute values of the wavelet coefficients of the interpolated image across scales in order to estimate the variances of the extrapolated bands. The proposed algorithm uses randomly generated wavelet coefficients based on the parameters estimated for the probabilistic model. Random number generation according to the estimated model may induce 'salt and pepper' noise in the subbands, so we reduce the noise power by Wiener filtering. We observe that the proposed method generates a histogram of subband coefficients similar to that of the original image. Experimental results show that our method outperforms the previous wavelet-domain interpolation methods as well as the conventional bicubic method.
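
A rough sketch of the interscale idea described above, assuming PyWavelets (pywt) is available: fit a line to the log mean absolute detail coefficient across decomposition levels, extrapolate it one level finer, and fill the unknown finer subbands with Gaussian coefficients of the predicted scale. The paper's exact probabilistic model and Wiener post-filtering are omitted.

```python
import numpy as np
import pywt   # PyWavelets, assumed available

def extrapolate_detail_magnitude(img, wavelet="db2", levels=4):
    """Fit log(mean |detail coefficient|) vs. decomposition level with a line
    and extrapolate it one level finer, separately for the H, V, D bands."""
    coeffs = pywt.wavedec2(np.asarray(img, float), wavelet, level=levels)
    details = coeffs[1:]                        # ordered coarse -> fine
    scales = np.arange(levels, 0, -1)           # level number of each tuple
    predicted = []
    for orient in range(3):
        mags = np.array([np.mean(np.abs(d[orient])) for d in details])
        slope, intercept = np.polyfit(scales, np.log(mags), 1)
        predicted.append(np.exp(intercept))     # value at hypothetical level 0
    return predicted

def synthesize_finer_image(img, wavelet="db2", seed=0):
    """Treat the image as an approximation band and invert one DWT step with
    random Gaussian detail bands whose scale matches the extrapolated
    magnitudes (approximation-band amplitude normalization is glossed over)."""
    g = np.random.default_rng(seed)
    mags = extrapolate_detail_magnitude(img, wavelet)
    sigmas = [m * np.sqrt(np.pi / 2) for m in mags]   # E|N(0,s)| = s*sqrt(2/pi)
    detail = tuple(g.normal(0.0, s, np.asarray(img).shape) for s in sigmas)
    return pywt.idwt2((np.asarray(img, float), detail), wavelet)
```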

Investigation of Turbulence Structures and Development of a Turbulence Model Based upon a Higher Order Averaging Method (고차평균법에 의한 난류구조의 규명 및 난류모델의 개발)

  • 여운광;편종근
    • Journal of Korean Society of Coastal and Ocean Engineers / v.4 no.4 / pp.201-207 / 1992
  • The averaged non-linear term in the turbulence equations suggested by Yeo (1987) is analyzed theoretically and experimentally. It was formulated by applying filtering concepts to the convolution-integral average definition with a Gaussian response function. This filtering approach appears superior to the conventional averaging methods, in which all four terms at the doubly averaged level must be defined separately, and it also gives a very useful tool for understanding turbulence structures. By theoretically analyzing the newly derived expression for the averaged non-linear term, it is found that vortex stretching can be accounted for explicitly. Furthermore, comparisons of the correlation coefficients based on the experimental data show that vortex stretching acts most significantly on the turbulence residual stress. This strongly supports the claim that vortex stretching is essential in the transfer of turbulence. In addition, a general form of turbulent energy models in LES is derived, from which it is recognized that the Smagorinsky, vorticity, and SGS energy models are not distinct.

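The filtering view described above can be mimicked numerically by convolving a field with a Gaussian response function and forming the residual stress that the averaged non-linear term leaves behind. The sketch below does this for a toy 2-D field with SciPy; the filter width and the synthetic field are illustrative assumptions, not the paper's experimental data.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def residual_stress(u, v, width=2.0):
    """Gaussian-filter two velocity components and form the residual stress
    tau = <u v> - <u><v>, i.e., what the filtered non-linear term contributes
    beyond the product of the resolved (filtered) motions."""
    fu, fv = gaussian_filter(u, width), gaussian_filter(v, width)
    fuv = gaussian_filter(u * v, width)
    return fuv - fu * fv, fu, fv

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    u = gaussian_filter(rng.standard_normal((128, 128)), 1.0)   # toy velocity field
    v = gaussian_filter(rng.standard_normal((128, 128)), 1.0)
    tau, _, _ = residual_stress(u, v)
    print("rms residual stress:", np.sqrt((tau ** 2).mean()))
```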

Performance Evaluation of Finite Field Arithmetic Implementations in Network Coding (네트워크 코딩에서의 유한필드 연산의 구현과 성능 영향 평가)

  • Lee, Chul-Woo;Park, Joon-Sang
    • Journal of the Korea Society of Computer and Information / v.13 no.2 / pp.193-201 / 2008
  • Using Network Coding in P2P systems yields great benefits, e.g., reduced download delay. The core notion of Network Coding is to allow encoding and decoding at intermediate nodes, which are prohibited in traditional networking. However, an improper implementation of Network Coding may reduce the overall performance of a P2P system. Network Coding cannot work with general arithmetic operations, since its arithmetic is over a Finite Field, and the use of an efficient Finite Field arithmetic algorithm is the key to the performance of Network Coding. There are also other important performance parameters in Network Coding, such as the Field size. In this paper we study how those factors influence the performance of Network Coding based systems. A set of experiments shows that the overall performance of Network Coding can vary by a factor of 2-5 depending on those factors, and we argue that when developing a network system using Network Coding, those performance parameters must be chosen carefully.

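For the finite-field arithmetic that the entry above identifies as performance-critical, a usual fast implementation is GF(2^8) multiplication through log/antilog tables. The sketch below builds such tables; the primitive polynomial 0x11D and generator 2 are one common choice and not necessarily the ones used in the paper.

```python
# GF(2^8) multiplication via log/antilog tables, a standard way to make the
# finite-field arithmetic behind network coding fast.
PRIM = 0x11D                        # x^8 + x^4 + x^3 + x^2 + 1
EXP, LOG = [0] * 512, [0] * 256
x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x100:
        x ^= PRIM
for i in range(255, 512):
    EXP[i] = EXP[i - 255]           # duplicate so lookups skip a modulo

def gf_mul(a, b):
    """Multiply in GF(2^8) with two table lookups and one addition."""
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

def gf_mul_slow(a, b):
    """Bitwise 'Russian peasant' multiplication, kept for verification."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= PRIM
        b >>= 1
    return r

assert all(gf_mul(a, b) == gf_mul_slow(a, b) for a in range(256) for b in range(256))
```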

Parallel solution of linear systems on the CRAY-2 using multi/micro tasking library (CRAY-2에서 멀티/마이크로 태스킹 라이브러리를 이용한 선형시스템의 병렬해법)

  • Ma, Sang-Back
    • The Transactions of the Korea Information Processing Society / v.4 no.11 / pp.2711-2720 / 1997
  • Multitasking and microtasking on CRAY machines provide yet another way to improve computational power. Since the CRAY-2 has 4 processors, we can achieve a speedup of up to 4 with properly designed algorithms. In this paper we present two parallelizations of linear system solution on the CRAY-2 with the multitasking and microtasking libraries. One is the LU decomposition of dense matrices and the other is the iterative solution of large sparse linear systems with the preconditioner proposed by Radicati di Brozolo. In the first case we realized a speedup of 1.3 with 2 processors for a matrix of dimension 600 with multitasking, and in the second case a speedup of around 3 with 4 processors for a matrix of dimension 8192 with microtasking. In the first case the speedup is limited by the nonuniform vector lengths. In the second case the ILU(0) preconditioner with Radicati's technique seems to realize a reasonably high speedup with 4 processors.

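The dense part of the entry above is ordinary LU decomposition, i.e., Gaussian elimination with partial pivoting. The serial NumPy sketch below shows the factorization and the triangular solves; it is not the CRAY multitasking code, but the trailing-submatrix update it marks is the loop such parallelizations distribute across processors.

```python
import numpy as np

def lu_decompose(a):
    """LU factorization with partial pivoting (Gaussian elimination).
    Returns (lu, piv): L has a unit diagonal and is stored below U."""
    a = np.array(a, dtype=float)
    n = a.shape[0]
    piv = np.arange(n)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(a[k:, k]))        # pivot row
        if p != k:
            a[[k, p]] = a[[p, k]]
            piv[[k, p]] = piv[[p, k]]
        a[k + 1:, k] /= a[k, k]                    # multipliers (column of L)
        # Rank-1 trailing-submatrix update: the naturally parallel part
        a[k + 1:, k + 1:] -= np.outer(a[k + 1:, k], a[k, k + 1:])
    return a, piv

def lu_solve(lu, piv, b):
    """Forward/back substitution for P A x = P b."""
    y = np.array(b, dtype=float)[piv]
    n = len(y)
    for i in range(n):                             # L y = P b
        y[i] -= lu[i, :i] @ y[:i]
    for i in range(n - 1, -1, -1):                 # U x = y
        y[i] = (y[i] - lu[i, i + 1:] @ y[i + 1:]) / lu[i, i]
    return y

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    A, b = rng.standard_normal((6, 6)), rng.standard_normal(6)
    lu, piv = lu_decompose(A)
    print(np.allclose(A @ lu_solve(lu, piv, b), b))
```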

An Extraction Method of Glomerulus Region from Renal Tissue Image (신장조직 영상에서 사구체 영역의 추출법)

  • Kim, Eung-Kyeu
    • Journal of the Institute of Convergence Signal Processing / v.13 no.2 / pp.70-76 / 2012
  • In this paper, an automatic method for extracting the glomerulus region from human renal tissue images is presented. Important information reflecting the state of the kidneys is richly contained in the glomeruli, so extracting the glomerulus region from the renal tissue image should be the first step toward further quantitative analysis of the renal condition. In particular, there is no clear difference between the glomerulus and other tissues, so the glomerulus region cannot easily be extracted from its background by existing segmentation methods. The outer edge of a glomerulus region is regarded as a common property of regions of this kind: the original image is first convolved with a two-dimensional Gaussian distribution, the blurred image is then thresholded, and a closed curve corresponding to the outer edge is obtained by the usual pattern-processing operations such as thinning, branch cutting, and hole filling. Finally, the glomerulus region is obtained by extracting the area of the original image surrounded by the closed curve. The glomerulus regions are correctly extracted in 85 percent of cases, and the experimental results show that the proposed method is effective.
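
A minimal sketch of the blur-and-threshold stage described above, using SciPy: convolve with a 2-D Gaussian, threshold the blurred image, fill holes, and keep the largest connected region. The threshold polarity (glomerulus darker than background) and the Gaussian width are assumptions; the thinning and branch-cutting of the edge curve are not reproduced here.

```python
import numpy as np
from scipy import ndimage

def rough_glomerulus_mask(gray, sigma=8.0, offset=0.0):
    """Blur the tissue image with a 2-D Gaussian, threshold the blurred
    image, and keep the largest filled region as a rough glomerulus mask."""
    img = np.asarray(gray, dtype=float)
    blurred = ndimage.gaussian_filter(img, sigma)
    mask = blurred < blurred.mean() + offset      # assumes a darker glomerulus
    mask = ndimage.binary_fill_holes(mask)
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    return labels == (1 + np.argmax(sizes))       # largest connected region
```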

The Selective p-Distribution for Adaptive Refinement of L-Shaped Plates Subjected to Bending (휨을 받는 L-형 평판의 적응적 세분화를 위한 선택적 p-분배)

  • Woo, Kwang-Sung;Jo, Jun-Hyung;Lee, Seung-Joon
    • Journal of the Computational Structural Engineering Institute of Korea / v.20 no.5 / pp.533-541 / 2007
  • The Zienkiewicz-Zhu (Z/Z) error estimate is slightly modified for hierarchical p-refinement and is then applied to L-shaped plates subjected to bending to demonstrate its effectiveness. An adaptive procedure in finite element analysis is presented that p-refines the mesh in conjunction with an a posteriori error estimator based on the superconvergent patch recovery (SPR) technique. The modified Z/Z error-estimate p-refinement differs from the conventional approach in that high-order shape functions based on integrals of Legendre polynomials are used to interpolate displacements within an element, while basis functions of the same order based on the Pascal triangle are used to interpolate the recovered stresses. The least-squares method is used to fit a polynomial to the stresses computed at the sampling points. A strategy for finding a nearly optimal distribution of polynomial degrees on a fixed finite element mesh is discussed, such that a particular element is refined automatically to obtain an acceptable level of accuracy by increasing the p-levels non-uniformly or selectively. It is noted that the error decreases rapidly with an increase in the number of degrees of freedom, and the sequences of p-distributions obtained by the proposed error indicator closely follow the optimal trajectory.
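
The recovery step mentioned above, fitting a polynomial to stresses at sampling points by least squares with a Pascal-triangle basis, can be sketched as follows. This is a generic patch fit, not the paper's full SPR/Z-Z estimator, and the sampling points and stress field in the example are synthetic.

```python
import numpy as np

def spr_patch_fit(points, stresses, degree=2):
    """Least-squares fit of a complete 2-D polynomial (Pascal-triangle basis
    1, x, y, x^2, xy, y^2, ...) to stresses sampled over an element patch."""
    pts = np.asarray(points, dtype=float)
    terms = [(i, d - i) for d in range(degree + 1) for i in range(d + 1)]
    A = np.column_stack([pts[:, 0] ** i * pts[:, 1] ** j for i, j in terms])
    coef, *_ = np.linalg.lstsq(A, np.asarray(stresses, float), rcond=None)
    def recovered(x, y):
        return sum(c * x ** i * y ** j for c, (i, j) in zip(coef, terms))
    return recovered, coef

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    pts = rng.uniform(-1, 1, (12, 2))                     # sampling points
    true = lambda x, y: 3 + 2 * x - y + 0.5 * x * y        # toy stress field
    sigma = true(pts[:, 0], pts[:, 1]) + rng.normal(0, 0.01, 12)
    rec, _ = spr_patch_fit(pts, sigma)
    print(round(rec(0.2, -0.3), 3), round(true(0.2, -0.3), 3))
```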