• Title/Summary/Keyword: Maximum likelihood detection


Detection of QTL on Bovine X Chromosome by Exploiting Linkage Disequilibrium

  • Kim, Jong-Joo
    • Asian-Australasian Journal of Animal Sciences
    • /
    • v.21 no.5
    • /
    • pp.617-623
    • /
    • 2008
  • A fine-mapping method exploiting linkage disequilibrium was used to detect quantitative trait loci (QTL) on the X chromosome affecting milk production, body conformation and productivity traits. The pedigree comprised 22 paternal half-sib families of Black-and-White Holstein bulls in the Netherlands in a grand-daughter design, for a total of 955 sons. Twenty-five microsatellite markers were genotyped to construct a linkage map of the X chromosome spanning 170 Haldane cM with an average inter-marker distance of 7.1 cM. A covariance matrix of identity-by-descent probabilities between haplotypes for the QTL allele effects was incorporated into the animal model, and a restricted maximum-likelihood method was applied to test for the presence of QTL using the LDVCM program. Significance thresholds were obtained by permuting haplotypes against phenotypes and by using a false discovery rate procedure. Seven QTL responsible for conformation traits (teat length, rump width, rear leg set, angularity and fore udder attachment), behavior (temperament) and a mixture of production and health (durable prestation) were detected at the suggestive level. Some QTL affecting teat length, rump width, durable prestation and rear leg set had small numbers of haplotype clusters, which may indicate good classification of alleles for causal genes, or markers that are tightly associated with the causal mutation. However, higher marker density is required to better refine the QTL positions and to better characterize functionally distinct haplotypes, which will provide information for finding causal genes for these traits.
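
The permutation procedure used above for significance thresholds can be sketched generically as follows; the statistic, data shapes, and function names below are illustrative assumptions, not the LDVCM implementation:

```python
import numpy as np

def permutation_threshold(stat_fn, phenotypes, genotypes, n_perm=1000, alpha=0.05):
    """Empirical significance threshold: shuffle phenotypes against
    genotypes to destroy any true association, collect the statistic
    under this null, and return its (1 - alpha) quantile."""
    rng = np.random.default_rng(0)
    null_stats = [stat_fn(rng.permutation(phenotypes), genotypes)
                  for _ in range(n_perm)]
    return float(np.quantile(null_stats, 1.0 - alpha))

# Illustrative statistic: absolute phenotype-marker correlation
abs_corr = lambda y, g: abs(np.corrcoef(y, g)[0, 1])
```

An observed statistic exceeding the returned threshold is then declared significant at level alpha.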

Two New Types of Candidate Symbol Sorting Schemes for Complexity Reduction of a Sphere Decoder

  • Jeon, Eun-Sung;Kim, Yo-Han;Kim, Dong-Ku
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.32 no.9C
    • /
    • pp.888-894
    • /
    • 2007
  • The computational complexity of a sphere decoder (SD) is conventionally reduced by a decoding-order scheme that sorts candidate symbols in ascending order of their Euclidean distance from the output of a zero-forcing (ZF) receiver. However, since the ZF output may not be a reliable sorting reference, we propose two sorting schemes that allow faster decoding. The first uses the lattice points newly found in the previous search round instead of the ZF output (Type I). Since these lattice points are closer to the received signal than the ZF output, they serve as a more reliable sorting reference for finding the maximum likelihood (ML) solution. The second scheme sorts candidate symbols in descending order of the number of candidate symbols in the following layer, called child symbols (Type II). Both proposed sorting schemes can be combined with layer sorting for further complexity reduction. In simulations, the Type I and Type II sorting schemes provided 12% and 20% complexity reduction, respectively, over conventional sorting schemes. When combined with layer sorting, Type I and Type II provide an additional 10-15% complexity reduction while maintaining detection performance.
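
The ML solution a sphere decoder searches for can be written down directly for a toy system. The sketch below brute-forces the ML metric over every candidate vector (the channel matrix, constellation, and noise level are made-up values); this exhaustive search is exactly what a sphere decoder prunes by discarding branches whose partial distance exceeds the current radius:

```python
import numpy as np
from itertools import product

# Toy real-valued 4x4 MIMO system with 4-PAM symbols per antenna
constellation = np.array([-3.0, -1.0, 1.0, 3.0])
H = np.array([[2.0, 0.3, -0.1, 0.2],     # assumed well-conditioned channel
              [0.1, 1.8, 0.2, -0.3],
              [-0.2, 0.1, 2.1, 0.4],
              [0.3, -0.2, 0.1, 1.9]])
rng = np.random.default_rng(1)
x_true = rng.choice(constellation, size=4)   # transmitted symbol vector
y = H @ x_true + 0.05 * rng.normal(size=4)   # received signal

# ZF output: the conventional starting point for candidate ordering
x_zf = np.linalg.solve(H, y)

# ML detection: minimize ||y - Hx||^2 over all 4^4 candidate vectors
x_ml = min(product(constellation, repeat=4),
           key=lambda x: float(np.sum((y - H @ np.array(x)) ** 2)))
```

The sorting schemes in the paper decide the order in which these candidates are visited in the decoder's tree search, which determines how quickly the radius shrinks.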

Subcarrier and Power Allocation for Multiuser MIMO-OFDM Systems with Various Detectors

  • Mao, Jing;Chen, Chen;Bai, Lin;Xiang, Haige;Choi, Jinho
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.11 no.10
    • /
    • pp.4738-4758
    • /
    • 2017
  • Resource allocation plays a crucial role in multiuser multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) systems for improving overall system performance. While previously proposed resource allocation algorithms are mainly designed from an information-theoretic point of view, we formulate the resource allocation problem as an average bit error rate (BER) minimization problem subject to a total power constraint when realistic MIMO detection techniques are employed. We then derive the optimal subcarrier and power allocation algorithms for three well-known types of MIMO detectors: the maximum likelihood (ML) detector, linear detectors, and successive interference cancellation (SIC) detectors. To reduce complexity, we also propose a two-step suboptimal algorithm that separates subcarrier and power allocation for each detector, and we analyze the diversity gain of the proposed suboptimal algorithms for the various MIMO detectors. Simulation results confirm that the proposed suboptimal algorithm for each detector achieves performance comparable to the optimal allocation at much lower complexity. Moreover, the suboptimal algorithms are shown to outperform the conventional algorithms known in the literature.
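
A two-step separation of subcarrier and power allocation can be sketched as below. Classic capacity water-filling stands in here for the paper's BER-minimizing power allocation, and all gains and dimensions are made-up values:

```python
import numpy as np

# Step 1: assign each subcarrier to the user with the largest channel gain.
# Step 2: water-fill the total power budget over the assigned gains.
rng = np.random.default_rng(2)
n_users, n_subcarriers, p_total = 3, 8, 8.0
gains = rng.rayleigh(size=(n_users, n_subcarriers)) ** 2   # per-link power gains

assignment = gains.argmax(axis=0)                    # subcarrier -> best user
g = gains[assignment, np.arange(n_subcarriers)]      # gain of each assigned link

def waterfill(g, p_total):
    """Allocate p_total over channels with gains g (classic water-filling)."""
    order = np.argsort(g)[::-1]
    for k in range(len(g), 0, -1):                   # try the k best channels
        idx = order[:k]
        level = (p_total + np.sum(1.0 / g[idx])) / k  # common water level
        p = level - 1.0 / g[idx]
        if p.min() >= 0.0:                           # feasible: no negative power
            out = np.zeros(len(g))
            out[idx] = p
            return out

power = waterfill(g, p_total)
```

Separating the two steps is what makes the suboptimal algorithm cheap: the joint problem couples every user and subcarrier, while each step above is solvable in closed form.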

Comparison of Window Functions for the Estimation of Leak Location for Underground Plastic Pipes (지하매설 플라스틱 배관의 누수지점 추정을 위한 창함수 비교 연구)

  • Lee, Young-Sup
    • Transactions of the Korean Society for Noise and Vibration Engineering
    • /
    • v.20 no.6
    • /
    • pp.568-576
    • /
    • 2010
  • It is widely known that leak locating for underground plastic pipelines is much more difficult than for cast iron pipelines. The precision of leak locating depends on the propagation speed of the leak signal and the estimated time delay between the two sensors on the pipeline. In this paper, six different windowing filters are considered to improve the time delay estimation, especially for plastic pipelines. The time delay is usually estimated from the peak of the cross-correlation function. Filtering windows including the rectangular, Roth, Wiener, SCOT, PHAT and maximum likelihood windows are applied to derive generalized cross-correlation functions and are compared with each other. Experimental results for an actual underground plastic water supply pipeline show that introducing the filtering windows improves the precision of the time delay estimation. Some window functions provide excellent leak locating capability for a 98 m long plastic pipe, with an error of less than 1% of the pipe length. A new probabilistic approach that combines the results from all the filtering windows is also suggested for better leak locating.
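
The generalized cross-correlation with a weighting window can be sketched as follows; only three of the six windows are shown, and the signals and sampling rate are made-up values:

```python
import numpy as np

def gcc_time_delay(x, y, fs, window="phat"):
    """Estimate the delay of x relative to y (in seconds) via
    generalized cross-correlation with a frequency-domain window."""
    n = len(x) + len(y)
    X = np.fft.rfft(x, n=n)
    Y = np.fft.rfft(y, n=n)
    cross = X * np.conj(Y)
    if window == "phat":            # phase transform: keep phase, drop magnitude
        cross /= np.abs(cross) + 1e-12
    elif window == "roth":          # Roth window: normalize by |X|^2
        cross /= np.abs(X) ** 2 + 1e-12
    # window == "rect" leaves the plain cross-spectrum (ordinary correlation)
    cc = np.fft.irfft(cross, n=n)
    cc = np.concatenate((cc[-(len(y) - 1):], cc[:len(x)]))  # lags -(Ny-1)..Nx-1
    lag = int(np.argmax(np.abs(cc))) - (len(y) - 1)
    return lag / fs
```

Given the estimated delay t, the propagation speed c of leak noise in the pipe, and sensor spacing L, the standard correlation formula places the leak at distance (L - c·t)/2 from the nearer sensor.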

Verification method for 4x4 MIMO algorithm implementation and results (4x4 MIMO 알고리즘 구현 및 결과에 대한 검증 방법)

  • Choi, Jun-su;Hur, Chang-wu
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.19 no.5
    • /
    • pp.1157-1162
    • /
    • 2015
  • This paper presents the design and implementation of a 4x4 MIMO algorithm based on OFDM, and a method for verifying the implemented result. The algorithm applies MRVD and QRM-MLD. Matlab and Simulink are used to design the channel estimation and MIMO algorithm as floating-point and fixed-point models, after which the design is implemented in VHDL and simulated in ModelSim. The performance of the algorithm is checked by comparing the Simulink model, the ModelSim simulation, and ISE ChipScope captures with results measured by an oscilloscope. This method is useful for verifying an algorithm on an incomplete system. Agreement between the ChipScope results and the oscilloscope measurements is confirmed, so the method can be applied to backhaul systems.

Laver Farm Feature Extraction From Landsat ETM+ Using Independent Component Analysis

  • Han J. G.;Yeon Y. K.;Chi K. H.;Hwang J. H.
    • Proceedings of the KSRS Conference
    • /
    • 2004.10a
    • /
    • pp.359-362
    • /
    • 2004
  • This paper proposes an ICA-based feature extraction algorithm for multi-dimensional imagery, in which the reflectance spectrum of each pixel is assumed to be a linear mixture of spectra from different types of material (a target feature and background features). A Landsat ETM+ satellite image has such a multi-dimensional data structure: the target feature to be extracted is mixed with various background features. To effectively eliminate the background features (tidal flat, seawater, etc.) around the target feature (laver farms), each pixel spectrum is projected onto the subspace orthogonal to the background spectra; the spectral component remaining in the pixel can then be presumed free of the background contribution. To demonstrate the merit of the proposed ICA-based method, it is applied to laver farm feature extraction from a Landsat ETM+ satellite image and compared with the most widely used conventional method, maximum likelihood, in terms of extraction accuracy and the noise level remaining after extraction. The results show that the proposed method effectively eliminates the background features around the mixed spectra when extracting the target feature, and thus offers excellent detection efficiency.
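
The orthogonal-subspace projection step described above can be sketched in a few lines. The 4-band spectra below are invented numbers, not Landsat ETM+ signatures:

```python
import numpy as np

# Suppress known background spectra before scoring pixels against a target.
U = np.array([[0.9, 0.1],
              [0.1, 0.8],
              [0.0, 0.3],
              [0.2, 0.1]])              # columns: background spectra
d = np.array([0.1, 0.2, 0.9, 0.3])     # target (laver farm) spectrum

# Projector onto the orthogonal complement of the background subspace
P = np.eye(U.shape[0]) - U @ np.linalg.pinv(U)

def osp_score(r):
    """Detector output for a pixel spectrum r: background is annihilated."""
    return d @ P @ r
```

Any pixel lying entirely in the background subspace scores (numerically) zero, while pixels containing the target component score positively.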

Nonbinary Convolutional Codes and Modified M-FSK Detectors for Power-Line Communications Channel

  • Ouahada, Khmaies
    • Journal of Communications and Networks
    • /
    • v.16 no.3
    • /
    • pp.270-279
    • /
    • 2014
  • The Viterbi decoding algorithm, which provides maximum-likelihood decoding, is currently the most widely used technique for decoding codes that have a state description, including the class of linear error-correcting convolutional codes. Two classes of nonbinary convolutional codes are presented: distance-preserving mapping convolutional codes and M-ary convolutional codes, designed, respectively, with the distance-preserving mappings technique and by implementing conventional convolutional codes in Galois fields of order higher than two. We also investigated the performance of these codes when combined with a multiple frequency-shift keying (M-FSK) modulation scheme to correct narrowband interference (NBI) in the power-line communications channel. The modification of certain detectors of the M-FSK demodulator, to refine selection and detection at the decoder, is also presented. The M-FSK detectors used in our simulations are discussed and their chosen values justified. The results show a very strong link between the designed codes and the detector selected for M-FSK modulation, and an important gain improvement was observed for certain values of the modified detectors. The paper also shows that the newly designed codes outperform conventional convolutional codes in an NBI environment.
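
A minimal hard-decision Viterbi decoder for the textbook binary rate-1/2 convolutional code with octal generators (7, 5) illustrates the trellis search; the paper's nonbinary GF(q) codes use the same dynamic program with a larger symbol alphabet. The code and parameters here are a standard example, not the paper's designs:

```python
G = [0o7, 0o5]      # generator polynomials (octal), constraint length 3
N_STATES = 4        # 2^(K-1) encoder states

def encode(bits):
    """Rate-1/2 convolutional encoder starting from the all-zero state."""
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state
        out += [bin(reg & g).count("1") & 1 for g in G]
        state = reg >> 1
    return out

def viterbi_decode(received, n_bits):
    """Hard-decision ML decoding: minimize Hamming distance over the trellis."""
    INF = float("inf")
    metric = [0.0] + [INF] * (N_STATES - 1)    # encoder starts in state 0
    paths = [[] for _ in range(N_STATES)]
    for t in range(n_bits):
        r = received[2 * t: 2 * t + 2]
        new_metric = [INF] * N_STATES
        new_paths = [None] * N_STATES
        for s in range(N_STATES):
            for b in (0, 1):                   # hypothesized input bit
                reg = (b << 2) | s
                expected = [bin(reg & g).count("1") & 1 for g in G]
                ns = reg >> 1                  # next state
                m = metric[s] + sum(e != x for e, x in zip(expected, r))
                if m < new_metric[ns]:         # keep the survivor path
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[metric.index(min(metric))]
```

Because this code's free distance is 5, flipping a single coded bit still yields the correct message as the unique minimum-distance path.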

Estimating the Direction and Distance of an Unknown Radiation Source Using RMC (RMC를 이용한 미지 선원의 방향, 거리 예측)

  • Shin, Youngjun;Kim, Geehyun;Lee, Gyemin
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.53 no.9
    • /
    • pp.118-125
    • /
    • 2016
  • Rotating modulation collimator (RMC) imaging is a remote sensing technique for radiation sources. This paper introduces an RMC system model and its image reconstruction algorithm based on Kowash's research. The reconstructed image can show the direction of a source, but the distance to the source cannot be recovered, and the RMC image suffers from a 180° ambiguity. In this paper, we propose a distance estimation method using two RMCs, together with a solution to the ambiguity, and demonstrate its performance using simulated RMC data.
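
Once each RMC yields a direction estimate, distance estimation reduces to intersecting two bearing lines. The sketch below is plain 2D triangulation under that assumption; the positions and angles are made up, and it ignores the 180° ambiguity that the paper also resolves:

```python
import numpy as np

def triangulate(p1, theta1, p2, theta2):
    """Intersect two bearing lines: a sensor at p1 sees the source along
    angle theta1, a sensor at p2 along theta2 (angles in radians, 2D)."""
    d1 = np.array([np.cos(theta1), np.sin(theta1)])
    d2 = np.array([np.cos(theta2), np.sin(theta2)])
    # Solve p1 + t1*d1 == p2 + t2*d2 for the ranges t1, t2
    A = np.column_stack([d1, -d2])
    t1, _ = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
    return np.asarray(p1, float) + t1 * d1
```

The solved t1 is itself the range from the first sensor, which is the distance the single-RMC image cannot provide.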

Comparisons of Object Recognition Performance with 3D Photon Counting & Gray Scale Images

  • Lee, Chung-Ghiu;Moon, In-Kyu
    • Journal of the Optical Society of Korea
    • /
    • v.14 no.4
    • /
    • pp.388-394
    • /
    • 2010
  • In this paper, the object recognition performance of a photon counting integral imaging system is quantitatively compared with that of a conventional gray scale imaging system. For 3D imaging of objects with a small number of photons, the elemental image set of a 3D scene is obtained using an integral imaging setup, and the elemental image detection is assumed to follow a Poisson distribution. A computational geometrical ray back-propagation algorithm and a parametric maximum likelihood estimator are applied to the photon counting elemental image set in order to reconstruct the original 3D scene. To evaluate photon counting object recognition performance, the normalized correlation peaks between the reconstructed 3D scenes are calculated as the total number of image channels in the integral imaging system is changed, for both varied and fixed total numbers of photons in the reconstructed sectional image. It is quantitatively illustrated that the recognition performance of the photon counting integral imaging (PCII) system approaches that of a conventional gray scale imaging system as the number of image viewing channels is increased up to a threshold point. We also present experiments to find the threshold number of image channels in the PCII system that guarantees recognition performance comparable to a gray scale imaging system. To the best of our knowledge, this is the first report comparing object recognition performance with 3D photon counting and gray scale images.
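
The Poisson photon-counting model and its parametric ML estimate can be sketched in one dimension. For Poisson data the MLE of each pixel's rate is simply its observed count; the 4-pixel "scene" and photon budget below are made-up values:

```python
import numpy as np

# Photon-limited detection: pixel counts are Poisson with means
# proportional to the scene irradiance.
rng = np.random.default_rng(0)
scene = np.array([0.1, 0.4, 0.3, 0.2])       # normalized irradiance
n_photons = 10000                            # expected total photon count
counts = rng.poisson(n_photons * scene)      # Poisson photon counting model

# Parametric MLE of the Poisson rates, renormalized to compare with the scene
estimate = counts / counts.sum()
```

As the photon budget grows, shot noise shrinks like the square root of the counts, which is why the PCII system approaches gray-scale performance once enough channels contribute photons.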

The Comparative Study for Property of Learning Effect based on Truncated time and Delayed S-Shaped NHPP Software Reliability Model (절단고정시간과 지연된 S-형태 NHPP 소프트웨어 신뢰모형에 근거한 학습효과특성 비교연구)

  • Kim, Hee Cheul
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.8 no.4
    • /
    • pp.25-34
    • /
    • 2012
  • In this study, the testing information that a software testing manager should know in advance, during the testing process before release of a designed software product, is considered from the perspective of learning effects, using NHPP software reliability models. Finite-failure nonhomogeneous Poisson process models are presented and applied with learning-effect properties based on truncated time and the delayed S-shaped software reliability model. The error-detection technique is known in advance, and the problem of the testing manager precisely setting the error-detection factors, considering both errors found autonomously and the learning factor gained from prior experience, is presented and compared. As a result, it is confirmed that the model is generally efficient when the learning factor is greater than the autonomous error-detection factor. A failure data analysis was performed using times between failures for both small and large sample sizes. Parameter estimation was carried out using the maximum likelihood estimation method, and model selection was performed using the mean squared error and the coefficient of determination, after checking the data's suitability through trend analysis.
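
The delayed S-shaped NHPP model has mean value function m(t) = a(1 - (1 + bt)e^{-bt}) and intensity λ(t) = ab²te^{-bt}. A profile-likelihood sketch of the maximum likelihood fit is shown below; the search bracket and the synthetic data in the usage are assumptions, not the paper's datasets:

```python
import numpy as np

def fit_delayed_s(times, T):
    """Profile-likelihood MLE for the delayed S-shaped NHPP model:
    m(t) = a*(1 - (1 + b*t)*exp(-b*t)), lambda(t) = a*b^2*t*exp(-b*t).
    Given b, the MLE of a is closed-form, so only b is searched."""
    t = np.asarray(times, dtype=float)
    n = len(t)

    def neg_loglik(b):
        frac = 1.0 - (1.0 + b * T) * np.exp(-b * T)   # m(T) / a
        a = n / frac                                   # MLE of a given b
        # log L = sum log lambda(t_i) - m(T)
        return -(n * np.log(a) + 2 * n * np.log(b)
                 + np.log(t).sum() - b * t.sum() - a * frac)

    lo, hi = 1e-3, 2.0                # assumed search bracket for b
    for _ in range(100):              # ternary search (assumes unimodality)
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if neg_loglik(m1) < neg_loglik(m2):
            hi = m2
        else:
            lo = m1
    b_hat = 0.5 * (lo + hi)
    a_hat = n / (1.0 - (1.0 + b_hat * T) * np.exp(-b_hat * T))
    return a_hat, b_hat
```

Here a is the expected total number of faults and b the detection-rate parameter; with the fit in hand, mean squared error against the observed cumulative failure counts gives the model-selection criterion the abstract mentions.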