• Title/Abstract/Keyword: maximum-likelihood detection

Detection of QTL on Bovine X Chromosome by Exploiting Linkage Disequilibrium

  • Kim, Jong-Joo
    • Asian-Australasian Journal of Animal Sciences / Vol. 21, No. 5 / pp.617-623 / 2008
  • A fine-mapping method exploiting linkage disequilibrium was used to detect quantitative trait loci (QTL) on the X chromosome affecting milk production, body conformation and productivity traits. The pedigree comprised 22 paternal half-sib families of Black-and-White Holstein bulls in the Netherlands in a grand-daughter design, for a total of 955 sons. Twenty-five microsatellite markers were genotyped to construct a linkage map of chromosome X spanning 170 Haldane cM with an average inter-marker distance of 7.1 cM. A covariance matrix whose elements were identity-by-descent probabilities between haplotypes for the QTL allele effects was incorporated into the animal model, and a restricted maximum-likelihood method was applied to test for the presence of QTL using the LDVCM program. Significance thresholds were obtained by permuting haplotypes against phenotypes and by using a false discovery rate procedure. Seven QTL responsible for conformation types (teat length, rump width, rear leg set, angularity and fore udder attachment), behavior (temperament) and a mixture of production and health (durable prestation) were detected at the suggestive level. Some QTL affecting teat length, rump width, durable prestation and rear leg set had small numbers of haplotype clusters, which may indicate good classification of alleles for causal genes or markers that are tightly associated with the causal mutation. However, higher marker density is required to better refine the QTL positions and to better characterize functionally distinct haplotypes, which will provide information for finding the causal genes for these traits.
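
The permutation-threshold step can be illustrated with a deliberately simple sketch. The code below permutes phenotypes against haplotype groups and takes an empirical quantile of a crude between-group statistic; it is only a hedged stand-in for the REML likelihood-ratio testing done with LDVCM, and the statistic and the simulated data are invented for illustration.

```python
import numpy as np

def permutation_threshold(phenotypes, group_ids, n_perm=1000, alpha=0.05, seed=0):
    """Empirical significance threshold obtained by permuting phenotypes
    against haplotype groups.  The statistic here (variance of the group
    means) is only a crude stand-in for the REML likelihood-ratio statistic."""
    rng = np.random.default_rng(seed)

    def stat(y, g):
        groups = np.unique(g)
        return np.var([y[g == k].mean() for k in groups])

    observed = stat(phenotypes, group_ids)
    null = np.array([stat(rng.permutation(phenotypes), group_ids)
                     for _ in range(n_perm)])
    return observed, np.quantile(null, 1.0 - alpha)

# toy usage: 100 sons in 10 haplotype clusters, simulated phenotypes
rng = np.random.default_rng(1)
y = rng.normal(size=100)
g = np.repeat(np.arange(10), 10)
observed, threshold = permutation_threshold(y, g)
print(observed, threshold)
```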

Two New Types of Candidate Symbol Sorting Schemes for Complexity Reduction of a Sphere Decoder

  • 전은성;김요한;김동구
    • 한국통신학회논문지 / Vol. 32, No. 9C / pp.888-894 / 2007
  • The computational complexity of a sphere decoder (SD) is conventionally reduced by a decoding-order scheme that sorts candidate symbols in ascending order of their Euclidean distance from the output of a zero-forcing (ZF) receiver. However, since the ZF output may not be a reliable sorting reference, we propose two types of sorting schemes that allow faster decoding. The first uses the lattice points newly found in the previous search round instead of the ZF output (Type I). Since these lattice points are closer to the received signal than the ZF output, they serve as a more reliable sorting reference for finding the maximum likelihood (ML) solution. The second scheme sorts candidate symbols in descending order of the number of candidate symbols in the following layer, called child symbols (Type II). Both proposed sorting schemes can be combined with layer sorting for further complexity reduction. Through simulation, the Type I and Type II sorting schemes were found to provide 12% and 20% complexity reductions, respectively, over the conventional sorting scheme. When combined with layer sorting, Type I and Type II provide an additional 10-15% complexity reduction while maintaining detection performance.
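
As a hedged illustration of the candidate-ordering idea (not the authors' decoder), the sketch below sorts the candidate symbols of one layer by their distance to a sorting reference. Passing the zero-forcing estimate gives the conventional ordering, while passing the most recently found lattice point corresponds to the Type I scheme described above; the QPSK constellation and the reference values are made up.

```python
import numpy as np

QPSK = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

def sort_candidates(reference, constellation=QPSK):
    """Order the candidate symbols of one layer by their Euclidean distance
    to the sorting reference (ascending), as in a depth-first sphere search."""
    return constellation[np.argsort(np.abs(constellation - reference))]

# conventional ordering: reference is the zero-forcing estimate
zf_estimate = 0.3 + 0.9j
print(sort_candidates(zf_estimate))

# Type I idea: reuse the most recently found lattice point as the reference,
# since it tends to lie closer to the ML solution than the ZF output
latest_lattice_point = 0.7 + 0.6j
print(sort_candidates(latest_lattice_point))
```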

Subcarrier and Power Allocation for Multiuser MIMO-OFDM Systems with Various Detectors

  • Mao, Jing;Chen, Chen;Bai, Lin;Xiang, Haige;Choi, Jinho
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 11, No. 10 / pp.4738-4758 / 2017
  • Resource allocation plays a crucial role in multiuser multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) systems to improve overall system performance. While previously proposed resource allocation algorithms have mainly been designed from an information-theoretic point of view, we formulate the resource allocation problem as an average bit error rate (BER) minimization problem subject to a total power constraint when realistic MIMO detection techniques are employed. We then derive the optimal subcarrier and power allocation algorithms for three types of well-known MIMO detectors: the maximum likelihood (ML) detector, linear detectors, and successive interference cancellation (SIC) detectors. To reduce the complexity, we also propose a two-step suboptimal algorithm that separates subcarrier and power allocation for each detector, and we analyze the diversity gain of the proposed suboptimal algorithms for the various MIMO detectors. Simulation results confirm that the proposed suboptimal algorithm for each detector achieves performance comparable to the optimal allocation at much lower complexity. Moreover, the suboptimal algorithms are shown to outperform the conventional algorithms known in the literature.
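
The two-step structure (subcarriers first, then power) can be shown with a deliberately simplified heuristic: give each subcarrier to the user with the largest effective channel gain, then split the power equally. This is only a hedged stand-in for the detector-aware suboptimal algorithm of the paper, and the Rayleigh-distributed gains are simulated.

```python
import numpy as np

def two_step_allocation(gains, total_power):
    """Simplified two-step heuristic: (1) assign each subcarrier to the user
    with the largest effective channel gain, (2) split the total power
    equally across subcarriers.

    gains: (num_users, num_subcarriers) effective channel gains."""
    assignment = np.argmax(gains, axis=0)                      # user per subcarrier
    power = np.full(gains.shape[1], total_power / gains.shape[1])
    return assignment, power

# toy usage: 4 users, 8 subcarriers, Rayleigh-distributed gains
gains = np.random.default_rng(0).rayleigh(size=(4, 8))
assignment, power = two_step_allocation(gains, total_power=1.0)
print(assignment, power)
```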

지하매설 플라스틱 배관의 누수지점 추정을 위한 창함수 비교 연구 (Comparison of Window Functions for the Estimation of Leak Location for Underground Plastic Pipes)

  • 이영섭
    • 한국소음진동공학회논문집 / Vol. 20, No. 6 / pp.568-576 / 2010
  • It is widely known that locating leaks in underground plastic pipelines is much more difficult than in cast iron pipelines. The precision of leak location depends on the propagation speed of the leak signal and on the estimation of the time delay between the two sensors mounted on the pipeline. In this paper, six different windowing filters are considered to improve the time delay estimation, especially for plastic pipelines. The time delay is usually estimated from the peak time of the cross-correlation function. The filtering windows, including the rectangular, Roth, Wiener, SCOT, PHAT and maximum-likelihood windows, are applied to derive generalized cross-correlation functions and are compared with each other. Experimental results for an actual underground plastic water supply pipeline show that introducing the filtering windows improves the precision of the time delay estimation. Some window functions provide excellent leak-locating capability for a 98 m long plastic pipe, with an error of less than 1% of the pipe length. A new probabilistic approach that combines the results from all the filtering windows is also suggested for better leak location.
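
The generalized cross-correlation itself is easy to sketch; the windows listed above differ only in the frequency weighting applied to the cross-spectrum. The example below shows the rectangular and PHAT weightings for equal-length sensor records and assumes sensors at the two ends of the pipe; the sampling rate, delay and wave speed are made-up values, not the experimental ones.

```python
import numpy as np

def gcc_delay(x1, x2, fs, window="phat"):
    """Delay of x2 relative to x1 (positive when the leak noise reaches
    sensor 2 later), estimated from the generalized cross-correlation of two
    equal-length records.  window="rect" gives the plain cross-correlation;
    Roth, Wiener, SCOT and ML differ only in the frequency weighting."""
    n = 2 * len(x1)
    X1, X2 = np.fft.rfft(x1, n), np.fft.rfft(x2, n)
    cross = np.conj(X1) * X2
    if window == "phat":
        cross /= np.abs(cross) + 1e-12          # phase transform weighting
    cc = np.fft.irfft(cross, n)
    lags = np.concatenate((np.arange(n // 2), np.arange(-n // 2, 0)))
    return lags[np.argmax(np.abs(cc))] / fs

def leak_position(tau, pipe_length, wave_speed):
    """Distance of the leak from sensor 1 when the sensors sit at the two
    pipe ends: d1 + d2 = L and tau = (d2 - d1) / c."""
    return (pipe_length - wave_speed * tau) / 2.0

# toy usage: white leak noise reaching sensor 2 three milliseconds later
fs, delay = 10_000, 30
s = np.random.default_rng(0).normal(size=2 * fs)
x1, x2 = s[delay:], s[:-delay]
tau = gcc_delay(x1, x2, fs)
print(tau, leak_position(tau, pipe_length=98.0, wave_speed=400.0))  # assumed wave speed
```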

4x4 MIMO 알고리즘 구현 및 결과에 대한 검증 방법 (Verification method for 4x4 MIMO algorithm implementation and results)

  • 최준수;허창우
    • 한국정보통신학회논문지 / Vol. 19, No. 5 / pp.1157-1162 / 2015
  • In this paper, an OFDM-based 4x4 MIMO algorithm is designed and implemented, and a method for verifying the implemented results is presented. The algorithm applies MRVD and QRM-MLD. Channel estimation and the MIMO algorithm are first designed as floating-point and fixed-point models using Matlab and Simulink, and then implemented in VHDL using Modelsim. To verify the performance of the implemented algorithm, the results of the designed Simulink model, the Modelsim simulation, ISE ChipScope, and oscilloscope measurements are compared. This approach makes it possible to verify the implemented algorithm before the complete system is available. The verification confirmed that the ChipScope and oscilloscope results are identical and that the design can be applied to a backhaul system.
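
To make the QRM-MLD part concrete, here is a minimal floating-point sketch of the general technique (QR decomposition followed by a breadth-limited M-algorithm tree search). It is an illustrative Python toy, not the fixed-point VHDL design verified in the paper; the QPSK constellation, the choice M = 4 and the simulated channel are assumptions.

```python
import numpy as np

QPSK = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

def qrm_mld(y, H, constellation=QPSK, M=4):
    """Breadth-limited tree search (QRM-MLD): QR-decompose the channel and,
    layer by layer, keep only the M partial symbol vectors with the smallest
    accumulated Euclidean metric."""
    Q, R = np.linalg.qr(H)
    z = Q.conj().T @ y
    n = H.shape[1]
    survivors = [(0.0, [])]          # (metric, symbols of layers layer+1 .. n-1)
    for layer in range(n - 1, -1, -1):
        candidates = []
        for metric, upper in survivors:
            for s in constellation:
                interference = sum(R[layer, layer + 1 + k] * upper[k]
                                   for k in range(len(upper)))
                residual = z[layer] - R[layer, layer] * s - interference
                candidates.append((metric + abs(residual) ** 2, [s] + upper))
        survivors = sorted(candidates, key=lambda c: c[0])[:M]
    return np.array(survivors[0][1])  # detected symbols for layers 0 .. n-1

# toy usage on a simulated 4x4 channel with low noise
rng = np.random.default_rng(0)
H = (rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))) / np.sqrt(2)
x = rng.choice(QPSK, size=4)
y = H @ x + 0.01 * (rng.normal(size=4) + 1j * rng.normal(size=4))
print(np.allclose(qrm_mld(y, H), x))
```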

Laver Farm Feature Extraction From Landsat ETM+ Using Independent Component Analysis

  • Han J. G.;Yeon Y. K.;Chi K. H.;Hwang J. H.
    • 대한원격탐사학회:학술대회논문집 / Proceedings of ISRS 2004 / pp.359-362 / 2004
  • This paper proposes an ICA-based feature extraction algorithm for multi-dimensional imagery in which the reflectance spectrum of each pixel is assumed to be a linear mixture of different material types, namely a target feature and background features. A Landsat ETM+ satellite image has a multi-dimensional data structure in which the target feature to be extracted is mixed with various background features. To effectively eliminate the background features (tidal flat, seawater, etc.) around the target feature (laver farm), the pixel spectra are projected onto the orthogonal complement of the background-feature subspace, so that the remaining spectral component of each pixel can be attributed to the target feature. To demonstrate the effectiveness of the proposed ICA-based feature extraction method, it is applied to laver farm extraction from a Landsat ETM+ image and compared with the most widely used conventional method, maximum-likelihood classification, in terms of extraction accuracy and the noise remaining after extraction. The results show that the proposed method effectively eliminates the background features from the mixed spectra and achieves excellent detection performance.
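
A hedged sketch of the background-suppression step described above: project each pixel spectrum onto the orthogonal complement of the subspace spanned by the background spectra. This is the generic orthogonal-subspace-projection operation only, not the full ICA pipeline of the paper, and the band count and sample spectra are hypothetical.

```python
import numpy as np

def suppress_background(pixels, background_spectra):
    """Project pixel spectra onto the orthogonal complement of the background
    subspace (e.g. tidal flat, seawater); what remains is attributed to the
    target feature (laver farm).

    pixels:             (num_pixels, num_bands) reflectance vectors
    background_spectra: (num_bands, num_background_endmembers)"""
    B = background_spectra
    projector = np.eye(B.shape[0]) - B @ np.linalg.pinv(B)  # I - B (B^T B)^-1 B^T
    return pixels @ projector                               # projector is symmetric

# toy usage: 6 spectral bands, two background endmembers, one target spectrum
rng = np.random.default_rng(0)
background = rng.random((6, 2))
target = rng.random(6)
pixels = 0.7 * background[:, 0] + 0.3 * target + 0.01 * rng.normal(size=(100, 6))
residual = suppress_background(pixels, background)
print(residual.shape)
```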

Nonbinary Convolutional Codes and Modified M-FSK Detectors for Power-Line Communications Channel

  • Ouahada, Khmaies
    • Journal of Communications and Networks / Vol. 16, No. 3 / pp.270-279 / 2014
  • The Viterbi decoding algorithm, which provides maximum-likelihood decoding, is currently the most widely used technique for decoding codes with a state description, including the class of linear error-correcting convolutional codes. Two classes of nonbinary convolutional codes are presented: distance-preserving mapping convolutional codes and M-ary convolutional codes, designed respectively from the distance-preserving mappings technique and from the implementation of conventional convolutional codes in Galois fields of order higher than two. We also investigate the performance of these codes when combined with a multiple frequency-shift keying (M-FSK) modulation scheme to correct narrowband interference (NBI) on the power-line communications channel. The modification of certain detectors of the M-FSK demodulator to refine the selection and detection at the decoder is also presented. The M-FSK detectors used in our simulations are discussed and their chosen values are justified. The obtained results are interesting and promising, and show a very strong link between the designed codes and the detector selected for M-FSK modulation. An important gain improvement for certain values of the modified detectors is also observed. The paper also shows that the newly designed codes outperform conventional convolutional codes in an NBI environment.
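
Since the decoding builds on the Viterbi algorithm, here is a minimal hard-decision Viterbi decoder for the binary rate-1/2 (7,5) convolutional code. It illustrates the maximum-likelihood trellis search only; the nonbinary codes and the M-FSK detectors of the paper are not reproduced, and the example bit sequence is made up.

```python
import numpy as np

G = [0b111, 0b101]          # generator polynomials (7, 5) in octal, K = 3
N_STATES = 4                # 2**(K-1)

def conv_encode(bits):
    """Rate-1/2 binary convolutional encoder (newest bit in the register MSB)."""
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state
        out += [bin(reg & g).count("1") % 2 for g in G]
        state = reg >> 1
    return out

def viterbi_decode(received):
    """Hard-decision Viterbi decoding: maximum-likelihood sequence detection
    over the code trellis for a binary symmetric channel."""
    INF = float("inf")
    metric = [0.0] + [INF] * (N_STATES - 1)
    paths = [[] for _ in range(N_STATES)]
    for t in range(len(received) // 2):
        r = received[2 * t: 2 * t + 2]
        new_metric = [INF] * N_STATES
        new_paths = [None] * N_STATES
        for state in range(N_STATES):
            if metric[state] == INF:
                continue
            for b in (0, 1):
                reg = (b << 2) | state
                expected = [bin(reg & g).count("1") % 2 for g in G]
                branch = sum(e != x for e, x in zip(expected, r))
                nxt = reg >> 1
                if metric[state] + branch < new_metric[nxt]:
                    new_metric[nxt] = metric[state] + branch
                    new_paths[nxt] = paths[state] + [b]
        metric, paths = new_metric, new_paths
    return paths[int(np.argmin(metric))]

# toy usage: one channel bit flipped, still decoded correctly
bits = [1, 0, 1, 1, 0, 0, 1, 0]
coded = conv_encode(bits)
coded[3] ^= 1
print(viterbi_decode(coded) == bits)
```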

RMC를 이용한 미지 선원의 방향, 거리 예측 (Estimating the Direction and Distance of an Unknown Radiation Source Using RMC)

  • 신영준;김기현;이계민
    • 전자공학회논문지 / Vol. 53, No. 9 / pp.118-125 / 2016
  • When responding to a radioactive release accident or detecting nuclear material for nuclear security verification, it is important to determine the location of an unknown radiation-emitting source. A rotating modulation collimator (RMC) is one such instrument: it can remotely sense an unknown source and locate it through imaging. In this paper, based on Kowash's work, we introduce the system model of the rotating modulation collimator and an algorithm for reconstructing its images. However, the resulting image can show the direction of the source but cannot determine its distance. It also suffers from an ambiguity in which the source is estimated not only in its true direction but also in the 180° symmetric direction. In this paper, we resolve the directional ambiguity of the imaging result and propose a method for estimating the distance using two RMCs. The performance of the proposed method is verified using RMC simulation data.
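
As a geometric aside on the two-RMC distance estimation (not the RMC imaging algorithm itself), two direction estimates taken from two detector positions fix the source location by ray intersection. The sketch below assumes a 2-D geometry and invented bearings.

```python
import numpy as np

def triangulate(p1, theta1, p2, theta2):
    """Locate a source from two bearing estimates (radians) taken at two
    detector positions by intersecting the corresponding rays."""
    d1 = np.array([np.cos(theta1), np.sin(theta1)])
    d2 = np.array([np.cos(theta2), np.sin(theta2)])
    # solve p1 + t1 * d1 == p2 + t2 * d2 for (t1, t2)
    t = np.linalg.solve(np.column_stack((d1, -d2)), np.asarray(p2) - np.asarray(p1))
    return np.asarray(p1) + t[0] * d1

# toy usage: two RMC positions 10 m apart, bearings of 45 and 135 degrees
print(triangulate((0.0, 0.0), np.deg2rad(45.0), (10.0, 0.0), np.deg2rad(135.0)))
# -> approximately (5, 5); the 180-degree ambiguity must be resolved beforehand
```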

Comparisons of Object Recognition Performance with 3D Photon Counting & Gray Scale Images

  • Lee, Chung-Ghiu;Moon, In-Kyu
    • Journal of the Optical Society of Korea / Vol. 14, No. 4 / pp.388-394 / 2010
  • In this paper, the object recognition performance of a photon counting integral imaging system is quantitatively compared with that of a conventional gray scale imaging system. For 3D imaging of objects with a small number of photons, the elemental image set of a 3D scene is obtained using the integral imaging setup. We assume that the elemental image detection follows a Poisson distribution. A computational geometrical ray back-propagation algorithm and a parametric maximum-likelihood estimator are applied to the photon counting elemental image set in order to reconstruct the original 3D scene. To evaluate the photon counting object recognition performance, the normalized correlation peaks between the reconstructed 3D scenes are calculated for varied and fixed total numbers of photons in the reconstructed sectional image while changing the total number of image channels in the integral imaging system. It is quantitatively shown that the recognition performance of the photon counting integral imaging (PCII) system can approach that of a conventional gray scale imaging system as the number of image viewing channels is increased up to a threshold point. We also present experiments to find the threshold number of image channels in the PCII system that guarantees recognition performance comparable to a gray scale imaging system. To the best of our knowledge, this is the first report comparing object recognition performance with 3D photon counting and gray scale images.
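
Two small pieces of the processing chain can be sketched in a hedged way: under the Poisson photon-count model, the maximum-likelihood estimate of the (scaled) irradiance at each pixel is the mean count over the image channels, and reconstructions are compared through a normalized correlation peak. The geometric ray back-propagation step is not reproduced here, and the synthetic scene and counts are made up.

```python
import numpy as np

def poisson_mle_reconstruction(aligned_counts):
    """Per-pixel maximum-likelihood estimate of the Poisson mean from
    photon-count images already back-propagated to a common plane:
    simply the sample mean over the image channels."""
    return np.mean(aligned_counts, axis=0)

def normalized_correlation_peak(image, reference):
    """Normalized correlation used to compare a reconstruction with a reference."""
    a = image - image.mean()
    b = reference - reference.mean()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# toy usage: 32 channels of sparse Poisson counts drawn from a synthetic scene
rng = np.random.default_rng(0)
scene = rng.random((64, 64))
counts = rng.poisson(0.1 * scene, size=(32, 64, 64))
recon = poisson_mle_reconstruction(counts)
print(normalized_correlation_peak(recon, scene))
```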

절단고정시간과 지연된 S-형태 NHPP 소프트웨어 신뢰모형에 근거한 학습효과특성 비교연구 (The Comparative Study for Property of Learning Effect based on Truncated time and Delayed S-Shaped NHPP Software Reliability Model)

  • 김희철
    • 디지털산업정보학회논문지 / Vol. 8, No. 4 / pp.25-34 / 2012
  • In this study, we consider the testing process carried out before designed software products are released, during which the software testing manager should be aware of the testing information in advance. From this perspective, effective learning effects are studied using NHPP software reliability models. Finite-failure nonhomogeneous Poisson process models are presented, and the property of the learning effect is applied based on truncated time and the delayed S-shaped software reliability model. Although software error detection techniques may be known in advance, the model considers both the factor of autonomously detected errors and a learning factor reflecting the tester's prior experience, and compares them so that the testing manager can set these factors precisely. As a result, it is confirmed that the model is generally efficient when the learning factor is greater than the autonomous error-detection factor. A failure data analysis was performed using time-between-failures data for small and large sample sizes. Parameter estimation was carried out using the maximum-likelihood estimation method, and model selection was performed using the mean square error and the coefficient of determination, after the efficiency of the data had been checked through trend analysis.
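
For reference, the delayed S-shaped NHPP has mean value function m(t) = a(1 - (1 + b t)e^(-b t)) and intensity lambda(t) = a b^2 t e^(-b t). The sketch below fits a and b by maximum likelihood for exact failure times and reports a mean-square-error figure; it is a generic illustration with made-up failure times, not the truncated-time learning-effect model of the paper.

```python
import numpy as np
from scipy.optimize import minimize

def mean_value(t, a, b):
    """Delayed S-shaped mean value function m(t) = a * (1 - (1 + b*t) * exp(-b*t))."""
    return a * (1.0 - (1.0 + b * t) * np.exp(-b * t))

def intensity(t, a, b):
    """Failure intensity lambda(t) = dm/dt = a * b**2 * t * exp(-b*t)."""
    return a * b ** 2 * t * np.exp(-b * t)

def neg_log_likelihood(params, failure_times, T):
    """NHPP log-likelihood for exact failure times on (0, T]:
    sum(log lambda(t_i)) - m(T)."""
    a, b = params
    if a <= 0.0 or b <= 0.0:
        return np.inf
    return -(np.sum(np.log(intensity(failure_times, a, b) + 1e-300))
             - mean_value(T, a, b))

# toy usage with made-up failure times (hours)
times = np.array([5.0, 9.0, 14.0, 21.0, 29.0, 40.0, 55.0, 74.0])
fit = minimize(neg_log_likelihood, x0=np.array([10.0, 0.05]),
               args=(times, times[-1]), method="Nelder-Mead")
a_hat, b_hat = fit.x
# goodness of fit: mean square error between observed and fitted cumulative counts
mse = np.mean((np.arange(1, len(times) + 1) - mean_value(times, a_hat, b_hat)) ** 2)
print(a_hat, b_hat, mse)
```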