• Title/Summary/Keyword: training sequence

Performance Analysis of Adaptive Equalizers to Enhance DTV Receiving Performance (DTV수신 성능 향상을 위한 적응 등화기의 성능 분석)

  • Yoon, Young-Jun; Sohn, Won
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2001.10a / pp.533-536 / 2001
  • The terrestrial DTV transmission system uses a decision feedback equalizer (DFE) driven by a training sequence. This equalizer adequately removes inter-symbol interference in static multipath environments, but it has been reported that it cannot do so sufficiently in dynamic multipath environments, because the period of the training sequence cannot be made short enough for reasons of channel efficiency. To address this problem, this paper implements a training-sequence-based DFE and linear equalizer, a blind DFE and linear equalizer, and a hybrid DFE, and analyzes their performance over static and dynamic channels.
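
As background for the equalizers compared above, here is a minimal numpy sketch of an LMS-adapted decision feedback equalizer driven by a known training sequence; the tap counts, step size, and toy channel are illustrative assumptions, not values from the paper.

```python
import numpy as np

def lms_dfe(rx, training, n_ff=8, n_fb=4, mu=0.01):
    """Train a decision feedback equalizer with LMS on a known training sequence.

    rx       : received (channel-distorted) samples, same length as training
    training : known training symbols (e.g., BPSK +/-1)
    Returns the feed-forward and feedback tap vectors.
    """
    ff = np.zeros(n_ff)          # feed-forward taps on received samples
    fb = np.zeros(n_fb)          # feedback taps on past decisions
    past = np.zeros(n_fb)        # most recent past decisions
    for n in range(n_ff, len(training)):
        x = rx[n - n_ff + 1:n + 1][::-1]      # feed-forward input (newest first)
        y = ff @ x - fb @ past                # equalizer output
        e = training[n] - y                   # error against the known symbol
        ff += mu * e * x                      # LMS tap updates
        fb -= mu * e * past
        past = np.roll(past, 1)
        past[0] = training[n]                 # during training, feed back known symbols
    return ff, fb

# Toy example: BPSK training sequence through a 2-tap static multipath channel.
rng = np.random.default_rng(0)
train = rng.choice([-1.0, 1.0], size=500)
rx = np.convolve(train, [1.0, 0.4])[:len(train)] + 0.05 * rng.standard_normal(len(train))
ff, fb = lms_dfe(rx, train)
```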

An end-to-end synthesis method for Korean text-to-speech systems (한국어 text-to-speech(TTS) 시스템을 위한 엔드투엔드 합성 방식 연구)

  • Choi, Yeunju; Jung, Youngmoon; Kim, Younggwan; Suh, Youngjoo; Kim, Hoirin
    • Phonetics and Speech Sciences / v.10 no.1 / pp.39-48 / 2018
  • A typical statistical parametric speech synthesis (text-to-speech, TTS) system consists of separate modules, such as a text analysis module, an acoustic modeling module, and a speech synthesis module. This causes two problems: 1) expert knowledge of each module is required, and 2) errors generated in one module accumulate as they pass through the subsequent modules. An end-to-end TTS system can avoid such problems by synthesizing voice signals directly from an input string. In this study, we implemented an end-to-end Korean TTS system using Google's Tacotron, which is an end-to-end TTS system based on a sequence-to-sequence model with an attention mechanism. We used 4392 utterances spoken by a Korean female speaker, an amount that corresponds to 37% of the dataset Google used for training Tacotron. Our system obtained a mean opinion score (MOS) of 2.98 and a degradation mean opinion score (DMOS) of 3.25. We discuss the factors that affected training of the system. Experiments demonstrate that the post-processing network needs to be designed considering the output language and input characters, and that, depending on the amount of training data, the maximum value of n for the n-grams modeled by the encoder should be kept small enough.
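
To illustrate the sequence-to-sequence-with-attention idea behind Tacotron, here is a toy character-to-mel-spectrogram model in PyTorch; it is a heavily simplified sketch, not the actual Tacotron architecture, and all layer sizes are hypothetical.

```python
import torch
import torch.nn as nn

class TinyTacotronLike(nn.Module):
    """A toy character->mel seq2seq with attention (not the real Tacotron)."""
    def __init__(self, n_chars=80, emb=64, hidden=128, n_mels=80):
        super().__init__()
        self.embed = nn.Embedding(n_chars, emb)
        self.encoder = nn.GRU(emb, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(2 * hidden, num_heads=1, batch_first=True)
        self.decoder = nn.GRU(n_mels, 2 * hidden, batch_first=True)
        self.proj = nn.Linear(4 * hidden, n_mels)

    def forward(self, chars, mel_inputs):
        memory, _ = self.encoder(self.embed(chars))           # (B, T_text, 2H) text encoding
        dec, _ = self.decoder(mel_inputs)                      # (B, T_mel, 2H) teacher-forced decoder states
        context, _ = self.attn(dec, memory, memory)            # attend over the text encoding
        return self.proj(torch.cat([dec, context], dim=-1))    # predicted mel frames

# Toy forward pass: batch of 2 texts (length 20) and teacher-forced mel inputs (length 50).
model = TinyTacotronLike()
chars = torch.randint(0, 80, (2, 20))
mels = torch.randn(2, 50, 80)
print(model(chars, mels).shape)    # torch.Size([2, 50, 80])
```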

Effect of Training Sequence Control in On-line Learning for Multilayer Perceptron (다계층 퍼셉트론의 온라인 학습에서 학습 순서 제어의 효과)

  • Lee, Jae-Young; Kim, Hwang-Soo
    • Journal of KIISE: Software and Applications / v.37 no.7 / pp.491-502 / 2010
  • When human beings acquire and develop knowledge through education, their prior knowledge influences the next learning process. As this should also be considered in machine learning, we need to examine the effects of controlling the order of the training sequence on machine learning. In this research, the role of the supervisor is extended to control the order of the training samples, in addition to instructing the target values for classification problems. The supervisor presents the training examples, categorized by a self-organizing map (SOM), to the learning model, which in this case is a multilayer perceptron (MLP). The proposed method is distinguished in that it selects the most instructive example from the categories formed by the SOM to assist the learning progress, whereas other methods use the SOM only as a preprocessing step for the training samples. The results show that the method is effective in terms of the number of samples used and the time taken in training.
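
The general idea of supervisor-controlled training order can be sketched as below; KMeans stands in for the SOM, and sklearn's MLPClassifier with partial_fit stands in for the online MLP, so this is an illustrative approximation of the setup rather than the authors' exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

# Categorize the training samples (KMeans stands in for the SOM used in the paper),
# then feed them to an online MLP one category at a time instead of in random order.
X, y = load_digits(return_X_y=True)
categories = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(X)

mlp = MLPClassifier(hidden_layer_sizes=(64,), random_state=0)
classes = np.unique(y)
for c in np.unique(categories):                 # controlled presentation order
    idx = np.where(categories == c)[0]
    mlp.partial_fit(X[idx], y[idx], classes=classes)

print("training accuracy:", mlp.score(X, y))
```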

LMS based Iterative Decision Feedback Equalizer for Wireless Packet Data Transmission (무선 패킷데이터 전송을 위한 LMS기반의 반복결정 귀환 등화기)

  • Choi, Yun-Seok; Park, Hyung-Kun
    • Journal of the Korea Institute of Information and Communication Engineering / v.10 no.7 / pp.1287-1294 / 2006
  • In many current wireless packet data systems, short-burst transmissions are used, and the training overhead is very significant for such short burst formats. A short training sequence and a fast-converging algorithm are therefore essential in the adaptive equalizer. In this paper, a new equalizer algorithm is proposed to improve the performance of an MTLMS (multiple-training least mean square) based DFE (decision feedback equalizer) using a short training sequence. In the proposed method, the output of the DFE is iteratively fed back to the LMS (least mean square) based adaptive DFE loop and used as an extended training sequence. Instead of a block operation using an ML (maximum likelihood) estimator, a low-complexity adaptive LMS operation is used for the overall processing. Simulation results show that the performance of the proposed equalizer improves, with only a linear increase in computation, as the iteration parameter increases, and that it is more robust to time-varying fading.
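
A minimal sketch of the iterative idea, reduced to a linear LMS equalizer for brevity: the short preamble trains the taps, the burst is equalized and sliced, and the decisions are reused as an extended training sequence. The tap count, step size, and iteration count are assumptions, not the paper's settings.

```python
import numpy as np

def lms_train(rx, desired, taps, mu=0.02, w=None):
    """One LMS pass: adapt equalizer taps so that w @ rx_window tracks `desired`."""
    w = np.zeros(taps) if w is None else w.copy()
    for n in range(taps - 1, len(desired)):
        x = rx[n - taps + 1:n + 1][::-1]
        e = desired[n] - w @ x
        w += mu * e * x
    return w

def iterative_equalize(rx, train, taps=7, iterations=3):
    """Short-training LMS equalizer; detected symbols are reused as extra training.

    Assumes the burst starts with the known preamble `train` (BPSK symbols).
    """
    w = lms_train(rx[:len(train)], train, taps)        # initial pass on the short preamble
    for _ in range(iterations):
        # Equalize the whole burst and slice to the nearest BPSK symbol.
        y = np.array([w @ rx[n - taps + 1:n + 1][::-1] for n in range(taps - 1, len(rx))])
        decisions = np.sign(y)
        extended = np.concatenate([train[:taps - 1], decisions])   # align with rx indexing
        w = lms_train(rx, extended, taps, w=w)          # re-train on the extended sequence
    return w
```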

Efficient Beam-Training Technique for Millimeter-Wave Cellular Communications

  • Ku, Bon Woo; Han, Dae Gen; Cho, Yong Soo
    • ETRI Journal / v.38 no.1 / pp.81-89 / 2016
  • In this paper, a beam ID preamble (BIDP) technique, where a beam ID is transmitted in the physical layer, is proposed for efficient beam training in millimeter-wave cellular communication systems. To facilitate beam ID detection in a multicell environment with multiple beams, a BIDP is designed such that a beam ID is mapped onto a Zadoff-Chu sequence in association with its cell ID. By analyzing the correlation property of the BIDP, it is shown that multiple beams can be transmitted simultaneously with the proposed technique with minimal interbeam interference in a multicell environment, where beams have different time delays due to propagation delay or multipath channel delay. Through simulation with a spatial channel model, it is shown that the best beam pairs can be found with a significantly reduced processing time of beam training in the proposed technique.
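
A minimal sketch of detecting an ID carried by a Zadoff-Chu sequence via correlation; the sequence length and candidate roots are hypothetical, and the paper's exact BIDP mapping of beam/cell IDs is not reproduced here.

```python
import numpy as np

def zadoff_chu(root, length):
    """Zadoff-Chu sequence of odd length `length` with the given root index."""
    n = np.arange(length)
    return np.exp(-1j * np.pi * root * n * (n + 1) / length)

def detect_beam_id(rx, candidate_roots, length):
    """Pick the root (beam ID) whose ZC sequence correlates most strongly with rx."""
    scores = {r: np.abs(np.vdot(zadoff_chu(r, length), rx)) for r in candidate_roots}
    return max(scores, key=scores.get)

# Toy example: transmit root 7 over a noisy channel, then recover it.
length, roots = 63, [3, 7, 11, 25]            # hypothetical ZC length and beam-ID roots
rng = np.random.default_rng(1)
rx = zadoff_chu(7, length) + 0.3 * (rng.standard_normal(length) + 1j * rng.standard_normal(length))
print(detect_beam_id(rx, roots, length))      # -> 7
```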

An Efficient and Accurate Artificial Neural Network through Induced Learning Retardation and Pruning Training Methods Sequence

  • Bandibas, Joel; Kohyama, Kazunori; Wakita, Koji
    • Proceedings of the KSRS Conference / 2003.11a / pp.429-431 / 2003
  • The induced learning retardation method involves temporarily inhibiting the artificial neural network's active units from participating in the error reduction process during training. This stimulates the less active units to contribute significantly to reducing the network error. However, some less active units are not sensitive to stimulation, making them almost useless. The network can then be pruned by removing these less active units to make it smaller and more efficient. This study focuses on making the network more efficient and accurate by developing a training method that applies induced learning retardation followed by pruning. The developed procedure results in faster learning and a more accurate artificial neural network for satellite image classification.
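
The pruning half of such a procedure can be sketched as follows: rank hidden units by their average activation over the training data and drop the least active ones. The network, activation function, and keep ratio are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

def prune_least_active(W1, b1, W2, X, keep_ratio=0.8):
    """Drop the hidden units whose mean activation over X is smallest.

    W1: (n_in, n_hidden), b1: (n_hidden,), W2: (n_hidden, n_out)
    Returns the pruned weight matrices.
    """
    hidden = np.maximum(0.0, X @ W1 + b1)            # ReLU hidden activations
    activity = hidden.mean(axis=0)                   # average activity per unit
    n_keep = max(1, int(keep_ratio * W1.shape[1]))
    keep = np.argsort(activity)[-n_keep:]            # indices of the most active units
    return W1[:, keep], b1[keep], W2[keep, :]

# Toy example with random weights and data.
rng = np.random.default_rng(2)
W1, b1, W2 = rng.standard_normal((16, 10)), np.zeros(10), rng.standard_normal((10, 3))
X = rng.standard_normal((100, 16))
W1p, b1p, W2p = prune_least_active(W1, b1, W2, X)
print(W1p.shape, W2p.shape)                          # (16, 8) (8, 3): 8 of 10 hidden units kept
```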

Feature Selection with Ensemble Learning for Prostate Cancer Prediction from Gene Expression

  • Abass, Yusuf Aleshinloye; Adeshina, Steve A.
    • International Journal of Computer Science & Network Security / v.21 no.12spc / pp.526-538 / 2021
  • Machine and deep learning-based models are emerging techniques that are being used to address prediction problems in biomedical data analysis. DNA sequence prediction is a critical problem that has attracted a great deal of attention in the biomedical domain. Machine and deep learning-based models have been shown to provide more accurate results than conventional regression-based models. Predicting the gene sequences that lead to cancerous diseases, such as prostate cancer, is crucial. Identifying the most important features in a gene sequence is a challenging task. Extracting the components of the gene sequence that can provide insight into the types of mutation in the gene is of great importance, as it will lead to effective drug design and promote the new concept of personalised medicine. In this work, we extracted the exons in the prostate gene sequences that were used in the experiment. We built a Deep Neural Network (DNN) and a Bi-directional Long Short-Term Memory (Bi-LSTM) model using k-mer encoding for the DNA sequence and one-hot encoding for the class label. The models were evaluated using different classification metrics. Our experimental results show that the DNN model offers a training accuracy of 99 percent and a validation accuracy of 96 percent. The Bi-LSTM model achieves a training accuracy of 95 percent and a validation accuracy of 91 percent.
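
A minimal sketch of the k-mer encoding step used to turn a DNA string into integer tokens suitable for an embedding layer or Bi-LSTM input; the vocabulary construction shown here is an assumption, not necessarily the authors' exact encoding.

```python
from itertools import product

def kmer_tokens(seq, k=3):
    """Slide a window of length k over a DNA string and return overlapping k-mers."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def kmer_encode(seq, k=3):
    """Map each k-mer to an integer index over the full A/C/G/T k-mer vocabulary."""
    vocab = {"".join(p): i for i, p in enumerate(product("ACGT", repeat=k))}
    return [vocab[t] for t in kmer_tokens(seq, k)]

print(kmer_tokens("ATGCGTA", k=3))   # ['ATG', 'TGC', 'GCG', 'CGT', 'GTA']
print(kmer_encode("ATGCGTA", k=3))   # integer IDs suitable for an embedding/Bi-LSTM input
```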

Integer Frequency Offset Estimation using PN Sequence within Training Symbol for OFDM System (PN 시퀀스의 위상추적을 통한 Orthogonal Frequency Division Multiplexing 신호의 정수배 주파수 옵셋 추정)

  • Ock, Youn Chul
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.6 / pp.290-297 / 2014
  • The synchronization of an OFDM receiver consists of symbol timing offset (STO) estimation in the time domain and carrier frequency offset (CFO) estimation in the frequency domain. This paper proposes a new algorithm for correcting the integer CFO (ICFO) after the STO and the partial CFO have been corrected. The ICFO must be corrected, since it degrades the bit error rate (BER) of the demodulated signal. In this paper, a modified PN sequence whose length equals the number of subcarriers is used, with each chip modulating one subcarrier, so the PN sequence carries information about the subcarrier order. The receiver therefore tracks the phase of the PN sequence after the FFT in order to find the subcarrier frequency offset. The proposed algorithm is faster and simpler than the conventional method of measuring carrier energy.
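
A minimal sketch of the underlying principle: an integer CFO of k subcarrier spacings cyclically shifts the post-FFT subcarriers by k, so testing shifts of the known PN-modulated training symbol recovers it. This is a correlation-style baseline, not the paper's phase-tracking algorithm, and all parameters are hypothetical.

```python
import numpy as np

def estimate_icfo(rx_time, pn, max_shift=8):
    """Estimate the integer CFO of one OFDM training symbol.

    An integer CFO of k subcarrier spacings cyclically shifts the post-FFT
    subcarriers by k, so we test which shift of the known PN-modulated symbol
    best matches the received one.
    """
    N = len(pn)
    RX = np.fft.fft(rx_time, N)
    return max(range(-max_shift, max_shift + 1),
               key=lambda k: np.abs(np.vdot(np.roll(pn, k), RX)))

# Toy example: BPSK PN training symbol, channel applies an integer CFO of +3 bins.
rng = np.random.default_rng(3)
N = 64
pn = rng.choice([-1.0, 1.0], size=N)                        # PN chips, one per subcarrier
tx_time = np.fft.ifft(pn)
n = np.arange(N)
rx_time = tx_time * np.exp(2j * np.pi * 3 * n / N)          # integer CFO of 3 subcarriers
print(estimate_icfo(rx_time, pn))                           # -> 3
```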

A BLMS Adaptive Receiver for Direct-Sequence Code Division Multiple Access Systems

  • Hamouda, Walaa; McLane, Peter J.
    • Journal of Communications and Networks / v.7 no.3 / pp.243-247 / 2005
  • We propose an efficient block least-mean-square (BLMS) adaptive algorithm, in conjunction with error control coding, for direct-sequence code division multiple access (DS-CDMA) systems. The proposed adaptive receiver incorporates decision feedback detection and channel encoding in order to improve the performance of the standard LMS algorithm in convolutionally coded systems. The BLMS algorithm involves two modes of operation: (i) the training mode, where an uncoded training sequence is used for initial filter tap-weight adaptation, and (ii) the decision-directed mode, where the filter weights are adapted, using the BLMS algorithm, after the decoding/encoding operation. It is shown that the proposed adaptive receiver structure is able to compensate for the signal-to-noise ratio (SNR) loss incurred by switching from the uncoded training mode to the coded decision-directed mode. Our results show that by using the proposed adaptive receiver (with decision feedback block adaptation), one can achieve much better performance than the coded LMS receiver with no decision feedback. The convergence behavior of the proposed BLMS receiver is simulated and compared to that of the standard LMS with and without channel coding. We also examine the steady-state bit-error rate (BER) performance of the proposed adaptive BLMS and the standard LMS, both with convolutional coding, and show that the former is superior to the latter, especially at large SNRs (SNR ≥ 9 dB).
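
A minimal sketch of the block LMS training mode, accumulating the LMS gradient over each block before a single tap update; the decision-directed mode and channel coding are omitted, and all parameters are assumptions rather than the paper's settings.

```python
import numpy as np

def block_lms(rx, desired, taps=8, block=16, mu=0.005):
    """Block LMS: accumulate the LMS gradient over each block, then update once."""
    w = np.zeros(taps)
    for start in range(taps - 1, len(desired) - block + 1, block):
        grad = np.zeros(taps)
        for n in range(start, start + block):
            x = rx[n - taps + 1:n + 1][::-1]     # filter input window (newest first)
            e = desired[n] - w @ x               # error against the training symbol
            grad += e * x                        # accumulate gradient over the block
        w += mu * grad                           # single tap update per block
    return w

# Toy example: train on an uncoded BPSK training sequence through a 2-tap channel.
rng = np.random.default_rng(4)
train = rng.choice([-1.0, 1.0], size=1024)
rx = np.convolve(train, [1.0, 0.3])[:len(train)] + 0.05 * rng.standard_normal(len(train))
w = block_lms(rx, train)
```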

A New Compensated Criterion in Testing Trained Codebooks

  • Kim, Dong-Sik
    • The Journal of Korean Institute of Communications and Information Sciences / v.24 no.7A / pp.1052-1063 / 1999
  • In designing the quantizer of a coding scheme using a training sequence (TS), the training algorithm tries to find a quantizer that minimizes the distortion measured on the TS. In order to evaluate the trained quantizer or compare coding schemes, we can observe the minimized distortion. However, the minimized distortion is a biased estimate of the minimal distortion for the input distribution. Hence, we can often overestimate a quantizer or make a wrong comparison even if we use a validating sequence. In this paper, a new estimate is proposed by compensating the distortion minimized on the TS. The compensating term is a function of the training ratio, i.e., the ratio of the TS size to the codebook size. Several numerical results are also presented for the proposed estimate.
