• Title/Summary/Keyword: computational complexity reduction


Total Degradation Performance Evaluation of the Time- and Frequency-Domain Clipping in OFDM Systems (OFDM 시스템에서 시간 및 주파수 영역 클리핑의 Total Degradation 성능평가)

  • Han, Chang-Sik;Seo, Man-Jung;Im, Sung-Bin
    • Journal of the Institute of Electronics Engineers of Korea TC / v.44 no.7 s.361 / pp.17-22 / 2007
  • OFDM (Orthogonal Frequency Division Multiplexing) is a special case of multicarrier transmission in which a single data stream is transmitted over a number of lower-rate subcarriers. One of the main reasons to use OFDM is to increase robustness against frequency-selective fading or narrowband interference. Unfortunately, an OFDM signal consists of a number of independently modulated subcarriers, which can produce a large PAPR (Peak-to-Average Power Ratio) when added coherently. In this paper, we investigate the performance of a simple PAPR reduction scheme that requires neither a change to the receiver structure nor the transmission of additional side information. The approach we employ is clipping in the time and frequency domains: the time-domain clipping is carried out with a predetermined clipping level, while the frequency-domain clipping is done within the EVM (Error Vector Magnitude) limit. This approach is suboptimal but has lower computational complexity than the optimal method. The evaluation is carried out on an OFDM system with a nonlinear amplifier. The simulation results demonstrate that the PAPR reduction algorithm is one way to reduce the effects of the nonlinear distortion of an HPA (High Power Amplifier).
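The time-domain half of the scheme can be illustrated with a short sketch: clip the envelope of the OFDM symbol at a predetermined level while preserving the phase. This is a minimal illustration under assumed parameters (QPSK subcarriers, N = 256, a 5 dB clipping ratio), not the authors' implementation, and it omits the EVM-constrained frequency-domain step.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def clip_time_domain(x, clip_ratio_db=5.0):
    """Clip the signal envelope at a level set relative to its RMS power (phase preserved)."""
    rms = np.sqrt(np.mean(np.abs(x) ** 2))
    level = rms * 10 ** (clip_ratio_db / 20)                 # predetermined clipping level
    mag = np.abs(x)
    scale = np.minimum(1.0, level / np.maximum(mag, 1e-12))  # limit magnitude only
    return x * scale

# One OFDM symbol with N random QPSK subcarriers (assumed parameters)
N = 256
qpsk = (np.random.choice([1, -1], N) + 1j * np.random.choice([1, -1], N)) / np.sqrt(2)
x = np.fft.ifft(qpsk) * np.sqrt(N)                           # time-domain OFDM symbol
print(papr_db(x), papr_db(clip_time_domain(x)))              # PAPR before vs. after clipping
```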

PAPR Reduction Method of OFDM System Using Fuzzy Theory (Fuzzy 이론을 이용한 OFDM 시스템에서 PAPR 감소 기법)

  • Lee, Dong-Ho;Choi, Jung-Hun;Kim, Nam;Lee, Bong-Woon
    • The Journal of Korean Institute of Electromagnetic Engineering and Science / v.21 no.7 / pp.715-725 / 2010
  • The Orthogonal Frequency Division Multiplexing (OFDM) system is effective for high-data-rate transmission in frequency-selective fading channels. In this paper we propose a method for reducing the PAPR (Peak to Average Power Ratio) problem of the OFDM system using Fuzzy theory, which is commonly applied to machine control. The advantages of using Fuzzy theory to reduce PAPR are that the data are easy to manage, the hardware is easy to implement, and a smaller amount of computation is required. We first propose a simple algorithm in which the PAPR of each sub-block is reduced using Fuzzy processing at the transmitter and the transmitted signal is reconstructed at the receiver. Although there are drawbacks, namely that the computation of the system increases compared with the conventional OFDM system and the Fuzzy-related information must be sent separately, the PAPR reduction performance of the system is clearly enhanced. To evaluate the performance, the proposed algorithm is compared with existing schemes in terms of the complementary cumulative distribution function (CCDF) of the PAPR and the computational complexity. Using QPSK and 16QAM modulation with FFT size N = 512 and an oversampling factor of 4, the Fuzzy-theory method reduces the PAPR by 2.3 dB and 3.1 dB, respectively, compared with the existing OFDM system at a CCDF of $10^{-5}$.
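The evaluation metric used above, the CCDF of the PAPR, can be estimated by Monte-Carlo simulation as in the sketch below. This is a generic baseline measurement (random QPSK symbols, N = 512, oversampling factor 4 as in the paper's setup), not the fuzzy-controlled reduction scheme itself.

```python
import numpy as np

def ccdf_of_papr(n_symbols=2000, n_sub=512, oversample=4, papr0_db=np.arange(4, 13, 0.5)):
    """Monte-Carlo estimate of Pr(PAPR > PAPR0) for random QPSK OFDM symbols."""
    exceed = np.zeros_like(papr0_db, dtype=float)
    for _ in range(n_symbols):
        sym = (np.random.choice([1, -1], n_sub) + 1j * np.random.choice([1, -1], n_sub)) / np.sqrt(2)
        padded = np.zeros(n_sub * oversample, dtype=complex)
        padded[:n_sub // 2] = sym[:n_sub // 2]          # zero-pad in the middle for oversampling
        padded[-n_sub // 2:] = sym[n_sub // 2:]
        x = np.fft.ifft(padded) * np.sqrt(n_sub * oversample)
        p = np.abs(x) ** 2
        papr = 10 * np.log10(p.max() / p.mean())
        exceed += (papr > papr0_db)
    return papr0_db, exceed / n_symbols

thresholds, ccdf = ccdf_of_papr()                       # CCDF curve of the unmodified OFDM signal
```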

Design of a Bit-Serial Divider in GF($2^m$) for Elliptic Curve Cryptosystem (타원곡선 암호시스템을 위한 GF($2^m$)상의 비트-시리얼 나눗셈기 설계)

  • 김창훈;홍춘표;김남식;권순학
    • The Journal of Korean Institute of Communications and Information Sciences / v.27 no.12C / pp.1288-1298 / 2002
  • To implement an elliptic curve cryptosystem over GF($2^m$) at high speed, a fast divider is required. Although a bit-parallel architecture is well suited for high-speed division, an elliptic curve cryptosystem requires a large m (at least 163) to provide sufficient security. In other words, since the bit-parallel architecture has an area complexity of O($m^2$), it is not suited for this application. In this paper, we propose a new serial-in serial-out systolic array for computing division in GF($2^m$) using the standard basis representation. Based on a modified version of the binary extended greatest common divisor algorithm, we obtain a new data dependence graph and design an efficient bit-serial systolic divider. The proposed divider has O(m) time complexity and O(m) area complexity. If input data arrive continuously, the proposed divider can produce division results at a rate of one per m clock cycles, after an initial delay of 5m-2 cycles. Analysis shows that the proposed divider provides a significant reduction in both chip area and computational delay compared with previously proposed systolic dividers with the same I/O format. Since the proposed divider can perform division at high speed with a reduced chip area, it is well suited for the division circuit of an elliptic curve cryptosystem. Furthermore, since the proposed architecture does not restrict the choice of irreducible polynomial, and has a unidirectional data flow and regularity, it provides high flexibility and scalability with respect to the field size m.
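For orientation, the operation the divider computes, a/b in GF($2^m$) with a standard (polynomial) basis, can be sketched in software with the extended Euclidean algorithm over binary polynomials. This is a plain bit-twiddling sketch of the underlying arithmetic, not the proposed systolic array; the toy field GF($2^4$) and its irreducible polynomial below are assumed for illustration.

```python
def poly_deg(a):
    """Degree of a binary polynomial stored as an integer bit mask (deg(0) = -1)."""
    return a.bit_length() - 1

def poly_mulmod(a, b, f, m):
    """Multiply two field elements modulo the irreducible polynomial f of degree m."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a >> m:            # reduce whenever the degree reaches m
            a ^= f
    return r

def gf_div(a, b, f, m):
    """Compute a / b in GF(2^m) via the extended Euclidean algorithm (b must be nonzero)."""
    # Invariants: u*b = r (mod f) and v*b = s (mod f); the loop ends with r = 1, so u = b^-1.
    r, s = b, f
    u, v = 1, 0
    while poly_deg(r) > 0:
        if poly_deg(r) < poly_deg(s):
            r, s, u, v = s, r, v, u
        shift = poly_deg(r) - poly_deg(s)
        r ^= s << shift
        u ^= v << shift
    return poly_mulmod(a, u, f, m)

# Toy example in GF(2^4) with f(x) = x^4 + x + 1 (assumed parameters)
m, f = 4, 0b10011
print(gf_div(0b0110, 0b0011, f, m))   # (x^2 + x) / (x + 1) = x, i.e. 0b0010
```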

Design and Performance Analysis of the Efficient Equalization Method for OFDM system using QAM in multipath fading channel (다중경로 페이딩 채널에서 QAM을 사용하는 OFDM시스템의 효율적인 등화기법 설계 및 성능분석)

  • 남성식;백인기;조성호
    • The Journal of Korean Institute of Communications and Information Sciences / v.25 no.6B / pp.1082-1091 / 2000
  • In this paper, an efficient equalization method for an OFDM (Orthogonal Frequency Division Multiplexing) system using QAM (Quadrature Amplitude Modulation) in a multipath fading channel is proposed in order to equalize the received signals, sent over a real channel, faster and more efficiently. In general, one-tap linear equalizers in the frequency domain have been used as the existing equalization method for OFDM systems. With this technique, if the channel characteristics change quickly, the one-tap linear equalizers cannot compensate for the distortion caused by time-variant multipath channels. Therefore, in this paper, we use one-tap non-linear equalizers instead of one-tap linear equalizers in the frequency domain, and also use a linear equalizer in the time domain to compensate for the rapid performance degradation at low SNR (Signal-to-Noise Ratio), which is the disadvantage of the non-linear equalizer. In the frequency domain, when QAM signals consisting of in-phase and quadrature components are sent over the complex channel, the in-phase and quadrature components of the signals distorted by multipath fading change in the same way as signals distorted by noise, so the cross components are canceled in the frequency-domain equalizer. The time-domain equalizer and an adaptive algorithm with low error probability and fast convergence are applied to compensate for the error caused by canceling the cross components in the frequency-domain equalizer. In the time domain, Gold codes are inserted into the guard interval at the transmitter and used as a training sequence at the receiver, so that the time-domain equalizer can equalize the distorted signals frame by frame and compensate for the performance of the frequency-domain equalizer. With the proposed equalization method, we achieve faster and more efficient equalization with reduced computational complexity and improved performance.
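The conventional baseline the paper starts from, a one-tap per-subcarrier equalizer in the frequency domain, can be sketched as follows. The channel is assumed known here and the parameters (64 subcarriers, a 3-tap multipath channel) are illustrative; the paper's nonlinear frequency-domain equalizer and Gold-code-trained time-domain stage are not reproduced.

```python
import numpy as np

def one_tap_equalize(rx_symbol, channel_est, n_fft=64):
    """Conventional one-tap per-subcarrier equalization of one OFDM symbol.

    rx_symbol   : received time-domain samples (cyclic prefix already removed)
    channel_est : estimated channel impulse response (assumed known here)
    """
    Rx = np.fft.fft(rx_symbol, n_fft)           # to the frequency domain
    H = np.fft.fft(channel_est, n_fft)          # per-subcarrier channel gains
    return Rx / H                               # one complex tap per subcarrier

# Toy example with an assumed 3-tap multipath channel and QPSK subcarriers
n_fft = 64
tx_freq = np.random.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], n_fft) / np.sqrt(2)
tx_time = np.fft.ifft(tx_freq)
h = np.array([1.0, 0.4, 0.2 + 0.1j])            # multipath channel (assumed)
rx_time = np.fft.ifft(np.fft.fft(tx_time, n_fft) * np.fft.fft(h, n_fft))  # circular convolution
eq = one_tap_equalize(rx_time, h, n_fft)
print(np.allclose(eq, tx_freq))                 # True: per-subcarrier distortion removed
```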

Performance Evaluation of Inter-Sector Collaborative PF Schedulers for Multi-User MIMO Transmission Using Zero Forcing (영점 강제 다중 사용자 MIMO 전송 시 셀 간 정보 교환을 활용한 협력적 PF 스케줄러의 성능 평가)

  • Lee, Ji-Won;Sung, Won-Jin
    • Journal of the Institute of Electronics Engineers of Korea TC / v.47 no.2 / pp.40-46 / 2010
  • Multi-user MIMO (Multiple-Input Multiple-Output) systems require collaborative PF schedulers to improve the log sum of average transmission rates. While the performance of conventional single-cell PF schedulers has been evaluated under various channel conditions, scheduling algorithms in which multiple base stations select multiple users over a given time frame, and their performance, require further investigation. In this paper, we apply a collaborative PF scheduler to a distributed multi-user MIMO system, which assigns radio resources to multiple users by exchanging user channel information among base stations located in three adjacent sectors, and we evaluate its performance in terms of the log sum of average transmission rates. The performance is compared to that of a full-search collaborative PF scheduler, which searches over all possible combinations of user groups, and that of a parallel PF scheduler, which determines users without channel information exchange among base stations. We show that the log sum of average transmission rates of the collaborative PF scheduler outperforms that of the parallel PF scheduler in the low-percentile region. In addition, the collaborative PF scheduler exhibits negligible performance degradation compared to the full-search collaborative PF scheduler while achieving a significant reduction in computational complexity.
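As background, a single-cell proportional fair (PF) selection step, the building block the collaborative scheduler extends across sectors, might look like the sketch below. The averaging window and the Rayleigh rate model are assumed values; the paper's multi-sector zero-forcing transmission and channel-information exchange are not modeled.

```python
import numpy as np

def pf_schedule(instant_rates, avg_rates, t_c=100.0):
    """One scheduling decision of a proportional fair (PF) scheduler.

    instant_rates : achievable rates of each user in the current frame
    avg_rates     : exponentially averaged past throughput per user
    Returns the selected user index and the updated averages.
    """
    metric = instant_rates / avg_rates                # PF metric: r_k / R_k
    k = int(np.argmax(metric))
    served = np.zeros_like(instant_rates)
    served[k] = instant_rates[k]
    new_avg = (1 - 1 / t_c) * avg_rates + (1 / t_c) * served
    return k, new_avg

# Toy run with assumed channel-dependent rates for 4 users
avg = np.ones(4)
for _ in range(1000):
    rates = np.random.rayleigh(1.0, 4)                # stand-in for per-frame achievable rates
    _, avg = pf_schedule(rates, avg)
print(np.sum(np.log(avg)))                            # log sum of average rates (the PF objective)
```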

Quantization Noise Reduction in Block-Coded Video Using the Characteristics of Block Boundary Area (블록 경계 영역 특성을 이용한 블록 부호화 영상에서의 양자화 잡음 제거)

  • Kwon Kee-Koo;Yang Man-Seok;Ma Jin-Suk;Im Sung-Ho;Lim Dong-Sun
    • The KIPS Transactions:PartB / v.12B no.3 s.99 / pp.223-232 / 2005
  • In this paper, we propose a novel post-filtering algorithm with low computational complexity that improves the visual quality of decoded images using block boundary classification and a simple adaptive filter (SAF). First, each block boundary is classified into smooth or complex sub-regions, and for smooth-smooth sub-regions the existence of blocking artifacts is determined using the blocky strength. Simple adaptive filtering is then applied to each block boundary area. The proposed method operates adaptively: a nonlinear 1-D 8-tap filter is applied to smooth-smooth sub-regions with blocking artifacts; for smooth-complex or complex-smooth sub-regions, a nonlinear 1-D variant filter is applied to the block boundary pixels to reduce blocking and ringing artifacts; and for complex-complex sub-regions, only a nonlinear 1-D 2-tap filter is applied to adjust the two block boundary pixels so as to preserve image details. Experimental results show that the proposed algorithm produces better results than conventional algorithms from both subjective and objective viewpoints.
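A rough sketch of the idea, classify each block boundary by local activity and smooth only the boundary pixels of smooth-smooth cases, is given below. The 1-D filtering and the activity threshold are simplified stand-ins, not the paper's SAF filter taps or its full four-case classification.

```python
import numpy as np

def filter_vertical_boundaries(img, block=8, threshold=10):
    """Illustrative 1-D smoothing across vertical block boundaries.

    A boundary pixel pair is smoothed only when the local activity on both
    sides is low (a "smooth-smooth" case), so image detail is preserved.
    """
    out = img.astype(float).copy()
    h, w = out.shape
    for x in range(block, w, block):
        left, right = out[:, x - 1], out[:, x]
        # crude smooth/complex classification from gradients inside each block
        act_l = np.abs(out[:, x - 1] - out[:, x - 2])
        act_r = np.abs(out[:, x + 1] - out[:, x]) if x + 1 < w else np.zeros(h)
        smooth = (act_l < threshold) & (act_r < threshold)
        avg = (left + right) / 2
        out[smooth, x - 1] = (left[smooth] + avg[smooth]) / 2
        out[smooth, x] = (right[smooth] + avg[smooth]) / 2
    return out.astype(img.dtype)

# Example on a synthetic 32x32 image with visible 8x8 blocking (assumed data)
img = (np.arange(32)[:, None] // 8 * 20 + np.arange(32)[None, :] // 8 * 20).astype(np.uint8)
smoothed = filter_vertical_boundaries(img)
```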

Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu;Alikhanov, Jumabek;Fang, Yang;Ko, Seunghyun;Jo, Geun Sik
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.205-225 / 2018
  • A Convolutional Neural Network (ConvNet) is one class of powerful Deep Neural Network that can analyze and learn hierarchies of visual features. The first such neural network (the Neocognitron) was introduced in the 1980s. At that time, neural networks were not broadly used in either industry or academia because of the shortage of large-scale datasets and low computational power. A few decades later, however, in 2012, Krizhevsky made a breakthrough in the ILSVRC-12 visual recognition competition using a Convolutional Neural Network. That breakthrough revived people's interest in neural networks. The success of the Convolutional Neural Network rests on two main factors. The first is the emergence of advanced hardware (GPUs) for sufficient parallel computation. The second is the availability of large-scale datasets such as the ImageNet (ILSVRC) dataset for training. Unfortunately, many new domains are bottlenecked by these factors. For most domains, it is difficult and requires a lot of effort to gather a large-scale dataset to train a ConvNet. Moreover, even when a large-scale dataset is available, training a ConvNet from scratch requires expensive resources and is time-consuming. These two obstacles can be overcome with transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning cases: using the ConvNet as a fixed feature extractor, and fine-tuning the ConvNet on a new dataset. In the first case, a pre-trained ConvNet (for example, trained on ImageNet) is used to compute feed-forward activations of the image, and activation features are extracted from specific layers. In the second case, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are fine-tuned with backpropagation. In this paper, we focus on using multiple ConvNet layers as a fixed feature extractor only. However, applying high-dimensional features extracted directly from multiple ConvNet layers is still a challenging problem. We observe that features extracted from multiple ConvNet layers capture different characteristics of an image, which means that a better representation could be obtained by finding the optimal combination of multiple ConvNet layers. Based on that observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single ConvNet layer representation. Overall, our primary pipeline has three steps. First, images from the target task are fed forward through the pre-trained AlexNet, and the activation features of three fully connected layers are extracted. Second, the activation features of the three layers are concatenated to obtain a multiple ConvNet layer representation, since it carries more information about the image; when the three fully connected layer features are concatenated, the resulting image representation has 9192 (4096+4096+1000) dimensions. However, features extracted from multiple ConvNet layers are redundant and noisy since they are extracted from the same ConvNet. Thus, in a third step, we use Principal Component Analysis (PCA) to select salient features before the training phase. When salient features are obtained, the classifier can classify images more accurately, and the performance of transfer learning can be improved.
To evaluate the proposed method, experiments are conducted on three standard datasets (Caltech-256, VOC07, and SUN397) to compare multiple ConvNet layer representations against a single ConvNet layer representation, using PCA for feature selection and dimension reduction. Our experiments demonstrate the importance of feature selection for the multiple ConvNet layer representation. Moreover, our proposed approach achieved 75.6% accuracy compared to the 73.9% accuracy achieved by the FC7 layer on the Caltech-256 dataset, 73.1% accuracy compared to the 69.2% accuracy achieved by the FC8 layer on the VOC07 dataset, and 52.2% accuracy compared to the 48.7% accuracy achieved by the FC7 layer on the SUN397 dataset. We also show that our proposed approach achieved superior performance, with accuracy improvements of 2.8%, 2.1%, and 3.1% on the Caltech-256, VOC07, and SUN397 datasets, respectively, compared to existing work.
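The concatenate-then-PCA part of the pipeline can be sketched as below. The feature matrices are random stand-ins for the AlexNet fc6/fc7/fc8 activations (4096, 4096, and 1000 dimensions), and the number of PCA components and the logistic-regression classifier are assumed choices for illustration rather than the authors' settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-ins for activations of three fully connected AlexNet layers
# (fc6: 4096-D, fc7: 4096-D, fc8: 1000-D). In the real pipeline these come
# from feed-forwarding each image through the pre-trained network.
n_train, n_test, n_classes = 500, 100, 10
rng = np.random.default_rng(0)
fc6 = rng.normal(size=(n_train + n_test, 4096))
fc7 = rng.normal(size=(n_train + n_test, 4096))
fc8 = rng.normal(size=(n_train + n_test, 1000))
labels = rng.integers(0, n_classes, n_train + n_test)

# Steps 1-2: concatenate the three layer representations -> 9192-D per image
features = np.concatenate([fc6, fc7, fc8], axis=1)

# Step 3: PCA to remove redundant/noisy dimensions, then a linear classifier
clf = make_pipeline(StandardScaler(), PCA(n_components=256), LogisticRegression(max_iter=1000))
clf.fit(features[:n_train], labels[:n_train])
print(clf.score(features[n_train:], labels[n_train:]))   # near chance on random stand-in data
```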