• Title/Summary/Keyword: Fixed Complexity

287 results

A Heuristic Algorithm for Optimal Facility Placement in Mobile Edge Networks

  • Jiao, Jiping;Chen, Lingyu;Hong, Xuemin;Shi, Jianghong
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.11 no.7
    • /
    • pp.3329-3350
    • /
    • 2017
  • Installing caching and computing facilities in mobile edge networks is a promising solution to cope with the challenging capacity and delay requirements imposed on future mobile communication systems. The problem of optimal facility placement in mobile edge networks has not been fully studied in the literature. This is a non-trivial problem because the mobile edge network has a unidirectional topology, making existing solutions inapplicable. This paper considers the problem of optimal placement of a fixed number of facilities in a mobile edge network with an arbitrary tree topology and an arbitrary demand distribution. A low-complexity sequential algorithm is proposed and proved to be convergent and optimal in some cases. The complexity of the algorithm is shown to be $O(H^2{\gamma})$, where H is the height of the tree and ${\gamma}$ is the number of facilities. Simulation results confirm that the proposed algorithm is effective in producing near-optimal solutions.
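
The sequential algorithm itself is not reproduced in the abstract above; as a rough illustration only, the following is a minimal greedy sketch of demand-weighted facility placement on a small tree. The toy topology, the demand values, and the greedy selection rule are assumptions for illustration, not the authors' provably convergent method.

```python
from collections import deque

# Toy edge-network tree (hypothetical): a core node, two gateways, four edge nodes.
TREE = {
    "core": ["gw1", "gw2"],
    "gw1": ["core", "e1", "e2"],
    "gw2": ["core", "e3", "e4"],
    "e1": ["gw1"], "e2": ["gw1"], "e3": ["gw2"], "e4": ["gw2"],
}
DEMAND = {"e1": 5, "e2": 1, "e3": 3, "e4": 2}  # assumed request rates at edge nodes

def hops(src):
    """BFS hop distance from src to every node of TREE."""
    dist, queue = {src: 0}, deque([src])
    while queue:
        u = queue.popleft()
        for v in TREE[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def total_cost(facilities):
    """Total demand-weighted distance from each demand node to its nearest facility."""
    tables = [hops(f) for f in facilities]
    return sum(w * min(t[n] for t in tables) for n, w in DEMAND.items())

def greedy_place(gamma):
    """Sequentially place gamma facilities, each time adding the node that lowers
    the total cost the most (a heuristic stand-in, not the paper's algorithm)."""
    placed = []
    for _ in range(gamma):
        candidates = [n for n in TREE if n not in placed]
        placed.append(min(candidates, key=lambda n: total_cost(placed + [n])))
    return placed

chosen = greedy_place(2)
print(chosen, total_cost(chosen))
```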

Adaptive P-SLM Method with New Phase Sequence for PAPR Reduction of MIMO-OFDM Systems (MIMO-OFDM 시스템의 PAPR 감소를 위한 새로운 위상시퀀스의 적응형 P-SLM기법)

  • Yoo, Eun-Ji;Byun, Youn-Shik
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.36 no.3C
    • /
    • pp.149-156
    • /
    • 2011
  • MIMO-OFDM (Multiple Input Multiple Output-Orthogonal Frequency Division Multiplexing) has been spotlighted as a solution for high-quality services in next-generation wireless communications. However, as in OFDM, one of the main problems of MIMO-OFDM is its high PAPR (Peak-to-Average Power Ratio). In this paper, an adaptive P-SLM (Partitioned SeLective Mapping) scheme based on a new phase sequence is proposed to reduce the PAPR. The proposed method achieves better performance and lower complexity than the conventional method because it uses periodic multiplication for the phase sequences and adapts according to a fixed critical PAPR value. Simulation results confirm these performance and complexity gains.
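
The paper's partitioned, adaptive scheme and its new phase sequence are not reproduced here; as background, a minimal sketch of conventional SLM in Python/NumPy, where the random ±1 phase sequences and the candidate count are illustrative assumptions:

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a time-domain block, in dB."""
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

def slm(freq_block, n_candidates=8, seed=0):
    """Conventional SLM: multiply the frequency-domain block by several random
    +/-1 phase sequences and keep the candidate with the lowest PAPR."""
    rng = np.random.default_rng(seed)
    best_x, best_papr = None, np.inf
    for _ in range(n_candidates):
        phases = rng.choice([1.0, -1.0], size=freq_block.shape)
        x = np.fft.ifft(freq_block * phases)      # candidate time-domain OFDM block
        p = papr_db(x)
        if p < best_papr:
            best_x, best_papr = x, p
    return best_x, best_papr

# 64-subcarrier QPSK block (illustrative data)
rng = np.random.default_rng(1)
qpsk = (rng.choice([1.0, -1.0], 64) + 1j * rng.choice([1.0, -1.0], 64)) / np.sqrt(2)
print("original PAPR: %.2f dB" % papr_db(np.fft.ifft(qpsk)))
_, selected = slm(qpsk)
print("selected PAPR: %.2f dB" % selected)
```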

Deep CNN based Pilot Allocation Scheme in Massive MIMO systems

  • Kim, Kwihoon;Lee, Joohyung
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.10
    • /
    • pp.4214-4230
    • /
    • 2020
  • This paper introduces a pilot allocation scheme for massive MIMO systems based on deep convolutional neural network (CNN) learning. The work extends a prior study on a basic deep learning framework for the pilot assignment problem, which is difficult to apply under high user density owing to the factorial growth of both the input features and the output layers. To solve this problem, exploiting the strength of CNNs in learning image data, we design input features that represent the users' locations in all cells as image-like data in a two-dimensional fixed-size matrix. Furthermore, using a sorting mechanism that applies a proper ordering rule, we construct output layers whose space complexity is linear in the number of users. We also develop a theoretical framework for the network capacity model of massive MIMO systems and apply it to the training process. Finally, we implement the proposed deep CNN-based pilot assignment scheme using a standard (vanilla) CNN, which takes shift-invariant characteristics into account. Extensive simulations demonstrate that the proposed scheme achieves about 98% of the theoretical upper-bound performance with an elapsed time of 0.842 ms and low complexity under high user density.
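
A minimal sketch of the image-like input encoding described above, rasterizing per-cell user positions onto a fixed-size two-dimensional grid. The grid size, number of cells, users per cell, and coordinate range are illustrative assumptions, not values from the paper.

```python
import numpy as np

def location_image(user_xy, grid=32, cell_radius=500.0):
    """Rasterize user (x, y) positions of one cell, given in metres relative to
    the cell centre, onto a fixed grid x grid occupancy map (values 0 or 1)."""
    img = np.zeros((grid, grid), dtype=np.float32)
    for x, y in user_xy:
        col = int((x + cell_radius) / (2 * cell_radius) * (grid - 1))
        row = int((y + cell_radius) / (2 * cell_radius) * (grid - 1))
        img[min(max(row, 0), grid - 1), min(max(col, 0), grid - 1)] = 1.0
    return img

# Seven cells with four users each (illustrative); one image channel per cell.
rng = np.random.default_rng(0)
cells = [rng.uniform(-500.0, 500.0, size=(4, 2)) for _ in range(7)]
features = np.stack([location_image(c) for c in cells])   # shape: (cells, grid, grid)
print(features.shape)   # a CNN would map this tensor to a pilot ordering
```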

FPGA Implementation of an FDTrS/DF Signal Detector for High-density DVD System (고밀도 DVD 시스템을 위한 FDTrS/DF 신호 검출기의 FPGA 구현)

  • 정조훈
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.25 no.10B
    • /
    • pp.1732-1743
    • /
    • 2000
  • In this paper, a fixed-delay trellis search with decision feedback (FDTrS/DF) detector for high-density DVD systems (4.7-15 GB) is proposed and implemented on an FPGA. The proposed FDTrS/DF is derived by transforming the binary tree search structure into a trellis search structure, which implies that FDTrS/DF performs better than signal detection techniques based on tree search, such as FDTS/DF and SSD/DF. A further advantage of FDTrS/DF is a significant reduction in hardware complexity, because FDTrS consists of only one trellis stage and requires none of the traceback procedure usually implemented in a Viterbi detector. In addition, the FDTS/DF and SSD/DF detectors originally proposed for high-density magnetic recording systems are modified for the DVD system and compared with the proposed FDTrS/DF. To increase speed in the FPGA implementation, pipelining and an absolute branch metric (instead of a squared branch metric) are applied. The proposed FDTrS/DF is shown to provide the best performance among various signal detection techniques such as PRML, DFE, FDTS/DF, and SSD/DF, even with a small hardware complexity.
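
The last point, replacing the squared branch metric with an absolute metric to simplify the FPGA datapath, can be illustrated with a small sketch; the reference levels and received samples below are hypothetical, not taken from the paper.

```python
import numpy as np

# Hypothetical target levels of the read channel and a few received samples.
TARGETS = np.array([-2.0, 0.0, 2.0])
received = [1.7, -0.2, -1.9, 0.4]

def branch_metrics(y, refs, absolute=True):
    """Branch metric of sample y against each reference level. The absolute
    metric |y - r| needs no multiplier, which is the hardware saving noted above;
    it selects the same closest level as the squared metric (y - r)^2."""
    diff = y - refs
    return np.abs(diff) if absolute else diff ** 2

for y in received:
    abs_m = branch_metrics(y, TARGETS, absolute=True)
    sq_m = branch_metrics(y, TARGETS, absolute=False)
    assert abs_m.argmin() == sq_m.argmin()   # same survivor decision either way
    print(f"y = {y:+.1f}  ->  nearest level {TARGETS[abs_m.argmin()]:+.1f}")
```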


Low-Complexity VFF-RLS Algorithm Using Normalization Technique (정규화 기법을 이용한 낮은 연산량의 가변 망각 인자 RLS 기법)

  • Lee, Seok-Jin;Lim, Jun-Seok;Sung, Koeng-Mo
    • The Journal of the Acoustical Society of Korea
    • /
    • v.29 no.1
    • /
    • pp.18-23
    • /
    • 2010
  • The RLS (Recursive Least Squares) method is a widely used adaptive algorithm for signal processing in electronic engineering. The RLS algorithm shows good performance and fast adaptation in a stationary environment, but it performs poorly in a non-stationary environment because it uses a fixed forgetting factor. To enhance tracking performance, RLS methods with an adaptive forgetting factor have been developed. These methods show good tracking performance but suffer from heavy computational loads. Therefore, in this paper we propose a modified adaptive-forgetting-factor RLS (AFF-RLS) algorithm with relatively low complexity.
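
For reference, a minimal exponentially weighted RLS update with a fixed forgetting factor; the paper's variable-forgetting-factor rule and normalization technique are not reproduced, and the filter order, channel, and data below are illustrative assumptions.

```python
import numpy as np

def rls(x, d, order=4, lam=0.98, delta=100.0):
    """Exponentially weighted RLS with a fixed forgetting factor lam.
    x: input signal, d: desired signal. Returns the final weight vector."""
    w = np.zeros(order)
    P = delta * np.eye(order)                  # inverse correlation estimate
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]       # regressor [x[n], ..., x[n-order+1]]
        k = P @ u / (lam + u @ P @ u)          # gain vector
        e = d[n] - w @ u                       # a priori error
        w = w + k * e
        P = (P - np.outer(k, u @ P)) / lam
    return w

# Identify a short FIR system from noisy observations (illustrative).
rng = np.random.default_rng(0)
h_true = np.array([0.8, -0.4, 0.2, 0.1])
x = rng.standard_normal(2000)
d = np.convolve(x, h_true)[:len(x)] + 0.01 * rng.standard_normal(len(x))
print(np.round(rls(x, d), 3))   # should be close to h_true
```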

Performance Analysis on Various Design Issues of Turbo Decoder (다양한 Design Issue에 대한 터보 디코더의 성능분석)

  • Park Taegeun;Kim Kiwhan
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.29 no.12A
    • /
    • pp.1387-1395
    • /
    • 2004
  • Despite its excellent decoding efficiency, a Turbo decoder inherently requires large memory and intensive hardware complexity due to iterative decoding. To decrease the memory space and reduce the hardware complexity, various design issues have to be considered. In this paper, various design issues for a Turbo decoder are investigated, and the tradeoffs between hardware complexity and performance are analyzed. Through simulations of the fixed-point word-length analysis, we chose 5 bits for the received data, 6 bits for the a priori information, and 7 bits for the state metric quantization, so that the performance gets close to that of infinite precision. The MAX operation, the main function of the Log-MAP decoding algorithm, is analyzed, and the correction term for the MAX* operation can be implemented efficiently with very small hardware overhead. The size of the sliding window was set to 32 to reduce the state metric memory space while achieving an acceptable BER.
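
A minimal sketch of the MAX* operation and its small correction term mentioned above; the exact Jacobian form is standard, but the lookup-table granularity used for the hardware-style approximation below is an illustrative assumption, not the paper's design.

```python
import numpy as np

def max_star_exact(a, b):
    """Jacobian logarithm: log(exp(a) + exp(b)) = max(a, b) + log(1 + exp(-|a - b|))."""
    return max(a, b) + np.log1p(np.exp(-abs(a - b)))

# Hardware-style approximation: max plus a small correction looked up on |a - b|.
# The thresholds and stored values below form a coarse illustrative table.
LUT_EDGES = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
LUT_VALUES = np.array([0.65, 0.45, 0.30, 0.20, 0.10, 0.0])

def max_star_lut(a, b):
    idx = np.searchsorted(LUT_EDGES, abs(a - b))
    return max(a, b) + LUT_VALUES[idx]

for a, b in [(0.3, -1.2), (2.0, 1.9), (-0.5, -3.0)]:
    print(f"a={a:+.1f} b={b:+.1f}  exact={max_star_exact(a, b):+.3f}  "
          f"lut={max_star_lut(a, b):+.3f}")
```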

Performance Analysis on Various Design Issues of Quasi-Cyclic Low Density Parity Check Decoder (Quasi-Cyclic Low Density Parity Check 복호기의 다양한 설계 관점에 대한 성능분석)

  • Chung, Su-Kyung;Park, Tae-Geun
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.46 no.11
    • /
    • pp.92-100
    • /
    • 2009
  • In this paper, we analyze the hardware architecture of a Low Density Parity Check (LDPC) decoder using the Log-Likelihood-Ratio Belief Propagation (LLR-BP) decoding algorithm. Various design issues that affect the decoding performance and the hardware complexity are discussed, and the tradeoffs between hardware complexity and performance are analyzed. The message data for passing error probabilities is quantized to 7 bits, of which 4 bits are the fractional part. To maintain the decoding performance, the integer and fractional parts of the intrinsic information are 2 bits and 4 bits, respectively. We discuss an alternative implementation of the $\Psi(x)$ function using a piecewise linear approximation. We also improve the hardware complexity and the decoding time by applying overlapped scheduling.
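
A minimal sketch of the $\Psi(x) = -\ln\tanh(x/2)$ check-node non-linearity and a piecewise linear stand-in of the kind the abstract refers to; the breakpoints below are illustrative, not the authors' design.

```python
import numpy as np

def psi_exact(x):
    """Psi(x) = -ln(tanh(x / 2)), used in the LLR-BP check-node update (x > 0)."""
    return -np.log(np.tanh(x / 2.0))

# Piecewise linear stand-in: interpolate between a few stored breakpoints.
BREAKPOINTS = np.array([0.05, 0.2, 0.5, 1.0, 2.0, 3.0, 4.0, 6.0])
TABLE = psi_exact(BREAKPOINTS)               # values a decoder would precompute

def psi_pwl(x):
    """Piecewise linear approximation of Psi over [0.05, 6.0]."""
    x = np.clip(x, BREAKPOINTS[0], BREAKPOINTS[-1])
    return np.interp(x, BREAKPOINTS, TABLE)

xs = np.array([0.1, 0.7, 1.5, 3.5, 5.0])
print(np.round(psi_exact(xs), 3))
print(np.round(psi_pwl(xs), 3))
```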

Optimization and Real-time Implementation of QCELP Vocoder (QCELP 보코더의 최적화 및 실시간 구현)

  • 변경진;한민수;김경수
    • The Journal of the Acoustical Society of Korea
    • /
    • v.19 no.1
    • /
    • pp.78-83
    • /
    • 2000
  • Vocoders used in digital mobile phones adopt new, improved algorithms to achieve better communication quality, so interoperability problems arise between mobile phones using different vocoder algorithms. In this paper, an efficient implementation of the 8 kbps and 13 kbps QCELP vocoders on a single DSP chip to solve this problem is presented. We also describe the optimization methods at each level, that is, the algorithm, equation, and coding levels, used to reduce the complexity of the QCELP vocoder implementation. With these optimizations, the complexity of the codebook search loop, the dominant part of the QCELP algorithm's complexity, is reduced by about 50%. The QCELP implementation on our DSP requires only 25 MIPS of computation for the 8 kbps mode and 33 MIPS for the 13 kbps mode. The DSP used for the real-time implementation is a 16-bit fixed-point device specifically designed for vocoder applications; it has a simpler architecture than general-purpose DSPs in order to reduce power consumption.


Variable Step LMS Algorithm using Fibonacci Sequence (피보나치 수열을 활용한 가변스텝 LMS 알고리즘)

  • Woo, Hong-Chae
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.19 no.2
    • /
    • pp.42-46
    • /
    • 2018
  • Adaptive signal processing is quite important in various signal and communication environments. Among adaptive signal processing methods, the least mean square (LMS) algorithm is used everywhere because it is simple and robust. By varying the step size, the variable step (VS) LMS algorithm can obtain both fast convergence and a small excess mean square error. Various variable step LMS algorithms have been researched for better performance, but in some of them the computational complexity required for that performance is quite large. The proposed sporadic step algorithm combines the low computational complexity of the fixed step LMS algorithm with the fast convergence of the variable step LMS algorithm. Because the step is updated only sporadically, at instants given by the Fibonacci sequence, the performance of the variable step LMS algorithm can be maintained at a low update rate. The performance of the proposed variable step LMS algorithm is verified on an adaptive equalizer.
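
A minimal sketch of the sporadic-update idea: an ordinary LMS weight update every sample, with the step size re-evaluated only at Fibonacci-numbered instants. The step-adaptation rule itself is a simple error-power heuristic assumed for illustration, as is the toy channel, so this is not the paper's algorithm.

```python
import numpy as np

def fibonacci_upto(n):
    """Fibonacci numbers not exceeding n, used as the step-update instants."""
    fibs, a, b = set(), 1, 2
    while a <= n:
        fibs.add(a)
        a, b = b, a + b
    return fibs

def sporadic_step_lms(x, d, order=8, mu0=0.05):
    """LMS filter whose step size is re-adapted only at Fibonacci instants."""
    instants = fibonacci_upto(len(x))
    w, mu = np.zeros(order), mu0
    err = np.zeros(len(x))
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]           # regressor
        e = d[n] - w @ u
        err[n] = e
        if n in instants:
            # assumed heuristic: shrink the step as the recent error power falls
            recent = np.mean(err[max(order - 1, n - 50):n + 1] ** 2)
            mu = float(np.clip(mu0 * recent / (recent + 1e-3), 1e-4, mu0))
        w = w + mu * e * u
    return w, err

# Toy channel equalization (illustrative): recover BPSK symbols through a short channel.
rng = np.random.default_rng(0)
s = rng.choice([-1.0, 1.0], 4000)
x = np.convolve(s, [1.0, 0.4, -0.2])[:len(s)] + 0.02 * rng.standard_normal(len(s))
w, err = sporadic_step_lms(x, s)
print("final MSE:", round(float(np.mean(err[-500:] ** 2)), 4))
```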

Image warping using an adaptive partial matching method (적응적 부분 정합 방법을 이용한 영상 비틀림 방법)

  • 임동근;호요성
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.22 no.12
    • /
    • pp.2783-2797
    • /
    • 1997
  • This paper proposes a new motion estimation algorithm that employs matching in a variable search area. Instead of using a fixed search range for coarse motion estimation, we examine a varying search range that is determined adaptively from the peak signal-to-noise ratio (PSNR) of the frame difference. The hexagonal matching method is one of the refined methods in image warping; it produces improved image quality, but it requires a large amount of computation. The proposed adaptive partial matching method reduces the computational complexity to below about 50% of that of the hexagonal matching method while maintaining comparable image quality. The performance of two motion compensation methods, which combine the affine or bilinear transformation with the proposed motion estimation algorithm, is evaluated on the following criteria: computational complexity, number of coding bits, and reconstructed image quality. The quality of images reconstructed by the proposed method is substantially better than with the conventional BMA method and is comparable to the full hexagonal matching method; in addition, the computational complexity and the number of coding bits are reduced significantly.
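
A minimal sketch of the adaptive search-range idea: the PSNR of the frame difference decides how wide the block-matching search window should be. The PSNR thresholds and the mapping to a range are illustrative assumptions, not the paper's rule.

```python
import numpy as np

def psnr(a, b, peak=255.0):
    """PSNR between two frames, i.e. of their frame difference."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def adaptive_search_range(prev_frame, curr_frame, r_min=4, r_max=16):
    """Map frame-difference PSNR to a search range: little change (high PSNR)
    gets a small window, heavy motion (low PSNR) a large one."""
    p = psnr(prev_frame, curr_frame)
    if p >= 40.0:
        return r_min
    if p <= 25.0:
        return r_max
    # linear interpolation between the two extremes
    return int(round(r_max - (p - 25.0) / 15.0 * (r_max - r_min)))

rng = np.random.default_rng(0)
f0 = rng.integers(0, 256, (64, 64)).astype(np.float64)
f1 = np.clip(f0 + rng.normal(0.0, 2.0, f0.shape), 0, 255)   # nearly static scene
f2 = rng.integers(0, 256, (64, 64)).astype(np.float64)      # completely new content
print(adaptive_search_range(f0, f1), adaptive_search_range(f0, f2))
```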
