• Title/Summary/Keyword: GPU distributed processing

From WiFi to WiMAX: Efficient GPU-based Parameterized Transceiver across Different OFDM Protocols

  • Li, Rongchun;Dou, Yong;Zhou, Jie;Li, Baofeng;Xu, Jinbo
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.7 no.8
    • /
    • pp.1911-1932
    • /
    • 2013
  • Orthogonal frequency-division multiplexing (OFDM) has become a popular modulation scheme for wireless protocols because of its spectral efficiency and robustness against multipath interference. Although the components of various OFDM protocols are functionally similar, they remain distinct because of the characteristics of their operating environments. Recently, graphics processing units (GPUs) have been used to accelerate physical-layer (PHY) signal processing because of their great computational power, high development efficiency, and flexibility. In this paper, we describe the implementation of parameterized baseband modules on GPUs for two different OFDM protocols, namely 802.11a and 802.16. First, we introduce the modules in the modulator/demodulator parts of the transmitter and receiver and analyze the computational complexity of each module. We then describe the integration of the GPU-based baseband modules of the two protocols using the parameterized method. The GPU-based implementations are described to explain how the baseband processing is accelerated to achieve real-time throughput. Finally, the performance of each signal-processing module is evaluated and analyzed. The experiments show that the GPU-based 802.11a and 802.16 PHYs meet the real-time requirement and demonstrate good bit error ratio (BER) performance. The performance comparison indicates that our GPU-based modules offer better flexibility and throughput than existing implementations.
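
The paper's CUDA modules are not shown in the abstract; as a rough illustration of what a parameterized OFDM receiver stage can look like on a GPU, the sketch below uses CuPy (a NumPy-compatible GPU array library) instead of hand-written kernels. The protocol table, function name, and parameter values are hypothetical stand-ins chosen to resemble 802.11a/802.16 configurations, not the paper's actual module interfaces.

```python
# Hedged sketch: a parameterized OFDM demodulation stage on the GPU.
# CuPy stands in for the paper's CUDA kernels; parameters are illustrative.
import cupy as cp

# Hypothetical per-protocol parameters (FFT size, cyclic-prefix length).
PROTOCOL_PARAMS = {
    "802.11a": {"nfft": 64, "cp_len": 16},
    "802.16":  {"nfft": 256, "cp_len": 32},
}

def ofdm_demodulate(samples: cp.ndarray, protocol: str) -> cp.ndarray:
    """Strip the cyclic prefix of every OFDM symbol and FFT it on the GPU."""
    p = PROTOCOL_PARAMS[protocol]
    sym_len = p["nfft"] + p["cp_len"]
    n_syms = samples.size // sym_len
    # Reshape the sample stream into (symbols, samples-per-symbol).
    syms = samples[: n_syms * sym_len].reshape(n_syms, sym_len)
    # Remove the cyclic prefix and transform all symbols in one batched FFT.
    return cp.fft.fft(syms[:, p["cp_len"]:], axis=1)

if __name__ == "__main__":
    # Random complex baseband samples standing in for received 802.11a data.
    rx = cp.random.standard_normal(64 * 100 * 8).astype(cp.complex64)
    freq = ofdm_demodulate(rx, "802.11a")
    print(freq.shape)  # (number of symbols, FFT size)
```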

High-Performance Korean Morphological Analyzer Using the MapReduce Framework on the GPU

  • Cho, Shi-Won;Lee, Dong-Wook
    • Journal of Electrical Engineering and Technology
    • /
    • v.6 no.4
    • /
    • pp.573-579
    • /
    • 2011
  • To meet the scalability and performance requirements of data analyses, which often involve voluminous data, efficient parallel or concurrent algorithms and frameworks are essential. We present a high-performance Korean morphological analyzer that employs the MapReduce framework on the graphics processing unit (GPU). MapReduce is a programming framework introduced by Google to aid the development of web search applications on large numbers of central processing units (CPUs). GPUs are designed as special-purpose co-processors, and their programming interfaces are typically formulated for graphics applications. Compared with CPUs, GPUs offer greater computational power and memory bandwidth; however, they are more difficult to program because of their architectural design. The performance of the Korean morphological analyzer using the MapReduce framework on the GPU is evaluated against a CPU-based model. The proposed analyzer shows promising, scalable performance for distributed computing on the GPU.
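
The abstract names the MapReduce pattern but gives no interfaces; the sketch below only illustrates the map/shuffle/reduce structure itself, on the CPU and with a trivial token-counting "analysis", since the paper's GPU kernels and morphological rules are not shown. All function names here are hypothetical.

```python
# Minimal map/shuffle/reduce sketch (CPU-only, illustrative).
# The real system runs the map and reduce phases as GPU kernels and
# performs Korean morphological analysis instead of token counting.
from collections import defaultdict
from typing import Iterable

def map_phase(doc: str) -> Iterable[tuple[str, int]]:
    # Emit (token, 1) pairs; a real mapper would emit (morpheme, tag) pairs.
    for token in doc.split():
        yield token, 1

def shuffle(pairs: Iterable[tuple[str, int]]) -> dict[str, list[int]]:
    # Group values by key, as the MapReduce framework does between phases.
    groups: dict[str, list[int]] = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key: str, values: list[int]) -> tuple[str, int]:
    return key, sum(values)

if __name__ == "__main__":
    docs = ["서울 에 가다", "학교 에 가다"]
    pairs = [kv for d in docs for kv in map_phase(d)]
    results = [reduce_phase(k, v) for k, v in shuffle(pairs).items()]
    print(dict(results))  # e.g. {'서울': 1, '에': 2, '가다': 2, '학교': 1}
```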

Analysis of Implementing Mobile Heterogeneous Computing for Image Sequence Processing

  • BAEK, Aram;LEE, Kangwoon;KIM, Jae-Gon;CHOI, Haechul
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.11 no.10
    • /
    • pp.4948-4967
    • /
    • 2017
  • On mobile devices, image sequences are widely used in multimedia applications such as computer vision, video enhancement, and augmented reality. However, real-time processing on mobile devices remains a challenge because of hardware constraints and the demand for higher-resolution images. Recently, heterogeneous computing methods that utilize both a central processing unit (CPU) and a graphics processing unit (GPU) have been researched to accelerate image sequence processing. This paper deals with various optimization techniques, such as parallel processing by the CPU and GPU, distributed processing on the CPU, frame buffer objects, and double buffering for parallel and/or distributed tasks. Using these techniques both individually and in combination, several heterogeneous computing structures were implemented and their effectiveness was analyzed. The experimental results show that heterogeneous computing achieves execution up to 3.5 times faster than CPU-only processing.
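
The paper lists double buffering among its optimizations; the sketch below shows only the general double-buffering idea (preparing the next frame while the current one is processed) using Python threads. The acquire/process functions are placeholders, not the paper's mobile implementation.

```python
# Illustrative double-buffering sketch: while buffer A is being processed,
# buffer B is being filled with the next frame, then the roles swap.
# acquire_frame() and process_frame() are hypothetical placeholders.
import threading

def acquire_frame(index: int) -> list[int]:
    # Stand-in for camera capture / decode of the next frame.
    return [index] * 4

def process_frame(frame: list[int]) -> int:
    # Stand-in for the GPU/CPU image-processing work.
    return sum(frame)

def run_pipeline(num_frames: int) -> list[int]:
    results = []
    buffers = [acquire_frame(0), None]  # two buffers: front and back
    for i in range(num_frames):
        front, back = i % 2, (i + 1) % 2

        def fill(idx=i + 1, b=back):
            buffers[b] = acquire_frame(idx)

        # Start filling the back buffer while the front buffer is processed.
        t = threading.Thread(target=fill)
        t.start()
        results.append(process_frame(buffers[front]))
        t.join()  # back buffer is ready for the next iteration
    return results

if __name__ == "__main__":
    print(run_pipeline(5))
```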

The Need of Cache Partitioning on Shared Cache of Integrated Graphics Processor between CPU and GPU (내장형 GPU 환경에서 CPU-GPU 간의 공유 캐시에서의 캐시 분할 방식의 필요성)

  • Sung, Hanul;Eom, Hyeonsang;Yeom, HeonYoung
    • KIISE Transactions on Computing Practices
    • /
    • v.20 no.9
    • /
    • pp.507-512
    • /
    • 2014
  • Recently, distributed computing has begun to use both the CPU (central processing unit) and the GPU (graphics processing unit) to improve performance and to cope with the dark-silicon problem, in which not all transistors can be powered at once because of power limitations. In an integrated graphics processor, the CPU and GPU share memory and the last-level cache (LLC). However, there are no LLC access rules between the CPU and GPU, so when CPU and GPU processes run at the same time, the performance of both degrades because of contention on the LLC. This paper presents evidence for the need for cache partitioning and discusses a cache-partitioning design that uses page coloring to allocate part of the L3 cache exclusively to the GPU process, thereby guaranteeing GPU process performance.
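
Page coloring itself is an OS-level mechanism, but the color of a physical page is simple arithmetic; the sketch below uses hypothetical cache geometry (not the paper's actual hardware parameters) to show how pages map to colors and how reserving a subset of colors partitions the LLC.

```python
# Hedged sketch of page-coloring arithmetic. The cache geometry below is
# assumed; the actual LLC parameters of the paper's platform are not reproduced.
PAGE_SIZE = 4096          # bytes per page
LINE_SIZE = 64            # bytes per cache line
NUM_SETS = 8192           # number of LLC sets (assumed)
SETS_PER_PAGE = PAGE_SIZE // LINE_SIZE     # sets touched by one page
NUM_COLORS = NUM_SETS // SETS_PER_PAGE     # distinct page colors

def page_color(phys_addr: int) -> int:
    """Color of the page containing phys_addr, i.e. which group of LLC sets it maps to."""
    return (phys_addr // PAGE_SIZE) % NUM_COLORS

# Partitioning: if the OS gives the GPU process only pages whose color is in
# GPU_COLORS, its data can occupy at most len(GPU_COLORS)/NUM_COLORS of the LLC.
GPU_COLORS = set(range(0, NUM_COLORS // 4))   # e.g. reserve a quarter of the cache

if __name__ == "__main__":
    for addr in (0x0000_0000, 0x0002_0000, 0x0040_0000):
        c = page_color(addr)
        print(hex(addr), "color", c, "GPU-reserved" if c in GPU_COLORS else "CPU")
```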

A Study on the Performance Improvement of Software Digital Filter using GPU (GPU를 이용한 소프트웨어 디지털 필터의 성능개선에 관한 연구)

  • Yeom, Jae-Hwan;Oh, Se-Jin;Roh, Duk-Gyoo;Jung, Dong-Kyu;Hwang, Ju-Yeon;Oh, Chungsik;Kim, Hyo-Ryoung
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.19 no.4
    • /
    • pp.153-161
    • /
    • 2018
  • This paper describes the performance improvement of a software (SW) digital filter using a GPU (graphics processing unit). The previously developed SW digital filter operates on a CPU (central processing unit) basis and suffers from slow processing speed. The GPU was introduced to filter EAVN (East Asian VLBI Network) observation data, in order to improve the operation speed and to allow the filtered data to be processed together with data from other stations. To enhance the computational speed of the SW digital filter, an NVIDIA Titan V GPU board with built-in Tensor Cores was used. By filtering 95 seconds of 2 Gbps (512 MHz BW, 1-IF) observation data, processing times of about 0.78 (1 Gbps, 16 MHz BW, 16-IF) and 1.1 (2 Gbps, 32 MHz BW, 16-IF) times the observing time were achieved, respectively. In addition, 2 Gbps data observed simultaneously at 1 and 2 Gbps with the KVN (Korean VLBI Network) were digitally filtered, and comparison with the 1 Gbps data yielded similar values for the cross power spectrum, phase, and SNR (signal-to-noise ratio). These results confirm the effectiveness of the developed GPU-based SW digital filter for data processing and analysis. In the future, it is expected that observation data will be filtered in real time once the source code is optimized for distributed processing across multiple GPU boards.
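
The paper's filter implementation is not shown in the abstract; as a loose illustration of GPU-side digital filtering, the sketch below applies an FIR low-pass filter to one block of samples via frequency-domain multiplication with CuPy. The tap count, cutoff, and block size are made-up values, not the EAVN/KVN filter specifications.

```python
# Hedged sketch: FIR filtering of a sample block on the GPU with CuPy,
# using single-block FFT-based convolution. Settings are illustrative only.
import numpy as np
import cupy as cp
from scipy.signal import firwin

NTAPS = 129
CUTOFF = 0.25          # normalized cutoff (Nyquist = 1.0), assumed value
BLOCK = 1 << 20        # samples per processing block, assumed value

def gpu_fir_filter(block_cpu: np.ndarray, taps_cpu: np.ndarray) -> np.ndarray:
    x = cp.asarray(block_cpu)                      # host -> device copy
    h = cp.asarray(taps_cpu)
    n = int(x.size + h.size - 1)                   # full convolution length
    y = cp.fft.irfft(cp.fft.rfft(x, n) * cp.fft.rfft(h, n), n)
    return cp.asnumpy(y[: x.size])                 # device -> host copy

if __name__ == "__main__":
    taps = firwin(NTAPS, CUTOFF)                   # low-pass FIR design (on the CPU)
    samples = np.random.standard_normal(BLOCK).astype(np.float32)
    filtered = gpu_fir_filter(samples, taps)
    print(filtered.shape, filtered.dtype)
```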

An Analysis of Existing Studies on Parallel and Distributed Processing of the Rete Algorithm (Rete 알고리즘의 병렬 및 분산 처리에 관한 기존 연구 분석)

  • Kim, Jaehoon
    • The Journal of Korean Institute of Information Technology
    • /
    • v.17 no.7
    • /
    • pp.31-45
    • /
    • 2019
  • The core technologies for today's intelligent services are deep learning, that is, neural networks, together with parallel and distributed processing technologies such as GPU parallel computing and big data. However, for future intelligent services and knowledge-sharing services based on globally shared ontologies, there is a technology better suited than neural networks for representing and reasoning over knowledge: the IF-THEN knowledge representation of RIF or SWRL, the standard rule languages of the Semantic Web, over which inference can be performed efficiently using the Rete algorithm. However, when the Rete algorithm runs on a single computer and the number of rules reaches 100,000, performance degrades severely, taking several tens of minutes, which is an obvious limitation. Therefore, in this paper, we analyze past and current studies on parallel and distributed processing of the Rete algorithm and examine which aspects should be considered to implement an efficient Rete algorithm.
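
The survey discusses the Rete algorithm only at a high level; the toy sketch below illustrates the core idea Rete exploits, caching facts and partial matches so that adding one fact does not re-match every rule against every fact. It is a didactic miniature with an invented rule, not any of the surveyed parallel or distributed implementations.

```python
# Toy illustration of Rete-style incremental matching for a rule of the form
#   IF parent(X, Y) AND parent(Y, Z) THEN grandparent(X, Z)
# Per-condition memories are indexed on the join variable Y, so each new fact
# is joined only against cached matches instead of the whole fact base.
from collections import defaultdict

by_child = defaultdict(list)    # parent(a, b) facts indexed by b (fact's child)
by_parent = defaultdict(list)   # parent(a, b) facts indexed by a (fact's parent)
grandparents = set()            # inferred grandparent(X, Z) conclusions

def add_parent_fact(a: str, b: str) -> None:
    """Insert parent(a, b) and extend only the joins it can participate in."""
    # New fact as the *second* condition: join with cached parent(X, a).
    for x, _ in by_child[a]:
        grandparents.add((x, b))
    # New fact as the *first* condition: join with cached parent(b, Z).
    for _, z in by_parent[b]:
        grandparents.add((a, z))
    by_child[b].append((a, b))
    by_parent[a].append((a, b))

if __name__ == "__main__":
    for fact in [("kim", "lee"), ("lee", "park"), ("park", "choi")]:
        add_parent_fact(*fact)
    print(sorted(grandparents))  # [('kim', 'park'), ('lee', 'choi')]
```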

Extending Caffe for Machine Learning of Large Neural Networks Distributed on GPUs (대규모 신경회로망 분산 GPU 기계 학습을 위한 Caffe 확장)

  • Oh, Jong-soo;Lee, Dongho
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.7 no.4
    • /
    • pp.99-102
    • /
    • 2018
  • Caffe is neural network training software widely used in academic research. GPU memory capacity is one of the most important constraints when designing neural network architectures; for example, many object detection systems must fit within 12 GB to run on a single GPU. In this paper, we extended Caffe to use more than 12 GB of GPU memory. To verify the effectiveness of the extended software, we ran training experiments to measure the learning efficiency of an object detection neural network on a PC with three GPUs.
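
The abstract does not describe how the extension works internally; as a loose illustration of one common way to exceed a single GPU's memory, splitting a network's layers across devices, the sketch below uses PyTorch rather than the authors' Caffe extension. The layer sizes and the two-GPU split are invented, and the script assumes a machine with at least two CUDA GPUs.

```python
# Hedged sketch of model parallelism: placing different layers of one network
# on different GPUs so the total footprint can exceed a single GPU's memory.
# Shown with PyTorch for brevity; not the paper's Caffe-based mechanism.
import torch
import torch.nn as nn

class TwoGpuNet(nn.Module):
    def __init__(self):
        super().__init__()
        # First half of the network lives on GPU 0, second half on GPU 1.
        self.part1 = nn.Sequential(nn.Linear(4096, 8192), nn.ReLU()).to("cuda:0")
        self.part2 = nn.Sequential(nn.Linear(8192, 1000)).to("cuda:1")

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.part1(x.to("cuda:0"))
        # Move activations between devices at the split point.
        return self.part2(x.to("cuda:1"))

if __name__ == "__main__":
    model = TwoGpuNet()
    out = model(torch.randn(32, 4096))
    out.sum().backward()      # gradients flow back across both GPUs
    print(out.shape, out.device)
```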

A Performance Analysis of Model Training Due to Different Batch Sizes in Synchronous Distributed Deep Learning Environments (동기식 분산 딥러닝 환경에서 배치 사이즈 변화에 따른 모델 학습 성능 분석)

  • Yerang Kim;HyungJun Kim;Heonchang Yu
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2023.11a
    • /
    • pp.79-80
    • /
    • 2023
  • Synchronous distributed deep learning shortens model training by dividing the gradient computation among multiple workers that process it in parallel. The batch size is the number of data samples processed per iteration and is an important factor affecting both training speed and model quality. For distributed training in a multi-GPU environment, the upper bound on the selectable batch size grows as the available GPU memory increases. However, the effect of the batch size on training speed and model quality depends on many variables, such as GPU utilization, the total number of epochs, and the number of model parameters, so the optimal value is not easy to find. This study experimentally analyzes the main factors that influence the choice of an optimal batch size in a synchronous distributed deep learning environment.
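
The abstract relates batch size to the number of workers; a minimal sketch of synchronous data-parallel training with PyTorch DistributedDataParallel is shown below, where the effective global batch size is the per-GPU batch size times the number of workers. The model, data, and hyperparameters are placeholders, not the paper's experimental setup.

```python
# Minimal synchronous data-parallel sketch (PyTorch DDP). The global batch
# size equals PER_GPU_BATCH * world_size; all values below are placeholders.
# Launch with: torchrun --nproc_per_node=<num_gpus> this_script.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

PER_GPU_BATCH = 64

def main():
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ.get("LOCAL_RANK", 0))
    torch.cuda.set_device(local_rank)
    world_size = dist.get_world_size()

    model = DDP(torch.nn.Linear(1024, 10).cuda(), device_ids=[local_rank])
    opt = torch.optim.SGD(model.parameters(), lr=0.01)

    global_batch = PER_GPU_BATCH * world_size  # the quantity the paper studies
    for step in range(10):
        x = torch.randn(PER_GPU_BATCH, 1024, device="cuda")
        y = torch.randint(0, 10, (PER_GPU_BATCH,), device="cuda")
        loss = torch.nn.functional.cross_entropy(model(x), y)
        opt.zero_grad()
        loss.backward()      # DDP all-reduces gradients across workers here
        opt.step()
        if dist.get_rank() == 0:
            print(f"step {step}, global batch {global_batch}, loss {loss.item():.4f}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```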

Efficient Tiled Stereo Display System for Tangible Meeting

  • Kim, Ig-Jae;Ahn, Sang-Chul;Kim, Hyoung-Gon
    • Korean Information Display Society Conference Proceedings (한국정보디스플레이학회 학술대회논문집)
    • /
    • 2007.08b
    • /
    • pp.1239-1241
    • /
    • 2007
  • In this paper, we present a tiled display system for tangible meetings. We built the system as a distributed system and use GPU-based warping and image blending techniques for real-time processing. For efficiency, we update in real time only the specific region where the remote user appears and blend it with a static panoramic image of the remote site.
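
The abstract describes blending a dynamically updated region into a static panorama; the NumPy sketch below shows only that compositing step, on the CPU and with made-up image sizes and alpha value, not the system's GPU warping and blending shaders.

```python
# Illustrative alpha-blend of a small, frequently updated region into a
# large static panorama. Sizes and alpha are made-up; the real system
# performs the warping and blending on the GPU.
import numpy as np

panorama = np.zeros((1080, 4096, 3), dtype=np.float32)   # static background

def blend_region(pano: np.ndarray, region: np.ndarray,
                 top: int, left: int, alpha: float = 0.7) -> None:
    """Blend `region` over the panorama in place at (top, left)."""
    h, w = region.shape[:2]
    roi = pano[top:top + h, left:left + w]
    roi[:] = alpha * region + (1.0 - alpha) * roi

if __name__ == "__main__":
    user_patch = np.random.rand(480, 640, 3).astype(np.float32)  # remote user area
    blend_region(panorama, user_patch, top=300, left=1700)
    print(panorama[300, 1700], panorama[0, 0])  # blended pixel vs untouched pixel
```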

Scalable Big Data Pipeline for Video Stream Analytics Over Commodity Hardware

  • Ayub, Umer;Ahsan, Syed M.;Qureshi, Shavez M.
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.4
    • /
    • pp.1146-1165
    • /
    • 2022
  • A huge amount of data in the form of videos and images is being produced owning to advancements in sensor technology. Use of low performance commodity hardware coupled with resource heavy image processing and analyzing approaches to infer and extract actionable insights from this data poses a bottleneck for timely decision making. Current approach of GPU assisted and cloud-based architecture video analysis techniques give significant performance gain, but its usage is constrained by financial considerations and extremely complex architecture level details. In this paper we propose a data pipeline system that uses open-source tools such as Apache Spark, Kafka and OpenCV running over commodity hardware for video stream processing and image processing in a distributed environment. Experimental results show that our proposed approach eliminates the need of GPU based hardware and cloud computing infrastructure to achieve efficient video steam processing for face detection with increased throughput, scalability and better performance.