• Title/Summary/Keyword: Computations Execution

Design Space Exploration of Many-Core Architecture for Sound Synthesis of Guitar on Portable Device (휴대 장치용 기타 음 합성을 위한 매니코어 아키텍처의 디자인 공간 탐색)

  • Kang, Myeongsu;Kim, Jong-Myon
    • Proceedings of the Korean Society of Computer Information Conference / 2014.01a / pp.1-4 / 2014
  • Although physical modeling synthesis is becoming increasingly capable of producing rich, natural, high-quality sound, its high computational complexity limits its use in portable devices. This constraint motivated research on single-instruction multiple-data (SIMD) many-core architectures that support the tremendous amount of computation by exploiting the massive parallelism inherent in physical modeling synthesis. Since no general consensus has been reached on which grain sizes of many-core processors and memories provide the most efficient operation for sound synthesis, a design space exploration is conducted over seven processing element (PE) configurations. To find an optimal PE configuration, each configuration is evaluated in terms of execution time, area efficiency, and energy efficiency. Experimental results show that all PE configurations satisfy the system requirements for implementation in portable devices.

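The entry above frames design space exploration as scoring each processing element (PE) configuration on execution time, area efficiency, and energy efficiency. The C sketch below illustrates only that style of comparison; the configuration names and every number in it are invented placeholders, not the paper's seven configurations or its measurements.

    /* Hedged sketch: comparing hypothetical PE configurations by the three
     * metrics named in the abstract. All figures are made-up placeholders. */
    #include <stdio.h>

    struct pe_config {
        const char *name;       /* hypothetical configuration label        */
        double exec_time_ms;    /* time to synthesize one audio buffer     */
        double area_mm2;        /* silicon area of the PE array            */
        double energy_mj;       /* energy consumed for that buffer         */
    };

    int main(void) {
        /* Placeholder configurations standing in for the paper's seven. */
        struct pe_config cfgs[] = {
            { "PE16", 4.0, 12.0, 3.2 },
            { "PE32", 2.3, 20.0, 3.0 },
            { "PE64", 1.4, 35.0, 3.5 },
        };
        int n = (int)(sizeof cfgs / sizeof cfgs[0]);

        for (int i = 0; i < n; i++) {
            double throughput = 1000.0 / cfgs[i].exec_time_ms;  /* buffers/s        */
            double area_eff   = throughput / cfgs[i].area_mm2;  /* buffers/s per mm^2 */
            double energy_eff = 1.0 / cfgs[i].energy_mj;        /* buffers per mJ   */
            printf("%-5s  time=%.1f ms  area-eff=%.2f  energy-eff=%.2f\n",
                   cfgs[i].name, cfgs[i].exec_time_ms, area_eff, energy_eff);
        }
        return 0;
    }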

An Efficient Dynamic Load balancing Strategy for Tree-structured Computations (트리구조의 계산을 위한 효율적인 동적 부하분산 전략)

  • Hwang, In-Jae;Hong, Dong-Kweon
    • The KIPS Transactions:PartA / v.8A no.4 / pp.455-460 / 2001
  • For some applications, the computational structure changes dynamically during program execution. When this happens, static partitioning and allocation of tasks are not enough to achieve high performance on parallel computers. In this paper, we propose a dynamic load balancing algorithm that efficiently distributes a computation with a dynamically growing tree structure across processors. We present an implementation technique for the algorithm on mesh architectures and analyze its complexity. We also demonstrate through experiments that our algorithm provides good-quality solutions.

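As a rough illustration of distributing a growing tree-structured computation over a mesh of processors, the following C sketch lets each processor hand surplus tasks to its least-loaded mesh neighbour. It is a generic diffusion-style balancer with assumed loads and thresholds, not the algorithm proposed in the paper.

    /* Minimal sketch (not the paper's algorithm): processors on a 2-D mesh
     * each hold a count of pending tree-node tasks; when a processor's load
     * exceeds a neighbour's by more than a threshold, it ships half of the
     * surplus to the least-loaded neighbour. Loads and mesh size are
     * hypothetical. */
    #include <stdio.h>

    #define DIM 4                      /* 4x4 mesh of processors            */
    #define THRESHOLD 4                /* imbalance that triggers migration */

    static int load[DIM][DIM];

    /* One balancing step for processor (r, c). */
    static void balance_step(int r, int c) {
        int dr[] = { -1, 1, 0, 0 }, dc[] = { 0, 0, -1, 1 };
        int best_r = r, best_c = c;
        for (int k = 0; k < 4; k++) {
            int nr = r + dr[k], nc = c + dc[k];
            if (nr < 0 || nr >= DIM || nc < 0 || nc >= DIM) continue;
            if (load[nr][nc] < load[best_r][best_c]) { best_r = nr; best_c = nc; }
        }
        int diff = load[r][c] - load[best_r][best_c];
        if (diff > THRESHOLD) {               /* migrate half of the surplus */
            int moved = diff / 2;
            load[r][c] -= moved;
            load[best_r][best_c] += moved;
        }
    }

    int main(void) {
        /* Hypothetical initial loads produced by an unevenly growing tree. */
        for (int r = 0; r < DIM; r++)
            for (int c = 0; c < DIM; c++)
                load[r][c] = (r == 0 && c == 0) ? 64 : 1;

        for (int iter = 0; iter < 16; iter++)      /* a few balancing rounds */
            for (int r = 0; r < DIM; r++)
                for (int c = 0; c < DIM; c++)
                    balance_step(r, c);

        for (int r = 0; r < DIM; r++) {
            for (int c = 0; c < DIM; c++) printf("%3d ", load[r][c]);
            printf("\n");
        }
        return 0;
    }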

Acceleration of ECC Computation for Robust Massive Data Reception under GPU-based Embedded Systems (GPU 기반 임베디드 시스템에서 대용량 데이터의 안정적 수신을 위한 ECC 연산의 가속화)

  • Kwon, Jisu;Park, Daejin
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.7 / pp.956-962 / 2020
  • Recently, as the size of the data used in embedded systems has increased, the need for an ECC decoding operation that can robustly receive massive data has grown. In this paper, we propose a method to accelerate the computations that derive syndrome vectors when ECC decoding is performed with a Hamming code in an embedded system with a built-in GPU. The proposed acceleration method expresses the matrix-vector multiplication of the decoding operation in the CSR format, one of the data structures for representing sparse matrices, and performs it in parallel in a CUDA kernel on the GPU. We evaluated the proposed method on a target embedded board with a GPU, and the results show that the execution time of the ECC decoding operation is shorter when accelerated on the GPU than when only the CPU is used.

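The abstract describes computing syndrome vectors as a sparse matrix-vector multiplication with the parity-check matrix stored in CSR format, executed in a CUDA kernel. The C sketch below shows the same per-row computation over GF(2) on the CPU, using the standard (7,4) Hamming parity-check matrix as an assumed example; on the GPU, each row's loop would typically be assigned to one thread.

    /* Hedged CPU sketch of the syndrome computation: the parity-check matrix
     * H is stored in CSR form and each row's dot product with the received
     * word is taken modulo 2. Because every stored entry of a binary matrix
     * is 1, only the column indices and row offsets are needed. */
    #include <stdio.h>

    static const int row_ptr[] = { 0, 4, 8, 12 };        /* 3 rows            */
    static const int col_idx[] = { 0, 2, 4, 6,           /* row 0: 1010101    */
                                   1, 2, 5, 6,           /* row 1: 0110011    */
                                   3, 4, 5, 6 };         /* row 2: 0001111    */

    static void syndrome_csr(const int *rp, const int *ci, int rows,
                             const unsigned char *received, unsigned char *synd) {
        for (int r = 0; r < rows; r++) {          /* one GPU thread per row   */
            unsigned char acc = 0;
            for (int j = rp[r]; j < rp[r + 1]; j++)
                acc ^= received[ci[j]];           /* GF(2) accumulation       */
            synd[r] = acc;
        }
    }

    int main(void) {
        unsigned char received[7] = { 1, 0, 1, 1, 0, 1, 0 };  /* arbitrary word */
        unsigned char synd[3];
        syndrome_csr(row_ptr, col_idx, 3, received, synd);
        printf("syndrome = %d%d%d\n", synd[0], synd[1], synd[2]);
        return 0;
    }
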
Automatic decomposition of unstructured meshes employing genetic algorithms for parallel FEM computations

  • Rama Mohan Rao, A.;Appa Rao, T.V.S.R.;Dattaguru, B.
    • Structural Engineering and Mechanics / v.14 no.6 / pp.625-647 / 2002
  • Parallel execution of computational mechanics codes requires efficient mesh-partitioning techniques. These techniques divide the mesh into a specified number of submeshes of approximately the same size while minimising the number of interface nodes between submeshes. This paper describes a new mesh-partitioning technique employing Genetic Algorithms. The proposed algorithm operates on the deduced graph (dual or nodal graph) of the given finite element mesh rather than directly on the mesh itself. The algorithm works by first constructing a coarse graph approximation using an automatic graph coarsening method. The coarse graph is partitioned and the results are interpolated onto the original graph to initialise an optimisation of the graph partition problem. In practice, a hierarchy of (usually more than two) graphs is used to obtain the final graph partition. The proposed partitioning algorithm is applied to graphs derived from unstructured finite element meshes describing practical engineering problems, as well as to several example graphs related to finite element meshes given in the literature. The test results indicate that the proposed GA-based graph partitioning algorithm generates high-quality partitions and is superior to spectral and multilevel graph partitioning algorithms.

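One building block of a GA-based partitioner is a fitness function that scores a candidate partition by its edge cut and balance. The C sketch below evaluates a bisection chromosome on a tiny example graph; the graph, the penalty weight, and the fitness form are illustrative assumptions, and the paper's coarsening and interpolation steps are not reproduced.

    /* Fitness of a bisection chromosome: edge cut plus a penalty for size
     * imbalance between the two parts (lower is better). */
    #include <stdio.h>
    #include <stdlib.h>

    struct edge { int u, v; };

    static double fitness(const int *part, int nv,
                          const struct edge *e, int ne, double balance_weight) {
        int cut = 0, in_part1 = 0;
        for (int i = 0; i < ne; i++)
            if (part[e[i].u] != part[e[i].v]) cut++;   /* edge crosses the cut */
        for (int v = 0; v < nv; v++) in_part1 += part[v];
        int imbalance = abs(2 * in_part1 - nv);        /* |size1 - size0|      */
        return cut + balance_weight * imbalance;
    }

    int main(void) {
        /* A 6-vertex example: two triangles joined by a single edge. */
        struct edge e[] = { {0,1},{1,2},{0,2}, {3,4},{4,5},{3,5}, {2,3} };
        int good[] = { 0,0,0, 1,1,1 };                 /* cut = 1, balanced    */
        int bad[]  = { 0,1,0, 1,0,1 };                 /* cuts many edges      */
        printf("good partition fitness: %.1f\n", fitness(good, 6, e, 7, 1.0));
        printf("bad  partition fitness: %.1f\n", fitness(bad,  6, e, 7, 1.0));
        return 0;
    }
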
Accelerating Self-Similarity-Based Image Super-Resolution Using OpenCL

  • Jun, Jae-Hee;Choi, Ji-Hoon;Lee, Dae-Yeol;Jeong, Seyoon;Cho, Suk-Hee;Kim, Hui-Yong;Kim, Jong-Ok
    • IEIE Transactions on Smart Processing and Computing / v.4 no.1 / pp.10-15 / 2015
  • This paper proposes a parallel implementation of a self-similarity-based image super-resolution (SR) algorithm using OpenCL. The SR algorithm requires tremendous computation to search for similar patches, which becomes a bottleneck for real-time conversion from an FHD image to UHD. Therefore, it is imperative to accelerate the processing speed of SR algorithms. For parallelization, the SR process is divided into several kernels, and memory optimization is performed. In addition, two GPUs are used for further acceleration. The experimental results show that the GPGPU implementation achieves a speed-up of over 140 times compared to a single-core CPU. Furthermore, it was confirmed experimentally that utilizing two GPUs reduces the execution time proportionally, yielding a speed-up of up to 277 times.

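The bottleneck the abstract points to is the search for a similar patch. The C sketch below performs that search for a single reference patch using a sum-of-squared-differences criterion over an exhaustive window; in the OpenCL implementation each work-item would carry out such a loop for its own patch. The image, patch size, and window are arbitrary assumptions.

    /* Exhaustive patch search: find the candidate patch with the smallest
     * SSD relative to the reference patch. */
    #include <stdio.h>
    #include <limits.h>

    #define W 16
    #define H 16
    #define PATCH 4

    static unsigned char img[H][W];

    static long patch_ssd(int ry, int rx, int cy, int cx) {
        long ssd = 0;
        for (int y = 0; y < PATCH; y++)
            for (int x = 0; x < PATCH; x++) {
                int d = (int)img[ry + y][rx + x] - (int)img[cy + y][cx + x];
                ssd += (long)d * d;
            }
        return ssd;
    }

    int main(void) {
        /* Synthetic gradient image just to have data to search over. */
        for (int y = 0; y < H; y++)
            for (int x = 0; x < W; x++)
                img[y][x] = (unsigned char)(8 * ((x + y) % 16));

        int ry = 2, rx = 3;                         /* reference patch corner  */
        long best = LONG_MAX; int by = 0, bx = 0;
        for (int cy = 0; cy + PATCH <= H; cy++)     /* search the whole image  */
            for (int cx = 0; cx + PATCH <= W; cx++) {
                if (cy == ry && cx == rx) continue; /* skip the patch itself   */
                long s = patch_ssd(ry, rx, cy, cx);
                if (s < best) { best = s; by = cy; bx = cx; }
            }
        printf("best match for (%d,%d) is (%d,%d), SSD=%ld\n", ry, rx, by, bx, best);
        return 0;
    }
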
A Data Fault Attack on the Miller Algorithm for Pairing Computation in Mobile Ad-Hoc Network Environments (이동 Ad-Hoc 네트워크 환경에서 페어링 연산의 밀러 알고리듬에 대한 데이터 오류 공격)

  • Bae, KiSeok;Sohn, GyoYong;Park, YoungHo;Moon, SangJae
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.2 / pp.70-79 / 2013
  • Recently, various types of pairing computations have been introduced to implement ID-based cryptosystems for mobile ad hoc networks. The Miller algorithm is the most popular algorithm for typical pairing computations such as the Weil, Tate, and Ate pairings. In this paper, we analyze the feasibility of the concrete data fault injection attack proposed by Whelan and Scott, regardless of the round position at which the fault is injected during the execution of the Miller algorithm. The simulation results show that the attack, which can be applied regardless of round positions and coordinate systems, is effective and powerful.

A Design of a Tile Based Rasterizer Using Memory Hierarchy Structure (메모리 계층 구조를 사용한 타일 기반 레스터라이져 설계)

  • Kim, Do Hyun;Kwak, Jae Chang
    • Journal of IKEEE / v.19 no.4 / pp.590-595 / 2015
  • This paper proposes an efficient memory hierarchy structure for a tile-based rasterizer. The proposed hierarchy avoids unnecessary processing of low-level tiles for which no calculation is required. Each low-level tile is classified into one of three categories based on its maximum and minimum positions and an inside-outside test, and this classification determines whether calculations are necessary for the corresponding tile. By not invoking low-level tiles that require no calculation, the overall amount of computation for graphics processing is reduced, which in turn reduces the execution time. The efficiency gain increases with the vertex density of the 3D model.

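The abstract's three-way classification of a low-level tile (by minimum/maximum position and an inside-outside test) can be sketched in C as follows: a tile is rejected by a bounding-box test, accepted when all four corners pass every edge's inside test, and otherwise marked for per-pixel work. The triangle and tile coordinates are arbitrary examples, not the paper's design.

    /* Three-way tile classification against one triangle. */
    #include <stdio.h>

    struct v2 { float x, y; };

    /* Signed area test: > 0 means p lies to the left of edge a->b. */
    static float edge_fn(struct v2 a, struct v2 b, struct v2 p) {
        return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
    }

    enum tile_class { TILE_OUTSIDE, TILE_INSIDE, TILE_PARTIAL };

    static enum tile_class classify(struct v2 t[3], float x0, float y0, float size) {
        /* 1. min/max (bounding box) rejection */
        float minx = t[0].x, maxx = t[0].x, miny = t[0].y, maxy = t[0].y;
        for (int i = 1; i < 3; i++) {
            if (t[i].x < minx) minx = t[i].x;
            if (t[i].x > maxx) maxx = t[i].x;
            if (t[i].y < miny) miny = t[i].y;
            if (t[i].y > maxy) maxy = t[i].y;
        }
        if (x0 > maxx || x0 + size < minx || y0 > maxy || y0 + size < miny)
            return TILE_OUTSIDE;

        /* 2. inside-outside test on the four tile corners */
        struct v2 corners[4] = { {x0, y0}, {x0 + size, y0},
                                 {x0, y0 + size}, {x0 + size, y0 + size} };
        int all_inside = 1;
        for (int c = 0; c < 4; c++)
            for (int e = 0; e < 3; e++)
                if (edge_fn(t[e], t[(e + 1) % 3], corners[c]) < 0.0f)
                    all_inside = 0;
        return all_inside ? TILE_INSIDE : TILE_PARTIAL;
    }

    int main(void) {
        struct v2 tri[3] = { {2, 2}, {30, 4}, {8, 28} };   /* CCW triangle */
        printf("%d %d %d\n",
               classify(tri, 40, 40, 8),    /* expect TILE_OUTSIDE (0) */
               classify(tri, 8, 8, 4),      /* expect TILE_INSIDE  (1) */
               classify(tri, 0, 0, 8));     /* expect TILE_PARTIAL (2) */
        return 0;
    }
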
Multi-Sized cumulative Summary Structure Driven Light Weight in Frequent Closed Itemset Mining to Increase High Utility

  • Siva S;Shilpa Chaudhari
    • Journal of information and communication convergence engineering / v.21 no.2 / pp.117-129 / 2023
  • High-utility itemset mining (HUIM) has emerged as a key data-mining paradigm for object-of-interest identification and recommendation systems, serving as a tool for frequent itemset identification, product or service recommendation, and similar tasks. Recently, it has gained widespread attention owing to its increasing role in business intelligence, top-N recommendation, and other enterprise solutions. Despite their increasing significance, most existing solutions, including frequent itemset mining, HUIM, and high-average and fast high-utility itemset mining, cannot provide swift and accurate predictions and are limited in coping with real-time enterprise demands. Moreover, complex computations and high memory exhaustion limit their scalability as enterprise solutions. To address these limitations, this study proposes a model that extracts high-utility frequent closed itemsets based on an improved cumulative summary list structure (CSLFC-HUIM) to reduce the set of candidate items in the search space. Moreover, it employs the lift score as the minimum threshold, called the cumulative utility threshold, to prune the set of itemsets in the search space using a nested-list structure, which improves computation time, cost, and memory consumption. Simulations over different datasets revealed that the proposed CSLFC-HUIM model outperforms existing methods, such as closed-HUIM and frequent closed-HUIM variants, in terms of execution time and memory consumption, making it suitable for a range of mined items and the allied intelligence of business goals.

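For readers unfamiliar with the underlying notion, the C sketch below computes the utility of an itemset in the standard way (quantities times per-item profits, summed over the transactions that contain the itemset) and compares it against a minimum utility threshold. The tiny database, profits, and threshold are invented, and the paper's cumulative summary list structure is not reproduced.

    /* Utility of an itemset in a small transaction database. */
    #include <stdio.h>

    #define NITEMS 4
    #define NTRANS 3

    /* external utility (profit) of each item */
    static const int profit[NITEMS] = { 5, 2, 1, 4 };
    /* quantity of each item in each transaction (0 = absent) */
    static const int qty[NTRANS][NITEMS] = {
        { 1, 2, 0, 1 },
        { 0, 3, 1, 2 },
        { 2, 0, 0, 1 },
    };

    /* Utility of the itemset given by `member` (1 = item is in the set). */
    static int itemset_utility(const int *member) {
        int total = 0;
        for (int t = 0; t < NTRANS; t++) {
            int contains = 1, u = 0;
            for (int i = 0; i < NITEMS; i++) {
                if (!member[i]) continue;
                if (qty[t][i] == 0) { contains = 0; break; }
                u += qty[t][i] * profit[i];
            }
            if (contains) total += u;
        }
        return total;
    }

    int main(void) {
        int itemset[NITEMS] = { 0, 1, 0, 1 };   /* items {1, 3}           */
        int minutil = 15;                       /* hypothetical threshold */
        int u = itemset_utility(itemset);
        printf("utility = %d -> %s\n", u, u >= minutil ? "high utility" : "pruned");
        return 0;
    }
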
Batch Processing Algorithm for Moving k-Farthest Neighbor Queries in Road Networks (도로망에서 움직이는 k-최원접 이웃 질의를 위한 일괄 처리 알고리즘)

  • Cho, Hyung-Ju
    • Proceedings of the Korean Society of Computer Information Conference / 2021.07a / pp.223-224 / 2021
  • Recently, k-farthest neighbor (kFN) queries have received much less attention than k-nearest neighbor (kNN) queries. This study therefore considers moving k-farthest neighbor (MkFN) queries for spatial network databases. Given a positive integer k, a moving query point q, and a set of data points P, an MkFN query continuously retrieves the k data points that are farthest from the query point q. The challenge in processing MkFN queries in spatial networks is to avoid unnecessary or superfluous distance calculations between the query and the associated data points. This study proposes a batch processing algorithm, called MOFA, to enable efficient processing of MkFN queries in spatial networks. MOFA avoids dispensable distance computations by clustering both query and data points. Moreover, a time complexity analysis is presented to clarify the effect of the clustering method on the query processing time. Extensive experiments using real-world roadmaps demonstrate the efficiency and scalability of MOFA compared with a conventional solution.

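To make the query semantics concrete, the C sketch below returns the k data points with the largest distances from the query, given the distances directly as an array. Computing those road-network distances, and avoiding most of them by clustering query and data points as MOFA does, is exactly what the paper addresses and is not shown here.

    /* Select the indices of the k largest distances (simple selection scan). */
    #include <stdio.h>

    static void k_farthest(const double *dist, int n, int k, int *out) {
        int taken[64] = { 0 };                 /* assumes n <= 64 for this sketch */
        for (int j = 0; j < k; j++) {
            int best = -1;
            for (int i = 0; i < n; i++)
                if (!taken[i] && (best < 0 || dist[i] > dist[best]))
                    best = i;
            taken[best] = 1;
            out[j] = best;
        }
    }

    int main(void) {
        /* Hypothetical network distances from the query to 8 data points. */
        double dist[] = { 3.5, 12.0, 7.2, 15.8, 1.1, 9.4, 14.3, 6.0 };
        int out[3];
        k_farthest(dist, 8, 3, out);
        printf("3 farthest data points: p%d p%d p%d\n", out[0], out[1], out[2]);
        return 0;
    }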

Compact Field Remapping for Dynamically Allocated Structures (동적으로 할당된 구조체를 위한 압축된 필드 재배치)

  • Kim, Jeong-Eun;Han, Hwan-Soo
    • Journal of KIISE:Software and Applications / v.32 no.10 / pp.1003-1012 / 2005
  • The most significant difference between embedded systems and general-purpose systems is that embedded systems may use only limited resources, including battery and memory. In particular, the number of applications that deal with multimedia data is increasing. In such computation-intensive systems, memory access latency is one of the major bottlenecks limiting system performance. As a result, many researchers have investigated various techniques to reduce the memory access cost. Most programs exhibit locality in their memory references: temporal locality means that a resource accessed at one point will be used again in the near future, and spatial locality means that a resource is more likely to be used if resources near it have just been accessed. The latest embedded processors usually adopt cache memory to exploit these two types of locality; cache memory can be accessed faster than off-chip memory, which reduces latency. In this paper we propose an enhanced dynamic allocation technique for structure-type data that eliminates unused memory space and reduces both the cache miss rate and the application execution time. The proposed approach aggregates fields from multiple dynamically allocated records and remaps them consecutively in memory. Experiments on the Olden benchmarks show a 13.9% drop in the L1 cache miss rate and a 15.9% drop in the L2 cache miss rate on average, compared to previously proposed techniques. We also find that execution time is reduced by 10.9% on average compared to the previous work.
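
As a rough illustration of the field-aggregation idea in the abstract, the C sketch below contrasts a conventional record layout, where rarely used fields sit between frequently used ones, with a layout in which the hot fields of many records are pooled contiguously so that each cache line carries several of them. The field names and pool size are invented; this is not the paper's allocator.

    /* Conventional interleaved layout versus a pooled hot-field layout. */
    #include <stdio.h>
    #include <stdlib.h>

    #define NRECORDS 1024

    /* Conventional layout: hot and cold fields interleaved per record. */
    struct node {
        int    key;            /* hot: read in every traversal */
        char   label[56];      /* cold: rarely touched         */
        struct node *next;     /* hot                          */
    };

    /* Remapped layout: hot fields pooled contiguously, cold fields elsewhere. */
    struct hot  { int key; int next; };     /* `next` becomes a pool index */
    struct cold { char label[56]; };

    static struct hot  hot_pool[NRECORDS];
    static struct cold cold_pool[NRECORDS];

    int main(void) {
        /* Build a simple chain 0 -> 1 -> ... in the pooled layout. */
        for (int i = 0; i < NRECORDS; i++) {
            hot_pool[i].key  = rand() % 100;
            hot_pool[i].next = (i + 1 < NRECORDS) ? i + 1 : -1;
            cold_pool[i].label[0] = '\0';
        }
        /* Traversal touches only the densely packed hot pool. */
        long sum = 0;
        for (int i = 0; i != -1; i = hot_pool[i].next)
            sum += hot_pool[i].key;
        printf("sizeof(struct node) = %zu, sizeof(struct hot) = %zu, sum = %ld\n",
               sizeof(struct node), sizeof(struct hot), sum);
        return 0;
    }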