• Title/Summary/Keyword: associative processor


Design of a scalable general-purpose parallel associative processor using content-addressable memory (Content-Addressable Memory를 이용한 확장 가능한 범용 병렬 Associative Processor 설계)

  • Park, Tae-Geun
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.43 no.2 s.344
    • /
    • pp.51-59
    • /
    • 2006
  • The von Neumann architecture suffers from the bottleneck at the interface between the central processing unit and the memory, known as the 'von Neumann bottleneck'. In this paper, we propose a scalable general-purpose associative processor (AP) based on content-addressable memory (CAM) that alleviates this problem and is well suited to search-oriented applications. We propose an efficient instruction set and a structurally scalable design that can be extended to larger applications. We define twelve instructions and provide several reduced instructions, which execute two instructions in a single instruction cycle, to speed up processing. The proposed AP operates in a bit-serial, word-parallel fashion and can be regarded as a 32-bit general-purpose parallel processor with a massively parallel SIMD structure. We design and simulate maximum/minimum search, greater-than/less-than search, and parallel addition to verify the proposed architecture. The algorithms execute in constant time O(k), where k is the bit width of the data, regardless of the number of input data.
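
As an illustration of the bit-serial, word-parallel operation described in the abstract, the following sketch models a CAM-style maximum search in software. It is a hypothetical rendering, not the paper's instruction set: the responder set is narrowed one bit plane at a time, so the number of passes depends on the word width k rather than on the number of stored words, matching the O(k) behaviour cited above.

```python
# Minimal software model of a bit-serial, word-parallel maximum search.
# Hypothetical illustration only; names and structure are assumptions,
# not the instruction set proposed in the paper.

def cam_max_search(words, k=32):
    """Return the set of row indices holding the maximum k-bit value.

    The outer loop runs k times (one pass per bit plane, MSB first),
    independent of len(words) -- the O(k) behaviour cited in the abstract.
    """
    candidates = set(range(len(words)))          # all rows start as responders
    for bit in reversed(range(k)):               # MSB -> LSB
        mask = 1 << bit
        hits = {i for i in candidates if words[i] & mask}
        if hits:                                 # keep rows with a 1 in this plane
            candidates = hits
    return candidates

if __name__ == "__main__":
    data = [0x1A, 0xF3, 0x07, 0xF3, 0x42]
    rows = cam_max_search(data, k=8)
    print(sorted(rows), [hex(data[i]) for i in rows])   # [1, 3] ['0xf3', '0xf3']
```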

Architecture of a scalable general-purpose associative processor and its applications (확장 가능한 범용 Associative Processor 구조 및 응용)

  • Yun, Jae-Bok;Kim, Ju-Young;Kim, Jin-Wook;Park, Tae-Geun
    • Proceedings of the IEEK Conference
    • /
    • 2005.11a
    • /
    • pp.1141-1144
    • /
    • 2005
  • A general-purpose computer exhibits the 'Von Neumann Bottleneck' between the central processing unit and the memory. In this paper, we propose a scalable general-purpose Associative Processor (AP) architecture that alleviates this problem and shows excellent performance in search-oriented applications. We propose an instruction set that carries out associative computing efficiently, and the architecture is designed to be scalable so that it can also be applied to diverse, large-scale applications, giving it a flexible structure. Twelve instructions are defined; the instruction set is organized so that programs execute efficiently, and processing time is reduced by implementing consecutive instructions as a single instruction. The proposed processor operates in a bit-serial, word-parallel fashion and works as a 32-bit general-purpose parallel processor with a massively parallel SIMD structure. For comprehensive verification, we verified not only individual instructions but also basic parallel algorithms such as maximum/minimum search, greater-than/less-than search, and parallel addition; the algorithms have a constant complexity O(k), independent of the number of data items, with the number of iterations equal to the number of bits in the data.
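
Another of the verified primitives, parallel addition, can be modelled the same way: each word adds a broadcast operand one bit plane per iteration, so the iteration count equals the data width, as the abstract states. The sketch below is an illustrative software model, not the proposed hardware.

```python
# Hypothetical bit-serial, word-parallel addition model: every row adds the
# same broadcast operand, one bit plane per iteration (iterations = bit width).

def cam_parallel_add(words, operand, k=32):
    carries = [0] * len(words)
    results = [0] * len(words)
    for bit in range(k):                          # LSB -> MSB, k iterations
        b = (operand >> bit) & 1
        for i, w in enumerate(words):             # conceptually done in parallel
            a = (w >> bit) & 1
            s = a ^ b ^ carries[i]
            carries[i] = (a & b) | (a & carries[i]) | (b & carries[i])
            results[i] |= s << bit
    return results

if __name__ == "__main__":
    print(cam_parallel_add([3, 10, 255], 5, k=8))  # [8, 15, 4] (255+5 wraps mod 256)
```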

Instruction Flow based Early Way Determination Technique for Low-power L1 Instruction Cache

  • Kim, Gwang Bok;Kim, Jong Myon;Kim, Cheol Hong
    • Journal of the Korea Society of Computer and Information
    • /
    • v.21 no.9
    • /
    • pp.1-9
    • /
    • 2016
  • Recent embedded processors employ a set-associative L1 instruction cache to improve performance. The energy consumed in the set-associative L1 instruction cache accounts for a considerable portion of the embedded processor's total energy. When an instruction is requested by the processor, all ways in the set-associative instruction cache are accessed in parallel. In this paper, we propose a technique that effectively reduces the energy consumption of the set-associative L1 instruction cache by accessing only one way. A gshare branch predictor is employed to predict the instruction flow and determine the way from which to fetch the instruction. When the branch is predicted not-taken, the next instruction in sequential order can be fetched from the instruction cache by accessing only one way. According to our simulations with the SPEC2006 benchmarks, the proposed technique requires negligible hardware overhead and shows a 20% energy reduction on average in a 4-way L1 instruction cache.
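
For context, a gshare predictor indexes a table of 2-bit counters with the XOR of the branch PC and a global history register. The sketch below shows that indexing and update in isolation; the table size and shift amounts are assumptions for illustration, not the configuration used in the paper.

```python
# Rough gshare model: 2-bit saturating counters indexed by PC XOR global history.
# Parameters (12-bit history, table size) are illustrative assumptions.

class Gshare:
    def __init__(self, bits=12):
        self.bits = bits
        self.table = [1] * (1 << bits)   # 2-bit counters, start weakly not-taken
        self.history = 0

    def _index(self, pc):
        return ((pc >> 2) ^ self.history) & ((1 << self.bits) - 1)

    def predict(self, pc):
        return self.table[self._index(pc)] >= 2   # True = predicted taken

    def update(self, pc, taken):
        i = self._index(pc)
        self.table[i] = min(3, self.table[i] + 1) if taken else max(0, self.table[i] - 1)
        self.history = ((self.history << 1) | int(taken)) & ((1 << self.bits) - 1)

# If predict() returns False, the fetch falls through sequentially, so only the
# cache way that holds the next sequential line needs to be read.
```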

Associative Memories for 3-D Object (Aircraft) Identification (연상 메모리를 사용한 3차원 물체(항공기)인식)

  • 소성일
    • Information and Communications Magazine
    • /
    • v.7 no.3
    • /
    • pp.27-34
    • /
    • 1990
  • The (L, ψ) feature description of binary boundary aircraft images is introduced for 3-D object (aircraft) identification. Three types of associative matrix memories are employed and tested for their classification performance. The fast association involved in these memories can be implemented using a parallel optical matrix-vector operation. Two of the associative memories are based on pseudoinverse solutions, and the third is introduced as a parallel version of a nearest-neighbor classifier. Detailed simulation results for each associative processor are provided.
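
A pseudoinverse associative memory stores the pairs (x_i, y_i) in a single matrix M = Y X⁺ and recalls by one matrix-vector product, which is the operation an optical matrix-vector multiplier would carry out. The numpy sketch below is a generic illustration, not a reconstruction of the three memories compared in the paper.

```python
# Minimal pseudoinverse associative memory: M = Y @ pinv(X) stores the pairs,
# recall is a single matrix-vector product (the step an optical matrix-vector
# multiplier would perform). Illustrative only.
import numpy as np

def train(X, Y):
    """X: (d, n) stored feature vectors, Y: (c, n) class codes."""
    return Y @ np.linalg.pinv(X)

def recall(M, x):
    return M @ x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((16, 3))      # 3 stored prototypes, 16-dim features
    Y = np.eye(3)                         # one-hot class codes
    M = train(X, Y)
    noisy = X[:, 1] + 0.05 * rng.standard_normal(16)
    print(np.argmax(recall(M, noisy)))    # expected: 1
```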

A Parallel Algorithm For Rectilinear Steiner Tree Using Associative Processor (연합 처리기를 이용한 직교선형 스타이너 트리의 병렬 알고리즘)

  • Taegeun Park
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.32B no.8
    • /
    • pp.1057-1063
    • /
    • 1995
  • This paper describes an approach for constructing a Rectilinear Steiner Tree (RST) derivable from a Minimum Spanning Tree (MST), using an Associative Processor (AP). We propose a fast parallel algorithm using the AP's basic algorithms, which can be realized by the processing capability of rudimentary logic and the selective matching capability of Content-Addressable Memory (CAM). The main idea behind the proposed algorithm is to maximize the overlaps between consecutive edges in the MST, thus minimizing the cost of the RST. An efficient parallel linear algorithm with O(n) complexity for constructing an RST is proposed, built on an algorithm for finding an MST, where n is the number of nodes. A node-insertion method is introduced to allow Z-type layouts. The routing process, which depends only on neighboring edges, and the no-rerouting strategy both help to speed up finding an RST.
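
The starting point of the method, an MST under the rectilinear (Manhattan) metric, is easy to prototype. The sketch below builds one with Prim's algorithm and lists the two candidate corner points of each edge's L-shaped layout; the overlap-maximizing choice between those corners and the Z-type node insertion are the paper's contribution and are not reproduced here.

```python
# Sketch: Prim's MST under the Manhattan metric, plus the two candidate corner
# points of each edge's L-shaped rectilinear layout. The overlap-driven choice
# between corners (the core of the paper) is not modelled here.

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def prim_mst(points):
    in_tree = {0}
    edges = []
    while len(in_tree) < len(points):
        u, v = min(((i, j) for i in in_tree for j in range(len(points))
                    if j not in in_tree),
                   key=lambda e: manhattan(points[e[0]], points[e[1]]))
        edges.append((u, v))
        in_tree.add(v)
    return edges

def l_shape_corners(p, q):
    """The two possible bend points when routing p -> q with one corner."""
    return (p[0], q[1]), (q[0], p[1])

if __name__ == "__main__":
    pts = [(0, 0), (4, 1), (1, 5), (5, 4)]
    for u, v in prim_mst(pts):
        print(pts[u], pts[v], l_shape_corners(pts[u], pts[v]))
```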

Cache and Pipeline Architecture Improvement and Low Power Design of Embedded Processor (임베디드 프로세서의 캐시와 파이프라인 구조개선 및 저전력 설계)

  • Jung, Hong-Kyun;Ryoo, Kwang-Ki
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2008.10a
    • /
    • pp.289-292
    • /
    • 2008
  • This paper presents a branch prediction algorithm and a 4-way set-associative cache for performance improvement of the OpenRISC processor, together with a clock-gating algorithm using ODC (Observability Don't Care) operations for low power. The branch prediction algorithm uses a BTB (Branch Target Buffer), and the 4-way set-associative cache has a lower miss rate than a direct-mapped cache. The clock-gating algorithm reduces dynamic power consumption. As a result of performance and dynamic power estimation, the performance of the OpenRISC processor using the proposed algorithms is improved by about 8.9%, and the dynamic power of the processor, using a Samsung 0.18 μm technology library, is reduced by 13.9%.
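
A BTB caches the targets of recently taken branches so that the fetch stage can redirect without waiting for decode. The minimal table model below is a generic illustration with assumed sizes, not the OpenRISC modification itself.

```python
# Minimal BTB model: a small direct-mapped table of (tag, target) entries.
# Sizes and indexing are illustrative assumptions.

class BTB:
    def __init__(self, entries=64):
        self.entries = entries
        self.table = [None] * entries          # each slot: (tag, target) or None

    def _slot(self, pc):
        return (pc >> 2) % self.entries, pc >> 2

    def lookup(self, pc):
        idx, tag = self._slot(pc)
        entry = self.table[idx]
        if entry and entry[0] == tag:
            return entry[1]                    # predicted taken -> redirect fetch
        return None                            # predict fall-through

    def update(self, pc, target, taken):
        idx, tag = self._slot(pc)
        if taken:
            self.table[idx] = (tag, target)
        elif self.table[idx] and self.table[idx][0] == tag:
            self.table[idx] = None             # simple invalidate on not-taken
```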

Design of an Asynchronous Data Cache with FIFO Buffer for Write Back Mode (Write Back 모드용 FIFO 버퍼 기능을 갖는 비동기식 데이터 캐시)

  • Park, Jong-Min;Kim, Seok-Man;Oh, Myeong-Hoon;Cho, Kyoung-Rok
    • The Journal of the Korea Contents Association
    • /
    • v.10 no.6
    • /
    • pp.72-79
    • /
    • 2010
  • In this paper, we propose a data cache architecture with a write buffer for a 32-bit asynchronous embedded processor. The data cache consists of CAM and data memory. It accelerates the data load cycle between the processor and the main memory, which improves processor performance. The proposed data cache has 8 KB of cache memory. The cache uses 4-way set-associative mapping with a line size of 4 words (16 bytes) and a pseudo-LRU replacement algorithm for data replacement in the memory. A dirty register and a write buffer are used for the write policy of the cache. The designed data cache is synthesized to a gate-level design using a 0.13 μm process. Its average hit rate is 94%, and the system performance is improved by 46.53%. The proposed data cache with a write buffer is well suited to a 32-bit asynchronous processor.
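
Tree-based pseudo-LRU for a 4-way set needs only three bits per set: one bit selects the older half and one bit per half selects the older way within it. The sketch below uses the common formulation of those update and victim rules; it is not the circuit implemented in the paper.

```python
# Tree pseudo-LRU for one 4-way set: 3 bits per set. b0 points at the older
# half, b1/b2 point at the older way within each half. Generic formulation,
# not the exact circuit used in the paper.

class TreePLRU4:
    def __init__(self):
        self.b0 = self.b1 = self.b2 = 0

    def touch(self, way):
        """On a hit or a fill, point the bits away from the accessed way."""
        if way < 2:
            self.b0 = 1                       # right half becomes the victim side
            self.b1 = 1 - (way & 1)           # the other way in the left half
        else:
            self.b0 = 0                       # left half becomes the victim side
            self.b2 = 1 - (way & 1)

    def victim(self):
        """Follow the bits to the pseudo-least-recently-used way."""
        if self.b0 == 0:
            return 0 if self.b1 == 0 else 1
        return 2 if self.b2 == 0 else 3

if __name__ == "__main__":
    plru = TreePLRU4()
    for w in (0, 2, 1):
        plru.touch(w)
    print(plru.victim())                      # 3: the way not touched recently
```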

Parallel algorithm of global routing for general purpose associative processing system (범용 연합 처리 시스템에서의 전역배선 병렬화 기법)

  • Park, Taegeun
    • Journal of the Korean Institute of Telematics and Electronics A
    • /
    • v.32A no.4
    • /
    • pp.93-102
    • /
    • 1995
  • This paper introduces a general-purpose Associative Processor (AP) that is very efficient for search-oriented applications. The proposed architecture consists of three main functional blocks: the Content-Addressable Memory (CAM) array, the row logic, and the control section. The proposed AP is a Single-Instruction, Multiple-Data (SIMD) device based on a CAM core and an array of high-speed processors. As an application of the proposed hardware, we present a parallel algorithm that solves the global routing problem in the layout process, utilizing the processing capability of rudimentary logic and the selective matching and writing capability of CAMs, along with basic algorithms such as minimum (maximum) search, less-than (greater-than) search, and parallel arithmetic. We focus on simultaneously minimizing the density of the channels and the wire length by seeking a less crowded channel with a shorter wire distance. We present an efficient technique for mapping the problem onto the CAM structure. Experimental results on difficult examples, on randomly generated data, and on benchmark problems from MCNC are included.
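
The channel-selection step, choosing a channel whose density is still under the limit and whose added wire length is smallest, maps naturally onto the CAM primitives mentioned above (a less-than search followed by a minimum search). The plain-Python rendering below shows only that selection rule, not the CAM mapping.

```python
# Plain rendering of the selection rule: among channels whose current density
# is under the limit, take the one with the smallest added wire length.
# Illustrative only; the paper maps this onto CAM less-than / minimum searches.

def pick_channel(channels, capacity):
    """channels: list of (channel_id, current_density, added_wire_length)."""
    feasible = [c for c in channels if c[1] < capacity]       # less-than search
    if not feasible:
        return None
    return min(feasible, key=lambda c: c[2])[0]               # minimum search

if __name__ == "__main__":
    chans = [("c0", 7, 12), ("c1", 3, 15), ("c2", 4, 9), ("c3", 8, 5)]
    print(pick_channel(chans, capacity=8))    # 'c2' (density ok, shortest detour)
```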

A Study on the Design of Format Converter for Pixel-Parallel Image Processing (픽셀-병렬 영상처리에 있어서 포맷 컨버터 설계에 관한 연구)

  • 김현기;김현호;하기종;최영규;류기환;이천희
    • Proceedings of the IEEK Conference
    • /
    • 2001.06b
    • /
    • pp.269-272
    • /
    • 2001
  • In this paper, we propose the design and implementation of a format converter for real-time image processing. The design is based on a large processor-per-pixel array realized with integrated-circuit technology, in which two types of integrated structure can be distinguished: an associative parallel processor and parallel processing with DRAM cells. The layout pitch of the one-bit-wide logic matches the memory cell pitch so that high-density PEs can be arrayed in the integrated structure. The format converter design implements the control path efficiently and can exploit advanced technology without complicated controller hardware. Sequences of array instructions are generated by the host computer before processing starts, and the instructions are stored in the unit controller. The host computer then starts the pixel-parallel operation from the stored instructions.
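
The control flow described, in which the host pre-generates the array instruction sequence and the unit controller stores it and then drives every PE with it, can be outlined as follows. The instruction names and operations are invented purely for illustration.

```python
# Abstract outline of the described control flow: the host prepares a SIMD
# instruction sequence, the unit controller stores it and replays it over all
# pixels. Instruction names are invented for illustration.

def host_generate_program():
    return [("THRESHOLD", 128), ("INVERT", None)]   # prepared before processing

class UnitController:
    def __init__(self, program):
        self.program = list(program)                # stored instruction sequence

    def run(self, pixels):                          # same op applied to every PE
        for op, arg in self.program:
            if op == "THRESHOLD":
                pixels = [255 if p >= arg else 0 for p in pixels]
            elif op == "INVERT":
                pixels = [255 - p for p in pixels]
        return pixels

if __name__ == "__main__":
    print(UnitController(host_generate_program()).run([10, 200, 130]))  # [255, 0, 0]
```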

Performance Analysis of Multicore Processor Architectures Based On Cache Size Effects (캐쉬 용량 효과에 대한 멀티코어 프로세서의 성능 연구)

  • Lee, Jongbok
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.12 no.6
    • /
    • pp.175-180
    • /
    • 2012
  • In order to overcome the complexity and performance-limit problems of superscalar processors, the multicore architecture has recently become prevalent. The configuration and size of the instruction and data caches greatly affect the performance of multicore processors. Using the SPEC 2000 benchmarks as input, trace-driven simulation has been performed extensively for 2-core to 16-core architectures with different cache sizes. As a result, a 2-way set-associative instruction and data cache with a size of 64 KB brought the best cost-effective performance.
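
Sweeping cache size and associativity over an address trace is straightforward to prototype. The skeleton below counts misses for an LRU set-associative cache at a few sizes; it is a generic simulator sketch, not the trace-driven setup or SPEC 2000 workloads used in the paper.

```python
# Generic set-associative LRU cache simulator skeleton for sweeping size and
# associativity over an address trace (not the paper's SPEC 2000 setup).
from collections import OrderedDict

def misses(trace, size_bytes, ways, line=64):
    sets = size_bytes // (line * ways)
    cache = [OrderedDict() for _ in range(sets)]
    miss = 0
    for addr in trace:
        block = addr // line
        s = cache[block % sets]
        if block in s:
            s.move_to_end(block)                  # LRU update on hit
        else:
            miss += 1
            if len(s) >= ways:
                s.popitem(last=False)             # evict least recently used
            s[block] = True
    return miss

if __name__ == "__main__":
    trace = [i * 64 for i in range(4096)] * 4     # toy streaming trace
    for kb in (16, 32, 64):
        print(kb, "KB:", misses(trace, kb * 1024, ways=2))
```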