• Title/Summary/Keyword: Memory Reference

Anticipatory I/O Management for Clustered Flash Translation Layer in NAND Flash Memory

  • Park, Kwang-Hee;Yang, Jun-Sik;Chang, Joon-Hyuk;Kim, Deok-Hwan
    • ETRI Journal
    • /
    • v.30 no.6
    • /
    • pp.790-798
    • /
    • 2008
  • Recently, NAND flash memory has emerged as a next-generation storage device because it offers several advantages, such as low power consumption and shock resistance. However, a flash translation layer (FTL) is required to mediate between NAND flash memory and conventional file systems because of the unique hardware characteristics of flash memory. This paper proposes a new clustered FTL (CFTL) that uses clustered hash tables and a two-level software cache technique. The CFTL can anticipate consecutive addresses from the host because the clustered hash table exploits the locality of reference in a large address space. It also adaptively translates logical addresses into physical addresses in the flash memory by using block mapping, page mapping, and a two-level software cache technique. Furthermore, anticipatory I/O management using continuity counters and a prefetch scheme enables fast address translation. Experimental results show that the proposed address translation mechanism for CFTL provides better performance in address translation and memory space usage than the well-known NAND FTL (NFTL) and adaptive FTL (AFTL).
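
To make the anticipatory translation concrete, the sketch below shows, in C, a mapping entry carrying a continuity counter that hints how many consecutive logical blocks can be prefetched on a hit. The field names, the direct-mapped hash, and the sizes are illustrative assumptions; the paper's clustered hash tables and two-level software cache are not reproduced here.

```c
#include <stdint.h>
#include <stdio.h>

#define HASH_BUCKETS 1024

/* Hypothetical mapping entry: a logical block number, its physical block,
 * and a continuity counter recording how many consecutive logical blocks
 * follow it in physical order (the hint used for anticipatory prefetch). */
struct cftl_entry {
    uint32_t logical_block;
    uint32_t physical_block;
    uint16_t continuity;
    int      valid;
};

static struct cftl_entry table[HASH_BUCKETS];

static unsigned bucket(uint32_t lbn) { return lbn % HASH_BUCKETS; }

/* Translate a logical block number; on a hit, also return the run length
 * so the caller can prefetch the mappings for lbn+1 .. lbn+run. */
static int cftl_lookup(uint32_t lbn, uint32_t *pbn, uint16_t *run)
{
    const struct cftl_entry *e = &table[bucket(lbn)];
    if (!e->valid || e->logical_block != lbn)
        return -1;   /* miss: fall back to the slower mapping path */
    *pbn = e->physical_block;
    *run = e->continuity;
    return 0;
}

int main(void)
{
    table[bucket(100)] = (struct cftl_entry){100, 7340, 3, 1};

    uint32_t pbn;
    uint16_t run;
    if (cftl_lookup(100, &pbn, &run) == 0)
        printf("LBN 100 -> PBN %u, prefetch next %u mappings\n",
               (unsigned)pbn, (unsigned)run);
    return 0;
}
```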

A Study on Flash Memory Management Techniques (플래시메모리의 관리 기법 연구)

  • Kim, Jeong-Joon;Chung, Sung-Taek
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.17 no.4
    • /
    • pp.143-148
    • /
    • 2017
  • Flash memory, which is light and resistant to external shock, is widely used as the storage of small electronic devices such as smartphones, digital cameras, and car black boxes. Because read and write operations run at different speeds and flash memory cannot be overwritten in place, an erase operation is required before data can be rewritten. Wear-leveling must also be considered, since the number of erase cycles of flash memory is limited. Many studies have been conducted on buffer replacement algorithms that reflect these characteristics of flash memory. To address the limitations of existing buffer replacement algorithms, this paper divides pages into six groups and, when selecting a victim page, considers both page reference frequency and page recency.
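
The sketch below illustrates, in C, victim-page selection that weighs reference frequency against recency, in the spirit of the scheme summarized above; the paper's six-group classification and exact scoring are not reproduced, and the weighting shown is an assumption.

```c
#include <stdint.h>
#include <stdio.h>

struct page {
    uint32_t id;
    uint32_t ref_count;   /* reference frequency */
    uint32_t last_access; /* logical clock of last reference (recency) */
    int      dirty;       /* dirty pages cost a flash write if evicted */
};

/* Pick the page with the best eviction score: old, rarely referenced,
 * and preferably clean (avoids an extra flash write on eviction). */
static int select_victim(const struct page *pages, int n, uint32_t now)
{
    int victim = 0;
    double best = -1.0;
    for (int i = 0; i < n; i++) {
        double age   = (double)(now - pages[i].last_access);
        double score = age / (1.0 + pages[i].ref_count)
                     * (pages[i].dirty ? 0.5 : 1.0);
        if (score > best) { best = score; victim = i; }
    }
    return victim;
}

int main(void)
{
    const struct page buf[3] = {
        {1, 10, 90, 0},   /* hot and recent            */
        {2,  1, 20, 0},   /* cold, old, clean -> victim */
        {3,  2, 50, 1},   /* old but dirty              */
    };
    printf("victim page id: %u\n", (unsigned)buf[select_victim(buf, 3, 100)].id);
    return 0;
}
```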

270 MHz Full HD H.264/AVC High Profile Encoder with Shared Multibank Memory-Based Fast Motion Estimation

  • Lee, Suk-Ho;Park, Seong-Mo;Park, Jong-Won
    • ETRI Journal
    • /
    • v.31 no.6
    • /
    • pp.784-794
    • /
    • 2009
  • We present a full HD (1080p) H.264/AVC High Profile hardware encoder based on fast motion estimation (ME). Most processing cycles are spent on ME, which requires external memory accesses to fetch samples and thus degrades encoder performance. A novel approach to fast ME using shared multibank memory solves these problems. The proposed pixel-subsampling ME algorithm is suitable for fast motion vector searches on high-resolution images. It achieves an 87.5% reduction in computational complexity compared with the full search algorithm in the JM reference software, while sustaining video quality without any conspicuous PSNR loss. The shared multibank memory utilization between the coarse ME and fine ME blocks is 93.6%, which saves external memory access cycles and speeds up ME. The algorithm can run at a 270 MHz clock for 30 frame/s real-time full HD encoding. The total gate count is 872k, and the internal SRAM size is 41.8 kB.
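
The following C fragment illustrates the basic pixel-subsampling idea: only every other pixel in each direction enters the SAD, cutting the per-candidate cost to roughly a quarter. The 2:1 pattern and block size are assumptions; the paper's actual subsampling pattern and search order are not shown.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define BLK 16   /* 16x16 macroblock */

/* SAD over a subsampled grid: skip every other row and column. */
static unsigned sad_subsampled(const uint8_t *cur, const uint8_t *ref, int stride)
{
    unsigned sad = 0;
    for (int y = 0; y < BLK; y += 2)
        for (int x = 0; x < BLK; x += 2)
            sad += abs((int)cur[y * stride + x] - (int)ref[y * stride + x]);
    return sad;
}

int main(void)
{
    uint8_t cur[BLK * BLK], ref[BLK * BLK];
    for (int i = 0; i < BLK * BLK; i++) {
        cur[i] = (uint8_t)(i & 0xff);
        ref[i] = (uint8_t)((i + 3) & 0xff);
    }
    printf("subsampled SAD = %u\n", sad_subsampled(cur, ref, BLK));
    return 0;
}
```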

A Genetic Algorithm for Directed Graph-based Supply Network Planning in Memory Module Industry

  • Wang, Li-Chih;Cheng, Chen-Yang;Huang, Li-Pin
    • Industrial Engineering and Management Systems
    • /
    • v.9 no.3
    • /
    • pp.227-241
    • /
    • 2010
  • A memory module industry's supply chain usually consists of multiple manufacturing sites and multiple distribution centers. To fulfill the variety of demands from downstream customers, production planners must not only decide the order allocation among multiple manufacturing sites but also account, largely based on human experience, for industry-specific characteristics and supply chain constraints such as material substitution relationships, capacity, transportation lead times, fluctuating component purchasing prices, and the available supply quantities of critical materials (e.g., DRAM, chips). In this research, a directed graph-based supply network planning (DGSNP) model is developed for the memory module industry. In addition to multi-site order allocation, the DGSNP model explicitly considers production planning for each manufacturing site and purchasing planning from each supplier. First, the research formulates the supply network's structure and constraints in directed-graph form. Then, a proposed genetic algorithm (GA) solves the matrix form transformed from the directed-graph model. Finally, the resulting matrix, with its calculated maximum profit, is transformed back into a directed-graph-based supply network plan as a reference for planners. The results of the illustrative experiments show that the DGSNP model, compared to current memory module industry practices, determines a convincing supply network planning solution, as measured by total profit.
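
As a rough illustration of the solution procedure, the C sketch below runs a generic genetic-algorithm loop (tournament selection, one-point crossover, mutation) over a chromosome of allocation quantities. The toy profit function, encoding, and GA parameters are assumptions; the directed-graph-to-matrix transformation of the DGSNP model is not shown.

```c
#include <stdio.h>
#include <stdlib.h>

#define POP   20
#define GENES 8     /* e.g., quantity of each order allocated to one site (0..100) */
#define GENS  500

/* Toy profit model: each gene has a different margin plus a quadratic
 * capacity penalty. A real fitness would come from the directed-graph model. */
static double fitness(const int *g)
{
    double profit = 0.0;
    for (int i = 0; i < GENES; i++)
        profit += g[i] * (i % 2 ? 1.3 : 0.9) - 0.01 * g[i] * g[i];
    return profit;
}

int main(void)
{
    int pop[POP][GENES];
    srand(42);
    for (int p = 0; p < POP; p++)
        for (int i = 0; i < GENES; i++)
            pop[p][i] = rand() % 101;

    for (int gen = 0; gen < GENS; gen++) {
        /* Tournament selection of two parents. */
        int a = rand() % POP, b = rand() % POP;
        int p1 = fitness(pop[a]) > fitness(pop[b]) ? a : b;
        a = rand() % POP; b = rand() % POP;
        int p2 = fitness(pop[a]) > fitness(pop[b]) ? a : b;

        /* One-point crossover and occasional mutation. */
        int child[GENES];
        int cut = rand() % GENES;
        for (int i = 0; i < GENES; i++)
            child[i] = (i < cut) ? pop[p1][i] : pop[p2][i];
        if (rand() % 10 == 0)
            child[rand() % GENES] = rand() % 101;

        /* Steady-state replacement: the child displaces the worst individual. */
        int worst = 0;
        for (int p = 1; p < POP; p++)
            if (fitness(pop[p]) < fitness(pop[worst])) worst = p;
        if (fitness(child) > fitness(pop[worst]))
            for (int i = 0; i < GENES; i++) pop[worst][i] = child[i];
    }

    int best = 0;
    for (int p = 1; p < POP; p++)
        if (fitness(pop[p]) > fitness(pop[best])) best = p;
    printf("best profit found: %.2f\n", fitness(pop[best]));
    return 0;
}
```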

A Hardware Cache Prefetching Scheme for Multimedia Data with Intermittently Irregular Strides (단속적(斷續的) 불규칙 주소간격을 갖는 멀티미디어 데이타를 위한 하드웨어 캐시 선인출 방법)

  • Chon Young-Suk;Moon Hyun-Ju;Jeon Joongnam;Kim Sukil
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.31 no.11
    • /
    • pp.658-672
    • /
    • 2004
  • Multimedia applications must process huge amounts of data at high speed in real time. Memory reference instructions such as loads and stores are the main factor limiting high-speed execution on a processor. To enhance memory reference speed, cache prefetch schemes reduce the cache miss ratio and the total execution time by fetching data that is expected to be referenced in the future into the cache in advance. In this study, we present an advanced data cache prefetching scheme that improves the conventional RPT (reference prediction table) based scheme. It considers the cache line size when calculating the address stride referenced by the same instruction, and enhances the prefetching algorithm so that the effect of prefetching is maintained even if an irregular address stride is inserted into a series of uniform strides. According to experimental results on multimedia benchmark programs, the cache miss ratio improves by 29% on average compared to the conventional RPT scheme, while bus usage increases by only a small amount (0.03%).
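
The C sketch below captures the gist of the improvement: strides are computed at cache-line granularity, and a single irregular stride does not discard an established pattern. The table size, 64-byte line size, and one-miss tolerance are assumptions rather than the paper's parameters.

```c
#include <stdint.h>
#include <stdio.h>

#define LINE_SHIFT 6   /* assume 64-byte cache lines */
#define RPT_SIZE   64

struct rpt_entry {
    uintptr_t pc;        /* load/store instruction address (tag)         */
    uintptr_t last_line; /* last referenced address, in cache-line units  */
    intptr_t  stride;    /* established stride, in cache-line units       */
    int       misses;    /* irregular strides seen since the last match   */
};

static struct rpt_entry rpt[RPT_SIZE];

/* Returns the byte address of the line to prefetch, or 0 if there is no
 * confident prediction for this instruction yet. */
static uintptr_t rpt_access(uintptr_t pc, uintptr_t addr)
{
    struct rpt_entry *e = &rpt[(pc >> 2) % RPT_SIZE];
    uintptr_t line = addr >> LINE_SHIFT;

    if (e->pc != pc) {                       /* new instruction: allocate entry */
        *e = (struct rpt_entry){pc, line, 0, 0};
        return 0;
    }

    intptr_t observed = (intptr_t)line - (intptr_t)e->last_line;

    if (observed == e->stride && e->stride != 0) {
        e->misses = 0;                       /* uniform pattern continues */
        e->last_line = line;
        return (line + e->stride) << LINE_SHIFT;
    }

    if (e->stride != 0 && e->misses == 0) {
        /* Single irregular stride: keep the established pattern alive and
         * keep prefetching along it instead of resetting the entry. */
        e->misses = 1;
        return (e->last_line + 2 * e->stride) << LINE_SHIFT;
    }

    e->stride = observed;                    /* no pattern yet, or retrain */
    e->misses = 0;
    e->last_line = line;
    return 0;
}

int main(void)
{
    const uintptr_t pc = 0x400123;
    const uintptr_t addrs[] = {0x1000, 0x1040, 0x1080, 0x9000 /* irregular */, 0x10c0};
    for (int i = 0; i < 5; i++)
        printf("access 0x%lx -> prefetch 0x%lx\n",
               (unsigned long)addrs[i], (unsigned long)rpt_access(pc, addrs[i]));
    return 0;
}
```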

Design of a Fast Multi-Reference Frame Integer Motion Estimator for H.264/AVC

  • Byun, Juwon;Kim, Jaeseok
    • JSTS:Journal of Semiconductor Technology and Science
    • /
    • v.13 no.5
    • /
    • pp.430-442
    • /
    • 2013
  • This paper presents a fast multi-reference frame integer motion estimator for H.264/AVC. The proposed system builds on a previously proposed fast multi-reference frame algorithm, which executes full-search-area motion estimation in reference frames 0 and 1 and then shrinks the search areas in reference frames 2, 3, and 4 using the linear relationship between the motion vector and the distance from the current frame to each reference frame. For hardware implementation, the modified algorithm optimizes the search area, reduces the overlapping search area, and modifies a division equation. Because the search area is reduced, the amount of computation drops by 58.7%. In experimental results, the modified algorithm shows a bit-rate increase of only 0.36% compared with the standard five-reference-frame search. A pipeline structure and a memory controller are also adopted for real-time video encoding. The proposed system is implemented in 0.13 um CMOS technology, with a gate count of 1089K and 6.50 KB of internal SRAM. It can encode full HD video (1920×1080p at 30 Hz) in real time at a 135 MHz clock speed with 5 reference frames.
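
The C fragment below illustrates the search-window reduction: the motion vector found against the nearest reference frame is scaled linearly with temporal distance to center a small window in the farther frames. The scaling form, window size, and rounding are assumptions, not the paper's exact equations.

```c
#include <stdio.h>

struct mv { int x, y; };

/* Center of the reduced search window in a farther reference frame,
 * obtained by scaling the MV found against reference frame 0 with the
 * temporal distance (frame 0 at distance 1, frame k at distance k+1). */
static struct mv scale_mv(struct mv mv_ref0, int distance)
{
    struct mv c = { mv_ref0.x * distance, mv_ref0.y * distance };
    return c;
}

int main(void)
{
    const struct mv mv0 = { 3, -2 };   /* MV found by full search in ref frame 0 */
    for (int ref = 2; ref <= 4; ref++) {
        struct mv c = scale_mv(mv0, ref + 1);
        printf("ref frame %d: search a +/-4 pel window around (%d, %d)\n",
               ref, c.x, c.y);
    }
    return 0;
}
```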

Data Protocol and Air Interface Communication Parameters for Radio Frequency Identification (RFID의 프로토콜 및 인터페이스 파라미터)

  • Choi, Sung-Woon
    • Proceedings of the Safety Management and Science Conference
    • /
    • 2007.11a
    • /
    • pp.323-328
    • /
    • 2007
  • This paper introduces radio frequency identification (RFID) information technologies for item management, including the application interface of the data protocol, data encoding rules and logical memory functions for the data protocol, and unique identification for RF tags. This study presents a reference architecture, definitions of the parameters to be standardized, various parameters for air interface communications, and application requirement profiles.

Management Technique of Energy-Efficient Cache and Memory for Mobile IoT Devices (모바일 사물인터넷 디바이스를 위한 에너지 효율적인 캐시 및 메모리 관리 기법)

  • Bahn, Hyokyung
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.21 no.2
    • /
    • pp.27-32
    • /
    • 2021
  • This paper proposes an energy-efficient cache and memory management scheme for next-generation IoT devices. The proposed scheme adopts low-power phase-change memory (PCM) as the main memory of IoT devices and aims at minimizing the write traffic to PCM, which is vulnerable to write operations. Specifically, when a cache block of the last-level cache memory is flushed to main memory, the cache block that causes fewer writes to PCM is preferentially replaced, by tracking the modifications of each cache line that constitutes the cache block. In addition, by considering the reference bit of the cache block and the dirty bits of its cache lines, the scheme reduces energy consumption without degrading memory system performance. Simulations using SPEC benchmarks show that the proposed scheme reduces the write traffic to PCM by 34.6% on average and the power consumption by 28.9%, without memory performance degradation.
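
The C sketch below illustrates the eviction idea: each last-level-cache block keeps per-line dirty bits, and the victim is the block whose eviction would cause the fewest PCM line writes, with recently referenced blocks protected. The set geometry, cost weighting, and tie-breaking are assumptions.

```c
#include <stdio.h>

#define LINES_PER_BLOCK 4

struct llc_block {
    int valid;
    int referenced;                   /* reference bit (recency hint) */
    int line_dirty[LINES_PER_BLOCK];  /* per-line dirty bits          */
};

static int dirty_lines(const struct llc_block *b)
{
    int n = 0;
    for (int i = 0; i < LINES_PER_BLOCK; i++)
        n += b->line_dirty[i];
    return n;
}

/* Pick a victim among the n ways of a set: unreferenced blocks first,
 * then the block with the fewest dirty lines (fewest PCM line writes). */
static int select_victim(const struct llc_block *set, int n)
{
    int victim = -1, best_cost = 1 << 30;
    for (int i = 0; i < n; i++) {
        if (!set[i].valid)
            return i;                 /* empty way: no eviction cost */
        int cost = dirty_lines(&set[i])
                 + (set[i].referenced ? LINES_PER_BLOCK + 1 : 0);
        if (cost < best_cost) { best_cost = cost; victim = i; }
    }
    return victim;
}

int main(void)
{
    const struct llc_block set[4] = {
        {1, 1, {1, 1, 1, 1}},   /* referenced, fully dirty     */
        {1, 0, {1, 0, 1, 0}},   /* unreferenced, 2 dirty lines */
        {1, 0, {1, 0, 0, 0}},   /* unreferenced, 1 dirty line  */
        {1, 1, {0, 0, 0, 0}},   /* referenced, clean           */
    };
    printf("victim way: %d\n", select_victim(set, 4));  /* prints 2 */
    return 0;
}
```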

A Design of CMOS Subbandgap Reference using Pseudo-Resistors (가상저항을 이용한 CMOS Subbandgap 기준전압회로 설계)

  • Lee, Sang-Ju;Lim, Shin-Il
    • Proceedings of the KIEE Conference
    • /
    • 2006.10c
    • /
    • pp.609-611
    • /
    • 2006
  • This paper describes a CMOS sub-bandgap reference using pseudo-resistors, which can be widely used in flash memory, DRAM, ADC, and power management circuits. The bandgap reference circuit operates in weak inversion to reduce power consumption and uses pseudo-resistors instead of large resistors to reduce chip area. It is implemented in a standard 0.35 um 1P4M CMOS process. The temperature coefficient is 5 ppm/°C from 40°C to 100°C, and the minimum power supply voltage is 1.2 V. The core area is 1177 um × 617 um. The total current is below 2.8 uA, and the output voltage is 0.598 V at 27°C.
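
For context, the relation below is the generic first-order (sub-)bandgap principle, not this paper's specific pseudo-resistor topology: a CTAT base-emitter voltage is summed with a scaled PTAT thermal-voltage term so their temperature slopes cancel, and an attenuation factor brings the classical ≈1.2 V output down to a sub-1 V level such as the 0.598 V reported here.

```latex
% Generic first-order (sub-)bandgap relation -- illustrative only:
V_{\mathrm{REF}} \;\approx\; \alpha \left( V_{BE} + K \, V_T \ln N \right),
\qquad V_T = \frac{kT}{q}, \qquad \alpha < 1
```

V_BE falls with temperature (CTAT) while V_T ln N rises (PTAT); K is chosen so the two slopes cancel to first order, which is what yields a temperature coefficient in the ppm/°C range.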

Design of Memory-Access-Efficient H.264 Intra Predictor Integrated with Motion Compensator (H.264 복호기에서 움직임 보상기와 연계하여 메모리 접근면에서 효율적인 인트라 예측기 설계)

  • Park, Jong-Sik;Lee, Seong-Soo
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.45 no.6
    • /
    • pp.37-42
    • /
    • 2008
  • In an H.264/AVC decoder, the intra predictor, motion compensator, and deblocking filter all need to read reference images from external frame memory during decoding. These very frequent external memory accesses lower system operation speed and increase power consumption. This paper proposes an intra predictor integrated with the motion compensator that requires no separate external frame memory access. It reduces power and minimizes memory bandwidth by exploiting data reuse of common and repetitive pixels. The proposed intra predictor achieves a cycle time reduction of 45% to 75% compared with conventional intra predictors.
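
The toy C sketch below shows the data-reuse principle: intra prediction reads its neighbouring reference pixels from small on-chip buffers filled when the neighbouring blocks were reconstructed, rather than re-fetching them from external frame memory. Only 4x4 DC prediction is shown, and the buffer organisation and the datapath shared with the motion compensator are assumptions.

```c
#include <stdint.h>
#include <stdio.h>

#define FRAME_W 64                 /* toy frame width in pixels */

static uint8_t top_line[FRAME_W];  /* last reconstructed row of the blocks above   */
static uint8_t left_col[4];        /* right edge of the block to the left          */

/* 4x4 DC prediction: average of the 4 top and 4 left neighbours,
 * all of which come from on-chip buffers rather than external memory. */
static uint8_t intra4x4_dc(int block_x)
{
    unsigned sum = 0;
    for (int i = 0; i < 4; i++)
        sum += top_line[block_x + i] + left_col[i];
    return (uint8_t)((sum + 4) >> 3);
}

int main(void)
{
    for (int i = 0; i < 4; i++) {
        top_line[8 + i] = (uint8_t)(100 + i);  /* filled when the block above was decoded */
        left_col[i]     = 120;                 /* filled when the block to the left was decoded */
    }
    printf("DC predictor for block at x=8: %u\n", intra4x4_dc(8));
    return 0;
}
```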