• Title/Summary/Keyword: memory size reduction


A design of convolutional encoder and interleaver with minimized memory size (메모리 크기를 최소화한 인터리버 및 길쌈부호기의 설계)

  • 임인기;김경수;조한진
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.24 no.12B
    • /
    • pp.2424-2429
    • /
    • 1999
  • In this paper, we present a memory-efficient implementation method for a channel encoder that performs convolutional encoding and interleaving. In the conventional method, two separate RAMs must be used for the channel encoder: one RAM for storing frame data and another RAM for interleaving. In our method, no interleaving RAM is used; instead, only two small RAMs buffer the input frame data, and convolutional encoding and interleaving are processed concurrently using these two RAMs. Applying a channel encoder designed with this method to several digital mobile telecommunication systems yields a memory size reduction of 33-60%, a simplified frame-data receiving procedure, and additional timing margin gained from the simplified procedure.

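As an illustration of the approach described in the abstract above, here is a minimal Python sketch of convolutional encoding with on-the-fly block interleaving driven from a small input buffer; the rate-1/2 generator polynomials, the 8-row interleaver, and the ping-pong buffering detail are illustrative assumptions, not the parameters of the cited design.

```python
# Minimal sketch: convolutional encoding with on-the-fly block interleaving,
# so no separate interleaving RAM is needed. Generators (7, 5 octal) and the
# 8-row interleaver are assumptions for the example.

ROWS = 8  # interleaver rows (assumed)

def conv_encode_interleave(frame_bits):
    """Encode one frame (rate 1/2, K = 3) and write each output symbol
    directly at its interleaved address."""
    g0, g1 = 0b111, 0b101          # generator polynomials (assumed)
    state = 0
    symbols = []
    for b in frame_bits:
        state = ((state << 1) | b) & 0b111
        symbols.append(bin(state & g0).count("1") & 1)
        symbols.append(bin(state & g1).count("1") & 1)

    cols = len(symbols) // ROWS    # frame length assumed divisible by ROWS
    out = [0] * len(symbols)
    for i, s in enumerate(symbols):
        row, col = i // cols, i % cols
        out[col * ROWS + row] = s  # column-wise read order of a block interleaver
    return out

# Ping-pong idea: while one small buffer is being encoded/interleaved,
# the other receives the next incoming frame, then the roles swap.
buf_a = [1, 0, 1, 1, 0, 0, 1, 0] * 4   # 32 input bits -> 64 coded symbols
print(conv_encode_interleave(buf_a)[:16])
```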

An Experimental 0.8 V 256-kbit SRAM Macro with Boosted Cell Array Scheme

  • Chung, Yeon-Bae;Shim, Sang-Won
    • ETRI Journal
    • /
    • v.29 no.4
    • /
    • pp.457-462
    • /
    • 2007
  • This work presents a low-voltage static random access memory (SRAM) technique based on a dual-boosted cell array. For each read/write cycle, the wordline and the cell power node of the selected SRAM cells are boosted to two different voltage levels. This technique raises the read static noise margin to a sufficient level without an increase in cell size, and it also improves circuit speed owing to the increased cell read-out current. A 256-kbit SRAM macro fabricated in a 0.18 µm CMOS process with the proposed technique demonstrates 0.8 V operation at 50 MHz while consuming 65 µW/MHz. It also demonstrates an 87% bit-error-rate reduction while operating at a 43% higher clock frequency than a conventional SRAM.

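For a rough sense of scale, the reported energy figure implies the following active power at the quoted clock rate (a back-of-the-envelope calculation, not a number stated in the abstract):

```latex
P_{\text{active}} \approx 65\,\mu\mathrm{W/MHz} \times 50\,\mathrm{MHz} = 3.25\,\mathrm{mW}
```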

Performance of the Coupling Canceller with the Various Window Size on the Multi-Level Cell NAND Flash Memory Channel (멀티레벨셀 낸드 플래시 메모리에서 커플링 제거기의 윈도우 크기에 따른 성능 비교)

  • Park, Dong-Hyuk;Lee, Jae-Jin
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.37 no.8A
    • /
    • pp.706-711
    • /
    • 2012
  • Multi-level cell NAND flash is a flash memory technology that uses multiple levels per cell to store more bits. Currently, most multi-level cell NAND stores 2 bits of information per cell. This reduces the margin separating the states and increases the possibility of errors, and the dominant error source is cell-to-cell coupling noise. In this paper, we study a coupling-noise cancellation scheme for the 16-level cell NAND flash memory channel and examine its performance for various window sizes. We also compare the performance of conventional threshold detection with that of the proposed scheme.
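
To make the windowed cancellation idea concrete, below is a minimal Python sketch that subtracts an estimated coupling contribution from neighbouring cells inside a window; the coupling coefficient and window size are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of windowed cell-to-cell coupling cancellation: each read
# voltage is corrected by a weighted sum of the programmed levels of its
# neighbours within +/- `window` positions. gamma and window are assumed.
import numpy as np

def cancel_coupling(read_volts, neighbor_levels, gamma=0.05, window=1):
    est = np.copy(read_volts)
    rows, cols = read_volts.shape
    for r in range(rows):
        for c in range(cols):
            for dr in range(-window, window + 1):
                for dc in range(-window, window + 1):
                    if (dr, dc) == (0, 0):
                        continue
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols:
                        est[r, c] -= gamma * neighbor_levels[rr, cc]
    return est

# A larger `window` removes more interference but costs more memory and
# computation, which is the trade-off the paper evaluates.
```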

The Effect of Cold-rolling on Microstructure and Transformation Behavior of Cu-Zn-Al Shape Memory Alloy (냉간가공에 의한 CuZnAl계 형상기억합금의 결정립미세화와 특성평가)

  • Lee, Sang-Bong;Park, No-Jin
    • Korean Journal of Materials Research
    • /
    • v.9 no.3
    • /
    • pp.322-326
    • /
    • 1999
  • In this study, cold-rolling and appropriate annealing were adopted for grain refinement of a Cu-26.65Zn-4.05Al-0.31Ti (wt%) shape memory alloy. For cold deformation of this alloy, the ductile α-phase must be present. After heat treatment at 550°C, an (α+β) dual-phase structure with 40 vol.% α-phase was obtained, which could be rolled at room temperature. The alloy was cold-rolled to a final thickness of 1.0 mm with total reduction degrees of 70% and 90%. The rolled sheets were betatized at 800°C for various times and then quenched in ice water. The grain size of the cold-rolled samples was 60-80 µm, which is much smaller than that of the hot-rolled samples, and the 90% rolled sample showed a smaller grain size than the 70% rolled one. The small grain size influenced the phase transformation temperatures and the stabilization of the austenitic phase.

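As a quick check on the rolling figures above, a total reduction degree r to a final thickness t_f implies an initial thickness t_0 = t_f/(1-r) (a back-of-the-envelope relation, not stated in the abstract):

```latex
t_0 = \frac{t_f}{1-r}: \qquad
r = 0.70 \Rightarrow t_0 = \frac{1.0\,\mathrm{mm}}{0.30} \approx 3.3\,\mathrm{mm}, \qquad
r = 0.90 \Rightarrow t_0 = \frac{1.0\,\mathrm{mm}}{0.10} = 10\,\mathrm{mm}
```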

Adaptive Garbage Collection Policy based on Analysis of Page Ratio for Flash Memory (플래시 메모리를 위한 페이지 비율 분석 기반의 적응적 가비지 컬렉션 정책)

  • Lee, Soung-Hwan;Lee, Tae-Hoon;Chung, Ki-Dong
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.36 no.5
    • /
    • pp.422-428
    • /
    • 2009
  • NAND flash memory is widely used in embedded systems because of many attractive features, such as small size, light weight, low power consumption, and fast access speed. However, it requires garbage collection, which includes erase operations. The erase operation is slower than other operations, and each block has a limited erase lifetime (typically 100,000 cycles), after which the block becomes unusable. The proposed garbage collection policy focuses on minimizing the total number of erase operations, the deviation of erase counts across blocks, and the garbage collection time. NAND flash memory consists of pages of three types: valid pages, invalid pages, and free pages. To achieve these goals, we use the page ratio to decide when to perform garbage collection and to select the victim block. Additionally, we implement an allocation method and a group management method. Simulation results show that the proposed policy performs better than Greedy or CAT, with up to an 85% reduction in the deviation of erase operations and a 6% reduction in garbage collection time.
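
The abstract describes a page-ratio-driven policy; below is a minimal Python sketch of that general idea, in which garbage collection is triggered when the free-page ratio falls below a threshold and the block with the highest invalid-page ratio is chosen as the victim. The trigger threshold and the simple wear-levelling term are illustrative assumptions, not the paper's exact rule.

```python
# Minimal sketch of page-ratio-based garbage collection for NAND flash.

FREE_RATIO_TRIGGER = 0.10   # start GC when <10% of all pages are free (assumed)

def should_collect(blocks):
    total = sum(b["valid"] + b["invalid"] + b["free"] for b in blocks)
    free = sum(b["free"] for b in blocks)
    return free / total < FREE_RATIO_TRIGGER

def pick_victim(blocks):
    """Prefer blocks with many invalid pages, lightly penalising blocks that
    have already been erased often so erase counts stay even."""
    def score(b):
        pages = b["valid"] + b["invalid"] + b["free"]
        return b["invalid"] / pages - 0.001 * b["erase_count"]
    return max(blocks, key=score)

blocks = [
    {"valid": 20, "invalid": 40, "free": 4, "erase_count": 120},
    {"valid": 50, "invalid": 10, "free": 4, "erase_count": 30},
]
if should_collect(blocks):
    victim = pick_victim(blocks)   # valid pages are copied out, then the block is erased
```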

Garbage Collection Method for NAND Flash Memory based on Analysis of Page Ratio (페이지 비율 분석 기반의 NAND 플래시 메모리를 위한 가비지 컬렉션 기법)

  • Lee, Seung-Hwan;Ok, Dong-Seok;Yoon, Chang-Bae;Lee, Tae-Hoon;Chung, Ki-Dong
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.15 no.9
    • /
    • pp.617-625
    • /
    • 2009
  • NAND flash memory is widely used in embedded systems because of many attractive features, such as small size, light weight, low power consumption, and fast access speed. However, it requires garbage collection, which includes erase operations. The erase operation is very slow, and the number of erase operations allowed for each block is limited. The proposed garbage collection method focuses on minimizing the total number of erase operations, the deviation of erase counts across blocks, and the garbage collection time. NAND flash memory consists of pages of three types: valid pages, invalid pages, and free pages. To achieve these goals, we use the page ratio to decide when to perform garbage collection and to select the victim block. Additionally, we implement an allocation method and a group management method. Simulation results show that the proposed method performs better than Greedy or CAT, with up to an 82% reduction in the deviation of erase operations and a 75% reduction in garbage collection time.

An Improvement MPEG-2 Video Encoder Through Efficient Frame Memory Interface (효율적인 프레임 메모리 인터페이스를 통한 MPEG-2 비디오 인코더의 개선)

  • 김견수;고종석;서기범;정정화
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.24 no.6B
    • /
    • pp.1183-1190
    • /
    • 1999
  • This paper presents an efficient hardware architecture to improve the frame memory interface, which, together with the motion estimator, occupies the largest hardware area when an MPEG-2 video encoder is implemented as an ASIC chip. In this architecture, the memory size for internal data buffering and the hardware area for the frame memory interface control logic are reduced through an efficient memory map organization for the external dual-bank SDRAM and through optimization of the memory access timing between the video encoder and the external SDRAM. In this design, 0.5 µm CMOS TLM (Triple Layer Metal) standard cells are used as the design library, and a VHDL simulator and logic synthesis tools are used for hardware design and verification. A hardware emulator modeled in the C language is exploited for test vector generation and functional verification. The improved frame memory interface occupies about 58% less hardware area than the existing architecture [2-3], resulting in a total hardware area reduction of up to 24.3%. These results also show that the frame memory interface strongly influences the overall area of the video encoder.

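The abstract does not give the actual memory map; the following Python sketch only illustrates the general idea of a dual-bank SDRAM map for frame data, in which consecutive macroblock rows alternate between the two banks so that precharge/activate in one bank can overlap accesses to the other. The field layout and the 16x16 macroblock geometry are assumptions for the example.

```python
# Minimal sketch of a dual-bank SDRAM address map for reference-frame data.

MB_SIZE = 16          # macroblock is 16x16 pixels
FRAME_WIDTH_MB = 45   # e.g. a 720-pixel-wide frame -> 45 macroblocks per row (assumed)

def sdram_address(x, y):
    """Map a pixel (x, y) of the frame to (bank, row, column)."""
    mb_row = y // MB_SIZE
    bank = mb_row & 1                                   # alternate banks per MB row
    row = (mb_row >> 1) * FRAME_WIDTH_MB + (x // MB_SIZE)
    col = (y % MB_SIZE) * MB_SIZE + (x % MB_SIZE)
    return bank, row, col

# A motion-compensation block straddling two macroblock rows hits both banks,
# letting the controller interleave ACTIVATE/READ commands between them.
print(sdram_address(100, 30), sdram_address(100, 34))
```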

Profile Guided Selection of ARM and Thumb Instructions at Function Level (함수 수준에서 프로파일 정보를 이용한 ARM과 Thumb 명령어의 선택)

  • Soh Changho;Han Taisook
    • Journal of KIISE:Software and Applications
    • /
    • v.32 no.3
    • /
    • pp.227-235
    • /
    • 2005
  • In the embedded system domain, both memory requirements and energy consumption are major concerns. To save memory and energy, the 32-bit ARM processor supports the 16-bit Thumb instruction set. For a given program, the Thumb code is typically smaller than the ARM code; however, the limitations of the Thumb instruction set can often lead to poorer-quality code. To generate code that is smaller but slightly slower, Krishnaswamy proposed a profile-guided selection algorithm at the module level for generating mixed ARM and Thumb code for application programs. The resulting code gives significant code size reductions with little loss in performance. When the instruction set is selected at the module level, however, some functions that should be compiled in Thumb mode to reduce code size end up compiled as ARM code, which leaves additional room for code size reduction. In this paper, we propose a profile-guided selection algorithm at the function level for generating mixed ARM and Thumb code, so that the resulting code gives additional code size reductions without loss in performance compared to the module-level algorithm. We reduce code size by an additional 2.7% with no performance penalty.
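
As a rough illustration of function-level selection, the Python sketch below chooses Thumb for functions that are smaller in Thumb and contribute little to total execution time, and keeps ARM for hot functions. The size/cycle estimates and the hot-function threshold are illustrative assumptions, not the paper's actual heuristic.

```python
# Minimal sketch of profile-guided ARM/Thumb selection at function level.

HOT_FRACTION = 0.05   # functions above this share of total cycles stay ARM (assumed)

def select_instruction_sets(functions, profile):
    """functions: {name: {"arm_size", "thumb_size", "arm_cycles", "thumb_cycles"}}
       profile:   {name: call count from a training run}"""
    total = sum(profile[f] * functions[f]["arm_cycles"] for f in functions)
    choice = {}
    for name, info in functions.items():
        weight = profile[name] * info["arm_cycles"] / max(total, 1)
        smaller_in_thumb = info["thumb_size"] < info["arm_size"]
        # Cold functions take Thumb whenever it is smaller; hot functions keep
        # ARM so the performance-critical path is not slowed down.
        choice[name] = "thumb" if smaller_in_thumb and weight < HOT_FRACTION else "arm"
    return choice

funcs = {
    "parse_header": {"arm_size": 400, "thumb_size": 280, "arm_cycles": 90,  "thumb_cycles": 120},
    "idct":         {"arm_size": 900, "thumb_size": 700, "arm_cycles": 400, "thumb_cycles": 520},
}
print(select_instruction_sets(funcs, {"parse_header": 10, "idct": 50_000}))
```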

A Vector-Perturbation Based Lattice-Reduction using look-Up Table (격자 감소 기반 전부호화 기법에서의 효율적인 Look-Up Table 생성 방법)

  • Han, Jae-Won;Park, Dae-Young
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.36 no.6A
    • /
    • pp.551-557
    • /
    • 2011
  • We investigate lattice-reduction-aided precoding techniques using a look-up table (LUT) for multi-user multiple-input multiple-output (MIMO) systems. Lattice-reduction-aided vector perturbation (VP) gives large sum capacity with low encoding complexity. Nevertheless, the lattice-reduction process based on the LLL algorithm still requires high computational complexity, since it involves several iterations of size reduction and column vector exchanges. In this paper, we apply LUT-aided lattice reduction to VP and propose a scheme to generate the LUT efficiently. Simulation results show that the proposed scheme achieves a similar orthogonality defect and bit error rate (BER) even with a smaller memory size.
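
To make the two LLL operations mentioned in the abstract concrete, here is a small NumPy sketch of the size-reduction step and the column-exchange (Lovász) test; this is a didactic fragment of plain LLL, not the paper's LUT-based procedure.

```python
# Minimal sketch of the LLL building blocks: size reduction and the Lovasz test.
import numpy as np

def size_reduce(B, k, j):
    """Subtract an integer multiple of column j from column k so that the
    Gram-Schmidt coefficient mu_{k,j} has magnitude at most 1/2."""
    _, R = np.linalg.qr(B[:, :k + 1])
    mu = R[j, k] / R[j, j]
    B[:, k] -= int(round(mu)) * B[:, j]
    return B

def lovasz_swap_needed(B, k, delta=0.75):
    """Return True if columns k-1 and k should be exchanged."""
    _, R = np.linalg.qr(B[:, :k + 1])
    mu = R[k - 1, k] / R[k - 1, k - 1]
    return R[k, k] ** 2 < (delta - mu ** 2) * R[k - 1, k - 1] ** 2

B = np.array([[1.0, 4.0], [1.0, 5.0]])
B = size_reduce(B, 1, 0)
print(B, lovasz_swap_needed(B, 1))
```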

Compact CNN Accelerator Chip Design with Optimized MAC And Pooling Layers (MAC과 Pooling Layer을 최적화시킨 소형 CNN 가속기 칩)

  • Son, Hyun-Wook;Lee, Dong-Yeong;Kim, HyungWon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.25 no.9
    • /
    • pp.1158-1165
    • /
    • 2021
  • This paper proposes a CNN accelerator that reduces memory size by incorporating the pooling layer operation into the multiply-and-accumulate (MAC) units. To optimize the memory and datapath circuits, quantized 8-bit integer weights pre-trained on the MNIST data set are used instead of 32-bit floating-point weights. To reduce chip area, the proposed CNN model is reduced to one convolutional layer, one 4x4 max pooling layer, and two fully connected layers, and all operations use dedicated MAC units with approximate adders and multipliers. A 94% reduction in internal memory size is achieved by performing the convolution and pooling operations simultaneously in the proposed architecture. The proposed accelerator chip is designed using a TSMC 65 nm GP CMOS process and occupies 0.8 x 0.9 = 0.72 mm², about half the size of our previous design. The presented CNN accelerator chip achieves 94% accuracy and a 77 µs inference time per MNIST image.
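
The memory saving comes from fusing convolution with pooling so the full convolution feature map is never stored; the NumPy sketch below shows that idea in software form. The single 3x3 convolution followed by 4x4 max pooling is an illustrative assumption rather than the exact layer configuration of the chip.

```python
# Minimal sketch of fused convolution + max pooling: each convolution output is
# folded into its pooling window immediately, so only the pooled map is stored.
import numpy as np

def conv_pool_fused(image, kernel, pool=4):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    pooled = np.full((oh // pool, ow // pool), -np.inf)
    for y in range(oh):
        for x in range(ow):
            v = np.sum(image[y:y + kh, x:x + kw] * kernel)   # one conv output
            py, px = y // pool, x // pool
            if py < pooled.shape[0] and px < pooled.shape[1]:
                pooled[py, px] = max(pooled[py, px], v)       # pool on the fly
    return pooled

img = np.random.rand(28, 28)      # MNIST-sized input
k = np.random.rand(3, 3)
out = conv_pool_fused(img, k)     # only the pooled 6x6 map is ever stored
print(out.shape)
```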