• Title/Summary/Keyword: Direct Memory Access

Performance Improvement Method of Multi-Port Memory Controller Using An Effective Multi-Channel Direct Memory Access Management (효과적인 다채널 직접 메모리 접근 관리를 통한 멀티포트 메모리 컨트롤러의 성능 향상 방법)

  • Chun, Ik-Jae; Lyuh, Chun-Gi; Roh, Tae Moon; Lee, Moon-Sik
    • Journal of the Institute of Electronics and Information Engineers, v.51 no.4, pp.33-41, 2014
  • This paper presents an effective memory access method for high-speed data transfer on mobile systems, using a direct memory access controller that takes the characteristics of a multi-port memory controller into account. The direct memory access controller has an integrated channel-management function to control multiple direct memory access channels. The channels are physically separated and operate independently of each other. Experimental results show that the proposed direct memory access method improves transfer performance by up to 72% and 69% for read and write transfer cycles, respectively, and that the total number of transfer cycles of the proposed method is 63% less than that of a commercial method under 4-channel access.
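
The channel-management idea above, one integrated controller driving several physically separate DMA channels, can be illustrated with a small bare-metal C sketch. The register layout, base address, stride, and control bits below are assumptions made for illustration; they are not taken from the paper.

```c
#include <stdint.h>

/* Hypothetical register map: each DMA channel has its own, physically
 * separate register bank, so channels can be programmed and can run
 * independently of one another. */
#define DMA_BASE       0x40010000u
#define DMA_CH_STRIDE  0x40u          /* register-bank size per channel */
#define NUM_CHANNELS   4

typedef struct {
    volatile uint32_t src;    /* source address                      */
    volatile uint32_t dst;    /* destination address                 */
    volatile uint32_t len;    /* transfer length, bytes              */
    volatile uint32_t ctrl;   /* bit 0: start, bit 1: busy (assumed) */
} dma_ch_regs_t;

static inline dma_ch_regs_t *dma_ch(int ch)
{
    return (dma_ch_regs_t *)(DMA_BASE + (uint32_t)ch * DMA_CH_STRIDE);
}

/* Start a transfer on one channel; the other channels keep running untouched. */
static void dma_start(int ch, uint32_t src, uint32_t dst, uint32_t len)
{
    dma_ch_regs_t *r = dma_ch(ch);
    r->src  = src;
    r->dst  = dst;
    r->len  = len;
    r->ctrl = 1u;             /* kick the channel */
}

/* Busy-wait until the given channel reports completion. */
static void dma_wait(int ch)
{
    while (dma_ch(ch)->ctrl & 2u)
        ;
}
```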

SDRAM Fast Accession By DMA (Direct Memory Access) (DMA(Direct Memory Access)을 이용한 SDRAM의 고속 인터페이스)

  • Kim, Jin-Wan; Cho, Hyun-Mook
    • Journal of IKEEE, v.10 no.1 s.18, pp.22-29, 2006
  • In this paper, we present an efficient way of accessing SDRAM through DMA (Direct Memory Access) when a microprocessor and peripheral blocks share one SDRAM. The microprocessor accesses memory through AMBA, the system bus defined by ARM, while the DMAs access memory through their own bus. Reads and writes by the peripheral blocks to the SDRAM are carried out by an intermediate DMA in order to minimize the number of memory accesses and addressing operations. While the microprocessor is not accessing the SDRAM, that is, while it is accessing other registers or a hit occurs when fetching program code or data, the DMAs can read and write data in the SDRAM without interfering with AMBA traffic. This approach increases the efficiency of the system and improves performance by 16.8%.
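
The scheme hinges on the DMA touching the SDRAM only while the processor is not using it, so AMBA traffic is never disturbed. A minimal C sketch of that gating follows; the status and DMA register names and addresses are hypothetical, not the paper's design.

```c
#include <stdint.h>

/* Illustrative memory-mapped registers (names are not from the paper). */
#define BUS_STATUS      (*(volatile uint32_t *)0x40020000u)
#define CPU_SDRAM_BUSY  0x1u            /* CPU currently owns the SDRAM */
#define DMA_SRC         (*(volatile uint32_t *)0x40020010u)
#define DMA_DST         (*(volatile uint32_t *)0x40020014u)
#define DMA_LEN         (*(volatile uint32_t *)0x40020018u)
#define DMA_START       (*(volatile uint32_t *)0x4002001Cu)

/* Move peripheral data into SDRAM through the DMA, but only start while the
 * CPU side is idle so the transfer does not interfere with AMBA traffic. */
static void peripheral_to_sdram(uint32_t src, uint32_t dst, uint32_t len)
{
    while (BUS_STATUS & CPU_SDRAM_BUSY)
        ;                               /* wait until the CPU is off the SDRAM */
    DMA_SRC   = src;
    DMA_DST   = dst;
    DMA_LEN   = len;
    DMA_START = 1u;                     /* hand the SDRAM over to the DMA engine */
}
```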

Design of Cache Memory System for Next Generation CPU (차세대 CPU를 위한 캐시 메모리 시스템 설계)

  • Jo, Ok-Rae; Lee, Jung-Hoon
    • IEMEK Journal of Embedded Systems and Applications, v.11 no.6, pp.353-359, 2016
  • In this paper, we propose a high-performance L1 cache structure for high-clock-rate CPUs. The proposed cache memory consists of three parts: a direct-mapped cache to support fast access time, a two-way set-associative buffer to reduce the miss ratio, and a way-select table. The most recently accessed data is stored in the direct-mapped cache. If data has a high probability of repeated reference, it is stored in the two-way set-associative buffer when it is replaced from the direct-mapped cache. To achieve high performance with fast access time, only one of the two ways of the set-associative buffer is selectively accessed, based on the way-select table (WST). According to simulation results, access time can be reduced by about 7% and 40% compared with a direct-mapped cache and with an Intel i7-6700 cache of twice the capacity, respectively.
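
The lookup order described above, direct-mapped cache first and then a single predicted way of the two-way buffer, can be sketched in C as follows. The table sizes, 64-byte line granularity, and tag arithmetic are illustrative assumptions rather than figures from the paper.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative sizes; the abstract does not fix these. */
#define DM_SETS  256            /* direct-mapped cache sets      */
#define VB_SETS   64            /* two-way buffer sets           */

typedef struct { bool valid; uint32_t tag; } line_t;

static line_t  dm_cache[DM_SETS];        /* direct-mapped cache            */
static line_t  vbuf[VB_SETS][2];         /* two-way set-associative buffer */
static uint8_t wst[VB_SETS];             /* way-select table: predicted way */

/* Lookup: try the direct-mapped cache first; on a miss, probe only the
 * single way of the buffer that the way-select table predicts. */
static bool cache_lookup(uint32_t addr)
{
    uint32_t blk    = addr >> 6;                     /* 64-byte lines assumed */
    uint32_t dm_idx = blk % DM_SETS;

    if (dm_cache[dm_idx].valid && dm_cache[dm_idx].tag == blk)
        return true;                                 /* fast direct-mapped hit */

    uint32_t vb_idx = blk % VB_SETS;
    uint32_t way    = wst[vb_idx];                   /* predicted way (0 or 1) */
    if (vbuf[vb_idx][way].valid && vbuf[vb_idx][way].tag == blk)
        return true;                                 /* buffer hit, one way probed */

    return false;                                    /* miss: fetch from next level */
}
```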

A design of Direct Memory Access For H.264 Encoder (H.264 Encoder용 Direct Memory Access (DMA) 설계)

  • Jung, Il-Sub; Suh, Ki-Bum
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2008.10a, pp.91-94, 2008
  • The designed module stores the images received from a CMOS Image Sensor (CIS) to memory and, following the operation of the encoder module, reads the original image and the reference image from memory one macroblock at a time and supplies or stores them. Processing one macroblock requires 470 cycles. To verify the design, a reference encoder in C comparable to JM 9.4 was developed, and the module was verified with test vectors obtained from the reference encoder.
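
The module's core job is moving one 16x16 macroblock at a time between frame memory and the encoder. A minimal C sketch of such a macroblock fetch is given below; the frame dimensions and luma-only layout are assumptions made for illustration.

```c
#include <stdint.h>
#include <string.h>

#define MB_SIZE  16             /* H.264 macroblock is 16x16 luma samples */
#define FRAME_W  1280           /* illustrative frame width  */
#define FRAME_H  720            /* illustrative frame height */

/* Copy one macroblock (mb_x, mb_y) of a luma frame into a linear buffer,
 * as a DMA engine would when feeding the encoder. */
static void fetch_macroblock(const uint8_t *frame, uint8_t *mb,
                             int mb_x, int mb_y)
{
    const uint8_t *src = frame + (mb_y * MB_SIZE) * FRAME_W + mb_x * MB_SIZE;
    for (int row = 0; row < MB_SIZE; row++)
        memcpy(mb + row * MB_SIZE, src + row * FRAME_W, MB_SIZE);
}
```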

PMU (Performance Monitoring Unit)-Based Dynamic XIP (eXecute In Place) Technique for Embedded Systems (내장형 시스템을 위한 PMU (Performance Monitoring Unit) 기반 동적 XIP (eXecute In Place) 기법)

  • Kim, Dohun; Park, Chanik
    • IEMEK Journal of Embedded Systems and Applications, v.3 no.3, pp.158-166, 2008
  • These days, mobile embedded systems adopt flash memory with the XIP (eXecute In Place) feature, since it can reduce memory usage, power consumption, and software load time. XIP allows the processor to access ROM and flash memory directly. However, using XIP incurs unnecessary degradation of application performance, because direct access to ROM and flash memory has a longer delay than access to main memory. In this paper, we propose a memory-management framework, dynamic XIP, which can resolve the performance degradation of using XIP. Using a constrained RAM cache, dynamic XIP can dynamically change the XIP region according to the page access pattern, reducing the degradation in execution time or energy consumption that results from the native XIP problem. The proposed framework consists of a page profiler that gathers an application's memory access pattern using the PMU, and an XIP manager that decides whether a page is accessed in main memory or in flash memory. The proposed framework is implemented and evaluated in the Linux kernel. Our evaluation shows that the framework can reduce execution time by up to 25% and energy consumption by up to 22% compared with the XIP-only case adopted in typical mobile embedded systems. Moreover, in execution time and energy consumption, our modified LRU algorithm with code-page filters achieves reductions of up to 90% and 80%, respectively, compared with applying the existing LRU algorithm to dynamic XIP.
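
The division of labor described above, a profiler counting page accesses and an XIP manager deciding which pages live in the constrained RAM cache, can be sketched roughly in C. The page count, threshold, and helper functions below are purely illustrative; the paper's actual kernel implementation and its modified LRU policy are not reproduced here.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_PAGES      1024    /* illustrative number of XIP code pages    */
#define HOT_THRESHOLD    64    /* illustrative per-window access threshold */

/* Per-page access counts gathered by the page profiler (e.g. from PMU
 * samples); the sampling path itself is omitted in this sketch. */
static uint32_t access_count[NUM_PAGES];
static bool     in_ram_cache[NUM_PAGES];

/* Stubs for the remapping work done inside the real kernel framework. */
static void promote_to_ram(int page)  { printf("page %d -> RAM cache\n", page); }
static void demote_to_flash(int page) { printf("page %d -> XIP (flash)\n", page); }

/* XIP manager: frequently accessed pages are served from main memory,
 * rarely used pages stay mapped execute-in-place in flash. */
static void xip_manager_rebalance(void)
{
    for (int p = 0; p < NUM_PAGES; p++) {
        if (!in_ram_cache[p] && access_count[p] >= HOT_THRESHOLD) {
            promote_to_ram(p);
            in_ram_cache[p] = true;
        } else if (in_ram_cache[p] && access_count[p] < HOT_THRESHOLD / 4) {
            demote_to_flash(p);
            in_ram_cache[p] = false;
        }
        access_count[p] = 0;            /* start a new profiling window */
    }
}
```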

Implementation of External Memory Expansion Device for Large Image Processing (대규모 영상처리를 위한 외장 메모리 확장장치의 구현)

  • Choi, Yongseok; Lee, Hyejin
    • Journal of Broadcast Engineering, v.23 no.5, pp.606-613, 2018
  • This study is concerned with implementing an external memory expansion device for large-scale image processing. It consists of an external memory adapter card with a PCI (Peripheral Component Interconnect) Express Gen3 x8 interface, mounted on a graphics workstation for image processing, and an external memory board populated with DDR (Double Data Rate) memory. The connection between the memory adapter card and the external memory board is made through an optical interface. To access the external memory, both programmed I/O and DMA (Direct Memory Access) methods can be used to transmit and receive image data efficiently. We implemented the design on boards equipped with an Altera Stratix V FPGA (Field Programmable Gate Array) and a 40G optical transceiver, and the test results show a bandwidth of 1.6 GB/s, enough to handle one channel of 4K UHD (Ultra High Definition) video. We will continue this work with the goal of reaching a bandwidth of 3 GB/s or more.
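
As noted above, the external memory can be reached either by programmed I/O or by DMA. The C sketch below illustrates that split under stated assumptions: a static array stands in for the mapped PCIe BAR window and a plain memcpy() stands in for the card's DMA engine, since the abstract does not describe the actual driver interface.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Stand-in for the PCIe BAR window onto the external memory board; in a
 * real system this would come from mapping the device's BAR resource. */
static uint8_t bar_window[1 << 20];

/* Programmed I/O: the CPU itself writes through the BAR window, which is
 * fine for small, latency-sensitive accesses such as control words. */
static void pio_write_word(uint64_t offset, uint32_t value)
{
    *(volatile uint32_t *)(bar_window + offset) = value;
}

/* DMA path: for bulk image data the host only describes the transfer and
 * the card's DMA engine moves it; memcpy() is a placeholder for that engine. */
static void dma_send_frame(uint64_t offset, const uint8_t *frame, size_t bytes)
{
    memcpy(bar_window + offset, frame, bytes);
}
```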

A Design of Direct Memory Access (DMA) Controller For H.264 Encoder (H.264 Encoder용 Direct Memory Access (DMA) 제어기 설계)

  • Song, In-Keun
    • Journal of the Korea Institute of Information and Communication Engineering, v.14 no.2, pp.445-452, 2010
  • In this paper, an attempt has been made to design a DMA controller applicable to an H.264 baseline-profile level 3 encoder implemented fully in hardware. The designed controller module first stores the images supplied from a CMOS Image Sensor (CIS) in main memory, and then reads or stores the image data in macroblock units according to the encoder operation. The DMA controller requires 478 cycles to process one macroblock. To verify the design, a reference-C encoder compatible with JM 9.4 was developed, and the designed controller was verified using test vectors generated from the reference C code. The cycle count of the designed DMA controller is reduced by 40% compared with using the Xilinx MIG.

Design of a DMA Controller for Augmented Reality in Embedded System (증강현실을 위한 임베디드 시스템의 DMA 컨트롤러 설계)

  • Jang, Su Yeon; Oh, Jung Hwan; Yoon, Young Hyun; Lee, Seong Mo; Lee, Seung Eun
    • Journal of the Korea Institute of Information and Communication Engineering, v.23 no.7, pp.822-828, 2019
  • Augmented Reality (AR) overlays virtual information on a real environment, and the processor needs to access memory to support the AR system. However, the processor carries a heavy workload, because technological improvements keep increasing the size of the data, so a dedicated module is needed to offload this work. In this paper, we propose a Direct Memory Access (DMA) controller that displays images in place of the processor. We implemented the proposed DMA controller on a Field Programmable Gate Array (FPGA) and demonstrated its functionality over an Avalon Memory-Mapped (Avalon-MM) interface. The DMA controller was also fabricated in MagnaChip/Hynix 0.35 um CMOS technology, verifying its feasibility for embedded systems.
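
The controller's role is to stream frames to the display on behalf of the processor, which only programs a few memory-mapped registers over Avalon-MM. A minimal C sketch of such a register-level interface is shown below; the base address, register offsets, and control bits are assumptions, not the paper's actual register map.

```c
#include <stdint.h>

/* Illustrative Avalon-MM register map for the display DMA; the real
 * controller's offsets and bit fields are not given in the abstract. */
#define DISP_DMA_BASE   0x00080000u
#define REG_FRAME_ADDR  0x00u   /* physical address of the frame buffer */
#define REG_FRAME_LEN   0x04u   /* frame size in bytes                  */
#define REG_CONTROL     0x08u   /* bit 0: start streaming               */

static inline void mm_write(uint32_t offset, uint32_t value)
{
    *(volatile uint32_t *)(DISP_DMA_BASE + offset) = value;
}

/* The CPU only points the DMA controller at a frame; the controller then
 * streams pixels to the display on its own, freeing the processor. */
static void start_display_dma(uint32_t frame_addr, uint32_t frame_bytes)
{
    mm_write(REG_FRAME_ADDR, frame_addr);
    mm_write(REG_FRAME_LEN,  frame_bytes);
    mm_write(REG_CONTROL,    1u);
}
```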

Performance Analysis of NVMe SSDs and Design of Direct Access Engine on Virtualized Environment (가상화 환경에서 NVMe SSD 성능 분석 및 직접 접근 엔진 개발)

  • Kim, Sewoog; Choi, Jongmoo
    • KIISE Transactions on Computing Practices, v.24 no.3, pp.129-137, 2018
  • An NVMe (Non-Volatile Memory Express) SSD (Solid State Drive) is a high-performance storage device that uses flash memory as the storage medium, PCIe as the interface, and NVMe as the protocol on that interface. It supports multiple I/O queues, which makes it possible to process I/Os in parallel on multi-core systems and to provide higher bandwidth than SATA SSDs. Hence, NVMe SSDs are considered next-generation storage for data centers and cloud computing systems. In a virtualized system, however, the performance of an NVMe SSD is not fully utilized, due to the bottleneck of the software I/O stack. In particular, when the I/O stack of a hypervisor or host operating system such as Xen or KVM is used, I/O performance degrades seriously because the I/O stack is doubled between the host and the virtual machine. In this paper, we propose a new I/O engine, called the Direct-AIO (Direct Asynchronous I/O) engine, that can access the NVMe SSD directly to improve I/O performance in the QEMU emulator. We develop the proposed I/O engine and analyze the I/O performance differences between the existing I/O engine and the Direct-AIO engine.
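
The paper's Direct-AIO engine itself targets QEMU and is not reproduced here, but the mechanism it relies on, opening the device with O_DIRECT and issuing asynchronous requests that bypass the host page cache, can be sketched with Linux native AIO (libaio); the device path and transfer size below are placeholders.

```c
/* Sketch only: build with  gcc direct_aio.c -laio */
#define _GNU_SOURCE
#include <fcntl.h>
#include <libaio.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define BLOCK_SIZE 4096

int main(void)
{
    /* O_DIRECT bypasses the host page cache; buffers must be block-aligned. */
    int fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    void *buf;
    if (posix_memalign(&buf, BLOCK_SIZE, BLOCK_SIZE) != 0) return 1;

    io_context_t ctx = 0;
    if (io_setup(16, &ctx) < 0) { perror("io_setup"); return 1; }

    /* Prepare and submit one asynchronous 4 KiB read at offset 0. */
    struct iocb cb, *cbs[1] = { &cb };
    io_prep_pread(&cb, fd, buf, BLOCK_SIZE, 0);
    if (io_submit(ctx, 1, cbs) != 1) { perror("io_submit"); return 1; }

    /* Reap the completion; a real engine would keep many I/Os in flight
     * per queue to exploit the SSD's internal parallelism. */
    struct io_event ev;
    io_getevents(ctx, 1, 1, &ev, NULL);
    printf("read completed, res = %ld\n", (long)ev.res);

    io_destroy(ctx);
    free(buf);
    close(fd);
    return 0;
}
```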

A Study of Java Card File System with File Cache and Direct Access function (File Cache 및 Direct Access기능을 추가한 Java Card File System에 관한 연구)

  • Lee, Yun-Seok; Jun, Ha-Yong; Jung, Min-Soo
    • Journal of Korea Multimedia Society, v.11 no.3, pp.404-413, 2008
  • As we move toward a ubiquitous society, many methods have been proposed to protect personal privacy, and smart cards with a CPU and memory are widely used to implement them. The use of Java Card is also gradually expanding into more diverse applications. Because there is no separate standard for Java Card file systems, a Java Card file system generally follows the smart card file system standard. However, one disadvantage of a Java Card file system that follows the smart card file system standard is inefficient memory use and increased processing time, caused by redundant data and program code. In this paper, a File Cache method and a Direct Access method are proposed to solve these problems. The proposed methods provide efficient memory use and reduced processing time by reducing program code.
