• Title/Summary/Keyword: Embedded Memory

High-Speed Pipelined Memory Architecture for Gigabit ATM Packet Switching (Gigabit ATM Packet 교환을 위한 파이프라인 방식의 고속 메모리 구조)

  • Gab Joong Jeong;Mon Key Lee
    • Journal of the Korean Institute of Telematics and Electronics C / v.35C no.11 / pp.39-47 / 1998
  • This paper describes a high-speed pipelined memory architecture for a shared buffer ATM switch. The architecture provides high speed and scalability: it eliminates the restriction of memory cycle time in a shared buffer ATM switch and, through its scalability, provides versatile switch performance. It consists of a 2-D array configuration of small memory banks, and enlarging the array configuration increases the entire memory capacity. The maximum cycle time of the designed pipelined memory is 4 ns at 5 V Vdd and 25°C. It is embedded in the prototype chip of a shared scalable buffer ATM switch with a 4 x 4 configuration of 4160-bit SRAM memory banks, integrated in a 0.6 μm 2-metal 1-poly CMOS technology.

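A rough picture of how such a 2-D bank organization can hide memory cycle time: consecutive buffer addresses map to different small banks, so their accesses can overlap in a pipeline. The C sketch below only illustrates this address decoding under assumed field widths and word size; it is not the paper's actual design.

```c
/* Minimal sketch of addressing a 2-D array of small memory banks,
 * loosely following the 4 x 4 organization mentioned in the abstract.
 * The 32-bit word size and the decoding order are illustrative assumptions. */
#include <stdint.h>
#include <stdio.h>

#define BANK_ROWS   4
#define BANK_COLS   4
#define BANK_WORDS  130          /* 4160 bits per bank / 32-bit words */

static uint32_t banks[BANK_ROWS][BANK_COLS][BANK_WORDS];

/* Stage 1: decode a shared-buffer address into bank coordinates. */
static void decode(uint32_t addr, unsigned *row, unsigned *col, unsigned *off)
{
    *off = addr % BANK_WORDS;
    *col = (addr / BANK_WORDS) % BANK_COLS;
    *row = (addr / (BANK_WORDS * BANK_COLS)) % BANK_ROWS;
}

/* Stage 2: access the selected small bank; because consecutive cells fall
 * into different banks, these accesses can overlap in a pipelined design. */
static uint32_t bank_read(unsigned row, unsigned col, unsigned off)
{
    return banks[row][col][off];
}

int main(void)
{
    unsigned r, c, o;
    decode(137, &r, &c, &o);
    printf("addr 137 -> bank(%u,%u) word %u = %u\n",
           r, c, o, (unsigned)bank_read(r, c, o));
    return 0;
}
```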

A garbage collector design and implementation for flash memory file system (플래시 메모리 파일 시스템을 위한 가비지 콜렉터 설계 및 구현)

  • Kim, Ki-Young;Son, Sung-Hoon;Shin, Dong-Ha
    • The KIPS Transactions:PartA / v.14A no.1 s.105 / pp.39-46 / 2007
  • Recently, flash memory has been widely accepted as a storage device for embedded systems for portability and performance reasons. Flash memory has many distinguishing features compared to legacy magnetic disks. In particular, a file system for flash memory usually takes the form of a log-structured file system and accordingly employs a garbage collector. Since the garbage collector can greatly affect the performance of the file system, it should be designed carefully with flash memory features in mind. In this paper, we suggest a new garbage collector for the existing JFFS2 (Journaling Flash File System II) file system. Through extensive performance evaluation, we show that the proposed garbage collector achieves improved performance in terms of flash memory consumption rate, extended flash memory lifetime, and improved wear-leveling.
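
As a rough illustration of the kind of policy such a garbage collector has to implement, the sketch below selects a victim erase block by weighing reclaimable (dirty) space against erase counts for wear-leveling. The data structures and the weighting are assumptions for this sketch, not the collector the authors built into JFFS2.

```c
/* Illustrative victim-block selection for a log-structured flash file
 * system: prefer blocks with much dirty (invalid) data, but discount
 * blocks that have already been erased many times, to even out wear.
 * The structures and weighting factor are assumptions for this sketch. */
#include <stddef.h>
#include <stdint.h>

struct erase_block {
    uint32_t dirty_bytes;   /* reclaimable space in the block */
    uint32_t erase_count;   /* how many times the block was erased */
};

/* Return the index of the block to garbage-collect, or -1 if none. */
static int select_victim(const struct erase_block *blk, size_t nblocks,
                         uint32_t max_erase_count)
{
    int best = -1;
    int64_t best_score = -1;

    for (size_t i = 0; i < nblocks; i++) {
        /* Score: reclaimable space, discounted for heavily worn blocks. */
        int64_t wear_penalty = (int64_t)blk[i].erase_count * 64;
        int64_t score = (int64_t)blk[i].dirty_bytes - wear_penalty;

        if (blk[i].erase_count >= max_erase_count)
            continue;           /* skip nearly worn-out blocks */
        if (score > best_score) {
            best_score = score;
            best = (int)i;
        }
    }
    return best;
}
```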

Low-Complexity Deeply Embedded CPU and SoC Implementation (낮은 복잡도의 Deeply Embedded 중앙처리장치 및 시스템온칩 구현)

  • Park, Chester Sungchung;Park, Sungkyung
    • Journal of the Korea Academia-Industrial cooperation Society / v.17 no.3 / pp.699-707 / 2016
  • This paper proposes a low-complexity central processing unit (CPU) that is suitable for deeply embedded systems, including Internet of things (IoT) applications. The core features a 16-bit instruction set architecture (ISA) that leads to high code density, as well as a multicycle architecture with a counter-based control unit and adder sharing that lead to a small hardware area. A co-processor, instruction cache, AMBA bus, internal SRAM, external memory, on-chip debugger (OCD), and peripheral I/Os are placed around the core to make a system-on-a-chip (SoC) platform. This platform is based on a modified Harvard architecture to facilitate memory access by reducing the number of access clock cycles. The SoC platform and CPU were simulated and verified at the C and the assembly levels, and FPGA prototyping with integrated logic analysis was carried out. The CPU was synthesized at the ASIC front-end gate netlist level using a 0.18 μm digital CMOS technology with a 1.8 V supply, resulting in a gate count of merely 7700 at a 50 MHz clock speed. The SoC platform was embedded in an FPGA on a miniature board and applied to deeply embedded IoT applications.
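
The counter-based multicycle control and adder sharing can be pictured with a small software model: a step counter sequences fetch, decode, and execute, and a single adder is reused for both the PC increment and ALU addition. The instruction encoding and register file below are assumptions for illustration, not the proposed 16-bit ISA.

```c
/* Toy model of a multicycle datapath with a counter-based control unit:
 * a step counter walks through FETCH -> DECODE -> EXECUTE, and one shared
 * adder serves both PC increment and ALU addition.
 * Encodings, register count, and the halt condition are assumptions. */
#include <stdint.h>
#include <stdio.h>

enum { FETCH, DECODE, EXECUTE, NUM_STEPS };

static uint16_t imem[16] = { 0x1123, 0x0000 };  /* assumed 16-bit instructions */
static uint16_t regs[8];
static uint16_t pc, ir;

static uint16_t shared_adder(uint16_t a, uint16_t b) { return (uint16_t)(a + b); }

int main(void)
{
    regs[2] = 40; regs[3] = 2;

    for (unsigned step = 0; ; step = (step + 1) % NUM_STEPS) {
        if (step == FETCH) {
            ir = imem[pc];
            pc = shared_adder(pc, 1);          /* adder reused for PC + 1 */
        } else if (step == DECODE) {
            if (ir == 0) break;                /* assumed halt encoding */
        } else {                               /* EXECUTE */
            unsigned op = ir >> 12, rd = (ir >> 8) & 7,
                     rs = (ir >> 4) & 7, rt = ir & 7;
            if (op == 1)                       /* assumed ADD rd, rs, rt */
                regs[rd] = shared_adder(regs[rs], regs[rt]);
        }
    }
    printf("r1 = %d\n", regs[1]);              /* 42 for the sample program */
    return 0;
}
```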

Reducing Power Consumption of Data Caches for Embedded Processors (임베디드 프로세서를 위한 선인출 데이터캐시의 저전력화 방안)

  • Moon, Hyun-Ju;Jee, Sung-Hyun
    • Journal of the Institute of Electronics Engineers of Korea CI / v.44 no.1 / pp.1-9 / 2007
  • Since data caches used in modern embedded processors consume a significant fraction of total processor power, up to 40%, embedded processors need power-efficient, high-performance data caches. This paper proposes a prefetching data cache structure that pursues low power consumption. We added a tag history table to an existing data cache structure that includes a hardware unit for data prefetching, so as to reduce the number of parallel lookups on the tag memory. This cache structure remarkably reduces the power consumed by parallel tag lookups. Experimental results show that the proposed cache architecture induces low power consumption while maintaining the same cache performance.
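
To make the tag history idea concrete, the sketch below consults a small per-set history of the last hitting way before probing all ways, so a hit in the remembered way costs one tag comparison instead of a full parallel lookup. The cache geometry and table organization are assumptions, not the structure evaluated in the paper.

```c
/* Illustrative "tag history table" in front of a 4-way set-associative
 * cache: if a set's recently used way is remembered and its tag matches,
 * only one tag comparison is needed instead of four.
 * Geometry (64 sets, 32-byte lines) and replacement are simplifications. */
#include <stdint.h>
#include <stdbool.h>

#define SETS 64
#define WAYS 4

struct line { uint32_t tag; bool valid; };

static struct line cache[SETS][WAYS];
static uint8_t history_way[SETS];     /* last way that hit in each set */

/* Returns true on hit; *probes reports how many tag comparisons were made. */
static bool cache_lookup(uint32_t addr, unsigned *probes)
{
    uint32_t set = (addr >> 5) & (SETS - 1);   /* assumed 32-byte lines */
    uint32_t tag = addr >> 11;
    unsigned hw  = history_way[set];

    *probes = 1;                               /* try the remembered way first */
    if (cache[set][hw].valid && cache[set][hw].tag == tag)
        return true;

    for (unsigned w = 0; w < WAYS; w++) {      /* fall back to a full probe */
        if (w == hw)
            continue;
        (*probes)++;
        if (cache[set][w].valid && cache[set][w].tag == tag) {
            history_way[set] = (uint8_t)w;
            return true;
        }
    }
    return false;
}
```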

Design and Implementation of the Gateway for Remote Monitoring a Combine (콤바인 원격 모니터링을 위한 게이트웨이 설계 및 개발)

  • Moon, Y.K.;Song, Y.H.;Shin, K.Y.;Lee, S.S.;Choi, C.H.;Mun, J.H.
    • Journal of Biosystems Engineering / v.32 no.3 / pp.197-205 / 2007
  • The objective of this study was to design and implement a gateway for remote monitoring of a combine. Many researchers have designed and implemented trouble-shooting systems for agricultural machines, but those systems either had no networking or relied on wired networks, even though the monitored machines operate outdoors, where mobility and stability must be guaranteed. We therefore developed a gateway based on an embedded system with an XScale PXA255 processor and a wireless network device, and built an embedded Linux kernel together with several device drivers. We also developed an embedded application for monitoring a combine; this application can receive signals from other clients and send them to a server via wireless LAN. Finally, a performance evaluation that measured CPU share and memory usage showed that the gateway can provide the monitoring service stably.
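
As a rough picture of the gateway's relay role, the sketch below receives monitoring datagrams from local clients and forwards them to a remote server, as the application described above does over the wireless LAN. The ports, server address, and use of UDP are placeholders; the paper's actual application protocol is not described in this listing.

```c
/* Minimal sketch of a gateway relay: receive monitoring data from local
 * clients over UDP and forward it to a remote server over the wireless LAN.
 * Port numbers and the server address are illustrative placeholders. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    int in_fd  = socket(AF_INET, SOCK_DGRAM, 0);
    int out_fd = socket(AF_INET, SOCK_DGRAM, 0);

    struct sockaddr_in local = { 0 }, server = { 0 };
    local.sin_family      = AF_INET;
    local.sin_addr.s_addr = htonl(INADDR_ANY);
    local.sin_port        = htons(5000);                 /* assumed client port */
    bind(in_fd, (struct sockaddr *)&local, sizeof(local));

    server.sin_family = AF_INET;
    server.sin_port   = htons(6000);                     /* assumed server port */
    inet_pton(AF_INET, "192.0.2.10", &server.sin_addr);  /* placeholder address */

    char buf[512];
    for (;;) {
        ssize_t n = recv(in_fd, buf, sizeof(buf), 0);    /* from combine sensors */
        if (n > 0)
            sendto(out_fd, buf, (size_t)n, 0,            /* relay to the server */
                   (struct sockaddr *)&server, sizeof(server));
    }
    /* not reached */
    close(in_fd);
    close(out_fd);
    return 0;
}
```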

A New File System for Multimedia Data Stream (멀티미디어 데이터 스트림을 위한 파일 시스템의 설계 및 구현)

  • Lee, Minsuk;Song, Jin-Seok
    • IEMEK Journal of Embedded Systems and Applications / v.1 no.2 / pp.90-103 / 2006
  • There are many file systems in various operating systems. They are usually designed for server environments, where the common cases are multiple active users and a great many small files, and they assume a large main memory to be used as a buffer cache. The existing file systems are therefore not suitable for resource-hungry embedded systems that process multimedia data streams. In this study, we designed and implemented a new file system that efficiently stores and retrieves multimedia data streams. The proposed file system has a very simple disk layout, which guarantees quick disk initialization and file system recovery, and it introduces a new indexing scheme, called the time-based indexing scheme. With this scheme, the file system maintains the relation between time and location for all multimedia streams, which is useful when searching and playing compressed multimedia streams: the exact frame position can be located for a given time, reducing CPU processing and power consumption. The proposed file system and its APIs utilizing the time-based indexing scheme were first implemented in a Linux environment, though they are operating-system independent. In a performance evaluation on a real DVR system, which measured the execution time of multi-threaded reading and writing, the proposed file system was up to 38.7% faster than the EXT2 file system.

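The time-based indexing scheme can be pictured as a sorted table of (timestamp, byte offset) pairs, one per key frame, searched with binary search to turn a playback time into a file position. The entry layout and units below are assumptions for illustration, not the file system's on-disk format.

```c
/* Illustrative time-based index: a sorted array of (timestamp, offset)
 * entries, one per key frame, searched with binary search so that playback
 * can seek to a given time without parsing the whole compressed stream.
 * Entry layout and units (milliseconds, byte offsets) are assumptions. */
#include <stdint.h>
#include <stdio.h>

struct time_index_entry {
    uint32_t time_ms;     /* presentation time of a key frame */
    uint64_t offset;      /* byte offset of that frame in the stream file */
};

/* Return the offset of the last key frame at or before t_ms. */
static uint64_t seek_offset(const struct time_index_entry *idx, size_t n,
                            uint32_t t_ms)
{
    size_t lo = 0, hi = n;          /* search in [lo, hi) */

    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (idx[mid].time_ms <= t_ms)
            lo = mid + 1;
        else
            hi = mid;
    }
    return lo ? idx[lo - 1].offset : 0;
}

int main(void)
{
    struct time_index_entry idx[] = {
        { 0, 0 }, { 1000, 40960 }, { 2000, 81920 }, { 3000, 122880 },
    };
    printf("seek to 2.5 s -> offset %llu\n",
           (unsigned long long)seek_offset(idx, 4, 2500));
    return 0;
}
```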

Demand Paging Method Using Improved Algorithms on Non-OS Embedded System (Non-OS 임베디드 시스템에서 개선된 알고리즘을 적용한 요구 페이징 기법)

  • Lew, Kyeung Seek;Jeon, Chang Kyu;Kim, Yong Deak
    • IEMEK Journal of Embedded Systems and Applications / v.5 no.4 / pp.225-233 / 2010
  • In this paper, we improve the performance of a demand paging loader that applies demand paging without an operating system. The page replacement strategies used in existing operating systems can identify recently used pages because multiple processes are running, and replacement policies for recently used or frequently demanded pages have been built on that information. However, such OS-based strategies cannot be applied to a single process running without an operating system, because no context switching ever occurs. This paper therefore suggests page replacement strategies that can be applied in a paging loader running as a single process. With the Return-Prediction-Algorithm, we observed improved performance for programs in which function calls frequently span long distances, and with the Most-Frequently-Used-Page-Remain-Algorithm, we observed improved performance for programs that frequently reference particular pages. Together, these algorithms preserve the memory reduction achieved by demand paging while reducing the run-time delay it introduces.
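
The Most-Frequently-Used-Page-Remain idea can be sketched as a replacement loop that keeps the most-referenced resident page pinned and evicts among the others. The sketch below is a generic interpretation under assumed bookkeeping, not the authors' exact algorithm.

```c
/* Illustrative page replacement in the spirit of keeping the most
 * frequently used page resident: the victim is chosen among the other
 * resident pages, skipping the single most-referenced one.
 * Frame count, counters, and tie-breaking are assumptions. */
#include <stdint.h>

#define NUM_FRAMES 8

struct frame {
    int      page;        /* page number loaded in this frame, -1 if free */
    uint32_t ref_count;   /* how often the page has been referenced */
};

static struct frame frames[NUM_FRAMES];

/* Pick a frame to evict: never the most frequently used resident page. */
static int select_victim(void)
{
    int mfu = 0, victim = -1;

    for (int i = 1; i < NUM_FRAMES; i++)          /* find the MFU frame */
        if (frames[i].ref_count > frames[mfu].ref_count)
            mfu = i;

    for (int i = 0; i < NUM_FRAMES; i++) {        /* least used, excluding MFU */
        if (i == mfu || frames[i].page < 0)
            continue;
        if (victim < 0 || frames[i].ref_count < frames[victim].ref_count)
            victim = i;
    }
    return victim;                                /* -1 if nothing is evictable */
}
```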

The Development of Smart TV and Smart Home Platform based on HTML5 (HTML5를 기반으로 한 스마트 TV와 스마트 홈용 플랫폼 개발)

  • Kim, Gwang-Jun;Kang, Ki-Woong;Han, Kyu-Cheol;Jang, Seung-Jin;Yoon, Chan-Ho
    • The Journal of the Korea institute of electronic communication sciences / v.9 no.9 / pp.991-998 / 2014
  • An embedded system consists of installed hardware, such as a processor, memory devices, and various input/output devices, together with the software that controls them. This thesis presents an MPU module and base board for efficient industrial control, designed and manufactured around Samsung's S5PV210 CPU, which is based on the ARM Cortex-A8, with Android, an open mobile platform, installed on the embedded system. Temperature and humidity data received through a CAN communication module demonstrated the suitability and validity of the embedded platform design, with the application program implemented both as a native app on the Linux kernel under the Android OS and as an HTML5 application.
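
The CAN-based sensor path can be sketched with Linux SocketCAN, reading raw frames and unpacking temperature and humidity bytes. The interface name, CAN identifier, and payload layout are assumptions; the platform's actual CAN protocol is not described in this listing.

```c
/* Minimal sketch of reading temperature/humidity frames over Linux
 * SocketCAN. The interface name, CAN ID, and payload layout are
 * illustrative assumptions, not the platform's actual protocol. */
#include <linux/can.h>
#include <linux/can/raw.h>
#include <net/if.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);

    struct ifreq ifr = { 0 };
    strcpy(ifr.ifr_name, "can0");                /* assumed CAN interface */
    ioctl(s, SIOCGIFINDEX, &ifr);

    struct sockaddr_can addr = { 0 };
    addr.can_family  = AF_CAN;
    addr.can_ifindex = ifr.ifr_ifindex;
    bind(s, (struct sockaddr *)&addr, sizeof(addr));

    struct can_frame frame;
    while (read(s, &frame, sizeof(frame)) == sizeof(frame)) {
        if (frame.can_id == 0x123 && frame.can_dlc >= 2)   /* assumed sensor ID */
            printf("temperature %d C, humidity %d %%\n",
                   (int8_t)frame.data[0], (int)frame.data[1]);
    }
    close(s);
    return 0;
}
```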

Development of Debugging Tool for LEON3-based Embedded Systems (LEON3 기반 임베디드 시스템을 위한 디버깅 도구 개발)

  • Ryu, Sang-Moon
    • Journal of Institute of Control, Robotics and Systems / v.20 no.4 / pp.474-479 / 2014
  • LEON3 is a 32-bit synthesizable processor based on the SPARC V8. It can be connected to the AMBA 2.0 bus and has a 7-stage pipeline, an IEEE-754 FPU, and 256 KB of cache. It can be easily implemented in an FPGA and used in SoC designs. The DSU that comes with LEON3 can be used to control and monitor the operation of LEON3, and it makes it easy to set up a debugging environment for developing both hardware and software for embedded systems based on LEON3. This paper presents a summary of a debugging tool for LEON3-based embedded systems. The debugging tool can initialize the target hardware, find out how the target hardware is configured, load application code into a specified memory space, and run that application code. To provide users with a debugging environment, it can set breakpoints and control the operation of LEON3 accordingly, and function call tracing is one of the key functions of the debugging tool.
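
The capabilities listed above suggest the shape of the debugger's target back end: probe the hardware configuration, move code into target memory, and control execution through breakpoints. The interface below is an assumed sketch of that structure, not the actual tool or the DSU register interface.

```c
/* Illustrative shape of a debugger back end for a LEON3 target: the tool
 * needs to probe the hardware configuration, move code into target memory,
 * and control execution through breakpoints. Names and signatures here are
 * assumptions for the sketch, not the tool's real API or the DSU register map. */
#include <stddef.h>
#include <stdint.h>
#include <stdbool.h>

struct target_ops {
    bool (*init)(void);                                   /* reset/initialize the target */
    bool (*read_config)(char *buf, size_t len);           /* how is the SoC configured? */
    bool (*write_mem)(uint32_t addr, const void *src, size_t len);
    bool (*read_mem)(uint32_t addr, void *dst, size_t len);
    bool (*set_breakpoint)(uint32_t addr);
    bool (*resume)(void);
    bool (*halt)(void);
};

/* Load an application image into target memory and run it up to `entry`. */
static bool load_and_run(const struct target_ops *t,
                         uint32_t load_addr, const void *image, size_t size,
                         uint32_t entry)
{
    if (!t->init())
        return false;
    if (!t->write_mem(load_addr, image, size))
        return false;
    if (!t->set_breakpoint(entry))                        /* stop at the entry point */
        return false;
    return t->resume();
}
```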

Development of a Low-cost Industrial OCR System with an End-to-end Deep Learning Technology

  • Subedi, Bharat;Yunusov, Jahongir;Gaybulayev, Abdulaziz;Kim, Tae-Hyong
    • IEMEK Journal of Embedded Systems and Applications / v.15 no.2 / pp.51-60 / 2020
  • Optical character recognition (OCR) has been studied for decades because it is useful in a wide variety of places. Nowadays, OCR performance has improved significantly thanks to outstanding deep learning technology, so there is an increasing demand for commercial-grade but affordable OCR systems. We have developed a low-cost, high-performance OCR system for industry on the cheapest embedded developer kit that supports GPU acceleration. To achieve high accuracy for industrial use on limited computing resources, we chose a state-of-the-art text recognition algorithm that uses an end-to-end deep learning network as a baseline model. The model was then improved by replacing the feature extraction network with the one best suited to our conditions. Among the various candidate networks, EfficientNet-B3 showed the best performance: excellent recognition accuracy with relatively low memory consumption. In addition, we optimized the model, written in TensorFlow's Python API, using TensorFlow-TensorRT integration and TensorFlow's C++ API.