• Title/Summary/Keyword: in-memory system

A Self-Description File System for NAND Flash Memory (낸드 플래시 메모리를 위한 자기-서술 파일 시스템)

  • Han, Jun-Yeong; Park, Sang-Oh; Kim, Sung-Jo
    • Journal of KIISE: Computing Practices and Letters / v.15 no.2 / pp.98-113 / 2009
  • Conventional file systems for hard disk drives cannot be applied to NAND flash memory, because the physical characteristics of NAND flash memory differ from those of hard disk drives. To address this, various file systems with better reliability and efficiency have recently been developed for NAND flash. However, these file systems incur inherent overhead for updating a file's metadata pages, because they store a file's metadata and data separately. They also have a critical reliability problem: the file system fails when a page holding either file-system metadata or file data fails. In this paper, we propose a self-description page technique and an In-Memory Core File System technique to address these efficiency and reliability problems, and we develop a new file system, SDFS (Self-Description File System), based on them. SDFS can be recovered safely even when some pages fail; compared with YAFFS2, it improves write and read performance by 36% and 15%, respectively, and reduces mounting time to 1/20.
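
  A minimal sketch of what a self-describing page might carry, assuming illustrative field names and a 2 KiB page; this is not the actual SDFS on-flash layout, only the idea that every page holds enough metadata to rebuild file structure by scanning pages alone:

```c
#include <stdint.h>

/* Illustrative self-describing NAND page: the page itself records which
 * file it belongs to, where it sits in that file, and a version number,
 * so no separate metadata page has to be rewritten on every data write,
 * and a scan of surviving pages can rebuild the file system even if
 * some pages fail. Field names and sizes are assumptions. */
struct self_desc_page {
    uint32_t file_id;      /* owning file */
    uint32_t file_offset;  /* logical offset of the payload in the file */
    uint32_t version;      /* write sequence number, newest wins */
    uint32_t crc;          /* checksum so a failed page can be skipped */
    uint8_t  data[2048];   /* page payload (2 KiB NAND page assumed) */
};
```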

Analysis of the Influence of the Conflict Management Policy of the Transactional Memory on the System Performance and Bus Traffic (시스템 성능 및 버스 트래픽에 대한 트랜잭셔널 메모리의 충돌 관리 정책 영향 분석)

  • Kim, Young-Kyu; Moon, Byungin
    • The Journal of Korean Institute of Communications and Information Sciences / v.37B no.11 / pp.1041-1049 / 2012
  • Transactional memory was proposed to solve the problems of conventional lock-based synchronization in shared-memory multiprocessor systems, and various implementation methods for putting high-performance transactional memory to practical use have been studied. However, these studies focus only on commercialization and performance enhancement, and few have analyzed the system overhead of transactional memory according to its conflict management policy. This paper therefore classifies hardware transactional memory, one kind of transactional memory, into four types according to the conflict management policy, and then compares and analyzes their performance and system bus traffic through modeling and simulation. Based on this comparison and analysis, the most effective conflict management policy for hardware transactional memory is presented.
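
  As a rough orientation, a common way to span four conflict-management variants is to cross when conflicts are detected with how they are resolved; the sketch below uses that generic textbook taxonomy as an assumption, since the paper's exact four policies are not spelled out in the abstract:

```c
#include <stdio.h>

/* Generic classification used for illustration: detection time
 * (eager: on each conflicting access, lazy: at commit) crossed with
 * the resolution rule (abort the requesting transaction vs. abort the
 * other transaction). Eager handling wastes less doomed work but can
 * add bus traffic per detected conflict; lazy handling batches the
 * check at commit time. */
enum detect_time  { DETECT_EAGER, DETECT_LAZY };
enum resolve_rule { ABORT_REQUESTER, ABORT_OTHER };

struct policy { enum detect_time detect; enum resolve_rule resolve; };

static const char *describe(struct policy p) {
    if (p.detect == DETECT_EAGER)
        return p.resolve == ABORT_REQUESTER ? "eager detect / requester aborts"
                                            : "eager detect / other aborts";
    return p.resolve == ABORT_REQUESTER ? "lazy detect / requester aborts"
                                        : "lazy detect / other aborts";
}

int main(void) {
    struct policy all[4] = {
        { DETECT_EAGER, ABORT_REQUESTER }, { DETECT_EAGER, ABORT_OTHER },
        { DETECT_LAZY,  ABORT_REQUESTER }, { DETECT_LAZY,  ABORT_OTHER },
    };
    for (int i = 0; i < 4; i++)
        printf("policy %d: %s\n", i, describe(all[i]));
    return 0;
}
```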

A Case Study of a Navigator Optimization Process

  • Cho, Doosan
    • International Journal of Advanced Smart Convergence / v.6 no.1 / pp.26-31 / 2017
  • When a mobile navigation device accesses data randomly, cache memory performance deteriorates rapidly due to low memory access locality. For instance, the GPS (Global Positioning System) software in navigation programs for automobiles or drones uses data from 32 satellites to compute the receiver's current position. This positioning computation is the major part of GPS, accounting for more than 50% of the program's computation. In this task, satellite signals are received in real time and stored in buffer memories; because the necessary data cannot be stored sequentially, it is read and used in random order. These random access patterns degrade memory system performance through low data locality, which makes real-time processing difficult. Improving the low memory access locality inherent in the algorithms of conventional communication applications requires a dedicated optimization technique. In this study, we apply data and memory optimizations to mitigate the locality problem. In our experiments, the case study improves the processing speed of the core computation and improves overall system performance by 14%.
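
  A minimal sketch of one data/memory optimization in this spirit, with structure and field names assumed for illustration rather than taken from the paper: gather the fields the hot positioning loop needs into a compact contiguous buffer, so the solver streams memory sequentially instead of chasing large, randomly ordered records.

```c
/* As-received sample: the field the solver needs is interleaved with
 * unrelated per-sample bookkeeping, so random-order access touches a
 * wide struct per element and thrashes the cache. */
struct sat_sample {
    double pseudorange;
    double sat_x, sat_y, sat_z;
    char   padding[96];          /* unrelated bookkeeping */
};

void gather_ranges(const struct sat_sample *samples,
                   const int *order, int n, double *ranges_out)
{
    /* One pass of random access packs the needed field ... */
    for (int i = 0; i < n; i++)
        ranges_out[i] = samples[order[i]].pseudorange;
    /* ... after which the position solver can read ranges_out[]
     * sequentially, restoring spatial locality in the hot loop. */
}
```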

Efficient Management of PCM-based Swap Systems with a Small Page Size

  • Park, Yunjoo; Bahn, Hyokyung
    • JSTS: Journal of Semiconductor Technology and Science / v.15 no.5 / pp.476-484 / 2015
  • Due to recent advances in non-volatile memory technologies such as PCM, a new memory hierarchy is expected to appear in computer systems. In this paper, we explore the performance of PCM-based swap systems and discuss how such a system can be managed efficiently. Specifically, we introduce three management techniques. First, we show that page fault handling time can be reduced by attaching PCM on DIMM slots, thereby eliminating the software stack overhead of block I/O and the context switch time. Second, we show that it is effective to reduce the page size and turn off the read-ahead option under a PCM swap system where the page fault handling time is sufficiently small. Third, we show that performance is not degraded even with a small DRAM memory under a PCM swap device, which significantly reduces DRAM energy consumption compared to HDD-based swap systems. We expect that these results will lead to a transition from the legacy swap structure of "large memory - slow swap" to a new paradigm of "small memory - fast swap."
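
  On Linux, swap read-ahead is governed by the vm.page-cluster sysctl (2^value pages are swapped in per fault), and writing 0 disables it; the sketch below shows that generic knob for orientation, not the authors' experimental setup:

```c
#include <stdio.h>

/* Disable swap read-ahead: with a swap device as fast as PCM, reading
 * extra pages per fault no longer pays off, which matches the paper's
 * second technique. vm.page-cluster is a standard Linux knob; run as
 * root for the write to succeed. */
int main(void) {
    FILE *f = fopen("/proc/sys/vm/page-cluster", "w");
    if (!f) { perror("page-cluster"); return 1; }
    fputs("0\n", f);          /* 2^0 = 1 page per swap-in, no read-ahead */
    fclose(f);
    return 0;
}
```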

A kernel memory collecting method for efficient disk encryption key search (디스크 암호화 키의 효율적인 탐색을 위한 커널 메모리 수집 방법)

  • Kang, Youngbok; Hwang, Hyunuk; Kim, Kibom; Lee, Kyoungho; Kim, Minsu; Noh, Bongnam
    • Journal of the Korea Institute of Information Security & Cryptology / v.23 no.5 / pp.931-938 / 2013
  • When data is encrypted with disk encryption software, it is hard to recover the original data without the password. The encryption key used by the disk encryption software, however, can be extracted through physical memory analysis. The time needed to search for the encryption key grows with memory size when the search covers the whole of physical memory, yet physical memory also contains a great deal of data unrelated to encryption keys, such as system kernel objects and file data. A method is therefore needed that extracts only the data relevant to the key search. We propose a method that collects only those parts of physical memory in which disk encryption keys can be stored, by analyzing the Windows kernel virtual address space. Experiments show that the proposed method reduces the encryption key search space more than the existing method.
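
  For context, one common way a key search can then run over the collected regions is a sliding-window entropy scan that flags key-like (high-entropy) byte windows; this is a generic heuristic shown for illustration, not the paper's kernel address-space collection method itself:

```c
#include <math.h>
#include <stddef.h>
#include <stdint.h>

/* Shannon entropy (bits per byte) of a byte window. */
static double window_entropy(const uint8_t *p, size_t len)
{
    unsigned counts[256] = {0};
    for (size_t i = 0; i < len; i++) counts[p[i]]++;
    double h = 0.0;
    for (int b = 0; b < 256; b++) {
        if (!counts[b]) continue;
        double pr = (double)counts[b] / (double)len;
        h -= pr * log2(pr);
    }
    return h;
}

/* Report offsets whose 32-byte window looks key-like. A 32-byte window
 * can reach at most log2(32) = 5.0 bits/byte, so a threshold of 4.5
 * catches near-random (key-like) material while skipping text, zeroed
 * pages, and most structured kernel data. */
size_t scan_candidates(const uint8_t *region, size_t size,
                       size_t *out, size_t max_out)
{
    size_t n = 0;
    for (size_t off = 0; n < max_out && off + 32 <= size; off++)
        if (window_entropy(region + off, 32) > 4.5)
            out[n++] = off;
    return n;
}
```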

Analysis of Faults of Large Power System by Memory-Limited Computer (소형전자계산기에 의한 대전력계통의 고장해석)

  • Young Moon Park
    • 전기의세계 / v.21 no.4 / pp.39-44 / 1972
  • This paper describes a new approach for minimizing working memory space, without losing too much computing time, in the analysis of power system faults. The approach decomposes a large power system into several small groups of subsystems, forms the individual bus impedance matrices, stores them in auxiliary memory, and later assembles them back into the original total system algorithmically. It also uses techniques for diagonalizing primitive impedances and for expanding the system bus impedance matrix by adding a fault bus. This scheme ensures a remarkable saving of working storage and allows continuous computation of fault currents and voltages as the fault location is varied.
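
  For orientation, the quantities the assembled bus impedance matrix feeds into are the standard fault relations (textbook formulas, stated here for context rather than reproduced from the paper): for a fault through impedance $Z_F$ at bus $k$,

  $I_F = \dfrac{V_k^{(0)}}{Z_{kk} + Z_F}$,  and the post-fault voltage at any bus $i$ is  $V_i = V_i^{(0)} - Z_{ik}\, I_F$,

  so each fault location needs only the $k$-th column of the bus impedance matrix, which is why building the matrix piecewise and expanding it with a fault bus saves working storage.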

A Dynamic Allocation Scheme for Improving Memory Utilization in Xen (Xen에서 메모리 이용률 향상을 위한 동적 할당 기법)

  • Lee, Kwon-Yong; Park, Sung-Yong
    • Journal of KIISE: Computer Systems and Theory / v.37 no.3 / pp.147-160 / 2010
  • System virtualization has drawn interest for server consolidation, which uses system resources more efficiently. Many studies have sought to utilize a server machine more efficiently through virtualization and to improve the performance of virtualization software, for example by controlling the resource allocation of virtual machines dynamically with a focus on the CPU, or by managing resources across machines using migration. Research on memory management, however, has been largely lacking; in server consolidation, memory is typically allocated statically to each virtual machine. Unfortunately, static allocation leaves a great deal of memory idle and lowers memory utilization, and this underutilization causes further side effects such as load on other system resources and degraded service performance in the virtual machines. In this paper, we propose dynamic memory allocation in Xen that controls the memory allocation of virtual machines to improve utilization without performance degradation. Using an AR model to predict memory usage and an ACO (Ant Colony Optimization) algorithm to optimize memory utilization, the system runs more virtual machines without degrading server performance and achieves 1.4 times better utilization than static allocation.
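
  A minimal sketch of the prediction side only: a first-order autoregressive (AR(1)) one-step forecast of a VM's memory demand from recent usage samples, with the coefficient fit over the sample window. The AR order and window handling are assumptions for illustration, and the ACO-based allocation step the paper pairs it with is omitted.

```c
#include <stddef.h>

/* One-step AR(1) forecast of the next memory-usage sample. */
double predict_next_usage(const double *usage, size_t n)
{
    if (n == 0) return 0.0;
    if (n < 3)  return usage[n - 1];          /* not enough history */

    double mean = 0.0, num = 0.0, den = 0.0;
    for (size_t i = 0; i < n; i++) mean += usage[i] / (double)n;
    for (size_t i = 1; i < n; i++) {
        num += (usage[i - 1] - mean) * (usage[i] - mean);
        den += (usage[i - 1] - mean) * (usage[i - 1] - mean);
    }
    double phi = (den > 0.0) ? num / den : 0.0;   /* AR(1) coefficient */
    return mean + phi * (usage[n - 1] - mean);    /* forecast */
}
```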

Design and Evaluation of Transaction Processing System based on Main Memory Database (주기억장치 데이터베이스 기반 트랜잭션 처리 시스템의 설계 및 평가)

  • 심종익
    • Journal of Korea Multimedia Society / v.2 no.4 / pp.367-377 / 1999
  • The number of database applications that need fast transaction processing is increasing. One way to improve transaction processing performance is to keep the whole database resident in main memory. As semiconductor memory becomes cheaper and chip densities increase, research on improving the transaction throughput of transaction processing systems using main memory databases has begun. This paper presents how to implement a high-performance transaction processing system based on a main memory database, together with a new concurrency control scheme, recovery scheme, and storage structure. The objective of the proposed schemes is to improve transaction processing performance as measured by transaction throughput and response time.
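
  A minimal sketch of the main-memory setting the paper targets, with the table layout and the single global latch as simplifying assumptions rather than the paper's actual concurrency-control or storage design: a transaction becomes a short critical section with no disk I/O on the commit path, so throughput is bounded by CPU and synchronization rather than by the disk.

```c
#include <pthread.h>

/* The whole "table" is an in-memory array; durability would come from
 * a separate logging/recovery scheme, omitted here. */
#define NACCOUNTS 1024

static long balance[NACCOUNTS];
static pthread_mutex_t latch = PTHREAD_MUTEX_INITIALIZER;

int transfer(int from, int to, long amount)
{
    int ok = 0;
    pthread_mutex_lock(&latch);
    if (balance[from] >= amount) {
        balance[from] -= amount;
        balance[to]   += amount;
        ok = 1;                   /* commit: nothing to flush to disk */
    }
    pthread_mutex_unlock(&latch);
    return ok;
}
```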

Improvement Method and Performance Analysis of Shared Memory in Dual Core Embedded Linux system (듀얼코어 임베디드 리눅스 시스템에서 공유 메모리 성능 개선 방안 및 성능 분석)

  • Jung, Ji-Sung; Kim, Chang-Bong
    • Journal of Internet Computing and Services / v.11 no.4 / pp.95-106 / 2010
  • In today's complicated programming environments, multiple processes communicate with one another, sharing resources and information in order to cooperate. The kernel provides IPC (Inter-Process Communication) mechanisms for this, and shared memory is the Linux technique by which many processes can access an identical memory area. In this paper, we propose a method for improving shared memory performance in a dual-core embedded Linux system composed of different cores running different operating systems. We build an MPC2530F (ARM926F+ARM946E) Linux system, measure its performance, and attempt a performance enhancement in each CPU for each process that uses shared memory.
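
  For reference, a minimal POSIX shared memory example in C (the paper's embedded system may instead use the System V shmget interface; the segment name and size here are illustrative): one process creates and maps a named segment, a second process maps the same name and sees the data without an extra copy through the kernel.

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    const char *name = "/ipc_demo";                 /* illustrative name */
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return 1; }
    if (ftruncate(fd, 4096) < 0) { perror("ftruncate"); return 1; }

    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(p, "hello from core 0");   /* another process mapping "/ipc_demo"
                                         reads this directly */
    munmap(p, 4096);
    close(fd);
    /* shm_unlink(name) would remove the segment once it is no longer needed */
    return 0;
}
```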

Parallel Computing Environment for R on Supercomputer Systems (빅데이터 분석을 위한 슈퍼컴퓨터 환경에서 R의 병렬처리)

  • Lee, Sang Yeol; Won, Joong Ho
    • Journal of the Korean Operations Research and Management Science Society / v.39 no.4 / pp.19-31 / 2014
  • We study parallel processing techniques for the R programming language using high performance computing technology. In this study, we used a massively parallel computing system with 25,408 CPU cores. We conducted performance evaluations of a distributed memory configuration using MPI and of a shared memory configuration using OpenMP. Our findings are summarized as follows. First, for some particular algorithms, parallel processing in R is about 150 times faster than serial processing. Second, the distributed memory configuration gets faster as the number of nodes increases, while the shared memory configuration is limited in its performance improvement by the number of CPUs available in a single system.
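
  To make the shared-memory side of that comparison concrete, here is a small OpenMP example in C (rather than R, which the paper itself uses): the parallel loop can use at most the cores of a single node, which is exactly the scaling limit the abstract describes, whereas an MPI version could spread the work across nodes.

```c
#include <omp.h>
#include <stdio.h>

int main(void) {
    const long n = 100000000L;
    double sum = 0.0;

    /* Shared-memory parallelism: threads on one node split the loop. */
    #pragma omp parallel for reduction(+:sum)
    for (long i = 1; i <= n; i++)
        sum += 1.0 / (double)i;       /* embarrassingly parallel work */

    printf("threads=%d  partial harmonic sum=%f\n",
           omp_get_max_threads(), sum);
    return 0;
}
```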