• Title/Summary/Keyword: 제한 메모리 (limited memory)

Search Result 567

A study on Performance improvement of the JCVM through the Smart Card Memory Management (스마트 카드 메모리 관리를 통한 JCVM 성능 향샹)

  • 김민정;조증보;이상용;정민수
    • Proceedings of the Korea Multimedia Society Conference
    • /
    • 2004.05a
    • /
    • pp.354-357
    • /
    • 2004
  • Java is the most useful programming language for supporting multi-application functionality on smart cards. The JCVM (Java Card Virtual Machine) enables programs written in Java to run on a smart card. Current smart cards have small processors, and improving JCVM performance in such a constrained environment is one of the most important issues. Moreover, existing smart cards create and use objects in EEPROM, which has a slow write speed, degrading JCVM performance. In this paper, we propose an algorithm that further improves JCVM performance through smart card memory management, that is, by moving objects from EEPROM to RAM.
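
As a rough illustration of the object-placement idea this abstract describes (not the authors' actual algorithm), the following Python sketch keeps objects in an EEPROM-like store by default and promotes frequently accessed ones into a small RAM pool; `RAM_POOL_SIZE`, the access counters, and the promotion rule are all hypothetical.

```python
# Toy object manager: objects default to a slow EEPROM-like store, and hot
# objects are promoted into a small RAM pool. Sizes and policies are made up.

RAM_POOL_SIZE = 4

class ToyCardHeap:
    def __init__(self):
        self.ram = {}      # object id -> data (fast reads and writes)
        self.eeprom = {}   # object id -> data (slow writes)
        self.hits = {}     # object id -> access count

    def write(self, oid, data):
        self.hits[oid] = self.hits.get(oid, 0) + 1
        if oid in self.ram or len(self.ram) < RAM_POOL_SIZE:
            self.ram[oid] = data           # cheap RAM write
            self.eeprom.pop(oid, None)
        else:
            self.eeprom[oid] = data        # expensive EEPROM write
            self._promote(oid)

    def read(self, oid):
        self.hits[oid] = self.hits.get(oid, 0) + 1
        if oid in self.eeprom:
            self._promote(oid)
        return self.ram.get(oid, self.eeprom.get(oid))

    def _promote(self, oid):
        # Move a hot EEPROM object into RAM, evicting the coldest RAM object.
        if len(self.ram) < RAM_POOL_SIZE:
            self.ram[oid] = self.eeprom.pop(oid)
            return
        coldest = min(self.ram, key=lambda k: self.hits.get(k, 0))
        if self.hits[oid] > self.hits.get(coldest, 0):
            self.eeprom[coldest] = self.ram.pop(coldest)
            self.ram[oid] = self.eeprom.pop(oid)
```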


Introduction to BUILDS (BUILDS 소개)

  • 소프트웨어센터
    • Computational Structural Engineering
    • /
    • v.3 no.4
    • /
    • pp.37-39
    • /
    • 1990
  • The BUILDS program is an Integrated BUILding Design System that makes the entire building design workflow, from structural analysis to drafting, possible even under the limited memory of environments such as MS-DOS; the program is organized into several subsystems according to each task, and these are integrated into a single system.


EAST: An Efficient and Advanced Space-management Technique for Flash Memory using Reallocation Blocks (재할당 블록을 이용한 플래시 메모리를 위한 효율적인 공간 관리 기법)

  • Kwon, Se-Jin;Chung, Tae-Sun
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.13 no.7
    • /
    • pp.476-487
    • /
    • 2007
  • Flash memory offers attractive features for data storage, such as non-volatility, shock resistance, fast access, and low power consumption. However, it has one main drawback: an erase is required before updating the contents. Furthermore, flash memory can only be erased a limited number of times. To overcome these limitations, flash memory needs a software layer called the flash translation layer (FTL). The basic function of the FTL is to translate the logical address used by the file system, such as the file allocation table (FAT), to the physical address in flash memory. In this paper, a new FTL algorithm called the efficient and advanced space-management technique (EAST) is proposed. EAST improves performance by optimizing the number of log blocks, by applying state transitions, and by using reallocation blocks. The experimental results show that EAST outperforms FAST, an enhanced log block scheme, particularly when the usage of flash memory is not full.
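
The basic FTL function this abstract refers to, translating logical addresses to physical addresses with out-of-place updates, can be sketched as below. This is a generic page-mapping toy, not the EAST algorithm; the block and page counts are invented for illustration.

```python
# Toy page-mapping FTL: writes never overwrite a programmed page; instead the
# logical page is remapped to a fresh physical page and the old one goes stale.

PAGES_PER_BLOCK = 4
NUM_BLOCKS = 8

class ToyFTL:
    def __init__(self):
        self.mapping = {}                  # logical page -> (block, page)
        self.free = [(b, p) for b in range(NUM_BLOCKS)
                             for p in range(PAGES_PER_BLOCK)]
        self.flash = {}                    # (block, page) -> data

    def write(self, lpn, data):
        # Out-of-place update: take a free physical page and remap.
        if not self.free:
            raise RuntimeError("no free pages; a real FTL would garbage-collect here")
        ppn = self.free.pop(0)
        self.flash[ppn] = data
        self.mapping[lpn] = ppn            # the old physical page becomes stale

    def read(self, lpn):
        return self.flash[self.mapping[lpn]]

ftl = ToyFTL()
ftl.write(0, "v1")
ftl.write(0, "v2")                         # the update goes to a new page
print(ftl.read(0))                         # -> "v2"
```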

Hybrid Main Memory Systems Using Next Generation Memories Based on their Access Characteristics (차세대 메모리의 접근 특성에 기반한 하이브리드 메인 메모리 시스템)

  • Kim, Hyojeen;Noh, Sam H.
    • Journal of KIISE
    • /
    • v.42 no.2
    • /
    • pp.183-189
    • /
    • 2015
  • Recently, computer systems have encountered difficulties in making further progress due to the technical limitations of DRAM-based main memory. This has motivated the development of next-generation memory technologies that offer high density and non-volatility. However, these new memory technologies also have their own intrinsic limitations, making it difficult to use them as main memory today. To overcome these problems, we propose a hybrid main memory system, namely HyMN, which exploits the merits of next-generation memory technologies by combining two types of memory: Write-Affable RAM (WAM) and Read-Affable RAM (ReAM). In doing so, we analyze the WAM size at which HyMN avoids performance degradation. Further, we show that the execution time of HyMN, which provides the additional benefit of durability against unexpected blackouts, is almost comparable to that of legacy DRAM systems under normal operation.
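
A hedged sketch of the placement intuition behind such a hybrid WAM/ReAM memory: write-heavy pages go to the write-friendly device and read-mostly pages to the dense read-friendly one. The threshold and counters below are illustrative assumptions, not the paper's policy.

```python
# Classify pages by their observed write ratio and place them on the memory
# type that suits that ratio; the 0.3 threshold is purely illustrative.

WRITE_RATIO_THRESHOLD = 0.3

def place_pages(page_stats):
    """page_stats: {page_id: (reads, writes)} -> {page_id: 'WAM' or 'ReAM'}"""
    placement = {}
    for page, (reads, writes) in page_stats.items():
        total = reads + writes
        write_ratio = writes / total if total else 0.0
        placement[page] = "WAM" if write_ratio > WRITE_RATIO_THRESHOLD else "ReAM"
    return placement

print(place_pages({1: (100, 5), 2: (10, 40)}))   # {1: 'ReAM', 2: 'WAM'}
```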

Page Replacement Policy for Memory Load Adaption to Reduce Storage Writes and Page Faults (스토리지 쓰기량과 페이지 폴트를 줄이는 메모리 부하 적응형 페이지 교체 정책)

  • Bahn, Hyokyung;Park, Yunjoo
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.22 no.6
    • /
    • pp.57-62
    • /
    • 2022
  • Recently, fast storage media such as phase-change memory (PCM) have emerged, and memory management policies designed for slow disk storage need to be revisited. In this paper, we propose a new page replacement policy that uses PCM as the swap device of a virtual memory system. The proposed policy aims at reducing write traffic to the swap device as well as reducing the number of page faults, the goal pursued by traditional page replacement policies. This is because a write operation in PCM is slow and PCM has limited write endurance. Specifically, the proposed policy focuses on reducing page faults when the memory load of the system is high, but aims at reducing write traffic to storage when free memory space is sufficient. Simulation experiments with various memory reference traces show that the proposed policy reduces write traffic to PCM without performance degradation.
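
A minimal sketch of the adaptive idea described above, assuming a simple LRU list with dirty bits: when memory load is low the victim is the least recently used clean page (avoiding a write to the PCM swap device), and under high load it falls back to plain LRU to keep page faults low. The threshold and interface are hypothetical, not the paper's exact policy.

```python
from collections import OrderedDict

class AdaptiveReplacer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()          # page -> dirty flag, kept in LRU order

    def access(self, page, is_write, memory_load):
        """memory_load in [0, 1]: overall memory pressure reported by the system."""
        if page in self.pages:
            dirty = self.pages.pop(page) or is_write
            self.pages[page] = dirty        # move to the MRU position
            return None                     # hit, nothing evicted
        victim = None
        if len(self.pages) >= self.capacity:
            victim = self._choose_victim(memory_load)
            self.pages.pop(victim)
        self.pages[page] = is_write
        return victim

    def _choose_victim(self, memory_load):
        if memory_load < 0.9:
            # Plenty of memory: evict the LRU clean page, avoiding a PCM write.
            for page, dirty in self.pages.items():
                if not dirty:
                    return page
        # High load (or no clean page): plain LRU to minimize page faults.
        return next(iter(self.pages))

replacer = AdaptiveReplacer(capacity=3)
for page, is_write, load in [(1, False, 0.2), (2, True, 0.2), (3, False, 0.2), (4, True, 0.2)]:
    victim = replacer.access(page, is_write, load)
print(victim)   # -> 1, the LRU clean page, so no swap write is needed
```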

Different Load Shedding using utilization of Spatial over Data Stream (데이터 스트림에서 공간의 이용도를 이용한 차등적 부하제한 기법)

  • Kim, Ho;Baek, Sung-Ha;Lee, Dong-Wook;Shin, Soong-Sun;Bae, Hae-Young
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2009.04a
    • /
    • pp.340-343
    • /
    • 2009
  • In a u-GIS environment, the spatio-temporal data collected from GeoSensors has the characteristics of a data stream. A data stream arrives continuously at varying input rates, and the size of the data is also variable. For this reason, systems with limited memory and processing capacity suffer from overload. To solve this, techniques that discard excess data to prevent memory overflow have been studied. Spatial queries are operations based on spatial extents and location values, so the accuracy of a spatial query is guaranteed by its spatial and location information. However, the existing techniques, random load shedding and semantic load shedding, delete tuples without considering the spatial and location values required by spatial queries, so the accuracy of spatial queries decreases. In this paper, we study a load shedding technique that applies differential drop ratios based on spatial utilization. This technique raises the importance level of a region according to how strongly the regions of registered spatial queries overlap, determines the importance of each spatio-temporal tuple accordingly, and drops tuples differentially according to the ratio assigned to each importance level. As a result, the proposed technique recovered query processing speed quickly through a somewhat higher drop rate than existing techniques, and reduced the error rate by retaining important data as much as possible.
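
A small sketch of differential load shedding under the stated idea: a tuple's importance level is the number of registered spatial query regions covering it, and each level gets its own drop ratio. The rectangle model and the concrete ratios are assumptions for illustration only.

```python
import random

DROP_RATIO_BY_LEVEL = {0: 0.8, 1: 0.4, 2: 0.1}   # more overlap -> lower drop ratio

def importance_level(point, query_regions):
    """Importance = number of registered query rectangles covering the point (capped)."""
    x, y = point
    overlaps = sum(1 for (x1, y1, x2, y2) in query_regions
                   if x1 <= x <= x2 and y1 <= y <= y2)
    return min(overlaps, max(DROP_RATIO_BY_LEVEL))

def shed(stream, query_regions):
    """Yield only the tuples that survive differential shedding."""
    for point in stream:
        level = importance_level(point, query_regions)
        if random.random() >= DROP_RATIO_BY_LEVEL[level]:
            yield point

regions = [(0, 0, 10, 10), (5, 5, 15, 15)]        # two registered spatial queries
stream = [(1, 1), (7, 7), (20, 20)] * 100
survivors = list(shed(stream, regions))
print(len(survivors), "tuples kept out of", len(stream))
```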

Block Associativity Limit Scheme for Efficient Flash Translation Layer (효율적인 플래시 변환 계층을 위한 블록 연관성 제한 기법)

  • Ok, Dong-Seok;Lee, Tae-Hoon;Chung, Ki-Dong
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.16 no.6
    • /
    • pp.673-677
    • /
    • 2010
  • Recently, NAND flash memory has been widely used in embedded systems, personal computers, and server systems because of its attractive features, such as non-volatility, fast access speed, shock resistance, and low power consumption. Due to its hardware characteristics, specifically its 'erase-before-write' property, a Flash Translation Layer (FTL) is required to use flash memory like a hard disk drive. Many FTL schemes have been proposed, but conventional schemes suffer from block thrashing and the block associativity problem. The KAST scheme tried to solve these problems by limiting the number of associations between data blocks and log blocks to K, but it still suffers from block thrashing under random-access I/O patterns. In this paper, we propose a new FTL scheme, UDA-LBAST. Like KAST, the proposed scheme limits log block associativity, but it does not limit data block associativity. As a result, it minimizes the cost of merge operations and further reduces merge costs by using a new block reclaim scheme, log block garbage collection.
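
A toy sketch of the "limit log-block associativity" idea that KAST-style schemes build on: each log block may absorb updates from at most K distinct data blocks before a merge is triggered. The sizes, the value of K, and the placeholder merge are illustrative, not the UDA-LBAST design.

```python
# Log-block FTL toy: updates are redirected to log blocks, and each log block
# is associated with at most K data blocks; otherwise a (placeholder) merge runs.

PAGES_PER_BLOCK = 4
K = 2                                   # max data blocks associated with one log block

class ToyLogBlockFTL:
    def __init__(self):
        self.log_blocks = []            # each: {"assoc": set of data blocks, "pages": []}

    def update(self, data_block, page, data):
        for log in self.log_blocks:
            if (data_block in log["assoc"] or len(log["assoc"]) < K) \
                    and len(log["pages"]) < PAGES_PER_BLOCK:
                log["assoc"].add(data_block)
                log["pages"].append((data_block, page, data))
                return
        # No usable log block: merge one (placeholder) and allocate a fresh one.
        if self.log_blocks:
            merged = self.log_blocks.pop(0)
            print("merge log block associated with data blocks", merged["assoc"])
        self.log_blocks.append({"assoc": {data_block},
                                "pages": [(data_block, page, data)]})

ftl = ToyLogBlockFTL()
for i in range(6):
    ftl.update(data_block=i % 3, page=0, data=f"v{i}")
```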

An Efficient Graph Algorithm Processing Scheme using GPUs with Limited Memory (제한된 메모리를 가진 GPU를 이용한 효율적인 그래프 알고리즘 처리 기법)

  • Song, Sang-ho;Lee, Hyeon-byeong;Choi, Do-jin;Lim, Jong-tae;Bok, Kyoung-soo;Yoo, Jae-soo
    • The Journal of the Korea Contents Association
    • /
    • v.22 no.8
    • /
    • pp.81-93
    • /
    • 2022
  • Recently, research on processing large-scale graphs using GPUs has been actively conducted. To process a large-scale graph on a GPU with limited memory, the graph must be divided into subgraphs, which are then processed according to a schedule. In this paper, we propose an efficient graph algorithm processing scheme for GPU environments with limited memory and evaluate its performance. The proposed scheme consists of a bulk graph segmentation method and a differential subgraph scheduling method. The bulk graph segmentation method determines how a large-scale graph is divided into subgraphs so that they can be processed efficiently by the GPU. The differential subgraph scheduling method schedules the subgraphs processed by the GPU so as to reduce redundant transmission of repeatedly used data between the host and the GPU. Various performance evaluations demonstrate the superiority of the proposed scheme.
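
A hedged sketch of the scheduling intuition: subgraphs already resident in GPU memory are not re-transferred, so orderings that process shared subgraphs back to back reduce host-GPU traffic. The LRU-style residency model and slot count are assumptions, not the paper's scheduler.

```python
# Count host->GPU transfers for a sequence of iterations, each needing a set of
# subgraphs, given a limited number of GPU-resident subgraph slots.

GPU_SLOTS = 2                              # how many subgraphs fit in GPU memory

def schedule(iterations, gpu_resident=None):
    """iterations: list of lists of subgraph ids needed per iteration."""
    gpu_resident = list(gpu_resident or [])
    transfers = 0
    for needed in iterations:
        for sub in needed:
            if sub in gpu_resident:
                gpu_resident.remove(sub)   # already on the GPU: no transfer
            else:
                transfers += 1             # host -> GPU copy
                if len(gpu_resident) >= GPU_SLOTS:
                    gpu_resident.pop(0)    # evict the least recently used subgraph
            gpu_resident.append(sub)       # mark as most recently used
    return transfers

# Processing iterations that share subgraphs back to back avoids re-transfers:
print(schedule([[0, 1], [0, 1], [2, 3]]))  # -> 4 transfers (reuse of 0 and 1)
print(schedule([[0, 1], [2, 3], [0, 1]]))  # -> 6 transfers (no reuse possible)
```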

Face detect hardware implementation for embedded system (임베디드 시스템 적용을 위한 얼굴검출 하드웨어 설계)

  • Kim, Yoon-Gu;Jeong, Yong-Jin
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.44 no.9
    • /
    • pp.40-47
    • /
    • 2007
  • For image processing hardware that includes a face detection engine, efficient organization of external and internal memories is a crucial point because a large amount of memory is required to store various signal-processing filters and incoming images. In this paper, we modify a filter-based face detection algorithm for an efficient hardware design. In the hardware, several memory design techniques are presented for efficient handling of image data: avoiding re-accesses with minimized internal memory usage, keeping frequently accessed data resident, and sequential memory accessing. The hardware, which can process 25 image frames per second with 40KB of internal memory, was verified using an ARM (S3C2440A) processor and a Virtex4 FPGA, and it is being fabricated as an ASIC chip using Samsung 0.18um CMOS technology.
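
As a software-level illustration of the "re-access avoidance with minimized internal memory" technique mentioned above, the sketch below uses a line buffer so that each image row is fetched from the (simulated) external memory exactly once and in sequential order; the 3x3 window and the row-burst model are assumptions, not the paper's hardware.

```python
# Line buffer: keep only the rows needed by the current sliding window on-chip,
# so every pixel is read from external memory exactly once, sequentially.

WINDOW = 3                                   # 3x3 filter window

def external_fetch(image, row):
    """Stand-in for a slow, sequential external-memory burst read of one row."""
    return image[row]

def sliding_windows(image):
    height = len(image)
    line_buffer = []                         # holds at most WINDOW rows on-chip
    for row in range(height):
        line_buffer.append(external_fetch(image, row))
        if len(line_buffer) > WINDOW:
            line_buffer.pop(0)               # the oldest row is no longer needed
        if len(line_buffer) == WINDOW:
            width = len(line_buffer[0])
            for col in range(width - WINDOW + 1):
                yield [r[col:col + WINDOW] for r in line_buffer]

image = [[r * 10 + c for c in range(5)] for r in range(5)]
print(sum(1 for _ in sliding_windows(image)))   # -> 9 windows for a 5x5 image
```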

Efficient Memory Update Module for Video Object Segmentation (동영상 물체 분할을 위한 효율적인 메모리 업데이트 모듈)

  • Jo, Junho;Cho, Nam Ik
    • Journal of Broadcast Engineering
    • /
    • v.27 no.4
    • /
    • pp.561-568
    • /
    • 2022
  • Most deep learning-based video object segmentation methods perform segmentation using past prediction information stored in an external memory. In general, the more past information is stored in the memory, the better the results, since evidence for various changes in the objects of interest accumulates. However, not all information can be stored in the memory due to hardware limitations, resulting in performance degradation. In this paper, we propose a method of storing new information in the external memory without additional memory allocation. Specifically, after calculating the attention score between the existing memory and the information to be newly stored, the new information is added to the corresponding memory slots according to each score. In this way, the method works robustly because the attention mechanism reflects object changes well without using additional memory. In addition, the update rate is adaptively determined according to the accumulated number of matches in the memory, so that frequently updated samples store more information and maintain reliable information.
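
A rough sketch of the attention-weighted update described in the abstract, assuming a simple dot-product attention and a per-slot match counter that damps the update rate of frequently matched slots; the dimensions and the exact blending rule are illustrative, not the authors' module.

```python
import numpy as np

def update_memory(memory, new_feat, match_counts, base_rate=0.5):
    """memory: (N, D) slots, new_feat: (D,), match_counts: (N,) int counters."""
    # Attention score between the new feature and every stored slot.
    scores = memory @ new_feat
    attn = np.exp(scores - scores.max())
    attn /= attn.sum()                                    # softmax over slots

    # Slots that have been matched often get a smaller update rate.
    rate = base_rate / (1.0 + match_counts)

    # Blend the new information into each slot, weighted by attention and rate,
    # so no additional memory slot is allocated.
    memory += (attn * rate)[:, None] * (new_feat[None, :] - memory)
    match_counts += (attn > 1.0 / len(attn)).astype(int)  # count strong matches
    return memory, match_counts

memory = np.random.randn(4, 8)
counts = np.zeros(4, dtype=int)
memory, counts = update_memory(memory, np.random.randn(8), counts)
print(counts)   # slots that matched strongly will update more slowly next time
```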