• Title/Summary/Keyword: Cache Technique

Management Technique of Buffer Cache for Rendering Systems (렌더링 시스템을 위한 버퍼캐쉬 관리기법)

  • Shin, Donghee; Bahn, Hyokyung
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.18 no.5 / pp.155-160 / 2018
  • In this paper, we found that the buffer cache in general-purpose systems does not perform well for rendering software, and presented a new buffer cache management scheme that resolves this problem. To do so, we collected various file I/O traces of rendering software and analyzed their characteristics. From this analysis, we observed that file I/Os in rendering consist of long loops, short loops, random accesses, and write-once accesses. Based on this observation, we presented a buffer cache management scheme that allocates cache space to each access type and manages each partition appropriately, thereby improving buffer cache performance by 19% on average and up to 55%.
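
To make the partitioned-cache idea above concrete, here is a minimal sketch in Python. It is an illustration only, not the authors' implementation: the class name, the per-type partition sizes, and the use of plain LRU within each partition are all assumptions; the point is simply that each access type gets its own cache space and eviction happens within that space.

```python
from collections import OrderedDict

class PartitionedBufferCache:
    """Sketch of a buffer cache split by access type (hypothetical sizes/policies)."""

    def __init__(self, capacity_per_type):
        # One LRU-ordered partition per access type, each with its own capacity.
        self.partitions = {t: OrderedDict() for t in capacity_per_type}
        self.capacity = dict(capacity_per_type)

    def access(self, block_id, access_type, data=None):
        part = self.partitions[access_type]
        if block_id in part:                       # cache hit: refresh recency
            part.move_to_end(block_id)
            return part[block_id]
        if len(part) >= self.capacity[access_type]:
            part.popitem(last=False)               # evict LRU block of this type only
        part[block_id] = data                      # miss: fetch and insert
        return data

# Example: illustrative partition sizes only (not taken from the paper).
cache = PartitionedBufferCache(
    {"long_loop": 64, "short_loop": 256, "random": 256, "write_once": 32})
cache.access(block_id=17, access_type="short_loop", data=b"...")
```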

A Study on Location-based Routing Technique for Improving the Performance of P2P in MANET (MANET에서 P2P 성능 향상을 위한 위치기반 라우팅 기법에 관한 연구)

  • Yang, Hwanseok
    • Journal of Korea Society of Digital Industry and Information Management / v.11 no.2 / pp.37-45 / 2015
  • MANET technology and P2P services have spread very widely, and many application services that integrate P2P services into MANET are being developed actively. P2P networks are commonly used because they use network bandwidth efficiently and allow rapid information exchange. In a P2P network there is no central infrastructure managing each node; every node acts as both sender and receiver, a structure very similar to that of a MANET. However, it is difficult to provide reliable P2P service because of the high mobility of mobile nodes. In this paper, we propose a location-based routing technique to provide efficient file sharing and management between nodes. After the network is divided into areas of a certain size, a GMN that manages each group is elected. Each node is dynamically assigned a 12-bit identifier, and routing uses the location information embedded in the identifier. The GMN maintains a ZGT to manage the group nodes and the distributed cache information. A distributed cache technique is applied so that shared files can be retrieved rapidly from each node. The excellent performance of the proposed technique was confirmed through experiments.
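
As a rough illustration of a 12-bit, location-derived identifier, the sketch below packs zone coordinates and a per-zone sequence number into 12 bits. The bit layout, field widths, and function names are assumptions made for this example; the abstract does not specify how the identifier is composed.

```python
def make_node_id(zone_x, zone_y, seq):
    """Pack a node identifier into 12 bits: 4-bit zone x, 4-bit zone y,
    4-bit per-zone sequence number (this layout is an assumption)."""
    assert 0 <= zone_x < 16 and 0 <= zone_y < 16 and 0 <= seq < 16
    return (zone_x << 8) | (zone_y << 4) | seq

def zone_of(node_id):
    """Recover the zone coordinates embedded in the identifier,
    so routing can forward packets toward the destination's zone."""
    return (node_id >> 8) & 0xF, (node_id >> 4) & 0xF

node = make_node_id(zone_x=3, zone_y=7, seq=2)   # -> 0x372
print(f"id=0x{node:03x}, zone={zone_of(node)}")
```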

A Study on Caching Management Technique in Mobile Ad-hoc Network (Mobile Ad-hoc Network에서 캐싱 관리 기법에 관한 연구)

  • Yang, Hwan Seok; Yoo, Seung Jae
    • Convergence Security Journal / v.12 no.4 / pp.91-96 / 2012
  • MANET is one of the most actively developed techniques among the many fields of wireless networking. The nodes that make up a MANET transmit data over multi-hop wireless connections. Caching is a technique that can improve data access capacity and the availability of nodes. Previous studies have addressed dynamic routing protocols to improve multi-hop connections between moving nodes, but managing and maintaining effective cache information is not easy because the nodes move. In this study, we propose a cluster-based caching scheme that reduces the overhead incurred when a moving node discovers the cache holding the desired information, and HLP is used to maintain an effective cache table in each cluster head. The efficiency of the proposed technique was confirmed by experiment.
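
A minimal sketch of the cluster-head cache table idea follows: members register cached items with their cluster head, and a requester asks the head where an item is cached before falling back to the origin. The structure and names are assumptions for illustration; the HLP-based table maintenance mentioned above is not modeled.

```python
class ClusterHead:
    """Cluster head keeping a cache table: data item -> member node holding a copy."""

    def __init__(self):
        self.cache_table = {}          # item_id -> node_id of the caching member

    def register(self, item_id, node_id):
        self.cache_table[item_id] = node_id   # member reports a newly cached item

    def lookup(self, item_id):
        return self.cache_table.get(item_id)  # None means: fetch from the origin

    def node_left(self, node_id):
        # Drop stale entries when a member moves out of the cluster.
        self.cache_table = {k: v for k, v in self.cache_table.items() if v != node_id}

head = ClusterHead()
head.register("item_42", node_id=7)
print(head.lookup("item_42"))   # -> 7 (ask node 7 instead of the distant origin)
head.node_left(7)
print(head.lookup("item_42"))   # -> None (must fetch from the origin server)
```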

Document Replacement Policy by Site Popularity in Web Cache (웹 캐시에서 사이트의 인기도에 의한 도큐먼트 교체정책)

  • Yoo, Hang-Suk; Jang, Tea-Mu
    • Journal of Korea Game Society / v.3 no.1 / pp.67-73 / 2003
  • Most web caches temporarily store documents on a per-document basis. When a requested document exists within the cache, the web cache sends that document to the corresponding user. On the contrary, when the document is not in the cache, the web cache requests it from the related server, copies it into the cache, and then returns it to the user. Here, the web cache uses a replacement policy to exchange existing documents for new ones when the capacity of the cache is exceeded. Typical replacement policies include document-based LRU and LFU, and various other replacement policies are used to replace the documents within the cache effectively. However, these replacement policies consider only the time and frequency of document requests, not the popularity of each web site. Building on replacement policies based on frequently requested documents, this paper presents a document replacement policy that also considers the popularity of each web site; the policy is suitable for recent network environments, enhances the hit ratio of the cache, and manages the contents of the cache efficiently by replacing documents that are requested only intermittently with new ones.
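
One way to picture a replacement policy that mixes site popularity with recency and frequency is the sketch below. The scoring formula and the weights are illustrative assumptions, not the policy proposed in the paper: each document's value combines how often its site is requested with how often and how recently the document itself was requested, and the lowest-valued document is replaced.

```python
import time
from collections import defaultdict

class SitePopularityCache:
    """Sketch: evict the document with the lowest combined score of
    site popularity, document frequency, and recency (weights are assumptions)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.docs = {}                       # url -> (site, last_access, freq)
        self.site_hits = defaultdict(int)    # site -> total requests (popularity)

    def _score(self, url, now):
        site, last, freq = self.docs[url]
        recency = 1.0 / (1.0 + now - last)
        return 0.5 * self.site_hits[site] + 0.3 * freq + 0.2 * recency

    def access(self, url, site):
        now = time.time()
        self.site_hits[site] += 1
        if url in self.docs:
            _, _, freq = self.docs[url]
            self.docs[url] = (site, now, freq + 1)
            return "hit"
        if len(self.docs) >= self.capacity:
            victim = min(self.docs, key=lambda u: self._score(u, now))
            del self.docs[victim]            # replace the least valuable document
        self.docs[url] = (site, now, 1)
        return "miss"
```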

Energy-aware Instruction Cache Design using Backward Branch Information for Embedded Processors (임베디드 시스템에서 후방 분기 명령어 정보를 이용한 저전력 명령어 캐쉬 설계 기법)

  • Yang, Na-Ra; Kim, Jong-Myon; Kim, Cheol-Hong
    • Journal of the Korea Society of Computer and Information / v.13 no.6 / pp.33-39 / 2008
  • Energy efficiency should be considered together with performance when designing embedded processors. This paper proposes a new energy-aware instruction cache design that uses backward branch information to reduce the energy consumption of an embedded processor, since instruction caches consume a significant fraction of the on-chip energy. The proposed instruction cache is composed of two caches: a large main instruction cache and a small loop instruction cache. The proposed technique enables selective access between the main instruction cache and the loop instruction cache, reducing the number of accesses to the main instruction cache and leading to good energy efficiency. Analysis results show that the proposed instruction cache reduces the energy consumption by 20% on average, compared to the traditional instruction cache.
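
The selective access between the small loop cache and the main instruction cache can be sketched as follows. Treating a taken backward branch as the signal that a loop is executing, and the simple fill policy shown, are assumptions made for this illustration; a real design would also handle loop exits, tags, and fills in hardware.

```python
class LoopCacheFetch:
    """Sketch: after a taken backward branch (likely a loop), fetch from the
    small loop cache when it holds the instruction, else from the main I-cache."""

    def __init__(self, loop_cache_size=64):
        self.loop_cache = {}                 # pc -> instruction (small, low-energy)
        self.loop_cache_size = loop_cache_size
        self.in_loop_mode = False

    def on_branch(self, pc, target, taken):
        # A taken branch to a lower address is a backward branch: assume a loop.
        self.in_loop_mode = taken and target < pc

    def fetch(self, pc, main_icache_read):
        if self.in_loop_mode and pc in self.loop_cache:
            return self.loop_cache[pc]       # loop-cache hit: main I-cache not accessed
        inst = main_icache_read(pc)          # otherwise pay the main I-cache access
        if self.in_loop_mode and len(self.loop_cache) < self.loop_cache_size:
            self.loop_cache[pc] = inst       # capture the loop body for later iterations
        return inst
```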

Asynchronous Cache Consistency Technique (비동기적 캐쉬 일관성 유지 기법)

  • 이찬섭
    • Journal of the Korea Society of Computer and Information / v.9 no.2 / pp.33-40 / 2004
  • As the client/server model has become widespread with advances in computing performance and information communication technology, local caches are used for scalability, fast response time, and reduced use of the limited bandwidth. Consistency of the cached data between server and client then becomes necessary, and many techniques have been proposed for this. This paper improves on the existing update-frequency cache consistency scheme. Existing consistency techniques have the disadvantage of slow response times, because delaying the write-intention declaration increases the number of synchronous declarations or abort steps. To solve this problem, the technique proposed in this paper refers to the update time of an object when a page is requested or when an update operation completes. It therefore has the advantage of fast response, because a write-intention declaration or update can be performed selectively and asynchronously according to sel_mode when an update operation occurs, the number of abort steps decreases, and the selection becomes clearer.
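
A minimal sketch of the asynchronous write-intention idea, under the assumption that the client applies an update locally and queues the declaration to the server in the background rather than blocking on it; the class and method names are hypothetical, and the paper's selective sel_mode handling is not modeled.

```python
import queue
import threading

class AsyncConsistencyClient:
    """Sketch: declare write intentions to the server asynchronously instead of
    blocking the update (names and the queueing scheme are assumptions)."""

    def __init__(self, server):
        self.server = server
        self.pending = queue.Queue()
        threading.Thread(target=self._sender, daemon=True).start()

    def update(self, obj_id, value, local_cache):
        local_cache[obj_id] = value            # apply the update locally right away
        self.pending.put(obj_id)               # declaration is sent in the background

    def _sender(self):
        while True:
            obj_id = self.pending.get()
            self.server.declare_write_intention(obj_id)   # may trigger invalidations
```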

Efficient On-Chip Idle Cache Utilization Technique in Chip Multi-Processor Architecture (칩 멀티 프로세서 구조에서 온칩 유휴 캐시의 효과적인 활용 방안)

  • Kwak, Jong Wook
    • Journal of the Korea Society of Computer and Information / v.18 no.10 / pp.13-21 / 2013
  • Recently, although the number of cores on a chip multi-processor keeps increasing, multi-programming and multi-threaded programming techniques that utilize all the cores are still insufficient. Therefore, there inevitably exist some idle cores which are not working. This results in a waste of the caches dedicated to those idle cores, the so-called idle caches. In this research, we propose a methodology to exploit idle caches effectively as victim caches of the on-chip memory resources. In simulation results, we achieved 19.4% and 10.2% IPC improvement in 4-core and 16-core configurations, respectively, compared to the previous technique.
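
The reuse of an idle core's cache as a victim cache can be sketched as follows: blocks evicted from a busy core's cache are spilled into the otherwise unused idle cache and checked there on a later miss before going to memory. Capacities, names, and the LRU ordering are assumptions for illustration.

```python
from collections import OrderedDict

class VictimInIdleCache:
    """Sketch: use an idle core's cache as a victim cache for a busy core."""

    def __init__(self, own_capacity, idle_capacity):
        self.own = OrderedDict()     # busy core's private cache (LRU)
        self.idle = OrderedDict()    # idle core's cache, reused as victim storage
        self.own_cap, self.idle_cap = own_capacity, idle_capacity

    def access(self, addr, fetch_from_memory):
        if addr in self.own:                     # primary hit
            self.own.move_to_end(addr)
            return self.own[addr]
        if addr in self.idle:                    # victim hit: cheaper than memory
            data = self.idle.pop(addr)
        else:
            data = fetch_from_memory(addr)       # miss in both: go to memory
        if len(self.own) >= self.own_cap:        # make room, spilling the victim
            v_addr, v_data = self.own.popitem(last=False)
            if len(self.idle) >= self.idle_cap:
                self.idle.popitem(last=False)
            self.idle[v_addr] = v_data
        self.own[addr] = data
        return data
```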

Document Replacement Policy by Web Site Popularity (웹 사이트의 인기도에 의한 도큐먼트 교체정책)

  • Yoo, Hang-Suk; Chang, Tae-Mu
    • Journal of the Korea Society of Computer and Information / v.13 no.1 / pp.227-232 / 2008
  • General web caches temporarily store documents on a per-document basis. When a requested document exists within the cache, the web cache sends that document to the corresponding user. On the contrary, when the document is not in the cache, the web cache requests it from the related server, copies it into the cache, and then returns it to the user. Here, the web cache uses a replacement policy to exchange existing documents for new ones when the capacity of the cache is exceeded. Typical replacement policies include document-based LRU and LFU, and various other replacement policies are used to replace the documents within the cache effectively. However, these replacement policies consider only the time and frequency of document requests, not the popularity of each web site. Building on replacement policies based on frequently requested documents, this paper presents a document replacement policy that also considers the popularity of each web site; the policy is suitable for recent network environments, enhances the hit ratio of the cache, and manages the contents of the cache efficiently by replacing documents that are requested only intermittently with new ones.

Performance Evaluation of SSD Cache Based on DM-Cache (DM-Cache를 이용해 구현한 SSD 캐시의 성능 평가)

  • Lee, Jaemyoun; Kang, Kyungtae
    • KIPS Transactions on Computer and Communication Systems / v.3 no.11 / pp.409-418 / 2014
  • The amount of data located in storage servers has dramatically increased with the growth of cloud and social networking services. Storage systems with very large capacities may suffer from poor reliability and long latency, problems which can be addressed by the use of a hybrid disk, in which mechanical and flash memory storage are combined. Linux supports SSD (solid-state disk) caching through the DM-cache utility. We assess the limitations of DM-cache by evaluating its performance in diverse environments, and identify problems with the caching policy that it applies in response to various commands. This policy is effective in reducing latency when Linux is running in native mode; but when Linux is installed as a guest operating system on a virtual machine, the overhead incurred by caching actually reduces performance.

Management Technique of Energy-Efficient Cache and Memory for Mobile IoT Devices (모바일 사물인터넷 디바이스를 위한 에너지 효율적인 캐시 및 메모리 관리 기법)

  • Bahn, Hyokyung
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.21 no.2 / pp.27-32 / 2021
  • This paper proposes an energy-efficient cache and memory management scheme for next-generation IoT devices. The proposed scheme adopts low-power phase-change memory (PCM) as the main memory of IoT devices and aims at minimizing the write traffic to PCM, which is vulnerable to write operations. Specifically, when a cache block of the last-level cache memory is flushed to main memory, the cache block that causes fewer writes to PCM is preferentially replaced, by tracking the modifications of each cache line that constitutes the cache block. In addition, by considering the reference bit of the cache block and the dirty bits of the cache lines, our scheme reduces the energy consumption without degrading memory system performance. Through simulations using SPEC benchmarks, it is shown that the proposed scheme reduces the write traffic to PCM by 34.6% on average and the power consumption by 28.9%, without memory performance degradation.
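
The victim-selection idea above, preferring to evict blocks whose write-back causes fewer PCM writes while protecting recently referenced blocks, can be sketched as below. The cost formula, the reference-bit penalty, and all names are assumptions made for this illustration rather than the authors' exact policy.

```python
class PcmAwareCacheSet:
    """Sketch of last-level-cache victim selection that favors blocks
    causing fewer PCM writes (fewer dirty lines), unless recently referenced."""

    def __init__(self, lines_per_block=8):
        self.lines_per_block = lines_per_block
        self.blocks = {}   # tag -> {"ref": bool, "dirty_lines": set of line indices}

    def touch(self, tag, line_index=None, is_write=False):
        blk = self.blocks.setdefault(tag, {"ref": False, "dirty_lines": set()})
        blk["ref"] = True
        if is_write and line_index is not None:
            blk["dirty_lines"].add(line_index)   # track which lines will hit PCM

    def pick_victim(self):
        # Cost of evicting a block ~ number of dirty lines written back to PCM;
        # a set reference bit adds a penalty so recently used blocks survive.
        def cost(tag):
            blk = self.blocks[tag]
            return len(blk["dirty_lines"]) + (self.lines_per_block if blk["ref"] else 0)
        victim = min(self.blocks, key=cost)
        writes_to_pcm = len(self.blocks[victim]["dirty_lines"])
        del self.blocks[victim]
        return victim, writes_to_pcm

s = PcmAwareCacheSet()
s.touch("A", line_index=0, is_write=True)
s.touch("A", line_index=3, is_write=True)
s.touch("B")                               # clean block, only read
s.touch("C", line_index=1, is_write=True)
print(s.pick_victim())                     # -> ('B', 0): evicting it writes nothing to PCM
```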