Title/Summary/Keyword: cache management scheme


A Cache Management Scheme for Effective Processing of Continuous Partial Match Queries in Mobile Computing Environments (이동 컴퓨팅 환경에서 연속 부분 부합 질의의 효과적인 처리를 위한 캐시 관리 방안)

  • Jeong, Yeon-Don; Lee, Ji-Yeon; Lee, Yun-Jun; Kim, Myeong-Ho
    • Journal of KIISE:Databases, v.28 no.2, pp.253-265, 2001
  • This paper proposes a cache management scheme for the effective processing of continuous partial match queries in mobile computing environments. A continuous partial match query is a partial match query whose result is maintained consistently in the client's memory over time. Existing cache management techniques for mobile environments are based on record identifiers; however, since a partial match query searches data by content, such identifier-based methods cannot manage the cache efficiently. In the proposed scheme, the cache state of a mobile client is described with predicates, and the cache invalidation information the server broadcasts to clients for cache management, the Cache Invalidation Report (CIR), is likewise composed of predicates. Using this predicate representation, we propose a series of cache management techniques: the brute-force method, the subtraction method, and the intersection method. We also derive the computational complexity of the proposed methods.

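To make the predicate representation concrete, here is a minimal Python sketch of the intersection idea: the client keeps a predicate per cached query, the CIR is a list of predicates describing changed data, and any cached entry whose predicate overlaps a CIR predicate is invalidated. The equality-only predicate form and the attribute names are illustrative assumptions; the brute-force and subtraction variants are not reproduced.

```python
def intersects(p, q):
    """Conjunctive equality predicates overlap unless they require
    different values for some shared attribute (None = don't care)."""
    return all(p[a] is None or q[a] is None or p[a] == q[a]
               for a in p.keys() & q.keys())

# Client cache: (query predicate, cached result) pairs.
cache = [
    ({"city": "Seoul", "grade": None}, ["r1", "r2"]),
    ({"city": None, "grade": "A"}, ["r2", "r3"]),
]

def apply_cir(cache, cir):
    """Drop every cached entry whose predicate overlaps an update
    described in the broadcast Cache Invalidation Report (CIR)."""
    return [(p, rows) for p, rows in cache
            if not any(intersects(p, q) for q in cir)]

# Broadcast says records with city=Busan changed: only the second,
# city-unconstrained query may cover Busan records, so only it is dropped.
cache = apply_cir(cache, [{"city": "Busan"}])
```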

Dynamic Cache Management Scheme on Demand-Based FTL Considering Data Access Pattern (데이터 접근 패턴을 고려한 요구 기반 FTL 내 캐시의 동적 관리 기법)

  • Lee, Bit-Na; Song, Nae-Young; Koh, Kern
    • Proceedings of the Korean Information Science Society Conference, 2011.06a, pp.547-550, 2011
  • Flash memory is widely used in portable devices because of its low power consumption and high performance. The FTL is the software layer that manages data on the flash and affects the performance of the whole device. Among FTL designs, page-level mapping offers high flexibility and speed but suffers from a large address translation table. To solve this, Demand-based FTL (DFTL) was proposed, which keeps only the mapping entries of frequently accessed regions in a mapping table cache. In DFTL, when the hit ratio of the Cache Mapping Table (CMT) drops, frequent flash accesses for mapping lookups become a significant overhead; this is a problem even under common, ordinary sequential access patterns. This paper therefore proposes a scheme that predicts the access pattern of the storage device and prefetches CMT entries in advance. The proposed scheme detects sequentiality in the access pattern, loads consecutive mapping entries into the CMT ahead of time, and dynamically adjusts the number of mapping entries it prefetches. Additionally, to detect thrashing in the CMT, it tracks whether evicted victim entries are accessed again and exploits this information. Experimental results show that, compared with the original DFTL, the proposed scheme improves the hit ratio by 50% on average with only a small space overhead.
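
The prefetching idea can be sketched briefly. In the following Python sketch (names and the exact adaptation rule are illustrative, not the paper's), a miss on a logical page triggers a run of consecutive mapping loads when the previous request was the adjacent page, and re-access of a recently evicted victim is taken as a thrashing signal that halves the prefetch depth.

```python
from collections import OrderedDict

class CMT:
    """Cached Mapping Table with sequential prefetch (illustrative)."""
    def __init__(self, capacity=64):
        self.capacity = capacity
        self.map = OrderedDict()   # LPN -> PPN, kept in LRU order
        self.evicted = set()       # recent victims, for the thrashing check
        self.last_lpn = None
        self.degree = 4            # prefetch depth, adapted at runtime

    def _flash_lookup(self, lpn):
        return lpn + 1_000_000     # stand-in for reading a flash mapping page

    def translate(self, lpn):
        sequential = (self.last_lpn == lpn - 1)
        if lpn not in self.map:
            if lpn in self.evicted:                      # victim re-accessed:
                self.degree = max(1, self.degree // 2)   # thrashing, fetch less
                self.evicted.discard(lpn)
            elif sequential:                             # run confirmed:
                self.degree = min(32, self.degree + 2)   # fetch more ahead
            run = self.degree if sequential else 1
            for l in range(lpn, lpn + run):              # prefetch the run
                self.map[l] = self._flash_lookup(l)
            while len(self.map) > self.capacity:         # evict LRU victims
                victim, _ = self.map.popitem(last=False)
                self.evicted.add(victim)
        self.map.move_to_end(lpn)
        self.last_lpn = lpn
        return self.map[lpn]
```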

Access Frequency Based Selective Buffer Cache Management Strategy For Multimedia News Data (접근 요청 빈도에 기반한 멀티미디어 뉴스 데이터의 선별적 버퍼 캐쉬 관리 전략)

  • Park, Yong-Un; Seo, Won-Il; Jeong, Gi-Dong
    • The Transactions of the Korea Information Processing Society, v.6 no.9, pp.2524-2532, 1999
  • In this paper, we present a new buffer pool management scheme designed for video-type news objects, to build a cost-effective News On Demand storage server that serves user requests beyond the limitation of disk bandwidth. In a News On Demand server, where many user requests for video-type news objects must be serviced within their playback deadlines, the maximum number of concurrent users is limited by the disk bandwidth the server provides. In our proposed buffer cache management scheme, a requested object is first checked for whether it is worth caching, based on its average request arrival interval and the current disk traffic density. Only admitted news objects enter the buffer pool, where buffers are allocated per object rather than per block. We evaluated the proposed caching algorithm through simulation: compared with serving requests for real-time news data from disks alone, the scheme serves 30% more requests without any additional cost.

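As a rough illustration of the admission test described above, the following sketch admits an object into the buffer pool only when its average request-arrival interval is short and the disks are near saturation. The thresholds, the smoothing factor, and the function names are assumptions made for illustration.

```python
import time

ARRIVAL_THRESHOLD = 60.0   # max average seconds between requests (assumed)
TRAFFIC_THRESHOLD = 0.8    # disk utilization above which caching pays off

last_seen = {}             # object id -> (last request time, avg interval)

def worth_caching(obj_id, disk_utilization, now=None):
    """Admission test run when a request for a news object arrives."""
    now = time.time() if now is None else now
    if obj_id not in last_seen:
        last_seen[obj_id] = (now, None)
        return False                       # no arrival interval known yet
    prev, avg = last_seen[obj_id]
    interval = now - prev
    avg = interval if avg is None else 0.7 * avg + 0.3 * interval
    last_seen[obj_id] = (now, avg)
    # Admit whole objects (not blocks) that are requested frequently
    # while the disks are close to saturation.
    return avg < ARRIVAL_THRESHOLD and disk_utilization > TRAFFIC_THRESHOLD
```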

Enhanced ANTSEC Framework with Cluster based Cooperative Caching in Mobile Ad Hoc Networks

  • Umamaheswari, Subbian; Radhamani, Govindaraju
    • Journal of Communications and Networks, v.17 no.1, pp.40-46, 2015
  • In a mobile ad hoc network (MANET), communication between mobile nodes occurs without centralized control. In this environment the mobility of a node is unpredictable, a defining characteristic of wireless networks, and faulty or malicious nodes leave the network vulnerable to routing misbehavior. The resource-constrained nature of MANETs also increases query delay at the time of data access. In this paper, the AntHocNet+ Security (ANTSEC) framework is proposed, which includes an enhanced cooperative caching scheme embedded with an artificial immune system. The framework improves security by injecting immunity into data packets, improves the packet delivery ratio, and reduces end-to-end delay using a cross-layer design. Node failure and node malfunction are also addressed in the cache management.
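
Setting the artificial-immune security layer aside, the cooperative-caching path can be illustrated with a generic cluster lookup: local cache first, then cluster peers one hop away, then the full route to the data source. This is a generic sketch of cluster-based cooperative caching, not ANTSEC's actual protocol.

```python
class Node:
    """A mobile node holding a local cache and a list of cluster peers."""
    def __init__(self, node_id):
        self.node_id = node_id
        self.cache = {}      # key -> locally cached value
        self.cluster = []    # peer Nodes in the same cluster

    def get(self, key, fetch_remote):
        if key in self.cache:              # 1. local hit: no traffic at all
            return self.cache[key]
        for peer in self.cluster:          # 2. cluster hit: one-hop fetch
            if key in peer.cache:
                value = peer.cache[key]
                self.cache[key] = value    # keep a copy on the way back
                return value
        value = fetch_remote(key)          # 3. full route to the data source
        self.cache[key] = value
        return value
```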

An Efficient Caching Strategy in Data Broadcasting (데이터 방송 환경에서의 효율적인 캐슁 정책)

  • Kim, Su-Yeon; Choe, Yang-Hui
    • Journal of KIISE:Computer Systems and Theory, v.26 no.12, pp.1476-1484, 1999
  • Recently, many television broadcasters have tried to disseminate digital multimedia data in addition to the traditional content (the audio-visual stream). Broadcast data must be cached by the client system to provide a reasonable response time for user requests. Previous studies mostly assumed the dissemination of a fixed set of items, so their results are not suitable when broadcast items change frequently. In this paper, for the setting where data accesses are unlikely to repeat and the probability of user accesses is hard to predict, we propose a client-side cache management scheme that fetches every received page into the cache unconditionally and, when replacement is needed, evicts the page whose next broadcast instance is nearest. The proposed policy improves user response time by reducing the expected cache-miss penalty, and it remains effective in an environment where multiple broadcast providers using different scheduling algorithms coexist.
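
The replacement rule above is simple enough to state in a few lines. In this sketch, every received page is cached and the victim is the page that will be rebroadcast soonest, since a miss on it costs the least waiting time; the schedule lookup is assumed to be available from the broadcast program.

```python
def on_receive(cache, page, now, schedule, capacity):
    """Cache every received page; when over capacity, evict the page whose
    next broadcast is soonest. `schedule[p]` is p's next on-air time,
    assumed known from the broadcast program."""
    cache.add(page)                  # fetch into the cache unconditionally
    while len(cache) > capacity:
        victim = min(cache, key=lambda p: schedule[p] - now)
        cache.remove(victim)

cache = {"a", "b"}
on_receive(cache, "c", now=0, schedule={"a": 5, "b": 40, "c": 90}, capacity=2)
print(cache)   # {'b', 'c'}: 'a' returns in 5 ticks, so losing it is cheapest
```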

Technique for Estimating the Number of Active Flows in High-Speed Networks

  • Yi, Sung-Won; Deng, Xidong; Kesidis, George; Das, Chita R.
    • ETRI Journal, v.30 no.2, pp.194-204, 2008
  • The online collection of coarse-grained traffic information, such as the total number of flows, is gaining in importance due to a wide range of applications, such as congestion control and network security. In this paper, we focus on an active queue management scheme called SRED since it estimates the number of active flows and uses the quantity to indicate the level of congestion. However, SRED has several limitations, such as instability in estimating the number of active flows and underestimation of active flows in the presence of non-responsive traffic. We present a Markov model to examine the capability of SRED in estimating the number of flows. We show how the SRED cache hit rate can be used to quantify the number of active flows. We then propose a modified SRED scheme, called hash-based two-level caching (HaTCh), which uses hashing and a two-level caching mechanism to accurately estimate the number of active flows under various workloads. Simulation results indicate that the proposed scheme provides a more accurate estimation of the number of active flows than SRED, stabilizes the estimation with respect to workload fluctuations, and prevents performance degradation by efficiently isolating non-responsive flows.

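For background, the SRED-style estimator the paper improves on can be sketched as follows: each arriving packet's flow id is compared against a random entry of a small cache (the "zombie list"); the hit probability approaches 1/N for N active flows, so the flow count is read off as the reciprocal of the smoothed hit rate. The constants are illustrative, and HaTCh's hashed two-level cache is not reproduced here.

```python
import random

CACHE_SIZE = 256
REPLACE_P = 0.25     # probabilistic replacement rate (assumed)
ALPHA = 0.01         # smoothing factor for the hit-rate average

zombie = []          # small cache of recently seen flow ids
hit_est = 0.0        # exponentially averaged hit probability

def observe(flow_id):
    """Update the hit-rate estimate with one arriving packet's flow id."""
    global hit_est
    if zombie:
        hit = 1.0 if random.choice(zombie) == flow_id else 0.0
        hit_est = (1 - ALPHA) * hit_est + ALPHA * hit
    if len(zombie) < CACHE_SIZE:
        zombie.append(flow_id)
    elif random.random() < REPLACE_P:
        zombie[random.randrange(CACHE_SIZE)] = flow_id

def active_flows():
    """Hit probability is roughly 1/N for N active flows."""
    return float("inf") if hit_est == 0 else 1.0 / hit_est
```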

Data Replication and Migration Scheme for Load Balancing in Distributed Memory Environments (분산 인-메모리 환경에서 부하 분산을 위한 데이터 복제와 이주 기법)

  • Choi, Kitae; Yoon, Sangwon; Park, Jaeyeol; Lim, Jongtae; Bok, Kyoungsoo; Yoo, Jaesoo
    • KIISE Transactions on Computing Practices, v.22 no.1, pp.44-49, 2016
  • Recently, data has been growing dramatically along with the growth of social media and digital devices, and distributed in-memory processing systems are used to process large amounts of data efficiently. However, if load concentrates on a certain node in a distributed environment, that node's performance degrades significantly. In this paper, we propose a load balancing scheme that distributes load in a distributed in-memory environment. The proposed scheme replicates hot data to multiple nodes to manage node load, and migrates data in consideration of node load when nodes are added or removed. A client reduces the number of accesses to the central server by using the metadata of the hot data to access the data node directly. To show the superiority of the proposed scheme, we compare it with an existing load balancing scheme through performance evaluation.
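
A minimal sketch of the replication side of such a scheme: keys whose access count crosses a hot threshold are replicated to the least-loaded nodes, and the client routes each request straight to the least-loaded replica using the hot-data metadata, bypassing the central server. The threshold, the load metric, and the replica count are illustrative assumptions.

```python
HOT_THRESHOLD = 1000   # accesses after which a key counts as hot (assumed)

def rebalance(metadata, nodes, access_counts, replicas=3):
    """metadata: key -> node ids holding the key; nodes: node id -> load."""
    for key, count in access_counts.items():
        if count >= HOT_THRESHOLD:
            holders = list(metadata[key])
            need = max(0, replicas - len(holders))
            # Replicate hot data onto the least-loaded nodes not holding it.
            candidates = sorted((n for n in nodes if n not in holders),
                                key=nodes.get)
            metadata[key] = holders + candidates[:need]
    return metadata

def route(metadata, nodes, key):
    """Client side: go straight to the least-loaded replica."""
    return min(metadata[key], key=nodes.get)

loads = {"n1": 9.0, "n2": 2.0, "n3": 5.0}
meta = rebalance({"k1": ["n1"]}, loads, {"k1": 4200})
print(meta, route(meta, loads, "k1"))   # {'k1': ['n1', 'n2', 'n3']} n2
```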

Adaptive Deadline-aware Scheme (ADAS) for Data Migration between Cloud and Fog Layers

  • Khalid, Adnan; Shahbaz, Muhammad
    • KSII Transactions on Internet and Information Systems (TIIS), v.12 no.3, pp.1002-1015, 2018
  • The advent of the Internet of Things (IoT) and the evident inadequacy of Cloud networks in managing numerous end nodes have brought about a paradigm shift, giving birth to Fog computing. Fog computing is an extension of Cloud computing that places Cloud resources at the edge of the network, closer to the user. Cloud computing has become one of the essential needs of people over the Internet, but with the emerging concept of IoT, traditional Clouds seem inadequate: IoT entails extremely low latency, for which distant and unknown Cloud servers are unsuitable. With Fog computing, Fog devices installed close to the user provide immediate storage for frequently needed data. This paper discusses data migration between different storage types, especially between Cloud devices, and then presents a mechanism to migrate data between the Cloud and Fog layers. We call this mechanism the Adaptive Deadline-Aware Scheme (ADAS) for data migration between Cloud and Fog. We demonstrate that latency-sensitive "hot" data can be accessed and processed through the proposed ADAS more efficiently than with a traditional Cloud setup.
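
In the spirit of ADAS, a deadline-aware placement decision might look like the sketch below: items whose access deadline cannot be met at Cloud round-trip latency are migrated to the Fog layer while capacity lasts, with hot (frequently accessed) items considered first. The latency figures and the greedy rule are assumptions, not the paper's exact mechanism.

```python
CLOUD_RTT_MS = 120.0   # assumed round-trip latency to the Cloud
FOG_RTT_MS = 10.0      # assumed round-trip latency to a Fog node

def migrate(items, fog_capacity):
    """items: (name, deadline_ms, access_rate, size) tuples. Hot items are
    considered first; an item moves to the Fog when only the Fog can meet
    its deadline and there is still room for it."""
    placement, free = {}, fog_capacity
    for name, deadline_ms, rate, size in sorted(items, key=lambda i: -i[2]):
        if FOG_RTT_MS <= deadline_ms < CLOUD_RTT_MS and size <= free:
            placement[name] = "fog"
            free -= size
        else:
            placement[name] = "cloud"
    return placement

print(migrate([("sensor-feed", 50, 900, 2), ("archive", 5000, 3, 40)], 8))
# {'sensor-feed': 'fog', 'archive': 'cloud'}
```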

Content Centric Networking Naming Scheme for Efficient Data Sharing (효율적인 데이타 교환을 위한 Content-Centric Networking 식별자 방안)

  • Kim, Dae-Youb
    • Journal of Korea Multimedia Society, v.15 no.9, pp.1126-1132, 2012
  • To enhance network efficiency, CCN allows intermediate network nodes between a content consumer and a content publisher to temporarily cache transmitted contents, and the nodes immediately return the cached contents to other consumers when they receive matching content request messages. For this, CCN uses hierarchical content names to forward both request and response messages. However, such content names semantically reveal information about the domain and the user as well as the content itself, so users' privacy can be invaded. In this paper, we first review the privacy problems of CCN names and the previously proposed schemes, and then propose an improved name management scheme for preserving users' privacy.
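
One common way to achieve what this entry describes, keeping a name hierarchical and routable while hiding its semantics, is to hash each name component, as in the generic sketch below. The example name is hypothetical, and this illustrates the problem space rather than the paper's exact scheme.

```python
import hashlib

def obscure_name(name, salt=b""):
    """/kbs/news/2012/today.mp4 -> /h(kbs)/h(news)/h(2012)/h(today.mp4):
    routers can still match the name component by component, but the
    components no longer reveal the domain, user, or content."""
    parts = name.strip("/").split("/")
    hashed = [hashlib.sha256(salt + p.encode()).hexdigest()[:12]
              for p in parts]
    return "/" + "/".join(hashed)

print(obscure_name("/kbs/news/2012/today.mp4"))
```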

Small Active Command Design for High Density DRAMs

  • Lee, Kwangho; Lee, Jongmin
    • Journal of the Korea Society of Computer and Information, v.24 no.11, pp.1-9, 2019
  • In this paper, we propose a Small Active Command scheme that reduces the power consumption of the command bus to DRAM. We target the ACTIVE command, which consists of multiple packets because it carries the row address, the largest of the addresses delivered to the DRAM. The proposed scheme identifies frequently referenced row addresses as hot pages and delivers, instead of a full row address, an index number into small caches (tables) located in both the memory controller and the DRAM. I-ACTIVE and I-PRECHARGE commands, which use unused bits of existing DRAM commands, are added for index-number transfer and cache synchronization management. Experimental results show that the proposed method reduces command bus power consumption by 20% and 8.1% on average under the close-page and open-page policies, respectively.
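
The index-table idea can be sketched from the controller side as follows: the memory controller and the DRAM keep identical small tables of hot row addresses, and an activation whose row is in the table is issued as a short I-ACTIVE carrying only the index bits. The table size, insertion policy, and command encoding here are illustrative assumptions.

```python
TABLE_SIZE = 16   # entries in the hot-row index table (assumed)
hot_table = []    # index -> row address; controller and DRAM keep it in sync

def issue_active(row_addr):
    """Return the command to put on the bus to activate row_addr."""
    if row_addr in hot_table:
        idx = hot_table.index(row_addr)
        return ("I-ACTIVE", idx)       # short command: index bits only
    if len(hot_table) < TABLE_SIZE:
        hot_table.append(row_addr)     # both sides insert the same entry,
                                       # keeping the two tables synchronized
    return ("ACTIVE", row_addr)        # full multi-packet row-address form

print(issue_active(0x3A2F0))   # first touch: full ACTIVE command
print(issue_active(0x3A2F0))   # now hot: ('I-ACTIVE', 0), index only
```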