• Title/Summary/Keyword: cache-hit


A Design of Clustered Proxy Servers for the VOD System (VOD 시스템을 위한 Clustered Proxy Server 설계)

  • 배기범;김종훈;이철훈
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 1998.10a
    • /
    • pp.130-132
    • /
    • 1998
  • This work extends the conventional VOD system by placing proxy servers so that users receive faster responses from a remote service provider and service continues without interruption even when a failure occurs. In particular, the proxy servers within a region form a cluster that can accept requests from more clients than a single proxy server could handle, while reducing the required cache size and providing a higher hit rate. In addition, to manage the programs within the cluster, information about the video data is stored as hints and managed at each proxy, so that video data can be delivered to users in real time over the network and the cluster can be managed efficiently without transferring the data directly.

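The hint-based program management described above could be realized as a small directory kept at each proxy, mapping video identifiers to the cluster node that holds them. The sketch below is a minimal illustration under that assumption; the class and field names (HintDirectory, VideoHint, holderProxy) are hypothetical and not taken from the paper.

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

/** Hypothetical hint entry: metadata a proxy keeps about a video held elsewhere in the cluster. */
record VideoHint(String videoId, String holderProxy, long sizeBytes) {}

/** Minimal sketch of a per-proxy hint directory for cluster-wide program management. */
class HintDirectory {
    private final Map<String, VideoHint> hints = new ConcurrentHashMap<>();

    /** Record that another proxy in the cluster holds this video. */
    void register(VideoHint hint) {
        hints.put(hint.videoId(), hint);
    }

    /** Look up which proxy a client should be redirected to, without moving the video data itself. */
    Optional<String> locate(String videoId) {
        return Optional.ofNullable(hints.get(videoId)).map(VideoHint::holderProxy);
    }
}
```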

Cooperative Caching of Web Server Cluster for Improving Cache Hit Rate (캐시 적중률 향상을 위한 웹 서버 클러스터의 협력적 캐싱)

  • 김희규;최창열;박기진;김성수
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2003.04d
    • /
    • pp.563-565
    • /
    • 2003
  • Recent research on clusters has focused on load distribution and cache policies for content-based clusters. In this paper, the existing DFR scheme is improved to raise the cache hit rate of hint-based cooperative caching in a cluster environment that provides high availability and scalability for web services. A memory replacement method is presented that selectively evicts master copies and dependent copies according to their service access probability; when compared and analyzed against the DFR scheme, it achieved a lower disk access rate.

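As a rough illustration of the replacement idea in the abstract above, the sketch below evicts the cached copy with the lowest access probability, preferring dependent copies over master copies when probabilities tie. The class names and the tie-breaking rule are assumptions for illustration, not the paper's actual DFR extension.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

/** Hypothetical cached copy: either the master copy of an object or a dependent (replica) copy. */
record CachedCopy(String objectId, boolean isMaster, double accessProbability, long sizeBytes) {}

/** Minimal sketch: pick a victim by access probability, preferring dependent copies on ties. */
class ProbabilityReplacement {
    Optional<CachedCopy> chooseVictim(List<CachedCopy> copies) {
        return copies.stream()
                .min(Comparator.comparingDouble(CachedCopy::accessProbability)
                        // on equal probability, evict a dependent copy before a master copy
                        .thenComparingInt(c -> c.isMaster() ? 1 : 0));
    }
}
```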

A Technique of Replacing XML Semantic Cache (XML 시맨틱 캐쉬의 교체 기법)

  • Hong, Jung-Woo;Kang, Hyun-Chul
    • The Journal of Society for e-Business Studies
    • /
    • v.12 no.3
    • /
    • pp.211-234
    • /
    • 2007
  • In e-business, XML is a major format of data, and it is essential to process queries against XML data efficiently. XML query caching has received much attention for query performance improvement. In employing XML query caching, an efficient technique of cache replacement is required. The previous techniques considered as a replacement unit either the whole query result or the path in the query result. The former is simple to employ but not efficient, whereas the latter is more efficient and yet the size difference among the potential victims is large, and thus the efficiency of caching would be limited. In this paper, we propose a new technique where the element in the query result is the replacement unit, to overcome the limitations of the previous techniques. The proposed technique could enhance cache efficiency to a great extent because it would not pick a victim whose size is too large to store a new cached item, the variance in the size of victims would be small, and the unused space of the cache storage would be small. A technique of XML semantic cache replacement is presented which is based on a replacement function that takes into account the cache hit ratio, last access time, fetch time, size of the XML semantic region, size of the element in the XML semantic region, etc. We implemented a prototype XML semantic cache system that employs the proposed technique and conducted a detailed set of experiments over a LAN environment. The experimental results showed that our proposed technique outperformed the previous ones.

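The abstract above mentions a replacement function combining cache hit ratio, last access time, fetch time, and region/element sizes. A hedged sketch of such a weighted scoring function is given below; the weights and the exact combination are assumptions for illustration, not the function defined in the paper.

```java
/** Hypothetical statistics kept per cached XML element within a semantic region. */
record ElementStats(double hitRatio, long lastAccessMillis, double fetchCostMillis,
                    long regionSizeBytes, long elementSizeBytes) {}

/** Minimal sketch of a replacement score: a lower score marks a better eviction candidate. */
class SemanticCacheScorer {
    double score(ElementStats s, long nowMillis) {
        double recency = 1.0 / (1.0 + (nowMillis - s.lastAccessMillis()));   // recently used -> high
        double benefit = s.hitRatio() * s.fetchCostMillis();                 // popular, costly to refetch -> high
        double sizePenalty = (double) s.elementSizeBytes() / s.regionSizeBytes();
        // assumed combination: keep popular, costly-to-refetch, recently used, small elements
        return benefit * recency / (sizePenalty + 1e-9);
    }
}
```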

A Hashing Scheme using Round Robin in a Wireless Internet Proxy Server Cluster System (무선 인터넷 프록시 서버 클러스터 시스템에서 라운드 로빈을 이용한 해싱 기법)

  • Kwak, Huk-Eun;Chung, Kyu-Sik
    • The KIPS Transactions:PartA
    • /
    • v.13A no.7 s.104
    • /
    • pp.615-622
    • /
    • 2006
  • Caching in a wireless Internet proxy server cluster environment reduces Internet traffic and the request/response time experienced by Web users. As a way to increase the cache hit ratio, a hash function can be used so that identical request URLs are assigned to the same cache server. The disadvantage of this hashing scheme is that client requests cannot be well distributed across all cache servers, so the performance of the whole system can depend on only a few busy servers. In this paper, we propose an improved load-balancing scheme using hashing and round robin that distributes client requests evenly to the cache servers. In the existing hashing scheme, once a hash value for a request URL is calculated, the server number is statically fixed at compile time, whereas in the proposed scheme it is dynamically fixed at run time using a round-robin method. We implemented the proposed scheme in a wireless Internet proxy server cluster environment and performed experiments using 16 PCs. Experimental results show an even distribution of client requests and a 52% to 112% performance improvement compared to the existing hashing method.
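
A minimal sketch of combining URL hashing with round-robin assignment is given below: the first time a URL hash value is seen, the target cache server is chosen in round-robin order and remembered, so identical URLs keep hitting the same cache while new URLs are spread evenly. The class and method names are assumptions for illustration; the paper's actual scheme may differ in detail.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

/** Minimal sketch: hash a request URL, but bind new hash values to servers in round-robin order. */
class RoundRobinHashDispatcher {
    private final int serverCount;
    private final AtomicInteger nextServer = new AtomicInteger();
    private final Map<Integer, Integer> hashToServer = new ConcurrentHashMap<>();

    RoundRobinHashDispatcher(int serverCount) {
        this.serverCount = serverCount;
    }

    /** Returns the cache server index for this URL; identical URLs always map to the same server. */
    int serverFor(String url) {
        int hash = url.hashCode();
        return hashToServer.computeIfAbsent(hash,
                h -> Math.floorMod(nextServer.getAndIncrement(), serverCount));
    }
}
```

With 16 servers, for example, `new RoundRobinHashDispatcher(16).serverFor(url)` would return a stable index per URL while newly seen URLs are spread over all 16 caches.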

Device Caching Strategy Maximizing Expected Content Quality

  • Choi, Minseok
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.1
    • /
    • pp.111-118
    • /
    • 2021
  • This paper proposes a novel method of caching contents that can be encoded into multiple quality levels in device-to-device (D2D)-assisted caching networks. Different from existing caching schemes, the author allows caching fractions of an individual file and considers the self cache hit event, in which the user can find the desired content on its own device. The author analyzes the tradeoff between the quality of cached contents and the cache hit rate, and proposes a device caching method maximizing the expected quality that the user can enjoy. Depending on the parameter of the relationship between the quality and the file size, the optimal caching method can be obtained by solving a convex optimization problem or a DC programming problem. If the file size increases faster than the quality, the cached fractions of the contents continuously increase as the popularity grows. Meanwhile, if the file size increases slower than the quality, some of the high-popularity files are entirely cached but others are not cached at all.
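
As a toy illustration (not the paper's formulation) of the tradeoff between cached quality and what the user actually receives, the sketch below evaluates an assumed expected-quality objective for two cache allocations of equal total size. The quality model x^alpha and the popularity weighting are assumptions introduced here for illustration only.

```java
/** Toy evaluation of expected content quality for a cached-fraction allocation.
 *  Assumptions (not from the paper): caching a size fraction x of a file yields quality x^alpha,
 *  and a self cache hit on file i delivers that quality with probability equal to its popularity. */
class ExpectedQuality {
    static double evaluate(double[] popularity, double[] cachedFraction, double alpha) {
        double expected = 0.0;
        for (int i = 0; i < popularity.length; i++) {
            expected += popularity[i] * Math.pow(cachedFraction[i], alpha);
        }
        return expected;
    }

    public static void main(String[] args) {
        double[] popularity = {0.5, 0.3, 0.2};   // Zipf-like request probabilities
        double[] fullCache  = {1.0, 0.0, 0.0};   // cache only the most popular file entirely
        double[] partial    = {0.6, 0.3, 0.1};   // cache fractions of every file (same total size)
        // alpha < 1 models the case where file size grows faster than quality,
        // in which spreading the cache over many files gives higher expected quality
        System.out.printf("full=%.3f partial=%.3f%n",
                evaluate(popularity, fullCache, 0.5),
                evaluate(popularity, partial, 0.5));
    }
}
```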

A Study on Mobility-Aware Edge Caching and User Association Algorithm (이동성 기반의 엣지 캐싱 및 사용자 연결 알고리즘 연구)

  • TaeYoon, Lee;SuKyoung, Lee
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.12 no.2
    • /
    • pp.47-52
    • /
    • 2023
  • Mobile Edge Computing (MEC) is considered a promising technology to effectively support explosively increasing traffic demands. It can provide low-latency services and reduce network traffic by caching contents at the edge of the network, such as at the Base Station (BS). Although users may associate with the nearest BS, it is more beneficial to associate them with the BS where the requested content is cached, in order to reduce content download latency. Therefore, in this paper, we propose a mobility-aware joint caching and user association algorithm to improve the cache hit ratio. In particular, the proposed algorithm performs caching and user association based on sojourn time and content preferences. Simulation results show that the proposed scheme improves performance in terms of cache hit ratio and latency compared with existing schemes.
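
The sketch below is a hedged illustration of mobility-aware user association: it prefers a BS that already caches the requested content, breaking ties by the longest expected sojourn time. The scoring rule and names are assumptions for illustration, not the algorithm proposed in the paper.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;
import java.util.Set;

/** Hypothetical view of a base station: the contents it caches and the user's expected sojourn time. */
record BaseStation(String id, Set<String> cachedContents, double expectedSojournSec) {}

/** Minimal sketch of mobility-aware user association based on cached content and sojourn time. */
class MobilityAwareAssociation {
    Optional<BaseStation> associate(String requestedContent, List<BaseStation> candidates) {
        return candidates.stream()
                .max(Comparator
                        // first prefer a BS that already caches the requested content
                        .comparingInt((BaseStation bs) ->
                                bs.cachedContents().contains(requestedContent) ? 1 : 0)
                        // then prefer the BS where the user is expected to stay longest
                        .thenComparingDouble(BaseStation::expectedSojournSec));
    }
}
```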

A Web Cache Algorithm for Small Organizations (소규모 기관을 위한 웹 캐쉬 알고리즘)

  • 민경훈;장혁수;주우석
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.25 no.8A
    • /
    • pp.1115-1123
    • /
    • 2000
  • Most existing web caches are used in large organizations, but many Internet users belong to small organizations such as a venture company or a PC room. Users generally work in multiple-window environments and use several programs concurrently, with rapid preference changes within a relatively short period of time. We develop a network-path based algorithm. It organizes a cache according to the network paths of the requested URLs and builds a network cache farm in which caches are logically connected with each other and each cache has its own preference over certain network paths. The algorithm has been implemented and tested at a real site. The performance results show that the new algorithm dramatically outperforms the existing algorithms in hit ratio and response time at low cost.

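As a hedged sketch of the network-path idea above, the code below routes a request to the cache in the farm whose preferred network path (approximated here by the URL's host suffix) matches the request. The host-based approximation and all names are assumptions for illustration, not the paper's implementation.

```java
import java.net.URI;
import java.util.List;
import java.util.Optional;

/** Hypothetical cache node with a preference over certain network paths (approximated by host suffixes). */
record CacheNode(String id, List<String> preferredHostSuffixes) {
    boolean prefers(String host) {
        return preferredHostSuffixes.stream().anyMatch(host::endsWith);
    }
}

/** Minimal sketch: pick the cache in the farm whose preferred network path matches the requested URL. */
class NetworkPathRouter {
    private final List<CacheNode> farm;

    NetworkPathRouter(List<CacheNode> farm) {
        this.farm = farm;
    }

    Optional<CacheNode> route(String url) {
        String host = URI.create(url).getHost();
        return farm.stream()
                .filter(node -> host != null && node.prefers(host))
                .findFirst();
    }
}
```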

General Web Cache Implementation Using NIO (NIO를 이용한 범용 웹 캐시 구현)

  • Lee, Chul-Hui;Shin, Yong-Hyeon
    • Journal of Advanced Navigation Technology
    • /
    • v.20 no.1
    • /
    • pp.79-85
    • /
    • 2016
  • In recent web environments, network traffic has increased rapidly due to mobile devices and social networks such as smartphones and Facebook. In this paper, we improve the web response time of an existing system by using NIO direct buffers and DMA. This addresses drawbacks of Java such as CPU performance loss caused by blocking I/O and garbage collection of buffers. Keys for frequently circulated data whose priority changes often are kept in a hash map so they can be handled easily, and a priority modification algorithm is applied. Large response data is split and stored in fast direct buffers, which improves performance. Tests covering many cache-hit and cache-miss situations showed that the proposed NIO-based method substantially improves performance.
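
The abstract relies on Java NIO direct buffers, which are allocated outside the garbage-collected heap so the operating system can perform DMA transfers against them. Below is a minimal, generic example of serving a cached file to a client through a reused direct buffer; it shows standard NIO usage only and is not the paper's implementation.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.SocketChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

/** Generic NIO usage: copy a cached file to a client socket through a direct (off-heap) buffer. */
class DirectBufferResponder {
    private final ByteBuffer buffer = ByteBuffer.allocateDirect(64 * 1024); // off-heap, DMA-friendly

    void send(Path cachedFile, SocketChannel client) throws IOException {
        try (FileChannel in = FileChannel.open(cachedFile, StandardOpenOption.READ)) {
            while (in.read(buffer) != -1) {
                buffer.flip();                  // switch the buffer from filling to draining
                while (buffer.hasRemaining()) {
                    client.write(buffer);
                }
                buffer.clear();                 // reuse the same direct buffer for the next chunk
            }
        }
    }
}
```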

An Address Translation Technique for Large NAND Flash Memory using Page Level Mapping (페이지 단위 매핑 기반 대용량 NAND플래시를 위한 주소변환기법)

  • Seo, Hyun-Min;Kwon, Oh-Hoon;Park, Jun-Seok;Koh, Kern
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.16 no.3
    • /
    • pp.371-375
    • /
    • 2010
  • SSD is a storage medium based on NAND flash memory. Because of its short latency, low power consumption, and resistance to shock, it is used not only in PCs but also in server computers. Most SSDs use an FTL to overcome the erase-before-overwrite characteristic of NAND flash. There are several types of FTL, but a page-mapped FTL shows better performance than the others. However, its usefulness is limited because of the large memory footprint of its mapping table. For example, 64MB of memory space is required just for the mapping table of a 64GB MLC SSD. In this paper, we propose a novel caching scheme for the mapping table. By using mapping-table metadata we construct a fully associative cache and translate addresses in O(1) time. The simulation results show a hit ratio of more than 80% with a 32KB cache and 90% with a 512KB cache. The overall memory footprint was only 1.9% of the 64MB, and the time overhead of a cache miss was measured to be lower than 2% for most workloads.
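
A fully associative cache with O(1) lookups, as described above, can be approximated in software with a hash map keyed by logical page number. The sketch below is a generic illustration under that assumption; the LRU-style bound stands in for the paper's actual victim policy, which the abstract does not specify.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Optional;

/** Minimal sketch of a bounded, fully associative cache of FTL page-mapping entries.
 *  Keys are logical page numbers, values are physical page numbers; O(1) average lookup. */
class MappingTableCache {
    private final Map<Long, Long> entries;

    MappingTableCache(int capacity) {
        // access-order LinkedHashMap evicts the least recently used entry once full
        this.entries = new LinkedHashMap<Long, Long>(capacity, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Long, Long> eldest) {
                return size() > capacity;
            }
        };
    }

    Optional<Long> translate(long logicalPage) {
        return Optional.ofNullable(entries.get(logicalPage)); // miss -> load from flash (not shown)
    }

    void insert(long logicalPage, long physicalPage) {
        entries.put(logicalPage, physicalPage);
    }
}
```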

Research on Event Mechanism for Reducing Power Overheads in Cache Memory Synchronization (캐시 메모리 동기화 전력 감소를 위한 이벤트 메커니즘에 대한 연구)

  • Pak, Young-Jin;Jeong, Ha-Young;Lee, Yong-Surk
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.48 no.3
    • /
    • pp.69-75
    • /
    • 2011
  • In this paper, we propose an anycast, event-driven synchronization mechanism to reduce power overheads. The proposed mechanism eliminates unnecessary polling operations in the SHI (Snoop Hit Invalidate) and SHR (Snoop Hit Read) states. It prevents wasting bandwidth and reduces the power overhead of polling, and it also decreases the state-transition power compared to a broadcast model. Simulation results indicated that the proposed architecture showed about a 15.3% power decrease compared to a spin-lock model and about a 4.7% power decrease compared to a broadcast model. Overall, the results indicate that the proposed synchronization mechanism can increase the power efficiency of a multi-core system by reducing power overheads.
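
The paper's mechanism is a hardware design, but the general idea of replacing polling with event notification can be shown with a simple software analogy. The sketch below is only that analogy, under the assumption that waiters should sleep until the state change occurs instead of repeatedly probing it; it is not the proposed cache-coherence hardware.

```java
/** Software analogy (not the paper's hardware design): replace busy-wait polling on a shared
 *  state flag with an event notification, so waiters sleep instead of repeatedly probing. */
class EventSync {
    private boolean ready = false;

    synchronized void await() throws InterruptedException {
        while (!ready) {
            wait();            // sleep until notified; no polling cycles are consumed
        }
    }

    synchronized void signal() {
        ready = true;
        notifyAll();           // wake all waiters once the state change has happened
    }
}
```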