• Title/Summary/Keyword: Page Cache (페이지 캐시)


Web Traffic Data Analyze for Cache Server (캐시 서버를 위한 웹 트래픽 데이터 분석)

  • Seulki Jung;Yillbyung Lee
    • Proceedings of the Korea Information Processing Society Conference / 2008.11a / pp.1303-1306 / 2008
  • We analyzed HTTP traffic, which accounts for the largest share of all web traffic, and compared it against data from the past. We found that a current web page typically requests at least 10 to 20 additional objects, a pattern very different from that of past objects, which consisted mainly of text. By analyzing recent web trace logs, we identify and point out problems in existing algorithms and propose the concept of a new caching algorithm.
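As a rough illustration of this kind of trace analysis (not the paper's actual method), the sketch below counts how many embedded objects each page view pulls in from an HTTP access log in Common Log Format; the grouping heuristic and the log layout are assumptions.

```python
import re
from collections import defaultdict

# Minimal Common Log Format parser: client host, request path, status.
LOG_RE = re.compile(r'(\S+) \S+ \S+ \[[^\]]+\] "(?:GET|POST) (\S+) [^"]*" (\d{3})')

def objects_per_page(log_lines):
    """Count embedded objects fetched after each HTML page request.

    Heuristic (assumed for illustration): a request whose path ends in
    .html or / starts a new page view for that client; subsequent
    requests from the same client count as embedded objects.
    """
    current_page = {}                 # client -> page path being loaded
    counts = defaultdict(int)         # (client, page) -> object count
    for line in log_lines:
        m = LOG_RE.match(line)
        if not m:
            continue
        client, path, status = m.groups()
        if status != "200":
            continue
        if path.endswith(".html") or path.endswith("/"):
            current_page[client] = path          # new page view begins
            counts[(client, path)] += 0
        elif client in current_page:
            counts[(client, current_page[client])] += 1
    return counts

if __name__ == "__main__":
    sample = [
        '10.0.0.1 - - [01/Nov/2008:10:00:00 +0900] "GET /index.html HTTP/1.1" 200 512',
        '10.0.0.1 - - [01/Nov/2008:10:00:01 +0900] "GET /logo.png HTTP/1.1" 200 2048',
        '10.0.0.1 - - [01/Nov/2008:10:00:01 +0900] "GET /style.css HTTP/1.1" 200 1024',
    ]
    for (client, page), n in objects_per_page(sample).items():
        print(f"{client} {page}: {n} embedded objects")
```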

Separate Factor Caching Scheme for Mobile Web Service (모바일 웹 서비스를 위한 요소분할 캐싱 기법)

  • Sim, Kun-Jung;Kang, Eui-Sun;Kim, Jong-Keun;Ko, Hee-Ae;Lim, Young-Hwan
    • The KIPS Transactions:PartD / v.14D no.4 s.114 / pp.447-458 / 2007
  • The objective of this study is to provide faster mobile web service by improving the performance of the Contents Cache used in the existing Mobile Gate System. It was found that Mark-Up pages transcoded by the Contents Generator consist of two elements. One element depends only on the requested DIDL page and the Mark-Up type. The other also depends on the display size of the mobile device requesting the service, the types of images available, and the color depth of those images. The conventional Contents Cache saved the entire Mark-Up page, holding both elements together. This wasted storage space: when only one element changed while the others stayed the same, the reusable element was saved repeatedly in the cache memory. As a result, fewer transcoded Mark-Up pages could be kept in the same cache memory size. Therefore, in this study, Mark-Up pages transcoded by the Contents Generator were divided into the two elements and saved separately. To handle replacing cached data with new data, this study applied two replacement algorithms, LFU and LRU. The proposed method achieves faster cache performance by allowing more transcoded Mark-Up pages to be saved in the same cache storage space.
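A minimal sketch of the separate-factor idea, with hypothetical names: the page-dependent element and the device-dependent element are cached under different keys so the shared element is stored once, and a simple LRU (one of the two policies the paper applies) handles replacement.

```python
from collections import OrderedDict

class LRUCache:
    """Simple LRU cache; OrderedDict keeps keys in recency order."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)        # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        self.data[key] = value
        self.data.move_to_end(key)
        if len(self.data) > self.capacity:
            self.data.popitem(last=False) # evict least recently used

# Two caches: one keyed only by (page, markup type), one also keyed by
# device profile, so the shared element is never duplicated per device.
shared_cache = LRUCache(capacity=256)
device_cache = LRUCache(capacity=256)

def get_page(page_id, markup, device_profile, generate_shared, generate_device):
    shared_key = (page_id, markup)
    device_key = (page_id, markup, device_profile)
    shared = shared_cache.get(shared_key)
    if shared is None:
        shared = generate_shared(page_id, markup)      # transcode once
        shared_cache.put(shared_key, shared)
    dev = device_cache.get(device_key)
    if dev is None:
        dev = generate_device(page_id, markup, device_profile)
        device_cache.put(device_key, dev)
    return shared + dev    # assemble the full Mark-Up page (strings here)
```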

Extended Buffer Management with Flash Memory SSDs (플래시메모리 SSD를 이용한 확장형 버퍼 관리)

  • Sim, Do-Yoon;Park, Jang-Woo;Kim, Sung-Tan;Lee, Sang-Won;Moon, Bong-Ki
    • Journal of KIISE:Databases / v.37 no.6 / pp.308-314 / 2010
  • As the price of flash memory continues to drop and flash SSD controller technology advances, affordable high-performance flash SSDs are flourishing in the storage market. Nevertheless, it is hard to expect that flash SSDs will completely replace hard disks as database storage. Instead, using a flash SSD as a cache for hard disks is more practical, and, in fact, several hybrid storage architectures combining flash memory and hard disks have been suggested in the literature. In this paper, we propose a new approach that uses a flash SSD as an extension of the main buffer in database systems: it stores the pages replaced out of the main buffer and returns the pages that are re-referenced to the upper buffer layer, improving system performance drastically. In contrast to existing approaches that use a flash SSD as a cache in the lower storage layer, our approach, which uses the flash SSD as an extended buffer on the upper host side, provides fast random reads for the warm pages being replaced out of the limited main buffer. In fact, for the pages missed in the main buffer in a real TPC-C trace, the hit ratio in the extended buffer can exceed 60%, which supports our conjecture that this simple extended-buffer approach is very effective as a cache. In terms of performance per price, our extended buffer architecture outperforms two alternative approaches of the same cost: 1) a larger main buffer, and 2) more hard disks.
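The following sketch, using hypothetical names, illustrates the extended-buffer idea: pages evicted from the main buffer are demoted to an SSD-backed second tier, and a later main-buffer miss checks that tier before falling back to disk.

```python
from collections import OrderedDict

class TwoTierBuffer:
    """Main buffer in RAM plus an extended buffer standing in for the SSD.

    Pages replaced out of the main buffer are demoted to the extended
    buffer; a main-buffer miss that hits the extended buffer promotes
    the page back, avoiding a hard disk read.
    """
    def __init__(self, main_capacity, ext_capacity, read_from_disk):
        self.main = OrderedDict()     # page_id -> data, LRU order
        self.ext = OrderedDict()      # stands in for the flash SSD tier
        self.main_capacity = main_capacity
        self.ext_capacity = ext_capacity
        self.read_from_disk = read_from_disk

    def _insert_main(self, page_id, data):
        self.main[page_id] = data
        self.main.move_to_end(page_id)
        if len(self.main) > self.main_capacity:
            victim, vdata = self.main.popitem(last=False)
            self.ext[victim] = vdata          # demote warm page to SSD tier
            if len(self.ext) > self.ext_capacity:
                self.ext.popitem(last=False)  # SSD tier full; drop oldest

    def read(self, page_id):
        if page_id in self.main:              # main buffer hit
            self.main.move_to_end(page_id)
            return self.main[page_id]
        if page_id in self.ext:               # extended buffer hit
            data = self.ext.pop(page_id)      # fast SSD random read
        else:
            data = self.read_from_disk(page_id)  # slow hard disk read
        self._insert_main(page_id, data)
        return data
```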

SDN/NFV Based Web Cache Consistency and JavaScript Transmission Acceleration Scheme to Enhance Web Performance in Mobile Network (모바일 네트워크에서 SDN/NFV 기반의 웹 성능 향상을 위한 웹 캐시 일관성 제공과 JavaScript 전송 가속화 방안)

  • Kim, Gijeong;Lee, Sungwon
    • The Journal of Korean Institute of Communications and Information Sciences / v.39B no.6 / pp.414-423 / 2014
  • The number and size of the resources constituting a web page have been increasing steadily, which rapidly degrades web service quality in mobile networks, where delay is relatively high. Moreover, because the current network architecture is closed, it is difficult to provide protocol improvements for web service quality as network functions. In this paper, we suggest two schemes to enhance web performance in mobile networks, a Check Coded DOM scheme and a Functional JavaScript Transmission scheme, and explore how the suggested schemes can be provided as network functions using NFV (Network Function Virtualization). For performance evaluation and analysis of the suggested schemes, we perform network simulations using the SMPL library. We confirm that the suggested schemes outperform the HTTP protocol in terms of page loading time, number of messages, and amount of traffic in the network.
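The abstract does not spell out the Check Coded DOM scheme, so the sketch below is only a loose illustration of checksum-based web cache consistency: the client sends a hash of its cached copy, and the server replies with either an unchanged marker or the new body. All names are hypothetical.

```python
import hashlib

def digest(body: bytes) -> str:
    return hashlib.sha256(body).hexdigest()

# Server side: compare the client's digest with the current resource.
def serve(current_body: bytes, client_digest: str):
    if digest(current_body) == client_digest:
        return ("NOT_MODIFIED", None)          # cache is consistent
    return ("OK", current_body)                # send the fresh body

# Client side: validate the cached copy before reusing it.
def fetch(url, cache, request):
    cached = cache.get(url)
    client_digest = digest(cached) if cached is not None else ""
    status, body = request(url, client_digest)
    if status == "NOT_MODIFIED":
        return cached                          # reuse without re-transfer
    cache[url] = body
    return body

if __name__ == "__main__":
    resource = b"<html><body>hello</body></html>"
    cache = {"/page": resource}
    status, _ = serve(resource, digest(resource))
    print(status)   # NOT_MODIFIED: no body re-sent over the slow link
```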

Small Active Command Design for High Density DRAMs

  • Lee, Kwangho;Lee, Jongmin
    • Journal of the Korea Society of Computer and Information / v.24 no.11 / pp.1-9 / 2019
  • In this paper, we propose a Small Active Command scheme that reduces the power consumption of the command bus to DRAM. To do this, we target the ACTIVE command, which consists of multiple packets and carries the row address, the largest of the addresses delivered to the DRAM. The proposed scheme first identifies frequently referenced row addresses as hot pages, then delivers index numbers into small caches (tables) located in the memory controller and the DRAM. I-ACTIVE and I-PRECHARGE commands, which use unused bits of existing DRAM commands, are added for index number transfer and cache synchronization management. Experimental results show that the proposed method reduces command bus power consumption by 20% and 8.1% on average under the close-page and open-page policies, respectively.
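A small sketch of the hot-page indexing idea under assumed parameters: the memory controller and the DRAM keep synchronized tables of hot row addresses, and when a row is present the controller can issue a short indexed command instead of transmitting the full row address.

```python
class HotRowTable:
    """Tiny table of hot row addresses (size 16 is an assumption).

    Mirrored in the memory controller and the DRAM so that an index
    number can stand in for a full row address on the command bus.
    """
    def __init__(self, size=16):
        self.size = size
        self.rows = [None] * size
        self.hits = {}                # row -> reference count

    def lookup(self, row):
        """Return the table index for a cached hot row, or None."""
        for idx, cached in enumerate(self.rows):
            if cached == row:
                return idx
        return None

    def maybe_insert(self, row, threshold=4):
        """Promote a row to the table once it is referenced often enough."""
        self.hits[row] = self.hits.get(row, 0) + 1
        if self.hits[row] >= threshold and self.lookup(row) is None:
            victim = min(range(self.size),
                         key=lambda i: self.hits.get(self.rows[i], 0))
            self.rows[victim] = row   # both sides update identically
            return victim
        return None

def activate(table, row):
    """Choose between a full ACTIVE and a short indexed I-ACTIVE command."""
    idx = table.lookup(row)
    if idx is not None:
        return ("I-ACTIVE", idx)      # short command: index number only
    table.maybe_insert(row)
    return ("ACTIVE", row)            # full multi-packet row address
```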

A Study of User Identification in Data Preprocessing for Web Usage Mining (웹 이용 마이닝을 위한 데이터 전처리에서 사용자 구분에 관한 연구)

  • 최영환;이상용
    • Proceedings of the Korean Information Science Society Conference / 2001.10b / pp.118-120 / 2001
  • Web usage mining is a data mining technique that analyzes the usage patterns of web users from the logs of huge web data repositories. To apply mining techniques, users and sessions must be identified accurately during preprocessing, but users cannot be fully distinguished from web logs in the standard web log format alone. Therefore, to obtain accurate results, the web server must provide a module that can distinguish users and sessions, or an appropriate execution field must be inserted into each page. Distinguishing users and sessions faces many problems, such as caching, firewalls, IP (ISP) assignment, privacy, and cookies, and there is not yet a definitive method to solve them. This paper proposes a method to distinguish users and sessions using only server-side clickstream data: the referrer log, the agent log, and the access log.
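As a rough sketch of the heuristics involved (the field layout and timeout are assumptions, not the paper's exact method), the code below distinguishes users by combining the client IP from the access log with the user-agent string from the agent log, and splits sessions on a 30-minute inactivity gap.

```python
from datetime import datetime, timedelta

SESSION_GAP = timedelta(minutes=30)   # assumed inactivity threshold

def identify(entries):
    """Assign user and session IDs to parsed log entries.

    Each entry is assumed to be a dict with 'ip', 'agent', and 'time'
    (a datetime), merged from the access and agent logs. Users sharing
    an IP behind a proxy or firewall are separated by user agent.
    """
    last_seen = {}     # user_key -> (last time, session number)
    results = []
    for e in sorted(entries, key=lambda e: e["time"]):
        user_key = (e["ip"], e["agent"])
        prev = last_seen.get(user_key)
        if prev is None or e["time"] - prev[0] > SESSION_GAP:
            session = (prev[1] + 1) if prev else 0   # new session starts
        else:
            session = prev[1]
        last_seen[user_key] = (e["time"], session)
        results.append((user_key, session, e))
    return results

if __name__ == "__main__":
    t0 = datetime(2001, 10, 1, 9, 0)
    entries = [
        {"ip": "1.2.3.4", "agent": "Mozilla/4.0", "time": t0},
        {"ip": "1.2.3.4", "agent": "Lynx/2.8", "time": t0},  # same IP, other user
        {"ip": "1.2.3.4", "agent": "Mozilla/4.0", "time": t0 + timedelta(hours=2)},
    ]
    for user, session, _ in identify(entries):
        print(user, "session", session)
```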


The Pre-Service and Post-Transcoding Method for Enhancing the Response Time of Mobile Web Service (모바일 웹 서비스의 응답시간을 향상시키기 위한 선 서비스 후 변환 방법)

  • Kang, Eui-Sun;Park, Dae-Hyuck;Lim, Young-Hwan
    • The KIPS Transactions:PartD / v.14D no.7 / pp.783-790 / 2007
  • One factor to consider when providing wireless web service from PC web pages is the difference in hardware environment between a PC and a mobile device: producing mobile contents that suit the environment of the connected terminal takes time. Therefore, the server side must take both response time and server disk capacity into account, since response time is delayed by content conversion, and disk capacity is needed to store multiple versions of each content item. This paper suggests a pre-service and post-transcoding method that provides faster response time for a mobile terminal. The pre-service minimizes response time by giving top priority to serving contents already saved in the cache whenever possible, even if the quality of the contents served to the mobile terminal may be lower. After the pre-service is provided, the mobile content is transcoded for the terminal. The performance of the proposed method was compared through experiments, and the results of the analysis are described.
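A minimal sketch of the pre-service idea with hypothetical names: on a request, any cached version of the content is served immediately, even one transcoded for a different terminal, and an exact transcoding job is queued so the next request gets the proper version.

```python
import queue
import threading

cache = {}                       # (content_id, profile) -> transcoded bytes
jobs = queue.Queue()             # pending post-transcoding work

def transcoder_loop(transcode):
    """Background worker: produce the exact version after the response."""
    while True:
        content_id, profile = jobs.get()
        if (content_id, profile) not in cache:
            cache[(content_id, profile)] = transcode(content_id, profile)
        jobs.task_done()

def request(content_id, profile, transcode):
    """Serve from cache first; transcode for this terminal later."""
    exact = cache.get((content_id, profile))
    if exact is not None:
        return exact                          # best case: exact hit
    # Pre-service: fall back to any cached version of this content,
    # accepting lower quality to avoid the transcoding delay now.
    for (cid, _), data in list(cache.items()):
        if cid == content_id:
            jobs.put((content_id, profile))   # post-transcoding for next time
            return data
    # Nothing cached at all: transcode synchronously this once.
    data = transcode(content_id, profile)
    cache[(content_id, profile)] = data
    return data

# Usage: start one worker thread, then answer requests.
# threading.Thread(target=transcoder_loop, args=(my_transcode,), daemon=True).start()
```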

A Performance Study on CPU-GPU Data Transfers of Unified Memory Device (통합메모리 장치에서 CPU-GPU 데이터 전송성능 연구)

  • Kwon, Oh-Kyoung;Gu, Gibeom
    • KIPS Transactions on Computer and Communication Systems / v.11 no.5 / pp.133-138 / 2022
  • Recently, as GPU performance has improved in HPC and artificial intelligence, GPU use is becoming more common, but GPU programming remains a big obstacle in terms of productivity. In particular, because managing host memory and GPU memory separately is difficult, research on both convenience and performance is active, and various CPU-GPU memory transfer programming methods have been suggested. Meanwhile, many SoC (System on a Chip) products, such as the Apple M1 and NVIDIA Tegra, that bundle the CPU, GPU, and unified memory into one silicon package have recently emerged. In this study, we examine the performance of CPU-GPU data transfers on such unified memory devices, which show characteristics different from the conventional environment in which host memory and GPU memory are separate. We compare performance by CPU-GPU data transfer method on NVIDIA SoC chips, which are unified memory devices, and on an NVIDIA SXM-based V100 GPU device. As the experimental workload for the performance comparison, we used a two-dimensional matrix transposition example frequently used in HPC applications. We analyzed the following performance factors: the difference in GPU kernel performance according to the CPU-GPU memory transfer method for each GPU device, the transfer performance difference between page-locked and pageable memory, overall performance, and performance by workload size. Through this experiment, it was confirmed that the NVIDIA Xavier can maximize the benefits of unified memory in the SoC chip by supporting I/O cache coherency.
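As an illustration of the comparison described above, here is a small sketch assuming CuPy and an NVIDIA GPU (not the paper's benchmark code): it times the 2D matrix-transpose workload once with ordinary pageable host memory and once with page-locked (pinned) memory.

```python
import time
import numpy as np
import cupy as cp

N = 4096    # assumed workload size; the paper sweeps several sizes

def time_transfer_and_transpose(host_array):
    """Time host-to-device copy, transpose on the GPU, and copy back."""
    cp.cuda.Device(0).synchronize()
    start = time.perf_counter()
    d = cp.asarray(host_array)        # H2D transfer
    dt = cp.ascontiguousarray(d.T)    # transpose kernel on the GPU
    out = cp.asnumpy(dt)              # D2H transfer
    cp.cuda.Device(0).synchronize()
    return time.perf_counter() - start, out

a = np.random.rand(N, N).astype(np.float32)

# Pageable host memory: an ordinary NumPy allocation.
t_pageable, _ = time_transfer_and_transpose(a)

# Page-locked (pinned) host memory: wrap a pinned buffer in a NumPy view.
pinned = cp.cuda.alloc_pinned_memory(a.nbytes)
b = np.frombuffer(pinned, a.dtype, a.size).reshape(a.shape)
b[...] = a
t_pinned, _ = time_transfer_and_transpose(b)

print(f"pageable: {t_pageable:.4f}s  pinned: {t_pinned:.4f}s")
```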

A Scheme on High-Performance Caching and High-Capacity File Transmission for Cloud Storage Optimization (클라우드 스토리지 최적화를 위한 고속 캐싱 및 대용량 파일 전송 기법)

  • Kim, Tae-Hun;Kim, Jung-Han;Eom, Young-Ik
    • The Journal of Korean Institute of Communications and Information Sciences / v.37 no.8C / pp.670-679 / 2012
  • The recent spread of cloud computing has increased the amount of stored data and, with it, the cost of storing that data. Growing data and service requests from users also increase the load on cloud storage. There have been many works that try to provide low-cost, high-performance schemes for distributed file systems. However, most of them handle parallel and random data accesses, as well as frequent small workloads, poorly. Recently, improving the performance of distributed file systems with caching technology has been getting much attention. In this paper, we propose CHPC (Cloud storage High-Performance Caching), a framework providing parallel caching, distributed caching, and proxy caching in distributed file systems. This study compares the proposed framework with existing cloud systems with regard to reduction of the server's disk I/O, prevention of server-side bottlenecks, deduplication of the page caches in each client, and improvement of overall IOPS. As a result, we show some optimization possibilities for cloud storage systems based on evaluations and comparisons with other conventional methods.
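One of the points compared above is deduplication of page caches across clients. As a loose illustration with hypothetical names, the sketch below keys cached blocks by content hash so identical blocks referenced by different files occupy a single slot.

```python
import hashlib

class DedupCache:
    """Content-addressed cache: identical blocks are stored once.

    A per-file index maps (file, block number) to the hash of the
    block's contents, and the store maps each hash to one copy.
    """
    def __init__(self):
        self.index = {}     # (file_id, block_no) -> content hash
        self.store = {}     # content hash -> (block bytes, refcount)

    def put(self, file_id, block_no, data: bytes):
        h = hashlib.sha256(data).hexdigest()
        self.index[(file_id, block_no)] = h
        body, refs = self.store.get(h, (data, 0))
        self.store[h] = (body, refs + 1)   # dedup: only bump the refcount

    def get(self, file_id, block_no):
        h = self.index.get((file_id, block_no))
        return self.store[h][0] if h else None

if __name__ == "__main__":
    c = DedupCache()
    block = b"x" * 4096
    c.put("fileA", 0, block)
    c.put("fileB", 7, block)      # same contents, different file
    print(len(c.store))           # 1: only one physical copy is cached
```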

Optimizing LRU Lock Management in the Linux Kernel for Improving Parallel Write Throughput in Many-Core CPU Systems (매니코어 CPU 시스템의 병렬 쓰기 성능 향상을 위한 리눅스 커널의 LRU 관리 최적화 기법)

  • Eun-Kyu Byun;Gibeom Gu;Kwang-Jin Oh;Jiwoo Bang
    • KIPS Transactions on Computer and Communication Systems / v.12 no.7 / pp.209-216 / 2023
  • Modern HPC systems are equipped with many-core CPUs with dozens of cores. When performing parallel I/O on such a system, scalability is limited by the LRU lock management policy of the Linux kernel. This study proposes FinerLRU to solve this problem. FinerLRU improves the parallel write performance of file systems that use the buffer cache through fine-grained lock management, increasing the number of LRU locks up to the maximum number of cores. The proposed method was implemented in Linux 5.18.11, and its performance was measured on two types of CPUs with different characteristics, Intel Icelake Xeon and Intel Knights Landing; a performance improvement of about two times was obtained on both types of systems.
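A minimal user-space sketch of the fine-grained locking idea (not the kernel patch itself): the single LRU lock is split into per-shard locks with pages hashed to shards, so writers on different cores rarely contend.

```python
import threading
from collections import OrderedDict

class ShardedLRU:
    """LRU split into independently locked shards, up to one per core.

    Instead of one global lock serializing every LRU update, each page
    hashes to a shard with its own lock, mimicking the FinerLRU idea
    of raising the lock count toward the number of cores.
    """
    def __init__(self, num_shards, capacity_per_shard):
        self.shards = [OrderedDict() for _ in range(num_shards)]
        self.locks = [threading.Lock() for _ in range(num_shards)]
        self.capacity = capacity_per_shard

    def _shard(self, page_id):
        return hash(page_id) % len(self.shards)

    def touch(self, page_id, data):
        """Insert or refresh a page; only this shard's lock is taken."""
        i = self._shard(page_id)
        with self.locks[i]:
            shard = self.shards[i]
            shard[page_id] = data
            shard.move_to_end(page_id)
            if len(shard) > self.capacity:
                shard.popitem(last=False)   # evict within this shard only

# With num_shards equal to the core count, concurrent writers mostly
# touch different shards and no longer serialize on a single LRU lock.
cache = ShardedLRU(num_shards=64, capacity_per_shard=1024)
```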