• Title/Summary/Keyword: pre-fetching

Pre-Fetching Strategies Based on User Interactions in Multi-Channel Environments (사용자 인터랙션을 이용한 다중채널 환경에서의 프리페칭 전략)

  • Choi, Junwan;Lee, Choonhwa
    • Proceedings of the Korea Information Processing Society Conference / 2010.11a / pp.952-954 / 2010
  • P2P protocol technology, which originally aimed at efficient file sharing, is gradually shifting toward multimedia streaming. In swarming-based P2P streaming systems, channel changes or playback-point changes around a video cause delays, which is a pressing problem that P2P systems must solve. Existing research addresses this delay with prefetching strategies, but these do not consider the viewing patterns of all users. In this paper, we propose a system that supports prefetching by using social meta-data such as user interactions.
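The idea of the proposal can be illustrated with a minimal sketch (the function and segment IDs below are hypothetical, not from the paper): segments that many users switch or seek to are treated as social meta-data and prioritized for prefetching.

```python
from collections import Counter

def rank_prefetch_candidates(interactions, k=3):
    """Rank video segments by aggregated user interactions
    (e.g., channel switches, seeks to a playback point) and
    return the top-k candidates to prefetch."""
    popularity = Counter(interactions)
    return [seg for seg, _ in popularity.most_common(k)]

# Segments that many viewers jump to are prefetched first.
candidates = rank_prefetch_candidates(
    ["ch2:s0", "ch1:s5", "ch2:s0", "ch3:s1", "ch2:s0", "ch1:s5"], k=2)
# candidates == ["ch2:s0", "ch1:s5"]
```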

AN EFFECTIVE SEGMENT PRE-FETCHING FOR SHORT-FORM VIDEO STREAMING

  • Nguyen Viet Hung;Truong Thu Huong
    • International Journal of Computer Science & Network Security / v.23 no.3 / pp.81-93 / 2023
  • Short-form video platforms like TikTok have recently surged in popularity. Short-form videos are significantly shorter than traditional videos, and viewers regularly switch between different types of content. A successful prefetching strategy is therefore essential for this novel type of video. This study provides a resource-efficient prefetching technique for streaming short-form videos. The suggested solution dynamically adjusts the quantity of prefetched video data based on user viewing habits and network traffic conditions. Experimental results demonstrate that, in comparison to baseline approaches, our method can reduce data waste by 21% to 83%, start-up latency by 50% to 99%, and total re-buffering time by 90% to 99%.
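The core of such a scheme can be sketched as a simple budget heuristic (the function and its parameters are illustrative assumptions, not the paper's algorithm): prefetch less for habitual skippers and on congested links, more for steady viewers with bandwidth headroom.

```python
def prefetch_budget(watch_ratio, throughput_kbps, bitrate_kbps,
                    min_chunks=1, max_chunks=5):
    """Decide how many chunks of the next short-form video to prefetch.

    watch_ratio: fraction of recent videos the user watched to the end
                 (a habitual skipper gets a smaller budget).
    throughput_kbps / bitrate_kbps: how much network headroom is left.
    """
    headroom = max(throughput_kbps / bitrate_kbps - 1.0, 0.0)
    budget = min_chunks + watch_ratio * headroom * (max_chunks - min_chunks)
    return max(min_chunks, min(max_chunks, round(budget)))

# A skipper on a tight link gets the minimum; a steady viewer on a
# fast link gets the maximum number of prefetched chunks.
prefetch_budget(0.1, 3000, 2500)  # -> 1
prefetch_budget(0.9, 8000, 2500)  # -> 5
```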

Application-Oriented Context Pre-fetch Method for Enhancing Inference Performance in Ontology-based Context Management (온톨로지 기반의 상황정보관리에서 추론 성능 향상을 위한 어플리케이션 지향적 상황정보 선인출 기법)

  • Lee Jae-Ho;Park In-Suk;Lee Dong-Man;Hyun Soon-Joo
    • Journal of KIISE: Computing Practices and Letters / v.12 no.4 / pp.254-263 / 2006
  • Ontology-based context models are widely used in ubiquitous computing environments because of their advantages in acquiring conceptual context through inference, context sharing, and context reuse. Among these benefits, inference enables context-aware applications to use conceptual contexts that cannot be acquired by sensors. However, inference causes processing delay and thus becomes a major obstacle to the implementation of context-aware applications; the delay grows as the amount of context increases. In this paper, we propose a context pre-fetch method that reduces the amount of context to be processed in working memory in an attempt to speed up inference. For this, we extend the query-tree method to identify the contexts relevant to the queries of a context-aware application. By keeping the pre-fetched contexts in working memory optimal, the processing delay of inference is reduced without losing the benefits of the ontology-based context model. We apply the proposed scheme to our ubiquitous computing middleware, Active Surroundings, and demonstrate the performance enhancement through experiments.
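The pre-fetch step can be sketched as a relevance filter (a minimal illustration with hypothetical triples; the paper's actual mechanism is a query-tree extension): only context facts that can match an application's registered queries are loaded into working memory before inference.

```python
def prefetch_contexts(context_store, query_terms):
    """Load into working memory only the context triples whose
    subject or predicate appears in the application's registered
    queries, shrinking the fact base the reasoner must scan."""
    working_memory = [
        (s, p, o) for (s, p, o) in context_store
        if s in query_terms or p in query_terms
    ]
    return working_memory

store = [
    ("alice", "locatedIn", "room101"),
    ("room101", "temperature", "22C"),
    ("printer7", "status", "idle"),
]
# An application that only asks about location and temperature
# never pulls the printer facts into working memory.
wm = prefetch_contexts(store, {"locatedIn", "temperature"})
# wm == [("alice", "locatedIn", "room101"), ("room101", "temperature", "22C")]
```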

Improving Prefetching Effects by Exploiting Reference Patterns (참조패턴을 이용한 선반입의 개선)

  • Lee, Hyo-Jeong;Doh, In-Hwan;Noh, Sam-H.
    • Journal of KIISE: Computing Practices and Letters / v.14 no.2 / pp.226-230 / 2008
  • Prefetching is one of the most widely used techniques for improving I/O performance, but it has been reported that prefetching can produce adverse results on some reference patterns. This paper proposes a prefetching framework, IPRP (Improving Prefetching Effects by Exploiting Reference Patterns), that can easily be layered on existing prefetching techniques. IPRP detects reference patterns online and controls prefetching according to the characteristics of the detected pattern. In our experiments, we applied IPRP to Linux read-ahead prefetching. IPRP clearly prevented the adverse cases in which Linux read-ahead prefetching increases total execution time by about 40% to 70%. When Linux read-ahead prefetching was beneficial, IPRP with read-ahead achieved similar or slightly better execution times. These results show that IPRP can complement and improve legacy prefetching techniques efficiently.
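The control idea can be sketched as follows (a simplified illustration under assumed names, not the paper's implementation): classify the recent reference pattern online and enable read-ahead only when it looks sequential.

```python
def classify_pattern(offsets, window=8):
    """Classify the recent reference pattern of a file as
    'sequential' or 'random' from its last few block offsets."""
    recent = offsets[-window:]
    steps = [b - a for a, b in zip(recent, recent[1:])]
    if steps and all(s == 1 for s in steps):
        return "sequential"
    return "random"

def readahead_size(offsets, max_ra=32):
    """Enable aggressive read-ahead only for sequential patterns;
    suppress it when the pattern looks random, where prefetching
    is reported to hurt performance."""
    return max_ra if classify_pattern(offsets) == "sequential" else 0

readahead_size([10, 11, 12, 13, 14])   # -> 32 (sequential scan)
readahead_size([10, 400, 3, 77, 250])  # -> 0  (random access)
```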

Performance Evaluation of Efficient Handover Latency Using MIH Services in MIPv4 (MIH를 이용한 효율적인 MIPv4망의 구성에 관한 연구)

  • Kim, Ki-Yong;Jang, Jong-Wook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2007.10a / pp.75-78 / 2007
  • Mobile IP provides hand-held devices with mobility, allowing users to work over the network. However, handover between access routers causes network delays and data loss. L2 trigger handover anticipates an upcoming handover and executes the L3 handover before the L2 handover takes place, thereby reducing overall handover latency; it remains an issue, however, because the handover latency between access routers (ARs) is not completely eliminated. In this paper, we consider an MIPv4 network that uses MIH: when a handover is about to occur at the mobile node (MN), an MIH table is consulted to pre-fetch the data needed for the handover. When a handover is anticipated, this improves on the initialization time of the L2 trigger approach. Furthermore, we find that the handover can be executed with a shorter initialization time even in smaller and narrower overlap regions.
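The table-lookup step can be sketched as follows (the table contents and field names are hypothetical; MIH itself is standardized in IEEE 802.21): when link-layer measurements predict a handover to a given access point, the MN fetches the target access router's parameters ahead of time so that L3 registration can begin before the L2 handover completes.

```python
# Hypothetical MIH information table: candidate access points mapped
# to the access-router parameters the MN needs before it moves.
MIH_TABLE = {
    "AP-17": {"access_router": "AR-2", "coa_prefix": "192.0.2.0/24"},
    "AP-23": {"access_router": "AR-3", "coa_prefix": "198.51.100.0/24"},
}

def prefetch_handover_info(predicted_ap):
    """When the link layer predicts a handover to predicted_ap,
    pre-fetch the target AR and care-of-address prefix from the
    MIH table, instead of discovering them after L2 handover."""
    return MIH_TABLE.get(predicted_ap)

info = prefetch_handover_info("AP-17")
# info["access_router"] == "AR-2"; an unknown AP yields None and
# falls back to normal (slower) discovery after the handover.
```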

Research on Web Cache Infection Methods and Countermeasures (웹 캐시 감염 방법 및 대응책 연구)

  • Hong, Sunghyuck;Han, Kun-Hee
    • Journal of Convergence for Information Technology / v.9 no.2 / pp.17-22 / 2019
  • Caching is a technique that improves the client's response time, thereby reducing bandwidth usage and operating efficiently. However, the caching technique, like others, has vulnerabilities. Web caching is convenient, but it can be exploited through hacking and cause problems. Web cache problems are mainly caused by cache misses and excessive cache line fetches. When cache misses are frequent and excessive, the cache becomes a vulnerability, causing errors such as corruption of secure data and creating problems for both the client and the user's system. If users are aware of cache infections and the countermeasures against such errors, they will no longer suffer from cache errors or infections. Therefore, this study proposes countermeasures against four kinds of cache infections and errors, and suggests countermeasures against web cache infections.