• Title/Summary/Keyword: Cache Policy

A Scheme of Efficient Contents Service and Sharing By Associating Media Server with Location-Aware Overlay Network (미디어 서버와 위치-인지 오버레이 네트워크를 연계한 효율적 콘텐츠 공유 및 서비스 방법)

  • Chung, Won-Ho; Lee, Seung Yeon
    • Journal of Broadcast Engineering / v.23 no.1 / pp.26-35 / 2018
  • The recent development of overlay network technology enables distributed sharing of various types of content. Although an overlay network has great advantages as a huge content repository, it is practically difficult for it to directly provide Internet services such as content streaming. The media server, on the other hand, is specialized in content services and has excellent service capability, but it must cope with the huge volume of content that is constantly created, which requires large expansions of servers and storage and much effort for efficient management of the repository. Hence, associating an overlay network, with its huge storage, with a high-performance media server yields a strong synergy. In this paper, a location-aware scheme for constructing an overlay network and associating it with a media server is proposed, followed by a cache-based content management and service policy for efficient content service. The performance is analyzed for one representative content service, streaming.
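
The abstract does not spell out the cache-based service policy; the following is a minimal sketch of the general idea, assuming an LRU cache at the media server that falls back to the nearest overlay node on a miss. All names (MediaServerCache, OverlayNode, the distance field) are hypothetical, not the paper's implementation.

```python
from collections import OrderedDict

class OverlayNode:
    """Hypothetical overlay peer holding content at a known network distance."""
    def __init__(self, node_id, distance_ms, contents):
        self.node_id = node_id
        self.distance_ms = distance_ms      # location-awareness: smaller is closer
        self.contents = set(contents)

class MediaServerCache:
    """LRU cache at the media server; misses are served from the closest overlay node."""
    def __init__(self, capacity, overlay_nodes):
        self.capacity = capacity
        self.cache = OrderedDict()          # content_id -> data (placeholder strings here)
        self.overlay = overlay_nodes

    def get(self, content_id):
        if content_id in self.cache:        # cache hit: serve directly, refresh LRU order
            self.cache.move_to_end(content_id)
            return self.cache[content_id], "media-server"
        # cache miss: pick the closest overlay node that stores the content
        holders = [n for n in self.overlay if content_id in n.contents]
        if not holders:
            raise KeyError(content_id)
        nearest = min(holders, key=lambda n: n.distance_ms)
        data = f"<{content_id} from {nearest.node_id}>"
        self.cache[content_id] = data       # admit into the media server cache
        if len(self.cache) > self.capacity: # evict the least recently used entry
            self.cache.popitem(last=False)
        return data, nearest.node_id

nodes = [OverlayNode("p1", 12, {"movie-a"}), OverlayNode("p2", 48, {"movie-a", "movie-b"})]
server = MediaServerCache(capacity=2, overlay_nodes=nodes)
print(server.get("movie-a"))   # miss: fetched from p1 (closest holder), then cached
print(server.get("movie-a"))   # hit: served from the media server cache
```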

Efficient Method to Support Mobile Virtualization-based Cloud Resource Management (모바일 가상화기반 클라우드 자원관리를 지원하는 효율적 방법)

  • Kang, Yongho; Jang, Changbok; Lee, Wanjik; Heo, Seokyeol; Kim, Jooman
    • Journal of Digital Convergence / v.12 no.2 / pp.277-283 / 2014
  • Recently, various cloud services have been provided on mobile devices as well as on desktop PCs and servers. The number of smartphone users is growing rapidly, and they use their devices for a variety of services (cloud services, games, banking, mobile office, etc.), which has prompted research on utilizing the resources of the mobile devices themselves. In this paper, we suggest an efficient method of cloud resource management that exchanges information on the available physical resources (CPU, memory, storage, etc.) between mobile devices and uses the physical resource information of each device. The suggested technique makes it possible to guarantee real-time processing and to manage resources efficiently.
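
The abstract does not give the selection algorithm; the sketch below is only an illustration, assuming each device advertises its available CPU, memory, and storage and that a task is placed on the feasible device with the most CPU headroom. The record layout and scoring rule are assumptions.

```python
# Hypothetical resource records advertised by mobile devices (CPU fraction free, MB free).
devices = {
    "phone-a":  {"cpu_free": 0.55, "mem_free": 1200, "storage_free": 8000},
    "phone-b":  {"cpu_free": 0.20, "mem_free": 2600, "storage_free": 3000},
    "tablet-c": {"cpu_free": 0.70, "mem_free": 900,  "storage_free": 16000},
}

def pick_device(task, devices):
    """Return the device that satisfies the task's demand and has the most CPU headroom."""
    candidates = [
        (name, info) for name, info in devices.items()
        if info["mem_free"] >= task["mem"] and info["storage_free"] >= task["storage"]
    ]
    if not candidates:
        return None
    # simple policy: among feasible devices, prefer the largest free CPU fraction
    return max(candidates, key=lambda kv: kv[1]["cpu_free"])[0]

task = {"mem": 800, "storage": 500}
print(pick_device(task, devices))  # -> "tablet-c" under these assumed numbers
```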

Enhancing LRU Buffer Replacement Policy with Delayed Write of Not-cold-dirty-pages for Flash Memory (플래시 메모리를 위한 Not-cold-Page 쓰기지연을 통한 LRU 버퍼교체 정책 개선)

  • Jung Ho-Young; Park Sung-Min; Cha Jae-Hyuk; Kang Soo-Yong
    • Journal of KIISE: Computer Systems and Theory / v.33 no.9 / pp.634-641 / 2006
  • Flash memory has many advantages, such as non-volatility and fast I/O speed, but it also has disadvantages, such as the inability to update data in place and asymmetric read/write/erase speeds. For the performance of flash memory storage, it is essential that the buffer replacement algorithm reduce the number of write operations, which in turn determines the number of erase operations. This paper proposes a new buffer replacement algorithm that delays the writes of not-cold dirty pages in the buffer cache of flash storage. We show that the algorithm effectively decreases the number of write and erase operations without much degradation of the hit ratio, and as a result the overall flash I/O performance is improved.
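
The exact eviction rule is only summarized in the abstract; the sketch below captures the general idea under stated assumptions: at eviction time a clean victim is preferred within a bounded search window, so that writes of dirty pages that have not yet become cold are delayed. The class name, window size, and bookkeeping are assumptions, not the paper's algorithm.

```python
from collections import OrderedDict

class DelayedWriteLRU:
    """LRU buffer cache that prefers evicting clean pages, delaying the flash
    writes of dirty pages that are not yet cold (a sketch of the idea)."""
    def __init__(self, capacity, search_window=4):
        self.capacity = capacity
        self.window = search_window         # how far from the LRU end we look for a clean victim
        self.pages = OrderedDict()          # page_no -> {"dirty": bool}, LRU order
        self.flash_writes = 0

    def access(self, page_no, write=False):
        if page_no in self.pages:
            self.pages.move_to_end(page_no)
            self.pages[page_no]["dirty"] |= write
        else:
            if len(self.pages) >= self.capacity:
                self._evict()
            self.pages[page_no] = {"dirty": write}

    def _evict(self):
        lru_order = list(self.pages)        # least recently used first
        # look for a clean page within the window; evicting it costs no flash write
        for victim in lru_order[: self.window]:
            if not self.pages[victim]["dirty"]:
                del self.pages[victim]
                return
        # no clean page found: evict the true LRU page and pay the flash write if dirty
        victim = lru_order[0]
        if self.pages[victim]["dirty"]:
            self.flash_writes += 1
        del self.pages[victim]

cache = DelayedWriteLRU(capacity=3)
cache.access(1, write=True); cache.access(2); cache.access(3); cache.access(4)
print(cache.flash_writes)  # 0: the clean page 2 was evicted instead of dirty page 1
```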

Design of a Virtual Machine based on the Lua interpreter for the On-Board Control Procedure Execution Environment (탑재운영절차서 실행환경을 위한 Lua 인터프리터 기반의 가상머신 설계)

  • Kang, Sooyeon; Koo, Cheolhea; Ju, Gwanghyeok; Park, Sihyeong; Kim, Hyungshin
    • Journal of Satellite, Information and Communications / v.9 no.4 / pp.127-133 / 2014
  • In this paper, we present the design, functions, and performance analysis of a virtual machine (VM) based on the Lua interpreter for the On-Board Control Procedure Execution Environment (OEE). The OEE is required in order to operate the lunar explorer mission planned by the Korea Aerospace Research Institute (KARI) autonomously. The concept of the On-Board Control Procedure (OBCP) is already being applied to deep-space missions with long propagation delays and limited data transmission capacity, since it ensures the autonomy of the mission without ground intervention. The interpreter is the execution engine of the VM: it interprets high-level program code line by line and executes VM instructions, so its execution speed is much slower than that of natively compiled code. To overcome this, we design and implement the OEE using a register-based Lua interpreter as its execution engine. We present experimental results for a range of hardware configurations, such as use of the cache and the floating-point unit, and we expect these results to be utilized in the OBCP scheduling policy and in systems built on the Lua interpreter.
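
As a rough illustration of what a register-based interpreter's dispatch loop looks like (not KARI's OEE and not the actual Lua instruction set), the sketch below executes a tiny invented instruction format in which every operand is a register index or constant; all opcodes and the sample program are made up for illustration.

```python
# Toy register-based bytecode: (opcode, dst, a, b). Not the real Lua VM format.
LOADK, ADD, MUL, PRINT = range(4)

def run(program, num_regs=8):
    """Dispatch loop of a minimal register-based VM: fetch, decode, execute."""
    regs = [0] * num_regs
    for op, dst, a, b in program:
        if op == LOADK:                 # regs[dst] = constant a
            regs[dst] = a
        elif op == ADD:                 # regs[dst] = regs[a] + regs[b]
            regs[dst] = regs[a] + regs[b]
        elif op == MUL:                 # regs[dst] = regs[a] * regs[b]
            regs[dst] = regs[a] * regs[b]
        elif op == PRINT:               # print regs[a]
            print(regs[a])
    return regs

# (2 + 3) * 4, computed entirely in registers
program = [
    (LOADK, 0, 2, 0),
    (LOADK, 1, 3, 0),
    (ADD,   2, 0, 1),
    (LOADK, 3, 4, 0),
    (MUL,   4, 2, 3),
    (PRINT, 0, 4, 0),
]
run(program)   # prints 20
```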

Optimizing LRU Lock Management in the Linux Kernel for Improving Parallel Write Throughput in Many-Core CPU Systems (매니코어 CPU 시스템의 병렬 쓰기 성능 향상을 위한 리눅스 커널의 LRU 관리 최적화 기법)

  • Eun-Kyu Byun; Gibeom Gu; Kwang-Jin Oh; Jiwoo Bang
    • KIPS Transactions on Computer and Communication Systems / v.12 no.7 / pp.209-216 / 2023
  • Modern HPC systems are equipped with many-core CPUs containing dozens of cores. When performing parallel I/O on such a system, scalability is limited by the LRU lock management policy of the Linux kernel. This study proposes FinerLRU, an improved scheme that solves this problem. FinerLRU improves the parallel write performance of file systems that use the buffer cache through fine-grained lock management, increasing the number of LRU locks up to the number of cores. The proposed method was implemented in Linux 5.18.11, and its performance was measured on two CPUs with different characteristics, Intel Ice Lake Xeon and Intel Knights Landing; a performance improvement of about two times was obtained on both systems.
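
The kernel changes themselves are not shown in the abstract; the user-space sketch below only illustrates the general principle of splitting one global LRU lock into many finer locks, with pages hashed to one of N lists, each protected by its own lock so that writers touching different partitions do not contend. The structure and names are assumptions, not the FinerLRU patch.

```python
import threading
from collections import OrderedDict

class PartitionedLRU:
    """One LRU list (and lock) per partition instead of a single global lock."""
    def __init__(self, num_partitions):
        self.parts = [OrderedDict() for _ in range(num_partitions)]
        self.locks = [threading.Lock() for _ in range(num_partitions)]

    def _index(self, page_no):
        return hash(page_no) % len(self.parts)

    def touch(self, page_no):
        i = self._index(page_no)
        with self.locks[i]:                     # only this partition's lock is taken
            self.parts[i][page_no] = True
            self.parts[i].move_to_end(page_no)

    def evict_one(self, partition):
        with self.locks[partition]:             # eviction also locks a single partition
            if self.parts[partition]:
                self.parts[partition].popitem(last=False)

lru = PartitionedLRU(num_partitions=8)          # e.g., one partition per core

def writer(start):
    for page in range(start, start + 10000):
        lru.touch(page)

threads = [threading.Thread(target=writer, args=(i * 100000,)) for i in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(sum(len(p) for p in lru.parts))           # 40000 distinct pages touched in parallel
```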

An Application-Specific and Adaptive Power Management Technique for Portable Systems (휴대장치를 위한 응용프로그램 특성에 따른 적응형 전력관리 기법)

  • Egger, Bernhard; Lee, Jae-Jin; Shin, Heon-Shik
    • Journal of KIISE: Computer Systems and Theory / v.34 no.8 / pp.367-376 / 2007
  • In this paper, we introduce an application-specific, adaptive power management technique for portable systems that support dynamic voltage scaling (DVS). We exploit both the idle time of multitasking systems running soft real-time tasks and memory- or CPU-bound code regions. Detailed power and execution-time profiles guide an adaptive power manager (APM) that is linked to the operating system. A post-pass optimizer marks candidate regions for DVS by inserting calls to the APM. At runtime, the APM monitors the CPU's performance counters to dynamically determine the affinity of each marked region; for each region, it computes the voltage and frequency setting that is optimal in terms of energy consumption and switches the CPU to that setting while the region executes. Idle time is exploited by monitoring system idle time and switching to the most energy-economical setting without prolonging execution. We show that the method is most effective for periodic workloads such as video or audio decoding. We implemented our method in a multitasking operating system (Microsoft Windows CE) running on an Intel XScale processor and achieved up to 9% total system power savings over the standard power management policy, which puts the CPU into a low-power mode during idle periods.
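
The APM's decision rule is described only qualitatively; the sketch below is a minimal illustration of the underlying idea, assuming a fixed table of frequency/voltage operating points and a memory-boundness estimate derived from two counters (memory stall cycles over total cycles). The table values and thresholds are illustrative, not those of the paper.

```python
# Hypothetical DVS operating points: (frequency in MHz, core voltage in V).
SETTINGS = [(200, 0.9), (400, 1.1), (600, 1.3)]

def memory_boundness(mem_stall_cycles, total_cycles):
    """Fraction of cycles spent stalled on memory, per the performance counters."""
    return mem_stall_cycles / max(total_cycles, 1)

def choose_setting(mem_stall_cycles, total_cycles):
    """Memory-bound regions gain little from a fast clock, so scale frequency
    and voltage down with memory-boundness; CPU-bound regions keep full speed."""
    boundness = memory_boundness(mem_stall_cycles, total_cycles)
    if boundness > 0.6:        # heavily memory-bound: lowest frequency/voltage
        return SETTINGS[0]
    if boundness > 0.3:        # mixed region: middle setting
        return SETTINGS[1]
    return SETTINGS[2]         # CPU-bound: highest setting

print(choose_setting(mem_stall_cycles=700, total_cycles=1000))  # (200, 0.9)
print(choose_setting(mem_stall_cycles=100, total_cycles=1000))  # (600, 1.3)
```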