• Title/Summary/Keyword: Buffer-Sharing


Design and Evaluation of a Mechanism for the Decision of Dynamic Buffer Sharing Size to improve on admission rate (승인율 향상을 위한 동적 버퍼 공유 사이즈 결정 메카니즘의 설계 및 평가)

  • 박규석;송태섭;김연실
    • Proceedings of the Korea Multimedia Society Conference / 1998.04a / pp.361-365 / 1998
  • In systems supporting continuous-media data services, which are characterized by large data volumes, high transfer rates, and real-time constraints, the number of users that can be served is limited by the spare resources available, because such services generate intensive I/O. Building on a technique that caches and shares the interval between adjacent requests in the buffer, this paper determines the buffer sharing size dynamically and groups requests, thereby preventing adjacent blocks from occupying the buffer and reducing disk accesses. Simulation results also show that returning buffers as a group raises buffer utilization and frees spare resources, which improves resource utilization in admission control.
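
A minimal sketch of the interval-caching idea behind this approach (the class name, admission rule, and parameters below are illustrative assumptions, not the paper's actual mechanism): a new request is served from the shared buffer when the interval of blocks between it and the preceding request for the same media fits in the remaining buffer space; otherwise it consumes a disk stream.

```python
# Minimal sketch of interval caching for continuous-media requests.
# All names and the admission rule are illustrative assumptions, not the
# mechanism evaluated in the paper.

class IntervalCachingServer:
    def __init__(self, buffer_blocks, disk_streams):
        self.buffer_free = buffer_blocks   # shared buffer space (blocks)
        self.disk_free = disk_streams      # concurrent disk streams available
        self.last_position = {}            # media id -> position of the newest reader

    def admit(self, media_id, position):
        """Admit a request either from the shared buffer or from disk."""
        prev = self.last_position.get(media_id)
        if prev is not None:
            interval = abs(prev - position)  # blocks between adjacent requests
            if interval <= self.buffer_free:
                # Cache the interval: the new reader is fed from the buffer,
                # so no extra disk stream is consumed.
                self.buffer_free -= interval
                self.last_position[media_id] = position
                return "buffered"
        if self.disk_free > 0:
            self.disk_free -= 1
            self.last_position[media_id] = position
            return "disk"
        return "rejected"

server = IntervalCachingServer(buffer_blocks=100, disk_streams=2)
print(server.admit("movie-1", 0))    # disk
print(server.admit("movie-1", 30))   # buffered (30-block interval cached)
print(server.admit("movie-2", 0))    # disk
print(server.admit("movie-2", 500))  # rejected (interval too large, no disk stream)
```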

On Improving Wireless TCP Performance Using Supervisory Control (관리 제어를 이용한 무선 TCP 성능 향상에 관한 방법)

  • Byun, Hee-Jung
    • Journal of Institute of Control, Robotics and Systems / v.16 no.10 / pp.1013-1017 / 2010
  • This paper proposes a systematic approach to rate-based feedback control built on the supervisory control framework for discrete event systems. We design a supervisor that achieves the desired behavior for wireless TCP networks. Analysis and simulation results show that the controlled networks guarantee fair sharing of the available bandwidth and avoid the packet loss caused by buffer overflow in wireless TCP networks.
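
A toy illustration of rate-based feedback in this spirit (the thresholds, queue model, and back-off rule are invented for exposition and are not the supervisor designed in the paper): a supervisor observes the bottleneck queue and enables the sender's rate-increase event only while occupancy stays below a watermark, which keeps the buffer from overflowing.

```python
# Toy rate-based feedback loop in the spirit of supervisory control: a
# "supervisor" observes the bottleneck queue and enables the sender's
# rate-increase event only while occupancy is low. All constants and the
# queue dynamics are invented for illustration.

QUEUE_CAPACITY = 50     # packets the bottleneck buffer can hold
HIGH_WATERMARK = 35     # supervisor disables rate increases above this
SERVICE_RATE = 10       # packets drained by the link per tick

rate, queue = 1, 0
for tick in range(30):
    # Plant: packets arrive at the current rate and are drained by the link.
    queue = max(0, min(QUEUE_CAPACITY, queue + rate) - SERVICE_RATE)

    # Supervisor: permit the controllable "increase" event only when safe.
    if queue < HIGH_WATERMARK:
        rate += 1                     # event enabled: probe for more bandwidth
    else:
        rate = max(1, rate // 2)      # event disabled: back off to protect the buffer

    if tick % 5 == 4:
        print(f"tick={tick:2d} rate={rate:2d} queue={queue:2d}")
```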

Buffer Sharing Techniques for Improving Performance of VOD Server (VOD 서버의 성능향상을 위한 버퍼공유기법)

  • 류혜선;김봉진;진민
    • Proceedings of the Korean Information Science Society Conference / 1998.10b / pp.286-288 / 1998
  • This paper proposes a buffer sharing technique for a VOD server that flexibly changes the buffer sharing size according to resource utilization and service conditions, increasing the benefit of buffer sharing and improving system performance so that the requests of more users can be accommodated. When sharing pairs have preempted the entire buffer and no new service can be admitted, if spare disk bandwidth exists, a sharing pair is split so that a new service can be provided without affecting existing services.
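
The split-on-demand idea can be sketched roughly as follows (the data structures and the choice of which pair to split are assumptions for illustration): when the shared buffer is exhausted but spare disk bandwidth remains, one sharing pair is broken up, its buffer space is reclaimed for the new request, and the separated stream continues from disk without disturbing existing services.

```python
# Sketch of splitting a buffer-sharing pair to admit a new request when
# the buffer is exhausted but spare disk bandwidth remains. Structures and
# policy (split the pair holding the largest buffer) are assumptions.

def admit_new_request(sharing_pairs, buffer_free, disk_free, needed_buffer):
    """Return (admitted, buffer_free, disk_free) after trying to admit."""
    if buffer_free >= needed_buffer:
        return True, buffer_free - needed_buffer, disk_free
    if disk_free >= 1 and sharing_pairs:
        # Split the pair occupying the most buffer; its follower switches to
        # disk, so existing streams are not disturbed.
        victim = max(sharing_pairs, key=lambda p: p["buffer"])
        sharing_pairs.remove(victim)
        buffer_free += victim["buffer"]
        disk_free -= 1
        if buffer_free >= needed_buffer:
            return True, buffer_free - needed_buffer, disk_free
    return False, buffer_free, disk_free

pairs = [{"id": "A", "buffer": 40}, {"id": "B", "buffer": 25}]
print(admit_new_request(pairs, buffer_free=10, disk_free=2, needed_buffer=30))
# -> (True, 20, 1): pair A is split, the new stream is admitted from the buffer
```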

Performance Analysis of a Shared-Buffer ATM Switch with Bursty Input Traffic Using Simulation (시뮬레이션을 이용한 버스티 입력 트래픽을 가진 공유 버퍼형 ATM 스위치의 성능분석)

  • 김지수
    • Proceedings of the Korea Society for Simulation Conference / 1999.04a / pp.1-5 / 1999
  • An ATM switch is the basic component of an ATM network; its function is to switch incoming cells arriving at an input port to the output port associated with the appropriate virtual path. For an ATM switch with a buffer-sharing scheme, performance analysis is very difficult because of the interactions between the address queues. In this paper, the influence of the degree of traffic burstiness and of several traffic routing properties is investigated by simulation. In addition, several cell access strategies, including priority access and cell dropping, are compared in terms of cell loss probability.
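
A compact simulation in the same spirit (ON/OFF bursty sources feeding a fully shared buffer; all parameters and the drop policy are invented) shows how cell loss probability can be estimated by counting cells discarded when the shared buffer is full.

```python
# Minimal shared-buffer ATM switch simulation with bursty (ON/OFF) sources.
# Parameters and the drop policy are illustrative, not those of the paper.
import random

random.seed(1)
PORTS, SHARED_BUFFER, SLOTS = 4, 16, 20000
P_ON, P_OFF = 0.5, 0.05         # per-slot probabilities of turning ON / OFF

queues = [0] * PORTS            # per-output address queues sharing one buffer pool
on = [False] * PORTS
arrived = lost = 0

for _ in range(SLOTS):
    for i in range(PORTS):
        # Two-state ON/OFF source: bursts of cells while ON, silence while OFF.
        on[i] = (random.random() < P_ON) if not on[i] else (random.random() >= P_OFF)
        if on[i]:                       # one cell arrives while the source is ON
            arrived += 1
            if sum(queues) < SHARED_BUFFER:
                queues[random.randrange(PORTS)] += 1   # uniform routing to outputs
            else:
                lost += 1               # shared buffer full: cell is dropped
    for i in range(PORTS):              # each output transmits one cell per slot
        if queues[i]:
            queues[i] -= 1

print(f"cell loss probability: {lost / arrived:.5f}")
```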

A Global Buffer Manager for a Shared Disk File System in SAN Clusters (SAN 환경에서 공유 디스크 파일 시스템을 위한 전역 버퍼 관리자)

  • 박선영;손덕주;신범주;김학영;김명준
    • Journal of KIISE: Computing Practices and Letters / v.10 no.2 / pp.134-145 / 2004
  • With the rapid growth in the amount of data transferred on the Internet, traditional storage systems have reached the limits of their capacity and performance. A SAN (Storage Area Network), which connects hosts to disks through Fibre Channel switches, is a powerful way to scale data storage and servers. In this environment, maintaining data consistency among hosts is an important issue because multiple hosts share the files on the disks attached to the SAN. To preserve data consistency, each host can perform disk I/O whenever disk read and write operations are requested; however, frequent disk I/O requests degrade the overall performance of a SAN cluster. In this paper, we introduce the SANtopia global buffer manager, which improves the performance of a SAN cluster by reducing the number of disk I/Os. We describe its design and algorithms, which provide a buffer cache sharing mechanism among the hosts in the SAN cluster. Micro-benchmark results for block I/O operations show that the global buffer manager achieves a speed-up of 1.8-12.8 over the existing method that uses disk I/O operations. File system micro-benchmark results also show that the SANtopia file system with the global buffer manager improves performance by a factor of 1.06 for directories and 1.14 for files compared with the file system without it.
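
The buffer-cache sharing idea can be pictured with a simplified lookup path (the class names and interfaces below are hypothetical, not SANtopia's actual API): a block is looked up in the local cache first, then in other hosts' caches over the network, and is read from disk only on a global miss.

```python
# Simplified global buffer cache lookup for a shared-disk cluster:
# local cache -> another host's cache -> disk. All names are hypothetical.

class Host:
    def __init__(self, name):
        self.name = name
        self.cache = {}          # block number -> block data

class GlobalBufferManager:
    def __init__(self, hosts, disk):
        self.hosts = hosts
        self.disk = disk         # block number -> block data

    def read_block(self, host, blkno):
        if blkno in host.cache:                      # 1. local hit
            return host.cache[blkno], "local"
        for peer in self.hosts:                      # 2. remote hit (network copy)
            if peer is not host and blkno in peer.cache:
                host.cache[blkno] = peer.cache[blkno]
                return host.cache[blkno], f"remote:{peer.name}"
        data = self.disk[blkno]                      # 3. global miss: disk I/O
        host.cache[blkno] = data
        return data, "disk"

disk = {7: b"block-7"}
a, b = Host("A"), Host("B")
gbm = GlobalBufferManager([a, b], disk)
print(gbm.read_block(a, 7))   # read from disk on first access
print(gbm.read_block(b, 7))   # served from host A's cache, no disk I/O
print(gbm.read_block(b, 7))   # local cache hit
```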

Cell Marking Priority Control Considering User Level Priority in ATM Network (ATM 네트워크에서 사용자 레벨 우선 순위를 고려한 셀 마킹 및 우선 순위 제어)

  • O, Chang-Se;Kim, Tae-Yun
    • The Transactions of the Korea Information Processing Society / v.1 no.4 / pp.490-501 / 1994
  • In this study, the problems of the cell marking method used in ATM network traffic control are presented, and an extended cell marking method that takes the user-level priority into account is proposed. Conventional traffic monitoring schemes set the CLP bit of a cell to 1 only when the traffic contract is violated, so the number of low-level cells increases and cells are demoted regardless of the user-level priority. A three-level priority control method combining the FCI bit with the CLP bit is also proposed; it divides CLP=0 cells into two levels. Consequently, the proposed method keeps more cells at a high level than the conventional one, and the actual loss of high-level cells can be reduced. The performance of the proposed scheme is analyzed with PBS (partial buffer sharing) using two thresholds for the proposed three levels. The results show that PBS with two thresholds gives more efficient control than a scheme with no priority or PBS with a single threshold.
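
Partial buffer sharing with two thresholds can be sketched roughly like this (the buffer size, threshold values, and level encoding are illustrative assumptions): the highest-priority cells may occupy the whole buffer, middle-priority cells are admitted only while occupancy is below the upper threshold, and the lowest-priority (CLP=1) cells only while it is below the lower threshold.

```python
# Rough sketch of partial buffer sharing (PBS) with two thresholds for
# three cell priority levels. Threshold values and the level encoding
# (0 = highest, 2 = lowest) are illustrative assumptions.

BUFFER_SIZE = 20
T_HIGH = 15   # level-1 cells accepted only while occupancy < T_HIGH
T_LOW = 10    # level-2 (CLP=1) cells accepted only while occupancy < T_LOW

def accept(queue_len, level):
    """Return True if a cell of the given priority level may enter the buffer."""
    if level == 0:
        return queue_len < BUFFER_SIZE
    if level == 1:
        return queue_len < T_HIGH
    return queue_len < T_LOW

for occupancy in (5, 12, 17):
    print(occupancy, [accept(occupancy, lvl) for lvl in (0, 1, 2)])
# 5  -> all three levels accepted
# 12 -> levels 0 and 1 accepted, level 2 (CLP=1) dropped
# 17 -> only level 0 accepted
```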

Improving the Availability of Scalable on-demand Streams by Dynamic Buffering on P2P Networks

  • Lin, Chow-Sing
    • KSII Transactions on Internet and Information Systems (TIIS) / v.4 no.4 / pp.491-508 / 2010
  • In peer-to-peer (P2P) on-demand streaming networks, the alleviation of server load depends on reciprocal stream sharing among peers. In general, on-demand video services enable clients to watch videos from beginning to end. As long as clients are able to buffer the initial part of the video they are watching, the on-demand service can provide access to the video to the next clients who request it. Therefore, the key challenge is how to keep the initial part of a video in a peer's buffer for as long as possible, and thus maximize the availability of the video for stream relay. In addition, to address the issues of delivering data over lossy networks and providing scalable quality of service for clients, the adoption of multiple description coding (MDC) has been shown by much prior work to be a feasible solution. In this paper, we propose a novel caching scheme for P2P on-demand streaming, called Dynamic Buffering. The proposed Dynamic Buffering relies on the features of MDC to gradually reduce the number of cached descriptions held in a client's buffers once the buffer is full. Preserving as many initial parts of descriptions in the buffer as possible, instead of losing them all at one time, effectively extends peers' service time. In addition, this study proposes a description distribution balancing scheme to further improve the use of resources. Simulation experiments show that Dynamic Buffering can make efficient use of cache space, reduce server bandwidth consumption, and increase the number of peers being served.
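
A minimal sketch of the gradual-reduction idea (the segment layout and eviction order are assumptions, not the paper's exact algorithm): when the buffer fills up, the peer frees space by discarding whole descriptions one at a time, starting from the least important one, so the initial parts of the remaining descriptions stay cached as long as possible.

```python
# Sketch of Dynamic Buffering with multiple description coding (MDC):
# when the buffer is full, drop the least important cached description
# instead of evicting the initial segments of every description.
# The segment layout and eviction order are illustrative assumptions.

class PeerBuffer:
    def __init__(self, capacity_segments):
        self.capacity = capacity_segments
        self.descriptions = {}   # description id -> cached segment numbers
        self.dropped = set()     # descriptions given up to free space

    def used(self):
        return sum(len(segs) for segs in self.descriptions.values())

    def cache_segment(self, desc_id, segment):
        if desc_id in self.dropped:
            return                               # this description was reduced away
        while self.used() >= self.capacity and len(self.descriptions) > 1:
            victim = max(self.descriptions)      # drop the least important description
            self.dropped.add(victim)
            del self.descriptions[victim]
        if desc_id in self.dropped or self.used() >= self.capacity:
            return
        self.descriptions.setdefault(desc_id, []).append(segment)

buf = PeerBuffer(capacity_segments=6)
for seg in range(4):                             # segments arrive for descriptions 0-2
    for desc in range(3):
        buf.cache_segment(desc, seg)
    # The number of cached descriptions shrinks gradually (3 -> 2 -> 1),
    # while the surviving descriptions keep their initial segments.
    print(f"after segment {seg}:", {d: len(s) for d, s in buf.descriptions.items()})
```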

A Cache Consistency Control for B-Tree Indices in a Database Sharing System (데이타베이스 공유 시스템에서 B-트리 인덱스를 위한 캐쉬 일관성 제어)

  • On, Gyeong-O;Jo, Haeng-Rae
    • The KIPS Transactions: Part D / v.8D no.5 / pp.593-604 / 2001
  • A database sharing system (DSS) is a system for high-performance transaction processing. In a DSS, the processing nodes are coupled via a high-speed network and share a common database at the disk level. Each node has local memory and a separate copy of the operating system. To reduce the number of disk accesses, each node caches data pages and index pages in its memory buffer. In general, B-tree index pages are accessed more often, and thus cached at more processing nodes, than their corresponding data pages. The B-tree also involves complicated operations such as Fetch, Fetch Next, Insertion, and Deletion. Therefore, an efficient cache consistency scheme supporting a high level of concurrency is required. In this paper, we propose cache consistency schemes that use the identifiers of index pages and the page_LSN of leaf pages. The proposed schemes can improve system throughput by reducing the message traffic between nodes and the number of index re-traversals.
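
One way to picture a page_LSN-based validity check (a simplified sketch with invented structures, not the exact protocol proposed in the paper): before using a cached leaf page, a node compares the cached page_LSN with the latest LSN recorded for that page and re-reads the page only if they differ, avoiding unnecessary index re-traversals and messages.

```python
# Simplified validity check for a cached B-tree leaf page using page_LSN.
# The structures below (global LSN table, cached page records) are invented
# to illustrate the idea, not the paper's actual protocol.

latest_lsn = {"leaf-17": 42}                 # authoritative LSN per index page

class NodeCache:
    def __init__(self):
        self.pages = {}                      # page id -> (page_lsn, contents)

    def get_leaf(self, page_id, read_from_disk):
        cached = self.pages.get(page_id)
        if cached and cached[0] == latest_lsn[page_id]:
            return cached[1], "cache hit (LSN unchanged)"
        # Stale or missing: fetch the current version and remember its LSN.
        contents = read_from_disk(page_id)
        self.pages[page_id] = (latest_lsn[page_id], contents)
        return contents, "re-read (LSN changed or not cached)"

def read_from_disk(page_id):
    return f"<contents of {page_id} at LSN {latest_lsn[page_id]}>"

node = NodeCache()
print(node.get_leaf("leaf-17", read_from_disk))   # re-read: not cached yet
print(node.get_leaf("leaf-17", read_from_disk))   # cache hit
latest_lsn["leaf-17"] = 43                        # another node updates the leaf
print(node.get_leaf("leaf-17", read_from_disk))   # re-read: stale copy detected
```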

Receiver-driven Cooperation-based Concurrent Multipath Transfer over Heterogeneous Wireless Networks

  • Cao, Yuanlong;Liu, Qinghua;Zuo, Yi;Huang, Minghe
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.7 / pp.2354-2370 / 2015
  • SCTP-based Concurrent Multipath Transfer (CMT) has been shown to be very useful for data delivery over multi-homed wireless networks. However, significant work is still ongoing to address some remaining limitations and challenges. The most important concern when applying CMT to data delivery is handling packet reordering and buffer blocking. Another concern is that current sender-based CMT solutions seldom consider balancing the overhead and sharing the load between the sender and receiver. This paper proposes a novel Receiver-driven Cooperation-based Concurrent Multipath Transfer solution (CMT-Rev) with the following aims: (i) to balance overhead and share load between the sender and receiver, by moving some functions, including congestion and flow control, from the sender to the receiver; (ii) to mitigate the data reordering and buffer blocking problems, by using an adaptive receiver-cooperative path aggregation model; and (iii) to adaptively transmit packets over multiple paths according to their receiver-inspired sending rate values, by employing a new receiver-aware data distribution scheduler. Simulation results show that CMT-Rev outperforms the existing CMT solutions in terms of data delivery performance.
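
The receiver-inspired scheduling idea can be approximated with a short sketch (the path names, rate values, and weighting rule are assumptions for illustration): the sender simply spreads outgoing packets across the paths in proportion to the sending rates the receiver reports for each path.

```python
# Sketch of a receiver-aware data distribution scheduler: packets are
# spread over multiple paths in proportion to the sending rates the
# receiver reports for each path. Rates and path names are illustrative.

def schedule(packet_ids, receiver_rates):
    """Assign each packet to the path with the most unused reported capacity."""
    total = sum(receiver_rates.values())
    credit = {path: 0.0 for path in receiver_rates}
    assignment = {}
    for pkt in packet_ids:
        for path, rate in receiver_rates.items():
            credit[path] += rate / total          # earn credit proportional to rate
        best = max(credit, key=credit.get)        # weighted round-robin pick
        credit[best] -= 1.0
        assignment[pkt] = best
    return assignment

rates = {"wlan": 6.0, "cellular": 2.0}            # receiver-inspired rate values
plan = schedule(range(8), rates)
print(plan)   # roughly 6 of 8 packets go over "wlan", 2 over "cellular"
```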