• Title/Summary/Keyword: Buffer Size

Optimal buffer partition for provisioning QoS of wireless network

  • Phuong Nguyen Cao;Dung Le Xuan;Quan Tran Hong
    • Proceedings of the IEEK Conference / summer / pp.57-60 / 2004
  • The next-generation wireless network is evolving toward an IP-based network that can provide various multimedia services. A key challenge in the wireless mobile Internet is supporting quality of service (QoS) over wireless access networks, and the DiffServ architecture has been proposed for this purpose. In this paper we propose an algorithm for optimal buffer partitioning that requires the minimal channel capacity to satisfy the QoS requirements of the input traffic. We use a partitioned buffer of size B to serve layered traffic at each DiffServ router. We consider a traffic model in which a single source generates traffic with J $(J\geq2)$ QoS classes, where QoS for class j is described by the loss probability $\varepsilon_j$. Traffic is admitted or rejected based on the buffer occupancy and its service class, and is generated by a heterogeneous Markov-modulated fluid source (MMFS).
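
A hedged sketch of the admission rule described in this abstract: the fragment below assumes the buffer partition is realized as per-class occupancy thresholds over a shared buffer of size B, with class-j traffic accepted only while the occupancy is below its threshold; the class and parameter names are illustrative, not the paper's algorithm.

```python
class PartitionedBuffer:
    """Shared buffer of size B with per-class admission thresholds.

    thresholds[j] is the highest occupancy at which class-j traffic is
    still admitted (hypothetical realization of the buffer partition).
    """

    def __init__(self, B, thresholds):
        assert list(thresholds) == sorted(thresholds) and thresholds[-1] <= B
        self.B = B
        self.thresholds = thresholds
        self.occupancy = 0

    def admit(self, qos_class, amount=1):
        """Admit `amount` units of the given QoS class, or reject them."""
        if self.occupancy + amount <= self.thresholds[qos_class]:
            self.occupancy += amount
            return True
        return False      # rejections accumulate into the loss probability eps_j

    def serve(self, amount=1):
        """Drain the buffer at the channel service rate."""
        self.occupancy = max(0, self.occupancy - amount)


# Example: B = 100 with two QoS classes; class 0 is admitted only below occupancy 60.
buf = PartitionedBuffer(B=100, thresholds=[60, 100])
```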

A 4-parallel Scheduling Architecture for High-performance H.264/AVC Deblocking Filter (고성능 H.264/AVC 디블로킹 필터를 위한 4-병렬 스케줄링 아키텍처)

  • Ko, Byung-Soo;Kong, Jin-Hyeung
    • Journal of the Institute of Electronics Engineers of Korea SD / v.49 no.8 / pp.63-72 / 2012
  • In this paper, we propose a parallel line- and block-edge filter architecture for a high-performance H.264/AVC deblocking filter targeting real-time Quad Full High Definition (Quad FHD) video processing. To improve throughput, we design a 4-parallel block edge filter with 16 line edge filters. To reduce the internal buffer size and processing cycles, we schedule the deblocking filtering in a 4-parallel zig-zag scan order, and we insert one delay cycle between block edge filterings to avoid data conflicts. An interleaving buffer is used as the internal buffer of the block edge filter so that the buffer can be shared, further reducing its size. The proposed architecture was synthesized with a 0.18 um standard cell library; the maximum operating frequency is 108 MHz and the gate count is 140.16 Kgates. Running at 90 MHz, the proposed H.264/AVC deblocking filter supports Quad FHD at 113.17 frames per second.
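
As a rough plausibility check of the throughput reported above, the sketch below back-computes the per-macroblock cycle budget. It assumes Quad FHD means 3840x2160 and that frame rate is simply clock / (macroblocks per frame x cycles per macroblock); the resulting ~24.5-cycle figure is derived here and is not stated in the paper.

```python
# Back-of-the-envelope throughput check (assumed model, not the paper's analysis).
CLOCK_HZ = 90_000_000          # operating frequency quoted for the fps figure
WIDTH, HEIGHT = 3840, 2160     # Quad FHD resolution (assumed)
MB_SIZE = 16                   # an H.264 macroblock covers 16x16 pixels

mbs_per_frame = (WIDTH // MB_SIZE) * (HEIGHT // MB_SIZE)   # 240 * 135 = 32,400
reported_fps = 113.17

cycles_per_mb = CLOCK_HZ / (reported_fps * mbs_per_frame)
print(f"{mbs_per_frame} MBs per frame -> ~{cycles_per_mb:.1f} cycles per macroblock")
# ~24.5 cycles/MB, which is plausible for a 4-parallel edge-filter schedule.
```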

Establishing optimal gap size for precast beam bridges with a buffer-gap-elastomeric bearings system

  • Farag, Mousa M.N.;Mehanny, Sameh S.F.;Bakhoum, Mourad M.
    • Earthquakes and Structures / v.9 no.1 / pp.195-219 / 2015
  • A partial (hybrid) seismic isolation scheme for precast girder bridges in the form of a "buffer-gap-elastomeric bearings" system has been endorsed in the literature as an efficient seismic design system. However, no guidelines exist for detailing an optimal gap size for different configurations. A numerical study is therefore carried out for different scenarios according to Eurocode seismic requirements in order to develop guidelines for selecting optimal buffer-gap arrangements for various design cases. Various schemes are designed for ductile and limited-ductility behavior of the bridge piers under different seismic demand levels. Seven real ground records are selected to perform incremental dynamic analysis of the bridges up to failure. Bridges with typical short and tall piers are studied, and different initial gap values at the piers are investigated, varying from a zero gap (i.e., fully locked) condition up to an initial gap at the piers equal to three quarters of the gap left at the abutments. Among the main conclusions is that the as-built initial gaps at the piers (especially large gap sizes ${\geq}1/2$ of the as-built gaps at the abutments) do not practically reduce the seismic design demand and do not affect the reserve capacity of the bridge against failure for bridges with tall piers, especially when these bridges are designed a priori for ductile behavior. On the contrary, the "buffer-gap-elastomeric bearings" system is more effective for bridge schemes with short piers, where there is a large difference between the stiffness of the bearings and that of the much stiffer squat piers supporting them, particularly for limited-ductility designs. This effectiveness is amplified further for larger initial as-built gap sizes at the piers.
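
The parameter sweep described above can be pictured as a small scenario grid. The enumeration below is purely illustrative: the pier types, ductility classes, and gap ratios are taken from the abstract, but the grid itself is not the paper's procedure.

```python
from itertools import product

# Illustrative enumeration of the study's design scenarios (assumed structure).
pier_types = ["short", "tall"]
ductility  = ["ductile", "limited-ductility"]
gap_ratios = [0.0, 0.25, 0.5, 0.75]   # pier gap as a fraction of the abutment gap

for pier, duct, ratio in product(pier_types, ductility, gap_ratios):
    # Each combination would be designed to Eurocode and pushed to failure with
    # incremental dynamic analysis over the seven selected ground records.
    print(f"pier={pier:5s}  design={duct:17s}  pier gap = {ratio:.2f} x abutment gap")
```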

Design and Implementation of File System Using Local Buffer Cache for Digital Convergence Devices (디지털 컨버전스 기기를 위한 지역 버퍼 캐쉬 파일 시스템 설계 및 구현)

  • Jeong, Geun-Jae;Cho, Moon-Haeng;Lee, Cheol-Hoon
    • The Journal of the Korea Contents Association / v.7 no.8 / pp.21-30 / 2007
  • With the growth of embedded systems and advances in semiconductor and storage devices, digital convergence devices are becoming ever more common. A digital convergence device integrates various functions such as communication, movie and audio playback, and an electronic dictionary; examples are portable multimedia players (PMPs), personal digital assistants (PDAs), and smart phones. These devices therefore need an efficient file system that manages and controls various types of files. In designing such a file system, the size constraints of small embedded systems as well as performance and compatibility should be taken into account. In this paper, we suggest a partial buffer cache technique: unlike a traditional buffer cache, the partial buffer cache is used only for FAT metadata and write-only data. Simulation results show that the write performance improves by more than 30% when the file size is larger than about 100 KBytes.
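
A minimal sketch of the caching policy described above, assuming each block can be tagged as FAT metadata or as belonging to a write-only open file; the class, the dict-backed store, and the `storage.write_block` interface are hypothetical placeholders rather than the paper's implementation.

```python
class PartialBufferCache:
    """Cache only FAT metadata blocks and blocks of write-only files.

    All other blocks bypass the cache and go straight to storage
    (illustrative policy sketch only).
    """

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = {}                              # block_no -> data

    def should_cache(self, is_fat_metadata, open_mode):
        return is_fat_metadata or open_mode == "w"

    def write(self, block_no, data, is_fat_metadata, open_mode, storage):
        if not self.should_cache(is_fat_metadata, open_mode):
            storage.write_block(block_no, data)       # uncached path
            return
        if block_no not in self.blocks and len(self.blocks) >= self.capacity:
            victim_no, victim_data = self.blocks.popitem()   # naive eviction
            storage.write_block(victim_no, victim_data)
        self.blocks[block_no] = data                  # written back later
```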

A Study on the Performance Analysis of Statistical Multiplexer by the Queueing Model (Queueing 모델에 의한 통계적 다중화기의 성능 분석에 관한 연구)

  • 이주식;김태준;김근배;이종현;임해진;박병철
    • The Journal of Korean Institute of Communications and Information Sciences / v.17 no.1 / pp.1-10 / 1992
  • In previous work, the performance analysis of the statistical multiplexer (SMUX) required a great deal of time because of its recursive method, so this paper proposes a new mathematical method for analyzing SMUX performance. A queueing model for the SMUX is constructed and analyzed based on the Go-back-N retransmission ARQ model, assuming an infinite buffer and a single server. The parameters that influence the buffer behaviour of the statistical multiplexer, namely the overflow probability, maximum buffer size, mean waiting time, and mean buffer size, are analyzed, and their relationships are presented in graphs and data that provide a guide to the buffer design problem. The results show that the SMUX is efficient when the traffic density is below 0.5 and the transmission error probability is below $10^{-3}$. This paper can therefore serve as a basic theoretical background prior to the implementation of an SMUX.
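
The buffer quantities listed in this abstract (overflow probability, mean waiting time, mean buffer size) can be illustrated with a plain M/M/1 queue; the sketch below is only that textbook illustration, since the paper's own model is based on Go-back-N ARQ and is considerably more detailed.

```python
def mm1_buffer_metrics(rho, service_rate=1.0, threshold=None):
    """Textbook M/M/1 illustration of the buffer quantities the paper studies.

    rho       : traffic density (arrival rate / service rate), must be < 1
    threshold : optional buffer level K for an overflow estimate P(N > K)
    Returns (mean buffer occupancy, mean waiting time, overflow probability).
    """
    assert 0 < rho < 1
    mean_queue = rho / (1 - rho)                    # E[N]
    mean_wait = mean_queue / (rho * service_rate)   # E[T] via Little's law
    overflow = rho ** (threshold + 1) if threshold is not None else None
    return mean_queue, mean_wait, overflow

# At the paper's suggested operating point (traffic density below 0.5):
print(mm1_buffer_metrics(rho=0.5, threshold=20))
```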

A Buffer Cache Replacement Algorithm for Considering both Hybrid Main Memory and Storage (하이브리드 메인 메모리와 스토리지의 특성을 고려한 버퍼 캐시 교체 정책)

  • Kang, Dong Hyun;Eom, Young Ik
    • Journal of KIISE / v.42 no.8 / pp.947-953 / 2015
  • PRAM is considered a potential successor to DRAM because of characteristics such as byte-addressability, non-volatility, and high density. To exploit these benefits, PRAM-based buffer cache replacement algorithms have been actively studied. However, most previous studies exploit the byte-level performance of PRAM only to a limited extent, focusing instead on its limited lifetime and its slower access latency compared to DRAM. In this paper, we propose a novel buffer cache replacement algorithm that fully considers both the byte-level performance of PRAM and the performance of secondary storage. To take advantage of small writes on PRAM, the proposed scheme keeps pages that are frequently accessed with small writes on PRAM and allows selective page migration from DRAM to PRAM. As a result, our scheme significantly reduces the number of PRAM writes. Experimental results on real workloads indicate that our scheme reduces the number of PRAM writes by up to 92% and improves performance by up to 62% compared to CLOCK.
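
The placement rule can be sketched roughly as below; the thresholds, counters, and names are hypothetical and only illustrate the idea of steering frequently accessed, small-write pages from DRAM to PRAM.

```python
SMALL_WRITE_BYTES = 256        # assumed cutoff for a "small" write
HOT_ACCESS_COUNT = 4           # assumed cutoff for "frequently accessed"

class Page:
    def __init__(self, page_no):
        self.page_no = page_no
        self.accesses = 0
        self.small_writes = 0
        self.location = "DRAM"   # pages start out in DRAM

def on_write(page, nbytes):
    """Update counters and migrate DRAM pages with frequent small writes to PRAM."""
    page.accesses += 1
    if nbytes <= SMALL_WRITE_BYTES:
        page.small_writes += 1
    if (page.location == "DRAM"
            and page.accesses >= HOT_ACCESS_COUNT
            and page.small_writes >= page.accesses // 2):
        page.location = "PRAM"   # selective DRAM -> PRAM migration
```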

Adaptive Row Major Order: a Performance Optimization Method of the Transform-space View Join (적응형 행 기준 순서: 변환공간 뷰 조인의 성능 최적화 방법)

  • Lee Min-Jae;Han Wook-Shin;Whang Kyu-Young
    • Journal of KIISE:Databases / v.32 no.4 / pp.345-361 / 2005
  • A transform-space index indexes objects represented as points in the transform space. An advantage of a transform-space index is that optimization of join algorithms using such indexes becomes relatively simple; the disadvantage is that these algorithms cannot be applied to original-space indexes such as the R-tree. As a way of overcoming this disadvantage, the authors earlier proposed the transform-space view join algorithm, which joins two original-space indexes in the transform space through the notion of a transform-space view: a virtual transform-space index that allows the join to be performed in the transform space using original-space indexes. In a transform-space view join algorithm, the order of accessing disk pages, for which various space-filling curves could be used, has a significant impact on join performance. In this paper, we propose a new space-filling curve called the adaptive row major order (ARM order). The ARM order adaptively controls the order of accessing pages and significantly reduces both the one-pass buffer size (the minimum buffer size required to guarantee one disk access per page) and the number of disk accesses for a given buffer size. Through analysis and experiments, we verify the effectiveness of the ARM order when used with the transform-space view join: it always outperforms existing methods in both measures, the one-pass buffer size and the number of disk accesses for a given buffer size. Compared to other conventional space-filling curves used with the transform-space view join, it reduces the one-pass buffer size by up to 21.3 times and the number of disk accesses by up to $74.6\%$; compared to existing spatial join algorithms that use R-trees in the original space, it reduces the one-pass buffer size by up to 15.7 times and the number of disk accesses by up to $65.3\%$.
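
The one-pass buffer size mentioned above can be made concrete with a small simulation. The sketch below only measures, for a given page-access order, the smallest buffer that lets every page be read from disk exactly once; it does not implement the ARM order itself, which is the paper's contribution.

```python
def one_pass_buffer_size(access_order):
    """Minimum number of page frames so that each distinct page in
    `access_order` is fetched from disk exactly once.

    A page must stay resident from its first to its last reference, so the
    answer is the peak number of simultaneously live pages.
    """
    last_use = {page: i for i, page in enumerate(access_order)}
    resident, peak = set(), 0
    for i, page in enumerate(access_order):
        resident.add(page)
        peak = max(peak, len(resident))
        if last_use[page] == i:          # page will never be referenced again
            resident.discard(page)
    return peak

# Example: a row-major sweep over the page pairs of a 3x3 join.
order = []
for r in range(3):
    for s in range(3):
        order += [f"R{r}", f"S{s}"]
print(one_pass_buffer_size(order))       # -> 4: all of S plus the current R page
```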

A Local Buffer Allocation Scheme for Multimedia Data on Linux (리눅스 상에서 멀티미디어 데이타를 고려한 지역 버퍼 할당 기법)

  • 신동재;박성용;양지훈
    • Journal of KIISE:Computing Practices and Letters / v.9 no.4 / pp.410-419 / 2003
  • The buffer cache of a general-purpose operating system such as Linux manages file data using a global block replacement policy and read-ahead. As a result, multimedia data, which have low locality of reference and varying consumption rates, show a low cache hit ratio and consume additional buffers because of read-ahead. In this paper we design and implement a new buffer allocation algorithm for multimedia data on Linux. Our approach keeps one read-ahead cache per opened multimedia file and dynamically changes the read-ahead group size based on the buffer consumption rate of that file. This distributes resources fairly and optimizes buffer consumption. We compare the system's performance with that of Linux 2.4.17 in terms of buffer consumption and buffer hit ratio.
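
The per-file adaptation could look roughly like the sketch below; the window bounds, the doubling/halving rule, and the `prefetch_fn` callback are invented for illustration and are not taken from the paper.

```python
MIN_GROUP, MAX_GROUP = 4, 64     # read-ahead group size in blocks (assumed bounds)

class PerFileReadAhead:
    """One read-ahead cache per opened multimedia file (illustrative)."""

    def __init__(self):
        self.group_size = MIN_GROUP
        self.prefetched = []                  # blocks fetched ahead of the reader

    def on_read(self, consumed_blocks_per_sec, prefetch_fn, next_block):
        # Grow the window for fast consumers and shrink it for slow ones,
        # so buffers are not wasted on streams that are read slowly.
        if consumed_blocks_per_sec > self.group_size:
            self.group_size = min(MAX_GROUP, self.group_size * 2)
        elif consumed_blocks_per_sec < self.group_size // 2:
            self.group_size = max(MIN_GROUP, self.group_size // 2)
        self.prefetched = prefetch_fn(next_block, self.group_size)
```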

An Improvement of Performance for Data Downstream in IEEE 802.11x Wireless LAN Networks (IEEE 802.11x 무선 랜에서의 데이터 다운스트림 성능 향상)

  • Kim, Ji-Hong;Kim, Yong-Hyun;Hong, Youn-Sik
    • Journal of the Institute of Electronics Engineers of Korea TC / v.43 no.11 s.353 / pp.149-158 / 2006
  • We propose a method for improving downstream TCP performance between a desktop PC as a fixed host and a PDA as a mobile host in a combined wired and wireless network based on IEEE 802.11x wireless LAN. When data are transmitted between these heterogeneous terminals, the receiving time during downstream transfers is up to 20% longer than during upstream transfers. The reason is that the congestion window size oscillates because the packet processing rate at the receiver is significantly lower than the packet sending rate at the sender, which increases the number of control packets needed to negotiate the window size. To mitigate these problems, we propose two distinct methods. First, increasing the PDA's buffer size at the application layer speeds up the internal processing of the TCP socket receive buffer, making the window size more stable, while the file access time on the PDA remains nearly constant as the buffer size increases; with a buffer size of 32,768 bytes the receiving time is 32% shorter than with 512 bytes. Second, a delay should be inserted between packets transmitted at the sender; with an inter-packet delay of 5 ms the receiving time is 7% shorter than without the delay.
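
In socket terms, the two remedies amount to enlarging the receiver's buffer and pacing the sender, roughly as in the hedged sketch below. The addresses, port, and 1 KB payload are placeholders, and the paper actually tuned an application-layer buffer on the PDA, whereas this sketch simply requests a larger SO_RCVBUF and inserts an explicit sleep to convey the idea.

```python
import socket
import time

def pda_receiver(port=5000):
    """Receiver side (PDA): request a 32,768-byte receive buffer."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 32768)
    srv.bind(("0.0.0.0", port))               # placeholder address
    srv.listen(1)
    conn, _ = srv.accept()
    while conn.recv(32768):                   # drain the stream until EOF
        pass
    conn.close()

def pc_sender(host, port=5000, chunks=100):
    """Sender side (desktop PC): pace transmissions with a 5 ms inter-packet delay."""
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect((host, port))
    payload = b"x" * 1024                     # placeholder 1 KB chunk
    for _ in range(chunks):
        cli.sendall(payload)
        time.sleep(0.005)                     # 5 ms delay between sends
    cli.close()
```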

A Real-Time Multiple Circular Buffer Model for Streaming MPEG-4 Media (MPEG-4 미디어 스트리밍에 적합한 실시간형 다중원형버퍼 모델)

  • 신용경;김상욱
    • Journal of KIISE:Computing Practices and Letters / v.9 no.1 / pp.13-24 / 2003
  • MPEG-4 is a standard for multimedia applications that provides a set of technologies satisfying the needs of authors, service providers, and end users alike. In this paper, we suggest a Real-time Multiple Circular Buffer (M4RM buffer) model suitable for streaming MPEG-4 content efficiently. The M4RM buffer creates a buffer structure for each object composing an MPEG-4 content, according to the transferred object information, and handles multiple read/write operations purely by reference. It divides the decoder buffer and the composition buffer described in the standard into frame units to minimize the range of access, and these frame-unit buffers are allocated according to the object description. It also handles object synchronization within the buffer and provides APIs for efficient buffer management to process real-time user events. The performance evaluation shows that the M4RM buffer model decreases the waiting time for a buffer frame and thus allows real-time streaming of MPEG-4 content with smaller memory blocks than IM1-2D and Windows Media Player.
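
A per-object circular buffer of frame slots might look like the hedged sketch below; the slot count and the way slots hand out indices as references are illustrative and are not the paper's M4RM implementation.

```python
class CircularFrameBuffer:
    """Fixed ring of frame slots for one MPEG-4 object (illustrative).

    The writer (decoder) and the reader (compositor) exchange slot indices,
    i.e. references, instead of copying frame payloads.
    """

    def __init__(self, num_slots):
        self.slots = [None] * num_slots
        self.head = 0        # next slot to write
        self.tail = 0        # next slot to read
        self.count = 0

    def write_frame(self, frame):
        if self.count == len(self.slots):
            return None                          # buffer full: writer must wait
        slot = self.head
        self.slots[slot] = frame
        self.head = (self.head + 1) % len(self.slots)
        self.count += 1
        return slot                              # reference handed to the reader

    def read_frame(self):
        if self.count == 0:
            return None                          # buffer empty: reader must wait
        slot = self.tail
        frame = self.slots[slot]
        self.tail = (self.tail + 1) % len(self.slots)
        self.count -= 1
        return slot, frame
```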