• Title/Abstract/Keyword: queue

1,258 search results

PERFORMANCE ANALYSIS OF TWO FINITE BUFFERS QUEUEING SYSTEM WITH PRIORITY SCHEDULING DEPENDENT UPON QUEUE LENGTH

  • Choi Doo-Il
    • Journal of applied mathematics & informatics
    • /
    • Vol.22, No.1-2
    • /
    • pp.523-533
    • /
    • 2006
  • We analyze a queueing system with two finite buffers under priority scheduling dependent upon queue length. Customers are classified into two types (type-1 and type-2) according to their characteristics. Here, the customers can be regarded as traffic such as voice and data in telecommunication networks. In order to support customers with burstiness and time correlation between interarrival times, the arrival of type-2 customers is assumed to be a Markov-modulated Poisson process (MMPP). The service order of customers in the two buffers is determined by their queue lengths. The embedded Markov chain and supplementary variable methods yield the queue length distributions of the two buffers. Finally, performance measures such as loss and mean delay are derived.
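
Purely to illustrate the kind of system analyzed above (this is not the paper's analysis), below is a minimal discrete-time simulation sketch: type-1 arrivals are Bernoulli per slot, type-2 arrivals follow a two-state Markov-modulated source as a crude stand-in for the MMPP, service is one customer per slot, and the buffer to serve is chosen by a simple threshold rule on the type-2 queue length. All parameter values and the threshold rule are illustrative assumptions.

```python
import random

# Illustrative parameters -- assumptions, not taken from the paper.
K1, K2 = 10, 20          # finite buffer sizes
LAM1 = 0.3               # type-1 (e.g., voice) arrival probability per slot
LAM2 = {0: 0.1, 1: 0.7}  # type-2 arrival probability in each modulating state
SWITCH = 0.05            # probability the modulating state flips in a slot
THRESHOLD = 10           # serve buffer 2 first when its queue exceeds this

def simulate(slots=200_000):
    q1 = q2 = 0
    state = 0                      # modulating state of the type-2 source
    loss1 = loss2 = 0
    area1 = area2 = 0              # accumulators for mean queue lengths
    for _ in range(slots):
        # arrivals
        if random.random() < LAM1:
            if q1 < K1: q1 += 1
            else: loss1 += 1
        if random.random() < SWITCH:
            state = 1 - state
        if random.random() < LAM2[state]:
            if q2 < K2: q2 += 1
            else: loss2 += 1
        # one service per slot; the served buffer depends on the queue lengths
        if q2 > THRESHOLD:
            q2 -= 1
        elif q1 > 0:
            q1 -= 1
        elif q2 > 0:
            q2 -= 1
        area1 += q1; area2 += q2
    return {"loss1": loss1, "loss2": loss2,
            "mean_q1": area1 / slots, "mean_q2": area2 / slots}

print(simulate())
```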

혼합트래픽 네트워크에서 혼잡회피를 위한 큐 관리 알고리즘 (Queue Management Algorithm for Congestion Avoidance in Mixed-Traffic Network)

  • 김창희
    • 디지털산업정보학회논문지
    • /
    • Vol.8, No.2
    • /
    • pp.81-94
    • /
    • 2012
  • This paper suggests the PARED algorithm, a modified RED algorithm that actively reacts to dynamic changes in the network and applies the packet drop probability flexibly. The main idea of PARED is to compare the target queue length with the average queue length, which is the criterion for changing the packet drop probability, and to feed the gap back into the packet drop probability. That is, when the difference between the average queue length and the target queue length is large, the drop probability is adjusted by a correspondingly large amount, and by only a small amount when the difference is small. In this way the packet drop probability can be controlled actively and the algorithm can respond effectively to the network traffic situation. To evaluate the performance of the suggested algorithm, we conducted simulations in which the network traffic changed dynamically. In the experiments, the suggested algorithm was compared with the existing RED algorithm and with ARED, which provided the basic idea for this work. The results show that the suggested PARED algorithm outperforms the existing algorithms.
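
The abstract above describes the mechanism in words; the sketch below shows one way to read it in code. It keeps the usual RED machinery (EWMA average queue length, linear drop probability between two thresholds) and then scales the probability by the normalized gap between the average and a target queue length. The constants and the exact adjustment rule are my illustrative assumptions, not the paper's formulas.

```python
MIN_TH, MAX_TH = 20.0, 80.0        # RED thresholds, in packets (assumption)
MAX_P = 0.1                        # baseline maximum drop probability
WEIGHT = 0.002                     # EWMA weight for the average queue length
TARGET = (MIN_TH + MAX_TH) / 2.0   # target queue length

class ParedLikeAqm:
    def __init__(self):
        self.avg = 0.0

    def drop_probability(self, queue_len):
        # exponentially weighted moving average of the queue length, as in RED
        self.avg = (1.0 - WEIGHT) * self.avg + WEIGHT * queue_len
        if self.avg < MIN_TH:
            return 0.0
        if self.avg >= MAX_TH:
            return 1.0
        # baseline RED probability, linear between the two thresholds
        p = MAX_P * (self.avg - MIN_TH) / (MAX_TH - MIN_TH)
        # feed the gap between the average and the target back into p:
        # a large gap changes p by a lot, a small gap changes it only a little
        gap = (self.avg - TARGET) / (MAX_TH - MIN_TH)
        return min(max(p * (1.0 + gap), 0.0), 1.0)
```

An arriving packet would then be dropped whenever `random.random() < aqm.drop_probability(current_queue_length)`.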

A Note on the Relationships among the Queue Lengths at Various Epochs of a Queue with BMAP Arrivals

  • Kim, Nam K.; Chae, Kyung C.; Lee, Ho W.
    • Management Science and Financial Engineering
    • /
    • Vol.9, No.2
    • /
    • pp.1-12
    • /
    • 2003
  • For a stationary queue with BMAP arrivals, Takine and Takahashi [8] present a relationship between the queue length distributions at a random epoch and at a departure epoch by using the rate conservation law of Miyazawa [6]. In this note, we derive the same relationship by using the elementary balance equation 'rate-in = rate-out'. Along the same lines, we additionally derive relationships between the queue length distributions at a random epoch and at an arrival epoch. All these relationships hold for a broad class of finite- as well as infinite-capacity queues with BMAP arrivals.
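
As a toy illustration of the 'rate-in = rate-out' balance argument used in the note, consider the plain M/M/1 queue rather than the BMAP setting of the paper. Equating the rate at which the process moves up across the boundary between $n-1$ and $n$ customers with the rate at which it moves down gives

$$\lambda\,p_{n-1} = \mu\,p_n \quad\Longrightarrow\quad p_n = (1-\rho)\rho^{\,n}, \qquad \rho = \lambda/\mu < 1,$$

where $p_n$ is the stationary probability of $n$ customers in the system. The same bookkeeping of transitions into and out of a set of states, applied at random, arrival, and departure epochs, is what the note carries out for the BMAP queue.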

M/G/1 Queueing System with Vacation and Limited-1 Service Policy

  • Lee, B-L.; Ryu, W.; Kim, D-U.; Park, B.U.; Chung, J-W.
    • Journal of the Korean Statistical Society
    • /
    • Vol.30, No.4
    • /
    • pp.661-666
    • /
    • 2001
  • In this paper we consider an M/G/1 queue in which the server takes vacations and the service policy is limited-1. In this system, upon termination of a vacation the server returns to the queue and serves at most one message before taking another vacation. We consider two models. In the first, if the server finds the queue empty at the end of a vacation, it immediately takes another vacation. In the second model, if no messages have arrived during a vacation, the server waits for the first arrival and serves it. The analysis of this system is particularly useful for a priority-class polling system. We derive the Laplace-Stieltjes transforms of the waiting time for both models and compare their mean waiting times.
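
A minimal simulation sketch of the two models described above, offered only as an illustration: it assumes Poisson arrivals with exponential service and vacation times (the paper allows general M/G/1 service) and estimates the mean waiting time under each model. All parameter values are assumptions.

```python
import random

LAM = 0.5        # Poisson arrival rate (assumption)
MEAN_S = 1.0     # mean service time (exponential here only for the sketch)
MEAN_V = 0.4     # mean vacation time
N = 200_000      # number of customers to serve

def mean_wait(model):
    # pre-generate the arrival epochs of a Poisson stream
    arrivals, t = [], 0.0
    for _ in range(N):
        t += random.expovariate(LAM)
        arrivals.append(t)

    clock, i, queue, waits = 0.0, 0, [], []
    while len(waits) < N:
        # the server takes a vacation
        clock += random.expovariate(1.0 / MEAN_V)
        while i < N and arrivals[i] <= clock:
            queue.append(arrivals[i]); i += 1
        if not queue:
            if model == 1:
                continue              # model 1: immediately take another vacation
            if i >= N:
                break                 # defensive: no arrivals left to wait for
            clock = arrivals[i]       # model 2: wait for the first arrival
            queue.append(arrivals[i]); i += 1
        # limited-1 policy: serve exactly one waiting message
        waits.append(clock - queue.pop(0))
        clock += random.expovariate(1.0 / MEAN_S)
        while i < N and arrivals[i] <= clock:
            queue.append(arrivals[i]); i += 1
    return sum(waits) / len(waits)

print("model 1:", mean_wait(1), " model 2:", mean_wait(2))
```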


Balking Phenomenon in the $M^{[x]}/G/1$ Vacation Queue

  • Madan, Kailash C.
    • Journal of the Korean Statistical Society
    • /
    • Vol.31, No.4
    • /
    • pp.491-507
    • /
    • 2002
  • We analyze a single-server bulk-input queue with optional server vacations under a single vacation policy and the balking phenomenon. The service times of the customers as well as the vacation times of the server are assumed to be arbitrary (general). We further assume that not all arriving batches join the system during the server's vacation periods. The supplementary variable technique is employed to obtain the time-dependent probability generating functions of the queue size as well as the system size in terms of their Laplace transforms. For the steady state, we obtain the probability generating functions of the queue size and the system size, the expected number of customers, and the expected waiting time of the customers in the queue as well as in the system, all in explicit and closed form. Some special cases are discussed and some known results are derived.
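
Under one reading of the model above (after each service completion the server takes a vacation with some probability, and a batch arriving during a vacation joins the queue only with some probability), a rough simulation sketch looks as follows. Service and vacation times are exponential here purely for the sketch (the paper allows general distributions), and every constant, including the batch-size distribution, is an illustrative assumption.

```python
import random

LAM, MEAN_S, MEAN_V = 0.4, 0.5, 1.0   # batch arrival rate, mean service, mean vacation
P_VAC, P_JOIN = 0.3, 0.6              # vacation prob. after a service; join prob. during a vacation
BATCH_SIZES = (1, 2, 3)               # batch sizes drawn uniformly (assumption)
N = 200_000                           # number of batches to generate

def simulate():
    arrivals, t = [], 0.0
    for _ in range(N):
        t += random.expovariate(LAM)
        arrivals.append((t, random.choice(BATCH_SIZES)))

    clock, i, q = 0.0, 0, 0
    q_at_departure = []

    def admit(balking):
        nonlocal i, q
        while i < N and arrivals[i][0] <= clock:
            if not balking or random.random() < P_JOIN:
                q += arrivals[i][1]
            i += 1

    while i < N or q > 0:
        admit(balking=False)
        if q == 0:
            if i >= N:
                break
            clock = arrivals[i][0]          # an idle server waits for the next batch
            continue
        clock += random.expovariate(1.0 / MEAN_S)   # serve one customer
        q -= 1
        admit(balking=False)                # batches arriving during a service all join
        q_at_departure.append(q)
        if random.random() < P_VAC:         # optional vacation after the service
            clock += random.expovariate(1.0 / MEAN_V)
            admit(balking=True)             # batches arriving during the vacation may balk
    return sum(q_at_departure) / len(q_at_departure)

print("mean queue size at departure epochs ~", simulate())
```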

GI/GI/c/K 대기행렬의 고객수 분포 방정식에 대한 해석 (An Interpretation of the Equations for the GI/GI/c/K Queue Length Distribution)

  • 채경철;김남기;최대원
    • 대한산업공학회지
    • /
    • Vol.28, No.4
    • /
    • pp.390-396
    • /
    • 2002
  • We present a meaningful interpretation of the equations for the steady-state queue length distribution of the GI/GI/c/K queue so that the equations are better understood and become more applicable. As a byproduct, we present an exact expression of the mean queue waiting time for the M/GI/c queue.
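
The equations themselves are not reproduced in the abstract, so no attempt is made to code them here; instead, the following is a small event-driven simulation that estimates the time-stationary queue length distribution of a GI/GI/c/K queue, which can serve as an empirical companion to such equations. The interarrival and service distributions, c, and K are illustrative assumptions.

```python
import heapq, random

C, K = 3, 10                                       # servers and system capacity
interarrival = lambda: random.uniform(0.2, 0.6)    # GI interarrival times (illustrative)
service = lambda: random.uniform(0.5, 1.5)         # GI service times (illustrative)

def simulate(horizon=200_000.0):
    t, n = 0.0, 0                       # simulation clock and number in system
    time_in_state = [0.0] * (K + 1)     # time spent with n customers in the system
    departures = []                     # min-heap of pending departure epochs
    next_arrival = interarrival()
    arrived = blocked = 0

    while t < horizon:
        t_next = min(next_arrival, departures[0] if departures else float("inf"))
        time_in_state[n] += t_next - t
        t = t_next
        if departures and departures[0] == t_next:       # a departure occurs
            heapq.heappop(departures)
            n -= 1
            if n >= C:                  # a waiting customer enters service
                heapq.heappush(departures, t + service())
        else:                                             # an arrival occurs
            arrived += 1
            next_arrival = t + interarrival()
            if n == K:
                blocked += 1            # the system is full: the arrival is lost
            else:
                n += 1
                if n <= C:              # a free server starts service at once
                    heapq.heappush(departures, t + service())

    total = sum(time_in_state)
    return [x / total for x in time_in_state], blocked / arrived

dist, loss = simulate()
print("P(N = n):", [round(p, 4) for p in dist])
print("blocking probability ~", round(loss, 4))
```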

ATM 망에서 발생되는 D-BMAP/Geo/1/K Queue의 Departure 프로세스 (The Departures Process of a D-BMAP/Geo/1/K Queue Arising in an ATM Network)

  • 박두영
    • 자연과학논문집
    • /
    • Vol.7
    • /
    • pp.75-82
    • /
    • 1995
  • We derive the departure process of the D-BMAP/Geo/1/K queue that arises in the modeling of ATM networks and study the burstiness and correlation of that process. We also show that the autocorrelation coefficients (correlogram) of the interdeparture times of the departure process can oscillate.
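
A rough sketch of how such a correlogram can be estimated by simulation: a two-state Markov-modulated Bernoulli source stands in for the D-BMAP, services are geometric, and the autocorrelation coefficients of the interdeparture times are computed from the simulated departure epochs. All parameters are illustrative assumptions, and nothing here reproduces the paper's analytical derivation.

```python
import random

K = 10                       # buffer size
P_SRV = 0.6                  # service completion probability per slot (geometric service)
P_ARR = {0: 0.1, 1: 0.9}     # arrival probability per slot in each modulating state
P_SWITCH = 0.02              # probability the modulating state flips in a slot

def departures(slots=500_000):
    q, state, out = 0, 0, []
    for t in range(slots):
        if q > 0 and random.random() < P_SRV:
            q -= 1
            out.append(t)                 # record the departure slot
        if random.random() < P_SWITCH:
            state = 1 - state
        if random.random() < P_ARR[state] and q < K:
            q += 1
    return out

def correlogram(dep, max_lag=10):
    x = [b - a for a, b in zip(dep, dep[1:])]     # interdeparture times
    m = sum(x) / len(x)
    var = sum((v - m) ** 2 for v in x) / len(x)
    return [sum((x[i] - m) * (x[i + k] - m) for i in range(len(x) - k))
            / ((len(x) - k) * var) for k in range(1, max_lag + 1)]

print([round(r, 3) for r in correlogram(departures())])
```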


패킷 스케줄러를 위한 빠르고 확장성 있는 우선순위 큐의 하드웨어 구조 (A Fast and Scalable Priority Queue Hardware Architecture for Packet Schedulers)

  • 김상균;문병인
    • 대한전자공학회논문지SD
    • /
    • Vol.44, No.10
    • /
    • pp.55-60
    • /
    • 2007
  • This paper proposes a priority queue architecture that can support high network speeds while guaranteeing QoS. The proposed architecture can send output to several output ports from a single queue, which reduces area, and by adding a control block it can operate faster than the existing multiple systolic way priority queue, so it is suitable for packet schedulers that require high packet-processing rates. In addition, the architecture offers high scalability.
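
As a behavioral software model of the general family of structures discussed above, the sketch below keeps an array of cells in priority order and shifts entries between neighbouring cells on enqueue and dequeue, which is what a shift-register style priority queue does in parallel in hardware. It is only an illustration; it is not the paper's multi-output, control-block design.

```python
class ShiftRegisterPriorityQueue:
    """Array-of-cells priority queue; cells[0] always holds the highest-priority
    (smallest value) entry. A hardware implementation would do the per-cell
    compare-and-shift steps in parallel in a single cycle."""

    def __init__(self, depth):
        self.cells = [None] * depth

    def enqueue(self, priority, packet):
        entry = (priority, packet)
        for i in range(len(self.cells)):
            if self.cells[i] is None or entry[0] < self.cells[i][0]:
                # place the entry here and carry the displaced one to the right
                self.cells[i], entry = entry, self.cells[i]
                if entry is None:
                    return True
        return False             # queue full: the lowest-priority entry is dropped

    def dequeue(self):
        head = self.cells[0]
        self.cells = self.cells[1:] + [None]   # shift every cell toward the head
        return head

pq = ShiftRegisterPriorityQueue(depth=8)
for prio, pkt in [(5, "a"), (1, "b"), (3, "c")]:
    pq.enqueue(prio, pkt)
print(pq.dequeue())   # -> (1, 'b'): the highest-priority packet leaves first
```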

이중 큐 구조를 갖는 웹 서버 (Double Queue Management for Reducing disk I/O of Web Servers)

  • 염미령
    • 정보처리학회논문지A
    • /
    • Vol.8A, No.4
    • /
    • pp.293-298
    • /
    • 2001
  • The double-queue web server implemented in this paper classifies concurrently arriving requests into two groups and serves them accordingly. Requests for documents that are already cached are placed in a service queue, and requests for documents that are not cached are placed in a waiting queue. The double-queue web server serves all requests in the service queue before serving the requests in the waiting queue. This policy gives priority to serving cached documents in order to reduce disk access overhead; in comparative experiments against the Apache web server, it improved server performance and mean user response time.
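
A minimal sketch of the double-queue policy described above: requests that hit the cache go into a service queue, the rest into a waiting queue, and the waiting queue is served only when the service queue is empty. The cache contents, request format, and the disk-read placeholder are illustrative assumptions.

```python
from collections import deque

class DoubleQueueServer:
    def __init__(self, cache):
        self.cache = cache                  # document name -> contents
        self.service_q = deque()            # requests for cached documents
        self.waiting_q = deque()            # requests that will need disk I/O

    def accept(self, request):
        if request in self.cache:
            self.service_q.append(request)
        else:
            self.waiting_q.append(request)

    def serve_next(self):
        if self.service_q:                  # cached documents are served first
            doc = self.service_q.popleft()
            return doc, self.cache[doc]
        if self.waiting_q:                  # then the requests needing disk access
            doc = self.waiting_q.popleft()
            contents = self._read_from_disk(doc)
            self.cache[doc] = contents      # cache it for later requests
            return doc, contents
        return None

    def _read_from_disk(self, doc):
        return f"<contents of {doc}>"       # placeholder for the actual disk read

server = DoubleQueueServer(cache={"index.html": "<html>...</html>"})
for req in ["index.html", "big_report.pdf", "index.html"]:
    server.accept(req)
while (served := server.serve_next()) is not None:
    print(served[0])    # index.html, index.html, big_report.pdf
```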


Performance Improvement of Web Service Based on GPGPU and Task Queue

  • Kim, Changsu; Kim, Kyunghwan; Jung, Hoekyung
    • Journal of information and communication convergence engineering
    • /
    • Vol.19, No.4
    • /
    • pp.257-262
    • /
    • 2021
  • Providing web services to users has become expensive in recent times. To provide better web services, web servers are equipped with high-performance technology. To deliver good web service experiences, tools such as general-purpose graphics processing units (GPGPUs), artificial intelligence, high-performance computing, and three-dimensional simulation are widely used. However, graphics processing units (GPUs) are designed for high-speed computation and have limited general applicability. In this study, we developed a task queue on a GPU to improve the performance of a web service using a multiprocessor, and studied how to receive and process user requests in bulk. We propose a GPGPU-based task queue that processes more user requests than a CPU-thread-based approach, handling about 136% to 233% more GPU threads on the task queue, and we show that the proposed method is effective for web services.
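
The sketch below is a CPU-only stand-in for the bulk-processing idea in the abstract: user requests accumulate in a task queue and are drained in batches, and each batch is handed to many workers at once (GPU threads in the paper, a plain thread pool here). The batch size, request format, and handler are assumptions made only for illustration.

```python
import queue, threading

BATCH_SIZE = 64
task_queue = queue.Queue()

def handle(request):
    return f"response for {request}"       # placeholder for the real request handling

def drain_batch():
    """Pull up to BATCH_SIZE queued requests without blocking."""
    batch = []
    while len(batch) < BATCH_SIZE:
        try:
            batch.append(task_queue.get_nowait())
        except queue.Empty:
            break
    return batch

def process_in_bulk():
    batch = drain_batch()
    results = [None] * len(batch)
    def worker(i, req):
        results[i] = handle(req)
    # one worker per request in the batch; on a GPGPU this would be one GPU thread each
    threads = [threading.Thread(target=worker, args=(i, r)) for i, r in enumerate(batch)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

for n in range(10):                        # enqueue some mock user requests
    task_queue.put(f"GET /item/{n}")
print(process_in_bulk())
```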