• Title/Summary/Keyword: Central Queue

A Dynamical Hybrid CAC Scheme and Its Performance Analysis for Mobile Cellular Network with Multi-Service

  • Li, Jiping;Wu, Shixun;Liu, Shouyin
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.6 no.6
    • /
    • pp.1522-1545
    • /
    • 2012
  • Call admission control (CAC) plays an important role in mobile cellular networks in guaranteeing quality of service (QoS). In this paper, a dynamic hybrid CAC scheme with integrated cutoff priority and a handoff queue for mobile cellular networks is proposed, and several performance metrics are derived. The unique characteristic of the proposed CAC scheme is that it can support any number of service types and that the cutoff thresholds for handoff calls are dynamically adjusted according to the number of service types and the service priority index. Moreover, timeouts of handoff calls in the queues are also considered in our scheme. By modeling the proposed CAC scheme with a one-dimensional Markov chain (1DMC), several performance metrics are derived, including the new call blocking probability ($P_{nb}$), forced termination probability ($P_F$), average queue length, average waiting time in queue, offered traffic utilization, wireless channel utilization, and system performance, defined as the ratio of channel utilization to the Grade of Service (GoS) cost function. To validate the correctness of the derived analytical performance metrics, simulation is performed; the simulation results match closely with the derived analytical results in terms of $P_{nb}$ and $P_F$. Then, to show the advantage of 1DMC modeling for the performance analysis of our proposed CAC scheme, the computational complexity of multi-dimensional Markov chain (MDMC) modeling is analyzed in detail. The state-space cardinality, which reflects the computational complexity of MDMC modeling, increases exponentially with the number of service types and the total number of channels in a cell. In contrast, the state-space cardinality of our 1DMC model is unrelated to the number of service types and is determined only by the total number of channels and the queue capacity of the highest-priority service in a cell. Finally, a performance comparison between our CAC scheme and Mahmoud ASH's scheme is carried out. The results show that our CAC scheme performs well to some extent.
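
A minimal sketch of the kind of one-dimensional birth-death analysis the abstract describes, assuming a plain guard-channel (cutoff priority) model rather than the authors' dynamic multi-service thresholds; the channel count C, threshold T, and all rates below are illustrative assumptions.

```python
# Hedged sketch, not the paper's model: a 1D birth-death chain for one cell
# with C channels. New calls are admitted only while fewer than T channels are
# busy; handoff calls are admitted up to C. All parameter values are assumed.

def guard_channel_metrics(C, T, lam_new, lam_ho, mu):
    """Return (new-call blocking probability, handoff blocking probability)."""
    p = [1.0]                                   # unnormalized P(n channels busy)
    for n in range(1, C + 1):
        arrival = lam_new + lam_ho if n - 1 < T else lam_ho
        p.append(p[-1] * arrival / (n * mu))
    norm = sum(p)
    p = [x / norm for x in p]
    # New calls are blocked in states >= T busy; handoffs only when all C are busy.
    return sum(p[T:]), p[C]

if __name__ == "__main__":
    print(guard_channel_metrics(C=20, T=16, lam_new=8.0, lam_ho=2.0, mu=0.5))
```

The closed-form steady state of such a chain is what keeps the state space independent of the number of service types, which is the advantage the abstract claims over multi-dimensional Markov chain models.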

Performance Improvement of Web Service Based on GPGPU and Task Queue

  • Kim, Changsu;Kim, Kyunghwan;Jung, Hoekyung
    • Journal of information and communication convergence engineering
    • /
    • v.19 no.4
    • /
    • pp.257-262
    • /
    • 2021
  • Providing web services to users has become expensive in recent times. For better web services, web servers are equipped with high-performance technology. To achieve great web service experiences, tools such as general-purpose graphics processing units (GPGPUs), artificial intelligence, high-performance computing, and three-dimensional simulation are widely used. However, graphics processing units (GPUs) are designed for high-speed computation and have limited general applicability. In this study, we developed a task queue on a GPU to improve the performance of a web service using a multiprocessor and studied how to receive and process user requests in bulk. We propose a GPGPU-based task queue that processes user requests faster than a central processing unit (CPU) thread-based approach, improving processing by about 136% to 233% as more GPU threads are applied to the task queue, and we show that the proposed method is effective for web services.
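
A small sketch of the batching idea behind a GPU-side task queue, under the assumption (not stated in the abstract) that requests are drained in batches and handed to a bulk worker; the queue, batch size, and the squaring "work" are placeholders.

```python
# Hedged sketch, not the authors' implementation: gather web requests in a
# central task queue and process them in batches, as a GPGPU worker would,
# instead of one request per CPU thread. Batch size and workload are assumed.
import queue
import threading
import time

task_queue: "queue.Queue[int]" = queue.Queue()

def batch_worker(batch_size=32, timeout=0.1):
    while True:
        batch = []
        try:
            batch.append(task_queue.get(timeout=timeout))
            while len(batch) < batch_size:
                batch.append(task_queue.get_nowait())
        except queue.Empty:
            pass
        if batch:
            # A real system would launch one GPU kernel over the whole batch here.
            results = [req * req for req in batch]
            print(f"processed batch of {len(batch)} requests")

threading.Thread(target=batch_worker, daemon=True).start()
for request in range(100):
    task_queue.put(request)
time.sleep(0.5)                                 # let the daemon worker drain the queue
```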

Architecture of Multiple-Queue Manager for Input-Queued Switch Tolerating Arbitration Latency (중재 지연 내성을 가지는 입력 큐 스위치의 다중 큐 관리기 구조)

  • 정갑중;이범철
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.26 no.12C
    • /
    • pp.261-267
    • /
    • 2001
  • This paper presents the architecture of a multiple-queue manager for an input-queued switch with arbitration latency, along with the design of the chip. The proposed multiple-queue manager architecture provides wire-speed routing with pipelined buffer management, and tolerates the request and grant transmission latency between the input queue manager and the central arbiter by using a new request control method based on a high-speed shifter. The multiple-input-queue manager has been implemented in a field-programmable gate array chip that supports OC-48c port speed. It raises the maximum throughput of the input-queued switch to 98.6% with a 128-cell shared input buffer in a 16$\times$16 switch.
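
A rough software analogue of the per-output queueing and request/grant handshake that an input-queue manager performs, assuming one virtual output queue per port; the class and method names are invented for illustration, and the paper's latency-tolerant shifter logic is not reproduced.

```python
# Hedged sketch, not the paper's hardware design: an input port keeps one
# virtual output queue (VOQ) per output and keeps accepting cells while its
# requests to the central arbiter are still awaiting grants.
from collections import deque

class InputQueueManager:
    def __init__(self, num_outputs):
        self.voqs = [deque() for _ in range(num_outputs)]
        self.pending_requests = deque()         # requests whose grants are still in flight

    def enqueue_cell(self, cell, output_port):
        self.voqs[output_port].append(cell)
        self.pending_requests.append(output_port)   # request posted to the central arbiter

    def on_grant(self, output_port):
        """Called when the arbiter's (possibly delayed) grant arrives."""
        if self.voqs[output_port]:
            return self.voqs[output_port].popleft()
        return None

mgr = InputQueueManager(num_outputs=16)
mgr.enqueue_cell("cell-0", output_port=3)
print(mgr.on_grant(3))
```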

A Study on the Intelligent Load Management System Based on Queue with Diffusion Markov Process Model (확산 Markov 프로세스 모델을 이용한 Queueing System 기반 지능 부하관리에 관한 연구)

  • Kim, Kyung-Dong;Kim, Seok-Hyun;Lee, Seung-Chul
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.58 no.5
    • /
    • pp.891-897
    • /
    • 2009
  • This paper presents a novel load management technique that can lower the peak demand caused by packaged air-conditioner loads in a large apartment complex. An intelligent hierarchical load management system composed of a Central Intelligent Management System (CIMS) and multiple Local Intelligent Management Systems (LIMS) is proposed to implement the technique. Once the required amount of power reduction is set, CIMS issues tokens, which can be used by each LIMS as a right to turn on an air conditioner. CIMS creates and maintains a queue for fair and proper allocation of the tokens among the LIMS requesting tokens. By adjusting the number of tokens and the queue management policies, the desired power reduction can be achieved smoothly. The Markov birth-and-death process and the balance equations, utilizing the diffusion model, are employed to evaluate queue performance during transient periods until static balance among the states is achieved. The proposed technique is tested using summer load data of a large apartment complex and gives promising results, demonstrating its usability in load management while minimizing customer inconvenience.
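
A compact sketch of the birth-death balance equations for such a token queue, assuming an M/M/N/K-style model in which the N tokens act as servers; the rates and capacities below are illustrative, not values from the paper, and the diffusion-based transient analysis is not reproduced.

```python
# Hedged sketch, not the authors' exact model: steady-state distribution of a
# birth-death chain in which N tokens serve air-conditioner requests and at
# most K requests are in the system. All numeric parameters are assumed.

def token_queue_distribution(N, K, lam, mu):
    p = [1.0]                                   # unnormalized P(n requests in system)
    for n in range(1, K + 1):
        servers = min(n, N)                     # at most N tokens serve concurrently
        p.append(p[-1] * lam / (servers * mu))
    norm = sum(p)
    return [x / norm for x in p]

dist = token_queue_distribution(N=5, K=20, lam=4.0, mu=1.0)
print("P(all 5 tokens busy)    =", sum(dist[5:]))
print("mean requests in system =", sum(n * pn for n, pn in enumerate(dist)))
```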

A Study on the CHIPS in the Cross-Border Payment System - Compared with Fedwire - (국제전자결제시스템으로서 CHIPS에 관한 연구 -Fedwire와 비교하여-)

  • Lee, Byeong-Ryul;Lee, Cheon-Woo
    • International Commerce and Information Review
    • /
    • v.8 no.4
    • /
    • pp.71-88
    • /
    • 2006
  • This article discusses comparative research on CHIPS and Fedwire, the cross-border payment systems that the United States currently operates. CHIPS is a New York-based automated private-sector clearing facility for large-dollar transfers. It is a central switch communication and settlement system whose 53 participating banks exchange same-day payment messages over dedicated communication lines linking each one to the CHIPS central computer. On January 22, 2001, CHIPS introduced immediate finality for payments released from the CHIPS queue. Unlike the Fedwire system, the CHIPS system is not a real-time gross settlement system. Instead, CHIPS is a hybrid system that uses a computer program to select payment orders in a queue for release to the receiving bank. CHIPS is governed by the CHIPS Rules and Administrative Procedures. The Fedwire system is a nationwide electronic funds-transfer system facilitating same-day transfers throughout the United States. It is a gross settlement system providing immediate credit to the receiving bank's master account. Communication between a Federal Reserve Bank and Fedwire users can be either online or offline. Fedwire transfers are governed by Subpart B of Regulation J, issued by the Federal Reserve Board, which incorporates U.C.C. Article 4A but preempts or supersedes any of its inconsistent provisions.

An Efficient Central Queue Management Algorithm for High-speed Parallel Packet Filtering (고속 병렬 패킷 여과를 위한 효율적인 단일버퍼 관리 방안)

  • 임강빈;박준구;최경희;정기현
    • Journal of the Institute of Electronics Engineers of Korea TC
    • /
    • v.41 no.7
    • /
    • pp.63-73
    • /
    • 2004
  • This paper proposes an efficient centralized single-buffer management algorithm to arbitrate access contention among processors on a multi-processor system for high-speed packet filtering, and shows that the algorithm provides reasonable performance by implementing it and applying it to a real multi-processor system. The multi-processor system for parallel packet filtering is modeled on a network processor, distributing the packet filtering rules across the processors to speed up filtering. We varied the number of processors and the processing time of the filtering rules and measured the packet transfer rates to investigate the performance of the proposed algorithm.
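
A minimal software analogue of a centralized single buffer shared by filtering processors, assuming a thread-safe queue stands in for the managed buffer; the rule format and worker count are placeholders, not the paper's network-processor implementation.

```python
# Hedged sketch, not the paper's design: one central packet queue shared by
# several filtering workers; the queue serializes access, which is the role
# the centralized buffer manager plays in the paper's multi-processor system.
import queue
import threading

packet_queue: "queue.Queue" = queue.Queue()

def filter_worker(rules, worker_id):
    while True:
        pkt = packet_queue.get()
        if pkt is None:                         # sentinel: shut this worker down
            return
        verdict = "drop" if any(rule in pkt for rule in rules) else "pass"
        print(f"worker {worker_id}: {verdict} {pkt!r}")

rules = [b"BLOCK"]
workers = [threading.Thread(target=filter_worker, args=(rules, i)) for i in range(4)]
for w in workers:
    w.start()
for pkt in (b"data", b"BLOCK me", b"ok"):
    packet_queue.put(pkt)
for _ in workers:
    packet_queue.put(None)                      # one sentinel per worker
for w in workers:
    w.join()
```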

A Study of Autonomous Intelligent Load Management System Based on Queueing Model (큐잉모델에 기초한 자율 지능 부하 관리 시스템 연구)

  • Lee, Seung-Chul;Hong, Chang-Ho;Kim, Kyung-Dong;Lee, In-Yong;Park, Chan-Eom
    • Journal of the Korean Institute of Illuminating and Electrical Installation Engineers
    • /
    • v.22 no.2
    • /
    • pp.134-141
    • /
    • 2008
  • This paper presents an innovative load management technique that can effectively lower the summer peak load by adjusting air-conditioning loads through smooth coordination between utility companies and large customers. An intelligent hierarchical load management system composed of a Central Intelligent Load Management System (CIMS) and multiple Local Intelligent Management Systems (LIMS) is also proposed to implement the proposed technique. Upon receiving a load curtailment request from the utilities, CIMS issues tokens, which can be used by each LIMS as a right to turn on an air conditioner. CIMS creates and maintains a queue for fair allocation of the tokens among the LIMS demanding tokens. By adjusting the number of tokens and the queue management policies, desired load factors can be achieved conveniently. The Markov birth-and-death process and the balance equations are employed to estimate various queue performance measures. The proposed technique is tested using summer load data of a large apartment complex and proves to be quite effective in load management while minimizing customer inconvenience.

A Study on the design of operations system for managing the mobile communication network (이동통신망 관리용 운용시스템 설계에 관한 연구)

  • 하기종
    • Journal of the Korea Society for Simulation
    • /
    • v.6 no.2
    • /
    • pp.71-79
    • /
    • 1997
  • In this paper, an operations system was designed to centralize the processing of various state information from the facilities of a mobile communication network, and the system performance was measured and analyzed using the designed experimental system module. The system is configured in a centralized form to monitor and control the facilities of the mobile communication network from the central office. Internal inter-process communication was implemented with UNIX message queues, which offer excellent transmission capability for processing large volumes of information among the communication resources of a UNIX system. The server process among the internal communication processes was constructed as a single server or a dual server according to the operational load and implemented with the presented server policy. We then measured performance elements as the input parameters of the designed experimental module were varied: response time, waiting time, buffer length, and the maximum number of messages in the message queue. From these results, we compared and analyzed the system state of each server algorithm according to the performance variations.
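
A small analogue of the message-queue-based server described above, assuming Python's multiprocessing queue in place of UNIX System V message queues; the facility states and process counts are illustrative only.

```python
# Hedged sketch, standing in for the UNIX message-queue IPC in the paper: a
# server process drains state reports that monitoring processes post to a
# shared queue. Names and message contents are assumed for illustration.
import multiprocessing as mp

def monitor(q, facility_id):
    q.put((facility_id, "OK"))                  # report facility state to the server

def server(q, expected):
    for _ in range(expected):
        facility_id, state = q.get()            # time spent waiting here is the queueing delay
        print(f"facility {facility_id}: {state}")

if __name__ == "__main__":
    q = mp.Queue()
    monitors = [mp.Process(target=monitor, args=(q, i)) for i in range(3)]
    for m in monitors:
        m.start()
    server(q, expected=len(monitors))
    for m in monitors:
        m.join()
```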

A Simulation Study on Queueing Delay Performance of Slotted ALOHA under Time-Correlated Channels

  • Yoora Kim
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.15 no.3
    • /
    • pp.43-51
    • /
    • 2023
  • Slotted ALOHA (S-ALOHA) is a classical medium access control protocol widely used in multiple access communication networks, supporting distributed random access without the need for a central controller. Although stability and delay have been extensively studied in existing works, most of these studies have assumed ideal channel conditions or independent fading, and the impact of time-correlated wireless channels has been less addressed. In this paper, we investigate the queueing delay performance in S-ALOHA networks under time-correlated channel conditions by utilizing a Gilbert-Elliott model. Through simulation studies, we demonstrate how temporal correlation in the wireless channel affects the queueing delay performance. We find that stronger temporal correlation leads to increased variability in queue length, a larger probability of having queue overflows, and higher congestion levels in the S-ALOHA network. Consequently, there is an increase in the average queueing delay, even under a light traffic load. With these findings, we provide valuable insights into the queueing delay performance of S-ALOHA networks, supplementing the existing understanding of delay in S-ALOHA networks.
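
A toy version of the kind of simulation the abstract describes, assuming a single backlogged node and a two-state Gilbert-Elliott channel; the arrival, transmission, and state-holding probabilities are assumed, and contention among multiple nodes is omitted.

```python
# Hedged sketch, not the paper's simulator: one S-ALOHA queue whose
# transmissions succeed only in the "good" Gilbert-Elliott state. Both runs
# have the same average channel quality; only the time correlation differs.
import random

def simulate(slots=200_000, p_arrival=0.1, p_tx=0.5, stay_good=0.9, stay_bad=0.9):
    queue_len, good, area, departures = 0, True, 0, 0
    for _ in range(slots):
        if random.random() < p_arrival:
            queue_len += 1                      # new packet joins the queue
        if queue_len and good and random.random() < p_tx:
            queue_len -= 1                      # successful transmission this slot
            departures += 1
        area += queue_len                       # integrate queue length over time
        good = random.random() < (stay_good if good else 1 - stay_bad)
    return area / max(departures, 1)            # Little's law: mean delay in slots

print("avg delay, weak correlation :", simulate(stay_good=0.6, stay_bad=0.6))
print("avg delay, strong correlation:", simulate(stay_good=0.99, stay_bad=0.99))
```

With the same marginal channel quality, the strongly correlated run spends long stretches in the bad state, which is the burstiness the abstract links to larger queue variability and overflow probability.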

Design and Implementation of the Central Queue Based Loop Scheduling Method (중앙 큐 기반의 루프 스케쥴링 기법의 설계 및 구현)

  • Kim, Hyun-Chul;Kim, Hyo-Cheol;Yoo, Kee-Young
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.38 no.5
    • /
    • pp.16-26
    • /
    • 2001
  • In this paper, we present a new scheduling method called CDSS (Carried-Dependence Self-Scheduling) for the efficient execution of loops with dependences between iterations, based on a central queue, and implement it on a shared-memory system using the Java language. We also study modifications that convert existing central-task-queue self-scheduling methods for parallel loops into the same form applied to loops with loop-carried dependences. The proposed method is self-scheduling and assigns loop iterations at three levels, considering synchronization points according to the dependence distance of the loop. To adapt the proposed scheme and the modified methods to various platforms, including uni-processor systems, threads are used for the implementation. Compared with other assignment algorithms under various changes of application and system parameters, CDSS is more efficient in overall execution time, including scheduling overheads. CDSS shows improved performance over modified SS, Factoring, GSS, and CSS by about 0.02, 40.5, 46.1, and 53.6%, respectively. In CDSS, the best performance across the application programs is achieved using a small number of threads equal to the dependence distance.
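
A bare-bones version of central-queue self-scheduling to anchor the terminology, assuming a shared chunk counter guarded by a lock plays the role of the central queue; CDSS's dependence-distance synchronization and three-level assignment are not reproduced here.

```python
# Hedged sketch, not the CDSS algorithm: classic central-queue self-scheduling
# in which worker threads repeatedly claim the next chunk of iterations from a
# shared counter. Chunk size, loop body, and thread count are assumed.
import threading

N, CHUNK = 1_000, 16
next_index = 0
index_lock = threading.Lock()
results = [0] * N

def worker():
    global next_index
    while True:
        with index_lock:                        # the central queue: one shared counter
            start = next_index
            next_index += CHUNK
        if start >= N:
            return
        for i in range(start, min(start + CHUNK, N)):
            results[i] = i * i                  # stand-in for the loop body

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sum(results))
```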
