• Title/Summary/Keyword: Queue and Latency


Analysis of Web Response Time on Queue Managements and Transmission Latency in Congested Network (혼잡 망에서의 큐 제어 방식과 전송지연시간에 대한 웹 반응 시간 분석)

  • Seok, Woo-Jin
    • The KIPS Transactions:PartC
    • /
    • v.15C no.4
    • /
    • pp.321-328
    • /
    • 2008
  • In this paper, we analyze web response time as a function of queue management scheme and transmission latency in a highly congested network. Under the FIFO scheme, the response times for three different queue sizes are almost the same, but the response time increases as traffic intensity increases. The performance gap between queue sizes is larger at 90% and 98% traffic intensity than at 80%, and the difference grows more in the short-latency case than in the long-latency case. Under the RED scheme, three differently tuned RED configurations have no impact on response time when the queue is relatively large. When the queue becomes smaller, the differently tuned RED configurations produce different response times in the short-latency case, while the response times remain almost the same in the long-latency case. Comparing FIFO and RED under the same queue size, RED yields a lower response time than FIFO in the long-latency case at high traffic intensity.
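
The study above compares FIFO against differently tuned RED queues. For readers unfamiliar with the knobs involved, the following is a minimal sketch of the classic RED early-drop decision; the thresholds, maximum drop probability, and averaging weight are exactly the parameters such a tuning study would vary. It is illustrative Python, not the simulation setup used in the paper.

```python
import random

class REDQueue:
    """Minimal RED (Random Early Detection) drop decision.

    min_th, max_th, max_p and the averaging weight w are the tuning knobs
    that a "differently tuned RED" comparison would vary."""

    def __init__(self, min_th=5, max_th=15, max_p=0.1, w=0.002):
        self.min_th, self.max_th, self.max_p, self.w = min_th, max_th, max_p, w
        self.avg = 0.0          # EWMA of the instantaneous queue length

    def on_arrival(self, queue_len):
        # Update the average queue size (exponentially weighted moving average).
        self.avg = (1 - self.w) * self.avg + self.w * queue_len
        if self.avg < self.min_th:
            return "enqueue"    # no congestion signal
        if self.avg >= self.max_th:
            return "drop"       # forced drop
        # Early drop with probability growing linearly between the thresholds.
        p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        return "drop" if random.random() < p else "enqueue"
```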

Optimizing Fsync Performance with Dynamic Queue Depth Adaptation

  • Park, Daejun;Kim, Min Ji;Shin, Dongkun
    • JSTS:Journal of Semiconductor Technology and Science
    • /
    • v.15 no.5
    • /
    • pp.570-576
    • /
    • 2015
  • Existing flash storage devices such as universal flash storage and solid state disks support command queuing to improve storage I/O bandwidth. Command queuing allows multiple read/write requests to be pending in a device queue. Because the multi-channel, multi-way architecture of flash storage devices can handle multiple requests simultaneously, command queuing is an indispensable technique for exploiting this parallelism. However, command queuing can be harmful to the latency of the fsync system call, which is critical to application responsiveness. We propose a dynamic queue depth adaptation technique, which reduces the queue depth when a user application is expected to issue fsync calls. Experiments show that the proposed technique reduces fsync latency by 79% on average compared to the original scheme.
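
The core idea above is to shrink the device command queue when an fsync is anticipated, so the flush is not stuck behind a long backlog of queued writes. Below is a hedged, hypothetical host-side sketch of that policy; the class, method names, and thresholds are assumptions for illustration, not the authors' implementation.

```python
import time

class QueueDepthAdapter:
    """Hypothetical host-side policy in the spirit of the paper: keep the
    device queue deep for throughput, but shrink it when an application is
    expected to call fsync soon."""

    def __init__(self, max_depth=32, fsync_depth=2, window_s=0.1):
        self.max_depth = max_depth      # normal depth, favors bandwidth
        self.fsync_depth = fsync_depth  # shallow depth around fsync
        self.window_s = window_s        # how long an fsync "hint" stays valid
        self.last_fsync_hint = 0.0

    def note_fsync_expected(self):
        # Called when the access-pattern predictor expects an fsync,
        # e.g. after a small synchronous write burst.
        self.last_fsync_hint = time.monotonic()

    def current_depth(self):
        recent = time.monotonic() - self.last_fsync_hint < self.window_s
        return self.fsync_depth if recent else self.max_depth
```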

Architecture of Multiple-Queue Manager for Input-Queued Switch Tolerating Arbitration Latency (중재 지연 내성을 가지는 입력 큐 스위치의 다중 큐 관리기 구조)

  • 정갑중;이범철
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.26 no.12C
    • /
    • pp.261-267
    • /
    • 2001
  • This paper presents the architecture of a multiple-queue manager for an input-queued switch subject to arbitration latency, together with the design of the corresponding chip. The proposed multiple-queue manager provides wire-speed routing through pipelined buffer management and tolerates the request/grant transmission latency between the input queue manager and the central arbiter by means of a new request control method based on a high-speed shifter. The multiple-input-queue manager has been implemented in a field-programmable gate array chip supporting OC-48c port speed. It raises the maximum throughput of the input-queued switch to 98.6% with a 128-cell shared input buffer in a 16×16 switch.
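
The paper describes a hardware queue manager that keeps wire-speed operation despite the delay between sending requests to a central arbiter and receiving grants. The sketch below is a simplified software model of that idea (virtual output queues plus a grant pipeline of fixed depth); it does not reproduce the authors' shifter-based request control.

```python
from collections import deque

class InputQueueManager:
    """Illustrative software model (not the paper's shifter-based hardware):
    one virtual output queue (VOQ) per output port, with grants from a
    central arbiter arriving several cell times after the corresponding
    requests were issued."""

    def __init__(self, num_outputs, arbitration_latency=3):
        self.voq = [deque() for _ in range(num_outputs)]
        self.pipeline = deque([None] * arbitration_latency)  # in-flight grants

    def enqueue(self, cell, output_port):
        self.voq[output_port].append(cell)

    def cell_time(self, arbiter_grant):
        """Advance one cell time: push the grant decided now into the
        pipeline, consume the grant decided `latency` cell times ago,
        and transmit from the corresponding VOQ if possible."""
        self.pipeline.append(arbiter_grant)
        granted_port = self.pipeline.popleft()
        if granted_port is not None and self.voq[granted_port]:
            return self.voq[granted_port].popleft()   # cell sent this slot
        return None
```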


QuLa: Queue and Latency-Aware Service Selection and Routing in Service-Centric Networking

  • Smet, Piet;Simoens, Pieter;Dhoedt, Bart
    • Journal of Communications and Networks
    • /
    • v.17 no.3
    • /
    • pp.306-320
    • /
    • 2015
  • Due to the explosive growth in services running in different datacenters, there is a need for service selection and routing to deliver user requests to the best service instance. In current solutions, it is generally the client that must first select a datacenter to forward the request to, before an internal load balancer of the selected datacenter can select the optimal instance. An optimal selection requires knowledge of both network and server characteristics, making clients less suitable to make this decision. Information-Centric Networking (ICN) research solved a similar selection problem for static data retrieval by integrating content delivery as a native network feature. We address the selection problem for services by extending ICN principles to services. In this paper we present Queue and Latency (QuLa), a network-driven service selection algorithm which maps user demand to service instances, taking into account both network and server metrics. To reduce the size of service router forwarding tables, we present a statistical method to approximate an optimal load distribution with minimal router state. Simulation results show that our statistical routing approach approximates the average system response time of source-based routing with minimal state in forwarding tables.
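
To make "queue and latency aware" selection concrete, here is a small sketch that scores each candidate instance by network RTT plus an estimated server-side queueing delay and picks the minimum. The field names and the simple waiting-time estimate are assumptions, not the QuLa formulation in the paper.

```python
def select_instance(instances, service_time_s=0.005):
    """Hedged sketch of queue- and latency-aware selection: estimate each
    candidate's response time as network RTT plus expected queueing and
    service delay at the server, then pick the smallest."""
    def estimated_response(inst):
        waiting = inst["queue_len"] * service_time_s   # jobs already queued ahead
        return inst["rtt_s"] + waiting + service_time_s
    return min(instances, key=estimated_response)

# Usage: each candidate carries a network metric (rtt) and a server metric (queue).
candidates = [
    {"name": "dc-a", "rtt_s": 0.010, "queue_len": 12},
    {"name": "dc-b", "rtt_s": 0.030, "queue_len": 1},
]
print(select_instance(candidates)["name"])   # picks dc-b here: shorter total delay
```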

Application Study of FQ-CoDel Algorithm based on QoS-guaranteed Class in Tactical Network (전술환경에서 QoS 보장을 위한 클래스 기반 FQ-Codel 알고리즘 적용 연구)

  • Park, Juman
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.19 no.3
    • /
    • pp.53-58
    • /
    • 2019
  • This paper proposes a class-based FQ-CoDel (Flow Queue-Controlled Delay) algorithm. A variety of application services create bottlenecks in the tactical communication network, and these bottlenecks cause problems such as traffic loss and delay; therefore, more research on effective traffic processing is needed. The proposed class-based FQ-CoDel algorithm performs dynamic buffer management and scheduling: it classifies packets into queues according to service attribute and criticality, periodically checks the latency of the packets in each queue, drops packets that have remained in a queue beyond the scheduled time, and keeps the total amount of traffic stored in the queues at a certain level.
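
As a rough illustration of the per-class latency check described above, the sketch below timestamps packets on enqueue and drops those whose sojourn time exceeds a target at dequeue. Real CoDel only drops after the delay persists over an interval and paces subsequent drops, so this is a simplification, not the proposed algorithm.

```python
import time
from collections import deque

class ClassQueue:
    """Simplified per-class queue in the spirit of class-based FQ-CoDel:
    packets are timestamped on enqueue, and packets whose sojourn time
    exceeds `target_s` are dropped at dequeue."""

    def __init__(self, target_s=0.005, limit=1000):
        self.q = deque()
        self.target_s = target_s
        self.limit = limit              # cap on packets stored in this class

    def enqueue(self, pkt):
        if len(self.q) >= self.limit:
            return False                # tail-drop when the class is full
        self.q.append((time.monotonic(), pkt))
        return True

    def dequeue(self):
        while self.q:
            t_in, pkt = self.q.popleft()
            if time.monotonic() - t_in <= self.target_s:
                return pkt              # fresh enough to forward
            # otherwise the packet is stale: drop it and try the next one
        return None

# Classes would be keyed by service attribute and criticality,
# e.g. {"voice": ClassQueue(0.002), "bulk": ClassQueue(0.020)}.
```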

A Dynamic Queue Management for Network Coding in Mobile Ad-hoc Network

  • Kim, Byun-Gon;Kim, Kwan-Woong;Huang, Wei;Yu, C.;Kim, Yong K.
    • International journal of advanced smart convergence
    • /
    • v.2 no.1
    • /
    • pp.6-11
    • /
    • 2013
  • Network Coding (NC) is a new paradigm for network communication. In network coding, intermediate nodes create new packets by algebraically combining ingress packets and broadcast them to their neighbor nodes. NC has rapidly emerged as a major research area in information theory due to its wide applicability to communication through real networks, and it is expected to improve throughput and channel efficiency in wireless multi-hop networks. Much research has been carried out on applying network coding to wireless ad-hoc networks. In this paper, we propose a dynamic queue management scheme that improves coding opportunities and thus the efficiency of NC. In our design, intermediate nodes buffer incoming packets in an encoding queue. We expect the proposed algorithm to improve the encoding rate of network-coded packets and to reduce end-to-end latency. In simulation, the proposed algorithm achieved better performance in terms of coding gain and packet delivery rate than a static queue management scheme.
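
A minimal sketch of the coding step the abstract refers to: an intermediate node buffers packets in an encoding queue and, when two packets bound for different neighbors are available, broadcasts their XOR. The matching policy shown is illustrative; the paper's dynamic queue management concerns how long to hold packets while waiting for such coding partners.

```python
from collections import deque

def xor_payloads(a: bytes, b: bytes) -> bytes:
    """XOR two payloads, zero-padding the shorter one."""
    n = max(len(a), len(b))
    a, b = a.ljust(n, b"\x00"), b.ljust(n, b"\x00")
    return bytes(x ^ y for x, y in zip(a, b))

class EncodingQueue:
    """Illustrative encoding queue at an intermediate node: buffer packets
    and, when two packets heading to different next hops are present,
    broadcast their XOR so each neighbor can decode using the packet it
    already overheard.  Holding time and partner matching are the knobs a
    dynamic queue management scheme would tune."""

    def __init__(self):
        self.buffer = deque()           # entries of (next_hop, payload)

    def push(self, next_hop, payload):
        # Look for a coding partner destined to a different neighbor.
        for i, (hop, other) in enumerate(self.buffer):
            if hop != next_hop:
                del self.buffer[i]
                return ("coded", xor_payloads(payload, other), {next_hop, hop})
        self.buffer.append((next_hop, payload))
        return ("queued", None, None)
```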

Improvement of Multi-Queue Block Layer for Fast User Response (사용자 응답성 향상을 위한 멀티큐 블록계층 개선)

  • Shin, Heeyoung;Kim, Taeseok
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.14 no.2
    • /
    • pp.97-102
    • /
    • 2019
  • The multi-queue block layer has recently been adopted in the Linux kernel to support fast storage devices such as NVMe SSDs, but it still lacks differentiated I/O services. In this paper, we propose an I/O scheduling scheme that can improve the responsiveness of foreground processes, which is closely related to user satisfaction. To this end, we redesign the existing multi-queue block layer to classify the I/O requests from foreground processes and schedule them by exploiting features of the NVMe interface. Experimental results show that the latency and launch time of foreground processes are significantly improved compared to the original Linux kernel.
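
The following is a hypothetical model of the scheduling idea: classify requests by whether the submitting process is in the foreground set and drain a high-priority queue first. In the actual work this maps onto the Linux multi-queue block layer and NVMe queue arbitration; the Python below only illustrates the priority policy.

```python
from collections import deque

class TwoLevelDispatcher:
    """Hypothetical model: requests from foreground processes go to a
    high-priority software queue and are dispatched ahead of background
    I/O.  On real hardware this would correspond to NVMe submission
    queues with different arbitration priorities; here it is simply two
    FIFOs drained in strict priority order."""

    def __init__(self, foreground_pids):
        self.foreground_pids = set(foreground_pids)
        self.hi = deque()   # foreground I/O (user-visible latency)
        self.lo = deque()   # background I/O (readahead, writeback, batch)

    def submit(self, pid, request):
        (self.hi if pid in self.foreground_pids else self.lo).append(request)

    def dispatch(self):
        if self.hi:
            return self.hi.popleft()
        return self.lo.popleft() if self.lo else None
```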

In-band Network Telemetry based Network Anomaly Detection Scheme (INT 기반 네트워크 이상 상태 탐지 기술 연구)

  • Lim, Jiyoon;Nam, Sukhyun;Yoo, Jae-Hyoung;Hong, James Won-Ki
    • KNOM Review
    • /
    • v.22 no.3
    • /
    • pp.13-19
    • /
    • 2019
  • Network anomaly detection collects information about flows on a network and detects malicious attacks in real time. In-band Network Telemetry (INT) provides detailed real-time information that existing networks do not expose, such as per-hop latency and queue occupancy. In this paper, we propose a method to implement a higher-performance anomaly detection system by using INT data as input features for machine learning, and we verify it through experiments.
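
A hedged sketch of the pipeline the abstract outlines: per-hop INT fields (hop latency, queue occupancy) are flattened into a feature vector and fed to an off-the-shelf classifier. The feature layout, toy data, and choice of a random forest are assumptions, not the authors' model.

```python
# Sketch only: INT telemetry fields become the features of a generic classifier.
from sklearn.ensemble import RandomForestClassifier

def int_report_to_features(report):
    """Flatten one INT report (list of per-hop dicts) into a fixed-length vector."""
    feats = []
    for hop in report:
        feats += [hop["hop_latency_us"], hop["queue_occupancy"]]
    return feats

# Toy training data: two 2-hop reports labelled normal (0) / anomalous (1).
X = [
    int_report_to_features([{"hop_latency_us": 12, "queue_occupancy": 3},
                            {"hop_latency_us": 15, "queue_occupancy": 5}]),
    int_report_to_features([{"hop_latency_us": 900, "queue_occupancy": 480},
                            {"hop_latency_us": 850, "queue_occupancy": 470}]),
]
y = [0, 1]

clf = RandomForestClassifier(n_estimators=10).fit(X, y)
probe = int_report_to_features([{"hop_latency_us": 880, "queue_occupancy": 465},
                                {"hop_latency_us": 910, "queue_occupancy": 490}])
print(clf.predict([probe]))   # expected: [1] (anomalous), given the toy data
```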

Performance Management of Token Bus Networks for Computer Integrated Manufacturing (컴퓨터 통합생산을 위한 토큰버스 네트워크의 성능관리)

  • Lee, Sang-Ho;Lee, Suk
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.13 no.6
    • /
    • pp.152-160
    • /
    • 1996
  • This paper focuses on the development and evaluation of a performance management algorithm for IEEE 802.4 token bus networks serving large-scale integrated manufacturing systems. Such factory automation networks have to satisfy the delay constraints imposed on time-critical messages while maintaining as much network capacity as possible for non-time-critical messages. This paper presents a network performance manager that adjusts queue capacities as well as timers using a set of fuzzy rules and a fuzzy inference mechanism. The efficacy of the performance manager has been demonstrated by a series of simulation experiments.
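
To illustrate the flavor of fuzzy performance management described above, the sketch below maps a measured-delay ratio through triangular membership functions and produces a queue-capacity adjustment. The membership functions and rule consequents are invented for illustration and are not the paper's rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def adjust_queue_capacity(delay_ratio, current_capacity):
    """Illustrative fuzzy adjustment (not the paper's rule base): if the
    measured delay of time-critical messages approaches its bound, shrink
    the queue capacity reserved for non-time-critical traffic; otherwise
    grow it back.  delay_ratio = measured delay / delay constraint."""
    low    = tri(delay_ratio, -0.5, 0.0, 0.7)   # comfortably within the bound
    medium = tri(delay_ratio,  0.4, 0.7, 1.0)
    high   = tri(delay_ratio,  0.7, 1.0, 1.5)   # at or past the bound
    # Rule consequents: change in capacity (cells) for each condition,
    # combined by a weighted average (centroid-style defuzzification).
    delta = (low * (+4) + medium * 0 + high * (-4)) / max(low + medium + high, 1e-9)
    return max(1, round(current_capacity + delta))

print(adjust_queue_capacity(0.95, 32))   # high delay -> capacity shrinks (to 29 here)
```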


A Locality-Aware Write Filter Cache for Energy Reduction of STTRAM-Based L1 Data Cache

  • Kong, Joonho
    • JSTS:Journal of Semiconductor Technology and Science
    • /
    • v.16 no.1
    • /
    • pp.80-90
    • /
    • 2016
  • Thanks to their superior leakage energy efficiency compared to SRAM cells, STTRAM cells are considered a promising alternative memory element for on-chip caches. However, the main disadvantage of STTRAM cells is their high write energy and latency. In this paper, we propose a low-cost write filter (WF) cache which resides between the load/store queue and the STTRAM-based L1 data cache. To maximize the efficiency of the WF cache, its line allocation and access policies are optimized to reduce the energy consumption of the STTRAM-based L1 data cache. By efficiently filtering write operations to the STTRAM-based L1 data cache, the proposed WF cache reduces its energy consumption by up to 43.0% compared to the case without a WF cache. In addition, thanks to the fast hit latency of the WF cache, it slightly improves performance, by 0.2%.
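
The sketch below models the structural idea of the write filter cache: a tiny fully associative buffer in front of the STTRAM L1 data cache that absorbs stores and pays the expensive STTRAM write only on eviction. Capacity, associativity, and the LRU policy here are assumptions, not the optimized policies proposed in the paper.

```python
from collections import OrderedDict

class WriteFilterCache:
    """Illustrative write filter (WF) cache model sitting between the
    load/store queue and the STTRAM-based L1 data cache.  Stores are
    absorbed here and written to the energy-expensive STTRAM array only
    when a line is evicted; loads check the WF cache first."""

    def __init__(self, l1d_write, num_lines=8):
        self.lines = OrderedDict()       # line_addr -> data, kept in LRU order
        self.num_lines = num_lines
        self.l1d_write = l1d_write       # callback that performs the STTRAM write

    def store(self, line_addr, data):
        self.lines[line_addr] = data
        self.lines.move_to_end(line_addr)            # mark as most recently used
        if len(self.lines) > self.num_lines:
            victim, vdata = self.lines.popitem(last=False)
            self.l1d_write(victim, vdata)            # only now pay the STTRAM write cost

    def load(self, line_addr):
        if line_addr in self.lines:
            self.lines.move_to_end(line_addr)
            return self.lines[line_addr]             # hit: fast, no STTRAM access
        return None                                  # miss: fall through to the L1 data cache
```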