• Title/Summary/Keyword: low latency

Search Results: 528

Ethernet-Based Avionic Databus and Time-Space Partition Switch Design

  • Li, Jian;Yao, Jianguo;Huang, Dongshan
    • Journal of Communications and Networks / v.17 no.3 / pp.286-295 / 2015
  • Avionic databuses fulfill a critical function in the connection and communication of aircraft components and functions such as flight control, navigation, and monitoring. Ethernet-based avionic databuses have become the mainstream for large aircraft owing to their advantages of full-duplex communication with high bandwidth, low latency, low packet loss, and low cost. As a new-generation aviation network communication standard, avionics full-duplex switched Ethernet (AFDX) adopted concepts from the telecom standard asynchronous transfer mode (ATM). In this technology, the switches are the key devices influencing overall performance. This paper reviews the avionic databus with emphasis on switch architecture classifications. Based on a comparison, analysis, and discussion of the different switch architectures, we propose a new avionic switch design based on a time-division switch fabric for high flexibility and scalability, merged with the design concept of a space-partitioned switch fabric to achieve reliability and predictability. The new switch architecture, called space-partitioned shared memory switch (SPSMS), isolates the memory space for each output port. This reduces competition for resources, avoids conflicts, decreases the packet forwarding latency through the switch, and reduces the packet loss rate. A simulation of the architecture with optimized network engineering tools (OPNET) confirms the efficiency and significant performance improvement over a classic shared memory switch in terms of overall packet latency, queuing delay, and queue size.
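
The central mechanism in this abstract is per-output-port partitioning of the shared packet memory. Below is a minimal toy sketch of that idea, not the paper's SPSMS hardware design; the class and parameter names are invented for illustration.

    from collections import deque

    class SpacePartitionedSharedMemorySwitch:
        """Toy model: each output port gets its own fixed slice of the buffer,
        so a burst toward one port can never starve or block another port."""

        def __init__(self, num_ports: int, cells_per_port: int):
            self.capacity = cells_per_port
            self.queues = [deque() for _ in range(num_ports)]

        def enqueue(self, out_port: int, packet) -> bool:
            q = self.queues[out_port]
            if len(q) >= self.capacity:   # only this port's partition can overflow
                return False              # packet dropped
            q.append(packet)
            return True

        def dequeue(self, out_port: int):
            q = self.queues[out_port]
            return q.popleft() if q else None

    # Usage: a burst to port 0 fills only port 0's partition; port 1 is unaffected.
    sw = SpacePartitionedSharedMemorySwitch(num_ports=4, cells_per_port=2)
    print([sw.enqueue(0, f"pkt{i}") for i in range(3)])  # [True, True, False]
    print(sw.enqueue(1, "pktX"))                         # True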

Enhanced Prediction Algorithm for Near-lossless Image Compression with Low Complexity and Low Latency

  • Son, Ji Deok;Song, Byung Cheol
    • IEIE Transactions on Smart Processing and Computing / v.5 no.2 / pp.143-151 / 2016
  • This paper presents new prediction methods to improve compression performance of the so-called near-lossless RGB-domain image coder, which is designed to effectively decrease the memory bandwidth of a system-on-chip (SoC) for image processing. First, variable block size (VBS)-based intra prediction is employed to eliminate spatial redundancy for the green (G) component of an input image on a pixel-line basis. Second, inter-color prediction (ICP) using spectral correlation is performed to predict the R and B components from the previously reconstructed G-component image. Experimental results show that the proposed algorithm improves coding efficiency by up to 30% compared with an existing algorithm for natural images, and improves coding efficiency with low computational cost by about 50% for computer graphics (CG) images.
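
As a rough illustration of the inter-color prediction (ICP) step described above, the sketch below predicts the R channel from the reconstructed G channel and keeps only the residual for coding. The simple per-image offset predictor is an assumption made for illustration, not the paper's exact ICP formula.

    import numpy as np

    def icp_residual(channel: np.ndarray, g_recon: np.ndarray) -> np.ndarray:
        """Predict a color channel (R or B) from the reconstructed G image and
        return the residual; a smaller residual is cheaper to entropy-code."""
        # Per-image offset predictor: R_hat = G_recon + mean(R - G_recon).
        offset = np.round(np.mean(channel.astype(np.int32) - g_recon.astype(np.int32)))
        prediction = np.clip(g_recon.astype(np.int32) + offset, 0, 255)
        return channel.astype(np.int32) - prediction   # residual to be entropy-coded

    # Usage with random data standing in for an RGB image:
    rng = np.random.default_rng(0)
    g = rng.integers(0, 256, (8, 8))
    r = np.clip(g + 10 + rng.integers(-2, 3, (8, 8)), 0, 255)
    print(np.abs(icp_residual(r, g)).mean())   # residual magnitude is far below |R| itself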

Design and Architecture of Low-Latency High-Speed Turbo Decoders

  • Jung, Ji-Won;Lee, In-Ki;Choi, Duk-Gun;Jeong, Jin-Hee;Kim, Ki-Man;Choi, Eun-A;Oh, Deock-Gil
    • ETRI Journal / v.27 no.5 / pp.525-532 / 2005
  • In this paper, we propose and present implementation results of a high-speed turbo decoding algorithm. The latency caused by (de)interleaving and iterative decoding in a conventional maximum a posteriori turbo decoder can be dramatically reduced with the proposed design. The latency reduction comes from the combination of the radix-4, center-to-top, parallel decoding, and early-stop algorithms. This reduced latency enables the use of the turbo decoder as a forward error correction scheme in real-time wireless communication services. The proposed scheme results in a slight degradation in bit error rate performance for large block sizes because the effective interleaver size in a radix-4 implementation is reduced to half, relative to the conventional method. To prove the latency reduction, we implemented the proposed scheme on a field-programmable gate array and compared its decoding speed with that of a conventional decoder. The results show an improvement of at least fivefold for a single iteration of turbo decoding.

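The early-stop idea mentioned in the abstract can be pictured with a generic iterative-decoder loop that halts once hard decisions stop changing. This is a toy sketch of one common stopping criterion, not the paper's FPGA radix-4 design; `siso_pass` is a stand-in for the two constituent MAP decoders and the (de)interleaving between them.

    def iterative_decode(llr_in, max_iters=8, siso_pass=None):
        """Toy iterative decoder loop with an early-stop rule: stop as soon as
        the hard decisions no longer change between iterations."""
        extrinsic = [0.0] * len(llr_in)
        prev_bits = None
        for it in range(1, max_iters + 1):
            # siso_pass stands in for the constituent decoders + (de)interleaving
            extrinsic = siso_pass(llr_in, extrinsic)
            bits = [1 if l + e < 0 else 0 for l, e in zip(llr_in, extrinsic)]
            if bits == prev_bits:        # early stop: decisions have converged
                return bits, it
            prev_bits = bits
        return prev_bits, max_iters

    # Usage with a dummy SISO pass that just reinforces the channel LLRs:
    bits, iters = iterative_decode([1.2, -0.4, 0.7],
                                   siso_pass=lambda llr, ext: [2 * l for l in llr])
    print(bits, iters)   # converges after 2 iterations instead of the full 8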

A study on Improving Latency-Optimized Fair Queuing Algorithm (최적 레이턴시 기반 공정 큐잉 방식의 개선에 관한 연구)

  • Kim, Tae-Joon
    • Journal of Korea Multimedia Society / v.10 no.1 / pp.83-93 / 2007
  • WFQ (Weighted Fair Queuing) is the most popular fair queuing algorithm, but it has an inherent drawback of poor bandwidth utilization, particularly under traffic that requires a low rate but a tight delay bound, such as Internet telephony. It was recently identified that the poor utilization is mainly due to the non-optimized latency of a flow, and LOFQ (Latency-Optimized Fair Queuing) was introduced to overcome this drawback. In this paper, we improve the performance of LOFQ by introducing an occupied-resource optimization function, and we reduce the implementation complexity of the recursive resource transformation by revising the transformation scheme. We also prove the superiority of LOFQ over WFQ in terms of utilization. The simulation results show that the improved LOFQ provides 20-30% higher utilization than the legacy LOFQ.

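For context on the baseline both LOFQ papers start from, a minimal WFQ-style scheduler is sketched below. The simplified virtual-time update is an assumption made for brevity, not a full GPS simulation, and the example shows how a low-weight, delay-sensitive flow can be served late, which is the utilization problem LOFQ addresses.

    import heapq

    class WFQScheduler:
        """Simplified WFQ: each flow has a weight w; a packet's finish tag is
        F = max(virtual_time, last_finish[flow]) + length / w; send smallest F first."""

        def __init__(self):
            self.vtime = 0.0
            self.last_finish = {}
            self.heap = []                 # (finish_tag, seq, flow, length)
            self.seq = 0

        def enqueue(self, flow: str, length: int, weight: float):
            start = max(self.vtime, self.last_finish.get(flow, 0.0))
            finish = start + length / weight
            self.last_finish[flow] = finish
            heapq.heappush(self.heap, (finish, self.seq, flow, length))
            self.seq += 1

        def dequeue(self):
            finish, _, flow, length = heapq.heappop(self.heap)
            self.vtime = finish            # crude virtual-time update for the sketch
            return flow, length

    # A low-rate VoIP flow with a small weight gets a large finish tag and waits
    # behind the bulk flow, even though its delay requirement is tighter.
    s = WFQScheduler()
    s.enqueue("voip", 200, weight=0.1)
    s.enqueue("bulk", 1500, weight=0.9)
    print(s.dequeue(), s.dequeue())        # ('bulk', 1500) then ('voip', 200)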

Bandwidth Utilization in Latency-Optimized Fair Queuing Algorithm (최적 레이턴시 기반 공정 큐잉 알고리즘의 대역폭 이용도)

  • Kim, Tae-Joon
    • The KIPS Transactions: Part C / v.14C no.2 / pp.155-162 / 2007
  • WFQ (Weighted Fair Queuing) is the most popular fair queuing algorithm, but it has an inherent drawback of poor bandwidth utilization, particularly under traffic that requires a low rate but a tight delay bound, such as Internet telephony. It was recently identified that the poor utilization is mainly due to the non-optimized latency of a flow, and LOFQ (Latency-Optimized Fair Queuing) was introduced to overcome this drawback. LOFQ was later improved by introducing an occupied-resource optimization function, and the implementation complexity of the recursive resource transformation was reduced by revising the transformation scheme. However, the performance of LOFQ has so far been evaluated only by simulation, which limits both the accuracy of the evaluation and the time it takes. In this paper, we develop a method to analytically compute the bandwidth utilization of LOFQ.
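
To make the latency/utilization trade-off concrete, the sketch below evaluates the classic single-node Parekh-Gallager delay bound for a WFQ-scheduled token-bucket flow. This is the textbook bound, used here only as background; it is not the paper's LOFQ analysis, and the numbers are made up.

    def wfq_delay_bound(sigma_bits: float, g_bps: float, l_max_bits: float, c_bps: float) -> float:
        """Single-node Parekh-Gallager bound for a (sigma, rho) token-bucket flow
        with guaranteed WFQ rate g >= rho on a link of capacity C:
        D <= sigma/g + L_max/C."""
        return sigma_bits / g_bps + l_max_bits / c_bps

    # A 64 kbit/s VoIP-like flow with a 1600-bit burst on a 100 Mbit/s link (1500 B MTU):
    d = wfq_delay_bound(sigma_bits=1600, g_bps=64_000, l_max_bits=12_000, c_bps=100e6)
    print(f"{d * 1e3:.2f} ms")   # ~25 ms: meeting a 10 ms bound forces g to be reserved
                                 # far above 64 kbit/s, i.e. the poor utilization at issue.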

A Handover Procedure for Seamless Service Support between Wired and Wireless Networks (유선망과 무선망간의 끊김없는 서비스를 지원하기 위한 핸드오버 절차)

  • Yang, Ok-Sik;Choi, Seong-Gon;Choi, Jun-Kyun
    • Journal of the Institute of Electronics Engineers of Korea TC / v.42 no.12 / pp.45-52 / 2005
  • This paper proposes a low-latency handover procedure for seamless connectivity and QoS support between wired (e.g., Ethernet) and wireless (e.g., WLAN, WiBro (802.16-compatible), CDMA) networks based on a mobile-assisted, server-initiated handover strategy. It is assumed that the server decides the best target network considering network status and user preferences. In this procedure, a mobile terminal associates in advance with the wireless link selected by the server and also receives a CoA. When a handover occurs without prediction in the wired network, the server performs a fast binding update using a physical handover trigger delivered through the MIH (media independent handover) function. As a result, the mobile terminal does not need to perform L2 and L3 handover procedures at handover time, which decreases handover latency and loss.
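
A schematic rendering of the server-initiated sequence described above is sketched below. The class and callback names are hypothetical and only mirror the three steps in the abstract: the server selects the target, the terminal pre-associates and pre-acquires a CoA, and the link-down trigger leaves only a binding update on the critical path.

    class Link:
        def __init__(self, name):
            self.name = name
            self._on_down = None

        def on_link_down(self, callback):        # MIH-style link event hook (illustrative)
            self._on_down = callback

        def go_down(self):
            if self._on_down:
                self._on_down()

        def allocate_coa(self):
            return f"coa-via-{self.name}"

    def server_initiated_handover(select_target, wired, candidates):
        target = select_target(candidates)       # 1. server picks the best target network
        coa = target.allocate_coa()              # 2. terminal pre-associates, pre-acquires a CoA
        wired.on_link_down(                      # 3. on the wired link-down trigger, only the
            lambda: print(f"fast binding update to {coa}"))  # binding update remains to be done

    # Usage: when the Ethernet link drops, the pre-armed binding update fires with no
    # L2/L3 handover signalling left on the critical path.
    eth, wlan = Link("ethernet"), Link("wlan")
    server_initiated_handover(lambda links: links[0], eth, [wlan])
    eth.go_down()        # prints: fast binding update to coa-via-wlan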

Providing Efficient Secured Mobile IPv6 by SAG and Robust Header Compression

  • Wu, Tin-Yu;Chao, Han-Chieh;Lo, Chi-Hsiang
    • Journal of Information Processing Systems / v.5 no.3 / pp.117-130 / 2009
  • By providing ubiquitous Internet connectivity, wireless networks offer more convenient ways for users to surf the Internet. However, wireless networks face more technological challenges than wired networks, such as limited bandwidth, security problems, and handoff latency. This paper therefore proposes new technologies to solve these problems. First, a Security Access Gateway (SAG) is proposed to solve the security issue. Mobile terminals cannot perform heavy security computations because of their low computing power; the SAG not only offers high computing power to handle the encryption demands of its domain, but also helps mobile terminals establish multiple secure tunnels to maintain a secure domain. Second, Robust Header Compression (RoHC) technology is adopted to increase bandwidth utilization. Instead of the Access Point (AP), an Access Gateway (AG) handles packet header compression and decompression on the wireless side; the AG's high computing power reduces the load on the AP, which in the original architecture had to serve a large number of compression/decompression demands from mobile terminals. Finally, wireless networks must offer users mobility and roaming, which can be achieved with Mobile IPv6 (MIPv6); such technology, however, can introduce latency, and how a mobile terminal can keep using the security tunnel and header compression context established before a handoff is another major challenge. This paper therefore proposes Early Binding Updates (EBU) combined with the SAG to offer a complete mechanism with low latency, low handoff computation, and high security.
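
To show the header-compression side of the proposal in isolation, the sketch below illustrates the general context-based idea behind RoHC: send the full header once, then only the dynamic fields that changed. It is a conceptual toy, not the RFC 3095 profiles or packet formats used by the paper; field names and the "static" set are assumptions.

    # Fields that never change within a flow are sent once and kept as context.
    STATIC_FIELDS = {"src", "dst", "sport", "dport"}

    class CompressorContext:
        def __init__(self):
            self.context = None

        def compress(self, header: dict) -> dict:
            if self.context is None:               # first packet: full header ("IR"-like)
                self.context = dict(header)
                return {"type": "full", **header}
            delta = {k: v for k, v in header.items()
                     if k not in STATIC_FIELDS and self.context.get(k) != v}
            self.context.update(delta)
            return {"type": "compressed", **delta}  # only changed dynamic fields go on air

    c = CompressorContext()
    h1 = {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 5004, "dport": 5004, "seq": 1, "ts": 100}
    h2 = {**h1, "seq": 2, "ts": 260}
    print(len(c.compress(h1)), len(c.compress(h2)))   # 7 fields vs 3 fields on the wire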

Realtime Wireless Sensor Line Protocol for Forest Fire Monitoring System (실시간 센서 네트워크 프로토콜을 이용한 산불 모니터링 시스템의 구현)

  • Kim, Jae-Ho;Lee, Sang-Shin;Ahn, Il-Yeup;Kim, Tae-Hyun;Won, Kwang-Ho;Kim, Seong-Dong
    • Proceedings of the IEEK Conference / 2005.11a / pp.1031-1034 / 2005
  • This paper introduces a novel sensor network protocol, R-WSLP (Realtime Wireless Sensor Line Protocol), which exhibits extremely low latency in large-scale WSNs. R-WSLP is proposed to implement a real-time forest fire monitoring system. We propose a distributed TDMA method for multiple channel access and a time-synchronized forwarding mechanism, used in place of a routing technique, to achieve a low-latency network. R-WSLP also provides extremely low-power operation, achieved by reducing idle listening. In our experiments, the protocol produced successful results in the forest fire monitoring system.

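The combination of distributed TDMA and time-synchronized forwarding along a line of sensors can be pictured as below. This is only our reading of the abstract with an invented slot numbering, not the published R-WSLP specification.

    SLOT_MS = 10

    def line_slot_schedule(num_nodes: int) -> dict[int, int]:
        """Node i (i hops from the sink) transmits in slot num_nodes - i, so each
        relay transmits exactly one slot after its upstream neighbour; a node only
        needs to listen in the slot just before its own, which cuts idle listening."""
        return {i: num_nodes - i for i in range(1, num_nodes + 1)}

    def worst_case_latency_ms(num_nodes: int) -> int:
        # The farthest node sends first and the packet rides consecutive slots to
        # the sink, so the delivery bound is a single TDMA frame.
        return num_nodes * SLOT_MS

    print(line_slot_schedule(5))       # {1: 4, 2: 3, 3: 2, 4: 1, 5: 0}
    print(worst_case_latency_ms(5))    # 50 ms end-to-end bound for a 5-hop line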

Pre-Registration Performance Method without Foreign Agent Helping (IP 계층 핸드오프 중 외부에이전트 도움 없는 사전등록 수행방법)

  • 김장식;강대욱
    • Proceedings of the Korea Multimedia Society Conference / 2002.11b / pp.557-560 / 2002
  • During an IP-layer handoff there can be registration latency and packet loss caused by that latency; the latency in particular strongly affects real-time or delay-sensitive services. To address this latency, the IETF proposed the Low Latency Handoff scheme, which allows Pre-Registration. That scheme requires both an old foreign agent (oFA) and a new foreign agent (nFA); if there is no nFA, Post-Registration is performed instead. In diverse network environments, however, a foreign agent may not be present at all. This paper proposes a way to perform Pre-Registration using link-layer information so that the scheme can adapt to such situations.

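A schematic sketch of the proposal, starting Mobile IP registration from a link-layer trigger when no foreign agent is available, is given below. The trigger name, subnet, and timing are invented for illustration and are not the paper's exact procedure.

    import threading
    import time

    def on_l2_trigger(new_subnet: str, register):
        # As soon as the link layer reports the target point of attachment, start the
        # registration in parallel with the remaining L2 handoff steps.
        threading.Thread(target=register, args=(new_subnet,)).start()

    def register_with_home_agent(subnet: str):
        time.sleep(0.05)                  # stands in for the registration round trip
        print(f"pre-registration for a CoA on {subnet} completed")

    on_l2_trigger("192.0.2.0/24", register_with_home_agent)
    print("L2 handoff still completing ...")
    time.sleep(0.1)                       # by now the pre-registration has already finished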

Performance Evaluation and Analysis of Multiple Scenarios of Big Data Stream Computing on Storm Platform

  • Sun, Dawei;Yan, Hongbin;Gao, Shang;Zhou, Zhangbing
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.7 / pp.2977-2997 / 2018
  • In the big data era, fresh data grows rapidly every day: more than 30,000 gigabytes of data are created every second, and the rate is accelerating. Many organizations rely heavily on real-time streaming, and big data stream computing helps them spot opportunities and risks in real-time big data. Storm, one of the most common online stream computing platforms, has been used for big data stream computing, with response times ranging from milliseconds to sub-seconds. The performance of Storm plays a crucial role in different application scenarios; however, few studies have evaluated it. In this paper, we investigate the performance of Storm under different application scenarios. Our experimental results show that the throughput and latency of Storm are greatly affected by the number of instances of each vertex in the task topology and by the number of available resources in the data center. The fault-tolerance mechanism of Storm works well in most big data stream computing environments. As a result, we suggest that a dynamic topology, an elastic scheduling framework, and a memory-based fault-tolerance mechanism are necessary to provide high-throughput, low-latency services on the Storm platform.
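
The headline finding, that throughput and latency are governed by per-vertex instance counts and available resources, can be captured with a back-of-envelope model. The numbers and the saturation rule below are illustrative assumptions, not measurements from the paper or Storm's actual scheduler.

    def topology_throughput(vertices: dict[str, tuple[int, float]], max_executors: int) -> float:
        """Estimate end-to-end throughput as the slowest stage's capacity
        (instances * per-instance rate), scaled down when the cluster cannot
        host all requested executors."""
        requested = sum(n for n, _ in vertices.values())
        scale = min(1.0, max_executors / requested)   # too few slots -> every stage shrinks
        return min(n * scale * rate for n, rate in vertices.values())

    topology = {                      # vertex -> (instances, tuples/s per instance)
        "spout":     (4, 5_000.0),
        "parse":     (8, 2_000.0),
        "aggregate": (2, 3_000.0),    # under-parallelised vertex: the bottleneck
    }
    print(topology_throughput(topology, max_executors=14))   # 6000.0 tuples/s
    print(topology_throughput(topology, max_executors=7))    # resources halved -> 3000.0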