• Title/Summary/Keyword: low-latency processing


Analysis of E2E Latency for Data Setup in 5G Network (5G 망에서 Data Call Setup E2E Latency 분석)

  • Lee, Hong-Woo;Lee, Seok-Pil
    • Journal of Internet Computing and Services / v.20 no.5 / pp.113-119 / 2019
  • The key features of the recently commercialized 5G mobile communications can be summarized as high data rate, high connection density, and low latency; of these, low latency is the feature that most clearly distinguishes 5G from the existing 4G and will be the foundation for various new service offerings. AR and autonomous driving are being considered as services that exploit this feature, and 5G network latency is also being discussed in the related standards. However, discussion of E2E latency from a service perspective is still lacking. The ultimate low-latency goal of 5G is a 1 ms air-interface round-trip delay (RTD), to be achieved through Ultra-Reliable Low-Latency Communications (URLLC) in Rel-16 (early 2020), and further latency reduction through Mobile Edge Computing (MEC) is also being studied. Beyond the 5G network itself, overall 5G E2E latency also includes the link/equipment latency on the path between the 5G network and the IDC server that delivers the service, and the processing latency within the mobile app and the server. It is also necessary to study detailed service requirements by separating the latency of initial service setup from the latency of the ongoing service. In this paper, three factors affecting initial service setup are examined experimentally: (1) data call setup, (2) C-DRX on/off for power efficiency, and (3) handover (H/O). We expect these results to contribute to the service requirements and planning associated with latency in the initial setup of latency-sensitive services.
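
The decomposition above lends itself to a small latency-budget sketch. The code below simply sums the components the abstract enumerates (air interface, core/transport network, 5G-to-IDC link, and app/server processing); the field names and the numbers are illustrative assumptions, not measurements from the paper.

```python
# Hypothetical E2E latency budget for initial service setup over 5G.
# All component values are illustrative placeholders, not figures from the paper.
from dataclasses import dataclass

@dataclass
class E2ELatencyBudget:
    air_interface_ms: float      # UE <-> gNB radio round trip (URLLC target: ~1 ms)
    core_network_ms: float       # 5G core / transport latency
    idc_link_ms: float           # link/equipment latency between the 5G network and the IDC server
    app_processing_ms: float     # processing inside the mobile app
    server_processing_ms: float  # processing inside the service server

    def total(self) -> float:
        return (self.air_interface_ms + self.core_network_ms + self.idc_link_ms
                + self.app_processing_ms + self.server_processing_ms)

setup = E2ELatencyBudget(air_interface_ms=1.0, core_network_ms=5.0, idc_link_ms=3.0,
                         app_processing_ms=10.0, server_processing_ms=8.0)
print(f"E2E setup latency: {setup.total():.1f} ms")  # 27.0 ms under these assumptions
```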

A Study on Low-Latency Handoff for Heterogeneous Networks (이기종망간 Low-Latency Handoff 에 관한 연구)

  • Lee, Hwan-Goo;Kim, Do-Hyung;Kim, Won-Tae;Kwak, Ji-Young;Lee, Kyung-Hee
    • Proceedings of the Korea Information Processing Society Conference / 2007.11a / pp.721-722 / 2007
  • Mobile IP (MIP) enables a mobile node to perform IP-layer handoff between subnets. Low-latency handoff reduces the handoff time and packet loss by adjusting the registration procedure of MIP. In vertical handoff between heterogeneous networks, low-latency handoff is usually left out of consideration; however, when a mobile node leaves its current network at high speed, applying low-latency handoff is effective in reducing packet loss.
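
A minimal sketch of when a mobile node might trigger MIP low-latency (pre-registration) handoff, reflecting the point above that fast-moving nodes benefit from it even in vertical handoffs. The thresholds, the function name, and the decision rule are hypothetical illustrations, not taken from the paper.

```python
# Hypothetical trigger for MIP low-latency (pre-registration) handoff.
# Thresholds and the decision rule are assumptions for illustration only.

SPEED_THRESHOLD_MPS = 15.0   # assumed speed above which packet loss becomes significant
RSSI_HANDOFF_DBM = -80.0     # assumed signal level that anticipates leaving the current network

def should_pre_register(node_speed_mps: float, current_rssi_dbm: float,
                        target_is_heterogeneous: bool) -> bool:
    """Decide whether to start MIP registration with the new foreign agent
    before the link-layer handoff completes (pre-registration)."""
    anticipating_exit = current_rssi_dbm < RSSI_HANDOFF_DBM
    fast_moving = node_speed_mps > SPEED_THRESHOLD_MPS
    # Low-latency handoff is normally applied in homogeneous handoffs; for
    # vertical (heterogeneous) handoffs, apply it only for fast-moving nodes.
    return anticipating_exit and (fast_moving or not target_is_heterogeneous)

print(should_pre_register(20.0, -85.0, target_is_heterogeneous=True))  # True
```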

A Study on InfiniBand Network Low-Latency Assure for High Speed Processing of Exponential Transaction (폭증스트림 고속 처리를 위한 InfiniBand 환경에서의 Low-Latency 보장 연구)

  • Jung, Hyedong;Hong, Jinwoo
    • Proceedings of the Korea Information Processing Society Conference / 2013.11a / pp.259-261 / 2013
  • In fields such as financial IT, improving velocity, one of the defining characteristics of big data, is the most pressing problem. Because these industries have a winner-take-all structure, a system that is even 0.1 seconds faster gives a very large competitive edge in the market. Not only financial IT but other industries as well are becoming highly sensitive to faster data processing, so a solution is needed. Through various experiments on low latency and the construction of an environment for high-speed processing of bursty streams, this study presents a way to address the velocity problem of big data.

Low-latency SAO Architecture and its SIMD Optimization for HEVC Decoder

  • Kim, Yong-Hwan;Kim, Dong-Hyeok;Yi, Joo-Young;Kim, Je-Woo
    • IEIE Transactions on Smart Processing and Computing / v.3 no.1 / pp.1-9 / 2014
  • This paper proposes a low-latency Sample Adaptive Offset (SAO) filter architecture and a Single Instruction Multiple Data (SIMD) optimization scheme to achieve fast High Efficiency Video Coding (HEVC) decoding in a multi-core environment. According to the HEVC standard and its Test Model (HM), SAO is performed only at the picture level. Most real-time decoders, however, execute their sub-modules on a Coding Tree Unit (CTU) basis to reduce latency and memory bandwidth. The proposed low-latency SAO architecture has two advantages over picture-based SAO: 1) significantly lower memory requirements, and 2) a low-latency property that enables efficient pipelined multi-core decoding. In addition, SIMD optimization of SAO filtering reduces the filtering time substantially. Simulation results show that the proposed low-latency SAO architecture, despite using significantly less memory, yields a decoding time similar to picture-based SAO in single-core decoding. Furthermore, the SIMD optimization scheme speeds up SAO filtering by approximately 509% and increases the total decoding speed by approximately 7% compared with the existing look-up-table approach of HM.
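
For context, SAO in HEVC adds small signaled offsets to reconstructed samples, classified either by intensity band (band offset) or by edge pattern (edge offset). The sketch below applies band offsets to a single CTU, reflecting the CTU-level processing the abstract describes; it is a plain scalar illustration, not the paper's SIMD implementation, and the band-wrapping corner case is omitted.

```python
# Scalar sketch of HEVC SAO band offset applied per CTU (8-bit samples).
# Illustrates CTU-level SAO; variable names and the test data are illustrative.
import numpy as np

def sao_band_offset_ctu(ctu: np.ndarray, band_position: int, offsets: list[int]) -> np.ndarray:
    """Apply SAO band offsets to one CTU.

    ctu           : 2-D array of reconstructed 8-bit samples
    band_position : first of the four consecutive bands that receive an offset
    offsets       : four signed offsets, one per band
    """
    out = ctu.astype(np.int16)
    bands = out >> 3                       # 32 bands of width 8 for 8-bit video
    for i, off in enumerate(offsets):
        out[bands == band_position + i] += off
    return np.clip(out, 0, 255).astype(np.uint8)

ctu = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
filtered = sao_band_offset_ctu(ctu, band_position=10, offsets=[2, 1, -1, -2])
```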

A Memory-efficient Hand Segmentation Architecture for Hand Gesture Recognition in Low-power Mobile Devices

  • Choi, Sungpill;Park, Seongwook;Yoo, Hoi-Jun
    • JSTS: Journal of Semiconductor Technology and Science / v.17 no.3 / pp.473-482 / 2017
  • Hand gesture recognition is regarded as a new Human-Computer Interaction (HCI) technology for the next generation of mobile devices. Previous hand gesture implementations require large memory and computation power for hand segmentation, and therefore fail to give users real-time interaction with mobile devices. In this paper, we therefore present a low-latency and memory-efficient hand segmentation architecture for natural hand gesture recognition. To obtain both high memory efficiency and low latency, we propose a streaming hand contour tracing unit and a fast contour filling unit. As a result, the architecture achieves 7.14 ms latency with only 34.8 KB of on-chip memory, which is 1.65 times lower latency and 1.68 times less on-chip memory, respectively, compared to the best-in-class design.

Eager Data Transfer Mechanism for Reducing Communication Latency in User-Level Network Protocols

  • Won, Chul-Ho;Lee, Ben;Park, Kyoung;Kim, Myung-Joon
    • Journal of Information Processing Systems / v.4 no.4 / pp.133-144 / 2008
  • Clusters have become a popular alternative for building high-performance parallel computing systems. Today's high-performance system area network (SAN) protocols such as VIA and IBA significantly reduce user-to-user communication latency by implementing protocol stacks outside the operating system kernel. However, emerging parallel applications require a further significant improvement in communication latency. Since the time required to transfer data between host memory and the network interface (NI) makes up a large portion of overall communication latency, reducing the data transfer time is crucial for achieving low-latency communication. In this paper, an Eager Data Transfer (EDT) mechanism is proposed to reduce the time for data transfers between the host and the network interface. EDT employs cache coherence interface hardware to transfer data directly between the host and the NI. An EDT-based network interface was modeled and simulated on Linux/SimOS, a Linux-based complete system simulation environment. Our simulation results show that the EDT approach significantly reduces the data transfer time compared to DMA-based approaches. The EDT-based NI attains a 17% to 38% reduction in user-to-user message time compared to cache-coherent DMA-based NIs for a range of message sizes (64 bytes to 4 Kbytes) in a SAN environment.
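
The abstract's core argument is that host-to-NI data transfer dominates user-to-user message time, so starting that transfer eagerly through the cache-coherence interface shrinks the end-to-end latency. The toy model below only makes that reasoning concrete; every constant (software overhead, setup costs, bandwidth) is an assumed placeholder, and the printed reductions are not the paper's measured 17%-38% figures.

```python
# Toy model of user-to-user message time: software overhead plus host-to-NI
# transfer plus wire time. The eager transfer hides most of the per-message
# setup cost by pushing written cache lines to the NI directly.
# All constants are assumptions for illustration only.

SW_OVERHEAD_US = 4.0        # send + receive software path (assumed)
WIRE_US = 1.0               # link propagation + switching (assumed)
BW_BYTES_PER_US = 800.0     # host <-> NI transfer bandwidth (assumed)

def dma_message_us(msg_bytes: int, dma_setup_us: float = 1.5) -> float:
    return SW_OVERHEAD_US + dma_setup_us + msg_bytes / BW_BYTES_PER_US + WIRE_US

def eager_message_us(msg_bytes: int, snoop_us: float = 0.3) -> float:
    return SW_OVERHEAD_US + snoop_us + msg_bytes / BW_BYTES_PER_US + WIRE_US

for size in (64, 1024, 4096):
    dma, edt = dma_message_us(size), eager_message_us(size)
    print(f"{size:5d} B  DMA {dma:5.2f} us  EDT {edt:5.2f} us  "
          f"reduction {100 * (dma - edt) / dma:4.1f}%")
```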

Study on Low-Latency overcome of Stock Trading system in Cloud (클라우드 환경에서 주식 체결 시스템의 저지연 극복에 관한 연구)

  • Kim, Keun-Heui;Moon, Seok-Jae;Yoon, Chang-Pyo;Lee, Dae-Sung
    • Journal of the Korea Institute of Information and Communication Engineering / v.18 no.11 / pp.2658-2663 / 2014
  • Various technologies have been introduced to minimize latency and improve the processing speed of stock trading systems. However, expensive network equipment offers only limited speed improvement for a trading system, and little advantage is gained by introducing such equipment. In this paper, we propose a low-latency SPT (Safe Proper Time) scheme for overcoming the latency of a stock trading system in the cloud. The proposed method minimizes the CPI in order to reduce CPU overhead, based on an understanding of the kernel, and satisfies data timeliness.

An Adaptive Polling Selection Technique for Ultra-Low Latency Storage Systems (초저지연 저장장치를 위한 적응형 폴링 선택 기법)

  • Chun, Myoungjun;Kim, Yoona;Kim, Jihong
    • IEMEK Journal of Embedded Systems and Applications / v.14 no.2 / pp.63-69 / 2019
  • Recently, ultra-low latency flash storage devices such as Z-SSD and Optane SSD were introduced; thanks to significant technological improvements, they provide much faster response times than today's other NVMe SSDs. With such ultra-low latency (around 10 μs) storage devices, the cost of a context switch can become an overhead during the interrupt-driven I/O completion process. Because interrupt-driven I/O completion incurs interrupt-handling overhead, polling or hybrid polling for I/O completion is known to perform better. In this paper, we analyze the tail-latency problem of polling caused by process scheduling in a data-center environment where multiple applications run simultaneously on one system, and we introduce an adaptive polling selection technique that dynamically selects the more efficient of the two completion methods according to the system's condition.
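
The trade-off the abstract describes can be sketched as a simple mode selector: busy-wait (poll) when the device answers faster than a sleep/wake cycle costs, but fall back to interrupts when the system is loaded and a polling thread risks being preempted into the tail. The load criterion, threshold values, and function name below are assumptions for illustration, not the paper's policy.

```python
# Minimal sketch of choosing between interrupt-driven and polling-based I/O
# completion depending on system load. Thresholds are assumed, not measured.
import os

def pick_completion_mode(expected_device_latency_us: float,
                         context_switch_cost_us: float = 5.0,
                         load_threshold: float = 1.0) -> str:
    """Return 'poll' when busy-waiting is likely cheaper than sleeping,
    and 'interrupt' when contention makes polling tail latency risky."""
    load_per_cpu = os.getloadavg()[0] / (os.cpu_count() or 1)
    if load_per_cpu > load_threshold:
        # Many runnable tasks: a polling thread may be preempted and suffer
        # long tail latency, so fall back to interrupt-driven completion.
        return "interrupt"
    if expected_device_latency_us <= 2 * context_switch_cost_us:
        # Ultra-low latency device: sleeping and waking costs more than waiting.
        return "poll"
    return "interrupt"

print(pick_completion_mode(expected_device_latency_us=10.0))
```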

Game Theory-Based Scheme for Optimizing Energy and Latency in LEO Satellite-Multi-access Edge Computing

  • Ducsun Lim;Dongkyun Lim
    • International Journal of Advanced Smart Convergence / v.13 no.2 / pp.7-15 / 2024
  • 6G network technology represents the next generation of communications, supporting high-speed connectivity, ultra-low latency, and integration with cutting-edge technologies, such as the Internet of Things (IoT), virtual reality, and autonomous vehicles. These advancements promise to drive transformative changes in digital society. However, as technology progresses, the demand for efficient data transmission and energy management between smart devices and network equipment also intensifies. A significant challenge within 6G networks is the optimization of interactions between satellites and smart devices. This study addresses this issue by introducing a new game theory-based technique aimed at minimizing system-wide energy consumption and latency. The proposed technique reduces the processing load on smart devices and optimizes the offloading decision ratio to effectively utilize the resources of Low-Earth Orbit (LEO) satellites. Simulation results demonstrate that the proposed technique achieves a 30% reduction in energy consumption and a 40% improvement in latency compared to existing methods, thereby significantly enhancing performance.
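
The paper's game-theoretic formulation is not reproduced here; the sketch below only illustrates the underlying trade-off it optimizes, namely choosing an offloading ratio that balances device energy against end-to-end latency for an LEO edge server. The cost model, parameter values, and the brute-force search are all assumptions for illustration.

```python
# Illustrative offloading-ratio selection for LEO satellite MEC.
# The linear energy/latency cost model and all constants are assumptions,
# not the paper's formulation or results.

def system_cost(ratio: float, task_bits: float,
                local_rate_bps: float, uplink_rate_bps: float, edge_rate_bps: float,
                local_energy_per_bit: float, tx_energy_per_bit: float,
                w_energy: float = 0.5, w_latency: float = 0.5) -> float:
    """ratio is the fraction of the task offloaded to the LEO edge server."""
    local_bits, offload_bits = (1 - ratio) * task_bits, ratio * task_bits
    latency = max(local_bits / local_rate_bps,                      # local compute
                  offload_bits / uplink_rate_bps + offload_bits / edge_rate_bps)
    energy = local_bits * local_energy_per_bit + offload_bits * tx_energy_per_bit
    return w_energy * energy + w_latency * latency

# Brute-force the best ratio over a grid (a stand-in for the game-theoretic solution).
ratios = [i / 100 for i in range(101)]
best = min(ratios, key=lambda r: system_cost(r, task_bits=8e6,
                                             local_rate_bps=5e6, uplink_rate_bps=20e6,
                                             edge_rate_bps=100e6,
                                             local_energy_per_bit=2e-7,
                                             tx_energy_per_bit=5e-8))
print(f"best offloading ratio: {best:.2f}")
```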

Enhanced Prediction Algorithm for Near-lossless Image Compression with Low Complexity and Low Latency

  • Son, Ji Deok;Song, Byung Cheol
    • IEIE Transactions on Smart Processing and Computing / v.5 no.2 / pp.143-151 / 2016
  • This paper presents new prediction methods to improve the compression performance of the so-called near-lossless RGB-domain image coder, which is designed to effectively decrease the memory bandwidth of a system-on-chip (SoC) for image processing. First, variable block size (VBS)-based intra prediction is employed to eliminate spatial redundancy in the green (G) component of an input image on a pixel-line basis. Second, inter-color prediction (ICP) using spectral correlation is performed to predict the R and B components from the previously reconstructed G-component image. Experimental results show that the proposed algorithm improves coding efficiency by up to 30% compared with an existing algorithm for natural images, and by about 50%, at low computational cost, for computer graphics (CG) images.
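
The inter-color prediction step can be pictured with a small sketch: the R and B planes are predicted from the already-reconstructed G plane and only the residuals are coded. The trivial "copy G" predictor below is an assumption used for illustration; the paper's predictor exploiting spectral correlation, and its VBS intra prediction for the G component, are not reproduced here.

```python
# Minimal sketch of inter-color prediction (ICP): code R and B as residuals
# against the reconstructed G plane. The predictor here is deliberately simple.
import numpy as np

def icp_residuals(r_plane: np.ndarray, b_plane: np.ndarray,
                  g_reconstructed: np.ndarray):
    """Return the residual planes that would be entropy-coded after ICP."""
    g = g_reconstructed.astype(np.int16)
    r_residual = r_plane.astype(np.int16) - g   # spectral correlation: R ~ G
    b_residual = b_plane.astype(np.int16) - g   # and B ~ G
    return r_residual, b_residual

def icp_reconstruct(residual: np.ndarray, g_reconstructed: np.ndarray) -> np.ndarray:
    return np.clip(residual + g_reconstructed.astype(np.int16), 0, 255).astype(np.uint8)

rgb = np.random.randint(0, 256, size=(8, 8, 3), dtype=np.uint8)
r_res, b_res = icp_residuals(rgb[..., 0], rgb[..., 2], g_reconstructed=rgb[..., 1])
assert np.array_equal(icp_reconstruct(r_res, rgb[..., 1]), rgb[..., 0])  # lossless round trip
```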