• Title/Summary/Keyword: data latency


Analysis of E2E Latency for Data Call Setup in 5G Network

  • Lee, Hong-Woo;Lee, Seok-Pil
    • Journal of Internet Computing and Services / v.20 no.5 / pp.113-119 / 2019
  • The key features of the recently commercialized 5G mobile communications can be summarized as high data rate, connection density, and low latency, of which the feature most distinct from existing 4G is low latency, which will be the foundation for various new services. AR and self-driving are considered services that exploit this feature, and 5G network latency is also discussed in the related standards. However, discussion of E2E latency from a service perspective is still lacking. The final goal for low latency in 5G is a 1 ms air-interface round-trip delay (RTD), to be achieved through Ultra-Reliable Low-Latency Communications (URLLC) in Rel-16 in the early 2020s, and further latency reduction in the network through Mobile Edge Computing (MEC) is also being studied. Beyond the 5G network itself, overall 5G E2E latency also includes link/equipment latency on the path between the 5G network and the IDC server delivering the service, and the processing latency within the mobile app and server. Meanwhile, detailed service requirements should be studied by separating the latency of initial service setup from the latency of continuous service. In this paper, the following three factors affecting initial service setup were examined through experiment and analysis: (1) data call setup, (2) C-DRX on/off for power efficiency, and (3) handover (H/O). Through this, we expect to contribute to the latency-related service requirements and planning for the initial setup of services requiring low latency.
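The abstract's decomposition of E2E latency (air interface, transport between the 5G network and the IDC server, app/server processing) can be sketched as a simple additive budget. The segment names and millisecond values below are illustrative assumptions, not measurements from the paper.

```python
# Illustrative E2E latency budget for a 5G service request.
# All values are hypothetical placeholders, not figures from the paper.
budget_ms = {
    "air_interface_rtd": 1.0,   # URLLC air-interface round-trip target
    "core_network": 5.0,        # 5G core transit
    "transport_to_idc": 8.0,    # links/equipment between 5G network and IDC server
    "server_processing": 12.0,  # service processing in the server
    "app_processing": 4.0,      # processing inside the mobile app
}

def e2e_latency(budget):
    """Sum the per-segment latencies into one end-to-end figure."""
    return sum(budget.values())

total = e2e_latency(budget_ms)
print(f"E2E latency: {total} ms")
for segment, ms in sorted(budget_ms.items(), key=lambda kv: -kv[1]):
    print(f"  {segment}: {ms} ms ({ms / total:.0%})")
```

A breakdown like this makes it visible that the 1 ms radio target is only a small slice of what a service actually experiences end to end.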

Variable latency L1 data cache architecture design in multi-core processor under process variation

  • Kong, Joonho
    • Journal of the Korea Society of Computer and Information / v.20 no.9 / pp.1-10 / 2015
  • In this paper, we propose a new variable-latency L1 data cache architecture for multi-core processors. Our architecture extends the traditional variable-latency cache, which targets single-core designs, to multi-core processors. We add a dedicated data structure that records the latency of the L1 data cache; the value stored in it is determined by the latency added to the cache. The structure also tracks the remaining access cycles of the L1 data cache and notifies the reservation station in the core when data arrives. As in the variable-latency cache of the single-core architecture, our design flexibly extends cache access cycles to accommodate process variation. The proposed cache architecture can reduce yield losses incurred by L1 cache access-time failures to nearly 0%. Moreover, we quantitatively evaluate performance, power, energy consumption, power-delay product, and energy-delay product as the number of cache access cycles increases.
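A minimal behavioral sketch of the added bookkeeping described in the abstract: per-line extra latency plus a countdown of remaining cycles that signals arrival to the reservation station. The class, names, and cycle counts are assumptions for illustration, not the paper's RTL design.

```python
# Behavioral sketch (assumed structure, not the paper's hardware design).
class VariableLatencyL1D:
    def __init__(self, base_cycles=2):
        self.base_cycles = base_cycles
        self.extra_cycles = {}   # cache line -> extra cycles under process variation
        self.in_flight = {}      # request id -> remaining cycles until data arrival

    def set_extra_latency(self, line, cycles):
        """Record the extra latency assigned to a slow cache line."""
        self.extra_cycles[line] = cycles

    def access(self, req_id, line):
        """Start an access; total latency depends on the line's recorded value."""
        self.in_flight[req_id] = self.base_cycles + self.extra_cycles.get(line, 0)

    def tick(self):
        """Advance one cycle; return request ids whose data arrives this cycle."""
        arrived = []
        for req_id in list(self.in_flight):
            self.in_flight[req_id] -= 1
            if self.in_flight[req_id] == 0:
                arrived.append(req_id)   # would notify the reservation station
                del self.in_flight[req_id]
        return arrived

cache = VariableLatencyL1D(base_cycles=2)
cache.set_extra_latency(5, 1)            # line 5 is slow under variation
cache.access("ld1", 5)                   # this load completes after 3 cycles
```

The point of the structure is that a slow line delays only its own accesses rather than forcing the whole cache to the worst-case cycle count.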

Eager Data Transfer Mechanism for Reducing Communication Latency in User-Level Network Protocols

  • Won, Chul-Ho;Lee, Ben;Park, Kyoung;Kim, Myung-Joon
    • Journal of Information Processing Systems / v.4 no.4 / pp.133-144 / 2008
  • Clusters have become a popular alternative for building high-performance parallel computing systems. Today's high-performance system area network (SAN) protocols, such as VIA and IBA, significantly reduce user-to-user communication latency by implementing protocol stacks outside the operating system kernel. However, emerging parallel applications require further improvement in communication latency. Since the time required to transfer data between host memory and the network interface (NI) makes up a large portion of overall communication latency, reducing data transfer time is crucial for achieving low-latency communication. In this paper, an Eager Data Transfer (EDT) mechanism is proposed to reduce the time for data transfers between the host and the NI. EDT employs cache coherence interface hardware to transfer data directly between the host and the NI. An EDT-based network interface was modeled and simulated on Linux/SimOS, a Linux-based complete system simulation environment. Our simulation results show that the EDT approach significantly reduces data transfer time compared to DMA-based approaches: the EDT-based NI attains a 17% to 38% reduction in user-to-user message time compared to cache-coherent DMA-based NIs for message sizes ranging from 64 bytes to 4 Kbytes in a SAN environment.

Memory Latency Hiding Techniques (메모리 지연을 감추는 기법들)

  • Ki, An-Do
    • Electronics and Telecommunications Trends / v.13 no.3 s.51 / pp.61-70 / 1998
  • The obvious way to make a computer system more powerful is to make the processor as fast as possible, and the next step is to adopt a large number of such fast processors. Such a multiprocessor system is useful only if it distributes the workload uniformly and keeps its processors fully utilized. To achieve higher processor utilization, memory access latency must be reduced as much as possible, and the remaining latency must be hidden. The actual latency can be reduced by using fast logic, and the effective latency can be reduced by using caches. This article discusses what the memory latency problem is, shows how serious it is through analytical and simulation results, and surveys existing techniques for coping with it, such as write buffers, relaxed consistency models, multithreading, data locality optimization, data forwarding, and data prefetching.
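One of the listed techniques, hiding latency by overlapping it with computation, can be illustrated with a toy prefetching pipeline: a producer thread fetches the next block while the consumer computes on the current one. The `fetch` function and block sizes are stand-ins invented for this sketch, not anything from the article.

```python
import threading
import queue

def fetch(block_id):
    """Stand-in for a slow memory read returning one block of data."""
    return list(range(block_id * 4, block_id * 4 + 4))

def process_blocks(n_blocks):
    q = queue.Queue(maxsize=2)  # small prefetch buffer

    def prefetcher():
        for b in range(n_blocks):
            q.put(fetch(b))     # runs ahead of the consumer
        q.put(None)             # end-of-stream marker

    threading.Thread(target=prefetcher, daemon=True).start()
    total = 0
    while (block := q.get()) is not None:
        total += sum(block)     # compute overlaps with the next fetch
    return total

print(process_blocks(3))  # sums 0..11, prints 66
```

The same overlap idea underlies hardware prefetching and multithreaded latency hiding: keep useful work in flight while a slow access completes.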

RTK Latency Estimation and Compensation Method for Vehicle Navigation System

  • Jang, Woo-Jin;Park, Chansik;Kim, Min;Lee, Seokwon;Cho, Min-Gyou
    • Journal of Positioning, Navigation, and Timing / v.6 no.1 / pp.17-26 / 2017
  • Latency occurs in RTK: the position it outputs corresponds to a past position relative to the measurement time, and this latency adversely affects navigation accuracy. In the present study, a system that estimates the latency of RTK and compensates the position error induced by it was implemented. To estimate the latency, the speed obtained from an odometer and the speed calculated from the position change of RTK were used: a modified correlator shifts the odometer speed sample by sample until it best fits the speed from RTK. To compensate the position error induced by the latency, the current position was calculated from the speed and heading of RTK. To evaluate the implemented method, data obtained from an actual vehicle were applied to the system. The experiments showed that the latency could be estimated with an error of less than 12 ms, and that the minimum data acquisition time for stable latency estimation was up to 55 seconds. In addition, when the position was compensated based on the estimated latency, the position error decreased by at least 53.6% compared with that before compensation.
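The shift-and-compare idea behind the latency estimator can be sketched as follows: slide the odometer speed series against the RTK-derived speed and keep the shift with the smallest squared error. The function name, error metric, and synthetic data are assumptions for illustration, not the paper's exact correlator.

```python
# Hedged sketch of latency estimation by sample-shifted comparison.
def estimate_latency_samples(odo_speed, rtk_speed, max_shift):
    """Return the shift (in samples) at which the two speed series best align."""
    best_shift, best_err = 0, float("inf")
    for shift in range(max_shift + 1):
        n = len(rtk_speed) - shift
        err = sum((odo_speed[i] - rtk_speed[i + shift]) ** 2 for i in range(n)) / n
        if err < best_err:
            best_shift, best_err = shift, err
    return best_shift

# Synthetic example: the RTK speed is the odometer speed delayed by 3 samples.
odo = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
rtk = [0, 0, 0] + odo[:-3]
print(estimate_latency_samples(odo, rtk, max_shift=5))  # prints 3
```

Multiplying the recovered shift by the sample period gives the latency in seconds, which is then used to project the delayed RTK position forward along the measured speed and heading.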

A Fault Tolerant Data Management Scheme for Healthcare Internet of Things in Fog Computing

  • Saeed, Waqar;Ahmad, Zulfiqar;Jehangiri, Ali Imran;Mohamed, Nader;Umar, Arif Iqbal;Ahmad, Jamil
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.1 / pp.35-57 / 2021
  • Fog computing aims to solve the bandwidth, network latency, and energy consumption problems of cloud computing, and managing the data generated by healthcare IoT devices is one of its significant applications. Healthcare IoT devices generate huge amounts of data that must be managed efficiently: with low latency, without failure, and with minimum energy consumption and cost. Failures of tasks or nodes cause additional latency, energy consumption, and cost. Thus, a failure-free, cost-efficient, and energy-aware management and scheduling scheme for healthcare IoT data not only improves system performance but can also save patients' lives, thanks to minimum latency and the provision of fault tolerance. To address these data management and fault tolerance challenges, we present a Fault Tolerant Data Management (FTDM) scheme for healthcare IoT in fog computing. In FTDM, the data generated by healthcare IoT devices is organized and managed through well-defined components and steps. A two-way fault-tolerance mechanism, i.e., task-based and node-based fault tolerance, handles failures of tasks and nodes. The paper considers energy consumption, execution cost, network usage, latency, and execution time as performance evaluation parameters. Simulation results obtained with iFogSim show significant improvements: compared with the existing Greedy Knapsack Scheduling (GKS) strategy, the proposed FTDM strategy reduces energy consumption by 3.97%, execution cost by 5.09%, network usage by 25.88%, latency by 44.15%, and execution time by 48.89%. Moreover, patients sometimes must be treated remotely due to the unavailability of facilities or to infectious diseases such as COVID-19; in such circumstances, the proposed strategy is particularly efficient.
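The abstract names two fault-tolerance levels but gives no algorithm; the sketch below is only a generic illustration of the two-way idea (retry a failed task, migrate away from a failed node), with the function names, placement policy, and executor invented for the example.

```python
# Generic two-way fault-tolerance sketch (names and policy are assumptions,
# not the FTDM algorithm from the paper).
def run_with_fault_tolerance(tasks, nodes, execute, max_retries=2):
    results = {}
    healthy = set(nodes)
    for task in tasks:
        for attempt in range(max_retries + 1):
            node = min(healthy)            # trivial placement policy for the sketch
            ok, value = execute(task, node)
            if ok:                         # task-based: retry until a run succeeds
                results[task] = value
                break
            healthy.discard(node)          # node-based: stop scheduling onto the failed node
            if not healthy:
                raise RuntimeError("no healthy fog nodes left")
    return results

def flaky(task, node):
    # hypothetical executor: node "fog-1" always fails
    return (node != "fog-1", f"{task}@{node}")

print(run_with_fault_tolerance(["t1", "t2"], ["fog-1", "fog-2"], flaky))
# prints {'t1': 't1@fog-2', 't2': 't2@fog-2'}
```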

Delay and Energy Efficient Data Aggregation in Wireless Sensor Networks

  • Le, Huu Nghia;Choe, Junseong;Shon, Minhan;Choo, Hyunseung
    • Proceedings of the Korea Information Processing Society Conference / 2012.04a / pp.607-608 / 2012
  • Data aggregation is a fundamental problem in wireless sensor networks that has attracted great attention in recent years. Delay and energy efficiency are two crucial issues in designing a data aggregation scheme. In this paper, we propose a distributed, energy-efficient algorithm for collecting data from all sensor nodes with minimum latency, called the Delay-aware Power-efficient Data Aggregation (DPDA) algorithm. DPDA minimizes the latency of the data collection process by building a time-efficient data aggregation network structure, and it saves sensor energy by decreasing node transmission distances. Energy is also well balanced among sensors to achieve an acceptable network lifetime. Extensive experiments show that DPDA significantly decreases data collection latency and obtains a reasonable network lifetime compared with other approaches.
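The abstract does not specify how DPDA builds its structure, but the driving observation, that collection latency grows with the depth of the aggregation tree, can be shown with a plain BFS sketch. The topologies and the depth-as-latency proxy below are assumptions for illustration only.

```python
from collections import deque

def bfs_tree_depth(adj, sink):
    """Build a BFS aggregation tree rooted at the sink; return its depth
    (a crude proxy for the number of hop-rounds data needs to reach the sink)."""
    depth = {sink: 0}
    q = deque([sink])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in depth:
                depth[v] = depth[u] + 1
                q.append(v)
    return max(depth.values())

# 6-node example: a chain vs. a star topology, both rooted at node 0.
chain = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
star = {0: [1, 2, 3, 4, 5], 1: [0], 2: [0], 3: [0], 4: [0], 5: [0]}
print(bfs_tree_depth(chain, 0), bfs_tree_depth(star, 0))  # prints 5 1
```

A shallower tree means fewer hop-rounds to the sink, while shorter individual links cost less transmission energy; a scheme like DPDA has to trade these against each other.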

Agent with Low-latency Overcoming Technique for Distributed Cluster-based Machine Learning

  • Seo-Yeon, Gu;Seok-Jae, Moon;Byung-Joon, Park
    • International Journal of Internet, Broadcasting and Communication / v.15 no.1 / pp.157-163 / 2023
  • Recently, as businesses and data types become more complex and diverse, efficient data analysis using machine learning is required. However, since communication in the cloud environment is greatly affected by network latency, data analysis is not smooth when information delay occurs. In this paper, SPT (Safe Proper Time) is applied to the cluster-based machine learning data analysis agent proposed in previous studies to solve this delay problem. SPT accesses memory on the cluster that processes data between layers directly and remotely, effectively improving data transfer speed and ensuring the timeliness and reliability of data transfer.

An Area-Efficient Network Interface Architecture in NoC

  • Lee, Ser-Hoon;Hwang, Sun-Young
    • The Journal of Korean Institute of Communications and Information Sciences / v.33 no.5C / pp.361-370 / 2008
  • NoC is adopted for data communication between processors and IPs in MPSoC systems. NoC offers scalability, in that a system can easily be expanded simply by adding switches. However, as the number of switches increases, chip area increases, as does data transfer latency. This paper proposes an architecture that reduces the number of switches in the system by sharing network interfaces. To reduce NI area, the modules sharing a network interface use a common buffer within it. Experimental results show that chip area is reduced by 46.5% and data transfer latency by 17.1%, compared to the conventional architecture.

A Distributed LT Codes-based Data Transmission Technique for Multicast Services in Vehicular Ad-hoc Networks

  • Zhou, Yuan;Fei, Zesong;Huang, Gaishi;Yang, Ang;Kuang, Jingming
    • KSII Transactions on Internet and Information Systems (TIIS) / v.7 no.4 / pp.748-766 / 2013
  • In this paper, we consider infrastructure-to-vehicle-to-vehicle (I2V2V) based Vehicular Ad-hoc Networks (VANETs), in which one base station multicasts data to d vehicular users with the assistance of r vehicular users. A Distributed Luby Transform (DLT) codes based transmission scheme is proposed over lossy VANETs to reduce transmission latency. Furthermore, focusing on the degree distribution of DLT codes, a Modified Deconvolved Soliton Distribution (MDSD) is designed to further reduce transmission latency and improve transmission reliability. We investigate the network behavior of the transmission scheme with MDSD, called the MDLT-based scheme, and derive closed-form expressions for the transmission latency of the proposed schemes. Simulation results show that the DLT-based scheme reduces transmission latency significantly compared with traditional Automatic Repeat reQuest (ARQ) and Luby Transform (LT) codes based schemes. Compared to the DLT-based scheme, the MDLT-based scheme further reduces transmission latency and substantially improves FER performance when both the source-to-relay and relay-to-sink channels are erasure channels.
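The abstract's MDSD is a modified deconvolved degree distribution whose exact form is not given here; as a hedged illustration of the general LT encoding step it builds on, the sketch below uses the textbook Ideal Soliton Distribution instead. The function names and byte values are inventions for the example.

```python
import random

def ideal_soliton(k):
    """Textbook Ideal Soliton Distribution:
    P(d=1) = 1/k, P(d=i) = 1/(i*(i-1)) for i = 2..k."""
    probs = [0.0, 1.0 / k] + [1.0 / (i * (i - 1)) for i in range(2, k + 1)]
    return probs  # probs[d] is the probability of degree d

def lt_encode_symbol(source, probs, rng):
    """One LT encoding step: sample a degree, then XOR that many source symbols."""
    degrees = list(range(1, len(source) + 1))
    d = rng.choices(degrees, weights=probs[1:])[0]
    chosen = rng.sample(range(len(source)), d)
    out = 0
    for idx in chosen:
        out ^= source[idx]
    return sorted(chosen), out

rng = random.Random(7)
source = [0x11, 0x22, 0x33, 0x44]
probs = ideal_soliton(len(source))
print(lt_encode_symbol(source, probs, rng))
```

In the distributed (DLT) setting, relays combine encoded symbols from several sources, which is what motivates deconvolving the target degree distribution across hops; MDSD modifies that deconvolution to cut latency further.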