• Title/Abstract/Keywords: data latency

739 search results (processing time: 0.03 sec)

5G 망에서 Data Call Setup E2E Latency 분석 (Analysis of E2E Latency for Data Setup in 5G Network)

  • 이홍우;이석필
    • 인터넷정보학회논문지
    • /
    • Vol. 20 No. 5
    • /
    • pp.113-119
    • /
    • 2019
  • The recently commercialized 5G mobile communication system is characterized by high data rate, connection density, and low latency; of these, low latency most clearly distinguishes 5G from 4G and will serve as the foundation for a variety of new services. Services exploiting this characteristic, such as AR and autonomous driving, are under review, and related standards bodies are also discussing 5G network latency. However, discussion of E2E latency from the service perspective is still lacking. The ultimate goal for achieving low latency in 5G is a 1 ms air-interface round-trip delay (RTD), which becomes achievable with URLLC (Ultra-Reliable Low Latency Communications) in Rel-16 in early 2020; reducing network latency via MEC (Mobile Edge Computing) is also under study. Beyond the 5G network itself, overall 5G E2E latency involves various other factors, chiefly the latency of the lines and equipment on the path between the 5G network and the IDC server providing the service, and the processing latency in the device app and in the server. It is also necessary to study detailed service requirements by distinguishing the latency of initial service setup from the latency while a service is already in progress. To this end, this paper examines three factors related to initial service setup: (1) the latency that can occur during data call setup, (2) the effect of C-DRX on/off used for power efficiency, and (3) the effect of handover (H/O) on latency, presenting experiments and analysis for each. We expect this work to contribute to latency-related service requirements and planning for the initial setup of services that require low latency.
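The abstract's decomposition of E2E latency into air-interface, transport, and processing segments can be sketched as a simple additive budget. Every number and name below is an illustrative assumption, not a measurement from the paper; only the 1 ms URLLC air-interface target comes from the text.

```python
# Minimal sketch of an E2E latency budget along the lines the abstract
# describes; all values are illustrative assumptions, not measurements.
BUDGET_MS = {
    "air_interface_rtd": 1.0,   # URLLC RTD target (Rel-16)
    "transport_to_idc": 4.0,    # assumed line/equipment latency to the IDC server
    "server_processing": 3.0,   # assumed service processing in the server
    "device_processing": 2.0,   # assumed processing in the device app
}

def e2e_latency_ms(budget, setup_overhead_ms=0.0):
    """Sum the per-segment latencies; setup_overhead_ms models the extra
    delay of initial data call setup versus an already-active session."""
    return sum(budget.values()) + setup_overhead_ms

steady = e2e_latency_ms(BUDGET_MS)
initial = e2e_latency_ms(BUDGET_MS, setup_overhead_ms=25.0)  # assumed setup cost
print(f"steady-state E2E: {steady:.1f} ms, initial setup: {initial:.1f} ms")
```

The split mirrors the paper's point that even a 1 ms air interface leaves the service-level E2E latency dominated by transport, processing, and setup overheads.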

Variable latency L1 data cache architecture design in multi-core processor under process variation

  • Kong, Joonho
    • 한국컴퓨터정보학회논문지
    • /
    • Vol. 20 No. 9
    • /
    • pp.1-10
    • /
    • 2015
  • In this paper, we propose a new variable-latency L1 data cache architecture for multi-core processors. The proposed architecture extends the traditional variable-latency cache, which targets single-core processors, to multi-core processors. We add a specialized data structure that records the latency of the L1 data cache: the value stored in it is determined by the latency added to the cache. The structure also tracks the remaining cycles of an L1 data cache access and notifies the reservation station in the core when the data arrives. As in the variable-latency cache of the single-core architecture, the proposed architecture flexibly extends cache access cycles to account for process variation, and it can reduce the yield losses incurred by L1 cache access-time failures to nearly 0%. Moreover, we quantitatively evaluate performance, power, energy consumption, power-delay product, and energy-delay product as the number of cache access cycles increases.
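The bookkeeping the abstract describes, a record of each access's added latency plus a countdown of remaining cycles that eventually notifies the reservation station, can be sketched as a toy model. The class and method names are assumptions for illustration, not the paper's design.

```python
# Toy model of variable-latency L1 D-cache bookkeeping: per-access added
# latency is recorded, remaining cycles are tracked, and completed
# accesses are reported (standing in for reservation-station wakeup).
class VariableLatencyCache:
    def __init__(self, base_latency=2):
        self.base_latency = base_latency
        self.pending = {}  # access id -> remaining cycles

    def access(self, access_id, extra_cycles=0):
        # extra_cycles models process-variation-induced added latency
        self.pending[access_id] = self.base_latency + extra_cycles

    def tick(self):
        """Advance one cycle; return ids whose data has just arrived."""
        done = [a for a, c in self.pending.items() if c == 1]
        self.pending = {a: c - 1 for a, c in self.pending.items() if c > 1}
        return done

cache = VariableLatencyCache(base_latency=2)
cache.access("ld1")                  # nominal access
cache.access("ld2", extra_cycles=1)  # slow line under process variation
print(cache.tick())  # []
print(cache.tick())  # ['ld1']
print(cache.tick())  # ['ld2']
```

The point of the flexible countdown is exactly the yield argument in the abstract: a line that misses timing is given an extra cycle instead of being discarded as a manufacturing failure.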

Eager Data Transfer Mechanism for Reducing Communication Latency in User-Level Network Protocols

  • Won, Chul-Ho;Lee, Ben;Park, Kyoung;Kim, Myung-Joon
    • Journal of Information Processing Systems
    • /
    • Vol. 4 No. 4
    • /
    • pp.133-144
    • /
    • 2008
  • Clusters have become a popular alternative for building high-performance parallel computing systems. Today's high-performance system area network (SAN) protocols, such as VIA and IBA, significantly reduce user-to-user communication latency by implementing protocol stacks outside the operating system kernel. However, emerging parallel applications require a further improvement in communication latency. Since the time required for transferring data between host memory and the network interface (NI) makes up a large portion of overall communication latency, reducing data transfer time is crucial for achieving low-latency communication. In this paper, an Eager Data Transfer (EDT) mechanism is proposed to reduce the time for data transfers between the host and the network interface. EDT employs cache coherence interface hardware to transfer data directly between the host and the NI. An EDT-based network interface was modeled and simulated on Linux/SimOS, a Linux-based complete system simulation environment. Our simulation results show that the EDT approach significantly reduces data transfer time compared to DMA-based approaches: the EDT-based NI attains a 17% to 38% reduction in user-to-user message time compared to cache-coherent DMA-based NIs for a range of message sizes (64 bytes to 4 Kbytes) in a SAN environment.

메모리 지연을 감추는 기법들 (Memory Latency Hiding Techniques)

  • 기안도
    • 전자통신동향분석
    • /
    • Vol. 13 No. 3 (No. 51 overall)
    • /
    • pp.61-70
    • /
    • 1998
  • The obvious way to make a computer system more powerful is to make the processor as fast as possible, and adopting a large number of such fast processors is the next step. Such a multiprocessor system is useful only if it distributes the workload uniformly and its processors are fully utilized. To achieve high processor utilization, memory access latency must be reduced as much as possible, and the latency that remains must be hidden. The actual latency can be reduced by using fast logic, and the effective latency can be reduced by using caches. This article discusses what the memory latency problem is, shows how serious it is through analytical and simulation results, and surveys existing techniques for coping with it: write buffers, relaxed consistency models, multithreading, data locality optimization, data forwarding, and data prefetching.
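One of the surveyed techniques, multithreading, hides latency by letting other threads compute while one stalls on memory. A classic analytic model of the resulting processor utilization can be written in a few lines; the cycle counts below are illustrative assumptions, not figures from the article.

```python
# Analytic toy model of latency hiding via multithreading: each thread
# computes run_cycles, then stalls latency_cycles waiting on memory.
# With enough threads, the stall of one thread overlaps the compute of
# the others and utilization saturates at 1.0.
def utilization(threads, run_cycles, latency_cycles):
    """Fraction of time the processor does useful work (capped at 1)."""
    return min(1.0, threads * run_cycles / (run_cycles + latency_cycles))

for t in (1, 2, 4, 8):
    print(t, utilization(t, run_cycles=20, latency_cycles=100))
```

With a 100-cycle latency and 20-cycle run quantum, a single thread keeps the processor busy only 1/6 of the time, while six or more threads hide the latency completely, which is the basic argument for multithreaded latency tolerance.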

RTK Latency Estimation and Compensation Method for Vehicle Navigation System

  • Jang, Woo-Jin;Park, Chansik;Kim, Min;Lee, Seokwon;Cho, Min-Gyou
    • Journal of Positioning, Navigation, and Timing
    • /
    • Vol. 6 No. 1
    • /
    • pp.17-26
    • /
    • 2017
  • Latency occurs in RTK: the position it outputs corresponds to a past position relative to the measurement time, and this latency adversely affects navigation accuracy. In the present study, a system that estimates the latency of RTK and compensates for the position error it induces was implemented. To estimate the latency, the speed obtained from an odometer and the speed calculated from the position changes of RTK were used: the latency was estimated with a modified correlator in which the speed from the odometer is shifted one sample at a time until it best fits the speed from RTK. To compensate for the position error induced by the latency, the current position was calculated from the speed and heading of RTK. To evaluate the implemented method, data obtained from an actual vehicle were applied to the system. The results of the experiment showed that the latency could be estimated with an error of less than 12 ms, and the minimum data acquisition time for stable estimation of the latency was at most 55 seconds. In addition, when the position was compensated based on the estimated latency, the position error decreased by at least 53.6% compared with that before compensation.
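The shift-and-fit idea behind the modified correlator can be sketched directly: slide the RTK-derived speed series against the odometer speed series and pick the shift with the smallest squared error. The data and the error metric below are assumptions for illustration; the paper's correlator details are not reproduced.

```python
# Sketch of the latency-estimation idea: the RTK speed series lags the
# odometer speed series by the RTK latency, so the best-fitting shift
# (in samples) estimates that latency.
def estimate_latency(odo_speed, rtk_speed, max_shift):
    """Return the shift of rtk_speed relative to odo_speed that
    minimizes the sum of squared differences."""
    best_shift, best_err = 0, float("inf")
    for s in range(max_shift + 1):
        pairs = zip(odo_speed[:len(odo_speed) - s], rtk_speed[s:])
        err = sum((a - b) ** 2 for a, b in pairs)
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift

# synthetic example: RTK outputs the odometer speed delayed by 3 samples
odo = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
rtk = [0, 0, 0] + odo[:-3]
print(estimate_latency(odo, rtk, max_shift=5))  # 3
```

Multiplying the estimated shift by the sample period converts it to a latency in seconds, which the compensation step then uses to extrapolate the current position from RTK speed and heading.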

A Fault Tolerant Data Management Scheme for Healthcare Internet of Things in Fog Computing

  • Saeed, Waqar;Ahmad, Zulfiqar;Jehangiri, Ali Imran;Mohamed, Nader;Umar, Arif Iqbal;Ahmad, Jamil
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 15 No. 1
    • /
    • pp.35-57
    • /
    • 2021
  • Fog computing aims to solve the bandwidth, network latency, and energy consumption problems of cloud computing, and managing the data generated by healthcare IoT devices is one of its significant applications. Healthcare IoT devices generate a huge amount of data, which must be managed efficiently: with low latency, without failure, and with minimum energy consumption and cost. Failures of tasks or nodes cause higher latency, energy consumption, and cost. Thus, a failure-free, cost-efficient, and energy-aware management and scheduling scheme for data generated by healthcare IoT devices not only improves system performance but can also save patients' lives, thanks to minimal latency and the provision of fault tolerance. To address these data management and fault tolerance challenges, we present a Fault Tolerant Data Management (FTDM) scheme for healthcare IoT in fog computing. In FTDM, the data generated by healthcare IoT devices is organized and managed through well-defined components and steps, and a two-way fault-tolerance mechanism, i.e., task-based and node-based fault tolerance, handles failures of tasks and nodes. The paper considers energy consumption, execution cost, network usage, latency, and execution time as performance evaluation parameters. Simulations performed using iFogSim show significant improvements: the proposed FTDM strategy reduces energy consumption by 3.97%, execution cost by 5.09%, network usage by 25.88%, latency by 44.15%, and execution time by 48.89% compared with the existing Greedy Knapsack Scheduling (GKS) strategy. Moreover, it is worth noting that patients sometimes must be treated remotely due to the non-availability of facilities or to infectious diseases such as COVID-19; in such circumstances, the proposed strategy is particularly effective.
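The node-based side of a two-way fault-tolerance mechanism like the one described can be sketched as failover: a task that fails on one fog node is re-dispatched to another. The node list, failure model, and function names below are assumptions for illustration, not FTDM's actual components.

```python
# Minimal sketch of node-based fault tolerance: try the task on each
# fog node in turn and return the first successful result.
def run_with_failover(task, nodes):
    """Return (node_name, result) from the first node that succeeds,
    or raise if every node fails."""
    last_error = None
    for node in nodes:
        try:
            return node["name"], node["execute"](task)
        except RuntimeError as err:
            last_error = err  # node fault: move on to the next node
    raise RuntimeError(f"all nodes failed: {last_error}")

def flaky(task):
    raise RuntimeError("node offline")

def healthy(task):
    return f"processed {task}"

nodes = [{"name": "fog-1", "execute": flaky},
         {"name": "fog-2", "execute": healthy}]
print(run_with_failover("ecg-batch-7", nodes))  # ('fog-2', 'processed ecg-batch-7')
```

Task-based fault tolerance would retry or reschedule the task itself rather than switching nodes; both paths serve the same goal the abstract states, keeping latency and failure rates low for safety-critical healthcare data.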

Delay and Energy Efficient Data Aggregation in Wireless Sensor Networks

  • Le, Huu Nghia;Choe, Junseong;Shon, Minhan;Choo, Hyunseung
    • 한국정보처리학회:학술대회논문집
    • /
    • KIPS 2012 Spring Conference
    • /
    • pp.607-608
    • /
    • 2012
  • Data aggregation is a fundamental problem in wireless sensor networks that has attracted great attention in recent years, and delay and energy efficiency are two crucial issues in designing a data aggregation scheme. In this paper, we propose a distributed, energy-efficient algorithm for collecting data from all sensor nodes with minimum latency, called the Delay-aware Power-efficient Data Aggregation (DPDA) algorithm. DPDA minimizes the latency of the data collection process by building a time-efficient data aggregation network structure. It also saves sensor energy by decreasing node transmission distances, and energy is well balanced among sensors to achieve an acceptable network lifetime. Extensive experiments show that the DPDA scheme significantly decreases data collection latency while obtaining a reasonable network lifetime compared with other approaches.
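The connection between a "time-efficient aggregation structure" and latency can be illustrated with a toy model: if each tree level aggregates in parallel, collection latency grows with tree depth, so a shallow (BFS) tree beats a deep one. The topologies below are assumed examples; the DPDA algorithm itself is not reproduced.

```python
# Sketch: collection latency as aggregation-tree depth. A BFS tree
# rooted at the sink gives each node its hop distance; the maximum
# distance is the number of parallel aggregation rounds needed.
from collections import deque

def bfs_tree_depth(adj, sink):
    """Build a BFS aggregation tree rooted at the sink and return its
    depth, a proxy for data-collection latency in rounds."""
    depth = {sink: 0}
    queue = deque([sink])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in depth:
                depth[v] = depth[u] + 1
                queue.append(v)
    return max(depth.values())

# chain vs. star over the same 5 nodes: the shallower structure wins
chain = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
star = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
print(bfs_tree_depth(chain, 0), bfs_tree_depth(star, 0))  # 4 1
```

The energy side of the trade-off, which DPDA also balances, pulls the other way: shorter transmission distances favor deeper, multi-hop structures, which is why delay and energy must be optimized jointly.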

Agent with Low-latency Overcoming Technique for Distributed Cluster-based Machine Learning

  • Seo-Yeon, Gu;Seok-Jae, Moon;Byung-Joon, Park
    • International Journal of Internet, Broadcasting and Communication
    • /
    • Vol. 15 No. 1
    • /
    • pp.157-163
    • /
    • 2023
  • Recently, as businesses and data types have become more complex and diverse, efficient data analysis using machine learning is required. However, since communication in the cloud environment is greatly affected by network latency, data analysis does not proceed smoothly when information is delayed. In this paper, SPT (Safe Proper Time) is applied to the cluster-based machine-learning data analysis agent proposed in previous studies to solve this delay problem. SPT is a method of directly accessing remote memory in the cluster that processes data between layers, effectively improving data transfer speed and ensuring the timeliness and reliability of data transfer.

NoC에서 면적 효율적인 Network Interface 구조에 관한 연구 (An Area Efficient Network Interface Architecture)

  • 이서훈;황선영
    • 한국통신학회논문지
    • /
    • Vol. 33 No. 5C
    • /
    • pp.361-370
    • /
    • 2008
  • An MPSoC system composed of multiple processors and IPs requires a NoC for inter-module communication. A NoC has the advantage that the system can be scaled easily just by adding switches, but as system complexity grows, the number of switches in the NoC increases, and the added switches increase total system area and data transfer latency. This paper proposes sharing the network interface to reduce the number of switches the system requires, thereby reducing total system area and data transfer latency. The area of the network interface itself is reduced by sharing buffers among the modules connected to it. Experimental results show that, owing to the reduced switch count and network interface area, total system area decreases by an average of 46.5% and data latency by an average of 17.1% compared with the conventional design.

A Distributed LT Codes-based Data Transmission Technique for Multicast Services in Vehicular Ad-hoc Networks

  • Zhou, Yuan;Fei, Zesong;Huang, Gaishi;Yang, Ang;Kuang, Jingming
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 7 No. 4
    • /
    • pp.748-766
    • /
    • 2013
  • In this paper, we consider infrastructure-to-vehicle-to-vehicle (I2V2V) based Vehicular Ad-hoc Networks (VANETs), where one base station multicasts data to d vehicular users with the assistance of r vehicular users. A Distributed Luby Transform (DLT) codes based transmission scheme is proposed over lossy VANETs to reduce transmission latency. Furthermore, focusing on the degree distribution of DLT codes, a Modified Deconvolved Soliton Distribution (MDSD) is designed to further reduce transmission latency and improve transmission reliability. We investigate the network behavior of the transmission scheme with MDSD, called the MDLT-based scheme, and derive closed-form expressions for the transmission latency of the proposed schemes. Performance simulation results show that the DLT-based scheme significantly reduces transmission latency compared with traditional Automatic Repeat reQuest (ARQ) and Luby Transform (LT) codes based schemes. In contrast to the DLT-based scheme, the MDLT-based scheme further reduces transmission latency and substantially improves FER performance when both the source-to-relay and relay-to-sink channels are erasure channels.