• Title/Summary/Keyword: Latency Time

Garbage Collection Synchronization Technique for Improving Tail Latency of Cloud Databases (클라우드 데이터베이스에서의 꼬리응답시간 감소를 위한 가비지 컬렉션 동기화 기법)

  • Han, Seungwook;Hahn, Sangwook Shane;Kim, Jihong
    • Journal of KIISE
    • /
    • v.44 no.8
    • /
    • pp.767-773
    • /
    • 2017
  • In a distributed system environment such as a cloud database, the tail latency needs to be kept short to ensure uniform quality of service. In this paper, through experiments on a Cassandra database, we show that long tail latency is caused by a lack of memory space: the database cannot accept any request until free space is reclaimed by writing the buffered data to the storage device. We observed that, since the performance of the storage device determines the time required for writing the buffered data, the performance degradation of a Solid State Drive (SSD) due to garbage collection results in a longer tail latency. We propose a garbage collection synchronization technique, called SyncGC, that performs garbage collection in the Java virtual machine and garbage collection in the SSD concurrently, thus hiding the garbage collection overhead of the SSD. Our evaluations on real SSDs show that SyncGC reduces the 99.9th- and 99.99th-percentile tail latency by 31% and 36%, respectively.
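
The core idea, overlapping host-side and device-side garbage collection so their stalls coincide rather than add up, can be sketched as follows. Every class and method name here is a hypothetical illustration of the general technique; the paper's actual mechanism coordinates the JVM's collector with the SSD's internal GC through real interfaces that this toy model only mimics:

```python
class SyncGC:
    """Conceptual sketch: trigger host-side (JVM) GC while the SSD performs
    its internal GC, so the two stalls overlap instead of adding up."""

    def __init__(self, ssd, jvm):
        self.ssd = ssd
        self.jvm = jvm

    def maybe_collect(self):
        # Preferred case: the SSD is already busy with its own garbage
        # collection, so a JVM collection now costs no extra stall.
        if self.ssd.gc_active():
            self.jvm.collect()
            return True
        # Fallback: heap pressure is too high to defer any longer.
        if self.jvm.heap_pressure() > 0.9:
            self.jvm.collect()
            return True
        return False


# Minimal stubs standing in for the real SSD/JVM interfaces.
class FakeSSD:
    def __init__(self, active):
        self._active = active

    def gc_active(self):
        return self._active


class FakeJVM:
    def __init__(self, pressure):
        self.pressure = pressure
        self.collections = 0

    def heap_pressure(self):
        return self.pressure

    def collect(self):
        self.collections += 1
```

With an idle SSD and moderate heap pressure, the JVM collection is deferred; once the SSD reports its GC as active, the JVM collection is co-scheduled with it.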

Analytical Modelling and Heuristic Algorithm for Object Transfer Latency in the Internet of Things (사물인터넷에서 객체전송지연을 계산하기 위한 수리적 모델링 및 휴리스틱 알고리즘의 개발)

  • Lee, Yong-Jin
    • Journal of Internet of Things and Convergence
    • /
    • v.6 no.3
    • /
    • pp.1-6
    • /
    • 2020
  • This paper aims to integrate the previous models of mean object transfer latency into one framework and to analyze the results through computational experiments. The analytical object transfer latency model assumes multiple packet losses and an Internet of Things (IoT) environment including multi-hop wireless networks, where fast retransmission is not possible due to the small window. The model also considers the initial congestion window size and multiple packet losses within one congestion window. Performance evaluation shows that the lower and upper bounds of the mean object transfer latency are almost the same when both the transfer object size and the packet loss rate are small. However, as the packet loss rate increases, the size of the initial congestion window and the round-trip time affect the upper and lower bounds of the mean object transfer latency.
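
As background, the textbook loss-free slow-start latency model (as given in standard networking texts) shows how object size, window growth, and round-trip time enter such bounds; the paper's framework generalizes this direction with packet losses and multi-hop wireless behavior. A minimal sketch in our own notation, assuming an initial congestion window of one MSS that doubles each RTT:

```python
import math

def tcp_transfer_latency(obj_size, mss, rate, rtt):
    """Loss-free slow-start transfer latency (static textbook model).

    obj_size and mss in bytes, rate in bytes/s, rtt in seconds.
    """
    k = math.ceil(math.log2(obj_size / mss + 1))         # windows needed
    q = math.floor(math.log2(1 + rtt * rate / mss)) + 1  # stalls if unbounded
    p = min(q, k - 1)                                    # actual stall count
    return (2 * rtt                       # connection setup + request
            + obj_size / rate             # pure transmission time
            + p * (rtt + mss / rate)      # stall periods waiting for ACKs
            - (2 ** p - 1) * mss / rate)  # transmission already overlapped
```

For a 14,600-byte object (ten 1,460-byte segments) over a 10 Mbit/s link with a 100 ms RTT, the model gives roughly 0.507 s, most of it spent stalled in slow start rather than transmitting.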

Inter-AP Security Transition Mechanism and Its FSM in WLAN AP Supporting Fast Roaming (이동 무선랜 접속장치의 접속점 보안 천이 메커니즘과 유한상태머신)

  • Chung ByungHo;Kang You Sung;Oh KyungHee;Kim SangHa
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.30 no.6C
    • /
    • pp.601-606
    • /
    • 2005
  • Recently, with the high expectation of voice-over-WLAN service, supporting fast inter-AP security transition in a WLAN AP has become one of the most actively investigated issues. It is also very important to minimize the inter-AP security transition latency while constantly maintaining the secure association from the old AP when a station transits to a new AP. Hence, this paper first defines secure transition latency as a primary performance metric of an AP system in a WLAN supporting IEEE 802.11i, 802.1X, and 802.11f, and then presents a low-latency inter-AP security transition mechanism and its security FSM, whose objective is to minimize inter-AP transition latency. Experiments show that the proposed scheme outperforms the legacy 802.1X AP by up to 79% with regard to transition latency.
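
A finite-state-machine formulation like the one described can be expressed as a transition table. The states and events below are hypothetical simplifications of our own, not the paper's actual FSM, which covers the full IEEE 802.11i/802.1X/802.11f message exchanges:

```python
from enum import Enum, auto

class State(Enum):
    ASSOCIATED_OLD_AP = auto()
    REASSOCIATING = auto()
    KEY_HANDSHAKE = auto()
    ASSOCIATED_NEW_AP = auto()

# Hypothetical transition table: (current state, event) -> next state.
TRANSITIONS = {
    (State.ASSOCIATED_OLD_AP, "reassoc_request"): State.REASSOCIATING,
    (State.REASSOCIATING, "context_from_old_ap"): State.KEY_HANDSHAKE,
    (State.KEY_HANDSHAKE, "4way_handshake_done"): State.ASSOCIATED_NEW_AP,
}

def step(state, event):
    # Events with no defined transition leave the state unchanged.
    return TRANSITIONS.get((state, event), state)
```

Driving the table with the three events in order walks the station from the old AP to a fully keyed association at the new AP; the latency the paper targets is the wall-clock time spent traversing these intermediate states.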

Refined fixed granularity algorithm on Networks of Workstations (NOW 환경에서 개선된 고정 분할 단위 알고리즘)

  • Gu, Bon-Geun
    • The KIPS Transactions:PartA
    • /
    • v.8A no.2
    • /
    • pp.117-124
    • /
    • 2001
  • On Networks of Workstations (NOW), load sharing plays a very important role in improving performance. The known load sharing strategies are fixed-granularity, variable-granularity, and adaptive-granularity. The variable-granularity algorithm is sensitive to various parameters, whereas the Send algorithm, which implements the fixed-granularity strategy, is robust to task granularity, and the performance difference between Send and the variable-granularity algorithm is not substantial. In the Send algorithm, however, the computing time and the communication time are not overlapped, so a long network latency influences the execution time of the parallel program. In this paper, we propose the preSend algorithm, in which the master node can send data to the slave nodes in advance without waiting for partial results from the slaves. As the master node sends the next data to the slaves in advance, the slave nodes can process the data without idle time. The preSend algorithm can thus overlap the computing time and the communication time, reducing the influence of long network latency on the execution time of a parallel program on a NOW. To compare the execution times of the two algorithms, we use a $320{\times}320$ matrix multiplication. The comparison shows that the preSend algorithm has a shorter execution time than the Send algorithm.
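
The benefit of overlapping communication with computation can be illustrated with a simple cost model, which is our own simplification rather than the paper's measurements: Send strictly alternates transfer and computation, while preSend keeps the next chunk in flight during the current computation.

```python
def send_time(chunks, t_comm, t_comp):
    """Send: the master waits for each partial result before sending the
    next chunk, so transfer and computation strictly alternate."""
    return chunks * (t_comm + t_comp)

def presend_time(chunks, t_comm, t_comp):
    """preSend: the next chunk is transferred while the current one is
    being computed; after the first transfer, each step costs the larger
    of the two times, and the last chunk still needs its computation."""
    return t_comm + (chunks - 1) * max(t_comm, t_comp) + t_comp
```

With 10 chunks, a 2 ms transfer, and a 5 ms computation, Send takes 70 ms while preSend takes 52 ms. Note that when the transfer time dominates the computation time, the slave still idles between chunks and the gain shrinks.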

Low Latency Encoding Algorithm for Duo-Binary Turbo Codes with Tail Biting Trellises (이중 입력 터보 코드를 위한 저지연 부호화 알고리즘)

  • Park, Soak-Min;Kwak, Jae-Young;Lee, Kwy-Ro
    • Proceedings of the KIEE Conference
    • /
    • 2008.10b
    • /
    • pp.117-118
    • /
    • 2008
  • A low-latency encoder for high-data-rate duo-binary turbo codes with tail-biting trellises is considered. An encoder hardware architecture is proposed that exploits an inherent encoding property of duo-binary turbo codes, and we show that both the execution time and the energy can be reduced by half with the proposed architecture.

A Study of Mobile IPv6 Fast Handover Algorithms in WLAN Environment (무선랜 환경에서 Mobile IPv6 Fast Handover 알고리즘에 관한 연구)

  • 이재황;김평수;김영근
    • Proceedings of the IEEK Conference
    • /
    • 2003.07a
    • /
    • pp.509-512
    • /
    • 2003
  • This paper proposes a new algorithm to reduce the handover latency that occurs when a Mobile IPv6 node moves in a WLAN environment. Because the current Mobile IPv6 Fast Handover protocol presupposes assistance from the Layer-2 handover, it is difficult in practice to apply it to real-time or delay-sensitive applications. To solve this problem, we apply a Dominant NAR algorithm that uses the beacon signals of the WLAN to pre-process the MIPv6 Fast Handover procedure, thereby reducing the handover latency.

Effect of Caching and Prefetching Techniques to Accelerate Channel Search Latency in IPTVs

  • Bahn, Hyokyung
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.14 no.2
    • /
    • pp.17-22
    • /
    • 2022
  • Due to recent advances in high-speed communication technologies as well as the easy production of high-quality video content, IPTV is becoming increasingly popular. Meanwhile, as the number of IPTV channels increases, the channel search time to find a desired channel keeps increasing. In this paper, we discuss how to improve the channel search latency in IPTV and introduce caching and prefetching techniques that are widely used in memory management systems. Specifically, we adopt memory replacement, prefetching, and caching techniques in IPTV channel search interfaces and show the effectiveness of these techniques as the number of channels is varied.
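
The combination of LRU replacement with adjacent-channel prefetching, a natural fit for up/down channel zapping, can be sketched as follows. The class and its "stream" placeholders are our illustration of the general idea, not the paper's implementation:

```python
from collections import OrderedDict

class ChannelCache:
    """LRU cache of decoded channel streams with adjacent-channel prefetch."""

    def __init__(self, capacity, prefetch_radius=1):
        self.capacity = capacity
        self.radius = prefetch_radius
        self.cache = OrderedDict()  # channel number -> stream placeholder
        self.hits = self.misses = 0

    def _load(self, ch):
        if ch in self.cache:
            self.cache.move_to_end(ch)  # refresh LRU position
            return
        self.cache[ch] = f"stream-{ch}"
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least-recently used

    def tune(self, ch):
        if ch in self.cache:
            self.hits += 1   # stream already buffered: fast channel switch
        else:
            self.misses += 1  # full tuning latency paid on this switch
        self._load(ch)
        # Prefetch neighboring channels for subsequent up/down zapping.
        for d in range(1, self.radius + 1):
            self._load(ch - d)
            self._load(ch + d)
```

Tuning the sequence 10, 11, 12, 11, 10 with a capacity of 8 incurs only the first miss, because each switch prefetches the neighbors the viewer is most likely to zap to next.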

QoS-aware Cross Layer Handover Scheme for High-Speed vehicles

  • Nashaat, Heba
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.1
    • /
    • pp.135-158
    • /
    • 2018
  • High-speed vehicles can be considered as multiple mobile nodes that move together in a large-scale mobile network. High speed makes the time allowed for a mobile node to complete a handover procedure shorter and handovers more frequent. Hence, several protocols, such as Network Mobility (NEMO), are used to manage the mobility of mobile nodes. However, some problems remain, such as high handover latency and packet loss, so efficient handover management is needed to meet the Quality of Service (QoS) requirements of real-time applications. This paper utilizes the cross-layer seamless handover technique for network mobility presented in cellular networks and extends it to propose a QoS-aware NEMO protocol that considers the QoS requirements of real-time applications. A novel analytical framework is developed to compare the performance of the proposed protocol with basic NEMO using cost functions for a realistic city mobility model. The numerical results show that the QoS-aware NEMO protocol improves the performance in terms of handover latency, packet delivery cost, location update cost, and total cost.

Effect on Audio Play Latency for Real-Time HMD-Based Headphone Listening (HMD를 이용한 오디오 재생 기술에서 Latency의 영향 분석)

  • Son, Sangmo;Jo, Hyun;Kim, Sunmin
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference
    • /
    • 2014.10a
    • /
    • pp.141-145
    • /
    • 2014
  • The minimally acceptable time delay of audio data processing is investigated for rendering virtual sound-source directions in a real-time head-tracking environment under headphone listening. An angular mismatch of less than 3.7 degrees must be maintained in order to keep the desired sound-source directions virtually fixed while listeners rotate their heads in the horizontal plane. The angular mismatch is proportional to the speed of head rotation and the data processing delay. For a head rotation of 20 degrees/s, which is a relatively slow head movement, a total data processing delay of less than 63 ms should be targeted.
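
The proportionality stated above can be written explicitly (in our own notation, not the paper's):

$$\theta_{\text{mismatch}} = \omega \,\Delta t, \qquad \Delta t_{\max} = \frac{\theta_{\max}}{\omega},$$

where $\omega$ is the head rotation speed, $\Delta t$ is the end-to-end audio processing delay, and $\Delta t_{\max}$ is the delay budget for a given mismatch threshold $\theta_{\max}$.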

An FPGA Design of High-Speed Turbo Decoder

  • Jung Ji-Won;Jung Jin-Hee;Choi Duk-Gun;Lee In-Ki
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.30 no.6C
    • /
    • pp.450-456
    • /
    • 2005
  • In this paper, we propose a high-speed turbo decoding algorithm and present the results of its implementation. The latency caused by (de)interleaving and iterative decoding in a conventional MAP turbo decoder can be dramatically reduced with the proposed scheme. The main sources of the time reduction are the radix-4, center-to-top, and parallel decoding algorithms. The reduced latency makes it possible to use a turbo decoder as an FEC scheme in real-time wireless communication services. However, the proposed scheme incurs a slight degradation in BER performance because the effective interleaver size in radix-4 is reduced to half of that of the conventional method. To verify the time reduction, we implemented the proposed scheme on an FPGA chip and compared it with the conventional one in terms of decoding speed. The decoding speed of the proposed scheme is at least 5 times faster than that of the conventional one for a single iteration of turbo decoding.