• Title/Abstract/Keyword: queuing delay


Multi-Objective Handover in LTE Macro/Femto-Cell Networks

  • Roy, Abhishek;Shin, Jitae;Saxena, Navrati
    • Journal of Communications and Networks / Vol. 14, No. 5 / pp.578-587 / 2012
  • One of the key elements in the emerging, packet-based long term evolution (LTE) cellular systems is the deployment of multiple femtocells for the improvement of coverage and data rate. However, arbitrary overlaps in the coverage of these femtocells make the handover operation more complex and challenging. As the existing handover strategy of LTE systems considers only carrier to interference plus noise ratio (CINR), it often suffers from resource constraints in the target femtocell, thereby leading to handover failure. In this paper, we propose a new efficient, multi-objective handover solution for LTE cellular systems. The proposed solution considers multiple parameters like signal strength and available bandwidth in the selection of the optimal target cell. This results in a significant increase in the handover success rate, thereby reducing the blocking of handover and new sessions. The overall handover process is modeled and analyzed by a three-dimensional Markov chain. The analytical results for the major performance metrics closely resemble the simulation results. The simulation results show that the proposed multi-objective handover offers considerable improvement in the session blocking rates, session queuing delay, handover latency, and goodput during handover.
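The abstract does not spell out the exact target-cell selection rule, so the following minimal sketch only illustrates the multi-objective idea: candidate femtocells are first filtered by available bandwidth and then ranked by a weighted score over CINR and free bandwidth. The weights, example values, and the `Cell`/`select_target` names are hypothetical, not the paper's.

```python
# Hypothetical multi-objective target-cell selection: rank candidate femtocells by
# a weighted score of CINR and available bandwidth instead of CINR alone.
from dataclasses import dataclass

@dataclass
class Cell:
    cell_id: str
    cinr_db: float              # carrier to interference plus noise ratio (dB)
    free_bandwidth_mbps: float  # bandwidth still available at the cell

def select_target(cells, required_bw_mbps, w_cinr=0.6, w_bw=0.4):
    """Return the best handover target, or None if no cell can admit the session."""
    # Discard cells that cannot satisfy the session's bandwidth requirement; this is
    # the resource constraint that a CINR-only strategy ignores.
    feasible = [c for c in cells if c.free_bandwidth_mbps >= required_bw_mbps]
    if not feasible:
        return None
    max_cinr = max(max(c.cinr_db for c in feasible), 1e-9)
    max_bw = max(max(c.free_bandwidth_mbps for c in feasible), 1e-9)
    def score(c):
        return w_cinr * c.cinr_db / max_cinr + w_bw * c.free_bandwidth_mbps / max_bw
    return max(feasible, key=score)

candidates = [Cell("femto-1", 18.0, 2.0), Cell("femto-2", 15.0, 8.0)]
print(select_target(candidates, required_bw_mbps=4.0).cell_id)  # -> femto-2
```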

Throughput and Delay Analysis of a Reliable Cooperative MAC Protocol in Ad Hoc Networks

  • Jang, Jaeshin;Kim, Sang Wu;Wie, Sunghong
    • Journal of Communications and Networks / Vol. 14, No. 5 / pp.524-532 / 2012
  • In this paper, we present the performance evaluation of the reliable cooperative media access control (RCO-MAC) protocol, which we proposed in [1] to enhance system throughput in poor wireless channel environments. The performance of this protocol is evaluated through mathematical analysis as well as computer simulation. System throughput and two types of average delay, the average channel access delay and the average system delay (which includes the queuing delay in the buffer), are used as performance metrics. In addition, two different traffic models are used for the evaluation: a saturated traffic model for computing system throughput and average channel access delay, and an exponential data generation model for calculating average system delay. The numerical results show that the proposed RCO-MAC protocol provides over 20% more system throughput than the relay distributed coordination function (rDCF) scheme. They also show that RCO-MAC incurs a slightly higher average channel access delay than rDCF when the number of source nodes is large, because a larger number of source nodes creates more opportunities for cooperative request to send (CRTS) frame collisions and because the associated retransmission timer is set to a larger value in RCO-MAC than in rDCF. The numerical results also confirm that RCO-MAC provides a lower average system delay than rDCF over the entire range of source node counts.
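The abstract distinguishes the average channel access delay from the average system delay, which additionally includes the queuing delay in the buffer. The toy single-queue simulation below, assuming Poisson packet generation and an exponentially distributed channel access delay (both simplifications that are not from the paper), only illustrates how the two metrics relate.

```python
import random

def average_system_delay(arrival_rate, mean_access_delay, n_packets=100_000, seed=1):
    """Toy FIFO model: system delay = queuing delay in the buffer + channel access delay.
    Poisson packet generation and exponential channel access delay are assumptions."""
    rng = random.Random(seed)
    clock = 0.0            # packet arrival clock
    channel_free_at = 0.0  # time at which the channel finishes the previous packet
    total = 0.0
    for _ in range(n_packets):
        clock += rng.expovariate(arrival_rate)                 # next packet arrives
        access_delay = rng.expovariate(1.0 / mean_access_delay)
        start = max(clock, channel_free_at)                    # wait in the buffer
        channel_free_at = start + access_delay
        total += channel_free_at - clock                       # queuing + access delay
    return total / n_packets

# Example: 50 packets/s offered load, 10 ms mean channel access delay.
print(average_system_delay(arrival_rate=50.0, mean_access_delay=0.01))
```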

SPMLD: Sub-Packet based Multipath Load Distribution for Real-Time Multimedia Traffic

  • Wu, Jiyan;Yang, Jingqi;Shang, Yanlei;Cheng, Bo;Chen, Junliang
    • Journal of Communications and Networks / Vol. 16, No. 5 / pp.548-558 / 2014
  • Load distribution is vital to the performance of multipath transport. The task becomes more challenging in real-time multimedia applications (RTMA), which impose stringent delay requirements. Two key issues must be addressed: 1) how to minimize end-to-end delay and 2) how to alleviate packet reordering, which incurs additional recovery time at the receiver. In this paper, we propose sub-packet based multipath load distribution (SPMLD), a new model that splits traffic at the granularity of sub-packets. SPMLD aims to minimize total packet delay by effectively aggregating multiple parallel paths into a single virtual path. First, we formulate packet splitting over multiple paths as a constrained optimization problem and derive its solution using a progressive approximation method. Second, within this solution, we analyze queuing delay by introducing a D/M/1 model and obtain an expression for the dynamic packet splitting ratio of each path. Third, to describe SPMLD's scheduling policy, we propose two distributed algorithms implemented in the source and destination nodes, respectively. We evaluate the performance of SPMLD through extensive simulations in QualNet using real-time H.264 video streaming. Experimental results demonstrate that SPMLD outperforms previous flow-based and packet-based load distribution models in terms of video peak signal-to-noise ratio, total packet delay, end-to-end delay, and risk of packet reordering. In addition, SPMLD's extra overhead is negligible compared to the input video stream.
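The paper's splitting-ratio derivation is not reproduced here; the sketch below only shows the D/M/1 ingredient named in the abstract, i.e., the mean queuing delay of a path that receives a fixed share of a deterministic sub-packet stream and serves it with exponential service times. The rates and splitting ratios are illustrative, and `dm1_mean_wait` is a hypothetical helper.

```python
import math

def dm1_mean_wait(arrival_rate, service_rate, iters=200):
    """Mean queuing delay of a D/M/1 queue (deterministic arrivals, exponential service).
    sigma is the root in (0, 1) of sigma = exp(-(mu / lam) * (1 - sigma)),
    and the mean wait is sigma / (mu * (1 - sigma))."""
    if arrival_rate >= service_rate:
        return math.inf  # unstable path: offered share exceeds the path's service rate
    sigma = 0.5
    for _ in range(iters):  # simple fixed-point iteration
        sigma = math.exp(-(service_rate / arrival_rate) * (1.0 - sigma))
    return sigma / (service_rate * (1.0 - sigma))

# Illustrative numbers only: a 4000 sub-packet/s stream split 70/30 over two paths
# whose service rates are proportional to their bandwidth.
total_rate = 4000.0
ratios = [0.7, 0.3]              # hypothetical splitting ratios
service_rates = [5000.0, 2500.0]
for r, mu in zip(ratios, service_rates):
    w = dm1_mean_wait(r * total_rate, mu)
    print(f"splitting ratio {r}: mean queuing delay ~ {w * 1e3:.3f} ms")
```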

A Model for Analyzing the Performance of Wireless Multi-Hop Networks using a Contention-based CSMA/CA Strategy

  • Sheikh, Sajid M.;Wolhuter, Riaan;Engelbrecht, Herman A.
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 11, No. 5 / pp.2499-2522 / 2017
  • Multi-hop networks are a low-setup-cost solution for enlarging the area of network coverage through multi-hop routing. Carrier sense multiple access with collision avoidance (CSMA/CA) is frequently used in multi-hop networks. Multi-hop networks face multiple problems, such as increased contention for the medium and packet loss under heavy-load, saturated conditions, which consumes additional bandwidth through re-transmissions. The number of re-transmissions carried out in a multi-hop network plays a major role in the achievable quality of service (QoS). This paper presents a statistical, analytical model for the end-to-end delay of contention-based medium access control (MAC) strategies that schedule a packet before performing the back-off contention, for both differentiated heterogeneous data and homogeneous data under saturation conditions. The analytical model is an application of Markov chain theory and queuing theory: the M/M/1 model is used to derive access-queue waiting times, and an absorbing Markov chain is used to determine the expected number of re-transmissions in a multi-hop scenario. These are then used to calculate the expected end-to-end delay. The predictions of the proposed model are compared with simulation results and show close agreement for the different test cases with different arrival rates.
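As a rough illustration of the two building blocks named in the abstract, the sketch below computes the expected number of transmission attempts from the fundamental matrix of an absorbing Markov chain and the M/M/1 access-queue waiting time. The per-attempt failure probability, retry limit, and timing values are hypothetical, and the combination shown is not the paper's full end-to-end delay model.

```python
import math
import numpy as np

def expected_transmissions(p_fail, max_retries):
    """Expected number of transmission attempts for one packet, modelled as an
    absorbing Markov chain whose transient states are the attempt indices and whose
    absorbing states (success, drop) are left implicit.  N = (I - Q)^-1 is the
    fundamental matrix; its first row sum is the expected number of attempts."""
    n = max_retries + 1
    Q = np.zeros((n, n))
    for i in range(n - 1):
        Q[i, i + 1] = p_fail          # a failed attempt moves to the next attempt
    N = np.linalg.inv(np.eye(n) - Q)  # fundamental matrix
    return N[0].sum()

def mm1_queue_wait(arrival_rate, service_rate):
    """Mean waiting time in an M/M/1 access queue: Wq = rho / (mu - lambda)."""
    rho = arrival_rate / service_rate
    return math.inf if rho >= 1 else rho / (service_rate - arrival_rate)

# Hypothetical numbers: 30% per-attempt failure, 4 retries, 2 ms per attempt,
# 100 packets/s arriving at the access queue of one node.
attempts = expected_transmissions(p_fail=0.3, max_retries=4)
service_time = attempts * 2e-3
one_hop_delay = mm1_queue_wait(100.0, 1.0 / service_time) + service_time
print(f"expected attempts: {attempts:.3f}, one-hop delay: {one_hop_delay * 1e3:.2f} ms")
```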

Kiss & Ride Zone 설치에 따른 교통망 영향 분석 (Impact Analysis of Transportation Network by The Installation of Kiss & Ride Zone)

  • 홍기만;백바름;김현명
    • 한국도로학회논문집 / Vol. 15, No. 5 / pp.145-156 / 2013
  • PURPOSES: This research studies how the surrounding road network changes when a Kiss & Ride Zone is installed. METHODS: The travel modes of students were estimated using the Metropolitan Household Surveys (2006), O/D matrices were estimated for different Kiss & Ride ratios from these data, and the estimates were applied to a scenario that reduces the number of lanes on the road sections where a Kiss & Ride Zone would be installed. RESULTS: The delay resolving time and the set of affected roads differed as the Kiss & Ride percentage changed because of the installation position of the Kiss & Ride Zone. CONCLUSIONS: This study analyzed the impact of installing a Kiss & Ride Zone on the road network using speed and queue-delay resolving time; quantitative evaluation techniques based on various indicators need to be developed for future impact analyses of traffic safety facility installations.

An Offloading Scheduling Strategy with Minimized Power Overhead for Internet of Vehicles Based on Mobile Edge Computing

  • He, Bo;Li, Tianzhang
    • Journal of Information Processing Systems / Vol. 17, No. 3 / pp.489-504 / 2021
  • By distributing computing tasks among devices at the edge of the network, edge computing uses virtualization, distributed computing, and parallel computing technologies to let users dynamically obtain computing power, storage space, and other services as needed. Applying edge computing architectures to the Internet of Vehicles can effectively ease the tension between the heavy computation demands of delay-sensitive vehicle applications and the limited, unevenly distributed resources of vehicles. In this paper, a predictive offloading strategy based on the MEC load state is proposed, which not only reduces the delay of returning calculation results over the RSU multi-hop backhaul but also reduces the queuing time of tasks at MEC servers. First, a delay factor and an energy consumption factor are introduced according to the characteristics of tasks, and the costs of local execution and of offloading to MEC servers are defined. Then, from the perspective of vehicles, a delay preference factor and an energy consumption preference factor are introduced to define the cost of executing a computing task; based on the relative size of the two costs, a task is either offloaded to an MEC server or executed locally. Furthermore, a mathematical optimization model for minimizing the power overhead is constructed under time delay and power consumption constraints, and the simulated annealing algorithm is used to solve it. The simulation results show that this strategy can effectively reduce system power consumption by shortening the task execution delay, while meeting the delay and energy consumption requirements at the lowest cost.
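The paper's full cost model and constraints are not given in the abstract, so the sketch below is only a generic simulated-annealing search over binary offloading decisions with a weighted delay/energy cost; all task parameters, weights, and the cooling schedule are assumed for illustration.

```python
import math
import random

def task_cost(offload, task, w_delay=0.5, w_energy=0.5):
    """Weighted delay/energy cost of one task; weights and values are illustrative."""
    delay = task["t_mec"] if offload else task["t_local"]
    energy = task["e_mec"] if offload else task["e_local"]
    return w_delay * delay + w_energy * energy

def total_cost(decisions, tasks):
    return sum(task_cost(d, t) for d, t in zip(decisions, tasks))

def anneal(tasks, t_start=1.0, t_end=1e-3, alpha=0.95, moves_per_temp=50, seed=0):
    """Simulated annealing over binary offloading decisions (0 = local, 1 = offload)."""
    rng = random.Random(seed)
    current = [rng.randint(0, 1) for _ in tasks]
    best, best_cost = current[:], total_cost(current, tasks)
    temp = t_start
    while temp > t_end:
        for _ in range(moves_per_temp):
            candidate = current[:]
            candidate[rng.randrange(len(tasks))] ^= 1   # flip one offloading decision
            delta = total_cost(candidate, tasks) - total_cost(current, tasks)
            if delta < 0 or rng.random() < math.exp(-delta / temp):
                current = candidate
                cost = total_cost(current, tasks)
                if cost < best_cost:
                    best, best_cost = current[:], cost
        temp *= alpha
    return best, best_cost

# Two hypothetical tasks: the first is cheaper to offload, the second to run locally.
tasks = [{"t_local": 0.8, "e_local": 1.2, "t_mec": 0.3, "e_mec": 0.5},
         {"t_local": 0.2, "e_local": 0.3, "t_mec": 0.4, "e_mec": 0.6}]
print(anneal(tasks))   # expected decisions: offload task 0, keep task 1 local
```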

다수 지연규격을 지원하는 시작시각 기반 공정패킷 스케줄러 (A Start-Time Based Fair Packet Scheduler Supporting Multiple Delay Bounds)

  • 김태준
    • 한국멀티미디어학회논문지 / Vol. 9, No. 3 / pp.323-332 / 2006
  • Fair packet scheduling algorithms that guarantee the quality of service of real-time multimedia applications are divided, according to the packet reference time used in timestamp computation, into finish-time based and start-time based schemes. The former, whose latency is inversely proportional to a traffic flow's rate, can support various delay bounds by adjusting the flow's reserved rate and is therefore adopted in most schedulers, but it loses bandwidth through over-reservation. The latter suffers no bandwidth loss from over-reservation, but its latency depends on the number of flows, which makes it difficult to accommodate multiple delay bounds. This paper proposes a start-time based fair packet scheduler that can effectively support multiple delay bounds and analyzes its performance characteristics. The analysis shows up to 30% higher utilization than finish-time based schedulers.
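For reference, a start-time based fair scheduler computes each packet's start tag from the virtual time and the previous finish tag of its flow, and serves packets in increasing start-tag order. The sketch below follows the generic SFQ-style timestamping only; the mechanism the paper adds to support multiple delay bounds is not reproduced, and the class and field names are assumptions.

```python
import heapq
from collections import defaultdict

class StartTimeFairScheduler:
    """Generic start-time based fair queuing (SFQ-style) timestamping, shown only to
    illustrate the idea; the scheduler proposed in the paper extends this to support
    multiple delay bounds and is not reproduced here."""

    def __init__(self):
        self.vtime = 0.0                       # virtual time = start tag of the packet in service
        self.last_finish = defaultdict(float)  # per-flow finish tag of the previous packet
        self.queue = []                        # min-heap ordered by start tag
        self.seq = 0                           # tie-breaker for equal start tags

    def enqueue(self, flow_id, length_bits, reserved_rate_bps):
        start = max(self.vtime, self.last_finish[flow_id])
        self.last_finish[flow_id] = start + length_bits / reserved_rate_bps
        heapq.heappush(self.queue, (start, self.seq, flow_id, length_bits))
        self.seq += 1

    def dequeue(self):
        start, _, flow_id, length_bits = heapq.heappop(self.queue)
        self.vtime = start                     # virtual time follows the served start tag
        return flow_id, length_bits

sched = StartTimeFairScheduler()
sched.enqueue("voice", 8_000, 64_000)       # 8 kb packet on a 64 kb/s reservation
sched.enqueue("video", 120_000, 1_000_000)  # 120 kb packet on a 1 Mb/s reservation
print(sched.dequeue(), sched.dequeue())     # served in increasing start-tag order
```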


대역폭 이용도 측면에서 공정 패킷 스케줄러의 성능 분석 (Performance Analysis of Fair Packet Schedulers in Bandwidth Utilization)

  • 안효범;김태준
    • 한국멀티미디어학회논문지 / Vol. 9, No. 2 / pp.197-207 / 2006
  • In a fair packet scheduler, when the maximum delivery delay determined by a traffic flow's rate violates the flow's required delay bound, the reserved rate must be increased to reduce the delay. This over-reservation wastes transmission bandwidth, yet the three performance metrics used in previous studies, namely latency, fairness, and implementation complexity, cannot evaluate the lost bandwidth. This paper proposes a bandwidth utilization metric that can evaluate the loss of scheduling server resources, and analyzes and evaluates fair packet schedulers in terms of bandwidth and payload utilization. The evaluation shows that a looser required delay bound yields higher payload utilization; in particular, the payload utilization of schedulers with WFQ-class latency improves by up to 50% over SCFQ.
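The following sketch illustrates how over-reservation wastes bandwidth, which is what the proposed utilization metric is meant to capture: using the standard latency bound of a WFQ-class latency-rate server, it computes the smallest reserved rate that meets a delay bound and the resulting ratio of traffic rate to reserved rate. The numbers and the exact metric definition are assumptions, not the paper's.

```python
def wfq_class_latency(reserved_rate, max_packet_bits, link_rate):
    """Standard latency bound of a WFQ-class latency-rate server: L/r + L/C."""
    return max_packet_bits / reserved_rate + max_packet_bits / link_rate

def required_reservation(traffic_rate, delay_bound, max_packet_bits, link_rate):
    """Smallest reserved rate whose latency meets the delay bound, never below the
    flow's actual traffic rate."""
    slack = delay_bound - max_packet_bits / link_rate
    if slack <= 0:
        raise ValueError("delay bound tighter than the link can support")
    return max(traffic_rate, max_packet_bits / slack)

# Illustrative numbers (not from the paper): 1500-byte packets, 100 Mb/s link,
# a 1 Mb/s flow that requires a 5 ms delay bound.
L, C = 1500 * 8, 100e6
flow_rate, delay_bound = 1e6, 0.005
r = required_reservation(flow_rate, delay_bound, L, C)
print(f"reserved rate: {r / 1e6:.2f} Mb/s, "
      f"bandwidth utilization: {flow_rate / r:.2f}")   # < 1 means over-reservation loss
print(f"latency at reserved rate: {wfq_class_latency(r, L, C) * 1e3:.2f} ms")
```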


채널오류에 강한 애드혹 네트워크용 협력통신 MAC 프로토콜에 관한 연구 (A Study on a Reliable Cooperative MAC Protocol for Ad Hoc Networks)

  • 장재신
    • 한국통신학회논문지 / Vol. 35, No. 6A / pp.577-584 / 2010
  • This paper proposes a cooperative MAC protocol that can improve system throughput in poor wireless channel environments and evaluates its performance through computer simulation. System throughput and average delay are used as performance metrics; the results show that the proposed scheme achieves about 24% higher system throughput than the existing rDCF scheme. In terms of average delay, the proposed scheme is superior when the number of stations is relatively small but slightly worse when the number of stations is relatively large. This is because, with many stations, system performance is dominated by channel contention, and the retransmission procedure performed at each node makes the retransmission timer set after transmitting a CRTS frame larger in the proposed scheme than in the existing one. However, once the queuing delay in the buffer is also taken into account, the overall system delay is expected to be lower than that of the rDCF scheme.

Grant-Aware Scheduling Algorithm for VOQ-Based Input-Buffered Packet Switches

  • Han, Kyeong-Eun;Song, Jongtae;Kim, Dae-Ub;Youn, JiWook;Park, Chansung;Kim, Kwangjoon
    • ETRI Journal / Vol. 40, No. 3 / pp.337-346 / 2018
  • In this paper, we propose a grant-aware (GA) scheduling algorithm that provides higher throughput and lower latency than the conventional dual round-robin matching (DRRM) method. In the proposed GA algorithm, when an output receives requests from different inputs, it not only sends a grant to the selected input but also sends a grant indicator to all the other inputs to share the grant information. This allows the inputs to skip the already-granted outputs in their input arbiters in the next iteration. Simulation results using OPNET show that the proposed algorithm provides up to 3% higher throughput with approximately 31% less queuing delay than DRRM.
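A minimal sketch of the grant phase described in the abstract follows: each output grants one requesting input by round robin and sends a grant indicator to the other requesting inputs, which then skip that output in their next request arbitration. The data structures and the single-phase form are assumptions; the full DRRM-based pipeline is not reproduced.

```python
def grant_phase(requests, grant_pointers, num_inputs):
    """One grant phase: requests[o] lists the inputs currently requesting output o
    (at most one request per input, as in DRRM), grant_pointers[o] is output o's
    round-robin pointer.  Returns the grants and, per input, the outputs it should
    skip in its next request arbitration (the 'grant indicator' of the GA idea)."""
    grants, skip = {}, {}
    for o, reqs in requests.items():
        if not reqs:
            continue
        # Round-robin selection starting from the output's pointer.
        chosen = min(reqs, key=lambda i: (i - grant_pointers[o]) % num_inputs)
        grants[o] = chosen
        grant_pointers[o] = (chosen + 1) % num_inputs
        for i in reqs:
            if i != chosen:
                skip.setdefault(i, set()).add(o)   # grant indicator to the losing inputs
    return grants, skip

# Hypothetical request pattern: inputs 1 and 3 request output 0, input 0 requests
# output 1, input 2 requests output 2.
requests = {0: [1, 3], 1: [0], 2: [2]}
pointers = {0: 0, 1: 0, 2: 0}
print(grant_phase(requests, pointers, num_inputs=4))
```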