• Title/Summary/Keyword: packet loss probability


A Study on Improving TCP Performance in Wireless Network (무선 네트워크에서 TCP성능향상을 위한 연구)

  • Kim, Chang-Hee
    • Journal of Digital Contents Society, v.10 no.2, pp.279-289, 2009
  • Because TCP was designed for wired networks, where the packet loss probability is very low, a TCP sender assumes that packet losses caused by wireless-link characteristics are due to network congestion and lowers its transmission rate, which degrades performance. In this article, we propose an improved algorithm, based on Snoop, that adds two parameters at the base station (BS): a local retransmission timer value and a local retransmission threshold. The technique adjusts the base station's local retransmission timer according to the wireless-link status so that wireless packet losses are recovered quickly. Simulation results show that the proposed A-Snoop protocol recovers packet losses more effectively than the Snoop protocol over wireless links with consecutive packet losses, and therefore improves wireless TCP throughput. (See the sketch below.)

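Below is a minimal sketch, under stated assumptions, of a Snoop-style base-station agent with the two tunables the abstract mentions: a local retransmission timer adapted to the measured wireless-link RTT and a local retransmission threshold. The class and method names, the EWMA update, and the 2×SRTT rule are illustrative assumptions, not the paper's A-Snoop algorithm.

```python
# Illustrative sketch (not the paper's exact algorithm): a Snoop-style base-station
# agent that retransmits cached packets locally and adapts its local retransmission
# timer to the wireless-link RTT, as described in the abstract above.
import time

class SnoopAgent:
    def __init__(self, local_rto=0.1, retx_threshold=3, alpha=0.875):
        self.cache = {}                       # seq -> (packet, last_send_time, retx_count)
        self.local_rto = local_rto            # local retransmission timer value (s)
        self.retx_threshold = retx_threshold  # local retransmission threshold
        self.srtt = local_rto                 # smoothed wireless-link RTT estimate
        self.alpha = alpha                    # EWMA weight for the RTT estimate

    def on_data_from_sender(self, seq, packet):
        """Cache every data packet forwarded onto the wireless link."""
        self.cache[seq] = (packet, time.time(), 0)

    def on_ack_from_mobile(self, ack_seq, sample_rtt):
        """Clean the cache and adapt the local timer to wireless-link conditions."""
        self.srtt = self.alpha * self.srtt + (1 - self.alpha) * sample_rtt
        self.local_rto = 2 * self.srtt
        for seq in [s for s in self.cache if s <= ack_seq]:
            del self.cache[seq]

    def on_timeout_or_dupack(self, seq, wireless_send):
        """Retransmit locally; give up only after the local threshold is exceeded,
        so the loss stays hidden from the fixed-network TCP sender."""
        if seq not in self.cache:
            return
        packet, _, retx = self.cache[seq]
        if retx < self.retx_threshold:
            wireless_send(packet)
            self.cache[seq] = (packet, time.time(), retx + 1)
```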

Analysis of a Wireless Transmitter Model Considering Retransmission for Real Time Traffic (재전송을 고려한 무선 전송 단에서 실시간 데이터 전송 모델의 분석)

  • Kim, Tae-Yong;Kim, Young-Yong
    • Proceedings of the KIEE Conference, 2005.05a, pp.215-217, 2005
  • Within a wireless transmitter, two types of packet loss probability arise, one in the network layer and one in the physical layer: a queueing-discard probability and a transmission-loss probability. We analyze both loss performances in order to guarantee the Quality of Service (QoS) that future networks must provide. The queueing loss probability stems from a maximum allowable delay time, while the transmission loss probability stems from wireless channel errors. These two probabilities are not easy to analyze because of the recursive feedback between queueing delay and the number of retransmission attempts. We model the wireless transmitter as an M/D/1 queue with a finite FIFO buffer so that real-time traffic streams can be analyzed, and we present approaches for evaluating the loss probabilities of this M/D/1/K queueing model. To handle the mutual feedback between the two probabilities, we derive the solutions recursively; the validity and accuracy of the analysis are confirmed by computer simulation. From the solutions we suggest a minimum 'maximum allowable delay time' for real-time traffic needed to guarantee QoS, analyze the service rate required for each type of real-time traffic, and apply the analysis to an N-user wireless network to obtain fundamental information for network design (the types of real-time traffic and QoS that can be supported, and the maximum number of supportable users). (See the sketch below.)

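The abstract above describes a recursive feedback between queueing delay and the retransmission count. The sketch below illustrates that coupling as a fixed-point iteration. It is only a hedged illustration: it substitutes an M/M/1/K loss formula for the paper's M/D/1/K analysis, and the retransmission-budget rule, parameter names (`lam`, `tx_time`, `p_err`, `d_max`, `K`), and numbers are assumptions, not values from the paper.

```python
# Illustrative fixed-point sketch of the mutual feedback described above: queueing
# delay limits the retransmission budget, retransmissions inflate the effective
# service time, and that in turn changes the queueing delay and loss.

def mm1k_metrics(lam, service_time, K):
    """Blocking probability and mean sojourn time of an M/M/1/K queue."""
    rho = lam * service_time
    if abs(rho - 1.0) < 1e-9:
        probs = [1.0 / (K + 1)] * (K + 1)
    else:
        norm = (1 - rho ** (K + 1)) / (1 - rho)
        probs = [rho ** n / norm for n in range(K + 1)]
    p_block = probs[K]
    mean_n = sum(n * p for n, p in enumerate(probs))
    mean_delay = mean_n / (lam * (1 - p_block))       # Little's law
    return p_block, mean_delay

def coupled_losses(lam, tx_time, p_err, d_max, K, iters=50):
    """Iterate queueing loss and transmission loss to a fixed point."""
    service_time = tx_time                            # start with one transmission
    for _ in range(iters):
        p_queue, delay = mm1k_metrics(lam, service_time, K)
        budget = max(1, int((d_max - delay) / tx_time))   # transmissions that fit
        p_tx = p_err ** budget                        # all allowed attempts fail
        mean_attempts = (1 - p_err ** budget) / (1 - p_err)
        service_time = tx_time * mean_attempts        # feedback into the queue
    return p_queue, p_tx

if __name__ == "__main__":
    p_q, p_t = coupled_losses(lam=800.0, tx_time=1e-3, p_err=0.1, d_max=0.02, K=20)
    print(f"queueing loss ~ {p_q:.4f}, transmission loss ~ {p_t:.6f}")
```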

Threshold-based Filtering Buffer Management Scheme in a Shared Buffer Packet Switch

  • Yang, Jui-Pin;Liang, Ming-Cheng;Chu, Yuan-Sun
    • Journal of Communications and Networks, v.5 no.1, pp.82-89, 2003
  • In this paper, an efficient threshold-based filtering (TF) buffer management scheme is proposed. TF minimizes the overall loss performance and improves the fairness of buffer usage in a shared-buffer packet switch. It consists of two mechanisms: one classifies output ports as active or inactive by comparing their queue lengths with a dedicated buffer-allocation factor; the other filters arriving packets destined for inactive output ports when the total queue length exceeds a threshold value. A theoretical queueing model of TF is formulated and solved for the overall packet loss probability. Computer simulations compare the overall loss performance of TF with dynamic threshold (DT), static threshold (ST), and pushout (PO) schemes. We find that TF is more robust against dynamic traffic variations than DT and ST; and although the overall loss performance of TF and PO is close, TF is much simpler to implement than PO. (See the sketch below.)
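
A minimal sketch of the two TF mechanisms as described in the abstract. It assumes that "inactive" means a port whose queue length already meets or exceeds its dedicated buffer-allocation share, and that filtering simply rejects arrivals for such ports once total occupancy crosses a global threshold; the paper's exact classification rule, thresholds, and notation may differ.

```python
# Illustrative sketch of threshold-based filtering in a shared buffer (assumed
# interpretation of the abstract, not the paper's exact rules or parameters).

class TFSharedBuffer:
    def __init__(self, total_buffer, num_ports, alloc_factor=0.5, threshold=0.8):
        self.total_buffer = total_buffer
        self.queues = [0] * num_ports
        # dedicated buffer-allocation factor: each port's "fair" share of the buffer
        self.dedicated = alloc_factor * total_buffer / num_ports
        # global threshold on total occupancy that enables filtering
        self.threshold = threshold * total_buffer

    def total(self):
        return sum(self.queues)

    def accept(self, port):
        """Return True if an arriving packet for `port` should be admitted."""
        if self.total() >= self.total_buffer:
            return False                              # buffer completely full
        inactive = self.queues[port] >= self.dedicated
        if inactive and self.total() >= self.threshold:
            return False                              # filter arrivals to inactive ports
        self.queues[port] += 1
        return True

    def depart(self, port):
        """A packet leaves the queue of `port` after transmission."""
        if self.queues[port] > 0:
            self.queues[port] -= 1
```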

Comparison about TCP and Snoop protocol on wired and wireless integrated network (유무선 혼합망에서 TCP와 Snoop 프로토콜 비교에 관한 연구)

  • Kim, Chang Hee
    • Journal of Korea Society of Digital Industry and Information Management, v.5 no.2, pp.141-156, 2009
  • Because TCP was designed for wired networks, where the packet loss probability is very low, a TCP sender assumes that packet losses caused by wireless-link characteristics are due to network congestion and lowers its transmission rate, which degrades performance. The Snoop protocol complements TCP by placing a Snoop agent module on the base station (BS) that connects the wired network to the wireless network. The Snoop agent caches packets being forwarded to the wireless terminal and recovers errors on the wireless link by retransmitting locally. It also blocks unnecessary congestion control by preventing duplicate acknowledgements (dupacks) for retransmitted packets from reaching the sender, thereby hiding wireless-link losses from the sender. Using TCP, designed for wired networks, and Snoop, designed for wireless networks, we evaluate performance over the wired/wireless hybrid network for various TCP versions, including wireless-link environments in which consecutive packet losses occur. (See the sketch below.)
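
A minimal sketch of the dupack-suppression behaviour described above, assuming a BS-side filter that tracks which sequence numbers it is currently retransmitting locally. The class and method names are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch: the BS-side agent drops duplicate ACKs for packets it has
# already retransmitted locally, so the fixed-network sender never sees the wireless
# loss and does not invoke congestion control.

class SnoopAckFilter:
    def __init__(self):
        self.last_ack = -1
        self.locally_retransmitted = set()    # sequence numbers resent on the wireless link

    def note_local_retransmission(self, seq):
        self.locally_retransmitted.add(seq)

    def on_ack_from_mobile(self, ack_seq):
        """Return True if this ACK should be forwarded to the TCP sender."""
        if ack_seq > self.last_ack:
            self.last_ack = ack_seq
            self.locally_retransmitted = {s for s in self.locally_retransmitted if s > ack_seq}
            return True                        # new ACK: always forward
        # Duplicate ACK: suppress it if the missing packet is being handled locally.
        expected = ack_seq + 1
        if expected in self.locally_retransmitted:
            return False                       # hide the wireless loss from the sender
        return True
```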

A Stochastic Model for Maximizing the Lifetime of Wireless Sensor Networks (확률모형을 이용한 무선센서망 수명 최대화에 관한 분석)

  • Lee, Doo-Ho;Yang, Won-Seok
    • Journal of the Korean Operations Research and Management Science Society, v.37 no.3, pp.69-78, 2012
  • Reducing power consumption is a major issue and an interesting challenge in maximizing the lifetime of wireless sensor networks (WSNs). We investigate the practical meaning of the N-policy in queues as a power-saving technique in a WSN, considering the N-policy for a finite M/M/1 queue. We formulate an optimization problem for power consumption that accounts for the packet loss probability, analyze the trade-off between power consumption and packet loss probability, and demonstrate the operational characteristics of the N-policy as a power-saving technique in a WSN through various numerical examples. (See the sketch below.)
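
The N-policy described above lends itself to a small continuous-time Markov chain: the radio sleeps until N packets have accumulated, serves until the buffer empties, and arrivals that find a full buffer are lost. The sketch below builds that chain numerically and reports the packet loss probability and the fraction of time the radio is awake (a proxy for power). It is a generic construction under the assumptions stated in the comments, not the paper's exact model or optimization; all parameter values are illustrative.

```python
# Numerical sketch of an M/M/1/K queue under the N-policy.
import numpy as np

def n_policy_mm1k(lam, mu, N, K):
    """Return (packet loss probability, fraction of time the radio is awake)."""
    assert 1 <= N <= K
    # State space: ("sleep", n) for n = 0..N-1, then ("awake", n) for n = 1..K.
    states = [("sleep", n) for n in range(N)] + [("awake", n) for n in range(1, K + 1)]
    idx = {s: i for i, s in enumerate(states)}
    Q = np.zeros((len(states), len(states)))

    for (mode, n), i in idx.items():
        if mode == "sleep":
            # arrival: stay asleep until the N-th packet arrives, then wake up
            target = ("sleep", n + 1) if n + 1 < N else ("awake", N)
            Q[i, idx[target]] += lam
        else:  # awake
            if n < K:
                Q[i, idx[("awake", n + 1)]] += lam     # arrival (blocked when n == K)
            down = ("awake", n - 1) if n > 1 else ("sleep", 0)
            Q[i, idx[down]] += mu                      # service completion
    np.fill_diagonal(Q, -Q.sum(axis=1))

    # Stationary distribution: solve pi Q = 0 with sum(pi) = 1.
    A = np.vstack([Q.T, np.ones(len(states))])
    b = np.zeros(len(states) + 1); b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)

    loss = pi[idx[("awake", K)]]                       # PASTA: arrivals see time averages
    awake = sum(pi[idx[s]] for s in states if s[0] == "awake")
    return loss, awake

if __name__ == "__main__":
    for N in (1, 3, 5):
        loss, awake = n_policy_mm1k(lam=0.8, mu=1.0, N=N, K=10)
        print(f"N={N}: loss ~ {loss:.4f}, awake fraction ~ {awake:.3f}")
```

Increasing N in this sketch lengthens the sleep periods (lower power) at the cost of longer bursts of backlog, which is exactly the trade-off the abstract describes.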

Space and Time Priority Queues with Randomized Push-Out Scheme (확률적 밀어내기 정책을 가지는 공간-시간 우선순위 대기행렬)

  • Kilhwan Kim
    • Journal of Korean Society of Industrial and Systems Engineering, v.46 no.2, pp.57-71, 2023
  • In this study, we analyze a finite-buffer M/G/1 queueing model with randomized push-out space priority and nonpreemptive time priority. Space and time priority queueing models have been extensively studied to analyze the performance of communication systems serving different types of traffic simultaneously: one type is sensitive to packet delay, and the other is sensitive to packet loss. However, these models have limitations. Some models assume that packet transmission times follow exponential distributions, which is not always realistic. Other models use general distributions for packet transmission times, but their space priority rules are too rigid, making it difficult to fine-tune service performance for different types of traffic. Our proposed model addresses these limitations and is more suitable for analyzing communication systems that handle different types of traffic with general packet length distributions. For the proposed queueing model, we first derive the distribution of the number of packets in the system when the transmission of each packet is completed, and we then obtain packet loss probabilities and the expected number of packets for each type of traffic. We also present a numerical example to explore the effect of a system parameter, the push-out probability, on system performance for different packet transmission time distributions. (See the sketch below.)
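
The admission rule described above can be sketched as follows. The class labels, the direction of the push-out (loss-sensitive arrivals displacing waiting delay-sensitive packets), and the parameter name `alpha` for the push-out probability are assumptions made for illustration; the paper's analysis is analytical rather than simulation code.

```python
# Illustrative sketch of randomized push-out space priority combined with
# nonpreemptive time priority, under the assumptions stated above.
import random
from collections import deque

class SpaceTimePriorityQueue:
    def __init__(self, buffer_size, alpha):
        self.buffer_size = buffer_size     # total waiting room (finite buffer)
        self.alpha = alpha                 # randomized push-out probability
        self.delay_sensitive = deque()     # time-priority class (served first)
        self.loss_sensitive = deque()      # space-priority class (may push out)

    def _occupancy(self):
        return len(self.delay_sensitive) + len(self.loss_sensitive)

    def arrive(self, packet, loss_sensitive):
        """Return True if the packet is admitted, False if it is lost."""
        if self._occupancy() < self.buffer_size:
            (self.loss_sensitive if loss_sensitive else self.delay_sensitive).append(packet)
            return True
        # Buffer full: a loss-sensitive arrival pushes out a waiting delay-sensitive
        # packet with probability alpha; otherwise the arrival itself is dropped.
        if loss_sensitive and self.delay_sensitive and random.random() < self.alpha:
            self.delay_sensitive.pop()     # push out the most recently queued packet
            self.loss_sensitive.append(packet)
            return True
        return False

    def next_for_service(self):
        """Nonpreemptive time priority: delay-sensitive packets are served first."""
        if self.delay_sensitive:
            return self.delay_sensitive.popleft()
        if self.loss_sensitive:
            return self.loss_sensitive.popleft()
        return None
```

Tuning `alpha` between 0 and 1 shifts loss from one class to the other, which is the fine-tuning flexibility the abstract highlights.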

Enhancing TCP Performance over Wireless Network with Variable Segment Size

  • Park, Keuntae;Park, Sangho;Park, Daeyeon
    • Journal of Communications and Networks, v.4 no.2, pp.108-117, 2002
  • TCP, which was developed on the basis of wired links, assumes that packet losses are caused by network congestion. In a wireless network, however, packet losses due to data corruption occur frequently. Since TCP does not distinguish between loss types, it applies its congestion-control mechanism to non-congestion losses as well as congestion losses, and its throughput is degraded as a result. To solve this problem over wireless links, previous research, such as split-connection and end-to-end schemes, tried to distinguish the loss types and apply congestion control only to congestion losses; yet these schemes do nothing about the non-congestion losses themselves. We propose a novel transport protocol for wireless networks, VS-TCP (Variable Segment size Transmission Control Protocol), which reacts to non-congestion losses. VS-TCP varies the segment size according to the non-congestion loss rate and thereby enhances performance: if packet losses due to data corruption occur frequently, it decreases the segment size to reduce both the retransmission overhead and the packet corruption probability; if packets are rarely lost, it increases the size to lower the header overhead. Via simulations we compared VS-TCP with other schemes; our results show that its segment-size variation mechanism achieves a substantial performance enhancement. (See the sketch below.)
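
The segment-size trade-off that VS-TCP exploits can be made concrete with a small calculation: smaller segments are less likely to be corrupted but carry proportionally more header overhead. The sketch below is not the protocol's actual adaptation rule; it simply picks the payload size that maximizes a simple goodput estimate for a given bit error rate, and the header size and candidate range are assumptions.

```python
# Numeric sketch of the segment-size vs. corruption-probability trade-off.

HEADER_BYTES = 40                      # assumed TCP/IP header size, no options

def goodput_fraction(payload_bytes, bit_error_rate):
    """Fraction of link capacity delivered as useful payload."""
    frame_bits = 8 * (payload_bytes + HEADER_BYTES)
    p_ok = (1.0 - bit_error_rate) ** frame_bits          # segment survives uncorrupted
    return (payload_bytes / (payload_bytes + HEADER_BYTES)) * p_ok

def best_segment_size(bit_error_rate, candidates=range(64, 1461, 4)):
    """Payload size maximizing the goodput estimate over the candidate range."""
    return max(candidates, key=lambda s: goodput_fraction(s, bit_error_rate))

if __name__ == "__main__":
    for ber in (1e-6, 1e-5, 1e-4):
        s = best_segment_size(ber)
        print(f"BER={ber:g}: best payload ~ {s} bytes, "
              f"goodput fraction ~ {goodput_fraction(s, ber):.3f}")
```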

Performance Improvement on RED Based Gateway in TCP Communication Network

  • Prabhavat, Sumet;Varakulsiripunth, Ruttikorn
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings, 2004.08a, pp.782-787, 2004
  • The Internet Engineering Task Force (IETF) has recommended deploying Random Early Detection (RED) to avoid the rising packet loss rates caused by the exponential growth of network traffic and by buffer overflow. Although the RED mechanism can prevent buffer overflow and hence reduce average packet loss rates, it is ineffective at preventing consecutive drops under heavy traffic, and it increases the probability and average number of consecutively dropped packets under light traffic (referred to here as the "uncritical condition"). RED thereby affects TCP congestion control, triggering consecutive unnecessary transmission-rate reductions that lower link utilization and consequently degrade network performance. To overcome these problems, we propose a new mechanism, Extended Drop slope RED (ExRED), obtained by modifying traditional RED. Numerical and simulation results show that the proposed mechanism reduces the drop probability in the uncritical condition. (See the sketch below.)

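ExRED's exact drop-slope modification is not given in the abstract, so the sketch below shows only the classical RED early-drop decision that it extends: an EWMA of the queue length and a drop probability that rises linearly between the two thresholds. Parameter values are illustrative, not taken from the paper.

```python
# Sketch of the standard RED drop decision that ExRED modifies.
import random

class REDQueue:
    def __init__(self, min_th=5, max_th=15, max_p=0.1, w_q=0.002):
        self.min_th, self.max_th = min_th, max_th
        self.max_p = max_p                 # maximum early-drop probability
        self.w_q = w_q                     # EWMA weight for the average queue size
        self.avg = 0.0
        self.count = -1                    # packets since the last early drop

    def on_arrival(self, queue_len):
        """Return True if the arriving packet should be dropped early."""
        self.avg = (1 - self.w_q) * self.avg + self.w_q * queue_len
        if self.avg < self.min_th:
            self.count = -1
            return False
        if self.avg >= self.max_th:
            self.count = 0
            return True                    # forced drop above the maximum threshold
        # Linear drop slope between min_th and max_th, spread out by `count`
        # as in the original RED algorithm.
        self.count += 1
        p_b = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        p_a = p_b / (1 - self.count * p_b) if self.count * p_b < 1 else 1.0
        if random.random() < p_a:
            self.count = 0
            return True
        return False
```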

Active Queue Management using Adaptive RED

  • Verma, Rahul;Iyer, Aravind;Karandikar, Abhay
    • Journal of Communications and Networks, v.5 no.3, pp.275-281, 2003
  • Random Early Detection (RED) [1] is an active queue management scheme that has been deployed extensively to reduce packet loss during congestion. Although RED can improve loss rates, its performance depends heavily on the tuning of its operating parameters. The idea of adaptively varying RED parameters to suit network conditions has been investigated in [2], where the maximum packet dropping probability $max_p$ is varied. This paper focuses on adaptively varying the queue weight $\omega_q$ in conjunction with $max_p$ to improve performance. We propose two algorithms, $\omega_q$-thresh and $\omega_q$-ewma, to adaptively vary $\omega_q$. Performance is measured in terms of packet loss percentage, link utilization, and stability of the instantaneous queue length. We demonstrate that varying $\omega_q$ and $max_p$ together results in an overall improvement in loss percentage and queue stability while maintaining the same link utilization. We also show that $max_p$ has a greater influence on loss percentage and queue stability than $\omega_q$, and that varying $\omega_q$ has a positive influence on link utilization. (See the sketch below.)
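
Since the abstract does not specify the $\omega_q$-thresh and $\omega_q$-ewma update rules, the sketch below only illustrates the general idea of periodically retuning RED parameters: an AIMD-style adjustment of $max_p$ in the spirit of the well-known Adaptive RED approach, plus a simple heuristic nudge of $\omega_q$. Both rules and all constants are assumptions for illustration, not the algorithms proposed in the paper.

```python
# Hedged sketch of periodically adapting RED's max_p and w_q (illustrative only).

def adapt_red_parameters(avg_q, min_th, max_th, max_p, w_q,
                         alpha=0.01, beta=0.9):
    """Nudge max_p (AIMD) and w_q so the average queue stays between the thresholds."""
    target_lo = min_th + 0.4 * (max_th - min_th)
    target_hi = min_th + 0.6 * (max_th - min_th)
    if avg_q > target_hi and max_p < 0.5:
        max_p = min(0.5, max_p + alpha)        # additive increase: drop more aggressively
    elif avg_q < target_lo and max_p > 0.01:
        max_p = max(0.01, max_p * beta)        # multiplicative decrease
    # Illustrative w_q tweak: track the queue faster when it drifts outside the band.
    if avg_q > max_th or avg_q < min_th:
        w_q = min(0.05, w_q * 1.5)
    else:
        w_q = max(0.001, w_q * 0.95)
    return max_p, w_q
```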