• Title/Summary/Keyword: Parallel TCP Transmission

Smartphone Real Time Streaming Service using Parallel TCP Transmission (병렬 TCP 통신을 이용한 스마트폰 실시간 스트리밍 서비스)

  • Kim, Jang-Young
    • Journal of the Korea Institute of Information and Communication Engineering / v.20 no.5 / pp.937-941 / 2016
  • This paper proposes an efficient multiple-TCP mechanism for real-time remote-control video streaming over Wi-Fi using Android smartphones. The wireless video stream transmission mechanism can be applied in various areas such as real-time server stream transmission, mobile drones, disaster robotics, and real-time security monitoring systems, and it is especially valuable where data must be delivered in a timely fashion, as in medical emergencies, security surveillance, and disaster prevention. We therefore designed and implemented a parallel TCP transmission (parallel stream) scheme for efficient real-time video streaming services, and we evaluated the proposed mechanism under various environments with a performance analysis.
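
The core idea of a parallel TCP stream as described in this abstract can be sketched as follows: the sender splits the video byte stream into sequence-numbered chunks and round-robins them over several TCP connections, and the receiver reorders by sequence number. This is only a minimal illustrative sketch, not the authors' implementation; the receiver address, port layout, and chunk size are assumptions.

```python
# Minimal sketch of a parallel TCP sender: chunks of a byte stream are
# round-robined over N TCP connections, each chunk prefixed with a sequence
# number and length so the receiver can reorder. Host/ports are hypothetical.
import socket
import struct

HOST, BASE_PORT = "192.0.2.10", 5000   # assumed receiver address
NUM_STREAMS = 4                        # assumed number of parallel streams
CHUNK_SIZE = 64 * 1024                 # assumed chunk size

def send_parallel(data: bytes) -> None:
    socks = [socket.create_connection((HOST, BASE_PORT + i))
             for i in range(NUM_STREAMS)]
    try:
        seq = 0
        for off in range(0, len(data), CHUNK_SIZE):
            chunk = data[off:off + CHUNK_SIZE]
            header = struct.pack("!IQ", seq, len(chunk))   # seq, payload length
            socks[seq % NUM_STREAMS].sendall(header + chunk)
            seq += 1
    finally:
        for s in socks:
            s.close()
```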

A Maximum Mechanism of Data Transfer Rate using Parallel Transmission Technology on High Performance Network (고성능 네트워크에서 병렬 전송 기술을 이용한 전송률 극대화 메커니즘)

  • Kim, Young-Shin;Huh, Eui-Nam
    • Journal of KIISE: Computer Systems and Theory / v.34 no.9 / pp.425-434 / 2007
  • Even though Internet backbone speeds have increased in the last few years due to projects like Internet2 and NGI, many high-performance distributed applications achieve only a small fraction of the available bandwidth. The cause lies in the design of TCP/IP, whose primary goal is reliable data transmission; high-speed data transmission was not a consideration when the protocol was designed. Several researchers have therefore studied ways to overcome this limitation. One resulting technique, parallel transfer, addresses the problem by using parallel TCP connections at the application level and has the added advantage of remaining compatible with existing infrastructure. More recently, researchers have studied mechanisms for deciding the number of parallel TCP connections, but some determine this number based only on empirical results. Although the hardware performance of the host also affects the transmission rate, it was not considered in their work. Hence, we collect all data related to the transmission rate, including hardware state information (CPU utilization, interrupts, and context switches), analyze the collected data, and propose a new mechanism that determines the number of parallel TCP connections to maximize performance based on our analysis.
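
The abstract's idea of factoring host hardware state (CPU utilization, interrupts, context switches) into the choice of connection count might be sketched like this on Linux by sampling /proc/stat. The 80% CPU threshold, the context-switch threshold, and the simple halving rule are illustrative assumptions, not the paper's derived mechanism.

```python
# Illustrative sketch: sample CPU utilization, interrupt and context-switch
# rates from /proc/stat (Linux), then cap the number of parallel TCP
# connections when the host itself looks saturated.
import time

def read_proc_stat():
    with open("/proc/stat") as f:
        lines = f.read().splitlines()
    cpu = [int(x) for x in lines[0].split()[1:]]   # aggregate CPU counters
    intr = int(next(l for l in lines if l.startswith("intr")).split()[1])
    ctxt = int(next(l for l in lines if l.startswith("ctxt")).split()[1])
    return cpu, intr, ctxt

def host_load(interval: float = 1.0):
    c1, i1, x1 = read_proc_stat()
    time.sleep(interval)
    c2, i2, x2 = read_proc_stat()
    idle = c2[3] - c1[3]                 # field 4 of the cpu line is idle time
    total = sum(c2) - sum(c1)
    util = 1.0 - idle / total
    return util, (i2 - i1) / interval, (x2 - x1) / interval

def suggest_streams(requested: int) -> int:
    util, intr_rate, ctxt_rate = host_load()
    # If the CPU is already busy or context switching heavily, extra sockets
    # mostly add overhead rather than throughput, so back off (assumed rule).
    if util > 0.8 or ctxt_rate > 100_000:
        return max(1, requested // 2)
    return requested
```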

UDT Parallel Transfer Technologies Adaptive to Network Status In High Speed Network (고속네트워크에서 네트워크 혼잡상태에 적응적인 UDT 병렬전송 기법)

  • Park, Jong Seon;Cho, Gi Hwan
    • Smart Media Journal / v.2 no.4 / pp.51-59 / 2013
  • With the increasing transmission speed of backbone networks, plenty of bandwidth has become available. However, this bandwidth is not effectively utilized in bulk data transfer, mainly because of TCP, the transport protocol used by most applications: the characteristics of its transfer mechanism make it inherently difficult to adapt to the available bandwidth. UDT is a prominent application-level data transfer protocol targeting high-speed networks. In this paper, we propose UDT parallel transfer techniques that adapt to network status and evaluate their performance from two points of view. First, we measure the data transfer rate of UDT with rate-based congestion control methods and compare it with basic UDT. Second, we apply parallel transfer techniques adapted to network status and measure their performance. Experimental results show that the UDT rate congestion control method outperforms basic UDT with a 106% improvement in the RTT 100 ms setting with 30 ms jitter, and that parallel transfer with rate congestion control shows a 107% improvement over plain parallel transfer in the RTT 400 ms setting with 20 ms jitter.
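
The adaptive idea in this abstract, tuning the parallel transfer to the observed network status, could be sketched, independently of the actual UDT API, as a small controller that raises or lowers the number of parallel streams based on measured RTT and loss. The thresholds below are illustrative assumptions, not the paper's method.

```python
# Illustrative controller (not UDT's API): adjust the number of parallel
# streams from periodic RTT/loss samples. All thresholds are assumptions.
def adapt_stream_count(current: int, rtt_ms: float, loss_rate: float,
                       max_streams: int = 8) -> int:
    if loss_rate > 0.01:                     # sustained loss: back off
        return max(1, current - 1)
    if rtt_ms > 200 and current < max_streams:
        return current + 1                   # long fat pipe: add a stream
    return current
```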

TCP-ROME: A Transport-Layer Parallel Streaming Protocol for Real-Time Online Multimedia Environments

  • Park, Ju-Won;Karrer, Roger P.;Kim, Jong-Won
    • Journal of Communications and Networks / v.13 no.3 / pp.277-285 / 2011
  • Real-time multimedia streaming over the Internet is rapidly increasing with the popularity of user-created content, Web 2.0 trends, and P2P (peer-to-peer) delivery support. While many homes today are broadband-enabled, the quality of experience (QoE) of a user is still limited by frequent interruptions of media playout. The vulnerability of TCP (transmission control protocol), the transport-layer protocol most commonly used for streaming in practice, to packet losses, retransmissions, and timeouts makes it hard to deliver a timely and persistent flow of packets for online multimedia content. This paper presents TCP real-time online multimedia environment (TCP-ROME), a novel transport-layer framework that allows the establishment and coordination of multiple many-to-one TCP connections. Between one client with multiple home addresses and multiple co-located or distributed servers, TCP-ROME increases the total throughput by aggregating the resources of multiple TCP connections. It also overcomes the bandwidth fluctuations of network bottlenecks by dynamically coordinating the content streams from multiple servers and by adapting the streaming rate of all connections to match the bandwidth requirement of the target video.
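
The many-to-one aggregation described here, where one client pulls segments of the same content from several servers over separate TCP connections, can be sketched roughly as below. The server list, segment request format, and static segment assignment are hypothetical stand-ins; TCP-ROME itself coordinates the per-connection streaming rates dynamically.

```python
# Rough sketch of many-to-one aggregation: one thread per segment fetches
# distinct pieces of the same content from different servers; a shared dict
# reassembles them in order. Addresses and the request format are hypothetical.
import socket
import threading

SERVERS = [("198.51.100.1", 9000), ("198.51.100.2", 9000)]   # assumed servers
SEGMENT_SIZE = 256 * 1024                                    # assumed size

def fetch_segment(addr, index, out, lock):
    with socket.create_connection(addr) as s:
        s.sendall(f"GET {index}\n".encode())    # hypothetical request line
        buf = bytearray()
        while len(buf) < SEGMENT_SIZE:
            data = s.recv(SEGMENT_SIZE - len(buf))
            if not data:
                break
            buf.extend(data)
    with lock:
        out[index] = bytes(buf)

def fetch(num_segments: int) -> bytes:
    # Static round-robin assignment of segments to servers for illustration.
    out, lock, threads = {}, threading.Lock(), []
    for i in range(num_segments):
        t = threading.Thread(target=fetch_segment,
                             args=(SERVERS[i % len(SERVERS)], i, out, lock))
        t.start()
        threads.append(t)
    for t in threads:
        t.join()
    return b"".join(out.get(i, b"") for i in range(num_segments))
```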

Analysis of the Interference between Parallel Socket Connections and Prediction of the Bandwidth (병렬 연결 간의 트래픽 간섭 현상 분석 및 대역폭 예측)

  • Kim Young-Shin;Huh Eui-Nam;Kim Il-Jung;Hwang Jun
    • Journal of Internet Computing and Services / v.7 no.1 / pp.131-141 / 2006
  • Recently, many researchers have studied high-performance data transmission techniques such as TCP buffer tuning, XCP, and parallel sockets. Parallel sockets is an application-level library for parallel data transfer, while TCP tuning, XCP, and DRS are implemented at the kernel level. However, parallel sockets have not yet been analyzed in detail and need further enhancement. In this paper, we verify the performance of the parallel transfer technique through several experiments and analyze the characteristics of traffic interference among socket connections. To enhance the parallel transfer management mechanism, we predict the number of socket connections required to meet the SLA for the network resource and, at the same time, mathematically estimate how the bandwidth of existing connections is affected by interference from other parallel transmissions. Our analytical scheme predicts the network bandwidth available to applications using parallel sockets with only 8% error.
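
A simple way to illustrate the kind of prediction this paper makes is a fair-share model of a shared bottleneck: if a bottleneck of capacity C already carries m flows and an application opens n parallel sockets, each flow gets roughly C/(n+m), so the parallel transfer receives about n·C/(n+m) and each pre-existing flow drops to C/(n+m). This is only a textbook approximation for illustration, not the paper's own analytical expression.

```python
# Textbook fair-share approximation (not the paper's exact formula):
# n parallel sockets competing with m existing flows on a bottleneck of
# capacity C each receive about C / (n + m).
def predicted_bandwidth(capacity_mbps: float, n_parallel: int, m_existing: int):
    share = capacity_mbps / (n_parallel + m_existing)
    return {
        "parallel_transfer": n_parallel * share,   # aggregate for the new app
        "each_existing_flow": share,               # what existing flows drop to
    }

# Example: a 100 Mbps bottleneck with 2 existing flows and 4 parallel sockets
# gives the parallel transfer ~66.7 Mbps and each existing flow ~16.7 Mbps.
print(predicted_bandwidth(100.0, 4, 2))
```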

A Study on Ring Buffer for Efficiency of Mass Data Transmission in Unstable Network Environment (불안정한 네트워크 환경에서 대용량 데이터의 전송 효율화를 위한 링 버퍼에 관한 연구)

  • Song, Min-Gyu;Kim, Hyo-Ryoung
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.15 no.6 / pp.1045-1054 / 2020
  • In this paper, we designed a TCP/IP-based ring buffer system that can stably transfer bulk data streams in unstable network environments. In the proposed scheme, the observation data stream output as UDP frames by each radio observatory's backend system is stored, via the socket buffer of the client system, as UDP packets in a large-capacity ring buffer. For stable transmission to the remote destination, the packets are then sent over TCP to the socket buffer of the server system at the correlation center, where they are stored in another large-capacity ring buffer if no problem is found with them. In case of errors such as loss, duplication, or out-of-order delivery, the packets are retransmitted through TCP flow control, guaranteeing the reliability of the data arriving at the correlation center. We also show that, when congestion avoidance is triggered by unstable network performance, the resulting degradation can be minimized by applying parallel streams.
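
The buffering pipeline described here, UDP ingest into a large ring buffer followed by TCP forwarding, can be sketched with a fixed-size ring of packet slots and two threads. The slot count, addresses, and the blocking overflow policy below are illustrative assumptions rather than the paper's configuration.

```python
# Sketch of the UDP-in / TCP-out ring buffer: a producer thread stores each
# received UDP datagram in a fixed-size ring, a consumer thread drains it
# over TCP (whose retransmissions provide reliability end to end).
import socket
import threading

SLOTS = 4096                      # ring capacity (assumed)
UDP_BIND = ("0.0.0.0", 6000)      # backend system sends here (assumed)
TCP_DEST = ("203.0.113.5", 7000)  # correlation-center server (assumed)

ring = [None] * SLOTS
head = tail = 0                   # head: next write slot, tail: next read slot
slots_free = threading.Semaphore(SLOTS)
slots_used = threading.Semaphore(0)

def udp_producer():
    global head
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(UDP_BIND)
    while True:
        pkt, _ = sock.recvfrom(65535)
        slots_free.acquire()          # block (rather than drop) when full
        ring[head] = pkt
        head = (head + 1) % SLOTS
        slots_used.release()

def tcp_consumer():
    global tail
    conn = socket.create_connection(TCP_DEST)
    while True:
        slots_used.acquire()
        pkt = ring[tail]
        tail = (tail + 1) % SLOTS
        slots_free.release()
        conn.sendall(pkt)             # TCP handles loss, duplication, ordering

threading.Thread(target=udp_producer, daemon=True).start()
tcp_consumer()
```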

A Simulation-Based Study of FAST TCP Compared to SCTP: Towards Multihoming Implementation Using FAST TCP

  • Arshad, Mohammad Junaid;Saleem, Mohammad
    • Journal of Communications and Networks / v.12 no.3 / pp.275-284 / 2010
  • Current multihome-aware protocols (like stream control transmission protocol (SCTP) or parallel TCP for concurrent multipath data transfer (CMT)) are not designed for high-capacity, large-latency networks; they often have performance problems transferring large data files over shared long-distance wide area networks. It has been shown that SCTP-CMT is more sensitive to receive buffer (rbuf) constraints, and this rbuf-blocking problem causes considerable throughput loss when multiple paths are used simultaneously. In this paper, we demonstrate the weakness of SCTP-CMT under rbuf constraints and identify that the rbuf-blocking problem in SCTP multihoming is mostly due to its loss-based congestion detection. We present a simulation-based performance comparison of FAST TCP versus SCTP in high-speed networks for resolving a number of throughput issues. This work proposes an end-to-end transport-layer protocol, FAST TCP multihoming, as a reliable, delay-based, multihome-aware, and selective-ACK-based transport protocol that can transfer data between multihomed source and destination hosts through multiple paths simultaneously. Through extensive ns-2 simulations, we show that FAST TCP multihoming achieves the desired goals under a variety of network conditions. The experimental results and survey presented here also provide insight into design decisions for future high-speed multihomed transport-layer protocols.
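
For context on the delay-based behavior referred to above: FAST TCP periodically updates its congestion window from the ratio of the minimum observed RTT to the current RTT. A commonly cited form of the update is w <- min(2w, (1 - gamma)·w + gamma·(baseRTT/RTT·w + alpha)). The sketch below simply applies that rule; the alpha and gamma values used here are illustrative, not taken from this paper.

```python
# Delay-based window update in the commonly cited FAST TCP form:
#   w <- min(2w, (1 - gamma) * w + gamma * (baseRTT / RTT * w + alpha))
def fast_tcp_window(w: float, base_rtt: float, rtt: float,
                    alpha: float = 200.0, gamma: float = 0.5) -> float:
    return min(2.0 * w,
               (1.0 - gamma) * w + gamma * (base_rtt / rtt * w + alpha))

# With no queueing delay (rtt == base_rtt) the window grows by gamma * alpha
# each update; as queueing delay builds up, baseRTT/RTT shrinks and the
# window levels off instead of waiting for packet loss.
w = 100.0
for rtt in (0.10, 0.10, 0.12, 0.15, 0.15):       # seconds, illustrative samples
    w = fast_tcp_window(w, base_rtt=0.10, rtt=rtt)
    print(round(w, 1))
```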

A Design and Implementation of Bulk Data Transmission Tool based on UDT (UDT 기반의 대용량 데이터 전송도구 설계 및 구현)

  • Park, Jong-Seon;Kim, Seung-Hae;Hwang, Gun-Joon;Cho, Gi-Hwan
    • Journal of the Institute of Electronics Engineers of Korea TC / v.49 no.2 / pp.23-31 / 2012
  • With the advance of high-bandwidth network infrastructure, the demand for collaboration between geographically distant users working with bulk data is increasing dramatically. However, TCP, the prominent data transmission protocol, is well known to suffer some degree of inefficiency for bulk data transmission when the RTT is relatively large, so several efforts are under way to propose new transmission methods that utilize the bandwidth effectively. UDT (UDP-based Data Transfer protocol) is one of these: a UDP-based application-level protocol that guarantees reliability and stability much like TCP. In this paper, we present the design and implementation of a UDT-based bulk data transmission tool that applies parallel transfer and compression techniques. The implementation is examined to measure its performance improvement on a real testbed and compared with existing bulk data transmission tools. Experimental results show that the proposed tool is more stable and performs better than native UDT, with 244% improvement at RTT 400 ms without losses and 229% at RTT 250 ms with 0.005% losses, respectively.
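
The "parallel and compressive" combination described in this abstract can be illustrated, independently of UDT itself, by compressing each chunk before handing it to one of several sender workers. zlib, the chunk size, and the placeholder send routine are illustrative stand-ins for whatever the tool actually uses.

```python
# Illustrative sketch (not the UDT-based tool itself): compress each chunk,
# then dispatch compressed chunks to parallel sender workers.
import zlib
from concurrent.futures import ThreadPoolExecutor

CHUNK_SIZE = 1 << 20    # 1 MiB per chunk, assumed

def compress_chunks(data: bytes):
    for off in range(0, len(data), CHUNK_SIZE):
        yield off // CHUNK_SIZE, zlib.compress(data[off:off + CHUNK_SIZE])

def send_chunk(item):
    seq, payload = item
    # Placeholder for the per-stream send (UDT socket, TCP socket, ...).
    return seq, len(payload)

def parallel_compressed_send(data: bytes, streams: int = 4):
    with ThreadPoolExecutor(max_workers=streams) as pool:
        return list(pool.map(send_chunk, compress_chunks(data)))
```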

HWbF(Hit and WLC based Firewall) Design using HIT technique for the parallel-processing and WLC(Weight Least Connection) technique for load balancing (병렬처리 HIT 기법과 로드밸런싱 WLC기법이 적용된 HWbF(Hit and WLC based Firewall) 설계)

  • Lee, Byung-Kwan;Kwon, Dong-Hyeok;Jeong, Eun-Hee
    • Journal of Internet Computing and Services / v.10 no.2 / pp.15-28 / 2009
  • This paper proposes HWbF (Hit and WLC based Firewall), a design consisting of a PFS (Packet Filter Station) and an APS (Application Proxy Station). The PFS is designed to reduce bottlenecks and prevent packet transmission delay by distributing packets with a PLB (Packet Load Balancing) module, and the APS is designed to manage a proxy cache server with a PCSLB (Proxy Cache Server Load Balancing) module and to detect DoS attacks from the packet traffic volume. The proposed HWbF thus prevents the packet transmission delay that was a drawback of existing firewalls, diminishes bottlenecks, and increases packet processing speed. In addition, by adjusting the critical value according to the packet traffic volume with the proposed expression, HWbF reduces the DoS attack (TCP) detection error rate from 50% to 38% for the average value and from 25% to 17% for the fixed critical value, so it not only improves the detection of DoS attack traffic but also diminishes the overload of the proxy cache server.
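
The WLC (Weighted Least Connection) policy named in the title assigns each new connection to the server with the smallest ratio of active connections to weight. The sketch below shows just that selection rule; the server names, weights, and connection counts are hypothetical.

```python
# Weighted Least Connection (WLC): route a new connection to the server with
# the smallest active-connections-to-weight ratio.
servers = {
    "proxy-1": {"weight": 3, "active": 12},   # ratio 4.0
    "proxy-2": {"weight": 2, "active": 5},    # ratio 2.5
    "proxy-3": {"weight": 1, "active": 2},    # ratio 2.0 -> chosen
}

def pick_wlc(servers):
    return min(servers,
               key=lambda name: servers[name]["active"] / servers[name]["weight"])

target = pick_wlc(servers)
servers[target]["active"] += 1     # account for the connection just assigned
print(target)                      # "proxy-3" in this example
```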

OFPT: OpenFlow based Parallel Transport in Datacenters

  • Liu, Bo;XU, Bo;Hu, Chao;Hu, Hui;Chen, Ming
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.10 / pp.4787-4807 / 2016
  • Although dense-interconnection datacenter networks (DCNs) such as FatTree provide multiple paths and high bisection bandwidth for each server pair, the widely used single-path TCP (SPT) and ECMP neither achieve high bandwidth utilization nor provide good load balancing. With only one available transmission path, SPT cannot make full use of the available bandwidth, while ECMP's random hashing results in many collisions. In this paper, we present OFPT, an OpenFlow based Parallel Transport framework, which integrates precise routing and scheduling for better load balancing and higher network throughput. By adopting an OpenFlow-based centralized control mechanism, OFPT computes the optimal path and bandwidth provision for each flow according to the global network view. To guarantee high throughput, OFPT dynamically schedules flows with a Seamless Flow Migration Mechanism (SFMM), which avoids packet loss during flow rerouting. Finally, we test OFPT on Mininet and implement it in a real testbed. The experimental results show that the average network throughput in OFPT is up to 97.5% of the bisection bandwidth, 36% higher than ECMP. In addition, OFPT decreases the average flow completion time (AFCT) and achieves better scalability.
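
The centralized path selection described for OFPT can be illustrated by a controller-side routine that, given its global view of residual link bandwidth, picks for each new flow the path whose tightest link has the most headroom (a widest-path choice) and then updates the view. The topology, link names, and selection rule here are illustrative assumptions, not OFPT's actual algorithm.

```python
# Illustrative controller-side widest-path selection (not OFPT's algorithm):
# among the candidate paths for a flow, choose the one whose bottleneck link
# has the most residual bandwidth, then reserve the flow's demand on it.
def bottleneck(path, residual):
    return min(residual[link] for link in path)

def place_flow(demand_mbps, candidate_paths, residual):
    best = max(candidate_paths, key=lambda p: bottleneck(p, residual))
    if bottleneck(best, residual) < demand_mbps:
        return None                        # no candidate can carry the flow
    for link in best:
        residual[link] -= demand_mbps      # update the global view
    return best

# Hypothetical example: two equal-cost paths between a server pair, with
# residual bandwidth in Mbps taken from the controller's global view.
residual = {"s1-a1": 800, "a1-c1": 600, "c1-a2": 900,
            "s1-b1": 800, "b1-c2": 950, "c2-a2": 700, "a2-d1": 1000}
paths = [["s1-a1", "a1-c1", "c1-a2", "a2-d1"],
         ["s1-b1", "b1-c2", "c2-a2", "a2-d1"]]
print(place_flow(200, paths, residual))    # picks the second path (700 Mbps bottleneck)
```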