• Title/Summary/Keyword: HTTP Performance

A Study on Improving HTTP Latency for Efficient Web Document Processing (효율적인 웹문서 처리를 위한 HTTP 지연 개선에 관한 연구)

  • 고일석;최우진;나윤지;류승렬
    • The Journal of the Korea Contents Association
    • /
    • v.2 no.2
    • /
    • pp.47-52
    • /
    • 2002
  • Recently, network overload has increased greatly with the explosive use of the Internet, so the Hyper-Text Transfer Protocol (HTTP) requires performance improvements to decrease latency in web document processing. P-HTTP is one of the improved versions of HTTP and has a pipelined structure, but its performance is degraded by the interaction between TCP and P-HTTP, and modifying the structural design of HTTP alone is not enough to resolve this problem. In this paper, we analyse the performance of HTTP and P-HTTP, and propose a new method for improving HTTP latency for efficient web document processing.
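
The pipelining that P-HTTP adds on top of a persistent connection can be illustrated with a few lines of Python: several GET requests are written back-to-back on one TCP connection before any response is read. This is only a generic sketch of HTTP/1.1 pipelining, not the method proposed in the paper; the host and paths are placeholders.

    import socket

    HOST = "example.com"                  # placeholder host
    PATHS = ["/", "/a.html", "/b.html"]   # hypothetical documents

    # One persistent TCP connection is reused for every request.
    with socket.create_connection((HOST, 80)) as sock:
        # Pipelining: send all requests before reading any response.
        for path in PATHS:
            request = (
                f"GET {path} HTTP/1.1\r\n"
                f"Host: {HOST}\r\n"
                "Connection: keep-alive\r\n\r\n"
            )
            sock.sendall(request.encode("ascii"))

        # Responses come back in request order on the same connection.
        sock.settimeout(2.0)
        chunks = []
        try:
            while True:
                data = sock.recv(4096)
                if not data:
                    break
                chunks.append(data)
        except socket.timeout:
            pass

    print(len(b"".join(chunks)), "bytes received for", len(PATHS), "pipelined requests")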

Modeling and Performance Evaluation of the Web server supporting Persistent Connection (Persistent Connection을 지원하는 웹서버 모델링 및 성능분석)

  • Min, Byeong-Seok;Nam, Ui-Seok;Lee, Sang-Mun;Sim, Yeong-Seok;Kim, Hak-Bae
    • The KIPS Transactions:PartC
    • /
    • v.9C no.4
    • /
    • pp.605-614
    • /
    • 2002
  • The amount of web traffic that web servers handle is increasing explosively, which requires that web server performance be improved for various web services. Although analysis of HTTP traffic together with proper tuning of the web server is essential, research relevant to the subject is scarce. In particular, although most applications are now implemented over the HTTP 1.1 protocol, existing studies mostly deal with performance evaluation of the HTTP 1.0 protocol; consequently, modeling approaches and performance evaluations for HTTP 1.1 have not been well established. Therefore, based on the HTTP 1.1 protocol supporting persistent connections, we present an analytical end-to-end tandem queueing model for a web server that considers the specific hardware configuration inside the server, from accepting the user request until completing the service. We compare various performance measures between HTTP 1.0 and HTTP 1.1 under overload conditions, and then analyze the characteristics of HTTP traffic, including the size of files requested from the web server, the OFF time between file transfers, the frequency of requests, and the temporal locality of requests. The presented model is verified by comparing server throughput under varying request rates against a real web server. Finally, we evaluate web server performance according to the interrelation between the TCP listen queue size, the number of HTTP threads, and the size of the network buffers.
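
A quick way to see why persistent connections matter is to issue the same requests with and without connection reuse. The sketch below uses Python's standard http.client; the host is a placeholder and the timing is only illustrative, it is not the queueing model developed in the paper.

    import http.client
    import time

    HOST = "example.com"   # placeholder server
    PATHS = ["/"] * 5      # five requests to the same server

    def non_persistent():
        """HTTP/1.0 style: open a new TCP connection per request."""
        start = time.perf_counter()
        for path in PATHS:
            conn = http.client.HTTPConnection(HOST, 80, timeout=5)
            conn.request("GET", path, headers={"Connection": "close"})
            conn.getresponse().read()
            conn.close()
        return time.perf_counter() - start

    def persistent():
        """HTTP/1.1 style: reuse one connection for every request."""
        start = time.perf_counter()
        conn = http.client.HTTPConnection(HOST, 80, timeout=5)
        for path in PATHS:
            conn.request("GET", path)
            conn.getresponse().read()
        conn.close()
        return time.perf_counter() - start

    print(f"non-persistent: {non_persistent():.3f}s, persistent: {persistent():.3f}s")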

Performance Analysis of QUIC Protocol for Web and Streaming Services (웹 및 스트리밍 서비스에 대한 QUIC 프로토콜 성능 분석)

  • Nam, Hye-Been;Jung, Joong-Hwa;Choi, Dong-Kyu;Koh, Seok-Joo
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.10 no.5
    • /
    • pp.137-144
    • /
    • 2021
  • The IETF has recently been standardizing the QUIC protocol for HTTP/3 services. It is noted that HTTP/3 uses QUIC as the underlying protocol, whereas HTTP/1.1 and HTTP/2 are based on TCP. Unlike TCP, QUIC uses 0-RTT or 1-RTT transmissions to reduce the connection establishment delays of TCP and SCTP. Moreover, to solve the head-of-line blocking problem, QUIC uses a multi-streaming feature. In addition, QUIC provides various features, including connection migration, and it is available in the Chrome browser. In this paper, we analyze the performance of QUIC for HTTP-based web and streaming services by comparing it with the existing TCP and Stream Control Transmission Protocol (SCTP) in network environments with different link delays and packet error rates. From the experimental results, we can see that QUIC provides better throughput than TCP and SCTP, and the performance gaps get larger as the link delays and packet error rates increase.
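
The connection-setup advantage of QUIC described above can be made concrete with back-of-the-envelope arithmetic: a TCP + TLS 1.2 connection typically spends about three round trips before application data flows, while QUIC needs one round trip on first contact and none on a resumed (0-RTT) connection. The round-trip counts below are textbook approximations, not measurements from the paper.

    # Rough connection-setup delay before the first HTTP response can arrive.
    # Round-trip counts are common textbook approximations, not measured values.
    SETUP_RTTS = {
        "TCP + TLS 1.2": 3,                 # TCP handshake (1) + TLS handshake (2)
        "QUIC, first contact (1-RTT)": 1,
        "QUIC, resumed (0-RTT)": 0,
    }

    for rtt_ms in (10, 50, 200):            # example round-trip times in milliseconds
        print(f"RTT = {rtt_ms} ms")
        for name, rtts in SETUP_RTTS.items():
            print(f"  {name:30s} ~{rtts * rtt_ms:4d} ms before the first response byte")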

A Study on Next Generation HTTP-based Adaptive Streaming Transmission Protocol for Realistic Media (실감미디어 전송을 위한 차세대 HTTP 기반 적응적 스트리밍 전송 프로토콜 연구)

  • Song, Minjeong;Yoo, Seong-geun;Park, Sang-il
    • Journal of Broadcast Engineering
    • /
    • v.24 no.4
    • /
    • pp.602-612
    • /
    • 2019
  • Various streaming technologies are being studied to guarantee viewers' QoE as realistic media develops. HTTP adaptive streaming is a typical example, and it is based on HTTP/1.1 and TCP. These protocols have become one of the causes of increased video delay and longer web page waiting times. Therefore, in this paper, after analyzing various transmission protocols and the development history of HTTP, we propose a QUIC-DASH system that applies QUIC, a UDP-based transmission protocol, and HTTP/2 to the MPEG-DASH system. Through experiments, we confirmed that the QUIC-DASH system can provide better performance in terms of transmission speed than the existing system in an LTE environment. We also suggest various future studies for better performance.
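
Independently of whether the segments travel over TCP or QUIC, the core of HTTP adaptive streaming is the client's per-segment rate selection: pick the highest-bitrate representation that the recently measured throughput can sustain. Below is a minimal sketch with a made-up bitrate ladder and throughput samples, not the QUIC-DASH system itself.

    # Minimal DASH-style rate selection: per segment, pick the highest bitrate
    # representation that the recently measured throughput can sustain.
    REPRESENTATIONS_KBPS = [400, 1200, 2500, 5000, 8000]   # hypothetical ladder
    SAFETY_MARGIN = 0.8                                    # use 80% of measured rate

    def select_bitrate(measured_kbps: float) -> int:
        budget = measured_kbps * SAFETY_MARGIN
        candidates = [r for r in REPRESENTATIONS_KBPS if r <= budget]
        return max(candidates) if candidates else min(REPRESENTATIONS_KBPS)

    throughput_samples = [900, 3000, 6500, 2000, 10000]    # per-segment measurements
    for i, kbps in enumerate(throughput_samples, start=1):
        print(f"segment {i}: measured {kbps} kbps -> request {select_bitrate(kbps)} kbps")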

Analysis of Average Waiting Time and Average Turnaround Time in Web Environment (웹 환경에서의 평균 대기 시간 및 평균 반환 시간의 분석)

  • Lee, Yong-Jin
    • The KIPS Transactions:PartC
    • /
    • v.9C no.6
    • /
    • pp.865-874
    • /
    • 2002
  • HTTP (HyperText Transfer Protocol) is the transfer protocol used by the World Wide Web distributed hypermedia system to retrieve objects. Because HTTP is a connection-oriented protocol, it uses TCP (Transmission Control Protocol) as its transport layer, but it is known that HTTP interacts badly with TCP. This paper discusses the factors affecting the performance of HTTP over TCP, the transaction time obtained with per-transaction TCP connections for HTTP access and the TCP slow-start overhead, and the transaction time for T-TCP (Transaction TCP), which is one of the methods for improving the performance of HTTP over TCP. Average waiting time and average turnaround time are important parameters for satisfying the QoS (Quality of Service) of end users, and formulas for calculating these two parameters are derived. The formulas can be used in environments in which each TCP or T-TCP transaction time is the same or different. Experiments and computational experience indicate that the proposed formulas work well, can be applied to environments in which bandwidth extension is necessary, and show that the time characteristics of T-TCP are superior to those of TCP. The load distribution method for web servers based on combinations of bandwidths is also discussed as a way to reduce average waiting time and average turnaround time.
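
With transactions served one after another on a connection, the usual definitions give the waiting time of the i-th transaction as the sum of the service times of the transactions ahead of it, and its turnaround time as its waiting time plus its own service time. The sketch below averages these for hypothetical TCP and T-TCP per-transaction times; the paper's exact formulas and parameter values may differ.

    # Average waiting time and average turnaround time for transactions served
    # one after another (FCFS).  Waiting time of transaction i = sum of the
    # service times of the transactions ahead of it; turnaround = waiting + own.
    def averages(transaction_times):
        waiting, turnaround, elapsed = [], [], 0.0
        for t in transaction_times:
            waiting.append(elapsed)          # time queued before service starts
            elapsed += t
            turnaround.append(elapsed)       # completion time measured from arrival
        n = len(transaction_times)
        return sum(waiting) / n, sum(turnaround) / n

    # Hypothetical per-transaction times (seconds): T-TCP avoids extra handshakes,
    # so its transactions are assumed shorter than plain TCP ones.
    tcp_times  = [0.30, 0.30, 0.30, 0.30]
    ttcp_times = [0.18, 0.18, 0.18, 0.18]

    for name, times in (("TCP", tcp_times), ("T-TCP", ttcp_times)):
        w, t = averages(times)
        print(f"{name}: average waiting {w:.2f}s, average turnaround {t:.2f}s")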

A Study on the implementation of a Portable Http Live Streaming Transmitter (휴대용 Http 라이브 스트리밍 전송기 구현에 관한 연구)

  • Cho, Tae-Kyung;Lee, Jea-Hee
    • The Transactions of the Korean Institute of Electrical Engineers P
    • /
    • v.63 no.3
    • /
    • pp.206-211
    • /
    • 2014
  • In this paper, we propose an HLS (HTTP Live Streaming) transmitter that operates easily and cheaply in all network and client environments, compared to existing live video streaming transmitters. We analyzed the HLS protocol and then implemented the transmitter to make it cheaper and portable. After designing the HLS transmitter hardware around an ARM11-core RISC processor, we ported the Linux operating system and implemented the HLS protocol using the open-source FFmpeg and a segmenter. For the performance evaluation of the developed HLS transmitter, we set up a test environment including a notebook, an iPhone, and an Android phone, and analyzed the video data received at the client display. The results of the performance evaluation confirm that the proposed HLS transmitter performs better than Apple's HLS.
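
The transmitter described above combines FFmpeg with a separate segmenter; a roughly equivalent pipeline can be reproduced today with FFmpeg's built-in HLS muxer, which encodes the input, cuts it into .ts segments, and maintains the .m3u8 playlist. This is a generic sketch assuming ffmpeg is installed, not the authors' exact command line; the input path is a placeholder.

    import subprocess

    # Cut an input stream into HLS segments plus an .m3u8 playlist using FFmpeg's
    # built-in hls muxer (a modern stand-in for the FFmpeg + segmenter pipeline).
    INPUT = "input.mp4"            # placeholder: file or capture device
    cmd = [
        "ffmpeg",
        "-re",                     # read input at its native frame rate (live-like)
        "-i", INPUT,
        "-c:v", "libx264",         # encode video to H.264
        "-c:a", "aac",             # encode audio to AAC
        "-f", "hls",
        "-hls_time", "6",          # target segment duration in seconds
        "-hls_list_size", "5",     # keep only the last 5 segments in the playlist
        "stream.m3u8",
    ]
    subprocess.run(cmd, check=True)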

Performance Analysis of Topology for Improvement of LAN Performance in Multimedia Service Environment (멀티미디어 환경에서 LAN성능향상을 위한 토폴로지 성능분석)

  • 조병록;임성진;송재철
    • Journal of Internet Computing and Services
    • /
    • v.2 no.5
    • /
    • pp.93-104
    • /
    • 2001
  • In this paper, we analyze networks according to the topology of an ATM-LAN in order to enhance performance in a multimedia environment. We analyze the job processing time of E-mail, FTP, and HTTP servers as the number of nodes varies in star, ring, and mesh topologies, and we analyze the load factor of the E-mail, FTP, and HTTP servers and the variation in sending/receiving rates for the various topologies. We show that the network configuration presented in this paper can minimize server load compared with the other topologies.

Transmission Performance Comparison and Analysis with Different Publish/Subscribe Protocol (발행-구독 프로토콜에서 전송 성능의 비교 및 분석)

  • Fan, Zujie;Kim, JaeSoo
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2020.07a
    • /
    • pp.77-80
    • /
    • 2020
  • In this paper, we analyze and compare the performance of different publish/subscribe protocols in a real application environment. The paper provides a horizontal comparison of current publish/subscribe protocols in terms of security, throughput, and delay. Thanks to its lightweight design, the MQTT protocol demonstrates excellent delay performance, whereas the AMQP protocol has advantages in security and throughput. Although the REST/HTTP protocol has the worst delay performance, it is excellent in terms of compatibility because it is based on the HTTP protocol.
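
The delay comparison above can be reproduced in miniature by publishing timestamped messages and measuring how long delivery takes. The sketch assumes a local MQTT broker and the paho-mqtt package with its classic (1.x) callback API; the broker address and topic are placeholders.

    import json
    import time
    import paho.mqtt.client as mqtt   # assumes the paho-mqtt package is installed

    BROKER = "localhost"              # placeholder broker address
    TOPIC = "perf/test"

    def on_message(client, userdata, msg):
        sent = json.loads(msg.payload)["sent"]
        print(f"publish->receive delay: {(time.time() - sent) * 1000:.1f} ms")

    client = mqtt.Client()            # paho-mqtt 1.x style constructor
    client.on_message = on_message
    client.connect(BROKER, 1883, keepalive=60)
    client.subscribe(TOPIC, qos=1)
    client.loop_start()

    # Publish a few timestamped messages and measure how long delivery takes.
    for _ in range(5):
        client.publish(TOPIC, json.dumps({"sent": time.time()}), qos=1)
        time.sleep(0.5)

    client.loop_stop()
    client.disconnect()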

Development of a High Performance Web Server Using A Real-Time Compression Architecture (실시간 압축 전송 아키텍쳐를 이용한 고성능 웹 서버 구현)

  • 민병조;강명석;우천희;남의석;김학배
    • Journal of the Korea Computer Industry Society
    • /
    • v.5 no.3
    • /
    • pp.345-354
    • /
    • 2004
  • These days, services such as e-commerce, e-government, multimedia services, and home networking applications have become popular. Most web traffic generated today basically uses the Hyper-Text Transfer Protocol (HTTP). Unfortunately, HTTP is poorly suited to these applications, which make up a significant share of web traffic. In this paper, we introduce a real-time contents compression architecture that maximizes web service performance as well as reducing response time. The architecture is built into a Linux kernel-based web-accelerating module. It guarantees not only the freshness of compressed contents but also minimal time delay, using a server-state adaptive algorithm that decides whether the server should send the compressed message, considering the consumption of server resources when heavy requests reach the web server. We also minimize the CPU overhead of the web server by implementing the compression exclusively in a kernel thread. The test results show that this architecture saves web server bandwidth and that the improvement in elapsed time is dramatic.
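
The server-state adaptive decision (compress a response only when the client accepts gzip and the machine can spare the CPU) can be sketched in user space in a few lines. The load threshold and size cutoff below are placeholders, and the paper's implementation is an in-kernel thread rather than Python.

    import gzip
    import os

    CPU_LOAD_THRESHOLD = 0.75      # hypothetical cutoff; the paper's policy differs
    MIN_SIZE = 1024                # don't bother compressing tiny responses

    def maybe_compress(body: bytes, accept_encoding: str):
        """Server-state adaptive decision: compress only if the client accepts
        gzip, the body is big enough, and the 1-minute load average per CPU
        (Unix only) is below the threshold."""
        load_per_cpu = os.getloadavg()[0] / (os.cpu_count() or 1)
        if ("gzip" in accept_encoding
                and len(body) >= MIN_SIZE
                and load_per_cpu < CPU_LOAD_THRESHOLD):
            return gzip.compress(body), {"Content-Encoding": "gzip"}
        return body, {}

    payload, headers = maybe_compress(b"<html>" + b"x" * 5000 + b"</html>", "gzip, deflate")
    print(len(payload), headers)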

Development of a High Performance Web Server Using A Real-Time Compression Architecture (실시간 압축 전송 아키텍쳐를 이용한 고성능 웹서버 구현)

  • Min Byungjo;Hwang June;Kim Hagbae
    • The KIPS Transactions:PartC
    • /
    • v.11C no.6 s.95
    • /
    • pp.781-786
    • /
    • 2004
  • These days, services such as e-commerce, e-government, multimedia services, and home networking applications have become popular. Most web traffic generated today basically uses the Hyper-Text Transfer Protocol (HTTP). Unfortunately, HTTP is poorly suited to these applications, which make up a significant share of web traffic. In this paper, we introduce a real-time contents compression architecture that maximizes web service performance as well as reducing response time. The architecture is built into a Linux kernel-based web-accelerating module. It guarantees not only the freshness of compressed contents but also minimal time delay, using a server-state adaptive algorithm that decides whether the server should send the compressed message, considering the consumption of server resources when heavy requests reach the web server. We also minimize the CPU overhead of the web server by implementing the compression exclusively in a kernel thread. The test results show that this architecture saves web server bandwidth and that the improvement in elapsed time is dramatic.