Comparing the performance of HTTP/1.1, HTTP/2 and HTTP/3

Published On November 22, 2023

category http | tags benchmark HTTP/3


Recently, I was asked to incorporate HTTP/3 into our services in search of some performance improvement. First, I need to learn what problems each major version of HTTP tries to solve. Second, I need to conduct an experiment that is close to our production usage scenario to compare their performance.

Version comparison

ChatGPT is becoming more and more important in my daily work. Apart from asking it to write code snippets for me, I am also using it to learn new technologies.

Here is a summary from ChatGPT of the advantages of each HTTP version over its predecessor:

HTTP/2

Keywords: multiplexing, binary

HTTP/2 introduces several improvements over HTTP/1.1, which can significantly enhance the efficiency and speed of web communication. Here are some of the major advantages of HTTP/2:

  • Multiplexing: HTTP/2 allows multiple requests and responses to be sent concurrently over the same connection. This is a significant improvement over HTTP/1.1, which can only process one request per connection at a time, and it reduces both latency and the overhead of establishing multiple TCP connections (see the sketch after this list). Note that although keep-alive is supported in HTTP/1.1, an established connection can still only process requests one at a time.
  • Binary Protocol: Unlike HTTP/1.1, which is a textual protocol, HTTP/2 is a binary protocol. This makes it more efficient to parse, more compact 'on the wire', and less error-prone to interpret.
  • Server Push: This is a new feature in HTTP/2, where a server can proactively push responses into the client's cache. This can potentially speed up page load times by avoiding the round trip needed for the client to make the actual request.
  • Header Compression: HTTP/2 uses HPACK compression, which reduces overhead. Headers are often sent with each request and response. By compressing these, we can save bandwidth and improve speed.
  • Stream Prioritization: HTTP/2 allows the client to specify the priority of its multiple requests. This allows more important resources to be sent before less important ones, improving the perceived performance of web pages.
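
To make the multiplexing point concrete, here is a small sketch of my own (not part of the ChatGPT summary above): it fires ten concurrent GET requests through a single HttpClient while pinning the protocol to HTTP/2, so they all share one connection. The URL points at the benchmark server described later in this post, and it assumes the local dev certificate is trusted (dotnet dev-certs https --trust).

using System.Net;

// Minimal sketch: 10 concurrent GETs pinned to HTTP/2.
// With HTTP/1.1 each in-flight request needs its own TCP connection;
// with HTTP/2 they are multiplexed as streams over a single connection.
using var client = new HttpClient();

var tasks = Enumerable.Range(0, 10).Select(async i =>
{
    using var request = new HttpRequestMessage(HttpMethod.Get, "https://localhost:5001/")
    {
        Version = HttpVersion.Version20,
        VersionPolicy = HttpVersionPolicy.RequestVersionExact // fail instead of silently downgrading
    };
    using var response = await client.SendAsync(request);
    Console.WriteLine($"request {i}: HTTP/{response.Version} -> {(int)response.StatusCode}");
});

await Task.WhenAll(tasks);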

HTTP/3

Keywords: UDP, mobile

HTTP/3 is the latest version of the Hypertext Transfer Protocol (HTTP) and it introduces several improvements over HTTP/2. The main differences arise from the change of the underlying transport protocol from TCP in HTTP/2 to QUIC in HTTP/3. Here are some of the advantages:

  • Improved Performance in Poor Network Conditions: QUIC, the transport protocol for HTTP/3, was designed to improve performance over connections with high latency and packet loss, such as mobile and long-distance connections. QUIC reduces latency by establishing connections with fewer round trips than TCP, and by resuming connections without a handshake.
  • Elimination of Head-of-Line Blocking: In HTTP/2, packet loss in one stream could affect other streams due to the way TCP works. This is known as head-of-line blocking. QUIC solves this problem by handling streams independently, so the loss of a packet in one stream doesn't affect others.
  • Better Encryption: QUIC includes encryption by default. It uses TLS 1.3, which has several improvements over the older versions used by HTTP/2.
  • Connection Migration: QUIC is designed to handle changes in the network better than TCP. For example, if a user switches from Wi-Fi to cellular data, QUIC can keep the connection open and just switch to the new IP address, reducing delays.
  • Reduced Protocol Overhead: QUIC headers are typically smaller than TCP+TLS headers, reducing the amount of data that needs to be transmitted.
  • Improved Congestion Control: QUIC has improved congestion control mechanisms compared to TCP.
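
Since clients can silently fall back to an older version, it is worth verifying which version was actually negotiated. The probe below is a hypothetical sketch of my own along those lines: it requires HTTP/3 exactly and prints the negotiated version. Note that HTTP/3 in .NET 8 needs a QUIC implementation (MsQuic); as far as I know this means Windows 11 / Windows Server 2022 on Windows, or the libmsquic package on Linux.

using System.Net;

// Hypothetical probe: require HTTP/3 exactly and report what was negotiated.
// RequestVersionExact makes the request fail instead of downgrading to HTTP/2 or HTTP/1.1.
using var client = new HttpClient();

using var request = new HttpRequestMessage(HttpMethod.Get, "https://localhost:5001/")
{
    Version = HttpVersion.Version30,
    VersionPolicy = HttpVersionPolicy.RequestVersionExact
};

using var response = await client.SendAsync(request);
Console.WriteLine($"Negotiated HTTP/{response.Version}, status {(int)response.StatusCode}");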

However, it's worth noting that while HTTP/3 has several theoretical advantages over HTTP/2, real-world performance can vary. HTTP/3 is still relatively new, and not all servers and clients fully support it yet. Furthermore, some networks may block or throttle QUIC traffic because it's harder to analyze and optimize than TCP traffic.

Benchmarking

I am doing this experiment because I could not find any performance comparison data available at the time of writing.

Client

Since existing HTTP benchmarking tools, such as wrk, do not support HTTP/3, I had to write a wrk-like benchmarking tool myself. Here is a list of well-known HTTP benchmark tools. HTTP/3 is not even available in some languages, such as Go. Fortunately, it is supported in .NET 8, which is the platform we are working on.

Full code:

Example usage:

BenchmarkHttp.exe -U https://localhost:5001/ -D 20 -T 10 -C 200 -H 3
which means I am hammering the server with 10 threads (-T), 200 connections in total (-C), using HTTP version 3 (-H), for 20 seconds (-D).
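
As a rough, hypothetical sketch (not the actual BenchmarkHttp source), the core of such a client in .NET 8 can look like the following: each worker owns its own HttpClient (approximating one "connection" per worker), pins the requested HTTP version with HttpVersionPolicy.RequestVersionExact, and counts completed requests until the duration elapses. The hard-coded URL, duration, and dev-certificate bypass are assumptions for illustration only.

using System.Net;
using System.Net.Security;

// Hypothetical core loop of a wrk-like client (not the actual BenchmarkHttp code).
var url = new Uri("https://localhost:5001/");
var duration = TimeSpan.FromSeconds(20);   // -D
var connections = 200;                     // -C
var version = HttpVersion.Version30;       // -H: Version11 / Version20 / Version30

long completed = 0;
using var cts = new CancellationTokenSource(duration);

var workers = Enumerable.Range(0, connections).Select(async _ =>
{
    // One HttpClient per worker. For HTTP/1.1 this maps to one TCP connection per worker;
    // HTTP/2 and HTTP/3 multiplex requests, so "connections" is closer to "concurrent requests" there.
    using var client = new HttpClient(new SocketsHttpHandler
    {
        SslOptions = new SslClientAuthenticationOptions
        {
            // Accept the untrusted local dev certificate; never do this outside of benchmarks.
            RemoteCertificateValidationCallback = (_, _, _, _) => true
        }
    });

    while (!cts.IsCancellationRequested)
    {
        using var request = new HttpRequestMessage(HttpMethod.Get, url)
        {
            Version = version,
            VersionPolicy = HttpVersionPolicy.RequestVersionExact // never downgrade silently
        };
        using var response = await client.SendAsync(request, cts.Token);
        await response.Content.ReadAsByteArrayAsync(cts.Token);
        Interlocked.Increment(ref completed);
    }
}).ToArray();

try { await Task.WhenAll(workers); } catch (OperationCanceledException) { }

Console.WriteLine($"{completed} requests in {duration.TotalSeconds}s");
Console.WriteLine($"Requests/sec: {completed / duration.TotalSeconds:F0}");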

It's worth noting that using multiple threads does not lead to any improvement. Therefore, I use only one thread in the benchmarks below.

Compared with wrk, it performs equally well when using a single thread and HTTP/1.1. However, it should be noted that wrk does not support Windows, so it is run in an Ubuntu subsystem (WSL). I tried to build and run my implementation in Ubuntu too, but the result is worse.

wrk:

root@xuryan-desktop:~/wrk# ./wrk -t1 -c200 -d20s https://172.25.208.1:5001
Running 20s test @ https://172.25.208.1:5001
  1 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     3.20ms    1.65ms  57.53ms   92.58%
    Req/Sec    53.44k     4.77k   59.43k    88.50%
  1064260 requests in 20.03s, 2.73GB read
Requests/sec:  53141.88
Transfer/sec:    139.34MB

Ours (Windows):

PS C:\work\my\repos\BenchmarkHttp> .\bin\Debug\net8.0\BenchmarkHttp.exe -U https://localhost:5001/ -D 20 -T 1 -C 200 -H 1
Running 20s test @ https://localhost:5001/
1 threads and 200 connections
        1108522 requests in 20s
Requests/sec: 55426
Average Latency: 3.61ms

Ours (Ubuntu):

root@xuryan-desktop:~/benchmark# dotnet ./bin/Debug/net8.0/BenchmarkHttp.dll -D 20 -T 1 -C 200 -U https://172.25.208.1:5001/ -H 1
Running 20s test @ https://172.25.208.1:5001/
1 threads and 200 connections
        752614 requests in 20s
Requests/sec: 37630
Average Latency: 5.31ms

Server

Our services in production use Grpc.AspNetCore (an ASP.NET Core framework for hosting gRPC services). The underlying HTTP server is Kestrel or HTTP.sys. All requests come from the internal network.

To mimic this, the server code in this benchmark is based on the template generated by dotnet new webapp -o KestrelService. Note that it should be fine to use a plain web app because gRPC is not the focus of this benchmark.

Full code:

using Microsoft.AspNetCore.Server.Kestrel.Core;

var builder = WebApplication.CreateBuilder(args);

// Add services to the container.
builder.Services.AddRazorPages();

builder.WebHost.ConfigureKestrel((context, options) =>
{
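    // Listen on port 5001 and allow HTTP/1.1, HTTP/2 and HTTP/3 on the same endpoint.
    // HTTP/3 runs over QUIC and requires TLS, so UseHttps() is required here; Kestrel
    // then advertises HTTP/3 support to clients via the Alt-Svc response header.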
    options.ListenAnyIP(5001, listenOptions =>
    {
        listenOptions.Protocols = HttpProtocols.Http1AndHttp2AndHttp3;
        listenOptions.UseHttps();
    });
});

var app = builder.Build();

// Configure the HTTP request pipeline.
if (!app.Environment.IsDevelopment())
{
    app.UseExceptionHandler("/Error");
    // The default HSTS value is 30 days. You may want to change this for production scenarios, see https://aka.ms/aspnetcore-hsts.
    app.UseHsts();
}

app.UseHttpsRedirection();
app.UseStaticFiles();

app.UseRouting();

app.UseAuthorization();

app.MapRazorPages();

app.Run();

[Figure: home page of the web service, accessed from Chrome]

Environment

Device:

Device name xuryan-desktop
Processor   Intel(R) Core(TM) i9-10900X CPU @ 3.70GHz   3.70 GHz
Installed RAM   32.0 GB (31.7 GB usable)
Device ID   DF4F1A7C-279B-4985-9783-D85E0A70A79B
Product ID  00330-80000-00000-AA557
System type 64-bit operating system, x64-based processor
Pen and touch   No pen or touch input is available for this display

OS:

Edition Windows 11 Enterprise
Version 23H2
Installed on    1/13/2023
OS build    22631.2715
Experience  Windows Feature Experience Pack 1000.22677.1000.0

Both the server and the client (benchmark tool) run on .NET 8 on the same device described above, which means all calls are local.

Results

Several runs show similar results:

PS C:\work\my\repos\KestrelService> BenchmarkHttp.exe -U https://localhost:5001/ -D 20 -T 1 -C 200 -H 1
Running 20s test @ https://localhost:5001/
1 threads and 200 connections
        1115200 requests in 20s
Requests/sec: 55760
Average Latency: 3.59ms

PS C:\work\my\repos\KestrelService> BenchmarkHttp.exe -U https://localhost:5001/ -D 20 -T 1 -C 200 -H 2
Running 20s test @ https://localhost:5001/
1 threads and 200 connections
        1623077 requests in 20s
Requests/sec: 81153
Average Latency: 2.46ms

PS C:\work\my\repos\KestrelService> BenchmarkHttp.exe -U https://localhost:5001/ -D 20 -T 1 -C 200 -H 3
Running 20s test @ https://localhost:5001/
1 threads and 200 connections
        768692 requests in 20s
Requests/sec: 38434
Average Latency: 5.20ms

Shown in a table:

Version     Req/Sec   Avg latency (ms)
HTTP/1.1    55760     3.59
HTTP/2      81153     2.46
HTTP/3      38434     5.20

Conclusion

HTTP/3 is designed specifically to perform well on less-reliable networks with limited bandwidth (think cellular networks). Its advantages may not help in our scenario, where we rely heavily on multiple simultaneous streams and the network conditions are very good.

From the benchmark results, we learn:

  1. HTTP/2 performs very well compared to HTTP/1.1 because it handles multiple simultaneous streams well.
  2. Disappointingly, HTTP/3 performs a lot worse than either HTTP/1.1 or HTTP/2. The reason is not clear, but my guess is that the HTTP/3 implementation in .NET is not yet optimized, as it is quite new. In any case, I have reported this performance regression to the .NET team on GitHub.
