nghttp2.org

HTTP/2 C library and tools

Nghttp2 v0.4.1 Released

nghttp2 v0.4.1 was released today. It is mainly a bug-fix release. The supported HTTP/2 protocol version remains h2-12.

Although this release is tagged as a bug-fix release, nghttpx got some improvements. First, the --npn-list option now works with ALPN. Second, * is now officially allowed as the <HOST> parameter of the --frontend option; it means the wildcard address, which binds to both IPv4 and IPv6 addresses.
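For example, an invocation accepting frontend connections on every address might look like this (the port numbers, backend address and key/certificate paths are placeholders):

$ nghttpx --frontend='*,3000' --backend='127.0.0.1,8080' /path/to/server.key /path/to/server.crt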

Httpbin for HTTP/2 Clients

httpbin is an HTTP Request & Response Service developed by A Kenneth Reitz Project. Currently it only serves HTTP/1.1.

nghttp2.org now serves the same functionality over HTTP/2 and SPDY. You can access the service here.

Currently, some JSON elements (e.g., url) look a bit strange because the httpbin service is reverse proxied.
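For example, assuming the service is mounted under /httpbin/ on this host, you could fetch one of its endpoints over HTTP/2 with nghttp:

$ nghttp -v https://nghttp2.org/httpbin/get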

BLOCKED Frame Did Not Block DATA Frame?

The nghttp HTTP/2 client included in nghttp2 has a nice feature that shows the incoming and outgoing HTTP/2 frames. It is extremely useful for debugging.

Here is a snippet of the log nghttp produced today:

[  0.359] recv HEADERS frame <length=154, flags=0x04, stream_id=1>
          ; END_HEADERS
          (padlen=0)
          ; First response header
[  0.359] recv DATA frame <length=3798, flags=0x00, stream_id=1>
[  0.362] recv DATA frame <length=4096, flags=0x00, stream_id=1>
[  0.362] send HEADERS frame <length=27, flags=0x05, stream_id=3>
          ; END_STREAM | END_HEADERS
          (padlen=0)
          ; Open new stream
[  0.364] recv DATA frame <length=4096, flags=0x00, stream_id=1>
[  0.391] recv DATA frame <length=2715, flags=0x00, stream_id=1>
[  0.392] recv DATA frame <length=0, flags=0x01, stream_id=1>
[  0.394] recv HEADERS frame <length=40, flags=0x04, stream_id=3>
          ; END_HEADERS
          (padlen=0)
          ; First response header
[  0.394] recv DATA frame <length=3798, flags=0x00, stream_id=3>
[  0.394] recv DATA frame <length=4096, flags=0x00, stream_id=3>
[  0.402] recv DATA frame <length=4096, flags=0x00, stream_id=3>
[  0.406] recv DATA frame <length=4096, flags=0x00, stream_id=3>
[  0.430] recv DATA frame <length=4096, flags=0x00, stream_id=3>
[  0.430] send WINDOW_UPDATE frame <length=4, flags=0x00, stream_id=0>
          (window_size_increment=34887)
[  0.434] recv DATA frame <length=4096, flags=0x00, stream_id=3>
[  0.435] recv DATA frame <length=4096, flags=0x00, stream_id=3>
[  0.438] recv DATA frame <length=4096, flags=0x00, stream_id=3>
[  0.442] recv DATA frame <length=4096, flags=0x00, stream_id=3>
[  0.442] send WINDOW_UPDATE frame <length=4, flags=0x00, stream_id=3>
          (window_size_increment=36566)
[  0.470] recv DATA frame <length=4096, flags=0x00, stream_id=3>
[  0.471] recv DATA frame <length=4096, flags=0x00, stream_id=3>
[  0.474] recv DATA frame <length=4096, flags=0x00, stream_id=3>
[  0.475] recv DATA frame <length=1976, flags=0x00, stream_id=3>
[  0.475] recv BLOCKED frame <length=0, flags=0x00, stream_id=0>
[  0.488] recv DATA frame <length=4096, flags=0x00, stream_id=3>
[  0.488] recv DATA frame <length=4096, flags=0x00, stream_id=3>
[  0.488] recv DATA frame <length=4096, flags=0x00, stream_id=3>
[  0.489] recv DATA frame <length=2417, flags=0x00, stream_id=3>
[  0.489] recv BLOCKED frame <length=0, flags=0x00, stream_id=3>

When you see this, you may wonder why the BLOCKED frame received at t=0.475 was followed by DATA frames. BLOCKED with stream_id=0 means that the connection-level window size is depleted and the server is unable to send more DATA until it receives a WINDOW_UPDATE frame from the client. There is no WINDOW_UPDATE between t=0.475 and t=0.488, so you may think this must be a mistake and the server has a bug.

Of course not. A WINDOW_UPDATE with stream_id=0 had already been sent at t=0.430. But due to latency, before it reached the server, the server used up all the available window size and sent the BLOCKED frame. After that, the server finally received the WINDOW_UPDATE from the client, updated its window size, and started to send DATA frames again.

BLOCKED and WINDOW_UPDATE frames

New Version v0.4.0 Out Now

We released new version v0.4.0 just now. A quick summary of this release:

How Dependency Based Prioritization Works

Dependency-based prioritization was first introduced in HTTP/2 draft-11 and further refined in HTTP/2 draft-12, which is the latest draft version at the time of this writing. The draft describes the mechanism and the requirements for clients and servers in detail in 5.3. Stream priority. In short, dependency-based prioritization works like this:

  • A stream can depend on another stream. If stream B depends on stream A, stream B is not processed unless stream A is closed or stream A cannot make progress because of, for example, flow control or data not being available from the backend content server. These dependency links form a tree, called the dependency tree (circular dependencies are not allowed).

  • A dependency has a weight. The weight is used to determine how much of the available resources (e.g., bandwidth) is allocated to a stream.

    Quoted from 5.3.2. Dependency weighting:

    Streams with the same dependencies SHOULD be allocated resources proportionally based on their weight. Thus, if stream B depends on stream A with weight 4, and C depends on stream A with weight 12, and if no progress can be made on A, stream B ideally receives one third of the resources allocated to stream C.
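    Working out the arithmetic in that example: the shares are proportional to the weights, so stream B gets 4 / (4 + 12) = 25% and stream C gets 12 / (4 + 12) = 75% of the resources, i.e., B receives one third of what C receives.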

We do not describe weights further in this post. We focus on the stream dependency part and assume all weights are equal.

We first have to say that prioritization in HTTP/2 is a completely optional feature. A client can freely provide prioritization information to a server, but the server may choose to ignore it. A simple server implementation may ignore prioritization altogether.

So how is this mechanism used for our Web pages? When loading a typical Web page, the client first requests the HTML file. It then parses the received portion of the HTML, finds links to resources such as CSS, Javascript and images, and issues requests to get these resources as well. Suppose that the client wants the HTML at the highest priority, since it is the main page to show. Then it wants CSS and Javascript at medium priority, and images at the lowest priority. These requirements can be expressed as dependencies: CSS and Javascript files depend on the HTML file, and images depend on CSS or Javascript files. By providing this prioritization information to the server, the client can load resources in the optimal order.

nghttp2 fully implements prioritization, so let’s see how prioritization works in a real use case. We use the nghttp2 documentation index.html generated by Sphinx (the same page is available at https://nghttp2.org/documentation/) as a test page. That page contains links to the following resources:

  • theme.css
  • doctools.js
  • jquery.js
  • theme.js
  • underscore.js

We use the nghttp command-line client with the -a option. With -a, it also downloads the resources linked from the HTML page it is downloading. It is programmed to categorize resources into the following priority levels.

  1. HIGHEST — main HTML (index.html)
  2. HIGH — CSS files (theme.css)
  3. MIDDLE — Javascript files (doctools.js, jquery.js, theme.js)
  4. LOWEST — Images (none in this case)

Ideally, we’d like to build a dependency tree like this:

Ideal dependency tree
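Rendered as text, based on the priority levels above, the ideal tree is:

index.html
└── theme.css
    ├── doctools.js
    ├── jquery.js
    ├── theme.js
    └── underscore.js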

We use the following command line to run the nghttp client:

$ nghttp -nva https://localhost:3000/doc/manual/html/index.html

With the -v option, we can see what happens in the HTTP/2 framing layer. First, nghttp requested index.html from the server:

[  0.041] send HEADERS frame <length=67, flags=0x05, stream_id=1>
          ; END_STREAM | END_HEADERS
          (padlen=0)
          ; Open new stream
          :authority: localhost:3000
          :method: GET
          :path: /doc/manual/html/index.html
          :scheme: https
          accept: */*
          accept-encoding: gzip, deflate
          user-agent: nghttp2/0.4.0-DEV

index.html is stream 1 (see stream_id=1). index.html does not depend on any stream since this is the first stream ever in this connection. Then the server responded with the HTTP response header in a HEADERS frame and its response body in DATA frames:

[  0.042] (stream_id=1, noind=0) :status: 200
[  0.042] (stream_id=1, noind=0) server: nghttpd nghttp2/0.4.0-DEV
[  0.042] (stream_id=1, noind=0) content-length: 13978
[  0.042] (stream_id=1, noind=0) cache-control: max-age=3600
[  0.042] (stream_id=1, noind=0) date: Sun, 27 Apr 2014 13:18:26 GMT
[  0.042] (stream_id=1, noind=0) last-modified: Sun, 27 Apr 2014 10:48:12 GMT
[  0.042] recv HEADERS frame <length=84, flags=0x04, stream_id=1>
          ; END_HEADERS
          (padlen=0)
          ; First response header
[  0.043] recv DATA frame <length=4096, flags=0x00, stream_id=1>
[  0.044] recv DATA frame <length=4096, flags=0x00, stream_id=1>
[  0.044] recv DATA frame <length=4096, flags=0x00, stream_id=1>
[  0.044] recv DATA frame <length=1690, flags=0x00, stream_id=1>
[  0.044] recv DATA frame <length=0, flags=0x01, stream_id=1>
          ; END_STREAM

END_STREAM means that the content of stream 1 was completely received. nghttp parsed the HTML received in the DATA frames and found the links to resources. First it requested the 3 Javascript files:

[  0.044] send HEADERS frame <length=32, flags=0x25, stream_id=3>
          ; END_STREAM | END_HEADERS | PRIORITY
          (padlen=0, stream_id=1, weight=16, exclusive=0)
          ; Open new stream
          :authority: localhost:3000
          :method: GET
          :path: /doc/manual/html/_static/jquery.js
          :scheme: https
          accept: */*
          accept-encoding: gzip, deflate
          user-agent: nghttp2/0.4.0-DEV
[  0.044] send HEADERS frame <length=34, flags=0x25, stream_id=5>
          ; END_STREAM | END_HEADERS | PRIORITY
          (padlen=0, stream_id=1, weight=16, exclusive=0)
          ; Open new stream
          :authority: localhost:3000
          :method: GET
          :path: /doc/manual/html/_static/underscore.js
          :scheme: https
          accept: */*
          accept-encoding: gzip, deflate
          user-agent: nghttp2/0.4.0-DEV
[  0.044] send HEADERS frame <length=32, flags=0x25, stream_id=7>
          ; END_STREAM | END_HEADERS | PRIORITY
          (padlen=0, stream_id=1, weight=16, exclusive=0)
          ; Open new stream
          :authority: localhost:3000
          :method: GET
          :path: /doc/manual/html/_static/doctools.js
          :scheme: https
          accept: */*
          accept-encoding: gzip, deflate
          user-agent: nghttp2/0.4.0-DEV

There is priority information in each request HEADERS frame. For example, the stream requesting jquery.js has stream_id=1, weight=16, exclusive=0, which means it depends on stream 1 with weight 16. exclusive=0 means that the dependency is not exclusive, so if other streams already depend on the designated stream, the new stream joins the existing siblings. We’ll see an example of the exclusive=1 case soon. The other 2 requests also depend on stream 1. At this moment, the dependency tree became like this:

Dependency tree after stream 3, 5 and 7 were requested
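In text form:

stream 1 (index.html)
├── stream 3 (jquery.js)
├── stream 5 (underscore.js)
└── stream 7 (doctools.js)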

Then nghttp found the CSS link and issued a request:

[  0.044] send HEADERS frame <length=34, flags=0x25, stream_id=9>
          ; END_STREAM | END_HEADERS | PRIORITY
          (padlen=0, stream_id=1, weight=16, exclusive=1)
          ; Open new stream
          :authority: localhost:3000
          :method: GET
          :path: /doc/manual/html/_static/css/theme.css
          :scheme: https
          accept: */*
          accept-encoding: gzip, deflate
          user-agent: nghttp2/0.4.0-DEV

Here we have stream_id=1, weight=16, exclusive=1. This is a bit different from the previous priority information: this time we have exclusive=1, which means that stream 9 becomes the sole dependent of stream 1, and the streams which formerly depended on stream 1 now depend on stream 9. So the resulting dependency tree became like this:

Dependency tree after stream 3, 5, 7 and 9 were requested
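In text form:

stream 1 (index.html)
└── stream 9 (theme.css)
    ├── stream 3 (jquery.js)
    ├── stream 5 (underscore.js)
    └── stream 7 (doctools.js)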

The last resource was theme.js:

[  0.044] send HEADERS frame <length=33, flags=0x25, stream_id=11>
          ; END_STREAM | END_HEADERS | PRIORITY
          (padlen=0, stream_id=9, weight=16, exclusive=0)
          ; Open new stream
          :authority: localhost:3000
          :method: GET
          :path: /doc/manual/html/_static/js/theme.js
          :scheme: https
          accept: */*
          accept-encoding: gzip, deflate
          user-agent: nghttp2/0.4.0-DEV

The priority information for this request was stream_id=9, weight=16, exclusive=0. Stream 9 is theme.css, so unlike the first 3 Javascript files, this request directly specified a dependency on stream 9. Finally, the dependency tree became like this:

Final dependency tree
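In text form:

stream 1 (index.html)
└── stream 9 (theme.css)
    ├── stream 3 (jquery.js)
    ├── stream 5 (underscore.js)
    ├── stream 7 (doctools.js)
    └── stream 11 (theme.js)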

This completely reflects the priority levels the nghttp client implements.

Did the server respect this prioritization? Let’s see the DATA flow of these streams. Since stream 1 had already finished, stream 9 had the highest priority. The log shows that the server correctly sent its DATA first:

[  0.047] recv DATA frame <length=4096, flags=0x00, stream_id=9>
[  0.047] recv DATA frame <length=4096, flags=0x00, stream_id=9>
[  0.047] recv DATA frame <length=4096, flags=0x00, stream_id=9>
[  0.047] recv DATA frame <length=4096, flags=0x00, stream_id=9>
[  0.047] recv DATA frame <length=4096, flags=0x00, stream_id=9>
[  0.047] recv DATA frame <length=4096, flags=0x00, stream_id=9>
[  0.047] recv DATA frame <length=4096, flags=0x00, stream_id=9>
[  0.047] send WINDOW_UPDATE frame <length=4, flags=0x00, stream_id=0>
          (window_size_increment=34458)
[  0.048] recv DATA frame <length=4096, flags=0x00, stream_id=9>
[  0.048] recv DATA frame <length=4096, flags=0x00, stream_id=9>
[  0.048] recv DATA frame <length=4096, flags=0x00, stream_id=9>
[  0.048] recv DATA frame <length=4096, flags=0x00, stream_id=9>
[  0.048] send WINDOW_UPDATE frame <length=4, flags=0x00, stream_id=9>
          (window_size_increment=32768)
[  0.048] recv DATA frame <length=4096, flags=0x00, stream_id=9>
[  0.048] recv DATA frame <length=2405, flags=0x00, stream_id=9>
[  0.048] recv BLOCKED frame <length=0, flags=0x00, stream_id=0>

[  0.048] recv DATA frame <length=4096, flags=0x00, stream_id=9>
[  0.048] recv DATA frame <length=4096, flags=0x00, stream_id=9>
[  0.048] recv DATA frame <length=4096, flags=0x00, stream_id=9>
[  0.048] recv DATA frame <length=1690, flags=0x00, stream_id=9>
[  0.048] recv BLOCKED frame <length=0, flags=0x00, stream_id=9>

[  0.049] send WINDOW_UPDATE frame <length=4, flags=0x00, stream_id=0>
          (window_size_increment=35173)
[  0.049] send WINDOW_UPDATE frame <length=4, flags=0x00, stream_id=9>
          (window_size_increment=32767)
[  0.049] recv DATA frame <length=4096, flags=0x00, stream_id=3>
[  0.049] recv DATA frame <length=4096, flags=0x00, stream_id=5>
[  0.049] recv DATA frame <length=4096, flags=0x00, stream_id=7>
[  0.049] recv DATA frame <length=1675, flags=0x00, stream_id=11>
[  0.049] recv DATA frame <length=4096, flags=0x00, stream_id=3>
[  0.049] recv DATA frame <length=2421, flags=0x00, stream_id=5>
[  0.049] recv BLOCKED frame <length=0, flags=0x00, stream_id=0>

[  0.049] recv DATA frame <length=4096, flags=0x00, stream_id=9>
[  0.049] recv DATA frame <length=4096, flags=0x00, stream_id=9>
[  0.049] recv DATA frame <length=4096, flags=0x00, stream_id=9>
[  0.049] send WINDOW_UPDATE frame <length=4, flags=0x00, stream_id=0>
          (window_size_increment=34458)
[  0.050] recv DATA frame <length=4096, flags=0x00, stream_id=9>
[  0.050] recv DATA frame <length=4096, flags=0x00, stream_id=9>
[  0.050] recv DATA frame <length=4096, flags=0x00, stream_id=9>
[  0.050] recv DATA frame <length=105, flags=0x00, stream_id=9>
[  0.050] recv DATA frame <length=0, flags=0x01, stream_id=9>
          ; END_STREAM

You will notice that streams 3, 5, 7 and 11 are interleaved before stream 9 finished. This is because stream 9 could not make progress because of flow control; you can see the BLOCKED frame for stream 9 in the above log. The progress of stream 9 was blocked until the WINDOW_UPDATE frame for stream 9 arrived at the server. While stream 9 was blocked, the streams which depend on stream 9 were unblocked and started to send their DATA frames. After the server received the WINDOW_UPDATE frame for stream 9, it started to send DATA frames for stream 9 again, and streams 3, 5, 7 and 11 were blocked.

After stream 9 finished, the remaining streams had equal priority, so their DATA was sent interleaved:

[  0.050] recv DATA frame <length=2584, flags=0x00, stream_id=7>
[  0.050] recv DATA frame <length=0, flags=0x01, stream_id=11>
          ; END_STREAM
[  0.050] recv DATA frame <length=4096, flags=0x00, stream_id=3>
[  0.050] recv DATA frame <length=3812, flags=0x00, stream_id=5>
[  0.050] recv BLOCKED frame <length=0, flags=0x00, stream_id=0>

[  0.050] recv DATA frame <length=0, flags=0x01, stream_id=7>
          ; END_STREAM
[  0.050] recv DATA frame <length=4096, flags=0x00, stream_id=3>
[  0.050] recv DATA frame <length=4096, flags=0x00, stream_id=5>
[  0.050] recv DATA frame <length=4096, flags=0x00, stream_id=3>
[  0.050] send WINDOW_UPDATE frame <length=4, flags=0x00, stream_id=0>
          (window_size_increment=35173)
[  0.050] recv DATA frame <length=4096, flags=0x00, stream_id=3>
[  0.050] recv DATA frame <length=4096, flags=0x00, stream_id=3>
[  0.050] recv DATA frame <length=4096, flags=0x00, stream_id=3>
[  0.050] recv DATA frame <length=4096, flags=0x00, stream_id=3>
[  0.050] send WINDOW_UPDATE frame <length=4, flags=0x00, stream_id=3>
          (window_size_increment=32768)
[  0.051] recv DATA frame <length=4096, flags=0x00, stream_id=3>
[  0.051] recv DATA frame <length=1690, flags=0x00, stream_id=3>
[  0.051] recv BLOCKED frame <length=0, flags=0x00, stream_id=0>

[  0.051] recv DATA frame <length=4096, flags=0x00, stream_id=3>
[  0.051] recv DATA frame <length=4096, flags=0x00, stream_id=5>
[  0.051] recv DATA frame <length=4096, flags=0x00, stream_id=5>
[  0.051] send WINDOW_UPDATE frame <length=4, flags=0x00, stream_id=0>
          (window_size_increment=34458)
[  0.051] recv DATA frame <length=4096, flags=0x00, stream_id=5>
[  0.051] recv DATA frame <length=4096, flags=0x00, stream_id=5>
[  0.051] recv DATA frame <length=4096, flags=0x00, stream_id=5>
[  0.051] recv DATA frame <length=4096, flags=0x00, stream_id=5>
[  0.051] recv DATA frame <length=2416, flags=0x00, stream_id=5>
[  0.051] recv DATA frame <length=0, flags=0x01, stream_id=5>
          ; END_STREAM
[  0.051] recv DATA frame <length=4085, flags=0x00, stream_id=3>
[  0.051] recv BLOCKED frame <length=0, flags=0x00, stream_id=0>

[  0.051] recv DATA frame <length=4096, flags=0x00, stream_id=3>
[  0.051] recv DATA frame <length=4096, flags=0x00, stream_id=3>
[  0.051] recv DATA frame <length=4096, flags=0x00, stream_id=3>

HTTP/2 Draft-12 Update

We updated the test server to the latest HTTP/2 draft-12 implementation.

As before, we serve h2-12 on port 443 and h2c-12 on port 80. We also continue to support SPDY and HTTP/1. If an http URI is used to access this site, we issue an ALTSVC frame and an Alt-Svc header field to indicate that we also serve the same content on port 443 over the h2-12 protocol.

Let’s review the highlights of the changes in draft-12. Dependency-based priority was introduced in draft-11; in draft-12, it is further enhanced (and simplified). nghttp2 fully implements the dependency-based priority mechanism. From our experience, we know that it is crucial to retain information about closed streams to make prioritization work. This is also covered in draft-12. nghttp2 retains closed streams as much as possible; the upper bound of the sum of active and retained closed streams is the SETTINGS_MAX_CONCURRENT_STREAMS value the server declares.

The BLOCKED frame is newly introduced in draft-12. It indicates that DATA frame transfer is blocked due to exhaustion of the flow-control send window. nghttp2 implements transmission and reception of the BLOCKED frame for both connection-level and stream-level flow control.

The compressed DATA frame is introduced in draft-12. It is supposed to be a replacement for Transfer-Encoding: gzip in HTTP/1. nghttp2 supports the compressed DATA frame in its library API, but the public test server does not support it at the moment.

To access this site via HTTP/2, you can use the nghttp client included in the nghttp2 repository. It works with both https and http URIs. The node-http2 client may also work.
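For example, a quick check over TLS with the verbose frame trace (-n discards the downloaded data, -v prints the frames):

$ nghttp -nv https://nghttp2.org/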

The easiest way to test HTTP/2 is to use an h2-12 enabled Firefox browser. You can find the download link here. Don’t forget to set network.http.spdy.enabled.http2draft and security.ssl.enable_alpn to true in the about:config screen. Please note that Firefox only supports HTTP/2 with https URIs.

H2load Now Supports SPDY in Clear Text

We added the -p and --no-tls-proto options to h2load in order to support SPDY in clear text.

Previously, h2load supported SPDY only for https URIs. This is because SPDY has no mechanism to negotiate its protocol version without ALPN/NPN. With the -p option, the user can specify the ALPN identifier to use when an http URI (in clear text) is used.

Almost all SPDY-enabled public web servers are now deployed with SSL/TLS, but inside a server farm the SSL/TLS terminator and the backend server may communicate over SPDY in clear text. This option is very handy for performing load tests against those SPDY servers.
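For example, a run against a cleartext SPDY backend might look like this (the host, port, request count and concurrency are placeholders, and spdy/3.1 is assumed to be the protocol identifier the backend speaks):

$ h2load -n 1000 -c 10 -p spdy/3.1 http://127.0.0.1:8080/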

nghttp2.org Goes Live

Now this nghttp2 public test server is accessible via nghttp2.org, and you don’t have to remember a cryptic IP address. The SSL/TLS certificate is still self-signed; we are preparing a valid one, so stay tuned.

Let me summarize the current status of the public test server. The supported HTTP/2 protocol is HTTP/2 draft-11.

We offer HTTP/2 over TLS, identified by h2-11 in ALPN, on port 443. We also offer SPDY/3.1 and HTTP/1.1 on that port as fallbacks. Again, the certificate is self-signed at the time of this writing.

We offer HTTP/2 over (plain, non-encrypted) TCP, identified by h2c-11, on port 80. We also offer HTTP/1.1 on that port as a fallback, and we support HTTP Upgrade from an HTTP/1.1 connection to HTTP/2. The details of how to perform HTTP Upgrade from HTTP/1.1 to HTTP/2 are described here. We advertise h2-11 in the Alt-Svc header field and the ALTSVC HTTP/2 frame on this port.

You can use the nghttp CLI client, which is included in the nghttp2 repository, to access both services.
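For example, to exercise the cleartext port, nghttp’s -u flag (which asks it to start with HTTP/1.1 and perform the HTTP Upgrade to HTTP/2) can be combined with the usual flags; the exact flag set may vary by version:

$ nghttp -nvu http://nghttp2.org/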

At the time of this writing, a “special build” of Mozilla Firefox is the only web browser that can speak h2-11. You can find the link to download it here. Firefox only supports https URIs for HTTP/2, which means that there is no browser that can access this site over plain, non-encrypted HTTP/2. You need to enable HTTP/2 and ALPN in about:config as described in the above link. You may also need to set security.use_mozillapkix_verification to false to access this site (see here about this configuration parameter).