Renato Athaydes Personal Website

Sharing knowledge for a better world

Measuring the performance of Go against Dart HTTP servers!?

A few months ago a blog post turned up on Reddit with the title Dart vs Go REST Server Performance Comparison Study (henceforth, DartVSGo). It is well written and has nice charts and a decent setup, with the benchmark test tool, wrk, running on one DigitalOcean droplet (a VM), and the server under test running on another.

Unfortunately, though, it had a few problems that, as is common with benchmarks, led to a series of unwarranted conclusions, providing a distorted view of reality.

The conclusion, therefore, was misleading in my opinion. To prove that, I decided to do what anyone with a week to spare on debunking wrong claims they find on the Internet would do: clone the GitLab repository, fix all the problems I found, and re-run the (improved) benchmarks… and write a tool to generate pretty charts using Dart/Flutter to boot!

Obviously (as I wouldn’t be writing this blog post otherwise), I expected to find a completely different picture than what the original blog post painted.

Here are the main conclusions for those of you who won’t read past the introduction (but please do read the whole post if you’re going to claim how even wrong-er I am):

If you don’t believe me, keep reading as I go through the problems in the original blog post, show how to fix them, and then present my results in even prettier charts than the original blog post had…

Dart HTTP server misconceptions

Before I get to the main point of this post, I wanted to correct a few misconceptions I feel the DartVsGo blog post contained!

If you already know or don’t care about them, please skip to the next section where we get to the beefy part of this blog post.

dart:io HttpServer is deprecated

The DartVsGo blog post claimed that Dart is deprecating the old HTTP server in its standard library in favour of shelf.

From the introduction:

… there is a shift in the Dart world from the original now “discontinued” HTTP Server and a new one called Shelf.

In a previous post, the author claims that:

The original HTTP Server for Dart is a part of the core dart:io library that you get with your standard imports. Unfortunately in the last year or so it has been discontinued.

It links to a dart-archive repository that used to publish a http_server package on pub.dev, which was indeed discontinued. But that is NOT dart:io's HttpServer (dart:io is not a package, but part of the Dart standard library), which is what the author actually used in the benchmarks.

Let me make this clear: the dart:io HttpServer is NOT deprecated, neither is it discontinued. In fact, it’s the base for all Dart frameworks, including shelf.

I believe the author may have either confused dart:io HttpServer with the discontinued http_server external package, or simply misinterpreted this quote from the HttpServer class’ docs linked above:

Note: HttpServer provides low-level HTTP functionality. We recommend users evaluate the high-level APIs discussed at Write HTTP servers on dart.dev.

This note is just trying to make it clear that most users will probably want high-level APIs such as those provided by shelf (probably so they aren't disappointed when Dart lacks the conveniences they're used to from JS frameworks and the like), and that HttpServer may be too low-level for them.

With Dart developers being mostly front-end developers (due to the huge popularity of Flutter), that's understandable, as they're less likely to be familiar with the HTTP protocol, headers, body encoding and so on. But that clearly does not imply that nobody should use this class! And it certainly doesn't mean it is deprecated… deprecation is not a term you use metaphorically: something is deprecated only when it's marked as such, and in Dart there's a @deprecated annotation to use when you want to deprecate something. HttpServer is clearly not marked as deprecated, and probably never will be, given it's one of the basic building blocks for all other HTTP frameworks!
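For reference, this is what actual deprecation looks like in Dart (a minimal sketch; oldApi and newApi are made-up names, not real APIs):

```dart
// Marking a member as deprecated: the analyzer then warns at every call site.
@Deprecated('Use newApi() instead')
void oldApi() => print('old');

void newApi() => print('new');

void main() {
  // This call still compiles and runs, but triggers a
  // deprecated_member_use warning when analyzed.
  oldApi();
}
```

No such annotation appears anywhere on dart:io's HttpServer.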

Dart HTTP servers can only use one CPU core

Still in the introduction, we find this statement:

… on the scaling with multiple processors aspect since by default the Go server does and in Dart only one, Conduit , truly supports it.

Here’s another article claiming a similar thing:

You should implement multi-threaded requests handling yourself… Aqueduct is abandoned and I did not find any other solution which would care about that.

Here’s what the HttpServer.bind method documentation (the first method you will need to call to start a HTTP server) says:

… if shared is true and more HttpServers from this isolate or other isolates are bound to the port, then the incoming connections will be distributed among all the bound HttpServers. Connections can be distributed over multiple isolates this way.

I don’t know if this could be any clearer (though I suppose it should be mentioned in the class documentation as well?). If you know some Dart, you probably know that Dart requires Isolates for multi-threading (i.e. to scale to more than one CPU core). All that’s needed to start a HttpServer on more than one thread is to pass the shared: true argument to the bind method. You do need to “manually” start more than one Isolate, but good news… that’s not rocket science.

Here’s the full code for spawning an Isolate for each available CPU core, then running a handler function on each:

import 'dart:io';
import 'dart:isolate';

const port = 8081;

void runServer(_) async {
  print('Running server on ${Isolate.current.debugName}');
  final server =
      await HttpServer.bind(InternetAddress.anyIPv6, port, shared: true);
  await for (final request in server) {
    request.response..write('my response')..close();
  }
}

Future<void> main() async {
  Iterable.generate(Platform.numberOfProcessors, (i) {
    Isolate.spawn(runServer, null, debugName: 'iso-$i');
  }).toList(); // create isolates eagerly

  // block forever
  await Future.delayed(Duration(days: 99999999));
}

How do you do that with shelf? Well, similarly: bootstrap your pipeline on each Isolate with shared: true:

final server = await shelf_io.serve(handler, address, port, shared: true);
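Putting it all together for shelf, a full multi-isolate bootstrap might look like this (a sketch assuming package:shelf is declared in pubspec.yaml; the handler and function names are mine):

```dart
import 'dart:io';
import 'dart:isolate';

import 'package:shelf/shelf.dart';
import 'package:shelf/shelf_io.dart' as shelf_io;

const port = 8081;

// The same handler runs on every isolate.
Response _handler(Request request) => Response.ok('my response');

void _serve(_) async {
  // shared: true lets every isolate bind the same port; the Dart VM
  // distributes incoming connections among the bound servers.
  await shelf_io.serve(_handler, InternetAddress.anyIPv6, port, shared: true);
}

Future<void> main() async {
  for (var i = 0; i < Platform.numberOfProcessors; i++) {
    await Isolate.spawn(_serve, null, debugName: 'iso-$i');
  }
  // block forever, as in the dart:io example above
  await Future.delayed(Duration(days: 99999999));
}
```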

All Dart frameworks, except for Conduit, support doing this or something very similar, and leaving the distribution of TCP connections among Isolates to the Dart runtime system is the right thing to do (it would be very difficult to do it performantly in Dart itself)!

Conduit decided to expose its own API to control multi-threading (but it also relies on the Dart VM to actually do it), which actually makes things a bit harder if you want to have any control over that! But it was pretty easy to use a different approach for Conduit, while still achieving the same setup.

Binding to localhost by default

Ok, this one is pretty minor… and it’s not really a Dart misconception (but I couldn’t find a better place to put it, so here it is…), but an interesting point nevertheless, I think.

If you look at the original code, you’ll see that the author is creating a socket server bound to the localhost address, by default.

That’s not what you should normally do when your intention is to expose a HTTP server to the outside world!

To illustrate that, let me show you what happens when you do that by using Curl:

▶ curl -v localhost:8080/
*   Trying 127.0.0.1:8080...
* connect to 127.0.0.1 port 8080 failed: Connection refused
*   Trying ::1:8080...
* Connected to localhost (::1) port 8080 (#0)
> GET / HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.79.1
> Accept: */*
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: text/plain; charset=utf-8
< x-frame-options: SAMEORIGIN
< x-xss-protection: 1; mode=block
< transfer-encoding: chunked
< x-content-type-options: nosniff
<

Check out the first few lines of output! What’s going on?

First, curl tries to hit the ipv4 address 127.0.0.1… that’s cool, that’s how localhost is supposed to work. But notice that it doesn’t find anything there! So, it tries the next best thing, ipv6 address ::1, which actually works.

Using lsof, we can see what Dart does when you bind to localhost:

▶ lsof -i:8080
COMMAND   PID   USER   FD   TYPE            DEVICE SIZE/OFF NODE NAME
dart    30804 renato    8u  IPv6 0x3dbf90dcf38d6b5      0t0  TCP localhost:http-alt (LISTEN)

It binds to localhost (the loopback interface - which bypasses the networking hardware entirely) using an IPv6 socket… well, at least on my machine!

The behaviour is clearly explained in the docs for HttpServer.bind:

If address is a String, bind will perform a InternetAddress.lookup and use the first value in the list.

What does the lookup function return for localhost?

import 'dart:io';

main() async {
  print(await InternetAddress.lookup('localhost'));
}

Result:

[InternetAddress('::1', IPv6), InternetAddress('127.0.0.1', IPv4)]

It just so happens that the IPv6 address is first in the list, so Dart binds to that… this is a little problematic because only IPv6 ends up working, which may break some HTTP clients. Also, trying curl 127.0.0.1:8080/ will fail, which may surprise users who expect localhost to be synonymous with 127.0.0.1.

If you run benchmarks using localhost, you may inadvertently be including the time to look up the actual IP address in your measurements. And because Dart is not binding to the usual “first choice”, 127.0.0.1, your timings for Dart may already be incorrect from the get-go!

The better way to do this in Dart is to use the InternetAddress constants anyIPv4 or anyIPv6 (which also accepts IPv4 connections by default), or loopbackIPv4 and loopbackIPv6 if you really want to listen for local (same machine) connections only.

When binding to InternetAddress.anyIPv6, lsof shows that the socket is now bound correctly:

▶ lsof -i:8080
COMMAND   PID   USER   FD   TYPE            DEVICE SIZE/OFF NODE NAME
dart    31465 renato    8u  IPv6 0x3dbf90dcf3897b5      0t0  TCP *:http-alt (LISTEN)

And now, both IPv4 and IPv6 work. You can hit localhost, 127.0.0.1 and even [::1], and it will all work fine.

The Go documentation is not clear on how it parses the address argument, but it shows examples of starting a HTTP server with the address :8080, which is what is used in the benchmarks:

http.ListenAndServe(":8080", nil)

When running the Go server and checking its socket with lsof, it’s clear that it behaves like Dart bound to anyIPv6:

▶ lsof -i:8080
COMMAND   PID   USER   FD   TYPE             DEVICE SIZE/OFF NODE NAME
goserver 5061 renato    3u  IPv6 0xdd9b5703a2b8e0fb      0t0  TCP *:http-alt (LISTEN)

So, that’s what I am going to use in the benchmarks where possible.

Doing the plumbing

We need to do some basic plumbing before running benchmarks so that all Dart frameworks can benefit from running on multiple Isolates. By doing that, we can also discover if they actually support doing that, as they should.

To simplify bootstrapping the servers, I wrote this simple helper function that runs the same function on one Isolate per available CPU core:

import 'dart:io';
import 'dart:isolate';

import 'config.dart';

typedef ServerFunction = Function(Config);

Future<Never> runOnIsolates(ServerFunction function, Config config) async {
  Iterable.generate(Platform.numberOfProcessors, (i) {
    Isolate.spawn(function, config, debugName: 'iso-$i');
  }).toList(); // create isolates eagerly

  // block forever
  await Future.delayed(Duration(days: 99999999));
  throw '';
}

Now, bootstrapping any server on multiple Isolates is as easy as this:

runOnIsolates(runAlfredServer, config);

config is the parsed CLI options object. The Config type looks like this:

class Config {
  final Object address;
  final int port;
  final bool staticResponse;
  final bool compressed;

  Config({
    required this.address,
    required this.port,
    required this.staticResponse,
    required this.compressed,
  });
}
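For completeness, parsing such a Config from CLI arguments could be done with package:args along these lines (a sketch; the flag names here are hypothetical, not necessarily the ones the repository uses):

```dart
import 'dart:io';

import 'package:args/args.dart';

import 'config.dart';

// Hypothetical flags: --port, --static, --compressed.
Config parseConfig(List<String> args) {
  final parser = ArgParser()
    ..addOption('port', defaultsTo: '8080')
    ..addFlag('static', defaultsTo: false)
    ..addFlag('compressed', defaultsTo: false);
  final results = parser.parse(args);
  return Config(
    address: InternetAddress.anyIPv6,
    port: int.parse(results['port'] as String),
    staticResponse: results['static'] as bool,
    compressed: results['compressed'] as bool,
  );
}
```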

Server Implementations

I made a few changes to the Dart HTTP Server implementations for all frameworks, because as I’ll show, the comparison with Go was not really fair.

Differences between Dart implementations and Go

In order to show the problems with the standard HTTP Server implementation (the other frameworks suffer from exactly the same problems), let me first show you the original implementation (with very minor changes as mentioned in the previous section) below:

import 'dart:io';

import 'config.dart';

void runStandardHttpServer(Config config) {
  HttpServer.bind(config.address, config.port, shared: true).then((server) {
    server.listen(config.staticResponse
        ? (HttpRequest request) {
            request.response.write('Hello World!\n');
            request.response.close();
          }
        : (HttpRequest request) {
            request.response
                .write('The time is ${DateTime.now().toIso8601String()}\n');
            request.response.close();
          });
  });
}

If we’re going to benchmark something, we should try as much as possible to compare apples to apples. That means we need to make sure the applications being compared are doing approximately the same thing.

So, let’s run a single request against the Dart server above (using a static response for now) and see what the response looks like:

▶ curl -v 127.0.0.1:8080/
*   Trying 127.0.0.1:8080...
* Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0)
> GET / HTTP/1.1
> Host: 127.0.0.1:8080
> User-Agent: curl/7.79.1
> Accept: */*
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: text/plain; charset=utf-8
< x-frame-options: SAMEORIGIN
< x-xss-protection: 1; mode=block
< transfer-encoding: chunked
< x-content-type-options: nosniff
< 
Hello World!
* Connection #0 to host 127.0.0.1 left intact

Let’s start the Go server now and see its response… here’s the Go code, by the way:

package main

import (
	"flag"
	"fmt"
	"net/http"
	"runtime"
	"time"
)

func main() {
	var singleCore bool
	var staticString bool
	flag.BoolVar(&singleCore, "single", false, "Run on a single core only")
	flag.BoolVar(&staticString, "static", false, "Use static string")
	flag.Parse()

	if singleCore {
		runtime.GOMAXPROCS(1)
	}

	fmt.Print("Go Server Listening on localhost:8080 ")
	if singleCore {
		fmt.Print("with single core ")
	} else {
		fmt.Print("with max cores ")
	}

	if staticString {
		fmt.Println("with static response")
		http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
			fmt.Fprintln(w, "Hello World!")
		})
	} else {
		fmt.Println("with dynamic response")
		http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
			fmt.Fprintln(w, "The time is: "+time.Now().Format(time.RFC3339))
		})
	}

	http.ListenAndServe(":8080", nil)
}

Response:

▶ curl -v 127.0.0.1:8080/
*   Trying 127.0.0.1:8080...
* Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0)
> GET / HTTP/1.1
> Host: 127.0.0.1:8080
> User-Agent: curl/7.79.1
> Accept: */*
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Date: Sat, 20 Aug 2022 09:32:57 GMT
< Content-Length: 13
< Content-Type: text/plain; charset=utf-8
< 
Hello World!
* Connection #0 to host 127.0.0.1 left intact

Notice how there are a few important differences.

First of all, Dart is serving the response using chunked encoding, while Go is using the simpler Content-Length header to delimit the body instead… For such a small, static response, you definitely don’t need chunked encoding, so it’s a little bit strange that Dart defaults to using it. We’ll see how to change that soon.

Another difference is the number of headers in each response. Dart includes security headers by default, while Go includes only the bare minimum required by the spec! This difference is understandable given the different target demographics for each language.

But it makes a big difference in the size of the responses.

Using the RawHTTP CLI (my own, tiny version of curl), we can check how many bytes are being transferred for each server.

Dart:

▶ rawhttp send -p stats -t "GET http://[::1]:8080/"
Connect time: 1.80 ms
First received byte time: 2.03 ms
Total response time: 17.01 ms
Bytes received: 205
Throughput (bytes/sec): 13686

Go:

▶ rawhttp send -p stats -t "GET http://[::1]:8080/"
Connect time: 1.82 ms
First received byte time: 1.82 ms
Total response time: 12.80 ms
Bytes received: 129
Throughput (bytes/sec): 11751

Go is sending 129 bytes back, while Dart is sending 205 bytes. That means Dart is doing “more work”, or at least writing more data… all because of those extra headers.

We also notice that the Dart response takes a little bit more time, 17ms VS 12.8ms for Go… hm, let’s ignore that for now as this is definitely not a benchmark yet!

Let’s see what happens when we use wrk2, an improved version of wrk, for a more in-depth check…

Dart:

▶ wrk2 -t10 -c20 -d30s -R100 "http://[::1]:8080/"
Running 30s test @ http://[::1]:8080/
  10 threads and 20 connections
  Thread calibration: mean lat.: 3.972ms, rate sampling interval: 13ms
  Thread calibration: mean lat.: 3.411ms, rate sampling interval: 12ms
  Thread calibration: mean lat.: 4.162ms, rate sampling interval: 13ms
  Thread calibration: mean lat.: 4.090ms, rate sampling interval: 14ms
  Thread calibration: mean lat.: 4.037ms, rate sampling interval: 14ms
  Thread calibration: mean lat.: 3.721ms, rate sampling interval: 13ms
  Thread calibration: mean lat.: 3.113ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 3.451ms, rate sampling interval: 11ms
  Thread calibration: mean lat.: 4.206ms, rate sampling interval: 14ms
  Thread calibration: mean lat.: 3.188ms, rate sampling interval: 10ms
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     2.25ms    1.02ms   7.39ms   75.66%
    Req/Sec    10.71     37.42   222.00     91.52%
  3010 requests in 30.00s, 605.53KB read
Requests/sec:    100.33
Transfer/sec:     20.18KB

Go:

▶ wrk2 -t10 -c20 -d30s -R100 "http://[::1]:8080/"
Running 30s test @ http://[::1]:8080/
  10 threads and 20 connections
  Thread calibration: mean lat.: 2.097ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 1.953ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 2.131ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 1.860ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 1.717ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 1.903ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 1.567ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 1.667ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 1.619ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 1.600ms, rate sampling interval: 10ms
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.62ms  817.00us   5.63ms   77.78%
    Req/Sec    11.71     42.78   222.00     92.09%
  3010 requests in 30.00s, 382.13KB read
Requests/sec:    100.33
Transfer/sec:     12.74KB

In summary, Dart has an average latency of 2.25ms while Go has 1.62ms. But notice that during the warmup runs, Dart started quite a bit slower, at around 4ms, but then gradually went down, while Go started at a strong 2ms already!

This seems to confirm that Go is faster than Dart, right?

Well, not so fast. wrk2 claims to have a resolution of at most 1ms, so the difference here may be mostly noise?! In the real benchmark, we’ll use a latency histogram and measure more scenarios than just hello world.

Another noticeable problem is that Dart transferred 605.53KB of data, while Go only transferred 382.13KB. Does it actually make a difference? And more importantly, can we “fix” that?

Improving the standard Dart HttpServer implementation

The first thing we need to do is remove unnecessary headers. That can be done immediately after starting the server as follows:

Removing the security headers may have serious implications for a HTTP server serving public websites and HTTP APIs. Please don’t do this if you don’t fully understand the consequences.

  // removing security headers Dart adds by default
  server.defaultResponseHeaders.clear();

We also need to manually add the content-length header to stop chunking the response, while avoiding sending an undelimited response!

const helloWorld = 'Hello World!\n';
const helloWorldLength = helloWorld.length;

request.response
  ..headers.add('content-length', helloWorldLength)
  ..write(helloWorld)
  ..close();

With these changes, the HTTP response becomes:

HTTP/1.1 200 OK
content-length: 13

Hello World!

Let’s do a quick check on how that affects latency:

▶ wrk2 -t10 -c20 -d30s -R100 "http://[::1]:8080/"
Running 30s test @ http://[::1]:8080/
  10 threads and 20 connections
  Thread calibration: mean lat.: 3.798ms, rate sampling interval: 12ms
  Thread calibration: mean lat.: 3.477ms, rate sampling interval: 12ms
  Thread calibration: mean lat.: 3.580ms, rate sampling interval: 11ms
  Thread calibration: mean lat.: 3.381ms, rate sampling interval: 11ms
  Thread calibration: mean lat.: 3.621ms, rate sampling interval: 12ms
  Thread calibration: mean lat.: 3.356ms, rate sampling interval: 12ms
  Thread calibration: mean lat.: 2.951ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 3.023ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 2.464ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 2.945ms, rate sampling interval: 10ms
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     2.23ms    1.10ms   7.85ms   78.54%
    Req/Sec    11.17     39.55   222.00     91.70%
  3010 requests in 30.00s, 152.85KB read
Requests/sec:    100.32

Unfortunately, that barely changes the latency, even with a massive improvement in the amount of data transferred (152.85KB, down from 605.53KB).

And as you may have noticed, the Dart response is missing the Date header. According to RFC-7231, a response must contain the Date header in most cases, and it’s good manners to also send the Content-Type header…

With a few more stylistic changes to make the code nicer, we end up with this:

void runStandardHttpServer(Config config) async {
  final server =
      await HttpServer.bind(config.address, config.port, shared: true);

  // removing security headers Dart adds by default
  server.defaultResponseHeaders.clear();

  // serve plain text by default
  server.defaultResponseHeaders.add('content-type', 'text/plain; charset=utf-8');

  if (config.staticResponse) {
    _runStandardHttpServerStatic(config, server);
  } else {
    _runStandardHttpServerDynamic(config, server);
  }
}

void _runStandardHttpServerStatic(
    Config config, Stream<HttpRequest> requests) async {
  await for (final request in requests) {
    request.response
      ..headers.date = DateTime.now()
      ..headers.add('content-length', helloWorldLength)
      ..write(helloWorld)
      ..close();
  }
}

void _runStandardHttpServerDynamic( ...

Let’s look at the full HTTP response again:

HTTP/1.1 200 OK
content-type: text/plain; charset=utf-8
date: Wed, 24 Aug 2022 17:25:43 GMT
content-length: 13

Hello World!

Much better! And it matches the Go HTTP Server’s response exactly.

Let’s quickly check again with wrk2 how the Dart server is doing.

▶ wrk2 -t10 -c20 -d20s -R100 "http://[::1]:8080/"
Running 20s test @ http://[::1]:8080/
  10 threads and 20 connections
  Thread calibration: mean lat.: 2.290ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 2.248ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 2.303ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 2.253ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 2.426ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 2.277ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 2.237ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 2.425ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 2.065ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 2.304ms, rate sampling interval: 10ms
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.79ms    1.33ms  16.48ms   89.71%
    Req/Sec    10.48     38.90   222.00     92.37%
  2010 requests in 20.01s, 253.21KB read
Requests/sec:    100.47
Transfer/sec:     12.66KB

This means that the Dart implementation’s average latency went from 2.25ms down to 1.79ms, while the transfer rate was cut from 20.18KB/s to 12.66KB/s! If I run this a few more times, the latency gets even lower as the Dart JIT does its magic.

However, it’s still slower than Go (average latency of 1.62ms in the earlier run), though not dramatically so.

In any case, I am not yet ready to throw in the towel on Dart’s behalf; after all, the real benchmark will be run over a real network, with a higher load, for a much longer time than this.

Also, Dart has different compilation modes we should also consider, as we’ll see in the next section.

To finalize the changes to the Dart implementation so that it’s actually comparable with the Go implementation, we just need to restore the necessary headers.

I decided to use this helper function on all Dart frameworks that use the standard HttpHeaders class to represent headers:

void setHeaders(HttpHeaders headers, int contentLength) {
  headers
    ..removeAll('x-frame-options')
    ..removeAll('x-xss-protection')
    ..removeAll('x-content-type-options')
    ..add('Content-Length', contentLength);
}

We’re still missing the Date header, though, so I had to add that on the frameworks that did not include it by default.

With that, the static response implementation becomes:

void _runStandardHttpServerStatic(Config config) async {
  final server =
      await HttpServer.bind(config.address, config.port, shared: true);
  await for (final request in server) {
    final response = request.response;
    setHeaders(response.headers, helloWorldLength);
    response
      ..headers.date = DateTime.now()
      ..write(helloWorld)
      ..close();
  }
}

Finally, the HTTP response now looks exactly the same as Go’s:

▶ rawhttp send -t "GET http://127.0.0.1:8080/"
HTTP/1.1 200 OK
content-type: text/plain; charset=utf-8
date: Tue, 23 Aug 2022 18:03:41 GMT
content-length: 13

Hello World!

Alfred

The Alfred framework is an Express-like library.

The default HTTP response looked like this:

HTTP/1.1 200 OK
content-type: text/plain; charset=utf-8
x-frame-options: SAMEORIGIN
x-xss-protection: 1; mode=block
transfer-encoding: chunked
x-content-type-options: nosniff

Hello World!

Exactly the same response as Dart’s HttpServer gives.

void runAlfredServer(Config config) async {
  final server = Alfred(logLevel: LogType.error);
  server.get('/', config.staticResponse ? _staticResponse : _dynamicResponse);
  await server.listen(config.port, config.address);
}

String _staticResponse(HttpRequest req, HttpResponse res) {
  res.headers.date = DateTime.now().toUtc();
  res.contentLength = helloWorldLength;
  setHeaders(res.headers, helloWorldLength);
  return helloWorld;
}

Response:

HTTP/1.1 200 OK
content-type: text/plain; charset=utf-8
date: Tue, 23 Aug 2022 18:12:09 GMT
content-length: 13

Hello World!

Conduit

Conduit is a more complex, full-stack framework that includes things like an ORM, authorization, serialization, etc.

Unlike all the other frameworks, it’s not possible to just run it on multiple Isolates because it handles multi-threading itself.

To start the server on multiple Isolates, you just need to call the start method like this:

await app.start(numberOfInstances: cores);
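The whole Conduit bootstrap, in contrast to runOnIsolates, might then look roughly like this (a sketch based on Conduit’s Application API; the options fields are assumed from its docs, and StaticBenchmarkChannel is the ApplicationChannel shown later in this section):

```dart
import 'dart:io';

import 'package:conduit/conduit.dart' as conduit;

import 'config.dart';

Future<void> runConduitServer(Config config) async {
  final app = conduit.Application<StaticBenchmarkChannel>()
    ..options.port = config.port;
  // Conduit spawns its own isolates internally, one per instance.
  await app.start(numberOfInstances: Platform.numberOfProcessors);
}
```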

The default HTTP response Conduit gives is a little bit curious:

HTTP/1.1 200 OK
content-type: application/json; charset=utf-8
x-frame-options: SAMEORIGIN
x-xss-protection: 1; mode=block
x-content-type-options: nosniff
content-length: 16
server: conduit/1

"Hello World!\n"

It’s similar to the others, but it actually uses the JSON Content-Type by default! Strangely for a fully-fledged framework, it did not include the Date header.

Fixing those was easy enough, and we end up with the following ApplicationChannel implementation:

class StaticBenchmarkChannel extends conduit.ApplicationChannel {
  @override
  conduit.Controller get entryPoint {
    final router = conduit.Router();

    router.route("/").linkFunction((request) async {
      final headers = request.response.headers;
      setHeaders(headers, helloWorldLength);
      headers.removeAll('server');
      headers.date = DateTime.now();
      return conduit.Response.ok(helloWorld,
          headers: const {'content-type': 'text/plain; charset=utf-8'});
    });

    return router;
  }
}

Response:

HTTP/1.1 200 OK
content-type: text/plain; charset=utf-8
date: Tue, 23 Aug 2022 18:29:09 GMT
content-length: 13

Hello World!

Jaguar

Another full-stack framework, Jaguar supports both an Express-like API as well as a class-based, annotation-driven route declaration syntax, similar to Java REST frameworks like Spring Boot.

By default, Jaguar’s Response is just like the other frameworks’:

HTTP/1.1 200 OK
content-type: text/plain; charset=utf-8
x-frame-options: SAMEORIGIN
x-xss-protection: 1; mode=block
x-content-type-options: nosniff
content-length: 13

Hello World!

No Date header. And, unfortunately, Jaguar chose to use its own type for headers.

Only after looking into the Jaguar source code was I able to figure out a way to remove the default headers and set a Date header, as Jaguar seems to have tried to make that as hidden as possible.

This decision is also strange because if you use context.headers, you get a Dart HttpHeaders object, not JaguarHttpHeaders. Really, there’s no need to create your own type for headers when Dart already has a perfectly good one.

In order to be able to start Jaguar in multiple Isolates, each instance of Jaguar has to be created with the argument multiThread: true.

Another problem with Jaguar: the address argument must be a String which, as we’ve seen, is pretty much wrong, as there’s no way to tell it to bind to the anyIPv6 address. It binds to 0.0.0.0, which is an IPv4 address:

▶ lsof -i:8080
COMMAND   PID   USER   FD   TYPE             DEVICE SIZE/OFF NODE NAME
dart    94428 renato    8u  IPv4 0xdd9b56fa0777a5b3      0t0  TCP *:http-alt (LISTEN)

That means Jaguar can’t support IPv6 and IPv4 at the same time unless Dart has some “special String” value for that!

Anyway, in the end, it looks like this:

void runJaguarServer(Config config) async {
  final server = Jaguar(multiThread: true, port: config.port);
  server.get('/', config.staticResponse ? _staticResponse : _dynamicResponse);
  await server.serve();
}

String _staticResponse(Context context) {
  final headers = context.req.ioRequest.response.headers;
  setHeaders(headers, helloWorldLength);
  headers.date = DateTime.now();
  return helloWorld;
}

Response:

HTTP/1.1 200 OK
content-type: text/plain; charset=utf-8
date: Tue, 23 Aug 2022 19:34:20 GMT
content-length: 13

Hello World!

Shelf

Shelf is the default framework for implementing Dart HTTP servers. It’s maintained by Google and the Dart team itself.

According to its docs, it was inspired by Connect for NodeJS and Rack for Ruby.

The default HTTP response with Shelf, surprisingly, is not exactly identical to Dart’s HttpServer’s; it is even more verbose:

HTTP/1.1 200 OK
x-powered-by: Dart with package:shelf
date: Tue, 23 Aug 2022 19:41:34 GMT
content-length: 13
x-frame-options: SAMEORIGIN
content-type: text/plain; charset=utf-8
x-xss-protection: 1; mode=block
x-content-type-options: nosniff

Hello World!

Wow, that probably explains why DartVSGo claimed it to be slow, as we’ve seen that response size correlates with response time.

I struggled a little bit to find a way to remove all the headers.


I changed as much as I could to make all frameworks provide the same responses as Go, but unfortunately, it was not always possible.

Responses after my changes:

Perfect!

There’s no way I could find to remove the default headers, so the response is unchanged.

There’s no way I could find to remove the default headers, but at least, setting the content-length header causes the response to stop using transfer-encoding: chunked.

As with Conduit, Jaguar does not expose the “native” Dart HttpHeaders object, which means that you can only add headers, not change the default headers, unless the framework authors provide another mechanism for changing the default headers that I just couldn’t find.

To all framework authors: please expose the perfectly well designed Dart HttpHeaders object instead of rolling your own. Not having full control over what HTTP responses should look like makes your framework a no-go for a lot of people.

HTTP/1.1 200 OK
content-type: text/plain; charset=utf-8
x-frame-options: SAMEORIGIN
x-xss-protection: 1; mode=block
x-content-type-options: nosniff
content-length: 13

Hello World!

Slightly better than the default.

Shelf does expose HttpHeaders, but for some reason, made it impossible to completely remove the Server header, so I had to set that to the empty String (and created a bug report).

HTTP/1.1 200 OK
content-type: text/plain; charset=utf-8
date: Sat, 20 Aug 2022 14:25:34 GMT
content-length: 13
server: 

Hello World!

Close enough.

Dart compilation modes

Dart can be executed directly from source code as a scripting language (as I’ve done so far in this post), but like Go, it can also compile to a binary executable.

What not many people seem to realize is that it can also compile to an intermediate, pre-compiled format called jit-snapshot, which is best described in the Dart docs:

JIT modules include all the parsed classes and compiled code that’s generated during a training run of a program.

Dart also compiles to cross-platform portable modules (kernel), which seem similar to Java bytecode, and down to JavaScript (which used to be its main compilation target).
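For reference, here’s how each of these modes is produced with the dart CLI (server.dart is a placeholder file name):

```shell
# Run straight from source (JIT), as done so far in this post
dart run server.dart

# Train and save a JIT snapshot (note: this actually runs the
# program once as a training run), then execute it
dart compile jit-snapshot server.dart   # produces server.jit
dart run server.jit

# Portable kernel module
dart compile kernel server.dart         # produces server.dill

# Self-contained AOT executable, like Go's output
dart compile exe server.dart            # produces server.exe
```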

When discussing Dart performance, we can’t just skip over these compilation modes! So I decided to include the following in the benchmarks:

Running the benchmarks

To run benchmarks, I prefer to use wrk2 rather than wrk because it has improved reporting and a better methodology for measuring latency.

Read the linked wrk2 documentation for details about that.

I disagree with the author of the original blog post that running load tests in the cloud is a good idea, both because I am cheap and don’t want to spend money if I don’t have to, and because you can’t control “noise” around the VM (especially on cheap DO droplets), both network and CPU.

Luckily, as a nerd, I have several computers available in my own home where I can run tests.

I decided to use the following setup:

All using my local network and IPv4 addresses.

Written on Sun, 31 Mar 2019 15:25:00 +0000