Update on my Node.js Memory and GC Benchmark

Posted on 29 September 2010

I was lucky enough to have a short chat with Ryan at JSConf.eu last weekend in Berlin about the memory allocation comparison between Node.js and RingoJS I had done. He didn’t have any suggestions for tuning Node.js or V8 for higher memory and garbage collector throughput, but thought it was possible that Node’s lackluster performance in the benchmark had to do with binary buffers and getting data in and out of them.

Thinking about a memory and garbage collection benchmark that didn’t involve buffers quickly led me to JSON. Parsing JSON is a fairly frequent task for a web application, and it can put a considerable load on the garbage collector. Also, from my work on the Rhino JSON parser I knew that it is comparable in speed to V8’s (something that unfortunately isn’t true of, say, RegExp implementations), so the results would reflect GC performance pretty well.

Here’s the source code of the JSON parsing benchmark I wrote. Basically it builds a ~25 kB JSON string at startup that is parsed once for each request. The JSON string consists of an object with 100 child objects, each of which contains 10 short string properties.
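For readers who don’t want to follow the link, here is a minimal sketch of what such a benchmark server might look like in Node.js. This is not the original source; the property names, port, and response body are assumptions made for illustration.

    // Sketch of a JSON parsing benchmark server (assumed structure, not the original).
    // Build a ~25 kB JSON string once at startup: an object with 100 child objects,
    // each holding 10 short string properties.
    var http = require('http');

    var data = {};
    for (var i = 0; i < 100; i++) {
        var child = {};
        for (var j = 0; j < 10; j++) {
            child['property' + j] = 'value ' + i + '/' + j;
        }
        data['child' + i] = child;
    }
    var json = JSON.stringify(data);

    // Parse the string once per request, so every request allocates
    // (and later garbage-collects) the full object graph.
    http.createServer(function (req, res) {
        var parsed = JSON.parse(json);
        res.writeHead(200, {'Content-Type': 'text/plain'});
        res.end('parsed ' + Object.keys(parsed).length + ' child objects\n');
    }).listen(8080);

The Ringo version follows the same pattern, with the per-request JSON.parse call doing the interesting work.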

The results I got confirmed those of the other benchmarks, with Node.js scoring 613 requests per second and RingoJS 1123 (this result was updated from the original post, see notes below). This is even more noteworthy as it builds on a Hello-World HTTP benchmark where Node clearly outperforms Ringo! Here’s the graph showing the distribution of response times across all 50,000 requests made:

benchmark result graph

My original conclusion was to blame V8’s garbage collector for not being tuned to deal with the workload generated by my benchmarks. But after some feedback from Node and V8 developers (see notes below), it looks like the problem is more complicated than that. What can be said is that Node performance degrades notably with very high levels of memory allocation, be it buffers, strings, or objects. It’ll be interesting to see whether the Node and/or V8 teams will be able to detect and solve these problems in the future.

Notes

  1. Update: I reran the benchmark after Isaac Schlueter pointed out problems with serving strings in Node and provided a patch. The original results I published for Node and Ringo were 495 and 1116 reqs/sec, respectively.

  2. Update: V8 developer Vyacheslav Egorov provided some insight in his Hacker News comments. It looks like it’s not the V8 GC that is to blame for the performance degradation after all, and more research is needed to find the bottlenecks. I apologize for putting the blame on the GC and updated my conclusion accordingly.
