I am doing a quick stress test on two (kinda) hello world projects written in node.js and asp.net-core. Both of them are running in production mode and without a logger attached to them. The result is astonishing! ASP.NET Core is outperforming the node.js app even while doing some extra work, whereas the node.js app is just rendering a view.
http://localhost:3000/nodejs
node.js
Using: node.js, express, and the vash rendering engine.
The code in this endpoint is:
router.get('/', function (req, res, next) {
  var vm = {
    title: 'Express',
    time: new Date()
  };
  res.render('index', vm);
});
As you can see, it does nothing apart from sending the current date to the view via the time variable.
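For context, the express/vash wiring presumably looks something like this (the setup code isn't shown in the question, so treat it as an assumption):

// Assumed app setup - not shown in the original question.
var express = require('express');
var app = express();

app.set('views', './views');
app.set('view engine', 'vash'); // res.render('index', vm) renders ./views/index.vash

// The view then prints the passed-in values, e.g. @model.title
// and @model.time in vash's Razor-style syntax.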
http://localhost:5000/aspnet-core
asp.net core
Using: ASP.NET Core, default template targeting dnxcore50
However, this app does something other than just rendering a page with a date on it: it generates 5 paragraphs of random text. This should theoretically make it a little heavier than the node.js app.
Here is the action method that renders this page:
[ResponseCache(Location = ResponseCacheLocation.None, NoStore = true)]
[Route("aspnet-core")]
public IActionResult Index()
{
    var sb = new StringBuilder(1024);
    GenerateParagraphs(5, sb);

    ViewData["Message"] = sb.ToString();
    return View();
}
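GenerateParagraphs isn't shown above, but to give a sense of the extra work involved, a hypothetical node.js counterpart (not part of either app) could look like this:

// Hypothetical equivalent of GenerateParagraphs(5, sb), only to
// illustrate the extra per-request work the asp.net app is doing:
function generateParagraphs(count) {
  var paragraphs = [];
  for (var i = 0; i < count; i++) {
    var words = [];
    for (var j = 0; j < 50; j++) {
      // a pseudo-random "word" of 3-10 characters
      var len = 3 + Math.floor(Math.random() * 8);
      words.push(Math.random().toString(36).slice(2, 2 + len));
    }
    paragraphs.push('<p>' + words.join(' ') + '</p>');
  }
  return paragraphs.join('\n');
}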
Update: Following a suggestion by Gorgi Kosev
Using npm install -g recluster-cli && NODE_ENV=production recluster-cli app.js 8
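recluster-cli forks the app across multiple worker processes so requests are spread over all CPU cores. Conceptually it is close to this sketch using node's built-in cluster module (an approximation, not recluster-cli's actual source):

// Rough equivalent of `recluster-cli app.js 8`: a master process
// forks 8 workers, each running its own copy of the app and
// sharing the listening port.
const cluster = require('cluster');

if (cluster.isMaster) {
  for (let i = 0; i < 8; i++) {
    cluster.fork();
  }
} else {
  require('./app');
}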
Can't believe my eyes! It can't be true that in this basic test asp.net core is way faster than node.js. Of course this is not the only metric for comparing performance between these two web technologies, but I am wondering what I am doing wrong on the node.js side.
Being a professional asp.net developer who wants to adopt node.js for personal projects, this is kind of putting me off, as I'm a little paranoid about performance. I thought node.js was faster than asp.net core (in general, as seen in various other benchmarks); I just want to prove it to myself (to encourage myself in adopting node.js).
Please reply in comment if you want me to include more code snippets.
Update: Time distribution of .NET Core app
Server response
HTTP/1.1 200 OK
Cache-Control: no-store,no-cache
Date: Fri, 12 May 2017 07:46:56 GMT
Pragma: no-cache
Transfer-Encoding: chunked
Content-Type: text/html; charset=utf-8
Server: Kestrel
As many others have alluded, the comparison lacks context.
At the time of its release, the async approach of node.js was revolutionary. Since then, other languages and web frameworks have been adopting the approach it took mainstream.
To understand what the difference meant, you need to simulate a blocking request that represents some IO workload, such as a database request. In a thread-per-request system, this will exhaust the threadpool, and new requests will be put into a queue to wait for an available thread.
With non-blocking-io frameworks this does not happen.
Consider this node.js server that waits 1 second before responding:
const http = require('http');

const server = http.createServer((req, res) => {
  setTimeout(() => {
    res.statusCode = 200;
    res.end();
  }, 1000);
});

server.listen(8000);
Now let's throw 100 concurrent connections at it for 10s. With each request taking 1 second, we expect about 100 × 10 = 1000 requests to complete.
$ wrk -t100 -c100 -d10s http://localhost:8000
Running 10s test @ http://localhost:8000
  100 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.01s    10.14ms   1.16s    99.57%
    Req/Sec     0.13      0.34     1.00     86.77%
  922 requests in 10.09s, 89.14KB read
Requests/sec:     91.34
Transfer/sec:      8.83KB
As you can see, we're in the ballpark with 922 completed.
Now consider the following asp.net code, written as though async/await were not supported yet, dating us back to the node.js launch era:
app.Run((context) =>
{
    Thread.Sleep(1000);
    context.Response.StatusCode = 200;
    return Task.CompletedTask;
});
$ wrk -t100 -c100 -d10s http://localhost:5000
Running 10s test @ http://localhost:5000
  100 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.08s    74.62ms   1.15s   100.00%
    Req/Sec     0.00      0.00     0.00    100.00%
  62 requests in 10.07s, 5.57KB read
  Socket errors: connect 0, read 0, write 0, timeout 54
Requests/sec:      6.16
Transfer/sec:     566.51B
62! Here we see the limit of the threadpool. By tuning it up we could get more concurrent requests happening, but at the cost of more server resources.
For these IO-bound workloads, the move to avoid blocking the processing threads was that dramatic.
Now let's bring things up to date, where that influence has rippled through the industry, and let dotnet take advantage of its improvements:
app.Run(async (context) =>
{
    await Task.Delay(1000);
    context.Response.StatusCode = 200;
});
$ wrk -t100 -c100 -d10s http://localhost:5000
Running 10s test @ http://localhost:5000
  100 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.01s    19.84ms   1.16s    98.26%
    Req/Sec     0.12      0.32     1.00     88.06%
  921 requests in 10.09s, 82.75KB read
Requests/sec:     91.28
Transfer/sec:      8.20KB
No surprises here: we now match node.js.
So what does all this mean?
Your impression that node.js is the "fastest" comes from an era we are no longer living in. Add to that, it was never node/js/v8 that was "fast"; it was that node broke the thread-per-request model. Everyone else has been catching up.
If your goal is the fastest possible processing of single requests, then look at the serious benchmarks instead of rolling your own. But if instead what you want is simply something that scales to modern standards, then go for whichever language you like and make sure you are not blocking those threads.
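In node.js terms, that last point simply means preferring the async APIs, for example:

const fs = require('fs');

// Blocking: ties up the single event-loop thread, so every other
// concurrent request waits behind this call.
const config = fs.readFileSync('./config.json');

// Non-blocking: the event loop keeps serving other requests while
// the read is in flight.
fs.readFile('./config.json', (err, data) => {
  if (err) throw err;
  // use data here
});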
Disclaimer: All code written, and tests run, on an ageing MacBook Air during a sleepy Sunday morning. Feel free to grab the code and try it on Windows or tweak to your needs - https://github.com/csainty/nodejs-vs-aspnetcore