We are using protractor for testing internal AngularJS applications. Besides functional tests, we also check for performance regressions with the help of protractor-perf, which is based on the nodejs browser-perf library. Because "Performance is a feature".

With protractor-perf we can measure and assert various performance characteristics while performing browser actions, for example:
browser.get('http://www.angularjs.org');

perf.start();  // Start measuring the metrics
element(by.model('todoText')).sendKeys('write a protractor test');
element(by.css('[value="add"]')).click();
perf.stop();  // Stop measuring the metrics

if (perf.isEnabled) {  // Is perf measuring enabled?
    // Check for perf regressions, just like you check for functional regressions
    expect(perf.getStats('meanFrameTime')).toBeLessThan(60);
}
Now, for another internal application, we have a set of selenium-based tests written in Python. Is it possible to check for performance regressions with selenium-python, or should I rewrite the tests using protractor in order to write browser performance tests?
It is possible to get close to what browser-perf does by collecting the Chrome performance logs and analyzing them. To get performance logs, turn them on via the loggingPrefs desired capability (note that recent ChromeDriver versions expect it under the goog:loggingPrefs name):
import json

from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

caps = DesiredCapabilities.CHROME
caps['loggingPrefs'] = {'performance': 'ALL'}

driver = webdriver.Chrome(desired_capabilities=caps)
driver.get('https://stackoverflow.com')

logs = [json.loads(log['message'])['message'] for log in driver.get_log('performance')]

with open('devtools.json', 'w') as f:
    json.dump(logs, f)

driver.close()
At this point, the devtools.json file contains a bunch of trace records:
[
{
"params": {
"timestamp": 1419571233.19293,
"frameId": "16639.1",
"requestId": "16639.1",
"loaderId": "16639.2",
"type": "Document",
"response": {
"mimeType": "text/plain",
"status": 200,
"fromServiceWorker": false,
"encodedDataLength": -1,
"headers": {
"Access-Control-Allow-Origin": "*",
"Content-Type": "text/plain;charset=US-ASCII"
},
"url": "data:,",
"statusText": "OK",
"connectionId": 0,
"connectionReused": false,
"fromDiskCache": false
}
},
"method": "Network.responseReceived"
},
{
"params": {
"timestamp": 1419571233.19294,
"encodedDataLength": 0,
"requestId": "16639.1"
},
"method": "Network.loadingFinished"
},
...
]
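If you want to slice these records yourself instead of handing them to an external service, a minimal sketch might aggregate the Network events into simple counters. The helper name summarize_network and the sample data below are made up for illustration; the event method names (Network.responseReceived, Network.loadingFinished) and the params fields are the ones that actually appear in the log above:

```python
import json


def summarize_network(events):
    """Aggregate basic network metrics from a list of DevTools log events."""
    responses = [e for e in events if e.get('method') == 'Network.responseReceived']
    finished = [e for e in events if e.get('method') == 'Network.loadingFinished']
    # encodedDataLength on loadingFinished is the total bytes received for the request
    total_encoded = sum(e['params'].get('encodedDataLength', 0) for e in finished)
    return {
        'requests': len(responses),
        'finished': len(finished),
        'encoded_bytes': total_encoded,
    }


# Two records shaped like the trace above (made-up values)
sample = [
    {'method': 'Network.responseReceived',
     'params': {'requestId': '16639.1', 'response': {'status': 200}}},
    {'method': 'Network.loadingFinished',
     'params': {'requestId': '16639.1', 'encodedDataLength': 1024}},
]

print(summarize_network(sample))  # {'requests': 1, 'finished': 1, 'encoded_bytes': 1024}
```

In a real test you would load the events with json.load(open('devtools.json')) and assert on the returned numbers, just like asserting on perf.getStats() in protractor-perf.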
Now, the question is what to do with these logs.

One option, initially suggested during the Google Test Automation Conference, is to submit the logs to webpagetest.org. There is an example in Java available here, but, so far, I've had no luck implementing it in Python. In theory, webpagetest.org would generate a UI report from the submitted logs; they also provide the metrics in JSON/XML and other formats that can be analyzed further. Thanks to Vivek Singh for pointing this out in the comments.
browser-perf also uses this logging functionality to pick up the tracing logs and analyze the data.
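browser-perf derives richer metrics from the tracing categories, but the same idea can be illustrated with the events already in the performance log. As a minimal sketch: page_load_time is a hypothetical helper, and the sample timestamps are made up, but Network.requestWillBeSent and Page.loadEventFired are real DevTools Protocol events that show up when performance logging is enabled:

```python
def page_load_time(events):
    """Approximate load time: first request sent -> Page.loadEventFired.

    Returns the difference in seconds, or None if either event is missing.
    """
    start = min((e['params']['timestamp'] for e in events
                 if e.get('method') == 'Network.requestWillBeSent'), default=None)
    load = next((e['params']['timestamp'] for e in events
                 if e.get('method') == 'Page.loadEventFired'), None)
    if start is None or load is None:
        return None
    return load - start


# Made-up events: two requests, then the load event 1.5 seconds after the first
sample = [
    {'method': 'Network.requestWillBeSent', 'params': {'timestamp': 100.0}},
    {'method': 'Network.requestWillBeSent', 'params': {'timestamp': 100.25}},
    {'method': 'Page.loadEventFired', 'params': {'timestamp': 101.5}},
]

print(page_load_time(sample))  # 1.5
```

A number like this can then be asserted against a budget in a plain unittest/pytest test, which is essentially what protractor-perf's expect(perf.getStats(...)) does on the JavaScript side.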