Mark Stosberg
2013-02-27 14:00:40 UTC
Our team was recently benchmarking using Mechanize vs
Selenium::Remote::Driver for a basic request pattern:
- get a page
- check the title
- check the body content
I expected Selenium to be slower, but was surprised it was about 8 times
slower!
As I looked more closely at the inner workings, what was going on became
clearer.
Mech makes one request, and uses the cached content to check the title
and content.
Meanwhile, it appears the Selenium protocol dictates that 3 requests are
used:
1. Instruct the driven browser to load the page.
(1a. The browser actually makes the request)
2. Ask the browser what the title of the page loaded is.
3. Ask the browser for the page source (or ask it to find an element in
the DOM on your behalf).
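The per-query cost of those round-trips can be sketched with a toy model (a hypothetical CountingDriver class in Python, not the real Selenium wire protocol or Selenium::Remote::Driver) where every question you ask the browser is one more HTTP command:

```python
# Toy model of the wire-protocol cost described above. CountingDriver is
# hypothetical: like Selenium, every query travels over HTTP to the browser,
# so every query adds a round-trip.

class CountingDriver:
    def __init__(self, page_source, title):
        self._source = page_source
        self._title = title
        self.http_requests = 0  # one per wire-protocol command

    def get(self, url):
        # 1. Instruct the driven browser to load the page.
        self.http_requests += 1

    @property
    def title(self):
        # 2. Ask the browser for the loaded page's title.
        self.http_requests += 1
        return self._title

    @property
    def page_source(self):
        # 3. Ask the browser for the page source.
        self.http_requests += 1
        return self._source

driver = CountingDriver("<html><body>hello</body></html>", "Hello")
driver.get("http://example.com/")
assert driver.title == "Hello"
assert "hello" in driver.page_source
print(driver.http_requests)  # 3 round-trips for one logical page check
```

Mech's equivalent of this check would leave the counter at 1, which is roughly where the speed gap comes from.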
It seems like some considerable speed gains might be made with some kind
of hybrid approach -- something that sometimes caches the page source
locally and queries it multiple times there, rather than making one HTTP
request to the browser for each query.
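That hybrid could be sketched like this (a hypothetical CachedPage helper, shown in Python with the stdlib html.parser rather than either Perl module): fetch the source once, then answer title and body checks from the local copy with zero extra round-trips:

```python
# Sketch of the hybrid idea: one remote fetch, then local queries.
# CachedPage and _TitleText are hypothetical names; parsing uses
# Python's stdlib html.parser only for illustration.

from html.parser import HTMLParser

class _TitleText(HTMLParser):
    """Collects the <title> text and the rest of the document text."""
    def __init__(self):
        super().__init__()
        self.title = ""
        self.text = []
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data
        else:
            self.text.append(data)

class CachedPage:
    """Holds one fetched copy of the page; every check after that is local."""
    def __init__(self, source):
        self._parser = _TitleText()
        self._parser.feed(source)

    def title(self):
        return self._parser.title

    def body_contains(self, needle):
        return any(needle in chunk for chunk in self._parser.text)

# In real use, `source` would come from a single page-source request
# to the driven browser; everything below is local.
source = "<html><head><title>Checkout</title></head><body>Order complete</body></html>"
page = CachedPage(source)
assert page.title() == "Checkout"
assert page.body_contains("Order complete")
```

The trade-off, of course, is staleness: the cache has to be invalidated whenever the page can change, which is exactly what makes this harder for JavaScript-heavy pages.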
For now, we are sticking with an approach of "Mechanize when we can,
Selenium when we have to".
How are other people improving the performance of functional browser
testing?
I realize there are some non-Selenium solutions out there, but I like
the ability to take advantage of services like Sauce Labs, which
provide parallel Selenium testing in the cloud.
Mark