Friday, February 28, 2014

Using automated tests to test features only, or testing overall User Experience, Features, and Performance?

Since the pre-mobile-application days of automating web app tests with Selenium, I have watched so many automated testers worry about setting a timeout on the server response of the web app under test, instructing Selenium to wait X seconds for the browser to finish loading the page content so that features can be tested without disruption from any delay in the backend's response.

My feeling always has been: why?

Selenium lets the automated test code wait, for any amount of time, until the entire content of the page has loaded, whether it arrives via Ajax or not.
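In the Selenium Python bindings this is the explicit-wait pattern (WebDriverWait polling a condition from expected_conditions). The mechanics can be sketched without a real browser; in this minimal sketch a simulated `page_ready` check that turns true after 0.3 seconds stands in for the condition you would pass to WebDriverWait:

```python
import time

def wait_until(condition, timeout, poll_interval=0.05):
    """Poll `condition` until it returns True or `timeout` seconds elapse.

    This mirrors what Selenium's WebDriverWait does: the test does not
    sleep for a fixed amount of time, it waits only as long as the page
    actually takes, up to the timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(poll_interval)
    raise TimeoutError(f"condition not met within {timeout} seconds")

# Simulated page: content becomes available 0.3 s after the request starts.
start = time.monotonic()
page_ready = lambda: time.monotonic() - start >= 0.3

wait_until(page_ready, timeout=10)  # generous timeout: feature testing only
print("page loaded")
```

With a generous timeout like this, the test passes whether the page loads in 0.3 seconds or 9 seconds, which is exactly the feature-only behavior discussed below.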

But isn't testing a web/mobile app also about testing features + user experience + performance (which includes backend response times and component/page load times)?

If the automated tester wants to test features only, set a very generous timeout on the response, so that any potential performance issue has time to resolve and the remote client (web/mobile) finishes loading before the test validates and interacts with the page/screen.

Again, I ask: why?

Instead of allowing a long timeout before failing a test, why not set the timeout before page/screen validation to what you consider an acceptable server response and page/screen load time for the user experience?
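Concretely, the only change is the timeout value: instead of a generous 60 seconds, you pass your own UX budget. This is a minimal sketch with a simulated page load standing in for the browser; the 3-second `UX_BUDGET` is an example acceptance criterion you would choose yourself, and in real Selenium code the same effect comes from passing that budget as the WebDriverWait timeout:

```python
import time

UX_BUDGET = 3.0  # seconds: an example acceptance criterion, chosen by the team

def wait_for_page(is_loaded, timeout=UX_BUDGET, poll_interval=0.05):
    """Wait for the page, but only as long as the UX budget allows.

    If the page takes longer than the budget, the test fails with a
    performance/UX bug, not just a feature bug.
    """
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        if is_loaded():
            return time.monotonic() - start  # actual load time, for reporting
        time.sleep(poll_interval)
    raise AssertionError(f"page load exceeded UX budget of {timeout:.1f}s")

# Simulated fast backend: page is ready after 0.2 s, well within budget.
start = time.monotonic()
load_time = wait_for_page(lambda: time.monotonic() - start >= 0.2)
print(f"loaded in {load_time:.2f}s")
```

The same test now fails in two distinct ways: the element never appears (feature bug), or it appears too slowly (performance/UX bug), and the returned load time can be logged for trend tracking.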

This gives the QA team insight not only into whether the product's features work, but also into whether they work within an acceptable user experience and backend server response time. Setting your own UX standards and acceptance criteria in the automated test code provides valuable insight into more aspects than features alone. I can tell you that I have found a great many performance and UX related bugs with this testing strategy.
