Monday, March 21, 2016

Designing a Complete Automation Suite

In my last two positions, I have been the sole Software Engineer in Test for a brand-new software platform.  This means that I have been able to set up an entire arsenal of automated tests for both the backend and the front-end.  This article will describe the suite of tools I used to test a web application.

API Tests:
First, I set up tests that verified that every API call worked correctly.  I did this with Runscope, an API testing tool I highly recommend.  I was fortunate enough to have engineers who set up Swagger, a framework that documents API calls in an easy-to-read format, and I used the information from Swagger to set up my Runscope tests.  For our development environment, I created tests for every GET, POST, PUT, and DELETE request, and I set this suite to run once a day as a general health check.  For our production environment, I set up tests for every GET request that ran hourly, so we would be notified quickly if any of our GET requests stopped working.  Runscope is also good for monitoring the response times of API calls.
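Runscope tests are configured in its web UI rather than in code, but each check boils down to the same couple of assertions.  Here is a rough sketch of what one hourly GET health check verifies, written in TypeScript for illustration; the base URL, endpoints, and response-time budget are all made up.

```typescript
// Sketch of what each GET health check asserts: a 200 status and a sane
// response time. Runscope tests live in its web UI; this Node/TypeScript
// version only illustrates the checks. All URLs and budgets are invented.
const BASE_URL = "https://api.example.com";

async function checkGetEndpoint(path: string): Promise<void> {
  const started = Date.now();
  const response = await fetch(`${BASE_URL}${path}`);
  const elapsedMs = Date.now() - started;

  if (response.status !== 200) {
    throw new Error(`GET ${path} returned ${response.status}`);
  }
  if (elapsedMs > 2000) {
    throw new Error(`GET ${path} took ${elapsedMs} ms (budget: 2000 ms)`);
  }
}

// Run the checks for a few hypothetical endpoints.
(async () => {
  for (const path of ["/contacts", "/accounts", "/users"]) {
    await checkGetEndpoint(path);
  }
  console.log("All GET health checks passed");
})();
```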

Backend Tests:
The backend of our platform was written in Java, and we developed a suite of integration tests using JUnit and AssertJ.  Generally the developers would create a few basic tests outlining the functionality of whatever new feature they had coded, and I would create the rest of the tests, using the strategy that I outlined in my previous post.
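The real suite was written in Java with JUnit and AssertJ, but since the sketches in this post are in TypeScript, here is the same division of labor expressed as a Jasmine-style test file.  The createAccount function and its validation rules are invented for illustration; the point is the pattern of a developer's happy-path test surrounded by the edge cases I would add.

```typescript
// The actual tests were JUnit/AssertJ in Java; this Jasmine-style TypeScript
// sketch just illustrates the division of labor. The createAccount module
// and its rules are hypothetical.
import { createAccount } from "./accounts"; // hypothetical module under test

describe("createAccount", () => {
  // The kind of basic test a developer would write along with the feature:
  it("creates an account for valid input", () => {
    const account = createAccount({ name: "Ana", email: "ana@example.com" });
    expect(account.id).toBeDefined();
  });

  // The kinds of tests I would add around it: edge cases and failure paths.
  it("rejects an empty name", () => {
    expect(() => createAccount({ name: "", email: "ana@example.com" })).toThrow();
  });

  it("rejects a malformed email address", () => {
    expect(() => createAccount({ name: "Ana", email: "not-an-email" })).toThrow();
  });
});
```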

Front-End Tests:
The front-end of our platform was written in Angular, so I used Protractor to write a series of end-to-end tests.  The UI developers were responsible for writing unit tests, so I focused on testing typical user interactions.  For example, in my Contact-Info tests, I tested adding and editing a phone number, adding and editing an email address, and adding and editing a mailing address.  Had it been possible to delete contact info in our UI, I would have added those tests to the suite as well.  I set up two suites of tests: a smoke suite and a full suite.  The smoke suite covered basic feature testing and was integrated into the build process, running whenever a developer pushed new code; if a change broke a feature, the developer would be notified immediately and the code would not be added to the build.  The full suite ran every test and was configured in Jenkins to run overnight against the development environment using Sauce Labs.  If anything caused the tests to fail, I'd be notified by email immediately.
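To make this concrete, here is a minimal sketch of what one of the Contact-Info tests looked like in Protractor.  The URL, selectors, and element ids are invented for illustration.

```typescript
// A sketch of a Contact-Info end-to-end test in Protractor. The page URL,
// selectors, and ids are invented; only the shape of the test is real.
import { browser, element, by } from "protractor";

describe("Contact Info: phone numbers", () => {
  beforeEach(async () => {
    await browser.get("https://dev.example.com/contact-info");
  });

  it("adds a phone number", async () => {
    await element(by.id("add-phone")).click();
    await element(by.css("input.phone-number")).sendKeys("555-0123");
    await element(by.id("save")).click();

    expect(await element(by.css(".phone-list")).getText()).toContain("555-0123");
  });

  it("edits an existing phone number", async () => {
    await element(by.id("edit-phone")).click();
    const input = element(by.css("input.phone-number"));
    await input.clear();
    await input.sendKeys("555-0199");
    await element(by.id("save")).click();

    expect(await element(by.css(".phone-list")).getText()).toContain("555-0199");
  });
});
```

The smoke suite would run a handful of tests like these on every push, while the full suite ran everything overnight.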

Since the UI also included uploaded images, I integrated Applitools visual testing into my Protractor tests.  Their platform is extremely easy to integrate and use.  I wrote tests to check that an image I added to a page actually appeared on the screen, and that when I changed an image, the new image appeared in its place.
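For readers who haven't used it, here is roughly what an Applitools check looked like inside a Protractor test.  The Applitools SDK surface has changed over the years; this follows the shape of the classic eyes.protractor package, and the page URL and test names are invented.

```typescript
// Sketch of an Applitools visual check inside a Protractor test, following
// the classic eyes.protractor API. The URL and names are invented.
import { browser } from "protractor";
const { Eyes } = require("eyes.protractor");

describe("Profile image", () => {
  const eyes = new Eyes();
  eyes.setApiKey("YOUR_API_KEY"); // replace with a real Applitools key

  it("shows the newly uploaded image", async () => {
    await eyes.open(browser, "My App", "Profile image upload", { width: 1024, height: 768 });
    await browser.get("https://dev.example.com/profile");
    // ...steps that upload a new image would go here...
    await eyes.checkWindow("Profile page after upload"); // compare to the stored baseline
    await eyes.close();
  });

  afterEach(async () => {
    await eyes.abortIfNotClosed(); // clean up if the test failed before close()
  });
});
```

The first run records a baseline screenshot; later runs fail if the page's appearance drifts from that baseline.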

Load Tests:
In my previous position, I set up a series of load tests using SoapUI.  This tool is similar to Runscope in that it tests API calls, but it has the added functionality of being able to configure load testing.  I set up the following test scenarios:

- a Baseline test, where a small group of simulated users would run through a series of API requests
- a Load test, where a small group of users would make a more frequent series of requests
- a Stress test, where a large group of users would make a frequent series of requests
- a Soak test, where a small group of users would make a frequent series of requests over a long test time

We were able to use this suite of tests to monitor response times, check for memory leaks, and exercise our load-balancing setup.
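SoapUI drives these scenarios from its UI, but the four shapes differ only in a few parameters.  Here is a TypeScript sketch that makes the parameters explicit; the user counts, request rates, durations, and target URL are all invented.

```typescript
// The four load-test shapes as parameters, plus a minimal driver where each
// simulated user loops on a GET at a fixed rate. SoapUI configures all of
// this in its UI; every number here is invented for illustration.
interface LoadScenario {
  name: string;
  users: number;                    // simulated concurrent users
  requestsPerUserPerMinute: number;
  durationMinutes: number;
}

const scenarios: LoadScenario[] = [
  { name: "Baseline", users: 5,   requestsPerUserPerMinute: 6,  durationMinutes: 10 },
  { name: "Load",     users: 5,   requestsPerUserPerMinute: 60, durationMinutes: 10 },
  { name: "Stress",   users: 100, requestsPerUserPerMinute: 60, durationMinutes: 10 },
  { name: "Soak",     users: 5,   requestsPerUserPerMinute: 60, durationMinutes: 480 },
];

async function runScenario(s: LoadScenario, url: string): Promise<void> {
  const endTime = Date.now() + s.durationMinutes * 60_000;
  const delayMs = 60_000 / s.requestsPerUserPerMinute;

  const user = async (): Promise<void> => {
    while (Date.now() < endTime) {
      const started = Date.now();
      await fetch(url);
      console.log(`${s.name}: ${Date.now() - started} ms`); // response time to monitor
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  };

  // Launch all simulated users concurrently and wait for the run to finish.
  await Promise.all(Array.from({ length: s.users }, () => user()));
}
```

A real load tool also aggregates percentiles and watches server-side metrics, which is where checking for memory leaks and exercising the load balancer comes in.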

Every application is different, so you may find that you need a different set of tools for your test toolbox!  But I hope that this blog post gives you a jumping-off point to think about how to test your application thoroughly.