First automation pass complete

The first pass of automation for the site is now complete. At this juncture I have built framework tests for the major pieces of functionality. They are still somewhat hardcoded, and not every part of the site has automation, but the bulk of the work is done.

The objects have been created, and there is working code to verify functionality. For example, data can be entered into forms and that information can be verified in the table of results. This has been repeated across multiple pages.
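The post doesn't name a framework, so here is a rough, framework-agnostic sketch of the pattern described above: a page object that enters form data and checks that it appears in the results table. The `StubDriver`, `SearchPage`, and its field names are all invented for illustration; in a real suite the driver would be a browser automation handle.

```python
# Illustrative page-object sketch. StubDriver stands in for a real
# browser driver; SearchPage and its methods are hypothetical names.

class StubDriver:
    """Minimal stand-in: a form dict and a results table in memory."""
    def __init__(self):
        self.form = {}
        self.table_rows = []

    def fill(self, field, value):
        self.form[field] = value

    def submit(self):
        # Pretend the site echoes submitted values back as a table row.
        self.table_rows.append(dict(self.form))


class SearchPage:
    """Page object: wraps the driver so tests read as user actions."""
    def __init__(self, driver):
        self.driver = driver

    def enter_criteria(self, **fields):
        for name, value in fields.items():
            self.driver.fill(name, value)
        self.driver.submit()

    def result_contains(self, **expected):
        # Verify the entered data shows up in the table of results.
        return any(all(row.get(k) == v for k, v in expected.items())
                   for row in self.driver.table_rows)


page = SearchPage(StubDriver())
page.enter_criteria(name="Smith", city="Austin")
assert page.result_contains(name="Smith")
```

The value of the pattern is that the same `SearchPage` class can be reused across the multiple pages mentioned above, with only the field names changing.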

There are currently 21 tests, which is a rate of 1 test case completed per day since I started. They are quite straightforward, but perform the same actions I would take if verifying the pages manually. The goal was to get the automation up and running and build out the objects I am most interested in.

And for that reason, several pages have been ignored at this point. They either contain information that is too unpredictable, aren't used heavily in Production, or would take more work to automate than to test manually. If something takes me 5-10 minutes to test by hand, and I only check it once a month, it's not worth automating yet. There are better time investments to make.

The next phase of the operation is to add error checking and make the tests more dynamic. At this point I have added code to count the rows in a table, as well as to check that the table exists at all. Both will be used so the tests can adjust to the results they find.
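The two helpers just mentioned can be sketched with Python's standard-library HTML parser so the example runs standalone; in the actual suite these would be driver lookups against the live page, and the `results` table id and markup below are made up for illustration.

```python
# Sketch of "does the table exist" and "how many rows does it have",
# using stdlib html.parser. Table id and sample HTML are hypothetical.
from html.parser import HTMLParser

class TableCounter(HTMLParser):
    def __init__(self, table_id):
        super().__init__()
        self.table_id = table_id
        self.in_table = False
        self.table_found = False
        self.row_count = 0

    def handle_starttag(self, tag, attrs):
        if tag == "table" and dict(attrs).get("id") == self.table_id:
            self.in_table = True
            self.table_found = True
        elif tag == "tr" and self.in_table:
            self.row_count += 1

    def handle_endtag(self, tag):
        if tag == "table":
            self.in_table = False

def table_exists(html, table_id):
    p = TableCounter(table_id)
    p.feed(html)
    return p.table_found

def count_rows(html, table_id):
    p = TableCounter(table_id)
    p.feed(html)
    return p.row_count

html = "<table id='results'><tr><td>a</td></tr><tr><td>b</td></tr></table>"
assert table_exists(html, "results")
assert count_rows(html, "results") == 2
```

With these two checks a test can branch: skip the row assertions when the table is absent, or loop over however many rows the search actually returned.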

Additionally, execution profiles will be created so values can be changed from a central location.
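One minimal way to sketch execution profiles is a dictionary of named environments, selected at run time, so a URL or timeout changes in exactly one place. The profile names, keys, and URLs below are assumptions for the example, not the site's actual configuration.

```python
# Hypothetical execution profiles: all environment-specific values live
# in one central structure, selected by name at run time.
PROFILES = {
    "qa":         {"base_url": "https://qa.example.com",  "timeout": 30},
    "production": {"base_url": "https://www.example.com", "timeout": 10},
}

def load_profile(name):
    """Return the settings for one named profile, failing loudly otherwise."""
    try:
        return PROFILES[name]
    except KeyError:
        raise ValueError(f"Unknown profile: {name!r}")

cfg = load_profile("qa")
assert cfg["base_url"].startswith("https://qa.")
```

The same idea scales up to a JSON or INI file per environment; the point is that no test hardcodes its own URL.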

From there, additional code will be added to improve site navigation, and timing mechanisms will be added so the tests can be strung together. There will also be components to reset search criteria when needed.
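The timing mechanism described above usually boils down to "poll a condition until it is true or a deadline passes." This standalone version shows the idea; established frameworks ship their own equivalent (for example Selenium's `WebDriverWait`), and the simulated page load here is purely illustrative.

```python
# A minimal "wait until" timing helper: poll a condition until it
# returns truthy or the timeout expires.
import time

def wait_until(condition, timeout=5.0, interval=0.1):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError("condition not met within %.1fs" % timeout)

# Usage: wait for a (simulated) page load before the next test step.
state = {"loaded": False}
def fake_load():
    state["loaded"] = True

fake_load()
assert wait_until(lambda: state["loaded"]) is True
```

Stringing tests together then becomes: navigate, `wait_until` the page is ready, run the checks, reset the search criteria, repeat.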

The next phase will take about two weeks, probably less, depending on how busy I am with manual testing. By the end of May, the tests can be run against Production as a post-deployment smoke test, and within QA on a regular basis for functional testing.

It will also be during functional testing that additional code gets added to handle corner cases and results that come from larger data samples. This is also when additional test ideas tend to come to mind.

While these are not the most complex of tests, I feel the progress has been very good. Coverage has gone from zero to more than 60%, and that 60% accounts for 80%+ of where users spend their time.

Then again, I could be wrong.
