Synergex QA takes full advantage of automated tests
By Johnson Luong, Synergex Software Test Engineer
Manual testing is slow, monotonous, laborious, and prone to human error. Oh, and did I mention it’s boring? What’s the solution? Automated testing, of course. It removes the human element from test execution (though not from test creation) and frees up test engineers to concentrate on writing better tests or improving existing ones. It’s more efficient, too: automated tests can run overnight or while the test engineer is at lunch.
Here at Synergex, QA has replaced the majority of our manual tests with automated tests. Since automated tests run faster than manual tests, we spend less time executing tests for our products, which enables us to report test results to developers sooner and gives them more time to eradicate bugs before release. In addition, we create automated regression tests from most bugs that we discover, so our automated tests can be as thorough as possible. We use TestComplete from SmartBear to write our automated tests in scripting languages such as JavaScript or Python. TestComplete parses the scripts and performs the actions and assertions they specify.
For example, a test script may tell TestComplete to type into a text box, click a button, and then assert that the text inside the text box is correct. After a test is finished, TestComplete will display the results for the test engineer to review. TestComplete also supports a continuous integration strategy (where integrated builds are performed and tested on a regular and frequent basis) and can be set up to publish failed test results to bug-tracking tools, such as JIRA, automatically. For more information about TestComplete, visit https://smartbear.com/product/testcomplete.
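The type-then-click-then-assert pattern described above can be sketched in plain Python. This is only an illustration of the pattern, not real TestComplete code: TestComplete's own objects exist only inside its runtime, so the TextBox and Button classes here are invented stand-ins for a toy application under test.

```python
# Hedged sketch of the type/click/assert test pattern. TextBox and Button
# are hypothetical stand-ins for UI controls; they are not TestComplete APIs.

class TextBox:
    def __init__(self):
        self.text = ""

    def keys(self, s):
        """Simulate typing keystrokes into the control."""
        self.text += s

class Button:
    def __init__(self, on_click):
        self._on_click = on_click

    def click(self):
        self._on_click()

def run_test():
    box = TextBox()
    # In this toy app, clicking the button upper-cases the box's text.
    btn = Button(lambda: setattr(box, "text", box.text.upper()))

    box.keys("hello")            # 1. type into the text box
    btn.click()                  # 2. click the button
    assert box.text == "HELLO"   # 3. assert the resulting text is correct
    return "passed"

print(run_test())
```

A real TestComplete script follows the same three-step shape; the tool then records the pass/fail outcome in its log for the test engineer to review.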
The testing we do at Synergex is more than just writing and executing automated tests: we also create a test plan to identify which tests to write and execute to avoid wasting time on unnecessary testing. For example, if an existing product is being released with a new version of Synergy/DE, there’s no reason to test it constantly during a testing cycle if nothing has changed in the underlying code for the product. Test plans are also useful in documenting a history of test results for tracking purposes. For example, if we inform a developer that a feature of Workbench has a bug in it, we can also tell when it last worked correctly, so the developer can look specifically at the changes since then that may have caused the bug.
It’s also imperative that we organize our tests in a logical way, so that each section of the test plan covers a discrete function or feature. For example, if a test validates a specific tagging feature in Workbench, it belongs in the Workbench tagging section of the test plan. This allows us to test all the tagging features of Workbench together, without having to waste time testing unrelated Workbench functions.
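One simple way to express that kind of feature-level grouping in code is to register each test under a named suite and run one suite at a time. This is a minimal sketch of the idea, not how Synergex's plans are actually implemented; the feature names are invented for illustration.

```python
# Hedged sketch: group tests by feature so one feature's tests can run
# together. Feature names like "workbench_tagging" are hypothetical.

from collections import defaultdict

SUITES = defaultdict(list)

def feature(name):
    """Decorator that registers a test function under a feature suite."""
    def register(fn):
        SUITES[name].append(fn)
        return fn
    return register

@feature("workbench_tagging")
def test_tag_created():
    assert True  # placeholder check

@feature("workbench_tagging")
def test_tag_navigation():
    assert True  # placeholder check

@feature("workbench_editor")
def test_syntax_coloring():
    assert True  # placeholder check

def run_feature(name):
    """Run only the tests registered for one feature; return how many ran."""
    for test in SUITES[name]:
        test()
    return len(SUITES[name])

print(run_feature("workbench_tagging"))  # runs the 2 tagging tests only
```

Running a single suite exercises one feature end to end while leaving unrelated features untouched, which mirrors how the sections of the test plan are used.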
Synergex uses VMware Workstation to run automated tests. Each Virtual Machine (VM) inside Workstation is set up identically, to remove as many third-party or other unrelated factors as possible. Since our automated tests are so extensive, it usually takes days for an individual computer to run them all completely; however, since we organize our tests by features, we are able to break up and separate our tests to run on individual VMs, thus reducing the overall amount of time it takes to run everything. We log the amount of time it takes to run a test for a feature and then use that time estimate to help determine which test to run on which VM. We can then set up the VMs in such a way that all tests finish at roughly the same time, so virtually no time is lost waiting for a test to finish.
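The balancing step above is essentially a scheduling problem: given logged runtimes per feature suite, assign suites to VMs so the slowest VM finishes as early as possible. A common greedy approach (place the longest-running suite on the least-loaded VM first) can be sketched as follows; the suite names and durations are invented examples, not Synergex's actual data.

```python
# Hedged sketch of balancing test suites across VMs by logged runtime,
# using a longest-processing-time-first greedy heuristic. All suite
# names and minute values below are illustrative assumptions.

import heapq

def balance(suites, vm_count):
    """suites: list of (name, minutes). Returns (total, vm_id, names) per VM."""
    # Min-heap keyed on each VM's current total runtime.
    vms = [(0, i, []) for i in range(vm_count)]
    heapq.heapify(vms)
    # Place the biggest suites first onto the least-loaded VM.
    for name, minutes in sorted(suites, key=lambda s: -s[1]):
        total, vm_id, assigned = heapq.heappop(vms)
        assigned.append(name)
        heapq.heappush(vms, (total + minutes, vm_id, assigned))
    return sorted(vms, key=lambda vm: vm[1])

suites = [("tagging", 90), ("editor", 120), ("build", 60),
          ("debugger", 45), ("install", 30)]
for total, vm_id, assigned in balance(suites, 2):
    print(f"VM{vm_id}: {assigned} ({total} min)")
```

With the sample data, the two VMs end up at 165 and 180 minutes of work, so neither sits idle for long waiting on the other, which is the "all tests finish at roughly the same time" goal described above.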
TestComplete is set up to report test results to a centralized location on our servers, so we can access test results from any computer, rather than just the VM that ran the test. This ability to review a test’s results independent of which VM it was run on also allows anyone to review the results—not just the test engineer who ran the test. After reviewing the results, we mark down the overall result in the test plan for tracking purposes, we report any plausible bugs to the developers, and the developers help us determine which ones are actual bugs. If an issue is a bug, a developer fixes it and tells us whether we should write an automated regression test. This process enables us to strengthen the overall effectiveness of our tests.
We use Workstation’s snapshot feature to save snapshots of testing states. If a developer needs to see the exact state of a VM to debug something, we can immediately revert Workstation back to that state, without having to run the test again to get to a specific point. For more information about VMware Workstation, visit https://www.vmware.com/products/workstation.
Manual testing might be an easy way to start testing, but test engineers must use structured automated testing if they want to progress. Automated tests allow test engineers to spend less time executing tests and more time reviewing test results. Test engineers should also use virtualization technology, such as VMware Workstation, to run more tests at the same time. The quicker turnaround allows developers to fix bugs faster, and additional tests created from test failures make the test suite more robust and thorough. Overall, it’s a win-win situation for everyone.