Automated Testing
Why is automated testing listed after the manual testing steps? Because a software process should never be automated until it has first been tested manually to an acceptable level. Manual testing builds an understanding of the software's flow and processes, so that accurate automated test scripts can be created.
- Automated testing is ideal for tests that must run many times and where issues are not expected to be prevalent. Otherwise, the automated scripts require extensive manual intervention before they can continue to run repeatedly.
- Automated Regression Testing runs scripts that simulate business processes and check their accuracy against recorded screen results. If differences are noted, the issue is logged and the automated testing is repeated until no issues are found.
- Performance Testing runs scripts that simulate hundreds or thousands of "virtual" software users and then checks the response time of specific business transactions against different loads (normal and busy business periods) in order to measure the performance of both the software and the systems (infrastructure) it runs on.
- MSU uses HP's Unified Functional Testing (UFT) software suite for regression testing; it also allows for the use of data-driven tests that provide a large range of dynamic testing based on specific test scenarios and cases.
- MSU utilizes HP's LoadRunner for Performance Testing, which includes load and stress testing.
- For both UFT and LoadRunner, testing analysts record the software application's screens and navigation. Built-in tools, along with additional programming support, allow for a wide range of functional and performance testing capabilities and generate test results and supporting analytics.
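The data-driven regression approach described above can be sketched in a few lines of generic Python. This is only an illustration of the idea, not UFT's actual API: each test case (the function and field names here are hypothetical) supplies input data plus a recorded baseline result, and any difference is logged as an issue for retesting.

```python
# Hypothetical sketch of a data-driven regression check; the transaction
# below simply echoes its input so the example is runnable stand-alone.
import difflib

def run_transaction(case):
    # Stand-in for driving the application under test and capturing output.
    return f"Order confirmed for {case['customer']}"

def regression_check(cases):
    """Compare each transaction's output to its recorded baseline."""
    issues = []
    for case in cases:
        actual = run_transaction(case)
        if actual != case["baseline"]:
            diff = "\n".join(difflib.unified_diff(
                case["baseline"].splitlines(),
                actual.splitlines(), lineterm=""))
            issues.append((case["id"], diff))  # logged for retesting
    return issues

cases = [
    {"id": "TC-01", "customer": "Alice",
     "baseline": "Order confirmed for Alice"},
    {"id": "TC-02", "customer": "Bob",
     "baseline": "Order confirmed for Robert"},  # deliberately stale baseline
]

for case_id, diff in regression_check(cases):
    print(f"Issue logged for {case_id}:\n{diff}")
```

Driving the cases from a data table is what makes the test "data-driven": new scenarios are added as rows of data rather than as new script code.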
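The performance-testing concept can be sketched the same way. The following is a minimal illustration, not LoadRunner itself: concurrent "virtual users" (the names `business_transaction` and `run_load` are made up for this example) each run a simulated business transaction, and response times are aggregated for different load levels.

```python
# Assumed, simplified load-test sketch: threads stand in for virtual users
# and a short sleep stands in for real server work.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def business_transaction():
    """Run one simulated transaction and return its response time."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulated server processing
    return time.perf_counter() - start

def run_load(virtual_users, iterations=5):
    """Run the transaction concurrently and summarize response times."""
    with ThreadPoolExecutor(max_workers=virtual_users) as pool:
        times = list(pool.map(lambda _: business_transaction(),
                              range(virtual_users * iterations)))
    return statistics.mean(times), max(times)

# Compare response times under a light and a heavier load,
# e.g. normal vs. busy business periods.
for users in (10, 50):
    mean_t, worst_t = run_load(users)
    print(f"{users} users: mean {mean_t:.3f}s, worst {worst_t:.3f}s")
```

A real tool additionally ramps users up and down, parameterizes test data per user, and correlates response times with infrastructure metrics, but the measurement loop is the same in spirit.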