There are many ways to start an argument on an SAP project and here’s another one to add to the list: define these terms – unit testing, system testing, integration testing, regression testing, scenario testing, end-to-end testing, end user testing, user acceptance testing, stress testing, load testing, performance testing, string testing, usability testing, security and authorizations testing, cut over testing, dry run testing, application testing, interface testing, day-in-the-life testing. I probably missed a few testing types, but the point is there are many kinds of testing and the same testing may be referred to by different names.
Each project has its own language and the way people refer to the various kinds of testing is a potential source of confusion. On a certain level it doesn’t matter what you call your various flavors of testing as long as everyone on the project uses the same terms and means the same things.
I have come across the same term used on different projects to mean different things. My goal here is not to provide definitive definitions, although I will take a shot at some loose definitions, but to make the point that a common understanding of what you mean on your project is what counts. Your project team (SAP and non-SAP team members) will be much better off as long as you speak the same language and mean the same things.
For example, when I say integration test you might think I mean the same thing as when you say end-to-end test. On some projects we would be right to agree and on other projects we should disagree. Consequently I have found that short capsule summaries of the various kinds of testing you do can save a lot of debate and frustration. Of course these capsules are tricky because there isn’t always a black and white distinction between one kind of testing and another, so some mental flexibility is helpful, but don’t be too flexible.
In general the type of testing is tied to a project phase or system environment, but project phases and system environments can be a hot topic, too (and a subject of future blog entries). Who does the testing provides an additional clue as to where it fits in the overall project lifecycle. What follows are a few broad definitions of testing types that I use. You can accept, refine or reject these but I hope the underlying message sticks: create usable working definitions that fit your project. And remember not every project will need to perform every one of these types of testing.
A short checklist to review when thinking about each type of testing:
- What systems (SAP, non-SAP) are needed?
- What environments (development, QA, training, etc.) are needed?
- What data (master data, transaction data, and historical data) is needed?
- Who does the testing (development team, test team, end users, etc.)?
- What are the testing success criteria?
- How are results documented and audited?
- What test cases (positive and negative) are required?
- Who provides sign-off?
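One way to make the checklist above concrete is to capture it as a simple record per testing type, so that every test effort answers the same questions before it starts. The sketch below is illustrative only; all the field values are invented examples, not prescriptions:

```python
# Hypothetical test-plan record capturing the checklist for one testing
# type. Every value here is an invented example for illustration.
integration_test_plan = {
    "testing_type": "SAP Integration Testing",
    "systems": ["SAP", "legacy warehouse system"],      # SAP and non-SAP
    "environments": ["QA"],
    "data": ["converted master data", "converted open transactions"],
    "testers": ["test team", "key business users"],
    "success_criteria": "all priority-1 test cases pass",
    "results_documentation": "test-management tool, auditable sign-off",
    "test_cases": {"positive": 40, "negative": 12},
    "sign_off": "business process owners",
}

# A plan is only complete when every checklist question has an answer
assert all(value not in (None, "", []) for value in integration_test_plan.values())
```

Whether you keep this in a spreadsheet, a test-management tool or a wiki page matters less than filling in every field for every type of testing you plan to run.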
Without further ado, some loose definitions:
SAP Unit Testing
This tests isolated pieces of functionality, for example, creation and save of a sales order. The test is done in the development environment by a configuration specialist and confirms that the sales order can be saved using the SAP organization elements (sales organization, company code, credit control area, etc.) along with the customer master data setup, partner functions, material master data, etc. It establishes a baseline of SAP functionality.
For ABAP development, for example, unit testing shows that a report can be created from developer-generated data. Assistance in data generation may come from a functional consultant.
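The shape of a unit test is the same in any language: exercise one isolated piece of functionality with fabricated data, covering both positive and negative cases. Here is a minimal sketch in Python (not ABAP), where `create_sales_order` is a hypothetical stand-in for whatever transaction or object you are actually testing:

```python
# Minimal unit-test sketch: one isolated piece of functionality,
# fabricated data, positive and negative cases. All names are
# hypothetical stand-ins, not real SAP calls.

def create_sales_order(sales_org, company_code, customer, material):
    """Toy stand-in for 'create and save a sales order'."""
    if not all([sales_org, company_code, customer, material]):
        raise ValueError("missing organizational element or master data")
    return {"order_id": "0000000001", "customer": customer, "material": material}

def test_sales_order_can_be_saved():
    # Positive case: valid org elements and master data
    order = create_sales_order("1000", "1000", "CUST-01", "MAT-01")
    assert order["order_id"]

def test_sales_order_rejects_missing_master_data():
    # Negative case: unit tests should cover failure paths too
    try:
        create_sales_order("1000", "1000", None, "MAT-01")
        assert False, "expected a validation error"
    except ValueError:
        pass

test_sales_order_can_be_saved()
test_sales_order_rejects_missing_master_data()
```

The point is the discipline, not the language: one function, one baseline of behavior, with the failure path tested as deliberately as the success path.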
SAP System Testing
This is testing where elements of related SAP functionality are linked together in the development environment to ensure the pieces work together. For example, a quote-to-cash flow would show that a quote can be used to create a sales order, a delivery can be created and processed from the order, the delivery can be billed, the billing released to accounting, and a customer payment applied against the accounting invoice. Each of the component parts is unit tested ahead of time and the data used in testing is usually fabricated based on the knowledge of the project team.
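The essence of the quote-to-cash example is that the output of each unit-tested step must be a valid input to the next. A toy sketch of that chaining (every function here is a hypothetical stand-in for an SAP transaction):

```python
# System-test sketch: chain unit-tested pieces into one flow
# (quote -> order -> delivery -> billing -> accounting -> payment).
# All functions and document numbers are invented stand-ins.

def create_quote():            return {"doc": "QT-1"}
def order_from_quote(q):       return {"doc": "SO-1", "ref": q["doc"]}
def delivery_from_order(o):    return {"doc": "DL-1", "ref": o["doc"]}
def billing_from_delivery(d):  return {"doc": "BL-1", "ref": d["doc"]}
def release_to_accounting(b):  return {"doc": "AC-1", "ref": b["doc"]}
def apply_payment(inv):        return {"doc": "PAY-1", "cleared": inv["doc"]}

def quote_to_cash():
    quote = create_quote()
    order = order_from_quote(quote)
    delivery = delivery_from_order(order)
    billing = billing_from_delivery(delivery)
    invoice = release_to_accounting(billing)
    payment = apply_payment(invoice)
    # The system test passes only if the whole chain produced
    # documents that reference each other correctly.
    assert payment["cleared"] == invoice["doc"]
    return payment

quote_to_cash()
```

Each function on its own has already passed unit testing; the system test adds the check that the document flow hangs together end to end within the development environment.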
SAP Scenario / String Testing
This tests specific business cases. For example, there may be configuration and business process design that is unique to a certain customer set, a given product line or a set of services. Tangible products and services are processed very differently from each other, so you might have different scenarios you need to test. Again this testing is usually done in the development environment to prove out a requirement – an argument can be made that this is a test case you would cover in system testing. Scenario testing can also happen in the QA environment, but I prefer to call that string testing.
This testing also includes execution of interfaces and other development objects, e.g. reports, with fabricated data.
SAP Integration Testing
This testing is similar to scenario testing except it is typically done in the QA environment and uses more realistic data. Ideally the data has come from a near-real data extraction, conversion and load exercise (not necessarily a full conversion) so the data has a certain familiarity to it for a business end user, e.g. recognizable customers, materials, pricing, vendors, contracts, etc. The testing shows that the business process as designed and configured in SAP runs using representative real world data. In addition the testing shows that interface triggers, reports and workflow are working.
SAP Interface Testing
Testing of interfaces typically occurs at different points in a project so it is important to know what you are testing when. During the project development phase isolated interface testing usually refers to unit testing activities where you confirm that your code can consume a file of your own making. You might have two development systems – one SAP, one non-SAP – where you run a test to show that the sender can generate a file and the receiver can consume it. In the QA environment interface testing might involve execution of business transactions on the sending system followed by looking for automatic generation of the interface output; this is then followed by the receiving system consuming that file and proving that a business process continues on the receiver. Your interface testing might prove that the whole process runs automatically with business events triggering the outbound interface correctly, automatic transfer and consumption by the receiver.
This testing and its definition can become even trickier if you use a message bus where the idea of point-to-point interfaces doesn’t apply and you need to consider publish-and-subscribe models.
Whatever you are doing under the guise of interface testing, you need to be clear about the scope of the tests and the success criteria. Typically interface testing becomes part of larger testing activities as a project progresses. In my experience interface testing shows that the triggering works, the data selection (and exclusion) is accurate and complete, data transfer is successful, and the receiver is able to consume the sent data. Wrapped around this is showing that all the steps run automatically and that error handling and restart capability (e.g. for data problems or connectivity failures) are in place.
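Those success criteria – triggering, selection and exclusion, transfer, and consumption – can be sketched as a simple file-based sender/receiver test. Everything below is invented for illustration (the CSV format, field names and statuses are not from any real interface):

```python
# Interface-test sketch: sender generates a file, receiver consumes it,
# and the test checks selection, exclusion, and consumption.
# File format and field names are invented for illustration.

import csv
import io

def sender_extract(orders):
    # "Outbound interface": select and serialize the relevant records
    buf = io.StringIO()
    writer = csv.writer(buf)
    for order in orders:
        if order["status"] == "complete":   # data selection / exclusion
            writer.writerow([order["id"], order["amount"]])
    return buf.getvalue()

def receiver_consume(payload):
    # "Inbound interface": parse what was sent so the business
    # process can continue on the receiving side
    rows = list(csv.reader(io.StringIO(payload)))
    return [{"id": r[0], "amount": float(r[1])} for r in rows]

orders = [
    {"id": "SO-1", "status": "complete", "amount": 100.0},
    {"id": "SO-2", "status": "open", "amount": 50.0},   # must be excluded
]
payload = sender_extract(orders)
received = receiver_consume(payload)

assert len(received) == 1            # exclusion is accurate and complete
assert received[0]["id"] == "SO-1"   # receiver consumed the sent data
```

A real interface test adds the pieces a toy sketch cannot: the business event that triggers the extract, the transfer mechanism, and the error-handling and restart behavior when the file or the connection is bad.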
SAP End-to-End Testing
This is similar to scenario testing in that a specific business case is tested from start to finish and includes running of interfaces, reports, manual inputs, workflow, etc. In short it is attempting to simulate a real world business process and, in order to make it as real as possible, it is done using the most realistic data. Ideally the data used was the result of a data extract, conversion and load process. I would expect this kind of testing to occur in a QA environment: at some level it can be seen as a way of validating that the individual unit tests, scenario tests, integration tests and interface tests produced results that work together.
SAP End User Testing & User Acceptance Testing
I grouped these two together because they are closely related, if not identical. The goal here is to ensure that end users are able to perform their designated job functions with the new system(s). A crucial part of this testing is referring back to the business requirements (you have some of those, right?) and blueprint to ensure that the expected features, functions and capabilities are available. User involvement throughout the project should have provided feedback to ensure the design met the requirements, so there should not be any big surprises.
Again this is activity that usually occurs in a QA environment with realistic data and the inclusion of end user security and authorizations.
SAP Stress / Load / Performance Testing
This kind of testing examines things like whether the system response time is acceptable, whether periodic processes run quickly enough, whether the expected concurrent user load can be supported. It also identifies processing bottlenecks and ABAP coding inefficiencies. It is rare for a project to have worked out all the system performance tuning perfectly ahead of time and to have every program running optimized code. Consequently the first stress test on a system can be painful, as lots of little things pop up that weren't necessarily an issue in isolated testing.
The testing is geared towards simulating peak loads of activity, either online users or periodic batch processing, and identifies the steps needed to improve performance. Given that the initial test usually reveals lots of areas for improvement, you should expect to run through this a couple of times to ensure the results are good.
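Dedicated load-testing tools do this properly, but the basic mechanics can be sketched in a few lines: fire a batch of concurrent "users" at a transaction, collect timings, and check them against a target. The workload size and the 0.5-second threshold below are invented numbers, not recommendations:

```python
# Load-test sketch: concurrent simulated users, collected response
# times, checked against an (invented) target. Real projects would
# use a proper load-testing tool and real targets.

import time
from concurrent.futures import ThreadPoolExecutor

def transaction():
    # Stand-in for the real business transaction under load
    start = time.perf_counter()
    time.sleep(0.01)
    return time.perf_counter() - start

def run_load(concurrent_users=20, requests_per_user=5):
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        futures = [pool.submit(transaction)
                   for _ in range(concurrent_users * requests_per_user)]
        return [f.result() for f in futures]

timings = run_load()
worst = max(timings)
assert worst < 0.5, f"slowest response {worst:.3f}s exceeds target"
```

The useful part is not the numbers but the loop: run the load, find the worst offenders, tune, and run again until the results hold up.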
SAP Usability Testing
This testing is usually concerned with how many keystrokes and mouse clicks it takes to perform a function, and how easy and intuitive it is to navigate around the system and find whatever it is you are looking for. In an SAP implementation using the standard GUI there isn't much scope for this kind of testing: end user training shows how to navigate, how to create shortcuts and favorites, modify screen layouts, etc. On the other hand a project that involves building portals may well need to perform this kind of testing, not just for the reasons mentioned earlier, but also for consistency of look and feel.
SAP Security and Authorizations Testing
Ensuring that users are only able to execute transactions and access appropriate data is critical to any project, especially with today’s needs for SOX compliance. This testing is typically done in a QA environment against near-final configuration and data from a full extract, conversion and load exercise. Test IDs for job roles are created and used to both confirm what a user can do and what a user cannot do. More often than not this kind of testing is combined with end user or user acceptance testing.
SAP Cut Over / Dry Run Testing
This kind of testing simulates and practices certain major one-time events in the project lifecycle. Typically the terms "dry run" and "conversion" are used together to mean a full scale execution of all the tasks involved to extract data from legacy systems, perform any kind of data conversion, load the results into SAP (and any other systems) and fully validate the results, including a user sign-off. Most projects have several dry run conversions which progress from an exercise in capturing all the steps, checkpoints and sign-offs in data conversion to a timed exercise to ensure everything can be accomplished in the time window for go-live. Once it becomes a timed event a dry run data conversion readily rolls into a cut over test, where it is one component of an overall cut over activity sequence: a cut over test usually ensures that all the necessary tasks, e.g. importing transports; manual configuration; extracting, converting and loading data; unlocking user IDs; starting up periodic processing for interfaces, etc. are identified and can be executed in the go-live time window.
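At its core a timed cut over rehearsal is an ordered task list with a stopwatch on each step and a check against the go-live window. A minimal sketch of that bookkeeping (task names and the window are illustrative, taken loosely from the examples above):

```python
# Cut-over rehearsal sketch: run an ordered task list, time each step,
# and check the total fits the go-live window. Tasks and the window
# are illustrative; the real work would happen inside the loop.

import time

CUTOVER_TASKS = [
    "import transports",
    "manual configuration",
    "extract, convert and load data",
    "unlock user IDs",
    "start periodic interface jobs",
]

def run_cutover(tasks, window_hours=48.0):
    elapsed = {}
    for task in tasks:
        start = time.perf_counter()
        # ... the real cut over task would execute here ...
        elapsed[task] = time.perf_counter() - start
    total_hours = sum(elapsed.values()) / 3600.0
    assert total_hours <= window_hours, "rehearsal exceeded the go-live window"
    return elapsed

timings = run_cutover(CUTOVER_TASKS)
```

The per-task timings are what make repeated dry runs useful: each rehearsal shows which steps are eating the window and whether the sequence as a whole still fits.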
SAP Application Testing
This term can be construed as so broad it has no meaning, as an "application" can mean a lot of things. I have only ever heard it used as a generic blanket term for another kind of testing, e.g. SAP application testing, so it needs to be refined and given context to be of use.
SAP Day-In-The-Life (DITL) Testing
This is one of my favorite kinds of testing – it really is what it says it is. Run the system the way you expect it to be run during a regular business day. Real users, real data, real volumes, real authorizations, real interface and periodic job execution – the closest you can get to a production environment before you go live with the system.
Not every day in business is the same, so you might want to run a few DITL tests. However these can be difficult to organize because of the need to have end users trained and available for extended periods of time, and to have all partner systems able to participate in the activities with real, synchronized data across the systems.
SAP Regression Testing
Each time you put a new release of code and configuration into your production system you want to be sure you don't cause any changes in processing beyond what you expect to change. Hence the role of regression testing: test your existing functionality to be confident it still works as expected with the newly updated configuration and code base. Clearly you don't want to find you have issues in production after you make the changes; consequently, regression testing in a QA environment that has data similar to production is a good test bed. In some cases automated testing can be effectively deployed as a fast and regular method to ensure core business processes are not adversely affected by new releases of code and configuration.
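The automated variant usually boils down to a baseline of known-good results that is re-run against every release. A minimal sketch, where `price_order` and the baseline values are hypothetical stand-ins for any core business process you depend on:

```python
# Regression-test sketch: re-run core processes after a release and
# compare against a known-good baseline. The function and baseline
# values are invented stand-ins.

def price_order(quantity, unit_price, discount=0.0):
    return round(quantity * unit_price * (1 - discount), 2)

# Baseline captured from the previous, known-good release:
# (inputs, expected result)
BASELINE = [
    ((10, 9.99, 0.0), 99.90),
    ((5, 20.00, 0.10), 90.00),
    ((1, 0.50, 0.0), 0.50),
]

def run_regression():
    for args, expected in BASELINE:
        actual = price_order(*args)
        assert actual == expected, (
            f"regression: {args} -> {actual}, expected {expected}")

run_regression()
```

Because the baseline is data, not code, it can grow with every release: each production incident becomes another entry that future releases must continue to get right.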
As mentioned earlier I don’t claim to have perfect definitions for all types of testing, but I believe it is worthwhile having good definitions that are commonly understood and communicated across your project.