- Test Planning, Monitoring and Control
- Test Analysis
- Test Design
- Test Implementation
- Test Execution
- Evaluating Exit Criteria and Reporting
- Test Closure
For each test level, test planning starts at the initiation of the test process for that level and continues throughout the project until the completion of closure activities for that level. It involves the identification of the activities and resources required to meet the mission and objectives identified in the test strategy. Test planning also includes identifying the methods for gathering and tracking the metrics that will be used to guide the project, determine adherence to plan and assess achievement of the objectives. By determining useful metrics during the planning stages, tools can be selected, training can be scheduled and documentation guidelines can be established.
The strategy (or strategies) selected for the testing project helps to determine the tasks that should occur during the planning stages. For example, when using the risk-based testing strategy (see Chapter 2), risk analysis is used to guide the test planning process regarding the mitigating activities required to reduce the identified product risks and to help with contingency planning. If a number of likely and serious potential defects related to security are identified, a significant amount of effort should be spent developing and executing security tests. Likewise, if it is identified that serious defects are usually found in the design specification, the test planning process could result in additional static testing (reviews) of the design specification.
Risk information may also be used to determine the priorities of the various testing activities. For example, where system performance is a high risk, performance testing may be conducted as soon as integrated code is available. Similarly, if a reactive strategy is to be employed, planning for the creation of test charters and tools for dynamic testing techniques such as exploratory testing may be warranted.
In addition, the test planning stage is where the approach to testing is clearly defined by the Test Manager, including which test levels will be employed, the goals and objectives of each level, and what test techniques will be used at each level of testing. For example, in risk-based testing of certain avionics systems, a risk assessment prescribes what level of code coverage is required and thereby which testing techniques should be used.
Complex relationships may exist between the test basis (e.g., specific requirements or risks), test conditions and the tests that cover them. Many-to-many relationships often exist between these work products. These need to be understood to enable effective implementation of test planning, monitoring and control. Tool decisions may also depend on the understanding of the relationships between the work products.
Relationships may also exist between work products produced by the development team and the testing team. For example, the traceability matrix may need to track the relationships between the detailed design specification elements from the system designers, the business requirements from the business analysts, and the test work products defined by the testing team. If low-level test cases are to be designed and used, there may be a requirement defined in the planning stages that the detailed design documents from the development team are to be approved before test case creation can start. When following an Agile lifecycle, informal transfer-of-information sessions may be used to convey information between teams prior to the start of testing.
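To make these many-to-many relationships concrete, the following minimal Python sketch (not part of the syllabus, and using hypothetical requirement, condition, and test case identifiers) shows one way a traceability mapping from requirements to test conditions to test cases could be represented and queried for coverage gaps.

```python
# Hypothetical identifiers; in practice these would come from a requirements
# management or test management tool.
requirement_to_conditions = {
    "REQ-010": ["COND-01", "COND-02"],   # one requirement covered by many conditions
    "REQ-011": ["COND-02"],              # one condition may cover many requirements
}
condition_to_test_cases = {
    "COND-01": ["TST-100", "TST-101"],
    "COND-02": ["TST-101"],
}

def requirements_covered_by(test_case_id):
    """Trace a test case back through its conditions to the requirements it covers."""
    covered = set()
    for condition, cases in condition_to_test_cases.items():
        if test_case_id in cases:
            for requirement, conditions in requirement_to_conditions.items():
                if condition in conditions:
                    covered.add(requirement)
    return covered

def requirements_without_tests():
    """Report requirements not yet reachable from any test case (a coverage gap)."""
    return {
        requirement
        for requirement, conditions in requirement_to_conditions.items()
        if not any(condition_to_test_cases.get(c) for c in conditions)
    }

print(requirements_covered_by("TST-101"))   # {'REQ-010', 'REQ-011'}
print(requirements_without_tests())         # set() - every requirement is covered
```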
The test plan may also list the specific features of the software that are within its scope (based on risk analysis, if appropriate), as well as explicitly identifying features that are not within its scope. Depending on the levels of formality and documentation appropriate to the project, each feature that is within scope may be associated with a corresponding test design specification.
There may also be a requirement at this stage for the Test Manager to work with the project architects to define the initial test environment specification, to verify availability of the resources required, to ensure that the people who will configure the environment are committed to do so, and to understand cost/delivery timescales and the work required to complete and deliver the test environment.
Finally, all external dependencies and associated service level agreements (SLAs) should be identified and, if required, initial contact should be made. Examples of dependencies are resource requests to outside groups, dependencies on other projects (if working within a program), external vendors or development partners, the deployment team, and database administrators.
In order for a Test Manager to provide efficient test control, a testing schedule and monitoring framework needs to be established to enable tracking of test work products and resources against the plan. This framework should include the detailed measures and targets that are needed to relate the status of test work products and activities to the plan and strategic objectives.
For small and less complex projects, it may be relatively easy to relate test work products and activities to the plan and strategic objectives, but generally more detailed objectives need to be defined to achieve this. This can include the measures and targets to meet test objectives and coverage of the test basis.
Of particular importance is the need to relate the status of test work products and activities to the test basis in a manner that is understandable and relevant to the project and business stakeholders. Defining targets and measuring progress based on test conditions and groups of test conditions can be used as a means to achieve this by relating other testing work products to the test basis via the test conditions. Properly configured traceability, including the ability to report on traceability status, makes the complex relationships that exist between development work products, the test basis, and the test work products more transparent and comprehensible.
Sometimes, the detailed measures and targets that stakeholders require to be monitored do not relate directly to system functionality or a specification, especially if there is little or no formal documentation. For example, a business stakeholder may be more interested in establishing coverage against an operational business cycle even though the specification is defined in terms of system functionality. Involvement of business stakeholders at an early stage in a project can help define these measures and targets which not only can be used to help provide better control during the project, but can also help to drive and influence the testing activities throughout the project. For example, stakeholder measures and targets may result in the structuring of test design and test implementation work products and/or test execution schedules to facilitate the accurate monitoring of testing progress against these measures. These targets also help to provide traceability for a specific test level and have the potential to help provide information traceability across different test levels.
Test control is an ongoing activity. It involves comparing actual progress against the plan and implementing corrective actions when needed. Test control guides the testing to fulfill the mission, strategies, and objectives, including revisiting the test planning activities as needed. Appropriate reactions to the control data depend on detailed planning information.
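As an illustration only, the following sketch compares hypothetical actual progress figures against planned figures and flags metrics that fall below an assumed control threshold; real measures, targets, and thresholds would come from the test plan and monitoring framework.

```python
# Minimal sketch of comparing actual test progress against the plan.
# The figures and the 80% threshold below are illustrative assumptions.

planned = {"test_cases_executed": 120, "defects_resolved": 30}
actual  = {"test_cases_executed": 95,  "defects_resolved": 22}

def progress_ratio(metric):
    return actual[metric] / planned[metric]

for metric in planned:
    ratio = progress_ratio(metric)
    if ratio < 0.8:  # illustrative control threshold
        print(f"CONTROL ACTION NEEDED: {metric} at {ratio:.0%} of plan")
    else:
        print(f"On track: {metric} at {ratio:.0%} of plan")
```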
Rather than consider test analysis and design together as described in the Foundation Level syllabus, the Advanced syllabi consider them as separate activities, albeit recognizing that they can be implemented as parallel, integrated, or iterative activities to facilitate the production of test design work products.
Test analysis is the activity that defines “what” is to be tested in the form of test conditions. Test conditions can be identified by analysis of the test basis, test objectives, and product risks. They can be viewed as the detailed measures and targets for success (e.g., as part of the exit criteria) and should be traceable back to the test basis and defined strategic objectives, including test objectives and other project or stakeholder criteria for success. Test conditions should also be traceable forward to test designs and other test work products as those work products are created.
Test analysis for a given level of testing can be performed as soon as the basis for testing is established for that level. Formal test techniques and other general analytical techniques (e.g., analytical risk-based strategies and analytical requirements-based strategies) can be used to identify test conditions. Test conditions may or may not specify values or variables depending on the level of testing, the information available at the time of carrying out the analysis and the chosen level of detail (i.e., the degree of granularity of documentation).
There are a number of factors to consider when deciding on the level of detail at which to specify test conditions, including:
- Level of testing
- Level of detail and quality of the test basis
- System/software complexity
- Project and product risk
- The relationship between the test basis, what is to be tested and how it is to be tested
- Software development lifecycle in use
- Test management tool being utilized
- Level at which test design and other test work products are to be specified and documented
- Skills and knowledge of the test analysts
- The level of maturity of the test process and the organization itself (note that higher maturity may require a greater level of detail, or allow a lesser level of detail)
- Availability of other project stakeholders for consultation
Specifying test conditions in a detailed fashion will tend to result in a larger number of test conditions. For example, you might have a single general test condition, “Test checkout,” for an e-commerce application. However, in a detailed test condition document, this might be split into multiple test conditions, with one condition for each supported payment method, one condition for each possible destination country, and so forth.
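As a hedged illustration of this expansion, the sketch below derives detailed conditions from the single "Test checkout" condition, one per payment method and one per destination country; the specific payment methods and countries are invented for the example.

```python
# Illustrative expansion of the single condition "Test checkout" into detailed
# test conditions; the payment methods and countries are assumed examples.

payment_methods = ["credit card", "debit card", "gift voucher"]
destination_countries = ["UK", "Germany", "Japan"]

detailed_conditions = (
    [f"Test checkout with payment by {method}" for method in payment_methods]
    + [f"Test checkout with delivery to {country}" for country in destination_countries]
)

for condition_number, condition in enumerate(detailed_conditions, start=1):
    print(f"COND-{condition_number:03d}: {condition}")
```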
Some advantages of specifying test conditions at a detailed level include:
- Facilitates more flexibility in relating other test work products (e.g., test cases) to the test basis and test objectives, thus providing better and more detailed monitoring and control for a Test Manager
- Contributes to defect prevention, as discussed in the Foundation Level, by occurring early in a project for higher levels of testing, as soon as the test basis is established and potentially before system architecture and detailed design are available
- Relates testing work products to stakeholders in terms that they can understand (often, test cases and other testing work products mean nothing to business stakeholders and simple metrics such as number of test cases executed mean nothing to the coverage requirements of stakeholders)
- Helps influence and direct not just other testing activities, but also other development activities
- Enables test design, implementation and execution, together with the resulting work products, to be optimized by more efficient coverage of detailed measures and targets
- Provides the basis for clearer horizontal traceability within a test level
Some disadvantages of specifying test conditions at a detailed level include:
- Potentially time-consuming
- Maintainability can become difficult in a changing environment
- Level of formality needs to be defined and implemented across the team
Specification of detailed test conditions can be particularly effective in the following situations:
- Lightweight test design documentation methods, such as checklists, are being used to accommodate the development lifecycle, cost and/or time constraints, or other factors
- Little or no formal requirements or other development work products are available as the test basis
- The project is large-scale, complex or high risk and requires a level of monitoring and control that cannot be delivered by simply relating test cases to development work products
Test conditions may be specified with less detail when the test basis can be related easily and directly to test design work products. This is more likely to be the case for the following:
- Component level testing
- Less complex projects where simple hierarchical relationships exist between what is to be tested and how it is to be tested
- Acceptance testing where use cases can be utilized to help define tests
Test design is the activity that defines “how” something is to be tested. It involves the identification of test cases by the stepwise elaboration of the identified test conditions or test basis using test techniques identified in the test strategy and/or the test plan.
Depending on the approaches being used for test monitoring, test control, and traceability, test cases may be directly related (or indirectly related via the test conditions) to the test basis and defined objectives. These objectives include strategic objectives, test objectives and other project or stakeholder criteria for success.
Test design for a given test level can be performed once test conditions are identified and enough information is available to enable the production of either low or high-level test cases, according to the employed approach to test design. For higher levels of testing, it is more likely that test design is a separate activity following earlier test analysis. For lower levels of testing, it is likely that test analysis and design will be conducted as an integrated activity.
It is also likely that some tasks that normally occur during test implementation will be integrated into the test design process when using an iterative approach to building the tests required for execution; e.g., the creation of test data. In fact, this approach can optimize the coverage of test conditions, creating either low-level or high-level test cases in the process.
Test implementation is the activity during which tests are organized and prioritized by the Test Analysts. In formally-documented contexts, test implementation is the activity in which test designs are implemented as concrete test cases, test procedures, and test data. Some organizations following the IEEE 829 [IEEE829] standard define inputs and their associated expected results in test case specifications and test steps in test procedure specifications. More commonly, each test’s inputs, expected results, and test steps are documented together. Test implementation also includes the creation of stored test data (e.g., in flat files or database tables).
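The following sketch shows one possible way (not mandated by IEEE 829 or this syllabus) of documenting a test's inputs, expected results, and steps together; the field names and the sample login test are assumptions for illustration.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestStep:
    action: str              # what the tester (or tool) does
    expected_result: str     # what should be observed

@dataclass
class ConcreteTestCase:
    test_id: str
    objective: str
    preconditions: List[str]
    inputs: dict
    steps: List[TestStep] = field(default_factory=list)

# Hypothetical example of a concrete, implemented test case.
login_test = ConcreteTestCase(
    test_id="TST-205",
    objective="Valid user can log in",
    preconditions=["User account 'jsmith' exists and is active"],
    inputs={"username": "jsmith", "password": "correct-horse"},
    steps=[
        TestStep("Open the login page", "Login form is displayed"),
        TestStep("Submit the stored credentials", "User is taken to the dashboard"),
    ],
)
print(login_test.test_id, "-", login_test.objective)
```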
Test implementation also involves final checks to ensure the test team is ready for test execution to take place. Checks could include ensuring delivery of the required test environment, test data and code (possibly running some test environment and/or code acceptance tests) and that all test cases have been written, reviewed and are ready to be run. It may also include checking against explicit and implicit entry criteria for the test level in question (see Section 1.7). Test implementation can also involve developing a detailed description of the test environment and test data.
The level of detail and associated complexity of work done during test implementation may be influenced by the detail of the test work products (e.g., test cases and test conditions). In some cases, particularly where tests are to be archived for long-term re-use in regression testing, tests may provide detailed descriptions of the steps necessary to execute a test, so as to ensure reliable, consistent execution regardless of the tester executing the test. If regulatory rules apply, tests should provide evidence of compliance to applicable standards (see section 2.9).
During test implementation, the order in which manual and automated tests are to be run should be included in a test execution schedule. Test Managers should carefully check for constraints, including risks and priorities, that might require tests to be run in a particular order or on particular equipment. Dependencies on the test environment or test data must be known and checked.
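As a minimal sketch of dependency-aware scheduling, the example below orders hypothetical tests so that each runs only after the tests it depends on, using a simple topological sort; the test names and dependencies are invented for the example.

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical dependencies: each test maps to the tests that must run before it,
# e.g., because they create the data or environment state it relies on.
dependencies = {
    "create_customer_account": set(),
    "place_order":             {"create_customer_account"},
    "cancel_order":            {"place_order"},
    "automated_regression_ui": {"create_customer_account"},
}

execution_order = list(TopologicalSorter(dependencies).static_order())
print(execution_order)
# e.g. ['create_customer_account', 'place_order', 'automated_regression_ui', 'cancel_order']
```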
There may be some disadvantages to early test implementation. With an Agile lifecycle, for example, the code may change dramatically from iteration to iteration, rendering much of the implementation work obsolete. Even without a lifecycle as change-prone as Agile, any iterative or incremental lifecycle may result in significant changes between iterations, making scripted tests unreliable or subject to high maintenance needs. The same is true for poorly-managed sequential lifecycles where the requirements change frequently, even late into the project. Before embarking on an extensive test implementation effort, it is wise to understand the software development lifecycle and the predictability of the software features that will be available for testing.
There may be some advantages in early test implementation. For example, concrete tests provide worked examples of how the software should behave, if written in accordance with the test basis. Business domain experts are likely to find verification of concrete tests easier than verification of abstract business rules, and may thereby identify further weaknesses in software specifications. Such verified tests may provide illuminating illustrations of required behavior for software designers and developers.
Test execution begins once the test object is delivered and the entry criteria to test execution are satisfied. Tests should be designed or at least defined prior to test execution. Tools should be in place, particularly for test management, defect tracking and (if applicable) test execution automation. Test results tracking, including metrics tracking, should be working and the tracked data should be understood by all team members. Standards for test logging and defect reporting should be available and published. By ensuring these items are in place prior to test execution, the execution can proceed efficiently.
Tests should be executed according to the test cases, although the Test Manager should consider allowing some amount of latitude so that the tester can cover additional interesting test scenarios and behaviors that are observed during testing. When following a test strategy that is at least in part reactive, some time should be reserved for test sessions using experience-based and defect-based techniques. Of course, any defect report arising from such unscripted testing must describe the variations from the written test case that are necessary to reproduce the failure. Automated tests will follow their defined instructions without deviation.
The main role of a Test Manager during test execution is to monitor progress according to the test plan and, if required, to initiate and carry out control actions to guide testing toward a successful conclusion in terms of mission, objectives, and strategy. To do so, the Test Manager can use traceability from the test results back to the test conditions, the test basis, and ultimately the test objectives, and also from the test objectives forward to the test results. This process is described in detail in Section 2.6.
Documentation and reporting for test progress monitoring and control are discussed in detail in Section 2.6.
From the point of view of the test process, it is important to ensure that effective processes are in place to provide the source information necessary for evaluating exit criteria and reporting.
Definition of the information requirements and methods for collection are part of test planning, monitoring and control. During test analysis, test design, test implementation and test execution, the Test Manager should ensure that members of the test team responsible for those activities are providing the information required in an accurate and timely manner so as to facilitate effective evaluation and reporting.
The frequency and level of detail required for reporting are dependent on the project and the organization. This should be negotiated during the test planning phase and should include consultation with relevant project stakeholders.
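Purely as an illustration, the sketch below evaluates a set of assumed exit criteria against hypothetical collected figures; the actual criteria, thresholds, and reporting format are defined during test planning in consultation with stakeholders.

```python
# Illustrative exit criteria check; the thresholds and collected figures
# below are assumptions, not values prescribed by any standard.

results = {
    "test_cases_passed": 482,
    "test_cases_total": 500,
    "open_critical_defects": 1,
    "requirement_coverage": 0.97,
}

exit_criteria = [
    ("Pass rate >= 95%", results["test_cases_passed"] / results["test_cases_total"] >= 0.95),
    ("No open critical defects", results["open_critical_defects"] == 0),
    ("Requirement coverage >= 95%", results["requirement_coverage"] >= 0.95),
]

for description, met in exit_criteria:
    print(f"{'MET    ' if met else 'NOT MET'} {description}")

all_met = all(met for _, met in exit_criteria)
print("Exit criteria satisfied" if all_met else "Report gaps and decide on action")
```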
Once test execution is determined to be complete, the key outputs should be captured and either passed to the relevant person or archived. Collectively, these are test closure activities. Test closure activities fall into four main groups:
- Test completion check – ensuring that all test work is indeed concluded (a minimal sketch of such a check appears at the end of this section). For example, all planned tests should be either run or deliberately skipped, and all known defects should be either fixed and confirmation tested, deferred for a future release, or accepted as permanent restrictions.
- Test artifacts handover – delivering valuable work products to those who need them. For example, known defects deferred or accepted should be communicated to those who will use and support the use of the system. Tests and test environments should be given to those responsible for maintenance testing. Regression test sets (either automated or manual) should be documented and delivered to the maintenance team.
- Lessons learned – performing or participating in retrospective meetings where important lessons (both from within the test project and across the whole software development lifecycle) can be documented. In these meetings, plans are established to ensure that good practices can be repeated and poor practices are either not repeated or, where issues cannot be resolved, they are accommodated within project plans. Areas to be considered include the following:
  - Was the user representation in the quality risk analysis sessions a broad enough cross-section? For example, due to late discovery of unanticipated defect clusters, the team might have discovered that a broader cross-section of user representatives should participate in quality risk analysis sessions on future projects.
  - Were the estimates accurate? For example, estimates may have been significantly misjudged, and therefore future estimation activities will need to account for this together with the underlying reasons, e.g., was testing inefficient or was the estimate actually lower than it should have been?
  - What are the trends and the results of cause and effect analysis of the defects? For example, assess whether late change requests affected the quality of the analysis and development, and look for trends that indicate bad practices, e.g., skipping a test level which would have found defects earlier and in a more cost-effective manner, for perceived savings of time. Check if defect trends could be related to areas such as new technologies, staffing changes, or the lack of skills.
  - Are there potential process improvement opportunities?
  - Were there any unanticipated variances from the plan that should be accommodated in future planning?
- Archiving results, logs, reports, and other documents and work products in the configuration management system. For example, the test plan and project plan should both be stored in a planning archive, with a clear linkage to the system and version they were used on.
These tasks are important, often missed, and should be explicitly included as part of the test plan. It is common for one or more of these tasks to be omitted, usually due to premature reassignment or dismissal of project team members, resource or schedule pressures on subsequent projects, or team burnout. On projects carried out under contract, such as custom development, the contract should specify the tasks required.
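As a minimal sketch of the test completion check mentioned above, the example below verifies that every planned test is in an acceptable final state and that every known defect has been fixed and confirmed, deferred, or accepted as a restriction; the identifiers and status values are assumptions for illustration.

```python
# Illustrative completion check; names and statuses are assumed for the example.

planned_tests = {
    "TST-100": "passed",
    "TST-101": "failed",    # failure accepted, corresponding defect deferred
    "TST-102": "skipped",   # deliberately skipped with rationale recorded
}
known_defects = {
    "DEF-31": "fixed_and_confirmed",
    "DEF-35": "deferred",
}

acceptable_test_states = {"passed", "failed", "skipped"}
acceptable_defect_states = {"fixed_and_confirmed", "deferred", "accepted_as_restriction"}

incomplete_tests = [t for t, s in planned_tests.items() if s not in acceptable_test_states]
unresolved_defects = [d for d, s in known_defects.items() if s not in acceptable_defect_states]

if not incomplete_tests and not unresolved_defects:
    print("Test completion check passed")
else:
    print("Outstanding items:", incomplete_tests, unresolved_defects)
```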
