Testing

 

Testing is not considered to be part of the Project Management approach. More often, it is defined within the methodology being used or in a specific testing methodology. However, most Project Managers have to deal with technology development, and all should want to see their solutions tested before moving into live operations.

In this section we examine testing: what to test, why faults are inevitable, building a solid scientific proof, the three main types of testing, test data, test tools, regression testing, user acceptance testing, a basic testing process, and specific types of testing.

 

What do we test?

It is common to think of testing as a technical issue - something that is done for technology developments. It is good practice, however, to test all elements of all solutions, whether or not technology is involved. Maybe "test" is not a good word to use. We need to be confident that every element of the solution is ready for operational use.

Here are some examples, starting with common technical issues and moving on to critical business issues:

All these aspects can, and probably should, be tested. In some cases it is possible to test multiple aspects in a single process. For example, if I test the new computer system using trained end-users who are following the procedural documentation provided, I am validating the processes, the documentation and the effectiveness of the training, as well as the correct operation of the system. Trial runs or operational pilots can also test the overall solution.

 

Case Study

A surprise operational test of disaster recovery procedures failed when it was discovered that copies of the procedures were only available back in the office.

 

Faults are Inevitable

There is a naive view that if you do the right amount of testing the end-product will be correct.

There is a scientific / mathematical view that you can logically define a test process which will cover every designed aspect of a solution and thus create a complete proof of the solution. It is a good theory but it is hard to find practical examples.

Defects remaining after a given testing effort - available as a PowerPoint slide

There is a practical view that the more testing you do, the closer you get to finding all the errors - but to reach a perfect result would require infinite effort. You would choose, therefore, how much effort was justified to achieve an acceptable compromise between effort and quality.

There is a realistic view, as demonstrated by Myers, that you will not approach zero errors, but, instead, approach the limit of faults that would ever be uncovered by a formalised testing process. (The Art of Software Testing by Glenford J. Myers, 1979)

There is a chaos-theory view that once you reach the limit, the disruption caused by further dabbling with the solution will generate more problems than are solved.

What is certain is that there is a balance between testing effort and the degree of quality. The choice should be a business decision, based on optimising benefit. There is no right answer. The manufacturers of a quality product will want to invest more time in testing than the suppliers of a cheap commodity product. A life-critical system, such as an aircraft, will warrant greater confidence levels than a non-critical application such as a typing tutor program.
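To make the practical and realistic views above concrete, here is a minimal sketch of the kind of curve they describe. It assumes a simple exponential model in which the faults found per unit of testing effort tail off towards a residual level that formal testing never uncovers; the figures are invented for illustration, not taken from real measurements.

  import math

  def faults_remaining(effort, initial=100, residual=15, decay=0.5):
      """Illustrative model only: faults left after a given testing effort.

      The count decays exponentially with effort but levels off at
      'residual' - the faults a formalised test process never finds.
      All figures are invented.
      """
      return residual + (initial - residual) * math.exp(-decay * effort)

  for effort in range(0, 11, 2):
      print(f"effort {effort:2d}: about {faults_remaining(effort):5.1f} faults remaining")

Plotting this gives the familiar shape: steep improvement at first, diminishing returns later, and a floor that extra effort alone will not remove.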

Errors found in the Space Shuttle on-board software - available as a PowerPoint slide (see slide 2)

It is interesting to compare theory with reality. The onboard flight software for the Space Shuttle is a mission-critical, life-critical, national-pride-critical software application. It is hard to imagine any system that would merit greater degrees of testing and quality. (By the way, none of these issues related to any accidents or injuries.)

This chart shows the errors found in the code in the period 1988 to 1993:

  • errors that got into released versions
  • failures that occurred
  • failures that occurred during flight (just one in 1988).

The pattern of reducing errors does bear an interesting resemblance to the theoretical norms. At first there is a relatively high number of faults, steadily reducing but never reaching zero.

 

Case Study

A company's pension fund bought a software package to manage their pensions. Years later, when that system was being replaced, parallel running showed a consistent discrepancy in the results from the new system. After investigation it was discovered that the new system was correct but the old system had been overpaying on every transaction. Several millions of pounds were unrecoverable.

A very simple error was found in the specification - a plus where there should have been a minus. The supplier's testing had not found it because the results matched their incorrect specification. The pension administrators demanded compensation. The software suppliers asked: "Didn't you test it?" "Didn't you check even one transaction?"

There followed a legal dispute. The software supplier's main defence was that you had to be very stupid not to test a computer application. They eventually settled 50:50 out of court.

 

 

Building a Solid, Scientific Proof

The main goal of formal testing is to produce a sound, logically valid proof that the solution is correct, to the desired level of confidence. To achieve this, testing must be conducted in a carefully controlled manner and must methodically address every component of the solution.

The Testing Wall - available as a PowerPoint slide

A great deal of testing is normally conducted as part of the development process - testing components of the solution as they are built to ensure they meet their specifications. This is a valuable and valid process, but it does not form part of the formal proof of the solution. Until development is completed there is no guarantee that components have been stabilised. There is also no independent verification that they meet the overall needs in the context of the complete solution.

As well as providing a reasonable degree of confidence prior to formal testing, the informal tests can provide useful material and preparation for the subsequent formal tests. They lay a foundation that is the starting point for formal testing.

Testing a complex solution cannot usually be achieved in a single step. Normally the work is broken down into manageable pieces - for example, along the lines of the three main types of testing described below.

 

Three Main Types of Testing

At the end of this section we list many specific types of testing and some variants in the terminology people use. For now, we will simply recognise the three main types...

 

Unit Testing

Unit Testing - available as a PowerPoint slide (see slide 1)

Unit testing validates in detail each component that was developed. It is typically conducted from the developer's point of view - does the component conform to its specification? It ignores whether that component works in the context of the completed system.

 

User Testing

User Testing - available as a PowerPoint slide (see slide 2)

User testing, or business testing, looks at the results from a business / user perspective. Does the final product do the things we want it to do? Do all the individual features work properly? Is it fit for purpose?

 

Technical/Operational Testing and Tuning

Technical Testing and Tuning - available as a PowerPoint slide (see slide 3)

Technical testing and tuning examines the capability of the solution to be operated safely, efficiently and dependably at normal and peak levels of usage. It is concerned with the inner operational workings and procedures. It does not concern itself with whether it meets the users' functional needs, but it is concerned about the demands they will place upon it.

 

Test Data

There is an art to creating good test data. Having a complete, fictitious world in which to follow realistic storylines is a good way to test out the various business scenarios. Remember that all the sub-plots of the story have to co-exist. You will need to follow normal transactions alongside all the abnormal situations - amendments, cancellations, failures, mis-matches. You will also want to simulate the passage of time. Follow the scenarios throughout their natural lifecycle and beyond into consequences such as management reporting and accounting.

Good test data requires good preparation. The scenarios should be planned in advance. They should be constructed to explore every aspect of the business solution in a logical manner. Expected results should be predicted alongside the test scenarios so that you are not dependent on the judgement of the tester.

  • Paint the backdrops
  • Tell a story
  • Divide the story into acts and scenes
  • Any good story has twists and turns
  • Many misadventures will befall our hero
  • But the hero will always find a way to escape and recover
  • And there will be a happy ending?
Test Data - available as a PowerPoint slide
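As a rough sketch of the storyline idea, the fragment below lays out a fictitious backdrop and a few acts, each with its expected outcome predicted in advance. The record layout, field names and values are invented for illustration; a real test pack would follow whatever format your team uses.

  # Illustrative only: a fictitious "world" of test data with normal and
  # abnormal sub-plots, plus the expected outcome predicted in advance.
  scenario = {
      "backdrop": {
          "customers": [{"id": "CUS01", "credit_limit": 1000}],
          "items": [{"code": "XYZ", "stock": 100, "price": 100}],
      },
      "acts": [
          {"scene": "normal order", "action": "create order for 1 x XYZ",
           "expected": "stock 99, credit available 900"},
          {"scene": "cancellation", "action": "cancel the order",
           "expected": "stock 100, credit available 1000"},
          {"scene": "failed delivery", "action": "record mis-match on goods received",
           "expected": "exception report raised for reconciliation"},
          {"scene": "month end", "action": "run management reporting",
           "expected": "order and cancellation both reflected in the accounts"},
      ],
  }

  for act in scenario["acts"]:
      print(f'{act["scene"]}: {act["action"]} -> expect {act["expected"]}')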

Baseline

The test data is intended to validate the correct functionality of the solution. But where is that correct baseline defined? Consider these possibilities...

Original Project Description
  Shows the original intentions but is unlikely to contain enough detail and will probably not have been kept up to date as ideas changed.

Requirements Definition or Functional Specification
  In theory these are definitive statements of what the business needs. Again, they may not contain sufficient detail and may not have been kept up to date. In some contractual procurement scenarios, these may have been defined to be the sole definition of the requirements that the developers must meet.

Systems or Technical Specification
  Typically these have been developed to a high degree of detail and have been maintained. One drawback is that they may be the developer's interpretation of the business needs. Testing against these will not detect errors in that interpretation (see the pensions Case Study above).

Business users' current interpretation of their needs
  Sometimes tests are defined independently of the original specifications to represent the current business needs. This may be a good test of the suitability of the end-result, but it is unlikely to match the precise details of the developed solution.

Maintained and agreed definition of requirements
  Ideally, there will be a definitive statement of currently agreed requirements. As understanding of the business needs evolved and detailed designs were produced, this source will have been updated to show precisely what the solution should do. Disadvantages may be that it has been a moving target and there may be no comparison to what was originally requested.

 

Test Tools

Test Tools - available as a PowerPoint slide

Many test tools are available these days. They offer control, speed, and repeatability of tests. They are particularly useful for testing scenarios that require high volumes of transactions such as peak-volume network loading. Repeatability is useful for running re-tests after problems have been solved, and for regression testing where a component of the solution has been changed and it is necessary to validate that other functionality has not been affected.

The disadvantages of test tools are the time and cost. Preparation of test data has to be exhaustive and correct (although in some cases you might be able to capture scripts from the initial manual testing). The tools themselves can represent a significant cost.
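As a very small illustration of what such tools automate, the sketch below drives a stand-in transaction in parallel and reports response times, in the spirit of peak-volume load testing. The submit_order function is a placeholder invented for the example; a real tool or harness would call the actual system under test.

  import time
  from concurrent.futures import ThreadPoolExecutor

  def submit_order(n):
      """Placeholder for a real transaction against the system under test."""
      start = time.perf_counter()
      time.sleep(0.01)                      # simulate the system doing some work
      return time.perf_counter() - start    # response time in seconds

  # Drive 200 simulated transactions with 20 concurrent "users".
  with ThreadPoolExecutor(max_workers=20) as pool:
      timings = list(pool.map(submit_order, range(200)))

  timings.sort()
  print(f"transactions: {len(timings)}")
  print(f"median response: {timings[len(timings) // 2] * 1000:.1f} ms")
  print(f"95th percentile: {timings[int(len(timings) * 0.95)] * 1000:.1f} ms")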

 

Regression Testing

Regression Testing - available as a PowerPoint slide

Regression means going back. With testing processes we mean that you need to repeat previously successful tests any time there is a chance that subsequent changes could have affected an aspect of the solution.

The testing process involves building a sequence of tests, many of which stand upon the successful results of earlier tests. If a component of the solution has been changed, all other components which relied upon it might have been affected. Similarly, all tests that relied on an earlier test might no longer be valid. Consider carefully which solution and testing components need to be re-validated.

One consequence of this issue is that changes during the testing process are bad news. There will always be errors and changes during the testing, but it might be better to defer the correction of some minor problems to make better overall progress. A seemingly innocuous example with a big hidden catch is when a software supplier suggests that a problem you are experiencing would be solved by moving to their next software release in which the problem has been fixed. Moving to a new release probably means that every test you did to date is now invalidated and will have to be repeated.
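One way to decide which tests need re-running is to record which solution components each test relies on and trace the ripple effect of a change through the dependencies. The components, dependency map and test names below are invented purely to illustrate the idea.

  # Illustrative only: invented components, dependencies and tests.
  depends_on = {
      "order_entry": {"pricing", "stock"},
      "invoicing":   {"order_entry", "pricing"},
      "reporting":   {"invoicing", "stock"},
  }
  tests = {
      "T01 create order":  {"order_entry"},
      "T02 cancel order":  {"order_entry", "stock"},
      "T03 month-end run": {"reporting"},
  }

  def affected_components(changed):
      """Return the changed component plus everything that depends on it."""
      hit = {changed}
      grew = True
      while grew:
          grew = False
          for comp, deps in depends_on.items():
              if comp not in hit and deps & hit:
                  hit.add(comp)
                  grew = True
      return hit

  hit = affected_components("pricing")
  rerun = [name for name, used in tests.items() if used & hit]
  print("re-run:", rerun)   # a pricing change ripples to order entry, invoicing and reporting

Even a rough map like this makes the cost of mid-testing changes visible, which supports the point below about deferring minor corrections.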

 

User Acceptance Testing

User Acceptance Testing (UAT) is a common name for the type of testing that forms the basis for the business accepting the solution from its developers (whether internal or external). It is a common requirement in many organisations and in many contracts. There are two drawbacks - it might not be very reliable and it might be a waste of effort. In fact, you might get a better solution, faster and cheaper if you do not do it...

The developers will need to conduct exhaustive methodical tests regardless of the need for independent User Acceptance Tests. Best quality will be achieved if they include the business and users in these tests to make sure they have not misunderstood the business needs. If, therefore, the developers' formal testing is done with full participation of the business and with the responsible user managers having prime authority over the content, review and sign-off of the tests, the requirement for user acceptance can be met in a single phase of testing instead of two different phases.

Conversely, where the business is left to define and conduct its own independent testing, it is common to find that they do not have the methodical, comprehensive approach of the developers. The reliability of the tests is often questionable.

A single, combined phase of systems testing and User Acceptance Tests can produce the best results in the shortest time. Consider whether it is appropriate and permissible in your circumstances.

 

Testing Process

There are many detailed approaches to testing. See if there is a specific approach to be used in your case. If not, here is a basic process that you might follow.

Test Objectives

First, define a comprehensive set of tests to build the solid testing "wall". Every requirement and plausible scenario should be covered - with no gaps and, preferably, with no overlaps or duplication.

Each test may be described initially in terms of its objective, for example, an objective might be "to test that a cancelled order reverses all transaction data, works orders, stock positions and financial postings".

Identify who is responsible for conducting this test and name the relevant user manager who is to approve the definition, review and sign-off of the test.

At a later stage, you might add scheduling and sequencing information.
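A simple cross-check for the "no gaps, no overlaps" aim is to map each test objective to the requirements it covers and compare that against the full requirement list. The requirement identifiers, objectives and role names below are invented for illustration only.

  from collections import Counter

  # Invented requirements and test objectives, purely for illustration.
  requirements = {"R01 order entry", "R02 order cancellation",
                  "R03 credit checking", "R04 management reporting"}

  objectives = [
      {"id": "T01", "objective": "cancelled order reverses stock and postings",
       "covers": {"R02 order cancellation"},
       "owner": "Test team", "approver": "Sales office manager"},
      {"id": "T02", "objective": "new order reduces stock and available credit",
       "covers": {"R01 order entry", "R03 credit checking"},
       "owner": "Test team", "approver": "Sales office manager"},
  ]

  coverage = Counter(req for obj in objectives for req in obj["covers"])
  print("gaps:    ", sorted(requirements - set(coverage)))             # requirements no test covers
  print("overlaps:", sorted(r for r, n in coverage.items() if n > 1))  # requirements tested more than once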

 


Test Definition

When the list of test objectives has been reviewed and agreed, detail the test scenario and steps, for example:

  1. note stock level for item XYZ
  2. note credit available for customer CUS01
  3. create order #1234 for quantity 1 of item XYZ at price £100 for customer CUS01
  4. check stock level for item XYZ
  5. check credit available for customer CUS01
  6. cancel order #1234
  7. check stock level for item XYZ
  8. check credit available for customer CUS01
  9. etc

Alongside these steps, write the expected results - ie how the tester will know that it worked as expected. For example:

  1. 100
  2. £1000
  3. Order confirmation message confirms quantity 1 of item XYZ at price £100 for customer CUS01
  4. 99
  5. £900
  6. Order cancellation message confirms quantity 1 of item XYZ at price £100 for customer CUS01
  7. 100
  8. £1000
  9. etc

The remainder of this form is used as a checklist for running the test. For each step the tester either ticks the step to note that it worked or creates a test incident control record. In this approach, we do not allow any other action. In other approaches a variety of other actions might be taken in an attempt to avoid noting the failure. Although this might seem desirable, the lack of control can be dangerous and counter-productive. Encourage the expectation and belief that the purpose of a tester is to find discrepancies and that registering them is a good thing. In fact, you will find that the majority of the problems are faults in the test scripts rather than in the actual solution.
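To show how such a script can be driven as data, here is a minimal sketch that walks the steps above against a toy stand-in for the order system, ticking each step or recording an incident when the expected result is not met. The stand-in functions and starting values are invented; they are not part of any real system.

  # Illustrative stand-in for the system under test (invented functions and values).
  stock = {"XYZ": 100}
  credit = {"CUS01": 1000}

  def create_order(item, qty, price, customer):
      stock[item] -= qty
      credit[customer] -= qty * price

  def cancel_order(item, qty, price, customer):
      stock[item] += qty
      credit[customer] += qty * price

  # Each step pairs an action with the expected result predicted in advance.
  steps = [
      ("note stock for XYZ",     lambda: stock["XYZ"],    100),
      ("note credit for CUS01",  lambda: credit["CUS01"], 1000),
      ("create order #1234",     lambda: create_order("XYZ", 1, 100, "CUS01"), None),
      ("check stock for XYZ",    lambda: stock["XYZ"],    99),
      ("check credit for CUS01", lambda: credit["CUS01"], 900),
      ("cancel order #1234",     lambda: cancel_order("XYZ", 1, 100, "CUS01"), None),
      ("check stock for XYZ",    lambda: stock["XYZ"],    100),
      ("check credit for CUS01", lambda: credit["CUS01"], 1000),
  ]

  incidents = []
  for name, action, expected in steps:
      actual = action()
      if expected is None or actual == expected:
          print(f"[tick] {name}")
      else:
          incidents.append((name, expected, actual))   # raise a test incident record
          print(f"[INCIDENT] {name}: expected {expected}, got {actual}")

  print(f"{len(incidents)} incident(s) recorded")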

 


Test Control Log

Progress will be tracked in a test control log. At any point in the process it should be clear what the status is for each test, along with the various incidents that were reported.
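A control log can be as simple as one record per test with its current status and any incident references. The sketch below, with invented entries, shows how such a log can be summarised at any point in the process.

  from collections import Counter

  # Invented log entries for illustration: one row per test.
  control_log = [
      {"test": "T01 cancelled order", "status": "passed",      "incidents": []},
      {"test": "T02 credit check",    "status": "failed",      "incidents": ["I-014"]},
      {"test": "T03 month-end run",   "status": "not started", "incidents": []},
  ]

  print(Counter(entry["status"] for entry in control_log))
  for entry in control_log:
      if entry["incidents"]:
          print(entry["test"], "- open incidents:", ", ".join(entry["incidents"]))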


Test Incident Control

For each discrepancy in the testing a test incident control form notes the details and tracks any remedial action.

 


Test Incident Control Log

Test incidents are logged and tracked to completion.

 


Test Signoff

When a test is completed, it should be reviewed and formally signed off by the test leader, the responsible user manager and anyone else who has been identified as an authoritative reviewer.

In some cases you might note that a test was not entirely satisfactory, although the problems were not sufficiently severe that you would wish to delay completing the tests or releasing the system. Record what future action is required to remedy the problem, and ensure that the remedial actions are defined and agreed for completion at an appropriate time.

 


Specific Types of Testing

Here is a non-exhaustive list showing several types of testing. Consider which of these would be appropriate in your circumstances.

Note also that these types may have different characteristics when applied to different aspects of the overall solution. For example, you could pilot a new computer system, a training course or a new workgroup structure.

Specific methodologies will use their own terminology. You should note that several expressions can mean more than one thing. The best example here is "Conference Room Pilot" where we list five different usages we have heard, each at a different stage in the lifecycle.

 

Conference Room Pilot
  1. Trying out possible software solutions as part of the selection process. The name comes from the concept that the interested parties shut themselves into a single room to simulate the conduct of business operations.
  2. Testing possible systems solutions by simulating business operations. This is essentially a form of design.
  3. Testing developed solutions by simulating live operations.
  4. Demonstrating operations and ascertaining business / user acceptability by simulating live usage of the completed system.
  5. Live running of a small part of the overall business on the new system to test it under real conditions before transferring the remainder of the enterprise to the new system.

Configuration Testing
  Testing that the configuration of packaged software meets the business needs prior to formal testing.

Data Load / Data Conversion Tests
  Tests that data prepared for the new system is acceptable, for example, controls, comparisons with pre-converted data, integrity checking of linked records, validation of standard fields. Data may have been converted or loaded manually.

Data Purification / Integrity / Quality
  Review by the end-user departments that the operational data they hold is complete and correct. This exercise will often be conducted over a period of several months prior to data conversion for the new system. The data review might be supported by computer systems that highlight incomplete data and inconsistencies.

Disaster Recovery Testing
  1. Test the ability to re-instate the systems using off-site data and resources.
  2. Test the overall disaster recovery procedures and facilities from a business perspective.

Fallback Testing
  Test the contingency plan for reverting to the old system or to an alternative emergency solution (eg manual operation) in the event of a failure of the new system.

Informal Tests
  1. Trying out ideas as a design aid.
  2. Checking that a developed component is fit to be released for formal testing.

Integration Testing
  1. Test of the sharing or transfer of transactions and data between the technical solution and all related systems.
  2. Overall testing of the business solution in conjunction with all other related operations and systems.

Link Test
  Test that input, output and shared data are correctly transferred between associated programs and databases.

Live Pilot
  Live running of a small part of the overall business on the new system to test it under real conditions before transferring the remainder of the enterprise to the new system.

Model Office or Simulated Live Running
  Informal testing where users try out the system as if it were real, testing that the processes, procedures, and operational support operate correctly and work in harmony using simulated normal work and volumes.

Module Tests
  Test that a developed module meets its technical specification.

Operational Acceptance Testing
  Formal tests to satisfy the IT operations department that the developed system is of adequate quality to enter the live environment and go into live production.

Operations Testing
  Testing of technical operational procedures such as start-up, shut-down, batch processing, special stationery handling, output handling, controls, error recovery, system backup and recovery procedures etc.

Parallel Pilot
  Test running of a small part of the overall business on the new system to test it under real conditions. Differs from Parallel Running in that not all input need be duplicated with the existing system and there is no attempt to reconcile the overall results between the two systems in a controlled manner.

Parallel Running
  Form of testing whereby the results on the new system are compared with identical real data passing through the old systems. This is normally achieved by duplicating the transactions for a specific time period and reconciling the results with the existing system. Very often it is not possible to get parallel results because the new system is not a duplicate copy of the old one. Where it is possible, parallel running may require a great deal of user effort to do things in duplicate and to reconcile the results.

Pilot
  Completion and live usage of a solution such that it can be tried out in a limited part of the organisation or marketplace. It is intended as a proof of concept. The eventual solution will probably be developed further or modified to take advantage of the lessons learned.

Program Tests
  Test that a developed program meets its technical specification.

Prototyping
  1. Development of a limited technical model of a component that can be used as a design tool to validate the concept. A prototype may use technology or techniques that are of no use beyond the prototyping work (eg screens simulated using PowerPoint).
  2. Development of the technical solution in stages such that each degree of refinement can be validated before moving to the next stage.
  3. The configuration of packaged software to apply the organisation's requirements and validate that they are properly addressed. When completed, the configured version will be the complete version ready for testing.

Regression Testing
  1. Returning to earlier tests after a change has been made, both to check that the change was correct and to ensure no unforeseen impact has occurred. This is vital to maintain the integrity of prior testing during formalised, controlled testing. During formal testing, the environment should be designed to allow reversion and repeats. Timescales should assume a number of repeat cycles will be required.
  2. Re-testing a system following changes such as bug fixes or upgrades. Ideally, the original tests will have been preserved and be relatively easy to repeat and reconcile.

Security Testing
  1. Test overall protection from unauthorised access or usage. It should include physical access, access through external network links, firewalls, improper access by internal users, encryption, trusted third-parties, electronic emissions of physical and wireless networks, etc.
  2. Testing the mapping of individual users' access to specific functions, data and authorisation levels.

System Testing
  1. Main formal test of the overall technical solution.
  2. Main formal test of the functionality of an overall solution.

Pen Test / Tiger Team Attack
  A "penetration testing" attempt to break through security measures by a specialist external team.

Unit Testing
  Formal tests applied to each "unit" of functionality within the system.

User Acceptance Testing
  Testing of the full solution by the business / users to validate that it operates correctly and meets requirements. The implication is that this is the point at which the business agrees to take the solution as produced by the developers (whether internal or external).

Volume Testing / Load Testing
  Creating sufficient hit rates, network loading, transactions and data volumes to simulate normal and peak loads, thus verifying that response times and processing times will be satisfactory and that file sizes are sufficiently large. This also gives a firm basis for effective scheduling, operational capacity and tuning requirements.

 

 

 

 

 