Testing is not usually considered part of the Project Management approach itself. More often it is defined within the development methodology being used, or in a specific testing methodology. However, most Project Managers have to deal with technology development, and all should want to test their solutions before moving into live operations.
In this section we examine testing:
It is common to think of testing as a technical issue - something that is done for technology developments. It is good practice, however, to test all elements of all solutions, whether or not technology is involved. Maybe "test" is not a good word to use. We need to be confident that every element of the solution is ready for operational use.
Here are some examples, starting with some common technical issues but moving on to critical business issues:
All these aspects can, and probably should, be tested. In some cases it is possible to test multiple aspects in a single process. For example, if I test the new computer system using trained end-users who are following the procedural documentation provided, I am validating the processes, the documentation, and the effectiveness of the training, as well as the correct operation of the system. Trial runs or operational pilots can also test the overall solution.
Case Study: A surprise operational test of disaster recovery procedures failed when it was discovered that copies of the procedures were only available back in the office.
There is a naive view that if you do the right amount of testing the end-product will be correct.
There is a scientific / mathematical view that you can logically define a test process which will cover every designed aspect of a solution and thus create a complete proof of the solution. It is a good theory but it is hard to find practical examples.
There is a practical view that the more testing you do, the closer you get to finding all the errors - but to reach a perfect result would require infinite effort. You would choose, therefore, how much effort was justified to achieve an acceptable compromise between effort and quality.
There is a realistic view, as demonstrated by Myers, that you will not approach zero errors, but, instead, approach the limit of faults that would ever be uncovered by a formalised testing process. (The Art of Software Testing by Glenford J. Myers, 1979)
There is a chaos-theory view that once you reach the limit, the disruption caused by further dabbling with the solution will generate more problems than are solved.
What is certain is that there is a balance between testing effort and the degree of quality. The choice should be a business decision, based on optimising benefit. There is no right answer. The manufacturers of a quality product will want to invest more time in testing than the suppliers of a cheap commodity product. A life-critical system, such as an aircraft, will warrant greater confidence levels than a non-critical application such as a typing tutor program.
Case Study: A company's pension fund bought a software package to manage their pensions. Years later, when that system was being replaced, parallel running showed a consistent discrepancy in the results from the new system. After investigation it was discovered that the new system was correct but the old system had been overpaying on every transaction. Several million pounds were unrecoverable. A very simple error was found in the specification - a plus where there should have been a minus. The supplier's testing had not found it because the results matched their incorrect specification. The pension administrators demanded compensation. The software suppliers asked: "Didn't you test it?" "Didn't you check even one transaction?" There followed a legal dispute. The software supplier's main defence was that you had to be very stupid not to test a computer application. They eventually settled 50:50 out of court.
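To see why the supplier's tests could not catch such a fault, here is a minimal, hypothetical sketch in Python. The function name and figures are invented; the point is that a test derived from the same faulty specification simply confirms the faulty behaviour.

```python
# Hypothetical illustration only: a payment calculation written from a faulty
# specification that says "base + adjustment" where it should say "base - adjustment".
def payment_due(base: float, adjustment: float) -> float:
    return base + adjustment  # sign error faithfully copied from the specification


def test_payment_due_matches_spec():
    # This test was derived from the same faulty specification, so it encodes
    # the same wrong expectation - and passes.
    assert payment_due(1000.00, 25.00) == 1025.00
```

Only a comparison against an independent baseline - for example, a single transaction calculated by hand by the pension administrators - would have revealed that 975.00 was the correct figure.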
The main goal of formal testing is to demonstrate, in a sound and logically valid way, that the solution is correct to the desired level of confidence. Testing must be conducted in a carefully controlled manner to achieve this. It must methodically address every component of the solution.
A great deal of testing is normally conducted as part of the development process - testing components of the solution as they are built to ensure they meet their specifications. This is a valuable and valid process, but it does not form part of the formal proof of the solution. Until development is completed there is no guarantee that components have been stabilised. There is also no independent verification that they meet the needs of the overall solution in context.
As well as providing a reasonable degree of confidence prior to formal testing, the informal tests can provide useful material and preparation for the subsequent formal tests. They lay a foundation that is the starting point for formal testing.
Testing a complex solution cannot usually be achieved in a single step. Normally the work is broken down into manageable pieces. For example, you might:
At the end of this section we list many specific types of testing and some variants in the terminology people use. For now, we will simply recognise the three main uses...
Unit testing validates in detail each component that was developed. It is typically the developer's point of view - does it conform to the specification? It ignores whether that component works in the context of the completed system.
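As a minimal sketch of what a unit test might look like - the component and its specification here are invented for illustration - consider this Python example:

```python
# Hypothetical component: the specification says orders of 100 units or more
# receive a 5% discount, otherwise no discount.
def discount_rate(quantity: int) -> float:
    return 0.05 if quantity >= 100 else 0.0


# Unit tests check the component against its specification in isolation,
# including the boundary value, ignoring the wider system context.
def test_no_discount_below_threshold():
    assert discount_rate(99) == 0.0

def test_discount_at_threshold():
    assert discount_rate(100) == 0.05
```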
User testing, or business testing, looks at the results from a business / user perspective. Does the final product do the things we want it to do? Do all the individual features work properly? Is it fit for purpose?
Technical testing and tuning examines the capability of the solution to be operated safely, efficiently and dependably at normal and peak levels of usage. It is concerned with the inner operational workings and procedures. It does not concern itself with whether it meets the users' functional needs, but it is concerned about the demands they will place upon it.
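As a rough sketch of the sort of simple load driver technical testing and tuning might use - the endpoint URL, concurrency figure and reporting are assumptions for illustration, and real tools offer far more control:

```python
# Fires a burst of concurrent requests at a test endpoint and reports
# response times, to check behaviour at peak usage levels.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TEST_URL = "http://test-server.example/health"  # hypothetical test endpoint
CONCURRENT_USERS = 50

def timed_request(_):
    start = time.perf_counter()
    with urllib.request.urlopen(TEST_URL, timeout=10) as response:
        response.read()
    return time.perf_counter() - start

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        timings = sorted(pool.map(timed_request, range(CONCURRENT_USERS)))
    print(f"median: {timings[len(timings) // 2]:.3f}s  worst: {timings[-1]:.3f}s")
```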
The test data is intended to validate the correct functionality of the solution. But where is that correct baseline defined? Consider these possibilities...
| Baseline | Comments |
| --- | --- |
| Original Project Description | Shows the original intentions but is unlikely to contain enough detail and will probably not have been kept up to date as ideas changed. |
| Requirements Definition or Functional Specification | In theory these are definitive statements of what the business needs. Again, they may not contain sufficient detail and may not have been kept up to date. In some contractual procurement scenarios, these may have been defined to be the sole definition of the requirements that the developers must meet. |
| Systems or Technical Specification | Typically these have been developed to a high degree of detail and have been maintained. One drawback is that they may be the developer's interpretation of the business needs. Testing against these will not detect errors in that interpretation (see the pensions Case Study above). |
| Business users' current interpretation of their needs | Sometimes tests are defined independently of the original specifications to represent the current business needs. This may be a good test of the suitability of the end-result, but it is unlikely to match the precise details of the developed solution. |
| Maintained and agreed definition of requirements | Ideally, there will be a definitive statement of currently agreed requirements. As understanding of the business needs evolved and detailed designs were produced, this source will have been updated to show precisely what the solution should do. Disadvantages may be that it has been a moving target and there may be no comparison to what was originally requested. |
Many test tools are available these days. They offer control, speed, and repeatability of tests. They are particularly useful for testing scenarios that require high volumes of transactions, such as peak-volume network loading. Repeatability is useful for running re-tests after problems have been solved, and for regression testing where a component of the solution has been changed and it is necessary to validate that other functionality has not been affected.
The disadvantages of test tools are the time and cost. Preparation of test data has to be exhaustive and correct (although in some cases you might be able to capture scripts from the initial manual testing). The tools themselves can represent a significant cost.
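As a rough illustration of the repeatable, data-driven run that such tools automate, here is a minimal sketch; the file name, column layout and calculation are assumptions, not any particular product's interface:

```python
# Replays a saved set of test cases (input and expected output) against the
# system under test, so exactly the same run can be repeated after every fix.
import csv

def system_under_test(amount: float, rate: float) -> float:
    # Placeholder for the real calculation or interface being tested.
    return round(amount * rate, 2)

def run_saved_cases(path: str = "test_cases.csv") -> int:
    failures = 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):  # columns: amount, rate, expected
            actual = system_under_test(float(row["amount"]), float(row["rate"]))
            if actual != float(row["expected"]):
                failures += 1
                print(f"FAIL: {row} -> {actual}")
    return failures

if __name__ == "__main__":
    print(f"{run_saved_cases()} failure(s)")
```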
Regression means going back. With testing processes we mean that you need to repeat previously successful tests any time there is a chance that subsequent changes could have affected an aspect of the solution.
The testing process involves building a sequence of tests, many of which stand upon the successful results of earlier tests. If a component of the solution has been changed, all other components which relied upon it might have been affected. Similarly, all tests that relied on an earlier test might no longer be valid. Consider carefully which solution and testing components need to be re-validated.
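One way to reason about which tests to repeat is to record the dependencies between components and tests and walk them whenever something changes. The sketch below is illustrative only; the component names and dependency map are invented:

```python
# Given a changed component, find every component that depends on it
# (directly or indirectly) and therefore every test that must be repeated.
DEPENDS_ON = {                       # hypothetical dependency map
    "reporting": {"calculation"},
    "calculation": {"data_load"},
    "screens": {"calculation"},
}
TESTS_FOR = {
    "data_load": ["test_load"],
    "calculation": ["test_calc"],
    "reporting": ["test_reports"],
    "screens": ["test_screens"],
}

def affected_components(changed: str) -> set[str]:
    affected = {changed}
    grew = True
    while grew:
        grew = False
        for comp, deps in DEPENDS_ON.items():
            if comp not in affected and deps & affected:
                affected.add(comp)
                grew = True
    return affected

# Changing "calculation" means repeating its own tests plus those of
# "reporting" and "screens", which build upon it.
print(sorted(t for c in affected_components("calculation") for t in TESTS_FOR[c]))
```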
One consequence of this issue is that changes during the testing process are bad news. There will always be errors and changes during the testing, but it might be better to defer the correction of some minor problems to make better overall progress. A seemingly innocuous example with a big hidden catch is when a software supplier suggests that a problem you are experiencing would be solved by moving to their next software release in which the problem has been fixed. Moving to a new release probably means that every test you did to date is now invalidated and will have to be repeated.
User Acceptance Testing (UAT) is a common name for the type of testing that forms the basis for the business accepting the solution from its developers (whether internal or external). It is a common requirement in many organisations and in many contracts. There are two drawbacks - it might not be very reliable and it might be a waste of effort. In fact, you might get a better solution, faster and cheaper if you do not do it...
The developers will need to conduct exhaustive methodical tests regardless of the need for independent User Acceptance Tests. Best quality will be achieved if they include the business and users in these tests to make sure they have not misunderstood the business needs. If, therefore, the developers' formal testing is done with full participation of the business and with the responsible user managers having prime authority over the content, review and sign-off of the tests, the requirement for user acceptance can be met in a single phase of testing instead of two different phases.
Conversely, where the business is left to define and conduct its own independent testing, it is common to find that they do not have the methodical, comprehensive approach of the developers. The reliability of the tests is often questionable.
A single, combined phase of systems testing and User Acceptance Tests can produce the best results in the shortest time. Consider whether it is appropriate and permissible in your circumstances.
There are many detailed approaches to testing. See if there is a specific approach to be used in your case. If not, here is a basic process that you might follow.
Here is a non-exhaustive list showing several types of testing. Consider which of these would be appropriate in your circumstances.
Note also that these types may have different characteristics when applied to different aspects of the overall solution. For example, you could pilot a new computer system, a training course or a new workgroup structure.
Specific methodologies will use their own terminology. You should note that several expressions can mean more than one thing. The best example here is "Conference Room Pilot" where we list five different usages we have heard, each at a different stage in the lifecycle.
| Type | | Comments |
| --- | --- | --- |
| Conference Room Pilot | 1 | Trying out possible software solutions as part of the selection process. The name comes from the concept that the interested parties shut themselves into a single room to simulate the conduct of business operations. |
| | 2 | Testing possible systems solutions by simulating business operations. This is essentially a form of design. |
| | 3 | Testing developed solutions by simulating live operations. |
| | 4 | Demonstrating operations and ascertaining business / user acceptability by simulating live usage of the completed system. |
| | 5 | Live running of a small part of the overall business on the new system to test it under real conditions before transferring the remainder of the enterprise to the new system. |
| Configuration Testing | | Testing that the configuration of packaged software meets the business needs prior to formal testing. |
| Data Load / Data Conversion Tests | | Tests that data prepared for the new system is acceptable, for example, controls, comparisons with pre-converted data, integrity checking of linked records, validation of standard fields. Data may have been converted or loaded manually. |
| Data Purification / Integrity / Quality | | Review by the end-user departments that the operational data they hold is complete and correct. This exercise will often be conducted over a period of several months prior to data conversion for the new system. The data review might be supported by computer systems that highlight incomplete data and inconsistencies. |
| Disaster Recovery Testing | 1 | Test the ability to re-instate the systems using off-site data and resources. |
| | 2 | Test the overall disaster recovery procedures and facilities from a business perspective. |
| Fallback Testing | | Test the contingency plan for reverting to the old system or to an alternative emergency solution (eg manual operation) in the event of a failure of the new system. |
| Informal Tests | 1 | Trying out ideas as a design aid. |
| | 2 | Checking that a developed component is fit to be released for formal testing. |
| Integration Testing | 1 | Test of the sharing or transfer of transactions and data between the technical solution and all related systems. |
| | 2 | Overall testing of the business solution in conjunction with all other related operations and systems. |
| Link Test | | Test input, output and shared data are correctly transferred between associated programs and databases. |
| Live Pilot | | Live running of a small part of the overall business on the new system to test it under real conditions before transferring the remainder of the enterprise to the new system. |
| Model Office or Simulated Live Running | | Informal testing where users try out the system as if it were real, testing that the processes, procedures, and operational support operate correctly and work in harmony using simulated normal work and volumes. |
| Module Tests | | Test that a developed module meets its technical specification. |
| Operational Acceptance Testing | | Formal tests to satisfy the IT operations department that the developed system is of adequate quality to enter the live environment and go into live production. |
| Operations Testing | | Testing of technical operational procedures such as start-up, shut-down, batch processing, special stationery handling, output handling, controls, error recovery, system backup and recovery procedures etc. |
| Parallel Pilot | | Test running of a small part of the overall business on the new system to test it under real conditions. Differs from Parallel Running in that not all input need be duplicated with the existing system and there is no attempt to reconcile the overall results between the two systems in a controlled manner. |
| Parallel Running | | Form of testing whereby the results on the new system are compared with identical real data passing through the old systems. This is normally achieved by duplicating the transactions for a specific time period and reconciling the results with the existing system (see the reconciliation sketch after this table). Very often it is not possible to get parallel results because the new system is not a duplicate copy of the old one. Where it is possible, parallel running may require a great deal of user effort to do things in duplicate and to reconcile the results. |
| Pilot | | Completion and live usage of a solution such that it can be tried out in a limited part of the organisation or marketplace. It is intended as a proof of concept. The eventual solution will probably be developed further or modified to take advantage of the lessons learned. |
| Program Tests | | Test that a developed program meets its technical specification. |
| Prototyping | 1 | Development of a limited technical model of a component that can be used as a design tool to validate the concept. A prototype may use technology or techniques that are of no use beyond the prototyping work (eg screens simulated using PowerPoint). |
| | 2 | Development of the technical solution in stages such that each degree of refinement can be validated before moving to the next stage. |
| | 3 | The configuration of packaged software to apply the organisation's requirements and validate that they are properly addressed. When completed, the configured version will be the complete version ready for testing. |
| Regression Testing | 1 | Returning to earlier tests after a change has been made, both to check that the change was correct and to ensure no unforeseen impact has occurred. This is vital to maintain the integrity of prior testing during formalised, controlled testing. During formal testing, the environment should be designed to allow reversion and repeats. Timescales should assume a number of repeat cycles will be required. |
| | 2 | Re-testing a system following changes such as bug fixes or upgrades. Ideally, the original tests will have been preserved and be relatively easy to repeat and reconcile. |
| Security Testing | 1 | Test overall protection from unauthorised access or usage. It should include physical access, access through external network links, firewalls, improper access by internal users, encryption, trusted third-parties, electronic emissions of physical and wireless networks, etc. |
| | 2 | Testing the mapping of individual users' access to specific functions, data and authorisation levels. |
| System Testing | 1 | Main formal test of the overall technical solution. |
| | 2 | Main formal test of the functionality of an overall solution. |
| Pen Test / Tiger Team Attack | | A "penetration testing" attempt to break through security measures by a specialist external team. |
| Unit Testing | | Formal tests applied to each "unit" of functionality within the system. |
| User Acceptance Testing | | Testing of the full solution by the business / users to validate that it operates correctly and meets requirements. The implication is that this is the point at which the business agrees to take the solution as produced by the developers (whether internal or external). |
| Volume Testing / Load Testing | | Creating sufficient hit rates, network loading, transactions and data volumes to simulate normal and peak loads, thus verifying that response times and processing times will be satisfactory and that file sizes are sufficiently large. This also gives a firm basis for effective scheduling, operational capacity and tuning requirements. |
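To make the Parallel Running entry above more concrete, here is a minimal reconciliation sketch in Python. The file names, key field and record layout are invented; a real reconciliation would also have to handle rounding rules, timing differences and missing records:

```python
# Compares the results of the same transactions processed by the old and
# new systems and lists any discrepancies for investigation.
import csv

def load_results(path: str) -> dict[str, float]:
    # Expected columns: transaction_id, amount
    with open(path, newline="") as f:
        return {row["transaction_id"]: float(row["amount"]) for row in csv.DictReader(f)}

def reconcile(old_path: str = "old_system.csv", new_path: str = "new_system.csv") -> None:
    old, new = load_results(old_path), load_results(new_path)
    for txn in sorted(old.keys() | new.keys()):
        if old.get(txn) != new.get(txn):
            print(f"{txn}: old={old.get(txn)} new={new.get(txn)}")

if __name__ == "__main__":
    reconcile()
```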
Copyright © Simon Wallace, 1999-2016