Acceptance Tests

In [https://lisacrispin.com/2011/11/08/using-the-agile-testing-quadrants/ Brian Marick's Test Quadrant] model, [[Acceptance Tests]] are '''"business facing"''' tests that '''"support the team"'''.

The [[Acceptance Stage]] is the first step in the [[Acceptance Phase]] of a [[Deployment Pipeline]], and an important one.

[[Acceptance Tests]] represent one of the key 'gates' in making progress with our [[Release Candidate]]s. Once the [[Acceptance Tests]] pass, we know that the system...

<blockquote>"Does what our users want the system to do"</blockquote>

This is the first time that we evaluate our software in the form that it will be deployed into [[Production]].

==What are Acceptance Tests?==
Effective Acceptance Tests:
 
:* '''Are written from the perspective of an external user of the system'''
:* '''Evaluate the system in life-like scenarios'''
:* '''Are evaluated in production-like test environments'''
:* '''Interact with the System Under Test (SUT) through public interfaces (no back-door access for tests)'''
:* '''Focus only on WHAT the system does, not HOW it does it'''
:* '''Are part of a systemic, strategic approach to testing and Continuous Delivery'''
 
 
==How to Write Acceptance Tests==
 
The most effective way to create an [[Acceptance Tests|Acceptance Test]] is to write an [[Executable Specification]] that describes the desired behaviour of the new piece of software, before we write any code. [[Acceptance Tests]] focus only on what the system should do, and say nothing about how it does it.
 
We do this for every change that we intend to make. These specifications guide the development process: we work to fulfil each [[Executable Specification|Executable Specification]] through lower-level [[Test Driven Development]] until the specification is met.
 
We make the scenarios that these tests capture atomic, and we don't share test data between test cases. Each test case starts from the assumption of a running, functioning system, but one that contains no data.
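
As a concrete illustration, here is a minimal sketch of what such an [[Executable Specification]] might look like in Python. The book-shop domain, the <code>ShoppingDsl</code> helper and the in-memory stand-in for the running system are all invented for this example; in a real [[Deployment Pipeline]] the test cases would drive the deployed system through its public interfaces. Note that each test case works with its own uniquely aliased data, so scenarios stay atomic even though they share one running system.

<syntaxhighlight lang="python">
"""A minimal sketch of an Executable Specification for an invented book-shop domain.

InMemoryShop stands in for the running System Under Test; ShoppingDsl is the
domain-language layer that the test cases talk to.
"""
import unittest
import uuid


class InMemoryShop:
    """Stand-in for the deployed system; a real test would use its public interfaces."""

    def __init__(self):
        self.stock = {}     # item name -> quantity available
        self.orders = []    # confirmed (item name, quantity) orders

    def add_stock(self, name, quantity):
        self.stock[name] = self.stock.get(name, 0) + quantity

    def place_order(self, name, quantity):
        if self.stock.get(name, 0) < quantity:
            raise ValueError(f"not enough stock for {name}")
        self.stock[name] -= quantity
        self.orders.append((name, quantity))


class ShoppingDsl:
    """Domain language: says WHAT should happen and hides HOW it is done."""

    def __init__(self, system):
        self.system = system
        self.alias = uuid.uuid4().hex[:8]   # unique per test case, so no data is shared

    def _aliased(self, item):
        return f"{item}-{self.alias}"

    def given_item_in_stock(self, item, quantity):
        self.system.add_stock(self._aliased(item), quantity)

    def place_order(self, item, quantity):
        self.system.place_order(self._aliased(item), quantity)

    def assert_stock_level(self, item, quantity):
        assert self.system.stock[self._aliased(item)] == quantity


class PlaceOrderSpecification(unittest.TestCase):
    """Atomic scenarios, all sharing one running (but initially empty) system."""

    shop = InMemoryShop()   # the 'running system' that every scenario talks to

    def setUp(self):
        self.shopping = ShoppingDsl(self.shop)

    def test_customer_can_order_an_item_that_is_in_stock(self):
        self.shopping.given_item_in_stock("Continuous Delivery", quantity=3)
        self.shopping.place_order("Continuous Delivery", quantity=1)
        self.shopping.assert_stock_level("Continuous Delivery", quantity=2)

    def test_each_scenario_creates_its_own_data(self):
        # This scenario never sees the stock or orders created by the one above.
        self.shopping.given_item_in_stock("Continuous Delivery", quantity=5)
        self.shopping.assert_stock_level("Continuous Delivery", quantity=5)


if __name__ == "__main__":
    unittest.main()
</syntaxhighlight>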
 
[[Acceptance Tests]] are business facing, i.e. written from the perspective of external consumers of the system. They are designed to support programming and to guide the development of code to meet the users' needs.
 
These functional, whole-system tests are difficult to get right. It helps to consciously separate concerns in our testing, so that the system under test can change without invalidating the tests that evaluate its behaviour. For more detail, see the [[Four-Layer Approach]] to Acceptance Testing.
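
As a sketch of that separation of concerns, the outline below assumes the layering usually described for the [[Four-Layer Approach]]: test cases, a domain-specific language (DSL), protocol drivers, and the system under test. The 'greeting service' domain, the driver interface and the URL are invented for illustration. Only the driver layer knows how the system is reached, so swapping the in-process fake for the (hypothetical) HTTP driver changes nothing in the test case or the DSL.

<syntaxhighlight lang="python">
"""Sketch of the layering: test case -> DSL -> protocol driver -> system under test."""
import json
import urllib.request


class InProcessDriver:
    """Fake driver: handy while sketching tests, or for very fast feedback."""

    def __init__(self):
        self._names = []

    def register(self, name):
        self._names.append(name)

    def greeting_for(self, name):
        return f"Hello, {name}!" if name in self._names else None


class HttpDriver:
    """Driver that speaks the system's public HTTP interface (hypothetical endpoints)."""

    def __init__(self, base_url):
        self.base_url = base_url

    def register(self, name):
        data = json.dumps({"name": name}).encode("utf-8")
        request = urllib.request.Request(f"{self.base_url}/users", data=data,
                                         headers={"Content-Type": "application/json"})
        urllib.request.urlopen(request)

    def greeting_for(self, name):
        with urllib.request.urlopen(f"{self.base_url}/greeting?name={name}") as response:
            return json.loads(response.read()).get("greeting")


class GreetingDsl:
    """Domain language used by the test cases; independent of any particular driver."""

    def __init__(self, driver):
        self.driver = driver

    def given_registered_user(self, name):
        self.driver.register(name)

    def assert_greeted(self, name):
        assert self.driver.greeting_for(name) == f"Hello, {name}!"


def test_registered_users_are_greeted(dsl):
    # The test case reads the same whichever driver is plugged in underneath.
    dsl.given_registered_user("Ada")
    dsl.assert_greeted("Ada")


if __name__ == "__main__":
    # Swap InProcessDriver for HttpDriver("http://acceptance-env.example") to run
    # the very same test case against a deployed instance of the system.
    test_registered_users_are_greeted(GreetingDsl(InProcessDriver()))
    print("acceptance test passed")
</syntaxhighlight>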
 
 
==Automating the Acceptance Stage==
 
We aim to eliminate the need for manual regression testing and to automate any and all repeatable processes in the [[Deployment Pipeline]]. Manual processes are slow, unreliable and expensive. There is still a role for Manual Testing (see Chapter 11), and we should use human beings where they have the best effect: in creative work and in qualitative assessment. We use our computers to carry out routine, repeatable tasks; they are much more efficient and reliable than we are at that kind of work.
 
As well as testing the code, we can test the configuration and deployment.
 
By using the same (or as close as possible to the same) mechanisms, tools and techniques to deploy into test environments as we will use for [[Production]], we gain confidence that everything works together. By the time we get to [[Production]], everything has been tested together multiple times, we have reduced the likelihood of 'unpleasant surprises', and we have a high level of confidence that everything will work.
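
As a small sketch of that idea (all hosts, names and steps below are invented), the same deployment code can be run for every environment, with only the configuration differing:

<syntaxhighlight lang="python">
"""One deployment path for every environment (a sketch; names and steps are invented).

'acceptance' and 'production' differ only in configuration, never in mechanism.
"""

ENVIRONMENTS = {
    "acceptance": {"hosts": ["acc-app-1"], "replicas": 1, "db_url": "db://acceptance"},
    "production": {"hosts": ["prod-app-1", "prod-app-2"], "replicas": 4, "db_url": "db://production"},
}


def deployment_plan(version, environment):
    """Build the same sequence of steps for any environment; only the config differs."""
    config = ENVIRONMENTS[environment]
    steps = [f"fetch artefact my-service-{version}.tar.gz"]
    for host in config["hosts"]:
        steps.append(f"install my-service-{version} on {host}")
        steps.append(f"configure {host} with db={config['db_url']} replicas={config['replicas']}")
    steps.append(f"run smoke checks against {environment}")
    return steps


if __name__ == "__main__":
    # The Acceptance Stage exercises exactly the deployment path Production will use.
    for step in deployment_plan("1.4.2", "acceptance"):
        print(step)
</syntaxhighlight>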


Acceptance Tests can act as a kind of whole-system super-integration test. If we assemble all the components that represent a deployable unit of software and evaluate them together, then when the Acceptance Tests pass we know that those components are configured appropriately, work together, and can be deployed successfully.
 
==Scaling Up==
 
Acceptance Tests take time and can be expensive to run. We need enough tests to spot unexpected problems, but we don't usually have unlimited resources. We can design the test infrastructure to run multiple tests in parallel within the available resources, taking into account which tests take the longest to run.
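
One way to sketch that kind of scheduling (the test names and timings below are invented): sort the tests by their historical duration and dispatch the slowest first across a fixed pool of workers, so the whole stage finishes as soon as the available resources allow.

<syntaxhighlight lang="python">
"""Run acceptance tests in parallel, longest-first (a sketch with invented data)."""
import time
from concurrent.futures import ThreadPoolExecutor

# Historical durations in seconds, e.g. harvested from previous pipeline runs.
HISTORICAL_DURATIONS = {
    "place_order_spec": 95,
    "refund_spec": 80,
    "search_catalogue_spec": 30,
    "login_spec": 10,
}


def run_test(name):
    """Stand-in for provisioning data and running one acceptance test case."""
    time.sleep(HISTORICAL_DURATIONS[name] / 1000)   # scaled down so the sketch runs quickly
    return name, True                               # (test name, passed?)


def run_stage(tests, workers=2):
    # Dispatching the slowest tests first avoids one long test arriving last and
    # stretching the whole stage (a simple longest-processing-time heuristic).
    ordered = sorted(tests, key=HISTORICAL_DURATIONS.get, reverse=True)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(run_test, ordered))


if __name__ == "__main__":
    results = run_stage(list(HISTORICAL_DURATIONS))
    print("stage passed" if all(results.values()) else "stage failed", results)
</syntaxhighlight>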
 
We can use sampling strategies, and grow the number and complexity of Acceptance Tests as the software grows. In this way we achieve the test coverage needed to determine the releasability of the software, rather than treating coverage as a goal in itself.
 
The aim should be to allow developers to add any test that they need, while also encouraging them to think about the cost, in time, of each test. Again we have the trade-off between thoroughness and speed. Both matter, but 'slow and thorough' is as bad as 'fast and sketchy'!
 
==Tips for Writing Acceptance Tests==
 
:* Incorporate Acceptance Tests into the development process from the start.
:* Create an [[Executable Specification]] for the desired behaviour of each new piece of software before starting on the code.
:* Think of the least technical person who understands the problem domain reading the Acceptance Tests. The tests should make sense to that person.
:* Create a new Acceptance Test for every Acceptance Criterion of every User Story.
:* Make it easy to identify Acceptance Tests and to differentiate them from other sorts of tests.
:* Automate control of test environments, and Control the Variables, so that the tests are reproducible.
:* Make it easy for development teams to run Acceptance Tests and get results, by automating deployment to the test environment and automating control of test execution.
:* Automate the collection of results, so developers can easily get the answer to the question "Has this Release Candidate passed Acceptance Testing?" (a minimal sketch of this follows the list).
:* Don't chase test coverage as a goal - good coverage comes as a side-effect of good practice, but makes a poor target.
:* Leave room to scale up the number and complexity of Acceptance Tests as the software grows in complexity.
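
As a minimal sketch of automating that last answer, the script below assumes JUnit-style XML reports collected into a results directory; the directory name and the [[Release Candidate]] identifier are placeholders.

<syntaxhighlight lang="python">
"""Collect acceptance-test reports into one verdict per Release Candidate (a sketch).

Assumes JUnit-style XML reports, one file per suite, in a results directory.
"""
import sys
import xml.etree.ElementTree as ET
from pathlib import Path


def acceptance_verdict(results_dir):
    """Answer: has this Release Candidate passed Acceptance Testing?"""
    results = Path(results_dir)
    reports = sorted(results.glob("*.xml")) if results.is_dir() else []
    if not reports:
        print(f"no reports found in {results_dir}")
        return False
    verdict = True
    for report in reports:
        root = ET.parse(report).getroot()
        suites = [root] if root.tag == "testsuite" else list(root.iter("testsuite"))
        for suite in suites:
            failed = int(suite.get("failures", 0)) + int(suite.get("errors", 0))
            if failed:
                print(f"{report.name}: suite '{suite.get('name')}' had {failed} failing tests")
                verdict = False
    return verdict


if __name__ == "__main__":
    release_candidate = sys.argv[1] if len(sys.argv) > 1 else "release-candidate"
    passed = acceptance_verdict("acceptance-results")
    print(f"{release_candidate}: {'PASSED' if passed else 'FAILED'} Acceptance Testing")
    sys.exit(0 if passed else 1)
</syntaxhighlight>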
