Defining good Acceptance Criteria
Dec 2, 2019, 10:45 PM
## Over and under definition

One of the most common ‘transformation’ hurdles that marks the move towards organisational agility is good problem definition. Initially, an organisation might ‘over define’ a problem: describing, in excruciating detail, exactly what should happen, often months in advance. One of the more common mistakes is then to throw this proverbial baby out with the bathwater for the sake of ‘digital transformation’, which often leads to scope that is ‘under defined’. How do you get the balance right?
This pendulum swing of over and under definition happens at all levels of the organisation, from design and architecture through to delivery and operations. In this post I am going to focus on a ‘micro’ level activity: Acceptance Criteria (ACs).
## Acceptance Criteria

If you are ‘doing agile software delivery’ you are probably using some form of ‘Acceptance Criteria’ to define your ‘definition of done’. But ACs, just like everything else, are subject to human interpretation, and there are good ACs and bad ACs.
Whilst I don’t believe there is a ‘just do this and you will get good acceptance criteria’ methodology out there, what I can write about is what I have found helps my teams focus on the right level of discussion for an AC.
But first, let’s ask: why are we discussing ACs at all? What does a good AC discussion help us with? In my view, a good AC will help us:
- Have a shared understanding of the problem
- Understand when we will be finished
- Have leeway to interpret the solution to the problem
And this means a good AC is itself open to interpretation in any given situation. For instance, if the ticket were a ‘spike’, I would not follow these rules.
## 3 key areas

So, here are some of the things I do during ticket kick-off that I believe help the team get to good ACs:
- Focus on the data. What data do you want, when, where from and why? I’ve found these questions hold across ‘data boundaries’ (e.g. UI, API, DB) as well as ‘interface types’ (e.g. voice vs. UI).
  - It’s important to understand why that data is necessary for the user. What value do they get from it?
  - It’s important to understand when that data is needed. Does the user need it immediately, or can it load in later?
  - Where is the data coming from? Is it in the app already, or do I have to get it from another service?
  - Why is this data important? Is some of the data the user needs more important than the rest?
- Be broad about your cross-functional requirements (CFRs). We often focus on the characteristics of a software solution as the be-all and end-all of CFRs, but I have found it helpful to think wider than this, about the behaviour of the system as a whole. I’ve picked out some key ones that I find are often forgotten.
  - It can be tempting not to repeat yourself around CFRs. In my experience, that always leads to problems. Repeat yourself. You might want to avoid the detail of each CFR on EVERY ticket… but prioritise the ones you care about for each ticket.
  - Observability. How will you know whether the thing is or isn’t working? This is key to understanding problems when they happen. You can’t know everything upfront, but thinking about it now will save a lot of effort later.
  - Monitoring in production. How will you know if this has been successful and your users are happy?
  - Documentation. Is your team finding it difficult to keep internal or external documentation up to date? Don’t split documentation into separate tickets; make it integral to the completion of the ticket!
  - External demonstrations. Do you have to demonstrate your software to your peers or neighbours at any point? Then make it a CFR and show your team you value it! It doesn’t matter if it isn’t picked to be demoed; get into the habit of doing it for everything!
  - Other examples based on what you do! I am sure you can think of things that are specific to how your company or team operates :)
- Understand what actions the user wants to take. What does the user want to achieve? What do they want to do with the data? What value are they trying to get out of the transaction? What should they be able to do?
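The observability CFR above can be made concrete with a small sketch. This is a minimal example using only the Python standard library; `fetch_profile` and the log field names are hypothetical, not from any particular system:

```python
import logging
import time

logging.basicConfig(format="%(asctime)s %(levelname)s %(message)s", level=logging.INFO)
log = logging.getLogger("profile-service")

def fetch_profile(user_id, source):
    """Fetch a user profile, logging enough context to answer
    'is this working, and how would I know?' in production."""
    start = time.monotonic()
    try:
        profile = source.get(user_id)
        log.info("profile.fetch ok user_id=%s duration_ms=%.1f",
                 user_id, (time.monotonic() - start) * 1000)
        return profile
    except Exception:
        # log.exception records the stack trace alongside the context fields
        log.exception("profile.fetch failed user_id=%s", user_id)
        raise
```

The point is not the logging library; it is that an AC can say “we can tell from the logs whether a fetch succeeded, and how long it took” without dictating how the fetch is implemented.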
Usually I find that by focusing on those 3 areas, the group of us defining the ACs can arrive at the right level of detail.
I like to compare defining ACs to ‘black box testing’. I want them to help me with inputs and outputs, but I don’t want them to tell me about the middle bit: the bit I am going to define. They should be agnostic about how the result is achieved but care deeply about the output.
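As a sketch of that black-box mindset, an AC can be expressed as a test that asserts only on inputs and outputs, never on the internals. `add_to_basket` here is a hypothetical example, not from the post:

```python
def add_to_basket(basket, item, quantity):
    # Hypothetical implementation. The AC-style tests below don't care how
    # this works internally, only that the output is right for the input.
    updated = dict(basket)
    updated[item] = updated.get(item, 0) + quantity
    return updated

# AC-style, black-box checks: given this input, expect this output.
def test_adding_an_item_sets_its_quantity():
    basket = add_to_basket({}, "apple", 2)
    assert basket["apple"] == 2

def test_adding_again_accumulates():
    basket = add_to_basket({"apple": 2}, "apple", 1)
    assert basket["apple"] == 3
```

If the implementation later switches from a dict to a database row, these checks should still hold unchanged; that is the sign the AC described the output, not the middle bit.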
Good ACs are a small, but integral, part of defining a successful chunk of work. Helping your team get better at them will reap its own rewards.
Written by @defmyfunc, who lives and works in Manchester, UK. You should follow him on Twitter.