
User Acceptance Testing – It’s Time for Some Clarity

If you were my customer on a typical ThreeWill project and I said to you “we need to confirm that the solution we’re developing will actually fulfill the day-in and day-out needs of users prior to production go-live” and I called this “User Acceptance Testing”, I’d bet you would agree that this is a reasonable assertion. (Duh!) However, I’d also bet that if we left the conversation at that point, you and I would likely come away from our discussion with different expectations of how this would be accomplished. Why am I willing to make these bets? Well, my experience on past projects plays a part for sure. However, it’s also the proliferation of testing terminology and the common misuse of those terms, particularly in Agile project approaches, that is a sure-fire way to cause confusion and miscommunication. So, I thought I would write this blog post to alleviate some of that confusion.

To be clear, I am discussing UAT (User Acceptance Testing) in terms of the most common project implementation approach we use at ThreeWill, i.e., Agile/Scrum. Our most common scenario involves standard Scrum roles. This will typically include roles fulfilled by our customers (at minimum this includes Project Sponsor, Product Owner, and Subject Matter Experts) and roles fulfilled by ThreeWill (usually including Scrum Master and Development Team). All references to “customer” in this post refer to the ThreeWill customer for the project and “Development Team” refers to the team of development, testing, and business analysts responsible for the delivery of product backlog items in our Agile/Scrum project approach.

What’s in a Name?

One common impediment to clear understanding we see is the terminology used to describe testing in Agile projects. There are many different types of testing that might be employed on a project, and it’s easy to get confused about the purpose of each. The figure below shows one of the most concise visual representations of Agile testing; it was originally developed by Brian Marick and later included in one of my favorite Agile testing books, “Agile Testing: A Practical Guide for Testers and Agile Teams” by Gregory and Crispin.

Figure 1 – Agile Testing Quadrants

As indicated in the figure above, UAT is considered a “quadrant 3” type of testing; it focuses on a “business-facing” critique of the product/system and typically involves mostly manual testing. Testing in other quadrants, particularly functional testing, is also a key component of typical ThreeWill projects (see Brandon Holloway’s recent blog post on “Quality Assurance Levels – Which One is Right for Your Project?”). However, a key difference between UAT and these other types of testing is that customer roles typically perform the actual execution steps for UAT.

Why (you ask)? It’s because we are interested in validating that the solution supports the needs of real end-users as they execute their own job functions. There is no value in having end users simply re-execute testing already performed by the development team during iterations (aka sprint cycles). In fact, it’s not uncommon for a UAT team to be unfamiliar with details and acceptance criteria from the product backlog. So, by the time we enlist future end-users in a UAT effort, the testing by the development team should have already verified that the solution performs according to the acceptance criteria defined by the Product Owner. Instead, we are interested in potential patterns of use of the solution in broader testing scenarios that may not have even been envisioned or covered by functional testing performed by the development team. For this reason, Product Owners will typically be responsible for defining the test scenarios to be included in UAT and will define those scenarios at a very high level, avoiding detailed testing steps or user interface interactions. The idea here is to encourage some variation in the actual test steps being performed by each UAT tester as long as they validate that the solution supports their needs. The bottom line is that UAT is more about “validation” than “verification”.
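
To make this concrete, here is a purely hypothetical example (not from any particular project): a high-level UAT scenario might simply read “As a regional sales manager, prepare and submit your weekly pipeline report using the new solution, from gathering your numbers through final submission.” Notice there is no mention of which pages to open, which buttons to click, or what the system should display at each step – those details are intentionally left to each tester’s own way of working, which is exactly the kind of variation that makes UAT valuable.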

The concept of having actual end-users of a system perform testing scenarios to confirm that it supports their needs as part of an overall business process is not new. In older plan-driven project approaches, this was often planned as the final step or “phase” prior to production go-live of a system. However, that kind of thinking is distinctly “non-Agile” because it contradicts the iterative nature of Agile, which eschews a phased project approach. So, should UAT be executed as part of each iteration in an Agile project, only once per release, or on an ad-hoc basis? These options can be confusing.

By the way, this notion of “acceptance” also causes confusion on Agile projects because the term is often overused to mean different things. For example, some would say that “Acceptance Testing” and “User Acceptance Testing” mean the same thing, while others might claim that “Acceptance Testing” refers to functional tests executed to confirm the specific acceptance criteria of a single user story rather than the broad, scenario-based testing executed by end-users (which is what I’ve described above).

When in Doubt – Channel Your Inner Bruce

So, how do we select the appropriate approach for testing given all of the options mentioned above? How do we deal with confusion in terms? How do we handle differences in project approaches in the application of testing?

One of my tried-and-true approaches for dealing with too many options or uncertainty in making a decision is to find a Bruce Lee quote that applies (OK – maybe this doesn’t always provide an exact answer but generally it helps me focus!). In this case…

“Absorb what is useful, discard what is useless and add what is specifically your own” – Bruce Lee

In other words, there is no “one size fits all” answer. The appropriate approach to UAT needs to be tailored to each specific project, and we place a high priority on working with Project Sponsors and Product Owners to determine the right fit. Our experience has shown that the keys to success with any UAT effort are:

  1. Don’t be too dogmatic about process. We see each customer as being unique in their needs and constraints related to UAT,
  2. As with anything agile, favor “individuals and interactions over processes and tools”. Keep the process as simple as possible when viewed from the perspective of a UAT tester, and
  3. Generate enthusiasm. The most “bang for the buck” in any UAT effort depends on a team of highly motivated testers that recognize the benefits of both the planned solution and the UAT effort related to it.

I’ve provided three examples below to illustrate the variety we see in UAT approaches. There are certainly others; these are simply meant to show how different projects can call for different approaches.

“One Man Band”

A common “minimalist” approach we see relies mainly upon the Product Owner managing a modest effort, maybe including only the Product Owner or a designated SME running some testing based upon very high level/informal test scenarios. The timing of this testing is usually ad-hoc and based on the Product Owner’s view of when enough features have been completed to support a particular desired testing scenario. Any issues resulting from this testing are reported informally via email or verbally by the Product Owner.

It is possible that, even with this minimalist approach, new user stories may be generated as a result of testing in addition to issues/bugs. As with all of the approaches mentioned here, any new user stories or bugs resulting from UAT never interrupt the iteration plan being executed by the development team and are considered as potential items for the next project iteration. The only exception is “blocking bugs”. If a bug is preventing execution of UAT testing and testing delays cannot be accepted, then a development team member may need to address the bug immediately.

“Don’t Slow the Roll”

This is a more traditional, plan-driven approach in which UAT is delayed until the end of the project, as indicated in the figure below. Feature development iterations proceed without any UAT feedback until a final “hardening” iteration.

Figure 2 – Traditional UAT

The Product Owner typically selects a group of SMEs (subject matter experts) to serve as a “UAT Team”. UAT team members commit to a fixed time period during which testing is their top job priority. The development team executes feature development iterations to a planned completion point (“Feature Dev N”), followed by a “hardening” iteration focused on resolving issues/bugs reported by the UAT team. After the hardening iteration(s) are complete, the development team turns to any remaining steps necessary to promote the solution into production use by the client. This final iteration is sometimes designated as a “Transition” iteration.

This approach may be dictated by availability of SMEs to commit to a focused UAT time period or may be a required process step in the client’s production readiness process. Regardless of motivation, there are some definite drawbacks to this approach.

The idea of treating iterations as having “types” implies the presence of “gates” to move from one type to another, e.g., “finish all features before hardening” or “resolve all UAT issues before transitioning”, etc. Taken to the extreme, this results in a “phase gate” project approach that defeats some core Agile benefits, like delivering value in increments that can be regularly reviewed and used as a basis for decisions about future increments. In this approach, the development team’s ability to respond to any UAT issues that might motivate a Product Owner to request new features or nontrivial design changes is severely limited. This might result in “compromise” resolutions to UAT issues in order to meet project timeline commitments.

“Go with the Flow”

This final example is a more agile-friendly approach but requires at least a part-time commitment of UAT team members over a longer period of time as indicated in the figure below.

Figure 3 – Parallel Iterations

As in other approaches, the Product Owner typically identifies a group of SMEs that can commit to a series of iterations that run in parallel to development team iterations. Typically, there is some lag period before the start of UAT iterations so that the development team can provide enough features to support the scenario-based testing required by the UAT team.

Similar to other approaches, the Product Owner identifies the high-level testing scenarios that the UAT team will execute for each iteration. The level of testing included in each UAT iteration may vary depending upon the timing of availability of product features coming out of the feature development iterations. Any issues, whether ultimately related to bugs or new feature requests/changes, will be considered by the Product Owner and development team during planning for the next feature development sprint. In the above figure, care must be taken to avoid development of any significant new features beginning with development team iteration “Dev N”, since the UAT team would not see those features in their final iteration “UAT N-1”.

With this approach, the UAT Team is able to see resolved issues and any suggested new/changed features in subsequent UAT iterations. Coordination of the development team/UAT sprints can be facilitated in multiple ways, including UAT team attendance at development team iteration reviews and vice versa or even with a periodic “scrum of scrums”-type meeting.

“Making It Your Own”

As a final point, you may be thinking, “OK – how do I determine the right fit for my project?” Unfortunately, there is no single formula for this, and experience does play a role. However, as indicated in the table below, there are some factors that you should consider when making decisions on a UAT approach. Thinking through these items can serve as a basis for deciding on the proper UAT approach for your project.

Table 1 – Considerations for UAT Approach Selection

Why?
  • What is the motivation for UAT?
  • Do we only want issues (defects) reported or are suggestions that turn into new backlog items acceptable?
  • Are there any customer-specific testing requirements (e.g., regulatory) that must be covered?
Who?
  • Will we have a “dedicated” UAT team?
  • How large will the team be?
  • How much time can each team member dedicate to the effort?
  • Will the team consist of multiple levels of expertise or single level?
  • Can we find testers that want to make a contribution to the testing effort or are we dealing with a pool of testers that are already overworked and view testing as a burden?
What?
  • Do we thoroughly understand the main motivations of system users?
  • What resources are available for developing testing scenarios? Are they high level (e.g., story maps, vision statements, etc.) or lower level (e.g., workflows, procedures, use cases, etc.)?
  • Do we want to base testing scenarios on specific business processes/sub-processes or do we want to base on roles (e.g., “a day in the life of a <insert your role here>”) or even user “journeys” that cut across multiple processes?
  • What format do we want to use for communicating testing scenarios?
When?
  • Will team members be available for a continuous effort (even if part time) or will they only be available on a periodic basis?
  • Are there any customer-specific testing requirements that dictate the timing/reporting format for testing?
  • Can UAT team members attend sprint/iteration reviews throughout the project?
  • At what point in the project do we believe we’ll have “critical mass” with enough features to support useful scenario testing?
How?
  • How do we want to report issues?
  • Are there customer-specific requirements for tracking issues?
  • Will someone be able to “curate” reported issues to avoid duplication or erroneous issues?
  • How will testers receive feedback on issue status?
  • Are there any formal sign-off procedures that need to be satisfied before calling UAT “done”?


I hope this post has helped to provide some clarity on what we mean by “user acceptance testing”, why it is an important consideration for projects, and practical ways that it can be accomplished. UAT is not a substitute for other types of “quadrant 3” testing mentioned at the beginning of this article and may be used in concert with alpha/beta testing or pilot program approaches to promote solution adoption by business users. I’ve focused on direct benefits of UAT in this article. However, there are important “side benefits” as well, including building a core group of knowledgeable/enthusiastic users that can help promote buy-in from the rest of the user community and ultimately drive project success.

Bob Morris
