Follow up from the 2017 ASTQB Summit with Brandon Holloway

Co-Host – Danny Ryan

Bio – LinkedIn – Twitter

Guest – Brandon Holloway

Bio – LinkedIn

Danny:Hello and welcome to the Two Bald Brothers and a Microphone podcast. This is one of the bald brothers, Danny, and I am here with Brandon Holloway. Brandon, how are you doing?


Brandon:I’m good. How are you?


Danny:Good, good. I just want to catch up with you. You recently went to a conference. What was the name of the conference?


Brandon:It was the ASTQB Summit and ASTQB stands for American Software Testing Qualifications Board.


Danny:So you guys just geek out and sit around and test software the whole time you’re there?


Brandon:Yes, something like that. I want to say they limited it to a hundred people, and about that many showed up, so I think they maxed it out. Yeah, just a bunch of testers sitting around trying to find what’s wrong with everything, I guess, right?


Danny:Excellent. Where was the conference? Where was the summit?


Brandon:It was in Irvine, California.


Danny:Have you ever been to Irvine?


Brandon:No, this is my first time. Beautiful weather, I’ll tell you that.


Danny:I heard you brought your beautiful wife with you as well.


Brandon:I sure did. We made it a little extended vacation. I went to the conference on Friday and then we stayed a couple of nights in California and actually drove out to Vegas for a couple more nights.


Danny:Nice. Did you have a good time?


Brandon:We did. We did.


Danny:Excellent. Excellent. How many days was the summit? Was it a couple of days? Was it a week?


Brandon:Actually it was just one day. It was a full day from I guess like 8:00 a.m. to 6:00 p.m. or something like that. It was a full day.


Danny:Then do they have a keynote to kick it off?


Brandon:Yeah, there was a keynote speaker. Actually I forget the guy’s name, but he’s the head of QA at, I think, Blizzard Entertainment, which does video games and stuff like that. One of my previous jobs was actually in gaming testing, a different type of gaming. It was actually cool to hear about some of the stuff that they go through.


Danny:Very nice. Very nice. I didn’t know that about you. You were testing a specific game?


Brandon:Well, when I say gaming, this was actually like casino games is what I did. It was video slot machines, I guess. Blizzard Entertainment, that’s more of actual video games. It’s the same but not really.


Danny:Do you use any of that testing knowledge about casino games when you’re in Las Vegas?


Brandon:No, I did not. If anything, it helps me stay away from those slot machines because I know that you’re not going to win.


Danny:You will not win.


Brandon:You will not win unless you can rig them like we can, testing them which is not possible.


Danny:Yes. They don’t like-


Brandon:I’ve had many, many jackpots at work.


Danny:That’s how you plan to retire?


Brandon:Yeah, if I could find somebody that will cash those tickets, sure.


Danny:Nice, nice. The sessions, you got a keynote and then were there specific sessions that you signed up for after that?


Brandon:Yeah. They had three breakout sessions and basically you just got to choose which one you wanted to go to each time. I tried to find ones that closely fit some of the things that I may do at ThreeWill. The first one was a choice between a breakout session on mobile testing and one on business analysis. I guess if anybody is also juggling being a BA or just wanted to talk about requirements and things like that, they would go to that one. Of course I went to the mobile testing one because that is becoming more and more necessary in testing nowadays, so I thought that would be most helpful.


Danny:Very nice. Any big takeaways or anything from a mobile … I guess is this one of the sessions that you’d already taken a test for?


Brandon:Yes, actually I do have a certification through ASTQB as a certified mobile tester. A lot of it was the same content, but when I studied for these exams, I didn’t actually take a course; I would just study on my own through online material. So it was actually nice having someone go through slides and explain it to us, an expert in the field walking us through it. The main takeaway about mobile testing is this: nowadays, with the Internet of Things, pretty much everything is a smart device, and everybody wants to use their mobile phones to access applications, so it’s really a big and important part of testing. You don’t just jump online and do web testing on Internet Explorer anymore. It’s big.


Danny:What other sessions were you able to go to?


Brandon:I went to one on Agile testing. Basically, I deal with this every day at ThreeWill, and they focused on Scrum and testing in that environment versus a Waterfall type of environment. I was very familiar with all the terms they were using. Another one of my certifications is an extension on the foundation-level certification for Agile testing, so I had that one as well. A lot of this was a review for me, but it’s always a good refresher to hear the testing side of things, me being the only tester at ThreeWill. That was a good course to take.


Danny:Nice. Any other takeaways from the day that you had from being out there?


Brandon:Well, I guess my main takeaway, like I said, is that even though I knew a lot of the material being covered, being the only tester at ThreeWill, it was really cool to be in this conference environment with a bunch of other testers.


Danny:With fellow nerds like you.


Brandon:I’m sorry?


Danny:With fellow nerds like you.




Danny:QA nerds like you. It’s nice.


Brandon:Yes, fellow perfectionists that look for every minor detail and critique everything. Exactly. It was just cool because, I don’t know, it validates my thinking on a lot of things, sitting in these sessions where everybody has the same thought process as me. I talked to a couple of different people, and it’s just nice being around a lot of testers to get that whole feel from everybody.


Danny:I’ve experienced that same thing, where I think I’m coming up with an approach on my own. A lot of the marketing stuff that I do, I go about a certain way. Then sometimes you go to these conferences, and just by hearing someone else say they took a similar approach, it does validate it. It just makes you feel like, well, maybe I’m doing something right.


Brandon:Exactly, because you’re not around other marketing people at ThreeWill every day, I don’t think. That’s like my situation. I’m not really around a lot of testers all the time, so this was actually good, getting out there and making sure I’m still on top of the game or whatever.


Danny:Great. I appreciate you doing all these certifications. I think it’s great that you’re taking advantage of this training; it’s a wonderful thing. I love having you on projects. You just do a great job for us. I really appreciate you keeping your skills up to date, going to conferences like this, and bringing that back to projects. You’re up to good stuff.


Brandon:Well, I appreciate it, and I think it’s important to keep up with certifications because a lot of these things, like Agile testing, I wasn’t doing before ThreeWill, and mobile testing has become bigger in recent years. I need to stay current with all testing methods because, just like everything else, testing is evolving as well. I try my best to stay on top of it and get certified in as many ways as I can.


Danny:Excellent, excellent. Well, I appreciate you taking the time to do this. We’ll talk to you maybe next quarter. Thanks for all the hard work you put into projects; I appreciate it, Brandon.


Brandon:All right. No problem. Thank you.


Danny:Take care. Have a great day. Thank you everybody for listening and have a wonderful day. Take care. Bye-bye.


Additional Credits

Podcast Producer – Oliver Penegar
Intro/Outro Music – Daniel Bassett


User Acceptance Testing – It’s Time for Some Clarity

Bob Morris is a Project Manager and a Principal Scrum Master at ThreeWill. He has over 20 years of experience successfully leading technology projects and teams in both project management and senior technical management positions. This experience includes delivery of software product development, enterprise software deployment, and IT infrastructure projects.

If you were my customer on a typical ThreeWill project and I said to you, “we need to confirm that the solution we’re developing will actually fulfill the day-in and day-out needs of users prior to production go-live,” and I called this “User Acceptance Testing,” I’d bet you would agree that this is a reasonable assertion. (Duh!) However, I’d also bet that if we left the conversation at that point, you and I would likely come away from our discussion with different expectations on how this would be accomplished. Why am I willing to make these bets? Well, my experience on past projects plays a part, for sure. However, it’s also the proliferation of testing terminology, and common misuse of those terms, particularly in Agile project approaches, that is a sure-fire way to cause confusion and miscommunication. So, I thought I would write this blog post to alleviate some of that confusion.

To be clear, I am discussing UAT (User Acceptance Testing) in terms of the most common project implementation approach we use at ThreeWill, i.e., Agile/Scrum. Our most common scenario involves standard Scrum roles. This will typically include roles fulfilled by our customers (at minimum this includes Project Sponsor, Product Owner, and Subject Matter Experts) and roles fulfilled by ThreeWill (usually including Scrum Master and Development Team). All references to “customer” in this post refer to the ThreeWill customer for the project and “Development Team” refers to the team of development, testing, and business analysts responsible for the delivery of product backlog items in our Agile/Scrum project approach.

What’s in a Name?

One common impediment to clear understanding is the actual terminology used to describe testing in Agile projects. There are many different types of testing that might be employed on a project, and it’s easy to get confused about the purpose of each. The figure below shows one of the most concise visual representations of Agile testing. It was originally developed by Brian Marick and later included in one of my favorite Agile testing books, by Gregory and Crispin, “Agile Testing: A Practical Guide for Testers and Agile Teams.”

Figure 1 – Agile Testing Quadrants

As indicated in the figure above, UAT is considered a “quadrant 3” type of testing; it focuses on a “business-facing” critique of the product/system and typically involves mainly manual testing. Testing in other quadrants, particularly functional testing, is also a key component of typical ThreeWill projects (see Brandon Holloway’s recent blog post, “Quality Assurance Levels – Which One is Right for Your Project?”). However, a key difference between UAT and these other types of testing is that customer roles typically perform the actual execution steps for UAT.

Why (you ask)? It’s because we are interested in validating that the solution supports the needs of real end-users as they execute their own job functions. There is no value in having end users simply re-execute testing already performed by the development team during iterations (aka sprint cycles). In fact, it’s not uncommon for a UAT team to be unfamiliar with details and acceptance criteria from the product backlog. So, by the time we enlist future end-users in a UAT effort, the testing by the development team should have already verified that the solution performs according to the acceptance criteria defined by the Product Owner. Instead, we are interested in potential patterns of use of the solution in broader testing scenarios that may not have even been envisioned or covered by functional testing performed by the development team. For this reason, Product Owners will typically be responsible for defining the test scenarios to be included in UAT and will define those scenarios at a very high level, avoiding detailed testing steps or user interface interactions. The idea here is to encourage some variation in the actual test steps being performed by each UAT tester as long as they validate that the solution supports their needs. The bottom line is that UAT is more about “validation” than “verification”.
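To make the verification/validation contrast concrete, here is a small sketch with hypothetical content: a detailed functional test case tied to acceptance criteria, next to a deliberately high-level UAT scenario of the kind a Product Owner might define.

```python
# Hypothetical examples contrasting the two levels of testing described above.

# Functional testing (development team): tied to a user story's acceptance
# criteria, with exact steps and an exact expected result to verify.
functional_test = {
    "story": "As a manager, I can approve a submitted timesheet",
    "steps": [
        "Log in as a manager",
        "Open the Pending Approvals list",
        "Click Approve on the first entry",
    ],
    "expected": "Entry status changes to Approved",
}

# UAT (end users): a high-level scenario with deliberately unspecified steps,
# validating that the solution supports a real job function.
uat_scenario = {
    "scenario": "Run your team's normal end-of-week timesheet cycle",
    "steps": None,  # each tester follows their own natural workflow
    "goal": "Confirm the solution supports your day-to-day work",
}

print(len(functional_test["steps"]))  # 3
print(uat_scenario["steps"])          # None
```

The point of leaving `steps` empty in the UAT scenario is exactly the variation described above: each tester exercises the solution the way they actually work.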

The concept of having actual end-users of a system perform testing scenarios to confirm that it supports their needs as part of an overall business process is not new. In older plan-driven project approaches, this was often planned as the final step or “phase” prior to production go-live of a system. However, that kind of thinking is distinctly “non-Agile” because it contradicts the iterative nature of Agile, which eschews a phased project approach. So, should UAT be executed as part of each iteration in an Agile project, only once per release, or on an ad-hoc basis? These options can be confusing.

By the way… this earlier use of the term “acceptance” in UAT also causes confusion in an Agile project because it is often overused to mean different things. For example, some would say that “Acceptance Testing” and “User Acceptance Testing” mean the same thing while others might claim that “Acceptance Testing” is related to functional tests executed to confirm specific acceptance criteria in a single user story rather than broad scenario-based testing executed by end-users (which is what I’ve described above).

When in Doubt – Channel Your Inner Bruce

So, how do we select the appropriate approach for testing given all of the options mentioned above? How do we deal with confusion in terms? How do we handle differences in project approaches in the application of testing?

One of my tried-and-true approaches for dealing with too many options or uncertainty in making a decision is to find a Bruce Lee quote that applies (OK – maybe this doesn’t always provide an exact answer but generally it helps me focus!). In this case…

“Absorb what is useful, discard what is useless and add what is specifically your own” – Bruce Lee

In other words, there is no “one size fits all” answer. The appropriate approach to UAT needs to be tailored to each specific project, and we place a high priority on working with Project Sponsors and Product Owners to determine the right fit. Our experience has shown that the keys to success with any UAT effort are:

  1. Don’t be too dogmatic about process. We see each customer as being unique in their needs and constraints related to UAT,
  2. As with anything agile, favor “individuals and interactions over processes and tools”. Keep the process as simple as possible when viewed from the perspective of a UAT tester, and
  3. Generate enthusiasm. The most “bang for the buck” in any UAT effort depends on a team of highly motivated testers that recognize the benefits of both the planned solution as well as the UAT effort related to it.

I’ve provided three examples below to illustrate the variety we see in UAT approaches. There certainly are others. I just wanted to illustrate the variety of approaches that might be needed for different projects.

“One Man Band”

A common “minimalist” approach we see relies mainly upon the Product Owner managing a modest effort, maybe including only the Product Owner or a designated SME running some testing based upon very high level/informal test scenarios. The timing of this testing is usually ad-hoc and based on the Product Owner’s view of when enough features have been completed to support a particular desired testing scenario. Any issues resulting from this testing are reported informally via email or verbally by the Product Owner.

It is possible that, even with this minimalist approach, new user stories may be generated as a result of testing in addition to issues/bugs. As with all of the approaches mentioned here, any new user stories or bugs resulting from UAT never interrupt the iteration plan being executed by the development team and are considered as potential items for the next project iteration. The only exception is “blocking bugs”. If a bug is preventing execution of UAT testing and testing delays cannot be accepted, then a development team member may need to address the bug immediately.

“Don’t Slow the Roll”

This approach is a more traditional plan-driven project approach where UAT is delayed until the end of the project as indicated in the figure below. Feature development iterations proceed without any UAT feedback until a final “hardening” iteration.

Figure 2 – Traditional UAT

The Product Owner typically selects a group of SMEs (subject matter experts) that serve as a “UAT Team”. UAT team members commit to a fixed time period where testing is their top job priority. The development team executes feature development iterations to a planned completion point (“Feature Dev N”) followed by a “hardening” iteration where the development team is focused on resolving issues/bugs reported by the UAT team. After completion of the hardening iteration(s), the development team is focused on any remaining steps necessary to promote the solution into production use by the client. This final iteration is sometimes designated as a “Transition” iteration.

This approach may be dictated by availability of SMEs to commit to a focused UAT time period or may be a required process step in the client’s production readiness process. Regardless of motivation, there are some definite drawbacks to this approach.

The idea of treating iterations as having “types” implies the presence of “gates” to move from one type to another, e.g., “finish all features before hardening” or “resolve all UAT issues before transitioning,” etc. Taken to the extreme, this results in a “phase gate” project approach, which defeats some core Agile benefits, like delivering value in increments that can be regularly reviewed and used as a basis for decisions on future increments. In this approach, the ability of the development team to respond to any UAT issues that might motivate a Product Owner to request new features or nontrivial design changes is severely limited. This might result in “compromise” resolutions to UAT issues in order to meet project timeline commitments.

“Go with the Flow”

This final example is a more agile-friendly approach but requires at least a part-time commitment of UAT team members over a longer period of time as indicated in the figure below.

Figure 3 – Parallel Iterations

As in other approaches, the Product Owner typically identifies a group of SMEs that can commit to a series of iterations that run in parallel to development team iterations. Typically, there is some lag period before the start of UAT iterations so that the development team can provide enough features to support the scenario-based testing required by the UAT team.

Similar to other approaches, the Product Owner identifies the high-level testing scenarios that the UAT team will execute for each iteration. The level of testing included in each UAT iteration may vary depending upon the timing of availability of product features coming out of the feature development iterations. Any issues, whether ultimately related to bugs or new feature requests/changes, will be considered by the Product Owner and development team during planning for the next feature development sprint. In the above figure, care must be taken to avoid development of any significant new features beginning with development team iteration “Dev N”, since the UAT team would not see those features in their final iteration “UAT N-1”.

With this approach, the UAT Team is able to see resolved issues and any suggested new/changed features in subsequent UAT iterations. Coordination of the development team/UAT sprints can be facilitated in multiple ways, including UAT team attendance at development team iteration reviews and vice versa or even with a periodic “scrum of scrums”-type meeting.

“Making It Your Own”

As a final point, you may be thinking, “OK, how do I determine the right fit for my project?” Unfortunately, there is no single formula for this, and experience does play a role. However, as indicated in the table below, there are some factors you should consider when making decisions on a UAT approach. Thinking through these items can serve as a basis for deciding on the proper UAT approach for your project.

Table 1 – Considerations for UAT Approach Selection

Motivation and Scope

  • What is the motivation for UAT?
  • Do we only want issues (defects) reported, or are suggestions that turn into new backlog items acceptable?
  • Are there any customer-specific testing requirements (e.g., regulatory) that must be covered?

UAT Team

  • Will we have a “dedicated” UAT team?
  • How large will the team be?
  • How much time can each team member dedicate to the effort?
  • Will the team consist of multiple levels of expertise or a single level?
  • Can we find testers that want to contribute to the testing effort, or are we dealing with a pool of testers that are already overworked and view testing as a burden?

Testing Scenarios

  • Do we thoroughly understand the main motivations of system users?
  • What resources are available for developing testing scenarios? Are they high level (e.g., story maps, vision statements, etc.) or lower level (e.g., workflows, procedures, use cases, etc.)?
  • Do we want to base testing scenarios on specific business processes/sub-processes, on roles (e.g., “a day in the life of a <insert your role here>”), or even on user “journeys” that cut across multiple processes?
  • What format do we want to use for communicating testing scenarios?

Timing

  • Will team members be available for a continuous effort (even if part time), or will they only be available on a periodic basis?
  • Are there any customer-specific testing requirements that dictate the timing/reporting format for testing?
  • Can UAT team members attend sprint/iteration reviews throughout the project?
  • At what point in the project do we believe we’ll have “critical mass,” with enough features to support useful scenario testing?

Issue Reporting

  • How do we want to report issues?
  • Are there customer-specific requirements for tracking issues?
  • Will someone be able to “curate” reported issues to avoid duplication or erroneous issues?
  • How will testers receive feedback on issue status?
  • Are there any formal sign-off procedures that need to be satisfied before calling UAT “done”?


I hope this post has helped to provide some clarity on what we mean by “user acceptance testing”, why it is an important consideration for projects, and practical ways that it can be accomplished. UAT is not a substitute for other types of “quadrant 3” testing mentioned at the beginning of this article and may be used in concert with alpha/beta testing or pilot program approaches to promote solution adoption by business users. I’ve focused on direct benefits of UAT in this article. However, there are important “side benefits” as well, including building a core group of knowledgeable/enthusiastic users that can help promote buy-in from the rest of the user community and ultimately drive project success.


Quality Assurance Levels – Which One is Right for Your Project?

Brandon Holloway is a Quality Assurance Engineer at ThreeWill. He has over 10 years of QA experience in requirements gathering, risk analysis, project planning, project sizing, scheduling, testing, defect/bug tracking, management, and reporting.

Testing is an investment and just like with our health, prevention is key. It is cheaper to prevent problems than it is to repair them. Adding Quality Assurance to the ThreeWill practice creates a better product, increases the client’s satisfaction, improves efficiency, and improves the company’s reputation. At ThreeWill, we offer three “Levels” of QA services.

Silver Level

This is the most lightweight approach we offer. We will conduct a Feature/Function Risk Assessment (for each Product Backlog item). We will provide test cases in a predefined format which include, but are not limited to, test scenarios and high priority test cases. These test cases will be reviewed by the developer and/or the client and will be executed during each sprint for each new piece of functionality delivered. Full testing will be done on 1 operating system and 1 browser type/version (assuming environment for all are available and needed). We will also perform smoke testing on 1 additional browser if needed. Full regression testing will be done on the release candidate, including only high priority test cases. Defects will be tracked on the ThreeWill extranet site.

Gold Level

This is the middle-of-the-road approach. It includes everything from the Silver Level, and more. We provide the additional option of having test cases created in the client’s repository rather than in our own predefined format, if desired. We will also offer a Traceability Matrix upon request. We will fully test up to 2 browser types/versions and smoke test up to 2 additional browsers, rather than just 1 for each. While defects are tracked on our own extranet site at the Silver Level, at the Gold Level we offer the option to have them stored and tracked in the client’s current bug tracking system instead. Also included in the Gold Level are test reports which contain test results, a count of open defects by priority for each Product Backlog item, and any other relevant test tracking information.

Platinum Level

This is the total package! This, of course, includes everything from the Silver and Gold Levels, plus more. More test cases are provided and with more detail (complete test steps, expected results, preconditions, High/Medium/Low priority test cases, and positive/negative test cases). Not only will full regression testing be performed on the release candidate, but we will also perform limited regression testing during each sprint. Full testing is offered on up to 2 operating systems and 4 browser types/versions, and smoke testing is still performed on up to 2 additional browsers. Any other relevant testing information that may be available will also be shared.
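The coverage differences across the three levels can be summarized in a small data structure. This is a sketch: the numbers come from the level descriptions above, and the operating-system count for the Gold Level is an assumption (the description does not state it), carried over from Silver.

```python
# Coverage matrix summarizing the three QA service levels described above.
# Note: the Gold Level OS count is an assumption; the text does not state it.
qa_levels = {
    "Silver": {
        "full_test_os": 1, "full_test_browsers": 1, "smoke_browsers": 1,
        "regression": "release candidate only (high priority cases)",
    },
    "Gold": {
        "full_test_os": 1, "full_test_browsers": 2, "smoke_browsers": 2,
        "regression": "release candidate only (high priority cases)",
    },
    "Platinum": {
        "full_test_os": 2, "full_test_browsers": 4, "smoke_browsers": 2,
        "regression": "limited each sprint, full on release candidate",
    },
}

for level, cov in qa_levels.items():
    print(f"{level}: {cov['full_test_browsers']} browser(s) fully tested")
```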


Top Challenges when Testing Mobile Features for Web Apps

Brandon Holloway is a Quality Assurance Engineer at ThreeWill. He has over 10 years of QA experience in requirements gathering, risk analysis, project planning, project sizing, scheduling, testing, defect/bug tracking, management, and reporting.

With mobile devices being relied on more today than ever, it has become increasingly important for them to work well with web applications. Testing these devices has its own set of challenges for QA Engineers. Here are 3 of the biggest challenges I’ve encountered while testing software for mobile devices, based on my own experience. If you want to learn about SharePoint on mobile, check this out!

Note: These challenges relate to testing a web application that was meant primarily for a desktop browser such as IE or Chrome but is expected to be compatible with mobile devices. I’m not referring to applications built specifically for mobile devices, such as something you would download from the App Store.

1. High Volume of Test Runs

Most of the time when a customer wants an application to be compatible with mobile devices, they aren’t just referring to a specific type of phone or tablet. Even ignoring the separate challenge of getting access to many different devices, just having to run test cases multiple times to cover all the devices you have adds a lot of time to the testing effort. Many of the issues you will find are related to how a device’s screen resolution reacts to the responsiveness of the application. This means you need many devices with various screen sizes: smaller phones, bigger phones, tablets, etc. You have to test each of these in Portrait mode and Landscape mode. You may want to test on Wi-Fi vs. LTE vs. 3G. It just keeps adding up.

Chrome’s desktop browser does have a pretty cool mobile device emulator built in. It is pretty reliable for testing responsiveness for various resolutions, but it isn’t perfect. It is always better to have the actual devices on hand. I have found several responsiveness bugs on the actual devices that looked fine in Chrome’s emulator. Also remember that more devices means more regression testing. Same thing for testing bug fixes. Everything is multiplied.
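The way the effort multiplies can be shown with a quick back-of-the-envelope calculation. The device pool and suite size below are hypothetical, just to illustrate the combinatorics:

```python
from itertools import product

# Hypothetical device pool and test suite size, to show how runs multiply.
devices = ["small phone", "large phone", "7-inch tablet", "10-inch tablet"]
orientations = ["portrait", "landscape"]
networks = ["Wi-Fi", "LTE", "3G"]
test_cases_in_suite = 50

# Every combination of device, orientation, and network is a configuration
# the suite may need to run against.
configurations = list(product(devices, orientations, networks))

print(len(configurations))                        # 4 * 2 * 3 = 24
print(len(configurations) * test_cases_in_suite)  # 24 * 50 = 1200 executions
```

Four devices already mean 24 configurations and 1,200 test executions per full pass, before any regression or bug-fix retesting multiplies it again.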

2. Different Devices Are, Well, Different

While it seems that mobile devices with similar resolutions may behave similarly with a web application, it isn’t necessarily true. I’ve found that iPhones and many phones running Android seem to work fairly well most of the time. Then there are Windows phones. I’ve found some really head-scratching bugs on those that don’t show up anywhere else. I can’t let Apple and Samsung off the hook completely though, because they each have their own quirks. I found both iPad and Galaxy tablets more bug-prone than the phones. And keep in mind the developers will have their own set of challenges tracking down the root cause of a bug for this phone vs that phone vs this tablet, etc. It can become a juggling act for everyone.

3. Documentation

This really goes hand in hand with #1. Scrum fits mobile testing well, mainly because of its more lightweight approach to documentation/requirements. With so many devices and potential issues related to many factors on those devices, you can only be so detailed on paper if you want to test sufficiently. Exploratory and ad hoc testing are huge parts of mobile testing.

What has been your experience with testing mobile apps?  Leave comments/questions below and I’ll respond to them.


Do Software Testers Need to Know How to Code?

Brandon Holloway is a Quality Assurance Engineer at ThreeWill. He has over 10 years of QA experience in requirements gathering, risk analysis, project planning, project sizing, scheduling, testing, defect/bug tracking, management, and reporting.

If you are interested in software testing but don’t have a coding background, don’t panic! There are plenty of testing opportunities out there for people who don’t know how to code. I would even venture to say that most software testers know little about writing code. Don’t get me wrong; it can still be very valuable. For instance, knowing the inner workings can really help with test cases and start you in the right direction when running through different test scenarios.

Now, of course, there are many variables that come into play when determining whether or not a tester needs to know how to write code. For instance, while most developers do unit testing, sometimes it may be on the tester to do it instead. Or some testers may use complex automation tools that require some code. In these cases, it is probably safe to say that knowing how to code, at least minimally, is required.

I can understand what most code is doing by reading a few lines, but I don’t know how to code beyond pretty simple statements. So, speaking of my own position, knowing how to code is not a necessity. Doing web testing in a fast-paced Scrum methodology keeps me focused mainly on manual testing of the application. This may be putting myself in the user’s shoes and running through use cases, inputting invalid values for negative testing, or doing exploratory testing on multiple browsers to make sure everything is compatible. I’m lucky enough to work with some pretty smart developers who can really dig in and troubleshoot bugs at the code level when I point out what happens for the end user.
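The "invalid values for negative testing" idea doesn't require much coding skill to appreciate. Here is a toy sketch: the validator is hypothetical, standing in for any form field under test, and the positive/negative value lists show the two mindsets side by side.

```python
import re

# Hypothetical input rule standing in for any form field under test:
# accept whole-number quantities from 1 to 999, reject everything else.
def is_valid_quantity(value: str) -> bool:
    return bool(re.fullmatch(r"[1-9]\d{0,2}", value))

# Positive tests: values a well-behaved user would enter.
for good in ["1", "42", "999"]:
    assert is_valid_quantity(good)

# Negative tests: the invalid values a tester deliberately tries.
for bad in ["0", "-5", "1000", "abc", "", " 7", "3.5"]:
    assert not is_valid_quantity(bad)

print("all positive and negative checks passed")
```

Most of a tester's value is in the second list: the inputs a developer never expected anyone to enter.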

I know many excellent testers who can’t write code. In most cases, as long as you have a good understanding of the application and everything it should and shouldn’t do, you have what you need. Plus, who wants to build stuff when you can break it!


Agile Web Testing vs Waterfall Web Testing


I did a good bit of web testing within a waterfall model before I came to ThreeWill 3 years ago. Transitioning to scrum took some time, but once I got the hang of it, it was a welcome change. Some of these points may apply more to some testers than others (depending on how many applications you are juggling at a time, how many other testers you have, how long your Sprints are, etc.). Here are some of the biggest contrasts I’ve seen so far, based solely on my own experience.

Requirements Evolve

In a waterfall model, requirements are usually set in stone from the beginning. This is definitely not the case with scrum. The client is heavily involved from start to finish, so naturally tweaks are made here and there. This makes it difficult to create test cases early, because the acceptance criteria change with the requirements. Every project is different, but for the most part I’ve found that the later in the Sprint I write test cases, the less time I spend going back and making edits afterward.

Manual vs Automated

At ThreeWill, most of the testing I do is for customized functionality created specifically for a client’s needs. While automated test scripts are certainly advantageous for applications that are going to be revisited several times or where a large amount of data needs to be loaded, almost all of the testing I do is manual. Also, the fast pace of feature development Sprints means I need to focus heavily on the testing itself rather than creating scripts. Again, every project is different and there are certainly times where automated testing is beneficial (e.g. regression testing).
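When automation does pay off, it doesn’t have to be heavyweight. As a rough sketch in Python, with a hypothetical `slugify` helper standing in for real project code, a regression check can be as small as a table of pinned-down expectations that gets rerun after every change:

```python
def slugify(title):
    """Hypothetical helper under test: turn a page title into a URL slug."""
    return "-".join(title.lower().split())

# Pinned-down expectations; rerunning these after every change is the
# regression check that would be tedious to repeat by hand.
REGRESSION_CASES = [
    ("Sprint Review Notes", "sprint-review-notes"),
    ("backlog", "backlog"),
    ("  extra   spaces  ", "extra-spaces"),
]

for title, expected in REGRESSION_CASES:
    actual = slugify(title)
    assert actual == expected, f"{title!r}: got {actual!r}, expected {expected!r}"

print("all regression cases passed")
```

The value isn’t in any single run, but in rerunning the same table after each Sprint’s changes — exactly the kind of repetition that is tedious to do by hand.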

A Team Effort

To me, it just feels more like a team effort with scrum. Of course this is coming from a Tester, who back in the waterfall days wasn’t really involved until there was something ready to test. Being involved from the beginning of a project with the client, developers, and everyone else lets me know what I’m getting into early, instead of just being thrown a dozen pages of requirements later. It also allows me to better understand what exactly the clients are going for, which is very important in order for me to effectively put myself in their shoes while I’m testing.


SCURM [sic] – Quality Assurance and SCRUM


Note from Danny – As I was looking for an image to go with this episode on Shutterstock, I ran into this image that had SCRUM misspelled. Well, that’s a perfect image for Quality Assurance and SCRUM…

Danny:Hello, this is Danny Ryan. Welcome to the ThreeWill podcast. Today I have Brandon Holloway.


Brandon :Thanks for having me, Danny.


Danny:You’re welcome. I don’t know why I want to start singing.


Brandon :I don’t know.


Danny:I’m going to start singing if that’s okay.


Brandon :[Crosstalk 00:00:14].


Danny:I’m going to sing. All my questions for you will be sung.


Brandon :Interesting.


Danny:Brandon helps out with our QA, otherwise known as quality assurance. He makes sure that everything’s staying tip-top quality-wise on our projects. I’ve asked him to sit down and just tell me: what does it mean to do QA on a typical ThreeWill project? What does that mean to me?


Let’s get started with this question. When do you typically first get pulled into a project? We’re just talking generally about the typical project that you’d be a part of.


Brandon:I’ll usually get pulled in at the very beginning, even though it’s going to be a few weeks before there’s any real testing, even while it’s still in the requirements gathering or architecture phases. Sometimes it’s good for me to be there because even in the beginning I’m thinking about how a user would approach this, which is the mindset I try to put myself into. They try to involve me as early as possible so that I can provide some value.


Danny:You usually start off by taking a look at the product backlog and what the user stories are. Is that the first thing that you interact with on the project?


Brandon:Right, right, the user stories and the product backlog, that’s right. A lot of what I focus on is the acceptance criteria, though that’s definitely not the only thing I look at. Like I said, I’ve got to put myself in the user’s perspective. What does this story mean to the typical user? Then I figure out how to build test cases off of that.


A lot of times, early in the sprints and early in the project, I can go ahead and start working on test cases based mainly off of the product backlog, even well before testing is actually available to be done.


Danny:Then, as you find defects, you would log those defects. Is there a typical place where we’re putting them?


Brandon :On most projects we use SharePoint to …


Danny:SharePoint, tell me more about this. What is this?


Brandon :Yes, it’s this thing we use around here called SharePoint. It’s a pretty cool little application.


Danny:Sounds fascinating, fascinating.


Brandon:Yeah, a lot of times we just have an issues list. It’s just a basic list in SharePoint that a lot of the developers will set up alerts on, so when they see an issue coming across they can go straight in there and work on it. It’s just [crosstalk 00:02:54] …


Danny:It probably has a status column as far as where the defect …


Brandon:The normal columns: what I think the priority should be, what I think the severity should be, what type of issue it is, that kind of thing. I’ll just assign it to whoever, usually whoever is assigned to that particular product backlog item, or that feature. That can vary, though. Sometimes we’ll pass it to somebody else who may be a better fit to work on it.


Danny:You know when to retest because they’re updating that issue log.


Brandon:Correct, yeah. There are a few different statuses that it’ll go through. When I first open an issue, it’s in an Open status. When the developer fixes it, it’ll go to a Coded status. It’s not available for me to test yet; they’ve just checked in their code. Whenever they get ready for another QA deployment, they deploy it, and then it’s in the QA environment ready to be tested. Then they’ll move it to either Deployed to QA or Ready for QA. There are a couple of different statuses depending on the project.


Danny:A lot of our projects will have a separate QA environment that you work within. Is that correct?


Brandon :Yes.


Danny:Is that normal?


Brandon:Yeah, it’s pretty normal. We’ve used a couple of different cloud services at times to ….


Danny:Geez. Host failure. Host failure. Just a second. It could be anybody. I’m sorry. Now I’m going to shut this off and here we go. You were saying, sir, yes.


Brandon:The best case is probably when the clients themselves have QA environments set up that I can go in there and use. That’s usually the best method, just because they’re basically going to have their own UAT environment alongside production in that same setup. Everything should be similar; it should be more along the lines of how it’s going to be in the real world. I also have a couple of different virtual machines that I’ll host a QA site in. It depends on the project and whether they have an actual QA environment for us to work in or not.


Danny:Got you.


Brandon :We just deal with what we got.


Danny:Yeah. It just depends on the project really.


Brandon :It does.


Danny:Yeah, how does it work with the Sprint cycle? Are you testing things that were implemented in the previous cycle, and then they’re doing defect resolution, or giving some capacity toward defect resolution in certain sprints? How does that all work?


Brandon :In a perfect world …


Danny:In a perfect world. Tell me this imaginary place …


Brandon :… which it is not.


Danny:… that you call in a perfect world.


Brandon:Excuse me. For instance, we’ll say it’s a two-week sprint. In a perfect world, I would write my test cases in maybe the first week, or somewhere in that time frame. Then maybe by the beginning of the second week of that sprint, all those product backlog items or stories will be available for me to test.


I should be able to get at least a good round of testing in on those within a couple of days, maybe by Wednesday of the second week or so, assuming the sprint ends on a Friday. Whatever issues come out of that, they can be working on those issues and getting them back to me, and I can retest them so I can get everything closed out and marked as tested by the last day, or the day before the last day, of the sprint. That’s a perfect world.


Danny:That’s a perfect world.


Brandon:It doesn’t always happen like that, of course. There are just a lot of factors, a lot of movement going on everywhere. People change priorities of stories, and the client’s involved pretty heavily in Scrum. We just adapt with it. I adapt just like everybody else does. Sometimes that means maybe a couple of product backlog items for a sprint may just be pushed all the way to the next sprint.


Sometimes I may get them on the actual last day of the sprint. I at least get what testing I can done, like maybe a good smoke test of the functionality, or just some ad hoc testing to hit the points that I think may be most prone to have issues; at least as much of a first round of testing as I can before the beginning of the next week rolls around and we have a sprint review.


From there I’ll clean up all the test cases and run the rest of the tests and everything. It may be into the very beginning of the next sprint. That happens sometimes. It’s not a perfect world.


Danny:It sounds like a lot of the type of testing that you’re doing, is it more like end user testing, really trying to focus in on what the experience that …


Brandon :It is.


Danny:… the end user would have with …?


Brandon :Right. It’s pretty much all manual testing and it’s mainly use cases. A lot of the reason for that is balancing a couple different projects. Whatever my capacity is on a particular project, I have to base my testing scope around that capacity. There’s always more testing to be done. You could test forever on something and still not 100% cover everything. It’s just the nature of it.


Basically, I build off of what my capacity is for this project, what we decided on with the client, and out of that amount of hours I’m going to get the most effective testing done that I can. That’s pretty much going to be manual use case testing; not a lot of tools involved. It’s more of a time-sensitive thing. We want to make sure we hit everything that we can from a user’s perspective so that ….


Danny:Got you. It sounds like you’re across multiple projects. Since we’re building a lot of web-based stuff, you probably have to come in and get a list of which browsers are supported, that sort of stuff as well.


Brandon:Right, yeah, that’s another big part of it. Sometimes there’ll be a project where it’s Internet Explorer 10 only. That’s the only browser I have to test in. Many times they want to support it on Chrome. They want IE10 and up, which includes IE11, and in some cases maybe Microsoft Edge will start coming in since it’s a newer browser.


Sometimes Firefox needs to be supported. It just depends on the project. A lot of times there is definitely cross-browser testing. IE, in particular, tends to behave differently in some situations than, say, Chrome and [crosstalk 00:10:02] … Firefox.


Yeah. IE is fun. There’s always that to deal with.


Danny:Are you crying? There’s no need to cry. Did IE bring tears to your eye? Don’t look like tears of joy either.


Brandon :My eye’s just watering.


Danny:Let me ask a couple more questions and I’ll let you go. I’ll let you leave. What do you find is the most challenging part of what you do?


Brandon:Definitely one thing, just from the nature of Agile Scrum, is that you don’t get requirements that are as concrete to test against, which for testing is very important. It’s important for everybody on my projects. I base what I build in my test cases off of what is supposed to happen in these situations.


You don’t always have very detailed requirements in Scrum, at least in my experience. That’s one thing. There’s a lot more interaction between myself and the developers. I ask questions. That’s why I like how everybody’s so open here. It’s just easy to get with somebody about something like that. I’ll definitely need things clarified.


Sometimes it takes a while for me to get the full picture of everything without those hard requirements. That’s one of the main challenges, and of course just time in general, when things get pushed back. Testing is at the end of everything.


Sometimes I find myself working late on a Friday, or a little bit on the weekend, which is not a big deal. That definitely can come into play, especially later in projects, depending on what needs to be put into the backlog at the last minute or taken out.


Danny:I think when we look at what makes a great QA person, they have to come in and take some initiative. You’re coming into a project and really trying to understand what we’re trying to do for the client as well. I think you do a great job at that, from what I’m hearing from other people. You’ve tested my Web site.


Brandon:I’m about halfway done with it. I’ve got two lists of things.


Danny:For people who come to it, it’s half tested. How does that make you feel? It’s half tested. Don’t worry. I change things daily. I mean, I don’t change the name of the company daily, but I do change a lot of stuff on my Web site daily just to keep things moving.


Brandon :I didn’t realize how much there was on that Web site.


Danny:It’s a dynamic site. It’s dynamic.


Brandon :It’s pretty large.


Danny:Maybe it’s a softball, maybe it’s not a softball question, but what do you love about what you do?


Brandon:Man, I just don’t know. I’m a perfectionist in a way. I hate to say it, but it’s almost like a treasure hunt. It’s fun. There’s something in here that I’m going to find that’s not doing what it’s supposed to be doing.


I like finding things like that. Not mistakes; that’s a bad word for it. I like finding bugs. It’s just like a hunt for me. When I first came out of college with my degree, I didn’t even know there was a software tester profession.


I have a cousin who did that at a company back in Columbus. I was like, “Man, this sounds like something right up my alley.” I started doing it, and I love it. You just never know what you’re going to come across. Working with great developers obviously helps a lot.


Sometimes in the past, at other jobs, you’d have developers who weren’t very willing to …. Some look at QA as, “Oh, that guy, my nemesis.” It’s not like that here. Everybody’s on the same team. It’s great.


Danny:I heard, within the past week, Eric was talking about how much he appreciates what you do and how he feels better after you’ve tested something. It’s wonderful having you around to do that. You’re an important part of each one of our projects. And I love that you love what you do. It is treasure hunting. You’re trying to go figure it out.


Brandon :Can’t think of another way to put it. [Crosstalk 00:14:41].


Danny:It’s fun. No, it’s a great analogy. It’s a wonderful analogy. You’re trying to go in and figure out, to keep talking about treasure, what the big things are that we might have overlooked as developers.


Brandon:A lot of times I have to put myself into the end user’s shoes; that’s what I’m doing. I’m thinking, “What would a user do here? Does this look right from the user’s perspective?” Just giving that whole side of things, I like it.


Danny:We love having you here too. We can really think through what somebody from Alabama would do.


Brandon :Here we go with this again.


Danny:For those listening, that’s not an inside joke. He’s from Phenix City, Alabama. Yes, he’s representing Alabama in the ThreeWill family. We now have a southeast …


Brandon:Not the school. Not Alabama the school.


Danny:Not the school. Yeah, let’s get that straight.


Brandon:Just the state.


Danny:I’m sorry. You’re representing the state of Alabama.


Brandon :We’ll say that, yeah.


Danny:The school of Auburn.


Brandon :Auburn.


Danny:Geez, there’s too many of you guys around here. Why don’t you go hang out with our other Auburn friends and get the heck out of here? That’s the end of this episode. Let’s get out of here.


Brandon :[Crosstalk 00:16:00].


Danny:Yeah, I’m going to kick the …. Here’s where I’d kick the microphone over and like, “Get out of here.” Slam the door.


Brandon :Y’all are the guys hiring the Auburn folks. Do you know what I mean? That should tell you something.


Danny:With that, I think we’ll wrap up this episode. Thanks so much for listening everybody. Brandon, thank you for your time.


Brandon :Thank you.


Danny:Go get back to testing. You’ve got a long ride home from here. You’re heading …


Brandon :Yeah, a couple hours.


Danny:… going to head home. A couple hours.


Brandon :Back to Alabama.


Danny:All right. Just test this like you were someone from Alabama. Thank you for spending your time here, Brandon.


Brandon :Thanks.


Danny:Take care. Bye-bye.



The Five Most Common Web Defects


In the last few years as a software tester I’ve done a lot of web testing. I’ve come across all kinds of defects, but some of the most common web defects are found without digging very deep at all. Here are 5 common web defects that can be found at, or just below, the surface.

1. Grammatical/Spelling Errors

Ahh… These are the ultimate eye-rollers for developers. However, they are still defects. While they usually have no effect on functionality, they WILL eventually be noticed if they slip through. On the positive side, they are (usually) super easy to fix. A good way to boost your defect count (kidding, of course)!

2. Cosmetic Issues

This can be anything from branding that is misplaced to a header that becomes misaligned when the browser is widened (assuming the website is supposed to be responsive). Many times these are found during cross-browser testing (see below). Again, these will be noticed rather quickly by end users if they slip through. Nobody wants to pay good money for an ugly website!

3. Cross-Browser Issues

These are common because sometimes developers may use one particular type of browser while coding and testing in their dev environment. This is usually a Chrome vs IE thing, at least in my experience. I’ve been told that in development these two can be vastly different, causing some headaches for developers.

4. Form Validation

Many times there are so many rules about the types of characters allowed in input forms that you are bound to find some of these. Not to mention, many times you have client-side AND server-side validation, which sometimes don’t agree with each other.
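As a rough illustration of how those two layers can drift apart, here is a small hypothetical sketch in Python; the username rules below are invented for this example, not taken from any real project:

```python
import re

# Hypothetical rules: the client-side check and the server-side check
# were written separately and don't quite agree -- a classic source of
# form-validation defects.
def client_accepts(username):
    # Client: 3-12 characters, letters and digits only
    return re.fullmatch(r"[A-Za-z0-9]{3,12}", username) is not None

def server_accepts(username):
    # Server: 3-10 characters, but also allows underscores
    return re.fullmatch(r"[A-Za-z0-9_]{3,10}", username) is not None

# Negative testing: probe boundary values and look for disagreements.
probes = ["ab", "abc", "user_name", "abcdefghijk", "abcdefghijkl", "a" * 13]
for value in probes:
    c, s = client_accepts(value), server_accepts(value)
    if c != s:
        print(f"{value!r}: client={'pass' if c else 'reject'}, "
              f"server={'pass' if s else 'reject'}")
```

Probing the boundary values of each rule (lengths 2, 3, 10, 11, 12, and 13 here) is a quick way to surface exactly where the two validators disagree.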

5. Overall Usability

This can be overlooked easily. Sometimes it’s hard for developers to step back and really look at the system as a whole from the end user’s perspective. The more trained the end users will be on the system, the less of a problem this should be. As a tester, the most important thing to remember is that, at the end of the day, you are playing the role of THAT user.

What would you add to this list?  Leave a comment below.


3 Things to Consider When Testing Out of The Box SharePoint Features


Testing Out of The Box SharePoint Features

In the last couple of years, I’ve done a decent amount of QA testing that either fully or partially involved SharePoint. One thing that is almost always present, regardless of the size of the project, is the use (at least to some degree) of Out of the Box (OOTB) features. These are features that “come with” SharePoint, ready to use immediately without custom code. When testing OOTB features, here are 3 things to consider:

1. Integration

While OOTB features are pretty reliable when used as originally intended, it is very common for some of them to be integrated with custom code which extends their capabilities. It is important in these situations not to focus solely on testing the new code, but to test the full integration with the OOTB features. Also, what if the original OOTB features are not totally dependent on the new code, meaning you can use them for their newly expanded capabilities but can still use them as they were originally intended, as well? In this situation, you may think testing them as they worked before would be a waste of time, since without the new pieces it is still OOTB functionality. This isn’t necessarily true. You never know what the new code could be touching on the back end, so some regression testing should be in order.

2. Usability

While it seems straightforward, usability can easily be overlooked. Sometimes customers will decide to go with an OOTB feature for something they need. This could be to save costs or because exactly what they need just happens to be there already. Regardless of the reason, the OOTB feature in question should always be tested for usability. Just because they want to use a particular “tried and true” feature that comes ready to use doesn’t mean that it will 100% fit their needs. You need some good, solid use cases (both positive and negative) in these situations. Many OOTB features have multiple possible configurations, whether we are talking about workflows, alerts, forms, or simple views. Do your due diligence and make sure that the customer will get all that they need with this feature and its “as-is” functionality and configurability.
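One lightweight way to organize those positive and negative use cases is a small data-driven table that can be walked through, manually or in code. The sketch below is hypothetical Python, with a stand-in `configure_alert` function rather than real SharePoint behavior:

```python
# Hypothetical example: positive and negative use cases for an
# alert-configuration screen (placeholder behavior, not real SharePoint).
SUPPORTED_FREQUENCIES = {"immediate", "daily", "weekly"}

def configure_alert(settings):
    """Stand-in for the feature under test: accept only supported frequencies."""
    return settings.get("frequency") in SUPPORTED_FREQUENCIES

# Each row: (description, input settings, expected outcome)
use_cases = [
    ("positive: daily summary alert",   {"frequency": "daily"},     True),
    ("positive: immediate alert",       {"frequency": "immediate"}, True),
    ("negative: unsupported frequency", {"frequency": "hourly"},    False),
    ("negative: missing frequency",     {},                         False),
]

for description, settings, expected in use_cases:
    actual = configure_alert(settings)
    status = "PASS" if actual == expected else "FAIL"
    print(f"{status}: {description}")
```

Keeping the expected outcome beside each case makes it obvious which configurations were actually exercised, and which “as-is” behaviors the customer is depending on.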

3. You Never Know

OOTB functionality always works like it’s supposed to, right? Well, mostly. I have seen some funky things in some OOTB features that made me scratch my head a little. Maybe something is sort of misleading; maybe it’s counterintuitive. Googling such things almost always turns up something like, “that’s just how it works.” The point is, no matter how simple the OOTB functionality in question may seem, if it is a part of what the customer will be using on a regular basis, test it anyway. Will you find anything that hasn’t already been found and discussed a hundred times in QA and/or Dev forums across the internet and eventually deemed “acceptable”? Probably not… but running at least a handful of high-level test cases can’t hurt. You never know.

What Would You Add to This List?

Leave a comment below…


Modern QA Practices – Scrum, SharePoint and Transparency


Modern QA Practices

I started my Software Testing career working for a large credit card processing company where I did mostly mainframe testing. I was one of maybe 60-70 QA Analysts utilizing waterfall methodology for mostly internal project work. Today, I’m working for a much smaller consulting company, testing mostly Microsoft SharePoint and utilizing Scrum. Needless to say, these are vastly different worlds for any role in software development. I’d like to focus on the QA role and share some thoughts based on my own experience.

Waterfall vs Scrum

I think the biggest benefit of Waterfall for testers is the amount of documentation. A tester can never have too many requirements, specs, notes, etc. to assist with test cases and learn how the system should work. With Scrum, documentation is still available to an extent, but we must rely more on constant communication with developers, product owners, and others to make sure we have everything that we need. On the flip side, though, this constant communication has proven to me time after time to be extremely valuable. Not only do we learn things from each other by informally chatting about this feature or that feature, but it also kills the whole “developer vs tester” mindset that some people still carry. I’ve always felt like I was more part of a team with Scrum.

Another big difference is that most of the time with Waterfall, we don’t get “working software” until towards the end of the project. This allows time for all test cases to be written and ready to run. There should not be any new features popping up at the last minute. Most of the time testers will know what to expect and be 100% ready to test as soon as the first build is ready. However, this can cause a major time crunch at the end if there are too many defects uncovered, especially showstoppers. Scrum mostly prevents this by giving us working software very early in the project, although not fully complete until later. This allows us to uncover issues much earlier and gain hands-on experience from the end user’s perspective along the way. The main downside I’ve experienced is a large increase in regression testing, as should be expected.

Mainframe vs SharePoint

I’ll just come out and say it. To me, mainframe testing can be redundant and boring. In fact, I don’t know if I’ve met another tester who would rather do mainframe testing than web testing. Most of the testing I do with SharePoint focuses on customizations requested by customers such as custom forms (with validation), custom lists, custom permissions, branding, or any other features that don’t come OOTB (Out Of The Box) with SharePoint. Many times the testing also involves integration between SharePoint and other applications. The variety of needs by various customers ensures that redundancy is not a concern. Our developers can do some pretty cool stuff with SharePoint, and it’s always fun to come up with creative ways to try to break (I mean… to test…) their code.

Projects I’ve tested range anywhere from setting up a simple web site in SharePoint 2010 for a customer with mostly OOTB features to a very elaborate on-boarding system for a sports team which utilizes Microsoft Azure on the front end and Office 365 and SharePoint 2013 to manage backend data. Although I’ve done some backend and integration testing that the end user may rarely (if ever) have to do, most of my testing with SharePoint has been from that end user’s perspective. I often have to put myself in the customers’ shoes and make sure everything really “makes sense” to ensure they are getting everything that they want. Sometimes I may even find that something as simple as enabling a certain OOTB feature may help the user get more out of the application.

I’m pretty sure neither the mainframe world nor SharePoint is going away anytime soon, but I’m much more excited to see what the future holds for the latter.

Internal Projects vs Consulting

While almost every project I worked on at the larger company was internal, the vast majority of my testing nowadays has been for our customers. The biggest differences I’ve seen are in communication and transparency. Reporting test results was pretty cut and dried internally. Everything was done the same way, time after time. With customers, you never know how interested in testing details they may be, so you need to be prepared at all times to be transparent about your work. I’ve been on projects where a small test summary including counts of Test Cases, Test Runs, and Issues was all that they wanted. I’ve also been on projects where they wanted to review each test case and even utilize them in some of their User Acceptance Testing. Sometimes they will contact me directly regarding testing.

I also feel like I take more pride in my work nowadays. When a project is a success, hearing the customer compliment the team and rave about how much easier the application has made their life really makes me feel like I’ve accomplished something great, even though I’m just a small part of that team. It’s not that this same feeling isn’t possible with internal work, but hearing from someone outside of the company really magnifies that feeling of knowing I helped make a difference.

What are your experiences with the new world of QA?  Leave a comment below…
