Quality Assurance Levels – Which One is Right for Your Project?

Brandon Holloway is a Quality Assurance Engineer at ThreeWill. He has over 10 years of QA experience in requirements gathering, risk analysis, project planning, project sizing, scheduling, testing, defect/bug tracking, management, and reporting.

Testing is an investment, and just like with our health, prevention is key. It is cheaper to prevent problems than it is to repair them. Adding Quality Assurance to the ThreeWill practice creates a better product, increases client satisfaction, improves efficiency, and strengthens the company’s reputation. At ThreeWill, we offer three “Levels” of QA services.

Silver Level

This is the most lightweight approach we offer. We will conduct a Feature/Function Risk Assessment for each Product Backlog item. We will provide test cases in a predefined format which include, but are not limited to, test scenarios and high-priority test cases. These test cases will be reviewed by the developer and/or the client and will be executed during each sprint for each new piece of functionality delivered. Full testing will be done on 1 operating system and 1 browser type/version (assuming the necessary environments are available), and we will also perform smoke testing on 1 additional browser if needed. Full regression testing will be done on the release candidate, covering only high-priority test cases. Defects will be tracked on the ThreeWill extranet site.
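
As a rough illustration (not our actual assessment template), a Feature/Function Risk Assessment often boils down to scoring each Product Backlog item by how likely it is to break and how bad it would be if it did, then testing the riskiest items most thoroughly. The items and weights below are hypothetical:

```python
# Hypothetical risk-scoring sketch: rank Product Backlog items so the
# highest-risk features get the deepest test coverage first.
backlog = [
    # (item, likelihood of defects 1-5, impact if it breaks 1-5)
    ("User login form", 3, 5),
    ("Report export", 4, 3),
    ("Footer branding", 2, 1),
]

def risk_score(likelihood: int, impact: int) -> int:
    """Simple multiplicative risk score; higher means test more thoroughly."""
    return likelihood * impact

for item, likelihood, impact in sorted(backlog, key=lambda r: -risk_score(r[1], r[2])):
    print(f"{item}: risk {risk_score(likelihood, impact)}")
```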

Gold Level

This is the middle-of-the-road approach. It includes everything from the Silver Level, and more. We provide the additional option of having test cases created in the client’s repository rather than in our own predefined format. We will also offer a Traceability Matrix upon request. We will fully test up to 2 browser types/versions and smoke test up to 2 additional browsers, rather than just 1 of each. While defects are tracked in our own extranet site at the Silver Level, at the Gold Level we offer the option to have them stored and tracked in the client’s current bug tracking system instead. Also included at the Gold Level are test reports which contain test results, a count of open defects by priority for each Product Backlog item, and any other relevant test tracking information.

Platinum Level

This is the total package! It includes everything from the Silver and Gold Levels, plus more. More test cases are provided, and in more detail (complete test steps, expected results, preconditions, High/Medium/Low priority test cases, and positive/negative test cases). Not only will full regression testing be performed on the release candidate, but we will also perform limited regression testing during each sprint. Full testing is offered on up to 2 operating systems and 4 browser types/versions, and smoke testing is still performed on up to 2 additional browsers. Any other relevant testing information that may be available will also be shared.


Top Challenges when Testing Mobile Features for Web Apps

With mobile devices relied on more today than ever, it has become increasingly important for web applications to work well on them. Testing on these devices comes with its own set of challenges for QA Engineers. Here are 3 of the biggest challenges I’ve encountered while testing software for mobile devices, based on my admittedly limited experience.

Note: These challenges relate to testing a web application that was meant primarily for a desktop browser such as IE or Chrome, but is expected to be compatible with mobile devices. I’m not referring to applications built specifically for mobile devices, such as something you would download from the App Store.

1. High Volume of Test Runs

Most of the time when a customer wants an application to be compatible with mobile devices, they aren’t just referring to a specific type of phone or tablet. Even ignoring the separate challenge of getting access to many different devices, running the same test cases repeatedly to cover every device you have adds a lot of time to the testing effort. Many of the issues you will find are related to how a device’s screen resolution interacts with the responsiveness of the application, which means you need devices with a range of screen sizes: smaller phones, bigger phones, tablets, and so on. You have to test each of these in both portrait and landscape mode. You may also want to test on Wi-Fi vs. LTE vs. 3G. It just keeps adding up.
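
To make that multiplication concrete, here is a tiny sketch (the device, orientation, and network lists are made-up examples) that counts how many runs a single test case can turn into:

```python
from itertools import product

# Hypothetical test matrix; substitute whatever the client actually supports.
devices = ["iPhone SE", "iPhone 14", "Pixel 7", "Galaxy Tab", "iPad"]
orientations = ["portrait", "landscape"]
networks = ["Wi-Fi", "LTE", "3G"]

runs = list(product(devices, orientations, networks))
print(f"One test case becomes {len(runs)} runs")  # 5 devices * 2 orientations * 3 networks = 30
```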

Chrome’s desktop browser does have a pretty cool mobile device emulator built in. It is pretty reliable for testing responsiveness for various resolutions, but it isn’t perfect. It is always better to have the actual devices on hand. I have found several responsiveness bugs on the actual devices that looked fine in Chrome’s emulator. Also remember that more devices means more regression testing. Same thing for testing bug fixes. Everything is multiplied.
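
That emulator can also be driven from test scripts. For example, Selenium’s ChromeDriver accepts a mobileEmulation option, which makes a quick responsiveness pass easy before moving to real hardware. A minimal sketch (the device name and URL are placeholders, and the available device names depend on your Chrome version):

```python
from selenium import webdriver

# Emulate a mobile device in desktop Chrome; useful for a first pass at
# responsiveness bugs, but not a substitute for testing on real hardware.
options = webdriver.ChromeOptions()
options.add_experimental_option("mobileEmulation", {"deviceName": "iPhone X"})

driver = webdriver.Chrome(options=options)
try:
    driver.get("https://example.com")  # placeholder URL
    print("Emulated viewport width:", driver.execute_script("return window.innerWidth"))
finally:
    driver.quit()
```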

2. Different Devices Are, Well, Different

While it seems that mobile devices with similar resolutions should behave similarly with a web application, that isn’t necessarily true. I’ve found that iPhones and many phones running Android seem to work fairly well most of the time. Then there are Windows phones. I’ve found some really head-scratching bugs on those that don’t show up anywhere else. I can’t let Apple and Samsung off the hook completely though, because they each have their own quirks; I’ve found both iPad and Galaxy tablets to be more bug-prone than the phones. And keep in mind the developers will have their own set of challenges tracking down the root cause of a bug for this phone vs. that phone vs. this tablet, etc. It can become a juggling act for everyone.

3. Documentation

This really goes hand in hand with #1. Scrum fits mobile testing well, mainly because of its more lightweight approach to documentation and requirements. With so many devices and so many potential issues tied to factors on those devices, you can only be so detailed on paper if you want to test sufficiently. Exploratory and ad hoc testing are huge parts of mobile testing.

What has been your experience with testing mobile apps?  Leave comments/questions below and I’ll respond to them.


Do Software Testers Need to Know How to Code?

If you are interested in software testing but don’t have a coding background, don’t panic! There are plenty of testing opportunities out there for people who don’t know how to code. I would even venture to say that most software testers know little about writing code. Don’t get me wrong; coding knowledge can still be very valuable. For instance, knowing the inner workings of an application can really help with test cases and point you in the right direction when running through different test scenarios.

Now, of course, there are many variables that come into play when determining whether or not a tester needs to know how to write code. For instance, while most developers do unit testing, sometimes it may be on the tester to do it instead. Or some testers may use complex automation tools that require some code. In these cases, it is probably safe to say that knowing how to code, at least minimally, is required.

I can understand what most code is doing by reading a few lines, but I don’t know how to write code beyond pretty simple statements. So, speaking for myself and my own position, knowing how to code is not a necessity. Doing web testing in a fast-paced Scrum methodology keeps me focused mainly on manual testing of the application. This may mean putting myself in the user’s shoes and running through use cases, inputting invalid values for negative testing, or doing exploratory testing on multiple browsers to make sure everything is compatible. I’m lucky enough to work with some pretty smart developers who can really dig in and troubleshoot bugs at the code level when I point out what happens for the end user.

I know many excellent testers who can’t write code. In most cases, as long as you have a good understanding of the application and what it should and shouldn’t do, you have what you need. Plus, who wants to build stuff when you can break it?


Agile Web Testing vs Waterfall Web Testing

I did a good bit of web testing within a waterfall model before I came to ThreeWill 3 years ago. Transitioning to Scrum took some time, but once I got the hang of it, it was a welcome change. Some of these points may apply more to some testers than others (depending on how many applications you are juggling at a time, how many other testers you have, how long your Sprints are, etc.). Here are some of the biggest contrasts I’ve seen so far, based solely on my own experience.

Requirements Evolve

In a waterfall model, requirements are usually set in stone from the beginning. This is definitely not the case with Scrum. The client is heavily involved from start to finish, so naturally tweaks are made here and there. This makes it difficult to create test cases early, because the acceptance criteria change along with the requirements. Every project is different, but for the most part I’ve found that the later in the Sprint I write test cases, the less time I spend afterward going back and making edits.

Manual vs Automated

At ThreeWill, most of the testing I do is for customized functionality created specifically for a client’s needs. While automated test scripts are certainly advantageous for applications that are going to be revisited several times or where a large amount of data needs to be loaded, almost all of the testing I do is manual. Also, the fast pace of feature development Sprints means I need to focus heavily on the testing itself rather than creating scripts. Again, every project is different and there are certainly times where automated testing is beneficial (e.g. regression testing).
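
When automation does pay off, such as a regression pass that gets re-run every sprint or on each release candidate, even a small parametrized check can save repeated manual passes. A minimal sketch using pytest and requests (the base URL and page list are placeholders pointing at a QA environment):

```python
import pytest
import requests

BASE_URL = "https://example.com"      # placeholder; point at the QA environment
PAGES = ["/", "/about", "/contact"]   # hypothetical pages to re-check every release

@pytest.mark.parametrize("path", PAGES)
def test_page_loads(path):
    """Regression smoke check: each key page should still respond successfully."""
    response = requests.get(BASE_URL + path, timeout=10)
    assert response.status_code == 200
```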

Collaboration

To me, it just feels more like a team effort with Scrum. Of course, this is coming from a tester who, back in the waterfall days, wasn’t really involved until there was something ready to test. Being involved from the beginning of a project with the client, developers, and everyone else lets me know what I’m getting into early, instead of just being thrown a dozen pages of requirements later. It also allows me to better understand what exactly the clients are going for, which is very important in order for me to effectively put myself in their shoes while I’m testing.


SCURM [sic] – Quality Assurance and SCRUM

Note from Danny – As I was looking for an image to go with this episode on Shutterstock, I ran into this image that had SCRUM misspelled. Well, that’s a perfect image for Quality Assurance and SCRUM…

Danny:Hello, this is Danny Ryan. Welcome to the ThreeWill podcast. Today I have Brandon Holloway.

 

Brandon :Thanks for having me, Danny.

 

Danny:You’re welcome. I don’t know why I want to start singing.

 

Brandon :I don’t know.

 

Danny:I’m going to start singing if that’s okay.

 

Brandon :[Crosstalk 00:00:14].

 

Danny:I’m going to sing. All my questions for you will be sung.

 

Brandon :Interesting.

 

Danny:Brandon helps out with our QA, otherwise known as quality assurance. He makes sure that everything’s staying tiptop quality-wise on our projects. I have asked him to sit down and just tell me what does it mean to do QA on a typical ThreeWill project? What does that mean to me?

 

Let’s get started off with this question. When do you typically get pulled in first for the project? Just we’re talking about generally for the typical project that you’d be a part of.

 

Brandon :I’ll usually get pulled in at the very beginning, even though it’s going to be a few weeks before there’s any real testing, even if it’s still in the requirements gathering or architecture phases. Sometimes it’s good for me to be there because from the beginning I’m thinking about how a user would approach this, which is the mindset I try to put myself into. They try to involve me as early as possible so that I can provide some value.

 

Danny:You usually start off by taking a look at the product backlog and what the user stories are. Is that the first thing that you interact with on the project?

 

Brandon :Right, right, the user stories and the product backlog, that’s right. A lot of what I focus on is the acceptance criteria, which is definitely not the only thing I look at. Like I said, I’ve got to put myself in the user’s perspective. What does this story mean to the typical user? Then I figure out how to build test cases off of that.

 

A lot of times early in the sprints and early in the project I can go ahead and start working on test cases based off of mainly the product backlog, even well before testing is actually available to be done.

 

Danny:Then you would, as you find defects you would log those defects. Is there a typical place where we’re putting those defects?

 

Brandon :On most projects we use SharePoint to …

 

Danny:SharePoint, tell me more about this. What is this?

 

Brandon :Yes, it’s this thing we use around here called SharePoint. It’s a pretty cool little application.

 

Danny:Sounds fascinating, fascinating.

 

Brandon :Yeah, a lot of times we just have an issues list. It’s just a basic list in SharePoint that a lot of the developers will set up alerts for, so when they see an issue coming across they can go straight in there and work on it. It’s just [crosstalk 00:02:54] …

 

Danny:It probably has a status column as far as where the defect …

 

Brandon :The normal stuff: what I think the priority should be, what I think the severity should be, what type of issue it is, that kind of thing. Then I’ll assign it to whoever, usually whoever is assigned to that particular product backlog item or that feature, though that can vary. Sometimes we’ll pass it to somebody else who may be a better fit to work on it.

 

Danny:You know when to retest because they’re updating that issue log.

 

Brandon :Correct, yeah. There’s a few different statuses that it’ll go through. When I first open an issue, it’s in an Open status. When the developer fixes it, it’ll go to a Coded status. It’s not available for me to test yet; they’ve just checked in their code. Whenever they get ready for another QA deployment, they deploy it and then it’s in the QA environment ready to be tested. Then they’ll move it to either Deployed to QA or Ready for QA. There’s a couple of different statuses on the project.

 

Danny:A lot of our projects will have a separate QA environment that you work within. Is that correct?

 

Brandon :Yes.

 

Danny:Is that normal?

 

Brandon :Yeah, it’s pretty normal. We’ve used a couple of different cloud services, one time to ….

 

Danny:Geez. Host failure. Host failure. Just a second. It could be anybody. I’m sorry. Now I’m going to shut this off and here we go. You were saying, sir, yes.

 

Brandon :The best case is probably when the clients themselves have QA environments set up that I can go in there and use. That’s usually the best method, just because that’s basically where they’re going to have their own UAT environment, in the same environment as production. Everything should be similar; it should be more along the lines of how it’s going to be in the real world. I also have a couple of different virtual machines that I’ll host a QA site in. It depends on the project and whether they have an actual QA environment for us to work in or not.

 

Danny:Got you.

 

Brandon :We just deal with what we got.

 

Danny:Yeah. It just depends on the project really.

 

Brandon :It does.

 

Danny:Yeah, how does it work with …? Are you testing? Within the Sprint cycle, are you testing things that were implemented in the previous cycle and then they’re doing a defect resolution, or giving some capacity towards defect resolution in certain sprints? How does that all work?

 

Brandon :In a perfect world …

 

Danny:In a perfect world. Tell me this imaginary place …

 

Brandon :… which it is not.

 

Danny:… that you call in a perfect world.

 

Brandon :Excuse me. Say it’s a …. For instance, we’ll say a 2-week sprint. In a perfect world, I would write my test cases maybe the first week or somewhere in that time frame. Maybe by the beginning of the second week of that sprint, all those product backlog items or stories will be available for me to test.

 

I should be able to get those at least a good round of testing in within a couple days. Maybe by Wednesday of the second week, or so, assuming the sprint ends on a Friday. Whatever issues come out of that, they can be working those issues and get them back to me and I can retest them so I can get everything closed out and everything marked as tested by the last day, or the day before the last day of the sprint. That’s a perfect world.

 

Danny:That’s a perfect world.

 

Brandon :It doesn’t always happen like that, of course. There’s just a lot of factors, a lot of movement going on everywhere. People change priorities of stories and the client’s involved pretty heavily in Scrum. We just adapt with it. I adapt just like everybody else does. Sometimes that means maybe a couple of product backlog items for a sprint may just be all the way pushed to the next sprint.

 

Sometimes I may get them on the actual last day of the sprint. I at least get what testing I can done, like maybe a good smoke test of the functionality, or just some ad hoc testing to hit the points that I think may be most prone to have issues, at least getting as much of a first round of testing done as I can before the beginning of the next week rolls around and we have a sprint review.

 

From there I’ll clean up all the test cases and run the rest of the tests and everything. It may be into the very beginning of the next sprint. That happens sometimes. It’s not a perfect world.

 

Danny:It sounds like a lot of the type of testing that you’re doing, is it more like end user testing, really trying to focus in on what the experience that …

 

Brandon :It is.

 

Danny:… the end user would have with …?

 

Brandon :Right. It’s pretty much all manual testing and it’s mainly use cases. A lot of the reason for that is balancing a couple different projects. Whatever my capacity is on a particular project, I have to base my testing scope around that capacity. There’s always more testing to be done. You could test forever on something and still not 100% cover everything. It’s just the nature of it.

 

Basically, I build off what my capacity is for this project and what we decided on with the client, and out of that amount of hours I’m going to get the most effective testing done that I can. That’s pretty much going to be manual use case testing; not a lot of tools involved. It’s more of a time-sensitive thing. We want to make sure we hit everything that we can from a user’s perspective so that ….

 

Danny:Got you. It sounds like you’re across multiple projects. You probably have to …. You’re probably coming in since we’re building a lot of web-based stuff, probably have to get a list of which browsers are supported, that sort of stuff as well.

 

Brandon :Right, yeah, that’s another big part of it. Sometimes there’ll be a project where it’s Internet Explorer 10 only. That’s the only browser I have to test in. Many times, they want to support it on Chrome. They want IE10 and up, which includes IE11, and in some cases maybe Microsoft Edge will start coming in since it’s a newer browser.

 

Sometimes Firefox needs to be supported. It just depends on the project. A lot of times there is definitely cross browser testing. IE, in particular, tends to behave differently in some situations than say Chrome and most [crosstalk 00:10:02] …

 

Danny:No.

 

Brandon :… Firefox.

 

Danny:No.

 

Brandon :Yeah. IE is fun. There’s always that to deal with.

 

Danny:Are you crying? There’s no need to cry. Did IE bring tears to your eye? Don’t look like tears of joy either.

 

Brandon :My eye’s just watering.

 

Danny:Tell me a couple more questions and I’ll let you go. I’ll let you leave. What do you find is the most challenging part of what you do?

 

Brandon :Definitely one thing is just the nature of Agile Scrum: you don’t get requirements that are as concrete to test against, which for testing is very important. It’s important for everybody on my project. I base what I build into my test cases on what is supposed to happen in these situations.

 

You don’t always have very detailed requirements in Scrum, at least in my experience. That’s one thing. There’s a lot more interaction between myself and the developers. I ask questions. I like how everybody’s so open here; it’s just easy to get with somebody about something like that. I’ll definitely need things clarified.

 

Sometimes it takes a while for me to get the full picture of everything without those hard requirements. That’s one of the main challenges, and of course there’s just time in general, when things get pushed back. Testing is at the end of everything.

 

Sometimes I find myself working late on a Friday, or a little bit on the weekend, which is not a big deal. That definitely can come into play, especially later in projects, depending on what needs to be put into the backlog at the last minute or taken out.

 

Danny:I think when we look at what makes a great QA person, they have to come in and take some initiative. You’re coming into a project and really trying to understand what we’re trying to do for the client as well. I think you do a great job at that from what I’m hearing from other people. You’ve tested my Web site.

 

Brandon :I’m about half way done with it. I got 2 lists of things.

 

Danny:For people who come to threewill.com, it’s half tested. How’s that make you feel? It’s half tested. Don’t worry. I change things daily. I mean I don’t change the name of the company daily, but I do change a lot of stuff on my Web site daily just to keep things moving.

 

Brandon :I didn’t realize how much there was on that Web site.

 

Danny:It’s a dynamic site. It’s a dynamic.

 

Brandon :It’s pretty large.

 

Danny:Maybe it’s a softball, maybe it’s not a softball question, but what do you love about what you do?

 

Brandon :Man, I just don’t know. It’s like I’m a perfectionist in a way. I hate to say it, but it’s like a treasure hunt almost. It’s fun. There’s something in here that I’m going to find that’s not doing what it’s supposed to be doing.

 

It’s finding things like that. Not mistakes, that’s a bad word for it. I like finding bugs. It’s just like a hunt for me. Before I first started testing, coming out of college with my degree, I didn’t even know there was a profession of software tester, or whatever.

 

I have a cousin that did that at a company back in Columbus. I was like, “Man, this sounds like something right up my alley.” I started doing it. I love it. It’s just you never know what you’re going to come across. Working with great developers obviously helps a lot.

 

Sometimes in the past, other jobs sometimes you don’t have developers that are very willing to …. Some look at QA as, “Oh, that guy, my nemesis.” It’s not like that here. Everybody’s on the same team. It’s great.

 

Danny:I heard, within the past week, Eric was talking about how much he appreciates what you do and how he feels better after you’ve tested something. It’s wonderful having you around to do that. You’re an important part of each one of our projects. I love that you love what you do. It is treasure hunting. You’re trying to go figure it out.

 

Brandon :Can’t think of another way to put it. [Crosstalk 00:14:41].

 

Danny:It’s fun. No, it’s a great analogy. It’s a wonderful analogy. You’re trying to go in and figure out, when talking about treasure, what are the big things that we might’ve overlooked as developers.

 

Brandon :A lot of times what I’m doing is putting myself into the end user’s shoes. I’m thinking, “What would a user do here? Does this look right from the user’s perspective?” I like giving that whole side of things.

 

Danny:We love having you here too. We can really think through what somebody from Alabama would do.

 

Brandon :Here we go with this again.

 

Danny:For those listening, that’s not an inside joke. He’s from Phenix City, Alabama. Yes, he’s representing Alabama in the ThreeWill family. We now have a southeast …

 

Brandon :Not the school. Not the school Alabama.

 

Danny:Not the school. Yeah, let’s get that straight.

 

Brandon :Even the state.

 

Danny:I’m sorry. You’re representing the state of Alabama.

 

Brandon :We’ll say that, yeah.

 

Danny:The school of Auburn.

 

Brandon :Auburn.

 

Danny:Geez, there’s too many of you guys around here. Why don’t you go hang out with our other Auburn friends and get the heck out of here? That’s the end of this episode. Let’s get out of here.

 

Brandon :[Crosstalk 00:16:00].

 

Danny:Yeah, I’m going to kick the …. Here’s where I’d kick the microphone over and like, “Get out of here.” Slam the door.

 

Brandon :Y’all are the guys hiring the Auburn folks. Do you know what I mean? That should tell you something.

 

Danny:With that, I think we’ll wrap up this episode. Thanks so much for listening everybody. Brandon, thank you for your time.

 

Brandon :Thank you.

 

Danny:Go get back to testing and go back. You’ve got a long ride home from here. You’re heading …

 

Brandon :Yeah, a couple hours.

 

Danny:… going to head home. A couple hours.

 

Brandon :Back to Alabama.

 

Danny:All right. Just test this like you were someone from Alabama. Thank you for spending your time here, Brandon.

 

Brandon :Thanks.

 

Danny:Take care. Bye-bye.

 


The Five Most Common Web Defects

In the last few years as a software tester I’ve done a lot of web testing. I’ve come across all kinds of defects, but some of the most common web defects are found without digging very deep at all. Here are 5 common web defects that can be found at, or just below, the surface.

1. Grammatical/Spelling Errors

Ahh… These are the ultimate eye-rollers for developers. However, they are still defects. While usually they have no effect on functionality, they WILL eventually be noticed if they slip through. On the positive side, they are (usually) super easy to fix. A good way to boost your defect count (kidding of course)!

2. Cosmetic Issues

This can be anything from branding that is misplaced to a header that becomes misaligned when the browser is widened (assuming the website is to be responsive). Many times these are found during cross-browser testing (see below). Again, these will be noticed rather quickly by end users if they slip through. Nobody wants to pay good money for an ugly website!

3. Cross-Browser Issues

These are common because sometimes developers may use one particular type of browser while coding and testing in their dev environment. This is usually a Chrome vs IE thing, at least in my experience. I’ve been told that in development these two can be vastly different, causing some headaches for developers.
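
One common way to keep that coverage manageable is to run the same checks against every supported browser. A rough sketch using pytest and Selenium (the browser list and URL are placeholders, and the matching WebDriver binaries are assumed to be installed):

```python
import pytest
from selenium import webdriver

BROWSERS = ["chrome", "firefox"]  # placeholder list of supported browsers

def make_driver(name):
    """Start a local WebDriver for the named browser (driver binaries assumed installed)."""
    if name == "chrome":
        return webdriver.Chrome()
    if name == "firefox":
        return webdriver.Firefox()
    raise ValueError(f"Unsupported browser: {name}")

@pytest.mark.parametrize("browser", BROWSERS)
def test_homepage_renders(browser):
    driver = make_driver(browser)
    try:
        driver.get("https://example.com")  # placeholder URL
        assert driver.title  # the page should at least render a title in every browser
    finally:
        driver.quit()
```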

4. Form Validation

Many times there are just so many rules about the types of characters allowed in input forms that you are bound to find some of these. Not to mention that many times you have client-side AND server-side validation, which sometimes don’t agree with each other.
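
Because the client-side and server-side rules are written separately, it is worth posting bad input directly to the server to confirm it gets rejected even when the browser-side checks are bypassed. A hedged sketch (the endpoint, field name, and expected status codes are hypothetical):

```python
import requests

FORM_URL = "https://example.com/api/register"  # hypothetical endpoint
BAD_VALUES = ["", "a" * 500, "<script>alert(1)</script>", "not-an-email"]

def test_server_rejects_bad_input():
    """Post invalid values directly, bypassing any client-side JavaScript checks."""
    for value in BAD_VALUES:
        response = requests.post(FORM_URL, data={"email": value}, timeout=10)
        # The server should reject bad input even without the browser's help.
        assert response.status_code in (400, 422), f"accepted bad input: {value!r}"
```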

5. Overall Usability

This can be overlooked easily. Sometimes it’s hard for developers to step back and really look at the system as a whole from the end user’s perspective. The more training the end users will have on the system, the less of a problem this should be. As a tester, the most important thing to remember is that, at the end of the day, you are playing the role of THAT user.

What would you add to this list?  Leave a comment below.


3 Things to Consider When Testing Out of The Box SharePoint Features

Testing Out of The Box SharePoint Features

In the last couple of years, I’ve done a decent amount of QA testing that either fully or partially involved SharePoint. One thing that is almost always present, regardless of the size of the project, is the use (at least to some degree) of Out of the Box (OOTB) features. These are features that “come with” SharePoint, ready to use immediately without custom code. When testing OOTB features, here are 3 things to consider:

1. Integration

While OOTB features are pretty reliable when used as originally intended, it is very common for some of them to be integrated with custom code which extends their capabilities. It is important in these situations not to focus solely on testing the new code, but to test the full integration with the OOTB features. Also, what if the original OOTB features are not totally dependent on the new code, meaning you can use them not only for their newly expanded capabilities, but also as they were originally intended? In this situation, you may think testing them as they worked before would be a waste of time, since without the new pieces it is still OOTB functionality. This isn’t necessarily true. You never know what the new code could be touching on the back end, so some regression testing is in order.
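
For regression checks like this, SharePoint’s REST API can confirm that the OOTB list behavior still works after the custom code is deployed. A minimal sketch, where the site URL, list name, and credentials are placeholders and NTLM authentication for an on-premises farm is an assumption (SharePoint Online needs a different auth flow):

```python
import requests
from requests_ntlm import HttpNtlmAuth  # assumption: on-premises SharePoint with NTLM auth

SITE = "https://sharepoint.example.com/sites/demo"  # placeholder site URL
LIST_NAME = "Announcements"                          # placeholder OOTB list

def test_ootb_list_still_readable():
    """Regression check: the OOTB list should still return items after custom code ships."""
    response = requests.get(
        f"{SITE}/_api/web/lists/getbytitle('{LIST_NAME}')/items",
        auth=HttpNtlmAuth("DOMAIN\\user", "password"),  # placeholder credentials
        headers={"Accept": "application/json;odata=verbose"},
        timeout=15,
    )
    assert response.status_code == 200
    items = response.json()["d"]["results"]
    assert isinstance(items, list)  # items come back in the expected OOTB shape
```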

2. Usability

While it seems straightforward, usability can easily be overlooked. Sometimes customers will decide to go with an OOTB feature for something they need. This could be to save costs or because exactly what they need just happens to be there already. Regardless of the reason, the OOTB feature in question should always be tested for usability. Just because they want to use a particular “tried and true” feature that comes ready to use doesn’t mean that it will 100% fit their needs. You need some good, solid use cases (both positive and negative) in these situations. Many OOTB features have multiple possible configurations, whether we are talking about workflows, alerts, forms, or simple views. Do your due diligence and make sure that the customer will get all that they need with this feature and its “as-is” functionality and configurability.

3. You Never Know

OOTB functionality always works like it’s supposed to, right? Well, mostly. I have seen some funky things in some OOTB features that made me scratch my head a little. Maybe something is sort of misleading; maybe it’s counterintuitive. Googling such things almost always turns up something like, “that’s just how it works.” The point is, no matter how simple the OOTB functionality in question may seem, if it is a part of what the customer will be using on a regular basis, test it anyway. Will you find anything that hasn’t already been found and discussed a hundred times in QA and/or Dev forums across the internet and eventually deemed “acceptable”? Probably not… but running at least a handful of high-level test cases can’t hurt. You never know.

What Would You Add to This List?

Leave a comment below…


Modern QA Practices – Scrum, SharePoint and Transparency

Modern QA Practices

I started my Software Testing career working for a large credit card processing company where I did mostly mainframe testing. I was one of maybe 60-70 QA Analysts utilizing waterfall methodology for mostly internal project work. Today, I’m working for a much smaller consulting company, testing mostly Microsoft SharePoint and utilizing Scrum. Needless to say, these are vastly different worlds for any role in software development. I’d like to focus on the QA role and share some thoughts based on my own experience.

Waterfall vs Scrum

I think the biggest benefit of Waterfall for testers is the amount of documentation. A tester can never have too many requirements, specs, notes, etc. to assist with test cases and to learn how the system should work. With Scrum, documentation is still available to an extent, but we must rely more on constant communication with developers, product owners, and others to make sure we have everything that we need. On the flip side, though, this constant communication has proven to me time after time to be extremely valuable. Not only do we learn things from each other by informally chatting about this feature or that feature, but it also kills the whole “developer vs. tester” mindset that some people still carry. I’ve always felt more like part of a team with Scrum.

Another big difference is that most of the time with Waterfall, we don’t get “working software” until towards the end of the project. This allows time for all test cases to be written and ready to run. There should not be any new features popping up at the last minute. Most of the time testers will know what to expect and be 100% ready to test as soon as the first build is ready. However, this can cause a major time crunch at the end if there are too many defects uncovered, especially showstoppers. Scrum mostly prevents this by giving us working software very early in the project, although not fully complete until later. This allows us to uncover issues much earlier and gain hands-on experience from the end user’s perspective along the way. The main downside I’ve experienced is a large increase in regression testing, as should be expected.

Mainframe vs SharePoint

I’ll just come out and say it. To me, mainframe testing can be redundant and boring. In fact, I don’t know if I’ve met another tester who would rather do mainframe testing than web testing. Most of the testing I do with SharePoint focuses on customizations requested by customers such as custom forms (with validation), custom lists, custom permissions, branding, or any other features that don’t come OOTB (Out Of The Box) with SharePoint. Many times the testing also involves integration between SharePoint and other applications. The variety of needs by various customers ensures that redundancy is not a concern. Our developers can do some pretty cool stuff with SharePoint, and it’s always fun to come up with creative ways to try to break (I mean… to test…) their code.

Projects I’ve tested range anywhere from setting up a simple web site in SharePoint 2010 for a customer with mostly OOTB features to a very elaborate on-boarding system for a sports team which utilizes Microsoft Azure on the front end and Office 365 and SharePoint 2013 to manage backend data. Although I’ve done some backend and integration testing that the end user may rarely (if ever) have to do, most of my testing with SharePoint has been from that end user’s perspective. I often have to put myself in the customers’ shoes and make sure everything really “makes sense” to ensure they are getting everything that they want. Sometimes I may even find that something as simple as enabling a certain OOTB feature may help the user get more out of the application.

I’m pretty sure neither the mainframe world nor SharePoint is going away anytime soon, but I’m much more excited to see what the future holds for the latter.

Internal Projects vs Consulting

While almost every project I worked on at the larger company was internal, the vast majority of my testing nowadays has been for our customers. The biggest differences I’ve seen are in communication and transparency. Reporting test results was pretty cut and dried internally; everything was done the same way, time after time. With customers, you never know how interested in the testing details they may be, so you need to be prepared at all times to be transparent about your work. I’ve been on projects where a small test summary including counts of Test Cases, Test Runs, and Issues was all that they wanted. I’ve also been on projects where they wanted to review each test case and even utilize them in some of their User Acceptance Testing. Sometimes they will contact me directly regarding testing.

I also feel like I take more pride in my work nowadays. When a project is a success, hearing the customer compliment the team and rave about how much easier the application has made their life really makes me feel like I’ve accomplished something great, even though I’m just a small part of that team. It’s not that this same feeling isn’t possible with internal work, but hearing from someone outside of the company really magnifies that feeling of knowing I helped make a difference.

What are your experiences with the new world of QA?  Leave a comment below…
