Monday 23 October 2017

The guilt of the stopping plates

I believe that testers can be involved in the development process in a myriad of ways. But does this cause me to feel guilty sometimes? Wait, that might not make any sense. Let me explain.

Testing for me can be a whole host of tasks. For instance, I enjoy doing exploratory testing and I can sometimes find pleasure in running prescribed regression tests. I love getting involved in the early discussions of a piece of work, and I understand the importance of supporting a feature going live. Investigating issues in the software, whether that is before shipping or in live, can also be a great thrill.

I've worked on teams where I've been involved in looking at monitoring, and I understand the benefits of engaging with customer feedback. I've written automated integration tests at multiple levels of the software stack, and I can frequently be found advocating for good build pipelines.

Whilst I am talking about advocating for things, I believe that testability is everyone's responsibility, because it benefits everyone when it is done well. I've dabbled with contract tests and I think performance testing is important too. Which leads us on to security and privacy, which I wish I spent more time on.

Above I've listed at least 15 separate elements of testing, and I'm sure if I sat for long enough I could come up with 15 more. That is a dangerously large number of plates to keep spinning at once. Try visualising keeping that many plates spinning. See? It's foolish, right?

But I think all those things above are really important; I think that if I didn't at least think about them I'd be a bad tester. In fact, it's disingenuous to speak about that as a hypothetical. It's not that I imagine I would consider myself a bad tester if I dropped any of the plates. I know for a fact that whenever I catch myself not having done one of them I do more hand-wringing than is sensible. I blame myself and believe I'm terrible at my job if any of those plates stops spinning and hits the floor.
 
What I should do is consider that if I tried to do just those first 15 things every day, that would allow around 30 minutes for each task. Clearly it's unreasonable to expect anyone to achieve that level of context switching. I wouldn't expect it of anyone else, but I always expect it of myself.
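For the sake of argument, here's the back-of-the-envelope arithmetic behind that figure (a sketch assuming a 7.5-hour working day with no meetings and frictionless context switching, which is already generous):

```python
# Rough arithmetic behind the "around 30 minutes" claim.
# Assumption (not from the post): a 7.5-hour working day,
# no meetings, and perfectly even context switching.
tasks = 15
working_day_minutes = 7.5 * 60  # 450 minutes

minutes_per_task = working_day_minutes / tasks
print(f"{minutes_per_task:.0f} minutes per plate")  # 30 minutes per plate
```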

Maybe in this world, where testing means a lot more than running manual test scripts, we should sometimes remember that it can't always mean engaging in all of the practices all of the time. It would be more realistic to keep some of the plates spinning and accept that the others aren't in play. Or delegate them: get a developer to spin some of them most of the time, and just check in every now and again. Or maybe just forgive yourself sometimes for only being human.

Even as I write this, I know that when the next performance bug or hole in coverage is noticed I'll forget this logical reasoning all over again. But maybe we should be okay with that, because the joy of realising testing can add value in more ways has come with the burden of noticing when you're not playing your part in all of those ways.

It is only words. And words are all we have

Words are important, right? The words we use to talk about the things we do are important. They have to be, because we spend considerable effort debating them.

This is something I've discussed with colleagues quite a bit over the last couple of months, and I think it's interesting. When you're trying to grow a testing team in a culture that doesn't fully understand what testers do, which is probably every culture in all fairness, being able to have an agreed understanding of your purpose and methodologies as a discipline is really useful. Obviously, agreeing on things is easier when you've all agreed on your vocabulary. None of this feels all that controversial to me.

However, sometimes certain terms can acquire a bad reputation if a team has had a bad experience of them. I know this first-hand from seeing how people who've had unproductive encounters with 'BDD' react when you start using some of the associated words. As a result, I worked in a team where we'd frequently have 'Kick-off' meetings for a ticket. This was a process where multiple disciplines would sit down and discuss a ticket before we worked on it, collaboratively adding some Acceptance Criteria to the ticket and discussing what and how we'd develop and test the feature. Frequently some front-end automation tests were created, plus some regression tests to add to a pack that could be used as a kind of documentation of what the feature now did.
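To give a flavour of what those kick-offs produced, here's a minimal sketch of one acceptance criterion written as a plain pytest test. The login feature, the `login` stub and its rules are all hypothetical, invented for illustration rather than taken from that team:

```python
# Hypothetical example: an acceptance criterion from a kick-off,
# written as a plain pytest test. Given/When/Then appear only as
# comments, so nobody has to say the word 'scenario' out loud.

def login(username: str, password: str) -> bool:
    """Stub system under test, included only so the example runs."""
    return username == "alice" and password == "correct-horse"

def test_registered_user_can_log_in():
    # Given a registered user
    username, password = "alice", "correct-horse"
    # When they submit valid credentials
    logged_in = login(username, password)
    # Then they are logged in
    assert logged_in

def test_wrong_password_is_rejected():
    # Given a registered user
    # When they submit an invalid password
    # Then they are not logged in
    assert not login("alice", "wrong-password")
```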

It doesn't take a genius to see that although we had very deliberately dropped the terms '3 Amigos' and 'Scenarios', we were still doing a large chunk of the core elements of a software development practice which a large section of the team had written off as a disaster. Had this team salvaged the working parts from the wreckage of a previous practice left burning in the ditch? Or were they basically repainting the car and driving around in it, strongly proclaiming that their previous vehicle had driven itself into a tree, that they had nothing to do with why it crashed and no interest in fixing it, while ignoring the fact that they were still using it?

Today I was watching Atypical on Netflix, and at one point the father goes to a support group for parents of kids with autism. He doesn't regularly go to the sessions and therefore gets corrected multiple times for using words in a way the rest of the group sees as unacceptable. They have obviously agreed on a ubiquitous language over time, and the way he was talking about his life and family was deemed offensive and lacking enlightenment. Basically, he was just trying to help his family and understand his place, but he was being judged because he didn't understand how this closed group used language. The problem was that they had grown sensitive to certain words and were unable to see through his phrasing to the intent of what he was saying, and so the conversation became unproductive.

Is this a problem we have? I certainly know how much the phrase 'Quality Assurance' makes me want to launch into a lecture on why I reject that label. Do we need to be careful about discounting people as 'not our kind of tester' just because they haven't become linguistic clones of us?

The more I thought about this, the more I realised that what's important is to give people the opportunity to explain what they mean. Scratch beneath whatever buzzwords people are or aren't using and you may have more meaningful discussions. Yes, it's a bit more work, but it's worth it, because communication is not always as universal as we think, and not everyone understands the strict lexicon your tight-knit group has formed, whether that group is a development team, a testing department in a company or a local test community.

So yes, they are only words. And words are all we have. Use them carefully, but forgivingly at the same time.

Sunday 22 October 2017

How many columns do you need on your scrum board?

I saw this conversation come up on a Slack channel discussing agile testing and I almost replied, but I realised I was about to rant, so I started putting it into a blog post instead. Here's the end result. Hold on, it's predictably ill-informed and rambling.

So... How many columns do you need on your scrum board?
Well, I have worked in a team where we essentially had (To Do, In Progress, Ready for Release, Done). I think in a place where you aren't in control of releasing whenever you want, that's the best I would want. I always want to treat Dev + Test + Code Review as just 'doing the thing', so that should all be one column, in my opinion, for what it's worth. And it's worth a great deal in this tiny space of the internet, remember, because I'm in charge.

So, over time I'd grown to believe that fewer columns meant a team working closer together, and that's what I always wanted to strive for. And then I was working with a team where we had (To Do, In Progress, Ready to Test, Tested), and I found myself campaigning for more columns. Which obviously I should hate myself for.

But I had a good reason. The developers would throw everything into Ready to Test, but the deployment to our test environment required collaboration with other teams and a bucketload of manual steps. As a result, things would be put into Ready to Test and sit there for almost a week before they were even built, never mind deployed somewhere testers could see them.

It was frequently being stated that we didn't have enough testers, or that we were being made to work on other things. Basically, according to this point of view, Test was proving to be a bottleneck in the process. From my perspective, our main problem was that we couldn't actually get our hands on things to test.

So we discussed splitting up the board to distinguish between 'dev finished' and 'ready to test'. Normally I would argue for getting the team to work more closely together, or for everyone in the team to understand what could be done to make it quicker to get things to test after the development was finished.
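To make the idea concrete, here's a toy sketch of the board change. The column names are the ones discussed above, but the ticket counts are invented purely for illustration:

```python
# Toy model of the board change. Ticket counts are made up;
# the point is where the queue becomes visible.

board_before = {
    "To Do": 4,
    "In Progress": 3,
    "Ready to Test": 9,   # built AND unbuilt work lumped together
    "Tested": 5,
}

board_after = {
    "To Do": 4,
    "In Progress": 3,
    "Dev Finished": 7,    # done, but waiting on builds/deploys
    "Ready to Test": 2,   # actually on a test environment
    "Tested": 5,
}

# Before the split, 'Ready to Test: 9' reads as "testers are slow".
# After the split, the same work reads as "deployment is the queue".
for column, count in board_after.items():
    print(f"{column:>13}: {'#' * count}")
```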

Sometimes, however, some people in the team are reluctant to change, and your last resort is to get the board to accurately reflect reality. This can help highlight issues. You can strive in general to work a certain way, but certain circumstances lead you to want to do something totally opposite to that.
Can I tell you that this worked, that the team worked better? Actually, it's difficult to say. The end result wasn't where I learnt my lesson.

Can I tell you, with hindsight, that I support my decision to argue for this change to the board? Yes. Totally.

Did the situation mean that I now think every board should have separate 'ready to deploy to test' and 'ready to test' columns? No, not at all.

So, how many columns do you need on your scrum board?
As few as possible, until you need more than that.