Sunday, 14 October 2018

Can you just run a Full Regression on this please

Recently I saw someone ask for a 'full regression'. It was probably the first time this specific person had uttered the phrase in my presence, but I've heard it from quite a variety of people for a plethora of reasons. It's a term that has begun to frustrate me, and it seemed like it was time I got my thoughts down as to why I am experiencing this mild irritation. Maybe you don't care that it is causing me annoyance. Maybe you aren't seeing this anywhere, so it doesn't seem worth talking about. Maybe you should appreciate that this blog isn't about your problems, it's about mine, so you can either enjoy me pontificating on problems in prose or go elsewhere for undoubtedly higher quality content.

The source of my problem is that when someone asks for a "full regression" I'm not sure I understand what they think they are asking for. Or perhaps the problem is that I have a belief that I do understand what they are asking for and think they shouldn't be asking for it. And yes, I have probably just made this more complicated rather than less.

So, let's work it through. When someone asks for a "full regression", what are they really asking for? Do they really want you to run every scenario you've ever thought of? Surely they can't really want you to go through every possible path through the product you've ever imagined. Would that even be possible in whatever sensible time frame you have? And what if you run routes you have never executed before? If you go through paths that were previously unexplored, how can you possibly say whether the software works the way it worked before the change, when you have no idea what the behaviour used to be? So either your mission to look for regressions in the software depends on your having, at some point in the past, done a surprisingly thorough job of testing, or you now have to test one version of the software without the changes and one with them. That might not even be possible. Maybe that's what's being asked for, though.

Or are they really saying, when they request or suggest a 'full regression', "This change has been so vast and this next release is so important that you should spend infinite time getting us as much information as humanly possible about how the application works"? This raises the issue that an infinite time frame requires you to live for an infinite number of years, putting your ability to complete your testing within your lifespan seriously into question. But it could well be what they intended. After all, they did say "full."

Or do they mean "I have no idea how what I've changed will impact the product, because I don't actually understand how this thing works beyond the section I changed. Basically I'm working blind here. Therefore can you just take all of the risk I've created by telling me it's 'okay'? Just so long as we all understand that I don't have a good definition of what is intended by the word okay." That could well be what they mean, right? It's always what I interpret it to be. And that makes me grumpy.

I don't know whether other people have heard this term used, or have in fact used it themselves, but I find it troublesome. If someone tells you that the change they've made doesn't add any new functionality, but you 'just' need to run a full regression, that should instantly set off alarm bells.

It might be worth discussing with them how there's no such thing as a full regression, for all the pedantic and sarcastic reasons listed above (you may want to phrase it less argumentatively, though). If they are still talking to you after that, you could ask them to help you understand what they've changed, so you can do some targeted testing and get a better idea of whether their change has had any negative effects. Surely that's what they really want anyway; it's just easier to pretend that there is a magic risk-free way of achieving this. It is unfortunately our job in this situation to remind them that magic is a lie and that pretending otherwise leads to problems. Tell them it's impossible to test everything, because that would amount to infinite testing. Try to help them understand that all testing is limited to a restricted area and is therefore, in a sense, targeted. Even when you don't understand what you're aiming at, you are still targeting your testing at certain areas; they are just less likely to be the right ones. Ask them to help you set the targets intelligently together, based on what has changed. Any work you can do as a team to narrow the sights of your targeted testing a little will increase the chance of it being effective.
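To make that a bit more concrete, here is a minimal sketch of what "targeting based on what has changed" could look like in practice. It assumes a hypothetical Python project tested with pytest, laid out so that code in myapp/foo.py is conventionally covered by tests/test_foo.py; every name in it is invented for illustration, and it is a sketch of the idea rather than a recipe:

# A sketch of 'targeted' test selection. NOTE: the myapp/ and tests/
# directories and the naming convention are assumptions for this sketch.
import subprocess
import sys
from pathlib import Path

def changed_files(base="main"):
    # Ask git which files differ from the base branch.
    result = subprocess.run(
        ["git", "diff", "--name-only", base],
        capture_output=True, text=True, check=True,
    )
    return [line for line in result.stdout.splitlines() if line]

def test_file_for(source_path):
    # Map a changed source file to its conventional test file.
    return Path("tests") / ("test_" + Path(source_path).stem + ".py")

if __name__ == "__main__":
    targets = []
    for changed in changed_files():
        if changed.startswith("myapp/"):
            candidate = test_file_for(changed)
            if candidate.exists():
                targets.append(str(candidate))
    # If nothing maps cleanly, fall back to the whole suite rather
    # than quietly running nothing.
    sys.exit(subprocess.call(["pytest"] + (targets or ["tests"])))

Even a crude mapping like this beats the pretence of a "full regression": at least the target is explicit, visible, and can be argued with.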

Above all else, resist just shouting "WHAT DOES FULL REGRESSION EVEN MEAN?" whenever anyone uses the term; challenge them politely, respectfully and calmly instead. It's important, because if people use the phrase and you just nod and tell them you've done it, they will keep asking for it and no one will ever resolve what the real problem is. Also, they could actually mean "this is the most dangerous change anyone has ever made to anything, what do we need to do to reduce the risk?", and you'd never know how scared you need to be about the change, because you haven't asked them to clarify but have instead assumed the worst of your coworker.

So what is a "full regression"? It's a lie people tell themselves when they don't want to admit the truth. Try admitting it instead; it's hard, but it's probably safer and it's definitely more genuine.

If you want to hear me talk about testing some more, and interview people about their guilt around their own testing, why not check out my podcast? It is the primary reason I don't blog here much at the moment. That and life getting in the way.
theguiltytester.libsyn.com

If you want to shout at me and tell me I'm wrong about this I am surprisingly receptive to that. Tweet me @allcapstester

Monday, 23 October 2017

It is only words. And words are all we have

Words are important, right? The words we use to talk about the things we do are important. They have to be, because we spend considerable effort debating them.

This is something I've discussed with colleagues quite a bit over the last couple of months, and I think it's interesting. When you're trying to grow a testing team in a culture that doesn't fully understand what testers do, which in all fairness is probably every culture, having an agreed understanding of your purpose and methodologies as a discipline is really useful. Obviously, agreeing on things is easier when you're all agreed on your vocabulary. This feels to me to be not all that controversial.

However, sometimes certain terms can acquire a bad reputation if a team has had a bad experience with them. I know this first-hand from seeing how people who've had unproductive encounters with 'BDD' react when you start using some of the associated words. As a result, I worked in a team where we'd frequently have 'Kick-off' meetings for a ticket. This was a process in which multiple disciplines would sit down and discuss a ticket before we worked on it, collaboratively adding some Acceptance Criteria to the ticket and discussing what and how we'd develop and test a feature. We would frequently create some front-end automation tests, plus some regression tests to add to a pack that could be used as a kind of documentation of what the feature now did.

It doesn't take a genius to see that, although we had very actively dropped the terms '3 Amigos' and 'Scenarios', we were still doing a large chunk of the core elements of a software development practice which a large section of the team had discarded as a disaster. Had this team salvaged the working parts from the wreckage of a previous practice left burning in a ditch? Or were they basically repainting the car and driving around in it, strongly proclaiming that their previous vehicle had driven itself into a tree, that they had nothing to do with why it had crashed, and that they had no interest in fixing it (while ignoring the fact that they were still using it)?

Today I was watching Atypical on Netflix, and at one point the father goes to a support group for parents of kids with autism. He doesn't regularly attend the sessions, and so gets corrected multiple times for using words in a way the rest of the group sees as unacceptable. They have obviously agreed on a ubiquitous language over time, and the way he was talking about his life and family was deemed offensive and unenlightened. Basically he was just trying to help his family and understand his place, but he was being judged because he didn't understand how this closed group used language. The problem he has is that they have grown sensitive to certain words and are unable to see through his phrasing to the intent of what he's saying, and so the conversation becomes unproductive.

Is this a problem we have? I certainly know how much the phrase Quality Assurance makes me want to launch into a lecture on why I reject that label. Do we need to be careful about discounting people as 'not our kind of tester' just because they haven't become linguistic clones of us?

The more I thought about this, the more I realised that what's important is to give people the opportunity to explain what they mean. Scratch beneath whatever buzzwords people are or aren't using and you may have more meaningful discussions. Yes, it's a bit more work, but it's worth it, because communication is not always as universal as we think, and not everyone understands the strict lexicon your tight-knit group has formed, whether that group is a development team, a testing department in a company or a local test community.

So yes, they are only words. And words are all we have. Use them carefully, but forgivingly at the same time.