Blog Archives

Live, Live, Read, Read

Everyone is talking about “context driven testing” like it’s a revelation. I’m seeing lots of people talk about it at events. Just the other week, at Cambridge lean coffee, someone posed the topic “What is Context Driven Testing?”, to which no-one around the table seemed to have an answer, although not for want of trying! Several of us have gone through courses run by leaders in the context driven testing community, and others are active contributors in the testing community – but it became apparent that we did not have a clear answer to this fairly direct question. Perhaps more fundamentally, we’re struggling with how (or “if”) OUR testing practices differ from this current ideal.

Surely ALL testing should be context driven? Without context nothing (metrics, bug reports, user research) has true meaning, and never has. As the title shows, even basic words in the English language have different meanings and pronunciations depending on the context they are being used in. Context matters, even in the basics of language. Am I wrong in thinking other people realise that context is important in most situations?

The Graduate Theory

On the one hand, this might be a reaction against (or training for) people new to testing who are blindly following instructions: people merely going through the motions, doing what they are told without questioning why, or perhaps just implementing what they have learned badly, or at least blindly. Maybe some testers never quite progress beyond this stage in their understanding of their profession.

When you’re learning a new skill such as testing, you often need structure and boundaries, but perhaps many new testers fail to progress beyond those boundaries to true competency, learning to apply and adapt those skills based on the needs of the task at hand. Is this what creates the need to point out that context matters?

This article talks about ShuHaRi, a term used to describe the progression of learning. In the initial Shu stage you learn how to do something, without worrying too much about the theory; in the Ha stage you learn the principles and theory behind the techniques; and finally, in the Ri stage, you learn from your own practice and adapt as you go. So is context driven testing, as a concept, aimed at people in the Shu stage of testing, or is it a practice for all to learn? If it is a practice for all to learn, then what are the basic principles of context driven testing that people need to grasp to enable them to progress successfully, and how do they differ from other testing concepts? Or is context driven testing really the name for people who have reached the Ri stage in their testing abilities?

If it is the case that people are not graduating from this initial introduction of skills, then does having a term that reads almost like a job description help these people understand what the crux of context driven testing is? Likewise, do these ‘lay’ people make up the majority of testers attending conferences and getting involved in the community? Even if the graduate theory holds true, and large swathes of our profession are just going through the motions, then how come so many of us, new and old to testing, do not feel equipped to answer what appears to be a simple question: what is context driven testing?

Is the term “context driven” being used to emphasise that the context in which a system will be used should force you to adapt and shape your approach and techniques to match? Is it to remind people of the 5 W’s and an H that we should be asking, so that everyone involved in the product has the same understanding of the product and its aims?

The Cynical Theory

It’s also possible that I simply enjoy understanding the systems I work with, so my testing has always been framed and contextualised by what I know, and what seems obvious to me is a revelation to many others. As a result, I should probably call this the “Maybe I am just lucky” Theory.

Even when I started (*cough cough* 15 years ago), I worked closely with developers to understand the products we were building and the changes that we were making. We sat in separate areas, but it was a close, communicative relationship. I had the opportunity to go to customer sites and work with them, seeing how they were using the tools and the struggles they faced. We had clear build systems, with automated tests being written by both the developers and the testers, as well as a clear set of other tests to run.

I’ve been in the enviable position where I’m invited to get involved from the start of a project right through to demo stands at conferences and customer visits. It’s possible that many testers aren’t in this position of inclusion, and just run through comprehensive checklists defined by their managers or the dev team around them. In that case, have we really only just realised that meaningful context yields significantly more effective tests?

This still doesn’t give an obvious answer to “What is context driven testing?”. Can I just call myself a context driven tester? Or is this, disappointingly, just a new term for something good testers have been applying all along, one that seems to encourage a segregation in the testing community? Which is my roundabout way of asking “Is this ultimately all about speaking and consultancy fees?”

The Underlying Point

When you first learn something, I absolutely agree that you need information boundaries to enable you to do directed, structured learning. However, if many of us within the software testing community are uncertain as to the actual definition of this term, then I begin to wonder whether those boundaries or explanations are missing, or maybe we’re looking for a deeper answer than actually exists.

I recently read Rethinking Expertise and encountered Harry Collins and Robert Evans’s idea of connoisseurship in respect of technical pursuits. The following is taken from http://reagle.org/joseph/2009/01/CollinsEvans2007-expertise.html

Technical connoisseurship

“experience within the conventions of judgment rather than experience of the skill itself”; “turns on interactional expertise alone”.

e.g., an architect can recommend tiles, even with no tiling experience

Hearing about connoisseurship made me wonder how one gauges one’s own expertise in relation to others – do we just have to wait to be judged by connoisseurs who are somehow collectively nominated? Is there some test we need to pass to call ourselves a ‘context driven tester’? Even if we change the title, what is to say we are not hoaxers or posers or fakers?

Possibly the path I was lucky enough to follow in my career was not the norm, and this may be why I struggle to empathise with the need to distinguish this delineated approach from ‘good testing’.

I just don’t understand how Context Driven Testing is a new idea. Or at least, I firmly believe that it really, really shouldn’t be. To test whether anything works, you have to know what it’s supposed to do, and that surely requires more than a modicum of context. There’s a critical distinction I want to make here (in case it’s not been obvious): my not understanding the term is not the same as not advocating that context is a pivotal part of determining what and how to test on a given project. I’m just not sure that changing the way I identify myself is the intention behind the phrase “context driven testing”. Should I change my job title to make it clear how I work, or should we assume that all testers are CDT unless specifically stated otherwise (graduates and learners aside)?

Overall I am just left feeling like I have missed something really important, or some critical nuance has passed me by :o(

Five ways to make testing more positive

Occasionally over the years I have been distracted from my love of testing by the fact that testing can be seen in a negative light. It can be seen as a bottleneck in the development process, one that delays releases and ends with the tester arguing about everything. I have even focused on the negative side myself. Testing is still not always accepted as being of equal importance to development, and there is still code being thrown over walls at times. I feel it is part of the role of the test engineer to help organisations believe in testing and see how it can benefit a company. To do this, the company needs to experience the power of testing and see it as positive and constructive. This will highlight that testing can be used to build quality and act as an integral building block in shaping products. This article offers ways that the test engineer can help to portray the role in a positive light.

Here are some ways to try and improve the perception of testing and your role as a test engineer:

1. Try to be involved from the start

Our job as testers is not to find bugs; it is to build in quality from the start and hope there are then fewer bugs to find. Try to be involved from the start. As soon as a story is defined by the product team or a feature is suggested, help to validate that an idea has merit rather than highlighting all the reasons it might not work. That is not to say don’t identify areas that need investigation, but be constructive.

An example would be, “We are going to build a bridge out of marshmallows”. The temptation here might be to laugh, especially if you are an engineering company; however, as testers we should be trying to establish the purpose of the bridge itself. Is it for decorating a cake, or for a 10 tonne truck to drive across?

2. Work with developers

Work with developers to identify areas to test when you are breaking down how something will be implemented. Collaborate on ways to approach the testing, as it may be easier to test things at a unit or integration level. Working with the developers, you can identify how to test something and any additional access points you might need to do those tests, e.g. an internal command line argument, or a way to configure a base state in an application. Working with others also helps them to identify areas that they need to test, or at least consider, whilst developing.

When you are breaking down a story into tasks, try to get people to elaborate around the task: for example, what areas in the product will it touch upon? Try to get them to explain their vision to you, as this often highlights assumptions, and can reduce ambiguity and help establish a concrete consensus amongst the team. At Red Gate, once we have accepted a story, our developers and testers do a joint code planning session where we talk through the low-level implementation and produce architecture or state diagrams to visualise the implementation and highlight both the test and code tasks that are needed.
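To illustrate the kind of testability hook this collaboration can produce, here is a minimal sketch of a hidden command line flag that starts an application in a known base state loaded from a fixture file. The flag name (`--base-state`), the `State` class, and the JSON fixture format are all hypothetical, invented for this sketch rather than taken from any real product:

```python
import argparse
import json

class State:
    """Minimal stand-in for an application's runtime state."""
    def __init__(self, settings=None):
        self.settings = settings or {}

def build_parser():
    parser = argparse.ArgumentParser(prog="app")
    # Internal, testing-only flag agreed with the developers: point the app
    # at a JSON fixture describing the state it should start from, instead
    # of the tester clicking through the UI to reconstruct it every time.
    parser.add_argument("--base-state", metavar="FIXTURE.json", default=None)
    return parser

def start(argv):
    args = build_parser().parse_args(argv)
    if args.base_state:
        with open(args.base_state) as fh:
            return State(settings=json.load(fh))
    return State()  # normal start-up path, no fixture
```

A hook like this keeps exploratory sessions repeatable: every tester can begin from the same documented state, and a bug report can cite the fixture that reproduces it.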

3. Focus on quality

Help to create a team focus on quality and user experience by running focused team exploratory testing sessions, where the whole team is put into pairs and then tries to use the product to achieve a particular goal. For example, on the Deployment Manager team we ran a session where our goal was to ‘Explore the creation and deployment of database packages in Deployment Manager’. During the sessions, the pairs are asked to note down (on post-its) any issues, surprises (good and bad), any questions, and lastly any ideas. We normally allow 30 minutes for the activity, and at the end of this time we report on our findings. This feedback is later grouped into areas, and actions to be taken are identified. This is similar to a bug hunt but with more focus. It’s a great way to share product knowledge across the team, and it helps to get the team working more closely together.

4. Find the root cause

If you do find an issue, investigate it to find its root cause. It’s often a one-to-many relationship from cause to symptoms, so an issue repository full of root causes is going to be much smaller than one full of symptoms. Then explain the issue to others, giving all the information they need to understand it and to be able to reproduce it easily. Also identify any possible ramifications of the issue, as they might not be aware of them. Don’t forget to show the information visually for full effect, if applicable.

5. Keep all communication constructive and positive

Try to keep all communication constructive and positive. Try to convey information to others by selling them your idea, and try to avoid destructive trigger words, e.g. show stopper, epic failure. Trigger words can sometimes make people tense or react without listening to the actual content of the conversation. Let’s say you have realised your product only uses colour to differentiate between successful and failed tasks. We need to avoid saying “Our product is useless for people with red-green colour blindness”; instead, say something along the lines of “Our product only identifies success and failure by the use of colour, so for someone with colour blindness there is no way to determine which is which. Would we be able to identify it in another way too, for example using symbols like a tick and a cross?”. The second version is longer, but it identifies the current situation, explains why it is a problem, and offers a possible solution. I often forget that people do not always have the same context as me, so try to set the scene for others as you see it, then identify why something is of particular concern to you. This will help them to see it from your perspective.
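To make the tick-and-cross suggestion concrete, here is a minimal sketch of a status renderer that pairs each colour with a redundant symbol, so the result can be read without relying on colour perception. The `STATUS_STYLES` table and `render_status` function are illustrative names, not from any real product:

```python
# Hypothetical sketch: report task status with both a colour and a symbol,
# so the meaning does not depend on colour perception alone.

STATUS_STYLES = {
    "success": ("green", "✓"),
    "failure": ("red", "✗"),
}

def render_status(status):
    """Return a status line that carries the result in two redundant cues."""
    colour, symbol = STATUS_STYLES[status]
    return f"[{colour}] {symbol} {status}"
```

For example, `render_status("failure")` produces `[red] ✗ failure`: even with the colour stripped away, the symbol and the word still carry the outcome.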