Blog Archives

Start small and don’t fear failure

Simon Stewart from Facebook recently asked in an interview, “What would you do if you weren’t afraid?”. Well, that is something I have wrestled with over the last 12 months or so. I have tried to find an internal balance that has enabled me to get the most out of any opportunity I am faced with. We often say ‘live life like it is the only one you get’ and ‘you get out what you put in’, but do we really act on these, or do we spend more time being restrained by fear?  These phrases have circled around in my head a lot over the past months, gently reminding me to set aside my inner fears and relish the learning situations offered in the community.  They force me to suppress my concerns over failure and embarrassment in favour of the rewards that will come from success. This change of attitude has rekindled my willingness to try new things.  There are so many activities within our industry that will take you out of your comfort zone, but we need to learn to give these opportunities a proper chance.  The only way we can do this is by being open to the possibilities they offer, and open to the possibility of failure (and make no mistake, that is scary!).


Fake it till you make it

Approaching strangers may seem like an insurmountable blocker for some of us, but the benefits of overcoming this are tenfold.  Some who know me believe that I find talking to people easy, but that is not the case. I like to talk, this is true, but those initial moments when I attempt to strike up a conversation with someone new still fill me with dread. You will probably find that the more people you talk to about this, the more of us you’ll discover who share these feelings.  So why do people think I find it easy? Well, I, like many others, fake it just long enough to strike up that conversation and then watch it unfold.  How do we do it? I can’t speak for everyone, but I try smiling, both to relax myself and to see what response that gets, and then I go with “Hi, I’m Emma, what’s your name?”, followed by something safe like “So how are you finding the event?”


Fail early and fail often

Within the agile community you may hear people talking about ‘failing early’, and of course I hope that this does not happen the first time you try something new, but we all fail at things. And yep, I have tried to strike up a conversation with someone only for it to fail miserably. The wonderful thing about failing, though, is that once it has happened it does not seem as scary to fail again. In true agile fashion, you can always adapt, and hopefully you will take the failure as a learning experience.  The successful conversations you manage to establish will make up for any that don’t work; you just have to keep trying.  As with most other things we do: the more you do this, the easier it becomes.


Prepare for impact

Conquering your fears opens you up to all the amazing people within our community (Agile, Test, Developer, etc.), and gives them the ability to have a profound impact on you. It is amazing how some exceptional people can touch you, even when they’re speaking to a large group, when something they say strikes a chord with you and causes a response. This is often true of the speakers at a conference, or that attendee at a Lean Coffee meeting who imparts that little gem of knowledge that inspires you to try something new or investigate another possibility.

However there is still a further pot of expertise and knowledge to draw from, and that’s *anybody else you meet at these events*.


To let other people have this opportunity to have an impact on you, you need to talk to them. You also need to open yourself up to the possibility of anyone teaching you something, and potentially having a profound influence on you.  Some of our deepest learning comes when we expose (and embrace) our vulnerabilities.


Reap what you sow

Just last week I got the opportunity to go to Agile 2014 in Orlando, where I managed to silence my inner demons, tried my best to be social, and invested in what was on offer.  The return on investment for this small compromise on my part was unbelievable.  The wealth of knowledge that this enabled me to draw upon was phenomenal.  Beyond that, there were also those few wonderful people with whom I was lucky enough to quickly and easily develop a natural rapport.  What it is that triggers that connection, I cannot tell you – it may be the kind smile or the joke that sets the relaxed tone for further interactions. What I CAN tell you is that these relationships will further you both personally and professionally.  Some of the relationships that I have created at events over this last year have opened my mind in so many ways and were the best investment in myself I could ever make.


Break down barriers

Going to these conferences, you often have an awareness of some of the more prominent people in the community, and perhaps feel intimidated by them or have put them on a pedestal. What you need to remember is that YOU made that artificial barrier.  There was a time when they were like anyone else, attending conferences for the first time. Even now you can bet that several of them get nervous before speaking, and some of them also don’t find it easy to strike up a conversation, so you are in great company.


So for those of you who are regulars on the circuit, remember to include those new faces. And to those newer faces, make sure you don’t exclude yourself, but instead make the most of the golden opportunity to interact with all the fabulous people around you.  If you do this you will be astounded by the profound effect it can have and what a lasting impression some of these people can have on you even in such a short time.


Thank you to all those at Agile 2014 who made me feel so welcome and shared their experiences and knowledge with me.


Live, Live, Read, Read

Everyone is talking about “context driven testing” like it’s a revelation. I’m seeing lots of people talk about it at events. Even the other week, at Cambridge Lean Coffee, someone posed the topic “What is context driven testing?”, to which no one around the table seemed to have an answer, although not for want of trying! Several of us have gone through courses run by leaders in the context driven testing community, and others are active contributors in the testing community – but it became apparent that we did not have a clear answer to this fairly direct question. Perhaps more fundamentally, we’re struggling with how (or “if”) OUR testing practices differ from this current ideal.

Surely ALL testing should be context driven? Without context nothing (metrics, bug reports, user research) has true meaning, and never has. As the title shows, even basic words in the English language have different meanings and pronunciations depending on the context they are being used in. Context matters, even in the basics of language. Am I wrong in thinking other people realise that context is important in most situations?

The Graduate Theory

On the one hand, this might be a reaction against (or training for) people new to testing, who are blindly following instructions. To people who are merely going through the motions and doing what they are told without questioning why, or maybe just implementing what they have learned, perhaps very badly or at least blindly. Maybe some testers don’t quite progress beyond this stage in their understanding of their profession.

When you’re learning a new skill (testing), you often need structure and boundaries, but maybe many new testers fail to progress beyond those boundaries and achieve true competency, developing to apply and adapt those skills based on the needs of the task at hand.  Is this what dictates the need to identify that context matters?

This article talks about Shu-Ha-Ri, a term used to describe the progression of learning.  It describes the initial stage, Shu, where you are learning how to do something without worrying too much about the theory; then Ha, where you learn the principles and theory behind the techniques; and finally the Ri stage, where you learn from your own practice and adapt as you go.  So is context driven testing, as a concept, aimed at people in the Shu stage of testing, or is it a practice for all to learn?  If it is a practice for all to learn, then what are the basic principles of context driven testing that people need to grasp to enable them to successfully progress, and how do they differ from other testing concepts?  Or is context driven testing really the name for people who have reached the Ri stage in their testing abilities?

If it is the case that people are not graduating from this initial introduction of skills, then does having a term that seems almost like a job description aid these people in understanding what the crux of context driven testing is? Likewise, do these ‘lay’ people make up the majority of testers attending conferences and getting involved in the community?  Even if the graduate theory holds true, and large swathes of our profession are just going through the motions, then how come so many of us, new and old to testing, do not feel equipped to answer what appears to be a simple question: what is context driven testing?

Is the term “context driven” being used to emphasise that the context in which a system will be used should force you to adapt and shape your approach and techniques to match? Is it to remind people of the five Ws and an H that we should be asking, so that we know that everyone involved in the product has the same understanding of the product and its aims?

The Cynical Theory

It’s also possible that I simply enjoy understanding the systems I work with, and so my testing has always been framed and contextualised by what I know, and so what seems obvious to me is a revelation to many others. As a result, I should probably call this the “Maybe I am just lucky” Theory.

Even when I started (*cough cough* 15 years ago), I worked closely with developers to understand the products we were building and the changes we were making. We sat in separate areas, but it was a close, communicative relationship. I had the opportunity to go to customer sites and work with customers, seeing how they were using the tools and the struggles they faced. We had clear build systems, with automated tests being written by both the developers and the testers, as well as a clear set of other tests to run.

I’ve been in the enviable position where I’m invited to get involved from the start of a project right through to demo stands at conferences and customer visits. It’s possible that many testers aren’t in this position of inclusion, and just run through comprehensive checklists defined by their managers or the dev team around them. In that case, have we really only just realised that meaningful context yields significantly more effective tests?

This still doesn’t give an obvious answer to “What is context driven testing?”. Can I just call myself a context driven tester? Or is this, disappointingly, more about a new term for something good testers have been applying all along – one that seems to be encouraging a segregation in the testing community? Which is my roundabout way of asking “Is this ultimately all about speaking and consultancy fees?”

The Underlying Point

When you first learn something, I absolutely agree that you need information boundaries to enable you to do directed, structured learning. However, if many of us within the software testing community are uncertain as to the actual definition of this term, then I begin to wonder whether those boundaries or explanations are missing, or maybe we’re looking for a deeper answer than actually exists.

I recently read Rethinking Expertise and encountered Harry Collins and Robert Evans’s idea of connoisseurship in respect of technical pursuits. They describe technical connoisseurship as resting on “experience within the conventions of judgment rather than experience of the skill itself”, something that “turns on interactional expertise alone”. For example, an architect can recommend tiles even without any hands-on tiling experience.

Hearing about connoisseurship made me wonder how one gauges one’s own expertise in relation to others – do we just have to wait to be judged by connoisseurs who are somehow collectively nominated? Is there some test we need to pass to call ourselves a ‘context driven tester’? Even if we change the title, what is to say we are not hoaxers or posers or fakers?

Possibly the path I was lucky enough to follow in my career was not the norm, and this may be why I struggle to empathise with the need to distinguish this delineated approach from ‘good testing’.

I just don’t understand how context driven testing is a new idea. Or at least, I firmly believe that it really, really shouldn’t be. To test whether anything works, you have to know what it’s supposed to do, and that surely requires more than a modicum of context. There’s a critical distinction I want to make here (in case it’s not been obvious): my not understanding the term is not the same as not advocating that context is a pivotal part of determining what and how to test on a given project. I’m just not sure that changing the way I identify myself is the intention behind the phrase “context driven testing”. Should I change my job title to make it clear how I work, or should we assume that all testers are context driven unless specifically stated otherwise (graduates and learners aside)?

Overall I am just left feeling like I have missed something really important, or some critical nuance has passed me by :o(

Error conditions need testing too

The other day I left a highly fashionable shop (OK, a supermarket) with some clothes and, when I got home, I realised that the security tags were still attached.  So the following day I went to a different store of the same chain to get the tags removed, and was pleased to note that the dreaded alarm didn’t go off as I initially walked in the door. Being an enthusiastic test engineer, this made me think about errors and how we handle them.

“When and where” data

The importance of errors can vary depending on the type of work you’re doing.  As a general rule, if an error is encountered in a system, the ideal result would be that the user is given enough information to work around the issue or at least understand why it happened, and the product designers get the information they need in order to know what happened and why.

I’m not saying that we should focus our energy on the errors – of course it’s more important that the system works correctly –  however, we should definitely not ignore them.  At Red Gate, the tools have the ability to automatically send in error reports.  These error reports should provide us with the essential information, which in many circumstances will enable us to understand how the error occurred and thus determine why.  These error reports then automatically generate a new bug in our repository or update an already existing one if they are linked.
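As a sketch of how that kind of automatic de-duplication can work (the repository and the error “signature” here are invented for illustration, not Red Gate’s actual system): incoming reports with a matching signature update an existing bug; anything new opens a fresh one.

```python
# Hypothetical sketch: incoming error reports either open a new issue
# or bump the count on an existing one when the same error signature
# has already been filed. The signature scheme is invented here.

class IssueRepository:
    def __init__(self):
        self.issues = {}  # (error_type, location) -> occurrence count

    def file_report(self, error_type, location):
        """Record an automatic error report, de-duplicating by signature."""
        signature = (error_type, location)
        if signature in self.issues:
            self.issues[signature] += 1   # update the existing bug
        else:
            self.issues[signature] = 1    # open a new bug

repo = IssueRepository()
repo.file_report("NullReferenceException", "Deploy.Run")
repo.file_report("NullReferenceException", "Deploy.Run")  # same bug again
repo.file_report("IOException", "Backup.Save")

print(len(repo.issues))  # 2 distinct bugs, one of them seen twice
```

The useful property is that the repository stays small and counts how often each failure recurs, which is exactly the prioritisation signal you want from automatic reports.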

When it comes to deciding which functions and errors are the most important, Elisabeth Hendrickson raises the Never and Always heuristic in her book Explore It!: what must your application always do, and what must it never do?  For these security tags (as a trivial example), presumably they should never fall off, and they should always set off the alarm when they pass through the detectors at the exit. Any information relating to a failure in those areas is critical for the designers and store owners, who need to know “when and where” in detail in case it’s a failure in the tag, or just in that store’s tag scanner. The end users don’t particularly care beyond whether or not the tags set off alarms incorrectly!

Testing that errors behave correctly

Recently there were several articles about a flaw in the widely used GnuTLS library, where an error case not working correctly actually opened several operating systems and applications up to a security hole.  The issue, which left many systems vulnerable to connection monitoring and packet decryption, came about because certain errors that might occur during X.509 certificate verification were not being handled correctly and were being reported as successful. This flaw would enable someone to create an invalid certificate that would pass verification and fool otherwise-secure systems, thus allowing the decryption of protected communications. This is a perfect example of why even error cases demand attention – let’s take a quick look into what led up to this issue.

Coding errors vs. missing tests

Surprisingly to me, the Ars Technica article I read talks of this being “the result of someone making critical mistakes in source code that controls critical functions of the program”, whereas my internal testing alarm bell is going off, asking: where were the tests for this? This is a testable case, just like everything else in the code base should be.

Of course, as with any other bug in the code, the problem doesn’t boil down to either poor coding or a lack of testing alone; it is the combination of both that left this library so vulnerable. Looking at the code itself, there appeared to be confusion over C error-handling conventions, where a value less than zero is returned for failure and zero for success, as compared with 0 meaning false and 1 meaning true – something so simple that can have such far-reaching implications.
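To make the confusion concrete, here is a small illustrative sketch (with invented names – this is not the real GnuTLS code) of what happens when a negative error code is read as a boolean:

```python
# Illustrative only: a verification routine following the C convention
# of returning 0 for success and a negative error code for failure.

CERT_OK = 0
CERT_EXPIRED = -1    # hypothetical error codes
CERT_UNTRUSTED = -2

def verify_certificate(cert):
    """Return 0 on success, a negative error code on failure."""
    if cert.get("expired"):
        return CERT_EXPIRED
    if not cert.get("trusted"):
        return CERT_UNTRUSTED
    return CERT_OK

def is_certificate_valid_buggy(cert):
    # BUG: treats the return value as a boolean, where non-zero means
    # "true". A negative error code is non-zero, so every verification
    # failure is silently reported as a valid certificate.
    return bool(verify_certificate(cert))

def is_certificate_valid(cert):
    # CORRECT: compare explicitly against the success value.
    return verify_certificate(cert) == CERT_OK

bad_cert = {"expired": True, "trusted": False}
print(is_certificate_valid_buggy(bad_cert))  # True - the failure is accepted!
print(is_certificate_valid(bad_cert))        # False
```

A handful of tests asserting the behaviour of the failure paths, not just the success path, is exactly what would catch this class of mistake.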

Minimising Risk

The article goes on to mention that it is “significant that no one managed to notice such glaring errors”, which makes me suspect that the author hasn’t done much production coding. We all know that, even with the best of intentions, we occasionally commit large chunks of code, which results in code reviews becoming more of a shoulder-surfing exercise!

The lesson to be learned here is that attention to detail is absolutely vital, and the best way to maintain that is to encode it in your test suite. In addition, make sure to keep code commits small.  If you don’t do code reviews and pair programming, consider introducing them in your organisation.

If the code seems hard to test, that can sometimes imply that we have not designed it correctly in the first place.  When designing a new piece of code try to bear in mind this question: how will it be tested? Consider using elements of test driven development, as it can really help ensure testability as well as correct code.

How do you test for correct errors?

Our team deliberately keeps an error in our system that we use for testing – specifically, we check that it displays the correct information, which gives us confidence that other errors will also be handled in the same manner. That said, we recently found out that this was not enough: when we made changes to log4net, this error functionality broke and the relevant information was suddenly not being sent in. On the plus side, we were already looking for our errors to behave in a certain way, which gave us a fighting chance of spotting this change in behaviour.

We’ve now extended our test to check that the information is being displayed correctly and that this information is then handed over to the application engine which will report it back to the team. As you can see, this isn’t a horrendously complex case, so there’s no reason you shouldn’t be running tests for it!
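A minimal sketch of that kind of test might look like the following (every name here is hypothetical – our real system reports through log4net and the application engine, not these stand-ins):

```python
# Sketch: trigger a known, deliberate error and assert both halves of
# the behaviour described above - the user-facing message and the
# hand-off to the reporting engine. All names are invented.

class ReportingEngine:
    """Stands in for whatever sends error reports back to the team."""
    def __init__(self):
        self.reports = []

    def submit(self, report):
        self.reports.append(report)

def handle_error(exc, engine):
    """Format an error for the user and forward its details for reporting."""
    engine.submit({"type": type(exc).__name__, "message": str(exc)})
    return f"Something went wrong: {exc}"

def test_known_error_is_displayed_and_reported():
    engine = ReportingEngine()
    deliberate = ValueError("deliberate test error")

    shown = handle_error(deliberate, engine)

    # The user sees enough information to understand what happened...
    assert "deliberate test error" in shown
    # ...and the same details reach the reporting engine.
    assert engine.reports == [
        {"type": "ValueError", "message": "deliberate test error"}
    ]

test_known_error_is_displayed_and_reported()
```

Because the deliberate error exercises the whole reporting pipeline, a change like our log4net one fails this test immediately instead of silently dropping error details.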

Wrapping up

I know the tags at a supermarket are only a deterrent, but if they don’t work the cost to the business could be huge.  We should always focus on making sure that the product does what it’s supposed to, but we should also always allocate some time to determine what will happen if (when) an error occurs.

What information will be collected? Can you automatically send it back to the organisation? Could you even automatically open an issue from it? And just as we prioritise what functionality is important, it’s important to identify where an error would be most costly, and then to try to provoke those errors to check how they’re handled. It is always better to know.

Five ways to make testing more positive

Occasionally over the years I have been distracted from my love of testing by the fact that testing can be seen in a negative light. It can be seen as a bottleneck in the development process, one which delays releases and ends up with the tester arguing about everything.  I have even focused on the negative side myself. Testing is still not always accepted as being of equal importance to development, and there is still code being thrown over walls at times.  I feel it is part of the role of the test engineer to help organisations believe in testing and see how it can benefit a company. To do this, the company needs to experience the power of testing and see it as positive and constructive.  This will highlight that testing can be used to build quality and act as an integral building block in shaping products. This article offers ways that the test engineer can help to portray the role in a positive light.

Here are some ways to try and improve the perception of testing and your role as a test engineer:

1. Try to be involved from the start

Our job as testers is not just to find bugs; it is to build in quality from the start, in the hope that there are then fewer bugs to find. Try to be involved from the start.  As soon as a story is defined or a feature is suggested, help to validate that the idea has merit rather than highlighting all the reasons it might not work. That is not to say you shouldn’t identify areas that need investigation, but be constructive.

An example would be, “We are going to build a bridge out of marshmallows”.  The temptation here might be to laugh, especially if you are an engineering company; however, as testers we should be trying to establish the purpose of the bridge itself. Is it for decorating a cake, or for a 10-tonne truck to drive across?

2. Work with developers

Work with developers to identify areas to test when you are breaking down how something will be implemented.  Collaborate on ways to approach the testing, as it may be easier to test some things at a unit or integration level. Working with the developers, you can identify how to test something and any additional access points you might need to do those tests, e.g. an internal command-line argument, or a way to configure a base state in an application. Working with others also helps them to identify areas that they need to test, or at least consider, whilst developing.  When you are breaking down a story into tasks, try to get people to elaborate around the task – for example, what areas of the product will it touch? Try to get them to explain their vision to you, as this often highlights assumptions, and can reduce ambiguity and help establish a concrete consensus amongst the team.  At Red Gate, once we have accepted a story, our developers and testers hold a joint code planning session where we talk through the low-level implementation and produce architecture or state diagrams to visualise it and highlight both the test and code tasks that are needed.

3. Focus on quality

Help to create a team focus on quality and user experience by running focused team exploratory testing sessions, where the whole team is put into pairs and then tries to use the product to achieve a particular goal. For example, on the Deployment Manager team we ran a session where our goal was to ‘Explore the creation and deployment of database packages in Deployment Manager’. During the sessions the pairs are asked to note down (on post-its) any issues, surprises (good and bad), questions and, lastly, ideas. We normally allow 30 minutes for the activity, and at the end of this time we report on our findings.  This feedback is later grouped into areas, and actions to be taken are identified. This is similar to a bug hunt but with more focus.  It’s a great way to share product knowledge across the team and helps to get the team working more closely together.

4. Find the root cause

If you do find an issue, investigate it to find its root cause. It’s often a one-to-many relationship from cause to symptoms, so an issue repository full of root causes is going to be much smaller than one full of symptoms. Then explain the issue to others, giving all the information they need to understand it and to be able to reproduce it easily.  Also identify any possible ramifications of the issue, as they might not be aware of them.  Don’t forget to show the information visually for full effect, if applicable.

5. Keep all communication constructive and positive

Try to keep all communication constructive and positive.  Try to convey information to others by selling them your idea, and avoid destructive trigger words, e.g. ‘show stopper’ or ‘epic failure’.  Trigger words can make people tense, or react without listening to the actual content of the conversation. Let’s say you have realised your product only uses colour to differentiate between successful and failed tasks.  We need to avoid saying “Our product is useless for people with red-green colour blindness”; instead say something along the lines of “Our product only identifies success and failure by the use of colour, so for someone with colour blindness there is no way to determine which is which.  Would we be able to identify it in another way too, for example by using symbols like a tick and a cross?”.  The second version is longer, but it identifies the current situation, why it is a problem, and offers a possible solution.  I often forget that people do not always have the same context as me, so try to set the scene for others as you see it, then identify why something is of particular concern to you.  This will help them see it from your perspective.