“When” to focus on documentation in Agile Software Development?

The only time “documentation” is ever mentioned in Agile (that is, in the Agile Manifesto) is in this statement –

Working Software over Comprehensive Documentation

It is followed by a very important caveat which many manage to neglect – “That is, while there is value in the items on the right, we value the items on the left more.” From that, unfortunately, a ludicrous idea of not documenting at all has grown over time.


Still a Radical concept!

Having been part of many debates on this topic over the years, it’s obvious how under-explored this area of the manifesto is. That is why Business Analysts and Product Owners are still wondering when to document the Agile way, and whether they should do it alone. Collaboration is key.

Sure – BDD helps us document user stories collaboratively and helps create the acceptance criteria for planning discussions. Some have successfully established ATDD, which helps in writing failing acceptance tests and creates an excellent red-green-refactor cycle similar to TDD.
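As a rough illustration, here is what one agreed acceptance criterion can look like once it is turned into an executable ATDD-style check. This is a minimal sketch using pytest; the discount feature and the `apply_discount` function are hypothetical, not taken from any particular project.

```python
# A minimal ATDD-style sketch: the acceptance check is agreed and written first,
# fails (red), and the minimal implementation below turns it green.
import pytest


def apply_discount(price: float, percent: float) -> float:
    # minimal implementation, written only after the failing acceptance test existed
    return price * (1 - percent / 100)


def test_gold_customer_gets_ten_percent_off():
    # Given a basket worth 100.00
    basket_total = 100.00
    # When a gold-tier discount of 10% is applied
    to_pay = apply_discount(basket_total, percent=10)
    # Then the customer pays 90.00
    assert to_pay == pytest.approx(90.00)
```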

BUT

Among all of this, a developer still wonders WHEN they document the intricate details of the technical architecture, the necessary information about a dependency and the system designs, while supporting the “technical excellence” without which no one can really become agile. We can tell each other that TDD helps us focus continuous attention on architectural detail, but how many developers actually use it and write a test before each line of code?

There are many cases where the product documentation is nowhere to be found because the developers were told during planning – “We are now doing it the Agile way and all you will get is a bunch of statements written in Gherkin syntax.”

Agile Principles clarifying the WHEN

#2 Welcome changing requirements, even late in development.

We are seeing changing specifications on a weekly/monthly basis. Depending on the type of business, it’s highly likely you’ll see changing expectations on a daily basis as well.

Can we really document anything before knowing what to build? Give it a deep thought.

When we talk about user stories – the most popular technique for specification gathering – we are not documenting a requirement. We are starting off with WHY, the smallest amount of known high-level outcome, with or without knowing what the output might look like.

If a team never documents anything other than the stories, that’s when it starts getting impractical, with a huge risk to quality, understanding and sustainability. Documentation is required.

#7 Working software is the primary measure of progress.

Traditionally we learned to document first, because we used to plan heavily to begin with. We know how that works against agility, according to the Cone of Uncertainty. We can never know a feature/specification well enough when we start off. We learn through early feedback on iterations that can generate enough discussion.

That is why we should first build a piece of software based on high-level WHYs and review how it looks. Once we are confident and final on a characteristic or functionality, then we should document it.

Only document to record evidence of a built feature which is accepted.

#9 Continuous attention to technical excellence and good design enhances agility.

With ever-changing specifications driven by continuous feedback, we need ever-changing updates to the documentation in real time. Documentation should essentially reflect the current state of affairs rather than what we hope it could be.

When working in huge organisations where a single monolithic code base is used and updated daily/weekly by many, many developers, that’s how chaos starts. Documentation should live in the code – true. It is also true that not every developer is capable of doing it the way we hope they will. If they can’t do it in the code, make sure they do it anywhere else that can be shared and made available to everyone.
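For instance, “documentation in the code” can be as simple as a docstring that records the accepted behaviour next to the implementation. A minimal sketch follows; the payment module and its behaviour are assumptions made up for the example.

```python
# A sketch of documentation living in the code: the docstring records the current,
# accepted behaviour of a hypothetical payment function, so the knowledge stays
# next to the implementation it describes.
def settle_invoice(invoice_id: str, retry: bool = True) -> bool:
    """Settle an invoice against the payment provider.

    Recorded evidence of accepted behaviour:
    - Retries once on a transient provider timeout when ``retry`` is True.
    - Returns ``True`` only after the provider confirms settlement.

    Note: the provider dependency and its timeout behaviour are assumptions
    for this example; document your real dependency the same way.
    """
    ...
```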

Duplicate documentation is better than no documentation, as long as both copies provide the same recorded evidence.

Conclusion

Therefore, the original manifesto statement about “… over comprehensive documentation” actually translates into the following:

First build a working piece of software and then focus on documenting how it was built and what it does, for shared understanding.

Continuous Integration, Delivery & Deploy – It’s part of our lives!

Stakeholder: What do you mean by 2 more weeks? It was almost done yesterday, wasn’t it? I have been waiting for it for the past 2 weeks and now it will be almost a month. Can’t you do anything about it?

Product Owner: Yes, it was, but we found a bug which we can’t fix and test in time. Our third-party provider never mentioned that X has a dependency on Y. It will be ready tomorrow, but it has missed today’s release deadline.

Scrum Master: Agreed. I have managed to get to the bottom of this but was unable to mitigate it, as contractually they are within their SLA. Look at the bright side though, we have managed to push through the 6 other items as planned. The PO and the team members have done the best they could.

Stakeholder: I understand. It’s time we do something about providing a solution to these last-minute issues. It’s practically blocking a high-revenue, business-critical feature.

Product Owner: That’s true.

Scrum Master: We can try continuous delivery as an option.

 

We have heard similar complaints several times (at least I have). If you are part of a development team at any level, you probably know what Integration, Delivery and Deploy mean. If not, this post will guide you through the basics. Before we dive deeper, it is important to state a fact –

The terms ‘Integration’, ‘Delivery’ and ‘Deploy’ are unappealing, and at times worthless, if they don’t have a “Continuous” flow.

If they still appeal to you as they are, you are probably working within a team that prefers traditional approaches like waterfall over the 21st-century approaches of becoming agile or lean, or anything else that promotes continuous improvement and is based on empiricism.

 

Continuous is, in fact, the most underrated word in a product development life cycle. If you use it often at your work, pat yourself on the back and join me in spreading the word. This article is targeted at non-technical professionals working around a development team, ideally a stakeholder/product owner/the like. It can also be a good recap for those who don’t feel 100% confident about the topic.

 

Feedback – now, this is very appealing. It is quite powerful on its own. Imagine what it represents when we talk about “Continuous Feedback”. Feedback in the context of software development means reviewing deliverables after the implementation + integration + delivery (+ deploy, if we are capable). The moment we go deep inside a “software” discussion, we end up talking about coding, testing, processes, tools and practices to establish this feedback cycle. This is where many stakeholders lose interest, as they don’t give a damn about how we make it happen. Technical experts are hired to help translate that low-level information into high-level business goals, not the opposite. Stakeholders just need what they have requested, preferably on or before the proposed timeline.

Definitions

Continuous Integration (CI)

“Continuous Integration is a software development practice where members of a team integrate their work frequently, usually each person integrates at least daily – leading to multiple integrations per day. Each integration is verified by an automated build (including test) to detect integration errors as quickly as possible.” – Martin Fowler

Continuous Delivery (CD)

“Continuous Delivery is a software development discipline where you build software in such a way that the software can be released to production at any time.” – Martin Fowler, again.

Continuous Deploy (CD – yes I know)

Simply, the act of making a completed product available on the Production or Live environment, continuously. – well, me.

It is not the same as continuous delivery. It is confusing because the acronym CD is used for both. Delivery can be to a UAT, Staging or Demo environment which, although it resembles the production environment to some extent, may not really be the same. Continuous Deploy, hence, is not recommended unless we are extremely confident.

The power of “Continuous” (Stay with me)

If you are a heavy commuter like me, live in the South of England, and use the overground DLR/Southern Railway/Thameslink or similar plus the London Underground, you are already witnessing the “Feedback” loop every bloody day. The difference is – they each represent a unique feedback cycle which may or may not suit us.

Unless you have given up and made peace with it, you probably feel a shift in your stress level when travelling on different services. I am going to use an example journey from my home city to London St Pancras to explain this… hold my beer.

Continuous Commitment and Integration

What wakes me up every day is the joy of working as an Agilist, not the excruciating pain of travelling to London every day. Travelling becomes a default daily commitment for a greater good. To commit every morning without fail and to reach the train door in time, I have to make sure that I leave home on time, every day.

To repeat this every day, as a human, I have to make sure that I play an hour of World of Tanks (yes, every day) and get enough sleep. Unfortunately we humans also need a “turn it off and on again” the way our PCs do, otherwise we tend to screw up badly. This is non-negotiable, and if we are not doing what resets our day, we ARE going to fuck up. It’s a matter of being committed “continuously” towards what we have to do and making it second nature.

When we compare this to CI in software development, we can see where we may be falling behind (if we are). If we are not committing to our codebase every day and merging the latest changes (the reset), chances are we will keep seeing side effects that waste more of our valuable time. We may be working on a branch which should have been synchronised days ago. Hence: every day. To ensure a healthy codebase, therefore, we need tests to run continuously whenever we commit, for the fastest feedback on the changes made. If a commit breaks anything, it’s smarter to fix it now rather than blaming the lack of sleep later.
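To make that concrete, here is a hypothetical sketch of the check a CI job (or even a local hook) could run on every commit. Real CI servers do this for you; the only point is that the whole suite runs on every integration, not once before a release.

```python
# A minimal, made-up sketch of the feedback step CI gives us on every commit:
# run the whole test suite and fail loudly if anything breaks.
import subprocess
import sys


def run_checks() -> int:
    """Run the test suite; return the exit code so the build goes red on failure."""
    result = subprocess.run([sys.executable, "-m", "pytest", "-q"])
    return result.returncode


if __name__ == "__main__":
    sys.exit(run_checks())
```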

Delivery and Deploy – How Continuous are we talking about?

As much as we are capable of, as fast as we can. It’s a culture thing and very much depends on our capability and the people around us. It depends on budget, mindset, commitment and necessity. It can be once a week, every few days or a few times a day, which varies widely with the type of product and any dependencies around it. It depends on the third-party teams we are working with, the clients we are providing value to, or simply the fact that we are trying to improve “continuously”, aka Kaizen.

Whatever the reason, it is always preferable to choose the fastest option and use the right approach to get it delivered, without waiting for a stage-gated process of batch deployment and treating it like a ceremony. If we are making a big deal of it, we are already accepting that we need improvements.

Back to travelling…

Choose what works for you over what is “Best Practice”

After I reach Brighton station, I see 2 train services which vary way too much in terms of the value they provide to me as a commuter. T (Thameslink) stops at almost every station on the route, as it provides a “cheap” service, and takes more than 1.5 hours to reach London St Pancras. G (Gatwick Express, same owner) makes limited stops and reaches London Victoria within an hour or so; and yes, it is expensive – around 20% more in cost. T services are reputed for being late on a regular basis and can be quite frustrating at times.

Careful observers might have spotted that I mentioned G only reaches Victoria. Which means reaching St Pancras will take another 30 minutes anyway. Technically, I am not saving any time and I am literally spending more money. But the “values” I am receiving are more important to me: a peaceful journey where I get a seat facing the way the train is heading (yeah… you are not the only one!), free wifi (most days), a socket for charging, a table for the laptop, and it is always on time. From Victoria, the tube is also extremely reliable, even though it IS a dependency.

In software delivery, this is what matters the most. True, 2-3 continuous deliveries a week are possible and are the end goal for some, but is that possible with your skill set, dependencies or culture? Maybe not. If you are being forced into it, you will most probably go and look for better options in your career elsewhere. We can aim for it, but we shouldn’t try to be the superman we are not – we may not even need it. Technically it would then become a waste of time and effort and would create a stressful environment.

Stress is a bitch!

Do you experience a shift in your stress level when travelling via the Overground vs the Underground?

The T service has a train to St Pancras every 30 minutes at peak hours. The G service has one every 15 minutes, giving me a flexible and less stressful commute, on top of the other benefits it provides.

Even though it is stage-gated, it just works for me.

 

There is another reason. If we manage to reach any London terminal and see a tube strike… no-brainer, there are other means to reach your destination. It will delay your journey for that day, but not every day.

Even more stress-free flexibility – the Underground Victoria line has a service every minute or two. You can practically watch a train leave and still not give a damn; the next is moments away, as close as 100m behind. That’s the power of “Continuous” – anything.

As a Stakeholder..

Similarly, a stakeholder should be stress-free about a missed release and shouldn’t have to worry about missing a delivery window, only to be told that they will have to wait for another 2 weeks even though it will be ready for production the next day. That’s the most irritating kind of “process”, and it doesn’t make sense to them or to anyone who favours common sense.

How do we handle this stress as a stakeholder? We do understand that things go wrong; after all, we are professionals. It doesn’t mean that we now have to wait for another 2 weeks. So we will stress out and pressure the team to do a “hotfix”. Even though we know it can have bad implications, we will be biased to push it anyway and will most probably get what we wanted. We wouldn’t care about any other stakeholder, as that is not important anymore. Our success is based on how well we get things out of this team, so screw the team or any other stakeholder.

If the team was delivering continuously, a deploy to production would be considered just another step to add to the Definition of Done.

 

It’s a win for sure, but a short-term one. The fix we pushed out may have bugs, as the development team members were rushed and may have missed a few. It will turn on us the moment we show our back, and the reaction will be fatal. Now the team knows that the stakeholder is being unreasonable. They want to help, but they can’t, as the process doesn’t allow them to. So the next time it comes to committing to something, they will commit to less work to avoid a conflict.

Do we need this shit? It’s not worth the hassle, trust me.

Aim to reduce batch sizes, continuously

If we are moving from traditional to modern approaches, batch release makes sense. In fact, you cannot have an agile transformation without it. The fact that we have to reduce the feedback cycle from a quarter/year to 2-4 weeks (as Scrum, for example, suggests) is still alien to many companies. ‘Continuous’ in this case is the length of the iteration cycle.

But this doesn’t mean that once we have achieved the 2-4 week cycle we cannot aim for even more granular continuous releases. The whole point of reducing batch sizes, to as small as one item per release, is to support the stakeholders’ needs.

As a Development Team Member..

We don’t need a hostile environment; instead, simply use common sense and be human for a moment when thinking about giving feedback, if there is a need. Do a hotfix if that’s what is necessary – just don’t say we can’t do anything for another 2 weeks. What will go wrong? It will slow the team down for the very same reason the resistance is there. At least we can reflect that a change is needed in the process, the tooling or whatever else is blocking us. Stakeholders will be forced to understand, as they are in the same boat, and will stop pushing as it forces them to think long term.

Hope you all have a great week ahead, be continuous.. stay agile.. think lean 😉

Agile – It’s not about delivering working software faster

What is agile? The most clichéd answer to this translates to “delivering working software faster”, because the agile principles clearly state – “Working software is the primary measure of progress”. It doesn’t mention anything about speed though. In the rest of the article, wherever you see “value”, translate it to “shed-load of money” if that works for you. It is very important to establish what the definition of “value” is for you, to understand the true meaning of agility.

If you buy a new bike or car, its performance will always be lower than that of a well-oiled two-year-old vehicle with similar specs. Agile teams are like that. The team members can be experienced and brilliant, but really knowing which part of the product creates the most value requires investment in the thought process. That’s where a Product Owner/Expert comes in, with data, to prioritise the most valuable work to the top of the backlog and own the decision. It’s also their job to convey “why” it is more valuable to the team, as well as to validate its feasibility.

Example Problem Statement

A company hires 10 team members paying £500/day each. Screw prioritisation; an expenditure of £5k/day needs to be used up in any way possible within those contracted 8 hours – even if that means we make the members work on items no one really gives a shit about or that have no value. If we focus on priority we will end up wasting 3 hours a day, during which the members will celebrate a 2-hour lunch and milk the strategy. How do you propose to handle that?

Solution

Assuming a 20-day working month, paying £5k/day or £100k/month is a lot of money to go down the drain if the team works on features no one asked for. The cost can only be justified if they focus on the most valuable item first, instead of an item you may not need for a month. In many cases, as much as 80% of the remaining work is not actually necessary for early feedback, as the most valuable 20% decides the fate of the product. Work on that 20% first, save your time and money, and stop focusing on what the members are doing, as long as they are working on the right thing.
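Spelled out, the arithmetic from the example looks like this (all figures are the illustrative ones above, not real data):

```python
# The arithmetic behind the example problem, spelled out with the article's numbers.
day_rate = 500          # £ per member per day
team_size = 10
working_days = 20       # working days per month

monthly_cost = day_rate * team_size * working_days
print(monthly_cost)     # 100000 -> £100k/month at risk of going down the drain

# Pareto-style framing: if roughly 20% of the backlog drives most of the value,
# building that slice first is what justifies the spend.
valuable_share = 0.20
print(monthly_cost * valuable_share)  # 20000 -> the slice that decides the product's fate
```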

Pareto Principle

Does the above solution sound familiar? Yes, I am referring to the Pareto Principle, which has been known for a century now. Albeit, claiming the 80/20 distribution is “accurate” would be a wrong generalisation, as its application varies. But it does make a point about how we should approach the product backlog. Empiricism, hence, is the only way to go, with faster feedback to continuously update the backlog as we learn more and more about the product and the end users’ expectations.


 

Working software is mandatory but the trick is to convey the message like this –

Agile is NOT about delivering working software faster, it is about delivering the most valuable software faster.

End Note

Prioritisation is all that matters. If you have adopted “Agile” and are still focusing on speed or the amount of work for monitoring purposes, then you are doing it wrong. There – Agile defined the way most businesses want to hear it. Value can have different definitions for different businesses, “money” being the most common form, and that is what we should really focus on.

Hope this article makes it clear why “Agile” makes sense in development.

Technical Debt: 3 ways to articulate and get buy-in from “that person”

The decision to remove technical debt can be crucial and is often a grey area. In some cases, to increase the speed of delivery, we have to carry the debt for as long as we have control over it. When shit gets real and everything slowly gets out of hand, we need to repay this debt by investing time and effort in, say, refactoring. But it can be a hassle to carry out when we need it the most – not because we hate it, but because we never get buy-in from “that person” who doesn’t think it’s important.

 

In self-managed teams it may not be an issue, but other teams have probably faced a discussion around this at least once in their lifetime. “Refactoring doesn’t provide any value to the end user” is the most common reply when we try to prioritise housekeeping. In several cases the person standing in your way is non-technical or has limited knowledge of the architecture that needs this debt removed ASAP. At this moment we have to open up bravely and explain to “that person” why we need it to happen, now. Here are 3 analogies that I often use to articulate the situation and get the buy-in.

Swim back to Shore Alive

I came up with this after watching Gattaca (**spoiler alert**), where two brothers compete to swim as far as they can from the shore. In the climax, Vincent is willing to die to beat his brother Anton, to prove to himself that his dream is larger than his own life.

 

Anyway, as a regular swimmer, the hardest part of challenging yourself is knowing your true capability and stamina. Just because you have the stamina to swim 1 mile in one go, you don’t attempt to swim 1 mile out to sea. You swim 500m out and pivot, so that you can use the other 500m of capacity to come back to the shore, alive. You repeat this a thousand times to increase your stamina and never, ever lie about it. Lying to yourself and being overconfident will kill you. You will be fooling no one but yourself.

Now, apply this analogy to technical debt, where distance and capability are comparable to the threshold of debt we can carry at a given time. Within a team it should be non-negotiable: we shouldn’t keep building technical debt because someone has promised a launch deadline without asking us. You can try going that extra mile to deliver more while ignoring the technical debt, but sooner rather than later you will be screwed big time – your software will not be scalable anymore. You will have no choice but to stop everything and start from scratch. Stop early to analyse, and always keep a recovery plan.

The Boiling Frog

This analogy has been out there for a while (Source – 1, 2), although it is not often used in reference to technical debt (except in the linked articles). Here’s how the story goes.

Researchers found that when they put a frog in a pan of boiling water, the frog’s instant reaction is to jump out ASAP. Because it’s sudden, the environment changes within a fraction of a second and the pain is unbearable (I guess).

On the other hand, when they put a frog in cold water and brought the water to the boil over time, the frog just boiled to death. This time the change in temperature was so gradual that the frog did not realise it was boiling to death.

When our business is hit by a sudden change and we see it going out of hand, we react quickly. If a continuous integration run fails, blocking everyone from using the code base without rebuilding it, we have to fix it right away. So we act ASAP and jump out of the problem. In the case of technical debt, it is slow and gradual – a death threat on the rise. It’s so subtle to begin with that no one cares (we may not even know). We eventually find out and choose to delay the necessary changes. At some point it becomes so bad that we cannot recover anymore. Before we realise the threshold of acceptance was crossed far back, we are already in deep trouble.

Financial Debt

The most widely used analogy around technical debt is simply comparing it with financial debt.

Assume you got a new credit card with a 40% APR. Your card limit is £1,200 and you get a month of interest-free usage. As long as you borrow money and pay it back within that one-month window, you pay nothing in interest. Even if you go beyond that duration and keep paying a minimum amount back every month, it stays within your control. The moment you stop paying anything back after borrowing, you are screwed, as the capital plus the interest becomes your new capital, which accrues more interest, and you lose control.
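A rough sketch of the snowball effect, using the illustrative numbers above and assuming the interest is simply applied monthly with nothing paid back:

```python
# How unpaid debt compounds: £1,200 borrowed at 40% APR, interest applied monthly,
# nothing repaid. A simplified model, not a real credit card statement.
balance = 1200.0
monthly_rate = 0.40 / 12   # crude approximation of 40% APR charged monthly

for month in range(12):
    balance *= 1 + monthly_rate   # interest is added to the capital...
    # ...and next month's interest is charged on the new, larger capital

print(round(balance, 2))  # ~1778 after a year: the debt grew by roughly 48%
```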

Technical debt is exactly that. If you have to borrow debt to go fast and deliver an extremely important feature, by all means do it. Just make sure you keep track of the debt you are creating by not maintaining the code regularly. Loose coupling is fine and still scalable. The moment we start writing hard-coded software and tightly coupled features, we slowly increase the debt until we can never pay it back fully without losing everything and starting from scratch. Pay the minimal debt back when you can and make it a habit, a practice. Don’t let the debt ruin your business.
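To picture the difference, here is a small, hypothetical contrast between a hard-coded, tightly coupled feature and a loosely coupled one; the mailer, its addresses and server name are invented for the example.

```python
# Tightly coupled / hard-coded: the feature knows a concrete provider and magic addresses.
# The debt is invisible until the provider changes, then everything that uses it breaks.
class ReportMailer:
    def send(self, report: str) -> None:
        import smtplib
        smtp = smtplib.SMTP("smtp.internal.example.com")   # hard-coded dependency
        smtp.sendmail("noreply@example.com", "boss@example.com", report)


# Loosely coupled: the feature depends on a small interface, so the provider can change
# (or be faked in tests) without touching the feature itself - the cost of change stays small.
from typing import Protocol


class Sender(Protocol):
    def send(self, recipient: str, body: str) -> None: ...


class Reporter:
    def __init__(self, sender: Sender) -> None:
        self.sender = sender

    def publish(self, report: str) -> None:
        self.sender.send("boss@example.com", report)
```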

End Note

Be true to yourself and leave the decision of controlling technical debt to the community who knows best: the developers. We are not working in the code base daily; they are. Trust their instinct. If you are a big fan of velocity like me, let them estimate with refactoring in mind. The team shouldn’t entertain “that person” who always asks a silly question like “can’t we wait till next month?”. On the other hand, if they have a business strategy backed up with data, don’t ignore it either. We just have to know what we are capable of supporting and stop lying to ourselves.

Hope this helps. If you have a few more ways, feel free to share the ideas/analogies!

Interested in DevOps? Try Flux with XP practices

Culture, movement or practice, whatever you call it, DevOps is beautiful. It ensures collaboration and communication between software professionals who usually don’t think alike in most organisations – the Development team and the Operations team. DevOps practices advocate automating the process of software delivery and infrastructure changes to align these two. Scalability, security and organised chaos promote a distributed decision-making culture, exactly what we need to be agile.

So which framework best suits us while adopting this DevOps culture? In my biased opinion, I feel it’s eXtreme Programming (XP). Its brilliant practices/ideas are often borrowed by other frameworks, including the concept of “User Stories”. Since most frameworks don’t specify these practices, most professionals include XP principles (e.g. TDD, pair programming) anyway. That is why XP, as a methodology, is underrated and quite heavily overshadowed by other popular frameworks.

The Flux framework complements XP by adding the concept of decoupled processes and making sure DevOps adoption doesn’t stay just a great idea but is also implemented to save the world 😉

Individuals and interactions OVER processes and tools

“Over” is often read as “not” by a majority of Agile adopters who are finally starting to realise why agility is far better than many traditional techniques. Is it their mistake? Of course not. The Agile Manifesto is quite philosophical, and the inclusion of “over” just assumes that the reader won’t translate it to suit their needs. It’s either this or that, not both. If both, then to what extent? No one has a definite answer, as it was meant to support evolution, which is the beauty of the Agile Manifesto. But it does scare organisations trying to transition, as not all organisations are consultancies. Not everyone is working for a “client”. Sometimes stability is more important in order to measure a few aspects of the transition. This stability and standardisation of processes is necessary to some extent, as long as it’s not blocking product development.

No doubt, individuals and interactions are important, but they can’t work on their own without processes and practices to support the outcome. We need a basic level of standardised processes and practices to accompany this vision of adopting DevOps to become agile. In fact, most frameworks have vague, undocumented processes which are not standardised across teams. That is extremely unsettling for organisations who are paranoid to begin with. Where they are documented, they are extremely ambiguous and hence often misinterpreted (ref. 1, 2, 3).

XDE complementing XP – Closing the gaps

I have never seen XP work within immature teams, because it needs a sense of responsibility and the urge to know WHY, which only comes from experience. If we ask an immature team to follow XP, they usually try Scrum instead and include TDD and pair programming to call it XP – mostly because Scrum has a guide which they can refer back to. But there are some subtle differences between XP and Scrum, as documented by Mike Cohn.

When we realise most frameworks are ambiguous about implementing set processes, we often fill in the gaps to support the agile principles ourselves. But in doing so we may end up with a process which does more harm than good. By leaving a gap we are letting our minds wander. Most professionals look for the processes first and then learn the principles behind them. Some never care to learn the principles at all, as they assume implementing a framework takes care of everything.

This blind faith and incomplete knowledge promote half-baked assumptions about what works and what doesn’t. First we should go by the book, and then we should focus on mastering it or even bending the rules. XDE tries to close these gaps by formalising the Definition of Done and supporting a DevOps mindset while advocating the best practices from XP.

Companies trying to adopt DevOps need a framework which has a set of processes for all teams and is predictable yet highly customisable.

XDE provides that skeleton by defining the start and end of the development lifecycle within the bubble. “Done” for a product increment is defined to include end-user feedback – continuous delivery plus at least some feedback from users before starting the next work item. It creates a transparent environment, keeping the road map visible at all times by focusing on the value to the end user.

To ensure that the Bubble doesn’t start working on the next item in the backlog, XDE introduces the One Rule (1R), which creates a process of working on one item at a time and focusing only on outcome, not output.

The One Rule (1R) makes sure we are focused on outcome, not output.

Decouple Processes and Succeed

As we know, XDE doesn’t propose any practices on how the product is built, but it recommends XP principles. XP, with its test-first approach, suits best but needs a robust stabilising skeleton, which XDE provides – hence they complement each other. While the team members work following the One Rule, if a team member is free and doing nothing (as they are not allowed to work on anything else), they have no choice but to focus on the ongoing work.

  • “Are you free? Great, get the chair and let’s pair for the rest of the day.”
  • “Yoda is off sick today and we need to review the unit tests before he can start implementing the code tomorrow. Can you do it while you wait?”

Therefore, XDE helps organisations adopt a DevOps mindset smoothly, with the least friction possible, and XP assures that the quality of the delivery is spot on and ready for feedback. Try XDE along with XP to initiate the DevOps mindset and help your organisation in its agile transformation. Focus on outcome, not output.

Test Automation is Dead, Period.

This article is aimed at all software test professionals who are working as, or planning to pursue a career as, an Automation Test Engineer/SDET/SDIT. If the organisation we are working for is trying to be agile, these test professionals can make an enormous contribution by “directing” the development, not just being a silo of knowledge in a specialised department. So focus and read this very carefully, as it will make you the legend you can be, before you learn something which is dying or is already dead in agile development.

The simplest definition of test automation is provided on Wikipedia –

Test automation is the use of special software to control the execution of tests and the comparison of actual outcomes with predicted outcomes.

 

“Special Software” – this is your hint. An automation testing framework is exactly that, maintained side by side. In many cases it has an entirely different code base, responsibility for which falls on the shoulders of specialist automated testers. I have been there myself. While working as a Software Developer in Test, I used to create frameworks from scratch and maintain them. It doesn’t matter what programming language the application is written in; these frameworks can “test” every bloody functionality of the application under test – brilliant stuff. UI, API or integration, you name it, we used to have solutions for everything. That is why I have enormous respect for the testing community. The only problem was, we were testing AFTER the product was built. Here’s why it didn’t make sense.

Someone gave me a piece of advice years back – “Don’t call it wrong, call it different”. My reaction was neutral, even though I completely disagreed, as the person had something like 20 years of experience. In fact, I thought I had learned something “valuable” that day from a very experienced professional. A few months later, both of us shared an awkward glance, which to some extent proved that I hadn’t learned anything valuable and had simply been fooled into believing so. That “calling it different” attitude became an impediment on an ongoing project which needed urgent rescue, and we realised we should have called it “wrong” and made it right… sad times.

 

My point is, we have to call it wrong when it is; otherwise you can never initiate the process of correcting it. Calling it “different” is bureaucratic encouragement to keep doing whatever we are doing and screwing the organisation we are working for, sometimes unconsciously. Let’s come to the subject which needs help, not by being called different or wrong but by being surgically removed – test automation. It might have been right in a distant past, maybe for some, though I have never seen it produce real value, so I will trust and respect the past decisions.

Ask any software test professional “What is your test coverage like?”. You will usually get a reply with a number or a percentage. You can easily assume they have an automated testing framework to check that the tests are passing after a product is built/changed. These tests reduce the manual effort of regression testing (mostly functional tests) and save valuable time before the release deadline. Don’t they? Proud organisations often share these metrics by saying “we have X% test coverage, which is way better than the market standard of Y%”.

Now, here are some words/phrases from the above paragraph which we should pay attention to and understand why they smell.

Testing after the product has been built/changed

If we are testing after a product is built, we are –

Simply validating what it does, not what it should do.

If we don’t want a baby, we use a condom. We don’t just have sex anyway and pray that she will not get pregnant; well, unless you forget to buy one. Prevention is always advised over “dealing with the consequences” later, whether it’s your sex life or delivering software. Have protected sex and stop worrying about raising an “unwanted” baby or even worse consequences. The tests should be our condoms, put in place before building a product to establish a predictable outcome. Try using this example at your work. If you get an email from HR, I am not responsible, but feel free to blame it on me. Use weird analogies like this to make a point; people usually remember them for a long time.

Regression testing before the release deadline

Admit it, it’s the truth. In fact, I have seen many teams doing ceremonial regression testing before a release date. They still do regression testing even if they have automated test coverage (god I hate the term) of 85-90%. Guess what, they still missed critical bugs. When asked, the usual reply was “it was an edge case which wasn’t covered in our automated testing framework. We will include the test before the next production deploy, promise”.

So do we do regression testing in targeted areas, as in risk-based testing? Yes, we can, but then we might miss several other areas which “we think” are not risky. We can never win, as there will always be bugs.

As long as those bugs are not failed acceptance criteria, they will be treated as learning rather than failure.

The trick to improving is to run the regression tests during continuous integration, as part of the same code base. If they fail, fix and re-run, as explained in the XP best practices. At least they are failing in your CI environment, not in production. Regression testing should happen every time there is a change, no matter how small the change is. It should happen as early as possible and should never be treated as a ceremony.

Exploratory testing, therefore, should be the only activity after a product is deployed, to find edge cases – and that’s why manual testing can never be dead.

Test automation, therefore, might reduce your manual effort, but you are still managing and troubleshooting a separate framework and wasting an enormous amount of time and effort to prove something which is already given. It is still not “defining” what the product should do.

Test Coverage fallacy

Quality assurance starts before the software is built. The fact is, it never assures quality if you are writing the tests afterwards; it simply becomes an activity of executing tests. So burn that test coverage report, as it only proves that the work is done. It never says how it assures quality.

Another aspect of “doing it right for the wrong reasons” is unit testing in TDD. A good number of software development professionals and higher management use the % “test” coverage at unit level in their metrics to prove the hard work done to assure quality. You might know where I am going with this. If you are doing Test Driven Development (TDD) you are NOT testing anything that matters to the end users, so stop adding the number of unit tests to your test coverage report.

TDD is for architecture and is a brilliant method for creating a robust solution which supports the functionality on top of it. The unit tests are merely the by-products of a good design and do not guarantee the complete functional behaviour. TDD is always recommended, but don’t try to make it sound like it’s testing the acceptance criteria. In fact, a good programmer writes these unit tests to be confident in their own coding skills, not because someone asks them to. In most cases, TDD is enforced to reduce technical debt by creating a culture of continuous refactoring.
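To make the distinction concrete, here is a tiny TDD-style unit test; the `slugify` helper is a made-up example. It pins down the design of a small unit, not an end-user acceptance criterion, which is why it has no place in a coverage claim about user-facing quality.

```python
# A TDD by-product: a unit test that documents the design of a small helper.
def slugify(title: str) -> str:
    # written only after a failing test demanded it (red -> green), then refactored
    return "-".join(title.lower().split())


def test_slugify_joins_lowercased_words_with_hyphens():
    assert slugify("Test Automation Is Dead") == "test-automation-is-dead"
```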

The third aspect of the coverage fallacy is that coverage doesn’t mean everything is fully automated. It can be half manual and half automated. Coverage simply means we are confident about that percentage of the functionality, and the rest is not tested. This creates a false sense of security. We should always test everything, and we can only do that when we direct what we are building. If exploratory testing finds an edge case, we can add a direction to tackle the behaviour rather than write a test to run later. Regression testing after the changes are in place is already late. If you are an experienced tester you might read the code changes, figure out which areas are unaffected by the change and decide to leave them, but you are still late.

Use your expertise BEFORE the development – Acceptance Guides not Tests

So what about your planned career as a Test Automation engineer? Well, no one said your skills are worthless; in fact, you can use more of them.

We just have to shift left, a bit more left than we initially expected.

Not only do we have to stop writing automated tests after the fact, we write them as guides before we decide to start working on the work item. The programmers then have to implement the minimal code to pass your statement, hence preventing the defects for you. Better still if you can help them in a pair programming session. That being said, it doesn’t mean we cannot apply the principles behind TDD. Enter Acceptance Test Driven Development (ATDD), Continuous Integration and Continuous Delivery.

Assuming everyone has heard/read about these; if not, there is a huge collection of online material to read. In a nutshell, ATDD advocates that we should write acceptance tests before writing the code for the software we build. It defines what we should build with the minimal code needed to pass these tests, just like TDD but at a higher level which the end users expect to work.
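Here is a hedged sketch of what such an acceptance test, written first, can look like at that higher level; the login feature, `AccountService` and its locking rule are all hypothetical, invented for the example.

```python
# An ATDD-style "acceptance guide": the expected end-user behaviour is written down
# first, and the minimal implementation exists only to satisfy it.
class AccountService:
    """Minimal code written to pass the acceptance guide below - nothing more."""

    def __init__(self) -> None:
        self._users = {"ana": "s3cret"}
        self._locked: set[str] = set()

    def login(self, user: str, password: str) -> bool:
        if user in self._locked:
            return False
        if self._users.get(user) == password:
            return True
        self._locked.add(user)  # illustrative rule: lock after one failed attempt
        return False


def test_account_locks_after_a_failed_login():
    # Given a registered user
    accounts = AccountService()
    # When they enter the wrong password once
    assert accounts.login("ana", "wrong") is False
    # Then even the correct password no longer works until the account is unlocked
    assert accounts.login("ana", "s3cret") is False
```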

Are these acceptance tests really tests though?

Yes and no. In my opinion these are not tests, even though we call them that (a topic for debate?). Enlighten me if you know. Instead, they are expected results (or behaviours) which are used as guides to develop a piece of functionality. Therefore, we should really be calling them “Acceptance Guides” (just a thought). That is why a lot of us use behaviours as proposed in BDD, as they kind of complement each other. These can include non-functional tests as well.

 

Verdict

Therefore, to summarise, we are talking about completely removing regression tests run after development, while including them within the same code base as acceptance tests/guides (it doesn’t matter what they’re called at this moment). Also, we need to stop counting unit tests as data within test coverage metrics. Behold, we have just proved that test automation can be dead, or is already dead.

Be Lean to stay agile.