Interactive Persona Modelling – The Game

A niche community of product development practitioners is embracing the use of Personas in their discovery phases. We have a soft spot for it, most visible in the implementation of tests in BDD and ATDD when we discuss it in the context of software development, although it should be used from the inception of any product.

Many argue that it creates a lot of overhead, since there is an exponential number of ways a user can use a product. Finding a rational method to meet the expectations of these users is smart work rather than hard work. Here is a good way to use personas in software development. But I have never seen an interactive way of "modelling" the most effective persona. There is even a dedicated online tool for this, which helps to some extent. Still, the majority are exploring the concept and, frankly, struggling for one core reason: time and cost. So..

How do we model a Persona without wasting too much time/cost?

This article is about creating these Personas from scratch using a game called "Interactive Persona Modelling", not about how to use them in practice.

What is a Persona?

  • An archetypal user of the intended product
  • Fictitious, but based on real user information
  • Brings an imaginary user to life with an immutable set of personal data
  • It is quite common to have more than one page of documentation written on each persona

Origin of the Game

A few days back, I was part of a mind-blowing two-day "thing" called Emergent Agile Leadership by Tobias Mayer, an Agile Philosopher and Psychiatrist (opinions are mine) painting a Teal shade among us. He has managed to shake up my thoughts more than any other agilist I have ever known. Why am I calling the event a "thing" though? Well, I am still struggling to work out what it was about and what happened to me during the sessions. All I have is more questions and the rest of my life to figure out the answers. It was the best experience so far, inside a room full of 12 other strangers, who somehow are not strangers anymore.

Tobias introduced us to his game "Help me to see it". We stood in a circle and one person started with the name of any person plus one characteristic to describe him/her, say "This is Simon who is short". The second person now has a name and a feature of a hypothetical figure and adds another characteristic, like "Simon is a short midget who has pink hair"; the third: "He also has a fat belly"; the fourth: "He is wearing leather shoes"; and so on. We ended up with something like this –

“Simon is a midget with pink hair and fat belly. He is wearing leather shoes while participating in a festival. He is drunk and dancing with a Monkey on his shoulder. By profession he is a martial arts trainer. Simon is shouting and swearing to the DJ and expressing his joy while taking a time off from work.”

 

Being a Dragon Ball Z nerd, this conversation immediately reminded me of this guy (that’s when I added the bit “He is a martial arts trainer”):

 

Anyway, as you can imagine, things can easily get out of hand if someone starts saying "Simon is now riding a unicorn". At this point anyone in the group can say "I don't see it, help me see it". If the weird idea gains support, the next person in the queue (clockwise or anti-clockwise) can explain why Simon is riding the unicorn and keep the game running. Fortunately, we were all mature enough to stop before it went out of bounds.

To keep this game a special experience for you in one of Tobias's sessions, I will not reveal its other fun aspects, but you get the picture. You can guess where I am going with this now, can't you? Some mechanics of the game above helped me formulate this special game for Persona Modelling.

The Game – “Interactive Persona Modelling”

Prerequisite

  1. Most of the attendees should NOT know why they are meeting on the day. If they do, you may end up with a biased, predictable and pre-planned persona.
  2. The more members, the better. Allow a maximum of 15 minutes (negotiable) for each Persona.
  3. Invite multiple real users or clients on the day; or invite employees who know them very well, such as Account Managers, Product Owners/Managers or Stakeholders.
  4. A room big enough to accommodate everyone joining.
  5. A Facilitator, who will also act as the Scribe.

The Game Flow

  1. The facilitator helps create a standing circle of individuals and initiates the first round by calling out just a name, without any behavioural patterns of the desired persona. The next person has to say where the persona adds value, say "a super-admin user". From this point onwards, everyone adds a personal behavioural pattern, one by one.
  2. Make it personal – A description of the Persona doesn't have to be purely work related. A Persona is supposed to be life-like, with an emotional background such as where they work, how they drive, where they travel or whom they supplied hash to in the past. If you can find an image of a real person, go for it. It is supposed to look as real as possible, printed and hung on the wall for display. And no, cat or dog pictures are not allowed, unless your product is an artificial food for them.
  3. The Facilitator, in the role of Scribe, writes down every aspect of the conversation, which "no one is in control of". If a member starts showing authority over this fun situation, the facilitator has to step in and explain the rules one more time.
  4. Anyone is allowed to stop the game to understand the context of any behaviour/fact mentioned. If they don't agree, they have to find 2 more individuals who also disagree. If 3 or more people disagree, that characteristic of the Persona will not be documented by the Scribe.
  5. If the game continues, it suggests everyone is in agreement. The cycle can stop after 1 minute or run to a maximum timebox of 15 minutes, which should be more than sufficient when the right individuals are in the room.
  6. If the game abruptly stops due to a disagreement, that specific behaviour/data/fact should be discussed outside of this timebox. It should not disrupt the planned 15-minute runtime.
  7. On purpose, this game has to be played standing, without distractions like mobiles, notebooks, laptops or even watches.
  8. When the timebox is up, the Scribe/Facilitator will read the whole document aloud and let everyone hear what the persona feels like to them. Then he/she will format it the way the company prefers, add the image and frame it. Job done!

Scribe’s Role

The Scribe's job is crucial in this game: they have to note everything without judging whether a piece of data is valid or not. The Scribe can also be a UX designer, or a person with exceptional drawing skills who doesn't write but instead creates a visual summary. A cartoonist, perhaps: open up and show your skills.

A Scenario

If a member says "the super-admin user should be able to search", you may also expect someone to say "the super-admin can search any directory". BUT the next person may say "the super-admin cannot search data from Directory X, because our software uses E2E encryption on that database". Result – it MAY expose negative cases which no one really thought about, and which came into the picture only because a PO once had a hard time dealing with them.

An Example (If I may!)

Say we are building software for the FBI, using a Persona which represents a hypothetical drug lord near Albuquerque, New Mexico. We want to know what he is like and how he interacts with his fellow dealers, so the FBI can better predict his next move and raid the drug lab full of crystal meth, hidden somewhere near an industrial area.

Dr. Walter White aka Heisenberg

 

Walter is a graduate of the California Institute of Technology who once was a promising chemist. He co-founded the company “Gray Matter Technologies”. He left Gray Matter abruptly selling his shares for $5,000. Soon afterward, the company made a fortune of roughly $2.16 billion, much of it from his research.

Walt subsequently moved to Albuquerque, New Mexico, where he became a high school chemistry teacher. He is diagnosed with Stage IIIA lung cancer. After this discovery, he resorts to manufacturing methamphetamine and drug dealing to ensure his family’s financial security after his death.

He has now been pulled deeper into the illicit drug trade, taking ever more desperate and ruthless measures to assure his and his family's safety and well-being. He has recently adopted the alias "Heisenberg", now recognised as the kingpin figure in the local drug trade. He remains a national security risk and should be shot on sight. A reward of $1M will be paid for finding him, dead or alive.

His known accomplice is Jesse Pinkman, who got into deep shit after becoming emotionally attached to Walter and is currently being hunted by the DEA (next persona to be built soon).

 

Possibilities and Conclusion

Don't you think we could make a brilliant TV series based on this story? I will name it "Breaking Bad". Thumbs up for this original idea I have just made up.. oh wait! 😉

Now that you know where this game can be used, the sky is the limit. Jokes apart, this method can create multiple personas within an hour, to be used in every aspect of the development cycle and not just testing. It can also be used in any other industry, not just software. Have a great time playing the game. Embrace Personas and make a difference in your approach.

Technical Debt: 3 ways to articulate and get buy-in from “that person”

The decision to remove technical debt can be crucial and is often a grey area. In some cases, to increase the speed of delivery, we have to carry the debt as long as we have control over it. When shit gets real and everything slowly gets out of hand, we need to repay this debt by investing time and effort in, say, refactoring. But it can be a hassle to carry out when we need it the most; not because we hate it but because we never get buy-in from "that person" who doesn't think it's important.

 

In self-managed teams it may not be an issue, but other teams have probably faced this discussion at least once in their lifetime. "Refactoring doesn't provide any value to the end user" is the most common reply when we try to prioritise housekeeping. In several cases the person standing in your way is non-technical or has limited knowledge of the architecture that needs this debt removed asap. At this moment we have to open up bravely and explain to "that person" why we need it to happen, now. Here are 3 analogies that I often use to articulate the situation and get the buy-in.

Swim back to Shore Alive

I came up with this after watching Gattaca (**spoiler alert**), in which two brothers compete to swim as far from the shore as they can. In the climax, Vincent is willing to die to beat his brother Anton, to prove to himself that his dream is larger than his own life.

 

Anyway, as a regular swimmer, the hardest part of challenging yourself is knowing your true capability and stamina. Just because you have the stamina to swim 1 mile in one go, you don't attempt to swim 1 mile out to sea. You go 500m out and pivot, so that you can use the other 500m of capacity to come back to the shore alive. You repeat this a thousand times to increase your stamina and never, ever lie about it. Lying to yourself and being overconfident will kill you. You will be fooling no one but yourself.

Now apply this analogy to technical debt, where distance and capability are comparable to the threshold of debt we can carry at a given time. Within a team this should be non-negotiable: we shouldn't keep building technical debt because someone has promised a launch deadline without asking us. You can try going that extra mile to deliver more while ignoring the technical debt, but sooner or later you will be screwed big time and your software will not be scalable anymore. You will have no choice but to stop everything and start from scratch. Stop early to analyse, and always keep a recovery plan.

The Boiling Frog

This analogy has been around for a while (Source – 1, 2), although it is not often used in reference to technical debt (except in the linked articles). Here's how the story goes.

Researchers found that when they put a frog in a pan of boiling water, its instant reaction is to jump out asap. Because the change is sudden, the environment changes within a fraction of a second and the pain is unbearable (I guess).

On the other hand, when they put a frog in cold water and slowly brought the water to a boil, the frog simply boiled to death. This time the change in temperature was so gradual that the frog did not realise it was boiling to death.

When our business is hit by a sudden change and we see things getting out of hand, we react quickly. If a continuous integration run fails, blocking everyone from using the code base without rebuilding it, we have to fix it right away. So we act asap and jump out of the problem. Technical debt, in contrast, is slow and gradual, a death threat on the rise. It is so subtle to begin with that no one cares (we may not even know it is there). We eventually find out, yet choose to delay the necessary changes. At some point it becomes so bad that we cannot recover anymore. Before we realise the threshold of acceptance was crossed far back, we are already in deep trouble.

Financial Debt

The most widely used analogy around technical debt is simply comparing it with financial debt.

Assume you get a new credit card with a 40% APR. Your card limit is £1200 and you get a month of interest-free usage. As long as you borrow money and pay it back within that 1-month window, you pay no interest. Even if you go beyond that period but keep paying a minimum amount back every month, it stays within your control. The moment you stop paying anything after borrowing, you are screwed: the capital plus the interest becomes your new capital, which accrues more interest, and you lose control.
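The runaway compounding above is easy to sketch. The numbers reuse the £1200 limit and 40% APR from the example; the helper name and monthly-compounding assumption are mine, not any real finance API:

```python
def balance_after(months: int, principal: float = 1200.0, apr: float = 0.40) -> float:
    """Unpaid balance after `months` of paying nothing, compounding monthly."""
    monthly_rate = apr / 12  # 40% APR is roughly 3.33% per month
    balance = principal
    for _ in range(months):
        balance *= 1 + monthly_rate  # interest is folded into the new capital
    return round(balance, 2)

# After a year of ignoring the debt, £1200 has grown to roughly £1780.
print(balance_after(12))
```

The point of the loop is the analogy itself: each month's interest becomes next month's capital, which is exactly how unpaid technical debt behaves.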

Technical debt is exactly that. If you have to borrow debt to go fast and deliver an extremely important feature, by all means do it. Just make sure you keep track of the debt you are creating by not maintaining the code regularly. Loose coupling is fine and still scalable. The moment we start writing hard-coded software and tightly coupled features, we slowly increase the debt until it can never be fully paid back without losing everything and starting from scratch. Pay a minimal amount of debt back whenever you can and make it a habit, a practice. Don't let the debt ruin your business.

End Note

Be true to yourself and leave the decision of controlling technical debt to the community that knows best: the developers. We are not working in the code base daily; they are. Trust their instinct. If you are a big fan of velocity like me, let them estimate with refactoring in mind. The team shouldn't entertain "that person" who always asks a silly question like "can't we wait till next month?". On the contrary, if they have a business strategy backed up with data, don't ignore it either. We just have to know what we are capable of supporting and stop lying to ourselves.

Hope this helps. If you have a few more ways, feel free to share your ideas/analogies!

Definition of Done (DoD) in XDE Framework

In my last post about Definition of Done (DoD) – The Holy Cow Approach, we saw how "Done" can be misinterpreted simply because there is no set definition for everyone. We have ways to deal with DoD by agreeing together on what Done means before starting work on a User Story or any form of backlog item. This definition can therefore change depending on context, product, team or even client demand.

What if we didn't have to go through this never-ending debate of defining Done for each work item? The internet is full of these discussions, agreements and disagreements, which we could live without.

What if, we have a universal DoD which establishes shared understanding company wide?

Definition of Done (DoD) in XDE is a shared understanding

Xtreme Decoupled Engineering (XDE) has a beautiful way of replacing this small talk with something that adds real value – delivery and end-user feedback. In XDE, we don't need to establish a ground rule about where to stop before starting work. We stop and call it Done when we get "a" piece of user feedback – good, bad or neutral. If it's good we celebrate, if bad we learn, and if neutral we let the Product Expert decide where to go from there.
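The rule above is simple enough to state as a tiny decision table. This is purely my own illustrative modelling; XDE itself prescribes no code:

```python
from enum import Enum
from typing import Optional

class Feedback(Enum):
    GOOD = "good"
    BAD = "bad"
    NEUTRAL = "neutral"

def is_done(feedback: Optional[Feedback]) -> bool:
    """A work item is Done once ANY end-user feedback exists."""
    return feedback is not None

def next_step(feedback: Feedback) -> str:
    """Good -> celebrate, bad -> learn, neutral -> the Product Expert decides."""
    return {
        Feedback.GOOD: "celebrate",
        Feedback.BAD: "learn",
        Feedback.NEUTRAL: "let the Product Expert decide",
    }[feedback]
```

Notice that `is_done` does not care which kind of feedback arrived; the universal DoD is the mere existence of feedback.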

If this is not Simplicity, what is?

 

The One Rule in XDE makes sure we do not get distracted, and the Delivery Expert, as the Servant Leader of the 1R Team, guides the bubble to focus on one piece of work at a time. The DoD is therefore universal to all teams, and anyone interested in the ETA of a certain piece of value doesn't have to worry about what "state" it will come out in. It will always come out in a "Ready to Ship" state, whenever it is done and wherever it is deployed for feedback.

DoD in XDE invokes the boundary for Cycle Time measurement

Cycle time is the total time from the start to the end of the development process; measuring it increases predictability, especially if we are bound by a Service Level Agreement (SLA). DoD in XDE takes help from the One Rule to establish these SLAs, building predictable data over time into a healthy metric for optimising the process. Here's a visual which summarises how it works:

As we can see, the cycle time is basically the duration a 1R team takes to achieve the DoD while following the One Rule. It also promotes the implementation of DevOps through an extreme reduction of multitasking and a focus on end-user feedback.
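Measuring that duration needs nothing more than two timestamps per work item. The function and the timestamp format below are my own assumptions for illustration, not anything defined by XDE:

```python
from datetime import datetime

def cycle_time_days(started: str, feedback_received: str) -> float:
    """Elapsed days between starting a work item and getting end-user feedback."""
    fmt = "%Y-%m-%d %H:%M"
    delta = datetime.strptime(feedback_received, fmt) - datetime.strptime(started, fmt)
    return delta.total_seconds() / 86400  # 86400 seconds in a day

# e.g. started Monday 09:00, feedback arrived Thursday 09:00 -> 3.0 days
print(cycle_time_days("2024-03-04 09:00", "2024-03-07 09:00"))
```

Collected over many items, these numbers are the "predictable data over time" that the SLA conversation needs.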

Conclusion

We have a tendency to make things complicated when there is a simpler solution. XDE's Definition of Done simplifies this ambiguous topic of discussion. An organisation can worry about the bigger things that need attention, and teams can work towards the same goal every time, like second nature.

More about XDE: http://www.xdecouple.com

Interested in DevOps? Try Flux with XP practices

Culture, movement or practice, whatever you call it, DevOps is beautiful. It ensures the collaboration and communication of software professionals who usually don't think alike in most organisations: the Development team and the Operations team. DevOps practices advocate automating the process of software delivery and infrastructure changes to align the two. Scalability, security and organised chaos promote a distributed decision-making culture, exactly what we need to be agile.

So which framework best suits us while adopting this DevOps culture? In my biased opinion it's eXtreme Programming (XP). Its brilliant practices/ideas are often borrowed by other frameworks, including the concept of "User Stories". Since most frameworks don't specify such practices, most professionals include XP principles (e.g. TDD, pair programming) anyway. That is why XP, as a methodology, is heavily underrated and overshadowed by other popular frameworks.

The Flux framework complements XP by adding the concept of decoupled processes, making sure DevOps adoption doesn't stay just a great idea but is actually implemented to save the world 😉

Individuals and interactions OVER processes and tools

"Over" is often read as "not" by a majority of Agile adopters who are finally starting to realise why agility beats many traditional techniques. Is it their mistake? Of course not. The Agile Manifesto is quite philosophical, and the inclusion of "over" simply assumes that the reader won't translate it to suit their own needs. Is it either this or that, not both? If both, then to what extent? No one has a definite answer, as the manifesto was meant to support evolution, which is its beauty. But it does scare organisations trying to transition, as not all organisations are consultancies. Not everyone is working for a "client". Sometimes stability is more important, to measure a few aspects of the transition. This stability and standardisation of processes is necessary to some extent, as long as it isn't blocking product development.

No doubt individuals and interactions are important, but they can't work on their own without processes and practices to support the outcome. We need a basic level of standardised processes and practices to accompany this vision of adopting DevOps to become agile. In fact, most frameworks have vague, undocumented processes which are not standardised across teams. That is extremely unsettling for organisations that are paranoid to begin with. Where documented, they are extremely ambiguous, hence often misinterpreted (ref. 1, 2, 3).

XDE complementing XP – Closing the gaps

I have never seen XP work within immature teams, because it needs a sense of responsibility and the urge to know WHY, which only comes from experience. If we ask an immature team to follow XP, they usually try Scrum instead and add TDD and pair programming to call it XP, mostly because Scrum has a guide they can refer back to. But there are some subtle differences between XP and Scrum, as documented by Mike Cohn.

When we realise most frameworks are ambiguous about implementing set processes, we often fill in the gaps ourselves to support the agile principles. But in doing so we may end up with a process that does more harm than good. By leaving a gap, we are letting our minds wander. Most professionals look for the processes first and learn the principles behind them later. Some never care to learn the principles at all, assuming that implementing a framework takes care of everything.

This blind faith and incomplete knowledge promote half-baked assumptions about what works and what doesn't. First we should go by the book, and then we should focus on mastering it or even bending the rules. XDE tries to close these gaps by formalising the Definition of Done and supporting the DevOps mindset while advocating the best practices from XP.

Companies trying to adopt DevOps need a framework which has a set of processes for all teams; one that is predictable yet highly customisable.

XDE provides that skeleton by defining the start and end of the development lifecycle within the bubble. "Done" for a product increment is defined to include end-user feedback: Continuous Delivery plus at least some feedback from users before starting the next work item. It creates a transparent environment, keeping the road map visible at all times by focusing on the value to the end user.

To ensure that the Bubble doesn't start working on the next item in the backlog, XDE introduces the One Rule (1R), which creates a process of working on one item at a time and focusing only on outcome, not output.

One Rule (1R) makes sure we are focused on Outcome not Output.

Decouple Processes and Succeed

As we know, XDE doesn't propose any practices on how the product is built, but it recommends XP principles. XP, with its test-first approach, suits it best but needs the robust, stabilising skeleton which XDE provides; hence they complement each other. While the 1R team members work following the One Rule, if a team member is free with nothing to do (as they are not allowed to work on anything else), they have no choice but to focus on the ongoing work.

  • “Are you free? Great, get the chair and let’s pair for the rest of the day.”
  • “Yoda is off sick today and we need to review the unit tests before he can start implementing the code tomorrow. Can you do it while you wait?”

Therefore, XDE helps organisations adopt the DevOps mindset smoothly, with the least friction possible, while XP assures that the quality of the delivery is spot on and ready for feedback. Try XDE along with XP to initiate the DevOps mindset and help your organisation in its agile transformation. Focus on outcome, not output.

Test Automation is Dead, Period.

This article is aimed at all software test professionals who are working as, or planning to pursue a career as, Automation Test Engineers/SDETs in the future. If the organisation we work for is trying to be agile, these test professionals can make an enormous contribution by "directing" the development, not just being a silo of knowledge in a specialised department. So focus and read this very carefully, as it can make you the legend you can be, before you invest in learning something which is dying or is already dead in agile development.

The simplest definition of test automation is found on Wikipedia –

Test automation is the use of special software to control the execution of tests and the comparison of actual outcomes with predicted outcomes.
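Stripped to its essence, that "special software" is nothing more than code comparing an actual outcome with a predicted one. The function under test here is a made-up stand-in, not from any real framework:

```python
def add(a: int, b: int) -> int:
    """Hypothetical unit of the application under test."""
    return a + b

def test_add() -> None:
    actual = add(2, 3)          # control the execution of the test
    predicted = 5               # the predicted outcome
    assert actual == predicted  # the comparison from the definition above

test_add()  # silence means the check passed
```

Everything else an automation framework adds (runners, reports, fixtures) is scaffolding around this single comparison.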

 

"Special software" – this is your hint. An automation testing framework is exactly that, maintained side by side with the product. In many cases it has an entirely different code base, responsibility for which falls on the shoulders of specialist automation testers. I have been there myself. While working as a Software Developer in Test, I used to create frameworks from scratch and maintain them. No matter what programming language the application is written in, these frameworks can "test" every bloody functionality of the application under test; brilliant stuff. UI, API or integration, you name it, we had solutions for everything. That is why I have enormous respect for the testing community. The only problem was, we were testing AFTER the product was built. Here's why it didn't make sense.

Someone gave me a piece of advice years back: "Don't call it wrong, call it different". My reaction was neutral, even though I completely disagreed, as the person had some 20 years of experience. In fact, I thought I had learned something "valuable" that day from a very experienced professional. A few months later, we shared an awkward glance which, to some extent, proved that I hadn't learned anything valuable and had simply been fooled into believing so. That "calling it different" attitude became an impediment on an ongoing project: it needed urgent rescue, and we realised we should have called it "wrong" and made it right.. sad times.

 

My point is, we have to call it wrong when it is; otherwise we can never initiate the process of correcting it. Calling it "different" is bureaucratic encouragement to keep doing whatever we are doing and screwing the organisation we work for, sometimes unconsciously. Let's come to the subject which needs help, not by being called different/wrong but by being surgically removed: test automation. It might have been right in the distant past, maybe for some, though I have never seen it produce real value, so I will trust and respect those past decisions.

Ask any software test professional "What is your test coverage like?" and you will usually get a reply with a number or a percentage. You can safely assume they have an automated testing framework to check whether the tests pass after a product is built/changed. These tests reduce the manual effort of regression testing (mostly functional tests) and save valuable time before the release deadline. Don't they? Proud organisations often share these metrics, saying "we have X% test coverage, which is way better than the market standard of Y%".

Now, here are some words/phrases from the paragraph above which we should pay attention to, and understand why they smell.

Testing after the product has been built/changed

If we are testing after a product is built, we are –

Simply validating what it does, not what it should do.

If we don't want a baby, we use a condom. We don't have sex anyway and then pray that she won't get pregnant; well, unless you forgot to buy one. Prevention is always advised over "dealing with the consequences" later, whether it's your sex life or delivering software. Have protected sex and stop worrying about raising an "unwanted" baby, or even worse consequences. The tests should be our condoms, put in place before building a product to establish a predictable outcome. Try using this example at your work; if you get a mail from HR, I am not responsible, but feel free to blame it on me. Use weird analogies like this to make a point; people usually remember them for a long time.

Regression testing before the release deadline

Admit it, it's the truth. In fact, I have seen many teams doing ceremonial regression testing before a release date. They still do regression testing even with automated test coverage (god, I hate the term) of 85-90%. Guess what, they still miss critical bugs. When asked, the usual reply is "it was an edge case which wasn't covered in our automated testing framework. We will include the test before the next production deploy, promise".

So do we do regression testing in targeted areas, as in risk-based testing? Yes, we can, but then we might miss several other areas which "we think" are not risky. We can never win, as there will always be bugs.

As long as those bugs are not failed acceptance criteria, they will be treated as learnings rather than failures.

The trick to improving is to run the regression tests during continuous integration, as part of the same code base. If a test fails, fix and re-run, as explained in the XP best practices. At least it is failing in your CI environment, not in production. Regression testing should happen every time there is a change, no matter how small the change is. It should happen as early as possible and should never be treated as a ceremony.

Exploratory testing, therefore, should be the only activity after a product is deployed, to find edge cases; and that's why manual testing can never be dead.

Test automation, therefore, might reduce your manual effort, but you are still managing and troubleshooting a separate framework, wasting an enormous amount of time and effort to prove something which is already given. It is still not "defining" what the product should do.

Test Coverage fallacy

Quality assurance starts before the software is built. The fact is, writing tests afterwards never assures quality; it simply becomes an activity of executing tests. So burn that test coverage report, as it only proves that the work is done. It never says how the quality is assured.

Another aspect of "doing it right for the wrong reasons" is unit testing in TDD. A good number of software development professionals and higher managers use the % "test" coverage at unit level in their metrics to prove the hard work done to assure quality. You might see where I am going with this. If you are doing Test Driven Development (TDD), you are NOT testing anything that matters to the end users, so stop adding the number of unit tests to your test coverage report.

TDD is for architecture and is a brilliant method for creating a robust design which supports the functionality on top of it. The unit tests are merely the by-products of a good design and do not guarantee the complete functional behaviour. TDD is always recommended, but don't make it sound like it is testing the acceptance criteria. In fact, a good programmer writes these unit tests to be confident in his own coding skills, not because someone asks him to. In most cases, TDD is enforced to reduce technical debt by creating a culture of continuous refactoring.
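To make the distinction concrete, here is the kind of artefact TDD produces; it drives the design of one tiny unit and says nothing about any end-user acceptance criterion. The `slugify` helper is invented purely for illustration:

```python
# Red: the test is written first, and fails because slugify does not exist yet.
def test_slugify_lowercases_and_hyphenates() -> None:
    assert slugify("  Hello World ") == "hello-world"

# Green: the minimal implementation that makes the test pass.
def slugify(text: str) -> str:
    return text.strip().lower().replace(" ", "-")

test_slugify_lowercases_and_hyphenates()
```

No end user ever asked for `slugify`; the test exists to pin down the design of a unit, which is exactly why counting it as "test coverage" is misleading.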

The third aspect of the coverage fallacy is that coverage doesn't mean everything is fully automated. It can be half manual and half automated. Coverage simply means we are confident about that percentage of the functionality; the rest is not tested. This creates a sense of false security. We should always test everything, and we can only do that when we direct what we are building. If exploratory testing finds an edge case, we can add a direction to tackle the behaviour, not write a test to run later. Regression testing after the changes are in place is already late. If you are an experienced tester you might read the code changes, figure out which areas are unaffected and decide to leave them, but you are still late.
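That is all a coverage figure really is, a ratio. The toy arithmetic below is my own illustration, not the output of any real coverage tool, and it shows why an impressive number stays silent about the untested remainder:

```python
def coverage_percent(exercised: int, total: int) -> float:
    """Coverage is just exercised/total; it says nothing about WHEN or HOW WELL."""
    return round(100 * exercised / total, 1)

# 85% "coverage" still leaves 15 of every 100 behaviours entirely untested,
# and says nothing about whether the 85 were checked before or after the build.
print(coverage_percent(85, 100))
```

The ratio also cannot distinguish automated from manual checks, which is precisely the false security described above.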

Use your expertise BEFORE the development – Acceptance Guides not Tests

So what about your planned career as a Test Automation engineer? Well, no one said your skills are worthless; in fact, you can use more of them.

We just have to shift left, a bit more left than we initially expected.

Not only do we stop writing automated tests after development, we write them as guides before we decide to start working on the work item. The programmers then have to implement the minimal code to pass your statement, preventing the defects for you. Better still if you can help them in a pair programming session. That said, it doesn’t mean we cannot apply the principles behind TDD. Enter Acceptance Test Driven Development (ATDD), Continuous Integration and Continuous Delivery.

I assume everyone has heard or read about these; if not, there is a huge collection of online material to explore. In a nutshell, ATDD advocates writing acceptance tests before writing the code for the software we build. It defines what we should make, with minimal code to pass these tests, just like TDD but at a higher level: the one the end users expect to work.

Are these acceptance tests really tests though?

Yes and no. In my opinion these are not tests, although that is what they are called (a topic for debate? Enlighten me if you know). Instead, they are expected results (or behaviours) used as guides to develop a piece of functionality. Therefore, we should really be calling them “Acceptance Guides” (just a thought). This is why many of us use behaviours as proposed in BDD, as the two complement each other. These can include non-functional tests as well.
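As a rough sketch of what such a guide can look like in plain code, here is an invented shopping-basket example written in the Given/When/Then spirit of BDD. Everything here, the domain, the class, the names, is hypothetical; the point is that the guide is written before the implementation and is phrased in terms the end user cares about.

```python
class Basket:
    """Minimal implementation, written only to satisfy the guide below."""
    def __init__(self):
        self._items = []

    def add(self, name: str, price: float) -> None:
        self._items.append((name, price))

    @property
    def total(self) -> float:
        return sum(price for _, price in self._items)

def test_customer_sees_running_total():
    # Given an empty basket
    basket = Basket()
    # When the customer adds two items
    basket.add("book", 12.50)
    basket.add("pen", 1.50)
    # Then the running total reflects both items
    assert basket.total == 14.0
```

The guide above reads as an expected behaviour first and only incidentally as an executable check, which is the distinction this section is arguing for.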


Verdict

Therefore, to summarise: we are talking about removing regression tests after development entirely, while including them within the same code base as acceptance tests/guides (the name doesn’t matter at this point). We also need to stop counting unit tests in any Test Coverage metric. Behold, we have just proved that Test Automation can be dead, or already is.

Be Lean to stay agile.

Agile Transformation – Does it have to be Disruptive?

Agile transformation is the new “thing” most software delivery businesses are trying to get a grip on. There’s a divide in opinion, facts and politics around it. Many are running after the “credit” they get for changing a thing or two, to get their names on the list of contributors, which may soon be converted into a tombstone. Others are combining best practices, changing processes overnight and rebuilding culture to support the sudden changes. Everyone wants a piece of the action, but the grave consequences affect the businesses they represent. Why? Because everyone expects it to happen fast and disruptively, trying to change overnight things that haven’t changed for decades.

Unfortunately, the message delivered to them reads like this: “Move to agile methods and you can get more products delivered in the same time.”


The most underrated fact, when we talk about “Agile”, is –

“Agile methods will not double your productivity; in fact, they will slow you down to begin with. Agile principles are about creating the most valuable product first.”

The above fact is ignored much of the time, and rogue consultants start selling “Agile”, promising more productivity compared to Waterfall. Terms get thrown from all sides at the person responsible for the budget, while a chain of middle management roles waits patiently to get a cut. Members who do not support the “movement” either receive a calendar invite from HR or get clinically radicalised into the new religion, “Agile”. Why are we trying to do Agile? No idea.

Disruption = Difference in Opinion


If this difference in opinion is not resolved, you can never become agile, as agility depends on the organisation’s maturity in every aspect. This is why many of us have heard the phrase “culture change” thrown at us during an agile transformation. A new breed of management leads the bandwagon towards a belief that doing agile will resolve all problems, improve output and finish projects early. But here’s a subtle difference which establishes the value to the business –

Output ≠ Outcome


An outcome may or may not be an intended feature; it can be a lesson we learn from a failure in an iteration. Some might wonder, “What’s the value of a lesson from a failure that is not generating any revenue?” Well, as an example, a failure can teach us how ridiculous a feature is to begin with, so we don’t invest in future iterations of it. This reduces the risk of building a worthless product on top of the failed iteration, one that could be a short-term success and a long-term burden for a business.

Agile transformation, then, should reflect a sense of purpose: what we are doing and WHY we have to do it. If, as an organisation, we are unable to share the big picture with all employees, there is no way we can be agile even in the distant future. Agility is not about putting a framework, method or tool in place and hoping for the best. It’s about self-evaluation as a company: learning what we can or cannot do, accepting the limits and exposing the barriers which stop us from being agile. We execute, learn and improve.

The best agile transformations are gradual, slow and have a good amount of time invested in learning while we try to move away from the traditional approaches.

On the Contrary, there are exceptions where Disruption is the Only way!

Gradual improvement is welcome where there is no need for drastic change. But in some organisations this empathetic mindset can be taken for granted. We will find personalities who are extremely qualified and know very well what the right thing is, but for whom milking the situation seems a better idea than improving it, all while exploiting that same knowledge of agile transformation. This is especially true if they are not a “permy”, as their political existence is based on how long they can stretch the project to get maximum benefit out of a contract or temporary role.

I would be wrong to say all contractors are like this. Of course not. In fact, most team-level contractors are exceptionally brilliant and embrace agility. The issue starts when the middle management is a contractor who can influence crucial business decisions. These personalities will not harm the business, but they won’t improve it either. They will keep it as it is, and in the name of “Agile Transformation” the project can last for years to come.

Agile Transformation doesn’t happen in a quarter, but it shouldn’t take over a year either.


If it takes more than a year with little or no relevant value delivered, we can be dead sure it is being held back by a crucial impediment: a process, or a personality in power. It can be anything. A year-long transformation should be a warning that not everyone is on board. In these situations, empathy should be shown to the personalities who are on board, and the rest should be disrupted so a crucial business decision can be made.

Do you feel the same way?

Which version of the transformation have you experienced, or are you experiencing? I would love to hear your thoughts on this.