The Essence of Agile

January 21, 2013

I love to talk about Agile software development, Lean principles, and am currently trying to get a better grip on Rightshifting. However, when people ask me for advice it can be hard to recall details and specifics, especially if it’s an area I haven’t thought about for a while, or one where I have only limited perspective or experience. When this happens, it’s invaluable to be able to reach for some guiding principles.

The Agile Manifesto is a wonderful resource for this and I hear it’s even being refined (continuous improvement is a wonderful thing). Generally, when asked about its values, I’ll end up misquoting them…even though I understand what they are about, why they were written down, how they are generally interpreted, etc. Who would have thought that those 4 statements would be so difficult to remember? So I wanted to capture an even simpler interpretation of them, and I was keen to include as much from Lean as possible too. It struck me that the strongest, most often recurring theme is “feedback”…and the more I look at it, the more I’m convinced that Feedback is the very essence of Agile. I should probably back that up with evidence of some kind, or at the very least, my opinions and observations.


If we examine the Agile Manifesto, we can see that it is all about favoring rich, prompt feedback over other options.

  • Individuals and interactions over processes and tools

This is all about interacting with people, and in general, people are pretty good at giving prompt, quality feedback. Sure, we could all be a little better at it, but you generally get much more feedback talking to someone than using a tool or following a process to accomplish the same goal. For example, consider handing over a piece of work in a team distributed across time zones. You could follow the process and update the story with the latest information, and you could use a tool, JIRA perhaps, to capture that information and make it available remotely to the rest of the team. However, having a conversation and a brief guided tour of the feature in its current state imparts information much more effectively, and provides opportunity for questions and clarification.

  • Working software over comprehensive documentation

The documentation for a project is rarely a good guide to whether the software works or not…even when it gets completed. The problem is that documentation generally lags behind the software product and is often quite far down the priority list when identifying new work. Ultimately it provides poor quality feedback, due to it generally being out of date, and it doesn’t provide it in a useful time frame.

On the other hand, working software is a very rich feedback environment with every click and gesture providing more feedback instantaneously. Continuous integration, yet another feedback mechanism, should allow you to get access to such rich feedback each day (or even more frequently).

  • Customer collaboration over contract negotiation

Our customers write stories in the form of “As a…, I want to…, So that…”, and that’s a good thing. However, it’s just the starting point for a much more detailed conversation. It provides opportunity for change too. Instead of asking for every feature they can think of up-front, customers are involved in the evolving product, which should allow them to select and prioritise work in an intelligent manner.

Contract negotiation is a much slower process and much more guarded in the language used. I always see it as both sides trying to make sure they are not going to be punished for failure rather than a collaborative process to deliver something. So it’s not particularly prompt, and the language used tends to disguise the true meaning…so not very high quality either.

  • Responding to change over following a plan

Prompt feedback is essential: the longer you go without feedback, the more opportunity you have to go down the wrong path…or conversely, the more often you get feedback, the more opportunity you have to perform a course correction based on the latest information at hand.

Following a plan when you know it to be inaccurate or incorrect (or even a complete fantasy) leads to a culture of no longer questioning, just doing. The act of planning is invaluable; it forces you to consider options and think about how things will work out. However, the plan itself diminishes in value rather quickly as the assumptions and estimates it was built around meet up with reality. Sometimes its value drops to zero (or in fact below zero and becomes a burden) even before you start following it.

Test Driven Development

High quality, prompt feedback turns out to be the key concept to all sorts of other Agile software development practices. My favorite example would of course be Test Driven Development (TDD). This provides feedback on a number of levels:

  • Seeing the test results go from failing (red bar of shame) to passing (green bar of joy) is the most obvious form of feedback provided by TDD and should start a round of refactoring now you are in a “good” state.
  • By writing the test first, you get feedback on your understanding of the task. It is very difficult to devise a test when you are unclear on the desired behavior.
  • By writing the test first, you get feedback on the state of the existing code-base. If the test is difficult to write, that feedback is telling you that you should first look at refactoring/redesigning the surrounding code.
  • By running all the tests frequently, you are getting feedback on the state of the entire project and how your current piece of work is integrating with it.

The huge advantage of TDD is that you get this feedback really quickly, it’s incredibly targeted, and it’s very specific. That is an almost perfect definition of high-quality, prompt feedback.
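To make that concrete, here is a minimal sketch of one red/green cycle using Python’s unittest; the `slugify` function and its behaviour are invented purely for illustration.

```python
import unittest

# Hypothetical function under test. In TDD it would not exist yet:
# the test below is written first, seen to fail (red bar of shame),
# and only then is this code written to make it pass (green bar of joy).
def slugify(title):
    """Convert a title into a lowercase, hyphen-separated slug."""
    return "-".join(title.lower().split())

class SlugifyTest(unittest.TestCase):
    # Writing this first gives feedback on whether the desired
    # behaviour is actually understood before any code exists.
    def test_lowercases_and_joins_words_with_hyphens(self):
        self.assertEqual(slugify("The Essence of Agile"),
                         "the-essence-of-agile")
```

Running the whole suite frequently (e.g. `python -m unittest`) then provides the project-level feedback described in the last bullet above.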

Pair Programming

I’ve heard code reviews talked about as a poor man’s substitute for pair programming. While I agree with the sentiment and acknowledge that both provide feedback on code, I think the gap between them in practice is much, much bigger than most people realise.

The obvious difference is that code reviews happen after the code has been written…sometimes quite a long time after the code has been written. So code reviews do not provide prompt feedback. This is bad…actually, I’m not sure that “bad” is the right word. Maybe I should use “disappointing”? Yes, this is disappointing because effort is being spent to review the code, but the opportunity for getting appropriate returns on that investment has already passed. In my experience, code reviews generally provide a lower quality of feedback than pair programming too. This is partly because they are conducted some time after the code was written, but also because it’s much harder to really understand a piece of code when you were not there at its creation.

So there you go, it’s all about the feedback. Make it high-quality, and seek/provide it promptly. I’ll grant you, it’s not exactly an epiphany, but it was an observation I thought I should share. Feel free to share or suggest your own observations on feedback in the comments section…or even to disagree with me.

Oh dear, it’s November now and I’ve not written the promised second part of my Lean Agile Scotland review. So no more excuses or procrastinating…


People Patterns by Joe O’Brien

I’d heard that Joe’s People Patterns talk was really good. I can now confirm that “really good” doesn’t do it justice. I could relate to so many of the issues he talked about and cannot agree more that projects don’t succeed or fail for technical reasons; they do so because of people. If you get the chance to see his talk, go see it. Failing that, watch it among the Lean Agile Scotland 2012 videos.

Slightly off-topic, I had the pleasure of meeting Joe at a lunch with some friends a year or so ago. I was having a bad day and most likely came across like an angry dick, and I really regret that. Completely ignoring some of the points from People Patterns about talking to people, I put off going to speak to him to apologise for my behavior until it was too late and by then he’d already left.

Respect For People by Liz Keogh

It turns out that I can be very disrespectful and I didn’t even notice (see lunch with @objo above for details). Liz examined the roots of various words common in the Lean & Agile vocabulary and related them to the sorts of interactions we’re all familiar with in software development.

I tried the suggestion of not asking people to go next at our stand-up (it’s disrespectful because I am demanding an update from them, rather than them providing it on their own). This led to a long awkward silence after my turn. Once I’d explained my thinking and talked about Liz’s keynote, the silences got a little shorter and I like to think the rest of the team understood what I was getting at.

Rightshifting track

On the second day, track one had 3 Rightshifting sessions back to back. I’d read a little about it, and talked a little about it at the Lean Agile Glasgow meet-up, but I didn’t really understand it. To my untrained eye, it looked like a variation on the CMMI maturity model…but on its side rather than a pyramid. I’m glad to say that once Bob Marshall had assembled us into a circle and started talking with us, it all became a lot clearer. I also left with an understanding of why large, established firms can find moving to an Agile process so painful.

Ian Carroll’s “Rightshifting in action, using Kanban for organisational change” was also riveting. I’d seen his webcast on Systemic Flow Mapping before, and after that session I am even more motivated to perform the exercise in the business unit I work for. Maybe nearer Christmas I’ll find the time. I suspect a first attempt will be more of a learning exercise, but even asking the questions and trying to describe the current process should be very revealing.

I didn’t manage to attend Torbjörn Gyllebring’s session, but I’ll be watching the video soon.

Why Agile Fails – Matt Wynne

I liked Matt’s presentation style and found the content engaging and interesting. My key points to take away were:

Wynne’s 1st Law of Software Delivery: If it isn’t fun, you’re doing it wrong.

I’ve used that a number of times myself now 🙂

The other key point was around Cargo Cults and Shu Ha Ri, which describes the learning process as (please excuse my poor rephrasing):

  • Shu – we practice the forms rigorously and without deviation
  • Ha – once disciplined, make innovations and question the forms
  • Ri – completely depart from the forms, open the door to creative technique, and arrive in a place where we act in accordance with what our heart/mind desires, unhindered while not overstepping laws

That last step really does sound like the phase we all want to be in, but the first phase looks very similar to what you see in Scrum cargo cults in large organisations. Matt stressed that we shouldn’t look down on those teams, even though the first stage certainly does show similarities with Cargo Cults. The differentiation is that a Cargo Cult doesn’t realise it’s just taking the first step on the path to understanding…and maybe such teams just need some appropriate assistance or coaching to make that mental leap.

So, what now?

Well, those are my highlights of Lean Agile Scotland 2012 from the perspective of what gave me cause to stop and think/re-think about how I approach things. I enjoyed it all, and I can’t wait until Lean Agile Scotland 2013.

My team is moving from a Scrum process (which we’d tailored somewhat over the years) to something more like Kanban, dropping estimation…or at least only giving very brief initial estimates, or deciding to break stories into smaller pieces.

I am determined to construct a Systemic Flow Map of my business area. I think it would reveal all sorts of valuable information about how our teams could work more collaboratively and streamline the work we do. I’m likely to blog about that for advice because I still have questions about exactly how to do it.

My friend and colleague Wayne Grant posted an excellent blog/summary of his experiences at Lean Agile Scotland 2012. I was there too, and his post spurred me on to finish writing about what I got out of the conference. I’m trying to describe the thoughts I had about some of the talks rather than summarising the content (if you want a summary, check out the schedule on the conference link above), so please let me know if you think I’ve completely missed the point on any of them.


Common Objections to TDD
As I’m running TDD workshops myself, I felt that I had to attend the two TDD talks just in case I was missing out on any new thinking on the subject. First up was Seb Rose (@sebrose) with Common Objections to TDD (and their refutations). Seb had run a survey on the web gathering objections to TDD, and this was an entertaining walk through them followed by a discussion of why each was or wasn’t valid.

I found it very interesting that, in Seb’s results, only about half of the teams claiming to “be agile” were using TDD. I consider TDD to be a core practice and apply the principle to many things outside of software development too, as it brings me focus, forces me to ensure I have a good understanding of the task at hand, and provides a measurement of success afterwards. For example, if I’m about to make a phone call I often think of a test-case to describe the call:

  • My car will be booked into the garage…or my team will get to work on a new project

This sets me up for the call and during the call I can refer to it to make sure that it’s moving in the right direction or if I’ve become distracted/side-tracked, I can steer it back around. When the call is over I have a very simple check to see if it was a successful call or not. Maybe I take these things too far, but it works for me as I’m easily distracted.

TDD Pitfalls

The second talk on TDD was Brian Swan (@bgswan) with TDD Pitfalls. I’ve known Brian for some time and didn’t think there would be too much contentious material in his talk, but I was motivated to question “Tests as Documentation” being a pitfall.

I don’t like comments in code for a number of reasons, but the biggest one is that they are always out of date. Well, that’s not true…but it very quickly becomes the case in a changing code-base and there is no easy way to tell, so I assume they are…and more often than not, delete them. For me, when I want to understand what a class does and how it should behave, I always read the tests first and find good value in that when they are written well.

I think what Brian was getting at in his talk was that it is very hard to write tests that clearly describe the behaviour of the class…and forcing yourself to do that is the pitfall. I’m thinking that he might have a point: it’s very easy to expend a lot of effort trying to achieve this, and when you want to understand the code you still, eventually, end up reading the implementation (though I’ll still read the tests first).
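As a sketch of what “tests as documentation” aims for (and of the effort it takes to sustain), here is a deliberately tiny, hypothetical class whose tests are named so that they read as a specification:

```python
import unittest

# A hypothetical class, invented for illustration only.
class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from an empty Stack")
        return self._items.pop()

# The test names try to read as documentation of the behaviour.
# Keeping every test this descriptive as the class evolves is the
# ongoing cost that makes "tests as documentation" a possible pitfall.
class StackBehaviour(unittest.TestCase):
    def test_pop_returns_the_most_recently_pushed_item(self):
        stack = Stack()
        stack.push("first")
        stack.push("second")
        self.assertEqual(stack.pop(), "second")

    def test_popping_an_empty_stack_raises_an_error(self):
        with self.assertRaises(IndexError):
            Stack().pop()
```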


BDD: Busting the Myths

Related to TDD, there was BDD: Busting The Myths with Gojko Adzic (@gojkoadzic)…and if there’s an award for most entertaining talk of the conference, Gojko wins it easily. My exposure to BDD has always been at arm’s length and dominated by people showing off how clever the tools are. I’ve never been convinced of its value because of this exposure, but have always felt that I was writing off something that could be immensely valuable.

What I took away from this talk was that, as I’d suspected, the tool was just a distraction. BDD is all about facilitating the conversation between the customer and the implementer. That there are some tools to capture this is merely a convenience and I don’t think we’ll ever get to the state where the customer can just turn up with a set of BDD tests (am I even allowed to call them tests?) to hand to a development team.

The conversation is where all the knowledge about the work/story/feature comes from, and BDD tests are used to steer it in much the same way that TDD focuses code and makes you ask questions about behaviours.
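To illustrate (without any particular BDD tool’s syntax), a conversation-derived scenario might be pinned down like this; the `Library` example is entirely hypothetical:

```python
# A hypothetical domain class, just enough to support the scenario below.
class Library:
    def __init__(self):
        self._stock = set()
        self._on_loan = set()

    def add_copy(self, title):
        self._stock.add(title)

    def lend(self, title, to):
        self._on_loan.add(title)

    def accept_return(self, title):
        self._on_loan.discard(title)

    def is_available(self, title):
        return title in self._stock and title not in self._on_loan

def test_a_returned_book_becomes_available_again():
    # Given a library with one copy of a book out on loan
    library = Library()
    library.add_copy("Example Title")
    library.lend("Example Title", to="alice")
    # When the borrower returns it
    library.accept_return("Example Title")
    # Then the book is available to other borrowers
    assert library.is_available("Example Title")
```

The Given/When/Then comments are the part the customer conversation produces; tools such as Cucumber merely capture that structure in a more shareable form.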

The big revelation I had was that these BDD tests have a much shorter shelf-life than I had imagined. While talking about the costs of maintaining large BDD suites, Gojko discouraged their use for regression testing of applications (well, certainly using them verbatim as the regression test). I had always struggled to imagine the customer truly owning the tests and maintaining and updating hundreds of them over the life of the product. The suggestion that most of their value lies in the initial implementation, and that they perhaps shouldn’t be maintained further down the line, made them much clearer and better defined in my head.

End of Part 1

So that was Part 1, I hope it wasn’t too rambling and that perhaps you even found it interesting. In Part 2 I’ll be talking about People, Kanban, and Rightshifting and some new things I’d like my team to try out.

I really enjoyed Martin Fowler’s recent article on Test Coverage, and in particular I liked the graphic he included in it:
Image has gone :-(

This simple diagram sets out very clearly what test coverage is good at measuring and what it is not. I think that is a very important distinction to make because coverage is no indication of quality, just of what is and isn’t tested.

This appears to be at odds with a current wave of thinking (well, one that I’ve observed anyway) where a message comes down from senior managers that all projects must have at least xx% of test coverage. The repercussions of not achieving this target vary from audit actions to remedy the coverage all the way through to not allowing projects to be released until they hit the magic number…and the number varies wildly as well…but the thinking behind this is common, and flawed: High Test Coverage == High Quality Code.

Correlation != Causation

Correlation does not imply causation, but once that link has been pointed out it can be difficult to ignore without a solid understanding of what you are observing and why there is a correlation.

The correlation is there: plenty of projects with high test coverage are of excellent quality. These projects tend to be driven by the sensible implementation of good tests as and when required, quite likely following some kind of deliberate practice like Test Driven Development…but not necessarily. The thing to note here is not the quantity or coverage of the tests; it’s the use of good tests, when and where required.

The other side of this is where a target is set for a team that has not been writing tests all along. They quite often panic when presented with a target and immediately look for shortcuts. Part of the reason is that testing is an often overlooked skill and, in my experience, a large number of developers mistakenly think that it’s beneath them. Not always the case, but this is a blog post so I feel justified in my over-generalisation.

I’m not saying that it’s easy to retro-fit good tests to a relatively untested code base. Far from it: without considering testing up-front, it’s likely that the code is not organised/designed in a way that makes testing easy. I’ve seen many teams jump on a product that promises to write all the tests they need, automatically, with minimal input. The problem with these tests is that they are rarely the high-value tests that are needed, and they are applied mechanically across the project with no sense of context. More importantly, they remove the knowledge and understanding that a developer would gain by thinking through what tests are relevant and how the code should behave.
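A contrived illustration of the shortcut problem: the “test” below executes every line of a buggy function, so a coverage tool would report 100%, yet it can never fail. (Both functions are invented for this example.)

```python
def apply_discount(price, percent):
    # Bug: the discount is added rather than subtracted.
    return price + price * percent / 100

def test_apply_discount():
    # Executes every line of apply_discount -- full line coverage --
    # but asserts nothing, so the bug goes completely undetected.
    apply_discount(100, 10)
```

A genuinely useful test would assert that `apply_discount(100, 10)` returns 90, and would fail immediately.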

So, what should we do?

First of all, we should stop selling code coverage as a measure of quality. It’s not. However, it can be useful for flagging projects where test coverage is particularly low, or for spotting trends in evolving code bases.

When coupled with other tools that measure complexity and rate of change (of the source code over time), coverage can point you to good areas to examine and see if you could write more tests. I imagine that there are even more interesting and useful insights to be had by combining other measures; feel free to comment if you have any good ones.

Aiming for 100% code coverage is a noble goal, but it should only be an aspiration and definitely not a target…and you still shouldn’t confuse it with a stamp of quality assurance.

Personally, I struggle to do anything other than TDD these days but I understand it’s not the path of least resistance and can be difficult to learn. Having said that, if you are not writing tests for your software then you are either incredibly over-confident, incredibly naive, or both.

End with an analogy

A Road network

Like any analogy, I expect this one will have many flaws but I’m going to use it anyway.

  • Would you consider that the road network would be improved just by adding more roads until we reached saturation?
  • Regardless of the quality of the roads?
  • Even when they were built where nobody lived, traveled, or wanted to go?

Product Owner required

November 16, 2011

Recently we had to find someone to act as Product Owner for our project. He had little experience of Scrum or of agile software development so I wrote him a primer on what role we needed a Product Owner to fulfill. This post is more or less what I sent him to see if he had the time and appetite for the role (with company and project details removed/altered to make it more generally applicable).

There are some things that people assume about the role of Product Owner, so let’s dispel some of those upfront.

What the Product Owner is not

  • It’s not a “prestige” or “honorary” role, you will have work to do and decisions to make
  • It’s not a project manager role; it has specific responsibilities, but they are not related to the running of the project
  • It’s not shuffling work around on a spreadsheet and then handing it over to be done
  • It’s definitely not running the build team 😉

What does the Product Owner need

  • A clear and detailed product vision – the build team will look to the Product Owner for guidance on features and how they should work and fit in with “the vision”
  • A solid understanding of Business Value – I’ll come back to what I mean by Business Value
  • Communications with the key stakeholders and project sponsors
  • Understanding of the User Stories in the backlog – we’ll work out what level of detail and language they need to be valuable to both you and the build team
  • Some understanding of how software is developed, but not low level details. It will be helpful for us to occasionally explain technical concepts, but we won’t be asking you to cut any code.
  • Skin in the game. You’ll be responsible for maximizing the value realized from the effort spent.

What is Business Value (in this context)?

Ultimately, Business Value is whatever the Return On Investment (ROI) of this project is going to be judged on. That could be the number of users, the happiness of our users, the diversity of our users, the number of installations of the application, the collective dollar savings our customers make by using the product, or any combination of those (or more) options. This is not always going to be the same for all time either.

For example, right now we are trying to make a handful of customers happy, but next year we’ll probably be more interested in increasing the number, and diversity, of projects using our application, and how much money they are saving the firm by doing so (it’s an internal project in our firm, but the teams in the firm will only use it if they perceive it to be giving them additional value).

It’s going to be up to you to identify the current Business Value and prioritize work accordingly to get as much Business Value as possible…and be aware of when Business Value changes.


So I’ve talked about what a Product Owner is not, the things they need, and defined Business Value. It’s probably a good time to talk about what you’d actually need to do.

Product Backlog

This is where every feature, bug, task, activity, etc. related to the product lives. You would be responsible for keeping that list up-to-date and prioritized. This is the element of the role that was referred to as being full-time. We will end up with some shared responsibility for its maintenance, but you would have the final say on the priority of work. With that control, you’d be expected to keep on top of what’s in the backlog, see dependencies (with our assistance where it’s a technical dependency), and determine the urgency of items. In an ideal world, you might be able to assign a numerical Business Value to each story to assist you…but we’ve attempted that on other projects and it’s not a trivial thing to do, so let’s walk before we run. There is also the breaking up of stories into smaller units and the creation of “epics” which span many stories to deliver a higher-level feature.


Accepting Work

You get to accept or reject any work done. Traditionally this is done via a demo at the end of an iteration, where you would come along and we’d demonstrate all the features built in that iteration. Being intimately familiar with the stories, you would be in a position to say whether or not each is acceptable, or perhaps needs some additional work. The feedback from this is also a driver for fine-tuning the user stories so that your expectations line up with what we’re delivering (and vice-versa).


Iteration Planning

In iteration planning we essentially take the prioritized backlog and see what can be delivered, in a fixed period, starting with the highest-priority item and working our way down the list. Of course this is an oversimplified view and it’s not always the most effective way to work, so your input would be required to provide themes and higher-level goals for each release/iteration. For example, in one release we might focus on integration with other systems, in another we might focus on improving user experience.
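The naive selection described here can be sketched in a few lines; the story names, day-based estimates, and greedy walk are all assumptions made for illustration, not a description of how any real planning tool works.

```python
# Toy sketch of walking a prioritized backlog and greedily filling
# a fixed capacity. Each backlog entry is (story_name, estimate_days).
def plan_iteration(backlog, capacity_days):
    """Return the story names selected for the iteration, in priority order."""
    selected = []
    remaining = capacity_days
    for name, estimate in backlog:
        # Skip anything that no longer fits; keep walking down the list.
        if estimate <= remaining:
            selected.append(name)
            remaining -= estimate
    return selected
```

For example, with a 5-day capacity and a backlog of 3-, 2-, and 4-day stories, the first two stories fill the iteration and the third waits for the next one.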

So, given all of the above, are you still interested in the role?

Writing better user stories

November 11, 2011

Recently we had a planning session where I was asked to explain what made a good User Story, so I thought I’d capture that in a blog.

We are all familiar with the standard layout of a User Story:
  • As a …
  • I want to …
  • So that …

However, I’ve observed a temptation to just blindly follow that formula and then claim that you are “doing Agile”. For me, the real essence of Agile is looking at what you are doing, identifying whether it’s giving you good value, and if not, changing it so that it does (or dropping the practice altogether). So, how do we write better user stories?

I’d like to start by breaking down what we capture with the above formula, what qualities we want from a user story, and then what we can do to improve our stories.


Role, Task, and Motivation

As a … : this defines the role of the person stating the User Story and gives it a context. If you have the same value here for every story, “User” for example, then the field is not providing any value to you. You should take the one role you have and try to break it into finer grained roles, even if they are not actually represented that way in your system.

I want to … : this is the task you are trying to achieve in the context you have just set up. Generally, people are quite comfortable describing this part. The concern here is striking a balance between readability and capturing the necessary details. I’d err on the side of readability and make sure that the task part of the description is memorable and differentiates the story from others. You don’t have to capture every detail in this one line; you can add additional detail to a story outside of the “As a…, I want to…, So that…” structure.

So that … : this is the motivation behind the story and, I find, often overlooked when writing User Stories. I’m not suggesting you write a lengthy thesis on the motivations of the user, but at least explore it and see if there are some useful pieces of information to take from it.



A good user story should also capture some information about how you might go about testing the story. This can be in the form of a simple “Done” list, or perhaps a description of an integration-level test or scenario that would exercise it. I’ll assume that you would be writing unit tests (and writing them first) as part of the implementation.
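As a hypothetical sketch, the testing notes for a story like “As a registered user, I want to reset my password, So that I can regain access to my account” might boil down to checks such as these (the `reset_password` helper is invented for illustration):

```python
# Invented stand-in for the real reset workflow, for illustration only.
def reset_password(known_accounts, email):
    """Return a reset token for a known account, or None otherwise."""
    if email in known_accounts:
        return "reset-token-for-" + email
    return None

def test_a_known_user_receives_a_reset_token():
    accounts = {"alice@example.com"}
    assert reset_password(accounts, "alice@example.com") is not None

def test_an_unknown_address_is_quietly_rejected():
    accounts = {"alice@example.com"}
    assert reset_password(accounts, "bob@example.com") is None
```

Even at this toy level, the two checks double as a “Done” list: the story isn’t finished until both behaviours hold.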


Large Stories

A cardinal sin in my book is having huge stories with such complex completion criteria, covering such a wide range of functionality, that they take many iterations to complete. These interrupt the rhythm of an agile project because it’s very hard to track progress against them. I’ve found that this can have a negative effect on team morale as the story rolls on from one iteration to the next. We currently have a rule of thumb for our 1-week iterations, which is to limit estimates on stories to 3 days. If a story looks bigger than that, I encourage the creation of sub-stories within it. So far, that’s working well for us.

Sure, you might find it useful to capture epic stories as a set of related functionality, but break them into bite-sized chunks. Not only does closing out all the sub-stories give the team a warm feeling, but the act of breaking an epic into smaller stories helps you focus in on exactly what you are trying to achieve…which is what a good user story should do.



So do we write perfect user stories every time? No, of course not, but we often review stories in our retrospectives and identify which features made them easy to work with and which stories we had trouble with. I never expect a story to capture every detail, but I aspire to stories that anyone on the team could pick up and understand what needs to be implemented, why it’s being added to the product, and how they can go about verifying that they have finished the work.