Thursday, October 20, 2011

Kidz Kanban–Classes of Service

In my first Kidz Kanban post, I described how my 6-year-old daughter and I established a Kanban board to visualize her weekend chore work.  Together, we learned 2 important things:
  1. Visualizing your work can be powerful. It reduces ambiguity in our human interactions.
  2. Using the language of your stakeholders provides better engagement and shared understanding.

The next weekend, we were working the items on the board when my daughter made a surprise announcement about the prioritized list of tasks.

“Dad, I moved the ‘Clean basement’ sticky to the top of the list so I can do it first. Is that ok…?”

Me: Why do we need to do that first?

“Because Grandma is coming today and she is staying overnight.”

(Full disclosure, I don’t make a habit of stowing my mother-in-law in the basement when she visits. We have a comfortably finished basement where we house a guest bedroom and bathroom.)

My 6-year-old made a quick observation when surveying her initial task list. She didn’t know definitively whether the current priority order would be a problem, but she intuitively knew we were at risk of not having the guest space in the basement ready for Grandma’s visit.

So I asked my daughter if she thought we should show that work differently.  I suggested we use an alternate color sticky note--in this case green.


My daughter decided to complete the green sticky notes first, and finish the remaining yellow notes last.

The more astute readers of this blog (maybe everyone) will note that we haven't done much here other than create an overly elaborate prioritization scheme.  Couldn't we accomplish the same effect by just re-ordering our sticky notes in the "Daddy Says" column without changing colors?  We could, but that wouldn't tell the whole story.

My daughter and I continued our conversation about the green ticket.  I asked if the basement cleaning needed to occur immediately.

Me: When will Grandma really need to use the basement?

"Well, not until she needs to go to bed tonight" 

Me: Could we wait until just before she goes to bed before we clean?

"Yeah, we could... but I still want to clean it right now."

This exchange demonstrated that our green sticky notes could mean more than a simple change in priority. Even though my daughter decided to work the green notes first, she had the option to defer that commitment until later.  This is a subtle but very important distinction from a simple priority list. By moving beyond strict priority ordering, we open up new options (and value).  My daughter could choose to "check for ripe tomatoes" or "clean her bedroom" first; for example, we could have fresh tomatoes in time for lunch with Grandma while deferring the basement cleaning until the afternoon.

Software teams that practice Kanban often use a similar approach called Classes of Service. I first learned about Classes of Service from the original Kanbaner, David Anderson, who describes his approach in his book, Kanban: Successful Evolutionary Change for Your Technology Business.

This example would be considered a Fixed Delivery Date Class of Service. Typically, a due date would be visible on the card.  In my daughter's example, she would need a Fixed Delivery Time on her sticky note. And since we are visualizing our fixed delivery date stickies on the same board as our standard class of service, we can make instant judgments about which items to work at various points in time.
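As an illustrative sketch only (the card names, class labels, and the two-hour "start cleaning" lead time are my own assumptions, not part of any Kanban tool), a board mixing a standard class of service with a fixed-delivery-date class might make its pull decision like this:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Card:
    title: str
    cos: str = "standard"           # class of service: "standard" or "fixed-date"
    due: Optional[datetime] = None  # only fixed-date cards carry a due time

def next_card(board: list[Card], now: datetime, lead_time_hours: float = 2.0) -> Card:
    """Pick the next card to pull. A fixed-date card jumps the line only
    when deferring it any longer would risk missing its due time."""
    at_risk = [c for c in board
               if c.cos == "fixed-date" and c.due is not None
               and (c.due - now).total_seconds() / 3600 <= lead_time_hours]
    if at_risk:
        return min(at_risk, key=lambda c: c.due)  # most urgent fixed-date card first
    # Otherwise keep working the standard queue in its existing priority order.
    return board[0]
```

The point of the sketch is the deferred commitment: in the morning the fixed-date "Clean basement" card can safely wait while tomatoes get picked, but by evening it preempts everything else.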

My daughter ultimately decided against deferring the fixed-date work item. That's ok. I have some work to do with her, but hey, she's only in the 1st grade. What's interesting, though, is how a 1st grader can show more agility than a highly paid team of professional-grade software folks--even the "Agile" ones.

Our simple weekend task board has one big advantage over many corporate software team task boards.  We don't batch our work in an arbitrary fashion. How many software professionals have to tell customers "no" or provide few alternative work paths because:
  • "We have to stick with our project plan so we can show we are successfully executing."
  • "We can't break our iteration for this, it will mess up our velocity metric."
  • "We committed to delivering this batch of stories by next Thursday."
I know I've experienced this in the past. If you're struggling with something similar, you may want to consider offering Classes of Service to your customers for more nuanced team decision making.

Saturday, October 8, 2011

Kidz Kanban

I have a 6-year-old daughter who has been helping with household chores for quite some time. But recently we decided to visualize our Saturday morning chore work. So I ordered a decent-quality whiteboard and mounted it to a kitchen closet door—in plain view of anyone using our kitchen and laundry room areas.

We started by creating 3 columns:

Todo –> Doing –> Done

This seemed to work ok. My daughter is bright, and a good reader for her age.  She understood the workflow states, so we worked our Todo list the first Saturday.

Afterward, I asked her how she thought it went. She thought it was alright, except she didn’t know what a “Todo” list meant.  She understood the purpose of the prioritized work items. But she didn’t understand the language “Todo”. So I asked her what she thought the column title should be. 
“Well this is the stuff that Daddy says I need to do.”
So we changed the column heading to reflect that suggestion. I asked her about the “Doing” state, and she offered:
“That’s what I’m doing.”
I updated the board again to the new language. And immediately asked about the “Done” column.
“That’s when we do a high five!”
Our new column headers (red arrow) now read:

Daddy Says –> I’m Doing –> High Five

Readers who are software professionals may be wondering what a children’s personal Kanban board has to do with software development. I think there is a lesson here that I’ve learned long before establishing a board for my daughter.  When modeling any process, use the language of your stakeholders. It’s likely to establish a better shared understanding, and people will naturally be more engaged.  I’ve made the mistake in the past of using contrived Kanban board column names on a software project.  It’s something I now watch for when setting up any kind of Kanban board.

If you are wondering about the smiley faces on the top bar (yellow arrow), those represent our visual management tool for our girls' behavior. Smiley faces Monday–Friday earn a trip out for breakfast on the weekend. Visualizing the "score" reduces confusion and arguments about the current state of household behavior. Have you ever argued about the state of some chunk of software at your workplace? Visualize it!

See part two of this series: Kidz Kanban: Classes of Service.

Tuesday, June 28, 2011

KCDC Talk - Guerilla Kanban: A Toolkit for Improvement

For those who attended my talk, thank you for your interest. A pdf slide deck is available for download.

My Guerilla Kanban Toolkit For Improvement includes the topics:

Nature of Software
Visualize and Manage Flow
Risk Management

I hope to address more of these types of topics in future presentations. Let me know if you have questions!

Monday, April 25, 2011

Open Mouth, Insert Software Toolbox

So often we hear the phrase, “I have many tools in my software development toolbox, and I like to use the correct tool for the job.” It sounds good. It appears to be a responsible approach. After all, why wouldn’t we want someone to have multiple tools at their disposal? 

The whole toolbox metaphor is fine on the surface. My problem with it stems from the incidentals in the discussion.  In my experience, most people who reference having a “software toolbox” are really hiding from a more direct questioning or discussion of specific practices or methods.  Instead of advocating for those methods they believe work, or more importantly, listening to others advocate for their methods, many in our field throw up the “toolbox” as a defense mechanism.  It makes for a clean exit too. The problem is this mechanism limits learning for everyone involved. The “software toolbox” is a conversation ender, not a conversation starter.

With that said, I am going to put my toolbox where my mouth is.

First, my obligatory disclaimer:  Individual results will likely vary. Past performance does not guarantee future success. Your specific context and corporate culture are critical factors when deciding which tools are best for you. This is not intended as a comprehensive list.

Troy’s State-of-the-Art Software Toolbox:
  1. Small Iterative Development Cycles: Software work is knowledge based. Knowledge in software is best improved by frequent feedback cycles. Early information feedback allows the team and customer to learn and improve at a much faster rate than when iterative approaches are not used.
    Source: Extreme Programming, Scrum
  2. Explicit Work-In-Progress Limits: Even when working iteratively, teams will often have too much WIP. Ever observe a Scrum team start every single user story by the first day?  I have, and without an explicit policy limiting work-in-progress, teams and managers will try to push too much work into the system. WIP limits allow a team to begin finishing work instead of starting more work. Team context switching is minimized, giving the team the opportunity to improve quality.
    Source: Kanban for software
  3. Just-In-Time Planning: Plan just for what you can do, and nothing more. There is no waterfall-style comprehensive requirements document with this approach. Nor are there the heavy backlog grooming practices frequently seen on some Agile teams, where user stories are analyzed and specified in detail months before they are implemented. Just-in-time planning also means no commitment-based, time-boxed iterations.  Optimal planning occurs when the team is provided with the best thing to work on next, no more and no less.
    Source: Kanban for software
  4. High Discipline, Low Ceremony Engineering Practices: Emphasize practices that encourage collaboration and short feedback cycles. TDD/BDD and a continuous integration server are a great start. After-the-fact, formal code reviews are typically too late to add anything other than rework. Code metrics are fine, but be careful what you measure. Are your code metrics making the code easier to understand and change, or are you happy just to apply “standards” to your code? Covering code with tests is a good thing. But higher code coverage doesn’t automatically mean your code base is functioning the way your customer wants.
    Source: Extreme Programming
  5. Kaizen: Continuous improvement may come from inspect and adapt, PDCA, or other methods. It doesn’t matter if you prefer one over the other as long as you build an information feedback loop into your process. At the basic level, provide enough oxygen to the team to reflect, share ideas, and improve. Some teams may require regularly scheduled retrospectives, others may perform mini retrospectives or spontaneous Kaizen events during the project. 
    Source: Extreme Programming, Scrum, Kanban for software
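To make item 2 concrete, here is a minimal sketch of an explicit WIP limit as an enforced pull policy (the class and column names are my own, purely for illustration):

```python
class WipLimitExceeded(Exception):
    """Raised when a pull would violate the column's explicit WIP policy."""
    pass

class KanbanColumn:
    """A board column with an explicit work-in-progress limit.

    The limit is policy, not a suggestion: a new item cannot be pulled
    in until something currently in progress has been finished."""

    def __init__(self, name: str, wip_limit: int):
        self.name = name
        self.wip_limit = wip_limit
        self.items: list[str] = []

    def pull(self, item: str) -> None:
        if len(self.items) >= self.wip_limit:
            raise WipLimitExceeded(
                f"{self.name} is at its limit of {self.wip_limit}; "
                "finish something before starting more.")
        self.items.append(item)

    def finish(self, item: str) -> str:
        self.items.remove(item)
        return item
```

With a limit of 2, the third `pull` fails loudly instead of silently piling on work, which is exactly the "begin finishing instead of starting" behavior described above.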
I’m hoping this is valuable information for you.  This is my toolbox, and these are some of the more valuable tools to me. If I show you mine, will you show me yours?

Will these work for you?  I don’t know, that’s for you to decide. I just know that they work for me today. If I were a part of a brand new software team tomorrow, this is where I would start our discussion.  And hopefully we grow to an even better place.

“Merely having an open mind is nothing: the object of opening the mind, as of opening the mouth, is to shut it again on something solid.” -- G. K. Chesterton


Tuesday, February 15, 2011

Code Metrics as a Project Introduction

I recently started some analysis work for a new client. As we began, we were quickly confronted with the legacy codebase that would be the subject of our proposed work.

We needed to give the client some technical feedback on the code. The obvious and common approach is to crack open the solution and take a look. That approach usually results in anecdotal recommendations at best, so I turned to code analysis tooling.

In the .NET space, a popular code analysis tool is NDepend, so we gave it a try. NDepend generates a ton of metrics on your codebase, and it is easy to get overloaded with data.  So it is important to remember your goals with code metrics: do you need raw data, or actionable information that can be presented to a client?

In our case, we need to report to our client the state of their codebase. Specifically, we need to convey how hard it will be to make changes to this legacy code.  A nice visual cue is the Abstractness vs. Instability diagram provided in the NDepend analysis report:


NDepend defines “abstractness” as the percentage of abstract types (interfaces, abstract classes) to concrete types. Instability is defined as the ratio of efferent coupling to total coupling. Coupling at the assembly level is an interesting metric, but I am mainly concerned with the level of abstraction here.  As the diagram shows, this client has two assemblies with good scores (orange arrows), but one assembly with a very poor score (red arrow).  This would normally be a good sign, except that about 90% of the code is contained in the low scoring assembly. 
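These are the classic package metrics (due to Robert C. Martin) that NDepend reports, and they are easy to compute by hand as a sanity check. A small sketch of the formulas, including the "distance from the main sequence" that the diagram visualizes:

```python
def instability(ce: int, ca: int) -> float:
    """I = Ce / (Ce + Ca): ratio of efferent (outgoing) coupling to total
    coupling. 1.0 = depends on everything, nothing depends on it."""
    return ce / (ce + ca) if (ce + ca) else 0.0

def abstractness(abstract_types: int, total_types: int) -> float:
    """A = abstract types (interfaces, abstract classes) / total types."""
    return abstract_types / total_types if total_types else 0.0

def distance_from_main_sequence(a: float, i: float) -> float:
    """D = |A + I - 1|: 0 on the 'main sequence' (a healthy balance),
    approaching 1 in the zone of pain (concrete and stable) or the
    zone of uselessness (abstract and unstable)."""
    return abs(a + i - 1)
```

An assembly with Ce = 3, Ca = 1 and 2 abstract types out of 10 would score I = 0.75, A = 0.2, landing very close to the main sequence.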

Assembly level metrics are fine, but the metrics I am most interested in are at the type level. Assemblies can easily be manipulated by moving code around with any decent refactoring tool.  But types are the building blocks of code and tell the true story of code quality.  NDepend provides a matrix of scores for types as shown below:


The pink cells identify the worst 15% of offenders.  I am most interested in CC (cyclomatic complexity), Ca (afferent coupling), and Ce (efferent coupling). If you are unfamiliar with the terms cyclomatic complexity and coupling in software, I suggest a little more reading on the topic.   For our purposes, definitions are provided by the NDepend website:

CC: Cyclomatic complexity is a popular procedural software metric equal to the number of decisions that can be taken in a procedure.

Ca: The Afferent Coupling for a particular type is the number of types that depend directly on it.

Ce: The Efferent Coupling for a particular type is the number of types it directly depends on.
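To make the cyclomatic complexity definition concrete, here is a rough, illustrative counter over Python source. This is not NDepend's implementation (which analyzes .NET IL and counts additional constructs like switch cases and ternaries); it just demonstrates the "1 + number of decision points" idea:

```python
import ast

# Node types treated as decision points in this simplified count.
DECISIONS = (ast.If, ast.For, ast.While, ast.BoolOp, ast.ExceptHandler)

def cyclomatic_complexity(source: str) -> int:
    """Rough CC estimate: 1 plus the number of decision points found
    by walking the abstract syntax tree of the given source."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, DECISIONS) for node in ast.walk(tree))
```

A straight-line function scores 1; each branch adds another path through the code, which is why high-CC types dominate the pink cells in the matrix.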

Once the analysis report is generated, a quick glance through the type metrics will give you a good indication of the relative health of the code.  Lots of pink cells mean a more difficult journey ahead for the team.

NDepend provides some overall application metrics as well. Nothing earth-shattering, but it is interesting to know things like total number of lines of code, percentage of code comments, number of types, and percentage of public methods.


This is the type of information I find useful when starting a new codebase. I hope it’s helpful when you start your new project!