
Tuesday, September 27, 2011

When Discipline Breaks Down

What do I mean by discipline?  That was rhetorical, so if you answered in your head, smack yourself on the hand and stand in the corner.

In the software world, discipline is about the patterns we follow in response to events, and about what happens when those patterns break down.  I try repeatedly, and often unsuccessfully, to drive home the idea that software development discipline is what allows us to efficiently and effectively add an increment of product.

Example: The Wrong Value

My QA folks inform me that a certain value they expect to persist is always persisting as zero.  I have two options.  Option 1 is to run the application and attach a debugger.  Regardless of the number of tiers in the system, this takes considerable time.  Additionally, I have to have just the right input to test with, and it has to fulfill the requirements of the ENTIRE interface, not just the portion I care about.

As an alternative, I could select a tier and write a unit test pinpointing the portion of code that might be exhibiting the behavior.  If the test passes, I can move on to the next tier.  If the test fails, I can change the code to get the test to pass, then commit.  Problem solved.

Today, we had a scenario where our public-facing schema mapped to our canonical schema (our internal representation of our "thing"), and it was reported that a certain value was not being persisted.  Since we had recently created the public schema, my partner insisted that the mapping between the two was wrong.  I immediately wrote a test that definitively proved the mapping was good, which let us pinpoint the area of the application where the behavior actually broke down.
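Our actual code is .NET, but the shape of that mapping test is easy to sketch.  Everything below, the mapper function and the field names, is hypothetical, shown in Python for brevity:

```python
# Hypothetical sketch of the mapping test; the real mapper, schemas and
# field names differ, and the production code is .NET rather than Python.

def map_public_to_canonical(public_doc):
    """Translate the public-facing schema into the canonical (internal) one."""
    return {
        "item_id": public_doc["id"],
        "quantity": public_doc["qty"],  # the value QA reported as always zero
    }

def test_mapping_preserves_quantity():
    canonical = map_public_to_canonical({"id": "A-100", "qty": 7})
    assert canonical["quantity"] == 7, "quantity lost in the mapping"

test_mapping_preserves_quantity()
print("mapping test passed")
```

If a test like this passes, the mapping tier is exonerated and the search moves on to the next tier; if it fails, the fix and the commit happen right there.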

McConnell and Beck both get kudos for instilling a personal discipline in how I develop: the immediate feedback loop.  In this case, writing a unit test to expose the problem is an example of an immediate feedback loop.  A test written where the problem does not occur becomes a regression test that the continuous integration server can replay as a check.  The test that failed and then succeeded (once I fixed the problem area) stands as another regression check.



One of my biggest frustrations as an engineer is explaining to non-technical folks that I have to spend time writing tests for my own code.  The ones that get it earn my immediate respect.  They understand how much more efficient it is for me to write a test (which takes about five minutes) than to defer the immediate feedback loop to one that involves multiple people and potentially some form of bug tracking tool.  They also understand that a bug found by me carries less cost than a bug found during a QA test.

My next challenge is to instill this discipline in my team.  I have repeatedly demonstrated to them the value of writing a test.  I'll likely do this again over the next 3-5 sprints.  In this case, repetition is probably the best teacher.

I have said this many times, but recently it has become my greatest challenge: it is far easier to build great software than it is to build great people.  Great people, however, are the best long-term investment any company can budget for.


Wednesday, September 14, 2011

ASP.NET Webforms to ASP.NET MVC - Approach

I won't reinvent the wheel in this post as your friend and mine Rajesh Pillai wrote up an awesome step-by-step approach to taking an ASP.NET Webforms project and migrating it to ASP.NET MVC one workflow at a time. 

His post can be found here.  If you also use Microsoft's Ajax Toolkit, there are a couple of things to watch for.  First, mind the order in which your handlers are defined.  One side effect I encountered was the 'case of the missing $sys', in which the MS Ajax JavaScript was being served up through the .axd route.

Second, if your Webforms site is also Spring.NET-enabled, you'll want to either define your own MVC controller factory and register it in Global.asax, or use Spring.NET's MVC application support.  In our case, we chose the former approach, primarily to minimize impact and keep more control over the factory.

Finally, don't throw away your existing site and try to do the changeover Big Bang.  Instead, carefully choose the workflows in the site you want to convert and convert them one at a time.  Your challenge, if you are unfamiliar with Ajax, is to take those UpdatePanels out and replace them with JavaScript that uses an Ajax library (in our case, jQuery) to provide the same behaviors.

A couple of additional SNAFUs I ran into while going through this exercise:
1) Forgetting to add the web.config to the Views folder.  If you create a VS2008 ASP.NET MVC 2 project, you'll notice a web.config is created in the Views folder.  Among other things, this web.config adds essential page filters and parsers that allow the Html helper extensions to be used in the markup.  Without these entries, your aspx views will not have the correct base types to make use of them.

2) Controllers marked as Content and not as Compile
This threw me for a loop for a good hour.  I lost all IntelliSense and, like most of the other Microsoft folks out there, felt quite naked and vulnerable.  Check your Properties window and make sure the controller class's Build Action is set to 'Compile'.  It's that simple.
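For item 1, a trimmed sketch of what that Views web.config provides is below.  The exact type and assembly attributes vary by MVC version, so copy the real file from a freshly generated MVC 2 project rather than typing it in from this sketch:

```xml
<?xml version="1.0"?>
<configuration>
  <system.web>
    <!-- Registers MVC's page parser filter and view base types so that
         .aspx views compile against ViewPage and see the Html/Url helpers. -->
    <pages validateRequest="false"
           pageParserFilterType="System.Web.Mvc.ViewTypeParserFilter, System.Web.Mvc"
           pageBaseType="System.Web.Mvc.ViewPage, System.Web.Mvc"
           userControlBaseType="System.Web.Mvc.ViewUserControl, System.Web.Mvc">
      <controls>
        <add assembly="System.Web.Mvc" namespace="System.Web.Mvc" tagPrefix="mvc" />
      </controls>
    </pages>
  </system.web>
</configuration>
```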

I'll be posting more as we transition the site over to the MVC 2 framework.

Why Technical Debt Sucks

And I do mean sucks.  Technical Debt sucks the energy, motivation and velocity right out of us.

Technical Debt is an umbrella term covering defects, shortcuts, workarounds and other contributors to sub-par software quality.  Shortcuts here means intentionally writing code that quickly solves a problem.  While the immediate problem gets fixed, a shortcut, more often than not, also becomes a primary source of side effects.

Defects are something with which we are all familiar.  Carrying defects across sprints kills velocity.  I had to announce to our stakeholders today that, while our team has averaged 30 story points over the last six sprints, we were only able to complete 14 story points this past sprint because we had eight defects to fix, some of which had a lifespan of a year.  Nothing should be more embarrassing to a team than having to explain this.

Short answer: run your sprints bug-free.

Tuesday, July 26, 2011

Planning a Sprint - Reference Stories

If you've sat in a sprint planning meeting before and played games like Planning Poker to agree on story size, you're most likely very familiar with the problem of some team members perceiving stories as larger or smaller than others do.  Often the reason is a misunderstanding of story scope (the Acceptance Criteria), or effort being perceived as higher in certain areas (for example, a story may require full regression testing if its impact is considerable).

One of the methods your team can use to break the deadlock is comparing against reference stories.  A reference story is usually associated with a size on the Fibonacci sequence (if that's what you're using for story points), and your Scrum Master can facilitate the discussion by having the team agree that a story is either larger or smaller than a specific reference story.  Remember, though: sizing is really a gut feeling and is subject to change.
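A toy sketch of the bracketing exercise (the reference stories and point values here are invented for the example, not from any real backlog):

```python
# Illustrative only: the reference stories and sizes below are made up.

FIBONACCI = [1, 2, 3, 5, 8, 13]

REFERENCE_STORIES = {
    1: "Change a label on the settings page",        # Small
    3: "Add a sort option to the search results",    # Medium
    8: "New report with CSV export",                 # Large
    13: "Integrate a third-party payment provider",  # Larger-than-Large
}

def candidate_sizes(larger_than, smaller_than):
    """Sizes consistent with 'bigger than X, smaller than Y' agreements."""
    return [p for p in FIBONACCI if larger_than < p < smaller_than]

# The team agrees the new story is bigger than the Medium reference (3)
# but smaller than the Large reference (8):
print(candidate_sizes(3, 8))  # → [5]
```

When more than one size survives the bracketing, that's the cue for the Scrum Master to dig back into the Acceptance Criteria rather than argue numbers.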

The best time to decide on reference stories is during the Retrospective.  That way, the stories are still fresh in the team's minds.  Review the stories done (DONE, not almost done) during the sprint and have the team agree on whether the size made sense.  Make sure you get at least a Small, a Medium, a Large and possibly a Larger-than-Large (but not an Epic).  This is one of the best ways to help a team decide on story size during upcoming sprints.

Remember, too, that you'll need to revisit your reference stories frequently, especially if your team dynamics change.

Wednesday, June 29, 2011

Agile Projects - Why you may not need those Cool Tools

We are continually inundated with new tools.  Tools that will write code for us (CodeSmith, T4, etc.), tools that tell us when our code has a smell (ReSharper) and tools that help us keep lists of things to do (VersionOne, Team Foundation Server and others).  I'll be the first to admit that I love tools.  Or rather, I should say that I love trying out tools.

Today, I found myself telling a team member who wanted to track Stories and Tasks in TFS that, above all else, we need to remember the purpose of a tool.  A tool is intended to make a job easier, not harder.  By easier, I mean that a tool should help us take less time to accomplish the same goal, or its overhead should be matched by the additional value it brings to the table.

For the first time in many years, I have completely and unequivocally embraced the art of SIMPLE, namely using just a spreadsheet to manage the Product Backlog and a template to print cards for the Sprint Backlog.  The feedback I got today was that it was very hard for the team member to know where they stood with stories and tasks.  They felt that having the tasks in TFS as work items would help them better understand what they were doing.  What concerns me about this statement is that the task board is less than ten feet from where we sit, all in the same room.

I'd like to believe that the confusion doesn't come from the use of index cards but from a lack of collaboration or something else.  How can managing a list in an application make it easier to know what someone has committed to and what they need to accomplish in nine business days?

I believe that tools serve a valued purpose, and tools like TFS are extremely valuable, especially for geographically distributed teams.  Teams that sit in the same room at the same table, however, don't need the additional overhead.  Combine managing the board with managing TFS work items, and it seems to me the administrative overhead doubles for very little additional value.

Tim