
Wednesday, June 29, 2011

Agile Projects - Why you may not need those Cool Tools

We are continually inundated with new tools: tools that will write code for us (CodeSmith, T4, and so on), tools that tell us when our code has a smell (ReSharper), and tools that help us keep lists of things to do (VersionOne, Team Foundation Server and others).  I'll be the first to admit that I love tools.  Or rather, I should say that I love trying out tools.

Today, I found myself telling a team member who wanted to track Stories and Tasks in TFS that, above all else, we need to remember the purpose of a tool.  A tool is intended to make a job easier, not harder.  By easier, I mean that a tool should help us accomplish the same goal in less time, or its overhead should be justified by the additional value it brings to the table.

For the first time in many years, I have completely and unequivocally embraced the art of SIMPLE: using just a spreadsheet to manage the Product Backlog and a template to print cards for the Sprint Backlog.  The feedback I got today was that it was very hard for the team member to know where they were with stories and tasks.  They felt that having the tasks in TFS as work items would help them better understand what they were doing.  What concerns me about this statement is that the task board is less than 10 feet from where we sit, all in the same room.

I'd like to believe that the confusion doesn't come from the use of index cards but from a lack of collaboration or something else.  How can managing a list in an application make it easier to know what someone has committed to and what they need to accomplish in 9 business days?

I believe that tools serve a valued purpose, and tools like TFS are extremely valuable, especially for geographically dispersed teams.  Teams that sit in the same room at the same table, however, don't need the additional overhead.  Combine managing the board with managing TFS work items and it seems to me that you double the administrative overhead for very little additional value.

Tim

Friday, December 17, 2010

Agile Software Development Part 1: It's All in the Application Framework

Application frameworks such as Spring.NET and Unity (there are others, but I have used these two with great success) will cost you a little time up front in learning them, but they will save you a ton of time as you add more features to your application.  For many of you this is not new and you're probably moving on to another blog, but for some it's a new concept.  Why, after all, would we want to use a third-party application framework at all?  It comes down to the old cliché of not reinventing the wheel.  For an application of any size, there are going to be dependencies.  These dependencies can show up in the form of models, views, controllers, supporting classes, presenters, widgets and so on.  In a well-designed n-tier application, it is ideal to have these dependencies expressed in the form of explicit interfaces, especially in components that cross boundaries.  By this, I'm referring to the definition of interfaces: the contracts that are guaranteed to be supported by any class that implements them (as opposed to the implicit interface, which consists of all public methods, properties and events).

This is where the application framework comes in.  Rather than having the class you just instantiated create instances of all of its dependencies, the framework can instead serve up your class, with its dependencies already wired, to the calling context.  Fowler and others commonly refer to this as Dependency Injection.  For most basic needs, Unity provides ample support for dependency injection.  In many applications, however, there will arise the need for things like custom factories, where families of classes implement the same interface and are used at different times based on context.  If those needs arise, Spring.NET might be a better choice, as it has this capability built in through factory methods; you can achieve the same thing with Unity, but only by writing extensions (at this time I'm referring to V1, not V2).
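
To make the idea concrete, here is a minimal sketch of constructor injection with Unity.  The interface and class names (IOrderRepository, SqlOrderRepository, OrderService) are hypothetical and only illustrate the shape of the pattern; the container wiring itself is the standard Unity API.

  using Microsoft.Practices.Unity;

  // Hypothetical contract and implementation, used only to illustrate the pattern.
  public interface IOrderRepository
  {
      void Save(string order);
  }

  public class SqlOrderRepository : IOrderRepository
  {
      public void Save(string order) { /* persist to the database */ }
  }

  // The service never news up its dependency; it declares it in the constructor.
  public class OrderService
  {
      private readonly IOrderRepository _repository;

      public OrderService(IOrderRepository repository)
      {
          _repository = repository;
      }

      public void PlaceOrder(string order)
      {
          _repository.Save(order);
      }
  }

  public static class Program
  {
      public static void Main()
      {
          // The container maps the contract to a concrete type and wires the graph.
          var container = new UnityContainer();
          container.RegisterType<IOrderRepository, SqlOrderRepository>();

          // Unity resolves OrderService and injects SqlOrderRepository for us.
          var service = container.Resolve<OrderService>();
          service.PlaceOrder("12345");
      }
  }

The point is that OrderService never decides which repository it gets; the container does, which is exactly what lets you swap implementations without touching the class.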

Likely, you'll also need support for things like logging and caching.  Again, both Unity and Spring.NET can provide these, either through dependency injection or through their AOP capabilities.

If you are a steadfast Microsoft fan and use other capabilities like Enterprise Library, Unity will suit most of your needs.  If, however, you are open to other frameworks, I highly recommend taking a peek at Spring.NET.  Spring's latest release adds support for ASP.NET MVC 2 as well as the Windows Communication Foundation API.  Previous support for ASMX services made it possible to create dynamic service proxies as well as services.  When consuming services, there is often a need to transform messages from services into a model for the client to consume.  The ASMX support was so well done that it was possible to share entities between service and client without the need for message transformation on the client side.  Additionally, Spring.NET has native support for NHibernate.
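
For comparison, the same kind of wiring expressed in a Spring.NET XML object definition might look like the sketch below; the type and assembly names are again hypothetical.

  <objects xmlns="http://www.springframework.net">

    <!-- Concrete implementation registered by its assembly-qualified type. -->
    <object id="orderRepository"
            type="MyApp.Data.SqlOrderRepository, MyApp.Data" />

    <!-- Constructor injection: Spring passes orderRepository to OrderService. -->
    <object id="orderService" type="MyApp.Services.OrderService, MyApp.Services">
      <constructor-arg ref="orderRepository" />
    </object>

  </objects>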

Before I end up sounding like I'm doing a comparison of the two, I'll stop.  The main takeaway is to make it a Sprint Zero effort to survey the application frameworks out there, determine what your needs are, and then run a formal selection process to determine the best fit.  By all means, though, shorten your project life cycle and use an application framework.

Tuesday, December 14, 2010

Scrumgineering defined

Scrumgineering, simply put, is:
  "Practicing the craft of Software Engineering in a Scrum team". 

The rabbit hole goes much deeper, though, when we start talking about how to deliver quality software.  It takes a firm understanding of design principles, the solution domain and, especially, the various practices that ensure the delivery of quality software.  These practices include, but are not limited to: Unit Testing/TDD, Continuous Integration and Deployment Automation.


Unit Testing/TDD
In a world where development iterations are short, QA will most likely find themselves receiving completed stories close to the end of the sprint.  On the past two agile teams I've been a member of, this has certainly been the case.  Since the collective goal of the team needs to be carrying little or no technical debt over to subsequent sprints, the team's developers must employ unit testing to minimize the footprint over which the QA team member(s) must test.  Testing all the non-visual aspects of the source code with unit tests ensures that QA can focus mainly on black-box testing the features.  On a recent project, we even included a small test harness that let developers execute queries against views and verify they met the global business requirements for queries (thresholds on logical reads and on total execution time).  Test-Driven Development can be used strategically to drive out a design when the problem domain isn't completely known or clear.  I still recommend a "Sprint Zero" to define the high-level architecture.
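
As a simple illustration of the developers' half of that bargain, here is a sketch of a unit test against a hypothetical, non-visual business rule; NUnit is assumed as the test framework and the DiscountCalculator class is made up for the example.

  using NUnit.Framework;

  // Hypothetical business rule: orders over $100 earn a 10% discount.
  public class DiscountCalculator
  {
      public decimal Apply(decimal orderTotal)
      {
          return orderTotal > 100m ? orderTotal * 0.9m : orderTotal;
      }
  }

  [TestFixture]
  public class DiscountCalculatorTests
  {
      [Test]
      public void Orders_over_one_hundred_receive_ten_percent_discount()
      {
          var calculator = new DiscountCalculator();
          Assert.AreEqual(180m, calculator.Apply(200m));
      }

      [Test]
      public void Orders_at_or_below_one_hundred_are_not_discounted()
      {
          var calculator = new DiscountCalculator();
          Assert.AreEqual(100m, calculator.Apply(100m));
      }
  }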

Continuous Integration
Taking advantage of build servers is equally important, for two reasons.  First, it provides a sterile environment in which to execute the unit tests and helps us avoid the "But it works on my machine" mantra.  Second, it allows us to integrate the various components and run tests that are similar to unit tests but broaden the spectrum to encompass more than one class or component.  This second set of tests helps drive out problems with configuration and especially with deployments.
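
As a rough sketch of the build-server side, the target below runs a test assembly with the NUnit console runner so the build fails when any test fails; the NUnitConsolePath property, output paths and project name are all hypothetical and depend on your own build layout.

  <!-- Runs on the build server after compilation; a non-zero exit code fails the build. -->
  <Target Name="RunUnitTests">
    <Exec Command="&quot;$(NUnitConsolePath)\nunit-console.exe&quot; &quot;$(OutDir)MyProject.Tests.dll&quot; /xml=&quot;$(OutDir)TestResults.xml&quot;" />
  </Target>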

Deployment Automation
Deployment Automation is just that: automating the configuration and installation of our application.  Most applications these days are web-based, meaning there is typically no media to be circulated to clients.  Instead, the application is installed and configured on a web server or server farm.  ASP.NET applications are relatively easy to deploy and in most cases can be deployed using XCOPY.  The challenge comes when moving from environment to environment (e.g., Staging to Production).  Configuration often differs greatly, and managing it manually introduces major risk.  Taking advantage of the Continuous Integration servers mentioned earlier, it is relatively easy to set up an application's configuration to target a specific environment.  Tools like MSBuild and NAnt provide an ample set of tasks that allow configuration settings to be changed on the fly, and add-on libraries such as the SDC Tasks add even more functionality to the build engine (for MSBuild, that is).
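
A minimal sketch of one way to do this with nothing but the built-in MSBuild Copy task: keep one config file per environment alongside the site and overwrite Web.config with the right one during deployment.  The folder layout, the MyWebApp name and the DeployRoot and TargetEnvironment properties are hypothetical.

  <PropertyGroup>
    <!-- Default to Staging when no environment is passed on the command line. -->
    <TargetEnvironment Condition="'$(TargetEnvironment)' == ''">Staging</TargetEnvironment>
  </PropertyGroup>

  <Target Name="ApplyEnvironmentConfig">
    <!-- Overwrite the deployed Web.config with the copy kept for the target environment. -->
    <Copy SourceFiles="$(DeployRoot)\MyWebApp\Config\Web.$(TargetEnvironment).config"
          DestinationFiles="$(DeployRoot)\MyWebApp\Web.config" />
  </Target>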

These practices coupled with Agile software development techniques (more to follow on this topic) will greatly improve overall software quality.

Friday, December 10, 2010

Unit Testing is not Important, It's Critical

I recall reading a paragraph in Kent Beck's Test Driven Development: By Example in which he points out that developers end up writing almost twice as much code as they would have written had they not used TDD to drive out the design.  This is most likely in the top 5 reasons why many teams and companies don't adopt TDD as a standard practice (aside from the paradigm shift TDD entails), although what isn't obvious is that the first releasable version of the code will most likely consist of far fewer lines of source code.

There is a similar albatross hanging around the neck of teams who want to shore up QA efforts by writing unit tests.  In many eyes outside the immediate team, it is simply perceived as writing more code than is necessary to solve the problem.  This mindset could not be further from the truth.  As more and more companies adopt iterative approaches to software development, having developers write tests that exercise their code during the iterations is mission critical.  Writing code iteratively requires that the top-down and bottom-up approaches to development be turned on their sides.  Rather than fully completing the data access layer, iterative approaches require that UI, business and data access code all be written for a small vertical sliver of functionality.  This approach ultimately means that all layers are affected by each new feature.  Equally, the baseline architecture may be affected by each new iteration.

The end result is source code that is constantly in flux as it is refactored to add new functionality.  To battle this flux, we turn to unit tests.  If you are new to unit testing, think of it as non-visual code that exercises the non-visual portions of the application.  Ultimately, this means that QA resources (which tend to be scarce) need to focus only on exercising the application in the guise of the end user.  With unit tests shoring up their efforts, QA can do black-box testing and ensure that the visual aspects of the system behave correctly.  Additionally, developers can be fairly certain that when a defect is found it is contained within a much smaller boundary (typically infrastructure for complex systems, or perhaps just the integration between the visual layer and the business layer).

Iterative development does not have room for monumental efforts at the end of a release where thousands of bugs accumulated over the iterations are finally resolved.  Defects need to be resolved as soon as they are discovered, and the end goal needs to be discovering and resolving them DURING the iteration, not in some future iteration whose central focus is finding them.

If there remains a doubt in your mind, contact me and I will make myself available online for a demonstration of just how effective unit testing can be in managing defects.  

Wednesday, October 20, 2010

Extending TFS Builds - Specifically working with the Drop Location

I've seen a few different posts regarding getting to the Drop Location when working with MSBuild. In short, the DropLocation property isn't set when PropertyGroup variables are evaluated.

A workaround to this is to use the CreateProperty task and the GetBuildProperties task in a target just before you call other targets that need the folder location.

For starters, our script adds the AfterDropBuild target so we get into the workflow right after the build has been dropped to its drop location. An example drop location might look like "d:\builddrop\MyProject_Staging\Build_20101001.1".

Once you have the folder, you can then target specific published projects and do meaningful tasks like changing configuration to match the target environment.

Take a moment to look at builds that have been dropped and you should see, beneath the build flavor (Debug, Release), a folder called "_PublishedWebsites".  Each of the web sites (or services) defined in your solution will exist beneath that folder.

To get the folder, you need to configure a dependency target that is called through the "DependsOnTargets" attribute of the AfterDropBuild target.  That target calls the same task the Microsoft targets use, specifically "GetBuildProperties".  If you traverse the targets stored in the Microsoft folder under the MSBuild directory, you'll see how it is used.  What you want to do is acquire just the DropLocation value.

Next, use the CreateProperty task to [re]populate a property variable with the DropLocation value.

You can accomplish this without having to wrap the TFSBuild.proj targets at all (as I've seen in other posts).

The following is an example:

  <PropertyGroup>
    <CurrentDropLocation></CurrentDropLocation>
  </PropertyGroup>

  <Target Name="AfterDropBuild" DependsOnTargets="ReinitializeBuildProperties;ConfigureUnityMappingFile;ConfigureWebConfigFile;CopyAndConfigureService">
    <!-- do your other stuff here, such as configuring your .config files -->
  </Target>

  <Target Name="ReinitializeBuildProperties">
    <!-- Re-read the build properties from TFS so DropLocation has a value. -->
    <GetBuildProperties TeamFoundationServerUrl="$(TeamFoundationServerUrl)"
                        BuildUri="$(BuildUri)">
      <Output TaskParameter="DropLocation" PropertyName="MyDropLocation" />
    </GetBuildProperties>

    <!-- Repopulate CurrentDropLocation, including the build flavor folder. -->
    <CreateProperty Value="$(MyDropLocation)\%(ConfigurationToBuild.FlavorToBuild)">
      <Output PropertyName="CurrentDropLocation" TaskParameter="Value" />
    </CreateProperty>
  </Target>

Spring.NET and the power of the Dictionary

I stumbled on a technique for simplifying context-based factories (which, if you think about it, is exactly what Spring.NET's IoC container is).  This is not a new concept, but it's worth repeating due to its simplicity and elegance.

As an example scenario, we have an application that processes about 20 different types of notifications that send targeted content to subscribers (yes, it's an email application, a.k.a. a SPAM engine).  Each notification can be configured independently of the others in the areas of transformation (the creation of formatted content to send) and transfer (SMTP, DB, MSMQ, etc.).

We arrived at a design that uses the Composite, Strategy and Command patterns.  The Composite pattern is applied to Notification Commands (for example, a notification that has many sections), and each Notification Command implements the Command pattern (Execute, etc.).  Likewise, Transfers are configured as Strategies (Strategy pattern) and Composites (a Transfer Strategy may be three physical strategies, i.e., send to SMTP, send to DB and send to MSMQ).
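
A stripped-down sketch of what the command side of that design might look like; the names (INotificationCommand, Job, CompositeNotificationCommand) are hypothetical and only show the shape of the Composite and Command patterns described above.

  using System.Collections.Generic;

  // Command pattern: every notification knows how to execute itself.
  public interface INotificationCommand
  {
      void Execute(Job job);
  }

  // Hypothetical job context handed to the controller.
  public class Job
  {
      public int NotificationTypeId { get; set; }
  }

  // Composite pattern: a notification made up of several section commands
  // is itself just another command.
  public class CompositeNotificationCommand : INotificationCommand
  {
      private readonly List<INotificationCommand> _sections = new List<INotificationCommand>();

      public void Add(INotificationCommand section)
      {
          _sections.Add(section);
      }

      public void Execute(Job job)
      {
          foreach (var section in _sections)
          {
              section.Execute(job);
          }
      }
  }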

To add context, each type of notification has a unique value that traces back to the underlying persistent store.  Notification Type 1 for example, is a "Sales Flyer".

We have a very simple Job controller that is handed a Job (context) and is then asked to process it.  What used to be about 5,000 lines of code (for 20 notifications) has effectively been reduced to about 150 lines of code. 

In our Spring.NET XML registry, we defined our commands and strategies but were then left wondering HOW we would choose the appropriate command to process the Job.  What we arrived at is both simple and rather elegant.

Spring.NET registries can store much more than templates for objects.  They can also store dictionaries.  In our case, we chose to store a dictionary of commands.  The dictionary's key is the unique identifier for each type of job and the value is the id of the command to use.  When the controller is activated and the job context is passed in, we reach into the dictionary (served up as an injected property on the controller) and then ask the registry for the command whose id we just retrieved from the dictionary.
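
Here is a rough sketch of what that wiring might look like.  The object ids, key values, JobController property and INotificationCommand interface (from the sketch above) are hypothetical, but the <dictionary> element and the GetObject lookup are standard Spring.NET features.

  <objects xmlns="http://www.springframework.net">

    <!-- One object definition per notification command (hypothetical ids/types). -->
    <object id="salesFlyerCommand" type="MyApp.Notifications.SalesFlyerCommand, MyApp" />
    <object id="newsletterCommand" type="MyApp.Notifications.NewsletterCommand, MyApp" />

    <!-- The controller is handed a dictionary mapping a notification type id
         to the id of the command object that should process it. -->
    <object id="jobController" type="MyApp.Notifications.JobController, MyApp">
      <property name="CommandMap">
        <dictionary key-type="string" value-type="string">
          <entry key="1" value="salesFlyerCommand" />
          <entry key="2" value="newsletterCommand" />
        </dictionary>
      </property>
    </object>

  </objects>

And inside the controller (requires Spring.Context.Support for ContextRegistry):

  // Map the job's type id to a command id, then ask the Spring context
  // for that command and execute it against the job.
  string commandId = (string)CommandMap[job.NotificationTypeId.ToString()];
  var command = (INotificationCommand)ContextRegistry.GetContext().GetObject(commandId);
  command.Execute(job);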

Code can be shared if anyone is interested, although I'm fairly positive this is not an original idea and to some it may be more along the lines of "we already do this".

Wearing two saddles: Scrum and the multiple project Effect

Back in April, I agreed to be a team member on 2 projects split equally 50/50.  This was an experiment I agreed to in order to experience first-hand why team members need to be dedicated to a single project. 

It started out fine on planning day.  Half the day was dedicated to planning for Project A and the other half to Project B.  Halfway through the first sprint, it was apparent what was going to happen.  My 3-hour daily allocation per project was working out to be more like 2 effective hours per project, and just as I was about ready to implement something cool for Project A, I had to stop and work on Project B.

On the day when QA needed the last bits to test, I had incomplete stories on both projects.  And of course, the next bad thing that happened was that Project A wanted to be more valuable than Project B.  So what did I do?  Of course!  I worked long hours on Project A.

In short, neither sprint's goal was met and I had very few completed stories.  Each project became the other's impediment and I "thrashed" between the two of them, getting very little done.

This experiment reminds me of a line from Patrick Lencioni's "The Five Dysfunctions of a Team", where the CEO character has to tell her team to pick a #1 priority goal.  A member asked why not two goals, and her reply was, "If every goal is a priority, then none of them are."

In my case, this is exactly what happened.  Both projects wanted to be #1, and that's just not realistic.  One project needed to be killed (there were no other resources to work on it).

As Scrum practitioners, it's important to learn why certain things don't work well and why others do (such as the XP practices).  By all means, don't take my word for it.  Experience is the best teacher, and in this case my experience was quite the eye-opener.  We have since moved to strictly dedicated team members.  We still have shared resources, but none of them own stories and they certainly don't sit at the table.