
Friday, December 17, 2010

Agile Software Development Part 1: It's All in the Application Framework

Application frameworks such as Spring.NET and Unity (there are others, but I have used these two with great success) will cost you a little time up front to learn, but they will save you a ton of time as you add more features to your application. For many of you this is nothing new and you're probably moving on to another blog, but for some it's a new concept. Why, after all, would we want to use a third-party application framework at all? It comes down to the old cliché of "not reinventing the wheel". An application of any size is going to have dependencies. These dependencies show up in the form of models, views, controllers, supporting classes, presenters, widgets, and so on. In a well-designed n-tier application, it is ideal to have these dependencies expressed as explicit interfaces, especially in components that cross boundaries. By this I mean interface definitions: the contracts that are guaranteed to be supported by any class that implements them (as opposed to the implicit interface, which consists of all of a class's public methods, properties, and events).
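To make that concrete, here is a small illustration (the names are hypothetical, not from any project mentioned in this post) of an explicit interface: the contract is the single method the interface declares, no matter what else an implementing class exposes publicly.

public interface INotificationSender
{
    // The explicit contract any implementer must honor.
    void Send(string recipient, string subject, string body);
}

// One possible implementation; consumers depend only on the interface above,
// not on this class's full implicit interface (its public members).
public class SmtpNotificationSender : INotificationSender
{
    public void Send(string recipient, string subject, string body)
    {
        // SMTP-specific delivery details would live here.
    }
}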

This is where the application framework comes in. Rather than having each class create instances of all of its dependencies itself, the framework constructs the class and serves it up with those dependencies already supplied. Fowler and others commonly refer to this as Dependency Injection. For most basic needs, Unity provides ample support for dependency injection. In many applications, however, you will need things like custom factories, where families of classes implement the same interface and different implementations are chosen based on context. If those needs arise, Spring.NET may be a better choice, as it has that capability built in through factory methods; you can achieve the same thing with Unity, but only by writing extensions (at this time I'm referring to V1, not V2).
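As a rough sketch of what that buys you, the class below declares its dependency through its constructor and a composition root asks the container for it. The Unity calls shown (UnityContainer, RegisterType, Resolve) are the basic registration API, and the type names continue the hypothetical INotificationSender example above.

using Microsoft.Practices.Unity;

public class CampaignController
{
    private readonly INotificationSender _sender;

    // The dependency is expressed through the constructor; this class never
    // decides which concrete sender it gets.
    public CampaignController(INotificationSender sender)
    {
        _sender = sender;
    }

    public void Publish(string recipient)
    {
        _sender.Send(recipient, "New campaign", "Targeted content goes here.");
    }
}

public static class CompositionRoot
{
    public static CampaignController Build()
    {
        // The container owns the mapping from interface to implementation.
        var container = new UnityContainer();
        container.RegisterType<INotificationSender, SmtpNotificationSender>();
        return container.Resolve<CampaignController>();
    }
}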

Likely, you'll also need support for things like logging and caching.  Again, both Unity and Spring can provide these either through dependency injection or through their AOP capabilities. 
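As an example of the AOP route, here is a minimal sketch of a logging advice using the AOP Alliance interceptor interface that Spring.NET exposes; how the advice gets applied to target objects is configured in the container and is omitted here, so treat this as illustrative rather than production logging.

using System;
using AopAlliance.Intercept;

public class ConsoleLoggingInterceptor : IMethodInterceptor
{
    public object Invoke(IMethodInvocation invocation)
    {
        // Runs around every intercepted call: log entry, proceed, log exit.
        Console.WriteLine("Entering " + invocation.Method.Name);
        try
        {
            return invocation.Proceed();
        }
        finally
        {
            Console.WriteLine("Leaving " + invocation.Method.Name);
        }
    }
}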

If you are a steadfast Microsoft fan and use other capabilities like Enterprise Library, Unity will suit most of your needs.  If however, you are open to other frameworks, I highly recommend taking a peek at Spring.NET. In Spring's latest release, there is now support for ASP.NET MVC 2 as well as the Windows Communication Foundation API.  Previous support for ASMX services made it possible to create dynamic service proxies as well as services.  When consuming services there is often a need to transform messages from services to a model for the client to consume.  ASMX support was so well done, it was possible to share entities between service and client without the need for message transformation on the client side. Additionally, there is native support for NHibernate in Spring.NET. 

Before I end up sounding like I'm doing a comparison of the two, I'll stop. The main takeaway should be to make it a Sprint Zero effort to compare the application frameworks out there, determine what your needs are, and then run a formal selection process to determine the best fit. By all means though, shorten your project life-cycle and use an application framework.

Tuesday, December 14, 2010

Scrumgineering defined

Scrumgineering, simply put, is:
  "Practicing the craft of Software Engineering in a Scrum team". 

The rabbit hole goes much deeper, though, when we start talking about how to deliver quality software. It takes a firm understanding of design principles, the solution domain, and especially the various practices that ensure the delivery of quality software. These practices include, but are not limited to: Unit Testing/TDD, Continuous Integration, and Deployment Automation.


Unit Testing/TDD
In a world where development iterations are short, QA will most likely find themselves receiving completed stories close to the end of the sprint. On the past two agile teams I've been a member of, this has certainly been the case. Since the collective goal of the team needs to be carrying little or no technical debt over to subsequent sprints, the team's developers must employ unit testing to minimize the footprint the QA team member(s) must test. Covering all the non-visual aspects of the source code with unit tests ensures that QA can focus mainly on black-box testing the features. In a recent project, we even included a small test harness that let our developers execute queries against views and verify they met the global business requirements for queries (thresholds on logical reads and on total execution time). Test Driven Development can also be used strategically to drive out a design when the problem domain isn't completely known or clear. I still recommend a "Sprint Zero" to define the high-level architecture.
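To give a feel for that query harness idea, here is a sketch of what such a test could look like; QueryStatisticsRunner and the specific threshold numbers are hypothetical stand-ins, not the actual harness from that project.

using NUnit.Framework;

[TestFixture]
public class ActiveSubscribersViewTests
{
    [Test]
    public void Query_stays_within_global_performance_thresholds()
    {
        // Hypothetical helper that executes the query and captures
        // logical reads and elapsed time from the database session.
        var stats = QueryStatisticsRunner.Execute("SELECT * FROM dbo.vwActiveSubscribers");

        Assert.LessOrEqual(stats.LogicalReads, 1000, "Logical reads exceeded the agreed threshold.");
        Assert.LessOrEqual(stats.ElapsedMilliseconds, 500, "Execution time exceeded the agreed threshold.");
    }
}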

Continuous Integration
Taking advantage of build servers is equally important, for two reasons. First, it provides a sterile environment in which to execute the unit tests and helps us avoid the "But it works on my machine" mantra. Second, it allows us to integrate the various components and run tests that are similar to unit tests but broaden the spectrum to encompass more than one class or component. This second set of tests helps drive out problems with configuration and especially with deployments.

Deployment Automation
Deployment Automation is just that: we automate the configuration and installation of our application. Most applications these days are web-based, meaning there is typically no media to be circulated to clients; instead, the application is installed and configured on a web server or server farm. ASP.NET applications are relatively easy to deploy, and in most cases deployment can be done with XCOPY. The challenge comes when moving from environment to environment (e.g., Staging to Production). Configuration often differs greatly, and managing configuration manually introduces major risk. Taking advantage of the Continuous Integration servers mentioned earlier, it is relatively easy to set up an application's configuration to target a specific environment. Tools like MSBuild and NAnt provide an ample set of tasks that allow configuration settings to be changed on the fly, and add-on tasks such as SDCTasks add even more functionality to the build engine (for MSBuild, that is).

These practices coupled with Agile software development techniques (more to follow on this topic) will greatly improve overall software quality.

Friday, December 10, 2010

Unit Testing is not Important, It's Critical

I recall reading a paragraph in Kent Beck's book Test Driven Development: By Example in which he points out that developers end up writing almost twice as much code as they would have written had they not used TDD to drive out the design. This is probably among the top five reasons why many teams and companies don't adopt TDD as a standard practice (aside from the paradigm shift TDD entails), although what isn't obvious is that the first releasable version of the production code will most likely consist of far fewer lines.

There is a similar albatross hanging around the neck of teams who want to shore up QA efforts by writing unit tests. In many eyes outside of the immediate team, it is simply perceived as writing more code than is necessary to solve the problem. That mindset could not be further from the truth. As more and more companies adopt iterative approaches to software development, having developers write tests that exercise their code during the iterations is mission critical. Writing code iteratively requires that top-down or bottom-up approaches to development be turned on their side. Rather than fully completing the data access layer before moving on, iterative approaches require that the UI, business, and data access layers all be written for a small vertical sliver of functionality. This approach ultimately means that all layers are affected with each new feature, and the baseline architecture may also be affected with each new iteration.

The end result is source code that is constantly in flux as it is refactored to add new functionality. To battle this flux, we turn to unit tests. If you are new to unit testing, think of it as non-visual code that exercises the non-visual portions of the application. Ultimately, this means that QA resources (which tend to be scarce) need to focus only on exercising the application in the guise of the end user. With unit tests shoring up their efforts, QA can do black box testing and ensure that the visual aspects of the system behave correctly. Additionally, developers can be fairly certain that when a defect is found it is contained within a much smaller boundary (typically infrastructure for complex systems, or perhaps just the integration between the visual layer and the business layer).

Iterative development does not have room for monumental efforts at the end of a release where thousands of bugs accumulated over each iteration are then resolved.  Defects need to be resolved as soon as they are discovered and the end goal needs to be discovering and resolving them DURING the iteration, not in some future iteration where the central focus is finding them. 

If there remains a doubt in your mind, contact me and I will make myself available online for a demonstration of just how effective unit testing can be in managing defects.  

Wednesday, October 20, 2010

Extending TFS Builds - Specifically working with the Drop Location

I've seen a few different posts regarding getting to the Drop Location when working with MSBuild. In short, the DropLocation property isn't set when PropertyGroup variables are evaluated.

A workaround to this is to use the CreateProperty task and the GetBuildProperties task in a target just before you call other targets that need the folder location.

For starters, our script adds the AfterDropBuild target so we get into the workflow right after the build has been dropped to its drop location. An example drop location might look like "d:\builddrop\MyProject_Staging\Build_20101001.1".

Once you have the folder, you can then target specific published projects and do meaningful tasks like changing configuration to match the target environment.

Take a moment to look at builds that have been dropped and you should see, beneath the build flavor (Debug, Release), a folder called "_PublishedWebSites". Each of the web sites (or services) defined in your solution will exist beneath that folder.

To get the folder, you need to configure a dependency target that is called through the "DependsOnTargets" attribute of the AfterDropBuild target. The target calls the same task used by the Microsoft targets, specifically "GetBuildProperties". If you traverse the targets stored in the Microsoft folder under the MSBuild directory, you'll see how it is used. What you want to do is acquire just the DropLocation value.

Next, use the CreateProperty task to [re]populate a property variable with the DropLocation value.

You can accomplish this without having to wrap the TFSBuild.proj targets at all (as I've seen in other posts).

The following is an example:

<xmp>
  <PropertyGroup>
    <CurrentDropLocation></CurrentDropLocation>
  </PropertyGroup>

  <!-- ConfigureUnityMappingFile, ConfigureWebConfigFile and CopyAndConfigureService are
       this project's own targets and are not shown here. -->
  <Target Name="AfterDropBuild" DependsOnTargets="ReinitializeBuildProperties;ConfigureUnityMappingFile;ConfigureWebConfigFile;CopyAndConfigureService">
    <!-- do your other stuff here such as configuring your .config files. -->
  </Target>

  <Target Name="ReinitializeBuildProperties">
    <!-- Ask Team Build for the drop location, which isn't available when PropertyGroups are first evaluated. -->
    <GetBuildProperties TeamFoundationServerUrl="$(TeamFoundationServerUrl)"
                        BuildUri="$(BuildUri)">
      <Output TaskParameter="DropLocation" PropertyName="MyDropLocation" />
    </GetBuildProperties>

    <!-- Re-populate CurrentDropLocation with the drop folder plus the build flavor (e.g. Release). -->
    <CreateProperty Value="$(MyDropLocation)\%(ConfigurationToBuild.FlavorToBuild)">
      <Output PropertyName="CurrentDropLocation" TaskParameter="Value" />
    </CreateProperty>
  </Target>
</xmp>

Spring.NET and the power of the Dictionary

I stumbled on a technique for simplifying context-based factories (which, if you think about it, is exactly what Spring.NET's IoC container is). This is not a new concept, but it's worth repeating due to its simplicity and elegance.

As an example scenario, we have an application that processes about 20 different types of notifications that send out targeted content to subscribers (yes, it's an email application aka SPAM engine).  Each notification can be configured independently of the others in the areas of transformation (the creation of formatted content to send) and transfer (SMTP, DB, MSMQ, etc). 

We arrived at a design that uses the Composite, Strategy, and Command patterns. The Composite pattern is applied to Notification Commands (for example, a notification that has many sections), and each Notification Command implements the Command pattern (Execute, etc.). Likewise, Transfers are configured as Strategies (Strategy pattern) and Composites (a Transfer Strategy may be three physical strategies, i.e., send to SMTP, send to DB, and send to MSMQ).
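A stripped-down sketch of that shape is shown below; these types are illustrative, not the project's actual classes. Each notification step implements the same Execute contract, and a composite command is itself just another command.

using System.Collections.Generic;

// Minimal stand-in for the job context handed to the controller.
public class Job
{
    public int NotificationTypeId { get; set; }
}

// Command pattern: every notification step exposes the same contract.
public interface INotificationCommand
{
    void Execute(Job job);
}

// Composite pattern: a notification made of several sections is itself a command.
public class CompositeNotificationCommand : INotificationCommand
{
    private readonly List<INotificationCommand> _children = new List<INotificationCommand>();

    public void Add(INotificationCommand child)
    {
        _children.Add(child);
    }

    public void Execute(Job job)
    {
        foreach (var child in _children)
        {
            child.Execute(job);
        }
    }
}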

To add context, each type of notification has a unique value that traces back to the underlying persistent store.  Notification Type 1 for example, is a "Sales Flyer".

We have a very simple Job controller that is handed a Job (context) and is then asked to process it.  What used to be about 5,000 lines of code (for 20 notifications) has effectively been reduced to about 150 lines of code. 

In our Spring.NET XML registry, we defined our commands and strategies but were then left with the question of HOW we would choose the appropriate command to process the Job. What we arrived at is both simple and rather elegant.

Spring.NET registries can store much more than templates for objects; they can also store dictionaries. In our case, we chose to store a dictionary of commands. The dictionary's key is the unique identifier for each type of job, and the value is the id of the command to use. When the controller is activated and handed the job context, we reach into the dictionary (served up as a dependency property on the controller), look up the command id for that job type, and then ask the registry for the ICommand with that id.
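Roughly, the controller side of that lookup can look like the following sketch, continuing the illustrative Job and INotificationCommand types above and assuming the dictionary and Spring's IApplicationContext are injected by the container.

using System.Collections.Generic;
using Spring.Context;

public class JobController
{
    // Injected from the Spring.NET registry: job type id -> command object id.
    public IDictionary<int, string> CommandMap { get; set; }

    // Injected context used to resolve the command definition by its object id.
    public IApplicationContext Context { get; set; }

    public void Process(Job job)
    {
        string commandId = CommandMap[job.NotificationTypeId];
        var command = (INotificationCommand)Context.GetObject(commandId);
        command.Execute(job);
    }
}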

Code can be shared if anyone is interested, although I'm fairly positive this is not an original idea and to some it may be more along the lines of "we already do this".

Wearing two saddles: Scrum and the multiple-project effect

Back in April, I agreed to be a team member on two projects, split equally 50/50. It was an experiment I took on in order to experience first-hand why team members need to be dedicated to a single project.

It started out fine on planning day. Half the day was dedicated to planning for Project A and the other half to Project B. Halfway through the first sprint, it was apparent what was going to happen. My three-hour daily allocation per project was working out to more like two effective hours per project, and just as I was about ready to implement something cool for Project A, I had to stop and work on Project B.

On the day when QA needed to have the last bits to test, I had incomplete stories on both projects.  And of course, the next bad thing that happened was Project A wanted to be more valuable than Project B.  So what did I do?  Of course!  I worked long hours on Project A. 

In short, both sprints' goals were not met and I had very few completed stories.  Each project became the other's impediment and I "thrashed" between the two of them getting very little done.

This experiment reminds me of a line from Patrick Lencioni's "The Five Dysfunctions of a Team", where the CEO character had to tell her team to pick a single #1 priority goal. A member asked why not two goals, and her reply was, "If every goal is a priority, then none of them are".

In my case, this was exactly what happened.  Both projects wanted to be #1 and it's just not realistic.  One project needed to be killed (there were no other resources to work on it).

As Scrum practitioners, it's important to learn why certain things don't work well and why others do (such as XP practices).  By all means, don't take my word for it.  Experience is the best teacher and in this case, my experience was quite the eye opener.  We have since moved to strictly dedicated team members.  We still have shared resources but none of them own stories and they certainly don't sit at the table.

Thursday, April 29, 2010

Finding Root Cause More Quickly with Unit Tests

I can't emphasize enough the importance of writing good meaningful unit tests. This was reinforced again today when I received a request from one of our folks who said there was a bug with one of the systems for which I'm responsible. My immediate reaction was that I didn't trust the code I had written months ago. The scenario is pretty simple. If one property of a class is null, output a default value. The code is:

Writer.Write(instance.AdText ?? "Click link for more info....");

Silly me. I didn't trust the .NET Framework to work as expected so I wrote a unit test to verify that when AdText is null, the default is printed.
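The test was essentially of this shape (the Ad type here is a stand-in, since the post doesn't show the full class; the assertion is the point):

using System.IO;
using NUnit.Framework;

[TestFixture]
public class AdWriterTests
{
    [Test]
    public void Write_falls_back_to_default_text_when_AdText_is_null()
    {
        var output = new StringWriter();
        var ad = new Ad { AdText = null };

        // Same null-coalescing expression as the production code.
        output.Write(ad.AdText ?? "Click link for more info....");

        Assert.AreEqual("Click link for more info....", output.ToString());
    }

    private class Ad
    {
        public string AdText { get; set; }
    }
}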

The bad news is I wrote the test. The good news is it disproved my immediate, invalid assumption of where to start chasing the bug down. Months ago, the process would have been to build and deploy the application, then run it with an email subscriber and ads that do and don't have AdText values. The same conclusion would have been reached, but with the unit test my silliness was pointed out to me far more quickly.

Tuesday, March 2, 2010

Unit Testing V. The Debugger

During a recent presentation I delivered, it finally hit me that the craft of debugging is really a form of manual unit testing. Yes, this may be a "duh!", but it was an "ah hah!" moment for me.

Think about it.

You compile your application and start it in the debugger. You then click, select, submit, type and probably click your way to the part of the code you want to make visual assertions on. You'll use watch windows and breakpoints so you can inspect the outcome. You then repeat this until the code is doing what you expect.

This is synonymous with:

1) instantiate the test subject (probably a class).
2) establish its initial state.
3) call a method on it.
4) use Assert()ions to determine if the outcome is the expected one.
5) repeat
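Expressed as an automated test, those same steps look something like this generic NUnit sketch (the ShoppingCart type is made up purely for illustration):

using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;

[TestFixture]
public class ShoppingCartTests
{
    [Test]
    public void Total_reflects_added_items()
    {
        // 1) instantiate the test subject
        var cart = new ShoppingCart();

        // 2) establish its initial state
        cart.AddItem("Widget", 9.99m);
        cart.AddItem("Gadget", 5.01m);

        // 3) call a method on it
        decimal total = cart.CalculateTotal();

        // 4) assert the outcome is the expected one
        Assert.AreEqual(15.00m, total);

        // 5) "repeat" is free: the runner re-executes this on every build.
    }
}

// Made-up subject under test, included only so the sketch is self-contained.
public class ShoppingCart
{
    private readonly List<decimal> _prices = new List<decimal>();

    public void AddItem(string name, decimal price)
    {
        _prices.Add(price);
    }

    public decimal CalculateTotal()
    {
        return _prices.Sum();
    }
}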

Interestingly enough, it takes FAR longer to compile (integrate), run the debugger (instantiate and set up initial state), then make the Assert()ions (watch window, breakpoints and immediate window), and then repeat the process than it does to run an automated test. Also, you are human and therefore subject to all sorts of distractions that minimize the probability of executing the same code path every time. This results in poor testing and invariably less-than-satisfactory code quality. Add five scenarios you need to test for, and the chance of screwing up is greatly exacerbated.

Unit tests, on the other hand, are repeatable, not subject to human distraction, do not require full integration of the application and can be shared with your team.

You'll find, if you're new to Scrum, that unit testing will save you a ton of time and give you a much better chance of burning down to zero instead of burning out.

If you're interested, let me know and I'll be happy to share my recent Unit Testing 101 slide deck with you, along with sample code (.NET) that demonstrates the basics as well as advanced concepts using a great addition to the mocking world known as "Moq" (Mock You).

Learning from our elders

Over the past five years, I've come to the conclusion that something is missing from the first few years of training that aspiring developers embrace when they feel the pull toward software development as a career.

Scrum and XP practices have been around for quite some time now; ten years or more, to be more precise than "quite some time now".

I believe it's time for those of us who are more or less .NETters to take a break, reflect, and start looking to our SUNners (or Oraclers) across the aisle. Almost every tool we use in the .NET world to assess code quality got its start with a "j" in its name, or at least with the absence of an "n". This tells me either that there are more mature, experienced developers who are writing or have written applications in Java and from whom I can learn, or, alternatively, that we in the Microsoft camp write flawless code and therefore do not need to lean on quality tools.

I can tell you that I personally don't write flawless code. I lean heavily on quality tools to tell me when I'm an idiot. The beauty of those tools is they typically report my idiocy only to me and not to the people who assign my bonus. Of course, when I'm a real idiot and don't ask my quality tools if I'm being one, they will also quickly report to everyone that Tim had a poor thought moment.

Many of us pour libations out to Uncle Bob and Steve McConnell, while others are clueless about who these guys are and why we should care what they say.

Here's an interesting factoid from McConnell's Code Complete 2: he states that "a study conducted by IBM indicated that for every hour spent doing code inspections, 33 hours of additional work was saved".

These guys have been around the block a few times. We as developers should understand that their experiences can become our experiences if we only take a moment and listen (or read).