I won't reinvent the wheel in this post: your friend and mine, Rajesh Pillai, wrote up an awesome step-by-step approach to taking an ASP.NET WebForms project and migrating it to ASP.NET MVC one workflow at a time.
His post can be found here. If you also use Microsoft's Ajax Control Toolkit, there are a couple of things to point out. First of all, watch the order in which your handlers and routes are defined. One side-effect I encountered was the 'case of the missing Sys', where the MS Ajax JavaScript, which is served up through the .axd route, stopped loading and the client-side Sys namespace was never defined.
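If you hit the same symptom, the stock ASP.NET MVC project template guards against exactly this by telling routing to ignore .axd requests before mapping anything else. A minimal sketch of a RegisterRoutes that does so (the route name and defaults shown are the template's stock values):

```csharp
using System.Web.Mvc;
using System.Web.Routing;

public class MvcApplication : System.Web.HttpApplication
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        // Let WebResource.axd / ScriptResource.axd requests reach their
        // handlers instead of being swallowed by MVC routing; without this,
        // the MS Ajax scripts never load and Sys is undefined on the client.
        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

        routes.MapRoute(
            "Default",                                    // route name
            "{controller}/{action}/{id}",                 // URL pattern
            new { controller = "Home", action = "Index", id = UrlParameter.Optional });
    }

    protected void Application_Start()
    {
        RegisterRoutes(RouteTable.Routes);
    }
}
```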
Second, if your WebForms site is also Spring.NET-enabled, you'll want to either define your own MVC controller factory and register it in Global.asax, or use Spring.NET's MVC application support. In our case, we chose the former approach, primarily to minimize impact and have more control over the factory.
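As a rough sketch of the custom-factory approach (the factory class is hypothetical; the Spring.NET calls are the standard ContextRegistry API), the factory pulls controllers from the Spring context by name and falls back to MVC's default behavior otherwise:

```csharp
using System;
using System.Web.Mvc;
using System.Web.Routing;
using Spring.Context;
using Spring.Context.Support;

// Hypothetical factory: resolves controllers registered in the Spring
// context by type name, falling back to MVC's default activation.
public class SpringControllerFactory : DefaultControllerFactory
{
    protected override IController GetControllerInstance(
        RequestContext requestContext, Type controllerType)
    {
        if (controllerType != null)
        {
            IApplicationContext context = ContextRegistry.GetContext();
            if (context.ContainsObject(controllerType.Name))
            {
                return (IController)context.GetObject(controllerType.Name);
            }
        }
        return base.GetControllerInstance(requestContext, controllerType);
    }
}

// Registered once in Global.asax:
//   protected void Application_Start()
//   {
//       ControllerBuilder.Current.SetControllerFactory(new SpringControllerFactory());
//   }
```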
Finally, don't throw away your existing site and try to do the changeover big bang. Instead, carefully choose the workflows in the site you want to convert and convert them one at a time. Your challenge, if you are unfamiliar with Ajax, is to take those UpdatePanels out and replace them with JavaScript that uses an Ajax library (in our case jQuery) to provide the same behaviors.
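The shape of that replacement is usually the same: a controller action that returns JSON, plus a small jQuery call where the UpdatePanel used to be. A hypothetical example (the controller, action, and element IDs are invented for illustration):

```csharp
using System.Web.Mvc;

public class CustomerController : Controller
{
    // Replaces a partial postback: returns just the data the panel needed.
    public JsonResult Details(int id)
    {
        var customer = new { Id = id, Name = "Sample Customer" }; // stand-in data
        return Json(customer, JsonRequestBehavior.AllowGet);
    }
}

// And on the page, roughly this jQuery in place of the UpdatePanel:
//   $.getJSON('/Customer/Details/' + id, function (customer) {
//       $('#customerName').text(customer.Name);
//   });
```

The behavior stays the same for the user; the partial-postback plumbing just becomes an explicit, testable endpoint.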
A couple of additional SNAFUs I ran into while going through this exercise:
1) Forgetting to add the web.config to the Views folder. If you create a VS2008 ASP.NET MVC 2 project, you'll notice a web.config is created in the Views folder. Among other things, this web.config registers the page parser filter and the view base types that allow you to use the Html helper extensions when writing your markup. Without it, your .aspx views won't have the right base class, and the helpers simply won't resolve.
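For reference, the relevant portion of that Views web.config looks roughly like this in an MVC 2 project (abridged; the versions and key tokens shown are the stock MVC 2 values):

```xml
<configuration>
  <system.web>
    <pages validateRequest="false"
           pageParserFilterType="System.Web.Mvc.ViewTypeParserFilter, System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"
           pageBaseType="System.Web.Mvc.ViewPage, System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"
           userControlBaseType="System.Web.Mvc.ViewUserControl, System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
  </system.web>
</configuration>
```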
2) Controllers marked as Content and not as Compile
This threw me for a loop for a good hour. I lost all IntelliSense and, like most of the other Microsoft folks out there, felt quite naked and vulnerable. Check your Properties window and make sure the controller class file's Build Action is set to 'Compile'. It's that simple.
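In the .csproj itself, the difference is a one-word change (the file name here is illustrative):

```xml
<ItemGroup>
  <!-- Wrong: the controller is treated as a static file and never compiled. -->
  <Content Include="Controllers\HomeController.cs" />

  <!-- Right: the controller participates in the build. -->
  <Compile Include="Controllers\HomeController.cs" />
</ItemGroup>
```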
I'll be posting more as we transition the site over to the ASP.NET MVC 2 framework.
Wednesday, September 14, 2011
Why Technical Debt Sucks
And I do mean sucks. Technical Debt sucks at our energy, motivation and velocity.
Technical Debt is a term that encompasses defects, shortcuts, workarounds and other contributors to sub-par software quality. Shortcuts here mean intentionally writing code that solves a problem quickly; the immediate problem gets fixed, but more often than not the shortcut becomes a primary source of side-effects later.
Defects are something with which we are all familiar. Carrying defects across sprints kills velocity. I had to announce to our stakeholders today that, while our team has averaged 30 story points over the last six sprints, we were only able to complete 14 story points this past sprint because we had eight defects to fix, some of which had a lifespan of a year. Nothing should be more embarrassing to a team than having to explain this.
Short answer: run every sprint bug-free.
Labels: Agile Software Development, Sprints, Technical Debt
Tuesday, July 26, 2011
Planning a Sprint - Reference Stories
If you've sat in a sprint planning meeting before and played games like Planning Poker to agree on story size, you're most likely very familiar with the problem of some team members perceiving stories as larger or smaller than other members do. Often this stems from a misunderstanding of story scope (the Acceptance Criteria), or from effort being perceived as higher in certain areas (for example, a story may require a full regression pass if it has considerable impact).
One of the methods your team can use to break the deadlock is identifying reference stories and comparing against them. A reference story is usually associated with a size on the Fibonacci sequence (if you're using it for story points), and your Scrum Master can facilitate the discussion by having the team agree that a story is either larger or smaller than a specific reference story. Remember, though, sizing is really a gut feeling and is subject to change.
The best time to decide on reference stories is during the Retrospective. That way, the stories are still fresh in the team's minds. Review the stories done (DONE, not almost done) during the sprint and have the team agree whether or not the size made sense. Make sure you at least get a Small, a Medium, a Large and possibly a Larger-than-Large (but not an Epic). This is one of the best ways to help a team decide on story size during upcoming sprints.
Remember, too, that you'll need to revisit your reference stories frequently, especially if your team dynamics change.
Labels: Reference Stories, Sprint Planning, User Stories
Wednesday, June 29, 2011
Agile Projects - Why you may not need those Cool Tools
We are continually inundated with new tools. Tools that will write code for us (CodeSmith, T4, etc.), tools that tell us when our code has a smell (ReSharper) and tools that help us keep lists of things to do (VersionOne, Team Foundation Server and others). I'll be the first to admit that I love tools. Or rather, I should say that I love trying out tools.
Today, I found myself telling a team member who wanted to track stories and tasks in TFS that, above all else, we need to remember the purpose of a tool. A tool is intended to make a job easier, not harder. By easier, I mean that a tool should help us take less time to accomplish the same goal, or its overhead should at least be repaid by the additional value it brings to the table.
For the first time in many years, I have completely and unequivocally embraced the art of SIMPLE, namely using just a spreadsheet to manage the Product Backlog and a template to print cards for the Sprint Backlog. The feedback I got today was that it was very hard for this team member to know where they stood with stories and tasks. They felt that having the tasks in TFS as work items would help them better understand what they were doing. What concerns me about this statement is that the task board is less than ten feet from where we sit, all in the same room.
I'd like to believe the confusion doesn't come from the use of index cards but from a lack of collaboration or something else. How can managing a list in an application make it easier to know what someone has committed to and what they need to accomplish in nine business days?
I believe that tools serve a valued purpose, and tools like TFS are extremely valuable, especially for geographically distributed teams. Teams that sit in the same room at the same table, however, don't need the additional overhead. Managing both the board and TFS work items doubles the administrative overhead for very little additional value.
Tim
Friday, December 17, 2010
Agile Software Development Part 1: It's All in the Application Framework
Application frameworks such as Spring.NET and Unity (there are others, but I have used these two with great success) will cost you a little time up front while you learn them; they will, however, save you a ton of time as you add more features to your application. For many of you this is nothing new and you're probably moving on to another blog, but for some it's a new concept. Why, after all, would we want to use a third-party application framework at all? It comes down to the old cliche of not reinventing the wheel.

For an application of any size, there are going to be dependencies. These dependencies show up in the form of models, views, controllers, supporting classes, presenters, widgets and so on. In a well-designed n-tier application, it is ideal to have these dependencies expressed as explicit interfaces, especially in components that cross boundaries. By this I mean defined interfaces: the contracts guaranteed to be supported by any class that implements them (as opposed to the implicit interface, which consists of all public methods, properties and events).
This is where the application framework comes in. Rather than having the class you just instantiated create instances of all of its dependencies, the framework can instead serve up your class, dependencies included, to the calling context. Fowler and others commonly refer to this as Dependency Injection. For most basic needs, Unity provides ample support for dependency injection. In many applications, however, the need will arise for things like custom factories, where families of classes implement the same interface and are used at different times based on context. If those needs arise, Spring.NET might be a better choice, as it has that capability built in, in the form of factory methods. You can achieve the same thing with Unity, but only by writing extensions (at this time I'm referring to v1, not v2).
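To make dependency injection concrete, here is a minimal sketch using Unity-style registration; the interface and class names are invented for illustration:

```csharp
using Microsoft.Practices.Unity;

// The explicit contract: callers depend on this, not on a concrete class.
public interface IOrderRepository
{
    string FindCustomer(int orderId);
}

public class SqlOrderRepository : IOrderRepository
{
    public string FindCustomer(int orderId) { return "Sample Customer"; }
}

// The service never news up its own dependency; the container injects it.
public class OrderService
{
    private readonly IOrderRepository _repository;

    public OrderService(IOrderRepository repository)
    {
        _repository = repository;
    }

    public string DescribeOrder(int orderId)
    {
        return "Order " + orderId + " for " + _repository.FindCustomer(orderId);
    }
}

public static class Program
{
    public static void Main()
    {
        var container = new UnityContainer();
        container.RegisterType<IOrderRepository, SqlOrderRepository>();

        // Unity sees the constructor, resolves IOrderRepository, wires it in.
        var service = container.Resolve<OrderService>();
        System.Console.WriteLine(service.DescribeOrder(42));
    }
}
```

The point is the constructor: OrderService declares what it needs and the container supplies it, which is exactly what lets a factory (or a Spring.NET object definition) swap implementations by context.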
Likely, you'll also need support for things like logging and caching. Again, both Unity and Spring can provide these either through dependency injection or through their AOP capabilities.
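For the AOP route, here is a sketch of a Spring.NET-style logging interceptor, assuming the AopAlliance interfaces Spring.NET builds on (the proxied repository in the usage comment is the hypothetical one from the previous example):

```csharp
using AopAlliance.Intercept;
using Spring.Aop.Framework;

// A cross-cutting logging aspect: wraps every call on the proxied object,
// so the business classes stay free of logging code.
public class LoggingInterceptor : IMethodInterceptor
{
    public object Invoke(IMethodInvocation invocation)
    {
        System.Console.WriteLine("Entering " + invocation.Method.Name);
        object result = invocation.Proceed(); // invoke the real method
        System.Console.WriteLine("Leaving " + invocation.Method.Name);
        return result;
    }
}

// Hypothetical usage, proxying the repository from the earlier sketch:
//   var factory = new ProxyFactory(new SqlOrderRepository());
//   factory.AddAdvice(new LoggingInterceptor());
//   var repository = (IOrderRepository)factory.GetProxy();
```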
If you are a steadfast Microsoft fan and use other capabilities like Enterprise Library, Unity will suit most of your needs. If, however, you are open to other frameworks, I highly recommend taking a peek at Spring.NET. Spring's latest release adds support for ASP.NET MVC 2 as well as Windows Communication Foundation. Its earlier support for ASMX services made it possible to create dynamic service proxies as well as the services themselves. When consuming services, there is often a need to transform messages from the service into a model for the client to consume; the ASMX support was so well done that entities could be shared between service and client without any message transformation on the client side. Additionally, Spring.NET has native support for NHibernate.
Before I end up sounding like I'm doing a comparison of the two, I'll stop. The main takeaway is to make it a Sprint Zero effort to survey the application frameworks out there, determine what your needs are, then run a formal selection process to find the best fit. By all means, though, shorten your project life-cycle and use an application framework.
Tuesday, December 14, 2010
Scrumgineering defined
Scrumgineering, simply put, is:
"Practicing the craft of Software Engineering in a Scrum team".
The rabbit hole goes much deeper, though, when we start talking about how to deliver quality software. It takes a firm understanding of design principles, of the solution domain and especially of the practices that ensure the delivery of quality software. These practices include, but are not limited to: Unit Testing/TDD, Continuous Integration and Deployment Automation.
Unit Testing/TDD
In a world where development iterations are short, QA will most likely find themselves receiving completed stories close to the end of the sprint; on the past two agile teams I've been a member of, this has certainly been the case. Since the team's collective goal needs to be carrying little or no technical debt into subsequent sprints, the team's developers must employ unit testing to minimize the footprint the QA team member(s) must test. Covering all the non-visual aspects of the source code with unit tests ensures that QA can focus mainly on black-box testing the features. In a recent project, we even included a small test harness that let our developers execute queries against views and verify they met the global business requirements for queries (thresholds on logical reads and on total execution time). Test-Driven Development can be used strategically to drive out a design when the problem domain isn't completely known or clear. I still recommend a "Sprint Zero" to define the high-level architecture.
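A rough sketch of what the execution-time half of such a harness can look like as an NUnit test; the view name, connection string and threshold are all invented for illustration:

```csharp
using System.Data.SqlClient;
using System.Diagnostics;
using NUnit.Framework;

[TestFixture]
public class ViewQueryThresholdTests
{
    // Both values are illustrative assumptions, not our real settings.
    private const string ConnectionString =
        "Server=(local);Database=AppDb;Integrated Security=SSPI";
    private const long MaxExecutionMilliseconds = 500;

    [Test]
    public void ActiveOrdersView_CompletesWithinThreshold()
    {
        var stopwatch = Stopwatch.StartNew();

        using (var connection = new SqlConnection(ConnectionString))
        using (var command = new SqlCommand("SELECT * FROM dbo.vwActiveOrders", connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read()) { } // drain the full result set
            }
        }

        stopwatch.Stop();
        Assert.LessOrEqual(stopwatch.ElapsedMilliseconds, MaxExecutionMilliseconds,
            "vwActiveOrders exceeded the agreed execution-time threshold.");
    }
}
```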
Continuous Integration
Taking advantage of build servers is equally important, for two reasons. First, a build server provides a sterile environment in which to execute the unit tests, and helps us avoid the "but it works on my machine" mantra. Second, it allows us to integrate the various components and run tests that are similar to unit tests but broaden the spectrum to encompass more than one class or component. This second set of tests helps drive out problems with configuration and especially with deployments.
Deployment Automation
Deployment Automation is just that: automating the configuration and installation of our application. Most applications these days are web-based, meaning there is typically no media to be circulated to clients; instead, the application is installed and configured on a web server or server farm. ASP.NET applications are relatively easy to deploy, in most cases with a simple XCOPY. The challenge comes when moving from environment to environment (e.g., Staging to Production). Configuration often differs greatly, and managing it manually introduces major risk. Using the Continuous Integration servers mentioned earlier, it is relatively easy to set up an application's configuration for a specific target environment. Tools like MSBuild and NAnt provide an ample set of tasks that allow configuration settings to be changed on the fly, and add-on libraries such as the SDC Tasks add even more functionality to the build engine (for MSBuild, that is).
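As a small example of the idea using only MSBuild's built-in Copy task (the file layout, property names and target name are all hypothetical), one config file per environment is selected at deploy time:

```xml
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <!-- Default to Staging unless the caller overrides it. -->
    <TargetEnvironment Condition="'$(TargetEnvironment)' == ''">Staging</TargetEnvironment>
    <DeployDir Condition="'$(DeployDir)' == ''">C:\Deploy</DeployDir>
  </PropertyGroup>

  <Target Name="ApplyEnvironmentConfig">
    <!-- Web.Staging.config or Web.Production.config becomes the live Web.config. -->
    <Copy SourceFiles="Config\Web.$(TargetEnvironment).config"
          DestinationFiles="$(DeployDir)\Web.config" />
  </Target>
</Project>
```

Invoked from the build server as, say, `msbuild deploy.proj /t:ApplyEnvironmentConfig /p:TargetEnvironment=Production`, so no one ever edits a config file by hand on a production box.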
These practices coupled with Agile software development techniques (more to follow on this topic) will greatly improve overall software quality.
"Practicing the craft of Software Engineering in a Scrum team".
The rabbit hole goes much deeper though when we start talking about how to deliver quality software. It takes a firm understanding of design principles, solution domain and especially various practices that ensure the delivery of quality software. These various practices include but are not limited to: Unit Testing/TDD, Continuous Integration and Deployment Automation.
Unit Testing/TDD
In a world where the development iterations are short, QA will most likely find themselves receiving completed stories closer to the end of the sprint. On the past 2 agile teams I've been a team member of, this has certainly been the case. Since the collective goal of the team needs to be to carry little or no technical debt over to subsequent sprints, unit testing must be employed by the team's developers to minimize the footprint over which the QA team member(s) must test. Testing all the non-visual aspects of the source code with unit tests ensures that QA can focus mainly on blackbox testing the features. In a recent project, we even included a small test harness to allow our developers to execute queries against views and verify they met the global business requirements for queries (threshold on logical reads and on total execution time). Test Driven Development can be used strategically to drive out a design when the problem domain isn't completely known/clear. I still recommend a "Sprint Zero" to define high level architecture.
Continuous Integration
Taking advantage of build servers is equally important for 2 reasons. First, it provides a sterile environment in which to execute the unit tests and helps us avoid the "But it works on my machine" mantra. Second, it allows us to integrate the various components and run tests that are similar to unit tests but broaden the spectrum to encompass more than one class/component. This second set of tests helps drive out problems with configuration and especially with deployments.
Deployment Automation
Deployment Automation is just that. It means that we automate the configuration and installation of our application. Most applications these days are web-based meaning there is typically no media to be circulated for clients. Instead, the application is installed and configured on a web server or server farm. ASP.NET applications make it relatively easy to deploy and in most cases can be done using XCOPY. The challenge comes when moving from environment to environment (eg., Staging to Production). Often, configuration differs greatly and managing configuration manually introduces major risk. Taking advantage of the Continuous Integration servers mentioned earlier, it is relatively easy to set up an application's configuration and target a specific environment. Most tools like MSBUILD and NANT provide an ample set of tasks that allow configuration settings to be changed on the fly. Add-on tasks such as SDCTasks add even more functionality to the build engine (for MSBuild that is).
These practices coupled with Agile software development techniques (more to follow on this topic) will greatly improve overall software quality.
Friday, December 10, 2010
Unit Testing is not Important, It's Critical
I recall a paragraph in Kent Beck's book Test Driven Development: By Example in which he points out that developers end up writing almost twice as much code as they would have written had they not used TDD to drive out the design. This is most likely among the top five reasons why many teams and companies don't adopt TDD as a standard practice (aside from the paradigm shift TDD entails), although what isn't obvious is that the first releasable version of the code will most likely consist of far fewer lines of source code.
There is a similar albatross hanging around the neck of teams who want to shore up QA efforts by writing unit tests. In many eyes outside the immediate team, it is simply perceived as writing more code than is necessary to solve the problem. This mindset could not be further from the truth. As more and more companies adopt iterative approaches to software development, having developers write tests that exercise their code during the iterations becomes mission critical. Writing code iteratively requires that top-down or bottom-up approaches to development be turned on their side: rather than fully completing, say, the data access layer, iterative approaches require that UI, business and data access all be written for a small vertical sliver of functionality. This ultimately means that all layers are affected with each new feature, and the baseline architecture may equally be affected with each new iteration.
The end result is source code that is constantly in flux as it is refactored to add new functionality. To battle this flux, we turn to unit tests. If you are new to unit testing, think of a unit test as non-visual code that exercises the non-visual portions of the application. Ultimately, this means that QA resources (which tend to be scarce) need to focus only on exercising the application in the guise of the end user. With unit tests shoring up their efforts, QA can do black-box testing and ensure that the visual aspects of the system behave correctly. Additionally, developers can be fairly certain that when a defect is found, it is contained within a much smaller boundary (typically infrastructure for complex systems, or just the integration between the visual layer and the business layer).
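If "non-visual code that exercises the non-visual portions" sounds abstract, here is about the smallest useful example, in NUnit (the class under test is invented for illustration):

```csharp
using NUnit.Framework;

// A non-visual business rule: exactly the kind of code unit tests pin down.
public class DiscountCalculator
{
    public decimal Apply(decimal subtotal)
    {
        // Business rule: 10% off orders of $100 or more.
        return subtotal >= 100m ? subtotal * 0.9m : subtotal;
    }
}

[TestFixture]
public class DiscountCalculatorTests
{
    [Test]
    public void Apply_DiscountsLargeOrders()
    {
        var calculator = new DiscountCalculator();
        Assert.AreEqual(90m, calculator.Apply(100m));
    }

    [Test]
    public void Apply_LeavesSmallOrdersAlone()
    {
        var calculator = new DiscountCalculator();
        Assert.AreEqual(99m, calculator.Apply(99m));
    }
}
```

When a regression surfaces here, it is pinned to one rule in one class, which is exactly the smaller boundary described above.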
Iterative development has no room for monumental efforts at the end of a release, where thousands of bugs accumulated over the iterations are finally resolved. Defects need to be resolved as soon as they are discovered, and the goal needs to be discovering and resolving them DURING the iteration, not in some future iteration whose central focus is finding them.
If there remains a doubt in your mind, contact me and I will make myself available online for a demonstration of just how effective unit testing can be in managing defects.