
Thursday, September 6, 2012

Litmus Tests for Using a Service Stub over a Mocked Object

During the RDNUG presentation tonight we talked about using a Service Stub or a Mock.  To provide a little more clarification on WHEN to use a Service Stub instead of a Mock, I thought I'd share a little more here.

Service Stubs are essential when you have the following scenarios:
1) You need to interface with another system that isn't guaranteed to be available 100% of the time, or that can't be called at all from your environment (perhaps it exists only in production).
2) The interface isn't ready yet.  Stubs are a great way to keep the project and delivery on schedule when you have external dependencies (e.g., services provided by a third party).
3) Other sub-systems in your system need a pre-fabricated, reliable implementation to code against, which is largely a combination of 1 and 2 (see the sketch below).
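By way of illustration, a hand-rolled service stub can be as simple as the following. The `IPaymentGateway` interface and its members are hypothetical, not something from the presentation:

```csharp
// Hypothetical external dependency we can't always reach from test environments.
public interface IPaymentGateway
{
    bool Authorize(decimal amount, string accountNumber);
}

// A hand-rolled stub: a pre-fab, reliable implementation the rest of the
// system (and the team) can code against while the real service is
// unavailable or still under construction.
public class PaymentGatewayStub : IPaymentGateway
{
    public bool Authorize(decimal amount, string accountNumber)
    {
        // Canned, predictable behavior: decline anything over the test limit.
        return amount <= 500m;
    }
}
```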

Mocks are essential when you need a very fast implementation of an interface that reliably behaves in an intended way, so that you can isolate and test a class or component.  As exemplified tonight, Mocks will consistently perform faster in unit tests.  Consider 100 unit tests in a Continuous Integration scenario where each call averages 2 seconds.  200 seconds doesn't sound like a lot of time, but think about the value-add of CI: a fast feedback loop.  When you have change sets coming in quickly (one or two small changes per change set), you want those tests returning quickly.  As shown tonight, consistent sub-second test responses can be achieved using Mocks.
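For flavor, here's roughly what isolating a class with a mock looks like. Moq and NUnit are used here only as stand-ins for whatever frameworks you prefer, and `OrderService` is a made-up class under test reusing the hypothetical `IPaymentGateway` from above:

```csharp
using Moq;
using NUnit.Framework;

// Made-up class under test; it depends on the gateway only through the interface.
public class OrderService
{
    private readonly IPaymentGateway _gateway;

    public OrderService(IPaymentGateway gateway)
    {
        _gateway = gateway;
    }

    public bool Checkout(decimal amount, string accountNumber)
    {
        return _gateway.Authorize(amount, accountNumber);
    }
}

[TestFixture]
public class OrderServiceTests
{
    [Test]
    public void Checkout_Fails_WhenAuthorizationIsDeclined()
    {
        // The mock stands in for the slow or unavailable gateway, so the test
        // runs in milliseconds and exercises only OrderService.
        var gateway = new Mock<IPaymentGateway>();
        gateway.Setup(g => g.Authorize(It.IsAny<decimal>(), It.IsAny<string>()))
               .Returns(false);

        var service = new OrderService(gateway.Object);

        Assert.IsFalse(service.Checkout(1000m, "12345"));

        // Unlike a stub, the mock also lets us verify the interaction took place.
        gateway.Verify(g => g.Authorize(1000m, "12345"), Times.Once());
    }
}
```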

To summarize, use stubs when you need to simulate an external system, and use mocks when you need isolation in order to test the behavior of a dependent class.

I'll point to one of the all-time greats, a well-spoken computer scientist who has helped me learn more about my chosen passion.  See Fowler's post entitled "Mocks Aren't Stubs": http://martinfowler.com/articles/mocksArentStubs.html

Extending ASP.NET MVC with Dependency Injection Containers

Thanks to the Richmond.NET Users group for so graciously welcoming me tonight. 

You can download the source code for the discussion tonight here:

https://dl.dropbox.com/u/67850614/MVC4DIExtensibility.zip

The solution uses NuGet packages, so you shouldn't find any broken references anywhere.  I'd really like feedback from those who attended tonight.  I'm thinking about honing the presentation into a much deeper talk specifically about using Spring.NET and targeting only MVC 4.  My thought is to build out more detailed examples of things like exporting POCOs using Spring.NET's WebServices exporter and implementing REAL aspects like caching and logging.
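For anyone who missed the talk, the extensibility point most of the examples hang off of in MVC 3/4 is System.Web.Mvc.IDependencyResolver. Here's a minimal, container-agnostic sketch; `IMyContainer` is a placeholder for whatever your container of choice (Spring.NET, StructureMap, etc.) exposes, not a real Spring.NET type:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web.Mvc;

// Placeholder container abstraction for the sketch.
public interface IMyContainer
{
    object Resolve(Type serviceType);
    IEnumerable<object> ResolveAll(Type serviceType);
}

public class ContainerDependencyResolver : IDependencyResolver
{
    private readonly IMyContainer _container;

    public ContainerDependencyResolver(IMyContainer container)
    {
        _container = container;
    }

    // MVC calls this for controllers and framework services; returning null
    // tells MVC to fall back to its default behavior.
    public object GetService(Type serviceType)
    {
        try { return _container.Resolve(serviceType); }
        catch { return null; }
    }

    public IEnumerable<object> GetServices(Type serviceType)
    {
        return _container.ResolveAll(serviceType) ?? Enumerable.Empty<object>();
    }
}

// In Global.asax.cs, Application_Start:
// DependencyResolver.SetResolver(new ContainerDependencyResolver(container));
```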

Thoughts?

Wednesday, May 30, 2012

Web Testing with Microsoft Visual Studio 2010 Ultimate: Context Parameters

This represents the first of a number of blog posts I hope to write over the next few weeks.  As my colleagues and I continue to deep-dive into the web testing tools provided by Microsoft Visual Studio 2010 Ultimate, I hope to share what we learn with you.

Today, my colleagues and I discussed the concept of test organization strategy.  In short, this translates to building reusable "chunks" of tests.  A great example of a reusable test is "Logon to site".  There is absolutely no reason to duplicate this effort across all the tests you record.  Instead, record the logon, parameterize the username and password, and save it as a ".webtest" file.  The same approach applies to logging off a site.

The primary reason to do this is to promote reuse of webtests across your testing repository.  I'm not a test engineer so I have to give credit where credit is due.  My good friends and colleagues Guru Shyam Mony and Azmathulla Mohammed drilled this into my head and it simply makes sense from a test organization perspective.

Once we start going down this path, ultimately we'll want to create composite web tests.  Composite web tests are simply web tests that call other web tests.  Typically, a single composite web test will represent a unit of work to be tested, or, in test-engineer terms, a test scenario.

Once we start creating composite tests, there will almost always be situations where we want to pass context parameters from the composite test to its child test.  I was surprised to find tonight how easy this actually is.

Step 1:  Create a context parameter in the "child test" and give it a default value that will allow the "child test" to pass on its own.
Step 2:  In the composite test, create a context parameter with the same name as the one in the "child test", and choose your method of setting its value.
Step 3:  Run the composite test and add some form of validation that ensures the value is passed in from the composite test.

What you'll find is that the composite test will, in effect, override the "default" value of the context parameter in the web test it calls and supply its own value.
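You normally wire this up entirely in the Web Test Editor, but to make the override idea concrete, here is a rough coded-web-test approximation. The test classes, URL, and parameter name are invented, and this hand-rolled version skips everything the designer's "Insert Call to Web Test" feature handles for you (validation rules, data binding, and so on):

```csharp
using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.WebTesting;

// Hypothetical child test: declares an EntryDate context parameter with a
// default value so the test can pass when run on its own.
public class DataEntryWebTest : WebTest
{
    public DataEntryWebTest()
    {
        if (!this.Context.ContainsKey("EntryDate"))
        {
            this.Context.Add("EntryDate", "2012-05-01");
        }
    }

    public override IEnumerator<WebTestRequest> GetRequestEnumerator()
    {
        // {{EntryDate}} is resolved against the running test's context.
        yield return new WebTestRequest("http://myapp/entries?date={{EntryDate}}");
    }
}

// Hypothetical composite test: it supplies its own EntryDate, so that value
// wins over the child's default when the child's requests are replayed here.
public class CompositeWebTest : WebTest
{
    public override IEnumerator<WebTestRequest> GetRequestEnumerator()
    {
        this.Context["EntryDate"] = "2012-05-30";

        IEnumerator<WebTestRequest> childRequests = new DataEntryWebTest().GetRequestEnumerator();
        while (childRequests.MoveNext())
        {
            yield return childRequests.Current;
        }
    }
}
```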

A great example of where this type of behavior may be desirable is as follows:

1) User logs in
2) Data source drives a set of dates and each date is supplied to a context parameter in the composite test.
3) Composite test calls a data entry web test and needs the date as a value.
4) Composite test calls a "review entries" web test and needs the date as a value.

We can essentially control the composite test and its subordinates with a single data source and a context parameter.

Friday, March 9, 2012

Rethinking Web Services

A recent tweet from Dino Esposito mentioned agreement with a blog post here.  Web services as we have known them historically are not the be-all and end-all for every solution, and as technology folks we need to be thinking about applying the right-sized solution to the problems we are faced with.

In fact, in a recent integration effort we opted for what I'd refer to as 'UltraLite' web services built on top of ASP.NET MVC.  Our target audience for the integration is, after all, well versed in our payload format (JSON), and with support for mapping JSON onto view models we're still working in a known world in our controller actions.  Because controller actions are testable, we also get end-to-end testing with isolation, and our integration testers can use tools they are comfortable with to do integration testing (yes, they can use a web page, a web client, or whatever).

Add rich data annotations and model state validation on top of that, and you have basic validation support as well.
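To sketch what one of these 'UltraLite' endpoints might look like (the controller and view model names here are invented for illustration, not our actual integration contract):

```csharp
using System.ComponentModel.DataAnnotations;
using System.Web.Mvc;

// Hypothetical view model: data annotations give us basic validation for free.
public class PaymentViewModel
{
    [Required]
    public string AccountNumber { get; set; }

    [Range(0.01, 10000.00)]
    public decimal Amount { get; set; }
}

// An 'UltraLite' endpoint: MVC model binding maps the incoming JSON onto the
// view model (the JSON value provider is built in as of MVC 3), ModelState
// reports any annotation failures, and we hand JSON back to the caller.
public class PaymentsController : Controller
{
    [HttpPost]
    public ActionResult Create(PaymentViewModel model)
    {
        if (!ModelState.IsValid)
        {
            Response.StatusCode = 400;
            return Json(new { success = false });
        }

        // ... persist the payment ...
        return Json(new { success = true });
    }
}
```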

Tuesday, September 27, 2011

When Discipline Breaks Down

What do I mean by discipline?  That was rhetorical, so if you answered in your head, smack yourself on the hand and stand in the corner.

In the software world, discipline focuses on the patterns we follow in response to events or the breakdown thereof.  I try repeatedly and often unsuccessfully to drive home the idea that software development discipline is what allows us to efficiently and effectively add an increment of product.

Example: The Wrong Value

My QA folks inform me that they expect a certain value to persist, and it is always persisting as zero.  I have two options.  Option 1 is to run the application and attach a debugger.  Regardless of the number of tiers in the system, this takes considerable time.  Additionally, I have to have just the right input to test with, and it has to fulfill the requirements of the ENTIRE interface, not just the portion I care about.

As an alternative, I could select a tier and write a unit test pinpointing the portion of code that is potentially exhibiting the behavior.  If the test passes, I can move on to the next tier.  If the test fails, I can change the code to get the test to pass, then commit.  Problem solved.

Today, we had a scenario where our public facing schema mapped to our canonical schema (our internal representation of our "thing") and it was reported that a certain value was not being persisted.  Since we had recently created the public schema, my partner insisted that the mapping between the two was wrong.  I immediately wrote a test that definitively proved the mapping was good.  It allowed us to pinpoint the area of the application where the behavior broke down.
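The shape of that mapping test is roughly the following; the schema types and mapper here are made-up stand-ins for our internal classes, so treat it as a pattern rather than the actual test:

```csharp
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class SchemaMappingTests
{
    [TestMethod]
    public void ToCanonical_CarriesEffectiveDate_FromPublicSchema()
    {
        // PublicOrder, CanonicalOrder and OrderMapper are hypothetical stand-ins
        // for the public schema, the canonical schema and the mapping code.
        var publicOrder = new PublicOrder { EffectiveDate = new DateTime(2011, 9, 27) };

        CanonicalOrder canonical = OrderMapper.ToCanonical(publicOrder);

        // If this passes, the mapping is good and the defect lives in another tier.
        Assert.AreEqual(publicOrder.EffectiveDate, canonical.EffectiveDate);
    }
}
```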

McConnell and Beck both get kudos for instilling a personal discipline in how I develop: the immediate feedback loop.  In this case, writing a unit test to expose the problem is an example of an immediate feedback loop.  The test written where the problem does not occur provides a regression test that can then be played back by the continuous integration server as a check.  The test that failed and then succeeded (once I fixed the problem area) stands as another regression check.



One of my biggest frustrations as an engineer is explaining to non-technical folks that I have to spend time writing tests to test my own code.  The ones that get it get my immediate respect.  They understand how much more efficient it is for me to write a test (that takes about 5 minutes to write) than to defer the immediate feedback loop to a feedback loop that incorporates multiple people and potentially some form of bug tracking tool.  They also understand that a bug found by me has less cost associated with it than a bug found during a QA test. 

My next challenge is to instill this discipline in my team.  I have repeatedly demonstrated to them the value of writing a test.  I'll likely do this again over the next 3-5 sprints.  In this case, repetition is probably the best teacher.

I have said this many times, but recently it has become my greatest challenge: it is far easier to build great software than it is to build great people.  Great people, however, are the best long-term investment any company can set a budget for.


Wednesday, September 14, 2011

ASP.NET Webforms to ASP.NET MVC - Approach

I won't reinvent the wheel in this post, as your friend and mine, Rajesh Pillai, wrote up an awesome step-by-step approach to taking an ASP.NET Webforms project and migrating it to ASP.NET MVC one workflow at a time.

His post can be found here.  If you also use Microsoft's Ajax Toolkit, there are a couple of things to point out.  First of all, make sure you watch the order in which your handlers are defined.  One side-effect I encountered was the 'case of the missing Sys', in which the MS Ajax JavaScript was being served up through the .axd route.

Second, if your Webforms site is also Spring.NET-enabled, you'll want to either define your own MVC controller factory and register it in Global.asax, or use Spring.NET's MVC application class.  In our case, we chose the former approach, primarily to minimize impact and have more control over the factory.
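For reference, a bare-bones version of that custom controller factory might look something like this. The object-naming convention (controllers registered under their type name) is just an assumption for the sketch; your Spring.NET object definitions will dictate the lookup:

```csharp
using System;
using System.Web.Mvc;
using System.Web.Routing;
using Spring.Context;
using Spring.Context.Support;

// Sketch of a controller factory that pulls controllers out of the Spring.NET
// application context, falling back to MVC's default behavior when needed.
public class SpringControllerFactory : DefaultControllerFactory
{
    protected override IController GetControllerInstance(RequestContext requestContext, Type controllerType)
    {
        if (controllerType == null)
        {
            return base.GetControllerInstance(requestContext, controllerType);
        }

        IApplicationContext context = ContextRegistry.GetContext();

        // Assumes controllers are defined under their type name in the Spring config.
        if (context.ContainsObject(controllerType.Name))
        {
            return (IController)context.GetObject(controllerType.Name);
        }

        return base.GetControllerInstance(requestContext, controllerType);
    }
}

// In Global.asax.cs, Application_Start:
// ControllerBuilder.Current.SetControllerFactory(new SpringControllerFactory());
```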

Finally, don't throw away your existing site and try to do the changeover as a Big Bang.  Instead, carefully choose the workflows in the site you want to convert and convert them one at a time.  Your challenge, if you are unfamiliar with Ajax, is to take those UpdatePanels out and replace them with JavaScript that uses an Ajax library (in our case, jQuery) to provide the same behaviors.

A couple of additional SNAFUs I ran into while going through this exercise:
1) Forgetting to add the web.config to the Views folder.  If you create a VS2008 ASP.NET MVC 2 project, you'll notice a web.config is created in the Views folder.  Among other things, this web.config registers essential page filters and parsers that allow the Html helper extensions to be used when writing up the markup.  Without these, your .aspx pages will not have the correct state with which to make use of them.

2) Controllers marked as Content and not as Compile
This threw me for a loop for a good hour.  I lost all IntelliSense and, like most of the other Microsoft folks out there, felt quite naked and vulnerable.  Check your Properties window and make sure the controller class's build action is set to 'Compile'.  It's that simple.

I'll be posting more as we start to transition the site over to the MVC 2 framework.

Why Technical Debt Sucks

And I do mean sucks.  Technical Debt sucks at our energy, motivation and velocity. 

Technical Debt is a term that encompasses defects, shortcuts, workarounds, and other contributors to sub-par software quality.  Shortcuts in this case refer to intentionally writing code that quickly solves a problem.  While the immediate problem gets fixed, on average a shortcut also becomes a primary contributor to side-effects.

Defects are something with which we are all familiar.  Carrying defects across sprints kills velocity.  I had to announce to our stakeholders today that, while our team has averaged 30 story points over the last 6 sprints, we were only able to complete 14 story points this past sprint because we had 8 defects to fix, some of which had a lifespan of a year.  There is nothing that should be more embarrassing to a team than having to explain this.

Short answer: run each sprint bug-free.