The trouble with using strongly-typed datasets

Apparently, if your database-driven website is under heavy concurrent user load, the Adapter.Fill method in the .NET Framework (called by the code Visual Studio generates from the XSD) begins to fail because it doesn’t close connections properly.

The next time I need a data access layer for anything of substance, strongly-typed datasets are off the list.
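For what it’s worth, the direction I’d lean instead is plain ADO.NET with explicit disposal, so the connection gets closed even if Fill throws. A rough sketch (the query, table name, and connection string here are just placeholders):

using System.Data;
using System.Data.SqlClient;

public static DataTable GetCustomers(string connectionString)
{
    // Disposing the connection and adapter explicitly means the connection
    // can't be left open under load, even when Fill throws.
    using (SqlConnection connection = new SqlConnection(connectionString))
    using (SqlDataAdapter adapter = new SqlDataAdapter("SELECT * FROM Customers", connection))
    {
        DataTable table = new DataTable();
        adapter.Fill(table);
        return table;
    }
}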

Are Exceptions Always Errors?

It would be easy enough to assume so, but surprisingly, that’s not always the case. So the following quote from this post:

“If there’s an exception, it should be assumed that something is terribly wrong; otherwise, it wouldn’t be called an exception.”

isn’t true in all cases. In chapter 18 of Applied Microsoft .NET Framework Programming (page 402), Jeffrey Richter writes the following:

“Another common misconception is that an ‘exception’ identifies an ‘error’.”

“An exception is the violation of a programmatic interface’s implicit assumptions.”

He goes on to give a number of examples where an exception is thrown but no error has occurred. Before reading Richter, I certainly believed that exceptions were errors, and implemented application logging on the basis of that belief. The exception that showed me this didn’t always apply was ThreadAbortException. This exception gets thrown if you call Response.Redirect(url). The redirect happens just fine, but an exception is still thrown. The reason? When that overload of Response.Redirect is called, execution of the page where it’s called is stopped immediately by default. This violates the assumption that a page will execute fully, but it isn’t an error. Calling Response.Redirect(url, false) prevents ThreadAbortException from being thrown, but it also means you have to write your logic slightly differently.
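A quick illustration of the difference (the handler name and URL are made up):

protected void LoginButton_Click(object sender, EventArgs e)
{
    // Response.Redirect("Welcome.aspx") would redirect fine but end the page
    // by throwing ThreadAbortException, which shows up anywhere you log exceptions.

    // Passing false suppresses the exception, but the rest of the handler and
    // the page lifecycle keep running, so you have to stop processing yourself.
    Response.Redirect("Welcome.aspx", false);
    return;
}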

The other place I’d differ with the original author (Billy McCafferty) is in his description of “swallow and forget”, which is:

} catch (Exception ex) {
    AuditLogger.LogError(ex);
}

The fact that it’s logged means there’s somewhere to look to find out what exception was thrown.  I would define “swallow and forget” this way:

} catch (Exception ex) {

}

Of course, if you actually catch the generic exception, FxCop would flag that as a rule violation. I’m sure McCafferty was using this as an example.

SourceForge to the Rescue

I’d been hunting around for a while trying to find a tool to automatically convert some .resx files into Excel so the translation company we’re using for one of our applications would have something convenient to work with. It wasn’t until today that I found RESX2WORD. It’s actually two utilities: one executable to convert .resx files into Word documents, and another to do the reverse.

The resulting Word document from the resx2word executable has a paragraph of instructions to the translator and automatically duplicates the lines that need translating.

Lessons Learned: The Failure of Virtual Case File

I came across this article about the failure of the Virtual Case File project about a week ago. I read things like this in the hope of learning from the mistakes of others (instead of having to make them myself). What follows are some of the conclusions I drew from reading the article, and how they might apply to other projects.

Have the Right People in the Right Roles

The author of the article (Harry Goldstein) calls the appointment of Larry Depew to manage the VCF project “an auspicious start”. Since Depew had no IT project management experience, putting him in charge of such a large, high-stakes project struck me as a mistake. This error was compounded by having him play the role of customer advocate as well. In order to play the role of project manager effectively, you can’t be on a particular side. Building consensus that serves the needs of all stakeholders as well as possible simply couldn’t happen with one person playing both roles.

Balance Ambition and Resources

The FBI wanted the VCF to be a one-stop shop for all things investigative. But they lacked both the necessary infrastructure and the people to make this a realistic goal. A better approach would have prioritized the most challenging of the individual existing systems to replace (or the one with the greatest potential to boost productivity of FBI agents), and focused the efforts there. The terrorist attacks of 9/11/2001 exposed how far behind the FBI was from a technology perspective, added a ton of political pressure to hit a home run with the new system, and probably created unrealistically high expectations as well.

Enterprise Architecture is Vital

This part of Goldstein’s article provided an excellent definition of enterprise architecture, which I’ve included in full below:

This blueprint describes at a high level an organization’s mission and operations, how it organizes and uses technology to accomplish its tasks, and how the IT system is structured and designed to achieve those objectives. Besides describing how an organization operates currently, the enterprise architecture also states how it wants to operate in the future, and includes a road map–a transition plan–for getting there.

Unfortunately, the FBI didn’t have an enterprise architecture. This meant there was nothing guiding the decisions on what hardware and software to buy.

Delivering Earlier Means Dropping Features

When you combine ambition beyond available resources with shorter deadlines, disaster is a virtual certainty. When SAIC agreed to deliver six months earlier than initially agreed, that should have been contingent on dropping certain features. Instead, they tried to deliver everything by having eight teams work in parallel. This meant integration of the individual components would have to be nearly flawless–a dubious proposition at best.

Projects Fail in the Requirements Phase

When a project fails, execution is usually blamed. The truth is that failed projects fail much earlier than that–in requirements. Requirements failures can take many forms, including:

  • No written requirements
  • Constantly changing requirements
  • Requirements that specify “how” instead of “what”

The last two items describe the VCF’s requirements failure. The 800+ page document described web pages, form button captions, and logos instead of what the system needed to do.

In addition, it appears that there wasn’t a requirements traceability matrix as part of the planning documents.  The VCF as delivered in December 2003 (and rejected by the FBI), did things that there weren’t requirements for.  Building what wasn’t specified certainly wasted money and man-hours that could have been better spent.  I also inferred from the article that comprehensive test scenarios weren’t created until after the completed system had been delivered.  That could have (and should have) happened earlier than it did.

Buy or Borrow Before You Build

Particularly in the face of deadline pressure, it is vital that development teams buy existing components (or use open source) and integrate them wherever practical instead of building everything from scratch. While we may believe that the problem we’re trying to solve is so unique that no software exists to address it, the truth is that viable solutions exist for subsets of many of the problems we face. SAIC building an “e-mail-like system” when the FBI was already using GroupWise for e-mail was a failure in two respects. From an opportunity cost perspective, the time this team spent re-inventing the wheel couldn’t be spent working on other functionality that actually needed to be custom built. And they missed an opportunity to leverage existing functionality.

Prototype for Usability Before You Build

Teams that build successful web applications come up with usability prototypes before code gets written.  At previous employers (marchFIRST and Lockheed-Martin in particular), after “comps” of key pages in the site were done, usability testing would take place to make sure that using the system would be intuitive for the user.  Particularly in e-commerce, if a user can’t understand your site, they’ll go somewhere else to buy what they want.  I attribute much of Amazon’s success to just how easy they make it to buy things.

In the case of the VCF, the system was 25% complete before the FBI decided they wanted “bread crumbs”.  A usability prototype would have caught this.  What really surprises me is that this functionality was left out of the design in the first place.  I can’t think of any website, whether it’s one I’ve built or one I’ve used, that didn’t have bread crumbs.  That seemed like a gigantic oversight to me.

A brief note on version control, labeling, and deployments

One thing I didn’t realize about CruiseControl.NET until recently was that it automatically labels your builds. It uses a ... naming scheme. The way this helps with deployments is that you can always figure out what you’ve deployed to production in the past–as long as you only deploy labeled builds.

We still need to get our continuous integration setup working again, but in the interim, manual labeling of releases is still helpful.

Exposing InnerException

This week, an application I work on started logging an exception that provided no help at all in debugging the problem. My usual practice of running the app in debug mode with production values in the config file failed to reproduce the error too. After a couple of days of checking a bunch of different areas of code (and still not solving the problem), Bob (a consultant from Intervention Technologies) gave me some code to get at all the InnerException values for a given Exception. Download the function from here.

StackTrace can be pretty large. Since we log to a database, I was worried about overrunning the column width. I also wasn’t keen on the idea of looking at so large a text block if an Exception was nesting four or five additional ones. So instead of implementing the code above, I changed the code to log at each level. Doing it this way adds a log entry for each InnerException. Because the log viewer I implemented displays the entries in reverse-chronological order, the root cause of a really gnarly exception displays at the top. The changes I made to global.asax looked like this.
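Roughly, it amounts to this (Logger.LogError here is a stand-in for the real wrapper call, not its actual name):

void Application_Error(object sender, EventArgs e)
{
    Exception ex = Server.GetLastError();

    // One entry per level keeps each StackTrace small, and because the log
    // viewer sorts newest-first, the innermost exception (the root cause)
    // ends up at the top.
    while (ex != null)
    {
        Logger.LogError(ex);
        ex = ex.InnerException;
    }
}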

The result of this work revealed that the app had been complaining about not being able to reach the SMTP server to send e-mail (which it needs to send users their passwords when they register or recover lost passwords).

Once we’d established that the change was working properly, it was time to refactor the code to make the functionality more broadly available. To accomplish this, I updated our Log4Net wrapper like this.
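The new wrapper method is essentially the same loop pulled out of global.asax into one reusable place (class and method names here are illustrative, not the wrapper’s real ones):

using System;
using log4net;

public static class LoggingHelper
{
    private static readonly ILog log = LogManager.GetLogger(typeof(LoggingHelper));

    // Logs the exception and every InnerException beneath it, one entry per level.
    public static void LogExceptionChain(Exception ex)
    {
        while (ex != null)
        {
            log.Error(ex.Message, ex);
            ex = ex.InnerException;
        }
    }
}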

App_Code: Best In (Very) Small Doses

When I first started developing solutions on version 2.0 of the .NET Framework, I saw examples that had some logic in the App_Code folder. For things like base pages, I thought App_Code was perfect–and that’s how I use it today. When I started to see applications put their entire middle tiers in App_Code however, I thought that was a bad idea. Beyond not being able to unit test your components (and the consequences associated with a lack of unit testing, coverage, etc), it just seemed … wrong.
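For the record, the base-page case is about the only code I keep there; something like this (the OnInit body is just an example):

using System;
using System.Web.UI;

// Lives in App_Code so every page in the site can inherit from it.
public class BasePage : Page
{
    protected override void OnInit(EventArgs e)
    {
        base.OnInit(e);
        // Shared per-page setup (culture, theme, security checks) goes here.
    }
}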

Fortunately, there are additional reasons to minimize the use of App_Code.

Defending debuggers (sort of)

I came across this post about debuggers today. I found it a lot more nuanced than the Giles Bowkett post on the same topic. The part of the post I found the most useful was when he used the issue of problems in production to advocate for two practices I’m a big fan of: test-driven development and effective logging.

I’m responsible for an app that wasn’t developed using TDD and had no logging at all when I first inherited it. When there were problems in production, we endured all manner of suffering to determine the causes and fix them. Once we added some unit tests and implemented logging in key locations (global.asax and catch blocks primarily), the number of issues we had dropped off significantly. And when there were issues, the log helped us diagnose and resolve problems far more quickly.

The other benefit of effective logging is to customer service. Once I made the log contents available to a business analyst, she could see in real time what was happening with the application and provide help to users more quickly too.

Whether you add it on after the fact or design it in from the beginning, logging is a must-have for today’s applications.

What tests are really for

Buried deep in this Giles Bowkett post is the following gem:

“Tests are absolutely not for checking to see if things went wrong. They are for articulating what code should do, and proving that code does it.”

While it comes in the midst of an anti-debugger post (and an extended explanation of why comments on the post are closed), it is an excellent and concise statement of the purpose of unit tests.  It explains perhaps better than anything else I’ve read the main reason unit tests (or specifications, as the author would call them) should be written before the code.
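To make that concrete, a test written as a specification reads as a statement about behavior rather than a check for failure. A made-up NUnit-style example (the PasswordGenerator class is purely illustrative):

using NUnit.Framework;

[TestFixture]
public class PasswordGeneratorSpecs
{
    [Test]
    public void Generates_a_password_of_the_requested_length()
    {
        PasswordGenerator generator = new PasswordGenerator();
        string password = generator.Generate(12);

        // The assertion states what the code should do; a failure means the
        // specification isn't met, not merely that "something went wrong."
        Assert.AreEqual(12, password.Length);
    }
}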

Quick fix for “Failed to enable constraints” error

If you use strongly-typed datasets in .NET, you’ve encountered the dreaded “Failed to enable constraints …” message.  I most recently encountered it this morning, while unit testing some new code.  There were far fewer search results for the phrase than I expected, so I’ll add my experience to the lot.

The XSD I’m working with has one table and one stored procedure with four parameters. A call to this stored procedure (via a method in a business logic class) returns a one-column, one-row result set. My code threw a ConstraintException each time the result set value was zero (0). To eliminate this problem, I changed the value of the AllowDBNull attribute of each column in the XSD table from False to True (if the value wasn’t True already). When I ran the unit tests again, they were successful.

I’ll have to research this further at some point, but I think part of the reason for ConstraintException being thrown in my case was the difference between the stored procedure’s result set columns and the table definition of the associated table adapter.
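If I do dig further, the standard diagnostic (not something I used in my fix above) is to load the data with constraints off and then see which rows the table objects to when constraints are re-enabled; roughly (the dataset, table, and adapter names are placeholders):

using System;
using System.Data;

MyTypedDataSet ds = new MyTypedDataSet();
ds.EnforceConstraints = false;   // let Fill load everything first
adapter.Fill(ds.MyTable);

try
{
    ds.EnforceConstraints = true;   // this is where "Failed to enable constraints" surfaces
}
catch (ConstraintException)
{
    // The offending rows are marked, so print which columns and constraints failed.
    foreach (DataRow row in ds.MyTable.GetErrors())
    {
        Console.WriteLine(row.RowError);
        foreach (DataColumn column in row.GetColumnsInError())
        {
            Console.WriteLine("  " + column.ColumnName + ": " + row.GetColumnError(column));
        }
    }
}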

In any case, setting AllowDBNull to True is one way to eliminate that pesky error.