Replicating Folder Structures in New Environments with MSBuild

I recently received the task of modifying an existing MSBuild script to copy configuration files from one location to another while preserving all but the top levels of their original folder structure.  Completing this task required a refresher in MSBuild well-known metadata and task batching (among other things), so I’m recounting my process here for future reference.

The config files that needed copying were already collected into an item via a CreateItem task.  Since we’re using MSBuild 4.0 though, I replaced it with the simpler ItemGroup.  CreateItem has been deprecated for a while, but it can still be used.  There is a bit of debate over the precise differences between CreateItem and ItemGroup, but for me the bottom line is the same (or superior) functionality with less XML.

Creating a new folder on the fly is easy enough with the MakeDir task.  There’s no need to check whether the directory you’re trying to create already exists.  The task just works.
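Putting those first two pieces together looked something like this (a sketch only: the Configs root and target name here are placeholders, not our real ones):

    <Target Name="CopyConfigs">
      <ItemGroup>
        <!-- ** captures the folder structure below the Configs root -->
        <ConfigFiles Include="Configs\**\Web.config" />
      </ItemGroup>
      <!-- No existence check needed; MakeDir just works -->
      <MakeDir Directories="$(OutDir)_Config\$(Environment)" />
    </Target>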

The trickiest part of this task was figuring out what combination of well-known metadata needed to go in the DestinationFiles attribute of the Copy task to achieve the desired result.  The answer ended up looking like this:

<Copy SourceFiles="@(ConfigFiles)" DestinationFiles="$(OutDir)_Config\$(Environment)\%(ConfigFiles.RecursiveDir)%(ConfigFiles.Filename)%(ConfigFiles.Extension)" />

The key bit of metadata is the RecursiveDir part.  Since the ItemGroup that builds the file collection uses the ** wildcard, which captured all of the original folder structure I needed, putting %(RecursiveDir) after the new “root” destination and before the file name gave me the result I wanted.  Another reason that well-known metadata was vital to the task is that all the files have the same name (Web.config), so the easiest way to differentiate them for the purpose of copying was their location in the folder structure.
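To make RecursiveDir concrete, here’s roughly how one file maps (paths invented for illustration):

    Source file:   Configs\Dev\Site1\Web.config   (matched by Configs\**\Web.config)
    RecursiveDir:  Dev\Site1\
    Destination:   $(OutDir)_Config\$(Environment)\Dev\Site1\Web.config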

In addition to the links above, this book by Sayed Ibrahim Hashimi was very helpful.  In a previous job where configuration management was a much larger part of my role, I referred to it (and sedodream.com) on a daily basis.

Fixing MVC Sitemap Errors

When attempting to manually test a .NET MVC application, I got the following exception from Visual Studio:
MvcSiteMapException

Looking at the inner exception revealed this message:

An item with the same key has already been added.

The sitemap file for our application is pretty long (over 1300 lines of XML), but a co-worker pointed me to the potential culprit right away.  There was a sitemap node near the end of the file that had empty strings for its controller and action attributes.  As far as I can tell, this generates the default URL for the site’s home page; since a node for that URL already exists, the duplicate key exception gets thrown.  Removing the extra sitemap node resolved our issue.  A couple of threads that I checked on StackOverflow (here and here) provide other possible causes for the error.
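For illustration, the offending node looked something like this (reconstructed from memory, not copied from our actual sitemap):

    <mvcSiteMapNode title="Some Page" controller="" action="" />

With empty controller and action attributes, the node maps to the same URL as the home page’s node, which is what produces the duplicate key.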

Visual Studio & TFS Behavior Tweaks

One long-running annoyance I’ve had with every version of TFS is its default behavior on check-in. The default is to resolve an open work item on check-in, which is virtually never the case the first (or second, or third, etc.) time you check in code to resolve a bug or implement new functionality. Fortunately, Edsquared has the solution.

After making this long-overdue change in my development environment, I exported the keys for VS2010 and VS2012 as registration entry files below:
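For reference, the VS2010 version boils down to a single value (I’m reconstructing the key path from memory, so check the Edsquared post for the authoritative details before importing anything):

    Windows Registry Editor Version 5.00

    [HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\10.0\TeamFoundation\SourceControl\Behavior]
    "ResolveAsDefaultCheckinAction"="False"

The VS2012 file should be the same apart from the 11.0 version number in the key path.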

Feel free to use them in your environment.

Tim Cook Should Ignore Ars Technica (Almost) Completely

I came across this article by Jacqui Cheng and thought I’d add my two cents on each of the suggestions.

10. License OS X.  Despite the article’s protestations that licensing doesn’t have to be the disaster it was for them in the 90s, this suggestion misses the mark because it misunderstands what kind of company Apple is: a hardware company.  Licensing OS X would only send hardware revenue to a company (or companies) other than Apple.  There’s no compelling reason for them to give away that money.  Licensing the OS won’t get them additional users, or revenue, or entry into some new market they might want.  This is by far the worst idea on the list.

9. Bring some manufacturing jobs back to the U.S.  It’s a nice idea in theory, but in reality, there’s no compelling reason for them to do this.  Why should they voluntarily raise their costs and reduce their profit margins?  Apple is hardly the only company doing business with Foxconn.  Dell, HP, Cisco, and Intel are also major customers.

8. Invest in an independent research lab.  This has been said better by others, but Apple’s success is due in large part to its narrow focus.  People and capital used for such a lab wouldn’t be available to help with the things that Apple is great at.  There are other ways that Apple can contribute to the public good without directing a ton of money toward basic research.  In my view, the federal government is the right entity to be doing that (but that’s a whole other discussion).

7. More transparency on OS X and Mac plans.  Like suggestion 10, this one seems aimed primarily at Mac Pro users.  It’s true that the Mac Pro hasn’t gotten much attention from Apple over the past couple of years; perhaps the biggest reason is that it doesn’t account for much of their revenue anymore.  The one point I’d extrapolate from this suggestion and agree with is that Apple can definitely improve in how they treat developers for their platforms.  I’ve spent my career writing desktop and web applications on and for various versions of Windows, and Microsoft seems much more “pro-developer” (more information about development tools, free copies of software, training events, etc.).  I wouldn’t expect Apple to try to become just like Microsoft in this regard (nor should they), but there are definitely some lessons Apple could learn.

6. Make the Apple TV more than a hobby.  This is the first suggestion in the list that I like.  I like the Apple TV enough that I own one for each TV in my house and have started buying them as gifts for family.

5. Offer streaming, subscription music.  I’m not sure what I think of this suggestion.  I avoided subscription music services in favor of buying music for years because I preferred the idea of owning it and being able to listen to it on whatever device I wanted.  I like the experience I’ve had with Spotify so far, but I don’t know if I listen to enough music to justify the monthly cost.  I’m not sure what Apple could bring to the space that would be better.  Whether they do anything with streaming or not, what Apple really needs to do is re-think iTunes.  As Apple has offered more and more content, iTunes has become more of a sprawling mess.

4. Inject some steroids into the Mac line.  I disagree with this suggestion completely.  Apple got it right with their focus on battery life and enough speed.  In mobile phones and tablets, seemingly every manufacturer using Android as the OS focused on metrics like processor speed, camera megapixels, and features like full multi-tasking.  The result: devices that had to be recharged multiple times over the course of a day.  By contrast, the iPhone is plenty fast, but I can go a full day without having to recharge it.  Multiple days can go by before I need to recharge the iPad.  Apple has correctly avoided competing on specific measures like processor speed and how many megapixels their cameras have.  They’re competing (and winning) on the experience of using their products.

3. Diversify the iOS product line.  If the rumors are correct, Apple will be offering a smaller version of the iPad soon.  The next iPhone will probably have a larger screen as well.  But beyond those changes, I don’t think Apple should be in any hurry to diversify in the way Ars Technica suggests.  By limiting the differentiation of their iOS-based products to storage size (and cost), Apple has chosen a metric that is both meaningful and easy for the typical consumer to understand.  This makes Apple products easier to buy than the alternatives.

2. Make a larger commitment to OS security.  I agree with this suggestion as well.  Apple’s success in the market has made them big enough for virus/malware makers to spend time targeting.

1. Cater to power users again.  I see this suggestion as a variation on the theme of suggestion 7.  I’m sure Apple could do something like this in a way that wouldn’t disrupt their current approach.  Whether or not it would net them enough additional customers and revenue to be worthwhile is another discussion.

Introducing AutoPoco

I first learned about AutoPoco from this blog post by Scott Hanselman.  But it wasn’t until earlier this spring that I had an opportunity to use it.  I’d started a new job at the end of March, and in the process of getting familiar with the code base on my first project, I came across the code they used to generate test data.  I used AutoPoco to generate a much larger set of realistic-looking test data than was previously available.

Last week, I gave a presentation to my local .NET user group (RockNUG) on the framework.  The slide deck I put together is available here, and there’s some demo code here.  The rest of this blog post will speak to the following question (or a rather rough paraphrase of it) that came up during a demo: is it possible to generate instances of a class with an attribute that has a wide range of values, save one?

The demo that prompted this question is in the AddressTest.cs class in the demo code I linked to earlier.  In that class, the second test (Generate100InstanceOfAddressWithImpose) gives 50 of the 100 addresses a zip code of 20905 and a state of Maryland.  One objective behind the question might be to generate a set of data with every state except one.
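That test looks roughly like this (reconstructed from memory of the demo code, so treat the details as approximate):

    // session is an AutoPoco IGenerationSession created in the test setup
    var addresses = session.List<Address>(100)
        .First(50)
            .Impose(a => a.Zip, "20905")
            .Impose(a => a.State, "Maryland")
        .All()
        .Get();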

After a closer look at the documentation and a review of the AutoPoco source code for generating random states, I came up with an answer.  The Generate1000InstancesOfAddressWithNoneInMaryland test not only excludes Maryland from the state property, but also uses abbreviations instead of the full state name.  The implementation of CustomUsStatesSource.Next adds a couple of loops (one if abbreviations are used, one if not) that keep generating random indexes if the resulting state is contained in the list of states to exclude.
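The heart of that implementation looks something like the following simplified sketch (abbreviations path only; the state list is truncated here, and the real code differs in detail):

    using System;
    using AutoPoco.Engine;

    public class CustomUsStatesSource : DatasourceBase<string>
    {
        // Truncated for space; the real list covers all 50 states.
        private static readonly string[] Abbreviations =
            { "AL", "AK", "AZ", "CA", "MD", "VA", "WY" };
        private readonly string[] _excluded;
        private readonly Random _random = new Random();

        public CustomUsStatesSource(params string[] excluded)
        {
            _excluded = excluded;
        }

        public override string Next(IGenerationContext context)
        {
            // Keep drawing random indexes until we land on a state
            // that isn't in the exclusion list.
            string state;
            do
            {
                state = Abbreviations[_random.Next(Abbreviations.Length)];
            } while (Array.IndexOf(_excluded, state) >= 0);
            return state;
        }
    }

It gets wired up with something like .Setup(a => a.State).Use<CustomUsStatesSource>("MD"), since AutoPoco passes Use’s arguments through to the datasource’s constructor.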

The ability to pass parameters to custom datasources in order to control what type of test data is generated is an incredibly useful feature.  In the work I did on my project’s test generator, I used that capability to create a base datasource that generated numeric strings with the length controlled by parameters.  This allowed me to implement new datasources for custom ids in the application by inheriting from the base and specifying those parameters in the constructor.
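As a sketch of the pattern (these type names are mine, not the project’s):

    using System;
    using System.Text;
    using AutoPoco.Engine;

    public class NumericStringSource : DatasourceBase<string>
    {
        private readonly int _length;
        private readonly Random _random = new Random();

        public NumericStringSource(int length)
        {
            _length = length;
        }

        public override string Next(IGenerationContext context)
        {
            // Build a random digit string of the configured length.
            var sb = new StringBuilder(_length);
            for (var i = 0; i < _length; i++)
            {
                sb.Append(_random.Next(10));
            }
            return sb.ToString();
        }
    }

    // A specific id source just fixes the length in its constructor.
    public class ClaimNumberSource : NumericStringSource
    {
        public ClaimNumberSource() : base(10) { }
    }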

Because AutoPoco is open source, if your project has specific needs, you can simply fork it and customize as you wish.  Another value-add of a framework like this could be realized if you write multiple applications that share data.  In such a scenario, test data becomes a corporate resource, with different sets generated and made available according to the scenarios being tested.

Another advantage of AutoPoco for test generation is that its use of plain old CLR objects keeps it independent of specific database technologies.  I’m currently using AutoPoco with RavenDB; it will work just as well with the database technology (or ORM) of your choosing–Entity Framework, NHibernate, SQL Server, Oracle, etc.

AutoPoco is available via NuGet, so it’s very easy to add to whatever test assemblies you’ve currently got in your solutions.  As long as you have public, no-arg constructors for the CLR objects (since AutoPoco uses reflection to work), you can generate large volumes of realistic-looking test data in virtually no time.

The Perils of Renaming in TFS

Apparently, renaming an assembly is a bad idea when TFS is your version control system.

Earlier this week, one of my co-workers renamed an assembly to consolidate some functionality in our application, and even though TFS said the changes were checked in, they weren’t.

I pulled the latest code the morning after the change and got nothing but build failures. We’re using the latest version of TFS, and it’s very frustrating that something like this still doesn’t work properly.

Ultimately, the solution was found at the bottom of this thread.

The only way I’ve found to avoid this kind of hassle is the following:

  1. Create a new assembly.
  2. Copy your code from the old assembly to the new one.
  3. Change any references to the old assembly to use the new one.
  4. Delete the old assembly once you’ve verified the new one is working.

Please Learn to Code

I came across this post from Jeff Atwood in my Twitter feed this morning. It even sparked a conversation (as much of one as you can have 140 characters at a time) between me and my old co-worker Jon who agreed with Jeff Atwood in far blunter terms: “we need to cleanse the dev pool, not expand and muddy the water”.

While I understand Jon’s sentiment, “cleansing” vs. “expanding” just isn’t going to happen. Computer science as an academic discipline didn’t even exist until the 1950s, so it’s a very long way from having the sort of regulations and licensure of a field like engineering (a term that dates back to the 14th century). Combine that with the decreasing number of computer science degrees our colleges and universities award each year (to say nothing of the elimination of a computer science department), and it’s no surprise that people without formal education in computer science are getting paid to develop software.

While it does sound crazy that Mayor Bloomberg made learning to code his 2012 New Year’s resolution, I’m glad someone as high-profile as the mayor of New York is talking about programming. When I was deciding what to study in college (way back in 1992), computer science as a discipline didn’t have a very high profile. While I knew programming was how video games and other software were made, I had to find out about computer science from Bureau of Labor Statistics reports.

Jeff’s “please don’t learn to code” is counterproductive–the exact opposite of what we should be saying. Given a choice between having more people going into investment banking and more people going into software development, I suspect a large majority of us would favor the latter.

I also don’t believe that the objective of learning to code has to be making a living as a software developer in order to be useful. The folks at Software Carpentry are teaching programming to scientists to help them in their research. People who test software should know enough about programming to at least automate the repetitive tasks. If you use a computer with any regularity at all, even a little bit of programming knowledge will enable you to extend the capabilities of the software you use.

We need only look at some of the laws that exist in this country to see the results of a lack of understanding of programming by our judges and legislators. I think that lack of understanding led to software patents (and a ton of time wasted in court instead of spent innovating). The Stop Online Piracy Act and the Protect IP Act are other examples of dangerous laws proposed by legislators who don’t have even the most basic understanding of programming.

As someone who writes software for a living, I prefer customers who understand at least a bit about programming to those who don’t, because that makes it easier to talk about requirements (and get them right). They tend to understand the capabilities of off-the-shelf software a bit better and understand the tradeoffs between it and a custom system. In my career, there have been any number of times where an understanding of programming has helped me find an existing framework or solution that met most of a customer’s requirements, so I and my team were able to focus our work just on what was missing.

Thanks Again StackOverflow!

About a month ago, I wrote a brief post about starting a new job.  In it, I tipped my hat to StackOverflow Careers, for connecting me with my new employer.  Yesterday, I received a package from FedEx.  I was puzzled, since I didn’t recall ordering anything recently.  But upon opening it, I discovered a nice StackOverflow-branded portfolio, pen and card with The Joel Test on it.  In the pocket was a version of my profile printed on high-quality paper.

I appreciate the gesture, and I thank StackOverflow Careers and the StackExchange team not just for the portfolio (which has already replaced my previous one), or for creating a great site for connecting developers with employers that value them, but for the whole collection of Q&A sites that make software development (and many other fields of endeavor) easier to learn.


From Web Forms to MVC

In the weeks since my last post, I’ve been thrown into the deep end of the pool learning ASP.NET MVC 3 and a number of other associated technologies for a healthcare information management application currently scheduled to deploy this July.  Having developed web applications using webforms since 2003, I’ve found it to be a pretty significant mental shift in a number of ways.

No Controls

There are none of the controls I’ve become accustomed to using over the years.  So in addition to learning the ins and outs of MVC 3, I’ve been learning some jQuery as well.

No ViewState

Because there’s no viewstate in MVC, any information you need in more than one view should come either from the URL’s query string, from the viewmodel, or from some mechanism in your view’s controller.  In the application I’m working on, we use Agatha.
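A minimal sketch of what that looks like in practice (types invented for illustration): the action’s parameters bind from the query string, and the viewmodel carries everything the view needs.

    using System.Web.Mvc;

    public class PatientSearchViewModel
    {
        public string LastName { get; set; }
        public int Page { get; set; }
    }

    public class PatientSearchController : Controller
    {
        // /PatientSearch?lastName=Smith&page=2 binds both parameters
        // from the query string.
        public ActionResult Index(string lastName, int page = 1)
        {
            var model = new PatientSearchViewModel { LastName = lastName, Page = page };
            return View(model);
        }
    }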

More “Pages”

Each CRUD operation gets its own view (and its own viewmodel, depending on the circumstance).  This actively encourages separation of concerns in a way that webforms definitely does not.

A Controller is a Lot Like Code-Behind

I’ve been reading Dino Esposito’s book on MVC 3, and he suggests thinking of controllers this way fairly early in the book.  I’ve found that advice helpful in a couple of ways:

  1. This makes it quicker to understand where to put some of the code that does the key work of the application.
  2. It’s a warning that you can put far too much logic in your controllers the same way it was possible to put far too much into your code-behind.

Using Agatha has helped to keep the controllers I’ve written so far from being too heavy.
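Roughly, the shape looks like this (I’m paraphrasing the request/response pattern with invented types rather than quoting Agatha’s actual API):

    using System.Web.Mvc;

    // Invented request/response types standing in for the real service layer.
    public class GetPatientRequest { public string Id { get; set; } }
    public class GetPatientResponse { public string Name { get; set; } }

    // Hypothetical dispatcher interface; Agatha's real one differs.
    public interface IRequestDispatcher
    {
        TResponse Get<TResponse>(object request);
    }

    public class PatientController : Controller
    {
        private readonly IRequestDispatcher _dispatcher;

        public PatientController(IRequestDispatcher dispatcher)
        {
            _dispatcher = dispatcher;
        }

        public ActionResult Details(string id)
        {
            // The real work happens behind the dispatcher;
            // the controller just maps request to response to view.
            var response = _dispatcher.Get<GetPatientResponse>(new GetPatientRequest { Id = id });
            return View(response);
        }
    }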

More to Come

This barely scratches the surface of my experience with MVC so far.  None of the views I’ve implemented has been complex enough yet to benefit from the use of Knockout JS, but future assignments will almost certainly change that.  We’re also using AutoMapper to ease data transfer between our domain objects and DTOs.  In addition to using StructureMap for dependency injection, we’re using PostSharp to deal with some cross-cutting concerns.  Finally, we’re using RavenDB for persistence, so doing things the object database way instead of using SQL Server has required some fundamental changes as well.