Going beyond files with ItemGroup

If you Google for information on the ItemGroup element of MSBuild, most of the top search results will discuss its use in dealing with files.  The ItemGroup element is far more flexible than this, as I figured out today when making changes to an MSBuild script for creating databases.

My goal was to simplify the targets I was using to create local groups and add users to them.  I started with an example on pages 51-52 of Inside the Microsoft Build Engine: Using MSBuild and Team Foundation Build, where the author uses an ItemGroup to create a list of four servers, each with custom metadata associated with it.  In his PrintInfo target, he displays the data, overrides an element with new data, and even removes an element.  Because MSBuild supports batching, you can declare a task and its attributes once and still have it execute multiple times.  Here's how I leveraged this capability:

  1. I created a target that stored the username of the logged-in user in a property.
  2. I created an ItemGroup.  The metadata for each group element was the name of the local group I wanted to create.
  3. I wrote the Exec commands to execute against each member of the ItemGroup.

The ItemGroup looks something like this:

<ItemGroup>
  <LocalGroup Include="Group1">
    <Name>localgroup1</Name>
  </LocalGroup>
  <LocalGroup Include="Group2">
    <Name>localgroup2</Name>
  </LocalGroup>
  <LocalGroup Include="Group3">
    <Name>localgroup3</Name>
  </LocalGroup>
</ItemGroup>

The Exec commands for deleting a local group if it exists, creating a local group, and adding the logged-in user to it, look like this:

<Exec Command="net localgroup %(LocalGroup.Name) /delete" IgnoreExitCode="true" />
<Exec Command="net localgroup %(LocalGroup.Name) /add" IgnoreExitCode="true" />
<Exec Command="net localgroup %(LocalGroup.Name) $(LoggedInUser) /add" />

The result is that each command is executed once for each member of the ItemGroup.  This implementation makes it a lot easier to add more local groups if necessary, or to make other changes to the target.
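
For completeness, here's a minimal sketch of how the three steps fit together in a single target.  The property name (LoggedInUser) and the use of the USERDOMAIN/USERNAME environment variables are my choices for illustration, not necessarily what your script should use; note also that a PropertyGroup inside a target requires MSBuild 3.5 or later:

<Target Name="CreateLocalGroups">
  <!-- Step 1: capture the logged-in user in a property -->
  <PropertyGroup>
    <LoggedInUser>$(USERDOMAIN)\$(USERNAME)</LoggedInUser>
  </PropertyGroup>
  <!-- Steps 2 and 3: batching over %(LocalGroup.Name) runs each Exec once per item -->
  <Exec Command="net localgroup %(LocalGroup.Name) /delete" IgnoreExitCode="true" />
  <Exec Command="net localgroup %(LocalGroup.Name) /add" IgnoreExitCode="true" />
  <Exec Command="net localgroup %(LocalGroup.Name) $(LoggedInUser) /add" />
</Target>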

More on migrating partially-trusted managed assemblies to .NET 4

Some additional searching on changes to code access security revealed a very helpful article on migrating partially-trusted assemblies.  What I posted yesterday about preserving the previous behavior is found a little over halfway through the article, in the Level1 and Level2 section.

One thing this new article makes clear is that SecurityRuleSet.Level1 should only be used as a temporary measure until code can be updated to support the new security model.

Upgrading .NET assemblies from .NET 3.5 to .NET 4.0

Code access security is one area that has changed quite significantly between .NET 3.5 and .NET 4.0.  An assembly that allowed partially-trusted callers under .NET 3.5 will throw security exceptions when called after being upgraded to .NET 4.0.  To make such assemblies continue to function after the upgrade, AssemblyInfo.cs needs to change from this:

[assembly: AllowPartiallyTrustedCallers]

to this:

[assembly: AllowPartiallyTrustedCallers]
[assembly: SecurityRules(SecurityRuleSet.Level1)]

Once this change has been made, the assembly will work under the same code access security rules that applied prior to .NET 4.0.

When 3rd-party dependencies attack

Lately, I’ve been making significant use of the ExecuteDDL task from the MSBuild Community Tasks project in one of my MSBuild scripts at work.  Today, someone on the development team got the following error when they ran the script:

"Could not load file or assembly 'Microsoft.SqlServer.ConnectionInfo, Version=9.0.242.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91'"

It turned out that the ExecuteDDL task has a dependency on a specific version of Microsoft.SqlServer.ConnectionInfo deployed by the installation of SQL Server 2005 Management Tools.  Without those tools on your machine, and without an automatic redirect in .NET to the latest version of the assembly, the error above results.  The fix is to add the following XML to the assemblyBinding element in MSBuild.exe.config (in whichever .NET Framework version you're using):

<dependentAssembly>
  <assemblyIdentity name="Microsoft.SqlServer.ConnectionInfo" publicKeyToken="89845dcd8080cc91" culture="neutral" />
  <bindingRedirect oldVersion="0.0.0.0-9.9.9.9" newVersion="10.0.0.0" />
</dependentAssembly>

Thanks to Justin Burtch for finding and fixing this bug.  I hope the MSBuild task gets updated to handle this somehow.

Continuous Integration Enters the Cloud

I came across this blog post in Google Reader and thought I'd share it.  The idea of being able to outsource the care and feeding of a continuous integration system to someone else is a very interesting one.  Having implemented and maintained such systems (which I've blogged about in the past), I know it can be a lot of work (though using a product like TeamCity lightens the load considerably compared with CruiseControl.NET).  Stelligent isn't the first company to come up with the idea of CI in the cloud, but they may be the first to implement it using all free/open-source tools.

I’ve read Paul Duvall’s book on continuous integration and highly recommend it to anyone who works with CI systems on a regular basis.  If anyone can make a service like this successful, Mr. Duvall can.

Set-ExecutionPolicy RemoteSigned

When you first get started with PowerShell, don't forget to run 'Set-ExecutionPolicy RemoteSigned' from the PowerShell prompt. If you try to run a script without doing that first, expect to see a message like the following:

File <path to file> cannot be loaded because execution of scripts is disabled on this system.  Please see “get-help about_signing” for more details.

The default execution policy for PowerShell is “Restricted” (commands only, not scripts).  The other execution policy options (in decreasing order of strictness) are:

  • AllSigned
  • RemoteSigned
  • Unrestricted

When I first tripped over this, the resource that helped most was a TechNet article.  Later, I found a blog post that was more specific about the execution policies.
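
For reference, here's the quick check-and-set sequence, run from an elevated PowerShell prompt (changing the policy writes to the machine-wide configuration, so it typically requires administrator rights):

Get-ExecutionPolicy              # displays the current policy, e.g. Restricted
Set-ExecutionPolicy RemoteSigned
Get-ExecutionPolicy              # should now report RemoteSigned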

Can’t launch Cassini outside Visual Studio? This may help …

I'd been trying to launch the Cassini web server from a PowerShell script for quite a while, but kept getting an error when I tried to display the configured website in a browser.  When I opened up a debugger, it revealed a FileNotFoundException with the following details:

"Could not load file or assembly 'WebDev.WebHost, Version=8.0.0.0, Culture=neutral, PublicKeyToken=...' or one of its dependencies..."

Since WebDev.WebHost.dll was present in the correct .NET Framework directory, the FileNotFoundException was especially puzzling.  Fortunately, one of my colleagues figured out the issue: WebDev.WebHost.dll wasn't in the GAC.  Once the file was added to the GAC, I was able to launch Cassini and display the website with no problems.
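
If you run into the same thing, installing the assembly into the GAC with gacutil (run from a Visual Studio command prompt) should fix it.  The path below is just an example; point it at wherever WebDev.WebHost.dll lives on your machine:

gacutil /i "C:\Windows\Microsoft.NET\Framework\v2.0.50727\WebDev.WebHost.dll"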

Unit testing strong-named assemblies in .NET

It’s been a couple of years since I first learned about the InternalsVisibleTo attribute.  It took until this afternoon to discover a problem with it.  This issue only occurs when you attempt to unit test internal classes of signed assemblies with an unsigned test assembly.  If you attempt to compile a Visual Studio solution in this case, the compiler will return the following complaint (among others):

Strong-name signed assemblies must specify a public key in their InternalsVisibleTo declarations.

Thankfully, this blog post gives a great walk-through of how to get this working.  The instructions in brief (with a command-line sketch after the list):

  1. Sign your test assembly.
  2. Extract the public key.
  3. Update your InternalsVisibleTo argument to include the public key.
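
Here's roughly what those steps look like using the sn.exe tool.  The key file names, the test assembly name, and the truncated public key are all placeholders of mine:

rem 1. Generate a key pair and sign the test assembly with it
sn -k Tests.snk

rem 2. Extract the public key and display it as a hex string
sn -p Tests.snk Tests.pub
sn -tp Tests.pub

// 3. In the signed assembly's AssemblyInfo.cs, paste the full hex string
//    (truncated here for readability) into the attribute:
[assembly: InternalsVisibleTo("MyProject.Tests, PublicKey=0024000004800000...")]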

A .NET Client for REST Interface to Virtuoso

For my current project, I've been doing a lot of work related to the Semantic Web.  This has meant figuring out how to write SPARQL queries in order to retrieve data we can use for testing our application.  After figuring out how to do this manually (we used this SPARQL endpoint provided by OpenLink Software), it was time to automate the process.  The Virtuoso product has a REST service interface, but the only example I found for interacting with it used curl.  Fortunately, some googling revealed a really nice resource on the Yahoo! Developer Network with some great examples.

I’ve put together a small solution in Visual Studio 2008 with a console application (VirtuosoPost) which executes queries against http://dbpedia.org/fct/ and returns the query results as XML.  It’s definitely not fancy, but it works.  There’s plenty of room for improvement, and I’ll make updates available here.  The solution does include all the source, so any of you out there who are interested in taking the code in an entirely different direction are welcome to do so.
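
For anyone curious what the console app boils down to, here's a minimal sketch of the kind of request it makes.  The endpoint URL and the query/format parameter names follow the standard SPARQL protocol that Virtuoso exposes; treat them as assumptions rather than a copy of VirtuosoPost's actual code:

using System;
using System.Collections.Specialized;
using System.Net;
using System.Text;

class SparqlPost
{
    static void Main()
    {
        // Virtuoso's SPARQL endpoint for DBpedia (assumed; adjust as needed).
        const string endpoint = "http://dbpedia.org/sparql";
        const string query =
            "SELECT ?p ?o WHERE { <http://dbpedia.org/resource/Bank> ?p ?o } LIMIT 10";

        using (var client = new WebClient())
        {
            // "query" and "format" are the standard SPARQL protocol parameters.
            var form = new NameValueCollection
            {
                { "query", query },
                { "format", "application/sparql-results+xml" }
            };
            byte[] response = client.UploadValues(endpoint, "POST", form);
            Console.WriteLine(Encoding.UTF8.GetString(response));
        }
    }
}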

Adventures in SPARQL

If this blog post seems different from usual, it's because I'm actually using it to get tech support via Twitter for an issue I'm having.  One of my tasks for my current project has me generating data for use in a test database.  DBpedia is the data source, and I've been running SPARQL queries against their Virtuoso endpoint to retrieve RDF/XML-formatted data.  For some reason, though, the resulting data doesn't validate.

For example, the following query:

PREFIX owl: <http://www.w3.org/2002/07/owl#>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX dc: <http://purl.org/dc/elements/1.1/>
PREFIX : <http://dbpedia.org/resource/>
PREFIX dbpedia2: <http://dbpedia.org/property/>
PREFIX dbpedia: <http://dbpedia.org/>
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
SELECT ?property ?hasValue ?isValueOf
WHERE {
  { <http://dbpedia.org/resource/Bank> ?property ?hasValue FILTER (LANG(?hasValue) = 'en') . }
  UNION
  { ?isValueOf ?property <http://dbpedia.org/resource/Bank> }
}

generates the RDF/XML output here.  If I try to parse the file with an RDF validator (like this one, for example), validation fails.  Removing the node-id attributes from the output takes care of the validation issues, but what I'm not sure of is why they are added in the first place.