NuGet has been helping .NET developers manage package dependencies for a number of years now, and any good dev should know the basic operations from within Visual Studio: how to add, update, and remove packages in a solution using the NuGet Gallery. But the Gallery integration in Visual Studio is only half of the story, or less. You might be missing out on…

  • Testing out pre-release versions of your project’s key dependencies.
  • Reverting to older versions of libraries.
  • Stand-alone tool installations using NuGet.
  • Full-fledged Windows installations using Chocolatey.
  • Placing your custom packages into private repositories.

Here are a few tips for moving toward mastery of this crucial part of the .NET development ecosystem.

It’s Just a Zip File

Like many of its older cousins in the world of package management, .nupkg, the file extension for NuGet packages, is just an alias for .zip. Change the filename from abc.nupkg to abc.zip and it will open beautifully from your favorite zip client. From time to time, it can be quite useful to open up a NuGet package and see what is inside.
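
If you have PowerShell 5 or later handy, a quick way to peek inside is to rename and extract the package (the package name below is made up):

PS C:\Temp> Rename-Item .\MyPackage.1.0.0.nupkg MyPackage.1.0.0.zip
PS C:\Temp> Expand-Archive .\MyPackage.1.0.0.zip -DestinationPath .\MyPackage.1.0.0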

Of course NuGet uses a particular file layout within the package, and you wouldn’t want to create the .nupkg by hand. Instead, you describe the desired package with a .nuspec file (which you should keep in source control), and then use the nuget pack command to create your package (the resulting .nupkg should not go into source control; publish it to an artifact repository instead).
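
As a rough sketch, a minimal .nuspec looks something like the following (the package id, author, and file paths are placeholders, and the lib\net45 target assumes a .NET 4.5 library); nuget pack then turns it into a versioned .nupkg:

<?xml version="1.0"?>
<package>
  <metadata>
    <id>Acme.Utilities</id>
    <version>1.0.0</version>
    <authors>Your Name</authors>
    <description>Utility classes shared across Acme projects.</description>
  </metadata>
  <files>
    <file src="bin\Release\Acme.Utilities.dll" target="lib\net45" />
  </files>
</package>

C:\YourProject> nuget.exe pack Acme.Utilities.nuspec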

Incidentally, you can also run nuget pack directly against your .csproj file, but you have less control over the outcome that way.
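
For example, something along these lines should work (the project name is a placeholder; -Properties lets you choose which build configuration’s output gets packed):

C:\YourProject> nuget.exe pack YourProject.csproj -Properties Configuration=Release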

Read the Docs

As with most tools, you’ll get more out of NuGet if you read the documentation. I have particularly enjoyed the command line reference for nuget.exe. Note that the command line is similar to, but not exactly the same as, the Package Manager PowerShell Console in Visual Studio. Use the former for automation or for manual execution of NuGet commands; use the latter inside Visual Studio for advanced functionality.
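
The command line reference is also available right at the prompt once you have nuget.exe installed:

C:\> nuget.exe help
C:\> nuget.exe help install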

Specifying the Version to Install

With both nuget.exe and the Package Manager Console (but not the Package Manager GUI), you can install older or pre-release versions of packages by providing the version number:

PM> Install-Package <SomePackageId> -Version <number>

Or

C:\YourProject> nuget.exe install <SomePackageId> -Version <number>

There are two primary use cases for this:

  1. Some teams publish pre-release versions of their packages. While you wouldn’t typically want these in production, it can be useful to try out a pre-release in anticipation of upcoming enhancements or API changes (see the example after this list).
  2. I’m guessing that the majority of business .NET applications were written before NuGet came around. Many of those have dependencies on old libraries that were installed manually. A mad rush to replace manual references with NuGet packages might not be wise; you need to take time to evaluate the impact of each package. It can be useful to start by installing the same version you already use, but as a NuGet package. Then you can carefully upgrade to newer versions of the package in a deliberate, test-driven manner.
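
For example, to pin a specific older version or to grab the latest pre-release of a package (Newtonsoft.Json is just a convenient stand-in here, and the version number is illustrative):

PM> Install-Package Newtonsoft.Json -Version 6.0.4
PM> Install-Package Newtonsoft.Json -IncludePrerelease

C:\YourProject> nuget.exe install Newtonsoft.Json -Version 6.0.4
C:\YourProject> nuget.exe install Newtonsoft.Json -Prerelease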

Software Installation

Most .NET devs probably don’t realize that .nupkg files can be used for much more than installing packages inside .NET projects in Visual Studio and SharpDevelop. A basic .nupkg differs from a self-installing .exe or an .msi in that it is just a zip file, with no installation automation. That makes it useful for distributing static files, websites, and tools that don’t need Windows registry settings, shortcuts, or global registration. Let’s say you pack up a website (the .NET equivalent of a Java WAR file) and you want to install it in c:\inetpub\wwwroot\MySite. At the command prompt:

C:\LocationOfNuPkg> nuget.exe install <YourPackageId> -Source %CD% -OutputDirectory c:\inetpub\wwwroot\MySite

If you are running IIS with the default configuration, then you’ve just installed your website from a .nupkg artifact. Because NuGet retrieves the package from an artifact repository, you only need a tool to push this command to the server, and the server will then pull the current version from the repository.
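
As a sketch, a deployment script on the server might run something like the following (the feed URL is a placeholder; -ExcludeVersion keeps the version number out of the folder name, so the site always lands in the same directory):

C:\> nuget.exe install <YourPackageId> -Source https://nuget.example.com/nuget -OutputDirectory c:\inetpub\wwwroot\MySite -ExcludeVersion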

But you can also do more, and this is where Chocolatey (https://chocolatey.org) comes in. Using the same .nupkg format, Chocolatey does for Windows what NuGet did for .NET projects: it supports easy discovery, installation, and upgrade of applications rather than code packages. Once you have Chocolatey itself installed (follow the directions on its home page), you can install many common open source tools from the command line. For example, this article has neglected to mention that you need to download nuget.exe in order to run NuGet from the command line. For that, you can simply run:

C:\>choco install nuget.commandline

This will install nuget.exe into c:\ProgramData\Chocolatey\bin, which automatically puts it into your command path. As with NuGet, versioning can be a huge benefit compared to distributing a zip or msi file.
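
That versioning shows up right on the command line; for example, you can pin to a specific version, or upgrade later when you are ready (the version number is illustrative):

C:\>choco install nuget.commandline --version 2.8.6
C:\>choco upgrade nuget.commandline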

The key difference between Chocolatey and NuGet is that the choco command runs a PowerShell install script inside the package. Basically anything you would have done in the past with an .msi, perhaps built up with a WiX XML file, you can do in a PowerShell script. Arguably, you have more control over your install, and it is easier to support scripted installation processes. Again, the real power here is automation. It won’t give you a nice GUI for walking your users through the install (although you could embed a call to an .msi inside your .nupkg file), but it does facilitate smoother rollout of applications to servers and to multiple desktops.
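
As a rough sketch, the install script inside a Chocolatey package usually leans on Chocolatey’s built-in helpers. The package name and download URL below are hypothetical, and a real package would also supply a checksum:

# tools\chocolateyInstall.ps1
# Resolve the package's own tools folder, then download a zip and unpack it there;
# Chocolatey shims any .exe it finds in that folder onto the PATH.
$toolsDir = Split-Path -Parent $MyInvocation.MyCommand.Definition
Install-ChocolateyZipPackage -PackageName 'acme-tool' `
    -Url 'https://example.com/downloads/acme-tool-1.0.0.zip' `
    -UnzipLocation $toolsDir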

Private Repositories

Most companies are not going to be comfortable with the idea of their developers throwing the company’s proprietary NuGet packages out on the Internet for the whole world to find. Instead, they’ll want to install a piece of server software that acts as a local repository. The Hosting Your Own NuGet Feed documentation lists the primary options available. So far, I’ve been relatively happy with NexusOSS, which also lets me host Maven packages for my Java teammates and npm packages for my Node.js team (as well as a few others).
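
Once a private feed is up, pointing NuGet at it and pushing a package takes only a couple of commands (the repository URL and API key below are placeholders):

C:\> nuget.exe sources add -Name "CompanyNuGet" -Source https://nexus.example.com/repository/nuget-hosted/
C:\> nuget.exe push Acme.Utilities.1.0.0.nupkg -Source CompanyNuGet -ApiKey <your-api-key>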

As this article is already quite long, look for a future post with more information on using NexusOSS as a private repository for NuGet and Chocolatey packages.

Back in October I started playing around with a few technologies, resulting in my first code posted to GitHub: safnetDirectory. I must say that it is not the most impressive bit of coding that I’ve ever done. However, the urge to learn sometimes needs an unencumbered, no-strings-attached, digital canvas on which to exercise. That urge is satisfied through the experimentation and the lessons learned, rather than through the completion of an opus.

The end result: I have a prototype of a mixed AngularJS / ASP.NET MVC application that provides a simple directory and simple administrative functionality. And it is hosted on Azure.

safnetDirectory screenshot

Two user stories drove this exercise, under the made-up corporate name Prism Company (I never did get around to using an engraving of Isaac Newton for the logo):

  1. As a Prism Company employee, I would like to look up contact information for other employees, so that I can call or otherwise contact my co-workers as needed.
  2. As a Prism Company Human Resources (HR) coworker, I need to add, update, or delete employee data, so that the company directory will always be up-to-date.

To deliver these stories, I began by allowing Visual Studio 2013 to set up a basic MVC5 application with the default Membership authentication provider. From there, I modified the system by expanding the User object to include additional fields: full name, e-mail address, and phone number. Although I prefer a lighter-weight solution than Entity Framework, I left EF6 in place since it wasn’t critical to my goals, and the code-first approach allowed me to concentrate on front-end development and authentication.

The original default Registration page was modified to become the “new employee” page. I left the standard MVC bindings in place instead of using Angular, because the page deals with a small amount of data and sees only periodic use, and thus does not need what I consider the primary benefit of a JavaScript MVVM framework: handling large amounts of data with minimal data transmission.

Next, I used ngGrid and integrated it with the EF6 data model to create a high-performing grid, with paging performed in the database rather than in JavaScript. I didn’t manage to fully customize the grid in the way I wanted, so perhaps at a future date I’ll upgrade to a newer version of AngularJS and a more flexible grid component. I secured the page by integrating with ASP.NET claims-based authentication, taking advantage of that robust toolkit instead of trying to learn something like JSON Web Tokens (I happen to need to learn ASP.NET claims authentication for work anyway).

Finally, I added a form with search options, which is bound with Angular rather than directly through an ASP.NET View and Controller. Still, “back-end” functionality is required to process the search request, and for that I treated an MVC Controller as a REST service, without taking the time to introduce Web API. MVC was good enough.

For now, this is just a brief reminder to myself of what I was toying with. Hopefully before the year is out I’ll find time for a follow-up to this post, going into code-level detail on how these technologies integrated. Either way, the source code is open for the world to criticize.