Tutorials

Git is a fabulous tool for source control management. While incredibly powerful, it can be a little daunting to learn at first. The following tutorials will help. They are organized from basic to more advanced.

  • tryGit, an interactive tutorial
  • Learn Git Branching, an interactive tutorial
  • GitFlow, a very good read on the basic workflow / process used for FlightNode development
  • Forking Workflow; in fact, we use the Forking Workflow to facilitate collaboration, with the GitFlow branching structure at its core.
    • In your local clones, create a new remote called upstream, pointing to the main repository:
      git remote add upstream https://github.com/FlightNode/FlightNode.xyz
    • When you want to get the latest code from the shared repository, you’ll now be able to use
      git pull upstream <somebranch>.
  • Pull Request Tutorial (https://github.com/yangsu/pull-request-tutorial), with many nice screenshots and some advanced functionality, such as squash, rebase, and cherry-pick.
  • Pro Git, the entire book, is available online for free.

Typical Workflow

Once you’ve created your forks on GitHub, and your clones on your workstation, a typical day might look like this:

  1. Open Git-bash and cd to a workspace:
    cd /c/workspaces/FlightNode/FlightNode.Identity
  2. Planning on working on Issue #10 today… so create a feature branch:
    git checkout -b feature/10
  3. You don’t want to get out-of-date, or you may start running into major merge difficulties. Therefore:
    git pull upstream develop
  4. Work on some code in your editor of choice.
  5. Stage your code:
    1. New file:
      git add full/path/to/new/file.cs
    2. All existing files:
      git add -u :/
    3. All existing files in a particular directory:
      git add -u :full/path
  6. Do some more work, stage some more work.
  7. Commit your changes:
    git commit -m "10 - brief description"
    (For a longer message, enter a brief description on the first line, then hit Enter and type the longer description starting on the next line; finish by typing the closing " character. See the example after this list.)
  8. Done for the day? Want to backup your code? Push to your fork:
    git push origin feature/10
  9. Is the feature ready for other people to use? Then create a pull request in GitHub. In the pull request, add comments directly in the file if you want to explain something about your work. And invite others to review your code.
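
As mentioned in step 7, a multi-line commit message can be typed directly at the prompt. Here is a minimal sketch; the issue number and wording are placeholders, and the > is Git-bash's continuation prompt, which appears until you type the closing quote:

git commit -m "10 - brief description
> 
> A longer explanation of what changed and why, continued
> on as many lines as needed, ending with the closing quote."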

Graphical Git

SourceTree

Many people really like SourceTree, a tool from Atlassian. I have it installed and have not yet really used it, because I’m completely comfortable with using the command line.

Visual Studio Code

Visual Studio Code’s Git support is top-notch for those times when you don’t feel like using the command line. You can use the tool to perform:

  • git add (new or updated files, and it has a different little icon for each status)
  • git commit (supports multi-line commit messages by using Ctrl-Enter)
  • git push
  • git pull

There is no git sync command, and I don’t know exactly what the Sync command here does; perhaps it runs a pull followed by a push.

Perhaps the best thing about VSCode’s integration: it makes the cumbersome process of un-staging and/or reverting your code changes very easy.

Git in Visual Studio Code

Visual Studio 2015

Visual Studio 2015’s support is actually pretty good too. I was biased against it for a long time, probably because they automatically stage files (“tracked files”) and make you purposefully unstage them (“un-tracked files”). But for most people that’s probably not a bad thing. And while I haven’t used Git in VS2013 in many months, the 2015 experience feels a little better and a little more powerful than its predecessor.

Git in Visual Studio 2015

Here is a brief demonstration of authentication and authorization using the FlightNode.Identity API. Significant help in developing this came from ASP.NET Identity 2.1 with ASP.NET Web API 2.2 (Accounts Management) by Taiseer Joudeh.

Requires cloning the FlightNode.Identity and FlightNode.Common repositories.

Initial Database Setup

When running on your localhost for the first time, open the NuGet Package Manager Console and switch to the FlightNode.Identity project. Then execute the command Update-Database -Verbose to install the identity model tables into the (localdb)\ProjectsV12 database (that is the default server name in the config file). This uses Entity Framework Code First Migrations to install the database.
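
For reference, the command as typed in the Package Manager Console:

PM> Update-Database -Verbose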

The initial database install creates a user with username ab@asfddfsdfs.com and password dirigible1. Clearly it is not good that the whole world knows this now, so the first thing to do is change that password, which also happens to be a great way to test that the install worked properly.

Authenticate

To authenticate, POST a form-encoded body to the /oauth/token endpoint. The screenshots below use Postman. The response will include an OAuth2 bearer token. In a real application, we would read this token and store it for use with other API requests (the best storage mechanism is LocalStorage).
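
As a rough sketch of that request, using the default user created above (the grant_type=password form and the port number are assumptions based on Taiseer Joudeh's setup and the local project configuration; adjust to your environment):

POST http://localhost:50323/oauth/token

Headers:
Content-Type = application/x-www-form-urlencoded

Body:
grant_type=password&username=ab@asfddfsdfs.com&password=dirigible1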

Copy the value of the access_token from the response so that you can use it in subsequent steps.

Authenticate request

Modify the User

Now we’ll issue a POST request to the User route, using a JSON-formatted body. Assuming a fresh database install, the user ID will be 1. With this request, we’ll not only change the password, but we’ll also configure a new username, e-mail address, and phone number. The null mobile phone number is included just for show; it is not necessary to specify it.

POST http://localhost:50323/api/v1/user/1

Headers:
Content-Type = application/json

Body:
{
  "userId": 1,
  "userName": "dirigible@asfddfsdfs.com",
  "email": "dirigible@asfddfsdfs.com",
  "phoneNumber": "555-555-5555",
  "mobilePhoneNumber": null,
  "password": "dirigible"
}

Attempt to modify user

Create Authorization Header

Unauthorized!?!?!?

Well of course, that’s what we wanted: the request did not have a bearer token in it - that is, the user is not authorized. And thus we received status code 401, as appropriate. If you look in the UserController code, you’ll see that it is decorated with the [Authorize] attribute, which is how you tell ASP.NET to check for a valid OAuth2-style bearer token before allowing the method to be executed.
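
As a minimal illustration (not the full FlightNode source, which contains more than this), an [Authorize]-decorated Web API controller looks roughly like this:

using System.Web.Http;

// Requires a valid bearer token before any action on this controller runs;
// the controller name matches the one discussed above, and the body is elided.
[Authorize]
public class UserController : ApiController
{
    // ... action methods, including the POST used in this walkthrough ...
}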

So what do we need? We need another Header:

Authorization = bearer <the token response from earlier>

And now…

Successful modification

Everything is OK!

Oh, and while working on this code, I actually removed the password save. That should probably only occur with a special request to change the password.

Chain of Wetlands - view of downtown Dallas

Once upon a time, not far from downtown high rises, the greens and ponds of a golf course took over a portion of forested river bottomland. The river, having a mind of its own, would periodically flood out the golf course. The players complained about the mosquitoes and the stench of sewage from the treatment plant not far upstream.

Concerned about the quality of the water, and needing an outlet to lower the river's flood levels near downtown, someone decided to do something. The City took over the courses, much to the unfortunate owner's chagrin, and partnered with experts to remake the land. Where fairways once stood, now wetland ponds flow, further cleaning the already-treated waters. The greens were pulled up and native plants installed, leading to a beautiful renaissance of prairie grasses, wildflowers, and their marshy kin too.

And the animals, driven out by human development, returned to the land. Especially the birds. And people returned as well, to fish and hike - and to count the birds. 158 different species were seen in four years of observations, tallied by diligent volunteers.

But how many volunteers, and how many hours have they put into monitoring the return of ecological diversity to this site? Were some parts of the area better than others - more diverse or with greater populations? Is there any way for participants and the public to easily see what's going on?

For lovers of nature, and for pragmatists who see the value of using a natural system to filter water and lower the impact of flooding, the return of meadows and wetlands is a happy *beginning* to a long story of restoring balance with an urban ecosystem. This story is one that repeats itself all across the land, from the great prairies to the mountains, and from the cities to the coasts. The details differ - restoration in one locale, and preservation in another - but the needs are the same.

And thus, this story leads to opportunity: to develop a web site platform that can provide for more detailed data collection, as well as volunteer tracking and stakeholder engagement. The story is a real one, exemplifying the role of citizen-science volunteers in monitoring the rehabilitation of the land. And while that particular project might never use the platform, there is real demand in other quarters. Citizen-science, it is time to meet your ideological cousin: open source.

Volunteer birders hiking through the prairie.

White Crowned Sparrow in a netleaf hackberry tree.

Additional reading on the Dallas Floodway Extension Project

Chain of Wetlands - prairie flowers

All photos by Stephen A. Fuqua.


Tropical Mockingbird, Hopkins, Belize. 2014, Stephen A. Fuqua.

The general problem, succinctly stated:

As human-dominated land uses replace native landscapes across North America, there is growing concern about the impacts this habitat loss will have on native bird populations. With many migratory bird species in decline, it is essential to assess the effectiveness of our conservation initiatives [1].

Of course, this applies around the world, not just in North America. There are hundreds of organizations and researchers working to understand the characteristics of current bird populations, and our impact on sustaining and growing those populations. The need for this work grows ever more pressing for those who recognize the value of maintaining diverse and vibrant ecosystems, especially in light of climate change [2].

Many people use a popular, and frankly excellent, application called eBird for collecting bird population data from citizen-scientists. However, that application is geared toward a very general sort of data collection, which is well-suited for your work-a-day birder who wants to record what s/he has seen. But it is not well-suited for more rigorous scientific protocols.

Organizations and researchers cannot rely on eBird alone for many of their data collection and reporting needs. Their projects often need to track volunteers and detailed geographic locations, often at multiple scales, e.g. region (North Texas), site (Pioneer Park), point (lat/long). They want to communicate with those volunteers easily, keeping them informed of news and events. They may even want to facilitate some online training.

Rather than merely tailor a piece of software for a specific project, the FlightNode project aims to build a platform that can be tailored for many different types of projects, thus lowering the overall cost and burden of implementing an online management tool.


1. Dr. Tania Z. Homayoun, Program Background, IbaMonitoring.org, 2010.

2. To understand the potential impacts of climate change on North American birds, see Audubon’s Birds and Climate Change Report, published earlier this year.

Recently I ran across an old article from Phil Haack, about moving his blog to Jekyll using GitHub. And I realized that this is (or might be?) a perfect solution for managing content about the FlightNode project (though it will not be part of the platform itself).

There is a lot of work to do in terms of getting content up and getting it formatted. This is just the start of getting the framework going, so that I can get all the text and pictures out of my head and into documentation that other team members can use.

Startup References

NuGet has been helping .NET developers maintain package dependencies for a number of years now, and any good dev should know the basic operations from within Visual Studio – how to add, update, and remove packages in a solution, using the NuGet Gallery. But to use only the NuGet Gallery is to know only half – or less – of the story. You might be missing out on…

  • Testing out pre-release versions of your project’s key dependencies.
  • Reverting to older versions of libraries.
  • Stand-alone tool installations using NuGet.
  • Full-fledged Windows installations using Chocolatey.
  • Placing your custom packages into private repositories.

Here are a few tips for moving toward mastery of this crucial part of the .NET development ecosystem.

It’s Just a Zip File

Like many of its older cousins in the world of package management, .nupkg, the file extension for NuGet packages, is just an alias for .zip. Change the filename from abc.nupkg to abc.zip and it will open beautifully from your favorite zip client. From time to time, it can be quite useful to open up a NuGet package and see what is inside.

Of course NuGet uses a particular file layout within the package, and you wouldn’t want to create the .nupkg by hand. Instead, you describe the desired package with a .nuspec file (which you should keep in source control), and then use the nuget pack command to create your package (which you should not keep in source control; use an artifact repository instead).

Incidentally, you can also nuget pack your csproj file, but you have less control over the outcome this way.
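
For example (the .nuspec and project file names here are hypothetical):

C:\YourProject> nuget.exe pack MyLibrary.nuspec
C:\YourProject> nuget.exe pack MyLibrary.csproj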

Read the Docs

As with most tools, you’ll get more out of it if you start reading the documentation. I have particularly enjoyed the command line reference for working with nuget.exe. Note: this is basically, but not exactly, the same as the Package Manager PowerShell Console. Use the former for automation, or manual execution of NuGet commands. Use the latter in Visual Studio for advanced functionality.

Specifying the Version to Install

With both nuget.exe and the PowerShell console – but not in the Package Manager GUI – you can install older or pre-release versions of packages by providing the version number:

PM> Install-Package <SomePackageId> -version <number>

Or

C:\YourProject> nuget.exe install <SomePackageId> -version <number>

There are two primary use cases for this:

  1. Some teams publish pre-release versions of their packages. While you wouldn’t typically want these in production, it can be useful to try out the pre-release in anticipation of upcoming enhancements or API changes.
  2. I’m guessing that the majority of business .NET applications were written before NuGet came around. Many of those have dependencies on old packages, which were installed manually. A mad rush to replace manual references with NuGet packages might not be wise; you need to take time and evaluate the impact of each package. It can be useful to start by installing the same version as you already utilize, but from a NuGet package. Then, you can carefully work on upgrading to newer versions of the package in a deliberate test-driven manner.

Software Installation

Most .NET devs probably don’t realize that the .nupkg files can be used for much more than installing packages inside of .NET projects in Visual Studio and SharpDevelop. A basic .nupkg file differs from a self-installing .exe or an .msi file in that it is just a zip file, with no automation to the install. This can be useful for distributing static files, websites, and tools that don’t need Windows registry settings, shortcuts, or global registration. Let’s say that you pack up a website (the .NET equivalent of a Java WAR file), and you want to install it in c:\inetpub\wwwroot\MySite. At the command prompt:

C:\LocationOfNuPkg> nuget.exe install <YourPackageId> -Source %CD% -o c:\inetpub\wwwroot\MySite

If you are running IIS with the default configuration, then you’ve just installed your website from a .nupkg artifact. Because NuGet is retrieving the package from an artifact repository, you only need a tool to push this command to that server, and then the server will pull the “current version” from the repository.

But you can also do more, and this is where Chocolatey (https://chocolatey.org) comes in. Using the same .nupkg format, Chocolatey does for Windows what NuGet did for .NET applications: it supports easy discovery, installation, and upgrade of applications rather than just packages. Once you have Chocolatey itself installed (follow the directions on its home page), you can install many common open source tools from the command line. For example, this article has neglected to mention that you need to download nuget.exe in order to run NuGet from the command line. For that, you can simply run:

C:\>choco install nuget.commandline

This will install nuget.exe into c:\ProgramData\Chocolatey\bin, which automatically puts it into your command path. As with NuGet, versioning can be a huge benefit compared to distributing a zip or msi file.

The key difference between Chocolatey and NuGet is that the choco command runs a PowerShell install script inside the package. Basically, anything you would have done in the past with an MSI, perhaps built up with a WiX XML file, you can do in a PowerShell script. Arguably, you have more control over your install, and it will be easier to support scripted installation processes. Again, the real power here is in automation. It won’t give you a nice GUI for walking your users through the install (although you could embed a call to an MSI inside your .nupkg file), but it does facilitate smoother rollout of applications to servers and multiple desktops.
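
To give a feel for it, here is a minimal sketch of the kind of tools\chocolateyInstall.ps1 script a package might carry; the package name, URL, and silent-install arguments are made up for illustration, and it assumes the standard Install-ChocolateyPackage helper:

# Hypothetical values for illustration only
$packageName   = 'mytool'
$installerType = 'msi'
$silentArgs    = '/quiet'
$url           = 'https://example.com/downloads/mytool-1.0.0.msi'

# Chocolatey helper: downloads the installer and runs it silently
Install-ChocolateyPackage $packageName $installerType $silentArgs $url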

Private Repositories

Most companies are not going to be comfortable with the idea of their developers throwing the company’s proprietary NuGet packages out on the Internet for the whole world to find. Instead, they’ll want to install a piece of server software that acts as a local repository. The Hosting Your Own NuGet Feed documentation lists the primary options available. So far, I’ve been relatively happy with NexusOSS, which also allows me to host Maven packages for my Java teammates, and npm packages for my Node.js team (as well as a few others).

As this article is already quite long, look for a future post with more information on using NexusOSS as a private repository for NuGet and Chocolatey packages.

Does Visual Studio Code measure up to its close kin, Atom?

A friend asked me what I thought of Code. When I installed it a few weeks ago, my first reaction was: this is nice, if you’re not used to Atom already. Never satisfied with a simple gut reaction, I thought for a moment, and realized that I had not looked closely at Microsoft’s additions – particularly, debugging.

Overall Impressions

Both are beautiful text editors. There is something about the presentation that makes me simply enjoy working with text files more than when I open them in Notepad++. Aesthetics are worth something. That said, Notepad++ is still a great tool and I will not be rid of it any time soon (especially for large files).

I’ve been using Atom daily for the past three months, and have spent just a few full days in Code. Exploring the two side-by-side, there are both clear similarities and differences. Right out of the box Code does win in one respect: it provides me with a list of recently opened files. I’m sure there’s a plugin for Atom, but this should be standard.

Code changes the keybindings, but I can’t let that dissuade me. Code does not have jshint built-in, a tool that has been of great help to me as I move from C# to pure JavaScript. But it does detect errors, such as undeclared variables. Code, so far, eschews the plugin architecture of Atom. For a preview release of a lightweight IDE, that probably makes sense. Atom is an interchangeable swiss army knife, and Microsoft is aiming for a dedicated code editor. That said, the rest of this review will look at four great features that are accessible from a vertical toolbar in Code, comparing them to equivalents in Atom.

IntelliSense and Error Detection

In Atom, I have autocomplete-plus and it does a reasonable job of helping me finish my thoughts. But it is not a substitute for powerful IntelliSense. While there are language-specific autocomplete packages for many dialects, oddly enough I cannot find anything for JavaScript. Code, on the other hand, has some impressive auto completion, which applies equally for built-in JavaScript functionality and command-completion for local variables.

Code

codeAtom_1

Atom

codeAtom_2

File Explorer / Tree View

This presents open files in a list, and the application does not use the familiar horizontal tabs metaphor - instead, you see a vertical list of “Working Files”. This is useful when more than a handful of files are open, but it certainly takes some getting used to, and I have not decided if I like it yet.

The list of files is perfectly useful. Code does not have the Git-status color coding of Atom, but more on Git below.

Here’s an interesting feature: right-click on a file, choose Select for Compare, then right-click another file and choose Compare. You get a reasonable diff comparison. Nice, but not very functional in this setting. More important elsewhere.

Of course, Atom also has packages such as atom-cli-diff (Git-like) and compare-files (GitHub-like). My first impression is that the Code diff is better, but that might be based on what I’m used to already.

Overall, I find the two different but equally useful.

Code

codeAtom_3

Atom

codeAtom_4

Search

When you’ve opened a folder, this will search every file in the folder. And it is fast. But so is Atom’s Find in Project. Code just makes it more visible through the menu bar. Both have regular expression support, and my early impression is that they are pretty well matched. That said, I do like Atom’s display of the line number next to the match.

Code

codeAtom_5

Atom

codeAtom_6

Git

In Atom, I have installed the Git-Control plugin. For the most part, I prefer using Bash, but occasionally it is convenient to use the development environment. Visual Studio’s support is pretty good for all of the basic functions, and so is Git-Control. But I don’t like the way that both of them go from unstaged to committed, seemingly bypassing staging. Although it sometimes feels like staging is an extra, unwanted step, Git has it for a reason. And it can turn out to be handy from time to time. So don’t hide it from me. Code gets this right.

Remember that file compare? Now we see it shine: Code gives you quick access to viewing changes on your unstaged and staged files.

Overall, the Code interface to Git is better, except for one flaw (for now): Commit messages are one-line only. And if you push that Enter key trying to add a line break, then you’ve just finished your commit. Commit messages often need to be multi-lined, so I hope that Microsoft changes this.

Code

codeAtom_7

Atom

codeAtom_8

Debug

There are some quirks, but this is promising. As you can see in this screenshot, variables aren’t displaying for me, so I don’t know what values I’m dealing with. No doubt that will improve with time. But at least it is possible to walk through the stack trace and try to understand what’s going on. This is going to be powerful and is reason enough to keep this Visual Studio Code around.

That said, there is a node debugger project for Atom. The pictures look promising, but even the maintainer admits it is buggy. I cannot get it to work at all - opening the debugger palette, you are presented with an opportunity to fill in a few paths. But the fields, at least in my install, are not enabled.

codeAtom_9

Conclusion

Atom is much more versatile, but Visual Studio Code is already a strong competitor. And it is more stable; I’ve not yet experienced any bugs or program failures. I began writing this over a week ago, and decided to force myself into daily Code use before publishing. At this point, I miss a few things, but I am starting to get hooked on Code.

For more details on the features in Code, see John Papa. I purposefully avoided reading his posts - except the debugging overview - in order to draw my own conclusions, but the series is too good not to promote.

The news has been going around: “refactoring doesn’t work,” say researchers. Code quality does not improve. It isn’t worth the time and effort. Here’s why I don’t buy it - why the research is fundamentally flawed and real software groups should ignore it.

legoRefactoring.jpg

We don't need your wheels

The study’s aim was to evaluate the use of refactoring techniques to improve code quality and maintenance. In brief, they conclude that “there is no quality improvement after refactoring treatment.” Their analysis is based on computed code metrics (static analysis), performance, and perceived maintainability. The conclusion, however, is based solely on the code metrics, as the other two factors did not show statistically relevant changes.

The research was carried out in an exercise where students applied a small set of standard refactoring techniques to an application used at their university.

Code Metrics

I look at code metrics from time-to-time, and have written company standards on which metrics to look at and what thresholds to be concerned about. That is, I’m not opposed to metrics. Nevertheless, the use of code metrics is problematic in this paper. Let’s look at each metric, which was calculated with Visual Studio’s built-in tools:

Maintainability Index increased slightly. Win!

Cyclomatic Complexity increased slightly. Lose! The problem is, the total complexity is not all that meaningful. What is meaningful is the complexity of each class. Many times refactoring involves creating some new classes. These new classes by definition introduce a small complexity factor - and they could well be the reason for the increase. In looking at this number, it would have been better to look at the complexity of individual classes. Did that increase? Did the average complexity per class increase? We simply do not know.

Depth of Inheritance no change. Draw!

Class Coupling increased. Lose! On the other hand, more classes probably means that the code is better structured. I can only speak subjectively, but on its face, I cannot see anything negative about a 7% increase in class coupling.

Lines of code increased. Lose! Measuring lines of code can be helpful in identifying methods and classes that are “too big” and need to be split up. But otherwise it is not helpful. Refactoring often means creating new classes and methods. Visual Studio does not count the brackets {} for these - but each new class and method has a signature, and that signature does count.

Duplication was not measured, because Visual Studio does not calculate code duplication. This might have been a truly useful metric, particularly since one of the primary goals in refactoring is to remove code duplication.

Code Analysis Warnings were not measured, perhaps because they were simply overlooked. Refactoring should aim to eliminate common errors and warnings reported by static code analysis tools, such as FxCop, which is built into Visual Studio.

Performance and Changeability

Reading the paper, you will find this important phrase in the statistical analysis for both performance and “changeability” (maintainability): “do not reject the null hypothesis.” The null hypothesis is that the refactoring will not have any significant impact. In other words, the statistics do not support the positive hypothesis that performance will improve and the application will be more maintainable.

Performance. It is generally a given that you refactor for maintenance, not for performance. It is well known that performance problems sometimes require brute force solutions that are not as maintainable. In other words, this is an acknowledged tradeoff anyway.

Changeability. There is some legitimate criticism that less experienced programmers will have a harder time reading nicely object-oriented code. Thus less experienced programmers, such as university students, may rate the changed code as less easy to maintain. Since programming teams typically have a mix of experience, this is very relevant. Each group should assess on their own which refactoring techniques actually tend to improve their codebase.

The benefits tend to be subtle. Some of the most-used techniques focus on clear naming conventions and on removing duplication. The benefit to these two are most apparent when fixing bugs - something that was not explored in the paper.

Trust

It is not my intention to bash the authors. The choice of topic is probably a good one, but in skimming through the paper, the authors’ lack of experience in real-world programming is apparent. Authors who promote refactoring, such as Martin Fowler and “Uncle” Bob Martin, have spent decades in the consulting field. As such, they’ve worked with a wide diversity of companies, codebases, and programmers. At the risk of sounding anti-ivory-tower: I trust their real-world experience over university lab experience.


Ultimately, each group should decide on their own how and when to incorporate code refactoring into their daily habits. For many teams, it is as simple as the old Boy Scout rule: leave the code cleaner than you found it. Take “silver bullet” promises with a grain of salt, but don’t allow a well-intentioned but poorly-constructed study to dissuade you from good engineering practices.

The paper: http://arxiv.org/ftp/arxiv/papers/1502/1502.03526.pdf

Back in October I started playing around with a few technologies, resulting in my first code posted to GitHub: safnetDirectory. I must say that it is not the most impressive bit of coding that I’ve ever done. However, the urge to learn sometimes needs an unencumbered, no-strings-attached, digital canvas on which to exercise. That urge is requited through the experimentation and the lessons learned, rather than the completion of an opus.

The end result: I have a prototype of a mixed Angular.Js / ASP.Net MVC application that provides a simple directory and simple administrative functionality. And it is hosted on Azure.

safnetDirectory screenshot

Two user stories drove this exercise, with a made-up corporate name Prism Company (I never did get around to using an engraving of Isaac Newton for the logo):

  1. As a Prism Company employee, I would like to look up contact information for other employees, so that I can call or otherwise contact my co-workers as needed.
  2. As a Prism Company Human Resources (HR) coworker, I need to add, update, or delete employee data, so that the company directory will always be up-to-date.

To deliver these stories, I began by allowing Visual Studio 2013 to set up a basic MVC5 application with the default Membership authentication provider. From there, I modified the system by expanding the User object to include additional fields: full name, e-mail address, and phone number. Although I prefer a lighter-weight solution than Entity Framework, I left EF6 as it wasn’t critical to my goals, and using the code-first approach allowed me to concentrate on the front-end development and authentication.

The original default Registration page was modified to become the “new employee” page. I left the standard MVC bindings in place instead of using Angular because it is dealing with a small amount of data with only periodic use, and thus does not need what I consider the primary benefit of a JavaScript MVVM framework: handling large amounts of data with minimal data transmission.

Next, I used ngGrid and integrated it with the EF6 data model to create a high performing grid, with paging performed in the database rather than in JavaScript. I didn’t manage to fully customize the grid in the way I want, so perhaps at a future date I’ll upgrade to a newer version of Angular.Js and a more flexible grid component. I secured the page by integrating with the ASP.Net claims-based authentication, taking advantage of that robust toolkit instead of trying to learn something like JSON Web Token (I just happen to need to learn the ASP.Net claims authentication for work).

Finally, I added a form with search options, which is bound with Angular instead of directly using a View and Controller in ASP.Net. Still, “back-end” functionality is required to process the search request, and for that I treated an MVC Controller as a REST service, without taking the time to introduce Web API. MVC was good enough.

For now, this is just a brief reminder to myself of what I was toying with. Hopefully before the year is out I’ll find time for a follow-up to this post, going into code-level detail on how these technologies integrated. Either way, the source code is open for the world to criticize.

Recently I have been looking at ServiceStack’s OrmLite “Micro ORM” as a light-weight alternative to Entity Framework. It is relatively easy to use and very powerful, with capability for both code-first and database-first development. After learning the basic interaction, it was time to flip back into TDD-mode.

And then I found quite the challenge: I wanted to write unit tests that ensure that I’m using OrmLite correctly. I was not interested (for the time being) in testing OrmLite’s interaction with SQL Server itself. That is, I wanted behavioral unit tests rather than database integration tests. Time for a mock. But what would I mock? This ORM framework makes extensive use of extension methods that run off of the core IDbConnection interface from the .Net framework - so it would seem that there is no way to take advantage of Dependency Injection.

Enter the static delegates method promoted by Daniel Cazzulino. OK, so we have Constructor and Property Injection methods already. And now they are joined by Delegate Injection. Let us take this simple example from a hypothetical repository class:

var dbFactory = new OrmLiteConnectionFactory(connectionString, SqlServerDialect.Provider);
using (IDbConnection db = dbFactory.OpenDbConnection())
{
    using (var tran = db.OpenTransaction())
    {
        db.Save(new BusinessEntity());
        tran.Commit();
    }
}

Refactoring the class to use constructor dependency injection, inserting an IDbConnectionFactory instance instead, is trivial and allows us to write unit tests that have a mock version of IDbConnectionFactory. But OpenTransaction() and Save() are both extension methods. How do we replace them?

Using Cazzulino’s technique, we can create a static class containing static delegates, and then insert those delegates into the repository. When it comes time for unit testing, just replace those static delegates with inline delegates – thus, effectively mocking the methods. Here’s the original signature for OpenTransaction:

public static IDbTransaction OpenTransaction(this IDbConnection dbConn)

This can be represented with a Func<T, TResult> delegate:

public static Func<IDbConnection, IDbTransaction> OpenTransaction =
     (connection) => ReadConnectionExtensions.OpenTransaction(connection);

The Save<T>() method is a bit more troublesome, since it is itself a generic. In particular, I want to address this overload of Save():

public static int Save<T>(this IDbConnection dbConn, params T[] objs)

The <T> threw me off – where do you declare it? You can’t put the T after Save in the Func. Then I realized it just needs to go on the static class. And what of params T[]? Convert it to an array of T:

public static class DelegateFactory<T>
{
   public static Func<IDbConnection, T[], int> Save =
   (connection, items) =>
   {
        return OrmLiteWriteConnectionExtensions.Save(connection, items);
   };
}

However, we don’t really want the generic T applied to the non-generic methods, so perhaps we should create two different classes. And I just learned something new through trial and success… <T> allows for class name overloading! Enough with the chatter. Here is a complete example with a happy-path test and one negative test that ensures the Commit() isn’t called when there’s an error.

using Microsoft.VisualStudio.TestTools.UnitTesting;
using ServiceStack.Data;
using ServiceStack.OrmLite;
using System;
using System.Data;
using System.Linq;

namespace TestProject
{
    public class BusinessEntity { }

    public class Repository<T> where T: class
    {
        private readonly IDbConnectionFactory dbFactory;

        public Repository(IDbConnectionFactory dbFactory)
        {
            if (dbFactory == null)
            {
                throw new ArgumentNullException("dbFactory");
            }

            this.dbFactory = dbFactory;
        }

        public int Save(T input)
        {
            int rowsAffected = 0;
            using (IDbConnection db = dbFactory.OpenDbConnection())
            {
                using (var tran = DelegateFactory.OpenTransaction(db))
                {
                    rowsAffected = DelegateFactory<T>.Save(db, new[] { input });
                    tran.Commit();
                }
            }
            return rowsAffected;
        }
    }

    public static class DelegateFactory
    {
        public static Func<IDbConnection, IDbTransaction> OpenTransaction = (connection) => { return ReadConnectionExtensions.OpenTransaction(connection); };
    }

    public static class DelegateFactory<T>
    {
        public static Func<IDbConnection, T[], int> Save = (connection, items) => { return OrmLiteWriteConnectionExtensions.Save(connection, items); };
    }

    [TestClass]
    public class UnitTest1
    {
        [TestMethod]
        public void SaveANewObjectWithProperTransactionManagement()
        {
            // Prepare input
            var input = new BusinessEntity();

            // Use moq where we can
            var mockRepository = new Moq.MockRepository(Moq.MockBehavior.Strict);

            var mockFactory = mockRepository.Create<IDbConnectionFactory>();
            var dbConnection = mockRepository.Create<IDbConnection>();
            mockFactory.Setup(x => x.OpenDbConnection())
                       .Returns(dbConnection.Object);
            dbConnection.Setup(x => x.Dispose());

            var mockTransaction = mockRepository.Create<IDbTransaction>();
            mockTransaction.Setup(x => x.Commit());
            mockTransaction.Setup(x => x.Dispose());

            // And use the delegate methods elsewhere
            var expectedReturnValue = 1;

            DelegateFactory.OpenTransaction = (connection) => { return mockTransaction.Object; };
            DelegateFactory<BusinessEntity>.Save = (connection, items) =>
            {
                Assert.AreSame(dbConnection.Object, connection, "wrong connection object used for Save");
                Assert.IsNotNull(items, "items array is null");
                Assert.AreEqual(1, items.Count(), "items array count");
                Assert.AreSame(input, items[0], "wrong item sent to the Save command");

                return expectedReturnValue;
            };

            // Call the system under test
            var system = new Repository<BusinessEntity>(mockFactory.Object);
            var response = system.Save(input);

            // Evaluate the results
            Assert.AreEqual(expectedReturnValue, response);

            mockRepository.VerifyAll();
        }

        [TestMethod]
        public void CommitIsNeverCalledWhenSaveEncountersAnException()
        {
            // Prepare input
            var input = new BusinessEntity();

            // Use moq where we can
            var mockRepository = new Moq.MockRepository(Moq.MockBehavior.Strict);

            var mockFactory = mockRepository.Create<IDbConnectionFactory>();
            var dbConnection = mockRepository.Create<IDbConnection>();
            mockFactory.Setup(x => x.OpenDbConnection())
                       .Returns(dbConnection.Object);
            dbConnection.Setup(x => x.Dispose());

            var mockTransaction = mockRepository.Create<IDbTransaction>();

            // **** Commit isn't allowed ****
            //mockTransaction.Setup(x => x.Commit());

            mockTransaction.Setup(x => x.Dispose());

            // And use the delegate methods elsewhere
            DelegateFactory.OpenTransaction = (connection) => { return mockTransaction.Object; };
            DelegateFactory<BusinessEntity>.Save = (connection, items) =>
            {
                Assert.AreSame(dbConnection.Object, connection, "wrong connection object used for Save");
                Assert.IsNotNull(items, "items array is null");
                Assert.AreEqual(1, items.Count(), "items array count");
                Assert.AreSame(input, items[0], "wrong item sent to the Save command");

                // **** Inject an exception *** 
                // don't worry that this isn't a SQL exception - just make sure to 
                // test that this same exception occurs when Save is called
                throw new InvalidCastException();
            };

            // Call the system under test
            var system = new Repository<BusinessEntity>(mockFactory.Object);
            try
            {
                system.Save(input);
            }
            catch (InvalidCastException)
            {
                // Evaluate the results
                mockRepository.VerifyAll();
            }
            catch (Exception ex)
            {
                Assert.Fail("wrong exception - caught " + ex.GetType().ToString());
            }
        }
    }
}