Continuing from Upgrading safnet-directory, part 1, it is time to improve the solution’s unit testing. At the outset, the controllers cannot be unit tested effectively due to their direct dependence on Entity Framework and ASP.NET Identity Framework classes. With application of an in-memory database, they could be integration tested, but not unit tested as I understand and apply the term.

Interfaces and Dependency Injection

The data access needs to be pulled from the controllers, and the classes need to be refactored for inversion of control / dependency injection. In a larger application I would create three layers with distinct responsibilities:

  1. Controllers - interact with HTTP requests, delegating work to
  2. Services - contain business logic, making requests to external systems through
  3. Data - for accessing data in databases or external services

Given that there is no significant business logic in this application, new code in the middle layer is not worth the effort. But I will extract a simple data layer. It will not be as comprehensive an interface as I would make in a larger app, but it will be effective in isolating the Controllers from Entity Framework. ASP.NET Identity classes are already encapsulating logic around authentication, and they can remain - so long as they have proper interfaces and can be injected into controllers.

For dependency injection, I know that ASP.NET Core (coming in a future article) has a good built-in container. I have heard kind words spoken about Simple Injector but have never tried it out before, so this is a chance to do so.

PM> Install-Package SimpleInjector.Integration.WebApi

I’ll configure registrations with the Scoped lifestyle so that objects are tracked and disposed of properly. In anticipation of future async work, I’ll set up async scoped as the default. Along with Simple Injector, I will create an interface for the existing ApplicationDbContext class and call it IDbContext. I add this method to my Startup class:

private void ConfigureDependencyInjection(HttpConfiguration config)
{
    var container = new Container();
    container.Options.DefaultScopedLifestyle = new AsyncScopedLifestyle();

    container.Register<IDbContext, ApplicationDbContext>(Lifestyle.Scoped);

    container.RegisterWebApiControllers(config);
    container.Verify();

    config.DependencyResolver =
        new SimpleInjectorWebApiDependencyResolver(container);
}
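
The interface itself is deliberately small. As a sketch - the members here are assumptions that simply mirror what the controllers call on ApplicationDbContext:

using System;
using System.Linq;

// A sketch of the extracted interface; the member list is an assumption based on
// what the controllers actually need (queries over users, plus saving changes).
public interface IDbContext : IDisposable
{
    IQueryable<ApplicationUser> Users { get; }

    int SaveChanges();
}

ApplicationDbContext can satisfy most of this with members it already inherits from its base context; where a property type does not match exactly (IDbSet versus IQueryable), a thin explicit interface implementation does the job.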

On the MVC side of the house, I was using OWIN as a cheap dependency manager for authentication with the ASP.NET Identity framework. I will leave this alone for the moment, since that will likely require more radical change in the .NET Core future anyway.

The members of IDbContext were easy to identify, based on how the Controllers were using ApplicationDbContext. One of those members returns an IQueryable, and for unit testing I need to be able to mock it; MockQueryable will do nicely. It seems that it is also time to install my preferred unit testing tools, NUnit 3 and FluentAssertions.

> Install-Package MockQueryable.Moq
> Install-Package NUnit
> Install-Package NUnit3TestAdapter
> Install-Package FluentAssertions

From Visual Studio Testing to NUnit

First, manually remove the reference to Microsoft.VisualStudio.QualityTools.UnitTestFramework. Next, find-and-replace VSTest attributes with NUnit. The most common changes:

Old → New
using Microsoft.VisualStudio.TestTools.UnitTesting; → using NUnit.Framework;
[TestClass] → [TestFixture]
[TestInitialize] → [SetUp]
[TestCleanup] → [TearDown]
[TestMethod] → [Test]

Here is a PowerShell script to fix these up:

$replacements = @{
    "using Microsoft.VisualStudio.TestTools.UnitTesting" = "using NUnit.Framework";
    "[TestClass]" = "[TestFixture]";
    "[TestInitialize]" = "[SetUp]";
    "[TestCleanup]" = "[TearDown]";
    "[TestMethod]" = "[Test]"
}

Get-ChildItem *Tests.cs -recurse | ForEach {
    $fileContents = (Get-Content -Path $_.FullName)

    $replacements.Keys | ForEach {
        $fileContents = $fileContents.Replace($_, $replacements.Item($_))
    }

    $fileContents |  Set-Content -Path $_.FullName
}

Removing Dependency on Thread.CurrentPrincipal

There is another hard-coded dependency that needs to be lifted in order to fully unit test the API EmployeeController: it uses Thread.CurrentPrincipal to interrogate the user’s Claims. If the user is not in the “HR” role, then it strips the identifier values from the records before responding to the HTTP client. This signals that the client should not try to edit those records (note that the Post method is already authorized only for HR users, thus giving defense in depth). For a moment I thought about extracting an interface for accessing this information, but then I remembered the ApiController.User property and confirmed that it has a setter. Thus it is trivial to inject a fake user with fake claims:

[SetUp]
public void SetUp()
{
    _mockDbContext = new Mock<IDbContext>();
    _controller = new EmployeeController(_mockDbContext.Object);

    // Setup the current user as an HR user so that Id values will be mapped for editing
    var mockPrincipal = new Mock<IPrincipal>();

    var mockIdentity = new Mock<ClaimsIdentity>();
    mockIdentity.Setup(x => x.Claims)
        .Returns(new[] { new Claim(ClaimTypes.Role, AdminController.HR_ROLE) });

    mockPrincipal.Setup(x => x.Identity)
        .Returns(mockIdentity.Object);

    _controller.User = mockPrincipal.Object;
}
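
With that fixture in place, a representative test might look something like this sketch; EmployeeModel is my placeholder for whatever the Map function returns, and the Users member matches the IDbContext sketch above:

[Test]
public void Get_ReturnsEmployeeRecord_ForHrUser()
{
    // Arrange: one user behind the mocked IDbContext. The Get action is still
    // synchronous, so a plain AsQueryable() is sufficient here; MockQueryable
    // earns its keep once the queries become async.
    var users = new List<ApplicationUser>
    {
        new ApplicationUser { Id = "abc123", Email = "someone@example.com" }
    };
    _mockDbContext.Setup(x => x.Users).Returns(users.AsQueryable());

    // Act
    // OkNegotiatedContentResult<T> lives in System.Web.Http.Results.
    var result = _controller.Get("abc123") as OkNegotiatedContentResult<EmployeeModel>;

    // Assert
    result.Should().NotBeNull();
    result.Content.Id.Should().Be("abc123");
}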

Extending the Unit Test Code Coverage

At last it is possible to fully unit test this class: the database connection has been mocked and the tests can control the user’s claimset. What about the other classes? There are some that will never be tested - e.g. the database migration classes and the data context class; these should be decorated with [ExcludeFromCodeCoverage]. It would be nice to find out what my code coverage is. Without Visual Studio Enterprise or dotCover, we need to turn to OpenCover. Let’s try AxoCover for integrating OpenCover into Visual Studio.
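
Before measuring, a quick aside on that decoration - it is a one-liner (the migration class name here is illustrative, not the actual generated name):

using System.Data.Entity.Migrations;
using System.Diagnostics.CodeAnalysis;

// Illustrative example: an EF code-first migration that should not count against coverage.
[ExcludeFromCodeCoverage]
public partial class InitialCreate : DbMigration
{
    public override void Up()
    {
        // generated schema changes omitted
    }

    public override void Down()
    {
        // generated schema changes omitted
    }
}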

Axo Cover Results in Visual Studio

Not bad - the display, that is. Coverage isn’t very good: 35.8% of lines. However, note that it did not ignore the Migrations. I forgot that OpenCover does not automatically ignore classes or methods decorated with either [GeneratedCode] or [ExcludeFromCodeCoverage]. Is there a setting in AxoCover? Yes - and in fact it did ignore ExcludeFromCodeCoverage; I just need to add GeneratedCode to the exclusion list: ;*GeneratedCodeAttribute

Axo Cover Settings

Now we have a full 36.7% of lines! Before trying to fill in the missing unit tests, perhaps some more cleanup is in order:

  1. Exclude ASP.NET configuration classes - is there really value in unit testing BundleConfig, FilterConfig, RouteConfig, or Startup?
  2. In addition, there are some unused methods related to two-factor authentication that can be removed from both the Identity management code and the MVC AccountController and ManageController.

Eliminating unneeded code brought me up to 52.6% code coverage. Much of the uncovered code is related to ASP.NET Identity, and I suspect that code will change significantly in the upgrade to ASP.NET Core, so it is probably not worth taking the time to fill in missing unit tests for those methods and classes right now. The sole remaining focus for unit testing will be the EmployeeController - bringing it, at least, to 100% coverage.

Full file diffs in pull request 2.

In 2014 I built a quick-and-dirty web application using ASP.NET MVC5 and AngularJS 1.0.2. There are probably millions of web applications, large and small, that are “stuck” on some older tech, often because people are afraid of the work it will take to modernize them. In this series of blog posts, I’ll refactor away the tech debt and polish up this little app to make it something to be proud of… as much as one can be proud of a simplistic proof-of-concept, anyway.

First up: basic and trivial cleanup of the solution, bringing it up to .NET 4.7.2. Future: improved testing; ASP.NET Core; Entity Framework Core and better separation of concerns; UI libraries / frameworks.

All work in this and subsequent posts will be using Visual Studio 2017, git-bash, and other free tools.

Improve the Name

Long camelCased names are not in favor any more. Kebab case has become deservedly popular - it is easier to read. So let’s rename this project from safnetDirectory to safnet-directory. The GitHub repo is staying put, though, out of laziness.

The .nuget Directory Is Passé

In the early days, no doubt for what were at the time good reasons, Microsoft had us creating .nuget folders containing nuget.exe, nuget.config, and nuget.targets. Right around the time that I created this application, perhaps just afterward, Microsoft eliminated the need for it (check out the evolution of the Stack Overflow discussion).

git rm -r .nuget

But that’s not enough, because the project file references nuget.targets. Remove these lines from *.csproj:

  <Import Project="$(SolutionDir)\.nuget\NuGet.targets" Condition="Exists('$(SolutionDir)\.nuget\NuGet.targets')" />
  <Target Name="EnsureNuGetPackageBuildImports" BeforeTargets="PrepareForBuild">
    <PropertyGroup>
      <ErrorText>This project references NuGet package(s) that are missing on this computer. Enable NuGet Package Restore to download them.  For more information, see http://go.microsoft.com/fwlink/?LinkID=322105. The missing file is {0}.</ErrorText>
    </PropertyGroup>
    <Error Condition="!Exists('$(SolutionDir)\.nuget\NuGet.targets')" Text="$([System.String]::Format('$(ErrorText)', '$(SolutionDir)\.nuget\NuGet.targets'))" />
  </Target>

Update the Solution

Not everyone is ready to commit to ASP.NET Core. For starters, let’s at least move it up from .NET 4.5 to .NET 4.7.2 and the latest versions of all NuGet dependencies.

This app is simple, so it is not surprising that the retargeted solution builds with no errors or warnings.

Now update all of the NuGet packages to the latest (not .NET Core / Standard) libraries (26 updates available!). Interestingly, it took four rounds of updates to get through everything.

Did anything break? No build errors, warnings, or messages. But some unit tests failed - because this was a quick-and-dirty application, and I didn’t have time to write them properly. The failed tests were actually integration tests, and they depended on the state of the database. Before making more radical changes (.NET Core), I should fix up the test project.

Web API

The API in this case was simply using MVC controllers and returning JSON, instead of using Web API. In ASP.NET Core the web libraries were merged into one framework instead of two. As the last step in this initial refactoring foray, I’ll convert the API endpoints to use Web API. Since I don’t have unit tests yet, I will not change the business logic in any way; only the method signatures and return statements should change. This way I don’t create unit tests for the old way and then modify them twice (once to Web API and once to ASP.NET Core MVC).

Here’s a function from the HomeController:

[Authorize(Roles = AdminController.HR_ROLE)]
public JsonResult GetRecord(string id)
{
    using (var db = new Models.ApplicationDbContext())
    {
        var employee = db.Users
            .Where(x => x.Id == id)
            .Select(Map)
            .FirstOrDefault();

        return Json(employee, JsonRequestBehavior.AllowGet);
    }
}

Steps to take:

  1. Install the Web API NuGet packages (Install-Package Microsoft.AspNet.WebApi; Install-Package Microsoft.AspNet.WebApi.OwinSelfHost) and configure Web API to run in OWIN (a sketch of this wiring appears after the code below);
  2. This API is dealing with Employees, so I will create api/EmployeeController.cs with route api/employee, then
  3. Move this method to the new controller, renaming it to Get, and finally
  4. Change the return type from JsonResult to IHttpActionResult, without converting to async (that can come with the EF Core upgrade):

[HttpGet]
[Authorize(Roles = AdminController.HR_ROLE)]
public IHttpActionResult Get([FromUri] string id)
{
    using (var db = new ApplicationDbContext())
    {
        var employee = db.Users
            .Where(x => x.Id == id)
            .Select(Map)
            .FirstOrDefault();

        return Ok(employee);
    }
}
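
For step 1, the OWIN wiring itself is short. Roughly like this sketch - WebApiStartup is an illustrative name, and the UseWebApi extension comes from Microsoft.AspNet.WebApi.Owin (pulled in with the self-host package):

using Owin;
using System.Web.Http;

public static class WebApiStartup
{
    // Called from the existing OWIN Startup.Configuration method.
    public static void Configure(IAppBuilder app)
    {
        var config = new HttpConfiguration();

        // The conventional route yields api/employee for the new EmployeeController (step 2).
        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional });

        app.UseWebApi(config);
    }
}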

There are a few other “API” methods in the HomeController, and of course I’ll move those over as well. And apply proper attributes (HttpGet, HttpPost, Authorize, FromBody, FromUri). All told, it is a quick and painless process - at least for an application of this size ;-).

Well, not so quick… a few adjustments need to be made in AngularJS as well. For starters, there’s the new routing for Employee CRUD operations. And those operations will be using JSON instead of form encoding (which had required custom code, which I now rip out). The URLs are defined in a global api object in _Layout.cshtml; for now I’ll leave that in place and just update the routes. Finally, the search form was JSON-encoding an object and injecting that into the query string; that’s just weird and should be a plain query string, i.e. from &searchModel=%7B%22name%22%3A%22Stephen%22%2C%22title%22%3A%22%22%2C%22location%22%3A%22Austin%22%2C%22email%22%3A%22%22%7D to &name=Stephen&location=Austin for a search on the name and location properties.

Code change in app.js. Old:

$http.get(api.employeePaging, { params: { pageSize: pageSize, page: page, searchText: searchText } })

New code, good enough for the moment:

var params = {
    pageSize: pageSize,
    page: page,
    name: searchText.name,
    location: searchText.location,
    title: searchText.title,
    email: searchText.email
};
$http.get(api.employeePaging, { params: params })

Testing

The application is nearly as fully functional as before, but the CSS is messed up - perhaps because I accidentally upgraded to Bootstrap 4.1.1. Back to 3.3.7 now, as I am not ready for BS 4. Indeed, that fixed it. While I am at it, I might as well remove Modernizr and Respond - no need to support old browsers. Also messed up: the Edit Employee modal dialog doesn’t close on save, but rather throws a very obscure-looking AngularJS error. Since I am going to upgrade the UI in the future and this is very old AngularJS (1.0.2), I will ignore it for now.

Full file diffs in pull request 1.

It is 2018, and I have only just learned about the fantastic FluentAssertions framework. Seems like a nice time for a quick review of unit testing tools for .NET. Personal preferences: NUnit, FluentAssertions, Moq.

Test Framework

MSTest, NUnit, XUnit - they are all useful. They are all well-integrated into Visual Studio now. I would not make a strong argument against any choice for a team that already knows the tool. For new development, I would unhesitatingly choose NUnit.

I used XUnit for the FlightNode project. I enjoyed the flexibility with the assertion statements. I did not like the enforcement of only one assertion per unit test. The theory is that multiple assertions make it more difficult to know which thing failed (do yourself a favor and add a message in the assertion to solve this!) and you do not get verification on the assertions written below the one that failed. The latter is a valid point, but I have very rarely found this to be a problem in the real world. Ultimately, I ended up spending too much time writing method signatures and extracting shared functionality in the name of small tests with one assertion. I also found that my XUnit tests were hard to maintain as the style caused me to extract so much into private functions, thus reducing the readability of any individual test.

MSTest at this point might be as good as NUnit - I have not investigated in a long time. The unit test first mentality of NUnit and its strong testing of exceptions and async make me a believer. Also, while Microsoft is a great company and all… I like promoting a diverse ecosystem.

Assertions

At one time, the richer set of assertions available in NUnit was a big factor in its favor over MSTest. Although I don’t buy the single-assertion-per-test mentality, I do agree that assertions should be clear and expressive. In the JavaScript world, I’ve come to enjoy the should style of assertions in Chai. Only recently did I learn of a similar library for .NET, FluentAssertions, although it is several years old.

  1. This style helps me think about the value I’m testing first, rather than thinking about the assertion statement first.
  2. It is easier to read.
  3. The assertion error messages are beautiful.
  4. The breadth and depth of the syntax is awe-inspiring.

Trivial comparison from an XUnit test

Assert.Equal(expected, system.Body);
/*
Message: Assert.Equal() Failure
Expected: hi
Actual:   (null)
*/

Becomes

system.Body.Should().Be(expected);
/*
Message: Expected system.Body to be "hi", but found <null>.
*/
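
To hint at that breadth and depth, here are a couple of hypothetical examples in FluentAssertions 5.x syntax (not taken from any of the projects discussed here):

using System;
using System.Collections.Generic;
using FluentAssertions;
using NUnit.Framework;

[TestFixture]
public class FluentAssertionsSamples
{
    [Test]
    public void Hypothetical_examples_of_richer_assertions()
    {
        // Collection assertions chain naturally.
        var employees = new List<string> { "Ann", "Bob" };
        employees.Should().HaveCount(2)
            .And.ContainInOrder("Ann", "Bob");

        // Exception assertions (FluentAssertions 5.x; earlier versions used ShouldThrow<T>()).
        Action act = () => throw new InvalidOperationException("boom");
        act.Should().Throw<InvalidOperationException>()
            .WithMessage("boom");
    }
}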

Mocking

Moq is hands-down the winner. NSubstitute looks interesting and might be viable competition; however, it has been around for 8 years and personally I’ve never run into a project that uses it. RhinoMocks was great in its time, I’m sure, but the syntax is not as nice as Moq’s. And while the founder’s personal repository has had a handful of commits in recent years, the official repo hasn’t seen a commit since 2010 - whereas Moq is very much alive, with multiple releases in 2018. RhinoMocks is ancient - and slow. Old code that still uses it should be updated.

A Note on AutoFixture

Looking back at FlightNode, I was confused by some code I wrote two years ago. What is this Fixture.Freeze code about? Apparently I have completely forgotten about AutoFixture. It was probably an interesting experiment. It may have even been beneficial. But I do not even know what’s going on when I look at my own tests. No doubt a quick review of AutoFixture’s documentation would suffice to bring me up to speed. At this moment, I have no argument for or against it.

There is a positive trend of developers doing more of their own testing, going beyond unit testing. However, if independent testers are cut out of the loop, then surely many applications will suffer. Case in point: a user unexpectedly entering decimal temperatures and military time in a citizen science data collection application.

The TERN data collection application supports the Texas Estuarine Research Network’s citizen science efforts on the Gulf coast of Texas and has been in operation since 2016. The code and testing were 99% my own work, done in spare time on weekends. I had tried to recruit some help with this open source project, but other than a couple of early code commits, I was unsuccessful in that regard.

Recently I received a support request for an application error with no obvious explanation. No error log entry was written. Thankfully after a few e-mails, the user sent me a screenshot of the error and the form contents - and I noticed right away that she had entered a decimal temperature value.

Decimal temperature values, regardless of the scale, seem quite reasonable. The problem is, in my personal data collection I always had whole integers, so it never occurred to me that decimals would be desired. The API expected an integer. The form field has a stepwise increment of integers, but the HTML numeric field type also happily accepts decimals. When a decimal was supplied, the WebAPI 2 application was throwing an error while trying to deserialize the data. Since this error is a bad request (user data), it was not logged (perhaps I should start logging BadRequests at the WARN level). Since decimal values are real-world legitimate, I changed the data type in the API and database, and now everything is fine.
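
The fix itself amounted to a one-line data type change on the API model, plus the matching column change in the database - something like this sketch, where WaterTemperature is an illustrative name rather than the actual TERN schema:

// Hypothetical sketch of the model change; WaterTemperature is an illustrative property name.
// Before: whole degrees only, so a value like 78.5 failed deserialization and returned a 400.
// public int WaterTemperature { get; set; }

// After: decimal values are real-world legitimate.
public decimal WaterTemperature { get; set; }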

This user also had helpful input regarding time. Many applications have drop-down menus for times, in 15 or 30 minute increments. I used the Kendo Time Picker control. Unlike some time controls, it doesn’t automatically drop the menu down - so you only know about it if you think to click the icon, which is not entirely intuitive. Once you do click on that, it can be a little confusing - are you allowed to type in your own, more precise, answer? I’ll need to do a little more research before deciding if/how to improve this.

And what about military time, the user asked? Personally, I can’t recall ever doing data entry with military time, so again it didn’t occur to me to check on this. It turns out this control is smart in that regard - and so is the back-end API service. The system accepts military time. However, because of changes in format between Angular, Kendo, and .NET code, it ends up being translated back into standard(?) time. Hopefully this isn’t too confusing - but it is admittedly odd. In a more robust system, perhaps the user would be able to choose her global preference for which time display to use.

Packer is a cross-platform tool for scripting out virtual machine images. Put another way: use it to create new virtual machines with fully automated and repeatable installations. No clicking around. Some of the benefits:

  1. Start up fresh virtual machines from a pre-created, Packer-based image in seconds instead of hours.
  2. Use the same scripts to create a local VM, a VMware instance, or a cloud-based virtual machine.
    • in other words, you can test your virtual machine creation process locally
  3. Helps you maintain a strategy of infrastructure-as-code, which can be version-controlled.

While Packer can be combined with many other useful tools, at its heart it is a simple JSON template file that starts up a virtual machine, provisions it with files, and runs scripts on it. For a primarily-Windows shop, this means running PowerShell scripts.

In search of a light weight windows vagrant box by Matt Wrock has many interesting optimizations. Evaluate each option carefully to decide if it is right for you, and to decide which options should be used with which builders. For example, if building for a corporate IT environment, running standard Windows Update might not be appropriate - you may need to run a group policy to install only approved updates. Mr. Wrock has many great templates for creating vagrant boxes that leverage Packer; studying them can be of great benefit when learning how to employ Packer in your own environment. Another great collection to use or study: boxcutter’s Packer Templates for Windows.

Best Practices with Packer and Windows lives up to its name and has a great Lego table-flip GIF. I particularly appreciated the suggestion of chaining multiple build templates together - although you do need to watch out for disk space consumption when you create multiple images, one building off of another. This article mentions running sysprep, a built-in Windows tool for genericizing a customized image. It does things like removing the Windows product key and other personalization settings. There may be a better way around it, but I’m not well-versed enough in Windows server setups to know how to avoid a pernicious little problem: if you chain multiple builds together with a typical sysprep configuration, then the second build will fail. It will be hung up on asking you to provide a product key! So I found it best to keep sysprep only in the final build in the chain. When running sysprep on AWS or Azure be sure to follow the specialized instructions for those environments (AWS, Azure).

Finally: transferring “large” files into a Windows image using the file provisioner is incredibly slow, because it runs over the WinRM protocol, which is designed more for instructions than for file transfer. Some builders support a built-in HTTP server for faster file transfers, but not all. A few options to overcome this:

  1. Download install files over the Internet
    1. Do you trust that the download link will stay alive and not be hijacked? Depends on the source. If you don’t, then it might be useful to create your own mirror.
    2. Downloading from the Internet can be tricky as you might not want your image-in-creation to have full Internet access, or you might be running locally on a virtual switch that is “internal only”.
  2. Download from local HTTP server
    1. You could put all of your files into an HTTP server that is local - on the computer that is running packer / packer.exe.
    2. This works for local images but not cloud-based images.
  3. Hybrid approach - inject a cloud-based URL for cloud-based image creation and a local URL for local image creation.

Recently I installed MongoDB using Chocolatey, and was surprised to notice that the executables weren’t placed into the Chocolatey path. Chocolatey uses a shimming process to automatically add executables to PATH. This is really quite nice.

I can imagine scenarios where I have command-line executables that weren’t installed by Chocolatey that I would like to add to my path easily, or a scenario like this one, where I want to address something that someone forgot to build into the choco package. Thankfully, manually calling the shimgen executable to create a new shim is quite trivial:

c:\ProgramData\chocolatey\tools\shimgen.exe --output=c:\ProgramData\Chocolatey\bin\mongodump.exe --path="..\..\..\Program Files\MongoDb\Server\3.6\bin\mongodump.exe"

The only key thing to notice is the relative path constraint on the --path argument.

One of my very first technical blog posts was about running OpenSSH on Windows - written over 14 years ago. Recently I was playing around with Microsoft’s port of OpenSSH, which has officially achieved version v1.0.0.0 beta status. Installation was pretty easy, but I ran into a little problem: needing to set “user” group permissions. This little gist has my final script. For reasons of my own I didn’t feel like running the Chocolatey install, so I don’t know if it has this permission problem.

The plovers gather in their southern haunts.
The long journey north.
Migration.

Problem

I had just upgraded NuGet packages - a seemingly innocent thing to do. Everything compiled fine, and I tried to run my ASP.NET WebAPI service. Testing in Postman worked fine, but when I tried to let the browser call an endpoint (any endpoint), I got a mysterious 500 server error with a rather unhelpful payload message of {"message":"An error has occurred."}. However, even with Chrome accessing the service, a breakpoint in the endpoint showed me that the code was executing fine. The problem was clearly occurring inside the ASP.NET engine when trying to send the response back to the browser.

Solution

Chrome sends several headers that Postman does not, so I tried copying those headers into Postman to see if any of them made the difference. About half of them required use of the Postman interceptor, and I decided to do some googling before fiddling with that. Couldn’t turn anything up. I couldn’t even find a way to trace down the error, although I had a nagging feeling that the compression header might be related, since it was one of the key headers that Postman wouldn’t send (Accept-Encoding:gzip, deflate, sdch).

And that’s when I suddenly remembered to look at the Windows Event Viewer. Sure enough, in the Application log I found a pair of error messages:

  1. Server cannot set status after HTTP headers have been sent.
  2. The directory specified for caching compressed content C:\Users\XYZ\AppData\Local\Temp\iisexpress\IIS Temporary Compressed Files\Clr4IntegratedAppPool is invalid. Static compression is being disabled.

My hunch was right: something wrong with the compression. Why did this suddenly occur? I have no idea. I hadn’t deleted files out of Temp recently. My NuGet package upgrades were for ancillary libraries, but not for ASP.NET itself. But the solution was trivial: as suggested by Event ID 2264 – IIS W3SVC Configuration, I just had to create the directory manually, and then everything was working again.

The API for FlightNode is essentially structured along the lines of the onion architecture concept. Follow that link for a full description of the concept. Because my graphics toolkit is currently limited, my diagram for FlightNode has boxes instead of concentric onion-looking circles…

One key takeaway is the direction of dependencies: all of the solid line dependencies in this chart move from left to right. The Domain Entities are at the “center” in the onion, or rightmost in this chart. They are generally “dumb” data structures that group related fields together. The application’s core business logic is in the Domain Managers. These use Entities as both input and output. As with most systems, there is a need to persist the entities somewhere - and the Domain Interfaces define the structure of the persistence API without defining its implementation. Naturally the interfaces need to deal with Domain Entities as well.

But how do we get data in and out (I/O) of the Domain Model? From the end-user perspective, it is through an HTTP-based API. This API contains two parts of the MVC pattern: models and controllers. The view part is completely separated to another project. The responsibility of the controllers is to handle HTTP input and output via ASP.NET WebAPI. The data transfer objects for that I/O are the Models. The controllers serve the function of remote facades in Martin Fowler’s terminology.

Concrete implementations of the persistence interfaces are no longer at the center of the model, as in a traditional layered architecture. This helps liberate the domain model from attachment to any particular model or pattern for saving the data: we could use a Repository or Table Data Gateway; we could use any ORM; we could use SQL Server, MySQL, Azure, MongoDB. All that matters is that we have classes that implement the Domain Interfaces, and successfully store and retrieve data for periods when the application is offline.

Which implementation of the interfaces will be used? That depends on how the Inversion of Control (IoC) container is configured. FlightNode uses Unity to map an Entity Framework context class, which implements each persistence interface, to those interfaces. Then ASP.NET WebAPI uses the container to create the correct classes at runtime. In this situation, Entity Framework’s context class does not exactly match the Repository pattern or any of the other traditional data access patterns. Thus, unusually, there are no “repository classes”. There are just persistence interfaces, and the context classes.
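
A rough sketch of that arrangement, using illustrative type names rather than the actual FlightNode classes (and assuming Unity 4.x’s Microsoft.Practices.Unity namespace):

using System.Data.Entity;
using System.Linq;
using Microsoft.Practices.Unity;

// Domain entity: a "dumb" data structure at the center of the onion.
public class BirdSurvey
{
    public int Id { get; set; }
    public string SiteName { get; set; }
}

// Domain interface: defines the persistence API without defining its implementation.
public interface ISurveyPersistence
{
    IQueryable<BirdSurvey> Surveys { get; }
    int SaveChanges();
}

// Concrete implementation: an Entity Framework context that satisfies the interface.
// There is no separate "repository class" - the context itself is the implementation.
public class DataContext : DbContext, ISurveyPersistence
{
    public DbSet<BirdSurvey> BirdSurveys { get; set; }

    IQueryable<BirdSurvey> ISurveyPersistence.Surveys => BirdSurveys;
    // SaveChanges() is already provided by DbContext.
}

// IoC configuration: WebAPI asks the container for ISurveyPersistence and gets the context.
public static class UnityConfig
{
    public static IUnityContainer Register()
    {
        var container = new UnityContainer();
        container.RegisterType<ISurveyPersistence, DataContext>(new HierarchicalLifetimeManager());
        return container;
    }
}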