Thursday, November 12, 2015

Is Aurelia going to be a realistic competitor?

The quick and legitimate answer to the title of this post is, "I don't know." However, I wanted to do a little digging to see the potential for this relative newbie in a JS framework arena that is already so competitive and overflowing. Just see this: 100+ JavaScript Frameworks For Web Developers

To provide some context, here is a visual from Google Trends based on some of the major competing frameworks (note: no matter which variation of 'aurelia' I searched, the results were the same). Even if this metric isn't perfect, it still provides some level of comparison for popularity:



This GitHub thread has some interesting comments from Rob Eisenberg over this past year on Aurelia. With all the talk of it being a competitor to JavaScript frameworks like React and Angular, I was curious about its backing and support. Those frameworks have Facebook and Google, respectively, behind them. I wondered whether Aurelia was just a bunch of devs revolting with a new framework out of angst over Durandal's lack of adoption and the ill-advised direction Angular 2.0 was taking (according to Rob), or whether it would be a serious contender in the long run.

It's no secret that JS frameworks and libraries seem to come and go with the seasons, and investing heavily in one is an important decision. Durandal's flame seemed to fade quickly in this JS framework battle, so I'm curious how Aurelia will fare.

Here are some quotes from that link from Rob:
"From a business perspective, Aurelia is backed by Durandal, Inc. Durandal is a company that is dedicated to providing open, free and commercial tools/services for developers and businesses that create software using web technologies."
As Durandal is a private company, it is tough to see its backing or the angel investors possibly involved. However, for OSS with a passionate community, this could be a moot point.

He does go on to mention:
"Durandal is positioned to begin raising Series A venture capital this month. That isn't to support the open source Aurelia project. That project does not need funding. Rather, it is to support Durandal Inc. which intends to offer a much richer set of tooling and services for those who want to leverage Aurelia and the web. We are building out a serious business and our entire platform will be built with Aurelia and for Aurelia. Our potential investors are very excited about our plans and we expect to have some cool stuff to show in the future"
So that could give Durandal Inc. the potential to keep this thing moving forward. He continues on about the horsepower behind its actual creation and continued development:
"Aurelia itself is solid due to the fact that it currently has a 12 person development team distributed throughout the world and a large active community, especially considering it was only announced a couple of months ago"
...a bit later he adds:
"We have 17 members on our core team currently which contribute daily"
Well hopefully those 12-17 people remain passionate :D

I think the conservative decision today is to go with ReactJS or AngularJS, with Aurelia being the bold one. I don't think it's going to fade away anytime soon, but with so many competing frameworks it's important for it to catch some mainstream traction, or the OSS community might lose steam working for a lost cause.

I for one hope it does succeed and becomes a bit more mainstream. Comparing the syntax of ReactJS, Angular 2.0, and Aurelia, I believe I'd choose Aurelia. Unfortunately for me, I'm in the camp that actually likes Angular 1.x and its implementation, so I don't currently have any gripes pushing me to switch to something different. However, its shortcomings in performance and implementation are certainly going to be addressed by the radically different 2.0, which still needs to grow on me a bit.

Time will tell, and the community, not I, will answer this question through adoption (or lack thereof) of this framework and others in the upcoming months and years.

Tuesday, July 28, 2015

Git Ignore to Untrack TypeScript Auto Generated Files

When using TypeScript, you really don't want the transpiled output files created from the source .ts files committed to the repository. The output is analogous to the generated files in /bin, which we all know are not to be committed. The TypeScript auto-generated files (.js and .js.map) will be built independently from each user's source, and only the single .ts file should be committed.

The only exception I see to this process is if there is a restriction on compiling TypeScript on the server (e.g. a CI build server), in which case the actual .js files might have to be committed. As long as the TypeScript compiler is present for compilation, only the .ts files should be committed.

To ignore the auto-generated .js and .js.map files for a new project the process is quite simple. Just add some rules to the .gitignore file at the root of your project like below and these files will not be tracked.

# TypeScript auto-generated files
# Note: these rules assume all .js files in these directories are generated from .ts files,
# or ordinary hand-written JS files could be ignored as well
# Modify as needed to preserve files and be more explicit
BowlingSPAWeb/BowlingSPA/app/*.js
BowlingSPAWeb/BowlingSPA/app/*.js.map
BowlingSPAWeb/BowlingSPA/app/controllers/*.js
BowlingSPAWeb/BowlingSPA/app/controllers/*.js.map
BowlingSPAWeb/BowlingSPA/app/services/*.js
BowlingSPAWeb/BowlingSPA/app/services/*.js.map
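As a quick sanity check, the git check-ignore command can confirm that a rule matches a given path (the file name below is hypothetical):

git check-ignore -v BowlingSPAWeb/BowlingSPA/app/controllers/bowlingController.js

If a rule matches, the command prints the .gitignore source file, line number, and pattern responsible for ignoring that path.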

The above still needs to be done for existing projects, but if you see tracked changes already showing up in your repository, you will have to manually untrack the files since they are already part of the repository. From the Git docs:
If you already have a file checked in, and you want to ignore it, Git will not ignore the file if you add a rule later. In those cases, you must untrack the file first.
The easiest way to do this is to run the following commands against the applicable files from Git Bash. I find opening the Git Bash prompt directly from the directories in question makes this process easiest.

git rm --cached myfile.js
git rm --cached *.js
git rm --cached *.js.map

Instead of using wildcards (like above), you may prefer to address each file individually with the command. It's a one-time deal to remove the tracking, so it might be best to be explicit, as I did below:


Note, once successfully removed, you can't run the command again or you will get an error message similar to the following:
fatal: pathspec 'myfile.js' did not match any files
This is because the file has already been removed from the repository's tracking, so there isn't any action to perform against it.

If using VS.NET, after refreshing you should see these files shown as deleted from the repository.





I do recommend that you commit from the Git Bash command line when removing these files. I had mixed results when jumping over to VS.NET to commit, as the deleted files were not shown as changes to be committed and synced to the server. Only once I committed from Git Bash did it actually apply and commit successfully:
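For reference, the full sequence from Git Bash was along these lines (the commit message is illustrative; repeat the rm step in each affected directory, such as controllers and services):

cd BowlingSPAWeb/BowlingSPA/app
git rm --cached *.js *.js.map
git commit -m "Untrack TypeScript auto-generated output"
git push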


After committing, and on subsequent updates, changes to the TypeScript auto-generated files should no longer be tracked and shown as 'Included Changes'.

Tuesday, July 14, 2015

Which Version of TypeScript is Installed and Which Version is Visual Studio Using?

My rating of TypeScript on a scale of 1-10... a solid 10

My rating of finding out which version of TypeScript is installed and being used currently... a 2. 

Sometimes the simplest of things are missed, and I think this is one of those cases. There is a lot of playing detective, and resulting confusion, in trying to figure out which versions of TypeScript are actually installed and, subsequently, which one VS.NET targets for compilation. I'm here to clear up this confusion and shed some light on it for TypeScript users.

The TypeScript compiler (tsc.exe) is installed with VS.NET 2013 Update 2 (or later), VS.NET 2015, or via the TypeScript tools extensions available through the Visual Studio Gallery. VS.NET users on one of the aforementioned versions are in a good place, because TypeScript, having been created by Microsoft, integrates well into VS.NET. Regardless of installation method, the tooling and compilers end up at the following location:

C:\Program Files (x86)\Microsoft SDKs\TypeScript

As can be seen from the screenshot below, I have folders for versions 1.0 and 1.5:



Before moving further: the way you've probably found to determine which version of TypeScript is installed is to pass the -v option to the compiler, which will "Print the compiler's version." You can do this from any location using the command prompt. Doing so on a default installation of, say, VS.NET 2013 with the 1.0 SDK folder present will yield the following:
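The command and its output were along these lines (the exact banner text can vary by compiler build):

tsc -v
Version 1.0.3.0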



Notice we have an SDK folder for version 1.0, yet the compiler version is 1.0.3.0. What really matters is not the folder name, but rather the actual version of the compiler within the folder. In this case the 1.0 folder contains version 1.0.3 of the TypeScript compiler.

As mentioned, you can run the -v option against the compiler from anywhere. If you run the command in a directory that physically contains the TypeScript compiler (tsc.exe), the resulting version output will be that of the compiler in that SDK directory.

Running the version command against the 1.0 directory:



Running the version command against the 1.5 directory:



OK great, we can run version commands in different spots and find out about compiler versions. But which one are VS.NET and my project using, and where does that compiler version come from when I run the -v option in a non-TypeScript SDK directory?

Let's address the global version question first. You can run the where tsc command from any command line, which reveals the TypeScript compiler location that the -v option reads its version from:
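On my machine this pointed at the compiler sitting in the 1.0 SDK folder, along these lines:

where tsc
C:\Program Files (x86)\Microsoft SDKs\TypeScript\1.0\tsc.exe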



OK, so I have SDK tools version 1.0 (compiler version 1.0.3 as we know) and 1.5 installed. Why is it returning only the tsc.exe information from the 1.0 SDK folder? It turns out this information comes from the PATH environment variable in Windows. There is no magic or real installation assessment going on here. The shell simply reads the TypeScript directory embedded in the PATH variable and finds the tsc.exe version within that directory. Here is what was within my Windows PATH variable for TypeScript:

C:\Program Files (x86)\Microsoft SDKs\TypeScript\1.0\



Before we get much further: it turns out all these versioning commands, PATH variable values, and the like have absolutely no bearing on which version of TypeScript VS.NET is using and compiling against. I'll get to that in a minute.

However, every Google search for "which TypeScript version do I have installed" yields a bunch of results saying "run tsc -v," which ultimately reads that value in the PATH variable and confuses the heck out of people. They get concerned that VS.NET is not targeting the newer version of TypeScript they installed.

The fact of the matter is that TypeScript supports multiple side-by-side installations, and it's all about the project's targeted value; nothing in the PATH variable matters here. You would think that if the TypeScript team wanted to use the PATH variable, they would update it to the newest version upon installing a newer TypeScript version. Not so. It remains stagnant at the old version, which then reports the older TypeScript compiler version, leaving everyone confused. I found a comment on the following GitHub thread confirming that folks will have to update the PATH variable manually for the time being:



Before manually changing the PATH variable to point to the newer TypeScript SDK version, let's look at what VS.NET reads to know which compiler to target. Unfortunately it is not available as a nice dropdown in the TypeScript properties for the project (hence my rating of '2' for the version fun with TypeScript). Instead it is in the project's configuration within the .csproj or .vbproj file. I particularly like the EditProj Visual Studio extension, which adds a nice 'Edit Project File' option to the context menu when right-clicking a project within VS.NET. Doing this brings up the project's configuration, and I can see the TypeScript version targeted and used by the project inside the <TypeScriptToolsVersion> tag:
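The relevant .csproj fragment looks something like this (a representative sketch; surrounding properties omitted):

<PropertyGroup>
  <TypeScriptToolsVersion>1.0</TypeScriptToolsVersion>
</PropertyGroup>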



Now VS.NET will append this value to the C:\Program Files (x86)\Microsoft SDKs\TypeScript\ path to get the compiler used for the project. In this case we know from above that the 1.0.3 version of the TypeScript compiler is in that directory.

Let's do a test and write some TypeScript using the spread operator, a feature not fully supported until ES6, or version 1.5 of TypeScript, which can compile it down to ES5 JavaScript. Technically version 1.0.3 of TypeScript can still compile the following to JS, but it will complain in the IDE, which is what we'd expect:



Notice how we get the red squiggly under the spread operator usage (three dots ...) and a notice that this is a TypeScript 1.5 feature.
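For reference, the snippet was along these lines (a hypothetical example; any use of array spread reproduces the warning):

// Merging two arrays with the ES6 array spread operator
// Under the 1.0.3 compiler, the line using '...' is flagged as a 1.5 feature
var frames: number[] = [7, 9, 10];
var bonusFrames: number[] = [10, 10];
var allFrames: number[] = [...frames, ...bonusFrames];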

Now we can prove the tsc -v output and resulting PATH variable value are not what's being used by VS.NET (it's the Tools Version I showed above). If we change the PATH variable TypeScript directory value from:

C:\Program Files (x86)\Microsoft SDKs\TypeScript\1.0\

to:

C:\Program Files (x86)\Microsoft SDKs\TypeScript\1.5\

...in the System variables (the TypeScript directory is embedded within the Path variable, so copy out to Notepad to locate and modify):


...and open a new command window, we can see the tsc -v command now reads the updated PATH variable and outputs version 1.5.0.



So cool, right? VS.NET should now reflect version 1.5 of TypeScript being used, and our warning should go away. Nope. Save your TypeScript file, rebuild, reopen VS.NET, do whatever you like; you will still see the warning above. That's because what matters to VS.NET is the version in its own project configuration, not what the PATH variable returns via the version command.



What we need to do is manually update the project's configuration to point to the 1.5 SDK tools version (remember, this should be the value of the folder that VS.NET appends to the SDK directory, not the actual compiler version). Using the 'Edit Project File' tool I mentioned previously, I can change the 1.0 tools version to 1.5:
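After the edit, the fragment from earlier simply becomes (again, a sketch):

<TypeScriptToolsVersion>1.5</TypeScriptToolsVersion>

VS.NET will now append that value to the SDK path and resolve C:\Program Files (x86)\Microsoft SDKs\TypeScript\1.5\tsc.exe.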



If I go back to my TypeScript file, I immediately notice the spread operator is understood and accepted, as we are now pointing to version 1.5 of the TypeScript compiler:



So that's TypeScript versioning as of today. If you want to target a newer (or older) version of TypeScript, or just want to see which version your project is currently using, you'll need to take a look in the project's configuration for the <TypeScriptToolsVersion> value.

As an aside, VS.NET will warn you if you have a newer version of the TypeScript tools installed, but your project is targeting an older version. It will ask if you would like to upgrade your project:



I can say I've had mixed results saying 'Yes' to this dialog, including while writing this post. It did not update the version to 1.5 in my project's properties; I still had to modify the version manually.

To this end, I've made a recommendation on Microsoft Connect that the TypeScript tools version (and corresponding compiler version) be selectable from within the IDE on the 'TypeScript Build' tab within the project's properties. You can read and vote for this if you would like here: Allow switching TypeScript configured version from Project Properties IDE

Friday, June 19, 2015

Using OpenCover and ReportGenerator to get Unit Testing Code Coverage Metrics in .NET

If you are fortunate enough to use the highest offered VS.NET edition, inclusive of additional testing abilities, or have purchased a license to a product such as dotCover, then you already have access to unit testing code coverage tools. However, there is still an easy and powerful way to get the same type of metrics using a combination of mstest.exe and (2) open source tools: OpenCover and ReportGenerator.



OpenCover will leverage mstest.exe and analyze code to determine the amount of code coverage your application has relative to the unit tests written against it. ReportGenerator will then take those results and display them in a generated HTML report. The really cool part is that since it is all scriptable, you could make the output report an artifact of a Continuous Integration (CI) build definition. In this manner you can see how a team or project is doing in terms of the code base and unit testing after each check-in and build.


A quick word on what this post will not get into: what percentage indicates the code has decent coverage? 100%? 75%? Does it matter? The answer is, it depends, and there is no single benchmark to use. One should strive to create unit tests that are meaningful. Unit testing getters and setters to achieve 100% code coverage might not be a good use of time; testing to make sure a critical workflow's state and behavior are as intended is an example of a unit test worth writing. The output of these tools will just help highlight any holes in unit testing that might exist. I could go into detail on code coverage percentages and what types of tests to write in another post. The bottom line: just make sure you are at least doing some unit testing. 0% is not acceptable by any means.

Prior to starting, you can also get this script from my 'BowlingSPAService' solution within the BowlingSPA repository on GitHub. Clone the repository to your machine and inspect or run the script; run it as an Administrator and view the output.

BowlingSPA GitHub


1. Download NuGet Packages:


Download and import the following (2) open source packages from NuGet into your test project. If you have multiple test projects, no worries: the packages will be referenced in the script via their 'packages' folder location and not via any specific project. The test projects' output is the target of these packages.

OpenCover - Used for calculating the metrics

Report Generator - Used for displaying the metrics
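If you prefer the Package Manager Console, the equivalent commands would be something like the following (the versions shown match the package paths used in the script later in this post):

Install-Package OpenCover -Version 4.5.3723
Install-Package ReportGenerator -Version 2.1.5.0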

The documentation you'll need to refer to most is OpenCover's. Its wiki is on GitHub and can be found at the location below. ReportGenerator doesn't need to be tweaked much, as it really just renders the output metrics generated by OpenCover as an HTML report. This was my guide for creating the batch file commands used in this article.

OpenCover Wiki


2. Create a .bat file script in your solution 


I prefer to place these types of artifacts in a 'Solution Folder' (virtual folder) at the root to be easily accessible.

3. Use the following commands to generate the metrics and report

   a. Run OpenCover using mstest.exe as the target:

Note: Make sure the file versions in this script code are updated to match whatever NuGet package version you have downloaded.

"%~dp0..\packages\OpenCover.4.5.3723\OpenCover.Console.exe" ^
-register:user ^
-target:"%VS120COMNTOOLS%\..\IDE\mstest.exe" ^
-targetargs:"/testcontainer:\"%~dp0..\BowlingSPAService.Tests\bin\Debug\BowlingSPAService.Tests.dll\" /resultsfile:\"%~dp0BowlingSPAService.trx\"" ^
-filter:"+[BowlingSPAService*]* -[BowlingSPAService.Tests]* -[*]BowlingSPAService.RouteConfig" ^
-mergebyhash ^
-skipautoprops ^
-output:"%~dp0\GeneratedReports\BowlingSPAServiceReport.xml"



The main pieces to point out here are the following:
  • Leverages mstest.exe to target 'BowlingSPAService.Tests.dll' and send the test results to an output .trx file. Note: you can chain together as many test .dlls as you have in your solution; you may well have more than one test project
  • I've added filters that include anything in the 'BowlingSPAService' namespace while excluding code in the 'BowlingSPAService.Tests' namespace, as I don't want metrics on the test code itself or for it to show up in the output report. Note: these filters can have as many or as few conditions as your application needs. Once familiar with the report, you will probably want to remove auto-generated classes (i.e. Entity Framework, WCF, etc.) from the test results via their namespace.
  • Use 'mergebyhash' to merge results loaded from multiple assemblies
  • Use 'skipautoprops' to skip .NET auto-properties during analysis (basic getters and setters don't require unit tests and thus shouldn't be reported in the output)
  • Output the information for the report (used by ReportGenerator) to 'BowlingSPAServiceReport.xml'

   b. Run Report Generator to create a human readable HTML report

"%~dp0..\packages\ReportGenerator.2.1.5.0\ReportGenerator.exe" ^
-reports:"%~dp0\GeneratedReports\BowlingSPAServiceReport.xml" ^
-targetdir:"%~dp0\GeneratedReports\ReportGenerator Output"

The main pieces to point out here are the following:
  • Calls ReportGenerator.exe from the packages directory (NuGet), providing the output .xml report file generated in #3(a) above, and specifying the target directory in which to generate the index.htm page.
  • The report creation directory can be anywhere you wish, but I created a folder named 'ReportGenerator Output'

   c. Automatically open the report in the browser

start "report" "%~dp0\GeneratedReports\ReportGenerator Output\index.htm"

The main pieces to point out here are the following:
  • This will open the generated report in the machine's default browser
  • Note: if IE is used, you will be prompted to allow 'Blocked Content.' I usually allow it, as it enables links on the page with options to collapse and expand report sections.

4. Stitch together all the sections into a single script to run


REM Create a 'GeneratedReports' folder if it does not exist
if not exist "%~dp0GeneratedReports" mkdir "%~dp0GeneratedReports"

REM Remove any previous test execution files to prevent issues overwriting
IF EXIST "%~dp0BowlingSPAService.trx" del "%~dp0BowlingSPAService.trx"

REM Remove any previously created test output directories
CD %~dp0
FOR /D /R %%X IN (%USERNAME%*) DO RD /S /Q "%%X"

REM Run the tests against the targeted output
call :RunOpenCoverUnitTestMetrics

REM Generate the report output based on the test results
if %errorlevel% equ 0 ( 
 call :RunReportGeneratorOutput 
)

REM Launch the report
if %errorlevel% equ 0 ( 
 call :RunLaunchReport 
)
exit /b %errorlevel%

:RunOpenCoverUnitTestMetrics
"%~dp0..\packages\OpenCover.4.5.3723\OpenCover.Console.exe" ^
-register:user ^
-target:"%VS120COMNTOOLS%\..\IDE\mstest.exe" ^
-targetargs:"/testcontainer:\"%~dp0..\BowlingSPAService.Tests\bin\Debug\BowlingSPAService.Tests.dll\" /resultsfile:\"%~dp0BowlingSPAService.trx\"" ^
-filter:"+[BowlingSPAService*]* -[BowlingSPAService.Tests]* -[*]BowlingSPAService.RouteConfig" ^
-mergebyhash ^
-skipautoprops ^
-output:"%~dp0\GeneratedReports\BowlingSPAServiceReport.xml"
exit /b %errorlevel%

:RunReportGeneratorOutput
"%~dp0..\packages\ReportGenerator.2.1.5.0\ReportGenerator.exe" ^
-reports:"%~dp0\GeneratedReports\BowlingSPAServiceReport.xml" ^
-targetdir:"%~dp0\GeneratedReports\ReportGenerator Output"
exit /b %errorlevel%

:RunLaunchReport
start "report" "%~dp0\GeneratedReports\ReportGenerator Output\index.htm"
exit /b %errorlevel%


This is what the complete script could look like. It adds the following pieces:

    a. Create an output directory for the report if it doesn't already exist
    b. Remove previous test execution files to prevent overwrite issues
    c. Remove previously created test output directories
    d. Run all sections together synchronously, ensuring each step finishes successfully before proceeding


5. Analyze the output



Upon loading the report you can see immediately, in percentages, how well the projects are covered. As you can see below, I have a decent start on coverage in the pertinent areas of BowlingSPAService, but the report shows I need some additional testing. This is exactly the tool that makes me aware of that void. I need to write some more unit tests! (I was eager to get this post completed and published before finishing unit testing) :)

You can expand an individual project to get a visual of each of the class's unit test coverage:


By selecting an individual class, you can see line-level coverage visually, with red and green highlighting. Red means there isn't coverage; green means there is. This visual metric highlights well where unit tests have probably only been written for 'happy path' scenarios and there are gaps for required negative or branching scenarios. By inspecting the classes that are not at 100% coverage, you can easily identify gaps and write additional unit tests to increase the coverage where needed.




Upon review you might find individual 'template' classes or namespaces that add noise and are not realistically valid targets for unit testing metrics. Add filters like the following to OpenCover's -filter switch to remove a namespace or a single class, respectively:
  • -[*]BowlingSPAService.WebAPI.Areas.HelpPage.*
  • -[*]BowlingSPAService.WebApiConfig
After adding or modifying unit tests, you can run the report again and see the updated results. Questions about the particulars of how the metrics are evaluated are better directed toward the source for OpenCover on GitHub.

That's all you should need to get up and running generating unit test metrics for your .NET solutions! As mentioned previously, you might want to add the report as an output artifact on your CI build server and provide the link to a wider audience for general viewing. There are also additional fine-tuning options and customizations you can make, so be sure to check out the OpenCover wiki I linked previously.

Wednesday, April 29, 2015

Big News From Microsoft Build 2015 Day 1


As is typical, there are usually some big and cool announcements made on the Microsoft .NET front regarding the framework and surrounding technologies. Today at Build, three of the biggest announcements were the release of .NET Core for Mac and Linux, the release of Visual Studio Code, and the ability to compile and run Android and iOS app code in a Universal app for Windows.

The .NET Core release announcement will have the most impact for ASP.NET 5 (vNext), allowing one to build web sites and services on Windows, Linux, and Mac using .NET Core. Visual Studio Code will make an impact as a cross-platform code editor, which should give some competition to the likes of Sublime and Notepad++. Finally, the ability to reuse Android and iOS app code to build Windows Universal apps positions Microsoft well going forward.

As a good friend and colleague of mine, Anthony Handley, said best: "This really IS a new Microsoft." This huge push for OSS and cross-platform development will certainly strengthen Windows 10 and Windows-based development overall. Since Microsoft appeared to be running a distant third in the mobile space from both a hardware and a software perspective, these moves to allow Android (Java/C++) and iOS (Objective-C) code to be compiled on Windows into Windows Universal apps, reusing existing code, are a winner's move in my opinion. Combine this with the ability to run a portion of the .NET Framework on Mac and Linux and you have a nice 1-2 punch strategy. This will certainly (and hopefully) bring Microsoft back into the picture as they drop the 'proprietary' walls and embrace cross-platform app building.

As a modified version of the old saying goes, "if you can't beat them, join them, become best friends, and conquer the world!"

Microsoft releases .NET Core preview for Mac and Linux

Microsoft Launches Visual Studio Code, A Free Cross-Platform Code Editor For OS X, Linux And Windows

Huge news: Windows 10 can run reworked Android and iOS apps

Tuesday, March 31, 2015

Orlando Code Camp 2015


It was nice to see everyone who came out to Orlando Code Camp 2015! As I understand it, there were 800+ attendees and a ton of great speakers and content. I enjoyed doing my presentation entitled "Single Page Applications for the ASP.NET Developer using AngularJS, TypeScript, and WebAPI." There was a great turnout, and I was happy to be there.

For those interested, here is the URL to the GitHub repository with the sample app:
BowlingSPA GitHub

This is the slide deck from the presentation:


I really enjoy speaking at events put on by the Orlando .NET User Group (ONETUG). They are a fantastic group of individuals, and I look forward to hopefully speaking down in Orlando later this year at another user group event!

Tuesday, March 24, 2015

I'm Speaking at Orlando Code Camp 2015


Orlando Code Camp 2015 is almost here! This Saturday, March 28th, I'll be down at Code Camp presenting my session titled "Single Page Applications for the ASP.NET Developer using AngularJS, TypeScript, and WebAPI." If you are interested in web development, or just curious about what SPAs entail, this should be a great session to attend.

The best part about Code Camp is that it is FREE to attend and draws some fantastic speakers and attendees from all around. In all the years I have attended, I really think it's on par with some of the major software conferences, and it has many of the same speakers too. It's well worth attending to get some great information and a chance to network with area peers.

Check out the site to register, and hope to see you there!

Friday, January 30, 2015

Resolve Multiple Interface Bindings With Ninject

The old adage "program to an abstraction, not a concretion" rings true and is applied in most modern application design via the use of Interfaces and concepts like IoC and DI. The polymorphic behavior of Interfaces and their ability to have multiple implementations allows us to do some really cool things, as well as set the stage for highly testable code with mocking frameworks like Moq.

However, I'd say 99% of the time we have one Interface bound to a single concrete class in our DI framework. There will be cases where you want to take advantage of having more than one implementation of an Interface and need to configure this.

Ninject is a great DI framework, and you can achieve this through named bindings. Say we have an Interface named ICalculate with two (or more) implementations. Upon injecting the Interface into a constructor, how would you dictate which binding to use? Named bindings accomplish this as follows.

1. Give the bindings of the same Interface unique names:
Bind<ICalculate>().To<CalcImpl1>().Named("Calculation1");
Bind<ICalculate>().To<CalcImpl2>().Named("Calculation2");

2. Specify the named binding as an attribute applied on the argument of the Interface being injected:
private readonly ICalculate _calculate;

public MathCalculations([Named("Calculation1")] ICalculate calculate)
{
    _calculate = calculate;
}
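Putting it all together, resolving from the kernel might look like this minimal sketch (the kernel setup and variable names are illustrative):

// Register both named bindings and resolve a consumer; Ninject injects
// CalcImpl1 into MathCalculations because of the [Named("Calculation1")] attribute
var kernel = new StandardKernel();
kernel.Bind<ICalculate>().To<CalcImpl1>().Named("Calculation1");
kernel.Bind<ICalculate>().To<CalcImpl2>().Named("Calculation2");

var math = kernel.Get<MathCalculations>();

// A specific named binding can also be requested directly:
var calc2 = kernel.Get<ICalculate>("Calculation2");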

It's that easy! For more information, see the Ninject documentation here.

Monday, January 19, 2015

I'm Speaking at Modern Apps LIVE! (LIVE! 360) Las Vegas

I'm excited to be speaking at the upcoming Modern Apps LIVE! conference, co-located with the Visual Studio LIVE! / LIVE! 360 Las Vegas conference, March 16-20. This conference track has a fantastic lineup of speakers, including Jason Bock, Brent Edwards, Anthony Handley, Rocky Lhotka, and Kevin Ford, with a variety of topics surrounding modern application development.
There is still time to save $500 using my speaker registration code: LVSPK30. Click on the banner below to go straight to the registration page.


Speaking

Here are the Modern Apps LIVE! sessions I'll be speaking at during the conference:

I hope to see you there and don't miss out on the registration savings above!!