Wednesday, November 12, 2014

Upgrading to Angular 1.3: Global Controllers Not Supported by Default

If you recently upgraded to Angular 1.3.x, you may get the following JavaScript error when trying to run your application:

Error: [ng:areq] Argument 'MyController' is not a function, got undefined

This is the result of a breaking change: as of version 1.3, AngularJS no longer supports global controllers defined on the window object. In reality, a production application using global controllers is not advisable and would be a prime target for refactoring regardless. However, you might have had a small test app or the like that stopped working unexpectedly upon upgrading Angular to v1.3.x. The intention behind this change was to prevent poor coding practices.

The actual breaking change is highlighted on GitHub here: https://github.com/angular/angular.js/blob/g3_v1_3/CHANGELOG.md#breaking-changes-13

I like how, according to the changelog, global controllers were only ever meant for "examples, demos, and toy apps." I agree with that statement, so I'm OK with this change. It really is a code smell to put controller functions in the global scope.

Let's look at code that would have worked in prior Angular versions, using a trivial sample:

<body ng-app>
    <div ng-controller="MyController">
        <input ng-model='dataEntered' type='text' />
        <div>You entered: {{dataEntered}}</div>
    </div>
    <script src='/Scripts/angular.js'></script>
    <script type='text/javascript'>
        function MyController($scope) {
            $scope.dataEntered = null;
        }
    </script>
</body>

The breaking change requires you to register the controller with a module, pulling it off the global window object. The required changes are shown below:

<body ng-app="SimpleAngularApp">
    <div ng-controller="MyController">
        <input ng-model='dataEntered' type='text' />
        <div>You entered: {{dataEntered}}</div>
    </div>
    <script src='/Scripts/angular.js'></script>
    <script type='text/javascript'>
        (function () {
            function MyController($scope) {
                $scope.dataEntered = null;
            }
            angular.module("SimpleAngularApp", []).controller("MyController", ["$scope", MyController]);
        })();
    </script>
</body>

You might find this has the biggest impact when you are throwing together quick demos or examples using Plunker or a small test harness. Just remember to register the controller with a module to avoid running into this error.

Technically there is a workaround if you must make a fix quickly, but it is not advised long term: you can set $controllerProvider.allowGlobals(), which allows the old code to run. You can read about it here: https://docs.angularjs.org/api/ng/provider/$controllerProvider
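For completeness, here is a minimal sketch of the workaround (the module name below is hypothetical, and note the irony: you still need a named module to host the config block):

(function () {
    angular.module("LegacyShimApp", [])
        .config(["$controllerProvider", function ($controllerProvider) {
            // Re-enable controllers declared on the global window object (stopgap only)
            $controllerProvider.allowGlobals();
        }]);
})();

With ng-app="LegacyShimApp" on the page, the original global MyController function from the first sample resolves again.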

If your apps have previously been constructed using best practices, this should not impact you at all. For additional changes between Angular 1.2 and 1.3, see the following link: https://docs.angularjs.org/guide/migration

Wednesday, October 22, 2014

I'm Speaking at Modern Apps LIVE! (LIVE! 360) Orlando

I'm excited to be speaking at the upcoming Modern Apps LIVE! conference, co-located with Visual Studio LIVE! at the LIVE! 360 Orlando conference, November 17-21. There are still a few days left to save $600 using my speaker registration code: LSPK17. Click on the banner below to go straight to the registration page.



Here are the Modern Apps LIVE! sessions I'll be speaking at during the conference:

MAH05: Building a Responsive Single Page App
MAF01 Workshop: Modern App Development In-Depth: iOS, Android, Windows, and Web

I hope to see you there, and don't miss out on the registration savings that end 10/24!

Wednesday, October 15, 2014

Fixing the "Authentication failed" Message When Accessing a TFS-Git Repository

Recently I've been working with a TFS Project using Git as the source control provider, and something locally went wrong that I just couldn't remedy in VS.NET directly. The buzz and consensus already seem to be that managing your Git repository with one of the following tools is easier and more powerful than the VS.NET IDE integration:
  • Git Bash
  • Git for Windows
  • TortoiseGit
  • SourceTree
  • Git Gui
I have used Subversion in the past, so TortoiseGit was already familiar to me, but the others were not too hard to test out either. The main thing I needed to do was a simple Git pull to update my local repository to the most current version. I'm using a Windows LiveID to authenticate to the TFS Online project just fine in VS2013, and I made the original clone successfully.

VS2013 initially had no issue doing a pull, but once things got messed up I decided to use an external tool to fix the problem. The issue was that all of the tools kept failing authentication with some flavor of the following error. It didn't make sense, because the credentials I was providing had worked previously.

"Authentication failed"

Note - it's known that the VS IDE integration with Git does not expose all the functionality available, so if you get into a mess with your repo it's probably not going to be easy to fix from VS.NET's tooling.

It turns out the solution is to modify a setting on your Windows LiveID account to 'Enable alternate credentials'. You can reach this setting by clicking on your user name in the top left-hand corner once logged into your Live account, selecting 'My Profile', and then selecting the 'Credentials' heading.

Here you will need to click the link to 'Enable alternate credentials' and fill in a password, plus a secondary username if desired or if the application you use can't accept an email address as a username:


This allows the use of basic authentication credentials and fixes the authentication issue with the tools I listed above to manage a Git repository. Make sure to use these credentials when authenticating and you should now be able to manage your TFS-Git repository without authentication issues.
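For reference, once alternate credentials are enabled, a clone or pull from the command line is plain basic authentication with that secondary username (the account and project names below are hypothetical):

git clone https://mysecondaryuser@myaccount.visualstudio.com/DefaultCollection/_git/MyProject
git pull origin master

Git prompts for the alternate-credentials password on first use, and the GUI tools listed above accept the same username/password pair in their connection dialogs.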

Wednesday, October 8, 2014

How To: Export a SQL Server Database to Windows Azure

If you are beginning to work with Windows Azure and are ready to deploy an application or service, you may wonder how to move your existing backend SQL Server database there as well.

The good news is it's quite trivial to do using SQL Server Management Studio and the Windows Azure Management Portal. For this example I'm going to export my local 'BowlingStats' SQL database, which is used with my BowlingSPA application, to Azure.

Prerequisites
  • Obtain a Windows Azure Account

1. Create a Storage Account

Once logged into the Azure Management Portal, select 'Storage' from the options on the left, and then from 'Data Services' -> 'Storage' select the 'Quick Create' option. Enter a URL for the name of your storage account, as well as a location and replication strategy. Normally Azure will pre-populate these with sensible defaults, but you can change them if desired.



You will see a message once the account has been successfully created:



Upon creating the account you will be presented with a screen containing a 'Primary' and 'Secondary' set of access keys for the Storage Account you just created. Store these keys as they will be needed to connect to Azure Storage later from SQL Server.



Don't worry if you quickly dismissed the dialog with the keys. You can always get back to them by selecting 'Manage Access Keys' at the bottom of the 'Storage' Azure option. You can also regenerate the keys if they have been compromised.




2. Export your SQL Database as a .bacpac file directly to Azure

Now that we have a storage account, we need to hop over to SQL Server Management Studio 2012 and export our database as a .bacpac file to Azure.

Right-click the database and select, 'Tasks' -> 'Export Data-tier Application...'




After selecting 'Next' from the Introduction screen, select the 'Save to Windows Azure' option and then press 'Connect'. Here you can enter the name of your Azure Storage Account created in step 1, as well as the 'Primary' key value provided when the Storage Account was created. Press the 'Connect' button once the information has been entered.



After the connect dialog has been dismissed, add a 'Container' name that will hold the .bacpac file on Azure. Once the information is all correct, press the 'Next' button and begin the export of data from your SQL Server database directly to Windows Azure!



Once successfully exported, the bacpac file will be available for import back on the Azure Management Portal in storage.
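As a side note, if you prefer the command line to the SSMS wizard, SqlPackage.exe (installed with SQL Server Data Tools) can produce the same .bacpac locally, which you could then upload to your storage container yourself. A sketch with hypothetical paths:

SqlPackage.exe /Action:Export /SourceServerName:"." /SourceDatabaseName:"BowlingStats" /TargetFile:"C:\Temp\BowlingStats.bacpac"

This writes the file to disk rather than straight to Azure storage, so it's a two-step alternative to the direct export shown above.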




3. Import your uploaded bacpac to a SQL Database on Azure

Finally, let's go back to the Windows Azure Management Portal and import the SQL Database bacpac file. Select 'SQL Databases' from the options on the left, and then from 'Data Services' -> 'SQL Database,' select the 'Import' option.



The next dialog will allow you to choose the database settings. To select the bacpac we already uploaded, press the folder icon to browse to the available data files in storage.



Expand your storage account and you should see the container name set when the .bacpac file was exported from SQL Server. Select the file and press 'Open' to automatically populate the bacpac URL.



Enter a name for the database, then select your subscription, service tier (do not select 'Web' or 'Business' shown in the screenshot, as they will eventually be retired), performance level, max size, and server. If you have previously created a SQL Server instance, you can choose it and provide the login information. If this is the 1st time, select 'New SQL database server' and press the next button.



You will now need to create the SQL Server instance login information. Note that while Azure sometimes appears to be smart and pre-populates options (e.g. 'Region') with the one closest to you, I did not find this to be the case with this dialog. I believe 'East Asia' was pre-selected, which produced a warning that the storage account and database were not in the same region. Make sure to switch it to the same value as your storage account and the warning will be dismissed. Once you enter your login credentials, press the accept button to complete the process.



That's it! Once provisioned, your new SQL Database and instance will display under 'SQL Databases' in Windows Azure for all of your cloud application and service needs.



Tuesday, August 26, 2014

To take thy code personally - Yes or No? That is the question...

One area of conversation that tends to come up in software engineering is the degree to which one takes their code personally. Litmus test: that shiny gem you wrote gets code reviewed, and recommendations are made to change some things. Do you:
  1. turn red, tell the reviewer how idiotic they are, storm away, and start looking secretly for a new job?
  2. listen to what the other party has to say and make sense of it as a learning experience?
Now this is only one of many examples of taking code 'personally,' and option '1' is admittedly a bit of an extreme. This behavior takes shape and evolves in different ways, but I believe that if this is an attribute you possess, it's one to shake and shed fast for the betterment of your career.

I have for years stated it isn't the physical lines of code I laid down, but the byproduct of what I learned that is the real personal reward. I've written apps over the years that for one reason or another didn't make it to prod or were short-lived. However, the experience gained from what I did is what kept moving me forward.

For example - "Hey Allen, you aren't upset that app you worked on got scrapped because the business did a 180?" Me: "Nope. Because now I know a ton about x, y, and z and am a better developer for it."

I actually had a (technical) debate/argument about this topic with an industry peer within the last few years. I stated that I don't take my code too personally and that the experience gained is paramount (after all, that continued experience helps me move on and do bigger and better things). Sure, I absolutely want what I create to succeed, but if it gets thrown out, rewritten, refactored, or the like, I will not lose any sleep at night. I will, however, if the opportunity presents itself, take note and learn from it to grow. This person couldn't understand why I wouldn't take great pride in what I did and die by the sword if need be for the lines of code I write, and thought that those who took their code personally were more along the lines of 'great programmers'. The problem with this is that if you so much as critique the architecture or design of their code, with all this so-called 'passion' in play, get ready... it's like you just told them they have a 3rd eye or something.

Do not confuse taking your code personally with instilling quality in an application. These are separate, and one does not depend on the other. If the code you are responsible for is used on an aircraft and in charge of avionics - sure, you absolutely care about that code and its quality. However, when the aircraft dev guru tells you your C++ methodologies are outdated and that we are programming for Boeing, not Pan Am in the 70's - again, you need not take this personally. If you care about your craft you will truly try to grow from others' suggestions and wisdom, learn from your experiences, and continue to move forward.

Next, and also important, do not confuse taking your code personally with having passion for your craft as a software engineer. Again, these are not intersecting lines. I have a major passion for my career and craft as a software engineer. It's what keeps me on the computer late at night learning new technologies, preparing for presentations, and blogging. This has nothing to do with taking the code I write personally. Taken from "the grumpy programmer" and said best by someone with uber experience: "Don't fall in love with your code."

Before posting this I read over what I wrote a few times, because I don't want it to come across as though I don't care - because I absolutely do; ask those I work with and they should concur. However, the fact that the VB6 app I wrote 12 years ago and put my blood, sweat, and tears into is no longer used doesn't bother me one bit. The knowledge I gained at that time helped get me to where I am today.

So the answer to the title - 'no', do not take your code personally. Odds are the lines of code you write today will not be around in 10 years. Instead try to find great joy in the experience you gain from what you do because you can take that with you from project to project and it will certainly be around for many, many years to come.

Thursday, August 7, 2014

Upcoming Speaking Events: Orlando Modern Apps LIVE! (LIVE! 360) and ONETUG Meeting

I'm really excited to move to the next step in my venture of helping others in the realm of software engineering with two upcoming speaking engagements. I'll be speaking in November at Modern Apps LIVE! as part of the LIVE! 360 conference in Orlando, FL. I'll also be speaking in September at the Orlando .NET User Group meeting.

Registration for LIVE! 360 in Orlando this fall, which runs November 17-21, is now open. If you have not been to this conference before, you should really consider attending to get out of the vacuum that is the daily routine and see what's really out there in the world of .NET and related technology development. Over the years the conference has done wonders for my career in the way of knowledge gained, in addition to helping land me at my current employer Magenic, for which I'm truly grateful. The wealth of knowledge and networking potential is unmatched by most other events, live or virtual.

The good news is that if you use the code LSPK17 during registration, you will save $600! Click on the image below to go directly to the site to check out the content and sessions available.

Here are the Modern Apps LIVE! sessions I'll be speaking at during the conference:
MAH05: Building a Responsive Single Page App
MAF01 Workshop: Modern App Development In-Depth: iOS, Android, Windows, and Web

Here is information on the ONETUG meeting I will be presenting at in September as well:
SPA/Responsive Design with Allen Conway

I hope to see you all there!

Tuesday, July 22, 2014

Hey Developers - is it UI, UX, UI/UX or something else?

My recent venture into consulting has allowed me the privilege of working with a talented UX team. The interesting thing is I probably fell into the same trap and misconception about their roles and responsibilities as any other developer that has not worked with a group containing this specialization. I chalk that up to a lack of 1st hand experience, so I've appreciated the onboarding of more knowledge. While this may not be the 'perfect' account of the UX professional's world, it's hopefully at least insightful to the large community of developers in a similar position to me.

Recently on our company Yammer site I saw the following quote by my friend and colleague Anthony Handley:
STOP saying UX/UI. It's just UX.
This had me thinking back to the previous times I had Googled (well, Bing'd) the phrases "UI vs UX" or "difference between UI and UX." These searches turned up abstract and sometimes confusing explanations. One link I pulled up was probably the equivalent of a 30 page document! Is it that complex? Other descriptions would say things like "the bike is a UI and the user thinking about the purchase and the tire size is the UX..." OK, strike 1 - awful explanations, and I typically like metaphors and analogies. I still didn't get it.

I was beginning to notice it is taboo to say the wrong acronym. Me, I had always called it 'UI'. I recently turned to another talented UX colleague of mine, Mickey Moran-Diaz, for some consultation and education, to make sure I didn't say the wrong thing. The resulting information built my knowledge to the point where I have a better understanding.

The UX is everything. The UI is only a couple of aspects of the UX. Simple enough so far? Good.

A document I found in my searches highlights this well: UX is not UI. Take a look at the document on that page - it details in a single page all of the aspects of UX vs. UI. Notice how UI comprises only 2 very fine details within the broad scope that is UX. This aligns with my friend Anthony's comment - "It's just UX."


This got me thinking - why all the defensiveness that I've seen from UX people? I started to suspect that this talented group of folks were having their jobs belittled and simplified to wireframing textboxes (see my post on Pencil for those devs that work alone). I came to find out this was at least partially true, hence the passion of the UX team behind defining and building an understanding of the vast world that is UX.

This unfortunate simplification of an industry or profession, albeit wrong, is quite easy to do. Think about it. "An accountant just adds numbers." "A pharmacist just puts pills in a bottle." "Playing basketball is just a ball going back and forth on the court." All are simplifications that miss the entirety of what makes up those professions. As is typical, there is "more than meets the eye." UX is much more than just wireframing a design.


UX is a multitude of practices, procedures, research, design, thought, and artistry, and many, many more things make up being a UX person or team. The document in the link I provided above lists many of those aspects: research, design, brainstorming, requirements, interviewing, prototyping, and the list goes further. The bottom line - don't simplify the UX profession as a group of individuals that produce wireframes for a 'UI' design.

I also found that a lot of this conversation revolves around the context in which the terms are used. When it comes to an individual professional, team, or practice - it is just UX. However, in the context of an application's architecture, I was beginning to be afraid to even call the topmost layer in my app 'the UI layer' anymore, wondering if that was incorrect. With a sigh of relief, it is not incorrect. When speaking in a technical sense and distinguishing layers, it's acceptable to still label it the UI/Presentation/Views/etc. layer. In this context it has nothing to do with the UX process or team, but rather with a technical distinction between the logical layers of an application. Cool, I can still say UI layer!


My experience, though, is that the reality of the situation revolves around the fact that there are a ton of developers out there relative to UX professionals. As for myself, I never had the privilege of working with a UX specialist at any job I've held until now. Most organizations (unfortunately) see this as a 'luxury' position, in addition to slightly misunderstanding it, and therefore it has no presence. It probably falls in line with positions like DBAs, where many companies do not hire them either. Developers do everything, and hence the natural ignorance around these specialized positions.

Hopefully, if you get the opportunity to work with a UX professional or team, you will now as a developer or IT professional have a brief insight into the vast realm of responsibilities and expertise this field entails. There absolutely is much more to being a UX professional than simply doing UI design.

Friday, July 11, 2014

Encrypting Configuration Sections In .NET

Please note before reading - while you might have found this solution from a search result and the post looks long and possibly intimidating - it is not! Once you familiarize yourself with the steps, the process becomes quite easy to duplicate per machine where required. Most of this post provides an explanation of what's happening, as opposed to just the raw steps needed to perform the encryption/decryption.

I have about a half dozen posts in 'draft' form that I want to get off my plate, so in no particular order here is one I've had on the back burner for a while. I'll guide you through encrypting configuration sections in application .config files. Nothing cutting edge here, but still an important topic to cover.


This is applicable to any type of .config file, such as a web.config or an app.config, so it spans the technology spectrum of ASP.NET, WinForms, WPF, Windows Services, etc. I have not yet looked into the equivalent for Win8 Store Apps using LocalSettings or RoamingSettings, so for now this applies to the aforementioned technologies that use .config files. Since LocalSettings are buried in .dat files in the user's profile, the need there may not be as pressing as for .config files that reside in a web site's virtual directory, directly under the well-known inetpub\wwwroot.


So often I'm reading through a book, magazine, or online article and I see the following:

<connectionStrings>... with no encryption

or:

<appsettings>... with no encryption

OK, I get it: the authors typically do not have the time, or do not want to go off on a major tangent, to talk about securing these elements with encryption. However, today that's exactly what I'm going to show you how to do. If for some reason you have not caught on yet: you do not want sensitive information like the above in plain text. If the file is compromised (internally or externally, in the wrong hands for 1 of 100 reasons), encryption is a good layer of protection against the contents being read directly. In any regard, you as a developer should ALWAYS be thinking about security and protecting any type of sensitive data as if it were your own.

The steps are really straightforward, and the process is quite simple and repetitive once you get used to it. I recommend you make a cheat sheet of notes for encrypting and decrypting the configuration sections to aid you on an ongoing basis. Steps 1-2 below only have to be done 1 time. Steps 3-4 are done 1 time per machine where decryption will take place. Steps 5-6 are the ongoing steps to encrypt and decrypt configuration sections as needed.

NOTE: All steps below must be done running the command-line tool as an Administrator. If you do not, you will get various errors when trying to create/export keys, as well as when manipulating permissions.

1. Create a machine-level RSA Key Container (1 time step)

Let's begin by talking about the default provider and why it will not suffice for encryption/decryption needs outside of a single machine.


If you use the default RsaProtectedConfigurationProvider without specifying a custom RSA key container, the encryption/decryption will only work on the machine where the data was 1st encrypted. Obviously this is no good, as you will more than likely develop a solution locally and publish/deploy it to 1...n servers. In this scenario, you need to create a custom machine-level RSA key container and export it to a file, which can then be imported on the servers where the application will run.

If you try to decrypt the .config file manually from the command line with the default provider on a secondary machine where the encryption was not done, you will receive the error below. Your .NET application would likewise get a runtime error upon trying to access any of the settings that had been encrypted.


Also, before we get started, we need to be aware of some permission issues. To prevent the "Creating RSA Key Container... The RSA key container could not be opened. Failed!" error upon creating a new key, you will 1st want to set up permissions on the following directory, where the machine keys reside after being created:

C:\Documents and Settings\All Users\Application Data\Microsoft\Crypto\RSA\MachineKeys


This is the directory where the machine keys from the command line below get created and stored. The issue is that even as an administrator you may not have access to create and manipulate the keys by default. The easiest thing to do is to give the machine's 'Administrators' group 'Modify' permissions on this directory.


Right-click on the 'MachineKeys' directory and ensure the Administrators group has the proper access:



Now that the permissions are set, we can begin the process of creating our container. To create a custom RSA container, run the following command:

aspnet_regiis -pc "SecurityKeys" -exp 

Note: the -exp switch allows the keys to be exportable (next step)


You should see the following success message:


2. Exporting the Custom RSA Encryption Key (1 time step)

Note: If just running through these steps and you want to see how it all works on a single machine locally, you can skip Step #2 and #3 and come back later to export and import to additional machines.

We must export the newly created encryption key so it can then be imported on 1...n machines where our app will run. This way we know decryption will occur seamlessly once we deploy.

To export the custom RSA key container to a file run the following command:

aspnet_regiis -px "SecurityKeys" "C:\SecurityKeys.xml" -pri

Note: the -pri switch makes sure both the private and public keys are exported. This enables both encryption and decryption. Without the -pri switch, you would only be able to encrypt data with the exported key.


You should see the following success message below. Also note the .xml file created in the location you specified in the command.


3. Importing the RSA Encryption Key (1 time step - per machine)

Next we must import the .xml file containing the RSA encryption key on the machine(s) where our app will be running. Obviously we do not need to import it on the current machine, because we already created it there in the machine keys. However, you must copy that file to the servers/machines where the app will run and import it.

I would assume the PowerShell gurus could script or automate this process rather quickly across machines. I'll just show the command required.

To import the RSA Encryption Key, run the following command:

aspnet_regiis -pi "SecurityKeys" "C:\EncryptionTest\SecurityKeys.xml"

You should see the following success message:



NOTE: It is important to DELETE the .xml file containing the keys once they have been successfully imported. This way the keys don't fall into the wrong hands or get imported on a machine where they are not desired. You can always go back to the main machine and export again as needed.

4. Adding Permissions to the Key Container (1 time step - per machine)

The key container we just created now needs the proper permissions added to it so decryption can happen automatically at runtime under the context in which the app runs. If you are running an ASP.NET app in IIS, that will be NT Authority\NETWORK SERVICE. The list of users or groups that need permission depends on the type of app you are running. If it's a WinForms or WPF app, you might add a group like CompanyXYZ\MyAppUsers. If you get any type of runtime error along the lines of:

Failed to decrypt using provider 'AppEncryptionProvider'. Error message from the provider: The RSA key container could not be opened.

...then come back to this section and make sure to grant access both to the key and to the MachineKeys folder for the user context under which your app/service/etc. runs.

First we need to grant access to the key container itself. For this example we'll assume we are running an ASP.NET web application. To grant access to the container, run the following command:

aspnet_regiis -pa "SecurityKeys" "NT Authority\NETWORK SERVICE"

You should see the following success message:


Second, we need to go back to the directory from step #1 and grant just 'Read' permission to the same user. To recall, that directory is as follows:

C:\Documents and Settings\All Users\Application Data\Microsoft\Crypto\RSA\MachineKeys

Right-click the 'MachineKeys' folder and grant just 'Read' access to 'NETWORK SERVICE' as follows:



5. Encrypting the Configuration Section

Now it's finally time to encrypt!! Copy the full path to the directory containing your web.config or app.config file. Notice this super secret value I have in plain text that we want to encrypt:


<appSettings>
    <add key="SuperSecretPassword" value="abc123" />
</appSettings>

Note: This process uses the aspnet_regiis.exe tool, which targets web.config files by default. However, this will still work for any type of app.config file as well. Just close any open instances of your app.config file in VS.NET and rename it temporarily to web.config for the encryption process. Once complete, rename it back to app.config and open it back up in VS.NET. You will see the encryption still works perfectly.

Add the following configuration section above the <appSettings> and <connectionStrings> sections in your web.config or app.config file. The name property is the handle we will use for encrypting from the command line (not the key container name), so remember this so as not to get confused:


<configProtectedData>
  <providers>
    <!-- useMachineContainer must be "true" to use the machine-level container created in step 1 -->
    <add keyContainerName="SecurityKeys"
         useMachineContainer="true"
         description="Uses RsaCryptoServiceProvider to encrypt and decrypt"
         name="AppEncryptionProvider"
         type="System.Configuration.RsaProtectedConfigurationProvider, System.Configuration, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
  </providers>
</configProtectedData>

Close any open .config file being targeted, and run the following command to encrypt the section specified. You can change the section to be encrypted as needed:

aspnet_regiis -pef "appSettings" "C:\EncryptionTest" -prov "AppEncryptionProvider"

The value after the -pef switch indicates the section to encrypt, and the value after the -prov (provider) switch should match the provider 'name' property we set in the config file above.
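Encrypting the <connectionStrings> section is the same command with a different section name:

aspnet_regiis -pef "connectionStrings" "C:\EncryptionTest" -prov "AppEncryptionProvider"

I encrypted both the <appSettings> and <connectionStrings> sections, and you should see success messages like the ones below: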



Now open back up the web.config or app.config file and the sections are encrypted!
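If you'd rather not shell out to aspnet_regiis, the same protection can be applied programmatically through the configuration API. Here is a minimal sketch for a web app, assuming the provider name registered in the config above:

using System.Configuration;
using System.Web.Configuration;

// Open the web.config for the current web application
Configuration config = WebConfigurationManager.OpenWebConfiguration("~");
ConfigurationSection section = config.GetSection("appSettings");

if (section != null && !section.SectionInformation.IsProtected)
{
    // Encrypt with the same custom RSA provider from <configProtectedData>
    section.SectionInformation.ProtectSection("AppEncryptionProvider");
    config.Save();
}

Calling section.SectionInformation.UnprotectSection() reverses the operation if you ever need to decrypt from code.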



6. Decrypting the Configuration Section

As we will see in step #7, decryption happens automatically upon reading any settings in code, but obviously the resulting encrypted sections do not allow you to make changes by hand. You may need to decrypt the .config file to get the sections back into a state where changes can be made.

To decrypt the .config file, run the following command (note the provider switch is not needed):

aspnet_regiis -pdf "appSettings" "C:\EncryptionTest"

You should see the following success message:


Open the file back up and it should be decrypted so changes can be made.

7. Seeing it in action within the application

Guess what code you need to access the <appSettings> or <connectionStrings> values and ensure they get decrypted properly? None!! That's why this is so great: all of it is handled in configuration by the registered provider and key container.


Look at the following line of code in action! It was decrypted on the fly seamlessly and no special coding was needed. That's nice!
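In essence it is just a normal AppSettings read - a one-line sketch (remember to reference System.Configuration):

using System.Configuration;

// The provider decrypts the section transparently on first access
string secret = ConfigurationManager.AppSettings["SuperSecretPassword"];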


This is a really simple way to add security to the sensitive elements in your configuration files. So instead of leaving your entire database connection string in plain text within your .config file, consider taking 2 minutes and encrypting it!

Thursday, May 15, 2014

Visual Studio LIVE! Redmond Exclusive Discount Code!

Hello everyone! Once again I have an exclusive discount for you to use for Visual Studio LIVE! Redmond, coming up August 18-22. Use code UGRD04 or click on the following link http://bit.ly/UGRD04Reg to save $600!!

This is sure to be a fantastic career and learning experience as it is held directly at the Microsoft Headquarters!

If you are curious what the conference is like, see my series of related posts directly from the conference by clicking the following link: VSLive! Blog Posts


Tuesday, March 4, 2014

Visual Studio LIVE! Chicago Exclusive Discount Code!

Hey everyone, I have an exclusive discount for you to use for Visual Studio LIVE! Chicago, coming up May 5-8. Use code UGCH05 or click on the following link http://bit.ly/UGCH05Reg to save $500!!

If you are a software engineer, developer, or IT industry professional you need to check out the main site http://bit.ly/UGCH05 and see the wealth of content to be presented. This is certainly a conference to attend or ask your boss about attending if you have the opportunity.

If you are curious what the conference is like, see my series of related posts directly from the conference by clicking the following link: VSLive! Blog Posts


Wednesday, January 22, 2014

HTTP Error 405 When Making a PUT or DELETE Call to Web API Service

If you build ASP.NET Web API services and test against IIS, you might encounter the following error when trying to execute an HTTP PUT or DELETE:
Error Summary:

HTTP Error 405.0 - Method Not Allowed

The page you are looking for cannot be displayed because an invalid method (HTTP verb) is being used.


Searching the web actually yielded a ton of results, but as usual they were wide-ranging, making it difficult to discern the correct solution. I'm using VS.NET 2012 and VS.NET 2013 on Windows 7 and Windows 8 (variations of these), so this solution is current.

Here it is: updating the values for a few of the HTTP handlers and modules in the web.config solves the issue. First, the solution, which resides within the <system.webServer> section of the web.config file for the Web API project:

<system.webServer>
  <validation validateIntegratedModeConfiguration="false" />
  <modules>
  <remove name="WebDAVModule" />
  </modules>
  <handlers>
  <remove name="WebDAV" />
  <remove name="ExtensionlessUrlHandler-ISAPI-4.0_32bit" />
  <remove name="ExtensionlessUrlHandler-ISAPI-4.0_64bit" />
  <remove name="ExtensionlessUrlHandler-Integrated-4.0" />
  <add name="ExtensionlessUrlHandler-ISAPI-4.0_32bit" path="*." verb="GET,HEAD,POST,DEBUG,PUT,DELETE,PATCH,OPTIONS" modules="IsapiModule" scriptProcessor="%windir%\Microsoft.NET\Framework\v4.0.30319\aspnet_isapi.dll" preCondition="classicMode,runtimeVersionv4.0,bitness32" responseBufferLimit="0" />
  <add name="ExtensionlessUrlHandler-ISAPI-4.0_64bit" path="*." verb="GET,HEAD,POST,DEBUG,PUT,DELETE,PATCH,OPTIONS" modules="IsapiModule" scriptProcessor="%windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet_isapi.dll" preCondition="classicMode,runtimeVersionv4.0,bitness64" responseBufferLimit="0" />
  <add name="ExtensionlessUrlHandler-Integrated-4.0" path="*." verb="GET,HEAD,POST,DEBUG,PUT,DELETE,PATCH,OPTIONS" type="System.Web.Handlers.TransferRequestHandler" preCondition="integratedMode,runtimeVersionv4.0" />
  </handlers>
</system.webServer>

The explanation:

  1. The default ExtensionlessUrlHandler-* handlers displayed above are defined within the %userprofile%\documents\iisexpress\config\applicationhost.config file and do not allow the HTTP PUT and DELETE verbs by default. By removing and re-adding the handlers with all of the HTTP verbs we want to allow, processing will continue as expected.
  2. Unfortunately, updating the handlers is not enough. The WebDAV HTTP publishing feature will block HTTP PUT and DELETE calls regardless of our previous modifications. WebDAV is a feature allowing access to files and folders via the internet as an alternative to FTP. If you are not using it in the Web API project, removing the module frees up the restriction on these HTTP verbs. If you want to read more about what WebDAV is, please read this.
After these modifications have been made, try making an HTTP PUT or DELETE call using Fiddler or Postman, and it should now work as expected. If you are reading this while using HTTP POST for all operations, bypassing good RESTful design by having multiple POST actions on a controller (distinguished only by the action name in the URL rather than by the verb defining the route), reconsider and try these modifications if their absence previously prevented you from creating that rich RESTful service you initially set out to make.
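If you need a quick endpoint to exercise the verbs against, a trivial Web API controller like this sketch works (Web API 2 style; the names are hypothetical):

using System.Web.Http;

public class ItemsController : ApiController
{
    // PUT api/items/5
    public IHttpActionResult Put(int id, [FromBody] string value)
    {
        // Echo back what would have been updated
        return Ok(string.Format("Updated item {0} to '{1}'", id, value));
    }

    // DELETE api/items/5
    public IHttpActionResult Delete(int id)
    {
        return Ok(string.Format("Deleted item {0}", id));
    }
}

Before the web.config changes, a PUT or DELETE to /api/items/5 returns the 405; after them, both return 200.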

Tuesday, January 7, 2014

Creating a Unit Test Using Moq to Stub Out Dependencies

One of the main powers behind a mocking framework like 'Moq' for use with unit testing is the ability to write true unit tests, as opposed to integration tests. Remember, unit tests should run quickly and provide fast validation of the code being tested. They should be free of external dependencies, focusing instead on testing the code at hand. Examples of these dependencies include a database, web service, 3rd party component, network share, file, etc.

If all of your unit tests were actually integration tests, they could be lengthy to run and even more challenging to configure. What if you are developing offline and that web service is not available? Should you then not be able to test your code? The answer of course is 'no' - you should still be able to test your code.

Use of a mocking framework like Moq allows us to stub out these dependencies with expected behavior, thus allowing the code to be tested independently of the actual integration with that dependency. The primary example would be stubbing out data access calls that would normally hit the database at runtime.

The following example relies on the Repository pattern, with an Interface defining the calls that would be implemented to reach out to the database. This abstraction allows Moq to substitute the implementation and behavior with a predetermined expectation for our unit test. If you happen to already be using the Repository pattern but not unit testing, you should be able to fit the code below into your solution, once you understand it, and begin building up some unit tests.

Here is the Repository Interface we are working with for the example:
public interface IRepository<T> where T : class 
{
  IList<T> GetAll();
}

Here is the implemented Repository class. There is a lot more abstraction we could apply to the Repository, in addition to using the UnitOfWork pattern, but the focus here is on the unit test, so this is a basic implementation:

public class PersonRepository : IRepository<Person>
{
  private AdventureWorksEntities db = new AdventureWorksEntities();

  public IList<Person> GetAll()
  {
     var people = from p in db.People
                  select p;

     return people.ToList();
   }
}

In this test project I used Entity Framework connected to the 'AdventureWorks' database. I expanded on the Person POCO and added an arbitrary method that finds an employee by 1st and last name. In reality a method like this would probably be on a generic repository that takes an expression as the parameter, but again the focus here is to show how we can unit test this method, which uses the Repository, without actually calling the database. Here is the partial class I added with the 'GetPersonByName' method:

public partial class Person
{
  private readonly IRepository<Person> personRepository;

  public Person(IRepository<Person> personRepository)
  {
    this.personRepository = personRepository;
  }

  public Person GetPersonByName(string firstName, string lastName)
  {

    var allPeople = this.personRepository.GetAll();

    return allPeople.Where(x => x.FirstName.ToLower() == firstName.ToLower() &&
                                x.LastName.ToLower() == lastName.ToLower())
                    .FirstOrDefault();
  }
}

The last coding step is to write the unit test. I 1st need to build up a Person collection that will be used by the mocked repository. This can be done in a Setup method marked with the [TestInitialize] attribute. I will create a simple test that ensures a valid instance of Person is returned. The idea here is to test the logic within the method to make sure it is sound and working, as opposed to testing the actual database call itself. I will leverage Moq to inject the mocked Interface into the Person class. If you have not used Dependency Injection before, here is another 'plus' for doing it, given its benefits for testing. Here is the complete PersonTest class with the simple unit test:

[TestClass]
public class PersonTest
{

  private IList<Person> people;

  [TestInitialize]
  public void Setup()
  {

    people = new List<Person>()
    {
      new Person()
           {
             Title = "Mr.",
             FirstName = "Allen",
             LastName = "Conway",
             PersonType = "EM"     
           },
      new Person()
           {
             Title = "Mr.",
             FirstName = "John",
             LastName = "Smith",
             PersonType = "SC"
           }
     };

  }

  [TestMethod]
  public void PersonSearch_ShouldFind_ValidInstance()
  {

    //Arrange
    var repositoryMock = new Mock<IRepository<Person>>();
    //Setup mock that will return People list when called:
    repositoryMock.Setup(x => x.GetAll()).Returns(people);
    var person = new Person(repositoryMock.Object);

    //Act (mocked up IRepository will supply the data when calls are made to the repository)
    var singlePerson = person.GetPersonByName("Allen", "Conway");

    //Assert
    Assert.IsNotNull(singlePerson); // Test if null
    Assert.IsInstanceOfType(singlePerson, typeof(Person)); // Test type

  }

}

If you run the unit test above, you will see it passes, and passes quite quickly (in about 100ms on my machine). Had I actually called out to the database, it would have returned the 20,000 rows in the Person table, and that is not what I'm testing here. To prove the mock object (in this instance it's actually a stub returning a known state, but that's for another post) behaved and returned the collection we expected, right-click on the test name and select to debug (I'm using the MSTest runner in VS.NET 2013). If you walk through the code, guess what is returned when the call to .GetAll() on the repository is made within the 'GetPersonByName' method? Our collection created within the test class:
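If you want the test itself to prove the repository was consulted, rather than inspecting it in the debugger, Moq can assert the call explicitly. An optional extra line for the Assert section of the test above:

//Assert the stubbed repository's GetAll() was called exactly once
repositoryMock.Verify(x => x.GetAll(), Times.Once());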



This is only the basics of unit testing and of using a mocking framework like Moq, but I put it out there because I see such a disparity in our field between those creating unit tests and those not creating or using them at all. It seems like people are either all-in gung-ho or do absolutely none of it. Hopefully this post showed a straightforward, simple example for those wanting to get into unit testing or begin using a mocking framework who are not already doing so today.

Thursday, January 2, 2014

Using Pencil to Create Wireframes for GUI Prototyping

If you have ever lifted weights for strength or toning, you probably know there are some muscle groups in your body that are stronger and easier to work out than others. For me the muscles in the middle of my upper back seem to always be the weaker ones.

I offer this analogy as a segue into the 'software engineering' muscles of all the disciplines and hats we must wear. One discipline at which I'm not particularly an expert is UI design. Sure, I've used a splash of rich web content on sites over the years, but nothing that's winning awards for style. Listen, I don't spend day in and day out doing it, so I don't consider myself a seasoned expert. As DBAs are to database development, UI/UX designers make entire careers out of the trade.


However, most of us 'all around' software engineers do not have the luxury of working with a true designer, and at some time or another must design a UI from scratch. We probably resort to whiteboard drawings, paper and pen, or going straight to the VS.NET IDE designer to lay down some HTML, XAML, or raw form controls.


One main prototyping tool that comes to mind immediately is SketchFlow. SketchFlow is a great tool that was integrated with Microsoft Blend, and Blend is now part of the VS.NET IDE install (since VS.NET 2012 - check versioning for support). About a year ago I was also introduced to a free tool named Pencil for creating wireframes for GUI prototyping in applications. It's the latter tool, Pencil, that I'll talk about today.

I've had mixed opinions about 'prototyping' over the years, for various reasons I could cover in another post. Most of my angst came from corporate environments where accounting, HR, or some other internal company party requesting a custom application was a bit too overreaching in wanting to design it themselves, which was a slippery slope. In many of these situations it was best to have the end party define the business problem and let the engineers do what they know best: solve that problem via a custom application.

However this is not always the situation, and in many environments (especially where the client is paying for an application) a sketch or mock-up of the final product is required. Could you imagine ordering and paying $25k for a car at the dealer with no idea what it would look like, knowing only that it would get you from point 'A' to point 'B' because that was your problem? No, you must know what the car looks like, as it is a big part of the purchasing decision. "Hey, I wanted a 4 wheel mini car, not a moon bus!"

Regardless of the scenario, you control how widely the wireframes you create are distributed. You may use them only for yourself, to have a visual of what to do when you dig into the HTML, XAML, etc. You may choose to keep them internal to the development team you might be delegating the creation to. Lastly, you might be required to show a physical design to a client before they will approve it and allow you to proceed. Whatever the audience, this is a great tool to have in the toolbox for UI mock-up/sketch/prototype design.


Begin by downloading Pencil from the 'Pencil Project' website located below:


Pencil Project


Once installed you will see a blank canvas where you can drag shapes, controls, etc. onto the screen. I'm going to pull a few shapes onto the canvas, including a few tabs, as seen below:




I just threw that sketch together in a few minutes, and it is quite rudimentary. However, if you need to really design an application's GUI, this tool has a nice set of shapes and controls for your needs. In fact, in newer versions, shapes have been added for iPhone and Android tablets to mock up screens for these applications as well.


The one gripe I have about the tool is getting used to how to manipulate the shapes. When I 1st dragged an iPad tablet onto the canvas, it was too large for my screen and the 'size' values were grayed out in the toolbar. I'm accustomed to a right-click -> properties -> change values methodology from most applications, but this does not hold true for Pencil. It takes a little getting used to, but for being 'free' I cannot complain. Some controls, like a table, are defined by pipe-delimited values as displayed below:




Lastly, the individual mock-ups can be exported as .png files for use in documentation, and the entire Pencil project file (*.ep) can be exported for storage in your source or document control systems.


If you have not done any prototyping before, or have used rudimentary tools to attempt the process, give Pencil a try. It might take the design or requirements document on your upcoming project to the next level. Tools such as these will certainly help elaborate and articulate the end product in a visual, as opposed to a verbal, manner.