Friday, December 14, 2012

Visual Studio LIVE! Orlando Day 5 - All about MVC 4


1 more day to go and I'm looking forward to a full day on MVC! I have become a big fan of MVC over the last year, previously being a web forms developer for the past 9 or so years. The agenda looks great (this is not some intro course) so let's get to it!

Mastering ASP.NET MVC4 in Just One Day
Tiberiu Covaci, Microsoft MVP and Senior Trainer & Mentor


I'm looking at the agenda for today and it looks like we will be covering soup to nuts on MVC. Obviously there are a lot of web forms developers in attendance seeking out knowledge on MVC, so there has to be a general introduction to 'Model-View-Controller'. As always, the typical diagram displaying the general interaction between the parts is displayed below:

If you are new to MVC there are some goals, even say 'advantages', to this framework. Tibi mentioned testability, tight control over markup, leveraging the benefits of ASP.NET 4, and conventions and guidance. I actually spoke in depth to some of these in a post I did a couple of years ago and you can look at it here: ASP.NET Web Forms vs. ASP.NET MVC

One of the other main features of MVC is its extensibility. ASP.NET MVC is actually open source, so modifying it to your needs presents an endless amount of customizations. For example, you don't have to use just the .aspx or Razor view engines. Others are available, like Spark, if you want to use them. Personally I'm a big fan of the Razor view engine out of the box and it fits my needs. However, it's nice to have that flexibility if needed. Now with NuGet, you can get a slew of different packages to help with development. In fact, if you strictly work with blinders on, using only out-of-the-box MVC functionality, you are probably missing out on something that will make developing your application easier. How do you find out about the packages available? Blogs, conferences, magazine articles, word of mouth from experience, etc.

It's always tough when explaining to those new to MVC which piece to explain 1st (Model-View-Controller). Starting with the Model is a good idea as you can really think of these as your old business logic layer classes. On this note a few of the architectural patterns that can be used in MVC were brought up: Repository, DDD (big fan), and the service pattern. If you are not familiar with DDD, in a nutshell the focus of the architecture is on the business domain, and after all this should be the focus of the application. Someday I would like to see an MVC implementation using DDD, but today we will be using the Repository pattern, of which I am also a big proponent.

The example used today had a single model class named 'Course'. This is where the getters/setters are for the object, along with any method calls not directly related to persistence. The repository pattern is responsible for the interaction with the persistence layer (i.e. Entity Framework) and wraps the basic CRUD methods. To keep the focus on MVC, there was not any true implementation in the repository to fetch data; instead, the data was loaded from static data in a mock layer named 'TibiPublicSite.MockRepository'. The CourseRepository implemented ICourseRepository and returned hardcoded/mocked-up data. Works perfectly because this is not a session on EF or other ORMs.
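As a rough sketch of what that shape looks like (the member names beyond 'Course', 'ICourseRepository', and 'CourseRepository' are my assumptions, not Tibi's actual code):

```csharp
using System.Collections.Generic;

// The model: plain getters/setters, no persistence logic.
public class Course
{
    public int Id { get; set; }
    public string Title { get; set; }
    public int Duration { get; set; }
}

// The repository contract wraps basic CRUD; the MVC app only sees this interface.
public interface ICourseRepository
{
    IEnumerable<Course> GetAll();
    Course GetById(int id);
    void Save(Course course);
}

// Mock implementation returning hardcoded data (the TibiPublicSite.MockRepository layer).
public class CourseRepository : ICourseRepository
{
    private static readonly List<Course> Courses = new List<Course>
    {
        new Course { Id = 1, Title = "Mastering ASP.NET MVC4", Duration = 1 }
    };

    public IEnumerable<Course> GetAll() { return Courses; }
    public Course GetById(int id) { return Courses.Find(c => c.Id == id); }
    public void Save(Course course) { Courses.Add(course); }
}
```

Because nothing here touches a database, swapping in a real EF-backed implementation later means changing only the class behind the interface.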

One thing I should mention that was being done in the solution referenced throughout the day was the layering of the application. You will notice by default that an MVC project will have all of the pieces in a single project. This is ok, and I recommend beginners just stick with this until they get the hang of the method calls and flow. However, MVC can be considered an architecture for the UI only, and this is why separating out the additional layers is a good idea. Here are the layers from today's application:

  • TibiPublicSite (MVC)
  • TibiPublicSite.MockRepository
  • TibiPublicSite.Models (contains repository interfaces and model classes)

The next key piece of the MVC architecture is the 'Controllers'. These are the server-side classes, and each MVC request maps to an Action method in a Controller class. Remember, MVC uses routing, and the URL route dictates an action on the controller to be executed. The classes here implement 'IController'.

An Action is a method that returns an ActionResult. ActionResult is the base class, but there are other result types as well, like ViewResult, RedirectResult, and JsonResult to name a few. When you add a controller, you can actually scaffold it based on an existing class (make sure to build your project 1st so the classes are available for selection). This will create many of the action methods already needed on the controller, based on the model class selected.
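A quick sketch of a controller returning a few of those result types (names are illustrative, not the session's exact code):

```csharp
using System.Web.Mvc;

public class CoursesController : Controller // Controller implements IController
{
    // GET /Courses/Index — returns a ViewResult
    public ActionResult Index()
    {
        return View();
    }

    // GET /Courses/Data — returns a JsonResult instead
    public ActionResult Data()
    {
        return Json(new { Title = "MVC4" }, JsonRequestBehavior.AllowGet);
    }

    // Returns a redirect result back to Index
    public ActionResult Old()
    {
        return RedirectToAction("Index");
    }
}
```

All three methods are declared as returning ActionResult, which is what lets an action swap between view, JSON, and redirect results freely.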


Using Dependency Injection on the Controller is a great way to loosely couple the persistence mechanisms from the controller itself. When the controller knows too much about the actual persistence details, you create tightly coupled code that is less reusable and more difficult to maintain in the future. Having the controller have intimate knowledge of, and make direct calls to, the database is not a good idea. The repository is injected into the controller via its interface and is accessible by the methods on the class. It's better to call _repository.Save(myObject) than to have to know how to directly work with the Entity Framework context, ADO.NET code, or other persistence mechanisms. You might be thinking that you could still push persistence details down to the model, but by injecting the repository into the Controller, you set yourself up much better for unit testing.
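Constructor injection is the usual way this looks (a minimal sketch with my own naming):

```csharp
using System.Web.Mvc;

public class CoursesController : Controller
{
    private readonly ICourseRepository _repository;

    // The concrete repository (EF-backed, mock, whatever) is injected in;
    // the controller only ever knows the interface.
    public CoursesController(ICourseRepository repository)
    {
        _repository = repository;
    }

    public ActionResult Index()
    {
        return View(_repository.GetAll());
    }
}
```

In a unit test you can hand the constructor a fake ICourseRepository and exercise the action with no database anywhere in sight.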

However, to take this a step further, in reality there is not a 1-to-1 relationship between the View and the Model. Therefore we will be using customized classes that contain a combination of data that may span more than 1 model class. These classes are named 'ViewModels'.

HTTP verbs are decorated on controller actions in the form of attributes. The HttpGet is not required as it is the default. You can still add it for consistency and to be explicit. If you try and call an action with the incorrect verb it will obviously not get called.

It is important in controller actions to check if (ModelState.IsValid) to ensure model binding worked correctly.
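For instance, a POST action guarding on ModelState might look like this (a sketch with my own names; the repository field stands in for whatever persistence the app uses):

```csharp
using System.Web.Mvc;

public class CoursesController : Controller
{
    private readonly ICourseRepository _repository; // injected elsewhere

    [HttpPost]
    public ActionResult Create(Course course)
    {
        // Model binding populated 'course' from the posted form; if any
        // validation rule failed, redisplay the form with the errors.
        if (!ModelState.IsValid)
        {
            return View(course);
        }

        _repository.Save(course);
        return RedirectToAction("Index");
    }
}
```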

Routing is a part of ASP.NET (System.Web.Routing). /Bikes/MountainBike is much better than Products.aspx?Item=MountainBike. Routing is defined in Application_Start() and contains one default route: {controller}/{action}/{id}. The order routes are listed in is important: the routing process trickles down top to bottom to find the 1st match. If you have a match that was unintended, you might get the wrong controller action.
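A sketch of a route table (the 'CourseDetails' route is my own illustrative example, added above the default to show why ordering matters):

```csharp
using System.Web.Mvc;
using System.Web.Routing;

public static class RouteConfig
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        // More specific routes go first — matching trickles top to bottom.
        routes.MapRoute(
            name: "CourseDetails",
            url: "Courses/{id}",
            defaults: new { controller = "Courses", action = "Details" }
        );

        // The default route: {controller}/{action}/{id}
        routes.MapRoute(
            name: "Default",
            url: "{controller}/{action}/{id}",
            defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional }
        );
    }
}
```

If the two routes were reversed, a URL like /Courses/5 would fall into the default pattern and try to treat "5" as an action name.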

You know, sandwiched here in the middle, I have to say how exciting some of these technologies are to work with. Take a look at the following line of code:

var model = _repository.GetByCriteria(c => c.Title.Contains(searchTerm));

MVC, lambdas, Dependency Injection, Entity Framework, type inference. One line of code that does so much would have taken many times the effort without these technologies. This is why it's so important to stay current, because otherwise you may be putting in 10x the effort when there is a better way.

Route debugging can be aided by a NuGet package created by Phil Haack named 'RouteDebugger'. This is a fantastic tool: when navigating to a URL, you get the bottom portion of the browser breaking down the route, which patterns it matched, and any constraints. This will help with what I discussed previously about trying to determine which route from Application_Start the URL matched. Just remember to disable this in configuration before deployment or when not needed:

You know, he started talking about web.config transformations and this is something I'm embarrassed to say I have not used. The result: overcomplication in code, or going back and toggling values between test and production. Essentially it does an XML transformation (XDT) on the web.config file for both 'Debug' and 'Release' scenarios. Need to start using this.
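For example, a Web.Release.config transform might swap in the production connection string at publish time (the names and server here are purely illustrative):

```xml
<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <!-- Replaces the matching entry's attributes only in Release builds -->
    <add name="DefaultConnection"
         connectionString="Data Source=ProdServer;Initial Catalog=TibiPublicSite;Integrated Security=True"
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
  </connectionStrings>
</configuration>
```

The base web.config keeps the development values; the transform only describes the deltas for each build configuration.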

Views are another major component of MVC. They are as close to a page as it gets. Strongly typed views inherit from ViewPage<T> for .aspx pages. If using Razor, add the @model directive to your Razor pages.

The Razor View Engine syntax is slick. I actually did not start using MVC until v3 so I never got into the .aspx engine syntax. In fact if you were driven away from MVC because it looked like classic ASP, give it another look and try out the Razor syntax. Razor syntax uses HTML helpers which can be accessed in the view via Intellisense by starting with the '@' symbol. Remember when adding a new Razor view, the difference between a 'Partial' view or not is the inclusion of the _Layout view. Partial views will not include it.

Tibi forgot to add the @RenderBody() helper to _Layout.cshtml (think of the _Layout view like a Master Page). This helper is required and is analogous to the ContentPlaceHolder server control in a MasterPage in web forms. The @RenderBody method indicates where view templates that are based on this master layout file should “fill in” the body content.

Make sure to declare the model directive at the top of the view, like @model IEnumerable<TibiPublicSite.Models.Course>. This will allow strongly typed access to the 'Courses' data within the HTML helpers in the view. If you want to remove the redundant namespace declaration, add it to the web.config as an imported namespace. It can then be shortened to @model IEnumerable<Course>. Even using a 'using' statement at the top of the view is acceptable as well.

The Razor HTML helpers flow very nicely in with existing HTML. With Razor syntax, JS is added in a section called @section scripts{} and CSS is in a section called @section styles{}. You can also use the @RenderSection("scripts", false) helper in the _Layout view to dictate a section in other views. If you use JS or CSS on most views, this will keep its placement consistent.
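A small sketch of how the two halves wire together (file contents are illustrative):

```cshtml
@* In _Layout.cshtml *@
<head>
    @RenderSection("styles", required: false)
</head>
<body>
    @RenderBody()
    @RenderSection("scripts", required: false)
</body>

@* In an individual view *@
@section scripts {
    <script src="~/Scripts/courses.js"></script>
}
```

With required: false, views that don't declare the section still render fine; the layout just emits nothing there.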

For those new to MVC and the Razor engine, Html.ActionLink will be your friend for making links on a page that route to controller actions. Remember, there is no physical page on the server (like web forms). We have to route to an action on a controller to execute our code on the server. An example of this is the following: @Html.ActionLink("Start", "Start", "Courses"). This makes a link named 'Start' that will route to the 'Start' method on the 'Courses' controller. Remember, MVC is based on convention, not configuration like web forms, so you do not have to write 'CoursesController' because the 'Controller' suffix is redundant and known.

If you want to pass parameters, you pass them as object routeValues: @Html.ActionLink("Details", "Details", new { id = item.id }, new { @class = "abc" }). Notice the '@' in front of class because 'class' is a reserved keyword.

The @using (Html.BeginForm()) block wraps where the form starts and ends, and uses IDisposable so the closing form tag is emitted when the block is disposed.

Model Binding is probably my favorite thing in MVC. Remember the days of having to explicitly pull all form values server side to send off for manipulation (i.e. SaveValues() or something)? Not anymore. Model Binding will automatically send form values back to the controller and populate the object in the controller method parameter list. This saves on a lot of redundant code. I guess I get more excited about this because I did web forms for so many years and would sometimes have large methods to scrape values off controls into a business object. 

To add to how easy this all wires up, you can scaffold the View off a Model or ViewModel class and have all the markup created for you! Again, this was a lot of manual work in web forms (unless using a FormView or something, but the server-side operations had a heavy footprint and got messy quick). When scaffolding, you can select the class and type of View to create. The screen below shows how I scaffold a 'Create' View for the 'Course' model class:


Want it even more streamlined? Use @Html.EditorForModel() on the view for the bound model. What you get from a single line is the entire form of fields to edit! One line, are you kidding me? Again, us old web forms folks appreciate this a lot. By the way, if you are wondering how you might exclude or control which fields show up on the form (because maybe you don't want SSN shown, or at least want it disabled), you can control this by using data annotations on the model properties that the fields map to. Have a look at the System.ComponentModel.DataAnnotations namespace for the attributes available.

The annotations also work hand in hand with the jQuery validation scripts. Remember having to explicitly add all the RequiredFieldValidators or client-side JS? Not anymore, because ASP.NET will use these annotations to automatically do field validation based on the presence of the jQuery validation scripts. In fact, these scripts will already be in your project upon creation, so you have to do very little to get it all working.

Annotations do (even with me) seem a bit like decorating values for UI purposes a few layers down, which might be viewed as improper. An alternative is to create a metadata class that abstracts away the annotations. So I might create a class called CourseMetadata and decorate the model class with [MetadataType(typeof(CourseMetadata))]. Just remember the annotations are good for EF as well if doing a 'Code First' approach, as they will define the constraints on the fields in the database. I do admit abstracting away the annotations into a metadata class is pretty nice and probably something I will implement in future applications. The only downside might be developers coming behind not initially seeing the annotations where they expect them (on the Model class). This is why commenting never hurts. Just add a little note in the XML block on the class.
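The buddy-class arrangement might look like this (property names are my own illustration):

```csharp
using System.ComponentModel.DataAnnotations;

// The annotations live in a separate metadata ("buddy") class...
public class CourseMetadata
{
    [Required, StringLength(100)]
    public string Title { get; set; }
}

// ...and the model just points at it. Property names must match.
// (These same annotations also drive EF Code First column constraints.)
[MetadataType(typeof(CourseMetadata))]
public class Course
{
    public int Id { get; set; }
    public string Title { get; set; }
}
```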

If you happen to be doing mobile apps, you could add separate versions of the View using the nomenclature 'index.mobile.cshtml'. This is a nice way to differentiate views. I would not be surprised if in the next 2-5 years a feature becomes available to streamline this duplication of Views between desktop and mobile. Today you still have to create separate views. Remember (from the other day), the user-agent is pulled when accessing the site, and ASP.NET knows which version of the views to serve up. Check out "Framework64\v4.0.30319\Config\Browsers\" to get a list of the user-agent browser files. You can open these files in VS.NET if you want to inspect them. After the proper user agent is matched, the view name by convention (index.mobile.cshtml) is rendered.

"How many want to hear about Web API?" The entire room raised their hands! How appropriate to finish the day and conference with a technology at the top of my list in interest in learning. It allows the creation of REST based services.

Now here is something to wrap your head around. Just learning MVC? Well, there is a camp of folks writing applications by making all calls to the controller via Web API. One of the goals here is to push more functionality to the client and prevent expensive server calls. Libraries like Knockout.js have to be leveraged to help with this. By doing this you make lean, responsive applications. It's a balance between what goes on the client and what's on the server. One thing at a time for now...

Tibi added a folder to the existing MVC app named 'api' under the existing 'Controllers' folder. The 'api' will match the convention for the Web API URIs. This folder will contain the controller actions for 'api' calls. Notice in the routing (Application_Start) that no 'Action' is defined. This is because the HTTP verb determines the action for us in API calls.
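Roughly, the pieces look like this (a sketch, not the session's exact code — the controller name and route template are assumptions):

```csharp
using System.Collections.Generic;
using System.Web.Http;

// Registered in Application_Start: no {action} segment — the HTTP verb picks the method.
// GlobalConfiguration.Configuration.Routes.MapHttpRoute(
//     name: "DefaultApi",
//     routeTemplate: "api/{controller}/{id}",
//     defaults: new { id = RouteParameter.Optional });

public class CoursesController : ApiController
{
    private readonly ICourseRepository _repository; // injected elsewhere

    public CoursesController(ICourseRepository repository)
    {
        _repository = repository;
    }

    // GET api/courses
    public IEnumerable<Course> Get()
    {
        return _repository.GetAll();
    }

    // POST api/courses
    public void Post(Course course)
    {
        _repository.Save(course);
    }
}
```

A GET to /api/courses lands on Get() and a POST lands on Post() purely by verb-matching convention.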

Using Fiddler is the easiest way to make API calls. Just click on the 'Composer' tab and select the proper HTTP verb along with the URL to call. You can also modify the request headers or body from here as well. This is a nice place to define the user-agent if you are testing that functionality. To build up the body, create JSON like {"Title": "My New Course", "Duration": "1"}. Make sure the headers include "Content-Type: application/json" (without quotes).

He finished off with a quick demo of a Dependency Resolver named Unity (Unity.Mvc or Unity.WebAPI; there are different versions for different technologies). Unity is a lightweight, extensible dependency injection container that supports interception, constructor injection, property injection, and method call injection. You can get Unity from NuGet and add it as an installed package to your application. This way you do not have to manually instantiate the repository type for each class. In Application_Start() you make a call to Bootstrapper.Initialize() to register the container mappings from Types to their concrete equivalents.

container.RegisterType<ICourseRepository, CourseRepository>();
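In context, the bootstrapper might look something like this (a sketch; the exact namespaces depend on which Unity integration package you install):

```csharp
using Microsoft.Practices.Unity;
using System.Web.Mvc;
// The UnityDependencyResolver type comes from the Unity.Mvc package; the
// namespace varies by package version, so treat this using as illustrative.

public static class Bootstrapper
{
    public static void Initialize()
    {
        var container = new UnityContainer();

        // Map the interface to its concrete type; any controller asking for
        // ICourseRepository in its constructor gets a CourseRepository.
        container.RegisterType<ICourseRepository, CourseRepository>();

        // Hand the container to MVC so it builds controllers through Unity.
        DependencyResolver.SetResolver(new UnityDependencyResolver(container));
    }
}
```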

Whew, that was a great day with a lot of information on MVC. Nice job Tibi!

Wrap Up Day 5 - That's a wrap!


Well it's the last day of the conference, and the 1st day the sun has been out since prior to conference registration! Sorry to the people that came here from the north and expected to see the 'sunshine state' along with 75 degree weather.

It's bittersweet at the end of the conference because it's sad it's over, but it will be nice to go home. The camaraderie among attendees and the networking that occurs is an awesome byproduct of the conference. My brain is absolutely jam-packed with good information. I applaud all of the presenters and conference organizers and rank this as one of the best Visual Studio LIVE! conferences I've attended. If you have never attended a conference like this, I highly recommend breaking out of your cultural "bubble" at work, home, etc. and attending to see where the community is headed. These conferences do a job second to none in validating and presenting a sort of "what's good and what's maybe not so good" based on community feedback (presenters don't pitch anything and just state the facts).

I am already looking forward to next year's conference, but in the meantime I feel I have the information needed to make great decisions on leading technology for the upcoming 2013 year. Hope everyone has a great trip home and see you next year! #Live360


Thursday, December 13, 2012

Visual Studio LIVE! Orlando Day 4


Well it's day 4 of the conference and day 3 of the session tracks. Today for me should probably be called "The day of Marcel", as I will be attending (3) of Marcel de Vries' sessions. I'm ready, so let's get going!

Building Single Page Web Applications with HTML5, ASP.NET, MVC4, Upshot.js and Web API
Marcel de Vries, Microsoft MVP and Technology Manager, infoSupport




Single Page Applications, or 'SPAs', are web applications where one HTML page contains the whole application. You might wonder why this would be useful. Well, the idea is that they are lean and responsive, run on any device, and have the ability to work offline. SPAs rely heavily on JavaScript on the client as well as, say, ASP.NET MVC on the back end. It was noted that those familiar with Silverlight and MVVM would be comfortable making SPAs. We can use the Web API to allow communication between the client and server via JS and Ajax.

As always, my ears perked up the minute Web API was being discussed again! The more information presented the better, as this is a relatively new technology (ASP.NET Web API specifically, not REST-based services in general) and I think it will be a great tool in the toolbox. A Web API controller inherits from the class ApiController, and the default project template will already create a base set of classes where this is done for you. He set up a basic Web API service and was using Fiddler to make calls to the service. The nice thing with Fiddler, if you have not used it before, is being able to inspect the request and response, including the headers.

Next, MVVM was brought up, which was interesting because normally I do not associate this with web applications. What was being explained was having the ViewModel in JS with observables for the UI to react upon. Knockout.js was used for the ViewModel and jQuery was used for creating the observable entities. Knockout.js enables observables and dependency tracking, declarative bindings, and templating. There are several JS libraries that do templating, but Knockout is a good choice. I saw John Papa at this year's code camp show multiple templating examples using Knockout. Currently not all browsers support JS getters and setters (Internet Explorer), therefore all observable objects are functions. Observables do take up a little memory, so keep this in mind when creating them.

Data Binding seems pretty straightforward using Knockout. You can bind the ViewModel to HTML elements using the 'applyBindings' method. This method is typically called in the document ready function. Some of the available bindings are text, html, css, style, and attr.
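A small sketch of what that looks like (names are my own; this assumes knockout.js and jQuery are loaded on the page):

```javascript
// The ViewModel: observables are functions (call with no arguments to read,
// with a value to write) since not all browsers support getters/setters.
var courseViewModel = {
    title: ko.observable("Mastering ASP.NET MVC4"),
    duration: ko.observable(1)
};

// Wire the bindings once the DOM is ready; markup such as
// <span data-bind="text: title"></span> then updates whenever title changes.
$(document).ready(function () {
    ko.applyBindings(courseViewModel);
});
```

Calling courseViewModel.title("New Title") later re-renders every element bound to title with no manual DOM work.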

Templating is also an interesting feature of Knockout. The 'data-bind' attribute can be added to a <div> and the bound ViewModel data will be used if it exists.

Sammy.js is a library (Nav.js is another) I had not heard of, and it is used for routing in the application. It's actually pretty nice, because it allows setting up routes for navigation in the form of a JS function. Based on the route called and parameters extracted from the URI, methods can be called and the page can be manipulated as needed. He was using jQuery to do fading and displaying of different <div> sections, yet still all in a SPA. To be a true SPA with rich deep linking, these types of methods are going to need to be used in order to provide a rich page with all the required functionality. The routing concept seen here, in MVC, and in Web API is familiar to developers using any of these technologies, but less likely for people still doing web forms exclusively. Looking at this strengthens, to me, the need to branch into or at least understand MVC, because so many technologies are leveraging routing.

So what we have is routing being done in (2) places: on the MVC controller using Web API to serve up data, and on the client using Sammy.js to react to the navigation and get that served-up data from the service via an Ajax call. As he explains, it's really a tango between the (2) to supply functionality for navigation and data needs. Make sure to minimize communication and cache Ajax calls when possible. Amplify.js is a library that provides support for caching the data for Ajax calls. There are properties on the Ajax calls that dictate whether caching will be used and any provided timeout values. This may not be as applicable on larger sites with a wide array of data binding that needs to be fetched on demand, but for a SPA this fits well, as there is probably not going to be a ton of new data fetching due to the streamlined nature of a single page's functionality.

Single Page Applications provide a new type of web client. The result is fast, responsive apps with options for offline use. The JS libraries used help facilitate the needs of the SPA, but he noted the ones he used are not the only ones available; research the others to see if they meet additional needs of your SPA.


Intellitrace, What is it and How Can I Use it to My Benefit?
Marcel de Vries, Microsoft MVP and Technology Manager, infoSupport




Marcel started out by stating he sees all of these developers that have VS.NET 'Ultimate' but have no idea what's in it. IntelliTrace is a tool I've known about for years, and I'm almost ashamed I have not sought it out for use because I know how great of a tool it is. He made an analogy to the aviation industry: you don't want to crash a plane to collect data afterwards. The same goes for applications; don't wait until production to find issues and then have the task of trying to recreate them.

IntelliTrace provides a debug logfile that you can use to debug your program at a different moment in time. The file contains debug events, exception events, .NET Framework events, and allows configuration for what is relevant to your application. For example, if you make a SQL call, the logfile will automatically capture the SQL used and let you know how it was called (LINQ, ADO.NET, etc.). Well, that's almost sale enough right there! How many times do we run SQL Profiler after the fact to try and recreate some anomaly? This type of post-issue debugging is less effective than having all the historical data already captured for us (and not just some message in the event log).

One tidbit: if using IntelliTrace on a machine with a Solid State Drive (SSD), you will barely notice it's there and running. There is a little more of a hit noticed on machines with an HDD.

IntelliTrace can be configured from Tools -> Options -> IntelliTrace -> IntelliTrace events. There may be options not applicable to the technology you are using (i.e. Windows Forms events don't make sense for ASP.NET apps).

When turned on, for example, it will capture all return values from method calls within a method. Ever start putting a whole bunch of Debug.WriteLine statements in when trying to figure out values at runtime? IntelliTrace prevents, to an extent, having to do this kind of manual messy work.

He did an example and noted IntelliTrace will start logging once you break into code (one of a few ways to get logging). This is some great stuff! You get the traditional debug experience, but the IntelliTrace information is displayed in a new pane on the right-hand side.

The integration with TFS was impressive too. He kicked off a build that ran unit tests, and when one failed he was able to get the output IntelliTrace file to provide the details of why it failed. Also within TFS, the workflow is nice if you use it, as the symphony between QA testers running tests and the output for developers will automatically create Work Items. He was using a product called Microsoft Test Manager (MTM) to do testing. Using Microsoft Test Manager you can plan, manage, and execute both manual and exploratory tests. You can also automate your manual tests once they are stabilized.

It is important to note that the debug symbols file (.pdb) must be turned on to provide information for Intellitrace. Also if using TFS, make sure to configure the server to save the symbols to a 'symbol server'.

There is also a free downloadable product called 'IntelliTrace everywhere'. It allows anyone to capture the logs from anywhere. It is available for download at the following link.

He did a demo of IntelliTrace in a production scenario using PowerShell. The one thing to know is that initiating it can cause the app pool to recycle. This, as we know, will gracefully hand the user over to the new worker process, but things like cache and session might be cleared. Just be aware prior to firing off the process. One other important note on this: you do not need the .pdb files on the production server (as typical with a release build). The GUID of the .exe will be used and directed to the designated symbol server to get the debug information. So if you don't have a symbol server set up, the debug symbols would have to be present in order to get the output trace logfile. Symbol Server is a part of TFS, but it can also be downloaded separately and configured via command line if you do not use TFS. He was not sure if indexing was possible, or which other source control providers were supported for getting the files. Also remember, VS.NET 'Ultimate' is required to read the trace files, but it is not a requirement to have VS.NET on the production server.


EF Code First Magic Unicorn Edition and Beyond
Keith Burnell, Senior Software Engineer, Skyline Technologies, Inc.



The reality is we write data-centric apps, and 99.9999% of the time there is a relational database behind it all. What this means as developers is we have to write a lot of data access code. This is the CRUD and the mapping of data to the objects we use, and we must do it over and over and over again. What's this code look like? Lots of raw ADO.NET filling DataSets. The DataSet has a big object/memory footprint, so it's not efficient to be passing around this non-typed data. This argument/statement is nothing new and has been preached (by me as well) for several years now. "Datasets are so .NET 1.1" (that's me trying to sound cute, while actually sounding dumb). Ideally we want to be working with Plain Old CLR Objects (POCOs), which are nothing more than generic classes.

The 1st ORM from Microsoft to help solve this issue was LINQ to SQL. This got us closer to POCOs by working with a class that represented a table, rather than the huge in-memory representation of the database (DataSet). Unfortunately, LINQ to SQL classes are still heavy and tried to do too much. This helped lead to it not being the flagship data access technology (read here) atop ADO.NET from Microsoft; enter Entity Framework.

ORMs help with the impedance mismatch by allowing mapping, typically via XML configuration files, to create class objects that represent the tables in the database. Out of the box, the first Entity Framework missed the mark, lacking core ORM functionality (no foreign keys), and even testers gave a 'no-confidence' vote on it, stating formally they were unhappy. The 2nd version of EF was EF4 (yes, 4 = 2; they skipped the version numbers, probably to distance itself from the issues of EF1), but it still worked heavily in a database-first approach. EF 4.1 released a set of bits (the 'Magic Unicorn' edition) introducing a code-1st approach. Code 1st allows creating your code and domain model without ever having to touch SQL Server Management Studio (SSMS). The result: you can write code (that's what we do best), POCO classes, to define your domain model and have it create the database afterwards. And fear not, it is not a naive 1-to-1 class-to-table mapping; the database generated will have a design that you will be happy with. Just check out how the tables are laid out after the database has been created.

It's not to give the impression that the database is perfect, but it gets you 90% there. You can look at SQL Server to see the changes you want to make (remove pluralized table names, change field types, etc.). There are (2) ways to make the changes. The 1st is to use Fluent configurations. This is actual C# code in the DbContext class, in a method that overrides OnModelCreating(). For example, modelBuilder.Conventions.Remove<PluralizingTableNameConvention>() will remove the pluralized table names. The second way is to configure attributes (System.ComponentModel.DataAnnotations, yes, same as MVC) on the Domain/Entity class. For example, adding a [Required, StringLength(25)] attribute will cause the field to be not null and have a length of 25 in the database. Interesting point: if you configure in both spots, the Fluent configurations will win.
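Sketched out, the Fluent side might look like this (the context and property names are my own illustration):

```csharp
using System.Data.Entity;
using System.Data.Entity.ModelConfiguration.Conventions;

public class SchoolContext : DbContext
{
    public DbSet<Course> Courses { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Fluent configuration: stop EF from pluralizing table names.
        modelBuilder.Conventions.Remove<PluralizingTableNameConvention>();

        // Equivalent of [Required, StringLength(25)] — and if both the
        // attribute and this Fluent call exist, the Fluent call wins.
        modelBuilder.Entity<Course>()
                    .Property(c => c.Title)
                    .IsRequired()
                    .HasMaxLength(25);
    }
}
```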

Interesting note based on a question I asked: I'd only done a database-1st approach, which works off of the Entity Data Model (.edmx) and the XML mapping files behind it. With a Code-1st approach, there are no more data models (.edmx) and no XML mapping files. All customization must be done via attributes or fluent configuration.

To push the code further, we can actually seed the database directly from the code as well. You may or may not like this feature, but at least it's available. Remember, the focus here is to have to deal with SSMS as little as possible. We are developers, not DBAs. Like Keith said, "I'm a developer, not a DBA, and I want to code." Ok, the last statement might not bode well with all, but it does hold merit. In reality, it's better to be great at 1 thing than sorta good at 2.

Entity Framework 4.3 included a feature, delivered via a NuGet package, named Entity Framework Migrations. EF will sniff out if the database model on the server is different than what's in code and recommend Code First Migrations. A hash table on local SQL instances connected to development can track changes out of sync. It is important to note this hash table is not out on the SQL server in production. After enabling migrations via NuGet, some new Migration classes will be created. Then you can issue an Update-Database -Verbose to have the database get updated. The -Verbose switch will generate the SQL in the command window, which can be sent off to a DBA to show what changes need to be applied. The changes do occur on the local instance (based on the connection string) but not in production. Again, all this would be needed if you were not initializing EF and having the database dropped and recreated each time there is a change. This piece helps sell EF to DBAs because the code will not push out any changes to production.
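The Package Manager Console flow, as I understood it (the migration name here is illustrative):

```powershell
# Run once per project to turn migrations on and generate the Migrations folder
Enable-Migrations

# Scaffold a migration class describing the model changes since the last one
Add-Migration AddCourseDuration

# Apply pending migrations to the local database (per the connection string);
# -Verbose echoes the generated SQL, handy for handing off to a DBA
Update-Database -Verbose
```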

Beginning in EF6, the calls are implemented using the new async features!! The calls out to get data are not blocking and happen asynchronously.

Keith likes this approach because he wants developers in the database as little as possible. You're a developer, so think in a business domain mindset and not in a database-centric one. Too often we begin by creating a data model and then making our application reflect that thought process. The data model and object model do not, and often should not, be the same. If we are already creating tables in SQL with an object model mindset slipping in, then what's the point? Just create the model in code and have the database model created in a data-centric way, so the (2) are each done with the proper mindset.


The LINQ Programming Model
Marcel de Vries, Microsoft MVP and Technology Manager, infoSupport



Here is another session by Marcel! This is a pretty funny Dutch guy so hopefully if you are here attending you got an opportunity to attend one of his sessions. The room is packed but it's all behind me as I'm front and center for this one.

LINQ, or Language Integrated Query, is a set of features for writing structured, type-safe queries over local object collections and remote data sources. The basics of LINQ are sequences (something that contains elements and implements IEnumerable<T>) and elements (the parts of sequences we do the LINQ operations on). Often LINQ can replace For-Each loops in a much more concise and streamlined manner. A typical query operator takes a sequence as input and produces a transformed output sequence.

There are around 40 operators in the System.Linq.Enumerable class, such as 'Where', 'OrderBy', and 'Select'. Extension methods allow extending a class that we did not write. Query operators are implemented as extension methods on IEnumerable<T> and return IEnumerable<T>. For example:

var query = names
                    .Where(n => n.Contains("a"))
                    .OrderBy(n => n.Length)
                    .Select(n => n.ToUpper());

You could create your own extension method as well if you need to wrap up some functionality and expose it for use in LINQ queries. Also note, the operators do not modify the original collection; they just project the data into the results.
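A minimal sketch of rolling your own operator (the method name and logic here are hypothetical, just to show the shape):

```csharp
using System.Collections.Generic;
using System.Linq;

public static class StringSequenceExtensions
{
    // Wraps a Where/OrderBy combination so it reads like a built-in operator
    public static IEnumerable<string> ContainingOrdered(
        this IEnumerable<string> source, string fragment)
    {
        return source.Where(s => s.Contains(fragment))
                     .OrderBy(s => s.Length);
    }
}

// Usage: slots right into a query chain
// var query = names.ContainingOrdered("a").Select(n => n.ToUpper());
```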

Something I have used before is 'Comprehension Syntax', which provides syntactic shortcuts for writing LINQ statements in C# and VB.NET. It looks a lot more like TSQL but it is not. You can see the same query from above rewritten using Comprehension Syntax:

var query = from name in names
                    where name.Contains("a")
                    orderby name.Length
                    select name.ToUpper();

Most LINQ operators use deferred execution, which means they are not executed when constructed, but rather when they are enumerated. A good idea (and even a coding standard) is to force execution of LINQ statements that are returned from methods, rather than returning the query itself, to prevent deferred execution and thus unnecessary extra evaluations.
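A quick sketch of what deferred execution means in practice:

```csharp
using System.Collections.Generic;
using System.Linq;

var numbers = new List<int> { 1, 2, 3 };

var query = numbers.Where(n => n > 1);   // nothing executes yet
numbers.Add(4);

// Enumeration happens here, so the 4 added above IS included: 2, 3, 4
var deferred = query.ToList();

// Calling ToList() up front forces execution at that point, so the
// results won't silently change if the source collection does later
var forced = numbers.Where(n => n > 1).ToList();
```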

PLINQ is another great technology I have used over the past couple of years. It allows iterating on the collection in parallel. The one side effect is that you cannot guarantee the order in which the collection is enumerated. A great use though: say you have a collection of employees and you want to calculate a bonus using a method that does a calculation. The employees in the collection have no correlation or importance of ordering, so operating on the collection in parallel is a great idea to leverage multi-core environments. I actually wrote a post on this that you can see here: Leveraging Parallelism in the .NET Framework 4.0 for Asynchronous Programming. Funny enough, Marcel used a similar example method called ExpensiveCalculation(). He demonstrated the same query both without and then with the .AsParallel() method. Without PLINQ, his demo on a 4-core machine took 28 seconds. Once he added .AsParallel() it dropped down to 7 seconds. You could also see each core in Task Manager go to 100% when using PLINQ, thus utilizing all the available cores (developers, we need to be leveraging async and the cores available!)
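A rough sketch of the pattern Marcel demonstrated (my ExpensiveCalculation() stand-in just sleeps to simulate work; actual timings will vary by machine and core count):

```csharp
using System.Linq;
using System.Threading;

static decimal ExpensiveCalculation(int employeeId)
{
    Thread.Sleep(10);          // simulate real per-employee work
    return employeeId * 0.05m; // made-up bonus formula
}

var employeeIds = Enumerable.Range(1, 1000);

// Sequential: one employee at a time
var bonuses = employeeIds.Select(id => ExpensiveCalculation(id)).ToList();

// Parallel: the same query fanned out across the available cores.
// Note the results may come back in any order.
var parallelBonuses = employeeIds.AsParallel()
                                 .Select(id => ExpensiveCalculation(id))
                                 .ToList();
```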

LINQ to XML (actually my 1st go at using LINQ a few years ago) is a much better way of accessing XML data as opposed to older methods like XPath. I asked Marcel if C# has support for accessing XML elements in a strongly typed manner (when the .xsd is added to the project) as you can in VB.NET, but he said he did not think so. This is one place where the (2) languages, normally so similar, still differ a bit. VB.NET has a lot more functionality, like XML literals, that is not available in C#.
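A small sketch of the LINQ to XML style (the XML shape here is made up for illustration):

```csharp
using System.Linq;
using System.Xml.Linq;

var doc = XDocument.Parse(
    @"<employees>
        <employee name='Ann' dept='IT' />
        <employee name='Bob' dept='HR' />
      </employees>");

// Same query operators as any other LINQ source; no XPath strings needed
var itNames = doc.Descendants("employee")
                 .Where(e => (string)e.Attribute("dept") == "IT")
                 .Select(e => (string)e.Attribute("name"))
                 .ToList();
```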

LINQ to DataSets is another cool feature if you are still working with DataSets. The DataSets are manipulated after the data is loaded into the DataSet, all on the client side. It allows strongly typed access to DataSet values even if not using strongly typed DataSets. This is another area I have written a post on previously, so if you want to see some examples, check out the following: How To: Populate a List of Objects from a DataSet Using LINQ

Lastly he covered interpreted queries, which operate on IQueryable instead of IEnumerable. The main difference is in how they execute. Where IEnumerable executes piece by piece, IQueryable builds everything up and executes in one shot. IQueryable also delivers an expression tree to the query processor. This is what is being used for LINQ to Entities with the Entity Framework. It was pretty cool in that he was using IntelliTrace from earlier today to show the SQL that was being used for the LINQ to Entities query. The basic EF architecture and how these queries work is shown below (image courtesy of Visual Studio LIVE! and Marcel de Vries)




Bottom line here without a lot of fluff, get started using LINQ in code as it is much more streamlined and to me actually more expressive and intuitive than the older methods that preceded it.


Live! 360 Conference Wrap-up
Don Jones, Andrew Brust, Rocky Lhotka, Greg Shields, Dan Holme 



"How many are going to the midnight showing of the Hobbit tonight" (like 8 hands raised) "man I thought this crowd was more geeky than that!"

This was a 'wrap up' session to the individual session tracks portion of the conference, and each conference chair or co-chair was present. Each was asked what their primary goal was. Dan Holme, co-chair of the SharePoint track, said his main goal was to expose SharePoint 2010 (and 2013 I believe he might have mentioned, but I know absolutely zero about SharePoint). Greg was helping to bring Virtualization and Cloud to the masses. Don was trying to bleed the SQL information between DBAs and Developers.

The format for the bulk of the presentation was open-mic Q&A. There were a lot of good questions, and immediate feedback about the conference was directed to the members on stage. As they all mentioned, the best feedback comes in the form of submitting the conference evaluations, as they do read these and shape the conference after the feedback they receive.

I wanted to get up and make a few comments and ask a couple of questions myself. I began by telling everyone that if I was nervous just going to a microphone and asking a simple question, I can't imagine the presenters on stage debugging applications real-time. You know the old application crash or code that doesn't work during a presentation? I imagine a nice ball of stress comes about when that occurs.

I thanked all of the presenters for their hard work and quality presentations. Like I said, no fluff in these conferences. The reason this is my 5th Visual Studio LIVE! is because the content, networking, and general community vibe on technologies is hard to put a price on. These guys from start to finish all do a nice job.

I made some comments that I noticed a lot of the Azure sessions in the Visual Studio track seemed to be not that well attended. In addition, the ad-hoc polls found very few hands raised on doing Azure development. Don't get me wrong, Azure looks like a fabulous implementation of that technology, but I keep hearing "We will never put our data and apps in the cloud." The response was that they do monitor the session attendance and realize Azure is such a new product, so give it time. Typically they find the best overall attendance to the conferences when they market the 'newest of the new' on topics, and Azure falls within those lines. I wasn't making a stance against Azure as much as I might like to see those sessions replaced with others that seemed heavily attended (i.e. Entity Framework, LINQ, .NET Framework, Web API, JavaScript, etc.). There is a co-held conference for Cloud & Virtualization, so maybe putting the tracks over there might work well. Don't worry, it was in my evaluation I turned in :-P

I also asked if there would be any tracks on the reemerging technologies like IE6 or Microsoft Bob, but I was just laughed at. :) I did however want to know if they would sprinkle in any architecture, design, or OOP sessions that are less technology centric. I understand it is difficult in 75 minutes to teach topics like DI, but still put it out there for commenting. With major sponsors of this conference like Microsoft, it only makes sense that the tracks highlight their technologies. Maybe if Martin Fowler sponsors Visual Studio LIVE! we can get some design pattern classes :-P

The final topic of conversation is where these conferences are worth their weight in gold. If you didn't attend, too bad, and I'm not going to tell you the magic secret to development that was presented. Ha, just kidding, but in seriousness it was about the state of Microsoft today with Windows 8.

"This may be the most dangerous time for Microsoft since the justice department."

Well, Microsoft definitely has made a bold move with Windows 8. However, not for the reasons you might think I'm going to say. When asked how many in the room (of probably 300-400) are doing or planning to do Windows 8 development, about 1/3 raised their hands. 

The reality is, nobody knows if this is going to work or not. However, Windows 8 is not Windows Vista. Windows Vista was plagued with security issues and that isn't really the case this go around. It's more about betting the marbles on this 2 headed beast of being a one size fits all OS serving both mobile (WinRT) and desktop needs.

Rocky Lhotka did a little monologue that stated it all best. What if Microsoft had gone only into the mobile market and the new devices were just that: another mobile device trying to compete with the iPad, a well-established tablet at this point? What if the millions of dollars invested over the last 10 or so years in our Win32 .NET apps would not work on those devices? Microsoft has taken a chance, but a clever one (as Andrew Brust stated), to cover all territory and make an OS for all needs. Isn't it going to be nice to be able to run any mobile app on our desktop? After all, desktops are still the king of hardware and multiple times more powerful than mobile devices. With Windows 8 we are pretty much getting 'the best of both worlds'.

Wrap Up Day 4




Well, the 75 minute sessions are done and wrapped up and it's on to the post-conference workshops. Tomorrow I will be doing an MVC workshop that should fill in some cracks and let me see some of the best or better practices in development. My brain is almost full, so I'll cram in all of the great content tomorrow and then be on my way!

Wednesday, December 12, 2012

Visual Studio LIVE! Orlando Day 3

All right let's get the day going! I have my fancy green tea (yes they have great coffee and really nice tea - energy drinks reserved for the afternoon :)) and I'm ready to on-load some great information. I'm spending almost the entire day on a web track, and as always there are multiple sessions I want to attend so it's always tough to choose. Here we go!

Visual Studio Live! Keynote: Building Applications with Windows Azure

James Conard, Sr. Director of Evangelism, Windows Azure



I have to admit that Azure is a technology that I think has a ton of potential, and the implementation seems to be done well, but it is not something I see myself using in the near future. Why, you might ask? Well, professionally it's not an option for the time being. As far as personally, I looked once at messing around with hosting a site in Azure so I could get some experience with the technology. I was drawn in by the 3 or 6 months free hosting, but started looking at how much it would cost in the long run. It turns out the Azure hosting was going to cost much more than any other hosting company and did not make sense for me.

I have seen some sessions over the past year similar to this one done by Scott Guthrie. As I'm watching the demo today, I have to say that the creation, deployment, and configuration couldn't be more straightforward. There is no excuse to avoid Azure development by saying it's too difficult to get set up. From the close ties between VS.NET and Azure out to the Azure Management Portal, I can say the tooling on both ends appears to be well designed and intuitive.


The real power of Cloud Services is automation and the ability to scale so easily. In the Management Portal it is amazing how much can be configured. The (2) tabs I liked the most are 'Configure' and 'Scale'. It was mentioned that just recently the VMs now support Windows Server 2012 and .NET 4.5, including all of the new features like web sockets. On the 'Scale' tab you can use sliders to change the number of cores for both the front end and backend VMs. What they don't tell you (but I assume most here know) is that upping the cores used on the VM for a site that gets heavy traffic will result in a significant cost increase. Since the cloud-based pricing model is based on what you use, they make it look simple but it does come with a monetary cost. 


Managing SQL Server in the cloud is just as straightforward, with support for many of the things a traditional SQL instance has.


There were multiple demos from web to mobile, and again in my opinion the recurring theme was the ease to create, deploy, and manage any type of project hosted in Azure. I know that if I ever do get into cloud development in the future, I'll feel confident in using the right tools with Windows Azure.



JavaScript and jQuery for .NET
John Papa, Microsoft Regional Director



Ok, this room is packed! Actually there are more people in this session than there were at the keynote. However, the planners this year seemed to have missed a tad on which sessions would be the 'popular' ones that needed to be moved to the larger Pacifica 6 room. This is the 3rd session I've been in that had to move from a smaller room to this one. It seems consistent that the web and .NET Framework sessions are much better attended than the Windows 8, XAML, and Azure sessions. John's sessions at CodeCamp or Visual Studio LIVE! seem to always attract the masses, and he does a great job presenting.

He dove right into the different data types for JavaScript. The differences between Dynamic (JavaScript, Ruby, SmallTalk) and Static (.NET, Java) languages were highlighted as well. A Dynamic language like JS can have objects or anything change at runtime, where in Static languages everything is already decided at compile time. 


He also highlighted a new typed JavaScript language from Microsoft named TypeScript. TypeScript is a superset of JS; anything you already know in JS can be used in TypeScript. TypeScript will give you a lot more information at compilation time for code issues vs. getting that little yellow icon down in the status bar of the browser at runtime. Ahh, who needs something great like this, let's just type our JS perfectly and there will be no issue. From what John is highlighting, the next version of JS, ES6, should have a lot of cool enhancements in this arena that TypeScript is covering today. To see how the differences look between JS and TypeScript, check out the TypeScript Playground.


Objects in JS are hash tables and can change at runtime. You can actually add properties to change the object on the fly. Arrays are just indexed hashes that can be redimensioned implicitly at runtime just based on the index accessed.


One thing John was doing that I think was an effective way to relay his topics was to make comparisons between how we do something with an object in C# and how we do it in JS. One point to make along these lines is there are no classes in JS, and you need to wrap your head around this. However, the next version of JS, ES6, will start to contain the class keyword.


He also spoke to the difference between double equals (==) and triple equals (===). Main point here: if you are unsure of a type coming in and need to do a comparison, use the triple equals (===). For example, if (0 === "") will not evaluate to true (which is good), where if (0 == "") will (which is bad).


"Avoid globals, globals are bad" says John. Yeah this doesn't just apply to JS and is just a good message regardless. Any variable created on the fly will be a global variable.


I do like function expressions in JavaScript and have used them before in some of the jQuery I've done. As with any JS-defined variables, make sure to physically define the function before calling it or you will run into errors. To prevent hoisting issues, go ahead and declare all of your needed variables at the top of your functions so they will be available for use.


John spent a good amount of time using the aforementioned TypeScript Playground to show the niceties of the language. It really does bridge the gap for those of us more familiar with OO languages like C#. Who knows how long it could be until the next version of JS, so TypeScript is an attractive option today.


I would have to assume that I'm probably like the majority of people in this packed room. JavaScript is not something I get really excited about, and to me it is a necessary evil, especially now more than ever. I guess this sentiment comes from the fact that I do not use JS day in and day out, so I've never broken through the barrier of being really proficient. I've written a lot for my web applications over the years, moving from plain JS to using libraries like jQuery, so it's not new to me. JS is one of those areas where I consider myself dangerous and productive but by no means highly proficient yet. The good news is I like what I hear from John, and all of the tooling and support that has wrapped around JS recently. The stronger OO syntax support is a really nice feature in TypeScript. As well, VS.NET 2012 has much better Intellisense to help those like me that need the extra help. I know one thing: JS is here to stay and a major player in the industry, so I expect the sessions on JavaScript in the future will be plentiful.



Reach the Mobile Masses with ASP.NET MVC 4 and jQuery Mobile
Keith Burnell, Senior Software Engineer, Skyline Technologies, Inc.





Developing applications that not only work on mobile devices, but have an optimal mobile experience, is key today. If you ever bring up a traditional website on a mobile device that was not designed with a mobile experience in mind, it will probably not get used. Most people aren't developing 'mobile only' sites, so when developing websites used on the desktop, it's good to sprinkle in some functionality to allow sites to be multi-purpose (mobile and desktop).


Interesting, "If you get nothing else out of this talk, make sure to add the viewport meta tag to your markup". It makes sure to set the width of the page to the device. Apple devices will actually go ahead and inject this tag for you. However, don't be fooled because overall Apple has a small market share worldwide (it's the US where it is so popular). For Android and other devices this will make the content fit as it should on a mobile device. It looks like the line below:


<meta name="viewport" content="width=device-width">

In a nutshell, if you can't invest in a lot of mobile device functionality in your MVC application, adding at least the tag above will shape the content much better with little effort; there is no reason not to use it. Taking the styling to the next level is to maintain separate sets of CSS for mobile and traditional sites.

It's pretty cool because he was creating MVC apps from project templates in VS.NET 2012 and running them both in the browser and using a simulator. One note on the difference between an emulator and a simulator: an emulator actually emulates the hardware of the device, where a simulator just simulates the user experience of the phone. Keith was using various simulators like Android, Apple, and a mobile device with an Opera browser.

Tangent here, he asked: "Anyone doing JavaScript development for Windows 8?" Not a single hand was raised. Not necessarily telling for the future, but as of today there doesn't seem to be a ton of Win8 development going on just yet.

Next he talked about (2) different layout files: _Layout.mobile.cshtml and _Layout.cshtml. The cool thing was that based on the browser type being sniffed out at runtime, the ViewEngine (Razor) looks at the user-agent value and then uses the proper _Layout file. Even though this is great, Keith does admit we are not at the point of a single UI codebase for mobile and desktop devices. You still have differences in files, but this is to be expected. He has done a ton of mobile sites and this is always the case.

Tangent again, he asked: "How many people own a Windows Phone?" In a room of 150ish, there were like 5 hands raised.

Next he went down a level to have multiple display modes based on device: Android, iPhone, etc. This is available as of MVC 4, and if you are doing mobile development for the masses, this is reason enough to upgrade. The 'display modes' are registered in Application_Start(). He used a slick lambda expression to compare the user agent to the string of the new display mode via the overridden user agent (context.GetOverriddenUserAgent()). A new display mode is registered with the ViewEngine. If a newly added display mode, say "iPhone", was added and the user-agent value (i.e. "iPhone") matches, then the display mode will be used. Note: Google user-agent strings if you need a reference to the actual names that are used.
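A hedged sketch of what that registration looks like in Application_Start() (the "iPhone" mode here is just an example name):

```csharp
using System;
using System.Web.WebPages;

// Insert at position 0 so this mode is checked before the defaults
DisplayModeProvider.Instance.Modes.Insert(0, new DefaultDisplayMode("iPhone")
{
    // The lambda matches against the (possibly overridden) user-agent string
    ContextCondition = context =>
        context.GetOverriddenUserAgent()
               .IndexOf("iPhone", StringComparison.OrdinalIgnoreCase) >= 0
});
```

With this in place, a view named Index.iPhone.cshtml is chosen over Index.cshtml for matching devices.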

jQuery Mobile is a JS library for touch-optimized devices (tablets, phones, etc.). The scripts can be easily downloaded from NuGet (or directly from the web). NuGet, by the way, can be used within the enterprise (a NuGet server can exist internally) to download packages (i.e. custom internal components) to keep everyone on the latest and greatest. jQuery Mobile is HTML5 markup driven. It is supported in about 99% of modern mobile browsers, so no worries there. Use the data-* ("data dash") attributes to store data across HTML or JS. 'jQuery.Mobile.MVC' (a superset of jQuery Mobile) will add everything the 'jQuery.Mobile' package does, but in addition it adds MVC views to allow switching between the "Mobile View" and "Desktop View". It also adds the (2) layout files: _Layout.mobile.cshtml and _Layout.cshtml.

This session had some great information on helping make MVC sites have mobile capabilities with very little work. After all we are all about working less and doing more.


Controlling ASP.NET MVC 4
Phillip Japikse, MVP & Developer, Telerik



With VS.NET 2012 and MVC 4 there have never been more project templates available to help us get started developing MVC sites. In fact, enough of the industry complained that they even have a Facebook site template, yikes! For so long people would go 'File -> New Project' and then go, "Now what?" The various templates help get us started in a variety of ways. While the default home page on an MVC site may never be used out of the box, it at least shows how it's used.

So Phillip asked how many people do mobile development, and about 1/4 of the room raised their hands. Then he asked how many are web developers, and the whole room did. He said those that are developing for the web are also developing for mobile. Any web application exposed outside the firewall will be accessed by mobile devices, so it's something we need to embrace.

OAuth is now included in the 'Internet' template. We can leverage Microsoft, Google, Facebook, etc. for the login and use their sign-on for creating a single sign-on (SSO) scenario. Uncomment a few code blocks and it's done!

He also touched on the "viewport" tag which was discussed in the last session. It comes for free and makes it so we don't have to view desktop versions of a site (with a magnifying glass) on a mobile device. Once again, jQuery.Mobile was touted for View Switching. He demonstrated how it adds a widget to the site to allow users to click on a link to switch between desktop and mobile versions. This is useful in scenarios where a website has not been customized for mobile devices yet. Imagine you have a production site, widely used, and all of a sudden it does not work on the iPad mini. Do you have time to rewrite the CSS and markup? No, and this is where you can add in the View Switching functionality.

Love it! Phillip: "How many people are doing System.Threading.Thread.Start?... you're doing it wrong. It's hard, and there's a reason C++ devs became C# developers. There is an easier way to do things." This falls right in line with several of my previous posts (and some still in draft form: async in C# 5.0): async and await in Framework 4.5, or the TPL since Framework 4.0. One interesting note: in MVC 3 there was no way to modify the controller without creating a separate class, inheriting from IController and putting all the controller functionality within. In MVC 4, you just subclass your controllers from a class that derives from AsyncController and get all of the functionality of async operations.

Next he rolled into a little on Web API. He confirms, as I have in several of my comments, that WCF is a bit of a bear and has a significant learning curve. I think he was trying to show that WCF is too heavy and to just use Web API because of its loads of features, but several in the crowd disagreed. WCF is one of those technologies that if you just dabble in it, it's tough to be fully productive. He does say, and I agree, that the majority of people that like WCF have spent the time to learn how to use it. With the Fall 2012 ASP.NET update there are Web API performance enhancements.

Tangent - "How many people use Web Matrix?" Not one person in the crowd of 100-200.

On the note about the 'Fall 2012' ASP.NET update, it's pretty significant. There are actually breaking changes, like some things removed from Razor (rarely used methods), that break the MVC RTM. There are NuGet packages (the Tools Update) that can be downloaded from Microsoft which will fix these issues. Bottom line: if starting a new project, make sure to get the Fall update before building the project.

Tangent - Phillip always cracks me up (I have been to his sessions in the past). He has everyone stand up 'to stretch'. He tells people with even-number birthdays to place their hands together (like prayer), and odd-number birthdays to open their arms up with palms up. You get the entire crowd looking like they are standing up praising him, and then he takes a photo. Nice!

In a nutshell (yeah a lot of O'Reilly books with that title), MVC 4 has matured greatly and is loaded with features for both desktop and mobile website development.


Creating RESTful Web Services with the Web API
Rob Daigneau, Practice Lead, Slalom Consulting



This is a session I had starred on my agenda and have been looking forward to all week. Top it off that I think Rob is a great presenter with 20+ years of development experience (loved his 8MHz CPU with 16MB of RAM computer, and the rest is ancient history). The room is packed as I would expect. He touts Web API to be a lot better to use than WCF REST-based services, which is a more clear-cut opinion than that of Miguel's class on Day 1. 

He started it off with a room vote of the following:

  • How many people use WCF: Almost 100% of the room
  • How many people use WCF RESTful services: About 1/5 of the room (including myself)
  • How many using ASP.NET MVC: About 1/2 the room.
Interesting that he mentioned that some think REST-based services are only used for basic CRUD operations. I had never known that to be the perception, but interesting, and yes, very far from the truth.


The Web API is built atop ASP.NET and the MVC architecture, and is based on the REST architectural style. The REST architecture has constraints like statelessness, requiring a uniform interface (the HTTP verbs GET, POST, PUT, DELETE), unique URIs, and resources manipulated through representations (from client to server back to client to change the state of the client). Bottom line: Web API does not follow the REST architecture to a 'T', but neither does WCF. Just don't tell a RESTafarian that you are creating a REST-based service using Web API or you might get scolded (but who really cares, this is a purist thing).

Web API has a project template in VS.NET 2012 under the 'Web' heading. The default template shows an example of basic calls, which is nice to get started. The cool thing is scaffolding a new controller for a Web API call. Just like scaffolding an MVC controller off an entity or model class, we can do the same for an API controller:



He also highlighted the ability for the client to request, via the header, that XML or JSON be returned. How much work for the developer? None. It's all baked into the Web API project and done for you. Nice!!

For MVC developers, routing is the same using Web API. The default route template will build a route like this: /api/{controller}/{value}, where 'value' is optional. Once again convention is used when calling the controller. If an HttpGet is done, then the action sought out will be one with the name 'Get'. The cool thing is you can add descriptions on the end and it will still work (i.e. GetAllNames()) as long as the 'Get' is still there.
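To illustrate the convention, here is a minimal sketch (NamesController and its data are hypothetical):

```csharp
using System.Collections.Generic;
using System.Web.Http;

public class NamesController : ApiController
{
    // GET /api/names — found by convention because the name starts with 'Get'
    public IEnumerable<string> GetAllNames()
    {
        return new[] { "Ann", "Bob" };
    }

    // GET /api/names/1 — the optional route value binds to the parameter
    public string GetName(int id)
    {
        return "Ann";
    }
}
```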

You can use an instance of the 'HttpClient' class to make calls to a RESTful service. Of course any type of client can call your RESTful service (Java, .NET, etc.), but this is the best way to make calls from .NET. Adding the header to request XML or JSON on this HttpClient instance is a single line of code: client.DefaultRequestHeaders.Accept.Add(). There was another method used when doing an HttpPut called client.PutAsJsonAsync(). This stuff is great!
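A rough sketch of those client-side calls (the URL is made up for illustration):

```csharp
using System.Net.Http;
using System.Net.Http.Headers;

var client = new HttpClient();

// One line to ask the service for JSON instead of XML
client.DefaultRequestHeaders.Accept.Add(
    new MediaTypeWithQualityHeaderValue("application/json"));

// The *Async calls return Tasks, so they compose with async/await
var response = await client.GetAsync("http://example.com/api/names");
var body = await response.Content.ReadAsStringAsync();

// PutAsJsonAsync serializes the object and issues the PUT in one call
// await client.PutAsJsonAsync("http://example.com/api/names/1", updatedName);
```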

He recommends not only sending back status codes from the server like (200 OK, 201 Created, 404 Not Found, 500 Internal Server Error), but also sending a timestamp. This way multiple clients trying to do say a PUT on the same resource will have the ability to handle concurrency with the time value.

Remember that HttpGet, HttpPut, and HttpDelete are supposed to be idempotent: you can call them over and over and the result will not change. An HttpPost is not idempotent.

He showed a few examples adding additional routes to constrain to HttpPost calls and allow calling methods not named after HTTP verbs (i.e. DoSomething()). Obviously this is desired; as mentioned before, you are really going to want to do more than just CRUD operations that map to the standard HTTP verbs. Just make sure to build a new route in Application_Start for this, because the default route will not find a non-standard named method on the controller.

Rob also presented some examples on how you can expand beyond the XML/JSON return types to other media types supported over HTTP, like CSV. It's based on the client's Accept header value, so any of the supported types can technically be returned by the RESTful service. This was cool stuff, but I think the majority of folks getting into REST-based services will be fine with JSON and XML. This stems from the fact that the need for a REST-based service usually comes with a request to have client/technology/platform-agnostic services.

A brief discussion was had on query string vs. URL parameters (between the slashes) vs. building up the body of the request with request parameter values. It's all preference, but there are URI length limits. If a query string or list of URL values gets too long, then one should build up the body of the request. Combine this with MVC model binding and you could have a pre-built object from the request once it hits the server.

Lastly, he spoke to errors. Returning 500 codes is not the best way. Remember, with SOAP services we had rich .NET exception handling between the service and the client. This is not the case with REST-based services. He suggested at a minimum to create a HttpResponseMessage(HttpStatusCode.BadRequest) and fill it with a robust description of what error occurred from the request. But the coolest method was to create a .NET exception and add that to the response message along with the BadRequest value.
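A hedged sketch of that last approach (OrdersController and Order are hypothetical names of my own, just to frame the example):

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Web.Http;

public class OrdersController : ApiController
{
    public HttpResponseMessage Post(Order order)   // Order is an assumed model type
    {
        if (order == null)
        {
            // Wrap a .NET exception in the response instead of a bare 500,
            // so the caller gets a descriptive 400 with the failure details
            var ex = new ArgumentNullException("order", "Order body was empty or malformed.");
            return Request.CreateErrorResponse(HttpStatusCode.BadRequest, ex);
        }

        return Request.CreateResponse(HttpStatusCode.Created, order);
    }
}
```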

This was one of the best sessions I've been to and I can take a lot of what I learned and that Rob provided and apply it in new Web API service applications.

Wrap Up Day 3




Another fantastic and information-packed day here in Orlando! My favorite session was the one on Web API, but I got great information from all of the sessions. I think the most popular session overall was John Papa's on JavaScript, as it almost filled the entire keynote hall. JavaScript is not something I have a strong passion for, but I got a lot of information to sharpen my skills if needed. I'm also happy to announce we passed by 12/12/12 12:12:12.12 with no problem at all today. :-P Well, it's time to rest up, eat some dessert, and get ready for another great day tomorrow!