Wednesday, December 12, 2012

Visual Studio LIVE! Orlando Day 3

All right, let's get the day going! I have my fancy green tea (yes, they have great coffee and really nice tea - energy drinks reserved for the afternoon :)) and I'm ready to on-load some great information. I'm spending almost the entire day on the web track, and as always there are multiple sessions I want to attend, so it's tough to choose. Here we go!

Visual Studio Live! Keynote: Building Applications with Windows Azure

James Conard, Sr. Director of Evangelism, Windows Azure



I have to admit that Azure is a technology that I think has a ton of potential, and the implementation seems to be done well, but it is not something I see myself using in the near future. Why, you might ask? Well, professionally it's not an option for the time being. As for personally, I once looked at messing around with hosting a site in Azure so I could get some experience with the technology. I was drawn in by the 3 or 6 months of free hosting, but then looked at how much it would cost in the long run. It turns out the Azure hosting was going to cost much more than any other hosting company and did not make sense for me.

I have seen some sessions over the past year similar to this one done by Scott Guthrie. As I watched the demo today, I have to say that the creation, deployment, and configuration couldn't be more straightforward. Nobody getting into Azure development has an excuse that it's too difficult to get set up. From the close ties between VS.NET and Azure out to the Azure Management Portal, the tooling on both ends appears to be well designed and intuitive.


The real power of Cloud Services is automation and the ability to scale so easily. It is amazing how much can be configured in the Management Portal. The (2) tabs I liked the most are 'Configure' and 'Scale'. It was mentioned that the VMs just recently gained support for Windows Server 2012 and .NET 4.5, including all of the new features like WebSockets. On the 'Scale' tab you can use sliders to change the number of cores for both the front end and back end VMs. What they don't tell you (but I assume most here know) is that upping the cores on the VM for a site that gets heavy traffic will result in a significant cost increase. Since cloud based pricing is based on what you use, they make it look simple, but it does come with a monetary cost.


Managing SQL Server in the cloud is just as straightforward, with support for many of the things a traditional SQL instance has.


There were multiple demos from web to mobile, and again in my opinion the recurring theme was one of ease to create, deploy, and manage any type of project hosted in Azure. I know that if I ever do get into cloud development in the future, I'll feel confident using the right tools with Windows Azure.



JavaScript and jQuery for .NET
John Papa, Microsoft Regional Director



Ok, this room is packed! Actually there are more people at this session than there were at the keynote. However, the planners this year seemed to have missed a tad on which sessions would be the 'popular' ones that needed to be moved to the larger Pacifica 6 room. This is the 3rd session I've been in that had to move from a smaller room to this one. It seems consistent that the web and .NET Framework sessions are much better attended than the Windows 8, XAML, and Azure sessions. John's sessions at CodeCamp or Visual Studio Live! always seem to attract the masses, and he does a great job presenting.

He delved right into the different data types for JavaScript. The differences between dynamic (JavaScript, Ruby, Smalltalk) and static (.NET, Java) languages were highlighted as well. A dynamic language like JS can have objects or anything change at runtime, where in static languages everything is already decided at compile time.
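To make the contrast concrete, here's a minimal sketch (plain JS, names made up for illustration) of a variable rebinding to different types at runtime - something a static language like C# would reject at compile time:

```javascript
// Dynamic typing: the same variable can hold a number, a string,
// and an object over its lifetime; types are checked at runtime.
var value = 42;
console.log(typeof value); // "number"

value = "forty-two";
console.log(typeof value); // "string"

value = { answer: 42 };
console.log(typeof value); // "object"
```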


He also highlighted a new typed JavaScript language from Microsoft named TypeScript. TypeScript is a superset of JS: anything you already know in JS can be used in TypeScript. TypeScript will give you a lot more information about code issues at compilation time vs. getting that little yellow icon down in the status bar of the browser at runtime. Ahh, who needs something great like this; let's just type our JS perfectly and there will be no issues. From what John is highlighting, the next version of JS, ES6, should have a lot of cool enhancements in the areas TypeScript is covering today. To see how JS and TypeScript differ, check out the TypeScript Playground.


Objects in JS are hash tables and can change at runtime. You can actually add properties to change the object on the fly. Arrays are just indexed hashes that can be redimensioned implicitly at runtime just based on the index accessed.
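A quick sketch of both behaviors (the object and array names here are just for illustration):

```javascript
// Objects behave like hash tables: properties can be added or
// removed on the fly at runtime.
var person = { name: "John" };
person.title = "Speaker";   // add a property after creation
delete person.name;         // and remove one just as easily

// Arrays are indexed hashes: assigning past the end implicitly
// "redimensions" the array.
var items = [1, 2, 3];
items[9] = 10;              // no explicit resize needed
console.log(items.length);  // 10
```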


One thing John did that I think was an effective way to relay his topics was to make comparisons between how we do something with an object in C# and how we do it in JS. One point to make along these lines is that there are no classes in JS, and you need to wrap your head around this. However, the next version of JS, ES6, will contain the class keyword.


He also spoke to the difference between double equals (==) and triple equals (===). Main point here: if you are unsure of a type coming in and need to do a comparison, use the triple equals (===). For example, (0 === "") will not evaluate to true (which is good), where (0 == "") will (which is bad).
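A few quick comparisons showing why === is the safer default:

```javascript
// == coerces types before comparing; === compares type and value.
console.log(0 == "");            // true  (both coerce to 0)
console.log(0 === "");           // false (number vs. string)
console.log(null == undefined);  // true
console.log(null === undefined); // false
console.log("1" == 1);           // true  (string coerced to number)
console.log("1" === 1);          // false
```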


"Avoid globals, globals are bad" says John. Yeah this doesn't just apply to JS and is just a good message regardless. Any variable created on the fly will be a global variable.


I do like function expressions in JavaScript and have used them before in some of the jQuery I've done. As with any JS variable, make sure to define the function expression before calling it or you will run into errors. To prevent hoisting issues, go ahead and declare all of your needed variables at the top of your functions so they will be available for use.
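A small sketch of hoisting in action - the declaration is hoisted to the top of the function, but the assignment is not:

```javascript
function demo() {
  // "var x" below is hoisted, so this logs undefined rather than
  // throwing a ReferenceError.
  console.log(x); // undefined

  var x = 5;
  console.log(x); // 5

  // Function declarations are hoisted whole, so this call works
  // even though the declaration appears later in the function.
  console.log(declared()); // "ok"
  function declared() { return "ok"; }
}
demo();
```

Declaring everything at the top, as suggested, makes the code read the way the engine actually executes it.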


John spent a good amount of time using the aforementioned TypeScript Playground to show the niceties of the language. It really does bridge the gap for those of us more familiar with OO languages like C#. Who knows how long it could be until the next version of JS, so TypeScript is an attractive option today.


I would have to assume that I'm probably like the majority of people in this packed room. JavaScript is not something I get really excited about; to me it is a necessary evil, now more than ever. I guess this sentiment comes from the fact that I do not use JS day in and day out, so I've never broken through the barrier of being really proficient. I've written a lot for my web applications over the years, moving from plain JS to using libraries like jQuery, so it's not new to me. JS is one of those areas where I consider myself dangerous and productive but by no means highly proficient yet. The good news is I like what I hear from John, and all of the tooling and support that has grown up around JS recently. The stronger OO syntax support is a really nice feature of TypeScript. VS.NET 2012 also has much better Intellisense to help those like me that need the extra help. I know one thing: JS is here to stay and is a major player in the industry, so I expect the sessions on JavaScript in the future will be plentiful.



Reach the Mobile Masses with ASP.NET MVC 4 and jQuery Mobile
Keith Burnell, Senior Software Engineer, Skyline Technologies, Inc.





Developing applications that not only work on mobile devices, but have an optimal mobile experience, is key today. If you ever bring up a traditional website on a mobile device that was not designed with a mobile experience in mind, it will probably not get used. Most people aren't developing 'mobile only' sites, so when developing websites primarily used on the desktop, it's good to sprinkle in some functionality to allow the site to be multi-purpose (mobile and desktop).


Interesting, "If you get nothing else out of this talk, make sure to add the viewport meta tag to your markup". It makes sure to set the width of the page to the device. Apple devices will actually go ahead and inject this tag for you. However, don't be fooled because overall Apple has a small market share worldwide (it's the US where it is so popular). For Android and other devices this will make the content fit as it should on a mobile device. It looks like the line below:


<meta name="viewport" content="width=device-width">

In a nutshell, if you can't invest in a lot of mobile device functionality in your MVC application, adding at least the tag above will shape the content much better with little effort; there is no reason not to use it. Taking the styling to the next level means maintaining separate sets of CSS for mobile and traditional sites.

It's pretty cool because he was creating MVC apps from project templates in VS.NET 2012 and running them both in a browser and in a simulator. One note on the difference between an emulator and a simulator: an emulator actually emulates the hardware of the device, where a simulator just simulates the user experience of the phone. Keith was using various simulators like Android, Apple, and a mobile device with an Opera browser.

Tangent here, he asked: "Anyone doing JavaScript development for Windows 8?" Not a single hand was raised. Not necessarily telling of the future, but as of today there doesn't seem to be a ton of Win8 development going on just yet.

Next he talked about (2) different layout files: _Layout.mobile.cshtml and _Layout.cshtml. The cool thing was that the ViewEngine (Razor) sniffs out the browser type at runtime by looking at the user-agent value and then uses the proper _Layout file. Even though this is great, Keith does admit we have not reached the euphoria of a single UI codebase for mobile and desktop devices. You still have differences in files, but this is to be expected. He has done a ton of mobile sites and this is always the case.

Tangent again, he asked: "How many people own a Windows Phone?" In a room of 150ish, there were like 5 hands raised.

Next he went down a level to have multiple display modes based on device: Android, iPhone, etc. This is available as of MVC 4, and if you are doing mobile development for the masses, this is reason enough to upgrade. The 'display modes' are registered with the ViewEngine in Application_Start(). He used a slick lambda expression that compares the overridden user agent (context.GetOverriddenUserAgent()) to the string for the new display mode. If a newly added display mode, say "iPhone", matches the user-agent value (i.e. "iPhone"), then that display mode's views will be used. Note: Google user-agent strings if you need a reference to the actual names that are used.

jQuery Mobile is an HTML5 markup-driven JS library for touch optimized devices (tablets, phones, etc.). The scripts can be easily downloaded from NuGet (or directly from the web). NuGet, by the way, can be used within the enterprise (a NuGet server can exist internally) to download packages (i.e. custom internal components) to keep everyone on the latest and greatest. jQuery Mobile is supported in about 99% of modern mobile browsers, so no worries there. Use the data-* ("data dash") attributes to store data across HTML or JS. 'jQuery.Mobile.MVC' (a superset of jQuery Mobile) adds everything the 'jQuery.Mobile' package does, but in addition it adds MVC views to allow switching between the "Mobile View" and "Desktop View". It also adds the (2) layout files: _Layout.mobile.cshtml and _Layout.cshtml.

This session had some great information on helping make MVC sites mobile-capable with very little work. After all, we are all about working less and doing more.


Controlling ASP.NET MVC 4
Phillip Japikse, MVP & Developer, Telerik



With VS.NET 2012 and MVC 4 there have never been more project templates available to help us get started developing MVC sites. In fact, enough of the industry complained that they even have a Facebook site template, yikes! For so long people would go 'File -> New Project' and then go, "Now what?" The various templates help get us started in a variety of ways. While the default home page on an MVC site may never be used out of the box, it at least shows how things are wired up.

So Phillip asked how many people do mobile development, and about 1/4 of the room raised their hands. Then he asked how many are web developers, and the whole room did. He said those that are developing for the web are also developing for mobile. Any web application exposed outside the firewall will be accessed by mobile devices, so it's something we need to embrace.

OAuth support is included in the 'Internet' template. We can leverage Microsoft, Google, Facebook, etc. for the login and piggyback on their sign-on to create a single sign-on (SSO) scenario. Uncomment a few code blocks and it's done!

He also touched on the "viewport" tag which was discussed in the last session. It comes for free and makes it so we don't have to view desktop versions of a site (with a magnifying glass) on a mobile device. Once again, jQuery.Mobile was touted for view switching. He demonstrated how it adds a widget to the site to allow users to click a link to switch between desktop and mobile versions. This is useful in scenarios where a website has not been customized for mobile devices yet. Imagine you have a production site, widely used, and all of a sudden it does not work on the iPad mini. Do you have time to rewrite the CSS and markup? No, and this is where you can add in the view switching functionality.

Love it! Phillip: "How many people are doing System.Threading.Thread.Start?... you're doing it wrong. It's hard, and there's a reason C++ devs became C# developers. There is an easier way to do things." This falls right in line with several of my previous posts (and some still in draft form: async in C# 5.0). Async and await in Framework 4.5, or the TPL since Framework 4.0. One interesting note: in MVC 3 there was no way to make a controller asynchronous without creating a separate class, implementing IController, and putting all the controller functionality within it. In MVC 4, you just derive your controllers from AsyncController (or a base class that does) and get all of the functionality of async operations.

Next he rolled into a little on Web API. He confirms, as I have in several of my comments, that WCF is a bit of a bear with a significant learning curve. I think he was trying to show that WCF is too heavy and that we should just use Web API because of its load of features, but several in the crowd disagreed. WCF is one of those technologies where if you just dabble in it, it's tough to be fully productive. He does say, and I agree, that the majority of people that like WCF have spent the time to learn how to use it. With the Fall 2012 ASP.NET update there are Web API performance enhancements.

Tangent - "How many people use Web Matrix?" Not one person in the crowd of 100-200.

On the note about the 'Fall 2012' ASP.NET update, it's pretty significant. There are actually breaking changes, like some rarely used methods removed from Razor, that break the MVC RTM. There are NuGet packages (the Tools Update) that can be downloaded from Microsoft which will fix these issues. Bottom line: if starting a new project, make sure to get the Fall update before building the project.

Tangent - Phillip always cracks me up (I have been to his sessions in the past). He has everyone stand up 'to stretch'. He tells people with even number birthdays to place their hands together (like prayer), and odd number birthdays to open their arms up with palms up. You get the entire crowd looking like they are standing up praising him, and then he takes a photo. Nice!

In a nutshell (yeah a lot of O'Reilly books with that title), MVC 4 has matured greatly and is loaded with features for both desktop and mobile website development.


Creating RESTful Web Services with the Web API
Rob Daigneau, Practice Lead, Slalom Consulting



This is a session I had starred on my agenda and have been looking forward to all week. Top it off that I think Rob is a great presenter with 20+ years of development experience (loved his story of the 8 MHz CPU computer with 16 MB of RAM, and the rest is ancient history). The room is packed, as I would expect. He touts Web API as a lot better to use than WCF REST based services, which is a more clear cut opinion than that of Miguel's class on Day 1.

He started it off with a room vote of the following:

  • How many people use WCF: Almost 100% of the room
  • How many people use WCF RESTful services: About 1/5 of the room (including myself)
  • How many using ASP.NET MVC: About 1/2 the room.
Interesting that he mentioned that some think REST based services are only used for basic CRUD operations. I had never known that to be the case, but interesting, and yes, very far from the truth.


The Web API is built atop ASP.NET and the MVC architecture. It is also based on the REST architecture. The REST architecture has constraints like statelessness, requiring a uniform interface (HTTP - GET, POST, PUT, DELETE), unique URIs, and resources manipulated through representations (from client to server and back to the client to change the state of the client). Bottom line: Web API does not follow the REST architecture to a 'T', but neither does WCF. Just don't tell a RESTafarian that you are creating a REST based service using Web API or you might get scolded (but who really cares, this is a purist thing).

Web API has a project template in VS.NET 2012 under the 'web' heading. The default template shows an example of basic calls, which is nice to get started. The cool thing is scaffolding a new controller for a Web API call. Just like scaffolding an MVC controller off an entity or model class, we can do the same for an API controller:



He also highlighted the ability for the client to request, via a header, that XML or JSON be returned. How much work for the developer? None. It's all baked into the Web API project and done for you. Nice!!

For MVC developers, routing works the same in Web API. The default route template will build a route like this: /api/{controller}/{value}, where 'value' is optional. Once again, convention is used when calling the controller. If an HTTP GET is done, then the action sought out will be one whose name starts with 'Get'. The cool thing is you can add descriptions on the end and it will still work (i.e. GetAllNames()) as long as the 'Get' is still there.

You can use an instance of the 'HttpClient' class to make calls to a RESTful service. Of course any type of client can call your RESTful service (Java, .NET, etc.), but this is the best way to make calls from .NET. Adding the header to request XML or JSON on the HttpClient instance is a single line of code: client.DefaultRequestHeaders.Accept.Add(). There was another method used when doing an HTTP PUT called client.PutAsJsonAsync. This stuff is great!

He recommends not only sending back status codes from the server like (200 OK, 201 Created, 404 Not Found, 500 Internal Server Error), but also sending a timestamp. This way multiple clients trying to do say a PUT on the same resource will have the ability to handle concurrency with the time value.

Remember that HTTP GET, PUT, and DELETE are supposed to be idempotent: you can call them over and over and the result will not change. An HTTP POST is not idempotent.

He showed a few examples of adding additional routes to constrain to HTTP POST calls and to allow calling methods not named after HTTP verbs (i.e. DoSomething()). Obviously this is desired; as mentioned before, you are really going to want to do more than just CRUD operations that map to the standard HTTP verbs. Just make sure to build a new route in Application_Start for this, because the default route will not find a non-standard named method on the controller.

Rob also presented some examples on how you can expand beyond the XML/JSON return types to other media types supported over HTTP, like CSV. It's based on the client's Accept header value, so any of the supported types can technically be returned by the RESTful service. This was cool stuff, but I think the majority of folks getting into REST based services will be fine with JSON and XML. This stems from the fact that the need for a REST based service usually comes with a request to have client/technology/platform agnostic services.

A brief discussion was had on query strings vs. URL parameters (between the slashes) vs. building up the body of the request with request parameter values. It's all preference, but there are URI length limits. If a query string or list of URL values gets too long, then one should build up the body of the request. Combine this with MVC model binding and you could have a pre-built object from the request once it hits the server.

Lastly he spoke to errors. Returning 500 codes is not the best way. Remember, with SOAP services we had rich .NET exception handling between the service and the client. This is not the case with REST based services. He suggested at a minimum creating an HttpResponseMessage(HttpStatusCode.BadRequest) and filling it with a robust description of what error occurred from the request. But the coolest method was to create a .NET exception and add that to the response message along with the BadRequest value.

This was one of the best sessions I've been to, and I can take a lot of what I learned from Rob and apply it in new Web API service applications.

Wrap Up Day 3




Another fantastic and information packed day here in Orlando! My favorite session was the one on Web API, but I got great information from all of the sessions. I think the most popular session overall was John Papa's on JavaScript, as it almost filled the entire keynote hall. JavaScript is not something I have a strong passion for, but I got a lot of information to sharpen my skills if needed. I'm also happy to announce we passed by 12/12/12 12:12:12.12 with no problem at all today. :-P Well, it's time to rest up, eat some dessert, and get ready for another great day tomorrow!

Tuesday, December 11, 2012

Visual Studio LIVE! Orlando Day 2

Today is when we get into the meat and potatoes of the conference as the keynotes and session tracks begin. Today Andrew Brust did an introduction and stated, as I had presumed, that this is the biggest Orlando conference in years. There are (4) tracks: Visual Studio LIVE!, SQL Server LIVE!, SharePoint LIVE!, and Cloud & Virtualization LIVE!. There are so many good sessions across the separate tracks, but I'm focused on the Visual Studio LIVE! content.

Visual Studio LIVE! Keynote: Application Lifecycle Management: It's a Team Sport
Brian Keller, Principal Technical Evangelist, Microsoft



If memory serves me correctly, I've heard Brian speak before and enjoy his presentations. This keynote focused on some of the new features of VS.NET 2012 and specifically the functionality and enhancements to Team Foundation Server.

TFS is a beautiful and now maturing product that makes one quickly forget the days of SourceSafe. In fact, the (2) don't even belong in the same room. Unfortunately, to date, the SS trail left such a bad taste in the community's mouth that I find people don't give TFS the look it deserves. Although based on the room's feedback on the included features, it seems to have a heavy following.

TFS has a focus on developer productivity, continuous integration, and agile methodologies. Things like the 'My Work' queue in Team Explorer provide a nice workflow for open items. Having the ability to submit a shelveset and request a 'Code Review' is a nice feature. The ability to audit reviews to show they happened could be beneficial as well. The 'diff' tool has some updates (apparently it has not been modified since '95, according to Brian) and displays differences in files directly in VS.NET.

The workflow offered in TFS is so robust that odds are there will be some overlap with existing external workflow engines you may be using. I know in my experience, having 'open tasks' within TFS could be a repeat of a change request logged in an external system. The distinction, I think, is that the work list items in TFS go down a level and are more specific about what to do (i.e. modify class 'x' to do 'y'), as opposed to a higher level change request (i.e. add functionality 'z').

The best part about TFS is its seamless integration directly into VS.NET. Being the flagship product from Microsoft for supporting source control, project management, and overall SDLC support, it's second to none in terms of VS.NET support. The real hurdle with TFS is that it is a massive piece of technology. It's difficult enough to keep up on all the development and related technologies required to stay current, let alone adopt a new platform like TFS. While it warrants the look, it is probably best to have a small team or consulting firm assist if ever moving over to TFS.

Local workspaces were added so that when working offline files are no longer just made read-only. The local workspace allows the developer to continue working offline and push updates to the server once back online via 'Included' or 'Excluded' changes.

The following were the (3) coolest features presented. 1st was 'Coded UI Tests'. The best way I can describe it is that it reminds me of how automation macros worked in Office. Essentially you can set up a coded UI test that will take control of the mouse and keyboard to simulate a user interacting with the UI of the site using a designed test. AWESOME! Actually, I've seen this before but never used it, as there are 3rd party companies that sell tools like this. Regardless, this definitely has potential.

Next was a new feature called 'Code Clone'. How many times do we wish to go back in the code and refactor repetitive methods? This utility within VS.NET 2012 will search the solution to find similar blocks of code to suggest refactoring. Score!

Lastly, IntelliTrace. This has been around since VS.NET 2010 and is something I would really like to get into. It provides the ability to output files consumable within VS.NET to 're-create' a low level trace breakdown of exceptions that occurred. Along with this comes the ability to double click an exception and have it go straight to the part of the application that caused it, with full history of the call sequence. Wow, how often do we get a "Please investigate problem 'x'; it occurs only every so often"? Then comes the process of trying to recreate the use case and cause the error, which is often difficult. IntelliTrace takes the guesswork out of this process and streamlines time for developers to debug issues.

LIVE! 360 Keynote: Visual Studio and the New Web Enabled Apps for Office and SharePoint 2013
Jay Schmelzer, Director of Program Management, Visual Studio Team, Microsoft



Jay is responsible for the design time tools in Visual Studio, and as he was introduced these tools were touted as, 'everything that's important in VS.NET'. The focus of this keynote is on Office and SharePoint Applications built in VS.NET. 

Well, for a web guy (for the most part) such as myself, doing any kind of Office automation on the server is a big 'no, no' and is even documented *here*. The only appropriate place is in a thick client Windows or WPF application, which has not been my forte over the last several years. I had my days of heavy Office automation early in my career, doing a lot with VB6 and VBA interacting with Excel or Word. In .NET I worked with the COM interop .dlls with Excel... make sure to dispose of those unmanaged objects, nuff said. The mindset over the last 6-8 years in my world was to veer toward making applications web based and independent of platform specific applications, i.e. Office. Maybe a wrong line of thinking; they have Office for Macs too, right? (hint: sarcasm)

However, with the push of Windows 8 and the Windows App Store, there is most certainly a push to bring developers back close to the OS (I could write an entire separate post on this). With this closeness to the Windows OS, re-enter (at least for me) Office applications. I mean, I'm not the only one on this track, because I've seen very few keynotes or sessions on Office apps in the last 5 years at this conference.

Watching the demos though, I am pleased to see the flexibility in choice of technology use for creating these Office and SharePoint applications. Jay created a MVC 4 application (MvcApplication1.Sharepoint) for SharePoint, deployed to SharePoint with ease, and was then running and hosted in the context of SharePoint. How many times can I say SharePoint...

Here's one thing I hope I'm never told to investigate: "Allen, can you please take a look at the JavaScript object model for SharePoint." Sorry, no thank you I will pass.

Jay continued on with the plethora of new functionality regarding high-trust apps and LightSwitch. They tout LightSwitch as "the easiest way to create modern business applications for the enterprise." My fear with LightSwitch is a return to the days of old, with rogue business developers creating enterprise applications and then dumping them on 'real' developers once they move on in life. Sound familiar: "Yeah, we have these apps John built using LampSwitch or something, and he left and there isn't anyone to support them. Can you help us?"

The thing I give Microsoft total credit for is not boxing us into a narrow set of technologies to solve problems. While these products are not directly applicable to me today, I still think they hold merit for the overall community in situations where they do make sense.


What's New in Visual Studio 2012
Rob Daigneau, Practice Lead, Slalom Consulting



This was a great session by an experienced leader of our industry. Rob used to be a director of architecture at Amazon.com and also wrote the book "Service Design Patterns". The book approaches services from either a WSDL or REST style (check the slides for a coupon code). This guy has a wealth of design and architecture experience, so I'm sure I'll be seeking him out in the future.

I always love when presenters ask a "who's using what" show of hands on technology. In this room there are probably 200-300 people.
  • Windows 8 apps: 10 hands
  • Web applications: 99% of the room
    • WebForms: 35%
    • MVC 65% (Quote from Rob: "Yeah, we're better. Haha, just kidding")
  • WCF: About 30% of the room (I'm convinced service developers are a smaller subset due to the skills required, and this is not a reflection of a competing technology winning, like the other votes)
  • WinForms: 60-75% of the room
  • Office Apps: 3 hands (see my keynote comments)
  • Azure Apps: 2 hands.
  • Windows Workflow: 2 hands.
Starting right off, he highlighted the fact that with VS.NET 2012 you can upgrade a project without changing the project files (Yeah!). Also, individual projects can target different .NET versions (Yeah again!). It can also target different platforms: Server, Xbox, Windows Phone, etc.

The following are the editions of VS.NET 2012: Test, Pro, Premium, and Ultimate. He states that to get anything reasonable done you need at least the entry level 'Pro' version. The 'Test' version is targeted at QA professionals that use 'Labs' leveraging VMs in the cloud to do testing. Odds are many here will use 'Ultimate'.

Next he did a drill down on the different project templates in VS.NET 2012. All the players are still there, with a focus on creating templates that are intuitive enough to help you know which type of project to make. He also noted Silverlight is not dead and is still a citizen of the framework. A lot of SL code is out there and it must still be supported.

He spoke about the 'Metro' naming mess, as others have. New name: 'Windows Store Apps'. A misnomer, because they certainly do not have to be sold in the store. I think Microsoft could have come up with a better name.

The search capability in Solution Explorer is really nice, with the ability to use Pascal-casing searches like 'CC' to narrow down to, say, 'CharityCampaign'. I am spoiled because a lot of these types of features exist in tools like ReSharper, which I use. However, if you don't have such a tool, they have definitely made improvements to the search functionality.

Remember in every prior version of VS.NET the slowness of loading assemblies in the 'Add Reference' dialog? They finally fixed/improved this. The second the dialog is presented, the entire set of assemblies is available for selection. There is new search ability here as well, where before you could only scroll.

With ASP.NET MVC 4 they have optimized Razor, added mobile templates, added Web API, and added single-page application templates (Knockout.js & Web API). Web API is for the development of truly RESTful services. As opposed to yesterday's presenter, Rob is clear on his choice between WCF-based REST services and Web API: Web API.

Minification and Bundling are great features to organize and optimize the JavaScript files within an application. As JavaScript becomes more and more a 1st class citizen of writing applications (and not just a sprinkling of JS in web apps), these features are in here at a perfect time.

Navigation and development in HTML is much improved. Most of the features are small tidbits, but they add up: changing the opening tag changes the closing tag for you (instead of searching for it), snippets (Tab twice), and improved IntelliSense based on document type (i.e. HTML5). The embedded Page Inspector helps replace the need for things like Firebug, which is nice. The inspector is not totally new functionality in that it really is just the IE developer toolbar, but the main difference is not having to start and stop the project to have access to it. It can now be done at design time.

I keep hearing with each version of VS.NET that JavaScript IntelliSense is 'improved'. I wasn't that impressed from '08 to '10, but it appears they actually have made decent improvements in 2012. Rob was highlighting some of the IntelliSense capabilities and I could see right away that it is better. I like the IntelliSense on functions that stems from user-added XML documentation, quite similar to managed code. Debugging and adding quick watches made me feel much more like I was debugging C#, which is a huge compliment.

Rob moved to the 'async' and 'await' functionality in C# 5.0. I've had a draft of a blog post on this for about (2) months now, and need to finish and publish it. "Asynchronous programming can't get any easier than this." I couldn't agree more, and I have other blog posts that speak to this. No more manually merging threads; words like 'Mutex' are the days of old.
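His demo boiled down to just a few lines. Here is a minimal sketch of my own (Task.Delay stands in for real I/O; none of this is the session's code):

```csharp
using System;
using System.Threading.Tasks;

class AsyncDemo
{
    // Simulate a long-running operation; 'await' returns control to the
    // caller instead of blocking a thread while the work completes.
    static async Task<int> ComputeAsync(int value)
    {
        await Task.Delay(100);          // stand-in for real I/O
        return value * 2;
    }

    static void Main()
    {
        // Kick off two operations concurrently, then wait on both.
        Task<int> a = ComputeAsync(21);
        Task<int> b = ComputeAsync(50);
        Task.WaitAll(a, b);
        Console.WriteLine(a.Result + b.Result);
    }
}
```

No callbacks, no thread merging; the compiler generates all of the continuation plumbing for you.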

I had a keen eye on WCF improvements because I use the technology heavily and on a daily basis. Web Sockets offer the same functionality as the NetTcp binding but are based on open standards and communicate via HTTP. There is a new binding named 'netHttpBinding' which will do web sockets. Super cool, and I know I will be looking to use this in the future.
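For reference, a hypothetical host configuration enabling the new binding might look something like this (the service and contract names are made up for illustration):

```xml
<!-- Sketch of a host .config exposing a WebSocket-capable endpoint -->
<system.serviceModel>
  <services>
    <service name="MyApp.Services.QuoteService">
      <endpoint address="" binding="netHttpBinding"
                contract="MyApp.Contracts.IQuoteService" />
    </service>
  </services>
</system.serviceModel>
```

With a duplex contract, netHttpBinding negotiates the WebSocket transport over HTTP automatically.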

He also highlighted a new advanced option when adding a WCF reference that allows for task-based asynchronous programming. OK, this is awesome, but it contradicts some of the content from my post yesterday about WCF architecture. I'm not sure how this feature would be incorporated when creating the WCF communication layers manually and not having the proxy classes automagically built. I guess I could always cheat, generate the proxies, see how they behave, and port that over to an implementation that uses manually created proxies.



Oh wow, Dependency Graphs (Architecture -> Generate Dependency Graph) are something I could really leverage, especially when inheriting a larger project. The feature creates a visual diagram of all assemblies, their relationships, and dependencies. This is super useful when trying to understand how the moving parts all interact. Sometimes swimming around in the unknown of a new solution and trying to see at a high level how parts interact is not an easy task. These architectural views will help alleviate this challenge.

To sum it up, I'm sold on VS.NET 2012 (WITH ITS GREAT NEW MENUS :-P).


Smackdown: Windows Store Apps vs Websites
Ben Dewey, Senior Consultant, Tallan, Inc. Author of Windows 8 Apps



Ben touts this class as a good fit for developers to know the differences and make good choices when questioned or tasked with selecting between a traditional website and a new Windows Store application.

Windows 8 exposes a new API named WinRT. WinRT is C++ code built upon the runtime to allow the development of Windows Store apps; .NET, by comparison, was built atop COM and the Win32 API. WinRT is an OO API with a projection layer that allows development in various languages such as JavaScript, XAML-based languages, or C++ to make Windows Store applications. Devices were the real driver of WinRT: cameras (CameraCaptureUI), GPS, accelerometer, Bluetooth, Near Field Communication, etc. Microsoft promises to build upon WinRT for new devices.

One note he made, and one I've spoken of over the last year or so, is that it is still difficult to develop using HTML5 because in the enterprise too many are still on non-supporting browsers like IE7 and IE8.

One recurring premise I see consistently is that you are going to have a much richer user experience when creating a Windows Store app (obviously) because of the closeness to the OS using WinRT, as opposed to rendering HTML to the client (even with HTML5 capability) from a remote server.

Windows Store applications must have a 'clean' experience offline. This means you will want to download all the images, etc. so users can run offline with a seamless experience.

When using JavaScript on Windows 8, there is access to that WinRT C++ code directly. There is also enhanced support for touch, new controls, and asynchronous support. JavaScript is a 1st class citizen when it comes to writing WinRT apps. Therefore if you like JS, you will be happy to create your Windows Store apps using it as a primary language. I've used JS as required in my web apps over the years and more so recently, but I still am not as comfortable using it as I am C# or VB.NET. I know one thing for sure: JS is not going away, so I need to keep refining these skills, especially in the area of JS asynchronous programming. What I would love to see is a session highlighting the different languages that can be used to create Windows Store apps. I understand opening the door to all the languages is to attract the masses, but why in the world would I by default pick JS to write an app and not C#/VB.NET? I'm sure someone could justify it well, I just haven't been enlightened on it yet.

Ben presented a few Windows Store applications built with JS: using a camera, using the GPS, and using a simulated accelerometer. I have to admit, for this type of functionality these apps are ideal to make, as the WinRT API exposes functionality directly to bring these devices to life. With just a few lines of JS code plus a Bing 3rd party tool, he had it wired up to show our exact geo location using the GPS in his laptop. I was impressed how fast it switched from New York on the map to here in Orlando. He noted that with rapid-fire events you subscribe to, you should listen only when required, to preserve battery.

File I/O in WinRT includes an enhanced file upload control for Windows Store apps. One cool feature is the ability to select a file on the client and manipulate it on the client without uploading it to the server. After all, if the server is not required, then why involve it? The upload control also has the ability to upload multiple files, which was not native to the traditional HTML or even the ASP.NET upload control. I can say from personal experience the ability to extract a thumbnail of an image so easily is awesome.

Some of the new features of Windows 8 are the 'App Bar' and charms. If using IE on Windows 8 you will see the app bar across the bottom of the screen in the form of global buttons on the right (i.e. navigation), and current selection buttons on the left. Charms are available from a bar that runs down the right hand side of the screen, and may include charms like 'Search' or 'Share'.

'Hybrid' apps allow creating a native JS Windows Store app and also a web app by simply using an iFrame. This gives you the best of both worlds by allowing access to the rich native WinRT functionality while still having a website accessible via any URL. Ben mentioned some frown upon this approach, but it is doable.

Obviously another plus for Windows Store apps is discoverability and monetization. With some random web app it can be difficult to sell functionality or draw traffic. Windows Store apps now have a centralized home in the Windows Store, which provides the ability to sell apps and make money.

By the end of the session one thing is apparent: from the perspective of strictly providing a rich user experience and deep functionality, the hands-down winner is Windows Store applications. This is of course throwing out the large elephant in the room, which is the requirement of Windows 8. So web apps vs. Windows Store apps is a little apples and oranges. I also must say a lot of these sessions highlighting Windows Store apps remind me of the Silverlight sessions 3-4 years ago; all the examples were simple little video players or Twitter apps. The real comparison to shape up in the future is selling the masses on the new format and the interactivity learning curve in Windows 8.


What's New in the .NET BCL
Jason Bock, Principal Lead Consultant, Magenic



Jason is an elite member of our industry, supporting his status as an MVP with his quality sessions at Visual Studio LIVE! I have had the pleasure of attending Jason's sessions before, and the guy has more low-level framework knowledge than almost anyone I've interacted with previously. And he likes the band Rush too, so that's cool!

The .NET Framework is now 10 years old and we have come a long way. It's amazing how much it's evolved since Framework 1.0 and how much functionality has been added over the years. The .NET Framework 4.5 is what's called an 'in-place upgrade' and actually places all of the new files into the v4.0.30319 directories. This is important to understand because the 4.0 assemblies will be overwritten. Microsoft says this will not be a problem since it's an in-place upgrade, but it's still good to understand the behavior.

The BCL is made up of the following as diagrammed below (image courtesy of Visual Studio LIVE! and Jason Bock):




Jason highlighted a tool he used that outputs the differences between the framework versions to XML. The purpose is reference, since there really isn't a good repository showing the comparisons between the frameworks. He mentioned there was a tiny bit of functionality marked Obsolete in the Framework, but it was so esoteric and unused that odds are it will not affect the masses.

The topic of asynchronous programming came up because by his estimation there are 230 new asynchronous methods in .NET 4.5. In fact, all of the file I/O operations from the last session I attended on Windows Store apps are asynchronous. Over the years, CPU clock speed would almost double every few years. In the last few years clock speeds have leveled off, but more cores are added; many machines today (like mine) have 4 cores and 8 hardware threads (with Hyper-Threading enabled). What we as programmers need to be doing is utilizing these cores via asynchronous programming. And today in .NET 4.5 there is almost no excuse not to, because the complexities of asynchronous programming are more and more being abstracted away. Cue the TPL and the async and await keywords. He recommends, and I agree, using the 'Task' class and not 'Thread'. In fact, what's interesting is he states how disgustingly complex the code supporting these simple keywords is under the covers. He said to look at the IL and see how nasty the implementation code is. We are not supposed to see that code anyway, but rather leverage the hard work others have done for us.

There is a new ability for you RegEx fans to add a timeout to expression evaluation. This is good so that if an expression is quite complex, you can cause it to time out explicitly. The timeout value is passed in an overload of the constructor.
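A quick sketch of my own showing the new overload (the pattern and timeout value here are just illustrative, chosen to force catastrophic backtracking):

```csharp
using System;
using System.Text.RegularExpressions;

class RegexTimeoutDemo
{
    static void Main()
    {
        // New in .NET 4.5: the constructor overload takes a matchTimeout.
        // A catastrophically backtracking pattern against a non-matching
        // input now throws RegexMatchTimeoutException instead of spinning.
        var slow = new Regex("(a+)+$", RegexOptions.None,
                             TimeSpan.FromMilliseconds(250));
        try
        {
            slow.IsMatch(new string('a', 50) + "!");
        }
        catch (RegexMatchTimeoutException)
        {
            Console.WriteLine("Pattern timed out as expected.");
        }
    }
}
```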

Next he moved into memory management in Framework 4.5. He is (as am I) always dumbfounded by those who do not like deterministic disposal and rely solely on garbage collection, thinking it is perfect and there can be no memory leaks. To prove the GC is not perfect, allocate really, really large arrays and let them sit around; it will introduce memory leaks. He also demonstrated how wiring up events (+=) and not unhooking them (-=), which is important, can cause memory leak issues that are quite significant. In applications, it is possible that handlers attached to event sources will not be destroyed in coordination with the listener object that attached them, and this situation can lead to memory leaks. A new class named 'WeakEventHandler' has been introduced to help prevent the memory leak condition, acting like a manager on that handler and ensuring the handler is unhooked automatically.
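Here is a minimal sketch of my own of the += / -= pattern he described (the class names are mine, not from the session):

```csharp
using System;

// A long-lived publisher holds a reference to every subscribed handler,
// which in turn roots the listener object. Forgetting '-=' keeps the
// listener alive (and its handler firing) for the publisher's lifetime.
class Publisher
{
    public event EventHandler Tick;

    public void Raise()
    {
        EventHandler handler = Tick;
        if (handler != null) handler(this, EventArgs.Empty);
    }
}

class Listener : IDisposable
{
    private readonly Publisher _source;
    public int Count;                  // how many ticks we observed

    public Listener(Publisher source)
    {
        _source = source;
        _source.Tick += OnTick;        // += roots 'this' in the publisher
    }

    private void OnTick(object sender, EventArgs e) { Count++; }

    public void Dispose()
    {
        _source.Tick -= OnTick;        // -= releases the reference
    }
}

class LeakDemo
{
    static void Main()
    {
        var publisher = new Publisher();
        var listener = new Listener(publisher);
        publisher.Raise();             // observed: Count becomes 1
        listener.Dispose();            // unhooked; no longer rooted
        publisher.Raise();             // Count stays 1
        Console.WriteLine(listener.Count);
    }
}
```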

He also covered compression. There is finally a way to read a .zip in .NET 4.5. There was compression previously, like 'DeflateStream' or GZip, but nothing for .zip native to the framework. No longer do you have to get NuGet packages or 3rd party components.
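A small sketch using the new System.IO.Compression types (on .NET 4.5 the ZipFile class also requires a reference to the System.IO.Compression.FileSystem assembly):

```csharp
using System;
using System.IO;
using System.IO.Compression;   // new .zip support in .NET 4.5

class ZipDemo
{
    static void Main()
    {
        string path = Path.Combine(Path.GetTempPath(), "demo.zip");

        // Create a .zip and add one entry, no 3rd party library needed.
        using (var archive = ZipFile.Open(path, ZipArchiveMode.Create))
        using (var writer = new StreamWriter(
                   archive.CreateEntry("readme.txt").Open()))
        {
            writer.Write("Hello from System.IO.Compression");
        }

        // Read the archive back and list its entries.
        using (var archive = ZipFile.OpenRead(path))
        {
            foreach (ZipArchiveEntry entry in archive.Entries)
                Console.WriteLine(entry.FullName);
        }

        File.Delete(path);
    }
}
```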

Lastly he spoke to web sockets, which is probably one of the technologies I'm most looking forward to using in the future. He finished out with some of the other bulleted additions like UDP in WCF, generics support in MEF, and System.Json.

Wrap Up Day 2



Well, it's been a busy day and it has flown by! It seems like I just woke up and now the day is complete. All presenters are 1st class as usual, as are the attendees I interact with. If you think about it, conferences like this amass the best of the best in the industry, so it's easy to get along with those around you. It's like one big team of sorts. My brain is filled with lots to digest and I look forward to what tomorrow will bring!

Monday, December 10, 2012

Visual Studio LIVE! Day 1 - WCF and Web API



So it's day 1 here at Visual Studio LIVE!, and I'm attending Miguel Castro's WCF & Web API full-day workshop. There is great attendance, with probably 300-400 people by my estimation. I'm in the 2nd row so I can get the most out of it and interact without yelling. I am definitely refining my air-traffic-controller multitasking skills today, trying to listen, take notes, and learn simultaneously.

I actually had the pleasure of having dinner with Miguel and his family, along with Rocky Lhotka and Andrew Brust, at last year's conference over at Bubba Gump Shrimp. I sat across from Miguel and had some great conversations on WCF. I also had lunch with him today, which was cool! The man has a black 560 HP Mustang GT he customized. Yes please. Here's a guy with a wealth of knowledge on WCF, services, and SOA who you wish was sitting in the cube or office next to you on a daily basis. He validates himself easily as a Microsoft MVP for Connected Systems, and I enjoy his knowledge transfer.

This class was interesting to me because I have been using WCF since the CTP in Framework 3.0, and by no means do I consider myself an expert (on this or anything), but I wonder about tracks in which I already have deeper experience. Meaning, will it be a lot of information I've digested previously? Fear not, one can never know it all. Case in point: this class by Miguel. The byproduct of the architecture used behind some of the simple WCF samples shown is the real gem.

When Miguel asked how many in the room are WCF developers, to my surprise it was not the overall majority of the room. I am confident this was not telling of the technology, but rather that a lot of developers have not been involved with writing services. I feel for some of these people because WCF is so deep a technology that it's hard to digest in a single day. This is exactly the reason, though, that I found this class so informative. This was a 300-level track masquerading as a 101 during the introduction, and he states this is a 5-day class compressed into a single day. When DI, SOA, SoC, and OOP are all being discussed within the 1st hour or so... this is great stuff.

Where I really align beliefs in architecture with Miguel is on SoC. I can always get a feel for how an architecture is laid out simply by looking at all of the projects in a VS.NET solution collapsed and seeing the logical layers. Miguel is a strong proponent of separating the individual pieces of WCF into their own layers. He does not care for the bloat of the auto-generated proxy classes created by 'Add Service Reference' within VS.NET, in addition to the tightly coupled nature of the service contract, implementing classes, and the associated channel communications. I agree for the majority of applications. I still think there is a place for consuming a service and having the proxies created, along with a WCF Service Library on the back end. If you have a simple 1-page app using a simple service (keeping in mind the YAGNI principle, while still observing scalability of functionality), there is a place for a compressed architecture. A WCF Service Library or WCF Service Application plus a consuming client application using 'Add Service Reference' within VS.NET is simple, straightforward, and easy to understand and consume. I try never to be what they call an 'Architecture Astronaut' and over-architect when not required; I'm not implying Miguel is this by any means, just that I will rarely use the word 'always' (he didn't either, I'm just making a point) in the sentence "This application should always be architected like this ____". I don't believe in spaghetti crap applications either, so hopefully I make proper decisions when building and constructing applications.

However, for true enterprise applications using SOA as a basis, or even any large-scale isolated application, I couldn't hold the flag with any more pride on properly separating the logic when working with WCF. I'm a big fan of the ideology around architectures like Domain Driven Design, MVC, MVVM, and heck, even simple 3-layer UI-BLL-DAL. They all share at least a basic commonality, which is the idea of logically separating responsibilities and concerns. Since WCF inherently has a lot of different responsibilities end-to-end, it makes sense to separate the major players into their own pieces. This isn't really a new concept, as most advanced literature in our field will at minimum separate out the host and the WCF functionality, and most take it a step further and break the pieces down more. Why all of this work? In the long run it allows us to be extremely flexible, to make isolated changes without a large ripple effect, and to switch out pieces like the host easily.

First let's look at an overall SOA architecture. This gives a great visual on a decent layering of the application (slide courtesy of Visual Studio LIVE! and Miguel Castro).



Here is a breakdown of the WCF components that will need to be created, each responsibility being its own project (slide courtesy of Visual Studio LIVE! and Miguel Castro):



Next let's look at the breakdown of the actual project layers and their high-level purpose:



Business Engine: This is the typical layer which contains the business rules and logic. Also referred to as the Business Logic Layer, Business Domain or just Domain Layer.

Client: This represents any UI client that will be making calls to our WCF service. This might be an ASP.NET, WPF, WinForms, etc. application.

Contracts: These are the Service Contracts and any DTO DataContracts used for transporting data across the wire. There are no implementation classes in this layer. Miguel had a nice suggestion to suffix 'Data' on a DataContract DTO, like 'ZipCodeData'. This helps distinguish them from service or business logic types. DTOs are typically nothing more than getters and setters used to move serializable data across the network.
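A sketch of what lives in this layer, using his 'ZipCodeData' naming suggestion (the contract itself is my own invention, not from the workshop):

```csharp
using System.Runtime.Serialization;
using System.ServiceModel;

// Service contract lives in its own 'Contracts' project. No
// implementation classes here, just the shape of the service.
[ServiceContract]
public interface IZipCodeService
{
    [OperationContract]
    ZipCodeData GetZipCode(string zip);
}

// DTO DataContract, suffixed with 'Data' so it isn't mistaken for a
// business object. Just serializable getters and setters.
[DataContract]
public class ZipCodeData
{
    [DataMember] public string Zip { get; set; }
    [DataMember] public string City { get; set; }
    [DataMember] public string State { get; set; }
}
```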

Host: This is the hosting layer for WCF. It might contain the files necessary for hosting via a self-hosted Windows Service or via IIS (.svc file). Remember that a .svc file is a 'browsing point' for a WCF service; it invokes the appropriate WCF handler, and ASP.NET needs a browsing point to know what world it's in. This layer also contains the .config file, the only one that matters being the host's, and within this config lives the WCF configuration.

WebHost: This is an optional layer that provides an additional avenue if the 'Host' layer is implemented using an alternate self-hosted mechanism like a console application or Windows Service.

Again, as you decipher each layer you see it has a very specific responsibility. It really is not a lot of extra work to segregate the layers, and the benefit is the ability to make isolated changes or even switch out components more easily (like hosting methods) without a lot of 'unhooking'. The main thing here is *not* to couple the host and the service code, in case the host needs to be changed out later. This architecture and layout of the layers has its obvious benefits.

He then led into a great discussion on WCF proxy instancing and concurrency. I was more interested in the instancing portion, as I think there are more use cases for making changes there. The following are the (3) main types of instancing:

Per-call instancing
    - Each call spins up a new instance of the service.
    - On the client it appears that it's a single object, but it's not. Stateless behavior.
    - Advantages are statelessness, scalability, and nothing held in the server's memory.
    - Not the default, but Miguel sets his services up like this.
Per-session instancing
    - The WCF default.
    - The 1st call spins up an instance and calls the constructor.
    - The 2nd call uses the SAME instance.
    - Can maintain state using class-wide members in the service.
Singleton instancing
    - When the host opens, the host instantiates the service.
    - One instance serves all proxies for all clients.
    - Very specific usage scenarios.
    - With IIS hosting and the potential for app pool recycles, the singleton instance is destroyed and Dispose is called. This is a disadvantage and something to keep in mind if trying to use a singleton service.
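For reference, per-call instancing is opted into with an attribute on the service implementation. A hedged sketch (the contract and return value are made up for illustration):

```csharp
using System.ServiceModel;

[ServiceContract]
public interface IZipCodeService
{
    [OperationContract]
    string GetZipCode(string zip);
}

// Per-call is not the WCF default (per-session is), so it is opted into
// explicitly on the service implementation:
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
public class ZipCodeService : IZipCodeService
{
    // A fresh instance is constructed for every call, so nothing is
    // accidentally held in the server's memory between calls.
    public string GetZipCode(string zip)
    {
        return "Orlando, FL";  // stand-in for real lookup logic
    }
}
```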

On the topic of proxies: they are unmanaged objects, not CLR-managed. Always dispose of proxy classes, and use a 'using' statement when possible to wrap instantiation of the proxy classes. Until the proxy is closed, WCF keeps the connection open and counts it toward service throttling, thinking the connection is still active. Throttling is WCF's ability to manage concurrent calls, queuing further calls to prevent the server from breaking. This is really important to take notice of: if a proxy class is not disposed, performance can suffer significantly. Miguel had an instance where a DI-injected proxy in MVC was not disposed; calls dropped from 1.5 secs to 100 ms once the issue was tracked down and the proxy was disposed. A few in the audience complained of problems with the 'using' statement throwing exceptions, but I would like to see the IL differences from closing manually. Bottom line: dispose explicitly if the 'using' statement gives any issues.
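The usual workaround when 'using' gives trouble is the try/Close/Abort pattern, since Dispose() on a faulted channel can itself throw. A hedged sketch (contract and endpoint names are hypothetical, and this assumes a configured endpoint exists):

```csharp
using System.ServiceModel;

[ServiceContract]
interface IZipCodeService
{
    [OperationContract]
    string GetZipCode(string zip);
}

class ProxyDisposalDemo
{
    static void CallService()
    {
        var factory = new ChannelFactory<IZipCodeService>("ZipCodeEndpoint");
        IZipCodeService proxy = factory.CreateChannel();
        try
        {
            proxy.GetZipCode("32830");
            ((IClientChannel)proxy).Close();  // release the connection promptly
        }
        catch (CommunicationException)
        {
            ((IClientChannel)proxy).Abort();  // tear down a faulted channel
            throw;
        }
    }
}
```

Close() returns the connection to WCF cleanly; Abort() is the only safe option once the channel has faulted.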

Next he went into WCF callbacks. Very useful stuff. Callbacks keep a proxy channel open to allow the server to make calls back to the client. Uses could be as simple as updating a progress bar, updating sports scores during a game, updating stock prices, updating dashboards, etc. This is, in my opinion, a much more appropriate 'tool' for what is often done via a timer and polling. Polling definitely has its place, and I use it and have blogged on it, but when the responsibility is on the server to notify the clients of updated data, then using a duplex service is a good idea. If you need to constantly keep a client updated and are using a WCF SOAP-based service, then callbacks and a duplex service should be considered, especially if you are implementing some sort of polling to fetch data.
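A duplex contract can be sketched roughly like this; all names are illustrative, not from the session:

```csharp
using System.ServiceModel;

// The callback contract the *client* implements; the server pushes
// updates through it instead of the client polling on a timer.
public interface IPriceCallback
{
    [OperationContract(IsOneWay = true)]
    void PriceUpdated(string symbol, decimal price);
}

// The service contract declares its paired callback contract.
[ServiceContract(CallbackContract = typeof(IPriceCallback))]
public interface IPriceService
{
    [OperationContract]
    void Subscribe(string symbol);
}

public class PriceService : IPriceService
{
    public void Subscribe(string symbol)
    {
        // Inside an operation, grab the caller's callback channel
        // and push an update back over the same open channel.
        IPriceCallback callback =
            OperationContext.Current.GetCallbackChannel<IPriceCallback>();
        callback.PriceUpdated(symbol, 42.17m);
    }
}
```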

The topic of WCF exceptions was also highlighted. As always the main exception rules still apply when handling exceptions. Here are the main reasons to catch exceptions:

  • wrap and re-throw
  • log and re-throw
  • consume and dissolve

However, with WCF operations it is acceptable to have one big Try-Catch that throws a FaultException.
My particular favorite way is using something like the following:

throw new FaultException("Blah, blah");

Something interesting: if you do not explicitly throw a FaultException server side, or throw an exception of a specific type (i.e. DivideByZeroException), the proxy state on the client will *not* be preserved and the proxy will be closed. However, if a FaultException or FaultException&lt;T&gt; (where 'T' can be an exception type or any data contract) is thrown by the service, the proxy state will be preserved. I asked why we would want to preserve the proxy state on the client. After all, we are busted server side; should the client proxy survive? Odds are no, it should not. But maybe you want to give the client a chance to absorb the fault and go through some retry logic without closing the proxy. This would be a rare use case, but it's important to understand.
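A sketch of throwing a typed fault; the fault contract and operation here are my own invention:

```csharp
using System.Runtime.Serialization;
using System.ServiceModel;

// A data contract carried as the fault detail.
[DataContract]
public class ValidationFault
{
    [DataMember] public string Message { get; set; }
}

public class ZipCodeService
{
    // Throwing a typed fault reports the error without faulting the
    // channel, so the client proxy stays usable for retry logic.
    public string GetZipCode(string zip)
    {
        if (string.IsNullOrEmpty(zip))
        {
            throw new FaultException<ValidationFault>(
                new ValidationFault { Message = "zip is required" },
                new FaultReason("Invalid request"));
        }
        return "Orlando, FL";  // stand-in for real lookup logic
    }
}
```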

Miguel also highlighted transactions in WCF. I have not used transactions in WCF in a production setting, but as is typical, the best example to show how to use transactions is a bank account. The example was a method called 'TransferMoney()' that underneath calls (2) additional methods, 'DebitAccount()' and 'CreditAccount()'. Obviously if 'DebitAccount()' worked and 'CreditAccount()' did not, we do not want the transaction to complete; we want it to roll back. This is not the same as the SQL transactions in ADO.NET that many have worked with: the rollback is independent of any SQL calls. You might still have a database call involved, but you might not.

A few things to note on transactions in WCF. They are at the operation level. By default transactions are not turned on (value = 'NotAllowed'), but there are (2) other settings: 'Allowed' and 'Mandatory'. As Miguel mentions, setting the operation to 'Allowed' is low risk, as it actually does nothing until transactions are implemented; it just opens the door to allowing them if desired. Also, all methods downstream must participate in transactions or the transaction functionality will not work properly; any method in the chain that does not support and implement transactions will prevent the rollback from occurring if there is a failure. Lastly, the client does not need to worry about wrapping the initial call in a transaction; as long as the WCF service implements them properly, the transaction behavior is still correct. Transactions are not needed for all use cases, but the actual implementation, when planned out properly, is not too difficult.
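Putting those settings together in a hedged sketch using the session's bank account example (method and type names follow his description, the attribute wiring is mine):

```csharp
using System.ServiceModel;

[ServiceContract]
public interface IAccountService
{
    // 'Allowed' is the low-risk setting: it merely permits a flowed
    // transaction without requiring one.
    [OperationContract]
    [TransactionFlow(TransactionFlowOption.Allowed)]
    void TransferMoney(string from, string to, decimal amount);
}

public class AccountService : IAccountService
{
    // The operation runs inside a transaction scope; it completes only
    // if both calls succeed, otherwise enlisted work rolls back.
    [OperationBehavior(TransactionScopeRequired = true)]
    public void TransferMoney(string from, string to, decimal amount)
    {
        DebitAccount(from, amount);
        CreditAccount(to, amount);
    }

    // Both downstream methods must participate, or a failure in one
    // will not trigger a rollback of the other.
    void DebitAccount(string account, decimal amount) { /* ... */ }
    void CreditAccount(string account, decimal amount) { /* ... */ }
}
```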

In come REST services, my favorite topic of the day. I know it's not the end-all answer to everything, but its power and simplicity are great. WCF SOAP services for line-of-business intranet .NET-to-.NET apps using NetTcp bindings are still lightning fast and the way to go for performance. However, the minute a non-.NET client is introduced, especially outside the firewall, RESTful services are the required solution. Miguel stated that even though the wsHttpBinding is supposed to offer interoperability, it doesn't do so perfectly, so you need to go with REST in these situations. Deciding between SOAP and REST services will mostly come down to the consuming clients, internet vs. intranet, and interoperability factors. Also remember that with REST services many of the topics discussed previously, like transactions and concurrency, are not applicable. There is still very much a place for SOAP-based services, albeit a much heavier implementation than that of REST-based services.

REST-based services are much lighter weight than SOAP services, typically returning either XML or JSON. There is no heavy SOAP message to deal with, nor constraints on what the consuming client must look like. REST services, based on the REST architecture, are an extension of web standards and the GET, PUT, POST, and DELETE verbs (note: the PUT and DELETE verbs are turned off in IIS by default, so make sure to turn them on if you need them). In fact you can make REST-based calls directly in a browser (browser calls are by default HTTP GET). Although the purists, the so-called 'RESTafarians', will rarely acknowledge a pure REST implementation, there is at least one place I totally agree with them on implementing these types of services: make sure to use the HTTP verbs properly. For example, don't abuse the verbs and do an 'update' behind the scenes as part of a GET. While possible, it's incorrect, and there are really no checks in place to prevent it.

With a WCF implementation, the deciding factor on the return type, XML or JSON, is configurable at the service level. Ideally you would expose (2) endpoints, (1) for each return type. Then it's up to the client to call the appropriate URL containing the endpoint that returns the desired type. However, technically the server is still deciding what the return type will be.

What I REALLY like about Web API services is the ability for the client to set the 'Accept' value in the request header to indicate the desired return type! Yep, no configuration or heavy implementation. Set the header and you get back the type you requested. If testing from a browser, Google Chrome by default returns XML and IE returns JSON. My recommendation, if you are not familiar with JSON, is to begin using it, because it is more compact and lightweight than XML. With so many JSON deserializers in .NET, it is super simple to convert it to a DataContract once received and then work with a strongly typed object. You *can* do the same with XML via LINQ to XML into a type like a DataContract, but it's much more cumbersome to work with in my opinion.
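A minimal Web API sketch of my own showing why no per-service configuration is needed (the controller name and data are made up):

```csharp
using System.Collections.Generic;
using System.Web.Http;

// The framework content-negotiates the response format from the
// request's Accept header, so this one action can serve both XML and
// JSON clients with no extra code.
public class ZipCodesController : ApiController
{
    // GET api/zipcodes
    public IEnumerable<string> Get()
    {
        return new[] { "32830", "07302" };
    }
}

// Client side: 'Accept: application/json' returns JSON and
// 'Accept: application/xml' returns XML -- same URL, same action.
```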

WCF REST services and ASP.NET Web API are competing products at Microsoft with a single intended purpose: to deliver data in a RESTful manner. REST support was introduced 1st in WCF 3.5 with the WebGet and WebInvoke attributes and the webHttpBinding. Web API was originally born out of the WCF starter kit work and was an RC alongside MVC 4. You can use Web API with .NET Framework 4.0.

The main deciding factor when architecting an application is the choice between a WCF REST-based service and a Web API service. There are subtle advantages to both, and Miguel warns against getting all caught up in the 'Web API is the greatest thing since sliced bread' hype. If you already have a full-blown WCF service layer implementation and need to add REST atop it, then a WCF REST service may be the easier way to go. However, if starting from scratch, the general consensus seemed to be to use a Web API REST implementation.

Interesting tidbit: technically the largest REST-based deployment in the world is... the World Wide Web.

The final part of the day was to cover WCF security in 45 minutes. WHAT?!?! Yeah, pretty much impossible. The good news (at least for me) is that I have done so much over the years with WCF security, authorization, authentication, and securing services that the security information 'blitz' made sense to me. However, anyone in the audience who has not done anything with security will need to do a lot more research on the topics. May I recommend perusing this blog, as I have dedicated several posts to WCF security and securing WCF services.

The main points I wanted to highlight here are the following. TCP is a secure binding by default. It's binary. You can't break the pipe. HTTP, on the other hand, is an open binding and the 'message' needs to be secured. You can actually secure the 'Transport', which will also secure the message, with either an SSL certificate (HTTPS) or via X509 certificates. I prefer using an SSL cert and, like I mentioned, have several posts on the topic. However, the points on NetTcp are important to restate. If you *can* use a TCP binding, you will get some blazing performance and native Windows Security, so it's an attractive option when working on an intranet application with a .NET to .NET scenario. Check out the WCF Security Guide on CodePlex if you really want a deep dive. In reality, an 8-hour course could easily be given just on the topic of security.
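To make the NetTcp point concrete, here's a minimal configuration sketch (the service, contract, and address names are hypothetical) of a netTcpBinding secured at the transport layer with Windows credentials — the intranet .NET-to-.NET scenario described above:

```xml
<system.serviceModel>
  <bindings>
    <netTcpBinding>
      <binding name="secureTcp">
        <!-- TCP is secured at the transport layer by default;
             Windows credentials give you native intranet auth -->
        <security mode="Transport">
          <transport clientCredentialType="Windows" />
        </security>
      </binding>
    </netTcpBinding>
  </bindings>
  <services>
    <!-- Hypothetical service/contract names for illustration only -->
    <service name="Demo.OrderService">
      <endpoint address="net.tcp://localhost:9000/orders"
                binding="netTcpBinding"
                bindingConfiguration="secureTcp"
                contract="Demo.IOrderService" />
    </service>
  </services>
</system.serviceModel>
```

With an HTTP binding you'd instead be choosing between securing the transport (HTTPS) or the message itself; with TCP you get the secure transport out of the box.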



To wrap things up on this busy day, I must say I onloaded a LOT of information for the services world. If you have a chance, check out Miguel at any of the mainstream conferences around the country, on his blog, or on Twitter @miguelcastro67. I only have one piece of advice for Miguel since he has provided so much information to me today... dump the Mac.

I will also leave you with some of Miguel's best quotes of the day. I always enjoy his candid style!


"What is exception handling? A slash block and then 'ToDo'"
"Can we even call them Metro apps anymore, or not because some food store in eastern Europe sued Microsoft."
"Compilation is the 1st unit test, right... sometimes it's the only unit test"
"I get to start shit and not have to finish it" (contractors)
"My shit don't break"
"Who does SharePoint in the room.... Why?"
"no, no Google, we use Bing here right?"
"Dude, I'm not covering security 2 hours in! I do it at the end of the day when your brains are fried so I can bull shit my way through"
"You just broke my shit, I'm going to be pissed"
"Rhode Island sucks! It's not even a real state."
"What do you have to do to slow a Windows system down... Nothing"
"I feel like I just gave birth to a callback."
"Most New Jerseyans can't spell DB2"
"The RESTafarians are as whacked out and smoke as much ganja as the Rastafarians"
"Regular Expressions are cartoon characters cursing"
"I don't agree with anything a DBA says except in table naming"

Sunday, December 9, 2012

It's Visual Studio LIVE Eve!


I'm pumped and excited for another great year at Visual Studio LIVE! I'm attending the 2012 conference here in sunny Orlando, Florida, which is a great selection given the cold, rainy weather I see around the rest of the country. It's my 5th Visual Studio LIVE! conference, and I can say it has been a HUGE contributor to my DNA makeup as a software developer over the years.

I noticed this year the conference has expanded back out to Visual Studio LIVE!, SharePoint LIVE!, SQL Server LIVE!, and Cloud & Virtualization LIVE! The last time I saw the conference split out like that was in 2007 in Vegas, when it was the old TechMentor and VSLive! conferences. Nice to see the conference is back in full swing; there are going to be a boatload of talented folks from all over the country attending.

Of course I will be focusing on the Visual Studio LIVE! tracks and will veer toward a lot of the ASP.NET sessions in MVC, Web API, JavaScript, and the like. I'm also looking forward to a sprinkling of sessions on Windows 8 applications, VS.NET 2012, and .NET Framework 4.5. As usual, I'm looking over the tracks and there are always multiple great sessions occurring at the same time, so it's always difficult to choose.

Another great part of this conference is networking with elite developers to get a read on the industry: what people are using, what they've found doesn't work, and the overall direction of the Microsoft stack.

I've already registered tonight and am getting ready for a full day workshop tomorrow on WCF and Web API with Miguel Castro. I've always been a fan of programming services, so tomorrow should be both fun and intriguing as I continue my knowledge push on these technologies and best practices.

All right, it's almost that time so here we go!!