So it's day 1 here at Visual Studio LIVE!, and I'm attending Miguel Castro's WCF & Web API full-day workshop. Attendance is great, probably 300-400 people by my estimation. I'm in the 2nd row so I can get the most out of it and interact without yelling. I am definitely refining my air-traffic-controller multitasking skills today, trying to listen, take notes, and learn simultaneously.
I actually had the pleasure of having dinner with Miguel and his family, along with Rocky Lhotka and Andrew Brust, at last year's conference over at Bubba Gump Shrimp. I sat across from Miguel and had some great conversations on WCF. I also had lunch with him today, which was cool! The man has a black 560 HP Mustang GT he customized. Yes please. Here's a guy with a wealth of knowledge on WCF, services, and SOA, the kind you wish was sitting in the cube or office next to you on a daily basis. He easily validates his status as a Microsoft MVP for Connected Systems, and I enjoy his knowledge transfer.
This class was interesting to me because I have been using WCF since the CTP in Framework 3.0, and by no means do I consider myself an expert (on this or anything), but I do wonder about tracks where I already have deeper experience. Meaning, will it be a lot of information I've digested previously? Fear not: one can never know it all. Case in point, this class by Miguel. The real gem is the architecture used behind some of the simple WCF samples shown.
When Miguel asked how many in the room are WCF developers, to my surprise it was not the overall majority. I am confident this was not telling of the technology, but rather that a lot of developers have not been involved with writing services. I feel for some of these people, because WCF is so deep as a technology that it's hard to digest in a single day. This is exactly why I found this class so informative. It was a 300-level track masquerading as a 101 during the introduction, and he states it is a 5-day class compressed into a single day. When DI, SOA, SoC, and OOP are all being discussed within the 1st hour or so... this is great stuff.
Where I really align with Miguel on architecture is SoC. I can always get a feel for how an architecture is laid out simply by looking at all of the projects in a VS.NET solution collapsed and seeing the logical layers. Miguel is a strong proponent of separating the individual pieces of WCF into their own layers. He does not care for the bloat of the auto-generated proxy classes created by 'Add Service Reference' within VS.NET, nor for the tightly coupled nature of the service contract, implementation classes, and the associated channel communications. I agree for the majority of applications. I still think there is a place for consuming a service with generated proxies, along with a WCF Service Library on the back-end. If you have a simple 1-page app using a simple service (keeping the YAGNI principle in mind, while still observing scalability of functionality), there is a place for a compressed architecture. A WCF Service Library or WCF Service Application plus a consuming client using 'Add Service Reference' within VS.NET is simple, straightforward, and easy to understand and consume. I try never to be what they call an 'Architecture Astronaut' and over-architect when not required. I'm not implying Miguel is this by any means, just that I rarely use the word 'always' (he didn't either, just making a point) in the sentence "This application should be architected like this ____". I also don't believe in spaghetti-crap applications, so hopefully I make proper decisions when building and constructing applications.
However, for true enterprise applications using SOA as a basis, or even any large-scale isolated application, I couldn't hold the flag with any more pride on properly separating the logic when working with WCF. I'm a big fan of the ideology around architectures like Domain Driven Design, MVC, MVVM, and heck, even simple 3-layer UI-BLL-DAL. They all share at least a basic commonality: the idea of logically separating responsibilities and concerns. Since WCF inherently has a lot of different responsibilities end-to-end, it makes sense to separate the major players into their own pieces. This isn't really a new concept, as most advanced literature in our field will at minimum separate out the host and WCF functionality, but most take it a step further and break the pieces down more. Why all of this work? In the long run it allows us to be extremely flexible, make isolated changes without a large ripple effect, and switch out pieces like the host easily.
First let's look at an overall SOA architecture. This gives a great visual on a decent layering of the application (slide courtesy of Visual Studio LIVE! and Miguel Castro).
Here is a breakdown of the WCF components that will need to be created, each responsibility being its own project (slide courtesy of Visual Studio LIVE! and Miguel Castro):
Next let's look at the breakdown of the actual project layers and their high-level purpose:
Business Engine: This is the typical layer which contains the business rules and logic. Also referred to as the Business Logic Layer, Business Domain or just Domain Layer.
Client: This represents any UI client that will be making calls to our WCF service. This might be an ASP.NET, WPF, WinForms, etc. application.
Contracts: These are the Service Contracts and any DTO DataContracts used for transporting data across the wire. There are no implementation classes in this layer. Miguel had a nice suggestion to suffix a DataContract DTO with the word 'Data', like 'ZipCodeData'. This helps distinguish them from service or business logic classes. DTOs are typically nothing more than getters and setters moving serializable data across the network.
Host: This is the hosting layer for WCF. It might contain the files necessary for hosting via a self-hosted Windows Service or IIS (.svc file). Remember that a .svc file is a 'browsing point' for a WCF service; it invokes the appropriate WCF handler, and ASP.NET needs a browsing point to know what world it's in. This layer also contains the .config file, and the only config that matters is the one from the host. Within this config lives the WCF configuration.
WebHost: This is an optional layer that provides an additional hosting avenue if the 'Host' layer is implemented using an alternate self-hosted mechanism like a console application or Windows Service.
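To make the layering concrete, here is a minimal sketch of the separation, with hypothetical names (IZipCodeService, ZipCodeData, the 'zipEndpoint' config name) standing in for real project artifacts. The client uses a hand-rolled ChannelFactory<T> proxy against the shared Contracts assembly rather than 'Add Service Reference':

```csharp
using System.Runtime.Serialization;
using System.ServiceModel;

// --- Contracts project: contract + DTO only, no implementation ---
[ServiceContract]
public interface IZipCodeService
{
    [OperationContract]
    ZipCodeData GetZipInfo(string zipCode);
}

[DataContract]
public class ZipCodeData // 'Data' suffix per Miguel's naming suggestion
{
    [DataMember] public string City { get; set; }
    [DataMember] public string State { get; set; }
}

// --- Services project: implementation, references Contracts only ---
public class ZipCodeService : IZipCodeService
{
    public ZipCodeData GetZipInfo(string zipCode)
    {
        // Delegate to the business engine layer here.
        return new ZipCodeData { City = "Orlando", State = "FL" };
    }
}

// --- Client project: manual proxy via ChannelFactory<T>, no generated code ---
public static class ZipCodeClient
{
    public static ZipCodeData Lookup(string zip)
    {
        // 'zipEndpoint' is a hypothetical endpoint name from the client config.
        var factory = new ChannelFactory<IZipCodeService>("zipEndpoint");
        IZipCodeService proxy = factory.CreateChannel();
        try { return proxy.GetZipInfo(zip); }
        finally { factory.Close(); }
    }
}
```

Because the client and service share only the Contracts assembly, the host can be swapped (IIS, Windows Service, console) without touching either side.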
Again as you decipher each layer you see it has a very specific responsibility It really is not a lot of extra work to segregate the layers and the benefit is the ability to make isolated changes or even switch out components more easily (like hosting methods) without a lot of 'unhooking'. The main thing here is *not* to couple the host and the service code in case the host needs to be changed out later. This architecture and layout of the layers has it's obvious benefits.
He then led into a great discussion on WCF Proxy Instancing and Concurrency. I was more interested in the instancing portion as I think there are more use cases for making changes. The following are the (3) main types of instancing:
PerCall:
- Each call spins up a new instance of the service.
- On the client it appears to be a single object, but it's not. Stateless behavior.
- Advantages: stateless, scalable, and nothing is held in the server's memory.
- Not the default, but Miguel sets his services up like this.

PerSession (the default):
- The 1st call spins up the instance and calls the constructor.
- The 2nd call uses the SAME instance.
- Can maintain state using class-wide members in the service.

Single:
- When the host opens, the host instantiates the service.
- One instance serves all proxies for all clients.
- Very specific usage scenarios.
- With IIS hosting and the potential for app pool recycles, the Singleton instance is destroyed and Dispose is called. This is a disadvantage and something to keep in mind if trying to use a Singleton WCF service.
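The instancing mode is set per service class via the ServiceBehavior attribute. A sketch with a hypothetical counter contract makes the state differences visible:

```csharp
using System.ServiceModel;

[ServiceContract]
public interface ICounterService // hypothetical contract for illustration
{
    [OperationContract]
    int Increment();
}

// PerCall: a fresh instance per call, so _count is always 0 going in.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
public class PerCallCounter : ICounterService
{
    private int _count;
    public int Increment() { return ++_count; } // always returns 1
}

// PerSession (the default): the same instance serves a client's whole
// session, so _count survives between calls from the same proxy.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession)]
public class PerSessionCounter : ICounterService
{
    private int _count;
    public int Increment() { return ++_count; }
}

// Single: one instance shared by every client and proxy; remember the
// IIS app-pool-recycle caveat above before relying on this state.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
public class SingletonCounter : ICounterService
{
    private int _count;
    public int Increment() { return ++_count; }
}
```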
On the topic of proxies: they are unmanaged objects, not CLR-managed. Always dispose of proxy classes, and use a 'using' statement when possible to wrap instantiation of the proxy. Until the proxy is closed, WCF keeps the connection open and counts it against service throttling, thinking the connection is still active. Throttling is WCF's ability to manage concurrent calls, queuing up the excess to prevent the server from breaking. This is really important to take notice of: if that proxy class is not disposed, performance can suffer significantly. Miguel had an instance where a DI-injected proxy in MVC was not disposed; calls went from 1.5 secs down to 100ms once the issue was tracked down and the proxy was disposed. A few in the audience complained of the 'using' statement throwing exceptions, but I would like to see the IL differences from calling Dispose manually. Bottom line: dispose explicitly if the 'using' statement gives any issues.
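A minimal sketch of the 'using' pattern, with hypothetical names (ZipCodeServiceClient standing in for a ClientBase<T>-derived proxy generated by 'Add Service Reference'):

```csharp
using System;

public class ClientCode
{
    public static void PrintCity()
    {
        // ZipCodeServiceClient / ZipCodeData are illustrative names.
        using (var proxy = new ZipCodeServiceClient())
        {
            ZipCodeData data = proxy.GetZipInfo("32801");
            Console.WriteLine(data.City);
        } // Dispose() calls Close() here, releasing the channel so it no
          // longer counts against WCF's throttle of concurrent connections
    }
}
```

The catch the audience ran into is that Dispose() calls Close(), which throws if the channel is faulted, which is why some fall back to closing explicitly.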
Next he went into WCF callbacks. Very useful stuff. Callbacks have a place in keeping a proxy channel open to allow the server to make calls back to the client. Uses could be as simple as updating a progress bar, sports scores during a game, stock prices, dashboards, etc. This is, in my opinion, a much more appropriate 'tool' for what is often done via a timer and polling. Polling definitely has its place (I use it and have blogged on it), but when the responsibility is on the server to notify clients of updated data, a duplex service is a good idea. If you need to constantly keep a client updated and are using a WCF SOAP-based service, then callbacks and a duplex service should be considered, especially if you are currently implementing some sort of polling to fetch data.
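The shape of a duplex service is a pair of contracts: one the service implements and one the client implements for the server to call back on. A sketch with hypothetical stock-price names:

```csharp
using System.ServiceModel;

// Callback contract the *client* implements; the server invokes it.
public interface IPriceCallback
{
    [OperationContract(IsOneWay = true)]
    void PriceUpdated(string symbol, decimal price);
}

// Duplex service contract; requires a duplex-capable binding such as
// netTcpBinding or wsDualHttpBinding.
[ServiceContract(CallbackContract = typeof(IPriceCallback))]
public interface IPriceService
{
    [OperationContract]
    void Subscribe(string symbol);
}

public class PriceService : IPriceService
{
    public void Subscribe(string symbol)
    {
        // Grab the caller's callback channel and push an update back.
        IPriceCallback cb =
            OperationContext.Current.GetCallbackChannel<IPriceCallback>();
        cb.PriceUpdated(symbol, 31.25m); // illustrative value
    }
}
```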
The topic of WCF exceptions was also highlighted. As always the main exception rules still apply when handling exceptions. Here are the main reasons to catch exceptions:
- wrap and re-throw
- log and re-throw
- consume and dissolve
However, with WCF operations it is acceptable to have one big Try-Catch that throws a FaultException.
My particular favorite way is throwing a typed fault, something like: throw new FaultException&lt;TDetail&gt;(detail, reason).
Something interesting: if you do not explicitly throw a FaultException server side, or throw an exception of a specific expected type (i.e. DivideByZeroException), the proxy on the client will *not* be preserved and the channel will be faulted and closed. However, if a FaultException or FaultException&lt;TDetail&gt; is thrown, the client's channel is preserved and remains usable.
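A sketch of the typed-fault pattern, with hypothetical names (TransferFault, IAccountService). The FaultContract attribute advertises the fault in metadata so clients can catch FaultException&lt;TransferFault&gt; specifically:

```csharp
using System;
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class TransferFault // hypothetical fault detail DTO
{
    [DataMember] public string Reason { get; set; }
}

[ServiceContract]
public interface IAccountService
{
    [OperationContract]
    [FaultContract(typeof(TransferFault))] // advertised in the WSDL
    void TransferMoney(string from, string to, decimal amount);
}

public class AccountService : IAccountService
{
    public void TransferMoney(string from, string to, decimal amount)
    {
        try
        {
            // ... debit/credit work ...
        }
        catch (Exception ex)
        {
            // Typed faults cross the wire cleanly and do NOT fault the
            // client's channel, so the proxy stays usable afterward.
            throw new FaultException<TransferFault>(
                new TransferFault { Reason = ex.Message }, "Transfer failed");
        }
    }
}
```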
Miguel also highlighted transactions in WCF. I have not used WCF transactions in a production setting, but as is typical, the best example of how to use transactions is a bank account. The example used a method called 'TransferMoney()' that underneath calls (2) additional methods, 'DebitAccount()' and 'CreditAccount()'. Obviously if 'DebitAccount()' worked and 'CreditAccount()' did not, we do not want the transaction to complete; we want it to roll back. Note this is not the same as SQL transactions in ADO.NET: the rollback is independent of any SQL calls. You might still have a database call involved, but you might not.
A few things to note on transactions in WCF. They are at the operation level. By default transactions are not turned on (value = 'NotAllowed'); the (2) other settings are 'Allowed' and 'Mandatory'. As Miguel mentions, setting the operation to 'Allowed' is low risk, as it actually does nothing until transactions are implemented; it just opens the door to allowing them if desired. Also, all methods downstream must participate in transactions or the functionality will not work properly: any method in the chain that does not support and implement transactions will prevent the rollback from occurring on failure. Lastly, the client does not need to wrap the initial call in a transaction; as long as the WCF service implements them properly, the transaction will still be handled correctly. Transactions are not needed for all use cases, but the actual implementation, when planned out properly, is not too difficult.
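The bank-account example above can be sketched with WCF's transaction attributes. TransactionFlow(Allowed) on the contract is the low-risk door-opener, and TransactionScopeRequired on the implementation enlists the operation (names and bodies are illustrative):

```csharp
using System.ServiceModel;

[ServiceContract]
public interface IBankService
{
    // 'Allowed' does nothing by itself; it merely permits a client
    // transaction to flow in ('NotAllowed' is the default).
    [OperationContract]
    [TransactionFlow(TransactionFlowOption.Allowed)]
    void TransferMoney(string from, string to, decimal amount);
}

public class BankService : IBankService
{
    // TransactionScopeRequired enlists this operation in the ambient
    // transaction; with auto-complete, success commits and any thrown
    // exception rolls back everything downstream that participated.
    [OperationBehavior(TransactionScopeRequired = true,
                       TransactionAutoComplete = true)]
    public void TransferMoney(string from, string to, decimal amount)
    {
        DebitAccount(from, amount); // both calls must participate in the
        CreditAccount(to, amount);  // transaction for rollback to work
    }

    private void DebitAccount(string acct, decimal amt) { /* ... */ }
    private void CreditAccount(string acct, decimal amt) { /* ... */ }
}
```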
In come REST services, my favorite topic of the day. I know it's not the end-all answer to everything, but its power and simplicity are great. WCF SOAP services for line-of-business intranet .NET-to-.NET apps using NetTcp bindings are still lightning fast and the way to go for performance. However, the minute a non-.NET client is introduced, especially outside the firewall, RESTful services become the required solution. Miguel stated that even though wsHttpBinding is supposed to be interoperable, it doesn't handle everything perfectly, so you need to go with REST in these situations. Deciding between SOAP and REST services will mostly come down to the consuming clients, internet vs. intranet, and interoperability factors. Also remember that with REST services many of the topics discussed previously, like transactions and concurrency, are not applicable. There is still very much a place for SOAP-based services, albeit a much heavier implementation than that of REST-based services.
REST-based services are much lighter weight than SOAP services, typically returning either XML or JSON. There is no heavy SOAP message to deal with, and no constraints on what the consuming client must look like. REST services, based on the REST architecture, are an extension of web standards and the GET, POST, PUT, and DELETE verbs (note: the PUT and DELETE verbs are turned off in IIS by default, so make sure to turn them on if you need them). In fact, you can make REST-based calls directly in a browser (browser calls are HTTP GET by default). Although the purists, the so-called 'RESTafarians', will almost never acknowledge a pure REST implementation, there is at least one place I totally agree with them when implementing these types of services: use the HTTP verbs properly. For example, don't abuse the verbs and do an 'update' behind the scenes as part of a GET. While possible, it's incorrect, and there are really no checks in place to prevent it.
With a WCF implementation, the deciding factor on the return type, XML or JSON, is configured at the service level. Ideally you would expose (2) endpoints, (1) for each return type. It's then up to the client to call the URL containing the endpoint that returns the desired type. However, technically the server side is still deciding what the return type will be.
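The two-endpoint approach might look like the following hypothetical host config: one webHttpBinding endpoint per format, distinguished only by relative address and behavior (all names here are illustrative):

```xml
<!-- Hypothetical WCF REST config: one endpoint per wire format -->
<system.serviceModel>
  <services>
    <service name="Services.ZipCodeService">
      <endpoint address="xml"  binding="webHttpBinding"
                behaviorConfiguration="xmlBehavior"
                contract="Contracts.IZipCodeService" />
      <endpoint address="json" binding="webHttpBinding"
                behaviorConfiguration="jsonBehavior"
                contract="Contracts.IZipCodeService" />
    </service>
  </services>
  <behaviors>
    <endpointBehaviors>
      <behavior name="xmlBehavior">
        <webHttp defaultOutgoingResponseFormat="Xml" />
      </behavior>
      <behavior name="jsonBehavior">
        <webHttp defaultOutgoingResponseFormat="Json" />
      </behavior>
    </endpointBehaviors>
  </behaviors>
</system.serviceModel>
```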
What I REALLY like about Web API services is the ability for the client to set the 'Accept' value in the request header to indicate the desired return type! Yep, no configuration or heavy implementation: set the header and you get back the type you requested. If testing from a browser, Google Chrome returns XML by default and IE returns JSON. My recommendation, if you are not familiar with JSON, is to begin using it, because it is more compact and lightweight than XML. With so many JSON deserializers in .NET it is super simple to convert it to a DataContract once received and then work with a strongly typed object. You *can* do the same with XML via LINQ to XML into a type like a DataContract, but it's much more cumbersome to work with in my opinion.
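A sketch of what this looks like from a .NET client using HttpClient (the URL and names are hypothetical): the same endpoint serves both formats, and the Accept header alone drives the choice.

```csharp
using System.Net.Http;
using System.Net.Http.Headers;

public class WebApiClient
{
    // Pass "application/json" or "application/xml" to pick the format.
    public static string FetchZip(string mediaType)
    {
        var client = new HttpClient();
        // Web API's content negotiation reads the Accept header and picks
        // the matching formatter; no extra endpoints or config required.
        client.DefaultRequestHeaders.Accept.Add(
            new MediaTypeWithQualityHeaderValue(mediaType));
        // Hypothetical URL for illustration.
        return client.GetStringAsync("http://localhost/api/zipcodes/32801").Result;
    }
}
```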
WCF REST services and ASP.NET Web API are competing products at Microsoft with a single intended purpose: delivering data in a RESTful manner. REST came 1st in WCF 3.5 with the introduction of the WebGet and WebInvoke attributes and the webHttpBinding. Web API was originally born out of the WCF Starter Kit and went RC with MVC 4. You can use Web API with .NET Framework 4.0.
The main deciding factor when architecting an application is the choice between a WCF REST-based service and a Web API service. There are subtle advantages to both, and Miguel warns against getting all caught up in the 'Web API is the greatest thing since sliced bread' deal. If you already have a full-blown WCF service layer implementation and need to add REST atop it, then a WCF REST service may be the easier way to go. However, if starting from scratch, the general consensus seemed to be to use a Web API REST implementation.
Interesting tidbit - technically the largest REST based deployment in the world... the world wide web.
The final part of the day was to cover WCF security in 45 minutes. WHAT?!?! Yeah, pretty much impossible. The good news (at least for me) is that I have done so much over the years with WCF security, authorization, authentication, and securing services that the security information 'blitz' made sense to me. However, anyone in the audience that has not done anything with security will need to do a lot more research on the topics. May I recommend perusing this blog, as I have dedicated several posts to WCF security and securing WCF services.
The main points I wanted to highlight here are the following. TCP is a secure binding by default. It's binary. You can't break the pipe. HTTP, on the other hand, is an open binding, and the 'message' needs to be secured. You can actually secure the 'Transport', which will also secure the message, with either an SSL certificate (HTTPS) or via X509 certificates. I prefer using an SSL cert and, like I mentioned, have several posts on the topic. However, the points on NetTcp are important to restate: if you *can* use a TCP binding, you will get some blazing performance and native Windows security, so it's an attractive option for an intranet application in a .NET-to-.NET scenario. Check out the WCF Security Guide on CodePlex if you really want a deep dive. In reality, an 8-hour course could easily be given just on the topic of security.
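For reference, netTcpBinding's transport security with Windows credentials is its default; a hypothetical config making that explicit might look like this (the binding name is illustrative):

```xml
<!-- netTcpBinding is secured with transport (Windows) security by
     default; spelled out here only to show where the knobs live -->
<bindings>
  <netTcpBinding>
    <binding name="secureTcp">
      <security mode="Transport">
        <transport clientCredentialType="Windows" />
      </security>
    </binding>
  </netTcpBinding>
</bindings>
```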
To wrap things up on this busy day, I must say I onloaded a LOT of information for the services world. If you have a chance check out Miguel at any of the mainstream conferences around the country, on his blog, or on Twitter @miguelcastro67. I only have one piece of advice for Miguel since he has provided so much information to me today... dump the Mac.
I will also leave you with some of Miguel's best quotes of the day. I always enjoy his candid style!
"What is exception handling? A slash block and then 'ToDo'"
"Can we even call them Metro apps anymore, or not because some food store in eastern Europe sued Microsoft."
"Compilation is the 1st unit test, right... sometimes it's the only unit test"
"I get to start shit and not have to finish it" (contractors)
"My shit don't break"
"Who does SharePoint in the room.... Why?"
"no, no Google, we use Bing here right?"
"Dude, I'm not covering security 2 hours in! I do it at the end of the day when your brains are fried so I can bull shit my way through"
"You just broke my shit, I'm going to be pissed"
"Rhode Island sucks! It's not even a real state."
"What do you have to do to slow a Windows system down... Nothing"
"I feel like I just gave birth to a callback."
"Most New Jerseyans can't spell DB2"
"The RESTafarians are as whacked out and smoke as much ganja as the Rastafarians"
"Regular Expressions are cartoon characters cursing"
"I don't agree with anything a DBA says except in table naming"
About the 'using' statement issue: as you guessed, it has nothing to do with the IL that is generated, and it isn't directly related to the using statement itself either. The issue is with the way IDisposable is implemented by WCF client objects (ClientBase&lt;T&gt;). The client's Dispose() calls Close(), which will throw an exception if the channel is in a faulted state. If this happens, the underlying resources are not cleaned up properly. The way around this design flaw is to try to close the client, and then call Abort() if Close() fails.
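That workaround, sketched with a hypothetical contract name (IZipCodeService), is the familiar Close-then-Abort pattern:

```csharp
using System;
using System.ServiceModel;

public class ProxyCleanup
{
    // Try to Close() gracefully; on failure, Abort() so the underlying
    // channel resources are always released either way.
    public static void SafeCall(IZipCodeService proxy) // hypothetical contract
    {
        var channel = (ICommunicationObject)proxy;
        try
        {
            proxy.GetZipInfo("32801");
            channel.Close();
        }
        catch (CommunicationException) { channel.Abort(); }
        catch (TimeoutException)       { channel.Abort(); }
        catch                          { channel.Abort(); throw; }
    }
}
```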