Thursday, December 13, 2012

Visual Studio LIVE! Orlando Day 4


Well it's day 4 of the conference and day 3 of the session tracks. Today for me should probably be called "The day of Marcel", as I will be attending (3) of Marcel de Vries' sessions. I'm ready, so let's get going!

Building Single Page Web Applications with HTML5, ASP.NET, MVC4, Upshot.js and Web API
Marcel de Vries, Microsoft MVP and Technology Manager, infoSupport




Single Page Applications, or 'SPAs', are web applications where one HTML page contains the whole application. You might wonder why this would be useful. Well, the idea is that they are lean and responsive, run on any device, and have the ability to work offline. SPAs rely heavily on JavaScript on the client, as well as something like ASP.NET MVC on the back end. It was noted that those familiar with Silverlight and MVVM would be comfortable making SPAs. We can use the Web API to allow communication between the client and server via JS and Ajax.

As always, my ears perked up the minute Web API was discussed again! The more information presented the better, as this is a relatively new technology (ASP.NET Web API itself, that is, not REST-based services in general) and I think it will be a great tool in the toolbox. A Web API controller inherits from the ApiController class, and the default project template will already create a base set of classes where this is done for you. He set up a basic Web API service and used Fiddler to make calls to it. The nice thing about Fiddler, if you have not used it before, is being able to inspect the request and response, including the headers.
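For reference, here is a minimal sketch of what such a controller might look like. This is my own illustration, not his demo code; the Product class and ProductsController name are hypothetical.

using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Web.Http;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// A Web API controller: inheriting from ApiController is what wires it up.
public class ProductsController : ApiController
{
    private static readonly List<Product> Products = new List<Product>
    {
        new Product { Id = 1, Name = "Widget" },
        new Product { Id = 2, Name = "Gadget" }
    };

    // GET api/products
    public IEnumerable<Product> GetAllProducts()
    {
        return Products;
    }

    // GET api/products/1
    public Product GetProduct(int id)
    {
        var product = Products.FirstOrDefault(p => p.Id == id);
        if (product == null)
            throw new HttpResponseException(HttpStatusCode.NotFound);
        return product;
    }
}

Pointing Fiddler at api/products lets you watch the JSON (or XML, depending on the Accept header) come back and inspect those headers.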

Next, MVVM was brought up, which was interesting because normally I do not associate it with web applications. What was being explained was having the ViewModel in JS, with observables for the UI to react to. Knockout.js was used for the ViewModel and jQuery was used for creating the observableEntities. Knockout.js enables observables and dependency tracking, declarative bindings, and templating. There are several JS libraries that do templating, but Knockout is a good choice. I saw John Papa at this year's code camp show multiple templating examples using Knockout. Currently not all browsers support JS getters and setters (Internet Explorer), therefore all observable objects are functions. Observables do take up a little memory, so keep this in mind when creating them.

Data binding seems pretty straightforward using Knockout. You can bind the ViewModel to HTML elements using the 'applyBindings' method, which is typically called in the document ready function. Some of the available bindings are text, html, css, style, and attr.

Templating is also an interesting feature of Knockout. The 'data-bind' attribute can be added to a <div> and the bound ViewModel data will be used if it exists.

Sammy.js is a library (Nav.js is another) I had not heard of; it is used for routing in the application. It's actually pretty nice, because it allows setting up routes for navigation as JS functions. Based on the route called and the parameters extracted from the URI, methods can be called and the page manipulated as needed. He was using jQuery to fade and display different <div> sections, yet still all in a SPA. To be a true SPA with rich deep linking, these types of methods are going to be needed in order to provide a rich page with all the required functionality. The routing concept seen here, in MVC, and in Web API is familiar to developers using any of these technologies, but less likely for people still doing Web Forms exclusively. Seeing this strengthens, for me, the need to branch into or at least understand MVC, because so many technologies are leveraging routing.

So what we have is routing being done in (2) places: on the MVC controller using Web API to serve up data, and on the client using Sammy.js to react to navigation and get that served-up data from the service via an Ajax call. As he explained, it's really a tango between the (2) to supply the functionality for navigation and data needs. Make sure to minimize communication and cache Ajax calls when possible. Amplify.js is a library that provides support for caching the data from Ajax calls. There are properties on the Ajax calls that dictate whether caching will be used and any timeout values. This may not be as applicable on larger sites with a wide array of data that needs to be fetched on demand, but for a SPA this fits well, as there is probably not going to be a ton of new data fetching given the streamlined nature of a single page's functionality.

Single Page Applications provide a new type of web client, resulting in fast, responsive apps with options for offline use. The JS libraries used help facilitate the needs of the SPA, but he noted the ones he used are not the only ones available; research the others to see if they meet the additional needs of your SPA.


IntelliTrace, What is it and How Can I Use it to My Benefit?
Marcel de Vries, Microsoft MVP and Technology Manager, infoSupport




Marcel started out by stating he sees all of these developers that have VS.NET 'Ultimate' but have no idea what's in it. IntelliTrace is a tool I've known about for years, and I'm almost ashamed I have not sought it out for use, because I know how great a tool it is. He made an analogy to the aviation industry: you don't want to crash a plane to collect data afterwards. It's the same with applications; don't wait until production to find issues and then face the task of trying to recreate them.

IntelliTrace provides a debug logfile that you can use to debug your program at a different moment in time. The file contains debug events, exception events, and .NET Framework events, and allows configuring what is relevant for your application. For example, if you make a SQL call, the logfile will automatically capture the SQL used and let you know how it was called (LINQ, ADO.NET, etc.). Well, that's almost sale enough right there! How many times do we run SQL Profiler after the fact to try and recreate some anomaly? This type of post-issue debugging is less effective than having all the historical data already captured for us (and not just some message in the event log).

One tidbit: if using IntelliTrace on a machine with a solid state drive (SSD), you will barely notice it's there and running. There is a little more of a hit on machines with an HDD.

IntelliTrace can be configured from Tools -> Options -> IntelliTrace -> IntelliTrace events. There may be options not applicable to the technology you are using (i.e. Windows Forms events don't make sense for ASP.NET apps).

When turned on, for example, it will capture all return values from method calls within a method. Ever start sprinkling in a whole bunch of Debug.WriteLine statements when trying to figure out values at runtime? IntelliTrace largely prevents having to do this kind of messy manual work.
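For contrast, here is the kind of hand-rolled tracing IntelliTrace makes largely unnecessary (a purely illustrative example of my own):

using System.Diagnostics;

public static class OrderProcessor
{
    public static decimal CalculateTotal(decimal price, int quantity)
    {
        // Sprinkled in by hand while debugging... then removed by hand later.
        Debug.WriteLine("price = " + price);
        Debug.WriteLine("quantity = " + quantity);
        return price * quantity;
    }
}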

He did an example and noted IntelliTrace will start logging once you break into code (one of a few ways to get logging). This is some great stuff! You get the traditional debug experience, but the IntelliTrace information is displayed in a new pane on the right-hand side.

The integration with TFS was impressive too. He kicked off a build that ran unit tests, and when one failed, he was able to get the output IntelliTrace file to provide the details of why it failed. Also, within TFS the workflow is nice if you use it: the symphony between QA testers running tests and developers means the output will automatically create Work Items. He was using a product called Microsoft Test Manager (MTM) to do the testing. Using Microsoft Test Manager you can plan, manage, and execute both manual and exploratory tests. You can also automate your manual tests once they are stabilized.

It is important to note that debug symbol (.pdb) generation must be turned on to provide information for IntelliTrace. Also, if using TFS, make sure to configure the server to save the symbols to a 'symbol server'.

There is also a free downloadable product called 'IntelliTrace everywhere', which allows anyone to capture the logs from anywhere.

He did a demo of using IntelliTrace in a production scenario via PowerShell. The one thing to know is that initiating it can cause the app pool to recycle. This, as we know, will gracefully hand the user over to the new worker process, but things like cache and session might be cleared, so just be aware before firing off the process. One other important note: you do not need the .pdb files on the production server (as is typical with a release build). The GUID of the .exe will be used to direct the lookup to the designated symbol server to get the debug information. So if you don't have a symbol server set up, the debug symbols would have to be present in order to read the output trace logfile. Symbol Server is part of TFS, but it can also be downloaded separately and configured via the command line if you do not use TFS. He was not sure whether source indexing was possible, or which other source control providers were supported for retrieving the files. Also remember, VS.NET 'Ultimate' is required to read the trace files, but it is not a requirement to have VS.NET on the production server.


EF Code First Magic Unicorn Edition and Beyond
Keith Burnell, Senior Software Engineer, Skyline Technologies, Inc.



The reality is we write data-centric apps, and 99.9999% of the time there is a relational database behind it all. What this means as developers is we have to write a lot of data access code: the CRUD and the mapping of data to the objects we use, over and over and over again. What does this code look like? Lots of raw ADO.NET filling DataSets. The DataSet has a big object/memory footprint, so it's not efficient to pass around this non-typed data. This argument is nothing new and has been preached (by me as well) for several years now. "DataSets are so .NET 1.1" (that's me trying to sound cute, while actually sounding dumb). Ideally we want to be working with Plain Old CLR Objects (POCOs), which are nothing more than plain classes.
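To make the contrast concrete, here is my own sketch (not Keith's code) of the DataSet plumbing we keep writing versus the POCO we would rather pass around:

using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

// What we'd rather work with: a plain old CLR object.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class CustomerData
{
    // The old way: raw ADO.NET filling an untyped, heavyweight DataSet.
    public static DataSet GetCustomersAsDataSet(string connectionString)
    {
        var ds = new DataSet();
        using (var connection = new SqlConnection(connectionString))
        using (var adapter = new SqlDataAdapter("SELECT Id, Name FROM Customers", connection))
        {
            adapter.Fill(ds);
        }
        return ds;
    }

    // The mapping drudgery: hand-translating rows into POCOs, over and over.
    public static List<Customer> MapToCustomers(DataSet ds)
    {
        var customers = new List<Customer>();
        foreach (DataRow row in ds.Tables[0].Rows)
        {
            customers.Add(new Customer
            {
                Id = (int)row["Id"],
                Name = (string)row["Name"]
            });
        }
        return customers;
    }
}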

The 1st ORM from Microsoft to help solve this issue was LINQ to SQL. This got us closer to POCOs by working with classes that represent tables rather than a huge in-memory representation of the database (the DataSet). Unfortunately, LINQ to SQL classes were still heavy and tried to do too much. This helped lead to it not becoming Microsoft's flagship data access technology atop ADO.NET; enter Entity Framework.

ORMs help with the impedance mismatch, allowing mapping, typically via XML configuration files, to create class objects that represent the tables in the database. Out of the box, the first Entity Framework missed the mark, lacking core ORM functionality (no foreign key support), and testers even gave it a formal 'no confidence' vote stating they were unhappy. The 2nd version of EF was EF4 (yes, 4 = 2; they skipped version numbers, probably to distance it from the issues of EF1), but it still worked heavily in a database-first approach. EF 4.1 released a set of bits (the 'Magic Unicorn Edition') introducing a code-first approach. Code-first allows creating your code and domain model without ever having to touch SQL Server Management Studio (SSMS). The result: you write code (that's what we do best), POCO classes, to define the domain model and have EF create the database afterwards. And fear not, it's not a naive 1-to-1 class-to-table dump; the generated database will have a design you will be happy with. Just check out how the tables are laid out after the database has been created.
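A minimal code-first sketch (the Blog/Post model here is my own hypothetical example): write POCOs, derive a context from DbContext, and EF creates the database the first time the context is used.

using System.Collections.Generic;
using System.Data.Entity;

public class Blog
{
    public int Id { get; set; }
    public string Title { get; set; }
    public virtual ICollection<Post> Posts { get; set; }
}

public class Post
{
    public int Id { get; set; }
    public string Content { get; set; }
    public virtual Blog Blog { get; set; }
}

// Deriving from DbContext and exposing DbSet properties is all EF needs
// to generate the database, foreign keys and all.
public class BloggingContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; }
    public DbSet<Post> Posts { get; set; }
}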

That's not to give the impression that the generated database is perfect, but it gets you 90% of the way there. You can look at SQL Server to see the changes you want to make (remove pluralized table names, change field types, etc.). There are (2) ways to make the changes. The 1st is Fluent configuration: actual C# code in the DbContext class, in a method overriding OnModelCreating(). For example, modelBuilder.Conventions.Remove<PluralizingTableNameConvention>() will remove the pluralized table names. The 2nd way is to configure attributes (System.ComponentModel.DataAnnotations, yes, the same as MVC) on the domain/entity class. For example, adding a [Required, StringLength(25)] attribute will make the field NOT NULL with a length of 25 in the database. Interesting point: if you configure both, the Fluent configuration wins.
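Here is a hypothetical sketch (the class names are mine) showing the two customization approaches side by side:

using System.ComponentModel.DataAnnotations;
using System.Data.Entity;
using System.Data.Entity.ModelConfiguration.Conventions;

public class Customer
{
    public int Id { get; set; }

    // Attribute approach: NOT NULL, length 25 in the generated database.
    [Required, StringLength(25)]
    public string Name { get; set; }
}

public class StoreContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }

    // Fluent configuration approach; this wins if both are specified.
    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Stop EF from pluralizing generated table names.
        modelBuilder.Conventions.Remove<PluralizingTableNameConvention>();

        // The same rule as the attribute above, expressed fluently.
        modelBuilder.Entity<Customer>()
                    .Property(c => c.Name)
                    .IsRequired()
                    .HasMaxLength(25);
    }
}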

Interesting note based on a question I asked: I'd only done a database-first approach, which works off of the Entity Data Model (.edmx) and the XML mapping files behind it. With a code-first approach, there is no data model (.edmx) and there are no XML mapping files. All customization must be done via attributes or Fluent configuration.

To push the code further, we can actually seed the database directly from code as well. You may or may not like this feature, but at least it's available. Remember, the focus here is to deal with SSMS as little as possible. We are developers, not DBAs. Like Keith said, "I'm a developer, not a DBA, and I want to code." OK, the last statement might not bode well with everyone, but it does hold merit. In reality, it's better to be great at 1 thing than sorta good at 2.
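A quick sketch of seeding via a database initializer, building on the hypothetical StoreContext above:

using System.Data.Entity;

public class StoreInitializer : DropCreateDatabaseIfModelChanges<StoreContext>
{
    // Seed runs after the database is (re)created, so reference data
    // can be loaded straight from code, with no trip into SSMS.
    protected override void Seed(StoreContext context)
    {
        context.Customers.Add(new Customer { Name = "Contoso" });
        context.Customers.Add(new Customer { Name = "Fabrikam" });
        context.SaveChanges();
    }
}

// Registered once at application startup:
// Database.SetInitializer(new StoreInitializer());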

Entity Framework 4.3 included a feature, delivered via a NuGet package, named Entity Framework Code First Migrations. EF will sniff out if the database model on the server is different from what's in code and recommend Code First Migrations. A hash stored in a table on the local SQL instance used for development tracks when changes are out of sync. It is important to note this hash table is not out on the production SQL Server. After enabling migrations via NuGet, some new migration classes will be created. Then you can issue an Update-Database -Verbose command to have the database updated. The -Verbose switch echoes the SQL in the command window, which can be sent off to a DBA to show what changes need to be applied. The changes do occur on the local instance (based on the connection string) but not in production. Again, all of this is only needed if you are not initializing EF to drop and recreate the database each time there is a change. This piece helps sell EF to DBAs because the code will not push out any changes to production.
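For illustration, here is a hypothetical migration class like the ones the tooling scaffolds (the AddCustomerPhone name and column are mine), with the Package Manager Console commands as comments:

// Package Manager Console:
//   Enable-Migrations
//   Add-Migration AddCustomerPhone
//   Update-Database -Verbose    (echoes the SQL it runs)
using System.Data.Entity.Migrations;

public partial class AddCustomerPhone : DbMigration
{
    public override void Up()
    {
        // Applied when migrating the database forward.
        AddColumn("dbo.Customers", "Phone", c => c.String(maxLength: 20));
    }

    public override void Down()
    {
        // Applied when rolling the migration back.
        DropColumn("dbo.Customers", "Phone");
    }
}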

Beginning in EF6, the data access calls are implemented using the new async features! The calls out to get data are non-blocking and happen asynchronously.
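A small sketch of what that pattern looks like, again using the hypothetical StoreContext from above:

using System.Collections.Generic;
using System.Data.Entity;
using System.Threading.Tasks;

public static class CustomerQueries
{
    public static async Task<List<Customer>> GetCustomersAsync()
    {
        using (var context = new StoreContext())
        {
            // The calling thread is freed while SQL Server does the work.
            return await context.Customers.ToListAsync();
        }
    }
}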

Keith likes this approach because he wants developers in the database as little as possible. You're a developer, so think in a business-domain mindset and not a database-centric one. Too often we begin by creating a data model and then make our application reflect that thought process. The data model and object model are not, and often should not be, the same. If we are already creating tables in SQL with an object-model mindset slipping in, then what's the point? Just create the model in code and let the database model be created in a data-centric way, so the (2) are each done with the proper mindset.


The LINQ Programming Model
Marcel de Vries, Microsoft MVP and Technology Manager, infoSupport



Here is another session by Marcel! He is a pretty funny Dutch guy, so hopefully if you are attending you got an opportunity to catch one of his sessions. The room is packed, but it's all behind me as I'm front and center for this one.

LINQ, or Language Integrated Query, is a set of features for writing structured, type-safe queries over local object collections and remote data sources. The basics of LINQ are sequences (something that contains elements and implements IEnumerable<T>) and elements (the items in a sequence that the LINQ operations act on). Often LINQ can replace for-each loops in a much more concise and streamlined manner. A typical query operator takes a sequence as input and produces a transformed output sequence.

There are around 40+ operators on the System.Linq.Enumerable class, such as 'Where', 'OrderBy', and 'Select'. Extension methods allow extending a class that we did not write. Query operators are implemented as extension methods on IEnumerable<T> and return IEnumerable<T>. For example:

var query = names
    .Where(n => n.Contains("a"))
    .OrderBy(n => n.Length)
    .Select(n => n.ToUpper());

You could create your own extension method as well if you need to wrap up some functionality and expose it for use in LINQ queries, as sketched below. Also note, the operators do not modify the original collection; they just project the data into the results.
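For example, a hypothetical custom operator might look like this:

using System.Collections.Generic;
using System.Linq;

public static class MyEnumerableExtensions
{
    // Filters a sequence of strings to those containing the given text.
    public static IEnumerable<string> ContainingText(
        this IEnumerable<string> source, string text)
    {
        return source.Where(s => s.Contains(text));
    }
}

// Usage, chained like any built-in operator:
// var query = names.ContainingText("a").OrderBy(n => n.Length);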

Something I have used before is 'comprehension syntax', which provides syntactic shortcuts for writing LINQ statements in C# and VB.NET. It looks a lot more like T-SQL, but it is not. You can see the same query from above rewritten using comprehension syntax:

var query = from name in names
            where name.Contains("a")
            orderby name.Length
            select name.ToUpper();

Most LINQ operators use deferred execution, which means they are not executed when constructed, but rather when they are enumerated. A good idea (and even a coding standard) is to force execution of LINQ statements before returning them from methods, rather than returning the query itself; this prevents deferred execution from causing unnecessary extra evaluations.
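A small illustration of the difference (my own example):

using System;
using System.Collections.Generic;
using System.Linq;

class DeferredDemo
{
    static void Main()
    {
        var names = new List<string> { "Anna", "Bob" };

        // Nothing executes here; the query is only constructed.
        var query = names.Where(n => n.Contains("a"));

        names.Add("Carla"); // added AFTER the query was built

        // The query runs now, at enumeration, so "Carla" is included.
        foreach (var name in query)
            Console.WriteLine(name);   // Anna, Carla

        // ToList() forces immediate execution and snapshots the results.
        var snapshot = names.Where(n => n.Contains("a")).ToList();
    }
}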

PLINQ is another great technology I have used over the past couple of years. It allows iterating over a collection in parallel. The one side effect is that you cannot guarantee the order in which the collection is enumerated. A great use: say you have a collection of employees and you want to calculate a bonus for each using a method that does an expensive calculation. The employees in the collection have no correlation or ordering importance, so operating on the collection in parallel is a great way to leverage multi-core environments. I actually wrote a post on this that you can see here: Leveraging Parallelism in the .NET Framework 4.0 for Asynchronous Programming. Funny enough, Marcel used a similar example method called ExpensiveCalculation(). He demonstrated it both without and then with the .AsParallel() method. Without PLINQ, his demo on a 4-core machine took 28 seconds. Once he added .AsParallel(), it dropped to 7 seconds. You could also see each core in Task Manager go to 100% when using PLINQ, utilizing all the available cores (developers, we need to be leveraging async and the cores available!).
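A sketch along the lines of his demo; the ExpensiveCalculation body here is a stand-in of my own:

using System.Linq;
using System.Threading;

class BonusDemo
{
    static decimal ExpensiveCalculation(int employeeId)
    {
        Thread.Sleep(100); // stand-in for real CPU-heavy work
        return employeeId * 0.05m;
    }

    static void Main()
    {
        var employeeIds = Enumerable.Range(1, 100);

        // Sequential: one element at a time on one core.
        var bonuses = employeeIds
            .Select(id => ExpensiveCalculation(id))
            .ToList();

        // Parallel: the same query spread across available cores.
        // Note: the results may come back in any order.
        var parallelBonuses = employeeIds
            .AsParallel()
            .Select(id => ExpensiveCalculation(id))
            .ToList();
    }
}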

LINQ to XML (actually my 1st go at using LINQ a few years ago) is a much better way of accessing XML data compared to older methods like XPath. I asked Marcel if C# has support for accessing XML elements in a strongly typed manner (when the .xsd is added to the project) as you can in VB.NET, but he said he did not think so. This is one place where the (2) languages, normally so similar, still differ a bit. VB.NET has a lot more XML functionality, like XML literals, that is not available in C#.
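A quick hypothetical example of querying XML with LINQ to XML instead of XPath:

using System;
using System.Linq;
using System.Xml.Linq;

class XmlDemo
{
    static void Main()
    {
        var doc = XDocument.Parse(
            @"<employees>
                <employee name='Ann' department='IT' />
                <employee name='Bo' department='HR' />
              </employees>");

        // Query elements and attributes directly; no XPath strings needed.
        var itNames = from e in doc.Descendants("employee")
                      where (string)e.Attribute("department") == "IT"
                      select (string)e.Attribute("name");

        foreach (var name in itNames)
            Console.WriteLine(name); // Ann
    }
}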

LINQ to DataSets is another cool feature if you are still working with DataSets. The DataSets are manipulated after the data is loaded into the DataSet, all on the client side. It allows strongly typed access to DataSet values even if you are not using strongly typed DataSets. This is another area I have written a post on previously, so if you want to see some examples, check out the following: How To: Populate a List of Objects from a DataSet Using LINQ.
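A small sketch (my own) of that strongly typed access via the Field<T> extension method:

using System;
using System.Data;  // plus a reference to System.Data.DataSetExtensions
using System.Linq;

class DataSetDemo
{
    static void Main()
    {
        var table = new DataTable("Customers");
        table.Columns.Add("Name", typeof(string));
        table.Columns.Add("Age", typeof(int));
        table.Rows.Add("Ann", 34);
        table.Rows.Add("Bo", 19);

        // AsEnumerable() lets LINQ run over the rows; Field<T> gives
        // strongly typed access without a strongly typed DataSet.
        var adults = from row in table.AsEnumerable()
                     where row.Field<int>("Age") >= 21
                     select row.Field<string>("Name");

        foreach (var name in adults)
            Console.WriteLine(name); // Ann
    }
}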

Lastly, he covered interpreted queries, which operate on IQueryable instead of IEnumerable. The main difference is in how they execute. Where IEnumerable executes piece by piece, IQueryable builds the whole query and executes it in one shot, delivering an expression tree to the query processor. This is what is used for LINQ to Entities with the Entity Framework. It was pretty cool in that he used IntelliTrace from earlier today to show the SQL being generated for a LINQ to Entities query. The basic EF architecture and how these queries work was illustrated in a diagram from his slides (image courtesy of Visual Studio LIVE! and Marcel de Vries).
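To make the distinction concrete, here is a hypothetical LINQ to Entities sketch (the model and context are my own minimal stand-ins):

using System.Data.Entity;
using System.Linq;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class StoreContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }
}

public static class InterpretedQueryDemo
{
    public static void Run()
    {
        using (var context = new StoreContext())
        {
            // IQueryable: composed into an expression tree and handed to
            // the query processor, which translates it into a single SQL
            // statement at enumeration time.
            var query = context.Customers.Where(c => c.Name.StartsWith("A"));

            var results = query.ToList(); // the SQL executes here, once
        }
    }
}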




Bottom line here, without a lot of fluff: get started using LINQ in code, as it is much more streamlined and, to me, actually more expressive and intuitive than the older methods that preceded it.


Live! 360 Conference Wrap-up
Don Jones, Andrew Brust, Rocky Lhotka, Greg Shields, Dan Holme



"How many are going to the midnight showing of the Hobbit tonight" (like 8 hands raised) "man I thought this crowd was more geeky than that!"

This was a 'wrap-up' session for the individual session tracks portion of the conference, and each conference chair or co-chair was present. Each was asked what their primary goal was. Dan Holme, co-chair of the SharePoint track, said his main goal was to expose SharePoint 2010 (and 2013, I believe he might have mentioned, but I know absolutely zero about SharePoint). Greg was helping to bring virtualization and the cloud to the masses. Don was trying to bleed SQL information across the divide between DBAs and developers.

The format for the bulk of the presentation was open-mic Q&A. There were a lot of good questions, and immediate feedback about the conference was given to the members on stage. As they all mentioned, the best feedback comes in the form of the conference evaluations, as they do read them and shape the conference based on the feedback they receive.

I wanted to get up and make a few comments and ask a couple of questions myself. I began by telling everyone that if I was nervous just going to a microphone to ask a simple question, I can't imagine being one of the presenters on stage debugging applications in real time. You know the old application crash, or code that doesn't work during a presentation? I imagine a nice ball of stress comes about when that occurs.

I thanked all of the presenters for their hard work and quality presentations. Like I said, there is no fluff in these conferences. The reason this is my 5th Visual Studio LIVE! is because the content, networking, and general community vibe on technologies are hard to put a price on. These guys, from start to finish, all do a nice job.

I commented that I noticed a lot of the Azure sessions in the Visual Studio track seemed not that well attended. In addition, the ad-hoc polls found very few hands raised for doing Azure development. Don't get me wrong, Azure looks like a fabulous implementation of that technology, but I keep hearing "We will never put our data and apps in the cloud." The response was that they do monitor session attendance and realize Azure is still such a new product, so give it time. Typically they find the best overall attendance when they market the 'newest of the new' topics, and Azure falls within those lines. I wasn't making a stance against Azure as much as saying I might like to see those sessions replaced with others that seemed heavily attended (i.e. Entity Framework, LINQ, .NET Framework, Web API, JavaScript, etc.). There is a co-held conference for Cloud & Virtualization, so maybe putting those tracks over there might work well. Don't worry, it was in the evaluation I turned in :-P

I also asked if there would be any tracks on re-emerging technologies like IE6 or Microsoft Bob, but I was just laughed at. :) I did, however, want to know if they would sprinkle in any architecture, design, or OOP sessions that are less technology-centric. I understand it is difficult to teach topics like DI in 75 minutes, but I still put it out there for comment. With major sponsors of this conference like Microsoft, it only makes sense that the tracks highlight their technologies. Maybe if Martin Fowler sponsors Visual Studio LIVE! we can get some design pattern classes :-P

The final topic of conversation is where these conferences are worth their weight in gold. If you didn't attend, too bad, and I'm not going to tell you the magic secret to development that was presented. Ha, just kidding, but in all seriousness it was about the state of Microsoft today with Windows 8.

"This may be the most dangerous time for Microsoft since the justice department."

Well, Microsoft has definitely made a bold move with Windows 8. However, not for the reasons you might think I'm going to say. When asked how many in the room (of probably 300-400) are doing or planning to do Windows 8 development, about 1/3 raised their hands.

The reality is, nobody knows if this is going to work or not. However, Windows 8 is not Windows Vista. Windows Vista was plagued with security issues, and that isn't really the case this go-around. It's more about betting all the marbles on a 2-headed beast of a one-size-fits-all OS serving both mobile (WinRT) and desktop needs.

Rocky Lhotka did a little monologue that stated it all best. What if Microsoft got only into the mobile market, and the new devices were just that: another mobile device trying to compete with the iPad, a well-established tablet at this point? What if the millions of dollars invested over the last 10 or so years in our Win32 and .NET apps would not work on those devices? Microsoft has taken a chance, but a clever one (as Andrew Brust stated), to cover all territory and make an OS for all needs. Isn't it going to be nice to be able to run any mobile app on our desktop? After all, desktops are still the king of hardware and multiple times more powerful than mobile devices. With Windows 8 we are pretty much getting 'the best of both worlds'.

Wrap Up Day 4




Well, the 75-minute sessions are done and wrapped up, and it's on to the post-conference workshops. Tomorrow I will be doing an MVC workshop that should fill in some cracks and let me see some of the best (or better) practices in development. My brain is almost full, so I'll cram in all of the great content tomorrow and then be on my way!
