Monday, November 18, 2013

Visual Studio LIVE! Orlando 2013 Day 1

It's day 1 and we're going to get into some JavaScript, Single Page Applications, Web API and web development (and it turned out a little SQL too), which is all right up my alley. I really enjoy the day-long workshops on the bookends of the conference because whereas the sessions during the middle of the week are like 'appetizers', the workshops are like a '5-course meal' of the same food genre. This looks to be a popular session as it's quite full, but as usual I scored a seat up front :)

One disclaimer about these posts, it's a test of asynchronous brain power to create these entries. I've described it as being part court recorder, part author, and part student at the same time. I'm not simply regurgitating the material as I hear it. Rather it's a process of listening, processing, learning, filtering, and typing all at the same time. End result, if I mispell (<- irony by the way) a word or have anything stated incorrectly I absolutely welcome feedback, so please just leave a comment.

Data-Centric Single Page Applications with Knockout, JQuery, Breeze, and Web API
Brian Noyes, CTO, Solliance


Right off the bat, I'm hearing great things about SPAs that I normally wouldn't expect: layers and *Separation of Concerns*. Typically when I think of SPAs, I'm thinking about slamming a bunch of fancy JS into the UI to communicate with the server via AJAX and databinding. The analogy Brian used hits close to home with explanations I've used myself: a messy footlocker. What does a footlocker look like when everything is jammed into it, unorganized? These are great analogies for helping explain why SoC is important, and I'm happy to see this type of architectural conversation occurring in reference to SPAs. The basic EF architecture and how these queries work is shown below (image courtesy of Visual Studio LIVE! and Brian Noyes):


The next topic was the plethora of JavaScript libraries available nowadays. One of the issues is determining which library to use. Many of the libraries do 1 thing and do it well, but could be retired or morphed into something different too quickly and disrupt the future of your application.

The gold standard, at least for now, is probably jQuery. It's not a given, but it sits atop the stable list of libraries. Some of the others mentioned were Knockout (data binding), Breeze (CRUD data service calls), Twitter Bootstrap (CSS styles and widgets), and Durandal (a full JS framework that ties together jQuery, Knockout, and Require.js). Several others were mentioned such as Angular, Ember, Backbone, and Foundation. The cool thing is if you learn one of these, odds are you can easily switch to another. Typically the differences are small syntactical ones to accomplish the same objective. Brian discussed the comfort level with each library and why he chose the ones he did for development. A lot of it is just personal preference, as many of these can do the same thing. One direct comparison was made between Durandal and Angular. Both are data binding frameworks using the MV* composition. The important thing to note is that either one composes well with Breeze and jQuery.

POJO is a new term I heard today: Plain Old JavaScript Object. Wow, I think back to the arguments I've seen online about JS being a true OO language. Of course most of us understand POCOs, so here come the POJOs! If one thing is not stated explicitly this week, I'll go ahead and say it: JS isn't going anywhere and may eventually rule the world as king of development languages. With its inherent ability to run across almost any platform, its ranking in current-day development is rising to the top if it's not already there. Remember when we thought JS was just some fancy client-side trinket to make alerts and validation? Not anymore; JS is beginning to reign supreme and emerging as a 1st class citizen for all types of development. If you want to be relevant as a software engineer, I recommend having a working-level knowledge of JS at a minimum.


TypeScript was brought up; it's essentially a higher-level language that ends up outputting JS. It helps prevent some of the bad practices of natively writing JS. The allure for me is writing in something more familiar, in the style of a strongly typed language, yet producing the JS that's needed and run at runtime. I actually hope TypeScript catches on as a standard and is used widely, because I think it will make the process of writing JS much easier. Since it's backed by Microsoft, there is a good chance of this happening. I think it just needs a tad more time to mature and sell itself. This is a good week to make that happen to a wide audience.

Quick list of development tools for client-side development: NuGet, Chrome / IE development tools, Fiddler, Postman, Knockout Context debugger, jsFiddle, Web Essentials, ReSharper, and Productivity Power Tools. On a personal note, I use or have used about 75% of those, so it's always good to know I'm not working in a vacuum / cave and unaware of what's valuable. The IDE productivity tools like JustCode, CodeRush, and ReSharper all help everyday developers do stuff faster. Yep, there is no need to state it any more elegantly. These tools help compress what might ordinarily take 4-5 keyboard presses, mouse clicks, etc. into 1-2 instead. I personally use and like ReSharper, but all of the tools do a different flavor of the same thing. As far as the browser tools, I personally use the IE development tools because I am targeting IE so often, but the Chrome tool seems particularly useful with its 'Sources', 'Console', and 'Elements' tabs for debugging code client-side. The ability to debug JS in the browser using F5/F8 in Chrome was something I had not done, but something I will *definitely* be using in the future.

jQuery, which I'm sure most have already used, is a rock in the JS world for doing rich DOM manipulation. It helps with normalizing the API for working across browsers. Remember the days of writing large 'if' blocks in JS to account for different browsers just to accomplish a single task? With jQuery this issue is almost non-existent, as the functionality to determine this is encapsulated, reducing what the developer needs to write. However, it is much more than just this; the shorthand nature of all the functionality exposed in jQuery greatly reduces the amount of overall JS required to be written. The 'selectors' allow easy reference to DOM elements and then manipulation via jQuery. For example, $("#MyElement").val("Hello"); is a simple way to set the value of an HTML input text box. Once you have your element selected there is a *massive* amount of manipulation that can be done to it.

One of my favorite parts of jQuery that was presented was the ability to make AJAX calls to the server to return and bind data. I've written about some of the challenges of doing this with ASP.NET web forms (Divorcing the UpdatePanel for Asynchronous Postbacks in Web Forms Isn't Easy), but for SPAs and MVC applications this works really well. The syntax to call a server method (i.e. a Controller method) and return JSON to use client side is straightforward. One of the new things I had not seen is JS 'promises' like .done, .fail, and .always, vs. .success, .error, and .complete. The former are suggested as the modern way of doing this, analogous to the new async/await functionality in C#.
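Here's a minimal sketch of what that promise-style call looks like; the '/api/customers' URL and the 'name' property are hypothetical placeholders of mine, not from the session:

$.ajax({ url: "/api/customers", type: "GET", dataType: "json" })
    .done(function (data) {
        // Success: bind or render the returned JSON client side
        $("#CustomerName").val(data[0].name);
    })
    .fail(function (jqXHR, textStatus) {
        // Error handling, replacing the older .error callback
        console.log("Request failed: " + textStatus);
    })
    .always(function () {
        // Runs regardless of success or failure, like .complete used to
        console.log("Request finished.");
    });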

I will say one of the aspects of JS development that I need to improve on is the organization of the logic. I can still be blamed for placing JS at the top of my pages. If I were writing in a server-side language I would say this type of disorganization violates many code standards, so I need to begin refactoring this code out into meaningful, better-organized files. In my defense, this is the tendency when writing in languages that are not strongly typed and tend to be written notepad-style as plain text.

Knockout is a data binding JS library that serves as a means for separation of concerns. It works in all browsers and is not dependent on any other libraries. Its size is about 13KB, or should I just say non-existent, which is fantastic. Its main features are Observables (like INotifyPropertyChanged in the WPF/Silverlight world) and UI templating. I have not used observables in JS before, but this piques my interest. The ability for objects to raise notifications when their properties change in JS closes the gap between client-side and server-side abilities.

OK WOW! The conciseness of two-way databinding with Knockout is amazing vs. jQuery. You can place the declaration directly on the element like: data-bind="value: name". The Observable nature prevents the need to do the traditional 'push/pull' that I had been doing previously with jQuery AJAX calls. At this point, even early in the day, if I just improved my JS organization and began to use Knockout, I believe I would be leaps and bounds ahead of where I am today. The thing that begins to get overwhelming is the onslaught of features one wants to apply after learning everything this week, but the reality is to pick and choose in order to be productive and move forward. I think it's time 'Knockout' gets into that bag of items to use back at home.

Data binding attribute:
<input data-bind="value: name" type="text" />

JavaScript object to bind to:
var customer = {
    name: "Allen"
};

applyBindings function to bring the two together:
ko.applyBindings(customer);
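To get the automatic change notification Knockout is known for, the property gets wrapped in ko.observable. A quick sketch of my own (not from the slides) showing the difference:

var customer = {
    name: ko.observable("Allen")
};
ko.applyBindings(customer);

// Any element bound with data-bind="value: name" now updates automatically
customer.name("Brian");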

I also didn't expect to hear about MVVM in this session. My thought was: "here are the 100 newest JS libraries and how they can make your life easier while hardly using C#." OK, that's probably a little generic, but I'm really pleased to see architectural and OO discussions in this track. I definitely scored with this session today, as the architect blood running through my veins is at ease with what's being presented. MVVM is a presentation layer architecture to help with Separation of Concerns. For example, here would be a typical breakdown:

Model: JavaScript
View: HTML / CSS
ViewModel: JavaScript

Inline styles are a violation of MVVM, so layering is needed to provide organization and SoC. The ViewModel exposes properties and logic for databinding and can be thought of as analogous to the 'code behind' files from web forms, if you need to connect the dots. The Model is typically everything else not in the View or ViewModel. It might be POJOs, business logic, and validation mixed together. The View is just that: the structure the user sees on the screen.

A complete demonstration was given on Knockout Observables. It was cool to see a simple example of a text box and a label where, when the text value was changed and tabbed off, the label observed the update and refreshed automatically. This wasn't just JS assigning one field to another. The Observable was at work and notified automatically when the field was changed.

Knockout Context was a nice add-in to Chrome to help with debugging. Unfortunately navigating through the object model using the native tools does not yield the values of the bound observables. Using Knockout Context allows you to see the actual values to assist with debugging.

The 'Revealing Module Pattern' was an interesting way of structuring code, similar to a class in .NET, that contains private values and methods exposed as a package. The returned object is the exposed surface of the API for the rest of the code using it. One important thing to note is the parentheses at the end of the function, after the last bracket, which make sure the function is invoked. This structure is used to create ViewModels that the controls on the page are then data bound to.
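Here's a minimal sketch of the pattern as I understood it; the ViewModel name and members are my own illustration:

var customerViewModel = (function () {
    // Private members, invisible outside the function
    var name = ko.observable("Allen");

    function save() {
        // Persist changes (stubbed for the example)
    }

    // Only what is returned here is exposed to the rest of the code
    return {
        name: name,
        save: save
    };
})();   // <-- these trailing parentheses invoke the function immediately

ko.applyBindings(customerViewModel);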

A typical data binding syntax looks like data-bind="value: customer().Name". The parentheses are required to make the databinding work, but there are extensions to Knockout that allow the removal of the parentheses for a more concise declaration.

Templating in Knockout, and in JS in general, is something I still have not gotten completely familiar with to date. You can use the syntax data-bind="foreach: products" on a table body with columns and have a row repeated when rendered for each item in the collection (see the sketch below). I suppose it's analogous to some of the Razor helpers in MVC, but since we are looking strictly at SPAs and JS, this is the mainstream method for data binding a collection of data.
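Here's a rough sketch of what that looks like, assuming a 'products' array on the view model (the markup and property names are my own illustration):

<table>
    <tbody data-bind="foreach: products">
        <tr>
            <td data-bind="text: name"></td>
            <td data-bind="text: price"></td>
        </tr>
    </tbody>
</table>

var viewModel = {
    products: ko.observableArray([
        { name: "Widget", price: 9.99 },
        { name: "Gadget", price: 19.99 }
    ])
};
ko.applyBindings(viewModel);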

Probably the main distinction to make here is using purely JS vs. a more robust and rich framework like ASP.NET. I see all of these various data binding examples and think "I don't even need ASP.NET," and that is partially the point. However, as I mentioned earlier with respect to web forms, using purely JS data binding exposes a rabbit hole of issues and challenges. It might not be an "all or nothing" scenario (as nothing in development usually is), but finding the harmony between technologies and languages is the challenge I'm still working through.

There was a question on the following line of JS at the beginning of the file:

var my = my || {};

What this does is say: if there is already a 'my' namespace, or some other library is using this namespace, add our code to it and don't wipe out what already exists; if it does not exist, create it so our binding objects can still be added to the namespace. One of the aspects I really like about the Knockout syntax is the concise yet explicit feel of the code. When I read it, having not used it before, it just makes sense and is quite readable. Compare and contrast this to raw JS, and it's much easier to read and write.
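So, under that pattern, our ViewModel just hangs off the shared namespace; a tiny illustration of my own:

var my = my || {};

my.customerViewModel = {
    name: ko.observable("Allen")
};

ko.applyBindings(my.customerViewModel);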

I did like Brian's methodical approach to deciding when to use custom data-binding handlers. The 1st thought is that if you want to manipulate an element, you are probably thinking about using jQuery to get that element and then do what is needed. The next question to ask is whether you will be doing that same manipulation more than once. If the answer is 'yes', then it's best to look into this type of refactoring and use the binding handler functionality, as sketched below.
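Here's a sketch of what such a custom binding handler looks like; 'fadeVisible' is a commonly used example name, and the implementation here is my own illustration:

ko.bindingHandlers.fadeVisible = {
    init: function (element, valueAccessor) {
        // Set the initial visibility without animating
        $(element).toggle(ko.utils.unwrapObservable(valueAccessor()));
    },
    update: function (element, valueAccessor) {
        // The same jQuery manipulation, re-run whenever the bound value changes
        ko.utils.unwrapObservable(valueAccessor()) ? $(element).fadeIn() : $(element).fadeOut();
    }
};

It would be used in markup as <div data-bind="fadeVisible: isDetailVisible">...</div>, keeping the jQuery call in one place instead of sprinkled through the ViewModel.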

Templating has the following syntax:

data-bind="template: {name: currentTemplate}"

<div id="homeTemplate">
Hello
</div>

In the example above, 'currentTemplate' is a property on our ViewModel that identifies which template to render, resolving to the 'homeTemplate' element by its ID, and the template exposes the data for binding.

SQL Server Workshop for Developers
Leonard Lobel, Microsoft MVP and CTO, Sleek Technologies, Inc


OK, I took a U-turn here and hopped over to the SQL Server for Developers workshop. Brian was doing a terrific job in the other workshop, but rather than learning a couple more JS libraries, I got selfish in craving more varied knowledge, so I snuck over here. I peeked at the agenda and there looked to be several topics in the afternoon that seemed useful for me to investigate.

I'm catching the tail end of the SQL 2012 enhancements discussion. A random note that caught my ear immediately is about querying system information. When needing to get system information, query the system views (the sys catalog views) as opposed to the system tables, which change names in just about every version of SQL Server released.
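For example, a quick sketch of leaning on the catalog views rather than the old version-specific system tables:

-- Stable across versions: the 'sys' catalog views
SELECT name, type_desc, create_date
FROM sys.tables;

-- ...as opposed to version-specific system tables like the old sysobjects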

XML is the only topic being covered that pre-dates SQL 2008. Leonard makes a great point about never going to extremes when making decisions on a topic. Some camps say we don't even need a relational database because of the structure and validation of XML. The other extreme is to never use XML for storing data and always use a database. There are shades of gray in the middle. You can store the structured XML in the database, yet still have the advantages of XML querying.

XML in its raw form can be stored in the database. I know firsthand from doing this that it can be a bit tricky to query, as he validated, but it's feasible. One interesting thing, which makes me want to go back and look at some of the databases I've worked on, is the use of the XML data type for a column. I believe in the past I always used a varchar(max), but if I had a column that is always XML (not, say, mixed JSON and XML captured from service traffic for auditing), then using the XML type appears perfect. In fact, upon querying using 'FOR XML RAW' or 'FOR XML AUTO', the XML appears as a link, and clicking on it produces a well-formatted version of the value. This is very nice. You can even tie the XML to a schema (.xsd) and get validation upon inserting into the database. No doubt about it, if I need to store XML in the database again, I'll be using this type. I may not explore all of the functionality available, but even just for storing the data, this appears to be a superior method.
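A minimal sketch of the xml column and the FOR XML output described above; the table and column names are my own placeholders:

CREATE TABLE dbo.Books (
    BookId  int IDENTITY PRIMARY KEY,
    BookXml xml NOT NULL    -- could be typed against an XSD schema collection for validation
);

INSERT INTO dbo.Books (BookXml)
VALUES ('<book><title>SQL Server for Developers</title></book>');

-- Relational results rendered as XML (the clickable link in SSMS)
SELECT BookId, BookXml FROM dbo.Books FOR XML AUTO;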

There were also examples of querying with XQuery, using XPath expressions in the SELECT list and WHERE clause like the following: Book.value('/book[1]/title[1]', 'varchar(max)') AS Title. This ability extends into other CRUD operations as well. The overall concept is that you can query and modify XML stored in SQL Server just as you could programmatically from .NET. You get the best of both worlds: storing XML, yet being able to query and manipulate it in its proper way.
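Continuing the hypothetical Books table from above, a sketch of the .value() and .exist() XQuery methods:

SELECT
    BookXml.value('(/book/title)[1]', 'varchar(max)') AS Title
FROM dbo.Books
WHERE BookXml.exist('/book/title[text() = "SQL Server for Developers"]') = 1;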

Next up is something I really do want to get into more and know I'll use in the future: FILESTREAM in SQL Server. There are (2) options for storing BLOB data: inside the database as a BLOB or outside the database in the file system. BLOBs use a varbinary(max) data type. The downside of storing inside the database is the bloating of the database and the eventual pull on performance; the downside of storing outside the database is the requirement to back up all the files manually.

A few quick side notes on FILESTREAM: it is not supported in mirrored environments. However, in SQL 2012 it is functional with HADR (high availability); if you are using this new version of HADR, the FILESTREAM functionality works. Also, if you turn on TDE, it does not encrypt the FILESTREAM files. One last tidbit for the frugal among us: the FILESTREAM files do not count against the 10GB database size limit when using SQL Server Express.

The FILESTREAM attribute allows storing the files outside of SQL Server with just a pointer to the file. However, SQL Server manages these files and brings transactional semantics to files stored on the NTFS file system. With FILESTREAM you get the best of both worlds: reduced bloating of the database and files stored where they naturally belong. Once FILESTREAM is enabled, the column needs to be declared as "varbinary(max) FILESTREAM." Notice the data type is the same as before, just with the FILESTREAM attribute.
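A rough sketch of the declaration, with placeholder names; note the ROWGUIDCOL uniqueidentifier column that FILESTREAM tables require:

-- Assumes FILESTREAM is enabled on the instance and the database has a FILESTREAM filegroup
CREATE TABLE dbo.Photos (
    PhotoId     uniqueidentifier ROWGUIDCOL NOT NULL UNIQUE DEFAULT NEWID(),
    Description nvarchar(200),
    Photo       varbinary(max) FILESTREAM NULL
);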

The generation of the FILESTREAM will create an .LDF and .MDF to manage the files along with pointing to a pre-existing folder to store the files (i.e. 'Images'). I did ask if the files are available to seek out manually on the file system. The response was that technically they are, but they are obfuscated and we should not be poking at the files directly. We actually did this, and the file was buried 3 levels deep with each folder and file named as a GUID. The file itself had no extension, so nothing identified the type of file; you would have to open it in the appropriate program to see it. Once you store more than one file, there is little chance you could distinguish between the file names to find what you are looking for; this exercise was only feasible because we were working with a single file in an empty directory. The point is, you only access the files via SQL Server.

In .NET, one can access these files via System.Data.SqlTypes.SqlFileStream as a stream in the proper way. This method does not place a burden on SQL Server performance (as opposed to querying the varbinary data through SQL Server, which does). By using a transaction to get the path after inserting essentially a dummy value, we can stream the actual file contents directly, bypassing SQL Server. This way we do not have to allocate memory on SQL Server's side to store and access the file. Yes, even for reading the file a transaction is used. Using a transaction seems odd for reading data, but the .NET library requires a transactional context, so just accept the process for the byproduct of reducing the pull on SQL Server resources.

An example was provided of the procedure for using FILESTREAM in a transactional manner from .NET. The idea is to insert everything required except the BLOB column, which gets a zero-length value. Once the path name and transaction context are returned, these are used to create and use the SqlFileStream, completing the transaction.
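The T-SQL half of that handshake looks roughly like this (carrying over the hypothetical Photos table from above); .NET then uses the returned path and transaction context with SqlFileStream to write the real bytes before committing:

BEGIN TRANSACTION;

INSERT INTO dbo.Photos (PhotoId, Description, Photo)
VALUES (NEWID(), 'Vacation photo', 0x);   -- zero-length placeholder for the BLOB

SELECT
    Photo.PathName()                        AS FilePath,
    GET_FILESTREAM_TRANSACTION_CONTEXT()    AS TxContext
FROM dbo.Photos
WHERE Description = 'Vacation photo';

-- ...stream the file contents via System.Data.SqlTypes.SqlFileStream here...

COMMIT TRANSACTION;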

Something new to me is the 'hierarchyid' data type. It's essentially a binary value that has its bits arranged in a certain way to represent a position in a hierarchy (parent, child, sibling, etc.). This type of column can be indexed depth-first or breadth-first; you can use either or both to meet your needs. It appears this data type would be great for hierarchical data stored in trees (org charts, family trees, etc.). I'm sure those of us storing hierarchical data in the past have used a contrived series of numbers creating a path to the location of a node. It has SQL functions such as GetAncestor() and GetDescendant() to query the hierarchical data. It may not be pretty, but it is honestly much cleaner than any hand-baked approach.
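A small org-chart-style sketch of my own showing the type and a couple of its methods:

CREATE TABLE dbo.Employees (
    Node         hierarchyid PRIMARY KEY,   -- the PK provides a depth-first index
    EmployeeName nvarchar(100)
);

INSERT INTO dbo.Employees VALUES
    (hierarchyid::GetRoot(),      'CEO'),
    (hierarchyid::Parse('/1/'),   'VP of Engineering'),
    (hierarchyid::Parse('/1/1/'), 'Developer');

-- The VP and everyone under them
SELECT EmployeeName
FROM dbo.Employees
WHERE Node.IsDescendantOf(hierarchyid::Parse('/1/')) = 1;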

This data type and functionality could be the solution to storing hierarchical data. It might not be used on every database, but it's another good tool to know about in the proverbial toolbox. I know for a fact I've done the hand-baked approach to create treeviews in ASP.NET, and the data persistence was not clean. This would have been a better approach if it had existed.

The FileTable was added in SQL 2012. It is a combination of FILESTREAM and the hierarchyid data type. It appears as an ordinary table, with the exception that the schema and columns cannot be dictated. Some of these columns are stream_id (GUID identifier), file_stream (FILESTREAM), name (nvarchar name of the directory or file), and path_locator (hierarchyid with the location of the file or directory within the file system hierarchy). FileTable exposes a file share with the stored files. Changes are tracked on both sides; if changes are made in either the database or on the file system, the other one is updated.
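Creating one is about as simple as it gets since the schema is fixed; a sketch with placeholder names, assuming the database already has FILESTREAM and a FileTable directory configured:

CREATE TABLE dbo.Documents AS FileTable
WITH (
    FileTable_Directory = 'Documents',
    FileTable_Collate_Filename = database_default
);

-- Files dropped onto the exposed share show up as rows
SELECT name, file_type, cached_file_size, path_locator.ToString() AS PathLocator
FROM dbo.Documents;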

The most interesting thing is that the files are not stored in an obfuscated manner like they are with pure FILESTREAM, which means the files are directly accessible. The one caveat is 'memory-mapped' files. An example was shown where an attempt to open a .txt file from the share produced an error. This has to do with the way Windows uses pointers in virtual memory to quickly open the file. The quick fix is to just copy the file out to another location and open it there. I would probably investigate the FileTable feature before FILESTREAM if using SQL 2012, as it seems to align a little more directly with traditional file storage practices.

SQL Server also now has geospatial functionality, where data pertinent to a user's physical location can be aligned and presented. Obviously this would be most useful for mobile development. This functionality has existed since 2008 but has been improved in 2012. The functionality uses either Planar or Geodetic spatial models to locate a position. Geometry (the flat model) performs much better than Geography (the curved model) and can be used if the area is small or precise accuracy is not paramount. There are (2) data types that can be used for the column data: 'geometry' and 'geography'.

One very cool feature is that if you query the data, you can click on the 'Spatial results' tab to see the graphical representation. The data used to produce the geographical information can be imported from any of the following sources: Well-Known Text (WKT), Well-Known Binary (WKB), or Geography Markup Language (GML). The data in WKT, for example, looks like the following: POLYGON ((35 10, 45 45, 15 40, 10 20, 35 10),(20 30, 35 35, 30 20, 20 30)). Obviously we will be importing this information and not creating it from scratch. If you do want to try to generate valid results from scratch, I assume you are the same person (yes, singular, because nobody should be doing this) that hand-writes vector-based coordinates in XAML :P

One example showed (2) photos that were taken, then used the .STDistance() function on their locations to determine how far apart they were physically. This is just scratching the surface of what can be done, and my mind goes 100mph thinking about the potential. The geospatial functionality really opens a ton of doors for location-based programming and data needs.
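A rough sketch of that distance calculation with the geography type; the coordinates here are made up:

DECLARE @photo1 geography = geography::STGeomFromText('POINT(-81.5639 28.3852)', 4326);
DECLARE @photo2 geography = geography::STGeomFromText('POINT(-81.3792 28.5383)', 4326);

-- Distance in meters between the two photo locations (WKT points are longitude latitude)
SELECT @photo1.STDistance(@photo2) AS DistanceInMeters;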

One side note on something I have blogged about before (You Can't Know It All (YCKIA), But Keep Pursuing After It Anyway), proving you can't know it all or be perfect at everything. Over the years I have noticed this trend: in purely code sessions, you might see some SQL or stored procs that are a bit rough around the edges. Here in a SQL session, I see top-notch SQL and some HTML, JS, AJAX, and C# that is a tad rough around the edges. TOTALLY 100% OK, and I know these are just stubbed-out examples. However, even in examples you can see the main topic being pushed. At the end of the day it's tough, if not impossible, to be great at everything. That's why I always yield to, or at least listen to, the knowledge of the experienced! This guy has probably forgotten more about SQL than I'll ever know.

The last portion of the day is dedicated to 'Enterprise' features that are only available in SQL Server Enterprise edition unless noted. 1st up is SQL Server Audit. This was added in SQL 2008, and in SQL 2012 they included it in Standard edition as well. With Audit you can track virtually any database action by a user or process and log it to either the file system or the Windows event log. How many times over the years have we created an 'Audit' process for SQL Server manually? With this feature we have some inherent functionality available. Now if your auditing does not map flatly to your data structure, this might not totally replace your need for custom auditing; for example, just showing that a record was modified by a user may not be enough when there are combination rules that make up the auditing. However, even for a rule such as auditing users that delete more than 10 records in a one-week span, it appears you could take the raw audit records and then create a view incorporating the rules.
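A rough sketch of the moving parts, with placeholder names and paths of my own: a server audit defines where the records go, and a database audit specification defines what gets tracked.

-- Server-level audit object: where the audit records are written
USE master;
CREATE SERVER AUDIT OrdersAudit
TO FILE (FILEPATH = 'C:\SqlAudit\');
ALTER SERVER AUDIT OrdersAudit WITH (STATE = ON);

-- Database-level specification: which actions to track
USE SalesDb;
CREATE DATABASE AUDIT SPECIFICATION OrdersAuditSpec
FOR SERVER AUDIT OrdersAudit
ADD (INSERT, UPDATE, DELETE ON OBJECT::dbo.Orders BY public)
WITH (STATE = ON);

-- Read the raw records back, e.g. to layer custom rules on top with a view
SELECT event_time, action_id, database_principal_name, statement
FROM sys.fn_get_audit_file('C:\SqlAudit\*', DEFAULT, DEFAULT);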

The next Enterprise feature discussed is Change Data Capture, or CDC. It provides the functionality to record changes to a table's data into another table without writing triggers. I suppose this could be really useful for tracking changes to a table containing transactional data (i.e. banking deposits) to provide an audit trail of what occurred. You can use system functions to query the captured data. This has also been available since 2008, and I imagine it was created not only because the functionality is so often sought after, but because it was being done manually anyway, as this type of auditing is sometimes absolutely required.
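A quick sketch of enabling CDC and reading the changes back; the Deposits table is a placeholder of mine:

-- Enable CDC for the database, then for a specific table
EXEC sys.sp_cdc_enable_db;

EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'Deposits',
    @role_name     = NULL;

-- Query the captured changes through the generated function and LSN helpers
DECLARE @from_lsn binary(10) = sys.fn_cdc_get_min_lsn('dbo_Deposits');
DECLARE @to_lsn   binary(10) = sys.fn_cdc_get_max_lsn();

SELECT *
FROM cdc.fn_cdc_get_all_changes_dbo_Deposits(@from_lsn, @to_lsn, N'all');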

The final Enterprise feature presented was TDE, or Transparent Data Encryption. The entire database, including the data and logs, can be encrypted. This way, if backups are compromised they will not be readable without the proper master key and certificate installed on the server. Note that the TDE keys are created at the server level, which is why it can be done without modifying individual columns. The data is encrypted and decrypted on the fly as it is stored and read. This fact makes me wonder about the performance hit, but honestly it's probably so minuscule it's not worth blinking an eye. The type of algorithm used, as with any encryption, will either increase or decrease the performance hit based on its strength. Often encryption is a requirement, so any performance hit is a non-factor and simply has to be included in estimates.
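A sketch of the TDE setup steps, with placeholder names and an obviously fake password:

USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<use a strong password>';
CREATE CERTIFICATE TdeCert WITH SUBJECT = 'TDE Certificate';

USE SalesDb;
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER CERTIFICATE TdeCert;

ALTER DATABASE SalesDb SET ENCRYPTION ON;

-- Back up the certificate and private key; the backups are unreadable without them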

NOTE: One should not rely solely on TDE for truly sensitive data. If you have columns like SSNs, credit card numbers, etc., they should be individually encrypted at the column level. Unlike TDE, which requires no work at the query level, column-level encryption / decryption does require providing the keys on each access of the data. It might be a small pain, but security, when needed, can't have a price tag or effort tag attached; it needs to be done.

The TDE process initially does take some time, so make sure you do it off hours if applying it to a production database. He did note that 'tempdb' gets encrypted implicitly when TDE is in use on the instance. He recommends storing encrypted databases on a separate instance from non-encrypted databases to avoid unnecessarily encrypting databases that do not require it (like 'tempdb'). I don't believe this is a requirement though, and a mix of TDE databases and non-encrypted databases should be fine. What's the old saying: "let's test it and see!"

Wrap Up Day 1


Well, that's another great start to this year's VSLive! conference! I'm happy that I switched over to the SQL class after lunch, as I was able to pick up a lot of additional information across 2 different topics. Hats off to Brian Noyes and Leonard Lobel for an excellent job presenting; I'm quite pleased with the content. Beginning tomorrow, the traditional portion of the conference begins with the 75-minute sessions. Looking at the schedule there are so many different sessions I want to go to, so I will need to apply some filtering logic of my own to figure out which ones to attend ;)

Award for Best Session Name :)
