Markus on Development and Publishing

This is Markus Egger's professional blog, which covers topics such as development, publishing, and business in general. As the publisher of CoDe and CoDe Focus magazines, and as the President and Chief Software Architect of EPS Software Corp., Markus shares his insights and opinions on this blog.


Monday, October 18, 2010
More Free “State of .NET” Events Announced

Three Locations - Houston, Dallas, and Phoenix

  • Houston: Tuesday, November 9, 2010, 1:30 - 4:30 PM
    Microsoft Houston Office - 2000 W Sam Houston Pkwy S, Houston, TX 77042
  • Dallas: Monday, November 15, 2010, 1:30 - 4:30 PM
    Microsoft Dallas Office - 7000 SR-161 (George Bush Turnpike), Irving, TX 75039
  • Phoenix: Thursday, November 11, 2010, 1:30 - 4:30 PM
    Microsoft Phoenix Office - 2929 N. Central Ave, Suite 1400, Phoenix, AZ 85012

Brought to you by Microsoft, CODE Magazine, CODE Training & EPS Software, this free afternoon event presents an unbiased look at current and future development with .NET. Join me for an afternoon of free and independent information about current Microsoft development technologies! What is the state of .NET today? Which of the many .NET technologies have gained traction? Which ones can you ignore for now? What other Microsoft technologies should you include in your development efforts? This event is completely free of charge and is designed for developers as well as IT decision makers. No specific prior knowledge is required. Attendees will come away with a clear understanding of which technologies to use for various technical challenges.

This is NOT Microsoft marketing hype - this is based on our real-world development experience with various technologies.

Topics will include:

  • Latest News from PDC!
  • Visual Studio 2010
  • Silverlight 4.0
  • Visual Studio LightSwitch
  • Expression Studio
  • ASP.NET MVC 3.0
  • WebMatrix
  • Azure
  • Windows Phone 7
  • Razor
  • and more! 

Note: The final list of topics is always subject to change as we are always aiming to have the most up-to-date content possible!

Signup is free, but you have to let us know you are coming. Here are the appropriate links:

Questions? Please e-mail or call 832-717-4445 x32.

This event is co-hosted by EPS Software Corp. and Microsoft Corporation. EPS is responsible for all content presented at this event.

Posted @ 7:01 PM by Egger, Markus
Comments (587)

Monday, October 11, 2010
Session Materials for my Presentations at Houston Tech Fest 2010

I just uploaded session materials for my Houston Tech Fest 2010 sessions. There are several parts:

  1. The layout examples (including the automatic form layout and the form template that creates the ribbon and other stuff).
    Hint: Try to uncomment some of the merged resource dictionaries to see different styles.
    Here’s the link:
  2. Here are my slide decks for both sessions.
  3. Also, I recently wrote an article about this sort of stuff. It appears in the upcoming issue of CODE Magazine. You can also see an unedited draft of the article here. Or of course, get the latest CODE Magazine (November/December 2010) and read the final article. (BTW: The first download link above has the samples that go with the article).

Enjoy, and feel free to ping me if you have questions!

Posted @ 12:47 PM by Egger, Markus
Comments (371)

Monday, September 27, 2010
Adding Behavior to Styles and other XAML

WPF and Silverlight styling is amazing (and frankly leaps and bounds beyond HTML5 and CSS3, but that is a different story altogether…). You can easily re-style your entire application or individual controls. You can style the layout of entire screens. You can re-brand Silverlight components so they can be reused on different sites. It’s awesome. And it is all done declaratively in XAML.

One of the most common questions I get when it comes to styling is: “I need to add custom behavior and thus a code-behind file. How do I do that?”

Well, the simple answer is: You don’t! And you don’t have to. In fact, I consider it a great architectural characteristic of WPF/SL that styles do not have the capability to add custom behavior! Why? Because a style is “content” and attached code-behind programs are “behavior”. And frankly, one of the biggest mistakes the software industry as a whole has made over time is mixing content and behavior. It is the greatest security nightmare ever created! Take Word documents for instance: Why do we have this mess with macro viruses? Because when you open a Word document, which after all is generally mostly content, you might inadvertently activate some sort of script (behavior) that does things you might not want. This can only happen because content and behavior are mixed. If Word documents (or documents in other apps) didn’t have this facility at all, we would not have any macro viruses. (And we could always have a different kind – a .docm file format perhaps, or something like that – that enabled macros, just so one knew what to expect when opening a file.)

There are many other document types that have the same problem. HTML, for instance, is a classic example of mixing content and behavior. To such an extreme, in fact, that we can hardly differentiate between content and script at all, which is why we have to deal with things such as cross-site scripting attacks and a lot more. (And don’t even get me started on SQL Injection attacks, which are rooted in the same problem…). But this post is not meant to be about this classic security issue (books like Writing Secure Code spend a lot of time and space on this very issue).

Anyway: Pure XAML styles do not have this problem, because XAML is just content. XAML specifies which objects are to be used and what kinds of things to trigger (through triggers, events, commands, and behaviors). However, it is important to realize that the objects (and other behaviors) that are referenced in XAML already need to be compiled into the app. So it is easily possible to reference and use all kinds of behavior objects that the application is designed to provide, but even if someone downloads a XAML style file from a web site, there is no inherent security risk, because no unwanted new behavior can come along. It is a design I like very much.

But back to the issue at hand: What if you do indeed need to reference behavior in your XAML file in a style that you would typically use a code-behind file for? Well, here’s the trick: Whatever you have in your XAML file simply refers to objects. For instance, consider the following XAML snippet:

<Window x:Class="WpfApplication.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
    <Grid>
        <Button x:Name="button1">Hello!</Button>
    </Grid>
</Window>

In this snippet, a Window hosts a Grid, which hosts a Button. All these tags simply refer to classes of the same name. There is nothing special about these classes. You can also add your own classes and instantiate them as part of the XAML markup like this:

<Window x:Class="WpfApplication.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:my="clr-namespace:WpfApplication"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
    <Grid>
        <Button x:Name="button1">Hello!</Button>
        <my:GreatBehavior />
    </Grid>
</Window>

Of course you can do whatever you want in this special object. Nobody says an object in XAML needs to have a visual appearance. In this case, it only provides behavior. Let’s say you had originally intended to create an on-mouse-over handler for the button object in a code-behind file. Instead, you can use this special object and add the following code:

public class GreatBehavior : FrameworkElement
{
    public GreatBehavior()
    {
        Loaded += delegate
        {
            // Find the button in the same namescope and wire up the behavior
            var button = FindName("button1") as Button;
            button.MouseEnter += (s, e) => MessageBox.Show("Now!");
        };
    }
}

This code finds the button and attaches the desired behavior to the button’s event, just like you would in a code-behind file. Except you now have a behavior-less XAML file that can be securely loaded, no matter where it came from. This of course implies that the GreatBehavior class was already compiled into your app (as a code-behind file would be), so this is entirely safe. You can use this same technique with any XAML file. In fact, in my MVVM apps I often write views as loose XAML files with no code-behind whatsoever. It works great!

Note that some might point out that this is not the greatest solution syntax-wise, as there is no clear indicator that the behavior goes with the button. That is true, and it is probably better to attach the behavior object to the button, which can be done through an attached property. This is a little beyond the scope of this blog post, but check out this article by Josh Smith on how to do that and thus achieve more agreeable syntax. The basic idea remains the same.
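To sketch the general idea of the attached-property variant (this is my own illustrative code, not the code from Josh Smith’s article, and all names are made up), the behavior registers itself when the property is set on the button, so the XAML can read `my:Behaviors.MouseOverMessage="Now!"` directly on the Button tag:

```csharp
// Illustrative sketch: an attached property that wires up mouse-over behavior.
// Usage in XAML: <Button x:Name="button1" my:Behaviors.MouseOverMessage="Now!" />
public static class Behaviors
{
    public static readonly DependencyProperty MouseOverMessageProperty =
        DependencyProperty.RegisterAttached(
            "MouseOverMessage", typeof(string), typeof(Behaviors),
            new PropertyMetadata(null, OnMouseOverMessageChanged));

    public static void SetMouseOverMessage(UIElement element, string value)
    {
        element.SetValue(MouseOverMessageProperty, value);
    }

    public static string GetMouseOverMessage(UIElement element)
    {
        return (string)element.GetValue(MouseOverMessageProperty);
    }

    private static void OnMouseOverMessageChanged(DependencyObject d,
        DependencyPropertyChangedEventArgs e)
    {
        // The behavior is compiled into the app; the XAML merely turns it on.
        var element = d as UIElement;
        var message = e.NewValue as string;
        if (element != null && message != null)
            element.MouseEnter += (s, args) => MessageBox.Show(message);
    }
}
```

The security argument is unchanged: the XAML only references a class that already ships with the app, but the markup now states visibly which element the behavior belongs to.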

Now what does all of this have to do with styles and templates? Well, basically, when creating advanced skins and templates, I quite often run into scenarios where I just can’t do quite everything I want with declarative XAML. And since there is no code-behind file in skins ever, I use this very technique to introduce my own behavior and thus have complete freedom to do whatever I want. These behavior objects could be pure behavior objects as in the above example.

Another variation on the theme is to subclass an existing object. For instance, in my MVVM applications, I let my view model define “actions”, which are then used to populate a toolbar or similar UI element. (Check out this article of mine in CODE Magazine for a detailed discussion of this concept). However, if my view model does not have any actions defined, I do not want the toolbar (or whatever UI element I use) to show up at all. I thus created myself a subclassed Grid that sets itself visible or collapsed depending on whether the current data context object implements the IHaveActions interface.
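As a minimal sketch of that subclassed Grid (assuming WPF’s DataContextChanged event and an IHaveActions interface with an Actions collection; all member names here are my own illustrative assumptions, and Silverlight would need a different hook since SL4 lacks DataContextChanged):

```csharp
// Hypothetical interface as described in the text; the real one may differ.
public interface IHaveActions
{
    IList<object> Actions { get; }
}

// A Grid that collapses itself unless the current DataContext exposes actions.
public class ActionGrid : Grid
{
    public ActionGrid()
    {
        DataContextChanged += (s, e) =>
        {
            var actions = DataContext as IHaveActions;
            Visibility = (actions != null && actions.Actions.Count > 0)
                ? Visibility.Visible
                : Visibility.Collapsed;
        };
    }
}
```

In a skin, `<my:ActionGrid>` can then be used anywhere a plain `<Grid>` would go, and the show/hide logic travels with it.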

Once you have this sort of setup, you are back to being able to do whatever you can do in source code, yet you can include it in any XAML you want, and you can do so entirely without introducing security risks. And this freedom can be worth a lot in sophisticated applications!

Posted @ 5:01 PM by Egger, Markus
Comments (676)

Wednesday, August 25, 2010
Auto-Updating a Silverlight Out-Of-Browser App

Silverlight supports Out-Of-Browser operation, meaning you can build an app that runs very much like a desktop application. You enable this via a simple setting in the properties of the Silverlight control. Once you have an OOB app, you can either start it as a stand-alone app, or, if the app is originally launched within a browser as part of a page, users can right-click on it and choose to “install the app locally”.

I really like this feature. But I was surprised to learn that an app taken out of the browser does not automatically update itself when a newer version is available on the server. It had always seemed to me that this is the quintessential modus operandi in Silverlight. After all, in the web browser, when you navigate to a site with a Silverlight control, you automatically and always get the latest version of that control. Versions that have been cached from prior visits are only used if they are still up to date. Not so for OOB Silverlight apps. You can publish new versions all you want; the client (by default) keeps the version that was originally installed.

So what do you do to fix this problem? Well, luckily it turns out you can simply add a few lines of code to the app’s constructor to enable updating. Add the following code to your App class’ constructor (in App.xaml.cs) to get auto-updating:

if (Current.IsRunningOutOfBrowser)
{
    Current.CheckAndDownloadUpdateCompleted += (sender, e) =>
        {
            if (e.Error == null && e.UpdateAvailable)
                MessageBox.Show("New version! Please restart!");
        };
    Current.CheckAndDownloadUpdateAsync(); // kick off the actual update check
}

So the whole constructor part of the app class should now look like this:

public partial class App : Application
{
    public App()
    {
        Startup += Application_Startup;
        Exit += Application_Exit;
        UnhandledException += Application_UnhandledException;

        if (Current.IsRunningOutOfBrowser)
        {
            Current.CheckAndDownloadUpdateCompleted += (sender, e) =>
                {
                    if (e.Error == null && e.UpdateAvailable)
                        MessageBox.Show("New version! Please restart!");
                };
            Current.CheckAndDownloadUpdateAsync();
        }

        // …
    }
}

There you go! Life is good again :-)

Posted @ 10:13 AM by Egger, Markus
Comments (2891)

Tuesday, August 24, 2010
Microsoft Expression Web 4 SuperPreview Rocks!

I have long been a user – a fan even – of the Microsoft Expression product line. However, I mostly use the XAML-related tools, such as Expression Blend and Expression Design. As it turns out, I had so far completely overlooked a hidden gem: Expression SuperPreview.

SuperPreview is a tool that allows you to view web pages the way they appear in different browsers. The basic idea is that you open a URL in SuperPreview and pick the browsers you would like to see. This includes some of the browsers you have installed on your machine (in my case, it can show IE6, IE7, IE8, IE8 rendered in IE7 mode, and Firefox 3.6.8). I also have Chrome and Safari installed, but I guess those are not included. What is pretty cool, however, is that there now is a SuperPreview online feature that’s in BETA. This allows connecting to a free online rendering service that shows even more browsers that are not native to Windows. I have no idea what will ultimately be included in this, but in the current beta, it can show a view of Safari on the Mac, which I otherwise could not test on my Windows machine. Hopefully, more will be added in the future.

The different views SuperPreview provides are very cool too. There are side-by-side comparisons of various renderings. You can even overlay 2 renderings to see the exact differences between the different browsers. Very cool!

I also like the option of setting a screen size to simulate smaller screen resolutions than I personally run. Plus, the zoom feature is very nice as well. This way, you can see more of your web site, or more easily compare the site. Zooming works in SuperPreview regardless of whether the browser actually supports it or not.

I will definitely be using this tool going forward. I find it much easier to verify my sites work in multiple browsers and versions using SuperPreview than visiting each browser manually. Many of them I do not even have installed (like IE6).

Posted @ 3:30 AM by Egger, Markus
Comments (903)

Wednesday, July 07, 2010
Printing in Silverlight 4

Silverlight 4 can print. This was big news when it was first announced. It may be true that printing isn’t as important as it used to be; lots of stuff is now handled electronically rather than on paper. However, this doesn’t eliminate the need for printing. Here are a few scenarios where print is useful:

  • Sometimes you just need a piece of paper. Some countries still require printed paper invoices for instance. Or another example is our Tower48 Digital Escrow service, where legal documents and contracts just have to be printed, whether we want it or not.
  • Sometimes certain forms have to be filed, and they need to look exactly right. (Or maybe you need to print a check?). Just plain HTML output and relying on the browser to print that HTML doesn’t even get close to getting the job done.
  • Sometimes, it isn’t all that easy to convert whatever you have to print into HTML for print purposes. What if you have a vector drawing or some graph in Silverlight and want to print that? No easy conversion to HTML there.
  • Sometimes, you may not want a piece of paper, but you may need a file such as XPS or PDF that you can file away or email to someone. Printing these days doesn’t necessarily just mean “printing to paper”. Sending a document through the printer spooler and then through an XPS or PDF creator can be an easy way to accomplish this. (Creating PDF straight out of Silverlight low-level is not for the faint of heart… you’d need some kind of third party tool for this).

So printing is good to have. I just spent a whole bunch of time on a print algorithm for our escrow service company, and I am happy with the result I was able to achieve with Silverlight 4. However, it must also be said that printing in SL4 is not a no-brainer. You now have basic print ability through an API, but it isn’t something that “just works”. You will have to do a lot of stuff yourself.

The basic idea of printing in SL4 is pretty simple: You create a print object, trigger a print job through it, and then receive various events, such as “printing a page now”. You simply react to this event and create some sort of Silverlight UI as the print “visual” (the thing you want to send to the printer) and let SL4 print that out. This process continues as long as you indicate there are more pages to be printed.

So let’s say you have a Silverlight control and you want to print the control exactly as it is on screen. You could simply put the following code in a button to have it print:

var doc = new PrintDocument();
doc.PrintPage += (sender, e) =>
    {
        e.PageVisual = this;
        e.HasMorePages = false;
    };
doc.Print("Example Document");

In this example, I simply use a Lambda Expression to handle the “PrintPage” event (you could have certainly also used a standard event handler). This event handler kicks in as soon as the print job is triggered via the Print method (the parameter is the document title, which is used for things such as the printer spooler). The event handler then fires when the first page is to be printed. It uses the current element (presumably the whole control) as the visual. It then indicates that no further content is to be printed, so this will result in a single page printout.

So far so good (and simple). However, while this is what is usually shown as a sample, it is just about useless in real-world scenarios. After all, printouts rarely look exactly like their on-screen counterparts. Plus, making the printout exactly like the on-screen version causes issues of its own. For instance, if you have a list of data, the printout will only show the same content that is on the screen. It won’t show stuff you’d have to scroll for, and it certainly won’t flow over to the next page.

So the question arises: How does one do something for real?

Real-World Printing

Well, that is a bit more difficult to answer. In my scenario, where I had tons of text to print, I had to first figure out how many pages I wanted/needed to print. To do this, I created completely new visuals, rather than using anything that was already on the screen. Unfortunately, there currently is no easy way to do any of this automatically. So your first step is to figure out how large the sheet of paper is you want to print. You really do not know that at this point, since the user hasn’t picked a printer yet, so you have to make some assumptions. In my scenario, I had to support Letter, Legal, and A4 paper formats. To do so, I simply created in-memory Canvas objects of appropriate sizes. In Silverlight, we use “logical pixels” as our measurement. One logical pixel is 1/96th of an inch. Armed with this information, we can create Canvas objects of appropriate sizes. For instance, this creates a Canvas that is of the same size as a Letter sheet of paper:

var letter = new Canvas();
letter.Height = 1056;
letter.Width = 816;

Note: Appropriate sizes are: Letter (8.5 x 11 in) = 816x1056, Legal (8.5 x 14 in) = 816x1344, A4 (210 x 297 mm) = 793.7x1122.5 pixels.
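These values are simply the physical page size in inches multiplied by 96. A small helper like this (my own convenience method, not part of the Silverlight API) keeps the magic numbers out of the code:

```csharp
// 1 logical pixel = 1/96 inch, so page size in pixels = inches * 96.
public static Canvas CreatePageCanvas(double widthInches, double heightInches)
{
    return new Canvas
    {
        Width = widthInches * 96,   // Letter: 8.5 * 96 = 816
        Height = heightInches * 96  // Letter: 11 * 96 = 1056
    };
}

// Usage: var letter = CreatePageCanvas(8.5, 11);
```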

Theoretically, you can now put stuff on that canvas and use it to print:

var doc = new PrintDocument();
doc.PrintPage += (sender, e) =>
    {
        var letter = new Canvas();
        letter.Height = 1056;
        letter.Width = 816;
        letter.Children.Add(
            new TextBlock() { Text = "Hello World!" });

        e.PageVisual = letter;
        e.HasMorePages = false;
    };
doc.Print("Example Document");

Of course now you have a whole “sheet of paper”, but most printers can’t print all the way to the edge. So you probably do not want to put elements at position 0,0 as in this example. The printer would be somewhat likely to cut it off. (Or more likely, it may print position 0,0 just fine, but will cut off at the bottom right end of the “page”). The approach I like to take is to treat the entire page as if it were exactly a physical page, and to support a margin within the document. Typically, a margin would be something like 1 inch all around, so I may want to place the TextBlock at position 96,96:
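The positioning might look like this (a sketch; `letter` is the page-sized Canvas from the earlier snippet):

```csharp
var text = new TextBlock() { Text = "Hello World!" };
Canvas.SetLeft(text, 96); // 1 inch (96 logical pixels) from the left edge
Canvas.SetTop(text, 96);  // 1 inch from the top edge
letter.Children.Add(text);
```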

This gives you a single page. However, in most business cases, you have more data to print than fits on one page. In my case I had a flexible amount of text stored in TextBlock elements. Silverlight does not automatically split that sort of content into multiple pages (a process known as “pagination”), so I had to do this myself. Not a trivial task really, and a good pagination algorithm is not for the faint of heart (it includes handling concepts such as “orphans” and requires advanced layout concepts). In my case, I decided it was good enough to figure out whether an entire paragraph, or at least a “run” within a paragraph, fit on the page. If not, it goes to the next page. (A “run” is a segment of text, such as an entire paragraph, or a single section with the same formatting, such as a bold sentence). In my case, this worked OK, since I had relatively short paragraphs with relatively simple formatting. For other scenarios, a more advanced approach is likely needed.

My basic idea is relatively simple: I put a TextBlock on my page canvas at the position I want it. Then, I start pulling “inlines” (these are the runs within a text block) from my source data, and add them to the canvas. Then, I measure how much space was taken up. If it didn’t all fit on the page, I remove the last inline again, create a new page, and start the process over, until I have all my pages with all the text. Here is the critical segment of code:

var canvas = new Canvas();
canvas.Height = 1056;
canvas.Width = 816;

var contentArea = new TextBlock();
contentArea.TextWrapping = TextWrapping.Wrap;
contentArea.Width = canvas.Width - (96 * 2);
// Note: no explicit Height, so ActualHeight reflects the measured text
Canvas.SetTop(contentArea, 96);
Canvas.SetLeft(contentArea, 96);
canvas.Children.Add(contentArea);

int originalInlineCount = originalText.Inlines.Count;
for (int counter = 0; counter < originalInlineCount; counter++)
{
    var inline = originalText.Inlines[0];
    originalText.Inlines.RemoveAt(0);
    contentArea.Inlines.Add(inline);
    contentArea.Measure(new Size(contentArea.Width, double.MaxValue));
    if (contentArea.ActualHeight > canvas.Height - (96 * 2)) // too large to fit on page!!!
    {
        contentArea.Inlines.Remove(inline); // Take the last inline back out
        originalText.Inlines.Insert(0, inline); // Back into the original source
        break; // This page is full; start the next one
    }
}
This code continues on until the original source of my text (called “originalText” in this example) has no more inlines left. I add page after page to an in-memory List<Canvas>, so when the algorithm goes through, I have all my pages available to me.

This also allows me to do things like go through all my data to create all the pages, and then iterate over the list once again and add more information to each page, such as “Page 1 of X”. There are many such scenarios where you need to take this “two-pass” approach of first creating all the content, and then adding more information to each page, once you know how many pages you have. There are so many useful scenarios in fact, that I would consider this the standard approach.
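Such a second pass could look roughly like this (a sketch assuming the in-memory List&lt;Canvas&gt; of finished pages described above):

```csharp
// Second pass: stamp "Page X of Y" onto each finished page canvas.
for (int pageIndex = 0; pageIndex < pages.Count; pageIndex++)
{
    var pageNumber = new TextBlock
    {
        Text = string.Format("Page {0} of {1}", pageIndex + 1, pages.Count)
    };
    Canvas.SetLeft(pageNumber, 96);                            // left margin
    Canvas.SetTop(pageNumber, pages[pageIndex].Height - 96);   // bottom margin
    pages[pageIndex].Children.Add(pageNumber);
}
```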

Now that we have all the pages, we can simply send them to the printer:

var doc = new PrintDocument();
var pages = GetPagesLikeAbove(); // Pseudo code…
int currentPage = -1;
doc.PrintPage += (sender, e) =>
    {
        currentPage++; // advance to the next page for each PrintPage event
        e.PageVisual = pages[currentPage];
        e.HasMorePages = currentPage + 1 < pages.Count;
    };
doc.Print("Example Document");

This works fine in theory, but there is one more tricky part: In our page generation algorithm above, we assumed letter-size paper. However, once the print job starts, the user might choose a printer with a different paper size. The same goes for margins. You can check these things via the event arguments (“e”) passed to you (there are PageMargins and PrintableArea properties on that object). But what do you really do with that information?

For the margins, a way to go is to set a negative margin on the actual page that is being printed:

var currentVisual = pages[currentPage];
currentVisual.Margin =
    new Thickness(e.PageMargins.Left * -1,
                  e.PageMargins.Top * -1, 0, 0);
e.PageVisual = currentVisual;

This shifts the visual up and to the left to be outside the printable range. However, since our physical page already has margins accounted for, things should be back to “normal” (assuming the margins you have chosen aren’t smaller than the printer can actually handle… a detail you can check by looking at the margin information sent in the event args). If things get worse, such as the paper size being different from what you expect, or the margins being incompatible… well… that’s not good. You have to re-initiate your whole printing algorithm before the first page gets printed, to create an appropriate size and margin setup (which can really throw a lot off).

Print Preview

As you are reading this, you may say “fine… but why even assume certain sizes and margins? Couldn’t I just do all the page generation when the first page prints or when the print job starts?”. And the answer is “yes”. You may have to do that anyway, truth be told. However, I like having a print algorithm that can create the pages ahead of time. This way, I can allow the user to do things such as “print only page 4 and 7”. But even more useful to me, is the ability to show a preview. Since you already have all your pages available as Silverlight objects in memory, you can simply choose to show them on the screen as well. Stick them in a Viewbox, and you even have a thumbnail preview:

// Show 4th page in preview:
previewViewbox.Child = pages[3];

I find that very useful and extremely simple. Here is a dialog I created for Tower48 using this exact approach. Pretty nice for a web app, if you ask me :-)

Print Preview Dialog in Silverlight 4

So all things considered, printing works reasonably well in Silverlight 4. However, you have to handle a lot of things yourself. Creating a bullet-proof pagination algorithm that can handle all kinds of content is difficult. You can find some code for this on places like Codeplex. Check it out. Maybe it will fit your needs. But it would be nice if Microsoft made this a bit easier in future versions of Silverlight…

Posted @ 6:36 AM by Egger, Markus
Comments (725)

Tuesday, June 08, 2010
Evaluating View Model Options

If you are building WPF or Silverlight applications, chances are you are using the MVVM Pattern (see below if you are not familiar with this pattern). So far, so common. Once you look at the details however, there are some interesting aspects to this setup. A recent discussion among my fellow “Advisors” on the Microsoft Prism Advisory Board got me interested in investigating different options to create the “ViewModel” (VM) part of this pattern. After all, there are quite a few different ways to go about this, all with unique (and sometimes only perceived) advantages and disadvantages. I hence set out to create a few test setups of a simple model and view-model combination (a simple Customer example) to test in particular performance and memory consumption. This post presents some of the results of those tests and invites readers to comment on the findings, suggest other options, and perhaps test them on their own (my test setup project can be downloaded here).

What is the MVVM Pattern? Well, I am glad you asked! At heart, it is pretty simple, actually. If you have any sort of XAML-based UI (and really even others as well, although this article does not concern itself with that) and you want to use it to edit data (the typical business application scenario), then you will quickly discover that this data is often not very well suited for binding, since it was not specially designed for UI needs. For instance, you may have a FirstName and a LastName property/field in your data, but you want to bind to the full name. Or you may want to set something visible based on a flag, but the flag is boolean and in WPF/SL you need a property of type Visibility. To make this easier, the idea of a “view model” arose, which is a special version of the “data” (or “model”) that massages the data into a more suitable format (by adding calculated properties, or adding properties with new types, or even consolidating multiple data feeds, and so forth). The result is something that is much easier to use than the raw data. For a more complete discussion of this pattern, check out the explanation in an article on MSDN Magazine by Josh Smith, or on Wikipedia.

Quick Overview

As with many performance test examples, it is always difficult to create a setup that provides data that is meaningful in the real world. I thus had to create something that although somewhat contrived, should still provide some meaningful insight. I settled on a simple Customer scenario. My “model” is a hardcoded list of Customer objects. I am not retrieving them from a database, but I am creating them in-memory based on randomly combined names and such. To create a meaningful sample size, I am creating 100,000 Customer objects as my data source. Retrieving 100,000 customer records is not something you would normally do in your interfaces (at least I hope you wouldn’t... users would have to use a very small font to see them all... ;-)), but I wanted to create a large sample size to get results that are a bit easier to distinguish and also to compensate for the relatively few properties I have in my model. (Even a simple “customer + order information and detail” model could have a significant number of properties). Note that I handle all the customers independently. There would be more efficient ways to handle 100,000 customers (and especially related data such as sales regions that are the same for each customer), but the purpose of this simulation is to pretend these are completely independent view models. Otherwise, the resulting performance data would be less meaningful.

Each customer has a first name and last name property, as well as a company name. Furthermore, there is a flag indicating whether the customer has a set credit limit, and if so, the credit limit is stored in another property. Finally, there is a short list of sales regions the customer may be assigned to. This list is an independent data source (although I put it inside each model for simple binding as is the purpose of MVVM). The customer itself is assigned a sales region ID.

In all the different approaches of the view model, I create the same type of behavior and information:

  1. All properties in the view model need to bind properly.
  2. A FullName property exposes first name + last name.
  3. A SalesRegion property exposes the name of the currently selected sales region, which is only identified by ID in the Customer object.
  4. A HasCreditLimit_Visible property exposes the boolean credit limit flag converted to type Visibility, so it becomes easy to bind the visibility of any element to this property without the need of a converter.
  5. The added properties also need to bind and update properly when one of the underlying data elements changes.
  6. The sales regions list is exposed as a Regions property on every view model.

Note: #1 means that one has to implement INotifyPropertyChanged on the view model in order to get proper binding behavior. POCOs (plain old CLR objects) with standard properties do not notify a bound element of a change in the property, thus not causing the UI element to update properly when the property’s value changes. This unfortunately is a major nuisance in creating anything in WPF/SL that binds, because it means you can’t just use any old data source for binding if there is any chance the source could change (which is almost always the case in business apps… although there are exceptions… in my example, I can easily bind a sales-region drop-down to a standard list of regions, because my sample app never changes the regions once assigned). Long story short: Implementing INPC (INotifyPropertyChanged) is one of the driving forces behind the creation of view models. (Note that my naive customer view model does not implement this interface and hence binding does not work properly… this object is only there for performance comparisons).

Overview of Approaches

I implemented a variety of different view models. There is a naive approach that simply wraps the Customer object and adds the desired properties. This approach is not actually functional, but it serves as a performance yardstick. If you run the sample application however, you will notice that if you select this type of data source and you update fields such as FirstName, the behavior is incorrect in that the FullName does not change (and so forth…).

I then have a few view models that follow a relatively conventional approach. For one, I created a view model that duplicates all the properties found in the Customer class, but it also manually implements INPC so each property properly indicates when it has been updated. (This is an approach I often encounter in the wild). In addition, I added the desired additional properties. I also make sure that when one of the properties another property depends on is updated (such as FirstName for FullName), then not only do I notify of a change in that property, but I call NotifyChange() for the second property as well, thus fulfilling the behavior requirements we have. Note that I load these view models 3 different ways: 1) by manually copying the values from the model to the view-model, 2) by having a reflection-based mapper handle the task for me, and 3) by creating lambda expressions that handle the mapping task for me.

Another approach is to implement a view model based on the DependencyObject class, with each property in the view-model implemented as a dependency property. This theoretically offers some advantages, but it also is extremely tedious as a lot of code has to be written for each property. Plus, there are some significant drawbacks (see below).

Finally, I implemented a number of different view-model objects that are based on C#4’s dynamic features. I have an object that is derived from DynamicObject which implements its own “state bag” to store the property values. A simple approach simply does that and then adds the other desired properties manually. All the original values simply get mapped into the state bag by hand. The advantage of this is that once such an object is in place, one does not have to define properties anymore. Properties can simply be used and will be added on the fly if need be. Plus, the object can automatically handle INPC, so that headache is gone. Oh, and I added the ability to register property dependencies. This means that the object can natively know that when “SalesRegionId” changes, “SalesRegion” changes as well. In addition, one can add interesting conventions. In my example, I can call any property with a “_Visible” suffix, and the dynamic object will automatically attempt to convert the value to a Visibility type. (You could easily add many more such useful conventions).

In addition to the simple dynamic object, I implemented other variations on the theme. One only uses the state bag for new properties but uses the model object directly as the state bag by pulling data out of that object using reflection. An even more advanced version uses a combination of a sophisticated state bag and model container approach with directly integrated access conventions and dependency registration. Finally, I implemented a dynamic object that uses the dependency property system as its state bag in the hopes of the dependency property system being more optimized than simple name-value dictionaries.

The Results in a Nutshell

You can read a more detailed description of the results for each version below (with some additional thoughts of mine added). In short, I would say that each approach is workable performance-wise. I am loading a very large number of objects. Scaled down to the number of view models you are likely to load in real applications, one is tempted to say that any of these approaches should easily be fast enough. (On the other hand, I think we need to get away from this line of thinking in the Microsoft world, as other companies – in particular Apple and Google – are showing us that performance and responsive/fast experiences are what users want to buy these days).

Not surprisingly, the more hand-coding you do, the better the result. However, the dynamic approach is also growing on me for its various benefits (see below). It is slower, but still probably fast enough, and the benefits are pretty interesting. Dependency property approaches are the biggest disappointment in my tests. They are a pain to deal with, and the advantages are mostly theoretic in nature. Some things would be neat, but just aren’t that big a deal. Other things are great in theory, but you won’t be able to really get that benefit due to other overhead. Plus, there simply are a few show-stoppers that make the dependency property approach a non-starter in my opinion.

Here is a quick overview of my results with 100,000 view models: instantiation, then accessing every property on every model, and then running a select statement on the list of 100,000 models. I ran these tests on the PDC Tablet PC Microsoft gave away to all PDC attendees. I figure this is a middle-of-the-road machine; plus, chances are people reading this might have the same machine. I built the test project (which you can download here) in Release configuration and ran the resulting EXE out of the file system and not from within Visual Studio. But as always: Performance numbers are hard to take literally. The most interesting aspect of these numbers is the relative comparison between the different approaches.

                               Load (sec)  Load Mem (KB)  Access (sec)  Access Mem (KB)  Select (sec)  Select Mem (KB)
Model                          0.050       5,862          n/a           n/a              n/a           n/a
Naive ViewModel                0.014       1,953          0.096         0                0.006         32
Manual Static ViewModel        0.030       6,641          0.103         0                0.006         32
Reflection Mapped ViewModel    3.667       6,641          0.093         0                0.006         32
Lambda Mapped ViewModel        0.200       6,641          0.094         0                0.006         32
Simple Dynamic ViewModel       2.014       73,212         0.600         289? (0)         0.071         32
Good Dynamic ViewModel         0.864       46,094         4.443         67               0.322         32
Better Dynamic ViewModel       0.905       46,094         4.270         0                0.335         32
Dependency Object ViewModel    2.163       22,802         0.278         67               0.022         32
Dynamic Dependency ViewModel   2.149       60,180         0.926         73               0.150         32

 Note: It is a lot easier to look at the results in a nicely formatted Excel spreadsheet, which you can download here. It also has quite a bit more detail and precision, and I did a lot of the tests multiple times, which is reflected in the spreadsheet, but not in the table above.

The results in a nutshell are this: Everything you do by hand is fast. Everything you automate is much slower. Reflection is an awful performance killer. Dynamic stuff is a memory hog and slow at the same time. Surprisingly, Dependency Property based solutions perform badly on all accounts (time and memory). That last one may be the biggest surprise in these tests, although I was also surprised how much memory dynamic objects consume. Obviously, combining dependency objects, dynamic objects, and reflection is the “triple-whammy” of bad performance.

However: On the whole, if you load view models one-by-one to edit data, as is the case in most scenarios, all these approaches should work fine performance-wise. And dynamic solutions are starting to grow on me, because of the many benefits they offer. (Note that dynamic view models currently do not work in Silverlight).

For those of you who are interested in a detailed analysis, here is detail on each and every row in the above table:

The Detailed Results

The Model

The Model isn’t really part of the tests. The Model is the underlying data the view models all use. I am creating it in memory, and it is only meant as a simulated data source. The exact data in the model changes with every run, but each subsequently tested view model uses the same model data. I thought it might be interesting to randomize the data a bit and run the tests repeatedly, just to see if anything changes (it really didn’t much).

The interesting aspects of the Model are:

  • It takes just under 5/100ths of a second to create 100,000 model objects in memory
  • Consumed memory for all my tests is just under (or at about) 6MB. This is an interesting number, as view models have to use or duplicate that data.

I did not perform any tests on the Model in terms of running property access or select statements. They should probably be very similar to accessing the manually mapped static view model (see below).

The Naive (Faulty) ViewModel

This view model approach is also included for comparison only. It is a view model that simply uses the underlying model for pass-through access. It doesn’t implement INotifyPropertyChanged (INPC) and thus does not function properly. This view model only adds the 3 properties that aren’t found in the model (FullName, SalesRegion, and HasCreditLimit_Visible) and also exposes the Regions property to provide easy access to sales regions information. All other properties on this view model simply pass access on to the actual model class.

This view model represents faulty view models I often see in the wild, created by developers who do not understand INPC. This view model could only be used for read-only scenarios, and even for such scenarios, there are better ways to do this. However, this view model gives us interesting insight in raw performance we could theoretically get out of a view model if we wouldn’t have to worry about things such as INPC and inter-property dependencies.

Here are the most interesting performance aspects:

  • At just over 1/100th of a second, this view model loads the fastest of all view models, which is not surprising, since it only stores a reference to 2 other objects – the model and the regions – on construction.
  • Consumes the least amount of memory of any model (just under 2MB), although 2MB of just storing object references is a bit on the hefty side. Clearly, more memory is allocated than just storing 200,000 object references (2 for each of the 100,000 view model instances).
  • Accessing all properties is fast. Once again, among the fastest scenarios consistently. This tells us that accessing a property that accesses another object’s property (the view model property accessing the model property) is about as fast as accessing the property directly (as is the case in the 3 mapped examples below). This may come as a bit of a surprise. Not earth-shaking, but it made me raise an eye-brow (both, actually, since I can’t do the single-brow-raise ;-)).
  • No memory is consumed accessing all the properties (which is true for almost all scenarios I tested)
  • Running a LINQ select statement over this view model was also among the fastest with minimal difference to the view models that held their property values directly (which is about as surprising as the property access result above).
  • A small amount of memory is allocated (32KB) to run the select. This result is consistent across all select tests.

So all in all, these would be desirable results, except for the fact that this view model does not work :-).

Manual Static ViewModel

This view model is a hand-coded view model that duplicates all the properties from the model and then adds the 3 additional properties as well as a reference to the sales regions information. This view model also manually implements INPC on all properties, and manually notifies secondary properties of changes (such as FullName changed when FirstName changes). In the constructor of this model, I hand-coded copying each property value from the model into the corresponding property on the view model (a tedious task in real-world view models, which generally have a lot more properties than I had here).
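The “secondary notification” described above looks roughly like this (a sketch with illustrative names; it assumes a NotifyChanged() helper that raises PropertyChanged, as in the INPC example earlier):

```csharp
// When FirstName changes, FullName effectively changes too, so both must
// be announced, or any element bound to FullName goes stale.
private string _firstName;
private string _lastName;

public string FirstName
{
    get { return _firstName; }
    set
    {
        _firstName = value;
        NotifyChanged("FirstName");
        NotifyChanged("FullName"); // secondary (dependent) notification
    }
}

public string FullName
{
    get { return _firstName + " " + _lastName; }
}
```

Multiply this by every property, plus the constructor that copies each model value across, and you can see why this approach is labor-intensive.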

This view model represents a classic view model that’s fully functional. It also represents the view model that is amongst the most labor-intensive to implement (only trumped in tedium by the dependency property view model) as a lot of hand-coding of unskilled code is required. On the upside, this approach performs well, both in time and memory consumed.

Here are the most important characteristics:

  • Mapping property values into this view model takes about 3/100ths of a second, and thus three times as long as the faulty view model, which doesn’t do any mapping at all. A 3-fold increase is a lot, but on the other hand, considering how much work is done here, this performance is pretty good and certainly acceptable for most uses.
  • This view model consumes just over 6.5MB of memory and is thus only about 10% more memory hungry than the model itself (a fact that is probably explained by storing a reference to the Regions information). This result is very much in line with what I would have expected.
  • Accessing every single property in every single instance of this view model is marginally slower than the naive implementation above. Not by much, but still, it is consistently a few % slower. I am surprised by this, as I would have imagined accessing a property value right within the object should be slower than accessing the property value on another object. It should involve twice as many steps (or only half as many in this implementation). Nevertheless, repeated tests always made this come out slower. I can only imagine it has to do with the overall increased memory consumption of the app. (It should not have to do with implementing INPC, since I am not testing set operations here).
  • No memory is consumed accessing all the properties.
  • Running a LINQ select over this view model is marginally faster than the naive implementation. This is directly contradicting the property access result. I would imagine that being faster makes more sense. In any event: The difference is very small. Maybe I should stop worrying about it :-)
  • The select allocates the obligatory 32KB of memory.

All in all, this view model implementation works very well and is what other implementations are measured by. However, implementing this guy by hand is an error-prone pain in the rear, which is really why we are investigating other options in the first place.

Reflection Mapped ViewModel

This is really the same view model as the previous, manually mapped one. The only difference is that instead of mapping each value in the constructor, I am using reflection to automatically map all the properties that can be found in both objects. I might be able to optimize this better, especially for objects of the same type, but that was not the goal of this performance test. Either way, the conclusion is that this guy moves like molasses! Clocking in at 3.7 seconds of load time, this guy is far more than 100 times slower than the manually mapped version. Ouch! (Of course, if all you do is load one of these at a time, that may still be OK for you.)
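A reflection-based mapper of this kind typically looks something like the following sketch (not necessarily the exact code from the sample app): copy every readable source property into a same-named writable target property.

```csharp
using System.Reflection;

// Naive reflection mapper: for each property on the source object, find a
// property of the same name on the target and copy the value across.
public static class ReflectionMapper
{
    public static void Map(object source, object target)
    {
        var targetType = target.GetType();
        foreach (PropertyInfo sourceProp in source.GetType().GetProperties())
        {
            if (!sourceProp.CanRead) continue;
            PropertyInfo targetProp = targetType.GetProperty(sourceProp.Name);
            if (targetProp != null && targetProp.CanWrite)
                targetProp.SetValue(target, sourceProp.GetValue(source, null), null);
        }
    }
}
```

The repeated GetProperties()/GetProperty() lookups on every single instantiation are exactly where the 100x slowdown comes from; caching the PropertyInfo pairs per type pair would help, but as noted above, that optimization was out of scope for this test.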

So the most important characteristic of this guy:

  • Loads by far the slowest of all tested view model approaches (3.7 seconds)
  • All other performance figures are identical to the manually mapped view model above (which makes sense, since everything other than the constructor is the same).

So this guy is very slow, but we also eliminated all the code it took to populate the object. This is a significant benefit that may be more important than load-performance in some scenarios. Overall, I am disappointed with this result. I didn’t think this would be quite so slow.

Lambda Mapped ViewModel

This is another variation on the same view model, but instead of using reflection to map the models, I use a list of lambda expressions. The result is a lot faster than reflection, which makes sense, since lambda expressions are just a list of code segments that get executed full blast, one after another; the only performance penalty we pay over a completely hand-coded entity is iterating through the list that contains these expressions.

The result is an object that loads 6-7 times slower than the hand-coded approach. So it is still quite fast, but when you think about it, this is also quite a bit slower than the manual version. A difference that is especially significant when you consider that I’d be hard-pressed to tell you what the benefit is of this over the manual version. You end up writing just as much code, but it is more difficult to read. Of course, you do gain the advantage that these maps could be defined in a reusable fashion, in case that provides a benefit to your scenario (such as when you have a single model that is used for many very similar view models).
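Such a reusable map definition might look like this (a sketch; the list name and property set are assumptions, not the sample’s actual code):

```csharp
using System;
using System.Collections.Generic;

// A reusable list of mapping delegates from model to view model.
// Each entry copies one property; the constructor just iterates the list.
public static class CustomerMaps
{
    public static readonly List<Action<Customer, CustomerViewModel>> Maps =
        new List<Action<Customer, CustomerViewModel>>
        {
            (m, vm) => vm.FirstName = m.FirstName,
            (m, vm) => vm.LastName = m.LastName,
            (m, vm) => vm.CompanyName = m.CompanyName
            // ...one entry per mapped property...
        };
}

// In the view model constructor:
// foreach (var map in CustomerMaps.Maps) map(customer, this);
```

As the text says, you write roughly as much code as with manual mapping; the only real win is that the map list is defined once and can be shared across view models that map the same model.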

Here is the skinny:

  • Loads pretty fast at 0.2 seconds, but is much slower than a hand-coded version and provides little benefit
  • All other performance figures are identical to the manually mapped view model above.

So you might as well forget about this one. Not a lot of benefit. Slower. Just as much code to write. Perhaps this is interesting if you want to combine reflection mapping with lambdas so you can do more than a 1:1 mapping. All in all, there is not much that appeals to me here.

Simple Dynamic ViewModel

Since C#4.0, we can use dynamic language features, and I have done so here by creating a view model that inherits from DynamicObject. This gives me some interesting options. For one, you do not have to define properties, but you can simply assign them. I do so in the constructor of the model:

public SimpleDynamicCustomerViewModel(Customer customer, IEnumerable<SalesRegion> regions)
{
    // ...
    self.FirstName = customer.FirstName;
    self.LastName = customer.LastName;
    // ...
}

Note that the “self” pointer acts just like “this” would, except “self” is a member I created in the base class and it always exposes “this” as type dynamic, which gives me easy access to dynamic features. The name “self” is one of the 2 generally accepted ways to refer to the current object (languages usually use either “this” or “self”… other versions like VB’s “Me” are not as widely used), which is why I chose that name.

Anyway: When this code runs, the system sees that these properties do not actually exist and then resorts to calling a TrySetMember() method, which I overrode to accept the new value and store it in an internal state bag (implemented here as a Dictionary<string, object>). What is very nice about this approach is that I can automatically handle INPC in the TrySetMember() method, and simply notify for a change of whatever the name of the desired object was. Furthermore, I can keep a list of dependent properties and automatically notify interested subscribers of a change of those properties as well. This allows me to register all dependent properties (such as “FullName” having a dependency on “FirstName” and “LastName”) in the constructor of the view model. Using this approach, the nastiness of INPC is handled once and for all, and not just that, but the dependent-property feature is pretty useful and cool. There is a lot to like here.
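Put together, the mechanics described above look roughly like this (a sketch under stated assumptions: the state bag is a Dictionary&lt;string, object&gt;, and dependencies are registered in a second dictionary; member names are illustrative):

```csharp
using System.Collections.Generic;
using System.ComponentModel;
using System.Dynamic;

public class DynamicViewModelBase : DynamicObject, INotifyPropertyChanged
{
    private readonly Dictionary<string, object> _stateBag =
        new Dictionary<string, object>();
    private readonly Dictionary<string, List<string>> _dependencies =
        new Dictionary<string, List<string>>();

    public event PropertyChangedEventHandler PropertyChanged;

    // Exposes "this" as dynamic, so derived constructors can assign
    // properties that were never declared (see "self" discussion above).
    protected dynamic self { get { return this; } }

    // e.g. RegisterDependency("FirstName", "FullName")
    protected void RegisterDependency(string source, string dependent)
    {
        if (!_dependencies.ContainsKey(source))
            _dependencies[source] = new List<string>();
        _dependencies[source].Add(dependent);
    }

    public override bool TrySetMember(SetMemberBinder binder, object value)
    {
        _stateBag[binder.Name] = value;
        NotifyChanged(binder.Name); // INPC handled once, for all properties

        List<string> dependents;
        if (_dependencies.TryGetValue(binder.Name, out dependents))
            foreach (var dependent in dependents)
                NotifyChanged(dependent); // notify dependent properties too
        return true;
    }

    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        return _stateBag.TryGetValue(binder.Name, out result);
    }

    private void NotifyChanged(string name)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(name));
    }
}
```

This is the "handle INPC once and for all" payoff: no individual property ever raises PropertyChanged itself, because every dynamic set funnels through TrySetMember().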

But there also are things that are not as likable. For one, using this approach, I still have to map the properties of the model into the state bag of my dynamic view model. So that is a lot of code to write. But worse: This guy is a performance-slouch! It is slow, and it consumes a very large amount of memory. 73MB to encapsulate what was originally 6MB! Not good. I understand that there is overhead in the dictionary (which stores not just the property value, but also its name in memory) and there is the dictionary with the dependency maps. But still, I do not understand why this consumes such an immense amount of memory. Something odd is going on here that warrants further investigation.

Here’s the overview of the performance characteristics:

  • Load time is pretty bad. 2 seconds to load the 100,000 objects. For a version that basically manually maps the members into a state bag, that is a lot of time. (Although once again, if you only load a handful of these guys, or even a few thousand, you are probably perfectly fine).
  • Memory consumption is insane! 73MB to manage 6MB of actual data?!? “Ain’t noth’n good coming from that!” (In general, it seems that instances of the Dictionary type are extremely memory intensive. I could probably mess with the object’s capacity and such, but I doubt a lot of people would do that in the real world, so I decided against such optimizations for this test. And frankly, I would not expect it to make a huge difference.)
  • Accessing every property on every instance is not lightning fast, but it isn’t bad either. About 0.6 seconds to go through the entire test. So it’s about 6 times slower than static property access. I find that quite acceptable for a dynamic system, and you shouldn’t have a problem with that out in the wild. (Note that you will access properties much more often than you instantiate the object… especially in binding scenarios).
  • Memory consumption for property access is odd. I generally see no memory used, but during the first iteration, I sometimes see memory being allocated (in the neighborhood of 300KB). It is an interesting phenomenon, but since we just blew 73MB on creating the object, I am not too worried about using 300KB during first time property access. It’s a rounding error by comparison.
  • The LINQ select test runs generally around 7 or 8 hundredths of a second. That’s about 15 times slower than a static object. Sounds like a lot, but it is still very fast. We are selecting 10,000+ objects from a set of 100,000 in 0.07 seconds. That is practically instantaneous and should not be a problem for any app. (If that is where your bottleneck is, then dynamic is not the way to go for you anyway…)
  • The select test usually also allocates the usual 32KB of memory, although there also sometimes are slight upward spikes around the 45KB mark. Once again, not something to be concerned about, but this sort of thing just tickles my curiosity.

So speed and memory management are not strong points of this setup. However, there are some very, very interesting benefits here. Despite the performance issues, I am very tempted by this approach. Never having to worry about INPC again, and even being able to define related properties, is very cool. And there are more interesting features dangling there as a carrot, which we will explore below. In short: You wouldn’t pick this approach if you need performance and can’t waste memory, but at least there are huge upsides that may make it all worth it.

Note: Another downside of all dynamic view model approaches is that you can’t just look at the object to know what properties it has, since the properties are not explicitly defined. So when you need to write the binding expressions in your XAML view, you just need to know what is there, by looking in the constructor, or – my preferred option – by using a custom debug tool (which you have to create yourself… but it is easy) that gives you a glimpse of the state bag.

Note: Also consider that dynamic view models currently only work in WPF, as Silverlight currently (v4) does not support binding to dynamic objects.

Good Dynamic ViewModel

The dynamic view model described above is a very simple implementation of a dynamic object. The concept however offers a number of other potential benefits. In this second dynamic approach, I added 2 interesting features: First, I am not using a dictionary as the state bag anymore, but I am using the actual model object as the state bag. Second, I am allowing for convention based property access. Here’s what this means in detail:

Instead of using a Dictionary<string, object>, I now simply pass the original model object to the constructor of my abstract view model class. When a property then is accessed that is not explicitly defined on the view model, TryGetMember() kicks in and looks at the original model object using reflection, to find out whether it has the desired property. If so, it simply accesses it. (This also works for TrySetMember(), allowing write access to the property). The advantage of this approach is that we now eliminate all mapping, since all the properties of the original model object are always accessible automatically. Furthermore, when a value is set, the TrySetMember() method does all the INPC stuff I described in the simple dynamic view model. Thus this approach simply decorates the original object fully automatically, with automatic INPC and also with support for related property notification. Very nice. Unfortunately, also very slow. :-(

The second feature I added is the convention based property access. I am only supporting a single convention here (a “_Visible” suffix), but one could take this idea quite far in very useful ways. The basic idea is that if someone accesses any property and adds “_Visible” to the name, special access happens and TryGetMember() tries to convert the original property to type “Visibility”. In our example, our model has a “HasCreditLimit” property, thus, if someone binds to “HasCreditLimit_Visible”, even though that property doesn’t really exist, the dynamic object will take the boolean value and turn it into a WPF Visibility type. (Write access works as well). I *love* this feature. I can add several more of these conventions and thus have 95% of all the properties I will ever need in my view model automatically covered by the features this dynamic view model object offers out of the box. This is extremely cool and saves a ton of time, reduces tedium, and eliminates a potential source of errors.
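The two features described above meet in the TryGetMember() override, which might be sketched like this (assumptions: a `_model` field holds the wrapped Customer object, and only the single “_Visible” convention is supported):

```csharp
using System.Windows; // Visibility

public override bool TryGetMember(GetMemberBinder binder, out object result)
{
    string name = binder.Name;

    // Convention: "Xyz_Visible" converts the bool property "Xyz" on the
    // model into a WPF Visibility value, so no value converter is needed.
    if (name.EndsWith("_Visible"))
    {
        string sourceName = name.Substring(0, name.Length - "_Visible".Length);
        var sourceProp = _model.GetType().GetProperty(sourceName);
        if (sourceProp != null && sourceProp.PropertyType == typeof(bool))
        {
            bool flag = (bool)sourceProp.GetValue(_model, null);
            result = flag ? Visibility.Visible : Visibility.Collapsed;
            return true;
        }
    }

    // Default: use the model object itself as the state bag by passing
    // property access through to it via reflection.
    var modelProp = _model.GetType().GetProperty(name);
    if (modelProp != null)
    {
        result = modelProp.GetValue(_model, null);
        return true;
    }

    result = null;
    return false;
}
```

The per-access GetProperty() reflection is also what makes this version so slow to read from; caching the PropertyInfo objects statically per model type is the kind of optimization hinted at below.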

Well, at least it saves time writing code. It certainly doesn’t save time once the code runs, because this is slooooow to access properties. Drastically slower than any other option I evaluated. But it takes less memory than other dynamic objects, and it loads fast, since no mapping has to be done on load. (Although considering that fact, it is surprising that it takes as long as it does… probably due to storing property relation information, which could be optimized by putting those values into a static definition, which I didn’t do for this test.)

Here are the numbers:

  • It takes 0.865 seconds to load this version. 2 1/2 times faster than the first dynamic version.
  • It consumes a lot less memory than the first dynamic version (46MB vs. 73MB) but I am still floored about this massive consumption. I removed the property mapping data from this test, and it turns out it accounts for most of the allocated memory, so if this was kept in a static instance, memory consumption could be brought down to almost nothing, assuming you instantiate the same view model a lot.
  • Accessing every single property in every single instance is extremely slow. Around 4.2 to 4.4 seconds for the 100,000 objects. More than 40 times slower than our static view model implementation. This is an operation that is done a lot, so this isn’t good. (On the other hand, once again, this is probably easily fast enough for most view model uses).
  • Generally, no memory is consumed when accessing these properties, although I have seen odd random memory allocations (up to 70KB) for this operation. Nothing to worry about, but it seems to be clear that anything dynamic in C#/.NET seems to have this random minor memory allocation characteristic.
  • Since property access is slow, one would expect the select test to be slow too, and that is exactly the case. With 0.3 seconds on that test, it is about 60 times slower than the hand-coded view model.
  • For the most part, the select statement seems to allocate the usual 32KB of memory, although on occasion, consumption goes slightly higher.

All in all, performance is not great, but memory consumption could probably be optimized to a point where it was very good. The benefits of this approach are plain awesome, and if you are only instantiating a handful of objects for editing, or even a few hundred or thousand objects for a list, then this approach probably works very well and will save you a ton of time. If you have the need to have a lot of objects in memory on the other hand, and are binding all properties, this approach is not for you. (But then I would ask why you would really keep 100,000 objects in memory in the first place… especially in distributed scenarios, just getting the data from the database is going to be a serious drag, making the performance overhead of the dynamic object insignificant).

Note: The same property discoverability issue as with the simple dynamic object exists here.

Better Dynamic ViewModel

This is yet another improvement on the previous approach. In fact, I have a few ideas that I may continue to explore and add to my description here. The idea here is to go back to a state-bag approach and combine the first and the second view model approach. The state bag however is not just a simple Dictionary<string, object> but the value element is a specialized StateBagValue object, which encapsulates dependency information and potentially a lot more. For instance, this state bag could be used to cache reflection information. This would negatively impact memory consumption over time, but improve performance drastically, especially in scenarios where one accesses property values repeatedly (as in binding scenarios, which are obviously very common in view models, since that is why we build view models in the first place).

I have to spend a bit more time on this approach and will then publish my findings here. You can already take a look at the code I have in the current example. Current performance characteristics are very similar to the good dynamic view model described above.

Dependency Object ViewModel

This approach is pretty interesting. The idea here is to build view models entirely out of dependency objects with dependency properties. In theory, this approach has multiple advantages. For one, the dependency property system is highly optimized, as it was originally built for WPF interfaces that may have thousands of objects with lots and lots of properties. So the idea behind this view model approach is that if view models are potentially large lists of objects with lots of properties, then the same benefits should be useful here. Furthermore, dependency properties offer automatic change notification (so we do not have to implement INPC). Also – and this is unique to this approach – dependency properties are great for binding. You could for instance animate properties in a view model (and I can think of multiple uses for that) and you could also theoretically bind properties to each other, thus creating dependencies between properties.

In reality, none of this really is all that great. I hand-coded a view model with all dependency properties. This in concept is similar to the hand-coded static view model (see above), but using dependency properties instead of regular properties. Let me tell you: This was a pain! Lots of code to write, and most people would probably have a hard time telling what the code really is/does. “Close to business values coding”, this is not!
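For a sense of the tedium: this is what a single view model property looks like in the dependency property approach (the standard WPF registration pattern; class name illustrative). Now imagine writing this for every property on the customer.

```csharp
using System.Windows;

public class DependencyCustomerViewModel : DependencyObject
{
    // Roughly ten lines of boilerplate per property, and little of it
    // reads like business code.
    public static readonly DependencyProperty FirstNameProperty =
        DependencyProperty.Register("FirstName", typeof(string),
            typeof(DependencyCustomerViewModel),
            new PropertyMetadata(string.Empty));

    public string FirstName
    {
        get { return (string)GetValue(FirstNameProperty); }
        set { SetValue(FirstNameProperty, value); }
    }
}
```
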

Furthermore, I would have expected object instantiation on this to be fast, but it was not! 2.2 seconds to instantiate all the objects puts it at second slowest. Only the reflection-mapped static view model was slower. In fact, this is more than twice as time consuming as the good dynamic versions, and slightly slower than the most naive dynamic approach. It is roughly 70 times slower than the standard hand-crafted view model. I am really not entirely sure why this would be. I understand that there is extra work that has to be done for dependency properties (although my example uses the dependency property system in such a simple way that I am surprised there is much overhead), but I thought most of it would be handled statically, and thus loading 100,000 objects of the same type should be fast. Well, it isn’t!

Also, this guy gobbles up massive amounts of memory. 23MB to store 6MB of data. This one really floored me and was probably the biggest surprise in all my tests. I always looked at dependency properties as a highly memory-optimized way to store property values, but I guess that benefit only kicks in when properties are set to their default value. Since very few properties in my view model are set to their default value (what would be the default value of a FirstName property other than an empty string? And how many names do you store in your database where the first name is really empty? Well, in my example, none), we do not get that benefit. Still, I am surprised that the required memory is 23MB and not 6MB. Frankly, this is useless for our purposes, since it doesn’t provide the desired benefit at all.

Also, the ability to bind individual properties together is something I have never found useful in any real-world scenario. After all, you would probably have to write value converters and all kinds of other plumbing (to bind HasCreditLimit to HasCreditLimit_Visible, for instance). There are simpler ways to do this; plus, if I wanted to write value converters, I could just use those in my views. Nope, this really isn’t all that useful.

There also is some speculation about dependency properties providing an advantage in binding to the UI. Frankly, I have no idea how I would test binding performance reliably. (Besides, I have never seen an app that was slow due to binding to POCO properties.) I am under the impression that this should only make a difference if the object and property one binds *to* is a dependency property, as the binder couldn’t possibly know that the set/get on the source object is backed by a dependency property. Maybe I am missing something here, but I would not think this should make a difference. If someone knows more about this, I would love for you to post a comment.

So really, the only nice-to-have is automatic change notification, but frankly, with the amount of code one has to write for dependency properties, it would be easier to implement INotifyPropertyChanged (INPC) by hand.
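For comparison, here is a minimal sketch of the hand-implemented INotifyPropertyChanged version (again, the class and property names are illustrative only, not from my test code):

```csharp
using System;
using System.ComponentModel;

// A minimal hand-rolled INotifyPropertyChanged view model, for comparison.
// Names are illustrative; this is a sketch, not my actual benchmark code.
public class CustomerViewModel : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    private string _firstName = string.Empty;

    public string FirstName
    {
        get { return _firstName; }
        set
        {
            if (_firstName == value) return; // no notification if nothing changed
            _firstName = value;
            OnPropertyChanged("FirstName");
        }
    }

    protected void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}
```

This is still boilerplate, but it is considerably less code per property than a registered dependency property with a CLR wrapper, and it carries no thread affinity.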

There also is a complete show-stopper here, IMO: dependency properties can only be accessed from the thread that created them (usually the UI thread… otherwise, things get REALLY complex). This means that you can’t update your view model from a background thread, which is something a lot of modern apps have to do at this point; at the very least, you shouldn’t lock yourself out from doing it in the future. Right there, this approach becomes useless to me.

If you are still interested, here is the detailed performance information:

  • Loading is slow at 2.2 seconds…
  • …and memory intensive at 23MB.
  • Access on the other hand is fast. I was able to access all properties at around 0.26 seconds, which is only about 2 1/2 times slower than the manually coded POCO view model. Still, since the memory benefit isn’t there, what is the point in writing the extra code and taking the performance hit?
  • Practically no memory is accessed for the property access (although I did witness a small allocation, around 70KB, on occasion during the first run).
  • Select is also fast at around 0.022 seconds. This is still about 4 times slower than POCO selects though. No point in taking the hit.
  • Memory allocation for the select is the usual 32KB.

Yeah, no reason to do this. Don’t even try.

Dynamic Dependency ViewModel

OK, so the dependency object test didn’t work out so well, but I wanted to try this anyway: What happens if I create a dynamic object but use the dependency property system as its state store? So instead of going to a dictionary, TryGetMember() and TrySetMember() could go to a dependency object and register and use dependency properties on it. It actually works perfectly fine. But it is also slow. This approach does eliminate the need to hand-code dependency properties, so it takes away that pain point. But it also means that there is a single state-bag class that can register a property of a given name only once, which in turn means the property type has to be “object” to avoid conflicts between different dynamic object types. The only other way around that I can think of is to create a new dependency object class for each dynamic object type you create, which would result in nasty code that people would probably find extremely confusing.
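For the curious, here is a rough sketch of the idea. The names are hypothetical and my actual test code differs in the details, but it shows the shared state bag and the “object”-typed registration described above:

```csharp
using System.Collections.Generic;
using System.Dynamic;
using System.Windows;

// Hypothetical sketch: a DynamicObject whose TryGetMember()/TrySetMember()
// store values via an inner DependencyObject. Requires WPF (WindowsBase).
public class DynamicDependencyViewModel : DynamicObject
{
    // One shared state-bag class for all dynamic types...
    private class StateBag : DependencyObject { }

    private readonly StateBag _bag = new StateBag();

    // ...so each property name can be registered only once, as typeof(object).
    private static readonly Dictionary<string, DependencyProperty> _properties =
        new Dictionary<string, DependencyProperty>();

    private static DependencyProperty GetProperty(string name)
    {
        DependencyProperty prop;
        if (!_properties.TryGetValue(name, out prop))
        {
            prop = DependencyProperty.Register(name, typeof(object), typeof(StateBag));
            _properties[name] = prop;
        }
        return prop;
    }

    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        result = _bag.GetValue(GetProperty(binder.Name));
        return true;
    }

    public override bool TrySetMember(SetMemberBinder binder, object value)
    {
        _bag.SetValue(GetProperty(binder.Name), value);
        return true;
    }
}
```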

I am not even going to bore you with more details. This just doesn’t work very well at all. Here are the performance numbers:

  • Loading is about the same as with the other dependency object approach (2.15 seconds)
  • Memory consumption is through the roof! Not quite as bad as the 73MB the simple dynamic object approach took, but 60MB to load. Almost 3 times as much as the simple dependency object approach. (Once again, keeping that list of related properties in each instance seems to be the difference here… so I could optimize that by making that static…)
  • Property access performance isn’t that great. Just under a second to run the access test. That’s 4 times slower than the plain dependency property view model. I guess for dynamic access that increase is OK though. It is about 10 times slower than the hand-coded POCO view model. All in all, this makes for the second slowest access time. Only the complex dynamic approach is slower.
  • Just like in the previous example, usually no memory is allocated for the property access test. Sometimes we get the ominous 70KB allocation though.
  • As we’d expect, the select statement was slower as well. 0.15 seconds to run the select test makes it the second slowest contender there also.
  • Memory allocation is the usual 32KB, although I have seen slightly higher allocations, which seems to be par for the course on dynamic objects.

Anyway: Don’t do this. This just doesn’t work as I had speculated (not too surprising, after the simple dependency object view model performed so badly…).

Posted @ 12:17 AM by Egger, Markus
Comments (509)

Thursday, April 15, 2010
Slides and Samples from my DevConnections 2010 (Visual Studio 2010 and Silverlight 4 Launch) Presentations

You can now download my slides from DevConnections 2010 in Las Vegas (the Visual Studio 2010 and Silverlight 4 launch event) here. This includes the slides for all 3 of my sessions (Graphics Design Lesson for Developers, Efficient UI Design, and Polished Interfaces with Blend).

Also, you can download the samples (Silverlight and WPF).

You may also want to check out some of my recent posts from this Spring’s State of .NET Events, as well as my recent post about my presentation at Houston D2SIG.

Furthermore, check out some of my older posts with some videos about the Silverlight example.

Posted @ 1:41 AM by Egger, Markus
Comments (120)

Sunday, April 11, 2010
A Real-World DynamicObject Example in C#

The other day I had to solve an interesting problem, and it turned out that C# 4.0’s dynamic features came in very handy. In fact, I thought the way we used them was so interesting that I wanted to share it with you.

The basic situation was this: I was tasked with building a UI editor that is part of a product we are currently building. It allows the user to create custom forms and UIs and such. The fields that go on the forms (as well as all the labels and other elements) are data bound. For instance, you could drop a first-name field on a UI, and that would result in a label and a textbox. The textbox is data bound to the data element that represents the current record’s first-name field. The label itself is also data bound, so the caption of the label is driven by metadata and can be translated or generally set by the user.

The trouble was that when a UI is in design mode in the editor, the data context may or may not be available (more likely not). Now this is a real bummer, because a label with a data bound caption that doesn’t find its data source has an empty caption, which means it is invisible. As you can imagine, the design experience was sub-optimal.

So how can we provide some sort of data context that works no matter what? We need a context that not only can show something useful to the user, but can also satisfy just about any binding expression. For instance, let’s say we have the following element on a form:

<TextBlock Text="{Binding Label_FirstName}" />

This requires that the current data element has a Label_FirstName property the binding can attach to. And since this expression is user-definable, that would mean you’d need an object that has any and every conceivable property name. That’s not going to happen.

What we did instead was derive a new class from the new DynamicObject class. This is a special class that behaves in a dynamic way. Most importantly for our scenario, it has a TryGetMember() method. This method gets called every time someone tries to access a member on the object. For instance, if someone accesses x.HelloWorld, instead of going straight to a HelloWorld property, the TryGetMember() method gets called, and one of its parameters indicates the name of the desired member (“HelloWorld” in this case). We can use this method to take control of the operation and return a value, rather than letting the operation pass to a real member (which wouldn’t exist in our case).

Here is a simplified version of the example we are using:

public class BindingFaker : DynamicObject
{
    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        var nameParts = binder.Name.Split('_');
        if (nameParts.Length == 3) result = nameParts[1];
        else if (nameParts.Length == 2) result = nameParts[1];
        else if (nameParts.Length == 1) result = nameParts[0];
        else result = binder.Name;

        result = "{" + result + "}";

        return true;
    }
}
As you can see, the binder parameter provides information about which member is being accessed. In our case, we take over all operations and simply check the member name. We apply a little bit of logic to it and then echo it back. (For instance, a member name of “Label_FirstName” echoes back “{FirstName}”.) This way, we always return some meaningful value, no matter what the binding expression tries to access. (Note: Our real-world implementation is a little more sophisticated, since sometimes metadata may be available, and if so, we use it.)

So now we can simply always attach this object as the DataContext of the object being designed, and the binding expressions will always go through the TryGetMember() method, which returns whatever we want it to return, so the user sees something meaningful.
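To make the behavior concrete, here is the simplified class again (repeated so the snippet is self-contained) together with a hypothetical usage example; designSurface is an assumed element name:

```csharp
using System;
using System.Dynamic;

// The simplified BindingFaker from above, repeated for completeness.
public class BindingFaker : DynamicObject
{
    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        var nameParts = binder.Name.Split('_');
        if (nameParts.Length == 3) result = nameParts[1];
        else if (nameParts.Length == 2) result = nameParts[1];
        else if (nameParts.Length == 1) result = nameParts[0];
        else result = binder.Name;

        result = "{" + result + "}";
        return true;
    }
}

public static class Demo
{
    public static void Main()
    {
        dynamic faker = new BindingFaker();

        // Any member access goes through TryGetMember() and echoes a placeholder:
        Console.WriteLine(faker.Label_FirstName); // prints "{FirstName}"
        Console.WriteLine(faker.Caption);         // prints "{Caption}"

        // In the designer, it would simply be attached as the data context
        // (designSurface is a hypothetical element name):
        // designSurface.DataContext = faker;
    }
}
```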

This has worked out very well. Doing this without dynamic features would have been very difficult. Using DynamicObject, on the other hand, was trivial and only took a few lines of code. Cool! :-)

Posted @ 9:31 PM by Egger, Markus
Comments (176)

Sunday, April 11, 2010
April 2010 D2SIG Slide Deck and Samples

The samples and slide deck from my Natural User Interface (NUI) and multi-touch presentation from last Tuesday at Houston’s D2SIG user group meeting are now available for download. You can get the slide deck here, and the samples here.

Also, I recently blogged about our series of Spring 2010 State of .NET events. That blog post has links to all those downloads, as well as a number of videos and other things. Check it out here.

Posted @ 9:10 PM by Egger, Markus
Comments (113)

Wednesday, March 31, 2010
State of .NET – Spring 2010

Yesterday we presented our first State of .NET event of the year, starting out in Dallas (with an event in Houston following tomorrow). This time, the event focuses on the following topics:

  • Visual Studio 2010
    • General improvements
    • Code productivity
    • Languages
    • ASP.NET Web Forms 4
    • ASP.NET MVC 2
    • jQuery
    • WPF 4
  • Expression Studio (4)
  • Silverlight 4
  • NUI and Multi-Touch (Windows 7)
  • Windows Phone 7 Series
  • Windows Azure

And a bunch of other details. You can download the slide deck I used in the presentation here. You can also download the WPF4 and Silverlight multi-touch examples here.

Also, Shawn Weisfeld (from INETA) was nice enough to do a high quality recording of the event. He wrote about it on his blog and embedded the videos right there. You can see his blog post.

For those of you who are interested in the Windows Phone 7 videos, visit and check out the multi-media section. Also, check out for the Mix 2010 keynote video including Windows Phone 7 Series stuff.

Posted @ 12:10 PM by Egger, Markus
Comments (136)

Friday, February 12, 2010
DevConnections 2009 (Las Vegas) Slide Decks

Here is some older stuff I never had the chance to share before. This ZIP file has my slide decks for all my sessions at DevConnections 2009 in Las Vegas. This includes the following sessions:

  • Graphics Design for Developers
  • Interface Design
  • iPhone Development for .NET Developers
  • REST and Data in Azure
  • All day WPF Business Application pre-conference session.


Posted @ 6:32 PM by Egger, Markus
Comments (442)

Friday, February 12, 2010
Presentation Materials for my C# 4.0 Dynamic Presentation at HDNUG

Last night I did a presentation at the Houston .NET User Group (HDNUG) on .NET Language developments, with a focus on dynamic languages, and C# 4.0’s dynamic features in particular.

You can now download the slide deck I used for that presentation (here), and the samples (here). Enjoy!

Posted @ 6:11 PM by Egger, Markus
Comments (211)

Friday, February 05, 2010
Thinking about Google's Chrome OS and Similar Offerings

So how about Google’s Chrome OS? Interesting idea, isn’t it? (Although not very original and certainly not something Google invented). Create a device with a closed OS that launches straight into the browser as a pure web device. A solid state drive will make booting a very fast operation. No software installs mean no hassles (at least in theory). One is always online anyway, or so the theory goes, so why ever install a rich client app?

For many users, this may be pretty close to the truth. However, and that much is clear, HTML apps are not really great apps. From a UI design point-of-view, they outright suck. Sure, we have come a long way in making HTML UIs better, and we sure are pushing the limits with AJAX and advanced client-side scripting, but at the end of the day, we are still stuck in a fairly outdated world of HTML that was never all that great, even when it was new. Even simple Windows UIs often provide much better user experiences, and they are much less labor intensive to build. But I digress.

So for many scenarios, users will be happy with web browsers, but at the same time, they are losing the ability to do anything on their machines. No Word documents to create, no Outlook, no games to play. Will users really be happy using Google Docs and Outlook Web Access? Personally, I know I wouldn’t be. Nobody I know would. But then maybe I just don’t know the right people. I am sure some people really value not having to install anything, as it also brings the benefit of not accidentally installing viruses locally. (Although Facebook shows that viruses and other malware can also make it into online offerings, and Chrome OS won’t be immune to those sorts of attacks either. And neither will any other system, for that matter. Take that, Mac!)

I really do wonder about the user experience though. I can’t see myself using just HTML-based apps. Sure, HTML may improve. After all, HTML5 is on the horizon and there is hope of adoption of those new features, especially on Google’s side (Google is behind HTML5, after all). In general though, pushing a new HTML standard is a very difficult undertaking, as there simply is no way to force all the clients to update (and clients now include anything from Windows machines to mobile devices such as phones and Kindles). So it will be a very long time before HTML5 really becomes significant. And “oh by the way”, how are you going to get a new browser with HTML5 support onto your Chrome OS netbook? Well, there will be some update ability, which weakens the “no install” story considerably. How much you can update the OS and its components (such as the browser) will probably depend on the OS vendor (as Google is not the only one playing in this market), but the situation is clear: either you allow (update) installs and are more susceptible to security problems (with a weaker advantage on the maintenance side), or you can’t update, turning the machine into a brick every time something new comes out. (Perhaps with the ability to re-flash the device at the original vendor, similar to the situation with some phones today.)

Of course the no-install-story also means very poor support for technologies like Silverlight and Flash. After all, how are you going to get that technology on your device, and how are you updating it? And forget about running any offline apps, no matter how great those features are in Silverlight 3 and 4!

One of the biggest questions in terms of Google’s success with the Chrome OS is “how will they build customer affinity?” If you are a Windows user, you identify with that OS. Same if you are a Mac user. Your next PC is likely going to be a Windows or Mac PC again, unless you have a really good reason to abandon that platform and your investment in it. If you are a pure web-OS user on the other hand, then who cares what OS you are using? You might go from Chrome OS to some other vendor’s web OS. You will probably hardly know the difference. Or you might go to Mac or Windows as you grow tired of not having a strong offline story. Anything that installs on the client ties the user to that platform. If it wasn’t for all the apps, I might have long since tried a phone other than the iPhone (as there are now offerings with feature sets similar to the original iPhone’s), but the apps keep me there. “What if there isn’t an equivalent of Shazam?” I often think. Substitute the app(s) of your choice. You get the idea.

If you have something client-specific, you care about the client. Otherwise, clients become interchangeable, which is bad news if you are the company invested in creating the client technology/device.

Frankly, even if Google’s idea with the web-OS flies, I would have much higher hopes for someone like Apple to capitalize on it. After all, Google is an advertisement company, not a tech company.

What about Microsoft? Could they compete in this arena? Fundamentally, yes. In fact, I think Microsoft is in a great position to re-purpose some of its OS assets in combination with Internet Explorer and put that into a scaled-down version, perhaps with some rich-client capabilities (and Silverlight pre-installed). In short: Microsoft would be well positioned to put out a very competitive offering that is much superior to whatever Google and (hypothetically) Apple could put out. Whether that will happen is a completely different story, however. After all, MS’ track record in the mobile device arena is plastered with unrealized potential…

Posted @ 4:15 PM by Egger, Markus
Comments (51)

Thursday, February 04, 2010
Social Media Pet Peeves

I like social media. I think Facebook and Twitter in particular are the way to go, and for individuals and businesses, it is as important to establish a meaningful presence on these sites as it was in 1998 to have a web site. I also like other social media outlets, such as Xbox Live. I have tons of friends and I care about what they do. I am also interested in tons of businesses and products and want to be connected with them, but I’d never have the time to go out of my way to visit their web sites all the time.

There are some things people do on these sites however, that drive me nuts. Here is a list of my top gripes:

  • Pick a name I can recognize! If you friend me and I can’t tell who you are, I am not going to friend you back. And even if I can figure out who’s behind a silly name initially, I will probably forget and not recognize you when you post a status update. Besides, many social networks make it very hard for me to search for you if you use a silly name!
  • Upload a real picture of yourself! This is very similar to the name issue. I (like many people) am not all that good with names. I meet tons of people at various events and other occasions. I make an effort to remember names, but I still have a much easier time recognizing faces. So use a photo that allows me to recognize you!
  • Put up a short and concise bio (especially on Twitter). If you friend me, I will follow you too in all likelihood. But I go through lists of people fast. Just today, I weeded through 200 followers, and I probably have less than 1 second for each of them to decide whether I want to follow them back. So I glance at the bio and make an on-the-spot decision. If I do not find anything that interests me, I will never follow you. This is a one-chance deal.
  • Before you friend a lot of people, add some substance. If someone follows me on Twitter, I will look at their page, and if there is nothing there for me to look at tweet-wise, I will not follow back, as I have no idea what kind of stuff they post online.
  • If you are a business, you need to change the way you present yourself. Post a single stupid marketing message and I will unfriend you. Post something that has substance.
  • Don’t protect your tweets. If you do not want your tweets to be seen, then Twitter is not the place for you. I will never go out of my way to follow you if I have to request the follow. You can always block someone you don’t want to see your tweets.
  • DON’T SHOUT AT ME! If you write in all upper case, you are annoying a lot of people.

Well, that’s the short list anyway. I could go on and on. What are your social media pet peeves?

Posted @ 4:41 PM by Egger, Markus
Comments (223)

Monday, February 01, 2010
Dynamic C# 4.0 Presentation from the Munich .NET User Group

Geee! I almost forgot to upload this: I recently (January 2010) did a presentation at the .NET User Group in Munich. Here is a link to the slide deck I used for that presentation.

Note that this slide deck (and presentation) is in German. I will, however, present the same talk at the Houston .NET User Group soon, and I’ll upload the English slide deck after that event.

Posted @ 12:25 PM by Egger, Markus
Comments (61)

Saturday, January 30, 2010
My Thoughts on the Apple iPad

As many of you probably know, I love Tablet PCs and similar devices. I have had a Tablet PC from the day they became available. I even had a tablet device that was Windows CE based years and years ago. I have spent a huge amount of time developing for the Microsoft Tablet PC and also for Origami devices (“UMPC”, a.k.a. “Ultra Mobile PCs”). I even like related devices such as Microsoft Surface, the iPhone, and e-reading devices such as the Kindle. We have done 2 CODE Magazine special issues focusing on Tablet PC and Mobile PC development. Microsoft even named me one of the world’s most influential Tablet PC and Mobile PC developers.

So with all that in mind, what do I think about the iPad?

Well, I think it is a cool device and I will definitely get one! And here is why:

Ain’t I a Microsoft Guy?

As most readers of my blog probably know, I like Microsoft stuff, so I would like to add a little background information on how that impacts my feelings towards the iPad. (And of course, as a Microsoft Regional Director (RD), I am predisposed to liking Microsoft stuff over other products.) Furthermore, I have gone on record stating that I am not a fan of Apple Macs. But yes, I have also enjoyed my iPhone. So there is a bit of an odd relationship there. At the end of the day, however, I like slate devices. Ever since I first saw a tablet-style device on Star Trek, I knew that I would enjoy such an experience, and I got my first tablet device, based on Windows CE, years and years ago. Then, once Microsoft came out with true Tablet PCs, I jumped on that and spent a huge amount of time programming them and evangelizing Tablet PCs. In fact, Microsoft named me one of the most influential Tablet PC and Mobile PC developers in the world. From a magazine point of view, we have published 2 focus issues on Tablet PC and Origami development (Origami being Microsoft’s Ultra Mobile PC offering, which was supposed to be very similar to the iPad, although it didn’t quite work out like that), and you can still find that content online. At EPS, we are also a Microsoft Surface shop, which falls into the same category in some ways.

Then along came the iPhone, and it made NUIs (Natural User Interfaces) and multi-touch mainstream. (Microsoft has also gone in that direction, first with Surface and now with Windows 7.) What is great about the iPhone and iPod Touch product line is that it is entirely designed with NUIs in mind. These devices run apps that weren’t merely adapted for multi-touch interaction but were specifically created for that kind of experience. Now the iPad follows the same approach, and from everything I have seen so far, I think the result will be very good.

So yes, I am a Microsoft guy, but I have to acknowledge that Apple seems to have a very interesting product again (probably not as mainstream as the iPod or the iPhone, but still…). I am hoping Microsoft will come out with a competing product, because I think the original ideas around Origami were awesome and I also think that Microsoft’s Tablet PCs (current and devices that are rumored to be coming) are fundamentally better as they support true multi-tasking, pen input, handwriting recognition, and so forth. Nevertheless, I think the iPad will be a good product and it is an important step forward.

Note: With that in mind, we recently recorded an episode of CodeCast which you can download here.

A Great Experience

One of the things that is really crucial and a stand-out feature is how well Apple designed the overall experience of the iPad. I am not always an Apple UI fan, but for this, I think they knocked one out of the park. A lot of features of the iPad are available elsewhere. Sure, using a Windows 7 tablet, one can use IE to browse the web in a multi-touch fashion. Sure, the iPhone has photo albums. Sure, one can open Outlook on a Tablet PC. But the point is that the iPad provides an experience that is specifically created for multi-touch and NUI interaction.

In Windows, the web browsing experience is a GUI experience that is retrofitted with touch support. Launching applications is done through the Windows Task Bar, which works in multi-touch scenarios, but it is not a great experience, as it is optimized for mouse interaction. Sure, the Task Bar is now a tad bigger so one can interact with it using one’s finger. But if one were to start from scratch with a multi-touch UI, one would do it completely differently (as Apple and others – such as HP – have done). Try using Outlook in a NUI setup. It simply isn’t usable.

Nope, Apple has the upper hand at this point. The multi-touch UI of the iPhone/iPod Touch is done very well and it is super responsive. The iPad looks to be scaling things up. It is reportedly even more responsive than the iPhone and it provides a bigger touch area and screen. Browsing the web, reading email, watching videos, looking at photos, keeping up with Facebook, and looking at Tweets will be a treat! I entirely believe that doing all those things will be a better experience on the iPad than any other device! (For book reading on the other hand, I think the Kindle is better with its digital ink display technology and long battery life).

But it’s just a bigger iPod!

Exactly! I consider that a good thing. Some people have reacted with disappointment to Apple’s announcement. I disagree! I would not want the iPad to be a scaled-down Mac, because if it were, the same problems as with MS Tablet PCs would apply. I do not want to use apps designed for mouse and keyboard with some sort of half-assed touch support scheme bolted on. Instead, I want the existing touch apps to become more sophisticated and scale up! And that is exactly what Apple is doing. They take the mail client from the iPhone and make it take advantage of the bigger display. The photo app looks to be a treat. Even the iWork suite will provide a pretty cool environment for document reading and handling.

Furthermore, there are 140,000+ apps (January 2010) that are already designed entirely for multi-touch with a NUI. Let those apps grow up and provide more power within these exciting new paradigms. Scaling down a Mac (or a Netbook) would be exactly the wrong way to go as GUI apps simply don’t translate well to environments that call for NUIs.

Another aspect of the iPad being a bigger iPod is that there is a huge ecosystem of iPod/iPhone accessories that will now also work with the iPad. These accessories are already made to work with mobile devices and are thus very suitable for the iPad. Accessories made for the Mac, on the other hand, wouldn’t be.

Personally, I have gone on record saying that I think mobile devices will largely replace desktop PCs. Why have a desktop machine when you can have a mobile device the size of a phone that you always carry with you and that provides a good experience on the go, and when you are in the office, you dock it and use an external keyboard and monitor as the device switches into “stationary mode”? Sure, we are not there yet and it will take years to get there, but ultimately, I believe this is where things will go. And for that to be true, we will see smaller devices become more powerful and grow up, rather than current PCs shrinking down with mouse-and-keyboard UIs and everything else that goes along with PCs. The iPad is one small but very significant step in this direction (and it supports an external keyboard and monitor).

The Size of the iPad Device

I have read online (in particular on a blog owned by a former manager of the MS Origami team) that some people think the device is too large. “When we designed the Origami, we aimed for a more mobile and portable experience,” they say. But that was at a time when cell phones didn’t provide good online experiences. Today, the iPhone has revolutionized phones and entirely changed the game for what a phone can do. Reading the web is a very good experience on the iPhone (and the other phones that followed). So we already have an ultra-mobile experience that works well, and we do not need the iPad to be a device we always take with us wherever we go.

Instead, I see the iPad as a device I will use in my home. I will have it next to my development machine with some documentation open or a video running. I will have it next to my gaming machine or Xbox to read a strategy guide, a walkthrough, or the WoW Wiki. I also envision taking it on flights to have a bit of a bigger video screen than phones provide. I will take it to my granny to show her the latest pictures of ourselves or from Facebook friends. I will take it on a trip to have my travel guide and map with me. I will take it to the beach. I will use it to read email on the couch or in bed. I may even have one for the bathroom. But I will not constantly carry one in my pocket.

For all these scenarios, I believe the size of the device is going to be great. It is also very lightweight (1.5lbs… less than 1kg), which is great!

The Limitations

Of course the iPad has some limitations that are rather a bummer: no support for Flash or Silverlight. I think it makes sense from Apple’s point of view not to provide competition to the App Store, but from a user’s point of view, it would be very nice to have. (And from a developer’s point of view, I would love to use Silverlight to code for this device.)

Furthermore, the device has no stylus, eliminating all possibility of more accurate and sophisticated interaction. There is no handwriting recognition, which is a feature I use all the time with Tablet PCs. The touch technology Apple uses (their capacitive touch sensor) is great for finger-interaction, but it simply does not support stylus-interaction. To me, that is probably the single biggest limitation. I believe if people gave the handwriting recognition on Tablet PCs a try, they would be amazed!

Multi-tasking is a problem at this point. Developers can’t write apps that run in the background. But that limitation could probably be lifted over time.

The device doesn’t have a camera. This is an odd omission, and I predict it will be fixed in the future. A camera would enable video conferencing. I think it would be neat, but I don’t think it is quite as big a deal as people think. You are not going to use the iPad to take pictures at a party, since you are unlikely to carry it along, and even if you did, it would be too unwieldy. Also, the way you are likely to hold the iPad, a fixed camera is unlikely to point in a useful direction. A movable camera, on the other hand, would be just… well… odd in a device like this. It would probably make the device thicker, and the moving part would probably make it much more fragile. Perhaps it would even be better to have a camera accessory that connects with a cable…

The iPad Name

So there is a bit of a comedic factor here. Yes, I get it: feminine hygiene product. Hours of entertainment value, if you are in puberty. For everyone else: let’s move on! It’s called the iPad, and I think that is a good name. It rolls off the tongue. It explains exactly what it is: a digital pad. What is a tablet, exactly? What is a slate? I think “Tablet PC” is a good name for a true PC in tablet form, but for what Apple built, a “pad” is a good fit. Furthermore, it puts the product right into the same family as the iPod, which is good as well. After all, it tells people right away to associate this with an iPod style of experience rather than a Mac experience.

It also amused me that the aforementioned gentleman from the Origami team thought the name was “awful,” while he apparently thought “Ultra Mobile Personal Computer” was a good name for his offering. That little nugget sure keeps me entertained longer than any reference to a feminine hygiene product ;-).

What I think will/should happen

I think the iPad will be a success. It is a good idea, and its time has come. Most importantly, Apple provides what appears to be a good product and pairs it with a huge marketing push, which the Tablet PC and the MS UMPC never enjoyed. Furthermore, Apple doesn’t just provide the software; it provides the device. Once again (as with the iPhone and iPod), if you want one, there will be no doubt where to get one and what you will get. I think all this will add up to a device that sells well and brings this type of experience into the mainstream. Of course, the product is more specialized than a phone or a music player, so I would not expect it to sell as much as the iPhone or the iPod, but it will still do well, I think.

I am also hoping that Microsoft will make another push in the tablet market. There are rumors around the “Courier” tablet device, and Microsoft has already shown smaller slate devices. All those things are very exciting, and – as mentioned above – I think Microsoft has technology that is better. I hope Microsoft will come forth with a completely new UI for these devices, built from the ground up as a NUI, and I want Tablet PCs that also work in regular laptop mode to switch between the current Windows paradigm (GUI) and a NUI paradigm when needed. I would also like to see Microsoft build its own devices. I have no problem with OEMs building devices as well, but I think Microsoft needs to put out a device with a feature set people are aware of, at a price point people are aware of, and in a place to buy it people are aware of. (With the Origami, Microsoft depended completely on OEMs to put together hardware with feature sets the OEMs picked and price points the OEMs set, resulting in a scenario where MS could only advertise the OS but not much more, leaving people wondering where they could get such a device… not unlike the situation with MS phones, really.)

Oh, and for all this to make sense for MS, there needs to be marketing. I think Apple has put more marketing muscle into the iPad in the last 2 or 3 days than MS did for the entire Tablet PC and Origami efforts combined. At least it seems that way, and that is what marketing is all about, after all.

Pros and Cons

Here is a list of pros and cons of the iPad as I see them:


Pros:

  • Great overall experience built uncompromisingly for touch
  • All iPhone apps available (140,000+ at this point)
  • Programmable the same way as the iPhone (Objective-C and MonoTouch)
  • Probably a very good Internet browsing experience
  • Probably great for email reading
  • Probably excellent for video viewing (although the aspect ratio is an old-fashioned 4:3)
  • Probably great for photo viewing (one can see the iPad as a great digital picture frame)
  • Great integration of things such as Facebook, YouTube, Twitter, and third party apps like SlingCatcher/SlingBox
  • Lightweight (1.6lbs)
  • Instantly on
  • Very responsive (supposedly more so than the already very responsive iPhone).
  • Excellent screen
  • Relatively long battery life (supposedly 10 hours, so much longer than even Netbooks but also much less than the Kindle and other eReaders)
  • The new e-reader app (iBooks) is a nice addition, although I wouldn’t want to trade my Kindle for it (the Kindle reader app should be supported on the iPad just as it is on the iPhone)
  • An external keyboard is available as is the ability to hook it up to a monitor
  • The iPad is less expensive than a slate-only Microsoft Tablet PC (and the least expensive version will work well for most people)
  • All iPod/iPhone accessories should work with the iPad (other than the ones it doesn’t physically fit in, such as a lot of the speaker-docking-stations)
  • iWorks suite should be decent for office app needs (within reason)


Cons:

  • Not a “real computer” (can’t run Mac and Windows software)
  • No camera (motion or still)
  • I haven’t tried it myself, but typing on the virtual keyboard is not something I am looking forward to. Data entry will be difficult, I bet.
  • No stylus or handwriting support
  • No support for Flash and Silverlight
  • Cost (although not compared to the iPhone) - 16GB ($499/$629), 32GB ($599/$729), or 64GB ($699/$829) – the lower price is with wireless only, while the more expensive one supports 3G (but doesn’t include the monthly cell service fee 3G requires).
  • Short battery life compared to Kindle and other e-Readers
  • Screen probably hard to read in sunshine (unlike e-Reader screens like the Kindle’s)

Posted @ 12:20 AM by Egger, Markus
Comments (62)

Friday, October 30, 2009
State of .NET and User Group Presentations in Denver

I just got back from doing a State of .NET Event in Denver as well as a user group presentation there. It was fun. Thanks for the warm welcome “Denverers” (what do you call people from Denver?). 

For those of you who attended one of the presentations, don't forget to sign up for a CODE Magazine subscription using one of the special URLs provided. For those who weren't there, you can still get a great CODE Magazine offer here. Also, don't forget to subscribe to the free CodeCast podcast.

Here is the slide deck for these talks:

Also, I promised additional examples to download, so here they are:

Also, for more information on Silverlight and related Expression products, see

For more information on our Tower48 escrow company, check out this post. Also, here is a video on the Tower48 stuff:

And here is a video of the Silverlight hockey app:

The external download link for the video is this:

Posted @ 4:24 PM by Egger, Markus
Comments (113)

Tuesday, October 06, 2009
Cross-Site Access Policy for Self-Hosted WCF Services

When you build WCF services, you basically have two options for making a service available: 1) host it in IIS, or 2) self-host it in something like a Windows Service or a similar application. In general, it is easier to let IIS host the service, because IIS offers features such as service health monitoring. Plus, it is easy to put a service into an ASP.NET-based application as a .svc endpoint. I use this ability myself, for both HTTP- and TCP/IP-based services.

However, there are also scenarios where I prefer the self-hosting route. This is especially true for my more important and more powerful services, because those are typically the services I expose in a number of different formats and over a number of different protocols. In self-hosted scenarios, you generally have more options for exposing the same service. For instance, I may want to expose a service over TCP/IP, HTTP (SOAP and REST), and MSMQ all at the same time. And yes, these are not all the exact same services (queued services, for instance, aren’t going to return a result, while SOAP and REST services do, so they remain separate classes). Even so, they often are wrappers around the same business logic and generally go together. So self-hosting may be of interest there.

Using WCF, exposing services (SOAP and REST) over HTTP(S) in a self-hosted scenario is not very difficult. You simply create a host app, add the appropriate WCF ABCs (well, the Address and Binding, mainly… the Contract will be the exact same thing regardless of how you host the service), and you are pretty much ready to go. (There actually is a bit of a gotcha when you compete with IIS for a URL, but I will blog about that separately.)
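As a minimal sketch of those ABCs, a self-hosted SOAP endpoint might look like the following. The contract, implementation, and port number are made up for illustration; they are not from any real project:

```csharp
using System;
using System.ServiceModel;

// Hypothetical contract and service, for illustration only.
[ServiceContract]
public interface IGreetingService
{
    [OperationContract]
    string Hello(string name);
}

public class GreetingService : IGreetingService
{
    public string Hello(string name) { return "Hello, " + name; }
}

public static class SelfHostExample
{
    // The WCF "ABCs": Address (base URI), Binding, Contract
    public static ServiceHost StartHost()
    {
        // Port 8732 is an arbitrary example; opening an HTTP listener
        // requires a URL reservation (or admin rights) on Windows.
        var host = new ServiceHost(typeof(GreetingService),
            new Uri("http://localhost:8732/greeting"));
        host.AddServiceEndpoint(typeof(IGreetingService),
            new BasicHttpBinding(), string.Empty);
        host.Open(); // the service listens until host.Close() is called
        return host;
    }
}
```

From here, exposing the same implementation over additional protocols is mostly a matter of adding further endpoints with different bindings (NetTcpBinding, WebHttpBinding, and so on) to the same host.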

The one thing people often ask me about when it comes to this stuff is “how do I call this from Silverlight?” Silverlight by default does not allow cross-domain calls. This means that a Silverlight component hosted on one domain cannot automatically access a service hosted on a different domain. To allow this, the domain that hosts the service needs to define a cross-site access policy. It explicitly needs to opt in to allow services on that domain to be called from another domain.

Note that in these types of self-hosted scenarios, you almost always have a cross-domain call. Even if only a single site calls the service, it is unlikely that the self-hosted service uses the same domain as the web site hosting the Silverlight component. It can be done, but from an architectural and maintenance point of view, it tends to get very confusing.

Anyway: What you need is a ClientAccessPolicy.xml file in the root of the domain that hosts the service. The XML file content is fairly simple. Here is an example that allows for unlimited cross-site access:

<?xml version="1.0" encoding="utf-8"?>
<access-policy>
  <cross-domain-access>
    <policy>
      <allow-from http-request-headers="*">
        <domain uri="*"/>
      </allow-from>
      <grant-to>
        <resource path="/" include-subpaths="true"/>
      </grant-to>
    </policy>
  </cross-domain-access>
</access-policy>
Note that this leaves the server wide open. In the wild, it is better to limit callers to the exact set you actually need. Typically, that might mean allowing only SOAP headers and limiting access to two or three caller domains, or something like that.

On a side note, it is also possible to use a CrossDomain.xml file instead. However, that is an Adobe format which Microsoft merely supports: it was not created specifically for Silverlight, it doesn’t support the same options, and Microsoft won’t be able to add anything to that file format if needed. You should only use a CrossDomain.xml file if you have a real reason to and understand the implications. Otherwise, stick with the ClientAccessPolicy.xml file. Also, contrary to common misconception, it is not necessary to have both files; a ClientAccessPolicy.xml file alone is typically all you want/need.
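As an example of a tightened policy, the following sketch restricts callers to two specific sites and only allows SOAP-style requests. The domain names are placeholders, not real sites:

```xml
<?xml version="1.0" encoding="utf-8"?>
<access-policy>
  <cross-domain-access>
    <policy>
      <!-- Only SOAP requests (SOAPAction header) from these two sites -->
      <allow-from http-request-headers="SOAPAction">
        <domain uri="http://www.site-one.example"/>
        <domain uri="http://www.site-two.example"/>
      </allow-from>
      <grant-to>
        <resource path="/" include-subpaths="true"/>
      </grant-to>
    </policy>
  </cross-domain-access>
</access-policy>
```

Everything not explicitly allowed here remains blocked, so adding a new caller domain later means editing this file rather than touching the service itself.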

Anyway: If your server is running IIS, you can simply put the ClientAccessPolicy.xml file into the root folder of the domain. Silverlight will then access that file to figure out what the server opts in to.

One scenario you may run into is that the WCF server does not run IIS at all. After all, if you have a server dedicated to running these services in a self-hosted fashion, why even run IIS? Just to serve up one XML file? That is probably overkill, especially considering that it opens the server up security-wise, and you also might run into trouble with IIS competing for URLs and such. Not to mention that every running service consumes resources. So if all you need is to serve up this XML file, don’t run IIS. Instead, have your WCF service host serve up the policy file as well. Here’s how:

First, create a service contract that can serve up XML content:

[ServiceContract]
public interface IClientAccessPolicy
{
    [OperationContract]
    [WebGet(UriTemplate = "/clientaccesspolicy.xml")]
    XElement GetClientAccessPolicy();
}

There are some interesting aspects here. Fundamentally, this is a pretty simple WCF contract that happens to return an XElement. The interesting part is that this service – just like the REST services WCF can now host – supports simple web GET access, which means it can be accessed as a plain URL. The URL is specified in the UriTemplate, and for this type of operation it is always the same (ClientAccessPolicy.xml in the root folder). So now, whenever someone requests that file name at the root URL, this service can kick in and return a result.

As you can imagine, the actual implementation of this service is trivial: all that’s needed is a single method that returns the desired XML as an XElement. A good, simple, and flexible way to do this is to use LINQ to XML. But you can implement it any way you want. (In fact, you can probably use other return types that represent XML.)
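As a sketch of that idea (assuming the wide-open policy from above), the implementation could build the policy document with LINQ to XML like this. The class would implement the IClientAccessPolicy contract; the contract is omitted here to keep the snippet self-contained:

```csharp
using System;
using System.Xml.Linq;

// Sketch: implementation class for the IClientAccessPolicy contract
// shown above (contract omitted to keep this snippet self-contained).
public class ClientAccessPolicy
{
    public XElement GetClientAccessPolicy()
    {
        // Build the wide-open policy document with LINQ to XML
        return new XElement("access-policy",
            new XElement("cross-domain-access",
                new XElement("policy",
                    new XElement("allow-from",
                        new XAttribute("http-request-headers", "*"),
                        new XElement("domain", new XAttribute("uri", "*"))),
                    new XElement("grant-to",
                        new XElement("resource",
                            new XAttribute("path", "/"),
                            new XAttribute("include-subpaths", "true"))))));
    }
}
```

Because the XML is built in code rather than read from disk, you could just as easily make the allowed domains configurable instead of hard-coding the wildcard.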

So now, all that’s left to do in your host application is to fire up this service using the WebHttpBehavior. Something like this will do fine:

// The base address is a hypothetical example; substitute your server's URL.
Uri[] addresses = new Uri[] { new Uri("http://www.mydomain.com/") };
var host = new ServiceHost(typeof(ClientAccessPolicy), addresses);
var endpoint = host.AddServiceEndpoint(
    typeof(IClientAccessPolicy),
    new WebHttpBinding(), string.Empty);
endpoint.Behaviors.Add(new WebHttpBehavior());
var smb = new ServiceMetadataBehavior();
smb.HttpGetEnabled = true;
host.Description.Behaviors.Add(smb);
host.Open();

There you have it! Now your self-hosted WCF service can host the client access policy “file”. No IIS needed or even desired.


Posted @ 7:10 AM by Egger, Markus
Comments (1651)

Monday, October 05, 2009
Dynamically Loading Resource Dictionaries in Silverlight 3

At this fall’s BASTA conference in Mainz, I presented a session on “Reusable Silverlight Components”. One of the things I showed in that session was how to create Silverlight components that can be hosted in different sites and also be completely re-styled and rebranded by means of dynamically loaded Resource Dictionaries.

Silverlight 3 is the first version of Silverlight that supports resource dictionaries. This makes it much easier to maintain resources generically in separate XAML files, and even to switch between different sets of resources. One possibility that often goes overlooked, however, is that resource dictionaries can be loaded completely dynamically from any URL. I often use this in scenarios where I pass parameters to a Silverlight control, one of which is the URL of such a resource dictionary. I then load that dictionary dynamically, so everything in the application references it. The basic idea is the dynamic load process from a URL, which can be done like so:

WebClient request = new WebClient();
request.DownloadStringCompleted +=
    new DownloadStringCompletedEventHandler(request_DownloadStringCompleted);
// The URL of the resource dictionary XAML file (hypothetical example)
request.DownloadStringAsync(
    new Uri("http://www.mydomain.com/Resources.xaml", UriKind.Absolute));

This triggers an asynchronous string download from the specified URL. The associated event handler fires when the download is complete and assigns the loaded resource dictionary:

void request_DownloadStringCompleted(object sender, DownloadStringCompletedEventArgs e)
{
    string resourceXaml = e.Result;
    ResourceDictionary dictionary =
        XamlReader.Load(resourceXaml) as ResourceDictionary;
    Application.Current.Resources.MergedDictionaries.Add(dictionary);
}
This accesses the current application resources (which should be a resource dictionary, although some error handling may be appropriate here) and then uses a XamlReader to load the retrieved XAML string, casts it to a resource dictionary (which it may not be, so more error handling is in order here) and then simply adds it to the collection of available resource dictionaries.

There are a few more things of interest here that are worth pointing out:

First of all, Silverlight 3 still doesn’t support dynamic resources. Static resources get assigned as soon as an interface loads and can’t be changed later. This means that new resource dictionaries should be added before any real UI loading is done. I generally like to allow for a “resourcedictionary” parameter passed to the Silverlight control, but I make the parameter optional. For this reason, I generally have this kind of code in my Startup event handler in App.xaml.cs:

private void Application_Startup(object sender, StartupEventArgs e)
{
    if (e.InitParams.ContainsKey("resourcedictionary"))
    {
        var content = new Grid();
        this.RootVisual = content;
        content.Children.Add(new LoadingAnimation());

        WebClient request = new WebClient();
        request.DownloadStringCompleted +=
            new DownloadStringCompletedEventHandler(request_DownloadStringCompleted);
        request.DownloadStringAsync(
            new Uri(e.InitParams["resourcedictionary"], UriKind.Absolute));
    }
    else
    {
        this.RootVisual = new Page1();
    }
}

This code checks for the parameter. If it isn’t present, the root UI (Page1.xaml in this case) is loaded right away. Otherwise, I create a Grid() as a root container (the RootVisual setting can only be assigned once, so I am using the Grid object as a container which I can then use to load other UI into) and I then load a temporary loading screen while resource dictionaries are downloaded (you never know how long that might take). Then, when the dictionary is downloaded, I merge it into the resources and then I load the real UI into the Grid:

void request_DownloadStringCompleted(object sender, DownloadStringCompletedEventArgs e)
{
    string resourceXaml = e.Result;
    ResourceDictionary dictionary =
        XamlReader.Load(resourceXaml) as ResourceDictionary;
    Application.Current.Resources.MergedDictionaries.Add(dictionary);

    var grid = this.RootVisual as Grid;
    grid.Children.Add(new Page1());
}

Using this approach, the main UI (Page1) gets loaded after the custom dictionaries are downloaded and thus all static resources pick up the new styles.

Note: If you use the Implicit Style Manager, some of this is not as critical, since the ISM manually applies styles whenever it is invoked. However, since you are likely to still use named styles (which are always static), you probably still have the same problem.

Another interesting thing to note here is that I am simply adding the new dictionaries to the collection of merged dictionaries. The way Silverlight looks up resources, the resources added last are found first. So if my application already has a dictionary with a style called “StandardButtonStyle” and a dictionary added later also has a style of the same name, the one loaded last is found and used. This means that dynamically loaded resource dictionaries only need to define the styles they want to change. Since the standard resource dictionaries remain in place, Silverlight will find all the default styles there, while the new dictionaries override only specific ones. If you completely replaced all of the application’s resources instead, the newly loaded dictionaries would have to define every single resource, or else the control would show an error and probably fail to load. So adding resource dictionaries on top of the existing ones is generally a nifty technique that works very well in the real world.


Posted @ 12:14 PM by Egger, Markus
Comments (1183)
