Friday, December 30, 2005
.NET Framework 2.0 Not Enough for ClickOnce Deployment?
I like ClickOnce deployment. So far, everything I have created with it has worked very well, and it sure makes life a lot easier. Yesterday, however, I ran into a bit of a snag: until now, I had only installed our ClickOnce deployed apps on machines that already had older versions of the .NET Framework installed (many even have various versions of the .NET development environments installed). Yesterday, I tried to install a ClickOnce deployed app on a machine that didn't have any version of the .NET Framework at all. So I started out by installing the .NET Framework 2.0, and then I tried to install the ClickOnce deployed app. However, I got the following error message during the install:
Unable to install or run the application. The application requires that assembly Microsoft.mshtml Version 7.0.3300.0 be installed in the Global Assembly Cache (GAC) first.
As it turns out, that assembly is not part of the .NET Framework 2.0. I am not sure whether this has anything to do with my particular app, but I'd be surprised, since the app is built entirely on the 2.0 version of the framework. Also, at the point the error message appears, I do not think any part of my app has been accessed yet, and there is no reported prerequisite that the machine fails to meet. So this is pretty odd.
Today, I set up a brand-new machine from scratch and decided to give this a test run. Same result: with the Framework 2.0 alone, it does not work. Yesterday, I had fixed the problem by getting the assembly out of the GAC on another machine of mine and manually installing it on the target machine (a hack I would only recommend to people familiar with the GAC). Today, I tried to fix the problem more scientifically by installing the Framework 1.1. That did not help either. I then tried to install the Framework 1.0. Still no dice.
So in the end, the only solution I was able to find is the low-level hack: Open a "DOS" prompt and navigate to your Windows\Assembly\GAC folder. (You cannot do this with Windows Explorer, because it displays the GAC directories differently.) If you do not have a GAC folder, create it. If you do have that folder, check whether there is a Microsoft.mshtml folder in it. If not, copy the contents of the same folder from your original machine (I assume you have a machine with a working version of the app, which thus probably has that folder) onto the target machine. You can do this with the XCOPY command (use the /e parameter). This should fix it.
This is all a bit odd, really. I assume it must somehow be related to my app; I simply cannot believe that MS would ship such a serious bug. But on the other hand, I cannot find anything special about my app...
Posted @ 6:07 PM by Egger, Markus (firstname.lastname@example.org) -
Friday, December 30, 2005
Loading XAML Dynamically
XAML (the declarative markup language used by WPF/Avalon UIs) can be handled in many ways. It can be compiled into a DLL, it can be compiled into BAML (basically binary XAML), or it can be handled as XML and parsed to form the UI. This is a particularly useful option whenever you want to use XAML to define a document rather than conventional UI. The big question then often becomes: How do you load such content dynamically?
There is a defined way to do this. WinFX provides a special class to load XAML/BAML dynamically. The difficulty, however, is that the documentation is outdated: books, articles, and even the SDK documentation all refer to a Parser class that is supposed to exist in the System.Windows.Serialization namespace.
However, this class does not exist anymore. Instead, there is now a new (and, as far as I can tell, as of yet undocumented) class called XamlReader.
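A minimal sketch of the XamlReader approach, assuming a static Load() method that takes a stream (that is how the class surfaces in later builds, where it lives in the System.Windows.Markup namespace; the exact namespace may differ in the current CTP):

```csharp
using System.IO;
using System.Windows.Markup; // namespace assumed; may differ in the December CTP

public static class DynamicXamlLoader
{
    // Loads a loose .xaml file and returns the root element it describes.
    public static object LoadXamlFile(string fileName)
    {
        using (FileStream stream = File.OpenRead(fileName))
        {
            // XamlReader.Load() parses the markup and builds the object tree.
            return XamlReader.Load(stream);
        }
    }
}
```

The return value is a plain object; cast it to whatever root element your markup declares (a Page, a FlowDocument, and so on).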
And that works just fine. You may need to cast the result depending on what you need, but other than that, the class is extremely trivial to use.
Posted @ 5:41 AM by Egger, Markus (email@example.com) -
Wednesday, December 28, 2005
Display Size, Display Resolution, Font Size, Font Smoothing, and much more...
Screen resolutions and monitor sizes seem to be a constant source of confusion, even for relatively technical people. And especially with Avalon (WPF) on the horizon a solid understanding of some of these things is very important (and some things currently considered "common knowledge" really aren't entirely true). For this reason, I decided to sit down and write a few paragraphs about this. This probably isn't a topic that is sophisticated enough for us to print a CoDe Magazine article about, but it is a perfect topic for a blog.
There are a number of measurements that go with computer monitors. In general, the bigger the monitor, the better. However - and this is already where a lot of people go wrong - a bigger screen does not mean that it can show more stuff at once. The amount of information shown concurrently depends on the resolution (at least with current technology). Higher resolution means that more things can be shown at once. Some people confuse "high resolution" with things being larger; quite the opposite is true - at least at this point, because in the future, screen resolution will no longer be a measurement of how much can be displayed at once. So things get a bit tricky here. And then of course, there are different screen proportions, such as wide-screen, or portrait vs. landscape screens.
Overview of Terms
A monitor's size is measured in inches, even if you are in a metric world (an attempt was made to introduce cm as screen measurements in the European Union, but that just completely confused everyone, because nobody knew what a 43.18cm screen was...). The size in inches is the diagonal dimension of the monitor. So a 19 inch monitor is supposed to be 19 inches from the top-left corner to the bottom-right corner. I say "supposed", because with the old-style tube monitors ("CRT"), the visible area of the screen is less than that. I am currently writing this on a 19 inch CRT monitor, but the visible area is only 17.8 inches. Kind of a rip-off really. Flat panels ("LCD") are different in that the visible size is really the size of the monitor. So when you have a 19 inch flat panel, the diagonal dimension is 19 inches. This is why people often say that a 17 inch flat panel is similar to a 19 inch CRT monitor.
Then there is screen resolution, which is (currently) measured in X and Y pixel counts. A typical resolution might be 1024x768 or 1280x1024. This simply means that the picture is made up of 1280 points horizontally, and 1024 vertically (or whatever the numbers are).
A different way of looking at resolution (and actually the more accurate way) is dpi (dots per inch). This measures how many pixels are displayed along each inch of screen real estate. My 19 inch CRT monitor is 14.3 inches wide and 10.8 inches tall. I run at a resolution of 1600x1200 pixels. This means that each horizontal inch of my screen gets about 112 pixels, and each vertical inch also gets about 112 pixels. My current system configuration thus has a resolution of 112dpi, which is slightly higher than average. A typical monitor today runs at 96dpi, meaning that if one draws a square of 96 by 96 pixels on the screen, it shows up as exactly one square inch. But Windows today has little knowledge of displays that differ from the 96dpi standard. For instance, it has no idea that my monitor runs at 112dpi, and thus developers have no way to really create output that is one square inch in size. A pretty pathetic situation when you think about it.
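The arithmetic is simple enough to sketch in a few lines of C# (the numbers are the ones from my monitor above):

```csharp
static class DpiMath
{
    // Effective linear resolution: pixel count divided by physical size in inches.
    public static double Dpi(int pixels, double inches)
    {
        return pixels / inches;
    }
}
// 1600 horizontal pixels across 14.3 inches: DpiMath.Dpi(1600, 14.3) comes to
// about 111.9, and 1200 vertical pixels across 10.8 inches to about 111.1 --
// roughly 112dpi either way.
```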
Dpi is a very important measurement in many scenarios. For instance, in the publishing and printing business (as in "printing on paper"), we generally consider 300dpi to be the absolute lower limit of acceptable quality. 600dpi is normal. 1200dpi is great. As you can see, computer monitors have a ways to go.
Then, there are points. Fonts are generally measured in points. Typical font sizes in Windows today are 8, 10, and 12 points. Points are somewhat similar to pixels, but not identical. For instance, a 10 point "T" in the Arial font is drawn 10 pixels tall on my system. A 10 point "T" in Times New Roman, on the other hand, is 9 pixels tall, and a 24 point "T" in Times New Roman is 21 pixels. So there is a significant difference between points and pixels: a point is 1/72nd of an inch, while a pixel (at the standard 96dpi) is 1/96th of an inch. At 8 or 10 point font sizes, the difference is minimal, but at larger sizes, it is significant. A lot of people get confused because the two are so similar, but they are not the same.
Points are a handy measurement, because they allow you to specify the size of a font in absolute terms. A 10 point font's upper case letters are supposed to be about 0.14 inches (about 3.5mm) tall. This is true for things printed on paper, and in theory it should also be true for fonts displayed on monitors. However, since we already discovered that Windows has little knowledge about the size things end up on screen, this is currently only the case if the display happens to run at exactly 96dpi. Otherwise, Windows is supposed to scale things appropriately (in my case, it should use more pixels for each character), but it doesn't. This is one of the areas Windows Vista will fix.
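The point-to-pixel relationship boils down to a tiny helper (a sketch: this converts the nominal em size of the font, whereas the "T" measurements above are cap heights, which is why those come out a bit smaller):

```csharp
static class FontMath
{
    // A point is 1/72 inch; a display pixel at the given dpi is 1/dpi inch,
    // so a font's nominal size in pixels is points * dpi / 72.
    public static double PointsToPixels(double points, double dpi)
    {
        return points * dpi / 72.0;
    }
}
// At the standard 96 dpi, a 12 point em square is 12 * 96 / 72 = 16 pixels tall.
```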
Screen Dimensions and Proportions
So let's talk about dimensions a bit more: A lot of people think that when they get a bigger monitor, they can see more things at once. While there is a relationship between screen size and maximum resolution (bigger monitors often support higher resolutions), it is not the physical size of the device that defines how much can be displayed; (currently) only the resolution is the defining factor. A 22 inch monitor running at 1024x768 can show the same amount of "stuff" as a 15 inch monitor running at 1024x768. Of course, on the 22 inch monitor, each pixel will look huge, so I would argue that the quality is really lower (the small display would have around 85dpi, while the big one shows a nasty 60dpi or so).
Another aspect of screen dimensions is proportion (or "aspect ratio"). Most screens have traditionally had an aspect ratio that reminds one of a landscape piece of paper, similar to traditional TV proportions. Lately, however, more and more TVs are widescreen, probably because that worked so well in movie theaters, where the wide screen really fills one's entire field of vision and thus drastically improves the experience. I have always wondered about widescreen TVs. A lot of people really like them, but I never warmed up to them. For one, the "field of vision effect" that works so well in movie theaters does not happen at all in home setups. And unless you are watching mostly movies optimized for the widescreen format, everything needs to be stretched horizontally to fill the screen. This is nasty because it makes everyone look fat. A lot of people tell me that after a few weeks they don't even notice that anymore. Great! So I'd be paying significantly more money just so that after a few weeks I will not notice the poorer quality anymore? Yeah, right! Also, I will let you in on a little secret: Most people think they get to see more horizontally, but in reality, the picture is just cut off vertically. This isn't easy to notice with TVs, but you can see it well with those "panoramic" photo cameras ("panoramic" being a different word for "wide screen"). If you look at the negative of such a camera's film, you will notice that it is a single frame cut off at the top and bottom. Heck, at the very least I would have expected them to use more resolution vertically...
So anyway: Widescreen is now also available for computers. And my only thought is: why the heck would anyone want a widescreen display? With movies, it makes some sense, because that is the format a lot of movies are shot in, but for software?!? Most of the things we work on already suffer from the landscape nature of our screens combined with the portrait nature of the content we work on. Scrolling top to bottom is the most common way of scrolling. Web pages grow from top to bottom; they could greatly benefit from taller displays. The same is true for Word documents. Computer monitors should go "tall screen" (portrait) rather than widescreen for better user experiences.
A typical wide-screen LC display has a resolution of something like 1680x1050 pixels. The people who like it usually say: "What I like is that I can view 2 documents at the same time, side by side." In other words: they can open a web page and a Word document side-by-side. But guess what: You could do the same thing on a 1600x1200 display, except you would see more of each document vertically without scrolling! So once again, I am better off. Oh, and most of the time you will pay extra for the inferior wide-screen display. But hey, whatever floats your boat...
BTW: One of the features of my Tablet PC I like best is the ability to use it as a portrait monitor. It does wonders for my productivity when I work on articles or other writing tasks (even writing code).
The Truth About Screen Resolution
So why do so many people opt for lower resolutions? (Most commonly 1280x1024 and 1024x768... hardly anyone uses lower resolutions anymore... keep this in mind when designing your apps.) The problem is technical. Today, most people look at higher resolution displays and ask: "Boy, can you really read this?" This is understandable today, although most people do not have a good understanding of why it is supposed to be understandable.
As mentioned above, a 10 point font is supposed to be a certain inch/mm size no matter what. So if one looks at a 96dpi monitor running at 1600x1200, then a 10 point font is exactly the size a 10 point font is in a book or magazine. In print, font sizes of 8 to 10 points are normal. 7 points is still acceptable in terms of readability, but is usually not used without a good reason. 12 point fonts are used for children's books and look amateurish for everything else. Most people do not have trouble reading 8 point fonts.
So why is a 10 point font on a computer monitor considered hard to read, then? There are a few reasons: contrast is a problem, for instance, as is the fact that Windows is not capable of scaling fonts properly for displays that are not at 96dpi. But the biggest single problem is resolution. A 10 point font at 96dpi is hard to read, while a 10 point font at 600dpi is easy to read. This is because at 96dpi, fonts are so blurry that the brain/eye combination has to work hard to turn them into something the average human can read as characters.
Consider this screen shot of a "W" at 10 point Times New Roman:
Zoomed in, we can see how pixelated it is:
No wonder this is hard on the eyes.
I also created the same character at 10 points (so exactly the same size as before) but at 460dpi. Here it is zoomed in again:
Keep in mind that the original of this is the same tiny 10 point size as before. Of course, the render quality of the small version is no better than the 96dpi version, because you are not likely to have a 460dpi display. However, come back and reread this blog entry a few years from now when you have a better display, and you will see it just as crisp and sharp as the enlarged version.
Font Smoothing, Anti-Alias, and ClearType
The 96dpi "W" above is rendered in black. So why does it show shades of gray when we zoom in? That is due to font smoothing. If only black and white pixels were set, the font would look even worse than it does now. To explain font smoothing, consider a simpler graphic: a straight line. Whenever a straight line that is not completely horizontal or vertical needs to be drawn, it appears pixelated. Here is a zoomed-in version of a line:
As you can see, this looks awful. We can achieve a better result by applying anti-aliasing. Using this technique, pixels that should be only half set are drawn in a color that is somewhere between the background color and the foreground color. Here is the same line with anti-alias applied (or as good as I can create such a line manually in Paint):
The second version appears much smoother as we can see when we actually look at it in regular size. Compare the two versions:
The same technique can be applied to fonts and the result is shown further above in the 96dpi "W".
Another technique to smooth lines, fonts, and graphics in general is known as ClearType. ClearType takes advantage of the way LC displays (flat panels) work. On a flat panel, each pixel is really made up of a red, a green, and a blue sliver, aligned side by side within the pixel. For instance, let's assume the rightmost third of each pixel is the red sliver (I cannot recall the exact order... plus it varies for different displays). We can create the appearance of only a third of a pixel being drawn by lighting up just that sliver bright red (or really, by lighting up the other 2 slivers and leaving the red one dark (black), depending on what the foreground color is). Therefore, we can create a smooth appearance like so:
To really see what's going on, we need to zoom in further:
Of course, this only works on LCDs, because on all other displays, the same graphic looks like this:
The trick is that each sliver within a pixel is so small that it is impossible for the human eye to see the actual color. Instead, the brain assumes the same color as the neighboring pixels, and thus the line appears to humans like so:
This is by far the best version of all the ones we have looked into so far (keep in mind that I am still showing this zoomed in).
All these font smoothing techniques rely on biology (the human eye) and how the brain processes information received from the eye. Basically, the human eye/brain combination is much better at seeing patterns than it is at seeing individual colors. Due to this fact, the brain can be tricked into thinking a line is smooth. However, it relies on some heavy cognitive processing, and thus it is tiring to read a lot of text on the screen.
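As a toy illustration of the subpixel idea (all assumptions mine: a left-to-right red/green/blue sliver order, and black ink on a white background), one could shade each channel by how much of its third of the pixel the line covers:

```csharp
using System;

static class SubpixelToy
{
    // Coverage values are 0..1 per sliver: 0 = background shows through,
    // 1 = that third of the pixel is fully inked.
    // Returns the R, G, B channel values for one screen pixel.
    public static byte[] Shade(double coverR, double coverG, double coverB)
    {
        return new byte[]
        {
            ChannelValue(coverR),
            ChannelValue(coverG),
            ChannelValue(coverB),
        };
    }

    private static byte ChannelValue(double coverage)
    {
        // Black ink on white: full coverage turns the sliver off entirely.
        return (byte)Math.Round(255.0 * (1.0 - coverage));
    }
}
```

Viewed from a normal distance, the eye cannot resolve the individual slivers, so a pixel whose red third is dark reads as an edge one third of a pixel wide.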
BTW: Some people claim ClearType works well for them even on regular monitors. As I said: the goal here is to trick the brain into seeing things differently than they are. If that can be achieved by a lack of intelligence on the user's part, rather than by biological facts, then that's fine with me too. In all seriousness though: ClearType does not improve smoothness on CRT monitors. In fact, quality deteriorates on such displays when ClearType is used. (The same is true for LC displays that use a nonstandard arrangement of the red, green, and blue slivers.)
One of the nastiest things that can happen on LCD monitors is rasterization. Basically, it is not good to run an LCD display at anything but its native (maximum) resolution. If the maximum resolution of your display is 1280x1024 and you have Windows set to display 1280x1024 pixels, then each pixel the software puts out maps to exactly one pixel on the screen, and the maximum possible display quality is achieved.
However, let's say you have an LCD monitor with a resolution of 1280x1024 pixels, but have Windows set to 1024x768. Then each pixel rendered by the software has to be displayed by 1.25 pixels of the monitor's hardware. There is no way to display 0.25 pixels. Therefore, the information has to be rasterized, meaning that it has to be mapped onto the display. Consider this list of pixels:
If this row is to be mapped from 1024 (software) to 1280 (hardware) horizontal pixels, then the first hardware (monitor) pixel displays the first 80% of the first pixel the software put out. The second hardware pixel displays the remaining 20% of that software pixel, plus part of the second software pixel, and so forth. Therefore, almost every hardware pixel has to show an approximation of two software pixels combined, which results in a nasty color mixture. Creating this effect manually, we arrive at something like this:
Compare the two versions up close:
Needless to say that the crispness of the previous version is lost. Add anti-alias or ClearType to the mix and you have a serious mess that is next to unreadable.
I often talk to customers who look at my resolution and conclude they could never read my screen because everything is too small. Then, when I meet them in their office, they have their 1280x1024 display set to show 1024x768 pixels, and half an hour later I walk out with a headache. They would get much better readability if they went with the higher resolution. But I guess since they already can't see things well at the resolution they are at, they conclude that increasing the resolution would make things even worse. Quite the opposite is the case.
BTW: It is also possible to go the other way and set Windows to 1600x1200 when the display only supports 1280x1024 (this often happens with projectors). Rasterization is applied here too, but with even worse results.
How well rasterization works depends on the overall resolution. If you have a monitor that can display 1600x1200 and then show an 800x600 output, it will look pretty good, because there are a lot of (very small) pixels the rasterizer can work with. However, when the resolutions are very close (1280 to 1024, for instance), things turn nasty.
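The coverage-weighted blending described above can be sketched as a one-dimensional resampler (a simplification; real display scalers use fancier filters, but the blending effect is the same):

```csharp
using System;

static class Rasterizer
{
    // Maps a row of software pixel values onto a wider (or narrower) row of
    // hardware pixels. Each hardware pixel averages the software pixels it
    // covers, weighted by how much of each one it covers.
    public static double[] Resample(double[] source, int targetWidth)
    {
        double scale = (double)source.Length / targetWidth;
        double[] target = new double[targetWidth];
        for (int i = 0; i < targetWidth; i++)
        {
            double start = i * scale;    // where this hardware pixel begins,
            double end = start + scale;  // and ends, in software-pixel units
            double sum = 0.0;
            for (int s = (int)start; s < source.Length && s < end; s++)
            {
                // Portion of software pixel s that falls inside [start, end)
                double overlap = Math.Min(end, s + 1) - Math.Max(start, s);
                sum += source[s] * overlap;
            }
            target[i] = sum / scale;     // normalize by the covered width
        }
        return target;
    }
}
```

Resampling a hard black-to-white edge such as { 0, 1 } onto three hardware pixels yields roughly { 0, 0.5, 1 }: the middle hardware pixel straddles both software pixels and becomes exactly the gray blur described above.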
To drastically improve readability and rival print, computer displays need to operate at much higher resolutions. And by that I do not mean that we need to add pixels so everything can get smaller, but instead, we need to add pixels yet keep things at the same size. To do this, Windows needs to be resolution independent, which means that things must not be specified in hardware pixel measurements.
Windows Vista does exactly that. To make things a bit confusing, the new way of measuring screen dimensions is still called "pixel" (earlier versions called it "length", but apparently people didn't like that). A Vista pixel is not the same as a pixel on screen. A Vista pixel is a logical measurement that is 1/96th of an inch. So in my case, where I am running at 112dpi, something that is 96 Vista pixels long will in fact be drawn with 112 "real" pixels. It will also be exactly one inch long. It would also be just a bit crisper than on a 96dpi display. And of course, a 10 point font will be exactly 3.5mm tall.
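The mapping described above boils down to one line of arithmetic (a sketch of the idea, not the actual Vista API):

```csharp
static class LogicalPixels
{
    // A Vista "pixel" is a logical unit of 1/96 inch; the system scales it
    // to hardware pixels based on the display's actual dpi.
    public static double ToDevicePixels(double logicalPixels, double displayDpi)
    {
        return logicalPixels * displayDpi / 96.0;
    }
}
// On my 112 dpi monitor, 96 logical pixels map to 112 hardware pixels:
// exactly one physical inch either way.
```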
The goal is to get to resolutions that rival print. 300dpi displays are already available today, although they are really expensive. It is expected that they will be more reasonably priced in the relatively near future. 600dpi would be great, but realistically, that is way off in the future. 1200dpi would be awesome, but that is currently science-fiction. ;-)
And of course, even with a 300, 600, or 1200 dpi display, a 10 point font will still be 3.5mm tall. It just will be much more readable than today, because we do not have to trick the brain into finding patterns within all that smoothing madness. So it will be possible to read large amounts of text on a screen and it will be no more or less exhausting than reading a book in print today.
With resolution independence, we really need to keep track of 2 types of resolution: The overall resolution the display runs at (at 300dpi that could be something like 5000x4000... or, if I get my way with portrait displays, 5000x8000). This determines how big the area of display really is. Then, based on that combined with the physical dimensions of the display, we arrive at the dpi measurement. The more, the better, both in dpi and in resolution.
This change alone will make it worth upgrading to Vista. But there are other side-effects of this too. Since Vista can deal with graphics independent of the display resolution, it has to be able to scale things at a high level of quality and performance (and not the mess we currently have with "large font" settings and such). And since things can scale, they can scale arbitrarily. This means that each window can be scaled to say 250% of its original size (or whatever you want to zoom to), and still completely maintain its layout and proportions.
Buying a Monitor Today
So here is why I am even writing this blog entry: We have been trying to buy a new monitor. Personally, I really like high resolutions. I find that lots of workspace makes me significantly more productive. I therefore like to use a dual monitor setup where each monitor runs at a resolution of 1600x1200, or, at the very least, a single monitor that runs at 1600x1200. In the past, it has not been hard to buy such monitors, but this has changed! I have not tried this in the US yet, but when we recently wanted to buy a monitor for my second home in Austria (Europe), it turned out that in Austria it is practically impossible to buy anything but flat panels (LCDs) anymore. While there is nothing fundamentally wrong with flat panels, the problem is that they are all fairly low resolution. Most of them are 1280x1024. This is OK for home users who use their system to browse the web, but it is not OK for me as a developer. (Neither is it OK for serious gamers, really.) Higher resolution flat panels are available, but they start at $500+, and reasonably good ones (with a fast pixel response time... 16ms or less) are only available in the $800+ range as far as I can tell. A rather disappointing situation, considering that until a few months ago one could buy a 19 inch CRT monitor for just over $100 or $150 that supported resolutions even higher than 1600x1200. So we are really taking a step back here.
The whole situation surprises me a bit, to be honest. I can go to Dell and buy a notebook (which of course has a built-in LC display) for US$1,200 with a resolution of 1600x1200. However, when I want to buy an external LCD monitor from Dell with the same resolution, it's US$650, and it isn't even a very good display (slow). That just does not make sense to me.
Posted @ 2:00 PM by Egger, Markus (firstname.lastname@example.org) -
Tuesday, December 27, 2005
They Snuck in Some Cider...
I almost missed this, but it turns out that the December CTP of the WinFX extensions for Visual Studio is much more than one would expect after having used the previous CTP. While previous versions were mostly a simple collection of templates, this latest build is a full-blown Orcas preview!
The biggest surprise for me was that Cider (Visual Studio's visual designer for XAML/Avalon/WPF applications) showed up without much fanfare. I only discovered this by accident, really. I was messing around with some XAML and accidentally hit the "Design" button, and there it was! Very cool!
Posted @ 1:03 PM by Egger, Markus (email@example.com) -
Sunday, December 18, 2005
Session Material for VS 2005 Conference
I recently presented 3 different topics at the German Visual Studio 2005 Conference in Rosenheim (just outside Munich... relatively speaking). The presentations were in German, but the samples were in English. The slide decks were also in German, but since I needed them for some internal training, I have now translated them to English. So this should be good stuff for everyone interested in these subjects. Here is a list of the topics I presented on, with links to materials and related resources.
- New Language Features in C# 2.0 and C# 3.0
Here is a link to the samples and the slides in English and German language.
Also, if you are interested in more information about this, check out my recent eColumn on the subject. (Sign up for email newsletters to get these articles).
- LINQ
This is pretty cutting edge stuff. Extremely fascinating though. Here are the samples and the slides (English and German again).
Also, I wrote another eColumn that discusses LINQ, which will be available on www.CoDe-Magazine.com tomorrow or the day after (sign up so you get it!). I also wrote an article that will be in the next printed issue of CoDe Magazine. I am also working on a German article.
- Tablet PC Development
To get all kinds of information on the Tablet PC stuff, check out our latest Tablet PC CoDe Focus issue. I wrote several articles that are immediately applicable to this session (Intro, Reco, RTS) and in fact, this is where the samples came from (they can be downloaded from the CoDe site). There are also a number of additional articles on that site that are of interest to you if you are interested in Tablet PC development.
Note: You can get that entire printed magazine completely free of charge. If you are interested, sign up here.
Posted @ 2:30 PM by Egger, Markus (firstname.lastname@example.org) -
Tuesday, December 06, 2005
My New C# 3.0 Language Features Article
A new online article (eColumn) of mine has just been made available on www.CoDe-Magazine.com. It deals with new features in the C# 3.0 language, including Type Inference, Object Initializers, Anonymous Types, Lambda Expressions, Expression Trees, and even a little bit of LINQ. (I only touch on LINQ in this article... a second eColumn focusing on LINQ, as well as a full printed article on LINQ, will appear shortly.)
To get this eColumn, it is best to sign up for our email newsletters. You can do so at: http://www.code-magazine.com/Account.aspx?mode=email
You can also see this article online at this URL: http://www.code-magazine.com/Article.aspx?quickid=050123
Posted @ 7:35 AM by Egger, Markus (email@example.com) -