Windows Live Writer, Plug-Ins, WPF, Code Analysis and Tufte!


After the other day's debacle with Nostalgia and Flickr I looked to simplify the picture-posting process for this blog. My requirements are simple: something like right-click an image, then click "Send to Flickr", or similar. I figured that someone must have written this already... but no, I came up dry.

I figured I'd have to write my own version against the Flickr APIs (which are vast!). However, during my initial investigations I found out two interesting things:

  1. You can post images to Flickr via email, and
  2. There is a Windows Live Writer plug-in for Flickr images (WLW is the text editor that I use to write all my posts).

Armed with this information, I managed to completely streamline my image uploading process with zero code on my part! Good enough... for now.

With that, I'd found the appropriate links and figured that I would blog about my experience, share the solution with you, dear Reader, and seek your opinion on the matter. Then it occurred to me that there must be a Windows Live Writer plug-in for that too, and my search began anew. But, again, I came up dry (well, I found one, but it really sucked).

By this time my interest was really piqued with regard to Windows Live Writer plug-ins; I wondered what it would take to write a plug-in to my exacting requirements:

  • It must be a WPF front-end (nothing else will do)
  • It must use XLinq for the back-end (experiment)
  • It must be FxCop and Microsoft Source Analysis compliant (as I want to ship it, to you dear Reader)
  • The UI should "suck-less" and follow as many Tufte principles as makes sense for an application of this type (experiment)


When I started out on this adventure I was far from sure that the WPF approach was even possible; Windows Live Writer is a Windows Forms based application, so the plug-ins must also be WinForms. However, you can load WPF controls into WinForms using some fancy interop, so I was confident I could make something work. But what I really wondered was:

Is it possible to load a WPF Window from WinForms?

I wanted to write something like:

private void buttonOpenDialog_Click(object sender, EventArgs e)
{
    MyWpfWindow dialog = new MyWpfWindow();
    if (dialog.ShowDialog() == true)
    {
        // ... use the dialog's results
    }
}

Where the buttonOpenDialog_Click handler was inside a WinForms application. It turns out that this is not only possible but it also works extremely well.
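One wrinkle worth noting: to make the WPF dialog behave as a true modal child of the WinForms host, you can set its owner via the raw Win32 handle. A minimal sketch, assuming the same MyWpfWindow as above and that we are inside a System.Windows.Forms.Form (it needs a using for System.Windows.Interop):

```csharp
// Bridge the WinForms form ('this') and the WPF window via the
// underlying Win32 handle so the dialog centres and blocks correctly.
MyWpfWindow dialog = new MyWpfWindow();
new WindowInteropHelper(dialog).Owner = this.Handle;

// ShowDialog on a WPF Window returns Nullable<bool>,
// hence the explicit comparison with true.
if (dialog.ShowDialog() == true)
{
    // ... use the dialog's results
}
```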

I wanted to use XLinq because the data will be coming in the form of an RSS feed and I want to filter the results locally; XLinq seemed like a perfect fit, and indeed it has been:

IEnumerable<LinkItem> result =
    from i in this.xdoc.Element(rdf + "RDF").Elements(rss + "item")
    select new LinkItem
    {
        Categories = i.Element(dce + "subject").Value,
        Date = (DateTime)i.Element(dce + "date"),
        Description = i.Element(rss + "description").Value,
        Link = i.Element(rss + "link").Value,
        Title = i.Element(rss + "title").Value
    };

This is a simple query to deserialize the RSS feed into a sequence of custom LinkItem instances, which is then used by the WPF UI.
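The LinkItem class itself is not shown above; inferred from the query, it might look something like this (the property names come from the query, everything else about the shape of the class is an assumption):

```csharp
using System;

// A simple bag of properties describing one link in the feed;
// sketched to match the XLinq query above.
public class LinkItem
{
    public string Categories { get; set; }
    public DateTime Date { get; set; }
    public string Description { get; set; }
    public string Link { get; set; }
    public string Title { get; set; }
}
```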

Loading the RSS feed in the first place is as simple as:

this.xdoc = XDocument.Load(feedUrl); // feedUrl holds the feed's address (the actual URL is omitted here)

That forms the basis of the back-end provider.

Next come the static code analysis tools. The main difference between the two tools I have chosen is that FxCop reports on the assemblies (post-build), whereas the Source Analysis tool does what it says on the tin and looks at the source text files (pre-build).

The important aspect to understand here is that I'm not looking for a zero bounce with these tools. It may be possible, and indeed I nearly achieved a zero bounce with the Source Analysis tool, but I was finally thwarted by the way WPF works. As was the case with this code, sometimes you simply cannot obey all the rules and still compile!

The rules are there to help you, not to handcuff or hurt you. Remember to apply only the rules that make your code better, because aiming for 100 percent compliance, in my experience, produces less than optimal code; not to mention that the extra time required to chase down the last 1 or 2 percent of the violations is time better spent shipping new features.

So with that in mind, here is my analysis of the remaining FxCop rules that I have no intention of fixing and why:


Note also that there is a way to suppress the violations in your source code for FxCop (and perhaps other static code analysis tools), using code analysis attributes to provide compile-time hints, which in production code I strongly recommend you do. That way (1) you formally acknowledge the rule violation, (2) you explain why the rule is ignored, and (3) most importantly, you record that explanation as close to the code as possible.
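As a quick sketch of what that looks like, using the SuppressMessage attribute from System.Diagnostics.CodeAnalysis (the particular rule, method name, and justification here are made up for illustration):

```csharp
using System.Diagnostics.CodeAnalysis;

public class FeedReader
{
    // The attribute is conditional on the CODE_ANALYSIS compilation
    // symbol, so it adds nothing to release builds; the Justification
    // records, right next to the code, why the rule is ignored.
    [SuppressMessage(
        "Microsoft.Naming",
        "CA1704:IdentifiersShouldBeSpelledCorrectly",
        MessageId = "Wpf",
        Justification = "'Wpf' is an accepted abbreviation in this code base.")]
    public void LoadWpfDialog()
    {
        // ...
    }
}
```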

(If anyone is interested in how to suppress FxCop rules in this way with attributes, drop me a line and I'll knock a "how-to" post together.)

And finally, the Tufte analysis. This is probably the most subjective part of the work I've done for this plug-in; but I honestly believe that this UI is one of the best I've done.

This is the bit where I would most appreciate feedback:


In the final analysis I'm happy with the result; the code is available in two different forms:

This is the first time that I've released code in this way, so your feedback would be greatly appreciated. Enjoy!


Anonymous said...

It looks great; you've done a good job on the post as a whole and on the plug-in, too. I'll download and install it, but I'm not yet a heavy user of the service, so I'm not sure how much mileage I'll get out of it just yet.

Just one point, in your Tufte analysis you say that each row presents more than 6 pieces of data. I can only count 4:
1. Title
2. URL
3. Tags
4. Post date

Where are the others that I'm missing?

Paul said...

Thanks Derek for your positive feedback; I'm also really glad you asked that question - it allows me to address something that I missed in the initial post.

One of the *missing* pieces of information from your list is the Description, which was not shown in the graphic but is available as a tooltip; the description will probably never be used to choose an item, so it is not present to clutter the initial display, but it is available if need be.

The second missing element is not so much a data item as something that can be inferred from the way the data is presented, which is a huge topic in and of itself. The fact that the items are stacked on top of one another provides an additional piece of data: recency. You can review the dates in the items, but visually stacking them lets you also use recency as an additional factor when picking a link for your post.

I hope that answers your question; there is a lot more to say about adding meaning to data by presenting it in different ways, but I'll save that for a future post.

Thanks again for your interest.


Anonymous said...

Cool thanks :)