Deploy Role with caching enabled to Windows Azure

I am currently setting up my first Azure project. I had a skeleton Solution with a Web Role (MVC 4) and a Worker Role.

To test the whole cycle I wanted to deploy early. We are using the visualstudio.com TFS, which can be coupled to Azure (or actually it’s the other way around), and a continuous deployment build template is available there (AzureContinuousDeployment.11.xaml).

I was experiencing unhealthy instances and timeouts after deployment (the build script took more than an hour to execute because of the deployment steps). After some searching and experimenting I came to the following conclusion.

When you select the Caching option in the Role Properties, you MUST enter a valid Storage Account for deployment on Azure.

By default the configuration for that setting contains the well-known “UseDevelopmentStorage=true” value. While this is fine for running locally, it absolutely will not work when deployed on Azure.
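For reference, the offending setting lives in the service configuration (.cscfg). A sketch of the before and after (the setting name comes from the Caching plugin; verify it against your own generated configuration, and the account name and key are placeholders):

<ConfigurationSettings>
  <!-- Works in the emulator, fails on Azure: -->
  <!-- <Setting name="Microsoft.WindowsAzure.Plugins.Caching.ConfigStoreConnectionString"
                value="UseDevelopmentStorage=true" /> -->
  <!-- Valid for deployment: -->
  <Setting name="Microsoft.WindowsAzure.Plugins.Caching.ConfigStoreConnectionString"
           value="DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=..." />
</ConfigurationSettings>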

The Storage Account Name and Key can be found on the Azure Portal, on the Storage tab. Select a Storage account from the list; then in the App Bar (at the bottom) there is a Manage Keys option where you can copy the relevant values. Note that I used the primary key; I am not sure whether the secondary key works as well.

In hindsight it is logical and obvious, but it took me a good while to figure out that THAT was the problem.

Hope it helps.

Yet Another Logging Framework

This blogpost is a brain dump on a new logging framework I’m planning to write. If you have any additions or suggestions please leave them as comments.

So why do we need yet another logging framework? Well, because I cannot find in others what I think is important, and I also want to leverage the Diagnostics Trace code I have already written.

What scenarios should this Logging Framework be able to handle?

  • Easy to use.
    It should be easy to configure and easy to call. Any provider based mechanism (log targets) is bound to have config associated with it, but at least give good feedback on any problems that might occur. Convention over configuration and MEF might ease the problems.
  • One liner.
    When using it, you should only have to write one line of code. This is not always feasible but it is a good starting point. Why is this important? Because you want to make it easy on the poor developer that is using your ‘yet another logging framework’. Aspects (PostSharp) can also help here.
    On the other hand, most applications that use a 3rd party library almost always create a façade for interacting with it. So it’s better to have a good API that is easy to integrate into a façade than to have an awkward API in order to force it all into one line of code.
  • Uses the TraceContext (source) information to ‘automatically’ enrich the log entries. The TraceContext now has a method to AddContextTo an Exception, but could be extended to also AddContextTo a LogEntry.
  • Fallback log targets.
    Have a mechanism to fall back on other log targets when a log target is not available. This answers the question: where do you log the exception that you cannot connect to your log target?
  • Integrated .NET support for WCF and ASP.NET.
    Make use of the extra information that lives in those contexts and allow that context to be easily added to the log entry. Both these contexts also support some kind of interception (think Behaviors for WCF and Modules for ASP.NET) to allow automatic logging.
  • Log to multiple log targets at the same time – each with unique filters.
    The default System.Diagnostics TraceListeners will already do this. Also be able to retrieve different types of logs in the code (error log, message log, security audit log etc).
  • Use that same log framework for technical logging as well as functional auditing.
    There is no reason why a logging framework cannot be used for (more) functional logging also, provided they use unique ‘streams’.
  • Different log entry formats.
    Different log targets may require different log entry structs. Still I would prefer to use the existing diagnostics code. This means that those structures have to be serialized and deserialized. I think JSON can be of great help here.
  • Distributed and Correlated.
    You want to be able to correlate log (and trace) entries made by different parts of the system. This allows you to get a good impression to what happened where and why.
  • Support debug-only log entries.
    Conditional compilation on DEBUG. No biggie, just useful.
  • Asynchronous logging.
    This is a tough one. You want async logging in order to minimize the performance impact on the running thread (actually this is more a tracing issue than a logging issue, assuming logging doesn’t output a whole lot of entries per second). But making it asynchronous can also mean that you lose that one vital entry just before the app crashes. More on this later.

Using System.Diagnostics

The idea is that we use as much of the existing logging technology as possible. That means reusing and extending the main classes of the System.Diagnostics namespace. TraceSource can be derived into LogSource and provide the basis for a log target. Each LogSource can specify a collection of TraceListeners. Custom TraceListeners can be used as well as the out of the box ones.
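To make that concrete, here is a first sketch of what a LogSource could look like (names and shape are mine, nothing is final):

using System.Diagnostics;

// Sketch: a LogSource is a TraceSource that knows how to push
// structured log entries through its configured TraceListeners.
public class LogSource : TraceSource
{
    public LogSource(string name)
        : base(name)
    { }

    public void Write(TraceEventType eventType, int id, object logEntry)
    {
        // TraceData hands the entry object to each TraceListener;
        // the listener (or a formatter) decides how to persist it.
        TraceData(eventType, id, logEntry);
    }
}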

But using these TraceListeners means that all log information has to be squeezed through a single string (essentially the worst case). This, coupled with the fact that different log types might require different log entry structures, leads to one conclusion. We have to serialize the complex data into a string so that it can be output by different log targets (Sinks) and mapped to their most appropriate fields.

The use of JSON would be excellent here, also because JSON is somewhat readable even in its serialized form. So you can still make sense of it even when it’s written to a text file. The object structure that is used will be partly fixed: we will need some known fields to extract data needed for further processing. But custom structures can also be easily serialized to JSON and, on the receiving side, easily deserialized into generic data containers (ExpandoObjects) for use in the viewer.
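As an illustration, serializing a (hypothetical) log entry with the DataContractJsonSerializer that ships with .NET could look like this:

using System.IO;
using System.Runtime.Serialization;
using System.Runtime.Serialization.Json;

// Hypothetical log entry with a few 'known' fields; custom data
// could be added by derived types.
[DataContract]
public class LogEntry
{
    [DataMember] public long SequenceNumber { get; set; }
    [DataMember] public string Source { get; set; }
    [DataMember] public string Message { get; set; }
}

public static class LogEntrySerializer
{
    // Serialize the entry to a JSON string so it can be squeezed
    // through the single-string TraceListener API.
    public static string ToJson(LogEntry entry)
    {
        var serializer = new DataContractJsonSerializer(typeof(LogEntry));
        using (var stream = new MemoryStream())
        {
            serializer.WriteObject(stream, entry);
            return System.Text.Encoding.UTF8.GetString(stream.ToArray());
        }
    }
}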

Formatting this complex data into something that makes sense for the specific log target is done when reading the log, not while writing it. This not only saves a little performance hit while writing the log entry, it also allows for a more versatile viewer.

Performance

One of the obvious ways to decouple the performance costs of tracing and logging is to move the processing of the log entry onto a background thread. Only the data gathering takes place on the active thread; all other operations will be done on the background thread.

The trouble with this is that you can lose potentially critical log entries just before your application crashes. One possible way to have the best of both worlds is to use the log level (Critical, Error, Warning and Info) as an indication of priority. That could mean that Critical log entries are always logged on the active thread. The other levels are processed by the background thread, starting with Error, and with Warning and Info as least significant.
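A minimal sketch of that idea (assuming .NET 4’s BlockingCollection and reusing the LogEntry sketch from above; a real implementation also needs flushing and shutdown handling):

using System.Collections.Concurrent;
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;

public class AsyncLogWriter
{
    private readonly BlockingCollection<LogEntry> _queue = new BlockingCollection<LogEntry>();
    private long _sequence;

    public AsyncLogWriter()
    {
        // One background consumer drains the queue.
        Task.Factory.StartNew(() =>
        {
            foreach (LogEntry entry in _queue.GetConsumingEnumerable())
            {
                WriteEntry(entry);
            }
        }, TaskCreationOptions.LongRunning);
    }

    public void Write(TraceEventType level, LogEntry entry)
    {
        // The sequence number preserves the original order across threads.
        entry.SequenceNumber = Interlocked.Increment(ref _sequence);

        if (level == TraceEventType.Critical)
        {
            // Critical entries must not be lost: write on the calling thread.
            WriteEntry(entry);
        }
        else
        {
            _queue.Add(entry);
        }
    }

    private void WriteEntry(LogEntry entry)
    {
        // Hand off to the LogSource / TraceListeners here.
    }
}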

We have to provide some way to identify the order of these entries (it can be a simple sequence number) in order to be able to view them in the correct order. Gaps in the sequence can be detected and displayed in the viewer. This mechanism will also make it easy to merge log ‘files’ from different machines into one.

If we take formatting out of the write-a-log-entry process, we might also need to revisit the Tracing code we have so far in order to make that option available in Tracing too.

Reading Log entries

For each (type of) log target a Sink is needed that knows how to put the log entry data into its storage. Think, for instance, of the Enterprise Library Logging block, Log4Net, or simply the event log. A TraceListener is implemented for each target that knows how to take that one string and persist it in the most optimal way.

When those (types of) targets also want to play in the log viewer, they also have to expose a Provider: an object that knows how to read log entries from its storage and provide them to the viewer.
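In its simplest form such a provider could be little more than this (hypothetical shape):

using System.Collections.Generic;

// Hypothetical: a Sink writes entries to its storage; a Provider
// reads them back so the viewer can union and sort all sources.
public interface ILogEntryProvider
{
    string Name { get; }
    IEnumerable<LogEntry> ReadEntries();
}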

The viewer will be able to union all log entries from all providers and sort them into the temporal sequence they were written in. Merging of different (machine) sources is also possible.

Of course the viewer would be able to filter and even search through the entries.

I think it would be advantageous to implement an OData REST service as a source for all the log entries. This allows easy access to the log entries for all kinds of purposes and provide a flexible basis for retrieving log entry information for different applications. Both Xml and Json formatting can be supported.

Closing remarks

I am sure that a lot more issues will present themselves once a more detailed design is made and implementation starts. But I think this WILL make a pretty nice logging framework if we can pull it off. Writing this blog post has helped me structure my thoughts on the subject and I hope it was a pleasant read for you, perhaps even an inspiration to tweak the logging framework you are now using.

[WPF] Data Model – View Model – View

This post is based on an interpretation of a pattern called View-View Model-Document or View-View Model-(Data) Model. I did not invent it. I just write this to have a record of what I learned when exploring the design pattern.

The following figure displays the WPF Application Architecture. On the right side a legend explains the meaning of the shapes used in the diagram. The Xaml shape indicates artifacts that are typically created with Xaml, the WPF shape indicates a WPF-specific type, and the class shape indicates a custom class specific to the application in question.

The dashed lines show a Data Binding dependency with the arrow pointing toward the dependency being bound. The solid line with the arrow also displays a dependency but one that is set on the named property. A solid line with a diamond represents containment (diamond being the container). Multiplicity of this containment is indicated with the numbers at the “containee”.

View Rendering

The View Model is set as Content on the View. The ViewModel will provide object instances that drive the View’s content. These instances are usually Data Model types but can also be other view specific types. Through the use of Data Templates the ViewModel type and the Data Model type as well as any other types the View Model might provide are “converted into UI elements”. Each Data Template is written specifically for the object type and has knowledge of the object hierarchy within one instance.
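For example, a Data Template for a hypothetical PersonDataModel (assuming an xmlns mapping called local) could look like this:

<DataTemplate DataType="{x:Type local:PersonDataModel}">
  <StackPanel Orientation="Horizontal">
    <TextBlock Text="{Binding FirstName}" Margin="2" />
    <TextBlock Text="{Binding LastName}" Margin="2" />
  </StackPanel>
</DataTemplate>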

There are two options in how to interpret the Data Model. Some would consider the Data Model to be the collection of all application data (not necessarily counting view or UI specific data). Others would design a Data Model class to manage only one instance of one entity. A Data Model that manages one or more collections of data can be harder to bind against than a Data Model that manages only one entity instance. Either way can work with this architecture, although it must be said that a Data Model that only passes through the information of the entity should be avoided.

With the View Model and the Data Model in place the View can be data bound and the content is rendered in the View based on the View Model Data Template and the Data Model Data Template.

Note: A major drawback of working with Data Templates is the lack of support for Visual Designer Tools (Blend and Visual Studio). These tools will help you design views but not as a Data Template.

Command Handling

Just rendering a view is like a glass window: you can see everything but you can’t do anything with it. A user interacts with an application by clicking and typing: generating events. This architecture proposes to use Commands to route these events to the application (code). WPF predefines several categories of commands that can (and should) be (re)used in your application. The Command Model manages a command and the event it represents. On the one hand it references the Command it manages; on the other hand it references the View Model. Think of the Command Model as the mini-Controller for one Command. When the Command fires the Command Model executes its handler code against the View Model, which might cause view updates (property changed notification).

During binding in the Data Templates the Command must be set to the Command property of the UI element. Note that Command instances can be declared static.
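A bare-bones Command Model might look something like this (my own sketch, not taken from the referenced posts):

using System.Windows.Input;

// Hypothetical mini-controller for one command.
public abstract class CommandModel
{
    private readonly RoutedCommand _command = new RoutedCommand();

    public RoutedCommand Command
    {
        get { return _command; }
    }

    // Derived classes execute the command against the View Model.
    public abstract void OnExecute(object viewModel, ExecutedRoutedEventArgs e);

    public virtual void OnCanExecute(object viewModel, CanExecuteRoutedEventArgs e)
    {
        e.CanExecute = true;
    }
}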

Unit Testing

Because the objects are bound to the view (and Data Templates) there is no dependency from the View Model or Data Model to the View or its UI elements. This means that the View Model, Data Model and Command Model objects can be unit tested very easily without having to resort to UI-record & replaying testing tools.

Wrapping up

My first experiments with this pattern were interesting: it takes a little getting used to, and frequently I had to get above the code for a (helicopter) overview to see what I was doing and where to put the code. I realize that this post might be a bit short and cryptic. I recommend reading the blog posts of Dan Crevier, which also include a sample application in the last post. I think I might have deviated a little from what Dan writes but the idea remains the same: utilize the tremendous power of WPF data binding.

Trace Context Aspects with PostSharp

In my previous post I wrote about a method context information gathering framework I wrote in an attempt to increase the amount of useful information in trace output and exceptions. In this final post about the framework I will discuss the use of a static aspect weaver: PostSharp.

Static Aspect Weaving

I assume that you know what aspects are at least at a conceptual level. The nice thing about static aspects is that their code is injected at compile time, not at runtime. True: static aspects are not configurable, but they are completely transparent (dynamic aspects usually require the client code to create a proxy instead of the actual object).

I use PostSharp, a static aspect weaver, to implement a code attribute that triggers aspect code injection into the assembly. PostSharp works at IL level so it should be usable with any .NET language. While PostSharp provides a generic assembly manipulation framework, it is actually Laos that provides the aspect framework.

When an aspect (a code attribute) is applied to a method, PostSharp (actually Laos) rewrites the IL for that method. It basically relocates the original method IL in a new method with the same name, prefixed by a ‘~’. Then it inserts custom IL that performs the call sequence on your aspect object. Your aspect can be responsible for calling the actual method -using a delegate- (as it is in this example), although there are also flavors of aspects that do not require this.

So a typical (simplified) call stack would look something like this:

void MyClass.MyMethod(string param1)
void MyAspect.OnInvocation(MethodInvocationEventArgs eventArgs)
void delegate.DynamicInvoke(object[] parameters)

void MyClass.~MyMethod(string param1)

The stubbed MyClass.MyMethod routes execution to the aspect (OnInvocation) applied to the method, the aspect code invokes the delegate that points to the original method (or it doesn’t 😉) and the original method (prefixed with ~) executes.

TraceContextAspect

In order to eliminate the custom code you’d have to write to use the TraceContext in our method context information gathering framework, I’ve created a PostSharp/Laos aspect class that intercepts the method call as described above. So instead of making all the calls to the TraceContext yourself in the method code, you simply apply the aspect to the method:

[TraceContextAspect]
public string PrintName(int numberOfTimes)
{
    // method impl.
}

The TraceContextAspect implements the OnInvocation method like so:

public override void OnInvocation(MethodInvocationEventArgs eventArgs)
{
    using (AspectTraceContext ctx = new AspectTraceContext(_methodBase))
    {
        ctx.Initialize(eventArgs.Delegate.Target);
        AddParameters(ctx, eventArgs.GetArguments());
        ctx.TraceMethodEntry();

        try
        {
            // The actual call to the original method.
            eventArgs.ReturnValue = eventArgs.Delegate.DynamicInvoke(eventArgs.GetArguments());

            ctx.SetReturnValue(eventArgs.ReturnValue);
        }
        catch (Exception e)
        {
            ctx.AddContextTo(e);
            throw;
        }
    } // maintains stack and writes method exit and flushes writer.
}

Note that I’ve derived a new class from TraceContext for this specific situation (AspectTraceContext) that takes a MethodBase instance as a parameter in its constructor. The MethodBase instance is handed to you by the PostSharp/Laos framework and represents the method the aspect was placed on. The line calling DynamicInvoke is the actual call to the original method. As you can see, all the custom code needed to set up the TraceContext has now moved to the OnInvocation method implementation.

Conclusion

The use of a static aspect weaver has dramatically simplified the usage of the method context information gathering framework. Tracing useful and rich information from your method has now become a breeze (as it should be ;-).

I hope these last 3 posts have shown you how you can leverage existing technology (System.Diagnostics and PostSharp) to make the most out of your own tracing framework (in this case). I also hope you will be inspired to find new applications for static aspects in your own code. I find that static aspects can really make your life easier while at the same time not making your code (execution paths) more complicated than needed.

You can download the source code here.

A Method Context Information Gathering Framework

Do you also have that feeling when you type in your tracing code that it is too cumbersome and too much hassle to get it right? I mean really trace the information that is useful for finding faults, for instance. And when you log an exception, even when you write out all the information the exception offers, it is barely enough to really understand what went wrong?


That is why I wrote this framework. This framework tries to solve one major problem (and some small ones on the side): getting the runtime information of an executing method into a trace text or an exception.

Note that this post assumes you know about the main classes in System.Diagnostics.


MethodContext


The MethodContext is a class that maintains all running values during the execution of a method. It is created at the beginning of the method and it is Disposed at the end of the method. It collects information about the runtime values of the method parameters, the class reference (this) if it isn’t a static method and its return value. This runtime method context information can be formatted into trace texts or added to an exception the code is about to throw.


The MethodContext also maintains its own call stack and provides access to its calling method context.


For tracing purposes a MethodContext derived class, the TraceContext, adds to this a reference to a MethodTracer and method entry and exit trace methods.


Here’s a typical usage scenario:


public string Repeat(string value, int numberOfTimes)
{
    using(TraceContext ctx = new TraceContext())
    {
        ctx.Initialize(this);
        ctx.AddParameter(value, "value");
        ctx.AddParameter(numberOfTimes, "numberOfTimes");
        ctx.TraceMethodEntry();

        string result = null;

        // method impl.

        ctx.ReturnValue = result;

        return result;
    }  // Dispose is called on ctx, calling TraceMethodExit()

}


Note that the TraceContext (MethodContext) maintains weak references to all instances.


MethodTracer


The MethodTracer instance is created for each TraceContext. The MethodTracer takes a TraceSource and an optional method-level TraceSwitch in its constructor and uses these to filter and output trace text. It implements 2 versions of the FormatTrace method; one instance and one static. The static FormatTrace method can be used by your code to trace custom texts. The TraceContext is located behind the scenes.


The FormatTrace method takes an object parameter as the object to generate a trace text for (along with some other parameters). The method delegates this task to the TraceManager (discussed next) where a collection of ITraceFormatter instances is checked to see if that specific Type is supported.


If the TraceSwitch allows it the formatted text is output to the TraceSource.


TraceManager


The TraceManager is a global (singleton) class that manages several cached classes. One already discussed are the TraceFormatters. These classes are able to generate trace text for a specific Type of object. TraceFormatters can use other TraceFormatters, thus composing the trace text.


By convention the TraceManager will keep track of a TraceSource for each class Type that creates a TraceContext. It will also keep track of optional method TraceSwitch instances that can be configured to fine tune trace levels at method level.


Exceptions


Exceptions caught in the method that created a TraceContext can be tracked using the LastError property on the context. When throwing exceptions you can add the context information of the TraceContext to the exception using the AddContextTo method. This method populates the Data dictionary of an exception instance with the context information. Note that only types that are serializable are added; this is because the Data dictionary doesn’t allow otherwise (exceptions sometimes need to be marshaled across boundaries and that involves serialization).


The following code sample shows a nice way to add runtime information to an exception before propagating it to a higher level.


public string ReadTextFile(string path)
{
    using(TraceContext ctx = new TraceContext())
    {
        ctx.Initialize(this);
        ctx.AddParameter(path, "path");
        ctx.TraceMethodEntry();

        string result = null;

        try
        {
            // method impl.
        }
        catch(IOException ioe)
        {
            ctx.AddContextTo(ioe);
            throw;
        }

        ctx.ReturnValue = result;

        return result;
    }  // Dispose is called on ctx, calling TraceMethodExit()

}


The calling code will receive an exception whose Data dictionary is filled with runtime method context information. A nice extension would be the ability to dump all properties of the instances present in the Data dictionary. That way you should be able to generate comprehensive error log messages.
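Such a dump could be as simple as this (illustrative sketch):

using System;
using System.Collections;
using System.Text;

public static class ExceptionDumper
{
    // Illustrative: write all Data entries of an exception to a string.
    public static string DumpData(Exception e)
    {
        var text = new StringBuilder();

        foreach (DictionaryEntry entry in e.Data)
        {
            text.AppendFormat("{0} = {1}", entry.Key, entry.Value);
            text.AppendLine();
        }

        return text.ToString();
    }
}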


Configuration


The following configuration shows how to set up a Console TraceListener and an EventLog TraceListener for errors, a trace source for several classes, trace switches at class level (trace source) and at method level.


<configuration>
  <system.diagnostics>
    <sharedListeners>
      <!-- Choose your trace output channels -->
      <add name="Console" type="System.Diagnostics.ConsoleTraceListener"
           initializeData="false" />
      <!-- Only Error traces will go to the Event Log -->
      <add name="ErrorEventLog" type="System.Diagnostics.EventLogTraceListener"
           initializeData="Jacobi.Diagnostics.TestApp">
        <filter type="System.Diagnostics.EventTypeFilter" initializeData="Error" />
      </add>
    </sharedListeners>
    <sources>
      <!-- Configure a TraceSource for each class -->
      <!-- Non-configured classes all create a default TraceSource -->
      <source name="Jacobi.Diagnostics.TestApp">
        <listeners>
          <add name="Console" />
          <add name="ErrorEventLog" />
        </listeners>
      </source>
      <source name="Jacobi.Diagnostics.TestApp.TraceAspectTest">
        <listeners>
          <add name="Console" />
          <add name="ErrorEventLog" />
          <remove name="Default" />
        </listeners>
      </source>
    </sources>
    <switches>
      <!-- SourceSwitch settings for classes -->
      <!-- You can specify SourceSwitch settings
          without configuring the TraceSource (<sources>) -->
      <add name="Jacobi.Diagnostics.TestApp" value="Error" />
      <add name="Jacobi.Diagnostics.TestApp.TraceAspectTest" value="All" />
      <!-- TraceSwitch settings for Methods -->
      <add name="Jacobi.Diagnostics.TestApp.TraceAspectTest.WithContext" value="Info" traceFlow="Entry,Exit" />
      <add name="Jacobi.Diagnostics.TestApp.TraceAspectTest.ConcatName" value="Verbose" />
    </switches>
  </system.diagnostics>
</configuration>

Basically this configuration is very similar to the one discussed in my previous post about System.Diagnostics. <sharedListeners> declares all listeners, <sources> lists a TraceSource per class and <switches> configures both class-level switches and method-level switches. Notice that the method-level switches extend the dot-syntax with the method name for the name of the switch and carry an extra traceFlow attribute. The traceFlow attribute allows you to filter the output of method entry and exit traces that are done by the TraceMethodEntry and TraceMethodExit methods on TraceContext.


The next post will investigate a way to get rid of all the custom code you have to write to use the TraceContext. Using a static aspect weaver it is possible to have all that code removed from your method and indicate with a code attribute what methods you want to trace in a completely transparent way.


Download the source code here. Note that this source code also contains the projects for the static aspect weaver that will be discussed in the next post.

Basic System.Diagnostics

The .NET framework has shipped with the System.Diagnostics namespace since version 1.0. My efforts to build a method context information gathering framework on the services of System.Diagnostics has brought me a deeper understanding of its classes and configuration settings. I will talk about my method context information gathering framework in a later post, but first I thought I would get us all on the same page on System.Diagnostics.


System.Diagnostics implements several classes that play a key role in outputting trace text from your application. Better understanding these classes will give you insight into how to extend the existing diagnostic framework in .NET or how to set up the configuration file to make full use of the out-of-the-box functionality.


TraceListener


The TraceListener class receives trace texts from the application and outputs them to the specific channel it was written for. There is a DefaultTraceListener class that outputs its text to the Win32 API OutputDebugString. But there is also an EventLogTraceListener that outputs its text to the Windows event log. There is even an XmlWriterTraceListener that will output Xml to a stream. There are more listeners you can choose from and you can even write your own. Just derive your listener class from the abstract TraceListener base class and implement the abstract methods.
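A minimal custom listener takes very little code; this (illustrative) one prefixes every message:

using System.Diagnostics;

// Minimal custom listener: writes every trace text to the console with a prefix.
public class PrefixedConsoleTraceListener : TraceListener
{
    public override void Write(string message)
    {
        System.Console.Write("[trace] " + message);
    }

    public override void WriteLine(string message)
    {
        System.Console.WriteLine("[trace] " + message);
    }
}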


A TraceListener also maintains an optional Filter. This allows you to fine tune the type of information that a TraceListener actually outputs. For instance, you could put an EventTypeFilter on the EventLogTraceListener to only output Error-type traces to the windows event log.


TraceListener instances are held in a collection; all listeners in the collection receive the same trace text to output. This means that the same trace text can be output on different channels (each channel is represented by a TraceListener) at the same time. This collection can live at the global/static Trace or Debug classes or at a TraceSource.


TraceSource


A TraceSource represents a configurable trace object that maintains its own set of TraceListeners. An associated TraceSwitch (discussed next) controls the trace level for this ‘scope’. Typically a TraceSource is configured in the .config. When the code instantiates a TraceSource with the same name it reads its settings from the .config file. This way you can control what portions of your application code will output trace text.
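In code that comes down to something like this (the source name must match the configuration; Class1 mirrors the configuration shown further below):

using System.Diagnostics;

public class Class1
{
    // Reads its listeners and switch settings from the .config by name.
    private static readonly TraceSource _source =
        new TraceSource("Jacobi.Diagnostics.TestApp.Class1");

    public void DoWork()
    {
        _source.TraceEvent(TraceEventType.Information, 0, "DoWork called.");
    }
}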


TraceSwitch


A TraceSwitch maintains a TraceLevel property that controls the importance of the trace texts passed to the TraceListeners. The usual Error, Warning, Info and Verbose are supported. Typical use is to configure the TraceSwitch in the .config file and when the code instantiates an instance using the same name it reads its settings from the .config file. Although you can use TraceSwitches standalone they are usually associated with a TraceSource (in config). It is also possible to write your own TraceSwitch.


Configuration Settings


A lot of System.Diagnostics functionality is driven by the .config file. Let’s dive right in and look at the following configuration:


<configuration>
  <system.diagnostics>
    <sharedListeners>
      <!-- Choose your trace output channels -->
      <add name="Console" type="System.Diagnostics.ConsoleTraceListener"
           initializeData="false" />
      <!-- Only Error traces will go to the Event Log -->
      <add name="ErrorEventLog" type="System.Diagnostics.EventLogTraceListener"
           initializeData="Jacobi.Diagnostics.TestApp">
        <filter type="System.Diagnostics.EventTypeFilter" initializeData="Error" />
      </add>
    </sharedListeners>
    <sources>
      <!-- Configure a TraceSource for each class -->
      <source name="Jacobi.Diagnostics.TestApp">
        <listeners>
          <add name="Console" />
          <add name="ErrorEventLog" />
        </listeners>
      </source>
      <source name="Jacobi.Diagnostics.TestApp.Class1">
        <listeners>
          <add name="Console" />
          <add name="ErrorEventLog" />
          <remove name="Default" />
        </listeners>
      </source>
    </sources>
    <switches>
      <!-- SourceSwitch settings for classes -->
      <add name="Jacobi.Diagnostics.TestApp" value="Error" />
      <add name="Jacobi.Diagnostics.TestApp.Class1" value="All" />
    </switches>
  </system.diagnostics>
</configuration>

The configuration settings live inside the <system.diagnostics> element. Then we see a <sharedListeners> section. Although it is possible to configure all TraceListener settings separately for each TraceSource, I prefer to maintain a global list of (configured) TraceListeners and refer to them from the TraceSource configuration. <sharedListeners> is that global place. Notice that the EventLogTraceListener has a <filter> defined that only allows Error-type traces to pass to the event log.


The <sources> section allows you to list the configuration settings for all the TraceSources your application uses. If the configuration for an instantiated TraceSource is not found in the .config file, it is shut off by default. So if you expect to see trace output from a specific TraceSource but there isn’t any, 9 out of 10 times you did not configure it right (check the spelling).


Each <source> declares its own collection of TraceListeners, in this case referring to ones declared in <sharedListeners>. As a convention I’ve used the full class names as TraceSource names and TraceSwitch names. But you can also choose a coarser granularity, say at component level or at sub-system level.


You can associate a TraceSwitch with a TraceSource either by using the switchName and switchType attributes on the <source> element, or by simply declaring a switch (<switches><add>) with the same name as the TraceSource; the latter is what my example relies on.


Wrapping up


This quick tour around System.Diagnostics discussed the main classes that enable you to build pretty powerful tracing support into your application. With this information you can already instantiate a TraceSource for each class and configure a matching TraceSwitch. Code inside the class would simply call the TraceSource and it would work. You could configure it to allow specific (types of) information to come through while the rest is blocked, for instance. And although I would encourage anybody to at least take some time to fiddle with a simple console test application to try out these features, it is my experience that you’ll want more in a real application. That is why I built my method context information gathering framework. Although this framework does not add much to the tracing capabilities of System.Diagnostics, it does add a lot to the quality of information that is in the trace texts.


I plan to write about my framework in a future post.


For more information on System.Diagnostics go to MSDN.

Writing a WCF POX Syndication Service

WCF has received some enhancements in the .NET 3.5 framework. It is now possible to use the WCF service framework to write POX services: services that do not use SOAP, but Plain Old Xml.


The Syndication support in WCF is also new to .NET 3.5. It has built-in support to produce Rss 2.0 and Atom 1.0 feeds.


The example I’d like to show you is an Event Log Feed Service. This service produces an Rss 2.0 feed for the Error entries of the Application event log. It is hosted in IIS and can be called by addressing it over the url:


http://[base url]/Service.svc/GetLogEntries


Notice the GetLogEntries after the Service.svc. The GetLogEntries maps to the GetLogEntries method of the service and takes two optional parameters: an entryType (Error, Warning, Information) and a feedType (Rss1, Atom1).


http://[base url]/Service.svc/GetLogEntries?feedType=atom1
http://[base url]/Service.svc/GetLogEntries?entryType=Warning&feedType=atom1


To look at this for yourself download the source code here and install the service in IIS. Make sure the service dll is in the web’s bin folder.


The Service interface is declared using WFC’s ServiceContract attribute.


[ServiceKnownType(typeof(Atom10FeedFormatter))]
[ServiceKnownType(typeof(Rss20FeedFormatter))]
[ServiceContract]
public interface IEventFeedService
{
    [WebGet(UriTemplate="/GetLogEntries?eventType={eventType}&feedType={feedType}")]
    [OperationContract]
    SyndicationFeedFormatter<SyndicationFeed> GetLogEntries(string eventType, string feedType);
}


The return types are declared using the ServiceKnownType attribute. The WebGet attribute makes it possible to call this service using the url (GET). The UriTemplate declares what variations are supported on the url: the method name and its optional parameters. Note that the parameter names of the method match the parameter names in the UriTemplate.


The Service implementation class implements the method of the Service interface (refer to the download for complete source code). Creating the Syndication Feed is a matter of creating SyndicationItem instances and adding them to a SyndicationFeed instance. Finally the method returns either a Rss20FeedFormatter or an Atom10FeedFormatter instance depending on the requested feed format.
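In outline, and following the interface above, the implementation comes down to something like this (a simplified sketch, not the exact code from the download):

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.ServiceModel.Syndication;

public class EventFeedService : IEventFeedService
{
    public SyndicationFeedFormatter<SyndicationFeed> GetLogEntries(string eventType, string feedType)
    {
        // Default to Error entries when no event type was requested.
        EventLogEntryType entryType = String.IsNullOrEmpty(eventType)
            ? EventLogEntryType.Error
            : (EventLogEntryType)Enum.Parse(typeof(EventLogEntryType), eventType, true);

        var items = new List<SyndicationItem>();

        using (var log = new EventLog("Application"))
        {
            foreach (EventLogEntry entry in log.Entries)
            {
                if (entry.EntryType == entryType)
                {
                    items.Add(new SyndicationItem(entry.Source, entry.Message,
                        null, entry.Index.ToString(), entry.TimeWritten));
                }
            }
        }

        var feed = new SyndicationFeed("Application Event Log",
            "Event log entries as a feed", null, items);

        // Pick the formatter based on the requested feed type.
        if (String.Equals(feedType, "atom1", StringComparison.OrdinalIgnoreCase))
        {
            return new Atom10FeedFormatter<SyndicationFeed>(feed);
        }

        return new Rss20FeedFormatter<SyndicationFeed>(feed);
    }
}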


The Service.svc file is used by IIS to determine how to host the service.


<%@ServiceHost Factory="System.ServiceModel.Web.WebServiceHostFactory" Service="EventLogFeedService.EventFeedService" %>


Besides specifying which service class should be hosted, the file also specifies a Factory to create a WCF service Host instance. Note that this is a new class that supports the url addressing of our WebGet enabled service.


Disclaimer: Exposing event log information to external sources can expose a security hole!

Why I think Merging Source Code sucks

Recently I was involved in a pretty large project to do a full and complete source code merge of 2 branches into one. We used TFS in the project and that was a first for me. So perhaps my experience was sub-optimal due to my lack of understanding of TFS, but here are my thoughts on it anyway:




  • For some reason the branch relationships that should have existed between all the files of the two code branches were broken for some of the files. Even when we did a baseless Merge these relations remained broken, even though the manual says that they should be fixed after a baseless merge.


  • Using the Auto Merge “feature” reintroduced some defects that were fixed in one branch. Clearly this is not a “feature” you want to use all that often.


  • A Merge cannot cope with refactoring. Basically you are on your own when you refactor too much of your code and the text-based compare can’t match up the pieces of code that are the same, because they’re too far apart.


  • Merging of generated assets (workflow, dataset, etc.) is a disaster. You would normally just merge the model and let the tool generate the code for the new model. But manually (or automatically) merging the “model” is no easy task.


  • Resolving conflicts in project and solution files is also problematic. Most of the time we just made sure that all the changes of both branches were in the output and later sorted out the deleted files and stuff. Problem is that you cannot see the context of these files (associated files etc).


  • Resolving conflicts in normal source code (C# in this case) was not a walk in the park either. The 3-view compare tool you get to resolve these conflicts has no syntax coloring. It’s basically a schizophrenic notepad.

I think the problem with resolving conflicts is that it is a text-based operation (at least it seems to be). The auto-merge feature has no clue what it is merging and therefore it is no wonder it makes a mess of your source files. What you need is a specific conflict resolver for each type of file (with Text as the default fallback). So if I had a DataSet resolver it would know that this xml schema was in fact a DataSet and it could make (or propose) educated changes to the output. If you had these resolvers with built-in knowledge of what they are merging, I think the result would improve drastically. And it would make me a happy camper again. Up until that day, code merges are a pain for me.


What is your experience with merging code trees?

Ploggle Desktop Application

Ploggle is a community site that lets you publish pictures online. It is free (for 3 accounts with 100 pictures max) and has a pretty decent UI for viewing the pictures. The only pain is submitting your pictures to your Ploggle site. You have to email them in the right order (last in, first out).

Uploading multiple pictures with text using your standard email client just wasn’t working for me. So I decided to do something about that.

I wrote a .NET 2.0 WinForms application that allows you to select your picture files and type a description for each. Order your pictures and send them to Ploggle with one button click.

Now the installer says version 1.0 but in fact it’s more like an alpha version. I’ve only tested it using Outlook (11) to send the emails.

Download the installer (msi) here.

Send any feature requests and bug reports to obiwanjacobi at hotmail dot com.

Consumer Driven Contracts in the BizTalk LOB adapter framework

This post discusses the Consumer Driven Contract pattern and the use of that pattern in the BizTalk LOB adapter framework. Note that the LOB adapters are built on WCF and are not dependent on BizTalk Server. No knowledge of BizTalk Server is required to work with the LOB adapters SDK.

Consumer driven contracts is a pattern that suggests a consumer will only use those parts of a contract published by a provider that are of direct use to that consumer. The consumer will then only have a dependency on a sub-set of the provider contract and will therefore be impacted less by change. Ian Robinson wrote an article on the subject that can be found at Microsoft Msdn or at Martin Fowler’s site. The article suggests that a provider should accommodate the expectations of its consumers in such a way that changes to the service impact the consumer as little as possible, and that independent versioning (between service and consumer, but also between different consumers of the same service) is possible. So now we have two (types of) contracts: a provider contract that communicates all the capabilities of the service, and a consumer contract, the sub-set of the provider contract a consumer actually uses. One service typically has one provider contract (unless it is specifically built to act on more than one –version of a– contract) and may have many consumer contracts.

One of the current technologies that uses the consumer driven contracts pattern is the BizTalk LOB adapters. The LOB adapters are built on WCF and can be used separately from BizTalk Server. The LOB adapter framework is designed to expose legacy (Line of Business) systems as web services. The service author has to implement four aspects to satisfy the framework (a hypothetical sketch in code follows the list):

  1. Connection
    The service implementation should be able to connect to the legacy system.
  2. Metadata browsing
    The service implementation returns a collection of available ‘operations’. One or more of these operations can be selected by the consumer. This is an implementation of the consumer driven contract at operation level.
  3. Metadata resolution
    The operations selected by the consumer need to be resolved to callable methods.
  4. Messaging
    The service needs to be able to receive request messages for the operations and send back response messages (when appropriate).
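To make the four aspects concrete, here is a purely hypothetical interface; the actual SDK defines its own base classes, and this only mirrors the list above:

// Purely illustrative: the real LOB Adapter SDK uses its own base
// classes for these concerns; this interface only mirrors the four aspects.
public interface ILobAdapterService
{
    // 1. Connection
    void Connect(string connectionString);

    // 2. Metadata browsing
    string[] BrowseOperations();

    // 3. Metadata resolution
    object ResolveOperation(string operationId);

    // 4. Messaging
    object Execute(string operationId, object requestMessage);
}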

When a service implements these facilities the framework will generate its Wsdl, perform connection pooling, do transactions and handle security. A consumer of a LOB adapter service can use the new ‘Add Adapter Service Reference’ menu option in Visual Studio to reference an LOB service and select the methods with a new UI. The UI allows you to make a connection to a service, browse and search its metadata and select which methods you want to consume from the service. This new UI is also available in BizTalk server when consuming an adapter service in BizTalk.

Both the Consumer Driven Contract pattern and the LOB adapter SDK are interesting for Service Oriented Architectures. The pattern will reduce coupling between consumer and service, which is good for evolving your SOA, and the LOB adapter SDK will provide you with a framework you can build on when service-enabling those legacy systems.

A blog post on msdn has some more resources on the LOB adapter framework.

WCF: Hosting non-http protocols in IIS 7.0

The new IIS 7.0 allows hosting of multiple protocols. I experimented with hosting a WCF service with Tcp and Http endpoints in IIS 7.0.


I started with creating a new Service Web Project in VS.2005. The project template gives you a Service with one method that echoes the string back prefixed by “Hello:”. I added <serviceMetadata httpGetEnabled="true"/> to the web.config to allow metadata exchange using an HTTP GET (for instance, from a browser).


 Then I created a WinForms client application and referenced the Service url to create the proxy. The client has one textbox and a button. The button-click handler creates a service proxy using the default ctor and calls the web service with the string entered in the textbox. The result returned by the service is displayed in a MessageBox.


So, now I have a simple, plain vanilla, out-of-the-box service and client. I can test if everything is working and should see a message box popup after I pushed the button on the form of the client.


Now we need to configure IIS 7.0 to handle the Tcp protocol as well. It turns out there is no UI in Vista to do this (there should be in Windows Server 2008). But luckily there is a command line tool you can use to get this done. Here’s the command line:


%windir%\system32\inetsrv\appcmd.exe set site "Default Web Site" /+bindings.[protocol='net.tcp',bindingInformation='8080:*']


This command line adds a net.tcp protocol handler to IIS 7.0 (called a binding, not to be mistaken for a WCF binding, which is a different thing altogether) that listens on port 8080. The ‘*’ is a wildcard for the host name: so this handler will handle all tcp traffic on port 8080 no matter the host name specified.


This is a global setting to the “Default Web Site”. Our web application that runs in IIS still has to be told to use that IIS-binding: the tcp handler for port 8080. Here’s the command line to do that:


%windir%\system32\inetsrv\appcmd.exe set app "Default Web Site/<MyAppName>" /enabledProtocols:http,net.tcp


 This will enable the specified protocols for the specified application. Note: replace <MyAppName> with your actual web application name of the service project you created in the beginning.


Important: You have to have admin rights to successfully run these command lines or you’ll get an access denied error.


If you try to run the client again it should still work. But realize that it is still connecting using the http protocol. Now add a new endpoint to the web.config that uses a netTcpBinding (I just copied the existing endpoint and replaced the binding).
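The added endpoint could look like this (service and contract names are placeholders for the ones in your project):

<service name="MyService.Service">
  <endpoint address="" binding="wsHttpBinding" contract="MyService.IService" />
  <!-- New: the same contract, now also reachable over net.tcp -->
  <endpoint address="" binding="netTcpBinding" contract="MyService.IService" />
</service>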


Now we have added the tcp endpoint to the service, it is time to update the client. The easiest way to get the tcp config settings into your client’s app.config file is to update the Service Reference in VS.NET.


Don’t try to run the client now: you’ll get an exception. The reason is that there are now two endpoint elements (under the client element) and you have to tell the service proxy which endpoint configuration to use. So pass the endpoint configuration name (starting with “WsHttpBinding” or “NetTcpBinding”) to the ctor of the service proxy. The client should work on either endpoint.


If you need more information on the subject check out this msdn magazine article:
http://msdn.microsoft.com/msdnmag/issues/07/09/WAS/default.aspx

I’m Back!

I thought it was a good idea to leave bloggingabout.net because I wanted to blog about my interests in programming MIDI and didn’t think it would be useful to the bloggingabout.net audience. So I went to blogspot and got me a blog there. After a little over six months I’ve decided I want to dedicate the blogspot blog to my hobby projects and not intermix it with other stuff I come across at work. So I checked back here at bloggingabout.net and my account is still working.


 So check my “hobby” blog if you’re into programming or using (MIDI) music studio (related) applications:


http://obiwanjacobi.blogspot.com/


It’s nice to be back 😉

[Links] BizTalk Direct Port Bindings

A nice explanation of the direct port binding flavors in BizTalk.


Part 1: http://blogs.msdn.com/kevin_lam/archive/2006/04/18/578572.aspx
Part 2: http://blogs.msdn.com/kevin_lam/archive/2006/04/25/583490.aspx
Part 3: http://blogs.msdn.com/kevin_lam/archive/2006/06/14/631313.aspx
Part 4: http://blogs.msdn.com/kevin_lam/archive/2006/07/07/659214.aspx
Part 5: http://blogs.msdn.com/kevin_lam/archive/2006/07/25/678547.aspx


Have fun.
— Marc

Singleton Generics [updated]

The Singleton pattern is probably the most famous pattern of all. Usually it is implemented as a behaviour of a specific class. But why not let the developer decide how to manage instance lifetimes? The new .NET 2.0 Generics feature gives us just the tools for creating these object lifetime classes.

public static class StaticInstance<T>
    where T : new()
{
    private static T _instance;
    private static object _lock = new object();

    public static T Current
    {
        get
        {
            if (_instance == null)
            {
               lock (_lock)
               {
                   if (_instance == null)
                   {
                       _instance = new T();
                   }
               }
            }

            return _instance;
        }
    }
}

This code manages one instance of Type T in an (AppDomain) static variable, your typical Singleton implementation. Any class can be used as a Singleton now; just call StaticInstance<MyClass>.Current to access the instance of your type ‘MyClass’. Beware though that a Singleton has concurrency issues: multiple threads could access that one instance of your class at the same time.

In an ASP.NET context you often have the need to have "static" information available but private to the current request. Well, simply write another instance manager class such as this one:

public static class HttpContextInstance<T>
     where T : new()
{
     private static string _typeName = typeof(T).FullName;

     public static T Current
     {
          get
          {
              Debug.Assert(HttpContext.Current != null);

              T instance = (T)HttpContext.Current.Items[_typeName];

              if (instance == null)
              {
                  instance = new T();
                  HttpContext.Current.Items[_typeName] = instance;
              }

              return instance;
          }
    }

    public static void Dispose()
    {
        IDisposable instance = HttpContext.Current.Items[_typeName] as IDisposable;

        if (instance != null)
        {
            instance.Dispose();
        }

        HttpContext.Current.Items[_typeName] = null;
    }
}

The instance is stored in the Items collection of the HttpContext, thus making the instance private to just the current web request. I’ve also included a Dispose method to dispose of the instance’s resources when the request is done (global.asax) and clear the slot in the HttpContext Items collection. You could think of other implementations that store instances in Thread Local Storage, the logical CallContext or any other place that might be convenient to you.

Have fun,
Marc Jacobi

 



[UPDATE 14-feb-06]

 

I’d like to point out some of the problems that you may encounter using this approach. The following issues should be taken into account:

  1. A Type specified for T must be able to cope with the concurrency consequences of the instance class implementation. For the StaticInstance example this means that it should synchronize access to its member variables.
  2. The Type (T) must have a public default constructor, and your team could use that default constructor to create their own instances. For some types this is not a real big issue; for others it can introduce hard-to-track-down bugs. If your Type (T) is not designed to be instantiated more than once, implement your own Current property and remove your (default) constructor(s).
  3. All team members should "know" what Type (T) is accessed by which instance class. If one member uses StaticInstance<MyClass>.Current and another uses HttpContextInstance<MyClass>.Current you’ll have 2 instances living two different lifetimes. This is a weakness that can be overcome, as we will discuss next.

Because C# (generics) does not support the typedef keyword (C++: allows defining a new type using other types declaratively), the only way to simplify and hardwire a generics type is to derive from it. So if you use the following code template for instance class implementations, you can fix issue 3 by deriving a new type.

public class StaticInstance<T>
    where T : new()
{
    private StaticInstance()
    {}

    public static T Current
    {
        get
        {
            // [your implementation here]
        }
    }
}

Now, say you use a static instance of MyClass in your application; you can derive a new type to hardwire the T parameter. This also gives you one point of definition for the MyClass singleton and makes it easy to transparently change the instance class backing the singleton.

public class MyClassSingleton : StaticInstance<MyClass>
{}

I hope this update gives you a better overview of the consequences of using this approach.
Keep the questions and suggestions coming.

Greetings,
Marc Jacobi

Object Builder code project

Object Builder is part of the new Enterprise Library 2.0 (EntLib) and the Composite UI Application Block (CAB). Apparently they decided that Object Builder deserved its own project, and it does, for it is a stand-alone reusable component (if you can figure it out, that is ;-).

Here’s the link to the Got Dot Net code project for Object Builder.
http://www.gotdotnet.com/codegallery/codegallery.aspx?id=e915f307-c1c6-47c4-8ea0-cb4f0346fba0

Have fun,
Marc.