Utilizing the Directed Graph document in Visual Studio

Have you noticed that there is a Directed Graph document (.dgml) in Visual Studio? This document type can visualize directed graphs inside VS. It is normally authored with the tools of the Ultimate edition. If you do not have the Ultimate edition, the editor still allows for manual creation of nodes and links, which may be useful in some situations but is cumbersome at best.

The .dgml file follows a simple xml schema and is pretty easy to generate yourself. So I downloaded the .xsd file (you can use the xml namespace as a URL and it will present you with a page of information on this) and generated some classes with xsd.exe.

Now you have to build a DirectedGraph class instance with all the Nodes and Links that represent the information you are trying to visualize. This will require you to write some custom code (a factory method/class for instance) that transforms the data structure of your program into Nodes and Links.
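
Such a factory can stay small. Here is a minimal sketch, assuming the xsd.exe-generated classes are named DirectedGraph, DirectedGraphNode and DirectedGraphLink with Nodes/Links array properties (your generated names may differ); MySchema and its Nodes/Relations are hypothetical stand-ins for your own data structure:

// A minimal sketch of such a factory; all schema-related names are assumptions.
using System.IO;
using System.Linq;
using System.Xml.Serialization;

public static class DgmlFactory
{
    public static void Save(MySchema schema, string path)
    {
        var graph = new DirectedGraph
        {
            // one Node per schema node, one Link per relation between nodes
            Nodes = schema.Nodes
                .Select(n => new DirectedGraphNode { Id = n.Name, Label = n.Name })
                .ToArray(),
            Links = schema.Relations
                .Select(r => new DirectedGraphLink { Source = r.FromName, Target = r.ToName })
                .ToArray()
        };

        // the generated classes carry the dgml namespace, so XmlSerializer
        // produces a file the VS Directed Graph editor can open
        var serializer = new XmlSerializer(typeof(DirectedGraph));
        using (var stream = File.Create(path))
        {
            serializer.Serialize(stream, graph);
        }
    }
}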

I have done this for a project I am working on at home. The data structure I wanted to visualize represents a schema (similar to an xml schema) but each ‘node’ has a whole lot of navigation properties. A small schema looks something like this:

[SchemaNodeMap diagram]

Depending on the properties you set for each Node and Link object you can change Font, Colors, Lines, Labels etc.

Be aware that any changes you make to the document are saved to the .dgml file. If you want these changes reflected back to your own structure, you have to write an interpreter for that – not something I would recommend.

Be sure to explore the capabilities of the .dgml editor – there are some nice features there (Analyzers).

I can use this as a debugging tool to see if my schema is converted correctly into this structure. I needed about 150 lines of very simple code to create my factory.

Hope it helps,
Marc

Deploy Role with caching enabled to Windows Azure

I am currently setting up my first Azure project. I had a skeleton Solution with a Web Role (MVC 4) and a Worker Role.

To test the whole cycle I wanted to deploy early. We are using the visualstudio.com TFS, which can be coupled to Azure (or actually it's the other way around), and a continuous deployment build template is available there (AzureContinuousDeployment.11.xaml).

After deployment I was experiencing unhealthy instances, and timeouts when the build script took more than one hour to execute (due to the deployment steps). After some searching and experimenting I came to the following conclusion.

When you select the Caching option in the Role Properties, you MUST enter a valid Storage Account for deployment on Azure.

By default the configuration for that setting contains the well-known “UseDevelopmentStorage=true” value. While this is fine for running locally it absolutely will not work when deployed on Azure.
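
For reference, the relevant setting in ServiceConfiguration.cscfg looks roughly like this (the setting name belongs to the in-role Caching plugin as I recall it; the role name, account name and key are placeholders):

<!-- ServiceConfiguration.Cloud.cscfg (sketch) -->
<Role name="MyWebRole">
  <ConfigurationSettings>
    <!-- replace UseDevelopmentStorage=true with a real storage account before deploying to Azure -->
    <Setting name="Microsoft.WindowsAzure.Plugins.Caching.ConfigStoreConnectionString"
             value="DefaultEndpointsProtocol=https;AccountName=yourstorageaccount;AccountKey=yourprimarykey" />
  </ConfigurationSettings>
</Role>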

The Storage Account Name and Key can be found in the Azure Portal on the Storage tab. Select a storage account from the list; then in the App Bar (at the bottom) there is a Manage Keys option where you can copy the relevant values. Note that I used the Primary key; I am not sure whether the Secondary key works as well.

In hindsight it is logical and obvious, but it took me a good while to figure out that THAT was the problem.

Hope it helps.

Yet Another Logging Framework

This blog post is a brain dump on a new logging framework I'm planning to write. If you have any additions or suggestions, please leave them as comments.

So why do we need yet another logging framework? Well, because I cannot find in others what I think is important, and I also want to leverage the Diagnostics Trace code I have already written.

What scenarios should this Logging Framework be able to handle?

  • Easy to use.
    It should be easy to configure and easy to call. Any provider based mechanism (log targets) is bound to have config associated with it, but at least give good feedback on any problems that might occur. Convention over configuration and MEF might ease the problems.
  • One liner.
    When using it, you should only have to write one line of code. This is not always feasible but it is a good starting point. Why is this important? Because you want to make it easy on the poor developer who is using your ‘yet another logging framework’. Aspects (PostSharp) can also be of use here.
    On the other hand, most applications that use a 3rd party library almost always create a façade for interacting with it. So it's better to have a good API that is easy to integrate into a façade than to have an awkward API in order to force it all into one line of code.
  • Uses the TraceContext (source) information to ‘automatically’ enrich the log entries. The TraceContext now has a method to AddContextTo an Exception, but could be extended to also AddContextTo a LogEntry.
  • Fallback log targets.
    Have a mechanism to fall back to other log targets when one is not available. This answers the question: where do you log the exception that you cannot connect to your log target?
  • Integrated .NET support for WCF and ASP.NET.
    Make use of the extra information that lives in those contexts and allow that context to be easily added to the log entry. Both these contexts also support some kind of interception (think Behaviors for WCF and Modules for ASP.NET) to allow automatic logging.
  • Log to multiple log targets at the same time – each with unique filters.
    The default System.Diagnostics TraceListeners will already do this. Also be able to retrieve different types of logs in the code (error log, message log, security audit log etc).
  • Use that same log framework for technical logging as well as functional auditing.
    There is no reason why a logging framework cannot be used for (more) functional logging as well, provided they use unique ‘streams’.
  • Different log entry formats.
    Different log targets may require different log entry structures. Still I would prefer to use the existing diagnostics code. This means that those structures have to be serialized and deserialized. I think JSON can be of great help here.
  • Distributed and Correlated.
    You want to be able to correlate log (and trace) entries made by different parts of the system. This allows you to get a good impression of what happened, where, and why.
  • Support debug-only log entries.
    Conditional compilation on DEBUG. No biggie, just useful.
  • Asynchronous logging.
    This is a tough one. You want async logging in order to minimize the performance impact on the running thread (actually this is more a tracing issue than a logging issue, assuming logging doesn't output a whole lot of entries per second). But making it asynchronous can also mean that you lose that one vital entry just before the app crashes. More on this later.

Using System.Diagnostics

The idea is that we use as much of the existing logging technology as possible. That means reusing and extending the main classes of the System.Diagnostics namespace. TraceSource can be derived into LogSource and provide the basis for a log target. Each LogSource can specify a collection of TraceListeners. Custom TraceListeners can be used as well as the out of the box ones.
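
A rough sketch of that idea (not a finished design; SerializeEntry is a placeholder for the JSON serialization discussed below):

// LogSource derives from TraceSource and pushes a serialized log entry
// through the configured TraceListeners.
using System.Diagnostics;

public class LogSource : TraceSource
{
    public LogSource(string name)
        : base(name)
    { }

    public void Write(TraceEventType eventType, int id, object logEntry)
    {
        string serialized = SerializeEntry(logEntry);
        TraceEvent(eventType, id, serialized);
    }

    private static string SerializeEntry(object logEntry)
    {
        // placeholder: the real implementation would serialize to JSON (see below)
        return logEntry.ToString();
    }
}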

But using these TraceListeners means that all log information has to be squeezed through a single string (essentially – worst case). This, coupled with the fact that different log types might require different log entry structures, leads to one conclusion. We have to serialize the complex data into a string so that it can be output by different log targets (Sinks) and mapped to their most appropriate fields.

The use of JSON would be excellent here, also because JSON is somewhat readable even in its serialized form. So you can still make sense of it even when it is written to a text file. The object structure that is used will be partly fixed; we will need some known fields to extract data needed for further processing. But custom structures can also be easily serialized to JSON and, on the receiving side, easily deserialized into generic data containers (ExpandoObjects) for use in the viewer.
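
A small illustration of that idea, assuming Json.NET is used (any JSON serializer would do):

// A partly fixed, partly custom structure is written as a string and read back
// into a generic data container on the viewer side, without sharing a type.
using System;
using System.Dynamic;
using Newtonsoft.Json;
using Newtonsoft.Json.Converters;

public static class LogEntryJsonExample
{
    public static void Run()
    {
        var entry = new
        {
            Timestamp = DateTime.UtcNow,    // well-known field
            Level = "Error",                // well-known field
            Custom = new { OrderId = 42 }   // free-form, target-specific payload
        };

        string json = JsonConvert.SerializeObject(entry);

        // viewer side: deserialize into a generic container (ExpandoObject)
        dynamic readBack = JsonConvert.DeserializeObject<ExpandoObject>(json, new ExpandoObjectConverter());
        Console.WriteLine(readBack.Level);
    }
}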

Formatting this complex data into something that makes sense for the specific log target is done when reading the log, not while writing it. This not only saves a little performance hit while writing the log entry, it also allows for a more versatile viewer.

Performance

One of the obvious ways to decouple the performance costs of tracing and logging is to move the processing of the log entry to a background thread. Only the data gathering takes place on the active thread; all other operations are done on the background thread.

The trouble with this is that you can lose potentially critical log entries just before your application crashes. One possible way to have the best of both worlds is to use the log level (Critical, Error, Warning and Info) as an indication of priority. That could mean that Critical log entries are always logged on the active thread. The other levels are processed by the background thread, starting with Error, then Warning and Info as least significant.
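
A sketch of the priority idea (all names are placeholders, not the framework's API): Critical entries are written synchronously so they cannot be lost in a crash, everything else goes to a background queue.

using System.Collections.Concurrent;
using System.Diagnostics;

public class PrioritizedLogWriter
{
    private readonly ConcurrentQueue<string> _backgroundQueue = new ConcurrentQueue<string>();
    private readonly TraceSource _source;

    public PrioritizedLogWriter(TraceSource source)
    {
        _source = source;
    }

    public void Log(TraceEventType eventType, string serializedEntry)
    {
        if (eventType == TraceEventType.Critical)
        {
            // written on the calling thread - never lost, but costs performance
            _source.TraceEvent(eventType, 0, serializedEntry);
        }
        else
        {
            // picked up by a background thread: Error first, then Warning and Info
            _backgroundQueue.Enqueue(serializedEntry);
        }
    }
}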

We have to provide some way to identify the order of these entries (it can be a simple sequence number) in order to be able to view them in the correct order. Gaps in the sequence can be detected and displayed in the viewer. This mechanism will also make it easy to merge log ‘files’ from different machines into one.

If we take formatting out of the write-a-log-entry process, we might also need to revisit the Tracing code we have so far in order to make that option available in Tracing too.

Reading Log entries

For each (type of) log target a Sink is needed that knows how to put the log entry data into its storage. Think for instance Enterprise Library Logging block or Log4Net or simply the event log. A TraceListener is implemented for each target that knows how to take that one string and persist it in the most optimal way.

When those (types of) targets also want to play in the log viewer, they also have to expose a Provider: an object that knows how to read log entries from its storage and provide them to the viewer.

The viewer will be able to union all log entries from all providers and sort them into the temporal sequence they were written in. Merging of different (machine) sources is also possible.

Of course the viewer would be able to filter and even search through the entries.

I think it would be advantageous to implement an OData REST service as a source for all the log entries. This allows easy access to the log entries for all kinds of purposes and provides a flexible basis for retrieving log entry information for different applications. Both Xml and Json formatting can be supported.
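
A sketch of how such a read-only OData endpoint could look with WCF Data Services (an assumption on my part; LogEntry and LogEntryContext are hypothetical stand-ins for the real log store and providers):

using System.Collections.Generic;
using System.Data.Services;
using System.Data.Services.Common;
using System.Linq;

[DataServiceKey("Id")]
public class LogEntry
{
    public int Id { get; set; }
    public string Message { get; set; }
}

public class LogEntryContext
{
    // the reflection provider exposes each IQueryable property as an entity set
    public IQueryable<LogEntry> LogEntries
    {
        get { return new List<LogEntry>().AsQueryable(); }
    }
}

public class LogEntryDataService : DataService<LogEntryContext>
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        // read-only access; the runtime serves Atom/XML and, on request, JSON
        config.SetEntitySetAccessRule("LogEntries", EntitySetRights.AllRead);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    }
}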

Closing remarks

I am sure that a lot more issues will present themselves once a more detailed design is made and implementation starts. But I think this WILL make a pretty nice logging framework if we can pull it off. Writing this blog post has helped me to structure my thoughts on the subject and I hope it was a pleasant read for you, perhaps even an inspiration to tweak the logging framework you are now using.

The context of anonymous methods/lambda’s is lost on async proxy calls in SL3 (Updated)

EDIT: This issue was caused by using cached singletons for each proxy (see comments). So the context of anonymous methods works correctly; it just appeared it didn't because multiple event handlers (lambdas) were registered to the same proxy instance.

We’re building an SL3 application that gets its data from a WCF (Ria) Service. The application uses a standard generated proxy (SlSvcUtil.exe) and all calls are done using the familiar async pattern: call a begin method and wait for the completed event. In our SL3 application we have some layers on top of this Proxy:

1) The ServiceAgent – manages the service proxy and transparently recreates it when it is in a Faulted state. It subscribes to the completed event on the proxy and promotes this to the caller (repositories).
2) The Repositories – expose centralized and cached data access divided by functional area / domain.
3) The Model (per module) – Each module in the app implements the MVVM pattern and its model accesses the repositories and maps the data from service contracts to module-specific view entities.

Because all data fetching is async, we use an eventing mechanism (similar to INotifyPropertyChanged and INotifyCollectionChanged) to communicate the ‘completed’ event from the proxy upward through the application.

It was in the implementation of the repositories that I first discovered that something was wrong with the context in which our ‘completed’ events were raised. Our implementation looked something like this (pseudo code):

public DataEntity GetDataEntity(Guid id)
{
    DataEntity retVal = new DataEntity();

    using(var proxy = _serviceAgent.GetProxy<RequestDataContract, ResponseDataContract>())
    {
        proxy.RegisterCompletedHandler( (response) =>
            {
                retVal.Property1 = response.Property1;
                retVal.Property2 = response.Property2;
                retVal.Property3 = response.Property3;
            });
        proxy.BeginRequest(new Request(id));
    }
    return retVal;
}

Initially this method returns an empty object that gets mapped to an empty ViewEntity and bound to the UI. When the proxy reports that the service call is completed, the ‘CompletedHandler’ (implemented by the lambda) is called and provides access to the response of the service call. Now the empty DataEntity is filled, these changes are propagated to the ViewEntity (think INotifyPropertyChanged), and in turn the ViewEntity notifies the UI of its changes (also INotifyPropertyChanged). This works, no problem.

Until you place another call to the same repository method while the first is still ‘running’. Then the ‘context’ the lambda needs to fill the retVal is lost and ‘overwritten’ by the second call. So it may very well be that the result of the first call is written to the retVal of the second call. You can imagine the strange behavior you’ll get in your app (and how long it takes to figure out what the problem is ;-).

The solution that I’ve found is to use the userState that the proxy allows you to send with a method call. The pseudo code looks something like this:

public DataEntity GetDataEntity(Guid id)
{
    DataEntity retVal = new DataEntity();

    using(var proxy = _serviceAgent.GetProxy<RequestDataContract, ResponseDataContract>())
    {
        proxy.RegisterCompletedHandler( (response, userState) =>
            {
                DataEntity de = (DataEntity)userState;
                de.Property1 = response.Property1;
                de.Property2 = response.Property2;
                de.Property3 = response.Property3;
            });
        proxy.BeginRequest(new Request(id), retVal);
    }
    return retVal;
}

Now the correct retVal is passed as userState along with the service call to the proxy, and the completed (event) handler has access to it when it is called and can set the property values.

I was very surprised that this occurred in my code and it may very well be that I’m doing things wrong, but I don’t see it. Any suggestions are welcome.

Hope it helps.

Services with a twist

On the project I am currently working on we have several layers of processing going on:

  • External Systems (silos)
    These are the legacy systems that contain all the information.
  • A service layer (WCF)
    These Domain Services expose the legacy systems transparently. Talking to these services gives you no clue which legacy system is used. Sometimes it's more than one.
  • An Enterprise Service Bus (BTS2006R2/ESB)
    Positioned as messaging middleware. For the most part completely transparent to the clients.
  • Client / Front end applications (SL3)
    User applications that consume domain services through the ESB.

In order to let our domain services perform optimally under the many calls they'll receive, and to make them as versatile as possible, we've decided to do two additional things:

  • Support Batching
    Most service methods can handle multiple requests at a time. It's like taking your normal service operation contract and putting a collection around it. This enables the client to (for instance) resolve multiple IDs in one single call / round trip. It is the classical choice between many calls with small messages or fewer calls with larger messages. The client can now choose how it wants to interact with the service.
  • Support Prefetching
    We define a rich data model that each of these domain services works with. These services go beyond just ‘Customers’ or just ‘Orders’. Because all data within a domain service is so related / connected, we felt it would be best to keep it all in one service. But you do not always want all ‘Orders’ and all ‘OrderItems’ and all ‘Products’ for a ‘Customer’. So we allow most service operations to specify what we have called ‘Prefetch paths’. Basically you call the service operation and specify which relations of the root entity that the operation serves up should also be included in the response. So you could call GetOrders with only the ‘OrderItems’ prefetch key; that would result in all Orders and OrderItems for a Customer. The client once again is in control of what data is retrieved to suit its needs (see the sketch after this list).
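
An illustrative sketch of what such a contract could look like (all names are invented for the example; this is not the project's actual contract). It combines batching (arrays of requests and responses) with per-request prefetch paths:

using System;
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class OrderDto
{
    [DataMember] public Guid Id { get; set; }
    [DataMember] public OrderItemDto[] OrderItems { get; set; } // only filled when prefetched
}

[DataContract]
public class OrderItemDto
{
    [DataMember] public string ProductCode { get; set; }
}

[DataContract]
public class GetOrdersRequest
{
    [DataMember] public Guid CustomerId { get; set; }
    [DataMember] public string[] PrefetchPaths { get; set; }    // e.g. { "OrderItems" }
}

[DataContract]
public class GetOrdersResponse
{
    [DataMember] public OrderDto[] Orders { get; set; }
}

[ServiceContract]
public interface IOrderDomainService
{
    // Batching: one call carries many requests; each request carries its own prefetch paths.
    [OperationContract]
    GetOrdersResponse[] GetOrders(GetOrdersRequest[] requests);
}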

We understand that implementing services this way is somewhat non-standard (I have never seen it done before), but we feel that it provides a lot of flexibility to its clients. For a service to be reusable in a number of different contexts, we believe it should be more flexible than your normal, plain-vanilla service. Nonetheless, we would really like some community feedback on this design and would appreciate any suggestions you might have.

Silverlight: Breaking the daisy chain?

This post discusses the consequences of making asynchronous calls in Silverlight (or in any other scenario that lets you pass in event handlers for completion notification).

Everything is asynchronous in Silverlight. With each call you make, you pass down event handlers that are called when the operation is done. When trying to program a sequential execution flow in your Silverlight program, you’ll see the daisy-chain ‘pattern’ emerge. This is where a method starts an asynchronous call, the event handler does some work and starts another asynchronous call then the next event handler performs another asynchronous call, etc. Look at your own Silverlight code and see if you can detect this pattern.

You see your logic spread out over a couple of methods/event handlers. The question is: does this need fixing? From a purist standpoint I would say yes. On the other hand, I can see that a daisy chain might not be the worst thing you have to live with. When the logic is simple enough and following the chain is easy, it is all right to leave it at that. But what if at some point you have to branch off the chain? For instance, you have a condition (if-then-else) that determines whether to call one asynchronous method or (if it is not true) another asynchronous method. Now you have two parallel tracks the execution flow can follow down the chain. This can get messy very quickly.

Another issue to remember is that the thread that is used to call your event handler and notify you of the outcome of the asynchronous call, is not necessarily the same thread that was used to create the UI. So you cannot call into the UI directly from within the event handler. You have to marshal the call using Invoke.
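
For example (the proxy completed event and the TextBlock are made up for the illustration):

using System.ComponentModel;
using System.Windows.Controls;

public class MyView : UserControl
{
    private readonly TextBlock _statusText = new TextBlock();

    public MyView()
    {
        Content = _statusText;
    }

    private void OnRequestCompleted(object sender, AsyncCompletedEventArgs e)
    {
        // the proxy may raise this event on a worker thread; UI elements must be
        // updated on the UI thread, so marshal the update through the Dispatcher
        Dispatcher.BeginInvoke(() => _statusText.Text = "Request completed");
    }
}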

But how do we solve this? One pattern that comes to mind is the state table. Define a state for each step in the daisy chain and determine what state to go to next when an event handler is called. But this doesn't do anything for the fragmentation of the code. It's just a different way of cutting it into pieces, and I would argue it's less obvious than the original daisy chain (it's also not entirely what the state table was meant for).

You could use anonymous methods (or lambdas) to pull everything into one method, but the question is whether that is more readable and maintainable than a daisy chain.

Although I have not worked out the details of this idea, I was thinking of a base class that would implement some helper methods to perform asynchronous calls and provide event handlers. This should allow you to implement all your code in one method (or as many as you like) and call asynchronous methods and gather their responses almost transparently. Not sure if this idea will work, though.

What I would like is to code out my Silverlight code in a normal sequential flow using “normal” programming paradigms and patterns. But until someone comes up with a good solution for that, we just have to experiment with our own solutions and patterns.

 

How to implement catch (Exception e)?

How often did you see a C# catch(Exception e) statement in code? How often did you write it yourself?

I know I do it, even when I know I shouldn’t. Why?

Because it's so easy! Doing it right is hard(er), or at least takes much more code that you have to repeat over and over again.

But it's not something that you'd be proud of (I'm not).

So, I thought it was time to change that. But how? I definitely don’t want to rewrite a lot of code for each try-catch-finally block.

First let's take a look at error handling. When do you really handle an error? Almost never, I dare say. I've only encountered one occasion where I really handled an exception (a deadlock exception from SQL Server: I waited a random amount of time and retried – three times. After that I just let the exception bubble up).

What does your error handling code look like? I bet it looks something like this:

    try
    {
        // real code here…
    }
    catch(Exception e)
    {
        Logger.LogException(e);
    }

I don't see the handling part in this ;-) Why do we call this error handling? By the way, don't call throw e in the catch block: it rewrites the call stack and you lose the original one.

But there’s a whole range of exceptions you don’t want to catch. AccessViolationException? ExecutionEngineException?

Those indicate situations you can’t fix anyway.

How about InvalidCastException and NullReferenceException?

Those exceptions indicate a technical error and are usually plain bugs. I wouldn't want to catch those in my code (only at AppDomain level to log them).

The good news is that the BCL team is doing something about this in .NET 4.0. But even in .NET 4.0 catch(Exception e) is still not a good idea.

But how do we handle exceptions the easy way (the catch(Exception) way) but filter on the really important exceptions? We can take the solution of the BCL team one step further.

The following code is not production code but it demonstrates an idea to handle exceptions correctly once and for all.

    public class ErrorHandler
    {
        public delegate void TryCallback();
        public delegate void ExceptionCallback(Exception e);
        public delegate void FinallyCallback(bool? exception);

        public ErrorHandler()
        {
            // add "really" fatal exceptions by default.
            FatalExceptions.Add(typeof(AccessViolationException));
            FatalExceptions.Add(typeof(ExecutionEngineException));
        }

        private List<Type> _fatalExceptions = new List<Type>();
        public IList<Type> FatalExceptions
        {
            get { return _fatalExceptions; }
        }

        public bool IsFatalException(Type exceptionType)
        {
            if (!typeof(Exception).IsAssignableFrom(exceptionType))
            {
                throw new ArgumentException("Specified type is not (derived from) System.Exception.", "exceptionType");
            }

            // compare the Type instances in the list against the given exception type
            return (_fatalExceptions.FindIndex(t => t == exceptionType) != -1);
        }

        public bool? TryCatchFinally(TryCallback @try, ExceptionCallback @catch, FinallyCallback @finally)
        {
            bool? handleException = null;

            if (@try == null)
            {
                throw new ArgumentNullException("@try");
            }

            try
            {
                @try();
            }
            catch (Exception e)
            {
                handleException = HandleException(ref e);

                if (@catch != null && !IsFatalException(e.GetType()))
                {
                    @catch(e);
                }

                if (handleException != null)
                {
                    if (handleException == true)
                    {
                        throw e;
                    }
                    else
                    {
                        throw;
                    }
                }
            }
            finally
            {
                if (@finally != null)
                {
                    @finally(handleException);
                }
            }

            return handleException;
        }

        public bool? HandleException(ref Exception e)
        {
            bool? result = null;

            if (e != null)
            {
                if (IsFatalException(e.GetType()))
                {
                    // throw
                    result = false;
                }
                else
                {
                    // TODO: call EntLib exception policy

                    result = false; // for now
                }
            }

            return result;
        }
    }

The HandleException method is where it is decided whether and how an exception is handled. This is also the place to integrate EntLib if you desire. The return value of HandleException can be null (do nothing), false (call throw) or true – meaning the exception has been replaced (exception wrapping) and throw e should be called. You could elaborate the catch callback to include retries of the @try code when you actually handle an exception (like the deadlock example earlier).

You could use this code as follows:

    public void MethodThatCouldGoWrong(string someParameter)
    {
        ErrorHandler errorHandler = new ErrorHandler();
        errorHandler.FatalExceptions.Add(typeof(InvalidCastException));
        errorHandler.FatalExceptions.Add(typeof(NullReferenceException));

        errorHandler.TryCatchFinally(
            delegate()  // try
            {
                // do something here that causes an exception
            },
            delegate(Exception e) // catch
            {
                // handle the exception e
            },
            null    // finally
            );
    }

This code will not call the catch callback on AccessViolationException, ExecutionEngineException, InvalidCastException and NullReferenceException.

You probably don't want to instantiate the ErrorHandler class each time you need it – you could make it static as long as you add all fatal exceptions during initialization of that static instance. Then it's a matter of calling the TryCatchFinally method and doing your processing using anonymous delegates (I think in this case they're more readable than lambdas). You can even pass null for the @catch callback if you don't have any custom handling to perform but still want your exceptions ‘handled’.

So it's a start. Maybe not perfect.

Thoughts?

[BAM] PivotTable names must be unique

More a note to self than a serious blog post (haven’t got the time to do screen shots and stuff).

When creating BAM views in Excel, you can copy the initial PivotTable that is generated to create multiple predefined ‘views’. To copy the PivotTable, select it (go to the edge until you get a solid arrow cursor), copy it (Ctrl+C), select a free cell well below the existing PivotTable and paste (Ctrl+V). Right-click in the PivotTable and select Table Options… to give it a name. This name must be unique across the workbook. Otherwise the PivotTable will not be linked to a cube when exported (although it all seems to work in Excel) and your view will be missing from the Aggregations node in the BAM Portal navigation pane.

[WPF] Data Model – View Model – View

This post is based on an interpretation of a pattern called View-View Model-Document or View-View Model-(Data) Model. I did not invent it. I just write this to have a record of what I learned when exploring the design pattern.

The following figure displays the WPF Application Architecture. On the right side a legend explains the meaning of the shapes used in the diagram. The Xaml shape indicates artifacts that are typically created with Xaml. The WPF shape indicates a WPF-specific type and the class shape indicates a custom class specific to the application in question.

The dashed lines show a Data Binding dependency with the arrow pointing toward the dependency being bound. The solid line with the arrow also displays a dependency but one that is set on the named property. A solid line with a diamond represents containment (diamond being the container). Multiplicity of this containment is indicated with the numbers at the “containee”.

View Rendering

The View Model is set as Content on the View. The ViewModel will provide object instances that drive the View’s content. These instances are usually Data Model types but can also be other view specific types. Through the use of Data Templates the ViewModel type and the Data Model type as well as any other types the View Model might provide are “converted into UI elements”. Each Data Template is written specifically for the object type and has knowledge of the object hierarchy within one instance.

There are two options in how to interpret the Data Model. Some would consider the Data Model to be the collection of all application data (not necessarily counting view or UI specific data). Others would design a Data Model class to manage only one instance of one entity. A Data Model that is modeled to manage one or more collections of data can be harder to bind against than a Data Model that manages only one entity instance. Either way can work with this architecture, although it must be said that creating a Data Model that only passes through the information of the entity should be avoided.

With the View Model and the Data Model in place the View can be data bound and the content is rendered in the View based on the View Model Data Template and the Data Model Data Template.

Note: A major drawback of working with Data Templates is the lack of support for Visual Designer Tools (Blend and Visual Studio). These tools will help you design views but not as a Data Template.

Command Handling

Just rendering a view is like a glass window: you can see everything but you can't do anything with it. A user interacts with the application by clicking and typing: generating events. This architecture proposes to use Commands to route these events to the application (code). WPF predefines several categories of commands that can (and should) be (re)used in your application. The Command Model manages a command and the event it represents. On the one hand it references the Command it manages; on the other hand it references the View Model. Think of the Command Model as the mini-controller for one Command. When the Command fires, the Command Model executes its handler code against the View Model, which might cause view updates (property changed notification).

During binding in the Data Templates the Command must be set to the Command property of the UI element. Note that Command instances can be declared static.
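
A minimal sketch of what such a Command Model could look like (modeled after Dan Crevier's CommandModel; the exact shape in your application may differ):

using System.Windows.Input;

public abstract class CommandModel
{
    private readonly RoutedCommand _command = new RoutedCommand();

    // the Command that is bound to the Command property of the UI element
    public RoutedCommand Command
    {
        get { return _command; }
    }

    // hooked up via a CommandBinding; by default the command is always enabled
    public virtual void OnQueryEnabled(object sender, CanExecuteRoutedEventArgs e)
    {
        e.CanExecute = true;
        e.Handled = true;
    }

    // the handler code that is executed against the View Model when the command fires
    public abstract void OnExecute(object sender, ExecutedRoutedEventArgs e);
}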

Unit Testing

Because the objects are bound to the view (and Data Templates) there is no dependency from the View Model or Data Model to the View or its UI elements. This means that the View Model, Data Model and Command Model objects can be unit tested very easily without having to resort to UI-record & replaying testing tools.

Wrapping up

My first experiments with this pattern were interesting: it takes a little getting used to, and frequently I had to get above the code to see the (helicopter) overview of what I was doing and where to put the code. I realize that this post might be a bit short and cryptic. I recommend reading the blog posts of Dan Crevier, which also include a sample application in the last post. I think I might have deviated a little from what Dan writes but the idea remains the same: utilize the tremendous power of WPF data binding.

Trace Context Aspects with PostSharp

In my previous post I wrote about a method context information gathering framework I built in an attempt to increase the amount of useful information in trace output and exceptions. In this final post about the framework I will discuss the use of PostSharp, a static aspect weaver.

Static Aspect Weaving

I assume that you know what aspects are at least at a conceptual level. The nice thing about static aspects is that their code is injected at compile time, not at runtime. True: static aspects are not configurable, but they are completely transparent (dynamic aspects usually require the client code to create a proxy instead of the actual object).

I use PostSharp, a static aspect weaver, to implement a code attribute that triggers aspect code injection into the assembly. PostSharp works at IL level so it should be usable with any .NET language. While PostSharp provides a generic assembly manipulation framework, it is actually Laos that provides the aspect framework.

When an aspect (a code attribute) is applied to a method, PostSharp (actually Laos) rewrites the IL for that method. It basically relocates the original method IL in a new method with the same name, prefixed by a ‘~’. Then it inserts custom IL that performs the call sequence on your aspect object. Your aspect can be responsible for calling the actual method -using a delegate- (as it is in this example), although there are also flavors of aspects that do not require this.

So a typical (simplified) call stack would look something like this:

void MyClass.MyMethod(string param1)
void MyAspect.OnInvocation(MethodInvocationEventArgs eventArgs)
void delegate.DynamicInvoke(object[] parameters)

void MyClass.~MyMethod(string param1)

The stubbed MyClass.MyMethod routes execution to the aspect (OnInvocation) applied to the method, the aspect code invokes the delegate that points to the original method (or it doesn't ;-), and the original method (prefixed with ~) executes.

TraceContextAspect

In order to eliminate the custom code you’d have to write to use the TraceContext in our method context information gathering framework, I’ve created a PostSharp/Laos aspect class that intercepts the method call as described above. So instead of making all the calls to the TraceContext yourself in the method code, you simply apply the aspect to the method:

[TraceContextAspect]
public string PrintName(int numberOfTimes)
{
    // method impl.
}

The TraceContextAspect implements the OnInvocation method like so:

using (AspectTraceContext ctx = new AspectTraceContext(_methodBase))
{
    ctx.Initialize(eventArgs.Delegate.Target);
    AddParameters(ctx, eventArgs.GetArguments());
    ctx.TraceMethodEntry();

    try
    {
        eventArgs.ReturnValue = eventArgs.Delegate.DynamicInvoke(eventArgs.GetArguments());

        ctx.SetReturnValue(eventArgs.ReturnValue);
    }
    catch (Exception e)
    {
        ctx.AddContextTo(e);
        throw;
    }
} // maintains stack and writes method exit and flushes writer.

Note that I’ve derived a new class from TraceContext for this specific situation (AspectTraceContext) that takes a MethodBase instance as a parameter in its constructor. The MethodBase instance is handed to you by the PostSharp/Laos framework and represents the method the aspect was placed on. The call to eventArgs.Delegate.DynamicInvoke is the actual call to the original method. As you can see, all the custom code needed to set up the TraceContext has now moved to the OnInvocation method implementation.
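
For completeness, a rough outline of the aspect class itself (assuming PostSharp 1.x with Laos; the exact override names may differ between versions):

using System;
using System.Reflection;
using PostSharp.Laos;

[Serializable]
public class TraceContextAspect : OnMethodInvocationAspect
{
    [NonSerialized]
    private MethodBase _methodBase;

    public override void RuntimeInitialize(MethodBase method)
    {
        base.RuntimeInitialize(method);
        _methodBase = method;   // the method the aspect was placed on
    }

    public override void OnInvocation(MethodInvocationEventArgs eventArgs)
    {
        // body as shown in the snippet above
    }
}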

Conclusion

The use of a static aspect weaver has dramatically simplified the usage of the method context information gathering framework. Tracing useful and rich information from your method has now become a breeze (as it should be ;-).

I hope these last three posts have shown you how you can leverage existing technology (System.Diagnostics and PostSharp) to make the most out of your own tracing framework (in this case). I also hope you will be inspired to find new applications for static aspects in your own code. I find that static aspects can really make your life easier while at the same time not making your code (execution paths) more complicated than needed.

You can download the source code here.

A Method Context Information Gathering Framework

Do you also have that feeling when you type in your tracing code that it is too cumbersome and too much hassle to get it right? I mean really trace the information that is useful for finding faults, for instance. And when you log an exception, even when you write out all the information the exception offers, it is barely enough to really understand what went wrong?


That is why I wrote this framework. It tries to solve one major problem (and some small ones on the side): getting the runtime information of an executing method into a trace text or an exception.

Note that this post assumes you know about the main classes in System.Diagnostics.


MethodContext


The MethodContext is a class that maintains all running values during the execution of a method. It is created at the beginning of the method and it is Disposed at the end of the method. It collects information about the runtime values of the method parameters, the class reference (this) if it isn’t a static method and its return value. This runtime method context information can be formatted into trace texts or added to an exception the code is about to throw.


The MethodContext also maintains its own call stack and provides access to its calling method context.


For tracing purposes a MethodContext derived class, the TraceContext, adds to this a reference to a MethodTracer and method entry and exit trace methods.


Here’s a typical usage scenario:


public string Repeat(string value, int numberOfTimes)
{
    using(TraceContext ctx = new TraceContext())
    {
        ctx.Initialize(this);
        ctx.AddParameter(value, "value");
        ctx.AddParameter(numberOfTimes, "numberOfTimes");
        ctx.TraceMethodEntry();

        string result = null;

        // method impl.

        ctx.ReturnValue = result;

        return result;
    }  // Dispose is called on ctx, calling TraceMethodExit()

}


Note that the TraceContext (MethodContext) maintains weak references to all instances.


MethodTracer


The MethodTracer instance is created for each TraceContext. The MethodTracer takes a TraceSource and an optional method-level TraceSwitch in its constructor and uses these to filter and output trace text. It implements 2 versions of the FormatTrace method; one instance and one static. The static FormatTrace method can be used by your code to trace custom texts. The TraceContext is located behind the scenes.


The FormatTrace method takes an object parameter as the object to generate a trace text for (along with some other parameters). The method delegates this task to the TraceManager (discussed next) where a collection of ITraceFormatter instances is checked to see if that specific Type is supported.


If the TraceSwitch allows it the formatted text is output to the TraceSource.


TraceManager


The TraceManager is a global (singleton) class that manages several cached classes. One category, already discussed, is the TraceFormatters. These classes are able to generate trace text for a specific Type of object. TraceFormatters can use other TraceFormatters, thus composing the trace text.


By convention the TraceManager will keep track of a TraceSource for each class Type that creates a TraceContext. It will also keep track of optional method TraceSwitch instances that can be configured to fine tune trace levels at method level.


Exceptions


Exceptions caught in the method that created a TraceContext can be tracked using the LastError property on the context. When throwing exceptions you can add the context information of the TraceContext to the exception using the AddContextTo method. This method populates the Data dictionary of an exception instance with the context information. Note that only types that are serializable are added; this is because the Data dictionary doesn't allow otherwise (exceptions sometimes need to be marshaled across boundaries and that involves serialization).


The following code sample shows a nice way to add runtime information to an exception before propagating it to a higher level.


public string ReadTextFile(string path)
{
    using(TraceContext ctx = new TraceContext())
    {
        ctx.Initialize(this);
        ctx.AddParameter(path, "path");
        ctx.TraceMethodEntry();

        string result = null;

        try
        {
            // method impl.
        }
        catch(IOException ioe)
        {
            ctx.AddContextTo(ioe);
            throw;
        }

        ctx.ReturnValue = result;

        return result;
    }  // Dispose is called on ctx, calling TraceMethodExit()

}


The calling code will receive an exception whose Data dictionary is filled with runtime method context information. A nice extension would be the ability to dump all properties of the instances present in the Data dictionary. That way you should be able to generate comprehensive error log messages.


Configuration


The following configuration shows how to set up a Console TraceListener and an EventLog TraceListener for errors, a trace source for several classes, trace switches at class level (trace source) and at method level.


<configuration>
  <system.diagnostics>
    <sharedListeners>
      <!-- Choose your trace output channels -->
      <add name="Console" type="System.Diagnostics.ConsoleTraceListener"
           initializeData="false" />
      <!-- Only Error traces will go to the Event Log -->
      <add name="ErrorEventLog" type="System.Diagnostics.EventLogTraceListener"
           initializeData="Jacobi.Diagnostics.TestApp">
        <filter type="System.Diagnostics.EventTypeFilter" initializeData="Error" />
      </add>
    </sharedListeners>
    <sources>
      <!-- Configure a TraceSource for each class -->
      <!-- Non-configured classes all create a default TraceSource -->
      <source name="Jacobi.Diagnostics.TestApp">
        <listeners>
          <add name="Console" />
          <add name="ErrorEventLog" />
        </listeners>
      </source>
      <source name="Jacobi.Diagnostics.TestApp.TraceAspectTest">
        <listeners>
          <add name="Console" />
          <add name="ErrorEventLog" />
          <remove name="Default" />
        </listeners>
      </source>
    </sources>
    <switches>
      <!-- SourceSwitch settings for classes -->
      <!-- You can specify SourceSwitch settings
          without configuring the TraceSource (<sources>) -->
      <add name="Jacobi.Diagnostics.TestApp" value="Error" />
      <add name="Jacobi.Diagnostics.TestApp.TraceAspectTest" value="All" />
      <!-- TraceSwitch settings for Methods -->
      <add name="Jacobi.Diagnostics.TestApp.TraceAspectTest.WithContext" value="Info" traceFlow="Entry,Exit" />
      <add name="Jacobi.Diagnostics.TestApp.TraceAspectTest.ConcatName" value="Verbose" />
    </switches>
  </system.diagnostics>
</configuration>

Basically this configuration is very similar to the one discussed in my previous post about System.Diagnostics. <sharedListeners> declares all listeners, <sources> lists a TraceSource per class and <switches> configures both class-level switches and method-level switches. Notice that the method-level switches extend the dot syntax with the method name for the name of the switch and carry an extra traceFlow attribute. The traceFlow attribute allows you to filter the output of method entry and exit traces that are done by the TraceMethodEntry and TraceMethodExit methods on TraceContext.


The next post will investigate a way to get rid of all the custom code you have to write to use the TraceContext. Using a static aspect weaver it is possible to have all that code removed from your method and indicate with a code attribute what methods you want to trace in a completely transparent way.


Download the source code here. Note that this source code also contains the projects for the static aspect weaver that will be discussed in the next post.

Basic System.Diagnostics

The .NET framework has shipped with the System.Diagnostics namespace since version 1.0. My efforts to build a method context information gathering framework on the services of System.Diagnostics have brought me a deeper understanding of its classes and configuration settings. I will talk about my method context information gathering framework in a later post, but first I thought I would get us all on the same page on System.Diagnostics.


System.Diagnostics implements several classes that play a key role in outputting trace text from your application. Better understanding these classes will give you insights into how to extend the existing diagnostic framework in .NET, or how to set up the configuration file to make full use of the out-of-the-box functionality.


TraceListener


The TraceListener class receives trace texts from the application and outputs them to the specific channel it was written for. There is a DefaultTraceListener class that outputs its text via the Win32 API OutputDebugString. But there is also an EventLogTraceListener that outputs its text to the Windows event log. There is even an XmlWriterTraceListener that will output Xml to a stream. There are more listeners you can choose from and you can even write your own. Just derive your listener class from the abstract TraceListener base class and implement the abstract methods.


A TraceListener also maintains an optional Filter. This allows you to fine tune the type of information that a TraceListener actually outputs. For instance, you could put an EventTypeFilter on the EventLogTraceListener to only output Error-type traces to the windows event log.


TraceListener instances are held in a collection and all receive the same trace text to output. This means that the same trace text can be output to different channels (each channel is represented by a TraceListener) at the same time. This collection can live at the global/static Trace or Debug classes or at a TraceSource.


TraceSource


A TraceSource represents a configurable trace object that maintains its own set of TraceListeners. An associated TraceSwitch (discussed next) controls the trace level for this ‘scope’. Typically a TraceSource is configured in the .config. When the code instantiates a TraceSource with the same name it reads its settings from the .config file. This way you can control what portions of your application code will output trace text.
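
A minimal usage sketch (the source name matches the configuration shown below):

using System.Diagnostics;

public class Class1
{
    // the name must match a <source name="..."> entry in the .config file
    private static readonly TraceSource _source =
        new TraceSource("Jacobi.Diagnostics.TestApp.Class1");

    public void DoWork()
    {
        _source.TraceEvent(TraceEventType.Information, 0, "DoWork called.");
        _source.TraceEvent(TraceEventType.Error, 1, "Something went wrong.");
    }
}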


TraceSwitch


A TraceSwitch maintains a TraceLevel property that controls the importance of the trace texts passed to the TraceListeners. The usual Error, Warning, Info and Verbose are supported. Typical use is to configure the TraceSwitch in the .config file and when the code instantiates an instance using the same name it reads its settings from the .config file. Although you can use TraceSwitches standalone they are usually associated with a TraceSource (in config). It is also possible to write your own TraceSwitch.


Configuration Settings


A lot of System.Diagnostics functionality is driven by the .config file. Let's dive right in and look at the following configuration:


<configuration>
  <system.diagnostics>
    <sharedListeners>
      <!-- Choose your trace output channels -->
      <add name="Console" type="System.Diagnostics.ConsoleTraceListener"
           initializeData="false" />
      <!-- Only Error traces will go to the Event Log -->
      <add name="ErrorEventLog" type="System.Diagnostics.EventLogTraceListener"
           initializeData="Jacobi.Diagnostics.TestApp">
        <filter type="System.Diagnostics.EventTypeFilter" initializeData="Error" />
      </add>
    </sharedListeners>
    <sources>
      <!-- Configure a TraceSource for each class -->
      <source name="Jacobi.Diagnostics.TestApp">
        <listeners>
          <add name="Console" />
          <add name="ErrorEventLog" />
        </listeners>
      </source>
      <source name="Jacobi.Diagnostics.TestApp.Class1">
        <listeners>
          <add name="Console" />
          <add name="ErrorEventLog" />
          <remove name="Default" />
        </listeners>
      </source>
    </sources>
    <switches>
      <!-- SourceSwitch settings for classes -->
      <add name="Jacobi.Diagnostics.TestApp" value="Error" />
      <add name="Jacobi.Diagnostics.TestApp.Class1" value="All" />
    </switches>
  </system.diagnostics>
</configuration>

The configuration settings live inside the <system.diagnostics> element. Then we see a <sharedListeners> section. Although it is possible to configure all TraceListener settings separately for each TraceSource, I prefer to maintain a global list of (configured) TraceListeners and refer to them from the TraceSource configuration. <sharedListeners> is that global place. Notice that the EventLogTraceListener has a <filter> defined that only allows Error-type traces to pass to the event log.


The <sources> section allows you to list the configuration settings for all the TraceSources your application uses. If the configuration for an instantiated TraceSource is not found in the .config file, it is shut off by default. So if you expect to see trace output from a specific TraceSource but there isn’t any, 9 out of 10 times you did not configure it right (check the spelling).


Each <source> declares its own collection of TraceListeners, in this case referring to the ones declared in <sharedListeners>. As a convention I've used the full class names as TraceSource names and TraceSwitch names. But you can also choose a coarser granularity, say at component level or at sub-system level.


You can associate a TraceSwitch with a TraceSource either by using the switchName and switchType attributes on the <source> element, or by simply declaring a switch (<switches><add>) with the same name as the TraceSource. I have not used the attributes in my example, and that means that you have to instantiate the TraceSwitches manually in code (with the correct name).


Wrapping up


This quick tour around System.Diagnostics discussed the main classes that enable you to build pretty powerful tracing support into your application. With this information you could already instantiate a TraceSource for each class and configure a matching TraceSwitch. Code inside the class would simply call the TraceSource and it would work. You could configure it to allow specific (types of) information to come through while the rest is blocked, for instance. And although I would encourage anybody to at least take some time to fiddle with a simple console test application to try out these features, it is my experience that you'll want more in a real application. That is why I built my method context information gathering framework. Although this framework does not add much to the tracing capabilities of System.Diagnostics, it does add a lot to the quality of information that is in the trace texts.


I plan to write about my framework in a future post.


For more information on System.Diagnostics go to MSDN.

Writing a WCF POX Syndication Service

WCF has received some enhancements in the .NET 3.5 framework. It is now possible to use the WCF service framework to write POX services: services that do not use SOAP, but Plain Old Xml.


The Syndication support in WCF is also new to .NET 3.5. It has built-in support for producing Rss 2.0 and Atom 1.0 feeds.


The example I would like to show you is an Event Log Feed Service. This service produces an Rss 2.0 feed of the Application event log Error entries. It is hosted in IIS and can be called by addressing it over the URL:


http://[base url]/Service.svc/GetLogEntries


Notice the GetLogEntries after the Service.svc. The GetLogEntries maps to the GetLogEntries method of the service and takes two optional parameters: an entryType (Error, Warning, Information) and a feedType (Rss1, Atom1).


http://[base url]/Service.svc/GetLogEntries?feedType=atom1
http://[base url]/Service.svc/GetLogEntries?entryType=Warning&feedType=atom1


To look at this for yourself download the source code here and install the service in IIS. Make sure the service dll is in the web’s bin folder.


The Service interface is declared using WCF's ServiceContract attribute.


[ServiceKnownType(typeof(Atom10FeedFormatter))]
[ServiceKnownType(typeof(Rss20FeedFormatter))]
[ServiceContract]
public interface IEventFeedService
{
    [WebGet(UriTemplate="/GetLogEntries?eventType={eventType}&feedType={feedType}")]
    [OperationContract]
    SyndicationFeedFormatter<SyndicationFeed> GetLogEntries(string eventType, string feedType);
}


The return types are declared using the ServiceKnownType attribute. The WebGet attribute makes it possible to call this service using the URL (GET). The UriTemplate declares which variations are supported on the URL: the method name and its optional parameters. Note that the parameter names of the method match the parameter names in the UriTemplate.


The Service implementation class implements the method of the Service interface (refer to the download for the complete source code). Creating the syndication feed is a matter of creating SyndicationItem instances and adding them to a SyndicationFeed instance. Finally the method returns either an Rss20FeedFormatter or an Atom10FeedFormatter instance depending on the requested feed format.
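
A sketch of what that feed construction could look like (assuming the RTM System.ServiceModel.Syndication API; this is not the download's actual code):

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.ServiceModel.Syndication;

public static class EventLogFeedBuilder
{
    public static SyndicationFeed BuildErrorFeed()
    {
        var items = new List<SyndicationItem>();
        var log = new EventLog("Application");

        foreach (EventLogEntry entry in log.Entries)
        {
            if (entry.EntryType != EventLogEntryType.Error)
                continue;

            items.Add(new SyndicationItem(
                entry.Source,                               // title
                entry.Message,                              // content
                null,                                       // no alternate link
                entry.Index.ToString(),                     // id
                new DateTimeOffset(entry.TimeWritten)));    // last updated
        }

        return new SyndicationFeed(
            "Application Event Log Errors",
            "Error entries from the Application event log",
            null,
            items);
        // the service method wraps this feed in an Rss20FeedFormatter or Atom10FeedFormatter
    }
}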


The Service.svc file is used by IIS to determine how to host the service.


<%@ServiceHost Factory="System.ServiceModel.Web.WebServiceHostFactory" Service="EventLogFeedService.EventFeedService" %>


Besides specifying which service class should be hosted, the file also specifies a Factory to create a WCF service Host instance. Note that this is a new class that supports the URL addressing of our WebGet-enabled service.


Disclaimer: Exposing event log information to external sources can expose a security hole!

Why I think Merging Source Code sucks

Recently I was involved in a pretty large project to do a full and complete source code merge of two branches into one. We used TFS in the project and that was a first for me. So perhaps my experience was sub-optimal due to my lack of understanding of TFS, but here are my thoughts on it anyway:




  • For some reason the branch relationships that should have existed between all the files of the two code branches were broken for some of the files. Even when we did a baseless Merge these relations remained broken, even though the manual says that they should be fixed after a baseless merge.


  • Using the Auto Merge “feature” reintroduced some defects that were fixed in one branch. Clearly this is not a “feature” you want to use all that often.


  • A Merge cannot cope with refactoring. Basically you are on your own when you refactor too much of your code and the text-based compare can’t match up the pieces of code that are the same, because they’re too far apart.


  • Merging of generated assets (workflow, dataset, etc.) is a disaster. You would normally just merge the model and let the tool generate the code for the new model. But manually (or automatically) merging the “model” is no easy task.


  • Resolving conflicts in project and solution files is also problematic. Most of the time we just made sure that all the changes of both branches were in the output and later sorted out the deleted files and stuff. Problem is that you cannot see the context of these files (associated files etc).


  • Resolving conflicts in normal source code (C# in this case) was not a walk in the park either. The three-pane compare tool you’ll get to resolve these conflicts has no syntax coloring. It’s basically a schizophrenic Notepad.

I think the problem with resolving conflicts is that it is a text-based operation (at least it seems to be). The auto-merge feature has no clue what it is merging and therefore it is no wonder it makes a mess of your source files. What you need is a specific conflict resolver for each type of file (with text as the default fallback). So if I had a DataSet resolver, it would know that this xml schema was in fact a DataSet and it could make (or propose) educated changes to the output. If you had these resolvers with built-in knowledge of what they are merging, I think the result would improve drastically. And it would make me a happy camper again. Up until that day, code merges are a pain for me.


What is your experience with merging code trees?

Ploggle Desktop Application

Ploggle is a community site that lets you publish pictures online. It is free (for 3 accounts with 100 pictures max) and has a pretty decent UI for viewing the pictures. The only pain is submitting your pictures to your Ploggle site. You have to email them in the right order (last in, first out).

Uploading multiple pictures with text using your standard email client just wasn’t working for me. So I decided to do something about that.

I wrote a .NET 2.0 WinForms application that allows you to select your picture files and type a description for each. Order your pictures and send them to Ploggle with one button click.

Now the installer says version 1.0, but in fact it’s more like an alpha version. I’ve only tested it using Outlook (11) to send the emails.

Download the installer (msi) here.

Send any feature requests and bug reports to obiwanjacobi at hotmail dot com.

Consumer Driven Contracts in the BizTalk LOB adapter framework

This post discusses the Consumer Driven Contract pattern and the use of that pattern in the BizTalk LOB adapter framework. Note that the LOB adapters are built on WCF and are not dependent on BizTalk Server. No knowledge of BizTalk Server is required to work with the LOB adapters SDK.

Consumer driven contracts is a pattern that suggests a consumer will only use those parts of a contract published by a provider that are of direct use to that consumer. The consumer will then only have a dependency on a sub-set of the provider contract and will therefore be impacted less by change. Ian Robinson wrote an article on the subject that can be found on Microsoft Msdn or at Martin Fowler’s site. The article suggests that a provider should accommodate the expectations of its consumers in such a way that changes to the service impact the consumers as little as possible and that independent versioning is possible (between service and consumer, but also between different consumers of the same service). So now we have two (types of) contracts: a provider contract that communicates all the capabilities of the service, and a consumer contract, the sub-set of the provider contract a consumer actually uses. One service typically has one provider contract (unless it is specifically built to act on more than one contract, or more than one version of a contract) and may have many consumer contracts.

One of the current technologies that uses the consumer driven contracts pattern is the BizTalk LOB adapter framework. The LOB adapters are built on WCF and can be used separately from BizTalk Server. The LOB adapter framework is designed to expose legacy (Line of Business) systems as web services. The service author has to implement four aspects to satisfy the framework.

  1. Connection
    The service implementation should be able to connect to the legacy system.
  2. Metadata browsing
    The service implementation returns a collection of available ‘operations’. One or more of these operations can be selected by the consumer. This is an implementation of the consumer driven contract at operation level.
  3. Metadata resolution
    The operations selected by the consumer need to be resolved to callable methods.
  4. Messaging
    The service needs to be able to receive request messages for the operations and send back response messages (when appropriate).

When a service implements these facilities the framework will generate its Wsdl, perform connection pooling, handle transactions and take care of security. A consumer of a LOB adapter service can use the new ‘Add Adapter Service Reference’ menu option in Visual Studio to reference a LOB service and select the methods with a new UI. The UI allows you to make a connection to a service, browse and search its metadata and select which methods you want to consume from the service. This new UI is also available in BizTalk Server when consuming an adapter service in BizTalk.

Both the Consumer Driven Contract pattern and the LOB adapter SDK are interesting for Service Oriented Architectures. The pattern reduces coupling between consumer and service, which is good for evolving your SOA, and the LOB adapter SDK provides you with a framework you can build on when service-enabling those legacy systems.

A blog post on msdn has some more resources on the LOB adapter framework.

WCF: Hosting non-http protocols in IIS 7.0

The new IIS 7.0 allows hosting of multiple protocols. I experimented with hosting a WCF service with Tcp and Http endpoints in IIS 7.0.


I started with creating a new Service Web Project in VS.2005. The project template gives you a Service with one method that echoes the string back prefixed by “Hello:”. I added the <serviceMetadata httpGetEnabled="true"/> element to the web.config to allow metadata exchange using an http get (for instance from a browser).


 Then I created a WinForms client application and referenced the Service url to create the proxy. The client has one textbox and a button. The button-click handler creates a service proxy using the default ctor and calls the web service with the string entered in the textbox. The result returned by the service is displayed in a MessageBox.


So, now I have a simple, plain vanilla, out-of-the-box service and client. I can test if everything is working and should see a message box pop up after pushing the button on the client’s form.


Now we need to configure IIS 7.0 to handle the Tcp protocol as well. It turns out there is no UI in Vista to do this (there should be in Windows Server 2008). But luckily there is a command line tool you can use to get this done. Here’s the command line:


%windir%\system32\inetsrv\appcmd.exe set site "Default Web Site" /+bindings.[protocol='net.tcp',bindingInformation='8080:*']


This command line adds a net.tcp protocol handler to IIS 7.0 (called a binding, not to be mistaken for a WCF binding, which is a different thing altogether) that listens on port 8080. The ‘*’ is a wildcard for the host name: so this handler will handle all tcp traffic on port 8080 no matter the host name specified.


This is a global setting for the “Default Web Site”. Our web application that runs in IIS still has to be told to use that IIS binding: the tcp handler for port 8080. Here’s the command line to do that:


%windir%\system32\inetsrv\appcmd.exe set app "Default Web Site/<MyAppName>" /enabledProtocols:http,net.tcp


This will enable the specified protocols for the specified application. Note: replace <MyAppName> with the actual web application name of the service project you created at the beginning.


Important: You have to have admin rights to successfully run these command lines or you’ll get an access denied error.


If you try to run the client again it should still work. But realize that it is still connecting using the http protocol. Now add a new endpoint to the web.config that uses a netTcpBinding (I just copied the existing endpoint and replaced the binding).
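
For illustration, the resulting service configuration could look roughly like this (a sketch; the service, contract and binding names are placeholders, not those from the generated config):

<system.serviceModel>
  <services>
    <service name="MyService" behaviorConfiguration="MyServiceBehavior">
      <!-- original http endpoint -->
      <endpoint address="" binding="wsHttpBinding" contract="IMyService" />
      <!-- copied endpoint with the binding replaced -->
      <endpoint address="" binding="netTcpBinding" contract="IMyService" />
      <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
    </service>
  </services>
</system.serviceModel>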


Now that we have added the tcp endpoint to the service, it is time to update the client. The easiest way to get the tcp config settings into your client’s app.config file is to update the Service Reference in VS.NET.


Don’t try to run the client now: you’ll get an exception. The reason is that there are now two endpoint elements (under the client element) and you have to tell the service proxy which endpoint configuration to use. So pass the endpoint configuration name to the ctor of the service proxy (the generated names start with “WsHttpBinding” or “NetTcpBinding”). The client should work on either endpoint.
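
In the client that comes down to something like this (the proxy type, operation and endpoint configuration name are illustrative; use the names the Service Reference generated for you):

// pass the endpoint configuration name from app.config to the generated proxy
MyServiceClient proxy = new MyServiceClient("NetTcpBinding_IMyService");

string result = proxy.Echo(textBox1.Text);
MessageBox.Show(result);

proxy.Close();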


If you need more information on the subject check out this msdn magazine article:
http://msdn.microsoft.com/msdnmag/issues/07/09/WAS/default.aspx

Using an Interop layer with BizTalk

This is my second project in which I used an Interop layer for calling custom code in BizTalk and it still feels (and smells) good.


The Interop layer is an assembly that isolates the BizTalk specifics from the rest of your code base (I’ve not needed more than a single assembly, but I can imagine you might want to split things up into multiple assemblies for bigger and more diverse projects). The BizTalk specific types are only used in this assembly and requirements like [Serializable] are also implemented here (if not supported by the underlying code base). Note that I do not use the Interop layer for Pipeline Components: each pipeline component is an ‘interop layer’ on its own, so I do take care not to leak any BizTalk specifics into the general code base.


Some examples of the types of classes I found myself writing in the Interop assembly are:




  • Message Factory classes
    Easily construct new message instances using BizTalk (XLANG) specific types.


  • Content Enrichment classes
    Classes that fetch additional information and return it as Xml (string) for use in maps. See also my post on How to Enrich a Message.


  • Configuration classes
    A serializable class that contains configuration settings used in an orchestration; useful when you want the settings an orchestration works with to remain constant during its lifetime.


  • Collection / Iterator classes
    Classes that represent a collection of (domain) information and can be used inside a Loop shape (MoveNext/Current).


  • Xml Aggregator class
    A serializable class that knows how to generically aggregate multiple Xml documents into a single Xml document (of the same schema); a minimal sketch follows below.

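To give an impression of that last item, here is a minimal sketch of such an aggregator (assuming the schema allows the repeating child elements; not the actual implementation):

using System;
using System.Xml;

[Serializable]
public class XmlAggregator
{
    private XmlDocument _aggregate;

    public void Add(XmlDocument document)
    {
        if (_aggregate == null)
        {
            // the first document becomes the basis for the aggregate
            _aggregate = (XmlDocument)document.Clone();
            return;
        }

        // append the children of the incoming root to the aggregate root
        foreach (XmlNode child in document.DocumentElement.ChildNodes)
        {
            _aggregate.DocumentElement.AppendChild(_aggregate.ImportNode(child, true));
        }
    }

    public XmlDocument Result
    {
        get { return _aggregate; }
    }
}
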
The fact that the Interop layer shields the rest of your custom code base from BizTalk specifics makes your (non-interop) code unit testable without having to resort to ‘creative’ injection techniques ;-).


I hope it works just as well for you as it has for me.

Why do we not design a class like this?

I started learning WPF some time ago and the first thing that struck me was the sheer number of properties available on these classes. It turns out that those properties are mainly there to make support for Xaml (declarative programming) easier/better. But even before WPF, WinForms too had its share of properties on its classes.

 

So I wondered why we would want to have so many Font properties on a class in WPF; at least in WinForms they are on a Font object and the Form/Control only has a Font property (the answer probably has to do with Xaml, again).

 

To be more generic: why do we not (in general) design our class properties to be grouped into functionally related groups and make these groups class properties themselves?

 

So instead of (public fields used for clarity):

 

public class MyClass
{
public int MinWidth;
public int MinHeight;
public int MaxWidth;
public int MaxHeight;

// …more properties
}

 

Why don’t we make code like this:

 

public class MyClass
{
public Dimension Dimensions = new Dimension();

public struct Dimension
{
public int MinWidth;
public int MinHeight;
public int MaxWidth;
public int MaxHeight;
}

// …more property groups
}

 

For classes with a lot of properties this would make things a little more navigable for the user of that class (I would think). Where there are several pages of intellisense worth of properties (in the first example) you would be able to reduce that to only a handful of groups. Note that I propose a property group as a struct: it’s merely a container for properties and has no functionality of its own.

 

One of the downsides of this approach could be that accessing the main class from a property implementation in a property group requires passing that reference to all property groups. On the other hand: most property implementations are pretty simple anyway…

 

Thoughts?

Building Reusable Pipeline Components Part 4: Build SourceFileName Component

The Build SourceFileName component builds a value for the %SourceFileName% macro that can be used as part of a Send Port Url, thus allowing customization of the file names.


The following class diagram shows the classes for the BuildSourceFileName component:


BuildSourceFileName classes


The BuildSourceFileName class derives from the PipelineComponentWithResourceBase base class and passes a name prefix and an instance of a ResourceManager to its constructor. The override of the ExecuteInternal method concatenates all the literal or context property values into one string and uses the FileAdapterMessageContext class to set the FILE.ReceivedFileName context property of the message. This context property is used as the value for the %SourceFileName% macro.
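
Without the FileAdapterMessageContext helper, writing that context property directly would look roughly like this (a sketch; the BuildFileName helper is hypothetical, and the namespace is the standard FILE adapter property schema):

// inside ExecuteInternal(IPipelineContext context, IBaseMessage message)
string fileName = BuildFileName(message.Context); // hypothetical: concatenates the configured literal/context values

message.Context.Write(
    "ReceivedFileName",
    "http://schemas.microsoft.com/BizTalk/2003/file-properties",
    fileName);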


Both the Load and Save methods are overridden to manage persistence of the configured properties. Three values are maintained for each configured property (Key, Namespace and IsLiteral) and an extra count value to indicate the number of configured properties.


The BuildSourceFileNameBagNames utility class manages the names for each configured property in the persistence bag.


 Download the source code here.

Building Reusable Pipeline Components Part 3: Outbound File Location Component

The Outbound File Location component allows messages to be placed in a sub folder hierarchy based on Context Properties of the Message.


The following class diagram shows the classes for the OutboundFileLocation component:


Outbound File Location classes


The OutboundFileLocation class derives from the PipelineComponentWithResourceBase base class and passes a name prefix and an instance of a ResourceManager to its constructor. The override of the ExecuteInternal method uses the SystemMessageContext class to check the OutboundTransportType. If the FILE adapter is not used, the component does not perform any operation and simply returns the incoming message.


Otherwise the Outbound Transport Location is parsed for a macro (e.g. %SourceFileName%) and the sub path is built by retrieving the configured property values from the current message’s context. Then the base path (configured in the Send Port), the sub path and the macro are combined and assigned to the Outbound Transport Location context property of the message.


Both the Load and Save methods are overridden to manage persistence of the configured properties. Three values are maintained for each configured property (Key, Namespace and IsLiteral) and an extra count value to indicate the number of configured properties.


The OutboundFileLocationBagNames utility class manages the names for each configured property in the persistence bag.


Download the source code here.

Building Reusable Pipeline Components Part 2: Static Message Context Component

A Static Message Context is a Pipeline Component that has one or more Context Properties configured and applies them to each message that travels through the Pipeline.


The following class diagram shows the classes of the Static Message Context component:


Static Message Context classes


The StaticMessageContext class derives from the PipelineComponentWithResourceBase base class and passes a name prefix and an instance of a ResourceManager to its constructor. The class also overrides the ExecuteInternal method to apply the configured Context Properties to the message being processed. Applying the configured properties is simply a matter of iterating over them and writing or promoting them to the message context of the current message.


Both the Load and the Save methods are overridden to manage persistence of the configured properties. Four values are maintained per configured property (Key, Namespace, Value and Promote) and an extra count value indicates the number of configured properties. The StaticMessageContextBagNames utility class manages the names for each configured property in the persistence bag.


Download the source code here.

Building Reusable Pipeline Components Part 1: The base classes

This is a four part series on building reusable Pipeline Components for BizTalk (2004/2006). The series starts off by laying a foundation with a couple of base classes; the other three installments each cover one component. The following components will be discussed.



  • Static Message Context Component
    A Pipeline Component that can add statically configured properties to a message (context).

  • Outbound File Location Component
    A Pipeline Component that can create sub folders in an Outbound Transport Location based on message (context) properties or literal expressions.

  • Build SourceFileName Component
    A Pipeline Component that can set the value for the %SourceFileName% macro used in Send Ports. This is a convenient way to custom-name the output files.

The Pipeline Components use a collection of properties that are configured at design time in the (BizTalk) Pipeline Designer. Once the Pipeline is deployed no configured properties can be added or removed, but the values of the existing properties can be changed.


A quick analysis of the three components shows two common needs: (1) we can probably make one base class for all Pipeline Components to derive from and (2) all components use (some sort of) configurable Context Properties to build up their internal state.


The following class diagram shows the base classes:


base classes


The PipelineComponentBase class provides the initial layer of Pipeline Component functionality. It (partially) implements the IBaseComponent, IComponent and IPersistPropertyBag interfaces most commonly found in Pipeline Components. Note that the base class does not declare any (class) attributes for registering Pipeline Components. These class attributes should be added to the implementation class that (indirectly) derives from the PipelineComponentBase class.


Both the Name and Description properties of the Pipeline Component are abstract: the derived class has to implement them. The class also introduces an abstract ExecuteInternal (template) method that is called from the IComponent.Execute method implementation. This method has the exact same signature as the IComponent.Execute method but is only called if the Enabled property is set to true (the default). When Enabled is set to false the incoming message is returned and no operations are performed. This allows you to switch Pipeline Components on and off through configuration, even in production.
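
In outline, the Execute implementation of the base class boils down to something like this (a sketch; the class in the download may differ in details such as argument checks and explicit interface implementation):

public IBaseMessage Execute(IPipelineContext context, IBaseMessage message)
{
    if (!Enabled)
    {
        // component is switched off through configuration: pass the message through untouched
        return message;
    }

    return ExecuteInternal(context, message);
}

// implemented by the derived Pipeline Component classes
protected abstract IBaseMessage ExecuteInternal(IPipelineContext context, IBaseMessage message);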


The PipelineComponentWithResourceBase class derives from the PipelineComponentBase class and takes an instance of a ResourceManager in its constructor (with an optional name prefix). The ResourceManager instance passed to the constructor can be taken from the Visual Studio generated Resources class. Through a naming convention it looks for resources in the assembly for the Name and Description properties. The class also implements the IComponentUI interface and its Icon property. The implementation of the Icon property falls back to a generic Icon if no Component-specific Icon resource is found.


The naming convention used for the implementation of the Name, Description and Icon properties is: [prefix] + “Component” + PropertyName. For instance a Pipeline Component class that specifies “MyComponent” as prefix would provide the resource key “MyComponentComponentName” with a string value “My Component” to implement the Name property. Note that you must supply the Name and Description property resources. The Icon property resource is optional and defaults to “ComponentIcon”. Use the prefix if you have more than one Pipeline Component in a single assembly.
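
In code, that convention comes down to something like this (a sketch; the field names are illustrative):

public override string Name
{
    // e.g. resource key "MyComponentComponentName"
    get { return _resourceManager.GetString(_namePrefix + "ComponentName"); }
}

public override string Description
{
    // e.g. resource key "MyComponentComponentDescription"
    get { return _resourceManager.GetString(_namePrefix + "ComponentDescription"); }
}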


The ContextPropertyDef class provides a base class for component specific configurable Context Property instances.


I’ve also included the MessageContextBase class in the class diagram. This class provides a basis for implementing Typed Message Context classes like the SystemMessageContext implemented by BizTalk. The Build SourceFileName Component uses the FileAdapterMessageContext (which derives from MessageContextBase) to access the FILE.ReceivedFileName context property.


Download the source code here


 

I’m Back!

I thought it was a good idea to leave bloggingabout.net because I wanted to blog about my interests in programming MIDI and didn’t think it would be useful to the bloggingabout.net audience. So I went to blogspot and got myself a blog there. After a little over six months I’ve decided I want to dedicate the blogspot blog to my hobby projects and not intermix it with other stuff I come across at work. So I checked back here at bloggingabout.net and my account is still working.


 So check my “hobby” blog if you’re into programming or using (MIDI) music studio (related) applications:


http://obiwanjacobi.blogspot.com/


It’s nice to be back ;-)

[Links] BizTalk Direct Port Bindings

A nice explanation of the direct port binding flavors in BizTalk.


Part 1: http://blogs.msdn.com/kevin_lam/archive/2006/04/18/578572.aspx
Part 2: http://blogs.msdn.com/kevin_lam/archive/2006/04/25/583490.aspx
Part 3: http://blogs.msdn.com/kevin_lam/archive/2006/06/14/631313.aspx
Part 4: http://blogs.msdn.com/kevin_lam/archive/2006/07/07/659214.aspx
Part 5: http://blogs.msdn.com/kevin_lam/archive/2006/07/25/678547.aspx


Have fun.
– Marc

Using Xml Schema for describing Midi System Exclusive message content

In my spare time I’m writing a Midi Console application. This application can manage all the settings of all the midi devices in a studio (samplers, sound modules, drum machines etc.). The onboard user interface of most midi devices is poor (at best), and having an application that allows you to manage settings for multiple devices in a concise way would improve the productivity of the musician.
Most midi devices support what’s called System Exclusive messages. The content of these messages is not standardized by the MMA but can be freely used by any manufacturer for its own purposes. A typical way to get to all the settings of a midi device is through these System Exclusive messages.


The way applications dealt with these device specific messages in the past was to write specific drivers for each device or (at best) have some sort of reference table of where each setting was located in the System Exclusive message. This meant that any application targeting System Exclusive messages would support a fixed set of midi devices. If your device was not on the list, you could not use the application.


After studying the binary content layouts of System Exclusive messages for several manufacturers, it occurred to me that they could be described in a meta language, which could then be used to handle the interpretation and compilation of these device specific messages. I decided to use (or abuse, most would say) Xml Schema (xsd) to describe the content of these messages.


I’m currently writing a Midi Device Schema Specifications document that describes how one would use Xml Schema for Midi System Exclusive messages. For those who are interested: I post new versions to this thread in the mididev newsgroup and the latest version of the specifications can be found here.


Any feedback is most welcome on any aspect of the specifications or the solution in general.

Double check locking optimization

While I was looking for something completely different I stumbled upon the threading best practices on msdn(2).
http://msdn2.microsoft.com/en-us/library/1c9txz50.aspx


I noticed an optimization for the double check locking pattern that I use a lot. Instead of:


if (x == null)
{
    lock (lockObject)
    {
        if (x == null)
        {
            x = y;
        }
    }
}


you can also write


System.Threading.Interlocked.CompareExchange(ref x, y, null);


It performs better but is a bit less readable (in my view).
One of those “nice to know”s.

[BizTalk] How to enrich a message

While working on my first BizTalk 2006 project I came across the need to enrich messages as they were routed through BizTalk. I knew that a Map would be the logical approach because most fields could be copied through directly. Based on some other properties a database lookup should be performed and the resulting data structure added to the destination message. The structure of this data is complex; a hierarchy with multiple, repeating elements.

After trying all sorts of things, covering the Database Lookup functoid and all kinds of Scripting functoid configurations, and searching the net, I finally came across this post: http://www.infusionblogs.com/blogs/syd/archive/2006/05/17/480.aspx. It nearly describes the problem I tried to solve and, sure enough, within 5 minutes I had a working solution.

I made a custom .NET assembly with a class that has a public method to look up the extra information based on the source message properties. This method takes several (in my case) string parameters and returns a string that contains the Xml sub structure that’s to be included in the destination message. Note that the string returned is the OuterXml of the DocumentElement of the XmlDocument. If you return the OuterXml of the XmlDocument you’ll include an <?xml?> PI and the map doesn’t like that (with good reason).

public class DataLookup
{
public string Lookup(string param1, string param2)
{
XmlDocument xmlDoc = new XmlDocument();

// load xml document…

return xmlDoc.DocumentElement.OuterXml;
}
}

Then I created a map in VS.NET, selected the source and destination schemas and placed two Scripting Functoids on the map-surface. The first scripting functoid has its inputs connected to the required input properties from the source schema and the output is connected to the second scripting functoid and is configured to call out to my .NET assembly. The second scripting functoid is configured as an Xslt call template (code is shown below) and its output connects to the root element in the destination schema where the additional information has to go.

<xsl:template name="enrich">
<xsl:param name="data" />
<xsl:value-of disable-output-escaping="yes" select="$data" />
</xsl:template>

So, during a transformation, the source properties are fed into my custom class’ method, a database lookup is performed and an xml document is created which is returned as a string. This Xml as string is then passed into the xslt named template and injected into the destination message at the specified root element.

Source Properties => Functoids  => Destination Xml

It seems a bit strange that you have to convert Xml into a string (to be converted into Xml again by the Xslt template) in an xml technology-based BizTalk map. But hey, it works…

The D-word

Documentation. Good documentation is seldom seen. We all know the nDoc tool that can massage our code comments into a nice looking reference document. But just reference documentation is usually not sufficient. You probably know from your own experience that learning to master a new library or framework takes more than just reference documentation if you want to avoid endless trial and error and searching for that API call that does what you need at that time. You need an overview of the architecture (code design) in order to relate the different classes to each other. You need a description of common usage patterns so you can get started quickly. You need… well, good documentation.

I’m not going to write about what I think is good documentation, I leave that to you to work out ;-). I just want to bring a help authoring tool to your attention that I’ve been missing (like, forever). It is packaged in an unexpected place: the Visual Studio 2005 SDK (VSIP). You use this SDK when you want to extend Visual Studio. You need to register to get it here. Microsoft ships a stripped down version of HelpStudio called HelpStudio Lite, made by Innovasys. If you have ever worked with Html Help Workshop, which has shipped with Visual Studio for some time now, you will find the tool familiar; if you do not have any experience, just take a look at an existing Help file and you will get most of the concepts in no time. The tool will let you generate help content that integrates into the VS.NET help viewer.

When installation of the VS.NET SDK is finished the following screen is shown. Notice that the Help Authoring tool is not installed by default and has to be installed separately. When I ran the installation it looked as if it didn’t work, no progress indication or whatever, but eventually the installer reported a successful install. So be patient.

Visual Studio SDK Installer - HelpStudio Component

When you start HelpStudio Lite a startup window is shown by default, explaining how to create a new project for instance. When creating a new project you get a choice for VS 2003 or VS 2005 compatibility. The following screenshot shows you a new help project in HelpStudio Lite.

HelpStudio New Project

Looks better than Html Help Workshop, doesn’t it? I would suggest you try it and see for yourself.

nDoc Documentation

One of the things you probably want to do (at least I wanted to) is integrate the html files generated by nDoc into your HelpStudio project. nDoc uses Html Help Workshop under the covers and HelpStudio Lite allows an import of a Html Help Workshop project file. Sweet. Not entirely. All content is wrapped in the selected HelpStudio template and this will give you double (contained) headers on each and every page. The solution I found that works best is to only import the Table of Contents (ToC) of your nDoc result and just hand-copy all html files into the build directory (called Default). You could add each html file to the HelpStudio project definition, but that gets labour intensive when handling large numbers and frequent updates. Each imported ToC entry has a link to an .htm file and those will get resolved during the build or you will get errors. You can move the imported ToC around to any place you fancy, without breaking the links to the content.

Deployment

So you’re done. Now you want to package your help file so others can install it (together with your library or framework). Unfortunately the help system is pretty complex and it is not a case of just distributing a help file (like a .chm). The Visual Studio SDK site has an installer for a new VS.NET Project type that lets you build a help deployment setup (and merge module). It is called the Help Integration Wizard and can be downloaded here. When you create a new VS.NET project of this type you walk through a wizard that lets you browse to the help file built by HelpStudio and create an installer for it. Note that when installing help files, registering the help collection takes a really long time.

So I hope you find this useful and that I helped inspire you to write ‘good documentation’. This description is not as detailed as I would like due to time constraints. If you have any questions just post them as comments and I will try to get you started.

TIP: Did you know that you can export VS.NET 2005 class diagrams as .jpg image files? Right click in the designer and choose the export menu option. Now you can easily include class diagrams in your help files: point to the Image Files folder, add them to the help project and drag & drop them into your help Topic.

Singleton Generics [updated]

The Singleton pattern is probably the most famous pattern of all. Usually it is implemented as a behaviour of a specific class. But why not let the developer decide how to manage instance lifetimes? The new .NET 2.0 Generics feature gives us just the tools for creating these object lifetime classes.

public static class StaticInstance<T>
    where T : new()
{
    private static T _instance;
    private static object _lock = new object();

    public static T Current
    {
        get
        {
            if (_instance == null)
            {
               lock (_lock)
               {
                   if (_instance == null)
                   {
                       _instance = new T();
                   }
               }
            }

            return _instance;
        }
    }
}

This code manages one instance of Type T in an (AppDomain) static variable, your typical Singleton implementation. Any class can be used as a Singleton now; just call StaticInstance<MyClass>.Current to access the instance of your type ‘MyClass’. Beware though that being a Singleton instance has concurrency issues, in that multiple threads could access that one instance of your class at the same time.

In an ASP.NET context you often have the need to have "static" information available but private to the current request. Well, simply write another instance manager class such as this one:

public static class HttpContextInstance<T>
     where T : new()
{
     private static string _typeName = typeof(T).FullName;

     public static T Current
     {
          get
          {
              Debug.Assert(HttpContext.Current != null);

              T instance = (T)HttpContext.Current.Items[_typeName];

              if (instance == null)
              {
                  instance = new T();
                  HttpContext.Current.Items[_typeName] = instance;
              }

              return instance;
          }
    }

    public static void Dispose()
    {
        IDisposable instance = HttpContext.Current.Items[_typeName] as IDisposable;

        if (instance != null)
        {
            instance.Dispose();
        }

        HttpContext.Current.Items[_typeName] = null;
    }
}

The instance is stored in the Items collection of the HttpContext, thus making the instance private to just the current web request. I’ve also included a Dispose method to dispose of the instance’s resources when the request is done (global.asax) and clear the slot in the HttpContext items collection. You could think of other implementations that store instances in Thread Local Storage, the logical CallContext or any other place that might be convenient to you.
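
For example, you could call Dispose from Global.asax at the end of each request (the type used for T here is illustrative):

protected void Application_EndRequest(object sender, EventArgs e)
{
    // release the per-request instance (if any) and clear its slot
    HttpContextInstance<MyRequestData>.Dispose();
}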

Have fun,
Marc Jacobi

 



[UPDATE 14-feb-06]

 

I’d like to point out some of the problems that you may encounter using this approach. The following issues should be taken into account:

  1. A Type specified for T must be able to cope with the concurrency consequences of the instance class implementation. For the StaticInstance example this means that it should synchronize access to its member variables.
  2. The Type (T) must have a public default constructor, and your team could use that default constructor to create their own instances. For some types this is not a real big issue; for others it can introduce hard-to-track-down bugs. If your Type (T) is not designed to be instantiated more than once, implement your own Current property and remove your (public default) constructor(s).
  3. All team members should "know" which Type (T) is accessed through which instance class. If one member uses StaticInstance<MyClass>.Current and another uses HttpContextInstance<MyClass>.Current, you’ll have 2 instances living two different lifetimes. This is a weakness that can be overcome, as we will discuss next.

Because C# (generics) does not support the typedef keyword (which in C++ allows defining a new type name for other types declaratively), the only way to simplify and hardwire a generic type is to derive from it. So if you use the following code template for instance class implementations you can fix issue 3 by deriving a new type.

public class StaticInstance<T>
    where T : new()
{
    private StaticInstance()
    {}

    public static T Current
    {
        get
        {
            // [your implementation here]
        }
    }
}

Now, say you use a static instance of MyClass in your application; you can derive a new type to hardwire the T parameter. This also gives you one point of definition for the MyClass singleton and makes it easy to transparently change the instance class backing the singleton.

public class MyClassSingleton : StaticInstance<MyClass>
{}

I hope this update gives you a better overview of the consequences of using this approach.
Keep the questions and suggestions coming.

Greetings,
Marc Jacobi

Object Builder code project

Object Builder is part of the new Enterprise Library 2.0 (EntLib) and the Composite UI Application Block (CAB). Apparently they decided that Object Builder deserved its own project, and it does, for it is a stand-alone reusable component (if you can figure it out, that is ;-).

Here’s the link to the Got Dot Net code project for Object Builder.
http://www.gotdotnet.com/codegallery/codegallery.aspx?id=e915f307-c1c6-47c4-8ea0-cb4f0346fba0

Have fun,
Marc.

DataSet Manager

Why are OR-mappers cool? I don’t know. My experience with them has been limited, and the times I did use them the support for common constructs was very poor (at best): I don’t think it’s a good idea for an OR-mapper to cough up a SQL select query for each instance that needs to go in a collection. The framework should be able to retrieve any hierarchy of data in one round trip (not counting the scenario where you have to decide how big the data packet may get versus how many roundtrips are optimal). Also I believe the OR-mapper problem domain is twofold: 1) you need a good data access layer that supports all the features required for a "good" OR-mapper and 2) you need the OR-mapping itself, mapping relational data to objects and back, which is a relatively simple problem to solve.

So I started thinking about the data access layer. I’m a big fan of typed DataSets. If you know your VS.NET you can whip up a typed DataSet within 5 minutes. My assumption is that the data access layer works with (typed) DataSets. Also I propose to put your application data model in one typed DataSet. For instance, you put the complete data model of Northwind into one typed "Northwind" DataSet. If you have a really big solution you could work with subsets for each sub-system.

Now I want to be able to express my data queries in the same ‘entities’ defined in my application typed DataSet (Northwind). Why would an application programmer have to express his data requests in a different "language" than his application data model? Now we need an entry point that will handle our data requests and knows about our application data model. Enter the DataSet Manager.

The way the DataSet Manager retrieves data is by using the structure defined in the (typed) DataSet. It needs a copy of this structure as its data model. For now we assume that this application model reflects the physical data model in the database. A "real" mapper would allow a certain abstraction here, allowing your application business entities to be subsets of (or combinations of) database entities. The DataSet Manager would have (overloaded) methods to fetch data for a (running) typed application DataSet. For instance: "Fill up the (Northwind.)Employees table", "Fetch all orders handled by this Employee", make changes to the dataset and save.

In the following code examples we assume we have a typed DataSet named Northwind with the complete Northwind database schema.

// create the DataSetManager and initialize the DataModel
DataSetManager mgr = new DataSetManager();
mgr.CreateDataModel(new Northwind(), "Initial Catalog=Northwind");

This code creates a new DataSetManager and initializes the instance with the application data model to use and a connection string to the physical database. Inside the DataSetManager the schema of the (typed) DataSet is analysed and the DataSetManager creates DataAdapters (in this prototype) for each DataTable. The DataSetManager is ready.

// load all Employees data into an empty dataset
Northwind dataSet = new Northwind();
mgr.Fill(dataSet, dataSet.Employees);

Notice the second Northwind DataSet instance. The first was passed to the DataSetManager as a schema; this one is used (by the application programmer) for actually storing data. We ask the DataSetManager to fill up the Employees table and pass it the running DataSet and a reference to the table definition. Because we use typed DataSets both are contained in one instance (that’s what makes a DataSet typed). All methods of the DataSetManager take a DataSet as a first parameter. This allows for separation of data holder and data definition. The DataSetManager builds a "Select * from Employees" query for this method call and writes the results back to the (running) DataSet.

But wait a minute. If EnforceConstraints is set to true this won’t work. The lack of data in the other tables the Employees table has a DataRelation with will cause the constraints to fail. Not quite. The DataSetManager knows about the schema and therefore knows about these relations too. It examines the content of the passed dataset and disables the constraints that ‘point’ to empty tables. If you pass in a dataset with its EnforceConstraints set to false, the DataSetManager does nothing.

// find all Orders for the first Employee
Northwind.EmployeesRow employee = dataSet.Employees[0];
mgr.Expand(dataSet, employee, DataSetManager.FindRelation(dataSet.Employees, dataSet.Orders));

We retrieve a reference to an Employee and ask the DataSetManager to expand for this Employee instance (DataRow) using the relation between Employees and Orders. We use a helper method to find the DataRelation between these tables. Again the order data concerning the employee is placed in the passed dataset.

// change some data
employee.Notes = String.Format("Handled {0} orders.", dataSet.Orders.Count);
// update all changes made to the dataset
mgr.Update(dataSet);

Now we change some data in the dataset (on the employee) and ask the DataSetManager to persist the changes back to the database. Using the standard DataSet.GetChanges method and a DataAdapter, the employee is updated in the database.

These are the method (overloads) the DataSetManager supports:

public DataSet DataModel { get; }
public void CreateDataModel(DataSet dataModel, string connectionString)
public int Fill(DataSet dataSet)
public int Fill(DataSet dataSet, DataTable table)
public int Fill(DataSet dataSet, DataRelation relation)
public int Expand(DataSet dataSet, DataTable table)
public int Expand(DataSet dataSet, DataRow row)
public int Expand(DataSet dataSet, DataRelation relation)
public int Expand(DataSet dataSet, DataRow row, DataRelation relation)
public int Update(DataSet dataSet)
public int Update(DataTable table)

This prototype was written using .NET 1.1 and therefore no generics are used. But in a future version that would certainly be an option for added type safety. One thing that’s missing from this prototype is where-clauses. This is one of the problems I’m still wrestling with. How would you express filter criteria using application entities? I’ve considered Query by Example but abandoned that path. The problem with QbE is that you would introduce yet another instance of the application typed DataSet just for holding the filter criteria. And the other problem is that complex filters are difficult to express using QbE. The only other option would be to define yet another proprietary object model for expressing filter criteria.

Also notice that this is no OR-mapper (yet). It’s just a really intuitive way to work with your application data. The OR-bit would map back and forth between your application typed DataSet and your (domain) objects. The real power of OR-mappers is not in the mapping part but in the data access layer.

So I would really like to hear your comments, suggestions and objections and if you want to see the DataSetManager internals drop me a line at obiwanjacobi@nospam_hotmail.com (remove the nospam_ ;-).

Service Container

Now that the Service Container (Inversion of Control and Dependency Injection) concepts are being adopted by the Microsoft Patterns & Practices group in their CAB and EntLib frameworks, maybe I can talk about a service container implementation I made a few months ago (BTW: a service is a coarse-grained piece of reusable functionality and not necessarily a web service!).

I noticed that everyone who has made a service container (or service locator) framework implemented a custom container interface. But the .NET framework already contains a decent IServiceProvider interface (used throughout System.ComponentModel) that is a suitable client interface. It defines one method, GetService, that takes a Type parameter to specify the service (interface) type. Visual Studio uses a service container when (form/web) controls are hosted on a designer surface.

So I set out to design the basic structure that is needed for building a service container. After examining the System.ComponentModel namespace further, the IComponent, IContainer and ISite interfaces came to light. It appears that these interfaces are used in the standard service container mechanism that is already present in the .NET framework.

The (I)Container manages a collection of (I)Component instances (the Container is NOT a ServiceContainer). When a Component is added to a (and only one) Container it is ‘sited’. An (I)Site is what ties a Component to its Container. Notice that the ISite interface is derived from the IServiceProvider interface. So, whenever the Component needs a service it only has to ask its Site, using the GetService method, for the service it requires.

The default .NET implementation creates a no-functionality Site instance for each Component that is added to a Container. Luckily for us there’s a virtual method on the Container class (CreateSite) we can override to create a Site of our own making. We need to, because so far we still have no ServiceContainer and the default implementation provided by System.ComponentModel doesn’t give us one, either.

The way I see it, the Site provides a way to give each Component a unique context, and because the Site already implements the IServiceProvider interface it is the logical place to put the ServiceContainer. My design explicitly defines a ServiceContainer class, but logically the Site and ServiceContainer could be thought of as one and the same. This means that it is possible to regulate, through its Site, which service implementations each component has access to.
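
To give an impression of the mechanism, a minimal sketch of a Container that hands out its own Site looks like this (the ServiceSite class and the GetServiceFor method are illustrative; in the actual framework the Site delegates to a per-component ServiceContainer):

using System;
using System.ComponentModel;

public class ServiceProviderContainer : Container
{
    protected override ISite CreateSite(IComponent component, string name)
    {
        // hand out our own Site instead of the default no-functionality one
        return new ServiceSite(component, this, name);
    }

    // hypothetical: resolves a service for the given component (e.g. from configuration)
    internal object GetServiceFor(IComponent component, Type serviceType)
    {
        return null;
    }

    private class ServiceSite : ISite
    {
        private readonly IComponent _component;
        private readonly ServiceProviderContainer _container;
        private string _name;

        public ServiceSite(IComponent component, ServiceProviderContainer container, string name)
        {
            _component = component;
            _container = container;
            _name = name;
        }

        public IComponent Component { get { return _component; } }
        public IContainer Container { get { return _container; } }
        public bool DesignMode { get { return false; } }
        public string Name { get { return _name; } set { _name = value; } }

        public object GetService(Type serviceType)
        {
            // a real implementation consults the per-component ServiceContainer
            // and falls back to the parent container when the service is not found
            return _container.GetServiceFor(_component, serviceType);
        }
    }
}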

This means, for example, that if you have a WebPart page and you bootstrap each WebPart (= Component) to a Container, you can control which services are provided to each WebPart. Or if you have a Domain Object Model (= Component) and bootstrap each instance to a Container, you could control the way these instances perform their tracing, because you control the actual Trace Service implementation used for these objects. It must be said that I assume only a Service interface is published to the clients (Components), not its implementation class.

But how does the framework know what services to load for a specific component? The framework looks in the .config file for that. The .config file contains a section that describes services: what classes to use and maybe some service specific configuration settings. It also contains a service profile section. A service profile is a list of services that is loaded into a ServiceContainer. At this point there’s also room to specify custom service config settings. Finally there’s a context binding section. This section maps (for instance) a Component’s name to a service profile that describes the services available for that Component. Future implementations of the framework will probably also include a service policy section to describe settings like lifetime management and other runtime aspects. Lifetime management of the Service instances is not implemented yet. At the moment the Service instance is created (on demand) and cached in the ServiceContainer. Singleton or PerCall (or custom) instance management is something you’ll want to have eventually.

What happens if a Component requests a service that is not available in its Site/ServiceContainer? The framework allows for a Component hierarchy, where a Component may host a Container that contains its child Component instances. So, if a service request cannot be satisfied, it is passed to the parent ServiceContainer and so on, until the root ServiceContainer is reached. This also implies that service clients (Components) must be able to handle the absence of a service they request (handling this scenario can involve throwing an exception, of course ;-).

The root ServiceContainer also contains the Services used by the framework itself. The way the framework obtains its configuration settings is implemented as a service, which gives you the option to replace the configuration source.

Take a look at the gotdotnet workspace (if you’re still interested ;-) where you can download a short ppt and access the source code. Future plans for this framework include incorporating the ObjectBuilder (providing a free DI framework) and providing an alternate Configuration Service for the EntLib configuration app block.

Any thoughts, suggestions or comments are most welcome.