Utilizing the Directed Graph document in Visual Studio

Have you noticed that Visual Studio has a Directed Graph document (.dgml)? This document type can visualize directed graphs inside VS. It is normally authored with the tools of the Ultimate edition. If you do not have the Ultimate edition, the editor still allows manual creation of nodes and links, which may be useful in some situations but is cumbersome at best.

The .dgml file follows a simple XML schema and is pretty easy to generate yourself. So I downloaded the .xsd file (you can browse to the XML namespace as a URL and it will present a page with information on this) and generated some classes with xsd.exe.

Now you have to build a DirectedGraph class instance with all the Nodes and Links that represent the information you are trying to visualize. This will require you to write some custom code (a factory method/class for instance) that transforms the data structure of your program into Nodes and Links.
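As a sketch of what such a factory could look like: the code below walks a (hypothetical) entity structure and emits nodes and links, then serializes the graph with XmlSerializer. The DirectedGraph, DirectedGraphNode and DirectedGraphLink class names are assumptions here; the actual names depend on what xsd.exe generated from the schema, and MyEntity stands in for your own data structure.

```csharp
using System.Collections.Generic;
using System.IO;
using System.Xml.Serialization;

// Hypothetical application data structure we want to visualize.
public class MyEntity
{
    public string Name;
    public List<MyEntity> Children = new List<MyEntity>();
}

public static class DgmlFactory
{
    // Transform entities into a DGML graph and write it to disk.
    public static void Save(IEnumerable<MyEntity> entities, string path)
    {
        var nodes = new List<DirectedGraphNode>();
        var links = new List<DirectedGraphLink>();

        foreach (MyEntity entity in entities)
        {
            // One node per entity...
            nodes.Add(new DirectedGraphNode { Id = entity.Name, Label = entity.Name });

            // ...and one link per parent-child navigation property.
            foreach (MyEntity child in entity.Children)
            {
                links.Add(new DirectedGraphLink { Source = entity.Name, Target = child.Name });
            }
        }

        var graph = new DirectedGraph { Nodes = nodes.ToArray(), Links = links.ToArray() };

        // Serialize using the classes generated by xsd.exe.
        var serializer = new XmlSerializer(typeof(DirectedGraph));
        using (var writer = new StreamWriter(path))
        {
            serializer.Serialize(writer, graph);
        }
    }
}
```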

I have done this for a project I am working on at home. The data structure I wanted to visualize represents a schema (similar to an xml schema) but each ‘node’ has a whole lot of navigation properties. A small schema looks something like this:


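For reference, a hand-written .dgml file is small enough to type yourself; a minimal graph with two nodes and one link looks like this (the namespace is the one VS uses for DGML):

```xml
<?xml version="1.0" encoding="utf-8"?>
<DirectedGraph xmlns="http://schemas.microsoft.com/vs/2009/dgml">
  <Nodes>
    <Node Id="Schema" Label="Schema" />
    <Node Id="Field" Label="Field" />
  </Nodes>
  <Links>
    <Link Source="Schema" Target="Field" Label="contains" />
  </Links>
</DirectedGraph>
```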
Depending on the properties you set for each Node and Link object you can change Font, Colors, Lines, Labels etc.

Be aware that any changes you make to the document are saved to the .dgml file. If you want these changes reflected back to your own structure, you have to write an interpreter for that – not something I would recommend.

Be sure to explore the capabilities of the .dgml editor – there are some nice features there (Analyzers).

I can use this as a debugging tool to see if my schema is converted correctly into this structure. I needed about 150 lines of very simple code to create my factory.

Hope it helps,

Silverlight: Breaking the daisy chain?

This post discusses the consequences of making asynchronous calls in Silverlight (or in any other scenario where you pass in event handlers for completion notification).

Everything is asynchronous in Silverlight. With each call you make, you pass in event handlers that are called when the operation is done. When you try to program a sequential execution flow in your Silverlight program, you’ll see the daisy-chain ‘pattern’ emerge: a method starts an asynchronous call, its event handler does some work and starts another asynchronous call, that handler performs yet another asynchronous call, and so on. Look at your own Silverlight code and see if you can detect this pattern.
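A contrived example of the pattern: each completion handler kicks off the next asynchronous call, so one logical operation is smeared over three methods. The service client and event-args names below are made up for illustration; they mimic the proxies a WCF service reference generates.

```csharp
// Step 1: start the chain.
void LoadCustomerScreen(int customerId)
{
    var client = new CustomerServiceClient();
    client.GetCustomerCompleted += OnGetCustomerCompleted;
    client.GetCustomerAsync(customerId);
}

// Step 2: first handler does some work, then starts the next call.
void OnGetCustomerCompleted(object sender, GetCustomerCompletedEventArgs e)
{
    _customer = e.Result;
    var client = new OrderServiceClient();
    client.GetOrdersCompleted += OnGetOrdersCompleted;
    client.GetOrdersAsync(_customer.Id);
}

// Step 3: and so on -- the sequential logic is spread over three methods.
void OnGetOrdersCompleted(object sender, GetOrdersCompletedEventArgs e)
{
    _orders = e.Result;
}
```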

Your logic ends up spread out over a couple of methods/event handlers. The question is: does this need fixing? From a purist standpoint I would say yes. On the other hand, a daisy chain might not be the worst thing you have to live with. When the logic is simple enough and following the chain is easy, it is all right to leave it at that. But what if at some point you have to branch off the chain? For instance, a condition (if-then-else) determines whether to call one asynchronous method or, if it is false, another. Now there are two parallel tracks the execution flow can follow down the chain. This can get messy very quickly.

Another issue to remember is that the thread used to call your event handler and notify you of the outcome of the asynchronous call is not necessarily the thread that created the UI. So you cannot call into the UI directly from within the event handler; you have to marshal the call using the Dispatcher.
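In code that looks something like this (the service proxy names are hypothetical; Dispatcher.BeginInvoke is the Silverlight way to marshal work onto the UI thread):

```csharp
// A completion handler may run on a worker thread, so marshal any UI
// updates to the UI thread via the Dispatcher before touching controls.
void OnGetOrdersCompleted(object sender, GetOrdersCompletedEventArgs e)
{
    var orders = e.Result; // safe: no UI involved yet

    Dispatcher.BeginInvoke(() =>
    {
        // Safe to touch UI elements here.
        OrderList.ItemsSource = orders;
    });
}
```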

But how do we solve this? One pattern that comes to mind is the state table. Define a state for each step in the daisy chain and determine which state to go to next when an event handler is called. But this doesn’t do anything about the fragmentation of the code. It’s just a different way of cutting it into pieces, and I would argue it’s less obvious than the original daisy chain (it’s also not entirely what a state table is meant for).
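To make the comparison concrete, here is a minimal sketch of that state-table shape (all names invented): one dispatcher method advances a state field, and every completion handler routes back through it. Note how the logic is still fragmented; it is just cut along different lines.

```csharp
// States, one per step in the original daisy chain.
enum LoadState { Start, CustomerLoaded, OrdersLoaded, Done }

LoadState _state = LoadState.Start;

// Central dispatcher: each async completion handler sets _state
// and calls Advance() again to trigger the next step.
void Advance()
{
    switch (_state)
    {
        case LoadState.Start:
            // begin async call #1; its handler sets _state = CustomerLoaded
            break;
        case LoadState.CustomerLoaded:
            // begin async call #2; its handler sets _state = OrdersLoaded
            break;
        case LoadState.OrdersLoaded:
            _state = LoadState.Done;
            break;
    }
}
```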

You could use anonymous methods (or lambdas) to pull everything into one method, but it is questionable whether that is more readable and maintainable than a daisy chain.
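A daisy chain pulled into a single method with lambdas looks like this (again with hypothetical WCF-style proxy names). All the logic is in one place now, but the nesting grows one level with every extra step:

```csharp
void LoadCustomerScreen(int customerId)
{
    var customers = new CustomerServiceClient();
    customers.GetCustomerCompleted += (s1, e1) =>
    {
        // Second call nested inside the first handler.
        var orders = new OrderServiceClient();
        orders.GetOrdersCompleted += (s2, e2) =>
        {
            // Third-level work goes here, nested deeper still...
        };
        orders.GetOrdersAsync(e1.Result.Id);
    };
    customers.GetCustomerAsync(customerId);
}
```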

Although I have not worked out the details of this idea, I was thinking of a base class that implements some helper methods to perform asynchronous calls and provide event handlers. This should allow you to implement all your code in one method (or as many as you like) and call asynchronous methods and gather their responses almost transparently. Not sure if this idea will work, though.

What I would like is to write my Silverlight code in a normal sequential flow using “normal” programming paradigms and patterns. But until someone comes up with a good solution for that, we just have to experiment with our own solutions and patterns.


Using Xml Schema for describing Midi System Exclusive message content

In my spare time I’m writing a Midi Console application. This application can manage all the settings of all the midi devices in a studio (samplers, sound modules, drum machines, etc.). The onboard user interfaces of most midi devices are poor (at best), and an application that lets you manage settings for multiple devices in a concise way would improve the musician’s productivity.
Most midi devices support what’s called System Exclusive messages. The content of these messages is not standardized by the MMA but can be used freely by any manufacturer for its own purposes. A typical way to get at all the settings of a midi device is through these System Exclusive messages.

The way applications dealt with these device-specific messages in the past was to write specific drivers for each device or, at best, to have some sort of reference table recording where each setting was located in the System Exclusive message. This meant that any application targeting System Exclusive messages supported only a fixed set of midi devices: if your device was not on the list, you could not use the application.

After studying the binary content layouts of System Exclusive messages from several manufacturers, it occurred to me that they could be described in a meta language, which could then be used to handle the interpretation and compilation of these device-specific messages. I decided to use (or abuse, most would say) Xml Schema (xsd) to describe the content of these messages.
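To give a flavor of the idea, the fragment below describes two fields of a message as a sequence of constrained bytes. This is plain, valid Xml Schema, but the element names and the way I map schema constructs to message bytes here are purely hypothetical illustrations, not the actual Midi Device Schema syntax:

```xml
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <!-- Hypothetical sketch: a 'PatchDump' message as a byte sequence. -->
  <xs:element name="PatchDump">
    <xs:complexType>
      <xs:sequence>
        <!-- Fixed manufacturer id byte. -->
        <xs:element name="ManufacturerId" type="xs:unsignedByte" fixed="65" />
        <!-- A 7-bit data byte, constrained to 0..127. -->
        <xs:element name="PatchNumber">
          <xs:simpleType>
            <xs:restriction base="xs:unsignedByte">
              <xs:maxInclusive value="127" />
            </xs:restriction>
          </xs:simpleType>
        </xs:element>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
```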

I’m currently writing a Midi Device Schema Specifications document that describes how one would use Xml Schema for Midi System Exclusive messages. For those who are interested: I post new versions to this thread in the mididev newsgroup and the latest version of the specifications can be found here.

Any feedback is most welcome on any aspect of the specifications or the solution in general.

DataSet Manager

Why are OR-mappers cool? I don’t know. My experience with them has been limited, and when I did use them the support for common constructs was very poor (at best): I don’t think it’s a good idea for an OR-mapper to cough up a SQL select query for each instance that needs to go into a collection. The framework should be able to retrieve any hierarchy of data in one round trip (not counting the scenario where you have to weigh how big the data packet may get against how many round trips are optimal). I also believe the OR-mapper problem domain is twofold: 1) you need a good data access layer that supports all the features required for a "good" OR-mapper, and 2) you need the OR-mapping itself, mapping relational data to objects and back, which is a relatively simple problem to solve.

So I started thinking about the data access layer. I’m a big fan of typed DataSets. If you know your VS.NET you can whip up a typed DataSet within 5 minutes. My assumption is that the data access layer works with (typed) DataSets. I also propose to put your application data model in one typed DataSet. For instance, you put the complete data model of Northwind into one typed "Northwind" DataSet. If you have a really big solution you could work with subsets for each sub-system.

Now I want to be able to express my data queries in the same ‘entities’ defined in my application typed DataSet (Northwind). Why would an application programmer have to express his data requests in a different "language" than his application data model? So we need an entry point that handles our data requests and knows about our application data model. Enter the DataSet Manager.

The DataSetManager retrieves data by using the structure defined in the (typed) DataSet. It needs a copy of this structure as its data model. For now we assume that this application model reflects the physical data model in the database. A "real" mapper would allow a certain abstraction here, letting your application business entities be subsets (or combinations) of database entities. The DataSetManager has (overloaded) methods to fetch data for a (running) typed application DataSet: for instance, "fill up the (Northwind.)Employees table", "fetch all orders handled by this Employee", make changes to the dataset and save.

In the following code examples we assume we have a typed DataSet named Northwind with the complete Northwind database schema.

// create the DataSetManager and initialize the DataModel
DataSetManager mgr = new DataSetManager();
mgr.CreateDataModel(new Northwind(), "Initial Catalog=Northwind");

This code creates a new DataSetManager and initializes the instance with the application data model to use and a connection string to the physical database. Inside the DataSetManager the schema of the (typed) DataSet is analysed and the DataSetManager creates DataAdapters (in this prototype) for each DataTable. The DataSetManager is ready.

// load all Employees data into an empty dataset
Northwind dataSet = new Northwind();
mgr.Fill(dataSet, dataSet.Employees);

Notice the second Northwind DataSet instance. The first was passed to the DataSetManager as a schema; this one is used (by the application programmer) for actually storing data. We ask the DataSetManager to fill up the Employees table and pass it the running DataSet and a reference to the table definition. Because we use typed DataSets, both are contained in one instance (that’s what makes a DataSet typed). All methods of the DataSetManager take a DataSet as their first parameter. This allows for separation of data holder and data definition. For this method call the DataSetManager builds a "Select * from Employees" query and writes the results back to the (running) DataSet.

But wait a minute: if EnforceConstraints is set to true this won’t work. The lack of data in the other tables the Employees table has a DataRelation with will cause the constraints to fail. Not quite. The DataSetManager knows about the schema and therefore knows about these relations too. It examines the content of the passed dataset and disables the constraints that ‘point’ to empty tables. If you pass in a dataset with EnforceConstraints set to false, the DataSetManager does nothing.
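The prototype internals for that constraint handling could look roughly like this. This is a sketch under assumptions (ADO.NET has no per-constraint "disable" switch, so one way to disable a constraint is to remove it from the table; the method name is invented):

```csharp
using System.Collections;
using System.Data;

// Remove foreign key constraints whose parent (related) table is empty,
// so filling a single table does not trip constraint checking.
private void DisableConstraintsForEmptyTables(DataSet dataSet)
{
    if (!dataSet.EnforceConstraints) return; // caller opted out already

    foreach (DataTable table in dataSet.Tables)
    {
        // Collect first: we cannot remove while enumerating the collection.
        ArrayList toRemove = new ArrayList();
        foreach (Constraint constraint in table.Constraints)
        {
            ForeignKeyConstraint fk = constraint as ForeignKeyConstraint;
            if (fk != null && fk.RelatedTable.Rows.Count == 0)
            {
                toRemove.Add(fk);
            }
        }

        foreach (ForeignKeyConstraint fk in toRemove)
        {
            table.Constraints.Remove(fk);
        }
    }
}
```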

// find all Orders for the first Employee
Northwind.EmployeesRow employee = dataSet.Employees[0];
mgr.Expand(dataSet, employee, DataSetManager.FindRelation(dataSet.Employees, dataSet.Orders));

We retrieve a reference to an Employee and ask the DataSetManager to expand for this Employee instance (DataRow) using the relation between Employees and Orders. We use a helper method to find the DataRelation between these tables. Again the order data concerning the employee is placed in the passed dataset.

// change some data
employee.Notes = String.Format("Handled {0} orders.", dataSet.Orders.Count);
// update all changes made to the dataset
mgr.Update(dataSet);

Now we change some data in the dataset (on the employee) and ask the DataSetManager to persist the changes back to the database. Using the standard DataSet.GetChanges method and a DataAdapter, the employee is updated in the database.

These are the methods (and overloads) the DataSetManager supports:

public DataSet DataModel { get; }
public void CreateDataModel(DataSet dataModel, string connectionString)
public int Fill(DataSet dataSet)
public int Fill(DataSet dataSet, DataTable table)
public int Fill(DataSet dataSet, DataRelation relation)
public int Expand(DataSet dataSet, DataTable table)
public int Expand(DataSet dataSet, DataRow row)
public int Expand(DataSet dataSet, DataRelation relation)
public int Expand(DataSet dataSet, DataRow row, DataRelation relation)
public int Update(DataSet dataSet)
public int Update(DataTable table)

This prototype was written using .NET 1.1, so no generics are used, but in a future version they would certainly be an option for added type safety. One thing that’s missing from this prototype is where-clauses. This is one of the problems I’m still wrestling with: how would you express filter criteria using application entities? I’ve considered Query by Example but abandoned that path. The problem with QbE is that you would introduce yet another instance of the application typed DataSet just to hold the filter criteria, and complex filters are difficult to express using QbE. The only other option would be to define yet another proprietary object model for expressing filter criteria.

Also notice that this is no OR-mapper (yet). It’s just a really intuitive way to work with your application data. The OR bit would map back and forth between your application typed DataSet and your (domain) objects. The real power of OR-mappers is not in the mapping part but in the data access layer.

So I would really like to hear your comments, suggestions and objections and if you want to see the DataSetManager internals drop me a line at obiwanjacobi@nospam_hotmail.com (remove the nospam_ ;-).