Yet Another Logging Framework

This blog post is a brain dump on a new logging framework I’m planning to write. If you have any additions or suggestions, please leave them as comments.

So why do we need yet another logging framework? Well, because I cannot find in existing frameworks what I think is important, and I also want to leverage the Diagnostics Trace code I have already written.

What scenarios should this Logging Framework be able to handle?

  • Easy to use.
    It should be easy to configure and easy to call. Any provider-based mechanism (log targets) is bound to have configuration associated with it, but it should at least give good feedback on any problems that occur. Convention over configuration and MEF might ease the pain.
  • One liner.
    When using it, you should only have to write one line of code. This is not always feasible, but it is a good starting point. Why is this important? Because you want to make it easy on the poor developer that is using your ‘yet another logging framework’. Aspects (PostSharp) can also help here.
    On the other hand, most applications that use a 3rd party library almost always create a façade for interacting with it. So it’s better to have a good API that is easy to integrate into a façade than to have an awkward API just to force it all into one line of code. A minimal sketch of such a façade follows after this list.
  • Uses the TraceContext (source) information to ‘automatically’ enrich the log entries. The TraceContext now has a method to AddContextTo an Exception, but it could be extended to also AddContextTo a LogEntry.
  • Fallback log targets.
    Have a mechanism to fall back to other log targets when the primary ones are not available. This answers the question: where do you log the exception that you cannot connect to your log target?
  • Integrated .NET support for WCF and ASP.NET.
    Make use of the extra information that lives in those contexts and allow that context to be easily added to the log entry. Both these contexts also support some kind of interception (think Behaviors for WCF and Modules for ASP.NET) to allow automatic logging.
  • Log to multiple log targets at the same time – each with unique filters.
    The default System.Diagnostics TraceListeners will already do this. Also be able to retrieve different types of logs in code (error log, message log, security audit log, etc.).
  • Use that same log framework for technical logging as well as functional auditing.
    There is no reason why a logging framework cannot be used for (more) functional logging as well, provided they use unique ‘streams’.
  • Different log entry formats.
    Different log targets may require different log entry structures. Still, I would prefer to use the existing diagnostics code. This means that those structures have to be serialized and deserialized. I think JSON can be of great help here.
  • Distributed and Correlated.
    You want to be able to correlate log (and trace) entries made by different parts of the system. This allows you to get a good impression of what happened where and why.
  • Support debug-only log entries.
    Conditional compilation on DEBUG. No biggie, just useful.
  • Asynchronous logging.
    This is a tough one. You want async logging in order to minimize the performance impact on the running thread (actually this is more a tracing issue than a logging issue, assuming logging doesn’t output a whole lot of entries per second). But making it asynchronous can also mean that you lose that one vital entry just before the app crashes. More on this later.
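
To make the one-liner and façade points above concrete, here is a minimal sketch of what such a façade could look like. All names (Logger, LogLevel, the Error and Write methods) are hypothetical, not an existing API:

    using System;

    public enum LogLevel { Critical, Error, Warning, Info }

    // Hypothetical façade: the application calls one static method and the
    // framework does the rest.
    public static class Logger
    {
        // The common case stays a one-liner for the caller.
        public static void Error(string message, Exception ex = null)
        {
            Write(LogLevel.Error, ex == null ? message : message + ": " + ex);
        }

        public static void Write(LogLevel level, string message)
        {
            // Gather context on the calling thread here; hand the entry off
            // to the log targets (possibly on a background thread, see the
            // Performance section below).
        }
    }

A call site then reduces to Logger.Error("Could not open connection", ex);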

Using System.Diagnostics

The idea is that we use as much of the existing logging technology as possible. That means reusing and extending the main classes of the System.Diagnostics namespace. TraceSource can be derived into a LogSource that provides the basis for a log target. Each LogSource can specify a collection of TraceListeners. Custom TraceListeners can be used as well as the out-of-the-box ones.
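
As a sketch (the LogSource name comes from this post; the exact API shape is an assumption), the derivation could be as small as:

    using System.Diagnostics;

    // A LogSource specializes TraceSource; TraceEvent fans the entry out to
    // all configured TraceListeners, honoring each listener's filter.
    public class LogSource : TraceSource
    {
        public LogSource(string name) : base(name) { }

        public void Log(TraceEventType eventType, int id, string serializedEntry)
        {
            TraceEvent(eventType, id, serializedEntry);
        }
    }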

But using these TraceListeners means that all log information has to be squeezed through a single string (essentially the worst case). This, coupled with the fact that different log types might require different log entry structures, leads to one conclusion: we have to serialize the complex data into a string so that it can be output by different log targets (Sinks) and mapped to their most appropriate fields.

The use of JSON would be excellent here, also because JSON is somewhat readable even in its serialized form. So you can still make sense of it even when it’s written to a text file. The object structure that is used will be partly fixed: we will need some known fields to extract the data needed for further processing. But custom structures can also be easily serialized to JSON and, on the receiving side, easily deserialized into generic data containers (ExpandoObjects) for use in the viewer.
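
A sketch of that round trip, using Json.NET; the LogEntry shape is illustrative, only meant to show a few fixed fields next to a free-form payload:

    using System;
    using System.Dynamic;
    using Newtonsoft.Json;
    using Newtonsoft.Json.Converters;

    public class LogEntry
    {
        // Fixed, well-known fields needed for further processing...
        public string Source { get; set; }
        public string Level { get; set; }
        public long Sequence { get; set; }
        // ...next to a custom, per-log-type structure.
        public object Payload { get; set; }
    }

    public static class JsonRoundTrip
    {
        public static void Main()
        {
            // Writing side: squeeze the complex entry through the single string.
            string json = JsonConvert.SerializeObject(new LogEntry
            {
                Source = "OrderService",
                Level = "Error",
                Sequence = 42,
                Payload = new { OrderId = 1234, Message = "Payment declined" }
            });

            // Reading side (viewer): rehydrate into a generic data container.
            dynamic entry = JsonConvert.DeserializeObject<ExpandoObject>(
                json, new ExpandoObjectConverter());
            Console.WriteLine(entry.Source);   // "OrderService"
        }
    }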

Formatting this complex data into something that makes sense for the specific log target is done when reading the log, not while writing it. This not only saves a little performance hit while writing the log entry, it also allows for a more versatile viewer.

Performance

One of the obvious ways to decouple the performance costs of tracing and logging is to move the processing of the log entry onto a background thread. Only the data gathering takes place on the active thread; all other operations are done on the background thread.

The trouble with this is that you can lose potentially critical log entries just before your application crashes. One possible way to have the best of both worlds is to use the log level (Critical, Error, Warning and Info) as an indication of priority. That could mean that Critical log entries are always logged on the active thread, while the other levels are processed by the background thread in order of priority: Error first, then Warning, with Info as least significant.
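
A simplified sketch of that dispatch (names like LogDispatcher and WriteToTargets are hypothetical, and a full version would use a priority queue per level rather than the single FIFO queue shown here):

    using System;
    using System.Collections.Concurrent;
    using System.Diagnostics;
    using System.Threading.Tasks;

    public class LogDispatcher : IDisposable
    {
        private readonly BlockingCollection<string> _queue = new BlockingCollection<string>();
        private readonly Task _consumer;

        public LogDispatcher()
        {
            // A single background consumer preserves the enqueue order.
            _consumer = Task.Factory.StartNew(() =>
            {
                foreach (var entry in _queue.GetConsumingEnumerable())
                    WriteToTargets(entry);
            }, TaskCreationOptions.LongRunning);
        }

        public void Log(TraceEventType level, string serializedEntry)
        {
            if (level == TraceEventType.Critical)
                WriteToTargets(serializedEntry);   // never risk losing these
            else
                _queue.Add(serializedEntry);       // minimal cost on the active thread
        }

        private void WriteToTargets(string entry)
        {
            // Hand the entry to the configured LogSources/TraceListeners here.
        }

        public void Dispose()
        {
            _queue.CompleteAdding();   // drain the remaining entries on shutdown
            _consumer.Wait();
            _queue.Dispose();
        }
    }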

We have to provide some way to identify the order of these entries (a simple sequence number will do) in order to be able to view them in the correct order. Gaps in the sequence can be detected and displayed in the viewer. This mechanism will also make it easy to merge log ‘files’ from different machines into one.
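
Stamping that sequence number is cheap and thread-safe; a sketch (the LogSequence name is hypothetical). Combined with the machine name, the stamp also supports cross-machine merging:

    using System.Threading;

    public static class LogSequence
    {
        private static long _current;

        // A process-wide, gap-detectable stamp restoring the write order.
        public static long Next()
        {
            return Interlocked.Increment(ref _current);
        }
    }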

If we take formatting out of the write-a-log-entry process, we might also need to revisit the Tracing code we have so far in order to make that option available in Tracing too.

Reading Log entries

For each (type of) log target a Sink is needed that knows how to put the log entry data into its storage. Think, for instance, of the Enterprise Library Logging block, log4net, or simply the event log. A TraceListener is implemented for each target that knows how to take that one string and persist it in the most optimal way.
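
A sketch of such a sink; the JsonFileSink name and the one-JSON-document-per-line format are assumptions:

    using System;
    using System.Diagnostics;
    using System.IO;

    // A TraceListener that receives the serialized (JSON) entry as the single
    // message string and persists it.
    public class JsonFileSink : TraceListener
    {
        private readonly string _path;

        public JsonFileSink(string path) { _path = path; }

        public override void Write(string message)
        {
            WriteLine(message);
        }

        public override void WriteLine(string message)
        {
            // One JSON document per line; the viewer parses these back.
            File.AppendAllText(_path, message + Environment.NewLine);
        }
    }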

When those (types of) targets also want to play in the log viewer, they have to expose a Provider: an object that knows how to read log entries from its storage and provide them to the viewer.
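
The contract for such a Provider could be as simple as this sketch (the interface is an assumption, mirroring the sink side):

    using System;
    using System.Collections.Generic;

    public interface ILogProvider
    {
        // Return deserialized entries (e.g. ExpandoObjects) within a time
        // window, ready for the viewer to union, sort and filter.
        IEnumerable<dynamic> ReadEntries(DateTime from, DateTime to);
    }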

The viewer will be able to union all log entries from all providers and sort them into the temporal sequence in which they were written. Merging different (machine) sources is also possible.

Of course the viewer would be able to filter and even search through the entries.

I think it would be advantageous to implement an OData REST service as a source for all the log entries. This allows easy access to the log entries for all kinds of purposes and provides a flexible basis for retrieving log entry information for different applications. Both XML and JSON formatting can be supported.
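
A sketch of such a read-only endpoint using WCF Data Services; the LogEntryRow shape, LogEntryContext and LogStore are all hypothetical:

    using System.Data.Services;
    using System.Data.Services.Common;
    using System.Linq;

    [DataServiceKey("Sequence")]
    public class LogEntryRow
    {
        public long Sequence { get; set; }
        public string Source { get; set; }
        public string Level { get; set; }
        public string Json { get; set; }
    }

    public static class LogStore
    {
        // Hypothetical storage accessor; a real implementation would query
        // the underlying log storage.
        public static IQueryable<LogEntryRow> Query()
        {
            return new LogEntryRow[0].AsQueryable();
        }
    }

    public class LogEntryContext
    {
        public IQueryable<LogEntryRow> LogEntries
        {
            get { return LogStore.Query(); }
        }
    }

    public class LogService : DataService<LogEntryContext>
    {
        public static void InitializeService(DataServiceConfiguration config)
        {
            // Read-only access; clients filter and sort via the URL, e.g.
            // /LogService.svc/LogEntries?$filter=Level eq 'Error'&$orderby=Sequence
            config.SetEntitySetAccessRule("LogEntries", EntitySetRights.AllRead);
            config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
        }
    }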

Closing remarks

I am sure that a lot more issues will present themselves once a more detailed design is made and implementation starts. But I think this WILL make a pretty nice logging framework if we can pull it off. Writing this blog post has helped me structure my thoughts on the subject, and I hope it was a pleasant read for you, perhaps even an inspiration to tweak the logging framework you are now using.