Part of my day job is to maintain an online purchase path, and through my recent efforts to refactor and streamline the code, I’ve come across a technique that better enables me to build my scripts in such a way that they are truly unobtrusive. But what do I mean by all this? OK, say you have some code such as:

Script.js:

(function($) {

  var serviceUrl = window.ScriptConfig.serviceUrl;
  // Where ScriptConfig is declared on your target page.

})(window.jQuery);

Page.html:

<script type="text/javascript">
  window.ScriptConfig = {
    serviceUrl: "/path/to/service"
  };
</script>
<script type="text/javascript" src="/script.js"></script>

The problem with the above scenario is that the ordering of config and script is really important; get it wrong and you’re attempting to use values in your script which are still undefined. Also, if you’re trying to achieve a semantically clean page (where the page’s responsibility is to represent data – not presentation, not logic), this rather violates that.

What you can do, though, is stop declaring these blocks as JavaScript and instead set them to “text/xml”:

<script id="scriptConfig" type="text/xml">
  <config serviceUrl="/path/to/service" />
</script>

Essentially, by using XML in your script tags, you can take advantage of your browser’s XML parser to read the document in and configure your scripts. Let’s change our script:

(function($) {
  
  var serviceUrl;
  
  $(function() {
    var xml = $.parseXML($("#scriptConfig").html());
    
    serviceUrl = $(xml).find("config").attr("serviceUrl");
  });

})(window.jQuery);

Now, in our script file, we can use jQuery’s parseXML function to build an XML DOM. Because this is a DOM object, we can then use jQuery itself to interrogate the model for elements and attributes.

Note: You have to use “html()”, and not “text()”, as the latter does not work in IE.

This also allows us to have a completely script-free page :-)


Expressive Configuration Files with Evaluators

I’ve not played around too much with WPF, but one of the things I do like about it is the expressive binding syntax you can use to declaratively apply bindings to elements, e.g.:

  <ListBox Name="entryListBox" ItemsSource="{Binding Source={StaticResource RssFeed}, XPath=//item}" />

It led me to think: could we provide something similar, but apply it to something such as a configuration file, or really any string content? Here is what I am aiming to achieve:

  <configuration>
    <appSettings>
      <!-- Simple Values -->
      <add key="MyFirstSetting" value="First" />
      <add key="MySecondSetting" value="{AppSetting MyFirstSetting}" />
      
      <!-- A connection string lookup -->
      <add key="DefaultDatabaseConnection" value="Test" />
      
      <!-- Other settings -->
      <add key="DatabaseServer" value="192.168.0.1,1724" />
      
      <add key="Database" value="Test" />
      <add key="Database_Staging" value="Staging" />
      <add key="Database_Production" value="Production" />
      
      <add key="MachineType" value="Staging" />
      
      <!-- Composite expressions -->
      <add key="SomeSetting" value="MySecondSetting" />
      <add key="SomeCompositeSetting" value="{AppSetting {AppSetting SomeSetting}}" />
    </appSettings>
    
    <connectionStrings>
      <add name="Test" connectionString="Server=(local);Database=Test;Integrated Security=true;" />
      <add name="Main" connectionString="Server={AppSetting DatabaseServer};Database={AppSetting Database};Integrated Security=true" />
    </connectionStrings>
  </configuration>

The idea is that we could evaluate these expressions at runtime without having to manually compose values from, say, the application settings, connection strings, etc.

The way I approached this was to define an interface, IEvaluator, which makes the mechanism extensible from the start. The interface looks like this:

public interface IEvaluator
{
    #region Properties
    string Name { get; }
    #endregion

    #region Methods
    string Evaluate(string value);
    #endregion
}

It’s a simple interface, exposing a name (which directly correlates to the left-hand value, e.g. “AppSetting” in “{AppSetting MyFirstSetting}”) and the method which evaluates the actual argument. Here is an example evaluator, the AppSettingEvaluator:

public class AppSettingEvaluator : EvaluatorBase
{
    #region Properties
    public override string Name { get { return "AppSetting"; } }
    #endregion

    #region Methods
    protected override string EvaluateCore(string value)
    {
        return From.AppSetting(value);
    }
    #endregion
}
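
The EvaluatorBase class isn’t shown in the post; a minimal sketch of its likely shape (an assumption, based on the caching behaviour described below) might be:

public abstract class EvaluatorBase : IEvaluator
{
    // Caches evaluated results, keyed by the raw argument value.
    private readonly Dictionary<string, string> _cache = new Dictionary<string, string>();

    public abstract string Name { get; }

    public string Evaluate(string value)
    {
        string result;
        if (!_cache.TryGetValue(value, out result))
        {
            result = EvaluateCore(value);
            _cache[value] = result;
        }
        return result;
    }

    protected abstract string EvaluateCore(string value);
}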

The base class handles the caching of any results, and EvaluateCore here simply calls From.AppSetting(…), a small utility function that reads the application settings. The evaluator mechanism itself is driven by a regular expression that matches the pattern “{<key> <value>}”, allowing us to extract those values and replace them. We also maintain a list of evaluators (bundling AppSettingEvaluator and ConnectionStringEvaluator as standard), with a mechanism to register your own:

public static class Evaluator
{
    #region Fields
    private static readonly Regex ExpressionRegex = new Regex(
        @"\{(?<key>[a-z]*?)\s(?<value>[^{}]*?)\}",
        RegexOptions.IgnoreCase
        | RegexOptions.Singleline
        | RegexOptions.CultureInvariant
        | RegexOptions.IgnorePatternWhitespace
        | RegexOptions.Compiled
        );

    private static readonly IList<IEvaluator> _evaluators = new List<IEvaluator>()
    {
        new AppSettingEvaluator(),
        new ConnectionStringEvaluator()
    };
    #endregion

    #region Methods
    public static void AddEvaluator(IEvaluator evaluator)
    {
        if (evaluator == null)
            throw new ArgumentNullException("evaluator");

        _evaluators.Add(evaluator);
    }

    public static string Evaluate(string value)
    {
        if (string.IsNullOrEmpty(value))
            return value;

        if (!HasExpression(value))
            return value;

        value = ExpressionRegex.Replace(value, m =>
        {
            string key = m.Groups["key"].Value;
            string val = m.Groups["value"].Value;

            var evaluator = _evaluators.FirstOrDefault(e => key.Equals(e.Name, StringComparison.OrdinalIgnoreCase));
            if (evaluator == null)
                return string.Format("[{0} {1}]", key, val);

            return evaluator.Evaluate(val);
        });

        return Evaluate(value);
    }

    public static bool HasExpression(string @string)
    {
        if (string.IsNullOrWhiteSpace(@string))
            return false;

        return ExpressionRegex.IsMatch(@string);
    }
    #endregion
}

The bulk of the work is done by the static Evaluate method, which uses our regular expression and applies replacements based on the keys found. If we don’t have a matching evaluator, we rewrite the value with square braces “[<key> <value>]” to highlight that it was not matched, and also to safeguard against infinite loops. We call this method recursively to ensure that we can support composite expressions, e.g. “{ConnectionString {AppSetting DefaultDatabaseConnection}}”.
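
To make the recursion concrete, here is how that composite expression would unwind (a sketch; the values come from the appSettings and connectionStrings shown earlier):

// Pass 1: the regex matches the innermost "{AppSetting DefaultDatabaseConnection}",
//         which evaluates to "Test".
// Pass 2 (the recursive call): "{ConnectionString Test}" resolves the Test entry.
string result = Evaluator.Evaluate("{ConnectionString {AppSetting DefaultDatabaseConnection}}");
// result == "Server=(local);Database=Test;Integrated Security=true;"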

Automatically Evaluating in Custom Configuration Sections

The next logical step for me was to see how we could automatically apply these expression evaluations to custom configuration sections. Given that configuration is read once and cached for the lifetime of the application, the only cost is incurred the first time the configuration is read. Although we can’t modify the behaviour of the built-in configuration sections (appSettings, connectionStrings, etc.), we can provide custom evaluation for our own configuration sections. To do this, we subclass ConfigurationSection and intercept the XML before it is processed, by overriding the DeserializeElement(…) method of the ConfigurationElement type. We do this at the ConfigurationSection level, as it applies the change to the entire configuration section, not individual elements. Here is how it looks:

public abstract class EvaluatedConfigurationSectionBase : ConfigurationSection
{
    #region Methods
    protected override void DeserializeElement(XmlReader reader, bool serializeCollectionKey)
    {
        string content = reader.ReadOuterXml();
        content = Evaluator.Evaluate(content);

        var manager = new XmlNamespaceManager(reader.NameTable);
        var context = new XmlParserContext(reader.NameTable, manager, null, reader.XmlSpace);
        var newReader = new XmlTextReader(content, XmlNodeType.Element, context);

        newReader.Read();
        base.DeserializeElement(newReader, serializeCollectionKey);
    }
    #endregion
}

With that, we can implement a custom configuration section:

public class TestConfigurationSection : EvaluatedConfigurationSectionBase
{
    #region Fields
    private const string TestPropertyAttribute = "testProperty";
    private const string TestElementElement = "testElement";
    #endregion

    #region Properties
    [ConfigurationProperty(TestPropertyAttribute, IsRequired = true)]
    public string TestProperty
    {
        get { return (string)this[TestPropertyAttribute]; }
    }

    [ConfigurationProperty(TestElementElement, IsRequired = false)]
    public TestConfigurationElement TestElement
    {
        get { return (TestConfigurationElement)this[TestElementElement]; }
    }
    #endregion
}

There is nothing special about this class; it’s a standard implementation of a custom configuration section. We haven’t had to mark anything to be evaluated – it’s just handled for us automatically. Here is what it would look like in config:

<test testProperty="{ConnectionString {AppSetting DefaultDatabaseConnection}}">
  <testElement testProperty="{ConnectionString Main}" />
</test>
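
One thing the post doesn’t show: like any custom section, it needs registering in <configSections> before ConfigurationManager can resolve it. Something like the following, where the namespace and assembly names are placeholders for your own:

<configSections>
  <section name="test" type="MyApp.Configuration.TestConfigurationSection, MyApp" />
</configSections>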

Now, when I read those values:

  var config = (TestConfigurationSection)ConfigurationManager.GetSection("test");

The values of config.TestProperty and config.TestElement.TestProperty are automatically evaluated for us, just the once. Neat!

Custom Evaluators

Let’s look at a custom evaluator. Historically, I’ve found myself typing Type names out in full multiple times in configuration files (although less and less these days), so I came up with a solution: named types. This is a configuration section that represents a mapping between a short name and the full type name, e.g.:

<namedTypes>
  <add name="String" type="System.String" />
  <add name="CompositionContainerFactory" type="My.Namespace.For.A.CompositionContainerFactory, My.Assembly" />
</namedTypes>

We’ll skip over the configuration section implementation, and jump to the evaluator:

public class NamedTypeEvaluator : EvaluatorBase
{
    #region Properties
    public override string Name { get { return "NamedType"; } }
    #endregion

    #region Methods
    protected override string EvaluateCore(string value)
    {
        return NamedTypes.GetNamedType(value);
    }
    #endregion
}
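
The NamedTypes.GetNamedType helper is part of the skipped configuration section implementation; a minimal sketch of it (an assumed shape, with an illustrative section class name) might simply index into the section:

public static class NamedTypes
{
    public static string GetNamedType(string name)
    {
        // NamedTypesConfigurationSection is hypothetical here; it would expose the
        // <namedTypes> entries as a keyed collection of name/type pairs.
        var section = (NamedTypesConfigurationSection)ConfigurationManager.GetSection("namedTypes");
        return section.Types[name].Type;
    }
}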

We can register this evaluator:

Evaluator.AddEvaluator(new NamedTypeEvaluator());

And assuming we use it somewhere, e.g., in an application setting:

<appSettings>
  <!-- Container Factory Type -->
  <add key="ContainerFactory" value="{NamedType CompositionContainerFactory}" />
</appSettings>

This value can automatically be evaluated for us. I think the obvious next step is to compose evaluators from your inversion of control (IoC) container or service locator, so you introduce a dynamic plug-and-play system.

The project is attached, let me know your thoughts.

Expressive Configuration files with Evaluators


I’ve been playing around with Glimpse recently, and knowing that my favourite tech at the moment is the Managed Extensibility Framework (MEF), I decided to put together a Glimpse plugin that would allow you a glimpse (ahem) into what is going on inside your composition container. The plugin is implemented as a series of decorators around the primitive MEF types (currently we are concentrating on catalogs), and a custom implementation of a CompositionContainer that supports the Glimpse mechanism.

How does it work?

At the core of the plugin, we have our custom container, the GlimpseCompositionContainer. This is a subclass of the standard CompositionContainer that supports grabbing information about your catalogs. We do this during construction of the container by building a set of descriptors for the catalogs. So, the first step to get started (in fact, the only step) is to swap your container declaration to:

var catalog = new AggregateCatalog(
  new DirectoryCatalog("bin"));
  
var container = new GlimpseCompositionContainer(catalog);

The GlimpseCompositionContainer is designed to replace your current container. If Glimpse is not enabled, the container simply behaves like the standard CompositionContainer. When it is enabled, we decorate each of the catalogs with our GlimpseCatalog.

The GlimpseCatalog is responsible for providing details on the exports being requested during composition; for each requested contract, it records the set of possible definitions returned.
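
The real GlimpseCatalog does more than this, but the decorator idea can be sketched roughly like so (an assumed shape for illustration, not the plugin’s actual source):

public class RecordingCatalog : ComposablePartCatalog
{
    private readonly ComposablePartCatalog _inner;
    private readonly IList<string> _requests;

    public RecordingCatalog(ComposablePartCatalog inner, IList<string> requests)
    {
        _inner = inner;
        _requests = requests;
    }

    public override IQueryable<ComposablePartDefinition> Parts
    {
        get { return _inner.Parts; }
    }

    public override IEnumerable<Tuple<ComposablePartDefinition, ExportDefinition>> GetExports(ImportDefinition definition)
    {
        // Record the requested contract, then delegate to the wrapped catalog.
        _requests.Add(definition.ContractName);
        return _inner.GetExports(definition);
    }
}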

Getting this data to Glimpse

Because Glimpse is designed to display data to the client, data is collected throughout the lifetime of the request, and pushed down to the client on render. We currently collect data in two places:

  1. Catalog information (presented on the MEF Container tab) displays the list of catalogs, and the parts available for each catalog. This is collected and stored in the runtime cache (as the container should essentially be declared once for the lifetime of the application).
  2. Import information (presented on the MEF Imports tab) displays the list of requests to the composition container, along with any matching exports. This is collected for each request, and is stored in the Request.Items collection.

With this information to hand, we can start visualising what our composition system is made of.

The sample Application

In the GitHub project I’ve built a sample application which demonstrates the sort of information we can start getting from our app. In the sample app, I’ve reused the MefDependencyResolver and MefControllerFactory from my previous MVC+MEF projects (see here), which gives us a quick demonstration of how it all works.

With a dependency resolver and a controller factory plugged in, we can start exporting our controllers:

namespace Glimpse.MEF.Sample.Controllers
{
    using System;
    using System.ComponentModel.Composition;
    using System.Web.Mvc;

    using Models;
    using Models.Logging;

    /// <summary>
    /// Defines the home controller.
    /// </summary>
    [ExportController("Home"), PartCreationPolicy(CreationPolicy.NonShared)]
    public class HomeController : Controller
    {
        #region Fields
        private readonly ILogger _logger;
        #endregion

        #region Constructor
        /// <summary>
        /// Initialises a new instance of <see cref="HomeController"/>.
        /// </summary>
        /// <param name="logger">The logger.</param>
        [ImportingConstructor]
        public HomeController(ILogger logger)
        {
            if (logger == null) throw new ArgumentNullException("logger");

            _logger = logger;
        }
        #endregion

        #region Actions
        /// <summary>
        /// Displays the default view.
        /// </summary>
        /// <returns></returns>
        public ActionResult Index()
        {
            _logger.Log("Entered HomeController.Index() action.");

            ViewBag.Message = "Welcome to ASP.NET MVC!";

            _logger.Log("Returning result from HomeController.Index() action.");
            return View();
        }

        /// <summary>
        /// Displays the about view.
        /// </summary>
        /// <returns></returns>
        public ActionResult About()
        {
            return View();
        }
        #endregion
    }
}

There is nothing really extensive going on here; I’ve just updated the default HomeController implementation to support injection of a dependency (our ILogger), and we then export the controller so it can be picked up by MEF.

How does this get presented to Glimpse? Well, when we make a request to our /Home URL, we start saving information about each call to GetExports on our catalogs. So in the above example, we end up with the following items:

  1. A request to match contract System.Web.Mvc.IController, which returns the single instance of our exported controller.
  2. A request to match contract GlimpseMEF.Models.Logging.ILogger, which returns the single instance of our exported logger. This is then injected into our controller.

This is an initial version which could change quite a lot; I need to determine what sort of information people want to see. With that in mind, grab the source code, or search for Glimpse.MEF on NuGet to start playing around.



An early look at RazorEngine v3

Finally, I’ve started making progress on getting RazorEngine v3 out into the wild. Last night I pushed an early version of v3 to RazorEngine’s new home at GitHub. There is still a lot of stuff I need to get done, but there is at least something you can start poking around with. There are a lot of changes in v3, so I thought I’d highlight just a few here.

Moving to GitHub

The first thing you’ll notice is that I’m no longer pushing the code onto CodePlex, but now moving to GitHub. GitHub will give us the opportunity to better share and collaborate with the development community to start really pushing RazorEngine in the open-source direction. Up until now, development has really been done by Ben (@buildstarted) and myself, but I want to allow many more people to get involved and determine the direction we take RazorEngine. GitHub’s fantastic SCM abilities and community features will help us achieve this.

Currently, I’ve been using the master branch for all my commits. After the release push of v3, we’ll start development in secondary branches and use the master branch as a release branch.

An early version of v3 is on GitHub now, so feel free to fork and start your pull requests, ask questions and get involved :-)

Breaking API Changes

I hate breaking API changes, but unfortunately there was such a significant reason to do so that it became unavoidable; attempting to wrap the older API calls into what we now need may be impossible to do cleanly. Rest assured, I’ll be doing a complete review of the API before pushing an RC, but there have been a few required changes.

The key change that prompted this was the overwhelming need to run RazorEngine in parallel/multi-threaded scenarios. Because of how RazorEngine v2 was laid out under the hood, a number of mistakes were made in its design, and thread-safety was a big one. Currently, attempting to run RazorEngine v2.1 in a multi-threaded way nearly always ends badly: you’ll get a hideous amount of mangled template output and/or failed compiles. This has prompted me to redesign the API to support multi-threaded scenarios natively.

This key change is breaking, and there are a few other changes which have required a redesign. We’ll be reviewing the whole API and will document some migration guides nearer the release date.

Native Parallel Support

To assist with parallel scenarios, I’ve baked in support for parsing multiple templates in parallel. This means you don’t have to worry about handling the threading model yourself; let RazorEngine’s TemplateService take care of it. Here is an example test highlighting running some templates in parallel:

/// <summary>
/// Tests that the template service can parse multiple templates in parallel.
/// </summary>
[Test]
public void TemplateService_CanParseMultipleTemplatesInParallel_WithNoModels()
{
    using (var service = new TemplateService())
    {
        const string template = "<h1>Hello World</h1>";
        var templates = Enumerable.Repeat(template, 10);

        var results = service.ParseMany(templates, true);

        Assert.That(templates.SequenceEqual(results), "Rendered templates do not match expected."); 
    }
}

Obviously this is a trivial example; there are a number of overloads of the ParseMany method which support scenarios such as a single template with many models, many templates with many models, etc. The parallelism is provided by PLINQ, and its execution can be customised by providing your own implementation of the new IParallelQueryPlan interface.

This is not to say you can’t use your own threading model, or even the threadpool. If you want to pop over to GitHub you can find some example tests of running a TemplateService in parallel scenarios. A future blog post will cover this in more depth.

Unit Test Support

At first I found it hard to quantify what would be considered a plausible test case. Originally we were simply providing a wrapper around the Razor parser, so most unit tests would only be testing the parser itself. As RazorEngine has evolved, a need has arisen to provide a suite of tests that prove the API.

Currently I am using NUnit as the test framework, but I imagine I will push up some sample tests using alternatives. This is important, as unit testing was a key area of support we were lacking in v2.1. There is a whole host of tests on GitHub currently, and these will expand as we introduce more features and capture more test scenarios.

Template Isolation

Razor was built around a neat parsing technology, and a code generation framework. Under the hood, your templates are being parsed, and compiled into executable class instances. Each template we generate is compiled into its own assembly, and subsequently loaded into memory. If you are using RazorEngine in volume you’ll notice that the memory footprint of your app will increase, because we do not have the ability to unload assemblies from the primary AppDomain.

In v3, we’ve introduced a new template service, the IsolatedTemplateService, that supports the parsing of templates in a child AppDomain. Now, there are some limitations on what can be parsed: at the moment there is no support for anonymous or dynamic models, and any models that you do want parsed must be serialisable (for cross-domain communication).

Here is an example test of using the IsolatedTemplateService:

/// <summary>
/// Tests that a simple template with a model can be parsed.
/// </summary>
[Test]
public void IsolatedTemplateService_CanParseSimpleTemplate_WithComplexSerialisableModel()
{
    using (var service = new IsolatedTemplateService())
    {
        const string template = "<h1>Hello @Model.Forename</h1>";
        const string expected = "<h1>Hello Matt</h1>";

        var model = new Person { Forename = "Matt" };
        string result = service.Parse(template, model);

        Assert.That(result == expected, "Result does not match expected: " + result);
    }
}

The basic example is virtually identical to how you would use a normal TemplateService; we wanted to make it easy to spin up instances of either template service. A future blog post will detail AppDomain isolation more.

Automatic Text Encoding

Like ASP.NET MVC’s Razor implementation, we wanted to include automatic text encoding in the base framework. As the majority of use cases are based around HTML, values are HTML-encoded by default. We also support raw encoding (i.e. unencoded text), as well as a native Raw method for rendering raw text within an HTML-encoded template.
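
For example (a sketch; this assumes v3 keeps a Raw method available to templates):

const string template = "<div>@Model.Text</div><div>@Raw(Model.Text)</div>";
// With Text = "<b>bold</b>", the first div renders "&lt;b&gt;bold&lt;/b&gt;",
// while the second renders "<b>bold</b>" unencoded.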

There is still a lot more to come, I hope to introduce more extensibility points to allow developers to plug more directly into the pipeline. Let me know your thoughts!



A tale of epic epicness…

An adventurous attempt at twisting Razor and abusing Sql Server

I have many hare-brained ideas, but none as mental as this one. I wanted to introduce the power of Razor to Sql Server, to allow for database-level template parsing! You’re probably asking what the hell I was thinking, but to understand fully I need to explain a few things about our corporate network migration, and the limitations it now imposes.

As part of a large migration from our smaller network to a larger, globally managed network, a lot of new restrictions were put in place as to what applications and services can or can’t do. The biggest issue was that we can no longer send SMTP emails; only approved services can :-( . Also, the sheer volume of logging/error emails being despatched has slowly been getting out of hand for a while now, so we were looking to introduce a centralised logging mechanism in a database, which handles throttling and escalation of logging data where required. Updating our existing applications/services to use this new logging tool is easy… we just take out the email logger, and plug in our shiny new database logger (thanks Chris and Andy!).

The database would be controlling how often items are reported, so it would need the ability to create the HTML emails we despatch. Different applications require and report different information, so it was important to me to provide a flexible templating solution for whatever is reported. And that’s when I considered Razor.

Now, there are a number of challenges which could cause this dream to be a complete failure, and these include (but are not limited to):

  • Razor is a .NET 4.0 assembly, and Sql Server 2008 supports .NET 3.5
  • Compiling dynamic assemblies and loading them on Sql Server using Assembly.Load() fails

So here’s what I did…

Backporting System.Web.Razor

The first challenge that had to be overcome was the initial issue with the System.Web.Razor library. Razor was introduced with ASP.NET MVC 3, which of course is a .NET 4.0 assembly. To enable us to even consider using Razor on Sql Server we have to acknowledge that Sql Server 2008 (which was my target version) supports up to .NET 3.5 (with the .NET 2.0 runtime). This means the standard release System.Web.Razor.dll assembly would not work at all. We need to somehow backport this to .NET 3.5.

Luckily, the source code for the library is available at the ASP.NET CodePlex site for us to abuse. First things first: change the project target (to .NET 3.5 Client Profile) and then try to compile. Whoa, as expected, a whole host of errors. Missing ISet<T>? No Tuple<T1, T2>? No Task<T>? Lots to fix. Let’s take them one step at a time…

Fixing ISet<T>

The generic ISet<T> interface was introduced as part of the .NET 4.0 BCL, and doesn’t have a counterpart in the .NET 3.5 BCL. But the standard HashSet<T> does exist. Simple fix: replace all instances of ISet<string> with HashSet<string>.

Fixing Tuple<T1, T2>

The ordered pair type, Tuple, was also first introduced as part of .NET 4.0. This was a quick type to chuck together:

public class Tuple<T1, T2>
{
    private readonly T1 _t1;
    private readonly T2 _t2;

    internal Tuple(T1 t1, T2 t2)
    {
        _t1 = t1;
        _t2 = t2;
    }

    public T1 Item1 { get { return _t1; } }
    public T2 Item2 { get { return _t2; } }
}

public static class Tuple
{
    public static Tuple<T1, T2> Create<T1, T2>(T1 t1, T2 t2)
    {
        return new Tuple<T1, T2>(t1, t2);
    }
}

Fixing Task<T>

Gah, the Task type… I don’t know enough about how it works internally to build a new Task type from scratch. After a bit of googling you can uncover the lineage of that particular type (the Task Parallel Library), and a version comes bundled with the Reactive Extensions project. When Rx was being developed, it was still pre-.NET 4.0, so they were actively targeting .NET 3.5. This is good: it allows us to install the v1.0.2856.0 (this is important!) version of Rx for .NET 3.5 (you can find it here), and grab the System.Threading.dll assembly from C:\Program Files\Microsoft Cloud Programmability\Reactive Extensions\v1.0.2856.0\Net35. Add that as a reference to the project.

Other small fixes…

There are a few other fixes to tackle: String.IsNullOrWhiteSpace, Enum.HasFlag and StringBuilder.Clear don’t exist in .NET 3.5. They are pretty easy to fix; I wrapped them all up in a simple extension class:

static class Extensions
{
    public static bool IsNullOrWhiteSpace(this string value)
    {
        return (string.IsNullOrEmpty(value)) || (string.IsNullOrEmpty(value.Trim()));
    }

    public static void Clear(this StringBuilder builder)
    {
        builder.Length = 0;
    }

    public static bool HasFlag(this PartialParseResult result, PartialParseResult flag)
    {
        return ((result & flag) == flag);
    }

    public static bool HasFlag(this RecoveryModes modes, RecoveryModes flag)
    {
        return ((modes & flag) == flag);
    }
}

Once we’ve updated any method calls, we can try compiling again, and huzzah! We have now backported System.Web.Razor to .NET 3.5. There are opportunities here for future projects, potentially including backporting the MVC RazorViewEngine to .NET 3.5 (any takers?) and also a .NET 3.5 version of RazorEngine to target those scenarios where you can’t actually move on to .NET 4.0 just yet (and believe me, sadly there are many!).

Shrinking RazorEngine for Sql Server

The RazorEngine project (current release v2 – v3 is still in the works, I just tend to get sidetracked with wacky ideas…) has been a great project to work on, from its inception (thanks Ben @BuildStarted) to where we are with it now. The only downside to it in this instance is that it is a bit complex to configure. What we need to do is create a streamlined version of it. Now, Ben had already done something similar (on Git) when he was assisting with the awesome TinyWeb Razor View Engine. I took that as a starting point, and put together a severely cut-down version of RazorEngine (hereby called RazorEngineLite) that focuses purely on the C# language and HTML markup. No template isolation, no anonymous or dynamic types (which we couldn’t support on .NET 3.5 anyway). It’s a real barebones RazorEngine release. What I did want to do, though, was look ahead at how I might introduce a caching mechanism for templates that could take advantage of our database-level access.

Introducing the ITemplateCache

RazorEngineLite has support for a dual-level caching mechanism. The Runner type that is included uses a RunnerCache type, which is an in-memory cache. This is the primary location RazorEngineLite will look in for previously compiled template types. If the template type has not been previously cached, it will search through a possible series of second-level caching mechanisms to try and grab an instance. The reason I wanted to support this is to allow compiled types to be pushed back to the database, so in scenarios where the database is taken offline (or simply the AppDomain unloaded), when it starts again it doesn’t have to recompile the template – it can just grab the cache details.
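
The ITemplateCache contract itself is tiny; inferring its shape from the SqlTemplateCache implementation shown later, it looks something like:

public interface ITemplateCache
{
    // Adds (or updates) a compiled template entry under the given name.
    void Add(string name, TemplateCacheItem item);

    // Returns the cached entry for the name, or null on a miss.
    TemplateCacheItem Find(string name);
}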

The baseline RazorEngineLite assembly bundles the default ITemplateCache instance which is automatically instantiated by the Runner type. Libraries can pass in their own instances to provide the second-level cache sources.

Also, like its v3 big brother, the caching mechanism supports a hashcode operation. Essentially when a template is cached, the current hashcode is cached alongside it. Should the template content change when we call to parse it again, the cached item will be invalidated (as the new and old hashcodes will be different) – this prompts a new compilation and a re-caching of the updated type.

We make some other performance improvements too; an idea spun from our CodePlex project has evolved into us caching constructor delegates, so when we actually want to spin up new instances of our templates, we can just re-use our delegates instead. This is all handled by our TemplateCacheItem type:

public class TemplateCacheItem
{
    public int HashCode { get; private set; }
    public Type Type { get; private set; }
    public Func<ITemplate> Factory { get; private set; }

    public TemplateCacheItem(int hashCode, Type type)
    {
        HashCode = hashCode;
        Type = type;

        CreateFactory();
    }

    private void CreateFactory()
    {
        var ctor = Type.GetConstructor(new Type[0]);

        Factory = Expression.Lambda<Func<ITemplate>>(
            Expression.New(ctor)).Compile();
    }
}
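
Putting those pieces together, a parse might flow something like this (a sketch; Compile is a hypothetical name standing in for the real internals):

TemplateCacheItem item = cache.Find(name);
int hashCode = template.GetHashCode();

if (item == null || item.HashCode != hashCode)
{
    // Cache miss, or the template content has changed: recompile and re-cache.
    Type type = Compile(name, template);
    item = new TemplateCacheItem(hashCode, type);
    cache.Add(name, item);
}

// Fast instantiation via the cached constructor delegate.
ITemplate instance = item.Factory();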

The Sql Server Mountain

Right, here is where it gets really interesting. The SQL CLR is a sandboxed runtime that runs within Sql Server. Assemblies can be hosted by Sql Server per database, and the AppDomain in which they run is per owner. There are a number of restrictions on what you can and can’t do with hosted assemblies, and the SQL CLR sandbox is very good at enforcing them.

When you register assemblies, you do so through the CREATE ASSEMBLY command, and you specify the permission set the assembly will conform to. Sql Server will verify the integrity of the assembly and ensure that it conforms to the requirements of the requested permission set (SAFE, EXTERNAL ACCESS and UNSAFE). Using SAFE wouldn’t work for what we want to do, as we’re going to need to access the file system. Attempting to register our assembly with EXTERNAL ACCESS fails, as it doesn’t like static fields that are not marked as read-only (I did look to change these, but stumbled at the System.Threading.dll assembly, which would need to be recompiled…). This leaves us with UNSAFE.

Before we start registering anything, we need to make sure that we’ve done some additional configuration:

ALTER DATABASE [ErrorLogging] SET TRUSTWORTHY ON;
sp_configure 'clr enabled', 1
RECONFIGURE

Quick sidebar: instead of registering all three assemblies (RazorEngineLite, System.Web.Razor and System.Threading), I opted to ILMerge them into a single assembly, RazorEngineLite.Merged.dll.

CREATE ASSEMBLY [RazorEngineLite]
AUTHORIZATION [ErrorLoggingClient]
FROM 'C:\CLR\RazorEngineLite.Merged.dll'
WITH PERMISSION_SET = UNSAFE

When the assembly is registered it is exposed to any SQL CLR projects that target the database. Our ErrorLogging project is one such project, so I can now add a reference to our hosted assembly.

Now we are in a position to try and start using Razor, but immediately we are thwarted. Firstly, we can’t add our referenced assembly to the compiler operation, as the C# compiler expects an absolute path to the assembly in order to compile our dynamic templates; and after that, we can’t use Assembly.Load() as it’s explicitly forbidden (even when using the UNSAFE permission set). Frowning face.

Referencing our assemblies for compilation

The C# compiler requires explicit references to non-GAC’d assemblies when it is building the metadata of a new assembly. Our hosted assemblies exist in Sql Server, so we can’t actually provide a folder path to them in their current state. What I did was write the hosted assemblies back out to the file system, and then reference them explicitly via the compiler. The first time our CLR stored procedure is accessed, the static constructor of its parent type ensures that we grab any hosted assemblies (that are user defined) and write them back out:

CREATE PROCEDURE [re].[GetHostedAssemblies]
AS
BEGIN
	SELECT a.[name] AS [HostedName], a.[clr_name] AS [AssemblyName], f.[name] as [Path], f.[content] as [Content]
	FROM sys.assemblies a INNER JOIN sys.assembly_files f ON a.[assembly_id] = f.assembly_id
	WHERE a.is_user_defined = 1
END

private static void EnsureHostedAssemblies()
{
    if (!Directory.Exists(_referenceDirectory))
        Directory.CreateDirectory(_referenceDirectory);

    using (var conn = new SqlConnection("Context Connection = true"))
    {
        conn.Open();

        using (var command = new SqlCommand("re.GetHostedAssemblies", conn))
        {
            command.CommandType = CommandType.StoredProcedure;

            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    string filename = (string)reader["Path"];
                    if (filename.Contains("\\"))
                        filename = filename.Substring(filename.LastIndexOf('\\') + 1);

                    if (!filename.Contains("."))
                        filename += ".dll";

                    byte[] data = (byte[])reader["Content"];
                    using (var writer = new BinaryWriter(new FileStream(Path.Combine(_referenceDirectory, filename), FileMode.Create, FileAccess.ReadWrite)))
                    {
                        writer.Write(data);
                    }
                }
            }
        }
    }
}

This will pick up any assemblies that are hosted and user defined: our RazorEngineLite.Merged assembly, and also the library we are currently debugging with Visual Studio (which, incidentally, is registered without a .dll suffix, which is why the code checks for a missing file extension). We build a list of these assemblies and pass them back into RazorEngineLite’s Compiler type so we can manually reference our required assemblies, as sketched below.
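
With everything written out to _referenceDirectory, the GetReferenceAssemblies method used by the stored procedure later can be as simple as this (a sketch):

private static string[] GetReferenceAssemblies()
{
    // Everything EnsureHostedAssemblies wrote back out becomes an explicit
    // compiler reference for the dynamically compiled template assemblies.
    return Directory.GetFiles(_referenceDirectory, "*.dll");
}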

Loading dynamic assemblies in Sql Server

One of the biggest problems with this whole plan of mine is that I didn’t realise that I couldn’t use Assembly.Load() anywhere in my code. Sql Server enforces that we can’t use this, even with our UNSAFE permission set. That’s a spanner in the works! My workaround for this feels dirty, but actually works really well. There are two parts to it. Firstly, after we compile the dynamic assembly, we need to ensure we register that assembly with Sql Server. I call a procedure in my database which does this for me:

CREATE PROCEDURE [re].[CreateAssembly]
	@name VARCHAR(1000),
	@path VARCHAR(MAX)
AS
BEGIN
	DECLARE @sql NVARCHAR(MAX)
	SET @sql = N'CREATE ASSEMBLY [' + @name + '] AUTHORIZATION [ErrorLoggingClient] FROM ''' + @path + ''' WITH PERMISSION_SET = SAFE';

	EXECUTE sp_executesql @sql;
END

By setting the generated assembly to be finalised on disk instead of memory, we now have a dynamically compiled assembly, which we can register with Sql Server before we attempt to load it.

Next, instead of explicitly using Assembly.Load() (which we are indirectly calling when we access the CompiledAssembly property of the CompilerResults type [ref: RazorEngineLite’s Compiler type]), we can make use of Type.GetType(…), passing in the assembly-qualified name of the type.

string typeName = "RazorEngineLite.Dynamic." + name + ", " + name + ", version=0.0.0.0, culture=neutral, publickeytoken=null, processorarchitecture=msil";
Type type = Type.GetType(typeName);

When the AppDomain tries to resolve that type, it will find it, because the owning assembly has already been registered. No Assembly.Load(), just pure awesomeness!

Let’s look at an example:

[SqlProcedure]
public static int TestCompile(string name, string template, ref string result)
{
    try
    {
        var person = new Person { Name = "Matt", Age = 27 };
        var runner = new Runner(GetReferenceAssemblies(), CreateAssembly, new SqlTemplateCache());

        result = runner.Parse(name, template, person);

        return 0;
    }
    catch (Exception ex)
    {
        SqlContext.Pipe.Send(ex.Message);
        SqlContext.Pipe.Send(ex.StackTrace);

        result = null;
        return -1;
    }

}

In the above sample CLR procedure, we’re creating an instance of our model, Person, and creating a new Runner instance. The Runner instance will call into the Compiler to create the associated template, passing in our referenced assemblies and a callback delegate which is actioned once we have created the dynamic assembly (this is the point at which we register it with Sql Server). We get our parsed template result back, merged with our model information (à la Razor), and return:

DECLARE @result VARCHAR(MAX)

EXEC dbo.TestCompile 'test', 'Hello @Model.Name', @result OUTPUT

PRINT @result

Which results in:

Hello Matt

Adding a second-level cache

I mentioned previously that the RazorEngineLite project supports a dual-level cache mechanism. Well, in this use case, I wanted to take advantage of Sql Server to provide an additional caching mechanism. The reason being that if the AppDomain is unloaded and restarted, there is no real point in recompiling a template that has already been registered with Sql Server. To support this second-level cache, I created a SqlTemplateCache type:

public class SqlTemplateCache : ITemplateCache
{
    public void Add(string name, TemplateCacheItem item)
    {
        if (string.IsNullOrEmpty(name) || item == null)
            return;
            
        using (var conn = new SqlConnection("Context Connection = true"))
        {
            conn.Open();

            using (var command = new SqlCommand("re.Add", conn))
            {
                command.CommandType = CommandType.StoredProcedure;
                command.Parameters.AddWithValue("@name", name);
                command.Parameters.AddWithValue("@hashCode", item.HashCode);
                command.Parameters.AddWithValue("@typeName", item.Type.AssemblyQualifiedName);
                command.Parameters.AddWithValue("@assembly", item.Type.Assembly.GetName().Name);

                command.ExecuteNonQuery();
            }
        }
    }
    
    public TemplateCacheItem Find(string name)
    {
        if (string.IsNullOrEmpty(name))
            return null;

        using (var conn = new SqlConnection("Context Connection = true"))
        {
            conn.Open();

            using (var command = new SqlCommand("re.Find", conn))
            {
                command.CommandType = CommandType.StoredProcedure;
                command.Parameters.AddWithValue("@name", name);

                using (var reader = command.ExecuteReader())
                {
                    if (!reader.Read())
                        return null;

                    int hashCode = (int)reader["HashCode"];
                    string typeName = (string)reader["TypeName"];

                    Type type = Type.GetType(typeName);

                    if (type == null)
                        return null;

                    return new TemplateCacheItem(hashCode, type);
                }
            }
        }
    }
}

We store the type information for the registered template, so when an initial check for a compiled template misses the in-memory cache, the runner can check this table for the secondary cache information, get the Type, and then promote it to the primary (in-memory) cache.

Wrapping it Up

This was probably my biggest challenge yet, and I’m glad I’ve got it done. But before you even consider its actual application in production use, a lot of time needs to be spent ensuring the code is performant and, most importantly… safe. Ensuring the stability of Sql Server should always be a priority when you start integrating CLR procedures into your database platform.

If you like what you see, heck if you don’t like what you see, I welcome any and all feedback!

For those interested in seeing it in action, the project(s) are attached, but you’ll need to do some work to prepare your database to support it.

RazorEngineLite/System.Web.Razor – after building, use \Build\Merge.bat to ILMerge the assemblies into RazorEngineLite.Merged.dll
ErrorLogging
ErrorLoggingSql – baseline structure – make sure you create the database, user and schema ([re]) first.



It’s been a while since I’ve blogged any MEF stuff, and having never blogged anything to do with WCF, I thought it could be good to share something I’ve been playing with these last few days.

The Premise

I’m already using MEF in quite a few production projects, but a lot of work lately has also gone into evolving the architecture of a service bus. Consider a set of individual services hosted as separate WCF services, where the hosting architecture is all essentially the same.

Where does MEF fit in? What I wanted to do was build an architecture that allows me to drop in libraries that contain WCF services, which we then discover with MEF and instantiate.

The Basics

To aid discovery of services via MEF, we need some way of expressing that the service is actually a service, instead of just another part. Now, typically when we are exporting a part, we would label it up either using a common base type, or just with an empty export:

[Export]
[Export(typeof(ISomething))]

What I decided on doing was providing a marker interface. This will allow us to select all the service parts from the container. Let’s look at the contract, and a sample service that might implement it:

public interface IHostedService { }

[ServiceContract]
public interface ICalculatorService : IHostedService
{
  [OperationContract]
  int Add(int operandA, int operandB);
}

Now, when we export our service, we also need a way of describing that service to our consuming types. So that means some metadata; here’s what I’ve got:

public interface IHostedServiceMetadata
{
  string Name { get; }
  Type ServiceType { get; }
}

The metadata interface is quite simplistic, giving us a way of selecting the service via a name. The ServiceType property I’ll come to a bit later. Now we can wrap up the export and the metadata, by combining them into a single export attribute:

[AttributeUsage(AttributeTargets.Class, AllowMultiple = false, Inherited = true),
 MetadataAttribute]
public class ExportServiceAttribute : ExportAttribute, IHostedServiceMetadata
{
  public ExportServiceAttribute(string name, Type serviceType)
    : base(typeof(IHostedService))
  {
    Name = name;
    ServiceType = serviceType;
  }
  
  public string Name { get; private set; }
  public Type ServiceType { get; private set; }
}

The Imports

Now, on the other side of the fence, we need a way of importing our types; but rather than requiring us to manually create instances, it could be good to have MEF automatically create instances of the ServiceHost that actually hosts each service.

MEF is extensible by design, and a core part of that is the ExportProvider model. ExportProvider instances are used to match ImportDefinitions with appropriate parts. So in this sense, we can create an ExportProvider that will automatically create ServiceHost instances.

public class ServiceHostExportProvider : ExportProvider
{
  private static readonly string MatchContractName = AttributedModelServices
    .GetContractName(typeof(ExportServiceHost));
    
  public CompositionContainer SourceContainer { get; set; }
  
  protected override IEnumerable<Export> GetExportsCore(
    ImportDefinition importDefinition,
    AtomicComposition atomicComposition)
  {
    if (importDefinition.ContractName.Equals(MatchContractName))
    {
      var exports = SourceContainer
        .GetExports<IHostedService, IHostedServiceMetadata>();
        
      Func<IHostedServiceMetadata, Export> factory = 
        m => new Export(MatchContractName,
          () => ExportServiceHostFactory.CreateExportServiceHost(SourceContainer, m));
          
      switch (importDefinition.Cardinality)
      {
        case ImportCardinality.ExactlyOne:
          {
            var export = exports.Single();
            return new[] { factory(export.Metadata) };
          }
        case ImportCardinality.ZeroOrOne:
          {
            var export = exports.SingleOrDefault();
            return (export == null)
              ? Enumerable.Empty<Export>()
              : new[] { factory(export.Metadata) };
          }
        case ImportCardinality.ZeroOrMore:
          {
            return exports.Select(e => factory(e.Metadata));
          }
      }
    }
    
    return Enumerable.Empty<Export>();
  }
}

Now, remember I added that ServiceType property to our metadata export; the reason being, we need it to create our service, and it’s not easy to grab Type information from MEF, because MEF is designed for the discovery of unknown parts (in the default programming model). So our ServiceType property allows us to express the actual type before we’ve created an instance.

The other issue we have is the design of the ServiceHost type that System.ServiceModel provides. ServiceHost allows us to pass in a singleton object, or a Type. MEF supports the creation of types with complex constructors via dependency injection, but the ServiceHost(Type) constructor enforces a public default constructor. What I decided to do was create a custom service host type, ExportServiceHost, which supports custom creation of objects via MEF.

The ExportServiceHost type extends ServiceHostBase, with the main requirement being to initialise the ServiceDescription which actually describes our service:

  protected override ServiceDescription CreateDescription(
    out IDictionary<string, ContractDescription> implementedContracts)
  {
    var sd = new ServiceDescription { ServiceType = Meta.ServiceType };
    
    implementedContracts = GetContracts(Meta.ServiceType)
      .ToDictionary(cd => cd.ConfigurationName, cd => cd);
      
    var endpointAttributes = GetEndpoints(Meta.ServiceType);
    
    foreach (var cd in implementedContracts.Values)
    {
      foreach (var endpoint in GetServiceEndpoints(endpointAttributes, Meta, cd))
        sd.Endpoints.Add(endpoint);
    }
    
    var serviceBehaviour = EnsureServiceBehavior(sd);
    serviceBehaviour.InstanceContextMode = InstanceContextMode.PerSession;
    
    foreach (var endpointAttribute in endpointAttributes)
      endpointAttribute.UpdateServiceDescription(sd);
      
    AddBaseAddresses(sd.Endpoints);
    return sd;
  }

There is no wiring-up at this point; we need to extend WCF with an IInstanceProvider and a behaviour to link it through. We add this via our factory class:

internal static class ExportServiceHostFactory
{
  public static ExportServiceHost CreateExportServiceHost(
    CompositionContainer container,
    IHostedServiceMetadata meta)
  {
    var host = new ExportServiceHost(meta);
    host.Description.Behaviors.Add(
      new ExportServiceBehavior(container, meta.Name));
      
    return host;
  }
}
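
ExportServiceBehavior and its instance provider aren’t shown in the post; a rough sketch of the wiring (names and details assumed) looks like this:

public class ExportServiceBehavior : IServiceBehavior
{
  private readonly CompositionContainer _container;
  private readonly string _name;

  public ExportServiceBehavior(CompositionContainer container, string name)
  {
    _container = container;
    _name = name;
  }

  public void AddBindingParameters(ServiceDescription serviceDescription,
    ServiceHostBase serviceHostBase, Collection<ServiceEndpoint> endpoints,
    BindingParameterCollection bindingParameters) { }

  public void ApplyDispatchBehavior(ServiceDescription serviceDescription,
    ServiceHostBase serviceHostBase)
  {
    // Wire a MEF-backed instance provider into every endpoint dispatcher.
    foreach (var dispatcher in serviceHostBase.ChannelDispatchers.OfType<ChannelDispatcher>())
    {
      foreach (var endpoint in dispatcher.Endpoints)
        endpoint.DispatchRuntime.InstanceProvider = new ExportInstanceProvider(_container, _name);
    }
  }

  public void Validate(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase) { }
}

public class ExportInstanceProvider : IInstanceProvider
{
  private readonly CompositionContainer _container;
  private readonly string _name;

  public ExportInstanceProvider(CompositionContainer container, string name)
  {
    _container = container;
    _name = name;
  }

  public object GetInstance(InstanceContext instanceContext, Message message)
  {
    // Resolve the service from MEF by its metadata name, so complex
    // constructors are satisfied by composition, not Activator.CreateInstance.
    return _container.GetExports<IHostedService, IHostedServiceMetadata>()
      .Single(e => e.Metadata.Name == _name)
      .Value;
  }

  public object GetInstance(InstanceContext instanceContext)
  {
    return GetInstance(instanceContext, null);
  }

  public void ReleaseInstance(InstanceContext instanceContext, object instance) { }
}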

One of the key parts of this implementation is how we describe our endpoints.

The Endpoints

One of the early pain points when learning WCF is getting used to the configuration of endpoints and bindings. With the idea of dynamic services, we should try to rely on configuration less. While this could be objectionable, it isn’t too different from, say, a fluent configuration API.

What we can take advantage of is the ability to decorate our service implementation with attributes which define both endpoints and bindings. Now, I imagine we can leverage a base attribute with common endpoint properties, and then specialise it for our different communication schemes: HTTP, TCP, etc.

[AttributeUsage(AttributeTargets.Class, AllowMultiple = true, Inherited = true)]
public abstract class EndpointAttribute : Attribute
{
  protected EndpointAttribute(int defaultPort)
  {
    Port = defaultPort;
  }
  
  public string BindingConfiguration { get; set; }
  public string Path { get; set; }
  public int Port { get; set; }
  
  internal abstract ServiceEndpoint CreateEndpoint(
    ContractDescription description,
    IHostedServiceMetadata meta);
    
  protected virtual Uri CreateUri(string scheme, IHostedServiceMetadata meta)
  {
    var builder = new UriBuilder(scheme, "localhost", Port, Path ?? meta.Name);
    return builder.Uri;
  }
}

So, let’s have a look at a specialised endpoint attribute:

[AttributeUsage(AttributeTargets.Class, AllowMultiple = false)]
public class HttpEndpointAttribute : EndpointAttribute
{
  private const int DefaultPort = 50001;
  
  public HttpEndpointAttribute() : base(DefaultPort) 
  {
    EnableGet = true;
  }
  
  public HttpBindingType BindingType { get; set; }
  public bool UseHttps { get; set; }
  public bool EnableGet { get; set; }
  
  internal override ServiceEndpoint CreateEndpoint(
    ContractDescription description,
    IHostedServiceMetadata meta)
  {
    var uri = CreateUri((UseHttps) ? "https" : "http", meta);
    var address = new EndpointAddress(uri);
    
    var binding = CreateBinding(BindingType);
    return new ServiceEndpoint(description, binding, address);
  }
  
  protected virtual Binding CreateBinding(HttpBindingType bindingType)
  {
      switch (bindingType)
      {
          case HttpBindingType.BasicHttp:
              return (BindingConfiguration == null)
                         ? new BasicHttpBinding()
                         : new BasicHttpBinding(BindingConfiguration);
          case HttpBindingType.WSHttp:
              return (BindingConfiguration == null)
                         ? new WSHttpBinding()
                         : new WSHttpBinding(BindingConfiguration);
          default:
              throw new ArgumentOutOfRangeException("bindingType", "Unsupported binding type: " + bindingType);
      }
  }
}

The Examples

Right, so let’s put this all together, using our sample contract we defined earlier:

[ExportService("SimpleCalculator", typeof(SimpleCalculatorService)), HttpEndpoint]
public class SimpleCalculatorService : ICalculatorService
{
  public int Add(int operandA, int operandB)
  {
    return (operandA + operandB);
  }
}

As you can see, the idea is simple: you focus on your service implementation, and the MEF architecture just gets out of the way. To simplify it further, we could change our architecture to automatically create an HttpEndpointAttribute where no EndpointAttribute instances are defined. A minimal host is sketched below.
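
For completeness, a standalone host might bootstrap things like this (a sketch; the wiring mirrors the ServiceHostExportProvider above):

var exportProvider = new ServiceHostExportProvider();
var container = new CompositionContainer(new DirectoryCatalog("."), exportProvider);
exportProvider.SourceContainer = container;

// The export provider fabricates an ExportServiceHost per discovered service.
foreach (var host in container.GetExportedValues<ExportServiceHost>())
  host.Open();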

For a more advanced example, let’s add in support for a complex constructor:

[ExportService("AdvancedCalculator", typeof(AdvancedCalculatorService)),
 HttpEndpoint, TcpEndpoint]
public class AdvancedCalculatorService : ICalculatorService
{
  private readonly ILogger _logger;
  
  [ImportingConstructor]
  public AdvancedCalculatorService(ILogger logger)
  {
    _logger = logger;
    _logger.Log("Created instance of AdvancedCalculatorService");
  }
  
  public int Add(int operandA, int operandB)
  {
    int result = (operandA + operandB);
    
    _logger.Log("Computing result: " + operandA + " + " + operandB + ": " + result);
    return result;
  }
}

We get the benefits of clean, testable code, and dynamic instantiation with dependency injection for our services.

The Web

When hosting services through IIS, the server handles requests to .svc files. To support our dynamic instantiation, we can use a derivative of Darko’s dynamic IIS-hosted WCF service work, in which we create a custom ServiceHostFactory and power it through a VirtualPathProvider. Our WebServiceHostFactory is shaped like this:

public class WebServiceHostFactory : ServiceHostFactory
{
  private static CompositionContainer _container;
  private static readonly object _sync = new object();
  
  public CompositionContainer Container
  {
    get
    {
      lock (_sync)
      {
        return _container;
      }
    }
  }
  
  public override ServiceHostBase CreateServiceHost(
    string constructorString,
    Uri[] baseAddresses)
  {
    var meta = Container
      .GetExports<IHostedService, IHostedServiceMetadata>()
      .Where(e => e.Metadata.Name.Equals(constructorString, StringComparison.OrdinalIgnoreCase))
      .Select(e => e.Metadata)
      .SingleOrDefault();
      
    if (meta == null) return null;
    
    var host = new ExportServiceHost(meta, baseAddresses);
    host.Description.Behaviors.Add(
      new ExportServiceBehavior(Container, meta.Name));
      
    var contracts = meta.ServiceType
      .GetInterfaces()
      .Where(t => t.IsDefined(typeof(ServiceContractAttribute), true));
      
    EnsureHttpBinding(host, contracts);
    return host;
  }
  
  private static void EnsureHttpBinding(
    ExportServiceHost host,
    IEnumerable<Type> contracts)
  {
    var binding = new BasicHttpBinding();
    
    host.Description.Endpoints.Clear();
    foreach (var contract in contracts)
      host.AddServiceEndpoint(contract.FullName, binding, "");
  }
}
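
The VirtualPathProvider half isn’t shown above, so here is a hedged sketch of how it might be shaped, following Darko’s approach (the type name and the factory reference are illustrative). The idea is to fabricate virtual .svc files whose ServiceHost directive points IIS at our WebServiceHostFactory, so no physical .svc files are required:

public class ServiceVirtualPathProvider : VirtualPathProvider
{
  private static bool IsServicePath(string virtualPath)
  {
    return virtualPath.EndsWith(".svc", StringComparison.OrdinalIgnoreCase);
  }

  public override bool FileExists(string virtualPath)
  {
    return IsServicePath(virtualPath) || base.FileExists(virtualPath);
  }

  public override VirtualFile GetFile(string virtualPath)
  {
    if (!IsServicePath(virtualPath))
      return base.GetFile(virtualPath);

    // The file name (e.g. "SimpleCalculator.svc") becomes the
    // constructorString handed to CreateServiceHost above.
    var name = VirtualPathUtility.GetFileName(virtualPath)
                                 .Replace(".svc", string.Empty);

    // The Factory reference is illustrative; use your factory's full type name.
    var content = "<%@ ServiceHost Service=\"" + name + "\" "
                + "Factory=\"MyNamespace.WebServiceHostFactory\" %>";

    return new ServiceVirtualFile(virtualPath, content);
  }

  private class ServiceVirtualFile : VirtualFile
  {
    private readonly string _content;

    public ServiceVirtualFile(string virtualPath, string content)
      : base(virtualPath)
    {
      _content = content;
    }

    public override Stream Open()
    {
      // Serve the fabricated directive as the file content.
      return new MemoryStream(Encoding.UTF8.GetBytes(_content));
    }
  }
}

The provider would then be registered at application start, via HostingEnvironment.RegisterVirtualPathProvider.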

Now we are in a position to dynamically support WCF services through both standalone code (e.g. windows services, console apps) and hosted code (ASP.NET). Please find the sample code attached, which includes the base code, some example services, some example hosts, and an example client.
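
For the standalone case, a minimal console host might look like the following sketch (the catalog choice and the base address are my assumptions):

// Sketch of a standalone host composing services via MEF.
var catalog = new AssemblyCatalog(typeof(SimpleCalculatorService).Assembly);
var container = new CompositionContainer(catalog);

foreach (var export in container.GetExports<IHostedService, IHostedServiceMetadata>())
{
  var meta = export.Metadata;

  // One host per exported service, using the service name in its base address.
  var host = new ExportServiceHost(meta,
    new[] { new Uri("http://localhost:8080/" + meta.Name) });
  host.Description.Behaviors.Add(new ExportServiceBehavior(container, meta.Name));
  host.Open();
}

Console.WriteLine("Services running; press any key to exit.");
Console.ReadKey();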

I welcome any and all feedback :-)

MEF + WCF ServiceHost


I’ve been doing quite a lot with Razor lately, namely still working on RazorEngine vNext, but also reading a lot of Andrew Nurse’s blog to understand what the Razor parser is doing under the hood.

Turns out, it’s an elegant synergy of two separate parsers: one handles code, the other handles markup. The two parsers do a little dance, and the end result is a series of code statements that write the content of your template/page/document to an output. This in turn is compiled into a class which is executed to obtain the end result.

Jazor (ahem) attempts to be a faithful implementation of the Razor syntax, but built purely in javascript. The general idea is that you can declaratively use Razor syntax in your client-side templates. Let’s have a quick look:

<script type="text/html" id="template01">
  <div>Hello @model.name, you are @model.getAge() year(s) old!</div>
</script>

Now, let’s break down how it works:

Synergistic ballet

Andrew aptly calls this “recursive ping-pong” (read his blog post here); it’s an elegant way of breaking down a text stream into a recursive set of calls to each parser. We start off at the very top level and make an initial call to a markup parser; when the markup parser starts reading the content, it will do so until it reaches a valid transition point, the “@” symbol (well, there is a little more to it than that; we have to take email addresses and @@ escaping into consideration). Once a valid transition point is reached, the markup parser calls the code parser, which in turn attempts to read a code block (an expression, statement, etc.).

The way we do this with Jazor is we have our core object, the parser, which wraps a codeParser and a markupParser. The parser makes a call to the markupParser to start reading the content, and the recursive little dance begins.

Generating code

Much like Razor, we start generating blocks of code which we will reassemble as part of our template execution. If we take our above example, Jazor will generate a literal block, followed by an expression, then a literal, another expression, and then a final literal, so:

"\n  <div>Hello "
model.name
", you are "
model.getAge()
" year(s) old!</div>\n"

We need to transform this series of blocks into something runnable, thankfully, there is eval.

Return of eval

After we’ve finished generating our template blocks, we assemble them back together as a function, which we can then eval and run. We do this by writing the code that wraps the blocks. For the above example, the code we generate would look something like:

(function(model) {
    var r = []; 
    r.push("\n  <div>Hello ");
    r.push(model.name); 
    r.push(", you are "); 
    r.push(model.getAge()); 
    r.push(" year(s) old!</div>\n");
    return r.join(""); 
});

I’ve prettified the code for readability, but essentially we are writing a custom method, and then we evaluate that as a function object, which can be executed directly:

    var func = eval(tmp); // Where tmp is our generated function code.

Executing templates

You can start running templates using the global jazor object:

    var model = {
        name: "Matt",
        getAge: function() { return 27; }
    };
    var result = jazor.parse("Hello @model.name", model);

Alternatively, if you are using jQuery, you can use the $.fn.jazor method on your query object:

    var result = $("#template01").jazor(model);

What does Jazor support?

This initial 0.1.0 release is really just a preview, very much unfinished. Currently we have support for most of the Razor syntax, so we have:

Expressions

    @model.name

Code Blocks

    @{
        var name = "Matt";
    }

Explicit Expressions

    @(model.name)

Line parsing

    @: Hello World

if, for, with, while

    @for (var i in model) {
        Current: @i
    }

And we also have support for helper methods, e.g.:

    @helper writeAge(age) {
        <span>@age</span>
    }
    Hello @model.name, you are @writeAge(model.getAge()) year(s) old!

Helper methods are transformed into internal functions of our template function. E.g., the above would be transformed into:

(function(model) {
  function writeAge(age) {
    var hr = [];
    hr.push("\n    <span>");
    hr.push(age);
    hr.push("</span>\n");
    return hr.join("");
  }
  // etc....
});

This allows us to declaratively define template functions, just as Razor allows.

What’s missing

There is still quite a bit of work to do and bugs to iron out; one big glaring omission currently is support for else/else if statements, and the parser doesn’t really obey xml markup just yet. But for an initial release, I’m hoping it’s enough to get people interested. I’ll push it to github soon so you can start forking and hopefully get involved!

The script file is attached; remember, it’s an early work in progress! Let me know what you think :-)

jazor-dev.0.1.0.js


Facebook-style Search with Knockoutjs and JQuery

I recently stumbled across Knockoutjs, an MVVM framework for javascript that allows us to declaratively add two-way data binding to html elements, with minimal markup and no tedious event registration. There are plenty of examples of using Knockoutjs on the project website, but what I wanted to see is whether I could create a Facebook-style search box and results using some nifty and clean Knockoutjs code.

Now, Knockoutjs isn’t really concerned with the DOM manipulation and event registration of the current generation of javascript frameworks, but that is not to say that it doesn’t play nice. In fact, using Knockoutjs and jQuery together is just a dream.

View Models and Bindings

Our first port of call is to look at how we plan our view model. I can initially see we’re going to need something like:

var viewModel = {
  query: ko.observable(),
  results: ko.observableArray()
};

This allows us to data-bind to the query property and have it automatically updated as our search box changes. The problem we currently have is that when we want to start fetching data from our server, we would have to manually update this observable with our results. To get round this, we can use a nifty extension called ko.mapping. The mapping extension allows us to define a base model and generate a view model from it, e.g.:

var baseModel = {
  query: "",
  results: []
}

var viewModel = ko.mapping.fromJS(baseModel);

The mapping extension will take our base model and generate a view model with observable members within it; essentially, our mapped view model works just like our original view model. What we can do now, though, is when we grab results from our server, we can simply use the mapping extension to update the view model, which in turn updates our UI.

var resultModel = { results: [] }; // populated from the server, e.g. via $.ajax
ko.mapping.updateFromJS(viewModel, resultModel);

So how does this fit in with our UI?

A simple html markup

Our markup is pretty simple: we have a textbox, a button and a list. Chuck in a few more elements and some styling, and we get a nice Facebook-style search box:

<div class="searchBox">
  <span class="searchContainer"><input /><button></button></span>
  <ul class="results"></ul>
</div>

Data-binding the view model

What we need to do first is add a two-way data-binding to our input textbox; the default binding only updates our value when the change event is fired, so to allow a search-as-you-type experience, we need to change this to a keyup event. We can do that with the following binding:

<input data-bind="value: query, valueUpdate: 'afterkeyup'" />

Now, let’s jump ahead a bit, and start looking at how our UI will get updated by our view model:

(function($)
{      
  var baseModel = 
  {
    query: "",
    results: []
  };
  
  var viewModel = ko.mapping.fromJS(baseModel);
  viewModel.doSearch = function()
  {
    var $this = this;
    setTimeout(function()
    {
      var resultModel = null;
      var q = $this.query();
      if (q == "") 
      {
        resultModel = { results: [] };
        ko.mapping.updateFromJS(viewModel, resultModel);
      } 
      else
      {
        $.ajax({
          url: "json.asp",
          data: { "query": q },
          type: "GET",
          dataType: "json",
          success: function(r)
          {
            resultModel = r;
            ko.mapping.updateFromJS(viewModel, resultModel);
          }
        });
      }
    }, 1);
    
    return true;
  };

  ko.applyBindings(viewModel, $("#search").get(0));
})(jQuery);

I’ve added a function to the view model, doSearch, which is responsible for updating our results. We use jQuery’s ajax function to grab our server data (in JSON format), and then map that result straight into our view model. You’ll notice the setTimeout call: unfortunately the keyup event the doSearch function is bound to fires before the bound value is updated, so we need to delay the search and allow our value to update before we try to run it.

We’ve also only applied this view model in the scope of the search element, this allows us to use multiple view models in a single page, targeting different widgets.

Binding events and templating the results

To get it all working, we need to do a couple of things. Firstly, let’s bind our events; we do this in two places: our textbox and our button. The button will respond to the default click event, but the textbox needs to respond to keyup, so…

<input data-bind="value: query, valueUpdate: 'afterkeyup', event: { keyup: doSearch }" />
<button data-bind="click: doSearch"></button>

Using jQuery’s tmpl plugin, we can easily create client side templates, which we need for our result objects. Now, our results may look like this:

results: [
  { type: "header", text: "People" },
  { type: "person", name: "Matthew Abbott", imageUrl: "..." }
]

So, in our template, we need to handle both header elements, and people elements. Let’s have a look:

<script type="text/html" id="resultItem">
  <li class="${ type() }">
    {{if type() == "header"}}
      <span data-bind="text: text"></span>
    {{else}}
      <a href="#">
        <img src="${ imageUrl() }" />
        <span class="text" data-bind="text: name"></span>
      </a>
    {{/if}}
  </li>
</script>

With that template, we add a specific class for styling purposes, based on the result type, and then fill the content using data-bound html elements. Once again, we leave the majority of our data-binding to Knockoutjs, but we let jQuery’s tmpl plugin handle our conditionals.

And this is how we wire it up:

<ul class="results" data-bind='template: { name: "resultItem", foreach: results }, visible: results().length > 0'></ul>

We add our data binding to the list element, specify our template name, and then toggle the visible binding based on the number of results. Without any results present, the list is automatically hidden; when we have results, it shows.

Let me know what you think, you can have a look at the demo here: http://fidelitydesign.net/pub/index.html


I remember looking through some of the included VS2008 samples (here), and there is quite a nice little DynamicQuery project, which includes a demo ExpressionParser type. It wasn’t long until I started having a play with how we could take advantage of it in code.

The first step was to make a few modifications to the ExpressionParser type, namely changing access modifiers and removing the limitation of a fixed list of accessible types. Where do we make these changes? Well, first step: let’s make it public.

public class ExpressionParser
{
  // ...
}

Next, let’s remove the constraint on predefined types within the ParseMemberAccess method:

Expression ParseMemberAccess(Type type, Expression instance)
{
  // ...
        switch (FindMethod(type, id, instance == null, args, out mb))
        {
            case 0:
                throw ParseError(errorPos, Res.NoApplicableMethod,
                    id, GetTypeName(type));
            case 1:
                MethodInfo method = (MethodInfo)mb;
                //if (!IsPredefinedType(method.DeclaringType)) // Comment out this line, and the next.
                    //throw ParseError(errorPos, Res.MethodsAreInaccessible, GetTypeName(method.DeclaringType));
                if (method.ReturnType == typeof(void))
                    throw ParseError(errorPos, Res.MethodIsVoid,
                        id, GetTypeName(method.DeclaringType));
                return Expression.Call(instance, (MethodInfo)method, args);
            default:
                throw ParseError(errorPos, Res.AmbiguousMethodInvocation,
                    id, GetTypeName(type));
        }
  // ...
}

Now we are in a position to start flexing our function muscles. But first, we need to understand how it all works.

ExpressionParser can take a string expression and convert it to an Expression, which we can in turn create a Delegate for. So, given the expression “(a + b)”, if we know the operand types and the return type, we can create an executable Delegate. Let’s see how:

var @params = new[] 
              { 
                Expression.Parameter(typeof(int), "a"),
                Expression.Parameter(typeof(int), "b")
              };
              
var parser = new ExpressionParser(@params, "(a + b)", null);
var @delegate = Expression.Lambda(parser.Parse(typeof(int)), @params).Compile();

With that small bit of code, we’ve created some parameter expressions, named to match the operands in our expression, and then used the ExpressionParser to create an Expression instance that represents our string expression. The final step is to compile the lambda of that expression into an executable Delegate.

The nice thing about the Delegate type is that it is castable to Func<> instances, which makes the compiled code easy to consume. Our example above could be used as such:

Func<int, int, int> func = (Func<int, int, int>)@delegate;
int result = func(1, 2); // Result should be three.

The important thing, which I hope you get, is that we’ve taken a simple string expression and converted it into a much more powerful, strongly-typed Func<> delegate.

To this end, I’ve started building a FunctionFactory type used to easily generate these Func<> instances, e.g.:

public static class FunctionFactory
{
    private static Delegate CreateInternal(Type[] argumentTypes, Type returnType, string expression, string[] argumentNames = null)
    {
        if (argumentNames != null)
        {
            if (argumentTypes.Length != argumentNames.Length)
                throw new ArgumentException("The number of argument names does not equal the number of argument types.");
        }

        var @params = argumentTypes
            .Select((t, i) => (argumentNames == null)
                                  ? Expression.Parameter(t)
                                  : Expression.Parameter(t, argumentNames[i]))
            .ToArray();

        ExpressionParser parser = new ExpressionParser(@params, expression, null);

        var @delegate = Expression.Lambda(parser.Parse(returnType), @params).Compile();
        return @delegate;
    }

    public static Func<TReturn> Create<TReturn>(string expression, string[] argumentNames = null)
    {
        return (Func<TReturn>)CreateInternal(new Type[0], typeof(TReturn), expression, argumentNames);
    }

    public static Func<A, TReturn> Create<A, TReturn>(string expression, string[] argumentNames = null)
    {
        return (Func<A, TReturn>)CreateInternal(new[] { typeof(A) }, typeof(TReturn), expression, argumentNames);
    }

    public static Func<A, B, TReturn> Create<A, B, TReturn>(string expression, string[] argumentNames = null)
    {
        return (Func<A, B, TReturn>)CreateInternal(new[] { typeof(A), typeof(B) }, typeof(TReturn), expression, argumentNames);
    }
}

Now we can do some really cool stuff:

string expression = "(1 + 2)";
var func = FunctionFactory.Create<int>(expression);

int result = func(); // Result should be 3.

expression = "(a * b)";
var func2 = FunctionFactory.Create<int, int, int>(expression, new[] { "a", "b" });

int result2 = func2(10, 50); // Result should be 500.

We can even go a bit further and handle more complex objects: if I have an example type, Person, with an Age property, we can test against it:

expression = "(Age == 5)";
var func3 = FunctionFactory.Create<Person, bool>(expression);
bool isFive = func3(new Person { Age = 5 });
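
For reference, the Person type used here needs nothing more than the property the expression refers to:

public class Person
{
  public int Age { get; set; }
}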

As you can see, the applications of these kinds of expressions are almost endless. Personally, I’ve now been able to introduce dynamic expressions into quite complex rule systems to allow for an additional dimension of flexibility.
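
To give a flavour of the rule-system idea (this snippet is purely illustrative, not from the attached project), rule expressions could be kept as strings, compiled once via FunctionFactory, and then evaluated per candidate:

// Compile rule strings (e.g. loaded from configuration) into
// strongly-typed predicates, then evaluate them against a candidate.
var ruleExpressions = new[] { "(Age >= 18)", "(Age < 65)" };

var rules = ruleExpressions
  .Select(r => FunctionFactory.Create<Person, bool>(r))
  .ToArray();

var person = new Person { Age = 27 };
bool passesAllRules = rules.All(rule => rule(person));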

Please find the demo project attached; it’s not as tidy as usual, but still fun to play with and extend.

FunctionFactory VS2010 Project


Subtemplating with @Include

ASP.NET MVC3 includes the Razor view engine, which was the starting point for the RazorEngine project. Razor views (like their WebForm counterparts) support the ability to include partial views in the parent view. In doing so, it allows you to modularise your views into logical sections.

Much like the Razor view engine, we also support this concept in RazorEngine. We’ve implemented partial view support using the @Include methods built into TemplateBase. We specifically haven’t named them RenderPartial (à la the Razor view engine) as we want to differentiate RazorEngine from its MVC counterpart. So how do we start subtemplating?

Here is an example template:

Here is a sample template. @Include("helloWorld")

When parsing this template, the generated Razor code creates the calls to the TemplateBase methods. During execution, these methods will first determine whether the template named “helloWorld” has been pre-compiled (and as such exists in the current TemplateService’s template cache).

We can handle the resolution of this named template in two ways:

string helloWorldTemplate = "Hello World";
Razor.Compile(helloWorldTemplate, "helloWorld");

string fullTemplate = "Here is a sample template. @Include(\"helloWorld\")";
string result = Razor.Parse(fullTemplate);

In the above example, we are precompiling the template ahead of time. This ensures the template is cached (all compiled templates are named). If we don’t want to precompile the template, we need an ITemplateResolver.

Razor.AddResolver(s => GetTemplateContent(s));

string fullTemplate = "Here is a sample template. @Include(\"helloWorld\")";
string result = Razor.Parse(fullTemplate);

By adding a resolver, we can dynamically locate a template which hasn’t been precompiled. The template is then compiled and executed, and the result is injected into the result of the parent template. An ITemplateResolver is defined as:

public interface ITemplateResolver
{
  string GetTemplate(string name);
}

The result of the template resolver must be the unparsed template content. RazorEngine supports adding multiple template resolvers to a TemplateService, as well as declaring a template resolver through a Func<string, string> instance.
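
As a minimal sketch of a concrete resolver (the file layout and .cshtml extension are my assumptions), a file-backed implementation of the interface above might be:

public class FileTemplateResolver : ITemplateResolver
{
  private readonly string _root;

  public FileTemplateResolver(string root)
  {
    _root = root;
  }

  // Returns the raw, unparsed template content for the given name.
  public string GetTemplate(string name)
  {
    return File.ReadAllText(Path.Combine(_root, name + ".cshtml"));
  }
}

Assuming AddResolver also accepts ITemplateResolver instances (as the text above suggests), registration would mirror the Func<string, string> example.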

Passing Models from Parent to Child

RazorEngine also supports passing model data from a parent model to a child model using @Include<T>(templateName, model). Child models are treated in compilation the same way as parent models.

string helloWorldTemplate = "Hello @Model";
Razor.CompileWithAnonymous(helloWorldTemplate, "helloWorld");

string fullTemplate = "@Include(\"helloWorld\", @Model.Name)! Welcome to Razor!";
string result = Razor.Parse(fullTemplate, new { Name = "World" });
