Asynchronous processing in SQL with Service Broker

SQL Server Service Broker was introduced in Microsoft SQL Server 2005. This article suggests several uses for it, the most significant of which is probably the ability to do asynchronous processing. Service Broker can be seen as the database equivalent of Microsoft Message Queuing.

In this scenario, a synchronous process can put a message on a queue and then complete immediately (i.e. return control to the user). A separate process can then take this message off the queue and perform some processing based on it – this processing is effectively done “behind the scenes” as far as the user is concerned. Typically, it might be some relatively time-consuming work which isn’t on the user’s critical path.

As with all asynchronous processes, the problem of how to report errors is a tricky one, given that the user might have gone home for the evening by the time they arise. You might imagine keeping a log of such errors and generating notifications, all of which could also be done from SQL, using Database Mail for example.

Getting started with Service Broker

Service Broker’s implementation may seem unnecessarily complicated, involving not only queues but also things called services, contracts and message types. These have been introduced to allow the functionality to be used in as many weird and wonderful ways as realistically possible but they do present a bit of a steep learning curve. So here’s a very simple set of scripts to get you going and show how it works.

Begin by running the first script from any existing database on a SQL Server, version 2005 or later. It creates a new database called ServiceBrokerDemo and, within it, a Queue, Service, Conversation, Message Type and a couple of example stored procedures.
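
The object names below are made up rather than taken from the script, but the shape of the setup is roughly this – a minimal sketch of a message type, contract, queue and service:

    -- Sketch only: names are illustrative, not those used by the demo scripts.
    CREATE MESSAGE TYPE DemoMessageType VALIDATION = WELL_FORMED_XML;
    CREATE CONTRACT DemoContract (DemoMessageType SENT BY INITIATOR);
    CREATE QUEUE dbo.DemoQueue;
    CREATE SERVICE DemoService ON QUEUE dbo.DemoQueue (DemoContract);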

Once that has completed, run the second script:

This puts some messages on the queue by calling the Proc_Queue_Send_Message stored procedure (which completes quickly). It then calls the Proc_Queue_Process_Messages stored procedure; this reads messages off the queue and processes them, putting the results in the T_QueueProcessingResult table. Proc_Queue_Process_Messages mimics doing “hard work” via a WaitFor Delay '00:00:05' statement.
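
As a rough sketch of what the two procedures boil down to (using the hypothetical names from the previous sketch; the result table’s columns are also illustrative):

    -- Send side: open a conversation and put a message on the queue (quick).
    DECLARE @Handle UNIQUEIDENTIFIER;
    BEGIN DIALOG CONVERSATION @Handle
        FROM SERVICE DemoService
        TO SERVICE 'DemoService'
        ON CONTRACT DemoContract
        WITH ENCRYPTION = OFF;
    SEND ON CONVERSATION @Handle
        MESSAGE TYPE DemoMessageType (N'<Work>example payload</Work>');

    -- Processing side: take one message off the queue and mimic hard work.
    DECLARE @Body VARBINARY(MAX);
    WAITFOR (
        RECEIVE TOP (1)
            @Handle = conversation_handle,
            @Body   = message_body
        FROM dbo.DemoQueue
    ), TIMEOUT 5000;
    WAITFOR DELAY '00:00:05';  -- the "hard work"
    INSERT dbo.T_QueueProcessingResult (ProcessedOn, MessageBody)  -- columns are illustrative
    VALUES (GETDATE(), CAST(@Body AS XML));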

An alternative way of working is implemented in the third script, which should then be run:

Now, Proc_Queue_Process_Messages will be called automatically whenever a message is put on the queue – i.e. there’s no need to explicitly call it.
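
This is Service Broker’s internal activation feature; the third script presumably does something along these lines (queue name again hypothetical, procedure name as in the text):

    -- Sketch: attach the processing procedure to the queue so it runs automatically.
    ALTER QUEUE dbo.DemoQueue
    WITH ACTIVATION (
        STATUS = ON,
        PROCEDURE_NAME = dbo.Proc_Queue_Process_Messages,
        MAX_QUEUE_READERS = 1,
        EXECUTE AS OWNER
    );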

Finally, run the fourth script:

This will again put some messages on the queue. Once it has completed (quickly) the processing will have been started and can be monitored by re-running the final Select * From dbo.T_QueueProcessingResult With (NoLock) statement. Every ~5s, a new row should appear (5 in total).

An aside about Conversations

As in real life, striking up a “conversation” can be a time-consuming process, so conversation handles are generally cached and reused rather than created for every message. The TSys_ConversationHandle table is used to do this.
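
A minimal sketch of that caching idea – the column name is an assumption, not necessarily what TSys_ConversationHandle actually uses:

    -- Reuse a cached conversation handle if one exists; otherwise start one and cache it.
    DECLARE @Handle UNIQUEIDENTIFIER;
    SELECT TOP (1) @Handle = ConversationHandle FROM dbo.TSys_ConversationHandle;
    IF @Handle IS NULL
    BEGIN
        BEGIN DIALOG CONVERSATION @Handle
            FROM SERVICE DemoService
            TO SERVICE 'DemoService'
            ON CONTRACT DemoContract
            WITH ENCRYPTION = OFF;
        INSERT dbo.TSys_ConversationHandle (ConversationHandle) VALUES (@Handle);
    END;
    SEND ON CONVERSATION @Handle MESSAGE TYPE DemoMessageType (N'<Work/>');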

ContrOCC Hackday IV – Part 2

Carrying on from our first post covering the results of our developers’ adventures in the most recent ContrOCC hackday, here is the final set of projects:

Julian Fletcher – Graphically presenting performance information to the lay developer

We are now amassing ever greater amounts of anonymous performance information from live Local Authority (LA) databases:

  • execution times
  • deadlocks
  • missing index hints.

This information is transferred, via CSV files, from all the LA databases to a shared OCC server. Here, several Stored Procedures may be used to analyse the figures. However, these output slightly impenetrable tables, so I set about presenting the information in a graphical format using SQL Server Reporting Services (SSRS).

Setting this up was fairly tedious – in particular, getting all the necessary permissions (even when everything was on my own machine). However, in summary, this is how it works:

  1. You set up a SQL Server to have SSRS. This results in a couple of new databases being created:
    Screenshot of Julian's hackday project
  2. Configure SSRS using the Configuration Manager:
    Screenshot of Julian's hackday project
  3. Install the Microsoft SQL Server Report Builder application and design your reports. These can obtain their data from a database on an entirely different server.
  4. Save them to your hard drive and then publish them to the Report Server when they’re ready.
  5. The screenshot shows what this looks like. You can get the averages at all LAs for any one stored procedure:
    Screenshot of Julian's hackday project
  6. You can also drill through to get a by-version breakdown for a particular LA (not down – this is a slightly different concept):
    Screenshot of Julian's hackday project
  7. Just by chance, I’ve chosen a stored procedure which appears to have sped up in later versions!

As I mentioned above, getting the right permissions was a pain, and I still haven’t made the Report Server available to other users, but I hope to be able to do so soon.

Maciej Luszczynski – Cleaning up the developer documentation

I focused on providing How To documentation for new developers on the wiki. I started by cleaning up existing pages. The Home page is now greatly improved, although the Windows Topics section still requires clean-up.

I have also identified some topics that it might be useful to add to the wiki, such as:

  • how to remove a database object
  • how to remove a database table properly
  • how to commit a change to SVN properly
  • how to check the imports flow

Matthew Clarke – CSV Merger

This tool improves merging of CSV data files which often cause SVN conflicts, for example when a column is added.

The tool can be set to be your default merge tool for CSVs in SVN.

Screenshot of Matthew's hackday project

When you would normally resolve conflicts, CSVMerger now opens instead.

Screenshot of Matthew's hackday project

It will highlight conflicts and missing values for new columns:

Screenshot of Matthew's hackday project Screenshot of Matthew's hackday project

You can even resolve conflicts via the UI (needs polish).

Screenshot of Matthew's hackday project

So, you’ll be able to go home 5 hours early because the merge was done in no time.

Nathan-Madonna Byers – F#/C#

I looked at using various F# Type Providers (from the open-source FSharp.Data.SqlClient package) to interact with ContrOCC databases in a type-safe manner, such that many errors can be caught at compile-time rather than raising run-time exceptions. In particular, I investigated the potential of this approach for writing test data scripts and worked through the implications of exposing this functionality to C# code.

The first thing to note is that there’s little in my prototype that couldn’t have been achieved solely in C#. However, I find F# to be a more natural and expressive language for this kind of thing. In terms of F#/C# interoperability, there were a few head-scratching issues that took a while to figure out – but it all seems to work pretty smoothly. (Famous last words.)

A quick summary of the specific Type Providers:

  • SqlProgrammabilityProvider: This can be used when working with stored procedures. Unfortunately, it doesn’t currently support SPs with output parameters or explicit return values, which rules out many of our SPs.
  • SqlCommandProvider: This can be used when working with arbitrary SQL commands. I’ve used this in my prototype as a lightweight wrapper around SPs (e.g. rather than calling an _Insert SP directly, I can have a simple three-line SQL command: declare an @ID parameter; execute the SP [passing @ID as the output parameter]; select the @ID – see the sketch after this list).
  • SqlEnumProvider: This can be used to create enumerations for (relatively) static lookup data, e.g. from TRefSys_ tables. This means that it may no longer be necessary to define these enumerations manually in our C# code (and then have to worry about keeping them synchronized with the database &c.).
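
For illustration, the three-line wrapper described above looks something like this (the procedure name is hypothetical; any other parameters the SP takes would be surfaced as typed parameters by the provider):

    -- Sketch of the SQL text handed to SqlCommandProvider.
    DECLARE @ID INT;
    EXEC dbo.Proc_Widget_Insert @ID = @ID OUTPUT;
    SELECT @ID;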

In terms of extending the prototype, various factors – the use of the SqlCommandProvider and the succinctness of F# (plus type inference and auto-generics) – mean that the amount of hand-written “boilerplate” code required is fairly small. (It’s not dissimilar to the amount of hand-written SQL needed to do it the old way but should be quicker to write.) The thing that really takes time is the need to analyze the SPs in relation to the database schema, e.g. to decide: which parameters to expose to C#; which should be mandatory, etc.

Finally, a summary of the key benefits:

  • Static typing (and Type Provider “magic”) mean that many errors can be caught at compile-time.
  • Intellisense (including visibility of defaults for optional parameters) and auto-completion.
  • Optional parameters are truly optional: no more “counting commas”!
  • The scripts are (more) readable.
  • Debugging support and full stack traces for exceptions.
  • “Production” SPs – i.e. those used by the UI (and other components) – are always used.
  • Modularization, i.e. we can finally share portions of data setup across tests.

Nigel Palmer – ContrOCC version manager

I set out to create a version number manager to assist with the building of ContrOCC. Version numbers are currently set manually in a variety of different places and there is some risk of one of them being missed. This tool will allow the developer responsible for the build to set all version numbers in the SQL database and the C# code. This will improve the accuracy of the build process and prevent test failures due to incorrect version numbers.

The ContrOCC Version Manager tool has a simple user interface that allows the user to set the version number. This then updates all the required build files to have the new version number.

Screenshot of Nigel's hackday project

Patrick Donkor – An executable imports/exports specification

I spent my Hack Day looking at how I could use Specflow as a way of creating an executable specification for the development of new Imports/Exports.

The idea is to take a specification, written using the Gherkin syntax, that gives a top-level view of an import/export. This specification would be agreed between an IM and the Client. It would then be passed to the developer, who would code and test against it.

To see if this could be done, I used an existing client payments import as my template. I spent the first part of the day setting up the specification for the import. This went well. The second half of the day was spent trying to implement automated tests to use against the specification. I did encounter a few minor issues that, given time, could be rectified. The screenshot below shows what a sample file looks like.

Screenshot of Patrick's hackday project

Steph Sharp – Improving code integrity checks

I spent the day trying to suppress output from one of our internal code integrity checks (sys_checkschema). This output is an unwanted side effect, and can be confusing, especially to newer members of the team. Although I did manage to prevent the output being displayed, the attempt was ultimately unsuccessful: expected errors were no longer reported and spurious new errors appeared, and I was unable to resolve either problem.

Tom Litt – Migration from within the ContrOCC UI

I took the brief “make the migration process at new customer sites a part of the ContrOCC UI” and tried to see whether it would be feasible to run migrations via a wizard available in the main desktop client, instead of the existing process involving a separate tool. Why? If a customer had this, they wouldn’t have to send data off-site, and they could re-run migration tables as often as they liked. From our side, we could ditch the SQL Server Integration Services packages, which are a merge nightmare.

In production the wizard would:

  • allow editing of migration settings
  • allow selection of files to be migrated in, or perhaps just point at a directory full of migration CSVs
  • run through each file independently, or run them all in an unattended batch, depending on the level of reporting required
  • report the level of success/failure for each file.

The file specifications and import code are already written, so this is a case of wrapping the code as stored procedures and putting UI on it.

In practice I got as far as establishing a wizard, selecting the client file, importing it and writing the existing HTML feedback to the wizard. This could be extended fairly easily to handle all files and report more nicely, and I think this is worthy of future effort.

Tomasz Agurciewicz – Taming our test scripts

ContrOCC’s testing meta language takes some time for developers to learn. That’s why I decided to finish/enhance a tool I made some time ago which makes it easier:

Screenshot of Tomasz's hackday project

The tool parses a test script and tries to present its contents in a sane fashion. All commands are listed in the Command dropdown, and selecting one of them lists available actions. So for example Command “S” (Service) has actions “Add” and “Load”.

Next, for the selected action it finds the appropriate SQL part in the script and tries to list what each column stands for. It then lets you enter all the required data and form a single test script line that you can use later on when preparing tests or data for ContrOCC. It also prints the selected chunk in a separate tab, so if in doubt you can look up what each parameter does.

Trevor Hingley – SQL Server projects in VS2012

I looked at SQL Server projects in Visual Studio 2012, with a view to seeing how viable it would be as a tool for ContrOCC database development and deployment.

There are immediate issues in that the SQL Server Data Tools (SSDT) installed on the machine must be in line with the version of Visual Studio. In my case, creating a project in VS2012 immediately raised an error, as the installed SSDT were not compatible.

After installing the latest version of SSDT, I found the SQL Server project very easy to use. Database schema can be imported from a database, a selection of scripts or a .dacpac file very easily. Multiple projects can be used to handle separate version schemas, and the tooling makes comparison between projects/schema easy to view and update. Pre- and post-deployment scripts can also easily be added to the project.

Where the viability of the SQL Server project type fails is in deployment. Schemas can easily be deployed to databases which can be connected to, and scripts can be generated to create a new database or upgrade from a known schema to the latest schema. However, there is no versioning capability when generating scripts, so it is not possible to produce scripts capable of upgrading from multiple schema versions to the current version. A possible alternative is to generate .dacpac files, which do have versioning capability. However, this was not found to be viable, as a .dacpac generated in VS2012 causes an XML schema error when attempting to import it into a SQL Server 2008 database.

ContrOCC Hackday IV – Part 1

Our ContrOCC hackdays give our developers a day to work on tweaks, gripes, improvements, or whole new features of their choosing and then share those with the rest of the team.

For all the thinking behind our product hackdays, have a look at the intro to the first ContrOCC hackday.

This year we have so many team members that I have split this post in two; we’ll post the remaining projects soon. Here is the first set:

Alan Carter – Converting the distributed tests config file to XML

I worked on converting the configuration file for our distributed testing service from plain text to XML, something I’ve wanted to do for a while. As the configuration for the tests grows ever more complex, the plain text file becomes increasingly arcane and difficult to navigate. An XML file allows the configuration to be laid out in a more structured manner, and lets most text editors provide benefits such as folding and syntax highlighting. It also has an accompanying schema so that we can catch mistakes much more easily.

This work supports a longer term goal I have of getting the config file into SVN and splitting out parts which are likely to be changed frequently.

There’s a little more work needed to finish writing the new parser, after which this should be fully usable. I also hope to arrange for the distributed tests service to perform schema validation of the file.

Screenshot of Alan's hackday project

Chris Griggs – Converting ContrOCC tools to use Git

I looked at converting our tools that currently use SVN to use Git. We currently use the SharpSVN library to query our repository. There’s a SharpGit project on NuGet that’s supposed to be an almost direct replacement; unfortunately, with only ~200 downloads, it’s very immature and not up to the task. I chose libgit2sharp as the best alternative. This is a C# wrapper around the official libgit2 library. The library itself is well documented, the wrapper less so.

I was quickly able to convert the Macro Developer Tool (an internal development tool which uses the C pre-processor to avoid code duplication in SQL) to use Git, as it only looks at the working copy to see which files have been added or modified. I then turned my attention to the PFG (our patch file build tool). I soon learnt that this was not going to be so trivial.

The concept of a tag is somewhat different between Git and SVN, as is the idea of looking at the central repository to find the previous one (although I doubt we’ll all have the entire repo on our machines either). As it’s written, the PFG relies on there being a linear succession of changes between the previous tag and the current one. In Git it’s likely that we would branch out and merge in commits throughout the development of a tag. We would therefore need to compare the two trees without relying on their history. I am satisfied that this is possible, but more thought will be required than was possible on a hack day.

Chris Henry – Visualising the ContrOCC Database

I spent the day looking into ways of visualising the structure of the ContrOCC database. The aim was to see to what extent it divides into modules which could in principle be more rigorously separated in future. The work in practice fell into three parts:

Part 1: List Dependencies

Extracting data on dependencies from our source. I have a long-standing prejudice against SQL’s own lists of dependencies and, since I knew it wouldn’t be too much work, I instead read dependencies out of the SQL source code using parser code borrowed from the PFG (our patch file build tool).

Part 2: Box & Arrow Graph

I next tried to graph dependencies as a box & arrow diagram. I used the MSAGL library and technically speaking it worked first time. With minimal effort I was able to produce a suitable diagram with tooltips showing the name of the objects.

Screenshot of Chris' hackday project

However, when I scaled up to the 1000s of objects in our database it didn’t even try to produce a decent layout – it just stretched the whole graph out horizontally:

Screenshot of Chris' hackday project

Or zoomed in:

Screenshot of Chris' hackday project

Part 3: Dependency Structure Matrix

Whilst looking into ways I might influence the layout I stumbled across the idea of the Dependency Structure Matrix and immediately decided it was a better prospect. In true ContrOCC fashion I began writing my own DSM code and by the end of the day had an incomplete, tantalising but ultimately broken implementation.

Follow-up

I intend to:

  • Investigate the commercial tool Lattix or, if it doesn’t impress, fix my broken implementation.
    Postscript: I tried it. I like it. The costs are such that although we might consider it this would only be in the context of a future decision to undertake a major program of database restructuring.
  • Spend some time looking at the results.

Chris Parish – Prototyping a new UI for different screen resolutions

I worked on a prototype of a new user interface layout for ContrOCC which would degrade better on small screens, but take better advantage of large (or at any rate tall) ones. This involved redesigning the Contract UI so that it no longer contained tabsets, but instead a series of vertically stacked sections, each of which can be individually expanded and collapsed. There is also a series of links along the top of the page which allow the user to jump to any section of interest.

Screenshot of Chris' hackday project

Ian Evans – An app to capture client-confirmed visit data

ContrOCC records details of client visits to be carried out, and can generate payments/charges based on those visits, but without verification there will always be an element of trust. This is a browser/smartphone-based add-on to ContrOCC to allow client-confirmed visit data capture.

Instead of requiring the service provider to verify attendance by relying on, for example, a client’s landline, the visitor hands the client their smartphone (which may be the visitor’s personal device) and the client enters a picture-based PIN. Pictures are used as they are potentially easier to remember than a numeric PIN. For this demo I have used various types of balls, but obviously anything meaningful to the client could be used.

Linked to location-based data, the time and place are verified, with that data instantly available in ContrOCC and matchable against PO/Invoice data. Visit start and end are recorded, together with the option for the client and/or visitor to send a message to the LA.

For this demo, a smartphone surround is shown. Some LA branding has been applied, but it really needs design input!

Screenshot of Ian's hackday project

The code is inline C#, borrowing concepts such as the “no-compile developer experience” from .NET vNext, where the developer never needs to build, just code and save. There is no solution or project; the website is opened directly in any version of Visual Studio, with system-wide code shared from App_Code. Development code is continuously compiled by aspnet_compiler into a deployable “build”, with a compile management screen showing ‘real-time’ code warning/error status.

Ian Lewis – A tool to parse PDF files

The task I chose to undertake on Hack Day was to develop a tool that would parse PDF files such that the contents of the file could be inspected. I managed to get to the stage of recognising the PDF Header and Trailer, and converting the body into tokens which would be used to create PDF Objects.

One application of this utility could be to allow the ContrOCC generated PDF exports to be inspected and checked against a set of expected results. This would allow PDF exports to be checked during the automated tests. Initially, the automated tests would check that the text content is as expected and could be extended to include the positioning of the text and the type of fonts that were used in the export. Ultimately, once the structure of the files is more fully understood, this utility could be used to allow PDF files to be generated.

This was an ambitious Hack Day project that needed more than the one day that was allocated. I estimate that it would take at least a week to complete the parser with the appropriate automated test code.

Jo Lloyd – Alternative help navigation

Our current help technology has many good features, but lacks the ability to deal well with simultaneously providing descriptive (“what is this field?”) and functional (“how do I … ?”) help. While the text, being endlessly flexible, can easily encompass all kinds of help, there is no automated and structured means to point users to different paths through the content.

  • We need to provide a description of every field on every form. But we would also like to provide activity-based help based on where the user is – eg how to disregard a property, how to carry forward a payment, whatever.
  • The current approach to Help in widely used applications like Office largely abandons context in favour of search. But I think our context-related help is important, and I would like to maintain the principle that pressing the F1 key always shows you something useful, rather than requiring more clicks or a search before anything is shown.
  • One thing I do like about Office Help is that in addition to a title, it shows you a snippet of each item in a list of links. It would be nice to include something like that.
  • The tree structure on the left of our current help seems to me very useful, and I’d like to retain that functionality (or something like it).

How it is now

Screenshot of Jo's hackday project

  • I’ve illustrated a simple approach (conceptually, that is), which just adds some options to what we already have via a new menu bar offering context-related links of two (or indeed potentially more) kinds. F1 would open the help for the current position, as now, and this would set the initial context of the dynamic menus.
  • I have imagined that tags would be used to set the relationships between items, as it is both relatively low maintenance and familiar. But manually maintained lists could also be used.
  • I have assumed that index and search options remain much as now (although I haven’t illustrated them).
  • I should stress that this is all purely hypothetical and I have not spent any time investigating technical solutions.

Providing more options

Screenshot of Jo's hackday project

The Related menu

Screenshot of Jo's hackday project

The How to menu

Screenshot of Jo's hackday project

How it might look with snippets

Screenshot of Jo's hackday project

Jon Bowen – Planner-style timesheet UI

With significant help from Chris Smith I have created an “alternative” UI for OCCult (our internal timesheet system). This is not intended as a replacement for the existing timesheet entry form, but as an addition that considerably simplifies timesheet entry for people with particular kinds of working patterns:

  1. People who have largely the same project for all their work
  2. People who record their work in their notes by time of day rather than by hours worked
  3. People who prefer visual to verbal reasoning (though that’s not a working pattern!)
  4. People who work on several projects at once, recording some time for each project every day.

The new UI is visually completed, and partly coded. There are still many bugs, and some features that aren’t implemented at all. Some staff have evaluated the new UI, and despite some networking problems which leave the demo file locked, and other MS Office installation problems on some workstations that prevent the new UI from opening, feedback has been broadly positive, with some negative. However, only a minority of staff have provided feedback. The most negative feedback has been from staff who are already happy with the existing UI and don’t see the need for an alternative.

The key to building innovation

Jeff Gothelf is the author of Lean UX, a book that plugs into the theory of The Lean Startup and looks at how User Experience design processes fit in with the Lean approach.

Jeff was interviewed by Communitech News and described what he believes is the key to building an innovative product or company:

Talk to your customers.

I mean, really have the humility to listen to your customers.

Learn what it is that they love about your product; learn what it is that they hate about your product; learn about what it is that they hate about your competitor’s product; learn about what they love about your competitor’s product.

Listen to them and their needs. Figure out what job they are hiring your product to do, and then make your product do that better than anyone else’s. You will never know that unless you leave the safety and comfort of your office and go out and talk to your customers.

That really inspires me.

I recommend the rest of the interview too, where Jeff discusses the importance of cross-functional teams, mistakes to avoid, and how to remain competitive as you scale your product.

ASP.NET Web API on Linux and Apache with Mono

We had a requirement at OCC to build a RESTful web service that would be able to run on both Windows and Linux servers. Someone suggested we give Mono a look to see if we would be able to use the ASP.NET Web API framework served up by the Apache Web Server on Linux. That sounded great; we have a lot of experience with the .NET Framework and a lot of experience with Linux but so far have not brought the two together.

Banner showing Mono, .NET, ASP.NET, Apache and Linux logos

Mono is an implementation of the .NET framework that can be used to build applications that run on Linux and OS X in addition to Microsoft Windows. There are further details on Wikipedia.

In the past, some concerns have been expressed regarding licensing, software patents and their possible impact on Mono and the applications that depend upon it. The Mono team have addressed these concerns, and more recently (April 2014) Microsoft released Roslyn under the Apache 2 license and committed to working closely with the Xamarin team, whose products are based around Mono; this may further calm any remaining concerns.

Getting Started on Linux

If you are lucky, your system will have a package available to install Mono; if so, you should use it. At the time I was not so lucky, so I had to get the latest stable source and build that.

In Practice

Because the Mono team are always catching up with developments by Microsoft, the Mono framework does not fully implement the latest .NET framework. This can lead to some headaches: a partial implementation can mean that some methods of a class are not available under Mono, but these issues can often be worked around.

However, Mono is under very active development and generally manages to keep up surprisingly well.

Pros:

  • Can use Visual Studio for the bulk of development.
  • Once something builds and runs on Windows it runs very reliably on Mono. I’ve only been looking at Web applications so I couldn’t comment on a Desktop application with a GUI.

Cons:

  • NuGet has limited usefulness with Mono. I had to get the necessary binary files and manage a Libraries directory within the project. Not a big issue in my case but could be if large numbers of externals are required.
  • Have to maintain a separate build on the Linux system. I used makefiles, which was not too onerous; this might be mitigated by MonoDevelop or Eclipse, but it did not seem enough of a problem to pursue.

Building Mono

Building Mono from source is pretty straightforward but there are a few gotchas.

First it is necessary to make sure a basic development environment is in place; on a CentOS system that’s something along the lines of:

    yum -y install bison glib2 glib2 freetype freetype-devel \
        fontconfig fontconfig-devel libpng libpng-devel libX11 \
        libX11-devel glib2-devel libgdi* libexif glibc-devel \
        urw-fonts java unzip gcc gcc-c++ automake autoconf \
        libtool wget giflib-devel libjpeg-devel libtiff-devel \
        libexif-devel httpd-devel

Source Code

Get the 2.10 source releases of libgdiplus, mod_mono and XSP – at the time of writing the stable build of Mono is at version 3.2.3. It does not appear to be important for all components to have the same version as the main Mono release.

Unpack each in a local directory then configure and build in the following order:

libgdiplus

    cd libgdiplus-2.10
    ./configure --prefix=/opt/mono
    make
    sudo make install

Mono

    cd mono-3.2.3
    ./configure --prefix=/opt/mono --with-libgdiplus=/opt/mono
    make
    sudo make install

Add the /opt/mono/bin path to the system path and also set PKG_CONFIG_PATH to /opt/mono/lib/pkgconfig via /etc/profile (do not forget to export the variables). These variables must be set before building XSP, which needs the C# compiler; otherwise the configure part of the build will fail.
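
For example, the additions to /etc/profile might look like this, assuming the /opt/mono prefix used above:

    # Added to /etc/profile so the Mono tools and pkg-config files are found
    PATH=$PATH:/opt/mono/bin
    PKG_CONFIG_PATH=/opt/mono/lib/pkgconfig
    export PATH PKG_CONFIG_PATH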

xsp

    cd xsp-2.10
    ./configure --prefix=/opt/mono
    make
    sudo make install

mod_mono

    cd mod_mono-2.10
    ./configure --prefix=/opt/mono --with-mono-prefix=/opt/mono
    make
    sudo make install
    sudo mv /etc/httpd/conf/mod_mono.conf /etc/httpd/conf.d/

Configuration

It will probably be necessary to add the path to Mono’s shared libraries to the system wide library path. This can be done by either adding the path to /etc/ld.so.conf or, if the /etc/ld.so.conf.d directory exists, by adding a new file there (I suggest following the naming convention used by other files in that directory) with the path to the Mono shared libraries – these will be at /opt/mono/lib. Once this has been done run the ldconfig command as root to update the system.
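
For example, on a system with an /etc/ld.so.conf.d directory, something along these lines (run as root; the file name is just a suggestion):

    # Register Mono's shared libraries with the dynamic linker
    echo "/opt/mono/lib" > /etc/ld.so.conf.d/mono-local.conf
    ldconfig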

After building and installing check the installation by running:

    mono-test-install

Making .NET 4.5 work

When building from source code there is a problem when running applications which require the .NET framework 4.5 libraries. The xsp4 and mod_mono shell scripts that are executed (located in the /opt/mono/bin directory) refer to executables in the /opt/mono/lib/mono/4.0 directory. Typically the executables themselves are fine but they refer to the 4.0 libraries which can be missing some of the newer features. This can result in problems of the form:

    Exception caught during reading the configuration file:
    System.MissingMethodException: Method not found: blah blah blah
      at System.Configuration.ClientConfigurationSystem.System..... yack yack

To fix this first make symbolic links in the 4.5 directory to the 4.0 files:

    ln -s /opt/mono/lib/mono/4.0/xsp4.exe /opt/mono/lib/mono/4.5/xsp4.exe
    ln -s /opt/mono/lib/mono/4.0/mod-mono-server4.exe \
        /opt/mono/lib/mono/4.5/mod-mono-server4.exe

Then edit /opt/mono/bin/xsp4 and /opt/mono/bin/mod-mono-server4 to reference the symbolic links.

Fixing errors caused by colons in the virtual path name

In our application the resources managed by the RESTful interface include the colon ‘:’ character. There appears to be a bug which creeps out when using ASP.NET applications in sub-directories: the static initialisation in System.Web.VirtualPathUtility fails to read the Web.config system.web/monoSettings verificationCompatibility="1" attribute. We fixed this by setting the monoSettingsVerifyCompatibility member variable to false; otherwise errors are generated whenever a virtual path name contains a colon.

Apache

The Apache mod for Mono passes requests to the mod_mono_server, which is able to support multiple ASP.NET processes.

With the above completed, restart the Apache web server and verify that mod_mono has been picked up:

    httpd -M

You can also inspect the error log after a restart.

MonoServerPath

Mono’s support for ASP.NET under Apache uses a simple module which delegates requests to the mod-mono-server. The MonoServerPath setting in httpd.conf specifies where the mono server is for each location:

    MonoServerPath default "/opt/mono/bin/mod-mono-server4" 

This configures Mono for the default path, which for a standard Apache configuration will be /var/www/html. It is also necessary to configure the application and handler:

    MonoApplications "/:/var/www/html"

    <Location "/">
        Allow from all
        Order allow,deny
        SetHandler mono
    </Location>

In addition, the following options can be set:

    MonoSetEnv default MONO_IOMAP=all
    MonoDebug default true

Restart the server and check the error log file.

If other locations need to be configured, much the same needs to be repeated. For example, if a /test application were to be created, it would be configured as:

    Alias /test "/var/www/test"
    MonoServerPath test "/opt/mono/bin/mod-mono-server4"
    AddMonoApplications test "/test:/var/www/test"

    <Location "/test">
        Allow from all
        Order allow,deny
        MonoSetServerAlias test
        SetHandler mono
    </Location>

Other Directives

It is recommended to disable KeepAlive for performance reasons or at least restrict the time-out to 2 seconds.

    KeepAlive Off

The CentOS installation of Apache web server sets the name for the log files as access_log and error_log; you may want to have the more conventional .log file extension.

Configuration Tool

The mono site has a handy online tool that can help with setting up a basic configuration for either a virtual host or an application.

In conclusion

Building a RESTful ASP.NET Web API with Mono, to run on Windows and Linux servers, was pretty straightforward with only a few problems on the way.

CuPiD demo at Said Business School

Reynold Greenlaw and Andy Muddiman attended the Oxford Startups demo night at the Oxford Launchpad in the Said Business School on 15th May, where they demoed the GaitAssist smartphone app that has been developed for the CuPiD project. They demoed it pretty much continuously at our very busy table to the many interested attendees, and our communications manager Janine Smith joined in to lend a hand.

The app continuously compares the walking gait of a user with Parkinson’s disease against an “optimal gait” calibrated with a clinician. Based on an evaluation of the specific patient, the most important gait parameters (e.g. step length, cadence) are set for future training sessions. The app gives the user feedback on how well they are doing and uses a technique called audio cueing to help with Parkinson’s gait.

Andy and Reynold demoing the app

Data on gait performance is visible to the clinician who can then adjust the exercise remotely forming a feedback loop between the user and the doctor. You can take a look at the demo itself on this video of Audio Bio-Feedback in action.

This event was organized by the Oxford Academic Health Science Network which brings together clinicians, academics and the public to promote best health in the region.

The Oxford Launchpad was launched in February 2014. It is a new space for entrepreneurial activity housed within the Said Business School, University of Oxford.

Finding time to think

A personal blog post from our Director of Consultancy Projects

As well as delivering products, OCC has a team that specialises in custom software development; they are behind the wide variety of case studies on our website. This combination of teams working on custom software and product development & support is, I think, unique.

Once a year I take the custom development team out for a day to discuss how we might write even better software. This time we crossed the road to the Jam Factory for an open discussion and some prepared talks. The talks were:

  • A toolkit for establishing better experience for our users
  • The judicious use of Agile project management
  • A critical look at SOLID principles
  • Lessons on testing from the product development team

The result was a surprise to me. The conclusion of the team was that the best way to improve our performance was none of the above, but instead improving the office environment – nothing to do with software at all. The team wanted fewer interruptions, less multitasking and more time to quietly think through difficult technical problems.

I was surprised and really interested in the conclusion. Our office is already considered calm and quiet. So we’re going to try a few new things. We’re going to experiment with collaboration tools that allow people to raise a “Do not disturb” flag. We’re using instant messaging that reduces disruption, hosted within OCC to keep the data confidential. I’m encouraging staff to book rooms or work from home during intensive design phases. It’s going to be interesting to see how we get on.

Consultancy Away Day

Tom’s thoughts on AngularJS, TypeScript, SignalR, D3.js and Git

OCC DevCamp has been a great opportunity to put the day-job to one side and try out some new technologies. Here are some thoughts on how I got on with them.

D3, Git, TypeScript, AngularJS and SignalR logos

AngularJS

AngularJS is Google’s offering in the JavaScript application building framework arena. Having previously used Knockout I wanted to use a different framework for comparison. I’ve felt that Knockout isn’t especially well suited to large applications, and it seems to struggle in terms of performance when handling a large number of data bindings.

Angular comes with something of a learning curve (fortunately I shunted most of this on to Mel!) but after a week of use, it feels like a better choice for a larger application. The framework encourages writing code in a modular way, and it seems to be doing less magic in the background; surely good for performance. Data sizes in this application have been small, but there’s never been any indication that Angular has been slowing things down. More research would be possible if we choose to re-write one of our existing Knockout apps, but at first glance it doesn’t seem like that would be cost effective.

Recommendation: I would recommend using this again over Knockout. It would be good to see it put to work on a real project.

TypeScript

I’d been wanting to try out TypeScript for some time, since the prospect of writing JavaScript with straightforward classes, modules, interfaces, etc. and compile-time type checking was a big draw. Again, I found there to be something of a learning curve, but it has been well worth it. There have been plenty of occasions when compiling my TypeScript file has revealed errors in code that I wouldn’t have found until running the application in a web browser if I’d been using vanilla JavaScript; obviously that adds up to a lot of time saved.

A couple of gotchas:

  1. You’ve got to keep remembering to build your solution in order to rebuild the associated JavaScript file. [Edit: it seems like it ought to be possible to get this working with compile-on-save, but this didn’t work out of the box for me]
  2. If working with other JavaScript libraries – and let’s face it you will be – you’ll need to remember to download the associated TypeScript definitions files or face compilation failures. Fortunately these are easy to find on NuGet, as long as you remember to go looking for them, e.g.:

Screenshot of loading the AngularJS TypeScript Definition

Recommendation: TypeScript is definitely worth using. You can start by just renaming your existing files with the .ts extension, and if it all gets too much you can simply drop back to plain old JavaScript.

Example: The best example of the benefits of writing well laid-out code with TypeScript and AngularJS together was when I wanted to drop Poll Graphs into three different web pages served by our application. One had the main presentation and associated JavaScript; the others didn’t. The following code was sufficient to drop graphs into all three pages with no further work:

Angular code to add polls to the presentation

SignalR

SignalR is a library for real-time web functionality. In the best circumstances it will use a WebSocket between server and client, but it will seamlessly fall back to older client-server mechanisms if required.

This was very easy to get started with, and has been very powerful. The code in our Hub class, which receives messages and broadcasts responses, is very clear and concise. The documentation from Microsoft is surprisingly good. However, we’ve seen issues with connections being dropped and messages getting delayed, and there have been problems along the way with the architecture of our application, which was at times getting into circular message->broadcast->message chains. That said, I invested very little time in trying to make the Hub backbone of our application robust.

Recommendation: I don’t think that any of the issues we’ve come across couldn’t be resolved given more development time and production servers. The application architecture needs to be right no matter which library we use for real-time calls. If I had the need to develop real-time interaction I’d give SignalR a chance: it’s a great starting point and builds in a lot of powerful features.

D3.js

D3.js is a JavaScript library for manipulating documents using data, with very powerful capabilities. We only scratched the surface of this, but it produced some very nice bar charts for us. Once I got my head around the syntax, changes to the graphs’ presentation and scaling were very easy to make; a sure sign of a well thought out library.

Screenshot of a D3.js poll chart

Recommendation: I’d definitely use this library again if I have client-side graphing needs. In fact, I just want an excuse to use it on something more complicated than a bar chart!

Git & GitHub

Distributed source control in its own right is definitely a good thing. The ability to pull incoming changes locally, handle them and commit a properly recorded merge action is valuable. I’d already scaled the learning curve for using a distributed source control system when trialling Mercurial some time ago. The concepts came back to me fairly quickly so I didn’t have to waste time on that this week.

I ended up using a few tools to work with our Git repositories:

  • GitHub for Windows client: The UI was very confusing and anything but the simplest functions required dropping out to the command line. Shiver. I quickly stopped trying to use it.
  • Visual Studio integration was ok for simple actions but often seemed to get completely stuck when conflicts needed to be resolved; perhaps there will be fixes for this in the future.
  • TortoiseGit: by the end of the week I was solely using TortoiseGit. Not only because it was very similar to the TortoiseSVN toolset I’m used to, but also because it worked: it did what it had to do, when it had to do it.

Recommendation: I’d still recommend using distributed source control over non-distributed in general. I’d use Tortoise clients if we decided to do this at work.

Chris’ thoughts on HTML5 Canvas, SignalR and Git

HTML5, SignalR and Git logos

HTML5 Canvas

HTML5 canvas is a powerful little container for graphics, of which I only used the smallest set of features. Other features to explore would be changing line colours/styles and drawing shapes. There are also other algorithms for drawing lines (Bézier/quadratic) that might lead to smoother lines. Problems arose due to different implementations between web browsers; Chrome seemed to fare the worst here, especially when reporting the position of events. Some of the other features of HTML5/CSS3 I used were only available in the latest versions of browsers, and work-arounds would need to be found to allow more universal coverage.

The JavaScript used to create canvases was quite simple so didn’t cause many headaches; most of the work went into working out which CSS properties to use, especially considering the odd transformations Reveal applies to slides. I found Firebug very useful for inspecting the code used to create various elements and for debugging JavaScript, although Firefox’s and Chrome’s built-in developer tools offered most of what I needed.

For the annotation itself, pointer events made life a lot simpler than the original process of defining different events for each input type. I would certainly go down this route again, especially since there’s a polyfill available to use them in older browsers.

UI Effects

I also looked at some uses of pure HTML/CSS to replace things often scripted; showing and hiding menus for example. Once established I think this will be a nicer way of doing things. Given more time I would have liked to look more into animations and other effects.

SignalR

SignalR made sending annotations to different screens easy, although, because of our implementation of communications classes in Angular, a nasty hack was needed to send annotations to them via events. Starting again, I would have attempted to do the drawing and receiving of pointer events using Angular itself.

Git

I found Git was not obvious to use to start with, but by the end of the week I had a workflow sorted: Stash Save / Pull / Stash Pop / Merge / Push. This might not have been the recommended route, but it worked for me. I still prefer Mercurial to both Git and SVN.

Andrew’s thoughts on Git, Xamarin and SignalR

My focus for DevCamp was building an Android app that would interact with the presentation web app in realtime. You can read the details of how we achieved that in my previous post Using SignalR in native Android and iOS apps. For that we made use of a toolkit called Xamarin, as well as the ASP.NET SignalR library, and Git for source control.

Xamarin, SignalR, Git, GitHub and Android logos

Xamarin

Xamarin is a toolkit that allows you to use C# to write code for native Android and iOS apps. I had a good experience with this overall. I hit a small bug where changing the namespace caused a crash, but found an issue already raised for this on the Xamarin issue tracker. Other than that I found using it to be very easy.

Writing the code in C# required no knowledge of Java, though you do still need to know about the UI interfaces for the various devices. However, learning these would be much less work than learning the entire library system and back-ends.

We used Portable Class Libraries to allow us to write code that targeted both the ASP.NET MVC project and the Xamarin Android project. This worked perfectly, and meant we could share class structure changes and business logic without needing to recode anything. This demonstrated one of the major potential gains of using this kind of system.

The second major gain is that while the UI development needs some experience with each of the platforms, the back end can be written by existing C# developers without them having to learn additional skills. This could be a major saving even if only developing for a single platform.

I didn’t get to try adding an iOS front end to the application as this requires a Mac build server and an annual development fee to Apple (submitting to the Windows Store also requires an annual development fee). This isn’t a major drawback since these would be required for native development without the use of Xamarin, but it would be nice if the requirement for a Mac could be removed.

I’d like to try this at some point in the future to explore how well the multi-platform development works.

The only caveat I have is that the Xamarin license cost is high (around $999 per developer per platform per year). This would likely be recoverable for a medium sized project in the saved developer hours, but could cause support issues if we were not doing enough work in the area to keep at least one subscription running all the time.

Conclusion: Recommended

Notes on Android Development

Android VMs are very slow and getting a physical device ready for debugging can be a bit of a trial, needing various device drivers and options changing. Once it is set up it tends to run smoothly though.

Writing the activities for an Android UI requires some learning and does not translate directly from Windows/web UI design.

SignalR

SignalR is a real-time communication library for .Net applications. I found it easy to use, and the portable build worked fine on Android (with Xamarin) with little additional effort.

Writing the SignalR calls was straightforward. However, as with any multi-client system, serious consideration of the communication messages being transferred is required. We initially created a system where messages could end up triggering other messages that cascaded into a loop. Careful design would be able to eliminate these occasions, but they’re easy to miss.

Conclusion: Recommended

Git

We decided to use Git for our source control during DevCamp. It’s a distributed system, which means working with local repositories and pushing to / pulling from others. We use SVN in our usual day-to-day work and are pretty happy with it for the most part, but we were interested in trialling what’s becoming the go-to system for many developers out there.

GitHub

GitHub is the most popular hosted solution for Git projects and seems a decent repository for code. If I wanted a shared repository I’d have no issue using this. It allows access via SVN as well (though I didn’t try this), so this would allow continued use of SVN if required. Recommended.

Git GUIs

Visual Studio Tools for Git – I had a few issues with this. I didn’t find that it worked well compared to other versions, and had major issues with authentication. I didn’t find it added much over the other Git access methods. Not recommended.

TortoiseGit – This was a great interface to Git, though this may just be because I’m already familiar with TortoiseSVN. Recommended if you like TortoiseSVN.

Overall

I’m unsure about this. I like the local commits and branches, but I had a number of cases where local changes I made didn’t seem to get merged into the shared trunk: I could see the changes in my local stream, but when the merge occurred they were ignored. This may be down to misunderstandings on my part, but I didn’t feel that this made the system reliable enough for production use.

Conclusion: Uncertain – further investigation needed