ContrOCC Hackday IV – Part 2

Carrying on from our first post covering our developers’ adventures in the most recent ContrOCC hackday, here is the final set of projects:

Julian Fletcher – Graphically presenting performance information to the lay developer

We are now amassing ever greater amounts of anonymous performance information from live Local Authority (LA) databases:

  • execution times
  • deadlocks
  • missing index hints.

This information is transferred, via CSV files, from all the LA databases to a shared OCC server. Here, several Stored Procedures may be used to analyse the figures. However, these output slightly impenetrable tables, so I set about presenting the information in a graphical format using SQL Server Reporting Services (SSRS).

Setting this up was fairly tedious – in particular, getting all the necessary permissions (even when everything was on my own machine). However, in summary, this is how it works:

  1. Set up a SQL Server instance with SSRS. This results in a couple of new databases being created:
    Screenshot of Julian's hackday project
  2. Configure SSRS using the Configuration Manager:
    Screenshot of Julian's hackday project
  3. Install the Microsoft SQL Server Report Builder application and design your reports. These can obtain their data from a database on an entirely different server.
  4. Save them to your hard drive and then publish them to the Report Server when they’re ready.
  5. The screenshot shows what this looks like. You can get the averages at all LAs for any one stored procedure:
    Screenshot of Julian's hackday project
  6. You can also drill through to get a by-version breakdown for a particular LA (not down – this is a slightly different concept):
    Screenshot of Julian's hackday project
  7. Just by chance, I’ve chosen a stored procedure which appears to have sped up in later versions!

As I mentioned above, getting the right permissions was a pain, and I still haven’t made the Report Server available to other users, but I hope to do so soon.

Maciej Luszczynski – Cleaning up the developer documentation

I focused on providing How To documentation for new developers on the wiki. I started by cleaning up existing pages. The Home page is now greatly improved, although the Windows Topics section still requires clean-up.

I have also identified some topics that it might be useful to add to the wiki, such as:

  • how to remove a database object
  • how to remove a database table properly
  • how to commit a change to SVN properly
  • how to check the imports flow

Matthew Clarke – CSV Merger

This tool improves the merging of CSV data files, which often cause SVN conflicts – for example, when a column is added.
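
To give a flavour of the problem it tackles, here is a minimal sketch (not the tool’s actual code, and with hypothetical file names) of how the header rows of two conflicting CSV versions can be compared to spot newly added columns:

    // Illustrative sketch only - not CSVMerger's actual implementation.
    // Compares the header rows of the two conflicting versions of a CSV
    // file to spot newly added columns, the case that typically causes
    // an SVN conflict.
    using System;
    using System.IO;
    using System.Linq;

    static class CsvHeaderDiff
    {
        static void Main()
        {
            // Hypothetical file names for the two conflicting versions.
            var mine = File.ReadLines("Data.mine.csv").First().Split(',');
            var theirs = File.ReadLines("Data.theirs.csv").First().Split(',');

            foreach (var column in theirs.Except(mine))
                Console.WriteLine("Column added in theirs: " + column);
            foreach (var column in mine.Except(theirs))
                Console.WriteLine("Column added in mine: " + column);
        }
    }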

The tool can be set to be your default merge tool for CSVs in SVN.

Screenshot of Matthew's hackday project

When you would normally resolve conflicts, CSVMerger now opens instead.

Screenshot of Matthew's hackday project

It will highlight conflicts and missing values for new columns:

Screenshot of Matthew's hackday project Screenshot of Matthew's hackday project

You can even resolve conflicts via the UI (this still needs some polish).

Screenshot of Matthew's hackday project

So, you’ll be able to go home 5 hours early because the merge was done in no time.

Nathan-Madonna Byers – F#/C#

I looked at using various F# Type Providers (from the open-source FSharp.Data.SqlClient package) to interact with ContrOCC databases in a type-safe manner, such that many errors can be caught at compile-time rather than raising run-time exceptions. In particular, I investigated the potential of this approach for writing test data scripts and worked through the implications of exposing this functionality to C# code.

The first thing to note is that there’s little in my prototype that couldn’t have been achieved solely in C#. However, I find F# to be a more natural and expressive language for this kind of thing. In terms of F#/C# interoperability, there were a few head-scratching issues that took a while to figure out – but it all seems to work pretty smoothly. (Famous last words.)

A quick summary of the specific Type Providers:

  • SqlProgrammabilityProvider: This can be used when working with stored procedures. Unfortunately, it doesn’t currently support SPs with output parameters or explicit return values, which rules out many of our SPs.
  • SqlCommandProvider: This can be used when working with arbitrary SQL commands. I’ve used this in my prototype as a lightweight wrapper around SPs (e.g. rather than calling an _Insert SP directly, I can have a simple three-line SQL command: declare an @ID parameter; execute the SP [passing @ID as the output parameter]; select the @ID).
  • SqlEnumProvider: This can be used to create enumerations for (relatively) static lookup data, e.g. from TRefSys_ tables. This means that it may no longer be necessary to define these enumerations manually in our C# code (and then have to worry about keeping them synchronized with the database &c.) – see the sketch below.
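
For context, the SqlEnumProvider would replace hand-maintained enumerations of this kind (the table name and values below are hypothetical), generating the equivalent values from the lookup table at compile time:

    // Hypothetical example of the "old way": a C# enum duplicating a
    // TRefSys_ lookup table by hand, which then has to be kept in step
    // with the database manually.
    public enum ServiceStatus
    {
        Proposed = 1,
        Active = 2,
        Suspended = 3,
        Ended = 4
    }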

In terms of extending the prototype, various factors – the use of the SqlCommandProvider and the succinctness of F# (plus type inference and auto-generics) – mean that the amount of hand-written “boilerplate” code required is fairly small. (It’s not dissimilar to the amount of hand-written SQL needed to do it the old way but should be quicker to write.) The thing that really takes time is the need to analyze the SPs in relation to the database schema, e.g. to decide: which parameters to expose to C#; which should be mandatory, etc.

Finally, a summary of the key benefits:

  • Static typing (and Type Provider “magic”) mean that many errors can be caught at compile-time.
  • Intellisense (including visibility of defaults for optional parameters) and auto-completion.
  • Optional parameters are truly optional: no more “counting commas”!
  • The scripts are (more) readable.
  • Debugging support and full stack traces for exceptions.
  • “Production” SPs – i.e. those used by the UI (and other components) – are always used.
  • Modularization, i.e. we can finally share portions of data setup across tests.

Nigel Palmer – ContrOCC version manager

I set out to create a version number manager to assist with the building of ContrOCC. Version numbers are currently set manually in a variety of different places and there is some risk of one of them being missed. This tool will allow the developer responsible for the build to set all version numbers in the SQL database and the C# code. This will improve the accuracy of the build process and prevent test failures due to incorrect version numbers.

The ContrOCC Version Manager tool has a simple user interface that allows the user to set the version number. This then updates all the required build files to have the new version number.
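
A minimal sketch of the kind of update the tool performs (the file locations and patterns below are hypothetical; the real tool knows the full list of places a ContrOCC version number lives):

    // Illustrative sketch only: stamp a new version number into a set of
    // build files by replacing the version string each file contains.
    using System.IO;
    using System.Text.RegularExpressions;

    static class VersionStamper
    {
        public static void SetVersion(string sourceRoot, string newVersion)
        {
            // Hypothetical example: AssemblyInfo.cs files carry the
            // version in an AssemblyVersion attribute.
            foreach (var path in Directory.GetFiles(sourceRoot, "AssemblyInfo.cs", SearchOption.AllDirectories))
            {
                var text = File.ReadAllText(path);
                text = Regex.Replace(text,
                    @"AssemblyVersion\(""[\d\.]+""\)",
                    "AssemblyVersion(\"" + newVersion + "\")");
                File.WriteAllText(path, text);
            }
            // ...similar replacements would follow for the SQL scripts and
            // any other files that carry the version number.
        }
    }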

Screenshot of Nigel's hackday project

Patrick Donkor – An executable imports/exports specification

I spent my Hack Day looking at how I could use SpecFlow as a way of creating an executable specification for the development of new Imports/Exports.

The idea is to use the Gherkin syntax to write a top-level specification of an import/export. This specification would be agreed upon between an IM and the Client, then passed to the developer, who would code and test against it.
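
To sketch what this might look like in practice, the SpecFlow bindings for such a specification are ordinary C# methods matched to the Gherkin steps by regular expressions. Everything below – the scenario wording, step signatures and helper classes – is hypothetical rather than the actual client payments import:

    // Hypothetical SpecFlow step definitions for a Gherkin scenario such as:
    //
    //   Scenario: Import a valid client payments file
    //     Given a client payments file containing 3 payment rows
    //     When the file is imported
    //     Then 3 payments are recorded against the client
    //
    using NUnit.Framework;
    using TechTalk.SpecFlow;

    [Binding]
    public class ClientPaymentsImportSteps
    {
        private string _file;
        private int _paymentsImported;

        [Given(@"a client payments file containing (\d+) payment rows")]
        public void GivenAClientPaymentsFile(int rows)
        {
            _file = TestData.CreatePaymentsFile(rows);   // hypothetical helper
        }

        [When(@"the file is imported")]
        public void WhenTheFileIsImported()
        {
            _paymentsImported = ImportRunner.Run(_file); // hypothetical helper
        }

        [Then(@"(\d+) payments are recorded against the client")]
        public void ThenPaymentsAreRecordedAgainstTheClient(int expected)
        {
            Assert.AreEqual(expected, _paymentsImported);
        }
    }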

To see if this could be done, I used an existing client payments import as my template. I spent the first part of the day setting up the specification for the import. This went well. The second half of the day was spent trying to implement automated tests to use against the specification. I did encounter a few minor issues that, given time, could be rectified. The screenshot below shows what a sample file looks like.

Screenshot of Patrick's hackday project

Steph Sharp – Improving code integrity checks

I spent the day trying to suppress output from one of our internal code integrity checks (sys_checkschema). This output is an unwanted side effect and can be confusing, especially to newer members of the team. Although I did manage to prevent the output from being displayed, the attempt was ultimately unsuccessful: expected errors were no longer reported and spurious new errors appeared, and I was unable to resolve these problems.

Tom Litt – Migration from within the ContrOCC UI

I took the brief “make the migration process at new customer sites a part of the ContrOCC UI” and tried to see whether it would be feasible to run migrations via a wizard in the main desktop client, instead of the existing process which uses a separate tool. Why? If a customer had this, they wouldn’t have to send data off-site, and they could re-run migration tables as often as they liked. From our side, we could ditch the SQL Server Integration Services packages, which are a merge nightmare.

In production, the wizard would:

  • allow editing of migration settings
  • allow selection of files to be migrated in, or perhaps just point at a directory full of migration CSVs
  • run through each file independently, or run them all in an unattended batch, depending on the level of reporting required
  • report on the level of success/failure for each file.

The file specifications and import code are already written, so this is a case of wrapping the code as stored procedures and putting a UI on it.

In practice I got as far as establishing a wizard, selecting the client file, importing it and writing the existing HTML feedback to the wizard. This could be extended fairly easily to handle all files and report more nicely, and I think this is worthy of future effort.

Tomasz Agurciewicz – Taming our test scripts

ContrOCC’s testing meta language takes some time for developers to learn. That’s why I decided to finish/enhance a tool I made some time ago which makes it easier:

Screenshot of Tomasz's hackday project

The tool parses a test script and tries to present its contents in a sane fashion. All commands are listed in the Command dropdown, and selecting one of them lists available actions. So for example Command “S” (Service) has actions “Add” and “Load”.

Next, for the selected action it finds the appropriate SQL part in the script and tries to list what each column stands for. It then lets you enter all the required data and form a single test script line that you can use later on when preparing tests or data for ContrOCC. It also prints the selected chunk in a separate tab, so you can check what each parameter does if in doubt.

Trevor Hingley – SQL Server projects in VS2012

I looked at SQL Server projects in Visual Studio 2012, with a view to seeing how viable it would be as a tool for ContrOCC database development and deployment.

There are immediate issues in that the SQL Server Data Tools (SSDT) installed on the machine must match the Visual Studio version. In my case, creating a project in VS2012 immediately raised an error, as the installed SSDT were not compatible.

After installing the latest version of SSDT, I found the SQL Server project very easy to use. A database schema can be imported from a live database, a selection of scripts or a .dacpac file very easily. Multiple projects can be used to handle separate schema versions, and the tooling makes comparisons between projects/schemas easy to view and update. Pre- and post-deployment scripts can also easily be added to the project.

Where the viability of the SQL Server project type falls down is in deployment. Schemas can easily be deployed to databases which can be connected to, and scripts can be generated to create a new database or upgrade from a known schema to the latest schema. However, there is no versioning capability when generating scripts, so it is not possible to produce scripts that upgrade from multiple schema versions to the current version. A possible alternative is to generate .dacpac files, which do have versioning capability. However, this was not found to be viable, as a .dacpac generated in VS2012 causes an XML schema error when imported into a SQL Server 2008 database.

ContrOCC Hackday IV – Part 1

Our ContrOCC hackdays give our developers a day to work on tweaks, gripes, improvements, or whole new features of their choosing, and then share those with the rest of the team.

For all the thinking behind our product hackdays, have a look at the intro to the first ContrOCC hackday.

This year we have so many team members that I have split this post in two; we’ll post the remaining projects soon. Here is the first set:

Alan Carter – Converting the distributed tests config file to XML

I worked on converting the configuration file for our distributed testing service from plain text to XML, something I’ve wanted to do for a while. As the configuration for the tests grows ever more complex, the plain text file becomes increasingly arcane and difficult to navigate. An XML file allows the configuration to be laid out in a more structured manner, and lets most text editors provide benefits such as folding and syntax highlighting. It also has an accompanying schema so that we can catch mistakes much more easily.

This work supports a longer term goal I have of getting the config file into SVN and splitting out parts which are likely to be changed frequently.

There’s a little more work needed to finish the new parser, but after that it should be fully usable. I also hope to arrange for the distributed tests service to perform schema validation of the file.
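
As a sketch of what that schema validation could look like (file names hypothetical), the service would only need a few lines using the standard .NET XML APIs:

    // Minimal sketch: validate the distributed tests config file against
    // its schema and report any violations. File names are hypothetical.
    using System;
    using System.Xml;
    using System.Xml.Schema;

    static class ConfigValidator
    {
        static void Main()
        {
            var settings = new XmlReaderSettings { ValidationType = ValidationType.Schema };
            settings.Schemas.Add(null, "DistributedTestsConfig.xsd");
            settings.ValidationEventHandler += (sender, e) =>
                Console.WriteLine(e.Severity + ": " + e.Message);

            using (var reader = XmlReader.Create("DistributedTestsConfig.xml", settings))
            {
                while (reader.Read()) { }   // reading to the end fires the validation events
            }
        }
    }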

Screenshot of Alan's hackday project

Chris Griggs – Converting ContrOCC tools to use Git

I looked at converting our tools that currently use SVN to use Git. We currently use the SharpSVN library to query our repository. There’s a SharpGit project on NuGet that’s supposed to be an almost direct replacement; unfortunately, with only ~200 downloads it’s very immature and not up to the task. I chose libgit2sharp as the best alternative. This is a C# wrapper around the official libgit2 library. The library itself is well documented; the wrapper less so.

I was quickly able to convert the Macro Developer Tool (an internal development tool which uses the C pre-processor to avoid code duplication in SQL) to use Git, as it only looks at the working copy to see which files have been added or modified. I then turned my attention to the PFG (our patch file build tool). I soon learnt that this was not going to be so trivial.
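
For the working-copy case the libgit2sharp code is pleasantly small. A rough sketch (the path is hypothetical, and the member names are as in recent libgit2sharp releases, so the version used on the day may differ slightly):

    // Rough sketch: list added/modified files in the working copy -
    // essentially what the Macro Developer Tool needs to know.
    using System;
    using LibGit2Sharp;

    static class WorkingCopyChanges
    {
        static void Main()
        {
            using (var repo = new Repository(@"C:\src\controcc"))   // hypothetical path
            {
                foreach (StatusEntry entry in repo.RetrieveStatus(new StatusOptions()))
                {
                    Console.WriteLine(entry.State + ": " + entry.FilePath);
                }
            }
        }
    }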

The concept of a tag is somewhat different between Git and SVN, as is the idea of looking at the central repository to find the previous one (although I doubt we’ll all have the entire repo on our machines either). As it’s written, the PFG relies on there being a linear succession of changes between the previous tag and the current one. In Git it’s likely that we would branch out and merge in commits throughout the development of a tag. We would therefore need to compare the two trees without relying on their history. I am satisfied that this is possible, but more thought will be required than was possible on a hack day.

Chris Henry – Visualising the ContrOCC Database

I spent the day looking into ways of visualising the structure of the ContrOCC database. The aim was to see to what extent it divides into modules which could in principle be more rigorously separated in future. The work in practice fell into three parts:

Part 1: List Dependencies

Extracting data on dependencies from our source data. I have a long-standing prejudice against SQL’s own lists of dependencies and, since I knew it wouldn’t be too much work, I instead read dependencies out of the SQL source code using parser code borrowed from the PFG (our patch file build tool).

Part 2: Box & Arrow Graph

I next tried to graph dependencies as a box & arrow diagram. I used the MSAGL library and technically speaking it worked first time. With minimal effort I was able to produce a suitable diagram with tooltips showing the name of the objects.
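
The basic MSAGL pattern is very small – roughly the following (the object names are hypothetical), with nodes created implicitly for each edge endpoint and the finished graph handed to the WinForms GViewer control for layout:

    // Rough sketch of the MSAGL usage: build a graph of dependency edges
    // and hand it to the GViewer control to lay out and display.
    // The object names are hypothetical.
    using Microsoft.Msagl.Drawing;
    using Microsoft.Msagl.GraphViewerGdi;

    static class DependencyGraph
    {
        public static GViewer BuildViewer()
        {
            var graph = new Graph("ContrOCC dependencies");
            graph.AddEdge("PR_Client_Insert", "TClient");        // proc -> table
            graph.AddEdge("PR_Client_Insert", "FN_CheckStatus"); // proc -> function
            graph.AddEdge("FN_CheckStatus", "TRefSys_Status");   // function -> lookup table

            return new GViewer { Graph = graph };                // WinForms control
        }
    }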

Screenshot of Chris' hackday project

However, when I scaled up to the 1000s of objects in our database it didn’t even try to produce a decent layout – it just stretched the whole graph out horizontally:

Screenshot of Chris' hackday project

Or zoomed in:

Screenshot of Chris' hackday project

Part 3: Dependency Structure Matrix

Whilst looking into ways I might influence the layout I stumbled across the idea of the Dependency Structure Matrix and immediately decided it was a better prospect. In true ContrOCC fashion I began writing my own DSM code and by the end of the day had an incomplete, tantalising but ultimately broken implementation.

Follow-up

I intend to:

  • Investigate the commercial tool Lattix or, if it doesn’t impress, fix my broken implementation.
    Postscript: I tried it. I like it. The costs are such that, although we might consider it, this would only be in the context of a future decision to undertake a major programme of database restructuring.
  • Spend some time looking at the results.

Chris Parish – Prototyping a new UI for different screen resolutions

I worked on a prototype of a new user interface layout for ContrOCC which would degrade better on small screens, but take better advantage of large (or at any rate tall) ones. This involved redesigning the Contract UI so that it no longer contained tabsets, but instead presented a series of vertically stacked sections, each of which can be individually expanded and collapsed. There is also a series of links along the top of the page which allow the user to jump to any section of interest.

Screenshot of Chris' hackday project

Ian Evans – An app to capture client-confirmed visit data

ContrOCC records details of client visits to be carried out, and can generate payments/charges based on those visits, but without verification there will always be an element of trust. This is a browser/smartphone-based add-on to ContrOCC to allow client-confirmed visit data capture.

Instead of requiring the service provider to verify attendance by relying on, for example, a client’s landline, the visitor hands the client their own smartphone, which may be their personal device, and the client enters a picture-based PIN. Pictures are used because they are potentially easier to remember than a numeric PIN. For this demo I have used various types of balls, but obviously anything meaningful to the client could be used.

Linked to location-based data, the time and place are verified, with that data instantly available in ContrOCC and matchable against PO/Invoice data. Visit start and end are recorded, together with the option for the client and/or visitor to send a message to the LA.

For this demo, a smartphone surround is shown. Some LA branding has been applied, but it really needs design input!

Screenshot of Ian's hackday project

The code is inline C#, borrowing concepts such as the “no-compile developer experience” from .NET vNext, where the developer never needs to build – just code and save. There is no solution or project; the website is opened directly in any version of Visual Studio, with system-wide code shared from App_Code. Development code is continuously compiled by aspnet_compiler into a deployable “build”, with a compile management screen showing ‘real-time’ code warning/error status.

Ian Lewis – A tool to parse PDF files

The task I chose to undertake on Hack Day was to develop a tool that would parse PDF files such that the contents of the file could be inspected. I managed to get to the stage of recognising the PDF Header and Trailer, and converting the body into tokens which would be used to create PDF Objects.

One application of this utility could be to allow the ContrOCC generated PDF exports to be inspected and checked against a set of expected results. This would allow PDF exports to be checked during the automated tests. Initially, the automated tests would check that the text content is as expected and could be extended to include the positioning of the text and the type of fonts that were used in the export. Ultimately, once the structure of the files is more fully understood, this utility could be used to allow PDF files to be generated.

This was an ambitious Hack Day project that needed more than the one day that was allocated. I estimate that it would take at least a week to complete the parser with the appropriate automated test code.

Jo Lloyd – Alternative help navigation

Our current help technology has many good features, but lacks the ability to deal well with simultaneously providing descriptive (“what is this field?”) and functional (“how do I … ?”) help. While the text, being endlessly flexible, can easily encompass all kinds of help, there is no automated and structured means to point users to different paths through the content.

  • We need to provide a description of every field on every form. But we would also like to provide activity-based help based on where the user is – e.g. how to disregard a property, how to carry forward a payment, whatever.
  • The current approach to Help in widely used applications like Office largely abandons context in favour of search. But I think our context-related help is important, and I would like to maintain the principle that pressing the F1 key always shows you something useful, rather than requiring more clicks or a search before anything is shown.
  • One thing I do like about Office Help is that in addition to a title, it shows you a snippet of each item in a list of links. It would be nice to include something like that.
  • The tree structure on the left of our current help seems to me very useful, and I’d like to retain that functionality (or something like it).

How it is now

Screenshot of Jo's hackday project

  • I’ve illustrated a simple approach (conceptually, that is), which just adds some options to what we already have via a new menu bar offering context-related links of two (or indeed potentially more) kinds. F1 would open the help for the current position, as now, and this would set the initial context of the dynamic menus.
  • I have imagined that tags would be used to set the relationships between items, as it is both relatively low maintenance and familiar. But manually maintained lists could also be used.
  • I have assumed that index and search options remain much as now (although I haven’t illustrated them).
  • I should stress that this is all purely hypothetical and I have not spent any time investigating technical solutions.

Providing more options

Screenshot of Jo's hackday project

The Related menu

Screenshot of Jo's hackday project

The How to menu

Screenshot of Jo's hackday project

How it might look with snippets

Screenshot of Jo's hackday project

Jon Bowen – Planner-style timesheet UI

With significant help from Chris Smith I have created an “alternative” UI for OCCult (our internal timesheet system). This is not intended as a replacement for the existing timesheet entry form, but as an addition that considerably simplifies timesheet entry for people with particular kinds of working patterns:

  1. People who have largely the same project for all their work
  2. People who record their work in their notes by time of day rather than by hours worked
  3. People who prefer visual to verbal reasoning (though that’s not a working pattern!)
  4. People who work on several projects at once, recording some time for each project every day.

The new UI is visually complete, and partly coded. There are still many bugs, and some features that aren’t implemented at all. Some staff have evaluated the new UI and, despite some networking problems which leave the demo file locked and some MS Office installation problems which prevent the new UI from opening on certain workstations, feedback has been broadly positive, with some negative. However, only a minority of staff have provided feedback. The most negative feedback has been from staff who are already happy with the existing UI and don’t see the need for an alternative.

Merry Christmas from the ContrOCC Development Team

ContrOCC Christmas Wordle

This picture was generated based on the most common words in the user guide for the upcoming ContrOCC version 8 roadmap release.

Client, Service, Care and Package are seen to be important, as are Financial, Payment, and Charging. More prosaic common words are dictionary, date, default, report, see, and used.

Our thanks to the entertaining wordle.net which made this fun and easy to do.

The rise of self-service social care

Imagine you’ve been asked to think up a really difficult technical challenge. Well, how about delivering health and social care online, using self-service portals, to frail and elderly citizens?

For good measure, imagine that those people need your services to be as simple and familiar as Amazon or Google, and that the people funding you are Local Authorities facing budget cuts in the order of 60%. Oh, and you have to ensure accessibility – from PCs, tablets, mobile phones and smart TVs – and support offline working in rural communities. It could be described as a perfect storm.

Care Act banner

To encapsulate the challenge, the Association of Directors of Adult Social Services (ADASS) has published a paper on “the development of online services for information and advice supporting the Care Act 2014”, which gives an excellent overview.

Self-service has to succeed

We need to succeed with this challenge if we are to enable individuals, and their carers or agents, to help themselves to plan for and obtain timely care, mainly through resources in the community and care suppliers. Those with high needs will be able to use tools to support a dialogue with the council around self-assessment and start the process towards council funding or applying for care accounts.

The reality is that only self-service will allow citizens to continue to receive quality support, given the well-known pressures on health and social care budgets – our ageing society and the increase in administrative duties on Councils brought about by the Care Act, estimated at a 60% increase, on average, per Council.

The paper highlights that, where a person applies for council services or a care account, the information should transfer to the council’s back office system through an “open API”. This is vital if we are to avoid the age-old dilemma of people filling in information online but then having to tell their story over and again to different professionals. It also discusses the Government Digital Service principles behind good website design. For example, there has been a tendency to use pictures to make social care sites feel friendly, but the paper points to modern research showing that this does not work.

The potential savings of self-service portals have been analysed by Socitm, who have shown the relative costs of different forms of contact:

  • On-line information contacts cost £0.09
  • Telephone contacts cost £2.59
  • Face to face contacts cost £8.15

It’s a massive difference, and one that could mean millions of pounds of cost, or savings, depending on how councils choose to approach this challenge in the next few years.

How can self-service make a real difference, both for stretched Local Authorities and Citizens?

One example is triage using our Online Financial Assessment tools – with simplified needs and financial assessment questionnaires to help people see whether they are likely to be funding themselves or whether they might qualify for supported care.

Another example is for council IT systems such as ContrOCC to accept online self-referrals from citizens, completing as much workflow as possible before handing it over to the council.

The paper also makes reference to OCC’s work on micro commissioning or mini-tendering, where authorities are enhancing their directories so that they can send out a package of needs and ask suppliers to come back with priced offers to meet them.

The paper makes a number of references to our work at CarePlace and one to East Sussex 1Space. Not many other eMarketPlaces get a mention so we are really proud to see the collaborations we have with our customers clearly at the forefront of the challenge.

The key to building innovation

Jeff Gothelf is the author of Lean UX, a book that plugs into the theory of The Lean Startup and looks at how User Experience design processes fit in with the Lean approach.

Jeff was interviewed by Communitech News and described what he believes is the key to building an innovative product or company:

Talk to your customers.

I mean, really have the humility to listen to your customers.

Learn what it is that they love about your product; learn what it is that they hate about your product; learn about what it is that they hate about your competitor’s product; learn about what they love about your competitor’s product.

Listen to them and their needs. Figure out what job they are hiring your product to do, and then make your product do that better than anyone else’s. You will never know that unless you leave the safety and comfort of your office and go out and talk to your customers.

That really inspires me.

I recommend the rest of the interview too, where Jeff discusses the importance of cross-functional teams, mistakes to avoid, and how to remain competitive as you scale your product.

National Grid showcasing three OCC-partnered projects at LCNI

National Grid will be showcasing three of their OCC-partnered projects at the Low Carbon Networks & Innovation Conference (LCNI) in Aberdeen, 20-22 October 2014. The conference is a platform for gas and electricity network operators to showcase projects sponsored under their NIA and NIC innovation schemes.

The OCC-partnered projects are:

  • Ramp Rate Study System: an interface to a numerical modelling tool that allows engineers to study the effects of a sudden change in gas pressure (“ramping”) on the network.
  • Standards Map: allows engineers to see and navigate the thousands of dependencies between National Grid’s technical standards documents.
  • Network Investment Storyboard: an offline GIS tool which allows the presentation and exploration of different investment options in a clear and consistent manner.

At this event, National Grid Gas Transmission will be presenting key innovation projects from their NIA portfolio on the exhibition floor with guest speakers presenting at several breakout sessions. National Grid are taking this opportunity to promote their valuable project collaboration and partnerships (OCC being one) that consistently contribute to the successful delivery of their innovation portfolio. James Whiteford of National Grid will be the guest speaker on Day 2 of LCNI and will present the Network Investment Storyboard. A video of the tool is available on request.

The Standards Map features in the annual summary of Network Innovation Allowance, on page 10. This tool shows the complex dependency network between the standards National Grid have to maintain.

Standards Map

Innovation at National Grid

National Grid is actively involved in supporting innovation projects via its NIA funding mechanism. You can also read a summary of the Gas Transmission project portfolio. You’re welcome to visit the National Grid exhibition stand at the LCNI conference where OCC’s projects will be amongst the highlighted works.

ASP.NET Web API on Linux and Apache with Mono

We had a requirement at OCC to build a RESTful web service that would be able to run on both Windows and Linux servers. Someone suggested we give Mono a look to see whether we could use the ASP.NET Web API framework served up by the Apache web server on Linux. That sounded great: we have a lot of experience with the .NET Framework and a lot of experience with Linux, but so far had not brought the two together.
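
For a flavour of what we wanted to host, an ASP.NET Web API service is just a set of controller classes like the sketch below (the resource name is hypothetical); the rest of this post is about getting something of this shape served by Apache on Linux:

    // A minimal ASP.NET Web API controller of the sort we wanted to run
    // under Mono and Apache. The resource name is hypothetical, and the
    // usual api/{controller}/{id} route is assumed to be registered in
    // the application's configuration.
    using System.Collections.Generic;
    using System.Web.Http;

    public class WidgetsController : ApiController
    {
        // GET api/widgets
        public IEnumerable<string> Get()
        {
            return new[] { "widget 1", "widget 2" };
        }

        // GET api/widgets/5
        public string Get(int id)
        {
            return "widget " + id;
        }
    }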

Banner showing Mono, .NET, ASP.NET, Apache and Linux logos

Mono is an implementation of the .NET framework that can be used to build applications that run on Linux and OS X in addition to Microsoft Windows. There are further details on Wikipedia.

In the past, some concerns have been expressed regarding licensing and software patents and their possible impact on Mono and the applications that depend upon it. The Mono team have addressed these concerns, and recently (April 2014) Microsoft released Roslyn under the Apache 2 license and committed to working closely with the Xamarin team, whose product is based around Mono, which may further calm concerns.

Getting Started on Linux

If you are lucky, your system will have a package available to install Mono; if so, you should use it. At the time I was not so lucky, so I had to get the latest stable source and build that.

In Practice

Because the Mono team are attempting to keep up with developments by Microsoft, the Mono framework does not fully implement the latest .NET Framework. This can lead to some headaches where there is a partial implementation – some methods of a class may not be available under Mono – but often these issues can be worked around.

However, Mono is under very active development and generally manages to keep up surprisingly well.

Pros:

  • Can use Visual Studio for the bulk of development.
  • Once something builds and runs on Windows, it runs very reliably on Mono. I’ve only been looking at web applications, so I can’t comment on a desktop application with a GUI.

Cons:

  • NuGet has limited usefulness with Mono. I had to get the necessary binary files and manage a Libraries directory within the project. Not a big issue in my case but could be if large numbers of externals are required.
  • Have to maintain a separate build on the Linux system. I used makefiles, which was not too onerous; this might be mitigated by MonoDevelop or Eclipse, but it did not seem enough of a problem to be worth pursuing.

Building Mono

Building Mono from source is pretty straightforward but there are a few gotchas.

First it is necessary to make sure a basic development environment is in place; on a CentOS system that’s something along the lines of:

    yum -y install bison glib2 glib2 freetype freetype-devel \
        fontconfig fontconfig-devel libpng libpng-devel libX11 \
        libX11-devel glib2-devel libgdi* libexif glibc-devel \
        urw-fonts java unzip gcc gcc-c++ automake autoconf \
        libtool wget giflib-devel libjpeg-devel libtiff-devel \
        libexif-devel httpd-devel

Source Code

Get the 2.10 source releases of libgdiplus, mod_mono and XSP – at the time of writing the stable build of Mono is at version 3.2.3. It does not appear to be important to have all components at the same version as the main Mono release.

Unpack each in a local directory then configure and build in the following order:

libgdiplus

    cd libgdiplus-2.10
    ./configure --prefix=/opt/mono
    make
    sudo make install

Mono

    cd mono-3.2.3
    ./configure --prefix=/opt/mono --with-libgdiplus=/opt/mono
    make
    sudo make install

Add the /opt/mono/bin path to the system path and also set PKG_CONFIG_PATH to /opt/mono/lib/pkgconfig through /etc/profile (do not forget to export the variables). These variables must be set before building XSP, as it needs the C# compiler; otherwise the configure part of the build will fail.

xsp

    cd xsp-2.10
    ./configure --prefix=/opt/mono
    make
    sudo make install

mod_mono

    cd mod_mono-2.10
    ./configure --prefix=/opt/mono --with-mono-prefix=/opt/mono
    make
    sudo make install
    sudo mv /etc/httpd/conf/mod_mono.conf /etc/httpd/conf.d/

Configuration

It will probably be necessary to add the path to Mono’s shared libraries to the system wide library path. This can be done by either adding the path to /etc/ld.so.conf or, if the /etc/ld.so.conf.d directory exists, by adding a new file there (I suggest following the naming convention used by other files in that directory) with the path to the Mono shared libraries – these will be at /opt/mono/lib. Once this has been done run the ldconfig command as root to update the system.

After building and installing, check the installation by running:

    mono-test-install

Making .NET 4.5 work

When building from source code there is a problem when running applications which require the .NET Framework 4.5 libraries. The xsp4 and mod_mono shell scripts that are executed (located in the /opt/mono/bin directory) refer to executables in the /opt/mono/lib/mono/4.0 directory. Typically the executables themselves are fine, but they refer to the 4.0 libraries, which can be missing some of the newer features. This can result in problems of the form:

    Exception caught during reading the configuration file:
    System.MissingMethodException: Method not found: blah blah blah
      at System.Configuration.ClientConfigurationSystem.System..... yack yack

To fix this first make symbolic links in the 4.5 directory to the 4.0 files:

    ln -s /opt/mono/lib/mono/4.0/xsp4.exe /opt/mono/lib/mono/4.5/xsp4.exe
    ln -s /opt/mono/lib/mono/4.0/mod-mono-server4.exe \
        /opt/mono/lib/mono/4.5/mod-mono-server4.exe

Then edit /opt/mono/bin/xsp4 and /opt/mono/bin/mod-mono-server4 to reference the symbolic links.

Fixing errors caused by colons in the virtual path name

In our application the resources managed by the RESTful interface include the colon ‘:’ character. There appears to be a bug which creeps out when using ASP.NET applications in subdirectories: the static initialisation in System.Web.VirtualPathUtility manages not to read the Web.config system.web/monoSettings verificationCompatibility="1" attribute, so errors are generated when there is a colon in a virtual path name. The fix is to set the monoSettingsVerifyCompatibility member variable to false.

Apache

The Apache mod for Mono passes requests to the mod_mono_server, which is able to support multiple ASP.NET processes.

With the above completed, restart the Apache web server and verify that mod_mono has been picked up:

    httpd -M

You can also inspect the error log after a restart.

MonoServerPath

Mono’s support for ASP.NET under Apache uses a simple module which delegates requests to the mod-mono-server. The MonoServerPath setting in httpd.conf specifies where the mono server is for each location:

    MonoServerPath default "/opt/mono/bin/mod-mono-server4" 

This configures Mono for the default path, which for a standard Apache configuration will be /var/www/html. It is also necessary to configure the application and handler:

    MonoApplications "/:/var/www/html"

    <Location "/">
        Allow from all
        Order allow,deny
        SetHandler mono
    </Location>

In addition, the following options can be set:

    MonoSetEnv default MONO_IOMAP=all
    MonoDebug default true

Restart the server and check the error log file.

If other locations need to be configured, much the same needs to be repeated. For example, if a /test application were to be created it would be configured as:

    Alias /test "/var/www/test"
    MonoServerPath test "/opt/mono/bin/mod-mono-server4"
    AddMonoApplications test "/test:/var/www/test"

    <Location "/test">
        Allow from all
        Order allow,deny
        MonoSetServerAlias test
        SetHandler mono
    </Location>

Other Directives

It is recommended to disable KeepAlive for performance reasons, or at least to restrict the time-out to 2 seconds.

    KeepAlive Off

The CentOS installation of Apache web server sets the name for the log files as access_log and error_log; you may want to have the more conventional .log file extension.

Configuration Tool

The Mono site has a handy online tool that can help with setting up a basic configuration for either a virtual host or an application.

In conclusion

Building a RESTful ASP.NET Web API with Mono, to run on Windows and Linux servers, was pretty straightforward with only a few problems on the way.

Digital Health Hackday

David Hannaford and Mark Stone attended Digital Health Oxford’s Hackday, held at the Oxford Launchpad in Said Business School, Oxford, over the weekend of 12th and 13th July.

David’s team came 2nd for the data visualization tool they built to enable a doctor to show a patient their survival rate if they are at risk from cardiac issues. The app calculates the survival rate of an individual by looking at data on risk factors, such as age, weight, blood pressure, and whether the patient is a smoker or diabetic.

Screenshot of the app

The technologies used to create the app were JavaScript, jQuery and D3 (a JavaScript data visualization library).


David having fun trying out Google Glass

This hackday had a focus on Global Health, Patient Experience, Care, Aging, and Management in Medicine. There was also a Clinicians’ Challenge and Patients’ Challenge, although participants were free to decide what they’d like to work on. There were mentors to help advise and guide, and an expert panel to judge the projects at the end.

Digital Health Hackdays are weekend events that bring together researchers, developers, clinicians, entrepreneurs, patients, engineers, designers and others interested in Digital Health.

CuPiD demo at Said Business School

Reynold Greenlaw and Andy Muddiman attended the Oxford Startups demo night at the Oxford Launchpad in the Said Business School on 15th May, where they demoed the GaitAssist smartphone app that has been developed for the CuPiD project. They demoed it pretty much continuously at our very busy table to the many interested attendees, and our communications manager Janine Smith joined in to lend a hand.

While the user is walking, the app continuously compares the gait of a user with Parkinson’s disease against an “optimal gait” calibrated with a clinician. Based on an evaluation of the specific patient, the most important gait parameters (e.g. step length, cadence) are set for future training sessions. The app gives the user feedback on how well they are doing and uses a technique called audio cueing to help with the Parkinson’s gait.

Andy and Reynold demoing the app

Data on gait performance is visible to the clinician, who can then adjust the exercise remotely, forming a feedback loop between the user and the doctor. You can take a look at the demo itself in this video of Audio Bio-Feedback in action.

This event was organized by the Oxford Academic Health Science Network which brings together clinicians, academics and the public to promote best health in the region.

The Oxford Launchpad was launched in February 2014. It is a new space for entrepreneurial activity housed within the Said Business School, University of Oxford.

EmerGent: Relevance of Social Media in Emergencies

We recently started a new project, EmerGent: a 3-year European project researching the impact of social media in emergencies and how it can be used by the emergency services during a crisis.

Emergent logo

Whenever there’s a large fire, riot, earthquake or other crisis, a lot of information immediately appears on social media – some valuable, some completely erroneous. With our partners we are developing tools and techniques for mining and validating that content.

The EU-FP7 EmerGent project has received funding from the European Union’s Seventh Framework Programme for research, technological development and demonstration under grant agreement no 608352. Find out more about EmerGent and its partners on the project website.