Adding a text size widget to your site using CSS and Sass

A requirement we hear from many of our Government customers is an on-screen text size widget: a sizable number of their users with sight impairments prefer to have one available when they browse a website.

These accessibility widgets are tough to implement cleanly using HTML and CSS, but the advent of CSS preprocessors such as Sass and LESS makes the job much easier. In this post we’ll see how we can use Sass to create a text size widget.

What we’re aiming for is the standard row of “A” characters increasing in size to denote changing the text size on the page, like this:

Screenshot showing three A characters of increasing size

You can take a look at one in action on one of our sites.

What we will aim to do is add a class (small/medium/large) to the body element of the page, which we can then refer to in our CSS: when the body has a class of small the font-size should be x; when medium, y; when large, z.

There are two elements to making the widget work:

  1. JavaScript that adds the widget to the page, records the setting in a cookie, and adds a class to the page body so that our CSS can react to it.
  2. CSS that sets the font-size based on the body class. A Sass mixin makes this much more manageable.

JavaScript

Here’s the code, and we’ll go through it bit-by-bit:

$(document).ready(function () {
    // Add in the widget
    $('#text-size-widget').append('<ul><li class="small"><a href="#" class="selected">A</a></li><li class="medium"><a href="#">A</a></li><li class="large"><a href="#">A</a></li></ul>');
    
    // Read the current text-size from a cookie
    if ($.cookie('TEXT_SIZE')) {
        var cookieTextSize = $.cookie('TEXT_SIZE');
        $('body').addClass(cookieTextSize);
        $('#text-size-widget a').removeClass('selected');
        $('#text-size-widget li.' + cookieTextSize + ' a').addClass('selected');
    }
    
    // Add the resize event handler to the widget's links
    $('#text-size-widget a').click(function () {
        var $el = $(this);
        var textSize = $el.parent().attr('class');
        $('body')
            .removeClass('small medium large')
            .addClass(textSize);
        $('#text-size-widget a').removeClass('selected');
        $el.addClass('selected');
        $.cookie('TEXT_SIZE', textSize, { path: '/', expires: 10000 });
        return false;
    });
});

We’re using jQuery here (along with the jquery.cookie plugin for the cookie handling), so the first line sets this script to run once the DOM is ready.

We then add in the HTML for the widget by inserting a list of links into the element on the page with an id of text-size-widget, so you’ll need to have one of those in your page. We insert this using JavaScript so that users who don’t have JavaScript enabled won’t find a non-functioning list of links on their page.
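
All the page itself needs to provide is an empty placeholder element with that id – a div is used here, but any element will do:

<div id="text-size-widget"></div>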

We then check to see if the user has a cookie called TEXT_SIZE; if they do, we read it, add the appropriate class to our body element, and refresh which of our “A” links is selected. This makes sure that the user’s choice carries over from page to page on our site.

Finally we set what happens when the user clicks one of the “A” links:

  • We detect which link they clicked and store the small/medium/large class in a variable to use later
  • We remove whatever text size class might be on the body currently and set the new one
  • We update which of the “A” links is selected
  • We store the new text size in the TEXT_SIZE cookie
  • We return false so that the link does not perform its standard functionality (in this case adding “#” to the URL)

That’s the widget, cookie and body classes dealt with, now to tell the CSS how to handle it.

CSS

What we’re looking to end up with is along the lines of:

p { font-size: 10px; }
body.medium p { font-size: 12px; }
body.large p { font-size: 14px; }

Clearly it would be massively frustrating to have to write all three options out every time you wanted to set the font-size of something – this is where our Sass mixin comes in.

Mixins let you extract repetitive code out of your CSS into something that looks a bit like a function (you can even pass in values/settings), which you can then call from your CSS whenever you need it.

Here’s what our mixin looks like:

@mixin font-size($baseFontSize, $mediumMultiplier: 1.2, $largeMultiplier: 1.4) {
    font-size: $baseFontSize + px;
    body.medium & { font-size: ($baseFontSize * $mediumMultiplier) + px; }
    body.large &  { font-size: ($baseFontSize * $largeMultiplier) + px; }
}

The mixin is called font-size and you pass in the “base” font size (i.e. the size to use for the smallest setting, which is the default), and optionally you can set the multipliers we use to go up to the medium and large sizes.

We’ve then got a bit of Sass code that generates our CSS using the base font size and the multipliers.

Of course if you prefer to use ems or another unit you can replace the px with whatever you want.

To use this mixin all we need to write is:

p {
    @include font-size(10);
}

And when we compile the Sass it will generate exactly what we want. So all we need to do is replace any normal font-size declarations with a call to our new mixin and we’re away!
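
With the default multipliers, that single @include compiles to exactly the CSS we sketched out at the start of this section:

p { font-size: 10px; }
body.medium p { font-size: 12px; }
body.large p { font-size: 14px; }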

Compiling Sass with Gulp in Visual Studio

We love Visual Studio at OCC. It’s an incredible IDE for software and web development, and Microsoft puts impressive effort into keeping up with the directions in which developers find themselves moving.

In this case we’re talking front end web development and specifically:

  • Gulp.js – a JavaScript-powered automated build system that uses Node to perform the tasks you find yourself doing over and over
  • Sass – a language extending CSS (that is compiled to produce CSS) to give developers more power when creating stylesheets

For quite a while Visual Studio has supported Sass, specifically through the Web Essentials plugin, which has offered IntelliSense, CSS preview and compilation. But much of this functionality is being removed from the plugin because the broader developer community is moving to build tools like Gulp (and its peers, such as Grunt), and Web Essentials wants us to use them too.

But how do we go about doing that?

Setting up Gulp in Visual Studio 2013

Node.js

The first step is to install Node.js, the JavaScript runtime that Gulp uses to run its tasks. Importantly, you’ll need v0.10.28 (there’s an x64 folder in there, which most of us will want) as more recent versions are not yet supported by the gulp-sass compiler plugin. It looks like this will be fixed very soon though, which is great!

For Sass compilation we’ll need to install two Node packages, so open a command prompt and run the following npm (Node Package Manager) commands:

  • npm install gulp -g
  • npm install gulp-sass -g

The -g is a flag to install globally, so you can use the modules anywhere.

Now we can set up Gulp for your solution. Create a file called package.json in the root of your solution and add in the following information:

{
    "name": "Project Name",
    "version": "1.0.0",
    "description": "Project Description",

    "devDependencies": {
        "gulp": "3.8.11",
        "gulp-sass": "1.3.2"
    }
}

You’ll need to make sure those gulp and gulp-sass version numbers match those you have installed.

This will allow any other developer to come along and run npm install on this directory and Node will download and install everything they need!

Give Gulp some tasks

Create a gulpfile.js file in your solution and add the following JavaScript to get started:

var gulp = require('gulp');
var sass = require('gulp-sass');

gulp.task('sass-compile', function () {
    gulp.src('./Content/style.scss')
        .pipe(sass())
        .pipe(gulp.dest('./Content/'));
});

gulp.task('watch-sass', function () {
    gulp.watch('./Content/*.scss', ['sass-compile']);
});

Here we have three sections:

  1. First we create gulp and sass variables, using require to load the gulp and gulp-sass modules.
  2. Then we create a Gulp task called ‘sass-compile’ and tell it to find our Sass stylesheet, compile it and output the result to our Content directory.
  3. Finally we create a ‘watch-sass’ task that watches all Sass files in our folder and runs the sass-compile task whenever anything changes.

Now, if we ran gulp sass-compile at a command prompt in our project, it would compile our Sass. If we ran gulp watch-sass it would start watching. But running commands all the time isn’t much fun…

Task Runner Explorer

The answer is to install the Task Runner Explorer Visual Studio Extension, which will give us some nice UI for running and automating Gulp tasks from within Visual Studio.

Once you’ve installed it (and restarted VS), right-clicking on your gulpfile will show a new ‘Task Runner Explorer’ option, which opens a panel showing a list of your Gulp tasks on the left and some binding options (or the output of whatever has been run) on the right.

Task Runner screenshot

If you double-click on sass-compile it will run and compile the Sass. If you double-click on watch-sass it will start watching. Much better!

Task Runner execution results screenshot

The icing on the cake is that you can right-click on one of your tasks and bind it to one of the following Visual Studio events:

  • Before Build
  • After Build
  • Clean
  • Solution Open

Right-click on your watch-sass task and bind it to Solution Open. Now, every time you open your solution, Gulp will automatically start watching your Sass files and compiling them whenever it needs to – perfect!

What else can Gulp do?

Gulp doesn’t just do Sass compiling, it’s hugely powerful. Here are just a few of the tasks Gulp can help you with:

  • Minify your CSS, JavaScript, HTML or images
  • Combine your CSS files
  • Generate CSS sprites
  • Generate favicons
  • Rename files
  • Check if files have changed or not
  • Output messages/errors
  • Add CSS vendor prefixes
  • Check code quality with hinting

More good news: Visual Studio 2015 will come with first-class support for Gulp as standard, so these tools will be built right into the IDE.

Search engine rankings for Social Care: the 001 Taxi Problem

If you’re more than 20 years old you will remember the annual delivery of your local telephone directory; the thud as it lands heavily in your hallway, often with a crumpled cover, a testament to the efforts of the delivery boy to fit the tightly published pages through your letter box. For me, this directory was my first exposure to search engine rankings, with its wonderfully named 001 Aardvark Taxis vying angrily with 001 Ace taxis for first place in the listings.

Pigot’s directories, published in 1814, were the first widely available English reference source for everything from tradesmen to public houses. After Bell’s invention of the telephone in 1876, telephone directories became common, available first to the secretaries of the nobility and then to the middle classes.

Coming First

The 001 Taxi problem is as alive today as it was in the phone directories of my youth. Today it manifests itself in the ‘war’ of search engine rankings – a continuous cut and thrust between businesses seeking to steal a march on their competitors whilst the search engine developers try to outwit the tricks of website developers. For some businesses, failing to appear on the first page of Google’s search results means anonymity and business failure; paying for AdWords is expensive.

I became acutely aware of this problem in our recent development of Social Care information and advice portals. These portals support a legal cornerstone of the Government’s 2014 Care Act, which requires local authorities to establish and maintain an information and advice service in their area. At OCC, we have sought to develop a smart ranking system that works on keyword relevance (i.e. the prominence of the word in the service profile). However, service providers may list keywords several times in their description to optimise their profile, which we don’t want: we wish to rank services by relevance, so a keyword (or thesaurus equivalent) found once in the description should earn the same relevance points as one found ten times. Google’s famous PageRank algorithm is not available to us in the Social Care space, so we need to find another way to help our users find the highest quality information for their search.

To display our results we have introduced a secondary search ranking facet based on service profile quality score. The idea is that we establish a scoring mechanism for service profiles similar to the relevance score; points are awarded for logo, accreditation, quality of description, images, contact details, etc.

However, this still leaves a very serious problem for service users, providers and Local Authorities: if you type in the same keywords you will get exactly the same search results in the same order every time. This is a reality – most people will use the same words to search for services: words like ‘Home Care’ or ‘Meals on Wheels’. So the same providers will always come first in your list and the same providers last. This is bad for creating a diverse and competitive market and bad for citizen choice. In practice, well over 20 appropriate residential care providers may all have scored somewhere between 98% and 100% in our ranking algorithms. The small percentage differences are vital for the order of results, but we surmise that they are trivial in terms of the actual suitability of the service for the citizen.

Randomization

One solution – randomization. We could take results that have scored almost the same (and we can define what ‘almost’ means in percentage terms) and then randomize the results. All those providers that have scored ‘almost the same’ are presented in a different order each time you search – the actual order being determined by a randomization algorithm.
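
As a rough sketch of how this could look in SQL (the table and column names here are purely illustrative, not our actual schema), near-equal scores can be banded together and the order within each band left to chance:

    -- Sketch only: names are illustrative, not our production schema.
    -- Scores are grouped into 2% bands; services in the same band come back
    -- in a different, random order on every search.
    SELECT   ServiceID, ServiceName, Score
    FROM     dbo.ServiceSearchResult
    ORDER BY CEILING(Score / 2.0) DESC, -- band 'almost the same' scores together
             NEWID();                   -- random order within each band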

This simple technique helps to create a diverse market: it shares business across a range of suitable providers and helps keep the market vibrant and competitive. There is no incentive for providers to give themselves silly names; no more 001 Aardvark Care Homes. However, we’re worried about the impact on usability – how many search engines do you use where the results come out in a different order each time for the same search? If you send your search link to a friend and tell them to look at result #2, they will see a different service in that position.

If we’re going to use randomization to help create a vibrant Social Care market we’re going to have to make sure everybody understands what we’re doing. Thoughts on a spare page of an old Yellow Pages please.

Asynchronous processing in SQL with Service Broker

SQL Server Service Broker was introduced in Microsoft SQL Server 2005 and this article suggests several uses, the most significant of which is probably the ability to do asynchronous processing. Service Broker might be seen as the database equivalent of Microsoft Message Queuing.

In this scenario, a synchronous process can put a message on a queue and then complete immediately (i.e. return control to the user). A separate process can then take this message off the queue and perform some processing based on it – this processing is effectively done “behind the scenes” as far as the user is concerned. Typically, it might be some relatively time-consuming work which isn’t on the user’s critical path.

As with all asynchronous processes, the problem of how to report errors is a tricky one, given that the user might have gone home for the evening by the time they arise. You might imagine keeping a log of such errors and generating notifications, all of which could also be done from SQL, using Database Mail for example.

Getting started with Service Broker

Service Broker’s implementation may seem unnecessarily complicated, involving not only queues but also things called services, contracts and message types. These have been introduced to allow the functionality to be used in as many weird and wonderful ways as realistically possible but they do present a bit of a steep learning curve. So here’s a very simple set of scripts to get you going and show how it works.

Begin by running the first script from any existing database on a SQL Server, version 2005 or later. It creates a new database called ServiceBrokerDemo containing a queue, a service, a conversation, a message type and a couple of example stored procedures.

Once that has completed, run the second script:

This puts some messages on the queue by calling the Proc_Queue_Send_Message stored procedure (which completes quickly). It then calls the Proc_Queue_Process_Messages stored procedure; this reads messages off the queue and processes them, putting the results in the T_QueueProcessingResult table. Proc_Queue_Process_Messages mimics doing “hard work” via a WaitFor Delay '00:00:05' statement.
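
The demo scripts do the full setup, but to give a flavour of the pattern they rely on, sending and receiving a Service Broker message looks roughly like this (the object names are made up for illustration, not the ones the scripts actually create):

    -- Rough illustration only; the demo scripts define their own objects.
    DECLARE @Handle UNIQUEIDENTIFIER;
    DECLARE @Body NVARCHAR(MAX);

    -- Send side: open a conversation and put a message on the queue
    BEGIN DIALOG CONVERSATION @Handle
        FROM SERVICE [DemoSenderService]
        TO SERVICE   'DemoTargetService'
        ON CONTRACT  [DemoContract]
        WITH ENCRYPTION = OFF;

    SEND ON CONVERSATION @Handle
        MESSAGE TYPE [DemoMessageType] (N'Hello, queue');

    -- Receive side: the processing procedure pulls messages off the queue
    RECEIVE TOP (1) @Body = CAST(message_body AS NVARCHAR(MAX))
        FROM DemoTargetQueue;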

An alternative way of working is implemented in the third script, which should then be run:

Now, Proc_Queue_Process_Messages will be called automatically whenever a message is put on the queue – i.e. there’s no need to explicitly call it.

Finally, run the fourth script:

This will again put some messages on the queue. Once it has completed (quickly) the processing will have been started and can be monitored by re-running the final Select * From dbo.T_QueueProcessingResult With (NoLock) statement. Every ~5s, a new row should appear (5 in total).

An aside about Conversations

As in real life, striking up a “conversation” can be a time-consuming process, so conversations are generally cached outside of the Service Broker framework and reused; the TSys_ConversationHandle table is used to do this.

ContrOCC Hackday IV – Part 2

Carrying on from our first post covering the results of our developers’ adventures in the most recent ContrOCC hackday, here is the final set of projects:

Julian Fletcher – Graphically presenting performance information to the lay developer

We are now amassing ever greater amounts of anonymous performance information from live Local Authority (LA) databases:

  • execution times
  • deadlocks
  • missing index hints.

This information is transferred, via CSV files, from all the LA databases to a shared OCC server. Here, several Stored Procedures may be used to analyse the figures. However, these output slightly impenetrable tables, so I set about presenting the information in a graphical format using SQL Server Reporting Services (SSRS).

Setting this up was fairly tedious – in particular, getting all the necessary permissions (even when everything was on my own machine). However, in summary, this is how it works:

  1. You set up a SQL Server to have SSRS. This results in a couple of new databases being created:
    Screenshot of Julian's hackday project
  2. Configure SSRS using the Configuration Manager:
    Screenshot of Julian's hackday project
  3. Install the Microsoft SQL Server Report Builder application and design your reports. These can obtain their data from a database on an entirely different server.
  4. Save them to your hard drive and then publish them to the Report Server when they’re ready.
  5. The screenshot shows what this looks like. You can get the averages at all LAs for any one stored procedure:
    Screenshot of Julian's hackday project
  6. You can also drill through to get a by-version breakdown for a particular LA (not drill down – that is a slightly different concept):
    Screenshot of Julian's hackday project
  7. Just by chance, I’ve chosen a stored procedure which appears to have sped up in later versions!

As I mentioned above, getting the right permissions was a pain and I still haven’t made the Report Server available to other users but I hope to be able to soon.

Maciej Luszczynski – Cleaning up the developer documentation

I focused on providing How To documentation for new developers on the wiki. I started by cleaning up existing pages. The Home page is now greatly improved, although the Windows Topics section still requires clean-up.

I have also identified some topics that it might be useful to add to the wiki, such as:

  • how to remove a database object
  • how to remove a database table properly
  • how to commit a change to SVN properly
  • how to check the imports flow

Matthew Clarke – CSV Merger

This tool improves merging of CSV data files which often cause SVN conflicts, for example when a column is added.

The tool can be set to be your default merge tool for CSVs in SVN.

Screenshot of Matthew's hackday project

When you would normally resolve conflicts, CSVMerger now opens instead.

Screenshot of Matthew's hackday project

It will highlight conflicts and missing values for new columns:

Screenshot of Matthew's hackday project Screenshot of Matthew's hackday project

You can even resolve conflicts via the UI (needs polish).

Screenshot of Matthew's hackday project

So, you’ll be able to go home 5 hours early because the merge was done in no time.

Nathan-Madonna Byers – F#/C#

I looked at using various F# Type Providers (from the open-source FSharp.Data.SqlClient package) to interact with ContrOCC databases in a type-safe manner, such that many errors can be caught at compile-time rather than raising run-time exceptions. In particular, I investigated the potential of this approach for writing test data scripts and worked through the implications of exposing this functionality to C# code.

The first thing to note is that there’s little in my prototype that couldn’t have been achieved solely in C#. However, I find F# to be a more natural and expressive language for this kind of thing. In terms of F#/C# interoperability, there were a few head-scratching issues that took a while to figure out – but it all seems to work pretty smoothly. (Famous last words.)

A quick summary of the specific Type Providers:

  • SqlProgrammabilityProvider: This can be used when working with stored procedures. Unfortunately, it doesn’t currently support SPs with output parameters or explicit return values, which rules out many of our SPs.
  • SqlCommandProvider: This can be used when working with arbitrary SQL commands. I’ve used this in my prototype as a lightweight wrapper around SPs (e.g. rather than calling an _Insert SP directly, I can have a simple three-line SQL command: declare an @ID parameter; execute the SP [passing @ID as the output parameter]; select the @ID – see the sketch after this list).
  • SqlEnumProvider: This can be used to create enumerations for (relatively) static lookup data, e.g. from TRefSys_ tables. This means that it may no longer be necessary to define these enumerations manually in our C# code (and then have to worry about keeping them synchronized with the database &c.).
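
For what it’s worth, the kind of three-line wrapper described in the SqlCommandProvider bullet looks something like this (the procedure and parameter names are hypothetical):

    -- Hypothetical example of the three-line wrapper pattern described above;
    -- @Name is supplied as a command parameter by the calling F# code.
    DECLARE @ID INT;
    EXEC dbo.Proc_Widget_Insert @Name = @Name, @ID = @ID OUTPUT;
    SELECT @ID;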

In terms of extending the prototype, various factors – the use of the SqlCommandProvider and the succinctness of F# (plus type inference and auto-generics) – mean that the amount of hand-written “boilerplate” code required is fairly small. (It’s not dissimilar to the amount of hand-written SQL needed to do it the old way but should be quicker to write.) The thing that really takes time is the need to analyze the SPs in relation to the database schema, e.g. to decide: which parameters to expose to C#; which should be mandatory, etc.

Finally, a summary of the key benefits:

  • Static typing (and Type Provider “magic”) mean that many errors can be caught at compile-time.
  • Intellisense (including visibility of defaults for optional parameters) and auto-completion.
  • Optional parameters are truly optional: no more “counting commas”!
  • The scripts are (more) readable.
  • Debugging support and full stack traces for exceptions.
  • “Production” SPs – i.e. those used by the UI (and other components) – are always used.
  • Modularization, i.e. we can finally share portions of data setup across tests.

Nigel Palmer – ContrOCC version manager

I set out to create a version number manager to assist with the building of ContrOCC. Version numbers are currently set manually in a variety of different places and there is some risk of one of them being missed. This tool will allow the developer responsible for the build to set all version numbers in the SQL database and the C# code. This will improve the accuracy of the build process and prevent test failures due to incorrect version numbers.

The ContrOCC Version Manager tool has a simple user interface that allows the user to set the version number. This then updates all the required build files to have the new version number.

Screenshot of Nigel's hackday project

Patrick Donkor – An executable imports/exports specification

I spent my Hack Day looking at how I could use Specflow as a way of creating an executable specification for the development of new Imports/Exports.

The idea is to take a specification written in the Gherkin syntax to give a top-level view of an import/export. This specification would be agreed between an IM and the Client. The specification would be passed to the developer, who would code and test against it.

To see if this could be done, I used an existing client payments import as my template. I spent the first part of the day setting up the specification for the import. This went well. The second half of the day was spent trying to implement automated tests to use against the specification. I did encounter a few minor issues that, given time, could be rectified. The screenshot below shows what a sample file looks like.

Screenshot of Patrick's hackday project

Steph Sharp – Improving code integrity checks

I spent the day trying to suppress output from one of our internal code integrity checks (sys_checkschema). This output is an unwanted side effect and can be confusing, especially to newer members of the team. Although I did manage to prevent the output from being displayed, I was ultimately unsuccessful: expected errors were no longer reported and spurious new errors appeared, and I was unable to resolve these problems.

Tom Litt – Migration from within the ContrOCC UI

I took the brief “make the migration process at new customer sites a part of the ContrOCC UI” and tried to see whether it would be feasible to run migrations via a wizard in the main desktop client, instead of the existing process which uses a separate tool. Why? If a customer had this, they wouldn’t have to send data off-site, and they could re-run migration tables as often as they liked. The selling point for us is that we could ditch the SQL Server Integration Services packages, which are a merge nightmare.

In production the wizard would:

  • allow editing of migration settings
  • allow selection of files to be migrated in, or perhaps just point at a directory full of migration CSVs
  • run through each file independently, or run them all in an unattended batch, depending on the level of reporting required
  • give feedback on the level of success/failure for each file.

The file specifications and import code are already written, so this is a case of wrapping the code as stored procedures and putting UI on it.

In practice I got as far as establishing a wizard, selecting the client file, importing it and writing the existing HTML feedback to the wizard. This could be extended fairly easily to handle all files and report more nicely, and I think this is worthy of future effort.

Tomasz Agurciewicz – Taming our test scripts

ContrOCC’s testing meta language takes some time for developers to learn. That’s why I decided to finish/enhance a tool I made some time ago which makes it easier:

Screenshot of Tomasz's hackday project

The tool parses a test script and tries to present its contents in a sane fashion. All commands are listed in the Command dropdown, and selecting one of them lists available actions. So for example Command “S” (Service) has actions “Add” and “Load”.

Next, for the selected action it finds the appropriate SQL part in the script and tries to list what each column stands for. It then lets you enter all the required data and form a single test script line that you can use later on when preparing tests or data for ContrOCC. It also prints the selected chunk in a separate tab, so that if you’re in doubt about what a parameter does you can look it up.

Trevor Hingley – SQL Server projects in VS2012

I looked at SQL Server projects in Visual Studio 2012, with a view to seeing how viable it would be as a tool for ContrOCC database development and deployment.

There are immediate issues in that the SQL Server Data Tools (SSDT) installed on the machine must be in line with Visual Studio. In my case creating a project in VS2012 immediately raised an error, as the SSDT were not compatible.

After installing the latest version of SSDT, I found the SQL Server project very easy to use. Database schema can be imported from a database, a selection of scripts or a .dacpac file very easily. Multiple projects can be used to handle separate version schemas, and the tooling makes comparison between projects/schema easy to view and update. Pre- and post-deployment scripts can also easily be added to the project.

Where the viability of the SQL Server project type fails is in deployment. Schemas can easily be deployed to databases which can be connected to, and scripts can be generated to create a new database or upgrade from a known schema to the latest schema. However, there is no versioning capability when generating scripts, so it is not possible to produce scripts that can upgrade from multiple schema versions to the current version. A possible alternative is to generate .dacpac files, which do have versioning capability. However this was not found to be viable, as a .dacpac generated in VS2012 causes an XML schema error when attempting to import it into a SQL Server 2008 database.

ContrOCC Hackday IV – Part 1

Our ContrOCC hackdays give our developers a day to work on tweaks, gripes, improvements, or whole new features of their choosing and then share them with the rest of the team.

For all the thinking behind our product hackdays, have a look at the intro to the first ContrOCC hackday.

This year we have so many team members that I have split this post in two; we’ll post the remaining projects soon. Here is the first set:

Alan Carter – Converting the distributed tests config file to XML

I worked on converting the configuration file for our distributed testing service from plain text to XML, something I’ve wanted to do for a while. As the configuration for the tests grows ever more complex, the plain text file becomes increasingly arcane and difficult to navigate. An XML file allows the configuration to be laid out in a more structured manner, and lets most text editors provide benefits such as folding and syntax highlighting. It also has an accompanying schema so that we can catch mistakes much more easily.

This work supports a longer term goal I have of getting the config file into SVN and splitting out parts which are likely to be changed frequently.

There’s a little more work needed to finish writing the new parser, after which this should be fully usable. I also hope to arrange for the distributed tests service to perform schema validation of the file.

Screenshot of Alan's hackday project

Chris Griggs – Converting ContrOCC tools to use Git

I looked at converting our tools that currently use SVN to use Git. We currently use the SharpSVN library to query our repository. There’s a SharpGit project in NuGet that’s supposed to be an almost direct replacement; unfortunately, with only ~200 downloads it’s very immature and not up to the task. I chose libgit2sharp as the best alternative. This is a C# wrapper around the official libgit2 library. The library itself is well documented, the wrapper less so.

I was quickly able to convert the Macro Developer Tool (an internal development tool which uses the C pre-processor to avoid code duplication in SQL) to use Git, as it only looks at the working copy to see which files have been added or modified. I then turned my attention to the PFG (our patch file build tool). I soon learnt that this was not going to be so trivial.

The concept of a tag is somewhat different between Git and SVN, as is the idea of looking at the central repository to find the previous one (although I doubt we’ll all have the entire repo on our machines either). As it’s written, the PFG relies on there being a linear succession of changes between the previous tag and the current one. In Git it’s likely that we would branch out and merge in commits throughout the development of a tag. We would therefore need to compare the two trees without relying on their history. I am satisfied that this is possible, but more thought will be required than was possible on a hack day.

Chris Henry – Visualising the ContrOCC Database

I spent the day looking into ways of visualising the structure of the ContrOCC database. The aim was to see to what extent it divides into modules which could in principle be more rigorously separated in future. The work in practice fell into three parts:

Part 1: List Dependencies

Extracting data on dependencies from our source data. I have a long-standing prejudice against SQL’s own lists of dependencies and, since I knew it wouldn’t be too much work, I instead read dependencies out of the SQL source code using parser code borrowed from the PFG (our patch file build tool).

Part 2: Box & Arrow Graph

I next tried to graph dependencies as a box & arrow diagram. I used the MSAGL library and technically speaking it worked first time. With minimal effort I was able to produce a suitable diagram with tooltips showing the name of the objects.

Screenshot of Chris' hackday project

However, when I scaled up to the 1000s of objects in our database it didn’t even try to produce a decent layout – it just stretched the whole graph out horizontally:

Screenshot of Chris' hackday project

Or zoomed in:

Screenshot of Chris' hackday project

Part 3: Dependency Structure Matrix

Whilst looking into ways I might influence the layout I stumbled across the idea of the Dependency Structure Matrix and immediately decided it was a better prospect. In true ContrOCC fashion I began writing my own DSM code and by the end of the day had an incomplete, tantalising but ultimately broken implementation.

Follow-up

I intend to:

  • Investigate the commercial tool Lattix or, if it doesn’t impress, fix my broken implementation.
    Postscript: I tried it. I like it. The costs are such that although we might consider it this would only be in the context of a future decision to undertake a major program of database restructuring.
  • Spend some time looking at the results.

Chris Parish – Prototyping a new UI for different screen resolutions

I worked on a prototype of a new user interface layout for ContrOCC which would degrade better on small screens, but take better advantage of large (or at any rate tall) ones. This involved redesigning the Contract UI so that it no longer contained tabsets, but instead a series of vertically stacked sections, each of which can be individually expanded and collapsed. There is also a series of links along the top of the page which allow the user to jump to any section of interest.

Screenshot of Chris' hackday project

Ian Evans – An app to capture client-confirmed visit data

ContrOCC records details of client visits to be carried out, and can generate payments/charges based on those visits, but without verification there will always be an element of trust. This is a browser/smartphone-based add-on to ContrOCC to allow client-confirmed visit data capture.

Instead of requiring the service provider to verify attendance by relying on, for example, a client’s landline, the visitor hands the client their own smartphone, which may be their personal device, and the client enters a picture-based PIN. Pictures are used as potentially easier to remember than a numeric PIN. For this demo, I have used various types of balls, but obviously anything meaningful to the client could be used.

Linked to location-based data, the time and place are verified, with that data instantly available in ContrOCC and matchable against PO/Invoice data. Visit start and end are recorded, together with the option for the client and/or visitor to send a message to the LA.

For this demo, a smartphone surround is shown. Some LA branding has been applied, but it really needs design input!

Screenshot of Ian's hackday project

The code is inline C#, borrowing concepts such as the “no-compile developer experience” from .NET vNext, where the developer never needs to build, just code and save. There is no solution or project; the website is opened directly in any version of Visual Studio, with system-wide code shared from App_Code. Development code is continuously compiled by aspnet_compiler into a deployable “build”, with a compile management screen showing ‘real-time’ code warning/error status.

Ian Lewis – A tool to parse PDF files

The task I chose to undertake on Hack Day was to develop a tool that would parse PDF files such that the contents of the file could be inspected. I managed to get to the stage of recognising the PDF Header and Trailer, and converting the body into tokens which would be used to create PDF Objects.

One application of this utility could be to allow the ContrOCC generated PDF exports to be inspected and checked against a set of expected results. This would allow PDF exports to be checked during the automated tests. Initially, the automated tests would check that the text content is as expected and could be extended to include the positioning of the text and the type of fonts that were used in the export. Ultimately, once the structure of the files is more fully understood, this utility could be used to allow PDF files to be generated.

This was an ambitious Hack Day project that needed more than the one day that was allocated. I estimate that it would take at least a week to complete the parser with the appropriate automated test code.

Jo Lloyd – Alternative help navigation

Our current help technology has many good features, but lacks the ability to deal well with simultaneously providing descriptive (“what is this field?”) and functional (“how do I … ?”) help. While the text, being endlessly flexible, can easily encompass all kinds of help, there is no automated and structured means to point users to different paths through the content.

  • We need to provide a description of every field on every form. But we would also like to provide activity-based help based on where the user is – e.g. how to disregard a property, how to carry forward a payment, whatever.
  • The current approach to Help in widely used applications like Office largely abandons context in favour of search. But I think our context-related help is important, and I would like to maintain the principle that pressing the F1 key always shows you something useful, rather than requiring more clicks or a search before anything is shown.
  • One thing I do like about Office Help is that in addition to a title, it shows you a snippet of each item in a list of links. It would be nice to include something like that.
  • The tree structure on the left of our current help seems to me very useful, and I’d like to retain that functionality (or something like it).

How it is now

Screenshot of Jo's hackday project

  • I’ve illustrated a simple approach (conceptually, that is), which just adds some options to what we already have via a new menu bar offering context-related links of two (or indeed potentially more) kinds. F1 would open the help for the current position, as now, and this would set the initial context of the dynamic menus.
  • I have imagined that tags would be used to set the relationships between items, as it is both relatively low maintenance and familiar. But manually maintained lists could also be used.
  • I have assumed that index and search options remain much as now (although I haven’t illustrated them).
  • I should stress that this is all purely hypothetical and I have not spent any time investigating technical solutions.

Providing more options

Screenshot of Jo's hackday project

The Related menu

Screenshot of Jo's hackday project

The How to menu

Screenshot of Jo's hackday project

How it might look with snippets

Screenshot of Jo's hackday project

Jon Bowen – Planner-style timesheet UI

With significant help from Chris Smith I have created an “alternative” UI for OCCult (our internal timesheet system). This is not intended as a replacement for the existing timesheet entry form, but as an addition that considerably simplifies timesheet entry for people with particular kinds of working patterns:

  1. People who have largely the same project for all their work
  2. People who record their work in their notes by time of day rather than by hours worked
  3. People who prefer visual to verbal reasoning (though that’s not a working pattern!)
  4. People who work on several projects at once, recording some time for each project every day.

The new UI is visually completed, and partly coded. There are still many bugs, and some features that aren’t implemented at all. Some staff have evaluated the new UI, and despite some networking problems which leave the demo file locked, and other MS Office installation problems on some workstations that prevent the new UI from opening, feedback has been broadly positive, with some negative. However, only a minority of staff have provided feedback. The most negative feedback has been from staff who are already happy with the existing UI and don’t see the need for an alternative.

The key to building innovation

Jeff Gothelf is the author of Lean UX, a book that plugs into the theory of The Lean Startup and looks at how User Experience design processes fit in with the Lean approach.

Jeff was interviewed by Communitech News and described what he believes is the key to building an innovative product or company:

Talk to your customers.

I mean, really have the humility to listen to your customers.

Learn what it is that they love about your product; learn what it is that they hate about your product; learn about what it is that they hate about your competitor’s product; learn about what they love about your competitor’s product.

Listen to them and their needs. Figure out what job they are hiring your product to do, and then make your product do that better than anyone else’s. You will never know that unless you leave the safety and comfort of your office and go out and talk to your customers.

That really inspires me.

I recommend the rest of the interview too, where Jeff discusses the importance of cross-functional teams, mistakes to avoid, and how to remain competitive as you scale your product.

ASP.NET Web API on Linux and Apache with Mono

We had a requirement at OCC to build a RESTful web service that would be able to run on both Windows and Linux servers. Someone suggested we give Mono a look to see if we would be able to use the ASP.NET Web API framework served up by the Apache Web Server on Linux. That sounded great; we have a lot of experience with the .NET Framework and a lot of experience with Linux but so far have not brought the two together.

Banner showing Mono, .NET, ASP.NET, Apache and Linux logos

Mono is an implementation of the .NET framework that can be used to build applications that run on Linux and OS X in addition to Microsoft Windows. There are further details on Wikipedia.

In the past, some concerns have been expressed regarding licensing, software patents and their possible impact on Mono and the applications that depend upon it. The Mono team have addressed these concerns, and recently (April 2014) Microsoft released Roslyn under the Apache 2 license and committed to working closely with the Xamarin team, whose product is based around Mono; this may further calm concerns.

Getting Started on Linux

If you are lucky your system will have a package available to install Mono; if so, you should use it. At the time I was not so lucky, so I had to get the latest stable source and build that.

In Practice

Because the Mono team are attempting to keep up with developments by Microsoft, the Mono framework does not fully implement the latest .NET framework. This can lead to some headaches where there is a partial implementation, which can result in some methods of a class not being available under Mono, but often these issues can be worked around.

However, Mono is under very active development and generally manages to keep up surprisingly well.

Pros:

  • Can use Visual Studio for the bulk of development.
  • Once something builds and runs on Windows it runs very reliably on Mono. I’ve only been looking at Web applications so I couldn’t comment on a Desktop application with a GUI.

Cons:

  • NuGet has limited usefulness with Mono. I had to get the necessary binary files and manage a Libraries directory within the project. Not a big issue in my case but could be if large numbers of externals are required.
  • Have to maintain a separate build on the Linux system. I used makefiles, which was not too onerous; this might be mitigated by using MonoDevelop or Eclipse, but it did not seem enough of a problem to be worth the switch.

Building Mono

Building Mono from source is pretty straightforward but there are a few gotchas.

First it is necessary to make sure a basic development environment is in place; on a CentOS system that’s something along the lines of:

    yum -y install bison glib2 glib2 freetype freetype-devel \
        fontconfig fontconfig-devel libpng libpng-devel libX11 \
        libX11-devel glib2-devel libgdi* libexif glibc-devel \
        urw-fonts java unzip gcc gcc-c++ automake autoconf \
        libtool wget giflib-devel libjpeg-devel libtiff-devel \
        libexif-devel httpd-devel

Source Code

Get the 2.10 source releases of libgdiplus, mod_mono and XSP – at the time of writing the stable build of Mono is at version 3.2.3. It does not appear to be important to have all components at the same version as the main Mono release.

Unpack each in a local directory then configure and build in the following order:

libgdiplus

    cd libgdiplus-2.10
    ./configure --prefix=/opt/mono
    make
    sudo make install

Mono

    cd mono-3.2.3
    ./configure --prefix=/opt/mono --with-libgdiplus=/opt/mono
    make
    sudo make install

Add the /opt/mono/bin path to the system path and also set PKG_CONFIG_PATH to /opt/mono/lib/pkgconfig through /etc/profile (do not forget to export the variables). These variables must be set before building xsp, as it needs the C# compiler; otherwise the configure part of the build will fail.
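
For example, something like the following in /etc/profile should do it:

    export PATH=$PATH:/opt/mono/bin
    export PKG_CONFIG_PATH=/opt/mono/lib/pkgconfig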

xsp

    cd xsp-2.10
    ./configure --prefix=/opt/mono
    make
    sudo make install

mod_mono

    cd mod_mono-2.10
    ./configure --prefix=/opt/mono --with-mono-prefix=/opt/mono
    make
    sudo make install
    sudo mv /etc/httpd/conf/mod_mono.conf /etc/httpd/conf.d/

Configuration

It will probably be necessary to add the path to Mono’s shared libraries to the system-wide library path. This can be done either by adding the path to /etc/ld.so.conf or, if the /etc/ld.so.conf.d directory exists, by adding a new file there (I suggest following the naming convention used by other files in that directory) containing the path to the Mono shared libraries – these will be at /opt/mono/lib. Once this has been done, run the ldconfig command as root to update the system.
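
For example (run as root; the file name here is just a suggestion):

    echo "/opt/mono/lib" > /etc/ld.so.conf.d/mono.conf
    ldconfig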

After building and installing check the installation by running:

    mono-test-install

Making .NET 4.5 work

When building from source code there is a problem when running applications which require the .NET framework 4.5 libraries. The xsp4 and mod_mono shell scripts that are executed (located in the /opt/mono/bin directory) refer to executables in the /opt/mono/lib/mono/4.0 directory. Typically the executables themselves are fine but they refer to the 4.0 libraries which can be missing some of the newer features. This can result in problems of the form:

    Exception caught during reading the configuration file:
    System.MissingMethodException: Method not found: blah blah blah
      at System.Configuration.ClientConfigurationSystem.System..... yack yack

To fix this first make symbolic links in the 4.5 directory to the 4.0 files:

    ln -s /opt/mono/lib/mono/4.0/xsp4.exe /opt/mono/lib/mono/4.5/xsp4.exe
    ln -s /opt/mono/lib/mono/4.0/mod-mono-server4.exe \
        /opt/mono/lib/mono/4.5/mod-mono-server4.exe

Then edit /opt/mono/bin/xsp4 and /opt/mono/bin/mod-mono-server4 to reference the symbolic links.

Fixing errors caused by colons in the virtual path name

In our application the resources managed by the RESTful interface include the colon ‘:’ character. There appears to be a bug which creeps out when using ASP.NET applications in sub-directories: the static initialisation in System.Web.VirtualPathUtility fails to read the Web.config system.web/monoSettings verificationCompatibility="1" attribute, so we fixed it by setting the monoSettingsVerifyCompatibility member variable to false; otherwise errors are generated when there is a colon in a virtual path name.
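
For reference, the Web.config fragment referred to above sits under system.web and looks something like this:

    <configuration>
        <system.web>
            <!-- Mono-specific setting: relaxes virtual path verification so
                 that colons in virtual path names do not cause errors -->
            <monoSettings verificationCompatibility="1" />
        </system.web>
    </configuration>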

Apache

The Apache mod for Mono passes requests to the mod_mono_server, which is able to support multiple ASP.NET processes.

With the above completed, restart the Apache web server and verify that mod_mono has been picked up:

    httpd -M

You can also inspect the error log after a restart.

MonoServerPath

Mono’s support for ASP.NET under Apache uses a simple module which delegates requests to the mod-mono-server. The MonoServerPath setting in httpd.conf specifies where the mono server is for each location:

    MonoServerPath default "/opt/mono/bin/mod-mono-server4" 

This configures Mono for the default path, which for a standard Apache configuration will be /var/www/html. It is also necessary to configure the application and handler:

    MonoApplications "/:/var/www/html"

    <Location "/">
        Allow from all
        Order allow,deny
        SetHandler mono
    </Location>

In addition, the following options can be set:

    MonoSetEnv default MONO_IOMAP=all
    MonoDebug default true

Restart the server and check the error log file.

If other locations need to be configured, much the same needs to be repeated. For example, if a /test application were to be created it would be configured as:

    Alias /test "/var/www/test"
    MonoServerPath test "/opt/mono/bin/mod-mono-server4"
    AddMonoApplications test "/test:/var/www/test"

    <Location "/test">
        Allow from all
        Order allow,deny
        MonoSetServerAlias test
        SetHandler mono
    </Location>

Other Directives

It is recommended to disable KeepAlive for performance reasons or at least restrict the time-out to 2 seconds.

    KeepAlive Off

The CentOS installation of Apache web server sets the name for the log files as access_log and error_log; you may want to have the more conventional .log file extension.

Configuration Tool

The Mono site has a handy online tool that can help with setting up a basic configuration for either a virtual host or an application.

In conclusion

Building a RESTful ASP.NET Web API with Mono, to run on Windows and Linux servers, was pretty straightforward with only a few problems on the way.

CuPiD demo at Said Business School

Reynold Greenlaw and Andy Muddiman attended the Oxford Startups demo night at the Oxford Launchpad in the Said Business School on 15th May, where they demoed the GaitAssist smartphone app that has been developed for the CuPiD project. They demoed it pretty much continuously at our very busy table to the many interested attendees, and our communications manager Janine Smith joined in to lend a hand.

The app continuously compares the gait of a user with Parkinson’s disease, as they walk, against an “optimal gait” calibrated with a clinician. Based on an evaluation of the specific patient, the most important gait parameters (e.g. step length, cadence) are set for future training sessions. The app gives the user feedback on how well they are doing and uses a technique called audio cueing to help with Parkinson’s gait.

Cupid demo photos – Andy and Reynold demoing the app

Data on gait performance is visible to the clinician who can then adjust the exercise remotely forming a feedback loop between the user and the doctor. You can take a look at the demo itself on this video of Audio Bio-Feedback in action.

This event was organized by the Oxford Academic Health Science Network, which brings together clinicians, academics and the public to promote best health in the region.

The Oxford Launchpad was launched in February 2014. It is a new space for entrepreneurial activity housed within the Said Business School, University of Oxford.

Finding time to think

A personal blog post from our Director of Consultancy Projects

As well as delivering products, OCC has a team that specialises in custom software development; they are behind the wide variety of case studies on our website. This combination of teams working on custom software and product development & support is, I think, unique.

Once a year I take the custom development team out for a day to discuss how we might write even better software. This time we crossed the road to the Jam Factory for an open discussion and some prepared talks. The talks were:

  • A toolkit for establishing better experience for our users
  • The judicious use of Agile project management
  • A critical look at SOLID principles
  • Lessons on testing from the product development team

The result was a surprise to me. The conclusion of the team was that the best way to improve our performance was none of the above but instead improving the office environment. Nothing to do with software at all. The team wanted fewer interruptions, less multitasking and more time to quietly think through difficult technical problems.

I was surprised and really interested in the conclusion. Our office is already considered calm and quiet. So we’re going to try a few new things. We’re going to experiment with collaboration tools that allow people to raise a “Do not disturb” flag. We’re using instant messaging that reduces disruption, hosted within OCC to keep the data confidential. I’m encouraging staff to book rooms or work from home during intensive design phases. It’s going to be interesting to see how we get on.

Consultancy Away Day
