How to write a 5 year plan (and why it doesn’t matter if no one follows it)

Tom Litt and I will be attending The Lead Developer conference in September – it’s a new conference with a great line-up of speakers covering new and disruptive technologies (of course), tools and methodologies, and, because it is aimed at leads, also managing teams, motivation and leadership.

To warm up I’ve written an article for the conference blog: How to write a 5 year plan (and why it doesn’t matter if no one follows it).

ContrOCC Hackday V – Part 1

Our ContrOCC hackdays give our developers a day to work on tweaks, gripes, improvements, or whole new features of their choosing, and then share those with the rest of the team.

For all the thinking behind our product hackdays, have a look at the intro to the first ContrOCC hackday.

We have plenty of projects to talk about again this year so I have split this post in two; we’ll post the remaining projects soon. Here is the first set:

Adam and Tomasz B – Code analysis

We did some research on Visual Studio’s Code Analysis tool. The tool automatically checks all code for compliance with a customizable set of predefined rules. Those rules check for common programming mistakes, adherence to good design practices, etc. There’s a wide selection of Microsoft recommended rules already built into Visual Studio, which can be cherry-picked to form custom rulesets.

We focused our work on developing a few custom rules that would enforce some of the ContrOCC specific conventions, such as using correct prefixes for variable names or making sure all forms inherit from the right base form. We’ve been able to develop a few sample rules and run them against the ContrOCC desktop client code as a proof of concept.

Alan – Database schema documentation via metadata

I put together a small demonstration of adding extended properties to database schema objects (specifically schemas, tables or columns) so that we can add documentation to them which can then be viewed in Management Studio. This is something I’d done on a previous hack day, but this time I put together ContrOCC-style infrastructure to support it, giving a rough idea of how I’d expect it to be done if we wanted to use this for real.
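
For anyone curious about the underlying mechanism, here is a minimal sketch of the kind of T-SQL involved; the table and column names (T_Contract, AgreedDate) are invented for illustration rather than taken from the real ContrOCC schema:

-- Attach a description to a (hypothetical) column...
EXEC sys.sp_addextendedproperty
    @name = N'MS_Description',
    @value = N'Date on which the contract was agreed.',
    @level0type = N'SCHEMA', @level0name = N'dbo',
    @level1type = N'TABLE',  @level1name = N'T_Contract',
    @level2type = N'COLUMN', @level2name = N'AgreedDate';

-- ...and read the documentation back (it is also visible in Management Studio
-- under the column's extended properties).
SELECT objname AS ColumnName, value AS Documentation
FROM sys.fn_listextendedproperty(
    N'MS_Description', N'SCHEMA', N'dbo', N'TABLE', N'T_Contract', N'COLUMN', NULL);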

I also mocked up a representation of how the data in our system reference tables could be moved into the metadata file, rather than its current location in CSV files. This would mean we could ditch the CSV files for good, thus having one fewer place to find data. It would also mean not having to maintain the CSV column headers to keep them in line with schema changes, and the metadata representation of each table’s schema would be visible in the same file as the data.

Hopefully the data would also diff more easily in this form, and would be more human-readable when viewing diffs or editing files manually.  If we were to implement this, the Data Maintenance Tool would be updated to read from and write to the metadata file rather than the CSVs – so, editing the data would be no different from before, but behind the scenes it would be stored in a more sensible way.

Screenshot of Alan's work

Chris G – Upgrade AllTheThings to .NET 4.5

Upgrade AllTheThings™ to .NET 4.5. Then try to find a cool, user-facing reason to keep it.

The first part of my task seemed trivial, the second less so. Microsoft has fallen out of love with WinForms in favour of WPF, so there was little in the way of eye candy to entice users to higher versions of .NET (unless you count EnableWindowsFormsHighDpiAutoResizing in .NET 4.5.2). So I turned my attention to making the UI a bit more “U” friendly.

My efforts concentrated on my annoyance with having to support screen resolutions you can draw with a crayon. I looked into screens for creating new entities flowing through (a bit like a wizard) so we don’t need to present so many controls on one form. I also made the navigation bar collapsible to free up some real estate; this even works with the new hot-keys for keyboard navigation.

Screenshot of Chris' work

Chris H – XML import/export definitions

Our current representation of imports & exports uses a system of four interlinked system reference (TRefSys) tables. This means that the reference data defining any single export is scattered across four CSV files and has to be tied together with IDs. I’m interested in replacing this with a single XML file defining each import or export. This would have the following advantages:

  • All data defining a single export is in one file so easier to comprehend
  • XML is more human readable than CSV
  • No need to manually type lots of IDs for sub items

For hack day, I focused on:

  • Generation of XML from our existing TRefSys tables using SQL Server’s surprisingly helpful SELECT FOR XML feature (see the sketch after this list). This would be used in the one-off transition to the new functionality.
  • Writing temporary table functions which exactly reproduced the original contents of the TRefSys tables (including, for now, the IDs); this then enabled existing code to be updated to use the XML via these functions with a simple search & replace operation.
  • Conversion of one actuals import to XML format
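
By way of illustration, a query of roughly this shape can turn a parent/child pair of reference tables into a nested XML document; the table and column names below are invented for the sketch and are not the real TRefSys schema:

-- One <Export> element per definition, with its columns nested inside it.
SELECT
    e.Code        AS '@code',
    e.Description AS '@description',
    (SELECT c.ColumnName AS '@name',
            c.DataType   AS '@type'
     FROM dbo.TRefSys_ExportColumn AS c
     WHERE c.ExportID = e.ID
     FOR XML PATH('Column'), TYPE)
FROM dbo.TRefSys_Export AS e
FOR XML PATH('Export'), ROOT('Exports');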

This was a success, and I was able to get the relevant import test to pass with the TRefSys definition of the same import deleted. The next step (which I didn’t have time for) would be to eliminate the unwanted Database IDs from the format and rewrite the SQL which depends on them.

Chris P – Loading lists asynchronously

For hack day I produced a working prototype that populates lists asynchronously. This should reduce the delays users experience when lists populate: the overall time is unchanged, but the application becomes responsive sooner. It also means that if filter conditions are entered which lead to a long delay, the filter conditions can be modified immediately.

Changes made:

  • Return to the UI immediately once list population has started
  • While SQL running:
    • Show “Loading…” in list  (instead of normal SQL wait dialog)
    • Disable list so no editing allowed
  • Populate list on completion of SQL
  • Kill executing SQL if no longer required:
    • Move to different content type (i.e. control disposed)
    • Filter conditions change (i.e. another call to mFillList)

Other ideas:

  • It might make sense to modify lazy loading so that lists start loading in the background one at a time anyway. We’d need to ensure that loading of displayed lists starts immediately
  • We might want to limit the maximum number of SQL commands that can execute at a time: probably only really an issue for the accounts screen, where we have a lot of tabs

Damian and Pawel – Visualise database schema metadata

Aim: To help maintain metadata files when creating/editing new database tables, columns etc.

What we did:

  • Visualise ContrOCC Metadata.xml file in more readable form (TreeView)
  • Edit existing attributes (most of them)
  • Add new attributes
  • Add new instances of the nodes

Screenshot of Damian & Pawel's work

Near the end we realized that the problem is harder and more complex than we thought. There is still a lot of work to do, but it was a fun challenge!

Ian L – CSV column alignment tool

I chose to create a file pre-processor that formats a CSV file such that all the columns line up and a post-processor which removes all the padding added by the pre-processor.

These file processors could then be used by AraxisMerge, the file difference tool I use, to make comparing our table CSV files from the Data directory easier.

As an extra challenge, I used PowerShell to write the file processors.

Learning how to use PowerShell took more time than I expected, and the cmdlets that I thought existed did not, which means I will have to write more script than I anticipated.

Jon – Generating database test scripts from UI interactions

My task was to explore providing a user-friendly UI for generating database test scripts from the ContrOCC UI. I spent most of the day getting to grips with the ContrOCC UI Framework, which I’ve never had to do any serious work with before, so it was a great training day for me though not very productive.

I reached the point of being able to open a form from the Troubleshooting menu which contained lists of clients and cplis with tick-boxes for selection. Unfortunately, before I committed, Visual Studio crashed and scribbled all over my main .cs file, so I have no screenshot or code. However, now that I know what I’m doing, it should be fairly quick to repair if I ever return to this.

Generations Working Together – Panel Discussion


Generations Working Together

Tuesday, 16th June, 2015
18.00 – 21.00

University of Oxford, Andrew Wiles Building, Mathematical Institute,
Woodstock Rd, Oxford, OX2 6GG

 

Join us for a Foresight Review, panel discussion and refreshments

The number of older people in the UK is set to rise significantly. What will this mean for the UK’s workforce? How will it impact younger generations? Are we doing enough to nurture younger talent? A fascinating project by the Government Office for Science is set to unveil the challenges and opportunities of an ageing society. This Foresight Future of an Ageing Population Project is chaired by the University of Oxford’s Professor Sarah Harper (Oxford Institute of Population Ageing).

Introduction address

Professor Andrew Hamilton (Vice-Chancellor of the University of Oxford)

Panel

  • Will Hutton (Principal, Hertford College, University of Oxford)
  • Mark Evans (Chief Executive Officer of Adaptix, NED and Founder of Mirada, Non-Executive Chair of Cydar)
  • Dr John Boyle (Managing Director, Oxford Computer Consultants; Chairman, The Oxford Trust)
  • Steve Burgess (Chief Executive Officer, The Oxford Trust)

Chaired by: Professor Sarah Harper (Director, the Oxford Institute of Population Ageing)

18.00     Refreshments

18.45     Panel 

19.30     Wine Reception

Please register with allison.stevens@venturefestoxford.com

The event is organised in collaboration with: the Oxford Institute of Population Ageing (University of Oxford) and Venturefest Oxford


Adding a text size widget to your site using CSS and Sass

A requirement we hear from many of our Government customers is that a sizable number of their users with sight impairment prefer to have a text size widget on-screen when they browse a website.

These accessibility widgets are tough to implement cleanly using HTML and CSS, but the advent of CSS preprocessors such as Sass and LESS makes the job much easier. In this post we’ll see how we can use Sass to create a text size widget.

What we’re aiming for is the standard row of “A” characters increasing in size to denote changing the text size on the page, like this:

Screenshot showing three A characters of increasing size.

You can take a look at one in action on one of our sites.

What we will aim to do is add a class (small/medium/large) to the body element of the page, which we can then refer to in CSS to say: when the body has a class of small then the font-size should be x; when medium then y; large then z.

There are two elements to making the widget work:

  1. JavaScript that adds the widget to the page, records the setting in a cookie, and adds a class to the page body so that our CSS can react to it.
  2. CSS that sets the font-size based on the body class. A Sass mixin makes this much more manageable.

JavaScript

Here’s the code, and we’ll go through it bit-by-bit:

$(document).ready(function () {
    // Add in the widget
    $('#text-size-widget').append('<ul><li class="small"><a href="#" class="selected">A</a></li><li class="medium"><a href="#">A</a></li><li class="large"><a href="#">A</a></li></ul>');
    
    // Read the current text-size from a cookie
    if ($.cookie('TEXT_SIZE')) {
        var cookieTextSize = $.cookie('TEXT_SIZE');
        $('body').addClass(cookieTextSize);
        $('#text-size-widget a').removeClass('selected');
        $('#text-size-widget li.' + cookieTextSize + ' a').addClass('selected');
    }
    
    // Add the resize event handler to the widget's links
    $('#text-size-widget a').click(function () {
        var $el = $(this);
        var textSize = $el.parent().attr('class');
        $('body')
            .removeClass('small medium large')
            .addClass(textSize);
        $('#text-size-widget a').removeClass('selected');
        $el.addClass('selected');
        $.cookie('TEXT_SIZE', textSize, { path: '/', expires: 10000 });
        return false;
    });
});

We’re using jQuery here so the first line sets this script to run when the page is loaded and ready.

We then add in the HTML for the widget, by inserting a list of links into any element on the page with an id of text-size-widget, so you’ll need to have at least one of those in your page. We insert this using JavaScript so that if a user doesn’t have JavaScript enabled they won’t find a non-functioning list of links on their page.

We then check to see if the user has a cookie called TEXT_SIZE; if they do, we read it, add the appropriate class to our body element, and refresh which of our “A” links is selected. This makes sure that the user’s choice carries over from page to page on our site.

Finally we set what happens when the user clicks one of the “A” links:

  • We detect which link they clicked and store the small/medium/large class in a variable to use later
  • We remove whatever text size class might be on the body currently and set the new one
  • We update which of the “A” links is selected
  • We store the new text size in the TEXT_SIZE cookie
  • We return false so that the link does not perform its standard functionality (in this case adding “#” to the URL)

That’s the widget, cookie and body classes dealt with; now to tell the CSS how to handle it.

CSS

What we’re looking to end up with is along the lines of:

p { font-size: 10px; }
body.medium p { font-size: 12px; }
body.large p { font-size: 14px; }

Clearly it would be massively frustrating to have to write all three options out every time you wanted to set the font-size of something – this is where our Sass mixin comes in.

Mixins let you extract repetitive code out of your CSS into something that looks a bit like a function (you can even pass in values/settings) and call it from your CSS whenever you need it.

Here’s what our mixin looks like:

@mixin font-size($baseFontSize, $mediumMultiplier: 1.2, $largeMultiplier: 1.4) {
    font-size: $baseFontSize + px;
    body.medium &    { font-size: ($baseFontSize * $mediumMultiplier) + px; }
    body.large &    { font-size: ($baseFontSize * $largeMultiplier) + px; }
}

The mixin is called font-size and you pass in the “base” font size (i.e. the size to use for the smallest setting, which is the default), and optionally you can set the multipliers we use to go up to the medium and large sizes.

We’ve then got a bit of Sass code that generates our CSS using the base font size and the multipliers.

Of course if you prefer to use ems or another unit you can replace the px with whatever you want.

To use this mixin all we need to write is:

p {
    @include font-size(10);
}

And when we compile the Sass it will generate exactly what we want. So all we need to do is replace any normal font-size declarations with a call to our new mixin and we’re away!

Research behind Virtual Assay wins Prize

Oliver Britton, a DPhil student in the Department of Computer Science, University of Oxford, has won an international prize for his paper on a new computer model of cardiac electrophysiology. The National Centre for the Replacement, Refinement and Reduction of Animals in Research (NC3Rs) awarded Ollie the prize for the work’s potential to reduce the number of animals used in drug testing. He plans to use the prize grant for further research applying the methodology in neuroscience.

OCC has been working with Ollie and the Oxford University Department of Computer Science to bring his research to market. We have developed Virtual Assay, a tool enabling commercial and academic researchers to model the effects of drugs on populations of cell models that display the full range of electrophysiological responses seen in real heart cells. This is an improvement on previous models which have tended to ignore this natural variability. The tool can be used to screen out drug candidates that could be toxic to the heart before animal studies are done. The work has been supported by Isis Innovation and the EPSRC.

OCC would like to congratulate Ollie and his supervisors, Professor Blanca Rodriguez and Dr Alfonso Bueno-Orovio, for their well-deserved win!

For more information, please contact Dr. Fred Kemp, Deputy Head of Technology Transfer, Isis Innovation fred.kemp@isis.ox.ac.uk


Compiling Sass with Gulp in Visual Studio

We love Visual Studio at OCC. It’s an incredible IDE for software and web development, and Microsoft puts amazing effort into keeping up with the direction in which developers find themselves moving.

In this case we’re talking front end web development and specifically:

  • Gulp.js – a JavaScript-powered automated build system that uses Node to perform the tasks you find yourself doing over and over
  • Sass – a language extending CSS (that is compiled to produce CSS) to give developers more power when creating stylesheets

For quite a while Visual Studio has supported Sass, specifically through the Web Essentials plugin, which has offered IntelliSense, CSS preview and compilation. But much of this functionality is being removed from the plugin because the broader developer community is moving to build tools like Gulp (and its peers, such as Grunt), and Web Essentials wants us to use them too.

But how do we go about doing that?

Setting up Gulp in Visual Studio 2013

Node.js

The first step is to install Node.js, the JavaScript runtime that Gulp uses to run its tasks. Importantly, you’ll need v0.10.28 (there’s an x64 folder in there, which most of us will want) as the more recent versions are not yet supported by the gulp-sass compiler plugin. It looks like this will be fixed very soon though, which is great!

For Sass compilation we’ll need to install two Node packages, so open a command prompt and run the following npm (Node Package Manager) commands:

  • npm install gulp -g
  • npm install gulp-sass -g

The -g is a flag to install globally, so you can use the modules anywhere.

Now we can set up Gulp for your solution. Create a file called package.json in the root of your solution and add in the following information:

{
    "name": "Project Name",
    "version": "1.0.0",
    "description": "Project Description",

    "devDependencies": {
        "gulp": "3.8.11",
        "gulp-sass": "1.3.2"
    }
}

You’ll need to make sure those gulp and gulp-sass version numbers match those you have installed.

This will allow any other developer to come along and run npm install in this directory, and npm will download and install everything they need!

Give Gulp some tasks

Create a gulpfile.js file in your solution and add the following JavaScript to get started:

var gulp = require('gulp');
var sass = require('gulp-sass');

gulp.task('sass-compile', function () {
    return gulp.src('./Content/style.scss')
        .pipe(sass())
        .pipe(gulp.dest('./Content/'));
});

gulp.task('watch-sass', function () {
    gulp.watch('./Content/*.scss', ['sass-compile']);
});

Here we have three sections:

  1. First we create gulp and sass variables and use require to load the modules we need.
  2. Then we create a Gulp task called ‘sass-compile’ and tell it to find our Sass stylesheet, compile it and output the result to our directory.
  3. Finally we create a ‘watch-sass’ task that watches all Sass files in our folder and runs the sass-compile task whenever anything changes.

Now, if we ran gulp sass-compile at a command prompt in our project, it would compile our Sass. If we ran gulp watch-sass it would start watching. But running commands all the time isn’t much fun…

Task Runner Explorer

The answer is to install the Task Runner Explorer Visual Studio Extension, which will give us some nice UI for running and automating Gulp tasks from within Visual Studio.

Once you’ve installed it (and restarted VS), right-clicking on your gulpfile will show a new ‘Task Runner Explorer’ option, which opens up a panel showing a list of your Gulp tasks on the left and some binding options (or the output of whatever has been run) on the right.

Task Runner screenshot

If you double-click on sass-compile it will run and compile the Sass. If you double-click on watch-sass it will start watching. Much better!

Task Runner execution results screenshot

The icing on the cake is that you can right-click on one of your tasks and bind it to one of the following Visual Studio events:

  • Before Build
  • After Build
  • Clean
  • Solution Open

Right-click on your watch-sass task and bind it to Solution Open. Now, every time you open your solution, Gulp will automatically start watching your Sass files and compiling them whenever it needs to – perfect!

What else can Gulp do?

Gulp doesn’t just do Sass compiling; it’s hugely powerful. Here are just a few of the tasks Gulp can help you with:

  • Minify your CSS, JavaScript, HTML or images
  • Combine your CSS files
  • Generate CSS sprites
  • Generate favicons
  • Rename files
  • Check if files have changed or not
  • Output messages/errors
  • Add CSS vendor prefixes
  • Check code quality with hinting

More good news: Visual Studio 2015 will come with first-class support for Gulp as standard, so these tools will be built right into the IDE.

Search engine rankings for Social Care: the 001 Taxi Problem

If you’re more than 20 years old you will remember the annual delivery of your local telephone directory; the thud as it lands heavily in your hallway, often with a crumpled cover, a testament to the efforts of the delivery boy to fit the tightly published pages through your letter box. For me, this directory was my first exposure to search engine rankings, with its wonderfully named 001 Aardvark Taxis vying angrily with 001 Ace taxis for first place in the listings.

Pigot’s directories, published in 1814, were the first widely available English reference source for everything from tradesmen to public houses. After Bell’s invention of the telephone in 1876, telephone directories became common, available first to the secretaries of the nobility and then to the middle classes.

Coming First

The 001 Taxi problem is as alive today as it was in the phone directories of my youth. Today it manifests itself in the ‘war’ of search engine rankings – a continuous cut and thrust between businesses seeking to steal a march on their competitors whilst the search engine developers try to outwit the tricks of website developers. For some businesses, failing to appear on the first page of Google’s search results means anonymity and business failure; paying for AdWords is expensive.

I became acutely aware of this problem in our recent development of Social Care information and advice portals. These portals address a legal cornerstone of the Government’s 2014 Care Act, which requires local authorities to establish and maintain an information and advice service in their area. At OCC, we have sought to develop a smart ranking system that works on keyword relevance (i.e. the prominence of the word in the service profile). However, service providers may list keywords several times in their description to optimise their profile, which we don’t want. We wish to rank services by relevance, so if the keyword (or a thesaurus equivalent) is found in the description once, it should get the same relevance points as if it were found 10 times. Google’s famous PageRank algorithm is not available to us in the Social Care space, so we need to find another way to help our users find the highest quality information for their search.

To display our results we have introduced a secondary search ranking facet based on service profile quality score. The idea is that we establish a scoring mechanism for service profiles similar to the relevance score; points are awarded for logo, accreditation, quality of description, images, contact details, etc.

However, this still leaves a very serious problem for service users, providers and Local Authorities: if you type in the same keywords, you will get exactly the same search results in the same order every time. This is a reality – most people will use the same words to search for services: words like ‘Home Care’ or ‘Meals on Wheels’. So the same providers will always come first in your list; the same providers last. This is bad for creating a diverse and competitive market and bad for citizen choice. The reality is that well over 20 appropriate residential care providers may all have scored somewhere between 98% and 100% in our ranking algorithms. The small percentage differences are vital for the order of results, but we surmise that they are trivial in terms of the actual suitability of the service for the citizen.

Randomization

One solution: randomization. We could take results that have scored almost the same (and we can define what ‘almost’ means in percentage terms) and then randomize their order. All those providers that have scored ‘almost the same’ are presented in a different order each time you search – the actual order being determined by a randomization algorithm.
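
One hedged sketch of how this could look in SQL Server (the table and column names are invented for the example) is to sort primarily by a score band and only then by NEWID(), which produces a fresh random order within each band on every search:

-- Treat scores within the same 2-point band as 'almost the same',
-- then shuffle the providers inside each band on every query.
SELECT r.ProviderID, r.ProviderName, r.Score
FROM dbo.SearchResult AS r
ORDER BY
    CEILING(r.Score / 2.0) DESC,  -- best band first
    NEWID();                      -- random order within a band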

This simple technique helps to create a diverse market: it shares business across a range of suitable providers and helps ensure a vibrant and competitive market. There is no incentive for providers to give themselves silly names; no more 001 Aardvark Care Homes. However, we’re worried about the impact on usability – how many search engines do you use where the results come out different each time for the same search? If you send your search link to a friend and tell them to look at result #2, they will see a different service in that position.

If we’re going to use randomization to help create a vibrant Social Care market we’re going to have to make sure everybody understands what we’re doing. Thoughts on a spare page of an old Yellow Pages please.

Asynchronous processing in SQL with Service Broker

SQL Server Service Broker was introduced in Microsoft SQL Server 2005 and this article suggests several uses, the most significant of which is probably the ability to do asynchronous processing. Service Broker might be seen as the database equivalent of Microsoft Message Queuing.

In this scenario, a synchronous process can put a message on a queue and then complete immediately (i.e. return control to the user). A separate process can then take this message off the queue and perform some processing based on it – this processing is effectively done “behind the scenes” as far as the user is concerned. Typically, it might be some relatively time-consuming work which isn’t on the user’s critical path.

As with all asynchronous processes, the problem of how to report errors is a tricky one, given that the user might have gone home for the evening by the time they arise. You might imagine keeping a log of such errors and generating notifications, all of which could also be done from SQL, using Database Mail for example.
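
As a hedged sketch (the procedure, table and mail profile names here are assumptions for illustration), a processing procedure could catch failures, log them and send a notification:

BEGIN TRY
    EXEC dbo.Proc_Do_The_Work;  -- hypothetical long-running work
END TRY
BEGIN CATCH
    -- Keep a record of the failure for the morning...
    INSERT dbo.T_QueueProcessingError (ErrorMessage, LoggedAt)
    VALUES (ERROR_MESSAGE(), GETUTCDATE());

    -- ...and tell someone about it via Database Mail.
    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = N'Default',
        @recipients   = N'dev-team@example.com',
        @subject      = N'Asynchronous queue processing failed',
        @body         = N'See dbo.T_QueueProcessingError for details.';
END CATCH;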

Getting started with Service Broker

Service Broker’s implementation may seem unnecessarily complicated, involving not only queues but also things called services, contracts and message types. These have been introduced to allow the functionality to be used in as many weird and wonderful ways as realistically possible but they do present a bit of a steep learning curve. So here’s a very simple set of scripts to get you going and show how it works.

Begin by running the first script from any existing database on a SQL Server, version 2005 or later. It creates a new database called ServiceBrokerDemo and creates in it a queue, service, conversation, message type and a couple of example stored procedures.
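
The script itself isn’t reproduced here, but the plumbing it creates looks roughly like this (a sketch of the general shape with illustrative object names, not the exact script):

CREATE DATABASE ServiceBrokerDemo;
GO
USE ServiceBrokerDemo;
GO
-- Messages of this type must be well-formed XML.
CREATE MESSAGE TYPE DemoMessageType VALIDATION = WELL_FORMED_XML;
-- The contract says which message types may be sent, and by whom.
CREATE CONTRACT DemoContract (DemoMessageType SENT BY INITIATOR);
-- The queue stores messages; the service is the addressable endpoint on it.
CREATE QUEUE dbo.DemoQueue;
CREATE SERVICE DemoService ON QUEUE dbo.DemoQueue (DemoContract);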

Once that has completed, run the second script.

This puts some messages on the queue by calling the Proc_Queue_Send_Message stored procedure (which completes quickly). It then calls the Proc_Queue_Process_Messages stored procedure; this reads messages off the queue and processes them, putting the results in the T_QueueProcessingResult table. Proc_Queue_Process_Messages mimics doing “hard work” via a WAITFOR DELAY '00:00:05' statement.
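
In outline, the two halves might contain something like the following; again, this is a sketch with assumed object and column names rather than the real script:

-- Sending: open a conversation and drop a message on the queue (fast).
DECLARE @handle UNIQUEIDENTIFIER;
BEGIN DIALOG CONVERSATION @handle
    FROM SERVICE DemoService
    TO SERVICE 'DemoService'
    ON CONTRACT DemoContract
    WITH ENCRYPTION = OFF;
SEND ON CONVERSATION @handle
    MESSAGE TYPE DemoMessageType (N'<Payload>Hello</Payload>');

-- Processing: pull one message off the queue and do the "hard work".
DECLARE @body XML;
WAITFOR (
    RECEIVE TOP (1) @body = CAST(message_body AS XML)
    FROM dbo.DemoQueue
), TIMEOUT 5000;
IF @body IS NOT NULL
BEGIN
    WAITFOR DELAY '00:00:05';                    -- stands in for the real work
    INSERT dbo.T_QueueProcessingResult (Result)  -- column name assumed
    VALUES (CAST(@body AS NVARCHAR(MAX)));
END;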

An alternative way of working is implemented in the third script, which should then be run.

Now, Proc_Queue_Process_Messages will be called automatically whenever a message is put on the queue – i.e. there’s no need to explicitly call it.
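
Under the hood this is Service Broker’s internal activation: the queue is configured to launch the processing procedure whenever messages arrive. A sketch, assuming the demo object names used above:

ALTER QUEUE dbo.DemoQueue
    WITH ACTIVATION (
        STATUS = ON,
        PROCEDURE_NAME = dbo.Proc_Queue_Process_Messages,
        MAX_QUEUE_READERS = 1,  -- at most one concurrent reader
        EXECUTE AS OWNER
    );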

Finally, run the fourth script.

This will again put some messages on the queue. Once it has completed (quickly) the processing will have been started and can be monitored by re-running the final Select * From dbo.T_QueueProcessingResult With (NoLock) statement. Every ~5s, a new row should appear (5 in total).

An aside about Conversations

As in real life, striking up a “conversation” can be a time-consuming process so these are generally cached outside of the Service Broker framework. The TSys_ConversationHandle table is used to do this.
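
A hedged sketch of that caching pattern, assuming a single-column table of handles (the column name is an assumption):

-- Reuse a cached conversation if we have one; otherwise start a new dialog
-- and remember its handle for next time.
DECLARE @handle UNIQUEIDENTIFIER;
SELECT TOP (1) @handle = ConversationHandle FROM dbo.TSys_ConversationHandle;

IF @handle IS NULL
BEGIN
    BEGIN DIALOG CONVERSATION @handle
        FROM SERVICE DemoService
        TO SERVICE 'DemoService'
        ON CONTRACT DemoContract
        WITH ENCRYPTION = OFF;

    INSERT dbo.TSys_ConversationHandle (ConversationHandle) VALUES (@handle);
END;

SEND ON CONVERSATION @handle
    MESSAGE TYPE DemoMessageType (N'<Payload>Reusing the conversation</Payload>');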

ContrOCC Hackday IV – Part 2

Carrying on from our first post on the results of our developers’ adventures in the most recent ContrOCC hackday, here is the final set of projects:

Julian Fletcher – Graphically presenting performance information to the lay developer

We are now amassing ever greater amounts of anonymous performance information from live Local Authority (LA) databases:

  • execution times
  • deadlocks
  • missing index hints.

This information is transferred, via CSV files, from all the LA databases to a shared OCC server. Here, several Stored Procedures may be used to analyse the figures. However, these output slightly impenetrable tables, so I set about presenting the information in a graphical format using SQL Server Reporting Services (SSRS).

Setting this up was fairly tedious – in particular, getting all the necessary permissions (even when everything was on my own machine). However, in summary, this is how it works:

  1. You set up a SQL Server to have SSRS. This results in a couple of new databases being created:
    Screenshot of Julian's hackday project
  2. Configure SSRS using the Configuration Manager:
    Screenshot of Julian's hackday project
  3. Install the Microsoft SQL Server Report Builder application and design your reports. These can obtain their data from a database on an entirely different server.
  4. Save them to your hard drive and then publish them to the Report Server when they’re ready.
  5. The screenshot shows what this looks like. You can get the averages at all LAs for any one stored procedure:
    Screenshot of Julian's hackday project
  6. You can also drill through to get a by-version breakdown for a particular LA (not down – this is a slightly different concept):
    Screenshot of Julian's hackday project
  7. Just by chance, I’ve chosen a stored procedure which appears to have sped up in later versions!

As I mentioned above, getting the right permissions was a pain and I still haven’t made the Report Server available to other users but I hope to be able to soon.

Maciej Luszczynski – Cleaning up the developer documentation

I focused on providing How To documentation for new developers on the wiki. I started by cleaning up existing pages. The Home page is now greatly improved, although the Windows Topics section still requires clean-up.

I have also identified some topics that it might be useful to add to the wiki, such as:

  • how to remove a database object
  • how to remove a database table properly
  • how to commit a change to the SVN properly
  • how to check the imports flow

Matthew Clarke – CSV Merger

This tool improves merging of CSV data files which often cause SVN conflicts, for example when a column is added.

The tool can be set to be your default merge tool for CSVs in SVN.

Screenshot of Matthew's hackday project

When you would normally resolve conflicts, CSVMerger now opens instead.

Screenshot of Matthew's hackday project

It will highlight conflicts and missing values for new columns:

Screenshot of Matthew's hackday project Screenshot of Matthew's hackday project

You can even resolve conflicts via the UI (needs polish).

Screenshot of Matthew's hackday project

So, you’ll be able to go home 5 hours early because the merge was done in no time.

Nathan-Madonna Byers – F#/C#

I looked at using various F# Type Providers (from the open-source FSharp.Data.SqlClient package) to interact with ContrOCC databases in a type-safe manner, such that many errors can be caught at compile-time rather than raising run-time exceptions. In particular, I investigated the potential of this approach for writing test data scripts and worked through the implications of exposing this functionality to C# code.

The first thing to note is that there’s little in my prototype that couldn’t have been achieved solely in C#. However, I find F# to be a more natural and expressive language for this kind of thing. In terms of F#/C# interoperability, there were a few head-scratching issues that took a while to figure out – but it all seems to work pretty smoothly. (Famous last words.)

A quick summary of the specific Type Providers:

  • SqlProgrammabilityProvider: This can be used when working with stored procedures. Unfortunately, it doesn’t currently support SPs with output parameters or explicit return values, which rules out many of our SPs.
  • SqlCommandProvider: This can be used when working with arbitrary SQL commands. I’ve used this in my prototype as a lightweight wrapper around SPs (e.g. rather than calling an _Insert SP directly, I can have a simple three-line SQL command: declare an @ID parameter; execute the SP [passing @ID as the output parameter]; select the @ID).
  • SqlEnumProvider: This can be used to create enumerations for (relatively) static lookup data, e.g. from TRefSys_ tables. This means that it may no longer be necessary to define these enumerations manually in our C# code (and then have to worry about keeping them synchronized with the database &c.).

In terms of extending the prototype, various factors – the use of the SqlCommandProvider and the succinctness of F# (plus type inference and auto-generics) – mean that the amount of hand-written “boilerplate” code required is fairly small. (It’s not dissimilar to the amount of hand-written SQL needed to do it the old way but should be quicker to write.) The thing that really takes time is the need to analyze the SPs in relation to the database schema, e.g. to decide: which parameters to expose to C#; which should be mandatory, etc.

Finally, a summary of the key benefits:

  • Static typing (and Type Provider “magic”) mean that many errors can be caught at compile-time.
  • Intellisense (including visibility of defaults for optional parameters) and auto-completion.
  • Optional parameters are truly optional: no more “counting commas”!
  • The scripts are (more) readable.
  • Debugging support and full stack traces for exceptions.
  • “Production” SPs – i.e. those used by the UI (and other components) – are always used.
  • Modularization, i.e. we can finally share portions of data setup across tests.

Nigel Palmer – ContrOCC version manager

I set out to create a version number manager to assist with the building of ContrOCC. Version numbers are currently set manually in a variety of different places and there is some risk of one of them being missed. This tool will allow the developer responsible for the build to set all version numbers in the SQL database and the C# code. This will improve the accuracy of the build process and prevent test failures due to incorrect version numbers.

The ContrOCC Version Manager tool has a simple user interface that allows the user to set the version number. This then updates all the required build files to have the new version number.

Screenshot of Nigel's hackday project

Patrick Donkor – An executable imports/exports specification

I spent my Hack Day looking at how I could use SpecFlow as a way of creating an executable specification for the development of new Imports/Exports.

The idea is to take a specification using the Gherkin syntax to specify a top-level view of an import/export. This specification would be agreed between an IM and the client. The specification would be passed to the developer, who would code and test against it.

To see if this could be done, I used an existing client payments import as my template. I spent the first part of the day setting up the specification for the import. This went well. The second half of the day was spent trying to implement automated tests to use against the specification. I did encounter a few minor issues that, given time, could be rectified. The screenshot below shows what a sample file looks like.

Screenshot of Patrick's hackday project

Steph Sharp – Improving code integrity checks

I spent the day trying to suppress output from one of our internal code integrity checks (sys_checkschema). This output is an unwanted side effect, and can be confusing, especially to newer members of the team. Although I did manage to prevent this output being displayed, I was ultimately unsuccessful: expected errors were no longer reported and spurious new errors appeared, and I was unable to resolve these problems.

Tom Litt – Migration from within the ContrOCC UI

I took the brief “make the migration process at new customer sites a part of the ContrOCC UI” and tried to see whether it would be feasible to run migrations via a wizard available in the main desktop client, instead of the existing process where there is a separate tool. Why? If a customer had this, they wouldn’t have to send data off-site, and they could re-run migration tables as often as they liked. The selling point for us is that we could ditch the SQL Server Integration Services packages, which are a merge nightmare.

In production, the wizard would:

  • allow editing of migration settings
  • allow selection of files to be migrated in, or perhaps just point at a directory full of migration CSVs
  • run through each file independently, or run them all in an unattended batch, depending on the level of reporting required
  • feedback on the level of success/failure for each file.

The file specifications and import code are already written, so this is a case of wrapping the code as stored procedures and putting a UI on it.

In practice I got as far as establishing a wizard, selecting the client file, importing it and writing the existing HTML feedback to the wizard. This could be extended fairly easily to handle all files and report more nicely, and I think this is worthy of future effort.

Tomasz Agurciewicz – Taming our test scripts

ContrOCC’s testing meta language takes some time for developers to learn. That’s why I decided to finish/enhance a tool I made some time ago which makes it easier:

Screenshot of Tomasz's hackday project

The tool parses a test script and tries to present its contents in a sane fashion. All commands are listed in the Command dropdown, and selecting one of them lists available actions. So for example Command “S” (Service) has actions “Add” and “Load”.

Next, for the selected action it finds the appropriate SQL part in the script and tries to list what each column stands for. It then lets you enter all the required data and form a single test script line that you can use later on when preparing tests or data for ContrOCC. It also prints the selected chunk in a separate tab, so that, if in doubt, you can look up what each parameter does.

Trevor Hingley – SQL Server projects in VS2012

I looked at SQL Server projects in Visual Studio 2012, with a view to seeing how viable it would be as a tool for ContrOCC database development and deployment.

There are immediate issues in that the SQL Server Data Tools (SSDT) installed on the machine must be in line with Visual Studio. In my case creating a project in VS2012 immediately raised an error, as the SSDT were not compatible.

After installing the latest version of SSDT, I found the SQL Server project very easy to use. Database schema can be imported from a database, a selection of scripts or a .dacpac file very easily. Multiple projects can be used to handle separate version schemas, and the tooling makes comparison between projects/schema easy to view and update. Pre- and post-deployment scripts can also easily be added to the project.

Where the viability of the SQL Server project type fails is in deployment. Schemas can easily be deployed to databases which can be connected to, and scripts can be generated to create a new database or upgrade from a known schema to the latest schema. However, there is no versioning capability when generating scripts, so it is not possible to produce scripts capable of upgrading from multiple schema versions to the current version. A possible alternative is to generate .dacpac files, which do have versioning capability. However, this was not found to be viable either, as a .dacpac generated in VS2012 causes an XML schema error when attempting to import it into a SQL Server 2008 database.

ContrOCC Hackday IV – Part 1

Our ContrOCC hackdays give our developers a day to work on tweaks, gripes, improvements, or whole new features of their choosing, and then share those with the rest of the team.

For all the thinking behind our product hackdays, have a look at the intro to the first ContrOCC hackday.

This year we have so many team members that I have split this post in two; we’ll post the remaining projects soon. Here is the first set:

Alan Carter – Converting the distributed tests config file to XML

I worked on converting the configuration file for our distributed testing service from plain text to XML, something I’ve wanted to do for a while. As the configuration for the tests grows ever more complex, the plain text file becomes increasingly arcane and difficult to navigate. An XML file allows the configuration to be laid out in a more structured manner, and lets most text editors provide benefits such as folding and syntax highlighting. It also has an accompanying schema so that we can catch mistakes much more easily.

This work supports a longer term goal I have of getting the config file into SVN and splitting out parts which are likely to be changed frequently.

There’s a little more work needed to finish writing the new parser, but after that it should be fully usable. I also hope to arrange for the distributed tests service to perform schema validation of the file.

Screenshot of Alan's hackday project

Chris Griggs – Converting ContrOCC tools to use Git

I looked at converting our tools that currently use SVN to use Git. We currently use the SharpSVN library to query our repository. There’s a SharpGit project on NuGet that’s supposed to be an almost direct replacement; unfortunately, with only ~200 downloads, it’s very immature and not up to the task. I chose libgit2sharp as the best alternative. This is a C# wrapper around the official libgit2 library. The library itself is well documented, the wrapper less so.

I was quickly able to convert the Macro Developer Tool (an internal development tool which uses the C pre-processor to avoid code duplication in SQL) to use Git, as it only looks at the working copy to see which files have been added or modified. I then turned my attention to the PFG (our patch file build tool). I soon learnt that this was not going to be so trivial.

The concept of a tag is somewhat different between Git and SVN, as is the idea of looking at the central repository to find the previous one (although I doubt we’ll all have the entire repo on our machines either). As it’s written, the PFG relies on there being a linear succession of changes between the previous tag and the current one. In Git it’s likely that we would branch out and merge in commits throughout the development of a tag. We would therefore need to compare the two trees without relying on their history. I am satisfied that this is possible, but more thought will be required than was possible on a hack day.

Chris Henry – Visualising the ContrOCC Database

I spent the day looking into ways of visualising the structure of the ContrOCC database. The aim was to see to what extent it divides into modules which could in principle be more rigorously separated in future. The work in practice fell into three parts:

Part 1: List Dependencies

Extracting data on dependencies from our source data. I have a long-standing prejudice against SQL’s own lists of dependencies and, since I knew it wouldn’t be too much work, I instead read dependencies out of the SQL source code using parser code borrowed from the PFG (our patch file build tool).

Part 2: Box & Arrow Graph

I next tried to graph dependencies as a box & arrow diagram. I used the MSAGL library and technically speaking it worked first time. With minimal effort I was able to produce a suitable diagram with tooltips showing the name of the objects.

Screenshot of Chris' hackday project

However, when I scaled up to the 1000s of objects in our database it didn’t even try to produce a decent layout – it just stretched the whole graph out horizontally:

Screenshot of Chris' hackday project

Or zoomed in:

Screenshot of Chris' hackday project

Part 3: Dependency Structure Matrix

Whilst looking into ways I might influence the layout I stumbled across the idea of the Dependency Structure Matrix and immediately decided it was a better prospect. In true ContrOCC fashion I began writing my own DSM code and by the end of the day had an incomplete, tantalising but ultimately broken implementation.

Follow-up

I intend to:

  • Investigate the commercial tool Lattix or, if it doesn’t impress, fix my broken implementation.
    Postscript: I tried it. I like it. The costs are such that, although we might consider it, this would only be in the context of a future decision to undertake a major program of database restructuring.
  • Spend some time looking at the results.

Chris Parish – Prototyping a new UI for different screen resolutions

I worked on a prototype of a new user interface layout for ContrOCC which would degrade better on small screens, but take better advantage of large (or at any rate tall) ones. This involved redesigning the Contract UI so that it no longer contained tabsets, but instead a series of vertically stacked sections, each of which can be individually expanded and collapsed. There is also a series of links along the top of the page which allows the user to jump to any section of interest.

Screenshot of Chris' hackday project

Ian Evans – An app to capture client-confirmed visit data

ContrOCC records details of client visits to be carried out, and can generate payments/charges based on those visits, but without verification there will always be an element of trust. This is a browser/smartphone-based add-on to ContrOCC to allow client-confirmed visit data capture.

Instead of requiring the service provider to verify attendance by relying on, for example, a client’s landline, the visitor hands the client their own smartphone (which may be the visitor’s personal device) and the client enters a picture-based PIN. Pictures are used because they are potentially easier to remember than a numeric PIN. For this demo I have used various types of balls, but obviously anything meaningful to the client could be used.

Linked to location-based data, the time and place are verified, with that data instantly available in ContrOCC and matchable against PO/Invoice data. Visit start and end are recorded, together with the option for the client and/or visitor to send a message to the LA.

For this demo, a smartphone surround is shown. Some LA branding has been applied, but it really needs design input!

Screenshot of Ian's hackday project

The code is inline C#, borrowing concepts such as the “no-compile developer experience” from .NET vNext, where the developer never needs to build, just code and save. There is no solution or project; the website is opened directly in any version of Visual Studio, with system-wide code shared from App_Code. Development code is continuously compiled by aspnet_compiler into a deployable “build”, with a compile management screen showing ‘real-time’ code warning/error status.

Ian Lewis – A tool to parse PDF files

The task I chose to undertake on Hack Day was to develop a tool that would parse PDF files such that the contents of the file could be inspected. I managed to get to the stage of recognising the PDF Header and Trailer, and converting the body into tokens which would be used to create PDF Objects.

One application of this utility could be to allow the ContrOCC generated PDF exports to be inspected and checked against a set of expected results. This would allow PDF exports to be checked during the automated tests. Initially, the automated tests would check that the text content is as expected and could be extended to include the positioning of the text and the type of fonts that were used in the export. Ultimately, once the structure of the files is more fully understood, this utility could be used to allow PDF files to be generated.

This was an ambitious Hack Day project that needed more than the one day that was allocated. I estimate that it would take at least a week to complete the parser with the appropriate automated test code.

Jo Lloyd – Alternative help navigation

Our current help technology has many good features, but lacks the ability to deal well with simultaneously providing descriptive (“what is this field?”) and functional (“how do I … ?”) help. While the text, being endlessly flexible, can easily encompass all kinds of help, there is no automated and structured means to point users to different paths through the content.

  • We need to provide a description of every field on every form. But we would also like to provide activity-based help based on where the user is – e.g. how to disregard a property, how to carry forward a payment, whatever.
  • The current approach to Help in widely used applications like Office largely abandons context in favour of search. But I think our context-related help is important, and I would like to maintain the principle that pressing the F1 key always shows you something useful, rather than requiring more clicks or a search before anything is shown.
  • One thing I do like about Office Help is that in addition to a title, it shows you a snippet of each item in a list of links. It would be nice to include something like that.
  • The tree structure on the left of our current help seems to me very useful, and I’d like to retain that functionality (or something like it).

How it is now

Screenshot of Jo's hackday project

  • I’ve illustrated a simple approach (conceptually, that is), which just adds some options to what we already have via a new menu bar offering context-related links of two (or indeed potentially more) kinds. F1 would open the help for the current position, as now, and this would set the initial context of the dynamic menus.
  • I have imagined that tags would be used to set the relationships between items, as it is both relatively low maintenance and familiar. But manually maintained lists could also be used.
  • I have assumed that index and search options remain much as now (although I haven’t illustrated them).
  • I should stress that this is all purely hypothetical and I have not spent any time investigating technical solutions.

Providing more options

Screenshot of Jo's hackday project

The Related menu

Screenshot of Jo's hackday project

The How to menu

Screenshot of Jo's hackday project

How it might look with snippets

Screenshot of Jo's hackday project

Jon Bowen – Planner-style timesheet UI

With significant help from Chris Smith I have created an “alternative” UI for OCCult (our internal timesheet system). This is not intended as a replacement for the existing timesheet entry form, but as an addition that considerably simplifies timesheet entry for people with particular kinds of working patterns:

  1. People who have largely the same project for all their work
  2. People who record their work in their notes by time of day rather than by hours worked
  3. People who prefer visual to verbal reasoning (though that’s not a working pattern!)
  4. People who work on several projects at once, recording some time for each project every day.

The new UI is visually complete, and partly coded. There are still many bugs, and some features that aren’t implemented at all. Some staff have evaluated the new UI and, despite some networking problems which leave the demo file locked, and MS Office installation problems on some workstations that prevent the new UI from opening, feedback has been broadly positive, with some negative. However, only a minority of staff have provided feedback. The most negative feedback has been from staff who are already happy with the existing UI and don’t see the need for an alternative.