Walk for Parkinson’s – Oxford

Oxford Computer Consultants has long-standing links to Parkinson’s research, having been involved with a number of EU projects including Parreha, ParkService, PERFORM, and CuPiD.

A group of OCC employees and their families decided to take part in this year’s Oxford Walk for Parkinson’s on Sunday 25th October.

Walk for Parkinson's

The OCC team preparing for the walk.

On a sunny Sunday morning, Rosalind Ravelli, Laura Walton, John and Marie Boyle, and Reynold and Liz Greenlaw all walked the 4 mile course in under 2 hours. James Greig, Andy Muddiman, Julie Mabbett and Adam Wiseman completed the 8 mile course in under 3 hours 15 minutes, with Chris Griggs being the first to complete the 8 mile course by finishing in under 2 hours!

After having cake every day for a week, our “Eat for Parkinson’s” cakes raised £134.19. This was added to the OCC fundraising page, where the total currently stands at £705.00. Thank you everyone for your support!

The security standards you need to consider when handling sensitive data

At OCC we have been building and hosting software that deals with sensitive public sector data for over a decade. But where do you start if you are embarking on a project/business that has sensitive data at its heart?


ISO 27001

The basic standard to look at for a company in this sector is ISO 27001:2013; you can purchase a copy of the standard online. If you go down this path, the one thing I will say is that it’s easy to misunderstand its purpose and hence go overboard and do more than you need to. It is a standard that outlines a number of things you have to think about; it is not a list of things you have to do. It is perfectly acceptable to say that you have risk assessed something and decided it’s not worth doing, providing you can justify it. The standard is designed to make it clear to others what it is you do, and to give assurance (via audits) that you have thought about it and that you do what you say.

As a start-up, a number of things can be radically simplified, and you can add complexity as you grow. You can get consultants to help you with this, but it is feasible to do it yourself. To achieve certification you need to create a series of policies and practices and a Statement of Applicability (which relates the standard back to the policies), and then get audited by a registered body; we use SGS but there are many others.

N3 & PSN

If you want to connect to systems within the NHS network, there is also N3 connectivity. This will be important if you need to support your system running inside the N3 network or if you need a connection to it.

For N3 I only know the basics: you complete a code of connection, which is a bit like ISO 27001 only more thorough and prescriptive, i.e. there are things you must have, such as two-factor authentication and a certain grade of security device.

For non-health organisations there is also the PSN, or Public Services Network, to which the same principles apply, just not quite as strictly. This is the one we are currently working on for OCC, and we hope to go live by the end of the year.

Penetration Testing

The other thing you cannot do without in the current climate is to get your system externally penetration tested by a company that is CHECK or CREST registered, and to ask for a statement of opinion you can show (potential) customers. This now seems to be expected as standard among our customers.

These companies can cost anywhere between £800 and £1,200 per day, and we usually have to buy about 3 or 4 days for a system. How often you do this will depend on how often you change the system.

Where to find help

It is a bit of a mountain to climb, but it’s not as bad once you get into it; take it in chunks and keep it in perspective (you will find purists who will easily go over the top). There are a number of consultants who can help in this area – many auditors have a side business of their own selling consultancy and training services, and the good ones will tell you what you don’t need as well as what you do. My auditors have nearly all given pointers during the audits as well.

In addition there are hosting companies with experience of connecting to PSN/N3 networks and of systems that hold sensitive data. They are able to provide you with advice and, if handled correctly, can be a useful free resource if there is a contract in it for them. Just keep an eye on the costs of the eventual contract you sign with them.

We’ve covered the standards most relevant to the public sector, but of course there may be others related to your industry. The PCI DSS (Payment Card Industry Data Security Standard) is an obvious example, for those handling payment information. You should seek advice on other standards that apply to your business.

The other thing to remember is that the landscape in this area continually changes: the levels of security and the kinds of things that were acceptable when we first started hosting systems are no longer acceptable, and we’ve had to adapt and migrate to stay on top of the changes. So you are never truly done!

ContrOCC Hackday V – Part 2

Carrying on from our first post on the results of our developers’ adventures in the most recent ContrOCC hackday, here is the final set of projects:

Julian – Client Provisions

As anybody who has ever peeked under the bonnet of the Charging Engine will know, charges are based on Client Provisions. These are entirely derived from Care Package Line Items, and there is a many-to-many relationship between the two. However, at many Local Authorities a Client Provision has a one-to-one relationship with its corresponding Care Package Line Item in about 90% of cases.

To see whether there are any fundamental problems with imposing “1 CPLI = 1 CP = 1 CPLI”, I doctored the code to make only one CP per CPLI and then ran the standard charging tests. This resulted in some errors being reported: those where the totals still agreed but were made up of different individual charges (i.e. effectively correct), and those where the totals were different (genuinely incorrect). These will be reviewed soon.

Maciej – Alternative storage

The aim for my Hack Tuesday was to investigate alternative options for database storage, especially for bulky but simple stuff like documents or audit data, to improve some support-related activities, like:

  • Backup and restore performance
  • Backup download time for purposes of investigation

The following options have been considered:

  • Split databases (i.e. a master database and auxiliary database for the purpose of blobs/audit storage)
  • Multiple file groups
  • FILESTREAM option

It turned out that none of the above really matches our needs. Having separate databases poses a risk of losing internal consistency; multiple file groups are really tempting (in fact some customers already exploit these), but they suit slightly different purposes; and the FILESTREAM option may improve the performance of some blob-related queries, but it does not provide any additional benefit in terms of backup/restore activities.

I did find some clues suggesting that we could exploit read-only file groups to store audit data (or, to be exact, to move the audit data from the current table to an archive table on a regular basis), which may allow for reduced locking, somewhat improved performance and faster backups (as there is no need to back up read-only file groups frequently, although that would require a different backup strategy).

Matthew – WiX

This time I ported one of our installers from Windows Installer Projects (which is deprecated) to Windows Installer XML (WiX). I chose the ‘Product Package Installer’ installer.
WiX is MSBuild-based and declarative, requiring you to explicitly list all the files that are installed. However, a built-in tool (Heat) can be used to auto-create fragments that list the files in a directory (it runs as a Pre-Build Target of the WiXProj).

Screenshot of Matthew's WiXProj file

You can optionally apply an XSL transform to exclude certain files, such as PDBs, that you don’t want installed. This could also be achieved with Pre-Build copies, but the XSL transform is much more reliable.

Screenshot of Matthew's XSL

The main ‘Product.wxs’ does the hard work. In the screenshot below, the Component Groups ‘Binaries’ and ‘Tags’ are WiX Fragments that were automatically created by Heat. Unfortunately, in order to have a Start Menu shortcut it is necessary to explicitly include the main executable file rather than letting Heat do the work; this is why ProductPackageInstaller.exe isn’t part of the ‘Binaries’ Component Group and is excluded from that Fragment by the XSL.

Screenshot of Matthew's WXS

Nathan – DB upgrades with F#

I set about replacing the ‘What is the Upgrade Path’ wiki page, which describes how to upgrade a ContrOCC database from one version to another, with a standalone console application. The console application is written in F#: partly because it’s a good language for rapid prototyping and partly because I’m keen to gain more experience with the language – but primarily because I’d already collated the required structured branch / release / merge information as part of my ongoing development of the stress testing tool (also written in F#). The implementation is nearly complete: just some loose ends to tidy up, plus I’d like to add some unit tests.

Screenshot of Nathan's db upgrade console app

Nigel – Generating test data

I attempted to create an application that allows TPA (our internal testing language) to be created just by ticking the types of data that you want in your database, the idea being that different combinations of data could be created without needing to hand-code any TPA. I built a framework that sticks TPA together and only references TPA data if it exists (e.g. for Organisations, the Organisation Ownership field is only populated if Organisation Ownerships have been created).

I also included a base date with a view to allowing the TPA to be generated for different dates, but I did not find time to implement this feature. I did implement a feature allowing the user to choose which “invalid” characters should be included in the test data (e.g. for a CSV export you want to include commas in your test data to make sure that the export can handle them). With the framework in place, more TPA “bricks” can be added to the “TPA bricks” folder to extend what test data can be generated. As you can see from the screenshot below, I have only implemented 5 test data items.

Screenshot of Nigel's test settings app

Steph – New documentation

Not having a particularly good long-term memory, I have a habit of writing down any point of interest in my lab book on completion of a task. Notes vary from where a particular feature lives in the graphical user interface and how to use it, to schema diagrams or more specific development-related reminders. As time has gone on, the earlier notes seem very obvious and I rarely refer to them personally, but on more than one occasion they have proved useful to a colleague and have been borrowed and even photocopied.

Since a lack of documentation is a common complaint amongst newer members of the team, I used hack day to start writing them up in a digital format which can be indexed and searched. This will be made available in a central location and hopefully added to over time.

Tom G – Automating deployment

I looked into trying to take some of the pain out of setting up the Remote Module for testing, with mixed results.

I spent most of the day automating the tedious process of manually setting up the folders and scripts required for the module, which differs depending on whether you are doing an install or an upgrade, as well as on the version of ContrOCC you are testing. I ended up with a working C# command-line application that fully automates this section of the process, albeit with a couple of limitations.

I also spent some time trying to programmatically hack IIS settings, but didn’t get too far. This is probably not worth the effort anyway as the changes we make in IIS for the Remote Synchronisation Service are quick and trivial. I was going to try modifying web.config files in C#, but I didn’t get around to this and again this is probably not worth the effort considering how minor the changes we make are.

Tom L – Automating component testing

I spent my hackday firstly trying to get my “C# component test runner” up and running on the latest branch. This tool is intended to build our C# solutions (all of them, for test purposes) and run our Visual Studio tests. Ideally once working this would be kicked off nightly. After banging my head against annoying MSBuild-from-the-command-line issues in the morning I made progress on actually running some tests in the afternoon, finally resulting in the ImportExport tests being run.

I have yet to see whether the results I’ve got from running the tests in debug mode against a WIP database bear any relation to the official results being seen by people running the tests the standard manual way.

Tomasz A – Web-based CSV editor

I tried recreating our reference data CSV editor tool as a web application to see if it’s feasible at all. I used MVC 5 + WebApi 2.0 + AngularJS + Bootstrap to create a single-page application that could fetch, edit and update our reference information. Adding additional filtering to ease navigation proved to be easy and effective, though I ran out of time to implement actual updating of data in the database, so at this point it’s just a fancy data viewer. With the potential to style it quickly, it could become quite useful and suit developers’ personal preferences. With the filtering it’s also much simpler to add or edit the data required.

Screenshot of Tomasz A's CSV editor web app

Trevor – Parsing & Analysing T-SQL

I wanted to look at ways of parsing T-SQL. Third party tools for T-SQL seemed pretty ropey. The general consensus on the message boards was to use the C# ScriptDom (Microsoft.SqlServer.TransactSql.ScriptDom).

Having established this as the correct approach, I started writing a simple Win32 app that allows a SQL script to be specified; the SQL parser is then used to inspect the script and report errors. The parser uses the “Visitor” code pattern to collect SQL fragments which correspond to scenarios that need to be examined. These fragments can then be broken down by casting them to specific DOM object types, providing helpful properties which remove the need to write complex regex code to isolate SQL elements.

However the behaviour of the parser is complex, reflecting the complexity of T-SQL syntax (especially when inspecting JOIN fragments). While I managed to code a few scenarios (e.g. missing collate statements on temporary table definitions and missing decimal column precision declarations), I ran out of time when looking at some of the more complex scenarios.

My conclusion is that the functionality does show promise for writing more elegant tooling; however, it requires significantly more time than a hack day allows to explore fully. I will therefore put this project on hold and hopefully revisit it on the next hack day.

LocalGovCamp 2015 & Local Democracy Maker Day

Last month OCC took part in LocalGovCamp 2015 and the Local Democracy Maker Day fringe event in Leeds.

LocalGovCamp is an annual “unconference” where the attendees set the agenda by pitching sessions, building a schedule, and taking part in the sessions that appeal the most to them. To people used to formal conferences, it might sound a little chaotic, but it works incredibly well and results in a highly topical and engaging event.

Sessions ranged in topic from Open Data, to low-cost video streaming, to government as a platform. My colleague Samuel Martin found the session on text-messaging particularly thought-provoking. He writes:

One topic that was pitched was on how text-messaging is a form of communication that the public sector could exploit better.

The discussion that followed was that text-messaging, in the sense of SMS from a mobile phone, is now a well-established communication channel. Research has found that text messaging typically produces faster responses than other forms of written communication. This has been exploited significantly by the private sector for example to record meter-readings, advise of appointments, track orders, carry out user satisfaction surveys, and other purposes. Participants considered that although there were some examples of the public sector taking up this functionality, for example schools communicating with parents and carers, there were other opportunities for wider use. Use of text messaging was seen in particular as being a potentially good way of interacting with hearing-impaired citizens.

This would obviously depend on councils or other public bodies having a record of relevant mobile phone numbers; these could be captured at any point of customer contact and recorded on a CRM system. Integrating SMS communication with CRM also offers the opportunity to deliver more personalised messages that can be recorded automatically as customer interactions, and automated customer journeys could be mapped and planned for.

However, the group recognised that some people do not keep their number when they change contract or SIM card and as numbers are reassigned there would be risks involved in relying too much upon this channel. This has an impact on the nature of the information that can be safely transmitted or requested. This may ultimately limit whether text messaging is used to transact services, or purely for one-way information sharing.

Importantly it was also noted that regarding “Text Messages” as only meaning “SMS Messaging” is now outdated. WhatsApp, Google Hangouts and Facebook Messenger are all regarded as text messaging and may offer further opportunities for interaction, beyond any existing council use of social media for publicity purposes. However none of the participants identified examples of the technology being used in this way.

What made this session, and indeed the format of the event, successful was that the people who participated were the right people to participate, and the topics were the right topics. This comes from allowing a diverse group of individuals (local authority officers, freelancers and suppliers) to join together and crowd-source issues and solutions without a top-down agenda.

We really enjoyed the events and look forward to seeing how the attendees and community continue to develop the ideas and discussions that were had.

You can read more on the topics and outcomes on the LocalGovCamp Blogs and Pics page.

How to write a 5 year plan (and why it doesn’t matter if no one follows it)

The Lead Developer logo

Tom Litt & I will be attending The Lead Developer conference in September – it’s a new conference with a great line-up of speakers covering new and disruptive technologies (of course), tools, methodologies, and, because it is aimed at Leads, also managing teams, motivation and leadership.

To warm up I’ve written an article for the conference blog: How to write a 5 year plan (and why it doesn’t matter if no one follows it).

ContrOCC Hackday V – Part 1

Our ContrOCC hackdays give our developers a day to work on tweaks, gripes, improvements, or whole new features of their choosing, and then share them with the rest of the team.

For all the thinking behind our product hackdays, have a look at the intro to the first ContrOCC hackday.

We have plenty of projects to talk about again this year so I have split this post in two; we’ll post the remaining projects soon. Here is the first set:

Adam and Tomasz B – Code analysis

We did some research on Visual Studio’s Code Analysis tool. The tool automatically checks all code for compliance with a customisable set of predefined rules. Those rules check for common programming mistakes, adherence to good design practices, etc. There’s a wide selection of Microsoft-recommended rules already built into Visual Studio, which can be cherry-picked to form custom rulesets.

We focused our work on developing a few custom rules that would enforce some of the ContrOCC specific conventions, such as using correct prefixes for variable names or making sure all forms inherit from the right base form. We’ve been able to develop a few sample rules and run them against the ContrOCC desktop client code as a proof of concept.

Alan – Database schema documentation via metadata

I put together a small demonstration of adding extended properties to database schema objects (specifically, schema, tables or columns) so that we can add documentation to them which can then be viewed in Management Studio.  This is something I’d done in a previous hack day, but this time I put together ContrOCC-style infrastructure to support it, giving a rough idea of how I’d expected it to be done if we wanted to use this for real.

I also mocked up a representation of how the data in our system reference tables could be moved into the metadata file, rather than its current location in CSV files. This would mean we could ditch the CSV files for good, thus having one fewer place to find data. It would also mean not having to maintain the CSV column headers to keep them in line with schema changes, and the metadata representation of each table’s schema would be visible in the same file as the data.

Hopefully the data would also diff more easily in this form, and would be more human-readable when viewing diffs or editing files manually.  If we were to implement this, the Data Maintenance Tool would be updated to read from and write to the metadata file rather than the CSVs – so, editing the data would be no different from before, but behind the scenes it would be stored in a more sensible way.

Screenshot of Alan's work

Chris G – Upgrade AllTheThings to .NET 4.5

Upgrade AllTheThings™ to .NET 4.5. Then try to find a cool, user-facing reason to keep it.

The first part of my task seemed trivial, the latter less so. Microsoft has fallen out of love with WinForms in favour of WPF, so there was little in the way of eye candy to entice users to higher versions of .NET (unless you count EnableWindowsFormsHighDpiAutoResizing in .NET 4.5.2). So I turned my attention to making the UI a bit more “U” friendly.

My efforts concentrated on my annoyance with having to support screen resolutions you can draw with a crayon. I looked into having the screens for creating new entities flow through (a bit like a wizard) so we don’t need to present so many controls on one form. I also made the navigation bar collapsible to free up some real estate; this even works with the new hot-keys for keyboard navigation.

Screenshot of Chris' work

Chris H – XML import/export definitions

Our current representation of imports & exports uses a system of four interlinked system reference (TRefSys) tables. This means that the reference data defining any single export is scattered across four CSV files and has to be tied together with IDs. I’m interested in replacing this with a single XML file defining each import or export. This would have the following advantages:

  • All data defining a single export is in one file so easier to comprehend
  • XML is more human readable than CSV
  • No need to manually type lots of IDs for sub items

For hack day, I focused on:

  • Generation of XML from our existing TRefSys tables using SQL Server’s surprisingly helpful SELECT FOR XML feature. This would be used in the one-off transition to the new functionality.
  • Writing temporary table functions which exactly reproduced the original contents of the TRefSys tables (including, for now, the IDs); this then enabled existing code to be updated to use the XML via these functions with a simple search & replace operation.
  • Conversion of one actuals import to XML format

This was a success, and I was able to get the relevant import test to pass with the TRefSys definition of the same import deleted. The next step (which I didn’t have time for) would be to eliminate the unwanted Database IDs from the format and rewrite the SQL which depends on them.

Chris P – Loading lists asynchronously

For hack day I produced a working prototype which populates lists asynchronously. This should reduce the delays users experience while lists populate: although the overall time is unchanged, the application becomes responsive more quickly. It also means that if filter conditions are entered which lead to a long delay, the filter conditions can be modified immediately.

Changes made:

  • Return to the UI immediately once list population has started
  • While the SQL is running:
    • Show “Loading…” in the list (instead of the normal SQL wait dialog)
    • Disable the list so no editing is allowed
  • Populate list on completion of SQL
  • Kill executing SQL if no longer required:
    • Move to different content type (i.e. control disposed)
    • Filter conditions change (i.e. another call to mFillList)

Other ideas:

  • It might make sense to modify lazy loading so that, in the background, lists start loading one at a time anyway. We need to ensure that loading for displayed lists is shown immediately
  • We might want to limit the maximum number of SQL commands which can execute at a time: probably only really an issue for the accounts screen where we have a lot of tabs

Damian and Pawel – Visualise database schema metadata

Aim: To help maintain metadata files when creating/editing new database tables, columns etc.

What we did:

  • Visualise ContrOCC Metadata.xml file in more readable form (TreeView)
  • Edit existing attributes (most of them)
  • Add new attributes
  • Add new instances of the nodes

Screenshot of Damian & Pawel's work

Near the end we realized that the problem is harder and more complex than we thought. There is still a lot of work to do, but it was a fun challenge!

Ian L – CSV column alignment tool

I chose to create a file pre-processor that formats a CSV file such that all the columns line up and a post-processor which removes all the padding added by the pre-processor.

These file processors could then be used by AraxisMerge, the file difference tool I use, to make comparing our table CSV files from the Data directory easier.

As an extra challenge, I used PowerShell to write the file processors.

Learning how to use PowerShell took more time than I expected, and the cmdlets that I thought existed did not, which means I will have to write more script than I anticipated.

Jon – Generating database test scripts from UI interactions

My task was to explore providing a user-friendly UI for generating database test scripts from the ContrOCC UI. I spent most of the day getting to grips with the ContrOCC UI Framework which I’ve never had to do any serious work with before, so it was a great training day for me though not very productive.

I reached the point of being able to open a form from the Troubleshooting menu which contained lists of clients and CPLIs with tick-boxes for selection. Unfortunately, before I committed, Visual Studio crashed and scribbled all over my main .cs file, so I have no screenshot or code. However, now that I know what I’m doing it should be fairly quick to repair if I ever return to this.

Generations Working Together – Panel Discussion

Government Office for Science

Generations Working Together

Tuesday, 16th June, 2015
18.00 – 21.00

University of Oxford, Andrew Wiles Building, Mathematical Institute,
Woodstock Rd, Oxford, OX2 6GG


Join us for a Foresight Review, panel discussion and refreshments

The number of older people in the UK is set to rise significantly. What will this mean for the UK’s workforce? How will it impact younger generations? Are we doing enough to nurture younger talent? A fascinating project by the Government Office for Science is set to unveil the challenges and opportunities of an ageing society. This Foresight Future of an Ageing Population Project is chaired by the University of Oxford’s Professor Sarah Harper (Oxford Institute of Population Ageing).

Introduction address

Professor Andrew Hamilton (Vice-Chancellor of the University of Oxford)


  • Will Hutton (Principal, Hertford College, University of Oxford)
  • Mark Evans (Chief Executive Officer of Adaptix, NED and Founder of Mirada, Non-Executive Chair of Cydar)
  • Dr John Boyle (Managing Director, Oxford Computer Consultants; Chairman, The Oxford Trust)
  • Steve Burgess (Chief Executive Officer, The Oxford Trust)

Chaired by: Professor Sarah Harper (Director, the Oxford Institute of Population Ageing)

18.00     Refreshments

18.45     Panel 

19.30     Wine Reception

Please register with allison.stevens@venturefestoxford.com

The event is organised in collaboration with: the Oxford Institute of Population Ageing (University of Oxford) and Venturefest Oxford


Adding a text size widget to your site using CSS and Sass

A requirement we hear from many of our Government customers is that a sizable number of their users with sight impairment prefer to have a text size widget on-screen when they browse a website.

These accessibility widgets are tough to implement cleanly using HTML and CSS, but the advent of CSS preprocessors such as Sass and LESS makes the job much easier. In this post we’ll see how we can use Sass to create a text size widget.

What we’re aiming for is the standard row of “A” characters increasing in size to denote changing the text size on the page (see the screenshot showing three “A” characters of increasing size) – you can take a look at one in action on one of our sites.

What we will aim to do is add a class (small/medium/large) to the body element of the page, which we can then refer to in CSS to say: when the body has a class of small then the font-size should be x; when medium then y; large then z.

There are two elements to making the widget work:

  1. JavaScript that adds the widget to the page, records the setting in a cookie, and adds a class to the page body so that our CSS can react to it.
  2. CSS that sets the font-size based on the body class. A Sass mixin makes this much more manageable.


Here’s the code, and we’ll go through it bit-by-bit:

$(document).ready(function () {
    // Add in the widget
    $('#text-size-widget').append('<ul><li class="small"><a href="#" class="selected">A</a></li><li class="medium"><a href="#">A</a></li><li class="large"><a href="#">A</a></li></ul>');
    // Read the current text-size from a cookie
    if ($.cookie('TEXT_SIZE')) {
        var cookieTextSize = $.cookie('TEXT_SIZE');
        // Apply the saved setting to the body and highlight the matching "A" link
        $('body').addClass(cookieTextSize);
        $('#text-size-widget a').removeClass('selected');
        $('#text-size-widget li.' + cookieTextSize + ' a').addClass('selected');
    }
    // Add the resize event handler to the widget's links
    $('#text-size-widget a').click(function () {
        var $el = $(this);
        var textSize = $el.parent().attr('class');
        // Swap the text size class on the body so the CSS can react to it
        $('body').removeClass('small medium large').addClass(textSize);
        // Update which "A" link is selected
        $('#text-size-widget a').removeClass('selected');
        $el.addClass('selected');
        // Remember the choice so it carries over to other pages
        $.cookie('TEXT_SIZE', textSize, { path: '/', expires: 10000 });
        return false;
    });
});
We’re using jQuery here so the first line sets this script to run when the page is loaded and ready.

We then add in the HTML for the widget, by inserting a list of links into any element on the page with an id of text-size-widget, so you’ll need to have at least one of those in your page. We insert this using JavaScript so that if a user doesn’t have JavaScript enabled they won’t find a non-functioning list of links on their page.
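For example (the div here is just an illustration – any element with an id of text-size-widget will do), the only markup you need to add to your page template is something like:

<div id="text-size-widget"></div>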

We then check to see if the user has a cookie called TEXT_SIZE; if they do, we read it, add the appropriate class to our body element, and refresh which of our “A” links is selected. This makes sure that the user’s choice carries over from page to page on our site.

Finally we set what happens when the user clicks one of the “A” links:

  • We detect which link they clicked and store the small/medium/large class in a variable to use later
  • We remove whatever text size class might be on the body currently and set the new one
  • We update which of the “A” links is selected
  • We store the new text size in the TEXT_SIZE cookie
  • We return false so that the link does not perform its standard functionality (in this case adding “#” to the URL)

That’s the widget, cookie and body classes dealt with, now to tell the CSS how to handle it.


What we’re looking to end up with is along the lines of:

p { font-size: 10px; }
body.medium p { font-size: 12px; }
body.large p { font-size: 14px; }

Clearly it would be massively frustrating to have to write all three options out every time you wanted to set the font-size of something – this is where our Sass mixin comes in.

Mixins let you extract repetitive code out of your CSS into something that looks a bit like a function (you can even pass in values/settings) and call it from your CSS whenever you need it.

Here’s what our mixin looks like:

@mixin font-size($baseFontSize, $mediumMultiplier: 1.2, $largeMultiplier: 1.4) {
    font-size: $baseFontSize + px;
    body.medium & { font-size: ($baseFontSize * $mediumMultiplier) + px; }
    body.large & { font-size: ($baseFontSize * $largeMultiplier) + px; }
}

The mixin is called font-size and you pass in the “base” font size (i.e. the size to use for the smallest setting, which is the default), and optionally you can set the multipliers we use to go up to the medium and large sizes.

We’ve then got a bit of Sass code that generates our CSS using the base font size and the multipliers.

Of course if you prefer to use ems or another unit you can replace the px with whatever you want.

To use this mixin all we need to write is:

p {
    @include font-size(10);
}

And when we compile the Sass it will generate exactly what we want. So all we need to do is replace any normal font-size declarations with a call to our new mixin and we’re away!
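As a quick sanity check: with the default multipliers, that single declaration should compile to something along these lines (a sketch of the expected output rather than the compiler’s exact formatting), matching the hand-written CSS we started with:

p { font-size: 10px; }
body.medium p { font-size: 12px; }
body.large p { font-size: 14px; }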

Research behind Virtual Assay wins Prize

Oliver Britton, a DPhil student in the Department of Computer Science, University of Oxford, has won an international prize for his paper on a new computer model of cardiac electrophysiology. The National Centre for the Replacement, Refinement and Reduction of Animals in Research (NC3Rs) awarded Ollie the prize for the work’s potential to reduce the number of animals used in drug testing. He plans to use the prize grant for further research to apply the methodology in neuroscience.

OCC has been working with Ollie and the Oxford University Department of Computer Science to bring his research to market. We have developed Virtual Assay, a tool enabling commercial and academic researchers to model the effects of drugs on populations of cell models that display the full range of electrophysiological responses seen in real heart cells. This is an improvement on previous models which have tended to ignore this natural variability. The tool can be used to screen out drug candidates that could be toxic to the heart before animal studies are done. The work has been supported by Isis Innovation and the EPSRC.

OCC would like to congratulate Ollie and his supervisors, Professor Blanca Rodriguez and Dr Alfonso Bueno-Orovio, for their well-deserved win!

For more information, please contact Dr. Fred Kemp, Deputy Head of Technology Transfer, Isis Innovation fred.kemp@isis.ox.ac.uk


Compiling Sass with Gulp in Visual Studio

We love Visual Studio at OCC. It’s an incredible IDE for software and web development, and Microsoft puts amazing effort into keeping up with the direction in which developers find themselves moving.

In this case we’re talking front end web development and specifically:

  • Gulp.js – a JavaScript-powered automated build system that uses Node to perform the tasks you find yourself doing over and over
  • Sass – a language extending CSS (that is compiled to produce CSS) to give developers more power when creating stylesheets

For quite a while Visual Studio has supported Sass, specifically through the Web Essentials plugin, which has offered IntelliSense, CSS preview and compilation. But much of this functionality is being removed from the plugin because the broader developer community is moving to build tools like Gulp (and its peers, such as Grunt), and Web Essentials wants us to use them too.

But how do we go about doing that?

Setting up Gulp in Visual Studio 2013


The first step is to install Node.js, the JavaScript runtime that Gulp uses to run its tasks. Importantly, you’ll need v0.10.28 (there’s an x64 folder in there, which most of us will want), as more recent versions are not yet supported by the Gulp-Sass compiler plugin. It looks like this will be fixed very soon though, which is great!

For Sass compilation we’ll need to install two Node packages, so open a command prompt and run the following npm (Node Package Manager) commands:

  • npm install gulp -g
  • npm install gulp-sass -g

The -g is a flag to install globally, so you can use the modules anywhere.

Now we can set up Gulp for your solution. Create a file called package.json in the root of your solution and add in the following information:

{
    "name": "Project Name",
    "version": "1.0.0",
    "description": "Project Description",
    "devDependencies": {
        "gulp": "3.8.11",
        "gulp-sass": "1.3.2"
    }
}

You’ll need to make sure those gulp and gulp-sass version numbers match those you have installed.

This will allow any other developer to come along and run npm install on this directory and Node will download and install everything they need!

Give Gulp some tasks

Create a gulpfile.js file in your solution and add the following JavaScript to get started:

var gulp = require('gulp');
var sass = require('gulp-sass');

gulp.task('sass-compile', function () {
    // Find our Sass stylesheets, compile them and output the CSS (output folder assumed here – adjust to suit your project)
    return gulp.src('./Content/*.scss')
        .pipe(sass())
        .pipe(gulp.dest('./Content'));
});

gulp.task('watch-sass', function () {
    gulp.watch('./Content/*.scss', ['sass-compile']);
});

Here we have three sections:

  1. First we create gulp and sass variables, using require to load the modules so they’re good to go.
  2. Then we create a Gulp task called ‘sass-compile’ and tell it to find our Sass stylesheets, compile them and output the result to our directory.
  3. Finally we create a ‘watch-sass’ task that watches all Sass files in our folder and runs the sass-compile task whenever anything changes.

Now, if we ran gulp sass-compile at a command prompt in our project, it would compile our Sass. If we ran gulp watch-sass it would start watching. But running commands all the time isn’t much fun…

Task Runner Explorer

The answer is to install the Task Runner Explorer Visual Studio Extension, which will give us some nice UI for running and automating Gulp tasks from within Visual Studio.

Once you’ve installed it (and restarted VS), right-clicking on your gulpfile will show a new ‘Task Runner Explorer’ option, which opens a panel showing a list of your Gulp tasks on the left and some binding options (or the output of whatever has been run) on the right.

Task Runner screenshot

If you double-click on sass-compile it will run and compile the Sass. If you double-click on watch-sass it will start watching. Much better!

Task Runner execution results screenshot

The icing on the cake is that you can right-click on one of your tasks and bind it to one of the following Visual Studio events:

  • Before Build
  • After Build
  • Clean
  • Solution Open

Right-click on your watch-sass task and bind it to Solution Open. Now, every time you open your solution, Gulp will automatically start watching your Sass files and compiling them whenever it needs to – perfect!

What else can Gulp do?

Gulp doesn’t just do Sass compilation; it’s hugely powerful. Here are just a few of the tasks Gulp can help you with:

  • Minify your CSS, JavaScript, HTML or images
  • Combine your CSS files
  • Generate CSS sprites
  • Generate favicons
  • Rename files
  • Check if files have changed or not
  • Output messages/errors
  • Add CSS vendor prefixes
  • Check code quality with hinting

More good news: Visual Studio 2015 will come with first-class support for Gulp as standard, so these tools will be built right into the IDE.