The key to building innovation

Jeff Gothelf is the author of Lean UX, a book that plugs into the theory of The Lean Startup and looks at how User Experience design processes fit in with the Lean approach.

Jeff was interviewed by Communitech News and described what he believes is the key to building an innovative product or company:

Talk to your customers.

I mean, really have the humility to listen to your customers.

Learn what it is that they love about your product; learn what it is that they hate about your product; learn about what it is that they hate about your competitor’s product; learn about what they love about your competitor’s product.

Listen to them and their needs. Figure out what job they are hiring your product to do, and then make your product do that better than anyone else’s. You will never know that unless you leave the safety and comfort of your office and go out and talk to your customers.

That really inspires me.

I recommend the rest of the interview too, where Jeff discusses the importance of cross-functional teams, mistakes to avoid, and how to remain competitive as you scale your product.

ASP.NET Web API on Linux and Apache with Mono

We had a requirement at OCC to build a RESTful web service that would be able to run on both Windows and Linux servers. Someone suggested we give Mono a look, to see if we could use the ASP.NET Web API framework served up by the Apache web server on Linux. That sounded great; we have a lot of experience with the .NET Framework and a lot of experience with Linux, but so far had not brought the two together.

Banner showing Mono, .NET, ASP.NET, Apache and Linux logos

Mono is an implementation of the .NET framework that can be used to build applications that run on Linux and OS X in addition to Microsoft Windows. There are further details on Wikipedia.

In the past, some concerns have been expressed regarding licensing and software patents, and their possible impact on Mono and the applications that depend upon it. The Mono team have addressed these concerns, and recently (April 2014) Microsoft released Roslyn under the Apache 2 license and committed to working closely with the Xamarin team, whose product is based around Mono, which may calm concerns further.

Getting Started on Linux

If you are lucky, your system will have a package available to install Mono; if so, you should use it. At the time I was not so lucky, so I had to get the latest stable source and build that.
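A quick way to decide which route applies is to check whether a usable Mono is already present. Package names vary by distribution, so this is only a sketch:

```shell
# Check for an existing Mono runtime before resorting to a source build
if command -v mono >/dev/null 2>&1; then
    mono --version | head -n 1
else
    echo "mono not found; build from source"
fi
```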

In Practice

Because the Mono team are always catching up with Microsoft’s own development, the Mono framework does not fully implement the latest .NET Framework. This can lead to some headaches: where there is a partial implementation, some methods of a class may not be available under Mono. Often, though, these issues can be worked around.

However, Mono is under very active development and generally manages to keep up surprisingly well.


On the plus side:

  • You can use Visual Studio for the bulk of development.
  • Once something builds and runs on Windows, it runs very reliably on Mono. I’ve only been looking at web applications, so I couldn’t comment on a desktop application with a GUI.


On the minus side:

  • NuGet has limited usefulness with Mono. I had to get the necessary binary files and manage a Libraries directory within the project. Not a big issue in my case, but it could be if large numbers of external libraries are required.
  • You have to maintain a separate build on the Linux system. I used makefiles, which was not too onerous; MonoDevelop or Eclipse might remove the need, but it did not seem enough of a problem to be worth investigating.

Building Mono

Building Mono from source is pretty straightforward but there are a few gotchas.

First it is necessary to make sure a basic development environment is in place; on a CentOS system that is something along the lines of:

    yum -y install bison glib2 freetype freetype-devel \
        fontconfig fontconfig-devel libpng libpng-devel libX11 \
        libX11-devel glib2-devel libgdi* libexif glibc-devel \
        urw-fonts java unzip gcc gcc-c++ automake autoconf \
        libtool wget giflib-devel libjpeg-devel libtiff-devel \
        libexif-devel httpd-devel

Source Code

Get the 2.10 source releases of libgdiplus, mod_mono and XSP – at the time of writing the stable build of Mono is at version 3.2.3. It does not appear to be important for these components to have the same version as the main Mono release.

Unpack each in a local directory, then configure and build in the following order:


    cd libgdiplus-2.10
    ./configure --prefix=/opt/mono
    make
    sudo make install


    cd mono-3.2.3
    ./configure --prefix=/opt/mono --with-libgdiplus=/opt/mono
    make
    sudo make install

Add the /opt/mono/bin path to the system PATH and set PKG_CONFIG_PATH to /opt/mono/lib/pkgconfig, both in /etc/profile (do not forget to export the variables). These variables must be set before building XSP: it needs the C# compiler, and without them the configure part of the build will fail.
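For example, the lines added to /etc/profile might look like this (assuming the /opt/mono prefix used above):

```shell
# Appended to /etc/profile so the Mono toolchain and pkg-config data are found
export PATH="/opt/mono/bin:$PATH"
export PKG_CONFIG_PATH="/opt/mono/lib/pkgconfig"
```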


    cd xsp-2.10
    ./configure --prefix=/opt/mono
    make
    sudo make install


    cd mod_mono-2.10
    ./configure --prefix=/opt/mono --with-mono-prefix=/opt/mono
    make
    sudo make install
    sudo mv /etc/httpd/conf/mod_mono.conf /etc/httpd/conf.d/


It will probably be necessary to add the path to Mono’s shared libraries to the system-wide library path. This can be done either by adding the path to /etc/ or, if the /etc/ directory exists, by adding a new file there (I suggest following the naming convention used by other files in that directory) containing the path to the Mono shared libraries – these will be at /opt/mono/lib. Once this has been done, run the ldconfig command as root to update the system.
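On glibc-based systems, ldconfig reads extra library paths from files under /etc/ld.so.conf.d, so one way to register Mono’s libraries (the file name mono.conf is my own choice, not from the original setup) is:

```shell
# Register Mono's shared libraries with the dynamic linker
echo "/opt/mono/lib" | sudo tee /etc/ld.so.conf.d/mono.conf
sudo ldconfig
```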

After building and installing, check the installation – for example, run mono --version to confirm the runtime is on the PATH.


Making .NET 4.5 work

When building from source there is a problem when running applications that require the .NET Framework 4.5 libraries. The xsp4 and mod_mono shell scripts that are executed (located in the /opt/mono/bin directory) refer to executables in the /opt/mono/lib/mono/4.0 directory. Typically the executables themselves are fine, but they reference the 4.0 libraries, which can be missing some of the newer features. This can result in problems of the form:

    Exception caught during reading the configuration file:
    System.MissingMethodException: Method not found: blah blah blah
      at System.Configuration.ClientConfigurationSystem.System..... yack yack

To fix this first make symbolic links in the 4.5 directory to the 4.0 files:

    ln -s /opt/mono/lib/mono/4.0/xsp4.exe /opt/mono/lib/mono/4.5/xsp4.exe
    ln -s /opt/mono/lib/mono/4.0/mod-mono-server4.exe \
        /opt/mono/lib/mono/4.5/mod-mono-server4.exe

Then edit /opt/mono/bin/xsp4 and /opt/mono/bin/mod-mono-server4 to reference the symbolic links.

Fixing errors caused by colons in the virtual path name

In our application the resources managed by the RESTful interface include the colon ‘:’ character. There appears to be a bug that creeps out when ASP.NET applications run in sub-directories: the static initialisation in System.Web.VirtualPathUtility fails to read the Web.config system.web/monoSettings verificationCompatibility="1" attribute. We fixed this by setting the monoSettingsVerifyCompatibility member variable to false; otherwise errors are generated when there is a colon in a virtual path name.

Configuring Apache


The Apache mod for Mono passes requests to mod-mono-server, which is able to support multiple ASP.NET processes.

With the above completed, restart the Apache web server and verify that mod_mono has been picked up:

    httpd -M

You can also inspect the error log after a restart.


Mono’s support for ASP.NET under Apache uses a simple module which delegates requests to mod-mono-server. The MonoServerPath setting in httpd.conf specifies where the Mono server is for each location:

    MonoServerPath default "/opt/mono/bin/mod-mono-server4" 

This configures Mono for the default path, which for a standard Apache configuration will be /var/www/html. It is also necessary to configure the application and handler:

    MonoApplications "/:/var/www/html"

    <Location "/">
        Allow from all
        Order allow,deny
        SetHandler mono
    </Location>

In addition, the following options can be set:

    MonoSetEnv default MONO_IOMAP=all
    MonoDebug default true

Restart the server and check the error log file.

If other locations need to be configured, much the same needs to be repeated. For example, if a /test application were to be created it would be configured as:

    Alias /test "/var/www/test"
    MonoServerPath test "/opt/mono/bin/mod-mono-server4"
    AddMonoApplications test "/test:/var/www/test"

    <Location "/test">
        Allow from all
        Order allow,deny
        MonoSetServerAlias test
        SetHandler mono
    </Location>

Other Directives

It is recommended to disable KeepAlive for performance reasons or at least restrict the time-out to 2 seconds.

    KeepAlive Off

or, to allow keep-alive but restrict the time-out:

    KeepAliveTimeout 2

The CentOS installation of Apache web server sets the name for the log files as access_log and error_log; you may want to have the more conventional .log file extension.
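If you do change them, the relevant directives are the standard Apache ErrorLog and CustomLog ones; the paths below (relative to ServerRoot) are illustrative:

```apache
# Illustrative httpd.conf log directives
ErrorLog logs/error.log
CustomLog logs/access.log combined
```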

Configuration Tool

The Mono site has a handy online tool that can help with setting up a basic configuration for either a virtual host or an application.

In conclusion

Building a RESTful ASP.NET Web API with Mono, to run on Windows and Linux servers, was pretty straightforward with only a few problems on the way.

CuPiD demo at Said Business School

Reynold Greenlaw and Andy Muddiman attended the Oxford Startups demo night at the Oxford Launchpad in the Said Business School on 15th May, where they demoed the GaitAssist smartphone app that has been developed for the CuPiD project. They demoed it pretty much continuously at our very busy table to the many interested attendees, and our communications manager Janine Smith joined in to lend a hand.

As the user walks, the app continuously compares the gait of a user with Parkinson’s disease against an “optimal gait” calibrated with a clinician. Based on an evaluation of the specific patient, the most important gait parameters (e.g. step length, cadence) are set for future training sessions. The app gives the user feedback on how well they are doing, using a technique to help Parkinson’s gait called audio cueing.

Andy and Reynold demoing the app

Data on gait performance is visible to the clinician who can then adjust the exercise remotely forming a feedback loop between the user and the doctor. You can take a look at the demo itself on this video of Audio Bio-Feedback in action.

This event was organized by the Oxford Academic Health Science Network which brings together clinicians, academics and the public to promote best health in the region.

The Oxford Launchpad was launched in February 2014. It is a new space for entrepreneurial activity housed within the Said Business School, University of Oxford.

Finding time to think

A personal blog post from our Director of Consultancy Projects

As well as delivering products, OCC has a team that specialises in custom software development; they are behind the wide variety of case studies on our website. This combination of teams working on custom software and product development & support is, I think, unique.

Once a year I take the custom development team out for a day to discuss how we might write even better software. This time we crossed the road to the Jam Factory for an open discussion and some prepared talks. The talks were:

  • A toolkit for establishing better experience for our users
  • The judicious use of Agile project management
  • A critical look at SOLID principles
  • Lessons on testing from the product development team

The result was a surprise to me. The team concluded that the best way to improve our performance was none of the above, but instead improving the office environment – nothing to do with software at all. The team wanted fewer interruptions, less multitasking and more time to quietly think through difficult technical problems.

I was surprised and really interested in the conclusion. Our office is already considered calm and quiet. So we’re going to try a few new things. We’re going to experiment with collaboration tools that allow people to raise a “Do not disturb” flag. We’re using instant messaging that reduces disruption, hosted within OCC to keep the data confidential. I’m encouraging staff to book rooms or work from home during intensive design phases. It’s going to be interesting to see how we get on.

Consultancy Away Day


Tom’s thoughts on AngularJS, TypeScript, SignalR, D3.js and Git

OCC DevCamp has been a great opportunity to put the day-job to one side and try out some new technologies. Here are some thoughts on how I got on with them.

D3, Git, TypeScript, AngularJS and SignalR logos

AngularJS


AngularJS is Google’s offering in the JavaScript application building framework arena. Having previously used Knockout I wanted to use a different framework for comparison. I’ve felt that Knockout isn’t especially well suited to large applications, and it seems to struggle in terms of performance when handling a large number of data bindings.

Angular comes with something of a learning curve (fortunately I shunted most of this on to Mel!) but after a week of use, it feels like a better choice for a larger application. The framework encourages writing code in a modular way, and it seems to be doing less magic in the background; surely good for performance. Data sizes in this application have been small, but there’s never been any indication that Angular has been slowing things down. More research would be possible if we choose to re-write one of our existing Knockout apps, but at first glance it doesn’t seem like that would be cost effective.

Recommendation: I would recommend using this again over Knockout. It would be good to see it put to work on a real project.

TypeScript


I’d been wanting to try out TypeScript for some time, since the prospect of writing JavaScript with straightforward classes, modules, interfaces, etc. and compile-time type checking was a big draw. Again, I found there to be something of a learning curve, but it has been well worth it. There have been plenty of occasions when compiling my TypeScript file revealed errors that, with vanilla JavaScript, I wouldn’t have found until running the application in a web browser; that adds up to a lot of time saved.

A couple of gotchas:

  1. You’ve got to keep remembering to build your solution in order to rebuild the associated JavaScript file. [Edit: it seems like it ought to be possible to get this working with compile-on-save, but this didn’t work out of the box for me]
  2. If working with other JavaScript libraries – and let’s face it you will be – you’ll need to remember to download the associated TypeScript definitions files or face compilation failures. Fortunately these are easy to find on NuGet, as long as you remember to go looking for them, e.g.:

Screenshot of loading the AngularJS TypeScript Definition

Recommendation: TypeScript is definitely worth using. You can start by just renaming your existing files with the .ts extension, and if it all gets too much you can simply drop back to plain old JavaScript.
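As a small illustration of the kind of error the compiler catches, here is a sketch using a made-up Poll type (the names are hypothetical, not taken from the actual DevCamp code):

```typescript
// Hypothetical types for illustration; not from the actual project.
interface Poll {
    question: string;
    votes: number[];
}

// Sum the votes for a poll; the parameter type is checked at compile time.
function totalVotes(poll: Poll): number {
    return poll.votes.reduce((sum, v) => sum + v, 0);
}

const poll: Poll = { question: "Favourite framework?", votes: [3, 5, 2] };
console.log(totalVotes(poll)); // prints 10

// The next line would fail at compile time rather than at runtime in a browser:
// totalVotes({ question: "Missing votes property" });
```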

Example: The best example of the benefits of writing well laid-out code with TypeScript and AngularJS together was when I wanted to drop Poll Graphs into three different web-pages served by our application. One had the main presentation and associated JavaScript, the other didn’t. The following code was sufficient to drop graphs into all three pages with no further work:

Angular code to add polls to the presentation

SignalR


SignalR is a library for real-time web functionality. In the best circumstances it will use a WebSocket between server and client, but will seamlessly fall-back to older client-server mechanisms if required.

This was very easy to get started with, and has been very powerful. The code in our Hub class, which receives messages and broadcasts responses, is very clear and concise. The documentation from Microsoft is surprisingly good. However, we’ve seen issues with connections being dropped and messages getting delayed, and there were problems along the way with the architecture of our application, which at times got into circular message->broadcast->message chains. That said, I invested very little time in trying to make the Hub backbone of our application robust.

Recommendation: I don’t think that any of the issues we’ve come across couldn’t be resolved given more development time and production servers. The application architecture needs to be right no matter which library we use for real-time calls. If I had the need to develop real-time interaction I’d give SignalR a chance: it’s a great starting point and builds in a lot of powerful features.

D3.js


D3.js is a JavaScript library for manipulating documents using data, with very powerful capabilities. We only scratched the surface of it, but it produced some very nice bar charts for us. Once I got my head around the syntax, changes to the graphs’ presentation and scaling were very easy to make; a sure sign of a well-thought-out library.

Screenshot of a D3.js poll chart

Recommendation: I’d definitely use this library again if I have client-side graphing needs. In fact, I just want an excuse to use it on something more complicated than a bar chart!

Git & GitHub

Distributed source control in its own right is definitely a good thing. The ability to pull incoming changes locally, handle them and commit a properly recorded merge action is valuable. I’d already scaled the learning curve for using a distributed source control system when trialling Mercurial some time ago. The concepts came back to me fairly quickly so I didn’t have to waste time on that this week.

I ended up using a few tools to work with our Git repositories:

  • GitHub for Windows client: The UI was very confusing and anything but the simplest functions required dropping out to the command line. Shiver. I quickly stopped trying to use it.
  • Visual Studio integration was ok for simple actions but often seemed to get completely stuck when conflicts needed to be resolved; perhaps there will be fixes for this in the future.
  • TortoiseGit: by the end of the week I was solely using TortoiseGit. Not only because it was very similar to the TortoiseSVN toolset I’m used to, but also because it worked: it did what it had to do, when it had to do it.

Recommendation: I’d still recommend using distributed source control over non-distributed in general. I’d use Tortoise clients if we decided to do this at work.

Chris’ thoughts on HTML5 Canvas, SignalR and Git

HTML5, SignalR and Git logos

HTML5 Canvas

HTML5 canvas is a powerful little container for graphics, of which I used only the smallest set of features. Other features to explore would be changing line colours/styles and drawing shapes. There are also other algorithms for drawing lines (Bézier/quadratic) that might lead to smoother lines. Problems arose due to different implementations between web browsers; Chrome seemed to fare the worst here, especially when reporting the position of events. Some of the other HTML5/CSS3 features I used were only available in the latest versions of browsers, and work-arounds would need to be found to allow more universal coverage.

The JavaScript used to create canvases was quite simple, so didn’t cause many headaches; most of the work went into working out which CSS properties to use, especially considering the odd transformations Reveal applies to slides. I found Firebug very useful for inspecting the code used to create various elements and for debugging JavaScript, although Firefox’s and Chrome’s built-in developer tools offered most of what I needed.

For the annotation itself, pointer events made life a lot simpler than the original process of defining different events for each input type. I would certainly go down this route again, especially since there’s a polyfill available to use them in older browsers.

UI Effects

I also looked at some uses of pure HTML/CSS to replace things often scripted; showing and hiding menus for example. Once established I think this will be a nicer way of doing things. Given more time I would have liked to look more into animations and other effects.

SignalR


SignalR made sending annotations to different screens easy, although because of how our communications classes were implemented in Angular, a nasty hack was needed to pass annotations through via events. Starting again, I would have attempted to do the drawing and receiving of pointer events using Angular itself.

Git


I found Git was not obvious in its use to start with, but by the end of the week I had a workflow sorted: Stash Save / Pull / Stash Pop / Merge / Push. This might not be the recommended route, but it worked for me. I still prefer Mercurial to both Git and SVN.

Andrew’s thoughts on Git, Xamarin and SignalR

My focus for DevCamp was building an Android app that would interact with the presentation web app in realtime. You can read the details of how we achieved that in my previous post Using SignalR in native Android and iOS apps. For that we made use of a toolkit called Xamarin, as well as the ASP.NET SignalR library, and Git for source control.

Xamarin, SignalR, Git, GitHub and Android logos

Xamarin


Xamarin is a toolkit that allows you to use C# to write code for native Android and iOS apps. I had a good experience with this overall. I hit a small bug where changing the namespace caused a crash, but found an issue already raised for this on the Xamarin issue tracker. Other than that I found using it to be very easy.

Writing the code in C# required no knowledge of Java, though you do still need to know about the UI interfaces for the various devices. However, learning these would be much less work than learning the entire library system and back-ends.

We used Portable Class Libraries to allow us to write code that targeted both the ASP.NET MVC project and the Xamarin Android project. This worked perfectly, and meant we could share class structure changes and business logic without needing to recode anything. This demonstrated one of the major potential gains of using this kind of system.

The second major gain is that while the UI development needs some experience with each of the platforms, the back end can be written by existing C# developers without them having to learn additional skills. This could be a major saving even if only developing for a single platform.

I didn’t get to try adding an iOS front end to the application as this requires a Mac build server and an annual development fee to Apple (submitting to the Windows Store also requires an annual development fee). This isn’t a major drawback since these would be required for native development without the use of Xamarin, but it would be nice if the requirement for a Mac could be removed.

I’d like to try this at some point in the future to explore how well the multi-platform development works.

The only caveat I have is that the Xamarin license cost is high (around $999 per developer per platform per year). This would likely be recoverable for a medium sized project in the saved developer hours, but could cause support issues if we were not doing enough work in the area to keep at least one subscription running all the time.

Conclusion: Recommended

Notes on Android Development

Android VMs are very slow and getting a physical device ready for debugging can be a bit of a trial, needing various device drivers and options changing. Once it is set up it tends to run smoothly though.

Writing the activities for an Android UI requires some learning and does not translate directly from Windows/web UI design.

SignalR


SignalR is a real-time communication library for .NET applications. I found it easy to use, and the portable build worked fine on Android (with Xamarin) with little additional effort.

Writing the SignalR calls was straight forward. However, as with any multi-client system, serious consideration about the communication messages being transferred is required. We initially created a system where messages could end up triggering other messages that cascaded into a loop. Careful design would be able to eliminate these occasions, but they’re easy to miss.

Conclusion: Recommended

Git


We decided to use Git for our source control during DevCamp. It’s a distributed system, which means working with local repositories and pushing to / pulling from others. We use SVN in our usual day-to-day work and are pretty happy with it for the most part, but we were interested in trialling what’s becoming the go-to system for many developers out there.

GitHub


GitHub is the most popular hosted solution for Git projects and seems a decent repository for code. If I wanted a shared repository I’d have no issue using this. It allows access via SVN as well (though I didn’t try this), so this would allow continued use of SVN if required. Recommended.

Git GUIs

Visual Studio Tools for Git – I had a few issues with this. I didn’t find that it worked as well as the other tools, and had major issues with authentication. It didn’t add much over the other Git access methods. Not recommended.

TortoiseGit – This was a great interface to Git, though this may just be because I’m already familiar with TortoiseSVN. Recommended if you like TortoiseSVN.


I’m unsure about this. I like the local commits and branches, but I had a number of cases where local changes didn’t seem to make it into the shared trunk: I could see the changes in my local stream, but when the merge occurred they were ignored. This may be down to misunderstandings on my part, but I didn’t feel it made the system reliable enough for production use.

Conclusion: Uncertain – further investigation needed

Mel’s thoughts on SignalR, AngularJS and TypeScript

I spent most of the DevCamp week working on the client applications: the presentation itself (minus the annotations and the graphs), the secondary screen and the audience view. So the new technologies I spent most of my time on were AngularJS and TypeScript, with a little time spent on SignalR.

Angular, SignalR and TypeScript logos

SignalR


I didn’t spend much time delving into SignalR, mostly because I didn’t need to. The code to set up a connection between clients and server is very concise and easy to understand. Tom was doing most of the work on this, so it’s possible there was more complex stuff I missed, but in the bits I worked on, the main problem was figuring out how to do it properly in Angular. Without the Angular, it only took a few straightforward lines and it just worked.

We did get some intermittent problems with dropped connections – we were focusing on getting a prototype together rather than putting in a lot of error handling. It would be interesting to see how easy it is to get proper error handling client side and server side, considering the simplicity without it.

I’ve recently recommended it for a different project and I am happy to be using it again.

AngularJS


AngularJS, however, I’m still undecided on. It certainly seems very robust and full-featured – I was impressed by that. I also liked the modularisation, and how easy the data binding was. My problem with it was the learning curve.

I had previously used KnockoutJS quite heavily (a lighter-weight MVVM library which also offers data binding), and that was something I could just layer over my usual coding style and only use when I had to. To use Angular properly, I needed to completely change how I structured my code, which slowed me down considerably at the beginning. I also wasn’t sure how best to integrate it with existing non-Angular code. (Unlike Typescript, where half of us were using it and half not, and it all just fitted together).

It’s hard for me to judge how good it is until I’ve spent some more time using it, but it worked very well once I knew what I was meant to be doing. It’s probably a better library than Knockout, but certainly one that you have to invest a lot more time into at the beginning.

TypeScript


TypeScript was my favourite of the new technologies I tried out. It was exceptionally simple to start using – I installed the software, downloaded some type definitions for the libraries I was using, and then it just worked.

Annotating with types takes some getting used to, but is definitely worth it – using TypeScript, most of my typos or errors get caught at compile time with useful error messages. Using plain JavaScript, I might not spot them until runtime, and then have to debug why it isn’t working as expected. It saved a lot of time.

The only irritation was having to recompile the TypeScript after making changes instead of just refreshing the browser, although the TypeScript editor in Visual Studio does support compilation on save. Having modules and classes was also very nice – I suspect it would have been even more useful if I wasn’t using Angular, which implements its own modularisation.

I’ll definitely be using it in future projects.

ContrOCC Hackday III

We’ve already made it to the third of our successful product hackdays, giving our developers a day to work on tweaks, gripes, improvements, or whole new features of their choosing and then sharing those with the rest of the team.

For all the thinking behind our product hackdays, have a look at our summary of the first ContrOCC hackday.

The day’s projects

Alan – Test scripting improvements

I did some prototype work on a new version of our “TPA” test scripting language. I wanted to store a representation of the test objects and commands in the application’s configuration tables and then begin using that stored knowledge to generate the test SQL dynamically rather than having lengthy stored procedures.

Chris G – Gemini Upgrade

I looked into upgrading our task-tracking software Gemini to the latest version. The new version seemed to solve most of my main bugbears – editing worked in any browser, and two people editing at the same time was handled better (though not as well as in Bugzilla). It also allowed integration with SharePoint and SVN, as well as Windows authentication.

Upgrading straight to the new version was not possible, and unfortunately Countersoft have redesigned their website so all links to download the intermediate version redirect to the homepage. After contacting their support I was able to get a copy. The upgrade itself was then not too difficult; there were ~60 tasks that were orphaned (not part of a project) and had to be excluded from the migration, but these weren’t accessible through the old front end anyway.

The further upgrade to the latest version appeared to go without a hitch, but when viewing the front end none of the links worked, and going directly to issues displayed only the description, not comments or additional fields. I suspect this is because of the introduction of Project Templates, which would need to be configured. I recommend that interested parties perform a more serious evaluation of Gemini v6 when it’s released, decide whether we do want to upgrade, and devote some time to getting it to work.

Chris H – Xamarin Mobile App

My aim for the day was to try out the Xamarin cross-platform mobile development toolkit, which supports Android, iOS and Windows development using C#. To keep things manageable I focused on Android development only, and decided to start on an app to collect homecare actuals using a mobile phone.

The experience was not altogether positive. Initially I started from the Xamarin sample field service app, but after two hours of trying to figure out the dependencies I was getting nowhere and decided to stop. The various build error messages didn’t clearly point me in the direction of what was going wrong, and Googling them didn’t produce any results.

Following that, I began building an app from scratch, cribbing code from the hello world and tutorial projects provided by Xamarin. These were of high quality and well documented, and I feel that, studied in more depth, they would be an excellent way to learn more about the system. However, my progress was fairly slow, and this wasn’t helped by the slowness of the Android emulator (apparently using a real device is much better) or the flakiness of its connection to the debugger.

Screenshot of Chris H's demo Xamarin app

Chris H’s demo Xamarin app

Chris P – Improve Coverage of Smoke Tests

ContrOCC currently includes some smoke tests, run via a debug option, which programmatically display screens to flush out coding errors, especially compilation errors in the SQL stored procedures used to select data, since such errors are only detected at run time.

I have worked on extending the coverage of these smoke tests so that:

  1. Where tabbed controls are used, each tab is displayed.  This ensures that any data associated with lists on these tabs gets loaded, since for performance reasons loading is delayed until the tab is displayed.
  2. Where lists are used, ensure that:
    1. Where no population has been performed (e.g. where for performance reasons population is not done until the user enters filter criteria and selects Apply), the list gets populated.
    2. The popup form associated with the list gets displayed, to extend coverage of the SQL stored procedures which are executed.
    3. Lists and tabs on displayed popup forms are recursively processed, as for the main content type screens.
  3. All dictionary lists and their associated popup forms are displayed.  Currently the smoke test just displays one dictionary.

Generally I feel that the changes are relatively robust and could be included in the main development with minimal risk. The main issue would be making sure the test database is updated to include data for all lists, to prevent a load of warnings about tests which cannot currently be run.

Julian – Performance Dashboard Reports

These reports are designed to help users identify performance bottlenecks on a SQL Server. Key points:

  • It’s a free download from Microsoft, consisting of a setup script (“setup.sql” which has to be run on the server), a fairly short help file (PerfDash.chm) and ~20 Report Definition Files.
    • Install on the PC from which you’re running SQL Server Management Studio (SSMS).
  • There are versions for 2005, 2008 and 2012.
  • They simply make use of the Dynamic Management Views (DMVs) that were introduced in SQL Server 2005.
    • Therefore they are lightweight in operation.
    • They use the Custom Reports functionality – Reporting Services is not involved / required.
    • ContrOCC uses some of these in the Performance Information framework.
  • They report on all databases on the server.
    • This might be a problem if we wanted to run a dashboard at an LA.
    • It also means that information relating to ContrOCC databases may be missing / hard to find (as only a certain amount of data is retained).

Once you’ve got the download, running them is very simple:

Screenshots of the steps to create a Performance Dashboard report

Steps to create a Performance Dashboard report

From the dashboard, you can navigate to a number of other screens which show additional details, such as those relating to CPU usage, IO stats, missing indexes etc.


They provide a GUI for displaying information from DMVs – details we are already gathering using the Performance Information menu item, such as those relating to missing indexes etc.

Pros:

  • Nice, easy to use GUI incorporating drill down.
  • Gives the impression of real time monitoring.
  • Good for rapid troubleshooting of immediate problems.

Cons:

  • Not restricted to a “database of interest.”
  • Unlikely that clients would allow us to install on live servers, no matter how lightweight, especially if hosting non-OCC databases.

Julian – Extended Events

These became available in SQL 2008 and provide a more granular level of monitoring than is available with tools such as SQL Server Profiler (which they are probably intended to replace). Like the Performance Dashboard Reports, and unlike SQL Server Profiler’s GUI, they have a low performance impact on the server. In 2008, there was no associated GUI and information was returned as XML. However, by SQL 2012, SSMS included a GUI component for working with Extended Events and viewing the results, which makes everything a great deal simpler:

Steps for working with Extended Events in SQL Server Management Studio

Working with Extended Events in SQL Server Management Studio

The general idea is that you start monitoring the events you’re interested in, leave it running for a while and then have a look at the results. There are about 300 events to choose from, and results can be saved either to a file on the server (for large sets of historical data) or to a ring buffer (where older events are overwritten).

Default Session

The system_health session is set up and enabled by default (as in the screenshot above). It reports on 17 events, including errors (mainly those relating to memory problems), waits and deadlocks. If you Watch Live Data, you get a list of events and, if you select one, its details. Depending on the type of event, you’ll get a different set of details. For the deadlock one (highlighted above), you get a graphical display of what happened!

Screenshot showing the details of a deadlock event

Viewing the details of a deadlock event


We could use this new functionality at any client site running SQL 2008 or later if we wanted to monitor what’s going on in a variety of areas. Ideally, they would be on SQL 2012. There’s obviously an overlap with SQL Server Profiler and also Performance Monitor. Tools such as these may be useful if we need evidence to help inform clients about issues with the specification of their servers, for example, by using them to show that they need more memory, improved configuration, etc.

Maciej – Optimising UDFs using CLR code

My plan was to analyse a number of User Defined Functions (UDFs) that cause performance issues and replace them with CLR-based code. I played with one UDF in particular, trying to find out which part of it takes the most time. It turned out that this UDF performs some data access which could be rewritten with the CLR. I then spent the rest of the time analysing how the UDF works in detail and forming a general idea of how to rewrite it in C#.

Unfortunately, I did not have enough time to implement the actual code. I’m going to do that in my spare time to check whether the idea of CLR replacement pays off in terms of performance.

Mark – jQuery CLNDR plugin

I investigated the jQuery CLNDR plugin for creating custom calendar controls on web sites. Unlike other calendar controls, CLNDR uses a developer-provided UI template rather than generating markup, allowing much more customisable displays. I wanted to see if it could be used for improving custom calendars in our web app, in particular the week picker and the timetabled actuals display.

Notes and observations

  • It has three dependencies: jQuery 1.7+, moment.js (a JavaScript date-handling library) and underscore.js (a utilities library that includes a default template implementation).
  • Documentation is somewhat lacking, particularly in explaining how to use templates. There is a hook to use your own template renderer if you want.
  • The default template markup uses <%=…%> which causes problems on ASPX pages. Different markup tags can be used.
  • The existing methods are aimed at a month display only. The basic method returns a one-month array of day objects which are iterated through to create the individual day cells in the calendar. All navigation events assume one month or one year forward or back.
  • Calendar events are provided in a JSON array and are completely configurable. Event attributes can be used to set event div properties such as CSS class (see example below). Events appear in a collection associated with each day object. There is also an “all events in the current month” collection.
  • Event start and end times are not handled by default, but judicious use of margins and template markup should allow a more timeline-based display. Multi-day events are possible, so for example a suspension could be displayed.
  • There is a basic (JavaScript) event handling model that captures click events on navigation controls and calendar cells. Other event handlers can be bound in ready and doneRendering events.
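Based on the notes above, a CLNDR set-up might look something like the following sketch. The selector, template id and event fields here are illustrative, not from our app; the option names (template, events, clickEvents) are from CLNDR’s documentation. This is browser-only wiring and assumes jQuery, moment.js and underscore.js are loaded.

```javascript
// Events are a plain JSON array; arbitrary attributes (here cssClass)
// can be used in the template to style each event's div.
var events = [
  { date: '2014-06-02', title: 'Home visit', cssClass: 'visit' },
  { date: '2014-06-04', title: 'Review',     cssClass: 'review' }
];

$('#calendar').clndr({
  template: $('#calendar-template').html(), // developer-provided markup
  events: events,
  clickEvents: {
    click: function (target) {
      // target.events holds the events for the clicked day cell
    }
    // Navigation callbacks (nextMonth, previousMonth, ...) are
    // month-based – the week-navigation gap noted below.
  }
});
```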


The lack of week navigation means that it is not developed enough for our purposes at present, but it is worth keeping an eye on.

Screenshot showing an example of the CLNDR plugin in use

An example of the CLNDR plugin in use

Matthew – Investigation into self-updating/installing client

The original intention was to create an example of a self-installing or self-updating ContrOCC Client which would in theory enable more frequent C# releases by providing simpler deployment. However, upon investigation it seemed that this was not especially valuable as the ‘simplest’ method for deploying ContrOCC clients via Group Policy is already used at some sites.

Instead I analysed the various reasons why clients might be reluctant to deploy new software updates, methods to address this reluctance, and different technologies and approaches to allow ContrOCC to ‘Update’ itself if we chose to do so.

I also ran a less-than-successful test with NetSparkle, a library that promised a very simple automatic update mechanism but turned out to be non-functional.

Mike – Automatically generating database schema metadata XML

The ContrOCC metadata is a large XML file that contains a representation of the database schema, which developers keep in sync by hand. Currently the only way this can be done is by editing the XML text directly. I have created a tool which reads the schema details from the database and updates the metadata XML automatically, prompting for table/column description text where it is required but none exists. There are other uses this could be put to, such as faster schema checking, but I have not yet had the time to try them.

There are a number of command line options (some of which can probably be dropped); a command to merge with the metadata would be:

mdd --server=localhost --database=Controcc_Testing ^
--merge --tablename=T_Actual ^
--metadata-xml="C:\controcc\Metadata\ContrOCC Metadata.xml" ^
--output="Merged Metadata.xml"

The server setting defaults to localhost, so it can be omitted in this example, and if mdd is run within the structure of a ContrOCC checkout it will attempt to find the metadata file itself.  There is also an --interactive option which prompts for descriptions where none are present.

Nathan – Script to show info about an import/export specification

I’ve written a SQL script that outputs useful information about the stored procedures and/or columns for one or more of ContrOCC’s import/export specifications. This information includes flattened details of the parameters used in the SP as well as usage counts.

Steph – Report Tool improvements

I finished off the improvements to the report review tool that I started in the last hackday. Instead of only the most recently used command being saved, the text box has now been changed to a drop-down list of the 5 most recently used commands. This is useful when a task involves updating more than one report, or for testing with different parameters. My attempts to intercept the paste event and strip out any carriage returns remain unsuccessful.

Screenshot of Steph's report tool drop down

Steph’s report tool drop down

Tanmaya – Evaluation of AutoIt and Visual Studio for ContrOCC User Interface testing

AutoIt is a freeware automation language for Windows. It has a BASIC-like structure and supports network protocols and Win32 DLLs. It comes with a Window Info tool which can be used to hover over a control/dialog to get its class name, ID, etc.

Advantages – Free, lightweight.

Disadvantages – It is time-consuming to learn a new language, find control IDs and script them. Maintenance is costly too: when the UI layout changes, we need to revisit and recompile the scripts to make them work.

Visual Studio test projects come as part of the Premium or Ultimate editions. To create one, select New Project > C# > Test > Coded UI Test Project.

It comes with a Coded UI Test Builder tool which lets you record actions, review them and automatically generate the test class. Development and testing with Coded UI is quick and easy. Multiple tests can be run in one click, and it produces reports just like the existing ContrOCC automated tests.

Summary – Considering all advantages, I would prefer going with Visual Studio instead of AutoIt.

Tom – Service Broker

I continued a previous project investigating Service Broker as a way to improve auditing performance.

Tomasz – Auto Comments

For my hackday project I tried writing a semi-automatic commenting tool. The idea behind it is that you don’t write comments until the final commit, when a simple application finds all the changes you made and points them out one by one in the changed files, navigating to the nearest comments/history section and filling in the TaskID, date and author automatically, leaving only the comment itself to the user.

I didn’t manage to complete the whole thing, but finished with something that can fetch revision changes based on revision number and date range, fetch file changes for selected revisions, and display the files and the diff between them, with navigation between change spots. What’s left is actually inserting the comments, which I’d like to follow up in another hackday or in my spare time.

Ulen – Getting Gemini to give “heads-up” on invalid tasks

A routine waste of time for a Responsible Developer is chasing up tasks with invalid information – such as missing Release Notes, or mismatched status information.

My task today was to write a routine that can be scheduled directly from Gemini (our task tracker) to email the developer and/or Task originator that something is amiss on the task, giving them a chance to fix things ahead of time.

In essence the job will run periodically throughout the day and check for tasks that have been recently updated. Each type of problem found can have a configured grace period before the email is sent (e.g. unauthorised tasks would notify the assignee as soon as possible, while a mismatched status/resolution would be more lenient, allowing up to an hour or two). It can then send a summary of all problems to the RD once per day.

This will involve a certain level of hard-coding to begin with but in time this process can be tweaked, further automated, and pertinent details held in Gemini in attributes against the Version.

Implementing an HTML5 Canvas screen overlay

Part of our Dev Camp project this year involves being able to annotate a presentation slide using a mouse, finger or stylus. We instantly looked to the HTML5 Canvas element to provide a bitmap drawing surface.

Chris' Canvas overlay being used for slide annotation

Chris’ Canvas overlay being used for slide annotation

Sizing and scaling the Canvas

When first trying to create a canvas that overlaid the screen I discovered that a canvas has two sets of height/width properties.

  • canvas.style.height/width – CSS attributes
  • canvas.height/width – DOM properties

The CSS attributes can take percentage values (100% in our case); the DOM properties cannot, and if you try you end up with a canvas of dimensions 0, 0. The trick is to put the canvas in a div and set the size with JavaScript.
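The wrapper-div trick can be sketched as a small helper that copies the container’s laid-out size into the canvas’s numeric DOM properties. This is a minimal sketch; the function name is illustrative, not from our code:

```javascript
// The containing div can be sized with CSS percentages (e.g. width: 100%);
// the canvas's numeric DOM properties are then set from its laid-out size.
function sizeCanvasToContainer(canvas, container) {
  // clientWidth/clientHeight are the container's pixel dimensions
  // after CSS layout has been applied.
  canvas.width = container.clientWidth;
  canvas.height = container.clientHeight;
  return canvas;
}
```

In the page you would call this once layout is known (e.g. on load), passing the real canvas element and its wrapper div.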

Scaling the canvas with the window can be done by setting the style.width and style.height in a resize event handler. This, though, puts the co-ordinates of the screen and those of the canvas out of sync, so lines don’t appear under where you draw them. Unfortunately, changing the DOM properties instead causes the canvas to clear itself, necessitating copying the canvas, resizing it, and copying the content back rescaled.

An easier solution was to fix the size of the canvas and only change the CSS attributes, translating between the screen co-ordinates and those of the canvas. This gives rescaling for free and will also make matters easier when displaying the annotations on multiple screens with different resolutions.
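With a fixed internal size, the only extra work is the co-ordinate translation. A minimal sketch (the function name and argument shapes are illustrative; rect is what getBoundingClientRect() returns for the CSS-scaled element):

```javascript
// Map a pointer position (clientX/clientY) into the canvas's fixed
// internal co-ordinate space, given the element's displayed geometry.
function toCanvasCoords(clientX, clientY, rect, canvasWidth, canvasHeight) {
  const scaleX = canvasWidth / rect.width;   // internal px per CSS px
  const scaleY = canvasHeight / rect.height;
  return {
    x: (clientX - rect.left) * scaleX,
    y: (clientY - rect.top) * scaleY
  };
}
```

In an event handler you would pass canvas.getBoundingClientRect() along with the fixed canvas.width and canvas.height.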

To make the canvas appear in front of everything else it was given a-really-big-number™ for its z-index.

Annotating with pointer events

Annotating on the canvas was done by adding event listeners for mousedown, mousemove, and mouseup. This worked well on traditional devices with a mouse but not at all on newer touch-orientated devices. Rather than duplicating effort and coding for both mouse events and touch events (and possibly others; pen for example), pointer events were used.

Earlier in the year we had a lunchtime mini-conf video of Jacob Rossi’s W3Conf talk ‘Pointing-forward’, where he spoke about the pointer events specification proposed to the W3C by Microsoft for handling hardware-agnostic pointer input. It is supported in IE10 with vendor prefixes and in IE11. Fortunately there’s a polyfill, hand.js, to enable support in other browsers.

One slight additional change required for touch devices is to disable the default behaviour of the events; dragging your finger over the device is usually mapped to scroll.
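The drawing logic itself reduces to a small state machine over pointerdown, pointermove and pointerup. Here is a sketch with the browser wiring kept to a comment; the event names follow the Pointer Events spec, but the helper and its argument shape are illustrative:

```javascript
// Returns handlers for the three pointer events; ctx is a canvas 2D
// context (or anything with the same beginPath/moveTo/lineTo/stroke API).
function createAnnotator(ctx) {
  let drawing = false;
  return {
    pointerdown(pt) {          // start a new stroke
      drawing = true;
      ctx.beginPath();
      ctx.moveTo(pt.x, pt.y);
    },
    pointermove(pt) {          // extend the stroke while the pointer is down
      if (!drawing) return;
      ctx.lineTo(pt.x, pt.y);
      ctx.stroke();
    },
    pointerup() {              // finish the stroke
      drawing = false;
    }
  };
}
// Browser wiring: for each event name, something like
//   canvas.addEventListener(name, e => {
//     e.preventDefault();     // stop touch devices scrolling
//     handlers[e.type]({ x: e.clientX, y: e.clientY });
//   });
```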

Hiding the canvas, and disabling it to allow interaction with the underlying content, was easily achieved using two CSS attributes: visibility (hidden/visible) and pointerEvents (none/auto).
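Those two properties give three useful states: hidden, visible but click-through, and fully annotating. A sketch of a hypothetical helper (the name is ours, not from the project) that sets them on the element’s style object:

```javascript
// visible: whether the annotations are shown at all;
// interactive: whether the canvas captures pointer input.
function setOverlayMode(style, visible, interactive) {
  style.visibility = visible ? 'visible' : 'hidden';
  style.pointerEvents = interactive ? 'auto' : 'none';
  return style;
}
```

For example, setOverlayMode(canvas.style, true, false) keeps the annotations visible while letting clicks fall through to the slide underneath.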

We ended up with a canvas that could be shown and hidden, overlaying the slide and allowing the user to annotate on top without disturbing the content underneath. We were also able to switch off annotation mode, so that you could interact with the slide, while keeping the annotation overlay visible.