EmerGent: Relevance of Social Media in Emergencies

We recently started a new project, EmerGent, a three-year European project researching the impact of social media in emergencies and its use by emergency services during a crisis.

Emergent logo

Whenever there’s a large fire, riot, earthquake or other crisis, a lot of information immediately appears on social media – some valuable, some completely erroneous. With our partners we are developing tools and techniques for mining and validating that content.

The EU-FP7 EmerGent project has received funding from the European Union’s Seventh Framework Programme for research, technological development and demonstration under grant agreement no 608352. Find out more about EmerGent and its partners on the project website.

Finding time to think

A personal blog post from our Director of Consultancy Projects

As well as delivering products, OCC has a team that specialises in custom software development; they are behind the wide variety of case studies on our website. This combination of teams working on custom software and product development & support is, I think, unique.

Once a year I take the custom development team out for a day to discuss how we might write even better software. This time we crossed the road to the Jam Factory for an open discussion and some prepared talks. The talks were:

  • A toolkit for establishing better experience for our users
  • The judicious use of Agile project management
  • A critical look at SOLID principles
  • Lessons on testing from the product development team

The result was a surprise to me. The team’s conclusion was that the best way to improve our performance was none of the above, but instead improving the office environment – nothing to do with software at all. The team wanted fewer interruptions, less multitasking and more time to quietly think through difficult technical problems.

I was surprised and really interested in the conclusion. Our office is already considered calm and quiet. So we’re going to try a few new things. We’re going to experiment with collaboration tools that allow people to raise a “Do not disturb” flag. We’re using instant messaging that reduces disruption, hosted within OCC to keep the data confidential. I’m encouraging staff to book rooms or work from home during intensive design phases. It’s going to be interesting to see how we get on.

Consultancy Away Day


The OCC Christmas party, in May, in Poland

For our 2013 Christmas do we decided we would visit our colleagues in Poland rather than go for the usual Christmas outing. As Poland tends to be rather chilly in December we opted to hold out until May and hope for some summery weather.

So, over a long weekend, we (along with friends and partners) enjoyed the delights of a boat trip, a visit to underground cellars where vodka is made and aged, a walk round a picturesque lake, two lovely dinners, one in a Pomeranian Castle, and also a lively paintball game.

We didn’t have any snow as might befit a Christmas trip – the first day was beautifully sunny and the second very wet, when we had to shelter under a bridge to eat our picnic.

OCC has been running a small office in Szczecin for many years and some of our Polish engineers are now experts in UK social care.

CuPiD Technical Workshop

We believe there are many organisations implementing systems that conform to the following pattern:

  • A patient is being supported at home.
  • They wear sensors and/or use a smartphone.
  • Sometimes there is a home unit that wirelessly connects to the sensors/smartphone.
  • The data is securely transferred to a server.
  • There is a browser based application by which a clinician accesses the data.
  • Sometimes the clinician can send data back to the patient via the home unit or smartphone.
  • There may be integration between the server and external systems such as EHR.
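To make the pattern concrete, here is a rough sketch of the kind of data model such a system might use, in TypeScript. Every name here is ours, invented for illustration – none are taken from CuPiD or any partner system:

```typescript
// Hypothetical types sketching the home-monitoring pattern above.
// All names are illustrative, not from the CuPiD codebase.

interface SensorReading {
  sensorId: string;
  takenAt: Date;
  value: number;            // e.g. an acceleration or cadence sample
}

interface PatientSession {
  patientId: string;
  readings: SensorReading[];
}

// A home unit might batch readings before securely uploading them.
function batchForUpload(session: PatientSession, maxBatch: number): SensorReading[][] {
  const batches: SensorReading[][] = [];
  for (let i = 0; i < session.readings.length; i += maxBatch) {
    batches.push(session.readings.slice(i, i + maxBatch));
  }
  return batches;
}
```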

We are holding a technical workshop at our offices in the centre of Oxford on 14th April, and we are inviting projects building similar systems to attend.

We will demonstrate the CuPiD system and exercises, and talk openly about what went well and what didn’t. We will also discuss our plans to release the telemedicine components as open source at the completion of the CuPiD project, which is nearly complete.

CuPiD Exergaming

The workshop will be an open discussion for a technical audience to exchange ideas and information. There will be time to talk about validating and exploiting telemedicine systems. Please note this is not a marketing or sales event, but an opportunity to network.

So far we have representatives from StrokeBack, CogWatch, Oxford University, Oxford Brookes Movement Science Group and Digital Health Oxford attending.

If you would like to come please contact info@oxfordcc.co.uk

eHealth and the Brain – ICT for Neuropsychiatric Health

Reynold Greenlaw, Director of Consultancy at OCC, recently presented at eHealth and the Brain – ICT for Neuropsychiatric Health, held in Brussels on 5th November 2013. His presentation, Telemedicine and eHealth for Neurology, focused on OCC’s involvement in the CuPiD project.

CuPiD is a three-year EU project run by an eight-member consortium led by the University of Bologna.

The CuPiD project is developing and field-testing home rehabilitation services for the major motor disabilities caused by Parkinson’s disease. OCC is responsible for transforming these services into telemedicine services, available in the home with remote supervision from a clinician. OCC is also responsible for integrating this system into Electronic Patient Records and systems for procuring health and social care services.

Top Tips in 10 Minutes for Software Engineers

Recently Reynold Greenlaw, our Director of Consultancy Projects, talked to 10MinutesWith about building a career as a Software Engineer. In their interview, Reynold covers the hardest and the best parts of working as a Software Engineer. He also talks about his own career path and gives tips on how to get started and what skills are needed.

10MinutesWith is an educational website, focusing on videos designed to help students and graduates understand different jobs and identify a career path.

Watch the video on our Careers page.

OCC is continually growing as a company, maintaining a steady growth rate of 15–20% per annum. This month we welcomed three new Software Engineers, all joining our successful products team. We believe people are a business’s greatest asset, so we put a lot of care into finding the best new colleagues and into making the company an enjoyable place to work and learn.

Find out about our products or see the range of our custom software and services we offer.

Tom’s thoughts on AngularJS, TypeScript, SignalR, D3.js and Git

OCC DevCamp has been a great opportunity to put the day-job to one side and try out some new technologies. Here are some thoughts on how I got on with them.

D3, Git, TypeScript, AngularJS and SignalR logos


AngularJS

AngularJS is Google’s offering in the JavaScript application framework arena. Having previously used Knockout, I wanted to use a different framework for comparison. I’ve felt that Knockout isn’t especially well suited to large applications, and it seems to struggle in terms of performance when handling a large number of data bindings.

Angular comes with something of a learning curve (fortunately I shunted most of this on to Mel!) but after a week of use, it feels like a better choice for a larger application. The framework encourages writing code in a modular way, and it seems to do less magic in the background; surely good for performance. Data sizes in this application have been small, but there’s never been any indication that Angular has been slowing things down. More research would be possible if we chose to rewrite one of our existing Knockout apps, but at first glance it doesn’t seem like that would be cost-effective.
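For context, AngularJS 1.x implements its data binding via dirty checking: each $digest cycle re-evaluates every watched expression and fires listeners for those that changed. A stripped-down sketch of the idea (our illustration, not Angular’s code):

```typescript
// Minimal dirty-checking loop in the style of AngularJS 1.x's $digest.
// Illustrative only – Angular's real implementation is far richer.

type Watcher = {
  get: () => any;                        // the watched expression
  last: any;                             // value seen on the previous pass
  onChange: (now: any, old: any) => void;
};

class Scope {
  private watchers: Watcher[] = [];

  watch(get: () => any, onChange: (now: any, old: any) => void): void {
    this.watchers.push({ get, last: undefined, onChange });
  }

  // Re-check every watcher until a full pass finds no changes
  // (AngularJS gives up after 10 passes and reports an error).
  digest(): void {
    let dirty = true;
    let passes = 0;
    while (dirty && passes++ < 10) {
      dirty = false;
      for (const w of this.watchers) {
        const now = w.get();
        if (now !== w.last) {
          w.onChange(now, w.last);
          w.last = now;
          dirty = true;
        }
      }
    }
  }
}
```

Each digest pass touches every watcher, so the cost of an update scales with the number of bindings on the page, which is why binding counts matter for performance in either framework.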

Recommendation: I would recommend using this again over Knockout. It would be good to see it put to work on a real project.


TypeScript

I’d been wanting to try out TypeScript for some time, since the prospect of writing JavaScript with straightforward classes, modules, interfaces, etc. and compile-time type checking was a big draw. Again, I found there to be something of a learning curve, but it has been well worth it. There have been plenty of occasions when compiling my TypeScript has revealed errors in code that I wouldn’t have found until running the application in a web browser if I’d been using vanilla JavaScript; that adds up to a lot of time saved.
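A small example of the kind of mistake the compiler catches (our own illustration, using plain tsc with no special setup):

```typescript
interface Poll {
  question: string;
  votes: number[];
}

function totalVotes(poll: Poll): number {
  return poll.votes.reduce((sum, v) => sum + v, 0);
}

const poll: Poll = { question: "Best framework?", votes: [3, 5, 2] };

// In plain JavaScript this typo would silently yield undefined at runtime;
// tsc rejects it at compile time instead:
//   totalVotes({ question: "oops", votse: [1] });
//   // error: 'votse' does not exist in type 'Poll'

console.log(totalVotes(poll)); // 10
```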

A couple of gotchas:

  1. You’ve got to keep remembering to build your solution in order to rebuild the associated JavaScript file. [Edit: it seems like it ought to be possible to get this working with compile-on-save, but this didn't work out of the box for me]
  2. If working with other JavaScript libraries – and let’s face it you will be – you’ll need to remember to download the associated TypeScript definitions files or face compilation failures. Fortunately these are easy to find on NuGet, as long as you remember to go looking for them, e.g.:

Screenshot of loading the AngularJS TypeScript Definition

Recommendation: TypeScript is definitely worth using. You can start by just renaming your existing files with the .ts extension, and if it all gets too much you can simply drop back to plain old JavaScript.

Example: The best example of the benefits of writing well laid-out code with TypeScript and AngularJS together was when I wanted to drop Poll Graphs into three different web-pages served by our application. One had the main presentation and associated JavaScript, the other didn’t. The following code was sufficient to drop graphs into all three pages with no further work:

Angular code to add polls to the presentation


SignalR

SignalR is a library for real-time web functionality. In the best circumstances it will use a WebSocket between server and client, but it will seamlessly fall back to older client-server mechanisms if required.
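That fallback is the key idea: the client tries the best transport first, then degrades to older mechanisms such as server-sent events or long polling. A hedged sketch of the negotiation logic, ours rather than SignalR’s (the real negotiation also handshakes with the server and probes browser capabilities):

```typescript
// Illustrative transport negotiation in the spirit of SignalR's fallback.
// Names and ordering are ours, not SignalR's actual implementation.

type Transport = "webSockets" | "serverSentEvents" | "longPolling";

const preferred: Transport[] = ["webSockets", "serverSentEvents", "longPolling"];

function negotiate(supported: Set<Transport>): Transport {
  for (const t of preferred) {
    if (supported.has(t)) return t;   // take the best transport both ends allow
  }
  throw new Error("no usable transport");
}
```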

This was very easy to get started with, and has been very powerful. The code in our Hub class, which receives messages and broadcasts responses, is very clear and concise. The documentation from Microsoft is surprisingly good. However, we’ve seen issues with connections being dropped and messages being delayed, and there were problems along the way with the architecture of our application, which at times got into circular message->broadcast->message chains. That said, I invested very little time in making the Hub backbone of our application robust.

Recommendation: I think all of the issues we came across could be resolved given more development time and production servers. The application architecture needs to be right no matter which library we use for real-time calls. If I had the need to develop real-time interaction I’d give SignalR a chance: it’s a great starting point and builds in a lot of powerful features.


D3.js

D3.js is a JavaScript library for manipulating documents using data, with very powerful capabilities. We only scratched the surface of it, but it produced some very nice bar charts for us. Once I got my head around the syntax, changes to the graphs’ presentation and scaling were very easy to make; a sure sign of a well thought out library.
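Much of that ease comes from D3’s scales, which map a data domain onto a pixel range. The idea in plain TypeScript (our sketch of the concept, not the d3 API itself):

```typescript
// The idea behind d3.scale.linear(), in plain TypeScript:
// map values in the data domain [d0, d1] onto pixels in the range [r0, r1].
function linearScale(d0: number, d1: number, r0: number, r1: number) {
  return (v: number) => r0 + ((v - d0) / (d1 - d0)) * (r1 - r0);
}

// Bar heights for a poll with at most 20 votes on a 200px-high chart:
const y = linearScale(0, 20, 0, 200);
const heights = [3, 5, 12].map(y);    // [30, 50, 120]
```

Because the scale is just a function, rescaling a whole chart means changing one line rather than touching every bar.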

Screenshot of a D3.js poll chart

Recommendation: I’d definitely use this library again if I have client-side graphing needs. In fact, I just want an excuse to use it on something more complicated than a bar chart!

Git & GitHub

Distributed source control in its own right is definitely a good thing. The ability to pull incoming changes locally, handle them and commit a properly recorded merge action is valuable. I’d already scaled the learning curve for using a distributed source control system when trialling Mercurial some time ago. The concepts came back to me fairly quickly so I didn’t have to waste time on that this week.

I ended up using a few tools to work with our Git repositories:

  • GitHub for Windows client: The UI was very confusing and anything but the simplest functions required dropping out to the command line. Shiver. I quickly stopped trying to use it.
  • Visual Studio integration was ok for simple actions but often seemed to get completely stuck when conflicts needed to be resolved; perhaps there will be fixes for this in the future.
  • TortoiseGit: by the end of the week I was solely using TortoiseGit. Not only because it was very similar to the TortoiseSVN toolset I’m used to, but also because it worked: it did what it had to do, when it had to do it.

Recommendation: I’d still recommend using distributed source control over non-distributed in general. I’d use Tortoise clients if we decided to do this at work.

Chris’ thoughts on HTML5 Canvas, SignalR and Git

HTML5, SignalR and Git logos

HTML5 Canvas

HTML5 canvas is a powerful little container for graphics, of which I only used the smallest set of features. Other features to explore would be changing line colours/styles and drawing shapes. There are also other algorithms for drawing lines (Bezier/quadratic curves) that might lead to smoother lines. Problems arose due to different implementations across web browsers; Chrome seemed to fare the worst here, especially when reporting the position of events. Some of the other features of HTML5/CSS3 I used were only available in the latest versions of browsers, and work-arounds would need to be found to allow more universal coverage.
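On smoother lines: one common canvas technique is to draw quadratic curves using each sampled point as the control point and the midpoints between samples as endpoints, which rounds off the corners of straight segments. A sketch of the idea – `moveTo`, `lineTo` and `quadraticCurveTo` are the standard canvas 2D API, while the interface and helper are ours so the logic can run outside a browser:

```typescript
// Smooth a polyline by curving through midpoints of successive samples.
// PathSink stands in for CanvasRenderingContext2D for illustration.
interface PathSink {
  moveTo(x: number, y: number): void;
  quadraticCurveTo(cx: number, cy: number, x: number, y: number): void;
  lineTo(x: number, y: number): void;
}

type Pt = { x: number; y: number };

function smoothPath(ctx: PathSink, pts: Pt[]): void {
  if (pts.length === 0) return;
  ctx.moveTo(pts[0].x, pts[0].y);
  if (pts.length < 3) {
    for (let i = 1; i < pts.length; i++) ctx.lineTo(pts[i].x, pts[i].y);
    return;
  }
  for (let i = 1; i < pts.length - 1; i++) {
    const mx = (pts[i].x + pts[i + 1].x) / 2;
    const my = (pts[i].y + pts[i + 1].y) / 2;
    // pts[i] is the control point; the curve ends at the midpoint.
    ctx.quadraticCurveTo(pts[i].x, pts[i].y, mx, my);
  }
  const last = pts[pts.length - 1];
  ctx.lineTo(last.x, last.y);
}
```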

The JavaScript used to create canvases was quite simple so didn’t cause many headaches; most of the work went into working out which CSS properties to use, especially considering the odd transformations Reveal applies to slides. I found Firebug very useful for inspecting the code used to create various elements and for debugging JavaScript, although Firefox’s and Chrome’s built-in developer tools offered most of what I needed.

For the annotation itself, pointer events made life a lot simpler than the original process of defining different events for each input type. I would certainly go down this route again, especially since there’s a polyfill available to use them in older browsers.

UI Effects

I also looked at some uses of pure HTML/CSS to replace things often scripted; showing and hiding menus for example. Once established I think this will be a nicer way of doing things. Given more time I would have liked to look more into animations and other effects.


SignalR

SignalR made sending annotations to different screens easy, although because of how we implemented our communications classes in Angular, a nasty hack was needed to send annotations to it via events. Starting again, I would have attempted to do the drawing and receiving of pointer events using Angular itself.


Git

I found Git was not obvious to use at first, but by the end of the week I had a workflow sorted: Stash Save / Pull / Stash Pop / Merge / Push. This might not be the recommended route, but it worked for me. I still prefer Mercurial to both this and SVN.

Andrew’s thoughts on Git, Xamarin and SignalR

My focus for DevCamp was building an Android app that would interact with the presentation web app in realtime. You can read the details of how we achieved that in my previous post Using SignalR in native Android and iOS apps. For that we made use of a toolkit called Xamarin, as well as the ASP.NET SignalR library, and Git for source control.

Xamarin, SignalR, Git, GitHub and Android logos


Xamarin

Xamarin is a toolkit that allows you to use C# to write code for native Android and iOS apps. I had a good experience with this overall. I hit a small bug where changing the namespace caused a crash, but found an issue already raised for this on the Xamarin issue tracker. Other than that I found using it to be very easy.

Writing the code in C# required no knowledge of Java, though you do still need to know about the UI interfaces for the various devices. However, learning these would be much less work than learning the entire library system and back-ends.

We used Portable Class Libraries to allow us to write code that targeted both the ASP.NET MVC project and the Xamarin Android project. This worked perfectly, and meant we could share class structure changes and business logic without needing to recode anything. This demonstrated one of the major potential gains of using this kind of system.

The second major gain is that while the UI development needs some experience with each of the platforms, the back end can be written by existing C# developers without them having to learn additional skills. This could be a major saving even if only developing for a single platform.

I didn’t get to try adding an iOS front end to the application as this requires a Mac build server and an annual development fee to Apple (submitting to the Windows Store also requires an annual development fee). This isn’t a major drawback since these would be required for native development without the use of Xamarin, but it would be nice if the requirement for a Mac could be removed.

I’d like to try this at some point in the future to explore how well the multi-platform development works.

The only caveat I have is that the Xamarin license cost is high (around $999 per developer per platform per year). This would likely be recoverable for a medium sized project in the saved developer hours, but could cause support issues if we were not doing enough work in the area to keep at least one subscription running all the time.

Conclusion: Recommended

Notes on Android Development

Android VMs are very slow and getting a physical device ready for debugging can be a bit of a trial, needing various device drivers and options changing. Once it is set up it tends to run smoothly though.

Writing the activities for an Android UI requires some learning and does not translate directly from Windows/web UI design.


SignalR

SignalR is a real-time communication library for .NET applications. I found it easy to use, and the portable build worked fine on Android (with Xamarin) with little additional effort.

Writing the SignalR calls was straightforward. However, as with any multi-client system, serious consideration of the communication messages being transferred is required. We initially created a system where messages could end up triggering other messages and cascading into a loop. Careful design would eliminate these cases, but they’re easy to miss.
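One simple defence against that kind of cascade is to make each client’s handler idempotent, so a message that doesn’t change its state is never re-broadcast. A stand-in sketch in TypeScript – this is our illustration of the problem and guard, not the SignalR API:

```typescript
// Stand-in for a hub round-trip, illustrating how message -> broadcast ->
// message loops arise and one way to break them. Not the SignalR API.

type Message = { body: string };

class Hub {
  private clients: Client[] = [];
  add(c: Client): void { this.clients.push(c); }
  // Deliver to every client except the sender.
  broadcast(m: Message, from: Client): void {
    for (const c of this.clients) {
      if (c !== from) c.receive(m);
    }
  }
}

class Client {
  state = "";
  constructor(private hub: Hub) { hub.add(this); }
  receive(m: Message): void {
    // Guard: only update and re-broadcast when the message actually
    // changes our state; without it, two echoing clients loop forever.
    if (m.body === this.state) return;
    this.state = m.body;
    this.hub.broadcast(m, this);
  }
}
```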

Conclusion: Recommended


Git

We decided to use Git for our source control during DevCamp. It’s a distributed system, which means working with local repositories and pushing to / pulling from others. We use SVN in our usual day-to-day work and are pretty happy with it for the most part, but we were interested in trialling what’s becoming the go-to system for many developers out there.


GitHub

GitHub is the most popular hosted solution for Git projects and seems a decent repository for code. If I wanted a shared repository I’d have no issue using this. It allows access via SVN as well (though I didn’t try this), so this would allow continued use of SVN if required. Recommended.

Git GUIs

Visual Studio Tools for Git – I had a few issues with this. I didn’t find that it worked well compared to the other tools, and I had major issues with authentication. I didn’t find it added much over the other Git access methods. Not recommended.

TortoiseGit – This was a great interface to Git, though this may just be because I’m already familiar with TortoiseSVN. Recommended if you like TortoiseSVN.


I’m unsure about Git itself. I like the local commits and branches, but I had a number of cases where local changes I made didn’t seem to get merged into the shared trunk: I could see the changes in my local stream, but when the merge occurred they were ignored. This may be down to misunderstandings on my part, but I didn’t feel this made the system reliable enough for production use.

Conclusion: Uncertain – further investigation needed

Mel’s thoughts on SignalR, AngularJS and TypeScript

I spent most of the DevCamp week working on the client applications: the presentation itself (minus the annotations and the graphs), the secondary screen and the audience view. So the new technologies I spent most of my time on were AngularJS and TypeScript, with a little time spent on SignalR.

Angular, SignalR and TypeScript logos


SignalR

I didn’t spend much time delving into SignalR, mostly because I didn’t need to. The code to set up a connection between clients and server is very concise and easy to understand. Tom was doing most of the work on this, so it’s possible there was more complex stuff I missed, but on the bits I worked on, the main problem was figuring out how to do it properly in Angular. Without the Angular, it only took a few straightforward lines and it just worked.

We did get some intermittent problems with dropped connections – we were focusing on getting a prototype together rather than putting in a lot of error handling. It would be interesting to see how easy it is to get proper error handling client side and server side, considering the simplicity without it.

I’ve recently recommended it for a different project and I am happy to be using it again.


AngularJS

AngularJS, however, I’m still undecided on. It certainly seems very robust and full-featured – I was impressed by that. I also liked the modularisation, and how easy the data binding was. My problem with it was the learning curve.

I had previously used KnockoutJS quite heavily (a lighter-weight MVVM library which also offers data binding), and that was something I could just layer over my usual coding style and only use when I had to. To use Angular properly, I needed to completely change how I structured my code, which slowed me down considerably at the beginning. I also wasn’t sure how best to integrate it with existing non-Angular code. (Unlike TypeScript, where half of us were using it and half not, and it all just fitted together.)

It’s hard for me to judge how good it is until I’ve spent some more time using it, but it worked very well once I knew what I was meant to be doing. It’s probably a better library than Knockout, but certainly one that you have to invest a lot more time into at the beginning.


TypeScript

TypeScript was my favourite of the new technologies I tried out. It was exceptionally simple to start using – I installed the software, downloaded some type definitions for the libraries I was using, and then it just worked.

Annotating with types takes some getting used to, but is definitely worth it – using TypeScript, most of my typos or errors get caught at compile time with useful error messages. Using plain JavaScript, I might not spot them until runtime, and would then have to debug why things weren’t working as expected. It saved a lot of time.

The only irritation was having to recompile the TypeScript after making changes instead of just refreshing the browser, although the TypeScript editor in Visual Studio does support compilation on save. Having modules and classes was also very nice – I suspect it would have been even more useful if I wasn’t using Angular, which implements its own modularisation.

I’ll definitely be using it in future projects.