After the success of our first ContrOCC hackday, we’ve decided to hold one every four months, to give the team a day to work on things that they think would make ContrOCC a little nicer in some way, for users or developers.
For all the thinking behind our product hackdays, have a look at our summary of the first ContrOCC hackday.
The day’s projects
Basel created a new method for managing foreign keys on database tables, allowing us to drop or re-add them with ease via a new stored procedure.
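The post doesn't show Basel's actual procedure, but the idea can be sketched. This Python fragment (illustrative only; the table, constraint, and column names are invented, and the real work is done in a T-SQL stored procedure) generates the pair of `ALTER TABLE` statements needed to drop a foreign key and later re-add it unchanged:

```python
# Hypothetical sketch of the idea behind Basel's procedure: given foreign-key
# metadata (names invented, not ContrOCC's real schema), emit the T-SQL to
# drop a foreign key and later re-add it exactly as it was.

def fk_drop_and_readd(table, fk_name, column, ref_table, ref_column):
    """Return (drop_sql, readd_sql) for one foreign key."""
    drop_sql = f"ALTER TABLE {table} DROP CONSTRAINT {fk_name};"
    readd_sql = (
        f"ALTER TABLE {table} ADD CONSTRAINT {fk_name} "
        f"FOREIGN KEY ({column}) REFERENCES {ref_table} ({ref_column});"
    )
    return drop_sql, readd_sql

drop_sql, readd_sql = fk_drop_and_readd(
    "dbo.Invoice", "FK_Invoice_Client", "ClientID", "dbo.Client", "ID")
print(drop_sql)
print(readd_sql)
```

In practice the real procedure would read this metadata from the database's own catalog views rather than take it as parameters.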
Chris G looked at generating C# wrappers for SQL stored procedures given a directory of SQL files.
Chris started off by looking at using Irony or ANTLR to generate Lexers and Parsers for the SQL but decided these were overkill for what he could achieve in one day. He then put together a rough-and-ready parser in C# to get the name of the procedure, and the name and data type of any parameters. This was then used to produce a file containing a C# class with properties for each parameter and a method for calling the stored procedure using the existing methods in ContrOCC.
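A rough-and-ready parser of this kind can be surprisingly small. Chris's version was in C#; this Python sketch (illustrative only, and certainly not his actual code) shows the shape of it, pulling the procedure name and each parameter's name and data type out of a `CREATE PROCEDURE` header with regular expressions:

```python
import re

# Toy parser in the spirit of Chris's one-day version: extract the procedure
# name plus the name and data type of each parameter. Only the text before
# the AS keyword is scanned, so @variables in the body are ignored.
PROC_RE = re.compile(r"CREATE\s+PROCEDURE\s+(?P<name>[\w.\[\]]+)", re.IGNORECASE)
PARAM_RE = re.compile(r"(?P<param>@\w+)\s+(?P<type>\w+(?:\(\s*\d+\s*\))?)",
                      re.IGNORECASE)

def parse_proc(sql):
    name = PROC_RE.search(sql).group("name")
    header = re.split(r"\bAS\b", sql, maxsplit=1, flags=re.IGNORECASE)[0]
    params = [(m.group("param"), m.group("type"))
              for m in PARAM_RE.finditer(header)]
    return name, params

sql = """CREATE PROCEDURE dbo.GetClient
    @ClientID INT,
    @Name NVARCHAR(100)
AS
BEGIN
    SELECT * FROM Client WHERE ID = @ClientID
END"""
print(parse_proc(sql))
```

As Chris found, this kind of parser breaks down once macros and more complex SQL formatting appear, which is why a real lexer/parser (Irony, ANTLR) would eventually be worth the investment.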
Chris is now interested in looking at automation, as well as supporting macros and more complex SQL formatting.
Chris H developed a prototype of bookends for use in performance logging. These record the total duration of an action performed by a ContrOCC user, such as viewing a client. Bookends provide a more reliable measure of the performance actually experienced by ContrOCC users than the individual SQL execution times, and enable log analysis of C# performance problems.
Development of a production quality version of this functionality is highly desirable, although it poses significant technical challenges. In the meantime the prototype as it stands has already given us some interesting performance information.
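The bookend pattern itself is simple to illustrate. The prototype is in C#, so the following Python sketch is only an analogy (the names are invented): a pair of log entries brackets a whole user action and records its total wall-clock duration, rather than just the individual SQL execution times within it.

```python
import time
from contextlib import contextmanager

@contextmanager
def bookend(action, log):
    """Write an opening log entry, run the action, then write a closing
    entry carrying the total elapsed time - the 'bookends'."""
    start = time.perf_counter()
    log.append(f"START {action}")
    try:
        yield
    finally:
        elapsed = time.perf_counter() - start
        log.append(f"END {action} after {elapsed * 1000:.1f} ms")

log = []
with bookend("view client", log):
    time.sleep(0.01)  # stand-in for the work of loading a client screen
print(log)
```

The closing entry captures everything between the bookends, including C# time that per-statement SQL logging never sees.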
Julian set out with the aim of reducing the effort involved in building a new version of ContrOCC; in particular, the need to use SQL Server Management Studio to search the SQL code base for various possible faults. He identified regular expressions as the most appropriate tool, added a couple of functions to ContrOCC’s CLR library, and added a new check to our check-schema stored procedure. The benefits of this approach are that:
- The object definitions in the database are tested – not the files in source control, which is important given that some objects are generated programmatically.
- It’s done by the automated testing, so faults will be picked up as and when they happen.
- The person doing the build has less to do.
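The shape of such a regex check is easy to demonstrate. Julian's real check runs as a CLR function inside the check-schema stored procedure over the object definitions held in the database; this Python sketch is illustrative only, and the "fault" it looks for (a stray `NOLOCK` hint) is an invented example rather than one of his actual checks:

```python
import re

# Illustrative regex-based fault scan over object definitions.
# The fault pattern here is invented for the example.
FAULT_PATTERNS = {
    "NOLOCK hint": re.compile(r"\bWITH\s*\(\s*NOLOCK\s*\)", re.IGNORECASE),
}

def check_definitions(definitions):
    """definitions: {object_name: sql_text}; return a list of
    (object_name, fault_description) pairs for every match."""
    faults = []
    for name, sql in definitions.items():
        for fault, pattern in FAULT_PATTERNS.items():
            if pattern.search(sql):
                faults.append((name, fault))
    return faults

defs = {
    "dbo.GoodProc": "SELECT ID FROM Client",
    "dbo.BadProc": "SELECT ID FROM Client WITH (NOLOCK)",
}
print(check_definitions(defs))
```

Running this inside the database, rather than over source files, is what lets it catch faults in programmatically generated objects too.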
Julian now plans on looking at improving the performance of his additions and is looking for other potential uses.
Mark has been looking at converting the Provider Portal user interface tests from Selenium IDE to Selenium WebDriver. The aim is to create a suite of Visual Studio unit tests that can either be run from Visual Studio or from a script for automated testing.
Mark has succeeded in setting up three Visual Studio unit tests that run our simplest UI test in Internet Explorer, Firefox and Chrome. These tests will return errors that say precisely what check failed and on which page. The relevant Selenium DLLs are incorporated into the ProviderPortalUnitTests project and there are (currently) no special configuration settings required.
There were a few issues:
- There is an export option in the Selenium IDE that auto-generates C# code corresponding to an existing IDE test. Most of that was thrown away because it was full of unhelpful code and assertion failure text. Instead, Mark created a utilities class that wraps the Selenium commands and returns meaningful exceptions.
- Windows Firewall throws up a dialog whenever the Internet Explorer and Chrome tests run. This is because they require an executable server component that has to be in the test directory so that the client driver can find it. There is a workaround, which is to add the path of the server component to the PATH variable on the computer.
- The Internet Explorer test has a weird configuration requirement for IE – “Protected Mode” must have the same setting (either enabled or disabled) for all security zones.
- Tests would occasionally fail at random because the test would try to look for an element before the page had finished loading (the “ClickAndWait” command doesn’t seem to exist in WebDriver). Mark managed to fix this by setting an intrinsic polling period when searching for page elements.
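The polling fix in the last point follows the same pattern as WebDriver's implicit wait: rather than failing on the first miss, keep retrying the element lookup until a timeout expires. This stdlib Python sketch (not Selenium code; the names are invented) shows the pattern:

```python
import time

def wait_for(find, timeout=5.0, poll_interval=0.1):
    """Call find() repeatedly until it returns non-None, or raise
    TimeoutError once the timeout expires - the polling pattern used to
    stop tests failing on pages that haven't finished loading."""
    deadline = time.monotonic() + timeout
    while True:
        result = find()
        if result is not None:
            return result
        if time.monotonic() > deadline:
            raise TimeoutError("element did not appear in time")
        time.sleep(poll_interval)

# Simulate a page where the element only "exists" after a short delay.
appears_at = time.monotonic() + 0.3
element = wait_for(lambda: "button" if time.monotonic() >= appears_at else None)
print(element)
```

In Selenium itself the same effect is achieved by configuring an implicit wait on the driver, so every element lookup polls automatically.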
Whether Selenium WebDriver is the correct technology to use for UI testing is still open for discussion but it’s clear that automated testing can be improved and new tools are cropping up to help.
Matthew aimed to make the ContrOCC Data Maintenance tool simpler to use, especially when working with very large and complicated CSV files.
Key features to implement were:
- Sorting and filtering of rows
- Finding columns easily
- Ability to duplicate rows
- Managing and creating CSV files easily
As there’s now a lot more on the screen, a diagram best illustrates the changes.
ContrOCC Data Maintenance tool
For the future Matthew is planning on adding a ‘History’ of the last 10 files that were opened, as sometimes it is necessary to switch between different files. He’d also like to add searching for files and columns based on partial names, such as typing ‘Setting’ in the file or column filter boxes would show results where ‘Setting’ appears in the name.
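The partial-name filtering Matthew has in mind is essentially a case-insensitive substring match. A minimal sketch (the column names are invented for illustration):

```python
def filter_names(names, query):
    """Return every name containing the typed text, case-insensitively."""
    q = query.lower()
    return [n for n in names if q in n.lower()]

columns = ["CareSettingID", "SettingName", "ClientID"]
print(filter_names(columns, "Setting"))
```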
Mike investigated how feasible it would be to make a significant volume of the ContrOCC documentation usefully searchable by developers, the aim being to make it as easy as possible to find all the relevant documentation when working on ContrOCC. He looked at an indexing and searching feature using Apache Solr and related tools. The next step would be to go forward with an implementation that indexes the documentation in our source control system and the ContrOCC wiki pages.
Steph aimed to improve the usability of our internal report preview tool. She wanted to add the ability to paste multiple lines of text from the clipboard into the report SQL text box, as this is how it is presented on the ContrOCC troubleshooting menu, and to save a list of the most recently run reports, as developers often work on several at a time. Steph managed to get both features working independently in a test program, but ran out of time before she could combine them into a single control and integrate them into the main project, which she’ll save for a quiet moment or the next hackday.
Tom compared the impact of handling auditing via SQL Server Service Broker with handling it during trigger execution. The aims were twofold. Firstly, to gain some experience in using the service broker, since it might prove useful to us as a tool in our performance arsenal. Secondly, to test a hunch that although we don’t perceive auditing to be expensive, there may be a knock-on cost to transaction durations and sizes caused by processing auditing information inline.
By the end of the day Tom had managed to compare three different database configurations whilst performing a complex procedure:
- Baseline time with no auditing: 11m 29s
- Time with auditing via triggers: 16m 24s (+42.8%)
- Time with auditing via service broker and a single log table: 16m 15s (+41.5%)
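The quoted overheads are each configuration's extra time as a percentage of the 11m 29s baseline, which is easy to verify:

```python
def overhead(baseline_s, measured_s):
    """Extra time as a percentage of the baseline."""
    return (measured_s - baseline_s) / baseline_s * 100

baseline = 11 * 60 + 29  # 689 s
triggers = 16 * 60 + 24  # 984 s
broker = 16 * 60 + 15    # 975 s
print(f"triggers: +{overhead(baseline, triggers):.1f}%")  # +42.8%
print(f"broker:   +{overhead(baseline, broker):.1f}%")    # +41.5%
```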
So there isn’t a clear winner; however, Tom only had time to gain a pretty limited understanding of the basic principles of the service broker, and there may be better ways of using it. This suggests it is an area that would require considerable investigation time if we were to evaluate the options properly. For auditing specifically, it is not at all clear that this would yield benefits.
Ulen decided to compare the use of the SNAPSHOT isolation level to NOLOCK. Most of the documentation focuses on the disadvantages of NOLOCK, and there is very little that goes into any detail about SNAPSHOT – or, most importantly, what the risks and overheads are of changing from one to the other.
Ulen wants to take this forward and put in some more dedicated research, so that he can produce a technical paper on the subject for further discussion. Sadly he ran out of time, as he got bogged down following an example he found online which quite simply didn’t work the way it was documented! Ulen suggests taking suggestions to move to an optimistic locking policy with a pinch of salt – they could simply come from fans of other database systems who don’t agree with the tried and tested pessimistic locking that SQL Server provides.