Why you can’t afford to ignore the new Health and Safety sentencing guidelines


By our Commercial Lead, Duncan Davies.

New sentencing guidelines were issued in February for breaches of Health and Safety (H&S) regulations. It’s safe to say this didn’t make the front pages.

The new guidance was devised independently of the HSE (although the HSE provided input into the process), and comes in a couple of “easy-to-use grids” that allow you (in theory) to estimate your potential level of fine for a particular offence.  The idea is simplification and there’s a link to the new guidelines at the end of this post.

At a recent seminar at the Safety & Health Expo 2016 in London, roughly 50% of the audience raised their hand when asked whether they were aware of these changes.

If you’re in the 50% that don’t know, here are a few thought-provoking questions:

  1. How many £1M+ fines have there been since the law changed in February 2016?
  2. What’s the longest prison sentence that’s been passed down in the last 6 months?
  3. Are you aware that you can receive the same fine whether or not anyone is injured, if culpability and a lack of H&S procedures can be shown?

Before we share the answers, it’s important to recognise that this new guidance is intended to send a blunt message to business: Health & Safety is no longer the preserve of the overly cautious, process-obsessed, budget-starved H&S professional tucked away in a broom cupboard.

Health & Safety is now well and truly heading to the centre of the Board table. We’re now seeing the reality of directors themselves heading to prison, and fines being imposed that are ‘meaningful’ where previously they might have been a mere ‘slap on the wrist’.

Some argue that the new guidelines are chiefly designed to raise revenue from large companies, who face the heaviest penalties under the new regime. A more public-spirited person might say they are intended to make the workplace safer for more people.

A key aspect that’s changed is that there’s now a focus not only on harm done, but also the harm risked. In theory this makes a lot of sense. If two companies commit the same ‘sin’, both should be liable, even if only one of them is ‘lucky enough’ not to actually hurt someone.  In reality it’s going to be a painful process to prove what could have happened, but didn’t.

All of this means giving renewed focus to employee engagement and to those projects that build a safety culture; more than ever, businesses will need to rely on employees, subcontractors, suppliers and partners to create that culture. It means that vigilance and capturing near-miss information are more important than ever. And it means that Health & Safety professionals are going to have to give their boards more data and more tools to help them manage this business risk.

And the results to date of these changes? Well, a quick poll of publicised cases since February 2016 reveals 10 cases with total fines of £13M, and one case in which a company director was sent to prison for six years.

A recent report from IOSH highlighted that in the period February to August 2016 there have been as many £1M+ penalties as there were in the previous two decades.

It remains to be seen how recent high profile cases, such as Alton Towers and Foodles Production, will be prosecuted. What seems certain is that this new blunt instrument is going to be used to grab the attention of all those people who’ve not yet recognised that the H&S landscape has fundamentally changed.

For more information on the guidelines, see pages 4 and 5 of: www.sentencingcouncil.org.uk/wp-content/uploads/HS-offences-definitive-guideline-FINAL-web.pdf





Making the most of Docker for .NET development


Lead Developer Peter Duerden – who is currently working on Control F1’s Innovate UK-funded “i-Motors” telematics project – gives his top tips on how best to utilise Docker’s containerised solution for .NET development.


The recent rise in popularity of the microservices architecture has resulted in a shift in the way architects and developers design and develop their solutions: applications are built from small, independently deployable components.

To support this paradigm, Solomon Hykes released an open source project called Docker in March 2013. Docker provides a lightweight containerised alternative to a full virtual machine: rather than virtualising hardware, a container shares the host’s Linux kernel, using kernel namespaces for isolation and control groups (cgroups) for resource limits.

.NET developers

From the background above, you would think that the ability to utilise Docker for .NET development would be limited. However, Microsoft has invested time in Docker Tools for Visual Studio 2015 – Preview. This adds support for the building, deploying and debugging of containers using Mono and, more recently, .NET Core, which can be deployed or debugged locally when used in conjunction with Docker Toolbox or Docker for Windows (or Mac).

The Docker Tools have changed quite dramatically from their early releases, and during conversations with the development team in Seattle I was able to request, amongst other things, deployment support for .NET Console apps. That request has since been incorporated into the current toolset.

The Docker Tools are a real benefit for fast development, but it pays to understand the Docker processes before using them, and manually writing a Dockerfile to build your own images can prove valuable if you are to reap the full benefits of what Docker can offer.
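For instance, here’s a minimal sketch of a Dockerfile for a .NET Core service. This is an illustration only: the base image tag reflects mid-2016, and the paths, port and dll name are placeholders for your own project.

# build from a 2016-era .NET Core base image (adjust the tag to taste)
FROM microsoft/dotnet:1.0.0-core
# each instruction below adds a cached layer to the image
WORKDIR /app
COPY ./publish /app
# expose only the ports your service actually listens on
EXPOSE 5000
ENTRYPOINT ["dotnet", "MyApp.dll"]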


Docker containers can be hosted on any Linux machine running the Docker daemon, but in a production environment you’ll need more than one machine, together with a Docker Swarm to manage the allocation of containers across those machines, so that the system remains resilient should one machine fail.

NB certain Cloud providers offer their own “Container Services” which ease the deployment of containers, but don’t use the standard Docker tools for composing or scaling.
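For a flavour of what this looks like in practice, here’s a sketch using the swarm mode built into Docker 1.12 and later; the service name and image are invented for the example.

# on the first (manager) node:
docker swarm init
# on each worker, using the token printed by 'swarm init':
docker swarm join --token <token> <manager-ip>:2377
# then run a replicated, load-balanced service across the swarm:
docker service create --name my-api --replicas 3 -p 80:80 myorg/my-api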

Helpful tips

Although you may want to just dive in and try your first Docker container, be mindful of the following:

  1. Understand the Docker build process
    • What is it doing when it builds the images using the Dockerfile?
    • How are things layered?
    • Exposing ports – make sure you think about why and when
    • Have you considered tagging?
  2. Experiment with Docker Compose (see the sketch after this list)
    • Build your application by linking services
    • “Internal” services do not need ports exposing to the Host machine
  3. Build your own Docker Swarm
    • By building a swarm you can understand and identify the related challenges, such as service discovery and load balancing
  4. Use Docker natively. Don’t use one hosting provider’s container service, as this will tie you in and mean further work to unstitch your solution if you ever decide to host elsewhere.
  5. Be aware that not all NuGet packages support .NET Core. Some do, but quite a few still aren’t supported.
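On point 2, here’s a minimal sketch of a docker-compose.yml that links a public-facing service to an “internal” one; all names and images are invented for the example.

# 2016-era version 2 compose format
version: '2'
services:
  api:
    image: myorg/my-api
    ports:
      - "5000:5000"   # exposed to the host machine
    links:
      - db
  db:
    # an "internal" service: no ports entry, so nothing is exposed to the host
    image: postgres:9.5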

The future

At the time of writing this, quite a few changes are occurring in both the Docker and .NET Core spaces, which will no doubt have an impact on future development.

  1. .NET Core has now been released. Hopefully this will drive an increase in the number of NuGet packages available to developers.
  2. Docker Swarm has been simplified and should hopefully make it easier to build and manage swarms.
  3. Microsoft are close to the release of Windows Container Service, which offers similar functionality to Docker, but on Windows Server 2016. This will therefore allow for full .NET framework capabilities rather than the .NET Core version.

Getting an HTTP REST server running on the iMX233 OLinuXino-MICRO


Control F1 Lead Developer Phil Kendall gives some handy pointers on how to get an HTTP REST server running on the iMX233 OLinuXino-MICRO.

As part of an on-going client project, Control F1 was asked to get an HTTP REST server running on the iMX233 OLinuXino-MICRO – a little ARM-powered small board computer. There’s a lot of documentation out there for the OLinuXino board, but it’s not always clear which is the most up to date, so this post covers how we managed to get everything working in 2016.

Before you begin…

…always buy the right kit. You’ll obviously need a board, but as well as that remember to get:

  • A 5V power supply with a 5.5mm jack
  • A MicroSD card
  • A composite video lead (or another cable with RCA connectors)
  • A USB keyboard
  • A powered USB hub – we found that the OLinuXino didn’t put out enough power over its USB port to get keyboards to function
  • A WiFi dongle that uses the RealTek 8192cu chipset. It’s always a bit tricky to determine exactly which chipset is in a dongle, but as of February 2016 the TP-LINK TL-WN823N was using the right chipset

In theory, that’s all you absolutely need, but I’d also strongly recommend a composite to HDMI converter, just because most monitors don’t have composite inputs these days.

First steps

As with any project of this kind, the first step is always to get something running on the board. Thankfully for the OLinuXino, this turns out to be relatively easy: grab the “iMX233 Arch image with kernel 2.6.35 WIFI release 4” image linked from the Olimex wiki, then simply follow the steps listed there to copy the image onto the MicroSD card. Put the card into the board, plug in the keyboard and video, and you should get the standard Tux splashscreen, followed by a login prompt.


The first thing to check is that you’ve got any sort of communications at all; the easiest way to do this is to run a scan for any wireless networks:

ip link set wlan0 up
iwlist scan

That should be enough to give you a list of all the wireless networks that the board can see. Find your network and run:

wpa_passphrase SSID PASSWORD > MyNetwork.conf

Now edit /etc/wpa_supplicant/wpa_supplicant.conf, delete all the example configurations from the file and then add in the contents of the MyNetwork.conf file you created above. After that, it should just be a matter of bringing everything up:

wpa_supplicant -B -i wlan0 -Dwext -c /etc/wpa_supplicant/wpa_supplicant.conf
ifconfig wlan0 <ip address> up
route add default gw <gateway address>

…and finally editing /etc/resolv.conf to add an appropriate nameserver. With a following wind, you should now have fully working networking on your board and be able to SSH into it.
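For reference, the two files should end up looking something like this; the SSID, key and nameserver are placeholders.

# /etc/wpa_supplicant/wpa_supplicant.conf - your network block from wpa_passphrase
network={
    ssid="SSID"
    psk=<hex key generated by wpa_passphrase>
}

# /etc/resolv.conf - any reachable nameserver will do
nameserver 8.8.8.8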

HTTP server

I’m a big fan of spray.io’s spray-can as a lightweight HTTP server. As spray-can is in Scala, the first thing to do is to get a JVM onto the box. This also turns out to be nice and easy – grab the “ARMv5/ARMv6/ARMv7 Linux – SoftFP ABI, Little Endian” version of the Java SE Embedded JDK from Oracle’s site. This contains the JRE as well as the JDK, so copy the JRE onto the board and ensure that java is on the PATH somewhere. Test this with your favourite “Hello, world!” implementation if you so wish.


Getting Scala running on the board was also relatively easy: simply copy scala-compiler.jar, scala-library.jar and scala-reflect.jar from whichever version of Scala you’re using onto the board, and then run your Scala code as:

java -Xbootclasspath/a:/full/path/to/scala-compiler.jar:/full/path/to/scala-library.jar:/full/path/to/scala-reflect.jar -jar helloworld.jar

I packaged that up into a shell script and put that on the PATH just to make things easier.
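Something like the following does the job – the /opt/scala path is just an example:

#!/bin/sh
# scala-run: wraps the java invocation above; assumes the Scala JARs
# live in /opt/scala and takes the JAR to run as its first argument
SCALA=/opt/scala
exec java -Xbootclasspath/a:$SCALA/scala-compiler.jar:$SCALA/scala-library.jar:$SCALA/scala-reflect.jar -jar "$1"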


The biggest issue with getting spray-can running is ensuring that all its dependencies are available. The easiest way I found to do this was to use the sbt-assembly plugin to produce a “fat JAR” and deploy that onto the board. Importantly, sbt-assembly does the right thing out of the box and merges Typesafe Config configuration files so that they all work properly. Other than that, the only change I needed to make was to increase spray-can’s start-up timeout a little, due to the relatively slow speed of the board; this is most easily done by adding the following stanza to application.conf:

spray.can {
  server {
    # spray-can can be a bit slow to start on the board, so give it more time to start
    bind-timeout = 10s
  }
}

After all that, you should be able to run any of the spray-can demos on the board.
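If you want something of your own to run, here’s a minimal sketch using spray-can’s low-level API (spray-can 1.3-era; the actor, path and response are invented for the example – a spray-routing service would work just as well):

import akka.actor.{Actor, ActorSystem, Props}
import akka.io.IO
import spray.can.Http
import spray.http._
import spray.http.HttpMethods._

// replies to GET /hello; everything else gets a 404
class HelloService extends Actor {
  def receive = {
    case _: Http.Connected =>
      sender ! Http.Register(self)
    case HttpRequest(GET, Uri.Path("/hello"), _, _, _) =>
      sender ! HttpResponse(entity = "Hello from the OLinuXino!")
    case _: HttpRequest =>
      sender ! HttpResponse(status = StatusCodes.NotFound)
  }
}

object Main extends App {
  implicit val system = ActorSystem()
  val handler = system.actorOf(Props[HelloService])
  IO(Http) ! Http.Bind(handler, interface = "0.0.0.0", port = 8080)
}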


Can small businesses benefit from the Northern Powerhouse?


Control F1 MD Andy Dumbell on his perceptions of the Northern Powerhouse initiative and why it’s so important to support small businesses in the region.

As cofounder and MD of a small tech business based in Yorkshire, and as a resident of the north-east, I have a vested interest in the North and support initiatives that will benefit the region.

With this in mind I recently attended the 2016 “UK Northern Powerhouse International Conference & Exhibition” held in Manchester, with the objective of understanding what the NPH (Northern Powerhouse) is all about, and how it could benefit our region and its small businesses.

Prior to the event, like a lot of delegates I only had a vague understanding of what the NPH is: a concept that aims to rebalance the UK economy, pushing growth outside London into Northern cities – from Liverpool through Manchester, Leeds, Sheffield, Hull, Teesside and Newcastle. It’s also a vision backed by the Chancellor George Osborne, who some say has staked his political reputation on its success.

That’s all well and good, but what does this actually mean for small businesses? What is the plan and who is leading it? How do we get involved and where are the opportunities? My hope was that the event would help answer these questions and leave us all feeling inspired and brimming with enthusiasm!

The conference was hosted by John Humphrys, who as you can imagine was witty, engaging and tenacious. It was split over two days and predominantly consisted of individual presentations and panel discussions. The calibre of the speakers was high and the format encouraged audience interaction through a social networking technology called slido.

The conference also offered some powerful networking opportunities with CEOs and other top executives of some of the largest companies and organisations in the UK, across various sectors. I formed promising connections with people whom I’d normally struggle to meet. For instance, I secured a private meeting with the President and CEO of one of the largest supermarket chains in the UK to talk about his challenges, what my business does and where we could potentially add value to his. We swapped business cards and my business has been invited to present to some of his senior team, to explore how we might work together.

On day 1, Lord O’Neill of Gatley, Commercial Secretary to HM Treasury, kicked the conference off with his keynote speech on the government’s NPH agenda, with a focus on connectivity, communication and education. He talked about the NPH being a two-phase project, with the first phase currently underway, targeting awareness, devolution and improvements to transport links to better interconnect the North.

Phase 2 will be a continuation of the first, extended to address challenges in education and skills and their interplay with business. Lord O’Neill aspires to improve outcomes and aspirations for the North, and central to phase 2 will be a low-cost travel card, similar to London’s Oyster card, that can be used to commute across the North of England.

O’Neill’s introduction was a good start. The intention to provide better transport links should create new business opportunities whilst making commuting to work quicker, simpler and more cost effective, opening up a wider talent pool for businesses and encouraging local people to work in the region. However, connectivity issues within cities were not discussed, and these need to be addressed in parallel with this wider piece.

There was considerable focus on education and skills throughout the event, especially given recent negative press on declining education standards in the North. I raised a question during a panel discussion on Science, Research and Skills: “what role will the NPH play in improving education standards and building a skills pipeline to meet future business demands?”

Professor Liz Towns-Andrew, Research and Enterprise, Yorkshire Universities, gave a passionate response and talked about her efforts to embed enterprise in the curriculum to create more business-savvy graduates. She also highlighted the importance of listening to what employers and industry need, and the necessity of getting these parties more engaged with further education.

This was encouraging, and we heard about a number of further promising initiatives throughout the event – for instance, the International Festival of Business (IFB 2016), hosted in Liverpool later this year: a global gathering of international reach and significance. Over three weeks, business leaders and thousands of international business delegates will get together, opening up opportunities in new markets. The festival includes a series of events, workshops and panel debates, with the intention of forging new connections and helping businesses to secure new customers from key markets around the globe.

What wasn’t clear, though, was the role the NPH will play in all of this. Will it simply remain a concept – a platform for interested parties to share their ideas, debate and network – or will it become a real entity and the true driving force behind significant change? This wasn’t crystallised, and the NPH is facing a fair amount of criticism. Social networks are loaded with frustration over the lack of a clear plan, with some critics labelling the NPH a gimmick without substance.

Geographical focus also presents a bit of an issue. Manchester is likely to become the capital of the NPH; it was quite telling that the Chancellor’s first speech on the concept was delivered there, and that events such as this one are hosted in the city. But it’s important that the organisers rotate event locations across the North and don’t neglect cities such as York, Sunderland and Hull. It would be great to see future NPH events hosted in Newcastle and other major cities; otherwise we risk Manchester becoming the northern powerhouse, with a new divide forming and breeding imbalance within the North – which would undermine the initiative’s whole purpose.

So was the UK Northern Powerhouse International Conference & Exhibition worth attending? Personally, I didn’t achieve all of my objectives, and like a lot of delegates my understanding of the NPH is still relatively vague. Yet whilst I didn’t leave feeling wholly inspired, I am hopeful, and believe that the NPH – which is still in its infancy – can accomplish its vision if given the necessary support. As delegates, we are responsible for ensuring a return on the time and money we invest in such activities. Returning to my original question – can small businesses benefit from the Northern Powerhouse? – I would say yes! Get involved: network, debate, collaborate and play an active role in the NPH story – it’s what we make of it that counts.

Control F1 takes Intel prize at IoT Hackathon


Head of Development Nick Payne describes his time at London Olympia’s IoT Hackathon, at which Control F1 proved victorious, taking the Intel Prize for our team’s “Personal Comfort Monitor”.

About two weeks ago, friend of Control F1 Steve Cowper dropped us an email about attending an IoT Hackathon at London Olympia. We’d never attended a hackathon on this scale before, so it seemed like a great idea: the prizes looked interesting, and it fitted in quite nicely with what CF1 are doing in the space too.

A couple of impromptu phone calls later, we’d come up with the idea of the “Personal Comfort Monitor” – PCM for short! We were taking on Intel’s challenge of creating an application for the Intel Edison board, together with an Arduino breakout board and a bevy of Grove sensors. Of course, all plans could change once the hardware was pitched to us on arrival…

Fortunately, they didn’t! Intel has built a Cloud IoT platform, loosely based on MQTT and hosted on AWS (again, a perfect fit for CF1).

The brief told us that there had to be a business case and route to market for whatever we were building. Steve had done plenty of market research and field data collection previously and promptly set out on the documentation for our project (whilst keeping the dev machine oiled with coffee and cake).


Control F1 Lead Developer Phil Kendall made light work of the hardware, iterating quickly over the Intel examples to get various sensors wired up and talking to the backend. Meanwhile I configured the backend platform, set up a GitHub repo for the code, and made sure that the project was registered so we stood a chance of winning.

After about five hours we had a board that was able to ingest data from various sensors (with air quality fudged by a rotary switch, in true hackathon style) and a mobile application that displayed the data. In short order we also got a red LED lighting up when the derived “comfort level” dropped below 50 (Steve kindly produced an algorithm – together with an Excel proof – for Phil to convert into NodeJS). Time was called on the first day, and we retired happy that we’d achieved a fair amount.

Day 2 arrived and we implemented the LCD display board to give a user-friendly readout. After much hacking (and a bit of swearing) Phil converted the Edison IoT agent to send over TCP, as sending over UDP messed up the LCD – it turned it off!

I embarked on polishing the mobile app (a rewrite followed!) and by the end of the morning it looked half decent. Calling the Intel EnableIoT API gave the mobile app the data it needed.

With a few hours to spare, we helped a few of the other hackers and checked out the competition. Then, once the presentation was finished, we had some lunch.

The pitches drew quite a crowd at the Expo. We were next to last, so at least we didn’t have too long to wait for the judges to finalise the results.

I’m proud to announce that we won 1st prize from Intel! They were impressed by the amount we achieved in the time we had, and by the idea and pitch (and maybe by the fact that we raised a few bugs for them to go and fix too!)

Many thanks to the organisers, and to Richard Kasteline in particular. The prizes will definitely be used back at CF1 HQ to continue our “After School Clubs”, and who knows – maybe the Personal Comfort Monitor will come in handy at Control F1 HQ – we have three Edisons to play with now, and a Surface 3 to display results on!

i-Motors receives £1.3m from Innovate UK


We are excited to be able to share that i-Motors – a new Control F1-led telematics project – has been awarded a grant of £1.3M by Innovate UK.

We’ll be partnering with the University of Nottingham’s Geospatial Institute and Human Factors Research Group, traffic management specialists InfoHub Ltd, remote sensing experts Head Communications and telecoms gurus Huduma to deliver the project.

Picture a future without gridlock. A future in which our city streets, roads and highways are safer, cleaner and greener. In which vehicles can self-diagnose a fault and order a new component, or automatically detect a hazard such as ice on the road before it’s too late and warn other vehicles around them too. A future in which cars can drive themselves…

That future isn’t far away: the UK is predicted to see huge growth in the production of autonomous (driverless) cars by 2030. Meanwhile the production of connected cars – cars with inbuilt “telematics” devices, capable of communicating with other vehicles and machines – is forecast to rise from around 0.8 million in 2015 to 2 million in 2025, by which point they will account for 95% of all cars produced in the UK.

Yet whilst the number of cars with the technology to connect is already rising, little progress has been made towards putting this technology to use.

i-Motors plans to address this issue. Capitalising on our extensive telematics experience (read about our telematics partnership with the RAC here), we plan to establish a set of universal standards on how vehicles communicate with each other, and with other machines. Making use of connected cars’ ability to support apps, we’ll be working with academics from Nottingham University’s Geospatial Institute and Human Factors Research Group to build a mobile platform that allows vehicles of different manufacturers and origins to transfer and store data.

We’ll use patented technology, allowing data to be collected and analysed at greater speeds than ever before. We’ll also be working alongside traffic management experts InfoHub Ltd to combine these data with other data sources such as weather reports, event data and traffic feeds, easing congestion and increasing safety through realtime updates and route planning. In addition, the i-Motors platform will allow vehicles to report errors, which can be automatically crosschecked against similar reports to diagnose the problem and reduce the chance of a breakdown.

We will also be working with Head Communications to address the issue of limited connectivity by developing sensors capable of transmitting data to the cloud in realtime. Through installing these sensors – known as Beyond Line of Sight (BLOS) – vehicles can remain connected with sub-metre precision, even when out of internet and GPS range. And we will be collaborating with telecoms gurus Huduma to make i-Motors sustainable and commercially successful in the long term.

i-Motors has the backing of Nottingham, Coventry and Sheffield City Councils, where the new technology will first be piloted, and a letter of support from the Transport Systems and Satellite Applications Catapult, and fleet management experts Isotrak. The project will make use of live vehicle data provided by Ford, which has an ongoing relationship with the University of Nottingham.

Our MD Andy Dumbell commented:

“We are delighted to have been awarded the funding by Innovate UK to lead on this ground-breaking project. Connected and driverless cars offer us the opportunity to make huge strides in terms of reducing congestion, bringing down emissions, and even saving lives. Yet as is always the case when dealing with big data, it’s only effective if you know how to use it. We believe that through i-Motors we can set the standard for connected and autonomous vehicles and redefine the future of our streets, highways and cities.”

Just how innovative is the UK?


Our Product Development Director Dale Reed shares his thoughts from the 2015 Innovate UK Conference. 

Innovate UK is the UK’s innovation agency: an executive non-departmental public body, sponsored by the Department for Business, Innovation & Skills. It hosts an annual event to highlight the best and brightest of British innovation, with exhibitors and seminars held over two days in London.

What was constantly highlighted throughout the event was just how innovative we actually are in this country. Consider these statistics:

The UK represents around 1% of the total global population, and yet we produce 16% of the world’s published scientific papers and host 4 of the world’s top 10 universities.

Then consider some of the inventions that have really shaped the world we live in today:

Computers? Charles Babbage, British.

Telephone? Alexander Graham Bell, British.

World Wide Web? Tim Berners-Lee, British.

Television? John Logie Baird, British.

You can also add to that list radar, the endoscope, the zoom lens, holography, in vitro fertilisation, animal cloning, magnetically levitated trains, the jet engine, antibiotics and, indeed, Viagra!

Some years ago, Japan’s Ministry of International Trade and Industry made a study of national inventiveness and concluded that modern-era Britain had produced around 55% of the world’s ‘significant’ inventions, compared with 22% for the US and 6% for Japan. The point is that Brits have a long history of innovation, and it’s something we should be mightily proud of.

The downside is that, however good we’ve been at inventing things, we’ve not been that great at commercialising them. Almost all of the inventions mentioned above were commercialised largely by businesses outside the UK (really, only jet engines and antibiotics contribute anything significant to our GDP). We also lose a great many of our brightest minds to businesses overseas.

Fortunately this seems to be one of the areas that is changing, as evidenced by some of the talks I sat in on at the event. Many universities are now teaming up with businesses to place students and undergraduates – something which benefits all parties. Despite some difficulties around IP protection, it’s a huge boon for the student to learn some business sense and commercial ability before being employed full time, and the employer gets some very bright minds to help them think around their problems. Many students go on to work for the business full time on graduation, and many businesses continue with the scheme year after year because it’s been so successful for them.

There are also now a lot of Catapult Centres right here in the UK (https://www.catapult.org.uk/). These are a network of world-leading centres designed to transform the UK’s capability for innovation in specific areas and help drive future economic growth. They are a series of physical centres where the very best of the UK’s businesses, scientists and engineers work side by side on late-stage research and development – transforming high potential ideas into new products and services to generate economic growth.

By bringing together the right teams who can work together and innovate, and just as importantly commercialise, the centres are ensuring the UK can continue to be at the forefront of innovation, particularly in technology and the sciences.

Graphene, of course, is a well-known British invention which I think will be joining the list of the world’s most life-changing innovations in fairly short order. The number of applications seems almost limitless at the moment. We already have the National Graphene Institute, built as part of the University of Manchester, and fortunately the UK is working hard to ensure we are capable of commercialising graphene’s potential. Work on another £60,000,000 building – the Graphene Engineering Innovation Centre – is currently underway; it will help move the research into actual production.

We also have a lot of expertise in quantum mechanics, and again companies in the UK are now working towards the commercialisation of highly accurate sensors utilising quantum effects – for example, an accelerometer based on the quantum interference of ultracold atoms. These will be able to provide highly accurate location and acceleration information without any need for GPS or other external references. Although quite large at the moment, they’re expected to be microchip-sized within the next two years. Obviously this could be a huge boon to mobile, telematics and asset-tracking systems. The technology is currently being developed for use in submarines, so that they can determine their position accurately without having to surface to use GPS.

Overall I came away from the event feeling extremely positive and excited to be here in the UK at a time when there is so much potential for new technologies and innovation. I’m very much looking forward to Control F1 being a part of it!

A Sparkling View from the Canals


Control F1 sent Lead Developer Phil Kendall and Senior Developer Kevin Wood over to Amsterdam for the first European edition of Spark Summit. Here’s their summary of the conference.

One of the themes from Strata + Hadoop World in London earlier this year was the rise of Apache Spark as the new darling of the big data processing world. If anything, that trend has accelerated since May, but it has perhaps also moved in a slightly different direction: while the majority of the companies talking about Spark at Strata + Hadoop World were innovative, disruptive small businesses, at Spark Summit there were a lot of big enterprises who were either building their big data infrastructure on Spark, or moving their infrastructure from “classical” Hadoop MapReduce to Spark. From a business point of view that’s probably the headline of the conference, but here are some more technical bits:

The Future of MapReduce

MapReduce is dead. It’s going to hang on for a few years yet due to the number of production deployments which exist, but I don’t think you would have been able to find anyone at the conference who was intending to use MapReduce for any of their new deployments. Of course, it should be remembered that this was the Spark Summit, so this won’t have been a representative sample, but when you’ve some of the biggest players in the big data space like Cloudera and Hortonworks joining in on the bandwagon, I certainly think this is the way that things are going.

In consequence, the Lambda Architecture is on its way out as well. Nobody ever really liked having to maintain two entirely separate systems for processing their data, but at the time there really wasn’t a better way. This is a movement which started to gain momentum with Jay Kreps’ “Questioning the Lambda Architecture” article last year, but as we now have an enterprise-ready framework which can handle both the streaming and batch sides of the processing coin, it’s time to move on to something with less overhead – quite possibly the “SMACK” stack of Spark, Mesos, Akka, Cassandra and Kafka, something which Helena Edelson implored us to do during her talk. Just hope your kids don’t go around saying “My mum/dad works with Smack”.

The Future of Languages for Spark

Scala is the language for interacting with Spark. While the conference was pretty much split down the middle between folks using Scala and folks using Python, how the Spark world is going was perhaps most obviously demonstrated by the spontaneous round of applause which Vincent Saulys got for his “Please use Scala!” comment during his keynote presentation. The theme here was very much that while there were people moving from Python to Scala, nobody was going the other way. On the other hand, the newcomer on the block here is SparkR, which has the potential to open up Spark to the large community of data scientists out there who already know R. The support in Spark 1.5 probably isn’t quite there yet to really open the door, but improvements are coming in Spark 1.6, and they’re definitely looking for feedback from the R community as to which features should be a priority, so it’s not going to be long before you’re going to see a lot of people using Spark and R.

The Future of Spark APIs

DataFrames are the future for Spark applications. Similarly to MapReduce, while nobody’s going to be killing off the low level way of working directly with resilient distributed datasets (RDDs), the DataFrames API (which is essentially equivalent to Spark SQL) is going to be where a lot of the new work gets done. The major initiative here at the moment is Project Tungsten, which gives a whole number of nice optimisations at the DataFrame level. Why is Spark moving this way? Because it’s easier to optimise when you’re higher up the stack – if you have a holistic view of what the programmer is attempting to accomplish, you can generally optimise that a lot better than if you’re looking at the individual atomic operations (the maps, sorts, reduces and whatever else of RDDs). SQL showed the value of introducing a declarative language for “little” data problems in the 1970s and 1980s; will DataFrames be that solution for big data? Given their position in all of R, Python (via Pandas) and Spark, I’d be willing to bet a small amount of money on “yes”.
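To make the contrast concrete, here’s a minimal Scala sketch against the 1.5-era API. The sc and sqlContext values, file names and column names are all assumptions for the example:

// RDD style: opaque functions that the engine cannot see inside
val errorCounts = sc.textFile("events.log")
  .map(_.split("\t"))
  .filter(fields => fields(1) == "error")
  .map(fields => (fields(0), 1L))
  .reduceByKey(_ + _)

// DataFrame style: a declarative plan that Tungsten can optimise as a whole
val events = sqlContext.read.json("events.json")
events.filter(events("level") === "error").groupBy("service").count().show()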

On a related topic, if you’ve done any work with Spark, you’ve probably seen the Spark stack diagram by now. However, I thought Reynold Xin’s “different take” on the diagram during his keynote was really powerful – as a developer, this expresses what matters to me – the APIs I can use to interact with Spark. To a very real extent, I don’t care what’s happening under the hood: I just need to know that the mightily clever folks contributing to Spark are working their black magic in that “backend” box which makes everything run super-fast.

The Future of Spark at Control F1

I don’t think it will come as a surprise to anyone who has been following our blog that we’re big fans of Spark here at Control F1. Don’t be surprised if you see it in some of our work in the near future 🙂

Implementing best practice application design principles in Kentico 8



In this post, Lead CMS Developer Chris Parkinson discusses how we use n-tier architecture and IoC (inversion of control) at Control F1 to write abstracted, testable custom code within Kentico, and how we then consume this custom code in the CMSApp and CMSApp_AppCode projects.

NB This post assumes a good working knowledge of Kentico 8 and an understanding of SOLID principles, including n-tier architecture and dependency injection / inversion of control. If the latter is new to you, here is a good place to start.

At Control F1 we like to apply good application design principles to any development project, be it Greenfield, mobile or CMS integration. These typically include the use of dependency injection, n-tier, and separation of concerns to allow custom code to be more easily tested.

Sometimes, however, particularly when working with an off the shelf product, we have to come up with workarounds. Kentico is a brilliant tool for quickly developing feature-rich websites, but as with any proprietary software it isn’t always the easiest to extend. We typically use Kentico web application projects at Control F1. This allows us to more easily integrate Kentico with our build processes using MSBuild (another article entirely).

One of the quirks of web application projects relates to the continued use of an App_Code folder, which is a hangover from the older website project type. In website projects, code added and edited within App_Code is compiled on the fly, allowing server-side code to be developed without the need to re-compile. This concept doesn’t really exist in web applications. Here is a good post from Microsoft explaining the differences between websites and web applications in .NET.

Kentico has split out App_Code into its own VS project called CMSApp_AppCode, with any code/folders under it moved under a folder called Old_App_Code.


This causes two fundamental problems:

  1. Any custom objects or providers generated from custom page types or custom classes in Kentico are generated here.
  2. Any custom code that needs to inherit from CMSLoaderAttribute (custom events, extending the order provider etc) HAS to go here.

Remember these two points; we’ll quickly move on and discuss how we architect our solution.

Historically, Control F1 Kentico solutions consisted of the following projects:


  • Core (Utilities, helpers etc.)
  • Data (DAL layer)
    • Extensions (Extensions to CMS ‘entities’ – out of the box info, objects etc.)
    • Mappings (Classes that map ‘entities’ to ‘domain’ objects)
    • Repositories (Data repository classes to wrap up Kentico providers)
  • Domain
    • DTOs / DTO Extensions
  • Services
    • Service classes that typically wrap up the repository layer but return a ServiceResult<T> object, allowing consistent consumption from the client

The Service and Data projects typically contain both the interface and the implementation, meaning that to use a service you need to reference the Services project.

This is absolutely fine from the CMSApp project – you can happily reference Services and use them as you wish, using your IoC container of choice. But it’s time to revisit the two fundamental problems with Kentico caused by the CMSApp_AppCode project:

1. Any custom objects or providers generated from custom classes in Kentico are generated here.

Going back to our application architecture, we have a Data project which we use to wrap up out-of-the-box Kentico info and provider objects. We also want to do this with any custom objects generated from custom classes in the CMS, meaning that we need to add a reference to CMSApp_AppCode from our Data project. This leads us nicely on to problem 2…

2. Any custom code that needs to inherit from CMSLoaderAttribute (custom events, extending the order provider etc) HAS to go here

Consider an eCommerce website where we might want to trigger custom code when the ‘Paid’ event is fired. We’d typically do this by creating a custom provider object that extends OrderInfoProvider, and by overriding the ProcessOrderIsPaidChangeInternal method to check for the IsPaid property.

public class MyCustomOrderProvider : OrderInfoProvider {
    protected override void ProcessOrderIsPaidChangeInternal(OrderInfo orderInfo) {
        if (orderInfo.OrderIsPaid) {
            // TODO - our custom code here
        }
        base.ProcessOrderIsPaidChangeInternal(orderInfo); // keep the default behaviour
    }
}

Now say we want to execute some custom code written in an IOrderService: we’d have to add a reference to the Services project.

Eek, fail!


When we try to add a reference to our Services project from CMSApp_AppCode, we get a circular reference error. This makes complete sense: CMSApp_AppCode is referenced by Data, which is referenced by Services, so with our current architecture there’s no obvious way to use custom services in CMSApp_AppCode. This is obviously no good – we’ve already written and tested our custom code, and we don’t want to recreate it using Kentico’s provider objects; the whole point was to abstract this away.

The solution is actually fairly simple, albeit one that requires a bit of refactoring and the use of reflection. Reflection, used with care, is a great tool that allows you to load an instance of an object at runtime – which is perfect for this scenario. Here is another good post on CodeProject explaining reflection in .NET.

Firstly, consider the good design principles we were discussing initially – particularly separation of concerns. If we’re using inversion of control, our consuming application doesn’t really need to know implementation details. All it needs to know is that we have an IOrderService which contains a method called DoStuff(). With this in mind, we can refactor our application to split out interfaces and implementations into separate projects. The solution now looks like:


  • Core
  • Data
  • Domain
  • Services
  • Interfaces
    • Services
    • Data

Our CMSApp_AppCode and CMSApp projects can now reference the Interfaces project. However, we still need to create instances of the implementing classes in order to use them. As we discussed earlier, CMSApp can happily reference the services directly, as there are no dependencies on CMSApp from the n-tier layer. From CMSApp_AppCode we can’t reference the services directly, but because we’ve abstracted interfaces from implementations, we can load the implementations dynamically using reflection and map them to the correct interface.

Steps to achieve this are as follows:

  1. We need to load the assemblies – in this instance we need both Data and Services. We’re using dependency injection in our services and need to pass in the repositories from the data project. We’ve created a helper method to allow us to get the correct directory without specifying the full file path.
    var dataAssembly = Assembly.LoadFile(string.Concat(IoHelper.AssemblyDirectory, "\\Data.dll"));
    var servicesAssembly = Assembly.LoadFile(string.Concat(IoHelper.AssemblyDirectory, "\\Services.dll"));


  2. Next we need to search the loaded assembly for an exported type that is assignable to our interface. In this case we’ve created an extension method that returns the first matching type (with an optional name parameter for when we have multiple implementations of an interface).
    public static Type GetType<T>(this Assembly assembly, string name = null) {
        return (from type in assembly.GetExportedTypes()
                where typeof(T).IsAssignableFrom(type)
                && (name == null || type.Name == name)
                select type).FirstOrDefault();
    }

    var orderMapperType = dataAssembly.GetType<IMapper<Order, OrderInfo>>();
    var orderRepositoryType = dataAssembly.GetType<IOrderRepository>();
    var orderServiceType = servicesAssembly.GetType<IOrderService>();


  3. And finally we create new instances, passing in any dependencies.
    var orderMapper = (IMapper<Order, OrderInfo>)Activator.CreateInstance(orderMapperType);
    var orderRepository = (IOrderRepository)Activator.CreateInstance(orderRepositoryType, orderMapper);
    var orderService = (IOrderService)Activator.CreateInstance(orderServiceType, orderRepository);

It’s probably also worth mentioning IoC in a bit more detail. Kentico 8 is primarily a WebForms application, meaning that whilst we could integrate an off-the-shelf library such as Ninject, Unity or StructureMap, these libraries are quite fiddly to get working unless you’re using an MVP pattern – which Kentico doesn’t. We want a well-architected solution for our custom code that follows the good design principles we’ve previously discussed, but we don’t want to spend too much time fighting the way Kentico works.

In this instance we decided to create our own simple IServiceContainer. This is an interface that sits in the Interfaces project under the Ioc namespace and contains public properties for the services we want to expose. Repositories and mappings are kept private: they are used internally, and we don’t want the client to have access to them.

public interface IServiceContainer {
    IOrderService OrderService { get; }
}

Our solution contains two implementations: one within CMSApp that creates new instances of the implementations directly, and one within CMSApp_AppCode that uses reflection to create instances, as previously discussed. We’re also using lazy loading, meaning instances are only created when we actually need them. We typically then create ‘base’ abstract classes for CMSWebParts / Modules / Loaders etc. that contain a protected property which in turn holds the IServiceContainer implementation. This allows us to call ServiceContainer.OrderService.DoStuff() and so on.
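To show how these get consumed, here’s a minimal sketch of the kind of ‘base’ web part described above (ServiceAwareWebPart is an invented name; CMSAbstractWebPart is Kentico’s standard web part base class; the container implementations follow below, and in CMSApp_AppCode you’d new up the reflection-based container instead):

public abstract class ServiceAwareWebPart : CMSAbstractWebPart {
    private IServiceContainer _serviceContainer;
    // lazily created, so a web part only pays for the container when it uses it
    protected IServiceContainer ServiceContainer {
        get { return _serviceContainer ?? (_serviceContainer = new CMSAppServiceContainer()); }
    }
}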

public class CMSAppServiceContainer : IServiceContainer {
    private IOrderService _orderService;
    public IOrderService OrderService {
        // direct instantiation: CMSApp can reference Services directly
        get { return _orderService ?? (_orderService = new OrderService()); }
    }
}

public class CMSAppAppCodeServiceContainer : IServiceContainer {
    private IOrderService _orderService;
    public IOrderService OrderService {
        get {
            if (_orderService == null) {
                // orderServiceType and orderRepository come from the reflection
                // steps shown earlier
                _orderService = (IOrderService)Activator.CreateInstance(orderServiceType, orderRepository);
            }
            return _orderService;
        }
    }
}

In summary, I hope you’ve found this useful, and that I’ve successfully demonstrated that, with a bit of effort, it’s fairly straightforward to write abstracted, testable code within your Kentico solutions that can be used in both the CMSApp and CMSApp_AppCode projects.


Control F1 wins an Examiner Business Award!


Last night the Control F1 team were suited and booted for the Examiner Business Awards, and we’re delighted to share that we were the proud recipients of the University of Huddersfield’s Innovation and Enterprise Award.

We fought off stiff competition from worthy finalists Wellhouse Leisure and The Flood Company Commercial Ltd. to win the accolade.

Our Co-founder and Technical Director Carl said:

“We’ve put a lot into research and development – to the point of really pushing the boundaries – and it’s wonderful to see it paying off. Innovation is a core value for us and we’re delighted to have this recognised through tonight’s award.”