Building with .NET Core and Docker in TeamCity on AWS

By Dr Philip Kendall, Lead Analyst, Control F1.


At Control F1, we’re always evaluating the latest technologies to see if and how they’ll fit with our clients’ needs. One of our core strengths is .NET development, so we’ve recently been looking at the newly released Visual Studio 2017 along with .NET Core 1.1, combining these with our ongoing use of Docker to create microservices. We like all our projects to have continuous integration to ensure a consistent and repeatable build process – in our case, we use a TeamCity instance running in AWS. However, actually getting everything to build in TeamCity wasn’t quite as easy as we would have hoped due to a few minor niggles, so I’ve put together this blog post to capture everything we needed to do.

To read the rest of this blog, please click here


Labels, Camera, Action…!

By Control F1 Lead Architect Phil Kendall.

Control F1 were asked earlier this year to work with a global pharma company to write the control software for a complex piece of physical hardware. Integrating all the moving pieces had proved a challenge, so our client needed a company with extensive experience in developing complex pieces of Windows software. From the specification supplied by our client, we quickly identified that there were going to be two main challenges in this project:

  • Integrating with the hardware in the project: four barcode-reading cameras from Cognex, and a Siemens S7 PLC, for which the control software (and physical machine) was supplied by HERMA.
  • Being able to develop and test the software. There was only one instance of the HERMA machine, and that was already installed on the client’s site (and it’s too big for our office anyway!); similarly, we weren’t going to have enough Cognex cameras for everybody working on the project to have a full set.

Integrating with the hardware

Interfacing with the Cognex cameras themselves is relatively easy, as Cognex supply a full .NET SDK and set of device drivers to perform the “grunt work” of communicating with the cameras. However, the SDK is still relatively low-level: it lets you perform just about anything with the cameras, but obviously doesn’t have any business domain specific functions. On a technical note, the SDK is also a little bit “old school” and doesn’t make use of the latest and greatest .NET features – a decision which is completely understandable from Cognex’s point of view, as they need their SDK to be usable by as many consumers as possible, but one which does mean the SDK doesn’t quite fit neatly into a modern .NET application.

To work around both these issues, we developed a wrapper around the Cognex SDK that both encapsulates the low-level functionality in the Cognex SDK into the higher level business functionality that we needed for the project, and also presents a more modern .NET style interface, for example using lambda functions rather than delegates. The library has very much been designed to be a generic wrapper for the Cognex SDK so that we can re-use it in any future projects which use the Cognex cameras.
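The real wrapper is .NET code around the proprietary Cognex SDK, but the general shape of the modernisation is easy to sketch. Here is a minimal, hypothetical illustration (in Java for brevity; every type and method name below is invented, and the real SDK types are different) of wrapping an old-style callback listener behind a lambda-friendly API:

```java
import java.util.function.Consumer;

// Hypothetical stand-in for an "old school" SDK listener interface.
interface LegacyImageListener {
    void onImageAcquired(String barcode);
}

// Hypothetical stand-in for a low-level SDK camera object.
class LegacyCamera {
    private LegacyImageListener listener;

    void setListener(LegacyImageListener listener) {
        this.listener = listener;
    }

    // Simulate the camera decoding a barcode and firing its callback.
    void simulateRead(String barcode) {
        if (listener != null) {
            listener.onImageAcquired(barcode);
        }
    }
}

// The wrapper: exposes a modern, lambda-friendly API over the legacy listener.
class CameraWrapper {
    private final LegacyCamera camera;

    CameraWrapper(LegacyCamera camera) {
        this.camera = camera;
    }

    void onBarcode(Consumer<String> handler) {
        camera.setListener(handler::accept);
    }
}

public class WrapperDemo {
    public static void main(String[] args) {
        LegacyCamera camera = new LegacyCamera();
        CameraWrapper wrapper = new CameraWrapper(camera);
        // Consumers register a one-line lambda instead of implementing
        // a listener interface.
        wrapper.onBarcode(code -> System.out.println("Read: " + code));
        camera.simulateRead("1234567890");
    }
}
```

The consuming code registers a one-line lambda rather than implementing a callback interface, which is the essence of the “more modern .NET style interface” described above.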

For the Siemens S7, we did a small amount of research and found the S7.Net Plus library. Once again, this enables low-level communications with the S7 PLC so we wrapped it in a higher level interface which implemented the business logic that HERMA had built on top of the S7 PLC.

Both libraries were tested against the real hardware when we had access to it: the Cognex library by actually having a camera here at Control F1 HQ, and the HERMA library with assistance from HERMA, who were able to set up a copy of their software at their site and give us remote access.

Developing and testing

As noted above, our big challenge here was how to develop and test the software without everybody having access to cameras and the HERMA machine. The trick was simply to remove the requirement for everybody to have hardware: by developing a facade around the Cognex and HERMA libraries, we were able to use either the real interfaces to the hardware, or an emulator of each device which we developed. The emulators were configurable so that we could adjust their behaviour for various cases – for example, simulating a misread from one of the Cognex cameras, or a fault from the HERMA system.
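As a sketch of the facade idea (hypothetical, and in Java rather than the .NET of the real project), the key point is that callers depend only on an interface, so a configurable emulator can stand in for the real device driver:

```java
// A hypothetical facade over a barcode camera: callers depend only on this
// interface, so a real driver or an emulator can be swapped in freely.
interface BarcodeCamera {
    String read(); // returns the decoded barcode, or null on a misread
}

// Emulator with configurable behaviour, e.g. forcing a misread for testing.
class EmulatedCamera implements BarcodeCamera {
    private final String barcode;
    private final boolean simulateMisread;

    EmulatedCamera(String barcode, boolean simulateMisread) {
        this.barcode = barcode;
        this.simulateMisread = simulateMisread;
    }

    @Override
    public String read() {
        return simulateMisread ? null : barcode;
    }
}

public class FacadeDemo {
    public static void main(String[] args) {
        BarcodeCamera good = new EmulatedCamera("ABC-123", false);
        BarcodeCamera faulty = new EmulatedCamera("ABC-123", true);
        System.out.println("good read: " + good.read());
        System.out.println("faulty read: " + faulty.read());
    }
}
```

In the real project a `CognexCamera implements BarcodeCamera`-style class would wrap the SDK, and the application code never needs to know which implementation it has been given.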

The emulators were invaluable while developing the project: at one stage they allowed us to have three developers and a tester working on the project simultaneously, and also to provide a demo VM which we could give to the client to let them see how the user interface was evolving – all without needing any hardware or any travel, with the obvious savings of time and money that brings.

So, did it all work?

Now, it’s all well and good developing against emulators, but emulators are no good if they don’t have the same behaviour as the real system. The moment of truth came when we sent our COO, Nick Payne, and Lead Architect/Developer, Phil Kendall, to the client’s site in order to put everything together on the real hardware… and the answer is that things worked pretty well. We’d be lying if we said everything worked perfectly first time, but the vast majority of the hardware was up and running within a day. The rest of the week was a pattern familiar to anyone who’s done integration testing before: mostly fairly quiet while we ran through the client’s thorough test plan (thanks Nick for his sterling work keeping everything running smoothly) interspersed with occasional moments of panic as the test plan revealed an issue (thanks Phil for some frantic and occasionally late-night hacking to fix the issues). By the end of the week, the client had signed the machine off to move into production, and Nick and Phil just about managed to get home at a reasonable time on Friday evening.

What did we learn?

From a Control F1 point of view, the most important knowledge we gleaned from this project came from the work we did with the Cognex cameras and SDK – they’re some very nice pieces of kit, the SDK is a solid piece of code, and we’ve now got an emulator framework we can use to accelerate development of any future projects using the Cognex cameras. Similarly, we’ve now got a way to interface with Siemens S7 PLCs which we can reuse in future projects.

Other than that, the project reinforced a couple of good software engineering practices which we knew about:

  • Do the less understood bits of the project first to reduce risk. By focusing our initial development efforts on the hardware integration side, we were able to reduce the uncertainty in our planning – this in turn meant that we were able to confidently commit to the client’s timescales relatively early on the project.
  • Log everything. When you’re working with real hardware on a machine at a remote site, being able to get an accurate record of what happened when a problem occurred is invaluable. However, don’t log too much – if the camera is giving you a 30-line output, you don’t need to log that output as it passes through every level of the system; all you end up with then is a log file which is very hard to read.

Sound interesting?

If you’ve got a project which it sounds like we might be able to help you with, please drop us a line.

R, spray-can and Docker

Control F1 Lead Architect Phil Kendall gives some advice on performing R calculations in microservices.

Back in January this year, Control F1 started work as the lead member of the i-Motors consortium, a UK Government and industry funded* project working towards viable, commercially sustainable Smart Mobility applications for connected and autonomous vehicles. One of the key elements we will be delivering as part of the project is the capability to add predictive and contextual intelligence to connected vehicles, allowing individual drivers, fleet managers and infrastructure providers alike to make better decisions about transport in the UK. At a coding level, this means we need to get some data science / machine learning / AI code written and deployed. This post gives a quick run through of the technology choices we made, why we made them, and how we implemented it all.

Why R?

There are effectively two choices for doing “small scale” (i.e. fits into the memory of one machine) data science: R and Python (with scikit-learn). It just so happens that I’m much more an R guy than a Python guy, and the algorithms we wanted to deploy here were written in R.

Why Docker?

For i-Motors, we’ve gone down the microservices route for a lot of the common reasons, including the ability to independently improve the various components of our system without needing to do high risk “Big Bang” deployments where we have to change every critical part of the system at once. There are obviously alternatives to Docker for running microservices – while this post is Docker-specific, it shouldn’t be too hard to adapt what’s here to another container platform.

Why spray-can?

This is where it gets a bit more complicated! Excluding the very-much-cutting-edge Docker for Windows Server 2016, running Docker means running Linux. At Control F1 we’re mostly a .NET house on the server side, so a number of the i-Motors components have been written in .NET Core and very happily deploy themselves on Docker. However, the .NET to R bridge hasn’t yet been ported to .NET Core, so there’s no simple way for a .NET Core application to talk to R at the moment. I investigated a couple of other options for bridging to R, including using node.js and the rstats package. Unfortunately, the official release of rstats doesn’t work with the latest versions of node, and while there are forks out there which fix the issue, basing a long-term project on a package without official releases didn’t seem like the wisest solution. The one option which did present itself was JRI, the Java/R Interface, which I’d made some use of before when running on the JVM.

When it comes to JVM languages, I’m a big fan of Scala and the spray toolkit – again, the solution here isn’t particularly tied to Scala and should be relatively easy to adapt to any other JVM language and/or web API framework.


All the code for this blog post is available from Bitbucket. I’ll give a brief overview of the code here.


The web API is set up in RSprayCanDockerApp and RSprayCanDockerActor. This is pretty much a straight copy of the spray-can “Getting Started” app, with the notable exception that we bind the listener to 0.0.0.0 rather than localhost – this is important as the requests will be coming from an unknown source when deployed in Docker.

R integration

The guts of the R integration happens in the SynchronizedRengine class and its associated companion object. There are two non-trivial bits of behaviour here:

  • The guts of R are inherently a singleton object – there is one and only one underlying R engine per JVM. SynchronizedRengine.performCalculation() has a simple lock around the call into the R engine so that we have one and only one thread accessing the R engine.
  • The error handling is “a bit quirky”. If the R engine encounters an error, it calls the rWriteConsole() function in the RMainLoopCallbacks interface. The natural thing to do here would be to throw an exception, but unfortunately the native code between the Rengine.eval() call and the callback silently swallows the exception, so we can’t do that; instead we stash the exception away in a variable. If the evaluation failed (indicated by it returning null), we then retrieve the stashed away exception. In Scala, we wrap this into a Try object, but in a less functional language you could just re-throw the exception at this point.
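A minimal sketch of those two behaviours together (in Java, since JRI runs on the JVM; the names below are stand-ins for the real SynchronizedRengine and Rengine.eval(), and the “engine” here is faked so the sketch is self-contained):

```java
// Hypothetical sketch: a singleton engine guarded by a lock, with errors
// reported via a callback that cannot throw, so the exception is stashed
// and re-raised after the call returns.
public class SynchronizedEngine {
    private static final Object LOCK = new Object();
    private static RuntimeException stashedError;

    // Stand-in for the callback the native engine invokes on error; an
    // exception thrown from here would be silently swallowed by the native
    // layer, so we stash it in a variable instead.
    static void onEngineError(String message) {
        stashedError = new RuntimeException(message);
    }

    // Stand-in for Rengine.eval(): signals failure by returning null.
    static Double rawEval(String expression) {
        if (expression.contains("bad")) {
            onEngineError("parse error in: " + expression);
            return null;
        }
        return 4.6; // pretend the engine computed something
    }

    public static Double performCalculation(String expression) {
        synchronized (LOCK) { // one and only one thread in the engine
            stashedError = null;
            Double result = rawEval(expression);
            if (result == null && stashedError != null) {
                throw stashedError; // surface the stashed failure
            }
            return result;
        }
    }

    public static void main(String[] args) {
        System.out.println(performCalculation("1.2 + 3.4"));
        try {
            performCalculation("bad expression");
        } catch (RuntimeException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

In the real Scala code the stashed failure is wrapped in a `Try` rather than re-thrown, but the stash-then-check shape is the same.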

Docker integration

The Docker integration is done via SBT Native Packager and is pretty vanilla; three things to note:

  • The Docker image is based on our “OpenJRE with R” image – this is the standard OpenJDK image but with R version 3.3 installed, and the JRI bridge library installed in /opt/lib. The minimal source for this image is also on Bitbucket.
  • We pass the relevant option to the JVM so that it can find the JRI bridge library: -Djava.library.path=/opt/lib
  • We set the appropriate environment variable so that the JRI bridge library can find R itself: R_HOME=/usr/lib/R
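Putting those three points together, the base image might be sketched roughly like this (a hypothetical Dockerfile: the base image tag and library file name are assumptions for illustration, and the real minimal source is on Bitbucket):

```dockerfile
# Sketch of the "OpenJRE with R" base image described above.
FROM openjdk:8-jre
# Install R from the distribution's packages
RUN apt-get update \
    && apt-get install -y --no-install-recommends r-base \
    && rm -rf /var/lib/apt/lists/*
# The JRI bridge library lives in /opt/lib...
COPY libjri.so /opt/lib/
# ...and the bridge needs to know where R itself lives
ENV R_HOME=/usr/lib/R
# Application images based on this one then start the JVM with
# -Djava.library.path=/opt/lib so it can load the bridge library
```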

If you just want a play with the finished Docker container, it’s available from Docker Hub; just run it up with “docker run -p 8080:8080 controlf1/r-spraycan-docker”.

Putting it all together

For this demo, the actual maths I’m getting R to do is very simple: just adding two numbers. Obviously, we don’t need R to do that but in the real world you should be able to substitute your own algorithms easily – we’ve already deployed four separate machine learning algorithms into i-Motors based on this pattern. But as demos are always good:

$ curl http://localhost:8080/add/1.2/3.4


Where next?

What we’ll be working on in the near future is investigating how this solution scales with the load on the system – a single instance of the microservice will obviously be limited by the single-threaded nature of R, but we should be able to bring up multiple instances of the microservice (“scale out” rather than “scale up”) to handle the level of requests we expect i-Motors to produce. I’m not foreseeing any problems with this approach, but we’ll certainly be keeping an eye on the performance numbers of our “intelligence services” as we increase the number of vehicles in the system.

* i-Motors is jointly funded by government and industry. The government’s £100m Intelligent Mobility fund is administered by the Centre for Connected and Autonomous Vehicles (CCAV) and delivered by the UK’s innovation agency, Innovate UK.

i-Motors receives £1.3m from Innovate UK

We are excited to be able to share that i-Motors – a new Control F1-led telematics project – has been awarded a grant of £1.3M by Innovate UK.

We’ll be partnering with the University of Nottingham’s Geospatial Institute and Human Factors Research Group, traffic management specialists InfoHub Ltd, remote sensing experts Head Communications and telecoms gurus Huduma to deliver the project.

Picture a future without gridlock. A future in which our city streets, roads and highways are safer, cleaner and greener. In which vehicles can self-diagnose a fault and order a new component, or automatically detect a hazard such as ice on the road before it’s too late and warn other vehicles around them too. A future in which cars can drive themselves…

That future isn’t far away: it is predicted that the UK will see huge growth in the production of autonomous (driverless) cars by 2030. Meanwhile the production of connected cars – cars with inbuilt “telematics” devices, capable of communicating to other vehicles and machines – is forecast to rise from around 0.8 million in 2015 to 2 million in 2025, accounting for 95% of all cars produced in the UK.

Yet whilst the number of cars with the technology to connect is already rising, little progress has been made towards putting this technology to use.

i-Motors plans to address this issue. Capitalising on our extensive telematics experience (read about our telematics partnership with the RAC here), we plan to establish a set of universal standards on how vehicles communicate with each other, and with other machines. Making use of connected cars’ ability to support apps, we’ll be working with academics from Nottingham University’s Geospatial Institute and Human Factors Research Group to build a mobile platform that allows vehicles of different manufacturers and origins to transfer and store data.

We’ll use patented technology, allowing data to be collected and analysed at greater speeds than ever before. We’ll also be working alongside traffic management experts InfoHub Ltd to combine these data with other data sources such as weather reports, event data and traffic feeds, easing congestion and increasing safety through realtime updates and route planning. In addition, the i-Motors platform will allow vehicles to report errors, which can be automatically crosschecked against similar reports to diagnose the problem and reduce the chance of a breakdown.

We will also be working with Head Communications to address the issue of limited connectivity by developing sensors capable of transmitting data to the cloud in realtime. Through installing these sensors – known as Beyond Line of Sight (BLOS) – vehicles can remain connected with sub-metre precision, even when out of internet and GPS range. And we will be collaborating with telecoms gurus Huduma to make i-Motors sustainable and commercially successful in the long term.

i-Motors has the backing of Nottingham, Coventry and Sheffield City Councils, where the new technology will first be piloted, and a letter of support from the Transport Systems and Satellite Applications Catapult, and fleet management experts Isotrak. The project will make use of live vehicle data provided by Ford, which has an ongoing relationship with the University of Nottingham.

Our MD Andy Dumbell commented:

“We are delighted to have been awarded the funding by Innovate UK to lead on this ground-breaking project. Connected and driverless cars offer us the opportunity to make huge strides in terms of reducing congestion, bringing down emissions, and even saving lives. Yet as is always the case when dealing with big data, it’s only effective if you know how to use it. We believe that through i-Motors we can set the standard for connected and autonomous vehicles and redefine the future of our streets, highways and cities.”

A Sparkling View from the Canals

Control F1 sent Lead Developer Phil Kendall and Senior Developer Kevin Wood over to Amsterdam for the first European edition of Spark Summit. Here’s their summary of the conference.

One of the themes from Strata + Hadoop World in London earlier this year was the rise of Apache Spark as the new darling of the big data processing world. If anything, that trend has accelerated since May, but it has perhaps also moved in a slightly different direction as well – while the majority of the companies talking about Spark at Strata + Hadoop World were the innovative, disruptive small businesses, at Spark Summit there were a lot of big enterprises who were either building their big data infrastructure on Spark, or moving their infrastructure from “classical” Hadoop MapReduce to Spark. From a business point of view, that’s probably the headline for the conference, but here’s some more technical bits:

The Future of MapReduce

MapReduce is dead. It’s going to hang on for a few years yet due to the number of production deployments which exist, but I don’t think you would have been able to find anyone at the conference who was intending to use MapReduce for any of their new deployments. Of course, it should be remembered that this was the Spark Summit, so this won’t have been a representative sample, but when you’ve some of the biggest players in the big data space like Cloudera and Hortonworks joining in on the bandwagon, I certainly think this is the way that things are going.

In consequence, the Lambda Architecture is on its way out as well. Nobody ever really liked having to maintain two entirely separate systems for processing their data, but at the time there really wasn’t a better way. This is a movement which started to gain momentum with Jay Kreps’ “Questioning the Lambda Architecture” article last year, but as we now have an enterprise-ready framework which can handle both the streaming and batch sides of the processing coin, it’s time to move on to something with less overhead – quite possibly the “SMACK” stack of Spark, Mesos, Akka, Cassandra and Kafka, something which Helena Edelson implored us to do during her talk. Just hope your kids don’t go around saying “My mum/dad works with Smack”.

The Future of Languages for Spark

Scala is the language for interacting with Spark. While the conference was pretty much split down the middle between folks using Scala and folks using Python, the direction the Spark world is going was perhaps most obviously demonstrated by the spontaneous round of applause which Vincent Saulys got for his “Please use Scala!” comment during his keynote presentation. The theme here was very much that while there were people moving from Python to Scala, nobody was going the other way. On the other hand, the newcomer on the block is SparkR, which has the potential to open up Spark to the large community of data scientists out there who already know R. The support in Spark 1.5 probably isn’t quite there yet to really open the door, but improvements are coming in Spark 1.6, and the developers are definitely looking for feedback from the R community as to which features should be a priority, so it won’t be long before you see a lot of people using Spark and R.

The Future of Spark APIs

DataFrames are the future for Spark applications. Similarly to MapReduce, while nobody’s going to be killing off the low level way of working directly with resilient distributed datasets (RDDs), the DataFrames API (which is essentially equivalent to Spark SQL) is going to be where a lot of the new work gets done. The major initiative here at the moment is Project Tungsten, which gives a whole number of nice optimisations at the DataFrame level. Why is Spark moving this way? Because it’s easier to optimise when you’re higher up the stack – if you have a holistic view of what the programmer is attempting to accomplish, you can generally optimise that a lot better than if you’re looking at the individual atomic operations (the maps, sorts, reduces and whatever else of RDDs). SQL showed the value of introducing a declarative language for “little” data problems in the 1970s and 1980s; will DataFrames be that solution for big data? Given their position in all of R, Python (via Pandas) and Spark, I’d be willing to bet a small amount of money on “yes”.

On a related topic, if you’ve done any work with Spark, you’ve probably seen the Spark stack diagram by now. However, I thought Reynold Xin’s “different take” on the diagram during his keynote was really powerful – as a developer, this expresses what matters to me – the APIs I can use to interact with Spark. To a very real extent, I don’t care what’s happening under the hood: I just need to know that the mightily clever folks contributing to Spark are working their black magic in that “backend” box which makes everything run super-fast.

The Future of Spark at Control F1

I don’t think it will come as a surprise to anyone who has been following our blog that we’re big fans of Spark here at Control F1. Don’t be surprised if you see it in some of our work in the near future 🙂

Implementing best practice application design principles in Kentico 8


In this post Lead CMS Developer Chris Parkinson discusses how we use n-tier architecture and IoC at Control F1 to allow us to write abstracted, testable custom code within Kentico, and how we then consume this custom code in the CMSApp and CMSApp_AppCode projects.

NB This post assumes a good working knowledge of Kentico 8, plus an understanding of n-tier architecture and SOLID principles, including dependency injection / inversion of control. If the latter is new to you, here is a good place to start.

At Control F1 we like to apply good application design principles to any development project, be it Greenfield, mobile or CMS integration. These typically include the use of dependency injection, n-tier, and separation of concerns to allow custom code to be more easily tested.

Sometimes, however, particularly when working with an off the shelf product, we have to come up with workarounds. Kentico is a brilliant tool for quickly developing feature-rich websites, but as with any proprietary software it isn’t always the easiest to extend. We typically use Kentico web application projects at Control F1. This allows us to more easily integrate Kentico with our build processes using MSBuild (another article entirely).

One of the quirks of web application projects relates to the continued use of an App_Code folder, which is a hangover from the older Website project. In website projects, code added and edited within App_Code is compiled on the fly, allowing server side code to be developed without the need to re-compile. This concept doesn’t really exist in web applications. Here is a good post from Microsoft explaining the differences between websites and web applications in .NET.

Kentico has split out App_Code into its own VS project called CMSApp_AppCode, with any code/folders under it moved under a folder called Old_App_Code.


This causes two fundamental problems:

  1. Any custom objects or providers generated from custom page types or custom classes in Kentico are generated here.
  2. Any custom code that needs to inherit from CMSLoaderAttribute (custom events, extending the order provider etc) HAS to go here.

Remember these two points and we’ll quickly move on and discuss how we architect our solution.

Historically, Control F1 Kentico solutions consisted of the following projects:


  • Core (Utilities, helpers etc.)
  • Data (DAL layer)
    • Extensions (Extensions to CMS ‘entities’ – out of the box info, objects etc.)
    • Mappings (Classes that map ‘entities’ to ‘domain’ objects)
    • Repositories (Data repository classes to wrap up Kentico providers)
  • Domain
    • DTOs / DTO Extensions
  • Services
    • Service classes that typically wrap up the repository layer but return a ServiceResult<T> object, allowing consistent consumption from the client

The Service and Data projects typically contain both the interface and the implementation, meaning that to use a service you need to reference the Services project.

This is absolutely fine from the CMSApp project – you can happily reference Services and use the services as you wish using your IoC container of choice. But it’s time to revisit the two fundamental problems with Kentico caused by the CMSApp_AppCode project:

1. Any custom objects or providers generated from custom classes in Kentico are generated here.

Going back to our application architecture, we have a Data project which we use to wrap up out of the box Kentico info and provider objects. We also want to do this with any custom objects generated from custom classes in the CMS, meaning that we need to add a reference to CMSApp_AppCode from our Data project. This leads us nicely on to problem 2…

2. Any custom code that needs to inherit from CMSLoaderAttribute (custom events, extending the order provider etc) HAS to go here

Consider an eCommerce website where we might want to trigger custom code when the ‘Paid’ event is fired. We’d typically do this by creating a custom provider object that extends OrderInfoProvider, and by overriding the ProcessOrderIsPaidChangeInternal method to check for the IsPaid property.

public class MyCustomOrderProvider : OrderInfoProvider
{
    protected override void ProcessOrderIsPaidChangeInternal(OrderInfo orderInfo)
    {
        if (orderInfo.OrderIsPaid)
        {
            // TODO - our custom code here
        }
    }
}

Now say we want to execute some custom code written in an IOrderService, we’d have to add a reference to the Services project.

Eek, fail!


When we try to add a reference to our Services project from CMSApp_AppCode, we get a circular reference error. This makes complete sense: CMSApp_AppCode is referenced by Data, which is referenced by Services, so with our current architecture there’s no obvious way to use custom services in CMSApp_AppCode. This is obviously no good – we’ve already written and tested our custom code, so we don’t want to recreate it using Kentico’s provider objects – the whole point was abstracting this.

The solution is actually fairly simple, although it requires a bit of refactoring and the use of reflection. Reflection, used with care, is a great tool that allows you to load an instance of an object at runtime – which is perfect for this scenario. Here is another good post on CodeProject explaining reflection in .NET.

Firstly, consider the initial good design principles we were discussing – particularly separation of concerns. If we’re using inversion of control, our consuming application doesn’t really need to know implementation details. All it needs to know is that we have an IOrderService which contains a method called DoStuff().  Therefore, with this in mind we can refactor our application to split out interfaces and implementations into separate projects. The solution now looks like:


  • Core
  • Data
  • Domain
  • Services
  • Interfaces
    • Services
    • Data

Our CMSApp_AppCode and CMSApp projects can now reference the interfaces project. However, we still need to create new implementations of these interfaces in order to use them. As we discussed earlier, CMSApp can happily reference the services directly as there are no dependencies on CMSApp from the n-tier layer. From CMSApp_AppCode we can’t reference services directly, but because we’ve abstracted interfaces from implementations we can load the implementations dynamically using reflection and map them to the correct interface.

Steps to achieve this are as follows:

  1. We need to load the assemblies – in this instance we need both Data and Services. We’re using dependency injection in our services and need to pass in the repositories from the data project. We’ve created a helper method to allow us to get the correct directory without specifying the full file path.
    var dataAssembly = Assembly.LoadFile(string.Concat(IoHelper.AssemblyDirectory, "\\Data.dll"));
    var servicesAssembly = Assembly.LoadFile(string.Concat(IoHelper.AssemblyDirectory, "\\Services.dll"));


  2. Next we need to search the loaded assembly for an exported type that is assignable to our interface. In this case we’ve created an extension method that returns the matching type (with an optional name parameter for when we have multiple implementations of an interface).
    public static Type GetType<T>(this Assembly assembly, string name = null)
    {
        return (from type in assembly.GetExportedTypes()
                where typeof(T).IsAssignableFrom(type)
                && (name == null || type.Name == name)
                select type).FirstOrDefault();
    }

    var orderMapperType = dataAssembly.GetType<IMapper<Order, OrderInfo>>();
    var orderRepositoryType = dataAssembly.GetType<IOrderRepository>();
    var orderServiceType = servicesAssembly.GetType<IOrderService>();


  3. And finally we create new instances, passing in any dependencies.
    var orderMapper = (IMapper<Order, OrderInfo>)Activator.CreateInstance(orderMapperType);
    var orderRepository =
                   (IOrderRepository)Activator.CreateInstance(orderRepositoryType, orderMapper);
    var orderService = (IOrderService)Activator.CreateInstance(orderServiceType, orderRepository);

It’s probably also worth mentioning IoC in a bit more detail. Kentico 8 is primarily a WebForms application, meaning that whilst we could integrate an off the shelf library such as Ninject, Unity or StructureMap, unless you’re using an MVP pattern – which Kentico doesn’t – these libraries are quite fiddly to get working. We want a well architected solution for our custom code that follows the good design principles we’ve previously discussed, but we don’t want to spend too much time fighting the way Kentico works.

In this instance we decided to create our own simple IServiceContainer. This is an interface that sits in the interfaces project under the IoC namespace and contains public properties for the services we want to expose. Repositories and mappings are kept private: these are used internally, and we don’t want the client to have access to them.

public interface IServiceContainer
{
    IOrderService OrderService { get; }
}

Our solution contains two implementations – one within CMSApp that creates new instances of implementations, and one within CMSApp_AppCode that uses reflection to create new instances as previously discussed. We’re also using lazy loading, meaning instances are only created when we actually need them. We typically then create ‘base’ abstract classes for either CMSWebParts / Modules / Loaders etc. that contain a protected property which in turn contains the IServiceContainer implementation. This allows us to call ServiceContainer.OrderService.DoStuff() etc.

public class CMSAppServiceContainer : IServiceContainer
{
    private IOrderService _orderService;

    public IOrderService OrderService
    {
        get
        {
            if (_orderService == null)
            {
                _orderService = new OrderService();
            }

            return _orderService;
        }
    }
}

public class CMSAppAppCodeServiceContainer : IServiceContainer
{
    private IOrderService _orderService;

    public IOrderService OrderService
    {
        get
        {
            if (_orderService == null)
            {
                // _orderServiceType and _orderRepository are initialised
                // using the reflection code shown earlier
                _orderService = (IOrderService)Activator.CreateInstance(_orderServiceType, _orderRepository);
            }

            return _orderService;
        }
    }
}

In summary, I hope you find this useful. I also hope that I’ve successfully demonstrated that with a bit of effort it’s fairly straightforward to write abstracted testable code within your Kentico solutions that can be used in the CMSApp and CMSApp_AppCode projects.


Internet of Things World: Europe 2015 – it’s not just about robots

This post is an extract taken from Control F1 MD Andy Dumbell’s piece for Internet of Things (IoT) World News, following the IoT Europe conference in Berlin. Read the full piece here.

I’m writing this post whilst flying home from Berlin, feeling enthused, excited and inspired, after attending the first “Internet of Things World: Europe” conference. The show itself brought together thought leaders, alliances, and companies big and small from all parts of the evolving IoT (Internet of Things) sector.

Why did we go? Apart from the usual reasons for attending a conference – to learn, network, and pick up free t-shirts – we hoped to gain a better understanding of the IoT ecosystem; to crystallise where we can add value, and to find communities to collaborate with.

So, what exactly do we mean by IoT? This question was raised throughout the event, and I felt quite reassured by the lack of consensus, as we often debate the issue here at Control F1 – “it’s not just about robots!” Everyone had their own definition. One speaker’s presentation started with “IoT = Big Data”. Another view was that the IoT is a less organised version of M2M (machine to machine). Others pondered over whether it’s simply the next generation of M2M.

Here’s my stab at this: the IoT is connecting everyday objects across digital networks – such as the internet – trying to infer meaningful information whilst creating value.  Connected things can include just about any asset: clothing, appliances, vehicles, parcels, people, pets, buildings, planets – the list goes on. The IoT enables communication with such assets, to monitor them through sensing solutions, create intelligence, and manage and control them remotely.

However, for me the more important question is: why does it matter? The simple answer is that the IoT can make our lives better, but it is only worthwhile when it creates real value. For example, IoT innovations can save lives! By generating information and enabling timely communication, we can solve problems and make informed decisions, which leads to intelligence, convenience, efficiencies, effectiveness, smart socks and so on.

One of the highlights from the show was Katja von Raven’s talk on opening doors. Her business, Chamberlain, a manufacturer of smart home control products sold worldwide, has embraced the IoT to create a market leading smart garage door opener. The obvious benefit is increased convenience versus traditional products – you can ask your iPhone “did I leave the door open again?”, and then close it remotely. And Chamberlain has created new value for its customers through an alerting service – 70%+ of them use this feature, and 40% of subscribers say they could not live without it. A simple and effective solution made possible through the IoT.

I always enjoy hearing an inspiring success story – especially a technology driven one. Chamberlain took the brave decision to adopt the IoT and rethink its business model, transforming into a manufacturing and digital tech company. This was driven through consumer-guided decisions to create a useful product, rather than a misguided attraction to shiny new toys adding to the Internet of Pointless Things.

Advancements in connectivity also provided for interesting discussions. We heard about 5G. We heard about LoRa’s mission to standardise low power wide area networks to meet the IoT market needs. And we heard about SIGFOX’s low-cost, low-throughput, low-energy-consumption network – which can literally see through walls!

I was, however, surprised that Bluetooth didn’t have a stronger presence. I attended the Bluetooth Europe conference in London last month where they presented their planned roadmap, which includes mesh network capability, IPv6 support, as well as other interesting advancements that the IoT community could benefit from. The conference would have also been a great place for Amazon to showcase their new AWS IoT services.

Unsurprisingly Big Data and Analytics were also part of the theme, with insights drawn from various verticals on how to get value from billions of connected things. For example, the automotive sector is providing near real-time intelligence to motorists through connected vehicles, interpreting data from sensing solutions and broadcasting updates on congestion, road risk and better route options.

The European Commission talked about their continued support for IoT innovation and future deployment, with hundreds of millions of euros committed to funding research and experimentation, from smart farming and food security, to autonomous vehicles in a connected environment.

The IoT still feels a bit like the Wild West – fast, risky, but an exciting place to be. Past scepticism has subsided, with developments from major players making the IoT a tangible business opportunity. The pace of innovation is incredible. It has been catalysed by major advancements in connectivity, cloud tech, hardware, and driven by a generation of enthusiastic startups, innovators, forward thinking businesses, and communities, driving industry forward.

As a company we have worked in IoT since our inception in 2010, providing innovative software solutions and consultancy for big brands and startups alike. These have ranged from high-end fashion accessories that double as a personal security device, to the technology that allowed Nestle to launch a competition with hidden tracking devices in its chocolate bars (lucky winners were hunted down and handed a briefcase containing thousands of pounds!).

In summary, the IoT Europe conference served to reaffirm our strategy, and inspired us to continue innovating. There is no doubt that the IoT is changing our lives for the better, emerging as the third wave of development for the internet. The future will be quite different from the world we know today. We want to be part of the driving force that gets us there.

Configuring Elastic MapReduce 4 applications from the AWS console

Lead Developer Phil Kendall recently blogged about getting started with Spark on EMR. In this follow up post he explains how to configure EMR 4 applications from the AWS console.

Update 12th November: Jon Fritz, one of the Elastic MapReduce PM team, let me know that they’ve now fixed this bug in the console.

Back in July, Amazon released “v4” of their Elastic MapReduce platform which introduced some fairly big changes as to how applications are configured. While there are some nice examples on that page, those examples don’t work if you try them in the AWS console: if you copy and paste an example into the “Edit software settings” box and then try and create a cluster, you get the following error:
…which is perhaps not the world’s most informative error ever, and definitely a bit disappointing when all you’ve done is taken an AWS-supplied example. After much frustration, I finally discovered that it’s the capitalisation of the keys that is significant: if you change the supplied example to

    [
      {
        "classification": "core-site",
        "properties": {
          "hadoop.security.groups.cache.secs": "250"
        }
      },
      {
        "classification": "mapred-site",
        "properties": {
          "mapred.tasktracker.map.tasks.maximum": "2",
          "mapreduce.map.sort.spill.percent": "90",
          "mapreduce.tasktracker.reduce.tasks.maximum": "5"
        }
      }
    ]

…then everything works just fine – note the lower case “c” and “p” in “classification” and “properties” as opposed to the upper case versions used in AWS’s example. I’ve sent feedback to the AWS team on this one so I suspect it may end up getting fixed pretty soon, but if anyone else is suffering from the same problem then hopefully this gets you out of a hole!
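If you’d rather not hand-edit the example every time, the fix is mechanical enough to script. The helper below is just my own illustrative sketch (the function name and the recursive handling are mine, not anything AWS supplies) which lowercases the offending keys in a configuration document before you paste it into the console:

```python
import json

# The keys whose capitalisation the console (at the time of writing) rejects
CONSOLE_KEYS = ("Classification", "Properties", "Configurations")

def lowercase_config_keys(config):
    """Recursively lowercase the EMR configuration keys so the JSON is
    accepted by the console's "Edit software settings" box."""
    if isinstance(config, list):
        return [lowercase_config_keys(item) for item in config]
    if isinstance(config, dict):
        return {
            (key.lower() if key in CONSOLE_KEYS else key): lowercase_config_keys(value)
            for key, value in config.items()
        }
    return config

# The shape of the AWS-supplied example, with its upper-case keys
aws_example = [
    {
        "Classification": "mapred-site",
        "Properties": {
            "mapreduce.tasktracker.reduce.tasks.maximum": "5"
        }
    }
]

print(json.dumps(lowercase_config_keys(aws_example), indent=2))
```

Note that the property names themselves (e.g. the Hadoop settings) are left untouched – it’s only the wrapper keys that the console is picky about.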

Adventures in Spark on Elastic MapReduce 4

Lead Developer Phil Kendall on getting started with Spark on EMR.

In June, Spark, the up and coming big data processing framework, became a first class citizen on Amazon Elastic MapReduce (EMR). Last month, Amazon announced EMR release 4.0.0 which “brings many changes to the platform”. However, some of those changes lead to a couple of “gotchas” when trying to run Spark on EMR, so this post is a quick walk through the issues I found when getting started with Spark on EMR and (mostly!) solutions to those issues.

Running the demo

Jon Fritz’s blog post announcing the availability of Spark on EMR contained a nice simple example of getting a Spark application up and running on EMR. Unfortunately, if you try and run through that demo on the EMR 4.0.0 release, then you get an error when trying to fetch the flightsample jar from S3:

Exception in thread "main" java.lang.RuntimeException: Local file does not exist.

This one turns out to be not too hard to fix – the EMR 4.0.0 release has just moved the location of the hdfs utility so it’s now on the normal PATH rather than being installed in the hadoop user’s home directory. That can trivially be fixed by just removing the absolute path, but while we’re in the area, we can also upgrade to using the new command-runner rather than script-runner. Once you’ve done both those changes, the Custom JAR step should look like this:


…and you can then happily run through the rest of the demo.

Spark Streaming on Elastic MapReduce

The next thing you might try is to get Spark Streaming running on EMR. On the face of it, this looks to be nice and easy – just push the jar containing your streaming application onto the cluster and away you go. And your application starts… and then just sits there, steadfastly refusing to do anything at all. Experienced Spark Streaming folk may well recognise this as a symptom of the executors not having enough cores to run their workloads – each receiver you create occupies a core, so you need to ensure that there are enough cores in your cluster both to run the receivers and to process the data. You’d hope this wouldn’t be a problem here, as the m3.xlarge instances that you get by default when creating an EMR cluster each have 4 cores, so there must be something else going on.

The issue here turns out to be the default Spark configuration when running on YARN, which is what EMR uses for its cluster management – by default each executor is allocated only one core, so your nice cluster of two 4-core machines was actually sitting there with three quarters of its processors doing nothing. Getting around this is what the “-x” option mentioned in Jon Fritz’s blog post did – it ensured that Spark used all the available resources on the cluster – but that setting isn’t available with EMR 4.0.0. The equivalent option for the new version is mentioned in the “Additional EMR Configuration Options for Spark” section of the EMR 4.0.0 announcement: you need to set the “maximizeResourceAllocation” property. To do that, select “Go to advanced options” when creating the cluster, expand the “Edit software settings (optional)” section and then add in the appropriate configuration string: “classification=spark,properties=[maximizeResourceAllocation=true]”. This does unfortunately mean that the “quick options” path for creating a cluster is pretty much useless when using Spark, as you’re always going to want to set this option or a variant of it.
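For reference, the JSON equivalent of that configuration string – in the lower-case form that the console’s “Edit software settings” box accepts, as discussed in the previous post – looks like this:

```json
[
  {
    "classification": "spark",
    "properties": {
      "maximizeResourceAllocation": "true"
    }
  }
]
```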

Getting to the Spark web UI

When you’re running a Spark application, you may well be used to using the Spark web UI to keep an eye on your job. However, getting to the web UI on an EMR cluster isn’t as easy as it might appear at first glance. You can happily point your web browser to http://<cluster master DNS address>:4040/ as usual, but that returns a redirect to http://ip-<numbers>.<region>.compute.internal:20888/proxy/application_<n>_<n>/ containing a reference to the internal DNS name of the machine which isn’t too helpful if you’re outside the VPC inside which the cluster is running. I haven’t found a perfect solution to this one yet, but you can just replace “ip-<numbers>.<region>.compute.internal” with the external DNS name of the master – so you’re pointing at something like http://<cluster master DNS address>:20888/proxy/application_<n>_<n>/ – and then you can happily browse around the web UI from there.
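If you find yourself doing that substitution a lot, it’s easy to script. This is purely an illustrative sketch – the function name and the DNS names in the example are made up – which swaps the internal host name in the proxy redirect for the master’s external one while keeping the port and path intact:

```python
from urllib.parse import urlsplit, urlunsplit

def rewrite_proxy_url(redirect_url, master_public_dns):
    """Replace the internal DNS name in the YARN proxy redirect URL with
    the cluster master's public DNS name, preserving port, path and query."""
    parts = urlsplit(redirect_url)
    netloc = f"{master_public_dns}:{parts.port}"
    return urlunsplit((parts.scheme, netloc, parts.path, parts.query, parts.fragment))

print(rewrite_proxy_url(
    "http://ip-10-0-0-1.eu-west-1.compute.internal:20888/proxy/application_1_1/",
    "ec2-52-0-0-1.eu-west-1.compute.amazonaws.com"))
```

Remember that this only works because the security group on the master allows you to reach port 20888 from wherever your browser is running.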

Onward and upward

With all that, I’ve pretty much got up and running with Spark on Elastic MapReduce 4. Now, it’s back to the actual Spark applications again…

What do software developers and plasterers have in common?

Control F1 Lead Developer Nick Payne’s musings on the similarities between plastering and programming.

What do software developers and plasterers have in common? A bizarre question, you might say, but hear me out.

Recently, having purchased our first home, we were on the lookout for trades-people – people skilled in the art of making our world comfortable, pleasing to the eye and functional. However, we were at something of a disadvantage. We didn’t have the first idea about how to install an alarm system, hang wallpaper or plaster a ceiling; nor did we have the tools or time to do so. We also didn’t have a clue what it might take – in both temporal and pecuniary terms – to get to where we wanted to be.

What we did know, however, was that our home needed to be secure, not to have holes in the wall where the up-lighters used to be, and to have ceilings that are free from asbestos.

So discussions began, and we soon realised that we needed a plan; some requirements and maybe an idea of cost. We headed to Google and looked for trades-people local to us, but geography alone wasn’t enough – we also needed to feel confident that their previous work was acceptable to their customers.

The alarm system was particularly enlightening. Being a self-confessed geek, I wanted an all-singing, all-dancing system – one which I could monitor remotely, turn on and off from upstairs and downstairs, arm automatically when I left from both front and rear doors, and so on… It quickly became obvious that I’d missed the point!

On speaking to our chosen alarm specialist (who also happened to be the electrician) I realised a few things: I’ve never left the house by the rear door; the house isn’t big enough to warrant being able to set the alarm from upstairs; that remote monitoring is only as good as the people at the end of it. We also realised that although the electrician was more than happy to install such a system, we simply couldn’t justify the cost – it would give us no additional benefits over and above what we now have installed.

It was a similar story with the plasterer: we showed him the rooms we needed attending to and he said, “that’ll be £X”. I was rather taken aback. We chatted further and it soon became clear that he’d plastered many a ceiling before, of the same size, in similar houses, with similar resources and materials. 

So, back to the original question… how does this compare to software development? Well, software developers are able to provide systems just like the all-singing, all-dancing alarm system. We’re also able to give a confident quote when we’ve done similar work before. But the really good developers can also get to the bottom of what it is that you really want.

Here at Control F1, we work hard to understand what it is that you’re trying to achieve, and to provide the most cost efficient, effective solution possible. And whilst innovation remains at the heart of what we do, we tell it like it is and never provide you with novelty purely for the sake of it. Just as repairing the holes in my walls and making my home secure improves my physical existence, so at Control F1 we find the best solution to make your digital life better.