Labels, Camera, Action…!


By Control F1 Lead Architect Phil Kendall.

Control F1 were asked earlier this year to work with a global pharma company to write the control software for a complex piece of physical hardware. Integrating all the moving pieces had proved a challenge, so our client needed a company with extensive experience in developing complex pieces of Windows software. From the specification supplied by our client, we quickly identified that there were going to be two main challenges in this project:

  • Integrating with the hardware in the project: four barcode-reading cameras from Cognex, and a Siemens S7 PLC, for which the control software (and physical machine) was supplied by HERMA.
  • Being able to develop and test the software. There was only one instance of the HERMA machine, and that was already installed on the client’s site (and it’s too big for our office anyway!); similarly, we weren’t going to have enough Cognex cameras for everybody working on the project to have a full set.

Integrating with the hardware

Interfacing with the Cognex cameras themselves is relatively easy, as Cognex supply a full .NET SDK and set of device drivers to perform the “grunt work” of communicating with the cameras. However, the SDK is still relatively low-level: it lets you do just about anything with the cameras, but obviously doesn’t have any business domain specific functions. On a technical note, the SDK is also a little bit “old school” and doesn’t make use of the latest and greatest .NET features – a decision which is completely understandable from Cognex’s point of view, as they need their SDK to be usable by as many consumers as possible, but it does mean that the SDK doesn’t quite fit neatly into a modern .NET application.

To work around both these issues, we developed a wrapper around the Cognex SDK that both encapsulates the low-level functionality of the Cognex SDK within the higher-level business functionality we needed for the project, and presents a more modern .NET style interface, for example using lambda functions rather than delegates. The library has been designed as a generic wrapper for the Cognex SDK so that we can re-use it in any future projects which use the Cognex cameras.

For the Siemens S7, we did a small amount of research and found the S7.Net Plus library. Once again, this enables low-level communications with the S7 PLC so we wrapped it in a higher level interface which implemented the business logic that HERMA had built on top of the S7 PLC.

Both libraries were tested when we had access to the hardware, the Cognex library by actually having a camera here at Control F1 HQ, and the HERMA library with assistance from HERMA who were able to set up a copy of their software at their site and give us remote access.

Developing and testing

As noted above, our big challenge here was how to develop and test the software without everybody having access to cameras and the HERMA machine. The trick was simply to remove the requirement for everybody to have hardware: by developing a facade around the Cognex and HERMA libraries, we were able to use either the real interfaces to the hardware or an emulator of each device which we developed. The emulators were configurable so that we could adjust their behaviour for various cases – for example, simulating a misread from one of the Cognex cameras, or a fault from the HERMA system.
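The control software itself is a .NET application, but the shape of the facade is language-agnostic. Purely as an illustration, here is a minimal sketch of the idea in Scala (the JVM language we use elsewhere on this blog) – every name here is hypothetical rather than taken from the real project, and the real emulators are considerably more configurable than this:

import scala.util.Random

// The facade: the rest of the application only ever talks to this trait,
// so it neither knows nor cares whether real hardware sits behind it.
// (All names here are hypothetical, purely for illustration.)
trait BarcodeCamera {
  def readBarcode(): Option[String]
}

// Production implementation: delegates to the wrapper around the vendor SDK
class HardwareCamera(/* connection details for the real camera */) extends BarcodeCamera {
  def readBarcode(): Option[String] =
    ??? // call into the camera SDK wrapper here
}

// Emulator: configurable behaviour, e.g. a probability of simulating a misread
class EmulatedCamera(misreadProbability: Double, barcode: String) extends BarcodeCamera {
  private val random = new Random()
  def readBarcode(): Option[String] =
    if (random.nextDouble() < misreadProbability) None else Some(barcode)
}

The same trick applies on the HERMA/PLC side: one trait for the operations the control software needs, one implementation backed by the real S7 connection, and one emulator which can be told to report faults on demand.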

The emulators were invaluable to us while developing the project: at one stage they allowed three developers and a tester to work on the project at the same time, and they also let us give the client a demo VM so they could see how the user interface was evolving – all without needing any hardware or any travel, with the obvious savings of time and money that brings.

So, did it all work?

Now, it’s all well and good developing against emulators, but emulators are no good if they don’t have the same behaviour as the real system. The moment of truth came when we sent our COO, Nick Payne, and Lead Architect/Developer, Phil Kendall, to the client’s site in order to put everything together on the real hardware… and the answer is that things worked pretty well. We’d be lying if we said everything worked perfectly first time, but the vast majority of the hardware was up and running within a day. The rest of the week was a pattern familiar to anyone who’s done integration testing before: mostly fairly quiet while we ran through the client’s thorough test plan (thanks Nick for his sterling work keeping everything running smoothly) interspersed with occasional moments of panic as the test plan revealed an issue (thanks Phil for some frantic and occasionally late-night hacking to fix the issues). By the end of the week, the client had signed the machine off to move into production, and Nick and Phil just about managed to get home at a reasonable time on Friday evening.

What did we learn?

From a Control F1 point of view, the most important knowledge we gleaned from this project came from the work we did with the Cognex cameras and SDK – they’re very nice pieces of kit, the SDK is a solid piece of code, and we’ve now got the emulator framework we can use to accelerate development of any future projects using the Cognex cameras. Similarly, we’ve now got a way to interface with Siemens S7 PLCs which we can reuse for any future projects.

Other than that, the project reinforced a couple of good software engineering practices which we knew about:

  • Do the less understood bits of the project first to reduce risk. By focusing our initial development efforts on the hardware integration side, we were able to reduce the uncertainty in our planning – this in turn meant that we were able to confidently commit to the client’s timescales relatively early on in the project.
  • Log everything. When you’re working with real hardware on a machine at a remote site, being able to get an accurate record of what happened when a problem occurred is invaluable. However, don’t log too much – if the camera is giving you a 30 line output, you don’t need to log that output as it passes through every layer of the system, because all you end up with is a log file which is very hard to read.

Sound interesting?

If you’ve got a project which it sounds like we might be able to help you with, please drop us a line.

R, spray-can and Docker


Control F1 Lead Architect Phil Kendall gives some advice on performing R calculations in microservices.

Back in January this year, Control F1 started work as the lead member of the i-Motors consortium, a UK Government and industry funded* project working towards viable, commercially sustainable Smart Mobility applications for connected and autonomous vehicles. One of the key elements we will be delivering as part of the project is the capability to add predictive and contextual intelligence to connected vehicles, allowing individual drivers, fleet managers and infrastructure providers alike to make better decisions about transport in the UK. At a coding level, this means we need to get some data science / machine learning / AI code written and deployed. This post gives a quick run through of the technology choices we made, why we made them and how we implemented it all.

Why R?

There are effectively two choices for doing “small scale” (i.e. fits into the memory of one machine) data science: R and Python (with scikit-learn). It just so happens that I’m much more an R guy than a Python guy, and the algorithms we wanted to deploy here were written in R.

Why Docker?

For i-Motors, we’ve gone down the microservices route for a lot of the common reasons, including the ability to independently improve the various components of our system without needing to do high risk “Big Bang” deployments where we have to change every critical part of the system at once. There are obviously alternatives to Docker for running microservices – while this post is Docker-specific, it shouldn’t be too hard to adapt what’s here to another container platform.

Why spray-can?

This is where it gets a bit more complicated! Excluding Docker for Windows Server 2016, which is definitely right out there on the cutting edge, running Docker means running Linux. At Control F1 we’re mostly a .NET house on the server side, so a number of the i-Motors components have been written in .NET Core and very happily deploy themselves on Docker. However, the .NET to R bridge hasn’t yet been ported to .NET Core, so there’s no simple way for a .NET Core application to talk to R at the moment. I investigated a couple of other options for bridging to R, including using node.js and the rstats package. Unfortunately, the official release of rstats doesn’t work with the latest versions of node, and while there are forks out there which fix the issue, basing a long-term project on a package without official releases didn’t seem like the wisest solution. The one option which did present itself was JRI, the Java/R Interface, which I’d made some use of before when running on the JVM.

When it comes to JVM languages, I’m a big fan of Scala and the spray.io toolkit – again, the solution here isn’t particularly tied to Scala and spray.io and should be relatively easy to adapt to any other JVM language and/or web API framework.

Implementation

All the code for this blog post is available from Bitbucket. I’ll give a brief overview of the code here.

Startup

The web API is set up in RSprayCanDockerApp and RSprayCanDockerActor. This is pretty much a straight copy of the spray-can “Getting Started” app, with the notable exception that we bind the listener to 0.0.0.0 rather than localhost – this is important as the requests will be coming from an unknown source when deployed in Docker.
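For reference, the shape of that startup code looks something like the sketch below. This is a paraphrase of the spray-can “Getting Started” example rather than a copy of the code in the repository, so treat the details as illustrative:

import akka.actor.{ActorSystem, Props}
import akka.io.IO
import spray.can.Http

// Paraphrase of the spray-can "Getting Started" app; see the Bitbucket repo for the real code
object RSprayCanDockerApp extends App {
  implicit val system = ActorSystem("r-spraycan-docker")

  // The actor which will handle incoming HTTP requests
  val handler = system.actorOf(Props[RSprayCanDockerActor], name = "handler")

  // Bind to 0.0.0.0 rather than localhost so that requests reach the service
  // from outside the container when it is deployed in Docker
  IO(Http) ! Http.Bind(handler, interface = "0.0.0.0", port = 8080)
}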

R integration

The guts of the R integration happens in the SynchronizedRengine class and its associated companion object. There are two non-trivial bits of behaviour here:

  • The guts of R are inherently a singleton object – there is one and only one underlying R engine per JVM. SynchronizedRengine.performCalculation() has a simple lock around the call into the R engine so that we have one and only one thread accessing the R engine.
  • The error handling is “a bit quirky”. If the R engine encounters an error, it calls the rWriteConsole() function in the RMainLoopCallbacks interface. The natural thing to do here would be to throw an exception, but unfortunately the native code between the Rengine.eval() call and the callback silently swallows the exception, so we can’t do that; instead we stash the exception away in a variable. If the evaluation failed (indicated by it returning null), we then retrieve the stashed exception – see the sketch after this list. In Scala, we wrap this into a Try object, but in a less functional language you could just re-throw the exception at this point.
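To make that a little more concrete, here is a simplified sketch of the pattern. The real class is structured differently (a class plus companion object), and the callback signatures here are from memory of the JRI API, so check the Bitbucket repository rather than treating this as definitive:

import org.rosuda.JRI.{REXP, Rengine, RMainLoopCallbacks}
import scala.util.{Failure, Success, Try}

// Simplified sketch: one Rengine per JVM, guarded by a lock, with errors reported
// via the console callback stashed away and retrieved if eval() returns null
object SynchronizedRengine extends RMainLoopCallbacks {
  private val lock = new Object
  @volatile private var lastError: Option[String] = None

  // The R engine is inherently a singleton, so create it exactly once
  private lazy val engine = new Rengine(Array("--vanilla"), false, this)

  def performCalculation(expression: String): Try[REXP] = lock.synchronized {
    lastError = None
    Option(engine.eval(expression)) match {
      case Some(result) => Success(result)
      case None         => Failure(new RuntimeException(lastError.getOrElse("R evaluation failed")))
    }
  }

  // R reports errors by writing to the "console". We can't throw here because the
  // native code between eval() and this callback swallows exceptions, so stash it instead
  override def rWriteConsole(re: Rengine, text: String, oType: Int): Unit =
    if (oType != 0) lastError = Some(text)

  // The remaining callbacks aren't needed for this sketch
  override def rBusy(re: Rengine, which: Int): Unit = ()
  override def rReadConsole(re: Rengine, prompt: String, addToHistory: Int): String = null
  override def rShowMessage(re: Rengine, message: String): Unit = ()
  override def rChooseFile(re: Rengine, newFile: Int): String = null
  override def rFlushConsole(re: Rengine): Unit = ()
  override def rSaveHistory(re: Rengine, filename: String): Unit = ()
  override def rLoadHistory(re: Rengine, filename: String): Unit = ()
}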

Docker integration

The Docker integration is done via SBT Native Packager and is pretty vanilla; three things to note (see the build.sbt sketch after this list):

  • The Docker image is based on our “OpenJRE with R” image – this is the standard OpenJDK image but with R version 3.3 installed, and the JRI bridge library installed in /opt/lib. The minimal source for this image is also on Bitbucket.
  • We pass the relevant option to the JVM so that it can find the JRI bridge library: -Djava.library.path=/opt/lib
  • We set the appropriate environment variable so that the JRI bridge library can find R itself: R_HOME=/usr/lib/R
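Put together, the relevant part of the build definition looks roughly like the sketch below. The exact setting names vary between sbt-native-packager versions and the base image name here is made up, so treat this as illustrative rather than a copy of our build:

import com.typesafe.sbt.packager.docker.Cmd

enablePlugins(JavaAppPackaging, DockerPlugin)

// Base the image on the "OpenJRE with R" image (hypothetical name)
dockerBaseImage := "controlf1/openjre-r"
dockerExposedPorts := Seq(8080)

// Let the JVM find the JRI bridge library
javaOptions in Universal += "-Djava.library.path=/opt/lib"

// Let the JRI bridge library find R itself
dockerCommands += Cmd("ENV", "R_HOME", "/usr/lib/R")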

If you just want to play with the finished Docker container, it’s available from Docker Hub; just run it up as “docker run -p 8080:8080 controlf1/r-spraycan-docker”.

Putting it all together

For this demo, the actual maths I’m getting R to do is very simple: just adding two numbers. Obviously, we don’t need R for that, but in the real world you should be able to substitute your own algorithms easily – we’ve already deployed four separate machine learning algorithms into i-Motors based on this pattern. But as demos are always good:

$ curl http://localhost:8080/add/1.2/3.4

4.6

Where next?

What we’ll be working on in the near future is investigating how this solution scales with the load on the system – a single instance of the microservice will obviously be limited by the single-threaded nature of R, but we should be able to bring up multiple instances of the microservice (“scale out” rather than “scale up”) to handle the level of requests we expect i-Motors to produce. I’m not foreseeing any problems with this approach, but we’ll certainly be keeping an eye on the performance numbers of our “intelligence services” as we increase the number of vehicles in the system.

* i-Motors is jointly funded by government and industry. The government’s £100m Intelligent Mobility fund is administered by the Centre for Connected and Autonomous Vehicles (CCAV) and delivered by the UK’s innovation agency, Innovate UK.

Why you can’t afford to ignore the new Health and Safety sentencing guidelines


By our Commercial Lead, Duncan Davies.

New sentencing guidelines were issued in February for breaches of Health and Safety (H&S) regulations. It’s safe to say this didn’t make the front pages.

The new guidance was devised independently of the HSE (although the HSE provided input into the process), and comes in a couple of “easy-to-use grids” that allow you (in theory) to estimate your potential level of fine for a particular offence.  The idea is simplification and there’s a link to the new guidelines at the end of this post.

At a recent seminar at the Safety & Health Expo 2016 in London, roughly 50% of the audience raised their hand when asked whether they were aware of these changes.

If you’re in the 50% that don’t know, here are a few thought-provoking questions:

  1. How many £1M+ fines have there been since the law changed in February 2016?
  2. What’s the longest prison sentence that’s been passed down in the last 6 months?
  3. Are you aware you can receive the same fine irrespective of anyone being injured, if there is shown to be culpability and a lack of H&S procedures?

Before we share the answers, it’s important to recognise that this new guidance is intended to send a blunt message to business: Health & Safety is no longer the preserve of the overly cautious, process-obsessed, budget-starved H&S professional tucked away in a broom cupboard.

Health & Safety is now well and truly heading to the centre of the Board table. We’re now seeing the reality of directors themselves heading to prison, and fines being imposed that are ‘meaningful’ where previously they might have been a mere ‘slap on the wrist’.

Some argue that the new guidelines are mostly designed to increase a source of income from large companies, who now face the largest consequences of these new guidelines. A more public-spirited person might say they are intended to make the workplace safer for more people.

A key aspect that’s changed is that there’s now a focus not only on harm done, but also the harm risked. In theory this makes a lot of sense. If two companies commit the same ‘sin’, both should be liable, even if only one of them is ‘lucky enough’ not to actually hurt someone.  In reality it’s going to be a painful process to prove what could have happened, but didn’t.

All of this means giving renewed focus to employee engagement and to those projects that build a safety culture; more than ever, businesses will need to rely on employees, subcontractors, suppliers, and partners to create that culture. It means that vigilance and capturing near miss information are more important than ever. And it means that Health & Safety professionals are going to have to give their boards more data and more tools to help them manage this business risk.

And the results to date of these changes? Well a quick poll of publicised cases since February 2016 reveals 10 cases with total fines of £13M, with one case seeing a company director sent to prison for six years.

A recent report from IOSH highlighted that in the period February to August 2016 there have been as many £1M+ penalties as there were in the previous two decades.

It remains to be seen how recent high profile cases, such as Alton Towers and Foodles Production, will be prosecuted. What seems certain is that this new blunt instrument is going to be used to grab the attention of all those people who’ve not yet recognised that the H&S landscape has fundamentally changed.

For more information on the guidelines, see pages 4 and 5 of: www.sentencingcouncil.org.uk/wp-content/uploads/HS-offences-definitive-guideline-FINAL-web.pdf

Making the most of Docker for .NET development


Lead Developer Peter Duerden – who is currently working on Control F1’s Innovate UK-funded “i-Motors” telematics project – gives his top tips on how to best utilise Docker’s containerised solution for .NET development. 

Background

The recent rise in popularity of the Micro-Services Architecture has resulted in a shift in the way architects and developers design and develop their solutions. This architecture allows developers to build software applications which use independently deployable software components.

To assist in the paradigm of the Micro-Services Architecture, Solomon Hykes released an open source project called Docker in March 2013. Docker provides a lightweight containerised solution built on the Linux kernel’s namespaces and control groups: unlike a virtual machine, a Docker container shares the host’s kernel and achieves resource isolation through these kernel features rather than by emulating hardware.

.NET developers

From the background above, you would think that the ability to utilise Docker for .NET development would be limited. However, Microsoft has invested time in Docker Tools for Visual Studio 2015 – Preview. This adds support for the building, deploying and debugging of containers using Mono and, more recently, .NET Core, which can be deployed or debugged locally when used in conjunction with Docker Toolbox or Docker for Windows (or Mac).

The Docker Tools have changed quite dramatically from their early releases, and during conversations with the development team in Seattle I was able to request, amongst other things, the inclusion of .NET Console apps to receive support for deployment. That request has been included into the current toolset.

The Docker Tools are a real benefit for fast development, but understanding the Docker processes before using them can help, and actually manually writing a Dockerfile for building your own images can prove valuable if you are to reap the full benefits of what Docker can offer.

Deployment/hosting

Docker containers can be hosted on any Linux machine running the Docker daemon, but in a production environment you need more than one host: you will need to run a Docker Swarm that can manage the allocation of containers across multiple machines, to ensure resilience should one machine fail.

NB certain Cloud providers offer their own “Container Services” which ease the deployment of containers, but don’t use the standard Docker tools for composing or scaling.

Helpful tips

Although you may want to just dive in and try your first Docker container, be mindful of the following:

  1. Understand the Docker build process
    • What is it doing when it builds the images using the Dockerfile?
    • How are things layered?
    • Exposing ports – make sure you think about why and when
    • Have you considered tagging?
  2. Experiment with Docker Compose (see the sketch after this list)
    • Build your application by linking services
    • “Internal” services do not need ports exposing to the Host machine
  3. Build your own Docker Swarm
    • By building a swarm you can understand and identify the related challenges this will have, such as service registry or load balancing
  4. Use Docker natively. Don’t use one hosting provider’s container service, as this will tie you in and mean further work to unstitch your solution if you ever decide to host elsewhere.
  5. Be aware that not all .NET Nuget packages support .NET Core. There are certain ones that do, but quite a few still aren’t supported.
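Picking up point 2, here is a minimal docker-compose sketch for a hypothetical two-service application: only the public-facing API publishes a port to the host, while the internal worker service is reachable solely over the Compose network. The image and service names are made up, purely for illustration:

# docker-compose.yml (illustrative only; image names are hypothetical)
version: '2'
services:
  api:
    image: example/api
    ports:
      - "8080:8080"   # the only port published to the host
    depends_on:
      - worker
  worker:
    image: example/worker
    # no "ports" section: other services can reach it by name ("worker"),
    # but nothing outside the Compose network can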

The future

At the time of writing this, quite a few changes are occurring in both the Docker and .NET Core spaces, which will no doubt have an impact on future development.

  1. .NET Core has now been released. Hopefully this will drive an increase in the number of Nuget packages available to developers.
  2. Docker Swarm has been simplified and should hopefully make it easier to build and manage swarms.
  3. Microsoft are close to the release of Windows Container Service, which offers similar functionality to Docker, but on Windows Server 2016. This will therefore allow for full .NET framework capabilities rather than the .NET Core version.

Getting an HTTP REST server running on the iMX233 OLinuXino-MICRO


Control F1 Lead Developer Phil Kendall gives some handy pointers on how to get an HTTP REST server running on the iMX233 OLinuXino-MICRO.

As part of an on-going client project, Control F1 was asked to get an HTTP REST server running on the iMX233 OLinuXino-MICRO – a little ARM-powered small board computer. There’s a lot of documentation out there for the OLinuXino board, but it’s not always clear which is the most up to date, so this post covers how we managed to get everything working in 2016.

Before you begin…

…always buy the right kit. You’ll obviously need a board, but as well as that remember to get:

  • A 5V power supply with a 5.5mm jack
  • A MicroSD card
  • A composite video lead (or another cable with RCA connectors)
  • A USB keyboard
  • A powered USB hub – we found that the OLinuXino didn’t put out enough power over its USB port to get keyboards to function
  • A WiFi dongle which uses the RealTek 8192cu chipset. It’s always a bit tricky to determine exactly which chipset is in a dongle, but as of February 2016, the TP-LINK TL-WN823N was using the right chipset

In theory, that’s all you absolutely need, but I’d also strongly recommend a composite to HDMI converter, just because most monitors don’t have composite inputs these days.

First steps

As with any project of this kind, the first step is always to get something running on the board. Thankfully for the OLinuXino, this turns out to be relatively easy: grab the “iMX233 Arch image with kernel 2.6.35 WIFI release 4” image as linked from the Olimex wiki, and then simply follow the steps listed there to copy the image onto the MicroSD card. Put the card into the board, plug in the keyboard and video, and you should get the standard Tux splashscreen, followed by a login prompt.

WiFi

The first thing to check is that you’ve got any sort of communications at all; the easiest way to do this is to run a scan for any wireless networks:

ip link set wlan0 up
iwlist wlan0 scan

That should be enough to give you a list of all the wireless networks that the board can see. Find your network and run:

wpa_passphrase SSID PASSWORD > MyNetwork.conf

Now edit /etc/wpa_supplicant/wpa_supplicant.conf, delete all the example configurations from the file and then append the contents of the MyNetwork.conf file you created above. After that, it should just be a matter of bringing everything up:

wpa_supplicant -B -i wlan0 -Dwext -c /etc/wpa_supplicant/wpa_supplicant.conf
ifconfig wlan0 <ip address> up
route add default gw <gateway address>

…and finally editing /etc/resolv.conf to add an appropriate nameserver. With a following wind, you should now have fully working networking on your board and be able to SSH into it.
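For example, if you’re happy to use Google’s public DNS, a minimal /etc/resolv.conf contains just the single line:

nameserver 8.8.8.8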

HTTP server

I’m a big fan of spray.io’s spray-can as a lightweight HTTP server. As spray-can is in Scala, the first thing to do is to get a JVM onto the box. This also turns out to be nice and easy – grab the “ARMv5/ARMv6/ARMv7 Linux – SoftFP ABI, Little Endian” version of the Java SE Embedded JDK from Oracle’s site. This contains the JRE as well as the JDK, so copy the JRE onto the board and ensure that java is on the PATH somewhere. Test this with your favourite “Hello, world!” implementation if you so wish.

Scala

Getting Scala running on the board was also relatively easy: simply copy scala-compiler.jar, scala-library.jar and scala-reflect.jar from whichever version of Scala you’re using onto the board, and then run your Scala code as:

java -Xbootclasspath/a:/full/path/to/scala-compiler.jar:/full/path/to/scala-library.jar:/full/path/to/scala-reflect.jar -jar helloworld.jar

I packaged that up into a shell script and put that on the PATH just to make things easier.

spray-can

The biggest issue with getting spray-can running is ensuring that all its dependencies are available. The easiest way I found to do this was to use the sbt-assembly plugin to produce a “fat JAR” and deploy that onto the board. Importantly, sbt-assembly does the right thing out of the box and merges Typesafe Config configuration files so that they all work properly. Other than that, the only change I needed to make was to increase spray-can’s start up timeout a bit, just due to the relatively slow speed of the board; this can most easily be done by adding the following stanza to application.conf:

spray.can {
  server {
    # spray-can can be a bit slow to start on the board, so give it more time to start
    bind-timeout = 10s
  }
}

After all that, you should be able to run any of the spray-can demos on the board.
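For completeness, pulling in sbt-assembly is just a one-line addition to project/plugins.sbt – the version below was one of the current 0.14.x releases at the time of writing, so adjust as appropriate – after which “sbt assembly” builds the fat JAR:

// project/plugins.sbt
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.3")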


Can small businesses benefit from the Northern Powerhouse?


Control F1 MD Andy Dumbell on his perceptions of the Northern Powerhouse initiative and why it’s so important to support small businesses in the region.

As cofounder and MD of a small tech business based in Yorkshire, and as a resident of the north-east, I have a vested interest in the North and support initiatives that will benefit the region.

With this in mind I recently attended the 2016 “UK Northern Powerhouse International Conference & Exhibition” held in Manchester, with the objective of understanding what the NPH (Northern Powerhouse) is all about, and how it could benefit our region and its small businesses.

Prior to the event, like a lot of delegates I only had a vague understanding of what the NPH is: a concept that aims to rebalance the UK economy, pushing growth outside of London into Northern cities – from Liverpool through Manchester, Leeds, Sheffield, Hull, Teesside and Newcastle. It’s also a vision backed by the Chancellor George Osborne, who some say has staked his political reputation on its success.

That’s all well and good, but what does this actually mean for small businesses? What is the plan and who is leading it? How do we get involved and where are the opportunities? My hope was that the event would help answer these questions and leave us all feeling inspired and brimming with enthusiasm!

The conference was hosted by John Humphrys, who as you can imagine was witty, engaging and tenacious. It was split over two days and predominantly consisted of individual presentations and panel discussions. The calibre of the speakers was high and the format encouraged audience interaction through a social networking technology called slido.

The conference also offered some powerful networking opportunities with CEOs and other top executives of some of the largest companies and organisations in the UK, across various sectors. I formed promising connections with people whom I’d normally struggle to meet. For instance, I secured a private meeting with the President and CEO of one of the largest supermarket chains in the UK to talk about his challenges, what my business does and where we could potentially add value to his. We swapped business cards and my business has been invited to present to some of his senior team, to explore how we might work together.

On day 1 Lord O’Neill of Gatley, Commercial Secretary to HM Treasury, kicked the conference off with his keynote speech on the government’s NPH agenda, with a focus on connectivity, communication and education. He talked about the NPH being a two phase project with the first currently underway, targeting awareness, devolution and improvements to transport links to better interconnect the North.

Phase 2 will be a continuation of the first, extended to address challenges in education and skills and their interplay with business. Lord O’Neill aspires to improve the outcomes and aspirations for the North, and central to phase 2 will be a low cost travel card similar to London’s Oyster card that can be used to commute across the North of England.

O’Neill’s introduction was a good start. The intention to provide better transport links should create new business opportunities whilst making commuting to work quicker, simpler and more cost effective, opening up access for businesses to a wider talent pool and encouraging local people to work in the region. However, connectivity issues within cities were not discussed and need to be addressed in parallel to this wider piece.

There was considerable focus on education and skills throughout the event, especially given recent negative press on declining education standards in the North. I raised a question during a panel discussion on Science, Research and Skills: “what role will the NPH play in improving education standards and building a skills pipeline to meet future business demands?”

Professor Liz Towns-Andrew, Research and Enterprise, Yorkshire Universities, gave a passionate response and talked about her efforts to embed enterprise in the curriculum to create more business savvy graduates. She also highlighted the importance of listening to what employers and industry need, and the necessity of getting these parties more engaged with further education.

This was encouraging, and we heard about a number of further promising initiatives throughout the event – for instance, the International Festival of Business (IFB 2016), hosted in Liverpool later this year, a global gathering of international reach and significance. Over three weeks business leaders and thousands of international business delegates will get together, opening up opportunities in new markets. The festival includes a series of events, workshops and panel debates, with the intention of forging new connections and helping businesses to secure new customers from key markets around the globe.

What wasn’t clear, though, was the role the NPH will play in all of this. Will it simply remain a concept; a platform for interested parties that share their ideas, debate and network, or will it be a real entity and the true driving force behind significant change? This wasn’t crystallised and the NPH is facing a fair amount of criticism. Social networks are loaded with frustration over the lack of a clear plan with some critics labelling the NPH as a gimmick without substance.

Geographical focus also presents a bit of an issue. Manchester is likely to become the capital of the NPH; it was quite telling that the chancellor’s first speech on the concept was delivered here, with events such as this one hosted in the city. But it’s important that the organisers rotate event locations across the north and don’t neglect cities such as York, Sunderland and Hull. It would be great to see future NPH events hosted in Newcastle and other major cities; otherwise we risk Manchester becoming the northern powerhouse and a new divide forming, breeding imbalance within the North which will undermine the initiative’s whole purpose.

So was the UK Northern Powerhouse International Conference & Exhibition worth attending? Personally, I didn’t achieve all of my objectives and like a lot of delegates, my understanding of the NPH is still relatively vague. Yet whilst I didn’t leave feeling wholly inspired, I am hopeful, and believe that the NPH – which is still in its infancy – can accomplish its vision if given the necessary support: as delegates we are responsible for ensuring ROI when investing our time and money into such activities. Returning to my original question – can small businesses benefit from the Northern Powerhouse? – I would say yes! Get involved; network, debate, collaborate and help play an active role in the NPH story – it’s what we make of it that counts.

Control F1 takes Intel prize at IoT Hackathon


Head of Development Nick Payne describes his time at London Olympia’s IoT Hackathon, at which Control F1 proved victorious, taking the Intel Prize for our team’s “Personal Comfort Monitor”.

About two weeks ago friend of Control F1 Steve Cowper dropped us an email about attending an IoT Hackathon at London Olympia. Having never attended a hackathon on this scale before it seemed like a great idea. The prizes looked interesting and it fitted in quite nicely with what CF1 are doing in the space too.

A couple of impromptu phone calls later, we’d come up with the idea of the “Personal Comfort Monitor” – PCM for short! We were taking on Intel’s challenge of creating an application for the Intel Edison board, together with an Arduino breakout board and a bevy of Grove sensors. Of course, all plans could change having had the hardware pitched to us on arrival…

Fortunately, they didn’t! Intel has built a Cloud IoT platform, loosely based on MQTT and hosted on AWS (again, a perfect fit for CF1).

The brief told us that there had to be a business case and route to market for whatever we were building. Steve had done plenty of market research and field data collection previously and promptly set out on the documentation for our project (whilst keeping the dev machine oiled with coffee and cake).


Control F1 Lead Developer Phil Kendall made light work of the hardware, iterating quickly over the Intel examples to get various sensors wired up and talking to the backend. Meanwhile I configured the backend platform and made sure that the project was registered so we stood a chance of winning, having set up a github repo for the code.

After about five hours we had a board that was able to ingest data from various sensors (with air quality fudged by a rotary switch in true hackathon style) and a mobile application that displayed the data. In short order we also got a red LED lighting up when the derived “comfort level” dropped below 50 (Steve kindly produced an algorithm for Phil to convert into NodeJS together with Excel proof). Time was called on the first day, and we retired happy that we’d achieved a fair amount.

Day 2 arrived and we implemented the LCD display board to give a user friendly read out. After much hacking (and a bit of swearing) Phil converted the Edison IoT agent to TCP sending, as UDP sending messed up the LCD – it turned it off!

Nick embarked on polishing the mobile app (a rewrite followed!) and by the end of the morning it looked half decent. We were able to call the Intel EnableIoT API to give the mobile app the data it needed.

With a few hours to spare, we helped a few of the other hackers and checked out the competition. Then, once the presentation was finished, we had some lunch.

The pitches drew quite a crowd at the Expo. We were next to last, so at least we didn’t have too long to wait for the judges to finalise the results.

I’m proud to announce that we won 1st prize from Intel! They were impressed by the amount that we achieved in the time we had, and with the idea and pitch (and maybe that we raised a few bugs for them to go and fix too!)

Many thanks to the organisers, and to Richard Kasteline in particular. The prizes will definitely be used back at CF1 HQ to continue our “After School Clubs”, and who knows – maybe the Personal Comfort Monitor will come in handy at Control F1 HQ – we have three Edisons to play with now, and a Surface 3 to display results on!

i-Motors receives £1.3m from Innovate UK


We are excited to be able to share that i-Motors – a new Control F1-led telematics project – has been awarded a grant of £1.3M by Innovate UK.

We’ll be partnering with the University of Nottingham’s Geospatial Institute and Human Factors Research Group, traffic management specialists InfoHub Ltd, remote sensing experts Head Communications and telecoms gurus Huduma to deliver the project.

Picture a future without gridlock. A future in which our city streets, roads and highways are safer, cleaner and greener. In which vehicles can self-diagnose a fault and order a new component, or automatically detect a hazard such as ice on the road before it’s too late and warn other vehicles around them too. A future in which cars can drive themselves…

That future isn’t far away: it is predicted that the UK will see huge growth in the production of autonomous (driverless) cars by 2030. Meanwhile the production of connected cars – cars with inbuilt “telematics” devices, capable of communicating to other vehicles and machines – is forecast to rise from around 0.8 million in 2015 to 2 million in 2025, accounting for 95% of all cars produced in the UK.

Yet whilst the number of cars with the technology to connect is already rising, little progress has been made towards putting this technology to use.

i-Motors plans to address this issue. Capitalising on our extensive telematics experience (read about our telematics partnership with the RAC here), we plan to establish a set of universal standards on how vehicles communicate with each other, and with other machines. Making use of connected cars’ ability to support apps, we’ll be working with academics from Nottingham University’s Geospatial Institute and Human Factors Research Group to build a mobile platform that allows vehicles of different manufacturers and origins to transfer and store data.

We’ll use patented technology, allowing data to be collected and analysed at greater speeds than ever before. We’ll also be working alongside traffic management experts InfoHub Ltd to combine these data with other data sources such as weather reports, event data and traffic feeds, easing congestion and increasing safety through realtime updates and route planning. In addition, the i-Motors platform will allow vehicles to report errors, which can be automatically crosschecked against similar reports to diagnose the problem and reduce the chance of a breakdown.

We will also be working with Head Communications to address the issue of limited connectivity by developing sensors capable of transmitting data to the cloud in realtime. Through installing these sensors – known as Beyond Line of Sight (BLOS) – vehicles can remain connected with sub-metre precision, even when out of internet and GPS range. And we will be collaborating with telecoms gurus Huduma to make i-Motors sustainable and commercially successful in the long term.

i-Motors has the backing of Nottingham, Coventry and Sheffield City Councils, where the new technology will first be piloted, and a letter of support from the Transport Systems and Satellite Applications Catapult, and fleet management experts Isotrak. The project will make use of live vehicle data provided by Ford, which has an ongoing relationship with the University of Nottingham.

Our MD Andy Dumbell commented:

“We are delighted to have been awarded the funding by Innovate UK to lead on this ground-breaking project. Connected and driverless cars offer us the opportunity to make huge strides in terms of reducing congestion, bringing down emissions, and even saving lives. Yet as is always the case when dealing with big data, it’s only effective if you know how to use it. We believe that through i-Motors we can set the standard for connected and autonomous vehicles and redefine the future of our streets, highways and cities.”

Just how innovative is the UK?


Our Product Development Director Dale Reed shares his thoughts from the 2015 Innovate UK Conference. 

Innovate UK is the UK’s innovation agency; an executive non-departmental public body, sponsored by the Department for Business, Innovation & Skills. They host an annual event to highlight the best and brightest of British Innovation, with exhibitors and seminars held over a two day period in London.

What was constantly highlighted throughout the event was just how innovative we actually are in this country. Consider these statistics:

The UK represents around 1% of the total global population, and yet we produce 16% of the world’s published scientific papers, and we host 4 of the world’s top 10 universities.

Then consider some of the inventions that have really shaped the world we live in today:

Computers? Charles Babbage, British.

Telephone? Alexander Graham Bell, British.

World Wide Web? Tim Berners-Lee, British.

Television? John Logie Baird, British.

You can also add to that list radar, the endoscope, the zoom lens, holography, in vitro fertilisation, animal cloning, magnetically levitated trains, the jet engine, antibiotics and, indeed, Viagra!

Some years ago, Japan’s Ministry of International Trade and Industry made a study of national inventiveness and concluded that modern era Britain had produced around 55% of the world’s ‘significant’ inventions, compared with 22% for the US and 6% for Japan. The point is that the Brits have a long history of innovation and it’s something we should be mightily proud of.

The downside is that however good we’ve been at inventing things, we’ve not been that great at commercialising them. Almost all of those inventions mentioned above have been vastly commercialised by businesses outside of the UK (really only jet engines and antibiotics contribute anything significant to our GDP). We also lose a great deal of our brightest minds to businesses overseas.

Fortunately this seems to be one of the areas that’s being changed, as evidenced by some of the talks I sat in on at the event. Many universities are now teaming up with businesses to place students and undergraduates – something which benefits all parties. Despite some difficulties around IP protection, it’s a huge boon to the student to learn some business sense and commercial ability before being employed full time. The employer gets some very bright minds to help them think around their problems. Many students go on to work with the business full time on graduation, and many businesses continue with the scheme year on year because it’s been so successful for them.

There are also now a lot of Catapult Centres right here in the UK (https://www.catapult.org.uk/). These are a network of world-leading centres designed to transform the UK’s capability for innovation in specific areas and help drive future economic growth. They are a series of physical centres where the very best of the UK’s businesses, scientists and engineers work side by side on late-stage research and development – transforming high potential ideas into new products and services to generate economic growth.

By bringing together the right teams who can work together and innovate, and just as importantly commercialise, the centres are ensuring the UK can continue to be at the forefront of innovation, particularly in technology and the sciences.

Graphene of course is a well-known British invention which I think will soon be joining the list of the world’s most life changing innovations in fairly short order. The number of applications seems almost limitless at the moment. We already have the National Graphene Institute, built as part of Manchester University, and fortunately the UK is working hard to ensure we are capable of commercialising the potential for Graphene. Work on another £60,000,000 building – the Graphene Engineering Innovation Centre – is currently underway, which will help look at how to move the research into actual production.

We also have a lot of expertise in quantum mechanics, and again companies in the UK are now working towards commercialisation of highly accurate sensors utilising quantum – for example an accelerometer based on the quantum interference of ultracold atoms. These will be able to provide highly accurate location and accelerometer information without any need for GPS or external factors. Although quite large at the moment it’s expected that they’ll be microchip sized within the next two years. Obviously this could be a huge boon to mobile, telematics and asset tracking systems. It’s currently being developed for use with submarines so they can determine their position accurately without having to surface to use GPS.

Overall I came away from the event feeling extremely positive and excited to be here in the UK at a time when there is so much potential for new technologies and innovation. I’m very much looking forward to Control F1 being a part of it!

A Sparkling View from the Canals


Control F1 sent Lead Developer Phil Kendall and Senior Developer Kevin Wood over to Amsterdam for the first European edition of Spark Summit. Here’s their summary of the conference.

One of the themes from Strata + Hadoop World in London earlier this year was the rise of Apache Spark as the new darling of the big data processing world. If anything, that trend has accelerated since May, but it has perhaps also moved in a slightly different direction as well – while the majority of the companies talking about Spark at Strata + Hadoop World were the innovative, disruptive small businesses, at Spark Summit there were a lot of big enterprises who were either building their big data infrastructure on Spark, or moving their infrastructure from “classical” Hadoop MapReduce to Spark. From a business point of view, that’s probably the headline for the conference, but here’s some more technical bits:

The Future of MapReduce

MapReduce is dead. It’s going to hang on for a few years yet due to the number of production deployments which exist, but I don’t think you would have been able to find anyone at the conference who was intending to use MapReduce for any of their new deployments. Of course, it should be remembered that this was the Spark Summit, so this won’t have been a representative sample, but when you’ve some of the biggest players in the big data space like Cloudera and Hortonworks joining in on the bandwagon, I certainly think this is the way that things are going.

In consequence, the Lambda Architecture is on its way out as well. Nobody ever really liked having to maintain two entirely separate systems for processing their data, but at the time there really wasn’t a better way. This is a movement which started to gain momentum with Jay Kreps’ “Questioning the Lambda Architecture” article last year, but as we now have an enterprise ready framework which can handle both the streaming and batch sides of the processing coin, it’s time to move on to something with less overhead, quite possibly Spark, Mesos, Akka, Cassandra and Kafka, something which Helena Edelson implored us to do during her talk. Just hope your kids don’t go around saying “My mum/dad works with Smack”.

The Future of Languages for Spark

Scala is the language for interacting with Spark. While the conference was pretty much split down the middle between folks using Scala and folks using Python, how the Spark world is going was perhaps most obviously demonstrated by the spontaneous round of applause which Vincent Saulys got for his “Please use Scala!” comment during his keynote presentation. The theme here was very much that while there were people moving from Python to Scala, nobody was going the other way. On the other hand, the newcomer on the block here is SparkR, which has the potential to open up Spark to the large community of data scientists out there who already know R. The support in Spark 1.5 probably isn’t quite there yet to really open the door, but improvements are coming in Spark 1.6, and they’re definitely looking for feedback from the R community as to which features should be a priority, so it’s not going to be long before you’re going to see a lot of people using Spark and R.

The Future of Spark APIs

DataFrames are the future for Spark applications. Similarly to MapReduce, while nobody’s going to be killing off the low level way of working directly with resilient distributed datasets (RDDs), the DataFrames API (which is essentially equivalent to Spark SQL) is going to be where a lot of the new work gets done. The major initiative here at the moment is Project Tungsten, which gives a whole number of nice optimisations at the DataFrame level. Why is Spark moving this way? Because it’s easier to optimise when you’re higher up the stack – if you have a holistic view of what the programmer is attempting to accomplish, you can generally optimise that a lot better than if you’re looking at the individual atomic operations (the maps, sorts, reduces and whatever else of RDDs). SQL showed the value of introducing a declarative language for “little” data problems in the 1970s and 1980s; will DataFrames be that solution for big data? Given their position in all of R, Python (via Pandas) and Spark, I’d be willing to bet a small amount of money on “yes”.
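To make the difference concrete, here is a small word count sketch against the Spark 1.5-era APIs, written once with the low-level RDD operations and once with the DataFrame API – the input file name is just a placeholder:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object WordCountSketch extends App {
  val sc = new SparkContext(new SparkConf().setAppName("wordcount-sketch").setMaster("local[*]"))
  val sqlContext = new SQLContext(sc)
  import sqlContext.implicits._

  // RDD version: we spell out every atomic operation ourselves
  val rddCounts = sc.textFile("words.txt")   // placeholder input file
    .flatMap(_.split("\\s+"))
    .map(word => (word, 1))
    .reduceByKey(_ + _)

  // DataFrame version: declare what we want and let the optimiser work out how,
  // which is where Project Tungsten gets its chance to shine
  val dfCounts = sc.textFile("words.txt")
    .flatMap(_.split("\\s+"))
    .map(Tuple1.apply)
    .toDF("word")
    .groupBy("word")
    .count()

  dfCounts.show()
}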

On a related topic, if you’ve done any work with Spark, you’ve probably seen the Spark stack diagram by now. However, I thought Reynold Xin’s “different take” on the diagram during his keynote was really powerful – as a developer, this expresses what matters to me – the APIs I can use to interact with Spark. To a very real extent, I don’t care what’s happening under the hood: I just need to know that the mightily clever folks contributing to Spark are working their black magic in that “backend” box which makes everything run super-fast.

The Future of Spark at Control F1

I don’t think it will come as a surprise to anyone who has been following our blog that we’re big fans of Spark here at Control F1. Don’t be surprised if you see it in some of our work in the near future 🙂