02 July 2021

Behind the map: how we use Enterprise Integration Patterns in developing one.network

This blog is the first of a series designed to give you a glimpse of how we manage the one.network platform. We’re starting off with Enterprise Integration Patterns.

Our platform is extremely powerful. It gives highway authorities, utility companies, and event organisers the ability to plan, manage, and share traffic disruptions on the roads, all from a single dashboard.

But how does it work? And what goes on behind the scenes to ensure our customers can successfully manage their traffic interventions?

The data behind the platform

At one.network, one of our core business functions is to aggregate high volumes of traffic-related data from sources across the countries in which we operate. In the UK, we use data sources such as Traffic Wales, the Scottish Roadworks Register, and National Highways, to name but a few.

We then process and collate the data, which allows us to offer our users powerful insight solutions such as our Clash & Coordination module, as well as customisable data extraction in our Reports module.

We also store the data where applicable and provide it to our users, not only on our platform but also through integrations built with our Data APIs. This means we can offer historical insights on traffic disruptions in modules such as Traffic Replay.

To make this happen seamlessly, we integrate our systems with external inbound/outbound systems using something called Enterprise Integration Patterns (EIPs).

What are Enterprise Integration Patterns and how do we use them?

Enterprise Integration Patterns (EIPs) are software design patterns created to solve common problems that arise when integrating different systems. In most of our back-end services, we use Apache Camel as our integration framework.

Camel is a Java-based, open-source, message-oriented framework that implements the most commonly used EIPs. For example, when we want to consume a specific data format (XML, JSON, Shapefile, etc.) from a data source, we use a software component that implements what’s called the message endpoint pattern.
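As a taster, here’s a minimal sketch of what a consuming route can look like in Camel. The directory, file pattern, and route names are hypothetical, but the shape is representative:

import org.apache.camel.builder.RouteBuilder;

// Minimal sketch of a consuming route. The from(...) URI is a message
// endpoint: it encapsulates how to connect to the source and read its
// format, so the rest of the route only ever sees Camel Messages.
public class InboundFeedRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("file:data/inbox?include=.*\\.xml")      // hypothetical source directory
            .log("Received feed file ${header.CamelFileName}")
            .to("direct:roadworksList");              // hand off for further processing
    }
}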

Diving into the process

Let’s start with the definitions.

The Message Endpoint

This holds the knowledge of how to connect to the data source and handle its data format, allowing the receiving application to access the data programmatically.

The Message

This holds the data record that’s being exchanged, along with metadata about the record and origin system.
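In Camel, a route step sees this as an Exchange whose Message carries the record as its body and the metadata as headers. A small sketch of reading both (the printed output is just for illustration):

import org.apache.camel.Exchange;

// Sketch: reading the record (body) and its origin metadata (headers)
// from a Camel Message inside a route step.
public final class MessageInspector {
    public static void inspect(Exchange exchange) {
        String record = exchange.getMessage().getBody(String.class);
        // Header set by the file endpoint that consumed the record
        String origin = exchange.getMessage().getHeader(Exchange.FILE_NAME, String.class);
        System.out.println("Record from " + origin + ": " + record);
    }
}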

The Camel Message Endpoint component

This is the most common message endpoint, and the one we use at one.network. It’s the foundation on which we build our integration logic, connecting the endpoint to the other components in a route.

We use the same pattern in outbound communications, such as when we provide data feeds in a whole host of different formats for our Data API users.
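As a sketch of what that can look like, here’s a route that marshals internal records to JSON and hands them to a producing endpoint; the URIs and the choice of JSON are illustrative assumptions:

import org.apache.camel.builder.RouteBuilder;

// Sketch of the message endpoint pattern used outbound: the to(...)
// endpoint encapsulates how the feed is delivered to consumers.
public class OutboundFeedRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:publishFeed")
            .marshal().json()                                // serialise to JSON (needs camel-jackson)
            .to("file:data/outbox?fileName=roadworks.json"); // hypothetical delivery endpoint
    }
}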

A lot of our data sources offer a data record that’s actually a list of records, for example all the roadworks in a given area. To iterate through the list and process each roadwork individually, we use what’s known as the Splitter pattern.

This pattern is implemented in Camel as the Split component. It takes a message from a message endpoint, splits it into sub-messages, and sends them off downstream to be processed.
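Here’s a sketch of that, assuming the message body arrives as a Java list of roadwork records (the endpoint names are hypothetical, following on from the inbound sketch above):

import org.apache.camel.builder.RouteBuilder;

// Sketch of the Splitter pattern: split(body()) iterates the list in the
// message body and sends each element downstream as its own message.
public class RoadworksSplitRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:roadworksList")              // e.g. the hand-off from the inbound route
            .split(body())
                .log("Processing one roadwork: ${body}")
                .to("direct:roadwork");           // downstream processing per record
    }
}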

Since any internal system will typically have its own data models, the messages have to be translated. To achieve this, there’s another pattern: the Message Translator.

In practice

Let’s take a UK roadwork data model as an example. We might receive its location in the British National Grid (BNG) reference system, while our internal data model uses the World Geodetic System (WGS). So we need to convert the geometric data from one reference system to the other.

We might also want to enrich the data, perhaps by finding the name of the street where the roadworks are located. Finally, we want to save all this information.

How? We use the Camel Processor component, which is one way of implementing the Message Translator pattern.
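Here’s a sketch of what such a processor can look like. The Roadwork type is a placeholder data model, and the coordinate conversion is stubbed out; real code would delegate to a geodesy library such as Proj4J:

import org.apache.camel.Exchange;
import org.apache.camel.Processor;

// Sketch of the Message Translator pattern as a Camel Processor:
// converts a roadwork’s location from BNG eastings/northings to
// WGS longitude/latitude before passing the message on.
public class BngToWgs84Processor implements Processor {

    // Placeholder data model, for illustration only.
    public static class Roadwork {
        public double easting, northing;    // British National Grid (metres)
        public double longitude, latitude;  // WGS (degrees)
    }

    @Override
    public void process(Exchange exchange) {
        Roadwork roadwork = exchange.getMessage().getBody(Roadwork.class);
        double[] lonLat = bngToWgs84(roadwork.easting, roadwork.northing);
        roadwork.longitude = lonLat[0];
        roadwork.latitude = lonLat[1];
        exchange.getMessage().setBody(roadwork);
    }

    private double[] bngToWgs84(double easting, double northing) {
        // Stub: a real implementation would use a geodesy library here.
        return new double[] { 0.0, 0.0 };
    }
}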

While a single processor might be enough to do all the required transformations, it’s usually best practice to separate the processing logic into multiple processing steps, each one responsible for one thing, in accordance with the Single Responsibility Principle (SRP).

That means we’ll probably want to have one processor that handles all the geographic processing logic, another that does metadata enrichment, and a third that handles the persistence of the data — i.e. the logic behind storing, updating and deleting data.

At one.network, we have multiple chained processors, each passing its processed message on to the next processor in the chain.

This is also a type of Enterprise Integration Pattern, and it’s called Pipes and Filters.
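Putting the pieces together, a pipeline of single-responsibility steps might look like the sketch below, reusing the translator processor from earlier; the enrichment and persistence steps are placeholders:

import org.apache.camel.builder.RouteBuilder;

// Sketch of Pipes and Filters: each step is a filter with one
// responsibility, and Camel pipes the message from one to the next.
public class RoadworkPipelineRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:roadwork")
            // Filter 1: geographic translation (processor sketched earlier)
            .process(new BngToWgs84Processor())
            // Filter 2: metadata enrichment - placeholder street-name lookup
            .process(exchange -> exchange.getMessage()
                    .setHeader("streetName", "High Street"))
            // Filter 3: persistence - placeholder for store/update/delete logic
            .log("Persisting roadwork: ${body}");
    }
}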

And that, in brief, is how we at one.network use Enterprise Integration Patterns to develop our platform.

Hopefully this blog gave you a better understanding of how we use these patterns to build one.network. They are the backbone of a lot of our services.

Curious to learn more or speak to one of our Developer team? Get in touch.