

Torch Relay: Building the site


Mike Brown | 10:00 UK time, Tuesday, 18 December 2012

My name is Mike Brown. I was the technical architect on the BBC's Torch Relay website.

Alongside my colleague Matthew Clark, I was lucky enough to lead the technical delivery.

If you can think back to the beginning of the 'summer', before the huge success of the Olympic and Paralympic Games, you might remember the Olympic Torch Relay.

Back in mid-May thousands of people were chosen to run a short distance with the Olympic Flame and the country was temporarily gripped by the spectacle via #bbctorchcam.

The Torch Relay site

My colleague Michael Burnett, product lead for the Torch Relay, published a blog post at the time explaining some of the features we'd made available as part of the site.

The Torch Relay played an important role in the delivery of the 2012 Olympic website, and it was no accident that it used the same video delivery pipeline, giving us critical information on performance and editorial workflow.

Like many websites, the simplicity of the proposition belied some clever technology - resourcefulness in the face of some tricky problems. We thought it only right to share it with you.

This post talks in high-level terms about technical architecture and is probably of most interest to the slightly more technical reader, but we've kept it as broad as possible.

Our requirements

Simply put, here's what we wanted to achieve:

  • To track the geographical location of the Olympic Flame and make this location available to users, both in terms of actual location and the route itself.
  • To allow users to watch live/catchup video coverage of the Torch including the ability to rewind live and to find the video of when the Torch passed through a particular location.
  • To allow users to determine how close the Torch comes to a location and when.
In a technology context, then, we needed to:
  • Get a video stream from the Torch location.
  • Get the location of the Torch itself.
  • Match them up.
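To give a flavour of what 'matching them up' means in practice, the core of the problem is aligning two time series: GPS fixes from the Torch and the timecode of the video stream. Here's a minimal sketch of that idea in Python - the data shapes and function names are our own illustration, not the production code:

```python
from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

@dataclass
class Fix:
    time: datetime   # when the GPS fix was recorded
    lat: float
    lon: float

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def video_offset_for_location(fixes, stream_start, lat, lon):
    """Find the point in the video at which the Torch was closest to (lat, lon)
    by matching the GPS timestamp against the stream's start time."""
    nearest = min(fixes, key=lambda f: haversine_km(f.lat, f.lon, lat, lon))
    return nearest.time - stream_start   # a timedelta to seek to in the player
```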


Easy? Not quite.

This was a unique challenge, something the BBC hadn't tackled before. Seventy unrelenting days, at a minimum of 12 hours a day, left no room for soft-launching software releases.

The video solution was a marvel in itself, both in terms of delivering the video signal from location to BBC Television Centre, and how we integrated with the Torch location to produce the compelling features on the website.

In fact we can't do justice to it in a single post. Roger Mosey - Project Director London 2012 - discusses some of the basics in his blog post, but for this post we'll concentrate on how we located the Torch itself.

Getting the location

'Convoy mode'

For organisational purposes the Torch Relay is split into a number of 'modes'. For example, when the Torch is being driven from location to location in a vehicle (a converted horsebox!) this is known as 'convoy' mode.

The Torch often moved away from the designated 'running' route to a beauty spot or to an alternative mode of transport (a train, for example).

Unfortunately this presented us with a problem. At any given time we couldn't be sure where the Torch would be or what resources we would have available to track its location.

Convoy mode meant we would have the location of the BBC vehicle (and by proxy the Torch location) to help us get the location, but an alternative transport event could have been covered by any one of a number of local BBC TV crews.

To complicate matters further, some of these events meant that we couldn't rely on the convoy vehicle location at all. Sometimes it had to take a different route from the Torch altogether, or on some days it wasn't present at all (the Torch visited the Channel Islands and the Outer Hebrides, amongst other places where the convoy couldn't follow).

Added to this was the fact that, because of the complexity of the route (calculated to come close to 90% of the UK), we didn't have any significant data on the Torch's day-by-day whereabouts by the time we had to sign off the application design.

We needed something flexible and ultimately something that we could get away with if it went wrong.

Would we lie to you?

Of course not, but we were at times more calculating with the accuracy of the Torch location - we had to be.

Most of the time we were able to bring you the 'real' location of the Torch, but sometimes we just couldn't. In these situations we relied on the military precision with which the Torch Relay was scheduled to run.

So, for example, we would use real-time GPS data to locate the Torch when we knew we could comfortably get an accurate location.

As soon as we moved to a phase of the relay where we couldn't rely on the BBC vehicle to provide a location, we would switch to playback mode.

This involved interpolating timing points using the expected average speed for that section. It enabled us to play back the Torch location with some degree of confidence on any non-convoy section.
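As a rough illustration of that interpolation, assume each section of the route gave us scheduled timing points (a time and a distance along the route) and that the Torch moved at the section's expected average speed between them. The names below are invented for the sketch, not taken from the production system:

```python
from bisect import bisect_right
from datetime import datetime

# Each timing point pairs a scheduled time with a distance along the route (metres).
TimingPoint = tuple[datetime, float]

def distance_along_route(points: list[TimingPoint], when: datetime) -> float:
    """Interpolate how far along the route the Torch should be at `when`,
    assuming a constant (expected average) speed between timing points."""
    times = [t for t, _ in points]
    i = bisect_right(times, when)
    if i == 0:                      # before the first timing point
        return points[0][1]
    if i == len(points):            # after the last timing point
        return points[-1][1]
    (t0, d0), (t1, d1) = points[i - 1], points[i]
    frac = (when - t0).total_seconds() / (t1 - t0).total_seconds()
    return d0 + frac * (d1 - d0)    # metres; mapped onto the route geometry elsewhere
```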

Generating this playback route was a slow process: our production of the correct route playback geometry was typically no more than 7-10 days ahead of the current day at any given time.

Not getting lost

The popularity with which the Torch Relay was met soon caused us problems. It quickly became clear that, try as they might, keeping the Torch to schedule was going to be extremely difficult due to the large numbers visiting the route.

Seamlessly snapping the Torch location between a 'pre-created' route and its real location would have involved a complex set of heuristics - more complex than the business logic we had already implemented to, for example, mark a location as not having been visited even when the route twisted and turned in close proximity to that location.

Instead we opted for a set of manual controls via an API that allowed us to modify its location. Here's an example.

Service Summary
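We can't reproduce the real interface here, but conceptually the manual controls sat on top of the automatic feeds: an operator could switch between live GPS and scheduled playback, or pin the Torch at a known point when neither could be trusted. The sketch below is hypothetical - the class and method names are ours, not the production API:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Callable, Optional

@dataclass
class TorchPosition:
    lat: float
    lon: float
    timestamp: datetime
    source: str   # 'gps', 'playback' or 'manual'

class TorchLocationService:
    """Hypothetical sketch: pass through live GPS or scheduled playback,
    with manual overrides for when neither feed can be trusted."""

    def __init__(self, gps_feed: Callable[[], TorchPosition],
                 playback_feed: Callable[[], TorchPosition]):
        self._gps = gps_feed            # returns the latest real GPS fix
        self._playback = playback_feed  # returns the scheduled/interpolated position
        self._mode = 'gps'
        self._override: Optional[TorchPosition] = None

    def set_mode(self, mode: str) -> None:
        if mode not in ('gps', 'playback'):
            raise ValueError(mode)
        self._mode = mode
        self._override = None           # changing mode clears any manual pin

    def pin(self, lat: float, lon: float) -> None:
        """Operator override: hold the Torch at a fixed point."""
        self._override = TorchPosition(lat, lon, datetime.now(timezone.utc), 'manual')

    def current(self) -> TorchPosition:
        if self._override is not None:
            return self._override
        return self._gps() if self._mode == 'gps' else self._playback()
```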

This worked beautifully. It allowed us to cope with losing the signal completely, with the incoming signal being wildly inaccurate when the convoy vehicle couldn't follow the Torch, and with jumping big chunks of time and/or distance to keep the live point as accurate as we could make it.

How did we do?

As you would expect from an event of this type the traffic followed a fairly typical long-running event model but with some interesting patterns.

Olympic hysteria generated an enormous initial spike: Day One (19th May), Land's End to Plymouth, gave us over one million page requests with upwards of 250,000 unique users.

A few days in we saw the biggest consistent audiences - in fact all the biggest days were in that first week - but after that traffic started to tail off from that initial peak.

What is interesting is that one could draw correlations between population density and traffic levels. If this is the case it would fit very well with the intensely 'local' flavour of the Torch.

People relatively close to the Torch on that day were perhaps more likely to follow the relay than those further away.

For example, on our pages that reflected a single given day (rather than the current day), we saw the highest overall numbers from each of the London days. On these days the Torch travelled over a much smaller geographic area in a more compact form. Likewise, we saw significantly lower figures on more rural routes.

It should also be noted there were high-traffic days that can be attributed to 'celebrity' torchbearers.

Tidying up after ourselves

One final thing worth discussing is something not often discussed!

Of paramount importance to the 2012 project was how to tidy up after yourself and be a good temporary-product citizen on a platform of finite capacity.

The BBC platform is a large and complex technology stack servicing millions of requests per day - it's not sustainable to continue to consume resources and place dependencies on services when demand is very low.

The Torch Relay is a largely temporary product that can't continue to function in its existing form, so it was designed very much with this in mind: at all stages of the application design we used existing components where possible.

As an example, have a look at the diagrams below.

Before

The first shows, broadly accurately, the architecture we originally designed - this made use of existing services wherever possible to reduce costs and simplify decommissioning the service. As you can see there are a large number of dependencies and services in the request chain.

Original application design

As 2012 team members move on to new projects we cannot support this number of dependencies or, more importantly, continue to be a dependency on these services as they migrate and change.

After

Here we have reduced the number of dependencies without compromising too much of the audience proposition.

The blue lines represent requests to a set of 'stubbed' or flattened assets (flattening here is the process of taking the content returned by an HTTP request and saving it as a 'flat' file on disk, allowing us to switch off the 'live' API), which were previously serviced via the red lines.

This change has been implemented simply by looking at the file system rather than using HTTP. The code is largely untouched.
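As a hypothetical sketch of that swap (the paths, URLs and function names are invented for illustration), the data-access layer keeps the same shape while its backing source changes from a live HTTP call to a flattened file on disk:

```python
import json
from pathlib import Path
from urllib.request import urlopen

# Hypothetical locations - the real API and archive paths are not shown in this post.
LIVE_API = 'https://torch-api.example.internal'
FLATTENED_ROOT = Path('/var/flattened/torch-relay')

def fetch_live(resource: str) -> dict:
    """Original behaviour: call the live API over HTTP."""
    with urlopen(f'{LIVE_API}/{resource}') as resp:
        return json.load(resp)

def fetch_flattened(resource: str) -> dict:
    """Decommissioned behaviour: read the same payload from a flat file,
    captured from the live API before it was switched off."""
    return json.loads((FLATTENED_ROOT / f'{resource}.json').read_text())

# Calling code is unchanged; only the fetch function behind it is swapped.
fetch = fetch_flattened
```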

Some minor modifications to the user interface have been put in place (people no longer need to search for how close the Torch comes to them for example) and we're pretty much done.

It's often overlooked but how you leave a platform can be just as important as the work you do to get there in the first place.

Design with reduced number of dependencies

Summary

The Torch Relay was a fantastic project to work on and yet another reason why working on technical architecture at the BBC can be so rewarding.

It represented some unusual and important technical challenges to Future Media and the wider BBC, and was a great example of how a flexible approach to developing and deploying software can be successful when there is very little certainty about the event you are building for.

We explicitly designed a site that could provide a soak test for the BBC Olympic video platform and be cleanly downgraded and archived, releasing dependencies on core BBC services.

Further to that, the BBC gained some key technical understanding, valuable both in the run-up to the Olympics and beyond.

As broadcast technology becomes more mobile and more integrated with IP delivery, the architecture used to deliver the Torch Relay may provide a glimpse into the future of how we deliver fast-moving events and provide that key 'local' flavour so important to putting the audience at the heart of the story.

Mike Brown was previously technical architect on the Torch Relay website and is now Technical Lead for BBC Connected Studio.

Comments

  • Comment number 1.

    Hi Mike,

    There was a promise at some point to put the full torch relay video online including the bits where coverage dropped out. I was looking forward to watching the torch pass a few locations I know well, but each had poor reception.

    Meanwhile, all the video now seems broken on the torch relay site.

    Any chance of a little love for the flattened site?

    Chris

