Developing the World Service archive prototype
The time-span for developing the prototype was very short. We wanted to be able to demonstrate it at an event in early September, where our section had been offered a booth, which gave us two months.
Quite early on, we realised that in order to get to something viable in such a short time, we needed to involve a small user community from the start. To do so, we put a first version of the prototype online very quickly (mid-July), pointed a few hundred users at it, and started to gather feedback. This feedback helped us understand user needs more precisely and prioritise features of the prototype. We will describe our development process during these two months in two blog posts: this one focuses on the engineering work, and the next one focuses on the user experience of the prototype.
We also decided quite early on to work on top of a simple and fast triple store. Triple stores are a good match for the problem we were trying to solve: prototyping quickly on top of data extracted from a very varied range of sources (automated tagging tools working from audio or text, contributor identification tools, image data, the original World Service database, etc.). Basically, all those tools can just push data, expressed as RDF, to that store, without any assumption on how that data is going to get used. Custom feeds can then be generated from that store using SPARQL queries executed by the prototype. We also use Solr for our search, indexing textual data available in the triple store. All the audio (around 70,000 programmes) is served from Amazon S3.
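To make the shape of those custom feeds a bit more concrete, here is a minimal sketch using the Ruby sparql-client gem; the endpoint URL, graph layout and predicates are assumptions for illustration rather than the prototype's actual setup.

```ruby
require 'sparql/client'

# Hypothetical SPARQL endpoint; the prototype's actual store location
# and vocabulary choices may differ.
STORE = SPARQL::Client.new("http://localhost:8080/sparql/")

# A simple "custom feed": programmes tagged with a given subject URI.
def programmes_tagged_with(tag_uri, limit = 20)
  results = STORE.query(<<-SPARQL)
    PREFIX po: <http://purl.org/ontology/po/>
    PREFIX dct: <http://purl.org/dc/terms/>
    SELECT ?programme ?title WHERE {
      ?programme a po:Programme ;
                 dct:title ?title ;
                 dct:subject <#{tag_uri}> .
    } LIMIT #{limit.to_i}
  SPARQL

  results.map { |s| { uri: s[:programme].to_s, title: s[:title].to_s } }
end
```

Because the store simply accumulates whatever statements the various extraction tools push into it, queries like this one can evolve independently of those tools.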
On top of Solr and the triple store, we built a web application using a SPARQL client gem (which we contributed to, adding support for some new SPARQL 1.1 features), and a simple library inflating model objects from SPARQL queries. In order to run integration tests, we use a lightweight triple store, for which we built some supporting tooling. We store all the user contributions (e.g. new tags, validations and invalidations of tags) in a relational database. Each of these contributions points to the URI of an item described within the triple store. We also used a number of supporting tools and libraries: to write our JavaScript, to store expensive computations in memcached, to manage users, authentication and authorisation, to talk to Solr, for configuration management, for continuous integration and deployment, and to access our audio content on Amazon S3. When users make new edits in the prototype, we use a background job to re-index the corresponding data in Solr. We also wrote two new gems for parsing content from Wikipedia.
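As a rough sketch of how the contribution and re-indexing flow described above could hang together, the snippet below uses a hypothetical ActiveRecord model and a Resque-style background worker; the class names, columns and helper objects are illustrative assumptions, not the prototype's actual code.

```ruby
# Hypothetical model: one row per user contribution, pointing at the
# URI of the item described in the triple store.
class Contribution < ActiveRecord::Base
  # e.g. kind: "tag_added", "tag_validated" or "tag_invalidated"
  validates :resource_uri, :kind, :user_id, presence: true

  # Once the contribution is safely stored, queue a re-index of the
  # affected item so search reflects the latest crowd-sourced metadata.
  after_commit :enqueue_reindex, on: :create

  private

  def enqueue_reindex
    Resque.enqueue(ReindexItem, resource_uri)
  end
end

# Hypothetical Resque-style worker re-indexing a single item in Solr.
class ReindexItem
  @queue = :indexing

  def self.perform(resource_uri)
    triples  = TripleStore.describe(resource_uri)   # assumed helper
    document = SolrDocument.from_triples(triples)   # assumed helper
    SolrIndexer.add(document)                       # assumed helper
  end
end
```

Keeping contributions in a relational table while the programme metadata stays in the triple store means each edit only has to carry a URI, and the indexing job can always rebuild the Solr document from the current state of the store.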
Where possible, we tried to ensure that the audience-facing site worked across a range of modern devices. To help with this, we used a front-end framework to provide some baseline user interface (UI) elements, although the design required us to make adjustments in order to fit the ³ÉÈË¿ìÊÖ's Global Experience Language (GEL).
We followed an approach of organising our CSS into base styles, layouts and reusable modules. Using a CSS preprocessor helped, as we could share and reuse common properties such as theme colours and helper functions.
The audio player went through several iterations, starting with the audio.js library. Audio.js is simple and easy to use, but we found that it didn't give us the control we required, so we implemented a custom UI based on a more flexible library. The new UI allows the media player to scale to the width of the viewport dynamically, which is something we couldn't find in many existing players.
The image picker and homepage carousel (in earlier versions) used a library which implements a lightweight carousel using CSS3 animations and supports touch gestures, improving the experience on tablet devices.
All our components fire events that are listened to by the page controllers, allowing us to easily collate user actions in our analytics system. This gives us feedback on how our users interact with the site so that we can improve the user experience, develop new features and build richer metadata.