BBC R&D

Archives for March 2012

IRFS Weeknotes #101

Pete Warren | 11:12 UK time, Friday, 30 March 2012

"Weeknotes 101 : The worst weeknotes in the world?"

If George Orwell were a member of the BBC R&D IRFS team, he would no doubt be wearing his signature wry smile under his well-trimmed dystopian moustache, given that we're now on the 101st edition of our weekly survey of our activities -- but he isn't, so he doesn't.

Well, it seems Spring has sprung as the clocks sprang forward, with a few members of the team taking some holiday time just as the weather brightened, but there is still plenty of activity in the Central Lab.

Read the rest of this entry

Prototyping Weeknotes #100

Tristan Ferne | 17:00 UK time, Tuesday, 27 March 2012

This is the Prototyping team's 100th weeknote. We started the experiment at the start of 2010. At that time we were kicking off two projects: one resulted in Music Trends and one became Zeitgeist, while Music Resolver has been informing some more recent work in the BBC. Since then we've built scores of prototypes, pushed a few things through to production and standardisation, expanded, changed and moved offices. And what about now? I just went round the office asking people.

Read the rest of this entry

Royal Visit to MediaCityUK and BBC R&D

On Friday 23rd March 2012, Matthew Postgate, Controller of BBC R&D, myself and members of the User Experience & Accessibility team were given the honour of presenting a short demonstration of our work to Her Majesty The Queen and His Royal Highness The Duke of Edinburgh.

This was part of the Royal visit to MediaCityUK, Salford, to formally open the BBC North buildings and studio facilities and to start the Sport Relief Mile. In glorious sunshine and in front of a large crowd, Her Majesty's visit was a great success for all involved, from the BBC to its partners in Salford, including Peel Media. The visit was captured by our colleagues in News.

The Queen arrives at MediaCityUK in her car

On their tour of the BBC facilities, Her Majesty & His Royal Highness spent a short time at the R&D exhibition space, where Matthew & I were able to give them a brief history of R&D, the role we play in the BBC, our relevance to the UK media industry, and our continuing work with academic partners.

Dr Michael Evans, Lead Engineer on the team, then explained our work to improve accessibility in "connected homes", where internet technology allows televisions to work with our personal, accessible devices. In the 'connected home', it is feasible to envisage reading the subtitles (or watching signing for those who are hard of hearing) on your tablet computer, hearing the audio description (or a foreign language) on your smartphone, or even reading it on your braille device. All of these devices are synchronised and working together (using our work on a Universal Control specification), maintaining a shared experience even for families and groups with a wide range of accessibility requirements.

Having created this technology for disabled users, we were able to show how it can also be used to deliver additional content about the programme, such as the BBC Online companion, which could benefit everyone at home.

Finally, Liz Valentine, one of our Junior Research Scientists, explained to the Royal visitors her current field work on "Accessible Single Switch Remote Control" at Beaumont College, Lancaster. Beaumont is a facility which supports learners aged between 18 and 25 with a broad range of physical and learning disabilities and aims to empower them to take responsibility for their own lives. Working with these inspiring young people, who aren't able to use TV remote controls or other conventional interfaces, the team has been developing ways in which the students can use head trackers and single switches, usually built into their wheelchairs, to take control of their media devices and access the BBC's channels & content independently. The short film below will give you a sense of the work we presented.

Both The Queen & The Duke of Edinburgh had plenty of questions and showed an interest in the work, as did other senior visitors on the day.

Accessibility is just one part of how R&D serves the BBC's audience, but it is one that is possibly not very well known. I'd like to thank the R&D staff involved in the work shown and in the demonstrations themselves for their effort: Mike, Liz, Matt Brooks, Alex Rawcliffe, Steve Jolly, Brendan Crowther, Kevin Claydon, Sharon Martin and Alice Whittle.

We'd especially like to thank the students & staff of Beaumont College for their continued engagement with our research work and for allowing us to film them. Thanks also to Chris Sizemore & Andy Littledale from the BBC Online Knowledge & Learning product, and to the team at the Natural History Unit for their support with the Frozen Planet material and the 'Dual Screen Companion'.

Finally, our thanks to Peter Salmon & the BBC North Board for their continued support in creating this fantastic opportunity for R&D and Future Media to showcase our work. The Royal visit was a great success and represents a really positive milestone for the whole of the MediaCityUK project.

We are very proud to have been part of a special day for the BBC.

Dr Adrian Woolard, Project Director, North Lab.

Automatically tagging the World Service archive

Yves Raimond | 16:55 UK time, Tuesday, 20 March 2012

A couple of months ago, Dominic blogged about ABC-IP, a collaborative project with MetaBroadcast looking at unlocking archive content by interlinking it with further data sources. In this post we describe some work we have been doing on automatically tagging speech audio with DBpedia identifiers.

The World Service archive

One dataset we are looking at within this project is the World Service archive. This archive is isolated from other programme data sources at the BBC, like BBC Programmes or the Genome Project, and the programme data associated with it is very sparse. It would therefore benefit a lot from being automatically interlinked with further data sources, which makes it a particularly interesting use-case. The archive is also very large: it covers many decades and consists of about two and a half years of high-quality continuous audio content.

Automated semantic tagging of speech audio

One way of dealing with such a large programme archive, with patchy metadata but high-quality content, is to use the content itself to find links with related data sources. For example, if a programme mentions 'London', 'Olympics' and '1948' a lot, then there is a high chance it is talking about the 1948 Summer Olympics. Using the structured data available in Wikipedia, we can then draw a link between a recent programme about the 2012 Games and that archive programme, and use that link to provide further historical context.

When developing such an algorithm we need to take into account a couple of desirable properties: it needs to be efficient enough to be applicable to a large archive, and it needs to use an unbounded target vocabulary, as programmes within an archive can be about virtually anything.

We therefore built such a 'semantic tagger', automatically assigning tags drawn from DBpedia (which publishes structured data extracted from Wikipedia) to speech radio programmes.

We start by automatically transcribing the audio using automated speech recognition tools. The resulting transcripts are very noisy - there are lots of different accents in the archive, and it covers a lot of genres and topics. They also don't include any punctuation or capitalisation, on which most existing Named Entity Extraction tools rely heavily. We then build a dictionary of terms from a list of labels extracted from DBpedia and look for those terms in the automated transcripts.

In order to disambiguate and rank candidate terms, we use an approach inspired by the Enhanced Topic-based Vector Space Model. We consider the subject classification in DBpedia, derived from Wikipedia categories and encoded as a hierarchy. We start by constructing a vector space for those categories, capturing the hierarchical relationships between them. Two categories that are siblings will have a high cosine similarity. Two categories that do not share any ancestor will have a null cosine similarity. The further away a common ancestor between two categories is, the lower the cosine similarity between those two categories will be. We published an implementation of such a vector space model and wrote about it in more detail in our upcoming paper.
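To make the geometry concrete, here is a minimal sketch of one way such a category space could be built. The toy hierarchy and the exponential decay on ancestor weights are our illustrative choices, not necessarily those of the published model:

```python
from math import sqrt

# Hypothetical toy hierarchy mapping each category to its parent; the real
# model is built over the full set of Wikipedia-derived DBpedia categories.
parents = {
    "English composers": "Composers",
    "German composers": "Composers",
    "Composers": "Musicians",
    "Antibiotics": "Medicine",
}

def ancestors(category):
    """Walk up the hierarchy, yielding (ancestor, depth) pairs."""
    depth = 1
    while category in parents:
        category = parents[category]
        yield category, depth
        depth += 1

def category_vector(category, decay=0.5):
    """A category's vector weights itself at 1.0 and each ancestor at
    decay**depth, so a more distant common ancestor contributes less."""
    vec = {category: 1.0}
    for anc, depth in ancestors(category):
        vec[anc] = decay ** depth
    return vec

def cosine(u, v):
    dot = sum(w * v.get(k, 0.0) for k, w in u.items())
    norm = lambda vec: sqrt(sum(w * w for w in vec.values()))
    return dot / (norm(u) * norm(v))

# Siblings share a close common ancestor, so their similarity is non-zero...
print(cosine(category_vector("English composers"),
             category_vector("German composers")))  # ~0.24
# ...while categories with no common ancestor have null similarity.
print(cosine(category_vector("English composers"),
             category_vector("Antibiotics")))       # 0.0
```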

We consider a vector in that space for each DBpedia web identifier, corresponding to a weighted sum of all the categories attached to it. We then construct a vector modelling the whole programme by summing the vectors of all possible corresponding DBpedia web identifiers for all candidate terms. DBpedia identifiers corresponding to wrong interpretations of specific terms will account for very little in the resulting vector, while web identifiers related to the main topics of the programme will overlap and add up. For each ambiguous term, we pick the corresponding DBpedia web identifier that is closest to that programme vector. We then rank the resulting web identifiers by considering their score (taking into account how often the corresponding term is mentioned in the programme and how specific to the programme that term is) and their distance from the programme vector.
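The following self-contained sketch illustrates that disambiguation step. The category-space vectors and candidate readings are made up for the example; the real ones come from DBpedia:

```python
from math import sqrt

def cosine(u, v):
    dot = sum(w * v.get(k, 0.0) for k, w in u.items())
    norm = lambda vec: sqrt(sum(w * w for w in vec.values()))
    return dot / (norm(u) * norm(v)) if u and v else 0.0

# Hypothetical category-space vectors for each candidate reading of each
# term found in the transcript (stand-ins for real DBpedia data).
candidates = {
    "holst": {
        "dbpedia:Gustav_Holst": {"English composers": 1.0, "Composers": 0.5},
        "dbpedia:Holst_(crater)": {"Craters of the Moon": 1.0},
    },
    "planets": {
        "dbpedia:The_Planets": {"Compositions": 1.0, "Composers": 0.5},
        "dbpedia:Planet": {"Astronomy": 1.0},
    },
}

# Programme vector: the sum over all readings of all candidate terms.
# Wrong readings contribute little; readings that share the programme's
# real topics overlap and add up.
programme = {}
for readings in candidates.values():
    for vec in readings.values():
        for k, w in vec.items():
            programme[k] = programme.get(k, 0.0) + w

# Disambiguate each term by picking the reading closest to the programme.
for term, readings in candidates.items():
    best = max(readings, key=lambda ident: cosine(readings[ident], programme))
    print(term, "->", best)   # both resolve to the Holst/music readings
```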

We end up with a ranked list of DBpedia identifiers for each programme. For example, a 1970 profile of the composer Gustav Holst and a 1983 episode of the Medical Programme each get tagged with a small set of relevant identifiers.

We evaluated the results against 150 programmes that have been manually tagged in BBC Programmes and found that the results, although by no means perfect, are good enough to efficiently bootstrap the tagging of a large collection of programmes. The results of our evaluation will be published as part of our LDOW paper.
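As a hedged illustration of what such a comparison can look like, here is one simple way to score automatic tags against editorial ones. The tag sets below are made up, and the paper's actual methodology may well differ:

```python
# Compare a programme's automatic tags against its editorial (manual) tags.
def precision_recall(automatic, manual):
    hits = len(automatic & manual)
    precision = hits / len(automatic) if automatic else 0.0
    recall = hits / len(manual) if manual else 0.0
    return precision, recall

automatic = {"dbpedia:Gustav_Holst", "dbpedia:The_Planets", "dbpedia:Opera"}
manual = {"dbpedia:Gustav_Holst", "dbpedia:The_Planets", "dbpedia:Conductor"}
print(precision_recall(automatic, manual))  # (0.667, 0.667)
```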

Processing the World Service archive

Applying such an algorithm to a very large archive is a challenge. Even though the tagging step is quite fast, the transcription step is slightly slower than real-time on commodity hardware. However, all steps apart from the final IDF step can be parallelised, so we can throw a lot of machines at the problem to process an archive relatively quickly. We developed a message queue-based system to distribute computations across a large pool of machines, and an API aggregating all the data generated at each step of the processing workflow.
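As an illustration of the pattern (not the actual system, whose queueing technology we don't name here), a minimal sketch with a shared job queue feeding a pool of workers might look like this, with threads standing in for the real pool of machines:

```python
import queue
import threading

jobs = queue.Queue()

def process_programme(programme_id):
    """Placeholder for the per-programme pipeline: fetch the audio,
    transcribe it, look up candidate terms, disambiguate and tag."""
    return f"{programme_id}: tagged"

def worker(results):
    # Each worker pulls programme IDs until the queue is drained.
    while True:
        try:
            programme_id = jobs.get_nowait()
        except queue.Empty:
            return
        results.append(process_programme(programme_id))
        jobs.task_done()

for i in range(100):
    jobs.put("prog_%04d" % i)

results = []
threads = [threading.Thread(target=worker, args=(results,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(results), "programmes processed")
```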

Each EC2 'Compute Unit' has relatively predictable performance (for example, we transcribe 60 minutes of audio in 80 minutes on one Compute Unit), which means that the price of processing an entire archive can be estimated before running the computation. That price also doesn't depend on the time it takes to process the archive: throwing 100 machines at the problem will get results quickly, for the same price as running 10 machines for 10 times longer. The only bottleneck is the bandwidth at which we can send audio to Amazon's servers, which meant we could only process about 20,000 programmes per week.
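A quick back-of-the-envelope using the figures above shows why the cost is fixed while the wall-clock time scales with the number of machines. The hourly rate is a made-up placeholder:

```python
# Rough numbers from the post: ~2.5 years of audio, transcribed at
# 80 minutes of compute per 60 minutes of audio on one Compute Unit.
AUDIO_HOURS = 24 * 365 * 2.5        # ~21,900 hours of audio
REALTIME_FACTOR = 80 / 60           # compute hours per audio hour
RATE_PER_CU_HOUR = 0.10             # hypothetical $/Compute-Unit-hour

compute_hours = AUDIO_HOURS * REALTIME_FACTOR
print(f"total compute: {compute_hours:,.0f} CU-hours")
print(f"estimated cost: ${compute_hours * RATE_PER_CU_HOUR:,.0f}")

# Wall-clock time scales down with machine count; the cost does not.
for machines in (10, 100):
    days = compute_hours / machines / 24
    print(f"{machines} machines -> ~{days:.0f} days")
```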

Then, at regular intervals, a script sets the final ranking of the resulting tags and pushes the data over to MetaBroadcast's systems, where the automated tags show up alongside their other programme data.

Upcoming publications

The underlying algorithm is described in more detail in a paper accepted at LDOW 2012, part of the WWW 2012 conference. The application of that algorithm to the entire World Service archive is described in an upcoming WWW'12 paper accepted in the demo track. We will post pointers to the papers as soon as they are published.

Next steps

There is quite a lot more we could do to make this automated tagging algorithm work better. One of the first things we could improve is the quality of the speech recognition: we currently use off-the-shelf acoustic and language models, and we could probably get better results with models trained on similar data. We are also looking at automated segmentation. Most programmes deal with a few different topics, and it would be interesting to isolate topic-specific segments. We also recently started some work aimed at automatically identifying contributors in programmes.

Prototyping Weeknotes #99

Chris Godbert | 11:38 UK time, Monday, 19 March 2012

This week the project focus is Recommendations, and Chris Newell, one of our Lead Technologists, gives us an overview of the team's recent work. This week saw the publication of a final paper from the project. Following the completion of the project we have continued to explore recommender systems and their user interfaces. For most of this work we use an open source library which was developed by the project; it supports a wide range of recommender system algorithms and provides a framework for testing and evaluation. Using the Java version of the library and Apache Tomcat we have built a Web API that any of our projects can use to integrate recommendations into their prototypes and interfaces.
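As a purely illustrative sketch, a client of such a recommendations Web API might look like the following. The endpoint shape, parameters and response format are guesses for the example, not the team's actual interface:

```python
import json
import urllib.parse
import urllib.request

def get_recommendations(base_url, user_id, limit=5):
    """Ask a hypothetical recommender service for a user's top items."""
    query = urllib.parse.urlencode({"user": user_id, "limit": limit})
    with urllib.request.urlopen(f"{base_url}/recommendations?{query}") as resp:
        return json.load(resp)

# e.g. get_recommendations("http://localhost:8080/recommender", "user42")
# might return [{"item": "b012xppj", "score": 0.87}, ...]
```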

Read the rest of this entry

A Day at the University of Southampton

Rosie Campbell | 15:00 UK time, Tuesday, 6 March 2012

If you're reading this, you're probably already aware of the innovative creative and technical work undertaken by BBC R&D. However, despite the department's long and impressive history, when I tell people what I do I often get the response 'R&D? I didn't even know the BBC had an R&D department!' In fact, it's a problem that has manifested itself in our graduate recruitment process: many of the applicants just don't have the relevant technical skills, and we aren't reaching enough of those that do. It's understandable: when people think of the BBC they probably think of TV, radio, journalism and media - while the many opportunities for technical graduates in the organisation may not be immediately obvious.

So that's how Rod, Mark and I (two of us trainees and the other an ex-trainee) found ourselves on the road to the University of Southampton to attend their Science and Engineering careers fair on the 7th of February this year.

Read the rest of this entry

Prototyping Weeknotes #98

George Wright | 11:13 UK time, Tuesday, 6 March 2012

This week's weeknotes are brought to you by the letter 'R' and the sound of the new Magnetic Fields LP, and the highlighted project is 'Roar to Explore'.

Read the rest of this entry
