Category Archives: ODcamp 2015

Design and Data: how do we bring them together?


Warning: Liveblogging – prone to error, inaccuracy, and howling affronts to grammar. This post will be improved over the course of a few days.

Session hosted by Simon Gough, ODI Devon

Global Service Jam is a two day event exploring using service design to build human-centred tools. It’s not something data is mentioned much at. On the other hand, hackathons are often all about data. The two don’t seem to meet.

He’s involved with #dataloop – a tool for exploring data as a designer.

  • Infographics are an interesting one. They’re very much about presenting data in an easily digestible format. What is the design process behind the infographic that explores reader need? Scale of audience? Understanding the data – and what you want it to do.
  • Visualisation tools – there are plenty of them, but how much traction they get? Service design methods tend to be qualitative, rather than quantitative.
  • Personas are a core point of most service design. Can you consider that persona’s need for data? Of course, and so that should be part of the process. If you’re not used to working with data, though, that can be a fairly abstract process.

Are there other parts of this process that we can bring data into?

Developers can be more comfortable with design (because they use it), while designers tend to be uncomfortable with data. That said, you can see resistance to designers in hack days.

Can we make this process easier by working within a very specific challenge that helps create the right focus by reducing the complexity?

Psychology versus process

One software developer suggests we’re dealing with a conflict between a psychological approach (design) and process approach (software). Two different dynamics – so can there be a single solution? Well, this is where the defined challenge comes back into play. In more open situations – like hack days and the service jam – you have this problem. Without a context, how can designers explore data?

Open source is a common point: software for data, frameworks (personas, journey maps, blueprints) for designers. Both professions use forms of frameworks to shape their work, but they’re not really aware of each other’s tools.

This is not a tools or methodology problem, suggest one attendee, but a cultural one. And another suggests that is where a project manager comes into play, and can be vital to bridging this divide. You need to know a reasonable amount about how someone else works to co-operate with them well.

Specialism or democratisation?

We’ve seen journalists emerging as data journalists – but it’s a core group of specialists right now. Will we see data specialist designers? Or is the increasing complexity of data and data formats making it hard for that sort of specialism to emerge?

Design is moving in the opposite direction – democratisation of design through co-creation, for example.

We have exponentially larger amounts of data available, with a geometric rise in connections. That again creates more opportunities, but again makes things harder.

Maybe agile approaches – working on cross-functional teams on constrained problems on sprints – might be one approach. Devops was a cultural change, that lead to a whole bunch of new tools to facilitate that way of working.

Tools can ease the learning curves of taking a designer mindset and applying it to data work, without shackling it to the designer’s initial way of thinking, as early web development software did.

Further notes and links.

Food, hygiene and the open data challenge

Warning: Liveblogging – prone to error, inaccuracy, and howling affronts to grammar. This post will be improved over the course of a few days.

ODCamp 21-02-15_14_Food_Standards_Agen

Hosted by Dr Sian Thomas, Food Standards Agency

The Food Standards Agency has a big commitment to open data – but is honest that it’s not always in a useful format. Dr Thomas asked for suggestions for improving that, and the room had plenty of ideas…

The more ways of accessing the data, the better was the message: RSS, CSVs, APIs, etc. Tab separated data is “old fashioned” – but pretty easy to deal with. However, she’s only got a team of four, and is responsible for a lot more than open data (like date protection, FoI, and so on…). They’re dependent on other data-collecting organisations opening up what they do.

Supply chain open data could be a really interesting perspective, especially for the rural part of the economy. DEFRA has a lot of open data on that. But once it enters the supply chain it becomes commercial data, and no-one releases that. Some supermarkets release some data, but far from all, and in theory you can do more down the packaging chain. By law you need to know one step above and one step below – who you bought it from and who you sell it to. It’s not a standard format, though. Also, food is traded as a commodity, so it often changes had without physically moving. That said, DEFRA is right at the top of the list of bodies that release data.

Data quality: how do authorities describe supermarket canteens? As the company it’s in – or the contract catering company actually running it. There is a standards quality programme – but there are cultural factors that come into play. For example, in a more affluent area the forms of food consumed might be inherently more risky – rare meat and chicken liver paté. They notice the quality issues most in Wales, where there’s mandatory scores on the doors of the rating, and that’s changed things.

The App Gap

There are lots of apps around some of this data – but they never seem to get past competition wins into existence, or at least into consumers’ hands. Maybe they should approach people like Yelp and TripAdvisor? It’s been mooted before. There’s strong correlation between their scores and food hygiene ratings. Maybe they could be used as a trigger for reinspection?

Could food hygiene data enrich open street maps? Sure. Pub data to highlight pubs they don’t have marked right now, or warning signs for dodgy takeaways. But address data is a problem – what do you do about hospital sites, with multiple outlets on a postcode, or a great restaurant next to that dodgy takeaway.

Updates are a problem too – we’re only getting an annual snapshot of more rapidly updated date. Could we get an RSS feed of changes, for example? Parsing the existing XML can be tricky. In Belfast people use backslashes in range addresses that breaks a lot of operations.

Accounting for allergies

Food contamination alerts for allergies need more work. They’d really like to take the RSS feed of allergy updates, and make them filterable by specific allergy, but they’re not allowed to invest in that kind of service. Could you relate that to barcode scanning? Yup, in theory. That would allow some apps to check for the update.

Allergies are a complex area – we have undiagnosed people, we have inaccurately self-diagnosed people, and not comprehensive picture of what foods are creating the biggest issues. There are some files available on the Food & You section of, and generally decent figures on the diagnosed people.

Food poisoning outbreaks are hard to pinpoint quickly – unless it can be identified via social media. For example, an outbreak via a curry festival was identified by social media before the labs managed to do so.

Building an Open Addresses database – and opening its APIs

Warning: Liveblogging – prone to error, inaccuracy, and howling affronts to grammar. This post will be improved over the course of a few days.

Gianfranco Cecconi & James Smith

Open Addresses are trying to build a huge addressing dataset from scratch, fighting the monsters and competitors that involves. They believe that addresses are a key asset of the national information infrastructure – and we need to liberate those addresses – or that was the pitch to the Cabinet Office.

The problem is huge.

They started with the assumption that they could build their dataset from existing open data sets, that (by chance) have associated address information, without intellectual property issues – and a volunteer workforce would then develop it from there. The Royal Mail suggests that there are 60m addresses in the UK – but that’s delivery places. This project has a wider view of the idea of addresses. Your electricity meter or your drone delivery spot might be an address.

Surviving as a non-profit

They also need to survive financially. They try to be frugal – so they try to not get sued, but they also try to build services that can fund what they do. The early money from the Cabinet Office will not last for ever. They have APIs that you can use in your products and services – for free. But there will be value added services on top of that. For example, “give me a likelihood of how real an address is”. It’s not a trivial problem – but could be very useful for delivery services.

There is no UK master list of addresses – no gold standard. Everyone is working to build their database, and all have errors, but some are further ahead. Confirmation is needed on these addresses, and Open Addresses is built to deal with this doubt and uncertainty as they go.

While they do need money to survive, many of their basic services are free, because they need to be there.

Working with the Open Addresses API

The obvious thing: search the data. And that you can do via the API. Just three lines! But the completeness is limited right now – they only have 1.2m of those 60m addresses. You can submit addresses through an API called Sorting Office. Again, free for now. They’ll normalise the address for you – and you can donate it to them, but you don’t have to.

With informed consent from your clients, you can hand over addresses to us on a day to day basis – through Turbot. It’s a platform for managing scrapers, and is descended from ScraperWiki. (It went live last night – 20th February 2015.)

Want to more sophisticated analysis on a block of text with addresses in it? The address building blocks API allow you to perform detailed analysis and processing on that sort of data. That is likely to be the main source of revenues in the battle to survive. The confidence API will be made available, giving a confidence score on any address.

Building the database

Their biggest challenge ahead of them is building the addresses. There’s a privacy issue – and persuading people that sharing addresses is not the same as sharing personal information about yourself doesn’t really tell anything personal. The existence of an address is not personal information, it’s just a fact. You can walk down streets and write them down. But it feels private.

There’s also a corporate approach, working with companies that use addresses, but they need explicit permission from clients to share their addresses.

Further notes and links.

Open Data for Charities – opportunities and roadblocks

Warning: Liveblogging – prone to error, inaccuracy, and howling affronts to grammar. This post will be improved over the course of a few days.


A session about use of open data by charities, inspired by the Data for Good report from Nesta.

Tracey Gyateng from the NPC is helping charities measure the work they do. How do they know, for example, if offenders stop offending after their work? Can government datasets help that? They think so, and are working on systems to help do that.

They also work with charities to open their eyes to the potential of data. Some of the rhetoric is around making money from data – but it can be used for charities to improve the well-being of people.

Many charities don’t know much about open data – or have the understanding to know how to release or access it.

360 Giving – an emerging data standard for grants. Policies are beginning to be published on Github to allow people to access them more.

Breaks in the data supply chain

Like the food supply chain, the data supply chain is broken. There’s no opportunity to thank the farmer that grew the supermarket food you bought. The same is true of the data flow in charities. You give to Comic Relief or the like, and there’s little feedback of what your money ends up doing, bar the few the film for the following year. We can engage the citizens that volunteer and donate more.

80% of charties have less than £100,000 in income – so it’s important to keep focused on that.

Mobile sensor feeds could be useful – combining sensor data and open data could be very useful. There are various projects underway on that.

Even experience with data is not as much of an advantage as you might think – problems with formats and understanding its nature can be difficult.

Charities: big and small

Are the challenges of open data for small charities and big charities different? One participant thought so, another suggested that if big charities lead, small charities can follow from that. But the University of Southampton research suggests that for small charities it’s much more about delivery than engagement.

Citizens Advice has had a lot of help from DataKind to help analyse their resources. They’ve produced some useful models, that smaller charities could use.

Local organisations often don’t think they have the time or organisation to collect data other than that required by contracts or law. As organisations, you do have data and information about your area that you could be sharing. The biggest problem is breaking the barrier of the procurement mindset: they are procured for that service and that service alone.

It would be great if the bigger organisations took on this modelling and passed it down the chain. So many of the small organisers are scared of the big funders and doing things they weren’t paid for.

Continue reading Open Data for Charities – opportunities and roadblocks

Banking on the Open Data Camp

I am bringing Open Data Camp a big fat data problem.

How many young people are homeless every year?

Whilst Centrepoint estimates that the figure is 80,000, the truth is, no-one really knows.

Government and hundreds of organisations are working to improve the situation for young people experiencing homelessness. However there are currently no ways to collect, track and measure the work being done on a national scale. We often operate in a vacuum, not knowing how many young people are homeless, or why; or which interventions are most effective.

Youth Homelessness Databank

We aim to change this. Centrepoint have recently won a grant from the Google Impact Challenge to create the UK’s only Youth Homelessness Databank. The Databank will collect and collate data from multiple sources:  the homelessness charity sector, local authorities, central government and other open datasets.

Holistic picture

Analysis and visualisation of these data will give us a holistic picture of the scale and causes of youth homelessness; and of the range and effectiveness of interventions. This will lead to a greater understanding of what works, better services, better funding decisions and ultimately better outcomes for young people experiencing homelessness.

What is an ‘ambitious and pioneering project’ on paper is a festival of moving parts in practice. My big fat data problem –

can we piece together data from the homelessness charity sector, local authorities, central government and open datasets to understand which young people experience homelessness, why this happens, and what works for them?

  • breaks into a thousand cuts of questions and poorly aligned data sets.


So I’m banking on #ODC to help me answer some of the following:

1. Data Flows

I would like to understand how youth homelessness data flows around the country.

Data somehow whizzes from beneficiary to assessment to beneficiary to provider to local authority and/or funder to DCLG to live data-table…. Can we map this journey?

2. FOIs.

Yes, FOIs. We have questions, me may have to FOI them. What works and what doesn’t?

3. Critical friends

Who are the critical friends within local authorities that we can talk to for the inside scoop? FOIs do not a collaborative project make!

4. What Databank can do for you?

I don’t ask what you can do for the Databank, I ask what Databank can do for YOU!

The UK spends up to £3.2 billion a year on youth homelessness. As we approach the impending fiscal cliff, what can the Youth Homelessness Databank do for YOU?

5. Systems talking to each other

Getting client management systems to talk to each other.

Any tips? On a post-it please!

Looking to the future!

Beyond #ODC, the Youth Homelessness Data Bank wants to hear from you –your contacts, your ideas and expertise on what data we should be collecting, which services/agencies we could be requesting information from and how we can offer young people experiencing homelessness opportunities to be involved.

Be in touch!

Contact me on Twitter: @la_gaia

Open Data and auto-discovery

Hi, my name is Christopher Gutteridge, I work for the innovation and development team of the University of Southampton, created the first version of their open data service and am one of the founders of </bragsheet>

For a long time I’ve been interested in open data from organisations. Each organisation owns its own data but there’s lots of value in many organisations publishing similar open data in similar ways. Your organisation isn’t special it almost certainly has some of:

  • sites, buildings, rooms, desks
  • people, teams, departments, job roles
  • key webpages: contact us, search, freedom-of-information, message from the boss
  • a product catalogue
  • places (physical or online) where you can get a service which may have opening hours and specific offers of a service at a price, from coffee to brain surgery to car parking
  • research outputs or publications
  • social media accounts
  • news and notices
  • events

The exact data you store or publish about these things may vary (this includes the links between things, eg people-in-buildings). However, the basic concepts should be the same for many organisations and we’ve been looking at ideas around how to share this information without the need for Google or Facebook to act as an intermediary. The route is cool, but it doesn’t solve the problem I want to solve because web crawling embedded data isn’t the best way to get a dataset. Also, there’s no trust that data found by crawling is really official information, and not just a demo but Jeff the PhD student.

At we have created a simple mechanism to discover such predictable information sets from an organisation from the web homepage. We are using this to autodiscover lists of research equipment in the UK academic sector and it has proved both effective and cheap (sustainable) while protecting the community from the risks normally associated with a hub that collates data suddenly going away. At the time of writing, 16 organisations, including 5 of the Russell Group, have implemented the OPD (organisation profile document), which is basically an auto-discoverable FOAF profile in Turtle which also describes the information sets an organisation has. While we’ve piloted this technique, it is by design anarchistic — anybody can expand and add to it. I want a web of data which doesn’t require silicon valley heavy hitters to let me work with open data.

Oh, there’s also which now has open data from 40 contributing institutions. Actually, there’s a whole lot of other datasets:

I’ll be attending the Open Data Camp on Sunday and I’d love to tell you more about our work, either one-on-one or maybe in a session.




This post was originally published on the Trafford Innovation and Intelligence Lab web site.

I am currently helping to organise an event called ‘Open Data Camp’, which is to be held in Winchester (it’s near Southampton), on the 21st and 22nd February 2015. We think that it’s definitely the first of its kind in the UK, and possibly the first in the world (or even the universe, depending on which side of the Drake Equation fence you sit on). The 21st of February also happens to be International Open Data day.

Open Data Camp is a two-day event, consisting of an unconference and maker-space. The focus of the event is entirely open data – the notion of making data available so that it can be reused by anyone, without any restrictions. Though the event is an unconference (which means the content of the day is decided by attendees at the beginning of the day), it is likely that there will be sessions looking at the National Information Infrastructure, technical challenges, and opportunities presented by open data, amongst lots of other things.

Who is doing this?

The campmakers are a ragtag group of open data people:

Mark Braggins (Hampshire Hub Partnership)
James Cattell (Cabinet Office)
Neil Ford (Events)
Hendrik Grothuis (Cambridgeshire County Council and Open Data User Group)
Martin Howitt (Devon County Council)
Lucy Knight (Devon County Council and LocalGov Digital)
Pauline Roche (Birmingham)
Giuseppe Solazzo (Open Data User Group)
Sasha Taylor (British Association of Public-Safety Communications Officials)
Sian Thomas (Food Standards Agency)
Jamie Whyte (Trafford Innovation and Intelligence Lab and LocalGov Digital)

Open Data Camp also has a number of excellent sponsors, without whom it would not be happening:

Hampshire County Council
Open Addresses
Food Standards Agency
Office for National Statistics
Ordnance Survey

Why are we doing this?

We are a group of people who are passionate about open data. We really feel that by opening data up, good things happen. There are many events held where open data is a supporting cast member – but at Open Data Camp – it’s the star of the show. To bring together 200 people for a weekend who are into open data is a brilliant opportunity to push open data forward.

Why are Trafford doing this?

Trafford has a history of doing open data well. We worked on setting up DataGM, we were the first Local Authority to be awarded a Pilot Level Open Data Institute Certificate, and we have recently been asked to work with the Cabinet Office as Local Experts in Open Data – working with a handful of other Councils who also do it well.

We use open data, as well as releasing it. We have recently used open data to identify priority sites for positioning defibrillators, apply for funding to support projects to reduce isolation in the elderly, and combined open and closed datasets to analyse cervical cancer screening rates, amongst many others.

Because of this, we have a vested interest in the wider open data picture. The more open data is released, the more we can use it to provide intelligence – through analysis and benchmarking. The better our intelligence is – the more informed our decision-making is.

But apart from the benefits that more data brings, there’s another good thing that’s happening because of the camp. The open data community is exceptionally talented, but is quite thinly distributed across the globe. Open Data Camp is being used as a touch point for some of these groups and organisations – the camp itself is now looking likely to connect with the Open Knowledge Foundation hack in London, Bath:Hacked, Greater Manchester Data Synchronisation Programme Lean Startup weekend, Ebola Open Data Jam, and ODI nodes. The mechanics of these link-ups are yet to be worked out, but the fact that these connections are forming is very good for the open data movement.

How can you get involved?

All the tickets for Open Data Camp have now been sold (or rather allocated – it’s a free event). I will blog about the event once it has happened, with outcomes, outputs, challenges, etc. We (the campmakers) will be tweeting in the run up to the event, and during the event itself, using the hashtag #ODcamp. All attendees will also be asked to tweet during the event. We are also looking into ways that we can livestream sessions – more details of that will be available on the website.

Finally – if the camp is a success, we’ll probably look to make it an annual feature. If so I’ll do my best to drag the next one up North. Don’t be afraid to get tickets and come along!

Linking open data

We’re happy to be sponsoring the first Open Data Camp UK and we’re looking forward to hearing, and seeing, what people are doing with Open Data. To us, as data publishers, the best thing about opening up data is the freedom it gives you to create something useful.

But if you link your open data the possibilities really open up. So, in that spirit, this post is about what publishing Linked Open Data really means and some of the practical advantages it has.

Linked Data is:

“a method of publishing structured data [on the web] so that it can be interlinked and become more useful.”

With Linked Data, each data point (i.e thing or fact) has its very own URL on the Web. This is unique and because it’s readily available on the internet, people can look it up easily. And Linked Open Data can also contain links to other facts, so you can discover more, related data.

The linked data “cloud”

But Linked Data also rocks if you want to make something with the data. This is because when you look up the linked data page, all the metadata about it is embedded in: so there are no ambiguous column names to slow you down.

And if data is published as linked, as well as being published on a web site, it means that it comes with APIs, including a SPARQL endpoint – so developers can query the data in a variety of formats and use the data in their own programs.

But it’s not just for the techies – if you’re not technical, linking up your open data has other advantages.

  • It makes it easier to work with open data across organisations and departments because it’s not locked into silos: anyone can access it, making it truly open.
  • Linking open data with other data sources and having specific names for things saves time and effort when problem solving. Take a look at Steve Peters’ post on Joining The Dots across departments.
  • It’s low cost and sustainable – you convert the data once and reuse it – again and again. As part of our PublishMyData service, you can update your data yourself.
  • By linking your open data, it makes it easier to create apps and visualisations which are a friendly, quick way in to the data.
Swirrl’s event space at Manchester’s Museum of Science and Technology

And on 21st April 2015 we’ll be sponsoring an event of our own: Local Open Data:Reaping the Benefits.

This is a one day event at Manchester’s Museum of Science and Industry. Its aim is to bring together people working with, or interested in, data at a local level.

You can check out our awesome speakers here, or register your interest.

Photo credit

The linked data cloud features in the Wikipedia article: and is attributed to Anja Jentzsch

Food data to go

We know from past hackathon events that the attendees are a talented hive of production and we want to help you to make more honey. At the Food Standards Agency, we have a healthy appetite for openness. This is because we’re an independent government department with no specific minister. It means openness and transparency are in our DNA.

We publish open data about food.

So let’s cook

We’re excited to be part of the Open Data Camp and have a series of digital offerings to serve up. If you’re into making stuff, we’re keen for you bring your experience to the table and use our data to make a new innovative application and that can include social media.

Below are details of our main datasets and some examples of where to find existing applications. These might inspire you.

Do let us know how you get on @foodgov and use #opendata. Our @drsiant will be at the event and me, @davidberrecloth, via Twitter.

  1. UK food hygiene ratings API (JSON and XML format)


About the geo-coded data

The food hygiene ratings given to restaurants, pubs, cafés, takeaways, hotels and other places consumers eat, as well as supermarkets and other food shops. A food business’s rating reflects the standards of food hygiene found on the date of inspection or visit by the local authority.

Get data

Our API 2.0, which includes calls to the server, can query and return data (not the whole dataset though):

A more basic API as well as static XML files by local authority:

Consumers can search for ratings at:


There are a number of app outlets offering hygiene rating apps based on our data – have a search of Apple, Android, Windows, BlackBerry, for example. Also, there are a number of websites. Search for ‘food hygiene ratings’ to find these. Can you think of a potential social media application? For example, a Facebook check-in at a restaurant displays the restaurant’s rating on a map?

  1. Allergy alerts and food alerts (RSS feed)


About allergy alerts

Peanuts, egg, milk, fish are some of the 14 major allergens and when allergy labelling is incorrect on a food product, or if there’s another food allergy risk, the food product has to be withdrawn from sale or recalled to protect consumers. Food allergic reactions range from mild to very serious. Most people are not allergic to all 14 allergens and we know affected individuals would benefit enormously if they could get alerts for the allergen that they are affected by, straight to their preferred social media feed.

Get allergy alerts

About food alerts

If there’s a problem with a food product (such as it contains pieces of metal or a nasty food bug) then that means it should not be sold and might be withdrawn (taken off the shelves) or recalled (customers are asked to return the product for a refund).

Get food alerts

  1. Audit of meat establishments (CSV format)

About the data

Slaughterhouses (abattoirs), meat cutting plants and wild game handling establishments are audited by us to make sure that they are:

  •         complying with food law requirements
  •         meeting relevant standards in relation to public health and, in slaughterhouses, animal health and welfare

More information at

Get data


Search web for ‘meat audit app’.

  1. UK local authority enforcement data (CSV format)

About the data

If something goes wrong or the risks become too high, local authorities can take enforcement action against a food business – closure, seizure of food, a simple caution, or a prosecution, for example. Data showing food law enforcement action taken is available in CSV format for the past four years up to 2013/14.

Get it

  1. Food and You survey

About data

This consumer survey is used to collect information about reported behaviours, attitudes and knowledge relating to food issues. It provides data on people’s reported food purchasing, storage, preparation, consumption and factors that may affect these, such as eating habits, influences on where respondents choose to eat out and experiences of food poisoning

Get data and user guide


It can be used for marketing to target food messages to the right groups through the relevant channel.

Keep connected

Join the conversation at @foodgov using #opendata

Be our Facebook community

Watch our videos

Get our news by RSS

Get our news by email

Enjoy the weekend guys!

If you open stuff up, good stuff happens

This is a slightly edited version of a post originally published on DATA.GOV.UK

I rather like the phrase: “Engineering Serendipity” which – as I choose to interpret it – means something like ‘creating conditions which maximise the chances of good stuff happening’. If you’re interested in a fuller discussion of Engineering Serendipity, there’s the excellent article written by Greg Lindsay over on Aspen Ideas.

I’ll come back to engineering serendipity a bit later. Please bear with me in the meantime, however, as I veer off-course to talk briefly about TV chefs.

Don’t watch, just cook

I love good food, and also enjoy cooking, but I never watch cookery programmes on television. I totally ‘get’ why people find the genre entertaining and informative, it just doesn’t do-it for me personally. My view is: if I have enough time to watch someone else cooking, then I might as well spend the time preparing a meal.

TV Chefery

When I say I “never” watch cookery programmes, it isn’t strictly true – I did watch some TV chefery a couple of weeks ago, as an episode of the “Hairy Bikers” was on in the background during a family get-together. In this particular episode – filmed in Bangkok during a recent tour of Asia – the Hairy Bikers were seeking the perfect recipe for Thai Green Curry.

Big break

They visited Aunty Daeng, a self-taught cook with an international reputation. Apparently, Aunty’s big break came when she prepared a meal for a royal visit to the government department where she was working at the time. The royals were so impressed that they invited her to become their private chef. Had the royals not had the opportunity to taste Aunty Daeng’s food, she might still be working in a government department.

For all I know, Aunty Daeng’s old job may have been hugely worthwhile, and I’m not knocking working in a government department. My point is that a set of circumstances were created which led to Aunty Daeng’s career taking off.

What’s this got to do with Open Data?

I’m glad you asked.

Several times recently, I’ve noticed a combination of ‘chance’ and open data leading to good things that weren’t anticipated by the publishers of the data. Here are a few examples:

Blue Lights and severe weather events

BlueLightCamp is a free annual unconference and open data hack which brings together people with some sort of interest in the emergency services. In previous years, BlueLightCamp has been linked with British APCO’s annual exhibition in Manchester, and in 2013 we introduced an open data hack element.

In 2014 we held BlueLightCamp in Hampshire instead, which meant that, for the first time, BlueLightCamp ‘met’ Hampshire Hub. This led to the birth of a new initiative: WUDOWUD. I won’t go into the detail here, as there’s an article about it on British APCO’s web site, co-written with Chris Cooper of Know Now Information.

Food, pubs and bus stops

Last November, we held the latest in a series of ‘Informing Hampshire’ events, which are pitched (mostly) at people who help inform public service decision-making in and around Hampshire.

One of the presenters was Chris Gutteridge from the University of Southampton, who mentioned during his presentation that he’d taken Food Hygiene Ratings open data (published by the Food Standards Agency), combined it with public transport open data, and presented it (along with lots of other useful stuff) on a map for students and staff.

That could be handy for anyone looking for a pub which serves food and is near a bus stop (for the correct bus to get home again later). From a public safety perspective, people finding decent pubs with good public transport links are probably less likely to be tempted to drink and drive. From a bus company’s perspective, that’s more bums on seats. And from an open data publisher’s perspective, it’s positive proof that releasing useful data like Food Hygiene ratings is worthwhile, because the data is actually being used.
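The core of a mashup like this is a simple proximity test between two sets of coordinates. Here’s a minimal sketch of the idea in Python; the pub and bus stop records are invented for illustration, and the real FSA and public transport datasets have much richer schemas.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6371000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical records -- real hygiene and transport data look different.
pubs = [
    {"name": "The Ship", "rating": 5, "lat": 50.9097, "lon": -1.4044},
    {"name": "The Anchor", "rating": 2, "lat": 50.9200, "lon": -1.4500},
]
bus_stops = [{"id": "SOT1", "lat": 50.9100, "lon": -1.4050}]

def pubs_near_a_stop(pubs, stops, max_m=300, min_rating=4):
    """Well-rated pubs within walking distance of any bus stop."""
    return [
        p["name"] for p in pubs
        if p["rating"] >= min_rating
        and any(haversine_m(p["lat"], p["lon"], s["lat"], s["lon"]) <= max_m
                for s in stops)
    ]

print(pubs_near_a_stop(pubs, bus_stops))  # ['The Ship']
```

Plot the matches on a map and you have, in outline, the sort of tool Chris described.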

Open data up in the air

In 2014 we released aerial photography for the whole of the county of Hampshire. This includes high resolution imagery, together with height data, near infrared, and the routes flown.

As we were focusing on introducing the new Hampshire Hub, we didn’t have time or resources to provide a delivery mechanism for the aerial photography as a separate project, so we just made the data available under the Open Government Licence (OGL).

A couple of months ago we were approached out of the blue by the Geodata team at the University of Southampton, who have obtained funding to create an online portal letting users explore and download 3D representations of the aerial open data. The development comes at no cost to the Hampshire Hub, and Geodata will make the site freely available to the public. In the words of Jason Sadler, who leads the Geodata team: “If you open stuff up, good stuff happens.”

A fair wind

The next example isn’t Hampshire-specific; it’s global. I first heard about it during a presentation given at The Graphical Web, an event run by Alan Smith, who leads the Data Visualisation team at the Office for National Statistics (ONS). If you haven’t come across The Graphical Web before, I heartily recommend it; all of the presentations were recorded and are available through the site.

Cameron Beccario gave a talk about The Wind Map: a ‘visualization of global weather conditions forecast by supercomputers updated every three hours’. Actually, it’s not ‘just’ that, and amongst other things includes ocean temperatures and waves, regularly updated. It’s a superb undertaking, and is the result of many hundreds of hours of effort.

The Wind Map is an excellent example of really good stuff happening when data is opened up. It wouldn’t have been possible had the data not been made freely available by the U.S. National Weather Service and others.

Open Data Camp – Engineering Serendipity

OK, I confess, there’s a sub-plot here. Part of the reason for writing this post is to plug an event I’m co-organising: Open Data Camp, which is in Winchester on 21-22 February 2015. Yes, that’s a weekend.

As far as I’m aware, it’s a UK first, combining the ‘unconference’ format with a theme of open data. There will also be opportunities to ‘make stuff’ with open data over the weekend.

Tickets are being released in batches through Eventbrite. You’ll have to be quick, though, as they’re going fast.

Thank you sponsors

The organisers* are really grateful to Hampshire County Council for letting us use their fabulous HQ venue free of charge, and to Matthew Buck of Drawnalism, who donated the artwork and branding we’re using for the event.

Several others have offered their support and we’re following up on the detail. We’re still seeking additional sponsors to help make the event go with a bang, so if you’re interested, please get in touch.

It’s a kinda magic

I’m convinced magic will take place at Open Data Camp, just like it does at other unconferences like UKGovCamp. Open Data Camp is open to the public, is free to attend, and spans all sectors. I’m hoping that new initiatives, ideas and collaborations will ‘pop-out’ from Open Data Camp – even though I’ve no idea what they might be. As event organisers we’re just trying to create the conditions which maximise the chances of good stuff happening.


  • There are a bunch of people on the organising team for Open Data Camp, ranging from as far north as Manchester to as far south as Devon: