How can we use better Lidar data analysis?

Analysing Lidar data for fun and business benefit

WARNING – liveblogging. Prone to error, inaccuracy and howling affronts to grammar and syntax. Posts will be improved over the next 48 hours

Google doc of session notes

Lidar is a technology that produces incredibly useful data that can be used to measure the environment. It’s a key part of self-driving cars – and the applications go much further than that.

Serge Wich is using Lidar on drones to aid orang-utan conservation – the software can actually tell orang-utans apart from other species. But where Lidar has the most promise is in the built environment. 3D modelling with Lidar solves so many problems organisations have with building metrics.

The Lidar Data Dump

In September 2015 the Environment Agency released 11TB of open data – all based on Lidar. Natural Resources Wales followed with 5TB.

Can we use this data to accurately measure building size and attributes, avoiding the need for a site survey? Could that cut wasted trips by utilities, where crews turn up with a ladder that's too short for the building? Could consecutive Lidar surveys be used to detect unauthorised extensions?

You can already derive gutter lines and roof lines from this data. But the massive amounts of data we have are in a proprietary format which isn't easily queryable. And we really want to mix it with other datasets.
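As a rough illustration of how roof lines fall out of the open data: the Environment Agency releases both a surface model (DSM, which includes buildings and trees) and a terrain model (DTM, bare earth), and differencing the two gives above-ground heights. This is a minimal sketch with NumPy, not the session's actual code; the 2 m noise threshold is an assumed value.

```python
import numpy as np

def building_heights(dsm, dtm, nodata=-9999.0):
    """Estimate above-ground heights by subtracting the bare-earth
    terrain model (DTM) from the surface model (DSM)."""
    dsm = np.where(dsm == nodata, np.nan, dsm)
    dtm = np.where(dtm == nodata, np.nan, dtm)
    heights = dsm - dtm
    # Suppress low-level noise (assumed 2 m cut-off) so roof and
    # gutter lines stand out against the ground.
    return np.where(heights >= 2.0, heights, 0.0)

# Toy 1 m-resolution tiles: a 5 m-tall block on flat ground.
dtm = np.zeros((4, 4))
dsm = np.zeros((4, 4))
dsm[1:3, 1:3] = 5.0
print(building_heights(dsm, dtm))
```

Gutter lines would then be the edges of the non-zero region; roof structure comes from the height variation inside it.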

Can we find a solution that:

  • runs off commodity hardware
  • uses lossless compression
  • is fast enough for bulk queries
  • facilitates queries with standard spatial objects
  • returns DEM data or raster images
  • is generic enough to handle other DEM or raster sources

Building a better Lidar tool

John Murray converted the DEMs into binary arrays and compressed them. A MySQL database holds the metadata. The OS's 1km grid record number provides the file name, and the files are stored in a folder hierarchy. Caching speeds up batch operations.

It runs on a standard Intel Core i7, on a machine with 32GB RAM – we're talking top-end commodity hardware, but commodity hardware nonetheless.

You can pull out building outlines, and then rasterise them into images. But his next step is vectorisation – and improving on the OS's building layer. Can the project then move on to a stacked layer, built from slices of buildings? How about time comparisons to detect building alterations? Can we use slicing to estimate carriageway widths?
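The slicing idea can be sketched simply: cut a height grid into horizontal bands, giving one footprint mask per level. Stacking the masks approximates a 3D building model, and the extent of a ground-level slice between obstructions is what a carriageway-width estimate would work from. This is a hedged illustration under assumed inputs (a NumPy height grid, a 2 m slice interval), not the project's method.

```python
import numpy as np

def height_slices(heights, interval=2.0):
    """Cut a height grid into horizontal slices: one boolean mask per
    band [k*interval, (k+1)*interval). Stacked together, the masks
    form a crude layered 3D model of the buildings."""
    top = float(np.nanmax(heights))
    masks = []
    level = 0.0
    while level < top:
        masks.append((heights >= level) & (heights < level + interval))
        level += interval
    return masks

# A 5 m block on flat ground yields three 2 m slices.
heights = np.zeros((4, 4))
heights[1:3, 1:3] = 5.0
slices = height_slices(heights)
```

Comparing the same slice across two survey dates is one way the time-comparison question could be attacked.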

Can we use GPU processing to speed up the work?

John has published fuller details of his Lidar analysis work.

Other uses for Lidar?

Any potential agricultural or health applications? For example, infrared for crop health? Can we map green space against health data to prove its value? In theory, that should be possible.

Some things to note:

  • We only have about 71% coverage in Environment Agency released areas (78% of urban areas) – there’s not much data in some rural areas, for example
  • Data quality varies by area, depending on the technology used.

What would it take to turn it into a truly national database? We're still quite a distance away. John's system works with overlapping datasets by doing two queries – but do we need the gaps filled in?
