What Actually Happens to Your Drone Data After the Flight?
Most clients focus on the drone itself - the flight, the equipment, the day on site. But the work that turns raw imagery into something useful happens long after the drone lands. Here's how we approach data processing, and why the steps you can't see matter just as much as the ones you can.
The first rule: you don't leave site without two copies
Before anything else happens, before we pack the van, the raw data is backed up to a second device on site. That SD card holds the results of everything - the mobilisation, the setup, the flight time. Losing it on the drive home isn't an option. So we verify the data is present, check it's valid (no black frames, no corrupted files), and confirm it's in two separate locations. Only then do we leave the field.
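The presence-and-integrity part of that check can be automated. Below is a minimal sketch, assuming the card and the backup drive are mounted as two directory trees: it flags zero-byte files and any file that is missing from, or differs from, the backup copy. (Spotting black frames still means eyeballing the imagery; checksums only prove the two copies are identical and non-empty. The function names are ours, not a real tool.)

```python
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    """Hash a file in chunks so large video files don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(card_dir: Path, backup_dir: Path) -> list[str]:
    """Return a list of problems; an empty list means it's safe to leave site."""
    problems = []
    for src in sorted(card_dir.rglob("*")):
        if not src.is_file():
            continue
        if src.stat().st_size == 0:
            problems.append(f"zero-byte file: {src.name}")
        copy = backup_dir / src.relative_to(card_dir)
        if not copy.is_file():
            problems.append(f"missing from backup: {src.name}")
        elif sha256(src) != sha256(copy):
            problems.append(f"checksum mismatch: {src.name}")
    return problems
```

Run it before packing up: if the list comes back empty, the data exists in two verified locations.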
Two very different processing paths
Once back from site, the first decision is what kind of output we're working towards. "You've got two big paths," as Bob puts it: a report or a technical deliverable. A report is something a person will read and act on. A technical deliverable - a point cloud, a georeferenced orthomosaic, a LiDAR dataset - is something that feeds into another system or workflow. The processing approach is completely different depending on which path you're on.
Folder structure and naming: more important than most people realise
Whether the client wants raw imagery or a finished report, the underlying data needs to be structured. "The worst thing in the world you can hand somebody is a single folder with a thousand images" that starts at DJI_001 and ends at DJI_1000. It's uninterpretable. Our job is to make sure someone can land on a folder structure cold, find what they're looking for, and understand it immediately - North facade, South facade, Roof A, Building B, whatever system the site uses. We also rename files to match the asset: Turbine 1, Blade B, Leading Edge. Anyone picking up that dataset six months later knows exactly what they're looking at.
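In script form, that renaming step is just a mapping from capture index to asset label. The batch boundaries below are invented for illustration; in practice they come from the flight log or the pilot's notes for that job.

```python
from pathlib import Path

# Hypothetical mapping from capture batches to asset labels - in reality this
# is built from the flight log, not hard-coded.
BATCHES = {
    range(1, 121): "Turbine-1_Blade-B_Leading-Edge",
    range(121, 241): "Turbine-1_Blade-B_Trailing-Edge",
}

def asset_name(dji_file: Path) -> str:
    """Turn DJI_0042.JPG into Turbine-1_Blade-B_Leading-Edge_0042.JPG."""
    index = int(dji_file.stem.split("_")[1])
    for capture_range, label in BATCHES.items():
        if index in capture_range:
            return f"{label}_{index:04d}{dji_file.suffix}"
    raise ValueError(f"no asset mapping for {dji_file.name}")
```

Keeping the original capture index in the new name preserves the link back to the raw card dump, so nothing is ever orphaned from its source.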
Finding the right image for each defect
On inspection jobs, you'll often capture 20 images of the same defect from different angles and distances. Not all of them are equally useful. Part of the processing work is identifying each defect, isolating the best image or set of images to communicate its size, location, and severity, and pairing a wide orientation shot with a close-up detail. You also need to avoid what Bob calls "conducting an inspection through a straw" - zooming in so tightly that you capture the defect but lose all sense of where it is on the structure.
Classification, severity, and description
Every defect in a report gets three things: a classification (is it corrosion? spalling? erosion?), a severity level on a four- or five-point scale, and a written technical description. Severity ranges from informational at Level 1 - green mildew on blades, worth knowing but not urgent - through to Level 4: critically dangerous, phone the client directly. That description text matters because it's often what feeds straight into a maintenance system. The person acting on it may never see the original report.
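Those three fields translate directly into a record structure. This is a sketch of what one defect entry might look like, assuming a four-level scale; the level names and field names are ours, illustrating the shape rather than reproducing any particular system.

```python
from dataclasses import dataclass, field
from enum import IntEnum

class Severity(IntEnum):
    """Illustrative four-level scale - names are ours, not a standard."""
    INFORMATIONAL = 1  # e.g. green mildew on blades: worth knowing, not urgent
    MINOR = 2
    SIGNIFICANT = 3
    CRITICAL = 4       # critically dangerous: phone the client directly

@dataclass
class Defect:
    classification: str          # e.g. "corrosion", "spalling", "erosion"
    severity: Severity
    description: str             # the text that feeds the maintenance system
    asset: str                   # e.g. "Turbine 1, Blade B, Leading Edge"
    images: list[str] = field(default_factory=list)  # wide shot + close-up
```

Making the description a first-class field, rather than burying it in report layout, is what lets it travel into the client's maintenance platform intact.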
PDF for humans, spreadsheet for machines
The standard deliverable on most inspection jobs is a PDF report paired with a spreadsheet. The PDF is human-readable - a summary, all the imagery, all the defect entries. The spreadsheet is structured for import: each defect as a line item, sortable and filterable, ready to go into whatever maintenance platform the client runs. "PDF for the humans, spreadsheet for the machines." Both come out of the same processing work. Together, they mean anyone holding those documents has a clear, current picture of the asset's condition.
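The machine-readable half of that pairing is straightforward to generate. A minimal sketch, assuming each defect is held as a dict - the column names here are hypothetical, since the real headers match whatever the client's maintenance platform expects on import:

```python
import csv
from pathlib import Path

# Hypothetical column set - in practice, matched to the client's import template.
FIELDS = ["defect_id", "asset", "classification", "severity", "description", "image_file"]

def write_defect_csv(defects: list[dict], out_path: Path) -> None:
    """One defect per row: sortable, filterable, ready for direct import."""
    with out_path.open("w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        writer.writerows(defects)
```

The PDF and the spreadsheet are rendered from the same defect records, which is what keeps the two deliverables in agreement.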