Bury a flight plan under irrelevant NOTAMs and pilots stop reading it.
The flight operations function of a global carrier was buried in NOTAMs - we built the AI filter that gave the cockpit its signal back.

Every flight gets an Operational Flight Plan: route, fuel, weather, NOTAMs. The flight-planning system returns every NOTAM associated with every airspace and airport along the route - which is exhaustive, and exhausting. Pilots end up parsing dozens of pages of low-relevance notices to find the handful that actually matter for the sector they're flying. Behind the scenes, the NOTAM review team - already lean after pandemic-era restructuring - was spending days per route filtering by hand. The question: could AI compress the review without compressing the safety margin?
- 01
Parse the source, don't trust the format.
Operational flight plans are PDF-native, structured for human reading rather than programmatic ingestion. We built a custom parsing pipeline to extract every NOTAM cleanly - the unglamorous foundation that made everything downstream possible.
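The real pipeline is custom to the planning system's PDF layout, but the core move can be sketched. Assuming the PDF text has already been extracted, NOTAM records are split apart by matching their identifiers; the header pattern and record shape below are illustrative assumptions, not the carrier's actual format.

```python
import re
from dataclasses import dataclass

@dataclass
class Notam:
    ident: str     # e.g. "A1234/24"
    location: str  # ICAO location indicator, e.g. "EGLL"
    body: str      # free-text notice

# Assumed header shape: each NOTAM starts with "<series><number>/<year> <ICAO>".
# Real OFP layouts vary by flight-planning system.
NOTAM_HEADER = re.compile(r"([A-Z]\d{4}/\d{2})\s+([A-Z]{4})")

def parse_notams(ofp_text: str) -> list[Notam]:
    """Split extracted flight-plan text into individual NOTAM records:
    each record runs from one header match to the next."""
    matches = list(NOTAM_HEADER.finditer(ofp_text))
    notams = []
    for i, m in enumerate(matches):
        end = matches[i + 1].start() if i + 1 < len(matches) else len(ofp_text)
        notams.append(Notam(m.group(1), m.group(2), ofp_text[m.end():end].strip()))
    return notams
```

Once every NOTAM is a clean record rather than a run of PDF text, both the rules layer and the classifier have something structured to work on.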
- 02
Layer business rules under the AI.
A pure-ML approach would have been faster to ship and harder to trust. We chose a two-layer architecture instead: rules-driven filtering for the categories where compliance is explicit (company NOTAMs, performance NOTAMs, mandatory categories), and a trained classifier for the ambiguous middle. The judgment call: in flight operations, rules are auditable; models are inscrutable. Use each where it belongs.
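The two-layer split can be sketched as a single decision function: rules fire first and are final; the model only sees what the rules don't claim, and anything it isn't confident about goes to a human rather than being silently dropped. Category names, thresholds, and the `model.predict` interface here are illustrative assumptions.

```python
from enum import Enum

class Decision(Enum):
    KEEP = "keep"
    DROP = "drop"
    REVIEW = "review"  # ambiguous: route to a flight-ops agent

# Hypothetical category sets; the real rule set encodes the carrier's
# explicit compliance requirements and is fully auditable.
MANDATORY = {"COMPANY", "PERFORMANCE", "FUEL"}
IGNORABLE = {"GRASS_CUTTING", "ROUTINE_BIRD_ACTIVITY"}

def classify(notam: dict, model) -> Decision:
    # Layer 1: explicit rules decide the clear-cut cases.
    if notam["category"] in MANDATORY:
        return Decision.KEEP
    if notam["category"] in IGNORABLE:
        return Decision.DROP
    # Layer 2: the trained classifier scores the ambiguous middle.
    # Low-confidence scores escalate to review instead of dropping.
    relevance = model.predict(notam["body"])  # assumed: probability in [0, 1]
    if relevance >= 0.8:
        return Decision.KEEP
    if relevance <= 0.2:
        return Decision.DROP
    return Decision.REVIEW
```

The ordering is the point: a NOTAM in a mandatory category can never be filtered out by the model, so the safety-critical path stays rule-governed and explainable.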
- 03
Close the loop with the source system.
The filter's output is reviewed by a flight-operations agent and fed back into the upstream system, so undesired NOTAMs stop appearing in future flight plans. The system gets quieter every time it runs.
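Mechanically, the loop can be as simple as a suppression store that grows with each agent review; the class and method names below are a hypothetical sketch of that idea, not the carrier's integration with its planning system.

```python
class SuppressionStore:
    """Records agent-confirmed irrelevant NOTAMs so the upstream
    planning system stops including them in future flight plans."""

    def __init__(self):
        self._suppressed: set[str] = set()

    def confirm_irrelevant(self, notam_id: str) -> None:
        # Called only after a flight-operations agent reviews the
        # filter's output - the human stays in the loop.
        self._suppressed.add(notam_id)

    def filter_plan(self, notam_ids: list[str]) -> list[str]:
        # Every run benefits from every previous review, so each
        # flight plan is quieter than the last.
        return [n for n in notam_ids if n not in self._suppressed]
```

Because suppression is keyed on agent confirmation rather than model output, the compounding effect never outruns human judgment.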
NOTAM review per route dropped from days to half a day. The flight-planning team recovered capacity at a moment when capacity was the constraint. Pilots receive cleaner, more relevant flight plans - and the rules-aligned consistency of the filter means two pilots flying the same route now read the same filtered view. Safety improved not through more information, but through less of the wrong information. The unlock: a feedback loop that compounds - every flight makes the next flight's plan cleaner.
“In regulated, high-stakes workflows, the right architecture isn't ML or rules - it's both. Build the rules layer for auditability, the ML layer for ambiguity, and the feedback loop for compounding.”
Buried under document volume that nobody can review at the speed the operation demands? We help safety-critical functions cut review time without cutting safety margin.
Let's talk

