We're building the AI brain that makes drones autonomous. Define a mission — inspect a roof, map a field, count inventory — and the intelligence handles everything else. On-device. No cloud. No pilot.
Oculair isn't a drone. It's the brain that makes any drone autonomous. An AI system that plans missions, flies them, and understands what it sees — all from a single edge compute module.
An onboard language model acts as mission controller — planning actions, calling tools, adapting to conditions, and making abort decisions in real time. No scripted waypoints. Actual reasoning.
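To make that concrete, here is a minimal sketch of the kind of tool call such a mission controller might emit. The tool name and fields are illustrative assumptions, not Oculair's actual schema:

```python
# Hypothetical tool call emitted by the onboard LLM. The tool name and
# argument fields are illustrative, not Oculair's published interface.
tool_call = {
    "tool": "goto_waypoint",                       # action the planner chose
    "args": {"lat": 51.5074, "lon": -0.1278, "alt_m": 40.0},
    "reason": "move to the next inspection face",  # model's stated rationale
}
```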
Computer vision runs during flight, classifying what the drone sees and feeding observations back to the mission controller. The system doesn't just capture — it comprehends.
The intelligence layer connects to any MAVLink-compatible flight controller through an adapter interface. Swap airframes, upgrade sensors, scale from a single drone to a fleet.
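As a rough illustration, an adapter boundary like this could sit between the intelligence layer and the autopilot. The class and method names below are assumptions for the sketch; MAVLink specifics (e.g. via pymavlink or MAVSDK) would live in a concrete subclass:

```python
from abc import ABC, abstractmethod

class FlightAdapter(ABC):
    """Hypothetical adapter boundary: the intelligence layer talks to this
    interface, never to a specific autopilot protocol directly."""

    @abstractmethod
    def arm(self) -> None: ...

    @abstractmethod
    def goto(self, lat: float, lon: float, alt_m: float) -> None: ...

    @abstractmethod
    def telemetry(self) -> dict: ...

# A MAVLink-backed subclass implements these calls; swapping airframes
# means swapping the subclass, not the mission logic.
```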
The same autonomous platform adapts to completely different domains. Define the mission parameters — the AI figures out the execution.
Roofs, facades, bridges, towers. Autonomous flight plans that capture every angle, with CV that flags defects — cracks, corrosion, missing material — and maps them to precise locations.
Fly a yard, field, or depot. The system identifies, counts, and geolocates assets — vehicles, pallets, containers, heavy equipment — and generates a structured inventory without manual walkthroughs.
Autonomous field surveys that assess crop health, irrigation coverage, pest damage, and growth staging. Actionable field maps from a single flight, processed entirely on-device.
Systematic aerial capture for orthomosaics, terrain models, and volumetric measurements. The AI optimises flight paths for coverage and overlap, adapting to terrain and conditions in real time.
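The coverage-versus-overlap trade is standard photogrammetry geometry. The sketch below shows the textbook relation between altitude, camera field of view, and survey line spacing; it is generic math, not Oculair's planner:

```python
import math

def line_spacing_m(alt_m: float, hfov_deg: float, sidelap: float) -> float:
    """Spacing between survey lines for a target side overlap.
    Standard photogrammetry geometry, not Oculair-specific code."""
    footprint_w = 2 * alt_m * math.tan(math.radians(hfov_deg) / 2)  # ground footprint width
    return footprint_w * (1 - sidelap)

# e.g. 60 m altitude, 73° horizontal FOV, 70% sidelap:
print(round(line_spacing_m(60, 73, 0.70), 1))  # ≈ 26.6 m between passes
```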
Every layer runs locally. The drone carries its own reasoning engine, safety system, and vision pipeline — no internet connection required during flight.
A local language model plans and executes missions through validated tool calls. It reasons about goals, sequences actions, handles errors, and decides when to abort — all within a 15W power envelope.
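In pseudocode terms, the loop might look like the sketch below. Every name here is illustrative; it shows the propose, validate, execute, observe cycle, not Oculair's actual code:

```python
# Illustrative agent loop: the LLM proposes tool calls, a safety layer
# validates them, and results feed back into the model's context.
def mission_loop(llm, tools, safety, state):
    while not state.mission_complete:
        call = llm.next_tool_call(state)            # model proposes an action
        if call.tool == "abort":
            return tools["return_to_launch"]()      # model-initiated abort
        verdict = safety.check(call, state)         # gate before execution
        if not verdict.ok:
            state.record_rejection(call, verdict.reason)  # model re-plans
            continue
        result = tools[call.tool](**call.args)      # execute the approved call
        state.record_result(call, result)           # outcome back to the LLM
```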
Every command passes through geofence, altitude, battery, GPS quality, and state coherence gates before execution. The LLM proposes. The safety layer disposes.
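A gate chain in that spirit could be as simple as the sketch below. Every threshold here is a placeholder, not a published Oculair limit:

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    ok: bool
    reason: str | None = None

def check(call, state):
    # Placeholder thresholds, for illustration only.
    gates = [
        (state.inside_geofence(call),           "geofence violation"),
        (call.target_alt_m <= state.max_alt_m,  "altitude ceiling"),
        (state.battery_pct > 20,                "battery reserve"),
        (state.gps_fix_quality >= 3,            "GPS fix quality (3 = 3D fix)"),
        (state.is_coherent_with(call),          "state coherence"),
    ]
    for ok, reason in gates:
        if not ok:
            return Verdict(ok=False, reason=reason)  # rejected: never executed
    return Verdict(ok=True)
```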
Object detection and classification run during flight. Results feed back to the mission controller, letting the drone spend more time on areas that need attention.
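One plausible shape for that feedback hook, with hypothetical names throughout:

```python
# Hypothetical per-frame hook: detections become observations the mission
# controller can act on, e.g. by scheduling a closer pass.
def on_frame(detector, frame, state):
    for det in detector.run(frame):       # label, confidence, geolocation
        if det.label in {"crack", "corrosion"} and det.confidence > 0.6:
            state.add_observation(det)               # surface to the planner
            state.request_detail_pass(det.location)  # linger where it matters
```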
Built on NVIDIA Jetson. LLM inference, CV pipeline, and flight control run simultaneously on a single edge module. Zero cloud dependency. Zero network round-trips.
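Conceptually, the co-scheduling could be sketched like this. The coroutine names are assumptions; on Jetson, the LLM and CV inference would run on the GPU while the flight link stays on the CPU:

```python
import asyncio

async def llm_planner(): ...      # tool-calling mission loop
async def vision_pipeline(): ...  # per-frame detection
async def flight_link(): ...      # telemetry in, commands out

async def main():
    # All three run concurrently on one module: no cloud hop, so the only
    # latency is on-device inference time.
    await asyncio.gather(llm_planner(), vision_pipeline(), flight_link())

asyncio.run(main())
```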
We're building Oculair in the open. Technical decisions, architecture deep-dives, and lessons from development.
Latency kills autonomy. How we fit a reasoning engine into a 15W power budget on NVIDIA Jetson.
Tool registries, safety gates, and the agent loop that lets a language model fly a drone without killing anyone.
How the same autonomous stack adapts from roof inspection to crop scouting to inventory tracking.
Progress updates, technical insights, and early access when we're ready. No spam — just signal.