DRIFT, measured.
How DRIFT performs against human review, against itself, and across domains.
DRIFT vs. a NASA review board
During the Landolt photometric calibration mission's SRR/MDR review, DRIFT was run over the full document corpus in parallel with the human review board, and its outputs were compared against the board's eventual findings.
Distributed peer reasoning, edge hardware
Three Raspberry Pi nodes running independent DRIFT engines, tested against trust-failure scenarios in which one node's data qualification status degrades and peer nodes must catch the failure before it propagates downstream.
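As a rough illustration of the scenario shape (not DRIFT's internal mechanism, which is sealed), the test setup can be sketched as three peers that each publish a data-qualification status and cross-check one another. All names here are hypothetical.

```python
# Hypothetical sketch of the peer trust-failure scenario: three nodes each
# expose a qualification status; peers audit one another and flag any node
# whose status has degraded. Illustrates the test shape only.
QUALIFIED, DEGRADED = "qualified", "degraded"

class Node:
    def __init__(self, name):
        self.name = name
        self.status = QUALIFIED

    def audit_peers(self, peers):
        """Return names of peers whose qualification status has degraded."""
        return [p.name for p in peers if p.status != QUALIFIED]

nodes = [Node(f"pi-{i}") for i in range(3)]
nodes[1].status = DEGRADED  # simulate one node's qualification degrading
# each node audits the other two; the degraded node is caught by both peers
flags = {n.name: n.audit_peers([p for p in nodes if p is not n]) for n in nodes}
```

In the actual campaign, the interesting question is how quickly the peer audits fire relative to downstream consumption of the degraded node's data; this sketch only shows the qualification cross-check itself.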
Edge hardware performance, characterized
DRIFT engine and mesh performance measured on Raspberry Pi 5 edge hardware. Three-node configuration with deterministic measurement methodology, controlled thermal and power state, and per-node scenario profiles exercising distinct engine paths.
Specific p50, p99, and max values for each category, full methodology, and the preliminary measurement report (v1.1, April 2026) are available under NDA. Formal Stage 1 campaign with extended-duration runs and multi-day replication is scheduled for later in 2026.
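For readers unfamiliar with the reported statistics: p50, p99, and max summaries are conventionally derived from raw per-operation timings. A generic sketch follows; DRIFT's actual measurement methodology is under NDA, and the function name here is hypothetical.

```python
# Generic nearest-rank percentile summary over raw latency samples (ms).
# Not DRIFT's methodology; a conventional illustration of p50/p99/max.
def latency_summary(samples_ms):
    """Return p50, p99, and max from a non-empty list of latency samples."""
    if not samples_ms:
        raise ValueError("no samples")
    s = sorted(samples_ms)
    def pct(p):
        # nearest-rank: the ceil(p/100 * n)-th smallest sample (1-indexed)
        k = -(-len(s) * p // 100)  # integer ceiling of len(s) * p / 100
        return s[max(0, k - 1)]
    return {"p50": pct(50), "p99": pct(99), "max": s[-1]}
```

Note that p99 and max are only meaningful with enough samples and controlled run conditions, which is why the blurb above emphasizes deterministic methodology and controlled thermal and power state.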
Detailed methodology, source documents, and validation procedures are available under NDA. DRIFT is a sealed engine; results presented here are output-side observable. The engine's internals are not disclosed.
Want the methodology under NDA?
Request a Walkthrough