Validation Protocol

Pre-specified criteria.
Blind ranking. Calibration.

We measure ourselves the way exploration teams measure targets: against ground truth.

How we work

01

Walkthrough

Understand your tenements, data landscape, and what you consider a hit.

02

Data review

Assess data quality, coverage, and fit for the target package workflow.

03

Pilot target package

Run on a single campaign. Review outputs together. Iterate.

04

Validation and expansion

Measure against ground truth. Expand to additional campaigns once trust is earned.

What we mean

Target

A ranked recommendation for the next decision gate: mapping, sampling, trenching, or drilling.

Hit

A target that meets the client's pre-specified success criteria when tested against ground truth.

Calibration

When we say we're 80% confident, we're right about 80% of the time. We measure this.
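The idea above can be sketched in a few lines: group predictions by stated confidence, then compare the stated confidence to the observed hit rate in each group. This is a minimal illustration of the concept, not OracleAI's internal implementation; all names and data are assumptions.

```python
# Minimal calibration check: for predictions grouped by stated
# confidence, compare the stated confidence to the observed hit rate.
# Function name and data are illustrative, not OracleAI's internals.

def calibration_by_bin(confidences, hits, bins=10):
    """Return (stated confidence, observed hit rate, count) per bin."""
    buckets = {}
    for conf, hit in zip(confidences, hits):
        b = min(int(conf * bins), bins - 1)  # which confidence bucket
        buckets.setdefault(b, []).append((conf, hit))
    report = []
    for b in sorted(buckets):
        pairs = buckets[b]
        stated = sum(c for c, _ in pairs) / len(pairs)
        observed = sum(h for _, h in pairs) / len(pairs)
        report.append((round(stated, 2), round(observed, 2), len(pairs)))
    return report

# Five targets stated at 0.8 confidence; drilling confirmed 4 of 5.
confs = [0.8, 0.8, 0.8, 0.8, 0.8]
hits = [1, 1, 1, 1, 0]
print(calibration_by_bin(confs, hits))  # → [(0.8, 0.8, 5)]
```

A well-calibrated system shows stated and observed values close together in every bin; a gap in either direction is over- or under-confidence.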

Rejection

When uncertainty is too high, the system abstains rather than guessing. We report when and why.
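The abstention rule can be illustrated with a toy decision function: recommend only when uncertainty is under a threshold, otherwise abstain and record why. The function name, fields, and threshold here are hypothetical, not OracleAI's API.

```python
# Illustrative abstention rule (names and threshold are assumptions):
# recommend a target only when uncertainty is below a threshold;
# otherwise abstain and record the reason for reporting.

def recommend_or_abstain(score, uncertainty, max_uncertainty=0.3):
    """Return a recommendation, or an abstention with its reason."""
    if uncertainty > max_uncertainty:
        return {"decision": "abstain",
                "reason": f"uncertainty {uncertainty:.2f} exceeds {max_uncertainty:.2f}"}
    return {"decision": "recommend", "score": score}

print(recommend_or_abstain(0.91, 0.12))  # low uncertainty: recommend
print(recommend_or_abstain(0.88, 0.45))  # too uncertain: abstain, reason logged
```

The point is that abstentions are first-class outputs: each one carries a machine-readable reason, so rejection rates and their causes can be tracked rather than hidden.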

How we test

Success criteria defined before results are seen
Targets ranked blind to known outcomes
Hit rate and calibration measured against drilling results
Rejection rate and reasoning tracked across terranes
Nothing ships without blind benchmark wins
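The blind-ranking step above can be sketched as two separated stages: ranking sees only model scores, and drilling outcomes enter only at scoring time. Field names and data are illustrative assumptions, not OracleAI's formats.

```python
# Sketch of a blind evaluation: targets are ranked without access to
# outcomes, then scored against drilling results afterwards.
# Data shapes and field names are illustrative assumptions.

def rank_targets(targets):
    """Rank by model score only -- outcomes are never visible here."""
    return sorted(targets, key=lambda t: t["score"], reverse=True)

def hit_rate(ranked, outcomes, top_k):
    """Fraction of the top-k ranked targets that drilling confirmed."""
    top = ranked[:top_k]
    return sum(outcomes[t["id"]] for t in top) / top_k

targets = [{"id": "T1", "score": 0.9}, {"id": "T2", "score": 0.4},
           {"id": "T3", "score": 0.7}]
drilling = {"T1": 1, "T2": 0, "T3": 1}  # revealed only after ranking

ranked = rank_targets(targets)
print(hit_rate(ranked, drilling, top_k=2))  # → 1.0 (T1 and T3 both hit)
```

Keeping the ranking function ignorant of outcomes is what makes the benchmark blind: the same ranked list can then be scored for hit rate, calibration, and rejection rate without any leakage.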

Review and audit

QP-ready export bundles with full lineage
Supports NI 43-101 / JORC review workflows
Security documentation available on request

See it applied to real data

Walk through how OracleAI would work on your tenement, what the outputs look like, and what they mean.