LENS 01
Skill Fragility
How quickly and irreversibly does human competence erode if this task is automated and the person stops doing it?
LENS 02
Intervention Readiness
When the AI fails or produces wrong output, how prepared and able is a human to recognise the failure and take over effectively?
LENS 03
Tacit Knowledge Depth
Does meaningful oversight, correction or collaboration require the human to have done this work themselves — or can it be supervised from the outside?
LENS 04
Anomaly Exposure
How frequently does genuinely novel, out-of-distribution work arise — situations that fall outside what the AI can handle reliably?
LENS 05
Feedback Integrity
Does the human still receive enough meaningful signal from the work to stay calibrated — to know when the AI is right, wrong, or subtly drifting?
LENS 06
Failure Visibility
When the AI produces a wrong or degraded output, is failure obvious — or does it camouflage itself as plausible, well-formed, confident output?
Methodological basis
This framework derives from Lisanne Bainbridge's 1983 paper "Ironies of Automation," which documented how automation shifts, rather than eliminates, human cognitive burden. Bainbridge showed that automated systems reliably degrade the skills, situational awareness and intervention capacity of the humans nominally in oversight roles, particularly when failures are rare, feedback is attenuated, and tacit knowledge is required. The six lenses operationalise her core mechanisms so they can be applied to contemporary AI deployment decisions. The recommended collaboration designs are not simple risk scores but design orientations: each implies specific choices about role structure, training investment, interface design and oversight architecture.
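To make the lenses concrete, the sketch below records a qualitative assessment of one candidate task as a simple data structure. It is purely illustrative and not part of the framework itself: the names (Lens, Rating, TaskAssessment, design_flags) and the example ratings are hypothetical, and the sketch deliberately avoids collapsing the lenses into a single numeric risk score, in keeping with the point above that the framework yields design orientations rather than scores.

    from dataclasses import dataclass
    from enum import Enum

    class Lens(Enum):
        """The six lenses enumerated above."""
        SKILL_FRAGILITY = "Skill Fragility"
        INTERVENTION_READINESS = "Intervention Readiness"
        TACIT_KNOWLEDGE_DEPTH = "Tacit Knowledge Depth"
        ANOMALY_EXPOSURE = "Anomaly Exposure"
        FEEDBACK_INTEGRITY = "Feedback Integrity"
        FAILURE_VISIBILITY = "Failure Visibility"

    class Rating(Enum):
        """Qualitative concern level; deliberately not a numeric score."""
        LOW = "low concern"
        MODERATE = "moderate concern"
        HIGH = "high concern"

    @dataclass
    class TaskAssessment:
        """One automation candidate, rated lens by lens."""
        task: str
        ratings: dict[Lens, Rating]

        def design_flags(self) -> list[Lens]:
            # Lenses rated high concern drive the design orientation
            # (role structure, training investment, interface design,
            # oversight architecture) rather than feeding an aggregate
            # risk number.
            return [lens for lens, r in self.ratings.items()
                    if r is Rating.HIGH]

    # Hypothetical example: automating first-pass triage of incident reports.
    assessment = TaskAssessment(
        task="incident report triage",
        ratings={
            Lens.SKILL_FRAGILITY: Rating.MODERATE,
            Lens.INTERVENTION_READINESS: Rating.HIGH,
            Lens.TACIT_KNOWLEDGE_DEPTH: Rating.HIGH,
            Lens.ANOMALY_EXPOSURE: Rating.MODERATE,
            Lens.FEEDBACK_INTEGRITY: Rating.LOW,
            Lens.FAILURE_VISIBILITY: Rating.HIGH,
        },
    )
    for lens in assessment.design_flags():
        print(f"Design attention needed: {lens.value}")

Keeping the ratings qualitative and surfacing high-concern lenses individually, rather than summing them, mirrors the framework's intent: each flagged lens points at a different design choice, so averaging them away would discard exactly the information the assessment exists to produce.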