
CIRCLE: A Framework for Evaluating AI from a Real-World Lens

Reva Schwartz, Carina Westling, Morgan Briggs, Marzieh Fadaee, Isar Nejadgholi, Matthew Holmes, Fariza Rashid, Maya Carlyle, Afaf Taïk, Kyra Wilson, Peter Douglas, Theodora Skeadas, Gabriella Waters, Rumman Chowdhury, Thiago Lacerda
Published: February 27, 2026
Updated: March 18, 2026

Abstract

This paper proposes CIRCLE, a six-stage, lifecycle-based framework to bridge the reality gap between model-centric performance metrics and AI's materialized outcomes in deployment. Current approaches such as MLOps frameworks and AI model benchmarks offer detailed insights into system stability and model capabilities, but they do not provide decision-makers outside the AI stack with systematic evidence of how these systems actually behave in real-world contexts or affect their organizations over time. CIRCLE operationalizes the Validation phase of TEVV (Test, Evaluation, Verification, and Validation) by formalizing the translation of stakeholder concerns outside the stack into measurable signals. Unlike participatory design, which often remains localized, or algorithmic audits, which are often retrospective, CIRCLE provides a structured, prospective protocol for linking context-sensitive qualitative insights to scalable quantitative metrics. By integrating methods such as field testing, red teaming, and longitudinal studies into a coordinated pipeline, CIRCLE produces systematic knowledge: evidence that is comparable across sites yet sensitive to local context. This, in turn, can enable governance based on materialized downstream effects rather than theoretical capabilities.

Metadata

Comment
Accepted at Intelligent Systems Conference (IntelliSys) 2026
