Resolve makes instructional intent legible, measurement valid, and intervention precisely targeted — one pipeline, every lesson. And in doing so, it generates the labeled data architecture knowledge tracing models have been missing.
Resolve takes a lesson from any high-quality mathematics curriculum as input and produces a complete instructional package as output. The same cognitive architecture that drives the instructional outputs generates labeled training data against a psychometrician-defined measurement model.
Every lesson in a high-quality curriculum carries mathematical intent that is embedded in activity structure — intent that requires expert interpretation to surface. Resolve analyzes the lesson against a research-driven protocol anchored to the standard's cognitive demands, classifies every activity by its relationship to the learning objective, and flags critical instructional moments. The result is a clarified lesson a teacher can execute without guesswork — with the curriculum's own intent made explicit.
The original exit ticket is replaced with a psychometrician-designed instrument derived from the same cognitive architecture as the lesson. Every wrong-answer option maps to a specific misconception code defined in the empirical research literature: a documented, theoretically grounded hypothesis about student thinking. The instrument differentiates among cognitive error types, not just correctness, so every student response is a labeled observation. Results feed automatically into generated, targeted remediation matched to the specific error identified.
Each misconception is classified into one of five cognitive error types: Gateway, Structural, Representational, Coordination, or Foundational. Each type has a distinct, evidence-based instructional template. Twenty-minute remediation modules are generated automatically, matched to the specific error type, ordered by severity, and produced in both teacher and student versions. Each module ends with a two-item diagnostic check scored as Resolved, Partially Resolved, or Persists.
Most adaptive systems are trained on behavioral proxies — right/wrong sequences, time-on-task, clickstreams — where latent student states are inferred post-hoc. Resolve's architecture inverts this. Misconception codes, cognitive step sequences, and measurement priors define the latent states before data is collected. Every response to a Resolve diagnostic item is a labeled observation against a theoretically grounded model — not a behavioral signal to be interpreted after the fact.
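The inversion can be made concrete by comparing two record shapes. Every field name and code below is a hypothetical placeholder, not Resolve's actual data format.

```python
# Behavioral proxy: the latent student state must be inferred after the fact.
proxy_record = {
    "student": "s-102",
    "item": "q17",
    "correct": False,
    "time_ms": 48200,
}

# Labeled observation: the latent state is named by the item design itself,
# against states that were defined before any data was collected.
labeled_record = {
    "student": "s-102",
    "item": "q17",
    "misconception_code": "ADD-ACROSS-NUM-DEN",  # pre-defined latent state
    "construct_level": 2,
    "cognitive_step": "combine-like-units",
}
```

A knowledge tracing model trained on records like the second one learns over named, interpretable states rather than over patterns it must invent from clickstream behavior.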
| Without Resolve | With Resolve |
|---|---|
| Behavioral sequences — right/wrong, time, clicks | Response sequences labeled against psychometrician-defined misconception codes from the empirical research literature |
| Latent states inferred post-hoc from behavioral patterns | Latent states defined in advance by a cognitive architecture — before a single data point is collected |
| Black-box mastery probability | Interpretable state: which misconception is active, at which construct level, at which cognitive step |
| Training corpus assembled from proxies | Training corpus generated automatically — data collection begins when the pipeline runs, not when scale is achieved |
| Difficult to defend to assessment-literate buyers | Grounded in IRT measurement framework with expert-based 3-parameter priors — defensible to psychometricians and assessment platforms |
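The table's last row names an IRT framework with expert-based three-parameter priors. A minimal sketch of the standard 3PL item response function, with illustrative prior values standing in for expert-elicited starting parameters (the numbers are not Resolve's):

```python
import math

def p_correct_3pl(theta: float, a: float, b: float, c: float) -> float:
    """3PL IRT: P(correct | theta) = c + (1 - c) / (1 + exp(-a * (theta - b))).

    theta: student ability; a: discrimination; b: difficulty; c: guessing floor.
    Expert-based priors supply (a, b, c) before any response data exists.
    """
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# Illustrative prior for a 4-option item: guessing floor near 0.25.
prior = {"a": 1.2, "b": 0.0, "c": 0.25}

# At theta == b, the model gives P = c + (1 - c) / 2 = 0.625 for this prior.
p_at_difficulty = p_correct_3pl(theta=0.0, **prior)
```

Starting from expert priors rather than flat ones is what makes the instrument defensible before large-scale response data arrives; the priors are then updated as labeled observations accumulate.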
Resolve's diagnostic architecture and ML infrastructure are complementary in a specific way: one generates the labeled data the other needs, and together they produce something neither can offer alone.
Resolve is not a concept. It is a working two-agent workflow with demonstrated output across multiple Grades 3–8 mathematics standards.
The fastest path to understanding fit is reviewing a complete lesson package — clarified lesson, diagnostic instrument, and remediation modules — alongside the data schema it generates.