from Continuous Measurement

The diagnostic layer
between curriculum
and intervention.

Resolve makes instructional intent legible, measurement valid, and intervention precisely targeted — one pipeline, every lesson. And in doing so, it generates the labeled-data architecture that knowledge tracing models have been missing.

What Resolve Is

Three outputs.
One cognitive architecture.

Resolve takes a lesson from any high-quality mathematics curriculum as input and produces a complete instructional package as output. The same cognitive architecture that drives the instructional outputs generates labeled training data against a psychometrician-defined measurement model.

Output 01
Cognitive Clarification
Curriculum layer

Every lesson in a high-quality curriculum carries mathematical intent that is embedded in activity structure — intent that requires expert interpretation to surface. Resolve analyzes the lesson against a research-driven protocol anchored to the standard's cognitive demands, classifies every activity by its relationship to the learning objective, and flags critical instructional moments. The result is a clarified lesson a teacher can execute without guesswork — with the curriculum's own intent made explicit.

Plain language The lesson, annotated so teachers can see exactly what they're supposed to do, why each activity matters, and which moments are critical.
Output 02
Diagnostic Alignment
Assessment layer

The original exit ticket is replaced with a psychometrician-designed instrument derived from the same cognitive architecture as the lesson. Every wrong-answer option maps to a specific misconception code defined in the empirical research literature — a documented, theoretically grounded hypothesis about student thinking. The instrument differentiates among cognitive error types, not just correctness. Every student response is a labeled observation. Results feed automatically into generated, targeted remediation matched to the specific error identified.

Plain language A smarter assessment at the end of the lesson that tells you not just who got it wrong, but which specific, research-defined misunderstanding caused it — and automatically generates the right intervention.
Output 03
Typed Remediation
Intervention layer

Each misconception is classified into one of five cognitive error types — Gateway, Structural, Representational, Coordination, Foundational. Each type has a distinct, evidence-based instructional template. Twenty-minute modules are generated automatically, matched to the specific error type, ordered by severity, and produced in both teacher and student versions. Each module ends with a 2-item diagnostic check: Resolved, Partially Resolved, or Persists.

Plain language A short, targeted activity for each student's specific misunderstanding — generated automatically, no additional teacher prep required.
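As a concrete illustration, the five error types and the outcome of the 2-item check can be sketched as a small data model. The scoring rule below is an assumption for illustration, not Resolve's published logic:

```python
from enum import Enum

class ErrorType(Enum):
    """The five cognitive error types that select a remediation template."""
    GATEWAY = "gateway"
    STRUCTURAL = "structural"
    REPRESENTATIONAL = "representational"
    COORDINATION = "coordination"
    FOUNDATIONAL = "foundational"

class CheckOutcome(Enum):
    """Possible results of the 2-item diagnostic check ending each module."""
    RESOLVED = "resolved"
    PARTIALLY_RESOLVED = "partially_resolved"
    PERSISTS = "persists"

def score_check(item1_correct: bool, item2_correct: bool) -> CheckOutcome:
    # Hypothetical scoring rule: both items correct -> Resolved,
    # one correct -> Partially Resolved, neither -> Persists.
    n = int(item1_correct) + int(item2_correct)
    if n == 2:
        return CheckOutcome.RESOLVED
    if n == 1:
        return CheckOutcome.PARTIALLY_RESOLVED
    return CheckOutcome.PERSISTS
```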
For Your ML Team

The labeled data problem,
solved at the source.

Most adaptive systems are trained on behavioral proxies — right/wrong sequences, time-on-task, clickstreams — where latent student states are inferred post-hoc. Resolve's architecture inverts this. Misconception codes, cognitive step sequences, and measurement priors define the latent states before data is collected. Every response to a Resolve diagnostic item is a labeled observation against a theoretically grounded model — not a behavioral signal to be interpreted after the fact.
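The "labeled observation" contract described above can be made concrete with a minimal record sketch. All field names and the misconception code are illustrative assumptions, not Resolve's actual schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class LabeledResponse:
    """One student response to a diagnostic item, labeled at collection time."""
    student_id: str
    item_id: str
    selected_option: str               # e.g. "B"
    correct: bool
    misconception_code: Optional[str]  # research-literature code; None if correct
    construct_level: int               # level in the construct hierarchy
    cognitive_step: int                # step in the item's cognitive sequence

# A wrong answer arrives already labeled with its hypothesized cause
# ("FRAC-03" is a made-up code for illustration):
r = LabeledResponse("s-001", "i-042", "B", False,
                    misconception_code="FRAC-03",
                    construct_level=2, cognitive_step=1)
```

The key contrast with behavioral proxies: the label is attached at collection time by the item design itself, not inferred afterward from response patterns.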

Without Resolve / With Resolve

Without: Behavioral sequences — right/wrong, time, clicks.
With: Response sequences labeled against psychometrician-defined misconception codes from the empirical research literature.

Without: Latent states inferred post-hoc from behavioral patterns.
With: Latent states defined in advance by a cognitive architecture — before a single data point is collected.

Without: Black-box mastery probability.
With: Interpretable state: which misconception is active, at which construct level, at which cognitive step.

Without: Training corpus assembled from proxies.
With: Training corpus generated automatically — data collection begins when the pipeline runs, not when scale is achieved.

Without: Difficult to defend to assessment-literate buyers.
With: Grounded in an IRT measurement framework with expert-based 3-parameter priors — defensible to psychometricians and assessment platforms.
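The "3-parameter priors" refer to the three item parameters of the standard IRT 3PL model. A minimal sketch of its response-probability function, with illustrative expert-prior values (the specific numbers are assumptions):

```python
import math

def p_correct_3pl(theta: float, a: float, b: float, c: float) -> float:
    """3PL IRT item response function.

    theta: student ability
    a: discrimination, b: difficulty, c: lower asymptote (guessing)
    """
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# Illustrative expert priors for one item: moderate discrimination,
# average difficulty, 4-option multiple choice (guessing ~ 0.25).
# At theta == b the probability is exactly c + (1 - c) / 2.
p = p_correct_3pl(theta=0.0, a=1.0, b=0.0, c=0.25)  # 0.625
```

Expert-based priors give each diagnostic item defensible starting values for (a, b, c) before any response data exist, which is what makes the framework auditable by psychometricians.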
Stage 01 — Near Term
Knowledge tracing with interpretable states
Labeled response sequences provide a training corpus for KT models where latent constructs are psychometrician-defined, not behaviorally inferred. Models trace which specific misconception is active at which construct level — not just a mastery probability.
Stage 02 — Medium Term
Within-instruction proficiency measurement
Resolve's cognitive step sequences define a generative model for student behavior during instruction. In-lesson responses become evidence for updating a belief state in real time — predicting which misconception a student holds before the exit ticket is scored.
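The real-time belief-state update described in Stage 02 is, at its core, a Bayesian posterior over a small set of latent misconception states. A minimal sketch, where the state names and likelihood values are hypothetical:

```python
def update_belief(prior: dict, likelihood: dict) -> dict:
    """One Bayes step: P(state | response) is proportional to
    P(response | state) * P(state), renormalized over all states."""
    unnorm = {s: prior[s] * likelihood[s] for s in prior}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

# Prior over two hypothetical misconception states plus "no misconception".
belief = {"NONE": 0.6, "M-REGROUP": 0.25, "M-PLACEVALUE": 0.15}
# Likelihood of the observed in-lesson response under each state.
evidence = {"NONE": 0.2, "M-REGROUP": 0.7, "M-PLACEVALUE": 0.4}
belief = update_belief(belief, evidence)  # mass shifts toward M-REGROUP
```

Each in-lesson response triggers one such update, so the system can flag the most probable misconception before the exit ticket is ever scored.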
The Strategic Case

What the combination
makes possible.

Resolve's diagnostic architecture and ML infrastructure are complementary in a specific way: one generates the labeled data the other needs, and together they produce something neither can offer alone.

01
District-level distribution
Resolve anchors instruction, assessment, and intervention in the HQIM adoption cycle — the procurement decision that governs what every teacher in a district uses. That is the institutional entry point the K–12 market requires.
02
A training corpus no one else has
Theoretically labeled response data against psychometrician-defined misconception taxonomies — generated automatically at scale. This corpus cannot be assembled from existing behavioral datasets. It only exists if the diagnostic architecture exists first.
03
Defensibility with assessment buyers
Assessment platforms and state agencies ask hard questions about measurement validity. Resolve's IRT-grounded architecture gives adaptive outputs a theoretically defensible foundation — not just accuracy metrics.
Current State

What exists right now.

Resolve is not a concept. It is a working two-agent workflow with demonstrated output across multiple Grades 3–8 mathematics standards.

Two-agent workflow
Deliberately architected to produce ML-ready outputs from first run — every response labeled against a psychometrician-authored cognitive-diagnostic model, latent states defined before data is collected. The complexity lives in the cognitive architecture, not in proprietary software. That's the point.
Complete lesson package
Cognitive clarification, diagnostic exit ticket, and full remediation module suite produced and pressure-tested across multiple Grades 3–8 mathematics standards. Available for review.
ML-ready data architecture
Explicitly designed for downstream ML pipeline. Labeled response data generated against psychometrician-defined measurement models from first use — data collection begins when the pipeline runs.
Scope & expansion path
Grades 3–8 Mathematics (US CCSSM) today. Clear, sequenced path to K–2, 9–12, international curriculum frameworks, and ELA. The cognitive architecture is standard-agnostic.
Next Step

See what the output
actually looks like.

The fastest path to understanding fit is reviewing a complete lesson package — clarified lesson, diagnostic instrument, and remediation modules — alongside the data schema it generates.

Review a Sample Lesson Package
Review Technical Documentation