This case study describes a real engagement. Client identity, proprietary details, and specific metrics are anonymized or approximated under NDA.
Adaptive Learning Path Engine
A one-size-fits-all course structure with a 62% dropout rate after module 3. No personalization based on learner performance — all learners received the same content in the same sequence regardless of prior knowledge, pace, or demonstrated understanding.
Adaptive learning engine that adjusts content difficulty, pacing, and format based on real-time learner performance signals. The curriculum graph is reconfigured per-learner after each assessment, routing strong performers to accelerated paths and weaker performers to reinforcement content.
This engagement replaced a linear course structure with a dynamic curriculum graph where the learner's next content node is determined by their performance on each completed node. The system models each learner's demonstrated knowledge state across a curriculum ontology of 340 concepts, adjusts content difficulty within each concept using a three-level difficulty taxonomy, and routes learners to reinforcement or advancement paths after each assessment event. Content is served from a Sanity CMS that organizes learning objects by concept, difficulty level, and format (video, text, interactive exercise, quiz). The adaptive engine makes routing decisions in under 100ms, with no perceptible latency between assessment completion and next content load.
The Challenge
Adaptive learning requires a curriculum structure that supports multiple valid paths through the same material — a constraint that existing content did not satisfy. The prior course was organized as a linear sequence, and adapting it required restructuring 180+ existing content objects into the concept-difficulty taxonomy and identifying prerequisite relationships between concepts. This content restructuring was a significant upfront investment that had to be completed before any engine development could begin. The performance signal model also required careful design: learner performance on a single quiz is noisy, and a system that reacts too aggressively to single data points produces erratic path changes that confuse learners. Smoothing the adaptation signal required calibrating the performance model on historical learner data, which was sparse for lower-frequency content nodes.
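To illustrate the smoothing concern, here is a minimal sketch of one common damping technique, an exponentially weighted estimate, under which a single bad quiz cannot swing the mastery signal on its own. This is illustrative only; the production model uses the Bayesian knowledge tracing approach described under "How We Built It," and the `alpha` value here is a hypothetical placeholder, not a calibrated parameter.

```python
def smoothed_mastery(scores, alpha=0.3):
    """Exponentially weighted mastery estimate over quiz scores.

    Recent quizzes count more than old ones, but one outlier
    cannot trigger an erratic path change by itself.
    """
    estimate = scores[0]
    for s in scores[1:]:
        estimate = alpha * s + (1 - alpha) * estimate
    return estimate

# Two strong quizzes followed by one failed quiz: the estimate dips
# but stays well above the raw last score of 0.2.
print(smoothed_mastery([0.9, 0.9, 0.2]))
```

The trade-off is responsiveness: a smaller `alpha` suppresses noise more aggressively but also delays legitimate path changes, which is why calibration against historical data mattered.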
How We Built It
Curriculum graph construction (Weeks 1–3): We worked with the subject matter experts to restructure the existing course content into a concept graph with 340 nodes, three difficulty levels per node, and explicit prerequisite edges. Each content object was tagged with concept, difficulty, format, and estimated completion time. The graph structure was validated by having instructors trace three representative learner paths through it and identify structural gaps. Prerequisite relationships were derived from instructor input rather than inferred algorithmically, since the domain semantics were not reliably recoverable from content text alone.
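A minimal sketch of the concept-graph structure and a structural validation pass, assuming hypothetical concept names and a dataclass representation (the production content model lives in Sanity). The validation mirrors the structural-gap check described above: every prerequisite edge must point at a known concept, and the edges must form a directed acyclic graph.

```python
from dataclasses import dataclass, field
from graphlib import TopologicalSorter, CycleError

@dataclass
class ConceptNode:
    concept_id: str
    prerequisites: list = field(default_factory=list)
    # Content objects keyed by (difficulty_level, format), e.g. (1, "video").
    content: dict = field(default_factory=dict)

def validate_graph(nodes):
    """Check that prerequisite edges reference known concepts and form a DAG.

    Returns a valid topological ordering (prerequisites first), which also
    serves as a default linear fallback path through the curriculum.
    """
    by_id = {n.concept_id: n for n in nodes}
    for n in nodes:
        for p in n.prerequisites:
            if p not in by_id:
                raise ValueError(f"{n.concept_id} requires unknown concept {p}")
    try:
        return list(TopologicalSorter(
            {n.concept_id: set(n.prerequisites) for n in nodes}
        ).static_order())
    except CycleError as e:
        raise ValueError(f"prerequisite cycle: {e.args[1]}") from e

nodes = [
    ConceptNode("fractions"),
    ConceptNode("ratios", prerequisites=["fractions"]),
    ConceptNode("proportional-reasoning", prerequisites=["ratios"]),
]
print(validate_graph(nodes))
```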
Performance model and knowledge state estimation (Weeks 4–6): Learner performance is modeled as a knowledge state vector over the 340 concepts, updated after each assessment event using a Bayesian knowledge tracing approach. The update rule weighs recent assessments more heavily than historical ones and applies concept prerequisite relationships to propagate performance signals (strong performance on an advanced concept provides weak evidence of competency on prerequisite concepts). The performance model was calibrated against 18 months of historical learner assessment data, and the calibration parameters were validated by comparing predicted performance on held-out assessment events against actual outcomes.
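The core update can be sketched as a single classic Bayesian knowledge tracing step. The `slip`, `guess`, and `learn` parameters below are illustrative placeholders, not the calibrated production values, and the sketch omits the recency weighting and prerequisite propagation described above.

```python
def bkt_update(p_known, correct, slip=0.1, guess=0.2, learn=0.15):
    """One Bayesian knowledge tracing step for a single concept.

    p_known: prior probability the learner has mastered the concept.
    correct: whether the assessment item was answered correctly.
    slip:    P(wrong answer | concept known).
    guess:   P(right answer | concept not known).
    learn:   P(transition to known after this practice opportunity).
    """
    if correct:
        evidence = p_known * (1 - slip)
        posterior = evidence / (evidence + (1 - p_known) * guess)
    else:
        evidence = p_known * slip
        posterior = evidence / (evidence + (1 - p_known) * (1 - guess))
    # Allow learning to occur after the observation is incorporated.
    return posterior + (1 - posterior) * learn
```

Starting from an uncertain prior of 0.5, a correct answer pushes the estimate up sharply while an incorrect answer pulls it down less sharply, because the `slip` parameter keeps single mistakes from being treated as conclusive.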
Adaptive routing engine (Weeks 7–9): The routing engine queries the learner's current knowledge state and the curriculum graph to select the next content node on each assessment completion. Routing logic considers: demonstrated mastery on the current concept (determines advancement vs. reinforcement), prior performance on prerequisites (surfaces gaps that may be blocking progress), and learner-stated time preference (a daily time commitment from onboarding adjusts the content length and depth of next-node selections). The routing engine is implemented as a Python service behind a Redis cache for knowledge state lookups, with route decisions completing in under 50ms.
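The routing priority order described above can be sketched as follows. The thresholds, difficulty encoding, and graph shape are hypothetical simplifications: the real engine reads knowledge state from Redis and resolves content against the Sanity-backed graph.

```python
ADVANCE_THRESHOLD = 0.85  # mastery needed to advance (hypothetical value)
GAP_THRESHOLD = 0.5       # prerequisite mastery below this blocks progress

def next_node(state, graph, current, daily_minutes):
    """Pick the learner's next content node after an assessment.

    state: concept_id -> estimated mastery probability.
    graph: concept_id -> {"prerequisites": [...], "next": [...],
                          "content": [(difficulty, minutes, node_id), ...]}
    """
    # 1. Blocking prerequisite gaps are repaired first.
    for prereq in graph[current]["prerequisites"]:
        if state.get(prereq, 0.0) < GAP_THRESHOLD:
            return pick_content(graph, prereq, difficulty=1, budget=daily_minutes)
    # 2. Below mastery: reinforce the current concept at lower difficulty.
    if state.get(current, 0.0) < ADVANCE_THRESHOLD:
        return pick_content(graph, current, difficulty=1, budget=daily_minutes)
    # 3. Mastered: advance, with harder content for very strong performers.
    successors = graph[current]["next"]
    target = successors[0] if successors else current
    difficulty = 3 if state.get(current, 0.0) > 0.95 else 2
    return pick_content(graph, target, difficulty, budget=daily_minutes)

def pick_content(graph, concept, difficulty, budget):
    """Longest content object at the requested difficulty that fits the time budget."""
    fits = [c for c in graph[concept]["content"]
            if c[0] == difficulty and c[1] <= budget]
    return max(fits, key=lambda c: c[1])[2] if fits else None
```

Ordering the checks this way is what catches learners whose current-concept struggles are actually downstream symptoms of a prerequisite gap, rather than just reinforcing the concept they failed.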
Content delivery integration and analytics (Weeks 10–12): Content is served from a Sanity CMS via the existing platform API, with the adaptive engine supplying the content node ID rather than a fixed sequence position. The Next.js frontend required minimal changes — the sequence navigation was replaced by a single "next content" endpoint call that the engine resolves. A Vercel deployment pipeline handles frontend deployments with zero downtime. Analytics dashboards give instructors concept-level visibility into the learner performance distribution: which concepts have high failure rates, which paths learners take most frequently, and where the largest performance gaps exist between adjacent difficulty levels.
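The concept-level failure-rate view that drives the instructor dashboard can be sketched as a simple aggregation over the assessment event log. The event shape here is a hypothetical simplification of the real log schema.

```python
from collections import defaultdict

def concept_failure_rates(events):
    """Aggregate assessment events into per-concept failure rates.

    events: iterable of (concept_id, passed) tuples from the assessment log.
    Returns concept_id -> fraction of attempts that failed.
    """
    totals = defaultdict(int)
    failures = defaultdict(int)
    for concept, passed in events:
        totals[concept] += 1
        if not passed:
            failures[concept] += 1
    return {c: failures[c] / totals[c] for c in totals}

events = [
    ("ratios", True), ("ratios", False), ("ratios", False),
    ("fractions", True),
]
print(concept_failure_rates(events))
```

Sorting this mapping by failure rate is what surfaces the content nodes most worth an instructor's review time.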
What We Delivered
Dropout rate after module 3 dropped from 62% to 37% — a 40% relative reduction — in the first cohort run through the adaptive system. Module completion rate across the full course increased 2.3x. The reduction in dropout was most pronounced among learners who had previously been abandoning at the point where content difficulty exceeded their preparation level — the reinforcement routing now catches these learners before they disengage.
Average time-to-competency (time from course start to passing the final assessment) decreased 28% for learners who completed the course. This reflects both the removal of unnecessary linear progression through concepts learners already understand (for stronger performers) and the prevention of advancement before foundational gaps are filled (for learners who were previously advancing despite incomplete preparation).
Instructor workload changed qualitatively. Rather than reviewing learner progress on a fixed linear schedule, instructors now use the analytics dashboard to identify which concepts are generating the highest failure rates and to review the content at those nodes for improvement opportunities. Three content improvements have been made based on analytics data in the first two months of operation, each correlating with measurable reductions in failure rate at the corresponding concept nodes.
Ready to build something like this?
Tell us what you are building. We will scope it, price it honestly, and give you a clear plan.
Start a Conversation
Free 30-minute scoping call. No obligation.