EBA Learning Games
EBA Learning Games is a modular suite of short, feedback-heavy mini-games built as reusable 'lego bricks', so we could ship weekly, keep UX consistent, and give children immediate correction instead of feedback delayed by 30 minutes or more.
Context
A lot of 11+ practice is structurally demotivating. Children do 20–40 minutes of questions, then wait for marking or tutor feedback. Even when they try, the correction arrives too late to connect effort to improvement. The result is predictable: practice feels pointless, confidence drops, and avoidance behaviours grow.
The product goal was not “more content”. It was a better learning loop. Short sessions, fast correction, and a flow that makes it easy to try again.
Discovery
I tested early versions across roughly 15 families and two classrooms, spanning about 500 lessons. Two patterns showed up quickly:
- The UI interaction cost was the hidden enemy. Some children were not getting questions wrong because they did not understand the concept. They were losing time and confidence through friction: confusing input patterns, too many clicks, unclear feedback states, or too much reading before action. In a timed setting, that friction becomes panic.
- Baseline skill gaps required gentler entry ramps. A subset of children needed easier starts and scaffolding before they could benefit from timed performance. If the first experience feels like failure, they disengage. Games needed to start in a way that lets the child succeed early, then gradually tighten.
These insights pushed the project away from building “cool games” and toward building a consistent, low-friction product surface that could host many games without reinventing UX each time.
What I decided to build (and why)
I treated the games as one product, not dozens of mini-products.
That meant building a shared foundation with:
- A consistent game shell with the same HUD, timing, scoring language, feedback placement, and end-of-session summary across all games
- Reusable interaction primitives so children learn input patterns once and reuse them everywhere
- Standard feedback patterns including immediate correctness and fast retry loops
- Difficulty handling as a system with shared difficulty state and ramp rules
- Shared measurement with consistent event instrumentation across games
The payoff was compounding speed. Every new game benefited from the same usability lessons, and every improvement to the shell improved the entire suite.
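The shell-plus-plugin split described above can be sketched as a small TypeScript contract. This is illustrative only: the names (`MiniGame`, `ShellServices`, `runQuestion`) and signatures are assumptions, not the actual EBA codebase, but they show the key idea that each game supplies only questions and answer checking, while feedback, input handling, and instrumentation live in the shared shell.

```typescript
// Hypothetical sketch of the shared-shell contract. All names here are
// illustrative assumptions, not the real EBA API.

type FeedbackState = "correct" | "incorrect";

interface ShellServices {
  // The shared HUD, feedback placement, and event instrumentation
  // live in the shell, not in each individual game.
  showFeedback(state: FeedbackState): void;
  recordEvent(name: string, payload?: Record<string, unknown>): void;
}

interface MiniGame {
  id: string;
  // A game only supplies questions and answer checking; everything
  // else (timing, scoring language, summaries) comes from the shell.
  nextQuestion(difficulty: number): { prompt: string; answer: string };
  checkAnswer(given: string, expected: string): boolean;
}

// The shell runs any registered game through the same fast loop:
// ask, check, show immediate feedback, record the interaction.
function runQuestion(
  game: MiniGame,
  shell: ShellServices,
  difficulty: number,
  given: string
): boolean {
  const q = game.nextQuestion(difficulty);
  const correct = game.checkAnswer(given, q.answer);
  shell.showFeedback(correct ? "correct" : "incorrect");
  shell.recordEvent("question_answered", { correctness: correct });
  return correct;
}
```

Because every game goes through the same `runQuestion` path, a fix to feedback timing or event payloads lands in all titles at once, which is the compounding effect described above.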
Rejected alternatives
- Longer sessions: More depth per session, but it breaks the learning loop. Children are left carrying confusion until the end, which increases frustration and avoidance.
- Bespoke UX per game: More tailored experiences, but shipping slows and the suite becomes inconsistent. Children pay the relearning cost every time they switch games.
- Over-indexing on gamification early: It can lift engagement briefly, but it does not fix the core problem of friction and delayed correction. If the experience is hard to use, points and badges do not rescue it.
- Third-party platforms: Fast to launch, but weak integration with the rest of EBA, limited instrumentation, and less control over the loop design.
What shipped
- A standard game shell with consistent HUD, timer, scoring, feedback states, and session summary
- Reusable interaction components that reduced input friction and increased consistency
- A weekly shipping cadence for new games and improvements to the shared shell
- Shipped titles including Sudoku, Statistics Trainer, and Magic Maths, all built on the same core architecture
Analytics: Events
- game_started: Tracks activation. This tells you how often a game is opened relative to impressions, and which games attract the first click.
- question_answered(correctness, response_time): Tracks the core learning interaction. Correctness shows difficulty fit, and response_time is a proxy for friction, confusion, or mastery.
- game_completed: Tracks loop completion. This is your primary retention proxy inside a session and helps compare which games keep children engaged to the end.
- game_abandoned: Tracks drop-off. Pairing this with the last known question and response_time helps locate where the experience breaks, such as early confusion or late-session fatigue.
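The four events above can be expressed as a discriminated union, which keeps payloads consistent across every game in the suite. The event names, `correctness`, `response_time`, and the last-known-question field come from the list above; `game_id` and `last_question_index` are assumed field names for illustration.

```typescript
// Typed sketch of the event taxonomy. Field names beyond correctness,
// response_time, and the last known question are assumptions.

type GameEvent =
  | { name: "game_started"; game_id: string }
  | { name: "question_answered"; game_id: string; correctness: boolean; response_time: number }
  | { name: "game_completed"; game_id: string }
  | { name: "game_abandoned"; game_id: string; last_question_index: number; response_time: number };

// A minimal in-memory sink; a real pipeline would batch and ship these.
const log: GameEvent[] = [];

function track(event: GameEvent): void {
  log.push(event);
}
```

Typing the union at the boundary means a game cannot, say, emit a `game_abandoned` event without the location data needed for drop-off analysis later.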
How I used these signals
- Completion rate by game and difficulty to spot titles that looked fun but did not hold attention.
- response_time on easy items to detect input friction, unclear UI states, or reading load that should not exist in a fast loop.
- Abandon location inside the session (early, mid, late) to decide whether to fix onboarding, pacing, or difficulty ramps.
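The signals above reduce to simple aggregations over session records. The sketch below shows one plausible shape, assuming sessions have been rolled up from raw events first; the `Session` type and the early/mid/late thresholds (thirds of the session) are illustrative assumptions, not the actual analysis.

```typescript
// Hypothetical rolled-up session record; shape and field names are assumed.
interface Session {
  gameId: string;
  completed: boolean;
  abandonedAtQuestion?: number; // index of the last answered question, if abandoned
  totalQuestions: number;
}

// Completion rate per game: the primary in-session retention proxy.
function completionRate(sessions: Session[], gameId: string): number {
  const forGame = sessions.filter((s) => s.gameId === gameId);
  if (forGame.length === 0) return 0;
  return forGame.filter((s) => s.completed).length / forGame.length;
}

// Bucket abandons into early / mid / late by position in the session,
// to decide whether to fix onboarding, pacing, or difficulty ramps.
function abandonBucket(s: Session): "early" | "mid" | "late" | "none" {
  if (s.completed || s.abandonedAtQuestion === undefined) return "none";
  const frac = s.abandonedAtQuestion / s.totalQuestions;
  if (frac < 1 / 3) return "early";
  if (frac < 2 / 3) return "mid";
  return "late";
}
```

With these two aggregates, "looked fun but did not hold attention" becomes a concrete query: low completion rate plus abandons clustered in the mid or late buckets.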
What this instrumentation enabled
- Identify where drop-off clustered (start vs mid-session vs final questions)
- Spot games with high friction (high time-to-answer across easy items)
- Compare completion and retry behaviour across titles
- Validate whether easier ramps increased early success and session completion
AI usage
AI was used as an acceleration layer, not a replacement for product judgement. It helped me compress iteration cycles: turning observations into structured tickets quickly, generating edge-case checklists for inputs, timing, and scoring, tightening acceptance criteria so changes stayed consistent across the suite, and drafting lightweight test plans for regressions in the shared shell.
Next steps
- Adaptive difficulty that responds to performance and confidence, not just correctness
- Better onboarding so first-time users experience early success and understand the loop
- Tutor-facing assignment logic so practice can be targeted to specific weaknesses
- Stronger progress summaries that translate gameplay into actionable insight without overwhelming parents and tutors