EBA Homework Accountability
EBA Homework Accountability is a simplified homework workflow for parents, children, and tutors, designed so parents can quickly see what’s assigned and what’s done, and tutors can assign work without constantly rewriting links and instructions.
Context
Homework tracking is where trust is built or lost. When a routine is unclear, parents default to chasing, children default to avoidance, and tutors spend time repeating admin instead of teaching. The problem is not motivation alone. It is a workflow problem.
The constraints were practical. Parents were time-poor and wanted a quick, reliable view of progress. Tutors needed lower overhead. If assignment and follow-up felt annoying, the system would not be used, and the routine would collapse.
Discovery
This started from repeated parent complaints and tutor feedback rather than a roadmap idea. Two consistent signals emerged:
- Parents did not lack effort. They lacked visibility. They could not easily answer what is due, what is done, and what is being avoided, especially across multiple children and busy weeks.
- Tutor adoption was the bottleneck. Tutors resisted anything that felt like extra admin. If assigning work inside the system was not easier than the alternative, they fell back to WhatsApp and copy-paste messages, and the whole system collapsed with them.
The key insight was to treat tutor friction as a first-class product problem. A parent-friendly view is useless if assignments are not consistently created.
What I decided to build (and why)
I built a lightweight LMS-style workflow focused on a single loop and a small number of states. The goal was not to replicate a full LMS. The goal was to make the routine reliable.
- A clear state model: assigned → in progress → submitted → reviewed → done, so everyone shares the same definition of progress.
- Role-based permissions: tutors assign and review, parents oversee and support, children complete and submit.
- A structured resource selector: reusable resource locations and templates to reduce repetitive link pasting and instruction rewriting.
- Fast parent surfaces: a simple, glanceable view that answers what is due, what is late, and what needs attention today.
The north star was consistency. If it takes too long to assign, it will not be used. If it is confusing to interpret, it will not reduce chasing.
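The state model above can be sketched as a small transition table. The state names come from the write-up; the transition rules and the `advance` helper are illustrative assumptions, not the shipped code.

```python
# Homework state machine: assigned -> in progress -> submitted -> reviewed -> done.
# Only forward, adjacent transitions are allowed, so "progress" means the
# same thing to tutors, parents, and children.
ALLOWED_TRANSITIONS = {
    "assigned": {"in_progress"},
    "in_progress": {"submitted"},
    "submitted": {"reviewed"},
    "reviewed": {"done"},
    "done": set(),  # terminal state
}

def advance(current: str, target: str) -> str:
    """Move a homework item to the next state, rejecting skipped steps."""
    if target not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition: {current} -> {target}")
    return target
```

Keeping the table explicit makes it cheap to answer "what counts as done" in one place, rather than re-deciding it per dashboard.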
Rejected alternatives
- Keeping everything in WhatsApp or email: fast in the moment, but poor tracking. Messages get buried and progress becomes guesswork.
- Building a full LMS: too heavy for the use case. More features would increase setup and training cost and reduce adoption.
- Free-text links only: flexible, but messy. Links drift, instructions vary by tutor, and parents cannot quickly interpret what matters.
- Solving everything at once: delays shipping. The core loop needed to work first before adding layers like reporting and calendars.
What shipped
- Role-based dashboards for tutors, parents, and children
- Homework objects with clear states and deadlines
- Parent surfaces that show what is assigned, what is in progress, and what is overdue at a glance
- Tutor workflows designed for speed, including structured resource locations to reduce repetitive copy-paste
- Review flow so submitted work can be marked as reviewed and moved to done with minimal overhead
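The role split behind the dashboards above (tutors assign and review, parents oversee, children complete and submit) can be sketched as a permission matrix. The role and action names here are illustrative assumptions, not the shipped schema.

```python
# Hypothetical role/permission matrix for the three dashboard roles.
PERMISSIONS = {
    "tutor": {"assign", "review", "view"},
    "parent": {"view"},                    # oversee and support, not edit
    "child": {"start", "submit", "view"},
}

def can(role: str, action: str) -> bool:
    """Check whether a role is allowed to perform an action."""
    return action in PERMISSIONS.get(role, set())
```

A flat matrix like this keeps the rules auditable: adding a capability is a one-line diff rather than a scattered set of if-statements.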
Analytics: Events
- homework_assigned: tracks tutor adoption and reliability. If this does not happen consistently, the whole system fails because there is nothing for parents and children to act on.
- homework_viewed: tracks visibility. This tells you whether the assignment actually reached the household and whether parents or children are seeing what is expected.
- homework_started: tracks conversion from awareness to action. This helps separate a visibility problem from a motivation or difficulty problem.
- homework_submitted: tracks follow-through. This tells you whether children can complete the workflow and whether submission UX is blocking progress.
- homework_reviewed: tracks tutor throughput. Review latency is often where trust erodes, so this event measures whether feedback closes the loop in time.
- homework_completed: tracks loop closure. This shows how often work reaches a true end state that parents and tutors agree on.
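Each of these events can be emitted as a small structured payload. The event names match the list above; the payload fields and the `emit_event` helper are illustrative assumptions about shape, not the production instrumentation.

```python
import json
import time

def emit_event(name: str, homework_id: str, actor_role: str) -> str:
    """Serialize one analytics event as JSON (assumed payload shape)."""
    event = {
        "event": name,            # e.g. "homework_assigned"
        "homework_id": homework_id,
        "actor_role": actor_role,  # tutor / parent / child
        "ts": time.time(),
    }
    return json.dumps(event)
```

Keeping the homework id on every event is what makes the funnel joins in the next section possible.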
How I used these signals
- Assigned to viewed rate to test whether the system creates clarity or whether assignments are still getting lost.
- Viewed to started rate to detect avoidance, unclear instructions, or poor task sizing.
- Submitted to reviewed time to identify bottlenecks in tutor workflows and decide whether notifications or batching are needed.
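The step-to-step rates above reduce to simple ratios over event counts. This is a minimal sketch; the count values and the `funnel_rate` helper are illustrative, not real data.

```python
def funnel_rate(counts: dict, frm: str, to: str) -> float:
    """Conversion rate between two adjacent funnel steps."""
    return counts[to] / counts[frm] if counts[frm] else 0.0

# Hypothetical weekly event counts for one household.
counts = {
    "homework_assigned": 40,
    "homework_viewed": 30,
    "homework_started": 21,
}

assigned_to_viewed = funnel_rate(counts, "homework_assigned", "homework_viewed")  # 0.75
viewed_to_started = funnel_rate(counts, "homework_viewed", "homework_started")    # 0.7
```

A low assigned-to-viewed rate points at a delivery or clarity problem; a low viewed-to-started rate points at avoidance or task sizing, which is exactly the separation the signals are meant to give.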
What this instrumentation enabled
- Measure assignment reliability (assigned versus viewed and started)
- Spot routine breakdowns (viewed but not started, started but not submitted)
- Track tutor throughput (review latency and reviewed-to-done conversion)
- Identify where reminders or clearer instructions would have the highest impact
AI usage
AI helped me formalise the state machine, draft role and permission matrices, and translate messy qualitative feedback into clear tickets with acceptance criteria. It also helped generate edge cases, especially around partial completion, late submissions, and multi-child households.
Next steps
- Notifications tuned to reduce chasing without spamming
- Calendar surfaces so routines can be seen in a weekly view
- Tutor notes attached to submissions to reduce back-and-forth messages
- Reporting after the core workflow is validated in beta, focused on a small set of actionable signals