This role is designed for individuals who value correctness over speed. The annotations you produce serve as training data for AI systems used daily by thousands of students. Accurate labeling strengthens the product; inconsistent labels teach the model incorrect patterns.
LearnWith.AI develops AI-driven learning experiences grounded in learning science, data analytics, and expert knowledge. Your primary function will be to convert raw student session videos into reliable, rubric-aligned labels the team depends on. You will review recorded student sessions, pinpoint critical behavioral events, and apply precise classification rules to capture what occurred and when. Additionally, you will assess LLM-generated pre-annotations, correct errors, and record edge cases to help engineers refine the underlying system.
This is not freelance, ad-hoc annotation work. It involves a continuous queue within one product area, supported by direct feedback, calibration against gold-standard datasets, and advancement tied to accuracy and consistency. If you value transparent expectations, quantifiable quality standards, and work that directly influences model outcomes, we would like to hear from you.
Your role ensures that student session videos are transformed into labeled datasets with ">=95% accuracy and time-precise annotations, providing a reliable foundation for measuring model performance improvements or regressions.
LearnWith.AI is an edtech startup that pairs AI with subject matter experts to cultivate a new way of learning. Our unique approach combines 50+ years of learning science, cutting-edge data analytics, and AI-powered coaching. In doing so, we help students learn more, learn faster, and learn better - and have fun while doing it. We are a remote-first company that hires globally via Crossover.
There is so much to cover for this exciting role, and space here is limited. Hit the Apply button if you found this interesting and want to learn more.