Responsible AI development principles

AI can open doors to richer learning experiences, accelerate research, and spark new possibilities in education. To realize this potential, we build our tools with care and intention, grounded in safety, transparency, and responsible design.

These principles guide what we create and release:

Privacy & Security First

We prioritize strong privacy and security safeguards, ensuring our tools can be integrated responsibly in educational contexts.

Safety by Design

From how we handle data to how we evaluate model behavior, we integrate risk-mitigation features into our tools to encourage responsible use.

Human-Centered

Our tools are designed to extend human insight, respect context, and meet the real-world needs of educators, researchers, and developers.

Purpose-Driven & Pedagogically Grounded

We design our tools to reflect insights from learning science and pedagogy, aiming to support effective educational use.

Continuously Evolving

Responsible AI development is an ongoing commitment. We adapt and improve our practices and tools based on research, user feedback, and real-world impact.

We build these tools with care, in partnership with the communities who use them. If you’re exploring ways to integrate AI into your work, get in touch with us.