Product Trust Manager, Learning Commons
Learning Commons aims to scale proven teaching and learning practices to benefit every learner by building AI infrastructure that better connects the way students learn to the tools they learn with.
The Team
At Learning Commons, we operate at the intersection of technology, research, and philanthropy. We pair product development with grantmaking to scale proven teaching and learning practices for the benefit of every learner. We aim to bring learning science into the tools educators and students use every day.
Our work is grounded in a deep belief: when technology reflects the realities of classrooms and the science of how students learn, it can meaningfully strengthen teaching and unlock new possibilities for students. The rise of generative AI offers us a once-in-a-generation opportunity to dramatically accelerate the translation of research insights into practical, classroom-ready tools: tools that honor teachers’ expertise, adapt to students’ needs, and make effective learning practices easier to access, implement, and sustain.
In today’s fragmented edtech landscape, school districts are often left piecing together products that don’t always align with curricula or instructional needs. While AI holds enormous potential to support teachers and students, it can only deliver on that promise when grounded in research, high-quality educational data, and expert evaluation. That’s why we’re building open, public-purpose infrastructure — datasets, rubrics, and resources — that help raise the standard for educational tools and create more consistent, impactful learning experiences for all students and teachers.
The Opportunity
We are seeking a Product Trust Manager to join our Education Trust team. In this role, you will lead technically grounded trust initiatives across our education platform, with deep ownership of platform integrity, data privacy, responsible AI development, data protection impact assessments (DPIAs), API access patterns, data licensing, and restricted data access controls. You will translate trust, legal, and policy requirements into scalable product and platform mechanisms.
What You'll Do
- Execute the Trust strategy across AI systems, APIs, data platforms, and partner integrations, ensuring product integrity and compliant use of data at scale.
- Own and define trust requirements for consent alignment, API access controls (authentication vs authorization, scoped permissions, rate limiting, logging and monitoring), retention controls, deletion workflows, user rights, data licensing, and data use restrictions. Partner with engineering and product teams to translate these requirements into enforceable platform controls.
- Establish, document, and maintain clear governance standards and policy frameworks across the AI and data lifecycle—including training data ingestion, model evaluation, inference APIs, downstream consumption, and third-party integrations—and ensure they are consistently understood and applied across teams.
- Identify and mitigate structural risks across the AI and data lifecycle.
- Collaborate with Legal, Product Counsel, Privacy, and Security to translate GDPR, CCPA/CPRA, and emerging AI regulations into documented policy guidance and enforceable developer-facing requirements.
What You'll Bring
- 8+ years of experience in product risk, platform governance, trust & safety, data privacy, policy, or related domains, with practical experience working closely with technical and AI/ML teams, and a strong hands-on understanding of API-based platforms (authentication, authorization, access scopes); restricted data access frameworks (RBAC, approval workflows, purpose-based access); and common system failure modes that create privacy or data misuse risk.
- Demonstrated expertise in privacy, data, and platform risk governance, including leading DPIAs/PIAs and privacy risk assessments; translating outcomes into product and engineering requirements; governing data licensing and contractual data use limitations; and applying data protection regulations (e.g., GDPR, CCPA/CPRA) to APIs, developer access, and platform ecosystems.
- Proven ability to lead cross-functional initiatives involving product, engineering, legal, security, and operations in fast-moving environments.
- Analytical, systems-oriented mindset with the ability to spot structural risk early and design scalable mitigations.
- Clear communicator who can translate between legal requirements, technical implementation, and product strategy.
Compensation
The Redwood City, CA base pay range for a new hire in this role is $169,000 - $211,000. New hires are typically hired into the lower portion of the range, enabling employee growth in the range over time. Actual placement in range is based on job-related skills and experience, as evaluated throughout the interview process.
Better Together
As we grow, we’re excited to strengthen in-person connections and cultivate a collaborative, team-oriented environment. This role is a hybrid position requiring you to be onsite for at least 60% of the working month, approximately 3 days a week. Specific in-office days are determined at the hiring manager's discretion and will be communicated during the interview process.
Benefits for the Whole You
We’re thankful to have an incredible team behind our work. To honor their commitment, we offer a wide range of benefits to support the people who make all we do possible.
- A generous employer match on employee 401(k) contributions to support planning for the future.
- Paid time off to volunteer at an organization of your choice.
- Funding for select family-forming benefits.
- Relocation support for employees who need assistance moving.
If you’re interested in a role but your previous experience doesn’t perfectly align with each qualification in the job description, we still encourage you to apply as you may be the perfect fit for this or another role.