
Applied Research and Growth
About Goodfire
Behind our name: Like fire, AI holds the potential for both devastating harm and immense benefit. Our ancestors' mastery of fire enabled them to cook food, smelt metals, and eventually launch rockets into space; AI stands as humanity's most profound innovation since that first controlled flame. Our goal is to tame this new fire, enabling a safe transition into a post-AGI world.
Goodfire is an AI interpretability research lab focused on understanding and intentionally designing advanced AI systems. We believe that advances in interpretability will unlock the next frontier of safe and powerful foundation models.
At Goodfire, we're building Neural Programming Interfaces (NPIs) to transform how AI models are developed, just as Application Programming Interfaces (APIs) transformed software development. NPIs let developers reach into the mind of an AI model, read out what it is thinking about, and design its behavior in precise, targeted ways.
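To make the NPI idea concrete, here is a minimal, hypothetical sketch of what programming against such an interface could look like. Every name in it (SteerableModel, Feature, find_feature, steer, generate) is an illustrative placeholder for the concept described above, not Goodfire's actual Ember API.

```python
# Hypothetical sketch of the Neural Programming Interface concept.
# All names here are illustrative placeholders, not Goodfire's Ember API.
from dataclasses import dataclass, field


@dataclass
class Feature:
    """A human-interpretable direction inside the model (e.g. 'humor')."""
    name: str
    strength: float = 0.0


@dataclass
class SteerableModel:
    """Toy stand-in for a model whose internals can be read and edited."""
    features: dict = field(default_factory=dict)

    def find_feature(self, name: str) -> Feature:
        # Look up (or register) an interpretable feature by name.
        return self.features.setdefault(name, Feature(name))

    def steer(self, feature: Feature, strength: float) -> None:
        # Dial a feature up or down to change behavior in a targeted way.
        feature.strength = strength

    def generate(self, prompt: str) -> str:
        # A real NPI would run the edited model; here we just report the edits.
        edits = ", ".join(f"{f.name}={f.strength:+.1f}" for f in self.features.values())
        return f"[completion for {prompt!r} with edits: {edits or 'none'}]"


model = SteerableModel()
humor = model.find_feature("humor")
model.steer(humor, +0.8)  # make the model noticeably funnier
print(model.generate("Explain interpretability to a five-year-old."))
```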
We’re backed by Lightspeed Venture Partners, Menlo Ventures, NFDG’s AI Grant, South Park Commons, Work-Bench, and other leading investors.
Working at Goodfire
Our team brings together AI interpretability experts and experienced startup operators from organizations like OpenAI and DeepMind, united by the belief that interpretability is essential to advancing AI development.
We're a public benefit corporation based in San Francisco. All roles are in-person, five days a week, at our Telegraph Hill office.
The role:
We are looking for our first researcher-facing growth hire to join our team. You should be a creative researcher who can build tools to showcase new capabilities unlocked by Ember, our mechanistic interpretability API. You will turn our research into accessible content, demos, and community outreach. You’ll shape how the world learns about and engages with our company’s mission, from driving researcher interest to forming new research collaborations.
Core responsibilities:
- Build engaging, interactive content on top of Ember and other tools to expand how people think about mechanistic interpretability (e.g. show how Ember can improve performance on jailbreak benchmarks or reduce hallucinations)
- Help our research team make their research more accessible and engaging
- Manage our social media, blog, and research outreach, making complex interpretability concepts easy to understand
- Engage and build our community
Who you are:
Goodfire is looking for experienced individuals who share our deep commitment to making interpretability accessible. We care about building a team that embodies our values:
Put mission and team first
All we do is in service of our mission. We trust each other, deeply care about the success of the organization, and choose to put our team above ourselves.
Improve constantly
We are constantly looking to improve every piece of the business. We proactively critique ourselves and others in a kind and thoughtful way that translates to practical improvements in the organization. We are pragmatic and consistently implement the obvious fixes that work.
Take ownership and initiative
There are no bystanders here. We proactively identify problems and take full responsibility for getting a strong result. We are self-driven, own our mistakes, and feel deep responsibility for what we're building.
Action today
We have a small amount of time to do something incredibly hard and meaningful. The pace and intensity of the organization is high. If we can take action today or tomorrow, we will choose to do it today.
If you share our values and have at least two years of relevant experience, we encourage you to apply and join us in shaping the future of how we design AI systems.
What we are looking for:
- Strong understanding of the field of mechanistic interpretability
- Incredibly creative; pushes the boundaries of tools and technology
- Has published AI research
- Excellent writer
- Can communicate complex concepts simply
Preferred qualifications:
- Experience working in a fast-paced, early-stage startup environment
- Good understanding of vibes on X
- Experience with interpretability techniques and tooling for AI models
- Experience working with open source AI models
- Contributions to open-source ML projects or libraries
This role offers a market-competitive salary, equity, and benefits. More importantly, you'll have the opportunity to work on groundbreaking technology with a world-class team dedicated to ensuring a safe and beneficial future for humanity.
This role reports to our CEO.
Apply for this job