Software Engineer
Haize Labs haizes LLMs at scale. We are the robustness layer eliminating the risk of using language models in any setting. To prevent these systems from failing, we preemptively discover all the ways in which they can fail and continuously eliminate them in deployment.
We are looking for Software Engineers to help us develop fundamental safety tooling for LLMs. Your work will set the standard not only for research, but also for how LLMs are tested, verified, and applied across customers, companies, and industries. You will directly influence how the world responsibly uses LLMs.
Responsibilities
- Work directly with customers to adapt our core R&D for different domains.
- Build out core infra, cloud tooling, and UX around our algorithms.
- Deliver a delightful human-in-the-loop product experience.
- Ship tools that are used by developers across the world.
Qualifications
- Experience with ML in an applied setting.
- Strong open source presence or strong track record of software engineering projects and employment.
- Can ramp up very quickly on understanding our research.
- Love to break things, i.e. have a “stick it to the Man” attitude.
Annual Salary
$150,000 – $600,000 USD
Logistics
Location policy: 6 days a week, in person, in NYC.
US visa sponsorship: If you are exceptional, we will sponsor your visa.
We encourage you to apply even if you do not believe you meet every single qualification. We're open to considering a wide range of perspectives and experiences, and would love to chat with you.
Compensation and Benefits: Haize Labs provides generous salary, equity, and benefits.
What sets us apart
We are not here to write GPT wrappers or get rich quick off the AI bubble. We're here to work on the hardest, most fundamental research problem in AI: making it reliable and robust. Come here to push yourself, learn fast, experience excellence, and kickstart your life's work. We value our team above all else, and firmly believe that greatness begets greatness.
Since starting 6 months ago, we’ve developed a suite of safety tools that is being used at places like Anthropic, AI21, Scale AI, and several other foundation model providers. We’ve been fortunate enough to be backed by the founders of Cognition, Hugging Face, Weights and Biases, Nous, Etched, Okta, and Replit, as well as rockstar AI and security executives from Stripe, Anduril, and Netflix. We’re lucky to be advised by professors from CMU and Harvard.
Our founding team has been working together for quite some time. We are turning down our Stanford PhD offers to send it on Haize Labs, have gotten into Y Combinator and other accelerators multiple times (and turned them down multiple times), were the first Student Researcher at Allen AI, co-led R&D at a Series A NLP startup, wrote ML-guided matchmaking services for 50,000+ students, built an educational nonprofit supporting 60 countries, and did some other cool things along the way.
Apply for this job