
Member of Technical Staff, Multimodal Post-train/RL
About Hark
Hark is an artificial intelligence company building advanced, personalized intelligence: one that is proactive, multimodal, and capable of interacting with the world through speech, text, vision, and persistent memory.
We're pairing that intelligence with next-generation hardware to create a universal interface between humans and machines. While today's AI largely operates through chat boxes and decade-old devices, Hark is focused on what comes next: agentic systems that interact naturally with people and the real world.
To get there, we're developing multimodal models and next-generation AI hardware together, designed from the ground up as a single, unified interface for a new era of intelligent systems.
About the Role
The Omni team at Hark is building the next generation of AI experiences beyond text, enabling models to understand and generate content across multiple modalities, including text, audio, and vision. Our goal is to create seamless, real-time multimodal intelligence that powers intuitive and immersive user experiences.
As part of the Omni team, you will help drive the development of real-time audio, video, and multimodal models. This includes working across the full stack, from data and modeling to training, serving, and product integration. You will contribute to both pretraining and post-training efforts while collaborating closely with product teams to push the boundaries of model capability and deliver exceptional end-to-end user experiences.
Responsibilities
- Design and implement efficient RL algorithms (e.g., PPO, GRPO, RLHF) and training strategies to achieve state-of-the-art performance in multimodal foundation models.
- Drive research and development agendas to advance real-time multimodal intelligence, including audio and video modeling capabilities.
- Improve data quality for large-scale post-training by developing data filtering, curation, and synthetic data generation techniques.
- Build evaluation frameworks and internal benchmarks to measure model capability, reliability, and user experience across modalities.
- Collaborate closely with product and engineering teams to translate research advances into impactful, real-world AI experiences.
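For context on the RL methods named above: GRPO-style approaches replace a learned value function with group-relative reward normalization, scoring each sampled response against the mean and standard deviation of its own group. A minimal illustrative sketch (not Hark's implementation), assuming a simple scalar-reward setup:

```python
# Illustrative sketch only: GRPO-style group-relative advantages.
# Each sampled response's reward is normalized against its group's
# mean and std instead of being compared to a learned critic's value.

def group_relative_advantages(rewards, eps=1e-8):
    """Normalize a group of scalar rewards to zero mean, unit std."""
    n = len(rewards)
    mean = sum(rewards) / n
    var = sum((r - mean) ** 2 for r in rewards) / n
    std = var ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]

# Four sampled responses to the same prompt, scored by a reward model:
advs = group_relative_advantages([1.0, 0.0, 1.0, 0.0])
print([round(a, 4) for a in advs])  # high-reward samples get positive advantage
```

The resulting advantages then weight the policy-gradient update, so responses scoring above their group's average are reinforced and the rest are discouraged.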
Requirements
- Proven track record of leading research that significantly improves neural network capability through advances in data, modeling, or training.
- Strong experience in data-driven experimentation, systematic analysis, and iterative model debugging.
- Experience building or working with large-scale distributed machine learning systems.
- Strong ownership mindset and willingness to do whatever is necessary to deliver the best end-to-end AI user experience.
Bonus Qualifications
- Background in graphics engines, simulation, or rendering techniques is a plus.
- Experience with multimodal models, speech/audio systems, video models, or real-time AI systems is a strong plus.
Compensation
The pay offered for this position may vary based on several individual factors, including job-related knowledge, skills, and experience. The total compensation package may also include additional components/benefits depending on the specific role. This information will be shared if an employment offer is extended.
Apply for this job