Member of Technical Staff, Multimodal Speech

San Jose

About Hark

Hark is an artificial intelligence company building advanced, personalized intelligence: one that is proactive, multimodal, and capable of interacting with the world through speech, text, vision, and persistent memory.

We're pairing that intelligence with next-generation hardware to create a universal interface between humans and machines. While today's AI largely operates through chat boxes and decade-old devices, Hark is focused on what comes next: agentic systems that interact naturally with people and the real world.

To get there, we're developing multimodal models and next-generation AI hardware together, designed from the ground up as a single, unified interface for a new era of intelligent systems.

About the Role

The Omni team at Hark is building the next generation of AI experiences beyond text, enabling models to understand and generate content across multiple modalities, including text and audio. Our goal is to create seamless, real-time multimodal intelligence that powers intuitive and immersive user experiences.

As part of the Omni team, you will drive the development of advanced speech and audio capabilities within multimodal foundation models. You will work across the full stack—from data and modeling to training, evaluation, and real-time serving—pushing the boundaries of speech intelligence and human-computer interaction.

Responsibilities

  • Drive research and development to advance speech and audio capabilities in multimodal models, including speech recognition, synthesis, and understanding.
  • Develop and improve large-scale speech and audio data pipelines, including data collection, filtering, alignment, and synthetic data generation.
  • Design and implement state-of-the-art models for speech and audio, including end-to-end multimodal architectures and real-time systems.
  • Build evaluation frameworks and internal benchmarks to measure speech quality, latency, robustness, and overall user experience.
  • Optimize models and systems for real-time performance, scalability, and production deployment.
  • Collaborate closely with product and engineering teams to translate research innovations into impactful, user-facing AI experiences.

Requirements

  • Proven track record of advancing speech or audio models through innovations in data, modeling, or training.
  • Strong experience in speech/audio domains such as ASR, TTS, speech-to-speech, or audio foundation models.
  • Experience with large-scale machine learning systems and distributed training.
  • Strong background in data-driven experimentation, systematic evaluation, and model iteration.
  • Strong ownership mindset and ability to drive end-to-end impact from research to production.

Bonus Qualifications

  • Familiarity with signal processing, acoustics, or audio representation learning is a plus.
  • Experience with multimodal systems (speech + text, speech + vision) or real-time AI systems is a strong plus.

The pay offered for this position may vary based on several individual factors, including job-related knowledge, skills, and experience. The total compensation package may also include additional components/benefits depending on the specific role. This information will be shared if an employment offer is extended.
