Senior Machine Learning Engineer – Camera Models
About the Company
At Torc, we have always believed that autonomous vehicle technology will transform how we travel, move freight, and do business.
A leader in autonomous driving since 2007, Torc has spent over a decade commercializing our solutions with experienced partners. Now a part of the Daimler family, we are focused solely on developing software for automated trucks to transform how the world moves freight.
Join us and catapult your career with the company that helped pioneer autonomous technology, and the first AV software company with the vision to partner directly with a truck manufacturer.
Meet the Team
As a Senior Machine Learning Engineer – Camera Models, you will develop and deploy machine learning models that power camera-based perception for autonomous trucks. The Camera Models team builds and maintains core vision systems that enable the autonomy stack to understand the environment, detect and localize objects, and estimate scene structure from camera data.
Working closely with teams across perception, data, and infrastructure, you will own the development and improvement of robust, scalable camera-based models that support safe and reliable autonomous driving in real-world freight environments.
This role focuses on owning model development for scoped problem areas, improving system performance through iteration, and delivering production-ready machine learning solutions within the autonomy stack.
What You’ll Do
- Design, develop, and deploy deep learning models for camera-based perception (e.g., object detection, segmentation, depth estimation, scene understanding)
- Own end-to-end model development for scoped areas, from data curation and training to evaluation and deployment
- Write production-quality ML code to support scalable training, evaluation, and inference pipelines
- Analyze model performance across diverse driving scenarios, identify failure modes, and improve robustness and generalization
- Contribute to and improve large-scale training pipelines, including dataset preparation, distributed training, and experiment tracking
- Partner with data teams to improve dataset quality, including labeling strategies and coverage of edge cases
- Collaborate with perception, simulation, and validation teams to evaluate and integrate models into the autonomy stack
- Improve tooling, workflows, and infrastructure to accelerate experimentation and model iteration
- Contribute to model architecture decisions and technical discussions within the team
- Mentor junior engineers on implementation, debugging, and best practices
What You’ll Need to Succeed
- Bachelor’s degree in Computer Science, Robotics, Electrical Engineering, Machine Learning, or a related technical field with 6+ years of industry experience, OR Master’s degree with 3+ years OR PhD with 1+ years of experience
- Experience developing and deploying deep learning models for computer vision or perception systems
- Strong programming skills in Python and PyTorch, with experience writing production-quality ML code
- Experience training and evaluating models using large-scale datasets and distributed compute environments
- Solid understanding of modern deep learning architectures used in perception (e.g., CNNs, transformers, multi-task models)
- Experience debugging model behavior, analyzing performance metrics, and improving model reliability
- Ability to translate ambiguous problems into structured ML solutions and deliver independently
- Experience collaborating cross-functionally to integrate ML models into larger autonomy or robotics systems
Bonus Points:
- Experience in autonomous driving, robotics, or simulation-based ML systems
- Experience with multi-task learning or unified perception architectures
- Experience with large-scale data pipelines, distributed training systems (e.g., Ray), or experiment management frameworks
- Familiarity with camera calibration, geometric reasoning, or 3D perception from images (e.g., BEV, monocular depth, structure-from-motion)
- Experience deploying ML models into production or real-world robotics systems
Work Location: For this position, we are open to hiring in either our Ann Arbor, MI or Blacksburg, VA (U.S.) office in a hybrid capacity. We are also open to hiring remotely within the United States.
Perks of Being a Full-time Torc’r
Torc cares about our team members, and we strive to provide benefits and resources that support their health, work/life balance, and future. Our culture is collaborative, energetic, and team-focused. Torc offers:
- A competitive compensation package that includes a bonus component and stock options
- 100% paid medical, dental, and vision premiums for full-time employees
- 401(k) plan with a 6% employer match
- Flexibility in schedule and generous paid vacation (available immediately after start date)
- Company-wide holiday office closures
- AD&D and Life Insurance
At Torc, we’re committed to building a diverse and inclusive workplace. We celebrate the uniqueness of our Torc’rs and do not discriminate based on race, religion, color, national origin, gender (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender identity, gender expression, age, veteran status, or disabilities.
Even if you don’t meet 100% of the qualifications listed for this opportunity, we encourage you to apply.
Our compensation reflects the cost of labor across several geographic markets. Pay is based on a number of factors and may vary depending on job-related knowledge, skills, and experience. Torc's total compensation package will also include our corporate bonus and stock option plan. Dependent on the position offered, sign-on payments, relocation, and other forms of compensation may be provided as part of a total compensation package, in addition to a full range of medical, financial, and/or other benefits.
Job ID: 102584
US Pay Range
$177,300 - $212,800 USD
