ML Engineer II - Bird's-Eye View (BEV)
Meet the Team:
As a Machine Learning Engineer II – Scene Model, you will help develop and deploy machine learning models that enable autonomous trucks to understand their surrounding environment. Our team focuses on building multi-modal perception systems in bird’s-eye-view (BEV) that fuse information from LiDAR, cameras, radar, and map inputs to produce a unified representation of the scene.
Working closely with teams across perception, prediction, planning, and platform infrastructure, you will contribute to models that detect objects, understand road structure, and generate spatiotemporal representations used by downstream autonomy systems.
This role focuses on developing and improving deep learning models, training pipelines, and data workflows that power scene understanding within the autonomy stack.
What You’ll Do
- Develop and train machine learning models for scene understanding, including tasks such as object detection, road and lane prediction, semantic voxel grid classification, occupancy prediction, and map understanding in bird’s-eye-view (BEV) space.
- Implement production-quality ML code to support model training, evaluation, and inference within the perception stack.
- Analyze model performance, identify failure modes, and propose improvements to increase robustness across diverse driving environments and conditions.
- Build and refine models that identify and interpret objects, lanes, obstacles, and weather conditions in the driving environment.
- Apply data science techniques to analyze model performance, understand data distributions, and identify corner cases.
- Contribute to multi-modal perception systems, combining signals from LiDAR, cameras, radar, and map sources into unified scene representations.
- Work with large-scale datasets from simulation, fleet logs, and on-vehicle data to curate training data and improve model performance.
- Collaborate with data, deployment, and infrastructure teams to evaluate perception models and ensure reliable performance in real-world driving scenarios.
- Help integrate perception models into the autonomy stack and testing pipelines, enabling faster experimentation and iteration.
- Contribute to tooling and infrastructure that improves training efficiency, experiment tracking, and reproducibility.
- Participate in technical discussions around model architectures, sensor fusion strategies, and training approaches within the team.
What You’ll Need to Succeed
- Bachelor’s degree in Computer Science, Robotics, Electrical Engineering, Machine Learning, or a related technical field with 4+ years of industry experience, or a Master’s degree with 2+ years of experience.
- Strong understanding of computer vision and machine learning fundamentals.
- Experience applying machine learning techniques such as imitation learning, reinforcement learning, or sequence modeling to robotics, autonomous systems, or complex control environments.
- Strong programming skills in Python and PyTorch, with experience writing production-quality ML code.
- Experience training and evaluating machine learning models using large datasets and scalable compute environments.
- Understanding of ML architectures used in autonomy systems, such as transformers, graph neural networks, or sequence models.
- Experience debugging model behavior, analyzing performance metrics, and iterating on training pipelines.
- Ability to collaborate with cross-functional teams to integrate ML models into larger software systems.
- Strong written and verbal technical communication skills.
- A positive team-player mindset.
Bonus Points!
- PhD in machine learning or data science.
- Experience working in autonomous driving, robotics, or simulation-based training environments.
- Experience with distributed training frameworks or large-scale ML infrastructure (e.g., Ray, Anyscale).
- Experience working with large-scale behavior datasets.
- Familiarity with vehicle dynamics, motion planning, or multi-agent decision-making systems.
- Experience deploying ML models into production or real-world robotics systems.
- Experience with multi-modal sensor fusion, including LiDAR, cameras, radar, or map inputs.
- Experience working with BEV representations, occupancy grids, or 3D scene representations.