
Software Engineer - Perception Algorithm
We are seeking a highly skilled Software Engineer – Perception Algorithm to develop and optimize sensor fusion, localization, occupancy grid fusion, and drivable space estimation algorithms for autonomous driving and ADAS. This role requires expertise in camera, radar, and LiDAR fusion, classical and deep learning-based perception, and real-time system implementation. The ideal candidate will have strong coding skills in Python and C++, along with experience in Linux environments, vehicle testing, and tool development to enhance perception system evaluation and debugging.
Role and Responsibilities
- Develop and optimize perception algorithms for Level 2/3 autonomous driving systems using camera, LiDAR, and radar data.
- Implement and enhance vehicle localization using GNSS, IMU, LiDAR, and visual odometry.
- Design and develop occupancy grid fusion techniques for environment modeling and obstacle detection.
- Implement drivable space estimation using classical methods (e.g., Bayesian models, rule-based approaches) and deep learning approaches (e.g., BEV segmentation, Transformer models).
- Develop custom tools for sensor data visualization, debugging, and algorithm evaluation.
- Write high-performance, real-time software for deployment on embedded automotive platforms (e.g., NVIDIA Orin, Xavier).
- Collaborate with cross-functional teams to ensure seamless integration and robust implementation.
- Test, release, and deploy perception algorithms into Lucid production programs.
- Support the validation and verification of perception algorithms using prototype and pre-production vehicles.
- Propose software algorithms to enhance future autonomous driving capabilities.
Required Qualifications
- Proficient in Python and C++ for real-time and high-performance computing.
- Skilled in Linux development, debugging, and system optimization.
- Experience with sensor fusion (e.g., Kalman filters, Bayesian inference).
- Strong knowledge of probabilistic models, SLAM, and Kalman filtering.
- Familiar with deep learning frameworks (TensorFlow, PyTorch).
- Knowledgeable about ROS, middleware frameworks, and real-time constraints.
- Vehicle testing experience, including data collection and algorithm validation.
- Experience developing tools for data visualization, debugging, and automated evaluation.
- Excellent communication and teamwork skills.
- Bachelor’s or Master’s degree in Computer Science, Electrical Engineering, Robotics, or a related field.
- 3+ years of relevant work experience, or a Ph.D. for senior roles.
- Advanced degrees preferred.
Preferred Qualifications
- Experience with multi-modal sensor fusion for automotive applications.
- Hands-on experience with HD maps, BEV-based perception, and occupancy grid mapping.
- Understanding of deep learning architectures, including Transformer models.
- Experience deploying models on NVIDIA Jetson Orin, Xavier, or similar hardware.
- Knowledge of CAN bus, automotive networks, and vehicle interfacing.
Base Pay Range (Annual)
$154,000 - $211,750 USD
By submitting your application, you understand and agree that your personal data will be processed in accordance with our Candidate Privacy Notice. If you are a California resident, please refer to our California Candidate Privacy Notice.