
Forward Deployed Engineer
About Rhino Federated Computing
Rhino solves one of the biggest challenges in AI: seamlessly connecting siloed data through federated computing. The Rhino Federated Computing Platform (Rhino FCP) serves as the 'data collaboration tech stack', extending from providing computing resources, to data preparation and discoverability, to model development and monitoring, all in a secure, privacy-preserving environment. To do this, Rhino FCP offers a flexible architecture (multi-cloud and on-prem hardware), end-to-end data management workflows (multimodal data, schema definition, harmonization, and visualization), privacy-enhancing technologies (e.g., differential privacy), and secure deployment of custom code and third-party applications via persistent data pipelines. Rhino is trusted by more than 60 leading organizations worldwide, including 14 of the top 20 on Newsweek's 'Best Smart Hospitals' list and top-20 global biopharma companies, and is extending this foundation into financial services, ecommerce, and beyond.
The company is headquartered in Boston, with an R&D center in Tel Aviv.
About the Role
Key Responsibilities
- Manage a portfolio of customer delivery projects, collaborating in joint teams with our clients and Rhino's Product, Engineering, and Sales organizations.
- Design and implement custom, production-grade federated AI pipelines; code training and inference workflows; and apply AIOps tooling and model performance optimization.
- Guide customers through production-level federated AI engineering decisions and provide coding and configuration support, such as hyperparameter settings, evaluation, experiment management, privacy testing, and AIOps, while ensuring scalability, reliability, and security.
- Onboard and educate users, equipping them with training and tutorials and demonstrating deep technical understanding of the product.
- Provide project management support to help users achieve successful Rhino FCP implementations. Deliver communications such as executive-level updates and detailed iteration plans. Track metrics and provide weekly status reports to demonstrate progress and value.
- Debug technical issues with users and implement effective support process improvements.
- Advise the Product organization on features that would advance use of the platform, drawing on an understanding of customers' use cases.
Required Skills
- Established track record working with production-level or productized AI applications, deep learning frameworks and tools, AI/MLOps, cloud computing, and GPU-accelerated solutions in cloud (including NVIDIA cloud) and edge environments.
- Production-level, hands-on experience designing, developing, fine-tuning, and serving imaging, language, or structured-data AI models.
- Effective with the generative AI ecosystem of tools, such as Llama, GPT, Gemini, LangChain, and agentic frameworks.
- Proficient in Python programming, REST APIs, databases, and containerization and cluster management tools, including Docker and Kubernetes.
- 4+ years of AI experience.
- Take full ownership of problems from start to finish, driving success for both the team and our customers.
- Operate with high energy and adaptability, seamlessly switching contexts and managing multiple initiatives with broad accountability. Prioritize effectively and thrive in ambiguous environments.
Preferred Skills
- Healthcare, Biopharma, Financial Services, Public Sector industry experience.
- Experience in sales and customer-facing roles, such as solutions engineer, sales engineer, solutions architect, or professional services engineer.
- Project Management mindset and experience - delivery excellence, stakeholder engagement, and impact measurement.
- Degree in a quantitative field; computer science, engineering, or biomedical informatics/bioinformatics preferred.
- Location: Boston.