
Senior Palantir Engineer

Boston, MA

The Role

Fusion is a global electronics distributor building a modern data platform on top of a 20-year-old legacy system. Palantir Foundry is a core part of that stack. We use it to deliver unified views of our customers, products, and operations, and we are leaning hard into AIP and agent-driven workflows to compress delivery cycles. We are looking for a senior engineer who can work hands-on in Foundry: building pipelines, ontology models, agents, and applications alongside our external Palantir consultants today, and progressively taking ownership as the platform matures.

This is an execution-focused role with real autonomy. You will design and build solutions, work directly with business stakeholders, and have a voice in how the platform evolves.

What You’ll Do

  • Build and maintain data pipelines: Design, build, and optimize batch and streaming pipelines in Foundry (Pipeline Builder with AI assist, Code Repositories, Code Workbook) that ingest data from internal and external source systems.
  • Develop ontology models: Create and maintain ontology objects, relationships, link types, functions, and actions that represent business domains and power downstream applications, agents, and AIP workflows.
  • Build AI-native experiences with AIP: Use AIP Logic to compose AI-driven functions, AIP Agent Studio to stand up domain agents, and AIP Threads or AIP Assist to embed conversational interfaces into Workshop apps. Use AIP Evals to validate prompt and agent behavior before promotion to production.
  • Leverage AIP Analyst and FDE AI for delivery acceleration: Use AIP Analyst for ad hoc analytical work directly against the ontology, and lean on FDE AI and Solution Designer to scaffold ontology objects, pipelines, and starter applications. Refine the AI-generated output, harden it for production, and bring it under proper change control.
  • Deliver Foundry applications: Build and iterate on Workshop, Quiver, and Contour applications that give sales, operations, and leadership actionable views of the business, including embedded agentic actions and write-back through Ontology Actions.
  • Work alongside external consultants: Collaborate closely with our Palantir implementation partners to understand existing designs, validate data flows, and progressively take on more of the day-to-day Foundry work.
  • Connect Foundry to the broader data platform: As the data modernization team builds out medallion-layer domain stores and dbt-based pipelines on Azure, help align Foundry’s ontology and ingestion paths so the platform can transition from legacy sources to the new architecture cleanly.
  • Integrate with external systems: Build and maintain OSDK-based integrations and APIs that connect Foundry to internal services and third-party platforms.
  • Engage stakeholders directly: Gather requirements from business teams, demo solutions, train end users, and incorporate feedback into iterations.
  • Ensure data quality and observability: Implement validation, lineage tracking, health checks, and monitoring within Foundry pipelines. Build automated data quality checks, including AIP-assisted ones, to keep data accurate and reliable.
  • Apply AI tooling to your own workflow: Use AIP and AI-assisted development practices to accelerate pipeline work, code review, and documentation.

What You Bring

Data Engineering Foundations

These are the bedrock skills we expect every senior data engineer on this team to have, regardless of platform. Foundry is the surface; these are the muscles underneath.

  • 7+ years building production data pipelines in modern stacks.
  • Strong SQL, including window functions, CTEs, set operations, and query optimization on large datasets. You can read an execution plan and explain the cost.
  • Solid grasp of dimensional modeling (Kimball, star schema), Data Vault, or equivalent. You can defend a modeling choice with reasoning, not dogma.
  • Experience designing for slowly changing dimensions, idempotent loads, late arriving data, backfills, and incremental processing.
  • Proficiency in Python, including PySpark and pandas for transformation work.
  • Working knowledge of the Spark execution model: partitioning, shuffles, skew, broadcast joins, and tuning.
  • Experience with ETL and ELT patterns, CDC pipelines, and workflow orchestration.
  • Familiarity with modern transformation frameworks such as dbt, including tests, documentation, snapshots, and incremental materializations.
  • Understanding of columnar storage formats (Parquet), open table formats (Delta, Iceberg), and partitioning strategies.
  • Practical experience implementing data quality frameworks, data contracts, and SLAs between producers and consumers.
  • Comfort with Git workflows, code review, branching strategies, and CI/CD for data platforms.
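To make a couple of these expectations concrete (window functions, idempotent loads, late-arriving data), here is a minimal sketch of the kind of pattern we mean. The table and column names are illustrative only, and it uses plain sqlite3 so it runs anywhere, not our actual stack:

```python
import sqlite3

# Illustrative staging table: duplicate and out-of-order deliveries are expected.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE staging (order_id TEXT, status TEXT, updated_at TEXT)")
cur.executemany(
    "INSERT INTO staging VALUES (?, ?, ?)",
    [
        ("A1", "placed",  "2024-01-01"),
        ("A1", "shipped", "2024-01-03"),
        ("A1", "placed",  "2024-01-01"),  # duplicate delivery: must not change the result
        ("A2", "placed",  "2024-01-02"),
    ],
)

# ROW_NUMBER() keeps the latest record per key, so re-running the load or
# receiving records out of order always converges to the same answer.
cur.execute("""
    SELECT order_id, status FROM (
        SELECT order_id, status,
               ROW_NUMBER() OVER (
                   PARTITION BY order_id ORDER BY updated_at DESC
               ) AS rn
        FROM staging
    )
    WHERE rn = 1
    ORDER BY order_id
""")
latest = cur.fetchall()
print(latest)  # [('A1', 'shipped'), ('A2', 'placed')]
```

The same idea scales up as a PySpark window or a dbt incremental model; what matters is that the load is deterministic no matter how many times or in what order the source data arrives.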

Palantir Foundry and AIP

  • 2+ years of hands-on Palantir Foundry experience, with delivered work you can describe in detail.
  • Strong knowledge of core Foundry capabilities: Ontology Manager, Ontology Functions and Actions, Pipeline Builder, Code Repositories, Code Workbook, Workshop, Builds (branching, resource allocation, rollbacks), and platform security.
  • Experience with Foundry analytical and integration tools: Quiver, Contour, OSDK, and Marketplace products.
  • Hands-on experience with AIP for augmenting data workflows: AIP Logic, AIP Assist and Threads, and AIP-assisted Pipeline Builder.
  • Familiarity with newer AIP surfaces is a strong plus: AIP Agent Studio, AIP Analyst, FDE AI and Solution Designer, AIP Evals, and Automate.
  • Understanding of how to evaluate, guardrail, and version AI-driven functions and agents in a production context.

Cloud, Architecture, and Stakeholder Skills

  • Proficiency with Azure data services, or equivalent depth in AWS or GCP with willingness to work in Azure.
  • Solid understanding of data modeling across relational, analytical, and document paradigms.
  • Ability to translate business requirements into technical designs and explain trade-offs to non-technical stakeholders.
  • Self directed work style. You identify what needs doing and get it done without waiting for direction.

Nice to Have

  • Production experience with dbt on Azure (Synapse, Fabric, or Databricks).
  • Experience with medallion architecture (Bronze, Silver, Gold) or lakehouse patterns on Databricks, Snowflake, or equivalent.
  • Experience evaluating LLM-powered features: prompt design, eval harnesses, hallucination guardrails, and cost or latency tuning.
  • Experience building or operating real-time pricing, recommendation, or scoring engines.
  • Familiarity with electronics distribution, supply chain, or ERP data, particularly Dynamics GP (Great Plains).
  • Background in domain driven design or microservices architecture.
  • TypeScript or C# for OSDK consumers and downstream services.
  • Exposure to CRM, payment, or logistics API integrations.

Why Fusion

  • Greenfield meets legacy: You’ll help build a modern data platform while keeping a 20-year-old business running. Real engineering problems, not toy projects.
  • Real impact, small team: A lean team where your work is visible and your judgment matters. No layers of approval.
  • AI-first engineering culture: We use AIP and AI tooling aggressively. One engineer with the right tools does the work of three.
  • Global scale: Fusion operates across the US, Singapore, and Hong Kong. The platform serves users worldwide.
