Senior Data Engineer
About Baselayer: Built by a team with experience spanning global banks, Fortune 500 tech companies, fintech unicorns 🦄, and AI research, Baselayer is built by financial institutions, for financial institutions. Founded in 2023 by experienced founders Jonathan Awad and Timothy Hyde, Baselayer has raised $20 million and hit $2 million in ARR faster than any other identity company in history. Today, with more than 2,000 financial institution and government agency customers, Baselayer is revolutionizing the way businesses approach fraud prevention and compliance. 🏆
Check out their press release here → https://baselayerhq.com/press/
About You: You want to learn from the best, get your hands dirty, and put in the work to reach your full potential. You’re not just doing it for the win—you’re doing it because you have something to prove and want to be great. You’re hungry to become an elite data engineer, designing rock-solid infrastructure that powers cutting-edge AI/ML products.
- You have 1–3 years of experience in data engineering, working with Python, SQL, and cloud-native data platforms
- You’ve built and maintained ETL/ELT pipelines, and you know what clean, scalable data architecture looks like
- You’re comfortable with structured and unstructured data, and you thrive on building systems that transform chaos into clarity
- You think in DAGs, love automating things with Airflow or dbt, and sweat the details when it comes to data integrity and reliability
- You’re curious about AI/ML infrastructure, and you want to be close to the action—feeding the models, not just cleaning up after them
- You value ethical data practices, especially when dealing with sensitive information in environments like KYC/KYB or financial services
- You’re a translator between technical and non-technical stakeholders, aligning infrastructure with business outcomes
- Highly feedback-oriented. We believe in radical candor and using feedback to get to the next level
- Proactive, ownership-driven, and unafraid of complexity—especially when there’s no playbook
Responsibilities:
- Pipeline Development: Design, build, and maintain robust, scalable ETL/ELT pipelines that power analytics and ML use cases
- Data Infrastructure: Own the architecture and tooling for storing, processing, and querying large-scale datasets using cloud-based solutions (e.g., Snowflake, BigQuery, Redshift)
- Collaboration: Work closely with data scientists, ML engineers, and product teams to ensure reliable data delivery and feature readiness for modeling
- Monitoring & Quality: Implement rigorous data quality checks, observability tooling, and alerting systems to ensure data integrity across environments
- Data Modeling: Create efficient, reusable data models using tools like dbt, enabling self-service analytics and faster experimentation
- Security & Governance: Partner with security and compliance teams to ensure data pipelines adhere to regulatory standards (e.g., SOC 2, GDPR, KYC/KYB)
- Performance Optimization: Continuously optimize query performance and cost in cloud data warehouses
- Documentation & Communication: Maintain clear documentation and proactively share knowledge across teams
- Innovation & R&D: Stay on the cutting edge of data engineering tools, workflows, and best practices—bringing back what works and leveling up the team
Benefits:
- Hybrid in SF (in-office 3 days/week)
- Flexible PTO
- Healthcare, 401(k)
- Smart, genuine, ambitious team
Start date: April
Salary Range: $135k–$220k + equity (0.05%–0.25%)
Apply for this job