
Principal Data Analytics Engineer (Mex - Arg)

About Stori

Stori is a fast-growing, venture-backed financial technology company on a mission to democratize credit access for 400 million underbanked LatAm consumers. Stori currently operates in Mexico and has a global team with offices in Arlington, Virginia; Mexico City; and Asia. We have quickly made our mark as one of the top digital banks in Mexico, with more than two million applicants for our credit card product since launch.

Stori is one of the top-funded startups in the region, with US$250 million raised to date. We are backed by top global venture capital funds such as GGV Capital, GIC, Lightspeed Venture Partners, General Catalyst, Goodwater Capital, Mexico’s Tresalia Capital, Vision Plus Capital, BAI Capital, and Source Code Capital, which have successfully invested in startups such as Affirm, Airbnb, Alibaba, Stripe, and TikTok.

Stori has a standout founding team among fintechs, with 100+ years of combined experience in consumer finance, banking, and technology across Mastercard, Intel, Capital One, Morgan Stanley, GE Capital, and HSBC in the U.S., Mexico, and Asia. The team has launched and managed many multi-million-customer credit card products globally, bringing a breadth of experience and knowledge to our team.

We welcome diversity of background, experience, and thinking. Storians are passionate about our mission and take pride in the products we build. Our culture thrives on a flat structure and an inclusive environment where all of our employees can be their authentic selves, with boundless opportunities for professional growth.

The Role

Main responsibilities:

  1. As an IC and tech lead, establish, optimize, and share best practices for Airflow, DBT, Redshift/SQL, and Spark across the company, and recommend new tools that will improve delivery, performance, and resilience (a minimal sketch of this stack follows this list).
  2. Safeguard data warehouse performance by applying best practices to existing models, and spread those practices both one-on-one and through group knowledge-sharing sessions.
  3. Build and maintain data pipelines using Redshift, DynamoDB, Athena, Glue, Kinesis, SQS, Firehose, CDK, Step Functions, and similar services; these pipelines process gigabytes of data in both batch and real time.
  4. Facilitate design sessions, drive design decisions, and lead code reviews. Be comfortable challenging assumptions to improve existing solutions and ensure the team builds the best data product possible.
  5. Act as a “tech lead” on both internal and cross-functional projects: work with the business team and engineering leadership to shape development roadmaps and plan future projects, prioritize and plan the team’s work, communicate progress to stakeholders, and unblock team members.
  6. Attract and nurture talent, mentor, and develop a world-class engineering team.
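
For illustration only, a minimal sketch of the Airflow + DBT + Redshift orchestration pattern named in item 1 above; the DAG id, schedule, script path, and dbt project directory are hypothetical placeholders, not a description of Stori's actual setup:

from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="card_events_hourly",        # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@hourly",
    catchup=False,
) as dag:
    # Extract/load step: land raw events in the warehouse (sketched as a script call).
    load_raw = BashOperator(
        task_id="load_raw_events",
        bash_command="python /opt/pipelines/load_raw_events.py",  # hypothetical script
    )

    # Transform step: run the DBT project against the Redshift warehouse.
    run_dbt = BashOperator(
        task_id="run_dbt_models",
        bash_command="dbt run --project-dir /opt/dbt --target prod",
    )

    # DBT models run only after fresh data has landed.
    load_raw >> run_dbt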

Hybrid Role 

What we are looking for:
  • Experience:
    • 8+ years of experience in software engineering with a strong focus on data
    • 5+ years of experience building and managing high-performance engineering teams within an Agile framework
    • 5+ years of experience building batch and real-time data pipelines that extract, transform, and load data into analytical data warehouses or data lakes
    • Creative, resourceful, and enthusiastic about seeking new solutions to problems and opportunities

  • Skills and attitudes:
    • Strong proficiency with Python
    • Strong proficiency in SQL, Git, Airflow, DBT, database optimization, and Docker/ECS.
    • Strong proficiency with a deployment framework (CDK, CloudFormation, Terraform, Serverless Framework, etc.).
    • Familiarity with AWS serverless technologies: S3, Lambda, Redshift, DynamoDB, Athena, Glue, EMR, Kinesis, SQS, Firehose, Step Functions.
    • Able to build an end-to-end pipeline in Spark (see the PySpark sketch following this list).
    • Experience with CI/CD (continuous integration and continuous delivery), automated testing, and automated deployment.

  • Bonus Points:
    • Experience building and maintaining data warehouses/data lakes, or large-scale real-time/batch customer-feature data pipelines and microservices
    • Experience with platforms such as Databricks (Spark-based) or Snowflake
    • Node.js knowledge
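
As a concrete reference for the Spark bullet above, here is a minimal end-to-end batch pipeline sketch in PySpark; the bucket paths, column names, and aggregation are hypothetical, chosen only to show the extract-transform-load shape:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-spend-batch").getOrCreate()

# Extract: raw card events landed in S3 (hypothetical bucket and layout).
events = spark.read.json("s3://example-raw-bucket/card_events/")

# Transform: keep approved transactions and aggregate daily spend per user.
daily_spend = (
    events
    .filter(F.col("status") == "approved")
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("user_id", "event_date")
    .agg(
        F.sum("amount").alias("total_spend"),
        F.count("*").alias("txn_count"),
    )
)

# Load: write partitioned Parquet for downstream Athena/Redshift Spectrum reads.
(
    daily_spend.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-analytics-bucket/daily_spend/")
)

spark.stop()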

What we offer

  • Make a positive impact on the lives of our customers via financial inclusion
  • Professional development opportunities 
  • International exposure & work experience
  • Company swag
  • Legally required benefits
