Senior Data Engineer - DBA (R13184)

Remote-MX

ABOUT OPORTUN

Oportun (Nasdaq: OPRT) is a mission-driven fintech that puts its 2.0 million members' financial goals within reach. With intelligent borrowing, savings, and budgeting capabilities, Oportun empowers members with the confidence to build a better financial future. Since inception, Oportun has provided more than $16.6 billion in responsible and affordable credit, saved its members more than $2.4 billion in interest and fees, and helped its members save an average of more than $1,800 annually. Oportun has been certified as a Community Development Financial Institution (CDFI) since 2009.

 

WORKING AT OPORTUN

Working at Oportun means enjoying a differentiated experience of being part of a team that fosters a diverse, equitable and inclusive culture where we all feel a sense of belonging and are encouraged to share our perspectives. This inclusive culture is directly connected to our organization's performance and ability to fulfill our mission of delivering affordable credit to those left out of the financial mainstream. We celebrate and nurture our inclusive culture through our employee resource groups.

POSITION SUMMARY

As a Senior Data Engineer at Oportun, you will be a key member of our team, responsible for designing, developing, and maintaining the sophisticated software and data platforms that fulfill the engineering group's charter. Your mastery of a technical domain enables you to take on business problems and solve them with technical solutions. With your depth of expertise and leadership abilities, you will actively contribute to architectural decisions, mentor junior engineers, and collaborate closely with cross-functional teams to deliver high-quality, scalable software solutions that advance our impact in the market. In this role you will have the opportunity to lead the technology effort for large initiatives (cross-functional, multi-month projects), from technical requirements gathering through successful delivery of the product.

 

RESPONSIBILITIES

  • Database Design & Architecture
    • Design, implement, and maintain optimal database schemas for relational (MariaDB) and NoSQL (MongoDB) databases.
    • Participate in data modeling efforts to support analytics in Databricks.
  • Performance Monitoring & Tuning
    • Monitor and tune all database platforms to ensure optimal performance.
    • Use profiling tools (e.g., EXPLAIN, query plans, system logs) to identify and resolve bottlenecks (see the EXPLAIN sketch after this list).
  • Security & Compliance
    • Implement access controls, encryption, and database hardening techniques.
    • Manage user roles and privileges across MariaDB, MongoDB, and Databricks.
    • Ensure compliance with data governance policies (e.g., GDPR, HIPAA).
  • Backup & Recovery
    • Implement and maintain backup/recovery solutions for all database platforms.
    • Periodically test restore procedures for business continuity.
  • Data Integration & ETL Support
    • Support and optimize ETL pipelines between MongoDB, MariaDB, and Databricks (a PySpark integration sketch follows this list).
    • Work with data engineers to integrate data sources for analytics.
  • Monitoring & Incident Response
    • Set up and monitor database alerts.
    • Troubleshoot incidents, resolve outages, and perform root cause analysis.
  • MariaDB-Specific Responsibilities
    • Administer MariaDB instances (standalone, replication, Galera Cluster).
    • Optimize SQL queries and indexing strategies.
    • Maintain stored procedures, functions, and triggers.
    • Manage schema migrations and upgrades with minimal downtime.
    • Ensure ACID compliance and transaction management.
  • MongoDB-Specific Responsibilities
    • Manage replica sets and sharded clusters.
    • Perform capacity planning for large document collections.
    • Tune document models and access patterns for performance.
    • Set up and monitor MongoDB Ops Manager / Atlas (if used).
    • Automate backup and archival strategies for NoSQL data.
  • Databricks-Specific Responsibilities
    • Manage Databricks workspace permissions and clusters.
    • Collaborate with data engineers to optimize Spark jobs and Delta Lake usage.
    • Ensure proper data ingestion, storage, and transformation in Databricks.
    • Support CI/CD deployment of notebooks and jobs.
    • Integrate Databricks with external data sources (MariaDB, MongoDB, S3, ADLS).
  • Collaboration & Documentation
    • Collaborate with developers, data scientists, and DevOps engineers.
    • Maintain up-to-date documentation on data architecture, procedures, and standards.
    • Provide training or onboarding support for other teams on database tools.
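
To give a concrete flavor of the profiling work above, the short Python sketch below runs EXPLAIN against a MariaDB query. It is illustrative only: the host, credentials, schema, and table are hypothetical placeholders, not Oportun systems.

    # Minimal sketch: inspect a query plan on MariaDB (compatible with the
    # mysql-connector-python driver). All connection details are hypothetical.
    import os
    import mysql.connector

    conn = mysql.connector.connect(
        host="mariadb.example.internal",      # placeholder host
        user="dba",
        password=os.environ["DB_PASSWORD"],   # never hard-code credentials
        database="members",                   # placeholder schema
    )
    cur = conn.cursor()
    # EXPLAIN reports the chosen index, join order, and estimated rows scanned.
    cur.execute(
        "EXPLAIN SELECT member_id, balance FROM accounts WHERE status = %s",
        ("active",),
    )
    for row in cur.fetchall():                # one row per table in the plan
        print(row)
    cur.close()
    conn.close()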
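
Likewise, a cross-store pipeline of the kind described under Data Integration & ETL Support might look like the PySpark sketch below, written for a Databricks notebook (where dbutils is available). The connector options follow the MariaDB JDBC driver and the MongoDB Spark Connector (v10+); all hosts, names, and secret scopes are hypothetical.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

    # Relational side: read a MariaDB table over JDBC
    # (the MariaDB Connector/J driver must be installed on the cluster).
    accounts = (
        spark.read.format("jdbc")
        .option("url", "jdbc:mariadb://mariadb.example.internal:3306/members")
        .option("dbtable", "accounts")
        .option("user", "etl")
        .option("password", dbutils.secrets.get("db", "mariadb-etl"))
        .load()
    )

    # Document side: read a MongoDB collection via the Spark connector.
    events = (
        spark.read.format("mongodb")
        .option("connection.uri", "mongodb://mongo.example.internal:27017")
        .option("database", "app")
        .option("collection", "events")
        .load()
    )

    # Join and land the result as a Delta table for downstream analytics.
    (
        accounts.join(events, "member_id")
        .write.format("delta")
        .mode("overwrite")
        .saveAsTable("analytics.member_events")
    )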

 

REQUIREMENTS

  • Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
  • 5+ years of experience in data engineering, with a focus on data architecture, ETL, and database management.
  • Proficiency in programming languages such as Python/PySpark and Java or Scala.
  • Expertise in big data technologies such as Hadoop, Spark, Kafka, etc.
  • In-depth knowledge of SQL and experience with various database technologies (e.g., PostgreSQL, MariaDB/MySQL, NoSQL databases).
  • Experience and expertise in building complex end-to-end data pipelines.
  • Experience with orchestration and job scheduling using CI/CD tools such as Jenkins, Airflow, or Databricks.
  • Ability to lead ETL migrations from Talend to Databricks (PySpark).
  • Demonstrated ability to build reusable utilities and tools that accelerate complex business processes.
  • Ability to work in an Agile environment (Scrum, Lean, Kanban, etc.).
  • Ability to mentor junior team members.
  • Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and their data services (e.g., AWS Redshift, S3, Azure SQL Data Warehouse).
  • Strong leadership, problem-solving, and decision-making skills.
  • Excellent communication and collaboration abilities.
  • Familiarity or certification in Databricks is a plus.

Preferred Skills and Tools

  • MariaDB Tools: mysqldump, mysqladmin, Percona Toolkit
  • MongoDB Tools: mongodump, mongotop, mongoexport, Atlas UI (a backup wrapper sketch using mysqldump and mongodump follows this list)
  • Databricks Tools: Jobs UI, Databricks CLI, REST API, SQL Analytics
  • Scripting: Bash, Python, PowerShell
  • Monitoring: Prometheus, Grafana, CloudWatch, DataDog
  • Version Control & CI/CD: Git, Jenkins, Terraform (for infrastructure-as-code)
  • Preferred Cloud provider: AWS
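
As one example, a thin Python wrapper that automates nightly dumps with two of the tools above might look like the sketch below. Hosts and paths are placeholders; credentials are assumed to come from an option file (e.g., ~/.my.cnf), and restores should be exercised regularly with mysql and mongorestore, per the backup responsibilities above.

    import datetime
    import pathlib
    import subprocess

    stamp = datetime.date.today().isoformat()
    backup_dir = pathlib.Path("/backups") / stamp   # placeholder mount point
    backup_dir.mkdir(parents=True, exist_ok=True)

    # Logical MariaDB backup; --single-transaction takes a consistent
    # InnoDB snapshot without locking tables.
    subprocess.run(
        ["mysqldump", "--single-transaction", "--all-databases",
         "--host", "mariadb.example.internal", "--user", "backup",
         f"--result-file={backup_dir / 'mariadb.sql'}"],
        check=True,
    )

    # BSON dump of MongoDB; pair with mongorestore in periodic recovery tests.
    subprocess.run(
        ["mongodump", "--uri=mongodb://mongo.example.internal:27017",
         f"--out={backup_dir / 'mongo'}"],
        check=True,
    )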

 

#LI-REMOTE

#LI-GK1

 

We are proud to be an Equal Opportunity Employer and consider all qualified applicants for employment opportunities without regard to race, age, color, religion, gender, national origin, disability, sexual orientation, veteran status or any other category protected by the laws or regulations in the locations where we operate.

 

California applicants can find a copy of Oportun's CCPA Notice here: https://oportun.com/privacy/california-privacy-notice/.

 

We will never request personal identifiable information (bank, credit card, etc.) before you are hired. We do not charge you for pre-employment fees such as background checks, training, or equipment. If you think you have been a victim of fraud by someone posing as us, please report your experience to the FBI’s Internet Crime Complaint Center (IC3).
