
Senior Data Engineer

Richardson, TX or Nashville, TN

Founded in 1993, MedeAnalytics is an innovation-focused company. Over the past three decades, we have worked tirelessly to reimagine healthcare through the power of data—and helped thousands of organizations achieve their potential along the way. Leveraging state-of-the-art analytics and data activation, MedeAnalytics delivers actionable insights that support payers, providers, employers, and public entities as they navigate the complex healthcare landscape. Using artificial intelligence and machine learning alongside the most advanced data orchestration in the industry, we empower organizations to optimize resource allocation, achieve superior patient outcomes, and meet population health management goals.

And that’s just the beginning.

With a deep understanding of the complex challenges facing the healthcare industry, MedeAnalytics offers a comprehensive suite of solutions to address key areas such as:

  • Population Health Management: Gain insights into patient populations, identify at-risk individuals, and implement targeted interventions to improve health outcomes.
  • Value-Based Care: Optimize care delivery, reduce costs, and enhance patient satisfaction by aligning with value-based care models.
  • Revenue Cycle Management: Streamline revenue cycle processes, improve reimbursement rates, and minimize denials.
  • And more…

MedeAnalytics is committed to delivering cutting-edge technology and exceptional customer service. Our team is passionate about transforming healthcare and making a positive impact on the lives of patients.

About our opportunity:

We are currently seeking a passionate and talented Sr. Data Engineer to join our dynamic team and contribute to our mission. As part of MedeAnalytics, you'll play a pivotal role in maintaining and expanding our robust data infrastructure across diverse cloud environments, driving transformative change in healthcare delivery.

Essential Duties and Responsibilities

Data pipelines:

  • Design, develop, and implement secure and scalable data pipelines utilizing tools such as Airbyte, Fivetran, Python, and custom scripts.
  • Leverage tools like Great Expectations for data quality testing and validation.
  • Design and implement data transformations using dbt models, ensuring data integrity and consistency.
  • Build and orchestrate data workflows using AWS Step Functions, Dagster, or similar tools.
  • Integrate data across multiple cloud platforms and on-premises systems using a multi-cloud data fabric approach.
  • Monitor and maintain existing data pipelines for performance and reliability.

Data management:

  • Work with Snowflake or other MPP databases to store and manage large datasets effectively.
  • Utilize Apache Iceberg for efficient data organization and governance.
  • Leverage AWS Glue for ETL and data lake management tasks.
  • Implement data lineage solutions to track and understand data origin and flow.
  • Establish and maintain a robust data catalog and metadata management system.
  • Experience with agentic AI and text-to-SQL workflows is preferred.

Collaboration and contribution:

  • Collaborate with cross-functional teams, including data analysts, data scientists, and product managers, to understand data needs and translate them into technical solutions.
  • Document code, processes, and data models for clarity and knowledge sharing.
  • Stay up-to-date on the latest trends and technologies in the data engineering landscape.

Essential Skills, Experience and Education

  • 5+ years of experience as a data engineer or in a similar role (required).
  • Proven experience with data pipeline development using tools such as Airbyte and Fivetran, along with Python.
  • Expertise in data transformation with dbt or similar tools.
  • Experience with data quality testing frameworks like Great Expectations.
  • Familiarity with multi-cloud data fabric concepts and tools.
  • Familiarity with cloud platforms like AWS, particularly S3, Step Functions, Glue, and Snowflake.
  • Strong Python programming skills.
  • Strong experience with open table formats such as Iceberg, Delta Lake, and Hudi, and with building a data lakehouse using Starburst.
  • Experience with DuckDB or similar analytical databases.
  • Experience with DevOps practices and tools (a plus).
  • Experience with containerization technologies like Docker (a plus).
  • Experience with data governance and security best practices (a plus).
  • Excellent communication and collaboration skills.
  • Problem-solving and analytical thinking skills.
  • A passion for data and its potential to impact the healthcare industry.

Benefits Include:

  • Comprehensive Medical, Dental, and Vision Coverage – Effective the first of the month following your start date
  • Company-Paid Life & AD&D Insurance, plus Short-Term and Long-Term Disability (STD/LTD)
  • Company-Paid Employee Assistance Program (EAP) premium tier for your wellbeing
  • 401(k) Plan with company match
  • Paid Holidays and Paid Time Off (PTO) Accruals
  • Employee Referral Bonus Program
  • Professional Development Opportunities to support your growth
  • And More!

We are committed to equal employment opportunity regardless of race, color, ethnicity, ancestry, religion, national origin, gender, sex, gender identity or expression, sexual orientation, age, citizenship, marital or parental status, disability, veteran status, or other class protected by applicable law. We are proud to be an equal opportunity workplace.

** At this time, we are unable to provide or transfer sponsorship; candidates must be authorized to work in the country where this position is located and cannot require sponsorship now or in the future.

At MedeAnalytics we deeply value each and every one of our committed, inspired and passionate team members. If you're looking to make an impact doing work that matters, you're in the right place. Help us shape the future of healthcare by joining #TeamMede.

MedeAnalytics does not utilize outside vendors or agencies. Please, no unsolicited phone calls or invites.

Participation in an onsite technical interview is a required part of the hiring process. Candidates must reside within a commutable distance of Nashville, TN, or Richardson, TX.
By submitting this application, you confirm to the Company that you have no contractual commitments or other legal obligations that would prohibit you from performing your duties for the Company such as, but not limited to, any employment agreement, consulting agreement, non-compete agreement, non-solicitation agreement or confidentiality agreement.