
Data Engineer

Dubai, United Arab Emirates

Founded by Michael Lahyani in 2005 as a magazine (Al Bab World), Property Finder is today a single technology platform and brand operating across multiple countries in the MENA region. We offer the most advanced tools and a best-in-class user experience for homeseekers, real estate brokers, and developers. Property Finder's most recent valuation places us among the Middle East's emerging unicorns, affirming our growth-oriented identity.

Over the years, we've expanded our operations to Bahrain, Egypt, Qatar, and Saudi Arabia, and secured a strategic shareholding in Hepsiemlak, the leading property portal in Turkey. With more than 600 dedicated people in six regional offices, we facilitate more than 14 million monthly visits across our platforms, solidifying our position as a regional powerhouse in the proptech space.

As the pioneering portal for homeseekers in the region, we are on a mission to motivate and inspire people to live the life they deserve.

Position Summary:

We are seeking an inventive, forward-thinking Data Engineer to join our team. In this role, you won't just follow the traditional paths of data engineering; you'll break new ground, bringing a fresh, creative perspective to every project. Your self-motivation and ability to think differently will be key as you design and implement smart data solutions that go beyond the ordinary.


By harnessing the power of Generative AI, you will develop solutions that are not only efficient but transformative, driving automation and enabling us to achieve more with less. Your versatility in multiple programming languages, combined with a relentless focus on innovation, will allow you to collaborate effectively with data science, business analytics, and product development teams.


As a Data Engineer, your contributions will be pivotal in ensuring that our data solutions stay ahead of the curve, utilising the latest tools and methodologies while maintaining the highest standards of security, privacy, and regulatory compliance.

You won’t just build data pipelines—you’ll reimagine them, pushing the boundaries of what’s possible in data engineering.


Our Tech Stack:

Languages: SQL & Python

Pipeline orchestration: Dagster (legacy: Airflow)

Data stores: Redshift, Snowflake, ClickHouse

Platforms & Services: Docker, Kubernetes

PaaS: AWS (ECS/EKS, DMS, Kinesis, Glue, Bedrock, Athena, S3, and others)

ETL: Fivetran, with dbt for transformation

IaC: Terraform (with Terragrunt)



Key Responsibilities:


  • Design and Implement Innovative Data Solutions: Develop and maintain advanced ETL pipelines using SQL, Python, and Generative AI, transforming traditional data processes into highly efficient and automated solutions.
  • Orchestrate Complex Data Workflows: Utilise tools such as Dagster and Airflow for sophisticated pipeline orchestration, ensuring seamless integration and automation of data processes.
  • Leverage Generative AI for Data Solutions: Create and implement smart data solutions using Generative AI techniques such as Retrieval-Augmented Generation (RAG), building systems that retrieve external data and integrate it with LLMs to produce accurate, contextually enriched responses.
  • Employ Prompt Engineering: Develop and refine prompt engineering techniques to effectively communicate with large language models (LLMs), enhancing the accuracy and relevance of generated responses in various applications.
  • Utilise Embeddings and Vector Databases: Apply embedding models to convert data into numerical representations, storing them in vector databases. Perform relevancy searches using these embeddings to match user queries with the most relevant data (a minimal sketch of this flow follows this list).
  • Incorporate Semantic Search Techniques: Implement semantic search to enhance the accuracy and relevance of search results, ensuring that data retrieval processes are highly optimised and contextually aware.
  • Collaborate Across Teams: Work closely with cross-functional teams, including data science and business analytics, to understand and deliver on unique and evolving data requirements.
  • Ensure High-Quality Data Flow: Leverage stream, batch, and Change Data Capture (CDC) processes to ensure a consistent and reliable flow of high-quality data across all systems.
  • Enable Business User Empowerment: Use data transformation tools like dbt to prepare and curate datasets, empowering business users to perform self-service analytics.
  • Maintain Data Quality and Consistency: Implement rigorous standards to ensure data quality and consistency across all data stores, continuously innovating to improve data reliability.
  • Monitor and Enhance Pipeline Performance: Regularly monitor data pipelines to identify and resolve performance and reliability issues, using innovative approaches to keep systems running optimally.
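
The embedding and relevancy-search responsibilities above reduce to a simple flow: embed your documents, store the vectors, then embed each query and rank stored vectors by similarity. The Python sketch below illustrates that flow. It is a minimal illustration, not Property Finder's actual implementation: it assumes boto3 access to Amazon Bedrock (Bedrock appears in our stack), uses "amazon.titan-embed-text-v1" purely as an example embedding model, and stands in an in-memory NumPy array for a real vector database.

    # Minimal embed -> store -> search sketch. Assumptions (not from this
    # role description): boto3 credentials with Bedrock access, the Titan
    # embedding model as an example, and NumPy as a stand-in vector store.
    import json

    import boto3
    import numpy as np

    bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

    def embed(text: str) -> np.ndarray:
        # Ask a Bedrock embedding model for the vector representation of `text`.
        response = bedrock.invoke_model(
            modelId="amazon.titan-embed-text-v1",
            body=json.dumps({"inputText": text}),
        )
        payload = json.loads(response["body"].read())
        return np.array(payload["embedding"], dtype=np.float32)

    # "Index" a few documents: embed each one and stack the vectors.
    documents = [
        "3-bedroom villa with a private pool in Dubai Marina",
        "Studio apartment near the metro in Business Bay",
        "Off-plan townhouse by a major developer in Riyadh",
    ]
    index = np.stack([embed(doc) for doc in documents])

    def search(query: str, top_k: int = 2) -> list[str]:
        # Cosine-similarity relevancy search against the stored embeddings.
        q = embed(query)
        scores = index @ q / (np.linalg.norm(index, axis=1) * np.linalg.norm(q))
        return [documents[i] for i in np.argsort(scores)[::-1][:top_k]]

    print(search("family home with a pool"))  # should surface the villa listing first

In production, the in-memory index would be replaced by a vector store (for example OpenSearch, which appears in the desired-experience list below), and the retrieved documents would be passed to an LLM as context, which is the Retrieval-Augmented Generation pattern this role describes.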

Essential Experience:


  • 7+ years of experience as a data engineer.
  • Proficiency in SQL and Python.
  • Experience with modern cloud data warehousing and data lake solutions such as Snowflake, BigQuery, Redshift, and Azure Synapse.
  • Expertise in ETL/ELT processes and experience building and managing batch and streaming data processing pipelines.
  • Strong ability to investigate and troubleshoot data issues, providing both short-term fixes and long-term solutions.
  • Experience with Generative AI, including Retrieval-Augmented Generation (RAG), prompt engineering, and embedding techniques for creating and managing vector databases.
  • Knowledge of AWS services, including DMS, Glue, Bedrock, SageMaker, and Athena.
  • Familiarity with dbt or other data transformation tools.

Other Desired Experience:


  • Familiarity with AWS Bedrock Agents and experience in fine-tuning models for specific use cases, enhancing the performance of AI-driven applications.
  • Proficiency in implementing semantic search to enhance the accuracy and relevance of data retrieval.
  • Experience with LangChain techniques and platforms for building applications that require complex, multi-step reasoning, such as conversational AI, document retrieval, content generation, and automated decision-making processes.
  • Experience with AWS services and concepts, including AWS OpenSearch, EC2, ECS, EKS, VPC, IAM, and others.
  • Proficiency with orchestration tools like Dagster, Airflow, AWS Step Functions, and similar platforms.
  • Experience with pub-sub, queuing, and streaming frameworks such as AWS Kinesis, Kafka, SQS, and SNS.
  • Familiarity with CI/CD pipelines and automation, ensuring efficient and reliable deployment processes.



Our promise to talent

We encourage our people, whom we call creators, to move fast and be bold, and we offer them countless ways to make an impact in a fast-growing, talent-centric organisation.

Our goal is to ensure that our people find their time at Property Finder a rewarding experience where the company’s growth also means personal growth.

Overall, it is a place for you to be your best self.

Property Finder Principles

  • Move fast and make things happen
  • Data beats opinions
  • Don’t confuse motion with progress
  • Failure is success if we learn from it
  • People over pixels

Find us at: Twitter, Facebook, Instagram, LinkedIn, Glassdoor

