
Scientific Knowledge Engineer, Ontology & Data Modeling

Spain, Poland

About Xebia

For more than 25 years, our global network of passionate technologists and pioneering craftspeople has delivered cutting-edge technology and game-changing consulting to companies on the brink of AI-driven digital transformation. Since 2001, we have grown into a full-service digital consulting company with 6,000+ professionals working on a worldwide ambition. Driven by the desire to make a difference, we keep innovating, fuelling the growth of our company with our knowledge-worker culture. When teaming up with Xebia, expect in-depth expertise based on an authentic, value-led, and high-quality way of working that inspires all we do.

At Xebia, we put ‘People First’—committed to attracting diverse talent and fostering an inclusive, respectful workplace where everyone is valued for their contributions. We welcome all individuals and evaluate solely on the quality of their work and teamwork.

About the Role


This role is responsible for maximizing the value of our data assets over their lifetime and bringing purpose to data. You will act as a translator of highly technical information from domain experts into appropriate data models, complete with supporting ontologies and vocabularies, that can be used to effectively structure and index the data. Specifically, you will work with product managers and R&D subject matter experts to translate the language of science (data models, ontologies, standards, etc.) into data products, acting as the voice of the knowledge base and advocating for the interoperability and value of each asset.

  Key responsibilities include:

  • Define the schemas, ontologies, and data models of the scientific information required to create value-adding data products.
  • Be accountable for quality control (through validation and verification) of the mapping specifications (e.g., models, schemas, controlled vocabularies) to be industrialized by data engineering and maintained in platform-provisioned tooling.
  • Work with product managers and engineers to confidently convert business needs into well-defined, deliverable business requirements, enabling the integration of large-scale biology data to predict, model, and stabilize therapeutically relevant protein complexes and antigen conformations for drug and vaccine discovery.
  • Collaborate with external groups to align data standards with industry and academic ontologies, ensuring that data standards are defined with usage and analytics in mind.
  • Provide bespoke subject matter expertise for R&D data, translating deep science into data for actionable insights.
  • Contribute to and maintain documentation of data standards, ontology decisions, and mapping rationale to support organizational knowledge transfer and auditability.

Basic Qualifications:

We are looking for professionals with these required skills to achieve our goals:

  • Master's degree in Bioinformatics, Biomedical Science, Biomedical Engineering, Molecular Biology, or Computer Science (with a life science application focus)
  • 6+ years of relevant work experience
  • Specific experience contributing to Knowledge Graph development efforts, including entity modeling, relationship design, and schema governance
  • Hands-on experience with open-source ontology tools and languages: Protégé, SPARQL, OWL, SKOS, SHACL, RML, RDF/Turtle
  • Working knowledge of major life sciences ontologies: Gene Ontology (GO), OBO Foundry ontologies (CL, UBERON, HPO, MONDO, CHEBI, EFO, CLO), MeSH, SNOMED CT, UMLS
  • Familiarity with linked data principles and semantic web technologies
  • Experience with industry-standard tools for building data serialization protocols (e.g., JSON Schema, LinkML)
  • Proficiency in at least one programming language — preferably Python — for scripting vocabulary mappings, building data models, automating QC, and prototyping pipelines
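To illustrate the kind of scripting this role involves, here is a minimal sketch of a vocabulary-mapping QC check in plain Python. The term names and mapping table are hypothetical examples, not an actual Xebia or client vocabulary; in practice such mappings would target ontology IDs from sources like the Cell Ontology (CL) or UBERON.

```python
# Minimal sketch of a vocabulary-mapping QC check (hypothetical terms and mappings).
from collections import Counter

# Local assay vocabulary mapped to illustrative ontology identifiers;
# None marks a source term that has not yet been mapped.
VOCAB_MAP = {
    "t cell": "CL:0000084",
    "b cell": "CL:0000236",
    "liver": "UBERON:0002107",
    "unknown tissue": None,
}

def qc_vocab_map(mapping):
    """Return (unmapped source terms, ontology IDs used more than once)."""
    unmapped = sorted(term for term, oid in mapping.items() if oid is None)
    counts = Counter(oid for oid in mapping.values() if oid is not None)
    duplicates = sorted(oid for oid, n in counts.items() if n > 1)
    return unmapped, duplicates

unmapped, duplicates = qc_vocab_map(VOCAB_MAP)
print(unmapped)    # ['unknown tissue']
print(duplicates)  # []
```

In a real pipeline, checks like these would run against the platform-provisioned mapping specifications (for example, SHACL shapes or LinkML schemas) rather than an in-memory dictionary.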

Preferred Qualifications:

The following would be a plus:

  • Experience with data governance and data quality tooling (e.g., Ataccama, Informatica, Talend, OpenRefine, Great Expectations, dbt)
  • Experience supporting LLM integration or AI-readiness workflows — including metadata enrichment, entity linking, embedding pipelines, or retrieval-augmented generation (RAG) architectures
  • Understanding of vector databases and their role in semantic search and knowledge retrieval (e.g., Weaviate, Chroma)
  • Familiarity with cloud data platforms and infrastructure relevant to large-scale biological data (e.g., AWS, GCP, Azure)
  • Familiarity with graph database technologies (e.g., Neo4j, Amazon Neptune, Stardog, GraphDB, TigerGraph)
