
Senior Software Engineer, GenAI Security product (Vulcan)

Taipei

About the role 

We are looking for a Senior Software Engineer to build the application layer and tooling ecosystem that powers our GenAI security capabilities. 

In this role, you are primarily a builder of systems and applications. You will work closely with other Software Engineers to bridge the gap between AI Research and Engineering by turning experimental concepts into production-grade software. Your scope covers three critical pillars: 

  1. Core Application Development: Building the Vulcan Platform (Full Stack) and internal application logic.
  2. AI Agent Engineering: "Programming" models via advanced Prompt Engineering and workflow orchestration.
  3. Tooling & LLMOps: Creating the operational pipelines and infrastructure tooling that serve these applications. 

You will not just integrate APIs; you will architect the surrounding software ecosystem that makes our GenAI Red Teaming systems scalable, reliable, and user-friendly. 

 

What you’ll do

1. Core Application & Platform Development (Full Stack)

  • Vulcan Platform: Partner with Product and Design to build intuitive frontend interfaces (React, Next.js) for dashboards, configuration consoles, and visualization tools.
  • Backend Services & APIs: Develop and maintain the essential APIs (FastAPI/Python) and microservices that power the Vulcan platform.
  • Internal Tooling Ecosystem: Build frontend and backend tools that accelerate internal teams (Project, ML/AI), including configuration panels, data visualization pipelines, and evaluation interfaces.
  • Guardrails & Security Features: Implement backend services for AI guardrails (content moderation, prompt filtering) and automated adversarial testing pipelines.

2. GenAI Agent & Logic Engineering

  • Prompt Engineering as Code: Treat prompts as software logic. Lead the design and implementation of AI Agent behaviors, optimizing responses through structured Prompt Engineering techniques. 
  • Agent Workflow Design: Orchestrate complex, multi-step LLM workflows where the "application logic" involves chaining model interactions effectively.
  • Response Handling: Design robust parsing and validation mechanisms to ensure raw model outputs are converted into structured, usable application data. 

3. Tooling & LLMOps Infrastructure 

  • Tooling Infrastructure: Build and maintain the underlying tools and services that support the AI lifecycle, ensuring seamless integration between development, testing, and production environments. 
  • LLMOps Pipelines: Establish pipelines for evaluation, deployment, and monitoring to ensure model reliability and consistent performance.
  • Asynchronous Processing: Architect robust task execution systems using Message Queues (Celery, RabbitMQ) to handle long-running asynchronous AI inference tasks.
  • Observability: Implement logging and tracing (e.g., Langfuse, MLflow) to track system health, latency, and costs within the application layer. 

 

Requirements

  • Professional Experience
    • 3+ years of total software development experience with a focus on web applications and backend systems.
    • Experience working in a development team of 3+ engineers.
    • Familiarity with collaborative development practices such as code reviews, issue tracking, and team-based delivery. 
  • Technical Mastery
    • Full Stack Proficiency: Strong proficiency in Python (Backend) and TypeScript (Frontend), with the ability to switch contexts effectively.
    • System Design: Solid understanding of System Design principles, specifically in building scalable web applications and microservices.
    • Engineering Mindset: You treat AI/LLMs as a software component. You focus on reliability, testing, and maintainability of the code surrounding the model. 
  • Soft Skills & Mindset
    • Independent Execution: Excellent ability to plan, organize, and execute features from concept to production with minimal supervision.
    • Cross-Functional Collaboration: Superior communication skills to translate requirements between AI/ML Researchers and non-technical stakeholders (PMs). 

 

Nice-to-haves 

  • Leadership Experience: Proven experience leading an engineering team or mentoring junior engineers.
  • Async Architecture: Deep familiarity with Task Executors and Message Queues (Celery, RabbitMQ) for high-concurrency workloads.
  • LLMOps Tools: Experienced in tools like Langfuse, MLflow, or similar observability and evaluation platforms.
  • CI/CD: Experienced in designing and maintaining CI/CD pipelines (e.g., GitLab CI) for automated testing and deployment. 

Experience with these tools is a plus, but we value strong engineering fundamentals and learning ability over specific tool experience. 

 

Other Benefits 

To us, people are our greatest asset, and we are more than happy to invest in our employees! We create a healthy work atmosphere and provide the tools and support you need to do your job well. With a culture of flexibility and transparency, we believe there should be no barriers, and everyone’s contributions matter. 

Work-Life Balance is a must  

  • 15 days of annual leave (pro-rated for partial months in the first year) 
  • 5 days of fully paid sick leave and 3 days of menstrual leave 
  • Health check subsidy 
  • Ergonomic chairs and fully equipped devices for work
  • Hybrid remote work and flexible working hours

Grow together & keep learning

  • Conference & external subsidies 
  • Learning clubs to share technical skills (e.g., Frontend/Backend tech sharing, Blockchain, etc.) 

Work Hard, Play Even Harder 

  • Various entertainment & sports clubs: join the basketball club today and play board games tomorrow! 
  • Snacks & beverages to refill your energy anytime 
