
Director of GenAI Security Research, Vulcan
Taiwan/Korea/Hong Kong/Singapore/Vietnam/Middle East/Other regions
Job Description
The Director of GenAI Security Research defines Vulcan’s scientific agenda, leads a multi‑disciplinary research organization, and translates cutting‑edge discoveries into product capabilities that keep clients’ GenAI systems secure. The role requires strong research experience, demonstrated leadership, and the ability to communicate complex findings to executives, regulators, and clients.
Vulcan product: https://vulcanlab.ai/
Vulcan LinkedIn: https://www.linkedin.com/company/vulcanlab-ai/
AIFT group: https://aift.io/
*Please apply with an English CV, thank you.
-
Responsibilities
- Lead a research roadmap including but not limited to GenAI vulnerabilities, agentic‑AI and multimodal adversarial robustness, model‑supply‑chain risk, and alignment with international and local standards.
- Research and integrate new and emerging frameworks, toolchains, and experimental protocols in generative‑model development to ensure Vulcan remains at the forefront of GenAI security research.
- Develop and apply techniques to detect and protect against model vulnerabilities, adversarial attacks, and other AI-specific threats.
- Manage and coach a team of research scientists and engineers, and partner with cross‑functional teams to identify, design, and implement the latest safeguards and security features in Vulcan products.
- Publish in leading venues, secure patents, and present at conferences to position Vulcan/AIFT as an industry leader in GenAI security.
-
Requirements
- Bachelor’s degree with 8+ years, Master’s with 6+ years, or Ph.D. with 3+ years of experience in computer science, machine learning, AI, and/or cybersecurity, including substantial work in AI research.
- Self-starter with a desire to work in a fast-paced environment alongside cross-region teams.
- Excellent verbal and written communication skills.
-
Preferred Requirements
- Experience scaling research programs within start‑ups or high‑growth organizations.
- Familiarity with GenAI technologies and experience in security research, e.g., peer‑reviewed publications or open‑source contributions that have advanced GenAI security.
- Fluency in secure software development and regulatory frameworks relevant to AI risk management.