AI Security Context Engineer
About Radiant Security
We’re an SF Bay Area cyber AI startup. Our vision is simple: enable all security teams to perform security operations with the efficiency and effectiveness needed to prevent breaches. We’re a small team of researchers and engineers with a deep focus on cyber and AI. Our product automates triage for any security alert, leveraging deep research, big data, and dozens of AI agents.
Join us and boost your career with hands-on AI experience.
The Role
As an AI Security Context Engineer, you’ll sit at the intersection of cybersecurity expertise and applied AI. Your mission is to translate deep security understanding into actionable context that teaches Radiant’s AI agents how to think, reason, and triage real-world security investigations.
You’ll partner closely with our AI research, engineering, and product teams to define how modern attacks should be interpreted, how alerts should be enriched, and how investigations should unfold across dozens of agentic steps. The work you do directly shapes the quality and accuracy of Radiant’s autonomous investigations — this is one of the most critical roles in the company and central to our core value proposition.
This is a rare opportunity to bring your SOC and detection experience into an AI-forward environment, contribute to the next evolution of security automation, and build systems used by teams around the world.
What problems will you be working on?
- Turning complex security signals (SIEM, EDR, IPS, cloud security logs, etc.) into high-quality AI reasoning that powers fully automated investigations
- Translating attacker behaviors and TTPs into teachable patterns for AI agents to detect, correlate, and triage threats
- Defining the logic behind multi-step agentic investigation workflows — which signals the AI should examine/compare, why, and how it should decide next steps
- Closing context gaps that break investigations, improving outcomes by shaping the “security intuition” of the model
- Stress-testing AI reasoning end-to-end to ensure it mirrors how top SOC analysts think when handling real incidents
What you’ll do
- Shape how Radiant’s AI agents think: translating real SOC workflows, attacker behaviors, and detection patterns into the reasoning that drives automated investigations
- Design and refine multi-step investigation logic, curating which signals matter, how alerts should be enriched, and how AI agents decide next steps
- Evaluate and improve AI decision-making, stress-testing agentic workflows to ensure they replicate how top analysts actually investigate potential incidents
- Work with a modern, cloud-native AI stack and have direct impact on one of the most critical components of Radiant’s platform
- Develop a stronger understanding of agentic AI and how it is leveraged for detection and analysis
Things we’re looking for
- An undergraduate degree in computer science
- Experience as a security analyst in an operational capacity
- Prior experience at security product companies (startups are a plus)
- A working knowledge of adversarial TTPs, malware infrastructure, and the malware economy
- Hands-on experience with a variety of security detection technologies that are part of a robust security program (SIEM, IPS, WAF, EDR)
- Relevant experience with cloud security technologies
- A track record of providing security subject matter expertise and guidance to people who are not security experts
Benefits
- Generous equity package
- Unlimited PTO (take time when you need it)
- Top-of-market salary
- Great healthcare
The process
We’re a startup, and we make decisions quickly. Our process is designed to give you the best glimpse of our team and allow us to evaluate your technical and cultural fit.
- Step 1: Executive interview + Technical interview
- Step 2: Virtual On Site: Technical and Leadership interviews