AI Engineer
About CoreStory
CoreStory unlocks the hidden intelligence in your legacy code. By using AI to surface business logic and technical insights, we give enterprises the clarity to modernize faster, maintain applications more effectively, and reduce the risk of costly failures.
We’re looking for an AI Engineer who is passionate about building intelligent systems that blend large language models, retrieval architectures, and conversational agents into cohesive, scalable products. This role is critical to the core AI engine powering the CoreStory Platform.
Role Overview
As an AI Engineer, you’ll play a central role in developing and optimizing the AI components that power CoreStory’s narrative intelligence platform. You’ll work across LLM integration, vector search systems, prompt orchestration, agentic systems, and retrieval-augmented generation (RAG) pipelines.
You’ll collaborate closely with the product, data, and infrastructure teams to prototype, productionize, and continuously evolve our AI stack — ensuring that our systems are accurate, explainable, efficient, and on the cutting edge of modern AI capabilities.
Key Responsibilities
- Design, implement, and optimize LLM-powered systems (e.g., RAG, chat agents, summarizers, knowledge graph integration).
- Build and manage data indexing and retrieval pipelines using LlamaIndex, LangChain, or similar frameworks.
- Implement and maintain vector databases (e.g., Pinecone, Weaviate, Chroma, or Azure AI Search), as well as graph databases with vector search support, such as Neo4j.
- Integrate open-source and proprietary LLMs (e.g., GPT, Claude, Llama) into the CoreStory Platform.
- Develop and refine AI-driven features — including generative insights, automated summarization, and narrative analytics.
- Collaborate with DevOps and backend teams to deploy scalable AI services within CoreStory’s cloud infrastructure.
- Continuously benchmark model performance, latency, and cost, identifying opportunities for optimization.
- Stay current with advancements in AI — from model architectures to emerging frameworks — and propose innovative applications aligned with CoreStory’s mission.
- Contribute to internal documentation, experimentation frameworks, and evaluation methodologies.
Qualifications
Required Skills:
- 3+ years of experience in AI engineering, machine learning, or applied NLP.
- Strong hands-on experience with LlamaIndex, LangChain, or similar orchestration frameworks.
- Experience designing and implementing vector database solutions (e.g., Pinecone, FAISS, Milvus, Weaviate, or Neo4j's vector indexes).
- Solid understanding of LLM APIs (OpenAI, Anthropic, Mistral, Hugging Face, etc.).
- Proficiency in Python, with experience in libraries such as FastAPI, Pandas, or NumPy.
- Understanding of retrieval-augmented generation (RAG) patterns, embeddings, and tokenization.
- Familiarity with prompt engineering, tool calling, and chat agent architectures.
- Strong problem-solving and analytical mindset, with attention to performance and scalability.
- Demonstrated interest in staying up to date with the fast-evolving AI landscape.
Preferred:
- Experience deploying AI services in production (e.g., using Docker, Azure, or AWS).
- Exposure to LangGraph, semantic search, or hybrid RAG systems.
- Familiarity with knowledge graphs, document intelligence, or multimodal AI.
- Previous experience in SaaS or early-stage startup environments.
What We Offer
- Competitive compensation and equity.
- Flexible, remote-first work environment.
- Opportunities to define and build the AI roadmap of a fast-growing technology company.
- Collaborative, learning-oriented culture.
- Access to cutting-edge AI models, research, and infrastructure.
Create a Job Alert
Interested in building your career at CoreStory? Get future opportunities sent straight to your inbox.
Apply for this job