Case study
A practical build spanning Python, the OpenAI API, LangChain, Django, RAG, vector stores, and Docker.
Overview
A live retrieval-augmented generation (RAG) system that lets visitors interrogate my career history, architecture decisions, and technical background in a conversational interface. Built with LangChain and deployed on this site as a working demonstration of production RAG engineering — including grounding, source attribution, and conversational context management.
Context
Most portfolio sites are static documents. I wanted to build something that demonstrates RAG engineering skills while being genuinely useful — a system where hiring managers can ask specific questions about my experience and get grounded, source-attributed answers.
Architecture
The system indexes my CV, project descriptions, and career narrative into a vector store. Queries are embedded, the most relevant chunks are retrieved by similarity, and responses are generated with each answer attributed back to its source documents. The conversational interface maintains context across multiple turns.
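The retrieval-with-attribution pattern above can be sketched in plain Python. This is a minimal illustration of the mechanism, not the production stack: the real system uses LangChain with an embedding model such as OpenAI's, whereas here a toy bag-of-words vector stands in for the embedding so the example is self-contained. The chunk texts, filenames, and the `VectorStore` class are all hypothetical.

```python
import math
from collections import Counter

def embed(text):
    # Stand-in for a real embedding model (e.g. an OpenAI embedding):
    # a bag-of-words count vector, enough to demonstrate the pattern.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VectorStore:
    """Minimal in-memory store: each chunk keeps its source document,
    so every retrieved passage can be cited in the generated answer."""

    def __init__(self):
        self.chunks = []  # list of (embedding, text, source)

    def add(self, text, source):
        self.chunks.append((embed(text), text, source))

    def retrieve(self, query, k=2):
        q = embed(query)
        ranked = sorted(self.chunks, key=lambda c: cosine(q, c[0]), reverse=True)
        return [(text, source) for _, text, source in ranked[:k]]

# Hypothetical corpus: CV and project chunks with source attribution.
store = VectorStore()
store.add("Led delivery of a production RAG chatbot for an enterprise client.", "cv.md")
store.add("Designed a config-driven ML orchestration platform.", "projects.md")

hits = store.retrieve("What RAG chatbot work have you delivered?", k=1)
# Each hit is (chunk text, source), so the answer can cite cv.md.
```

In the deployed system the retrieved chunks and their sources are passed to the language model as grounding context, and the conversational layer prepends prior turns so follow-up questions resolve correctly.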
Outcome
A live, working RAG system that serves as both a portfolio differentiator and a practical demonstration of the same patterns I've deployed in enterprise settings.
If you need secure GenAI delivery, RAG engineering, MLOps automation, or production ML systems support, feel free to get in touch.