Case study

ChatVitae — RAG-Powered CV Assistant

A practical build spanning Python, OpenAI APIs, LangChain, Django, RAG, Vector Store, Docker.

Overview

Why this project matters

A live retrieval-augmented generation (RAG) system that lets visitors interrogate my career history, architecture decisions, and technical background in a conversational interface. Built with LangChain and deployed on this site as a working demonstration of production RAG engineering — including grounding, source attribution, and conversational context management.
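The conversational context management mentioned above can be sketched as a bounded chat history folded into each grounded prompt. This is an illustrative sketch, not the actual implementation: the `ChatMemory` and `build_prompt` names, the turn limit, and the prompt wording are all assumptions for demonstration.

```python
from collections import deque

class ChatMemory:
    """Keeps the last `max_turns` (question, answer) pairs so follow-up
    questions are interpreted in context without unbounded prompt growth."""

    def __init__(self, max_turns: int = 5):
        self.turns: deque[tuple[str, str]] = deque(maxlen=max_turns)

    def add(self, question: str, answer: str) -> None:
        self.turns.append((question, answer))

    def as_transcript(self) -> str:
        return "\n".join(f"User: {q}\nAssistant: {a}" for q, a in self.turns)

def build_prompt(memory: ChatMemory, context: str, question: str) -> str:
    """Grounded prompt: retrieved context plus recent conversation history,
    with an instruction to answer only from the supplied sources."""
    return (
        "Answer using only the context below. Cite the source of each claim.\n\n"
        f"Context:\n{context}\n\n"
        f"Conversation so far:\n{memory.as_transcript()}\n\n"
        f"Question: {question}"
    )

# Toy two-turn conversation; the context string is placeholder data.
memory = ChatMemory(max_turns=2)
memory.add("What did you build?", "A production RAG chatbot.")
memory.add("Which stack?", "Python, LangChain, and a vector store.")
prompt = build_prompt(memory, "[source: cv.md] (retrieved excerpt here)",
                      "How was it deployed?")
```

Bounding the history (rather than replaying every turn) keeps prompts within the model's context window while still letting pronouns and follow-ups resolve against recent exchanges.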

Context

The problem

Most portfolio sites are static documents. I wanted to build something that demonstrates RAG engineering skills while being genuinely useful — a system where hiring managers can ask specific questions about my experience and get grounded, source-attributed answers.

Architecture

How it was built

The system indexes my CV, project descriptions, and career narrative into a vector store. Queries are embedded, relevant chunks retrieved, and responses generated with source attribution. The conversational interface maintains context across multiple turns.
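The index-retrieve-attribute loop described above can be sketched end to end. This is a minimal stand-in, not the production code: the real system uses OpenAI embeddings and a LangChain vector store, whereas the toy `embed` function and in-memory `VectorStore` below exist only so the sketch runs without API calls; the chunk size and document names are assumptions.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words vector, standing in for an OpenAI embedding."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VectorStore:
    """Minimal in-memory index; each chunk keeps its source document
    so answers can carry attribution."""

    def __init__(self):
        self.entries = []  # (vector, chunk_text, source)

    def add(self, text: str, source: str, chunk_size: int = 40) -> None:
        # Split the document into fixed-size word chunks before embedding.
        words = text.split()
        for i in range(0, len(words), chunk_size):
            chunk = " ".join(words[i:i + chunk_size])
            self.entries.append((embed(chunk), chunk, source))

    def retrieve(self, query: str, k: int = 2):
        # Embed the query and rank chunks by cosine similarity.
        qv = embed(query)
        ranked = sorted(self.entries, key=lambda e: cosine(qv, e[0]), reverse=True)
        return [(chunk, source) for _, chunk, source in ranked[:k]]

# Index two placeholder documents, then retrieve with attribution.
store = VectorStore()
store.add("Led delivery of a production RAG chatbot with grounded answers.", "projects.md")
store.add("Background in applied mathematics and MLOps engineering.", "cv.md")

hits = store.retrieve("Tell me about the RAG chatbot")
context = "\n".join(f"[source: {src}] {chunk}" for chunk, src in hits)
```

Tagging each retrieved chunk with `[source: …]` before generation is what lets the model's answer point back to the document it came from, which is the grounding and attribution behaviour the live system demonstrates.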

Outcome

What was delivered

A live, working RAG system that serves as both a portfolio differentiator and a practical demonstration of the same patterns I've deployed in enterprise settings.

Related writing

Articles connected to this project

20 March 2025 · 9 min read

From Applied Mathematics to MLOps Engineering: My Journey Through AI, FinTech, and Retail

Andrea Head on moving from applied mathematics into production AI, MLOps engineering, fintech, and retail machine learning delivery.

24 February 2026 · 16 min read

Tracing the Ethical Contours of Artificial Intelligence: From Antiquity to the Global Governance Paradigms of 2026

A long-form essay on AI ethics, governance, and the historical roots of responsible AI systems.

More projects

Keep exploring

Production RAG Chatbot (Enterprise application)

Led architecture and delivery of a production-grade RAG chatbot for John Lewis Partnership's internal workforce — from first …

The Prediction Factory: Designing an ML Platform from First Principles

Defined and delivered JUMO's internal ML platform from first principles — a config-driven orchestration layer that scaled model …

Production ML Monitoring: From Weeks to Minutes

Designed and built a real-time model monitoring system at JUMO that reduced data anomaly detection time from weeks …

Technology stack

Python · OpenAI APIs · LangChain · Django · RAG · Vector Store · Docker

Next steps

Interested in similar work?

If you need secure GenAI delivery, RAG engineering, MLOps automation, or production ML systems support, feel free to get in touch.