⚠️ This post links to an external website. ⚠️
It's been over a year since Part II of this series, and a lot has happened.
We've been building AI systems for almost two years now. Most of our client projects involve AI in some form. It started with the RAG approach I wrote about in Part II: retrieve documents, inject context, generate response. That worked, but it only went so far.
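For context, that single-pass shape looks roughly like this. This is a minimal sketch, not our actual code; `retrieve` and `generate` are hypothetical stand-ins for a vector-store lookup and an LLM call:

```python
# Minimal sketch of a single-pass RAG flow.
# retrieve() and generate() are hypothetical stand-ins, not real library calls.

def retrieve(question: str) -> str:
    ...  # hypothetical: embed the question, query a vector store, return top chunks

def generate(prompt: str) -> str:
    ...  # hypothetical: one LLM completion call

def answer(question: str) -> str:
    docs = retrieve(question)  # exactly one upfront retrieval
    prompt = f"Context:\n{docs}\n\nQuestion: {question}"
    return generate(prompt)    # exactly one generation; no way to refine the search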
The problem was that RAG was a single-pass process. Complex questions often required multiple retrieval steps, but we were stuck with one upfront retrieval, either copying results manually from one query to the next, or building custom scripts to chain them programmatically. It was brittle and limited. The real breakthrough came from letting the LLM make these decisions autonomously: try one search approach, examine the results, decide what to try next, query multiple sources if needed, then synthesize everything together. The agentic flow, where the LLM recursively uses tools until it has what it needs, is fundamentally more effective than any amount of manual or scripted prompt chaining we could build.
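Here is a minimal sketch of that loop, assuming Anthropic's Python SDK and its tool-use API; `search_documents` is a hypothetical stand-in for whatever retrieval backend sits behind it:

```python
# Minimal sketch of the agentic loop: the model calls tools, inspects results,
# and decides what to do next until it can answer. Assumes Anthropic's Python
# SDK (pip install anthropic); search_documents() is a hypothetical backend.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

TOOLS = [{
    "name": "search_documents",
    "description": "Full-text search over the document store.",
    "input_schema": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"],
    },
}]

def search_documents(query: str) -> str:
    ...  # hypothetical: vector store, SQL, external API, whatever you have

def run_agent(question: str) -> str:
    messages = [{"role": "user", "content": question}]
    while True:
        response = client.messages.create(
            model="claude-3-5-sonnet-20241022",
            max_tokens=1024,
            tools=TOOLS,
            messages=messages,
        )
        if response.stop_reason != "tool_use":
            # The model decided it has enough; return its final answer.
            return "".join(b.text for b in response.content if b.type == "text")
        # Otherwise, run each requested tool call and feed the results back.
        messages.append({"role": "assistant", "content": response.content})
        tool_results = [
            {
                "type": "tool_result",
                "tool_use_id": block.id,
                "content": search_documents(**block.input),
            }
            for block in response.content
            if block.type == "tool_use"
        ]
        messages.append({"role": "user", "content": tool_results})
```

The loop replaces hand-built prompt chaining: instead of us scripting which query runs next, the model reads each result and issues the next search itself.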
Then MCP came along. To be clear, agentic tool-calling patterns existed before MCP; frameworks like LangGraph and various agent libraries had been doing this for a while. But MCP standardized it. When Anthropic released MCP, it was a bit of a wake-up call that we weren't building with state-of-the-art approaches. We'd been so focused on our custom RAG implementation that we'd missed the broader shift toward letting LLMs autonomously orchestrate tools. MCP didn't magically solve all our problems; we still had to build access control, multi-tenancy, data pipelines, and all the production concerns ourselves. But it gave us a standardized framework for the agentic tool-calling pattern we were missing.
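To illustrate what that standardization buys, here is a minimal sketch of exposing the same retrieval capability as an MCP tool, assuming the official MCP Python SDK's FastMCP helper; `search_documents` remains a hypothetical stand-in:

```python
# Minimal sketch of exposing a retrieval tool over MCP, assuming the official
# Python SDK (pip install mcp). Any MCP-capable client can then discover and
# call this tool; search_documents() stays a hypothetical stand-in.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("document-search")

@mcp.tool()
def search_documents(query: str) -> str:
    """Full-text search over the document store."""
    ...  # hypothetical retrieval backend, same as in the sketch above

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default
```

The point is not the code itself: the tool definition now lives behind a standard protocol rather than inside one provider's API payload, while access control and multi-tenancy remain your problem.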
continue reading on revelry.co
If this post was enjoyable or useful for you, please share it! If you have comments, questions, or feedback, you can email my personal email. To get new posts, subscribe via the RSS feed.