Cost-Effective Querying Using Cheaper GPT Models: LangChain Retrievers — Part 2

Reeshabh Choudhary
3 min read · Jul 14, 2023

Recap

Azure Cognitive Search & OpenAI: https://medium.com/@reeshabh-choudhary/querying-enterprise-data-using-azure-openai-and-cognitive-search-3e5fa6d732cb

NLP & Tokens: https://medium.com/@reeshabh-choudhary/nlp-tokens-and-cost-estimation-with-azure-openai-ed905085e9fc

Cost Effective querying using Cheaper GPT models — Part 1: https://medium.com/@reeshabh-choudhary/token-limits-of-azure-openai-cost-effective-querying-using-cheaper-gpt-models-d8cfca18799e

Introduction

In the previous three articles, we discussed what Azure Cognitive Search is and how we can query enterprise data using Azure OpenAI models. We also discussed tokenization with respect to large language models (LLMs) and how to estimate costs when using Azure OpenAI models. In the third article, we broke down the costs and token limits of different GPT models. Based on that, we looked at summarization algorithms and the use of local LLMs to summarize our search results before passing them to the GPT prompt for the final response.

In this article, we discuss how to leverage LangChain retrievers and chains to query over large documents.

Azure Cognitive Search Retriever: A Vector DB alternative

Retriever

A retriever is an interface that returns documents given an unstructured query. It is more general than a vector store: a retriever does not need to be able to store documents, only to return (or retrieve) them.

We can leverage the default AzureCognitiveSearchRetriever from LangChain's retriever library, which lets us run a search over our query input with minimal code. If we use this retriever, we can also do away with the code for chunking and indexing documents locally using vector DBs like Chroma.
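A minimal sketch of wiring up the retriever is shown below. The service name, index name, API key, and query are placeholders; in practice, pull the secrets from Azure Key Vault:

```python
import os

from langchain.retrievers import AzureCognitiveSearchRetriever

# Placeholder credentials -- in practice, fetch these from Azure Key Vault.
os.environ["AZURE_COGNITIVE_SEARCH_SERVICE_NAME"] = "<your-search-service>"
os.environ["AZURE_COGNITIVE_SEARCH_INDEX_NAME"] = "<your-index>"
os.environ["AZURE_COGNITIVE_SEARCH_API_KEY"] = "<your-api-key>"

# content_key names the index field that holds the document text;
# top_k caps how many documents the retriever returns.
retriever = AzureCognitiveSearchRetriever(content_key="content", top_k=5)

docs = retriever.get_relevant_documents("What is the notice period for termination?")
for doc in docs:
    print(doc.page_content[:200])
```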

The obvious downside is that the Azure Cognitive Search service comes at a cost, but overall it is faster and more efficient than the ChromaDB chunking and indexing process.
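For contrast, here is a sketch of the local workflow the retriever replaces: loading a PDF, chunking it, and indexing it in Chroma. The file name and embedding deployment are placeholders:

```python
from langchain.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma

# Load the PDF and split it into overlapping chunks.
pages = PyPDFLoader("report.pdf").load()
chunks = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=100
).split_documents(pages)

# Embed and index the chunks in a local Chroma store.
vectordb = Chroma.from_documents(
    chunks, OpenAIEmbeddings(deployment="<embedding-deployment>")
)
local_retriever = vectordb.as_retriever(search_kwargs={"k": 5})
```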

Response Processing using Recursive calls to LLMs via Chains

Once the results are retrieved from Cognitive Search, we can process them in one of the following ways (a sketch of both chain types follows the list):

· MAP-REDUCE: The map-reduce documents chain first applies an LLM chain to each document individually (the Map step), treating each chain output as a new document. It then passes all of these new documents to a separate combine-documents chain to produce a single output (the Reduce step).

· REFINE: The refine documents chain constructs a response by looping over the input documents and iteratively updating its answer. For each document, it passes all non-document inputs, the current document, and the latest intermediate answer to an LLM chain to get a new answer.
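As a sketch of both strategies, LangChain's load_qa_chain can build either chain type over the documents the retriever returned; the Azure OpenAI endpoint, key, and deployment name below are placeholders:

```python
import os

from langchain.chains.question_answering import load_qa_chain
from langchain.chat_models import AzureChatOpenAI

# Placeholder Azure OpenAI settings -- again, manage secrets via Key Vault.
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https://<your-resource>.openai.azure.com/"
os.environ["OPENAI_API_KEY"] = "<your-api-key>"
os.environ["OPENAI_API_VERSION"] = "2023-05-15"

llm = AzureChatOpenAI(deployment_name="<your-gpt-deployment>", temperature=0)

# Map-Reduce: answer per document first, then combine the answers.
map_reduce_chain = load_qa_chain(llm, chain_type="map_reduce")

# Refine: answer on the first document, then refine with each following one.
refine_chain = load_qa_chain(llm, chain_type="refine")

query = "What is the notice period for termination?"
print(map_reduce_chain.run(input_documents=docs, question=query))
```

Alternatively, RetrievalQA.from_chain_type(llm, chain_type="map_reduce", retriever=retriever) wires the retrieval and combination steps into a single chain.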

Sample Notebook

Below is the link to a sample notebook, where you can plug in your LLM with the proper API keys (managed via Azure Key Vault) and see the responses over your personal or enterprise documents.

https://github.com/reeshabh90/PDF-reader/blob/master/efficient.ipynb

Summary

While GPT models have a wide variety of use cases, balancing their token limits against organizational infrastructure costs remains a challenge. LLMs are effective, but token limits force us to make multiple calls per query, which slows response times. It is therefore up to developers to mix and match local NLP-based summarization (discussed in the previous part) with LLM retrievers and compression.

References:

1. https://python.langchain.com/docs/modules/chains/document/refine

2. https://python.langchain.com/docs/modules/chains/document/map_reduce

3. https://python.langchain.com/docs/modules/data_connection/retrievers/integrations/azure_cognitive_search


Reeshabh Choudhary

Software Architect and Developer | Author: Objects, Data & AI.