The langchain-nvidia-ai-endpoints package contains LangChain integrations for chat models and embeddings powered by NVIDIA AI Foundation Models and hosted on the NVIDIA API Catalog.
NVIDIA AI Foundation Models are community- and NVIDIA-built models that are optimized to deliver the best performance on NVIDIA-accelerated infrastructure. You can use the API to query live endpoints on the NVIDIA API Catalog and get quick results from a DGX-hosted cloud compute environment, or you can download models from the NVIDIA API Catalog with NVIDIA NIM, which is included with the NVIDIA AI Enterprise license. Running models on-premises gives your enterprise ownership of your customizations and full control of your IP and AI application.
NIM microservices are packaged as container images on a per model/model family basis and are distributed as NGC container images through the NVIDIA NGC Catalog. At their core, NIM microservices are containers that provide interactive APIs for running inference on an AI Model.
This example goes over how to use LangChain to interact with the supported NVIDIA Retrieval QA Embedding Model for retrieval-augmented generation via the NVIDIAEmbeddings class.
For more information on accessing the chat models through this API, refer to the ChatNVIDIA documentation.
Install the package
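For example, install the package from PyPI with pip:

```python
%pip install --upgrade --quiet langchain-nvidia-ai-endpoints
```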
Access the NVIDIA API Catalog
To get access to the NVIDIA API Catalog, do the following:
- Create a free account on the NVIDIA API Catalog and log in.
- Click your profile icon, and then click API Keys. The API Keys page appears.
- Click Generate API Key. The Generate API Key window appears.
- Click Generate Key. You should see API Key Granted, and your key appears.
- Copy and save the key as NVIDIA_API_KEY.
- To verify your key, use the following code.
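A minimal check along these lines, which prompts for the key only if it is not already set (the nvapi- prefix is the format used by API Catalog keys):

```python
import getpass
import os

# Prompt for the key only if it isn't already present in the environment.
if not os.environ.get("NVIDIA_API_KEY", "").startswith("nvapi-"):
    nvapi_key = getpass.getpass("Enter your NVIDIA API key: ")
    assert nvapi_key.startswith("nvapi-"), f"{nvapi_key[:5]}... is not a valid key"
    os.environ["NVIDIA_API_KEY"] = nvapi_key
```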
Work with the API Catalog
When initializing an embedding model, you can select a model by passing it explicitly, e.g. NV-Embed-QA below, or use the default by not passing any arguments.
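For example, a minimal sketch that selects the NVIDIA Retrieval QA Embedding Model referenced above:

```python
from langchain_nvidia_ai_endpoints import NVIDIAEmbeddings

# Select a specific model, or call NVIDIAEmbeddings() with no arguments
# to use the default model.
embedder = NVIDIAEmbeddings(model="NV-Embed-QA")
```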
The NVIDIAEmbeddings class supports the standard embeddings methods, including the following; a short usage sketch appears after the list.
- embed_query: Generate a query embedding for a query sample.
- embed_documents: Generate passage embeddings for a list of documents that you would like to search over.
- aembed_query / aembed_documents: Asynchronous versions of the above.
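A quick sketch of the synchronous methods, assuming the embedder initialized above:

```python
# Embed a single query string.
query_embedding = embedder.embed_query("What's the weather like in Komchatka?")

# Embed a batch of passages to search over.
doc_embeddings = embedder.embed_documents(
    [
        "Komchatka's weather is cold, with long, severe winters.",
        "Italy is famous for pasta, pizza, gelato, and espresso.",
    ]
)

# Embedding dimensionality and number of embedded passages.
print(len(query_embedding), len(doc_embeddings))
```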
Self-host with NVIDIA NIM Microservices
When you are ready to deploy your AI application, you can self-host models with NVIDIA NIM. For more information, refer to NVIDIA NIM Microservices. The following code connects to a locally hosted NIM microservice.
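A sketch of the connection, assuming an embedding NIM is serving requests at localhost:8080; adjust base_url to match your own deployment:

```python
from langchain_nvidia_ai_endpoints import NVIDIAEmbeddings

# Point the client at a locally hosted embedding NIM instead of the API Catalog.
embedder = NVIDIAEmbeddings(base_url="http://localhost:8080/v1")
```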
Similarity

The following is a quick test of the similarity for these data points; a sketch of the comparison follows the two lists.

Queries:
- What’s the weather like in Komchatka?
- What kinds of food is Italy known for?
- What’s my name? I bet you don’t remember…
- What’s the point of life anyways?
- The point of life is to have fun :D
Documents:
- Komchatka’s weather is cold, with long, severe winters.
- Italy is famous for pasta, pizza, gelato, and espresso.
- I can’t recall personal names, only provide information.
- Life’s purpose varies, often seen as personal fulfillment.
- Enjoying life’s moments is indeed a wonderful approach.
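A sketch of how this comparison might be run: the queries and documents above are defined as Python lists, embedded with the embedder from earlier, and compared with a cosine-similarity matrix. numpy is assumed to be available and is used only for illustration.

```python
import numpy as np

queries = [
    "What's the weather like in Komchatka?",
    "What kinds of food is Italy known for?",
    "What's my name? I bet you don't remember...",
    "What's the point of life anyways?",
    "The point of life is to have fun :D",
]
documents = [
    "Komchatka's weather is cold, with long, severe winters.",
    "Italy is famous for pasta, pizza, gelato, and espresso.",
    "I can't recall personal names, only provide information.",
    "Life's purpose varies, often seen as personal fulfillment.",
    "Enjoying life's moments is indeed a wonderful approach.",
]

# Embed queries and documents, then compare them with cosine similarity.
q_vectors = np.array([embedder.embed_query(q) for q in queries])
d_vectors = np.array(embedder.embed_documents(documents))

q_norm = q_vectors / np.linalg.norm(q_vectors, axis=1, keepdims=True)
d_norm = d_vectors / np.linalg.norm(d_vectors, axis=1, keepdims=True)
similarity = q_norm @ d_norm.T  # rows: queries, columns: documents

print(similarity.round(2))
```

The diagonal of this matrix should show the highest scores, since each document was written to answer the query in the same position.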
Embedding runtimes
Document embedding
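A sketch of timing batch document embedding against sequential query embedding, reusing the queries, documents, and embedder defined above; the timing harness here is illustrative, not part of the library.

```python
import time

# Batch: embed all documents in a single embed_documents call.
start = time.perf_counter()
batch_embeddings = embedder.embed_documents(documents)
print(f"Batch embedding of {len(documents)} documents took {time.perf_counter() - start:.2f}s")

# Sequential: embed the queries one at a time with embed_query.
start = time.perf_counter()
query_embeddings = [embedder.embed_query(q) for q in queries]
print(f"Sequential embedding of {len(queries)} queries took {time.perf_counter() - start:.2f}s")
```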
Truncation
Embedding models typically have a fixed context window that determines the maximum number of input tokens that can be embedded. This limit could be a hard limit, equal to the model’s maximum input token length, or an effective limit, beyond which the accuracy of the embedding decreases. Because models operate on tokens while applications usually work with text, it can be challenging for an application to ensure that its input stays within the model’s token limits. By default, an exception is thrown if the input is too large. To assist with this, NVIDIA NIM microservices (both the API Catalog and local deployments) provide a truncate parameter that truncates the input on the server side if it is too large; a usage sketch follows the option list below.
The truncate parameter has three options:
- “NONE”: The default option. An exception is thrown if the input is too large.
- “START”: The server truncates the input from the start (left), discarding tokens as necessary.
- “END”: The server truncates the input from the end (right), discarding tokens as necessary.
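For example, a sketch that asks the server to truncate over-long input from the end rather than raising an exception:

```python
from langchain_nvidia_ai_endpoints import NVIDIAEmbeddings

# "END" tells the server to drop tokens from the right if the input is too long.
embedder = NVIDIAEmbeddings(model="NV-Embed-QA", truncate="END")

long_text = "A very long passage about many topics. " * 1000  # illustrative over-long input
embedding = embedder.embed_query(long_text)  # truncated server-side instead of raising
```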
RAG retrieval
The following is a repurposing of the initial example from the LangChain Expression Language Retrieval Cookbook entry, executed with the AI Foundation Models’ Mixtral 8x7B Instruct and NVIDIA Retrieval QA Embedding models available in their playground environments. The subsequent examples in the cookbook also run as expected, and we encourage you to explore these options. TIP: We recommend using Mixtral for internal reasoning (i.e., instruction following for data extraction, tool selection, and so on) and Llama-Chat for the single final response that wraps up the answer for the user based on the history and context.
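A condensed sketch of such a chain, assuming faiss-cpu is installed and reusing the example documents from the Similarity section; the Mixtral model identifier and the prompt wording are illustrative choices, not prescribed by the cookbook:

```python
from langchain_community.vectorstores import FAISS
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_nvidia_ai_endpoints import ChatNVIDIA, NVIDIAEmbeddings

# Index the example documents with the retrieval QA embedding model.
vectorstore = FAISS.from_texts(documents, NVIDIAEmbeddings(model="NV-Embed-QA"))
retriever = vectorstore.as_retriever()

prompt = ChatPromptTemplate.from_template(
    "Answer the question using only this context:\n{context}\n\nQuestion: {question}"
)
llm = ChatNVIDIA(model="mistralai/mixtral-8x7b-instruct-v0.1")

# LCEL retrieval chain: retrieve context, fill the prompt, call the model.
chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)

print(chain.invoke("What kinds of food is Italy known for?"))
```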
Related topics
- langchain-nvidia-ai-endpoints package README
- Overview of NVIDIA NIM for Large Language Models (LLMs)
- Overview of NeMo Retriever Embedding NIM
- Overview of NeMo Retriever Reranking NIM
- ChatNVIDIA model
- NVIDIA Provider Page