How to Build an Exceptional AI Chatbot Using LangGraph.js

November 6, 2024

In today's fast-paced world, providing fast and accurate customer support is essential to growing your business. This article gives you a high-level overview of the tools we used at Gleap to build a human-like AI bot.

AI chatbots have become essential tools in achieving this goal, providing instant assistance and significantly enhancing user satisfaction. At Gleap, we have leveraged advanced large language models (LLMs) and innovative frameworks like LangGraph.js to develop Kai — our next-generation AI support agent that transforms customer interactions into seamless and effective experiences. Say hello to the new age of AI-powered customer support.

Central to Kai's development was LangGraph — a powerful framework that streamlined our AI chatbot creation process. In this article, we'll delve into how we utilized LangGraph.js to build our AI chatbot Kai, discuss essential concepts like chunking strategies and vector databases, and provide insights to help you create your own AI-powered support agent.

What is LangGraph?

LangGraph.js is a JavaScript framework designed to simplify the development of AI applications powered by LLMs. It provides a modular and flexible structure, enabling developers to build complex AI workflows with ease. LangGraph is built on top of LangChain and adds graph-based workflows, allowing for more complex and efficient AI agents. Additionally, LangGraph integrates seamlessly with various LLMs like GPT-4o and Claude 3.5, allowing us at Gleap to always offer the latest and most advanced LLMs to our customers.

Build your first RAG agent with LangGraph

Let's get started by walking through the core tasks our RAG pipeline runs through.

Content ingestion: Content cleanup, chunking and indexing (creating embeddings and storing chunks in a vector database)

AI agent with LangGraph:

  • Tool choosing agent
  • Content retrieval tool
  • Grade chunks
  • Rerank chunks
  • Custom tools
  • AI answer agent
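
The flow above can be pictured as a small graph of nodes and edges. Below is a dependency-free JavaScript sketch of that control flow, with plain functions standing in for the real LLM, retrieval, and grading calls; in production you would wire this up with LangGraph's StateGraph rather than a hand-rolled loop like this.

```javascript
// Minimal stand-in for the agent graph: each node is a function that
// takes the shared state and returns an updated state. Node names
// mirror the pipeline above; the "LLM" calls are stubbed out.
const nodes = {
  chooseTool: (state) => ({
    ...state,
    tool: state.query.toLowerCase().includes("account") ? "accountApi" : "retrieve",
  }),
  retrieve: (state) => ({ ...state, chunks: ["chunk A", "chunk B"] }),
  grade: (state) => ({ ...state, chunks: state.chunks.slice(0, 1) }),
  answer: (state) => ({ ...state, answer: `Based on: ${state.chunks.join(", ")}` }),
};

// Edges: from each node, decide the next node (null ends the run).
const edges = {
  chooseTool: (state) => (state.tool === "retrieve" ? "retrieve" : null),
  retrieve: () => "grade",
  grade: () => "answer",
  answer: () => null,
};

function runGraph(query) {
  let state = { query };
  let current = "chooseTool";
  while (current) {
    state = nodes[current](state);
    current = edges[current](state);
  }
  return state;
}
```

The key idea LangGraph gives you on top of plain LangChain chains is exactly this: state flows through named nodes, and conditional edges decide the next step at runtime.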

Content ingestion

A crucial component of RAG is content ingestion. This process includes cleaning the documents, dividing them into smaller sections, converting those sections into embeddings, and then storing these embeddings in a vector database.

Embeddings are numerical vector representations of content; semantically similar content ends up close together in vector space, so you can find related content through vector distance.
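
To make "vector distance" concrete, here is a small sketch of cosine similarity, one of the distance metrics Qdrant supports. The three-dimensional vectors are made-up toy values; real embeddings have hundreds or thousands of dimensions.

```javascript
// Cosine similarity: 1.0 means identical direction, ~0 means unrelated.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Toy 3-dimensional "embeddings" for illustration only.
const refundArticle = [0.9, 0.1, 0.0];
const refundQuery   = [0.8, 0.2, 0.1];
const loginArticle  = [0.0, 0.1, 0.9];

cosineSimilarity(refundQuery, refundArticle); // high: related content
cosineSimilarity(refundQuery, loginArticle);  // low: unrelated content
```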

At Gleap, we developed a Node.js service that monitors changes in our MongoDB. When a change occurs, the service cleans the content and segments it using the RecursiveCharacterTextSplitter. We then generate an embedding and store the processed data in a self-hosted Qdrant database, an open-source vector database.
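
To illustrate the idea behind recursive splitting, here is a simplified sketch — not the actual LangChain implementation, and without the chunk-overlap handling the real RecursiveCharacterTextSplitter offers. It tries the coarsest separator first and only falls back to finer ones when a piece is still over the size limit.

```javascript
// Simplified sketch of recursive character splitting: split on
// paragraphs first, then lines, then sentences, recursing only
// when a piece still exceeds the size limit.
function recursiveSplit(text, maxSize, separators = ["\n\n", "\n", ". "]) {
  if (text.length <= maxSize || separators.length === 0) {
    return [text];
  }
  const [sep, ...rest] = separators;
  return text
    .split(sep)
    .filter((piece) => piece.trim().length > 0)
    .flatMap((piece) =>
      piece.length > maxSize ? recursiveSplit(piece, maxSize, rest) : [piece]
    );
}

const doc = "Refunds are issued within 5 days.\n\nTo request a refund, open a ticket.";
recursiveSplit(doc, 40); // two paragraph-sized chunks
```

Splitting along natural boundaries like this keeps each chunk semantically coherent, which matters because each chunk is embedded and retrieved on its own.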

We tested several vector databases, and found Qdrant the most appealing due to its stability, open-source nature, and large community.

AI Agent with LangGraph

Once content ingestion is set up, the next step is configuring an AI agent using LangGraph. This AI agent is responsible for handling queries, retrieving relevant information, and providing accurate responses. Here are the essential components we utilized to build Kai’s AI-driven responses:

Tool choosing agent: The tool-choosing agent is an integral part of our AI pipeline. It dynamically selects which tools to use based on the incoming query. For instance, if a query requires customer account information, the agent calls specific APIs, while a general inquiry is routed to the question-answering tool.
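
As an illustration, the routing decision could look like the following sketch. The tool names are hypothetical, and a keyword heuristic stands in for the LLM's actual structured tool choice — in practice the model picks a tool from its description, not from a regex.

```javascript
// Each tool would be described to the model; here a keyword check
// stands in for the LLM's tool-choice output. Insertion order matters:
// the fallback tool matches anything and must come last.
const tools = {
  accountLookup: { match: /\b(account|subscription|invoice)\b/i },
  answerQuestion: { match: /./ }, // fallback: general question answering
};

function chooseTool(query) {
  for (const [name, tool] of Object.entries(tools)) {
    if (tool.match.test(query)) return name;
  }
}

chooseTool("Why was my invoice charged twice?"); // "accountLookup"
chooseTool("How do I enable dark mode?");        // "answerQuestion"
```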

Content retrieval tool: With content indexed in our vector database, the content retrieval tool finds relevant content chunks by comparing vector embeddings. This tool enables the AI to access contextually relevant information, ensuring the chatbot's responses remain accurate and consistent. The retrieved content will be graded with the next tool below.
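
Here is a dependency-free sketch of that lookup. In production, Qdrant's approximate-nearest-neighbor index does this far more efficiently than the linear scan below; the toy index and embeddings are invented for illustration.

```javascript
// Top-k retrieval sketch: score every stored chunk against the query
// embedding and keep the k closest.
function dot(a, b) {
  return a.reduce((sum, v, i) => sum + v * b[i], 0);
}

function retrieve(queryEmbedding, index, k) {
  return index
    .map((entry) => ({ ...entry, score: dot(queryEmbedding, entry.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}

// Toy index with unit-length embeddings (so dot product == cosine similarity).
const index = [
  { text: "Resetting your password", embedding: [1, 0] },
  { text: "Refund policy", embedding: [0, 1] },
  { text: "Changing your password", embedding: [0.8, 0.6] },
];

retrieve([1, 0], index, 2).map((e) => e.text);
// → ["Resetting your password", "Changing your password"]
```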

Content grading: Not all content is equally valuable for every query. Kai uses a grading mechanism to prioritize and rank content chunks by relevance. Chunks are graded based on their contextual fit and relevance score, allowing only the most relevant sections to proceed in the response pipeline.

Reranking chunks: Once chunks are graded, they are further refined through reranking, which ensures that the most pertinent information is presented first. This process eliminates redundant information, delivering streamlined responses without overwhelming users. We use Cohere's reranker for content reranking.
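
To illustrate, here is a toy reranking pass with a word-overlap stub in place of Cohere's reranking model; it rescores each chunk against the query and drops near-duplicates so the answer agent only sees distinct information.

```javascript
// Reranking sketch: rescore chunks against the query, sort by the new
// score, then filter out near-duplicate chunks.
function rerank(query, chunks, scoreFn) {
  const scored = chunks
    .map((text) => ({ text, score: scoreFn(query, text) }))
    .sort((a, b) => b.score - a.score);
  const seen = new Set();
  return scored.filter(({ text }) => {
    const key = text.toLowerCase().replace(/\W+/g, " ").trim();
    if (seen.has(key)) return false;
    seen.add(key);
    return true;
  });
}

// Stub scorer: counts words shared with the query. A real reranker is a
// cross-encoder model and far more accurate than this.
const overlap = (query, text) => {
  const words = new Set(query.toLowerCase().split(/\s+/));
  return text.toLowerCase().split(/\s+/).filter((w) => words.has(w)).length;
};

rerank("cancel my subscription", [
  "How to cancel a subscription",
  "How to CANCEL a subscription!",
  "Billing cycle overview",
], overlap).map((c) => c.text);
```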

Custom tools: Custom tools cover tasks such as fetching data from an external server on the fly, performing AI actions, and much more. These custom tools power our AI actions.

AI answer agent: Finally, the AI answer agent constructs the response by synthesizing the graded and reranked chunks. Leveraging LLMs such as GPT-4o and Claude 3.5, the agent generates responses that are conversational, clear, and relevant to the customer's needs.
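
As a sketch of how such an answer step can be grounded, the prompt might be assembled like this. The template wording is our illustration, not Kai's actual prompt; the point is that the model is instructed to answer only from the retrieved context.

```javascript
// Sketch: the graded and reranked chunks become numbered context, and
// the model is told to answer only from that context.
function buildAnswerPrompt(question, chunks) {
  const context = chunks
    .map((chunk, i) => `[${i + 1}] ${chunk}`)
    .join("\n");
  return [
    "You are a helpful support agent.",
    "Answer using ONLY the context below. If the context is not",
    "sufficient, say you don't know instead of guessing.",
    "",
    "Context:",
    context,
    "",
    `Question: ${question}`,
  ].join("\n");
}

const prompt = buildAnswerPrompt("How long do refunds take?", [
  "Refunds are processed within 5 business days.",
]);
// `prompt` would then be sent to the LLM (e.g. GPT-4o or Claude 3.5).
```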

Final grading: This tool double-checks the final answer and restarts the pipeline from scratch if the answer isn't a good fit.
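
Sketched as a loop with stubbed generate and grade functions — the threshold and retry cap below are illustrative, and a real implementation would express this as a conditional edge back to the start of the graph.

```javascript
// Final-grading loop sketch: generate an answer, grade it, and retry
// from scratch if the grade is below a threshold. The attempt cap
// keeps a hard question from looping forever.
function answerWithRetry(question, generate, grade, maxAttempts = 3) {
  let answer;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    answer = generate(question, attempt);
    if (grade(question, answer) >= 0.8) break; // good enough: stop
  }
  return answer;
}

// Stubs: the second attempt produces a "better" answer.
const generate = (q, attempt) => (attempt === 1 ? "draft" : "final answer");
const grade = (q, answer) => (answer === "final answer" ? 0.9 : 0.3);

answerWithRetry("How do refunds work?", generate, grade); // "final answer"
```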
