Key Takeaways
- LangChainJS is a powerful JavaScript framework that enables developers to build and experiment with AI-driven language models and agents, seamlessly integrating into web applications.
- The framework allows for the creation of agents that can utilize various tools and data sources to perform complex language tasks, such as internet searches and mathematical calculations, enhancing response accuracy and relevance.
- LangChain supports a variety of models, including language models for simple text output, chat models for interactive conversations, and embedding models for converting text to numerical vectors, facilitating diverse NLP applications.
- Text data can be efficiently managed and processed through customizable chunking methods, ensuring optimal performance and context relevance when handling large texts.
- Beyond using OpenAI models, LangChain is compatible with other LLMs and AI services, providing flexibility and expanded capabilities for developers exploring different AI integrations in their projects.
In this comprehensive guide, we’ll dive deep into the essential components of LangChain and demonstrate how to harness its power in JavaScript.
LangChainJS is a versatile JavaScript framework that empowers developers and researchers to create, experiment with, and analyze language models and agents. It offers a rich set of features for natural language processing (NLP) enthusiasts, from building custom models to manipulating text data efficiently. As a JavaScript framework, it also allows developers to easily integrate their AI applications into web apps.
Prerequisites
To follow along with this article, create a new folder and install the LangChain npm package:
npm install -S langchain
After creating a new folder, create a new JS module file by using the .mjs suffix (such as test1.mjs).
Agents
In LangChain, an agent is an entity that can understand and generate text. These agents can be configured with specific behaviors and data sources and trained to perform various language-related tasks, making them versatile tools for a wide range of applications.
Creating a LangChain agent
Agents can be configured to use “tools” to gather the data they need and formulate a good response. Take a look at the example below. It uses SerpAPI (an internet search API) to search the Internet for information relevant to the question or input, and uses that to formulate a response. It also uses the llm-math tool to perform mathematical operations, such as converting units or finding the percentage change between two values:
import { initializeAgentExecutorWithOptions } from "langchain/agents";
import { ChatOpenAI } from "langchain/chat_models/openai";
import { SerpAPI } from "langchain/tools";
import { Calculator } from "langchain/tools/calculator";

process.env["OPENAI_API_KEY"] = "YOUR_OPENAI_KEY";
process.env["SERPAPI_API_KEY"] = "YOUR_SERPAPI_KEY";

// Give the agent internet search (SerpAPI) and math (Calculator) tools
const tools = [new Calculator(), new SerpAPI()];
const model = new ChatOpenAI({ modelName: "gpt-3.5-turbo", temperature: 0 });

const executor = await initializeAgentExecutorWithOptions(tools, model, {
  agentType: "openai-functions",
  verbose: false,
});

const result = await executor.run(
  "By searching the Internet, find how many albums Boldy James has dropped since 2010 and how many albums Nas has dropped since 2010. Find who dropped more albums and show the difference in percent."
);
console.log(result);
After creating the model variable using modelName: "gpt-3.5-turbo" and temperature: 0, we create the executor that combines the created model with the specified tools (SerpAPI and Calculator). In the input, I’ve asked the LLM to search the Internet (using SerpAPI) and find which artist has dropped more albums since 2010, Nas or Boldy James, and show the percentage difference (using Calculator).
In this example, I had to explicitly tell the LLM “By searching the Internet…” to make it fetch up-to-date data through SerpAPI instead of relying on OpenAI’s default training data, which is limited to 2021.
Here’s what the output looks like:
> node test1.mjs
Boldy James has released 4 albums since 2010. Nas has released 17 studio albums since 2010.
Therefore, Nas has released more albums than Boldy James. The difference in the number of albums is 13.
To calculate the difference in percent, we can use the formula: (Difference / Total) * 100.
In this case, the difference is 13 and the total is 17.
The difference in percent is: (13 / 17) * 100 = 76.47%.
So, Nas has released 76.47% more albums than Boldy James since 2010.
Models
There are three types of models in LangChain: LLMs, chat models, and text embedding models. Let’s explore each type with some examples.
Language model
LangChain provides a way to use language models in JavaScript to produce a text output based on a text input. It’s not as complex as a chat model, and it’s best used for simple input–output language tasks. Here’s an example using OpenAI:
import { OpenAI } from "langchain/llms/openai";
const llm = new OpenAI({
  openAIApiKey: "YOUR_OPENAI_KEY",
  modelName: "gpt-3.5-turbo",
  temperature: 0,
});
const res = await llm.call("List all red berries");
console.log(res);
As you can see, it uses the gpt-3.5-turbo model to list all red berries. In this example, I set the temperature to 0 to make the output as deterministic and fact-focused as possible. Output:
1. Strawberries
2. Cranberries
3. Raspberries
4. Redcurrants
5. Red Gooseberries
6. Red Elderberries
7. Red Huckleberries
8. Red Mulberries
Chat model
If you want more sophisticated answers and conversations, you need to use chat models. How are chat models technically different from language models? Well, in the words of the LangChain documentation:
Chat models are a variation on language models. While chat models use language models under the hood, the interface they use is a bit different. Rather than using a “text in, text out” API, they use an interface where “chat messages” are the inputs and outputs.
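To make the message-based interface concrete, here’s a minimal sketch. I’m assuming the HumanMessage and SystemMessage classes that recent versions of the langchain package export from langchain/schema:

import { ChatOpenAI } from "langchain/chat_models/openai";
import { HumanMessage, SystemMessage } from "langchain/schema";

const chat = new ChatOpenAI({ openAIApiKey: "YOUR_OPENAI_KEY", temperature: 0 });

// "Chat messages" in, chat message out: the model takes an array of typed
// messages and returns an AIMessage
const response = await chat.call([
  new SystemMessage("You are a helpful assistant."),
  new HumanMessage("What is the capital of France?"),
]);
console.log(response.content);

The system message sets the assistant’s behavior, while the human message carries the actual question.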
Here’s a simple (pretty useless but fun) JavaScript chat model script:
import { ChatOpenAI } from "langchain/chat_models/openai";
import { PromptTemplate } from "langchain/prompts";
const chat = new ChatOpenAI({
  openAIApiKey: "YOUR_OPENAI_KEY",
  modelName: "gpt-3.5-turbo",
  temperature: 0,
});
const prompt = PromptTemplate.fromTemplate(`You are a poetic assistant that always answers in rhymes: {question}`);
const runnable = prompt.pipe(chat);
const response = await runnable.invoke({ question: "Who is better, Djokovic, Federer or Nadal?" });
console.log(response);
As you can see, the code builds a prompt that tells the chatbot to be a poetic assistant that always answers in rhymes, pipes that prompt into the chat model, and then invokes the resulting chain with a question asking who’s the better tennis player: Djokovic, Federer or Nadal. If you run this chatbot model, you’ll see something like this:
AIMessage.content:
'In the realm of tennis, they all shine bright,\n' +
'Djokovic, Federer, and Nadal, a glorious sight.\n' +
'Each with their unique style and skill,\n' +
'Choosing the best is a difficult thrill.\n' +
'\n' +
'Djokovic, the Serb, a master of precision,\n' +
'With agility and focus, he plays with decision.\n' +
'His powerful strokes and relentless drive,\n' +
"Make him a force that's hard to survive.\n" +
'\n' +
'Federer, the Swiss maestro, a true artist,\n' +
'Graceful and elegant, his game is the smartest.\n' +
'His smooth technique and magical touch,\n' +
'Leave spectators in awe, oh so much.\n' +
'\n' +
'Nadal, the Spaniard, a warrior on clay,\n' +
'His fierce determination keeps opponents at bay.\n' +
'With his relentless power and never-ending fight,\n' +
'He conquers the court, with all his might.\n' +
'\n' +
"So, who is better? It's a question of taste,\n" +
"Each player's greatness cannot be erased.\n" +
"In the end, it's the love for the game we share,\n" +
'That makes them all champions, beyond compare.'
Pretty cool!
Embeddings
Embedding models provide a way to turn the words and numbers in a text into vectors (arrays of floating-point numbers) that can then be compared with the vectors of other texts to measure how related they are. This may sound abstract, so let’s look at an example:
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
process.env["OPENAI_API_KEY"] = "YOUR_OPENAI_KEY"
const embeddings = new OpenAIEmbeddings();
const res = await embeddings.embedQuery("Who created the world wide web?");
console.log(res)
This will return a long list of floats:
[
0.02274114, -0.012759142, 0.004794503, -0.009431809, 0.01085313,
0.0019698727, -0.013649924, 0.014933698, -0.0038185727, -0.025400387,
0.010794181, 0.018680222, 0.020042595, 0.004303263, 0.019937797,
0.011226473, 0.009268062, 0.016125774, 0.0116391145, -0.0061765253,
-0.0073358514, 0.00021696436, 0.004896026, 0.0034026562, -0.018365828,
... 1501 more items
]
This is what an embedding looks like. All of those floats for just six words!
This embedding can then be used to associate the input text with potential answers, related texts, names and more.
Now let’s look at a use case of embedding models. Here’s a script that takes the question “What is the heaviest animal?” and finds the right answer in a provided list of possible answers by using embeddings:
import { OpenAIEmbeddings } from "langchain/embeddings/openai";

process.env["OPENAI_API_KEY"] = "YOUR_OPENAI_KEY";

const embeddings = new OpenAIEmbeddings();

// Cosine similarity: how closely two embedding vectors point in the same direction
function cosinesim(A, B) {
  let dotproduct = 0;
  let mA = 0;
  let mB = 0;
  for (let i = 0; i < A.length; i++) {
    dotproduct += A[i] * B[i];
    mA += A[i] * A[i];
    mB += B[i] * B[i];
  }
  return dotproduct / (Math.sqrt(mA) * Math.sqrt(mB));
}

// Embed each candidate answer
const text_arr = [
  "The Blue Whale is the heaviest animal in the world",
  "George Orwell wrote 1984",
  "Random stuff",
];
const res_arr = await Promise.all(text_arr.map((text) => embeddings.embedQuery(text)));

// Embed the question and score each candidate against it
const question = await embeddings.embedQuery("What is the heaviest animal?");
const sims = res_arr.map((vec) => cosinesim(question, vec));

// Print the candidate whose embedding is most similar to the question's
console.log(text_arr[sims.indexOf(Math.max(...sims))]);
This code uses the cosinesim(A, B) function to compute how related each candidate answer is to the question. It then finds the highest similarity score with Math.max(...sims) and uses its index to look up the matching text: text_arr[sims.indexOf(Math.max(...sims))].
Output:
The Blue Whale is the heaviest animal in the world
Chunks
LangChain models can’t take arbitrarily large texts as input and use them to make responses. This is where chunks and text splitting come in. Let me show you two simple methods to split your text data into chunks before feeding it into LangChain.
Splitting chunks by character
To avoid abrupt breaks in chunks, you can split your texts by paragraph by splitting them at every occurrence of a newline:
import { Document } from "langchain/document";
import { CharacterTextSplitter } from "langchain/text_splitter";
const splitter = new CharacterTextSplitter({
  separator: "\n",
  chunkSize: 7,
  chunkOverlap: 3,
});

// your_text holds the text you want to split
const output = await splitter.createDocuments([your_text]);
This is one useful way of splitting a text. However, you can use any string as a chunk separator, not just \n.
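For example, here’s a hypothetical variation (the sentenceSplitter name and parameter values are my own choices) that breaks the text on sentence-ending periods instead of newlines:

import { CharacterTextSplitter } from "langchain/text_splitter";

// Split on ". " so chunks tend to end at sentence boundaries
const sentenceSplitter = new CharacterTextSplitter({
  separator: ". ",
  chunkSize: 200,
  chunkOverlap: 20,
});

const output = await sentenceSplitter.createDocuments([your_text]);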
Recursively splitting chunks
If you want to strictly split your text by a certain length of characters, you can do so using RecursiveCharacterTextSplitter:
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 100,
  chunkOverlap: 15,
});
const output = await splitter.createDocuments([your_text]);
In this example, the text gets split into chunks of at most 100 characters, with an overlap of 15 characters between consecutive chunks.
Chunk size and overlap
By looking at those examples, you’ve probably started wondering exactly what the chunk size and overlap parameters mean, and what implications they have on performance. Well, let me explain it simply in two points.
Chunk size decides the number of characters in each chunk. The bigger the chunk size, the more data each chunk carries, and the more time it will take LangChain to process it and produce an output, and vice versa.
Chunk overlap repeats a slice of text across the boundary between consecutive chunks so that they share some context. The higher the chunk overlap, the more redundant your chunks will be; the lower the chunk overlap, the less context will be shared between the chunks. Generally, a good chunk overlap is between 10% and 20% of the chunk size, although the ideal chunk overlap varies across different text types and use cases. The sketch below makes the overlap visible.
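Here’s a minimal sketch reusing RecursiveCharacterTextSplitter, with toy numbers I’ve picked purely for illustration. The exact boundaries depend on the splitter’s separators, but adjacent chunks will repeat up to chunkOverlap characters of text:

import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";

// Tiny chunks and a visible overlap, purely for demonstration
const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 40,
  chunkOverlap: 10,
});

const docs = await splitter.createDocuments([
  "LangChain splits long texts into overlapping chunks so each chunk keeps some context from its neighbor.",
]);

// Each doc.pageContent is one chunk; look for repeated words at the edges
docs.forEach((doc, i) => console.log(`Chunk ${i}: ${doc.pageContent}`));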
Chains
Chains are basically multiple LLM functionalities linked together to perform more complex tasks that couldn’t otherwise be done in a simple LLM input-->output fashion. Let’s look at a cool example:
import { ChatPromptTemplate } from "langchain/prompts";
import { LLMChain } from "langchain/chains";
import { ChatOpenAI } from "langchain/chat_models/openai";
process.env["OPENAI_API_KEY"] = "YOUR_OPENAI_KEY"
const wiki_text = `
Alexander Stanislavovich 'Sasha' Bublik (Александр Станиславович Бублик; born 17 June 1997) is a Kazakhstani professional tennis player.
He has been ranked as high as world No. 25 in singles by the Association of Tennis Professionals (ATP), which he achieved in July 2023, and is the current Kazakhstani No. 1 player...
Alexander Stanislavovich Bublik was born on 17 June 1997 in Gatchina, Russia and began playing tennis at the age of four. He was coached by his father, Stanislav. On the junior tour, Bublik reached a career-high ranking of No. 19 and won eleven titles (six singles and five doubles) on the International Tennis Federation (ITF) junior circuit.[4][5]...
`
const chat = new ChatOpenAI({ temperature: 0 });
const chatPrompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant that {action} the provided text"],
  ["human", "{text}"],
]);

const chainB = new LLMChain({
  prompt: chatPrompt,
  llm: chat,
});

const resB = await chainB.call({
  action: "lists all important numbers from",
  text: wiki_text,
});
console.log({ resB });
This code injects two variables into its prompt ({action} and {text}) and formulates a deterministic, fact-focused answer (temperature: 0). In this example, I asked the LLM to list all important numbers from a short Wiki bio of my favorite tennis player.
Here’s the output of this code:
{
  resB: {
    text: 'Important numbers from the provided text:\n' +
      '\n' +
      "- Alexander Stanislavovich 'Sasha' Bublik's date of birth: 17 June 1997\n" +
      "- Bublik's highest singles ranking: world No. 25\n" +
      "- Bublik's highest doubles ranking: world No. 47\n" +
      "- Bublik's career ATP Tour singles titles: 3\n" +
      "- Bublik's career ATP Tour singles runner-up finishes: 6\n" +
      "- Bublik's height: 1.96 m (6 ft 5 in)\n" +
      "- Bublik's number of aces served in the 2021 ATP Tour season: unknown\n" +
      "- Bublik's junior tour ranking: No. 19\n" +
      "- Bublik's junior tour titles: 11 (6 singles and 5 doubles)\n" +
      "- Bublik's previous citizenship: Russia\n" +
      "- Bublik's current citizenship: Kazakhstan\n" +
      "- Bublik's role in the Levitov Chess Wizards team: reserve member"
  }
}
Pretty cool, but this doesn’t really show the full power of chains. Let’s take a look at a more practical example:
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";
import { ChatOpenAI } from "langchain/chat_models/openai";
import {
  ChatPromptTemplate,
  SystemMessagePromptTemplate,
  HumanMessagePromptTemplate,
} from "langchain/prompts";
import { JsonOutputFunctionsParser } from "langchain/output_parsers";

process.env["OPENAI_API_KEY"] = "YOUR_OPENAI_KEY";

// The schema that the LLM's output must conform to
const zodSchema = z.object({
  albums: z
    .array(
      z.object({
        name: z.string().describe("The name of the album"),
        artist: z.string().describe("The artist(s) that made the album"),
        length: z.number().describe("The length of the album in minutes"),
        genre: z.string().optional().describe("The genre of the album"),
      })
    )
    .describe("An array of music albums mentioned in the text"),
});

const prompt = new ChatPromptTemplate({
  promptMessages: [
    SystemMessagePromptTemplate.fromTemplate(
      "List all music albums mentioned in the following text."
    ),
    HumanMessagePromptTemplate.fromTemplate("{inputText}"),
  ],
  inputVariables: ["inputText"],
});

const llm = new ChatOpenAI({ modelName: "gpt-3.5-turbo", temperature: 0 });

// Force the model to answer through the output_formatter function,
// whose parameters are the JSON Schema generated from zodSchema
const functionCallingModel = llm.bind({
  functions: [
    {
      name: "output_formatter",
      description: "Should always be used to properly format output",
      parameters: zodToJsonSchema(zodSchema),
    },
  ],
  function_call: { name: "output_formatter" },
});

const outputParser = new JsonOutputFunctionsParser();
const chain = prompt.pipe(functionCallingModel).pipe(outputParser);

const response = await chain.invoke({
  inputText: "My favorite albums are: 2001, To Pimp a Butterfly and Led Zeppelin IV",
});

console.log(JSON.stringify(response, null, 2));
This code reads an input text, identifies all mentioned music albums, identifies each album’s name, artist, length and genre, and finally puts all the data into JSON format. Here’s the output given the input “My favorite albums are: 2001, To Pimp a Butterfly and Led Zeppelin IV”:
{
  "albums": [
    {
      "name": "2001",
      "artist": "Dr. Dre",
      "length": 68,
      "genre": "Hip Hop"
    },
    {
      "name": "To Pimp a Butterfly",
      "artist": "Kendrick Lamar",
      "length": 79,
      "genre": "Hip Hop"
    },
    {
      "name": "Led Zeppelin IV",
      "artist": "Led Zeppelin",
      "length": 42,
      "genre": "Rock"
    }
  ]
}
This is just a fun example, but this technique can be used to structure unstructured text data for countless other applications.
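For instance, swapping in a different schema is all it takes to extract a different kind of data. Here’s a hypothetical alternative (the contactSchema name and its fields are my own invention) that would pull contact details out of free-form text if you used it in place of zodSchema above and adjusted the system prompt to match:

import { z } from "zod";

const contactSchema = z.object({
  contacts: z
    .array(
      z.object({
        name: z.string().describe("The person's full name"),
        email: z.string().optional().describe("The person's email address"),
        company: z.string().optional().describe("The company they work for"),
      })
    )
    .describe("An array of contacts mentioned in the text"),
});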
Going Beyond OpenAI
Even though I keep using OpenAI models as examples of the different functionalities of LangChain, it isn’t limited to OpenAI models. You can use LangChain with a multitude of other LLMs and AI services. You can find the full list of LLMs that integrate with LangChain in JavaScript in their documentation.
For example, you can use Cohere with LangChain. After installing Cohere using npm install cohere-ai, you can make a simple question-->answer script using LangChain and Cohere like this:
import { Cohere } from "langchain/llms/cohere";
const model = new Cohere({
  maxTokens: 50,
  apiKey: "YOUR_COHERE_KEY", // In Node.js defaults to process.env.COHERE_API_KEY
});

const res = await model.call(
  "Come up with a name for a new Nas album"
);
console.log({ res });
Output:
{
  res: ' Here are a few possible names for a new Nas album:\n' +
    '\n' +
    "- King's Landing\n" +
    "- God's Son: The Sequel\n" +
    "- Street's Disciple\n" +
    '- Izzy Free\n' +
    '- Nas and the Illmatic Flow\n' +
    '\n' +
    'Do any'
}
Conclusion
In this guide, you’ve seen the different aspects and functionalities of LangChain in JavaScript. You can use LangChain in JavaScript to easily develop AI-powered web apps and experiment with LLMs. Be sure to refer to the LangChainJS documentation for more details on specific functionalities.
Happy coding and experimenting with LangChain in JavaScript! If you enjoyed this article, you might also like to read about using LangChain with Python.