How to cancel execution
This guide assumes familiarity with the following concepts:
- LangChain Expression Language (LCEL)
- Chains
- Streaming
When building longer-running chains or LangGraph agents, you may want to interrupt execution in situations such as a user leaving your app or submitting a new query.
LangChain Expression Language (LCEL) supports aborting runnables that are in-progress via a runtime signal option.
Built-in signal support requires @langchain/core>=0.2.20. Please see here for a guide on upgrading.
Note: Individual integrations like chat models or retrievers may have missing or differing implementations for aborting execution. Signal support as described in this guide will apply in between steps of a chain.
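Before building a full chain, here is a minimal sketch of the pattern using plain RunnableLambda steps (the step names and the delay below are made up for illustration). Aborting the controller causes the in-flight invoke call to reject, and the second step never runs:
import { RunnableLambda } from "@langchain/core/runnables";

// A hypothetical two-step chain: a slow first step piped into a second step.
const slowStep = RunnableLambda.from(async (input: string) => {
  await new Promise((resolve) => setTimeout(resolve, 5000));
  return input.toUpperCase();
});
const formatStep = RunnableLambda.from((input: string) => `Result: ${input}`);
const demoChain = slowStep.pipe(formatStep);

const demoController = new AbortController();
setTimeout(() => demoController.abort(), 100);

try {
  // Rejects shortly after the signal fires; `formatStep` is never run.
  await demoChain.invoke("hello", { signal: demoController.signal });
} catch (e) {
  console.log(e);
}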
To see how this works, construct a chain such as the one below that performs retrieval-augmented generation. It answers questions by first searching the web using Tavily, then passing the results to a chat model to generate a final answer:
Pick your chat model:
- OpenAI
- Anthropic
- FireworksAI
- MistralAI
- Groq
- VertexAI
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/openai
yarn add @langchain/openai
pnpm add @langchain/openai
Add environment variables
OPENAI_API_KEY=your-api-key
Instantiate the model
import { ChatOpenAI } from "@langchain/openai";
const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/anthropic
yarn add @langchain/anthropic
pnpm add @langchain/anthropic
Add environment variables
ANTHROPIC_API_KEY=your-api-key
Instantiate the model
import { ChatAnthropic } from "@langchain/anthropic";
const model = new ChatAnthropic({
  model: "claude-3-5-sonnet-20240620",
  temperature: 0,
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/community
yarn add @langchain/community
pnpm add @langchain/community
Add environment variables
FIREWORKS_API_KEY=your-api-key
Instantiate the model
import { ChatFireworks } from "@langchain/community/chat_models/fireworks";
const model = new ChatFireworks({
  model: "accounts/fireworks/models/llama-v3p1-70b-instruct",
  temperature: 0,
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/mistralai
yarn add @langchain/mistralai
pnpm add @langchain/mistralai
Add environment variables
MISTRAL_API_KEY=your-api-key
Instantiate the model
import { ChatMistralAI } from "@langchain/mistralai";
const model = new ChatMistralAI({
  model: "mistral-large-latest",
  temperature: 0,
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/groq
yarn add @langchain/groq
pnpm add @langchain/groq
Add environment variables
GROQ_API_KEY=your-api-key
Instantiate the model
import { ChatGroq } from "@langchain/groq";
const model = new ChatGroq({
  model: "mixtral-8x7b-32768",
  temperature: 0,
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/google-vertexai
yarn add @langchain/google-vertexai
pnpm add @langchain/google-vertexai
Add environment variables
GOOGLE_APPLICATION_CREDENTIALS=credentials.json
Instantiate the model
import { ChatVertexAI } from "@langchain/google-vertexai";
const model = new ChatVertexAI({
  model: "gemini-1.5-flash",
  temperature: 0,
});
// This is the chat model referenced as `llm` in the chain below.
import { ChatAnthropic } from "@langchain/anthropic";

const llm = new ChatAnthropic({
  model: "claude-3-5-sonnet-20240620",
});
import { TavilySearchAPIRetriever } from "@langchain/community/retrievers/tavily_search_api";
import type { Document } from "@langchain/core/documents";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import {
  RunnablePassthrough,
  RunnableSequence,
} from "@langchain/core/runnables";

const formatDocsAsString = (docs: Document[]) => {
  return docs.map((doc) => doc.pageContent).join("\n\n");
};

const retriever = new TavilySearchAPIRetriever({
  k: 3,
});

const prompt = ChatPromptTemplate.fromTemplate(`
Use the following context to answer questions to the best of your ability:
<context>
{context}
</context>
Question: {question}`);

const chain = RunnableSequence.from([
  {
    context: retriever.pipe(formatDocsAsString),
    question: new RunnablePassthrough(),
  },
  prompt,
  llm,
  new StringOutputParser(),
]);
If you invoke it normally, you can see it returns up-to-date information:
await chain.invoke("what is the current weather in SF?");
Based on the provided context, the current weather in San Francisco is:
Temperature: 17.6°C (63.7°F)
Condition: Sunny
Wind: 14.4 km/h (8.9 mph) from WSW direction
Humidity: 74%
Cloud cover: 15%
The information indicates it's a sunny day with mild temperatures and light winds. The data appears to be from August 2, 2024, at 17:00 local time.
Now, let's interrupt it early. Initialize an AbortController and pass its signal property into the chain execution. To illustrate that cancellation takes effect as soon as possible, set a timeout of 100ms:
const controller = new AbortController();

console.time("timer1");

setTimeout(() => controller.abort(), 100);

try {
  await chain.invoke("what is the current weather in SF?", {
    signal: controller.signal,
  });
} catch (e) {
  console.log(e);
}

console.timeEnd("timer1");
Error: Aborted
at EventTarget.<anonymous> (/Users/jacoblee/langchain/langchainjs/langchain-core/dist/utils/signal.cjs:19:24)
at [nodejs.internal.kHybridDispatch] (node:internal/event_target:825:20)
at EventTarget.dispatchEvent (node:internal/event_target:760:26)
at abortSignal (node:internal/abort_controller:370:10)
at AbortController.abort (node:internal/abort_controller:392:5)
at Timeout._onTimeout (evalmachine.<anonymous>:7:29)
at listOnTimeout (node:internal/timers:573:17)
at process.processTimers (node:internal/timers:514:7)
timer1: 103.204ms
And you can see that execution ends after just over 100ms. Looking at this LangSmith trace, you can see that the model is never called.
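In an application, you will likely want to treat cancellation differently from a genuine failure. Here is one possible sketch, assuming aborted runs surface as an error with the "Aborted" message shown in the stack trace above:
try {
  await chain.invoke("what is the current weather in SF?", {
    signal: controller.signal,
  });
} catch (e) {
  // Assumption: aborted runs throw an Error whose message is "Aborted",
  // as in the trace above; anything else is treated as a real failure.
  if (e instanceof Error && e.message.includes("Aborted")) {
    console.log("Run was cancelled.");
  } else {
    throw e;
  }
}
If you only need a deadline rather than manual control, modern runtimes also let you create a pre-timed signal with AbortSignal.timeout(ms) instead of wiring up setTimeout yourself.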
Streaming
You can pass a signal when streaming too. This gives you more control than using a break statement within the for await... of loop to cancel the current run, which will only trigger after final output has already started streaming. The example below uses a break statement - note the time elapsed before cancellation occurs:
console.time("timer2");

const stream = await chain.stream("what is the current weather in SF?");

for await (const chunk of stream) {
  console.log("chunk", chunk);
  break;
}

console.timeEnd("timer2");
chunk
timer2: 3.990s
Now compare this to using a signal. Note that you will need to wrap the stream in a try/catch block:
const controllerForStream = new AbortController();

console.time("timer3");

setTimeout(() => controllerForStream.abort(), 100);

try {
  const streamWithSignal = await chain.stream(
    "what is the current weather in SF?",
    {
      signal: controllerForStream.signal,
    }
  );
  for await (const chunk of streamWithSignal) {
    console.log(chunk);
    break;
  }
} catch (e) {
  console.log(e);
}

console.timeEnd("timer3");
Error: Aborted
at EventTarget.<anonymous> (/Users/jacoblee/langchain/langchainjs/langchain-core/dist/utils/signal.cjs:19:24)
at [nodejs.internal.kHybridDispatch] (node:internal/event_target:825:20)
at EventTarget.dispatchEvent (node:internal/event_target:760:26)
at abortSignal (node:internal/abort_controller:370:10)
at AbortController.abort (node:internal/abort_controller:392:5)
at Timeout._onTimeout (evalmachine.<anonymous>:7:38)
at listOnTimeout (node:internal/timers:573:17)
at process.processTimers (node:internal/timers:514:7)
timer3: 100.684ms
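Finally, here is a minimal sketch of the pattern from the introduction, where a new user query cancels whichever run is still in flight. It assumes the chain constructed earlier in this guide; the ask helper is hypothetical:
let currentController: AbortController | undefined;

async function ask(question: string) {
  // Cancel any run that is still in flight before starting a new one.
  currentController?.abort();
  currentController = new AbortController();
  try {
    return await chain.invoke(question, { signal: currentController.signal });
  } catch (e) {
    // An "Aborted" error here just means a newer query superseded this run.
    console.log(e);
  }
}
Calling ask again while a previous invocation is still running will abort the earlier run before starting the new one.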