Getting Started: Chains
Using a single language model in isolation is fine for some applications, but it is often useful to combine language models with other sources of information, such as third-party APIs, or even with other language models.
This is the idea behind chains.
LangChain provides a standard interface for chains, along with several built-in chains that you can use out of the box. You can also create your own chains.
📄️ LLM Chain
Conceptual guide
🗃️ Index-related chains
3 items
📄️ Sequential chains
Sequential chains let you connect multiple chains and compose them into pipelines that carry out specific scenarios.
🗃️ Other chains
8 items
📄️ Prompt selectors
Conceptual guide
Advanced
To implement your own custom chain, you can subclass BaseChain and implement the following methods:
import { CallbackManagerForChainRun } from "langchain/callbacks";
import { BaseChain as _ } from "langchain/chains";
import { BaseMemory } from "langchain/memory";
import { ChainValues } from "langchain/schema";

abstract class BaseChain {
  memory?: BaseMemory;

  /**
   * Run the core logic of this chain and return the output
   */
  abstract _call(
    values: ChainValues,
    runManager?: CallbackManagerForChainRun
  ): Promise<ChainValues>;

  /**
   * Return the string type key uniquely identifying this class of chain.
   */
  abstract _chainType(): string;

  /**
   * Return the list of input keys this chain expects to receive when called.
   */
  abstract get inputKeys(): string[];

  /**
   * Return the list of output keys this chain will produce when called.
   */
  abstract get outputKeys(): string[];
}
Subclassing BaseChain
The _call method is the main method a custom chain must implement. It takes a record of inputs and returns a record of outputs. The inputs it receives should conform to the inputKeys array, and the outputs it returns should conform to the outputKeys array.
When implementing this method in a custom chain, the runManager parameter deserves special attention: it allows your custom chain to participate in the same callbacks system as the built-in chains.
If you call another chain, model, or agent inside your custom chain, you should pass it the result of calling runManager?.getChild(), which produces a new callback manager scoped to that inner run. For example:
import { BasePromptTemplate, PromptTemplate } from "langchain/prompts";
import { BaseLanguageModel } from "langchain/base_language";
import { CallbackManagerForChainRun } from "langchain/callbacks";
import { BaseChain, ChainInputs } from "langchain/chains";
import { ChainValues } from "langchain/schema";

export interface MyCustomChainInputs extends ChainInputs {
  llm: BaseLanguageModel;
  promptTemplate: string;
}

export class MyCustomChain extends BaseChain implements MyCustomChainInputs {
  llm: BaseLanguageModel;

  promptTemplate: string;

  prompt: BasePromptTemplate;

  constructor(fields: MyCustomChainInputs) {
    super(fields);
    this.llm = fields.llm;
    this.promptTemplate = fields.promptTemplate;
    this.prompt = PromptTemplate.fromTemplate(this.promptTemplate);
  }

  async _call(
    values: ChainValues,
    runManager?: CallbackManagerForChainRun
  ): Promise<ChainValues> {
    // Your custom chain logic goes here
    // This is just an example that mimics LLMChain
    const promptValue = await this.prompt.formatPromptValue(values);

    // Whenever you call a language model, or another chain, you should pass
    // a callback manager to it. This allows the inner run to be tracked by
    // any callbacks that are registered on the outer run.
    // You can always obtain a callback manager for this by calling
    // `runManager?.getChild()` as shown below.
    const result = await this.llm.generatePrompt(
      [promptValue],
      {},
      runManager?.getChild()
    );

    // If you want to log something about this run, you can do so by calling
    // methods on the runManager, as shown below. This will trigger any
    // callbacks that are registered for that event.
    runManager?.handleText("Log something about this run");

    return { output: result.generations[0][0].text };
  }

  _chainType(): string {
    return "my_custom_chain";
  }

  get inputKeys(): string[] {
    return ["input"];
  }

  get outputKeys(): string[] {
    return ["output"];
  }
}