Hierarchy

Constructors

Properties

llm: BaseLanguageModelInterface
outputKey: string = "text"
outputParser: BaseLLMOutputParser<EvalOutputType> = ...
prompt: BasePromptTemplate
requiresInput: boolean = true
requiresReference: boolean = false
skipReferenceWarning: string = ...
criterionName?: string
evaluationName?: string = ...
llmKwargs?: any
memory?: any
skipInputWarning?: string = ...

Accessors

Methods

  • Parameters

    • inputs: ChainValues[]
    • Optional config: any[]

    Returns Promise<ChainValues[]>

    ⚠️ Deprecated ⚠️

    Use .batch() instead. Will be removed in 0.2.0.

Calls the chain on all inputs in the list.

  • Runs the core logic of this chain and optionally adds the results to the output.

    Wraps _call and handles memory.

    Parameters

    • values: any
    • Optional config: any

    Returns Promise<ChainValues>

  • Checks that the evaluation arguments are valid.

    Parameters

    • Optional reference: string

      The reference label.

    • Optional input: string

      The input string.

    Returns void

    Throws

    If the evaluator requires an input string but none is provided, or if the evaluator requires a reference label but none is provided.
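The validation described above can be sketched in plain TypeScript. This is an illustration of the documented behavior, not the library's actual source; the `EvaluatorFlags` interface stands in for the `requiresInput` and `requiresReference` properties listed earlier:

```typescript
// Simplified sketch of the argument validation described above:
// throw if a required input or reference label is missing.
interface EvaluatorFlags {
  requiresInput: boolean;
  requiresReference: boolean;
}

function checkEvaluationArgs(
  evaluator: EvaluatorFlags,
  reference?: string,
  input?: string
): void {
  if (evaluator.requiresInput && input == null) {
    throw new Error("This evaluator requires an input string.");
  }
  if (evaluator.requiresReference && reference == null) {
    throw new Error("This evaluator requires a reference label.");
  }
}
```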

  • Evaluates chain or LLM output against an optional input and reference label.

    Parameters

    • args: StringEvaluatorArgs
    • Optional config: any

    Returns Promise<ChainValues>

    The evaluation results containing the score or value. It is recommended that the dictionary contain the following keys:

    • score: the score of the evaluation, if applicable.
    • value: the string value of the evaluation, if applicable.
    • reasoning: the reasoning for the evaluation, if applicable.
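
The recommended keys can be written down as a TypeScript type. This is illustrative only; the actual return value is a loosely typed ChainValues record, so none of these fields are guaranteed to be present:

```typescript
// Illustrative shape of the recommended evaluation result keys.
// The real return type is ChainValues (a loose record); this type
// only documents the convention described above.
interface StringEvaluationResult {
  score?: number;     // numeric score, if applicable
  value?: string;     // string verdict, if applicable
  reasoning?: string; // reasoning text, if applicable
}

const example: StringEvaluationResult = {
  score: 1,
  value: "Y",
  reasoning: "The submission is concise and addresses the question.",
};
```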
  • Invokes the chain with the provided input and returns the output.

    Parameters

    • input: ChainValues

      Input values for the chain run.

    • Optional options: any

    Returns Promise<ChainValues>

    Promise that resolves with the output of the chain run.

  • Formats the prompt with the given values and passes it to the LLM.

    Parameters

    • values: any

      Keys to pass to the prompt template.

    • Optional callbackManager: any

      CallbackManager to use

    Returns Promise<EvalOutputType>

    The completion from the LLM.

    Example

    llm.predict({ adjective: "funny" })
    
  • Parameters

    • inputs: Record<string, unknown>
    • outputs: Record<string, unknown>
    • returnOnlyOutputs: boolean = false

    Returns Promise<Record<string, unknown>>

  • Parameters

    • input: any
    • Optional config: any

    Returns Promise<string>

    ⚠️ Deprecated ⚠️

    Use .invoke() instead. Will be removed in 0.2.0.

  • Creates a new instance of the CriteriaEvalChain.

    Parameters

    • llm: BaseLanguageModelInterface
    • Optional criteria: "detail" | ConstitutionalPrinciple | {
          [key: string]: string;
      } | "conciseness" | "relevance" | "correctness" | "coherence" | "harmfulness" | "maliciousness" | "helpfulness" | "controversiality" | "misogyny" | "criminality" | "insensitivity" | "depth" | "creativity"
    • Optional chainOptions: Partial<Omit<LLMEvalChainInput<EvalOutputType, BaseLanguageModelInterface>, "llm">>

      Options to pass to the constructor of the LLMChain.

    Returns Promise<CriteriaEvalChain>
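A typical call might look like the following sketch. The import paths and the ChatOpenAI model are assumptions that vary across LangChain versions; only `fromLLM` and `evaluateStrings` come from this page, and running it requires the relevant packages and API credentials:

```typescript
// Sketch of creating and using a CriteriaEvalChain.
// Import paths are assumptions and may differ by LangChain version.
import { ChatOpenAI } from "@langchain/openai";
import { CriteriaEvalChain } from "langchain/evaluation";

const llm = new ChatOpenAI({ temperature: 0 });

// "conciseness" is one of the built-in criterion names listed above.
const chain = await CriteriaEvalChain.fromLLM(llm, "conciseness");

const result = await chain.evaluateStrings({
  input: "What is 2 + 2?",
  prediction: "The answer you are looking for is four.",
});
// result is expected to carry the score / value / reasoning keys
// described under evaluateStrings above.
```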

  • Resolves the criteria to evaluate.

    Parameters

    • Optional criteria: "detail" | ConstitutionalPrinciple | {
          [key: string]: string;
      } | "conciseness" | "relevance" | "correctness" | "coherence" | "harmfulness" | "maliciousness" | "helpfulness" | "controversiality" | "misogyny" | "criminality" | "insensitivity" | "depth" | "creativity"

      The criteria to evaluate the runs against. It can be:

      • a mapping of a criterion name to its description
      • a single criterion name present in one of the default criteria
      • a single ConstitutionalPrinciple instance

    Returns Record<string, string>

    A dictionary mapping criterion names to descriptions.
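
The resolution logic can be sketched as follows. This is a simplified illustration of the three accepted input forms, not the library's implementation, and the criterion descriptions are placeholder wording:

```typescript
// Simplified sketch of criteria resolution. Description texts are
// illustrative placeholders, not LangChain's actual wording.
const DEFAULT_CRITERIA: Record<string, string> = {
  conciseness: "Is the submission concise and to the point?",
  relevance: "Is the submission relevant to the input?",
  correctness: "Is the submission correct and accurate?",
};

// Stand-in for a ConstitutionalPrinciple instance.
interface PrincipleLike {
  name: string;
  critiqueRequest: string;
}

function resolveCriteria(
  criteria?: string | PrincipleLike | Record<string, string>
): Record<string, string> {
  if (criteria == null) {
    // Fall back to a default criterion when none is given.
    return { helpfulness: "Is the submission helpful?" };
  }
  if (typeof criteria === "string") {
    // A single criterion name from the default set.
    return {
      [criteria]: DEFAULT_CRITERIA[criteria] ?? "No description available.",
    };
  }
  if ("critiqueRequest" in criteria) {
    // A ConstitutionalPrinciple-like object.
    return { [criteria.name]: criteria.critiqueRequest };
  }
  // Already a mapping of criterion name to description.
  return criteria;
}
```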

Generated using TypeDoc