To use this model you need to have the node-llama-cpp module installed. It can be installed with npm install -S node-llama-cpp; the minimum supported version is 2.0.0. This also requires that you have a locally built version of Llama 2 installed.
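The install step as a single command (npm is assumed here; any Node package manager with an equivalent save flag works the same way):

npm install -S node-llama-cpp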

Example

// Import paths may differ depending on your LangChain version.
import { ChatLlamaCpp } from "@langchain/community/chat_models/llama_cpp";
import { HumanMessage } from "@langchain/core/messages";

// Initialize the ChatLlamaCpp model with the path to the model binary file.
const model = new ChatLlamaCpp({
  modelPath: "/Replace/with/path/to/your/model/gguf-llama2-q4_0.bin",
  temperature: 0.5,
});

// Call the model with a message and await the response.
const response = await model.invoke([
  new HumanMessage({ content: "My name is John." }),
]);

// Log the response to the console.
console.log({ response });
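A multi-message call works the same way. The sketch below is illustrative, assuming the same imports as above plus SystemMessage from @langchain/core/messages, and seeds the conversation with a system prompt:

// Provide a system prompt alongside the user message.
import { SystemMessage } from "@langchain/core/messages";

const chatResponse = await model.invoke([
  new SystemMessage({ content: "You are a helpful assistant." }),
  new HumanMessage({ content: "Summarize the benefits of running models locally." }),
]);

// The returned AIMessage exposes the generated text on .content.
console.log(chatResponse.content);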

Hierarchy

Constructors

Properties

maxTokens?: number
temperature?: number
topK?: number
topP?: number
trimWhitespaceSuffix?: boolean
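
All of these optional properties are sampling controls. A minimal sketch of setting them, assuming they can be supplied via constructor fields of the same name (the values are illustrative, not recommendations):

const tunedModel = new ChatLlamaCpp({
  modelPath: "/Replace/with/path/to/your/model/gguf-llama2-q4_0.bin",
  maxTokens: 256,              // cap the number of tokens generated per call
  temperature: 0.7,            // higher values increase randomness
  topK: 40,                    // sample only from the 40 most likely tokens
  topP: 0.9,                   // nucleus (top-P) sampling threshold
  trimWhitespaceSuffix: true,  // strip trailing whitespace from the output
});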

Methods

  • Returns {
        maxTokens: undefined | number;
        temperature: undefined | number;
        topK: undefined | number;
        topP: undefined | number;
        trimWhitespaceSuffix: undefined | boolean;
    }

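The returned object is the set of sampling parameters used at call time. A hedged usage sketch, assuming it is exposed through LangChain's conventional invocationParams() accessor (the method name is not shown in this listing):

// Inspect the sampling parameters that will accompany each call.
// invocationParams() is assumed here; adjust if your version exposes a different accessor.
const params = model.invocationParams();
console.log(params.temperature); // 0.5, as set in the constructor above
console.log(params.topK);        // undefined unless set explicitly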
