RTVI Client Web includes an LLM helper for common LLM tasks and workflows.

Using the LLM helper

import { VoiceClient, LLMHelper } from "realtime-ai";

const voiceClient = new VoiceClient({
  // ...
  services: {
    llm: "openai",
  },
});
const llmHelper = new LLMHelper({
  callbacks: {
    // ...
  },
});
voiceClient.registerHelper("llm", llmHelper);
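
Once registered, the helper can be looked up by its name anywhere you have a reference to the client:

const llmHelper = voiceClient.getHelper("llm") as LLMHelper;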

Actions

Each of the methods below is an abstraction over an RTVI action.

As with all actions, they can be awaited or chained with .then().

If the bot is unable to process an action, the client triggers the onMessageError callback and emits a MessageError event.
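
For example, assuming the client and helper from the setup above, an action can be awaited directly or chained, with failures reported through onMessageError. A minimal sketch (the logging is illustrative):

const voiceClient = new VoiceClient({
  // ...
  callbacks: {
    // Called when the bot cannot process an action
    onMessageError: (message) => console.error("Action error:", message),
  },
});

// Actions can be awaited...
const ctx = await llmHelper.getContext();

// ...or chained with .then()
llmHelper.getContext().then((context) => console.log(context));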

getContext()

llm:get_context

Retrieves the current LLM context from the bot. Returns Promise<LLMContext>.

const llmHelper = voiceClient.getHelper("llm") as LLMHelper;
const ctx = await llmHelper.getContext();

// >  { messages?: LLMContextMessage[]; tools?: []; }
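
The returned context can then be inspected directly, for example to log the conversation so far (a short sketch; assumes messages is populated):

const ctx = await llmHelper.getContext();
for (const msg of ctx.messages ?? []) {
  console.log(`${msg.role}: ${msg.content}`);
}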

setContext()

llm:set_context

Replaces the current LLM context with the provided one. Returns Promise<boolean>.

context (LLMContext, required)
LLMContext to set.

interrupt (boolean, default: false)
Interrupt the current conversation and apply the new context immediately.

const llmHelper = voiceClient.getHelper("llm") as LLMHelper;
await llmHelper.setContext(
  {
    messages: [
      {
        role: "system",
        content: "You are a helpful assistant",
      },
      {
        role: "user",
        content: "Tell me a joke",
      },
    ],
  },
  false // interrupt: pass true to apply the new context immediately
);

// >  true on success, false on error
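
Passing true as the second argument interrupts the bot's current turn before applying the new context, which is useful when resetting a conversation mid-response (illustrative sketch):

// Replace the context even if the bot is mid-turn
await llmHelper.setContext(
  { messages: [{ role: "system", content: "You are a helpful assistant" }] },
  true // interrupt
);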

appendToMessages()

llm:append_to_messages

Appends a new message to the existing context. Returns Promise<boolean>.

context (LLMContextMessage, required)
New message to append to the context.

runImmediately (boolean, default: false)
If true, apply the new message immediately; if false, wait until the current turn concludes.

const llmHelper = voiceClient.getHelper("llm") as LLMHelper;
await llmHelper.appendToMessages(
  {
    role: "user",
    content: "Tell me a joke",
  },
  false // runImmediately: pass true to run the LLM right away
);

// >  true on success, false on error
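
As with the other actions, the returned promise can be chained instead of awaited (sketch):

llmHelper
  .appendToMessages({ role: "user", content: "Tell me another joke" }, true)
  .then((success) => {
    if (!success) console.warn("Failed to append message");
  });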

Callbacks and events

onLLMJsonCompletion: (jsonString: string) => void;
onLLMFunctionCall: (func: LLMFunctionCallData) => void;
onLLMFunctionCallStart: (functionName: string) => void;
onLLMMessage: (message: LLMContextMessage) => void;
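
Callbacks are wired up through the callbacks option passed to the LLMHelper constructor, as in the setup above. A minimal sketch handling two of them (the handler bodies are illustrative):

const llmHelper = new LLMHelper({
  callbacks: {
    // Fired when the LLM requests a function call
    onLLMFunctionCall: (func) => {
      console.log("LLM requested function call:", func);
    },
    // Fired for each LLM context message
    onLLMMessage: (message) => {
      console.log(`LLM message (${message.role}):`, message.content);
    },
  },
});
voiceClient.registerHelper("llm", llmHelper);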