ai

The AI SDK was introduced in 2024.08.1 as a tech preview. Usage requires a specific license.

The AI SDK helps developers build AI-powered applications by providing integration with Large Language Models (LLMs).

Currently, the SDK supports OpenAI-compatible APIs and is designed for server-side environments.

Methods

ai.generateText(llmConfiguration, options)

Method to generate text using an LLM configuration (sv:llmConfiguration). Useful for non-interactive use cases such as summarization.

Options

Generate text options

  • messages (array<{role: string, content: string}>): Messages that represent a conversation. See example below.

Message object structure

  • role: A string indicating the role of the entity that generated the message. It can take the following values:
    • system: Represents the system or environment providing context or instructions.
    • user: Represents the user interacting with the language model.
    • assistant: Represents the language model or assistant generating a response.
  • content: A string containing the actual message or content provided by the role.

Note

  • The system role is optional but is typically used to establish the behavior or context for the assistant.
  • The conversation should be logically structured, ensuring that the roles follow a sequence that reflects a natural conversation.
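For illustration, a messages array for a simple summarization request could look like this (the message texts are placeholders):

const messages = [
  // Optional: establishes the behavior/context for the assistant
  { role: 'system', content: 'You are an assistant that summarizes text concisely.' },
  // The user request the model should respond to
  { role: 'user', content: 'Summarize the following text: <text to summarize>' }
];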

Timeout

The default timeout and allowed timeout range may vary depending on the Sitevision environment. It’s typically advisable to adjust this based on your specific use case and the performance of the language model.

Generate text options

  • timeout (number): The maximum amount of time, in milliseconds, that Sitevision will wait for a response before the connection times out. Default: 15000 (15s). May vary depending on Sitevision environment.

Other options

Note that the optimal values and ranges for the following options may vary depending on the specific model being used. Refer to your model's documentation to learn more, for example the OpenAI API documentation.

Generate text options

  • maxTokens (number): Specifies the maximum number of tokens (words or word fragments) that the model can generate in the response. This includes both the prompt and the completion.
  • temperature (number): Controls the randomness or creativity of the model's output. Lower values make the output more deterministic and focused, while higher values introduce more variability and creativity.
  • frequencePenalty (number): Penalizes the model for using the same tokens repeatedly. A higher value reduces the likelihood of repeating the same phrases, promoting more diverse language usage.

Example
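A minimal sketch of a generateText call. The require name 'ai' and the way llmConfig is obtained are assumptions for this example; llmConfig is expected to reference an sv:llmConfiguration node, and how you resolve it depends on your app:

// Minimal sketch, assuming the module is required as 'ai' and that
// llmConfig references an sv:llmConfiguration node
const ai = require('ai');

const result = ai.generateText(llmConfig, {
  messages: [
    { role: 'system', content: 'You are an assistant that summarizes text concisely.' },
    { role: 'user', content: 'Summarize the following text: <text to summarize>' }
  ],
  timeout: 15000,   // milliseconds
  temperature: 0.2, // low value for a focused, deterministic summary
  maxTokens: 500
});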

Returns

The results object contains the output and metadata related to the text generation request. It includes the following properties.

GenerateText result object

  • text (string): The generated text based on the conversation provided in the messages array. In this case, it contains the summary generated by the assistant.
  • error (string): Contains an error message if something went wrong during the text generation process. If the request was successful, this will be an empty string.
  • finishReason (string): Indicates the reason why the generation process stopped. Common values may include:
    • stop: The model stopped because it reached the end of the generated text or an expected stopping point.
    • length: The generation was halted because it reached the maximum number of tokens allowed.
    • error: The generation could not be performed at all or failed unexpectedly.
    • other: The generation stopped due to another reason not covered by the typical cases. This could be due to specific model or system behavior.
  • usage (object {promptTokens: number, completionTokens: number, totalTokens: number}): Contains token usage information for the request. Note that usage may vary and may not be supported by some models.

Example
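Continuing the sketch above, the result object could be inspected like this:

// Illustrative handling of the result object from the call above
if (result.error) {
  // The request failed; result.error contains the error message
} else if (result.finishReason === 'length') {
  // The response was cut off because the maxTokens limit was reached
} else {
  // result.text contains the generated summary
  // result.usage (when supported by the model) contains token counts
}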

ai.streamText(llmConfiguration, options)

Method to stream text using an LLM configuration (sv:llmConfiguration). Useful for interactive use cases such as chat.

This function doesn't return anything. The streamed response data from the remote LLM service is consumed by two callbacks (onChunk and onFinish) specified in the options object.

Options

Stream text options

  • messages (array<{role: string, content: string}>): Messages that represent a conversation. See example below.

Message object structure

  • role: A string indicating the role of the entity that generated the message. It can take the following values:
    • system: Represents the system or environment providing context or instructions.
    • user: Represents the user interacting with the language model.
    • assistant: Represents the language model or assistant generating a response.
  • content: A string containing the actual message or content provided by the role.

Note

  • The system role is optional but is typically used to establish the behavior or context for the assistant.
  • The conversation should be logically structured, ensuring that the roles follow a sequence that reflects a natural conversation.

Handling the streaming response

The result of the streaming operation is handled via the onChunk and the onFinish callback functions. Both callbacks must be supplied, or the streamText operation will fail.

Stream text options

  • onChunk (function(token: string)): Callback that is invoked whenever a token is received from the remote LLM.

Streamed tokens

The onChunk function is responsible for dispatching tokens to the streamText caller, typically an end user. This callback is invoked whenever a single token is received from the remote LLM. A token is usually a small fraction of the total result, often a single word or part of a word. It can therefore be advisable to gather multiple tokens into a batch before dispatching them to the caller, as in the sketch below.

Note that the streamText response must be explicitly flushed for the caller to receive the response data in a streamed manner. Without an explicit flush, the caller will typically receive all response data as a single chunk once the streaming operation has completed fully (i.e. the streamText invocation will be perceived as a generateText invocation).
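As a sketch of the batching and flushing described above. Note that dispatchAndFlush is a hypothetical placeholder for however your app writes and flushes response data to the caller; it is not part of the AI SDK:

// Hypothetical batching of streamed tokens before dispatching them.
// dispatchAndFlush is NOT part of the AI SDK; it stands in for your
// app's mechanism for writing and flushing response data.
const BATCH_SIZE = 20;
let batch = [];

function onChunk(token) {
  batch.push(token);
  if (batch.length >= BATCH_SIZE) {
    dispatchAndFlush(batch.join(''));
    batch = [];
  }
}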

Stream text options

  • onFinish (function({error: string, finishReason: string, usage: {promptTokens: number, completionTokens: number, totalTokens: number}})): Callback that is invoked when the response from the remote LLM is finished.

The onFinish callback argument

The callback receives a result object that describes the outcome of the stream operation.

The onFinish result object

  • text (string): This is always an empty string. The property exists only to keep the object structurally identical to the result object of the generateText function.
  • error (string): Contains an error message if something went wrong during the streaming operation. If the request was successful, this will be an empty string.
  • finishReason (string): Indicates the reason why the streaming process stopped. Common values may include:
    • stop: The model stopped because it reached the end of the streamed text or an expected stopping point.
    • length: The stream operation was halted because it reached the maximum number of tokens allowed.
    • error: The generation could not be performed at all or failed unexpectedly.
    • other: The stream operation stopped due to another reason not covered by the typical cases. This could be due to specific model or system behavior.
  • usage (object {promptTokens: number, completionTokens: number, totalTokens: number}): Contains token usage information for the request. Note that usage may vary and may not be supported by some models.

Timeout

The default timeout and allowed timeout range may vary depending on the Sitevision environment. It’s typically advisable to adjust this based on your specific use case and the performance of the language model.

Stream text options

  • timeout (number): The maximum amount of time, in milliseconds, that Sitevision will wait for a response before the connection times out. Default: 15000 (15s). May vary depending on Sitevision environment.

Other options

Note that the optimal values and ranges for the following options may vary depending on the specific model being used. Refer to your model's documentation to learn more, for example the OpenAI API documentation.

Stream text options

  • maxTokens (number): Specifies the maximum number of tokens (words or word fragments) that the model can generate in the response. This includes both the prompt and the completion.
  • temperature (number): Controls the randomness or creativity of the model's output. Lower values make the output more deterministic and focused, while higher values introduce more variability and creativity.
  • frequencePenalty (number): Penalizes the model for using the same tokens repeatedly. A higher value reduces the likelihood of repeating the same phrases, promoting more diverse language usage.

Example
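A minimal sketch of a streamText call, under the same assumptions as the generateText example above (module required as 'ai', llmConfig referencing an sv:llmConfiguration node):

const ai = require('ai');

ai.streamText(llmConfig, {
  messages: [
    { role: 'system', content: 'You are a helpful chat assistant.' },
    { role: 'user', content: 'What can you help me with?' }
  ],
  timeout: 15000,
  onChunk: function (token) {
    // Invoked once per received token; batch and flush tokens to the
    // caller here (see the batching sketch above)
  },
  onFinish: function (result) {
    if (result.error) {
      // Streaming failed; result.error contains the error message
    }
    // result.finishReason and result.usage describe how the stream ended
  }
});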
