ai
The AI SDK was introduced in 2024.08.1 as a tech preview. Usage requires a specific license.
The AI SDK helps developers build AI-powered applications by providing integration with Large Language Models (LLMs).
Currently, the SDK supports OpenAI-compatible APIs and is designed for server-side environments.
Methods
ai.generateText(llmConfiguration, options)
Method to generate text using an LLM configuration (sv:llmConfiguration). Useful for non-interactive use cases such as summarization.
Options
Property | Type | Description |
---|---|---|
messages | array<{role: string, content: string}> | Messages that represent a conversation. See example below. |
Message object structure
- role: A string indicating the role of the entity that generated the message. It can take the following values:
  - system: Represents the system or environment providing context or instructions.
  - user: Represents the user interacting with the language model.
  - assistant: Represents the language model or assistant generating a response.
- content: A string containing the actual message or content provided by the role.
Note
- The system role is optional but is typically used to establish the behavior or context for the assistant.
- The conversation should be logically structured, ensuring that the roles follow a sequence that reflects a natural conversation, as in the sketch below.
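A minimal, logically ordered messages array for a summarization task might look like this (the content strings are placeholders):

```javascript
const messages = [
  // Optional: establishes the assistant's behavior
  { role: 'system', content: 'You are an assistant that summarizes text.' },
  // The actual request from the user
  { role: 'user', content: 'Summarize the following text: ...' }
];
```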
Timeout
The default timeout and allowed timeout range may vary depending on the Sitevision environment. It’s typically advisable to adjust this based on your specific use case and the performance of the language model.
Property | Type | Description | Default |
---|---|---|---|
timeout | number | The maximum amount of time, in milliseconds, that Sitevision will wait for a response before the connection times out. | 15000 (15s). May vary depending on Sitevision environment. |
Other options
Note that the optimal values and ranges for the following options may vary depending on the specific model being used. Refer to your model's documentation to learn more; the OpenAI API documentation, for example, describes these parameters in detail.
Property | Type | Description |
---|---|---|
maxTokens | number | Specifies the maximum number of tokens (words or word fragments) that the model can generate in the response. This includes both the prompt and the completion. |
temperature | number | Controls the randomness or creativity of the model's output. Lower values make the output more deterministic and focused, while higher values introduce more variability and creativity. |
frequencyPenalty | number | Penalizes the model for using the same tokens repeatedly. A higher value reduces the likelihood of repeating the same phrases, promoting more diverse language usage. |
Example
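A minimal sketch of a summarization call. The import path and the way the sv:llmConfiguration node is resolved are assumptions that depend on your app setup:

```javascript
import ai from '@sitevision/api/server/ai'; // assumed import path

// Hypothetical helper - resolve your sv:llmConfiguration node here,
// e.g. from your app's configuration
const llmConfiguration = resolveLlmConfiguration();

const result = ai.generateText(llmConfiguration, {
  messages: [
    { role: 'system', content: 'You are an assistant that summarizes text.' },
    { role: 'user', content: 'Summarize the following text: ...' }
  ],
  timeout: 20000,   // wait up to 20s for a response
  maxTokens: 500,   // cap the size of the completion
  temperature: 0.2  // low value for focused, deterministic summaries
});
```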
Returns
The results object contains the output and metadata related to the text generation request. It includes the following properties.
Property | Type | Description |
---|---|---|
text | string | The generated text based on the conversation provided in the messages array. In this case, it contains the summary generated by the assistant. |
error | string | Contains an error message if something went wrong during the text generation process. If the request was successful, this will be an empty string. |
finishReason | string | Indicates the reason why the generation process stopped. Common values may include stop (the model completed its output) and length (the token limit was reached). |
usage | object{promptTokens: number, completionTokens: number, totalTokens: number} | Contains token usage information for the request. Note that usage reporting varies and may not be supported by all models. |
Example
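A sketch of how the returned result object could be inspected, assuming the generateText call from the previous example:

```javascript
const result = ai.generateText(llmConfiguration, { messages });

if (result.error) {
  // The request failed - error contains a descriptive message
  console.error('Text generation failed: ' + result.error);
} else {
  // result.text holds the generated summary
  console.log(result.text);
  // e.g. 'stop' when the model completed its output
  console.log(result.finishReason);
  // Token counts, if the model reports usage
  console.log(result.usage.totalTokens);
}
```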
ai.streamText(llmConfiguration, options)
Method to stream text using an LLM configuration (sv:llmConfiguration). Useful for interactive use cases such as chat.
This function doesn't return anything. The streamed response data from the remote LLM service is consumed by two callbacks (onChunk and onFinish) specified in the options object.
Options
Property | Type | Description |
---|---|---|
messages | array<{role: string, content: string}> | Messages that represent a conversation. See example below. |
Message object structure
- role: A string indicating the role of the entity that generated the message. It can take the following values:
  - system: Represents the system or environment providing context or instructions.
  - user: Represents the user interacting with the language model.
  - assistant: Represents the language model or assistant generating a response.
- content: A string containing the actual message or content provided by the role.
Note
- The system role is optional but is typically used to establish the behavior or context for the assistant.
- The conversation should be logically structured, ensuring that the roles follow a sequence that reflects a natural conversation.
Handling the streaming response
The result of the streaming operation is handled via the onChunk and onFinish callback functions. Both callbacks must be supplied, or the streamText operation will fail.
Property | Type | Description |
---|---|---|
onChunk | function(token: string) | Callback that is invoked whenever a token is received from the remote LLM |
Streamed tokens
The onChunk function is responsible for dispatching tokens to the streamText caller, typically an end user. This callback is invoked whenever a single token is received from the remote LLM. A token is usually a small fraction of the total result, often a single word or word fragment. It can therefore be advisable to gather multiple tokens into a batch before dispatching them to the caller, as in the sketch below.
Note that the streamText response must be explicitly flushed if the caller is to receive the response data in a streamed manner. Without an explicit flush, the caller will typically receive all response data as a single chunk when the streaming operation has fully completed (i.e. the streamText invocation will be perceived as a generateText invocation).
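As a sketch, tokens can be accumulated and dispatched in batches. dispatchToCaller below is a hypothetical helper standing in for however your app writes and explicitly flushes data to the caller:

```javascript
let batch = '';

const onChunk = (token) => {
  batch += token;
  // Dispatch in batches rather than per token to reduce overhead.
  // Assumption: dispatchToCaller writes and explicitly flushes the
  // response so the caller sees data as it arrives.
  if (batch.length >= 64) {
    dispatchToCaller(batch);
    batch = '';
  }
};

const onFinish = (result) => {
  // Dispatch any tokens remaining in the batch
  if (batch.length > 0) {
    dispatchToCaller(batch);
  }
};
```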
Property | Type | Description |
---|---|---|
onFinish | function({error:string, finishReason:string, usage: {promptTokens:number, completionTokens:number, totalTokens:number}}) | Callback that is invoked when the response from the remote LLM is finished |
The onFinish callback argument
The callback receives a result object that describes the outcome of the streaming operation.
Property | Type | Description |
---|---|---|
text | string | This is always an empty string. This property only exists so that the object is structurally identical to the result object of the generateText function. |
error | string | Contains an error message if something went wrong during the streaming operation. If the request was successful, this will be an empty string. |
finishReason | string | Indicates the reason why the streaming process stopped. Common values may include stop (the model completed its output) and length (the token limit was reached). |
usage | object{promptTokens: number, completionTokens: number, totalTokens: number} | Contains token usage information for the request. Note that usage reporting varies and may not be supported by all models. |
Timeout
The default timeout and allowed timeout range may vary depending on the Sitevision environment. It’s typically advisable to adjust this based on your specific use case and the performance of the language model.
Property | Type | Description | Default |
---|---|---|---|
timeout | number | The maximum amount of time, in milliseconds, that Sitevision will wait for a response before the connection times out. | 15000 (15s). May vary depending on Sitevision environment. |
Other options
Note that the optimal values and ranges for the following options may vary depending on the specific model being used. Refer to your model's documentation to learn more; the OpenAI API documentation, for example, describes these parameters in detail.
Property | Type | Description |
---|---|---|
maxTokens | number | Specifies the maximum number of tokens (words or word fragments) that the model can generate in the response. This includes both the prompt and the completion. |
temperature | number | Controls the randomness or creativity of the model's output. Lower values make the output more deterministic and focused, while higher values introduce more variability and creativity. |
frequencyPenalty | number | Penalizes the model for using the same tokens repeatedly. A higher value reduces the likelihood of repeating the same phrases, promoting more diverse language usage. |
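Putting it together, a sketch of a streaming chat invocation. The import path, the llmConfiguration resolution, and the dispatchToCaller helper are assumptions, as above:

```javascript
import ai from '@sitevision/api/server/ai'; // assumed import path

// Hypothetical helper - resolve your sv:llmConfiguration node here
const llmConfiguration = resolveLlmConfiguration();

// streamText returns nothing; all output arrives via the callbacks
ai.streamText(llmConfiguration, {
  messages: [
    { role: 'system', content: 'You are a helpful chat assistant.' },
    { role: 'user', content: 'What is a token in the context of LLMs?' }
  ],
  timeout: 30000, // allow a longer window for streamed responses
  onChunk: (token) => {
    // Hypothetical helper that writes and flushes data to the caller
    dispatchToCaller(token);
  },
  onFinish: ({ error, finishReason, usage }) => {
    if (error) {
      // Handle a failed streaming operation
      return;
    }
    // finishReason and usage describe the completed stream;
    // note that text is always an empty string here
  }
});
```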