Interface IPromptQueryConfig_GeneratedTokenLimit
public interface IPromptQueryConfig_GeneratedTokenLimit
Mixin for a query connection configuration model that allows the user to enter the maximum number of tokens the model should generate in its response to the prompt.
Use of this mixin is optional, but it ensures a consistent name for the property. When different prompt service handlers use the same property name, switching between prompt service handlers preserves the configured generated token limit.
See also the <xp:generatedTokenLimit/> Faces component from the http://www.xima.de/taglib/xfc/prompt namespace, which provides a user interface for editing the data defined by this mixin.
Since:
8.5.0
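A configuration model opts into the shared property name simply by implementing the mixin. The sketch below is illustrative: the class name SampleQueryConfig is hypothetical, and the interface body is restated from this page so the snippet stands alone; in a real plugin, import the interface from the prompt plugin API instead.

```java
// Restated from this page so the sketch compiles on its own; in a real
// plugin, import IPromptQueryConfig_GeneratedTokenLimit from the API.
interface IPromptQueryConfig_GeneratedTokenLimit {
    Long getGeneratedTokenLimit();
    void setGeneratedTokenLimit(Long generatedTokenLimit);
}

// Hypothetical configuration model that mixes in the generated token limit.
class SampleQueryConfig implements IPromptQueryConfig_GeneratedTokenLimit {
    // null means "no limit configured"; the Long wrapper type allows this.
    private Long generatedTokenLimit;

    @Override
    public Long getGeneratedTokenLimit() {
        return generatedTokenLimit;
    }

    @Override
    public void setGeneratedTokenLimit(Long generatedTokenLimit) {
        this.generatedTokenLimit = generatedTokenLimit;
    }
}
```

Because every handler that uses the mixin exposes the same getter and setter, code that copies the limit between configurations can work against the interface type rather than a concrete handler class.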
Field Summary
Modifier and Type	Field	Description
static final String	ATTR_GENERATED_TOKEN_LIMIT	The name of the generated token limit attribute.
Method Summary
Modifier and TypeMethodDescriptionGets the maximum number of tokens the model should generate in its response to the prompt.voidsetGeneratedTokenLimit(Long generatedTokenLimit) Sets the maximum number of tokens the model should generate in its response to the prompt.
-
Field Details
ATTR_GENERATED_TOKEN_LIMIT
static final String ATTR_GENERATED_TOKEN_LIMIT
The name of the generated token limit attribute. If you implement a custom serialization or deserialization, use this name for the attribute to increase compatibility with other prompt service handlers.
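A custom serialization might store the limit in a map of attributes keyed by this constant. The sketch below assumes a simple map-based format; note that the actual string value of ATTR_GENERATED_TOKEN_LIMIT is defined by the library, so the placeholder value here is an assumption for illustration only, as are the class and method names.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical helper showing the attribute name used as a serialization key.
class TokenLimitSerialization {
    // In real code, reference IPromptQueryConfig_GeneratedTokenLimit.ATTR_GENERATED_TOKEN_LIMIT.
    // The actual string value is defined by the library; this placeholder is
    // an assumption for illustration only.
    static final String ATTR_GENERATED_TOKEN_LIMIT = "generatedTokenLimit";

    // Writes the configured limit under the shared attribute name, skipping
    // the entry when no limit is configured.
    static Map<String, Object> serialize(Long generatedTokenLimit) {
        Map<String, Object> attributes = new HashMap<>();
        if (generatedTokenLimit != null) {
            attributes.put(ATTR_GENERATED_TOKEN_LIMIT, generatedTokenLimit);
        }
        return attributes;
    }

    // Reads the limit back, returning null when the attribute is absent.
    static Long deserialize(Map<String, Object> attributes) {
        Object value = attributes.get(ATTR_GENERATED_TOKEN_LIMIT);
        return value instanceof Long ? (Long) value : null;
    }
}
```

Writing and reading through the shared attribute name is what lets another prompt service handler pick up the configured limit unchanged.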
Method Details
getGeneratedTokenLimit
Long getGeneratedTokenLimit()
Gets the maximum number of tokens the model should generate in its response to the prompt.
Returns:
The maximum number of generated tokens.
setGeneratedTokenLimit
Sets the maximum number of tokens the model should generate in its response to the prompt.- Parameters:
generatedTokenLimit- The maximum number of generated tokens to set.
-