Interface IPromptQueryConfig_GeneratedTokenLimit


public interface IPromptQueryConfig_GeneratedTokenLimit
Mixin for a query connection configuration model that allows the user to enter the maximum number of tokens the model may generate in its response to the prompt.

Use of this mixin is optional, but it ensures a consistent name for the property. When different prompt service handlers use the same property name, switching between prompt service handlers preserves the configured generated token limit.

See also the <xp:generatedTokenLimit/> Faces component from the http://www.xima.de/taglib/xfc/prompt namespace, which provides a user interface for editing the data defined by this mixin.

Since:
8.5.0
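
A minimal sketch of a configuration model implementing this mixin. The interface is re-declared locally here only to keep the example self-contained; in a real project, import it from the library instead. The class name `MyPromptServiceConfig` is hypothetical.

```java
// Re-declaration for illustration only; import the real interface in practice.
interface IPromptQueryConfig_GeneratedTokenLimit {
    Long getGeneratedTokenLimit();
    void setGeneratedTokenLimit(Long generatedTokenLimit);
}

// Hypothetical connection configuration model for a prompt service handler.
class MyPromptServiceConfig implements IPromptQueryConfig_GeneratedTokenLimit {
    private Long generatedTokenLimit;

    @Override
    public Long getGeneratedTokenLimit() {
        return generatedTokenLimit;
    }

    @Override
    public void setGeneratedTokenLimit(Long generatedTokenLimit) {
        this.generatedTokenLimit = generatedTokenLimit;
    }
}
```

Because the property name is shared across handlers, a user-entered limit survives a switch from one handler implementation to another.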
  • Field Summary

    Fields
    Modifier and Type
    Field
    Description
    static final String
    ATTR_GENERATED_TOKEN_LIMIT
    The name of the generated token limit attribute.
  • Method Summary

    Modifier and Type
    Method
    Description
    Long
    getGeneratedTokenLimit()
    Gets the maximum number of tokens the model should generate in its response to the prompt.
    void
    setGeneratedTokenLimit(Long generatedTokenLimit)
    Sets the maximum number of tokens the model should generate in its response to the prompt.
  • Field Details

    • ATTR_GENERATED_TOKEN_LIMIT

      static final String ATTR_GENERATED_TOKEN_LIMIT
      The name of the generated token limit attribute. If you implement a custom serialization / deserialization, use this name for the attribute to increase compatibility with other prompt service handlers.
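A sketch of how a custom serialization might use the shared attribute name, as the field documentation recommends. The constant is assumed here to hold the value "generatedTokenLimit"; consult the actual constant in the library, and use `ATTR_GENERATED_TOKEN_LIMIT` itself rather than a literal.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical custom serialization that stores the limit under the shared
// attribute name so other prompt service handlers can read it back.
class TokenLimitSerialization {
    // Assumed value; in real code, reference the library's constant instead.
    static final String ATTR_GENERATED_TOKEN_LIMIT = "generatedTokenLimit";

    static Map<String, Object> serialize(Long limit) {
        Map<String, Object> attributes = new HashMap<>();
        attributes.put(ATTR_GENERATED_TOKEN_LIMIT, limit);
        return attributes;
    }

    static Long deserialize(Map<String, Object> attributes) {
        return (Long) attributes.get(ATTR_GENERATED_TOKEN_LIMIT);
    }
}
```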
  • Method Details

    • getGeneratedTokenLimit

      Long getGeneratedTokenLimit()
      Gets the maximum number of tokens the model should generate in its response to the prompt.
      Returns:
      The maximum number of generated tokens.
    • setGeneratedTokenLimit

      void setGeneratedTokenLimit(Long generatedTokenLimit)
      Sets the maximum number of tokens the model should generate in its response to the prompt.
      Parameters:
      generatedTokenLimit - The maximum number of generated tokens to set.