ahvn.utils.exts.autotask module
autotask utilities for AgentHeaven.
This module provides the autotask function, which creates callable functions automatically implemented by Large Language Models (LLMs) from task descriptions and examples.
The function infers the task logic from the provided examples and applies it to new inputs, without requiring an explicit implementation.
- ahvn.utils.exts.autotask.autotask(prompt=None, descriptions=None, system=None, examples=None, instructions=None, output_schema=None, composer='autotask', lang=None, llm_args=None, search_args=None, capture=None, **kwargs)[source]
Create a function that is automatically implemented using LLM inference.
This function infers task logic from the provided description and examples, then applies it to new inputs using an LLM. Uses PromptUKFT for template rendering with structured prompt generation.
- Parameters:
prompt (Optional[PromptUKFT]) -- A pre-defined PromptUKFT template to use for the task. If None, a default prompt is constructed from the provided descriptions and examples. If not None, the prompt is used directly and the other parameters (descriptions, system, examples, instructions) are ignored. (TODO: make the other parameters update the given prompt instead.)
descriptions (Union[str, List[str]]) -- Task description(s) that explain what the function should do.
system (Optional[str]) -- A single system prompt to guide the LLM's behavior.
examples (Iterable[Union[Dict[str, Any], CacheEntry]], optional) -- A list of examples demonstrating the desired input-output behavior. Each example should be a dictionary with 'inputs' and 'output'/'expected' keys, or a CacheEntry object. 'expected' is preferred over 'output' if both are provided. Defaults to None.
instructions (Union[str, List[str]], optional) -- Additional instructions to guide the LLM's response.
output_schema (Dict[str, Any], optional) -- Schema defining the expected output format. This will affect how the prompt instructions are generated regarding the output format. If None, defaults to {"mode": "base"}.
composer (str, optional) -- The prompt composer to use. Defaults to "autotask".
lang (str, optional) -- Language code for localization (e.g., "en" for English).
llm_args (Dict, optional) -- Arguments for the LLM model (e.g., {"model": "gemini-flash"}). If None, uses default LLM configuration.
search_args (Dict, optional) -- Arguments for searching examples from example sources. It is used only when examples is a KL example source (KLStore, KLEngine, KLBase).
capture (Dict, optional) -- Capture settings for logging or debugging. If provided, execution details are captured into it under the following keys: 'prompt' (the constructed prompt object).
kwargs -- Additional keyword arguments.
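To make the documented example format concrete, here is a minimal sketch of the precedence rule described above ('inputs' plus 'output'/'expected', with 'expected' winning when both are present). The helper normalize_example is a hypothetical name for illustration only, not part of the ahvn API:

```python
def normalize_example(entry):
    """Return (inputs, target) for one autotask-style example dict.

    Mirrors the documented rule: 'expected' takes precedence over
    'output' when both keys are present.
    """
    inputs = entry["inputs"]
    if "expected" in entry:
        return inputs, entry["expected"]
    return inputs, entry["output"]

examples = [
    {"inputs": {"x": 5}, "output": 25},
    {"inputs": {"text": "It was fine."}, "output": 5, "expected": 6},
]
print(normalize_example(examples[0]))  # ({'x': 5}, 25)
print(normalize_example(examples[1]))  # ({'text': 'It was fine.'}, 6)
```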
- Returns:
The LLM-inferred output for the given inputs, parsed from the response.
- Return type:
- Raises:
AutoFuncError -- If the LLM fails to generate valid output or if there's an error during execution.
Examples
>>> f = autotask(
...     descriptions="Square the input number",
...     examples=[
...         {"inputs": {"x": 5}, "output": 25},
...         {"inputs": {"x": 3}, "output": 9},
...     ],
...     output_schema={"mode": "repr"},
...     llm_args={"preset": "tiny"}
... )
>>> f(x=4)
16
>>> f = autotask(
...     descriptions="Sentiment analysis. Rate the sentiment of the text from 1 to 10. Return an integer.",
...     examples=[
...         {"inputs": {"text": "An absolute masterpiece!"}, "expected": 10},
...         {"inputs": {"text": "What a letdown."}, "expected": 3},
...         {"inputs": {"text": "It was fine."}, "expected": 6},
...     ],
...     output_schema={"mode": "repr"},
...     llm_args={"preset": "tiny"}
... )
>>> f(text="The plot was engaging but the ending was predictable.")
7  # or maybe 6/8/9, depending on LLM interpretation
- ahvn.utils.exts.autotask.autotask_prompt_composer(kl, system=None, descriptions=None, examples=None, instructions=None, instance=None, search_args=None, **kwargs)[source]
- Return type:
- Parameters:
kl (PromptUKFT)
system (str | None)
examples (Iterable[Dict[str, Any] | CacheEntry | ExperienceUKFT] | BaseKLStore | BaseKLEngine | KLBase | None)
instance (CacheEntry | None)
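The signature above suggests the composer assembles the system prompt, descriptions, examples, and instructions into one structured prompt. The following is a rough, hypothetical sketch of that kind of composition — compose_prompt is an illustrative stand-in, not the actual autotask_prompt_composer or PromptUKFT implementation:

```python
def compose_prompt(system=None, descriptions=None, examples=None, instructions=None):
    """Hypothetical sketch: flatten autotask-style inputs into prompt sections."""
    def as_list(x):
        if x is None:
            return []
        return [x] if isinstance(x, str) else list(x)

    sections = []
    if system:
        sections.append(("system", system))
    for d in as_list(descriptions):
        sections.append(("description", d))
    for ex in examples or []:
        # 'expected' takes precedence over 'output', as documented for autotask
        target = ex.get("expected", ex.get("output"))
        sections.append(("example", f"inputs={ex['inputs']!r} -> {target!r}"))
    for i in as_list(instructions):
        sections.append(("instruction", i))
    return sections

parts = compose_prompt(
    system="You are a careful assistant.",
    descriptions="Square the input number",
    examples=[{"inputs": {"x": 5}, "output": 25}],
)
```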