AI agent capabilities
The AI agent operators expand the feature set of a typical Generative AI model by interacting directly with elements of the Altair AI Cloud platform.
A regular LLM might know very little about the detailed functionality of the platform, and even if it does, it lacks the capabilities to act on that knowledge. This limits the impact an agent can have, because all follow-up actions need to be evaluated and defined manually in the workflow by the user. For example, an LLM might be able to explain how a workflow is built or how deployments need to be configured, but it can’t act on that knowledge.
That’s where the AI agent operator comes into play. Because of its configuration, the operator can react to prompts and requests from the user and return information or trigger actions on the AI Cloud platform. To use an analogy: it does not only tell you it’s going to rain, it also hands you an open umbrella.
What functions are available
This section lists the current functions and actions an AI agent can execute on the AI Cloud platform. Calls to these functions can be embedded into any prompt for the two AI agent operators (AI Agent and Create Prompt). In general, the strength of an agent shows when it is integrated and connected with other parts of the platform.
Because LLMs are complex systems with many non-deterministic elements, not all prompts will always behave the same. The outcome can depend heavily on the selected provider, the model, and its fine-tuning parameters. Some hosted models may also be improved and updated over time by the provider. Therefore, do not take the provided examples as guaranteed: depending on your setup, the exact wording may differ and might require some experimentation. You can use Prompt Studio for testing and fine-tuning your prompts.
Information functions
The agent can access the AI Cloud platform and gather information and insights about its environment. This can be used to detect available data sets and workflows that can then be executed or deployed by the agent in a follow-up step. Please note that the agent has the same access rights as the user who executes it.
The following functions are supported:
| Function | Description |
| --- | --- |
| Get info about the current project it resides in | Lists linked data sets, workflows, and other assets. This information also allows the agent to further evaluate its actions. |
| Get info about all projects it has access to | Returns a list of all projects the agent can interact with. Cross-project interaction can be useful for more complex agent workflows, for example using projects where the agent only has viewing rights to read data from. |
| Get info about a specified project it has access to | Allows the agent to expand its knowledge about other projects. |
| Get information on all data within a project | Helps to find and identify relevant data sets. Most useful in combination with follow-up actions such as workflow execution or other data-related tasks. |
| Get information on all deployed workflows for the tenant or project | Lists the workflows that are already deployed. The returned information also includes the query parameters of each deployment (if available), which are helpful when the endpoint is called later. |
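These information functions resemble the tool definitions used by common LLM function-calling APIs: each has a name, a description the model reasons over, and a parameter schema. As an illustration only, here is a minimal Python sketch of such a registry; the function names and fields are hypothetical assumptions, not the platform's actual internal schema:

```python
# Hypothetical sketch of how information functions might be described to an
# LLM in a generic function-calling schema. All names and fields here are
# illustrative assumptions, not Altair AI Cloud internals.
info_functions = [
    {
        "name": "get_current_project_info",  # hypothetical name
        "description": "List data sets, workflows and other assets "
                       "linked in the project the agent resides in.",
        "parameters": {"type": "object", "properties": {}},
    },
    {
        "name": "get_project_info",  # hypothetical name
        "description": "Return details about one specified project "
                       "the agent has access to.",
        "parameters": {
            "type": "object",
            "properties": {"project_name": {"type": "string"}},
            "required": ["project_name"],
        },
    },
]

def find_function(name):
    """Look up a function definition by name, as an agent runtime might."""
    for fn in info_functions:
        if fn["name"] == name:
            return fn
    return None

print(find_function("get_project_info")["parameters"]["required"])
# → ['project_name']
```

The description text matters: the LLM decides which function to call based on it, which is why the platform documentation stresses informative naming.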
Action functions
The agent can not only collect information about its environment but also take action to create new assets on its own. With these functions an agent can read from a project and write results back, create new workflows and deployments, and create consumable assets such as AutoClustering models and dashboards.
- Build workflows from scratch by stating the desired functionality
  The final workflow will include operator configuration and port connections. The agent will use any available assets, including connections and other workflows. Please note that very complex workflows might require additional human inspection.
- Store content in a project under a supplied name, so it can store workflows, IOObjects, and other files
  This allows the agent to convert arbitrary strings into usable content in a project. The stored content is then available to other agents or human users.
- Retrieve content from a project in string format
  This allows the agent to actively work on content from a project. The retrieved content can be used as part of a prompt for another agent operator, or included in a RAG workflow to improve the agent based on new information.
- Create scheduled deployments out of workflows of specified projects
  This action allows the agent to create a new deployment based on a workflow in a selected project. With this action it is possible to set up continuous execution of repeating tasks using the selected workflows, which is a very powerful feature when defining complex agents that create ongoing tasks. Typical examples for scheduled deployments are checking for new data from any available source or regularly running models for scoring data points. Remember that deployed workflows can also include other AI agent operators that execute their own tasks.
- Create Auto Clustering analysis on specified data within a specified project
  This directly starts a new AutoClustering run for the selected data set. The clustering can run with a specified number of clusters; by default, it tries to find the optimal number of clusters on its own. The results are available via a link in the response.
- Allow creation of Apps based on templates for a data table
  This creates a new App in the current project, using the provided template definition and configuring it to use the linked data table.
- Deploy workflows as new functions
  A selected workflow can be deployed as a new function for the agent. The LLM decides when to call that function based on the deployment name, so an informative name is important. In this way, the agent's capabilities can be extended dramatically by providing deployed workflows with use-case-specific capabilities. To further enhance the flexibility of these functions, variable parameters can be included as well. The prompt can be set via a variable, or better, via a RAG placeholder with the input table. The JSON table input for deployed workflows can be connected to a RAG port of the agent; if the input table contains exactly one nominal cell, its value is used as-is for the RAG placeholder replacement.
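The single-nominal-cell rule for RAG placeholder replacement can be sketched in a few lines. The following Python snippet is an illustrative assumption of that behavior, not the platform's actual implementation; the function names and the `{{rag}}` placeholder syntax are hypothetical:

```python
# Hypothetical sketch of the RAG placeholder rule: if the input table has
# exactly one nominal (string) cell, its value replaces the placeholder
# as-is; otherwise the table is passed along as JSON. Names and placeholder
# syntax are assumptions for illustration only.
import json

def resolve_rag_input(table):
    """table: list of rows, each row a dict of column name -> value."""
    cells = [value for row in table for value in row.values()]
    if len(cells) == 1 and isinstance(cells[0], str):
        return cells[0]           # single nominal cell: use the value as-is
    return json.dumps(table)      # otherwise: serialize the whole table

def fill_prompt(template, table, placeholder="{{rag}}"):
    """Replace the RAG placeholder in a prompt template with the table content."""
    return template.replace(placeholder, resolve_rag_input(table))

# A one-cell nominal table is inserted verbatim into the prompt:
print(fill_prompt("Summarize: {{rag}}", [{"text": "quarterly sales report"}]))
# → Summarize: quarterly sales report
```

Larger tables fall back to a JSON representation in this sketch, which mirrors the documented behavior that only a table with exactly one nominal cell is used as-is.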
Monitoring and Traceability
A fully deployed agent can quickly produce an overwhelming number of actions and returned results, so it is very important to be able to track and monitor what any agent did. For this purpose, the agent outputs a so-called PromptRecord object at its second output port. It contains a list of all called functions, together with the arguments for each call. This provides a clear audit trail of what was triggered, and it can optionally be saved into the project.
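Conceptually, such an audit trail is just a list of function calls with their arguments. As a minimal Python sketch, assuming a simplified list-of-dicts structure (the actual PromptRecord format may differ), one could render it into readable audit lines like this:

```python
# Hypothetical sketch of inspecting a PromptRecord-style audit trail: a list
# of called functions with their arguments. The structure shown here is an
# assumption for illustration, not the exact PromptRecord format.
prompt_record = [
    {"function": "get_current_project_info", "arguments": {}},
    {"function": "execute_workflow", "arguments": {"name": "score_customers"}},
]

def audit_lines(record):
    """Render each recorded call as one human-readable audit line."""
    return [f"{entry['function']}({entry['arguments']})" for entry in record]

for line in audit_lines(prompt_record):
    print(line)
```

Saving such a rendered trail back into the project alongside the agent's results makes it easy to review later which functions were actually triggered and with which arguments.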