What is an assistant
An assistant is an interactive "agent" based on a large language model, which can be provided "abilities" (functions) that it can use to solve problems by interacting with other parts of a solution. For example, conceptually an assistant may be provided the ability to:
read
: read values from a stream

output
: output stream values (to a new stream)

write
: output stream values (to an existing writeable stream)

expression
: evaluate an expression based on argument values

aggregate
: summarize stream values

chart
: generate charts
The functions (abilities) provided to an assistant are defined in a Template by a developer using an LLMBuilder, so an assistant is simply another type of task that runs in Elara: one that has stream inputs and stream outputs.
An assistant is a data-driven, autonomous agent constrained by the provided functions (and stream types), and so, given a purpose, it can reason about how to use those functions to achieve a prompted goal.
Practical example
Imagine you have some highly unstructured data (for example, a large text file with customer details) from which you need to extract structured data in order to make some decisions. If this is the case, you may instruct an assistant with:
purpose
: you would like it to act as a customer data extraction assistant

prompt
: can you extract customer information for "Jan Doe"?
You may then provide some functions to the assistant so that it may complete the task, for example:
- Provide a function for the assistant to "get the unstructured customer data" from a stream, based on the assistant supplying a customer name. You could create a read function that returns unstructured customer data given a valid customer name as an argument.
- Provide a function for the assistant to "set the extracted structured customer data" based on a defined type that the data needs to conform to, along with a description of what the different elements in the type represent. You could create an output function that takes structured data as an argument and updates a stream value.
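A sketch of those two functions, assuming hypothetical names and data shapes (a dict stands in for each stream; the real Elara types would be defined in the Template):

```python
# Hypothetical read/output pair for the customer-extraction example.

# Stands in for the stream of unstructured customer data.
UNSTRUCTURED = {
    "Jan Doe": "Jan Doe, reachable at jan@example.com, account opened 2021.",
}

# Stands in for the writeable stream of structured output.
structured_stream = {}

def read_customer(name):
    """Return the unstructured customer data for a valid customer name."""
    if name not in UNSTRUCTURED:
        raise KeyError(f"unknown customer: {name}")
    return UNSTRUCTURED[name]

# The "defined type" the structured output must conform to.
REQUIRED_FIELDS = {"name", "email"}

def output_customer(record):
    """Validate the record against the required fields, then update the stream."""
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        raise ValueError(f"record missing fields: {missing}")
    structured_stream[record["name"]] = record

text = read_customer("Jan Doe")
output_customer({"name": "Jan Doe", "email": "jan@example.com"})
print(structured_stream["Jan Doe"]["email"])  # jan@example.com
```

Note that the validation inside output_customer is what enforces conformance to the defined type, regardless of what the assistant attempts to write.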
The assistant can freely call both of the above functions, and can decide for itself when to call them. This ability of an LLM is known as chain-of-thought reasoning: provided with a goal, a purpose, and a description of the actions it can take, the LLM will plan out how to leverage those actions to achieve the goal.
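The plan-then-act loop can be sketched as follows. In a real assistant the plan would be produced step by step by the LLM's reasoning; here it is hard-coded so the dispatch mechanics are visible, and all names are illustrative rather than part of any real API:

```python
# Hypothetical dispatch loop: the assistant's "plan" is a sequence of
# (result_label, function_name, arguments) steps, where an argument like
# "$raw" refers to the result of an earlier step.

def read_customer(name):
    """Toy stand-in for a stream read keyed by customer name."""
    return {"Jan Doe": "Jan Doe <jan@example.com>"}[name]

def extract_email(text):
    """Toy extraction step: pull the email out of 'Name <email>'."""
    return text.split("<")[1].rstrip(">")

FUNCTIONS = {"read_customer": read_customer, "extract_email": extract_email}

def run_plan(plan, functions):
    """Execute each planned step, resolving '$label' references to prior results."""
    results = {}
    for label, fn_name, args in plan:
        resolved = {
            k: results[v[1:]] if isinstance(v, str) and v.startswith("$") else v
            for k, v in args.items()
        }
        results[label] = functions[fn_name](**resolved)
    return results

# The "chain of thought": which function to call, in what order, with what.
plan = [
    ("raw", "read_customer", {"name": "Jan Doe"}),
    ("email", "extract_email", {"text": "$raw"}),
]
out = run_plan(plan, FUNCTIONS)
print(out["email"])  # jan@example.com
```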
Because an Elara solution is fundamentally streams and tasks, the real advantage of using an LLMBuilder
to create an assistant as above is that it can be entirely data-driven (even its purpose and prompts!), and it may interact with other complex AI tasks, such as simulation, optimization, or even another LLMBuilder
assistant, all while conformance of inputs and outputs is strictly and automatically managed under the hood.
Just like charts and tables, an assistant can easily be added to a layout, exposing an interactive chat thread that lets business users leverage the assistant within a solution.
Next Steps
In the next lesson you will learn how to construct a simple assistant.