Define an assistant
In this tutorial you will extend the interactive scenario defined earlier and:

- Extend the `Reports` resource to include further information.
- Add a description of the data to the `Reports` and `Orders` resources.
- Create an LLM assistant to aggregate and chart reports data, write and read price data, and read orders data.
- Add a chat to the existing dashboard based on the LLM assistant.
This lesson assumes that you have an empty project and asset, which you can deploy to a workspace named 09_01_02_define_an_assistant with the following command:

edk template deploy -ycw 09_01_02_define_an_assistant
Extend reports resource
In the previous lesson you created a `Reports` resource to collect performance reports. You can extend the resource to include further information relevant to an LLM assistant, with the following steps:

- Add resources to track `Costs` and `Revenue`.
- Add `revenue`, `costs` and `profit` to the `Reports` resource.
- Adjust the `Sales` process to update `Revenue`.
- Adjust the `Pay Supplier` process to update `Costs`.
- Update the `Reporter` process to calculate `profit`, `costs` and `revenue`, and update the `Costs` and `Revenue` resources.
You can add the above changes in the relevant code sections.
Add new resources
Add two new resources, as shown below, to represent `Costs` and `Revenue`:
const cash = new ResourceBuilder("Cash").mapFromValue(500)
const costs = new ResourceBuilder("Costs").mapFromValue(0)
const revenue = new ResourceBuilder("Revenue").mapFromValue(0)
const liability = new ResourceBuilder("Liability").mapFromValue(0)
Add `revenue`, `costs` and `profit` to the reports resource
Add the three new fields for `revenue`, `costs` and `profit` to the `StructType`.
In the code add the above changes:
const reports = new ResourceBuilder("Reports")
.mapFromValue(
new Map(),
DictType(
StringType,
StructType({
date: DateTimeType,
cash: FloatType,
costs: FloatType,
revenue: FloatType,
profit: FloatType,
liability: FloatType,
inventory: IntegerType,
})
)
)
Adjust `Sales` process to adjust the `Revenue`
You can import the revenue resource, and adjust the `Sales` process to update the `Revenue` resource.
In the code add the above changes:
const sales = new ProcessBuilder("Sales")
.resource(price)
.resource(inventory)
.resource(cash)
.resource(revenue)
.value("qty", IntegerType)
.value("price", FloatType)
// can only sell if there is enough inventory, so update the qty
.assign("qty", (props, resources) => Min(resources["Inventory"], props.qty))
// get the total amount of revenue
.let("amount", props => Multiply(props.qty, props.price))
// update the inventory balance by the qty
.set("Inventory", (props, resources) => Subtract(resources["Inventory"], props.qty))
// update the cash balance by the amount
.set("Cash", (props, resources) => Add(resources.Cash, props.amount))
// update the revenue
.set("Revenue", (props, resources) => Add(resources.Revenue, props.amount))
// the initial data comes from the historic sale data
.mapManyFromStream(sales_data.outputStream())
Adjust `Pay Supplier` process to adjust the `Costs`
You can import the costs resource, and adjust the `Pay Supplier` process to update the `Costs` resource.
In the code add the above changes:
// pay the supplier for some ordered inventory, and clear the liability
const pay_supplier = new ProcessBuilder("Pay Supplier")
.resource(cash)
.resource(liability)
.resource(inventory)
.resource(costs)
.value("supplierName", StringType)
.value("unitCost", FloatType)
.value("qty", IntegerType)
// the total amount to be paid
.let("amount", props => Multiply(props.qty, props.unitCost))
// the debt has been cleared
.set("Liability", (props, resources) => Subtract(resources.Liability, props.amount))
// update the costs
.set("Costs", (props, resources) => Add(resources.Costs, props.amount))
// update the cash by the amount
.set("Cash", (props, resources) => Subtract(resources.Cash, props.amount))
Adjust `Reporter` process
To update the `Reports` resource you can perform the following steps:

- Add imports of the `costs` and `revenue` resources.
- Add `costs` and `revenue` to the inserted report.
- Calculate `profit` as the difference between `revenue` and `costs`.
- Reset `costs` and `revenue` to zero, so that the value per period is tracked.
In the code add the above changes:
// create the hourly reports
const reporter = new ProcessBuilder("Reporter")
.resource(cash)
.resource(inventory)
.resource(reports)
.resource(liability)
.resource(costs)
.resource(revenue)
// insert a report into the reports resource
.insert(
(_props, resources) => resources.Reports,
(props, _resources) => Print(props.date),
(props, resources) => Struct({
date: props.date,
cash: resources.Cash,
costs: resources.Costs,
revenue: resources.Revenue,
profit: Subtract(resources.Revenue, resources.Costs),
liability: resources.Liability,
inventory: resources.Inventory,
})
)
// update the costs and revenue
.set("Costs", () => Const(0.0))
.set("Revenue", () => Const(0.0))
// create another report in an hour
.execute("Reporter", props => Struct({ date: AddDuration(props.date, 1, "hour") }))
// the first report should start at the first sale date (in the past)
.mapFromPipeline(builder => builder
.from(sales_dates.outputStreams().first)
.transform((date) => Struct({ date }))
)
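The stock-versus-flow behaviour in the `Reporter` process can be seen in isolation with a plain-TypeScript sketch (illustrative only, independent of the EDK builders): `profit` is derived as `revenue` minus `costs`, and the per-period flows are reset to zero after each report, while balances such as `cash` carry over.

```typescript
// Plain-TypeScript sketch of the reporting logic above (not EDK code):
// profit = revenue - costs, and the per-period flows are reset after
// each report, while cash, liability and inventory are running balances.
interface State {
    cash: number;
    costs: number;
    revenue: number;
    liability: number;
    inventory: number;
}

interface Report extends State {
    profit: number;
}

// Produce a report from the current state, then reset the per-period flows.
function report(state: State): Report {
    const snapshot: Report = {
        ...state,
        profit: state.revenue - state.costs,
    };
    state.costs = 0;   // reset so the next period is tracked separately
    state.revenue = 0;
    return snapshot;
}

const state: State = { cash: 500, costs: 30, revenue: 100, liability: 0, inventory: 20 };
const first = report(state);
console.log(first.profit);  // 70
console.log(state.revenue); // 0 (reset for the next period)
```

The balances (`cash`, `liability`, `inventory`) are snapshots at the start of the period, whereas `costs` and `revenue` are totals accumulated during the period, which is why only the latter are reset.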
Describe resources
To provide an effective LLM assistant, it is important to describe the business context of any relevant information. A `TypeBuilder` allows you to define a type of data, and also describe its context.
Describe reports resource
To describe the `Reports` resource, you can perform the following steps:

- Add a description of the resource with the `describe` method.
- Define a description of the overall `DictType` collection with the `dict` method.
- Define a description of the key of the `DictType` collection with the `string` method.
- Define a description of the struct and related fields such as `date`, `cash`, `costs`, `revenue`, `profit`, `liability` and `inventory` with the `field` method.
In the code add the above changes:
// create a report resource to store the hourly reports
const reports = new ResourceBuilder("Reports")
.mapFromValue(
new Map(),
DictType(
StringType,
StructType({
date: DateTimeType,
cash: FloatType,
costs: FloatType,
revenue: FloatType,
profit: FloatType,
liability: FloatType,
inventory: IntegerType,
})
)
)
.describe(builder => builder.describe("The predicted sales data")
.dict(
builder => builder.describe("The unique identifier for a business kpi report").string(),
builder => builder
.describe("A business kpi report over an hour period")
.struct(
builder => builder
.field("date", builder => builder.describe("The date and time of the start of the reporting period").datetime())
.field("cash", builder => builder.describe("The total amount of cash balance available for the business to spend at the start of the reporting period").float())
.field("costs", builder => builder.describe("The total costs incurred by the business during the reporting period").float())
.field("revenue", builder => builder.describe("The total revenue received by the business during the reporting period").float())
.field("profit", builder => builder.describe("The total profit achieved by the business during the reporting period").float())
.field("liability", builder => builder.describe("The total amount of liability owed by the business at the start of the reporting period").float())
.field("inventory", builder => builder.describe("The qty of stock remaining in inventory at the start of the reporting period").integer())
)
)
)
Describe orders resource
A description of the orders can be added in the same way as above, with the following steps:

- Add a description of the resource with the `describe` method.
- Add a description of the overall `DictType` collection with the `dict` method.
- Add a description of the key of the `DictType` collection with the `string` method.
- Add a description of the struct and related fields such as `date` and `supplierName` with the `field` method.
In the code add the above changes:
// create a resource containing procurement schedule
const orders = new ResourceBuilder("Orders")
.mapFromStream(procurement_dates.outputStreams().schedule)
.describe(builder => builder.describe("The predicted supplier orders for inventory")
.dict(
builder => builder.describe("The unique identifier for a supplier order").string(),
builder => builder
.describe("An individual supplier order")
.struct(
builder => builder
.field("date", builder => builder.describe("The date and time of supplier order").datetime())
.field("supplierName", builder => builder.describe("The supplier to order inventory from").string())
)
)
)
Create an LLM assistant
An assistant leveraging an OpenAI LLM can be created using the `LLMBuilder` class, which facilitates interaction with streams in Elara through the definition of functions.
Create base assistant
An assistant can be created using the `LLMBuilder` class, with the following steps:

- Create a stream to use as a prompt (this will be important later for adding a chat to the dashboard).
- Define a new assistant with the `LLMBuilder` class.
- Add some inputs that will be used in later steps, such as the `prompt`, `reports`, `orders`, and `price`.
- Create the assistant using the `assistant` method:
  - Add an `api_key` to the assistant; if you don't have one, you can create one through OpenAI.
  - Add a `prompt` to the assistant; for the moment the prompt will be empty (i.e. null).
In the code add the above changes:
// create an empty prompt stream
const prompt = new SourceBuilder("prompt")
.value({ value: null, type: Nullable(StringType) });
// create an assistant
const assistant = new LLMBuilder("assistant")
.input({ name: "prompt", stream: prompt.outputStream() })
.input({ name: "reports", stream: predictive.simulationResultStreams().Reports })
.input({ name: "orders", stream: predictive.simulationResultStreams().Orders })
.input({ name: "price", stream: custom_price.outputStream() })
.assistant({
api_key: "YOUR OPENAI API KEY", // replace with your OpenAI API key
prompt: (inputs) => inputs.prompt
})
Overall the arguments for creating an assistant include the following:
args: {
/** the OpenAI api key */
api_key: string | ((inputs: Inputs) => EastFunction<StringType>)
/** the purpose of the assistant */
purpose?: string | ((inputs: Inputs) => EastFunction<StringType>)
/** the assistant state, a message or thread */
prompt?: string | ((inputs: Inputs) => EastFunction<Nullable<StringType>>)
/** the open ai model to use */
model?: OpenAiModel | ((inputs: Inputs) => EastFunction<LLMModel>)
}
Note that all parameters may be defined by a constant value, or a function that returns an expression of input streams. This allows the assistant to be entirely data-driven; for example, it's possible to create an assistant that generates the `purpose` and `prompt` for another assistant.
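The constant-or-function pattern can be illustrated with a hypothetical plain-TypeScript sketch (not the EDK's internals): a parameter is modelled as either a value or a function of the inputs, and resolved at the point of use.

```typescript
// Hypothetical sketch of the "constant value or function of inputs" pattern.
type Param<Inputs, T> = T | ((inputs: Inputs) => T);

// Resolve a parameter against the current inputs.
function resolveParam<Inputs, T>(param: Param<Inputs, T>, inputs: Inputs): T {
    return typeof param === "function"
        ? (param as (inputs: Inputs) => T)(inputs)
        : param;
}

interface AssistantInputs { prompt: string | null }

// A constant prompt and a data-driven prompt look the same to the caller.
const constantPrompt: Param<AssistantInputs, string | null> = "Summarise the reports";
const streamPrompt: Param<AssistantInputs, string | null> = (inputs) => inputs.prompt;

const inputs: AssistantInputs = { prompt: "What is the total profit?" };
console.log(resolveParam(constantPrompt, inputs)); // "Summarise the reports"
console.log(resolveParam(streamPrompt, inputs));   // "What is the total profit?"
```

In the tutorial, `prompt: (inputs) => inputs.prompt` corresponds to the function-valued case, so the assistant's prompt follows the prompt stream rather than a fixed string.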
Add ability to aggregate reports
You can add a function to the assistant that allows it to aggregate the reports, with the following steps:

- Add a function to the assistant with the `aggregate` method.
- Add a name for the function as the first argument.
- Add a configuration object, with the following:
  - Add a function describing the `value` that the assistant should aggregate.
  - Add a description of the value with the `value_description` method; in this case use the `toType` method of the resource.
In the code add the above changes:
// create an assistant
const assistant = new LLMBuilder("assistant")
.input({ name: "prompt", stream: prompt.outputStream() })
.input({ name: "reports", stream: predictive.simulationResultStreams().Reports })
.input({ name: "orders", stream: predictive.simulationResultStreams().Orders })
.input({ name: "price", stream: custom_price.outputStream() })
.assistant({
api_key: "YOUR OPENAI API KEY", // replace with your OpenAI API key
prompt: (inputs) => inputs.prompt
})
.aggregate(
"read_and_aggregate_kpi_reports",
{
value: (inputs) => inputs.reports,
value_description: reports.toType(),
}
)
Add ability to read price
You can add a function to the assistant that allows it to read the price, with the following steps:

- Add a function to the assistant with the `read` method.
- Add a name for the function as the first argument.
- Add a configuration object, with the following:
  - Add a function describing the `return_value` that will be returned to the assistant.
  - Add a description of the return value with the `return_description` method; in this case define it with a builder, and add and describe a `float` field named `price`.
In the code add the above changes:
// create an assistant
const assistant = new LLMBuilder("assistant")
.input({ name: "prompt", stream: prompt.outputStream() })
.input({ name: "reports", stream: predictive.simulationResultStreams().Reports })
.input({ name: "orders", stream: predictive.simulationResultStreams().Orders })
.input({ name: "price", stream: custom_price.outputStream() })
.assistant({
api_key: "YOUR OPENAI API KEY", // replace with your OpenAI API key
prompt: (inputs) => inputs.prompt
})
// ... other assistant code
.read(
"read_current_price",
{
return_value: (inputs) => inputs.price,
return_description: (builder) => builder
.describe("An object containing the current price")
.struct(builder => builder
.field('price', builder => builder.describe("The current price").float())
),
}
)
Add ability to read orders
You can add a function to the assistant that allows it to read the orders, with the following steps:

- Add a function to the assistant with the `read` method.
- Add a name for the function as the first argument.
- Add a configuration object, with the following:
  - Add a function describing the `return_value` that will be returned to the assistant.
  - Add a description of the return value with the `return_description` method; in this case use the `toType` method of the resource.
In the code add the above changes:
// create an assistant
const assistant = new LLMBuilder("assistant")
.input({ name: "prompt", stream: prompt.outputStream() })
.input({ name: "reports", stream: predictive.simulationResultStreams().Reports })
.input({ name: "orders", stream: predictive.simulationResultStreams().Orders })
.input({ name: "price", stream: custom_price.outputStream() })
.assistant({
api_key: "YOUR OPENAI API KEY", // replace with your OpenAI API key
prompt: (inputs) => inputs.prompt
})
// ... other assistant code
.read(
'read_current_inventory_orders',
{
return_value: (inputs) => inputs.orders,
return_description: orders.toType()
}
)
Add ability to write price
You can add a function to the assistant that allows it to write the price, with the following steps:

- Add a function to the assistant with the `write` method.
- Add a name for the function as the first argument.
- Add a configuration object, with the following:
  - Add a function describing the `value` that the assistant should write.
  - Add a description of the value with the `value_description` method; in this case define it with a builder, and add and describe a `float` field named `price`.

In the code add the above changes:
// create an assistant
const assistant = new LLMBuilder("assistant")
.input({ name: "prompt", stream: prompt.outputStream() })
.input({ name: "reports", stream: predictive.simulationResultStreams().Reports })
.input({ name: "orders", stream: predictive.simulationResultStreams().Orders })
.input({ name: "price", stream: custom_price.outputStream() })
.assistant({
api_key: "YOUR OPENAI API KEY", // replace with your OpenAI API key
prompt: (inputs) => inputs.prompt
})
// ... other assistant code
.write(
"write_inputs",
{
value: custom_price.outputStream(),
value_description: (builder) => builder
.describe("An object containing the new price")
.struct(builder => builder
.field('price', builder => builder.describe("The new price").float())
),
}
)
Add ability to chart reports
You can add a function to the assistant that allows it to chart the reports, with the following steps:

- Add a function to the assistant with the `chart` method.
- Add a name for the function as the first argument.
- Add a configuration object, with the following:
  - Add a function describing the `value` that the assistant should chart.
  - Add a description of the value with the `value_description` method; in this case use the `toType` method of the resource.

In the code add the above changes:
// create an assistant
const assistant = new LLMBuilder("assistant")
.input({ name: "prompt", stream: prompt.outputStream() })
.input({ name: "reports", stream: predictive.simulationResultStreams().Reports })
.input({ name: "orders", stream: predictive.simulationResultStreams().Orders })
.input({ name: "price", stream: custom_price.outputStream() })
.assistant({
api_key: "YOUR OPENAI API KEY", // replace with your OpenAI API key
prompt: (inputs) => inputs.prompt
})
// ... other assistant code
.chart(
"read_and_aggregate_and_chart_kpi_reports",
{
value: predictive.simulationResultStreams().Reports,
value_description: reports.toType(),
}
)
Add fine tuning examples
In order to fine-tune the LLM assistant, you can add examples of the expected user interaction, assistant function calls, and assistant responses. Aside from providing a way to customize the assistant and improve the quality of function calls, adding examples also results in prompt "hints" being displayed if the assistant is added to a layout as a chat.
Add write inputs example
In order to fine-tune the LLM, you can define a series of methods in the order you expect. To define an example of writing the price, you can take the following steps:

- Add a prompt from the user in a way you would expect a user to ask.
- Add a call to the `write_inputs` function, with the expected caption and inputs:
  - The caption is a summary of the purpose of the function call.
  - The inputs are the values that would be written.
- Add a response from the assistant in a way you would expect the assistant to respond.
In the code add the above changes:
// create an assistant
const assistant = new LLMBuilder("assistant")
.input({ name: "prompt", stream: prompt.outputStream() })
.input({ name: "reports", stream: predictive.simulationResultStreams().Reports })
.input({ name: "orders", stream: predictive.simulationResultStreams().Orders })
.input({ name: "price", stream: custom_price.outputStream() })
.assistant({
api_key: "YOUR OPENAI API KEY", // replace with your OpenAI API key
prompt: (inputs) => inputs.prompt
})
// ... other assistant code
.example(builder => builder
.message('user', `Can you change the price to $${rrp}?`)
.call('write_inputs', builder => builder.args(`Changing the price to $${rrp}.`, { price: rrp }))
.message('assistant', `The price was successfully changed to $${rrp}.`)
)
In order to provide context for any values generated by the LLM, all function calls require the LLM to provide a caption describing the purpose or intent of the function call. In the above example, the LLM is instructed to write the price, with a caption of `Changing the price to $${rrp}.`, and a `FloatType` value for the price. The effect is that both the raw value and the intent can be communicated to an end user in a way that is understandable.
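The caption-plus-value pairing can be sketched in plain TypeScript (illustrative only, not the EDK's internal types; `rrp` is the hypothetical recommended retail price used in the examples):

```typescript
// Illustrative sketch: every function call carries a human-readable caption
// alongside its raw arguments, so both intent and value can be surfaced.
interface CaptionedCall<Args> {
    name: string;    // the function being called
    caption: string; // human-readable intent of the call
    args: Args;      // the raw values being written
}

// Render both the intent and the raw value for an end user.
function describeCall<Args>(call: CaptionedCall<Args>): string {
    return `${call.caption} (${call.name}: ${JSON.stringify(call.args)})`;
}

const rrp = 1.5; // hypothetical recommended retail price
const call: CaptionedCall<{ price: number }> = {
    name: "write_inputs",
    caption: `Changing the price to $${rrp}.`,
    args: { price: rrp },
};

console.log(describeCall(call));
// Changing the price to $1.5. (write_inputs: {"price":1.5})
```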
Add read inputs example
A similar approach to the above can be undertaken to instruct the LLM how to use the `read_current_price` function, with the following steps:

- Add a prompt from the user in a way you would expect a user to ask.
- Add a call to the `read_current_price` function, with the expected caption.
- Add a response from the assistant in a way you would expect the assistant to respond.
In the code add the above changes:
// create an assistant
const assistant = new LLMBuilder("assistant")
.input({ name: "prompt", stream: prompt.outputStream() })
.input({ name: "reports", stream: predictive.simulationResultStreams().Reports })
.input({ name: "orders", stream: predictive.simulationResultStreams().Orders })
.input({ name: "price", stream: custom_price.outputStream() })
.assistant({
api_key: "YOUR OPENAI API KEY", // replace with your OpenAI API key
prompt: (inputs) => inputs.prompt
})
// ... other assistant code
// ... other examples
.example(builder => builder
.message('user', `Can you tell me the current price?`)
.call('read_current_price', builder => builder.args(`The current price.`,))
.message('assistant', `The current price was read successfully.`)
)
Add aggregate sales example
A similar approach to the above can be undertaken to instruct the LLM how to use the `read_and_aggregate_kpi_reports` function along with the `write_inputs` function, with the following steps:

- Add a prompt from the user in a way you would expect a user to ask; note that the prompt asks the LLM to perform multiple steps:
  - Change the price to some nominated value.
  - Based on the effect of changing the price, aggregate the reports to calculate the total profit.
- Add a call to the `write_inputs` function, with the expected caption and inputs.
- Add a call to the `read_and_aggregate_kpi_reports` function, with the expected caption and aggregation configuration.
- Add a response from the assistant in a way you would expect the assistant to respond.
In the code add the above changes:
// create an assistant
const assistant = new LLMBuilder("assistant")
.input({ name: "prompt", stream: prompt.outputStream() })
.input({ name: "reports", stream: predictive.simulationResultStreams().Reports })
.input({ name: "orders", stream: predictive.simulationResultStreams().Orders })
.input({ name: "price", stream: custom_price.outputStream() })
.assistant({
api_key: "YOUR OPENAI API KEY", // replace with your OpenAI API key
prompt: (inputs) => inputs.prompt
})
// ... other assistant code
// ... other examples
.example(builder => builder
.message('user', `Can you tell me the total profit, based on a price of $${rrp}?`)
.call('write_inputs', builder => builder.args("Set the price.", { price: rrp }))
.call('read_and_aggregate_kpi_reports', builder => builder
.args(
"The total profit.",
{
aggregate: 'sum',
value: { field: 'profit' },
})
)
.message('assistant', `The total profit for a price of $${rrp} was successfully generated.`)
)
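The aggregation configuration in the example above can be illustrated with a plain-TypeScript sketch (not the EDK's implementation): a spec like `{ aggregate: 'sum', value: { field: 'profit' } }` selects a field from each report row and folds the values together.

```typescript
// Illustrative sketch of applying an aggregation configuration to report rows.
interface AggregateConfig {
    aggregate: "sum" | "mean";
    value: { field: string };
}

type Row = Record<string, number>;

function applyAggregate(rows: Row[], config: AggregateConfig): number {
    const values = rows.map(row => row[config.value.field]);
    const total = values.reduce((a, b) => a + b, 0);
    return config.aggregate === "sum" ? total : total / values.length;
}

// Hypothetical per-period profit values from the reports resource.
const reportRows: Row[] = [{ profit: 10 }, { profit: 20 }, { profit: 30 }];

console.log(applyAggregate(reportRows, { aggregate: "sum", value: { field: "profit" } }));  // 60
console.log(applyAggregate(reportRows, { aggregate: "mean", value: { field: "profit" } })); // 20
```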
One of the advantages of an LLM is the ability to reason and apply chain-of-thought. The above example demonstrates and reinforces this concept for the LLM by providing an example to cement the relationship between inputs and outputs, or cause and effect.
The fact that Elara is a managed data-flow platform becomes important for chain-of-thought processing. This is clear from the relationships between the streams and tasks related to the solution. Here is a simplified version:
In the above, it is evident that it's possible to measure the effect of writing the price by reading the reports. For this to work, however, the tasks must be triggered in the following order:

- The `Assistant Task` is triggered when the prompt is written; it processes the prompt, including a function call to write the price, and updates the price resource stream accordingly.
- The `Simulation Task` is triggered as a result of the price being written; it runs, then updates the reports resource stream accordingly.
- The `Assistant Task` is triggered as a result of the reports stream being written, and is free to perform a function call to aggregate.

If this order was not followed strictly, then the LLM might both write the price and read and aggregate the report simultaneously, and the profit would be the result of some previous execution of the `Simulation Task`. To make things more complicated, there might be multiple conflicting circular dependencies, steps in between, and/or the simulation task could take a long time to run.
Elara abstracts away the complexity of the data flow, and guarantees that the `Assistant Task` is only triggered again to read and aggregate the reports when the `Simulation Task` is complete and the `Reports` are written. This solution is not specific to the above; any interaction will result in guarantees of the correct execution order.
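The guaranteed ordering can be sketched in plain TypeScript (a toy model of the trigger chain, not the Elara runtime): writing the price triggers the simulation, and only the simulation's output triggers the aggregation, so the aggregated profit always reflects the new price.

```typescript
// Toy model of the trigger chain: write price -> simulation -> aggregate.
const log: string[] = [];

let price = 1.0;
let reports: { profit: number }[] = [];

// Hypothetical simulation: recomputes reports whenever the price changes.
function simulationTask(): void {
    log.push("simulation");
    reports = [{ profit: price * 10 }, { profit: price * 20 }];
}

// Hypothetical assistant aggregation: only runs after reports are updated.
function aggregateTask(): number {
    log.push("aggregate");
    return reports.reduce((total, r) => total + r.profit, 0);
}

// The platform guarantees this ordering; here we enforce it by hand.
function writePrice(newPrice: number): number {
    log.push("write price");
    price = newPrice;
    simulationTask();       // triggered by the price stream being written
    return aggregateTask(); // triggered by the reports stream being written
}

const totalProfit = writePrice(2.0);
console.log(log);         // ["write price", "simulation", "aggregate"]
console.log(totalProfit); // 60 (reflects the new price, not a stale run)
```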
Add reports chart example
A similar approach to the above can be undertaken to instruct the LLM how to use the `read_and_aggregate_and_chart_kpi_reports` function, with the following steps:

- Add a prompt from the user in a way you would expect a user to ask.
- Add a call to the `read_and_aggregate_and_chart_kpi_reports` function, with the expected caption and chart configuration.
- Add a response from the assistant in a way you would expect the assistant to respond.
In the code add the above changes:
// create an assistant
const assistant = new LLMBuilder("assistant")
.input({ name: "prompt", stream: prompt.outputStream() })
.input({ name: "reports", stream: predictive.simulationResultStreams().Reports })
.input({ name: "orders", stream: predictive.simulationResultStreams().Orders })
.input({ name: "price", stream: custom_price.outputStream() })
.assistant({
api_key: "YOUR OPENAI API KEY", // replace with your OpenAI API key
prompt: (inputs) => inputs.prompt
})
// ... other assistant code
// ... other examples
.example(builder => builder
.message('user', "Can you show me the total revenue over time?")
.call('read_and_aggregate_and_chart_kpi_reports', builder => builder
.args("Total revenue over time", {
name: "Total revenue over time",
mark: { kind: "line", interpolate: 'monotone' },
x: { field: 'date', title: "Date", type: 'temporal' },
y: { field: 'revenue', title: "Revenue", type: 'quantitative' },
})
)
.message('assistant', "The chart of total revenue over time was successfully generated.")
)
Add chat to existing layout
The assistant can be used to add a chat to the existing dashboard, with the following steps:

- Add a chat to the existing dashboard using the `chat` method.
- Add a thread to the chat using the `thread` method (it's possible to add multiple threads).
- Define the stream that contains the LLM prompts.
- Define the LLM assistant to use.
In the code add the above changes:
// create a dashboard to interact with the simulation and optimization
const dashboard = new LayoutBuilder("Dashboard")
// ... existing layout panel
// ... existing layout header
.chat(builder => builder
.thread(
prompt.outputStream(),
builder => builder.fromAssistant(assistant)
)
)
// enable the targets and tasks toolbars
.targetsToolbar(true)
.tasksToolbar(true)
Run and test assistant
Congratulations, you have now created an LLM assistant to interact with the simulation, and added a chat to a layout. You can now run the simulation and test the assistant; the video below shows the assistant running.
Example Solution
The final solution for this tutorial is available below:
Next Steps
In this tutorial you created an LLM assistant to interact with the simulation, and added a chat to a layout.