Smarter summaries with switches and content references
As we've written previously, LLMs (and other associated technologies referred to as AI) are just tools. Like every tool, there are clever, helpful uses and also thoughtless, harmful uses. The people most capable of judging which is which are the folks doing the work.
With that in mind, today we're releasing an initial set of features that will allow you to decide if and where it makes sense for this technology to exist in your team's workflow.
Tell me what I should already know
The first of these features is content references.
With content references, you can tell us where your public documentation is hosted and then Yetto will crawl those sources and build up a collection of references from the authoritative information that matters to you.
Do you have more than one docs site? Maybe your product documentation and developer documentation are hosted in separate places? No problem! Add them both and we'll get to work infusing your Yetto organization with internal smarts.
What does this get you? Read on.
Wait, is that documented?
The single most time-consuming and painful part of providing support to customers is acquiring context: understanding the customer (who may not speak your language natively), searching various docs sites, keeping up with updates and changes, grabbing links to the right page or place, and so on.
All of this adds up to real time and toil, and giving you a "summarize with AI ✨" button in the UI doesn't really do anything to eliminate that toil.
To that end, the first thing we're enabling you to do with content references isn't customer-facing at all: it's designed to make context and documentation easier to inject automatically, whenever you decide you need it.
As we revealed earlier, almost all of the magic of Yetto runs on switches. This gives us a natural way to enable your team to compose powerful new features.
After you've enabled AI features for your organization, you'll get access to a new summarize action. This action lets you leverage your content references alongside OpenAI's LLM to automatically summarize pieces of the conversation -- or even provide you with links to the most relevant documentation -- in response to any event you want.
Sir, this is a Wendy's
For example, let's say you've got a subset of very, very opinionated customers who have a habit of writing manifestos about how your team has "destroyed decades of productivity by changing the color of the header bar." Obviously, this is a totally made up scenario that didn't ruin a week of my life.
Reading these closely enough to make sure you're not missing anything takes a lot of time (to say nothing of the emotional cost), so let's see if we can make this easier on the team and simpler to understand.
We'll start with a switch that runs every time there's a new conversation, and takes action if the first message is really long. Arbitrarily, we'll decide that anything longer than 5000 characters is too verbose.
{
  "version": "2023-03-06",
  "events": {
    "conversation.created": {
      "conditions": {
        // Check whether the first message is longer than or equal to
        // 5000 characters
        "if": "{% data.yetto.conversation.originating_message.size >= 5000 %}"
      },
      "actions": [...]
    }
  }
}
Easy peasy.
If the message is longer than 5000 characters, you want to summarize the conversation and get some suggestions for documentation that might speak to the customer's issue.
To do that, we'd use the summarize action in the actions array like so:
{
  // Let's give it a memorable name
  "name": "...Wut?",
  // Specify that the summarize action is what we want to use
  "uses": "summarize",
  // The "with" block lets us pass options to the action
  "with": {
    // We want it to use some of the sites we've defined in our
    // organization's content references
    "content_references": [
      // Maybe a product docs site
      "uew_01J3RFD4QMATG6VTWT48K8N0QA",
      // Maybe a developer docs/API docs site
      "uew_01J5RQWJ8XT0KBARGQJ9SP22PS"
    ],
    // The first message is all that exists, so that's all we want to
    // summarize for now
    "scope": "first_message"
  }
}
For extra credit, you could also add a second action to the actions array that uses labels to mark this message as a specific type of "new" message:
{
  "name": "Oh Lawd He Comin",
  "uses": "apply_labels",
  // Remove the `status.new` label, if it exists (added by another switch we've got set up)
  "remove": [
    "status.new"
  ],
  // Add a specific child label of `status.new`
  "add": [
    "status.new.big-chungus"
  ]
}
All together, you wind up with a switch that looks like this (comments removed for brevity):
{
  "version": "2023-03-06",
  "events": {
    "conversation.created": {
      "conditions": {
        "if": "{% data.yetto.conversation.originating_message.size >= 5000 %}"
      },
      "actions": [
        {
          "name": "...Wut?",
          "uses": "summarize",
          "with": {
            "content_references": [
              "uew_01J3RFD4QMATG6VTWT48K8N0QA",
              "uew_01J5RQWJ8XT0KBARGQJ9SP22PS"
            ],
            "scope": "first_message"
          }
        },
        {
          "name": "Oh Lawd He Comin",
          "uses": "apply_labels",
          "remove": [
            "status.new"
          ],
          "add": [
            "status.new.big-chungus"
          ]
        }
      ]
    }
  }
}
Now every time someone writes a new tome, you'll get a little help in the form of an internal comment summarizing the message and offering guidance from your own docs.
What are we doing here?
Maybe you have a thread that's gotten out of hand, and you want to help your team out by automatically summarizing everything we know so far.
That switch looks like this:
{
  "version": "2023-03-06",
  "events": {
    "message.created": {
      "conditions": {
        // Check if the conversation has grown to more than 20 messages
        "if": "{% data.yetto.message.conversation.messages.size > 20 %}"
      },
      "actions": [
        {
          "name": "Summarize a conversation that's getting out of hand",
          "uses": "summarize",
          "with": {
            // No content references this time: we don't need docs,
            // just a summary.
            //
            // Specifically, we want every public message posted by us or
            // the customer summarized to catch us (or whoever else) up
            // on where we're at.
            "scope": "public_thread"
          }
        }
      ]
    }
  }
}
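As a sketch, the two patterns could plausibly live in a single switch: the shape of the events map suggests it accepts multiple event keys (an assumption here -- if your setup prefers one event per switch, keep them separate). Everything below is assembled from the two examples above, with the same conditions and actions:

```json
{
  "version": "2023-03-06",
  "events": {
    // Summarize very long first messages and suggest relevant docs
    "conversation.created": {
      "conditions": {
        "if": "{% data.yetto.conversation.originating_message.size >= 5000 %}"
      },
      "actions": [
        {
          "name": "...Wut?",
          "uses": "summarize",
          "with": {
            "content_references": [
              "uew_01J3RFD4QMATG6VTWT48K8N0QA",
              "uew_01J5RQWJ8XT0KBARGQJ9SP22PS"
            ],
            "scope": "first_message"
          }
        }
      ]
    },
    // Summarize threads that have grown past 20 messages
    "message.created": {
      "conditions": {
        "if": "{% data.yetto.message.conversation.messages.size > 20 %}"
      },
      "actions": [
        {
          "name": "Summarize a conversation that's getting out of hand",
          "uses": "summarize",
          "with": {
            "scope": "public_thread"
          }
        }
      ]
    }
  }
}
```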
Both of these scenarios happen every day in your queue and can consume hours of your team's time and brainpower. Now that work can be handed off to a switch.
We can do whatever we want
Something my co-founder Garen says all the time in response to a feature request from me is "We can make the computer do whatever we want." These features are, ultimately, an expression of trying to continually apply that belief to the question "How do we use X to help the professionals doing the work, not just the folks they support?"
You shouldn't have to click a button every time you want to generate something with an LLM, and the major focus of this genuinely impressive new set of technologies should be giving you the tools to do joyful, pain-free, excellent work.
We'll be doing more with this as we learn more about the pain and toil you want to relieve or eliminate. We can't wait to see what you do.
Not using Yetto yet? What are you waiting for!? Sign up and get started today!