Autograph: Bootstrapped & Self-Improving for Better Business Planning
See how Autograph’s self-learning semantic layer works
Learn how it captures and processes business context
Discover how its self-improving system sharpens planning accuracy
Understand why a bootstrapped approach gives it an edge
What's discussed in the video
How do we make the plans reliable? Some of you might have this question: you still have an AI generating the plans, that's still non-deterministic, that's still prone to hallucinations, so what is making your plans reliable? What makes our plans reliable is that they are generated by an AI that understands your company's language. You speak your company's language. All your employees speak your company's language. There's so much tribal knowledge, so much tacit knowledge, that you have, right? There is so much domain context that you already know. You know what constraints you're working with, what different terminologies mean, what kind of KPIs you're optimizing for. All of this you know, and hence whatever actions you take are all based on this knowledge. But your AI does not know that. My AI does not know that, hey, if I need to answer a question on how to improve my sales forecasting, I also need to respect these 5 other guardrails, other KPIs which should not go down if I want to improve a certain metric. This is knowledge that you have and your AI did not have. And I can't be expected, as the engineer building this AI system, to think of every single thing and put it as context into the AI, right? So there is something missing between your AI and your data, something your AI should be able to understand so that it speaks the same language that you speak in your business. An AI can speak your company's language with what we call an agentic semantic layer. Think of it as a completely bootstrapped, self-improving metadata or context layer, which keeps capturing this business context and business knowledge as you keep using the AI. Every time you had to correct the AI, saying, no, no, no, this is not what I asked you, this is what I meant by that, this is what the terminology means.
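The correction-capture loop described above can be sketched in a few lines. This is a minimal illustration under my own assumptions, not PromptQL's actual implementation; every name and structure here is hypothetical.

```python
# Minimal sketch of a "capture corrections as context" loop.
# All names and structures here are illustrative assumptions,
# not PromptQL internals.

context_layer = []  # accumulated business knowledge

def record_correction(term, meaning):
    """Each time a user steers the AI, persist what the term really means."""
    context_layer.append({"term": term, "meaning": meaning})

def build_prompt(question):
    """Prepend the accumulated context so the AI 'speaks your language'."""
    glossary = "\n".join(f"- {c['term']}: {c['meaning']}" for c in context_layer)
    return f"Business context:\n{glossary}\n\nQuestion: {question}"

record_correction("budget", "stored in cents; divide by 100 for dollars")
prompt = build_prompt("Which departments have budgets over $10,000?")
```

Each correction recorded this way is available to every future question, which is the essence of the self-improving layer the speaker describes.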
All of this context, as you give it to the AI, the AI should be learning from and using to improve its own semantic layer. And that's exactly what we have built with something called Autograph. So let me show you Autograph. Actually, let me show you this recorded demo first, and then I'll show it to you live as well. This is a good demo for understanding what Autograph really means. What I did is I connected a database to PromptQL which has very poorly named tables and columns, completely absurdly named tables, and I asked a question like: which employees are working in departments with more than $10,000 in budget? The 3 tables I've connected have completely meaningless names; the AI has no idea what they even mean, right? So the AI says, I don't see any information about employees, departments or budgets; all I see are 3 tables whose names tell me nothing. That's already a reliable response, because I know the data is very bad. So I say, can you sample a few rows from each table and figure out which table contains employees? PromptQL says, OK, cool, I'm going to look at every table and sample a few rows. And now I understand: Zork contains employee information, Plug contains department information, and Mark is a junction table. So now I can execute this; I can answer your question, because I understand. I, as the developer, didn't give any context to the AI. I let the AI figure it out itself. OK, this is awesome. But there's one more caveat: the budget in our data is in cents, not in dollars, and I asked the question in dollars. So you'll have to divide by a hundred, right? Can you do that, please? PromptQL says, cool, I'm going to divide by a hundred. So now it filters down to just 2 employees. Now, here's what Autograph lets us do: this steering we had to do, right? We had to give context about a certain table.
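The row-sampling step in the demo can be sketched as follows. The table names (zork, plug, mark) and the cents-denominated budget column come from the demo; the sample rows, column names, and inference heuristic are my own illustrative assumptions, not PromptQL's actual logic.

```python
# Hypothetical sketch of inferring table roles by sampling rows.
# Table names follow the demo; everything else is an assumption.

SAMPLE_ROWS = {
    "zork": [{"emp_id": 1, "full_name": "Ada Lovelace", "dept": 10}],
    "plug": [{"dept_id": 10, "dept_name": "Engineering", "qwerty": 1_500_000}],
    "mark": [{"emp_ref": 1, "dept_ref": 10}],
}

def infer_role(rows):
    """Guess what a meaninglessly named table holds from its sampled columns."""
    cols = set(rows[0])
    if any("name" in c and "dept" not in c for c in cols):
        return "employees"
    if "qwerty" in cols:  # the demo's budget column
        return "departments"
    return "junction"

roles = {table: infer_role(rows) for table, rows in SAMPLE_ROWS.items()}

# The budget column is stored in cents, so a $10,000 threshold
# becomes 10,000 * 100 = 1,000,000 cents.
over_budget = [
    d["dept_id"] for d in SAMPLE_ROWS["plug"] if d["qwerty"] > 10_000 * 100
]
```

The point is not the heuristic itself but that the roles and the cents-to-dollars conversion are exactly the pieces of context a human would otherwise have to supply by hand.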
We had to give context about how our data has been saved. All of this, my AI should be learning. Autograph runs automatically in the background as well, but this is a manual way of executing it, to show what it's doing under the hood. All I'm saying is: suggest metadata improvements based on the recent threads. That's all we ask Autograph to do, automatically, again and again. It says, OK, cool, I'll look at the last one hour of conversation; I see that there are 3 models; let me analyze the thread state to see if there are any meaningful interactions that we had with these models. I can now generate meaningful descriptions for these different tables. And that's exactly what it did: it created these descriptions for these tables. It also added the context that, hey, this column called QWERTY has the department budget, which is in cents. So it not only described every single table, but also every single column. And now, all I have to do is click Apply Suggestion. This semantic layer, this metadata layer, is version controlled; it has this concept of immutable builds. So now it creates a new immutable build on top of that, right? Which I can then run my evals on, to make sure everything is working fine. So you see, "0 seconds ago": a new build was created. And now if I ask the same question, which employees are working in departments with more than $10,000 in budget, this time the AI does not have to say, hey, I don't understand what you're talking about. It understands now, because it has all the context in the semantic layer. And it also understands that the data is in cents and not in dollars, right? So this is what the agentic semantic layer looks like. Let me just show it to you here as well. Let me ask a question. Let's see.
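The Apply Suggestion step above, where metadata lands as a new immutable build, might be sketched like this. The metadata shape and function names are assumptions made for illustration; only the table names and the QWERTY column description echo the demo.

```python
# Sketch of applying an Autograph-style metadata suggestion as an
# immutable build. The data structure is an illustrative assumption.

suggestion = {
    "zork": {"description": "Employee records"},
    "plug": {
        "description": "Departments",
        "columns": {"qwerty": "Department budget, stored in cents"},
    },
    "mark": {"description": "Junction table linking employees to departments"},
}

def apply_suggestion(builds, metadata):
    """Append a new versioned build instead of mutating the current one,
    so earlier builds stay intact for evals and rollback."""
    return builds + [{"version": len(builds) + 1, "metadata": metadata}]

builds = apply_suggestion([], suggestion)
builds = apply_suggestion(
    builds, {**suggestion, "plug": {"description": "Departments (revised)"}}
)
```

Because each call returns a new list with a new versioned entry, earlier builds are never modified, which is what makes it safe to run evals against a build before promoting it.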
So, this is a finance database with a lot of transaction data and anti-money-laundering data, and I ask it a question like: find accounts with the maximum suspicious AML outgoing amounts for the first quarter, and print out the account ID and name for each. All right, let me refresh this page again just to make sure it's working fine. So it says, OK, this is the query plan, this is what I need to execute; I'm going to execute this query plan under the hood and then come back with a response. Perfect. So I asked the question for the first quarter, and it assumes the quarter starts in January and ends in March. But let's say this data is for some country where the quarter arbitrarily starts in February and ends in April. So I say: the table values are for a country where the financial quarter is offset by a month, so Q1 is from February to April. Now, this is tribal knowledge that I have about my domain; the AI just did what it thought was right. So I had to steer it, and it says, OK, cool, I'll do the same thing, but with the filters from February to April. So this is your answer. Now, if I go to Autograph, and I'll skip this question, let's just do the last thread: improve my metadata using insights from the last thread. Again, Autograph runs agentically, automatically, under the hood. You can also manually execute it if you want to run it for a certain set of users, certain types of threads, or a certain set of queries. So it says, OK, I'll first get the schema information and understand what the schema is. I'll calculate the time range of this thread from the last 24 hours and then get the top thread. I can identify several insights to improve the metadata descriptions, so let me create these improvements, focusing on the anti-money-laundering and accounts tables. OK.
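The fiscal-quarter offset that had to be steered in above is simple date arithmetic. Here is a sketch; the function name is hypothetical, and only the "Q1 runs February to April" rule comes from the demo.

```python
from datetime import date

# Sketch of a fiscal quarter shifted by a configurable number of months,
# e.g. Q1 = February through April when offset_months=1.

def fiscal_quarter_range(year, quarter, offset_months=1):
    """Return (start, end_exclusive) dates for a shifted fiscal quarter."""
    start_month = (quarter - 1) * 3 + 1 + offset_months  # Q1 -> February
    end_month = start_month + 3                           # exclusive bound
    end_year = year + (1 if end_month > 12 else 0)
    end_month = end_month - 12 if end_month > 12 else end_month
    return date(year, start_month, 1), date(end_year, end_month, 1)

start, end = fiscal_quarter_range(2024, 1)
# Q1 2024 with a one-month offset: 2024-02-01 up to (not including) 2024-05-01
```

Once Autograph records the offset in the metadata, a filter like `start <= txn_date < end` produces the February-to-April window without any manual steering.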
And so these are the improvements it's suggesting: OK, I need to add context here that the financial quarters are offset by a month, right? And something else which I didn't even realize: some information that it needed to do some kind of joins, and it has that as well. Again, apply the suggested improvements, and a new build gets created here, generated by Autograph, and there you go. Next time you ask a question: no more steering, no more nothing. The AI just keeps learning. So that is what we have been able to achieve with this agentic semantic layer: a highly reliable AI that speaks your language, your company's language. Oops, sorry about that. Yeah, so let's just quickly see what customers have been saying about an AI that speaks their language; think of this as your own company's QL. One of the directors of data for a Fortune 500 international food chain: we tried building it ourselves, couldn't do it, then we evaluated a hundred vendors and nothing worked, but then finally we saw PromptQL, and it was just completely reliable AI. A VP of AI at a Global 2000 internet services company: no other tool has come even close to meeting the expectations; PromptQL has met and exceeded them. The CEO of a high-growth fintech company: PromptQL was able to demonstrate 100% accuracy on the hardest questions in our eval set. One of the things we do with our customers is we ask them: give us a set of your hardest questions. No matter what kind of assumptions they require, what kind of data sources they might require, how complicated the analysis is, give us the hardest questions. And we promise that we'll get you 100% accuracy on those. We call it PromptQL, which is a generic term, but think of it as your own company's QL, right? What would you call your company's language? Before we jump into the questions we have in the Q&A section, I have one. So we were speaking at a conference last week.
I know that you're back in Vegas this week with another conference that's going on right now; a lot of us are. One of the things that people presented as a barrier to getting involved with AI solutions was their concern about not only data hygiene, but also just getting things rectified in advance, and all the work that would go into it before actually connecting to an AI system. So could you speak a little to the Autograph use case and how, essentially, you can sidestep all that work completely and farm it out to a service, as opposed to having to do all that chore work yourself? Great, great, great. So I would talk about both Autograph and PromptQL here. If you think that you can have perfect data to build perfect AI, you are kidding yourself, and you will never be able to train a perfect AI on top of that, or make an AI work perfectly on top of that. So your AI needs to be adaptable, as you are as a human. You might also run into unexpected problems, unexpected data messes, right? But then you improvise, adapt, and overcome. That's what PromptQL has been built to do: improvise, adapt, and overcome. And then Autograph is built to learn: this is the problem, and next time I'll make sure this problem doesn't come up, because I can account for it. That's one. The second is that PromptQL is great for data prep and data investigation. You can ask it, hey, can you find inconsistencies in my data? And then you can ask it, OK, can you go fix it, please? And it'll say, OK, cool, this is what I'm going to do to fix it; does that look good to you? OK, cool, I'm going to go and do it. It'll take about 20 minutes and fix as much of it as it can, working with your data engineers, of course. But yeah.
Yeah, and I think that's the big point, right: we're saving a tremendous amount of time to market, or time to implement a solution, because instead of having to do all the manual labour of, you know, sanitizing data and making sure that things are semantically structured the way they should be, we can have PromptQL and Autograph do it instead. Easy peasy. Nice.



