A recent Forbes article admonished business leaders, “if you’re not sure what AI agents are, you’re already behind the curve.” But it also noted that the pace of innovation around AI has made it difficult to stay informed about this particular advancement and how it might be leveraged to make organizations more efficient. Process automation is a natural breeding ground for agentic AI use cases, and a spate of recent announcements from intelligent automation technology providers shows it has become a priority in the industry.
At UiPath’s recent FORWARD + Tech Ed developer and customer event in Las Vegas, agentic AI took center stage as the company’s leadership team outlined how the technology will complement and enhance its existing automation platform. The notion of a technology that can understand natural-language prompts, then translate those directives into retrieving data, creating plans, and autonomously executing those plans for a given task is exciting, but the level of autonomy involved raises well-founded security and governance concerns.
A keynote address at the event from UiPath’s chief technology officer Raghu Malpani sought to allay those concerns with an overview of how UiPath will be implementing agentic AI in its platform and, importantly, its focus on trust and safety as the technology takes a more significant role in automation.
Control the Agency of an Agent
“We are going all in on agentic AI and agentic orchestration,” Malpani told Automation Today in a recent interview at the FORWARD event, which the company used as a platform to announce this focus to the outside world. “We’re pivoting large parts of our team to work on it, and we plan to execute on these objectives with focus and determination.”
But the issue that causes executives concern, according to Malpani, is that while agents are capable of great things with minimal human intervention, they are also non-deterministic—that is, the outcome or output of an agent is not guaranteed to be the same even if all the inputs remain unchanged.
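To make that non-determinism concrete, here is a minimal Python sketch; the candidate plans, seed, and function names are purely illustrative and not UiPath code. A toy “planning step” samples one of several plausible plans the way a language model samples tokens, so two runs with identical inputs can disagree unless the sampling is pinned down.

```python
import random

# Toy stand-in for an agent's planning step: given the same task, it samples
# one of several plausible plans, much as an LLM samples tokens. This is an
# illustrative sketch of non-determinism, not UiPath's implementation.
CANDIDATE_PLANS = [
    "look up the invoice, then email the approver",
    "email the approver first, then attach the invoice",
    "escalate directly to the finance queue",
]

def plan_step(task: str, seed=None) -> str:
    rng = random.Random(seed)           # a fixed seed makes the choice repeatable
    return rng.choice(CANDIDATE_PLANS)  # unseeded, the output can vary per run

task = "route invoice #1042 for approval"

# Same inputs, no seed: two runs may produce different plans.
print(plan_step(task), "|", plan_step(task))

# Same inputs, same seed: the output is now deterministic.
print(plan_step(task, seed=7), "|", plan_step(task, seed=7))
```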
For UiPath customers to develop trust in AI agents, he said, there must be a threshold of predictability: the provider must be able to control the agency of an agent.
Building trust in AI agents and agentic orchestration at UiPath, Malpani explained, will rest on four pillars: resilience and reliability, compliance and governance, open architecture, and flexible delivery. At the moment, Malpani says, his focus is on the resilience and reliability piece and “bending the curve of reliability.”
If the difference in reliability between a traditional automation robot and an autonomous agent can be described as a curve that travels from the top left to the bottom right of a graph, then Malpani wants to introduce a series of controls that raise the reliability of the autonomous agent and bend the curve upwards.
Those controls, which will be included in the design of UiPath’s enterprise agents, include prompt auto-tuning and model selection, interactive prompt experimentation, online and offline evaluations, high-quality context grounding, and comprehensive testing. Together, these attributes make up a trust layer that will apply as UiPath customers build enterprise agents that leverage automation bots to complete complex, specialized tasks, and they will rely on the involvement of UiPath automation experts.
“Enterprise agents are what we call agents on which we’ve done extra work to make them safe, reliable, dependable,” he said. “We have added governance and controls and orchestration layers so you can rely on them to serve you in ways that an off-the-shelf agent may not. We will assist the person building the agent and give them suggestions for how they can improve the quality of the agent: tuning prompts, testing, scoring, and evaluating the agent at each step that will enable us to qualify the product as an enterprise agent.”
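As a rough illustration of that “testing, scoring, and evaluating” step, the sketch below shows a hypothetical offline evaluation loop; the case format, threshold, and toy agent are invented for the example and are not UiPath’s trust layer. Each agent output is checked against an expectation, and the agent only qualifies for promotion if its aggregate score clears a configured bar.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical offline evaluation harness: score an agent's outputs against a
# small test suite before it is promoted. Illustrative only, not a UiPath API.
@dataclass
class EvalCase:
    prompt: str
    check: Callable[[str], bool]   # returns True if the output is acceptable

def evaluate(agent: Callable[[str], str], cases: List[EvalCase], threshold: float = 0.9) -> bool:
    passed = sum(1 for case in cases if case.check(agent(case.prompt)))
    score = passed / len(cases)
    print(f"offline eval: {passed}/{len(cases)} passed (score={score:.2f})")
    return score >= threshold      # only promote the agent if it clears the bar

# Trivial stand-in agent and two checks, just to make the loop runnable.
def toy_agent(prompt: str) -> str:
    return "approved" if "invoice" in prompt else "unknown"

cases = [
    EvalCase("route invoice #1042", lambda out: out == "approved"),
    EvalCase("summarize this contract", lambda out: out != ""),
]

print("qualifies as enterprise agent:", evaluate(toy_agent, cases))
```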
That human assistance is what will allow users to bend the curve of reliability and control the agency of agents, according to Malpani.
“Just because there are agents, doesn’t mean humans are out of the loop,” he said. “The salient thing to understand is that humans are always going to be a necessary component when working with agents. My belief is humans will completely control agents in terms of when they’re used, when their decisions are audited or actually affirmed. The person who built an agent would have control in determining what consequential decisions an agent might make. Most importantly, if an agent doesn’t feel like it has confidence in its next step, it can be configured to reach out to the human and ask for confirmation. The amount of agency you give to the agent is in the human’s control.”
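The escalation behavior Malpani describes can be sketched as a simple confidence gate; the threshold, the console prompt, and the step format here are assumptions for illustration rather than a UiPath feature. When the agent’s confidence in its next step falls below a configured level, it pauses and asks a human to confirm before acting.

```python
# Minimal sketch of confidence-gated human-in-the-loop control. The threshold,
# the input() prompt, and the step format are illustrative assumptions only.
CONFIDENCE_THRESHOLD = 0.8   # the human decides how much agency the agent gets

def execute_step(step: str, confidence: float) -> None:
    if confidence < CONFIDENCE_THRESHOLD:
        answer = input(f"Agent is unsure about '{step}' (confidence {confidence:.2f}). Proceed? [y/n] ")
        if answer.strip().lower() != "y":
            print("Step skipped; decision deferred to a human.")
            return
    print(f"Executing: {step}")

execute_step("send the refund email", confidence=0.95)          # runs autonomously
execute_step("close the customer's account", confidence=0.55)   # asks a human first
```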
That agentic AI is the way forward for process automation was made clear from the stages at FORWARD. But Malpani assured customers that his focus is on giving users the support and tools they need to be able to trust the technology, despite its non-deterministic nature.
“We’re making sure that we give our users the controls, governance and education necessary to comprehensively understand what these agents can do and cannot do,” he concluded. “We give these capabilities to enterprise administrators so they feel like they have skin in the game and have controls in terms of how generative AI and agents are used in the enterprise. To my mind, this is the single most important part of my job.”