In five years’ time, using generative AI (GenAI) tools such as ChatGPT will be a normal part of working life, according to Don Schuerman, chief technology officer at Pegasystems, a US company that develops the software used behind the scenes by some of the world’s largest organisations to manage and automate their business processes.
“I think it will become the equivalent of knowing how to write an Excel formula,” he says.
Schuerman believes that GenAI is on the cusp of allowing companies to not just automate their business processes, but also meet their business goals.
Branded the Autonomous Enterprise, the idea is that companies will be able to set business targets, and that their AI-enabled business processes will learn how to achieve them.
For now, companies are taking their time deploying GenAI. This is particularly so in regulated industries, including banking, healthcare and insurance, where they want to make sure they understand the potential business risks first. “They are excited, but they also want to work it through their compliance models, their risk analysis and their regulatory teams,” says Schuerman.
It’s not yet clear how AI will be regulated, which is another reason why some companies are holding back. The US is trailing behind, and it’s likely to be Europe that takes the lead on regulating GenAI.
The risk of publicly exposing private company data through large language models such as ChatGPT is one reason why companies may be wary for now. Another is the capacity for GenAI models to make up information, or “hallucinate”.
Both of these problems are solvable, says Schuerman, or at least containable. One potential solution is known as Retrieval Augmented Generation (RAG).
Put simply, this means feeding the large language model prompts that contain both the question and the data needed to answer it.
So, when an employee or a customer asks a question, RAG software will be able to pull chunks of information from a database relevant to a query and use it to build an AI prompt containing the information required to answer the question.
If the AI system can’t find the answer, it is instructed to say it doesn’t know, to reduce the risk of hallucination.
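The retrieve-then-prompt flow described above can be sketched in a few lines of Python. This is an illustrative toy, not Pega’s implementation: a real system would retrieve with vector embeddings and pass the prompt to an LLM API, whereas here a simple keyword-overlap retriever and hypothetical function names stand in.

```python
# Toy RAG sketch. DOCUMENTS plays the role of the company knowledge base.
DOCUMENTS = [
    "Claims over 10,000 pounds must be reviewed by a senior claims manager.",
    "Customers can check their policy status online or by phone.",
]

def retrieve(question, documents, min_overlap=2):
    """Return documents sharing at least min_overlap words with the question."""
    q_words = set(question.lower().split())
    return [d for d in documents
            if len(q_words & set(d.lower().split())) >= min_overlap]

def build_prompt(question, documents):
    """Assemble a prompt containing both the question and the retrieved data.

    If nothing relevant is found, return None so the caller can answer
    "I don't know" instead of calling the model -- the hallucination guard
    described above.
    """
    context = retrieve(question, documents)
    if not context:
        return None
    return ("Answer using ONLY the context below. If the answer is not in "
            "the context, say you don't know.\n\n"
            "Context:\n" + "\n".join(context) +
            "\n\nQuestion: " + question)
```

The key design point is that the model is never asked to answer from its own training data: everything it needs arrives in the prompt, and an empty retrieval result short-circuits to “I don’t know”.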
Pega has attempted to do this with its own product database. It has created an AI interface that can synthesise answers from information held across different documents, rather than require customers to read through each document individually.
The web page comes with an appropriate warning notice and disclaimers, and while it may not have eliminated the effects of hallucination, it has “greatly minimised it”, says Schuerman.
A number of Pegasystems clients are interested in using the RAG approach to make their company’s data and information more accessible. Insurers, for example, would like to give their claims managers access to best practice information to solve problems with insurance claims. “They want to base it purely on the knowledge and documentation that they have built internally,” he says.
The first deployments of GenAI in businesses will not replace the need for people with deep business knowledge and skills, but they will help those people work faster.
One application is to use GenAI to automatically generate test data – significantly cutting down software development time.
Schuerman acknowledges there will still need to be “a human in the loop” to check whether the software is producing the answer expected from each piece of test data and to guard against hallucination.
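That human-in-the-loop check can itself be partly mechanised. The sketch below is hypothetical, not a Pega feature: the generated (input, expected-output) pairs stand in for a model’s response to a test-data prompt, and the harness flags any pair where the generated expectation disagrees with what the software actually produces, leaving only the flagged cases for a person to review.

```python
def discount(order_total):
    """Function under test: 10% discount on orders of 100 or more."""
    return round(order_total * 0.9, 2) if order_total >= 100 else order_total

# Pretend these (input, expected) pairs came back from a GenAI prompt.
generated_cases = [
    (50, 50),       # below threshold, no discount
    (100, 90.0),    # at the threshold
    (200, 170.0),   # hallucinated expectation -- the code returns 180.0
]

def review_queue(fn, cases):
    """Return (input, generated expectation, actual output) for disagreements."""
    return [(x, want, fn(x)) for x, want in cases if fn(x) != want]

flagged = review_queue(discount, generated_cases)
```

Only the third case lands in the review queue, so the human checks one item instead of fifty.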
Programming AI chatbots is another example where AI has the potential to speed up manual work. A bank could use an AI model to generate 50 different ways a customer could ask for their bank balance, for example. That data could then be used to train a chatbot.
It would still need a human to check whether the responses made sense, and to edit them where appropriate. Even so, it could significantly speed up development time.
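To show the shape of such a training set, the sketch below builds labelled variations of the balance question from templates. This is illustrative only: in practice the variations would come from a GenAI prompt rather than templates, and the intent label `check_balance` is a hypothetical name.

```python
from itertools import product

openers = ["What's", "Can you tell me", "Please show", "I'd like to know",
           "Show me"]
objects = ["my balance", "my account balance", "how much money I have",
           "my current balance", "what's left in my account"]

def balance_utterances():
    """Combine openers and objects into labelled chatbot training examples."""
    return [(f"{o} {b}?", "check_balance") for o, b in product(openers, objects)]

examples = balance_utterances()  # 5 x 5 = 25 phrasings, before human review
```

Notice that some combinations, such as “What's how much money I have?”, are ungrammatical, which is exactly why a human still reviews and edits the generated set before it is used for training.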
Pega, like many IT companies, is not on the guest list for the UK government’s exclusive international artificial intelligence safety summit at Bletchley Park in November.
Prime minister Rishi Sunak is expected to use the summit to announce the creation of an international advisory group on artificial intelligence, operating along the lines of the UN’s Intergovernmental Panel on Climate Change, to evaluate AI risks.
Schuerman argues it’s important that the politicians and AI experts invited understand what impact any regulatory decisions would have on consumers, employees and businesses that use AI.
“My concern is making sure that in these regulatory discussions there is enough knowledge on the ground to reflect the actual use cases for AI and the business drivers behind the use of some of this technology,” he says.
Mobile phone companies and credit card companies use Pega’s machine learning software to make tailored recommendations to customers. The software learns from the way customers respond so that it can make better decisions and recommendations in the future.
The next stage in Pega’s Autonomous Enterprise vision is to enable organisations to set business goals that will steer how business processes like this respond, says Schuerman.
For example, a bank could set a goal to reduce the time spent on credit card disputes while at the same time making sure it does not pay out unnecessary compensation.
The software will be able to analyse historical trends to identify cases that are likely to result in the bank paying out unnecessarily, or lead to delays in resolving disputes, and escalate them. “You are basically giving businesses tools to state their goals, state their objectives and have their processes continuously optimise and find improvements that help them better meet those objectives for the business,” he says.
Companies would be free to modify their business goals to suit changing conditions. For example, a call centre might want to keep customers on the phone longer to sell them more products during quiet parts of the year. But at busy times of the year, its priority might be to deal with customers as quickly as possible to avoid queues building up.
“The goal is to give the business the ability to dial that up and dial it back, and have their processes dynamically adjust,” he says.
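The “dial it up, dial it back” idea can be pictured as a pair of goal weights steering a process decision. The code below is a minimal sketch under assumed names, not Pega’s API: each candidate action is scored against weighted objectives, and changing the weights changes which action wins.

```python
def next_action(weights):
    """Score candidate actions against weighted goals; pick the highest."""
    actions = {
        "offer_product":     {"revenue": 1.0, "speed": 0.2},
        "resolve_and_close": {"revenue": 0.1, "speed": 1.0},
    }
    def score(action):
        return sum(weights[goal] * v for goal, v in actions[action].items())
    return max(actions, key=score)

quiet_season = {"revenue": 0.8, "speed": 0.2}  # keep callers on, cross-sell
busy_season  = {"revenue": 0.2, "speed": 0.8}  # clear the queue
```

With the quiet-season weights the process favours `offer_product`; with the busy-season weights it flips to `resolve_and_close`, without anyone rewriting the process itself.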
Automating processes in this way takes time. It starts with taking manual tasks and creating a structured process for employees to follow. The next step is to automate the task.
Once tasks are automated, businesses can then collect data on the history of their transactions. That creates a data pool which can be mined by AI and machine learning software to make business predictions and identify potential problems before they happen.
“That is now where we have the potential to take that process and lift it up to something that is self-optimising to meet business goals,” says Schuerman.
Each step can produce a return on investment. “The cost benefit is pretty well defined when we look at a manual process and find opportunities for automation,” he adds.
Companies have been able to predict when they are in danger of missing regulatory deadlines and take action to reduce the costs of fines, for example. “If you give it some pretty basic information about what the cost of your process is, [the software] can actually tell you the business value, for example, of a bottleneck, and what it’s costing you a year in terms of throughput, missed revenue or decrease in customer satisfaction,” says Schuerman.
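The back-of-envelope arithmetic behind that kind of bottleneck costing is simple. The figures and formula below are hypothetical, purely to illustrate the calculation Schuerman describes.

```python
def annual_bottleneck_cost(cases_per_year, share_delayed, cost_per_delay):
    """Yearly cost of a bottleneck: delayed cases times cost per delayed case."""
    return cases_per_year * share_delayed * cost_per_delay

# 120,000 cases a year, 5% hit the bottleneck, each delay costs 40 currency units
cost = annual_bottleneck_cost(120_000, 0.05, 40.0)
```

Feeding in even rough per-process costs like these lets the software put a yearly figure on each bottleneck it finds.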
Pegasystems is about 20% of the way along its own journey to becoming an autonomous enterprise.
There are some business processes that will never make it beyond the manual work stage, because the volume is not high enough to justify automation, or the complexity of the process is greater than the potential impact of automating it.
“That’s fine,” he says. “Not everything might actually make it to the point that is self-optimising in your business, but that should be a conscious choice that the business is making.”