What BPM can teach us about Agents and Automation

May 7, 2025 | 3 min read

All the hype surrounding agents looks to me like a replay of recent history. Specifically, agents look very much like the 2025 equivalent of 2000-era BPM systems. I can hear some of you yelling ‘not true!’, but give this short post a read, and if you're still not seeing it, just give it more time!

I’m going to take us back to BPM systems of the early 2000s and 2010s, which promised to be the ‘process backbone’ of operations, to automate repetitive tasks and to provide orchestration across departments.  Does that sound familiar? 

Back to BPM

Let’s start by breaking down what those BPM systems really did.  They created workflows that took input from users, systems, or external sources and performed a series of actions based on pre-defined logic. That logic could include some element of "reasoning"—but only to the extent that their army of developers could anticipate the vast number of possible flows and edge cases. The fidelity of the system largely depended on how well the coders understood the business process and how many “if-then” branches they could realistically maintain. 
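To make that concrete, here is a minimal sketch of what that style of automation looked like. The scenario, categories, and thresholds are entirely hypothetical; the point is that every branch has to be anticipated, written down, and maintained by a developer in advance.

```python
# A hypothetical BPM-style routing flow: every decision is a pre-coded branch.
# The categories, thresholds, and outcomes are illustrative only.

def route_purchase_request(request: dict) -> str:
    """Route a purchase request using hard-coded business rules."""
    amount = request.get("amount", 0)
    category = request.get("category", "unknown")

    # Each branch below had to be anticipated (and maintained) by a developer.
    if category == "it_hardware":
        if amount <= 1_000:
            return "auto_approve"
        if amount <= 25_000:
            return "manager_approval"
        return "finance_review"
    if category == "travel":
        return "auto_approve" if amount <= 500 else "manager_approval"
    # Anything the coders did not anticipate falls through to a human.
    return "manual_triage"


print(route_purchase_request({"amount": 12_000, "category": "it_hardware"}))
# -> manager_approval
```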

BPM and Agents

Fast-forward to 2025, and what are agents? At their core, they're also a set of logical functions that take input—from humans, APIs, memory stores, databases, or LLMs—and perform actions based on that data. But there's a big difference: agents today are built to reason, generate, and adapt in real time. They don’t just follow a script—they write it as they go. Need to call an unfamiliar API? They can work it out. Need to compose a custom SQL query on the fly? Not a problem. And, whatever the task, it doesn’t need human intervention.
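In code, the shape of that difference looks roughly like the loop below: instead of a fixed decision tree, the model chooses the next action at run time, and the result is fed back until it decides it is done. The "model" here is a scripted stub so the sketch runs on its own; in a real agent, call_llm would wrap an actual LLM API and the tools would wrap real systems. All names and outputs are illustrative.

```python
# A minimal agent-loop sketch. The model is a scripted stub so the example runs;
# in practice call_llm would wrap an LLM API and the tools would wrap real systems.

TOOLS = {
    "run_sql":  lambda args: f"3 rows returned for: {args['query']}",
    "call_api": lambda args: f"200 OK from {args['url']}",
}

_SCRIPTED_DECISIONS = [  # stand-in for what a real model would decide, step by step
    {"type": "tool", "tool": "run_sql",
     "args": {"query": "SELECT * FROM orders WHERE status = 'open'"}},
    {"type": "final_answer", "content": "There are 3 open orders."},
]

def call_llm(history: list[dict]) -> dict:
    """Stub: return the next pre-scripted decision instead of calling a real model."""
    step = sum(1 for message in history if message["role"] == "tool")
    return _SCRIPTED_DECISIONS[step]

def run_agent(task: str, max_steps: int = 10) -> str:
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        decision = call_llm(history)                         # the model picks the next action
        if decision["type"] == "final_answer":
            return decision["content"]
        result = TOOLS[decision["tool"]](decision["args"])   # execute the chosen tool
        history.append({"role": "tool", "content": result})  # feed the result back to the model
    return "stopped: step limit reached"

print(run_agent("How many open orders do we have?"))
# -> There are 3 open orders.
```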

That sounds pretty magical, and in many ways, it is. But in other ways, it risks repeating the same shortcomings BPM systems faced two decades ago. 

Let me elaborate. 

Why BPM Systems Failed

BPM systems didn’t fail because the logic was faulty. They failed because the world changed faster than the coded processes could keep up with it. BPM systems were architected on the premise of process optimization through repeatability and control. However, by the late 2000s and into the 2010s, organizations were increasingly operating in fluid, high-change environments.

A well-written BPM flow might break the moment an API changed, a schema was updated, or a business rule shifted. Or possibly, all of those at the same time! 

And when it failed, it failed properly, sending errors downstream or requiring significant rework to roll back or manually complete the task. In large enterprises, these disruptions were common, and maintenance costs became increasingly unacceptable.

Now let’s consider agents. They promise adaptability, but they still exist in complex environments, full of moving targets like evolving APIs, inconsistent data mappings, and ever-changing user behavior. Just like BPM, they are vulnerable to environmental drift. Certainly, they’ll handle some edge cases better, but they’ll produce the same headaches in response to change. 

Lessons from BPM

So while today’s agents are far more flexible and dynamic than BPM ever was, the lesson remains: automation is only as valuable as your ability to manage the change it encounters. 

One project I worked on really brought that lesson home. I was working for a large software vendor at the time; we supplied BPM software to a very large organisation and our experts coded it. I was the technical sales lead, so I did no coding myself, but I stayed close to the project and the customer while the team spent two years coding, testing, deploying, running user acceptance, and incorporating feedback.

After two years, the project was terminated, and a huge amount of work was thrown away. The reason? It was impossible for the system to keep up with the changes it encountered, and for us to code for every possible outcome. As soon as we had a stable system running well, a target system’s data structure or API changed, or any one of a hundred other things did. As a result, we couldn’t guarantee the system would deliver the accuracy required of a fully automated system of record.

History Repeating Itself

Let’s bring this back to today, and a world of AI promise. Agents are amazing, right? They understand natural language, they can search the web, and they can even update a sales or ordering system on my behalf. They are fully automated. And close to 100% accurate. Right? 

Well… maybe. That might be true — for the first five months. 

But what happens when OpenAI, Anthropic, Microsoft, or Google pushes a new model update, and the one you tested thoroughly is deprecated overnight? Suddenly, all your previous testing doesn’t guarantee the same result. Without rigorous and repeated regression testing, you’re flying blind. For a casual chatbot helping people pick a restaurant, that might not matter. But when it comes to banking operations, HR compliance workflows, procurement approvals, or sales compensation systems, the stakes are dramatically higher. 
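One practical guard, sketched below, is a small regression suite of prompts with human-approved answers that gets re-run every time the underlying model changes. The ask_model function is a placeholder for whatever your stack actually calls, and the golden cases are made-up examples; the point is that nothing model-backed should go untested after an update.

```python
# A hedged sketch of a regression check for a model-backed workflow.
# ask_model is a placeholder for your real model call; the golden cases are
# made-up examples of outputs a human previously signed off on.

GOLDEN_CASES = [
    {"prompt": "Classify this ticket: 'Cannot log in to the payroll portal'",
     "expected": "hr_system_access"},
    {"prompt": "Classify this ticket: 'Invoice 4417 was charged twice'",
     "expected": "billing_dispute"},
]

def ask_model(prompt: str, model: str) -> str:
    """Placeholder for the production model call."""
    raise NotImplementedError("wire this up to your own LLM client")

def run_regression(model: str) -> list[str]:
    """Return a list of mismatches; an empty list means the model still matches the golden set."""
    failures = []
    for case in GOLDEN_CASES:
        answer = ask_model(case["prompt"], model=model).strip().lower()
        if answer != case["expected"]:
            failures.append(f"{case['prompt']!r}: expected {case['expected']}, got {answer}")
    return failures

# Run on every announced model update or deprecation, before trusting the swap:
# failures = run_regression(model="replacement-model-2025-06")
# assert not failures, "\n".join(failures)
```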

So, what lessons from the BPM era can we apply to today’s agents? 

Here’s the short answer: agents will fail. 
And they’ll fail again. 
And again. 

That’s not a flaw — it’s just reality. Systems this complex, running in dynamic environments, are bound to run into issues. What matters is how we prepare for those failures, how we detect them early, and how we respond.

The mistake is treating agents as if they’re infallible — as if they can run unsupervised, flawlessly, at scale, with no oversight. They can’t. And pretending they can leads to bigger problems down the line. In some cases, the effort to monitor and validate their performance can outweigh the effort it would take to complete the task manually.

Why We Need Reference Sets

That’s why grounding is critical. Just like in BPM, agents need a reference set — a foundation of correct responses and expected outcomes. This acts as the benchmark against which every execution is tested. In the world of contract performance management (CPM), we’ve seen how vital this is. It’s exactly why we built our QAQA framework — to continuously evaluate whether agents are behaving as intended and delivering results that match business expectations. 
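As a rough illustration of the idea (a generic sketch, not the QAQA framework itself), the snippet below compares what an agent actually did against a reference set of validated outcomes and raises an alert when the match rate drifts below a threshold. The records and the threshold are hypothetical.

```python
# A generic sketch of reference-set grounding: compare what the agent actually did
# to the outcomes a human has validated as correct. Illustrative data only.

REFERENCE_SET = {
    # execution_id -> the outcome that was validated as correct
    "exec-001": {"action": "update_order", "order_id": "A-1001", "status": "shipped"},
    "exec-002": {"action": "create_credit_note", "order_id": "A-1002", "amount": 125.00},
}

def evaluate(agent_results: dict, threshold: float = 0.95) -> float:
    """Return the match rate against the reference set and warn if it drifts too low."""
    matches = sum(
        1 for exec_id, expected in REFERENCE_SET.items()
        if agent_results.get(exec_id) == expected
    )
    rate = matches / len(REFERENCE_SET)
    if rate < threshold:
        print(f"ALERT: only {rate:.0%} of executions matched the reference set")
    return rate

# Example: the agent got one of the two reference cases wrong (the amount drifted).
observed = {
    "exec-001": {"action": "update_order", "order_id": "A-1001", "status": "shipped"},
    "exec-002": {"action": "create_credit_note", "order_id": "A-1002", "amount": 120.00},
}
print(evaluate(observed))  # ALERT: only 50% ... then prints 0.5
```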

Without this type of grounding and governance, we’ll keep seeing what we saw with early BPM rollouts: teams throwing away months of work because the system diverged from reality and no one noticed until it was too late. 

If you are sceptical, I’m not the only one with this point of view, though others don’t usually invoke 20-year-old IT history as evidence! Here’s one that focuses on the limitations of agents.
