Artificial Intelligence is already radically altering the world of case management, but it’s up to humans to remember the Face Behind the Case

08/08/2023

By Ken Naismith

SOME of the brightest minds in the technologies of the future have been unusually vocal about the potential dangers of Artificial Intelligence (AI), with concerns ranging from simple job losses to world domination by heartless robots intent on wiping out mankind.

Others are more sanguine, pointing to profound benefits in sectors such as transportation, healthcare, finance and manufacturing, with a recent PwC report suggesting AI could add $15.7 trillion to the global economy by 2030.

Between these poles are countless businesses and organisations that are well aware AI is coming, if indeed it is not already here, and that are trying to work out how best to respond to a seismic shift in the economic landscape.

Every business will have to chart its own course through the perfect storm of new technology, which is becoming ever harder to navigate, and try to harness that technology's power in ways that are, on balance, more beneficial than harmful.

However, while AI will be – or, rather, is – able to examine massive volumes of data, find patterns, and make predictions or choices using algorithms, statistical models, and machine learning approaches, businesses must not lose sight of the fact that, ultimately, they are dealing with people (Human beings. Remember them?).

This is generating animated discussion in the sector in which we operate: the increasingly important field of complaints handling and case management systems for major companies, organisations and Ombudsman services, which are required by regulation to identify, classify, report and remediate all complaints.

There is no doubt that AI is increasingly enabling enterprises to gain a holistic view of complaints, allowing them not only to comply with external regulators but also to understand the impact of complaints on their business.

What we have to work out now is the Balance of Effectiveness – that is, where do we draw the line between efficiency and humanity? How can we use AI to help our customers, and their customers in turn, more efficiently without dehumanising contact with people in their time of need?

We also have to get past the confusion between automation and AI. Automation is great for acknowledging complaints, allocating case identifiers and letting people know they’re in the system.

AI, on the other hand, has the capacity to interpret what complainants are saying, to gauge the emotional content of the language they use and to decide, on that basis, whether or not to escalate the issue to a human contact. It learns continually, and remorselessly.
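To make that distinction concrete, here is a minimal, purely illustrative sketch in Python of how automated acknowledgement might sit alongside emotion-based triage. The function names, the keyword weights and the escalation threshold are assumptions invented for the example; they do not describe Workpro's product or any particular AI service, which in practice would rely on trained language models rather than a word list.

import uuid

# Automation: mechanical steps such as acknowledging a complaint
# and allocating a case identifier.
def acknowledge_complaint(complainant: str) -> str:
    case_id = f"CASE-{uuid.uuid4().hex[:8].upper()}"
    print(f"Dear {complainant}, your complaint has been logged as {case_id}.")
    return case_id

# 'AI' triage: a deliberately naive stand-in for sentiment analysis.
# The distress terms, weights and threshold below are hypothetical.
DISTRESS_TERMS = {"furious": 3, "desperate": 3, "unacceptable": 2, "help": 1}
ESCALATION_THRESHOLD = 3

def emotional_score(message: str) -> int:
    # Crude proxy for the emotional content of a complaint.
    words = message.lower().split()
    return sum(weight for term, weight in DISTRESS_TERMS.items() if term in words)

def triage(complainant: str, message: str) -> None:
    case_id = acknowledge_complaint(complainant)
    if emotional_score(message) >= ESCALATION_THRESHOLD:
        print(f"{case_id}: escalating to a human case handler.")
    else:
        print(f"{case_id}: routed to the standard automated queue.")

if __name__ == "__main__":
    triage("A. Complainant", "This is unacceptable and I am desperate for help")

The point of the sketch is the division of labour: the acknowledgement is pure automation, while the escalation decision rests on an interpretation of the complainant's language, which is precisely where human oversight still matters most.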

What it should not be used for is keeping people even further at arm's length from an organisation. Consumers are wise to this tactic and, if someone is already upset about an issue, putting obstacles in their way will only make them incandescent.

This raises another interesting point. If AI is learning all the time from its interactions with people, will it start to assume that boiling rage, sarcasm and threats are the norm for humans? And, if so, what will it make of that?

These are all questions for which, at the moment, there are few concrete answers. We are all, to a greater or lesser extent, feeling our way into this brave new world which holds such potential blessings and such potential terrors.

One of the primary challenges as we devolve more and more responsibility and decision-making capability to AI will be to remember the Face Behind the Case – that we are not dealing simply with a caseload, but with real people with real concerns.

Here, we could take a leaf out of the book of the Ombudsman community, with which we as a company work closely. It recognises that for every entitled, vocal and persistent complainant, there are just as many vulnerable people whose contact with an Ombudsman can be a cry for help.

Their concerns must be dealt with in a humane fashion and, even when those concerns are not valid or within an Ombudsman's remit, Ombudsman services put serious effort into directing such people towards agencies that can actually help them.

That requires genuine humanity, not artificial intelligence. The legitimacy of a complaint within their jurisdiction must be upheld or "denied" without fear or favour, whereas taking the time to deal with people gently and helpfully, arguably at the expense of their own productivity targets, is akin to taking a minute out of a busy day to help someone across the road.

Perhaps AI will eventually be able to act in such a compassionate way. Until then, it’s up to humans to remember the Face Behind the Case. 

Ken Naismith is Chief Executive of Workpro.
