
LLMs in ARM webinar recap

[Image: screen capture from the webinar, done in comic book style]

Everyone’s buzzing about AI tools like ChatGPT and Google’s Bard, and there’s both excitement and anxiety about them.

We hosted a webinar with AccountsRecovery.net to talk about how large language models (LLMs) like these will affect the accounts receivable management (ARM) industry.

Prodigal’s own Shantanu Gangal was joined by Jitu Viswanadham of Parallon and Michael Meyer of MRS BPO to define LLMs and talk about what we need to be thinking about as we start to use them in ARM.

What is a Large Language Model, anyway?

Even some of us at Prodigal (which uses LLMs!) had to have them explained to us, so if you aren’t sure what they are, you’re in good company.

An LLM is a type of artificial intelligence. (ChatGPT and Bard are both built on LLMs.)

The New York Times explains LLMs this way: “Most large language models have the same basic objective function: Given a sequence of text, guess what comes next.”

So we give the LLM a lot of things to read, and it recognizes patterns in what it’s reading. Once it’s figured out enough of those patterns, it can use its ability to guess what comes next to start creating new things.
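If you want to see that “guess what comes next” idea in action, here’s a minimal Python sketch — our illustration, not something from the webinar — using the small, freely available GPT-2 model through the Hugging Face transformers library. Given a prompt, the model just keeps predicting the most likely next word.

```python
# A minimal illustration of "guess what comes next" using the open-source
# GPT-2 model via the Hugging Face transformers library.
# pip install transformers torch
from transformers import pipeline

# Load a small, freely available language model.
generator = pipeline("text-generation", model="gpt2")

prompt = "Thank you for calling about your account. The remaining balance is"

# The model continues the text by repeatedly predicting the most likely next token.
result = generator(prompt, max_new_tokens=20, do_sample=False)

print(result[0]["generated_text"])
```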

Why does it matter to ARM?

LLMs have the potential to revolutionize the ARM industry by automating personalized communications and improving dispute resolution and other consumer finance processes.

These models can provide data-driven suggestions to teams in debt collections, borrowing, and lending, leading to more successful outcomes and faster resolutions.

We’ve long relied on machines of all kinds, including computers, to help us out, and LLMs are just the next generation.

What are the possibilities?

AI is so new and developing so rapidly that it’s hard to envision all the ways we’ll be able to use it in ARM.

“In the collections industry, there are many places where already or in the next twelve months, we expect a lot of change,” Gangal said.

But because of the ability to train LLMs on specific use cases, we’ll be able to use them where we feel the impact will be biggest, which our panel agreed would start with automation and internal processes.

Viswanadham said large language models could help by automating tasks, summarizing calls, eliminating back-office processes, and generating monthly reports. Those reports, in particular, are a perfect task for AI, because an LLM can pull the relevant context out of the underlying data and turn it into a readable summary.

Gangal agreed.

“Anything you are doing several times a day, or that you spend several hours a month doing, is something you should figure out how to automate.”
“Our Intent Engine uses large language models for a lot of the tasks Jitu mentioned, like summarization, generating repeatable monthly reports, and stuff like that. So those are the kind of things that will really alleviate a lot of the grunt work, a lot of the rote work, and give you back tons of productivity.”
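To make that concrete, here’s a hypothetical Python sketch of the call-summarization pattern Jitu and Shantanu describe. It is not Prodigal’s Intent Engine — the hosted API, model name, and prompt below are stand-ins — but it shows the shape of handing the rote work to an LLM.

```python
# A hypothetical sketch of automating call summarization with a hosted LLM.
# This is NOT Prodigal's Intent Engine; the OpenAI API is used here purely
# to illustrate the pattern described in the webinar.
# pip install openai
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

transcript = """
Agent: Thanks for calling. How can I help you today?
Consumer: I'd like to set up a payment plan for my outstanding balance.
Agent: Sure, we can split the $600 balance into three monthly payments.
Consumer: That works. Please start next month.
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name; any capable chat model works
    messages=[
        {"role": "system", "content": "Summarize collection calls in two sentences, "
                                      "noting any payment arrangement agreed to."},
        {"role": "user", "content": transcript},
    ],
)

print(response.choices[0].message.content)
```

The same pattern — transcript in, structured summary out — extends naturally to rolling those summaries up into the kind of repeatable monthly reports mentioned above.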

Increased personalization, increased results

The panel agreed that one of the ways in which LLMs can have a huge impact on collections as we move forward is by leaning into personalization. 

Because of the ability of AI to analyze large amounts of data, we can gain insight into likely behaviors and best practices that will allow us to customize interactions at a previously unheard-of scale.

  • LLMs can help drive personalization and improve customer experience by making recommendations for agents and servicers based on specific scenarios.

  • LLMs can be used to automate personalized initial contact and follow-up communications with debtors, resulting in more efficient outreach and improved response rates. We’re already seeing this with chatbots.

  • LLMs can assist in debt negotiations by providing personalized and data-driven suggestions for debt collectors, leading to more successful outcomes and faster resolutions.

What should we worry about?

All three panelists mentioned how early we are into using LLMs and other AI and that we have to be open to the possibilities. “I think the first thing is to be prepared for it and actually be willing to embrace it,” Gangal said.

But that doesn’t mean we can’t also be cautious.

“I think about two aspects of this. One is the training aspect of it, and how you create the models, and the second is the execution aspect of it, how do you use the models. There’s going to be a lot of benefit in leveraging these models to do the work, but in the training aspect we do need to be sensitive. It’s ‘garbage in and garbage out,’ right?” Viswanadham said.

There’s a difference between using a freely available tool like ChatGPT and using a custom-built model like Prodigal’s AI Intent Engine, trained on consumer finance data and built with safeguards such as redaction capabilities to protect customer privacy.
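To give a sense of what a redaction safeguard can look like, here’s a simplified Python sketch that scrubs obvious identifiers from a transcript before it ever reaches a model. Production systems use far more sophisticated entity detection than a few regular expressions, but the principle is the same.

```python
# A simplified sketch of a redaction safeguard: scrub obvious identifiers
# from text before it is sent to a model. Real systems use much more robust
# entity detection than these illustrative regexes.
import re

REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # Social Security numbers
    (re.compile(r"\b(?:\d[ -]*?){13,16}\b"), "[CARD]"),        # card-like digit runs
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),   # US phone numbers
]

def redact(text: str) -> str:
    """Replace common PII patterns with placeholder tokens."""
    for pattern, token in REDACTIONS:
        text = pattern.sub(token, text)
    return text

print(redact("My SSN is 123-45-6789 and my card is 4111 1111 1111 1111."))
# -> "My SSN is [SSN] and my card is [CARD]."
```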

And that enhanced personalization we mentioned is also a potentially thorny issue.

All our panelists agreed on the need for developing safeguards and guidelines to ensure the ethical use of these models in collections and other customer-facing interactions.

Models must be able to maintain compliance with industry regulations while still providing personalized and efficient communication with debtors.

But it's going to be big.

"The future obviously looks incredibly bright and with these new things being changed and as you had said, being fine-tuned for specific purposes, it's going to be very powerful,” Meyer summed up.