This article first appeared in CUInsight.com and is Part IV of a four-part series about digital responsibility.
Like all digital advancements, artificial intelligence (AI) has great potential and also poses great risks. Over the last few months, we’ve explored the emerging concept of “digital responsibility” through environmental, social, and economic lenses. Now we turn our attention to the technological realm, with a focus on AI.
MIT Sloan defines the technological realm as “the responsible creation of technologies themselves.” While certainly not limited to AI, this is a frontier worthy of careful consideration, particularly in the financial services industry.
Financial institutions are understandably excited by the promise of AI. After all, data can drive more strategic decision-making, but credit unions and banks are often drowning in it. AI has the potential to sort through these reams of data and pull actionable consumer insights. It can also assist in serving members more efficiently by delivering a human-feeling interaction via chatbots.
But the growing dependence on AI could also make it extremely difficult for financial institutions to protect consumers and ensure they’re treated fairly. Here are a few challenges to be aware of now:

The potential to magnify bias
Melany Anderson, a recently divorced 41-year-old pharmaceutical benefits consultant in New Jersey, told The New York Times, “I found the idea of going to a bank completely intimidating and impossible. I was a divorced woman and a Black woman. And also being a contractor — I know it’s frowned upon, because it’s looked at as unstable. There were so many negatives against me.” Then an online pop-up ad led her to Better.com, a digital lending platform, and she ended up getting a mortgage loan without ever speaking to a loan officer face-to-face.
This story, and many others like it, points to the potential for AI to decrease bias, particularly for financially underserved populations. Yet other stories reveal its potential not only to reflect bias but to magnify its effect.
Political science professor Virginia Eubanks takes a deep dive into the dark side of AI in her book, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. As one of many deeply troubling examples, she points to an automated computer system in Indiana that denied one million applications for healthcare, food stamps and cash benefits because the algorithm interpreted any mistake as “failure to cooperate.”
One of those one million applicants was a severely disabled 6-year-old girl named Sophie, who, as The New York Times reported, “received a letter… informing her that she was losing her Medicaid benefits because of a ‘failure to cooperate’ in establishing her eligibility for the program. This happened just as she was gaining weight, thanks to a lifesaving feeding tube, and learning to walk for the first time.”
Credit union members who are underserved today could be better served by an AI-driven loan approval process, but they could also easily remain underserved, if not become more so, in a more automated world. It’s clear that AI is only as good as its algorithm. If the algorithm merely automates existing inequalities, AI may end up doing much more harm than good.
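To make the Indiana example concrete, here is a minimal, hypothetical sketch of the kind of rigid eligibility logic that can produce mass denials. This is not the actual Indiana system or any real credit union workflow; the field names and rule are invented for illustration. The point is that when an algorithm treats every gap in an application as “failure to cooperate,” it cannot distinguish a lost document from genuine non-cooperation:

```python
# Hypothetical sketch of a rigid automated eligibility rule.
# Any missing or empty field is treated as "failure to cooperate"
# and triggers an immediate denial, with no human follow-up.

REQUIRED_FIELDS = ["income", "household_size", "proof_of_residence"]

def review_application(application: dict) -> str:
    for field in REQUIRED_FIELDS:
        if not application.get(field):
            # The rule can't tell a typo or a lost document
            # apart from actual refusal to cooperate.
            return "DENIED: failure to cooperate"
    return "APPROVED"

# One missing document denies the entire application.
print(review_application({"income": 18000, "household_size": 3}))
```

A human reviewer would likely pause on a borderline case like this; the automated rule denies it instantly, which is how a single brittle condition can scale into a million denials.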
Member frustration and limited transparency
According to 2020 research from Capgemini, just over half of the financial institutions surveyed reported that at least 40% of their consumer interactions were enabled by AI. But the same research showed that more than half of consumers felt the value of AI-driven contacts was either non-existent or less than they expected. Many missed having a human touch, didn’t like important decisions being made by an algorithm, and found that the process lacked transparency. Experts agree that some models are so complex that there is limited or no insight into how they work. If the experts aren’t really sure how a given AI model works, that should give all of us pause.
Although members are increasingly familiar with conversational AI tools such as Alexa and Siri, in the financial services space these can feel intrusive. A research report from Filene found conversational AI technologies could “risk invading members’ privacy and being frustrating and opaque.”
Delivering the flexibility of a fintech in a regulated world
An article from McKinsey & Company asserts that to optimize the promise of AI, financial institutions must strive to deliver the speed, agility and flexibility of a fintech, while managing the scale, security standards and regulatory requirements that come with being a traditional financial services enterprise.
That’s a tough ask for a variety of reasons. A credit union with sufficient staff and financial resources can overcome some obstacles, such as outdated core systems, insufficient staffing, siloed data, and the difficulty of finding the right AI partners. But regulatory requirements will always make data and AI trickier for credit unions to manage.
What can your credit union do now to tackle AI-associated risk?
- Stay up to date on the latest findings on data and AI. If your resources are scarce, consider partnering with an expert in this field or reaching out to industry resources such as CUNA and Filene Research Institute.
- Ask the right questions. Tim Frick, an expert on digital ethics, suggested in a recent episode of The Remarkable Credit Union podcast that credit unions “audit” their digital products or services by asking key questions, such as: “How many practices and policies are associated with the use of your digital tools? What are the opportunities to misuse those tools, intentionally or otherwise? What stakeholders could be impacted by that misuse?”
- Keep people in the equation. A Harvard Business Review article recommends having humans double-check or choose from algorithm-generated options to increase AI transparency and confidence. A Brookings Institution report supports this logic, pointing out that for all its strengths, AI is still no match for the human ability to make decisions that require a level of emotional intelligence: for instance, using factors other than those gleaned from a credit report to decide that a member is a good risk for a loan.
If there’s one common thread throughout our digital responsibility series, it’s that awareness and intentionality are key. As a society, we’ve perhaps been a little too eager to blindly celebrate digital tools and products without careful consideration of potential unintended consequences.
As we’ve discussed in past weeks, digital isn’t automatically environmentally sustainable; it can all too easily operate on a socially unfair playing field; and even though it can save credit unions money, there may be unanticipated costs. But if your credit union can commit to transparent, responsible practices, you stand a better chance of reaping the rewards of digital without hurting your members, your team, or your community at large.