How artificial intelligence affects financial consumers

Editor's note:

This report is part of “A Blueprint for the Future of AI,” a series from the Brookings Institution that analyzes the new challenges and potential policy solutions introduced by artificial intelligence and other emerging technologies.

Artificial intelligence (AI) technology has transformed the consumer financial services market and how consumers interact with the financial services ecosystem. This paradigm shift has been driven by the accelerated maturation of AI algorithms; the historic level of investment flooding the financial services market; the competition for market share between incumbents and new entrants; and rapid changes in consumers’ preferences for digital financial products. From AI-driven chatbots to sophisticated robo-advisors, AI applications have clear potential to expand opportunities for consumers living on the margins. However, experts have yet to discuss the relevance of AI for consumer financial protection in earnest, including the implications of AI solutions that could better protect consumers.

AI hype creates “wicked” consumer financial protection problems

With traditional banks vying to maintain market share and maximize shareholder value, it’s safe to say that the gold rush toward AI will only intensify. According to Autonomous Next’s 2018 Machine Intelligence forecast, banks, insurance companies, and investment management firms are poised to save more than $1 trillion by 2030 if they incorporate systematic investment in AI technologies into their business models. Banks are expected to reap the lion’s share of these savings.

While AI means long-run profits for banks, an AI sea change will likely disrupt consumers’ financial lives. In some ways, these changes will translate into better financial well-being for millions of consumers; in others, the outcomes will be more difficult to characterize. The net social and economic benefits to consumers will be closely tied to how banks operationalize potential savings. In the interim, questions loom regarding consumers’ general welfare in an AI-centered financial ecosystem. Which consumers will likely be most visible in this new ecosystem? And when consumers are digitally excluded, what will likely be the social cost of their losses?


Unveiling the limitations of AI is crucial: miscalculating the true potential of AI algorithms can create wicked consumer protection problems, especially where financial products and services are involved. As Horst Rittel famously argued, “wicked” problems are especially challenging because they are difficult to define and contain, and nearly impossible to solve with linear solutions. Even more vexing, the solutions themselves can spawn new and unintended problems.

Over-reliance on AI-driven financial services will undoubtedly lead to wicked problems when bank and fintech algorithms choose which consumers to serve. It is difficult to fathom a policy remedy for selection bias sanctioned by trusted (and profitable) algorithmic models once virtually all available data has effectively labeled a subgroup of consumers as permanent ‘bad decisions.’

To avoid this dystopian future, we should recognize that consumer financial protection is good social policy. AI can help democratize consumer financial protection by diffusing responsibility across a broader ecosystem of agents. We must move toward a paradigm that furthers this democratization. In the digital era, consumer protection does not lie solely with federal and state regulators; it also involves financial institutions and consumer advocacy groups.

AI opportunities for consumer financial protection

Consumer financial protection in the age of AI provides an opportunity to engage non-public agents in the business of protecting consumers from financial harm. A primary objective of consumer financial protection is to make financial services and markets fairer for all consumers. AI can contribute to this goal by expanding access to safer and more effective financial products and services that allow consumers to build wealth and access credit.

Reducing systematic financial invisibility is one way AI can extend financial access. Financial exclusion remains a significant barrier to economic mobility for millions. Per Consumer Financial Protection Bureau (CFPB) estimates, approximately 15 percent of rural consumers aged 25 and older are likely to be credit invisible. Results from FDIC’s 2017 household banking survey reveal the growing challenge of connecting with rural consumers, who are less likely to use mobile banking and often rely disproportionately on bank teller relationships, which are rapidly disappearing, to manage their deposit accounts.


Despite these daunting trends, near-universal adoption of e-commerce retail, led by tech-centric retailers like Amazon and Walmart, has promoted openness to digital transactional models across all consumer segments. In some ways, ubiquitous online shopping experiences can ease wary consumers’ transition to a digital financial world by enhancing their trust in and comfort with virtual institutions. Nudging these consumer segments toward digital interaction can go a long way in helping these groups navigate a financial ecosystem witnessing a dramatic reduction in brick-and-mortar bank branches. Between 2016 and 2017, “the total number of [bank] branches in the U.S. shrank by 1,700”–“the biggest decline on record” according to The Wall Street Journal.

The plight of retail network shrinkage is even more pronounced in rural communities. The Wall Street Journal noted, “The financial fabric of rural America is fraying. … In-person banking, crucial to many small businesses, is disappearing as banks consolidate and close rural branches.” Large banks continue to forge ahead in their digital transformations, investing in front-end infrastructure with tech-enabled communication via mobile apps and internet conversational interfaces. The fact that AI technology will change the retail business model is an unassailable conclusion.

As fintechs and banks seek to fill the void left by the departure of physical branches, the digital personal lending vertical in the U.S. has exploded, growing to a reported $120 billion since the Great Recession. The surge in digital loan platforms, buttressed by AI, has shown promise in expanding access to credit for previously marginalized consumers. Credit scores calculated by machine-learning algorithms, for instance, have improved financial institutions’ abilities to score credit-poor consumers, providing a much-needed economic lift.
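At their core, machine-learning credit scores of the kind described above are supervised classification models. A minimal sketch of the idea, using entirely synthetic data and a stdlib-only logistic regression (the alternative-data features, weights, and score scaling here are hypothetical illustrations, not any lender’s actual model):

```python
import math
import random

random.seed(0)

# Hypothetical alternative-data features for a thin-file consumer:
# (on_time_utility_ratio, months_of_rent_history, deposit_volatility)
def synth_consumer():
    x = [random.random(), random.random(), random.random()]
    # Synthetic ground truth: repayment likelihood rises with the first
    # two features and falls with deposit volatility.
    p = 1 / (1 + math.exp(-(3 * x[0] + 2 * x[1] - 2.5 * x[2] - 1)))
    return x, 1 if random.random() < p else 0

data = [synth_consumer() for _ in range(2000)]

# Logistic regression trained by plain batch gradient descent.
w, b = [0.0, 0.0, 0.0], 0.0
lr = 0.1
for _ in range(300):
    gw, gb = [0.0, 0.0, 0.0], 0.0
    for x, y in data:
        pred = 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
        err = pred - y
        for i in range(3):
            gw[i] += err * x[i]
        gb += err
    for i in range(3):
        w[i] -= lr * gw[i] / len(data)
    b -= lr * gb / len(data)

def score(x):
    """Map the model's repayment probability onto a familiar 300-850 band."""
    p = 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
    return round(300 + 550 * p)

print(score([0.9, 0.8, 0.1]))  # strong profile: higher score
print(score([0.2, 0.1, 0.9]))  # weak profile: lower score
```

The promise, and the risk, both live in the training data: a model like this can score consumers whom traditional bureau files miss, but it inherits whatever biases the data encodes.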

Yet access to credit is not the only element of financial inclusion central to consumers’ financial well-being. Other facets of financial access, such as effective savings products and retirement assets, are arguably more important than creating debt alone. AI-enabled products are making inroads in these market segments as well.


Personalized portfolio management can play a powerful role in helping marginalized and low-income consumers develop some agency in their financial lives. Applications can be used to design intelligent financial products that suit consumers’ financial behavioral biometric profiles, helping them avoid debt traps fueled by late fees and inflexible payment relationships. Incorporating these solutions into financial products could prove transformative for vulnerable consumers.

Role of AI in regulating consumer financial protection

The potential of AI in the consumer financial services marketplace will likely benefit consumers and generate profits for financial institutions plying the AI trade; however, the financial industry could easily expect too much from AI technologies too soon.

Issues around algorithmic bias have been vigorously debated in tech-policy circles, and the stakes become higher when data do not represent certain consumer groups who have been filtered out of mainstream settings by a cutting-edge system. We have yet to grasp that though today’s AI algorithms are exceedingly proficient at complex computational tasks, they are far from mimicking true human intelligence; pattern recognition is but a fraction of humans’ innate intelligence.


The consumer financial services market is a constellation of products, services, and levels of financial access born of intricate human decisions. To believe that algorithms can replace sophisticated human decisions and ultimately improve consumer welfare is naïve at best. At worst, AI can lead to a surge of wicked, legacy problems: product steering, discriminatory pricing, unfair credit rationing, exclusionary filtering, and digital redlining. These practices underscore the need for thoughtful human agency, particularly when faced with unintended outcomes and nuanced marginal problems.

We are moving toward a financial marketplace in which regulators, financial institutions, and consumer advocacy stakeholders must be harmonized to deliver safe products that ensure all consumers’ long-term financial well-being. The Great Recession destroyed consumers’ faith in traditional financial institutions and the regulators tasked with protecting customers. This trust will only continue to erode until regulators adopt novel ways of working with stakeholder constituents to safeguard consumers’ financial well-being.

Transparency around algorithmic models will be regulators’ greatest obstacle in the digital epoch. We need robust oversight to ensure AI applications remain accountable to society—the people and government—and avoid discriminatory bias, consistent with the AI principles elegantly laid out by Sundar Pichai. Innovative tools and oversight policy will be the distinguishing factors between good consumer financial protection regulatory regimes and bad ones.

Consumer protection oversight of the future must harness AI tools—machine learning, natural language processing, computer vision, predictive models, big data, and other emerging technologies—to monitor a dynamic financial system in nearly real time, particularly in areas of the traditional prudential model that are ripe for improvement.

AI and consumer complaints

Identifying emerging issues in the consumer finance market starts with detecting spikes in consumer complaint patterns. Regulatory systems can exploit AI’s pattern detection advantages to highlight complaint anomalies and assess whether a genuine risk is emerging. Most regulatory complaint data systems, including CFPB’s consumer complaint portal, accumulate complaints in a highly structured manner. Deviations from hard-coded feature engineering in such systems can potentially hurt unsophisticated consumers: those without access to advocates, those who are digitally challenged, and those who face basic literacy obstacles.

AI conversational interfaces can be used to gather unstructured consumer data through phone interactions. Other algorithms can be applied to integrate complaint data from internet-based consumer complaint platforms. Expanding the volume and scope of consumer complaints, combined with the prospects of big data technologies, has tremendous implications. Government agencies could be nudged toward a predictive guidance model in time with market shifts instead of relying on complaint volume and sporadic formal grievances to initiate change. Diversifying consumer complaint data collection could also close the gap between average consumers and government agencies whose mission it is to protect them.
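The spike-detection idea behind such a predictive model is simple in principle. A minimal sketch, flagging anomalous weekly complaint counts for a single product category with a rolling z-score (the counts, window, and threshold are invented for illustration; a production system would model seasonality and many categories at once):

```python
import statistics

# Hypothetical weekly complaint counts for one product category.
weekly_counts = [42, 38, 45, 41, 39, 44, 40, 43, 37, 41, 95, 102, 44]

def flag_spikes(counts, window=8, threshold=2.5):
    """Flag weeks whose count sits more than `threshold` sample standard
    deviations above the mean of the preceding `window` weeks."""
    flagged = []
    for i in range(window, len(counts)):
        history = counts[i - window:i]
        mu = statistics.mean(history)
        sigma = statistics.stdev(history)
        if sigma > 0 and (counts[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

print(flag_spikes(weekly_counts))  # weeks 10 and 11 stand out
```

Anomalies flagged this way would prompt human review, not automatic action: a spike may signal an emerging market risk or merely a data-collection artifact.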

The benefits of AI notwithstanding, broadening its adoption in the financial ecosystem presents challenges around informed consent. Effective informed consent in a hyper-digital financial system requires that consumers understand and affirmatively consent to the use of their transactional and personally identifying data in financially intrusive ways. The National Telecommunications and Information Administration acknowledges the gap between current privacy policies and how AI has changed consumers’ lives, rightly calling for an open conversation that supports “user-centric privacy outcomes that underpin the protections that should be produced by any Federal actions on consumer-privacy policy, and … a set of high-level goals that describe the outlines of the ecosystem that should be created to provide those protections.”

Emerging technologies and opportunities for smart supervision

Agile consumer oversight depends on broadening the breadth and depth of the government’s capacity to maintain a healthy financial system. The bulk of regulatory oversight is effectuated through a complex, mechanical supervisory process, which stands to benefit from emerging technology applications that extend the reach of compliance resources and the supervisory workforce.

The supervisory regime is a prime use case in which distributed ledger technologies (DLTs), such as blockchains, could be used to devise regular monitoring regimes. A DLT is a type of database that creates permanent records of transactions, storing them in ‘blocks’ that are ‘chained’ together using cryptographic signatures that ensure transactions are not altered. These blocks form the subcomponents of a ledger that can be copied and accessed by multiple parties, with every change fully visible to network participants. DLTs offer transparent data collection and reporting advantages for regulatory agencies, provided there is a clear vision around using the technology to benefit consumers.
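The tamper-evidence property that makes DLTs attractive for supervision can be illustrated with a toy hash chain (a deliberate simplification: real DLTs add digital signatures, consensus protocols, and replication across participants; the loan records here are hypothetical):

```python
import hashlib
import json

def block_hash(block):
    """Deterministic hash over a block's contents, including the
    previous block's hash -- this is what 'chains' the records."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, record):
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"record": record, "prev_hash": prev}
    block["hash"] = block_hash({"record": record, "prev_hash": prev})
    chain.append(block)

def verify(chain):
    """An auditor re-derives every hash; any altered record breaks
    the chain from that point forward."""
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        if block_hash({"record": block["record"], "prev_hash": prev}) != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain = []
append_block(chain, {"loan_id": 1, "apr": 6.5})
append_block(chain, {"loan_id": 2, "apr": 7.1})
print(verify(chain))               # True: the untouched ledger verifies

chain[0]["record"]["apr"] = 3.0    # quietly alter a past record
print(verify(chain))               # False: the tampering is detectable
```

For an examiner, the appeal is that verification requires no trust in the reporting institution: re-deriving the hashes exposes any after-the-fact edit to the transaction history.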


DLTs can boost reporting efficiency in two critical areas of consumer protection: mortgage regulation and community lending. In practice, prudential regulators rely on limited transaction samples to conduct complex compliance audits and generate sweeping inferences about an institution’s compliance with consumer protection rules and regulations. As of 2018, FDIC and the Federal Reserve updated portfolio sampling guidance for the examiner workforce in HMDA and CRA reviews, calling for sample sizes of 159 and 99, respectively. These guidelines cover financial institutions with assets ranging from $250 million to over $10 billion, offering myriad products and serving more than 240 million consumers. Implementing DLTs in these areas can immediately improve reporting efficiency and strengthen the impact of limited resources.

Ultimately, the goal as we advance in the lifecycle of financial oversight is to design relevant rules that keep pace with market developments and promulgate regulations that protect consumers from undue harm. Prudential regulators should seize the technological prowess of a growing ensemble of AI algorithms for more efficient regulation. Proponents of deregulation have frequently decried soaring compliance costs in their critiques, often blaming the Dodd-Frank Wall Street Reform and Consumer Protection Act for this growing burden. Since 2010, according to Bloomberg, the financial industry has spent an average of $1 billion beefing up its in-house compliance capacity. Leveraging AI solutions to enhance regulation would provide a direct response to the ‘burden’ issue and possibly inspire a paradigm shift toward a consumer protection model in which industry develops a shared understanding of the benefits of consumer protection.


However, regulator interest in AI should develop beyond the ‘raising an eyebrow’ stage and engage in targeted, strategic analytical exploration. Rising compliance cost complaints hold more than a kernel of legitimacy—regulators have an obligation to respond with solutions. The deluge of data generated by connected devices and machine learning applications creates a prime opportunity to collect and mine publicly available data to inform critical regulation burden estimation analyses, such as those required by the Small Business Regulatory Enforcement Fairness Act and data collection limitations resulting from the Paperwork Reduction Act. The combination of improved data and AI tools available to regulators would greatly enhance professionals’ abilities to identify risky institutions and prioritize them for heightened oversight. Consumer financial protection regulators need imaginative thinking and emerging technologies to bend the arc of financial innovation toward consumers’ interests.

Employing AI talent for public good

Finally, AI ambitions for regulatory innovations are useless without relevant talent. Regulators should be competing as aggressively as Google and Amazon for AI engineers focused on building neural network models that pinpoint emerging risks or anomalies that could lead to widespread consumer harm. We need investments in regulatory AI investigation models, similar to the Pentagon funding that seeded AI’s most recent resurgence.

Academic researchers have also heartily acknowledged the importance of machine learning accountability and explainability. But the degree of progress we require will not come from a research model—it must be a government-funded mandate. Only then can we change the trend of the government playing catch-up.

The need for consumer engagement

Diverse stakeholders must be involved in consumer protection: financial institutions, consumers, consumer advocate groups, and regulators. A digital financial ecosystem needs regulatory mechanisms to provide feedback and proactive guidance to encourage prioritizing consumer financial welfare. Rewiring this systematic relationship calls for regulators to shift from supervisory models that overemphasize punitive, reactionary enforcement toward newer models rooted in proactive supervision with open communication between stakeholders.

Robust consumer financial protection in the age of AI will not come from the regulation of big data or data privacy protections alone. The objective is to avoid consumer harm through ongoing supervision and enforcement when risks are identified—but to do this effectively, regulators must leverage the complementarity of AI innovations and human oversight to ensure these technologies can elicit the most benefits for the most people.


  • Footnotes
    1. Rittel, H. W., & Webber, M. M. (1973). Dilemmas in a general theory of planning. Policy Sciences, 4(2), 155-169.
    2. FDIC guidelines range between 99 and 44, depending on the selected confidence interval, aggregate portfolio volumes, and pre-set preferences for statistical confidence.