by Stephen C. Rea, PhD, Research Assistant Professor at Colorado School of Mines and former IMTFI Research Assistant
Stephen Rea recently published a white paper surveying literature in computer science, law, and the social sciences on developments in machine learning and artificial intelligence, with special focus on their implications for consumer financial services in the United States. This project grew out of a joint collaboration between IMTFI and Capital One's Responsible AI Program. We present here an excerpt from the introduction to the white paper, followed by some updates about recent developments in this space.
Image: Capital One Eno chatbot
Excerpt
Machine learning (ML) algorithms and the artificial intelligence (AI) systems that they enable are powerful technologies that have inspired considerable excitement, especially within large business and governmental organizations. In an era when increasingly concentrated computing power enables the creation, collection, and storage of “big data,” ML algorithms have the capacity to identify non-intuitive correlations in massive datasets, and as such can theoretically be more efficient and effective than humans at using those correlations to make accurate predictions. What is more, AI systems powered by ML algorithms promise a means of removing human prejudices from decision-making processes: since an AI system renders its decisions based solely on the available data, it can, in principle, avoid the conscious and unconscious biases that often influence human decision-makers.
Contrary to this rosy picture of ML and AI, though, decades of evidence demonstrate that these technologies are not as objective and unbiased as many perhaps wish they were. Biases can be encoded in the datasets on which ML algorithms are trained, arising from poor sampling strategies, incomplete or erroneous information, and the social inequalities that exist in the real world. And since ML algorithms and AI systems cannot build themselves, the humans who construct them may, however unintentionally, introduce their own biases when deciding on a model’s goals, selecting features, identifying which attributes are relevant, and developing classifiers. Additionally, the inherent complexity of ML algorithms, which can defy explanation even for the most expert practitioners, can make it difficult, if not impossible, to identify the root causes of unfair decisions. That same opacity also presents an obstacle for individuals who believe they have been evaluated unfairly, want to challenge a decision, or try to determine who should—or even could—be held accountable for mistakes.
Compared to other fields, the financial services industry has taken a relatively conservative approach to ML/AI integrations. Consumer-facing applications like robo-advisors for portfolio management, AI-powered banking assistants, algorithmic trading programs, and proactive marketing tools, along with ML-driven sentiment analysis of social media feeds and news stories in search of trendlines, have garnered a great deal of media attention. However, the visibility of initiatives like these in press releases and news items exaggerates their role in financial services today, as they represent less than one-tenth of the funding received in the financial technology, or “fintech,” vendor space. Thus far, financial institutions have primarily invested in ML and AI for automating routine, back-office tasks, improving fraud detection and cybersecurity, and making regulatory compliance easier.
The current state of ML and AI in consumer financial services, then, is one in which there is still enormous opportunity for innovation, but also reason for caution. To paraphrase the feminist geographer Doreen Massey, some individuals and groups are more on the “receiving end” of these technologies than others. In other words, ML and AI’s advantages and disadvantages are not equally distributed. Nor are the vulnerabilities entailed by digital surveillance techniques for data creation and collection, the sorts of harm that can follow from an erroneous data entry and the burden of correcting it, or the ability to affect how an algorithm interprets one’s individual attributes and characteristics. In many ways, ML/AI research’s most important contribution has been to provide quantifiable, documented evidence of social disparities, demonstrating the extent to which structural inequalities—that is, conditions by which one or more groups of people are afforded unequal status and/or opportunities in comparison to other groups—persist. If an organization’s reason for integrating ML- and AI-powered systems is to improve its decision-making procedures so as to make them both more accurate and fairer, then it is imperative to understand and account for persistent inequalities in the social contexts where those systems are embedded. Furthermore, assessing how exactly an algorithmic and/or automated decision-making system could impact specific populations, the risk that it could violate legal standards prohibiting discrimination, and the extent to which the system could perpetuate structural inequalities is of the utmost importance when deciding whether or not to make the integration in the first place.
Updates
Work in ML and AI is fast-moving, and in the time since this paper was published, there have been a number of developments that will affect how these technologies are integrated into the consumer financial services industry and beyond. Two in particular merit attention here:
1) Congressional action: On February 12, 2020, the U.S. House Committee on Financial Services' Task Force on Artificial Intelligence heard testimony from experts on AI, ML, and race and inclusion in a panel titled “Equitable Algorithms: Examining Ways to Reduce AI Bias in Financial Services.” The Committee acknowledged the usefulness of standards for the fairness and accuracy of AI applications in financial services, while also noting that existing laws such as the Equal Credit Opportunity Act, the Fair Housing Act, and the Fair Credit Reporting Act are inadequate in many respects for regulating AI's impact. The panel of experts recommended drafting a definition of "fairness" that could be used for evaluating ML, developing audit and assessment methods for locating biases in data and models, and requiring ML/AI developers to implement and report on continuous-monitoring plans that can detect new biases as they emerge; an illustrative sketch of one such fairness measure appears after this list. They also voiced concern regarding the Department of Housing and Urban Development's plans to revise the Fair Housing Act's disparate impact standards, and how such action might exacerbate the discriminatory effects of AI in home lending.
2) Sandvig v. Barr decision: In March 2020, the U.S. District Court for the District of Columbia delivered its ruling in Sandvig v. Barr, which challenged a provision in the Computer Fraud and Abuse Act (CFAA) that made it a crime for researchers and journalists to use "dummy" accounts for the purposes of auditing algorithms in order to identify possible discrimination. The American Civil Liberties Union had initially brought the lawsuit in 2016 on behalf of a group of academics and journalists led by Christian Sandvig of the University of Michigan's School of Information. The plaintiffs argued that the CFAA violated their First Amendment rights, and noted that comparable research activities were not illegal in offline contexts. The Court ruled in favor of the plaintiffs, thereby opening the door for more independent review of ML/AI applications and scoring an important victory for researchers' ability to hold algorithms and the institutions that use them accountable.
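The Task Force panel's call for concrete audit and assessment methods raises the question of what a workable fairness measure might look like in practice. As a purely illustrative sketch (neither the hearing testimony nor the white paper endorses any particular metric), the short Python example below computes two commonly discussed group-fairness measures for a set of hypothetical lending decisions: the demographic parity difference and the disparate impact ratio, the latter often compared against the "four-fifths rule" used in disparate impact analysis. All function names and data are invented for the example.

    # Illustrative sketch only: simple group-fairness measures for hypothetical
    # model decisions grouped by a protected attribute.
    from collections import defaultdict

    def approval_rates(decisions):
        """decisions: iterable of (group, approved) pairs; approved is True/False."""
        totals, approved = defaultdict(int), defaultdict(int)
        for group, ok in decisions:
            totals[group] += 1
            approved[group] += int(ok)
        return {g: approved[g] / totals[g] for g in totals}

    def fairness_summary(decisions, reference_group):
        """Compare each group's approval rate against a chosen reference group."""
        rates = approval_rates(decisions)
        ref = rates[reference_group]
        return {
            group: {
                "approval_rate": rate,
                "parity_difference": rate - ref,       # 0.0 means identical approval rates
                "disparate_impact_ratio": rate / ref,  # values below 0.8 are often treated as a red flag
            }
            for group, rate in rates.items()
        }

    # Hypothetical audit data: (protected-attribute group, model decision)
    sample = ([("A", True)] * 80 + [("A", False)] * 20 +
              [("B", True)] * 55 + [("B", False)] * 45)
    for group, stats in fairness_summary(sample, reference_group="A").items():
        print(group, stats)

In this invented sample, group B's approval rate is 55% against group A's 80%, for a disparate impact ratio of roughly 0.69. Continuous monitoring of the kind the panelists recommended would mean recomputing measures like these as new decisions accumulate, rather than auditing a model only once before deployment.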
Additional Resources
AI Now Institute: https://ainowinstitute.org
Data and Society's AI on the Ground initiative: https://datasociety.net/research/ai-on-the-ground/